From xen-devel-bounces@lists.xenproject.org Fri May 01 00:15:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 00:15:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUJKB-0005l2-1M; Fri, 01 May 2020 00:14:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xOzx=6P=gmail.com=pryorm09@srs-us1.protection.inumbo.net>)
 id 1jUJK8-0005kx-Qx
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 00:14:36 +0000
X-Inumbo-ID: bd27c930-8b40-11ea-b9cf-bc764e2007e4
Received: from mail-il1-x12f.google.com (unknown [2607:f8b0:4864:20::12f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd27c930-8b40-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 00:14:35 +0000 (UTC)
Received: by mail-il1-x12f.google.com with SMTP id t12so3142832ile.9
 for <xen-devel@lists.xenproject.org>; Thu, 30 Apr 2020 17:14:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to;
 bh=iyGMNZXwsO96nEXQ+ErLdrvn6v4CWSTkPblT+6bbBRo=;
 b=MExk9PQ7Us+aXj+WiN6DRTpQMvRtLgiE1f7//qKKGCPpMwNOSFp70KAgqheYgKwhbG
 EZ10XsjFCJRrhrCBbickH/1prMjKBXTAp+Jdqma31eGgvvWiVHMi2megaJkPO1rjMpEs
 g6eN0TcWjrzwMOiDZIjjVaxMgCsxgWw0DWuLD8qiQFM0eOSIzoDEyCnczGtiSlvb9RRn
 t/AWK96ePVUx2yAkbgEfeaWRSwLsxMteaIETakEvPVhRnfleAlLXpiH3vp5u7xPwX4l7
 mKjcR+OaEsjELa/2V4NuKxDe5qTtuREKgbSkKFPl8lkXgtwo1hp7JI5Ml1/Sjt4kJdJ+
 h9OQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
 bh=iyGMNZXwsO96nEXQ+ErLdrvn6v4CWSTkPblT+6bbBRo=;
 b=jKpATlVkLIn7Bsuxj6qojwKe0nAaYrMJTkIDrPU9F58qVmWnhzviWpLJTbONwAjJL7
 mgp1aganR7PDjmsSzI+3ApZ/0CYGjPA8zwhhJm8zB8t9N/L+4yE1jKD7M89WqlWcgY3U
 p7R1MlOw+G0YcvkXI8Xe+t2WMG0oVFJCo489s37R/JzhQ21xkcIwwLd4i6voiXLVnuIm
 00sD4/ZuIYOzxkfL91L/86TbO+GejOAwhEDak/5JbRRfUUtMsYJZozNsdrHcuTLeGPQQ
 E2Kb4D/Yr76i2U5OWcDIbsnoifg2/Bqq000OfSJGP/VuukuGmTu/i8+XIlGFbzIeG70S
 x+XQ==
X-Gm-Message-State: AGi0PubohMRoUaBTNrnM1r3Ld+4NziP7olKy00v6E7ubNI++sNu+9E2A
 NFvAsUheiLMELpQaMiwUlqJ1EWBJYnFp4W/5Jc7kXw==
X-Google-Smtp-Source: APiQypLweRWqoMWHr/H/ZYnNF30AK2Jgls27HH73An5KGcSmvlAEgaeRHfXRIe8mLIItS5maYN4KszjZ8QbQ/1hkcJg=
X-Received: by 2002:a92:8c4c:: with SMTP id o73mr1096337ild.169.1588292074908; 
 Thu, 30 Apr 2020 17:14:34 -0700 (PDT)
MIME-Version: 1.0
Received: by 2002:a02:a518:0:0:0:0:0 with HTTP; Thu, 30 Apr 2020 17:14:34
 -0700 (PDT)
From: Pry Mar <pryorm09@gmail.com>
Date: Thu, 30 Apr 2020 17:14:34 -0700
Message-ID: <CAHnBbQ8F6vQJB4sXorxmfxU7YJ--Nt-4mb8CechXQXdwWJjNgQ@mail.gmail.com>
Subject: Re: Xen Compilation Error on Ubuntu 20.04
To: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello Ayush Dosaj,

https://bugs.launchpad.net/ubuntu/+source/xen/+bug/1851091

see the bug report above ^.

As soon as Ubuntu 19.10 switched to LZ4 kernel compression, this bug
appeared. As Stefan Bader reports from his testing, the hypervisor not
only crashes, but resets the machine.

The same trouble continued with 20.04. Their solution was to
forward-package the Disco hypervisor. You could also hybridize with the
Buster hypervisor - I've done both with perfect results.

Whatever this bug turns out to be, it goes back to 2015, when the lz4
decompression code was first introduced:
[xen.git] / xen / common / lz4 /

I reported the same trouble back in Oct 2019:
https://lists.xenproject.org/archives/html/xen-devel/2019-10/msg01637.html

cheers,
PryMar56
##xen-packaging on Freenode IRC


From xen-devel-bounces@lists.xenproject.org Fri May 01 01:18:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 01:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUKJM-0007uQ-VX; Fri, 01 May 2020 01:17:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YjF2=6P=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUKJM-0007uL-1R
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 01:17:52 +0000
X-Inumbo-ID: 92bfc266-8b49-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92bfc266-8b49-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 01:17:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ScugmbTyEA6J7iUdftWh1B+PHMkXVB2ctffI5naqmHM=; b=1r9wyLS8xOMH1xyydyKPMMJ07
 2GKneowQn6z7GangJ3v2uPszk2JlM2TgxIsJ2ojzdDvAjPvxuZKVAMrRzIMadrPS/hjURvV6In9UQ
 qQs49FQSfEBxBXXsS2yjLwP5HRHCu4qgw+4MQy46I6aj5AIVEumiVaS5fhhQaSLtW4aPQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUKJJ-0007eq-Nh; Fri, 01 May 2020 01:17:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUKJJ-0000xE-Ev; Fri, 01 May 2020 01:17:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUKJJ-0002K7-DT; Fri, 01 May 2020 01:17:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149890-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 149890: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=16aaacb307ed607b9780c12702c44f0fe52edc7e
X-Osstest-Versions-That: qemuu=648db19685b7030aa558a4ddbd3a8e53d8c9a062
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 01 May 2020 01:17:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149890 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149890/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 149885
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149885
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149885
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149885
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149885
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149885
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                16aaacb307ed607b9780c12702c44f0fe52edc7e
baseline version:
 qemuu                648db19685b7030aa558a4ddbd3a8e53d8c9a062

Last test of basis   149885  2020-04-30 00:07:29 Z    1 days
Testing same since   149890  2020-04-30 14:35:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Anup Patel <anup.patel@wdc.com>
  Anup Patel <anup@brainfault.org>
  Bin Meng <bmeng.cn@gmail.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Corey Wharton <coreyw7@fb.com>
  Cornelia Huck <cohuck@redhat.com>
  David Hildenbrand <david@redhat.com>
  Janosch Frank <frankja@linux.ibm.com>
  Janosch Frank <frankja@linux.vnet.ibm.com>
  LIU Zhiwei <zhiwei_liu@c-sky.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Markus Armbruster <armbru@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   648db19685..16aaacb307  16aaacb307ed607b9780c12702c44f0fe52edc7e -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri May 01 01:25:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 01:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUKQr-0000K9-QP; Fri, 01 May 2020 01:25:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J7As=6P=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jUKQq-0000K4-IC
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 01:25:36 +0000
X-Inumbo-ID: a846b0da-8b4a-11ea-9ad8-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a846b0da-8b4a-11ea-9ad8-12813bfff9fa;
 Fri, 01 May 2020 01:25:36 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F10582073E;
 Fri,  1 May 2020 01:25:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588296335;
 bh=UtEHAzTyJf+XEKD4qLwN0fzppM5kmHavBolwyS80XTU=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=L47ogOAB1wLYBjttxrzBeGB580Q21RGt2vnd9fXiXrdM0gAvsPynzUpOnd4yNYVeF
 CxooeLBPIuL66jWBQKTK5cIDkav/flZl9GCooAP+cUmeteOHwM3AJf525aybA37Flz
 afyUN+gkcCIpCEqysZjKrupIsDKhVvjq9S7VuSCc=
Date: Thu, 30 Apr 2020 18:25:34 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 11/12] xen/arm: if xen_force don't try to setup the IOMMU
In-Reply-To: <b60d6ae3-e300-04a1-a884-e73d01a108d5@xen.org>
Message-ID: <alpine.DEB.2.21.2004301249490.28941@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-11-sstabellini@kernel.org>
 <4b4263ba-bf6f-e578-037d-edb8add52aad@xen.org>
 <alpine.DEB.2.21.2004291400340.28941@sstabellini-ThinkPad-T480s>
 <b60d6ae3-e300-04a1-a884-e73d01a108d5@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Apr 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 29/04/2020 22:55, Stefano Stabellini wrote:
> > On Wed, 15 Apr 2020, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 15/04/2020 02:02, Stefano Stabellini wrote:
> > > > If xen_force (which means xen,force-assign-without-iommu was requested)
> > > > don't try to add the device to the IOMMU. Return early instead.
> > > 
> > > 
> > > Could you explain why it is an issue to check xen_force after
> > > iommu_add_dt_device()?
> > 
> > There are two issues. I should add info about both of them to the commit
> > message.
> > 
> > 
> > The first issue is that an error returned by iommu_add_dt_device (for
> > any reason) would cause handle_passthrough_prop to stop and return an
> > error right away. But the IOMMU is not actually needed for that device
> > if xen_force is set.
> 
> During boot, Xen will configure the IOMMUs to fault on any DMA transactions
> that are not valid. So if you don't call iommu_assign_dt_device(), then your
> device will be unusable.
> 
> Without your patch, the user will know immediately that something went
> wrong. With your patch, the fault may occur much later and be more
> difficult to diagnose.
> 
> > (In fact, one of the reasons why a user might want to set
> > force-assign-without-iommu is because there are iommu issues with a
> > device.)
> This would not work, for the reasons I explained above. The only way
> would be to configure the IOMMU in bypass mode for that device.
> 
> So you would still need to call the IOMMU subsystem.

What you wrote makes sense and also matches my understanding. But some
of my testing results confused me. I am going to go back and look at
this closely again, but I am thinking of dropping this patch and
avoiding interfering with IOMMU settings.
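For readers following along, the early return under discussion has roughly
this shape (a compilable sketch with a stub device type and a stand-in for
iommu_add_dt_device(); this is not the actual Xen code):

```c
#include <stdbool.h>

/* Stub device type, hypothetical: stands in for Xen's dt_device_node. */
struct dt_device { bool iommu_ok; };

/* Stand-in for Xen's iommu_add_dt_device(); fails for devices the
 * (hypothetical) IOMMU cannot handle. */
static int iommu_add_dt_device(struct dt_device *dev)
{
    return dev->iommu_ok ? 0 : -19; /* -ENODEV-style error */
}

/* The shape of the patch being debated: when
 * "xen,force-assign-without-iommu" was given (xen_force), skip IOMMU
 * setup instead of failing.  Julien's objection: with the IOMMU neither
 * programmed nor bypassed, the device's DMA will fault later. */
static int handle_passthrough_prop_sketch(struct dt_device *dev,
                                          bool xen_force)
{
    if (xen_force)
        return 0;   /* early return: device never added to the IOMMU */

    return iommu_add_dt_device(dev);
}
```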


From xen-devel-bounces@lists.xenproject.org Fri May 01 01:26:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 01:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUKRL-0000MN-2q; Fri, 01 May 2020 01:26:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J7As=6P=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jUKRJ-0000ME-6a
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 01:26:05 +0000
X-Inumbo-ID: b9833e36-8b4a-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9833e36-8b4a-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 01:26:04 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0CD8D2073E;
 Fri,  1 May 2020 01:26:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588296364;
 bh=pWv/QX33ZOXmq/5MgHQqFH1A2mUcgj7dyYWuh8k13zU=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=ytKGFG1bNFJAwu24PrnUAYhKZPQtyRtZTq9BEp1FXsgeez4RvdVSmG+Hra1s8vMcq
 3+J2d9cWh4DZ6LZwgKqbfB94Fb5KmDTKs8tK5h1MUGv0Rj8qiDmNlccszUvY5e+Ovd
 mJS0jO/c0oGBWG5rB7NoXsUA+Vaih5W3GePu/qZ8=
Date: Thu, 30 Apr 2020 18:26:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 10/12] xen/arm: if is_domain_direct_mapped use native
 UART address for vPL011
In-Reply-To: <05b46414-12c3-5f79-f4b1-46cf8750d28c@xen.org>
Message-ID: <alpine.DEB.2.21.2004301319380.28941@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-10-sstabellini@kernel.org>
 <05b46414-12c3-5f79-f4b1-46cf8750d28c@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 15 Apr 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 15/04/2020 02:02, Stefano Stabellini wrote:
> > We always use a fixed address to map the vPL011 to domains. The address
> > could be a problem for domains that are directly mapped.
> > 
> > Instead, for domains that are directly mapped, reuse the address of the
> > physical UART on the platform to avoid potential clashes.
> 
> How do you know the physical UART MMIO region is big enough to fit the PL011?

That cannot happen, because the vPL011 MMIO size is 1 page, which is the
minimum, right?


> What if the user wants to assign the physical UART to the domain + the vpl011?

A user can assign a UART to the domain, but they cannot assign the UART
used by Xen (DTUART), which is the one we are using here to get the
physical information.


(If there is no UART used by Xen then we'll fall back to the virtual
addresses. If they conflict we return an error and let the user fix the
configuration.)
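The fallback described above could be sketched roughly as follows. This is an illustrative sketch only, not the actual Xen code: `struct host_uart`, `vpl011_pick_base`, and `GUEST_PL011_BASE`'s value are all made-up names standing in for the DTUART information and the fixed virtual address.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative fixed virtual address used for non-direct-mapped domains. */
#define GUEST_PL011_BASE  0x22000000UL

/* Hypothetical descriptor of the UART Xen itself uses (DTUART). */
struct host_uart {
    bool present;
    uint64_t base;
};

/*
 * Pick the vPL011 base for a domain: a direct-mapped domain reuses the
 * physical UART address when Xen owns one; everything else (including a
 * direct-mapped domain on a system with no Xen-owned UART) gets the
 * fixed virtual address.
 */
static uint64_t vpl011_pick_base(bool direct_mapped,
                                 const struct host_uart *hu)
{
    if ( direct_mapped && hu->present )
        return hu->base;
    return GUEST_PL011_BASE;
}
```

A conflict check against the domain's memory map (returning an error as described above) would then run on whichever address was picked.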


From xen-devel-bounces@lists.xenproject.org Fri May 01 01:26:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 01:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUKRY-0000P6-FS; Fri, 01 May 2020 01:26:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J7As=6P=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jUKRX-0000Op-Aw
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 01:26:19 +0000
X-Inumbo-ID: c1fff28e-8b4a-11ea-b07b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c1fff28e-8b4a-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 01:26:19 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4700320787;
 Fri,  1 May 2020 01:26:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588296378;
 bh=CgAWZcq0REh8JirC9d7joi6d2dEeb8gkYbFAttQ/cB4=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=gLrKMP26n7kkpaTaGTku9nOaHeGr33mEZSFWe2dgLc6Ep8z9E6Xe7I1GOq8YrCRTB
 wgtc2VfZWg/U5dL+gB4gxZhpnBa8HwW5987LfPdku9BhpZpIaCY8sNODFQ60f4T/1R
 kKV25JrptarrWncjzVmcpGI71WvU4D3isMLcHL9s=
Date: Thu, 30 Apr 2020 18:26:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 08/12] xen/arm: if is_domain_direct_mapped use native
 addresses for GICv2
In-Reply-To: <05375484-43f2-9d4b-205a-b9dcf4ee5d8e@xen.org>
Message-ID: <alpine.DEB.2.21.2004301412460.28941@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-8-sstabellini@kernel.org>
 <05375484-43f2-9d4b-205a-b9dcf4ee5d8e@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 15 Apr 2020, Julien Grall wrote:
> On 15/04/2020 02:02, Stefano Stabellini wrote:
> > Today we use native addresses to map the GICv2 for Dom0 and fixed
> > addresses for DomUs.
> > 
> > This patch changes the behavior so that native addresses are used for
> > any domain that is_domain_direct_mapped.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > ---
> >   xen/arch/arm/domain_build.c | 10 +++++++---
> >   xen/arch/arm/vgic-v2.c      | 12 ++++++------
> >   xen/arch/arm/vgic/vgic-v2.c |  2 +-
> >   xen/include/asm-arm/vgic.h  |  1 +
> >   4 files changed, 15 insertions(+), 10 deletions(-)
> > 
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 627e0c5e8e..303bee60f6 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -1643,8 +1643,12 @@ static int __init make_gicv2_domU_node(struct
> > kernel_info *kinfo)
> >       int res = 0;
> >       __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
> >       __be32 *cells;
> > +    struct domain *d = kinfo->d;
> > +    char buf[38];
> >   -    res = fdt_begin_node(fdt,
> > "interrupt-controller@"__stringify(GUEST_GICD_BASE));
> > +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> > +             d->arch.vgic.dbase);
> > +    res = fdt_begin_node(fdt, buf);
> >       if ( res )
> >           return res;
> >   @@ -1666,9 +1670,9 @@ static int __init make_gicv2_domU_node(struct
> > kernel_info *kinfo)
> >         cells = &reg[0];
> >       dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
> > GUEST_ROOT_SIZE_CELLS,
> > -                       GUEST_GICD_BASE, GUEST_GICD_SIZE);
> > +                       d->arch.vgic.dbase, GUEST_GICD_SIZE);
> >       dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
> > GUEST_ROOT_SIZE_CELLS,
> > -                       GUEST_GICC_BASE, GUEST_GICC_SIZE);
> > +                       d->arch.vgic.cbase, GUEST_GICC_SIZE);
> 
> You can't use the native base address and not the native size. Particularly,
> this is going to screw up any GIC using the 8KB alias.

I don't follow why it could cause problems with a GIC using the 8KB
alias. Maybe an example (even a fake example) would help.



> It would be preferable to only expose part of the CPU interface, as we do for
> the guest. So d->arch.vgic.cbase would be equal to vgic_v2_hw.cbase +
> vgic_v2_hw.aliased_offset.
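The adjustment Julien suggests could be sketched as below. The struct and function names are illustrative, not Xen's actual `vgic_v2_hw` definition: on GICv2 implementations with the 8KB aliased CPU interface, the usable 4KB window sits at a non-zero offset inside the GICC region, so exposing `cbase + aliased_offset` to the guest (with a 4KB size) works for both the plain and the aliased layouts.

```c
#include <assert.h>
#include <stdint.h>

#define SZ_4K 0x1000UL
#define SZ_8K 0x2000UL

/* Hypothetical view of the host GICv2 CPU interface description. */
struct gicv2_hw_info {
    uint64_t cbase;          /* physical GICC base                      */
    uint64_t csize;          /* 4KB, or 8KB when the alias is present   */
    uint64_t aliased_offset; /* 0, or 4KB for the 8KB aliased layout    */
};

/*
 * Base of the 4KB CPU interface window to expose to the guest: skip
 * past the alias when there is one, so a fixed 4KB size is always
 * valid regardless of the host layout.
 */
static uint64_t guest_gicc_base(const struct gicv2_hw_info *hw)
{
    return hw->cbase + hw->aliased_offset;
}
```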



From xen-devel-bounces@lists.xenproject.org Fri May 01 01:26:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 01:26:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUKRz-0000Ur-PP; Fri, 01 May 2020 01:26:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J7As=6P=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jUKRy-0000Ub-KQ
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 01:26:46 +0000
X-Inumbo-ID: d23c8932-8b4a-11ea-b07b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d23c8932-8b4a-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 01:26:46 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7A6D72073E;
 Fri,  1 May 2020 01:26:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588296405;
 bh=hHmNrSXX0b8xwo49uXz+1R8w1FiK6501aUC2Xai0ID8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=vepCulncd8+MOpZ0vaAZMao3Yiu50yDWoEeCRVQEcIbkQ4JqTeyLLXIMLEtQOQsrH
 GO5E70Z3R4lBd6Pz1Wt6h8ts/F3V8tEVR1vtqY1oj2i5O4dt/ANLAP3b2XqFz2xxRC
 G5Jgk4p/CK4ICq1TsuLmz2qzQLOApDcFpCgTGPnA=
Date: Thu, 30 Apr 2020 18:26:45 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 03/12] xen/arm: introduce 1:1 mapping for domUs
In-Reply-To: <3f26f6a0-89bd-cbec-f07f-90a08fa60e26@xen.org>
Message-ID: <alpine.DEB.2.21.2004301417070.28941@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-3-sstabellini@kernel.org>
 <3f26f6a0-89bd-cbec-f07f-90a08fa60e26@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 15 Apr 2020, Julien Grall wrote:
> On 15/04/2020 02:02, Stefano Stabellini wrote:
> > In some cases it is desirable to map domU memory 1:1 (guest physical ==
> > physical.) For instance, because we want to assign a device to the domU
> > but the IOMMU is not present or cannot be used. In these cases, other
> > mechanisms should be used for DMA protection, e.g. an MPU.
> 
> I am not against this, however the documentation should clearly explain that
> you are making your platform insecure unless you have other means of DMA
> protection.

I'll expand the docs.


> > 
> > This patch introduces a new device tree option for dom0less guests to
> > request a domain to be directly mapped. It also specifies the memory
> > ranges. This patch documents the new attribute and parses it at boot
> > time. (However, the implementation of 1:1 mapping is missing and just
> > BUG()s out at the moment.)  Finally, the patch sets the new direct_map
> > flag for DomU domains.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > ---
> >   docs/misc/arm/device-tree/booting.txt | 13 +++++++
> >   docs/misc/arm/passthrough-noiommu.txt | 35 ++++++++++++++++++
> >   xen/arch/arm/domain_build.c           | 52 +++++++++++++++++++++++++--
> >   3 files changed, 98 insertions(+), 2 deletions(-)
> >   create mode 100644 docs/misc/arm/passthrough-noiommu.txt
> > 
> > diff --git a/docs/misc/arm/device-tree/booting.txt
> > b/docs/misc/arm/device-tree/booting.txt
> > index 5243bc7fd3..fce5f7ed5a 100644
> > --- a/docs/misc/arm/device-tree/booting.txt
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -159,6 +159,19 @@ with the following properties:
> >       used, or GUEST_VPL011_SPI+1 if vpl011 is enabled, whichever is
> >       greater.
> >   +- direct-map
> > +
> > +    Optional. An array of integer pairs specifying addresses and sizes.
> > +    direct-map requests the memory of the domain to be 1:1 mapped with
> > +    the memory ranges specified as argument. Only sizes that are a
> > +    power of two number of pages are allowed.
> > +
> > +- #direct-map-addr-cells and #direct-map-size-cells
> > +
> > +    The number of cells to use for the addresses and for the sizes in
> > +    direct-map. Default and maximum are 2 cells for both addresses and
> > +    sizes.
> > +
> 
> As this is going to be mostly used for passthrough, can't we take advantage of
> the partial device-tree and describe the memory region using memory node?

With the system device tree bindings that are under discussion, the role
of the partial device tree might be reduced going forward, and it might
even go away in the long term. For this reason, I would prefer not to
add more things to the partial device tree.
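For illustration, a dom0less domU node using the binding proposed in the quoted patch might look like the snippet below. The property names follow the quoted documentation; the `domU1` node name, addresses, and sizes are made up for the example.

```dts
domU1 {
    compatible = "xen,domain";
    #address-cells = <0x2>;
    #size-cells = <0x2>;
    cpus = <1>;
    memory = <0x0 0x40000>;

    #direct-map-addr-cells = <0x2>;
    #direct-map-size-cells = <0x2>;
    /* 1:1 map 256MB starting at 0x60000000 (a power-of-two
       number of pages, as the binding requires) */
    direct-map = <0x0 0x60000000 0x0 0x10000000>;
};
```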


> >   - #address-cells and #size-cells
> >         Both #address-cells and #size-cells need to be specified because
> > diff --git a/docs/misc/arm/passthrough-noiommu.txt
> > b/docs/misc/arm/passthrough-noiommu.txt
> > new file mode 100644
> > index 0000000000..d2bfaf26c3
> > --- /dev/null
> > +++ b/docs/misc/arm/passthrough-noiommu.txt
> > @@ -0,0 +1,35 @@
> > +Request Device Assignment without IOMMU support
> > +===============================================
> > +
> > +Add xen,force-assign-without-iommu; to the device tree snippet
> > +
> > +    ethernet: ethernet@ff0e0000 {
> > +        compatible = "cdns,zynqmp-gem";
> > +        xen,path = "/amba/ethernet@ff0e0000";
> > +        xen,reg = <0x0 0xff0e0000 0x1000 0x0 0xff0e0000>;
> > +        xen,force-assign-without-iommu;
> > +
> > +Optionally, if none of the domains require an IOMMU, then it could be
> > +disabled (not recommended). For instance by adding status = "disabled";
> > +under the smmu node:
> > +
> > +    smmu@fd800000 {
> > +        compatible = "arm,mmu-500";
> > +        status = "disabled";
> 
> I am not sure why this section is added in this patch. Furthermore, the file
> is named "noiommu" but here you mention the IOMMU.

I have made a habit of writing user and testing docs at the time of
writing a patch series. I'll move this doc to the end of the series.
Also, the wording here is inaccurate; I'll improve it in the next
version.


I have addressed all other comments.


From xen-devel-bounces@lists.xenproject.org Fri May 01 01:31:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 01:31:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUKWO-0001Nz-BD; Fri, 01 May 2020 01:31:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J7As=6P=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jUKWM-0001Ns-QI
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 01:31:18 +0000
X-Inumbo-ID: 747b58c2-8b4b-11ea-9ad8-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 747b58c2-8b4b-11ea-9ad8-12813bfff9fa;
 Fri, 01 May 2020 01:31:18 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9C9CC206C0;
 Fri,  1 May 2020 01:31:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588296677;
 bh=ristXRsVtwg5JAi2xLjOMlOfg89wsdnLx7DmfQWqDFg=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=eKgFLzK8zzpdo1H+Bpx5WnfbmvLlAn09VObEJNZKnqoiuHEHqZx8MFFPjqgb98d3F
 zzH9hLBA8X0MGNTOER4HLyP54q7tWm4Nd7/JqYC3lVBLO6DXnGBkbPlTmmsHHwKPLR
 V+BnxdVYliJMECjPB92jx0vA6SDVNGTtmSEOBNbI=
Date: Thu, 30 Apr 2020 18:31:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 09/12] xen/arm: if is_domain_direct_mapped use native
 addresses for GICv3
In-Reply-To: <923411c5-37d4-c86e-c5a8-8acd8a6830e7@xen.org>
Message-ID: <alpine.DEB.2.21.2004301613220.28941@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-9-sstabellini@kernel.org>
 <923411c5-37d4-c86e-c5a8-8acd8a6830e7@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 15 Apr 2020, Julien Grall wrote:
> > diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> > index 4e60ba15cc..4cf430f865 100644
> > --- a/xen/arch/arm/vgic-v3.c
> > +++ b/xen/arch/arm/vgic-v3.c
> > @@ -1677,13 +1677,25 @@ static int vgic_v3_domain_init(struct domain *d)
> 
> 
> I think you also want to modify vgic_v3_max_rdist_count().

I don't think so: domUs, even direct-mapped ones, still only get 1 rdist
region. This patch is not changing the layout of the domU GIC; it is
only finding a "hole" in the physical address space to make sure there
are no conflicts (or at least to minimize the chance of conflicts).
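The size concern raised further down could be checked along these lines. This is a sketch under assumptions, not the actual Xen check: a GICv3 redistributor without VLPI support occupies two 64KB frames (RD_base + SGI_base), so a single rdist region must be at least that large per vCPU; the names here are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* 128KB per redistributor: the RD_base and SGI_base 64KB frames. */
#define GICR_FRAME_SIZE 0x20000UL

/*
 * Can a single host rdist region hold one redistributor per vCPU?
 * A warning (rather than a hard error) could be emitted when this
 * returns false.
 */
static bool rdist_region_fits(uint64_t region_size, unsigned int nr_vcpus)
{
    return region_size >= (uint64_t)nr_vcpus * GICR_FRAME_SIZE;
}
```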


> >        * Domain 0 gets the hardware address.
> >        * Guests get the virtual platform layout.
> 
> This comment needs to be updated.

Yep, I'll do that.


> >        */
> > -    if ( is_hardware_domain(d) )
> > +    if ( is_domain_direct_mapped(d) )
> >       {
> >           unsigned int first_cpu = 0;
> > +        unsigned int nr_rdist_regions;
> >             d->arch.vgic.dbase = vgic_v3_hw.dbase;
> >   -        for ( i = 0; i < vgic_v3_hw.nr_rdist_regions; i++ )
> > +        if ( is_hardware_domain(d) )
> > +        {
> > +            nr_rdist_regions = vgic_v3_hw.nr_rdist_regions;
> > +            d->arch.vgic.intid_bits = vgic_v3_hw.intid_bits;
> > +        }
> > +        else
> > +        {
> > +            nr_rdist_regions = 1;
> 
> What guarantees you that the rdist region will be big enough to cater for all
> the re-distributors of your domain?

Good point. I'll add an explicit check for that, with at least a warning.
I don't think we want to return an error because the configuration is
still likely to work.

It might be better to continue this conversation on the next version of
the patch -- it is going to be much clearer.


From xen-devel-bounces@lists.xenproject.org Fri May 01 02:20:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 02:20:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jULHp-0004sM-2n; Fri, 01 May 2020 02:20:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eoOh=6P=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jULHn-0004sH-0M
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 02:20:19 +0000
X-Inumbo-ID: 4c2a4be2-8b52-11ea-ae69-bc764e2007e4
Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c2a4be2-8b52-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 02:20:17 +0000 (UTC)
Received: by mail-qt1-x844.google.com with SMTP id v26so7011858qto.0
 for <xen-devel@lists.xenproject.org>; Thu, 30 Apr 2020 19:20:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:from:date:message-id:subject:to:cc;
 bh=HtFIjs5xEn8d8PuX7TrdCdC6yeVtAmN+ZGcg3en1R4c=;
 b=dybE9p5judQJPa50cpBMEUeZHtcuiQjp+1Ays1kHXlwKZWA3Vqxzt+sobfk7d6dqEg
 047Egilj+SNzqWR6idhjDDYeUqj+YUxulDqgH20wHEqnkUpagTuxot8l8yDMTINTZUIw
 loL8uoxoyG02xoRsRBQT83fttQwCmaiGZg/dMkEFZZqCSZLx7tEAJ3VFF54mX0k7SjZ1
 xx0GqJ22qEI8LcJa7Mj6d3muBWZcbkt0rOAi4PUYMjVbkK/hi0lHJufsr2eak2EdY/cB
 H5puEkTf/Pai1vWI8aMHwkj9JX4GFHPb9YVQvprgLTPYGF2ja/u2nk7jJC6j6ZKvwuM5
 yWQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
 bh=HtFIjs5xEn8d8PuX7TrdCdC6yeVtAmN+ZGcg3en1R4c=;
 b=ZTWaKE9pmVeI30QWSJOPE2GdBD5o42r67hzXWndmSBBNvVLc5+dcU+gUM4yHkCLWPA
 tuIf5c1AmCwfMA8D98dap0iTgl7FnoNlXDY7KylA04fM9VJbYILHixly3kDjFnJs+AP2
 1X/oZC/sBdKY33w4b2tcp7HWpbxVC9j0y1HoySIB/y9HN0c91cmKQ7lb9CaG1KcrYjcq
 Fl3CRLDFXL+bp15P1qvHFvCcip97o0p2YrlLQVLMOY3xUkySbfxhzmLr7OthvCkrOWWr
 RyVRdacGm2HuoKySmEI/Wn3E581cQyir4MJ79DQ7cJbguitQ9gJ+X0sXqgOyQLok4r2v
 XA1A==
X-Gm-Message-State: AGi0PuajJOiHI7wW4vKrU5Ly8I6yqQCDI6VTuLAY4ClPN6NMD/aF+7RN
 CgP9BdNGO0SYgGmPRpqUxUbswhBrM3uBElnV1TMSiE/P+6J3Dg==
X-Google-Smtp-Source: APiQypLgx09iR3/Q/jCqjE6Kic7aDPu0y/hta7MksFA8RehoUH+t+95OV9+9S3hgYeKKKshFL2Rs9usDGAaCHQ3VcRs=
X-Received: by 2002:ac8:3fc2:: with SMTP id v2mr1627438qtk.113.1588299616235; 
 Thu, 30 Apr 2020 19:20:16 -0700 (PDT)
MIME-Version: 1.0
From: Roman Shaposhnik <roman@zededa.com>
Date: Thu, 30 Apr 2020 19:20:05 -0700
Message-ID: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
Subject: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary="0000000000006c6fd005a48cd2eb"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Roman Shaposhnik <roman@zededa.com>,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000006c6fd005a48cd2eb
Content-Type: text/plain; charset="UTF-8"

Hi!

I'm trying to run Xen on a Raspberry Pi 4 with a stock, upstream
5.6.1 kernel. The kernel itself works perfectly well
on the board. When I try booting it as Dom0 under Xen,
it goes into a stacktrace (attached).

Looking at what the nice folks over at Dornerworks have previously
done to make RPi kernels boot as Dom0, I've come across these
3 patches:
    https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux

The first patch seems irrelevant (unless I'm missing something
really basic here). The 2nd patch applied with no issue (but
I don't think it is related); however, the 3rd patch failed to apply on
account of the 5.6.1 kernel no longer having:
    dev->archdata.dev_dma_ops
E.g.
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55

I've tried to emulate the effect of the patch by simply introducing
a static variable that would signal that we already initialized
dev->dma_ops -- but that didn't help at all.

I'm CCing Jeff Kubascik and Corey Minyard, the original authors of
those patches, to see if maybe they have any suggestions on how this
may be dealt with.

Any advice would be greatly appreciated!

Thanks,
Roman.

--0000000000006c6fd005a48cd2eb
Content-Type: text/plain; charset="US-ASCII"; name="xen.rpi.txt"
Content-Disposition: attachment; filename="xen.rpi.txt"
Content-Transfer-Encoding: base64
Content-ID: <f_k9nkjpi90>
X-Attachment-Id: f_k9nkjpi90

Z3J1Yj4geGVuX2h5cGVydmlzb3IgKGhkMCxncHQxKS94ZW4gZG9tMF9tZW09MTAyNE0sbWF4OjEw
MjRNIGRvbTBfbWF4X3ZjcHVzPTEKZ3J1Yj4geGVuX21vZHVsZSAoaGQwLGdwdDEpL2tlcm5lbCBj
b25zb2xlPWh2YzAgZWFybHlwcmludGs9eGVuIG5vbW9kZXNldApncnViPiBkZXZpY2V0cmVlICho
ZDAsZ3B0MSkvYmNtMjcxMS1ycGktNC1iLmR0YgpncnViPiBib290ClVzaW5nIG1vZHVsZXMgcHJv
dmlkZWQgYnkgYm9vdGxvYWRlciBpbiBGRFQKWGVuIDQuMTMuMCAoYy9zIFR1ZSBEZWMgMTcgMTQ6
MTk6NDkgMjAxOSArMDAwMCBnaXQ6YTJlODRkOGU0Mi1kaXJ0eSkgRUZJIGxvYWRlcgpXYXJuaW5n
OiBDb3VsZCBub3QgcXVlcnkgdmFyaWFibGUgc3RvcmU6IDB4ODAwMDAwMDAwMDAwMDAwMwotIFVB
UlQgZW5hYmxlZCAtCi0gQm9vdCBDUFUgYm9vdGluZyAtCi0gQ3VycmVudCBFTCAwMDAwMDAwOCAt
Ci0gSW5pdGlhbGl6ZSBDUFUgLQotIFR1cm5pbmcgb24gcGFnaW5nIC0KLSBSZWFkeSAtCihYRU4p
IENoZWNraW5nIGZvciBpbml0cmQgaW4gL2Nob3NlbgooWEVOKSBSQU06IDAwMDAwMDAwMDAwMDEw
MDAgLSAwMDAwMDAwMDA3ZWYxZmZmCihYRU4pIFJBTTogMDAwMDAwMDAwN2VmMjAwMCAtIDAwMDAw
MDAwMDdmMGRmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDA3ZjBlMDAwIC0gMDAwMDAwMDAyYmM0ZGZm
ZgooWEVOKSBSQU06IDAwMDAwMDAwM2NiMTAwMDAgLSAwMDAwMDAwMDNjYjEwZmZmCihYRU4pIFJB
TTogMDAwMDAwMDAzY2IxMjAwMCAtIDAwMDAwMDAwM2NiMTNmZmYKKFhFTikgUkFNOiAwMDAwMDAw
MDNjYjFiMDAwIC0gMDAwMDAwMDAzY2IxY2ZmZgooWEVOKSBSQU06IDAwMDAwMDAwNDAwMDAwMDAg
LSAwMDAwMDAwMGZiZmZmZmZmCihYRU4pCihYRU4pIE1PRFVMRVswXTogMDAwMDAwMDAyYmM1ZDAw
MCAtIDAwMDAwMDAwMmJkOTc4ZjAgWGVuCihYRU4pIE1PRFVMRVsxXTogMDAwMDAwMDAyYmM0ZjAw
MCAtIDAwMDAwMDAwMmJjNWQwMDAgRGV2aWNlIFRyZWUKKFhFTikgTU9EVUxFWzJdOiAwMDAwMDAw
MDJiZGE1MDAwIC0gMDAwMDAwMDAyZDY4NDIwMCBLZXJuZWwKKFhFTikKKFhFTikgQ01ETElORVsw
MDAwMDAwMDJiZGE1MDAwXTpjaG9zZW4gY29uc29sZT1odmMwIGVhcmx5cHJpbnRrPXhlbiBub21v
ZGVzZXQKKFhFTikKKFhFTikgQ29tbWFuZCBsaW5lOiBkb20wX21lbT0xMDI0TSxtYXg6MTAyNE0g
ZG9tMF9tYXhfdmNwdXM9MQooWEVOKSBwYXJhbWV0ZXIgImRvbTBfbWVtIiBoYXMgaW52YWxpZCB2
YWx1ZSAiMTAyNE0sbWF4OjEwMjRNIiwgcmM9LTIyIQooWEVOKSBEb21haW4gaGVhcCBpbml0aWFs
aXNlZAooWEVOKSBCb290aW5nIHVzaW5nIERldmljZSBUcmVlCihYRU4pIFBsYXRmb3JtOiBSYXNw
YmVycnkgUGkgNAooWEVOKSBObyBkdHVhcnQgcGF0aCBjb25maWd1cmVkCihYRU4pIEJhZCBjb25z
b2xlPSBvcHRpb24gJ2R0dWFydCcKIFhlbiA0LjEzLjAKKFhFTikgWGVuIHZlcnNpb24gNC4xMy4w
IChAKSAoYWFyY2g2NC1saW51eC1nbnUtZ2NjIChVYnVudHUgOS4zLjAtMTB1YnVudHUyKSA5LjMu
MCkgZGVidWc9eSAgVGh1IEFwciAzMCAxNzowNjo0MCBQRFQgMjAyMAooWEVOKSBMYXRlc3QgQ2hh
bmdlU2V0OiBUdWUgRGVjIDE3IDE0OjE5OjQ5IDIwMTkgKzAwMDAgZ2l0OmEyZTg0ZDhlNDItZGly
dHkKKFhFTikgYnVpbGQtaWQ6IDdlZmEzYzVjYjI3YTk4ZTllYjJlNzUwZmE3MWM4YTA2NWI5YjVj
YjYKKFhFTikgUHJvY2Vzc29yOiA0MTBmZDA4MzogIkFSTSBMaW1pdGVkIiwgdmFyaWFudDogMHgw
LCBwYXJ0IDB4ZDA4LCByZXYgMHgzCihYRU4pIDY0LWJpdCBFeGVjdXRpb246CihYRU4pICAgUHJv
Y2Vzc29yIEZlYXR1cmVzOiAwMDAwMDAwMDAwMDAyMjIyIDAwMDAwMDAwMDAwMDAwMDAKKFhFTikg
ICAgIEV4Y2VwdGlvbiBMZXZlbHM6IEVMMzo2NCszMiBFTDI6NjQrMzIgRUwxOjY0KzMyIEVMMDo2
NCszMgooWEVOKSAgICAgRXh0ZW5zaW9uczogRmxvYXRpbmdQb2ludCBBZHZhbmNlZFNJTUQKKFhF
TikgICBEZWJ1ZyBGZWF0dXJlczogMDAwMDAwMDAxMDMwNTEwNiAwMDAwMDAwMDAwMDAwMDAwCihY
RU4pICAgQXV4aWxpYXJ5IEZlYXR1cmVzOiAwMDAwMDAwMDAwMDAwMDAwIDAwMDAwMDAwMDAwMDAw
MDAKKFhFTikgICBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDExMjQgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSAgIElTQSBGZWF0dXJlczogIDAwMDAwMDAwMDAwMTAwMDAgMDAwMDAw
MDAwMDAwMDAwMAooWEVOKSAzMi1iaXQgRXhlY3V0aW9uOgooWEVOKSAgIFByb2Nlc3NvciBGZWF0
dXJlczogMDAwMDAxMzE6MDAwMTEwMTEKKFhFTikgICAgIEluc3RydWN0aW9uIFNldHM6IEFBcmNo
MzIgQTMyIFRodW1iIFRodW1iLTIgSmF6ZWxsZQooWEVOKSAgICAgRXh0ZW5zaW9uczogR2VuZXJp
Y1RpbWVyIFNlY3VyaXR5CihYRU4pICAgRGVidWcgRmVhdHVyZXM6IDAzMDEwMDY2CihYRU4pICAg
QXV4aWxpYXJ5IEZlYXR1cmVzOiAwMDAwMDAwMAooWEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJl
czogMTAyMDExMDUgNDAwMDAwMDAgMDEyNjAwMDAgMDIxMDIyMTEKKFhFTikgIElTQSBGZWF0dXJl
czogMDIxMDExMTAgMTMxMTIxMTEgMjEyMzIwNDIgMDExMTIxMzEgMDAwMTExNDIgMDAwMTAwMDEK
KFhFTikgU01QOiBBbGxvd2luZyA0IENQVXMKKFhFTikgZW5hYmxlZCB3b3JrYXJvdW5kIGZvcjog
QVJNIGVycmF0dW0gMTMxOTUzNwooWEVOKSBHZW5lcmljIFRpbWVyIElSUTogcGh5cz0zMCBoeXA9
MjYgdmlydD0yNyBGcmVxOiA1NDAwMCBLSHoKKFhFTikgR0lDdjIgaW5pdGlhbGl6YXRpb246CihY
RU4pICAgICAgICAgZ2ljX2Rpc3RfYWRkcj0wMDAwMDAwMGZmODQxMDAwCihYRU4pICAgICAgICAg
Z2ljX2NwdV9hZGRyPTAwMDAwMDAwZmY4NDIwMDAKKFhFTikgICAgICAgICBnaWNfaHlwX2FkZHI9
MDAwMDAwMDBmZjg0NDAwMAooWEVOKSAgICAgICAgIGdpY192Y3B1X2FkZHI9MDAwMDAwMDBmZjg0
NjAwMAooWEVOKSAgICAgICAgIGdpY19tYWludGVuYW5jZV9pcnE9MjUKKFhFTikgR0lDdjI6IDI1
NiBsaW5lcywgNCBjcHVzLCBzZWN1cmUgKElJRCAwMjAwMTQzYikuCihYRU4pIFhTTSBGcmFtZXdv
cmsgdjEuMC4wIGluaXRpYWxpemVkCihYRU4pIEluaXRpYWxpc2luZyBYU00gU0lMTyBtb2RlCihY
RU4pIFVzaW5nIHNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hlZHVsZXIgcmV2MiAoY3JlZGl0MikK
KFhFTikgSW5pdGlhbGl6aW5nIENyZWRpdDIgc2NoZWR1bGVyCihYRU4pICBsb2FkX3ByZWNpc2lv
bl9zaGlmdDogMTgKKFhFTikgIGxvYWRfd2luZG93X3NoaWZ0OiAzMAooWEVOKSAgdW5kZXJsb2Fk
X2JhbGFuY2VfdG9sZXJhbmNlOiAwCihYRU4pICBvdmVybG9hZF9iYWxhbmNlX3RvbGVyYW5jZTog
LTMKKFhFTikgIHJ1bnF1ZXVlcyBhcnJhbmdlbWVudDogc29ja2V0CihYRU4pICBjYXAgZW5mb3Jj
ZW1lbnQgZ3JhbnVsYXJpdHk6IDEwbXMKKFhFTikgbG9hZCB0cmFja2luZyB3aW5kb3cgbGVuZ3Ro
IDEwNzM3NDE4MjQgbnMKKFhFTikgQWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiAzMiBLaUIuCihY
RU4pIENQVTA6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkgNiB0aW1lcyBiZWZvcmUgcGF1c2luZyB0
aGUgZG9tYWluCihYRU4pIEJyaW5naW5nIHVwIENQVTEKLSBDUFUgMDAwMDAwMDEgYm9vdGluZyAt
Ci0gQ3VycmVudCBFTCAwMDAwMDAwOCAtCi0gSW5pdGlhbGl6ZSBDUFUgLQotIFR1cm5pbmcgb24g
cGFnaW5nIC0KLSBSZWFkeSAtCihYRU4pIENQVTE6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkgNSB0
aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9tYWluCihYRU4pIENQVSAxIGJvb3RlZC4KKFhFTikg
QnJpbmdpbmcgdXAgQ1BVMgotIENQVSAwMDAwMDAwMiBib290aW5nIC0KLSBDdXJyZW50IEVMIDAw
MDAwMDA4IC0KLSBJbml0aWFsaXplIENQVSAtCi0gVHVybmluZyBvbiBwYWdpbmcgLQotIFJlYWR5
IC0KKFhFTikgQ1BVMjogR3Vlc3QgYXRvbWljcyB3aWxsIHRyeSA1IHRpbWVzIGJlZm9yZSBwYXVz
aW5nIHRoZSBkb21haW4KKFhFTikgQ1BVIDIgYm9vdGVkLgooWEVOKSBCcmluZ2luZyB1cCBDUFUz
Ci0gQ1BVIDAwMDAwMDAzIGJvb3RpbmcgLQotIEN1cnJlbnQgRUwgMDAwMDAwMDggLQotIEluaXRp
YWxpemUgQ1BVIC0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtCi0gUmVhZHkgLQooWEVOKSBDUFUzOiBH
dWVzdCBhdG9taWNzIHdpbGwgdHJ5IDUgdGltZXMgYmVmb3JlIHBhdXNpbmcgdGhlIGRvbWFpbgoo
WEVOKSBDUFUgMyBib290ZWQuCihYRU4pIEJyb3VnaHQgdXAgNCBDUFVzCihYRU4pIEkvTyB2aXJ0
dWFsaXNhdGlvbiBkaXNhYmxlZAooWEVOKSBQMk06IDQ0LWJpdCBJUEEgd2l0aCA0NC1iaXQgUEEg
YW5kIDgtYml0IFZNSUQKKFhFTikgUDJNOiA0IGxldmVscyB3aXRoIG9yZGVyLTAgcm9vdCwgVlRD
UiAweDgwMDQzNTk0CihYRU4pIEFkZGluZyBjcHUgMCB0byBydW5xdWV1ZSAwCihYRU4pICBGaXJz
dCBjcHUgb24gcnVucXVldWUsIGFjdGl2YXRpbmcKKFhFTikgQWRkaW5nIGNwdSAxIHRvIHJ1bnF1
ZXVlIDAKKFhFTikgQWRkaW5nIGNwdSAyIHRvIHJ1bnF1ZXVlIDAKKFhFTikgQWRkaW5nIGNwdSAz
IHRvIHJ1bnF1ZXVlIDAKKFhFTikgYWx0ZXJuYXRpdmVzOiBQYXRjaGluZyB3aXRoIGFsdCB0YWJs
ZSAwMDAwMDAwMDAwMmNjMGI4IC0+IDAwMDAwMDAwMDAyY2M3Y2MKKFhFTikgKioqIExPQURJTkcg
RE9NQUlOIDAgKioqCihYRU4pIExvYWRpbmcgZDAga2VybmVsIGZyb20gYm9vdCBtb2R1bGUgQCAw
MDAwMDAwMDJiZGE1MDAwCihYRU4pIEFsbG9jYXRpbmcgMToxIG1hcHBpbmdzIHRvdGFsbGluZyAx
MDI0TUIgZm9yIGRvbTA6CihYRU4pIEJBTktbMF0gMHgwMDAwMDA0MDAwMDAwMC0weDAwMDAwMDgw
MDAwMDAwICgxMDI0TUIpCihYRU4pIEdyYW50IHRhYmxlIHJhbmdlOiAweDAwMDAwMDJiYzVkMDAw
LTB4MDAwMDAwMmJjOWQwMDAKKFhFTikgQWxsb2NhdGluZyBQUEkgMTYgZm9yIGV2ZW50IGNoYW5u
ZWwgaW50ZXJydXB0CihYRU4pIExvYWRpbmcgekltYWdlIGZyb20gMDAwMDAwMDAyYmRhNTAwMCB0
byAwMDAwMDAwMDQwMDgwMDAwLTAwMDAwMDAwNDE5NWYyMDAKKFhFTikgTG9hZGluZyBkMCBEVEIg
dG8gMHgwMDAwMDAwMDQ4MDAwMDAwLTB4MDAwMDAwMDA0ODAwYTQ1ZAooWEVOKSBJbml0aWFsIGxv
dyBtZW1vcnkgdmlycSB0aHJlc2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4KKFhFTikgU2NydWJi
aW5nIEZyZWUgUkFNIGluIGJhY2tncm91bmQKKFhFTikgU3RkLiBMb2dsZXZlbDogQWxsCihYRU4p
IEd1ZXN0IExvZ2xldmVsOiBBbGwKKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqCihYRU4pIE5vIHN1cHBvcnQgZm9yIEFSTV9TTUNDQ19BUkNI
X1dPUktBUk9VTkRfMS4KKFhFTikgUGxlYXNlIHVwZGF0ZSB5b3VyIGZpcm13YXJlLgooWEVOKSAq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKKFhFTikg
Tm8gc3VwcG9ydCBmb3IgQVJNX1NNQ0NDX0FSQ0hfV09SS0FST1VORF8xLgooWEVOKSBQbGVhc2Ug
dXBkYXRlIHlvdXIgZmlybXdhcmUuCihYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKgooWEVOKSBObyBzdXBwb3J0IGZvciBBUk1fU01DQ0NfQVJD
SF9XT1JLQVJPVU5EXzEuCihYRU4pIFBsZWFzZSB1cGRhdGUgeW91ciBmaXJtd2FyZS4KKFhFTikg
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqCihYRU4p
IDMuLi4gMi4uLiAxLi4uCihYRU4pICoqKiBTZXJpYWwgaW5wdXQgdG8gRE9NMCAodHlwZSAnQ1RS
TC1hJyB0aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQpCihYRU4pIEZyZWVkIDMzNmtCIGluaXQg
bWVtb3J5LgooWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBm
ZmZmZmZmZiB0byBJQ0FDVElWRVI0CihYRU4pIGQwdjA6IHZHSUNEOiB1bmhhbmRsZWQgd29yZCB3
cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElDQUNUSVZFUjgKKFhFTikgZDB2MDogdkdJQ0Q6IHVu
aGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMTIKKFhFTikg
ZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNB
Q1RJVkVSMTYKKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAw
ZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMjAKKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3Jk
IHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMjQKKFhFTikgZDB2MDogdkdJQ0Q6
IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMjgKKFhF
TikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8g
SUNBQ1RJVkVSMApbICAgIDAuMDAwMDAwXSBCb290aW5nIExpbnV4IG9uIHBoeXNpY2FsIENQVSAw
eDAwMDAwMDAwMDAgWzB4NDEwZmQwODNdClsgICAgMC4wMDAwMDBdIExpbnV4IHZlcnNpb24gNS42
LjEtZGVmYXVsdCAocm9vdEAwZmQ2ZDEzM2RkMTIpIChnY2MgdmVyc2lvbiA4LjMuMCAoQWxwaW5l
IDguMy4wKSkgIzEgU01QIEZyaSBNYXkgMSAwMDo1NzoyNiBVVEMgMjAyMApbICAgIDAuMDAwMDAw
XSBNYWNoaW5lIG1vZGVsOiBSYXNwYmVycnkgUGkgNCBNb2RlbCBCClsgICAgMC4wMDAwMDBdIFhl
biA0LjEzIHN1cHBvcnQgZm91bmQKWyAgICAwLjAwMDAwMF0gZWZpOiBHZXR0aW5nIEVGSSBwYXJh
bWV0ZXJzIGZyb20gRkRUOgpbICAgIDAuMDAwMDAwXSBlZmk6IFVFRkkgbm90IGZvdW5kLgpbICAg
IDAuMDAwMDAwXSBOVU1BOiBObyBOVU1BIGNvbmZpZ3VyYXRpb24gZm91bmQKWyAgICAwLjAwMDAw
MF0gTlVNQTogRmFraW5nIGEgbm9kZSBhdCBbbWVtIDB4MDAwMDAwMDA0MDAwMDAwMC0weDAwMDAw
MDAwN2ZmZmZmZmZdClsgICAgMC4wMDAwMDBdIE5VTUE6IE5PREVfREFUQSBbbWVtIDB4N2ZkYzYy
YzAtMHg3ZmRjOWZmZl0KWyAgICAwLjAwMDAwMF0gWm9uZSByYW5nZXM6ClsgICAgMC4wMDAwMDBd
ICAgRE1BICAgICAgW21lbSAweDAwMDAwMDAwNDAwMDAwMDAtMHgwMDAwMDAwMDdmZmZmZmZmXQpb
ICAgIDAuMDAwMDAwXSAgIERNQTMyICAgIGVtcHR5ClsgICAgMC4wMDAwMDBdICAgTm9ybWFsICAg
ZW1wdHkKWyAgICAwLjAwMDAwMF0gTW92YWJsZSB6b25lIHN0YXJ0IGZvciBlYWNoIG5vZGUKWyAg
ICAwLjAwMDAwMF0gRWFybHkgbWVtb3J5IG5vZGUgcmFuZ2VzClsgICAgMC4wMDAwMDBdICAgbm9k
ZSAgIDA6IFttZW0gMHgwMDAwMDAwMDQwMDAwMDAwLTB4MDAwMDAwMDA3ZmZmZmZmZl0KWyAgICAw
LjAwMDAwMF0gSW5pdG1lbSBzZXR1cCBub2RlIDAgW21lbSAweDAwMDAwMDAwNDAwMDAwMDAtMHgw
MDAwMDAwMDdmZmZmZmZmXQpbICAgIDAuMDAwMDAwXSBwc2NpOiBwcm9iaW5nIGZvciBjb25kdWl0
IG1ldGhvZCBmcm9tIERULgpbICAgIDAuMDAwMDAwXSBwc2NpOiBQU0NJdjEuMSBkZXRlY3RlZCBp
biBmaXJtd2FyZS4KWyAgICAwLjAwMDAwMF0gcHNjaTogVXNpbmcgc3RhbmRhcmQgUFNDSSB2MC4y
IGZ1bmN0aW9uIElEcwpbICAgIDAuMDAwMDAwXSBwc2NpOiBUcnVzdGVkIE9TIG1pZ3JhdGlvbiBu
b3QgcmVxdWlyZWQKWyAgICAwLjAwMDAwMF0gcHNjaTogU01DIENhbGxpbmcgQ29udmVudGlvbiB2
MS4xClsgICAgMC4wMDAwMDBdIHBlcmNwdTogRW1iZWRkZWQgMjMgcGFnZXMvY3B1IHM1NDIzMiBy
ODE5MiBkMzE3ODQgdTk0MjA4ClsgICAgMC4wMDAwMDBdIERldGVjdGVkIFBJUFQgSS1jYWNoZSBv
biBDUFUwClsgICAgMC4wMDAwMDBdIENQVSBmZWF0dXJlczogZGV0ZWN0ZWQ6IEVMMiB2ZWN0b3Ig
aGFyZGVuaW5nClsgICAgMC4wMDAwMDBdIENQVSBmZWF0dXJlczogZGV0ZWN0ZWQ6IFNwZWN1bGF0
aXZlIFN0b3JlIEJ5cGFzcyBEaXNhYmxlClsgICAgMC4wMDAwMDBdIENQVSBmZWF0dXJlczogZGV0
ZWN0ZWQ6IEFSTSBlcnJhdHVtIDEzMTkzNjcKWyAgICAwLjAwMDAwMF0gQnVpbHQgMSB6b25lbGlz
dHMsIG1vYmlsaXR5IGdyb3VwaW5nIG9uLiAgVG90YWwgcGFnZXM6IDI1ODA0OApbICAgIDAuMDAw
MDAwXSBQb2xpY3kgem9uZTogRE1BClsgICAgMC4wMDAwMDBdIEtlcm5lbCBjb21tYW5kIGxpbmU6
IGNvbnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rlc2V0ClsgICAgMC4wMDAwMDBdIERl
bnRyeSBjYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3JkZXI6IDgsIDEwNDg1NzYg
Ynl0ZXMsIGxpbmVhcikKWyAgICAwLjAwMDAwMF0gSW5vZGUtY2FjaGUgaGFzaCB0YWJsZSBlbnRy
aWVzOiA2NTUzNiAob3JkZXI6IDcsIDUyNDI4OCBieXRlcywgbGluZWFyKQpbICAgIDAuMDAwMDAw
XSBtZW0gYXV0by1pbml0OiBzdGFjazpvZmYsIGhlYXAgYWxsb2M6b2ZmLCBoZWFwIGZyZWU6b2Zm
ClsgICAgMC4wMDAwMDBdIE1lbW9yeTogMTAwMTk4OEsvMTA0ODU3NksgYXZhaWxhYmxlICgxMjcz
Mksga2VybmVsIGNvZGUsIDE4NTJLIHJ3ZGF0YSwgNjE4NEsgcm9kYXRhLCA0NjcySyBpbml0LCA3
NThLIGJzcywgNDY1ODhLIHJlc2VydmVkLCAwSyBjbWEtcmVzZXJ2ZWQpClsgICAgMC4wMDAwMDBd
IHJhbmRvbTogZ2V0X3JhbmRvbV91NjQgY2FsbGVkIGZyb20gX19rbWVtX2NhY2hlX2NyZWF0ZSsw
eDQwLzB4NTc4IHdpdGggY3JuZ19pbml0PTAKWyAgICAwLjAwMDAwMF0gU0xVQjogSFdhbGlnbj02
NCwgT3JkZXI9MC0zLCBNaW5PYmplY3RzPTAsIENQVXM9MSwgTm9kZXM9MQpbICAgIDAuMDAwMDAw
XSByY3U6IEhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50YXRpb24uClsgICAgMC4wMDAwMDBdIHJj
dTogCVJDVSByZXN0cmljdGluZyBDUFVzIGZyb20gTlJfQ1BVUz00ODAgdG8gbnJfY3B1X2lkcz0x
LgpbICAgIDAuMDAwMDAwXSByY3U6IFJDVSBjYWxjdWxhdGVkIHZhbHVlIG9mIHNjaGVkdWxlci1l
bmxpc3RtZW50IGRlbGF5IGlzIDEwIGppZmZpZXMuClsgICAgMC4wMDAwMDBdIHJjdTogQWRqdXN0
aW5nIGdlb21ldHJ5IGZvciByY3VfZmFub3V0X2xlYWY9MTYsIG5yX2NwdV9pZHM9MQpbICAgIDAu
MDAwMDAwXSBOUl9JUlFTOiA2NCwgbnJfaXJxczogNjQsIHByZWFsbG9jYXRlZCBpcnFzOiAwClsg
ICAgMC4wMDAwMDBdIGFyY2hfdGltZXI6IGNwMTUgdGltZXIocykgcnVubmluZyBhdCA1NC4wME1I
eiAodmlydCkuClsgICAgMC4wMDAwMDBdIGNsb2Nrc291cmNlOiBhcmNoX3N5c19jb3VudGVyOiBt
YXNrOiAweGZmZmZmZmZmZmZmZmZmIG1heF9jeWNsZXM6IDB4Yzc0M2NlMzQ2LCBtYXhfaWRsZV9u
czogNDQwNzk1MjAzMTIzIG5zClsgICAgMC4wMDAwMDZdIHNjaGVkX2Nsb2NrOiA1NiBiaXRzIGF0
IDU0TUh6LCByZXNvbHV0aW9uIDE4bnMsIHdyYXBzIGV2ZXJ5IDQzOTgwNDY1MTExMDJucwpbICAg
IDAuMDAwNDA3XSBDb25zb2xlOiBjb2xvdXIgZHVtbXkgZGV2aWNlIDgweDI1ClsgICAgMC4yNzA5
NjldIHByaW50azogY29uc29sZSBbaHZjMF0gZW5hYmxlZApbICAgIDAuMjc1MjU2XSBDYWxpYnJh
dGluZyBkZWxheSBsb29wIChza2lwcGVkKSwgdmFsdWUgY2FsY3VsYXRlZCB1c2luZyB0aW1lciBm
cmVxdWVuY3kuLiAxMDguMDAgQm9nb01JUFMgKGxwaj01NDAwMDApClsgICAgMC4yODU4MTJdIHBp
ZF9tYXg6IGRlZmF1bHQ6IDMyNzY4IG1pbmltdW06IDMwMQpbICAgIDAuMjkwNjU2XSBMU006IFNl
Y3VyaXR5IEZyYW1ld29yayBpbml0aWFsaXppbmcKWyAgICAwLjI5NTM1NF0gTW91bnQtY2FjaGUg
aGFzaCB0YWJsZSBlbnRyaWVzOiAyMDQ4IChvcmRlcjogMiwgMTYzODQgYnl0ZXMsIGxpbmVhcikK
WyAgICAwLjMwMjg2Ml0gTW91bnRwb2ludC1jYWNoZSBoYXNoIHRhYmxlIGVudHJpZXM6IDIwNDgg
KG9yZGVyOiAyLCAxNjM4NCBieXRlcywgbGluZWFyKQpbICAgIDAuMzEzMzcyXSB4ZW46Z3JhbnRf
dGFibGU6IEdyYW50IHRhYmxlcyB1c2luZyB2ZXJzaW9uIDEgbGF5b3V0ClsgICAgMC4zMTg4NThd
IEdyYW50IHRhYmxlIGluaXRpYWxpemVkClsgICAgMC4zMjI0NThdIHhlbjpldmVudHM6IFVzaW5n
IEZJRk8tYmFzZWQgQUJJClsgICAgMC4zMjY5MDZdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTAKWyAg
ICAwLjMzMDUwOV0gcmN1OiBIaWVyYXJjaGljYWwgU1JDVSBpbXBsZW1lbnRhdGlvbi4KWyAgICAw
LjMzOTQ5N10gRUZJIHNlcnZpY2VzIHdpbGwgbm90IGJlIGF2YWlsYWJsZS4KWyAgICAwLjM0MzY0
NF0gc21wOiBCcmluZ2luZyB1cCBzZWNvbmRhcnkgQ1BVcyAuLi4KWyAgICAwLjM0ODE1N10gc21w
OiBCcm91Z2h0IHVwIDEgbm9kZSwgMSBDUFUKWyAgICAwLjM1MjI1NV0gU01QOiBUb3RhbCBvZiAx
IHByb2Nlc3NvcnMgYWN0aXZhdGVkLgpbICAgIDAuMzU3MDkwXSBDUFUgZmVhdHVyZXM6IGRldGVj
dGVkOiAzMi1iaXQgRUwwIFN1cHBvcnQKWyAgICAwLjM2MjM3MF0gQ1BVIGZlYXR1cmVzOiBkZXRl
Y3RlZDogQ1JDMzIgaW5zdHJ1Y3Rpb25zClsgICAgMC4zOTY5NjldIENQVTogQWxsIENQVShzKSBz
dGFydGVkIGF0IEVMMQpbICAgIDAuNDAwNTU2XSBhbHRlcm5hdGl2ZXM6IHBhdGNoaW5nIGtlcm5l
bCBjb2RlClsgICAgMC40MDY1MzhdIGRldnRtcGZzOiBpbml0aWFsaXplZApbICAgIDAuNDE1OTEz
XSBSZWdpc3RlcmVkIGNwMTVfYmFycmllciBlbXVsYXRpb24gaGFuZGxlcgpbICAgIDAuNDIwNDQ4
XSBSZWdpc3RlcmVkIHNldGVuZCBlbXVsYXRpb24gaGFuZGxlcgpbICAgIDAuNDI1MDc1XSBLQVNM
UiBkaXNhYmxlZCBkdWUgdG8gbGFjayBvZiBzZWVkClsgICAgMC40MzAwNDJdIGNsb2Nrc291cmNl
OiBqaWZmaWVzOiBtYXNrOiAweGZmZmZmZmZmIG1heF9jeWNsZXM6IDB4ZmZmZmZmZmYsIG1heF9p
ZGxlX25zOiAxOTExMjYwNDQ2Mjc1MDAwMCBucwpbICAgIDAuNDM5NzAxXSBmdXRleCBoYXNoIHRh
YmxlIGVudHJpZXM6IDI1NiAob3JkZXI6IDIsIDE2Mzg0IGJ5dGVzLCBsaW5lYXIpClsgICAgMC40
NDY4MTddIHBpbmN0cmwgY29yZTogaW5pdGlhbGl6ZWQgcGluY3RybCBzdWJzeXN0ZW0KWyAgICAw
LjQ1MzEzMl0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBnb3Zlcm5vciAnZmFpcl9z
aGFyZScKWyAgICAwLjQ1MzEzNV0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBnb3Zl
cm5vciAnYmFuZ19iYW5nJwpbICAgIDAuNDU4Njg1XSB0aGVybWFsX3N5czogUmVnaXN0ZXJlZCB0
aGVybWFsIGdvdmVybm9yICdzdGVwX3dpc2UnClsgICAgMC40NjQ4NzZdIHRoZXJtYWxfc3lzOiBS
ZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3IgJ3VzZXJfc3BhY2UnClsgICAgMC40NzExMDNdIERN
SSBub3QgcHJlc2VudCBvciBpbnZhbGlkLgpbICAgIDAuNDgxNzIwXSBORVQ6IFJlZ2lzdGVyZWQg
cHJvdG9jb2wgZmFtaWx5IDE2ClsgICAgMC40ODYwMTJdIERNQTogcHJlYWxsb2NhdGVkIDI1NiBL
aUIgcG9vbCBmb3IgYXRvbWljIGFsbG9jYXRpb25zClsgICAgMC40OTE5ODVdIGF1ZGl0OiBpbml0
aWFsaXppbmcgbmV0bGluayBzdWJzeXMgKGRpc2FibGVkKQpbICAgIDAuNDk4Njk4XSBhdWRpdDog
dHlwZT0yMDAwIGF1ZGl0KDAuMzgwOjEpOiBzdGF0ZT1pbml0aWFsaXplZCBhdWRpdF9lbmFibGVk
PTAgcmVzPTEKWyAgICAwLjUwNjU1NF0gaHctYnJlYWtwb2ludDogZm91bmQgNiBicmVha3BvaW50
IGFuZCA0IHdhdGNocG9pbnQgcmVnaXN0ZXJzLgpbICAgIDAuNTEyOTM0XSBBU0lEIGFsbG9jYXRv
ciBpbml0aWFsaXNlZCB3aXRoIDY1NTM2IGVudHJpZXMKWyAgICAwLjUxODQ1OF0geGVuOnN3aW90
bGJfeGVuOiBXYXJuaW5nOiBvbmx5IGFibGUgdG8gYWxsb2NhdGUgNCBNQiBmb3Igc29mdHdhcmUg
SU8gVExCClsgICAgMC41MjgxODldIHNvZnR3YXJlIElPIFRMQjogbWFwcGVkIFttZW0gMHg3ZjAw
MDAwMC0weDdmNDAwMDAwXSAoNE1CKQpbICAgIDAuNTM1MzM1XSBTZXJpYWw6IEFNQkEgUEwwMTEg
VUFSVCBkcml2ZXIKWyAgICAwLjU1ODA1MV0gSHVnZVRMQiByZWdpc3RlcmVkIDEuMDAgR2lCIHBh
Z2Ugc2l6ZSwgcHJlLWFsbG9jYXRlZCAwIHBhZ2VzClsgICAgMC41NjQyNjBdIEh1Z2VUTEIgcmVn
aXN0ZXJlZCAzMi4wIE1pQiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlcwpbICAgIDAu
NTcxMDg1XSBIdWdlVExCIHJlZ2lzdGVyZWQgMi4wMCBNaUIgcGFnZSBzaXplLCBwcmUtYWxsb2Nh
dGVkIDAgcGFnZXMKWyAgICAwLjU3NzkzN10gSHVnZVRMQiByZWdpc3RlcmVkIDY0LjAgS2lCIHBh
Z2Ugc2l6ZSwgcHJlLWFsbG9jYXRlZCAwIHBhZ2VzClsgICAgMC41ODkwMjFdIGNyeXB0ZDogbWF4
X2NwdV9xbGVuIHNldCB0byAxMDAwClsgICAgMC41OTkwNDddIEFDUEk6IEludGVycHJldGVyIGRp
c2FibGVkLgpbICAgIDAuNjAyNzMyXSB4ZW46YmFsbG9vbjogSW5pdGlhbGlzaW5nIGJhbGxvb24g
ZHJpdmVyClsgICAgMC42MDg2NTVdIGlvbW11OiBEZWZhdWx0IGRvbWFpbiB0eXBlOiBUcmFuc2xh
dGVkClsgICAgMC42MTMyNzZdIHZnYWFyYjogbG9hZGVkClsgICAgMC42MTYxNjVdIFNDU0kgc3Vi
c3lzdGVtIGluaXRpYWxpemVkClsgICAgMC42MjAzODddIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3
IGludGVyZmFjZSBkcml2ZXIgdXNiZnMKWyAgICAwLjYyNTM1NV0gdXNiY29yZTogcmVnaXN0ZXJl
ZCBuZXcgaW50ZXJmYWNlIGRyaXZlciBodWIKWyAgICAwLjYzMDg3M10gdXNiY29yZTogcmVnaXN0
ZXJlZCBuZXcgZGV2aWNlIGRyaXZlciB1c2IKWyAgICAwLjYzNjA5Ml0gdXNiX3BoeV9nZW5lcmlj
IHBoeTogcGh5IHN1cHBseSB2Y2Mgbm90IGZvdW5kLCB1c2luZyBkdW1teSByZWd1bGF0b3IKWyAg
ICAwLjY0Mzg5NV0gcHBzX2NvcmU6IExpbnV4UFBTIEFQSSB2ZXIuIDEgcmVnaXN0ZXJlZApbICAg
IDAuNjQ4NTg4XSBwcHNfY29yZTogU29mdHdhcmUgdmVyLiA1LjMuNiAtIENvcHlyaWdodCAyMDA1
LTIwMDcgUm9kb2xmbyBHaW9tZXR0aSA8Z2lvbWV0dGlAbGludXguaXQ+ClsgICAgMC42NTc5MjVd
IFBUUCBjbG9jayBzdXBwb3J0IHJlZ2lzdGVyZWQKWyAgICAwLjY2MjA2Ml0gRURBQyBNQzogVmVy
OiAzLjAuMApbICAgIDAuNjY2NDI5XSBOZXRMYWJlbDogSW5pdGlhbGl6aW5nClsgICAgMC42Njky
ODJdIE5ldExhYmVsOiAgZG9tYWluIGhhc2ggc2l6ZSA9IDEyOApbICAgIDAuNjczNzM2XSBOZXRM
YWJlbDogIHByb3RvY29scyA9IFVOTEFCRUxFRCBDSVBTT3Y0IENBTElQU08KWyAgICAwLjY3OTYx
M10gTmV0TGFiZWw6ICB1bmxhYmVsZWQgdHJhZmZpYyBhbGxvd2VkIGJ5IGRlZmF1bHQKWyAgICAw
LjY4NTc4MV0gY2xvY2tzb3VyY2U6IFN3aXRjaGVkIHRvIGNsb2Nrc291cmNlIGFyY2hfc3lzX2Nv
dW50ZXIKWyAgICAwLjY5MTc1NV0gVkZTOiBEaXNrIHF1b3RhcyBkcXVvdF82LjYuMApbICAgIDAu
Njk1NTc0XSBWRlM6IERxdW90LWNhY2hlIGhhc2ggdGFibGUgZW50cmllczogNTEyIChvcmRlciAw
LCA0MDk2IGJ5dGVzKQpbICAgIDAuNzAyNzUwXSBwbnA6IFBuUCBBQ1BJOiBkaXNhYmxlZApbICAg
IDAuNzEyOTY0XSBORVQ6IFJlZ2lzdGVyZWQgcHJvdG9jb2wgZmFtaWx5IDIKWyAgICAwLjcxNzI4
MV0gdGNwX2xpc3Rlbl9wb3J0YWRkcl9oYXNoIGhhc2ggdGFibGUgZW50cmllczogNTEyIChvcmRl
cjogMSwgODE5MiBieXRlcywgbGluZWFyKQpbICAgIDAuNzI1Mjk1XSBUQ1AgZXN0YWJsaXNoZWQg
aGFzaCB0YWJsZSBlbnRyaWVzOiA4MTkyIChvcmRlcjogNCwgNjU1MzYgYnl0ZXMsIGxpbmVhcikK
WyAgICAwLjczMzI1MV0gVENQIGJpbmQgaGFzaCB0YWJsZSBlbnRyaWVzOiA4MTkyIChvcmRlcjog
NSwgMTMxMDcyIGJ5dGVzLCBsaW5lYXIpClsgICAgMC43NDA2ODFdIFRDUDogSGFzaCB0YWJsZXMg
Y29uZmlndXJlZCAoZXN0YWJsaXNoZWQgODE5MiBiaW5kIDgxOTIpClsgICAgMC43NDcyMzddIFVE
UCBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IDIsIDE2Mzg0IGJ5dGVzLCBsaW5lYXIp
ClsgICAgMC43NTM4MjldIFVEUC1MaXRlIGhhc2ggdGFibGUgZW50cmllczogNTEyIChvcmRlcjog
MiwgMTYzODQgYnl0ZXMsIGxpbmVhcikKWyAgICAwLjc2MTIwN10gTkVUOiBSZWdpc3RlcmVkIHBy
b3RvY29sIGZhbWlseSAxClsgICAgMC43NjU1MTRdIFBDSTogQ0xTIDAgYnl0ZXMsIGRlZmF1bHQg
NjQKWyAgICAwLjc3MDA1NV0ga3ZtIFsxXTogSFlQIG1vZGUgbm90IGF2YWlsYWJsZQpbICAgIDAu
NzgxMTc2XSBJbml0aWFsaXNlIHN5c3RlbSB0cnVzdGVkIGtleXJpbmdzClsgICAgMC43ODUxOTNd
IHdvcmtpbmdzZXQ6IHRpbWVzdGFtcF9iaXRzPTQwIG1heF9vcmRlcj0xOCBidWNrZXRfb3JkZXI9
MApbICAgIDAuNzk4MDAwXSB6YnVkOiBsb2FkZWQKWyAgICAwLjgwMTQ2N10gc3F1YXNoZnM6IHZl
cnNpb24gNC4wICgyMDA5LzAxLzMxKSBQaGlsbGlwIExvdWdoZXIKWyAgICAwLjgwNzMzMF0gOXA6
IEluc3RhbGxpbmcgdjlmcyA5cDIwMDAgZmlsZSBzeXN0ZW0gc3VwcG9ydApbICAgIDAuODM2MDk3
XSBLZXkgdHlwZSBhc3ltbWV0cmljIHJlZ2lzdGVyZWQKWyAgICAwLjgzOTYzMF0gQXN5bW1ldHJp
YyBrZXkgcGFyc2VyICd4NTA5JyByZWdpc3RlcmVkClsgICAgMC44NDQ2NjZdIEJsb2NrIGxheWVy
IFNDU0kgZ2VuZXJpYyAoYnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQgbG9hZGVkIChtYWpvciAyNDYp
ClsgICAgMC44NTIzNDZdIGlvIHNjaGVkdWxlciBtcS1kZWFkbGluZSByZWdpc3RlcmVkClsgICAg
MC44NTY4NjRdIGlvIHNjaGVkdWxlciBreWJlciByZWdpc3RlcmVkClsgICAgMC44NjExMzNdIGlv
IHNjaGVkdWxlciBiZnEgcmVnaXN0ZXJlZApbICAgIDAuODY5Nzk1XSBzaHBjaHA6IFN0YW5kYXJk
IEhvdCBQbHVnIFBDSSBDb250cm9sbGVyIERyaXZlciB2ZXJzaW9uOiAwLjQKWyAgICAwLjg3NzIy
Nl0gYnJjbS1wY2llIGZkNTAwMDAwLnBjaWU6IGhvc3QgYnJpZGdlIC9zY2IvcGNpZUA3ZDUwMDAw
MCByYW5nZXM6ClsgICAgMC44ODM2NzddIGJyY20tcGNpZSBmZDUwMDAwMC5wY2llOiAgICAgIE1F
TSAweDA2MDAwMDAwMDAuLjB4MDYwM2ZmZmZmZiAtPiAweDAwZjgwMDAwMDAKWyAgICAwLjg5MTk3
N10gYnJjbS1wY2llIGZkNTAwMDAwLnBjaWU6ICAgSUIgTUVNIDB4MDAwMDAwMDAwMC4uMHgwMGJm
ZmZmZmZmIC0+IDB4MDAwMDAwMDAwMApbICAgIDAuOTU1ODA4XSBicmNtLXBjaWUgZmQ1MDAwMDAu
cGNpZTogbGluayB1cCwgNSBHVC9zIHgxICghU1NDKQpbICAgIDAuOTYxMzEyXSBicmNtLXBjaWUg
ZmQ1MDAwMDAucGNpZTogUENJIGhvc3QgYnJpZGdlIHRvIGJ1cyAwMDAwOjAwClsgICAgMC45Njc1
MjRdIHBjaV9idXMgMDAwMDowMDogcm9vdCBidXMgcmVzb3VyY2UgW2J1cyAwMC0wMV0KWyAgICAw
Ljk3MzEyNl0gcGNpX2J1cyAwMDAwOjAwOiByb290IGJ1cyByZXNvdXJjZSBbbWVtIDB4NjAwMDAw
MDAwLTB4NjAzZmZmZmZmXSAoYnVzIGFkZHJlc3MgWzB4ZjgwMDAwMDAtMHhmYmZmZmZmZl0pClsg
ICAgMC45ODM2OTZdIHBjaSAwMDAwOjAwOjAwLjA6IFsxNGU0OjI3MTFdIHR5cGUgMDEgY2xhc3Mg
MHgwNjA0MDAKWyAgICAwLjk4OTg5N10gcGNpIDAwMDA6MDA6MDAuMDogUE1FIyBzdXBwb3J0ZWQg
ZnJvbSBEMCBEM2hvdAooWEVOKSBwaHlzZGV2LmM6MTY6ZDB2MCBQSFlTREVWT1AgY21kPTI1OiBu
b3QgaW1wbGVtZW50ZWQKKFhFTikgcGh5c2Rldi5jOjE2OmQwdjAgUEhZU0RFVk9QIGNtZD0xNTog
bm90IGltcGxlbWVudGVkClsgICAgMS4wMDU4MTJdIHBjaSAwMDAwOjAwOjAwLjA6IEZhaWxlZCB0
byBhZGQgLSBwYXNzdGhyb3VnaCBvciBNU0kvTVNJLVggbWlnaHQgZmFpbCEKWyAgICAxLjAxNjQw
MF0gcGNpIDAwMDA6MDA6MDAuMDogYnJpZGdlIGNvbmZpZ3VyYXRpb24gaW52YWxpZCAoW2J1cyAw
MC0wMF0pLCByZWNvbmZpZ3VyaW5nClsgICAgMS4wMjM5OTNdIHBjaSAwMDAwOjAxOjAwLjA6IFsx
MTA2OjM0ODNdIHR5cGUgMDAgY2xhc3MgMHgwYzAzMzAKWyAgICAxLjAzMDA3M10gcGNpIDAwMDA6
MDE6MDAuMDogcmVnIDB4MTA6IFttZW0gMHgwMDAwMDAwMC0weDAwMDAwZmZmIDY0Yml0XQpbICAg
IDEuMDM3MDg5XSBwY2kgMDAwMDowMTowMC4wOiBQTUUjIHN1cHBvcnRlZCBmcm9tIEQwIEQzY29s
ZAooWEVOKSBwaHlzZGV2LmM6MTY6ZDB2MCBQSFlTREVWT1AgY21kPTE1OiBub3QgaW1wbGVtZW50
ZWQKWyAgICAxLjA0Nzg4M10gcGNpIDAwMDA6MDE6MDAuMDogRmFpbGVkIHRvIGFkZCAtIHBhc3N0
aHJvdWdoIG9yIE1TSS9NU0ktWCBtaWdodCBmYWlsIQpbICAgIDEuMDU4NDIwXSBwY2lfYnVzIDAw
MDA6MDE6IGJ1c25fcmVzOiBbYnVzIDAxXSBlbmQgaXMgdXBkYXRlZCB0byAwMQpbICAgIDEuMDY0
MjUyXSBwY2kgMDAwMDowMDowMC4wOiBCQVIgMTQ6IGFzc2lnbmVkIFttZW0gMHg2MDAwMDAwMDAt
MHg2MDAwZmZmZmZdClsgICAgMS4wNzE0NTNdIHBjaSAwMDAwOjAxOjAwLjA6IEJBUiAwOiBhc3Np
Z25lZCBbbWVtIDB4NjAwMDAwMDAwLTB4NjAwMDAwZmZmIDY0Yml0XQpbICAgIDEuMDc5MDk0XSBw
Y2kgMDAwMDowMDowMC4wOiBQQ0kgYnJpZGdlIHRvIFtidXMgMDFdClsgICAgMS4wODQxNjVdIHBj
aSAwMDAwOjAwOjAwLjA6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4NjAwMDAwMDAwLTB4NjAwMGZm
ZmZmXQpbICAgIDEuMDkxNDI2XSBwY2llcG9ydCAwMDAwOjAwOjAwLjA6IGVuYWJsaW5nIGRldmlj
ZSAoMDAwMCAtPiAwMDAyKQpbICAgIDEuMDk3NjU5XSBwY2llcG9ydCAwMDAwOjAwOjAwLjA6IFBN
RTogU2lnbmFsaW5nIHdpdGggSVJRIDM2ClsgICAgMS4xMDM2NzVdIHBjaWVwb3J0IDAwMDA6MDA6
MDAuMDogQUVSOiBlbmFibGVkIHdpdGggSVJRIDM2ClsgICAgMS4xMDkyNzNdIHBjaSAwMDAwOjAx
OjAwLjA6IGVuYWJsaW5nIGRldmljZSAoMDAwMCAtPiAwMDAyKQooWEVOKSB0cmFwcy5jOjE5NzI6
ZDB2MCBIU1I9MHg5MzgwMDAwNSBwYz0weGZmZmY4MDAwMTA4ZTM1NjQgZ3ZhPTB4ZmZmZjgwMDAx
MDAxZDAxMCBncGE9MHgwMDAwMDYwMDAwMDAxMApbICAgIDEuMTI0MjM0XSBVbmhhbmRsZWQgZmF1
bHQgYXQgMHhmZmZmODAwMDEwMDFkMDEwClsgICAgMS4xMjkwMzldIE1lbSBhYm9ydCBpbmZvOgpb
ICAgIDEuMTMxOTI2XSAgIEVTUiA9IDB4OTYwMDAwMDAKWyAgICAxLjEzNTA4OV0gICBFQyA9IDB4
MjU6IERBQlQgKGN1cnJlbnQgRUwpLCBJTCA9IDMyIGJpdHMKWyAgICAxLjE0MDU0Ml0gICBTRVQg
PSAwLCBGblYgPSAwClsgICAgMS4xNDM2OTFdICAgRUEgPSAwLCBTMVBUVyA9IDAKWyAgICAxLjE0
Njk1MF0gRGF0YSBhYm9ydCBpbmZvOgpbICAgIDEuMTQ5OTI2XSAgIElTViA9IDAsIElTUyA9IDB4
MDAwMDAwMDAKWyAgICAxLjE1Mzg3N10gICBDTSA9IDAsIFduUiA9IDAKWyAgICAxLjE1Njk2NV0g
c3dhcHBlciBwZ3RhYmxlOiA0ayBwYWdlcywgNDgtYml0IFZBcywgcGdkcD0wMDAwMDAwMDQxMmZl
MDAwClsgICAgMS4xNjM4MDBdIFtmZmZmODAwMDEwMDFkMDEwXSBwZ2Q9MDAwMDAwMDA3ZmZmZjAw
MywgcHVkPTAwMDAwMDAwN2ZmZmUwMDMsIHBtZD0wMDAwMDAwMDdmZmZkMDAzLCBwdGU9MDA2ODAw
MDYwMDAwMDcwNwpbICAgIDEuMTc0NjE0XSBJbnRlcm5hbCBlcnJvcjogdHRiciBhZGRyZXNzIHNp
emUgZmF1bHQ6IDk2MDAwMDAwIFsjMV0gU01QClsgICAgMS4xODEyNzJdIE1vZHVsZXMgbGlua2Vk
IGluOgpbICAgIDEuMTg0NDM4XSBDUFU6IDAgUElEOiAxIENvbW06IHN3YXBwZXIvMCBOb3QgdGFp
bnRlZCA1LjYuMS1kZWZhdWx0ICMxClsgICAgMS4xOTExMDZdIEhhcmR3YXJlIG5hbWU6IFJhc3Bi
ZXJyeSBQaSA0IE1vZGVsIEIgKERUKQpbICAgIDEuMTk2Mzg2XSBwc3RhdGU6IDYwMDAwMDA1IChu
WkN2IGRhaWYgLVBBTiAtVUFPKQpbICAgIDEuMjAxMzA3XSBwYyA6IHF1aXJrX3VzYl9lYXJseV9o
YW5kb2ZmKzB4NGZjLzB4ODcwClsgICAgMS4yMDYzOThdIGxyIDogcXVpcmtfdXNiX2Vhcmx5X2hh
bmRvZmYrMHg0ZWMvMHg4NzAKWyAgICAxLjIxMTQ3Nl0gc3AgOiBmZmZmODAwMDEwMDEzOWYwClsg
ICAgMS4yMTQ5MDFdIHgyOTogZmZmZjgwMDAxMDAxMzlmMCB4Mjg6IGZmZmYwMDAwM2RkOGMwODAK
WyAgICAxLjIyMDM0NV0geDI3OiBmZmZmMDAwMDNmZGY0YzcwIHgyNjogZmZmZjgwMDAxMDhlMzA2
OApbICAgIDEuMjI1Nzk3XSB4MjU6IGZmZmY4MDAwMTE5NjQwNTAgeDI0OiAwMDAwMDAwMDM5NmRm
YmY4ClsgICAgMS4yMzEyMzJdIHgyMzogMDAwMDAwMDAwMDAwZmZmZiB4MjI6IGZmZmY4MDAwMTAw
MWQwMDAKWyAgICAxLjIzNjY4NF0geDIxOiAwMDAwMDAwMDAwMDAxMDAwIHgyMDogZmZmZjgwMDAx
MTc5YjU0OApbICAgIDEuMjQyMTE5XSB4MTk6IGZmZmYwMDAwM2RjYWMwMDAgeDE4OiBmZmZmODAw
MDExMWRiYTQ4ClsgICAgMS4yNDc1NjRdIHgxNzogMDAwMDAwMDAwMDAwMDAwMSB4MTY6IDAwMDAw
MDAwZGVhZGJlZWYKWyAgICAxLjI1MzAwN10geDE1OiAwMDAwMDAwNjAwMDAwMDAwIHgxNDogZmZm
ZjgwMDAxMDAyNTAwMApbICAgIDEuMjU4NDUxXSB4MTM6IGZmZmY4MDAwMTAwMWQwMDAgeDEyOiBm
ZmZmMDAwMDNmODA0NDQwClsgICAgMS4yNjM4OTVdIHgxMTogZmZmZjgwMDAxMTQ2MTkyOCB4MTA6
IGZmZmY4MDAwMTAwMTM1MTAKWyAgICAxLjI2OTMzOV0geDkgOiAwMDAwMDAwMDAwMDAxMDAwIHg4
IDogZmZmZjgwMDAxMTdiMmQ1OApbICAgIDEuMjc0NzgyXSB4NyA6IDAwMDAwMDAwMDAwMDAwMDAg
eDYgOiBmZmZmODAwMDEwMDFkMDAwClsgICAgMS4yODAyMjZdIHg1IDogZmZmZjAwMDAzZmZmZjAw
MCB4NCA6IGZmZmYwMDAwM2ZmZmU0MDAKWyAgICAxLjI4NTY3MF0geDMgOiAwMDY4MDAwMDAwMDAw
NzA3IHgyIDogMDE0MDAwMDAwMDAwMDAwMApbICAgIDEuMjkxMTE0XSB4MSA6IDAwMDA4MDA1ZWZm
ZTMwMDAgeDAgOiBmZmZmODAwMDEwMDFkMDEwClsgICAgMS4yOTY1NjddIENhbGwgdHJhY2U6Clsg
ICAgMS4yOTkxMTFdICBxdWlya191c2JfZWFybHlfaGFuZG9mZisweDRmYy8weDg3MApbICAgIDEu
MzAzODUwXSAgcGNpX2RvX2ZpeHVwcysweGUwLzB4MTM4ClsgICAgMS4zMDc2MjVdICBwY2lfZml4
dXBfZGV2aWNlKzB4NGMvMHgxMzAKWyAgICAxLjMxMTY2NF0gIHBjaV9idXNfYWRkX2RldmljZSsw
eDIwLzB4YjgKWyAgICAxLjMxNTc5OV0gIHBjaV9idXNfYWRkX2RldmljZXMrMHgzOC8weDg4Clsg
ICAgMS4zMjAwMDRdICBwY2lfYnVzX2FkZF9kZXZpY2VzKzB4NjgvMHg4OApbICAgIDEuMzI0MjIw
XSAgYnJjbV9wY2llX3Byb2JlKzB4NzY4LzB4YjI4ClsgICAgMS4zMjgyNTldICBwbGF0Zm9ybV9k
cnZfcHJvYmUrMHg1MC8weGEwClsgICAgMS4zMzIzODddICByZWFsbHlfcHJvYmUrMHhkOC8weDQz
OApbICAgIDEuMzM2MDgzXSAgZHJpdmVyX3Byb2JlX2RldmljZSsweGRjLzB4MTMwClsgICAgMS4z
NDAzNzZdICBkZXZpY2VfZHJpdmVyX2F0dGFjaCsweDZjLzB4NzgKWyAgICAxLjM0NDY3OV0gIF9f
ZHJpdmVyX2F0dGFjaCsweDljLzB4MTY4ClsgICAgMS4zNDg2MzBdICBidXNfZm9yX2VhY2hfZGV2
KzB4NzAvMHhjMApbICAgIDEuMzUyNTgxXSAgZHJpdmVyX2F0dGFjaCsweDIwLzB4MjgKWyAgICAx
LjM1NjI3OF0gIGJ1c19hZGRfZHJpdmVyKzB4MTkwLzB4MjIwClsgICAgMS4zNjAyMjBdICBkcml2
ZXJfcmVnaXN0ZXIrMHg2MC8weDExMApbICAgIDEuMzY0MTY5XSAgX19wbGF0Zm9ybV9kcml2ZXJf
cmVnaXN0ZXIrMHg0NC8weDUwClsgICAgMS4zNjkwMDFdICBicmNtX3BjaWVfZHJpdmVyX2luaXQr
MHgxOC8weDIwClsgICAgMS4zNzMzOTFdICBkb19vbmVfaW5pdGNhbGwrMHg3NC8weDFiMApbICAg
IDEuMzc3MzQyXSAga2VybmVsX2luaXRfZnJlZWFibGUrMHgyMTQvMHgyYjAKWyAgICAxLjM4MTgy
MF0gIGtlcm5lbF9pbml0KzB4MTAvMHgxMDAKWyAgICAxLjM4NTQxOF0gIHJldF9mcm9tX2Zvcmsr
MHgxMC8weDE4ClsgICAgMS4zODkxMTJdIENvZGU6IGFhMDAwM2Y2IGI0ZmZkY2UwIDkxMDA0MDAw
IGE5MDM2M2Y3IChiOTQwMDAwMCkKWyAgICAxLjM5NTM1MV0gLS0tWyBlbmQgdHJhY2UgOGE2NDRh
YzE4NDIzZjQ1NyBdLS0tClsgICAgMS40MDAxNDNdIEtlcm5lbCBwYW5pYyAtIG5vdCBzeW5jaW5n
OiBBdHRlbXB0ZWQgdG8ga2lsbCBpbml0ISBleGl0Y29kZT0weDAwMDAwMDBiClsgICAgMS40MDc4
OThdIEtlcm5lbCBPZmZzZXQ6IGRpc2FibGVkClsgICAgMS40MTE0OTZdIENQVSBmZWF0dXJlczog
MHgxMDAwMiw2MTAwNjAwMApbICAgIDEuNDE1NzE3XSBNZW1vcnkgTGltaXQ6IG5vbmUKWyAgICAx
LjQxODg3NF0gLS0tWyBlbmQgS2VybmVsIHBhbmljIC0gbm90IHN5bmNpbmc6IEF0dGVtcHRlZCB0
byBraWxsIGluaXQhIGV4aXRjb2RlPTB4MDAwMDAwMGIgXS0tLQo=
--0000000000006c6fd005a48cd2eb--


From xen-devel-bounces@lists.xenproject.org Fri May 01 06:30:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 06:30:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUPBm-0008Hb-9A; Fri, 01 May 2020 06:30:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvLF=6P=glacier-peak.net=sam.felton@srs-us1.protection.inumbo.net>)
 id 1jUPAq-0007dO-Vm
 for xen-devel@lists.xen.org; Fri, 01 May 2020 06:29:25 +0000
X-Inumbo-ID: 17b83784-8b75-11ea-ae69-bc764e2007e4
Received: from palantir.glacier-peak.net (unknown [50.106.60.213])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17b83784-8b75-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 06:29:21 +0000 (UTC)
Received: from silmaril.corp.glacier-peak.net (192.168.42.226) by
 palantir.glacier-peak.net (192.168.42.223) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id
 15.1.1913.5; Thu, 30 Apr 2020 22:59:19 -0700
Received: from silmaril.corp.glacier-peak.net (192.168.42.226) by
 silmaril.corp.glacier-peak.net (192.168.42.226) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Thu, 30 Apr 2020 22:59:17 -0700
Received: from silmaril.corp.glacier-peak.net ([fe80::3c33:3873:6e2:4337]) by
 silmaril.corp.glacier-peak.net ([fe80::3c33:3873:6e2:4337%3]) with
 mapi id 15.01.1913.010; Thu, 30 Apr 2020 22:59:17 -0700
From: "Samuel P. Felton - GPT LLC" <sam.felton@glacier-peak.net>
To: "'xen-devel@lists.xen.org'" <xen-devel@lists.xen.org>
Subject: Question...
Thread-Topic: Question...
Thread-Index: AdYffIk+nIOwP7iTQLuOs3KZB2G7Lw==
Date: Fri, 1 May 2020 05:59:17 +0000
Message-ID: <f017c46e427a45ecab00c1c59413658c@glacier-peak.net>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.1.24]
Content-Type: multipart/related;
 boundary="_004_f017c46e427a45ecab00c1c59413658cglacierpeaknet_";
 type="multipart/alternative"
MIME-Version: 1.0
X-Mailman-Approved-At: Fri, 01 May 2020 06:30:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_004_f017c46e427a45ecab00c1c59413658cglacierpeaknet_
Content-Type: multipart/alternative;
	boundary="_000_f017c46e427a45ecab00c1c59413658cglacierpeaknet_"

--_000_f017c46e427a45ecab00c1c59413658cglacierpeaknet_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

So, I'm trying to get a Xen Dom0 and DomU, both running Ubuntu 20.04 LTS,=
 up on our brand-new Gigabyte ThunderX2 ARM64 box. I can get Ubuntu up and =
running, but after installing the Xen bits, it dies after the UEFI layer la=
unches GRUB. I haven't been able to get any logfiles because it doesn't get=
 that far. Nothing shows up on the serial port log either - it just hangs.
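For what it's worth, these are the Xen-side serial/logging knobs I understa=
nd to be relevant on arm64; I haven't confirmed them on the ThunderX2, and =
the UART name below is a placeholder that would need to match the board's d=
evice tree:

```shell
# Sketch of /etc/default/grub settings for getting early Xen output on an
# arm64 box. Assumptions: Ubuntu's Xen GRUB variables are in use, and
# "serial0" is a placeholder device-tree UART alias -- the real alias or
# full node path comes from the board's device tree.
GRUB_CMDLINE_XEN_DEFAULT="console=dtuart dtuart=serial0 loglvl=all guest_loglvl=all"
# Route the Dom0 kernel console through the Xen PV console:
GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT="console=hvc0 earlyprintk=xen"
# Then regenerate the GRUB config:
sudo update-grub
```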

Has anyone over there been trying to get a similar setup running? Am I out =
to lunch for trying this, or is there something I'm missing? Any help at al=
l would be appreciated.

If this doesn't work, I'm going to have to go to FreeBSD and Bhyve because =
I know someone who has that working. I'd rather use Linux than BSD for this=
 application, as there are more drivers supporting this hardware.

Thanks,
~Sam




[cid:image001.png@01D61F42.F8BC9760]
Phone: +1 206 701-7321 ext. 101
Email: info@glacier-peak.net<mailto:info@glacier-peak.net?subject=3DTell%20=
me%20about%20Evolucid(r)>



--_000_f017c46e427a45ecab00c1c59413658cglacierpeaknet_
Content-Type: text/html; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable

<html xmlns:v=3D"urn:schemas-microsoft-com:vml" xmlns:o=3D"urn:schemas-micr=
osoft-com:office:office" xmlns:w=3D"urn:schemas-microsoft-com:office:word" =
xmlns:m=3D"http://schemas.microsoft.com/office/2004/12/omml" xmlns=3D"http:=
//www.w3.org/TR/REC-html40">
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
<meta name=3D"Generator" content=3D"Microsoft Word 15 (filtered medium)">
<!--[if !mso]><style>v\:* {behavior:url(#default#VML);}
o\:* {behavior:url(#default#VML);}
w\:* {behavior:url(#default#VML);}
.shape {behavior:url(#default#VML);}
</style><![endif]--><style><!--
/* Font Definitions */
@font-face
	{font-family:"Cambria Math";
	panose-1:2 4 5 3 5 4 6 3 2 4;}
@font-face
	{font-family:DengXian;
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:Calibri;
	panose-1:2 15 5 2 2 2 4 3 2 4;}
@font-face
	{font-family:"\@DengXian";
	panose-1:2 1 6 0 3 1 1 1 1 1;}
@font-face
	{font-family:Consolas;
	panose-1:2 11 6 9 2 2 4 3 2 4;}
@font-face
	{font-family:"Monotype Corsiva";
	panose-1:3 1 1 1 1 2 1 1 1 1;}
/* Style Definitions */
p.MsoNormal, li.MsoNormal, div.MsoNormal
	{margin:0in;
	margin-bottom:.0001pt;
	font-size:11.0pt;
	font-family:"Calibri",sans-serif;}
span.EmailStyle17
	{mso-style-type:personal-compose;
	font-family:"Calibri",sans-serif;
	color:windowtext;}
.MsoChpDefault
	{mso-style-type:export-only;
	font-family:"Calibri",sans-serif;}
@page WordSection1
	{size:8.5in 11.0in;
	margin:1.0in 1.0in 1.0in 1.0in;}
div.WordSection1
	{page:WordSection1;}
--></style><!--[if gte mso 9]><xml>
<o:shapedefaults v:ext=3D"edit" spidmax=3D"1026" />
</xml><![endif]--><!--[if gte mso 9]><xml>
<o:shapelayout v:ext=3D"edit">
<o:idmap v:ext=3D"edit" data=3D"1" />
</o:shapelayout></xml><![endif]-->
</head>
<body lang=3D"EN-US" link=3D"#0563C1" vlink=3D"#954F72">
<div class=3D"WordSection1">
<p class=3D"MsoNormal">So, I&#8217;m trying to get a Xen Dom0 and DomU, bot=
h running Ubuntu 20.04 LTS up on our brand-new Gigabyte ThunderX2 ARM64 box=
. I can get Ubuntu up and running, but after installing the Xen bits, it di=
es after the UEFI layer launches GRUB. I
 haven&#8217;t been able to get any logfiles because it doesn&#8217;t get t=
hat far. Nothing shows up on the serial port log either &#8211; it just han=
gs.<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">Has anyone over there been trying to get a similar s=
etup running? Am I out to lunch for trying this, or is there something I&#8=
217;m missing? Any help at all would be appreciated.<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">If this doesn&#8217;t work, I&#8217;m going to have =
to go to FreeBSD and Bhyve because I know someone who has that working. I&#=
8217;d rather use Linux than BSD for this application, there are more drive=
rs supporting this hardaware.<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal">Thanks,<o:p></o:p></p>
<p class=3D"MsoNormal">~Sam<o:p></o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal" align=3D"center" style=3D"text-align:center"><o:p>&n=
bsp;</o:p></p>
<p class=3D"MsoNormal" align=3D"center" style=3D"text-align:center"><o:p>&n=
bsp;</o:p></p>
<p class=3D"MsoNormal" align=3D"center" style=3D"text-align:center"><img wi=
dth=3D"306" height=3D"193" style=3D"width:3.1875in;height:2.0104in" id=3D"P=
icture_x0020_1" src=3D"cid:image001.png@01D61F42.F8BC9760"><o:p></o:p></p>
<p class=3D"MsoNormal" align=3D"center" style=3D"text-align:center"><span s=
tyle=3D"font-family:&quot;Monotype Corsiva&quot;">Phone: &#43;1 206 701-732=
1 ext. 101<o:p></o:p></span></p>
<p class=3D"MsoNormal" align=3D"center" style=3D"text-align:center"><span s=
tyle=3D"font-family:&quot;Monotype Corsiva&quot;">Email:
</span><span style=3D"font-size:8.0pt;font-family:Consolas"><a href=3D"mail=
to:info@glacier-peak.net?subject=3DTell%20me%20about%20Evolucid&reg;"><span=
 style=3D"color:#0563C1">info@glacier-peak.net</span></a><o:p></o:p></span>=
</p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
<p class=3D"MsoNormal"><o:p>&nbsp;</o:p></p>
</div>
</body>
</html>

--_000_f017c46e427a45ecab00c1c59413658cglacierpeaknet_--

--_004_f017c46e427a45ecab00c1c59413658cglacierpeaknet_
Content-Type: image/png; name="image001.png"
Content-Description: image001.png
Content-Disposition: inline; filename="image001.png"; size=45509;
	creation-date="Fri, 01 May 2020 05:59:15 GMT";
	modification-date="Fri, 01 May 2020 05:59:15 GMT"
Content-ID: <image001.png@01D61F42.F8BC9760>
Content-Transfer-Encoding: base64

iVBORw0KGgoAAAANSUhEUgAAATMAAADCCAIAAACqg7IBAAAAAXNSR0IArs4c6QAAAAlwSFlzAAAO
wwAADsMBx2+oZAAAsWpJREFUeF7tvQeAXGd1PT69t+1VK6valtzB2JYNNiRAqAESIHQIxOSXBiQh
kH8KpBACgQTTe4dQAphmU01x77Jkq9ftu7PTe/+fc++bt7NF8qpasme8Xs3OvPq973y3nXuvtdFo
WNqv9gi0R+AMGwHbGXY97ctpj0B7BDgCVpWZt912W3s82iPQHoHHfASuueYavYa2zHzMn0X7Atoj
sMwItJHZnhbtETgTR2CxNmsK0zPxYtvX1B6Bx+kImOZkW5t9nD7h9m09Xkagrc0+Xp5k+z4eXyPQ
Rubj63m27+bxMgJtZJ6JTxKhLI1mtXkgZ+LjOS3X1EbmaRnmFZxE0diKyTYsVzBsj9tN2sg8Ix6t
CcKl4Gzj84x4Qqf9ItrIPOVDvkgYLj1fKyyX/bZWq1UqlXK5jDen/HLbJzgzRqCNzNPxHKzGi5bj
0vPhS1OJbf1WPwQaq/ICOAuFQi6Xw/vTcdHtczymI9BG5gkMP1AmQGv+i7f1Brw2xiHr9ODgf2sD
7+RlLVqskxXL3pl0PJOv1nVL/KpZl0MsDycv7tl8AajAZz6fb+PzBJ7cWbBrG5nH+ZCIQGvdgh8L
f6zy29KwWRuQgApWwK1RrVQL5cahbOW348lP37bjnV+7403v+8Z7vvjd2UzWbrMqXBt1/LdYlkJa
KiaJ3aarFvC02Wz4C5othCcgam5znLfR3u1MHYE2Mo/zyQBVVvyHH0KRvw0B2IQYPq5ZrXWHve6w
fe+3d3/+Z7fvHJvuC3v/8OmX/fmrn7NusNfUbBvAdx3YNPZUHOonKjNbLVW8V/GJ61b9tlQqte5+
nPfT3u0MG4E2b/Y4H4gASbCpCqnVAueMnZJT/+LA5srlVK5Uc7hSlYbb4+z3O0P8ii9sR1jLQSB+
oZoCXQ6Hw7Q5FajQXU2UqvA0paip4kKKYi+73Y7d8f4476e922M6Am3e7Mkbfmu9YYUnhsYkNdd6
wwFppj8iSXEml8NldzrTxZq3UV3lsQWMkwPCNRuVXZqP0GTxUhmonh5TALZamEtdRKrZmvITG+ju
7SjLyXvGj+WR2kvscY4+sGeDGgpsiXfGQqMRb+34aVhslWqjVKkVatWapeGz1cIuuxs44vYQkNZa
zQZhWIc8hMNIBScyZWGiLlRfTfGouG2FnPmJIlMhSlHc1H6P867au50xI9BG5kofhbh1WjyvFofF
6ijXrbvjmRsf3vXb8clDpcr2eHrrbGzr6GS6WoV+iS0iHs9wp7835LdTT60DqBUeB6ajYoj/q0Js
GpCt+FSVdYViUFHaqv2u9N7a2515I9BG5jLPZEnMUQGJ4Ee9aqkXKpW8pZGwWLYnCz/YP/eF+/bf
tG/uR/uiX996+MaHR3+7f7ro8uWqjVShWEIQkuFIwyal66ZuEckJvEG0UsQx0GK4Zs3IiAEwytU6
JCtfClG9VhWSppxcKkvPvGnWvqJjHoE2MhcOmUhGhjvkR4SZRiXrkHh2i91utSfKjZ/uPPzVe3f+
cOfhe8aniw3LpjXrBgNdtnJj89DQpuFhq80+m8umqmVs7nE4XA5n84g8oAGvZojSPP0ilVWx16rE
mshctItowvMGZyuGj3k6tHc4Y0agjcyFjwKOUgl/MCxJEMmfVhusx1i5tnMuvmMmPpEpzxQg5lw9
/sDmgc7r1g9eNRi+dl3v0zcNX9gfXhMO9Hm9vaFIwOWleETIkzyeebF4JAV1kY/HVG7hdDUlZCs4
g6oE0ao5trwkrULTKlVaMDkPx6YuuXRuPwbIbJWTi4zpBQLRkPzzAnMZZB75I0PwiqlKr7qET7km
C8FDWGoG65mpXmL9IlQzWajd8tCBew/N7ZxMTiMQZgWPBNEPPGwEL6SQGHaz1yg3vb5Q2HPx2hAE
o9vu2ncI3e6LWHdRlgZKcBqpieVyMIRePPCrls/pDl+0tnvDUA9CplCgKxb7/om5w9H81v1TjxxE
bxGgmiVOws78G154xZNXd+eTOQQN4H9C/AmXjRID0KIh+hBeoMHJqnOlPJLpmDcHmVrHxEKzvaDP
i+AegpAII40nsgen41AmJ6PJbK6Meu+QSdiHMhgTDYoo8pRrKK7lAml8fX/nSGcATki/teG11OEX
Ia+BPd5rqBnLutCQY+JnhX5XgB+TIk4dsMoaYbCPE5/qNKNhpAYBouKKJRuJKy7K/9TcNCoRZbGh
lF6iVJnJlAHRqVia5cEQaPJ54F6FqBzqCsxOR2/bN7Xj0HQaRTLsHkn3wXODuAYsqbnW4JljeVIH
PVoYBCe0UxQw4alwEJT0Q/lxLCgMA2Ffru5KeBBXBTVfibJKsAcXqgVAYTKg+An8eaAlBN22EPCG
rjCwLMDicCAaD53C7nO6vYCuww6nIup7s44OaybVUT0Jjt+AB3oTMmNgfyzWd+en6spm9GOAzKMA
bJFye0xQPOrG2utZE/K4olPjAKsB/nM+HbFRjOpgpBOAFbxnJn373vH7Ds5kqw5URw4465et6i2l
U2PR+Fy2tn8qmkB+aKN+ycbep583eM0F60MutOyygiWsulciX00W8nlUbbS5M4XqI4fHZmKxVZ3e
a89fc8HQAFRGSUizgJl4YCa5fTR25/ax7QenRwaCr3vRtX1eC0o8kvgDPa9cnE1mknAsgZwDMQUr
CGxyduUiU0inG7nE8PbCcSGBe6pzAAA5/Y1CuRHNlvdMzkXzlVQ6jz62IZft8NTcZBK1KBpAg8cD
Fa466LdvHugY7ulE3R404IMiiha2IOC4oZqiuB6kAfgSKOiMVBCGPtjtALAkW5kOHQg7Tabi+kdh
JlwOtVJYZ5pMeM2dodAjrjSCgvEH0xWdNGoQpPmpTPFwPFu2AoW1i1d19npd+YZlMhrfdnjugQMz
u6dTMUJUasyTxqEOWRs73TssyIXpDLgGPLYOex3VOItWez5Xgo8hmSlOTMfQ3xyRcBZ0I6FFtHvJ
e2JqIhduCYkJI1oL6EGAeyQnG/Qt1GAIhzywTTpEKvqASbvVC0c/aMMOm9/uQJFrcJJgsuJq8ACQ
HQpCFaRuCI8J+jyVEc3rE+1OziailDJcpet80HThDD6zkHnyoLjgSBKV41jIkipGiDwTTCEhgxoD
ZbiwVMFAmpLFMl0sI0A/li5OTc9cu3nVRT2d+CZrscBkmphJ1az2R/aNTsdjmNG9XUFEai9bv2Zd
BBmHLA2AK5CSxzw+fqcY8UdLwmo/5BWNVHUKU+rg8qBmj4MDgMADEuaq1Qjcgo0KpsJwVxD1r7Ap
3Lj7xqajyYQPbpcaEinZEQiXL/KDbYOkrR/ZRerXlIgwYkOkhJYbVph20WQ6UywFQv5KwzU6lzgE
6ne6lE/NPePyTddsWrUm4kqk0gcnZ5AcjWyEHr8vAvw7XGxhTQct01TR/h3d/pCWINdurPw4DyZ9
RcAo7k86tJg7AEtVzEEFgUSiJU1bUCplHiRJCCslJJ3TNpXO7ZmNI3kEWZoDAW8fHMtOR384gGHM
WSx7o8n7DsS2H57bG03joYAjCpi6GzCSHUUQZ8vFc1d3vfKaC68YCoEalWVTuXrAjloHlvFYcjZf
msqU0IMwls4jfXgmlY3m0OEcajDblWJLRqSNa5crlYAAvcYsLA/Wri0E/zZcX25nV6enNxTo8DhR
xMgHmx8RYAhlKBooPsg+hrw7xK1Rp5tRcRTaBz6RHc7FFJDm8WgRGXNTnST0YCw77Z8AyBRB3ESd
/CtTt6lT6CwWzIqTgc9G9iBqZf7BHIVyiEI1XT7nQDhoAFeGE9uhYch4Ord/fHouldm8ZuScDn8Y
7T+UMiK+FOrQIkD0AYhja/78OqEVqKAlHp7L7Z2N51nnuo4WuSiRjKIqqEAFK26wv3soHGTnEMOS
4YXBJAOHEb5Q+H3YRQApNaKlweRhhEZSwsgWpM7GRtfRRGr/XDJRLME13BEII10k4nGEmMfWCPt9
cP9yblVQxDsLywsiFFcOSUzWHr2pDA/CmgJK6cBUV7Yxv+aHU0q/SPdvMi6lWJdYeRo05i6Upcoh
JyuAVwiBj6t2grKFsiMFhDeRU42cBnhsvWwcaExdrF+Hk4U90eT2ycSuqQRq0V9z6cW+uuXwzNxU
fKbf43japo1+L0pg2ny12lAQ1aX9uit4FGkYushHqlgmEpkDM/F9s2msuWCwIeU4j3baKI4hkGSb
ZbDN4JbCBYn7SZqyMUEVbgU4df2g2nrcfRF/X5iVUHqxcFJyNyBCoTEBe0zdFp+enJnxJQhMmKaw
wOF+gFcJxgK6dVPD5ui1zMSFAD3JyFxo0MpDU4ltvJoGsnkRRqBCN+ScWu6lcr/pFFp4xGV3WPCh
XoVxZNUi5pd8mVgtEdYmB0uN3IW3YxQpad6U8h2ANDFe5cUeB+IaFvyLjWuczwjmSPhINT3+3+KU
nr9/SK04gpVIhnJ6QtAjwUMuV6OorIr+BUhQhOd3flkxr1x6p7HBc0MdnkApJgXcOZoGKfFD0hiA
VKhfoDVDYOAkLJZZZYYUYiwIvsTTOXQLQ0R+qCPYhXLQC52HYJIYRB30D0eWI/L9GDdsuZyWcdeB
lpITBCq1XIkt6tMQrYVrIfmADA4TFbJems/JGDy5wxbvvI4z1Znq4WgMdYb6urptdk+2VkF9A7QC
yCLEUy0PBoOD4SBrWEuG39Ipg7GKVy2zueJ4Mrs/nj0cTcXQz5Vl3lCuujII9zUagkRTU6nCdI7J
FXRpVet5rDSwc0GypanuGekJXdofugAhXY8byTSSqI61BikD5IAx6EU9Rop6CKMftbCljQyvCm/g
JIabXWjkzYFrjg7+vu322/RTIa4bQ8bvl0K2ZdhX+LblPCbmmjOzCTRzm+YcXubYBjIXPbQVXsSp
20zVEUN0NGsLKfZalpiF8DYWK0bIjb0VqYbhoe5+48VogATN9bmITk5SqSrk5mteWsmBhLnB/t2I
RbDyBoqgoViEBPGZzlGrQ7NqmQsUaFoEBCmRoP+hDTxgBK4TCtspsXm5ByJpa1rO42gvhacuUZIE
IcEOwpVAlcC10CYxO+FwkvVy6QHnVx/RdlTIGJuxqGYdSQ51KOoQdrAMgaFysTiCghv08zGxW6Ty
wsMuXHHVmoiXa3A3Tcwl5nJFlOC8cCjS66fKgCR/eMhxZgR4983R2Ijns4cSuX3RbK2Ue/lTznv5
k9aHPGgG0QBVI1EspIo1cT01XHa45eD6BTjhxudvOJqxSEB4Uq0VVgOfLwqosi0jIaxyVKfPSZaZ
ix5T6wRqigqNJevQCjG6+WJ5YtUqF79MqWdIvCNqACtG4SJhaJ5gxQfQwE+T4s9Lnp9ABvpapPOi
NVH/nD+pcTVGOd/Wi5H36jcwQMvqni1T0zhQC5dDJI+KZjJwmbTF/MmFGFtyw80Bwb96IwSzVrpd
8Zgs99jUucK5xmTPpneUCFNSrQZPSbhg2NXIDjDHR97I9WgMn4FlA5b8W00Fylrq26BGITQJpzZy
d6EBI5KBeJYY2kqol9eCuzECdPOFzeShZBuWaVinWdSYL6zq7tjY2bnoxrBNEW78XPHh8VmvzXLt
hhGErfToeISoHJ/I5idSyNZhPXswehks5dJDXhmTTCk8LfA1AKJSpJAMfqwwvEmmuTHCjes/ychc
NONbp9/SB2xsbAgZCWosaw0vOCif03Ejc1GQpvWSllz50SckmdOqFcuEOyJtq6m9UpnUI+puTWgY
a01zyoiEkUGQmScKH+eeLgTCy380pBgcQzmAxiuaYlz3bJ32OJ0ZJNcCodLXSCxVIupRT3bUQWqu
Xgs2alkCzOdPA1QEKl84qRZWaW4p6QfLStPmgQ1PsNydhizzhSLSVUKgDuuw63Drq1UW8A+xgsVh
LIvI/NfRXH5sNnru0CDIxvgKiUPCaFe3hLGZno4+LdEOzAlMhNfrU5l8NMdcf0CWPefIziTTSSI5
7LYIfRtKuOa6cndNuhEa2n333K3Xe3K0WZ1oMhhaZQM3IvluzReiajCAyJVe8LzAz5bE+2VfC61k
4YLzv+OYNq3IRKoZjHVNYGo+rkeb9eblyaSTupL4d/6yQWHDBAOftvVApKFoWqsIBp0cLcjEw1AH
kOAPbfmkBlZTXJgHX3aeLzNecJ/IHbVqHzLtKVweZdAElc2X4d9f/pms7FMB1aM9J12S9aX+MEWK
QYVtte6PelbmjkoUzMTXvGawPDJR6AL0Kvp9GEcR1OlFqNbRfAbS9UMxKcTueechMgc1E1eeqXkX
sCSa+bqUorlaI4v2PdUauPIQ4CBXKTkRTwNAhUUKTZa9vtlVg2m/AvvG/XfdoQe+6uot4pBo8t+O
z86cHwtDNvDQ2/Yc/M1tdz7wwIMHDh5KIxBfq6HRYn9vzwWbz73qqqdc+ZQn9YhznEGxZbEpg/TD
n97y1a987XWvf+3v/c61WL/ErD/ml4nMeCb/N2//+3/8+7evHRkScJonfnRwivYlWpQ4Xe554KHb
b79769Zth8am0esOKiTqEwwPD1x4weart+DmLgU5U563BPKai3JTXmihavuHP/35u+/b+va3/MUl
m9aL1DQ76+gzV/lpvfFHP/nf7/4QSg+I7SJV5mccS/wWci954fPe8OqXQfGVvDZTAOhs5+N94KFH
Pvixz0HLLRcLUlcLaWTOkdWrn3L5RddecxX6gMn6IFe3AlA92gPgcfYcGv+fj3wimUyjRT0VbKYb
2YeGBp506UXPeNrVw33dIrWocJoINe6Kd2zYCN++8aYbf3wznCyY1hgOrlIGgZpQRn7aK1/yklf/
0Yuh1tLhsrwdvvhiIcv+8V3//gcv/oOrnnwBTQRjQhn6tTrxmj84m1EYH//uPnjoPR+44eJN573l
+jfCIqcflmPl2Lpj7//890dCkeDfvPXPVw8NMvtQi7RJm3NZdg2hgqxJ8KjgqAPdnz1ESSNFrJp6
LKuNCbPqwbvv1Cu+4qqrRHacGJ9eFg5TQ7P85s57nvvS11525XV/+/Z/uPe+hzo6ui590qVXXXXl
hg1rE+nEp7/4pRe++BVXXPN77/rPj+4+OM6otUHVWTSIfFIPPLT9W1/7/APbHsHNoCCiCv3jfn3z
Oz/88mc+9aWvfxuezBJC3aovtQiMoxxZBTY2+Pb3b37q01949bXPe9d7P7Dz4KHB4b4rL7/kyisu
H1q16sDhiff/90d/5/de9LSnP+/jn/36xGyCJT8oCqRwl6GdKrx5qJt+csvXP/P5Q2MTkkCFQlMt
MlUvRc54z4Pb/u9Ln962fQcL9rIEOqoeo20kf4A0fIQSCaichKRnyQRVLU60LCN+Yzl0+PA3PveR
W357KzrwpQvoPIDA7PTXv/3tl7/i+iuvec5HP/tVMGdkZmuN6RN5GTGSqdm5T3/ycz/86a+SuTL4
T4VCbnYu+t0f3fS6P/7Tp2x5xns+8NG5ZEFNafN8LcLLuIA77r7/G5//1MN79uHecIe4WRwI78o0
BpFHj+Qbjge+lUInMmALn6hOytbXrbfffcN/f+QzX/lGrogKezAPm0sShZZ581LwW9INWbhGcv+j
M9H//cwXb7zp11kmEqDnHT50PLB774te+sovf/WbG8/fjJqRaNTLvFLuSF867QQt6yB/IPQVQLqi
1zMU8q8KhwZDAUTaHPDQsYM2iPTMGjcvlQJT1HtD37+1+TLU/5X/IwnHKB79l+/8d4unt3/dxf/8
7++/4657xyZnZmOJeDIdT6aisfjodHTrjn2f/fK3nvG8l3s6V//L+z+SQhpiCSlykia75PXvH/q4
xeLDZrNzidgcuswXpRjxSl5GwyJuCt5Og2VKNl/5LIslNLjpylvvfiCZZDncIx1I3P36v77h3e0d
n3nJq6+3WIKbLn/GBz72xXu37RqbmZmLxxOJVDyRmplLjE5E73ngkf/5yGcuvvLpnQNrv/qN74JM
h77Ncs3zl63cbLxe+KrrLb7hb9x401wU0cYExItctLzMjRqNf/7PD9tskfd96FNTOAUHdG56NjY9
g5+56ZnZmdnZiYnJmakpHAEIbY6kMBGbZ73xx78Av/1Vb/7rQxOzWAgmpmfR53LX3oNf+8b3Lrvm
uej989o/e/vExAyxvZKhfZRtSLy59d5ttvCaZ77kDXsOTx8anxmdmRqfmd57cOxbP/zZdc9/ucUW
fv7Lr993cBLgaHmgevL5O3/7P/+nzdb54c9+Y2J27vDE5NRsVH7m+MN7n5ucmMYrmcKjLBxlYggL
gocGa+o5L30tBjM4tPnHv7w1nowX8qnmIInneOFL55D0BWlgmXCERp7+wlePTczOzkTxyd33bxva
+GRnePVnvvLduVhicnoyFo/l8zn4x1sPY9zVonnbPBX+hRhPF4qothXN5E0AQrqS828ULj7utZIa
iQ3p7S97zRs/8p8feNVrX/uTH333rX/x5rWrh1A2I4du1LEogJVJJir5dMRne94zr/7EDe+98bv/
+6IXPiebwXxCvpH0s1nmxRUH0M1mUtksOohz7q70MtVok2vDm8999ZuPPLB11cVPmjw0+uWvfTOG
nhjZtMmzX3RMc/2mGJMj7Nh7CKL+u//33b/713/6wXe+/oZXvmSwK1QvFDJJNKSbS8Tncqk4spv7
OoN/9Acv/NLnPvmNr3/58idfCshCHmpDAZ5i0QIuZ0UhiHQmg5tD9wGVIdxKI336kqU8l8uk5qLJ
+Cz+yWdRcjRdzIPfhr4hECJw2KPyBqpAsPehoT8t0AVwBAfEaiKOEN40jlAp5L1O6zOvu/LLX/j4
M178oi9/8jOf/OLXkqkMhnqlw3vE7VRBhKKNMt0ldiOPTWex7mSz7kb1aZdf+smP/ffL3/j6m779
v//z0U8kkilI/uZ9Lhkd3rc1n8aiPpOKzWTB1culS7kUfoo58CARsGAVQTB6pbbQESeGOZI/+tkt
P7/5Fz3nXpBJJD7yic9HZ2OZNLJo5LzLPRpRkwx9Umq4od4CrAfwFyw33XL7i17+urnozCc+/sHn
/+6WVDyK7gTIHNcqR61jo0rBMrZ+U8iDyscMWBB3pTGhvjDxhDJ2vNqsrg1iTln+4p3v+ul3v/MX
7/jb//zXf+wOemLRGUw38Cq4iuvJ+D8LgaIMAP5dt6rXg0JQyDCAKoeajIYut/SBo2peCVMXLy6N
LRL/KHNIKZ1qwgPbE9PRT3zy870jQ//13n9Zs2Hdt779/dvv2ZpMJkoFsO6O/KI/mA8HluSb/vQt
Ox/a+oEPffBtf369s1FKzE0DFSyv2bxsfbzlSimdSSLh+JxVQ2zmWMUH1FOOoIfz+qCQ4dZyOfKN
5mPyCy6KawNraOXyyCUpgAkvEUF2NqR9JEES5iZLMwQzqr/4tlA+u5rjEdBFHlxALshYTJHC/+d/
/CpnuPP7P/3VwcPj2XQaD+NoY3IM37FqJUYJ/5XyyFchgy+bToBn+IZXvzw8vPqHN/1036HDqNbD
OmlNFbz18Dp1se4gloEbR2EHhv61CKZx+4zKal+Ho8wfdhjCOJdK//FfH7Y7ve/6p795ylO3/PSm
n938i98m0pALeTm5NJJZ7qVuczUQ2PioYbvpl795wxv/PJfOfuqTNzznGVtS6SSnOaYxHjVSxQ1Q
HHWkllhRUKYR8JzfR3oNHj8yDT6L1fqL39zxtc988YpnPucv/uS1lTzkG8SRglYCVpKAJyUsEFsm
ZR85SvVSxcpSwiypxfE9stjE3WLuZjLop8I1ciVzw5A8hryxfPqLXzu846HXvPKlT7t888t//7mp
qalvfe/HUIcSycTya60pruShfe4r37z71z977Z/88R+84NmpeAydJ+nUIDsUtbKYssBbE1seKdb4
QeIi2HWM3mk9Iy2IvPzLCh0yk0X1H9DOikeeXiS/iW+PlAP0jPV6waVFDQM3aiB4ESMTo02mz5EN
RdBkYKqhFlgZmcHs1YOsRfSG6YqEnIFgJp2ORWfjCfB8cXdHutqVjH3r7GIlIZrEZWSDIZMKWego
PNMALzgYCKWgv8VgDMSwKpGLsOQldwK+sDSvIUkQqc9INEDBENScAhEZJH8UJFuYvLs8tHikr3zz
xvt/c8sLXvjs51235Q+f/yxrtQxz4+DoRCIRZ5nFVl7ake4SBEOv7+e33vm2v/0HVP397w++55lP
vRpVm6ROkwWiX4tIrWT0Fo+vEDvntSTk8XI6sSjAEUNzj/4oRNf6/k9+gYLoL3j27wRctVwuiSJa
IsCl6oZUi2NLG4wr/LMeF8peYmADQRR8cqlr5Kg3wyQs6ISQK1IMcmXarITKtNfJgbGpT37ysyOb
L3nR856VT8z+7nVb1l9yyc9++ou7HngkkYQ3JL/8Pcqqog/++z/+Wd3T9XtPv6ZeyoJrggQgWPnw
ByLBUNCoxSJxd0CMBz+oNYOETWYDC5vUcHsuexppyANM5sCaK9EiXWYreYwZFO+ZiY3BRJyZm42n
YqnsXBIV8UrTSVQIQuTAIwnkZOwf4ZHhyOxJiYUA8pKlDzyoSxLo7h+664GHihOj60f60cEO1VIw
zCw2e+IvCcgiVIBIFRROUJAwRL5guLNvaOvO3eMH9q9dsxrjm05Tl4dza6krTrR6K5LJJ6NzY1NR
WJvReBoV/WcT+J3BspovlYQxPx80Xu6qMUHtc8n0B274lKez/7V/9JJaPvu0Ky+76Oor7rv7rl/e
dkcskcjlM+IwW349agaWQab17dl78F3//r5kLImMmf2HRufiCbha1QN+rAG9FS5+J4BMGQx4FSwO
b29nRxYeHXIJKfqZH+xyAY0+LHR+fxCvUDASCUQifsASWTRYGZxskmgYzEeeDGwXi1otpIUuaxAs
v9waK84NH//M7Oj4m65/4/CqVahbOzA0/NKXvgQuvu/d/POpeAqL9pHksI4dDPHxySlvZ0fA7yvk
YBIjD8yoGYTyAEAiGsIG+IJzLhwKRcLhjkhHRwdSUUIhECR1JTyCIDMiIJSHol4uug+9AAYq7c6v
fvt7f/q2d7zlHf/0+j/9qxe89FXPfvHLf+/FL3/2i17xjGe9ABqBNxRBNQHU15Ig+9KHLpOczBQU
MIDICXf2Dvb29nsCkS/87/fe9e739K8954XPfgaNKOh2OWQ1w1V7Yi9ZjljnyI3yAd6Ar7O7q6+/
r8/f0XXjz3/1zn/4R3+n/w9f9FwrYh95KOkobcQSniokzRMz3uh0fuErX/mzt73jr97xz6+5/i+f
99JX/96LcON/hHt/xu8+65vf+CYGHYX/cOMmPpe97s997dv7tz74yte95qILL8yXauFI1+tf+yqn
L/Cjm39xeGwilkyzusJR3dIQaaC3z+zbg25Ib/qz61cPD7/3PR/4zJe/ns2locgiN5X8HlmgWfxi
BS7uBWfTP+Yd1fM3Qbl5Iq9wwGupVeJQ21G1uVJHcyemt2IWUHwg+1SI+MblcqLufHjXn/y/v3r6
06/7p797a6NaBoaPMrJmBO9Y1iXjxrft3P35L37NEuq6//4H9u58BIYA+P5zyZQlELzj9nvuuee+
gY5QJBLGmrHg9o2kET4shJuCwUAFFlEaNXFhzlWRCgy9DEUMgErUoWRZgmYhfvMgWGxYJl4c+qwW
ufzgthDIlnuYxvOiVlG/+ilPOnfjGvgfAv5gR3cPRlVo6xj10oWbzrVWi6x0iVJ6IqiXWalszunZ
+G133VstFoIdndv37J+bm/vBD3/861/85vKrr37n3751sDsQi04imQnRGO2ZuZLpdaQ5gyvA85+N
4Yx3Iy3NGwp379oFvfGnv/jNz3/x6/PO3fDOd7z1/LUjsakxrLiorYukmeCSBYzevlr96quuOnfN
KsRHsJp3d3VB1IuGBSugdOnFF2DdFBI5H8GRpNDE1PSnPv8VtKMbGxt993vej7gT1B1o2TZfeOe2
XbfdcTcC0VhPOyORo0AAB4e+0TM08Nf/73WQ9hvXDH7uS//70f/+GPwDf/yKl/T3oVSJP4BFGj3U
FhFqljvoMvPhCFPkRJG55corv/yJz/3m3gefetUljWqx7u5ElUWUnoX2unAJYcQe8z2VTt9/293d
A8No6sVWfQLdI4wLdxDqmiz82iheVpjm7yXWtKw+eqf/8T8fz0bnLn3qldNjhw8hwxW1ocCldrk3
nbtux4Nb/+8HP7nk4gsh3GCz4em2zA2+1TMBdFdf9eQHb/31/fc9uH5kGMLN57dBWPIxeJGjx0Z6
LZdEZUG5wVB0sZKqh4yKgUHG0Q3E7aglBA2mOd5IJh8/l21MfgL/ql64aeOzr7s6k4x39XSuGlmD
swtzhsssZifqwWKorfYqHH1SUbfJtjNZLQ731ocf2bdnB+JzVYstOZeADbFly1M+8N/ve+5zno18
30RsJhLpMmMPJ7BM8+4QGEV2y969Bz72qc+xGxQYL2CNOu2rRlb9+z///fOf++yeSDAWj0Y6u6Eq
sI8CNJElmgXZPbX6ZRdvftoVF+dS6e6enpGREehfsiKxJwacVSyEi4a5JIYbo2dMlJajffJL3zq8
/ZFNVzw5FYvNHBrVUtOwzTevW7U1l4IMv/wpT+oKh4I+P/LFl9641oZkqb9yed1FF160eXN0ZmrN
QM87/+YvEBn9wqc/iynxtj9/Y08/rBhm1Rqajs7QxZlLxzyux49MnUF/+ILf++CTLrv5+z94yiXn
P/3KS6Eb+NEtzI9KTgu6rJqjz6XdF8DTgrOO1ShY6UqLTS546URlN0xU+2B/6FaBL18KWpfTQ8hz
/PWd9337a9+56OqrPn3D+xBVAN8YFcixRzjSgVIXf/POf7z/9rt+e+udw6ghCT00HGmxCM0r4SW/
8fWv/vIXv/61b/zfhnPXb96wGl6TENpUsT0rm440N+UCoZHzJpVAy32q/bLMkijbKhuAYyjOG7Uz
zecps1wq2cDBCx81fvCrAJeJFN/TGv/YAuocvIbssoXQOOroGENjnJRnqOSvvPzpf/D8Z5ezaIva
8b0f/uTnP/jRxs0XvugFz7XV0Aglg8UxEAjDEoVqeCLSsrnuoKdM8cLNm173ij9A9Vy3yz0ysmrN
yKqerg6IOJjVuXwaT9znD0CHZxs9Xal0Z/Nf8RQ0yiU2LIeIRDk/WhNGp2uhg6OKgwuJJZhvTifr
3xszQsdVaF57D4998tOfGdq4+iMf+DeUhk4nUvTwW2qhIOp1+v/1Ax+56ZvfueU3d6wd6guFgl1d
Pcvdu05bQyTDlmFjNqtl3dDAu//+b/7tfR/8yuc/h841//J3f41CmSTFqlNKJ/MCTu4xwxI7nKid
2dUR/PB//RsSSt/3wY/98Ja7UarGj+4m0s6xSXjCjUlZUblgLPAQRhjtHAxTNOnLo8zNEZ31ErlA
R2qkyOFhLDTGllPN9SQwST/w4c/UC9k3vfYVXQG/pZJz1gsog4MWqR5745yBzte/6mXI0sEUPTg2
mUgkQTLSHeUJtAKpfsm56//t394FqsN//NcN9219BAwPnzcodZJ1S/kROSdMx/kHwJqQnEDNIxoP
dylKFZbzs1OWG/ps8JEmTJJXAm+TVFJnwiXr1SLkhbp70tJBCoBIPz6d4MaZ5u+jUUXR8XNH+tcN
d+P3u/+/v/6d5z/zix/+2L++9wO4L0gDHJYeGvRtF+vjBMDJhy64QCa0d+PqwY1DPeet7t0w3N0V
clcYmU7APSydbXlGepl9frboW/DSBQ6jx6wR1gEzKGyS6CgF29mTGBuwuTKDUlq72JRXRlKAxfLR
T34ufnjv61/1ypGeznoxbbOipH3JaYNJUu/tCLz6Jb/v7+750Y9/vvfQOBz1amAv9EdihhnLpVxg
A6hE8W64+JBNsnqg+x/e+TdPvubqz3304//6gRum5+AQYhBLOJTibF1mQT42fB4/MpsCq/Hsp1/9
0U/c0NvV+cH3f/A/PvDh2+66Hww44ypEdJh8a4SnfnXr7WgYhGvPFhAABDDRr2Q5f6CogvBrwLuB
2ICfviSybc0XO1Uu7+9G0On2m7/3w6uf+8xnXHtlNo26H6y6BpsQEwG1rRz14jOv2/K0Zz9zxwPb
fnX7vXNxRkxbLbTm1JZpZrG86TUv/9f3/htCC+9813984NNfu2fHAeHYtyC4+cf47Nz7bvjkS1/z
prvue1A0VmO7Fr6r8bhaIdRyusXile2x4QKCHyUQQvmvYLgjFO7ETzgCywjt1zr8Pr9UlgNLEwV3
VB/j8XStaP5JRieK06HuGFoLdQY8//b373jK06/78ic/BXbeTAqtnhpwXkBDh7IDB8GxTZ8FW5sa
Ama4tByDZ5bhvnoV3H+G6FhmCHUDscIAkHgaeJmCuintzZtAqMQPj67ceAS33PzpghfL5YuwN7Bh
GhjnbQ4u7eStu/Z//vNf3vDkq5/zu8+q5orSjZgFrlEIGgsCwlqXX3rB81/wezMHDvzoZ79Bq+VU
KiW4asWT8Vz16MwewhoBv5bbG6SzL3D+muF/eefbtzzjGZ/98Ef+6T3vH5+JoVSSLKytj/T4h/P4
tVljFrA5d+33nv7UyP+89wtf/d+bf/6rX/7m1iuveNLVW64697zzujo74UcBy3FqcvLBrdt++9vb
Do+N/u4Lnv+Kl7ywks9AVuY9zkUyU9UBeFFgRt2/fSdGBGxFdDPq6e3DYoUlEn0Br9lyxYa15yzL
iU/lC+/5r/9BmuqbX/cKt6WKbidWlHZkJwMbwjVeHwuuRQLeN77qZbf9+tbv/PBmcNDDKEHnR4lg
VQWN2W06Vh3Wxh+//PeHeyNf+Mo3vvLVr33vBz9+6pYrr77isg3r1obDITwudLc5cOjwXffc+9tb
70hGY9dcfSUoAah5hyJ3DjhoDaN1wRJKtjcYmIyUGIrtwgdoJGdK3476fdt3QiwiIAm/dqSjk+BB
zU2quZVN5224/NJLpP8a6jpiY2Oy8tFI8QVyOVHimC2umYqE5aleLQ/0drz779/29lzqS5/5ctli
/9PX/tHqoV4cHFFS9oE+sReVBXB60b+U3hlUwWPGP7vDaVaIoejRuaDgZE3p+VHXW2A/KPy66/6t
mWyqUirBAwRowhRkBUTMh0r5ogs3X3LhJkkoFe3MfGh8Y4MK8S/veX9+Lvqaf3h7V4evmE7AHEVs
y4JKloiI+n1YqZG7/IqXvuAnP/3JD35487VbrowEcZIgxHjr3audye4yUIMR+WTHRqAbLu5QT2cY
vmXM7X98x1+/12H/8ic+iZD7u/7+79aNDDClSSrHL2vIrHx0T/RJ4ExQsTxO66Z1q972/974guc9
776t2x58aOvnv/RVkATEOgetABaFu79/cMuWq/79Wf90waZNpVwsHY8imR4BvcUxAzG10L+od/Cc
h7Zvu//BBySZRsKIeLDgEzYqN/zX+9efcw5L+rM6x4JJf9vtdx3as/3Vr3/FlZdekssm2f8KZUzR
MBnuVDR5kzo5uKqnXfnkl/3h7//wxu9t3/bAuWuHsuFwZ1enPl815Q1zoYH8dNRWs19xyaaRwbfu
2XcILPMHHt7xgY/dB8Kg9DdjDRy/2z0yPPyqP3jx7/7OMy644HyUbkKUErqhNEdY5DLlgjrQ09Wz
aghtcViXQhWxpi2jGrLeUndHqHdwGIN53/13kfjDZknaQYaeDLRVeNMbX3ftli0YEKgCTJ5YYKvx
CFiJevsH0e4BNh6Kz6IWJOLL0JLXre7/x7//23/7jw/+7Ec/Pm9V3ytf9hJol9AwT0CVNcxMn9s5
ODLY1xVBrRMLKqigEg6AiSoGcldKO2MrUgkzAJzL+jN7OsK9Q0MP3HPP3XfeJmR7asDqg8XsKGeS
f/mWv7z6isvRS5DhIlbeMgx0tQuRDvHg3Xdc+4IXQGCARVazVrE/bh7iDj4QkF5QaBMHu/zSC0FB
+dJnvoDzXHz+2nQ6jEtqvR51U2Px6F/V39fdIVYEa+F7UJPTF0T8mq4Ei/Wf/+6v/8vj+dmPfzTU
0/mWv/h/A31d6J8tD/CENFqDSHF8WWDNpQqWRS2eiE9OTRfyBRfa3aL5HMp7Cz8Llw7WSldnF7Sv
kM8N7zw+xFoI0hVS06FErVq1ClFAcy2hA8cCXmoFlSEnJw7NzExnMyW4aYZHVkGycR7X6zBj8IMH
i9GB7dW6DiFGhuo2rEmBslP5LMwRj9WGEA5ijWg6C3eoeESh4FVHp6Z37doJmkR3F3ye5wwMDkkC
p1qOMscVMmgoWK0i4WBmZgqUO6yY6JCOnsY4Ong1MPuCAX9XZwc8SVC8KxXoB0Uc381oLk0SyHk6
eWXu8BdV3Fo8Wx2bnismZoqZBFzUCLeeMzLsdLEclzHJRBtFAfbx6SgY3TPTUwg3hsPh4eERP5Ux
enxwKCjnHejagVnOoqeGi1KNLhSfwo0i1jOFmEl0Op/LgJsHubBq1UjIH0DsMpZIjU1MpxIJyIp1
a0b6B/oRIVyJ3/9Iq76aV9DoJpGEMDtdBNm1WPD4/EPDq3BkzZ9ggUtt5QKjbWHASS4b5CrmJGNu
oHLg1NjEDK68WITyiCFCqErd3VjNUIA7EkJddScWXARrFQS6uOENTCSWvEcvBDgzcmkIBpwTTx/W
gN+HkrroKIc6lexZPzUb275rFxxrvV0dw0PIGlqFh2beoPjnWMJrYi4NMns5EwO9MRIMr16/EdMZ
KyEIX/BTTEcT0VgCDZehcawaRr+O7nA4yBr2x4LMpQC0v/vd78aljI6O6gXBPX2koV/+cxXbwCIL
CNvJFkXUyIJqxc7OcHCwvwc/3R1omowqkEjfQc0Ulh0XmhDdGdAfMDNah0MHFzcJOxsUMlYis1vC
QR+ce2HEYlj3nqKGARV1mS+sT4NFFJFUlHJmPTTphYxS4ghahiMR2Kq4SKgxODj+ZURSBBqbgLmp
WeE7FRpiYcjAyi+cBjIfbxh+q1RwepSh7OkIDff3DGEtBTbQcbZWQeQNXY+1mxJr+jOhndRETnc5
avNRAbfoioCJwdYn2BKMnFA4rA7G5un5Ly4VwhCNeB3WutPWwLTCIDAg5bLzB+5/QJ4xF+YxSe9q
8+INLRrnxiTW3iA4IO6xp7sHCEfJaakwju6AGBy/cJnYrm8FxX6OODt05HBUtNbi2gEeqdWK2G93
dzesYnKjJB6PQV42DtwcbA4Bb5xl0pHHiDp0FvgXezojQeg98DugkBnkJFkVXGMxCYxSvMZT4+Xh
BKw8gjIHoD2xHyC8ZfZAEESXMHxOwtyCaixzwMtJgHFgqiQof6iY3aI46AzAFPOB14VFB67KhhV7
wezFSOJE+ApLLx4DGvfiOYORgqtmnFt6qh2TArIUgCeMTGOdgrYJciN0FyxgJqERBBfW+ZOeMXQf
yvTkizndqN6JcqYwbtBirsVBZyrohg+BvZoYsseoYYqzh5sQcwWTfCiq57TOF+1uY3KdwZ0DpQBD
uYi+L20z2RBJbR5w61qUqwV6CP7A1WJSMYYp7k8sz7gKnEMb6UgRUx0I42JMcqIKh/nLEx8u044k
g1ZvBCpWIIA406K4LoWs3C/bt0t4AMPFcDaZ3EwX4P5SS0abDGrNp9Yr53u9QJ5IWHKApY4kVkOM
Ko6MbzlTsTgJOE9EbKqQwKjI5bF4M44JgQnNyJCZCz3YR0K5sDVwx1gGcXm8cfIrWJRaXs2DsxOU
/r8kJI69NPtDQr9wcell+DnIMkY6TBw0WEgoVoCifpwDRObCEeC2JM/zh7tipEIo6cX2SyqTOJK4
QjZsqVYJeF3mlkzLowu8U4RM46QKTpa1Fzkm8ozTRWeMvtEXlxReP1cpidovTzYwJBi7HWNSovqu
lOOXo3GMNJKwZDLpXiarnjYBmLrzngbjarlANEOpyiVc8lRaBpMbc36Yaphib8ndESf6FV7YeDEy
9XGqfJHfuugcaRCas4RbaqTBPKl5ahHMR+Rs6HBhPuMNjmCug/hQwCmqoMxLWViPH5kmeUifjnlY
jP+xWrB6BPOlN64RHXMK4b05wuYZzQdmzgF9o3MAx1kED5lFhnCjwwrqiKh+SzeT2+HHjDC13JGu
rUrN02CmcsVVATk6Glu/PbXIxJlwlTrLldyjL4Wi+afCSYdbx0KHb9FLn4HuZc6b1mPqjFx2MpkP
Fd9iLuqKu/QU5oTWUxzJIWHuqEuPXrOeV69h0W89lHo4TPAve3c6vcQkXYbbZU5EvWvzgOYgmCjF
WY508XoQ/a3rprlI4UM9pl7GCcpMc4Rbz6iToVUnWuFkNRe+I924XvORliRzAgAtuF9gSTWFZaeZ
zlWdusuuzq0rgorW1jHUNah1JI+yUB7p9pci84Q9QEtOpcIKvyHc2a0YIdiWRAq9SV3tzFX/SFxN
PciiVAwzHKwL3rKLnO6o4WNdwJaeQj/Bb1Z8kID7sodadiiZ4iW3psFYvSSdmroMKVzx/ig0VBxE
8j/YlU+lytLlQ4+ME+kg6AUv2gyf6EqH30d68GJ3kxO7dDMcFgfHZeh8OspBVggq3UzHHwfXMy67
Mh79gObT13zFpRsf/cZb54AqL8uOnjlXMQgKziONgE5FbG8uuK3PXQ+uI6lAPaaRPMlVLZcdWRCm
EbfFbNNvcbn60j/N+dc6TOa35vxuPXLrt60w0Dmqz2zpEcyTKjyWXuqiCzgKhJbdV3dvvbtWobH0
qhYdxNyg9byt8691xJa9fvOArfNj2Sey7LnMLfVbXT2XHc9jAqT5lM1Hr8c8+i0sOzjm/NFDLbts
HQnw5hnNwVn2GszddTPzLMs+Pnyoi2Pr0GEXAHVwcLD1UK0zc4WjdzqQ2cJvNKC4FGYn/qiWncTz
Q2bE9haDdvEw4XvxE7di20TsESeT7NU6BVtBstzuugNP08IHWuaRtRx4/viPCryWs/M+mgEEHmzp
AY80URbNUf55NDCt9MDH+qCPtLIcad1cZv1qPtPFizX/PtptLRqBpVdi4m3RwGBHlUPmkzqmVV5P
dDqQucJF4izczIh1toSpNHZ4BIajRGaNADzv1oy3LfPQm87CZjxV1rTlhsgU/st+u2ANlD+OL+XB
IBYaPDPzVBrIf+xfRxnJI13ccexyWu9zKTKPnzd7Wi9cTrZgrTK0D0OdXHQxzQIBWrlQ4hzyWjhz
UVsGP8ZnxneypZ5ryS5N6rTkiOjxFmwk+7WIRakGJ3Ub5JgaJtc7Md4ZgXM5UBO7WvjOCMPM76LH
Nv4+Ej7kZowjNW8Y/yrp2zzxMqp988rkK7lW4/7M3eRjafwlvVHIuZeL0ds7rZNBwlaLHtD8w9Jb
0JFtXpbU9JAne3qv9ESG5WxC5gKjwnDL04ZZeP98BNRo+HC03qcmgizdEoQTaXMr2ZLGUWRLPtrm
UVunHR+rHlKfPA8p7VH1CmQ/SaxRXJFuIKkR+rV09DG3lHf4JSc0CpTLZywwxP5RknHC5C7z9mTT
Ry8M29yBIUQ1Hec7fagWunB66ge6l3FV7DhiDIKRdqj3S8oE78gsCW8Q7k4SiXuFE1mGa8lzNx6c
kc1r3I2x7jY13JUq4Su8kFO62dmETER7wbNFyh5qyKXzBbj+Dh48uGvXriUPyXgASGNBojbT+0ql
7Q8/nMmg6MuSV4vGtnvP3j3796WymbvvvWdJ5oFixoKGsOh7qxMDs3XX3gPxeFwRabyIPvQNJ4YP
TUzt2ndA0YHiMbt37zW3ku2NbC/98ODhQ/sPoutU/u5t23F3pips7mJWCnpg+yNzc7Glt9I0MYkz
rfuGmPy27dvnEqkdew7u2rvfmK6tOFJMCiAz+dx9W7fOzsUOHDi0bdt23K3IRpOcbQPzEYzT2+6+
D6VvZUnhXtsefnhyeup0KrnoLHjXvchnYt36+WXZar3rnvtQAlju0RwqDsKuPXt37tufyGRuv+tO
KTt0drxOCgfolN8qBBceQjQ6h3mGsmuoUvHZz38FpbrnZqZ7ens7F/ZvUjce/v/kl7955223BwLe
g4cOg8O5Zs0aKbY//5L5RNG6fefuD3/sk+PjY6lk4u577kklk5ddcknTVblg1n3nBz9C1tj6tWvw
bSxb+NRnv3jeunPAhlcdStOgdao/uGff7XfdMxdLdnd3Bf2+b37n++lkcvPm80XNkmQENmCkDARX
5fNf+vItt/wSRTRRtxo1Mw+PjW1ce04zqdoQ0tgFV58ulj7+yc9uXLu6p6dnybjrRNV0HcsjO3e9
7z/fF0Vx1VQGFTf27T8AwktPd7cMJjabT8JH/abPfO6L3/3Od9OZNKro/epXvx0aGEDGs4p0Q++w
WFCl6rd33uv0BXfv2bV+7Trwoj760Y/++te/vu7aa5WtdnpeO/cfvPH7P7jqKZcjFNtcEq3oOPqF
L35p8/nng4PNG+T9WXfv3fef73v/5MQk7uuBBx4YHx29/MmXw1ujetBSqXt6rn/ZsyyNZ54dMpNa
XKMBBuaVaI3ypEuRqxBLJDdt3vzc5z1v/fr1C29VpRdq0DTKVuflV1596SWXXX31NSgski9I1cam
RceHx7/ZguLr3/pub2/vX7/lr171ilf+wYte/Io/+qNWX4dpnWBLBKqCnYpDSzSZRXNq9HxadqwT
2cydd98zcs7a7/zgh8lsPo/qjLqlMXEw5VU5tNx6zwO//M1tf/nnf/mG17yaZfj8oUy2mEpLRVw9
t0hBPSkUgUK5ojzBRS/VOPVDhFm/8JWvX3TxJW9729ue/czfGRwaQH2eB7c+RNKiqbw2peXPfv2b
m37xizf/yZv+6s/+7HnPf/7zfv8F3b29enYTlni3edNGFE4Cvb6zbxDdPj7zuc/feedd73zHO7q7
uqUy22l6gehPvXrhIptBgWN8SiI0LpvqLiodfujjn1q7fv1fv+0tr3nFH734hS9405veqAwBw2A5
zebxMQ7P2YFMXeQQmWTjM8bykU+MBr4c5SV+mqaxBE+Fxfq9H/7oAx/56Hv/6/033XwTrb55yuS8
zyVbbUQ6+zacC2nG4w0MDICmvHAYm7YZpZ3tW//33ff/9w3v++8b9u4/5PEFdARlkquJa4Df6XIg
BQWCHUe+b9tOhxRz0FNQVTZOYE3mctCQn3z5U5DLis9+5+qrQMrc+fD2pkSbN3ibeyBJmoG1pQ+a
19BEJwoZIrFtzYYNGK1IwH/+6qGxsTHk5cm8XPDQQfxHCbunXXvdmjVrP/aZz/3sll/f/9B2JN8x
dmccT0/Fki/Puu6a/s4QAmN79h98aPsjH/vYxyC6T7MHCDEKZGA2b99wJYACi5LrWg4b9we/1N6D
o6vXrN9yzdOE+lMf6OtHmg5XZt1mOUv1GLFzajc/O5CpHhzTd4E8xRr4PQuD4zpO0qCVG0Jzq2aS
SLF9zctf+tpXv+oNr3s9aGmGOaiSpVmX4pHt28/ffF62XMkUmR0L3XLHjp3zlRa4Mct4MwhitaCl
xoUb1r3ipS95xUv/4LILNzZKGdC3FZliGUoajP5VqYwMDaCGAKzicE9fci5m1KxnxQztF8ktUeym
IxhYv27t4emo3sLPfvwDn9uB5DI9ol4qCpQpdQPVxHq7uryhkNysJYqKISaLQ68DmW4w/x7aeuWW
LVOxZEFKAG/d/si9t//mogs2c5ouFG97DxxA3Yjzzj1/14FDf/ii33/mtU+dHBtDDWe9mObN6EXw
Wn7n2qdecckFDz3w4FOveSoq6BizfFGb51M1aXnpWJ17e/tAm8J75IvNxeY4UFYo6j3+AIcFzKzR
8YlDB/c/6clP2rFnXxx6DVr31uvbtz+C7JPl2ESn6nJP5Lhnh53Z9BnyX9RpevDBh2AQXn3lFcjh
WXTzlGo04mz7Do3GYonVI6uQEVrI5UbHxrBqghap00287oIMEDtq1R07d+PRTsAHs3fvr37167lo
9JJLLpm3QwRo+IVKX9se2va0a7ZsWLcOBQ22b394ZnIC6eABWFniSlF/Cv4HWh586CHkAz3zuqdN
T02Pj47FpicvvWjzquFB6fyhE52HRdLQ7t17YnGccw52ESyoUiH/J3/8BiQ0iNo7n+WnPqeHH95B
b6vNMjk1tWPnromJCVyMavuGV0ogj/SmfXv3oe/J/v377r33nu985zuvePnLnvH065aaWKFgYHRs
dGZ6BuJ9FpY8BOZ991x33bVrz1m9MBo6D1KkIyK19eILLxwZHjqRyXfs+1pRquyRHTtAT0cZKXie
7rv3Xjy+IeaXc7SRfjMzPX3/ffejGREW5R27diDvBSYcbOxbbvnVxPj4ZZddarLTjv3sp3CP08Gb
PYWXL4fG4hedjTJvSqTKohfjFeJZiafQJK5M66wGxmkDdmZHOMLC6fQOcSf6SShtUG3HnskXE/HE
vr274fvZfMHmc1avnidhG2jjLuyvksv1dHWJgWqdnp0FuTkicmM+8CASCZcxG4t1RCJuuwMWEBp3
Ycuurg716csh57UVrDVj4+O7du7Clps2bVq/5hy5PIPWa3hrpJU6FDF0TGJmFKqxiEqMtG+kBRh4
axqZYm9akeZ74NDozt07kdZ9yUUXQVvmuTWXyRAcqoVAnjRmo9HJycmDBw8ju/qiiy8c6O2RnjVs
x7p0kOOpNBzjfd3dp1n+4DYhD6PxBK4fWV64QrCT+2gSW2diMXyLD6VJraW3pxcJWWi1NDo2sWfv
bojKjevWrd+wHvlEp3p+Ht/xH/8cIJn0jH4vO6WabkuFkth7LHbW7Hd8pEFtAaexiSqEinC17lor
srduv3hfE5lNS/MIJ23uJ2uAEvvEpl7+vppIloOJoUvELTBVdH89axOZC6ILSy5EeRpHTWVaOjLH
NzFPxV7LXdsZe71nNweo5fGJrFtkMOmc46SkuqcbSysu2IFN02rBIs+6CJi/0tdCj7b8MedFkUxu
E5N6imUQZn4kqJWKkwIs3VzV7daXef6WD00XUZOeIBqw0FmXncamHU5VgK5LA8zmxvjbSGxtCQI2
DckFkdXmLtzjUSDzKMvLqQCcLD3LPPqmb2f+nPOcLWMynKLLOTWHfbShPzVnPfajLnoYOmmWmxdS
wLUZPKApKTH3ZeFDP6qE/tSCU5Q9ylxbZHzKZZkkcuO25q9VnFYLVb5lZhXPbZx52TlnzkS5kSNo
kCoPzR/jrR5veSybZz3SYD76aBz7czwpeyx7Yaq3mLcqhLyW4VjR0z0pV3eSDnK2ILN1JTzqZGtC
qynajgDgFrlgeFMNfBzDuJqCeIH8W3IATvzmJBELUFmcra+m6XvUO1ssrBcdQpeXVhzqHdGpvBSc
hh5r7mEwS1d29y17rWyHU78V1l/NCdRlyIhcnfrznroznC3InF8mF66DS0eG37cuqkcWgs2tuMWj
mZrLPYEFV9L0oS5ZpiWUMn8R8udCyWxKQXELLX4iCyXhUWfCoqEx7k/OvngUjLO0HnzlIvIYLunU
zdzFa1vzFuVOyed9VFX8dF3b8Z3HOjU1hT337dun+y+h1BzfYdt7tUegPQLHMAJLAXjyq40cw+W0
N22PQHsEZAQeN77Z9vNsj8DjfATOFjvzcf4Y2rfXHoFFI9BGZntKtEfgTByBNjLPxKfSvqb2CLSR
2Z4D7RE4E0egjcwz8am0r6k9Am1ktudAewTOxBFoI/NMfCrta2qPQBuZZ9Yc2HfD1WR/Xn2Dwck6
6tUZG1vffPMx3cSKdrv5zVfjxcuQzVsu6OY3C0H1GE96TFfY3vjR03zaY/RYjMCWlz1vUd2xZa7i
5jdveOsd8vmnn6tlbY4JMFs+9PbnHOXenvOpL73McsdbN1jfvPctX/rQFnknC8DNN1quv/76mxqf
Otrej8WgPc7O2ZaZZ+IDvWDjowETguy5n7ZYgBB53XQ9bmPLh/Y2TiJg1r/lH2+6Hsfds8+Cd3i9
CFjcd8O/f5qv+bVgZeL9TBzlM/ua2sh8jJ/PzTdcLfqr8VIx2CoE9YsF8x+wxHYEogqum99MOXbT
7W9Zr6rnSrXhI9266KuQkM95zttfdP3Db92wwfpceeHTDTsu+BAXgPkXz9p+nYIR0CG+tflqHfP2
+9MxAnvl1TzTvPAzPpEPTNloSMdHmwdNUbr89e/90JamhOUG+mfLKZoyeOFZcExjyyWfn45hepyf
YykA2zLz0Wb5qf4eeXfzqXc33wgV1XI0ZfY5n1o8RxUvBhiXBc+yt0DLUV4Uv1u2XG+5scWn85y3
G3D90Ic+dJMuDpDjFOiiMhtwPsnq86ke6bPr+G1knknPS4F5PQ0647Vvz8OUb5s2GH8bjtEW/Vfw
YirAC/44qgloQExwfvvtt38Krxafzvq33A4R+aFNFstG/XTLhz50PQCsvil1Pq3IT3UmDe9ZdS1t
ZJ5Bj2spMI2LmxeiS0WmoWMukJnGHydmAu778bfe+tZvWTbo4nDBxo2WC/Bq8U2p1G1HT07NDGoj
89SM63EclW5PyqYFwYy9OzQwsuCFBHgzB/44TvSou6gbqSkW9RI+/dznzjtl6RbW1/U3vchybNHU
Rz17ewMZgTYyz5CJsO+G14kV96UFrs7FyqxxsT9+3QY4TJdz5zZDnCd0V+sZwZQjQETefOPDUGI/
tFcsWEMFbg3SPOc57cDmCY32EXZuI/NUjOoxH1MNNw18tL5UXi32CNEInH8t8ADpxyfMAkAsU6zK
TRugYd9xxwX/+Jb1y0rvY77R9g4rHYE2Mlc6UqdsO+qOUA/naTVN9hxkoqiNrR6hY7+KlqMtELKm
b7bVm2TGTQWQd1i+9TpESi3Xb9pzw5sf1Wt87JfW3uNoI9COZz6WkTINSDSVxCWCcPnvjM0WBkiO
HsI8nnucj3MaVymX2Xy/KAB6PCdo7zM/Akvjme3aeY/hwn3zm0HesbxoQbjiMbyc9qkfuxFo1857
7MZ+mTM/h0HEE7YJz6hbal/MyRqBtp15skayfZz2CJzMEWgj82SOZvtY7RE4WSPQRubJGsn2cdoj
cDJHoI3Mkzma7WO1R+BkjUAbmSdrJNvHaY/AyRyBNjJP5mi2j9UegZM1Am1knqyRbB+nPQIncwTa
yDyZo9k+VnsETtYItJF5skayfZz2CJzMEVjMzjuZx24fqz0C7RE4xhG45pprdI+2zDzGkWtv3h6B
0zICbWSelmFun6Q9Asc4AoY2e4x7tTdvj0B7BE7tCLRl5qkd3/bR2yNwfCPQRubxjVt7r/YInNoR
+P8BJXWf3KFSSagAAAAASUVORK5CYII=

--_004_f017c46e427a45ecab00c1c59413658cglacierpeaknet_--


From xen-devel-bounces@lists.xenproject.org Fri May 01 08:10:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 08:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUQjs-0007kH-I1; Fri, 01 May 2020 08:09:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUQjr-0007kB-3d
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 08:09:39 +0000
X-Inumbo-ID: 19eb30a2-8b83-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19eb30a2-8b83-11ea-9887-bc764e2007e4;
 Fri, 01 May 2020 08:09:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=U3K9cTwT20CMFOnPsRqWmK3gfZUiU6P5nia29a8XCgM=; b=RTrPUJA5AM9B3Zu6hjKTrYCdhn
 wT3djYWuNcSex+tTrw1/f82OM1gorVGVu+1B8Pn34mE7VcvUfSqV/o23hHdthHMdzGqnUilPCig4k
 xQXbt0oBcFg46TwrkPg3ZXuxatFSuo3KEfD+Y2QPZLMFhLstKAY6OJE86jGp1gsdPQR8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUQjo-0004WA-DD; Fri, 01 May 2020 08:09:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUQjo-0000Em-6O; Fri, 01 May 2020 08:09:36 +0000
Subject: Re: [PATCH 10/12] xen/arm: if is_domain_direct_mapped use native UART
 address for vPL011
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-10-sstabellini@kernel.org>
 <05b46414-12c3-5f79-f4b1-46cf8750d28c@xen.org>
 <alpine.DEB.2.21.2004301319380.28941@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <7176c924-eb16-959e-53cd-c73db88f65db@xen.org>
Date: Fri, 1 May 2020 09:09:34 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2004301319380.28941@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 01/05/2020 02:26, Stefano Stabellini wrote:
> On Wed, 15 Apr 2020, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 15/04/2020 02:02, Stefano Stabellini wrote:
>>> We always use a fixed address to map the vPL011 to domains. The address
>>> could be a problem for domains that are directly mapped.
>>>
>>> Instead, for domains that are directly mapped, reuse the address of the
>>> physical UART on the platform to avoid potential clashes.
>>
>> How do you know the physical UART MMIO region is big enough to fit the PL011?
> 
> That cannot be a problem, because the vPL011 MMIO size is 1 page, which
> is the minimum, right?

No, there are platforms out there with multiple UARTs in the same page
(see sunxi for instance).
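Concretely: with page-granularity trapping, two UARTs that sit less than a
page apart cannot be emulated independently. A minimal sketch (the helper
name and the 0x400-apart sunxi-style addresses are illustrative, not taken
from the actual code or a datasheet):

```c
#include <stdint.h>

#define PAGE_SIZE  0x1000ULL
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/*
 * Does trapping the page that contains 'uart' unavoidably also cover
 * 'neighbour'?  If so, handing one UART to a domain while emulating the
 * other at the same physical address cannot work.
 */
static int shares_trap_page(uint64_t uart, uint64_t neighbour)
{
    return (uart & PAGE_MASK) == (neighbour & PAGE_MASK);
}
```

So "the physical UART MMIO region" can be much smaller than the one page the
vPL011 needs.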

> 
> 
>> What if the user want to assign the physical UART to the domain + the vpl011?
> 
> A user can assign a UART to the domain, but they cannot assign the UART
> used by Xen (DTUART), which is the one we are using here to get the
> physical information.
> 
> 
> (If there is no UART used by Xen then we'll fall back to the virtual
> addresses. If they conflict we return an error and let the user fix the
> configuration.)

If there is no UART in Xen, how will the user know that the addresses
conflict? Earlyprintk?
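For reference, the selection scheme being discussed could be sketched like
this (function, structure and the fixed guest address below are illustrative
names for this thread, not the actual Xen code from the patch):

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Fixed virtual address used for the vPL011 today (illustrative value). */
#define GUEST_VPL011_BASE 0x09000000ULL

struct vuart_info {
    uint64_t base_addr; /* physical base of the UART used by Xen (DTUART) */
    uint64_t size;      /* size of its MMIO region */
};

static uint64_t vpl011_base(bool direct_mapped, const struct vuart_info *dtuart)
{
    /* Direct-mapped domain and Xen knows its console UART: reuse its address. */
    if ( direct_mapped && dtuart != NULL && dtuart->base_addr != 0 )
        return dtuart->base_addr;

    /* Otherwise (including "no UART used by Xen") fall back to the fixed one. */
    return GUEST_VPL011_BASE;
}
```

In the fallback case the chosen base still has to be checked against the
domain's memory map before use, which is exactly where the user would need
some way to see the conflict.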

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 08:23:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 08:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUQxa-0000v8-RT; Fri, 01 May 2020 08:23:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUQxZ-0000v1-7E
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 08:23:49 +0000
X-Inumbo-ID: 14e139c4-8b85-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14e139c4-8b85-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 08:23:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qewJIuTpS07OrFiNhsuUyU+IP0nB2jK+VVcv/vQUIc4=; b=UbHOcW94bnqq1K3vM/a+bKnFMv
 ua4s3kppMu1UuUGY3kUbpP2clgX2EK30f9khTQ6U3oJHbEh4t4j1AqIun29H5ail+lkBbeOhMBr3P
 J5kzwxLGQ9pDySAFEz1YQbLta5BI6r+VJEgy9GKafcJFH6NoO9Z9Zem6whDjbVu06PRc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUQxX-0004lw-5x; Fri, 01 May 2020 08:23:47 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUQxW-0001AG-VZ; Fri, 01 May 2020 08:23:47 +0000
Subject: Re: [PATCH 08/12] xen/arm: if is_domain_direct_mapped use native
 addresses for GICv2
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-8-sstabellini@kernel.org>
 <05375484-43f2-9d4b-205a-b9dcf4ee5d8e@xen.org>
 <alpine.DEB.2.21.2004301412460.28941@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <4384aed5-97cd-ef93-5512-b41c0124b072@xen.org>
Date: Fri, 1 May 2020 09:23:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2004301412460.28941@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 01/05/2020 02:26, Stefano Stabellini wrote:
> On Wed, 15 Apr 2020, Julien Grall wrote:
>> On 15/04/2020 02:02, Stefano Stabellini wrote:
>>> Today we use native addresses to map the GICv2 for Dom0 and fixed
>>> addresses for DomUs.
>>>
>>> This patch changes the behavior so that native addresses are used for
>>> any domain that is_domain_direct_mapped.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> ---
>>>    xen/arch/arm/domain_build.c | 10 +++++++---
>>>    xen/arch/arm/vgic-v2.c      | 12 ++++++------
>>>    xen/arch/arm/vgic/vgic-v2.c |  2 +-
>>>    xen/include/asm-arm/vgic.h  |  1 +
>>>    4 files changed, 15 insertions(+), 10 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 627e0c5e8e..303bee60f6 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -1643,8 +1643,12 @@ static int __init make_gicv2_domU_node(struct
>>> kernel_info *kinfo)
>>>        int res = 0;
>>>        __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) * 2];
>>>        __be32 *cells;
>>> +    struct domain *d = kinfo->d;
>>> +    char buf[38];
>>>    -    res = fdt_begin_node(fdt,
>>> "interrupt-controller@"__stringify(GUEST_GICD_BASE));
>>> +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
>>> +             d->arch.vgic.dbase);
>>> +    res = fdt_begin_node(fdt, buf);
>>>        if ( res )
>>>            return res;
>>>    @@ -1666,9 +1670,9 @@ static int __init make_gicv2_domU_node(struct
>>> kernel_info *kinfo)
>>>          cells = &reg[0];
>>>        dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
>>> GUEST_ROOT_SIZE_CELLS,
>>> -                       GUEST_GICD_BASE, GUEST_GICD_SIZE);
>>> +                       d->arch.vgic.dbase, GUEST_GICD_SIZE);
>>>        dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
>>> GUEST_ROOT_SIZE_CELLS,
>>> -                       GUEST_GICC_BASE, GUEST_GICC_SIZE);
>>> +                       d->arch.vgic.cbase, GUEST_GICC_SIZE);
>>
>> You can't use the native base address and not the native size. Particularly,
>> this is going to screw any GIC using 8KB alias.
> 
> I don't follow why it could cause problems with a GIC using the 8KB
> alias. Maybe an example (even a fake example) would help.

The GICC interface is composed of two 4KB pages. In some 
implementations, each page starts at a 64KB-aligned address and is also 
aliased every 4KB within its 64KB region.

For guests, we don't expose the full 128KB region but only part of it 
(8KB). So the guest interface is the same regardless of the underlying 
implementation of the GIC.

vgic.cbase points to the beginning of the first region, so what you are 
mapping is the first 8KB of the first region. The second region is not 
mapped at all.

As the pages are aliased, the trick we use is to map from vgic.cbase + 
60KB (vgic_v2.hw.aliased_offset). This means the two pages will now be 
contiguous in guest physical memory.

>> It would be preferable if only expose part of the CPU interface as we do for
>> the guest. So d->arch.vgic.cbase would be equal to vgic_v2_hw.dbase +

I meant cbase rather than dbase here.

>> vgic_v2.hw.aliased_offset.
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 08:31:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 08:31:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUR4S-0001kF-Jx; Fri, 01 May 2020 08:30:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUR4R-0001kA-4H
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 08:30:55 +0000
X-Inumbo-ID: 12a73dd8-8b86-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12a73dd8-8b86-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 08:30:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2Mjm7tofe6HvMbgvqqPjLneHbicRlRda9SC1fni7v+s=; b=xDehKdfDtqwgTRWVxIKCjeCZ3l
 zfjmdNtKGBzZyB1jtxztKAVdbsviBcAfg/XVsx1ghF5xM8SJZOMK0Jl9ROebDdsjTFG39G0d5MbJE
 GjSCkV5Dqdxqg2fBG3d68FVRPzuJJi/Ab7RJcr4n9Jf4eLQgx+/GnsliPb3P5loNVHXo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUR4O-0004tk-Rt; Fri, 01 May 2020 08:30:52 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUR4O-0001UT-J2; Fri, 01 May 2020 08:30:52 +0000
Subject: Re: [PATCH 03/12] xen/arm: introduce 1:1 mapping for domUs
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-3-sstabellini@kernel.org>
 <3f26f6a0-89bd-cbec-f07f-90a08fa60e26@xen.org>
 <alpine.DEB.2.21.2004301417070.28941@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <77d2858c-768c-e2a1-e2c9-32cb1612512d@xen.org>
Date: Fri, 1 May 2020 09:30:50 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2004301417070.28941@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 01/05/2020 02:26, Stefano Stabellini wrote:
> On Wed, 15 Apr 2020, Julien Grall wrote:
>> On 15/04/2020 02:02, Stefano Stabellini wrote:
>>> In some cases it is desirable to map domU memory 1:1 (guest physical ==
>>> physical.) For instance, because we want to assign a device to the domU
>>> but the IOMMU is not present or cannot be used. In these cases, other
>>> mechanisms should be used for DMA protection, e.g. a MPU.
>>
>> I am not against this, however the documentation should clearly explain that
>> you are making your platform insecure unless you have other mean for DMA
>> protection.
> 
> I'll expand the docs
> 
> 
>>>
>>> This patch introduces a new device tree option for dom0less guests to
>>> request a domain to be directly mapped. It also specifies the memory
>>> ranges. This patch documents the new attribute and parses it at boot
>>> time. (However, the implementation of 1:1 mapping is missing and just
>>> BUG() out at the moment.)  Finally the patch sets the new direct_map
>>> flag for DomU domains.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> ---
>>>    docs/misc/arm/device-tree/booting.txt | 13 +++++++
>>>    docs/misc/arm/passthrough-noiommu.txt | 35 ++++++++++++++++++
>>>    xen/arch/arm/domain_build.c           | 52 +++++++++++++++++++++++++--
>>>    3 files changed, 98 insertions(+), 2 deletions(-)
>>>    create mode 100644 docs/misc/arm/passthrough-noiommu.txt
>>>
>>> diff --git a/docs/misc/arm/device-tree/booting.txt
>>> b/docs/misc/arm/device-tree/booting.txt
>>> index 5243bc7fd3..fce5f7ed5a 100644
>>> --- a/docs/misc/arm/device-tree/booting.txt
>>> +++ b/docs/misc/arm/device-tree/booting.txt
>>> @@ -159,6 +159,19 @@ with the following properties:
>>>        used, or GUEST_VPL011_SPI+1 if vpl011 is enabled, whichever is
>>>        greater.
>>>    +- direct-map
>>> +
>>> +    Optional. An array of integer pairs specifying addresses and sizes.
>>> +    direct_map requests the memory of the domain to be 1:1 mapped with
>>> +    the memory ranges specified as argument. Only sizes that are a
>>> +    power of two number of pages are allowed.
>>> +
>>> +- #direct-map-addr-cells and #direct-map-size-cells
>>> +
>>> +    The number of cells to use for the addresses and for the sizes in
>>> +    direct-map. Default and maximum are 2 cells for both addresses and
>>> +    sizes.
>>> +
>>
>> As this is going to be mostly used for passthrough, can't we take advantage of
>> the partial device-tree and describe the memory region using memory node?
> 
> With the system device tree bindings that are under discussion the role
> of the partial device tree might be reduce going forward, and might even
> go away in the long term. For this reason, I would prefer not to add
> more things to the partial device tree.

Was the interface you suggested approved by the committee behind the 
system device tree? If not, we will still have to support your proposal 
plus whatever the committee comes up with. So I am not entirely sure why 
using the partial device-tree would be an issue.

It is actually better to keep everything in the partial device-tree, as 
that avoids clashing with whatever comes out of the system device tree 
effort.

Also, I don't think the partial device-tree could ever go away, at least 
in Xen. It is an external interface we provide to the user; removing it 
would mean users could not upgrade from Xen 4.x to 4.y without a major 
rewrite of their DT.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 08:40:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 08:40:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jURDY-0002bb-K0; Fri, 01 May 2020 08:40:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jURDW-0002bW-VY
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 08:40:19 +0000
X-Inumbo-ID: 62c7ccfa-8b87-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62c7ccfa-8b87-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 08:40:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BhBFqZ62xkew4T/rAsoiVvhDXft83XsDK8eCAT+fzq0=; b=A0PXYXRYqny8clbxvd2XrtEQf4
 bGA+4Ql4JU3JOKovzHqi4CIrvCHKkDV76M7P61cFGgNRecq9hGO7MGDYz9wDJlcZopy3hWbkscZ/3
 vXLzgrXbsjBIWDA+g883GmVJSXMXxeud27ctzkRnzU+g5AR/OROvYDV6yrV/ftb1i1rY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jURDV-00054H-6J; Fri, 01 May 2020 08:40:17 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jURDU-0002Vw-Vc; Fri, 01 May 2020 08:40:17 +0000
Subject: Re: [PATCH 09/12] xen/arm: if is_domain_direct_mapped use native
 addresses for GICv3
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-9-sstabellini@kernel.org>
 <923411c5-37d4-c86e-c5a8-8acd8a6830e7@xen.org>
 <alpine.DEB.2.21.2004301613220.28941@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <ab281b4d-c78f-15c3-57d3-85d0cef7bec8@xen.org>
Date: Fri, 1 May 2020 09:40:15 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2004301613220.28941@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano,

On 01/05/2020 02:31, Stefano Stabellini wrote:
> On Wed, 15 Apr 2020, Julien Grall wrote:
>>> diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
>>> index 4e60ba15cc..4cf430f865 100644
>>> --- a/xen/arch/arm/vgic-v3.c
>>> +++ b/xen/arch/arm/vgic-v3.c
>>> @@ -1677,13 +1677,25 @@ static int vgic_v3_domain_init(struct domain *d)
>>
>>
>> I think you also want to modify vgic_v3_max_rdist_count().
> 
> I don't think so: domUs even direct-mapped still only get 1 rdist
> region. This patch is not changing the layout of the domU gic, it is
> only finding a "hole" in the physical address space to make sure there
> are no conflicts (or at least minimize the chance of conflicts.)

How do you know the "hole" is big enough?

> 
>>>         * Domain 0 gets the hardware address.
>>>         * Guests get the virtual platform layout.
>>
>> This comment needs to be updated.
> 
> Yep, I'll do
> 
> 
>>>         */
>>> -    if ( is_hardware_domain(d) )
>>> +    if ( is_domain_direct_mapped(d) )
>>>        {
>>>            unsigned int first_cpu = 0;
>>> +        unsigned int nr_rdist_regions;
>>>              d->arch.vgic.dbase = vgic_v3_hw.dbase;
>>>    -        for ( i = 0; i < vgic_v3_hw.nr_rdist_regions; i++ )
>>> +        if ( is_hardware_domain(d) )
>>> +        {
>>> +            nr_rdist_regions = vgic_v3_hw.nr_rdist_regions;
>>> +            d->arch.vgic.intid_bits = vgic_v3_hw.intid_bits;
>>> +        }
>>> +        else
>>> +        {
>>> +            nr_rdist_regions = 1;
>>
>> What does promise your the rdist region will be big enough to cater all the
>> re-distributors for your domain?
> 
> Good point. I'll add an explicit check for that with at least a warning.
> I don't think we want to return error because the configuration it is
> still likely to work.

No, it is not going to work. Imagine you have a guest with 3 vCPUs but 
the first re-distributor region can only cater for 2 re-distributors. 
How could it be fine to continue in that case?

For dom0, we are re-using the host regions, though possibly not all of 
them. Why can't we do the same for domUs?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 08:43:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 08:43:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jURGs-0002kB-3u; Fri, 01 May 2020 08:43:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jURGr-0002k6-27
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 08:43:45 +0000
X-Inumbo-ID: dd3b0c40-8b87-11ea-9af3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd3b0c40-8b87-11ea-9af3-12813bfff9fa;
 Fri, 01 May 2020 08:43:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=b+53yu1rnlYHaiiQO100tmKyI0+mZyB8g8q6Jmwu8To=; b=V9sPWVIaProBwpyhJH/YqEL7cn
 I7/xMi+HdlvFazzpD3YomuC3HhRo4RdOzYS5i5PkyJnuTZ7Z2kXiDCfP+o+x/I4xhuoF4N9lZ0pCu
 WxVzGMuObs7yfejwTby5R5SBb2jgkp5IRlYLP02Ykforj8kr0/Us1TIZyaRRfyxIp6sw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jURGl-00058C-NK; Fri, 01 May 2020 08:43:39 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jURGl-0002jv-Gb; Fri, 01 May 2020 08:43:39 +0000
Subject: Re: [PATCH 11/16] x86: add a boot option to enable and disable the
 direct map
To: Hongyan Xia <hx242@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1588278317.git.hongyxia@amazon.com>
 <7360b59e8fd39796fee56430a437b20c948d08c2.1588278317.git.hongyxia@amazon.com>
From: Julien Grall <julien@xen.org>
Message-ID: <91d65dd4-ef38-9d42-c4ac-275831acdb61@xen.org>
Date: Fri, 1 May 2020 09:43:37 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <7360b59e8fd39796fee56430a437b20c948d08c2.1588278317.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Hongyan,

On 30/04/2020 21:44, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> Also add a helper function to retrieve it. Change arch_mfn_in_direct_map
> to check this option before returning.
> 
> This is added as a boot command line option, not a Kconfig. We do not
> produce different builds for EC2 so this is not introduced as a
> compile-time configuration.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> ---
>   docs/misc/xen-command-line.pandoc | 12 ++++++++++++
>   xen/arch/x86/mm.c                 |  3 +++
>   xen/arch/x86/setup.c              |  2 ++
>   xen/include/asm-arm/mm.h          |  5 +++++
>   xen/include/asm-x86/mm.h          | 17 ++++++++++++++++-
>   5 files changed, 38 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index ee12b0f53f..7027e3a15c 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -652,6 +652,18 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>   additionally a trace buffer of the specified size is allocated per cpu.
>   The debug trace feature is only enabled in debugging builds of Xen.
>   
> +### directmap (x86)
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Enable or disable the direct map region in Xen.
> +
> +By default, Xen creates the direct map region which maps physical memory
> +in that region. Setting this to no will remove the direct map, blocking
> +exploits that leak secrets via speculative memory access in the direct
> +map.
> +
>   ### dma_bits
>   > `= <integer>`
>   
> diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
> index b3530d2763..64da997764 100644
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -162,6 +162,9 @@ l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>   l1_pgentry_t __section(".bss.page_aligned") __aligned(PAGE_SIZE)
>       l1_fixmap_x[L1_PAGETABLE_ENTRIES];
>   
> +bool __read_mostly opt_directmap = true;
> +boolean_param("directmap", opt_directmap);
> +
>   paddr_t __read_mostly mem_hotplug;
>   
>   /* Frame table size in pages. */
> diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
> index faca8c9758..60fc4038be 100644
> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -1282,6 +1282,8 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>       if ( highmem_start )
>           xenheap_max_mfn(PFN_DOWN(highmem_start - 1));
>   
> +    printk("Booting with directmap %s\n", arch_has_directmap() ? "on" : "off");
> +
>       /*
>        * Walk every RAM region and map it in its entirety (on x86/64, at least)
>        * and notify it to the boot allocator.
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 7df91280bc..e6fd934113 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -366,6 +366,11 @@ int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
>       return -EOPNOTSUPP;
>   }
>   
> +static inline bool arch_has_directmap(void)
> +{
> +    return true;

arm32 doesn't have a direct map, so this needs to return false for arm32 
and true for arm64.

I would also like the implementation of the helper to live close to 
arch_mfn_in_directmap() in asm-arm/arm*/mm.h.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 08:47:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 08:47:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jURKK-0002sP-Ju; Fri, 01 May 2020 08:47:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YjF2=6P=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jURKJ-0002sK-E8
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 08:47:19 +0000
X-Inumbo-ID: 5c39e46c-8b88-11ea-9af3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c39e46c-8b88-11ea-9af3-12813bfff9fa;
 Fri, 01 May 2020 08:47:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BuRLQvS2S2w168f8xOSqY99RHcZ9DWjTPe5kA52Wacg=; b=pNXapThVYl/U8lNmRabFhrawj
 tUGVJQSKZ7151ufUoBmdi93AO+3wFODDN94VEh/C7P4j5ANqBbnzIjFOSM/CYRhX2D57M9/LqM1ru
 BNHHqWhXkOJBLJWxezJYmIrx76dBGMMvfoXyO+sbKqm71cpB5+It11/9lWl8Se7+RGZ00=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jURKG-0005DU-Ks; Fri, 01 May 2020 08:47:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jURKG-0001uR-8Q; Fri, 01 May 2020 08:47:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jURKG-0003Oh-7A; Fri, 01 May 2020 08:47:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 149895: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=c4ccb0d0ce57264361dd2ca704f9037935f7f44d
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 01 May 2020 08:47:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149895 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149895/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              c4ccb0d0ce57264361dd2ca704f9037935f7f44d
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  105 days
Failing since        146211  2020-01-18 04:18:52 Z  104 days   97 attempts
Testing same since   149870  2020-04-29 04:18:52 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Yi Li <yili@winhong.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 16767 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 01 08:50:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 08:50:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jURNI-0003fi-7i; Fri, 01 May 2020 08:50:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jURNH-0003fb-Oc
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 08:50:23 +0000
X-Inumbo-ID: cb31414f-8b88-11ea-9af3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb31414f-8b88-11ea-9af3-12813bfff9fa;
 Fri, 01 May 2020 08:50:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YoTKvPMAknxrTnJEGDXlTHukXwmOTzocpuRBu4Fjzbg=; b=ExJMM04sMi/ZCbB5bUUivhzk8v
 6J7G248yzz0nC9duNw4T+ZbC8+cFw6e9XQG+mRhvwJaGweLA3OlVmaWa2H+tYSWZUWVGSv7qpc2s3
 u/pkhdg9EuM94i8yBwlfxzDxFwnB2413ldraOJAIIbn1e2mTDFP7xYBkiyXHTTU7/Crs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jURNE-0005GT-Th; Fri, 01 May 2020 08:50:20 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jURNE-0003Mz-MN; Fri, 01 May 2020 08:50:20 +0000
Subject: Re: [PATCH 13/16] xen/page_alloc: add a path for xenheap when there
 is no direct map
To: Hongyan Xia <hx242@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1588278317.git.hongyxia@amazon.com>
 <32ae7c14babf7e78b60febb53095a74c5e865456.1588278317.git.hongyxia@amazon.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4faf56ec-1495-8b77-8a3c-ef9ec7c515c6@xen.org>
Date: Fri, 1 May 2020 09:50:18 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <32ae7c14babf7e78b60febb53095a74c5e865456.1588278317.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Hongyan,

On 30/04/2020 21:44, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> When there is not an always-mapped direct map, xenheap allocations need
> to be mapped and unmapped on-demand.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> ---
>   xen/common/page_alloc.c | 45 ++++++++++++++++++++++++++++++++++++++---
>   1 file changed, 42 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 10b7aeca48..1285fc5977 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2143,6 +2143,7 @@ void init_xenheap_pages(paddr_t ps, paddr_t pe)
>   void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>   {
>       struct page_info *pg;
> +    void *ret;
>   
>       ASSERT(!in_irq());
>   
> @@ -2151,14 +2152,27 @@ void *alloc_xenheap_pages(unsigned int order, unsigned int memflags)
>       if ( unlikely(pg == NULL) )
>           return NULL;
>   
> -    memguard_unguard_range(page_to_virt(pg), 1 << (order + PAGE_SHIFT));
> +    ret = page_to_virt(pg);
>   
> -    return page_to_virt(pg);
> +    if ( !arch_has_directmap() &&
> +         map_pages_to_xen((unsigned long)ret, page_to_mfn(pg), 1UL << order,
> +                          PAGE_HYPERVISOR) )

The only user of split domheap/xenheap (arm32) has no direct map, so this 
will break arm32, as we don't support superpage shattering (for good 
reasons).

In this configuration, only xenheap pages are always mapped. Domheap 
will be mapped on-demand. So I don't think we need to map/unmap xenheap 
pages at allocation/free.
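
[For readers following the thread, the behaviour under discussion can be modelled in miniature. This is an illustrative sketch only, not Xen's actual page_alloc.c: the names `toy_alloc_xenheap_page`, `page_mapped`, and `has_directmap` are invented stand-ins for `alloc_xenheap_pages()`, the hypervisor page tables, and `arch_has_directmap()` respectively. It shows the patch's intent: with no direct map, xenheap pages get mapped at allocation and unmapped at free; with a direct map, they are permanently mapped and allocation changes nothing.]

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_PAGES 16

static bool page_allocated[MAX_PAGES];
static bool page_mapped[MAX_PAGES];   /* simulated hypervisor mappings */
static bool has_directmap;            /* arch_has_directmap() stand-in */

/* With a direct map, every page starts (and stays) mapped. */
static void toy_init_heap(bool directmap)
{
    has_directmap = directmap;
    for ( size_t i = 0; i < MAX_PAGES; i++ )
    {
        page_allocated[i] = false;
        page_mapped[i] = directmap;
    }
}

/* Allocate one page; on a no-direct-map build, map it on demand
 * (the analogue of the patch's map_pages_to_xen() call). */
static int toy_alloc_xenheap_page(void)
{
    for ( size_t i = 0; i < MAX_PAGES; i++ )
    {
        if ( !page_allocated[i] )
        {
            page_allocated[i] = true;
            if ( !has_directmap )
                page_mapped[i] = true;
            return (int)i;
        }
    }
    return -1; /* out of pages */
}

/* Free one page; tear the mapping down only when there is no
 * direct map (the analogue of destroying the Xen mappings). */
static void toy_free_xenheap_page(int i)
{
    page_allocated[i] = false;
    if ( !has_directmap )
        page_mapped[i] = false;
}
```

The sketch also makes Julien's objection concrete: on a split-heap arm32 build the xenheap region is the part that is always mapped, so the map-on-alloc/unmap-on-free path above would be both unnecessary and harmful there.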

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 09:05:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 09:05:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jURbf-0004fW-IE; Fri, 01 May 2020 09:05:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YjF2=6P=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jURbe-0004fR-0a
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 09:05:14 +0000
X-Inumbo-ID: dbff375e-8b8a-11ea-9af3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbff375e-8b8a-11ea-9af3-12813bfff9fa;
 Fri, 01 May 2020 09:05:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=SpO3LevLUkT4zGsDvNqyKtaN3xPg3HS6YIE+tPI4hLY=; b=cfHlZ7jcePZ7ZbQMXjd6JyShY
 ZlL5ApPAiwSWhzBca+IMsYNrO+FsrKf9cPf6+eShFjDjBd48UXssXMzWVebPDJBzTk+HDtvsh5O1G
 JH+SQJxWm3fSYHGCFtQJZIPvFpUk6Uyx6TSEvrQczmeYi6wI7bgzj1EQSjbVl0PuyiWys=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jURbZ-0005Yq-RA; Fri, 01 May 2020 09:05:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jURbZ-0002xK-7O; Fri, 01 May 2020 09:05:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jURbZ-0003bP-6L; Fri, 01 May 2020 09:05:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149892-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 149892: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:regression
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=0135be8bd8cd60090298f02310691b688d95c3a8
X-Osstest-Versions-That: xen=8065e1b41688592778de76c731c62f34e71f3129
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 01 May 2020 09:05:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149892 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149892/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-libvirt-xsm 18 guest-start/debian.repeat fail REGR. vs. 149889

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 149889
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149889
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149889
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149889
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149889
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149889
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149889
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149889
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149889
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149889
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8
baseline version:
 xen                  8065e1b41688592778de76c731c62f34e71f3129

Last test of basis   149889  2020-04-30 09:52:27 Z    0 days
Testing same since   149892  2020-04-30 23:37:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>
  Varad Gautam <vrd@amazon.de>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 0135be8bd8cd60090298f02310691b688d95c3a8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Apr 30 10:45:09 2020 +0200

    x86/CPUID: correct error indicator for max extended leaf
    
    With the max base leaf using 0, this one should be using the extended
    leaf counterpart thereof, rather than some arbitrary extended leaf.
    
    Fixes: 588a966a572e ("libx86: Introduce x86_cpu_policies_are_compatible()")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit d0234b2055b86919514381624d6ed6ec0686f241
Author: Wei Liu <wei.liu2@citrix.com>
Date:   Thu Apr 30 10:44:34 2020 +0200

    x86/pv: map and unmap page tables in mark_pv_pt_pages_rdonly
    
    Also, clean up the initialisation of plXe.
    
    Signed-off-by: Wei Liu <wei.liu2@citrix.com>
    Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 1a0000ac775faf8ef9efedd6068814d245f1dd8a
Author: Tamas K Lengyel <tamas.lengyel@intel.com>
Date:   Thu Apr 30 10:43:52 2020 +0200

    mem_sharing: map shared_info page to same gfn during fork
    
    During a VM fork we copy the shared_info page; however, we also need to ensure
    that the page is mapped into the same GFN in the fork as it is in the parent.
    
    Suggested-by: Roger Pau Monné <roger.pau@citrix.com>
    Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 5b58dad089880127674d460494d1a9d68109b3d7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Apr 30 10:40:59 2020 +0200

    x86/pass-through: avoid double IRQ unbind during domain cleanup
    
    XEN_DOMCTL_destroydomain creates a continuation if domain_kill -ERESTARTs.
    In that scenario, it is possible to receive multiple _pirq_guest_unbind
    calls for the same pirq from domain_kill, if the pirq has not yet been
    removed from the domain's pirq_tree, as:
      domain_kill()
        -> domain_relinquish_resources()
          -> pci_release_devices()
            -> pci_clean_dpci_irq()
              -> pirq_guest_unbind()
                -> __pirq_guest_unbind()
    
    Avoid recurring invocations of pirq_guest_unbind() by removing the pIRQ
    from the tree being iterated after the first call there. In case such a
    removed entry still has a softirq outstanding, record it and re-check
    upon re-invocation.
    
    Note that pirq_cleanup_check() gets relaxed beyond what's strictly
    needed here, to avoid introducing an asymmetry there between HVM and PV
    guests.
    
    Reported-by: Varad Gautam <vrd@amazon.de>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Varad Gautam <vrd@amazon.de>
    Reviewed-by: Paul Durrant <paul@xen.org>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 5af040ef8b572ffccb7e3530e617d4259a9ff724
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Apr 30 10:38:07 2020 +0200

    x86: drop high compat r/o M2P table address range
    
    Now that we don't properly hook things up into the page tables anymore
    we also don't need to set aside an address range. Drop it, using
    compat_idle_pg_table_l2[] simply (explicitly) from slot 0.
    
    While doing the re-arrangement, which is accompanied by the dropping or
    replacing of some local variables, restrict the scopes of some further
    ones at the same time.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 4f1b2e581744d5f78e4c809faf0a3167644afb82
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Apr 30 10:34:56 2020 +0200

    x86/msr: Fix XEN_MSR_PAT to build with older binutils
    
    Older binutils complains with:
      trampoline.S:95: Error: junk `ul&0xffffffff' after expression
    
    Use an assembly-safe constant.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 1df23a5ba85ca0266ebba25c7663465a8923d584
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu Apr 30 10:28:27 2020 +0200

    x86: drop unnecessary page table walking in compat r/o M2P handling
    
    We have a global variable where the necessary L2 table is recorded; no
    need to inspect L4 and L3 tables (and this way a few less places will
    eventually need adjustment when we want to support 5-level page tables).
    Also avoid setting up the L3 entry, as the address range never gets used
    anyway (it'll be dropped altogether in a subsequent patch).
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Hongyan Xia <hongyxia@amazon.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 01 09:35:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 09:35:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUS48-00076Z-4n; Fri, 01 May 2020 09:34:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jt3Y=6P=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jUS46-00076U-3t
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 09:34:38 +0000
X-Inumbo-ID: f915c052-8b8e-11ea-ae69-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id f915c052-8b8e-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 09:34:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588325676;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=HtdiKEBYnl+F4fKOxJyusZbV9rQDJIfKH4dG2/G+1qg=;
 b=gSAG3Mu50fMiEPGFNm6G5Sexi8OCiZh8Wn+yemLcvBeA0NXyUAaTWII0iarHR6C5mwpb0a
 k0z3WNG1RZIkOXswsgS6JE0zn9r+HPL1w1HkQ4jZdlFN8lDPjWlGtmPcObhfZxGZywS2lz
 Iex0E89okpA3UL1S78G373suPv348f0=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-319-YZZnaj37OHisW4YVqlgBvA-1; Fri, 01 May 2020 05:34:32 -0400
X-MC-Unique: YZZnaj37OHisW4YVqlgBvA-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 12CF21054F8B;
 Fri,  1 May 2020 09:34:30 +0000 (UTC)
Received: from [10.36.112.251] (ovpn-112-251.ams2.redhat.com [10.36.112.251])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 86CCB6A954;
 Fri,  1 May 2020 09:34:23 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: Andrew Morton <akpm@linux-foundation.org>
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMFCQlmAYAGCwkIBwMCBhUI
 AgkKCwQWAgMBAh4BAheAFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl3pImkCGQEACgkQTd4Q
 9wD/g1o+VA//SFvIHUAvul05u6wKv/pIR6aICPdpF9EIgEU448g+7FfDgQwcEny1pbEzAmiw
 zAXIQ9H0NZh96lcq+yDLtONnXk/bEYWHHUA014A1wqcYNRY8RvY1+eVHb0uu0KYQoXkzvu+s
 Dncuguk470XPnscL27hs8PgOP6QjG4jt75K2LfZ0eAqTOUCZTJxA8A7E9+XTYuU0hs7QVrWJ
 jQdFxQbRMrYz7uP8KmTK9/Cnvqehgl4EzyRaZppshruKMeyheBgvgJd5On1wWq4ZUV5PFM4x
 II3QbD3EJfWbaJMR55jI9dMFa+vK7MFz3rhWOkEx/QR959lfdRSTXdxs8V3zDvChcmRVGN8U
 Vo93d1YNtWnA9w6oCW1dnDZ4kgQZZSBIjp6iHcA08apzh7DPi08jL7M9UQByeYGr8KuR4i6e
 RZI6xhlZerUScVzn35ONwOC91VdYiQgjemiVLq1WDDZ3B7DIzUZ4RQTOaIWdtXBWb8zWakt/
 ztGhsx0e39Gvt3391O1PgcA7ilhvqrBPemJrlb9xSPPRbaNAW39P8ws/UJnzSJqnHMVxbRZC
 Am4add/SM+OCP0w3xYss1jy9T+XdZa0lhUvJfLy7tNcjVG/sxkBXOaSC24MFPuwnoC9WvCVQ
 ZBxouph3kqc4Dt5X1EeXVLeba+466P1fe1rC8MbcwDkoUo65Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAiUEGAECAA8FAlXLn5ECGwwFCQlmAYAACgkQTd4Q
 9wD/g1qA6w/+M+ggFv+JdVsz5+ZIc6MSyGUozASX+bmIuPeIecc9UsFRatc91LuJCKMkD9Uv
 GOcWSeFpLrSGRQ1Z7EMzFVU//qVs6uzhsNk0RYMyS0B6oloW3FpyQ+zOVylFWQCzoyyf227y
 GW8HnXunJSC+4PtlL2AY4yZjAVAPLK2l6mhgClVXTQ/S7cBoTQKP+jvVJOoYkpnFxWE9pn4t
 H5QIFk7Ip8TKr5k3fXVWk4lnUi9MTF/5L/mWqdyIO1s7cjharQCstfWCzWrVeVctpVoDfJWp
 4LwTuQ5yEM2KcPeElLg5fR7WB2zH97oI6/Ko2DlovmfQqXh9xWozQt0iGy5tWzh6I0JrlcxJ
 ileZWLccC4XKD1037Hy2FLAjzfoWgwBLA6ULu0exOOdIa58H4PsXtkFPrUF980EEibUp0zFz
 GotRVekFAceUaRvAj7dh76cToeZkfsjAvBVb4COXuhgX6N4pofgNkW2AtgYu1nUsPAo+NftU
 CxrhjHtLn4QEBpkbErnXQyMjHpIatlYGutVMS91XTQXYydCh5crMPs7hYVsvnmGHIaB9ZMfB
 njnuI31KBiLUks+paRkHQlFcgS2N3gkRBzH7xSZ+t7Re3jvXdXEzKBbQ+dC3lpJB0wPnyMcX
 FOTT3aZT7IgePkt5iC/BKBk3hqKteTnJFeVIT7EC+a6YUFg=
Organization: Red Hat GmbH
Message-ID: <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
Date: Fri, 1 May 2020 11:34:22 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 linux-acpi@vger.kernel.org, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390@vger.kernel.org, linux-nvdimm@lists.01.org,
 linux-kernel@vger.kernel.org, virtualization@lists.linux-foundation.org,
 linux-mm@kvack.org, "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>, xen-devel@lists.xenproject.org,
 Michal Hocko <mhocko@kernel.org>, linuxppc-dev@lists.ozlabs.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.05.20 00:24, Andrew Morton wrote:
> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
> 
>>>
>>> Why does the firmware map support hotplug entries?
>>
>> I assume:
>>
>> The firmware memmap was added primarily for x86-64 kexec (and still, is
>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
>> get hotplugged on real HW, they get added to e820. Same applies to
>> memory added via HyperV balloon (unless memory is unplugged via
>> ballooning and you reboot ... then the e820 is changed as well). I assume
>> we wanted to be able to reflect that, to make kexec look like a real reboot.
>>
>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
>>
>>
>> But I assume only Andrew can enlighten us.
>>
>> @Andrew, any guidance here? Should we really add all memory to the
>> firmware memmap, even if this contradicts with the existing
>> documentation? (especially, if the actual firmware memmap will *not*
>> contain that memory after a reboot)
> 
> For some reason that patch is misattributed - it was authored by
> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
> a decade.  I looked through the email discussion from that time and I'm
> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
> review comments.

Okay, thanks for checking. I think the documentation from 2008 is pretty
clear what has to be done here. I will add some of these details to the
patch description.

Also, now that I know that esp. kexec-tools already don't consider
dax/kmem memory properly (memory will not get dumped via kdump) and
won't really suffer from a name change in /proc/iomem, I will go back to
the MHP_DRIVER_MANAGED approach and
1. Don't create firmware memmap entries
2. Name the resource "System RAM (driver managed)"
3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.

This way, kernel users and user space can figure out that this memory
has different semantics and handle it accordingly - I think that was
what Eric was asking for.

Of course, open for suggestions.

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri May 01 09:52:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 09:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUSLP-0000Hn-OT; Fri, 01 May 2020 09:52:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/rha=6P=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jUSLP-0000Hi-8M
 for xen-devel@lists.xen.org; Fri, 01 May 2020 09:52:31 +0000
X-Inumbo-ID: 77b3e9b4-8b91-11ea-b9cf-bc764e2007e4
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.76]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77b3e9b4-8b91-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 09:52:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/3g1FoEufwcGPWv0JcEVYzL/MA9QpcMUjeENsLHfKxo=;
 b=8i9izFGk5CchLfno1vkazkgZMUK8G18DZiYtQUamjcF4JkTdX5oX9zJlFtAiU2GU/OY8yS8EzIEBWqkiSkcGi0hWyH6w3VVvhIimRaVeGEB+wm0Fjzbni3yWZRhCMImDqBcdxygSUvtm5h+S8cM0SRCXy3qJsEzXDOCBCs3TGFg=
Received: from AM5PR0402CA0008.eurprd04.prod.outlook.com
 (2603:10a6:203:90::18) by AM5PR0801MB1891.eurprd08.prod.outlook.com
 (2603:10a6:203:4a::7) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.19; Fri, 1 May
 2020 09:52:27 +0000
Received: from VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:90:cafe::b6) by AM5PR0402CA0008.outlook.office365.com
 (2603:10a6:203:90::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.19 via Frontend
 Transport; Fri, 1 May 2020 09:52:27 +0000
Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xen.org; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;lists.xen.org; dmarc=bestguesspass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT050.mail.protection.outlook.com (10.152.19.209) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2958.20 via Frontend Transport; Fri, 1 May 2020 09:52:26 +0000
Received: ("Tessian outbound ff098c684b24:v54");
 Fri, 01 May 2020 09:52:26 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 4399fab4997e21a8
X-CR-MTA-TID: 64aa7808
Received: from 30ff47e8bea2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D5453799-1394-4DA7-B33D-7D3F797429A8.1; 
 Fri, 01 May 2020 09:52:21 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 30ff47e8bea2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 01 May 2020 09:52:21 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Sn/kzQFEZ24EDNQdGUr+gOj40C43wpB84XyUR4cY2ovubb6YOoahSj52oNMXm505pSLQFPRa/t4NjOd9SXu6vFJquXuWzw66hYy2aR+W9XApHbt0BCCIve/8vh7tTnrKNavlJn6SXZWfLjwtVQkKkByz6TgZ+uwv37Pp0I1XEHI3WDW8Jv52jSPqM+bOYPp0MNG92nxvfh6sCHaKiFowiojbH4nGx/KRqCqN6o9ro6SKLlsib0pxWRbq6X1DjGoRaksl7X6QP18+CcSlQzGrWQTVWodlqKuHPVVN1AB0jxvUOn4ajM/Z/PoZgt/VZr+FSdK59KIsKBNKvfj+6FaG5g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/3g1FoEufwcGPWv0JcEVYzL/MA9QpcMUjeENsLHfKxo=;
 b=INQ/HWL2apDmSCDTwaarDI3d+YRTs977e51nAap4eOGzY95ruxUVN2V4HqsVmAc6hAnkVPEcLvZG9OM0cFK4BQmN4Tvp6uCIYcUxDziLk1C2WKSEV0NeXbAxR2HiCB+AsnIkajdkPqWwJukRUBjzSjkCiG0cEytLOLHnirCektnn1mpkBne2uPY8dN6reAaqEn+feE/GndU4atXuOhsHAmZFOsa5kSqopqy4FOJZvXRMjEVVAYT7v/RAlNygTCZhFkCvg1nnLAk24DBk48s7ZEyN67stp0HObmLLgKEiCMCDztV7D1mnd1KSmwD9mJwE2LrPYaxR70e8AZ5/ZHCzOg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/3g1FoEufwcGPWv0JcEVYzL/MA9QpcMUjeENsLHfKxo=;
 b=8i9izFGk5CchLfno1vkazkgZMUK8G18DZiYtQUamjcF4JkTdX5oX9zJlFtAiU2GU/OY8yS8EzIEBWqkiSkcGi0hWyH6w3VVvhIimRaVeGEB+wm0Fjzbni3yWZRhCMImDqBcdxygSUvtm5h+S8cM0SRCXy3qJsEzXDOCBCs3TGFg=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3402.eurprd08.prod.outlook.com (2603:10a6:10:4f::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.21; Fri, 1 May
 2020 09:52:19 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::94d7:a242:40b4:cfb5]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::94d7:a242:40b4:cfb5%6]) with mapi id 15.20.2937.028; Fri, 1 May 2020
 09:52:19 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "Samuel P. Felton - GPT LLC" <sam.felton@glacier-peak.net>
Subject: Re: Question...
Thread-Topic: Question...
Thread-Index: AdYffIk+nIOwP7iTQLuOs3KZB2G7LwAIaoMA
Date: Fri, 1 May 2020 09:52:19 +0000
Message-ID: <907FA58B-240E-45E6-B452-9094531F715E@arm.com>
References: <f017c46e427a45ecab00c1c59413658c@glacier-peak.net>
In-Reply-To: <f017c46e427a45ecab00c1c59413658c@glacier-peak.net>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: glacier-peak.net; dkim=none (message not
 signed) header.d=none;glacier-peak.net; dmarc=none action=none
 header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 0f6957d1-03d3-4729-4d2d-08d7edb55af8
x-ms-traffictypediagnostic: DB7PR08MB3402:|AM5PR0801MB1891:
X-Microsoft-Antispam-PRVS: <AM5PR0801MB189126BDEBC83A8C8F244ED99DAB0@AM5PR0801MB1891.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 0390DB4BDA
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: RZVqBSgu0Me+xvUPkbqJBUp0fTDrEQZydHHn71YkVZlUvCVC5r+NBWviiWEXFWOt5GnlZdd80POnLQz6loPLe4MjA5d/av99dficvXyel6SXQHxPBVcqCwbUXM5Z1Y8Yh/E3vfF2arvYR5hOpEU8Sc77/QGLcqctXGPmwFk2WOj5KK1FmkkaIbQnW4ucdx19irnRoSDqwfqN12K3ViPE9ubjnfBlV5nALBZZGjSH8BrN8+L3Ripw3hBDiUJ93ugcTUF+ffZPKq5UFUUHb6sgVVTil8FKMWFx55HFymKVnODnR39HZnlbWhwTUtdL2l8lc4E5ywXGPj8FZPIq93icH8eSrpbHdCB5vXJcwWeWzSKebKCHVaQz6TXmDicBf2WDqFEHpkDNQ7jiXI7mWbNNASQpkY5H4Mi7snkN5+o+wxR/EtXbWMbedC8ZEwUxe+4Y
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(376002)(136003)(346002)(396003)(366004)(6512007)(53546011)(316002)(91956017)(66946007)(76116006)(66446008)(64756008)(66556008)(66476007)(4326008)(33656002)(5660300002)(478600001)(71200400001)(2906002)(6506007)(36756003)(6916009)(86362001)(26005)(2616005)(7116003)(186003)(6486002)(8676002)(8936002)(3480700007);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: nLHSkLm1it5hlp3at8uHZFyWkW9tqJB2a8uC2QYCnNtvck+AeuqAm1NOrmw068v0wor/oA2Rg2XnFafrdUzCAf9lxrWqfeyKjyFDLPBz4+XO+nN8mi20WTVYZLIodF9F3uTxs5Kdns3QIlNH8u0fWsrwpjybVf//LjdnIF5ycNJcNeaj1ovskFuY3JoYVbSn8E0gAyn2M4HfBKZhqWc8ysqQ3qsFbS5J7/8nrGOflMv8IHDAd+U4tCUpQTumPNz1EDJubaHapmmS/hYowcfKzuRn3yh0DU827T++2l1shWcvWLuJsIH2Ztc6G7klWFvQV0Z1DO4+dVZJIyGBJHh1KL+5xeXQG0AnQQCBGtc7CM1keR4kp1ZHMfY4KjDiy4/GpWa3KYymG3VRE4wBOjqBd5BpmxnpBC46K/aoj0HQto0B27HqRGxbkH1DcC/iNumnDKHiM2EmAEwhtTYFi4RL0gpDUXhgazYpjFfGoU81uX5x3/Sn5Qe9Lxq6qarjQFC6VXyVJBQHrsm+XQJ1k7P6xBP6fwVan0kKg1/YaT8+CE+nx0msyQOJySJ+Y1gWQAwxpISgD5ZpPoCKJh83NHz0olBYK0MlTRZ6LGREYBdsoOWAb7JR3tIvIdD5rUPlNa0LZuOu30jz6kN6bMaPzI73tgtL7syRuVwXOvb4x3akVGK8O6p9tvTbea8dNUtcxO93uM00YfpfLEulgTVm0m9pPuIM+/RSrdUHv6ST7ISDQoGVXW1x85FSl2tI6HJpJW5Yzykd5gWx0KT/xnvuGj5DRKbmVz4LAxRvNgm3XXN1HVE=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6497289C45E86B438EC4D9F881E44126@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3402
Original-Authentication-Results: glacier-peak.net;
 dkim=none (message not signed)
 header.d=none;glacier-peak.net; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(136003)(346002)(396003)(39860400002)(46966005)(47076004)(7116003)(86362001)(36756003)(82740400003)(8676002)(5660300002)(70206006)(186003)(2906002)(3480700007)(53546011)(6506007)(70586007)(26005)(33656002)(316002)(8936002)(2616005)(36906005)(336012)(81166007)(6486002)(4326008)(478600001)(6862004)(82310400002)(6512007)(356005);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 5019afeb-037d-4e29-96a1-08d7edb556c6
X-Forefront-PRVS: 0390DB4BDA
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: NBy58Dn/q/bWRC7EEDcioF5TMrSmA7L35YarnXXxFf0g+tPRo79VVACIvJCwvAQQu6D/Ets8n9aTdDXevOLqKx2RMXHuKJjvaNUDqAmR8dP9YeDzHRW974pPlpbCdE+ZR0J2+ovtc57d+e5PLlr3o1XBdWSr2t3VXZIIya5cjJiTvy2hhhSN/RwyrBgGg8I8wTimuPHibIsaCn3qbGjiS6pEz7TN7bqLlMljR2DEBgCfCJmUPgB8hPlqiIpJGnOTgUPBJs/lge66rSyXvUIyRtIwSq8O8kO3G8wHyqIVB8h4ouB95vhyTu4++6RDxhHUJohier8gD9z3rCXhg/CRbqJX9VsP1xv9ucDOv/MOtaCmab6/Re9WaGcAvSPh5nJJb/7wpRCk4He6rikU+guiVJ5ejJXRlAbd0oS9KqNcSL4nDGb+d++PN8oF3l47ziqO9tucELWk8kVu7IDMYpHHnVsNaAz027JRfXLknzpOHqDKGF4FsUYN9ls9cC7AtnCfPniat13G2iHpgtfVB2tKXw==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2020 09:52:26.6266 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 0f6957d1-03d3-4729-4d2d-08d7edb55af8
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB1891
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Samuel,

> On 1 May 2020, at 06:59, Samuel P. Felton - GPT LLC <sam.felton@glacier-peak.net> wrote:
>
> So, I’m trying to get a Xen Dom0 and DomU, both running Ubuntu 20.04 LTS up on our brand-new Gigabyte ThunderX2 ARM64 box. I can get Ubuntu up and running, but after installing the Xen bits, it dies after the UEFI layer launches GRUB. I haven’t been able to get any logfiles because it doesn’t get that far. Nothing shows up on the serial port log either – it just hangs.
>
> Has anyone over there been trying to get a similar setup running? Am I out to lunch for trying this, or is there something I’m missing? Any help at all would be appreciated.

I am currently porting Xen on an N1SDP arm board which is also using a EDK2/grub setup and I manage to start xen from grub and then start dom0 providing a DTB.

My grub configuration looks like this:
menuentry 'xen' {
    xen_hypervisor $prefix/xen.efi loglvl=all guest_loglvl=all console=dtuart dtuart=serial0 dom0_mem=1G
    xen_module $prefix/Image console=hvc0 efi=noruntime rw root=/dev/sda1 rootwait
    devicetree $prefix/n1sdp.dtb
}

Could you share your grub configuration ?

Bertrand

>
> If this doesn’t work, I’m going to have to go to FreeBSD and Bhyve because I know someone who has that working. I’d rather use Linux than BSD for this application, there are more drivers supporting this hardaware.
>
> Thanks,
> ~Sam
>
>
>
>
> <image001.png>
> Phone: +1 206 701-7321 ext. 101
> Email: info@glacier-peak.net

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.


From xen-devel-bounces@lists.xenproject.org Fri May 01 09:57:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 09:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUSQD-0000S7-Cf; Fri, 01 May 2020 09:57:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/rha=6P=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jUSQB-0000Ry-VZ
 for xen-devel@lists.xen.org; Fri, 01 May 2020 09:57:28 +0000
X-Inumbo-ID: 2816ebc6-8b92-11ea-9af7-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.78]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2816ebc6-8b92-11ea-9af7-12813bfff9fa;
 Fri, 01 May 2020 09:57:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a0F1SrCJZGagszlZMESLfxC6Ae3iHXo8wo1NAom7Rss=;
 b=cBKc0F5WBNrVY9IZtWbxtMJOb/RAibEOGNARlFvqazEzo8YU3RuTBm6iZdeAT8jUlQ3neBqpFrhwQqsn0Mc8PouiEj8AN2mFWfKnCrHkoqp8nq4CdT+dFmuItScRlNAbTFcG0qmYbknoqiyYfOj4EkIyUBNgsNYQJgYtYAEwUsk=
Received: from MR2P264CA0031.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::19) by
 DB6PR0802MB2565.eurprd08.prod.outlook.com (2603:10a6:4:a1::23) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2958.20; Fri, 1 May 2020 09:57:23 +0000
Received: from VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:500:0:cafe::39) by MR2P264CA0031.outlook.office365.com
 (2603:10a6:500::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.19 via Frontend
 Transport; Fri, 1 May 2020 09:57:22 +0000
Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xen.org; dkim=pass (signature was verified)
 header.d=armh.onmicrosoft.com;lists.xen.org; dmarc=bestguesspass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT031.mail.protection.outlook.com (10.152.18.69) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2958.20 via Frontend Transport; Fri, 1 May 2020 09:57:22 +0000
Received: ("Tessian outbound 4cdf5642225a:v54");
 Fri, 01 May 2020 09:57:22 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: e7148ed3c3e5fdcf
X-CR-MTA-TID: 64aa7808
Received: from c87f2d2d158f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 84E024D0-5FEB-4C07-97DE-02796AFEF261.1; 
 Fri, 01 May 2020 09:57:16 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c87f2d2d158f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 01 May 2020 09:57:16 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cncbeMWCIPA+iKc46SLpzrYThbN/1N+DaCTqSLO6SgSpy0ZzVq8BvaSIAz/d75Ilxtukfri6t4NwIZFaJS2AymmeVWhaUuLuuoxJyL+xbhdfqr0F2HBAm9oXh35/35hwURyodJRbuyEhfV1bTMEcsTQupOIAGFEYNoIKRMBPNo+mxxjm16kvvxxcZbQgEMM9g7BKq1tjG3lqcz/qkZ7jeqSeBYrOuDtUcRXOiV567Iv0sccqxMu2m3cv3zRbHcbtQFgIjD5TiIN9SRvxd8u8W78cyMMmbNL9/6KeXgzrjnbzSba9RoRsXkv9T/FNX52fPrwoNgdBCgaCZisRTMV9Fg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=a0F1SrCJZGagszlZMESLfxC6Ae3iHXo8wo1NAom7Rss=;
 b=A/96Hpi+KJh006hvX28FpiPAXiQuphE0pRhicgnk5RdUD1M4h+mwxibY2gKIETr88h4AYTtfa9e6UM9MnKfQFt9XjkQDjhe971kIDBOTkDKUQfOR8BNefw4vNPtvxmb5lNajCYEf8/7LYFqVEbSvsg1DXSX2nmCERhxjbgEb3Ql5fOujjkGHYCv2FAQ2d7fh3+qql8NzIRouZG3d80LRCPyJkXW4fmhR+e6N+oTyzBgE0ewBP1Bdz18inm2R6ubToHM5XV1/urlXED5a9+24XY19jyN4k23IOsd5vHbBuAUv0uig5ApXx6RzqBDK2rCDbIssIdjpTVL3+JCpGpzjgA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3211.eurprd08.prod.outlook.com (2603:10a6:5:27::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.19; Fri, 1 May
 2020 09:57:15 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::94d7:a242:40b4:cfb5]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::94d7:a242:40b4:cfb5%6]) with mapi id 15.20.2937.028; Fri, 1 May 2020
 09:57:15 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: Question...
Thread-Topic: Question...
Thread-Index: AdYffIk+nIOwP7iTQLuOs3KZB2G7LwAIaoMAAAAsHAA=
Date: Fri, 1 May 2020 09:57:15 +0000
Message-ID: <B8FADD00-21A7-4556-93B2-BF5FD2218D84@arm.com>
References: <f017c46e427a45ecab00c1c59413658c@glacier-peak.net>
 <907FA58B-240E-45E6-B452-9094531F715E@arm.com>
In-Reply-To: <907FA58B-240E-45E6-B452-9094531F715E@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: lists.xen.org; dkim=none (message not signed)
 header.d=none; lists.xen.org;
 dmarc=none action=none header.from=arm.com; 
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 5c4f3f73-9059-42a2-2195-08d7edb60b38
x-ms-traffictypediagnostic: DB7PR08MB3211:|DB6PR0802MB2565:
X-Microsoft-Antispam-PRVS: <DB6PR0802MB25651CBECB4A7C032F0E4B569DAB0@DB6PR0802MB2565.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 0390DB4BDA
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: multipart/alternative;
 boundary="_000_B8FADD0021A7455693B2BF5FD2218D84armcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3211
Original-Authentication-Results: lists.xen.org; dkim=none (message not signed)
 header.d=none; lists.xen.org;
 dmarc=none action=none header.from=arm.com; 
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2020 09:57:22.2883 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 5c4f3f73-9059-42a2-2195-08d7edb60b38
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2565
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_000_B8FADD0021A7455693B2BF5FD2218D84armcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

On 1 May 2020, at 10:52, Bertrand Marquis <bertrand.marquis@arm.com> wrote:

Hi Samuel,

On 1 May 2020, at 06:59, Samuel P. Felton - GPT LLC <sam.felton@glacier-peak.net> wrote:

So, I'm trying to get a Xen Dom0 and DomU, both running Ubuntu 20.04 LTS, up on our brand-new Gigabyte ThunderX2 ARM64 box. I can get Ubuntu up and running, but after installing the Xen bits, it dies after the UEFI layer launches GRUB. I haven't been able to get any logfiles because it doesn't get that far. Nothing shows up on the serial port log either – it just hangs.

Has anyone over there been trying to get a similar setup running? Am I out to lunch for trying this, or is there something I'm missing? Any help at all would be appreciated.

I am currently porting Xen to an N1SDP Arm board, which also uses an EDK2/GRUB setup, and I manage to start Xen from GRUB and then start dom0 by providing a DTB.

My GRUB configuration looks like this:
menuentry 'xen' {
   xen_hypervisor $prefix/xen.efi loglvl=all guest_loglvl=all console=dtuart dtuart=serial0 dom0_mem=1G
   xen_module $prefix/Image console=hvc0 efi=noruntime rw root=/dev/sda1 rootwait
   devicetree $prefix/n1sdp.dtb
}

Could you share your GRUB configuration?

Bertrand


If this doesn't work, I'm going to have to go to FreeBSD and bhyve, because I know someone who has that working. I'd rather use Linux than BSD for this application; there are more drivers supporting this hardware.

Thanks,
~Sam




<image001.png>
Phone: +1 206 701-7321 ext. 101
Email: info@glacier-peak.net

IMPORTANT NOTICE: The contents of this email and any attachments are confidential and may also be privileged. If you are not the intended recipient, please notify the sender immediately and do not disclose the contents to any other person, use it for any purpose, or store or copy the information in any medium. Thank you.

Sorry, I forgot to remove the disclaimer.
Bertrand

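[Editor's note: adapting Bertrand's N1SDP entry to the Ubuntu 20.04 ThunderX2 setup described above might look like the sketch below. This assumes GRUB was built with the arm64 xen_boot module (which provides the xen_hypervisor and xen_module commands); the file paths, UART alias, root device, and DTB name are placeholders that need adjusting to the actual installation.]

```
menuentry 'Xen (Ubuntu 20.04 Dom0)' {
    # Xen command line: full log verbosity, console on the DT UART.
    # 'serial0' is a typical /aliases entry; check the board's DTB.
    xen_hypervisor /xen.efi loglvl=all guest_loglvl=all console=dtuart dtuart=serial0 dom0_mem=2G
    # Dom0 kernel: hvc0 is the Xen paravirtualised console.
    # root=/dev/sda2 is a placeholder for the real root partition.
    xen_module /vmlinuz console=hvc0 rw root=/dev/sda2 rootwait
    # Device tree handed to Xen; the filename is an assumption.
    devicetree /thunderx2.dtb
}
```

[If xen_hypervisor is reported as an unknown command, GRUB lacks the xen_boot module, which is a separate problem from Xen itself hanging after GRUB.]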


--_000_B8FADD0021A7455693B2BF5FD2218D84armcom_--


From xen-devel-bounces@lists.xenproject.org Fri May 01 11:08:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 11:08:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUTWq-00069v-Fc; Fri, 01 May 2020 11:08:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2s4U=6P=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jUTWp-00069q-Iz
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 11:08:23 +0000
X-Inumbo-ID: 1252fcf8-8b9c-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1252fcf8-8b9c-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 11:08:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:Reply-To:
 MIME-Version:Message-Id:Date:Subject:Cc:To:From:Sender:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yH9X2AhgxC1bDIQCz8Z5QaVxawIVxME6WxXTi6j47pU=; b=nhd+/OpFlUfgAWI/Zyk31cd/6Y
 4mKmd6J8kLLfB/k3xrcMk4apAKEhDmIGlTa7kvkjVZbseoiCEhzZ5Ig0DdZ4Yfs/27nb8dVeE99p9
 wJNLz+JagiOfZ6A469M2TRaBbbLv6BFWd4730dS/mVR+A+j1q/xceGs005FO9OTe0lvQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jUTWo-0007r9-0P; Fri, 01 May 2020 11:08:22 +0000
Received: from [54.239.6.185] (helo=CBG-R90WXYV0.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jUTWn-0007u2-JX; Fri, 01 May 2020 11:08:21 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [ANNOUNCE] Xen 4.14 Development Update - LAST POSTING
Date: Fri,  1 May 2020 12:08:19 +0100
Message-Id: <20200501110819.955-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: xen-devel@lists.xenproject.org, pdurrant@amazon.com
Cc: jgross@suse.com, andrew.cooper3@citrix.com, pdurrant@amazon.com,
 marmarek@invisiblethingslab.com, hongyxia@amazon.com, jbeulich@suse.com,
 tamas.k.lengyel@gmail.com, roger.pau@citrix.com, dwmw@amazon.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

*** LAST POSTING TODAY ***

This email only tracks big items for the xen.git tree. Please reply with
any items you would like to see in 4.14 so that people have an idea of
what is going on and can prioritise accordingly.

You're welcome to provide a description and use cases for the feature
you're working on.

= Timeline =

We have now adopted a fixed cut-off date scheme, releasing approximately
every 8 months.
The critical dates for Xen 4.14 are as follows:

* Last posting date: May 1st, 2020 <--- We are here
* Hard code freeze: May 22nd, 2020
* Release: June 26th, 2020

Note that we no longer have a freeze exception scheme. All patches
intended for 4.14 must be posted initially no later than the last
posting date and in their final form no later than the hard code freeze.
Any patch posted after those dates will automatically be queued for the
next release.

RCs will be arranged immediately after freeze.

There is also a Jira instance to track all the tasks (not only the big
ones) for the project. See: https://xenproject.atlassian.net/projects/XEN/issues.

Some of the tasks tracked by this e-mail also have a corresponding Jira
task, referenced as XEN-N.

Each feature has a version number associated with its patch series.
Could each owner send an update giving the latest version number if the
series has been re-posted? Also, could the owners of any completed items
please respond so that the item can be moved into the 'Completed' section.

= Projects =

== Hypervisor == 

*  Live-Updating Xen (RFC v2)
  -  David Woodhouse
  -  The latest code is available at https://xenbits.xen.org/gitweb/?p=people/dwmw2/xen.git;a=shortlog;h=refs/heads/lu-master
  -  Project wiki page at https://wiki.xenproject.org/wiki/Live-Updating_Xen

*  Non-Cooperative Live Migration (v4 (re-work the xenstore migration document))
  -  Paul Durrant

*  Secret Hiding (Split into several series; 'Remove the direct map' now posted)
  -  Hongyan Xia

*  Hypervisor file system (v7)
  -  Juergen Gross

=== x86 === 

*  Linux stub domains (v5)
  -  Marek Marczykowski-Górecki

*  Virtualise MSR_ARCH_CAPS for guests
  -  Andrew Cooper

*  AMD hardware mitigations (Rome)
  -  Andrew Cooper

*  Xen ioreq server (v3)
  -  Roger Pau Monne

*  Instruction emulator improvements
  -  Jan Beulich

*  VM forking (v17 (hypervisor side merged))
  -  Tamas K Lengyel

=== ARM === 

== Completed == 


Paul Durrant


From xen-devel-bounces@lists.xenproject.org Fri May 01 11:32:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 11:32:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUTtq-0008V6-6M; Fri, 01 May 2020 11:32:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUTtp-0008V1-86
 for xen-devel@lists.xen.org; Fri, 01 May 2020 11:32:09 +0000
X-Inumbo-ID: 62b61025-8b9f-11ea-9afe-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62b61025-8b9f-11ea-9afe-12813bfff9fa;
 Fri, 01 May 2020 11:32:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=z/kJ02TphEIrJYC8kWhtpkm1F9rG2aDTeebBXI2o5dc=; b=RGH4tONqavx0umd9qbvOEvFH8h
 d+82M5Ln9r6joFZgFYKWuE8WZVsce8oqLlVVGDP7uXUGi4bMGuDJl2n4PeMsIY3KtHhAw97AjUT14
 FUdJbFvQ/6TwRQMT4oo77/YT4PvTBSsHMZKgT8yxX4x/gBFTJm1EWizUVWNcy//K9j8U=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUTtn-0008I0-DI; Fri, 01 May 2020 11:32:07 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUTtn-0001bc-6R; Fri, 01 May 2020 11:32:07 +0000
Subject: Re: Question...
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 "Samuel P. Felton - GPT LLC" <sam.felton@glacier-peak.net>
References: <f017c46e427a45ecab00c1c59413658c@glacier-peak.net>
 <907FA58B-240E-45E6-B452-9094531F715E@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a8000286-4add-5428-ba4e-66105aaeabff@xen.org>
Date: Fri, 1 May 2020 12:32:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <907FA58B-240E-45E6-B452-9094531F715E@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 01/05/2020 10:52, Bertrand Marquis wrote:
> Hi Samuel,
> 
>> On 1 May 2020, at 06:59, Samuel P. Felton - GPT LLC <sam.felton@glacier-peak.net> wrote:
>>
>> So, I’m trying to get a Xen Dom0 and DomU, both running Ubuntu 20.04 LTS up on our brand-new Gigabyte ThunderX2 ARM64 box. I can get Ubuntu up and running, but after installing the Xen bits, it dies after the UEFI layer launches GRUB. I haven’t been able to get any logfiles because it doesn’t get that far. Nothing shows up on the serial port log either – it just hangs.

IIUC what you wrote, you don't see any prompt from GRUB. Am I correct?

>>
>> Has anyone over there been trying to get a similar setup running? Am I out to lunch for trying this, or is there something I’m missing? Any help at all would be appreciated.
> 
> I am currently porting Xen to an N1SDP Arm board, which also uses an EDK2/GRUB setup, and I have managed to start Xen from GRUB and then start dom0 by providing a DTB.
> 
> My grub configuration looks like this:
> menuentry 'xen' {
>      xen_hypervisor $prefix/xen.efi loglvl=all guest_loglvl=all console=dtuart dtuart=serial0 dom0_mem=1G
>      xen_module $prefix/Image console=hvc0 efi=noruntime rw root=/dev/sda1 rootwait
>      devicetree $prefix/n1sdp.dtb
> }

Depending on your GRUB configuration, this may not work. With older 
versions of GRUB, you will want to use chainloading (see [1]).
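
With chainloading, GRUB hands control straight to the Xen EFI binary, 
and Xen then reads its own configuration file (xen.cfg, next to 
xen.efi) to find the dom0 kernel and DTB. A minimal sketch, with a 
hypothetical path; see [1] for the exact xen.cfg format:

```
# grub.cfg entry: chainload the Xen EFI binary directly
menuentry 'Xen (chainloaded)' {
    chainloader /EFI/xen/xen.efi
}
```

The kernel=, ramdisk= and dtb= lines then live in xen.cfg rather than 
in grub.cfg.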

I haven't used the ThunderX2 yet, only the previous version. Both 
versions ship with ACPI, which is the preferred way to boot them. You 
should be able to boot Xen using ACPI, but ACPI support does not yet 
have feature parity with DT.

So I would recommend using DT if the ThunderX2 provides one. If you 
still want to try ACPI, then you will need to build Xen with 
CONFIG_ACPI=y.

Additionally, the ThunderX2 uses the GICv3 ITS, so you will also need 
to build Xen with CONFIG_GICV3_ITS=y.

Both options are only selectable in expert mode. To access it, you 
have to add XEN_CONFIG_EXPERT=y to *all* your make command lines.
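
A sketch of what that looks like in practice (the make targets exist 
in the Xen tree, but the cross-compiler prefix and arm64 target here 
are assumptions for your setup):

```shell
# XEN_CONFIG_EXPERT=y must appear on every make invocation, otherwise
# the expert-only options are dropped from the configuration again.
make -C xen menuconfig XEN_CONFIG_EXPERT=y
make -C xen XEN_CONFIG_EXPERT=y XEN_TARGET_ARCH=arm64 \
    CROSS_COMPILE=aarch64-linux-gnu-
```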

I hope this helps.

> 
> Could you share your grub configuration ?
> 
> Bertrand
> 
>>
>> If this doesn’t work, I’m going to have to go to FreeBSD and Bhyve because I know someone who has that working. I’d rather use Linux than BSD for this application; there are more drivers supporting this hardware.
>>
>> Thanks,
>> ~Sam


[1] https://wiki.xenproject.org/wiki/Xen_EFI

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 11:42:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 11:42:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUU3b-0000wK-5g; Fri, 01 May 2020 11:42:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bKOD=6P=gmail.com=tcminyard@srs-us1.protection.inumbo.net>)
 id 1jUU3Z-0000wF-P2
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 11:42:13 +0000
X-Inumbo-ID: cb95441a-8ba0-11ea-9887-bc764e2007e4
Received: from mail-ot1-x343.google.com (unknown [2607:f8b0:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cb95441a-8ba0-11ea-9887-bc764e2007e4;
 Fri, 01 May 2020 11:42:11 +0000 (UTC)
Received: by mail-ot1-x343.google.com with SMTP id b13so2356293oti.3
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 04:42:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:date:from:to:cc:subject:message-id:reply-to:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=pFoeUECwVIrcEgps9xEKluuyWqbnMaowz1zKu/YRL4Q=;
 b=BzCKTuuIg9xSrsgZpn+sFAcp98pGTLUIoiJnRWVW+nMPDbXNIm0+vhoK3v94uv0sCR
 +rFpxbPEBsRU/37fzIKwtYmraWAt9YY8AXkPTBokZ1U0vcKT1qwk2cI8InHlFEmphf5D
 rntuetAF1oN7KdGNZiuH3y51iZ4FL6nzXmFl4XKlxpHuJ+q3jCkCEYefZx96lY0l/fwc
 ZeAhUNbVG4OzY7b4QikLUyfmtzU8vuu7Gfan3ulGxW9hx55MNp8kPZo54ibK9lbMsZAl
 mpDFr/C6cDEFgmZIuAyRC1qhoWn5Iz+CfQ/rNONuAsbzIen0w9WVAgg2o5oskybVsSw3
 DFuw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
 :reply-to:references:mime-version:content-disposition:in-reply-to
 :user-agent;
 bh=pFoeUECwVIrcEgps9xEKluuyWqbnMaowz1zKu/YRL4Q=;
 b=jAzci+N4ztuzOt/e2DKPzpmFFSgMmVQAEEwP+oziVyBAP/pnXWxsGvwumynNFtyJAj
 mBqZDu5guyL1I1HTozmJdPz2pfEqwFScWlaOq/QxJZeP+8zoFf5Buk/pg8lgl5kk/UL6
 7mP1e9TknggQCVk4h0s/u1+hI9gwrBW1pio/cO5NMwJxMnz1/J6J3fbBWgBMCq9nWxHy
 yento1ZVbKMjRwAlFpJbNprc1lVPDebxxAfDDq7bUE7sKB7aIBLTYqire/bKKIVv976B
 tMCXmeHnYEd8nr4mxNL5HlOGe4yBd6JqlKPKrFKvu0IvFK30h4rC4AubenpPoCfyux/1
 uRTQ==
X-Gm-Message-State: AGi0PuaflUumE5JyzZdatJUnqvt5thLUXum7PHiB/7tIZJoyaoDDl06k
 h1kwmpgdET8NjMUvu8afaA==
X-Google-Smtp-Source: APiQypLzjXUHnDec8+yGA/oweUwhFmXemSt1CRyFY6Ep/YjMwlGirejm9neQiWOUbQqTqUVY+4VDpg==
X-Received: by 2002:a9d:37c9:: with SMTP id x67mr3104736otb.207.1588333330528; 
 Fri, 01 May 2020 04:42:10 -0700 (PDT)
Received: from serve.minyard.net (serve.minyard.net. [2001:470:b8f6:1b::1])
 by smtp.gmail.com with ESMTPSA id w14sm755042oou.46.2020.05.01.04.42.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 01 May 2020 04:42:09 -0700 (PDT)
Received: from minyard.net (unknown
 [IPv6:2001:470:b8f6:1b:8b39:c3f3:f502:5c4e])
 by serve.minyard.net (Postfix) with ESMTPSA id 83BCB18000D;
 Fri,  1 May 2020 11:42:08 +0000 (UTC)
Date: Fri, 1 May 2020 06:42:01 -0500
From: Corey Minyard <minyard@acm.org>
To: Roman Shaposhnik <roman@zededa.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Message-ID: <20200501114201.GE9902@minyard.net>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: minyard@acm.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 07:20:05PM -0700, Roman Shaposhnik wrote:
> Hi!
> 
> I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
> upstream kernel. The kernel itself works perfectly well
> on the board. When I try booting it as Dom0 under Xen,
> it goes into a stacktrace (attached).

Getting Xen working on the Pi4 requires a lot of moving parts, and they
all have to be right.

> 
> Looking at what nice folks over at Dornerworks have previously
> done to make RPi kernels boot as Dom0 I've come across these
> 3 patches:
>     https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
> 
> The first patch seems irrelevant (unless I'm missing something
> really basic here). 

It might be irrelevant for your configuration, assuming that Xen gets
the right information from EFI.  I haven't tried EFI booting.

> The 2nd patch applied with no issue (but
> I don't think it is related) but the 3d patch failed to apply on
> account of 5.6.1 kernel no longer having:
>     dev->archdata.dev_dma_ops
> E.g.
>     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
> 
> I've tried to emulate the effect of the patch by simply introducing
> a static variable that would signal that we already initialized
> dev->dma_ops -- but that didn't help at all.
> 
> I'm CCing Jeff Kubascik and Corey Minyard to see if maybe the original
> authors of those patches have any suggestions on how this may be dealt
> with.
> 
> Any advice would be greatly appreciated!

What does your Pi4 config.txt file look like?  The GIC is not being
handled correctly, and I'm guessing that your configuration is wrong.
You can't use the stock config.txt file with Xen; you have to modify
the configuration a bit.

I think just adding:

enable_gic=1
total_mem=1024

might get it working, or at least solve one problem.  It's required
either way, and it might get rid of the GIC errors at the beginning.  I
see a few of those myself, but not that many.
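
For reference, the relevant additions to config.txt look like this (a 
sketch; the comments are my understanding of why they are needed, and 
the rest of the stock file stays as shipped):

```
# Expose the GIC to the OS/hypervisor instead of the legacy interrupt
# controller; Xen requires the GIC.
enable_gic=1
# Cap memory at 1 GiB, which reportedly avoids problems with Pi 4
# peripherals that can only DMA to the low 1GB of RAM.
total_mem=1024
```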

My kernel command line is:

coherent_pool=1M 8250.nr_uarts=1 cma=64M cma=256M  smsc95xx.macaddr=DC:A6:32:4F:3A:CD vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  console=hvc0 clk_ignore_unused root=/dev/mmcblk0p2 rootwait

A lot of that configuration gets pulled from the initialization done by
the GPU at startup, which it puts into the device tree.  I'm not sure
what a lot of it means.  Some of it is added by Xen, too.

I can verify the DMA patch is important.  I'm not sure how to apply it
to a 5.6 kernel, though.

Keep us informed when you get it working.

-corey

> 
> Thanks,
> Roman.

> grub> xen_hypervisor (hd0,gpt1)/xen dom0_mem=1024M,max:1024M dom0_max_vcpus=1
> grub> xen_module (hd0,gpt1)/kernel console=hvc0 earlyprintk=xen nomodeset
> grub> devicetree (hd0,gpt1)/bcm2711-rpi-4-b.dtb
> grub> boot
> Using modules provided by bootloader in FDT
> Xen 4.13.0 (c/s Tue Dec 17 14:19:49 2019 +0000 git:a2e84d8e42-dirty) EFI loader
> Warning: Could not query variable store: 0x8000000000000003
> - UART enabled -
> - Boot CPU booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) Checking for initrd in /chosen
> (XEN) RAM: 0000000000001000 - 0000000007ef1fff
> (XEN) RAM: 0000000007ef2000 - 0000000007f0dfff
> (XEN) RAM: 0000000007f0e000 - 000000002bc4dfff
> (XEN) RAM: 000000003cb10000 - 000000003cb10fff
> (XEN) RAM: 000000003cb12000 - 000000003cb13fff
> (XEN) RAM: 000000003cb1b000 - 000000003cb1cfff
> (XEN) RAM: 0000000040000000 - 00000000fbffffff
> (XEN)
> (XEN) MODULE[0]: 000000002bc5d000 - 000000002bd978f0 Xen
> (XEN) MODULE[1]: 000000002bc4f000 - 000000002bc5d000 Device Tree
> (XEN) MODULE[2]: 000000002bda5000 - 000000002d684200 Kernel
> (XEN)
> (XEN) CMDLINE[000000002bda5000]:chosen console=hvc0 earlyprintk=xen nomodeset
> (XEN)
> (XEN) Command line: dom0_mem=1024M,max:1024M dom0_max_vcpus=1
> (XEN) parameter "dom0_mem" has invalid value "1024M,max:1024M", rc=-22!
> (XEN) Domain heap initialised
> (XEN) Booting using Device Tree
> (XEN) Platform: Raspberry Pi 4
> (XEN) No dtuart path configured
> (XEN) Bad console= option 'dtuart'
>  Xen 4.13.0
> (XEN) Xen version 4.13.0 (@) (aarch64-linux-gnu-gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0) debug=y  Thu Apr 30 17:06:40 PDT 2020
> (XEN) Latest ChangeSet: Tue Dec 17 14:19:49 2019 +0000 git:a2e84d8e42-dirty
> (XEN) build-id: 7efa3c5cb27a98e9eb2e750fa71c8a065b9b5cb6
> (XEN) Processor: 410fd083: "ARM Limited", variant: 0x0, part 0xd08, rev 0x3
> (XEN) 64-bit Execution:
> (XEN)   Processor Features: 0000000000002222 0000000000000000
> (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> (XEN)     Extensions: FloatingPoint AdvancedSIMD
> (XEN)   Debug Features: 0000000010305106 0000000000000000
> (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> (XEN)   Memory Model Features: 0000000000001124 0000000000000000
> (XEN)   ISA Features:  0000000000010000 0000000000000000
> (XEN) 32-bit Execution:
> (XEN)   Processor Features: 00000131:00011011
> (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> (XEN)     Extensions: GenericTimer Security
> (XEN)   Debug Features: 03010066
> (XEN)   Auxiliary Features: 00000000
> (XEN)   Memory Model Features: 10201105 40000000 01260000 02102211
> (XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00010001
> (XEN) SMP: Allowing 4 CPUs
> (XEN) enabled workaround for: ARM erratum 1319537
> (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 54000 KHz
> (XEN) GICv2 initialization:
> (XEN)         gic_dist_addr=00000000ff841000
> (XEN)         gic_cpu_addr=00000000ff842000
> (XEN)         gic_hyp_addr=00000000ff844000
> (XEN)         gic_vcpu_addr=00000000ff846000
> (XEN)         gic_maintenance_irq=25
> (XEN) GICv2: 256 lines, 4 cpus, secure (IID 0200143b).
> (XEN) XSM Framework v1.0.0 initialized
> (XEN) Initialising XSM SILO mode
> (XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
> (XEN) Initializing Credit2 scheduler
> (XEN)  load_precision_shift: 18
> (XEN)  load_window_shift: 30
> (XEN)  underload_balance_tolerance: 0
> (XEN)  overload_balance_tolerance: -3
> (XEN)  runqueues arrangement: socket
> (XEN)  cap enforcement granularity: 10ms
> (XEN) load tracking window length 1073741824 ns
> (XEN) Allocated console ring of 32 KiB.
> (XEN) CPU0: Guest atomics will try 6 times before pausing the domain
> (XEN) Bringing up CPU1
> - CPU 00000001 booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) CPU1: Guest atomics will try 5 times before pausing the domain
> (XEN) CPU 1 booted.
> (XEN) Bringing up CPU2
> - CPU 00000002 booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) CPU2: Guest atomics will try 5 times before pausing the domain
> (XEN) CPU 2 booted.
> (XEN) Bringing up CPU3
> - CPU 00000003 booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) CPU3: Guest atomics will try 5 times before pausing the domain
> (XEN) CPU 3 booted.
> (XEN) Brought up 4 CPUs
> (XEN) I/O virtualisation disabled
> (XEN) P2M: 44-bit IPA with 44-bit PA and 8-bit VMID
> (XEN) P2M: 4 levels with order-0 root, VTCR 0x80043594
> (XEN) Adding cpu 0 to runqueue 0
> (XEN)  First cpu on runqueue, activating
> (XEN) Adding cpu 1 to runqueue 0
> (XEN) Adding cpu 2 to runqueue 0
> (XEN) Adding cpu 3 to runqueue 0
> (XEN) alternatives: Patching with alt table 00000000002cc0b8 -> 00000000002cc7cc
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Loading d0 kernel from boot module @ 000000002bda5000
> (XEN) Allocating 1:1 mappings totalling 1024MB for dom0:
> (XEN) BANK[0] 0x00000040000000-0x00000080000000 (1024MB)
> (XEN) Grant table range: 0x0000002bc5d000-0x0000002bc9d000
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Loading zImage from 000000002bda5000 to 0000000040080000-000000004195f200
> (XEN) Loading d0 DTB to 0x0000000048000000-0x000000004800a45d
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Scrubbing Free RAM in background
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) ***************************************************
> (XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
> (XEN) Please update your firmware.
> (XEN) ***************************************************
> (XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
> (XEN) Please update your firmware.
> (XEN) ***************************************************
> (XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
> (XEN) Please update your firmware.
> (XEN) ***************************************************
> (XEN) 3... 2... 1...
> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> (XEN) Freed 336kB init memory.
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER24
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER28
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd083]
> [    0.000000] Linux version 5.6.1-default (root@0fd6d133dd12) (gcc version 8.3.0 (Alpine 8.3.0)) #1 SMP Fri May 1 00:57:26 UTC 2020
> [    0.000000] Machine model: Raspberry Pi 4 Model B
> [    0.000000] Xen 4.13 support found
> [    0.000000] efi: Getting EFI parameters from FDT:
> [    0.000000] efi: UEFI not found.
> [    0.000000] NUMA: No NUMA configuration found
> [    0.000000] NUMA: Faking a node at [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000] NUMA: NODE_DATA [mem 0x7fdc62c0-0x7fdc9fff]
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000]   DMA32    empty
> [    0.000000]   Normal   empty
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000] Initmem setup node 0 [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000] psci: probing for conduit method from DT.
> [    0.000000] psci: PSCIv1.1 detected in firmware.
> [    0.000000] psci: Using standard PSCI v0.2 function IDs
> [    0.000000] psci: Trusted OS migration not required
> [    0.000000] psci: SMC Calling Convention v1.1
> [    0.000000] percpu: Embedded 23 pages/cpu s54232 r8192 d31784 u94208
> [    0.000000] Detected PIPT I-cache on CPU0
> [    0.000000] CPU features: detected: EL2 vector hardening
> [    0.000000] CPU features: detected: Speculative Store Bypass Disable
> [    0.000000] CPU features: detected: ARM erratum 1319367
> [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 258048
> [    0.000000] Policy zone: DMA
> [    0.000000] Kernel command line: console=hvc0 earlyprintk=xen nomodeset
> [    0.000000] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> [    0.000000] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
> [    0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
> [    0.000000] Memory: 1001988K/1048576K available (12732K kernel code, 1852K rwdata, 6184K rodata, 4672K init, 758K bss, 46588K reserved, 0K cma-reserved)
> [    0.000000] random: get_random_u64 called from __kmem_cache_create+0x40/0x578 with crng_init=0
> [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
> [    0.000000] rcu: Hierarchical RCU implementation.
> [    0.000000] rcu: 	RCU restricting CPUs from NR_CPUS=480 to nr_cpu_ids=1.
> [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies.
> [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
> [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> [    0.000000] arch_timer: cp15 timer(s) running at 54.00MHz (virt).
> [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0xc743ce346, max_idle_ns: 440795203123 ns
> [    0.000006] sched_clock: 56 bits at 54MHz, resolution 18ns, wraps every 4398046511102ns
> [    0.000407] Console: colour dummy device 80x25
> [    0.270969] printk: console [hvc0] enabled
> [    0.275256] Calibrating delay loop (skipped), value calculated using timer frequency.. 108.00 BogoMIPS (lpj=540000)
> [    0.285812] pid_max: default: 32768 minimum: 301
> [    0.290656] LSM: Security Framework initializing
> [    0.295354] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
> [    0.302862] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
> [    0.313372] xen:grant_table: Grant tables using version 1 layout
> [    0.318858] Grant table initialized
> [    0.322458] xen:events: Using FIFO-based ABI
> [    0.326906] Xen: initializing cpu0
> [    0.330509] rcu: Hierarchical SRCU implementation.
> [    0.339497] EFI services will not be available.
> [    0.343644] smp: Bringing up secondary CPUs ...
> [    0.348157] smp: Brought up 1 node, 1 CPU
> [    0.352255] SMP: Total of 1 processors activated.
> [    0.357090] CPU features: detected: 32-bit EL0 Support
> [    0.362370] CPU features: detected: CRC32 instructions
> [    0.396969] CPU: All CPU(s) started at EL1
> [    0.400556] alternatives: patching kernel code
> [    0.406538] devtmpfs: initialized
> [    0.415913] Registered cp15_barrier emulation handler
> [    0.420448] Registered setend emulation handler
> [    0.425075] KASLR disabled due to lack of seed
> [    0.430042] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
> [    0.439701] futex hash table entries: 256 (order: 2, 16384 bytes, linear)
> [    0.446817] pinctrl core: initialized pinctrl subsystem
> [    0.453132] thermal_sys: Registered thermal governor 'fair_share'
> [    0.453135] thermal_sys: Registered thermal governor 'bang_bang'
> [    0.458685] thermal_sys: Registered thermal governor 'step_wise'
> [    0.464876] thermal_sys: Registered thermal governor 'user_space'
> [    0.471103] DMI not present or invalid.
> [    0.481720] NET: Registered protocol family 16
> [    0.486012] DMA: preallocated 256 KiB pool for atomic allocations
> [    0.491985] audit: initializing netlink subsys (disabled)
> [    0.498698] audit: type=2000 audit(0.380:1): state=initialized audit_enabled=0 res=1
> [    0.506554] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> [    0.512934] ASID allocator initialised with 65536 entries
> [    0.518458] xen:swiotlb_xen: Warning: only able to allocate 4 MB for software IO TLB
> [    0.528189] software IO TLB: mapped [mem 0x7f000000-0x7f400000] (4MB)
> [    0.535335] Serial: AMBA PL011 UART driver
> [    0.558051] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> [    0.564260] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> [    0.571085] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> [    0.577937] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> [    0.589021] cryptd: max_cpu_qlen set to 1000
> [    0.599047] ACPI: Interpreter disabled.
> [    0.602732] xen:balloon: Initialising balloon driver
> [    0.608655] iommu: Default domain type: Translated
> [    0.613276] vgaarb: loaded
> [    0.616165] SCSI subsystem initialized
> [    0.620387] usbcore: registered new interface driver usbfs
> [    0.625355] usbcore: registered new interface driver hub
> [    0.630873] usbcore: registered new device driver usb
> [    0.636092] usb_phy_generic phy: phy supply vcc not found, using dummy regulator
> [    0.643895] pps_core: LinuxPPS API ver. 1 registered
> [    0.648588] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> [    0.657925] PTP clock support registered
> [    0.662062] EDAC MC: Ver: 3.0.0
> [    0.666429] NetLabel: Initializing
> [    0.669282] NetLabel:  domain hash size = 128
> [    0.673736] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
> [    0.679613] NetLabel:  unlabeled traffic allowed by default
> [    0.685781] clocksource: Switched to clocksource arch_sys_counter
> [    0.691755] VFS: Disk quotas dquot_6.6.0
> [    0.695574] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> [    0.702750] pnp: PnP ACPI: disabled
> [    0.712964] NET: Registered protocol family 2
> [    0.717281] tcp_listen_portaddr_hash hash table entries: 512 (order: 1, 8192 bytes, linear)
> [    0.725295] TCP established hash table entries: 8192 (order: 4, 65536 bytes, linear)
> [    0.733251] TCP bind hash table entries: 8192 (order: 5, 131072 bytes, linear)
> [    0.740681] TCP: Hash tables configured (established 8192 bind 8192)
> [    0.747237] UDP hash table entries: 512 (order: 2, 16384 bytes, linear)
> [    0.753829] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes, linear)
> [    0.761207] NET: Registered protocol family 1
> [    0.765514] PCI: CLS 0 bytes, default 64
> [    0.770055] kvm [1]: HYP mode not available
> [    0.781176] Initialise system trusted keyrings
> [    0.785193] workingset: timestamp_bits=40 max_order=18 bucket_order=0
> [    0.798000] zbud: loaded
> [    0.801467] squashfs: version 4.0 (2009/01/31) Phillip Lougher
> [    0.807330] 9p: Installing v9fs 9p2000 file system support
> [    0.836097] Key type asymmetric registered
> [    0.839630] Asymmetric key parser 'x509' registered
> [    0.844666] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
> [    0.852346] io scheduler mq-deadline registered
> [    0.856864] io scheduler kyber registered
> [    0.861133] io scheduler bfq registered
> [    0.869795] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
> [    0.877226] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
> [    0.883677] brcm-pcie fd500000.pcie:      MEM 0x0600000000..0x0603ffffff -> 0x00f8000000
> [    0.891977] brcm-pcie fd500000.pcie:   IB MEM 0x0000000000..0x00bfffffff -> 0x0000000000
> [    0.955808] brcm-pcie fd500000.pcie: link up, 5 GT/s x1 (!SSC)
> [    0.961312] brcm-pcie fd500000.pcie: PCI host bridge to bus 0000:00
> [    0.967524] pci_bus 0000:00: root bus resource [bus 00-01]
> [    0.973126] pci_bus 0000:00: root bus resource [mem 0x600000000-0x603ffffff] (bus address [0xf8000000-0xfbffffff])
> [    0.983696] pci 0000:00:00.0: [14e4:2711] type 01 class 0x060400
> [    0.989897] pci 0000:00:00.0: PME# supported from D0 D3hot
> (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
> (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> [    1.005812] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
> [    1.016400] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
> [    1.023993] pci 0000:01:00.0: [1106:3483] type 00 class 0x0c0330
> [    1.030073] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
> [    1.037089] pci 0000:01:00.0: PME# supported from D0 D3cold
> (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> [    1.047883] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
> [    1.058420] pci_bus 0000:01: busn_res: [bus 01] end is updated to 01
> [    1.064252] pci 0000:00:00.0: BAR 14: assigned [mem 0x600000000-0x6000fffff]
> [    1.071453] pci 0000:01:00.0: BAR 0: assigned [mem 0x600000000-0x600000fff 64bit]
> [    1.079094] pci 0000:00:00.0: PCI bridge to [bus 01]
> [    1.084165] pci 0000:00:00.0:   bridge window [mem 0x600000000-0x6000fffff]
> [    1.091426] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
> [    1.097659] pcieport 0000:00:00.0: PME: Signaling with IRQ 36
> [    1.103675] pcieport 0000:00:00.0: AER: enabled with IRQ 36
> [    1.109273] pci 0000:01:00.0: enabling device (0000 -> 0002)
> (XEN) traps.c:1972:d0v0 HSR=0x93800005 pc=0xffff8000108e3564 gva=0xffff80001001d010 gpa=0x00000600000010
> [    1.124234] Unhandled fault at 0xffff80001001d010
> [    1.129039] Mem abort info:
> [    1.131926]   ESR = 0x96000000
> [    1.135089]   EC = 0x25: DABT (current EL), IL = 32 bits
> [    1.140542]   SET = 0, FnV = 0
> [    1.143691]   EA = 0, S1PTW = 0
> [    1.146950] Data abort info:
> [    1.149926]   ISV = 0, ISS = 0x00000000
> [    1.153877]   CM = 0, WnR = 0
> [    1.156965] swapper pgtable: 4k pages, 48-bit VAs, pgdp=00000000412fe000
> [    1.163800] [ffff80001001d010] pgd=000000007ffff003, pud=000000007fffe003, pmd=000000007fffd003, pte=0068000600000707
> [    1.174614] Internal error: ttbr address size fault: 96000000 [#1] SMP
> [    1.181272] Modules linked in:
> [    1.184438] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.6.1-default #1
> [    1.191106] Hardware name: Raspberry Pi 4 Model B (DT)
> [    1.196386] pstate: 60000005 (nZCv daif -PAN -UAO)
> [    1.201307] pc : quirk_usb_early_handoff+0x4fc/0x870
> [    1.206398] lr : quirk_usb_early_handoff+0x4ec/0x870
> [    1.211476] sp : ffff8000100139f0
> [    1.214901] x29: ffff8000100139f0 x28: ffff00003dd8c080
> [    1.220345] x27: ffff00003fdf4c70 x26: ffff8000108e3068
> [    1.225797] x25: ffff800011964050 x24: 00000000396dfbf8
> [    1.231232] x23: 000000000000ffff x22: ffff80001001d000
> [    1.236684] x21: 0000000000001000 x20: ffff80001179b548
> [    1.242119] x19: ffff00003dcac000 x18: ffff8000111dba48
> [    1.247564] x17: 0000000000000001 x16: 00000000deadbeef
> [    1.253007] x15: 0000000600000000 x14: ffff800010025000
> [    1.258451] x13: ffff80001001d000 x12: ffff00003f804440
> [    1.263895] x11: ffff800011461928 x10: ffff800010013510
> [    1.269339] x9 : 0000000000001000 x8 : ffff8000117b2d58
> [    1.274782] x7 : 0000000000000000 x6 : ffff80001001d000
> [    1.280226] x5 : ffff00003ffff000 x4 : ffff00003fffe400
> [    1.285670] x3 : 0068000000000707 x2 : 0140000000000000
> [    1.291114] x1 : 00008005effe3000 x0 : ffff80001001d010
> [    1.296567] Call trace:
> [    1.299111]  quirk_usb_early_handoff+0x4fc/0x870
> [    1.303850]  pci_do_fixups+0xe0/0x138
> [    1.307625]  pci_fixup_device+0x4c/0x130
> [    1.311664]  pci_bus_add_device+0x20/0xb8
> [    1.315799]  pci_bus_add_devices+0x38/0x88
> [    1.320004]  pci_bus_add_devices+0x68/0x88
> [    1.324220]  brcm_pcie_probe+0x768/0xb28
> [    1.328259]  platform_drv_probe+0x50/0xa0
> [    1.332387]  really_probe+0xd8/0x438
> [    1.336083]  driver_probe_device+0xdc/0x130
> [    1.340376]  device_driver_attach+0x6c/0x78
> [    1.344679]  __driver_attach+0x9c/0x168
> [    1.348630]  bus_for_each_dev+0x70/0xc0
> [    1.352581]  driver_attach+0x20/0x28
> [    1.356278]  bus_add_driver+0x190/0x220
> [    1.360220]  driver_register+0x60/0x110
> [    1.364169]  __platform_driver_register+0x44/0x50
> [    1.369001]  brcm_pcie_driver_init+0x18/0x20
> [    1.373391]  do_one_initcall+0x74/0x1b0
> [    1.377342]  kernel_init_freeable+0x214/0x2b0
> [    1.381820]  kernel_init+0x10/0x100
> [    1.385418]  ret_from_fork+0x10/0x18
> [    1.389112] Code: aa0003f6 b4ffdce0 91004000 a90363f7 (b9400000)
> [    1.395351] ---[ end trace 8a644ac18423f457 ]---
> [    1.400143] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
> [    1.407898] Kernel Offset: disabled
> [    1.411496] CPU features: 0x10002,61006000
> [    1.415717] Memory Limit: none
> [    1.418874] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---



From xen-devel-bounces@lists.xenproject.org Fri May 01 11:53:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 11:53:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUUEC-0001qq-D6; Fri, 01 May 2020 11:53:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YjF2=6P=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUUEB-0001ql-0D
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 11:53:11 +0000
X-Inumbo-ID: 4ff8f624-8ba2-11ea-9b00-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ff8f624-8ba2-11ea-9b00-12813bfff9fa;
 Fri, 01 May 2020 11:53:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ht4osnPvD5HinAFh+al8nGxGTRu9ApH/7Vr9r+9RuNk=; b=KfKst/dlawl5jKB/Oc5wvB/su
 6siIWfibFCHfNA7FZjgzsaLonVzIRuO5lhZ6RqA0SH20VGJHnePg6LJ9lX+v+auhaLDMSwvoyGDva
 3wn6mLXUgAnaIzCDAaUBxf3b/vBDXx3BmUX3ggkFaugckThwrva0tbiI0udOFYbBStVqM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUUE2-0000ES-UA; Fri, 01 May 2020 11:53:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUUE2-0003MF-JZ; Fri, 01 May 2020 11:53:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUUE2-0006RS-Iv; Fri, 01 May 2020 11:53:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149893-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 149893: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=c45e8bccecaf633480d378daff11e122dfd5e96d
X-Osstest-Versions-That: linux=1d2cc5ac6f6668cc15216d51051103c61467d7e8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 01 May 2020 11:53:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149893 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149893/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149882
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149882
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149882
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149882
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149882
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149882
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149882
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149882
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149882
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                c45e8bccecaf633480d378daff11e122dfd5e96d
baseline version:
 linux                1d2cc5ac6f6668cc15216d51051103c61467d7e8

Last test of basis   149882  2020-04-29 17:08:48 Z    1 days
Testing same since   149893  2020-05-01 00:08:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Brendan Higgins <brendanhiggins@google.com>
  David Gow <davidgow@google.com>
  Douglas Anderson <dianders@chromium.org>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Jason Yan <yanaijie@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marco Elver <elver@google.com>
  Marek Behún <marek.behun@nic.cz>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Milan Broz <gmazyland@gmail.com>
  Paul Moore <paul@paul-moore.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sunwook Eom <speed.eom@samsung.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Wei Yongjun <weiyongjun1@huawei.com>
  Xiao Yang <yangx.jy@cn.fujitsu.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   1d2cc5ac6f66..c45e8bccecaf  c45e8bccecaf633480d378daff11e122dfd5e96d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:00:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 12:00:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUULE-0002ib-BL; Fri, 01 May 2020 12:00:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vLKT=6P=oracle.com=daniel.kiper@srs-us1.protection.inumbo.net>)
 id 1jUULD-0002iW-4E
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 12:00:27 +0000
X-Inumbo-ID: 5769e3a4-8ba3-11ea-9b00-12813bfff9fa
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5769e3a4-8ba3-11ea-9b00-12813bfff9fa;
 Fri, 01 May 2020 12:00:25 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 041Br5FN133462;
 Fri, 1 May 2020 12:00:13 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 content-transfer-encoding : in-reply-to; s=corp-2020-01-29;
 bh=L7cRxAV2fSsC98+NEgXP0poAtObZaznUM2AQxwN8m4c=;
 b=XoBp7z6LoEoz4Zmn9+pncrkBf8HB9TKUGJCadEq0tUuPXzcBfb0qP6P03607f8ncCc1i
 D3lNxYY+BMSN3itr0fdbzK0xYhPa8n99fgGZDkD3JVDmbDlSGhO1K41Y8iR2KIgGynIJ
 gf0TXWf5fb3X08qe9tgxDkIgPmPgwMPBfAF2bGxYughgkBKJ3hKlxGh6u7ix7J5fwD2H
 j0CDrbVx2byzd5JxWtZvM3JOKr+0c0iAWB9h9Bgs6LCsF6XUyKpMTP0qDT1edDstTNlQ
 uafH0vBG8zdSTuIoQHZmQENgD4U/5x411SBVlZfWC7qRUc52IvP/TWkP+S4fUZ4BSSzr XA== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 30r7f3hwjn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 01 May 2020 12:00:13 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 041BqxOa146692;
 Fri, 1 May 2020 12:00:12 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by aserp3020.oracle.com with ESMTP id 30r7fabrb0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 01 May 2020 12:00:12 +0000
Received: from abhmp0009.oracle.com (abhmp0009.oracle.com [141.146.116.15])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 041C037G022330;
 Fri, 1 May 2020 12:00:04 GMT
Received: from tomti.i.net-space.pl (/10.175.206.116)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 01 May 2020 05:00:03 -0700
Date: Fri, 1 May 2020 13:59:59 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>
Subject: Re: [RFC] UEFI Secure Boot on Xen Hosts
Message-ID: <20200501115959.m5pwm735mxbrs66a@tomti.i.net-space.pl>
References: <20200429225108.GA54201@bobbye-pc> <20200430222717.GA6364@mail-itl>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200430222717.GA6364@mail-itl>
User-Agent: NeoMutt/20170113 (1.7.2)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9607
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 suspectscore=0
 phishscore=0 malwarescore=0 mlxscore=0 spamscore=0 mlxlogscore=999
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2003020000 definitions=main-2005010093
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9607
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 bulkscore=0 suspectscore=0
 phishscore=0 mlxlogscore=999 impostorscore=0 spamscore=0 mlxscore=0
 priorityscore=1501 lowpriorityscore=0 malwarescore=0 clxscore=1011
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005010093
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: michal.zygowski@3mdeb.com, Bobby Eshleman <bobbyeshleman@gmail.com>,
 olivier.lambert@vates.fr, krystian.hebel@3mdeb.com,
 xen-devel@lists.xenproject.org, piotr.krol@3mdeb.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 01, 2020 at 12:27:17AM +0200, Marek Marczykowski-Górecki wrote:
> On Wed, Apr 29, 2020 at 05:51:08PM -0500, Bobby Eshleman wrote:
> > # Option #3: Lean on Grub2's LoadFile2() Verification
> >
> > Grub2 will provide a LoadFile2() method to subsequent programs that supports
> > signature verification of arbitrary files.  Linux is moving in the
> > direction of using LoadFile2() for loading the initrd [2], and Grub2 will
> > support verifying the signatures of files loaded via LoadFile2().
>
> This description gives me flashbacks to how xen.efi loads dom0
> kernel+initramfs (Loaded Image Protocol). Practical issues encountered:
>
> 1. It works only when xen.efi is loaded directly via the appropriate EFI
> boot service. If xen.efi is loaded by a filesystem driver in
> grub/another bootloader, it can't load the dom0 kernel+initramfs.
>
> 2. Given the variety of UEFI implementations and boot media, it was almost
> impossible to force the bootloader to use the right file-loading method
> (cases like the same file being accessible both via UEFI fs0: and via grub's
> filesystem driver). Which means loading xen.efi via a bootloader (as
> opposed to directly from UEFI) was hit or miss (mostly miss).
>
> 3. To avoid the above issue, one was forced to load xen.efi directly
> from EFI. This gave up any possibility of manipulating the xen cmdline, or
> even of choosing the system to boot on some EFI implementations.

Are you talking about the GRUB chainloader command? If yes, then use "set
root=..." to select the ESP before running the chainloader. Of course, the
xen.cfg, kernel and initramfs have to live on the ESP then. Xen uses UEFI
file system calls, which understand FAT only, and if the GRUB root points
to a non-FAT partition then Xen cannot read anything from it...
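
A grub.cfg sketch of what is described above; the device name and file
paths are illustrative assumptions, not taken from any real setup:

```
# Illustrative menuentry chainloading xen.efi from the ESP.
# (hd0,gpt1) and the paths are assumptions for this sketch; xen.cfg,
# the dom0 kernel and the initramfs must all live on the same
# FAT-formatted ESP, since xen.efi reads them via UEFI calls.
menuentry 'Xen (chainloaded)' {
    set root=(hd0,gpt1)            # point GRUB's root at the ESP
    chainloader /EFI/xen/xen.efi
}
```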

> 4. Even if one is lucky enough to boot xen.efi via grub2-efi, still only
> the xen options can be modified, but not those for dom0.
>
> 5. It was impossible to load xen/kernel/initrd using fancy grub/ipxe
> features like HTTP.

Why can't you use multiboot2/module2 to load Xen from GRUB on x86 UEFI
machines? It does not have the issues discussed above. Additionally, the
Multiboot2 protocol works on legacy BIOS machines too.
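
A multiboot2-based menuentry along these lines might look as follows; the
paths and command-line options are illustrative assumptions:

```
# Illustrative menuentry using multiboot2/module2. Unlike the
# chainloader route, the files can live on any filesystem GRUB
# understands, and the Xen and dom0 command lines are both editable
# from the GRUB menu.
menuentry 'Xen (multiboot2)' {
    multiboot2 /boot/xen.gz dom0_mem=4096M
    module2    /boot/vmlinuz root=/dev/mapper/root ro
    module2    /boot/initramfs.img
}
```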

> In practice the above points mean:
>
> * To get a usable product, one is forced to enable all kinds of
>   workarounds (not only related to EFI) _in the default configuration_,
>   because otherwise there is very little chance to enable them only when
>   needed.
>
> * If something goes wrong, in most cases you need to boot from
>   alternative media (to edit xen.cfg or EFI variables). If you happen
>   to forget your rescue USB stick, you are out of luck.
>
> So, please, can someone confirm the LoadFile2() approach wouldn't have
> any of the above issues?

If it is done properly, it should not.

Daniel


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:02:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 12:02:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUUNa-0002q0-Oq; Fri, 01 May 2020 12:02:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UQub=6P=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jUUNY-0002pv-ND
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 12:02:52 +0000
X-Inumbo-ID: aeb0f260-8ba3-11ea-9b00-12813bfff9fa
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aeb0f260-8ba3-11ea-9b00-12813bfff9fa;
 Fri, 01 May 2020 12:02:51 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id g12so6106310wmh.3
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 05:02:51 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=g8BdGaK3T4ewPDAvcqcX4aOkqblz2cj6sMht5XmYlIg=;
 b=STZidTBjy3wOraNnG/zyvqsdwfW3BgSDlkwpCMr4U4Zz7bMxyrYu7Y3H7blTrmEwSG
 wopG418fiUA9G8DXxMNXQXIhbfTn1/LSjTGKzO486lCkAUqoCSQTWS2cGuHqD/42LUbH
 /sqqEqfxL6tIXIBFYflVOYsOl2t8i/ZMYs54dFoQBDfW5zbiat/SkBnFGc5uHxl6ZQGX
 XZcQNNx3K7/RJy1mSuE/8VH3Agx6ofP/rmLOc9yBcof2Ms/u5t+EDEYK/QpOjM5r87kS
 nvhYS4AGZebSNS6K8nLuip/Rxfv8nFdavpLhjxKsPHvrH7dfsK19xWMcJ/IkjWTW2+w1
 5C7A==
X-Gm-Message-State: AGi0PubD4evoLdMWrRVXbwEdp16JGyrhuBThOoyUHbtjYM+FQfSMSg7I
 D2UVuxAphwGcVu9PccPrTtQ=
X-Google-Smtp-Source: APiQypI6FOwjwFUFoxCqiVkExJmPNqtNHhxS4j3S4de2+aPiykkhZf4ltYXR0BbSXlIwgXxO+G3Tlw==
X-Received: by 2002:a05:600c:2214:: with SMTP id
 z20mr3990339wml.189.1588334571169; 
 Fri, 01 May 2020 05:02:51 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q8sm3650021wmg.22.2020.05.01.05.02.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 01 May 2020 05:02:50 -0700 (PDT)
Date: Fri, 1 May 2020 12:02:49 +0000
From: Wei Liu <wl@xen.org>
To: Hongyan Xia <hx242@xen.org>
Subject: Re: [PATCH 02/16] acpi: vmap pages in acpi_os_alloc_memory
Message-ID: <20200501120249.bd2w6ksgii4tkrai@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <a71d1903267b84afdb0e54fa2ac55540ab2f9357.1588278317.git.hongyxia@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <a71d1903267b84afdb0e54fa2ac55540ab2f9357.1588278317.git.hongyxia@amazon.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 09:44:11PM +0100, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> Also, introduce a wrapper around vmap that maps a contiguous range for
> boot allocations.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> ---
>  xen/drivers/acpi/osl.c | 9 ++++++++-
>  xen/include/xen/vmap.h | 5 +++++
>  2 files changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> index 4c8bb7839e..d0762dad4e 100644
> --- a/xen/drivers/acpi/osl.c
> +++ b/xen/drivers/acpi/osl.c
> @@ -219,7 +219,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
>  	void *ptr;
>  
>  	if (system_state == SYS_STATE_early_boot)
> -		return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
> +	{
> +		mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
> +
> +		return vmap_boot_pages(mfn, PFN_UP(sz));
> +	}
>  
>  	ptr = xmalloc_bytes(sz);
>  	ASSERT(!ptr || is_xmalloc_memory(ptr));
> @@ -244,5 +248,8 @@ void __init acpi_os_free_memory(void *ptr)
>  	if (is_xmalloc_memory(ptr))
>  		xfree(ptr);
>  	else if (ptr && system_state == SYS_STATE_early_boot)
> +	{
> +		vunmap(ptr);
>  		init_boot_pages(__pa(ptr), __pa(ptr) + PAGE_SIZE);
> +	}
>  }
> diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
> index 369560e620..c70801e195 100644
> --- a/xen/include/xen/vmap.h
> +++ b/xen/include/xen/vmap.h
> @@ -23,6 +23,11 @@ void *vmalloc_xen(size_t size);
>  void *vzalloc(size_t size);
>  void vfree(void *va);
>  
> +static inline void *vmap_boot_pages(mfn_t mfn, unsigned int nr_pages)

Nothing seems to tie this to boot pages only. Maybe it would be better to
name it after what it does, like vmap_mfns?

> +{
> +    return __vmap(&mfn, nr_pages, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
> +}
> +
>  void __iomem *ioremap(paddr_t, size_t);
>  
>  static inline void iounmap(void __iomem *va)
> -- 
> 2.24.1.AMZN
> 
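
To make the rename suggestion concrete, it would amount to the following
sketch; the body is taken verbatim from the quoted patch, with Xen's
internal __vmap(), PAGE_HYPERVISOR and VMAP_DEFAULT assumed to be in scope:

```c
/* Sketch of the suggested rename: nothing in the helper is
 * boot-specific, so name it after what it does. */
static inline void *vmap_mfns(mfn_t mfn, unsigned int nr_pages)
{
    return __vmap(&mfn, nr_pages, 1, 1, PAGE_HYPERVISOR, VMAP_DEFAULT);
}
```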


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:07:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 12:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUURs-0002zh-BP; Fri, 01 May 2020 12:07:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UQub=6P=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jUURr-0002zc-7d
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 12:07:19 +0000
X-Inumbo-ID: 4d872cb0-8ba4-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d872cb0-8ba4-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 12:07:18 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id d15so11323407wrx.3
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 05:07:18 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=KcUaR/6Xxp7zv1PaB/sN8VK7jUrhkZVI6Q4Sx2lflC4=;
 b=h4knzI+Qy+3ZzytKeNtY1fFNI9/ocOnS1REntgMH7CZy+TP8irepuxYa6JBYMqKCi3
 MkVw9FeAZFZN751u/6iYcmEtGdvoDTQgWLDiuCj7SG/WK+nqcvm0QJDChBODqoUmr9hT
 vXV4+6HsM139wehEU6rO3XbsLJbQxsuQV2jF9K9CGz+VFwOYjepX3SK72WdrHHA2ZRCi
 z2iH552AVfEL1KlqIpMf/GbaSJK6W1RbVSseDFHqLf2cHwvjxLdts5lWAM3l7W4P6jAA
 TF4EaH/AjbAnfkpnmiw05H3y2Q0rdFejt2LL+IbtiBZXaG7Hjj2jn2Za/0EZm29jYvkM
 LwNw==
X-Gm-Message-State: AGi0PuZkasmpWxibkExZLBoZ8f2qYhFnTja4VtyHtphMyJ3yLkUdLaJr
 REcLPNUEf0eqCcz4JzolC1Y=
X-Google-Smtp-Source: APiQypJ2huXAmES/4WmwyLIpismsDDSwtf3Yf8xKof78ikM80E1CyDkgwq/I7z35OaaHqcIYuJbqJg==
X-Received: by 2002:adf:ea05:: with SMTP id q5mr4216817wrm.180.1588334837612; 
 Fri, 01 May 2020 05:07:17 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id k133sm4087847wma.0.2020.05.01.05.07.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 01 May 2020 05:07:17 -0700 (PDT)
Date: Fri, 1 May 2020 12:07:15 +0000
From: Wei Liu <wl@xen.org>
To: Hongyan Xia <hx242@xen.org>
Subject: Re: [PATCH 00/16] Remove the direct map
Message-ID: <20200501120715.hjak2gjp7ialwfq5@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <cover.1588278317.git.hongyxia@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cover.1588278317.git.hongyxia@amazon.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 09:44:09PM +0100, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> This series depends on Xen page table domheap conversion:
> https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg01374.html.
> 
> After breaking the reliance on the direct map to manipulate Xen page
> tables, we can now finally remove the direct map altogether.
> 
> This series:
> - fixes many places that use the direct map incorrectly or assume the
>   presence of an always-mapped direct map in a wrong way.
> - includes the early vmap patches for global mappings.
> - initialises the mapcache for all domains, and disables the fast path
>   that uses the direct map for mappings.
> - maps and unmaps xenheap on-demand.
> - adds a boot command line switch to enable or disable the direct map.
> 
> The previous version was in RFC state and can be found here:
> https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg02647.html,
> which has since been broken into smaller series.

OOI have you done any performance measurements?

Seeing that now even guest tables need mapping / unmapping during
restore, I'm curious to know how that would impact performance.

Wei.

> 
> Hongyan Xia (12):
>   acpi: vmap pages in acpi_os_alloc_memory
>   x86/numa: vmap the pages for memnodemap
>   x86/srat: vmap the pages for acpi_slit
>   x86: map/unmap pages in restore_all_guests.
>   x86/pv: rewrite how building PV dom0 handles domheap mappings
>   x86/mapcache: initialise the mapcache for the idle domain
>   x86: add a boot option to enable and disable the direct map
>   x86/domain_page: remove the fast paths when mfn is not in the
>     directmap
>   xen/page_alloc: add a path for xenheap when there is no direct map
>   x86/setup: leave early boot slightly earlier
>   x86/setup: vmap heap nodes when they are outside the direct map
>   x86/setup: do not create valid mappings when directmap=no
> 
> Wei Liu (4):
>   x86/setup: move vm_init() before acpi calls
>   x86/pv: domheap pages should be mapped while relocating initrd
>   x86: add Persistent Map (PMAP) infrastructure
>   x86: lift mapcache variable to the arch level
> 
>  docs/misc/xen-command-line.pandoc |  12 +++
>  xen/arch/arm/setup.c              |   4 +-
>  xen/arch/x86/Makefile             |   1 +
>  xen/arch/x86/domain.c             |   4 +-
>  xen/arch/x86/domain_page.c        |  53 ++++++++-----
>  xen/arch/x86/mm.c                 |   8 +-
>  xen/arch/x86/numa.c               |   8 +-
>  xen/arch/x86/pmap.c               |  87 +++++++++++++++++++++
>  xen/arch/x86/pv/dom0_build.c      |  75 ++++++++++++++----
>  xen/arch/x86/setup.c              | 125 +++++++++++++++++++++++++-----
>  xen/arch/x86/srat.c               |   3 +-
>  xen/arch/x86/x86_64/entry.S       |  27 ++++++-
>  xen/common/page_alloc.c           |  85 +++++++++++++++++---
>  xen/common/vmap.c                 |  37 +++++++--
>  xen/drivers/acpi/osl.c            |   9 ++-
>  xen/include/asm-arm/mm.h          |   5 ++
>  xen/include/asm-x86/domain.h      |  12 +--
>  xen/include/asm-x86/fixmap.h      |   3 +
>  xen/include/asm-x86/mm.h          |  17 +++-
>  xen/include/asm-x86/pmap.h        |  10 +++
>  xen/include/xen/vmap.h            |   5 ++
>  21 files changed, 495 insertions(+), 95 deletions(-)
>  create mode 100644 xen/arch/x86/pmap.c
>  create mode 100644 xen/include/asm-x86/pmap.h
> 
> -- 
> 2.24.1.AMZN
> 


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:11:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 12:11:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUUW2-0003mZ-U8; Fri, 01 May 2020 12:11:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UQub=6P=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jUUW2-0003mU-1Q
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 12:11:38 +0000
X-Inumbo-ID: e6cd0944-8ba4-11ea-9b01-12813bfff9fa
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6cd0944-8ba4-11ea-9b01-12813bfff9fa;
 Fri, 01 May 2020 12:11:35 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id g12so6127423wmh.3
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 05:11:35 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=oISAyXYDrHJxSbEFstkEoJjITTjc22xzBIPAVa2VhQk=;
 b=DrEXsWheMDZTc7YJxxd9dd1bEBFt78YHrhZuL4xv/tqS/EUL4cT5kwdR528LTNWp9+
 zM9sxo6/JtChf6Z7AOIqCzmby9Nt4rFpwAFnZ0hiGgrPUmUT3XFceY3Tvr7qILQpIBLL
 BK5LiH3yR3tJtXGFlQosbwCUiu1Urn50EOUpRV89NY1bJeYjoCodxt879Vu5DRn2YBKt
 VVIOq0bmXteL68gdkM7fuwNaZed2zUx5cyreGrkbTZVBenna9/YSj491oFs3c3nR26IH
 cjP8YGyMiSsnRFiwvgj866NKLSGMH/uACYBcBtvVFSwog/5tHFOCqXOEO0tT+x2hT6Uh
 Utrg==
X-Gm-Message-State: AGi0PuaKjGh0h5VC5LuwOp0DRA+2DnvhYX/iZqnXGq0/q5StOSfKfNOo
 HGf+zKXZmMU2niZpw8zowog=
X-Google-Smtp-Source: APiQypIRtzFVm+O8Mk/G5G33UG5hi8Ufted1QM/8uykOvQQtW0zJNP0MAxmowiGouF46U0rtmd5BOA==
X-Received: by 2002:a1c:9cca:: with SMTP id f193mr3655724wme.71.1588335094797; 
 Fri, 01 May 2020 05:11:34 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a9sm3547346wmm.38.2020.05.01.05.11.33
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 01 May 2020 05:11:34 -0700 (PDT)
Date: Fri, 1 May 2020 12:11:32 +0000
From: Wei Liu <wl@xen.org>
To: Hongyan Xia <hx242@xen.org>
Subject: Re: [PATCH 11/16] x86: add a boot option to enable and disable the
 direct map
Message-ID: <20200501121132.kzhu7u2vmpoeju2x@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <7360b59e8fd39796fee56430a437b20c948d08c2.1588278317.git.hongyxia@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <7360b59e8fd39796fee56430a437b20c948d08c2.1588278317.git.hongyxia@amazon.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 09:44:20PM +0100, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> Also add a helper function to retrieve it. Change arch_mfn_in_direct_map
> to check this option before returning.
> 
> This is added as a boot command line option, not a Kconfig. We do not
> produce different builds for EC2 so this is not introduced as a
> compile-time configuration.

Having a Kconfig option would probably allow the compiler to eliminate
dead code.

This is not asking you to do the work; someone can come along and adjust
arch_has_directmap easily later.
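
As a rough illustration of the Kconfig variant (the names
CONFIG_HAS_DIRECTMAP and opt_directmap are hypothetical, not the ones the
series uses):

```c
#include <stdbool.h>

/* Hypothetical sketch: when the option is compiled out,
 * arch_has_directmap() collapses to a constant false, so the
 * compiler can eliminate the dead branches at every call site. */
#define CONFIG_HAS_DIRECTMAP 1

static bool opt_directmap = true;   /* runtime: parsed from the command line */

static inline bool arch_has_directmap(void)
{
#ifdef CONFIG_HAS_DIRECTMAP
    return opt_directmap;           /* runtime-controlled */
#else
    return false;                   /* compile-time constant: dead code elided */
#endif
}
```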

Wei.


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:16:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 12:16:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUUaA-0003wa-Fo; Fri, 01 May 2020 12:15:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UQub=6P=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jUUa8-0003wV-Tb
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 12:15:52 +0000
X-Inumbo-ID: 7f437b54-8ba5-11ea-9b01-12813bfff9fa
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f437b54-8ba5-11ea-9b01-12813bfff9fa;
 Fri, 01 May 2020 12:15:51 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id v4so9676367wme.1
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 05:15:51 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=vICyT7apXMYKPCbuxxQqROkSh5+Ohp8a/LbOd+8OaeQ=;
 b=jJ/SUTEeHeEfVCU5SwLt8MCoTGl+JN4CyxkxtiHb2l3r5/XfKz+8oF5omdqda33cUj
 ypayty3SvbS0yQ9r4Ibtn0sPTiqbfisVTfhpcTs6ig1TJKP8gswOY0VmjN5cFQBV9jvf
 DxJMZ1XTNezRU3iks0lDPBLZRUQQl4rWDb6smjJmb8tBDoLBa5NAmFec7ebWSQurSqJ4
 oIQ2Zi0tzqRGMVfCESwxNsJxL6FuttmC+71ZM7C5t53TeXUm0pT+bhiJDHKRUvQClca7
 Bnu0hS+lvQoGsE7ki1jy/U9S3NpPuAFaMiwIWiPdaiZp/u+YagcS1yt70Gliw5cvtXve
 A6lw==
X-Gm-Message-State: AGi0PubgLwkGh5kIaGRPRpUeNfecLmWMqRArPbXHgL72toZHqkLjeuTN
 c+iD6b91iYCXTUWXFksjZqY=
X-Google-Smtp-Source: APiQypIUZJI4F6AZaaYtMHcvccg6xv8A60GVGB38/2fKc50ytb5V2ZuNf/Ma/ky8r3SzOqKqlsEbhg==
X-Received: by 2002:a7b:c156:: with SMTP id z22mr3973885wmi.51.1588335350613; 
 Fri, 01 May 2020 05:15:50 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a24sm3644621wmb.24.2020.05.01.05.15.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 01 May 2020 05:15:50 -0700 (PDT)
Date: Fri, 1 May 2020 12:15:48 +0000
From: Wei Liu <wl@xen.org>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH] tools/xl: vcpu-pin: Skip global affinity when the hard
 affinity is not changed
Message-ID: <20200501121548.2iv5hbztxmcsxjwa@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200430110051.8965-1-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200430110051.8965-1-julien@xen.org>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Pawel Wieczorkiewicz <wipawel@amazon.de>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 12:00:51PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> After XSA-273, it is not possible to modify the vCPU soft affinity using
> xl vcpu-pin without modifying the hard affinity. Instead the command
> will crash.
> 
> 42sh> gdb /usr/local/sbin/xl
> (gdb) r vcpu-pin 0 0 - 10
> [...]
> Program received signal SIGSEGV, Segmentation fault.
> [...]
> (gdb) bt
> 
> This is happening because 'xl' will use NULL when an affinity doesn't
> need to be modified. However, we will still try to apply the global
> affinity in this case.
> 
> As the hard affinity is not changed, we don't need to apply the
> global affinity. So skip it when hard is NULL.
> 
> Backport: 4.6+ # Any release with XSA-273
> Fixes: aa67b97ed342 ("xl.conf: Add global affinity masks")
> Reported-by: Pawel Wieczorkiewicz <wipawel@amazon.de>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Wei Liu <wl@xen.org>

> ---
>  tools/xl/xl_vcpu.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/xl/xl_vcpu.c b/tools/xl/xl_vcpu.c
> index 9ff5354f749b..66877a57dee4 100644
> --- a/tools/xl/xl_vcpu.c
> +++ b/tools/xl/xl_vcpu.c
> @@ -283,7 +283,7 @@ int main_vcpupin(int argc, char **argv)
>      }
>  
>      /* Only hard affinity matters here */
> -    if (!ignore_masks) {
> +    if (!ignore_masks && hard) {
>          libxl_dominfo dominfo;
>  
>          if (libxl_domain_info(ctx, &dominfo, domid)) {
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:17:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 12:17:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUUc3-00042x-SX; Fri, 01 May 2020 12:17:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UQub=6P=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jUUc2-00042s-MV
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 12:17:50 +0000
X-Inumbo-ID: c591095a-8ba5-11ea-9b01-12813bfff9fa
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c591095a-8ba5-11ea-9b01-12813bfff9fa;
 Fri, 01 May 2020 12:17:49 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id x18so11352193wrq.2
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 05:17:49 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=liMYuUxId91gL59cBzKMthRPQOfyR4BgWxxNuFgvwkk=;
 b=nRruK9uEzHn/DE5JRtu2I8Xwcf8sJWYR2iXky6DP4GN7t5w1aAk3osBiulO0ggs8fc
 BpcuhUMQON2Hgb0DbKkqxkIkJrL7Vr+svYFBzWS+H12++CHSf+sKcs21dHgZN5U28UYn
 WRXfCotLF9VZIg8HH71ZX8gxSvaif8Uh1KF7MQrn+uEx8pFgXQkIBce6HSLw1TI9E7Nn
 ZD3btS+8D7Ha2ZJ8B9VrIdhT/OMdE0wkpkJOP/NSXYij/awdNZew2065XXdavtpfXin2
 5CldJM3NnbvR2nRvDYEyGlOZkUXTVNgiarvPN/w+5+VLwrG340FiYLI6YGqBEpDj0sum
 jXFg==
X-Gm-Message-State: AGi0PuYrFaBp/y3vpRzcC6nS4c3jGGuuioZQUc4R2iOQJmqIw40IL5hY
 ppYgbwWjYYV8C6MBgZfwlis=
X-Google-Smtp-Source: APiQypILDFJz8N0xxelSBx2skcFO/uDfxdcAdzwAB3F4fBcu0c3DUkud3Yt4tS5q/eoqH9XkSJQfkQ==
X-Received: by 2002:adf:cd0a:: with SMTP id w10mr3822774wrm.404.1588335468547; 
 Fri, 01 May 2020 05:17:48 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u30sm4320515wru.13.2020.05.01.05.17.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 01 May 2020 05:17:48 -0700 (PDT)
Date: Fri, 1 May 2020 12:17:46 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/HyperV: correct hv_hcall_page for xen.efi build
Message-ID: <20200501121746.7t6xtvtu2w5l5oxb@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <c45d6098-a4e1-b565-4180-cc6da433dc57@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <c45d6098-a4e1-b565-4180-cc6da433dc57@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 12:24:15PM +0200, Jan Beulich wrote:
> Along the lines of what the not reverted part of 3c4b2eef4941 ("x86:
> refine link time stub area related assertion") did, we need to transform
> the absolute HV_HCALL_PAGE into the image base relative hv_hcall_page
> (or else there'd be no need for two distinct symbols). Otherwise
> mkreloc, as used for generating the base relocations of xen.efi, will
> spit out warnings like "Difference at .text:0009b74f is 0xc0000000
> (expected 0x40000000)". As long as the offending relocations are PC
> relative ones, the generated binary is correct afaict, but if the
> absolute address were ever stored, xen.efi would miss a fixup for it.
> 
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Wei Liu <wl@xen.org>

> ---
> Build tested only (and generated binary inspected) - Wei, please check
> that this doesn't break things.
> 

I don't have time to verify this in the next couple of weeks, but I will
surely notice if there is a breakage.

> --- a/xen/arch/x86/xen.lds.S
> +++ b/xen/arch/x86/xen.lds.S
> @@ -327,7 +327,7 @@ SECTIONS
>  #endif
>  
>  #ifdef CONFIG_HYPERV_GUEST
> -  hv_hcall_page = ABSOLUTE(HV_HCALL_PAGE);
> +  hv_hcall_page = ABSOLUTE(HV_HCALL_PAGE - XEN_VIRT_START + __XEN_VIRT_START);
>  #endif
>  
>    /* Sections to be discarded */


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:19:47 2020
Subject: Re: [PATCH] x86/HyperV: correct hv_hcall_page for xen.efi build
To: Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>
References: <c45d6098-a4e1-b565-4180-cc6da433dc57@suse.com>
 <20200501121746.7t6xtvtu2w5l5oxb@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fb545dac-43a9-cda8-f1a4-1b716308f8d2@citrix.com>
Date: Fri, 1 May 2020 13:19:39 +0100

On 01/05/2020 13:17, Wei Liu wrote:
> On Thu, Apr 30, 2020 at 12:24:15PM +0200, Jan Beulich wrote:
>> Along the lines of what the not reverted part of 3c4b2eef4941 ("x86:
>> refine link time stub area related assertion") did, we need to transform
>> the absolute HV_HCALL_PAGE into the image base relative hv_hcall_page
>> (or else there'd be no need for two distinct symbols). Otherwise
>> mkreloc, as used for generating the base relocations of xen.efi, will
>> spit out warnings like "Difference at .text:0009b74f is 0xc0000000
>> (expected 0x40000000)". As long as the offending relocations are PC
>> relative ones, the generated binary is correct afaict, but if there ever
>> was the absolute address stored, xen.efi would miss a fixup for it.
>>
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Wei Liu <wl@xen.org>
>
>> ---
>> Build tested only (and generated binary inspected) - Wei, please check
>> that this doesn't break things.
>>
> I don't have time to verify this in the next couple of weeks, but I will
> surely notice if there is a breakage.

FWIW, the resulting assembly is identical, so it surely will be as fine
as it was previously.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:21:24 2020
Date: Fri, 1 May 2020 12:21:15 +0000
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/HyperV: correct hv_hcall_page for xen.efi build
Message-ID: <20200501122115.dfbcpczxtiragkzo@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <c45d6098-a4e1-b565-4180-cc6da433dc57@suse.com>
 <20200501121746.7t6xtvtu2w5l5oxb@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fb545dac-43a9-cda8-f1a4-1b716308f8d2@citrix.com>

On Fri, May 01, 2020 at 01:19:39PM +0100, Andrew Cooper wrote:
> On 01/05/2020 13:17, Wei Liu wrote:
> > On Thu, Apr 30, 2020 at 12:24:15PM +0200, Jan Beulich wrote:
> >> Along the lines of what the not reverted part of 3c4b2eef4941 ("x86:
> >> refine link time stub area related assertion") did, we need to transform
> >> the absolute HV_HCALL_PAGE into the image base relative hv_hcall_page
> >> (or else there'd be no need for two distinct symbols). Otherwise
> >> mkreloc, as used for generating the base relocations of xen.efi, will
> >> spit out warnings like "Difference at .text:0009b74f is 0xc0000000
> >> (expected 0x40000000)". As long as the offending relocations are PC
> >> relative ones, the generated binary is correct afaict, but if there ever
> >> was the absolute address stored, xen.efi would miss a fixup for it.
> >>
> >> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > Acked-by: Wei Liu <wl@xen.org>
> >
> >> ---
> >> Build tested only (and generated binary inspected) - Wei, please check
> >> that this doesn't break things.
> >>
> > I don't have time to verify this in the next couple of weeks, but I will
> > surely notice if there is a breakage.
> 
> FWIW, the resulting assembly is identical, so it surely will be as fine
> as it was previously.

Thanks. That should be okay.

Wei.

> 
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 01 12:42:20 2020
Date: Fri, 1 May 2020 14:42:01 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Daniel Kiper <daniel.kiper@oracle.com>
Subject: Re: [RFC] UEFI Secure Boot on Xen Hosts
Message-ID: <20200501124201.GJ1178@mail-itl>
References: <20200429225108.GA54201@bobbye-pc> <20200430222717.GA6364@mail-itl>
 <20200501115959.m5pwm735mxbrs66a@tomti.i.net-space.pl>



On Fri, May 01, 2020 at 01:59:59PM +0200, Daniel Kiper wrote:
> > On Fri, May 01, 2020 at 12:27:17AM +0200, Marek Marczykowski-Górecki wrote:
> > On Wed, Apr 29, 2020 at 05:51:08PM -0500, Bobby Eshleman wrote:
> > > # Option #3: Lean on Grub2's LoadFile2() Verification
> > >
> > > Grub2 will provide a LoadFile2() method to subsequent programs that
> > > supports signature verification of arbitrary files.  Linux is moving in
> > > the direction of using LoadFile2() for loading the initrd [2], and Grub2
> > > will support verifying the signatures of files loaded via LoadFile2().
> >
> > This description gives me flashbacks to how xen.efi loads dom0
> > kernel+initramfs (Loaded Image Protocol). Practical issues encountered:
> >
> > 1. It works only when xen.efi is loaded via the appropriate EFI boot
> > service directly. If xen.efi is loaded by a filesystem driver in
> > grub/another bootloader, it can't load dom0 kernel+initramfs.
> >
> > 2. Given the variety of UEFI implementations and boot media, it was almost
> > impossible to force the bootloader to use the right file loading method
> > (cases like the same file being accessible both via UEFI fs0: and via
> > grub's filesystem driver). Which means loading xen.efi via a bootloader
> > (as opposed to directly from UEFI) was hit or miss (mostly miss).
> >
> > 3. To avoid the above issue, one was forced to load xen.efi directly
> > from EFI. This gave up any possibility of manipulating the xen cmdline,
> > or even choosing which system to boot on some EFI implementations.
>
> Are you talking about the GRUB chainloader command? If yes then use
> "set root=..." to select the ESP before running the chainloader.

This is exactly what led to the issues in point 2: in many cases it
ended up accessing the ESP via grub's own FAT driver.

> Of course the xen.cfg,
> kernel and initramfs have to live on ESP then.

Which is another issue: an ESP on ISO9660 is limited to 32MB, a very
tight limit for the initramfs of an installer image.

> Xen uses UEFI file system
> calls which understand FAT only. And if GRUB root points to a non-FAT
> partition then Xen cannot read anything from it...
>
> > 4. Even if one is lucky to boot xen.efi via grub2-efi, then still only
> > xen options could be modified. But not those for dom0.
> >
> > 5. It was impossible to load xen/kernel/initrd using fancy grub/ipxe
> > features like HTTP.
>
> Why cannot you use multiboot2/module2 to load Xen from GRUB on x86 UEFI
> machines? It does not have issues discussed above. Additionally, the
> Multiboot2 protocol works on legacy BIOS machines too.

Yes, multiboot2 (now supported with UEFI too) solves all of the above. I
want to avoid a situation where we'd go back to (a subset of) the state
from before multiboot2 on UEFI.
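
For reference, the multiboot2 path described above is driven by a grub.cfg entry along these lines (the paths, volume layout and command-line options are illustrative, not taken from this thread):

```
menuentry 'Xen hypervisor' {
    multiboot2 /boot/xen.gz console=vga dom0_mem=4096M
    module2    /boot/vmlinuz root=/dev/sda2 console=hvc0
    module2    /boot/initrd.img
}
```

Because grub itself reads these files, they can live on any filesystem grub understands, and the kernel/initrd can be edited or fetched via grub's own loaders rather than through UEFI file system calls.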

>
> > In practice the above points mean:
> >
> > * To get a usable product, one is forced to enable all kinds of
> >   workarounds (not only related to EFI) _in the default configuration_,
> >   because otherwise there is very little chance to enable them only when
> >   needed.
> >
> > * If something goes wrong, in most cases you need to boot from
> >   alternative media (to edit xen.cfg, or efi variables). If you happen
> >   to forget your rescue usb stick, you are out of luck.
> >
> > So, please, can someone confirm the LoadFile2() approach wouldn't have
> > any of the above issues?
>
> If it is done properly it should not.
>
> Daniel

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?



From xen-devel-bounces@lists.xenproject.org Fri May 01 12:46:35 2020
Message-ID: <e16decc83bc9b097e2df2b80d0c71565a1fd280b.camel@xen.org>
Subject: Re: [PATCH 02/16] acpi: vmap pages in acpi_os_alloc_memory
From: Hongyan Xia <hx242@xen.org>
To: Wei Liu <wl@xen.org>
Date: Fri, 01 May 2020 13:46:29 +0100
In-Reply-To: <20200501120249.bd2w6ksgii4tkrai@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <a71d1903267b84afdb0e54fa2ac55540ab2f9357.1588278317.git.hongyxia@amazon.com>
 <20200501120249.bd2w6ksgii4tkrai@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>

On Fri, 2020-05-01 at 12:02 +0000, Wei Liu wrote:
> On Thu, Apr 30, 2020 at 09:44:11PM +0100, Hongyan Xia wrote:
> > From: Hongyan Xia <hongyxia@amazon.com>
> > 
> > Also, introduce a wrapper around vmap that maps a contiguous range
> > for
> > boot allocations.
> > 
> > Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> > ---
> >  xen/drivers/acpi/osl.c | 9 ++++++++-
> >  xen/include/xen/vmap.h | 5 +++++
> >  2 files changed, 13 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> > index 4c8bb7839e..d0762dad4e 100644
> > --- a/xen/drivers/acpi/osl.c
> > +++ b/xen/drivers/acpi/osl.c
> > @@ -219,7 +219,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
> >  	void *ptr;
> >  
> >  	if (system_state == SYS_STATE_early_boot)
> > -		return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz),
> > 1)));
> > +	{
> > +		mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
> > +
> > +		return vmap_boot_pages(mfn, PFN_UP(sz));
> > +	}
> >  
> >  	ptr = xmalloc_bytes(sz);
> >  	ASSERT(!ptr || is_xmalloc_memory(ptr));
> > @@ -244,5 +248,8 @@ void __init acpi_os_free_memory(void *ptr)
> >  	if (is_xmalloc_memory(ptr))
> >  		xfree(ptr);
> >  	else if (ptr && system_state == SYS_STATE_early_boot)
> > +	{
> > +		vunmap(ptr);
> >  		init_boot_pages(__pa(ptr), __pa(ptr) + PAGE_SIZE);
> > +	}
> >  }
> > diff --git a/xen/include/xen/vmap.h b/xen/include/xen/vmap.h
> > index 369560e620..c70801e195 100644
> > --- a/xen/include/xen/vmap.h
> > +++ b/xen/include/xen/vmap.h
> > @@ -23,6 +23,11 @@ void *vmalloc_xen(size_t size);
> >  void *vzalloc(size_t size);
> >  void vfree(void *va);
> >  
> > +static inline void *vmap_boot_pages(mfn_t mfn, unsigned int
> > nr_pages)
> 
> Nothing seems to tie this to boot pages only. Maybe it is better to
> name it after what it does, like vmap_mfns?

Hmm, indeed nothing so special about *boot* pages. Will change.

Hongyan
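
For illustration, a self-contained sketch of what the renamed helper could look like. mfn_t and vmap() are mocked in userspace here (in Xen, vmap() takes an array of MFNs, and the real wrapper would more likely pass a granularity to __vmap() rather than build a temporary array), so treat the names and signatures below as assumptions:

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for Xen's typesafe machine frame number wrapper. */
typedef struct { unsigned long mfn; } mfn_t;

/* Mock of Xen's vmap(): records what it was asked to map. */
static unsigned int last_nr;
static unsigned long last_first;
static int last_contig;

static void *vmap(const mfn_t *mfn, unsigned int nr)
{
    last_nr = nr;
    last_first = mfn[0].mfn;
    last_contig = 1;
    for (unsigned int i = 1; i < nr; i++)
        if (mfn[i].mfn != mfn[0].mfn + i)
            last_contig = 0;
    return (void *)0x200000; /* pretend virtual address */
}

/*
 * The wrapper under discussion, renamed vmap_mfns() per Wei's
 * suggestion: map `nr_pages` physically contiguous pages starting
 * at `mfn` into vmap space.
 */
static void *vmap_mfns(mfn_t mfn, unsigned int nr_pages)
{
    mfn_t mfns[nr_pages]; /* VLA is fine for a sketch, not for Xen itself */

    for (unsigned int i = 0; i < nr_pages; i++)
        mfns[i] = (mfn_t){ mfn.mfn + i };
    return vmap(mfns, nr_pages);
}
```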



From xen-devel-bounces@lists.xenproject.org Fri May 01 12:59:39 2020
Message-ID: <2235f884b65c9f20cf55637f91ddab6924f53ca1.camel@xen.org>
Subject: Re: [PATCH 11/16] x86: add a boot option to enable and disable the
 direct map
From: Hongyan Xia <hx242@xen.org>
To: Wei Liu <wl@xen.org>
Date: Fri, 01 May 2020 13:59:24 +0100
In-Reply-To: <20200501121132.kzhu7u2vmpoeju2x@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <7360b59e8fd39796fee56430a437b20c948d08c2.1588278317.git.hongyxia@amazon.com>
 <20200501121132.kzhu7u2vmpoeju2x@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>

On Fri, 2020-05-01 at 12:11 +0000, Wei Liu wrote:
> On Thu, Apr 30, 2020 at 09:44:20PM +0100, Hongyan Xia wrote:
> > From: Hongyan Xia <hongyxia@amazon.com>
> > 
> > Also add a helper function to retrieve it. Change arch_mfn_in_direct_map
> > to check this option before returning.
> > 
> > This is added as a boot command line option, not a Kconfig. We do not
> > produce different builds for EC2, so this is not introduced as a
> > compile-time configuration.
> 
> Having a Kconfig will probably allow the compiler to eliminate dead
> code.
> 
> This is not asking you to do the work; someone can come along and
> adjust arch_has_directmap easily.

My original code added this as a CONFIG option, but I converted it into
a boot-time switch, so I can just dig out history and convert it back.
I wonder if we should get more opinions on this to make a decision.

I would love Xen to have static key support though so that a boot-time
switch costs no run-time performance.
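
For reference, a boot-time switch like this is normally wired up with Xen's boolean_param(). Below is a self-contained sketch of the boolean parsing it implies; opt_directmap and arch_has_directmap are named after the patch series, and parse_bool() is a simplification of Xen's real helper:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Default on, matching today's behaviour; "directmap=no" flips it off. */
static bool opt_directmap = true;

/* Simplified stand-in for Xen's parse_bool()/boolean_param() machinery. */
static int parse_bool(const char *s)
{
    if (!strcmp(s, "yes") || !strcmp(s, "on") || !strcmp(s, "true") ||
        !strcmp(s, "enable") || !strcmp(s, "1"))
        return 1;
    if (!strcmp(s, "no") || !strcmp(s, "off") || !strcmp(s, "false") ||
        !strcmp(s, "disable") || !strcmp(s, "0"))
        return 0;
    return -1; /* unrecognised value */
}

/* The helper the patch adds; callers check this instead of assuming
 * the direct map always exists. */
static bool arch_has_directmap(void)
{
    return opt_directmap;
}
```

A static-key version would patch the branch in arch_has_directmap() at boot instead of reading a variable, which is the run-time cost Hongyan refers to.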

Hongyan



From xen-devel-bounces@lists.xenproject.org Fri May 01 13:12:24 2020
Date: Fri, 1 May 2020 13:11:58 +0000
From: Wei Liu <wl@xen.org>
To: Hongyan Xia <hx242@xen.org>
Subject: Re: [PATCH 11/16] x86: add a boot option to enable and disable the
 direct map
Message-ID: <20200501131158.utexymcn3lnt65qp@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <7360b59e8fd39796fee56430a437b20c948d08c2.1588278317.git.hongyxia@amazon.com>
 <20200501121132.kzhu7u2vmpoeju2x@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <2235f884b65c9f20cf55637f91ddab6924f53ca1.camel@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2235f884b65c9f20cf55637f91ddab6924f53ca1.camel@xen.org>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 01, 2020 at 01:59:24PM +0100, Hongyan Xia wrote:
> On Fri, 2020-05-01 at 12:11 +0000, Wei Liu wrote:
> > On Thu, Apr 30, 2020 at 09:44:20PM +0100, Hongyan Xia wrote:
> > > From: Hongyan Xia <hongyxia@amazon.com>
> > > 
> > > Also add a helper function to retrieve it. Change
> > > arch_mfn_in_direct_map
> > > to check this option before returning.
> > > 
> > > This is added as a boot command line option, not a Kconfig. We do
> > > not
> > > produce different builds for EC2 so this is not introduced as a
> > > compile-time configuration.
> > 
> > Having a Kconfig will probably allow the compiler to eliminate dead
> > code.
> > 
> > This is not asking you to do the work, someone can come along and
> > adjust 
> > arch_has_directmap easily.
> 
> My original code added this as a CONFIG option, but I converted it into
> a boot-time switch, so I can just dig out history and convert it back.
> I wonder if we should get more opinions on this to make a decision.

From my perspective, you as a contributor have done the work to scratch
your own itch, hence I said "not asking you to do the work". I don't
want to turn every comment into a formal ask that eventually leads to
feature creep.

> 
> I would love Xen to have static key support though so that a boot-time
> switch costs no run-time performance.
> 

Yes, that would be great.

Wei.

> Hongyan
> 


From xen-devel-bounces@lists.xenproject.org Fri May 01 13:24:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 13:24:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUVeH-0001kQ-SG; Fri, 01 May 2020 13:24:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YjF2=6P=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUVeG-0001kL-HW
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 13:24:12 +0000
X-Inumbo-ID: 0af44cc4-8baf-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0af44cc4-8baf-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 13:24:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=sF8YtTXpXA75WcB0qmuFnswmeWgyXgwi7ihmRoU5MtE=; b=tv22BfQ/tPWrl0iZv++XREFB6
 27BFKPx1xZXBfDcPMXxb6FH99nNcnkZr/S7GZB3FiR4tJ0OJzL0fgBgu+MyyXa0KXZx6I96aL766N
 K4GUY35NJaBTqxL56ueepU72Uj3M3DycjA0sN1Y+HuXkhBsOZAEFbV+nu0A32hmkMC66c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUVeE-0001xY-Jj; Fri, 01 May 2020 13:24:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUVeE-0007FW-4B; Fri, 01 May 2020 13:24:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUVeE-0008E0-3X; Fri, 01 May 2020 13:24:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149894-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 149894: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-armhf-armhf-xl-rtds:guest-stop:fail:allowable
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=27c94566379069fb8930bb1433dcffbf7df3203d
X-Osstest-Versions-That: qemuu=16aaacb307ed607b9780c12702c44f0fe52edc7e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 01 May 2020 13:24:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149894 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149894/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     15 guest-stop               fail REGR. vs. 149890

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 149890
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149890
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149890
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149890
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149890
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149890
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                27c94566379069fb8930bb1433dcffbf7df3203d
baseline version:
 qemuu                16aaacb307ed607b9780c12702c44f0fe52edc7e

Last test of basis   149890  2020-04-30 14:35:19 Z    0 days
Testing same since   149894  2020-05-01 01:38:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alistair Francis <alistair.francis@wdc.com>
  Cameron Esfahani <dirty@apple.com>
  Cédric Le Goater <clg@kaod.org>
  Damien Hedde <damien.hedde@greensocs.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Jerome Forissier <jerome@forissier.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Ramon Fried <rfried.dev@gmail.com>
  Richard Henderson <richard.henderson@linaro.org>
  Subbaraya Sundeep <sundeep.lkml@gmail.com>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   16aaacb307..27c9456637  27c94566379069fb8930bb1433dcffbf7df3203d -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri May 01 13:53:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 13:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUW6L-0004Bh-CR; Fri, 01 May 2020 13:53:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Po+f=6P=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jUW6K-0004Bc-IU
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 13:53:12 +0000
X-Inumbo-ID: 18c1d566-8bb3-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18c1d566-8bb3-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 13:53:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=l2y+h/GQCApgQHTFNSrRWEFRKreTRyjQ3vsBBRSP0fI=; b=VZkzW0mdgC07Vmj3+UOx5Jibre
 kzwFx3BPetv4xxRtSiUvvHBRSoHMHUkOPxyFr+uzHxoq4Fi4MTdFisqfaYLUic8cMd5a8Un1zByZT
 8Y0YsiV1mr5jsPtV6NCucbYSXCHgf8yFCvthv+qnso0RLGB8BdovWu55x3zdqln4BHbQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jUW6J-0002U0-H7; Fri, 01 May 2020 13:53:11 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jUW6J-0004y6-3U; Fri, 01 May 2020 13:53:11 +0000
Message-ID: <689a7c860a8a551e3b6009b16590e812dbf21055.camel@xen.org>
Subject: Re: [PATCH 00/16] Remove the direct map
From: Hongyan Xia <hx242@xen.org>
To: Wei Liu <wl@xen.org>
Date: Fri, 01 May 2020 14:53:08 +0100
In-Reply-To: <20200501120715.hjak2gjp7ialwfq5@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <20200501120715.hjak2gjp7ialwfq5@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 2020-05-01 at 12:07 +0000, Wei Liu wrote:
> On Thu, Apr 30, 2020 at 09:44:09PM +0100, Hongyan Xia wrote:
> > From: Hongyan Xia <hongyxia@amazon.com>
> > 
> > This series depends on Xen page table domheap conversion:
> > 
https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg01374.html
> > .
> > 
> > After breaking the reliance on the direct map to manipulate Xen
> > page
> > tables, we can now finally remove the direct map altogether.
> > 
> > This series:
> > - fixes many places that use the direct map incorrectly or assume
> > the
> >   presence of an always-mapped direct map in a wrong way.
> > - includes the early vmap patches for global mappings.
> > - initialises the mapcache for all domains, disables the fast path
> > that
> >   uses the direct map for mappings.
> > - maps and unmaps xenheap on-demand.
> > - adds a boot command line switch to enable or disable the direct
> > map.
> > 
> > This previous version was in RFC state and can be found here:
> > 
https://lists.xenproject.org/archives/html/xen-devel/2019-09/msg02647.html
> > ,
> > which has since been broken into small series.
> 
> OOI have you done any performance measurements?
> 
> Seeing that now even guest table needs mapping / unmapping during
> restore, I'm curious to know how that would impact performance.

I actually have a lot of performance numbers but unfortunately on an
older version of Xen, not staging. I need to evaluate it again before
coming back to you. As you suspected, one strong signal from the
performance results is definitely the impact of walking guest tables.
For EPT, mapping and unmapping 20 times is no fun. This shows up in
micro-benchmarks, although larger benchmarks tend to be fine.

My question is, do we care about hiding EPT? I think it is fine to just
use xenheap pages (or any other form which does the job) so that we go
down from 20 mappings to only 4. I have done this hack with EPT and saw
significantly reduced impact for HVM guests in micro-benchmarks.
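The 20-vs-4 arithmetic above can be sketched with a toy model. The structure is an assumption for illustration (4 guest page-table levels, each needing an EPT translation of 4 levels plus the table page itself); the stand-in functions model map_domain_page()/unmap_domain_page(), and none of this is Xen's actual guest walker.

```c
#include <stdbool.h>

/* Without a direct map, every page-table page touched during a guest
 * walk must be mapped and unmapped on demand.  Count those operations. */
static unsigned long nr_maps;
static bool ept_on_xenheap;     /* the suggested optimisation: keep EPT
                                 * pages on always-mapped xenheap pages */

static void map_pt_page(void)   { nr_maps++; }  /* stand-in for map_domain_page() */
static void unmap_pt_page(void) { }

static void walk_one_guest_level(void)
{
    if (!ept_on_xenheap) {
        /* Translating the guest-physical address of the next table
         * through EPT touches one page per EPT level. */
        for (int ept_level = 0; ept_level < 4; ept_level++) {
            map_pt_page();
            unmap_pt_page();
        }
    }
    /* Then map the guest page-table page itself. */
    map_pt_page();
    unmap_pt_page();
}

static unsigned long guest_walk(void)
{
    nr_maps = 0;
    for (int level = 0; level < 4; level++)
        walk_one_guest_level();
    return nr_maps;
}
```

With EPT pages mapped on demand the walk costs ~20 map operations; if the EPT pages stay always mapped, only the 4 guest table pages remain, matching the numbers in the message.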

Hongyan



From xen-devel-bounces@lists.xenproject.org Fri May 01 14:00:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 14:00:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUWDI-00055R-6f; Fri, 01 May 2020 14:00:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7CKl=6P=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jUWDH-00055I-EF
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 14:00:23 +0000
X-Inumbo-ID: 19547744-8bb4-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19547744-8bb4-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 14:00:22 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id j1so11663090wrt.1
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 07:00:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=j9Wb4a28pv4W3Xa7edRDFSrtb8gWICqmhdjZIosP3F0=;
 b=vY1u5Q91lXf+b1qkJPYcumRCqZv5qZwQaSzGRQk9+PuhjnaPJq/RR8B7kFpKs6xRod
 G5yqDN8p1daV4XZVAl2rKazNPa2jKTi/EtVAde0lDoRK2b+e2LdyQs5qCC2DVy/70Qu9
 GQOhGJUYRQy3+exA/66ger+mL2qdomN/3J1jgMCiti3eBA2uoLLl8q4WRnpMRGzarJPz
 uKrnJBmQPssvIB+M1SxKFqciiWNMllVSSl1bx/VTvHZ9aCFB5uSGu55SJxCdUVNmmPU0
 ZJYZZN8SGO8kgTBor3ZG8MoAmCKwV0UpPl8zCCNacc5ePWqQf1K8r8O4xSVNhiTcFwcz
 Zm3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=j9Wb4a28pv4W3Xa7edRDFSrtb8gWICqmhdjZIosP3F0=;
 b=O9Vl7mU5Xq5fWDNgzPZFBly1Qez8Tgu6SeS17+oejA3fnfg1l9J0vE0G6+Mr47guBN
 6aVPIvQKZmwVwXepWEg1IMh0ZhgdgayIrKIazL/W/QHfveSarnUCGQLdpuxfIWynp7Se
 uKLUUTCbq9Sk51CT8Pate0oVUFWzdJwooRffoLMmIKEe7ny/4uJm9ldnzYn6v/d13gQ5
 P6D/LvW5J4vL34Aiit9nVo76yj9TR4ZZs7zsNiNhwGCEZ+hcrNcL2FlbE9OBdlVrXZW1
 Zv2PWFBJo9hW6sVZyIjnJBR1q9HQ4i4L26GhURJrlQs15tpMEMiK9bua8qdm+2MsRlgr
 7XTQ==
X-Gm-Message-State: AGi0Pub8PyNpf+o81vKUIHU0Eqhqnngrbw4uC4dX9/Hw8foF4UAeRhvH
 i/tlX2zKckGlPTaXbZyXy5TDw7gAOU4+o/WQZqU=
X-Google-Smtp-Source: APiQypKnAx7M3IjeyZyBEwiIGKHLhLjDBBoLmVdhCXGS2l6f1ASvJlLEBLAgwN2cb+yq/RsbUEJN14L1jPRD7aQ9XhU=
X-Received: by 2002:adf:e450:: with SMTP id t16mr4651948wrm.301.1588341621975; 
 Fri, 01 May 2020 07:00:21 -0700 (PDT)
MIME-Version: 1.0
References: <70ea4889e30ed35760329331ddfeb279fcd80786.1587655725.git.tamas.lengyel@intel.com>
 <e416eac0c986fd1aba5f576d9b065a6f47660b2c.1587655725.git.tamas.lengyel@intel.com>
In-Reply-To: <e416eac0c986fd1aba5f576d9b065a6f47660b2c.1587655725.git.tamas.lengyel@intel.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 1 May 2020 07:59:45 -0600
Message-ID: <CABfawhnxoQbehu-bvT7Uhd808rsjjDsB87O=CKqHDsrBUvur-g@mail.gmail.com>
Subject: Re: [PATCH v17 2/2] xen/tools: VM forking toolstack side
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 9:33 AM Tamas K Lengyel <tamas.lengyel@intel.com> wrote:
>
> Add necessary bits to implement "xl fork-vm" commands. The command allows the
> user to specify how to launch the device model allowing for a late-launch model
> in which the user can execute the fork without the device model and decide to
> only later launch it.
>
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>

Patch ping. If nothing else at least the libxc parts would be nice to
get merged before the freeze.


From xen-devel-bounces@lists.xenproject.org Fri May 01 14:32:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 14:32:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUWiF-0007ju-EB; Fri, 01 May 2020 14:32:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ojhz=6P=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jUWiE-0007jp-FG
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 14:32:22 +0000
X-Inumbo-ID: 9108da1a-8bb8-11ea-9b16-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9108da1a-8bb8-11ea-9b16-12813bfff9fa;
 Fri, 01 May 2020 14:32:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588343541;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=VMQteekUVywQm2O/041EgLV88Aa0nA+GlxeS8W/BlsQ=;
 b=gzOzvXyxWpv9zscxz8HgHsQOrCRjN/A4oJwHC+AtdIuOgVU/eQiVd+Iz
 +1eGea4249nhXaP0s13sx7MxhbXWqRcoR9uV7OZ40XEKjHOk6HtP/U40R
 FImHSM+X6NCU3B7FXHtCZmi/Viuj6CT6ylNcNDDnzyHq4N3AwrECqETzF Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=anthony.perard@citrix.com;
 spf=Pass smtp.mailfrom=anthony.perard@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 anthony.perard@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 anthony.perard@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: Myl32l8cbv7VqxAh29KK9KoZBJH7UFlLRY6IDKdasUnMsj7DANSEFbtXeF7px8yak0RMh9S/vJ
 Saen0CmgTRXBYwlPv1fiRDvQapCwXBl8Og6DDzrqISl+btHqhzJsOeV8vczyfEk/qn4GbZKn7n
 wmSvpUilfhLD6pmqfEewn55/9ttd3hSAWtGg0aeVWCCe69PPnXVpalndScqbjYN3pcJ5s9hqKQ
 eHzzf1rZwRj8l7G9R15GJgESsnG3SkPqui7SS+N9g1ObVL1J5l+zmh/6Q3SgP7lqLnycZjhvFn
 Dv4=
X-SBRS: 2.7
X-MesageID: 16561827
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,339,1583211600"; d="scan'208";a="16561827"
Date: Fri, 1 May 2020 15:32:15 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [XEN PATCH v5 08/16] build: Introduce $(cpp_flags)
Message-ID: <20200501143215.GD2116@perard.uk.xensource.com>
References: <20200421161208.2429539-1-anthony.perard@citrix.com>
 <20200421161208.2429539-9-anthony.perard@citrix.com>
 <62011f46-b208-334a-4070-0bd72cb21d28@suse.com>
 <20200428140119.GC2116@perard.uk.xensource.com>
 <86af7c75-8f8b-db0a-7420-343ccd70fc33@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <86af7c75-8f8b-db0a-7420-343ccd70fc33@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Apr 28, 2020 at 04:20:57PM +0200, Jan Beulich wrote:
> On 28.04.2020 16:01, Anthony PERARD wrote:
> > On Thu, Apr 23, 2020 at 06:48:51PM +0200, Jan Beulich wrote:
> >> On 21.04.2020 18:12, Anthony PERARD wrote:
> >>> --- a/xen/Rules.mk
> >>> +++ b/xen/Rules.mk
> >>> @@ -123,6 +123,7 @@ $(obj-bin-y): XEN_CFLAGS := $(filter-out -flto,$(XEN_CFLAGS))
> >>>  
> >>>  c_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_CFLAGS) '-D__OBJECT_FILE__="$@"'
> >>>  a_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_AFLAGS)
> >>> +cpp_flags = $(filter-out -Wa$(comma)%,$(a_flags))
> >>
> >> I can see this happening to be this way right now, but in principle
> >> I could see a_flags to hold items applicable to assembly files only,
> >> but not to (the preprocessing of) C files. Hence while this is fine
> >> for now, ...
> >>
> >>> @@ -207,7 +208,7 @@ quiet_cmd_cc_s_c = CC      $@
> >>>  cmd_cc_s_c = $(CC) $(filter-out -Wa$(comma)%,$(c_flags)) -S $< -o $@
> >>>  
> >>>  quiet_cmd_s_S = CPP     $@
> >>> -cmd_s_S = $(CPP) $(filter-out -Wa$(comma)%,$(a_flags)) $< -o $@
> >>> +cmd_s_S = $(CPP) $(cpp_flags) $< -o $@
> >>
> >> ... this one is a trap waiting for someone to fall in imo. Instead
> >> where I'd expect this patch to use $(cpp_flags) is e.g. in
> >> xen/arch/x86/mm/Makefile:
> >>
> >> guest_walk_%.i: guest_walk.c Makefile
> >> 	$(CPP) $(cpp_flags) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
> >>
> >> And note how this currently uses $(c_flags), not $(a_flags), which
> >> suggests that your deriving from $(a_flags) isn't correct either.
> > 
> > I think we can drop this patch for now, and change patch "xen/build:
> > factorise generation of the linker scripts" to not use $(cpp_flags).
> > 
> > If we derive $(cpp_flags) from $(c_flags) instead, we would need to
> > find out if CPP commands using a_flags can use c_flags instead.
> > 
> > On the other hand, I've looked at Linux source code, and they use
> > $(cpp_flags) for only a few targets, only to generate the .lds scripts.
> > For other rules, they use either a_flags or c_flags, for example:
> >     %.i: %.c ; uses $(c_flags)
> >     %.i: %.S ; uses $(a_flags)
> >     %.s: %.S ; uses $(a_flags)
> 
> The first one really ought to use cpp_flags. I couldn't find the
> middle one. The last one clearly has to do something about -Wa,
> options, but apart from that I'd consider a_flags appropriate to
> use there.
> 
> > (Also, they use clang's -Qunused-arguments option, so they don't need
> > to filter out -Wa,* arguments, I think.)
> 
> Maybe we should do so too then?
> 
> > So, maybe having a single $(cpp_flags) when running the CPP command
> > isn't such a good idea.
> 
> Right - after all in particular the use of CPP to produce .lds is
> an abuse, as the source file (named .lds.S) isn't really what its
> name says.
> 
> > So, would dropping $(cpp_flags) for now, and rework the *FLAGS later, be
> > good enough?
> 
> I don't think so, no, I'm sorry. cpp_flags should be there for its
> real purpose. Whether the .lds.S -> .lds rule can use it, or should
> use a_flags, or yet something else is a different thing.


OK. I think we can rework the patch to derive cpp_flags from c_flags,
use this new cpp_flags for %.i:%.c; but keep using a_flags for %.s:%.S.

As for the .lds, we could use this new cpp_flags; the only thing I saw
missing was -D__ASSEMBLY__, which can be added to the command line.
(There would also be an extra -std=gnu99, but I don't think it matters.)
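To make that proposal concrete, a rough sketch (untested; variable names
borrowed from xen/Rules.mk, and the exact filter set is only
illustrative) could look like:

```make
# Derive cpp_flags from c_flags, dropping the assembler pass-through
# options that CPP cannot consume.
cpp_flags = $(filter-out -Wa$(comma)%,$(c_flags))

# %.i generation would then use cpp_flags instead of c_flags ...
%.i: %.c
	$(CPP) $(cpp_flags) $< -o $@

# ... while .lds generation adds -D__ASSEMBLY__ on the command line.
%.lds: %.lds.S
	$(CPP) -P $(cpp_flags) -D__ASSEMBLY__ $< -o $@
```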

Does that sound good?
(Just two patches to change, this one and the one reworking .lds
generation.)


As for using -Qunused-arguments with clang, I didn't manage to find the
documentation of clang's command-line arguments for clang 3.5 on the
LLVM website, but I found it for clang 5.0 and the option is listed
there. I've tested building Xen on our GitLab CI, which runs Debian
jessie and so seems to have clang 3.5, and Xen built just fine. So that
might be an option we can use later, but probably only for CPP flags.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 01 14:42:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 14:42:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUWsE-0000Ct-Mu; Fri, 01 May 2020 14:42:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ojhz=6P=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jUWsD-0000Co-9n
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 14:42:41 +0000
X-Inumbo-ID: 01cadb44-8bba-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01cadb44-8bba-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 14:42:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588344160;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=S1rwDrLLWuY+5CaJz3cnKFA5E39iYwEwSHYTyKtmp9A=;
 b=VwFOscustoJbnOezX06bX+/TvOBtNE1XzsMKxYHxmBlG/0ShP0Vyd0aH
 w60vOdU27SGY/SBjFoEw/2gJR6zBJa4lPWjTq/4fCu1ClLkXszLJlOEh9
 kXRZqlKMi2jCRVzQeBRvNRDAuGcOApUzxk/kxdujfELv5rHB40tkDrPf2 4=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=anthony.perard@citrix.com;
 spf=Pass smtp.mailfrom=anthony.perard@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 anthony.perard@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 anthony.perard@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: +K/FU1n3lEYyMQXxPxTLGrqiT0gB314+gbP7IN6a2FP+hvInOfAH5scWW+UeZoujJYxaUK/EaF
 9IO+AwEa6QdFqnDvKGFEH/25upZVvROVCt4hiF6K6CDngj7qijHeZO+gxmFrTvdIAQeBEs4yFn
 6I41MpnyM1tJmh7jy2EVYxTC7OxOVmh8F1Fb6gzJI7w4idm0y+PP/awJl1NhhqtxaURPnWfI5j
 gOcig+v0kuRDe/jCH9nJNEMXrbgl8JbJjwOUyywlGJfOUIhWAbOoHjaWGO8Wp0XLzgy9JQKhz2
 CmI=
X-SBRS: 2.7
X-MesageID: 16974406
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,339,1583211600"; d="scan'208";a="16974406"
Date: Fri, 1 May 2020 15:42:34 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [XEN PATCH v5 09/16] xen/build: use if_changed on built_in.o
Message-ID: <20200501144234.GE2116@perard.uk.xensource.com>
References: <20200421161208.2429539-1-anthony.perard@citrix.com>
 <20200421161208.2429539-10-anthony.perard@citrix.com>
 <6c6d20f5-d8ab-ee15-d2fc-e19b1dced99a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <6c6d20f5-d8ab-ee15-d2fc-e19b1dced99a@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Apr 28, 2020 at 03:48:13PM +0200, Jan Beulich wrote:
> On 21.04.2020 18:12, Anthony PERARD wrote:
> > In the case where $(obj-y) is empty, we also replace $(c_flags) by
> > $(XEN_CFLAGS) to avoid generating a .%.d dependency file. This avoids
> > make trying to include a %.h file in the ld command if $(obj-y) isn't
> > empty anymore on a second run.
> > 
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> Personally I'd prefer ...
> 
> > --- a/xen/Rules.mk
> > +++ b/xen/Rules.mk
> > @@ -130,15 +130,24 @@ include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
> >  c_flags += $(CFLAGS-y)
> >  a_flags += $(CFLAGS-y) $(AFLAGS-y)
> >  
> > -built_in.o: $(obj-y) $(extra-y)
> > -ifeq ($(obj-y),)
> > -	$(CC) $(c_flags) -c -x c /dev/null -o $@
> > -else
> > +quiet_cmd_ld_builtin = LD      $@
> >  ifeq ($(CONFIG_LTO),y)
> > -	$(LD_LTO) -r -o $@ $(filter-out $(extra-y),$^)
> > +cmd_ld_builtin = \
> > +    $(LD_LTO) -r -o $@ $(filter-out $(extra-y),$(real-prereqs))
> >  else
> > -	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
> > +cmd_ld_builtin = \
> > +    $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$(real-prereqs))
> >  endif
> > +
> > +quiet_cmd_cc_builtin = LD      $@
> > +cmd_cc_builtin = \
> > +    $(CC) $(XEN_CFLAGS) -c -x c /dev/null -o $@
> > +
> > +built_in.o: $(obj-y) $(extra-y) FORCE
> > +ifeq ($(obj-y),)
> > +	$(call if_changed,cc_builtin)
> > +else
> > +	$(call if_changed,ld_builtin)
> >  endif
> 
> ...
> 
>    $(call if_changed,$(if $(obj-y),ld,cc)_builtin)
> 
> but perhaps I'm the only one.

I think so. Spelling the full name of the command makes it easier to
look for where it is used, or for where it is defined.

Linux doesn't have this issue of checking $(obj-y), as they use 'ar' to
make archives of objects; an archive with 0 objects is fine. But that
is something I'll look at later, to find out whether it is better and why.
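For reference, the Linux approach looks roughly like the following
(paraphrased from scripts/Makefile.build; a sketch, not Xen code):

```make
# Linux collects objects into a thin archive (built-in.a) instead of
# incrementally linking a built-in.o; ar happily creates an archive
# with zero members, so the empty-$(obj-y) special case disappears.
quiet_cmd_ar_builtin = AR      $@
      cmd_ar_builtin = rm -f $@; $(AR) cDPrST $@ $(real-prereqs)

$(obj)/built-in.a: $(real-obj-y) FORCE
	$(call if_changed,ar_builtin)
```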

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 01 16:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 16:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUY56-0006bO-O4; Fri, 01 May 2020 16:00:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=s7FD=6P=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUY55-0006YB-JG
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 16:00:03 +0000
X-Inumbo-ID: d0bf27f2-8bc4-11ea-9b30-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0bf27f2-8bc4-11ea-9b30-12813bfff9fa;
 Fri, 01 May 2020 16:00:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WNnYilYnao2Mg/ANUwAtRPY0twuuz+bXUxBM0mapFzE=; b=D/9tPbQJ5CQmkT3GzluB1LE8Mu
 roDrQCuu9WGpGh3sUq3hpU3oNwCfmhZx/1QFgwH8PrOumay/EAK08YBPgmV1c9xzMjH9TJL0bOu/5
 YE9vtUmUhfN81suoa5SBOu5QM6ZzSVeieITrjJ694oyQyLHLPVmBHfWdDXMQZU+wIHzg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUY4z-0004pI-Gy; Fri, 01 May 2020 15:59:57 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUY4z-0007QB-7n; Fri, 01 May 2020 15:59:57 +0000
Subject: Re: [PATCH 11/16] x86: add a boot option to enable and disable the
 direct map
To: Wei Liu <wl@xen.org>, Hongyan Xia <hx242@xen.org>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <7360b59e8fd39796fee56430a437b20c948d08c2.1588278317.git.hongyxia@amazon.com>
 <20200501121132.kzhu7u2vmpoeju2x@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <2235f884b65c9f20cf55637f91ddab6924f53ca1.camel@xen.org>
 <20200501131158.utexymcn3lnt65qp@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
From: Julien Grall <julien@xen.org>
Message-ID: <ce1e64f5-bcd7-096d-4973-5cd7f105d42f@xen.org>
Date: Fri, 1 May 2020 16:59:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501131158.utexymcn3lnt65qp@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 01/05/2020 14:11, Wei Liu wrote:
> On Fri, May 01, 2020 at 01:59:24PM +0100, Hongyan Xia wrote:
>> On Fri, 2020-05-01 at 12:11 +0000, Wei Liu wrote:
>>> On Thu, Apr 30, 2020 at 09:44:20PM +0100, Hongyan Xia wrote:
>>>> From: Hongyan Xia <hongyxia@amazon.com>
>>>>
>>>> Also add a helper function to retrieve it. Change
>>>> arch_mfn_in_direct_map
>>>> to check this option before returning.
>>>>
>>>> This is added as a boot command line option, not a Kconfig. We do
>>>> not
>>>> produce different builds for EC2 so this is not introduced as a
>>>> compile-time configuration.
>>>
>>> Having a Kconfig will probably allow the compiler to eliminate dead
>>> code.
>>>
>>> This is not asking you to do the work, someone can come along and
>>> adjust
>>> arch_has_directmap easily.
>>
>> My original code added this as a CONFIG option, but I converted it into
>> a boot-time switch, so I can just dig out history and convert it back.
>> I wonder if we should get more opinions on this to make a decision.
> 
> From my perspective, you as a contributor have done the work to scratch
> your own itch, hence I said "not asking you to do the work". I don't
> want to turn every comment into a formal ask and eventually lead to
> feature creep.
> 
>>
>> I would love Xen to have static key support though so that a boot-time
>> switch costs no run-time performance.
>>
> 
> Yes that would be great.

 From my understanding, a static key is very powerful, as you can modify 
the value even at runtime.

On Arm, I wrote a version that I would call a poor man's static key. We 
use the alternative framework to select between 0 and 1 as an immediate value.

#define CHECK_WORKAROUND_HELPER(erratum, feature, arch)         \
static inline bool check_workaround_##erratum(void)             \
{                                                               \
     if ( !IS_ENABLED(arch) )                                    \
         return false;                                           \
     else                                                        \
     {                                                           \
         register_t ret;                                         \
                                                                 \
         asm volatile (ALTERNATIVE("mov %0, #0",                 \
                                   "mov %0, #1",                 \
                                   feature)                      \
                       : "=r" (ret));                            \
                                                                 \
         return unlikely(ret);                                   \
     }                                                           \
}

The generated code will still use a conditional branch, but the 
processor should always be able to predict correctly whether the condition 
is true or not.

The implementation is also very tailored to Arm, as we assume 
workarounds are not enabled by default. But I think this can be improved 
and made more generic.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 01 16:57:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 16:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUYyP-00032b-Du; Fri, 01 May 2020 16:57:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D0mR=6P=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1jUYyO-00032W-Ly
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 16:57:12 +0000
X-Inumbo-ID: cacd0f64-8bcc-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cacd0f64-8bcc-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 16:57:08 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id p16so7689550edm.10
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 09:57:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=intel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=b9REn6X5SevH+LhOI4dPec9Nj6TaOfQYrUIVwbI6BoM=;
 b=mG1IcDPBM3HrlLCRXRv7RTTYgDzrduMX/5i/AlHCRx1Od+IpfQAyd0PeN//EsT0f5h
 4ISFK8ebgantTgILgro22Ez5VhTT+nYI/BrtdR2EEj+8gVO8TXYZXeoKV+n5R7g8buaW
 2yUZz1nPzY+7zON3G0XQLly2Qxcbo0G4O5Y6WDLkdJ7/fnZJpvu7siFKyoRkKMeO0fUZ
 /D0cspPD70+dXDGbdUVG4+VwKt7dWiqpPW5XzMx4fPRxJpNAeD6LnRMMzp+TiNYAZ53P
 qpaqeGLDtip2gO0UP/p8HNcC5mdTxBx8phoHoUjMgHPlB7ivmPuMEBOrgC5YppxB1E9h
 UOtg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=b9REn6X5SevH+LhOI4dPec9Nj6TaOfQYrUIVwbI6BoM=;
 b=Moxh9WXoZQnTSDaUwy4STZsKFsnMTiSOHV+QgcQ32Hk2fzuYj5tQoIwpNUSg9NNkUp
 d6b2NQ3aRuMvoqsZrF4FmjobiMYuSnxJdL1ENOtYCgZKgShv8fwHBbxgzyFMmDSYmkh+
 Z4YpDTbqdEWhMhw++lQOhB4TXrBfJvzZHSNpZR4U9yIjmyh9gcqCC+3MhmPlcOk5IRN+
 O2IkyddVggq+gTTOgfFaG2hwk17MlUs4AGRKF2jvFdRJp6NURmcVOi9rrIbI6tZ2kP2Y
 yBzi0A459XahTcIOQSL1xR9Klhc8ADk0Ij0rUm7knGGf3j8u1xwiqhr/4Q/YE0iWKqpP
 j2nQ==
X-Gm-Message-State: AGi0PuZa1qFwYAcaxPCXWqaNHIysS7JmeFbXZTN+006KtxAhuM7H82Xw
 +NzOz6oP/9llOyHSbcISTeyCt4WmmyD+OGimOFHY+g==
X-Google-Smtp-Source: APiQypJO8dBUZTWLLj3LS+obr3vgzbWXqFSCM7dxYsl5W+J7URhuw83eMvKFki8Cpffi0gsBU7jQUIflqfdneq89wWQ=
X-Received: by 2002:aa7:c643:: with SMTP id z3mr4236295edr.154.1588352227631; 
 Fri, 01 May 2020 09:57:07 -0700 (PDT)
MIME-Version: 1.0
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
In-Reply-To: <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Fri, 1 May 2020 09:56:56 -0700
Message-ID: <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: David Hildenbrand <david@redhat.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 01.05.20 00:24, Andrew Morton wrote:
> > On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
> >
> >>>
> >>> Why does the firmware map support hotplug entries?
> >>
> >> I assume:
> >>
> >> The firmware memmap was added primarily for x86-64 kexec (and still, is
> >> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
> >> get hotplugged on real HW, they get added to e820. Same applies to
> >> memory added via HyperV balloon (unless memory is unplugged via
> >> ballooning and you reboot ... then the e820 is changed as well). I assume
> >> we wanted to be able to reflect that, to make kexec look like a real reboot.
> >>
> >> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
> >>
> >>
> >> But I assume only Andrew can enlighten us.
> >>
> >> @Andrew, any guidance here? Should we really add all memory to the
> >> firmware memmap, even if this contradicts with the existing
> >> documentation? (especially, if the actual firmware memmap will *not*
> >> contain that memory after a reboot)
> >
> > For some reason that patch is misattributed - it was authored by
> > Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
> > a decade.  I looked through the email discussion from that time and I'm
> > not seeing anything useful.  But I wasn't able to locate Dave Hansen's
> > review comments.
>
> Okay, thanks for checking. I think the documentation from 2008 is pretty
> clear what has to be done here. I will add some of these details to the
> patch description.
>
> Also, now that I know that especially kexec-tools already doesn't
> handle dax/kmem memory properly (the memory will not get dumped via
> kdump) and won't really suffer from a name change in /proc/iomem, I
> will go back to the MHP_DRIVER_MANAGED approach and
> 1. Don't create firmware memmap entries
> 2. Name the resource "System RAM (driver managed)"
> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
>
> This way, kernel users and user space can figure out that this memory
> has different semantics and handle it accordingly - I think that was
> what Eric was asking for.
>
> Of course, open for suggestions.

I'm still more of a fan of this being communicated by "System RAM"
being parented, especially because that tells you something about how
the memory is driver-managed and which mechanism might be in play.
What about adding an optional /sys/firmware/memmap/X/parent attribute?
That lets tooling check whether it cares via that interface, and lets
it look up the related infrastructure to interact with if it would do
something different for virtio-mem vs. dax/kmem.
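For what it's worth, a tool probing that interface could treat the proposed attribute as optional. A rough Python sketch: the per-entry `start`/`end`/`type` files exist today, while the `parent` file is the hypothetical attribute suggested above; the directory root is parameterized so the reader can be exercised outside a real sysfs.

```python
import os

def read_memmap_entries(root="/sys/firmware/memmap"):
    """Read firmware memmap entries from a sysfs-like directory tree.

    'start', 'end' and 'type' are the real per-entry files; 'parent' is
    the hypothetical attribute proposed above and is read only if present.
    """
    entries = []
    if not os.path.isdir(root):
        return entries
    for name in sorted((n for n in os.listdir(root) if n.isdigit()), key=int):
        d = os.path.join(root, name)
        if not os.path.isdir(d):
            continue
        entry = {}
        for attr in ("start", "end", "type", "parent"):
            path = os.path.join(d, attr)
            if os.path.isfile(path):
                with open(path) as f:
                    entry[attr] = f.read().strip()
        if "start" in entry and "end" in entry:
            # The kernel prints these as 0x-prefixed hex strings.
            entry["start"] = int(entry["start"], 16)
            entry["end"] = int(entry["end"], 16)
            entries.append(entry)
    return entries
```

A tool that does not know about `parent` simply never looks at the key; one that does can branch on it per mechanism.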


From xen-devel-bounces@lists.xenproject.org Fri May 01 17:21:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 17:21:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUZLw-0005RM-CV; Fri, 01 May 2020 17:21:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jt3Y=6P=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jUZLu-0005RH-TX
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 17:21:31 +0000
X-Inumbo-ID: 313eaba9-8bd0-11ea-9b36-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 313eaba9-8bd0-11ea-9b36-12813bfff9fa;
 Fri, 01 May 2020 17:21:29 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-383-U_ZxuXKVML6FReyA-c7MZg-1; Fri, 01 May 2020 13:21:27 -0400
X-MC-Unique: U_ZxuXKVML6FReyA-c7MZg-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0BD6E462;
 Fri,  1 May 2020 17:21:23 +0000 (UTC)
Received: from [10.36.112.180] (ovpn-112-180.ams2.redhat.com [10.36.112.180])
 by smtp.corp.redhat.com (Postfix) with ESMTP id ADEEE6084A;
 Fri,  1 May 2020 17:21:16 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: Dan Williams <dan.j.williams@intel.com>
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMFCQlmAYAGCwkIBwMCBhUI
 AgkKCwQWAgMBAh4BAheAFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl3pImkCGQEACgkQTd4Q
 9wD/g1o+VA//SFvIHUAvul05u6wKv/pIR6aICPdpF9EIgEU448g+7FfDgQwcEny1pbEzAmiw
 zAXIQ9H0NZh96lcq+yDLtONnXk/bEYWHHUA014A1wqcYNRY8RvY1+eVHb0uu0KYQoXkzvu+s
 Dncuguk470XPnscL27hs8PgOP6QjG4jt75K2LfZ0eAqTOUCZTJxA8A7E9+XTYuU0hs7QVrWJ
 jQdFxQbRMrYz7uP8KmTK9/Cnvqehgl4EzyRaZppshruKMeyheBgvgJd5On1wWq4ZUV5PFM4x
 II3QbD3EJfWbaJMR55jI9dMFa+vK7MFz3rhWOkEx/QR959lfdRSTXdxs8V3zDvChcmRVGN8U
 Vo93d1YNtWnA9w6oCW1dnDZ4kgQZZSBIjp6iHcA08apzh7DPi08jL7M9UQByeYGr8KuR4i6e
 RZI6xhlZerUScVzn35ONwOC91VdYiQgjemiVLq1WDDZ3B7DIzUZ4RQTOaIWdtXBWb8zWakt/
 ztGhsx0e39Gvt3391O1PgcA7ilhvqrBPemJrlb9xSPPRbaNAW39P8ws/UJnzSJqnHMVxbRZC
 Am4add/SM+OCP0w3xYss1jy9T+XdZa0lhUvJfLy7tNcjVG/sxkBXOaSC24MFPuwnoC9WvCVQ
 ZBxouph3kqc4Dt5X1EeXVLeba+466P1fe1rC8MbcwDkoUo65Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAiUEGAECAA8FAlXLn5ECGwwFCQlmAYAACgkQTd4Q
 9wD/g1qA6w/+M+ggFv+JdVsz5+ZIc6MSyGUozASX+bmIuPeIecc9UsFRatc91LuJCKMkD9Uv
 GOcWSeFpLrSGRQ1Z7EMzFVU//qVs6uzhsNk0RYMyS0B6oloW3FpyQ+zOVylFWQCzoyyf227y
 GW8HnXunJSC+4PtlL2AY4yZjAVAPLK2l6mhgClVXTQ/S7cBoTQKP+jvVJOoYkpnFxWE9pn4t
 H5QIFk7Ip8TKr5k3fXVWk4lnUi9MTF/5L/mWqdyIO1s7cjharQCstfWCzWrVeVctpVoDfJWp
 4LwTuQ5yEM2KcPeElLg5fR7WB2zH97oI6/Ko2DlovmfQqXh9xWozQt0iGy5tWzh6I0JrlcxJ
 ileZWLccC4XKD1037Hy2FLAjzfoWgwBLA6ULu0exOOdIa58H4PsXtkFPrUF980EEibUp0zFz
 GotRVekFAceUaRvAj7dh76cToeZkfsjAvBVb4COXuhgX6N4pofgNkW2AtgYu1nUsPAo+NftU
 CxrhjHtLn4QEBpkbErnXQyMjHpIatlYGutVMS91XTQXYydCh5crMPs7hYVsvnmGHIaB9ZMfB
 njnuI31KBiLUks+paRkHQlFcgS2N3gkRBzH7xSZ+t7Re3jvXdXEzKBbQ+dC3lpJB0wPnyMcX
 FOTT3aZT7IgePkt5iC/BKBk3hqKteTnJFeVIT7EC+a6YUFg=
Organization: Red Hat GmbH
Message-ID: <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
Date: Fri, 1 May 2020 19:21:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.05.20 18:56, Dan Williams wrote:
> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 01.05.20 00:24, Andrew Morton wrote:
>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
>>>
>>>>>
>>>>> Why does the firmware map support hotplug entries?
>>>>
>>>> I assume:
>>>>
>>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
>>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
>>>> get hotplugged on real HW, they get added to e820. Same applies to
>>>> memory added via HyperV balloon (unless memory is unplugged via
>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
>>>>
>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
>>>>
>>>>
>>>> But I assume only Andrew can enlighten us.
>>>>
>>>> @Andrew, any guidance here? Should we really add all memory to the
>>>> firmware memmap, even if this contradicts the existing
>>>> documentation? (especially, if the actual firmware memmap will *not*
>>>> contain that memory after a reboot)
>>>
>>> For some reason that patch is misattributed - it was authored by
>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
>>> a decade.  I looked through the email discussion from that time and I'm
>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
>>> review comments.
>>
>> Okay, thanks for checking. I think the documentation from 2008 is pretty
>> clear what has to be done here. I will add some of these details to the
>> patch description.
>> patch description.
>>
>> Also, now that I know that esp. kexec-tools already don't consider
>> dax/kmem memory properly (memory will not get dumped via kdump) and
>> won't really suffer from a name change in /proc/iomem, I will go back to
>> the MHP_DRIVER_MANAGED approach and
>> 1. Don't create firmware memmap entries
>> 2. Name the resource "System RAM (driver managed)"
>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
>>
>> This way, kernel users and user space can figure out that this memory
>> has different semantics and handle it accordingly - I think that was
>> what Eric was asking for.
>>
>> Of course, open for suggestions.
>
> I'm still more of a fan of this being communicated by "System RAM"

I was mentioning somewhere in this thread that "System RAM" inside a
hierarchy (like dax/kmem) will already be basically ignored by
kexec-tools. So, placing it inside a hierarchy already makes it look
special.

But after all, as we have to change kexec-tools either way, we can
directly go ahead and flag it properly as special (in case there will
ever be other cases where we could no longer distinguish it).

> being parented, especially because that tells you something about how
> the memory is driver-managed and which mechanism might be in play.

That could be communicated to some degree via the resource hierarchy.

E.g.,

            [root@localhost ~]# cat /proc/iomem
            ...
            140000000-33fffffff : Persistent Memory
              140000000-1481fffff : namespace0.0
              150000000-33fffffff : dax0.0
                150000000-33fffffff : System RAM (driver managed)

vs.

           :/# cat /proc/iomem
            [...]
            140000000-333ffffff : virtio-mem (virtio0)
              140000000-147ffffff : System RAM (driver managed)
              148000000-14fffffff : System RAM (driver managed)
              150000000-157ffffff : System RAM (driver managed)

Good enough for my taste.
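Either layout is easy for tooling to consume. A small sketch (illustrative only, not actual kexec-tools code) that parses /proc/iomem-style text, tracks nesting via the two-space indentation, and reports each driver-managed range together with its top-level parent resource:

```python
import re

def find_driver_managed(iomem_text):
    """Return (start, end, top_parent) for each driver-managed RAM line.

    Parses /proc/iomem-style text, where each nesting level adds two
    spaces of indentation before the 'start-end : name' entry.
    """
    result = []
    stack = []  # (depth, name) of enclosing resources
    line_re = re.compile(r"^(\s*)([0-9a-f]+)-([0-9a-f]+) : (.*)$")
    for line in iomem_text.splitlines():
        m = line_re.match(line)
        if not m:
            continue
        depth = len(m.group(1)) // 2
        start, end = int(m.group(2), 16), int(m.group(3), 16)
        name = m.group(4)
        # Unwind to the enclosing resource of this entry.
        while stack and stack[-1][0] >= depth:
            stack.pop()
        if name == "System RAM (driver managed)":
            top_parent = stack[0][1] if stack else None
            result.append((start, end, top_parent))
        stack.append((depth, name))
    return result
```

On the two listings above this yields "Persistent Memory" resp. "virtio-mem (virtio0)" as the parent, which is exactly the mechanism information being discussed.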

> What about adding an optional /sys/firmware/memmap/X/parent attribute?

I really don't want any firmware memmap entries for something that is
not part of the firmware provided memmap. In addition,
/sys/firmware/memmap/ is still a fairly x86_64-specific thing. Only mips
and two arm configs enable it at all.

So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri May 01 17:39:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 17:39:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUZda-0006Rs-4M; Fri, 01 May 2020 17:39:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D0mR=6P=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1jUZdY-0006Rn-Ox
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 17:39:44 +0000
X-Inumbo-ID: bc30e592-8bd2-11ea-9887-bc764e2007e4
Received: from mail-ej1-x644.google.com (unknown [2a00:1450:4864:20::644])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc30e592-8bd2-11ea-9887-bc764e2007e4;
 Fri, 01 May 2020 17:39:41 +0000 (UTC)
Received: by mail-ej1-x644.google.com with SMTP id q8so8075155eja.2
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 10:39:40 -0700 (PDT)
X-Gm-Message-State: AGi0Pubox/6sbCvJ2d6gFhRyP9r7xODQxadVAhuXxpzybXKoBq/B0WjZ
 IwP17OKBxKXRKud0sLdqpwOSvc3a/Ylt47EbfC0sXA==
X-Google-Smtp-Source: APiQypKykdi+282JilRe16+6KKLcB6tg9cgSEAjlHPf89wAO5Cv3AtR5Wk0ydTXi3sTqOc+y0w//0y1mp0EsvfqmPHo=
X-Received: by 2002:a17:906:90cc:: with SMTP id
 v12mr4384205ejw.211.1588354779963; 
 Fri, 01 May 2020 10:39:39 -0700 (PDT)
MIME-Version: 1.0
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
In-Reply-To: <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Fri, 1 May 2020 10:39:28 -0700
Message-ID: <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: David Hildenbrand <david@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 01.05.20 18:56, Dan Williams wrote:
> > On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 01.05.20 00:24, Andrew Morton wrote:
> >>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
> >>>
> >>>>>
> >>>>> Why does the firmware map support hotplug entries?
> >>>>
> >>>> I assume:
> >>>>
> >>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
> >>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
> >>>> get hotplugged on real HW, they get added to e820. Same applies to
> >>>> memory added via HyperV balloon (unless memory is unplugged via
> >>>> ballooning and you reboot ... then the e820 is changed as well). I assume
> >>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
> >>>>
> >>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
> >>>>
> >>>>
> >>>> But I assume only Andrew can enlighten us.
> >>>>
> >>>> @Andrew, any guidance here? Should we really add all memory to the
> >>>> firmware memmap, even if this contradicts the existing
> >>>> documentation? (especially, if the actual firmware memmap will *not*
> >>>> contain that memory after a reboot)
> >>>
> >>> For some reason that patch is misattributed - it was authored by
> >>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
> >>> a decade.  I looked through the email discussion from that time and I'm
> >>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
> >>> review comments.
> >>
> >> Okay, thanks for checking. I think the documentation from 2008 is pretty
> >> clear what has to be done here. I will add some of these details to the
> >> patch description.
> >>
> >> Also, now that I know that esp. kexec-tools already don't consider
> >> dax/kmem memory properly (memory will not get dumped via kdump) and
> >> won't really suffer from a name change in /proc/iomem, I will go back to
> >> the MHP_DRIVER_MANAGED approach and
> >> 1. Don't create firmware memmap entries
> >> 2. Name the resource "System RAM (driver managed)"
> >> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
> >>
> >> This way, kernel users and user space can figure out that this memory
> >> has different semantics and handle it accordingly - I think that was
> >> what Eric was asking for.
> >>
> >> Of course, open for suggestions.
> >
> > I'm still more of a fan of this being communicated by "System RAM"
>
> I was mentioning somewhere in this thread that "System RAM" inside a
> hierarchy (like dax/kmem) will already be basically ignored by
> kexec-tools. So, placing it inside a hierarchy already makes it look
> special.
>
> But after all, as we have to change kexec-tools either way, we can
> directly go ahead and flag it properly as special (in case there will
> ever be other cases where we could no longer distinguish it).
>
> > being parented, especially because that tells you something about how
> > the memory is driver-managed and which mechanism might be in play.
>
> That could be communicated to some degree via the resource hierarchy.
>
> E.g.,
>
>             [root@localhost ~]# cat /proc/iomem
>             ...
>             140000000-33fffffff : Persistent Memory
>               140000000-1481fffff : namespace0.0
>               150000000-33fffffff : dax0.0
>                 150000000-33fffffff : System RAM (driver managed)
>
> vs.
>
>            :/# cat /proc/iomem
>             [...]
>             140000000-333ffffff : virtio-mem (virtio0)
>               140000000-147ffffff : System RAM (driver managed)
>               148000000-14fffffff : System RAM (driver managed)
>               150000000-157ffffff : System RAM (driver managed)
>
> Good enough for my taste.
>
> > What about adding an optional /sys/firmware/memmap/X/parent attribute?
>
> I really don't want any firmware memmap entries for something that is
> not part of the firmware provided memmap. In addition,
> /sys/firmware/memmap/ is still a fairly x86_64-specific thing. Only mips
> and two arm configs enable it at all.
>
> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.

I think that's a policy decision and policy decisions do not belong in
the kernel. Give the tooling the opportunity to decide whether System
RAM stays that way over a kexec. The parenthetical reference otherwise
looks out of place to me in the /proc/iomem output. What makes it
"driver managed" is how the kernel handles it, not how the kernel
names it.
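The split being argued for here can be sketched from the tooling side: the kernel merely names or flags the ranges, and whether driver-managed RAM survives into the kexec memory map is a knob in the tool. A hypothetical helper (names and the default policy are illustrative, not from any real tool):

```python
def kexec_ram_ranges(ranges, keep_driver_managed=False):
    """Select RAM ranges for a kexec memory map from parsed iomem entries.

    'ranges' is a list of (start, end, name) tuples. The policy knob is
    deliberately in the tool, not the kernel: driver-managed RAM will not
    be in the firmware map after a real reboot, so it is skipped by
    default, but e.g. a kdump tool that wants to dump it can opt in.
    """
    selected = []
    for start, end, name in ranges:
        if name == "System RAM" or (
            keep_driver_managed and name == "System RAM (driver managed)"
        ):
            selected.append((start, end))
    return selected
```

With a resource name (or flag) the kernel merely exposes, both the "treat it like a reboot" and the "dump everything" policies stay implementable in userspace.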


From xen-devel-bounces@lists.xenproject.org Fri May 01 17:45:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 17:45:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUZj4-0007GD-Pu; Fri, 01 May 2020 17:45:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jt3Y=6P=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jUZj3-0007G8-Iu
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 17:45:25 +0000
X-Inumbo-ID: 889db133-8bd3-11ea-9b3d-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 889db133-8bd3-11ea-9b3d-12813bfff9fa;
 Fri, 01 May 2020 17:45:23 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-247-RMUFCZcdPIaCTt6gBqiHfA-1; Fri, 01 May 2020 13:45:19 -0400
X-MC-Unique: RMUFCZcdPIaCTt6gBqiHfA-1
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 2B00D835B8E;
 Fri,  1 May 2020 17:45:17 +0000 (UTC)
Received: from [10.36.112.180] (ovpn-112-180.ams2.redhat.com [10.36.112.180])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 69C445C1D2;
 Fri,  1 May 2020 17:45:08 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: Dan Williams <dan.j.williams@intel.com>
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
From: David Hildenbrand <david@redhat.com>
Organization: Red Hat GmbH
Message-ID: <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
Date: Fri, 1 May 2020 19:45:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16

On 01.05.20 19:39, Dan Williams wrote:
> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 01.05.20 18:56, Dan Williams wrote:
>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 01.05.20 00:24, Andrew Morton wrote:
>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>>>>
>>>>>>> Why does the firmware map support hotplug entries?
>>>>>>
>>>>>> I assume:
>>>>>>
>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
>>>>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
>>>>>> get hotplugged on real HW, they get added to e820. Same applies to
>>>>>> memory added via HyperV balloon (unless memory is unplugged via
>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
>>>>>>
>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
>>>>>>
>>>>>>
>>>>>> But I assume only Andrew can enlighten us.
>>>>>>
>>>>>> @Andrew, any guidance here? Should we really add all memory to the
>>>>>> firmware memmap, even if this contradicts the existing
>>>>>> documentation? (especially, if the actual firmware memmap will *not*
>>>>>> contain that memory after a reboot)
>>>>>
>>>>> For some reason that patch is misattributed - it was authored by
>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
>>>>> a decade.  I looked through the email discussion from that time and I'm
>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
>>>>> review comments.
>>>>
>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
>>>> clear what has to be done here. I will add some of these details to the
>>>> patch description.
>>>>
>>>> Also, now that I know that kexec-tools in particular already doesn't
>>>> handle dax/kmem memory properly (such memory will not get dumped via
>>>> kdump) and won't really suffer from a name change in /proc/iomem, I will
>>>> go back to the MHP_DRIVER_MANAGED approach and
>>>> 1. Don't create firmware memmap entries
>>>> 2. Name the resource "System RAM (driver managed)"
>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
>>>>
>>>> This way, kernel users and user space can figure out that this memory
>>>> has different semantics and handle it accordingly - I think that was
>>>> what Eric was asking for.
>>>>
>>>> Of course, open for suggestions.
>>>
>>> I'm still more of a fan of this being communicated by "System RAM"
>>
>> I mentioned somewhere in this thread that "System RAM" inside a
>> hierarchy (like dax/kmem) is already basically ignored by
>> kexec-tools. So, placing it inside a hierarchy already makes it look
>> special.
>>
>> But after all, as we have to change kexec-tools either way, we can
>> directly go ahead and flag it properly as special (in case there will
>> ever be other cases where we could no longer distinguish it).
>>
>>> being parented especially because that tells you something about how
>>> the memory is driver-managed and which mechanism might be in play.
>>
>> That could be communicated to some degree via the resource hierarchy.
>>
>> E.g.,
>>
>>             [root@localhost ~]# cat /proc/iomem
>>             ...
>>             140000000-33fffffff : Persistent Memory
>>               140000000-1481fffff : namespace0.0
>>               150000000-33fffffff : dax0.0
>>                 150000000-33fffffff : System RAM (driver managed)
>>
>> vs.
>>
>>            :/# cat /proc/iomem
>>             [...]
>>             140000000-333ffffff : virtio-mem (virtio0)
>>               140000000-147ffffff : System RAM (driver managed)
>>               148000000-14fffffff : System RAM (driver managed)
>>               150000000-157ffffff : System RAM (driver managed)
>>
>> Good enough for my taste.
>>
>>> What about adding an optional /sys/firmware/memmap/X/parent attribute.
>>
>> I really don't want any firmware memmap entries for something that is
>> not part of the firmware provided memmap. In addition,
>> /sys/firmware/memmap/ is still a fairly x86_64 specific thing. Only mips
>> and two arm configs enable it at all.
>>
>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
> 
> I think that's a policy decision and policy decisions do not belong in
> the kernel. Give the tooling the opportunity to decide whether System
> RAM stays that way over a kexec. The parenthetical reference otherwise
> looks out of place to me in the /proc/iomem output. What makes it
> "driver managed" is how the kernel handles it, not how the kernel
> names it.

At least, virtio-mem is different. It really *has to be handled* by the
driver. This is not a policy. It's how it works.
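
To make the kexec-tools angle concrete, here is a minimal sketch of how a
kexec-style consumer could pick only firmware-backed RAM out of
/proc/iomem-style output. It assumes the "System RAM (driver managed)"
naming discussed above; `parse_iomem` and `kexec_ram_ranges` are invented
helper names for illustration, not actual kexec-tools code:

```python
import re

# Matches /proc/iomem-style lines such as
#   "140000000-147ffffff : System RAM (driver managed)"
# where nesting depth is encoded as two spaces of indentation per level.
IOMEM_LINE = re.compile(r"^(\s*)([0-9a-f]+)-([0-9a-f]+) : (.*)$")

def parse_iomem(text):
    """Return (start, end, depth, name) tuples for each resource line."""
    entries = []
    for line in text.splitlines():
        m = IOMEM_LINE.match(line)
        if not m:
            continue
        indent, start, end, name = m.groups()
        entries.append((int(start, 16), int(end, 16), len(indent) // 2, name))
    return entries

def kexec_ram_ranges(text):
    """Keep only plain "System RAM" -- the ranges a kexec-style tool would
    hand to the next kernel; driver-managed RAM is skipped entirely."""
    return [(s, e) for s, e, depth, name in parse_iomem(text)
            if name == "System RAM"]

sample = """\
100000000-13fffffff : System RAM
140000000-333ffffff : virtio-mem (virtio0)
  140000000-147ffffff : System RAM (driver managed)
  148000000-14fffffff : System RAM (driver managed)
"""
print(kexec_ram_ranges(sample))
```

With the proposed naming, the driver-managed ranges drop out purely on the
resource name, without the tool having to know about virtio-mem or dax/kmem
specifically.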

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri May 01 17:51:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 17:51:28 +0000
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
From: David Hildenbrand <david@redhat.com>
To: Dan Williams <dan.j.williams@intel.com>
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
Organization: Red Hat GmbH
Message-ID: <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
Date: Fri, 1 May 2020 19:51:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 01.05.20 19:45, David Hildenbrand wrote:
> On 01.05.20 19:39, Dan Williams wrote:
>> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
>>>
>>> On 01.05.20 18:56, Dan Williams wrote:
>>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>> On 01.05.20 00:24, Andrew Morton wrote:
>>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
>>>>>>
>>>>>>>>
>>>>>>>> Why does the firmware map support hotplug entries?
>>>>>>>
>>>>>>> I assume:
>>>>>>>
>>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still is
>>>>>>> mostly used on x86-64 only, IIRC). There, we had ACPI hotplug. When DIMMs
>>>>>>> get hotplugged on real HW, they get added to e820. The same applies to
>>>>>>> memory added via the Hyper-V balloon (unless memory is unplugged via
>>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
>>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
>>>>>>>
>>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
>>>>>>>
>>>>>>>
>>>>>>> But I assume only Andrew can enlighten us.
>>>>>>>
>>>>>>> @Andrew, any guidance here? Should we really add all memory to the
>>>>>>> firmware memmap, even if this contradicts the existing
>>>>>>> documentation? (especially if the actual firmware memmap will *not*
>>>>>>> contain that memory after a reboot)
>>>>>>
>>>>>> For some reason that patch is misattributed - it was authored by
>>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
>>>>>> a decade.  I looked through the email discussion from that time and I'm
>>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
>>>>>> review comments.
>>>>>
>>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
>>>>> clear what has to be done here. I will add some of these details to the
>>>>> patch description.
>>>>>
>>>>> Also, now that I know that kexec-tools in particular already doesn't
>>>>> handle dax/kmem memory properly (such memory will not get dumped via
>>>>> kdump) and won't really suffer from a name change in /proc/iomem, I will
>>>>> go back to the MHP_DRIVER_MANAGED approach and
>>>>> 1. Don't create firmware memmap entries
>>>>> 2. Name the resource "System RAM (driver managed)"
>>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
>>>>>
>>>>> This way, kernel users and user space can figure out that this memory
>>>>> has different semantics and handle it accordingly - I think that was
>>>>> what Eric was asking for.
>>>>>
>>>>> Of course, open for suggestions.
>>>>
>>>> I'm still more of a fan of this being communicated by "System RAM"
>>>
>>> I mentioned somewhere in this thread that "System RAM" inside a
>>> hierarchy (like dax/kmem) is already basically ignored by
>>> kexec-tools. So, placing it inside a hierarchy already makes it look
>>> special.
>>>
>>> But after all, as we have to change kexec-tools either way, we can
>>> directly go ahead and flag it properly as special (in case there will
>>> ever be other cases where we could no longer distinguish it).
>>>
>>>> being parented especially because that tells you something about how
>>>> the memory is driver-managed and which mechanism might be in play.
>>>
>>> That could be communicated to some degree via the resource hierarchy.
>>>
>>> E.g.,
>>>
>>>             [root@localhost ~]# cat /proc/iomem
>>>             ...
>>>             140000000-33fffffff : Persistent Memory
>>>               140000000-1481fffff : namespace0.0
>>>               150000000-33fffffff : dax0.0
>>>                 150000000-33fffffff : System RAM (driver managed)
>>>
>>> vs.
>>>
>>>            :/# cat /proc/iomem
>>>             [...]
>>>             140000000-333ffffff : virtio-mem (virtio0)
>>>               140000000-147ffffff : System RAM (driver managed)
>>>               148000000-14fffffff : System RAM (driver managed)
>>>               150000000-157ffffff : System RAM (driver managed)
>>>
>>> Good enough for my taste.
>>>
>>>> What about adding an optional /sys/firmware/memmap/X/parent attribute.
>>>
>>> I really don't want any firmware memmap entries for something that is
>>> not part of the firmware provided memmap. In addition,
>>> /sys/firmware/memmap/ is still a fairly x86_64 specific thing. Only mips
>>> and two arm configs enable it at all.
>>>
>>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
>>
>> I think that's a policy decision and policy decisions do not belong in
>> the kernel. Give the tooling the opportunity to decide whether System
>> RAM stays that way over a kexec. The parenthetical reference otherwise
>> looks out of place to me in the /proc/iomem output. What makes it
>> "driver managed" is how the kernel handles it, not how the kernel
>> names it.
> 
> At least, virtio-mem is different. It really *has to be handled* by the
> driver. This is not a policy. It's how it works.
> 

Oh, and I don't see why "System RAM (driver managed)" would hinder any
policy in user space that still wants to do what it thinks is the right
thing to do (e.g., for dax).

"System RAM (driver managed)" would mean: Memory is not part of the raw
firmware memmap. It was detected and added by a driver. Handle with
care, this is special.
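
As a rough illustration of that "handle with care" split: given
(start, end, name) resource tuples (as a /proc/iomem parser might produce),
user-space policy could separate the two kinds of RAM like this. This is a
purely hypothetical helper assuming the proposed resource name; nothing here
is an existing kernel or tooling API:

```python
def classify_system_ram(entries):
    """Return (firmware_backed, driver_managed) range lists.

    Driver-managed ranges are present now but not part of the raw
    firmware memmap: a kdump policy would still want to dump them,
    while a kexec policy would not hand them to the next kernel as
    ordinary boot memory."""
    firmware, managed = [], []
    for start, end, name in entries:
        if name == "System RAM":
            firmware.append((start, end))
        elif name == "System RAM (driver managed)":
            managed.append((start, end))
    return firmware, managed

# Sample resources modeled on the /proc/iomem excerpts quoted earlier.
sample_entries = [
    (0x100000000, 0x13fffffff, "System RAM"),
    (0x150000000, 0x157ffffff, "System RAM (driver managed)"),
]
firmware, managed = classify_system_ram(sample_entries)
print(firmware, managed)
```

The point of the naming is exactly this: the split is visible to any
consumer, but each consumer remains free to apply its own policy to the
driver-managed list.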

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri May 01 18:03:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 18:03:39 +0000
MIME-Version: 1.0
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
In-Reply-To: <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Fri, 1 May 2020 11:03:11 -0700
Message-ID: <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: David Hildenbrand <david@redhat.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, May 1, 2020 at 10:51 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 01.05.20 19:45, David Hildenbrand wrote:
> > On 01.05.20 19:39, Dan Williams wrote:
> >> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
> >>>
> >>> On 01.05.20 18:56, Dan Williams wrote:
> >>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
> >>>>>
> >>>>> On 01.05.20 00:24, Andrew Morton wrote:
> >>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
> >>>>>>
> >>>>>>>>
> >>>>>>>> Why does the firmware map support hotplug entries?
> >>>>>>>
> >>>>>>> I assume:
> >>>>>>>
> >>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still is
> >>>>>>> mostly used on x86-64 only, IIRC). There, we had ACPI hotplug. When DIMMs
> >>>>>>> get hotplugged on real HW, they get added to e820. The same applies to
> >>>>>>> memory added via the Hyper-V balloon (unless memory is unplugged via
> >>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
> >>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
> >>>>>>>
> >>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
> >>>>>>>
> >>>>>>>
> >>>>>>> But I assume only Andrew can enlighten us.
> >>>>>>>
> >>>>>>> @Andrew, any guidance here? Should we really add all memory to the
> >>>>>>> firmware memmap, even if this contradicts the existing
> >>>>>>> documentation? (especially if the actual firmware memmap will *not*
> >>>>>>> contain that memory after a reboot)
> >>>>>>
> >>>>>> For some reason that patch is misattributed - it was authored by
> >>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
> >>>>>> a decade.  I looked through the email discussion from that time and I'm
> >>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
> >>>>>> review comments.
> >>>>>
> >>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
> >>>>> clear what has to be done here. I will add some of these details to the
> >>>>> patch description.
> >>>>>
> >>>>> Also, now that I know that kexec-tools in particular already doesn't
> >>>>> handle dax/kmem memory properly (such memory will not get dumped via
> >>>>> kdump) and won't really suffer from a name change in /proc/iomem, I will
> >>>>> go back to the MHP_DRIVER_MANAGED approach and
> >>>>> 1. Don't create firmware memmap entries
> >>>>> 2. Name the resource "System RAM (driver managed)"
> >>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
> >>>>>
> >>>>> This way, kernel users and user space can figure out that this memory
> >>>>> has different semantics and handle it accordingly - I think that was
> >>>>> what Eric was asking for.
> >>>>>
> >>>>> Of course, open for suggestions.
> >>>>
> >>>> I'm still more of a fan of this being communicated by "System RAM"
> >>>
> >>> I mentioned somewhere in this thread that "System RAM" inside a
> >>> hierarchy (like dax/kmem) is already basically ignored by
> >>> kexec-tools. So, placing it inside a hierarchy already makes it look
> >>> special.
> >>>
> >>> But after all, as we have to change kexec-tools either way, we can
> >>> directly go ahead and flag it properly as special (in case there will
> >>> ever be other cases where we could no longer distinguish it).
> >>>
> >>>> being parented especially because that tells you something about how
> >>>> the memory is driver-managed and which mechanism might be in play.
> >>>
> >>> That could be communicated to some degree via the resource hierarchy.
> >>>
> >>> E.g.,
> >>>
> >>>             [root@localhost ~]# cat /proc/iomem
> >>>             ...
> >>>             140000000-33fffffff : Persistent Memory
> >>>               140000000-1481fffff : namespace0.0
> >>>               150000000-33fffffff : dax0.0
> >>>                 150000000-33fffffff : System RAM (driver managed)
> >>>
> >>> vs.
> >>>
> >>>            :/# cat /proc/iomem
> >>>             [...]
> >>>             140000000-333ffffff : virtio-mem (virtio0)
> >>>               140000000-147ffffff : System RAM (driver managed)
> >>>               148000000-14fffffff : System RAM (driver managed)
> >>>               150000000-157ffffff : System RAM (driver managed)
> >>>
> >>> Good enough for my taste.
> >>>
> >>>> What about adding an optional /sys/firmware/memmap/X/parent attribute.
> >>>
> >>> I really don't want any firmware memmap entries for something that is
> >>> not part of the firmware provided memmap. In addition,
> >>> /sys/firmware/memmap/ is still a fairly x86_64 specific thing. Only mips
> >>> and two arm configs enable it at all.
> >>>
> >>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
> >>
> >> I think that's a policy decision and policy decisions do not belong in
> >> the kernel. Give the tooling the opportunity to decide whether System
> >> RAM stays that way over a kexec. The parenthetical reference otherwise
> >> looks out of place to me in the /proc/iomem output. What makes it
> >> "driver managed" is how the kernel handles it, not how the kernel
> >> names it.
> >
> > At least, virtio-mem is different. It really *has to be handled* by the
> > driver. This is not a policy. It's how it works.

...but that's not necessarily how dax/kmem works.

> >
>
> Oh, and I don't see why "System RAM (driver managed)" would hinder any
> policy in user space that still wants to do what it thinks is the right
> thing to do (e.g., for dax).
>
> "System RAM (driver managed)" would mean: Memory is not part of the raw
> firmware memmap. It was detected and added by a driver. Handle with
> care, this is special.

Oh, no, I was more reacting to your "don't update /sys/firmware/memmap
for the (driver managed) range" choice as being a policy decision.
Otherwise it feels to me that "System RAM (driver managed)" adds
confusion for casual users of /proc/iomem, while clued-in tools already
have the parent association to decide policy.
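
For reference, a small sketch of consuming /sys/firmware/memmap-style
entries: the per-entry start, end and type files follow the existing sysfs
layout, while the optional "parent" attribute is only what is being proposed
in this thread; it is treated here as a hypothetical extension, not an
existing ABI:

```python
import os
import tempfile

def read_firmware_memmap(root):
    """Read /sys/firmware/memmap-style entries from 'root'.

    Each numbered directory holds 'start' and 'end' (hex strings) and a
    'type' string; 'parent' is the hypothetical attribute from this
    thread and is usually absent."""
    entries = []
    for name in sorted(os.listdir(root), key=int):
        d = os.path.join(root, name)
        def attr(fname, default=None):
            path = os.path.join(d, fname)
            if not os.path.exists(path):
                return default
            with open(path) as f:
                return f.read().strip()
        entries.append({
            "start": int(attr("start"), 16),
            "end": int(attr("end"), 16),
            "type": attr("type"),
            "parent": attr("parent"),  # hypothetical attribute, may be None
        })
    return entries

# Demo against a fake sysfs tree, since a real one can't be assumed here.
tmp = tempfile.mkdtemp()
entry = os.path.join(tmp, "0")
os.mkdir(entry)
for fname, value in (("start", "0x100000000"),
                     ("end", "0x13fffffff"),
                     ("type", "System RAM")):
    with open(os.path.join(entry, fname), "w") as f:
        f.write(value + "\n")
entries = read_firmware_memmap(tmp)
print(entries)
```

A tool taking Dan's position would look at "parent" (when present) to decide
whether a "System RAM" entry should survive a kexec, rather than relying on
the resource name alone.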


From xen-devel-bounces@lists.xenproject.org Fri May 01 18:14:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 18:14:34 +0000
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: Dan Williams <dan.j.williams@intel.com>
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
From: David Hildenbrand <david@redhat.com>
 ileZWLccC4XKD1037Hy2FLAjzfoWgwBLA6ULu0exOOdIa58H4PsXtkFPrUF980EEibUp0zFz
 GotRVekFAceUaRvAj7dh76cToeZkfsjAvBVb4COXuhgX6N4pofgNkW2AtgYu1nUsPAo+NftU
 CxrhjHtLn4QEBpkbErnXQyMjHpIatlYGutVMS91XTQXYydCh5crMPs7hYVsvnmGHIaB9ZMfB
 njnuI31KBiLUks+paRkHQlFcgS2N3gkRBzH7xSZ+t7Re3jvXdXEzKBbQ+dC3lpJB0wPnyMcX
 FOTT3aZT7IgePkt5iC/BKBk3hqKteTnJFeVIT7EC+a6YUFg=
Organization: Red Hat GmbH
Message-ID: <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
Date: Fri, 1 May 2020 20:14:10 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.05.20 20:03, Dan Williams wrote:
> On Fri, May 1, 2020 at 10:51 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 01.05.20 19:45, David Hildenbrand wrote:
>>> On 01.05.20 19:39, Dan Williams wrote:
>>>> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>> On 01.05.20 18:56, Dan Williams wrote:
>>>>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>>>
>>>>>>> On 01.05.20 00:24, Andrew Morton wrote:
>>>>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> Why does the firmware map support hotplug entries?
>>>>>>>>>
>>>>>>>>> I assume:
>>>>>>>>>
>>>>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
>>>>>>>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
>>>>>>>>> get hotplugged on real HW, they get added to e820. Same applies to
>>>>>>>>> memory added via HyperV balloon (unless memory is unplugged via
>>>>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
>>>>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
>>>>>>>>>
>>>>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> But I assume only Andrew can enlighten us.
>>>>>>>>>
>>>>>>>>> @Andrew, any guidance here? Should we really add all memory to the
>>>>>>>>> firmware memmap, even if this contradicts with the existing
>>>>>>>>> documentation? (especially, if the actual firmware memmap will *not*
>>>>>>>>> contain that memory after a reboot)
>>>>>>>>
>>>>>>>> For some reason that patch is misattributed - it was authored by
>>>>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
>>>>>>>> a decade.  I looked through the email discussion from that time and I'm
>>>>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
>>>>>>>> review comments.
>>>>>>>
>>>>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
>>>>>>> clear what has to be done here. I will add some of these details to the
>>>>>>> patch description.
>>>>>>>
>>>>>>> Also, now that I know that esp. kexec-tools already don't consider
>>>>>>> dax/kmem memory properly (memory will not get dumped via kdump) and
>>>>>>> won't really suffer from a name change in /proc/iomem, I will go back to
>>>>>>> the MHP_DRIVER_MANAGED approach and
>>>>>>> 1. Don't create firmware memmap entries
>>>>>>> 2. Name the resource "System RAM (driver managed)"
>>>>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
>>>>>>>
>>>>>>> This way, kernel users and user space can figure out that this memory
>>>>>>> has different semantics and handle it accordingly - I think that was
>>>>>>> what Eric was asking for.
>>>>>>>
>>>>>>> Of course, open for suggestions.
>>>>>>
>>>>>> I'm still more of a fan of this being communicated by "System RAM"
>>>>>
>>>>> I was mentioning somewhere in this thread that "System RAM" inside a
>>>>> hierarchy (like dax/kmem) will already be basically ignored by
>>>>> kexec-tools. So, placing it inside a hierarchy already makes it look
>>>>> special.
>>>>>
>>>>> But after all, as we have to change kexec-tools either way, we can
>>>>> directly go ahead and flag it properly as special (in case there will
>>>>> ever be other cases where we could no longer distinguish it).
>>>>>
>>>>>> being parented especially because that tells you something about how
>>>>>> the memory is driver-managed and which mechanism might be in play.
>>>>>
>>>>> That could be communicated to some degree via the resource hierarchy.
>>>>>
>>>>> E.g.,
>>>>>
>>>>>             [root@localhost ~]# cat /proc/iomem
>>>>>             ...
>>>>>             140000000-33fffffff : Persistent Memory
>>>>>               140000000-1481fffff : namespace0.0
>>>>>               150000000-33fffffff : dax0.0
>>>>>                 150000000-33fffffff : System RAM (driver managed)
>>>>>
>>>>> vs.
>>>>>
>>>>>            :/# cat /proc/iomem
>>>>>             [...]
>>>>>             140000000-333ffffff : virtio-mem (virtio0)
>>>>>               140000000-147ffffff : System RAM (driver managed)
>>>>>               148000000-14fffffff : System RAM (driver managed)
>>>>>               150000000-157ffffff : System RAM (driver managed)
>>>>>
>>>>> Good enough for my taste.
>>>>>
>>>>>> What about adding an optional /sys/firmware/memmap/X/parent attribute.
>>>>>
>>>>> I really don't want any firmware memmap entries for something that is
>>>>> not part of the firmware provided memmap. In addition,
>>>>> /sys/firmware/memmap/ is still a fairly x86_64 specific thing. Only mips
>>>>> and two arm configs enable it at all.
>>>>>
>>>>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
>>>>
>>>> I think that's a policy decision and policy decisions do not belong in
>>>> the kernel. Give the tooling the opportunity to decide whether System
>>>> RAM stays that way over a kexec. The parenthetical reference otherwise
>>>> looks out of place to me in the /proc/iomem output. What makes it
>>>> "driver managed" is how the kernel handles it, not how the kernel
>>>> names it.
>>>
>>> At least, virtio-mem is different. It really *has to be handled* by the
>>> driver. This is not a policy. It's how it works.
>
> ...but that's not necessarily how dax/kmem works.
>

Yes, and user space could still take that memory and add it to the
firmware memmap if it really wants to. It knows that it is special. It
can figure out that it belongs to a dax device using /proc/iomem.
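
The parent association mentioned above can be recovered mechanically from the indentation in /proc/iomem. A rough userspace sketch (the sample text, function names, and the 2-space-indent assumption are illustrative, not from the thread):

```python
# Sketch: how a tool might map "System RAM" children to their parent
# resource (e.g. a dax device) from /proc/iomem-style output.
# SAMPLE_IOMEM mirrors the dax example quoted earlier in the thread;
# a real tool would read /proc/iomem itself.

SAMPLE_IOMEM = """\
140000000-33fffffff : Persistent Memory
  140000000-1481fffff : namespace0.0
  150000000-33fffffff : dax0.0
    150000000-33fffffff : System RAM (driver managed)
"""

def parse_iomem(text):
    """Return (range, name, parent_name) tuples, using 2-space indentation."""
    stack, out = [], []
    for line in text.splitlines():
        if not line.strip():
            continue
        depth = (len(line) - len(line.lstrip())) // 2
        rng, _, name = line.strip().partition(" : ")
        stack = stack[:depth]                 # pop back to this nesting level
        out.append((rng, name, stack[-1] if stack else None))
        stack.append(name)
    return out

def driver_managed_parents(text):
    """Names of resources that parent driver-managed System RAM ranges."""
    return [parent for _, name, parent in parse_iomem(text)
            if name.startswith("System RAM") and parent]
```

With the parent in hand, a tool in the kexec-tools mold could decide per mechanism whether a range should survive into the next kernel's map, instead of keying off the resource name alone.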

>>>
>>
>> Oh, and I don't see why "System RAM (driver managed)" would hinder any
>> policy in user space to still do what it thinks is the right thing to do
>> (e.g., for dax).
>>
>> "System RAM (driver managed)" would mean: Memory is not part of the raw
>> firmware memmap. It was detected and added by a driver. Handle with
>> care, this is special.
>
> Oh, no, I was more reacting to your, "don't update
> /sys/firmware/memmap for the (driver managed) range" choice as being a
> policy decision. It otherwise feels to me "System RAM (driver
> managed)" adds confusion for casual users of /proc/iomem and for clued
> in tools they have the parent association to decide policy.

Not sure if I understand correctly, so bear with me :).

Adding or not adding stuff to /sys/firmware/memmap is not a policy
decision. If it's not part of the raw firmware-provided memmap, it has
no business being in /sys/firmware/memmap. That's what the documentation
from 2008 tells us.

Again, my point is that we don't create /sys/firmware/memmap entries for
dax/kmem and virtio-mem memory - because it's not part of the raw
firmware-provided memmap. I was not suggesting to add something like
"System RAM (driver managed)" there instead, maybe that part was confusing.

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri May 01 18:44:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 18:44:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUade-00044G-91; Fri, 01 May 2020 18:43:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D0mR=6P=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1jUadd-000446-DO
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 18:43:53 +0000
X-Inumbo-ID: b240cc06-8bdb-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b240cc06-8bdb-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 18:43:49 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id r7so7930725edo.11
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 11:43:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=intel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=o/okLHQ7l1eoRIFhJ+xNSW46JphId+vYlLM2LdsSU3E=;
 b=weN8QfHuExHYaepq+7E1Q/0SRYMh7p8ZjcGwSWK2zt19OAmRo/ZapS6C19zKkSm2B7
 MgQyzH6J0oiR443CAJkWGf8tj+BVUlZSRUQ9WuY+YJli6fdOz/CSkzTn2K8HogDu50cD
 W2P3M2G+3faMDDJu2LjMFToE0w4DO9UD1vdNV28xQfo7n2TAVLQ5qprKz/RbrdAhYzCf
 FlniNgfFyxwzkns6nwdnGfxWiaPLt7btxT5lqw3kQWDQeAZp0ulWOQ1mPozpFGBjSDvh
 vQAiEc4xPOYkrtwSgXBLsqhlMidXYrFnlpwyZdjYONP8/X5zXKxBC6LYv8ySlQEUF4d9
 8YQw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=o/okLHQ7l1eoRIFhJ+xNSW46JphId+vYlLM2LdsSU3E=;
 b=Cy4HOTKPd19I7+P1pvkD1x+jRrqxihIHhiU6G7mNvR3BPrnTU/s5zgrQWH/h3VhMY0
 cxeJ4XAGHotxtUgbRjliwQYgXicGml70k2UUBa/j1Kh/Ec7H5IkNnb/lmcPY/w2N6yic
 JXCXPit/kuqbspjlepreFJmtghlAhpunYhYLlFX598UqQSUUWol2opL+b1FUL1nlQ7X1
 N4LzV2QAHV0BcA3T584LcvYk/2/U94jOzugGus49Nur97t5lcinGgoOtZ4eZJJ/MDO1z
 8kArlpQUknAErGZrpKp9IxxHPq65b+O7iDE4/fIT+ufbdHJ1eCOyQQoSClECH6RLzvLx
 rtFA==
X-Gm-Message-State: AGi0PuZcxnFf40eBYoI6kMkPogWfzduVBLrnytnROmW84MKkNQFbMW+j
 qMKp55kqNEmtyR36iYoYjFfxKYYHVOOLDp5/x1/Blg==
X-Google-Smtp-Source: APiQypLqnaYv43r6Uq1aSbujXwhGYNQdyJhTIBkjZMc5QK+kjynpE0ZZSNw0BY1FtHl+WNRGQWKFueMB52jMwly/Eb8=
X-Received: by 2002:a50:ee86:: with SMTP id f6mr4936861edr.123.1588358628608; 
 Fri, 01 May 2020 11:43:48 -0700 (PDT)
MIME-Version: 1.0
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
 <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
In-Reply-To: <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Fri, 1 May 2020 11:43:37 -0700
Message-ID: <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: David Hildenbrand <david@redhat.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 1, 2020 at 11:14 AM David Hildenbrand <david@redhat.com> wrote:
>
> On 01.05.20 20:03, Dan Williams wrote:
> > On Fri, May 1, 2020 at 10:51 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 01.05.20 19:45, David Hildenbrand wrote:
> >>> On 01.05.20 19:39, Dan Williams wrote:
> >>>> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
> >>>>>
> >>>>> On 01.05.20 18:56, Dan Williams wrote:
> >>>>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
> >>>>>>>
> >>>>>>> On 01.05.20 00:24, Andrew Morton wrote:
> >>>>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
> >>>>>>>>
> >>>>>>>>>>
> >>>>>>>>>> Why does the firmware map support hotplug entries?
> >>>>>>>>>
> >>>>>>>>> I assume:
> >>>>>>>>>
> >>>>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
> >>>>>>>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
> >>>>>>>>> get hotplugged on real HW, they get added to e820. Same applies to
> >>>>>>>>> memory added via HyperV balloon (unless memory is unplugged via
> >>>>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
> >>>>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
> >>>>>>>>>
> >>>>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
> >>>>>>>>>
> >>>>>>>>>
> >>>>>>>>> But I assume only Andrew can enlighten us.
> >>>>>>>>>
> >>>>>>>>> @Andrew, any guidance here? Should we really add all memory to the
> >>>>>>>>> firmware memmap, even if this contradicts with the existing
> >>>>>>>>> documentation? (especially, if the actual firmware memmap will *not*
> >>>>>>>>> contain that memory after a reboot)
> >>>>>>>>
> >>>>>>>> For some reason that patch is misattributed - it was authored by
> >>>>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
> >>>>>>>> a decade.  I looked through the email discussion from that time and I'm
> >>>>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
> >>>>>>>> review comments.
> >>>>>>>
> >>>>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
> >>>>>>> clear what has to be done here. I will add some of these details to the
> >>>>>>> patch description.
> >>>>>>>
> >>>>>>> Also, now that I know that esp. kexec-tools already don't consider
> >>>>>>> dax/kmem memory properly (memory will not get dumped via kdump) and
> >>>>>>> won't really suffer from a name change in /proc/iomem, I will go back to
> >>>>>>> the MHP_DRIVER_MANAGED approach and
> >>>>>>> 1. Don't create firmware memmap entries
> >>>>>>> 2. Name the resource "System RAM (driver managed)"
> >>>>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
> >>>>>>>
> >>>>>>> This way, kernel users and user space can figure out that this memory
> >>>>>>> has different semantics and handle it accordingly - I think that was
> >>>>>>> what Eric was asking for.
> >>>>>>>
> >>>>>>> Of course, open for suggestions.
> >>>>>>
> >>>>>> I'm still more of a fan of this being communicated by "System RAM"
> >>>>>
> >>>>> I was mentioning somewhere in this thread that "System RAM" inside a
> >>>>> hierarchy (like dax/kmem) will already be basically ignored by
> >>>>> kexec-tools. So, placing it inside a hierarchy already makes it look
> >>>>> special.
> >>>>>
> >>>>> But after all, as we have to change kexec-tools either way, we can
> >>>>> directly go ahead and flag it properly as special (in case there will
> >>>>> ever be other cases where we could no longer distinguish it).
> >>>>>
> >>>>>> being parented especially because that tells you something about how
> >>>>>> the memory is driver-managed and which mechanism might be in play.
> >>>>>
> >>>>> That could be communicated to some degree via the resource hierarchy.
> >>>>>
> >>>>> E.g.,
> >>>>>
> >>>>>             [root@localhost ~]# cat /proc/iomem
> >>>>>             ...
> >>>>>             140000000-33fffffff : Persistent Memory
> >>>>>               140000000-1481fffff : namespace0.0
> >>>>>               150000000-33fffffff : dax0.0
> >>>>>                 150000000-33fffffff : System RAM (driver managed)
> >>>>>
> >>>>> vs.
> >>>>>
> >>>>>            :/# cat /proc/iomem
> >>>>>             [...]
> >>>>>             140000000-333ffffff : virtio-mem (virtio0)
> >>>>>               140000000-147ffffff : System RAM (driver managed)
> >>>>>               148000000-14fffffff : System RAM (driver managed)
> >>>>>               150000000-157ffffff : System RAM (driver managed)
> >>>>>
> >>>>> Good enough for my taste.
> >>>>>
> >>>>>> What about adding an optional /sys/firmware/memmap/X/parent attribute.
> >>>>>
> >>>>> I really don't want any firmware memmap entries for something that is
> >>>>> not part of the firmware provided memmap. In addition,
> >>>>> /sys/firmware/memmap/ is still a fairly x86_64 specific thing. Only mips
> >>>>> and two arm configs enable it at all.
> >>>>>
> >>>>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
> >>>>
> >>>> I think that's a policy decision and policy decisions do not belong in
> >>>> the kernel. Give the tooling the opportunity to decide whether System
> >>>> RAM stays that way over a kexec. The parenthetical reference otherwise
> >>>> looks out of place to me in the /proc/iomem output. What makes it
> >>>> "driver managed" is how the kernel handles it, not how the kernel
> >>>> names it.
> >>>
> >>> At least, virtio-mem is different. It really *has to be handled* by the
> >>> driver. This is not a policy. It's how it works.
> >
> > ...but that's not necessarily how dax/kmem works.
> >
>
> Yes, and user space could still take that memory and add it to the
> firmware memmap if it really wants to. It knows that it is special. It
> can figure out that it belongs to a dax device using /proc/iomem.
>
> >>>
> >>
> >> Oh, and I don't see why "System RAM (driver managed)" would hinder any
> >> policy in user space to still do what it thinks is the right thing to do
> >> (e.g., for dax).
> >>
> >> "System RAM (driver managed)" would mean: Memory is not part of the raw
> >> firmware memmap. It was detected and added by a driver. Handle with
> >> care, this is special.
> >
> > Oh, no, I was more reacting to your, "don't update
> > /sys/firmware/memmap for the (driver managed) range" choice as being a
> > policy decision. It otherwise feels to me "System RAM (driver
> > managed)" adds confusion for casual users of /proc/iomem and for clued
> > in tools they have the parent association to decide policy.
>
> Not sure if I understand correctly, so bear with me :).
>
> Adding or not adding stuff to /sys/firmware/memmap is not a policy
> decision. If it's not part of the raw firmware-provided memmap, it has
> no business being in /sys/firmware/memmap. That's what the documentation
> from 2008 tells us.

It just occurs to me that there are valid cases for both wanting to
start over with driver managed memory with a kexec and keeping it in
the map. Consider the case of EFI Special Purpose (SP) Memory that is
marked EFI Conventional Memory with the SP attribute. In that case the
firmware memory map marked it as conventional RAM, but the kernel
optionally marks it as System RAM vs Soft Reserved. The 2008 patch
simply does not consider that case. I'm not sure strict textualism
works for coding decisions.
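
The fork described here - the same firmware-conventional range legitimately ending up either as System RAM or held back - can be sketched as a tiny classification helper. The constants and names below are illustrative placeholders, not the real UEFI or kernel identifiers:

```python
# Sketch of the policy choice for EFI "Conventional Memory" carrying
# the Special Purpose (SP) attribute: expose it as ordinary System RAM,
# or hold it back as "Soft Reserved" for a driver (e.g. dax/kmem) to
# claim. Placeholder values, not the actual UEFI constants.

EFI_CONVENTIONAL_MEMORY = 7      # placeholder type code
EFI_MEMORY_SP = 1 << 42          # placeholder attribute bit

def classify(mem_type, attributes, sp_as_ram=False):
    """Label an EFI range the way a kernel policy knob might."""
    if mem_type != EFI_CONVENTIONAL_MEMORY:
        return "other"
    if (attributes & EFI_MEMORY_SP) and not sp_as_ram:
        return "Soft Reserved"   # defer to a driver rather than boot-time RAM
    return "System RAM"
```

The `sp_as_ram` knob stands in for the opt-in the kernel offers: the firmware map says "conventional", yet the resulting label - and whether the range survives a kexec as RAM - is decided later.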


From xen-devel-bounces@lists.xenproject.org Fri May 01 19:18:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 19:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUbAu-0006dB-6Q; Fri, 01 May 2020 19:18:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jt3Y=6P=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jUbAt-0006d6-33
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 19:18:15 +0000
X-Inumbo-ID: 8039d393-8be0-11ea-9b4f-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8039d393-8be0-11ea-9b4f-12813bfff9fa;
 Fri, 01 May 2020 19:18:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588360692;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=mVhETyfODbXJJ/nYfDw5o0zsHqXv28oT+S+V75mVd3M=;
 b=RTKlV8oz8rGqzg4dm9T1RKYFPERL8bBkJDAqLMIDi/B5QdTguhzXq87XXfIpeGk5jqTn6M
 wKnF8IU11mC7eRKhOQmozLiTF0dpQvFTR7IYd/kFl4EHnxzj1eaEgEc0i2Uvq1nClUXsBw
 oxvKZ5vSUqMiIpNHS9BB3EDMx7tUiOU=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-123-UDB8ubrTNymfT6HkLbVrgg-1; Fri, 01 May 2020 15:18:09 -0400
X-MC-Unique: UDB8ubrTNymfT6HkLbVrgg-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 69FE418FE868;
 Fri,  1 May 2020 19:18:06 +0000 (UTC)
Received: from [10.36.112.180] (ovpn-112-180.ams2.redhat.com [10.36.112.180])
 by smtp.corp.redhat.com (Postfix) with ESMTP id A3A9639D;
 Fri,  1 May 2020 19:17:59 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: Dan Williams <dan.j.williams@intel.com>
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
 <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
 <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMFCQlmAYAGCwkIBwMCBhUI
 AgkKCwQWAgMBAh4BAheAFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl3pImkCGQEACgkQTd4Q
 9wD/g1o+VA//SFvIHUAvul05u6wKv/pIR6aICPdpF9EIgEU448g+7FfDgQwcEny1pbEzAmiw
 zAXIQ9H0NZh96lcq+yDLtONnXk/bEYWHHUA014A1wqcYNRY8RvY1+eVHb0uu0KYQoXkzvu+s
 Dncuguk470XPnscL27hs8PgOP6QjG4jt75K2LfZ0eAqTOUCZTJxA8A7E9+XTYuU0hs7QVrWJ
 jQdFxQbRMrYz7uP8KmTK9/Cnvqehgl4EzyRaZppshruKMeyheBgvgJd5On1wWq4ZUV5PFM4x
 II3QbD3EJfWbaJMR55jI9dMFa+vK7MFz3rhWOkEx/QR959lfdRSTXdxs8V3zDvChcmRVGN8U
 Vo93d1YNtWnA9w6oCW1dnDZ4kgQZZSBIjp6iHcA08apzh7DPi08jL7M9UQByeYGr8KuR4i6e
 RZI6xhlZerUScVzn35ONwOC91VdYiQgjemiVLq1WDDZ3B7DIzUZ4RQTOaIWdtXBWb8zWakt/
 ztGhsx0e39Gvt3391O1PgcA7ilhvqrBPemJrlb9xSPPRbaNAW39P8ws/UJnzSJqnHMVxbRZC
 Am4add/SM+OCP0w3xYss1jy9T+XdZa0lhUvJfLy7tNcjVG/sxkBXOaSC24MFPuwnoC9WvCVQ
 ZBxouph3kqc4Dt5X1EeXVLeba+466P1fe1rC8MbcwDkoUo65Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAiUEGAECAA8FAlXLn5ECGwwFCQlmAYAACgkQTd4Q
 9wD/g1qA6w/+M+ggFv+JdVsz5+ZIc6MSyGUozASX+bmIuPeIecc9UsFRatc91LuJCKMkD9Uv
 GOcWSeFpLrSGRQ1Z7EMzFVU//qVs6uzhsNk0RYMyS0B6oloW3FpyQ+zOVylFWQCzoyyf227y
 GW8HnXunJSC+4PtlL2AY4yZjAVAPLK2l6mhgClVXTQ/S7cBoTQKP+jvVJOoYkpnFxWE9pn4t
 H5QIFk7Ip8TKr5k3fXVWk4lnUi9MTF/5L/mWqdyIO1s7cjharQCstfWCzWrVeVctpVoDfJWp
 4LwTuQ5yEM2KcPeElLg5fR7WB2zH97oI6/Ko2DlovmfQqXh9xWozQt0iGy5tWzh6I0JrlcxJ
 ileZWLccC4XKD1037Hy2FLAjzfoWgwBLA6ULu0exOOdIa58H4PsXtkFPrUF980EEibUp0zFz
 GotRVekFAceUaRvAj7dh76cToeZkfsjAvBVb4COXuhgX6N4pofgNkW2AtgYu1nUsPAo+NftU
 CxrhjHtLn4QEBpkbErnXQyMjHpIatlYGutVMS91XTQXYydCh5crMPs7hYVsvnmGHIaB9ZMfB
 njnuI31KBiLUks+paRkHQlFcgS2N3gkRBzH7xSZ+t7Re3jvXdXEzKBbQ+dC3lpJB0wPnyMcX
 FOTT3aZT7IgePkt5iC/BKBk3hqKteTnJFeVIT7EC+a6YUFg=
Organization: Red Hat GmbH
Message-ID: <04242d48-5fa9-6da4-3e4a-991e401eb580@redhat.com>
Date: Fri, 1 May 2020 21:17:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.05.20 20:43, Dan Williams wrote:
> On Fri, May 1, 2020 at 11:14 AM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 01.05.20 20:03, Dan Williams wrote:
>>> On Fri, May 1, 2020 at 10:51 AM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 01.05.20 19:45, David Hildenbrand wrote:
>>>>> On 01.05.20 19:39, Dan Williams wrote:
>>>>>> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>>>
>>>>>>> On 01.05.20 18:56, Dan Williams wrote:
>>>>>>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>> On 01.05.20 00:24, Andrew Morton wrote:
>>>>>>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> Why does the firmware map support hotplug entries?
>>>>>>>>>>>
>>>>>>>>>>> I assume:
>>>>>>>>>>>
>>>>>>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
>>>>>>>>>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
>>>>>>>>>>> get hotplugged on real HW, they get added to e820. Same applies to
>>>>>>>>>>> memory added via HyperV balloon (unless memory is unplugged via
>>>>>>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
>>>>>>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
>>>>>>>>>>>
>>>>>>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> But I assume only Andrew can enlighten us.
>>>>>>>>>>>
>>>>>>>>>>> @Andrew, any guidance here? Should we really add all memory to the
>>>>>>>>>>> firmware memmap, even if this contradicts the existing
>>>>>>>>>>> documentation? (especially, if the actual firmware memmap will *not*
>>>>>>>>>>> contain that memory after a reboot)
>>>>>>>>>>
>>>>>>>>>> For some reason that patch is misattributed - it was authored by
>>>>>>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
>>>>>>>>>> a decade.  I looked through the email discussion from that time and I'm
>>>>>>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
>>>>>>>>>> review comments.
>>>>>>>>>
>>>>>>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
>>>>>>>>> clear what has to be done here. I will add some of these details to the
>>>>>>>>> patch description.
>>>>>>>>>
>>>>>>>>> Also, now that I know that esp. kexec-tools already don't consider
>>>>>>>>> dax/kmem memory properly (memory will not get dumped via kdump) and
>>>>>>>>> won't really suffer from a name change in /proc/iomem, I will go back to
>>>>>>>>> the MHP_DRIVER_MANAGED approach and
>>>>>>>>> 1. Don't create firmware memmap entries
>>>>>>>>> 2. Name the resource "System RAM (driver managed)"
>>>>>>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
>>>>>>>>>
>>>>>>>>> This way, kernel users and user space can figure out that this memory
>>>>>>>>> has different semantics and handle it accordingly - I think that was
>>>>>>>>> what Eric was asking for.
>>>>>>>>>
>>>>>>>>> Of course, open for suggestions.
>>>>>>>>
>>>>>>>> I'm still more of a fan of this being communicated by "System RAM"
>>>>>>>
>>>>>>> I was mentioning somewhere in this thread that "System RAM" inside a
>>>>>>> hierarchy (like dax/kmem) will already be basically ignored by
>>>>>>> kexec-tools. So, placing it inside a hierarchy already makes it look
>>>>>>> special.
>>>>>>>
>>>>>>> But after all, as we have to change kexec-tools either way, we can
>>>>>>> directly go ahead and flag it properly as special (in case there will
>>>>>>> ever be other cases where we could no longer distinguish it).
>>>>>>>
>>>>>>>> being parented especially because that tells you something about how
>>>>>>>> the memory is driver-managed and which mechanism might be in play.
>>>>>>>
>>>>>>> That could be communicated to some degree via the resource hierarchy.
>>>>>>>
>>>>>>> E.g.,
>>>>>>>
>>>>>>>             [root@localhost ~]# cat /proc/iomem
>>>>>>>             ...
>>>>>>>             140000000-33fffffff : Persistent Memory
>>>>>>>               140000000-1481fffff : namespace0.0
>>>>>>>               150000000-33fffffff : dax0.0
>>>>>>>                 150000000-33fffffff : System RAM (driver managed)
>>>>>>>
>>>>>>> vs.
>>>>>>>
>>>>>>>            :/# cat /proc/iomem
>>>>>>>             [...]
>>>>>>>             140000000-333ffffff : virtio-mem (virtio0)
>>>>>>>               140000000-147ffffff : System RAM (driver managed)
>>>>>>>               148000000-14fffffff : System RAM (driver managed)
>>>>>>>               150000000-157ffffff : System RAM (driver managed)
>>>>>>>
>>>>>>> Good enough for my taste.
>>>>>>>
>>>>>>>> What about adding an optional /sys/firmware/memmap/X/parent attribute?
>>>>>>>
>>>>>>> I really don't want any firmware memmap entries for something that is
>>>>>>> not part of the firmware provided memmap. In addition,
>>>>>>> /sys/firmware/memmap/ is still a fairly x86_64 specific thing. Only mips
>>>>>>> and two arm configs enable it at all.
>>>>>>>
>>>>>>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
>>>>>>
>>>>>> I think that's a policy decision and policy decisions do not belong in
>>>>>> the kernel. Give the tooling the opportunity to decide whether System
>>>>>> RAM stays that way over a kexec. The parenthetical reference otherwise
>>>>>> looks out of place to me in the /proc/iomem output. What makes it
>>>>>> "driver managed" is how the kernel handles it, not how the kernel
>>>>>> names it.
>>>>>
>>>>> At least, virtio-mem is different. It really *has to be handled* by the
>>>>> driver. This is not a policy. It's how it works.
>>>
>>> ...but that's not necessarily how dax/kmem works.
>>>
>>
>> Yes, and user space could still take that memory and add it to the
>> firmware memmap if it really wants to. It knows that it is special. It
>> can figure out that it belongs to a dax device using /proc/iomem.
>>
>>>>>
>>>>
>>>> Oh, and I don't see why "System RAM (driver managed)" would hinder any
>>>> policy in user space to still do what it thinks is the right thing to do
>>>> (e.g., for dax).
>>>>
>>>> "System RAM (driver managed)" would mean: Memory is not part of the raw
>>>> firmware memmap. It was detected and added by a driver. Handle with
>>>> care, this is special.
>>>
>>> Oh, no, I was more reacting to your "don't update
>>> /sys/firmware/memmap for the (driver managed) range" choice as being a
>>> policy decision. It otherwise feels to me that "System RAM (driver
>>> managed)" adds confusion for casual users of /proc/iomem, and for
>>> clued-in tools they have the parent association to decide policy.
>>
>> Not sure if I understand correctly, so bear with me :).
>>
>> Adding or not adding stuff to /sys/firmware/memmap is not a policy
>> decision. If it's not part of the raw firmware-provided memmap, it has
>> nothing to do in /sys/firmware/memmap. That's what the documentation
>> from 2008 tells us.
>
> It just occurs to me that there are valid cases for both wanting to
> start over with driver managed memory with a kexec and keeping it in
> the map.

Yes, there might be valid cases. My gut feeling is that in the general
case, you want to let the kexec kernel implement a policy / let the user
in the new system decide.

But as I said, you can implement whatever policy you want in
kexec-tools. It has access to all information.

> Consider the case of EFI Special Purpose (SP) Memory that is
> marked EFI Conventional Memory with the SP attribute. In that case the
> firmware memory map marked it as conventional RAM, but the kernel
> optionally marks it as System RAM vs Soft Reserved. The 2008 patch
> simply does not consider that case. I'm not sure strict textualism
> works for coding decisions.

I am no expert on that matter (esp. EFI). But looking at the users of
firmware_map_add_early(), the single user is in arch/x86/kernel/e820.c.
So the single source of /sys/firmware/memmap is (besides hotplug) e820.

"'e820_table_firmware': the original firmware version passed to us by
the bootloader - not modified by the kernel. ... inform the user about
the firmware's notion of memory layout via /sys/firmware/memmap"
(arch/x86/kernel/e820.c)

How is the EFI Special Purpose (SP) Memory represented in e820?

/sys/firmware/memmap is really simple: it just dumps e820. No policies IIUC.
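That layout is simple enough for user space to re-read directly. As a rough sketch (the helper name is made up for illustration, but the per-entry `start`, `end` and `type` files are the documented sysfs ABI, see Documentation/ABI/testing/sysfs-firmware-memmap):

```python
# Sketch: reconstruct the firmware-provided map from /sys/firmware/memmap.
# Each numbered directory corresponds to one firmware map entry and
# exposes 'start', 'end' and 'type' files.
import os

def read_firmware_memmap(root="/sys/firmware/memmap"):
    """Return a list of (start, end, type) tuples sorted by start address."""
    entries = []
    for name in os.listdir(root):
        d = os.path.join(root, name)
        if not name.isdigit() or not os.path.isdir(d):
            continue
        with open(os.path.join(d, "start")) as f:
            start = int(f.read(), 0)  # values are hex strings like "0x100000"
        with open(os.path.join(d, "end")) as f:
            end = int(f.read(), 0)
        with open(os.path.join(d, "type")) as f:
            typ = f.read().strip()
        entries.append((start, end, typ))
    return sorted(entries)

if os.path.isdir("/sys/firmware/memmap"):
    for start, end, typ in read_firmware_memmap():
        print("%x-%x : %s" % (start, end, typ))
```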

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri May 01 20:12:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 20:12:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUc1I-0002yq-CR; Fri, 01 May 2020 20:12:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D0mR=6P=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1jUc1G-0002yl-IH
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 20:12:22 +0000
X-Inumbo-ID: 0eab1da0-8be8-11ea-ae69-bc764e2007e4
Received: from mail-ej1-x641.google.com (unknown [2a00:1450:4864:20::641])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0eab1da0-8be8-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 20:12:18 +0000 (UTC)
Received: by mail-ej1-x641.google.com with SMTP id s9so8432263eju.1
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 13:12:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=intel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=iRKcFvP8imbVvHAKrE89Lr7nDWcrY0jAQxHChe7qeD0=;
 b=lzB4EVreyCTz1QQfhHkVzRcKFR5wHMHEr8Iw9FRkD6AiB8+TE+4OPD04GIAtwuf5fo
 +A8rXRN+gKWLSRKGaFqidiOQpBK8Di9lEQNUCAblfaPezwsWc1HzB66Z1XS+GFsnG+RT
 zZAc8Iueh+b1kTQ/TVsa8E3uTu7ALJucVtkhxtv798uFrSVRum15GE+a3ZE3OwMoXSgy
 NIA1AwEFVQ9xezXSDheI9gn5LbgLaoifWqmCVl2xSoJR8qjEmhjwtgdRWDjUCtYtR42H
 Mq80I8izLaJyJ2xhtBv9jV1r5wGXmFFb9d+ts2CWciDV5FuPssdRnlveU5WiQxTPm8f0
 35Pg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=iRKcFvP8imbVvHAKrE89Lr7nDWcrY0jAQxHChe7qeD0=;
 b=UrTvfaCsfCwe+4juIAj68kmjZWyTzhWu16rEPvHsbq0PON54zUXpd8Q3TjyZ7W15E7
 j86UpZAzrE8Wen0kHSfIL7K9+jbHm8SalmfcmCoKBN98Cx7ViyA2LeiRPA+WB1wJX2aR
 scac4y7jq/ADs9+UoxYy+L/jsKm6PdYinfmv2VvDKaeV1klbTyGNlBh3vm/wZgL6RoqN
 4H4uvyvaR/JGmuKrcA+WpcL3J4CIEsHL1wHb58GUPKHH9XExvjyK7C7IcwIbtCS+oFNX
 VZHPX9mBNatXHvTAd+sMa2yCYWCdhFn5LvDwMLMpWIQrYdRo3f4xrxL7P79iV7b5f2PX
 oG1g==
X-Gm-Message-State: AGi0PuattJ2E4O+bcHVWapiEA8t2MJ+83v0NJS87QsLayh4Fa1MMIYvI
 zuWdzDGlDJrTg5Vn4S4Fsm4ToQEEe82JNjjwOR7J5w==
X-Google-Smtp-Source: APiQypKNW1Zqw23KSgXAf2K2McMwoP6ixY0wguuAmrxkTDjVDOa91p7W66/GSmtI9dzKzp4Hm4+3EhSGn+NGMeFiXqY=
X-Received: by 2002:a17:906:855a:: with SMTP id
 h26mr5041948ejy.56.1588363937742; 
 Fri, 01 May 2020 13:12:17 -0700 (PDT)
MIME-Version: 1.0
References: <20200430102908.10107-1-david@redhat.com>
 <20200430102908.10107-3-david@redhat.com>
 <87pnbp2dcz.fsf@x220.int.ebiederm.org>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
 <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
 <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
 <04242d48-5fa9-6da4-3e4a-991e401eb580@redhat.com>
In-Reply-To: <04242d48-5fa9-6da4-3e4a-991e401eb580@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Fri, 1 May 2020 13:12:06 -0700
Message-ID: <CAPcyv4iXyOUDZgqhWH1KCObvATL=gP55xEr64rsRfUuJg5B+eQ@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: David Hildenbrand <david@redhat.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 1, 2020 at 12:18 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 01.05.20 20:43, Dan Williams wrote:
> > On Fri, May 1, 2020 at 11:14 AM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 01.05.20 20:03, Dan Williams wrote:
> >>> On Fri, May 1, 2020 at 10:51 AM David Hildenbrand <david@redhat.com> wrote:
> >>>>
> >>>> On 01.05.20 19:45, David Hildenbrand wrote:
> >>>>> On 01.05.20 19:39, Dan Williams wrote:
> >>>>>> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
> >>>>>>>
> >>>>>>> On 01.05.20 18:56, Dan Williams wrote:
> >>>>>>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
> >>>>>>>>>
> >>>>>>>>> On 01.05.20 00:24, Andrew Morton wrote:
> >>>>>>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
> >>>>>>>>>>
> >>>>>>>>>>>>
> >>>>>>>>>>>> Why does the firmware map support hotplug entries?
> >>>>>>>>>>>
> >>>>>>>>>>> I assume:
> >>>>>>>>>>>
> >>>>>>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
> >>>>>>>>>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
> >>>>>>>>>>> get hotplugged on real HW, they get added to e820. Same applies to
> >>>>>>>>>>> memory added via HyperV balloon (unless memory is unplugged via
> >>>>>>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
> >>>>>>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
> >>>>>>>>>>>
> >>>>>>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
> >>>>>>>>>>>
> >>>>>>>>>>>
> >>>>>>>>>>> But I assume only Andrew can enlighten us.
> >>>>>>>>>>>
> >>>>>>>>>>> @Andrew, any guidance here? Should we really add all memory to the
> >>>>>>>>>>> firmware memmap, even if this contradicts the existing
> >>>>>>>>>>> documentation? (especially, if the actual firmware memmap will *not*
> >>>>>>>>>>> contain that memory after a reboot)
> >>>>>>>>>>
> >>>>>>>>>> For some reason that patch is misattributed - it was authored by
> >>>>>>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
> >>>>>>>>>> a decade.  I looked through the email discussion from that time and I'm
> >>>>>>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
> >>>>>>>>>> review comments.
> >>>>>>>>>
> >>>>>>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
> >>>>>>>>> clear what has to be done here. I will add some of these details to the
> >>>>>>>>> patch description.
> >>>>>>>>>
> >>>>>>>>> Also, now that I know that esp. kexec-tools already don't consider
> >>>>>>>>> dax/kmem memory properly (memory will not get dumped via kdump) and
> >>>>>>>>> won't really suffer from a name change in /proc/iomem, I will go back to
> >>>>>>>>> the MHP_DRIVER_MANAGED approach and
> >>>>>>>>> 1. Don't create firmware memmap entries
> >>>>>>>>> 2. Name the resource "System RAM (driver managed)"
> >>>>>>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
> >>>>>>>>>
> >>>>>>>>> This way, kernel users and user space can figure out that this memory
> >>>>>>>>> has different semantics and handle it accordingly - I think that was
> >>>>>>>>> what Eric was asking for.
> >>>>>>>>>
> >>>>>>>>> Of course, open for suggestions.
> >>>>>>>>
> >>>>>>>> I'm still more of a fan of this being communicated by "System RAM"
> >>>>>>>
> >>>>>>> I was mentioning somewhere in this thread that "System RAM" inside a
> >>>>>>> hierarchy (like dax/kmem) will already be basically ignored by
> >>>>>>> kexec-tools. So, placing it inside a hierarchy already makes it look
> >>>>>>> special.
> >>>>>>>
> >>>>>>> But after all, as we have to change kexec-tools either way, we can
> >>>>>>> directly go ahead and flag it properly as special (in case there will
> >>>>>>> ever be other cases where we could no longer distinguish it).
> >>>>>>>
> >>>>>>>> being parented especially because that tells you something about how
> >>>>>>>> the memory is driver-managed and which mechanism might be in play.
> >>>>>>>
> >>>>>>> That could be communicated to some degree via the resource hierarchy.
> >>>>>>>
> >>>>>>> E.g.,
> >>>>>>>
> >>>>>>>             [root@localhost ~]# cat /proc/iomem
> >>>>>>>             ...
> >>>>>>>             140000000-33fffffff : Persistent Memory
> >>>>>>>               140000000-1481fffff : namespace0.0
> >>>>>>>               150000000-33fffffff : dax0.0
> >>>>>>>                 150000000-33fffffff : System RAM (driver managed)
> >>>>>>>
> >>>>>>> vs.
> >>>>>>>
> >>>>>>>            :/# cat /proc/iomem
> >>>>>>>             [...]
> >>>>>>>             140000000-333ffffff : virtio-mem (virtio0)
> >>>>>>>               140000000-147ffffff : System RAM (driver managed)
> >>>>>>>               148000000-14fffffff : System RAM (driver managed)
> >>>>>>>               150000000-157ffffff : System RAM (driver managed)
> >>>>>>>
> >>>>>>> Good enough for my taste.
> >>>>>>>
> >>>>>>>> What about adding an optional /sys/firmware/memmap/X/parent attribute?
> >>>>>>>
> >>>>>>> I really don't want any firmware memmap entries for something that is
> >>>>>>> not part of the firmware provided memmap. In addition,
> >>>>>>> /sys/firmware/memmap/ is still a fairly x86_64 specific thing. Only mips
> >>>>>>> and two arm configs enable it at all.
> >>>>>>>
> >>>>>>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
> >>>>>>
> >>>>>> I think that's a policy decision and policy decisions do not belong in
> >>>>>> the kernel. Give the tooling the opportunity to decide whether System
> >>>>>> RAM stays that way over a kexec. The parenthetical reference otherwise
> >>>>>> looks out of place to me in the /proc/iomem output. What makes it
> >>>>>> "driver managed" is how the kernel handles it, not how the kernel
> >>>>>> names it.
> >>>>>
> >>>>> At least, virtio-mem is different. It really *has to be handled* by the
> >>>>> driver. This is not a policy. It's how it works.
> >>>
> >>> ...but that's not necessarily how dax/kmem works.
> >>>
> >>
> >> Yes, and user space could still take that memory and add it to the
> >> firmware memmap if it really wants to. It knows that it is special. It
> >> can figure out that it belongs to a dax device using /proc/iomem.
> >>
> >>>>>
> >>>>
> >>>> Oh, and I don't see why "System RAM (driver managed)" would hinder any
> >>>> policy in user space to still do what it thinks is the right thing to do
> >>>> (e.g., for dax).
> >>>>
> >>>> "System RAM (driver managed)" would mean: Memory is not part of the raw
> >>>> firmware memmap. It was detected and added by a driver. Handle with
> >>>> care, this is special.
> >>>
> >>> Oh, no, I was more reacting to your "don't update
> >>> /sys/firmware/memmap for the (driver managed) range" choice as being a
> >>> policy decision. It otherwise feels to me that "System RAM (driver
> >>> managed)" adds confusion for casual users of /proc/iomem, and for
> >>> clued-in tools they have the parent association to decide policy.
> >>
> >> Not sure if I understand correctly, so bear with me :).
> >>
> >> Adding or not adding stuff to /sys/firmware/memmap is not a policy
> >> decision. If it's not part of the raw firmware-provided memmap, it has
> >> nothing to do in /sys/firmware/memmap. That's what the documentation
> >> from 2008 tells us.
> >
> > It just occurs to me that there are valid cases for both wanting to
> > start over with driver managed memory with a kexec and keeping it in
> > the map.
>
> Yes, there might be valid cases. My gut feeling is that in the general
> case, you want to let the kexec kernel implement a policy / let the user
> in the new system decide.
>
> But as I said, you can implement whatever policy you want in
> kexec-tools. It has access to all information.
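The information kexec-tools would act on here is plain /proc/iomem text. Purely as an illustration (this is not actual kexec-tools code, and the function names are invented), tooling could recover the resource hierarchy from the indentation and spot RAM ranges that sit below a driver-owned parent:

```python
# Illustrative sketch only -- not actual kexec-tools code. It shows how
# user space can recover the /proc/iomem hierarchy from indentation and
# flag "System RAM" ranges that are nested under a driver-owned parent.
import re

LINE_RE = re.compile(r"^(\s*)([0-9a-f]+)-([0-9a-f]+) : (.*)$")

def parse_iomem(text):
    """Return (depth, start, end, name) for each resource line."""
    entries = []
    for line in text.splitlines():
        m = LINE_RE.match(line)
        if not m:
            continue
        indent, start, end, name = m.groups()
        # /proc/iomem indents each nesting level by two spaces.
        entries.append((len(indent) // 2, int(start, 16), int(end, 16), name))
    return entries

def driver_managed_ram(entries):
    """Return System RAM ranges that have an enclosing parent resource."""
    stack = []   # names of enclosing resources, index == depth
    result = []
    for depth, start, end, name in entries:
        del stack[depth:]  # pop back to this entry's parent
        if name.startswith("System RAM") and stack:
            result.append((start, end, stack[-1]))
        stack.append(name)
    return result

SAMPLE = """\
140000000-33fffffff : Persistent Memory
  140000000-1481fffff : namespace0.0
  150000000-33fffffff : dax0.0
    150000000-33fffffff : System RAM (driver managed)
"""

print(driver_managed_ram(parse_iomem(SAMPLE)))
# one System RAM range, parented by dax0.0
```

With the dax/kmem layout quoted above, the nesting alone identifies the range as driver-managed, whether or not the name carries the "(driver managed)" suffix.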

Right, so why is a new type needed if all the information is there by
other means?

> > Consider the case of EFI Special Purpose (SP) Memory that is
> > marked EFI Conventional Memory with the SP attribute. In that case the
> > firmware memory map marked it as conventional RAM, but the kernel
> > optionally marks it as System RAM vs Soft Reserved. The 2008 patch
> > simply does not consider that case. I'm not sure strict textualism
> > works for coding decisions.
>
> I am no expert on that matter (esp. EFI). But looking at the users of
> firmware_map_add_early(), the single user is in arch/x86/kernel/e820.c.
> So the single source of /sys/firmware/memmap is (besides hotplug) e820.
>
> "'e820_table_firmware': the original firmware version passed to us by
> the bootloader - not modified by the kernel. ... inform the user about
> the firmware's notion of memory layout via /sys/firmware/memmap"
> (arch/x86/kernel/e820.c)
>
> How is the EFI Special Purpose (SP) Memory represented in e820?
> /sys/firmware/memmap is really simple: it just dumps e820. No policies IIUC.

e820 now has a Soft Reserved translation for this, which means "try to
reserve, but treating it as System RAM is ok too". It seems generically
useful to me that whether Soft Reserved or System RAM shows up in
/sys/firmware/memmap is a determination that policy can make. The
kernel need not preemptively block it.
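The shape of that policy point can be sketched in a few lines. Everything below is hypothetical (the entry format and function name are invented for illustration, not a kernel or kexec-tools interface): the kernel only reports types, and user space chooses whether specific-purpose memory is handed to the next kernel as ordinary RAM.

```python
# Hypothetical sketch of the policy being argued for: given a typed
# memory map, tooling -- not the kernel -- decides whether "Soft
# Reserved" ranges are folded into usable RAM for the next kexec kernel.

def next_kernel_ram(memmap, use_soft_reserved):
    """Select (start, end) ranges to pass on as usable RAM."""
    usable = []
    for start, end, typ in memmap:
        if typ == "System RAM":
            usable.append((start, end))
        elif typ == "Soft Reserved" and use_soft_reserved:
            # Policy choice: treat specific-purpose memory as general RAM.
            usable.append((start, end))
    return usable

MEMMAP = [
    (0x100000, 0x13fffffff, "System RAM"),
    (0x140000000, 0x33fffffff, "Soft Reserved"),
]

print(next_kernel_ram(MEMMAP, use_soft_reserved=False))  # RAM only
print(next_kernel_ram(MEMMAP, use_soft_reserved=True))   # both ranges
```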


From xen-devel-bounces@lists.xenproject.org Fri May 01 21:11:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 21:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUcwE-0007p0-5L; Fri, 01 May 2020 21:11:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jt3Y=6P=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jUcwC-0007ov-Nl
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 21:11:12 +0000
X-Inumbo-ID: 48829b86-8bf0-11ea-b07b-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 48829b86-8bf0-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 21:11:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588367471;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=fRnz7yVdPFbQgbwK7ShtSld7Iu+KxsdDibH5r6Rzhl8=;
 b=jDhri3UhQTrjWS8CxNkL4gjID1KSF3lJUEwbVM/i6KkX1ildH/HveGEg8JwDoOzTkbaBoE
 tlD/XYUdS/m29Z56lLXFMm1l09VQkMnRrZGbqUyDsSoSm70PJcI703Jc+eQdrQkDBXFqCs
 WiozvAnU7kuYbNoKArU9vMQxXUaNM4w=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-177-f73fybUJPlyu_J1p8_v2yQ-1; Fri, 01 May 2020 17:11:04 -0400
X-MC-Unique: f73fybUJPlyu_J1p8_v2yQ-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 167D181CBF5;
 Fri,  1 May 2020 21:11:01 +0000 (UTC)
Received: from [10.36.112.180] (ovpn-112-180.ams2.redhat.com [10.36.112.180])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0706A605D7;
 Fri,  1 May 2020 21:10:53 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: Dan Williams <dan.j.williams@intel.com>
References: <20200430102908.10107-1-david@redhat.com>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
 <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
 <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
 <04242d48-5fa9-6da4-3e4a-991e401eb580@redhat.com>
 <CAPcyv4iXyOUDZgqhWH1KCObvATL=gP55xEr64rsRfUuJg5B+eQ@mail.gmail.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMFCQlmAYAGCwkIBwMCBhUI
 AgkKCwQWAgMBAh4BAheAFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl3pImkCGQEACgkQTd4Q
 9wD/g1o+VA//SFvIHUAvul05u6wKv/pIR6aICPdpF9EIgEU448g+7FfDgQwcEny1pbEzAmiw
 zAXIQ9H0NZh96lcq+yDLtONnXk/bEYWHHUA014A1wqcYNRY8RvY1+eVHb0uu0KYQoXkzvu+s
 Dncuguk470XPnscL27hs8PgOP6QjG4jt75K2LfZ0eAqTOUCZTJxA8A7E9+XTYuU0hs7QVrWJ
 jQdFxQbRMrYz7uP8KmTK9/Cnvqehgl4EzyRaZppshruKMeyheBgvgJd5On1wWq4ZUV5PFM4x
 II3QbD3EJfWbaJMR55jI9dMFa+vK7MFz3rhWOkEx/QR959lfdRSTXdxs8V3zDvChcmRVGN8U
 Vo93d1YNtWnA9w6oCW1dnDZ4kgQZZSBIjp6iHcA08apzh7DPi08jL7M9UQByeYGr8KuR4i6e
 RZI6xhlZerUScVzn35ONwOC91VdYiQgjemiVLq1WDDZ3B7DIzUZ4RQTOaIWdtXBWb8zWakt/
 ztGhsx0e39Gvt3391O1PgcA7ilhvqrBPemJrlb9xSPPRbaNAW39P8ws/UJnzSJqnHMVxbRZC
 Am4add/SM+OCP0w3xYss1jy9T+XdZa0lhUvJfLy7tNcjVG/sxkBXOaSC24MFPuwnoC9WvCVQ
 ZBxouph3kqc4Dt5X1EeXVLeba+466P1fe1rC8MbcwDkoUo65Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAiUEGAECAA8FAlXLn5ECGwwFCQlmAYAACgkQTd4Q
 9wD/g1qA6w/+M+ggFv+JdVsz5+ZIc6MSyGUozASX+bmIuPeIecc9UsFRatc91LuJCKMkD9Uv
 GOcWSeFpLrSGRQ1Z7EMzFVU//qVs6uzhsNk0RYMyS0B6oloW3FpyQ+zOVylFWQCzoyyf227y
 GW8HnXunJSC+4PtlL2AY4yZjAVAPLK2l6mhgClVXTQ/S7cBoTQKP+jvVJOoYkpnFxWE9pn4t
 H5QIFk7Ip8TKr5k3fXVWk4lnUi9MTF/5L/mWqdyIO1s7cjharQCstfWCzWrVeVctpVoDfJWp
 4LwTuQ5yEM2KcPeElLg5fR7WB2zH97oI6/Ko2DlovmfQqXh9xWozQt0iGy5tWzh6I0JrlcxJ
 ileZWLccC4XKD1037Hy2FLAjzfoWgwBLA6ULu0exOOdIa58H4PsXtkFPrUF980EEibUp0zFz
 GotRVekFAceUaRvAj7dh76cToeZkfsjAvBVb4COXuhgX6N4pofgNkW2AtgYu1nUsPAo+NftU
 CxrhjHtLn4QEBpkbErnXQyMjHpIatlYGutVMS91XTQXYydCh5crMPs7hYVsvnmGHIaB9ZMfB
 njnuI31KBiLUks+paRkHQlFcgS2N3gkRBzH7xSZ+t7Re3jvXdXEzKBbQ+dC3lpJB0wPnyMcX
 FOTT3aZT7IgePkt5iC/BKBk3hqKteTnJFeVIT7EC+a6YUFg=
Organization: Red Hat GmbH
Message-ID: <8242c0c5-2df2-fc0c-079a-3be62c113a11@redhat.com>
Date: Fri, 1 May 2020 23:10:53 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAPcyv4iXyOUDZgqhWH1KCObvATL=gP55xEr64rsRfUuJg5B+eQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.05.20 22:12, Dan Williams wrote:
> On Fri, May 1, 2020 at 12:18 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 01.05.20 20:43, Dan Williams wrote:
>>> On Fri, May 1, 2020 at 11:14 AM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 01.05.20 20:03, Dan Williams wrote:
>>>>> On Fri, May 1, 2020 at 10:51 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>>
>>>>>> On 01.05.20 19:45, David Hildenbrand wrote:
>>>>>>> On 01.05.20 19:39, Dan Williams wrote:
>>>>>>>> On Fri, May 1, 2020 at 10:21 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>>>>>
>>>>>>>>> On 01.05.20 18:56, Dan Williams wrote:
>>>>>>>>>> On Fri, May 1, 2020 at 2:34 AM David Hildenbrand <david@redhat.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>> On 01.05.20 00:24, Andrew Morton wrote:
>>>>>>>>>>>> On Thu, 30 Apr 2020 20:43:39 +0200 David Hildenbrand <david@redhat.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Why does the firmware map support hotplug entries?
>>>>>>>>>>>>>
>>>>>>>>>>>>> I assume:
>>>>>>>>>>>>>
>>>>>>>>>>>>> The firmware memmap was added primarily for x86-64 kexec (and still, is
>>>>>>>>>>>>> mostly used on x86-64 only IIRC). There, we had ACPI hotplug. When DIMMs
>>>>>>>>>>>>> get hotplugged on real HW, they get added to e820. Same applies to
>>>>>>>>>>>>> memory added via HyperV balloon (unless memory is unplugged via
>>>>>>>>>>>>> ballooning and you reboot ... then the e820 is changed as well). I assume
>>>>>>>>>>>>> we wanted to be able to reflect that, to make kexec look like a real reboot.
>>>>>>>>>>>>>
>>>>>>>>>>>>> This worked for a while. Then came dax/kmem. Now comes virtio-mem.
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> But I assume only Andrew can enlighten us.
>>>>>>>>>>>>>
>>>>>>>>>>>>> @Andrew, any guidance here? Should we really add all memory to the
>>>>>>>>>>>>> firmware memmap, even if this contradicts the existing
>>>>>>>>>>>>> documentation? (especially, if the actual firmware memmap will *not*
>>>>>>>>>>>>> contain that memory after a reboot)
>>>>>>>>>>>>
>>>>>>>>>>>> For some reason that patch is misattributed - it was authored by
>>>>>>>>>>>> Shaohui Zheng <shaohui.zheng@intel.com>, who hasn't been heard from in
>>>>>>>>>>>> a decade.  I looked through the email discussion from that time and I'm
>>>>>>>>>>>> not seeing anything useful.  But I wasn't able to locate Dave Hansen's
>>>>>>>>>>>> review comments.
>>>>>>>>>>>
>>>>>>>>>>> Okay, thanks for checking. I think the documentation from 2008 is pretty
>>>>>>>>>>> clear about what has to be done here. I will add some of these details to the
>>>>>>>>>>> patch description.
>>>>>>>>>>>
>>>>>>>>>>> Also, now that I know that esp. kexec-tools already doesn't consider
>>>>>>>>>>> dax/kmem memory properly (memory will not get dumped via kdump) and
>>>>>>>>>>> won't really suffer from a name change in /proc/iomem, I will go back to
>>>>>>>>>>> the MHP_DRIVER_MANAGED approach and
>>>>>>>>>>> 1. Don't create firmware memmap entries
>>>>>>>>>>> 2. Name the resource "System RAM (driver managed)"
>>>>>>>>>>> 3. Flag the resource via something like IORESOURCE_MEM_DRIVER_MANAGED.
>>>>>>>>>>>
>>>>>>>>>>> This way, kernel users and user space can figure out that this memory
>>>>>>>>>>> has different semantics and handle it accordingly - I think that was
>>>>>>>>>>> what Eric was asking for.
>>>>>>>>>>>
>>>>>>>>>>> Of course, open for suggestions.
>>>>>>>>>>
>>>>>>>>>> I'm still more of a fan of this being communicated by "System RAM"
>>>>>>>>>
>>>>>>>>> I was mentioning somewhere in this thread that "System RAM" inside a
>>>>>>>>> hierarchy (like dax/kmem) will already be basically ignored by
>>>>>>>>> kexec-tools. So, placing it inside a hierarchy already makes it look
>>>>>>>>> special.
>>>>>>>>>
>>>>>>>>> But after all, as we have to change kexec-tools either way, we can
>>>>>>>>> directly go ahead and flag it properly as special (in case there will
>>>>>>>>> ever be other cases where we could no longer distinguish it).
>>>>>>>>>
>>>>>>>>>> being parented especially because that tells you something about how
>>>>>>>>>> the memory is driver-managed and which mechanism might be in play.
>>>>>>>>>
>>>>>>>>> That could be communicated to some degree via the resource hierarchy.
>>>>>>>>>
>>>>>>>>> E.g.,
>>>>>>>>>
>>>>>>>>>             [root@localhost ~]# cat /proc/iomem
>>>>>>>>>             ...
>>>>>>>>>             140000000-33fffffff : Persistent Memory
>>>>>>>>>               140000000-1481fffff : namespace0.0
>>>>>>>>>               150000000-33fffffff : dax0.0
>>>>>>>>>                 150000000-33fffffff : System RAM (driver managed)
>>>>>>>>>
>>>>>>>>> vs.
>>>>>>>>>
>>>>>>>>>            :/# cat /proc/iomem
>>>>>>>>>             [...]
>>>>>>>>>             140000000-333ffffff : virtio-mem (virtio0)
>>>>>>>>>               140000000-147ffffff : System RAM (driver managed)
>>>>>>>>>               148000000-14fffffff : System RAM (driver managed)
>>>>>>>>>               150000000-157ffffff : System RAM (driver managed)
>>>>>>>>>
>>>>>>>>> Good enough for my taste.
>>>>>>>>>
>>>>>>>>>> What about adding an optional /sys/firmware/memmap/X/parent attribute.
>>>>>>>>>
>>>>>>>>> I really don't want any firmware memmap entries for something that is
>>>>>>>>> not part of the firmware-provided memmap. In addition,
>>>>>>>>> /sys/firmware/memmap/ is still a fairly x86_64-specific thing. Only mips
>>>>>>>>> and two arm configs enable it at all.
>>>>>>>>>
>>>>>>>>> So, IMHO, /sys/firmware/memmap/ is definitely not the way to go.
>>>>>>>>
>>>>>>>> I think that's a policy decision and policy decisions do not belong in
>>>>>>>> the kernel. Give the tooling the opportunity to decide whether System
>>>>>>>> RAM stays that way over a kexec. The parenthetical reference otherwise
>>>>>>>> looks out of place to me in the /proc/iomem output. What makes it
>>>>>>>> "driver managed" is how the kernel handles it, not how the kernel
>>>>>>>> names it.
>>>>>>>
>>>>>>> At least, virtio-mem is different. It really *has to be handled* by the
>>>>>>> driver. This is not a policy. It's how it works.
>>>>>
>>>>> ...but that's not necessarily how dax/kmem works.
>>>>>
>>>>
>>>> Yes, and user space could still take that memory and add it to the
>>>> firmware memmap if it really wants to. It knows that it is special. It
>>>> can figure out that it belongs to a dax device using /proc/iomem.
>>>>
>>>>>>>
>>>>>>
>>>>>> Oh, and I don't see why "System RAM (driver managed)" would hinder any
>>>>>> policy in user space to still do what it thinks is the right thing to do
>>>>>> (e.g., for dax).
>>>>>>
>>>>>> "System RAM (driver managed)" would mean: Memory is not part of the raw
>>>>>> firmware memmap. It was detected and added by a driver. Handle with
>>>>>> care, this is special.
>>>>>
>>>>> Oh, no, I was more reacting to your "don't update
>>>>> /sys/firmware/memmap for the (driver managed) range" choice as being a
>>>>> policy decision. It otherwise feels to me that "System RAM (driver
>>>>> managed)" adds confusion for casual users of /proc/iomem, and for
>>>>> clued-in tools they have the parent association to decide policy.
>>>>
>>>> Not sure if I understand correctly, so bear with me :).
>>>>
>>>> Adding or not adding stuff to /sys/firmware/memmap is not a policy
>>>> decision. If it's not part of the raw firmware-provided memmap, it does
>>>> not belong in /sys/firmware/memmap. That's what the documentation
>>>> from 2008 tells us.
>>>
>>> It just occurs to me that there are valid cases for both wanting to
>>> start over with driver managed memory with a kexec and keeping it in
>>> the map.
>>
>> Yes, there might be valid cases. My gut feeling is that in the general
>> case, you want to let the kexec kernel implement a policy / let the user
>> in the new system decide.
>>
>> But as I said, you can implement in kexec-tools whatever policy you
>> want. It has access to all information.
>
> Right, so why is a new type needed if all the information is there by
> other means?

You mean "System RAM (driver managed)" in /proc/iomem? See below for more.

>
>>> Consider the case of EFI Special Purpose (SP) Memory that is
>>> marked EFI Conventional Memory with the SP attribute. In that case the
>>> firmware memory map marked it as conventional RAM, but the kernel
>>> optionally marks it as System RAM vs Soft Reserved. The 2008 patch
>>> simply does not consider that case. I'm not sure strict textualism
>>> works for coding decisions.
>>
>> I am no expert on that matter (esp. EFI). But looking at the users of
>> firmware_map_add_early(), the single user is in arch/x86/kernel/e820.c.
>> So the single source of /sys/firmware/memmap is (besides hotplug) e820.
>>
>> "'e820_table_firmware': the original firmware version passed to us by
>> the bootloader - not modified by the kernel. ... inform the user about
>> the firmware's notion of memory layout via /sys/firmware/memmap"
>> (arch/x86/kernel/e820.c)
>>
>> How is the EFI Special Purpose (SP) Memory represented in e820?
>> /sys/firmware/memmap is really simple: just dump e820. No policies IIUC.
>
> e820 now has a Soft Reserved translation for this which means "try to
> reserve, but treat as System RAM is ok too". It seems generically
> useful to me that whether Soft Reserved or System RAM shows up in
> /sys/firmware/memmap is a determination that policy can make. The
> kernel need not preemptively block it.

So, I think I have to clarify something here. We do have two ways to kexec:

1. kexec_load(): User space (kexec-tools) crafts the memmap (e.g., using
/sys/firmware/memmap on x86-64) and selects the memory where to place the
kexec images (e.g., using /proc/iomem).

2. kexec_file_load(): The kernel reuses the (basically) raw firmware
memmap and selects the memory where to place kexec images.

We are talking about changing 1. to behave like 2. with regard to
dax/kmem. 2. currently does not add any hotplugged memory to the
fixed-up e820, and that should be fixed for hotplugged DIMMs that
would appear in e820 after a reboot.
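
For reference, path 2 corresponds to the kexec_file_load() syscall on
Linux. A minimal sketch of its calling convention (invoked here with
invalid file descriptors on purpose, since actually loading an image
needs CAP_SYS_BOOT and a real kernel file, so the call is expected to
fail):

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <sys/syscall.h>
#include <unistd.h>

/* kexec_file_load(2): the kernel reads the kernel/initrd from the given
 * fds and builds the fixed-up memmap itself. With kexec_load(2), by
 * contrast, user space (kexec-tools) supplies the segments and thus the
 * memory layout. Called with invalid fds here just to show the
 * interface; a real invocation also needs CAP_SYS_BOOT. */
long try_kexec_file_load(void)
{
#ifdef SYS_kexec_file_load
    long ret = syscall(SYS_kexec_file_load,
                       -1 /* kernel_fd */, -1 /* initrd_fd */,
                       1UL /* cmdline_len, incl. NUL */, "" /* cmdline */,
                       0UL /* flags */);
    if (ret == -1)
        printf("kexec_file_load failed as expected (errno=%d)\n", errno);
    return ret;
#else
    return -1; /* syscall number not available on this platform */
#endif
}
```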

Now, all these policy discussions are nice and fun, but I don't really
see a good reason to (ab)use /sys/firmware/memmap for that (e.g., parent
properties). If you want to be able to make this configurable, then
e.g., add a way to configure this in the kernel (for example along with
kmem) to make 1. and 2. behave the same way. Otherwise, you can really
only change 1.


Now, let's clarify what I want regarding virtio-mem:

1. kexec should not add virtio-mem memory to the initial firmware
   memmap. The driver has to be in charge as discussed.
2. kexec should not place kexec images onto virtio-mem memory. That
   would end badly.
3. kexec should still dump virtio-mem memory via kdump.

This has to work when using kexec_load() or kexec_file_load(). This has
to theoretically work on different architectures (especially, without
/sys/firmware/memmap). kexec-tools has to have access to that
information to figure out what to do.

Regarding 1:
- kexec_file_load(): works out of the box currently.
- kexec_load(): Don't create entries in /sys/firmware/memmap (for
  reasons discussed)
Regarding 2:
- kexec_file_load(): tag the resources as IORESOURCE_MEM_DRIVER_MANAGED
  (inspired by Eric)
- kexec_load(): indicate the memory as "System RAM (driver managed)"
Regarding 3:
- Same as 2. kexec-tools needs to be taught to properly consider the
  memory during kdump.

Now, you are asking, "why System RAM (driver managed)". I don't think
it's strictly needed right now, but it feels cleaner. E.g., for
virtio-mem the current plan is to have /proc/iomem look like

           :/# cat /proc/iomem
            [...]
            140000000-333ffffff : virtio-mem (virtio0)
              140000000-147ffffff : System RAM (driver managed)
              148000000-14fffffff : System RAM (driver managed)
              150000000-157ffffff : System RAM (driver managed)

One can tell by looking at the hierarchy that this memory is
special. kexec-tools currently skips it in either form.
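
Either way, a consumer like kexec-tools ends up string-matching resource
names when it parses /proc/iomem. A rough sketch of such a check (note
the resource name is the one proposed in this thread, not an established
kernel ABI):

```c
#include <assert.h>
#include <string.h>

/* Count "System RAM (driver managed)" entries in /proc/iomem-style
 * text. The name is the one proposed in this thread, not yet an ABI;
 * a tool would skip these ranges when placing kexec images but still
 * include them as dumpable memory for kdump. */
static int count_driver_managed_ram(const char *iomem)
{
    int n = 0;

    for (const char *p = iomem;
         (p = strstr(p, "System RAM (driver managed)")) != NULL; p++)
        n++;
    return n;
}
```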

If we all agree here, that we can drop it, then let's drop it,
especially if it would allow dax/kmem to use the same mechanism I am
proposing here for virtio-mem.


Now, it would be fairly simple to add a config option for dax/kmem,
making it configurable in the kernel, whether to add memory via
MHP_DRIVER_MANAGED or just as we do now. It would contradict the
"raw firmware/prov..." description of /sys/firmware/memmap, but hey,
somebody explicitly configured it, so it can't be wrong.

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Fri May 01 21:36:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 21:36:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUdJz-00017t-6I; Fri, 01 May 2020 21:35:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Eksi=6P=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jUdJy-00017l-4r
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 21:35:46 +0000
X-Inumbo-ID: b6dea2e8-8bf3-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6dea2e8-8bf3-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 21:35:45 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id h4so1182062wmb.4
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 14:35:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=9Bsp+zgN5c2T/7JN1ydtC5H7G2SO7afVw6MOmB3cNQg=;
 b=Ppko3UdRDIbvvB7u9KFnWZhjC52nBJPZ6p9qAPKrwpY3Ek7LI0BZLTos519DS1pXol
 95Iozp+Nam2C0J+DijnBA9skd/6ci4MQUpWevJT4slXrZ4p9vRdjounQGzhLUB5YI0/8
 WF3OYvny2Fw8w3VognghIFX4ECGrdVirqzXkuaNEohs3jXyupwdt1RqaXu94ltbZGYiN
 7HHoH6N+P87UluZaBAyt/3mpcTpCJIXxyb1le21VVvKGPYh1UikfWWRbNsxoE2bp1nAN
 1Yg0SwKV0TgOo4cdJGXOgkOP4Yyz7XjBb2BmkczEqQxcAK5CxR/YjAkhu2q7RcY6DZ0+
 UTBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=9Bsp+zgN5c2T/7JN1ydtC5H7G2SO7afVw6MOmB3cNQg=;
 b=GxxZpxNUV1hloQspKrFFSE/UDhw6eVHHmudIxOTgLCrKR9MY/iqjWvx70fEpCxnkut
 A1FNPflIrzIoyjXWrCcqB4HWWAHIVBsK9zfCpfXY7pMLJaJ2d6zmczDygLlBhgrlnap9
 0+1oM97mtuX1a0JP/tQKpUdSgajx2WAQZveJfK/iM8hK3xdiIVk89rdEfGARBtTL2jRS
 goNJwmypaaJ6uKjWfBf8bHW5GtIh9TTzuqpPNnTZozKAO6uNoWEk05oM8WPKo+59mLH2
 KyChEH5lilWR+dFAMlsqLH6EWwM80/Y7clwqrzUT2ogptz7Hi4QXMEWTXKf94LPHKE2B
 RtFQ==
X-Gm-Message-State: AGi0Pub+xz/CRp9l5VDd6OrmETtyZdR8R/rKH4dP9DiOQNbw+PXdhz1r
 FjL0muqBbfEBSLwhxv9Ljou0+VLKxaeqeOkc4SE=
X-Google-Smtp-Source: APiQypJF/L5LpmZx1M/fe9BJRSBIG7gWKElydPkdDksWWpJfWfFP4RhKrsEwTJMthIXo2oOzcazqNVPdm9mWJdITLxw=
X-Received: by 2002:a05:600c:28e:: with SMTP id
 14mr1530450wmk.79.1588368944548; 
 Fri, 01 May 2020 14:35:44 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1588278317.git.hongyxia@amazon.com>
 <a71d1903267b84afdb0e54fa2ac55540ab2f9357.1588278317.git.hongyxia@amazon.com>
In-Reply-To: <a71d1903267b84afdb0e54fa2ac55540ab2f9357.1588278317.git.hongyxia@amazon.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Fri, 1 May 2020 22:35:33 +0100
Message-ID: <CAJ=z9a0S0rOYbJVPGK6mTKN0OgJtiTU7YN-APLF4Dvr4CaKfJg@mail.gmail.com>
Subject: Re: [PATCH 02/16] acpi: vmap pages in acpi_os_alloc_memory
To: Hongyan Xia <hx242@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On Thu, 30 Apr 2020 at 21:44, Hongyan Xia <hx242@xen.org> wrote:
>
> From: Hongyan Xia <hongyxia@amazon.com>
>
> Also, introduce a wrapper around vmap that maps a contiguous range for
> boot allocations.
>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> ---
>  xen/drivers/acpi/osl.c | 9 ++++++++-
>  xen/include/xen/vmap.h | 5 +++++
>  2 files changed, 13 insertions(+), 1 deletion(-)
>
> diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> index 4c8bb7839e..d0762dad4e 100644
> --- a/xen/drivers/acpi/osl.c
> +++ b/xen/drivers/acpi/osl.c
> @@ -219,7 +219,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
>         void *ptr;
>
>         if (system_state == SYS_STATE_early_boot)
> -               return mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
> +       {
> +               mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
> +
> +               return vmap_boot_pages(mfn, PFN_UP(sz));
> +       }
>
>         ptr = xmalloc_bytes(sz);
>         ASSERT(!ptr || is_xmalloc_memory(ptr));
> @@ -244,5 +248,8 @@ void __init acpi_os_free_memory(void *ptr)
>         if (is_xmalloc_memory(ptr))
>                 xfree(ptr);
>         else if (ptr && system_state == SYS_STATE_early_boot)
> +       {
> +               vunmap(ptr);
>                 init_boot_pages(__pa(ptr), __pa(ptr) + PAGE_SIZE);

__pa(ptr) can only work on the direct map. Even worse, on Arm it will
fault because there is no mapping.
I think you will want to use vmap_to_mfn() before calling vunmap().
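
A standalone sketch of the ordering being suggested, with stub helpers
standing in for Xen's vmap machinery (the real vmap_to_mfn()/vunmap()
are Xen internals; everything marked as a stand-in below is hypothetical
scaffolding for illustration only):

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

/* Stand-ins for Xen's types and helpers: a fake identity-style VA<->MFN
 * translation and a recorder in place of init_boot_pages(). */
typedef unsigned long mfn_t;
static unsigned long freed_start, freed_end;

static mfn_t vmap_to_mfn(const void *va) { return (unsigned long)va / PAGE_SIZE; }
static unsigned long mfn_to_maddr(mfn_t mfn) { return mfn * PAGE_SIZE; }
static void vunmap(const void *va) { (void)va; /* mapping is gone after this */ }
static void init_boot_pages(unsigned long ps, unsigned long pe)
{
    freed_start = ps;
    freed_end = pe;
}

/* The point of the review comment: translate the vmapped address to an
 * MFN *while the mapping still exists*, then vunmap(), then free the
 * underlying page. __pa(ptr) is wrong for vmapped addresses (and would
 * fault on Arm, which has no direct map covering them). */
static void acpi_os_free_boot_page(void *ptr)
{
    mfn_t mfn = vmap_to_mfn(ptr);   /* translate before the mapping is gone */

    vunmap(ptr);
    init_boot_pages(mfn_to_maddr(mfn), mfn_to_maddr(mfn) + PAGE_SIZE);
}
```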

Cheers,


From xen-devel-bounces@lists.xenproject.org Fri May 01 21:50:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 21:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUdXy-0002ie-H9; Fri, 01 May 2020 21:50:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YjF2=6P=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUdXx-0002iW-AR
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 21:50:13 +0000
X-Inumbo-ID: b9f0a434-8bf5-11ea-9b66-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9f0a434-8bf5-11ea-9b66-12813bfff9fa;
 Fri, 01 May 2020 21:50:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wdmEpydm7FeHh8uBfSwnlv8blhotQ6wtAGn8F0ApdYE=; b=GPqTEDv5fRpbdyeerauc+Mtz6
 xXhtA4ZbEnmIAak+MWKsuHDa/5/mwh3SV7vouBi/XCo/JwLId3FvbeMJ1GPOY37TUrqc9i29shy/t
 D8pUM3oMXqfx7WBja5S7iqTH6csvh5/xeH3kCSwekMWjhkU/Hmt8RtCjUglReTqQQ8+WY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUdXs-0003UC-PY; Fri, 01 May 2020 21:50:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUdXs-0001DC-Hi; Fri, 01 May 2020 21:50:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUdXs-0007Ok-H6; Fri, 01 May 2020 21:50:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149896-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 149896: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=0135be8bd8cd60090298f02310691b688d95c3a8
X-Osstest-Versions-That: xen=8065e1b41688592778de76c731c62f34e71f3129
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 01 May 2020 21:50:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149896 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149896/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-xsm 18 guest-start/debian.repeat fail in 149892 pass in 149896
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 149892

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 149889
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149889
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149889
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149889
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149889
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149889
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149889
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149889
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149889
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149889
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8
baseline version:
 xen                  8065e1b41688592778de76c731c62f34e71f3129

Last test of basis   149889  2020-04-30 09:52:27 Z    1 days
Testing same since   149892  2020-04-30 23:37:21 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>
  Varad Gautam <vrd@amazon.de>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8065e1b416..0135be8bd8  0135be8bd8cd60090298f02310691b688d95c3a8 -> master


From xen-devel-bounces@lists.xenproject.org Fri May 01 21:52:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 21:52:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUdZz-0002qW-3Z; Fri, 01 May 2020 21:52:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D0mR=6P=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1jUdZy-0002qR-30
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 21:52:18 +0000
X-Inumbo-ID: 04f28628-8bf6-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04f28628-8bf6-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 21:52:15 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id d16so8316490edv.8
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 14:52:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=intel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=l07ccX0wm+AEN60sydd1sDGIVZzQnw4FgS9Ujh07hXI=;
 b=mklS7vIKVEUdl7wntEpgBeCcvwVfQHQL1n/rmnKFbnOtHIZRisDisf4dMQS+dM0Nv0
 /mRQLawzCoRa0iAq+jVvpf0LYqgCjY+tO0Nl2pQjod+pUu+y/cbys3B6QuzFNwIhgq+e
 J0hSqTnwqIVOMJOuynhAAZ9fW5a6Vt9Xzotvtdm26jvNeifB0m/HBJs7okf++OwjA9rv
 5lz3SdD51W3h8Z5eRe/aaTM0oZ8G7J5TGcoo4kQ/achD7IKEF7tdsapTw8o+0ZAJulwF
 tm+BI0y7Xg01WDXA39uPG+FQVFMkDZI0YOggS1Fzhv4x3y9WAQf7Z6g1qNe1txuqYjtH
 o9jg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=l07ccX0wm+AEN60sydd1sDGIVZzQnw4FgS9Ujh07hXI=;
 b=JxuF7weXg6NMoYN5LDrkyg/I9sFd+DrN4W+YsQAen834W8ntRoKSRkM4PqHd0KUrxw
 H0HqbidSixUV7zCd+kEvUDKwMLQ2C531CJeCobKZg2R3oXF9W0UPc/+fWyazL6dpIFgY
 UtA3dkLFsIKoN2eecmcHjZd5jJ6WjHOuiWCKPoxw4SRGhEORt4zrXuc5kFKxTPmMNZdq
 Ak30hVC/VOUi57EiMuOpvtqElht/c4kdzNHFiiMPW/O8Zy1vc95NLQyjNiX2tl5HHUqZ
 gBLn5+CHvMh/YCGOIP0o9bVMssBQYICXvLRMpCY9165uijoXz4SmJN/KTbIFSl/x1JLx
 GgMQ==
X-Gm-Message-State: AGi0PubUogUtzkBhTc9Ax24lq+cO/9I2Jti5/zziSp+utB5A3tbthpw7
 SWVtb+T88aGzH2tt0aEccPmDFKF8GGCqKO8aErcHUA==
X-Google-Smtp-Source: APiQypJmId2DaeM7LwaM8l70eo6m7yrxIS8YHkCL5rA5nuZuCXHz7IuoKF/W4HpVt52A/r7auZb6rjukAuRVUpn3RXs=
X-Received: by 2002:a05:6402:759:: with SMTP id
 p25mr5505386edy.102.1588369934563; 
 Fri, 01 May 2020 14:52:14 -0700 (PDT)
MIME-Version: 1.0
References: <20200430102908.10107-1-david@redhat.com>
 <1b49c3be-6e2f-57cb-96f7-f66a8f8a9380@redhat.com>
 <871ro52ary.fsf@x220.int.ebiederm.org>
 <373a6898-4020-4af1-5b3d-f827d705dd77@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
 <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
 <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
 <04242d48-5fa9-6da4-3e4a-991e401eb580@redhat.com>
 <CAPcyv4iXyOUDZgqhWH1KCObvATL=gP55xEr64rsRfUuJg5B+eQ@mail.gmail.com>
 <8242c0c5-2df2-fc0c-079a-3be62c113a11@redhat.com>
In-Reply-To: <8242c0c5-2df2-fc0c-079a-3be62c113a11@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Fri, 1 May 2020 14:52:03 -0700
Message-ID: <CAPcyv4h1nWjszkVJQgeXkUc=-nPv5=Me25BOGFQCpihUyFsD6w@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: David Hildenbrand <david@redhat.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 1, 2020 at 2:11 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 01.05.20 22:12, Dan Williams wrote:
[..]
> >>> Consider the case of EFI Special Purpose (SP) Memory that is
> >>> marked EFI Conventional Memory with the SP attribute. In that case the
> >>> firmware memory map marked it as conventional RAM, but the kernel
> >>> optionally marks it as System RAM vs Soft Reserved. The 2008 patch
> >>> simply does not consider that case. I'm not sure strict textualism
> >>> works for coding decisions.
> >>
> >> I am no expert on that matter (esp EFI). But looking at the users of
> >> firmware_map_add_early(), the single user is in arch/x86/kernel/e820.c
> >> . So the single source of /sys/firmware/memmap is (besides hotplug) e820.
> >>
> >> "'e820_table_firmware': the original firmware version passed to us by
> >> the bootloader - not modified by the kernel. ... inform the user about
> >> the firmware's notion of memory layout via /sys/firmware/memmap"
> >> (arch/x86/kernel/e820.c)
> >>
> >> How is the EFI Special Purpose (SP) Memory represented in e820?
> >> /sys/firmware/memmap is really simple: it just dumps e820. No policies IIUC.
> >
> > e820 now has a Soft Reserved translation for this which means "try to
> > reserve, but treat as System RAM is ok too". It seems generically
> > useful to me that the toggle for determining whether Soft Reserved or
> > System RAM shows up in /sys/firmware/memmap is a determination that
> > policy can make. The kernel need not preemptively block it.
>
> So, I think I have to clarify something here. We do have two ways to kexec
>
> 1. kexec_load(): User space (kexec-tools) crafts the memmap (e.g., using
> /sys/firmware/memmap on x86-64) and selects memory where to place the
> kexec images (e.g., using /proc/iomem)
>
> 2. kexec_file_load(): The kernel reuses the (basically) raw firmware
> memmap and selects memory where to place kexec images.
>
> We are talking about changing 1. to behave like 2. with regard to
> dax/kmem. 2. currently does not add any hotplugged memory to the
> fixed-up e820, and that should be fixed regarding hotplugged DIMMs that
> would appear in e820 after a reboot.
>
> Now, all these policy discussions are nice and fun, but I don't really
> see a good reason to (ab)use /sys/firmware/memmap for that (e.g., parent
> properties). If you want to be able to make this configurable, then
> e.g., add a way to configure this in the kernel (for example along with
> kmem) to make 1. and 2. behave the same way. Otherwise, you really only
> can change 1.

That's clearer.

>
>
> Now, let's clarify what I want regarding virtio-mem:
>
> 1. kexec should not add virtio-mem memory to the initial firmware
>    memmap. The driver has to be in charge as discussed.
> 2. kexec should not place kexec images onto virtio-mem memory. That
>    would end badly.
> 3. kexec should still dump virtio-mem memory via kdump.

Ok, but that then seems to say to me that dax/kmem is a different type of
(driver managed) memory than virtio-mem, and it's confusing to try to apply
the same meaning. Why not just name your type for the distinct type it
is, "System RAM (virtio-mem)", and let any other driver-managed memory
follow the same "System RAM ($driver)" format if it wants?


From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 22:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUecd-0007w6-0s; Fri, 01 May 2020 22:59:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUecc-0007vF-31
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 22:59:06 +0000
X-Inumbo-ID: 57d17eb8-8bff-11ea-b07b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57d17eb8-8bff-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 22:59:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588373940;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=rL/dG5kGKiStpqKmDj82L3vvn9kv06+Bf9qYwu7T+rE=;
 b=Kn9cZ2YJBdhJQbkjLexIclJ/3Dm+uFcV1T7SZRyiX/XxGRUW1T0CE2kJ
 qGXbDyvdkHwc9bEp+KTrQ2z2lJLmA+goxswBKWNgIiYa0Aq+4VMPX9WrB
 ISb3v7cQKDPMvqBBZY2rHigt8UD5kC3ioFYm0cMvwnub1cQjPrnWmzdhB 0=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: zm/Rio6/a5ZOg6ZjO8FJSzE8qGDSpCwuHsz559zCVgwOpMbAVIWa+S+s+4hHf9z7aT2FUR1wds
 j8hzds/+dTrTjpEaIXPADLOAMqt3yRyHmm9A0LnVkHmhpXigbIKGdELb900ETMHcpI+qyFOeGA
 58gpwwWRCnlpY1OeCdntVjVcMg2aq4AC/6YmnP7hHHHxJ2IXmCoK/j4lBT7MWB0mM2EtG93O09
 DkbCoFQgm5gIq7A0Q4kz3bgf1EJRMWgsE5KCpUVrZsfjWyGjgLhB3oFrNAA/yQlvLBrqvQhX/0
 CVo=
X-SBRS: 2.7
X-MesageID: 16584676
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="16584676"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 08/16] x86/shstk: Create shadow stacks
Date: Fri, 1 May 2020 23:58:30 +0100
Message-ID: <20200501225838.9866-9-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce HYPERVISOR_SHSTK pagetable constants, which are Read-Only + Dirty.
Use these in place of _PAGE_NONE for memguard_guard_stack().

Supervisor shadow stacks need a token written at the top, which is most easily
done before making the frame read only.

Allocate the shadow IST stack block in struct tss_page.  It doesn't strictly
need to live here, but it is a convenient location (and XPTI-safe, for testing
purposes).

Have load_system_tables() set up the shadow IST stack table when setting up
the regular IST in the TSS.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/common.c         | 19 +++++++++++++++++++
 xen/arch/x86/mm.c                 | 22 +++++++++++++++++++---
 xen/include/asm-x86/page.h        |  1 +
 xen/include/asm-x86/processor.h   |  3 ++-
 xen/include/asm-x86/x86_64/page.h |  1 +
 5 files changed, 42 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 290f9f1c30..3962717aa5 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -748,6 +748,25 @@ void load_system_tables(void)
 		.bitmap = IOBMP_INVALID_OFFSET,
 	};
 
+	/* Set up the shadow stack IST. */
+	if ( cpu_has_xen_shstk ) {
+		unsigned int i;
+		uint64_t *ist_ssp = this_cpu(tss_page).ist_ssp;
+
+		/* Must point at the supervisor stack token. */
+		ist_ssp[IST_MCE] = stack_top + (IST_MCE * 0x400) - 8;
+		ist_ssp[IST_NMI] = stack_top + (IST_NMI * 0x400) - 8;
+		ist_ssp[IST_DB]  = stack_top + (IST_DB  * 0x400) - 8;
+		ist_ssp[IST_DF]  = stack_top + (IST_DF  * 0x400) - 8;
+
+		/* Poison unused entries. */
+		for ( i = IST_MAX;
+		      i < ARRAY_SIZE(this_cpu(tss_page).ist_ssp); ++i )
+			ist_ssp[i] = 0x8600111111111111ul;
+
+		wrmsrl(MSR_INTERRUPT_SSP_TABLE, (unsigned long)ist_ssp);
+	}
+
 	BUILD_BUG_ON(sizeof(*tss) <= 0x67); /* Mandated by the architecture. */
 
 	_set_tssldt_desc(gdt + TSS_ENTRY, (unsigned long)tss,
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index bc44d865ef..4e2c3c9735 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -6000,12 +6000,28 @@ void memguard_unguard_range(void *p, unsigned long l)
 
 #endif
 
-void memguard_guard_stack(void *p)
+static void write_sss_token(unsigned long *ptr)
 {
-    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
+    /*
+     * A supervisor shadow stack token is its own linear address, with the
+     * busy bit (0) clear.
+     */
+    *ptr = (unsigned long)ptr;
+}
 
+void memguard_guard_stack(void *p)
+{
+    /* IST Shadow stacks.  4x 1k in stack page 0. */
+    write_sss_token(p + 0x3f8);
+    write_sss_token(p + 0x7f8);
+    write_sss_token(p + 0xbf8);
+    write_sss_token(p + 0xff8);
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
+
+    /* Primary Shadow Stack.  1x 4k in stack page 5. */
     p += 5 * PAGE_SIZE;
-    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
+    write_sss_token(p + 0xff8);
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
 }
 
 void memguard_unguard_stack(void *p)
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 5acf3d3d5a..f632affaef 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -364,6 +364,7 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
                                    _PAGE_DIRTY | _PAGE_RW)
 #define __PAGE_HYPERVISOR_UCMINUS (__PAGE_HYPERVISOR | _PAGE_PCD)
 #define __PAGE_HYPERVISOR_UC      (__PAGE_HYPERVISOR | _PAGE_PCD | _PAGE_PWT)
+#define __PAGE_HYPERVISOR_SHSTK   (__PAGE_HYPERVISOR_RO | _PAGE_DIRTY)
 
 #define MAP_SMALL_PAGES _PAGE_AVAIL0 /* don't use superpages mappings */
 
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index f7e80d12e4..54e1a8b605 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -434,7 +434,8 @@ struct __packed tss64 {
     uint16_t :16, bitmap;
 };
 struct tss_page {
-    struct tss64 __aligned(PAGE_SIZE) tss;
+    uint64_t __aligned(PAGE_SIZE) ist_ssp[8];
+    struct tss64 tss;
 };
 DECLARE_PER_CPU(struct tss_page, tss_page);
 
diff --git a/xen/include/asm-x86/x86_64/page.h b/xen/include/asm-x86/x86_64/page.h
index 9876634881..26621f9519 100644
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -171,6 +171,7 @@ static inline intpte_t put_pte_flags(unsigned int x)
 #define PAGE_HYPERVISOR_RW      (__PAGE_HYPERVISOR_RW      | _PAGE_GLOBAL)
 #define PAGE_HYPERVISOR_RX      (__PAGE_HYPERVISOR_RX      | _PAGE_GLOBAL)
 #define PAGE_HYPERVISOR_RWX     (__PAGE_HYPERVISOR         | _PAGE_GLOBAL)
+#define PAGE_HYPERVISOR_SHSTK   (__PAGE_HYPERVISOR_SHSTK   | _PAGE_GLOBAL)
 
 #define PAGE_HYPERVISOR         PAGE_HYPERVISOR_RW
 #define PAGE_HYPERVISOR_UCMINUS (__PAGE_HYPERVISOR_UCMINUS | \
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 22:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUeci-0007x5-HD; Fri, 01 May 2020 22:59:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUecg-0007wY-RE
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 22:59:10 +0000
X-Inumbo-ID: 5cf8f43f-8bff-11ea-9b6f-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5cf8f43f-8bff-11ea-9b6f-12813bfff9fa;
 Fri, 01 May 2020 22:59:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588373950;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=/F+D6hsQxodq6rXf+K3jo7CsyQLXckH86K/eAndVO7A=;
 b=W5hPMo+AtKM9SfAZAKucnc7KqAHchiBZ94AOU8K17mfANdY7vZ/eV317
 UAHuQsGMsgW5Wz819HRbTH9lOWCWK1t8DU6yctdQtd7H6DKkEAt0Rgnhr
 l7TDbzlRSgwpPmZxbHpou/50zPz4Jq3vazdVDmqXNeAR6WVCiH7P88OMn c=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: dPHGxFspW1wRPP49DXo66IYXvO9cK4Bt/EPT0HGBFSsfKiNJZIIvpKskWh2ibiJW+/hd3b0Gpv
 GI8k9lsx5Ne53H5tQcEAflZQetbw4YxwDZs3n/hCxlRoM5I81g4bOPMIaPjIkAs665am6ovBN7
 NIMFIzVQiOuIf6qj3xP4OjemoVDAhEjWiaRPuYpGfnNgoeGG1h63VeWgfvqbyr2emvgiqV1Uy8
 0M4LwaUtCneAYSqpam1lTgIpIanoB4+Bqt3AER3TLXWIarSHOJPm6/2HmyxN/zmpPgtzKZ/Vo/
 N+c=
X-SBRS: 2.7
X-MesageID: 16584683
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="16584683"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 05/16] x86/shstk: Introduce Supervisor Shadow Stack support
Date: Fri, 1 May 2020 23:58:27 +0100
Message-ID: <20200501225838.9866-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce CONFIG_HAS_AS_CET to determine whether CET instructions are
supported in the assembler, and CONFIG_XEN_SHSTK as the main build option.

Introduce xen={no-,}shstk for a user to select whether or not to use shadow
stacks at runtime, and X86_FEATURE_XEN_SHSTK to determine Xen's overall
enablement of shadow stacks.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/Kconfig              | 17 +++++++++++++++++
 xen/arch/x86/setup.c              | 30 ++++++++++++++++++++++++++++++
 xen/include/asm-x86/cpufeature.h  |  1 +
 xen/include/asm-x86/cpufeatures.h |  1 +
 xen/scripts/Kconfig.include       |  4 ++++
 5 files changed, 53 insertions(+)

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index 96432f1f69..ebd01e6893 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -34,6 +34,9 @@ config ARCH_DEFCONFIG
 config INDIRECT_THUNK
 	def_bool $(cc-option,-mindirect-branch-register)
 
+config HAS_AS_CET
+	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)
+
 menu "Architecture Features"
 
 source "arch/Kconfig"
@@ -97,6 +100,20 @@ config HVM
 
 	  If unsure, say Y.
 
+config XEN_SHSTK
+	bool "Supervisor Shadow Stacks"
+	depends on HAS_AS_CET && EXPERT = "y"
+	default y
+        ---help---
+          Control-flow Enforcement Technology (CET) is a set of features in
+          hardware designed to combat Return-oriented Programming (ROP, also
+          call/jump COP/JOP) attacks.  Shadow Stacks are one CET feature
+          designed to provide return address protection.
+
+          This option arranges for Xen to use CET-SS for its own protection.
+          When CET-SS is active, 32bit PV guests cannot be used.  Backwards
+          compatibility can be provided via the PV Shim mechanism.
+
 config SHADOW_PAGING
         bool "Shadow Paging"
         default y
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 9e9576344c..aa21201507 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -95,6 +95,36 @@ unsigned long __initdata highmem_start;
 size_param("highmem-start", highmem_start);
 #endif
 
+static bool __initdata opt_xen_shstk = true;
+
+static int parse_xen(const char *s)
+{
+    const char *ss;
+    int val, rc = 0;
+
+    do {
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        if ( (val = parse_boolean("shstk", s, ss)) >= 0 )
+        {
+#ifdef CONFIG_XEN_SHSTK
+            opt_xen_shstk = val;
+#else
+            no_config_param("XEN_SHSTK", "xen", s, ss);
+#endif
+        }
+        else
+            rc = -EINVAL;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+custom_param("xen", parse_xen);
+
 cpumask_t __read_mostly cpu_present_map;
 
 unsigned long __read_mostly xen_phys_start;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 859970570b..6b25f61832 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -136,6 +136,7 @@
 #define cpu_has_aperfmperf      boot_cpu_has(X86_FEATURE_APERFMPERF)
 #define cpu_has_lfence_dispatch boot_cpu_has(X86_FEATURE_LFENCE_DISPATCH)
 #define cpu_has_xen_lbr         boot_cpu_has(X86_FEATURE_XEN_LBR)
+#define cpu_has_xen_shstk       boot_cpu_has(X86_FEATURE_XEN_SHSTK)
 
 #define cpu_has_msr_tsc_aux     (cpu_has_rdtscp || cpu_has_rdpid)
 
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index b9d3cac975..d7e42d9bb6 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -38,6 +38,7 @@ XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
 XEN_CPUFEATURE(SC_VERW_PV,        X86_SYNTH(23)) /* VERW used by Xen for PV */
 XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
+XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/scripts/Kconfig.include b/xen/scripts/Kconfig.include
index 8221095ca3..e1f13e1720 100644
--- a/xen/scripts/Kconfig.include
+++ b/xen/scripts/Kconfig.include
@@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
 # Return y if the linker supports <flag>, n otherwise
 ld-option = $(success,$(LD) -v $(1))
 
+# $(as-instr,<instr>)
+# Return y if the assembler supports <instr>, n otherwise
+as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
+
 # check if $(CC) and $(LD) exist
 $(error-if,$(failure,command -v $(CC)),compiler '$(CC)' not found)
 $(error-if,$(failure,command -v $(LD)),linker '$(LD)' not found)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:17 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 06/16] x86/traps: Implement #CP handler and extend #PF for
 shadow stacks
Date: Fri, 1 May 2020 23:58:28 +0100
Message-ID: <20200501225838.9866-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

For now, any #CP exception or shadow stack #PF indicates a bug in Xen, but
attempt to recover if taken in guest context.

Drop the comment beside do_page_fault().  It's stale (missing PFEC_prot_key),
and inaccurate (PFEC_present being set means just that, not necessarily a
protection violation).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/traps.c            | 55 ++++++++++++++++++++++++++++++++++-------
 xen/arch/x86/x86_64/entry.S     |  7 +++++-
 xen/include/asm-x86/processor.h |  2 ++
 3 files changed, 54 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 737ab036d2..ddbe312f89 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -158,7 +158,9 @@ void (* const exception_table[TRAP_nr])(struct cpu_user_regs *regs) = {
     [TRAP_alignment_check]              = do_trap,
     [TRAP_machine_check]                = (void *)do_machine_check,
     [TRAP_simd_error]                   = do_trap,
-    [TRAP_virtualisation ...
+    [TRAP_virtualisation]               = do_reserved_trap,
+    [X86_EXC_CP]                        = do_entry_CP,
+    [X86_EXC_CP + 1 ...
      (ARRAY_SIZE(exception_table) - 1)] = do_reserved_trap,
 };
 
@@ -1427,14 +1429,6 @@ static int fixup_page_fault(unsigned long addr, struct cpu_user_regs *regs)
     return 0;
 }
 
-/*
- * #PF error code:
- *  Bit 0: Protection violation (=1) ; Page not present (=0)
- *  Bit 1: Write access
- *  Bit 2: User mode (=1) ; Supervisor mode (=0)
- *  Bit 3: Reserved bit violation
- *  Bit 4: Instruction fetch
- */
 void do_page_fault(struct cpu_user_regs *regs)
 {
     unsigned long addr;
@@ -1457,6 +1451,10 @@ void do_page_fault(struct cpu_user_regs *regs)
     {
         enum pf_type pf_type = spurious_page_fault(addr, regs);
 
+        /* Any fault on a shadow stack access is a bug in Xen. */
+        if ( error_code & PFEC_shstk )
+            goto fatal;
+
         if ( (pf_type == smep_fault) || (pf_type == smap_fault) )
         {
             console_start_sync();
@@ -1476,6 +1474,7 @@ void do_page_fault(struct cpu_user_regs *regs)
             return;
         }
 
+    fatal:
         if ( debugger_trap_fatal(TRAP_page_fault, regs) )
             return;
 
@@ -1906,6 +1905,43 @@ void do_debug(struct cpu_user_regs *regs)
     pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
 }
 
+void do_entry_CP(struct cpu_user_regs *regs)
+{
+    static const char errors[][10] = {
+        [1] = "near ret",
+        [2] = "far/iret",
+        [3] = "endbranch",
+        [4] = "rstorssp",
+        [5] = "setssbsy",
+    };
+    const char *err = "??";
+    unsigned int ec = regs->error_code;
+
+    if ( debugger_trap_entry(X86_EXC_CP, regs) )
+        return;
+
+    /* Decode ec if possible */
+    if ( ec < ARRAY_SIZE(errors) && errors[ec][0] )
+        err = errors[ec];
+
+    /*
+     * For now, only supervisor shadow stacks should be active.  A #CP from
+     * guest context is probably a Xen bug, but kill the guest in an attempt
+     * to recover.
+     */
+    if ( guest_mode(regs) )
+    {
+        gprintk(XENLOG_ERR, "Hit #CP[%04x] in guest context %04x:%p\n",
+                ec, regs->cs, _p(regs->rip));
+        ASSERT_UNREACHABLE();
+        domain_crash(current->domain);
+        return;
+    }
+
+    show_execution_state(regs);
+    panic("CONTROL-FLOW PROTECTION FAULT: #CP[%04x] %s\n", ec, err);
+}
+
 static void __init noinline __set_intr_gate(unsigned int n,
                                             uint32_t dpl, void *addr)
 {
@@ -1995,6 +2031,7 @@ void __init init_idt_traps(void)
     set_intr_gate(TRAP_alignment_check,&alignment_check);
     set_intr_gate(TRAP_machine_check,&machine_check);
     set_intr_gate(TRAP_simd_error,&simd_coprocessor_error);
+    set_intr_gate(X86_EXC_CP, entry_CP);
 
     /* Specify dedicated interrupt stacks for NMI, #DF, and #MC. */
     enable_each_ist(idt_table);
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index a3ce298529..6403c0ab92 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -795,6 +795,10 @@ ENTRY(alignment_check)
         movl  $TRAP_alignment_check,4(%rsp)
         jmp   handle_exception
 
+ENTRY(entry_CP)
+        movl  $X86_EXC_CP, 4(%rsp)
+        jmp   handle_exception
+
 ENTRY(double_fault)
         movl  $TRAP_double_fault,4(%rsp)
         /* Set AC to reduce chance of further SMAP faults */
@@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
         entrypoint 1b
 
         /* Reserved exceptions, heading towards do_reserved_trap(). */
-        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
+        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || \
+                vec == TRAP_virtualisation || (vec > X86_EXC_CP && vec < TRAP_nr)
 
 1:      test  $8,%spl        /* 64bit exception frames are 16 byte aligned, but the word */
         jz    2f             /* size is 8 bytes.  Check whether the processor gave us an */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 12b55e1022..5e8a0fb649 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -68,6 +68,7 @@
 #define PFEC_reserved_bit   (_AC(1,U) << 3)
 #define PFEC_insn_fetch     (_AC(1,U) << 4)
 #define PFEC_prot_key       (_AC(1,U) << 5)
+#define PFEC_shstk          (_AC(1,U) << 6)
 #define PFEC_arch_mask      (_AC(0xffff,U)) /* Architectural PFEC values. */
 /* Internally used only flags. */
 #define PFEC_page_paged     (1U<<16)
@@ -529,6 +530,7 @@ DECLARE_TRAP_HANDLER(coprocessor_error);
 DECLARE_TRAP_HANDLER(simd_coprocessor_error);
 DECLARE_TRAP_HANDLER_CONST(machine_check);
 DECLARE_TRAP_HANDLER(alignment_check);
+DECLARE_TRAP_HANDLER(entry_CP);
 
 DECLARE_TRAP_HANDLER(entry_int82);
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:17 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 07/16] x86/shstk: Re-layout the stack block for shadow stacks
Date: Fri, 1 May 2020 23:58:29 +0100
Message-ID: <20200501225838.9866-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

We have two free pages in the current stack.  A useful property of shadow
stacks and regular stacks is that they act as each other's guard pages as far
as OoB writes go.

Move the regular IST stacks up by one page, to allow their shadow stack page
to be in slot 0.  The primary shadow stack uses slot 5.

As the shadow IST stacks are only 1k large, shuffle the order of IST vectors
to have #DF numerically highest (so there is no chance of a shadow stack
overflow clobbering the supervisor token).

The XPTI code already breaks the MEMORY_GUARD abstraction for stacks by
forcing it to be present.  To avoid having too many configurations, do away
with the concept entirely, and unconditionally unmap the pages.

A later change will turn these properly into shadow stacks.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/common.c       | 10 +++++-----
 xen/arch/x86/mm.c               | 19 ++++++-------------
 xen/arch/x86/smpboot.c          |  3 +--
 xen/arch/x86/traps.c            | 23 ++++++-----------------
 xen/include/asm-x86/current.h   | 12 ++++++------
 xen/include/asm-x86/mm.h        |  1 -
 xen/include/asm-x86/processor.h |  6 +++---
 7 files changed, 27 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 131ff03fcf..290f9f1c30 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -732,14 +732,14 @@ void load_system_tables(void)
 		.rsp2 = 0x8600111111111111ul,
 
 		/*
-		 * MCE, NMI and Double Fault handlers get their own stacks.
+		 * #DB, NMI, #DF and #MC handlers get their own stacks.
 		 * All others poisoned.
 		 */
 		.ist = {
-			[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE,
-			[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE,
-			[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE,
-			[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE,
+			[IST_MCE - 1] = stack_top + (1 + IST_MCE) * PAGE_SIZE,
+			[IST_NMI - 1] = stack_top + (1 + IST_NMI) * PAGE_SIZE,
+			[IST_DB  - 1] = stack_top + (1 + IST_DB)  * PAGE_SIZE,
+			[IST_DF  - 1] = stack_top + (1 + IST_DF)  * PAGE_SIZE,
 
 			[IST_MAX ... ARRAY_SIZE(tss->ist) - 1] =
 				0x8600111111111111ul,
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 355c50ff91..bc44d865ef 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -6002,25 +6002,18 @@ void memguard_unguard_range(void *p, unsigned long l)
 
 void memguard_guard_stack(void *p)
 {
-    /* IST_MAX IST pages + at least 1 guard page + primary stack. */
-    BUILD_BUG_ON((IST_MAX + 1) * PAGE_SIZE + PRIMARY_STACK_SIZE > STACK_SIZE);
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
 
-    memguard_guard_range(p + IST_MAX * PAGE_SIZE,
-                         STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
+    p += 5 * PAGE_SIZE;
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
 }
 
 void memguard_unguard_stack(void *p)
 {
-    memguard_unguard_range(p + IST_MAX * PAGE_SIZE,
-                           STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
-}
-
-bool memguard_is_stack_guard_page(unsigned long addr)
-{
-    addr &= STACK_SIZE - 1;
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_RW);
 
-    return addr >= IST_MAX * PAGE_SIZE &&
-           addr < STACK_SIZE - PRIMARY_STACK_SIZE;
+    p += 5 * PAGE_SIZE;
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_RW);
 }
 
 void arch_dump_shared_mem_info(void)
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index f999323bc4..e0f421ca3d 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -823,8 +823,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
 
     /* Install direct map page table entries for stack, IDT, and TSS. */
     for ( off = rc = 0; !rc && off < STACK_SIZE; off += PAGE_SIZE )
-        if ( !memguard_is_stack_guard_page(off) )
-            rc = clone_mapping(__va(__pa(stack_base[cpu])) + off, rpt);
+        rc = clone_mapping(__va(__pa(stack_base[cpu])) + off, rpt);
 
     if ( !rc )
         rc = clone_mapping(idt_tables[cpu], rpt);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index ddbe312f89..1cf00c1f4a 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -369,20 +369,15 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
 /*
  * Notes for get_stack_trace_bottom() and get_stack_dump_bottom()
  *
- * Stack pages 0 - 3:
+ * Stack pages 1 - 4:
  *   These are all 1-page IST stacks.  Each of these stacks have an exception
  *   frame and saved register state at the top.  The interesting bound for a
  *   trace is the word adjacent to this, while the bound for a dump is the
  *   very top, including the exception frame.
  *
- * Stack pages 4 and 5:
- *   None of these are particularly interesting.  With MEMORY_GUARD, page 5 is
- *   explicitly not present, so attempting to dump or trace it is
- *   counterproductive.  Without MEMORY_GUARD, it is possible for a call chain
- *   to use the entire primary stack and wander into page 5.  In this case,
- *   consider these pages an extension of the primary stack to aid debugging
- *   hopefully rare situations where the primary stack has effective been
- *   overflown.
+ * Stack pages 0 and 5:
+ *   Shadow stacks.  These are mapped read-only, and used by CET-SS capable
+ *   processors.  They will never contain regular stack data.
  *
  * Stack pages 6 and 7:
  *   These form the primary stack, and have a cpu_info at the top.  For a
@@ -396,13 +391,10 @@ unsigned long get_stack_trace_bottom(unsigned long sp)
 {
     switch ( get_stack_page(sp) )
     {
-    case 0 ... 3:
+    case 1 ... 4:
         return ROUNDUP(sp, PAGE_SIZE) -
             offsetof(struct cpu_user_regs, es) - sizeof(unsigned long);
 
-#ifndef MEMORY_GUARD
-    case 4 ... 5:
-#endif
     case 6 ... 7:
         return ROUNDUP(sp, STACK_SIZE) -
             sizeof(struct cpu_info) - sizeof(unsigned long);
@@ -416,12 +408,9 @@ unsigned long get_stack_dump_bottom(unsigned long sp)
 {
     switch ( get_stack_page(sp) )
     {
-    case 0 ... 3:
+    case 1 ... 4:
         return ROUNDUP(sp, PAGE_SIZE) - sizeof(unsigned long);
 
-#ifndef MEMORY_GUARD
-    case 4 ... 5:
-#endif
     case 6 ... 7:
         return ROUNDUP(sp, STACK_SIZE) - sizeof(unsigned long);
 
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index 5b8f4dbc79..99b66a0087 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -16,12 +16,12 @@
  *
  * 7 - Primary stack (with a struct cpu_info at the top)
  * 6 - Primary stack
- * 5 - Optionally not present (MEMORY_GUARD)
- * 4 - Unused; optionally not present (MEMORY_GUARD)
- * 3 - Unused; optionally not present (MEMORY_GUARD)
- * 2 - MCE IST stack
- * 1 - NMI IST stack
- * 0 - Double Fault IST stack
+ * 5 - Primary Shadow Stack (read-only)
+ * 4 - #DF IST stack
+ * 3 - #DB IST stack
+ * 2 - NMI IST stack
+ * 1 - #MC IST stack
+ * 0 - IST Shadow Stacks (4x 1k, read-only)
  */
 
 /*
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 3d3f9d49ac..7e74996053 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -536,7 +536,6 @@ void memguard_unguard_range(void *p, unsigned long l);
 
 void memguard_guard_stack(void *p);
 void memguard_unguard_stack(void *p);
-bool __attribute_const__ memguard_is_stack_guard_page(unsigned long addr);
 
 struct mmio_ro_emulate_ctxt {
         unsigned long cr2;
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 5e8a0fb649..f7e80d12e4 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -439,10 +439,10 @@ struct tss_page {
 DECLARE_PER_CPU(struct tss_page, tss_page);
 
 #define IST_NONE 0UL
-#define IST_DF   1UL
+#define IST_MCE  1UL
 #define IST_NMI  2UL
-#define IST_MCE  3UL
-#define IST_DB   4UL
+#define IST_DB   3UL
+#define IST_DF   4UL
 #define IST_MAX  4UL
 
 /* Set the Interrupt Stack Table used by a particular IDT entry. */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:17 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
Date: Fri, 1 May 2020 23:58:24 +0100
Message-ID: <20200501225838.9866-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For one, do_reserved_trap() and fatal_trap() render the vector in different bases (hex vs decimal).

Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
mnemonic, which starts bringing the code/diagnostics in line with the Intel
and AMD manuals.

Provide constants for every architecturally defined exception, even those not
yet implemented by Xen, as do_reserved_trap() is a catch-all handler.
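The lookup pattern introduced below can be sketched standalone.  This is an illustrative model with a reduced vector set, not the patch itself: gaps left by the designated initializers zero-fill, so the bounds-plus-first-byte check decays to the "??" fallback for reserved and out-of-range vectors.

```c
#include <assert.h>
#include <string.h>

/* Reduced, standalone model of the vec_name() lookup added by this patch.
 * Entries not listed (reserved vectors) are zero-filled, so names[vec][0]
 * is NUL for them and the fallback string is returned instead. */
static const char names[][4] = {
    [0] = "DE", [1] = "DB", [2] = "NMI", [3] = "BP",
    [13] = "GP", [14] = "PF",
};

static const char *vec_name(unsigned int vec)
{
    return (vec < sizeof(names) / sizeof(names[0]) && names[vec][0])
               ? names[vec] : "??";
}
```

The [4] element size bounds every mnemonic at three characters plus the NUL terminator, which all of the x86 exception mnemonics fit.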

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/traps.c            | 24 +++++++++++++++++++-----
 xen/include/asm-x86/processor.h |  6 +-----
 xen/include/asm-x86/x86-defns.h | 35 +++++++++++++++++++++++++++++++++++
 3 files changed, 55 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index fe9457cdb6..e73f07f28a 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -686,6 +686,20 @@ const char *trapstr(unsigned int trapnr)
     return trapnr < ARRAY_SIZE(strings) ? strings[trapnr] : "???";
 }
 
+static const char *vec_name(unsigned int vec)
+{
+    static const char names[][4] = {
+#define N(x) [X86_EXC_ ## x] = #x
+        N(DE),  N(DB),  N(NMI), N(BP),  N(OF),  N(BR),  N(UD),  N(NM),
+        N(DF),  N(CSO), N(TS),  N(NP),  N(SS),  N(GP),  N(PF),  N(SPV),
+        N(MF),  N(AC),  N(MC),  N(XM),  N(VE),  N(CP),
+                                        N(HV),  N(VC),  N(SX),
+#undef N
+    };
+
+    return (vec < ARRAY_SIZE(names) && names[vec][0]) ? names[vec] : "??";
+}
+
 /*
  * This is called for faults at very unexpected times (e.g., when interrupts
  * are disabled). In such situations we can't do much that is safe. We try to
@@ -743,10 +757,9 @@ void fatal_trap(const struct cpu_user_regs *regs, bool show_remote)
         }
     }
 
-    panic("FATAL TRAP: vector = %d (%s)\n"
-          "[error_code=%04x] %s\n",
-          trapnr, trapstr(trapnr), regs->error_code,
-          (regs->eflags & X86_EFLAGS_IF) ? "" : ", IN INTERRUPT CONTEXT");
+    panic("FATAL TRAP: vec %u, #%s[%04x]%s\n",
+          trapnr, vec_name(trapnr), regs->error_code,
+          (regs->eflags & X86_EFLAGS_IF) ? "" : " IN INTERRUPT CONTEXT");
 }
 
 static void do_reserved_trap(struct cpu_user_regs *regs)
@@ -757,7 +770,8 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
         return;
 
     show_execution_state(regs);
-    panic("FATAL RESERVED TRAP %#x: %s\n", trapnr, trapstr(trapnr));
+    panic("FATAL RESERVED TRAP: vec %u, #%s[%04x]\n",
+          trapnr, vec_name(trapnr), regs->error_code);
 }
 
 static void do_trap(struct cpu_user_regs *regs)
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 8f6f5a97dd..12b55e1022 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -43,11 +43,7 @@
 #define TRAP_virtualisation   20
 #define TRAP_nr               32
 
-#define TRAP_HAVE_EC                                                    \
-    ((1u << TRAP_double_fault) | (1u << TRAP_invalid_tss) |             \
-     (1u << TRAP_no_segment) | (1u << TRAP_stack_error) |               \
-     (1u << TRAP_gp_fault) | (1u << TRAP_page_fault) |                  \
-     (1u << TRAP_alignment_check))
+#define TRAP_HAVE_EC X86_EXC_HAVE_EC
 
 /* Set for entry via SYSCALL. Informs return code to use SYSRETQ not IRETQ. */
 /* NB. Same as VGCF_in_syscall. No bits in common with any other TRAP_ defn. */
diff --git a/xen/include/asm-x86/x86-defns.h b/xen/include/asm-x86/x86-defns.h
index 8bf503220a..84e15b15be 100644
--- a/xen/include/asm-x86/x86-defns.h
+++ b/xen/include/asm-x86/x86-defns.h
@@ -118,4 +118,39 @@
 
 #define X86_NR_VECTORS 256
 
+/* Exception Vectors */
+#define X86_EXC_DE             0 /* Divide Error. */
+#define X86_EXC_DB             1 /* Debug Exception. */
+#define X86_EXC_NMI            2 /* NMI. */
+#define X86_EXC_BP             3 /* Breakpoint. */
+#define X86_EXC_OF             4 /* Overflow. */
+#define X86_EXC_BR             5 /* BOUND Range. */
+#define X86_EXC_UD             6 /* Invalid Opcode. */
+#define X86_EXC_NM             7 /* Device Not Available. */
+#define X86_EXC_DF             8 /* Double Fault. */
+#define X86_EXC_CSO            9 /* Coprocessor Segment Overrun. */
+#define X86_EXC_TS            10 /* Invalid TSS. */
+#define X86_EXC_NP            11 /* Segment Not Present. */
+#define X86_EXC_SS            12 /* Stack-Segment Fault. */
+#define X86_EXC_GP            13 /* General Protection Fault. */
+#define X86_EXC_PF            14 /* Page Fault. */
+#define X86_EXC_SPV           15 /* PIC Spurious Interrupt Vector. */
+#define X86_EXC_MF            16 /* Maths fault (x87 FPU). */
+#define X86_EXC_AC            17 /* Alignment Check. */
+#define X86_EXC_MC            18 /* Machine Check. */
+#define X86_EXC_XM            19 /* SIMD Exception. */
+#define X86_EXC_VE            20 /* Virtualisation Exception. */
+#define X86_EXC_CP            21 /* Control-flow Protection. */
+#define X86_EXC_HV            28 /* Hypervisor Injection. */
+#define X86_EXC_VC            29 /* VMM Communication. */
+#define X86_EXC_SX            30 /* Security Exception. */
+
+/* Bitmap of exceptions which have error codes. */
+#define X86_EXC_HAVE_EC                                             \
+    ((1u << X86_EXC_DF) | (1u << X86_EXC_TS) | (1u << X86_EXC_NP) | \
+     (1u << X86_EXC_SS) | (1u << X86_EXC_GP) | (1u << X86_EXC_PF) | \
+     (1u << X86_EXC_AC) | (1u << X86_EXC_CP) |                      \
+     (1u << X86_EXC_VC) | (1u << X86_EXC_SX))
+
+
 #endif	/* __XEN_X86_DEFNS_H__ */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 22:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUecn-0007zf-7K; Fri, 01 May 2020 22:59:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUecm-0007z2-4R
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 22:59:16 +0000
X-Inumbo-ID: 5ce7dcf8-8bff-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ce7dcf8-8bff-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 22:59:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588373948;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=KmdiFWTUtoRQ6SF0OGgDhEv3ORD8KJHYv15kmE23ReQ=;
 b=HRQXCw/9UuQPkIuaP98GPgTkF0YBdIab2DH4VEdhwH9i4faNXdsIGgyF
 NkuISlKUyrxyGN9IzXFuPbnIc1bbYnMwE8ZdfdYkmJnQZiuPDfTBcOw0Z
 +QafbUqMWxnEjyYATrkQJc3PHsKTd16Qy/wX0vdgZSwkAvzdJKHjnci9k c=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: 7WuTT0jWBm09Mc/nbOEE+auc69OKlH3GWOmbKE2uVAaOnAmqnkYT90HxHA3xmAPWNI+BMHFQ2r
 QHw2wS8jFcFSMJk6tCSvXzMJibya4VQtH2wTwGkhBdEFfWkDG6fKCGbYlu5ntbh7Dsr5JO9xLN
 Fer8dsWTpxRAq4tmk0Yp0XzXCokf/n7WlYO6HQGVD0hhksc5wAJxM2nddEnnSLjZW9vBsgpxs6
 zEEL36xfT+/7Vf5tWmR5HD6wPpCbAhCWLB/LFoVBCjAczqFaoycmS3subZlpqx8D2WiXLueYZb
 klg=
X-SBRS: 2.7
X-MesageID: 17293962
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="17293962"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 01/16] x86/traps: Drop last_extable_addr
Date: Fri, 1 May 2020 23:58:23 +0100
Message-ID: <20200501225838.9866-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The only user of this facility is dom_crash_sync_extable(), which passes 0 into
asm_domain_crash_synchronous().  The common error cases are already covered
with show_page_walk(), leaving only %ss/%fs selector/segment errors in the
compat case.

Point at dom_crash_sync_extable in the error message, which is about as good
as the error hints from other users of asm_domain_crash_synchronous(), and
drop last_extable_addr.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/traps.c        | 11 +----------
 xen/arch/x86/x86_64/entry.S |  2 +-
 2 files changed, 2 insertions(+), 11 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 33e5d21ece..fe9457cdb6 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -96,7 +96,6 @@ static char __read_mostly opt_nmi[10] = "fatal";
 string_param("nmi", opt_nmi);
 
 DEFINE_PER_CPU(uint64_t, efer);
-static DEFINE_PER_CPU(unsigned long, last_extable_addr);
 
 DEFINE_PER_CPU_READ_MOSTLY(seg_desc_t *, gdt);
 DEFINE_PER_CPU_READ_MOSTLY(l1_pgentry_t, gdt_l1e);
@@ -786,7 +785,6 @@ static void do_trap(struct cpu_user_regs *regs)
     {
         dprintk(XENLOG_ERR, "Trap %u: %p [%ps] -> %p\n",
                 trapnr, _p(regs->rip), _p(regs->rip), _p(fixup));
-        this_cpu(last_extable_addr) = regs->rip;
         regs->rip = fixup;
         return;
     }
@@ -1099,7 +1097,6 @@ void do_invalid_op(struct cpu_user_regs *regs)
  die:
     if ( (fixup = search_exception_table(regs)) != 0 )
     {
-        this_cpu(last_extable_addr) = regs->rip;
         regs->rip = fixup;
         return;
     }
@@ -1122,7 +1119,6 @@ void do_int3(struct cpu_user_regs *regs)
 
         if ( (fixup = search_exception_table(regs)) != 0 )
         {
-            this_cpu(last_extable_addr) = regs->rip;
             dprintk(XENLOG_DEBUG, "Trap %u: %p [%ps] -> %p\n",
                     TRAP_int3, _p(regs->rip), _p(regs->rip), _p(fixup));
             regs->rip = fixup;
@@ -1461,7 +1457,6 @@ void do_page_fault(struct cpu_user_regs *regs)
             perfc_incr(copy_user_faults);
             if ( unlikely(regs->error_code & PFEC_reserved_bit) )
                 reserved_bit_page_fault(addr, regs);
-            this_cpu(last_extable_addr) = regs->rip;
             regs->rip = fixup;
             return;
         }
@@ -1591,7 +1586,6 @@ void do_general_protection(struct cpu_user_regs *regs)
     {
         dprintk(XENLOG_INFO, "GPF (%04x): %p [%ps] -> %p\n",
                 regs->error_code, _p(regs->rip), _p(regs->rip), _p(fixup));
-        this_cpu(last_extable_addr) = regs->rip;
         regs->rip = fixup;
         return;
     }
@@ -2085,10 +2079,7 @@ void asm_domain_crash_synchronous(unsigned long addr)
      */
     clac();
 
-    if ( addr == 0 )
-        addr = this_cpu(last_extable_addr);
-
-    printk("domain_crash_sync called from entry.S: fault at %p %pS\n",
+    printk("domain_crash_sync called from entry.S: issue around %p %pS\n",
            _p(addr), _p(addr));
 
     __domain_crash(current->domain);
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index d55453f3f3..a3ce298529 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -527,7 +527,7 @@ ENTRY(dom_crash_sync_extable)
         sete  %al
         leal  (%rax,%rax,2),%eax
         orb   %al,UREGS_cs(%rsp)
-        xorl  %edi,%edi
+        lea   dom_crash_sync_extable(%rip), %rdi
         jmp   asm_domain_crash_synchronous /* Does not return */
         .popsection
 #endif /* CONFIG_PV */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 22:59:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUeci-0007xH-PR; Fri, 01 May 2020 22:59:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUech-0007wd-3P
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 22:59:11 +0000
X-Inumbo-ID: 58e26510-8bff-11ea-b07b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58e26510-8bff-11ea-b07b-bc764e2007e4;
 Fri, 01 May 2020 22:59:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588373941;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=Ap0r5eSmJ8rufFjf/AMweZgfzQrJaoB/RQzTcg9JGIc=;
 b=ErI7A2pbb2ioIashxwuWHNLQAyklofIDa/D2N96acCgo3k+vCBkNV2gK
 gmAuAy5i5iA+cQnrVzmK88VQ/LTbg75b+zi+1DEK5gVd0h5Aj9ysdA2Zo
 R8289BthNZdqkpTON6atESa0yYzB1en2ytYN+Xhv7iVQMZrSrcDJ7Zvod Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: l2SQ2JLuOddSOfsNIBJCTSLhOtnmkBvFgvONzvDTCcY8Vkp/NtCl2khdZ9t2t6JtA7zD4GarrM
 BqM+kxLU7j90gPA9Mp6887De5bHXiO5NWfRNikGaWTDozSZykal5kSAwGFRXOF2TRn2G8/vW9i
 psvTC6stvSV5iP+ognHDjvS9c8hUo0/bVTGX/mip9oFyenMUwKfYF0xuGotxrfdCAmi16TcbUO
 IZ+h13PiXE4v3ZHgGZU4m+eNTts24WDvxeJeeOa5GzXBA0gSoFcgRtRKUSN69nEpr7Ozrc7ZXV
 m00=
X-SBRS: 2.7
X-MesageID: 16584677
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="16584677"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 09/16] x86/cpu: Adjust enable_nmis() to be shadow stack
 compatible
Date: Fri, 1 May 2020 23:58:31 +0100
Message-ID: <20200501225838.9866-10-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When executing an IRET-to-self, the shadow stack must agree with the regular
stack.  We can't manipulate SSP directly, so we have to fake a shadow IRET frame
by executing 3 CALLs, then editing the result to look correct.

This is not a fastpath, is called on the BSP long before CET can be set up,
and may be called on the crash path after CET is disabled.  Use the fact that
RDSSP is allocated from the hint nop space to construct a test for CET being
active which is safe on all processors.
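The CET-active test described above can be modelled in plain C (hypothetical helper names for illustration; the real logic is the inline asm in the diff below):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Model of the CET-SS detection trick.  RDSSP is encoded in the hint-nop
 * space, so on processors without CET (or with it disabled) it acts as a
 * NOP and leaves its destination register unchanged.  The value 1 can
 * never be a real shadow stack pointer (SSP is 8-byte aligned), so it
 * serves as a sentinel. */
static uint64_t model_rdssp(bool cet_ss_active, uint64_t ssp, uint64_t reg)
{
    return cet_ss_active ? ssp : reg;  /* NOP: reg passes through */
}

static bool shstk_active(bool cet_ss_active, uint64_t ssp)
{
    uint64_t reg = 1;                           /* mov  $1, %reg      */
    reg = model_rdssp(cet_ss_active, ssp, reg); /* rdsspq %reg        */
    return reg != 1;                            /* cmp $1 / je .Ldone */
}
```

Because the NOP behaviour is architecturally guaranteed for the hint-nop encodings, the sequence is safe to execute on every processor, including pre-CET parts.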

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/processor.h | 43 +++++++++++++++++++++++++++++++----------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 54e1a8b605..654d46a6f4 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -544,17 +544,40 @@ static inline void enable_nmis(void)
 {
     unsigned long tmp;
 
-    asm volatile ( "mov %%rsp, %[tmp]     \n\t"
-                   "push %[ss]            \n\t"
-                   "push %[tmp]           \n\t"
-                   "pushf                 \n\t"
-                   "push %[cs]            \n\t"
-                   "lea 1f(%%rip), %[tmp] \n\t"
-                   "push %[tmp]           \n\t"
-                   "iretq; 1:             \n\t"
-                   : [tmp] "=&r" (tmp)
+    asm volatile ( "mov     %%rsp, %[rsp]        \n\t"
+                   "lea    .Ldone(%%rip), %[rip] \n\t"
+#ifdef CONFIG_XEN_SHSTK
+                   /* Check for CET-SS being active. */
+                   "mov    $1, %k[ssp]           \n\t"
+                   "rdsspq %[ssp]                \n\t"
+                   "cmp    $1, %k[ssp]           \n\t"
+                   "je     .Lshstk_done          \n\t"
+
+                   /* Push 3 words on the shadow stack */
+                   ".rept 3                      \n\t"
+                   "call 1f; nop; 1:             \n\t"
+                   ".endr                        \n\t"
+
+                   /* Fixup to be an IRET shadow stack frame */
+                   "wrssq  %q[cs], -1*8(%[ssp])  \n\t"
+                   "wrssq  %[rip], -2*8(%[ssp])  \n\t"
+                   "wrssq  %[ssp], -3*8(%[ssp])  \n\t"
+
+                   ".Lshstk_done:"
+#endif
+                   /* Write an IRET regular frame */
+                   "push   %[ss]                 \n\t"
+                   "push   %[rsp]                \n\t"
+                   "pushf                        \n\t"
+                   "push   %q[cs]                \n\t"
+                   "push   %[rip]                \n\t"
+                   "iretq                        \n\t"
+                   ".Ldone:                      \n\t"
+                   : [rip] "=&r" (tmp),
+                     [rsp] "=&r" (tmp),
+                     [ssp] "=&r" (tmp)
                    : [ss] "i" (__HYPERVISOR_DS),
-                     [cs] "i" (__HYPERVISOR_CS) );
+                     [cs] "r" (__HYPERVISOR_CS) );
 }
 
 void sysenter_entry(void);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 22:59:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUecs-00083b-Fx; Fri, 01 May 2020 22:59:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUecr-00082n-4H
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 22:59:21 +0000
X-Inumbo-ID: 5cf85420-8bff-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5cf85420-8bff-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 22:59:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588373948;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=gDqmpBaBuFtcNPsY5Y83vtQt6FtCDBM6ODRWWUkB7Wo=;
 b=Tv0lhi3k/l1WKgwwMevpenyJaZiXE5d/s5JPp1EflcHKkfquzlW2lTDk
 4l+6JycZ1EzRzbKNjQ8ML2MD0BIaQoJWuaufKLndS/1yA5hCCVDhBWb4Q
 jHUUoTITvB0kI5sZxMKSioGZKw6nKfLyQwGlZZVZVipMge/krzE+hPwkI E=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: tvG01VWEhW9lEFXuYuSyuiQfsPHtPwE54KCT0FZT3ZQyrETbooExtlWNlIk+4j9NJ3pKOEm75Q
 b+x+EVLcXMtWpDKddHDvArnH8eIcgctc+pYmuZ8TqWNIRxrmeFL9W70A2t42ehsVoQ6CQoOt/V
 83Jgdw1iHZ4S7hCaYmL5XCCzVNDkTlPXvE6cUoeTo1eHWK8GF4abRoi4yzrYRrpgimmFNQvGIx
 lAJ10SqevCFFx0nt3KE9hptBvQdCVuC4g1p6LoktErRxOAQqLqJ0m0czbOJ99siyvkFBRkwvxx
 NbE=
X-SBRS: 2.7
X-MesageID: 16854947
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="16854947"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 00/16] x86: Support for CET Supervisor Shadow Stacks
Date: Fri, 1 May 2020 23:58:22 +0100
Message-ID: <20200501225838.9866-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This series implements Shadow Stack support for Xen to use.

You'll need a CET-capable toolchain (Binutils 2.32 or later); no specific
compiler support is required.

CET-SS makes PV32 unusable, so using shadow stacks prevents the use of 32-bit
PV guests.  Compatibility can be obtained using the PV Shim.

Andrew Cooper (16):
  x86/traps: Drop last_extable_addr
  x86/traps: Clean up printing in do_reserved_trap()/fatal_trap()
  x86/traps: Factor out exception_fixup() and make printing consistent
  x86/smpboot: Write the top-of-stack block in cpu_smpboot_alloc()
  x86/shstk: Introduce Supervisor Shadow Stack support
  x86/traps: Implement #CP handler and extend #PF for shadow stacks
  x86/shstk: Re-layout the stack block for shadow stacks
  x86/shstk: Create shadow stacks
  x86/cpu: Adjust enable_nmis() to be shadow stack compatible
  x86/cpu: Adjust reset_stack_and_jump() to be shadow stack compatible
  x86/spec-ctrl: Adjust DO_OVERWRITE_RSB to be shadow stack compatible
  x86/extable: Adjust extable handling to be shadow stack compatible
  x86/ioemul: Rewrite stub generation to be shadow stack compatible
  x86/alt: Adjust _alternative_instructions() to not create shadow stacks
  x86/entry: Adjust guest paths to be shadow stack compatible
  x86/shstk: Activate Supervisor Shadow Stacks

 xen/arch/x86/Kconfig                |  17 +++
 xen/arch/x86/acpi/wakeup_prot.S     |  56 ++++++++++
 xen/arch/x86/alternative.c          |  14 +++
 xen/arch/x86/boot/x86_64.S          |  30 +++++-
 xen/arch/x86/cpu/common.c           |  34 +++++-
 xen/arch/x86/crash.c                |   7 ++
 xen/arch/x86/ioport_emulate.c       |  11 +-
 xen/arch/x86/mm.c                   |  41 ++++---
 xen/arch/x86/pv/emul-priv-op.c      |  91 ++++++++++++----
 xen/arch/x86/pv/gpr_switch.S        |  37 ++-----
 xen/arch/x86/setup.c                |  56 ++++++++++
 xen/arch/x86/smpboot.c              |  10 +-
 xen/arch/x86/spec_ctrl.c            |   8 ++
 xen/arch/x86/traps.c                | 206 ++++++++++++++++++++++--------------
 xen/arch/x86/x86_64/compat/entry.S  |   2 +-
 xen/arch/x86/x86_64/entry.S         |  39 ++++++-
 xen/include/asm-x86/cpufeature.h    |   1 +
 xen/include/asm-x86/cpufeatures.h   |   1 +
 xen/include/asm-x86/current.h       |  59 ++++++++---
 xen/include/asm-x86/io.h            |   3 +-
 xen/include/asm-x86/mm.h            |   1 -
 xen/include/asm-x86/msr-index.h     |   3 +
 xen/include/asm-x86/page.h          |   1 +
 xen/include/asm-x86/processor.h     |  60 +++++++----
 xen/include/asm-x86/spec_ctrl_asm.h |  16 ++-
 xen/include/asm-x86/x86-defns.h     |  36 +++++++
 xen/include/asm-x86/x86_64/page.h   |   1 +
 xen/scripts/Kconfig.include         |   4 +
 28 files changed, 640 insertions(+), 205 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 22:59:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUecx-00086e-OP; Fri, 01 May 2020 22:59:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUecw-00085r-4x
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 22:59:26 +0000
X-Inumbo-ID: 5d9ce7c4-8bff-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d9ce7c4-8bff-11ea-b9cf-bc764e2007e4;
 Fri, 01 May 2020 22:59:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588373949;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=yuj7W8FBuxAyknqo2wUBSETsY2vvMAq34uBhRh1QqIU=;
 b=PFIorDf9U2lcHF4KRelD4zKVEGGgTsqywgiKwD/Bmi46SZMqu8YuZkeW
 0s2ljweIozajvWXyGYmH9q5BiJ49GUwSgmqXF+nDVrpUxXhYRJpILFmDZ
 H8e3SlIc1ryWSusinCueyNFzy9pZdxvTMiYmCb9oy5lHoWHBtldDK33rx 4=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: 1A+w2IiSnHoSp+2Fe/WtiLGlowGS5xLZFh3Kc1HlxmeFkXDESQL+ABj8ImGYUdnf88c9oSuF95
 a2mnTEP9WFBGNQEjnPFutxWQz9U7Ez7a43V8iAZ0rtEJ06CQONN4B7XhJV7WDXek1nYwlUYK5c
 aD8KBkD6uU11lcpIelSRNn1Y0cMSTwvSdhvhe5L9Zo9aT9Zy9YNRw1M6yDMK3fix3Mc/o6574p
 m5HTRrk4b0vl6y0LGDEs9rEByLDJhIVH3i8BV6Df0L4cDc0L67G64vD8U8x3aBoqvuL1nUjasz
 uNs=
X-SBRS: 2.7
X-MesageID: 17293963
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="17293963"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 03/16] x86/traps: Factor out exception_fixup() and make
 printing consistent
Date: Fri, 1 May 2020 23:58:25 +0100
Message-ID: <20200501225838.9866-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

#UD faults never had any diagnostics printed, and the other exception handlers
were inconsistent.

Don't use dprintk() because identifying traps.c is actively unhelpful in the
message, as it is the location of the fixup, not the fault.  Use the new
vec_name() infrastructure, rather than leaving raw numbers for the log.

  (XEN) Running stub recovery selftests...
  (XEN) Fixup #UD[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
  (XEN) Fixup #GP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6
  (XEN) Fixup #SS[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
  (XEN) Fixup #BP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6
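
The centralised fixup pattern described above can be modelled in plain
userspace C.  Everything here is a hypothetical stand-in (the table, the
addresses, the `fixup()` name) for Xen's exception table and the new
exception_fixup() helper, not the hypervisor's actual code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Toy exception table: maps a faulting address to its recovery address. */
struct ex_entry { uintptr_t addr, cont; };

static const struct ex_entry ex_table[] = {
    { 0x1000, 0x2000 },
    { 0x1040, 0x2040 },
};

/* Return 0 when no entry exists, mirroring search_exception_table(). */
static uintptr_t ex_lookup(uintptr_t rip)
{
    for ( size_t i = 0; i < sizeof(ex_table) / sizeof(ex_table[0]); i++ )
        if ( ex_table[i].addr == rip )
            return ex_table[i].cont;
    return 0;
}

/* Shape of exception_fixup(): optionally print, then redirect execution. */
static int fixup(uintptr_t *rip, int print)
{
    uintptr_t cont = ex_lookup(*rip);

    if ( cont == 0 )
        return 0;               /* No fixup: caller escalates to fatal. */

    if ( print )                /* Ratelimited printk() in the real code. */
        printf("Fixup: %#lx -> %#lx\n",
               (unsigned long)*rip, (unsigned long)cont);

    *rip = cont;                /* Resume at the recovery address. */
    return 1;
}
```

The point of the shape is that every caller gets identical, ratelimit-aware
diagnostics, and the "no entry found" case falls through to the existing
fatal paths unchanged.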

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/traps.c | 68 ++++++++++++++++++++++++----------------------------
 1 file changed, 31 insertions(+), 37 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index e73f07f28a..737ab036d2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -774,10 +774,27 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
           trapnr, vec_name(trapnr), regs->error_code);
 }
 
+static bool exception_fixup(struct cpu_user_regs *regs, bool print)
+{
+    unsigned long fixup = search_exception_table(regs);
+
+    if ( unlikely(fixup == 0) )
+        return false;
+
+    /* Can currently be triggered by guests.  Make sure we ratelimit. */
+    if ( IS_ENABLED(CONFIG_DEBUG) && print )
+        printk(XENLOG_GUEST XENLOG_WARNING "Fixup #%s[%04x]: %p [%ps] -> %p\n",
+               vec_name(regs->entry_vector), regs->error_code,
+               _p(regs->rip), _p(regs->rip), _p(fixup));
+
+    regs->rip = fixup;
+
+    return true;
+}
+
 static void do_trap(struct cpu_user_regs *regs)
 {
     unsigned int trapnr = regs->entry_vector;
-    unsigned long fixup;
 
     if ( regs->error_code & X86_XEC_EXT )
         goto hardware_trap;
@@ -795,13 +812,8 @@ static void do_trap(struct cpu_user_regs *regs)
         return;
     }
 
-    if ( likely((fixup = search_exception_table(regs)) != 0) )
-    {
-        dprintk(XENLOG_ERR, "Trap %u: %p [%ps] -> %p\n",
-                trapnr, _p(regs->rip), _p(regs->rip), _p(fixup));
-        regs->rip = fixup;
+    if ( likely(exception_fixup(regs, true)) )
         return;
-    }
 
  hardware_trap:
     if ( debugger_trap_fatal(trapnr, regs) )
@@ -1109,11 +1121,8 @@ void do_invalid_op(struct cpu_user_regs *regs)
     }
 
  die:
-    if ( (fixup = search_exception_table(regs)) != 0 )
-    {
-        regs->rip = fixup;
+    if ( likely(exception_fixup(regs, true)) )
         return;
-    }
 
     if ( debugger_trap_fatal(TRAP_invalid_op, regs) )
         return;
@@ -1129,15 +1138,8 @@ void do_int3(struct cpu_user_regs *regs)
 
     if ( !guest_mode(regs) )
     {
-        unsigned long fixup;
-
-        if ( (fixup = search_exception_table(regs)) != 0 )
-        {
-            dprintk(XENLOG_DEBUG, "Trap %u: %p [%ps] -> %p\n",
-                    TRAP_int3, _p(regs->rip), _p(regs->rip), _p(fixup));
-            regs->rip = fixup;
+        if ( likely(exception_fixup(regs, true)) )
             return;
-        }
 
         if ( !debugger_trap_fatal(TRAP_int3, regs) )
             printk(XENLOG_DEBUG "Hit embedded breakpoint at %p [%ps]\n",
@@ -1435,7 +1437,7 @@ static int fixup_page_fault(unsigned long addr, struct cpu_user_regs *regs)
  */
 void do_page_fault(struct cpu_user_regs *regs)
 {
-    unsigned long addr, fixup;
+    unsigned long addr;
     unsigned int error_code;
 
     addr = read_cr2();
@@ -1466,12 +1468,11 @@ void do_page_fault(struct cpu_user_regs *regs)
         if ( pf_type != real_fault )
             return;
 
-        if ( likely((fixup = search_exception_table(regs)) != 0) )
+        if ( likely(exception_fixup(regs, false)) )
         {
             perfc_incr(copy_user_faults);
             if ( unlikely(regs->error_code & PFEC_reserved_bit) )
                 reserved_bit_page_fault(addr, regs);
-            regs->rip = fixup;
             return;
         }
 
@@ -1529,7 +1530,6 @@ void do_general_protection(struct cpu_user_regs *regs)
 #ifdef CONFIG_PV
     struct vcpu *v = current;
 #endif
-    unsigned long fixup;
 
     if ( debugger_trap_entry(TRAP_gp_fault, regs) )
         return;
@@ -1596,13 +1596,8 @@ void do_general_protection(struct cpu_user_regs *regs)
 
  gp_in_kernel:
 
-    if ( likely((fixup = search_exception_table(regs)) != 0) )
-    {
-        dprintk(XENLOG_INFO, "GPF (%04x): %p [%ps] -> %p\n",
-                regs->error_code, _p(regs->rip), _p(regs->rip), _p(fixup));
-        regs->rip = fixup;
+    if ( likely(exception_fixup(regs, true)) )
         return;
-    }
 
  hardware_gp:
     if ( debugger_trap_fatal(TRAP_gp_fault, regs) )
@@ -1761,18 +1756,17 @@ void do_device_not_available(struct cpu_user_regs *regs)
 
     if ( !guest_mode(regs) )
     {
-        unsigned long fixup = search_exception_table(regs);
-
-        gprintk(XENLOG_ERR, "#NM: %p [%ps] -> %p\n",
-                _p(regs->rip), _p(regs->rip), _p(fixup));
         /*
          * We shouldn't be able to reach here, but for release builds have
          * the recovery logic in place nevertheless.
          */
-        ASSERT_UNREACHABLE();
-        BUG_ON(!fixup);
-        regs->rip = fixup;
-        return;
+        if ( exception_fixup(regs, true) )
+        {
+            ASSERT_UNREACHABLE();
+            return;
+        }
+
+        fatal_trap(regs, false);
     }
 
 #ifdef CONFIG_PV
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 22:59:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 22:59:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUed2-000898-1K; Fri, 01 May 2020 22:59:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUed1-00088e-40
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 22:59:31 +0000
X-Inumbo-ID: 5dd33824-8bff-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5dd33824-8bff-11ea-ae69-bc764e2007e4;
 Fri, 01 May 2020 22:59:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588373949;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=zvkVxgzJkr/FcspZwSx7AStaUlQ9aD0VLFmFs6qVRg0=;
 b=fGi6lLAUwPdBOiAVnHjWKK41Hlx2go5w8R3p2YUdlH3M2qWCCLOm/+30
 CGTZ7u/y2QHMJB2jLnSBgDL3eDVMkfd8796S2IOUlQd7lXm65I+AS4zt8
 0mnc6XI6GExBY2f1SmPcH3skfdtYVPkE1xl82dnU3cNsqsYlsSFSn9U/h M=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: uy/J8kG2T7UsumKNrlGpZrycXRrLKHkS8/gkbU9dYQkI//NE/4SNcE1Qgh87iP7E0nwcLnHD82
 NqrkcX/IuKsap+7nG9do8Pmvg5rRYn+kNoArkjTxcKfm4aNlugT2LrRAe+VZLgA4aJqbeuKBhb
 I8P1T/h6bQGm6aw4IR62GF8X9FdbwUMVXXMgmB/4tOlS+VC7KDeWKCZofJWfAqQdV+a+H/T0FD
 fCueCjlYLwN7bzkITbTT5OPtNTDae3q9QDVZwbo1ytYpxTw58XrwSYQg7RIFaF1Wc3+LfNd8Qs
 JZk=
X-SBRS: 2.7
X-MesageID: 16854948
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="16854948"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 04/16] x86/smpboot: Write the top-of-stack block in
 cpu_smpboot_alloc()
Date: Fri, 1 May 2020 23:58:26 +0100
Message-ID: <20200501225838.9866-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This allows the AP boot assembly to use per-cpu variables, and brings the
semantics closer to those of the BSP, which can use per-cpu variables from the
start of day.
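
The trick the patch relies on is that Xen keeps a struct cpu_info at the top
of each size-aligned per-CPU stack, so the block can be located from any
stack pointer with pure arithmetic.  A userspace sketch of that arithmetic
(the struct layout and 32KiB stack size are illustrative assumptions, and
`cpu_info_from_stack()` is a stand-in for get_cpu_info_from_stack()):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* 8 pages, matching a STACK_ORDER-3 allocation; illustrative only. */
#define STACK_SIZE (8u << 12)

struct cpu_info {
    unsigned int processor_id;
    unsigned long per_cpu_offset;
};

/* Round sp up to the end of the (size-aligned) stack, then step back one
 * cpu_info.  Works for any sp anywhere inside the stack. */
static struct cpu_info *cpu_info_from_stack(uintptr_t sp)
{
    return (struct cpu_info *)((sp | (STACK_SIZE - 1)) + 1) - 1;
}
```

Because the block is reachable from the stack alone, cpu_smpboot_alloc() can
fill in processor_id and per_cpu_offset before the AP ever runs, which is
what lets set_processor_id() disappear from start_secondary().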

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/smpboot.c        | 7 ++++++-
 xen/include/asm-x86/current.h | 5 -----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 5a3786d399..f999323bc4 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -329,7 +329,6 @@ void start_secondary(void *unused)
 
     /* Critical region without IDT or TSS.  Any fault is deadly! */
 
-    set_processor_id(cpu);
     set_current(idle_vcpu[cpu]);
     this_cpu(curr_vcpu) = idle_vcpu[cpu];
     rdmsrl(MSR_EFER, this_cpu(efer));
@@ -986,6 +985,7 @@ static void cpu_smpboot_free(unsigned int cpu, bool remove)
 
 static int cpu_smpboot_alloc(unsigned int cpu)
 {
+    struct cpu_info *info;
     unsigned int i, memflags = 0;
     nodeid_t node = cpu_to_node(cpu);
     seg_desc_t *gdt;
@@ -999,6 +999,11 @@ static int cpu_smpboot_alloc(unsigned int cpu)
         stack_base[cpu] = alloc_xenheap_pages(STACK_ORDER, memflags);
     if ( stack_base[cpu] == NULL )
         goto out;
+
+    info = get_cpu_info_from_stack((unsigned long)stack_base[cpu]);
+    info->processor_id = cpu;
+    info->per_cpu_offset = __per_cpu_offset[cpu];
+
     memguard_guard_stack(stack_base[cpu]);
 
     gdt = per_cpu(gdt, cpu) ?: alloc_xenheap_pages(0, memflags);
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index 0b47485337..5b8f4dbc79 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -100,11 +100,6 @@ static inline struct cpu_info *get_cpu_info(void)
 #define current               (get_current())
 
 #define get_processor_id()    (get_cpu_info()->processor_id)
-#define set_processor_id(id)  do {                                      \
-    struct cpu_info *ci__ = get_cpu_info();                             \
-    ci__->per_cpu_offset = __per_cpu_offset[ci__->processor_id = (id)]; \
-} while (0)
-
 #define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs)
 
 /*
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:04:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 23:04:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUehg-0001AP-P4; Fri, 01 May 2020 23:04:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUehf-0001AF-N1
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 23:04:19 +0000
X-Inumbo-ID: 15fcccbc-8c00-11ea-9887-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15fcccbc-8c00-11ea-9887-bc764e2007e4;
 Fri, 01 May 2020 23:04:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588374258;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=HTUK8m+QrV1a4/c+FcbBl6B+TxDRXiRJZVfXM92PQy0=;
 b=iOc9QmKxzAeTJCrxrx/MA3LkSf8Dde+9NsbXTPx7VC+uh1TNJ1r4ZSth
 cnMZLPIFRVYxoxolviwSMGZB/OGw3yD2TTR8r+OKo2WtgHJ3PZByi2q29
 ZIJl+1fuy2gzZxNaDsCmKgG5XIW/+WaD8zePshBmg/3NsGwsBh7ekJU2B k=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: LwgmbsDFxdS+tPhQKNfQOdklFvxXZDioEMP9BKxTatOmGiSlPKvotKZZwqe3z+LlvdvTNSoFug
 NJ2mHwyf1Ao8tw+o6ofA7ZWar0nPm/qP15wYYp3kPU9d9mj5tHrECaarbawvhyN6nNML/mr2zj
 nV6vc007LnblHHWt4VgzZNHORmk98DXVTGnb9tKQ23wuwFb3Z69Sj6/2mDLXs0eRRopJDOE/J5
 bDq1LMsTN2dNfIyG4ZSHRsaXwMf2x/ZRetoDZ9OmSoGfZ/vGglnjx+g51ZgbGV/iEsQcEvHcRj
 Xzk=
X-SBRS: 2.7
X-MesageID: 17294144
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="17294144"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 14/16] x86/alt: Adjust _alternative_instructions() to not
 create shadow stacks
Date: Fri, 1 May 2020 23:58:36 +0100
Message-ID: <20200501225838.9866-15-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The current alternatives algorithm clears CR0.WP and writes into .text.  This
has the side effect of turning the mappings into shadow stacks once CET is
active.

Adjust _alternative_instructions() to clean up after itself.  This involves
extending the set of bits modify_xen_mappings() considers to include Dirty (and
Accessed for good measure).
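
The underlying rule is that, with CET-SS, a present mapping which is
read-only yet Dirty is a shadow-stack mapping, so resetting the text
mappings is a matter of masking the Dirty (and Accessed) bits back out.  A
minimal sketch of that flag handling, with illustrative bit values standing
in for the real x86 PTE definitions:

```c
#include <assert.h>

/* Illustrative PTE bits; _PAGE_NX is a stand-in value for this sketch. */
#define _PAGE_PRESENT  0x001u
#define _PAGE_RW       0x002u
#define _PAGE_ACCESSED 0x020u
#define _PAGE_DIRTY    0x040u
#define _PAGE_NX       0x8000u

/* The extended mask from the patch: D and A are now maskable too. */
#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)

/* Apply new flags over the maskable set, as modify_xen_mappings() does. */
static unsigned int apply_flags(unsigned int pte, unsigned int nf)
{
    return (pte & ~FLAGS_MASK) | (nf & FLAGS_MASK);
}

/* CET-SS shadow-stack condition: present, read-only, Dirty. */
static int is_shstk(unsigned int pte)
{
    return (pte & (_PAGE_PRESENT | _PAGE_RW | _PAGE_DIRTY)) ==
           (_PAGE_PRESENT | _PAGE_DIRTY);
}
```

Passing an RX-style flag set (present, no RW, no Dirty) through
`apply_flags()` is what un-does the accidental shadow-stack property after
patching.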

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/alternative.c | 14 ++++++++++++++
 xen/arch/x86/mm.c          |  6 +++---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/alternative.c b/xen/arch/x86/alternative.c
index ce2b4302e6..004e9ede25 100644
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -21,6 +21,7 @@
 #include <asm/processor.h>
 #include <asm/alternative.h>
 #include <xen/init.h>
+#include <asm/setup.h>
 #include <asm/system.h>
 #include <asm/traps.h>
 #include <asm/nmi.h>
@@ -398,6 +399,19 @@ static void __init _alternative_instructions(bool force)
         panic("Timed out waiting for alternatives self-NMI to hit\n");
 
     set_nmi_callback(saved_nmi_callback);
+
+    /*
+     * When Xen is using shadow stacks, the alternatives clearing CR0.WP and
+     * writing into the mappings set dirty bits, turning the mappings into
+     * shadow stack mappings.
+     *
+     * While we can execute from them, this would also permit them to be the
+     * target of WRSS instructions, so reset the dirty bits after patching.
+     */
+    if ( cpu_has_xen_shstk )
+        modify_xen_mappings(XEN_VIRT_START + MB(2),
+                            (unsigned long)&__2M_text_end,
+                            PAGE_HYPERVISOR_RX);
 }
 
 void __init alternative_instructions(void)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4e2c3c9735..26b01cb917 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5448,8 +5448,8 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
  * mappings, but will shatter superpages if necessary, and will destroy
  * mappings if not passed _PAGE_PRESENT.
  *
- * The only flags considered are NX, RW and PRESENT.  All other input flags
- * are ignored.
+ * The only flags considered are NX, D, A, RW and PRESENT.  All other input
+ * flags are ignored.
  *
  * It is an error to call with present flags over an unpopulated range.
  */
@@ -5462,7 +5462,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     unsigned long v = s;
 
     /* Set of valid PTE bits which may be altered. */
-#define FLAGS_MASK (_PAGE_NX|_PAGE_RW|_PAGE_PRESENT)
+#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
     nf &= FLAGS_MASK;
 
     ASSERT(IS_ALIGNED(s, PAGE_SIZE));
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:04:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 23:04:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUehm-0001Av-0w; Fri, 01 May 2020 23:04:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUehk-0001Ad-J0
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 23:04:24 +0000
X-Inumbo-ID: 1779f240-8c00-11ea-9887-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1779f240-8c00-11ea-9887-bc764e2007e4;
 Fri, 01 May 2020 23:04:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588374261;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=6HdmjgF6D28Zm4TcRg1rdSfUjLRRQBlEC1IznYRSCjQ=;
 b=KNNp+gbapOKdSfOCeWRD8NK8qtexsKixwsdTUuELxQEIqnupsNlkQm8B
 s2/ML2342gRREcHTI4Ug26K3Wqq5Bgy34C7OMTLxUolB97dCUFn6Y/yKo
 r5Ut1BEpY/VkrTj8DiP54+Cedw1PBRkIB/xmyZW2UTa4zPFnxdHmiTsWV g=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: V6SFzoHHkY51sW4Cww8TJ2Z4t6bksd+bmW8hNop1qzs5IfwqLPCX6DhPY3lmIO4x0k91u4Tl+5
 Q3yoaFOFI0bDWvpc32p3mXv5uDq36txwkI9u615vvf/bN1cNXNfwcPigtV6yFhjOMFAV2pO115
 n4jaEypIRuhd5AWjfYWhhCqj9LDNIec9PegiXvJibGt14RJ7+WgjbN/mSkWS5TWN3IPYSGVjlt
 HBlgtPNhSyM8R20u8iLCWAqHq5Ie0BNsDykjyE9S22fSt13dz4BDZ4gra6gkaC2dbpsA5IVtbj
 CXo=
X-SBRS: 2.7
X-MesageID: 17294146
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="17294146"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 15/16] x86/entry: Adjust guest paths to be shadow stack
 compatible
Date: Fri, 1 May 2020 23:58:37 +0100
Message-ID: <20200501225838.9866-16-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The SYSCALL/SYSEXIT paths need to use {SET,CLR}SSBSY.  The IRET to guest paths
must not, which forces us to spill a register to the stack.

The IST switch onto the primary stack is not great as we have an instruction
boundary with no shadow stack.  This is the least bad option available.

These paths are not used before shadow stacks are properly established, so can
use alternatives to avoid extra runtime CET detection logic.
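
The SETSSBSY/CLRSSBSY pairing above manages the supervisor shadow-stack
token's busy bit, which marks a shadow stack as in use.  A toy state model
of that protocol (bit 0 as the busy bit is an assumption of this sketch;
hardware raises #CP where the model returns -1):

```c
#include <assert.h>
#include <stdint.h>

#define TOKEN_BUSY 0x1ul

/* Model of SETSSBSY: claim the shadow stack, failing if already busy. */
static int setssbsy(uint64_t *token)
{
    if ( *token & TOKEN_BUSY )
        return -1;              /* Hardware would fault: stack in use. */
    *token |= TOKEN_BUSY;
    return 0;
}

/* Model of CLRSSBSY: release the shadow stack on the way out to guest. */
static int clrssbsy(uint64_t *token)
{
    if ( !(*token & TOKEN_BUSY) )
        return -1;              /* Releasing a stack never claimed. */
    *token &= ~TOKEN_BUSY;
    return 0;
}
```

The asymmetry in the patch follows from this protocol: SYSCALL entry must
claim the token before using the stack, while the IRET-to-guest path must
release it, and the only scratch register available there has to be spilled.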

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/x86_64/compat/entry.S |  2 +-
 xen/arch/x86/x86_64/entry.S        | 19 ++++++++++++++++++-
 2 files changed, 19 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 3cd375bd48..7816d0d4ac 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -198,7 +198,7 @@ ENTRY(cr4_pv32_restore)
 
 /* See lstar_enter for entry register state. */
 ENTRY(cstar_enter)
-        /* sti could live here when we don't switch page tables below. */
+        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
         CR4_PV32_RESTORE
         movq  8(%rsp),%rax /* Restore %rax. */
         movq  $FLAT_USER_SS32, 8(%rsp) /* Assume a 64bit domain.  Compat handled lower. */
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 06da350ba0..91cd8f94fd 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -194,6 +194,15 @@ restore_all_guest:
         movq  8(%rsp),%rcx            # RIP
         ja    iret_exit_to_guest
 
+        /* Clear the supervisor shadow stack token busy bit. */
+.macro rag_clrssbsy
+        push %rax
+        rdsspq %rax
+        clrssbsy (%rax)
+        pop %rax
+.endm
+        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
+
         cmpw  $FLAT_USER_CS32,16(%rsp)# CS
         movq  32(%rsp),%rsp           # RSP
         je    1f
@@ -226,7 +235,7 @@ iret_exit_to_guest:
  * %ss must be saved into the space left by the trampoline.
  */
 ENTRY(lstar_enter)
-        /* sti could live here when we don't switch page tables below. */
+        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
         movq  8(%rsp),%rax /* Restore %rax. */
         movq  $FLAT_KERNEL_SS,8(%rsp)
         pushq %r11
@@ -877,6 +886,14 @@ handle_ist_exception:
         movl  $UREGS_kernel_sizeof/8,%ecx
         movq  %rdi,%rsp
         rep   movsq
+
+        /* Switch Shadow Stacks */
+.macro ist_switch_shstk
+        rdsspq %rdi
+        clrssbsy (%rdi)
+        setssbsy
+.endm
+        ALTERNATIVE "", ist_switch_shstk, X86_FEATURE_XEN_SHSTK
 1:
 #else
         ASSERT_CONTEXT_IS_XEN
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:04:28 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 12/16] x86/extable: Adjust extable handling to be shadow stack
 compatible
Date: Fri, 1 May 2020 23:58:34 +0100
Message-ID: <20200501225838.9866-13-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

When adjusting an IRET frame to recover from a fault, an equivalent
adjustment needs to be made in the shadow IRET frame.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/traps.c        | 22 ++++++++++++++++++++++
 xen/arch/x86/x86_64/entry.S | 11 ++++++++++-
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 1cf00c1f4a..2354357cc1 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -778,6 +778,28 @@ static bool exception_fixup(struct cpu_user_regs *regs, bool print)
                vec_name(regs->entry_vector), regs->error_code,
                _p(regs->rip), _p(regs->rip), _p(fixup));
 
+    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
+    {
+        unsigned long ssp;
+
+        asm ("rdsspq %0" : "=r" (ssp) : "0" (1) );
+        if ( ssp != 1 )
+        {
+            unsigned long *ptr = _p(ssp);
+
+            /* Search for %rip in the shadow stack, ... */
+            while ( *ptr != regs->rip )
+                ptr++;
+
+            ASSERT(ptr[1] == __HYPERVISOR_CS);
+
+            /* ... and adjust to the fixup location. */
+            asm ("wrssq %[fix], %[stk]"
+                 : [stk] "=m" (*ptr)
+                 : [fix] "r" (fixup));
+        }
+    }
+
     regs->rip = fixup;
 
     return true;
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 6403c0ab92..06da350ba0 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -708,7 +708,16 @@ exception_with_ints_disabled:
         call  search_pre_exception_table
         testq %rax,%rax                 # no fixup code for faulting EIP?
         jz    1b
-        movq  %rax,UREGS_rip(%rsp)
+        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
+
+#ifdef CONFIG_XEN_SHSTK
+        mov    $1, %edi
+        rdsspq %rdi
+        cmp    $1, %edi
+        je     .L_exn_shstk_done
+        wrssq  %rax, (%rdi)             # fixup shadow stack
+.L_exn_shstk_done:
+#endif
         subq  $8,UREGS_rsp(%rsp)        # add ec/ev to previous stack frame
         testb $15,UREGS_rsp(%rsp)       # return %rsp is now aligned?
         jz    1f                        # then there is a pad quadword already
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:04:30 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 16/16] x86/shstk: Activate Supervisor Shadow Stacks
Date: Fri, 1 May 2020 23:58:38 +0100
Message-ID: <20200501225838.9866-17-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

With all other plumbing in place, activate shadow stacks when possible.

The BSP needs to wait until alternatives have run (to avoid interaction with
CR0.WP), and until after the first reset_stack_and_jump() (to avoid a pristine
shadow stack interacting in problematic ways with an in-use regular stack).
Therefore, activate shadow stacks in reinit_bsp_stack().

APs have all infrastructure set up by the booting CPU, so enable shadow stacks
before entering C.  The S3 path needs to save and restore SSP alongside RSP.

The crash path needs to turn CET off to avoid interfering with the kexec
kernel's environment.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/acpi/wakeup_prot.S | 56 +++++++++++++++++++++++++++++++++++++++++
 xen/arch/x86/boot/x86_64.S      | 30 +++++++++++++++++++++-
 xen/arch/x86/cpu/common.c       |  5 ++++
 xen/arch/x86/crash.c            |  7 ++++++
 xen/arch/x86/setup.c            | 26 +++++++++++++++++++
 xen/arch/x86/spec_ctrl.c        |  8 ++++++
 xen/include/asm-x86/msr-index.h |  3 +++
 xen/include/asm-x86/x86-defns.h |  1 +
 8 files changed, 135 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/wakeup_prot.S b/xen/arch/x86/acpi/wakeup_prot.S
index 4dba6020a7..22c0f8cc79 100644
--- a/xen/arch/x86/acpi/wakeup_prot.S
+++ b/xen/arch/x86/acpi/wakeup_prot.S
@@ -1,3 +1,8 @@
+#include <asm/config.h>
+#include <asm/msr-index.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
         .file __FILE__
         .text
         .code64
@@ -15,6 +20,12 @@ ENTRY(do_suspend_lowlevel)
         mov     %cr0, %rax
         mov     %rax, saved_cr0(%rip)
 
+#ifdef CONFIG_XEN_SHSTK
+        mov     $1, %eax
+        rdsspq  %rax
+        mov     %rax, saved_ssp(%rip)
+#endif
+
         /* enter sleep state physically */
         mov     $3, %edi
         call    acpi_enter_sleep_state
@@ -48,6 +59,48 @@ ENTRY(s3_resume)
         pushq   %rax
         lretq
 1:
+#ifdef CONFIG_XEN_SHSTK
+        /*
+         * Restoring SSP is a little convoluted, because we are intercepting
+         * the middle of an in-use shadow stack.  Write a temporary supervisor
+         * token under the stack, so SETSSBSY takes us where we want, then
+         * reset MSR_PL0_SSP to its usual value and pop the temporary token.
+         */
+        mov     saved_ssp(%rip), %rdi
+        cmpq    $1, %rdi
+        je      .L_shstk_done
+
+        /* Write a supervisor token under SSP. */
+        sub     $8, %rdi
+        mov     %rdi, (%rdi)
+
+        /* Load it into MSR_PL0_SSP. */
+        mov     $MSR_PL0_SSP, %ecx
+        mov     %rdi, %rdx
+        shr     $32, %rdx
+        mov     %edi, %eax
+        wrmsr
+
+        /* Enable CET. */
+        mov     $MSR_S_CET, %ecx
+        xor     %edx, %edx
+        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
+        wrmsr
+
+        /* Activate our temporary token. */
+        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ebx
+        mov     %rbx, %cr4
+        setssbsy
+
+        /* Reset MSR_PL0_SSP back to its expected value. */
+        mov     $MSR_PL0_SSP, %ecx
+        mov     %edi, %eax
+        and     $~(STACK_SIZE - 1), %eax
+        or      $0x5ff8, %eax
+        wrmsr
+
+        /* Pop the temporary token off the stack. */
+        mov     $2, %eax
+        incsspd %eax
+.L_shstk_done:
+#endif
 
         call    load_system_tables
 
@@ -65,6 +118,9 @@ ENTRY(s3_resume)
 
 saved_rsp:      .quad   0
 saved_cr0:      .quad   0
+#ifdef CONFIG_XEN_SHSTK
+saved_ssp:      .quad   0
+#endif
 
 GLOBAL(saved_magic)
         .long   0x9abcdef0
diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index 314a32a19f..59b770f955 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -28,8 +28,36 @@ ENTRY(__high_start)
         lretq
 1:
         test    %ebx,%ebx
-        jnz     start_secondary
+        jz      .L_bsp
 
+        /* APs.  Set up shadow stacks before entering C. */
+
+        testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
+                CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
+        je      .L_ap_shstk_done
+
+        mov     $MSR_S_CET, %ecx
+        xor     %edx, %edx
+        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
+        wrmsr
+
+        mov     $MSR_PL0_SSP, %ecx
+        mov     %rsp, %rdx
+        shr     $32, %rdx
+        mov     %esp, %eax
+        and     $~(STACK_SIZE - 1), %eax
+        or      $0x5ff8, %eax
+        wrmsr
+
+        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
+        mov     %rcx, %cr4
+        setssbsy
+
+.L_ap_shstk_done:
+        call    start_secondary
+        BUG     /* start_secondary() shouldn't return. */
+
+.L_bsp:
         /* Pass off the Multiboot info structure to C land (if applicable). */
         mov     multiboot_ptr(%rip),%edi
         call    __start_xen
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 3962717aa5..a77be36349 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -323,6 +323,11 @@ void __init early_cpu_init(void)
 	       x86_cpuid_vendor_to_str(c->x86_vendor), c->x86, c->x86,
 	       c->x86_model, c->x86_model, c->x86_mask, eax);
 
+	if (c->cpuid_level >= 7) {
+		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
+		c->x86_capability[cpufeat_word(X86_FEATURE_CET_SS)] = ecx;
+	}
+
 	eax = cpuid_eax(0x80000000);
 	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
 		eax = cpuid_eax(0x80000008);
diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 450eecd46b..0611b4fb9b 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -200,6 +200,13 @@ void machine_crash_shutdown(void)
     /* Reset CPUID masking and faulting to the host's default. */
     ctxt_switch_levelling(NULL);
 
+    /* Disable shadow stacks. */
+    if ( cpu_has_xen_shstk )
+    {
+        wrmsrl(MSR_S_CET, 0);
+        write_cr4(read_cr4() & ~X86_CR4_CET);
+    }
+
     info = kexec_crash_save_info();
     info->xen_phys_start = xen_phys_start;
     info->dom0_pfn_to_mfn_frame_list_list =
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index aa21201507..5c574b2035 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -664,6 +664,13 @@ static void __init noreturn reinit_bsp_stack(void)
     stack_base[0] = stack;
     memguard_guard_stack(stack);
 
+    if ( cpu_has_xen_shstk )
+    {
+        wrmsrl(MSR_PL0_SSP, (unsigned long)stack + 0x5ff8);
+        wrmsrl(MSR_S_CET, CET_SHSTK_EN | CET_WRSS_EN);
+        asm volatile ("setssbsy" ::: "memory");
+    }
+
     reset_stack_and_jump_nolp(init_done);
 }
 
@@ -985,6 +992,21 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     /* This must come before e820 code because it sets paddr_bits. */
     early_cpu_init();
 
+    /* Choose shadow stack early, to set infrastructure up appropriately. */
+    if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
+    {
+        printk("Enabling Supervisor Shadow Stacks\n");
+
+        setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
+#ifdef CONFIG_PV32
+        if ( opt_pv32 )
+        {
+            opt_pv32 = 0;
+            printk("  - Disabling PV32 due to Shadow Stacks\n");
+        }
+#endif
+    }
+
     /* Sanitise the raw E820 map to produce a final clean version. */
     max_page = raw_max_page = init_e820(memmap_type, &e820_raw);
 
@@ -1721,6 +1743,10 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     alternative_branches();
 
+    /* Defer CR4.CET until alternatives have finished playing with CR4.WP */
+    if ( cpu_has_xen_shstk )
+        set_in_cr4(X86_CR4_CET);
+
     /*
      * NB: when running as a PV shim VCPUOP_up/down is wired to the shim
      * physical cpu_add/remove functions, so launch the guest with only
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index c5d8e587a8..a94be2d594 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -882,6 +882,14 @@ void __init init_speculation_mitigations(void)
     hw_smt_enabled = check_smt_enabled();
 
     /*
+     * First, disable the use of retpolines if Xen is using shadow stacks, as
+     * they are incompatible.
+     */
+    if ( cpu_has_xen_shstk &&
+         (opt_thunk == THUNK_DEFAULT || opt_thunk == THUNK_RETPOLINE) )
+        thunk = THUNK_JMP;
+
+    /*
      * Has the user specified any custom BTI mitigations?  If so, follow their
      * instructions exactly and disable all heuristics.
      */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 85c5f20b76..cdfb7b047b 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -68,6 +68,9 @@
 
 #define MSR_U_CET                           0x000006a0
 #define MSR_S_CET                           0x000006a2
+#define  CET_SHSTK_EN                       (_AC(1, ULL) <<  0)
+#define  CET_WRSS_EN                        (_AC(1, ULL) <<  1)
+
 #define MSR_PL0_SSP                         0x000006a4
 #define MSR_PL1_SSP                         0x000006a5
 #define MSR_PL2_SSP                         0x000006a6
diff --git a/xen/include/asm-x86/x86-defns.h b/xen/include/asm-x86/x86-defns.h
index 84e15b15be..4051a80485 100644
--- a/xen/include/asm-x86/x86-defns.h
+++ b/xen/include/asm-x86/x86-defns.h
@@ -73,6 +73,7 @@
 #define X86_CR4_SMEP       0x00100000 /* enable SMEP */
 #define X86_CR4_SMAP       0x00200000 /* enable SMAP */
 #define X86_CR4_PKE        0x00400000 /* enable PKE */
+#define X86_CR4_CET        0x00800000 /* Control-flow Enforcement Technology */
 
 /*
  * XSTATE component flags in XCR0
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:04:36 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 10/16] x86/cpu: Adjust reset_stack_and_jump() to be shadow
 stack compatible
Date: Fri, 1 May 2020 23:58:32 +0100
Message-ID: <20200501225838.9866-11-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

We need to unwind up to the supervisor token.  See the comment for details.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/current.h | 42 +++++++++++++++++++++++++++++++++++++++---
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index 99b66a0087..2a7b728b1e 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -124,13 +124,49 @@ unsigned long get_stack_dump_bottom (unsigned long sp);
 # define CHECK_FOR_LIVEPATCH_WORK ""
 #endif
 
+#ifdef CONFIG_XEN_SHSTK
+/*
+ * We need to unwind the primary shadow stack to its supervisor token, located
+ * at 0x5ff8 from the base of the stack blocks.
+ *
+ * Read the shadow stack pointer, subtract it from 0x5ff8, divide by 8 to get
+ * the number of slots needing popping.
+ *
+ * INCSSPQ can't pop more than 255 entries.  We shouldn't ever need to pop
+ * that many entries, and getting this wrong will cause us to #DF later.
+ */
+# define SHADOW_STACK_WORK                      \
+    "mov $1, %[ssp];"                           \
+    "rdsspd %[ssp];"                            \
+    "cmp $1, %[ssp];"                           \
+    "je 1f;" /* CET not active?  Skip. */       \
+    "mov $"STR(0x5ff8)", %[val];"               \
+    "and $"STR(STACK_SIZE - 1)", %[ssp];"       \
+    "sub %[ssp], %[val];"                       \
+    "shr $3, %[val];"                           \
+    "cmp $255, %[val];"                         \
+    "jle 2f;"                                   \
+    "ud2a;"                                     \
+    "2: incsspq %q[val];"                       \
+    "1:"
+#else
+# define SHADOW_STACK_WORK ""
+#endif
+
 #define switch_stack_and_jump(fn, instr)                                \
     ({                                                                  \
+        unsigned int tmp, ssp;                                          \
         __asm__ __volatile__ (                                          \
-            "mov %0,%%"__OP"sp;"                                        \
+            SHADOW_STACK_WORK                                           \
+            "mov %[stk], %%rsp;"                                        \
             instr                                                       \
-             "jmp %c1"                                                  \
-            : : "r" (guest_cpu_user_regs()), "i" (fn) : "memory" );     \
+            "jmp %c[fun];"                                              \
+            : [val] "=&r" (tmp),                                        \
+              [ssp] "=&r" (ssp)                                         \
+            : [stk] "r" (guest_cpu_user_regs()),                        \
+              [fun] "i" (fn)                                            \
+            : "memory" );                                               \
         unreachable();                                                  \
     })
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:04:43 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 11/16] x86/spec-ctrl: Adjust DO_OVERWRITE_RSB to be shadow
 stack compatible
Date: Fri, 1 May 2020 23:58:33 +0100
Message-ID: <20200501225838.9866-12-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The 32 calls need dropping from the shadow stack as well as the regular stack.
To shorten the code, we can use the 32-bit forms of RDSSP/INCSSP, but we need
to double the input to INCSSP to counter the operand-size-based multiplier.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/spec_ctrl_asm.h | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-x86/spec_ctrl_asm.h b/xen/include/asm-x86/spec_ctrl_asm.h
index c60093b090..cb34299a86 100644
--- a/xen/include/asm-x86/spec_ctrl_asm.h
+++ b/xen/include/asm-x86/spec_ctrl_asm.h
@@ -83,9 +83,9 @@
  * Requires nothing
  * Clobbers \tmp (%rax by default), %rcx
  *
- * Requires 256 bytes of stack space, but %rsp has no net change. Based on
- * Google's performance numbers, the loop is unrolled to 16 iterations and two
- * calls per iteration.
+ * Requires 256 bytes of {,shadow}stack space, but %rsp/SSP has no net
+ * change. Based on Google's performance numbers, the loop is unrolled to 16
+ * iterations and two calls per iteration.
  *
  * The call filling the RSB needs a nonzero displacement.  A nop would do, but
 * we use "1: pause; lfence; jmp 1b" to safely contain any ret-based
@@ -114,6 +114,16 @@
     sub $1, %ecx
     jnz .L\@_fill_rsb_loop
     mov %\tmp, %rsp                 /* Restore old %rsp */
+
+#ifdef CONFIG_XEN_SHSTK
+    mov $1, %ecx
+    rdsspd %ecx
+    cmp $1, %ecx
+    je .L\@_shstk_done
+    mov $64, %ecx                   /* 64 * 4 bytes, given incsspd */
+    incsspd %ecx                    /* Restore old SSP */
+.L\@_shstk_done:
+#endif
 .endm
 
 .macro DO_SPEC_CTRL_ENTRY_FROM_HVM
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:04:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 23:04:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUei4-0001KJ-G0; Fri, 01 May 2020 23:04:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T3Df=6P=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jUei2-0001JE-Ud
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 23:04:42 +0000
X-Inumbo-ID: 23f6d6f0-8c00-11ea-9b70-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23f6d6f0-8c00-11ea-9b70-12813bfff9fa;
 Fri, 01 May 2020 23:04:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588374282;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=VTFDnoal8uf23mjLhPOcZyRMpnFsT5lsnYXHVwdqsjQ=;
 b=iXGR5INgY6IsJTqgU5U0m96sGcM65YqJFlHDsJVfUDQEQT2bTBefk84h
 CgPr1SEbpwYsd/iSCDfjIhcLtSWDd5//0XyvXSr6RLLS7H/nrwyqQZLCS
 S3Va8RIJ3JHnhKYnk5nXULzc7YC3ux3hOMSjBSL2DdK8olR+Q/Gl9DUQf s=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: tcUAXi3qqq2QFegR7Dr5w7LqqzPZbEH3INzJFOcZ7ggbdbyDPEu2JiVPCJu7W+lB8aSaYbcjp3
 ghu7Kun0k2rPVCAmuCXw/nGsEbp0MadK4aIZVUYC5cuKpbXAjmjaC+yXHXLkGMmhqwFWJkRy3W
 sNDSLLPX6aS5bFQevEC+M/20+8Mi/8ihFzxrB0H2f4cdL0IOzU3q70XtFxk3/bb99q7y3UbSwL
 nSeSenWENhksll65mhdlmACEFbjKcslcREi7707oGG37RL8HJ3bn+LPCCG/YXlSBtfvyWPewJc
 8s4=
X-SBRS: 2.7
X-MesageID: 16995031
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,341,1583211600"; d="scan'208";a="16995031"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH 13/16] x86/ioemul: Rewrite stub generation to be shadow stack
 compatible
Date: Fri, 1 May 2020 23:58:35 +0100
Message-ID: <20200501225838.9866-14-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200501225838.9866-1-andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The logic is completely undocumented and almost impossible to follow.  It
actually uses return oriented programming.  Rewrite it to conform to more
normal call mechanics, and leave a big comment explaining things.  As well as
the code being easier to follow, it will execute faster as it isn't fighting
the branch predictor.

Move the ioemul_handle_quirk() function pointer from traps.c to
ioport_emulate.c.  There is no reason for it to live outside the two
translation units which use it.  Alter the behaviour to return the number of
bytes written into the stub.

Access the addresses of the host/guest helpers with extern const char arrays.
Nothing good will come of C thinking they are regular functions.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

Posted previously on its perf benefits alone, but here is the real reason
behind the change.
---
 xen/arch/x86/ioport_emulate.c  | 11 ++---
 xen/arch/x86/pv/emul-priv-op.c | 91 +++++++++++++++++++++++++++++++-----------
 xen/arch/x86/pv/gpr_switch.S   | 37 +++++------------
 xen/arch/x86/traps.c           |  3 --
 xen/include/asm-x86/io.h       |  3 +-
 5 files changed, 85 insertions(+), 60 deletions(-)

diff --git a/xen/arch/x86/ioport_emulate.c b/xen/arch/x86/ioport_emulate.c
index 499c1f6056..f7511a9c49 100644
--- a/xen/arch/x86/ioport_emulate.c
+++ b/xen/arch/x86/ioport_emulate.c
@@ -8,7 +8,10 @@
 #include <xen/sched.h>
 #include <xen/dmi.h>
 
-static bool ioemul_handle_proliant_quirk(
+unsigned int (*ioemul_handle_quirk)(
+    u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
+
+static unsigned int ioemul_handle_proliant_quirk(
     u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs)
 {
     static const char stub[] = {
@@ -19,18 +22,16 @@ static bool ioemul_handle_proliant_quirk(
         0xa8, 0x80, /*    test $0x80, %al */
         0x75, 0xfb, /*    jnz 1b          */
         0x9d,       /*    popf            */
-        0xc3,       /*    ret             */
     };
     uint16_t port = regs->dx;
     uint8_t value = regs->al;
 
     if ( (opcode != 0xee) || (port != 0xcd4) || !(value & 0x80) )
-        return false;
+        return 0;
 
     memcpy(io_emul_stub, stub, sizeof(stub));
-    BUILD_BUG_ON(IOEMUL_QUIRK_STUB_BYTES < sizeof(stub));
 
-    return true;
+    return sizeof(stub);
 }
 
 /* This table is the set of system-specific I/O emulation hooks. */
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index e24b84f46a..f150886711 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -54,51 +54,96 @@ struct priv_op_ctxt {
     unsigned int bpmatch;
 };
 
-/* I/O emulation support. Helper routines for, and type of, the stack stub. */
-void host_to_guest_gpr_switch(struct cpu_user_regs *);
-unsigned long guest_to_host_gpr_switch(unsigned long);
+/* I/O emulation helpers.  Use non-standard calling conventions. */
+extern const char load_guest_gprs[], save_guest_gprs[];
 
 typedef void io_emul_stub_t(struct cpu_user_regs *);
 
 static io_emul_stub_t *io_emul_stub_setup(struct priv_op_ctxt *ctxt, u8 opcode,
                                           unsigned int port, unsigned int bytes)
 {
+    /*
+     * Construct a stub for IN/OUT emulation.
+     *
+     * Some platform drivers communicate with the SMM handler using GPRs as a
+     * mailbox.  Therefore, we must perform the emulation with the hardware
+     * domain's registers in view.
+     *
+     * We write a stub of the following form, using the guest load/save
+     * helpers (abnormal calling conventions), and one of several possible
+     * stubs performing the real I/O.
+     */
+    static const char prologue[] = {
+        0x53,       /* push %rbx */
+        0x55,       /* push %rbp */
+        0x41, 0x54, /* push %r12 */
+        0x41, 0x55, /* push %r13 */
+        0x41, 0x56, /* push %r14 */
+        0x41, 0x57, /* push %r15 */
+        0x57,       /* push %rdi (param for save_guest_gprs) */
+    };              /* call load_guest_gprs */
+                    /* <I/O stub> */
+                    /* call save_guest_gprs */
+    static const char epilogue[] = {
+        0x5f,       /* pop %rdi  */
+        0x41, 0x5f, /* pop %r15  */
+        0x41, 0x5e, /* pop %r14  */
+        0x41, 0x5d, /* pop %r13  */
+        0x41, 0x5c, /* pop %r12  */
+        0x5d,       /* pop %rbp  */
+        0x5b,       /* pop %rbx  */
+        0xc3,       /* ret       */
+    };
+
     struct stubs *this_stubs = &this_cpu(stubs);
     unsigned long stub_va = this_stubs->addr + STUB_BUF_SIZE / 2;
-    long disp;
-    bool use_quirk_stub = false;
+    unsigned int quirk_bytes = 0;
+    char *p;
+
+    /* Helpers - Read outer scope but only modify p. */
+#define APPEND_BUFF(b) ({ memcpy(p, b, sizeof(b)); p += sizeof(b); })
+#define APPEND_CALL(f)                                                  \
+    ({                                                                  \
+        long disp = (long)(f) - (stub_va + p - ctxt->io_emul_stub + 5); \
+        BUG_ON((int32_t)disp != disp);                                  \
+        *p++ = 0xe8;                                                    \
+        *(int32_t *)p = disp; p += 4;                                   \
+    })
 
     if ( !ctxt->io_emul_stub )
         ctxt->io_emul_stub =
             map_domain_page(_mfn(this_stubs->mfn)) + (stub_va & ~PAGE_MASK);
 
-    /* call host_to_guest_gpr_switch */
-    ctxt->io_emul_stub[0] = 0xe8;
-    disp = (long)host_to_guest_gpr_switch - (stub_va + 5);
-    BUG_ON((int32_t)disp != disp);
-    *(int32_t *)&ctxt->io_emul_stub[1] = disp;
+    p = ctxt->io_emul_stub;
+
+    APPEND_BUFF(prologue);
+    APPEND_CALL(load_guest_gprs);
 
+    /* Some platforms might need to quirk the stub for specific inputs. */
     if ( unlikely(ioemul_handle_quirk) )
-        use_quirk_stub = ioemul_handle_quirk(opcode, &ctxt->io_emul_stub[5],
-                                             ctxt->ctxt.regs);
+    {
+        quirk_bytes = ioemul_handle_quirk(opcode, p, ctxt->ctxt.regs);
+        p += quirk_bytes;
+    }
 
-    if ( !use_quirk_stub )
+    /* Default I/O stub. */
+    if ( likely(!quirk_bytes) )
     {
-        /* data16 or nop */
-        ctxt->io_emul_stub[5] = (bytes != 2) ? 0x90 : 0x66;
-        /* <io-access opcode> */
-        ctxt->io_emul_stub[6] = opcode;
-        /* imm8 or nop */
-        ctxt->io_emul_stub[7] = !(opcode & 8) ? port : 0x90;
-        /* ret (jumps to guest_to_host_gpr_switch) */
-        ctxt->io_emul_stub[8] = 0xc3;
+        *p++ = (bytes != 2) ? 0x90 : 0x66;  /* data16 or nop */
+        *p++ = opcode;                      /* <opcode>      */
+        *p++ = !(opcode & 8) ? port : 0x90; /* imm8 or nop   */
     }
 
-    BUILD_BUG_ON(STUB_BUF_SIZE / 2 < MAX(9, /* Default emul stub */
-                                         5 + IOEMUL_QUIRK_STUB_BYTES));
+    APPEND_CALL(save_guest_gprs);
+    APPEND_BUFF(epilogue);
+
+    BUG_ON(STUB_BUF_SIZE / 2 < (p - ctxt->io_emul_stub));
 
     /* Handy function-typed pointer to the stub. */
     return (void *)stub_va;
+
+#undef APPEND_CALL
+#undef APPEND_BUFF
 }
 
 
diff --git a/xen/arch/x86/pv/gpr_switch.S b/xen/arch/x86/pv/gpr_switch.S
index 6d26192c2c..e3f8037b69 100644
--- a/xen/arch/x86/pv/gpr_switch.S
+++ b/xen/arch/x86/pv/gpr_switch.S
@@ -9,59 +9,42 @@
 
 #include <asm/asm_defns.h>
 
-ENTRY(host_to_guest_gpr_switch)
-        movq  (%rsp), %rcx
-        movq  %rdi, (%rsp)
+/* Load guest GPRs.  Parameter in %rdi, clobbers all registers. */
+ENTRY(load_guest_gprs)
         movq  UREGS_rdx(%rdi), %rdx
-        pushq %rbx
         movq  UREGS_rax(%rdi), %rax
         movq  UREGS_rbx(%rdi), %rbx
-        pushq %rbp
         movq  UREGS_rsi(%rdi), %rsi
         movq  UREGS_rbp(%rdi), %rbp
-        pushq %r12
-        movq  UREGS_r8(%rdi), %r8
+        movq  UREGS_r8 (%rdi), %r8
         movq  UREGS_r12(%rdi), %r12
-        pushq %r13
-        movq  UREGS_r9(%rdi), %r9
+        movq  UREGS_r9 (%rdi), %r9
         movq  UREGS_r13(%rdi), %r13
-        pushq %r14
         movq  UREGS_r10(%rdi), %r10
         movq  UREGS_r14(%rdi), %r14
-        pushq %r15
         movq  UREGS_r11(%rdi), %r11
         movq  UREGS_r15(%rdi), %r15
-        pushq %rcx /* dummy push, filled by guest_to_host_gpr_switch pointer */
-        pushq %rcx
-        leaq  guest_to_host_gpr_switch(%rip),%rcx
-        movq  %rcx,8(%rsp)
         movq  UREGS_rcx(%rdi), %rcx
         movq  UREGS_rdi(%rdi), %rdi
         ret
 
-ENTRY(guest_to_host_gpr_switch)
+/* Save guest GPRs.  Parameter on the stack above the return address. */
+ENTRY(save_guest_gprs)
         pushq %rdi
-        movq  7*8(%rsp), %rdi
+        movq  2*8(%rsp), %rdi
         movq  %rax, UREGS_rax(%rdi)
-        popq  UREGS_rdi(%rdi)
+        popq        UREGS_rdi(%rdi)
         movq  %r15, UREGS_r15(%rdi)
         movq  %r11, UREGS_r11(%rdi)
-        popq  %r15
         movq  %r14, UREGS_r14(%rdi)
         movq  %r10, UREGS_r10(%rdi)
-        popq  %r14
         movq  %r13, UREGS_r13(%rdi)
-        movq  %r9, UREGS_r9(%rdi)
-        popq  %r13
+        movq  %r9,  UREGS_r9 (%rdi)
         movq  %r12, UREGS_r12(%rdi)
-        movq  %r8, UREGS_r8(%rdi)
-        popq  %r12
+        movq  %r8,  UREGS_r8 (%rdi)
         movq  %rbp, UREGS_rbp(%rdi)
         movq  %rsi, UREGS_rsi(%rdi)
-        popq  %rbp
         movq  %rbx, UREGS_rbx(%rdi)
         movq  %rdx, UREGS_rdx(%rdi)
-        popq  %rbx
         movq  %rcx, UREGS_rcx(%rdi)
-        popq  %rcx
         ret
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 2354357cc1..3923950df7 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -117,9 +117,6 @@ idt_entry_t *idt_tables[NR_CPUS] __read_mostly;
  */
 DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_page, tss_page);
 
-bool (*ioemul_handle_quirk)(
-    u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
-
 static int debug_stack_lines = 20;
 integer_param("debug_stack_lines", debug_stack_lines);
 
diff --git a/xen/include/asm-x86/io.h b/xen/include/asm-x86/io.h
index 8708b79b99..c4ec52cba7 100644
--- a/xen/include/asm-x86/io.h
+++ b/xen/include/asm-x86/io.h
@@ -49,8 +49,7 @@ __OUT(w,"w",short)
 __OUT(l,,int)
 
 /* Function pointer used to handle platform specific I/O port emulation. */
-#define IOEMUL_QUIRK_STUB_BYTES 10
-extern bool (*ioemul_handle_quirk)(
+extern unsigned int (*ioemul_handle_quirk)(
     u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
 
 #endif
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 01 23:09:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 01 May 2020 23:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUemR-0001sz-3t; Fri, 01 May 2020 23:09:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YjF2=6P=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUemQ-0001st-8c
 for xen-devel@lists.xenproject.org; Fri, 01 May 2020 23:09:14 +0000
X-Inumbo-ID: c168f2ba-8c00-11ea-9b73-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c168f2ba-8c00-11ea-9b73-12813bfff9fa;
 Fri, 01 May 2020 23:09:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ze8Dro+XTYUVQvkLNpGC9tk/GSaSGEOGKT5WkNIFZ9U=; b=XRLPb44g4FVnb/toetfBJOhmo
 SlK/Mg4sJ9tgDboQaf7YVNJUugXJjESFdFpKh6o34oNKrMLHg5QwECjLs6pJh7H5raNvWr5CKvxis
 X4biws/d6bb0+Ex9aSSs2GvzPALXXjUSIcFmend3DKzaQOJl1Fj4P1USP5g7Ehf4P1MhE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUemH-00052L-VE; Fri, 01 May 2020 23:09:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUemH-0005MD-Lk; Fri, 01 May 2020 23:09:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUemH-0005AW-L9; Fri, 01 May 2020 23:09:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149898-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 149898: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=1c47613588ccff44422d4bdeea0dc36a0a308ec7
X-Osstest-Versions-That: qemuu=27c94566379069fb8930bb1433dcffbf7df3203d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 01 May 2020 23:09:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149898 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149898/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 149894

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds   16 guest-start/debian.repeat fail blocked in 149894
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149894
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149894
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149894
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149894
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149894
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                1c47613588ccff44422d4bdeea0dc36a0a308ec7
baseline version:
 qemuu                27c94566379069fb8930bb1433dcffbf7df3203d

Last test of basis   149894  2020-05-01 01:38:49 Z    0 days
Testing same since   149898  2020-05-01 13:28:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alberto Garcia <berto@igalia.com>
  Andrzej Jakowski <andrzej.jakowski@linux.intel.com>
  Kevin Wolf <kwolf@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   27c9456637..1c47613588  1c47613588ccff44422d4bdeea0dc36a0a308ec7 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat May 02 00:05:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 00:05:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUfea-0007OC-It; Sat, 02 May 2020 00:05:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8oeO=6Q=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jUfeZ-0007O6-11
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 00:05:11 +0000
X-Inumbo-ID: 969578ee-8c08-11ea-9b76-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 969578ee-8c08-11ea-9b76-12813bfff9fa;
 Sat, 02 May 2020 00:05:10 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 839702072E;
 Sat,  2 May 2020 00:05:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588377909;
 bh=NaydumJziN7Xj8P2fkWebvMTk1qUbAlKPLPT5xLIzg0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=jXjsHuHHpYKb9DuPKVJNyE7QQQ3txagLDJsVQgq2TVSq89lI02OOxAW/rOVpIhPph
 kjtDJe+UM+mZLNcQuULWAvtRCZ14CCD6SN7kDKOm0gQ+r6Uoa4P7DpaJkZLbsx69oK
 qcg5UkCm2ja/dPgaQy9DcY+Uxp+v8YrMH4Wwe5/U=
Date: Fri, 1 May 2020 17:05:09 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Roman Shaposhnik <roman@zededa.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2005011637380.28941@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roman,


In regards to the attached stack trace, nothing rings a bell,
unfortunately. I don't know why quirk_usb_early_handoff causes a crash.
It would be useful to add a few printk calls in quirk_usb_early_handoff
to pinpoint exactly where the crash is happening.
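
To illustrate, something as simple as breadcrumb printks would do (a
sketch, not a patch: the real quirk_usb_early_handoff in
drivers/usb/host/pci-quirks.c dispatches to per-controller helpers, and
the exact branches differ by kernel version):

```c
/* Sketch only: breadcrumb printks to narrow down where
 * quirk_usb_early_handoff() crashes. The branch structure is
 * paraphrased from drivers/usb/host/pci-quirks.c and may not
 * match your kernel exactly. */
static void quirk_usb_early_handoff(struct pci_dev *pdev)
{
	printk(KERN_ERR "%s: enter, dev %s class %08x\n",
	       __func__, pci_name(pdev), pdev->class);

	if (pdev->class == PCI_CLASS_SERIAL_USB_XHCI) {
		printk(KERN_ERR "%s: before xHCI handoff\n", __func__);
		quirk_usb_handoff_xhci(pdev);
		printk(KERN_ERR "%s: after xHCI handoff\n", __func__);
	}
	/* ...similar printks around the UHCI/OHCI/EHCI branches... */
}
```

The last message printed before the stack trace tells you which helper
to instrument next.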


In regards to Dornerworks' third patch, it doesn't look like it is
related to the quirk_usb_early_handoff crash. The third patch is
probably not useful anymore because dev->archdata.dev_dma_ops is gone
completely. However, just in case, something like the following would
help determine whether the original bug somehow still persists in newer
kernels:


diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
index 6c45350e33aa..61af12d79add 100644
--- a/arch/arm64/mm/dma-mapping.c
+++ b/arch/arm64/mm/dma-mapping.c
@@ -53,7 +53,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
 		iommu_setup_dma_ops(dev, dma_base, size);
 
 #ifdef CONFIG_XEN
-	if (xen_initial_domain())
+	if (xen_initial_domain()) {
+		WARN_ON(dev->dma_ops != NULL);
 		dev->dma_ops = &xen_swiotlb_dma_ops;
+	}
 #endif
 }


On Thu, 30 Apr 2020, Roman Shaposhnik wrote:
> Hi!
> 
> I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
> upstream kernel. The kernel itself works perfectly well
> on the board. When I try booting it as Dom0 under Xen,
> it goes into a stacktrace (attached).
> 
> Looking at what nice folks over at Dornerworks have previously
> done to make RPi kernels boot as Dom0 I've come across these
> 3 patches:
>     https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
> 
> The first patch seems irrelevant (unless I'm missing something
> really basic here). The 2nd patch applied with no issue (but
> I don't think it is related) but the 3d patch failed to apply on
> account of 5.6.1 kernel no longer having:
>     dev->archdata.dev_dma_ops
> E.g.
>     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
> 
> I've tried to emulate the effect of the patch by simply introducing
> a static variable that would signal that we already initialized
> dev->dma_ops -- but that didn't help at all.
> 
> I'm CCing Jeff Kubascik and Corey Minyard to see if the original authors
> may have any suggestions on how this may be dealt with.
> 
> Any advice would be greatly appreciated!
> 
> Thanks,
> Roman.
> 


From xen-devel-bounces@lists.xenproject.org Sat May 02 01:06:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 01:06:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUgbr-0003OQ-BK; Sat, 02 May 2020 01:06:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kx6T=6Q=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jUgbp-0003OL-PQ
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 01:06:25 +0000
X-Inumbo-ID: 238958b2-8c11-11ea-9887-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 238958b2-8c11-11ea-9887-bc764e2007e4;
 Sat, 02 May 2020 01:06:22 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id k12so9394186qtm.4
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 18:06:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=GnMKAb4e/g77YdknZeWWc56msVksBB50H2RuYvTTPsU=;
 b=Gf2PrLFVGIHCazNYsevSafKdNHezmApHbAr0EKscvWkpxymtrcsIEJMa/KrpGZ+v0h
 L9CYK/QM2DOSa1+WyXZp9OiJfYWteiRiayzssdu3k7kijtFi2IFPA5r25QWmrpw7Cs6S
 hk7Xo87IXvQ/fj8MXPY60ZQ/b7S4GXkQ8ICeoLXjJWVdkWVg/XXXF5Gu0jGnK0Vr9jgO
 LhAQLW+eezZgN37wuMZMdYXwCHlG8cKSwfrgcUMiajoavUKXicg7EoYbVcLcoN5RHpZg
 JZKuG9HbwqDzAKN/oYkbGptTzLhZkBjxwaIUWFnUH5VH6/Pwc+AW/EEXcSwIzudhRd6M
 DWEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=GnMKAb4e/g77YdknZeWWc56msVksBB50H2RuYvTTPsU=;
 b=QzOxORACwsbDyqV39h4GhfCqJ1M/ByMVy+sYh69pV2ui0tvWItZb802CpcCCRkWIbm
 tWONGRYzjKiW3GPLFFa8y9OSVg0zb2d3s8aoeXlmdX6fmzuOdHAzruAHY17LAMc2wmsr
 CACdzmG5yICr73TWfLK89CvEUwjOPb139g0SblIQ/sMohNrQMcmx4xK/z4LAIwfVZmcv
 K9OkXgf6qQwNz6keeZqpVAIGkFPhMxqsxrNyTV2pMP15PIGAA9NPoN2bXdu6J682kIxD
 QDxPBQx5vjL/oK0DF4t/irj1k6m1MBNscjLa/q8r0UCwHDeqT1TTW8INRYgSYchfcauj
 MxRg==
X-Gm-Message-State: AGi0Pubh/A68OmbWptjGTAbYjG+2vmfBsjbuItKaS4yUMmoLujmjfoSO
 DBVkvE0IZGdR3RazV0eEhufVH2XFDa5XOcuAwZBtJQ==
X-Google-Smtp-Source: APiQypI71hEnEsIHv9lHr1tX02kxZgHAc5WEVPGSyWWIWsh4DZK69L8FVDmVGRzO3FE07GbRge4fupb82CDlsZsTH9E=
X-Received: by 2002:aed:253c:: with SMTP id v57mr6440685qtc.63.1588381582141; 
 Fri, 01 May 2020 18:06:22 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
In-Reply-To: <20200501114201.GE9902@minyard.net>
From: Roman Shaposhnik <roman@zededa.com>
Date: Fri, 1 May 2020 18:06:11 -0700
Message-ID: <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: minyard@acm.org
Content-Type: multipart/mixed; boundary="000000000000f8eaad05a49fe716"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000f8eaad05a49fe716
Content-Type: text/plain; charset="UTF-8"

On Fri, May 1, 2020 at 4:42 AM Corey Minyard <minyard@acm.org> wrote:
>
> On Thu, Apr 30, 2020 at 07:20:05PM -0700, Roman Shaposhnik wrote:
> > Hi!
> >
> > I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
> > upstream kernel. The kernel itself works perfectly well
> > on the board. When I try booting it as Dom0 under Xen,
> > it goes into a stacktrace (attached).
>
> Getting Xen working on the Pi4 requires a lot of moving parts, and they
> all have to be right.

Tell me about it! It is a pretty frustrating journey to align
everything just right.
On the other hand, it seems worthwhile to enable the RPi as an ARM
development platform for Xen, given how ubiquitous it is.

Hence my attempt to combine a pristine upstream kernel (5.6.1) with
pristine upstream Xen to enable a 100% upstream developer workflow on
the RPi.

> > Looking at what nice folks over at Dornerworks have previously
> > done to make RPi kernels boot as Dom0 I've come across these
> > 3 patches:
> >     https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
> >
> > The first patch seems irrelevant (unless I'm missing something
> > really basic here).
>
> It might be irrelevant for your configuration, assuming that Xen gets
> the right information from EFI.  I haven't tried EFI booting.

I'm doing a bit of a belt-and-suspenders strategy really -- I'm
using the u-boot UEFI implementation, which pre-populates the device
tree, and then I'm also forcing an extra copy of it to be loaded
explicitly via the GRUB devicetree command (GRUB runs as a UEFI payload).
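
For the record, the forcing looks roughly like this in grub.cfg (the
DTB path here is illustrative, not my exact config):

```
# Illustrative grub.cfg fragment -- DTB path is hypothetical.
# The devicetree command replaces the firmware-provided FDT with
# the one loaded from disk.
devicetree /efi/boot/bcm2711-rpi-4-b.dtb
```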

I also have access to the semi-official TianoCore RPi4 port, which seems
to work pretty well for booting all sorts of UEFI payloads on the RPi4:
https://github.com/pftf/RPi4/releases/tag/v1.5

> > The 2nd patch applied with no issue (but
> > I don't think it is related) but the 3d patch failed to apply on
> > account of 5.6.1 kernel no longer having:
> >     dev->archdata.dev_dma_ops
> > E.g.
> >     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
> >
> > I've tried to emulate the effect of the patch by simply introducing
> > a static variable that would signal that we already initialized
> > dev->dma_ops -- but that didn't help at all.
> >
> > I'm CCing Jeff Kubascik and Corey Minyard to see if the original authors
> > may have any suggestions on how this may be dealt with.
> >
> > Any advice would be greatly appreciated!
>
> What's your Pi4 config.txt file look like? The GIC is not being handled
> correctly, and I'm guessing that configuration is wrong.  You can't use
> the stock config.txt file with Xen, you have to modify the configuration a
> bit.

Understood. I'm actually using a custom one:
    https://github.com/lf-edge/eve/blob/master/pkg/u-boot/rpi/config.txt

I could swear that I had a GIC setting in it -- but apparently not -- so
I added the following at the top of the file you can see at the URL above:

total_mem=4096
enable_gic=1

> I think just adding:
>
> enable_gic=1
> total_mem=1024

Right -- but my board has 4GB of memory -- so I think what I did above should work.

> might get it working, or at least solve one problem.  It's required either
> way.  That might get rid of the GIC errors at the beginning.  I see a
> few of those, but not that many.
>
> My kernel command line is:
>
> coherent_pool=1M 8250.nr_uarts=1 cma=64M cma=256M  smsc95xx.macaddr=DC:A6:32:4F:3A:CD vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  console=hvc0 clk_ignore_unused root=/dev/mmcblk0p2 rootwait
>
> A lot of that configuration gets pulled from the initialization done by
> the GPU at startup which it put into the device tree.  I'm not sure what a
> lot of it means.  Some of it is added by Xen, too.

Just to be on the safe side, I'm now using the Device Tree from the Dornerworks build.
The file is attached.

> I can verify the DMA patch is important.  I'm not sure how to apply it
> to a 5.6 kernel, though.

OK, after chatting with Stefano it definitely seems that there's some kind
of an issue with DMA. It just seems different from what the original issue
with the 4.19.x kernel used to be.

Here's the clue: when I disable Xen DMA operations altogether with the
following patch, I get a Linux kernel that boots all the way to trying to
mount a root filesystem (which sadly can't be found, since eMMC requires
DMA as far as I understand). I hope this will provide enough of a clue to
figure out what may be wrong with the 5.x kernel. I'm attaching the boot
log with the following patch applied:

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index d40e9e5fc52b..c25ead822de0 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -137,8 +137,7 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 static int __init xen_mm_init(void)
 {
        struct gnttab_cache_flush cflush;
-       if (!xen_initial_domain())
-               return 0;
+       return 0;
        xen_swiotlb_init(1, false);

        cflush.op = 0;

Thanks,
Roman.

--000000000000f8eaad05a49fe716
Content-Type: text/plain; charset="US-ASCII"; name="log.txt"
Content-Disposition: attachment; filename="log.txt"
Content-Transfer-Encoding: base64
Content-ID: <f_k9oxdao91>
X-Attachment-Id: f_k9oxdao91

VXNpbmcgbW9kdWxlcyBwcm92aWRlZCBieSBib290bG9hZGVyIGluIEZEVApYZW4gNC4xNC11bnN0
YWJsZSAoYy9zIE1vbiBBcHIgMjAgMTQ6MzY6NTMgMjAyMCArMDEwMCBnaXQ6ODA2NWUxYjQxNi1k
aXJ0eSkgRUZJIGxvYWRlcgpXYXJuaW5nOiBDb3VsZCBub3QgcXVlcnkgdmFyaWFibGUgc3RvcmU6
IDB4ODAwMDAwMDAwMDAwMDAwMwotIFVBUlQgZW5hYmxlZCAtCi0gQm9vdCBDUFUgYm9vdGluZyAt
Ci0gQ3VycmVudCBFTCAwMDAwMDAwOCAtCi0gSW5pdGlhbGl6ZSBDUFUgLQotIFR1cm5pbmcgb24g
cGFnaW5nIC0KLSBSZWFkeSAtCihYRU4pIENoZWNraW5nIGZvciBpbml0cmQgaW4gL2Nob3Nlbgoo
WEVOKSBSQU06IDAwMDAwMDAwMDAwMDEwMDAgLSAwMDAwMDAwMDA3ZWYxZmZmCihYRU4pIFJBTTog
MDAwMDAwMDAwN2VmMjAwMCAtIDAwMDAwMDAwMDdmMGRmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDA3
ZjBlMDAwIC0gMDAwMDAwMDAyYmM0NWZmZgooWEVOKSBSQU06IDAwMDAwMDAwMmJjNDYwMDAgLSAw
MDAwMDAwMDJiYzUwZmZmCihYRU4pIFJBTTogMDAwMDAwMDAyYmM1MTAwMCAtIDAwMDAwMDAwMmJk
OTNmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDJiZDk0MDAwIC0gMDAwMDAwMDAyZDc4MGZmZgooWEVO
KSBSQU06IDAwMDAwMDAwMmQ3ODEwMDAgLSAwMDAwMDAwMDNjOWY2ZmZmCihYRU4pIFJBTTogMDAw
MDAwMDAzYzlmNzAwMCAtIDAwMDAwMDAwM2M5ZjhmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDNjOWZi
MDAwIC0gMDAwMDAwMDAzYzlmZGZmZgooWEVOKSBSQU06IDAwMDAwMDAwM2M5ZmUwMDAgLSAwMDAw
MDAwMDNjYjA4ZmZmCihYRU4pIFJBTTogMDAwMDAwMDAzY2IxMDAwMCAtIDAwMDAwMDAwM2NiMTBm
ZmYKKFhFTikgUkFNOiAwMDAwMDAwMDNjYjEyMDAwIC0gMDAwMDAwMDAzY2IxM2ZmZgooWEVOKSBS
QU06IDAwMDAwMDAwM2NiMWIwMDAgLSAwMDAwMDAwMDNjYjFjZmZmCihYRU4pIFJBTTogMDAwMDAw
MDAzY2IxZTAwMCAtIDAwMDAwMDAwM2RmM2ZmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDNkZjUwMDAw
IC0gMDAwMDAwMDAzZGZmZmZmZgooWEVOKSBSQU06IDAwMDAwMDAwNDAwMDAwMDAgLSAwMDAwMDAw
MGZiZmZmZmZmCihYRU4pCihYRU4pIE1PRFVMRVswXTogMDAwMDAwMDAyYmM1MTAwMCAtIDAwMDAw
MDAwMmJkOTMwYzggWGVuCihYRU4pIE1PRFVMRVsxXTogMDAwMDAwMDAyYmM0NzAwMCAtIDAwMDAw
MDAwMmJjNTEwMDAgRGV2aWNlIFRyZWUKKFhFTikgTU9EVUxFWzJdOiAwMDAwMDAwMDJiZDlkMDAw
IC0gMDAwMDAwMDAyZDY3YzIwMCBLZXJuZWwKKFhFTikKKFhFTikgQ01ETElORVswMDAwMDAwMDJi
ZDlkMDAwXTpjaG9zZW4gY29uc29sZT1odmMwIGVhcmx5cHJpbnRrPXhlbiBub21vZGVzZXQKKFhF
TikKKFhFTikgQ29tbWFuZCBsaW5lOiBkb20wX21lbT0xMDI0TSxtYXg6MTAyNE0gZG9tMF9tYXhf
dmNwdXM9MQooWEVOKSBwYXJhbWV0ZXIgImRvbTBfbWVtIiBoYXMgaW52YWxpZCB2YWx1ZSAiMTAy
NE0sbWF4OjEwMjRNIiwgcmM9LTIyIQooWEVOKSBEb21haW4gaGVhcCBpbml0aWFsaXNlZAooWEVO
KSBCb290aW5nIHVzaW5nIERldmljZSBUcmVlCihYRU4pIFBsYXRmb3JtOiBSYXNwYmVycnkgUGkg
NAooWEVOKSBObyBkdHVhcnQgcGF0aCBjb25maWd1cmVkCihYRU4pIEJhZCBjb25zb2xlPSBvcHRp
b24gJ2R0dWFydCcKIFhlbiA0LjE0LXVuc3RhYmxlCihYRU4pIFhlbiB2ZXJzaW9uIDQuMTQtdW5z
dGFibGUgKEApIChhYXJjaDY0LWxpbnV4LWdudS1nY2MgKFVidW50dSA5LjMuMC0xMHVidW50dTIp
IDkuMy4wKSBkZWJ1Zz15ICBUaHUgQXByIDMwIDE3OjQzOjMzIFBEVCAyMDIwCihYRU4pIExhdGVz
dCBDaGFuZ2VTZXQ6IE1vbiBBcHIgMjAgMTQ6MzY6NTMgMjAyMCArMDEwMCBnaXQ6ODA2NWUxYjQx
Ni1kaXJ0eQooWEVOKSBidWlsZC1pZDogOTg4YmE4ZjUyNTdjNDk0NjQ1ZGY2ZDVjNGMyZDVmZDlm
MzllMjUzOQooWEVOKSBQcm9jZXNzb3I6IDQxMGZkMDgzOiAiQVJNIExpbWl0ZWQiLCB2YXJpYW50
OiAweDAsIHBhcnQgMHhkMDgsIHJldiAweDMKKFhFTikgNjQtYml0IEV4ZWN1dGlvbjoKKFhFTikg
ICBQcm9jZXNzb3IgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDIyMjIgMDAwMDAwMDAwMDAwMDAwMAoo
WEVOKSAgICAgRXhjZXB0aW9uIExldmVsczogRUwzOjY0KzMyIEVMMjo2NCszMiBFTDE6NjQrMzIg
RUwwOjY0KzMyCihYRU4pICAgICBFeHRlbnNpb25zOiBGbG9hdGluZ1BvaW50IEFkdmFuY2VkU0lN
RAooWEVOKSAgIERlYnVnIEZlYXR1cmVzOiAwMDAwMDAwMDEwMzA1MTA2IDAwMDAwMDAwMDAwMDAw
MDAKKFhFTikgICBBdXhpbGlhcnkgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAw
MDAwMDAwMAooWEVOKSAgIE1lbW9yeSBNb2RlbCBGZWF0dXJlczogMDAwMDAwMDAwMDAwMTEyNCAw
MDAwMDAwMDAwMDAwMDAwCihYRU4pICAgSVNBIEZlYXR1cmVzOiAgMDAwMDAwMDAwMDAxMDAwMCAw
MDAwMDAwMDAwMDAwMDAwCihYRU4pIDMyLWJpdCBFeGVjdXRpb246CihYRU4pICAgUHJvY2Vzc29y
IEZlYXR1cmVzOiAwMDAwMDEzMTowMDAxMTAxMQooWEVOKSAgICAgSW5zdHJ1Y3Rpb24gU2V0czog
QUFyY2gzMiBBMzIgVGh1bWIgVGh1bWItMiBKYXplbGxlCihYRU4pICAgICBFeHRlbnNpb25zOiBH
ZW5lcmljVGltZXIgU2VjdXJpdHkKKFhFTikgICBEZWJ1ZyBGZWF0dXJlczogMDMwMTAwNjYKKFhF
TikgICBBdXhpbGlhcnkgRmVhdHVyZXM6IDAwMDAwMDAwCihYRU4pICAgTWVtb3J5IE1vZGVsIEZl
YXR1cmVzOiAxMDIwMTEwNSA0MDAwMDAwMCAwMTI2MDAwMCAwMjEwMjIxMQooWEVOKSAgSVNBIEZl
YXR1cmVzOiAwMjEwMTExMCAxMzExMjExMSAyMTIzMjA0MiAwMTExMjEzMSAwMDAxMTE0MiAwMDAx
MDAwMQooWEVOKSBTTVA6IEFsbG93aW5nIDQgQ1BVcwooWEVOKSBlbmFibGVkIHdvcmthcm91bmQg
Zm9yOiBBUk0gZXJyYXR1bSAxMzE5NTM3CihYRU4pIEdlbmVyaWMgVGltZXIgSVJROiBwaHlzPTMw
IGh5cD0yNiB2aXJ0PTI3IEZyZXE6IDU0MDAwIEtIegooWEVOKSBHSUN2MiBpbml0aWFsaXphdGlv
bjoKKFhFTikgICAgICAgICBnaWNfZGlzdF9hZGRyPTAwMDAwMDAwZmY4NDEwMDAKKFhFTikgICAg
ICAgICBnaWNfY3B1X2FkZHI9MDAwMDAwMDBmZjg0MjAwMAooWEVOKSAgICAgICAgIGdpY19oeXBf
YWRkcj0wMDAwMDAwMGZmODQ0MDAwCihYRU4pICAgICAgICAgZ2ljX3ZjcHVfYWRkcj0wMDAwMDAw
MGZmODQ2MDAwCihYRU4pICAgICAgICAgZ2ljX21haW50ZW5hbmNlX2lycT0yNQooWEVOKSBHSUN2
MjogMjU2IGxpbmVzLCA0IGNwdXMsIHNlY3VyZSAoSUlEIDAyMDAxNDNiKS4KKFhFTikgWFNNIEZy
YW1ld29yayB2MS4wLjAgaW5pdGlhbGl6ZWQKKFhFTikgSW5pdGlhbGlzaW5nIFhTTSBTSUxPIG1v
ZGUKKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciByZXYyIChjcmVk
aXQyKQooWEVOKSBJbml0aWFsaXppbmcgQ3JlZGl0MiBzY2hlZHVsZXIKKFhFTikgIGxvYWRfcHJl
Y2lzaW9uX3NoaWZ0OiAxOAooWEVOKSAgbG9hZF93aW5kb3dfc2hpZnQ6IDMwCihYRU4pICB1bmRl
cmxvYWRfYmFsYW5jZV90b2xlcmFuY2U6IDAKKFhFTikgIG92ZXJsb2FkX2JhbGFuY2VfdG9sZXJh
bmNlOiAtMwooWEVOKSAgcnVucXVldWVzIGFycmFuZ2VtZW50OiBzb2NrZXQKKFhFTikgIGNhcCBl
bmZvcmNlbWVudCBncmFudWxhcml0eTogMTBtcwooWEVOKSBsb2FkIHRyYWNraW5nIHdpbmRvdyBs
ZW5ndGggMTA3Mzc0MTgyNCBucwooWEVOKSBBbGxvY2F0ZWQgY29uc29sZSByaW5nIG9mIDMyIEtp
Qi4KKFhFTikgQ1BVMDogR3Vlc3QgYXRvbWljcyB3aWxsIHRyeSA1IHRpbWVzIGJlZm9yZSBwYXVz
aW5nIHRoZSBkb21haW4KKFhFTikgQnJpbmdpbmcgdXAgQ1BVMQotIENQVSAwMDAwMDAwMSBib290
aW5nIC0KLSBDdXJyZW50IEVMIDAwMDAwMDA4IC0KLSBJbml0aWFsaXplIENQVSAtCi0gVHVybmlu
ZyBvbiBwYWdpbmcgLQotIFJlYWR5IC0KKFhFTikgQ1BVMTogR3Vlc3QgYXRvbWljcyB3aWxsIHRy
eSA1IHRpbWVzIGJlZm9yZSBwYXVzaW5nIHRoZSBkb21haW4KKFhFTikgQ1BVIDEgYm9vdGVkLgoo
WEVOKSBCcmluZ2luZyB1cCBDUFUyCi0gQ1BVIDAwMDAwMDAyIGJvb3RpbmcgLQotIEN1cnJlbnQg
RUwgMDAwMDAwMDggLQotIEluaXRpYWxpemUgQ1BVIC0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtCi0g
UmVhZHkgLQooWEVOKSBDUFUyOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDUgdGltZXMgYmVmb3Jl
IHBhdXNpbmcgdGhlIGRvbWFpbgooWEVOKSBDUFUgMiBib290ZWQuCihYRU4pIEJyaW5naW5nIHVw
IENQVTMKLSBDUFUgMDAwMDAwMDMgYm9vdGluZyAtCi0gQ3VycmVudCBFTCAwMDAwMDAwOCAtCi0g
SW5pdGlhbGl6ZSBDUFUgLQotIFR1cm5pbmcgb24gcGFnaW5nIC0KLSBSZWFkeSAtCihYRU4pIENQ
VTM6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkgNSB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9t
YWluCihYRU4pIENQVSAzIGJvb3RlZC4KKFhFTikgQnJvdWdodCB1cCA0IENQVXMKKFhFTikgSS9P
IHZpcnR1YWxpc2F0aW9uIGRpc2FibGVkCihYRU4pIFAyTTogNDQtYml0IElQQSB3aXRoIDQ0LWJp
dCBQQSBhbmQgOC1iaXQgVk1JRAooWEVOKSBQMk06IDQgbGV2ZWxzIHdpdGggb3JkZXItMCByb290
LCBWVENSIDB4ODAwNDM1OTQKKFhFTikgQWRkaW5nIGNwdSAwIHRvIHJ1bnF1ZXVlIDAKKFhFTikg
IEZpcnN0IGNwdSBvbiBydW5xdWV1ZSwgYWN0aXZhdGluZwooWEVOKSBBZGRpbmcgY3B1IDEgdG8g
cnVucXVldWUgMAooWEVOKSBBZGRpbmcgY3B1IDIgdG8gcnVucXVldWUgMAooWEVOKSBBZGRpbmcg
Y3B1IDMgdG8gcnVucXVldWUgMAooWEVOKSBhbHRlcm5hdGl2ZXM6IFBhdGNoaW5nIHdpdGggYWx0
IHRhYmxlIDAwMDAwMDAwMDAyZDQyMDggLT4gMDAwMDAwMDAwMDJkNDkxYwooWEVOKSAqKiogTE9B
RElORyBET01BSU4gMCAqKioKKFhFTikgTG9hZGluZyBkMCBrZXJuZWwgZnJvbSBib290IG1vZHVs
ZSBAIDAwMDAwMDAwMmJkOWQwMDAKKFhFTikgQWxsb2NhdGluZyAxOjEgbWFwcGluZ3MgdG90YWxs
aW5nIDEwMjRNQiBmb3IgZG9tMDoKKFhFTikgQkFOS1swXSAweDAwMDAwMDQwMDAwMDAwLTB4MDAw
MDAwODAwMDAwMDAgKDEwMjRNQikKKFhFTikgR3JhbnQgdGFibGUgcmFuZ2U6IDB4MDAwMDAwMmJj
NTEwMDAtMHgwMDAwMDAyYmM5MTAwMAooWEVOKSBBbGxvY2F0aW5nIFBQSSAxNiBmb3IgZXZlbnQg
Y2hhbm5lbCBpbnRlcnJ1cHQKKFhFTikgTG9hZGluZyB6SW1hZ2UgZnJvbSAwMDAwMDAwMDJiZDlk
MDAwIHRvIDAwMDAwMDAwNDAwODAwMDAtMDAwMDAwMDA0MTk1ZjIwMAooWEVOKSBMb2FkaW5nIGQw
IERUQiB0byAweDAwMDAwMDAwNDgwMDAwMDAtMHgwMDAwMDAwMDQ4MDA2ZTBhCihYRU4pIEluaXRp
YWwgbG93IG1lbW9yeSB2aXJxIHRocmVzaG9sZCBzZXQgYXQgMHg0MDAwIHBhZ2VzLgooWEVOKSBT
Y3J1YmJpbmcgRnJlZSBSQU0gaW4gYmFja2dyb3VuZAooWEVOKSBTdGQuIExvZ2xldmVsOiBBbGwK
KFhFTikgR3Vlc3QgTG9nbGV2ZWw6IEFsbAooWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioKKFhFTikgTm8gc3VwcG9ydCBmb3IgQVJNX1NNQ0ND
X0FSQ0hfV09SS0FST1VORF8xLgooWEVOKSBQbGVhc2UgdXBkYXRlIHlvdXIgZmlybXdhcmUuCihY
RU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKgoo
WEVOKSBObyBzdXBwb3J0IGZvciBBUk1fU01DQ0NfQVJDSF9XT1JLQVJPVU5EXzEuCihYRU4pIFBs
ZWFzZSB1cGRhdGUgeW91ciBmaXJtd2FyZS4KKFhFTikgKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqCihYRU4pIE5vIHN1cHBvcnQgZm9yIEFSTV9TTUND
Q19BUkNIX1dPUktBUk9VTkRfMS4KKFhFTikgUGxlYXNlIHVwZGF0ZSB5b3VyIGZpcm13YXJlLgoo
WEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioK
KFhFTikgMy4uLiAyLi4uIDEuLi4KKFhFTikgKioqIFNlcmlhbCBpbnB1dCB0byBET00wICh0eXBl
ICdDVFJMLWEnIHRocmVlIHRpbWVzIHRvIHN3aXRjaCBpbnB1dCkKKFhFTikgRnJlZWQgMzQ4a0Ig
aW5pdCBtZW1vcnkuCihYRU4pIGQwdjA6IHZHSUNEOiB1bmhhbmRsZWQgd29yZCB3cml0ZSAweDAw
MDAwMGZmZmZmZmZmIHRvIElDQUNUSVZFUjQKKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3
b3JkIHdyaXRlIDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSOAooWEVOKSBkMHYwOiB2R0lD
RDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIxMgoo
WEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0
byBJQ0FDVElWRVIxNgooWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgw
MDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIyMAooWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVk
IHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIyNAooWEVOKSBkMHYwOiB2
R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIy
OAooWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZm
ZiB0byBJQ0FDVElWRVIwClsgICAgMC4wMDAwMDBdIEJvb3RpbmcgTGludXggb24gcGh5c2ljYWwg
Q1BVIDB4MDAwMDAwMDAwMCBbMHg0MTBmZDA4M10KWyAgICAwLjAwMDAwMF0gTGludXggdmVyc2lv
biA1LjYuMS1kZWZhdWx0IChyb290QDEzMDI0ZWIxZDJjZikgKGdjYyB2ZXJzaW9uIDguMy4wIChB
bHBpbmUgOC4zLjApKSAjNiBTTVAgU2F0IE1heSAyIDAwOjEwOjM2IFVUQyAyMDIwClsgICAgMC4w
MDAwMDBdIE1hY2hpbmUgbW9kZWw6IFJhc3BiZXJyeSBQaSA0IE1vZGVsIEIKWyAgICAwLjAwMDAw
MF0gWGVuIDQuMTQgc3VwcG9ydCBmb3VuZApbICAgIDAuMDAwMDAwXSBlZmk6IEdldHRpbmcgRUZJ
IHBhcmFtZXRlcnMgZnJvbSBGRFQ6ClsgICAgMC4wMDAwMDBdIGVmaTogVUVGSSBub3QgZm91bmQu
ClsgICAgMC4wMDAwMDBdIE5VTUE6IE5vIE5VTUEgY29uZmlndXJhdGlvbiBmb3VuZApbICAgIDAu
--000000000000f8eaad05a49fe716
Content-Type: application/octet-stream; name="r.bin"
Content-Disposition: attachment; filename="r.bin"
Content-Transfer-Encoding: base64
Content-ID: <f_k9ovllwu0>
X-Attachment-Id: f_k9ovllwu0

bSxiY20yODM4LXJuZzIwMAAAAAADAAAACAAAAW9+EEAAAAAAEAAAAAMAAAAMAAABcwAAAAAAAAB9
AAAABAAAAAMAAAAEAAABtgAAACoAAAACAAAAAW1haWxib3hAN2UwMGI4ODAAAAAAAAAAAwAAABIA
AAAAYnJjbSxiY20yODM1LW1ib3gAAAAAAAADAAAACAAAAW9+ALiAAAAAQAAAAAMAAAAMAAABcwAA
AAAAAAAhAAAABAAAAAMAAAAEAAACIAAAAAAAAAADAAAABAAAAbYAAAAaAAAAAgAAAAFncGlvQDdl
MjAwMDAwAAAAAAAAAwAAACQAAAAAYnJjbSxiY20yNzExLWdwaW8AYnJjbSxiY20yODM1LWdwaW8A
AAAAAwAAAAgAAAFvfiAAAAAAALQAAAADAAAAGAAAAXMAAAAAAAAAcQAAAAQAAAAAAAAAcgAAAAQA
AAADAAAAAAAAAiwAAAADAAAABAAAAjwAAAACAAAAAwAAAAAAAAJIAAAAAwAAAAQAAAJdAAAAAgAA
AAMAAAAIAAACbmRlZmF1bHQAAAAAAwAAAAQAAAG2AAAADwAAAAFkcGlfZ3BpbzAAAAAAAAADAAAA
cAAAAnwAAAAAAAAAAQAAAAIAAAADAAAABAAAAAUAAAAGAAAABwAAAAgAAAAJAAAACgAAAAsAAAAM
AAAADQAAAA4AAAAPAAAAEAAAABEAAAASAAAAEwAAABQAAAAVAAAAFgAAABcAAAAYAAAAGQAAABoA
AAAbAAAAAwAAAAQAAAKGAAAABgAAAAIAAAABZW1tY19ncGlvMjIAAAAAAwAAABgAAAJ8AAAAFgAA
ABcAAAAYAAAAGQAAABoAAAAbAAAAAwAAAAQAAAKGAAAABwAAAAIAAAABZW1tY19ncGlvMzQAAAAA
AwAAABgAAAJ8AAAAIgAAACMAAAAkAAAAJQAAACYAAAAnAAAAAwAAAAQAAAKGAAAABwAAAAMAAAAY
AAAClAAAAAAAAAACAAAAAgAAAAIAAAACAAAAAgAAAAIAAAABZW1tY19ncGlvNDgAAAAAAwAAABgA
AAJ8AAAAMAAAADEAAAAyAAAAMwAAADQAAAA1AAAAAwAAAAQAAAKGAAAABwAAAAMAAAAEAAABtgAA
ABgAAAACAAAAAWdwY2xrMF9ncGlvNAAAAAAAAAADAAAABAAAAnwAAAAEAAAAAwAAAAQAAAKGAAAA
BAAAAAIAAAABZ3BjbGsxX2dwaW81AAAAAAAAAAMAAAAEAAACfAAAAAUAAAADAAAABAAAAoYAAAAE
AAAAAgAAAAFncGNsazFfZ3BpbzQyAAAAAAAAAwAAAAQAAAJ8AAAAKgAAAAMAAAAEAAAChgAAAAQA
AAACAAAAAWdwY2xrMV9ncGlvNDQAAAAAAAADAAAABAAAAnwAAAAsAAAAAwAAAAQAAAKGAAAABAAA
AAIAAAABZ3BjbGsyX2dwaW82AAAAAAAAAAMAAAAEAAACfAAAAAYAAAADAAAABAAAAoYAAAAEAAAA
AgAAAAFncGNsazJfZ3BpbzQzAAAAAAAAAwAAAAQAAAJ8AAAAKwAAAAMAAAAEAAAChgAAAAQAAAAD
AAAABAAAApQAAAAAAAAAAgAAAAFpMmMwX2dwaW8wAAAAAAADAAAACAAAAnwAAAAAAAAAAQAAAAMA
AAAEAAAChgAAAAQAAAACAAAAAWkyYzBfZ3BpbzI4AAAAAAMAAAAIAAACfAAAABwAAAAdAAAAAwAA
AAQAAAKGAAAABAAAAAIAAAABaTJjMF9ncGlvNDQAAAAAAwAAAAgAAAJ8AAAALAAAAC0AAAADAAAA
BAAAAoYAAAAFAAAAAgAAAAFpMmMxX2dwaW8yAAAAAAADAAAACAAAAnwAAAACAAAAAwAAAAMAAAAE
AAAChgAAAAQAAAACAAAAAWkyYzFfZ3BpbzQ0AAAAAAMAAAAIAAACfAAAACwAAAAtAAAAAwAAAAQA
AAKGAAAABgAAAAIAAAABanRhZ19ncGlvMjIAAAAAAwAAABgAAAJ8AAAAFgAAABcAAAAYAAAAGQAA
ABoAAAAbAAAAAwAAAAQAAAKGAAAAAwAAAAIAAAABcGNtX2dwaW8xOAAAAAAAAwAAABAAAAJ8AAAA
EgAAABMAAAAUAAAAFQAAAAMAAAAEAAAChgAAAAQAAAACAAAAAXBjbV9ncGlvMjgAAAAAAAMAAAAQ
AAACfAAAABwAAAAdAAAAHgAAAB8AAAADAAAABAAAAoYAAAAGAAAAAgAAAAFwd20wX2dwaW8xMgAA
AAADAAAABAAAAnwAAAAMAAAAAwAAAAQAAAKGAAAABAAAAAIAAAABcHdtMF9ncGlvMTgAAAAAAwAA
AAQAAAJ8AAAAEgAAAAMAAAAEAAAChgAAAAIAAAACAAAAAXB3bTBfZ3BpbzQwAAAAAAMAAAAEAAAC
fAAAACgAAAADAAAABAAAAoYAAAAEAAAAAgAAAAFwd20xX2dwaW8xMwAAAAADAAAABAAAAnwAAAAN
AAAAAwAAAAQAAAKGAAAABAAAAAIAAAABcHdtMV9ncGlvMTkAAAAAAwAAAAQAAAJ8AAAAEwAAAAMA
AAAEAAAChgAAAAIAAAACAAAAAXB3bTFfZ3BpbzQxAAAAAAMAAAAEAAACfAAAACkAAAADAAAABAAA
AoYAAAAEAAAAAgAAAAFwd20xX2dwaW80NQAAAAADAAAABAAAAnwAAAAtAAAAAwAAAAQAAAKGAAAA
BAAAAAIAAAABc2Rob3N0X2dwaW80OAAAAAAAAAMAAAAYAAACfAAAABYAAAAXAAAAGAAAABkAAAAa
AAAAGwAAAAMAAAAEAAAChgAAAAQAAAADAAAABAAAAbYAAAALAAAAAgAAAAFzcGkwX2dwaW83AAAA
AAADAAAAFAAAAnwAAAAHAAAACAAAAAkAAAAKAAAACwAAAAMAAAAEAAAChgAAAAQAAAACAAAAAXNw
aTBfZ3BpbzM1AAAAAAMAAAAUAAACfAAAACMAAAAkAAAAJQAAACYAAAAnAAAAAwAAAAQAAAKGAAAA
BAAAAAIAAAABc3BpMV9ncGlvMTYAAAAAAwAAABgAAAJ8AAAAEAAAABEAAAASAAAAEwAAABQAAAAV
AAAAAwAAAAQAAAKGAAAAAwAAAAIAAAABc3BpMl9ncGlvNDAAAAAAAwAAABgAAAJ8AAAAKAAAACkA
AAAqAAAAKwAAACwAAAAtAAAAAwAAAAQAAAKGAAAAAwAAAAIAAAABdWFydDBfZ3BpbzE0AAAAAAAA
AAMAAAAIAAACfAAAAA4AAAAPAAAAAwAAAAQAAAKGAAAABAAAAAIAAAABdWFydDBfY3RzcnRzX2dw
aW8xNgAAAAADAAAACAAAAnwAAAAQAAAAEQAAAAMAAAAEAAAChgAAAAcAAAACAAAAAXVhcnQwX2N0
c3J0c19ncGlvMzAAAAAAAwAAAAgAAAJ8AAAAHgAAAB8AAAADAAAABAAAAoYAAAAHAAAAAwAAAAgA
AAKUAAAAAgAAAAAAAAACAAAAAXVhcnQwX2dwaW8zMgAAAAAAAAADAAAACAAAAnwAAAAgAAAAIQAA
AAMAAAAEAAAChgAAAAcAAAADAAAACAAAApQAAAAAAAAAAgAAAAIAAAABdWFydDBfZ3BpbzM2AAAA
AAAAAAMAAAAIAAACfAAAACQAAAAlAAAAAwAAAAQAAAKGAAAABgAAAAIAAAABdWFydDBfY3RzcnRz
X2dwaW8zOAAAAAADAAAACAAAAnwAAAAmAAAAJwAAAAMAAAAEAAAChgAAAAYAAAACAAAAAXVhcnQx
X2dwaW8xNAAAAAAAAAADAAAACAAAAnwAAAAOAAAADwAAAAMAAAAEAAAChgAAAAIAAAACAAAAAXVh
cnQxX2N0c3J0c19ncGlvMTYAAAAAAwAAAAgAAAJ8AAAAEAAAABEAAAADAAAABAAAAoYAAAACAAAA
AgAAAAF1YXJ0MV9ncGlvMzIAAAAAAAAAAwAAAAgAAAJ8AAAAIAAAACEAAAADAAAABAAAAoYAAAAC
AAAAAgAAAAF1YXJ0MV9jdHNydHNfZ3BpbzMwAAAAAAMAAAAIAAACfAAAAB4AAAAfAAAAAwAAAAQA
AAKGAAAAAgAAAAIAAAABdWFydDFfZ3BpbzQwAAAAAAAAAAMAAAAIAAACfAAAACgAAAApAAAAAwAA
AAQAAAKGAAAAAgAAAAIAAAABdWFydDFfY3RzcnRzX2dwaW80MgAAAAADAAAACAAAAnwAAAAqAAAA
KwAAAAMAAAAEAAAChgAAAAIAAAACAAAAAWdwY2xrMF9ncGlvNDkAAAAAAAADAAAABAAAAnwAAAAx
AAAAAwAAAAQAAAKGAAAABQAAAAMAAAAEAAAClAAAAAAAAAACAAAAAWdwY2xrMV9ncGlvNTAAAAAA
AAADAAAABAAAAnwAAAAyAAAAAwAAAAQAAAKGAAAABQAAAAMAAAAEAAAClAAAAAAAAAACAAAAAWdw
Y2xrMl9ncGlvNTEAAAAAAAADAAAABAAAAnwAAAAzAAAAAwAAAAQAAAKGAAAABQAAAAMAAAAEAAAC
lAAAAAAAAAACAAAAAWkyYzBfZ3BpbzQ2AAAAAAMAAAAIAAACfAAAAC4AAAAvAAAAAwAAAAQAAAKG
AAAABAAAAAIAAAABaTJjMV9ncGlvNDYAAAAAAwAAAAgAAAJ8AAAALgAAAC8AAAADAAAABAAAAoYA
AAAFAAAAAgAAAAFpMmMzX2dwaW8yAAAAAAADAAAACAAAAnwAAAACAAAAAwAAAAMAAAAEAAAChgAA
AAIAAAACAAAAAWkyYzNfZ3BpbzQAAAAAAAMAAAAIAAACfAAAAAQAAAAFAAAAAwAAAAQAAAKGAAAA
AgAAAAIAAAABaTJjNF9ncGlvNgAAAAAAAwAAAAgAAAJ8AAAABgAAAAcAAAADAAAABAAAAoYAAAAC
AAAAAgAAAAFpMmM0X2dwaW84AAAAAAADAAAACAAAAnwAAAAIAAAACQAAAAMAAAAEAAAChgAAAAIA
AAACAAAAAWkyYzVfZ3BpbzEwAAAAAAMAAAAIAAACfAAAAAoAAAALAAAAAwAAAAQAAAKGAAAAAgAA
AAIAAAABaTJjNV9ncGlvMTIAAAAAAwAAAAgAAAJ8AAAADAAAAA0AAAADAAAABAAAAoYAAAACAAAA
AgAAAAFpMmM2X2dwaW8wAAAAAAADAAAACAAAAnwAAAAAAAAAAQAAAAMAAAAEAAAChgAAAAIAAAAC
AAAAAWkyYzZfZ3BpbzIyAAAAAAMAAAAIAAACfAAAABYAAAAXAAAAAwAAAAQAAAKGAAAAAgAAAAIA
AAABaTJjX3NsYXZlX2dwaW84AAAAAAMAAAAQAAACfAAAAAgAAAAJAAAACgAAAAsAAAADAAAABAAA
AoYAAAAHAAAAAgAAAAFqdGFnX2dwaW80OAAAAAADAAAAGAAAAnwAAAAwAAAAMQAAADIAAAAzAAAA
NAAAADUAAAADAAAABAAAAoYAAAADAAAAAgAAAAFtaWlfZ3BpbzI4AAAAAAADAAAAEAAAAnwAAAAc
AAAAHQAAAB4AAAAfAAAAAwAAAAQAAAKGAAAAAwAAAAIAAAABbWlpX2dwaW8zNgAAAAAAAwAAABAA
AAJ8AAAAJAAAACUAAAAmAAAAJwAAAAMAAAAEAAAChgAAAAIAAAACAAAAAXBjbV9ncGlvNTAAAAAA
AAMAAAAQAAACfAAAADIAAAAzAAAANAAAADUAAAADAAAABAAAAoYAAAAGAAAAAgAAAAFwd20wX2dw
aW81MgAAAAADAAAABAAAAnwAAAA0AAAAAwAAAAQAAAKGAAAABQAAAAMAAAAEAAAClAAAAAAAAAAC
AAAAAXB3bTFfZ3BpbzUzAAAAAAMAAAAEAAACfAAAADUAAAADAAAABAAAAoYAAAAFAAAAAwAAAAQA
AAKUAAAAAAAAAAIAAAABcmdtaWlfZ3BpbzM1AAAAAAAAAAMAAAAIAAACfAAAACMAAAAkAAAAAwAA
AAQAAAKGAAAAAwAAAAIAAAABcmdtaWlfaXJxX2dwaW8zNAAAAAAAAAADAAAABAAAAnwAAAAiAAAA
AwAAAAQAAAKGAAAAAgAAAAIAAAABcmdtaWlfaXJxX2dwaW8zOQAAAAAAAAADAAAABAAAAnwAAAAn
AAAAAwAAAAQAAAKGAAAAAwAAAAIAAAABcmdtaWlfbWRpb19ncGlvMjgAAAAAAAADAAAACAAAAnwA
AAAcAAAAHQAAAAMAAAAEAAAChgAAAAIAAAACAAAAAXJnbWlpX21kaW9fZ3BpbzM3AAAAAAAAAwAA
AAgAAAJ8AAAAJQAAACYAAAADAAAABAAAAoYAAAADAAAAAgAAAAFzcGkwX2dwaW80NgAAAAADAAAA
EAAAAnwAAAAuAAAALwAAADAAAAAxAAAAAwAAAAQAAAKGAAAABgAAAAIAAAABc3BpMl9ncGlvNDYA
AAAAAwAAABQAAAJ8AAAALgAAAC8AAAAwAAAAMQAAADIAAAADAAAABAAAAoYAAAACAAAAAgAAAAFz
cGkzX2dwaW8wAAAAAAADAAAAEAAAAnwAAAAAAAAAAQAAAAIAAAADAAAAAwAAAAQAAAKGAAAABwAA
AAIAAAABc3BpNF9ncGlvNAAAAAAAAwAAABAAAAJ8AAAABAAAAAUAAAAGAAAABwAAAAMAAAAEAAAC
hgAAAAcAAAACAAAAAXNwaTVfZ3BpbzEyAAAAAAMAAAAQAAACfAAAAAwAAAANAAAADgAAAA8AAAAD
AAAABAAAAoYAAAAHAAAAAgAAAAFzcGk2X2dwaW8xOAAAAAADAAAAEAAAAnwAAAASAAAAEwAAABQA
AAAVAAAAAwAAAAQAAAKGAAAABwAAAAIAAAABdWFydDJfZ3BpbzAAAAAAAwAAAAgAAAJ8AAAAAAAA
AAEAAAADAAAABAAAAoYAAAADAAAAAwAAAAgAAAKUAAAAAAAAAAIAAAACAAAAAXVhcnQyX2N0c3J0
c19ncGlvMgAAAAAAAwAAAAgAAAJ8AAAAAgAAAAMAAAADAAAABAAAAoYAAAADAAAAAwAAAAgAAAKU
AAAAAgAAAAAAAAACAAAAAXVhcnQzX2dwaW80AAAAAAMAAAAIAAACfAAAAAQAAAAFAAAAAwAAAAQA
AAKGAAAAAwAAAAMAAAAIAAAClAAAAAAAAAACAAAAAgAAAAF1YXJ0M19jdHNydHNfZ3BpbzYAAAAA
AAMAAAAIAAACfAAAAAYAAAAHAAAAAwAAAAQAAAKGAAAAAwAAAAMAAAAIAAAClAAAAAIAAAAAAAAA
AgAAAAF1YXJ0NF9ncGlvOAAAAAADAAAACAAAAnwAAAAIAAAACQAAAAMAAAAEAAAChgAAAAMAAAAD
AAAACAAAApQAAAAAAAAAAgAAAAIAAAABdWFydDRfY3RzcnRzX2dwaW8xMAAAAAADAAAACAAAAnwA
AAAKAAAACwAAAAMAAAAEAAAChgAAAAMAAAADAAAACAAAApQAAAACAAAAAAAAAAIAAAABdWFydDVf
Z3BpbzEyAAAAAAAAAAMAAAAIAAACfAAAAAwAAAANAAAAAwAAAAQAAAKGAAAAAwAAAAMAAAAIAAAC
lAAAAAAAAAACAAAAAgAAAAF1YXJ0NV9jdHNydHNfZ3BpbzE0AAAAAAMAAAAIAAACfAAAAA4AAAAP
AAAAAwAAAAQAAAKGAAAAAwAAAAMAAAAIAAAClAAAAAIAAAAAAAAAAgAAAAFkcGlfMThiaXRfZ3Bp
bzAAAAAAAwAAAFgAAAJ8AAAAAAAAAAEAAAACAAAAAwAAAAQAAAAFAAAABgAAAAcAAAAIAAAACQAA
AAoAAAALAAAADAAAAA0AAAAOAAAADwAAABAAAAARAAAAEgAAABMAAAAUAAAAFQAAAAMAAAAEAAAC
hgAAAAYAAAACAAAAAWdwaW9vdXQAAAAAAwAAAAQAAAJ8AAAABgAAAAMAAAAEAAAChgAAAAEAAAAC
AAAAAWFsdDAAAAAAAAAAAwAAABwAAAJ8AAAABAAAAAUAAAAHAAAACAAAAAkAAAAKAAAACwAAAAMA
AAAEAAAChgAAAAQAAAACAAAAAXNwaTBfcGlucwAAAAAAAAMAAAAMAAACfAAAAAkAAAAKAAAACwAA
AAMAAAAEAAAChgAAAAQAAAADAAAABAAAAbYAAAANAAAAAgAAAAFzcGkwX2NzX3BpbnMAAAAAAAAA
AwAAAAgAAAJ8AAAACAAAAAcAAAADAAAABAAAAoYAAAABAAAAAwAAAAQAAAG2AAAADgAAAAIAAAAB
c3BpM19waW5zAAAAAAAAAwAAAAwAAAJ8AAAAAQAAAAIAAAADAAAAAwAAAAQAAAKGAAAABwAAAAIA
AAABc3BpM19jc19waW5zAAAAAAAAAAMAAAAIAAACfAAAAAAAAAAYAAAAAwAAAAQAAAKGAAAAAQAA
AAIAAAABc3BpNF9waW5zAAAAAAAAAwAAAAwAAAJ8AAAABQAAAAYAAAAHAAAAAwAAAAQAAAKGAAAA
BwAAAAIAAAABc3BpNF9jc19waW5zAAAAAAAAAAMAAAAIAAACfAAAAAQAAAAZAAAAAwAAAAQAAAKG
AAAAAQAAAAIAAAABc3BpNV9waW5zAAAAAAAAAwAAAAwAAAJ8AAAADQAAAA4AAAAPAAAAAwAAAAQA
AAKGAAAABwAAAAIAAAABc3BpNV9jc19waW5zAAAAAAAAAAMAAAAIAAACfAAAAAwAAAAaAAAAAwAA
AAQAAAKGAAAAAQAAAAIAAAABc3BpNl9waW5zAAAAAAAAAwAAAAwAAAJ8AAAAEwAAABQAAAAVAAAA
AwAAAAQAAAKGAAAABwAAAAIAAAABc3BpNl9jc19waW5zAAAAAAAAAAMAAAAIAAACfAAAABIAAAAb
AAAAAwAAAAQAAAKGAAAAAQAAAAIAAAABaTJjMAAAAAAAAAADAAAACAAAAnwAAAAAAAAAAQAAAAMA
AAAEAAAChgAAAAQAAAADAAAABAAAApQAAAACAAAAAwAAAAQAAAG2AAAAEAAAAAIAAAABaTJjMQAA
AAAAAAADAAAACAAAAnwAAAACAAAAAwAAAAMAAAAEAAAChgAAAAQAAAADAAAABAAAApQAAAACAAAA
AwAAAAQAAAG2AAAAFAAAAAIAAAABaTJjMwAAAAAAAAADAAAACAAAAnwAAAAEAAAABQAAAAMAAAAE
AAAChgAAAAIAAAADAAAABAAAApQAAAACAAAAAgAAAAFpMmM0AAAAAAAAAAMAAAAIAAACfAAAAAgA
AAAJAAAAAwAAAAQAAAKGAAAAAgAAAAMAAAAEAAAClAAAAAIAAAACAAAAAWkyYzUAAAAAAAAAAwAA
AAgAAAJ8AAAADAAAAA0AAAADAAAABAAAAoYAAAACAAAAAwAAAAQAAAKUAAAAAgAAAAIAAAABaTJj
NgAAAAAAAAADAAAACAAAAnwAAAAWAAAAFwAAAAMAAAAEAAAChgAAAAIAAAADAAAABAAAApQAAAAC
AAAAAgAAAAFpMnMAAAAAAwAAABAAAAJ8AAAAEgAAABMAAAAUAAAAFQAAAAMAAAAEAAAChgAAAAQA
AAADAAAABAAAAbYAAAAMAAAAAgAAAAFzZGlvX3BpbnMAAAAAAAADAAAAGAAAAnwAAAAiAAAAIwAA
ACQAAAAlAAAAJgAAACcAAAADAAAABAAAAoYAAAAHAAAAAwAAABgAAAKUAAAAAAAAAAIAAAACAAAA
AgAAAAIAAAACAAAAAwAAAAQAAAG2AAAAGQAAAAIAAAABYnRfcGlucwAAAAADAAAAAgAAAnwtAAAA
AAAAAwAAAAQAAAKGAAAAAAAAAAMAAAAEAAAClAAAAAIAAAADAAAABAAAAbYAAAAJAAAAAgAAAAF1
YXJ0MF9waW5zAAAAAAADAAAACAAAAnwAAAAgAAAAIQAAAAMAAAAEAAAChgAAAAcAAAADAAAACAAA
ApQAAAAAAAAAAgAAAAMAAAAEAAABtgAAAAgAAAACAAAAAXVhcnQxX3BpbnMAAAAAAAMAAAAAAAAC
fAAAAAMAAAAAAAAChgAAAAMAAAAAAAAClAAAAAMAAAAEAAABtgAAABMAAAACAAAAAXVhcnQyX3Bp
bnMAAAAAAAMAAAAIAAACfAAAAAAAAAABAAAAAwAAAAQAAAKGAAAAAwAAAAMAAAAIAAAClAAAAAAA
AAACAAAAAgAAAAF1YXJ0M19waW5zAAAAAAADAAAACAAAAnwAAAAEAAAABQAAAAMAAAAEAAAChgAA
AAMAAAADAAAACAAAApQAAAAAAAAAAgAAAAIAAAABdWFydDRfcGlucwAAAAAAAwAAAAgAAAJ8AAAA
CAAAAAkAAAADAAAABAAAAoYAAAADAAAAAwAAAAgAAAKUAAAAAAAAAAIAAAACAAAAAXVhcnQ1X3Bp
bnMAAAAAAAMAAAAIAAACfAAAAAwAAAANAAAAAwAAAAQAAAKGAAAAAwAAAAMAAAAIAAAClAAAAAAA
AAACAAAAAgAAAAFhdWRpb19waW5zAAAAAAADAAAACAAAAnwAAAAoAAAAKQAAAAMAAAAEAAAChgAA
AAQAAAADAAAABAAAAbYAAAAbAAAAAgAAAAIAAAABc2VyaWFsQDdlMjAxMDAwAAAAAAMAAAArAAAA
AGJyY20sYmNtMjgzNS1wbDAxMQBhcm0scGwwMTEAYXJtLHByaW1lY2VsbAAAAAAAAwAAAAgAAAFv
fiAQAAAAAgAAAAADAAAADAAAAXMAAAAAAAAAeQAAAAQAAAADAAAAEAAAAd8AAAADAAAAEwAAAAMA
AAAUAAAAAwAAABEAAAHmdWFydGNsawBhcGJfcGNsawAAAAAAAAADAAAABAAAAp4AJBARAAAAAwAA
AAAAAAK1AAAAAwAAAAgAAAJuZGVmYXVsdAAAAAADAAAACAAAAsoAAAAIAAAACQAAAAMAAAAFAAAB
fm9rYXkAAAAAAAAAAwAAAAQAAAG2AAAAIwAAAAIAAAABbW1jQDdlMjAyMDAwAAAAAAAAAAMAAAAU
AAAAAGJyY20sYmNtMjgzNS1zZGhvc3QAAAAAAwAAAAgAAAFvfiAgAAAAAQAAAAADAAAADAAAAXMA
AAAAAAAAeAAAAAQAAAADAAAACAAAAd8AAAADAAAAFAAAAAMAAAAIAAAC1AAAAAogAAANAAAAAwAA
AAYAAALZcngtdHgAAAAAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAADAAAACAAAAm5kZWZhdWx0
AAAAAAMAAAAEAAACygAAAAsAAAADAAAABAAAAuMAAAAEAAAAAwAAAAQAAALtAAAAAAAAAAMAAAAE
AAAC/wAAAAEAAAADAAAABAAAAbYAAAArAAAAAgAAAAFpMnNAN2UyMDMwMDAAAAAAAAAAAwAAABEA
AAAAYnJjbSxiY20yODM1LWkycwAAAAAAAAADAAAACAAAAW9+IDAAAAAAJAAAAAMAAAAIAAAB3wAA
AAMAAAAfAAAAAwAAABAAAALUAAAACgAAAAIAAAAKAAAAAwAAAAMAAAAGAAAC2XR4AHJ4AAAAAAAA
AwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAwAAAAQAAAMOAAAAAAAAAAMAAAAIAAACbmRlZmF1bHQA
AAAAAwAAAAQAAALKAAAADAAAAAMAAAAEAAABtgAAACUAAAACAAAAAXNwaUA3ZTIwNDAwMAAAAAAA
AAADAAAAEQAAAABicmNtLGJjbTI4MzUtc3BpAAAAAAAAAAMAAAAIAAABb34gQAAAAAIAAAAAAwAA
AAwAAAFzAAAAAAAAAHYAAAAEAAAAAwAAAAgAAAHfAAAAAwAAABQAAAADAAAAEAAAAtQAAAAKAAAA
BgAAAAoAAAAHAAAAAwAAAAYAAALZdHgAcngAAAAAAAADAAAABAAAACIAAAABAAAAAwAAAAQAAAAx
AAAAAAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAMAAAAIAAACbmRlZmF1bHQAAAAAAwAAAAgA
AALKAAAADQAAAA4AAAADAAAAGAAAAx8AAAAPAAAACAAAAAEAAAAPAAAABwAAAAEAAAADAAAABAAA
AbYAAAAmAAAAAXNwaWRldkAwAAAAAAAAAAMAAAAHAAAAAHNwaWRldgAAAAAAAwAAAAQAAAFvAAAA
AAAAAAMAAAAEAAAAIgAAAAEAAAADAAAABAAAADEAAAAAAAAAAwAAAAQAAAMoB3NZQAAAAAIAAAAB
c3BpZGV2QDEAAAAAAAAAAwAAAAcAAAAAc3BpZGV2AAAAAAADAAAABAAAAW8AAAABAAAAAwAAAAQA
AAAiAAAAAQAAAAMAAAAEAAAAMQAAAAAAAAADAAAABAAAAygHc1lAAAAAAgAAAAIAAAABaTJjQDdl
MjA1MDAwAAAAAAAAAAMAAAARAAAAAGJyY20sYmNtMjgzNS1pMmMAAAAAAAAAAwAAAAgAAAFvfiBQ
AAAAAgAAAAADAAAADAAAAXMAAAAAAAAAdQAAAAQAAAADAAAACAAAAd8AAAADAAAAFAAAAAMAAAAE
AAAAIgAAAAEAAAADAAAABAAAADEAAAAAAAAAAwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAwAAAAgA
AAJuZGVmYXVsdAAAAAADAAAABAAAAsoAAAAQAAAAAwAAAAQAAAM6AAGGoAAAAAMAAAAEAAABtgAA
ACcAAAACAAAAAXBpeGVsdmFsdmVAN2UyMDYwMDAAAAAAAwAAABkAAAAAYnJjbSxiY20yODM1LXBp
eGVsdmFsdmUwAAAAAAAAAAMAAAAIAAABb34gYAAAAAEAAAAAAwAAAAwAAAFzAAAAAAAAAG0AAAAE
AAAAAwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAgAAAAFwaXhlbHZhbHZlQDdlMjA3MDAwAAAAAAMA
AAAZAAAAAGJyY20sYmNtMjgzNS1waXhlbHZhbHZlMQAAAAAAAAADAAAACAAAAW9+IHAAAAABAAAA
AAMAAAAMAAABcwAAAAAAAABuAAAABAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAABZHBp
QDdlMjA4MDAwAAAAAAAAAAMAAAARAAAAAGJyY20sYmNtMjgzNS1kcGkAAAAAAAAAAwAAAAgAAAFv
fiCAAAAAAIwAAAADAAAAEAAAAd8AAAADAAAAFAAAAAMAAAAsAAAAAwAAAAsAAAHmY29yZQBwaXhl
bAAAAAAAAwAAAAQAAAAiAAAAAQAAAAMAAAAEAAAAMQAAAAAAAAADAAAACQAAAX5kaXNhYmxlZAAA
AAAAAAACAAAAAWRzaUA3ZTIwOTAwMAAAAAAAAAADAAAAEgAAAABicmNtLGJjbTI4MzUtZHNpMAAA
AAAAAAMAAAAIAAABb34gkAAAAAB4AAAAAwAAAAwAAAFzAAAAAAAAAGQAAAAEAAAAAwAAAAQAAAAi
AAAAAQAAAAMAAAAEAAAAMQAAAAAAAAADAAAABAAAAgoAAAABAAAAAwAAABgAAAHfAAAAAwAAACAA
AAADAAAALwAAAAMAAAAxAAAAAwAAABEAAAHmcGh5AGVzY2FwZQBwaXhlbAAAAAAAAAADAAAAHQAA
A0pkc2kwX2J5dGUAZHNpMF9kZHIyAGRzaTBfZGRyAAAAAAAAAAMAAAAIAAADXQAAABEAAAARAAAA
AwAAAAQAAAG2AAAABQAAAAIAAAABYXV4QDdlMjE1MDAwAAAAAAAAAAMAAAARAAAAAGJyY20sYmNt
MjgzNS1hdXgAAAAAAAAAAwAAAAQAAAIKAAAAAQAAAAMAAAAIAAABb34hUAAAAAAIAAAAAwAAAAgA
AAHfAAAAAwAAABQAAAADAAAABAAAAbYAAAASAAAAAgAAAAFzZXJpYWxAN2UyMTUwNDAAAAAAAwAA
ABYAAAAAYnJjbSxiY20yODM1LWF1eC11YXJ0AAAAAAAAAwAAAAgAAAFvfiFQQAAAAEAAAAADAAAA
DAAAAXMAAAAAAAAAXQAAAAQAAAADAAAACAAAAd8AAAASAAAAAAAAAAMAAAAFAAABfm9rYXkAAAAA
AAAAAwAAAAgAAAJuZGVmYXVsdAAAAAADAAAABAAAAsoAAAATAAAAAwAAAAQAAAG2AAAAJAAAAAIA
AAABc3BpQDdlMjE1MDgwAAAAAAAAAAMAAAAVAAAAAGJyY20sYmNtMjgzNS1hdXgtc3BpAAAAAAAA
AAMAAAAIAAABb34hUIAAAABAAAAAAwAAAAwAAAFzAAAAAAAAAF0AAAAEAAAAAwAAAAgAAAHfAAAA
EgAAAAEAAAADAAAABAAAACIAAAABAAAAAwAAAAQAAAAxAAAAAAAAAAMAAAAJAAABfmRpc2FibGVk
AAAAAAAAAAIAAAABc3BpQDdlMjE1MGMwAAAAAAAAAAMAAAAVAAAAAGJyY20sYmNtMjgzNS1hdXgt
c3BpAAAAAAAAAAMAAAAIAAABb34hUMAAAABAAAAAAwAAAAwAAAFzAAAAAAAAAF0AAAAEAAAAAwAA
AAgAAAHfAAAAEgAAAAIAAAADAAAABAAAACIAAAABAAAAAwAAAAQAAAAxAAAAAAAAAAMAAAAJAAAB
fmRpc2FibGVkAAAAAAAAAAIAAAABcHdtQDdlMjBjMDAwAAAAAAAAAAMAAAARAAAAAGJyY20sYmNt
MjgzNS1wd20AAAAAAAAAAwAAAAgAAAFvfiDAAAAAACgAAAADAAAACAAAAd8AAAADAAAAHgAAAAMA
AAAIAAADawAAAAMAAAAeAAAAAwAAAAQAAAN7AJiWgAAAAAMAAAAEAAADkAAAAAIAAAADAAAACQAA
AX5kaXNhYmxlZAAAAAAAAAACAAAAAWh2c0A3ZTQwMDAwMAAAAAAAAAADAAAAEQAAAABicmNtLGJj
bTI4MzUtaHZzAAAAAAAAAAMAAAAIAAABb35AAAAAAGAAAAAAAwAAAAwAAAFzAAAAAAAAAGEAAAAE
AAAAAwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAgAAAAFkc2lAN2U3MDAwMDAAAAAAAAAAAwAAABIA
AAAAYnJjbSxiY20yODM1LWRzaTEAAAAAAAADAAAACAAAAW9+cAAAAAAAjAAAAAMAAAAMAAABcwAA
AAAAAABsAAAABAAAAAMAAAAEAAAAIgAAAAEAAAADAAAABAAAADEAAAAAAAAAAwAAAAQAAAIKAAAA
AQAAAAMAAAAYAAAB3wAAAAMAAAAjAAAAAwAAADAAAAADAAAAMgAAAAMAAAARAAAB5nBoeQBlc2Nh
cGUAcGl4ZWwAAAAAAAAAAwAAAB0AAANKZHNpMV9ieXRlAGRzaTFfZGRyMgBkc2kxX2RkcgAAAAAA
AAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAADAAAACAAAA10AAAARAAAAEgAAAAMAAAAEAAABtgAA
AAYAAAACAAAAAWNzaUA3ZTgwMDAwMAAAAAAAAAADAAAAFAAAAABicmNtLGJjbTI4MzUtdW5pY2Ft
AAAAAAMAAAAQAAABb36AAAAAAAgAfoAgAAAAAAQAAAADAAAADAAAAXMAAAAAAAAAZgAAAAQAAAAD
AAAACAAAAd8AAAADAAAALQAAAAMAAAADAAAB5mxwAAAAAAADAAAABAAAACIAAAABAAAAAwAAAAQA
AAAxAAAAAAAAAAMAAAAEAAACCgAAAAEAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAADAAAACAAA
A10AAAARAAAADAAAAAIAAAABY3NpQDdlODAxMDAwAAAAAAAAAAMAAAAUAAAAAGJyY20sYmNtMjgz
NS11bmljYW0AAAAAAwAAABAAAAFvfoAQAAAACAB+gCAEAAAABAAAAAMAAAAMAAABcwAAAAAAAABn
AAAABAAAAAMAAAAIAAAB3wAAAAMAAAAuAAAAAwAAAAMAAAHmbHAAAAAAAAMAAAAEAAAAIgAAAAEA
AAADAAAABAAAADEAAAAAAAAAAwAAAAQAAAIKAAAAAQAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAA
AAMAAAAIAAADXQAAABEAAAANAAAAAXBvcnQAAAAAAAAAAWVuZHBvaW50AAAAAAAAAAMAAAAIAAAD
mwAAAAEAAAACAAAAAgAAAAIAAAACAAAAAWkyY0A3ZTgwNDAwMAAAAAAAAAADAAAAEQAAAABicmNt
LGJjbTI4MzUtaTJjAAAAAAAAAAMAAAAIAAABb36AQAAAABAAAAAAAwAAAAwAAAFzAAAAAAAAAHUA
AAAEAAAAAwAAAAgAAAHfAAAAAwAAABQAAAADAAAABAAAACIAAAABAAAAAwAAAAQAAAAxAAAAAAAA
AAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAMAAAAIAAACbmRlZmF1bHQAAAAAAwAAAAQAAALKAAAA
FAAAAAMAAAAEAAADOgABhqAAAAADAAAABAAAAbYAAAAoAAAAAgAAAAFpMmNAN2U4MDUwMDAAAAAA
AAAAAwAAABEAAAAAYnJjbSxiY20yODM1LWkyYwAAAAAAAAADAAAACAAAAW9+gFAAAAAQAAAAAAMA
AAAMAAABcwAAAAAAAAB1AAAABAAAAAMAAAAIAAAB3wAAAAMAAAAUAAAAAwAAAAQAAAAiAAAAAQAA
AAMAAAAEAAAAMQAAAAAAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAADAAAABAAAAzoAAYagAAAA
AwAAAAQAAAG2AAAAFQAAAAIAAAABdmVjQDdlODA2MDAwAAAAAAAAAAMAAAARAAAAAGJyY20sYmNt
MjgzNS12ZWMAAAAAAAAAAwAAAAgAAAFvfoBgAAAAEAAAAAADAAAACAAAAd8AAAADAAAAGAAAAAMA
AAAMAAABcwAAAAAAAAB7AAAABAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAMAAAAIAAADXQAA
ABEAAAAHAAAAAgAAAAFwaXhlbHZhbHZlQDdlODA3MDAwAAAAAAMAAAAZAAAAAGJyY20sYmNtMjgz
NS1waXhlbHZhbHZlMgAAAAAAAAADAAAACAAAAW9+gHAAAAABAAAAAAMAAAAMAAABcwAAAAAAAABq
AAAABAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAABaGRtaUA3ZTkwMjAwMAAAAAAAAAMA
AAASAAAAAGJyY20sYmNtMjgzNS1oZG1pAAAAAAAAAwAAABAAAAFvfpAgAAAABgB+gIAAAAABAAAA
AAMAAAAYAAABcwAAAAAAAABoAAAABAAAAAAAAABpAAAABAAAAAMAAAAEAAADpgAAABUAAAADAAAA
EAAAAd8AAAADAAAAEAAAAAMAAAAZAAAAAwAAAAsAAAHmcGl4ZWwAaGRtaQAAAAAAAwAAAAgAAALU
AAAACgAAABEAAAADAAAACQAAAtlhdWRpby1yeAAAAAAAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAA
AAADAAAACAAAA10AAAARAAAABQAAAAIAAAABdXNiQDdlOTgwMDAwAAAAAAAAAAMAAAARAAAAAGJy
Y20sYmNtMjcwOC11c2IAAAAAAAAAAwAAABAAAAFvfpgAAAABAAB+ALIAAAACAAAAAAMAAAAYAAAB
cwAAAAAAAABJAAAABAAAAAAAAAAoAAAABAAAAAMAAAAEAAAAIgAAAAEAAAADAAAABAAAADEAAAAA
AAAAAwAAAAQAAAHfAAAAFgAAAAMAAAAEAAAB5m90ZwAAAAADAAAABAAAA6oAAAAXAAAAAwAAAAkA
AAOvdXNiMi1waHkAAAAAAAAAAwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAwAAAAkAAAGFdXNiAHNv
ZnQAAAAAAAAAAwAAAAgAAANdAAAAEQAAAAYAAAACAAAAAWdwdQAAAAADAAAAEQAAAABicmNtLGJj
bTI4MzUtdmM0AAAAAAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAABbG9jYWxfaW50Y0A0
MDAwMDAwMAAAAAADAAAAFQAAAABicmNtLGJjbTI4MzYtbDEtaW50YwAAAAAAAAADAAAACAAAAW9A
AAAAAAABAAAAAAIAAAABZ2ljNDAwQDQwMDQxMDAwAAAAAAMAAAAAAAACSAAAAAMAAAAEAAACXQAA
AAMAAAADAAAADAAAAABhcm0sZ2ljLTQwMAAAAAADAAAAIAAAAW9ABBAAAAAQAEAEIAAAACAAQARA
AAAAIABABGAAAAAgAAAAAAMAAAAMAAABcwAAAAEAAAAJAAAPBAAAAAMAAAAEAAABtgAAAAEAAAAC
AAAAAXRoZXJtYWxAN2Q1ZDIyMDAAAAAAAAAAAwAAABYAAAAAYnJjbSxhdnMtdG1vbi1iY20yODM4
AAAAAAAAAwAAAAgAAAFvfV0iAAAAACwAAAADAAAADAAAAXMAAAAAAAAAiQAAAAQAAAADAAAABQAA
AYV0bW9uAAAAAAAAAAMAAAAIAAAB3wAAAAMAAAAbAAAAAwAAAAQAAAO5AAAAAAAAAAMAAAAFAAAB
fm9rYXkAAAAAAAAAAwAAAAQAAAG2AAAAAgAAAAIAAAABc2VyaWFsQDdlMjAxNDAwAAAAAAMAAAAr
AAAAAGJyY20sYmNtMjgzNS1wbDAxMQBhcm0scGwwMTEAYXJtLHByaW1lY2VsbAAAAAAAAwAAAAgA
AAFvfiAUAAAAAgAAAAADAAAADAAAAXMAAAAAAAAAeQAAAAQAAAADAAAAEAAAAd8AAAADAAAAEwAA
AAMAAAAUAAAAAwAAABEAAAHmdWFydGNsawBhcGJfcGNsawAAAAAAAAADAAAABAAAAp4AJBARAAAA
AwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAgAAAAFzZXJpYWxAN2UyMDE2MDAAAAAAAwAAACsAAAAA
YnJjbSxiY20yODM1LXBsMDExAGFybSxwbDAxMQBhcm0scHJpbWVjZWxsAAAAAAADAAAACAAAAW9+
IBYAAAACAAAAAAMAAAAMAAABcwAAAAAAAAB5AAAABAAAAAMAAAAQAAAB3wAAAAMAAAATAAAAAwAA
ABQAAAADAAAAEQAAAeZ1YXJ0Y2xrAGFwYl9wY2xrAAAAAAAAAAMAAAAEAAACngAkEBEAAAADAAAA
CQAAAX5kaXNhYmxlZAAAAAAAAAACAAAAAXNlcmlhbEA3ZTIwMTgwMAAAAAADAAAAKwAAAABicmNt
LGJjbTI4MzUtcGwwMTEAYXJtLHBsMDExAGFybSxwcmltZWNlbGwAAAAAAAMAAAAIAAABb34gGAAA
AAIAAAAAAwAAAAwAAAFzAAAAAAAAAHkAAAAEAAAAAwAAABAAAAHfAAAAAwAAABMAAAADAAAAFAAA
AAMAAAARAAAB5nVhcnRjbGsAYXBiX3BjbGsAAAAAAAAAAwAAAAQAAAKeACQQEQAAAAMAAAAJAAAB
fmRpc2FibGVkAAAAAAAAAAIAAAABc2VyaWFsQDdlMjAxYTAwAAAAAAMAAAArAAAAAGJyY20sYmNt
MjgzNS1wbDAxMQBhcm0scGwwMTEAYXJtLHByaW1lY2VsbAAAAAAAAwAAAAgAAAFvfiAaAAAAAgAA
AAADAAAADAAAAXMAAAAAAAAAeQAAAAQAAAADAAAAEAAAAd8AAAADAAAAEwAAAAMAAAAUAAAAAwAA
ABEAAAHmdWFydGNsawBhcGJfcGNsawAAAAAAAAADAAAABAAAAp4AJBARAAAAAwAAAAkAAAF+ZGlz
YWJsZWQAAAAAAAAAAgAAAAFzcGlAN2UyMDQ2MDAAAAAAAAAAAwAAABEAAAAAYnJjbSxiY20yODM1
LXNwaQAAAAAAAAADAAAACAAAAW9+IEYAAAACAAAAAAMAAAAMAAABcwAAAAAAAAB2AAAABAAAAAMA
AAAIAAAB3wAAAAMAAAAUAAAAAwAAAAQAAAAiAAAAAQAAAAMAAAAEAAAAMQAAAAAAAAADAAAACQAA
AX5kaXNhYmxlZAAAAAAAAAACAAAAAXNwaUA3ZTIwNDgwMAAAAAAAAAADAAAAEQAAAABicmNtLGJj
bTI4MzUtc3BpAAAAAAAAAAMAAAAIAAABb34gSAAAAAIAAAAAAwAAAAwAAAFzAAAAAAAAAHYAAAAE
AAAAAwAAAAgAAAHfAAAAAwAAABQAAAADAAAABAAAACIAAAABAAAAAwAAAAQAAAAxAAAAAAAAAAMA
AAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAABc3BpQDdlMjA0YTAwAAAAAAAAAAMAAAARAAAAAGJy
Y20sYmNtMjgzNS1zcGkAAAAAAAAAAwAAAAgAAAFvfiBKAAAAAgAAAAADAAAADAAAAXMAAAAAAAAA
dgAAAAQAAAADAAAACAAAAd8AAAADAAAAFAAAAAMAAAAEAAAAIgAAAAEAAAADAAAABAAAADEAAAAA
AAAAAwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAgAAAAFzcGlAN2UyMDRjMDAAAAAAAAAAAwAAABEA
AAAAYnJjbSxiY20yODM1LXNwaQAAAAAAAAADAAAACAAAAW9+IEwAAAACAAAAAAMAAAAMAAABcwAA
AAAAAAB2AAAABAAAAAMAAAAIAAAB3wAAAAMAAAAUAAAAAwAAAAQAAAAiAAAAAQAAAAMAAAAEAAAA
MQAAAAAAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAACAAAAAWkyY0A3ZTIwNTYwMAAAAAAAAAAD
AAAAEQAAAABicmNtLGJjbTI4MzUtaTJjAAAAAAAAAAMAAAAIAAABb34gVgAAAAIAAAAAAwAAAAwA
AAFzAAAAAAAAAHUAAAAEAAAAAwAAAAgAAAHfAAAAAwAAABQAAAADAAAABAAAACIAAAABAAAAAwAA
AAQAAAAxAAAAAAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAABaTJjQDdlMjA1ODAwAAAA
AAAAAAMAAAARAAAAAGJyY20sYmNtMjgzNS1pMmMAAAAAAAAAAwAAAAgAAAFvfiBYAAAAAgAAAAAD
AAAADAAAAXMAAAAAAAAAdQAAAAQAAAADAAAACAAAAd8AAAADAAAAFAAAAAMAAAAEAAAAIgAAAAEA
AAADAAAABAAAADEAAAAAAAAAAwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAgAAAAFpMmNAN2UyMDVh
MDAAAAAAAAAAAwAAABEAAAAAYnJjbSxiY20yODM1LWkyYwAAAAAAAAADAAAACAAAAW9+IFoAAAAC
AAAAAAMAAAAMAAABcwAAAAAAAAB1AAAABAAAAAMAAAAIAAAB3wAAAAMAAAAUAAAAAwAAAAQAAAAi
AAAAAQAAAAMAAAAEAAAAMQAAAAAAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAACAAAAAWkyY0A3
ZTIwNWMwMAAAAAAAAAADAAAAEQAAAABicmNtLGJjbTI4MzUtaTJjAAAAAAAAAAMAAAAIAAABb34g
XAAAAAIAAAAAAwAAAAwAAAFzAAAAAAAAAHUAAAAEAAAAAwAAAAgAAAHfAAAAAwAAABQAAAADAAAA
BAAAACIAAAABAAAAAwAAAAQAAAAxAAAAAAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAAB
cHdtQDdlMjBjODAwAAAAAAAAAAMAAAARAAAAAGJyY20sYmNtMjgzNS1wd20AAAAAAAAAAwAAAAgA
AAFvfiDIAAAAACgAAAADAAAACAAAAd8AAAADAAAAHgAAAAMAAAAIAAADawAAAAMAAAAeAAAAAwAA
AAQAAAN7AJiWgAAAAAMAAAAEAAADkAAAAAIAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAACAAAA
AW1tY0A3ZTMwMDAwMAAAAAAAAAADAAAAJAAAAABicmNtLGJjbTI4MzUtbW1jAGJyY20sYmNtMjgz
NS1zZGhjaQAAAAADAAAACAAAAW9+MAAAAAABAAAAAAMAAAAMAAABcwAAAAAAAAB+AAAABAAAAAMA
AAAIAAAB3wAAAAMAAAAcAAAAAwAAAAgAAALUAAAACgAAAAsAAAADAAAABgAAAtlyeC10eAAAAAAA
AAMAAAAEAAAC7QAAAAAAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAADAAAACAAAAm5kZWZhdWx0
AAAAAAMAAAAEAAACygAAABgAAAADAAAABAAAAuMAAAAEAAAAAwAAAAQAAAG2AAAALQAAAAIAAAAB
bW1jbnJAN2UzMDAwMDAAAAAAAAMAAAAkAAAAAGJyY20sYmNtMjgzNS1tbWMAYnJjbSxiY20yODM1
LXNkaGNpAAAAAAMAAAAIAAABb34wAAAAAAEAAAAAAwAAAAwAAAFzAAAAAAAAAH4AAAAEAAAAAwAA
AAgAAAHfAAAAAwAAABwAAAADAAAACAAAAtQAAAAKAAAACwAAAAMAAAAGAAAC2XJ4LXR4AAAAAAAA
AwAAAAQAAALtAAAAAAAAAAMAAAAAAAADzwAAAAMAAAAFAAABfm9rYXkAAAAAAAAAAwAAAAgAAAJu
ZGVmYXVsdAAAAAADAAAABAAAAsoAAAAZAAAAAwAAAAQAAALjAAAABAAAAAMAAAAEAAABtgAAAC4A
AAACAAAAAWZpcm13YXJla21zQDdlNjAwMDAwAAAAAAAAAAMAAAAdAAAAAHJhc3BiZXJyeXBpLHJw
aS1maXJtd2FyZS1rbXMAAAAAAAAAAwAAAAgAAAFvfmAAAAAAAQAAAAADAAAADAAAAXMAAAAAAAAA
cAAAAAQAAAADAAAABAAAA90AAAAHAAAAAwAAAAkAAAF+ZGlzYWJsZWQAAAAAAAAAAgAAAAFzbWlA
N2U2MDAwMDAAAAAAAAAAAwAAABEAAAAAYnJjbSxiY20yODM1LXNtaQAAAAAAAAADAAAACAAAAW9+
YAAAAAABAAAAAAMAAAAMAAABcwAAAAAAAABwAAAABAAAAAMAAAAIAAAB3wAAAAMAAAAqAAAAAwAA
AAgAAANrAAAAAwAAACoAAAADAAAABAAAA3sHc1lAAAAAAwAAAAgAAALUAAAACgAAAAQAAAADAAAA
BgAAAtlyeC10eAAAAAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAABYXhpcGVyZgAAAAAD
AAAAFQAAAABicmNtLGJjbTI4MzUtYXhpcGVyZgAAAAAAAAADAAAAEAAAAW9+AJgAAAABAH7ggAAA
AAEAAAAAAwAAAAQAAAIXAAAABwAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAMAAAAEAAABtgAA
AC8AAAACAAAAAWZpcm13YXJlAAAAAAAAAAMAAAAoAAAAAHJhc3BiZXJyeXBpLGJjbTI4MzUtZmly
bXdhcmUAc2ltcGxlLWJ1cwAAAAADAAAABAAAA+sAAAAaAAAAAwAAAAQAAAG2AAAABwAAAAFncGlv
AAAAAAAAAAMAAAAaAAAAAHJhc3BiZXJyeXBpLGZpcm13YXJlLWdwaW8AAAAAAAADAAAAAAAAAiwA
AAADAAAABAAAAjwAAAACAAAAAwAAAD4AAAPyQlRfT04AV0xfT04AUFdSX0xFRF9PRkYAR0xPQkFM
X1JFU0VUAFZERF9TRF9JT19TRUwAQ0FNX0dQSU8AAAAAAAAAAAMAAAAFAAABfm9rYXkAAAAAAAAA
AwAAAAQAAAG2AAAAMwAAAAIAAAACAAAAAXBvd2VyAAAAAAAAAwAAABoAAAAAcmFzcGJlcnJ5cGks
YmNtMjgzNS1wb3dlcgAAAAAAAAMAAAAEAAACFwAAAAcAAAADAAAABAAAAb4AAAABAAAAAwAAAAQA
AAG2AAAAEQAAAAIAAAABZ3Bpb21lbQAAAAADAAAAFQAAAABicmNtLGJjbTI4MzUtZ3Bpb21lbQAA
AAAAAAADAAAACAAAAW9+IAAAAAAQAAAAAAIAAAABZmIAAAAAAAMAAAAQAAAAAGJyY20sYmNtMjcw
OC1mYgAAAAADAAAABAAAAhcAAAAHAAAAAwAAAAUAAAF+b2theQAAAAAAAAACAAAAAXZjc20AAAAA
AAAAAwAAABkAAAAAcmFzcGJlcnJ5cGksYmNtMjgzNS12Y3NtAAAAAAAAAAMAAAAEAAACFwAAAAcA
AAADAAAABQAAAX5va2F5AAAAAAAAAAIAAAABYXVkaW8AAAAAAAADAAAAEwAAAABicmNtLGJjbTI4
MzUtYXVkaW8AAAAAAAMAAAAEAAAEAgAAAAgAAAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAADAAAA
CAAAAm5kZWZhdWx0AAAAAAMAAAAEAAACygAAABsAAAADAAAABAAAAbYAAAApAAAAAgAAAAFzb3Vu
ZAAAAAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAIAAAACAAAAAWNsb2NrcwAAAAAAAwAAAAsA
AAAAc2ltcGxlLWJ1cwAAAAAAAwAAAAQAAAAiAAAAAQAAAAMAAAAEAAAAMQAAAAAAAAABY2xvY2tA
MwAAAAADAAAADAAAAABmaXhlZC1jbG9jawAAAAADAAAABAAAAW8AAAADAAAAAwAAAAQAAAIKAAAA
AAAAAAMAAAAEAAADSm9zYwAAAAADAAAABAAAAzoDN/mAAAAAAwAAAAQAAAG2AAAABAAAAAIAAAAB
Y2xvY2tANAAAAAADAAAADAAAAABmaXhlZC1jbG9jawAAAAADAAAABAAAAW8AAAAEAAAAAwAAAAQA
AAIKAAAAAAAAAAMAAAAEAAADSm90ZwAAAAADAAAABAAAAzocnDgAAAAAAwAAAAQAAAG2AAAAFgAA
AAIAAAACAAAAAXBoeQAAAAADAAAADgAAAAB1c2Itbm9wLXhjZWl2AAAAAAAAAwAAAAQAAAQUAAAA
AAAAAAMAAAAEAAABtgAAABcAAAACAAAAAWFybS1wbXUAAAAAAwAAACYAAAAAYXJtLGNvcnRleC1h
NzItcG11AGFybSxjb3J0ZXgtYTE1LXBtdQAAAAAAAAMAAAAwAAABcwAAAAAAAAAQAAAABAAAAAAA
AAARAAAABAAAAAAAAAASAAAABAAAAAAAAAATAAAABAAAAAMAAAAQAAAEHwAAABwAAAAdAAAAHgAA
AB8AAAACAAAAAXRpbWVyAAAAAAAAAwAAABAAAAAAYXJtLGFybXY3LXRpbWVyAAAAAAMAAAAwAAAB
cwAAAAEAAAANAAAPCAAAAAEAAAAOAAAPCAAAAAEAAAALAAAPCAAAAAEAAAAKAAAPCAAAAAMAAAAA
AAAEMgAAAAMAAAAAAAAEVgAAAAIAAAABY3B1cwAAAAAAAAADAAAABAAAACIAAAABAAAAAwAAAAQA
AAAxAAAAAAAAAAMAAAARAAAEYGJyY20sYmNtMjgzNi1zbXAAAAAAAAAAAWNwdUAwAAAAAAAAAwAA
AAQAAARuY3B1AAAAAAMAAAAPAAAAAGFybSxjb3J0ZXgtYTcyAAAAAAADAAAABAAAAW8AAAAAAAAA
AwAAAAsAAARgc3Bpbi10YWJsZQAAAAAAAwAAAAgAAAR6AAAAAAAAANgAAAADAAAABAAAAbYAAAAc
AAAAAgAAAAFjcHVAMQAAAAAAAAMAAAAEAAAEbmNwdQAAAAADAAAADwAAAABhcm0sY29ydGV4LWE3
MgAAAAAAAwAAAAQAAAFvAAAAAQAAAAMAAAALAAAEYHNwaW4tdGFibGUAAAAAAAMAAAAIAAAEegAA
AAAAAADgAAAAAwAAAAQAAAG2AAAAHQAAAAIAAAABY3B1QDIAAAAAAAADAAAABAAABG5jcHUAAAAA
AwAAAA8AAAAAYXJtLGNvcnRleC1hNzIAAAAAAAMAAAAEAAABbwAAAAIAAAADAAAACwAABGBzcGlu
LXRhYmxlAAAAAAADAAAACAAABHoAAAAAAAAA6AAAAAMAAAAEAAABtgAAAB4AAAACAAAAAWNwdUAz
AAAAAAAAAwAAAAQAAARuY3B1AAAAAAMAAAAPAAAAAGFybSxjb3J0ZXgtYTcyAAAAAAADAAAABAAA
AW8AAAADAAAAAwAAAAsAAARgc3Bpbi10YWJsZQAAAAAAAwAAAAgAAAR6AAAAAAAAAPAAAAADAAAA
BAAAAbYAAAAfAAAAAgAAAAIAAAABdjNkYnVzAAAAAAADAAAACwAAAABzaW1wbGUtYnVzAAAAAAAD
AAAABAAAACIAAAABAAAAAwAAAAQAAAAxAAAAAQAAAAMAAAAgAAABXXxQAAAAAAAA/FAAAAMwAABA
AAAAAAAAAP+AAAAAgAAAAAAAAwAAABAAAAFkAAAAAAAAAAAAAAAAPAAAAAAAAAF2M2RAN2VjMDQw
MDAAAAAAAAAAAwAAAA4AAAAAYnJjbSwyNzExLXYzZAAAAAAAAAMAAAAQAAABb37AAAAAAEAAfsBA
AAAAQAAAAAADAAAACgAABItodWIAY29yZTAAAAAAAAADAAAACAAAA10AAAAgAAAAAQAAAAMAAAAI
AAAElQAAACAAAAAAAAAAAwAAAAgAAAHfAAAAAwAAABUAAAADAAAADAAAAXMAAAAAAAAASgAAAAQA
AAADAAAACQAAAX5kaXNhYmxlZAAAAAAAAAACAAAAAgAAAAFzY2IAAAAAAwAAAAsAAAAAc2ltcGxl
LWJ1cwAAAAAAAwAAAAQAAAAiAAAAAgAAAAMAAAAEAAAAMQAAAAEAAAADAAAAUAAAAV0AAAAAfAAA
AAAAAAD8AAAAA4AAAAAAAABAAAAAAAAAAP+AAAAAgAAAAAAABgAAAAAAAAAGAAAAAEAAAAAAAAAA
AAAAAAAAAAAAAAAA/AAAAAAAAAMAAAA8AAABZAAAAAAAAAAAAAAAAAAAAAD8AAAAAAAAAQAAAAAA
AAABAAAAAIAAAAAAAAABgAAAAAAAAAGAAAAAgAAAAAAAAAFwY2llQDdkNTAwMDAwAAAAAAAAAwAA
ABgAAAFvAAAAAH1QAAAAAJMQAAAAAH4A8wAAAAAgAAAAAwAAAAAAAAScAAAAAwAAAAQAAASrAAAA
IQAAAAMAAAAEAAAAIgAAAAMAAAADAAAABAAAAl0AAAABAAAAAwAAAAQAAAAxAAAAAgAAAAMAAAAI
AAAEtgAAAAAAAAABAAAAAwAAADgAAAAAYnJjbSxiY20yNzExYjAtcGNpZQBicmNtLGJjbTI3MTEt
cGNpZQBicmNtLHBjaS1wbGF0LWRldgAAAAADAAAABAAABMAAAAACAAAAAwAAAAQAAATPAAAAAQAA
AAMAAAAEAAAE3AAAAAAAAAADAAAAGAAAAXMAAAAAAAAAlAAAAAQAAAAAAAAAlAAAAAQAAAADAAAA
CQAAAYVwY2llAG1zaQAAAAAAAAADAAAAEAAABO0AAAAAAAAAAAAAAAAAAAAHAAAAAwAAAIAAAAUA
AAAAAAAAAAAAAAAAAAAAAQAAAAEAAAAAAAAAjwAAAAQAAAAAAAAAAAAAAAAAAAACAAAAAQAAAAAA
AACQAAAABAAAAAAAAAAAAAAAAAAAAAMAAAABAAAAAAAAAJEAAAAEAAAAAAAAAAAAAAAAAAAABAAA
AAEAAAAAAAAAkgAAAAQAAAADAAAAHAAAAV0CAAAAAAAAAPgAAAAAAAAGAAAAAAAAAAAEAAAAAAAA
AwAAABwAAAFkAgAAAAAAAAAAAAAAAAAAAAAAAAAAAAABAAAAAAAAAAMAAAAFAAABfm9rYXkAAAAA
AAAAAwAAAAQAAAG2AAAAIQAAAAMAAAAEAAAEbnBjaQAAAAACAAAAAWdlbmV0QDdkNTgwMDAwAAAA
AAADAAAADgAAAABicmNtLGdlbmV0LXY1AAAAAAAAAwAAAAwAAAFvAAAAAH1YAAAAAQAAAAAAAwAA
AAUAAAF+b2theQAAAAAAAAADAAAABAAAACIAAAABAAAAAwAAAAQAAAAxAAAAAQAAAAMAAAAYAAAB
cwAAAAAAAACdAAAABAAAAAAAAACeAAAABAAAAAMAAAAEAAAFDgAAACIAAAADAAAABgAABRlyZ21p
aQAAAAAAAAFtZGlvQGUxNAAAAAAAAAADAAAABAAAACIAAAAAAAAAAwAAAAQAAAAxAAAAAQAAAAMA
AAATAAAAAGJyY20sZ2VuZXQtbWRpby12NQAAAAAAAwAAAAgAAAFvAAAOFAAAAAgAAAADAAAABQAA
BIttZGlvAAAAAAAAAAFnZW5ldC1waHlAMAAAAAADAAAAGwAAAABldGhlcm5ldC1waHktaWVlZTgw
Mi4zLWMyMgAAAAAAAwAAAAQAAAUiAAAD6AAAAAMAAAAEAAABbwAAAAEAAAADAAAACAAABSwAAAAA
AAAACAAAAAMAAAAEAAABtgAAACIAAAACAAAAAgAAAAIAAAABZG1hQDdlMDA3YjAwAAAAAAAAAAMA
AAARAAAAAGJyY20sYmNtMjgzOC1kbWEAAAAAAAAAAwAAAAwAAAFvAAAAAH4AewAAAAQAAAAAAwAA
ADAAAAFzAAAAAAAAAFkAAAAEAAAAAAAAAFoAAAAEAAAAAAAAAFsAAAAEAAAAAAAAAFwAAAAEAAAA
AwAAABgAAAGFZG1hMTEAZG1hMTIAZG1hMTMAZG1hMTQAAAAAAwAAAAQAAAGVAAAAAQAAAAMAAAAE
AAABoAAAcAAAAAADAAAABAAAAbYAAAAyAAAAAgAAAAF4aGNpQDdlOWMwMDAwAAAAAAAAAwAAAA0A
AAAAZ2VuZXJpYy14aGNpAAAAAAAAAAMAAAAJAAABfmRpc2FibGVkAAAAAAAAAAMAAAAMAAABbwAA
AAB+nAAAABAAAAAAAAMAAAAMAAABcwAAAAAAAACwAAAABAAAAAIAAAABaGV2Yy1kZWNvZGVyQDdl
YjAwMDAwAAAAAAAAAwAAACAAAAAAcmFzcGJlcnJ5cGkscnBpdmlkLWhldmMtZGVjb2RlcgAAAAAD
AAAADAAAAW8AAAAAfrAAAAABAAAAAAADAAAABQAAAX5va2F5AAAAAAAAAAIAAAABcnBpdmlkLWxv
Y2FsLWludGNAN2ViMTAwMDAAAAAAAAMAAAAeAAAAAHJhc3BiZXJyeXBpLHJwaXZpZC1sb2NhbC1p
bnRjAAAAAAAAAwAAAAwAAAFvAAAAAH6xAAAAABAAAAAAAwAAAAUAAAF+b2theQAAAAAAAAADAAAA
DAAAAXMAAAAAAAAAYgAAAAQAAAACAAAAAWgyNjQtZGVjb2RlckA3ZWIyMDAwMAAAAAAAAAMAAAAg
AAAAAHJhc3BiZXJyeXBpLHJwaXZpZC1oMjY0LWRlY29kZXIAAAAAAwAAAAwAAAFvAAAAAH6yAAAA
AQAAAAAAAwAAAAUAAAF+b2theQAAAAAAAAACAAAAAXZwOS1kZWNvZGVyQDdlYjMwMDAwAAAAAAAA
AAMAAAAfAAAAAHJhc3BiZXJyeXBpLHJwaXZpZC12cDktZGVjb2RlcgAAAAAAAwAAAAwAAAFvAAAA
AH6zAAAAAQAAAAAAAwAAAAUAAAF+b2theQAAAAAAAAACAAAAAW1haWxib3hAN2UwMGI4NDAAAAAA
AAAAAwAAABMAAAAAYnJjbSxiY20yODM4LXZjaGlxAAAAAAADAAAADAAAAW8AAAAAfgC4QAAAADwA
AAADAAAADAAAAXMAAAAAAAAAIgAAAAQAAAACAAAAAgAAAAFfX292ZXJyaWRlc19fAAAAAAAAAwAA
AAAAAAU2AAAAAwAAAAAAAAVFAAAAAwAAAAAAAAVPAAAAAwAAAAAAAAVdAAAAAwAAAAAAAAVmAAAA
AwAAAAAAAAVvAAAAAwAAAAsAAACCAAAAI3N0YXR1cwAAAAAAAwAAAAsAAACiAAAAJHN0YXR1cwAA
AAAAAwAAAAsAAACUAAAAJXN0YXR1cwAAAAAAAwAAAAsAAAV/AAAAJnN0YXR1cwAAAAAAAwAAAAsA
AACdAAAAJ3N0YXR1cwAAAAAAAwAAAAsAAAC7AAAAKHN0YXR1cwAAAAAAAwAAAAsAAAWDAAAAFXN0
YXR1cwAAAAAAAwAAABYAAAWZAAAAJ2Nsb2NrLWZyZXF1ZW5jeTowAAAAAAAAAwAAABYAAAWnAAAA
KGNsb2NrLWZyZXF1ZW5jeTowAAAAAAAAAwAAABYAAAW1AAAAFWNsb2NrLWZyZXF1ZW5jeTowAAAA
AAAAAwAAAAsAAABNAAAAKXN0YXR1cwAAAAAAAwAAAAsAAABlAAAAIHN0YXR1cwAAAAAAAwAAAAsA
AABuAAAAKnN0YXR1cwAAAAAAAwAAABgAAAXDAAAAK2JyY20sb3ZlcmNsb2NrLTUwOjAAAAAAAwAA
ABMAAAXQAAAALG5vbi1yZW1vdmFibGU/AAAAAAADAAAAFAAABd0AAAArYnJjbSxmb3JjZS1waW8/
AAAAAAMAAAAVAAAF6gAAACticmNtLHBpby1saW1pdDowAAAAAAAAAAMAAAAPAAAF9wAAACticmNt
LGRlYnVnAAAAAAADAAAAMAAABgAAAAAtYnJjbSxvdmVyY2xvY2stNTA6MAAAAAAuYnJjbSxvdmVy
Y2xvY2stNTA6MAAAAAADAAAACwAAANkAAAAvc3RhdHVzAAAAAAADAAAACwAABg8AAABncGlvczo0
AAAAAAADAAAACwAABhwAAABncGlvczo4AAAAAAADAAAAGQAABi4AAABsaW51eCxkZWZhdWx0LXRy
aWdnZXIAAAAAAAAAAwAAAAsAAAY+AAABZ3Bpb3M6NAAAAAAAAwAAAAsAAAZLAAABZ3Bpb3M6OAAA
AAAAAwAAABkAAAZdAAABbGludXgsZGVmYXVsdC10cmlnZ2VyAAAAAAAAAAMAAAAQAAAGbQAAACJs
ZWQtbW9kZXM6MAAAAAADAAAAEAAABnYAAAAibGVkLW1vZGVzOjQAAAAAAwAAACAAAAZ/AAAAJmRt
YXM6MD0AAAAAMgAAACZkbWFzOjg9AAAAADIAAAACAAAAAW1lbW9yeUAwAAAAAAAAAAMAAAAHAAAE
bm1lbW9yeQAAAAAAAwAAAAwAAAFvAAAAAAAAAAAAAAAAAAAAAgAAAAFsZWRzAAAAAAAAAAMAAAAK
AAAAAGdwaW8tbGVkcwAAAAAAAAFhY3QAAAAAAwAAAAUAAAaIbGVkMAAAAAAAAAADAAAABQAABo5r
ZWVwAAAAAAAAAAMAAAAFAAAGnG1tYzAAAAAAAAAAAwAAAAwAAAMiAAAADwAAACoAAAAAAAAAAwAA
AAQAAAG2AAAAMAAAAAIAAAABcHdyAAAAAAMAAAAFAAAGiGxlZDEAAAAAAAAAAwAAAAsAAAacZGVm
YXVsdC1vbgAAAAAAAwAAAAwAAAMiAAAAMwAAAAIAAAABAAAAAwAAAAQAAAG2AAAAMQAAAAIAAAAC
AAAAAWZpeGVkcmVndWxhdG9yXzN2MwAAAAAAAwAAABAAAAAAcmVndWxhdG9yLWZpeGVkAAAAAAMA
AAAEAAAGsjN2MwAAAAADAAAABAAABsEAMlqgAAAAAwAAAAQAAAbZADJaoAAAAAMAAAAAAAAG8QAA
AAIAAAABZml4ZWRyZWd1bGF0b3JfNXYwAAAAAAADAAAAEAAAAAByZWd1bGF0b3ItZml4ZWQAAAAA
AwAAAAQAAAayNXYwAAAAAAMAAAAEAAAGwQBMS0AAAAADAAAABAAABtkATEtAAAAAAwAAAAAAAAbx
AAAAAwAAAAQAAAG2AAAANgAAAAIAAAABZW1tYzJidXMAAAAAAAAAAwAAAAsAAAAAc2ltcGxlLWJ1
cwAAAAAAAwAAAAQAAAAiAAAAAgAAAAMAAAAEAAAAMQAAAAEAAAADAAAAFAAAAV0AAAAAfgAAAAAA
AAD+AAAAAYAAAAAAAAMAAAAUAAABZAAAAADAAAAAAAAAAAAAAAA8AAAAAAAAAWVtbWMyQDdlMzQw
MDAwAAAAAAADAAAAEwAAAABicmNtLGJjbTI3MTEtZW1tYzIAAAAAAAMAAAAFAAABfm9rYXkAAAAA
AAAAAwAAAAwAAAFzAAAAAAAAAH4AAAAEAAAAAwAAAAgAAAHfAAAAAwAAADMAAAADAAAADAAAAW8A
AAAAfjQAAAAAAQAAAAADAAAABAAABwUAAAA0AAAAAwAAAAAAAAcSAAAAAwAAAAQAAAccAAAANQAA
AAMAAAAEAAABtgAAACwAAAACAAAAAgAAAAFzZF9pb18xdjhfcmVnAAAAAAAAAwAAAAUAAAF+b2th
eQAAAAAAAAADAAAADwAAAAByZWd1bGF0b3ItZ3BpbwAAAAAAAwAAAAQAAAcoAAAANgAAAAMAAAAK
AAAGsnZkZC1zZC1pbwAAAAAAAAMAAAAEAAAGwQAbd0AAAAADAAAABAAABtkAMlqgAAAAAwAAAAAA
AAczAAAAAwAAAAAAAAbxAAAAAwAAAAQAAAdFAAATiAAAAAMAAAAMAAADIgAAADMAAAAEAAAAAAAA
AAMAAAAQAAAHYAAbd0AAAAABADJaoAAAAAAAAAADAAAABAAAAbYAAAA0AAAAAgAAAAFzZF92Y2Nf
cmVnAAAAAAADAAAAEAAAAAByZWd1bGF0b3ItZml4ZWQAAAAAAwAAAAcAAAaydmNjLXNkAAAAAAAD
AAAABAAABsEAMlqgAAAAAwAAAAQAAAbZADJaoAAAAAMAAAAAAAAHMwAAAAMAAAAAAAAHZwAAAAMA
AAAMAAAAfQAAADMAAAAGAAAAAAAAAAMAAAAEAAABtgAAADUAAAACAAAAAgAAAAljb21wYXRpYmxl
AG1vZGVsAGludGVycnVwdC1wYXJlbnQAI2FkZHJlc3MtY2VsbHMAI3NpemUtY2VsbHMAc2VyaWFs
MABzZXJpYWwxAGF1ZGlvAGF1eABzb3VuZABzb2MAZG1hAHdhdGNoZG9nAHJhbmRvbQBtYWlsYm94
AGdwaW8AdWFydDAAc2Rob3N0AG1tYzAAaTJzAHNwaTAAaTJjMAB1YXJ0MQBzcGkxAHNwaTIAbW1j
AG1tYzEAaTJjMQBpMmMyAHVzYgBsZWRzAGZiAHRoZXJtYWwAYXhpcGVyZgBtbWMyAGkyYzMAaTJj
NABpMmM1AGkyYzYAZXRoZXJuZXQwAHBjaWUwAGVtbWMyYnVzAGJvb3RhcmdzAHBvbGxpbmctZGVs
YXktcGFzc2l2ZQBwb2xsaW5nLWRlbGF5AHRoZXJtYWwtc2Vuc29ycwBjb2VmZmljaWVudHMAcmFu
Z2VzAGRtYS1yYW5nZXMAcmVnAGludGVycnVwdHMAc3RhdHVzAGludGVycnVwdC1uYW1lcwAjZG1h
LWNlbGxzAGJyY20sZG1hLWNoYW5uZWwtbWFzawBwaGFuZGxlACNwb3dlci1kb21haW4tY2VsbHMA
I3Jlc2V0LWNlbGxzAGNsb2NrcwBjbG9jay1uYW1lcwBzeXN0ZW0tcG93ZXItY29udHJvbGxlcgAj
Y2xvY2stY2VsbHMAZmlybXdhcmUAI21ib3gtY2VsbHMAZ3Bpby1jb250cm9sbGVyACNncGlvLWNl
bGxzAGludGVycnVwdC1jb250cm9sbGVyACNpbnRlcnJ1cHQtY2VsbHMAcGluY3RybC1uYW1lcwBi
cmNtLHBpbnMAYnJjbSxmdW5jdGlvbgBicmNtLHB1bGwAYXJtLHByaW1lY2VsbC1wZXJpcGhpZABj
dHMtZXZlbnQtd29ya2Fyb3VuZABwaW5jdHJsLTAAZG1hcwBkbWEtbmFtZXMAYnVzLXdpZHRoAGJy
Y20sb3ZlcmNsb2NrLTUwAGJyY20scGlvLWxpbWl0ACNzb3VuZC1kYWktY2VsbHMAY3MtZ3Bpb3MA
c3BpLW1heC1mcmVxdWVuY3kAY2xvY2stZnJlcXVlbmN5AGNsb2NrLW91dHB1dC1uYW1lcwBwb3dl
ci1kb21haW5zAGFzc2lnbmVkLWNsb2NrcwBhc3NpZ25lZC1jbG9jay1yYXRlcwAjcHdtLWNlbGxz
AGRhdGEtbGFuZXMAZGRjAHBoeXMAcGh5LW5hbWVzACN0aGVybWFsLXNlbnNvci1jZWxscwBub24t
cmVtb3ZhYmxlAGJyY20sZmlybXdhcmUAbWJveGVzAGdwaW8tbGluZS1uYW1lcwBicmNtLHB3bS1j
aGFubmVscwAjcGh5LWNlbGxzAGludGVycnVwdC1hZmZpbml0eQBhcm0sY3B1LXJlZ2lzdGVycy1u
b3QtZnctY29uZmlndXJlZABhbHdheXMtb24AZW5hYmxlLW1ldGhvZABkZXZpY2VfdHlwZQBjcHUt
cmVsZWFzZS1hZGRyAHJlZy1uYW1lcwByZXNldHMAbXNpLWNvbnRyb2xsZXIAbXNpLXBhcmVudABi
dXMtcmFuZ2UAbWF4LWxpbmstc3BlZWQAdG90LW51bS1wY2llAGxpbnV4LHBjaS1kb21haW4AaW50
ZXJydXB0LW1hcC1tYXNrAGludGVycnVwdC1tYXAAcGh5LWhhbmRsZQBwaHktbW9kZQBtYXgtc3Bl
ZWQAbGVkLW1vZGVzAGNhbTAtcHdkbi1jdHJsAGNhbTAtcHdkbgBjYW0wLWxlZC1jdHJsAGNhbTAt
bGVkAGFybV9mcmVxAGNhY2hlX2xpbmVfc2l6ZQBzcGkAaTJjMl9pa25vd3doYXRpbWRvaW5nAGky
YzBfYmF1ZHJhdGUAaTJjMV9iYXVkcmF0ZQBpMmMyX2JhdWRyYXRlAHNkX292ZXJjbG9jawBzZF9w
b2xsX29uY2UAc2RfZm9yY2VfcGlvAHNkX3Bpb19saW1pdABzZF9kZWJ1ZwBzZGlvX292ZXJjbG9j
awBhY3RfbGVkX2dwaW8AYWN0X2xlZF9hY3RpdmVsb3cAYWN0X2xlZF90cmlnZ2VyAHB3cl9sZWRf
Z3BpbwBwd3JfbGVkX2FjdGl2ZWxvdwBwd3JfbGVkX3RyaWdnZXIAZXRoX2xlZDAAZXRoX2xlZDEA
c3BpX2RtYTQAbGFiZWwAZGVmYXVsdC1zdGF0ZQBsaW51eCxkZWZhdWx0LXRyaWdnZXIAcmVndWxh
dG9yLW5hbWUAcmVndWxhdG9yLW1pbi1taWNyb3ZvbHQAcmVndWxhdG9yLW1heC1taWNyb3ZvbHQA
cmVndWxhdG9yLWFsd2F5cy1vbgB2cW1tYy1zdXBwbHkAYnJva2VuLWNkAHZtbWMtc3VwcGx5AHZp
bi1zdXBwbHkAcmVndWxhdG9yLWJvb3Qtb24AcmVndWxhdG9yLXNldHRsaW5nLXRpbWUtdXMAc3Rh
dGVzAGVuYWJsZS1hY3RpdmUtaGlnaAA=
--000000000000f8eaad05a49fe716--


From xen-devel-bounces@lists.xenproject.org Sat May 02 01:12:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 01:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUghm-0004D2-83; Sat, 02 May 2020 01:12:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=kx6T=6Q=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jUghk-0004Cw-Uv
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 01:12:32 +0000
X-Inumbo-ID: ffcd4072-8c11-11ea-9887-bc764e2007e4
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffcd4072-8c11-11ea-9887-bc764e2007e4;
 Sat, 02 May 2020 01:12:32 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id k81so8315195qke.5
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 18:12:32 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=9KyGVP5MPtns44PEQtRdyQS5fO/IUpYK0AZTnal2Ixs=;
 b=UfAR5kJvYkdKGQY58vrBFTC47A7JYH+C+tTQYfH3Pyr3x5SSopF70Fd/QvjoldkWfY
 l8kqaSJsfGuykrZjgQkkr+PdliASA6bbMga8WE6UxJBuAM7euzS33EtfJ+JazwqztoQ9
 OmtPhjWY4qwi3rKrrcZ0AlhEqXobpR4Da8/enAWPHzsCD0LWsiGy06kWnIQNINi/f1CW
 iKVkn8rSPTmdCW80ock6yia4Suwg072DTYkQCqgIYaWcAf5mxy9EFBErFLqW8dNildJy
 cpQRroLhvc+VjfVDxOv2otjzysQkloMDzYlp2l7Hg4uC5MIFSrOrwqLwYQRynjNmVrl2
 ZWsw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=9KyGVP5MPtns44PEQtRdyQS5fO/IUpYK0AZTnal2Ixs=;
 b=PYeAqwg6A+/3LrYtau4yEzJ+55rg0Zahn39WOpBbEjmTU5NUUuCzIAFCgqrdKlLGA0
 iO/PYWC/6XdkMeWdn2fUBGccuiGkf5dKHP/M1OiHWCiYRx4qTosmXxoHj90urAduIyYO
 zleBZeuxsA4IJ3/d7HaxstJ9nog/DUDmw2IfNLQ6ij3wzDRTMzZbdE1PVW0B6PivsQo+
 n2bCRc7wkccZV4Jhj5nEMsckiBD6zR9fT16AxUBjQXKmfpgAJbCllgimIXRCHcsNanUo
 MwV8a8lcsOE9BejF8LzyOtxvIIAafcMAvQS75wLD51Wp22TS64Egzt2V5wOLyeMtAzr7
 ZJhA==
X-Gm-Message-State: AGi0PuZXtz5VSoimZ1utFamdw4MrAgrteBMOD12UFeOaFSGddzpLsrHz
 Y5Zbw7DLq0XxGUjMDs8w9wegZk25yChSxD1JLxiD9w==
X-Google-Smtp-Source: APiQypLzAYwBi99dXA+Dbxc8cASeNvgv3hAMuGVLy0hybrMAoovgaTd43utBNqxpF/KpiALhpMFFE8yL05Q10aELkFk=
X-Received: by 2002:a05:620a:22a6:: with SMTP id
 p6mr6350913qkh.267.1588381951824; 
 Fri, 01 May 2020 18:12:31 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <alpine.DEB.2.21.2005011637380.28941@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2005011637380.28941@sstabellini-ThinkPad-T480s>
From: Roman Shaposhnik <roman@zededa.com>
Date: Fri, 1 May 2020 18:12:20 -0700
Message-ID: <CAMmSBy-+ebBN-QernLnDbW9t51igZBFVAX-RQLRJ72_Ut_j5Ww@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, minyard@acm.org,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano!

On Fri, May 1, 2020 at 5:05 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> Hi Roman,
>
>
> In regards to the attached stack trace, nothing rings a bell
> unfortunately. I don't know why quirk_usb_early_handoff causes a crash.
> It would be useful to add a few printk in quirk_usb_early_handoff to
> know where the crash is happening exactly.

That definitely seems to be a cascading error from our DMA issues.
Basically, when I completely disable DMA (as I showed before), this
issue goes away completely.

> In regards to Dornerworks's third patch, it doesn't look like it is
> related to the quirk_usb_early_handoff crash. The third patch is
> probably not useful anymore because dev->archdata.dev_dma_ops is gone
> completely.

Yup. That seems to be correct. Applying your WARN_ON(dev->dma_ops != NULL);
patch didn't show anything on the console -- so at least that is not
an issue anymore.

I think the focus really should be on:
    https://github.com/dornerworks/xen-rpi4-builder/blob/master/patches/linux/0002-Disable-DMA-in-sdhci-driver.patch

But even this patch, I think, papers over a cascading error from something
that is still somehow broken about DMA handling in Xen. IOW, it may very
well be the case that the reason the Dornerworks folks had to add:
    SDHCI_QUIRK_BROKEN_DMA | SDHCI_QUIRK_BROKEN_ADMA
is exactly the same reason we're now seeing DMA issues much earlier.

Do you think this is something that we could tackle? Like I mentioned,
having an upstream 5.x kernel capable of acting as Dom0 on the RPi 4
would be a huge deal for the Xen on ARM development community.

It seems that we're within reach of that -- I just wish I knew in which
direction to take it.

Thanks,
Roman.

> However, just in case, something like the following would
> help recognize if the original bug still persists in newer kernels
> somehow:
>
>
> diff --git a/arch/arm64/mm/dma-mapping.c b/arch/arm64/mm/dma-mapping.c
> index 6c45350e33aa..61af12d79add 100644
> --- a/arch/arm64/mm/dma-mapping.c
> +++ b/arch/arm64/mm/dma-mapping.c
> @@ -53,7 +53,9 @@ void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
>                 iommu_setup_dma_ops(dev, dma_base, size);
>
>  #ifdef CONFIG_XEN
> -       if (xen_initial_domain())
> +       if (xen_initial_domain()) {
> +               WARN_ON(dev->dma_ops != NULL);
>                 dev->dma_ops = &xen_swiotlb_dma_ops;
> +       }
>  #endif
>  }
>
>
> On Thu, 30 Apr 2020, Roman Shaposhnik wrote:
> > Hi!
> >
> > I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
> > upstream kernel. The kernel itself works perfectly well
> > on the board. When I try booting it as Dom0 under Xen,
> > it goes into a stacktrace (attached).
> >
> > Looking at what nice folks over at Dornerworks have previously
> > done to make RPi kernels boot as Dom0 I've come across these
> > 3 patches:
> >     https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
> >
> > The first patch seems irrelevant (unless I'm missing something
> > really basic here). The 2nd patch applied with no issue (but
> > I don't think it is related) but the 3rd patch failed to apply on
> > account of 5.6.1 kernel no longer having:
> >     dev->archdata.dev_dma_ops
> > E.g.
> >     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
> >
> > I've tried to emulate the effect of the patch by simply introducing
> > a static variable that would signal that we already initialized
> > dev->dma_ops -- but that didn't help at all.
> >
> > I'm CCing Jeff Kubascik and Corey Minyard to see if the original
> > authors of those patches have any suggestions on how this may be
> > dealt with.
> >
> > Any advice would be greatly appreciated!
> >
> > Thanks,
> > Roman.
> >


From xen-devel-bounces@lists.xenproject.org Sat May 02 02:17:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 02:17:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUhi5-0000xi-0u; Sat, 02 May 2020 02:16:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zGBs=6Q=gmail.com=tcminyard@srs-us1.protection.inumbo.net>)
 id 1jUhi3-0000xd-Er
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 02:16:55 +0000
X-Inumbo-ID: fc6c6936-8c1a-11ea-ae69-bc764e2007e4
Received: from mail-oi1-x242.google.com (unknown [2607:f8b0:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc6c6936-8c1a-11ea-ae69-bc764e2007e4;
 Sat, 02 May 2020 02:16:52 +0000 (UTC)
Received: by mail-oi1-x242.google.com with SMTP id r66so1473300oie.5
 for <xen-devel@lists.xenproject.org>; Fri, 01 May 2020 19:16:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:date:from:to:cc:subject:message-id:reply-to:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=n5Ul91/uMGfL3z94+8q3hg8YyBQNrBPJwdvuXrDPA0A=;
 b=SIU7P8NAtiNjIALTo2zvBkID5PzltWD32ZO7ZKicxpWcd1W5PxSIuZoyhwrNWyKShl
 0RrrTwUswaVQXpKSdXikH0ioDeGtgu5zmcwsm3EAQCSp7lcretigQiCy+g/3e788Plrk
 BQ2cRXpax8IVZFmC4XY9qFWKDwddC3oMrWlglrugY5qnJXDZP7HhT64KzzohkQhyNUBu
 BIsV+OWdHMlZDnWSdZYxJsuaUMebe5kP+mnwKxKVQG9pF3CpJ2uaEdmCHpdd9g6Eum7k
 1BDne3npOfWgoZRaIaVdInOscXPF25vDfJDVIih7NaoesVPv93mtNZoxGpghBDtDttik
 3Bxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
 :reply-to:references:mime-version:content-disposition:in-reply-to
 :user-agent;
 bh=n5Ul91/uMGfL3z94+8q3hg8YyBQNrBPJwdvuXrDPA0A=;
 b=LKfIxX48NLsPbCRIC6OxP9l47UuGeCV0Gbmderv3DWbmayE1HfcgRub5k3r6KnmeRk
 o6dfms2t9J8faJfNbm106/uFj0KbZiuXcteR4Ge2C97mw+ZbzzJfrCudS+Cn91+3YAC8
 /NtofuavNj6JESqh62xqWnRQR9wwEk+wXVcRp1XBt5ykxoneXjuDgUDrjAWC6lRumusy
 Tz974CFfe4mbpDhvoMrzD0AAXCriZPppFkdpef2R6h7kD1DI5HqnGD7BGXR4Nf54r1R1
 QIVuE6KzSyl9TD1nQKcRbNqv3tkHDsOlxd/xcabmpBZw0VEah8RKcyQ1Mfbdpqcw90Vx
 3FLw==
X-Gm-Message-State: AGi0PuYK0jWk3r84zd3cfCVwFuKbshKMSUCpzSFUzVh9KW/VG3bk5p0y
 0gDy45MtxicrrfLL/NsZTQ==
X-Google-Smtp-Source: APiQypJUw8EL+gwPZRWJ40KOpmVGIJsPyty4rKgr5mQIsyQgu+SRmRW7tdLVM3eSiDZj0MRC2vQC2A==
X-Received: by 2002:aca:5643:: with SMTP id k64mr1782926oib.152.1588385811155; 
 Fri, 01 May 2020 19:16:51 -0700 (PDT)
Received: from serve.minyard.net ([47.184.149.130])
 by smtp.gmail.com with ESMTPSA id z24sm1293483otq.75.2020.05.01.19.16.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 01 May 2020 19:16:49 -0700 (PDT)
Received: from minyard.net (unknown
 [IPv6:2001:470:b8f6:1b:8b39:c3f3:f502:5c4e])
 by serve.minyard.net (Postfix) with ESMTPSA id 412AD180042;
 Sat,  2 May 2020 02:16:48 +0000 (UTC)
Date: Fri, 1 May 2020 21:16:47 -0500
From: Corey Minyard <minyard@acm.org>
To: Roman Shaposhnik <roman@zededa.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Message-ID: <20200502021647.GG9902@minyard.net>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: minyard@acm.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 01, 2020 at 06:06:11PM -0700, Roman Shaposhnik wrote:
> On Fri, May 1, 2020 at 4:42 AM Corey Minyard <minyard@acm.org> wrote:
> >
> > On Thu, Apr 30, 2020 at 07:20:05PM -0700, Roman Shaposhnik wrote:
> > > Hi!
> > >
> > > I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
> > > upstream kernel. The kernel itself works perfectly well
> > > on the board. When I try booting it as Dom0 under Xen,
> > > it goes into a stacktrace (attached).
> >
> > Getting Xen working on the Pi4 requires a lot of moving parts, and they
> > all have to be right.
> 
> Tell me about it! It is a pretty frustrating journey to align
> everything just right.
> On the other hand, it seems worthwhile to enable the RPi as an ARM
> development platform for Xen, given how ubiquitous it is.
> 
> Hence my attempt to combine a pristine upstream kernel (5.6.1) with
> pristine upstream Xen to enable a 100% upstream developer workflow on
> the RPi.
> 
> > > Looking at what nice folks over at Dornerworks have previously
> > > done to make RPi kernels boot as Dom0 I've come across these
> > > 3 patches:
> > >     https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
> > >
> > > The first patch seems irrelevant (unless I'm missing something
> > > really basic here).
> >
> > It might be irrelevant for your configuration, assuming that Xen gets
> > the right information from EFI.  I haven't tried EFI booting.
> 
> I'm doing a bit of a belt-and-suspenders strategy, really -- I'm
> actually using the u-boot UEFI implementation, which pre-populates the
> device tree, and then I'm also forcing an extra copy of it to be loaded
> explicitly via the GRUB devicetree command (GRUB runs as a UEFI payload).
> 
> I also have access to the semi-official TianoCore RPi4 port, which seems
> to work pretty well for booting all sorts of UEFI payloads on the RPi4:
> https://github.com/pftf/RPi4/releases/tag/v1.5
> 
> > > The 2nd patch applied with no issue (but
> > > I don't think it is related) but the 3rd patch failed to apply on
> > > account of 5.6.1 kernel no longer having:
> > >     dev->archdata.dev_dma_ops
> > > E.g.
> > >     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
> > >
> > > I've tried to emulate the effect of the patch by simply introducing
> > > a static variable that would signal that we already initialized
> > > dev->dma_ops -- but that didn't help at all.
> > >
> > > I'm CCing Jeff Kubascik and Corey Minyard to see if the original
> > > authors of those patches have any suggestions on how this may be
> > > dealt with.
> > >
> > > Any advice would be greatly appreciated!
> >
> > What's your Pi4 config.txt file look like? The GIC is not being handled
> > correctly, and I'm guessing that configuration is wrong.  You can't use
> > the stock config.txt file with Xen, you have to modify the configuration a
> > bit.
> 
> Understood. I'm actually using a custom one:
>     https://github.com/lf-edge/eve/blob/master/pkg/u-boot/rpi/config.txt
> 
> I could swear that I had a GIC setting in it -- but apparently not -- so
> I added the following at the top of what you could see at the URL above:
> 
> total_mem=4096
> enable_gic=1
> 
> > I think just adding:
> >
> > enable_gic=1
> > total_mem=1024
> 
> Right -- but my board has 4G memory -- so I think what I did above should work.

Nope.  If you say 4096M of RAM, your issue is almost certainly DMA, but
it's not (just) the Linux code.  On the Raspberry Pi 4, several devices
cannot DMA to above 1024M of RAM, but Xen does not handle this.  The
1024M of RAM is a limitation you will have to live with until Xen has a
fix.

I know I saw a reference for this, but I can't find it now.  I saw
someone say it booted with 4G, but I was unable to get it to boot
without setting total_mem=1024.

-corey

> 
> > might get it working, or at least solve one problem.  It's required either
> > way.  That might get rid of the GIC errors at the beginning.  I see a
> > few of those, but not that many.
> >
> > My kernel command line is:
> >
> > coherent_pool=1M 8250.nr_uarts=1 cma=64M cma=256M  smsc95xx.macaddr=DC:A6:32:4F:3A:CD vc_mem.mem_base=0x3ec00000 vc_mem.mem_size=0x40000000  console=hvc0 clk_ignore_unused root=/dev/mmcblk0p2 rootwait
> >
> > A lot of that configuration gets pulled from the initialization done by
> > the GPU at startup which it put into the device tree.  I'm not sure what a
> > lot of it means.  Some of it is added by Xen, too.
> 
> Just to be on the safe side -- I'm now using the Device Tree from the Dornerworks build.
> The file is attached.
> 
> > I can verify the DMA patch is important.  I'm not sure how to apply it
> > to a 5.6 kernel, though.
> 
> OK, after chatting with Stefano it definitely seems that there's some kind of
> an issue with DMA. It just seems different to what the original issue with
> 4.19.x kernel used to be.
> 
> Here's the clue. When I disable Xen DMA operations altogether with the
> following patch, I get the Linux kernel booting all the way to trying
> to mount a root filesystem (which sadly can't be found, since eMMC
> requires DMA FWIU). I hope this will provide enough of a clue to
> figure out what may be wrong with the 5.x kernel. I'm attaching the
> boot log with the following patch applied:
> 
> diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> index d40e9e5fc52b..c25ead822de0 100644
> --- a/arch/arm/xen/mm.c
> +++ b/arch/arm/xen/mm.c
> @@ -137,8 +137,7 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
>  static int __init xen_mm_init(void)
>  {
>         struct gnttab_cache_flush cflush;
> -       if (!xen_initial_domain())
> -               return 0;
> +       return 0;
>         xen_swiotlb_init(1, false);
> 
>         cflush.op = 0;
> 
> Thanks,
> Roman.

> Using modules provided by bootloader in FDT
> Xen 4.14-unstable (c/s Mon Apr 20 14:36:53 2020 +0100 git:8065e1b416-dirty) EFI loader
> Warning: Could not query variable store: 0x8000000000000003
> - UART enabled -
> - Boot CPU booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) Checking for initrd in /chosen
> (XEN) RAM: 0000000000001000 - 0000000007ef1fff
> (XEN) RAM: 0000000007ef2000 - 0000000007f0dfff
> (XEN) RAM: 0000000007f0e000 - 000000002bc45fff
> (XEN) RAM: 000000002bc46000 - 000000002bc50fff
> (XEN) RAM: 000000002bc51000 - 000000002bd93fff
> (XEN) RAM: 000000002bd94000 - 000000002d780fff
> (XEN) RAM: 000000002d781000 - 000000003c9f6fff
> (XEN) RAM: 000000003c9f7000 - 000000003c9f8fff
> (XEN) RAM: 000000003c9fb000 - 000000003c9fdfff
> (XEN) RAM: 000000003c9fe000 - 000000003cb08fff
> (XEN) RAM: 000000003cb10000 - 000000003cb10fff
> (XEN) RAM: 000000003cb12000 - 000000003cb13fff
> (XEN) RAM: 000000003cb1b000 - 000000003cb1cfff
> (XEN) RAM: 000000003cb1e000 - 000000003df3ffff
> (XEN) RAM: 000000003df50000 - 000000003dffffff
> (XEN) RAM: 0000000040000000 - 00000000fbffffff
> (XEN)
> (XEN) MODULE[0]: 000000002bc51000 - 000000002bd930c8 Xen
> (XEN) MODULE[1]: 000000002bc47000 - 000000002bc51000 Device Tree
> (XEN) MODULE[2]: 000000002bd9d000 - 000000002d67c200 Kernel
> (XEN)
> (XEN) CMDLINE[000000002bd9d000]:chosen console=hvc0 earlyprintk=xen nomodeset
> (XEN)
> (XEN) Command line: dom0_mem=1024M,max:1024M dom0_max_vcpus=1
> (XEN) parameter "dom0_mem" has invalid value "1024M,max:1024M", rc=-22!
> (XEN) Domain heap initialised
> (XEN) Booting using Device Tree
> (XEN) Platform: Raspberry Pi 4
> (XEN) No dtuart path configured
> (XEN) Bad console= option 'dtuart'
>  Xen 4.14-unstable
> (XEN) Xen version 4.14-unstable (@) (aarch64-linux-gnu-gcc (Ubuntu 9.3.0-10ubuntu2) 9.3.0) debug=y  Thu Apr 30 17:43:33 PDT 2020
> (XEN) Latest ChangeSet: Mon Apr 20 14:36:53 2020 +0100 git:8065e1b416-dirty
> (XEN) build-id: 988ba8f5257c494645df6d5c4c2d5fd9f39e2539
> (XEN) Processor: 410fd083: "ARM Limited", variant: 0x0, part 0xd08, rev 0x3
> (XEN) 64-bit Execution:
> (XEN)   Processor Features: 0000000000002222 0000000000000000
> (XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
> (XEN)     Extensions: FloatingPoint AdvancedSIMD
> (XEN)   Debug Features: 0000000010305106 0000000000000000
> (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
> (XEN)   Memory Model Features: 0000000000001124 0000000000000000
> (XEN)   ISA Features:  0000000000010000 0000000000000000
> (XEN) 32-bit Execution:
> (XEN)   Processor Features: 00000131:00011011
> (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
> (XEN)     Extensions: GenericTimer Security
> (XEN)   Debug Features: 03010066
> (XEN)   Auxiliary Features: 00000000
> (XEN)   Memory Model Features: 10201105 40000000 01260000 02102211
> (XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00010001
> (XEN) SMP: Allowing 4 CPUs
> (XEN) enabled workaround for: ARM erratum 1319537
> (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 54000 KHz
> (XEN) GICv2 initialization:
> (XEN)         gic_dist_addr=00000000ff841000
> (XEN)         gic_cpu_addr=00000000ff842000
> (XEN)         gic_hyp_addr=00000000ff844000
> (XEN)         gic_vcpu_addr=00000000ff846000
> (XEN)         gic_maintenance_irq=25
> (XEN) GICv2: 256 lines, 4 cpus, secure (IID 0200143b).
> (XEN) XSM Framework v1.0.0 initialized
> (XEN) Initialising XSM SILO mode
> (XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
> (XEN) Initializing Credit2 scheduler
> (XEN)  load_precision_shift: 18
> (XEN)  load_window_shift: 30
> (XEN)  underload_balance_tolerance: 0
> (XEN)  overload_balance_tolerance: -3
> (XEN)  runqueues arrangement: socket
> (XEN)  cap enforcement granularity: 10ms
> (XEN) load tracking window length 1073741824 ns
> (XEN) Allocated console ring of 32 KiB.
> (XEN) CPU0: Guest atomics will try 5 times before pausing the domain
> (XEN) Bringing up CPU1
> - CPU 00000001 booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) CPU1: Guest atomics will try 5 times before pausing the domain
> (XEN) CPU 1 booted.
> (XEN) Bringing up CPU2
> - CPU 00000002 booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) CPU2: Guest atomics will try 5 times before pausing the domain
> (XEN) CPU 2 booted.
> (XEN) Bringing up CPU3
> - CPU 00000003 booting -
> - Current EL 00000008 -
> - Initialize CPU -
> - Turning on paging -
> - Ready -
> (XEN) CPU3: Guest atomics will try 5 times before pausing the domain
> (XEN) CPU 3 booted.
> (XEN) Brought up 4 CPUs
> (XEN) I/O virtualisation disabled
> (XEN) P2M: 44-bit IPA with 44-bit PA and 8-bit VMID
> (XEN) P2M: 4 levels with order-0 root, VTCR 0x80043594
> (XEN) Adding cpu 0 to runqueue 0
> (XEN)  First cpu on runqueue, activating
> (XEN) Adding cpu 1 to runqueue 0
> (XEN) Adding cpu 2 to runqueue 0
> (XEN) Adding cpu 3 to runqueue 0
> (XEN) alternatives: Patching with alt table 00000000002d4208 -> 00000000002d491c
> (XEN) *** LOADING DOMAIN 0 ***
> (XEN) Loading d0 kernel from boot module @ 000000002bd9d000
> (XEN) Allocating 1:1 mappings totalling 1024MB for dom0:
> (XEN) BANK[0] 0x00000040000000-0x00000080000000 (1024MB)
> (XEN) Grant table range: 0x0000002bc51000-0x0000002bc91000
> (XEN) Allocating PPI 16 for event channel interrupt
> (XEN) Loading zImage from 000000002bd9d000 to 0000000040080000-000000004195f200
> (XEN) Loading d0 DTB to 0x0000000048000000-0x0000000048006e0a
> (XEN) Initial low memory virq threshold set at 0x4000 pages.
> (XEN) Scrubbing Free RAM in background
> (XEN) Std. Loglevel: All
> (XEN) Guest Loglevel: All
> (XEN) ***************************************************
> (XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
> (XEN) Please update your firmware.
> (XEN) ***************************************************
> (XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
> (XEN) Please update your firmware.
> (XEN) ***************************************************
> (XEN) No support for ARM_SMCCC_ARCH_WORKAROUND_1.
> (XEN) Please update your firmware.
> (XEN) ***************************************************
> (XEN) 3... 2... 1...
> (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
> (XEN) Freed 348kB init memory.
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER4
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER8
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER12
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER16
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER20
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER24
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER28
> (XEN) d0v0: vGICD: unhandled word write 0x000000ffffffff to ICACTIVER0
> [    0.000000] Booting Linux on physical CPU 0x0000000000 [0x410fd083]
> [    0.000000] Linux version 5.6.1-default (root@13024eb1d2cf) (gcc version 8.3.0 (Alpine 8.3.0)) #6 SMP Sat May 2 00:10:36 UTC 2020
> [    0.000000] Machine model: Raspberry Pi 4 Model B
> [    0.000000] Xen 4.14 support found
> [    0.000000] efi: Getting EFI parameters from FDT:
> [    0.000000] efi: UEFI not found.
> [    0.000000] NUMA: No NUMA configuration found
> [    0.000000] NUMA: Faking a node at [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000] NUMA: NODE_DATA [mem 0x7fdd42c0-0x7fdd7fff]
> [    0.000000] Zone ranges:
> [    0.000000]   DMA      [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000]   DMA32    empty
> [    0.000000]   Normal   empty
> [    0.000000] Movable zone start for each node
> [    0.000000] Early memory node ranges
> [    0.000000]   node   0: [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000] Initmem setup node 0 [mem 0x0000000040000000-0x000000007fffffff]
> [    0.000000] psci: probing for conduit method from DT.
> [    0.000000] psci: PSCIv1.1 detected in firmware.
> [    0.000000] psci: Using standard PSCI v0.2 function IDs
> [    0.000000] psci: Trusted OS migration not required
> [    0.000000] psci: SMC Calling Convention v1.1
> [    0.000000] percpu: Embedded 23 pages/cpu s54232 r8192 d31784 u94208
> [    0.000000] Detected PIPT I-cache on CPU0
> [    0.000000] CPU features: detected: EL2 vector hardening
> [    0.000000] CPU features: detected: Speculative Store Bypass Disable
> [    0.000000] CPU features: detected: ARM erratum 1319367
> [    0.000000] Built 1 zonelists, mobility grouping on.  Total pages: 258048
> [    0.000000] Policy zone: DMA
> [    0.000000] Kernel command line: console=hvc0 earlyprintk=xen nomodeset
> [    0.000000] Dentry cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
> [    0.000000] Inode-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
> [    0.000000] mem auto-init: stack:off, heap alloc:off, heap free:off
> [    0.000000] Memory: 1002060K/1048576K available (12732K kernel code, 1852K rwdata, 6184K rodata, 4672K init, 758K bss, 46516K reserved, 0K cma-reserved)
> [    0.000000] random: get_random_u64 called from __kmem_cache_create+0x40/0x578 with crng_init=0
> [    0.000000] SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=1, Nodes=1
> [    0.000000] rcu: Hierarchical RCU implementation.
> [    0.000000] rcu: 	RCU restricting CPUs from NR_CPUS=480 to nr_cpu_ids=1.
> [    0.000000] rcu: RCU calculated value of scheduler-enlistment delay is 10 jiffies.
> [    0.000000] rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=1
> [    0.000000] NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
> [    0.000000] arch_timer: cp15 timer(s) running at 54.00MHz (virt).
> [    0.000000] clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0xc743ce346, max_idle_ns: 440795203123 ns
> [    0.000005] sched_clock: 56 bits at 54MHz, resolution 18ns, wraps every 4398046511102ns
> [    0.000273] Console: colour dummy device 80x25
> [    0.270836] printk: console [hvc0] enabled
> [    0.275124] Calibrating delay loop (skipped), value calculated using timer frequency.. 108.00 BogoMIPS (lpj=540000)
> [    0.285678] pid_max: default: 32768 minimum: 301
> [    0.290511] LSM: Security Framework initializing
> [    0.295221] Mount-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
> [    0.302730] Mountpoint-cache hash table entries: 2048 (order: 2, 16384 bytes, linear)
> [    0.313228] xen:grant_table: Grant tables using version 1 layout
> [    0.318714] Grant table initialized
> [    0.322312] xen:events: Using FIFO-based ABI
> [    0.326760] Xen: initializing cpu0
> [    0.330353] rcu: Hierarchical SRCU implementation.
> [    0.338817] EFI services will not be available.
> [    0.342959] smp: Bringing up secondary CPUs ...
> [    0.347479] smp: Brought up 1 node, 1 CPU
> [    0.351575] SMP: Total of 1 processors activated.
> [    0.356408] CPU features: detected: 32-bit EL0 Support
> [    0.361689] CPU features: detected: CRC32 instructions
> [    0.396295] CPU: All CPU(s) started at EL1
> [    0.399867] alternatives: patching kernel code
> [    0.405872] devtmpfs: initialized
> [    0.413580] Registered cp15_barrier emulation handler
> [    0.418164] Registered setend emulation handler
> [    0.422741] KASLR disabled due to lack of seed
> [    0.427659] clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604462750000 ns
> [    0.437360] futex hash table entries: 256 (order: 2, 16384 bytes, linear)
> [    0.444449] pinctrl core: initialized pinctrl subsystem
> [    0.450684] thermal_sys: Registered thermal governor 'fair_share'
> [    0.450687] thermal_sys: Registered thermal governor 'bang_bang'
> [    0.456235] thermal_sys: Registered thermal governor 'step_wise'
> [    0.462423] thermal_sys: Registered thermal governor 'user_space'
> [    0.468654] DMI not present or invalid.
> [    0.479281] NET: Registered protocol family 16
> [    0.483559] DMA: preallocated 256 KiB pool for atomic allocations
> [    0.489539] audit: initializing netlink subsys (disabled)
> [    0.496103] audit: type=2000 audit(0.370:1): state=initialized audit_enabled=0 res=1
> [    0.503951] hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
> [    0.510341] ASID allocator initialised with 65536 entries
> [    0.516925] Serial: AMBA PL011 UART driver
> [    0.539528] HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
> [    0.545702] HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
> [    0.552578] HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
> [    0.559398] HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
> [    0.570438] cryptd: max_cpu_qlen set to 1000
> [    0.580347] ACPI: Interpreter disabled.
> [    0.584023] xen:balloon: Initialising balloon driver
> [    0.590395] iommu: Default domain type: Translated
> [    0.595026] vgaarb: loaded
> [    0.597933] SCSI subsystem initialized
> [    0.601940] usbcore: registered new interface driver usbfs
> [    0.607076] usbcore: registered new interface driver hub
> [    0.612627] usbcore: registered new device driver usb
> [    0.617790] usb_phy_generic phy: phy supply vcc not found, using dummy regulator
> [    0.625532] pps_core: LinuxPPS API ver. 1 registered
> [    0.630281] pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
> [    0.639665] PTP clock support registered
> [    0.643762] EDAC MC: Ver: 3.0.0
> [    0.648106] NetLabel: Initializing
> [    0.650959] NetLabel:  domain hash size = 128
> [    0.655412] NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
> [    0.661291] NetLabel:  unlabeled traffic allowed by default
> [    0.667445] clocksource: Switched to clocksource arch_sys_counter
> [    0.673415] VFS: Disk quotas dquot_6.6.0
> [    0.677266] VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
> [    0.684429] pnp: PnP ACPI: disabled
> [    0.694684] NET: Registered protocol family 2
> [    0.699014] tcp_listen_portaddr_hash hash table entries: 512 (order: 1, 8192 bytes, linear)
> [    0.707026] TCP established hash table entries: 8192 (order: 4, 65536 bytes, linear)
> [    0.714970] TCP bind hash table entries: 8192 (order: 5, 131072 bytes, linear)
> [    0.722397] TCP: Hash tables configured (established 8192 bind 8192)
> [    0.728962] UDP hash table entries: 512 (order: 2, 16384 bytes, linear)
> [    0.735547] UDP-Lite hash table entries: 512 (order: 2, 16384 bytes, linear)
> [    0.742928] NET: Registered protocol family 1
> [    0.747235] PCI: CLS 0 bytes, default 64
> [    0.751794] kvm [1]: HYP mode not available
> [    0.762848] Initialise system trusted keyrings
> [    0.766865] workingset: timestamp_bits=40 max_order=18 bucket_order=0
> [    0.779692] zbud: loaded
> [    0.783113] squashfs: version 4.0 (2009/01/31) Phillip Lougher
> [    0.788995] 9p: Installing v9fs 9p2000 file system support
> [    0.818061] Key type asymmetric registered
> [    0.821593] Asymmetric key parser 'x509' registered
> [    0.826628] Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
> [    0.834307] io scheduler mq-deadline registered
> [    0.838829] io scheduler kyber registered
> [    0.843090] io scheduler bfq registered
> [    0.851833] shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
> [    0.859316] brcm-pcie fd500000.pcie: host bridge /scb/pcie@7d500000 ranges:
> [    0.865766] brcm-pcie fd500000.pcie:      MEM 0x0600000000..0x0603ffffff -> 0x00f8000000
> [    0.874065] brcm-pcie fd500000.pcie:   IB MEM 0x0000000000..0x00ffffffff -> 0x0000000000
> [    0.937473] brcm-pcie fd500000.pcie: link up, 5 GT/s x1 (!SSC)
> [    0.942977] brcm-pcie fd500000.pcie: PCI host bridge to bus 0000:00
> [    0.949188] pci_bus 0000:00: root bus resource [bus 00-01]
> [    0.954793] pci_bus 0000:00: root bus resource [mem 0x600000000-0x603ffffff] (bus address [0xf8000000-0xfbffffff])
> [    0.965361] pci 0000:00:00.0: [14e4:2711] type 01 class 0x060400
> [    0.971562] pci 0000:00:00.0: PME# supported from D0 D3hot
> (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=25: not implemented
> (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> [    0.987468] pci 0000:00:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
> [    0.997325] pci 0000:00:00.0: bridge configuration invalid ([bus 00-00]), reconfiguring
> [    1.004931] pci 0000:01:00.0: [1106:3483] type 00 class 0x0c0330
> [    1.010995] pci 0000:01:00.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
> [    1.018002] pci 0000:01:00.0: PME# supported from D0 D3cold
> (XEN) physdev.c:16:d0v0 PHYSDEVOP cmd=15: not implemented
> [    1.028797] pci 0000:01:00.0: Failed to add - passthrough or MSI/MSI-X might fail!
> [    1.038631] pci_bus 0000:01: busn_res: [bus 01] end is updated to 01
> [    1.044464] pci 0000:00:00.0: BAR 14: assigned [mem 0x600000000-0x6000fffff]
> [    1.051664] pci 0000:01:00.0: BAR 0: assigned [mem 0x600000000-0x600000fff 64bit]
> [    1.059305] pci 0000:00:00.0: PCI bridge to [bus 01]
> [    1.064377] pci 0000:00:00.0:   bridge window [mem 0x600000000-0x6000fffff]
> [    1.071642] pcieport 0000:00:00.0: enabling device (0000 -> 0002)
> [    1.077882] pcieport 0000:00:00.0: PME: Signaling with IRQ 32
> [    1.083871] pcieport 0000:00:00.0: AER: enabled with IRQ 32
> [    1.089489] pci 0000:01:00.0: enabling device (0000 -> 0002)
> [    1.103732] ------------[ cut here ]------------
> [    1.107828] bcm2835-dma fe007000.dma: DMA addr 0x0000000101963000+4096 overflow (mask ffffffff, bus limit fbffffff).
> [    1.118549] WARNING: CPU: 0 PID: 1 at kernel/dma/direct.c:364 dma_direct_map_page+0x16c/0x180
> [    1.127194] Modules linked in:
> [    1.130360] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.6.1-default #6
> [    1.137028] Hardware name: Raspberry Pi 4 Model B (DT)
> [    1.142309] pstate: 60000005 (nZCv daif -PAN -UAO)
> [    1.147217] pc : dma_direct_map_page+0x16c/0x180
> [    1.151967] lr : dma_direct_map_page+0x16c/0x180
> [    1.156695] sp : ffff800010013ad0
> [    1.160121] x29: ffff800010013ad0 x28: 0000000000000000
> [    1.165564] x27: ffff80001130044c x26: ffff8000113d3960
> [    1.171007] x25: ffff8000112f40a8 x24: 0000000000000000
> [    1.176451] x23: ffff80001179b548 x22: ffff00003f9e6c10
> [    1.181904] x21: ffff00003f9e6c00 x20: ffff00003f9e6c10
> [    1.187339] x19: ffff80001179b548 x18: 0000000000000010
> [    1.192792] x17: 0000000000000001 x16: 00000000deadbeef
> [    1.198226] x15: ffffffffffffffff x14: 20776f6c66726576
> [    1.203671] x13: 6f20363930342b30 x12: 3030333639313031
> [    1.209114] x11: 3030303030303078 x10: 3020726464612041
> [    1.214558] x9 : 4d44203a616d642e x8 : ffff80001071e900
> [    1.220002] x7 : 00000000000000a5 x6 : ffff80001197a6f7
> [    1.225446] x5 : ffff800010e19d18 x4 : 0000000000000000
> [    1.230889] x3 : 0000000000000000 x2 : 00000000ffffffff
> [    1.236333] x1 : 745b439e6d834000 x0 : 0000000000000000
> [    1.241778] Call trace:
> [    1.244330]  dma_direct_map_page+0x16c/0x180
> [    1.248722]  bcm2835_dma_probe+0x4bc/0x558
> [    1.252942]  platform_drv_probe+0x50/0xa0
> [    1.257061]  really_probe+0xd8/0x438
> [    1.260748]  driver_probe_device+0xdc/0x130
> [    1.265051]  device_driver_attach+0x6c/0x78
> [    1.269352]  __driver_attach+0x9c/0x168
> [    1.273303]  bus_for_each_dev+0x70/0xc0
> [    1.277254]  driver_attach+0x20/0x28
> [    1.280942]  bus_add_driver+0x190/0x220
> [    1.284894]  driver_register+0x60/0x110
> [    1.288843]  __platform_driver_register+0x44/0x50
> [    1.293678]  bcm2835_dma_driver_init+0x18/0x20
> [    1.298240]  do_one_initcall+0x74/0x1b0
> [    1.302200]  kernel_init_freeable+0x214/0x2b0
> [    1.306669]  kernel_init+0x10/0x100
> [    1.310267]  ret_from_fork+0x10/0x18
> [    1.313960] ---[ end trace 2f723f6ea1e0757e ]---
> [    1.318726] bcm2835-dma fe007000.dma: Failed to map zero page
> [    1.324687] bcm2835-dma: probe of fe007000.dma failed with error -12
> [    1.334169] xen:xen_evtchn: Event-channel device installed
> [    1.339414] Initialising Xen pvcalls frontend driver
> [    1.345880] Serial: 8250/16550 driver, 32 ports, IRQ sharing enabled
> [    1.359258] bcm2835-aux-uart fe215040.serial: there is not valid maps for state default
> [    1.366842] OF: /soc/serial@7e215040: could not find phandle
> [    1.372605] bcm2835-aux-uart fe215040.serial: could not get clk: -2
> [    1.378998] bcm2835-aux-uart: probe of fe215040.serial failed with error -2
> [    1.386564] Serial: AMBA driver
> [    1.389877] msm_serial: driver initialized
> [    1.394719] cacheinfo: Unable to detect cache hierarchy for CPU 0
> [    1.409630] brd: module loaded
> [    1.419607] loop: module loaded
> [    1.438031] Invalid max_queues (4), will use default max: 1.
> [    1.454031] drbd: initialized. Version: 8.4.11 (api:1/proto:86-101)
> [    1.459804] drbd: built-in
> [    1.462592] drbd: registered as block device major 147
> [    1.468739] bcm2835-power bcm2835-power: Broadcom BCM2835 power domains driver
> [    1.476559] wireguard: allowedips self-tests: pass
> [    1.483341] wireguard: nonce counter self-tests: pass
> [    1.908628] wireguard: ratelimiter self-tests: pass
> [    1.913030] wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
> [    1.920984] wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
> [    1.932354] libphy: Fixed MDIO Bus: probed
> [    1.936408] bcmgenet fd580000.genet: using random Ethernet MAC
> [    1.941902] bcmgenet fd580000.genet: failed to get enet clock
> [    1.947758] bcmgenet fd580000.genet: GENET 5.0 EPHY: 0x0000
> [    1.953455] bcmgenet fd580000.genet: failed to get enet-wol clock
> [    1.959700] bcmgenet fd580000.genet: failed to get enet-eee clock
> [    1.977496] libphy: bcmgenet MII bus: probed
> [    2.057539] unimac-mdio unimac-mdio.-19: Broadcom UniMAC MDIO bus
> [    2.063755] xen_netfront: Initialising Xen virtual ethernet driver
> [    2.069721] xhci_hcd 0000:01:00.0: xHCI Host Controller
> [    2.074800] xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 1
> [    2.082779] xhci_hcd 0000:01:00.0: hcc params 0x002841eb hci version 0x100 quirks 0x0000000000000090
> [    2.092132] usb usb1: New USB device found, idVendor=1d6b, idProduct=0002, bcdDevice= 5.06
> [    2.100076] usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    2.107449] usb usb1: Product: xHCI Host Controller
> [    2.112437] usb usb1: Manufacturer: Linux 5.6.1-default xhci-hcd
> [    2.118595] usb usb1: SerialNumber: 0000:01:00.0
> [    2.123766] hub 1-0:1.0: USB hub found
> [    2.127217] hub 1-0:1.0: 1 port detected
> [    2.131662] xhci_hcd 0000:01:00.0: xHCI Host Controller
> [    2.136591] xhci_hcd 0000:01:00.0: new USB bus registered, assigned bus number 2
> [    2.144182] xhci_hcd 0000:01:00.0: Host supports USB 3.0 SuperSpeed
> [    2.150673] usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
> [    2.158916] usb usb2: New USB device found, idVendor=1d6b, idProduct=0003, bcdDevice= 5.06
> [    2.167228] usb usb2: New USB device strings: Mfr=3, Product=2, SerialNumber=1
> [    2.174618] usb usb2: Product: xHCI Host Controller
> [    2.179618] usb usb2: Manufacturer: Linux 5.6.1-default xhci-hcd
> [    2.185753] usb usb2: SerialNumber: 0000:01:00.0
> [    2.190885] hub 2-0:1.0: USB hub found
> [    2.194405] hub 2-0:1.0: 4 ports detected
> [    2.199145] usbcore: registered new interface driver usb-storage
> [    2.204877] mousedev: PS/2 mouse device common for all mice
> [    2.212659] bcm2835-wdt bcm2835-wdt: Broadcom BCM2835 watchdog timer
> [    2.219119] xen_wdt xen_wdt: initialized (timeout=60s, nowayout=0)
> [    2.225473] sdhci: Secure Digital Host Controller Interface driver
> [    2.231146] sdhci: Copyright(c) Pierre Ossman
> [    2.235825] sdhci-pltfm: SDHCI platform and OF driver helper
> [    2.273506] mmc0: SDHCI controller on fe300000.mmcnr [fe300000.mmcnr] using PIO
> [    2.281417] ledtrig-cpu: registered to indicate activity on CPUs
> [    2.287786] hid: raw HID events driver (C) Jiri Kosina
> [    2.292597] usbcore: registered new interface driver usbhid
> [    2.298095] usbhid: USB HID core driver
> [    2.302529] bcm2835-mbox fe00b880.mailbox: mailbox enabled
> [    2.311675] xt_time: kernel timezone is -0000
> [    2.315576] IPVS: Registered protocols ()
> [    2.319735] IPVS: Connection hash table configured (size=4096, memory=64Kbytes)
> [    2.327337] IPVS: ipvs loaded.
> [    2.330487] ipip: IPv4 and MPLS over IPv4 tunneling driver
> [    2.336456] gre: GRE over IPv4 demultiplexor driver
> [    2.341162] ipt_CLUSTERIP: ClusterIP Version 0.8 loaded successfully
> [    2.347888] Initializing XFRM netlink socket
> [    2.352567] NET: Registered protocol family 10
> [    2.357853] Segment Routing with IPv6
> [    2.362140] NET: Registered protocol family 17
> [    2.366065] mmc0: queuing unknown CIS tuple 0x80 (2 bytes)
> [    2.371775] NET: Registered protocol family 15
> [    2.376292] Bridge firewalling registered
> [    2.380510] 8021q: 802.1Q VLAN Support v1.8
> [    2.384881] 9pnet: Installing 9P2000 support
> [    2.389081] Initialising Xen transport for 9pfs
> [    2.393833] Key type dns_resolver registered
> [    2.398505] registered taskstats version 1
> [    2.402322] Loading compiled-in X.509 certificates
> [    2.407212] mmc0: queuing unknown CIS tuple 0x80 (3 bytes)
> [    2.416914] Loaded X.509 cert 'Build time autogenerated kernel key: ca85232d2e7ebc2af2feaa4812e2d651700446d8'
> [    2.426514] zswap: loaded using pool lzo/zbud
> [    2.431307] Key type ._fscrypt registered
> [    2.434991] mmc0: queuing unknown CIS tuple 0x80 (3 bytes)
> [    2.440690] Key type .fscrypt registered
> [    2.444619] Key type fscrypt-provisioning registered
> [    2.452476] mmc0: queuing unknown CIS tuple 0x80 (7 bytes)
> [    2.459548] uart-pl011 fe201000.serial: there is not valid maps for state default
> [    2.466774] fe201000.serial: ttyAMA1 at MMIO 0xfe201000 (irq = 18, base_baud = 0) is a PL011 rev2
> [    2.476384] mmc0: queuing unknown CIS tuple 0x80 (3 bytes)
> [    2.485092] raspberrypi-firmware soc:firmware: Request 0x00000001 returned status 0x00000000
> [    2.493157] raspberrypi-firmware soc:firmware: Request 0x00030046 returned status 0x00000000
> [    2.501734] usb 1-1: new high-speed USB device number 2 using xhci_hcd
> [    2.508813] raspberrypi-firmware soc:firmware: Request 0x00030007 returned status 0x00000000
> [    2.516938] raspberrypi-clk raspberrypi-clk: Failed to get pllb min freq: -22
> [    2.524205] raspberrypi-clk raspberrypi-clk: Failed to initialize pllb, -22
> [    2.531389] raspberrypi-clk: probe of raspberrypi-clk failed with error -22
> [    2.540422] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.548407] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 0 config (-22 80)
> [    2.556682] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.565255] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 1 config (-22 81)
> [    2.573574] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.582132] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 2 config (-22 82)
> [    2.590442] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.598984] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 3 config (-22 83)
> [    2.607290] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.615847] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 4 config (-22 84)
> [    2.624141] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.632700] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 5 config (-22 85)
> [    2.641020] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.649559] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 6 config (-22 86)
> [    2.657868] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.666409] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 7 config (-22 87)
> [    2.675053] usb 1-1: New USB device found, idVendor=2109, idProduct=3431, bcdDevice= 4.20
> [    2.683004] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> [    2.690253] usb 1-1: Product: USB2.0 Hub
> [    2.694944] raspberrypi-firmware soc:firmware: Request 0x00030030 returned status 0x00000000
> [    2.704156] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.712138] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 6 config (-22 86)
> [    2.720869] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.729006] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 6 config (-22 86)
> [    2.737296] reg-fixed-voltage: probe of sd_vcc_reg failed with error -22
> [    2.745091] hub 1-1:1.0: USB hub found
> [    2.748447] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.756917] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 4 config (-22 84)
> [    2.765256] raspberrypi-firmware soc:firmware: Request 0x00030043 returned status 0x00000000
> [    2.773801] raspberrypi-exp-gpio soc:firmware:gpio: Failed to get GPIO 4 config (-22 84)
> [    2.782067] hub 1-1:1.0: 4 ports detected
> [    2.786222] gpio-regulator: probe of sd_io_1v8_reg failed with error -22
> [    2.795347] hctosys: unable to open rtc device (rtc0)
> [    2.806267] VFS: Cannot open root device "(null)" or unknown-block(0,0): error -6
> [    2.813313] Please append a correct "root=" boot option; here are the available partitions:
> [    2.821773] 0100          131072 ram0
> [    2.821776]  (driver?)
> [    2.828101] 0101          131072 ram1
> [    2.828103]  (driver?)
> [    2.834393] 0102          131072 ram2
> [    2.834395]  (driver?)
> [    2.840716] 0103          131072 ram3
> [    2.840718]  (driver?)
> [    2.847037] 0104          131072 ram4
> [    2.847039]  (driver?)
> [    2.853359] 0105          131072 ram5
> [    2.853361]  (driver?)
> [    2.859693] 0106          131072 ram6
> [    2.859695]  (driver?)
> [    2.866002] 0107          131072 ram7
> [    2.866003]  (driver?)
> [    2.872325] 0108          131072 ram8
> [    2.872327]  (driver?)
> [    2.878669] 0109          131072 ram9
> [    2.878671]  (driver?)
> [    2.884968] 010a          131072 ram10
> [    2.884969]  (driver?)
> [    2.891378] 010b          131072 ram11
> [    2.891380]  (driver?)
> [    2.897813] 010c          131072 ram12
> [    2.897815]  (driver?)
> [    2.904197] 010d          131072 ram13
> [    2.904198]  (driver?)
> [    2.910607] 010e          131072 ram14
> [    2.910609]  (driver?)
> [    2.917016] 010f          131072 ram15
> [    2.917017]  (driver?)
> [    2.923453] Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0)
> [    2.931858] CPU: 0 PID: 1 Comm: swapper/0 Tainted: G        W         5.6.1-default #6
> [    2.939931] Hardware name: Raspberry Pi 4 Model B (DT)
> [    2.945199] Call trace:
> [    2.947753]  dump_backtrace+0x0/0x1d0
> [    2.951524]  show_stack+0x14/0x20
> [    2.954950]  dump_stack+0xbc/0x100
> [    2.958472]  panic+0x150/0x324
> [    2.961622]  mount_block_root+0x28c/0x32c
> [    2.965746]  mount_root+0x7c/0x88
> [    2.969171]  prepare_namespace+0x158/0x198
> [    2.973385]  kernel_init_freeable+0x264/0x2b0
> [    2.977876]  kernel_init+0x10/0x100
> [    2.981465]  ret_from_fork+0x10/0x18
> [    2.985157] Kernel Offset: disabled
> [    2.988751] CPU features: 0x10002,61006000
> [    2.992963] Memory Limit: none
> [    2.996131] ---[ end Kernel panic - not syncing: VFS: Unable to mount root fs on unknown-block(0,0) ]---




From xen-devel-bounces@lists.xenproject.org Sat May 02 03:35:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 03:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUivi-0007Hd-PG; Sat, 02 May 2020 03:35:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Elib=6Q=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUivh-0007HY-Nf
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 03:35:05 +0000
X-Inumbo-ID: e8973f70-8c25-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8973f70-8c25-11ea-b9cf-bc764e2007e4;
 Sat, 02 May 2020 03:35:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1S9DFcWQ2X0S8F14CLyzqQg/tP9Ev1hSteSkJfdaYfU=; b=GILaulV5oP7YgbfRfBqg8sD4c
 RHmumxYaT6NyFFGtEWeSydREbh3fwNm4YBfSwyMU6iLIxB00pJS/aJDX3DAlKXqi71P+vWqiQVdKw
 oN/YdASCXMl9sUebw1XNhVeVGfGfQU/YM1QgZEQ8GKRr3J20Ywd2Uxi7IA871Zmev1YxQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUive-0002JO-W9; Sat, 02 May 2020 03:35:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUive-0006de-GC; Sat, 02 May 2020 03:35:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUive-00069l-Es; Sat, 02 May 2020 03:35:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149899-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 149899: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=052c467cb58748e302a95546925928e637025acc
X-Osstest-Versions-That: linux=c45e8bccecaf633480d378daff11e122dfd5e96d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 02 May 2020 03:35:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149899 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149899/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 149893

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149893
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149893
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149893
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149893
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149893
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149893
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149893
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149893
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149893
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                052c467cb58748e302a95546925928e637025acc
baseline version:
 linux                c45e8bccecaf633480d378daff11e122dfd5e96d

Last test of basis   149893  2020-05-01 00:08:58 Z    1 days
Testing same since   149899  2020-05-01 18:39:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Aric Cyr <aric.cyr@amd.com>
  Arnd Bergmann <arnd@arndb.de>
  Arun Easi <aeasi@marvell.com>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christoph Hellwig <hch@lst.de>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@intel.com>
  Dave Airlie <airlied@redhat.com>
  David Disseldorp <ddiss@suse.de>
  Dexuan Cui <decui@microsoft.com>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Gurchetan Singh <gurchetansingh@chromium.org>
  Hui Wang <hui.wang@canonical.com>
  Jens Axboe <axboe@kernel.dk>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lyude Paul <lyude@redhat.com>
  Marek Olšák <marek.olsak@amd.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Liu <liumartin@google.com>
  Martin Wilck <mwilck@suse.com>
  Matt Roper <matthew.d.roper@intel.com>
  Maxime Ripard <maxime@cerno.tech>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Niklas Cassel <niklas.cassel@wdc.com>
  Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
  Randy Dunlap <rdunlap@infradead.org>
  Rayagonda Kokatanur <rayagonda.kokatanur@broadcom.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  ryan_chen <ryan_chen@aspeedtech.com>
  Sumit Semwal <sumit.semwal@linaro.org>
  Sung Lee <sung.lee@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Tiecheng Zhou <Tiecheng.Zhou@amd.com>
  Tony Cheng <Tony.Cheng@amd.com>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Khoruzhick <anarsoul@gmail.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xiaodong Yan <Xiaodong.Yan@amd.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   c45e8bccecaf..052c467cb587  052c467cb58748e302a95546925928e637025acc -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat May 02 06:18:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 06:18:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUlTl-0003bE-JC; Sat, 02 May 2020 06:18:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Elib=6Q=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUlTk-0003b9-Ri
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 06:18:24 +0000
X-Inumbo-ID: b5678a80-8c3c-11ea-9b8b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5678a80-8c3c-11ea-9b8b-12813bfff9fa;
 Sat, 02 May 2020 06:18:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xjX0OlTnPf1xtmY5X5Lj/bPpF6+/V/JkNO/mSaETw28=; b=VXdHHsI7ZhR0DHmS7clZRxKte
 neDd+LXeBl9NUUI7l8wSe/roVqwVbk3NpYufkLEEfNrlf1yWNGajZtioadClULjvJODfOopZOEtDa
 pj3/piTeJOD1FVT7WCvW4wsgqeFWcpmw9dny/JTS/qLNskeaxZI5znfL0DFJC0/vjhx3c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUlTb-0005pA-Gx; Sat, 02 May 2020 06:18:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUlTb-0007hh-7b; Sat, 02 May 2020 06:18:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUlTb-0001w7-5h; Sat, 02 May 2020 06:18:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149902-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 149902: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=de7e9840e7f888f1a872c86b0cb793b283193137
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 02 May 2020 06:18:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149902 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149902/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              de7e9840e7f888f1a872c86b0cb793b283193137
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  106 days
Failing since        146211  2020-01-18 04:18:52 Z  105 days   98 attempts
Testing same since   149902  2020-05-02 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 16779 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 02 09:27:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 09:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUoQF-0002AT-Ux; Sat, 02 May 2020 09:26:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=H5V/=6Q=redhat.com=david@srs-us1.protection.inumbo.net>)
 id 1jUoQE-0002AO-Pe
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 09:26:58 +0000
X-Inumbo-ID: 1195cb5e-8c57-11ea-b07b-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 1195cb5e-8c57-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 09:26:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588411617;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references:autocrypt:autocrypt;
 bh=7BlBxvbDuA+9KvE9CDetQLMcWvsctxQWUvlkpoisi9Y=;
 b=dazWy9f02vd4ASXVrD1Rq5da8+wTkpkc5fkrDKAMblUUHbO03LHvS5GXLTLPfDaa5eGk5U
 HJhNjzWX/RMscJAAJMcjsXs/SeasPhXMMmPg9QbAiaGBJeldmHprrMOQ3JaCZc29O9Yo76
 obo2XDRHtwP2isDP/J8wbybZSrcHhpo=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-58-IYiQA9KcOLe61Ujso1qrMw-1; Sat, 02 May 2020 05:26:51 -0400
X-MC-Unique: IYiQA9KcOLe61Ujso1qrMw-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 409B51005510;
 Sat,  2 May 2020 09:26:49 +0000 (UTC)
Received: from [10.36.112.72] (ovpn-112-72.ams2.redhat.com [10.36.112.72])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 0E16B1001281;
 Sat,  2 May 2020 09:26:41 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: Dan Williams <dan.j.williams@intel.com>
References: <20200430102908.10107-1-david@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
 <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
 <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
 <04242d48-5fa9-6da4-3e4a-991e401eb580@redhat.com>
 <CAPcyv4iXyOUDZgqhWH1KCObvATL=gP55xEr64rsRfUuJg5B+eQ@mail.gmail.com>
 <8242c0c5-2df2-fc0c-079a-3be62c113a11@redhat.com>
 <CAPcyv4h1nWjszkVJQgeXkUc=-nPv5=Me25BOGFQCpihUyFsD6w@mail.gmail.com>
From: David Hildenbrand <david@redhat.com>
Autocrypt: addr=david@redhat.com; prefer-encrypt=mutual; keydata=
 mQINBFXLn5EBEAC+zYvAFJxCBY9Tr1xZgcESmxVNI/0ffzE/ZQOiHJl6mGkmA1R7/uUpiCjJ
 dBrn+lhhOYjjNefFQou6478faXE6o2AhmebqT4KiQoUQFV4R7y1KMEKoSyy8hQaK1umALTdL
 QZLQMzNE74ap+GDK0wnacPQFpcG1AE9RMq3aeErY5tujekBS32jfC/7AnH7I0v1v1TbbK3Gp
 XNeiN4QroO+5qaSr0ID2sz5jtBLRb15RMre27E1ImpaIv2Jw8NJgW0k/D1RyKCwaTsgRdwuK
 Kx/Y91XuSBdz0uOyU/S8kM1+ag0wvsGlpBVxRR/xw/E8M7TEwuCZQArqqTCmkG6HGcXFT0V9
 PXFNNgV5jXMQRwU0O/ztJIQqsE5LsUomE//bLwzj9IVsaQpKDqW6TAPjcdBDPLHvriq7kGjt
 WhVhdl0qEYB8lkBEU7V2Yb+SYhmhpDrti9Fq1EsmhiHSkxJcGREoMK/63r9WLZYI3+4W2rAc
 UucZa4OT27U5ZISjNg3Ev0rxU5UH2/pT4wJCfxwocmqaRr6UYmrtZmND89X0KigoFD/XSeVv
 jwBRNjPAubK9/k5NoRrYqztM9W6sJqrH8+UWZ1Idd/DdmogJh0gNC0+N42Za9yBRURfIdKSb
 B3JfpUqcWwE7vUaYrHG1nw54pLUoPG6sAA7Mehl3nd4pZUALHwARAQABtCREYXZpZCBIaWxk
 ZW5icmFuZCA8ZGF2aWRAcmVkaGF0LmNvbT6JAlgEEwEIAEICGwMFCQlmAYAGCwkIBwMCBhUI
 AgkKCwQWAgMBAh4BAheAFiEEG9nKrXNcTDpGDfzKTd4Q9wD/g1oFAl3pImkCGQEACgkQTd4Q
 9wD/g1o+VA//SFvIHUAvul05u6wKv/pIR6aICPdpF9EIgEU448g+7FfDgQwcEny1pbEzAmiw
 zAXIQ9H0NZh96lcq+yDLtONnXk/bEYWHHUA014A1wqcYNRY8RvY1+eVHb0uu0KYQoXkzvu+s
 Dncuguk470XPnscL27hs8PgOP6QjG4jt75K2LfZ0eAqTOUCZTJxA8A7E9+XTYuU0hs7QVrWJ
 jQdFxQbRMrYz7uP8KmTK9/Cnvqehgl4EzyRaZppshruKMeyheBgvgJd5On1wWq4ZUV5PFM4x
 II3QbD3EJfWbaJMR55jI9dMFa+vK7MFz3rhWOkEx/QR959lfdRSTXdxs8V3zDvChcmRVGN8U
 Vo93d1YNtWnA9w6oCW1dnDZ4kgQZZSBIjp6iHcA08apzh7DPi08jL7M9UQByeYGr8KuR4i6e
 RZI6xhlZerUScVzn35ONwOC91VdYiQgjemiVLq1WDDZ3B7DIzUZ4RQTOaIWdtXBWb8zWakt/
 ztGhsx0e39Gvt3391O1PgcA7ilhvqrBPemJrlb9xSPPRbaNAW39P8ws/UJnzSJqnHMVxbRZC
 Am4add/SM+OCP0w3xYss1jy9T+XdZa0lhUvJfLy7tNcjVG/sxkBXOaSC24MFPuwnoC9WvCVQ
 ZBxouph3kqc4Dt5X1EeXVLeba+466P1fe1rC8MbcwDkoUo65Ag0EVcufkQEQAOfX3n0g0fZz
 Bgm/S2zF/kxQKCEKP8ID+Vz8sy2GpDvveBq4H2Y34XWsT1zLJdvqPI4af4ZSMxuerWjXbVWb
 T6d4odQIG0fKx4F8NccDqbgHeZRNajXeeJ3R7gAzvWvQNLz4piHrO/B4tf8svmRBL0ZB5P5A
 2uhdwLU3NZuK22zpNn4is87BPWF8HhY0L5fafgDMOqnf4guJVJPYNPhUFzXUbPqOKOkL8ojk
 CXxkOFHAbjstSK5Ca3fKquY3rdX3DNo+EL7FvAiw1mUtS+5GeYE+RMnDCsVFm/C7kY8c2d0G
 NWkB9pJM5+mnIoFNxy7YBcldYATVeOHoY4LyaUWNnAvFYWp08dHWfZo9WCiJMuTfgtH9tc75
 7QanMVdPt6fDK8UUXIBLQ2TWr/sQKE9xtFuEmoQGlE1l6bGaDnnMLcYu+Asp3kDT0w4zYGsx
 5r6XQVRH4+5N6eHZiaeYtFOujp5n+pjBaQK7wUUjDilPQ5QMzIuCL4YjVoylWiBNknvQWBXS
 lQCWmavOT9sttGQXdPCC5ynI+1ymZC1ORZKANLnRAb0NH/UCzcsstw2TAkFnMEbo9Zu9w7Kv
 AxBQXWeXhJI9XQssfrf4Gusdqx8nPEpfOqCtbbwJMATbHyqLt7/oz/5deGuwxgb65pWIzufa
 N7eop7uh+6bezi+rugUI+w6DABEBAAGJAiUEGAECAA8FAlXLn5ECGwwFCQlmAYAACgkQTd4Q
 9wD/g1qA6w/+M+ggFv+JdVsz5+ZIc6MSyGUozASX+bmIuPeIecc9UsFRatc91LuJCKMkD9Uv
 GOcWSeFpLrSGRQ1Z7EMzFVU//qVs6uzhsNk0RYMyS0B6oloW3FpyQ+zOVylFWQCzoyyf227y
 GW8HnXunJSC+4PtlL2AY4yZjAVAPLK2l6mhgClVXTQ/S7cBoTQKP+jvVJOoYkpnFxWE9pn4t
 H5QIFk7Ip8TKr5k3fXVWk4lnUi9MTF/5L/mWqdyIO1s7cjharQCstfWCzWrVeVctpVoDfJWp
 4LwTuQ5yEM2KcPeElLg5fR7WB2zH97oI6/Ko2DlovmfQqXh9xWozQt0iGy5tWzh6I0JrlcxJ
 ileZWLccC4XKD1037Hy2FLAjzfoWgwBLA6ULu0exOOdIa58H4PsXtkFPrUF980EEibUp0zFz
 GotRVekFAceUaRvAj7dh76cToeZkfsjAvBVb4COXuhgX6N4pofgNkW2AtgYu1nUsPAo+NftU
 CxrhjHtLn4QEBpkbErnXQyMjHpIatlYGutVMS91XTQXYydCh5crMPs7hYVsvnmGHIaB9ZMfB
 njnuI31KBiLUks+paRkHQlFcgS2N3gkRBzH7xSZ+t7Re3jvXdXEzKBbQ+dC3lpJB0wPnyMcX
 FOTT3aZT7IgePkt5iC/BKBk3hqKteTnJFeVIT7EC+a6YUFg=
Organization: Red Hat GmbH
Message-ID: <467ccba3-80ac-085c-3127-d5618d77d3e0@redhat.com>
Date: Sat, 2 May 2020 11:26:41 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAPcyv4h1nWjszkVJQgeXkUc=-nPv5=Me25BOGFQCpihUyFsD6w@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Dave Hansen <dave.hansen@linux.intel.com>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

>> Now, let's clarify what I want regarding virtio-mem:
>>
>> 1. kexec should not add virtio-mem memory to the initial firmware
>>    memmap. The driver has to be in charge as discussed.
>> 2. kexec should not place kexec images onto virtio-mem memory. That
>>    would end badly.
>> 3. kexec should still dump virtio-mem memory via kdump.
>
> Ok, but then seems to say to me that dax/kmem is a different type of
> (driver managed) than virtio-mem and it's confusing to try to apply
> the same meaning. Why not just call your type for the distinct type it
> is "System RAM (virtio-mem)" and let any other driver managed memory
> follow the same "System RAM ($driver)" format if it wants?

I had the same idea but discarded it because it seemed to uglify the
add_memory() interface (passing yet another parameter only relevant for
driver-managed memory). Maybe we really want a new one, because I like
that idea:

/*
 * Add special, driver-managed memory to the system as system ram.
 * The resource_name is expected to have the name format "System RAM
 * ($DRIVER)", so user space (esp. kexec-tools) can special-case it.
 *
 * For this memory, no entries in /sys/firmware/memmap are created,
 * as this memory won't be part of the raw firmware-provided memory map
 * e.g., after a reboot. Also, the created memory resource is flagged
 * with IORESOURCE_MEM_DRIVER_MANAGED, so in-kernel users can special-
 * case this memory (e.g., not place kexec images onto it).
 */
int add_memory_driver_managed(int nid, u64 start, u64 size,
			      const char *resource_name);


If we'd ever have to special-case it even more in the kernel, we could
allow specifying further resource flags. While passing the driver name
instead of the resource_name would be an option, this way we don't have
to hand-craft new resource strings for added memory resources.

Thoughts?

-- 
Thanks,

David / dhildenb



From xen-devel-bounces@lists.xenproject.org Sat May 02 11:04:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 11:04:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUpw5-0001d9-BE; Sat, 02 May 2020 11:03:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Elib=6Q=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUpw4-0001d4-BK
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 11:03:56 +0000
X-Inumbo-ID: 99cfc030-8c64-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99cfc030-8c64-11ea-9887-bc764e2007e4;
 Sat, 02 May 2020 11:03:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Y+2r2zvCHiO9UgMNgMxEZryknTsBjCYIQq5IvkEU9+Q=; b=LdgG6esvXWukKT1tKAqYiQJhq
 jh9mcqeqoHX9rOlHy0NRet4tJGT/KZdVMpnu6VgZ2dxe7KqFY8XoPw4eV2UAFYs2zCvNYvqIVO6x+
 dX3E8cDt7cdT83HkD9Tc5sL69kqiQ+TMnUmHkAsYmkU0JY8SMoI3Uq3sIfpW0W2jKzWfM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUpvx-000399-4Y; Sat, 02 May 2020 11:03:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUpvw-0007ZQ-PR; Sat, 02 May 2020 11:03:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUpvw-0007m2-ME; Sat, 02 May 2020 11:03:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 149900: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=0135be8bd8cd60090298f02310691b688d95c3a8
X-Osstest-Versions-That: xen=0135be8bd8cd60090298f02310691b688d95c3a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 02 May 2020 11:03:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149900 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149900/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate         fail pass in 149896

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds  18 guest-localmigrate/x10 fail in 149896 like 149892
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149896
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149896
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149896
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149896
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149896
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149896
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149896
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149896
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149896
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149896
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8
baseline version:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8

Last test of basis   149900  2020-05-02 01:52:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat May 02 11:37:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 11:37:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUqRz-00044n-4J; Sat, 02 May 2020 11:36:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUqRy-00044i-8N
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 11:36:54 +0000
X-Inumbo-ID: 386ca5ce-8c69-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 386ca5ce-8c69-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 11:36:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=tO3PIjh7OdxzIES4DppdeP43LpKddLtfk2rLdmA9WAA=; b=FIz6a9rWZqqCqIHA9+AK8IsICR
 u2KuueaNrH4r36kWE+NG/o4fwq5EZL3SPCWlRSKGAQQmGt1Q5qdmY8LdiXhjji7b4lTRCjBiAegnA
 qlQNsKat8Iuv1DpOplqMm2Otb9C7f7I7coXXSTQ0YOUIqOuj6qfMY00p+0pO5OKFhplU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUqRu-0003jN-SW; Sat, 02 May 2020 11:36:50 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUqRu-0007nr-Kz; Sat, 02 May 2020 11:36:50 +0000
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20200430152848.20275-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <987abb9e-e4f1-1981-595d-0474e91d67f8@xen.org>
Date: Sat, 2 May 2020 12:36:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200430152848.20275-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Juergen,

They are less theoretical than we would want. :/ There was a great 
series of articles on LWN [1] about compiler optimizations last year.

There are at least a few TOCTOU windows in the code where you could end 
up with cpumask_of(VCPU_CPU_CLEAN).

On 30/04/2020 16:28, Juergen Gross wrote:
> The dirty_cpu field of struct vcpu denotes which cpu still holds data
> of a vcpu. All accesses to this field should be atomic in case the
> vcpu could just be running, as it is accessed without any lock held
> in most cases.

Looking at the patch below, I am not sure why the issue would happen 
only when the vCPU is running. For instance, in the case of 
context_switch(), 'next' should not be running.

Instead, I think the race would happen if the vCPU state is 
synchronized (__sync_local_execstate()) at the same time as 
context_switch(). Am I correct?

> 
> There are some instances where accesses are not atomically done, and
> even worse where multiple accesses are done when a single one would
> be mandated.
> 
> Correct that in order to avoid potential problems.
> 
> Add some assertions to verify dirty_cpu is handled properly.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   xen/arch/x86/domain.c   | 14 ++++++++++----
>   xen/include/xen/sched.h |  2 +-
>   2 files changed, 11 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index a4428190d5..f0579a56d1 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1769,6 +1769,7 @@ static void __context_switch(void)
>   
>       if ( !is_idle_domain(pd) )
>       {
> +        ASSERT(read_atomic(&p->dirty_cpu) == cpu);
>           memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
>           vcpu_save_fpu(p);
>           pd->arch.ctxt_switch->from(p);
> @@ -1832,7 +1833,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>   {
>       unsigned int cpu = smp_processor_id();
>       const struct domain *prevd = prev->domain, *nextd = next->domain;
> -    unsigned int dirty_cpu = next->dirty_cpu;
> +    unsigned int dirty_cpu = read_atomic(&next->dirty_cpu);
>   
>       ASSERT(prev != next);
>       ASSERT(local_irq_is_enabled());
> @@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>       {
>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>           flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
> +        ASSERT(read_atomic(&next->dirty_cpu) == VCPU_CPU_CLEAN);
>       }
>   
>       _update_runstate_area(prev);
> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>   
>   void sync_vcpu_execstate(struct vcpu *v)
>   {
> -    if ( v->dirty_cpu == smp_processor_id() )
> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
> +
> +    if ( dirty_cpu == smp_processor_id() )
>           sync_local_execstate();
> -    else if ( vcpu_cpu_dirty(v) )
> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>       {
>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>       }
> +    ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu ||
> +           dirty_cpu == VCPU_CPU_CLEAN);
>   }
>   
>   static int relinquish_memory(
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 195e7ee583..81628e2d98 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
>   
>   static inline bool vcpu_cpu_dirty(const struct vcpu *v)
>   {
> -    return is_vcpu_dirty_cpu(v->dirty_cpu);
> +    return is_vcpu_dirty_cpu(read_atomic((unsigned int *)&v->dirty_cpu));

Is the cast necessary?

>   }
>   
>   void vcpu_block(void);
> 

Cheers,

[1] https://lwn.net/Articles/793253/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 02 11:46:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 11:46:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUqai-0004vf-1e; Sat, 02 May 2020 11:45:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HMXb=6Q=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jUqag-0004va-Ud
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 11:45:54 +0000
X-Inumbo-ID: 79e93930-8c6a-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79e93930-8c6a-11ea-ae69-bc764e2007e4;
 Sat, 02 May 2020 11:45:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E2FBCAC40;
 Sat,  2 May 2020 11:45:51 +0000 (UTC)
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
References: <20200430152848.20275-1-jgross@suse.com>
 <987abb9e-e4f1-1981-595d-0474e91d67f8@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <678d3815-d554-7b92-aa81-908978e2b19b@suse.com>
Date: Sat, 2 May 2020 13:45:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <987abb9e-e4f1-1981-595d-0474e91d67f8@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.20 13:36, Julien Grall wrote:
> Hi Juergen,
> 
> They are less theoretical than we would want. :/ There was a great 
> series of articles on lwn [1] about compiler optimizations last year.
> 
> There are at least a few TOCTOU races in the code where you could end 
> up with cpumask_of(VCPU_CPU_CLEAN).

It is theoretical in the sense that I don't know of any failure
resulting from this.

> 
> On 30/04/2020 16:28, Juergen Gross wrote:
>> The dirty_cpu field of struct vcpu denotes which cpu still holds data
>> of a vcpu. All accesses to this field should be atomic in case the
>> vcpu could just be running, as it is accessed without any lock held
>> in most cases.
> 
> Looking at the patch below, I am not sure why the issue is happening 
> only when the vCPU is running. For instance, in the case of 
> context_switch(), 'next' should not be running.
> 
> Instead, I think, the race would happen if the vCPU state is 
> synchronized (__sync_local_execstate()) at the same time as 
> context_switch(). Am I correct?

Yes.

> 
>>
>> There are some instances where accesses are not atomically done, and
>> even worse where multiple accesses are done when a single one would
>> be mandated.
>>
>> Correct that in order to avoid potential problems.
>>
>> Add some assertions to verify dirty_cpu is handled properly.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   xen/arch/x86/domain.c   | 14 ++++++++++----
>>   xen/include/xen/sched.h |  2 +-
>>   2 files changed, 11 insertions(+), 5 deletions(-)
>>
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index a4428190d5..f0579a56d1 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1769,6 +1769,7 @@ static void __context_switch(void)
>>       if ( !is_idle_domain(pd) )
>>       {
>> +        ASSERT(read_atomic(&p->dirty_cpu) == cpu);
>>           memcpy(&p->arch.user_regs, stack_regs, 
>> CTXT_SWITCH_STACK_BYTES);
>>           vcpu_save_fpu(p);
>>           pd->arch.ctxt_switch->from(p);
>> @@ -1832,7 +1833,7 @@ void context_switch(struct vcpu *prev, struct 
>> vcpu *next)
>>   {
>>       unsigned int cpu = smp_processor_id();
>>       const struct domain *prevd = prev->domain, *nextd = next->domain;
>> -    unsigned int dirty_cpu = next->dirty_cpu;
>> +    unsigned int dirty_cpu = read_atomic(&next->dirty_cpu);
>>       ASSERT(prev != next);
>>       ASSERT(local_irq_is_enabled());
>> @@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct 
>> vcpu *next)
>>       {
>>           /* Remote CPU calls __sync_local_execstate() from flush IPI 
>> handler. */
>>           flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>> +        ASSERT(read_atomic(&next->dirty_cpu) == VCPU_CPU_CLEAN);
>>       }
>>       _update_runstate_area(prev);
>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>   void sync_vcpu_execstate(struct vcpu *v)
>>   {
>> -    if ( v->dirty_cpu == smp_processor_id() )
>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>> +
>> +    if ( dirty_cpu == smp_processor_id() )
>>           sync_local_execstate();
>> -    else if ( vcpu_cpu_dirty(v) )
>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>       {
>>           /* Remote CPU calls __sync_local_execstate() from flush IPI 
>> handler. */
>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>       }
>> +    ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu ||
>> +           dirty_cpu == VCPU_CPU_CLEAN);
>>   }
>>   static int relinquish_memory(
>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>> index 195e7ee583..81628e2d98 100644
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int 
>> cpu)
>>   static inline bool vcpu_cpu_dirty(const struct vcpu *v)
>>   {
>> -    return is_vcpu_dirty_cpu(v->dirty_cpu);
>> +    return is_vcpu_dirty_cpu(read_atomic((unsigned int 
>> *)&v->dirty_cpu));
> 
> Is the cast necessary?

Yes, that was the problem I encountered when building for ARM.

read_atomic() on ARM declares a local variable of the same type as the
dereferenced parameter for storing the result. Due to the const
qualifier of v this results in an assignment to a read-only variable.


Juergen


From xen-devel-bounces@lists.xenproject.org Sat May 02 11:46:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 11:46:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUqb9-0004xp-AO; Sat, 02 May 2020 11:46:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUqb8-0004xf-Dl
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 11:46:22 +0000
X-Inumbo-ID: 8a4540a8-8c6a-11ea-9ba8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a4540a8-8c6a-11ea-9ba8-12813bfff9fa;
 Sat, 02 May 2020 11:46:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zfYKVIXfWGSHzghCifTnqfjpwaipvBibQ0SKLEC6gbs=; b=2LDGd2VRZuxMsxnPDKgR1ie8Ay
 2m3UkjUsZi/LhIr0r/M5NORDfJugCOw0wSrI4wGg+sXiEubMVvYogTX10w/grM9wd+h5XE5mNDHtb
 ixk040Lu0uk+su02CrSjgXsrIt6uKfLLgmVjp5DF5a8vfLdtnfyz13daJZfS6ItDOy6o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUqb2-0003tM-Tq; Sat, 02 May 2020 11:46:16 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUqb2-0000HB-If; Sat, 02 May 2020 11:46:16 +0000
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: minyard@acm.org, Roman Shaposhnik <roman@zededa.com>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
From: Julien Grall <julien@xen.org>
Message-ID: <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
Date: Sat, 2 May 2020 12:46:14 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200502021647.GG9902@minyard.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <julien.grall@arm.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 jeff.kubascik@dornerworks.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 02/05/2020 03:16, Corey Minyard wrote:
> On Fri, May 01, 2020 at 06:06:11PM -0700, Roman Shaposhnik wrote:
>> On Fri, May 1, 2020 at 4:42 AM Corey Minyard <minyard@acm.org> wrote:
>>>
>>> On Thu, Apr 30, 2020 at 07:20:05PM -0700, Roman Shaposhnik wrote:
>>>> Hi!
>>>>
>>>> I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
>>>> upstream kernel. The kernel itself works perfectly well
>>>> on the board. When I try booting it as Dom0 under Xen,
>>>> it goes into a stacktrace (attached).
>>>
>>> Getting Xen working on the Pi4 requires a lot of moving parts, and they
>>> all have to be right.
>>
>> Tell me about it! It is a pretty frustrating journey to align
>> everything just right.
>> On the other hand -- it seems worthwhile to enable the RPi as an ARM
>> development platform for Xen given how ubiquitous it is.
>>
>> Hence my trying to combine a pristine upstream kernel (5.6.1) with
>> pristine upstream Xen to enable a 100% upstream developer workflow
>> on the RPi.
>>
>>>> Looking at what nice folks over at Dornerworks have previously
>>>> done to make RPi kernels boot as Dom0 I've come across these
>>>> 3 patches:
>>>>      https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
>>>>
>>>> The first patch seems irrelevant (unless I'm missing something
>>>> really basic here).
>>>
>>> It might be irrelevant for your configuration, assuming that Xen gets
>>> the right information from EFI.  I haven't tried EFI booting.
>>
>> I'm doing a bit of a belt-and-suspenders strategy really -- I'm actually
>> using the u-boot UEFI implementation, which pre-populates device trees,
>> and then I'm also forcing an extra copy of the device tree to be loaded
>> explicitly via the GRUB devicetree command (GRUB runs as a UEFI payload).
>>
>> I also have access to the semi-official TianoCore RPi4 port that seems
>> to be working pretty well: https://github.com/pftf/RPi4/releases/tag/v1.5
>> for booting all sort of UEFI payloads on RPi4.
>>
>>>> The 2nd patch applied with no issue (but
>>>> I don't think it is related) but the 3rd patch failed to apply on
>>>> account of the 5.6.1 kernel no longer having:
>>>>      dev->archdata.dev_dma_ops
>>>> E.g.
>>>>      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
>>>>
>>>> I've tried to emulate the effect of the patch by simply introducing
>>>> a static variable that would signal that we already initialized
>>>> dev->dma_ops -- but that didn't help at all.
>>>>
>>>> I'm CCing Jeff Kubascik, one of the original authors of those patches,
>>>> and Corey Minyard to see if maybe they have any suggestions on how
>>>> this may be dealt with.
>>>>
>>>> Any advice would be greatly appreciated!
>>>
>>> What's your Pi4 config.txt file look like? The GIC is not being handled
>>> correctly, and I'm guessing that configuration is wrong.  You can't use
>>> the stock config.txt file with Xen, you have to modify the configuration a
>>> bit.
>>
>> Understood. I'm actually using a custom one:
>>      https://github.com/lf-edge/eve/blob/master/pkg/u-boot/rpi/config.txt
>>
>> I could swear that I had a GIC setting in it -- but apparently not -- so
>> I added the following at the top of what you could see at the URL above:
>>
>> total_mem=4096
>> enable_gic=1
>>
>>> I think just adding:
>>>
>>> enable_gic=1
>>> total_mem=1024
>>
>> Right -- but my board has 4G memory -- so I think what I did above should work.
> 
> Nope.  If you say 4096M of RAM, your issue is almost certainly DMA, but
> it's not (just) the Linux code.  On the Raspberry Pi 4, several devices
> cannot DMA to above 1024M of RAM, but Xen does not handle this.  The
> 1024M of RAM is a limitation you will have to live with until Xen has a
> fix.

IIUC, dom0 would need to have some memory below 1GB for this to work, am 
I correct?

If so could you try the following patch?

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 430708753642..002f49dba74b 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -282,7 +282,7 @@ static void __init allocate_memory_11(struct domain *d,
       */
      while ( order >= min_low_order )
      {
-        for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
+        for ( bits = order ; bits <= (lowmem ? 30 : PADDR_BITS); bits++ )
          {
              pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
              if ( pg != NULL )
@@ -313,7 +313,7 @@ static void __init allocate_memory_11(struct domain *d,
      order = get_allocation_size(kinfo->unassigned_mem);
      while ( kinfo->unassigned_mem && kinfo->mem.nr_banks < NR_MEM_BANKS )
      {
-        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(32) : 0);
+        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(30) : 0);
          if ( !pg )
          {
              order --;


Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 02 12:35:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 12:35:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUrM0-0000e7-BU; Sat, 02 May 2020 12:34:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUrLz-0000e2-3E
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 12:34:47 +0000
X-Inumbo-ID: 4e024792-8c71-11ea-9bad-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e024792-8c71-11ea-9bad-12813bfff9fa;
 Sat, 02 May 2020 12:34:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=y/UNBxVJySOR8VU1IIGcGaN3wHkrUeKuHPpXf+/J9eI=; b=OxS3Vg4QEsDRYHVT88RynxMhRc
 QgL7OAzPO4EbW/YR6tPCzYWxTshvcCojOeSD8X71yLK1uwckRi1p6lwy2Sj0DzQmflmrO6Vkn8eQU
 CvS3KSUzS2dt89iuzWJzkwUgOvDi2rYEtY5LqlmL8tK2fUrQhJebpBTMBB889w54qPoc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUrLu-0004mX-E6; Sat, 02 May 2020 12:34:42 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUrLu-0005DW-6T; Sat, 02 May 2020 12:34:42 +0000
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20200430152848.20275-1-jgross@suse.com>
 <987abb9e-e4f1-1981-595d-0474e91d67f8@xen.org>
 <678d3815-d554-7b92-aa81-908978e2b19b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2c72bb17-cf67-a7ce-6dcb-2c3b4d1231e7@xen.org>
Date: Sat, 2 May 2020 13:34:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <678d3815-d554-7b92-aa81-908978e2b19b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 02/05/2020 12:45, Jürgen Groß wrote:
> On 02.05.20 13:36, Julien Grall wrote:
>> Hi Juergen,
>>
>> They are less theoretical than we would want. :/ There was a great 
>> series of articles on lwn [1] about compiler optimizations last year.
>>
>> There are at least a few TOCTOU races in the code where you could end 
>> up with cpumask_of(VCPU_CPU_CLEAN).
> 
> It is theoretical in the sense that I don't know of any failure
> resulting from this.

How about "latent" instead of "theoretical"?

> 
>>
>> On 30/04/2020 16:28, Juergen Gross wrote:
>>> The dirty_cpu field of struct vcpu denotes which cpu still holds data
>>> of a vcpu. All accesses to this field should be atomic in case the
>>> vcpu could just be running, as it is accessed without any lock held
>>> in most cases.
>>
>> Looking at the patch below, I am not sure why the issue is happening 
>> only when the vCPU is running. For instance, in the case of 
>> context_switch(), 'next' should not be running.
>>
>> Instead, I think, the race would happen if the vCPU state is 
>> synchronized (__sync_local_execstate()) at the same time as 
>> context_switch(). Am I correct?
> 
> Yes.

Would you mind adding this context in the commit message?

[...]

>>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>>> index 195e7ee583..81628e2d98 100644
>>> --- a/xen/include/xen/sched.h
>>> +++ b/xen/include/xen/sched.h
>>> @@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int 
>>> cpu)
>>>   static inline bool vcpu_cpu_dirty(const struct vcpu *v)
>>>   {
>>> -    return is_vcpu_dirty_cpu(v->dirty_cpu);
>>> +    return is_vcpu_dirty_cpu(read_atomic((unsigned int 
>>> *)&v->dirty_cpu));
>>
>> Is the cast necessary?
> 
> Yes, that was the problem I encountered when building for ARM.
> 
> read_atomic() on ARM declares a local variable of the same type as the
> dereferenced parameter for storing the result. Due to the const
> qualifier of v this results in an assignment to a read-only variable.

Doh, we should be able to read from a const value without the cast. So 
I would argue this is a bug in the read_atomic() implementation on Arm. 
I will try to come up with a patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 02 13:33:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 13:33:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUsGH-0005Ne-Rl; Sat, 02 May 2020 13:32:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Elib=6Q=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUsGG-0005Mh-Im
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 13:32:56 +0000
X-Inumbo-ID: 6a915da0-8c79-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a915da0-8c79-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 13:32:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yNi3AWsybwO8PLoLqLzUnXwd3s4PkrToiux4GNA/kgA=; b=rihnGKOzvmik/ybOLRyASHeE7
 sx8u/bYnYD1zQgLnWrCmbh5xY+2i244QBx1N9asKZFV7Ytm9Furu6u8fEMLOCpSvY8T6Wx8gE4fI9
 xiAUgNsrEXp6XiVH1a+SxroKkyAikjcApaWdsHLsXQWUimcxKC3uuawKOluezLjDWIcZY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUsG9-0005p3-BS; Sat, 02 May 2020 13:32:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUsG8-0007cR-LU; Sat, 02 May 2020 13:32:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUsG8-0001NG-Ke; Sat, 02 May 2020 13:32:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149901-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 149901: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=690e2aba7beb1ef06352803bea41a68a3c695015
X-Osstest-Versions-That: linux=052c467cb58748e302a95546925928e637025acc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 02 May 2020 13:32:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149901 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149901/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 149899

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149899
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149899
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149899
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149899
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149899
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149899
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149899
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149899
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                690e2aba7beb1ef06352803bea41a68a3c695015
baseline version:
 linux                052c467cb58748e302a95546925928e637025acc

Last test of basis   149899  2020-05-01 18:39:24 Z    0 days
Testing same since   149901  2020-05-02 03:37:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Williamson <alex.williamson@redhat.com>
  Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Jens Axboe <axboe@kernel.dk>
  Linus Torvalds <torvalds@linux-foundation.org>
  Pavel Begunkov <asml.silence@gmail.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
  Yan Zhao <yan.y.zhao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   052c467cb587..690e2aba7beb  690e2aba7beb1ef06352803bea41a68a3c695015 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat May 02 16:07:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 16:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUufW-00014b-HS; Sat, 02 May 2020 16:07:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUufV-00014W-G9
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 16:07:09 +0000
X-Inumbo-ID: f8ba485c-8c8e-11ea-9bd3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8ba485c-8c8e-11ea-9bd3-12813bfff9fa;
 Sat, 02 May 2020 16:07:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0DZRtHN+DWgkT9wV9AdwEWaM8yO76gVkuFpYC4Ito8Y=; b=tYNu4Jnwx4FrjVSD3Qr29oadK7
 zspxZr/aznHXD7brDyeV1U1bN7GibM9gR+VNOxezJHMgg+0riMC9QUdXDQVkBLOg56dcbEnzDu1FI
 5M/Ta6wUw+zW0KzzueJuWEnaRqA4/T+pFJT2YbKhtoQcdn3yjuTyY9Q/+zYLvKLCqTtc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUufS-0000ps-Uj; Sat, 02 May 2020 16:07:06 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUufS-00054b-It; Sat, 02 May 2020 16:07:06 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 2/3] xen/arm: atomic: Rewrite write_atomic()
Date: Sat,  2 May 2020 17:06:59 +0100
Message-Id: <20200502160700.19573-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200502160700.19573-1-julien@xen.org>
References: <20200502160700.19573-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The current implementation of write_atomic() has two issues:
    1) It cannot be used to write a pointer value because the switch
    casts to sizes other than the size of a pointer.
    2) It will happily write through a pointer to const.

Additionally, the Arm implementation still returns a value, while the
x86 implementation stopped doing so in commit 2934148a0773 "x86:
simplify a few macros / inline functions". There are no users of the
return value, so it is fine to drop it.

The switch is now moved into a static inline helper, allowing the
compiler to reject pointers to const and also making it possible to
write pointer values.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/asm-arm/atomic.h | 40 ++++++++++++++++++++++++++----------
 1 file changed, 29 insertions(+), 11 deletions(-)

diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index 3c3d6bb04ee8..ac2798d095eb 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -98,23 +98,41 @@ static always_inline void read_atomic_size(const volatile void *p,
     }
 }
 
+static always_inline void write_atomic_size(volatile void *p,
+                                            void *val,
+                                            unsigned int size)
+{
+    switch ( size )
+    {
+    case 1:
+        write_u8_atomic(p, *(uint8_t *)val);
+        break;
+    case 2:
+        write_u16_atomic(p, *(uint16_t *)val);
+        break;
+    case 4:
+        write_u32_atomic(p, *(uint32_t *)val);
+        break;
+    case 8:
+        write_u64_atomic(p, *(uint64_t *)val);
+        break;
+    default:
+        __bad_atomic_size();
+        break;
+    }
+}
+
 #define read_atomic(p) ({                                               \
     union { typeof(*p) val; char c[0]; } x_;                            \
     read_atomic_size(p, x_.c, sizeof(*p));                              \
     x_.val;                                                             \
 })
 
-#define write_atomic(p, x) ({                                           \
-    typeof(*p) __x = (x);                                               \
-    switch ( sizeof(*p) ) {                                             \
-    case 1: write_u8_atomic((uint8_t *)p, (uint8_t)__x); break;         \
-    case 2: write_u16_atomic((uint16_t *)p, (uint16_t)__x); break;      \
-    case 4: write_u32_atomic((uint32_t *)p, (uint32_t)__x); break;      \
-    case 8: write_u64_atomic((uint64_t *)p, (uint64_t)__x); break;      \
-    default: __bad_atomic_size(); break;                                \
-    }                                                                   \
-    __x;                                                                \
-})
+#define write_atomic(p, x)                                              \
+    do {                                                                \
+        typeof(*p) x_ = (x);                                            \
+        write_atomic_size(p, &x_, sizeof(*p));                          \
+    } while ( false )
 
 #define add_sized(p, x) ({                                              \
     typeof(*(p)) __x = (x);                                             \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat May 02 16:07:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 16:07:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUufX-00014w-Sf; Sat, 02 May 2020 16:07:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUufX-00014h-98
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 16:07:11 +0000
X-Inumbo-ID: f8426b2a-8c8e-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8426b2a-8c8e-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 16:07:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EquJ+P1MjKxYVanc5AWfgfmBQn5lz7a3I5D3bo//H6o=; b=KUDrWZVvEZWpLImbEeyABfPOQ0
 lzI103l846PopeLBujNexEEOSDZloGpuAry1hWahRpExEJ7uIRDMk1+6O9LnGHAVFnAMScNsTos6z
 Zj6gK1fgZTknM4ls5CqaKS3LnI9V/Jk4WDx42uIh5OxXqD1M+nrRK3Eo7Q8UL5wp6A8Q=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUufR-0000pm-Sl; Sat, 02 May 2020 16:07:05 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUufR-00054b-He; Sat, 02 May 2020 16:07:05 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 1/3] xen/arm: atomic: Allow read_atomic() to be used
 in more cases
Date: Sat,  2 May 2020 17:06:58 +0100
Message-Id: <20200502160700.19573-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200502160700.19573-1-julien@xen.org>
References: <20200502160700.19573-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The current implementation of read_atomic() on Arm does not allow:
    1) Reading a value through a pointer to const, because the temporary
    variable will be const and therefore cannot be assigned to. This can
    be solved by using a union of the type and a char[0].
    2) Reading a pointer value (e.g. void *), because the switch casts
    to types other than the size of a pointer. This can be solved by
    introducing a static inline helper for the switch and using void *
    for the pointer.

Reported-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/asm-arm/atomic.h | 37 +++++++++++++++++++++++++++---------
 1 file changed, 28 insertions(+), 9 deletions(-)

diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
index e81bf80e305c..3c3d6bb04ee8 100644
--- a/xen/include/asm-arm/atomic.h
+++ b/xen/include/asm-arm/atomic.h
@@ -71,18 +71,37 @@ build_add_sized(add_u32_sized, "", WORD, uint32_t)
 #undef build_atomic_write
 #undef build_add_sized
 
+void __bad_atomic_read(const volatile void *p, void *res);
 void __bad_atomic_size(void);
 
+static always_inline void read_atomic_size(const volatile void *p,
+                                           void *res,
+                                           unsigned int size)
+{
+    switch ( size )
+    {
+    case 1:
+        *(uint8_t *)res = read_u8_atomic(p);
+        break;
+    case 2:
+        *(uint16_t *)res = read_u16_atomic(p);
+        break;
+    case 4:
+        *(uint32_t *)res = read_u32_atomic(p);
+        break;
+    case 8:
+        *(uint64_t *)res = read_u64_atomic(p);
+        break;
+    default:
+        __bad_atomic_read(p, res);
+        break;
+    }
+}
+
 #define read_atomic(p) ({                                               \
-    typeof(*p) __x;                                                     \
-    switch ( sizeof(*p) ) {                                             \
-    case 1: __x = (typeof(*p))read_u8_atomic((uint8_t *)p); break;      \
-    case 2: __x = (typeof(*p))read_u16_atomic((uint16_t *)p); break;    \
-    case 4: __x = (typeof(*p))read_u32_atomic((uint32_t *)p); break;    \
-    case 8: __x = (typeof(*p))read_u64_atomic((uint64_t *)p); break;    \
-    default: __x = 0; __bad_atomic_size(); break;                       \
-    }                                                                   \
-    __x;                                                                \
+    union { typeof(*p) val; char c[0]; } x_;                            \
+    read_atomic_size(p, x_.c, sizeof(*p));                              \
+    x_.val;                                                             \
 })
 
 #define write_atomic(p, x) ({                                           \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat May 02 16:07:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 16:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUufd-00015b-5d; Sat, 02 May 2020 16:07:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUufc-00015U-9s
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 16:07:16 +0000
X-Inumbo-ID: f98e5e1c-8c8e-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f98e5e1c-8c8e-11ea-9887-bc764e2007e4;
 Sat, 02 May 2020 16:07:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wb54UEHsoeNGMmUecx4QgE7grxFabOL6fqWlr9nKSEQ=; b=U0+/ooRnDk2I96yc6AxomlNAcq
 ypC8j6cRb+EpsMAARN672EU5IsqoQ/lbfmhIdfsPtkYv+sXKZXdgJsdBVyr8HKZEMW/u/HaAJ5+El
 ADU9zLAg1VPcKOJiTeqmfHL8oAV7H2yiElJ8kKuul8GiPtaDMVOtmN3SSmq+s8oRy9UI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUufU-0000q3-7W; Sat, 02 May 2020 16:07:08 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUufT-00054b-RM; Sat, 02 May 2020 16:07:08 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 3/3] xen/x86: atomic: Don't allow to write atomically
 in a pointer to const
Date: Sat,  2 May 2020 17:07:00 +0100
Message-Id: <20200502160700.19573-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200502160700.19573-1-julien@xen.org>
References: <20200502160700.19573-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: julien@xen.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, write_atomic() will happily write through a pointer to
const. While there are no such uses in Xen, it would be best to catch
them at compile time.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/asm-x86/atomic.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/xen/include/asm-x86/atomic.h b/xen/include/asm-x86/atomic.h
index 6b40f9c9f872..0a332b1fae18 100644
--- a/xen/include/asm-x86/atomic.h
+++ b/xen/include/asm-x86/atomic.h
@@ -63,6 +63,8 @@ void __bad_atomic_size(void);
 
 #define write_atomic(p, x) ({                             \
     typeof(*(p)) __x = (x);                               \
+    /* Check that the pointer is not const */             \
+    void *__maybe_unused p_ = &__x;                       \
     unsigned long x_ = (unsigned long)__x;                \
     switch ( sizeof(*(p)) ) {                             \
     case 1: write_u8_atomic((uint8_t *)(p), x_); break;   \
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat May 02 16:07:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 16:07:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUufT-00014Q-9U; Sat, 02 May 2020 16:07:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUufS-00014L-Dn
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 16:07:06 +0000
X-Inumbo-ID: f7b73a14-8c8e-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7b73a14-8c8e-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 16:07:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=XXC8DGMWkMGF7BQd7M647PKodx+HoA9G+1TKVa1kGOs=; b=hGqXa86aIBJir8+lxov5b9R1sa
 J12iDNGBF2AjT2VKrsyyqxeBzi9UOjW/WxQ5PFkkPluHZrRQYMNqgQviXdz1/pgJj/YNt9KvZZai9
 Y0fycBclKU47PU0E8rIBvQXvbxylVWTqh1hgCuK+4wIKdJvGlQ+rBco5/JJmj4DU5y1k=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUufQ-0000pi-Oj; Sat, 02 May 2020 16:07:04 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUufQ-00054b-Co; Sat, 02 May 2020 16:07:04 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 0/3] Rework {read, write}_atomic()
Date: Sat,  2 May 2020 17:06:57 +0100
Message-Id: <20200502160700.19573-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Hi all,

This small series:
    - Hardens write_atomic() to prevent writing through a pointer to const.
    - Allows {read, write}_atomic() to be used in more cases on Arm.

While this was posted after the last posting date, patch #1 is
necessary to avoid the cast introduced by Juergen in [1]. The remaining
patches would be good hardening to have in Xen 4.14, so I would like to
request that the full series be included in Xen 4.14.

Cheers,

[1] <20200430152848.20275-1-jgross@suse.com>

CC: paul@xen.org

Julien Grall (3):
  xen/arm: atomic: Allow read_atomic() to be used in more cases
  xen/arm: atomic: Rewrite write_atomic()
  xen/x86: atomic: Don't allow to write atomically in a pointer to const

 xen/include/asm-arm/atomic.h | 77 ++++++++++++++++++++++++++----------
 xen/include/asm-x86/atomic.h |  2 +
 2 files changed, 59 insertions(+), 20 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat May 02 16:10:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 16:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUuiF-0001Q8-K0; Sat, 02 May 2020 16:09:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUuiE-0001Py-HO
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 16:09:58 +0000
X-Inumbo-ID: 5e68695e-8c8f-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e68695e-8c8f-11ea-ae69-bc764e2007e4;
 Sat, 02 May 2020 16:09:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=sjgRGKeJeHNEvm2DlH6hf3yn/kVQ4Ai7mTa9cxvFe94=; b=2fgUg2DkTSF57YEgShbKnGoC4Q
 fXzcoD/W404cEH+LhTI3RkSAC0mBSmOOsiuX2Y7a7MHn8kpMzyDlPZqCHpQNrCL3PIv1nyxOfbJ6x
 cNzeDDywn5mA2erheJsavrv43bhpqf6zr5iSJK7hTp2dHamlhoWf2VHC9mNwsW49jx6A=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUuiC-0000tC-OL; Sat, 02 May 2020 16:09:56 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUuiC-0005Lo-Dw; Sat, 02 May 2020 16:09:56 +0000
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
From: Julien Grall <julien@xen.org>
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org
References: <20200430152848.20275-1-jgross@suse.com>
 <987abb9e-e4f1-1981-595d-0474e91d67f8@xen.org>
 <678d3815-d554-7b92-aa81-908978e2b19b@suse.com>
 <2c72bb17-cf67-a7ce-6dcb-2c3b4d1231e7@xen.org>
Message-ID: <e274cf53-261d-0af5-6d81-2031e70da3e3@xen.org>
Date: Sat, 2 May 2020 17:09:53 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2c72bb17-cf67-a7ce-6dcb-2c3b4d1231e7@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 02/05/2020 13:34, Julien Grall wrote:
>>>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>>>> index 195e7ee583..81628e2d98 100644
>>>> --- a/xen/include/xen/sched.h
>>>> +++ b/xen/include/xen/sched.h
>>>> @@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
>>>>   static inline bool vcpu_cpu_dirty(const struct vcpu *v)
>>>>   {
>>>> -    return is_vcpu_dirty_cpu(v->dirty_cpu);
>>>> +    return is_vcpu_dirty_cpu(read_atomic((unsigned int *)&v->dirty_cpu));
>>>
>>> Is the cast necessary?
>>
>> Yes, that was the problem I encountered when building for ARM.
>>
>> read_atomic() on ARM has a local variable of the same type as the
>> read_atomic() parameter for storing the result. Due to the const
>> attribute of v this results in assignment to a read-only variable.
> 
> Doh, we should be able to read from a const value without the cast. So 
> I would argue this is a bug in the read_atomic() implementation on Arm. 
> I will try to come up with a patch.

I have just sent a series [1] to address the issue reported here and a 
few more.

Cheers,

[1] <20200502160700.19573-1-julien@xen.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 02 16:19:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 16:19:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUurb-0002Mr-IZ; Sat, 02 May 2020 16:19:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Elib=6Q=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jUura-0002Ml-75
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 16:19:38 +0000
X-Inumbo-ID: b5e37c36-8c90-11ea-9bd4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5e37c36-8c90-11ea-9bd4-12813bfff9fa;
 Sat, 02 May 2020 16:19:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=c+m6DhYrlpPePq47n4GJDyh9xvXyzPPx58fyeexRKzE=; b=xvBCULyRTs4wePMfkgleCxm+H
 I/UaIQA5pOzc7io+z2Tay8nJkE25rK5A9baJJEEbmuwt5LOY1LjA62NmIXM7JCEbUWLFBcxdp/xFu
 v/PLHIbkBGLclKZvOG1rTg6C5SSqM4Zu6l4rQ35EV0IBgyR8CZw7ybWXW0d7DEqhJT/Lk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUurW-000156-4H; Sat, 02 May 2020 16:19:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jUurV-0006n3-Sk; Sat, 02 May 2020 16:19:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jUurV-0001dj-S7; Sat, 02 May 2020 16:19:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149903-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 149903: regressions - FAIL
X-Osstest-Failures: linux-5.4:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=527c60e8b7a89d9e63ffa9b9f579d536e0f30568
X-Osstest-Versions-That: linux=aa73bcc376865c23e61dcebd467697b527901be8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 02 May 2020 16:19:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149903 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149903/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd     15 guest-start/debian.repeat fail REGR. vs. 149878

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149878
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                527c60e8b7a89d9e63ffa9b9f579d536e0f30568
baseline version:
 linux                aa73bcc376865c23e61dcebd467697b527901be8

Last test of basis   149878  2020-04-29 14:40:04 Z    3 days
Testing same since   149903  2020-05-02 07:10:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Alexander Schmidt <alexs@linux.ibm.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Arnd Bergmann <arnd@arndb.de>
  Atsushi Nemoto <atsushi.nemoto@sord.co.jp>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Bodo Stroesser <bstroesser@ts.fujitsu.com>
  Borislav Petkov <bp@suse.de>
  Brendan Higgins <brendanhiggins@google.com>
  Brian Foster <bfoster@redhat.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Chuck Lever <chuck.lever@oracle.com>
  Clement Leger <cleger@kalray.eu>
  Cristian Birsan <cristian.birsan@microchip.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Darrick J. Wong <darrick.wong@oracle.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Bolotin <dbolotin@marvell.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Syromiatnikov <esyr@redhat.com>
  Fangrui Song <maskray@google.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Fugang Duan <fugang.duan@nxp.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hillf Danton <hdanton@sina.com>
  Hugh Dickins <hughd@google.com>
  Hui Wang <hui.wang@canonical.com>
  Ian Rogers <irogers@google.com>
  Jann Horn <jannh@google.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jef Driesen <jef.driesen@niko.eu>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Cline <jcline@redhat.com>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Johannes Berg <johannes.berg@intel.com>
  John Garry <john.garry@huawei.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Juergen Gross <jgross@suse.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Karl Olsen <karl@micro-technic.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Mendes <luis.p.mendes@gmail.com>
  Luke Nelson <luke.r.nels@gmail.com>
  Luke Nelson <lukenels@cs.washington.edu>
  Mark Brown <broonie@kernel.org>
  Martin Fuzzey <martin.fuzzey@flowbird.group>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Maxim Mikityanskiy <maximmi@mellanox.com>
  Michal Kalderon <mkalderon@marvell.com>
  Michal Simek <michal.simek@xilinx.com>
  Mike Christie <mchristi@redhat.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olivier Moysan <olivier.moysan@st.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Philipp Puschmann <p.puschmann@pironex.de>
  Philipp Rudo <prudo@linux.ibm.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raymond Pang <RaymondPang-oc@zhaoxin.com>
  Richard Weinberger <richard@nod.at>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Roy Spliet <nouveau@spliet.org>
  Saeed Mahameed <saeedm@mellanox.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Song Liu <songliubraving@fb.com>
  Stephan Gerhold <stephan@gerhold.net>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Syed Nayyar Waris <syednwaris@gmail.com>
  syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tamizh chelvam <tamizhr@codeaurora.org>
  Tejun Heo <tj@kernel.org>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vitor Massaru Iha <vitor@massaru.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Waiman Long <longman@redhat.com>
  Wang YanQing <udknight@gmail.com>
  Wei Liu <wei.liu@kernel.org>
  Willem de Bruijn <willemb@google.com>
  William Breathitt Gray <vilhelm.gray@gmail.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@the-dreams.de>
  Xi Wang <xi.wang@gmail.com>
  Xiumei Mu <xmu@redhat.com>
  Yang Shi <yang.shi@linux.alibaba.com>
  yangerkun <yangerkun@huawei.com>
  YueHaibing <yuehaibing@huawei.com>
  Yuval Bason <ybason@marvell.com>
  Yuval Basson <ybason@marvell.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zhu Yanjun <yanjunz@mellanox.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2311 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 02 17:36:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 17:36:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUw37-0000Ta-Tr; Sat, 02 May 2020 17:35:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zGBs=6Q=gmail.com=tcminyard@srs-us1.protection.inumbo.net>)
 id 1jUw36-0000TV-HO
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 17:35:36 +0000
X-Inumbo-ID: 544f2db6-8c9b-11ea-b9cf-bc764e2007e4
Received: from mail-oi1-x241.google.com (unknown [2607:f8b0:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 544f2db6-8c9b-11ea-b9cf-bc764e2007e4;
 Sat, 02 May 2020 17:35:35 +0000 (UTC)
Received: by mail-oi1-x241.google.com with SMTP id 19so2737790oiy.8
 for <xen-devel@lists.xenproject.org>; Sat, 02 May 2020 10:35:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:date:from:to:cc:subject:message-id:reply-to:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=ecZ4re6tEFUOaEs9z2seUIgFP49nJczxjDwXDfKyVyc=;
 b=K2ifj9z/sQn31A/IBQLCXwAi04xDfpFq7Ky1CCwmd7ugBV58T7l3EMtQFXgjS254q0
 x0v0Wdn4/D1+QtG6oNVyQ8M1EOqVhHy5FRPLv+MHQBWVtzdrmRHotnIidwP63+FVOuz+
 m0uz7PuAS5YRLvLZAIMZSxKZupKIpzUqOrE+6xJmvsvHiLH/1qm9NGjY2qhu1HixaU4s
 9Z7FFfIpMSrEMYANiKpvd+bD2NACrvX5FL1+7ZxBteT1XU+GejFo5BE0usm78FwyD05B
 MET707nqU268/87aUTRX1EQOs+2ewTJ9kn8QFrwDRHf5d5y00f6IU/zFVnJ42AOIAdrC
 hA/Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
 :reply-to:references:mime-version:content-disposition:in-reply-to
 :user-agent;
 bh=ecZ4re6tEFUOaEs9z2seUIgFP49nJczxjDwXDfKyVyc=;
 b=iU1x2vY8oNMvJT9SymKqnYoI6mcO4o/HHZaG3DEmG+eI1AFRw6KR5D2HCq5tujqbnE
 PcIaEQUYKHQOFbE2EWywzMYs73YRYcADbHOsxVG9h4lQvJW3kkkOKnakOKrgjxIR2/aa
 4L0efoz9xAvQf86QBrFQd6pM1VHbbRqTVgQE5YlBFd9odOpTy1P9i1rGkZjXBWxMNxtv
 rqs/sE6i//0AC0knWQhMewhpUfL+9DJ7qPzQ+nBek7g7qDFedbkDAyhrpupzoiEXbg8N
 kRrLSFLDU6kOHX21+JUQWLRnRlif3JfXXbhw8Zs7b6o3uMJfvFopjf1C0bIhu3Q06Kvj
 9zZA==
X-Gm-Message-State: AGi0PuZxZ+iddMky//+E4aBNtcLZ9HEEzVv4fRF8aLg4GW2O7rzeWMHS
 oGwlOxnvS8d7seIVG+vPdQ==
X-Google-Smtp-Source: APiQypJHytizxtHclb6e6Hv0NygOgGxE1U+n0m+bD8T82Br1kt1mWgRMGVpoi7xQF6TQ2SVy2QA0gw==
X-Received: by 2002:aca:2113:: with SMTP id 19mr3688100oiz.128.1588440934369; 
 Sat, 02 May 2020 10:35:34 -0700 (PDT)
Received: from serve.minyard.net (serve.minyard.net. [2001:470:b8f6:1b::1])
 by smtp.gmail.com with ESMTPSA id q28sm1765305oof.42.2020.05.02.10.35.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 02 May 2020 10:35:32 -0700 (PDT)
Received: from minyard.net (unknown
 [IPv6:2001:470:b8f6:1b:8b39:c3f3:f502:5c4e])
 by serve.minyard.net (Postfix) with ESMTPSA id BAAC918000D;
 Sat,  2 May 2020 17:35:30 +0000 (UTC)
Date: Sat, 2 May 2020 12:35:29 -0500
From: Corey Minyard <minyard@acm.org>
To: Julien Grall <julien@xen.org>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Message-ID: <20200502173529.GH9902@minyard.net>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: minyard@acm.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
> Hi,
> 
> On 02/05/2020 03:16, Corey Minyard wrote:
> > On Fri, May 01, 2020 at 06:06:11PM -0700, Roman Shaposhnik wrote:
> > > On Fri, May 1, 2020 at 4:42 AM Corey Minyard <minyard@acm.org> wrote:
> > > > 
> > > > On Thu, Apr 30, 2020 at 07:20:05PM -0700, Roman Shaposhnik wrote:
> > > > > Hi!
> > > > > 
> > > > > I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
> > > > > upstream kernel. The kernel itself works perfectly well
> > > > > on the board. When I try booting it as Dom0 under Xen,
> > > > > it goes into a stacktrace (attached).
> > > > 
> > > > Getting Xen working on the Pi4 requires a lot of moving parts, and they
> > > > all have to be right.
> > > 
> > > Tell me about it! It is a pretty frustrating journey to align
> > > everything just right.
> > > On the other hand -- it seems worthwhile to enable RPi as an ARM
> > > development platform for Xen given how ubiquitous it is.
> > > 
> > > Hence me trying to combine pristine upstream kernel (5.6.1) with
> > > pristine upstream
> > > Xen to enable 100% upstream developer workflow on RPi.
> > > 
> > > > > Looking at what nice folks over at Dornerworks have previously
> > > > > done to make RPi kernels boot as Dom0 I've come across these
> > > > > 3 patches:
> > > > >      https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
> > > > > 
> > > > > The first patch seems irrelevant (unless I'm missing something
> > > > > really basic here).
> > > > 
> > > > It might be irrelevant for your configuration, assuming that Xen gets
> > > > the right information from EFI.  I haven't tried EFI booting.
> > > 
> > > I'm doing a bit of a belt-and-suspenders strategy really -- I'm actually
> > > using a UEFI u-boot implementation that pre-populates device trees,
> > > and then I'm also forcing an extra copy of it to be loaded explicitly
> > > via the GRUB devicetree command (GRUB runs as a UEFI payload).
> > > 
> > > I also have access to the semi-official TianoCore RPi4 port that seems
> > > to be working pretty well: https://github.com/pftf/RPi4/releases/tag/v1.5
> > > for booting all sort of UEFI payloads on RPi4.
> > > 
> > > > > The 2nd patch applied with no issue (but
> > > > > I don't think it is related), but the 3rd patch failed to apply on
> > > > > account of the 5.6.1 kernel no longer having:
> > > > >      dev->archdata.dev_dma_ops
> > > > > E.g.
> > > > >      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
> > > > > 
> > > > > I've tried to emulate the effect of the patch by simply introducing
> > > > > a static variable that would signal that we already initialized
> > > > > dev->dma_ops -- but that didn't help at all.
> > > > > 
> > > > > I'm CCing Jeff Kubascik to see if the original authors of those
> > > > > patches maybe have any suggestions on how this may be dealt
> > > > > with.
> > > > > 
> > > > > Any advice would be greatly appreciated!
> > > > 
> > > > What's your Pi4 config.txt file look like? The GIC is not being handled
> > > > correctly, and I'm guessing that configuration is wrong.  You can't use
> > > > the stock config.txt file with Xen, you have to modify the configuration a
> > > > bit.
> > > 
> > > Understood. I'm actually using a custom one:
> > >      https://github.com/lf-edge/eve/blob/master/pkg/u-boot/rpi/config.txt
> > > 
> > > I could swear that I had a GIC setting in it -- but apparently no -- so
> > > I added the following at the top of what you could see at the URL above:
> > > 
> > > total_mem=4096
> > > enable_gic=1
> > > 
> > > > I think just adding:
> > > > 
> > > > enable_gic=1
> > > > total_mem=1024
> > > 
> > > Right -- but my board has 4G memory -- so I think what I did above should work.
> > 
> > Nope.  If you say 4096M of RAM, your issue is almost certainly DMA, but
> > it's not (just) the Linux code.  On the Raspberry Pi 4, several devices
> > cannot DMA to above 1024M of RAM, but Xen does not handle this.  The
> > 1024M of RAM is a limitation you will have to live with until Xen has a
> > fix.
> 
> IIUC, dom0 would need to have some memory below 1GB for this to work, am I
> correct?

No.  If I am understanding this correctly, all the memory in dom0 below
1GB would have to be physically below 1GB.

The Linux patch set starts at:

https://lore.kernel.org/linux-iommu/20191015174616.GO13874@arrakis.emea.arm.com/t/

There is also an interesting reference at:

https://www.raspberrypi.org/forums/viewtopic.php?t=264975

-corey

> 
> If so could you try the following patch?
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 430708753642..002f49dba74b 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -282,7 +282,7 @@ static void __init allocate_memory_11(struct domain *d,
>       */
>      while ( order >= min_low_order )
>      {
> -        for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
> +        for ( bits = order ; bits <= (lowmem ? 30 : PADDR_BITS); bits++ )
>          {
>              pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
>              if ( pg != NULL )
> @@ -313,7 +313,7 @@ static void __init allocate_memory_11(struct domain *d,
>      order = get_allocation_size(kinfo->unassigned_mem);
>      while ( kinfo->unassigned_mem && kinfo->mem.nr_banks < NR_MEM_BANKS )
>      {
> -        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(32) : 0);
> +        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(30) : 0);
>          if ( !pg )
>          {
>              order --;
> 
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 02 17:48:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 17:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUwFf-0001Nm-48; Sat, 02 May 2020 17:48:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUwFd-0001Nh-N7
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 17:48:33 +0000
X-Inumbo-ID: 241bad8e-8c9d-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 241bad8e-8c9d-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 17:48:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AaOG4OG1gF8GBDEVqmWxVrWzYiHajboN3N8b3XWOgi0=; b=I16hPbpho2LMLLjA0g+QFIRduf
 yMoyGqGD0ZXxy2bOFr3DjdxJ5F7vx/pcUgd3/fhEnLG7YlGeqimVFww2Dk94DY+vA5v0pNapzkP1D
 mjm3GUaGDNWxFCZaQ1H+u/K9zM90e8fQ7IC8fdPiuY8RQxbbEK2c1fU34rUGtgGbFdGs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUwFZ-0002eo-KI; Sat, 02 May 2020 17:48:29 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUwFZ-0005An-D1; Sat, 02 May 2020 17:48:29 +0000
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: minyard@acm.org
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
From: Julien Grall <julien@xen.org>
Message-ID: <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
Date: Sat, 2 May 2020 18:48:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200502173529.GH9902@minyard.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 02/05/2020 18:35, Corey Minyard wrote:
> On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
> No.  If I am understanding this correctly, all the memory in dom0 below
> 1GB would have to be physically below 1GB.

Well, dom0 is directly mapped. In other words, dom0 physical address ==
host physical address. So if we allocate memory below 1GB in Xen, then
it will be mapped below 1GB in dom0 as well.

The patch I suggested would try to allocate as much memory as possible
below 1GB. I believe this should do the trick for you here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 02 18:03:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 18:03:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUwTs-0003CE-P1; Sat, 02 May 2020 18:03:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u8kx=6Q=intel.com=dan.j.williams@srs-us1.protection.inumbo.net>)
 id 1jUwTs-0003C9-40
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 18:03:16 +0000
X-Inumbo-ID: 3065921a-8c9f-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x542.google.com (unknown [2a00:1450:4864:20::542])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3065921a-8c9f-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 18:03:13 +0000 (UTC)
Received: by mail-ed1-x542.google.com with SMTP id k22so9883271eds.6
 for <xen-devel@lists.xenproject.org>; Sat, 02 May 2020 11:03:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=intel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=gzsuxDlItzKXIUBK7WGNpTkOsm9nq+BDdgFK8X2BWvo=;
 b=aSBw5wCDConKD/IcaaHbp4Mo0CTJ0pgKRvNWRkiIWGE3lCN/oPbrEa9/oKAglpfRn1
 ybYK1kBJxRVbPzvApGi7N1gI88wlw4xjREME4zX7kVXKZH3FDgSbie3ks5shBRY8TZ6q
 DMFWhZRW7RkjeJCSKPKh/8pZ2QWGSZw/Nyq/zAlnOngdfv/+o/QV9wf4QVQZ5gXVDkzN
 XNN+MYW95/ejtNi3wUzQ6oJ8i+BfB05eb9CwAbUAIbruiYbVwz5lfsdGnAXaouOdPAgk
 c8v1YIbx6b+WQ999NPSKnSrIN3qI9oKtRGvdPYhTcKRXMBwc/GxItvO5XlgyBCaOSzcl
 2H2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=gzsuxDlItzKXIUBK7WGNpTkOsm9nq+BDdgFK8X2BWvo=;
 b=QLUCzB6hCLV1TRenQHLxv1RwOFLugUnMzJrBZSVsitu6Mq5K7P96Zk3GHj6/j4V3A0
 sJ1ZabuRKD28ZVz9DqZpobe7gt0QudtMUlxiVj+O+HEA+wdfjGmnxF16HOcspUS6BOeo
 N+0oKZtvd6PDat38oDFSBDsJkwYbrKbC23Fj7PWdds+5GE3dyRUZxdeQpGj7Y/TOIu4d
 w1/iEtTiy2m/g2sgQPJtm0jviG9McIsg3+6qUyYfvGkLsEU7awDxVmdeI56bxetmLUpP
 Os+e2/g7TkOC5iSQiKfddFIXDoouz0AImMQFmQOJ82fe8w8JYR7v35bq78088kYzbPgR
 EdMw==
X-Gm-Message-State: AGi0PuY+lbF2wzMzjdtQlGCIbvbv+EmN6tA4AlNXVFOWDvAybFL0pVb/
 brGXzDdpcp5gPA2EWwoaMZcn/Sa9Bj4tkEwMQ4pQZQ==
X-Google-Smtp-Source: APiQypJrbz2VzfjebhxcGzu52aKY5ET0/26CFym5+FOkkSLJ9Vu46v3NJUgCYAVe4ZsgBJWlfsgCXWWgDBxtIIIlF1U=
X-Received: by 2002:a05:6402:3136:: with SMTP id
 dd22mr8275680edb.165.1588442592251; 
 Sat, 02 May 2020 11:03:12 -0700 (PDT)
MIME-Version: 1.0
References: <20200430102908.10107-1-david@redhat.com>
 <875zdg26hp.fsf@x220.int.ebiederm.org>
 <b28c9e02-8cf2-33ae-646b-fe50a185738e@redhat.com>
 <20200430152403.e0d6da5eb1cad06411ac6d46@linux-foundation.org>
 <5c908ec3-9495-531e-9291-cbab24f292d6@redhat.com>
 <CAPcyv4j=YKnr1HW4OhAmpzbuKjtfP7FdAn4-V7uA=b-Tcpfu+A@mail.gmail.com>
 <2d019c11-a478-9d70-abd5-4fd2ebf4bc1d@redhat.com>
 <CAPcyv4iOqS0Wbfa2KPfE1axQFGXoRB4mmPRP__Lmqpw6Qpr_ig@mail.gmail.com>
 <62dd4ce2-86cc-5b85-734f-ec8766528a1b@redhat.com>
 <0169e822-a6cc-1543-88ed-2a85d95ffb93@redhat.com>
 <CAPcyv4jGnR_fPtpKBC1rD2KRcT88bTkhqnTMmuwuc+f9Dwrz1g@mail.gmail.com>
 <9f3a813e-dc1d-b675-6e69-85beed3057a4@redhat.com>
 <CAPcyv4jjrxQ27rsfmz6wYPgmedevU=KG+wZ0GOm=qiE6tqa+VA@mail.gmail.com>
 <04242d48-5fa9-6da4-3e4a-991e401eb580@redhat.com>
 <CAPcyv4iXyOUDZgqhWH1KCObvATL=gP55xEr64rsRfUuJg5B+eQ@mail.gmail.com>
 <8242c0c5-2df2-fc0c-079a-3be62c113a11@redhat.com>
 <CAPcyv4h1nWjszkVJQgeXkUc=-nPv5=Me25BOGFQCpihUyFsD6w@mail.gmail.com>
 <467ccba3-80ac-085c-3127-d5618d77d3e0@redhat.com>
In-Reply-To: <467ccba3-80ac-085c-3127-d5618d77d3e0@redhat.com>
From: Dan Williams <dan.j.williams@intel.com>
Date: Sat, 2 May 2020 11:03:01 -0700
Message-ID: <CAPcyv4iqwh6k40DUy-Pwi2h5pJm9vu7+JU1ghELy=3MGM1naNg@mail.gmail.com>
Subject: Re: [PATCH v2 2/3] mm/memory_hotplug: Introduce MHP_NO_FIRMWARE_MEMMAP
To: David Hildenbrand <david@redhat.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: virtio-dev@lists.oasis-open.org, linux-hyperv@vger.kernel.org,
 Michal Hocko <mhocko@suse.com>, Baoquan He <bhe@redhat.com>,
 Linux ACPI <linux-acpi@vger.kernel.org>, Wei Yang <richard.weiyang@gmail.com>,
 linux-s390 <linux-s390@vger.kernel.org>,
 linux-nvdimm <linux-nvdimm@lists.01.org>,
 Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
 Dave Hansen <dave.hansen@linux.intel.com>,
 virtualization@lists.linux-foundation.org, Linux MM <linux-mm@kvack.org>,
 "Michael S . Tsirkin" <mst@redhat.com>,
 "Eric W. Biederman" <ebiederm@xmission.com>,
 Pankaj Gupta <pankaj.gupta.linux@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Andrew Morton <akpm@linux-foundation.org>, Michal Hocko <mhocko@kernel.org>,
 linuxppc-dev <linuxppc-dev@lists.ozlabs.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, May 2, 2020 at 2:27 AM David Hildenbrand <david@redhat.com> wrote:
>
> >> Now, let's clarify what I want regarding virtio-mem:
> >>
> >> 1. kexec should not add virtio-mem memory to the initial firmware
> >>    memmap. The driver has to be in charge as discussed.
> >> 2. kexec should not place kexec images onto virtio-mem memory. That
> >>    would end badly.
> >> 3. kexec should still dump virtio-mem memory via kdump.
> >
> > Ok, but then seems to say to me that dax/kmem is a different type of
> > (driver managed) than virtio-mem and it's confusing to try to apply
> > the same meaning. Why not just call your type for the distinct type it
> > is "System RAM (virtio-mem)" and let any other driver managed memory
> > follow the same "System RAM ($driver)" format if it wants?
>
> I had the same idea but discarded it because it seemed to uglify the
> add_memory() interface (passing yet another parameter only relevant for
> driver managed memory). Maybe we really want a new one, because I like
> that idea:
>
> /*
>  * Add special, driver-managed memory to the system as system ram.
>  * The resource_name is expected to have the name format "System RAM
>  * ($DRIVER)", so user space (esp. kexec-tools) can special-case it.
>  *
>  * For this memory, no entries in /sys/firmware/memmap are created,
>  * as this memory won't be part of the raw firmware-provided memory map
>  * e.g., after a reboot. Also, the created memory resource is flagged
>  * with IORESOURCE_MEM_DRIVER_MANAGED, so in-kernel users can special-
>  * case this memory (e.g., not place kexec images onto it).
>  */
> int add_memory_driver_managed(int nid, u64 start, u64 size,
>                               const char *resource_name);
>
>
> If we'd ever have to special-case it even more in the kernel, we could
> allow specifying further resource flags. While passing the driver name
> instead of the resource_name would be an option, this way we don't have
> to hand-craft new resource strings for added memory resources.
>
> Thoughts?

Looks useful to me and simplifies walking /proc/iomem. I personally
like the safety of the string just being the $driver component of the
name, but I won't lose sleep if the interface stays freeform like you
propose.


From xen-devel-bounces@lists.xenproject.org Sat May 02 19:13:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 19:13:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUxZ7-0000aM-PH; Sat, 02 May 2020 19:12:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0nVz=6Q=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1jUxZ6-0000aH-Di
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 19:12:44 +0000
X-Inumbo-ID: e5ecae1c-8ca8-11ea-9bf4-12813bfff9fa
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e5ecae1c-8ca8-11ea-9bf4-12813bfff9fa;
 Sat, 02 May 2020 19:12:43 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 042Imbho096734
 (version=TLSv1.2 cipher=DHE-RSA-AES128-GCM-SHA256 bits=128 verify=NO);
 Sat, 2 May 2020 14:48:43 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 042ImaYR096733;
 Sat, 2 May 2020 11:48:36 -0700 (PDT) (envelope-from ehem)
Date: Sat, 2 May 2020 11:48:36 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Corey Minyard <minyard@acm.org>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Message-ID: <20200502184836.GA96674@mattapan.m5p.com>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200502173529.GH9902@minyard.net>
User-Agent: Mutt/1.11.4 (2019-03-13)
X-Spam-Status: No, score=0.3 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
 autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, May 02, 2020 at 12:35:29PM -0500, Corey Minyard wrote:
> On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
> > 
> > On 02/05/2020 03:16, Corey Minyard wrote:
> > > 
> > > Nope.  If you say 4096M of RAM, your issue is almost certainly DMA, but
> > > it's not (just) the Linux code.  On the Raspberry Pi 4, several devices
> > > cannot DMA to above 1024M of RAM, but Xen does not handle this.  The
> > > 1024M of RAM is a limitation you will have to live with until Xen has a
> > > fix.
> > 
> > IIUC, dom0 would need to have some memory below 1GB for this to work, am I
> > correct?
> 
> No.  If I am understanding this correctly, all the memory in dom0 below
> 1GB would have to be physically below 1GB.
> 
> The Linux patch set starts at:
> 
> https://lore.kernel.org/linux-iommu/20191015174616.GO13874@arrakis.emea.arm.com/t/
> 

Actually, things get worse.  What if someone wants to run an X server in
a DomU and have that DomU accessing the graphics hardware?  It really
needs to be the case that allocating DMA-capable memory means talking to
Xen.

As pointed out in that discussion, different boards are going to have
the DMA borderline in different places.  There could be enough variation
that it needs to be settable at run time.  Then some boards might have
some DMA devices which can access all memory, and some which cannot
(full-DMA versus limited-DMA?).


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat May 02 19:44:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 19:44:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jUy3L-00033T-7Z; Sat, 02 May 2020 19:43:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L6si=6Q=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jUy3J-00033N-RZ
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 19:43:57 +0000
X-Inumbo-ID: 42fdcb28-8cad-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42fdcb28-8cad-11ea-b07b-bc764e2007e4;
 Sat, 02 May 2020 19:43:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kW0zjqnP9wTG7XNLFLNKwa2GDumnd5MVp8hUu8yOTO8=; b=tOS889RZ4gH6WWmkCA3SePCKL9
 aEfOptqFCfVByEScs91tnPy8LwnixqYI9kgCa1mWwdYvC+EglHkCDdSeBDv+rkHKjWN+bXgG/ogK3
 wpSOTigpPg94TUdeJdkhAcJiyDq3bYGKdlljrOUEr4PZZML6Xgj5AixkAAWAjG2l5GsY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jUy3I-0004pG-F8; Sat, 02 May 2020 19:43:56 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jUy3I-0005cz-7K; Sat, 02 May 2020 19:43:56 +0000
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Elliott Mitchell <ehem+xen@m5p.com>, Corey Minyard <minyard@acm.org>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net> <20200502184836.GA96674@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <00800979-6202-b2c1-7e1f-d05184d187a6@xen.org>
Date: Sat, 2 May 2020 20:43:53 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200502184836.GA96674@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Elliott,

On 02/05/2020 19:48, Elliott Mitchell wrote:
> On Sat, May 02, 2020 at 12:35:29PM -0500, Corey Minyard wrote:
>> On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
>>>
>>> On 02/05/2020 03:16, Corey Minyard wrote:
>>>>
>>>> Nope.  If you say 4096M of RAM, your issue is almost certainly DMA, but
>>>> it's not (just) the Linux code.  On the Raspberry Pi 4, several devices
>>>> cannot DMA to above 1024M of RAM, but Xen does not handle this.  The
>>>> 1024M of RAM is a limitation you will have to live with until Xen has a
>>>> fix.
>>>
>>> IIUC, dom0 would need to have some memory below 1GB for this to work, am I
>>> correct?
>>
>> No.  If I am understanding this correctly, all the memory in dom0 below
>> 1GB would have to be physically below 1GB.
>>
>> The Linux patch set starts at:
>>
>> https://lore.kernel.org/linux-iommu/20191015174616.GO13874@arrakis.emea.arm.com/t/
>>
> 
> Actually, things get worse.  What if someone wants to run an X-server in
> DomU and have a DomU accessing the graphics hardware?  Really needs to be
> a case of allocating DMA-capable memory means talking to Xen.

I am confused: if you pass through a device to your DomU, then you are
most likely going to want to protect it using an IOMMU. So are you
talking about with or without an IOMMU?

Let's imagine that you want to do without an IOMMU. The easiest way
would be to direct-map your domain (i.e. host physical == guest
physical), because it requires no modification in your guest. Only the
toolstack and Xen would require modification.

Stefano has been working on a solution in the dom0less case (see [1]). 
The approach is to let the user select the region of RAM to use for a 
given guest.


> As pointed out in that discussion different boards are going to have the
> DMA-borderline in different places.  There could be enough variation that
> it needs to be settable at run time.  Then some boards might have some
> DMA devices which can access all memory, and some which cannot (full-DMA
> versus limited-DMA?).
> 
> 

Cheers,

[1] 
https://lore.kernel.org/xen-devel/91b9d1d9-6e6f-c8b9-55ac-a3477b20a17b@xen.org/T/#t

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 02 22:41:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 02 May 2020 22:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jV0ou-0000VY-Cu; Sat, 02 May 2020 22:41:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Elib=6Q=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jV0ot-0000VT-Lj
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 22:41:15 +0000
X-Inumbo-ID: 03cebcfa-8cc6-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03cebcfa-8cc6-11ea-ae69-bc764e2007e4;
 Sat, 02 May 2020 22:41:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=myQIwseKmNANNUN8/3sa/9rzjwYgvfTVJlzfxtLwzhY=; b=WLf8SoPxqwYSkExw2Su4/bVvF
 pR7mwQEycoMcRuIsRaWg4T3OAHYB+aC6EHL+7Ch+ZqjVv6V5Toy/MOIft/zdDLvbDTbYOEtbJjsFJ
 11NfJSXGdVQrH0n/HGiVFAVD/0JKxk7afh5s8FuAupwxIcC/0q4SNv4cCTLeGmZVUk9G0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jV0om-00086d-6e; Sat, 02 May 2020 22:41:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jV0ol-0007xI-RH; Sat, 02 May 2020 22:41:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jV0ol-0002dN-Qm; Sat, 02 May 2020 22:41:07 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149905-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 149905: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=9895e0ac338a8060e6947f897397c21c4d78d80d
X-Osstest-Versions-That: linux=aa73bcc376865c23e61dcebd467697b527901be8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 02 May 2020 22:41:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149905 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149905/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149878
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                9895e0ac338a8060e6947f897397c21c4d78d80d
baseline version:
 linux                aa73bcc376865c23e61dcebd467697b527901be8

Last test of basis   149878  2020-04-29 14:40:04 Z    3 days
Failing since        149903  2020-05-02 07:10:15 Z    0 days    2 attempts
Testing same since   149905  2020-05-02 16:38:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Alexander Schmidt <alexs@linux.ibm.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Arnd Bergmann <arnd@arndb.de>
  Atsushi Nemoto <atsushi.nemoto@sord.co.jp>
  Ayush Sawal <ayush.sawal@chelsio.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Bodo Stroesser <bstroesser@ts.fujitsu.com>
  Borislav Petkov <bp@suse.de>
  Brendan Higgins <brendanhiggins@google.com>
  Brian Foster <bfoster@redhat.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Chuck Lever <chuck.lever@oracle.com>
  Clement Leger <cleger@kalray.eu>
  Cristian Birsan <cristian.birsan@microchip.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Darrick J. Wong <darrick.wong@oracle.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Denis Bolotin <dbolotin@marvell.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Eugene Syromiatnikov <esyr@redhat.com>
  Fangrui Song <maskray@google.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Fugang Duan <fugang.duan@nxp.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hillf Danton <hdanton@sina.com>
  Hugh Dickins <hughd@google.com>
  Hui Wang <hui.wang@canonical.com>
  Ian Rogers <irogers@google.com>
  Jann Horn <jannh@google.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jef Driesen <jef.driesen@niko.eu>
  Jens Axboe <axboe@kernel.dk>
  Jeremy Cline <jcline@redhat.com>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Johannes Berg <johannes.berg@intel.com>
  John Garry <john.garry@huawei.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Juergen Gross <jgross@suse.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Karl Olsen <karl@micro-technic.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Mendes <luis.p.mendes@gmail.com>
  Luke Nelson <luke.r.nels@gmail.com>
  Luke Nelson <lukenels@cs.washington.edu>
  Mark Brown <broonie@kernel.org>
  Martin Fuzzey <martin.fuzzey@flowbird.group>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Maxim Mikityanskiy <maximmi@mellanox.com>
  Michal Kalderon <mkalderon@marvell.com>
  Michal Simek <michal.simek@xilinx.com>
  Mike Christie <mchristi@redhat.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nicolas Saenz Julienne <nsaenzjulienne@suse.de>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Olaf Hering <olaf@aepfle.de>
  Olivier Moysan <olivier.moysan@st.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Philipp Puschmann <p.puschmann@pironex.de>
  Philipp Rudo <prudo@linux.ibm.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raymond Pang <RaymondPang-oc@zhaoxin.com>
  Richard Weinberger <richard@nod.at>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Roy Spliet <nouveau@spliet.org>
  Saeed Mahameed <saeedm@mellanox.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Sergey Senozhatsky <sergey.senozhatsky@gmail.com>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Song Liu <songliubraving@fb.com>
  Stephan Gerhold <stephan@gerhold.net>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Syed Nayyar Waris <syednwaris@gmail.com>
  syzbot+e27980339d305f2dbfd9@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Tamizh chelvam <tamizhr@codeaurora.org>
  Tejun Heo <tj@kernel.org>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vitor Massaru Iha <vitor@massaru.org>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Waiman Long <longman@redhat.com>
  Wang YanQing <udknight@gmail.com>
  Wei Liu <wei.liu@kernel.org>
  Willem de Bruijn <willemb@google.com>
  William Breathitt Gray <vilhelm.gray@gmail.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@the-dreams.de>
  Xi Wang <xi.wang@gmail.com>
  Xiumei Mu <xmu@redhat.com>
  Yang Shi <yang.shi@linux.alibaba.com>
  yangerkun <yangerkun@huawei.com>
  YueHaibing <yuehaibing@huawei.com>
  Yuval Bason <ybason@marvell.com>
  Yuval Basson <ybason@marvell.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zhu Yanjun <yanjunz@mellanox.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   aa73bcc37686..9895e0ac338a  9895e0ac338a8060e6947f897397c21c4d78d80d -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sun May 03 03:31:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 03 May 2020 03:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jV5LI-0004oX-S7; Sun, 03 May 2020 03:31:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3VTY=6R=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jV5LH-0004oS-H9
 for xen-devel@lists.xenproject.org; Sun, 03 May 2020 03:30:59 +0000
X-Inumbo-ID: 7c8879c4-8cee-11ea-9c47-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c8879c4-8cee-11ea-9c47-12813bfff9fa;
 Sun, 03 May 2020 03:30:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=n6Hf0zpTpdIMf+557ouh4X4JN1EJy8uFI9/UqDFNsys=; b=dHlS6YVUm3PyT9xR3ooKhePTM
 WMI1WOdZcFWl0uwp96kxxckgKAvpCIRzQGJwvvp2l4ATPFVPUYB7makAcxDAHD/Odb7Ajm2ShXjht
 FbK2TYR+6fd2W+6EQz6UpKGZnsfbzr5kAlMZiAy2evHVA6zR+FA77TXPU68e0eYWIWyAY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jV5L8-0003dH-Dw; Sun, 03 May 2020 03:30:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jV5L7-0005DH-NT; Sun, 03 May 2020 03:30:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jV5L7-0007z3-L9; Sun, 03 May 2020 03:30:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149906-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 149906: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=f66ed1ebbfde37631fba289f7c399eaa70632abf
X-Osstest-Versions-That: linux=690e2aba7beb1ef06352803bea41a68a3c695015
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 03 May 2020 03:30:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149906 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149906/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 149901
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149901
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149901
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149901
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149901
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149901
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149901
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149901
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149901
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f66ed1ebbfde37631fba289f7c399eaa70632abf
baseline version:
 linux                690e2aba7beb1ef06352803bea41a68a3c695015

Last test of basis   149901  2020-05-02 03:37:50 Z    0 days
Testing same since   149906  2020-05-02 19:08:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andreas Gruenbacher <agruenba@redhat.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anna Schumaker <Anna.Schumaker@Netapp.com>
  Chuck Lever <chuck.lever@oracle.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  Dave Jiang <dave.jiang@intel.com>
  Dmitry Osipenko <digetx@gmail.com>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lubomir Rintel <lkundrak@v3.sk>
  Maciej Grochowski <maciej.grochowski@pm.me>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Masahiro Yamada <yamada.masahiro@socionext.com>
  NeilBrown <neilb@suse.de>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Sebastian von Ohr <vonohr@smaract.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Vinod Koul <vkoul@kernel.org>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yixin Zhang <yixin.zhang@intel.com>
  YueHaibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   690e2aba7beb..f66ed1ebbfde  f66ed1ebbfde37631fba289f7c399eaa70632abf -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun May 03 05:30:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 03 May 2020 05:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jV7Cx-0006OC-CV; Sun, 03 May 2020 05:30:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=m3Uy=6Q=gmail.com=zachm1996@srs-us1.protection.inumbo.net>)
 id 1jUzRF-0001t5-PA
 for xen-devel@lists.xenproject.org; Sat, 02 May 2020 21:12:45 +0000
X-Inumbo-ID: aa1f20ac-8cb9-11ea-ae69-bc764e2007e4
Received: from mail-yb1-xb41.google.com (unknown [2607:f8b0:4864:20::b41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa1f20ac-8cb9-11ea-ae69-bc764e2007e4;
 Sat, 02 May 2020 21:12:44 +0000 (UTC)
Received: by mail-yb1-xb41.google.com with SMTP id e16so4729111ybn.7
 for <xen-devel@lists.xenproject.org>; Sat, 02 May 2020 14:12:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=L6ufc7Z3BoosHaEKS6oXGUAberAmw4FdVeDYSG6Bu8E=;
 b=uSTlVhFD2QSKzPEt6S9MmQ9AaaH3ziQU8DFaY9jWB/FejoTiNe8P8SZvHGIlnaYaRw
 edMP9SVGm2qXcRHRMEgiTYQECbUNTE4Sm3j1DK3FS1FACpRIXtb39Pf6ckngrRwOQhWC
 RMBgcFlFCy60rZLffVjCOIVJj1YZN6Z90mycMChzsppqNG3P8aH3bgGBMhp77gTFaaYp
 TcNtyuJiX6jruJQenODG64H35q3rYt5KLzl0eShzNbwhSYQ3qgc1P7u557R7xWQ4PIEh
 PXgT33SLbK1KflajHh8gWHK9alTbk1cxmXST851n+UyFohb97qj9PxbZnSbvcI3XlU+A
 lV1Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=L6ufc7Z3BoosHaEKS6oXGUAberAmw4FdVeDYSG6Bu8E=;
 b=hOPd3mOfGqznBtF8yheKmXDXWmkMog+tJP7RvZX8XJLHAdQOWUNKaoEsuL5UDk2TY8
 1HvLldNcyNG944gReiquJLmrOlWe5pDDuhwtkX/q/p1+WXau34T39sR02wMQkJPBywbW
 h095wKCQU7vNeHIYlsZH7royjGI8FVrZ130Db+Xu6gY/v/iI+vmXSN+59CMQTikl0juP
 1v5CuQp7jbBWx5Sstz0YtEPMn9RRLXi6O5xyyYaDSfvXXm4xf4UY+WUgnyf6bYjAGKgA
 xrTMZCkLW2zte3yB5yf12+QoFacilQBc7XOs+10cTcSNXCKRa5pKlL8MqGtD8sT/zatJ
 Ksqw==
X-Gm-Message-State: AGi0PuaCkaFdqvE0sGVIeMVkJW5O1uVJnk+b/27ERSCjoSiln/pglswc
 q3DPKD8GGGFUq5LgEsUCiriH9hXJdgDQcWDwZWc=
X-Google-Smtp-Source: APiQypJfKLHpKzSfNQZr9v5Bdn04xX532i2C8vbsfkD8bj50Z3l9V3L1XrcZEXSHAT7D0MFjkf1CmbFkKCjnpy5T0kQ=
X-Received: by 2002:a5b:3d1:: with SMTP id t17mr16196140ybp.405.1588453963560; 
 Sat, 02 May 2020 14:12:43 -0700 (PDT)
MIME-Version: 1.0
References: <20200210121443.GQ7869@mail-itl> <20200209230655.GA32524@mail-itl>
 <a6b9455d-dea1-6c61-ff7f-4fbaaba9a953@suse.com>
 <41b3896b-5414-bfdf-a515-bf2f06ab6463@citrix.com>
 <b1dfd8e66ff2cfdd1a5d77d46238b637@disroot.org>
 <20200211210852.GF2995@mail-itl>
 <088eac9d953d043e337ce100928c2e58@disroot.org>
In-Reply-To: <088eac9d953d043e337ce100928c2e58@disroot.org>
From: Z M <zachm1996@gmail.com>
Date: Sat, 2 May 2020 16:12:31 -0500
Message-ID: <CAOJhNkUKT8gVgTH=18kkydhurJqqCFRA3_Bcxaexc=t=ojBaVg@mail.gmail.com>
Subject: Re: [Xen-devel] Xen fails to resume on AMD Fam15h (and Fam17h?)
 because of CPUID mismatch
To: Claudia <claudia1@disroot.org>
Content-Type: multipart/alternative; boundary="0000000000003d7a1e05a4b0c2a3"
X-Mailman-Approved-At: Sun, 03 May 2020 05:30:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000003d7a1e05a4b0c2a3
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Sorry to bump this ancient issue... Claudia, did you try the RPMs out, or
did anyone else? I've had xen* and python*-xen excluded from yum/dnf for a
while on my R4.0, so I'm not getting patches on the Xen 4.8 branch. I guess
we need to wait until R4.1 to use it in production, as it's built for Xen
4.13. Unless there's some way to put Xen 4.13 on R4.0 safely.
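(For reference, the yum/dnf exclusion mentioned above is normally a one-line
setting in dom0; a sketch, assuming the usual Fedora config path and that
these globs match the intent:)

```
# /etc/dnf/dnf.conf  (sketch; the exact globs are assumptions)
[main]
exclude=xen* python*-xen
```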



On Tue, Feb 11, 2020 at 10:20 PM Claudia <claudia1@disroot.org> wrote:

> February 11, 2020 9:09 PM, "Marek Marczykowski-Górecki" <
> marmarek@invisiblethingslab.com> wrote:
>
> > On Tue, Feb 11, 2020 at 12:59:22PM +0000, Claudia wrote:
> >
> >> February 10, 2020 12:14 PM, "Marek Marczykowski-Górecki" <
> marmarek@invisiblethingslab.com> wrote:
> >>
> >> On Mon, Feb 10, 2020 at 11:17:34AM +0000, Andrew Cooper wrote:
> >>
> >> On 10/02/2020 08:55, Jan Beulich wrote:
> >> On 10.02.2020 00:06, Marek Marczykowski-Górecki wrote:
> >> Hi,
> >>
> >> Multiple Qubes users have reported issues with resuming from S3 on AMD
> >> systems (Ryzen 2500U, Ryzen Pro 3700U, maybe more). The error message
> >> is:
> >>
> >> (XEN) CPU0: cap[ 1] is 7ed8320b (expected f6d8320b)
> >>
> >> If I read it right, this is:
> >> - OSXSAVE: 0 -> 1
> >> - HYPERVISOR: 1 -> 0
> >>
> >> Commenting out the panic on a failed recheck_cpu_features() in power.c
> >> makes the system work after resume, reportedly stable. But that doesn't
> >> sound like a good idea generally.
> >>
> >> Is this difference a Xen fault (some missing MSR / other register
> >> restore on resume)? Or BIOS vendor / AMD, that could be worked around in
> >> Xen?
> >> The transition of the HYPERVISOR bit is definitely a Xen issue,
> >> with Andrew having sent a patch already (iirc).
> >>
> >>
> https://lore.kernel.org/xen-devel/20200127202121.2961-1-andrew.cooper3@citrix.com
> >>
> >> Code is correct. Commit message needs rework, including in light of
> >> this discovery. (I may eventually split it into two patches.)
> >>
> >> Claudia, do you want to test with this patch?
> >>
> >> I'm getting hunk failed in domctl.c applying to R4.1 default repo
> (fc31, Xen 4.13). I'll see if I
> >> can fix it but bear with me, I'm new at this.
> >>
> >> Marek: Would you by any chance be willing to merge this into a test
> branch on your repo, so the
> >> rest of us can pull it directly into qubes-builder? It'll take you a
> fraction of the time it'll
> >> take me, plus then zachm and awokd and anyone else can pull it also.
> >
> > Here is one for Xen 4.13:
> > https://github.com/QubesOS/qubes-vmm-xen/pull/71
> > builder.conf snippet for qubes-builder:
> >
> > BRANCH_vmm_xen=xen-4.13-amd-suspend
> > GIT_URL_vmm_xen=https://github.com/marmarek/qubes-vmm-xen
> >
> > This is already v2 patch from the other thread.
>
> Thanks! For anyone else trying this, I also had to add NO_CHECK=vmm-xen
> vmm-xen-stubdom-legacy, I guess because there are no tags on that branch.
> The RPMs built successfully, and I'll be able to test them as soon as I get
> the latest R4.1 build downloaded and installed (I'm currently running 4.0).
>
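The cap-word mismatch quoted above can be decoded mechanically: the observed
and expected hex values differ in exactly the two CPUID.1:ECX bits Marek
names. A quick sketch in Python:

```python
# Decode the feature-word difference from the panic message:
#   (XEN) CPU0: cap[ 1] is 7ed8320b (expected f6d8320b)
# Bit numbering follows CPUID.01H:ECX; bit 27 is OSXSAVE and bit 31
# is the "running under a hypervisor" bit.
observed = 0x7ed8320b   # value seen after resume
expected = 0xf6d8320b   # value recorded at boot
diff = observed ^ expected
changed_bits = [b for b in range(32) if diff & (1 << b)]
print(hex(diff), changed_bits)  # -> 0x88000000 [27, 31]
```

This matches the summary in the quoted mail: OSXSAVE flipped 0 -> 1 and
HYPERVISOR flipped 1 -> 0 across the suspend/resume cycle.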


-- 
- - - - - -
Zachary Muller
http://muller.is/

--0000000000003d7a1e05a4b0c2a3--


From xen-devel-bounces@lists.xenproject.org Sun May 03 07:37:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 03 May 2020 07:37:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jV9Bb-0007eJ-Ej; Sun, 03 May 2020 07:37:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3VTY=6R=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jV9BZ-0007eE-Vv
 for xen-devel@lists.xenproject.org; Sun, 03 May 2020 07:37:14 +0000
X-Inumbo-ID: e40d9d82-8d10-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e40d9d82-8d10-11ea-b07b-bc764e2007e4;
 Sun, 03 May 2020 07:37:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=beJ3jVHXT91trEorypHvflhPrnUGqzpjke1XmqlDJHo=; b=4qx+hAOKR1HB82IIzQH8V8Web
 JCJ//5aDM0HdEiqLfK6QluK053YcSu7HZeArYR6WPwefjAfHAsCJ38gLlXcnYrat+7qHHe/+Tdcck
 4QwJfYckm0sGNlUX0jPAv8ID9aB4nUmPO2b8arxkIZbxozvaHZnhaUgcq+K3bpiqoXDc8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jV9BT-0000CT-59; Sun, 03 May 2020 07:37:07 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jV9BR-0001Iv-M8; Sun, 03 May 2020 07:37:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jV9BR-0002RN-LR; Sun, 03 May 2020 07:37:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149909-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 149909: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=de7e9840e7f888f1a872c86b0cb793b283193137
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 03 May 2020 07:37:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149909 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149909/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              de7e9840e7f888f1a872c86b0cb793b283193137
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  107 days
Failing since        146211  2020-01-18 04:18:52 Z  106 days   99 attempts
Testing same since   149902  2020-05-02 04:18:53 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 16779 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 03 09:51:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 03 May 2020 09:51:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVBGq-0002Ib-1d; Sun, 03 May 2020 09:50:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3VTY=6R=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jVBGo-0002IT-IG
 for xen-devel@lists.xenproject.org; Sun, 03 May 2020 09:50:46 +0000
X-Inumbo-ID: 8dd9a72c-8d23-11ea-9c6b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8dd9a72c-8d23-11ea-9c6b-12813bfff9fa;
 Sun, 03 May 2020 09:50:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=TJ6J2lBEcAKRTL+oKM2TIlopNdiJg5WBuE6RuU6985c=; b=J6yCUA+9j29nd8mZ0WMGwnMnE
 hZ73GVAmqY5doLgQ+8+7Ie+VMzpdLnf9mPl7+sucC8DhEnw1ep2QFusrmHKpjc/9h0mo/yK05l/fr
 WquiODvNWTa+mcCB1kBKpqRPbIurA3emcx55x6WIk/fr8MvBAjWPfwlnp1v7WkwUiYUHk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVBGk-0003EU-VP; Sun, 03 May 2020 09:50:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVBGk-00068u-Mj; Sun, 03 May 2020 09:50:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jVBGk-0001ZI-M1; Sun, 03 May 2020 09:50:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 149908: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=0135be8bd8cd60090298f02310691b688d95c3a8
X-Osstest-Versions-That: xen=0135be8bd8cd60090298f02310691b688d95c3a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 03 May 2020 09:50:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149908 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149908/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149896
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149900
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149900
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149900
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149900
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149900
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149900
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149900
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149900
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149900
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8
baseline version:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8

Last test of basis   149908  2020-05-03 01:54:24 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 03 10:15:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 03 May 2020 10:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVBf3-000485-9x; Sun, 03 May 2020 10:15:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3VTY=6R=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jVBf2-000480-4z
 for xen-devel@lists.xenproject.org; Sun, 03 May 2020 10:15:48 +0000
X-Inumbo-ID: 0a933dac-8d27-11ea-9c71-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a933dac-8d27-11ea-9c71-12813bfff9fa;
 Sun, 03 May 2020 10:15:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vdbpq/mrYNdqckikny+159/ycjmUgDoGLiBpa5tKzsM=; b=TpukJINIT8TGvWBy11j5MYoFF
 sAXuKRK+L2AUuRyQr/9MmdklYatuPxhtiezQ3iVXIbz5+7zCs112pnj3y0JmOv1394L2b0p7FrX4g
 +cdQy5z4R3VkNaV0BPbrupGBk6RU6GL+DXJqJBsZhz0IHcrRtcVawAOEm1r1aX8/wtSWI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVBeu-0003m9-NO; Sun, 03 May 2020 10:15:40 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVBeu-0006jo-5T; Sun, 03 May 2020 10:15:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jVBeu-00062h-4w; Sun, 03 May 2020 10:15:40 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149910-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 149910: all pass - PUSHED
X-Osstest-Versions-This: xen=0135be8bd8cd60090298f02310691b688d95c3a8
X-Osstest-Versions-That: xen=4ec07971f1c5a236a0d8c528d806efb6bfd3d1a6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 03 May 2020 10:15:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149910 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149910/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8
baseline version:
 xen                  4ec07971f1c5a236a0d8c528d806efb6bfd3d1a6

Last test of basis   149873  2020-04-29 09:19:00 Z    4 days
Testing same since   149910  2020-05-03 09:19:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>
  Tim Deegan <tim@xen.org>
  Varad Gautam <vrd@amazon.de>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   4ec07971f1..0135be8bd8  0135be8bd8cd60090298f02310691b688d95c3a8 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 03 19:37:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 03 May 2020 19:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVKQI-0007Xu-JN; Sun, 03 May 2020 19:37:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3VTY=6R=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jVKQG-0007Xo-Cf
 for xen-devel@lists.xenproject.org; Sun, 03 May 2020 19:37:08 +0000
X-Inumbo-ID: 766d07bc-8d75-11ea-9cb9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 766d07bc-8d75-11ea-9cb9-12813bfff9fa;
 Sun, 03 May 2020 19:37:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=CGmd8qCZsVdZSjjPerFZ+X7eWGygoUaTv7w+rO0RgqE=; b=FjcWVOYH6fhdy6gehE1C31kpg
 nT+YmPtLB093gfYEH1GbFBBylVFZ/e0jUO6mjFDbThd8e4FLYI4oy7Z2e/mu3lD8b0mvFv1Hf0Wpn
 alb/HD1cPOew8KfzsHPNgqlP774qXrhexGaHTtPfItxQb04FtTlUSDR0/mrzK8hXLg+us=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVKQA-00063Y-8I; Sun, 03 May 2020 19:37:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVKQ9-00085e-TU; Sun, 03 May 2020 19:37:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jVKQ9-0002K8-PB; Sun, 03 May 2020 19:37:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149911-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 149911: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=2ef486e76d64436be90f7359a3071fb2a56ce835
X-Osstest-Versions-That: qemuu=1c47613588ccff44422d4bdeea0dc36a0a308ec7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 03 May 2020 19:37:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149911 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149911/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate      fail blocked in 149898
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149898
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149898
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149898
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149898
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149898
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149898
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                2ef486e76d64436be90f7359a3071fb2a56ce835
baseline version:
 qemuu                1c47613588ccff44422d4bdeea0dc36a0a308ec7

Last test of basis   149898  2020-05-01 13:28:08 Z    2 days
Testing same since   149911  2020-05-03 14:06:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Marcel Apfelbaum <marcel.apfelbaum@gmail.com>
  Max Reitz <mreitz@redhat.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Yuval Shaia <yuval.shaia.ml@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   1c47613588..2ef486e76d  2ef486e76d64436be90f7359a3071fb2a56ce835 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Mon May 04 00:59:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 00:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVPRO-00088u-Pf; Mon, 04 May 2020 00:58:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vMqs=6S=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jVPRM-00088p-Lx
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 00:58:36 +0000
X-Inumbo-ID: 5e5a61d8-8da2-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e5a61d8-8da2-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 00:58:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4EKBJi5fNXUw9mV4M7rzKa6np6YRZH0YSyqzOpUZIkg=; b=5hVqMAnPdAGUIZWCJW7v18l/J
 gG7PZOwV2S30urJikLtk8djva1xZzEiGzD3iW1kR0TGz4hK9Si0lQi5GA3SFYQlyHQc8Zs8FPjy8s
 UmRaCiSz4Eg9jHNaTW1gr5UukIXHNaMKnqiKYsfheAJFe27DOzcSOBDkRXSSAGkvpbOaM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVPRF-0004BX-8T; Mon, 04 May 2020 00:58:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVPRE-000643-Td; Mon, 04 May 2020 00:58:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jVPRE-0008S6-SV; Mon, 04 May 2020 00:58:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149912-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 149912: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
 linux-linus:build-i386-libvirt:libvirt-build:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=262f7a6b8317a06e7d51befb690f0bca06a473ea
X-Osstest-Versions-That: linux=f66ed1ebbfde37631fba289f7c399eaa70632abf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 04 May 2020 00:58:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149912 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149912/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-freebsd10-amd64 14 guest-saverestore     fail REGR. vs. 149906
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 149906

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 149906

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149906
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149906
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149906
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149906
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149906
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149906
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149906
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149906
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                262f7a6b8317a06e7d51befb690f0bca06a473ea
baseline version:
 linux                f66ed1ebbfde37631fba289f7c399eaa70632abf

Last test of basis   149906  2020-05-02 19:08:56 Z    1 days
Testing same since   149912  2020-05-03 18:38:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Arnd Bergmann <arnd@arndb.de>
  Chris Wilson <chris@chris-wilson.co.uk>
  David Sterba <dsterba@suse.com>
  Dexuan Cui <decui@microsoft.com>
  Eric Biggers <ebiggers@google.com>
  Filipe Manana <fdmanana@suse.com>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Joerg Roedel <jroedel@suse.de>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kevin Hao <haokexin@gmail.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Will Deacon <will@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 511 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 04 07:33:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 07:33:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVVbF-0004fN-QW; Mon, 04 May 2020 07:33:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WsoH=6S=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jVVbF-0004fI-Bv
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 07:33:13 +0000
X-Inumbo-ID: 81a2ac18-8dd9-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81a2ac18-8dd9-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 07:33:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4C4B3AEC1;
 Mon,  4 May 2020 07:33:11 +0000 (UTC)
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
References: <20200430152848.20275-1-jgross@suse.com>
 <987abb9e-e4f1-1981-595d-0474e91d67f8@xen.org>
 <678d3815-d554-7b92-aa81-908978e2b19b@suse.com>
 <2c72bb17-cf67-a7ce-6dcb-2c3b4d1231e7@xen.org>
 <e274cf53-261d-0af5-6d81-2031e70da3e3@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <6256827c-c703-8158-d47b-79e3f0d44140@suse.com>
Date: Mon, 4 May 2020 09:33:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <e274cf53-261d-0af5-6d81-2031e70da3e3@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.20 18:09, Julien Grall wrote:
> 
> 
> On 02/05/2020 13:34, Julien Grall wrote:
>>>>> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
>>>>> index 195e7ee583..81628e2d98 100644
>>>>> --- a/xen/include/xen/sched.h
>>>>> +++ b/xen/include/xen/sched.h
>>>>> @@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned 
>>>>> int cpu)
>>>>>   static inline bool vcpu_cpu_dirty(const struct vcpu *v)
>>>>>   {
>>>>> -    return is_vcpu_dirty_cpu(v->dirty_cpu);
>>>>> +    return is_vcpu_dirty_cpu(read_atomic((unsigned int 
>>>>> *)&v->dirty_cpu));
>>>>
>>>> Is the cast necessary?
>>>
>>> Yes, that was the problem I encountered when building for ARM.
>>>
>>> read_atomic() on ARM has a local variable of the same type as the
>>> read_atomic() parameter for storing the result. Due to the const
>>> attribute of v this results in assignment to a read-only variable.
>>
>> Doh, we should be able to read from a const value without a cast. So I
>> would argue this is a bug in the read_atomic() implementation on Arm. I
>> will try to come up with a patch.
> 
> I have just sent a series [1] to address the issue reported here and a 
> few more.

With that series V1 of this patch is fine again. :-)


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 04 08:27:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 08:27:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVWS1-00012C-LA; Mon, 04 May 2020 08:27:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EiMY=6S=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jVWRz-000123-O0
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 08:27:43 +0000
X-Inumbo-ID: 1fe8673a-8de1-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fe8673a-8de1-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 08:27:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JjAg5xRaxEfxbRzTOHID8bMgMfwvn+IOOGPb4fJo2CM=; b=l++DlWrXH0NfLsCTkBa9ohXokZ
 jsTFIotuKYbZhyj/Ruwk5SaiRjgfMzGzyGCrRHZSTk6BvKC1TsnLenGjzgka/39zG2qL00NsLsEwP
 8osHSfhHWvedWmbT6TPqagJPpceWDaG95pIQuc8xcYLxjIbM/OcsrF2X/M0edgFSxG68=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jVWRx-0003V5-UZ; Mon, 04 May 2020 08:27:41 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=freeip.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jVWRx-00006I-JT; Mon, 04 May 2020 08:27:41 +0000
Message-ID: <c3edd42c66ded0c1f129e1de2a4b8f54cef4c136.camel@xen.org>
Subject: Re: [PATCH 02/16] acpi: vmap pages in acpi_os_alloc_memory
From: Hongyan Xia <hx242@xen.org>
To: Julien Grall <julien.grall.oss@gmail.com>
Date: Mon, 04 May 2020 09:27:39 +0100
In-Reply-To: <CAJ=z9a0S0rOYbJVPGK6mTKN0OgJtiTU7YN-APLF4Dvr4CaKfJg@mail.gmail.com>
References: <cover.1588278317.git.hongyxia@amazon.com>
 <a71d1903267b84afdb0e54fa2ac55540ab2f9357.1588278317.git.hongyxia@amazon.com>
 <CAJ=z9a0S0rOYbJVPGK6mTKN0OgJtiTU7YN-APLF4Dvr4CaKfJg@mail.gmail.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 2020-05-01 at 22:35 +0100, Julien Grall wrote:
> Hi,
> 
> On Thu, 30 Apr 2020 at 21:44, Hongyan Xia <hx242@xen.org> wrote:
> > 
> > From: Hongyan Xia <hongyxia@amazon.com>
> > 
> > Also, introduce a wrapper around vmap that maps a contiguous range
> > for
> > boot allocations.
> > 
> > Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> > ---
> >  xen/drivers/acpi/osl.c | 9 ++++++++-
> >  xen/include/xen/vmap.h | 5 +++++
> >  2 files changed, 13 insertions(+), 1 deletion(-)
> > 
> > diff --git a/xen/drivers/acpi/osl.c b/xen/drivers/acpi/osl.c
> > index 4c8bb7839e..d0762dad4e 100644
> > --- a/xen/drivers/acpi/osl.c
> > +++ b/xen/drivers/acpi/osl.c
> > @@ -219,7 +219,11 @@ void *__init acpi_os_alloc_memory(size_t sz)
> >         void *ptr;
> > 
> >         if (system_state == SYS_STATE_early_boot)
> > -               return
> > mfn_to_virt(mfn_x(alloc_boot_pages(PFN_UP(sz), 1)));
> > +       {
> > +               mfn_t mfn = alloc_boot_pages(PFN_UP(sz), 1);
> > +
> > +               return vmap_boot_pages(mfn, PFN_UP(sz));
> > +       }
> > 
> >         ptr = xmalloc_bytes(sz);
> >         ASSERT(!ptr || is_xmalloc_memory(ptr));
> > @@ -244,5 +248,8 @@ void __init acpi_os_free_memory(void *ptr)
> >         if (is_xmalloc_memory(ptr))
> >                 xfree(ptr);
> >         else if (ptr && system_state == SYS_STATE_early_boot)
> > +       {
> > +               vunmap(ptr);
> >                 init_boot_pages(__pa(ptr), __pa(ptr) + PAGE_SIZE);
> 
> __pa(ptr) can only work on the direct map. Even worse, on Arm it will
> fault because there is no mapping.
> I think you will want to use vmap_to_mfn() before calling vunmap().

Thanks for spotting this. This is definitely wrong. Will revise.
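The reordering Julien suggests might look like the following pseudocode sketch (whether vmap_to_mfn()/mfn_to_maddr() is exactly the right helper pair here is an assumption drawn from his comment, not the final patch):

```
else if (ptr && system_state == SYS_STATE_early_boot)
{
        /* Translate the vmap address back to a physical address
         * *before* tearing down the mapping, since __pa() cannot be
         * used on a vmap address. */
        paddr_t pa = mfn_to_maddr(vmap_to_mfn(ptr));

        vunmap(ptr);
        init_boot_pages(pa, pa + PAGE_SIZE);
}
```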

Hongyan



From xen-devel-bounces@lists.xenproject.org Mon May 04 08:53:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 08:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVWr3-0003Ws-Td; Mon, 04 May 2020 08:53:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVWr3-0003Wn-5g
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 08:53:37 +0000
X-Inumbo-ID: bd972bc6-8de4-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd972bc6-8de4-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 08:53:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 115D4AC61;
 Mon,  4 May 2020 08:53:37 +0000 (UTC)
Subject: Re: [PATCH] x86/amd: Initial support for Fam19h processors
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200430095947.31958-1-andrew.cooper3@citrix.com>
 <471aaf7e-497f-edd2-6eb0-06d337a23538@suse.com>
 <9dc3a9e6-4a86-f24a-b279-59fec5ef22d8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9106a1bf-fcc9-1ea3-97ae-e13432fa1236@suse.com>
Date: Mon, 4 May 2020 10:53:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <9dc3a9e6-4a86-f24a-b279-59fec5ef22d8@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.04.2020 17:50, Andrew Cooper wrote:
> On 30/04/2020 12:09, Jan Beulich wrote:
>> On 30.04.2020 11:59, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/cpu/microcode/amd.c
>>> +++ b/xen/arch/x86/cpu/microcode/amd.c
>>> @@ -125,6 +125,7 @@ static bool_t verify_patch_size(uint32_t patch_size)
>>>          max_size = F16H_MPB_MAX_SIZE;
>>>          break;
>>>      case 0x17:
>>> +    case 0x19:
>>>          max_size = F17H_MPB_MAX_SIZE;
>>>          break;
>> Didn't you indicate to me the other day that the upper bound would
>> grow?
> 
> That was a very non-specific patch to Linux.  I've asked around, and the
> answer seems to be 4800.
> 
> Are you happy for your review to stand with adding a new
> F19H_MPB_MAX_SIZE define to this effect?

Yes.

Jan
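The follow-up Andrew proposes can be sketched as a toy model of verify_patch_size() (the F19H_MPB_MAX_SIZE name and the 4800 value come from the thread; the Fam17h value and the overall shape are illustrative assumptions, not Xen's exact code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Illustrative maxima: 4800 for Fam19h per the thread; the Fam17h
 * value here is an assumption for the example. */
#define F17H_MPB_MAX_SIZE 3200
#define F19H_MPB_MAX_SIZE 4800

/* Toy verify_patch_size(): Fam19h gets its own define rather than
 * falling through to the Fam17h limit. */
static bool verify_patch_size(unsigned int family, uint32_t patch_size)
{
    uint32_t max_size;

    switch ( family )
    {
    case 0x17:
        max_size = F17H_MPB_MAX_SIZE;
        break;
    case 0x19:
        max_size = F19H_MPB_MAX_SIZE;
        break;
    default:
        return false;
    }

    return patch_size <= max_size;
}
```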


From xen-devel-bounces@lists.xenproject.org Mon May 04 09:10:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 09:10:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVX6x-0004r6-CJ; Mon, 04 May 2020 09:10:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVX6w-0004fI-D5
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 09:10:02 +0000
X-Inumbo-ID: 0896781e-8de7-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0896781e-8de7-11ea-9887-bc764e2007e4;
 Mon, 04 May 2020 09:10:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id CB76EAC7D;
 Mon,  4 May 2020 09:10:01 +0000 (UTC)
Subject: Re: [XEN PATCH v5 08/16] build: Introduce $(cpp_flags)
To: Anthony PERARD <anthony.perard@citrix.com>
References: <20200421161208.2429539-1-anthony.perard@citrix.com>
 <20200421161208.2429539-9-anthony.perard@citrix.com>
 <62011f46-b208-334a-4070-0bd72cb21d28@suse.com>
 <20200428140119.GC2116@perard.uk.xensource.com>
 <86af7c75-8f8b-db0a-7420-343ccd70fc33@suse.com>
 <20200501143215.GD2116@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e8eb1674-b149-7bd6-a903-673914b40bba@suse.com>
Date: Mon, 4 May 2020 11:09:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501143215.GD2116@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.05.2020 16:32, Anthony PERARD wrote:
> On Tue, Apr 28, 2020 at 04:20:57PM +0200, Jan Beulich wrote:
>> On 28.04.2020 16:01, Anthony PERARD wrote:
>>> On Thu, Apr 23, 2020 at 06:48:51PM +0200, Jan Beulich wrote:
>>>> On 21.04.2020 18:12, Anthony PERARD wrote:
>>>>> --- a/xen/Rules.mk
>>>>> +++ b/xen/Rules.mk
>>>>> @@ -123,6 +123,7 @@ $(obj-bin-y): XEN_CFLAGS := $(filter-out -flto,$(XEN_CFLAGS))
>>>>>  
>>>>>  c_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_CFLAGS) '-D__OBJECT_FILE__="$@"'
>>>>>  a_flags = -MMD -MP -MF $(@D)/.$(@F).d $(XEN_AFLAGS)
>>>>> +cpp_flags = $(filter-out -Wa$(comma)%,$(a_flags))
>>>>
>>>> I can see this happening to be this way right now, but in principle
>>>> I could see a_flags to hold items applicable to assembly files only,
>>>> but not to (the preprocessing of) C files. Hence while this is fine
>>>> for now, ...
>>>>
>>>>> @@ -207,7 +208,7 @@ quiet_cmd_cc_s_c = CC      $@
>>>>>  cmd_cc_s_c = $(CC) $(filter-out -Wa$(comma)%,$(c_flags)) -S $< -o $@
>>>>>  
>>>>>  quiet_cmd_s_S = CPP     $@
>>>>> -cmd_s_S = $(CPP) $(filter-out -Wa$(comma)%,$(a_flags)) $< -o $@
>>>>> +cmd_s_S = $(CPP) $(cpp_flags) $< -o $@
>>>>
>>>> ... this one is a trap waiting for someone to fall in imo. Instead
>>>> where I'd expect this patch to use $(cpp_flags) is e.g. in
>>>> xen/arch/x86/mm/Makefile:
>>>>
>>>> guest_walk_%.i: guest_walk.c Makefile
>>>> 	$(CPP) $(cpp_flags) -DGUEST_PAGING_LEVELS=$* -c $< -o $@
>>>>
>>>> And note how this currently uses $(c_flags), not $(a_flags), which
>>>> suggests that your deriving from $(a_flags) isn't correct either.
>>>
>>> I think we can drop this patch for now, and change patch "xen/build:
>>> factorise generation of the linker scripts" to not use $(cpp_flags).
>>>
>>> If we derive $(cpp_flags) from $(c_flags) instead, we would need to
>>> find out if CPP commands using a_flags can use c_flags instead.
>>>
>>> On the other hand, I've looked at Linux source code, and they use
>>> $(cpp_flags) for only a few targets, only to generate the .lds scripts.
>>> For other rules, they use either a_flags or c_flags, for example:
>>>     %.i: %.c ; uses $(c_flags)
>>>     %.i: %.S ; uses $(a_flags)
>>>     %.s: %.S ; uses $(a_flags)
>>
>> The first one really ought to use cpp_flags. I couldn't find the
>> middle one. The last one clearly has to do something about -Wa,
>> options, but apart from this I'd consider a_flags appropriate to
>> use there.
>>
>>> (Also, they use clang's -Qunused-arguments option, so they don't need
>>> to filter out -Wa,* arguments, I think.)
>>
>> Maybe we should do so too then?
>>
>>> So, maybe having a single $(cpp_flags) when running the CPP command
>>> isn't such a good idea.
>>
>> Right - after all in particular the use of CPP to produce .lds is
>> an abuse, as the source file (named .lds.S) isn't really what its
>> name says.
>>
>>> So, would dropping $(cpp_flags) for now, and reworking the *FLAGS later,
>>> be good enough?
>>
>> I don't think so, no, I'm sorry. cpp_flags should be there for its
>> real purpose. Whether the .lds.S -> .lds rule can use it, or should
>> use a_flags, or yet something else is a different thing.
> 
> 
> OK. I think we can rework the patch to derive cpp_flags from c_flags,
> use this new cpp_flags for %.i:%.c; but keep using a_flags for %.s:%.S.
> 
> As for the .lds, we could use this new cpp_flags, the only thing I saw
> missing was -D__ASSEMBLY__, which can be added to the command line.
> (There would also be an extra -std=gnu99, but I don't think it matters.)
> 
> Does that sound good?

Yes. I had another thought though in the meantime: What if cpp_flags
became a macro to be used with $(call ), with c_flags or a_flags (or
whatever else) passed in by the use sites?
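A sketch of what that suggestion might look like (a hypothetical Make fragment; the macro form and rule names are assumptions based on the discussion, not the committed change):

```make
# cpp_flags as a macro: each use site passes in its base flag set,
# so the -Wa,* filtering lives in one place.
cpp_flags = $(filter-out -Wa$(comma)%,$(1))

cmd_s_S = $(CPP) $(call cpp_flags,$(a_flags)) $< -o $@
cmd_i_c = $(CPP) $(call cpp_flags,$(c_flags)) $< -o $@
```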

> As for using -Qunused-arguments with clang, I didn't manage to find the
> documentation of clang's command-line arguments for clang 3.5 on the llvm
> website, but I found it for clang 5.0 and the option is listed there.
> I've tested building Xen on our gitlab CI, which has Debian Jessie, which
> seems to have clang 3.5, and Xen built just fine. So that might be an
> option we can use later, but probably only for CPP flags.

Okay, thanks for checking.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 09:11:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 09:11:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVX89-0005Ec-NP; Mon, 04 May 2020 09:11:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVX88-0005E2-Lj
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 09:11:16 +0000
X-Inumbo-ID: 35283dea-8de7-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35283dea-8de7-11ea-b07b-bc764e2007e4;
 Mon, 04 May 2020 09:11:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AC0B1ACF1;
 Mon,  4 May 2020 09:11:16 +0000 (UTC)
Subject: Re: [XEN PATCH v5 09/16] xen/build: use if_changed on built_in.o
To: Anthony PERARD <anthony.perard@citrix.com>
References: <20200421161208.2429539-1-anthony.perard@citrix.com>
 <20200421161208.2429539-10-anthony.perard@citrix.com>
 <6c6d20f5-d8ab-ee15-d2fc-e19b1dced99a@suse.com>
 <20200501144234.GE2116@perard.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1cb0c187-137b-46d7-c05e-0dcb39f45a46@suse.com>
Date: Mon, 4 May 2020 11:11:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501144234.GE2116@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 01.05.2020 16:42, Anthony PERARD wrote:
> On Tue, Apr 28, 2020 at 03:48:13PM +0200, Jan Beulich wrote:
>> On 21.04.2020 18:12, Anthony PERARD wrote:
>>> In the case where $(obj-y) is empty, we also replace $(c_flags) by
>>> $(XEN_CFLAGS) to avoid generating a .%.d dependency file. This avoids
>>> make trying to include a %.h file in the ld command if $(obj-y) isn't
>>> empty anymore on a second run.
>>>
>>> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
>>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
>>
>> Personally I'd prefer ...
>>
>>> --- a/xen/Rules.mk
>>> +++ b/xen/Rules.mk
>>> @@ -130,15 +130,24 @@ include $(BASEDIR)/arch/$(TARGET_ARCH)/Rules.mk
>>>  c_flags += $(CFLAGS-y)
>>>  a_flags += $(CFLAGS-y) $(AFLAGS-y)
>>>  
>>> -built_in.o: $(obj-y) $(extra-y)
>>> -ifeq ($(obj-y),)
>>> -	$(CC) $(c_flags) -c -x c /dev/null -o $@
>>> -else
>>> +quiet_cmd_ld_builtin = LD      $@
>>>  ifeq ($(CONFIG_LTO),y)
>>> -	$(LD_LTO) -r -o $@ $(filter-out $(extra-y),$^)
>>> +cmd_ld_builtin = \
>>> +    $(LD_LTO) -r -o $@ $(filter-out $(extra-y),$(real-prereqs))
>>>  else
>>> -	$(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$^)
>>> +cmd_ld_builtin = \
>>> +    $(LD) $(XEN_LDFLAGS) -r -o $@ $(filter-out $(extra-y),$(real-prereqs))
>>>  endif
>>> +
>>> +quiet_cmd_cc_builtin = LD      $@
>>> +cmd_cc_builtin = \
>>> +    $(CC) $(XEN_CFLAGS) -c -x c /dev/null -o $@
>>> +
>>> +built_in.o: $(obj-y) $(extra-y) FORCE
>>> +ifeq ($(obj-y),)
>>> +	$(call if_changed,cc_builtin)
>>> +else
>>> +	$(call if_changed,ld_builtin)
>>>  endif
>>
>> ...
>>
>>    $(call if_changed,$(if $(obj-y),ld,cc)_builtin)
>>
>> but perhaps I'm the only one.
> 
> I think so. Spelling out the full name of the command makes it easier to
> search for where it is used, or where it is defined.

   $(call if_changed,$(if $(obj-y),ld_builtin,cc_builtin))

then?

Jan
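[For context on the machinery being discussed: the `if_changed` helper (borrowed from Linux's Kbuild) re-runs a build command when either a prerequisite is out of date or the saved command line from the previous run differs from the current one — which is why the rule gains a `FORCE` prerequisite and a per-target `.cmd` file. A rough, hypothetical Python sketch of the idea (names are illustrative, not the actual Kbuild implementation):]

```python
import os

def if_changed(target, prereqs, cmd, run):
    """Re-run `cmd` if any prerequisite is newer than `target`, or if the
    command line recorded from the previous run differs from `cmd`."""
    cmd_file = os.path.join(os.path.dirname(target) or ".",
                            "." + os.path.basename(target) + ".cmd")
    saved = None
    if os.path.exists(cmd_file):
        with open(cmd_file) as f:
            saved = f.read()
    out_of_date = (
        not os.path.exists(target)
        or any(os.path.getmtime(p) > os.path.getmtime(target) for p in prereqs)
        or saved != cmd                     # the command line itself changed
    )
    if out_of_date:
        run(cmd)                            # actually (re)build the target
        with open(cmd_file, "w") as f:
            f.write(cmd)                    # remember the command for next time
    return out_of_date
```

[This is what makes the thread's question matter: whether the selected command is `ld_builtin` or `cc_builtin` becomes part of the recorded command line, so switching between the two (e.g. when $(obj-y) becomes non-empty) triggers a rebuild even if no object file changed.]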



From xen-devel-bounces@lists.xenproject.org Mon May 04 09:16:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 09:16:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVXDS-0005RP-Bj; Mon, 04 May 2020 09:16:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVXDQ-0005RK-MF
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 09:16:44 +0000
X-Inumbo-ID: f8027fce-8de7-11ea-9d06-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8027fce-8de7-11ea-9d06-12813bfff9fa;
 Mon, 04 May 2020 09:16:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3C528ACF1;
 Mon,  4 May 2020 09:16:43 +0000 (UTC)
Subject: Re: [PATCH 05/12] xen: introduce reserve_heap_pages
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-5-sstabellini@kernel.org>
 <3129ab49-5898-9d2e-8fbb-d1fcaf6cdec7@suse.com>
 <alpine.DEB.2.21.2004291510270.28941@sstabellini-ThinkPad-T480s>
 <8a517cbc-9ff7-5b9e-f2c9-08c411703d5d@suse.com>
 <alpine.DEB.2.21.2004300907060.28941@sstabellini-ThinkPad-T480s>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <224e04ed-e790-d963-f74a-d600677c4413@suse.com>
Date: Mon, 4 May 2020 11:16:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2004300907060.28941@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: julien@xen.org, Wei Liu <wl@xen.org>, andrew.cooper3@citrix.com,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.04.2020 18:21, Stefano Stabellini wrote:
> On Thu, 30 Apr 2020, Jan Beulich wrote:
>> On 30.04.2020 00:46, Stefano Stabellini wrote:
>>> On Fri, 17 Apr 2020, Jan Beulich wrote:
>>>> On 15.04.2020 03:02, Stefano Stabellini wrote:
>>>>> Introduce a function named reserve_heap_pages (similar to
>>>>> alloc_heap_pages) that allocates a requested memory range. Call
>>>>> __alloc_heap_pages for the implementation.
>>>>>
>>>>> Change __alloc_heap_pages so that the original page doesn't get
>>>>> modified, giving back unneeded memory top to bottom rather than bottom
>>>>> to top.
>>>>
>>>> While it may be less of a problem within a zone, doing so is
>>>> against our general "return high pages first" policy.
>>>
>>> Is this something you'd be OK with anyway?
>>
>> As a last resort, maybe. But it really depends on why it needs to be
>> this way.
>>
>>> If not, do you have a suggestion on how to do it better? I couldn't find
>>> a nice way to do it without code duplication, or a big nasty 'if' in the
>>> middle of the function.
>>
>> I'd first need to understand the problem to solve.
> 
> OK, I'll make an example.
> 
> reserve_heap_pages wants to reserve the range 0x10000000 - 0x20000000.
> 
> reserve_heap_pages gets the struct page_info for 0x10000000 and calls
> alloc_pages_from_buddy to allocate an order of 28.
> 
> alloc_pages_from_buddy realizes that the free memory area starting from
> 0x10000000 is actually of order 30, even larger than the requested order
> of 28. The free area is 0x10000000 - 0x50000000.
> 
> alloc_pages_from_buddy, instead of just allocating an order of 28
> starting from 0x10000000, would allocate the "top" order of 28 in the
> free area. So it would allocate 0x40000000 - 0x50000000, returning
> 0x40000000.
> 
> Of course, this doesn't work for reserve_heap_pages.
> 
> 
> This patch changes the behavior of alloc_pages_from_buddy so that in a
> situation like the above, it would return 0x10000000 - 0x20000000
> (leaving the rest of the free area unallocated.)

So what if then, for the same order-30 (really order-18 if I assume
you name addresses, not frame numbers), a reservation request came
in for the highest order-28 sub-region? You'd again be screwed if
you relied on which part of a larger buddy gets returned by the
lower level function you call. I can't help thinking that basing
reservation on allocation functions can't really be made to work for
all possible cases. Instead reservation requests need to check that
the requested range is free _and_ split the potentially larger
range according to the request.

Jan
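[Jan's suggested direction — verify the requested range is actually free, then split the enclosing free region around it — can be sketched with a toy free-list model. This is illustrative only: Xen's real heap keeps per-zone, per-order buddy lists, which this deliberately flattens into a list of (start, size) ranges.]

```python
def reserve_range(free_list, start, size):
    """Reserve [start, start+size) from a list of free (start, size) ranges.

    Unlike calling a generic allocator (which may hand back a different
    sub-range of a larger free buddy), this checks that the exact requested
    range is free and splits the enclosing range around it.
    """
    end = start + size
    for i, (fstart, fsize) in enumerate(free_list):
        fend = fstart + fsize
        if fstart <= start and end <= fend:   # request fully inside this range
            del free_list[i]
            if fstart < start:                # keep the part below the request
                free_list.append((fstart, start - fstart))
            if end < fend:                    # keep the part above the request
                free_list.append((end, fend - end))
            return True
    return False                              # requested range is not free
```

[With a free area 0x10000000 - 0x50000000, reserving 0x10000000 - 0x20000000 leaves 0x20000000 - 0x50000000 free, and a later reservation of the highest sub-range 0x40000000 - 0x50000000 still succeeds — exactly the case Jan notes the allocator-based approach cannot guarantee.]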


From xen-devel-bounces@lists.xenproject.org Mon May 04 09:18:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 09:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVXFY-0005Za-QH; Mon, 04 May 2020 09:18:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVXFX-0005ZT-OC
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 09:18:55 +0000
X-Inumbo-ID: 459b3424-8de8-11ea-9d06-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 459b3424-8de8-11ea-9d06-12813bfff9fa;
 Mon, 04 May 2020 09:18:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C70E3ACF1;
 Mon,  4 May 2020 09:18:53 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <548a9fc5-c251-5d8b-d297-4788d60b801d@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <36944bda-14a8-0134-cd1d-1f08becb28b0@suse.com>
Date: Mon, 4 May 2020 11:18:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <548a9fc5-c251-5d8b-d297-4788d60b801d@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.04.2020 17:35, Julien Grall wrote:
> On 30/04/2020 15:50, Jan Beulich wrote:
>> On 30.04.2020 16:25, Julien Grall wrote:
>>> EXPERT mode is currently used to gate any options that are in technical
>>> preview or not security supported. At the moment, the only way to select
>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>
>>> However, if the user forgets to add the option on one of the make
>>> invocations (even a clean), then .config will get rewritten. This may lead
>>> to a rather frustrating experience as it is difficult to diagnose the
>>> issue.
>>
>> Is / will this still be true after Anthony's rework of the build
>> system? Right now we already have
>>
>> clean-targets := %clean
>> no-dot-config-targets := $(clean-targets) \
>>                           ...
> 
> I haven't tried Anthony's rework yet. But I guess the problem would
> be the same if you forget to pass XEN_CONFIG_EXPERT=y to make.

Why? xen/.config would get re-written only if kconfig got run in
the first place. It is my understanding that no-dot-config-targets
exists to avoid including .config, and as a result make won't find
a need anymore to cause it to be re-made if out of date.

Jan
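[The mechanism Jan refers to works by comparing the requested make goals against a pattern list and only including .config — and hence only triggering kconfig — when some goal actually needs it. A hypothetical Python rendering of that decision, with an illustrative pattern list (the real list lives in the Makefile and differs in detail):]

```python
import fnmatch

# Illustrative patterns, modeled on no-dot-config-targets: goals matching
# any of these can run without reading (or regenerating) .config at all.
NO_DOT_CONFIG_TARGETS = ["*clean", "help", "*config"]

def needs_dot_config(goals):
    """Return True if any requested make goal requires .config.

    If every goal matches a no-dot-config pattern, .config is never
    included, so make has no reason to re-generate it either.
    """
    if not goals:
        return True      # the default goal builds the hypervisor
    return any(
        not any(fnmatch.fnmatch(g, pat) for pat in NO_DOT_CONFIG_TARGETS)
        for g in goals
    )
```

[Note this only decides whether .config is *included*; as the rest of the thread points out, once a build goal does include it, a change to any */Kconfig file can still cause it to be regenerated.]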


From xen-devel-bounces@lists.xenproject.org Mon May 04 09:31:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 09:31:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVXRG-00077k-4B; Mon, 04 May 2020 09:31:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7wWN=6S=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jVXRE-00077f-My
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 09:31:00 +0000
X-Inumbo-ID: f713ef24-8de9-11ea-9d08-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f713ef24-8de9-11ea-9d08-12813bfff9fa;
 Mon, 04 May 2020 09:31:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dVV/KySiOy8F9WzYMxS64Du6ppmq4y7ol7TvltUJI18=; b=Zfo+VtODjNAFquLlUz65ax6Edv
 lowhOGNSLf/OIWdh2/vvuYkEs3dMLssqVrVfbrqjfm1nacdbr0z3vKOOlhxvhpn2ESbKY+035Fi6P
 axxx1IGO2nMSqAIpqKY2vrMADhrFhQ6K525njV4nk7ooY+OAfKAcxwjo9ELFpZgLGkYA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jVXRB-0004gi-PF; Mon, 04 May 2020 09:30:57 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jVXRB-0004Nt-Hm; Mon, 04 May 2020 09:30:57 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <548a9fc5-c251-5d8b-d297-4788d60b801d@xen.org>
 <36944bda-14a8-0134-cd1d-1f08becb28b0@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <898479ac-fd5c-48f4-28cb-8bbb2dc60d83@xen.org>
Date: Mon, 4 May 2020 10:30:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <36944bda-14a8-0134-cd1d-1f08becb28b0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 04/05/2020 10:18, Jan Beulich wrote:
> On 30.04.2020 17:35, Julien Grall wrote:
>> On 30/04/2020 15:50, Jan Beulich wrote:
>>> On 30.04.2020 16:25, Julien Grall wrote:
>>>> EXPERT mode is currently used to gate any options that are in technical
>>>> preview or not security supported. At the moment, the only way to select
>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>
>>>> However, if the user forgets to add the option on one of the make
>>>> invocations (even a clean), then .config will get rewritten. This may lead
>>>> to a rather frustrating experience as it is difficult to diagnose the
>>>> issue.
>>>
>>> Is / will this still be true after Anthony's rework of the build
>>> system? Right now we already have
>>>
>>> clean-targets := %clean
>>> no-dot-config-targets := $(clean-targets) \
>>>                            ...
>>
>> I haven't tried Anthony's rework yet. But I guess the problem would
>> be the same if you forget to pass XEN_CONFIG_EXPERT=y to make.
> 
> Why? xen/.config would get re-written only if kconfig got run in
> the first place. It is my understanding that no-dot-config-targets
> exists to avoid including .config, and as a result make won't find
> a need anymore to cause it to be re-made if out of date.

kconfig may be executed because you changed one of the */Kconfig files. So 
if you happen to forget XEN_CONFIG_EXPERT=y on your build command line, 
then you will have your .config rewritten without the expert options.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 04 09:37:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 09:37:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVXXg-0007K8-S7; Mon, 04 May 2020 09:37:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVXXf-0007K3-Dq
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 09:37:39 +0000
X-Inumbo-ID: e342a106-8dea-11ea-9d09-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e342a106-8dea-11ea-9d09-12813bfff9fa;
 Mon, 04 May 2020 09:37:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 1FA69AE53;
 Mon,  4 May 2020 09:37:37 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <548a9fc5-c251-5d8b-d297-4788d60b801d@xen.org>
 <36944bda-14a8-0134-cd1d-1f08becb28b0@suse.com>
 <898479ac-fd5c-48f4-28cb-8bbb2dc60d83@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <443018dd-d30e-037d-b7b1-d531d81bed15@suse.com>
Date: Mon, 4 May 2020 11:37:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <898479ac-fd5c-48f4-28cb-8bbb2dc60d83@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04.05.2020 11:30, Julien Grall wrote:
> Hi Jan,
> 
> On 04/05/2020 10:18, Jan Beulich wrote:
>> On 30.04.2020 17:35, Julien Grall wrote:
>>> On 30/04/2020 15:50, Jan Beulich wrote:
>>>> On 30.04.2020 16:25, Julien Grall wrote:
>>>>> EXPERT mode is currently used to gate any options that are in technical
>>>>> preview or not security supported. At the moment, the only way to select
>>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>>
>>>>> However, if the user forgets to add the option on one of the make
>>>>> invocations (even a clean), then .config will get rewritten. This may lead
>>>>> to a rather frustrating experience as it is difficult to diagnose the
>>>>> issue.
>>>>
>>>> Is / will this still be true after Anthony's rework of the build
>>>> system? Right now we already have
>>>>
>>>> clean-targets := %clean
>>>> no-dot-config-targets := $(clean-targets) \
>>>>                            ...
>>>
>>> I haven't tried Anthony's rework yet. But I guess the problem would
>>> be the same if you forget to pass XEN_CONFIG_EXPERT=y to make.
>>
>> Why? xen/.config would get re-written only if kconfig got run in
>> the first place. It is my understanding that no-dot-config-targets
>> exists to avoid including .config, and as a result make won't find
>> a need anymore to cause it to be re-made if out of date.
> 
> kconfig may be executed because you changed one of the */Kconfig files.
> So if you happen to forget XEN_CONFIG_EXPERT=y on your build command
> line, then you will have your .config rewritten without the expert options.

That's still a build system issue then (if this is really what happens):
Dependencies of xen/.config shouldn't be evaluated as long as it doesn't
get used.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 09:54:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 09:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVXnh-0000bs-PE; Mon, 04 May 2020 09:54:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7wWN=6S=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jVXng-0000bl-BN
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 09:54:12 +0000
X-Inumbo-ID: 3458ef80-8ded-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3458ef80-8ded-11ea-9887-bc764e2007e4;
 Mon, 04 May 2020 09:54:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zPDpfiShJhHJmozdGMIlI4MkxqBwf/bG401bHOfdNig=; b=k8dJ0g9Q9N47CXqPAMhcFuOvWB
 j6H8Vqi3OhR3mrhkix5qH36QGuzORxB3ykMz+nMo1ID5xg4eQzcfyrOuNgJalZjxQtVUWsDahr6eQ
 oEe6KTFJC5A8vwnUIMMjVdvoHQyz5I6iN7mUioyzfTrMLNDbzmQ3ra6Puvz1k6OjBa1o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jVXnc-00056w-MF; Mon, 04 May 2020 09:54:08 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jVXnc-0005p2-Et; Mon, 04 May 2020 09:54:08 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <548a9fc5-c251-5d8b-d297-4788d60b801d@xen.org>
 <36944bda-14a8-0134-cd1d-1f08becb28b0@suse.com>
 <898479ac-fd5c-48f4-28cb-8bbb2dc60d83@xen.org>
 <443018dd-d30e-037d-b7b1-d531d81bed15@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3c581eac-9b2f-493c-f86e-2377450a6a2a@xen.org>
Date: Mon, 4 May 2020 10:54:06 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <443018dd-d30e-037d-b7b1-d531d81bed15@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 04/05/2020 10:37, Jan Beulich wrote:
> On 04.05.2020 11:30, Julien Grall wrote:
>> Hi Jan,
>>
>> On 04/05/2020 10:18, Jan Beulich wrote:
>>> On 30.04.2020 17:35, Julien Grall wrote:
>>>> On 30/04/2020 15:50, Jan Beulich wrote:
>>>>> On 30.04.2020 16:25, Julien Grall wrote:
>>>>>> EXPERT mode is currently used to gate any options that are in technical
>>>>>> preview or not security supported. At the moment, the only way to select
>>>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>>>
>>>>>> However, if the user forgets to add the option on one of the make
>>>>>> invocations (even a clean), then .config will get rewritten. This may lead
>>>>>> to a rather frustrating experience as it is difficult to diagnose the
>>>>>> issue.
>>>>>
>>>>> Is / will this still be true after Anthony's rework of the build
>>>>> system? Right now we already have
>>>>>
>>>>> clean-targets := %clean
>>>>> no-dot-config-targets := $(clean-targets) \
>>>>>                             ...
>>>>
>>>> I haven't tried Anthony's rework yet. But I guess the problem would
>>>> be the same if you forget to pass XEN_CONFIG_EXPERT=y to make.
>>>
>>> Why? xen/.config would get re-written only if kconfig got run in
>>> the first place. It is my understanding that no-dot-config-targets
>>> exists to avoid including .config, and as a result make won't find
>>> a need anymore to cause it to be re-made if out of date.
>>
>> kconfig may be executed because you changed one of the */Kconfig files.
>> So if you happen to forget XEN_CONFIG_EXPERT=y on your build command
>> line, then you will have your .config rewritten without the expert options.
> 
> That's still a build system issue then (if this is really what happens):
> Dependencies of xen/.config shouldn't be evaluated as long as it doesn't
> get used.

I am not sure I understand what you mean by "doesn't get used" here. 
When you build Xen, xen/.config is a dependency of the auto-generated 
header, so 'make' will actually check whether there are any modifications 
to */Kconfig.

A user would also expect any modification to a */Kconfig file to be 
picked up by 'make' when building the hypervisor. This is how it works in 
Linux and I see no reason for Xen to diverge here.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 04 10:07:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVY0j-0001sn-F6; Mon, 04 May 2020 10:07:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z3vB=6S=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jVY0h-0001si-MJ
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:07:39 +0000
X-Inumbo-ID: 15824f82-8def-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15824f82-8def-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 10:07:39 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id g13so20178529wrb.8
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 03:07:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=/ZVZt1eZRouG8SfIcedxiaba1cChmaXwtwdN3BvQWTc=;
 b=HcRGrtw9qZwIdAjOVDw2nBeZQtDcI35wjvWlHOGFRg+YHxIgcEOeEOIUXHFvYJWe1y
 0aEc8lF3QcOuHPeMJ45OuZQ1/+1Dnot2UsK7TlMShn1hmMCU98nR3HqSnBLrZu1ndQJM
 w1kaQH2SKAmFC3AZAy7uhqrr/gwyVu0QHVGrrHfctlxK7lQakRs3aFMLtMqke8iQsdYT
 PLVukoSv+TknjgwBfsv/LgVJCO02T7xkFe7qLDeEyKZT9f5P4xcYNT0INYTTlZ5cGDXi
 WsE3CbUxGOSfFLN2WtsdFFKoJ5fyjZFpRUSS1D0HqN6DiszZ4ePD73Tu2IZ8D9II0tF4
 AY8A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :mime-version:content-transfer-encoding;
 bh=/ZVZt1eZRouG8SfIcedxiaba1cChmaXwtwdN3BvQWTc=;
 b=Wjwmp0yjNXlK5TgKvH/Stw5V8I9pl8wyxIyiDyWbNBiE4kK2l0xj8Pcl8Vw7zptarx
 FDeoe6ucPnB9m93axlJRTJpScBz+Tqo7WYH/5Jmrfyhkc/MTcJ4gek8pgkTfcm4YxqIV
 WOxVOflL82kkpreBTOv9ksGiWRndgU3ZkYWi/8SfwQAW9xyuseeta/RZgQDFyqBdJ/6P
 4xZuldZgMBc1Lyk3mm+0YOwAunt9s3WRcV0bi6aJ7QZPofXkvSRPs1CSKJ3ria2GN24x
 cVxozPVhBVTsPGdPNxhZN8UONvcoaec4P/HtcEhO/7MikhdDZTUQQGLZngIo5CB5iZi/
 lRTQ==
X-Gm-Message-State: AGi0PuarReB8iKtA4IesTP2v01PmDvF8695nHIEev2DzbVMDRSJZD9c8
 rqOxWz6wcXgkYuqwF+Tyn6o=
X-Google-Smtp-Source: APiQypI4CLhwM1s9qsXBHl1cozRkicm4SNedwT3p1NV3wnHda5P0CMNLiBvDg4MvOP8bkIL5h8q9ag==
X-Received: by 2002:adf:a74b:: with SMTP id e11mr16878108wrd.99.1588586858230; 
 Mon, 04 May 2020 03:07:38 -0700 (PDT)
Received: from x1w.redhat.com (26.red-88-21-207.staticip.rima-tde.net.
 [88.21.207.26])
 by smtp.gmail.com with ESMTPSA id k9sm18517778wrd.17.2020.05.04.03.07.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 04 May 2020 03:07:37 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 0/3] various: Remove unnecessary casts
Date: Mon,  4 May 2020 12:07:32 +0200
Message-Id: <20200504100735.10269-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Remove unnecessary casts using coccinelle scripts.

The CPU()/OBJECT() patches don't introduce any logical change;
the DEVICE() one removes various OBJECT_CHECK() calls.

Since v1:
- Add A-b/R-b tags
- Reword description (Markus)

Philippe Mathieu-Daudé (3):
  target: Remove unnecessary CPU() cast
  various: Remove unnecessary OBJECT() cast
  hw: Remove unnecessary DEVICE() cast

 hw/core/bus.c                       | 2 +-
 hw/display/artist.c                 | 2 +-
 hw/display/cg3.c                    | 2 +-
 hw/display/sm501.c                  | 2 +-
 hw/display/tcx.c                    | 4 ++--
 hw/display/vga-isa.c                | 2 +-
 hw/i2c/imx_i2c.c                    | 2 +-
 hw/i2c/mpc_i2c.c                    | 2 +-
 hw/ide/ahci-allwinner.c             | 2 +-
 hw/ide/piix.c                       | 2 +-
 hw/ipmi/smbus_ipmi.c                | 2 +-
 hw/microblaze/petalogix_ml605_mmu.c | 8 ++++----
 hw/misc/macio/pmu.c                 | 2 +-
 hw/net/ftgmac100.c                  | 3 +--
 hw/net/imx_fec.c                    | 2 +-
 hw/nubus/nubus-device.c             | 2 +-
 hw/pci-host/bonito.c                | 2 +-
 hw/ppc/spapr.c                      | 2 +-
 hw/s390x/sclp.c                     | 2 +-
 hw/sh4/sh_pci.c                     | 2 +-
 hw/xen/xen-legacy-backend.c         | 2 +-
 monitor/misc.c                      | 3 +--
 qom/object.c                        | 4 ++--
 target/ppc/mmu_helper.c             | 2 +-
 24 files changed, 29 insertions(+), 31 deletions(-)

-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Mon May 04 10:07:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVY0n-0001t4-Ql; Mon, 04 May 2020 10:07:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z3vB=6S=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jVY0m-0001sw-KV
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:07:44 +0000
X-Inumbo-ID: 16da9592-8def-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16da9592-8def-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 10:07:41 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id y3so1167475wrt.1
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 03:07:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=pqAO0bA1QvbNiBti+VeepWcwOaVLpxEZnxKr37xmzuE=;
 b=mNzUhqyuvwbObBrckIQQRqTzNHBBsL3ym2fUD3m/+zm9Xqm7CM2aUjGEy0tWaFQrrD
 YVDAjI8z6v9lLJx42ToBPknDZfH05+UKtCzKzgNr4MTMkCfIk/m3I+6VTw+a3ljwYwDX
 vqw6SrYla+n8Fc7zIMhcGqQDjESGwm5u1EuKEWljsNLyKjNuANONlXznl4FYJ2xVrp40
 VW0Oy0lzxZ/OSSeB5OKPMnQxWOlMkuv984lpi+FuBa6VtbIb57yelQpHA/eX75x4Zd5t
 tYxJobeQgsnQgUS/GWXUbL6lTDYSgG2TenM1Q/dfLoGVkes9QJ1ghKgcevtFXHzQM8bp
 J78Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=pqAO0bA1QvbNiBti+VeepWcwOaVLpxEZnxKr37xmzuE=;
 b=Zbkrpwvg0oVp2gpMibD1sKPZPCnc+GCFqa2jYegO+8YAKvQbMjJe9UnIUtLw6iKfko
 rZIBl1p/QmY7HRlyGUbELULC22cCWU0ZwciKi0HgHssz1LOPy2M08cxh+bktCdlvBtcz
 IxfS+oRGBcPuOdjRElS38dtuRKH+Gsyc8S7uElN2izYzvzExlKP6Auw1sSBVd71dipiu
 CgMhUZMHVIv1AxLIX79X75gqa+qYDHal8ngP2OjF1VhyCVEuGokyonMEL5l/t1oOT3b+
 F4IRczcsL2Xl11zYK7AAp24acXdqYPkgZNhZisey+/f6/FcEthT11Cc4tRWM/RLSvnYM
 SCEw==
X-Gm-Message-State: AGi0PuauBYym6dCU5UJhhY/H9ndwdmpvBDa3YRlSIPvZfW6MWIjho5pc
 irVG82WK/HlLxkzfmJGiBsA=
X-Google-Smtp-Source: APiQypLq5H1B+qwCkRrljK1N7qAw8SaQFKARSpmFekNGsBBZxyVnzr6zXEYdT7BPPkaenEkB9e3G3w==
X-Received: by 2002:adf:eacb:: with SMTP id o11mr10572540wrn.253.1588586860549; 
 Mon, 04 May 2020 03:07:40 -0700 (PDT)
Received: from x1w.redhat.com (26.red-88-21-207.staticip.rima-tde.net.
 [88.21.207.26])
 by smtp.gmail.com with ESMTPSA id k9sm18517778wrd.17.2020.05.04.03.07.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 04 May 2020 03:07:39 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 1/3] target: Remove unnecessary CPU() cast
Date: Mon,  4 May 2020 12:07:33 +0200
Message-Id: <20200504100735.10269-2-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200504100735.10269-1-f4bug@amsat.org>
References: <20200504100735.10269-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The CPU() macro is defined as:

  #define CPU(obj) ((CPUState *)(obj))

which is a plain pointer cast: it performs no runtime type check, and
applying it to a pointer that is already of CPUState type is a no-op.

Remove the unnecessary CPU() casts when we already know the pointer
is of CPUState type.

Patch created mechanically using spatch with this script:

  @@
  typedef CPUState;
  CPUState *s;
  @@
  -   CPU(s)
  +   s
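
Taking the plain-cast definition quoted above, the removal is easy to
see as behavior-preserving. The following standalone sketch uses toy
stand-ins for CPUState and tlb_flush() (not QEMU's real definitions)
to show that both call forms pass the identical pointer:

```c
#include <assert.h>

/* Toy stand-in for QEMU's CPUState; fields are illustrative only. */
typedef struct CPUState {
    int cpu_index;
} CPUState;

/* The plain-cast definition quoted in the commit message. */
#define CPU(obj) ((CPUState *)(obj))

static int flush_count;

/* Toy stand-in for tlb_flush(): just records that it was called. */
static void tlb_flush(CPUState *cs)
{
    (void)cs;
    flush_count++;
}

/* Calls tlb_flush() once with and once without the cast. */
static int demo(void)
{
    CPUState s = { .cpu_index = 0 };
    CPUState *cs = &s;

    /* The cast is an identity on a pointer of CPUState type. */
    assert(CPU(cs) == cs);

    tlb_flush(CPU(cs));  /* before the patch */
    tlb_flush(cs);       /* after the patch: same pointer, same type */
    return flush_count;
}
```

Since CPU(cs) and cs have the same value and the same type here, a
compiler generates identical code for the two calls.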

Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
v2: Reword (Markus)
---
 target/ppc/mmu_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/ppc/mmu_helper.c b/target/ppc/mmu_helper.c
index 86c667b094..8972714775 100644
--- a/target/ppc/mmu_helper.c
+++ b/target/ppc/mmu_helper.c
@@ -1820,7 +1820,7 @@ static inline void do_invalidate_BAT(CPUPPCState *env, target_ulong BATu,
     if (((end - base) >> TARGET_PAGE_BITS) > 1024) {
         /* Flushing 1024 4K pages is slower than a complete flush */
         LOG_BATS("Flush all BATs\n");
-        tlb_flush(CPU(cs));
+        tlb_flush(cs);
         LOG_BATS("Flush done\n");
         return;
     }
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Mon May 04 10:07:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:07:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVY0t-0001tn-6D; Mon, 04 May 2020 10:07:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z3vB=6S=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jVY0r-0001ta-Jj
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:07:49 +0000
X-Inumbo-ID: 183e7d68-8def-11ea-9887-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 183e7d68-8def-11ea-9887-bc764e2007e4;
 Mon, 04 May 2020 10:07:43 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id l18so9462306wrn.6
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 03:07:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=3T0kMXUkml3SPOC+ozrWaQIIxf6I2/xjXP4RZGOc7C0=;
 b=LBTEMgDf21d8EYLLYw4CGG8/pL52gzKDRRi3VpCUJkbuls2ghqJPmeqte7XWNL48LH
 yjrNEpeDOF2wVYy7hldkagWzQZyJ4GoaA5Xw3rfxXfAT9JUt9c8cNu9LNmG3XRrExNfM
 fc8NKqKVYT4RuesNvayTEZbl5oTDlvJIiUlfhnMiNe0OXJr50hTT6BDfBeRubK/qchuw
 V3Mv+fiB8qlXR3PVSs+GjbRyZ51EksoO3hlE8DqHlBz9/TDKBdGBqpvfvMFTF5BejsdB
 3pNHSONBxGghtv3FJuU6wXa+q2LpRPbagT1YWxHvcfEJyDFnzPDeLEK6EnH4dwSG3kcv
 U8YQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=3T0kMXUkml3SPOC+ozrWaQIIxf6I2/xjXP4RZGOc7C0=;
 b=dM0gePsuHcvemONPQBydk3fA/q+UghoTVUlcZKB0Pem9VzJQqAN2rTQwgu5A15CKlc
 Fx4V/MpsAvdfaxeLbeiUuWenvMRUlVS874UkEgJM4bZGjbOnqEnwrrCmY8atcpjgUclG
 RIBTZ3XGPuEdZxKmqvPiIE8fOGY7nkFwCJMl4eD8SKjX4JE7Q3yxUVDvoJd/w1In4DO2
 ynR7w6E1XzAiO2jNawlnVHL2NbjOgN+VGCkPiq6T6SfYV+zF+tB1DGEefFQf5Kd9JJG4
 67MBnu7w89ozWXlcA5nFCIuXZLMS/yltCdTWccnd014Abur6clAeFSAXLCDdRmMNE9py
 Tl4A==
X-Gm-Message-State: AGi0PubKz1hyyy1Jb9OBLV7SZUMxizZUb+Mr1UH4c75D9W3saLkyo20C
 yBl4WPqDzmw3S2ULClB7/xU=
X-Google-Smtp-Source: APiQypJBR0VsXTUOks5yliB8aAc5q8D6l3Gg5z/mv3Da0c8XMYX7HdFCNrgqVwjEZd9i5aQQrfLYZw==
X-Received: by 2002:adf:fa41:: with SMTP id y1mr17727689wrr.131.1588586862825; 
 Mon, 04 May 2020 03:07:42 -0700 (PDT)
Received: from x1w.redhat.com (26.red-88-21-207.staticip.rima-tde.net.
 [88.21.207.26])
 by smtp.gmail.com with ESMTPSA id k9sm18517778wrd.17.2020.05.04.03.07.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 04 May 2020 03:07:42 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 2/3] various: Remove unnecessary OBJECT() cast
Date: Mon,  4 May 2020 12:07:34 +0200
Message-Id: <20200504100735.10269-3-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200504100735.10269-1-f4bug@amsat.org>
References: <20200504100735.10269-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, David Gibson <david@gibson.dropbear.id.au>,
 Corey Minyard <cminyard@mvista.com>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The OBJECT() macro is defined as:

  #define OBJECT(obj) ((Object *)(obj))

which is a plain pointer cast: it performs no runtime type check, and
applying it to a pointer that is already of Object type is a no-op.

Remove the unnecessary OBJECT() casts when we already know the
pointer is of Object type.

Patch created mechanically using spatch with this script:

  @@
  typedef Object;
  Object *o;
  @@
  -   OBJECT(o)
  +   o

Acked-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Acked-by: John Snow <jsnow@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
v2: Reword (Markus)
---
 hw/core/bus.c                       | 2 +-
 hw/ide/ahci-allwinner.c             | 2 +-
 hw/ipmi/smbus_ipmi.c                | 2 +-
 hw/microblaze/petalogix_ml605_mmu.c | 8 ++++----
 hw/s390x/sclp.c                     | 2 +-
 monitor/misc.c                      | 3 +--
 qom/object.c                        | 4 ++--
 7 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/hw/core/bus.c b/hw/core/bus.c
index 3dc0a825f0..4ea5870de8 100644
--- a/hw/core/bus.c
+++ b/hw/core/bus.c
@@ -25,7 +25,7 @@
 
 void qbus_set_hotplug_handler(BusState *bus, Object *handler, Error **errp)
 {
-    object_property_set_link(OBJECT(bus), OBJECT(handler),
+    object_property_set_link(OBJECT(bus), handler,
                              QDEV_HOTPLUG_HANDLER_PROPERTY, errp);
 }
 
diff --git a/hw/ide/ahci-allwinner.c b/hw/ide/ahci-allwinner.c
index bb8393d2b6..8536b9eb5a 100644
--- a/hw/ide/ahci-allwinner.c
+++ b/hw/ide/ahci-allwinner.c
@@ -90,7 +90,7 @@ static void allwinner_ahci_init(Object *obj)
     SysbusAHCIState *s = SYSBUS_AHCI(obj);
     AllwinnerAHCIState *a = ALLWINNER_AHCI(obj);
 
-    memory_region_init_io(&a->mmio, OBJECT(obj), &allwinner_ahci_mem_ops, a,
+    memory_region_init_io(&a->mmio, obj, &allwinner_ahci_mem_ops, a,
                           "allwinner-ahci", ALLWINNER_AHCI_MMIO_SIZE);
     memory_region_add_subregion(&s->ahci.mem, ALLWINNER_AHCI_MMIO_OFF,
                                 &a->mmio);
diff --git a/hw/ipmi/smbus_ipmi.c b/hw/ipmi/smbus_ipmi.c
index 2a9470d9df..f1a0148755 100644
--- a/hw/ipmi/smbus_ipmi.c
+++ b/hw/ipmi/smbus_ipmi.c
@@ -329,7 +329,7 @@ static void smbus_ipmi_init(Object *obj)
 {
     SMBusIPMIDevice *sid = SMBUS_IPMI(obj);
 
-    ipmi_bmc_find_and_link(OBJECT(obj), (Object **) &sid->bmc);
+    ipmi_bmc_find_and_link(obj, (Object **) &sid->bmc);
 }
 
 static void smbus_ipmi_get_fwinfo(struct IPMIInterface *ii, IPMIFwInfo *info)
diff --git a/hw/microblaze/petalogix_ml605_mmu.c b/hw/microblaze/petalogix_ml605_mmu.c
index 0a2640c40b..52dcea9abd 100644
--- a/hw/microblaze/petalogix_ml605_mmu.c
+++ b/hw/microblaze/petalogix_ml605_mmu.c
@@ -150,9 +150,9 @@ petalogix_ml605_init(MachineState *machine)
     qdev_set_nic_properties(eth0, &nd_table[0]);
     qdev_prop_set_uint32(eth0, "rxmem", 0x1000);
     qdev_prop_set_uint32(eth0, "txmem", 0x1000);
-    object_property_set_link(OBJECT(eth0), OBJECT(ds),
+    object_property_set_link(OBJECT(eth0), ds,
                              "axistream-connected", &error_abort);
-    object_property_set_link(OBJECT(eth0), OBJECT(cs),
+    object_property_set_link(OBJECT(eth0), cs,
                              "axistream-control-connected", &error_abort);
     qdev_init_nofail(eth0);
     sysbus_mmio_map(SYS_BUS_DEVICE(eth0), 0, AXIENET_BASEADDR);
@@ -163,9 +163,9 @@ petalogix_ml605_init(MachineState *machine)
     cs = object_property_get_link(OBJECT(eth0),
                                   "axistream-control-connected-target", NULL);
     qdev_prop_set_uint32(dma, "freqhz", 100 * 1000000);
-    object_property_set_link(OBJECT(dma), OBJECT(ds),
+    object_property_set_link(OBJECT(dma), ds,
                              "axistream-connected", &error_abort);
-    object_property_set_link(OBJECT(dma), OBJECT(cs),
+    object_property_set_link(OBJECT(dma), cs,
                              "axistream-control-connected", &error_abort);
     qdev_init_nofail(dma);
     sysbus_mmio_map(SYS_BUS_DEVICE(dma), 0, AXIDMA_BASEADDR);
diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c
index ede056b3ef..4132286db7 100644
--- a/hw/s390x/sclp.c
+++ b/hw/s390x/sclp.c
@@ -322,7 +322,7 @@ void s390_sclp_init(void)
 
     object_property_add_child(qdev_get_machine(), TYPE_SCLP, new,
                               NULL);
-    object_unref(OBJECT(new));
+    object_unref(new);
     qdev_init_nofail(DEVICE(new));
 }
 
diff --git a/monitor/misc.c b/monitor/misc.c
index 6c45fa490f..57af5fa5a4 100644
--- a/monitor/misc.c
+++ b/monitor/misc.c
@@ -1839,8 +1839,7 @@ void object_add_completion(ReadLineState *rs, int nb_args, const char *str)
 static int qdev_add_hotpluggable_device(Object *obj, void *opaque)
 {
     GSList **list = opaque;
-    DeviceState *dev = (DeviceState *)object_dynamic_cast(OBJECT(obj),
-                                                          TYPE_DEVICE);
+    DeviceState *dev = (DeviceState *)object_dynamic_cast(obj, TYPE_DEVICE);
 
     if (dev == NULL) {
         return 0;
diff --git a/qom/object.c b/qom/object.c
index be700e831f..07c1443d0e 100644
--- a/qom/object.c
+++ b/qom/object.c
@@ -762,7 +762,7 @@ Object *object_new_with_propv(const char *typename,
         }
     }
 
-    object_unref(OBJECT(obj));
+    object_unref(obj);
     return obj;
 
  error:
@@ -1687,7 +1687,7 @@ void object_property_add_child(Object *obj, const char *name,
         return;
     }
 
-    type = g_strdup_printf("child<%s>", object_get_typename(OBJECT(child)));
+    type = g_strdup_printf("child<%s>", object_get_typename(child));
 
     op = object_property_add(obj, name, type, object_get_child_property, NULL,
                              object_finalize_child_property, child, &local_err);
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Mon May 04 10:07:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:07:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVY0y-0001vw-N3; Mon, 04 May 2020 10:07:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z3vB=6S=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jVY0w-0001vK-L4
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:07:54 +0000
X-Inumbo-ID: 19c6d2de-8def-11ea-ae69-bc764e2007e4
Received: from mail-wm1-x344.google.com (unknown [2a00:1450:4864:20::344])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19c6d2de-8def-11ea-ae69-bc764e2007e4;
 Mon, 04 May 2020 10:07:46 +0000 (UTC)
Received: by mail-wm1-x344.google.com with SMTP id v4so15139213wme.1
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 03:07:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=R2TWn8nDih2THqdikXordxmPJ+K4YiSHVpyhzbFn/CU=;
 b=o3V1EEtH9tSvZEyXSopNv5y2KWuE4TZV1e1dK5kdXwoA1FgiXKTecpBfzbl7UzmiaX
 c4531ZThZDg9qED2USOeM3zygLsnuql17lavV8nr1KDAbQXghfcdr1aAXQHosnrF1S+B
 ziZ7BJDj8zIFTKC34O9EuZwCvApwMp36oPhLM7ez/V+JQ/u0B7MXuNwzPSUZODmRSVOG
 NOeIz5GBXx+JQafCcK3nnNJ3U32h2HmweYqfuYFyUlN+0Qtqgpxoldsj2zLaGH4mTkKu
 9OtFbIqOLQz5U2cWsmhnH2AvnXXBFM7CXa6Ox7EMhv7kGonl1irCn38KKtpLc4gVN/Ty
 kWrA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=R2TWn8nDih2THqdikXordxmPJ+K4YiSHVpyhzbFn/CU=;
 b=G6aRjtSrvgyiaGdtiHduMRZiMKpetcIL/H35Y2KvyFch8m9KHRjM7mf5/4JvZWK7c3
 57Yo5eJIjC48kdZECm/sC0wsEfOt2SCXdAiw+ULyHKUX+nycV7TQtqSo0ezMcaM8Vztj
 QDMc13xSKqgH+fftw5Z41lFRwN+46XamsN5lTTbbqY60Tfb3G99LDK9GVfuGFchIqNGL
 Yfg5S9k2aAVafpFhKMFRmCYGHsfPkhYrE4eQoiT/bdeJuq/h+2ss0uW9hXR9EE9wZR/B
 eNGXL6MqN6j0Wfdc0iTZ/XUWegorjZEdK4aQIeaf+KsSftqyBtruvtmK7Drn99e1s5D6
 Gkxw==
X-Gm-Message-State: AGi0PuZRzMEySVc15trv1sEGT1+neaMIzVf5x6jcPdtLVVI7Q6+JfRpu
 IMOmH8LR9V8Ev1lq5Rklzvo=
X-Google-Smtp-Source: APiQypKCgq+7Nn0t0YCtFT8+ATEOSeuYjpzWUEC4oFo6QWD9aCYh0hFoqx154W1a/kdxeludgm3o7g==
X-Received: by 2002:a7b:ce8b:: with SMTP id q11mr13835493wmj.101.1588586865249; 
 Mon, 04 May 2020 03:07:45 -0700 (PDT)
Received: from x1w.redhat.com (26.red-88-21-207.staticip.rima-tde.net.
 [88.21.207.26])
 by smtp.gmail.com with ESMTPSA id k9sm18517778wrd.17.2020.05.04.03.07.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 04 May 2020 03:07:44 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 3/3] hw: Remove unnecessary DEVICE() cast
Date: Mon,  4 May 2020 12:07:35 +0200
Message-Id: <20200504100735.10269-4-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200504100735.10269-1-f4bug@amsat.org>
References: <20200504100735.10269-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Markus Armbruster <armbru@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The DEVICE() macro is defined as:

  #define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)

which expands to:

  ((DeviceState *)object_dynamic_cast_assert((Object *)(obj), (name),
                                             __FILE__, __LINE__,
                                             __func__))

This assertion can only fail when @obj points to something other
than its stated type, i.e. when we're in undefined behavior country.

Remove the unnecessary DEVICE() casts when we already know the
pointer is of DeviceState type.

Patch created mechanically using spatch with this script:

  @@
  typedef DeviceState;
  DeviceState *s;
  @@
  -   DEVICE(s)
  +   s
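
Unlike a plain cast, DEVICE() goes through a runtime-checked cast, so
this patch also drops the (always-passing) check. The sketch below
uses simplified toy stand-ins for Object, DeviceState and
object_dynamic_cast_assert() (not QEMU's real QOM code) to illustrate
why the check is redundant when the pointer already has DeviceState
type:

```c
#include <assert.h>
#include <string.h>

/* Toy stand-ins for QEMU's QOM types; illustrative only. */
typedef struct Object {
    const char *typename;
} Object;

typedef struct DeviceState {
    Object parent;  /* first member, so the casts below are valid */
    int realized;
} DeviceState;

#define TYPE_DEVICE "device"

/* Toy model of object_dynamic_cast_assert(): dies on type mismatch. */
static Object *object_dynamic_cast_assert(Object *obj, const char *type)
{
    assert(strcmp(obj->typename, type) == 0);
    return obj;
}

#define DEVICE(obj) \
    ((DeviceState *)object_dynamic_cast_assert((Object *)(obj), TYPE_DEVICE))

static int reset_count;

/* Toy stand-in for qdev_reset_all(): just records that it was called. */
static void qdev_reset_all(DeviceState *dev)
{
    (void)dev;
    reset_count++;
}

static int demo(void)
{
    DeviceState d = { .parent = { .typename = TYPE_DEVICE }, .realized = 0 };
    DeviceState *dev = &d;

    /* dev is already a DeviceState *, so the check can only pass... */
    qdev_reset_all(DEVICE(dev));  /* before the patch */
    /* ...and dropping it cannot change behavior for well-typed code. */
    qdev_reset_all(dev);          /* after the patch */
    return reset_count;
}
```

The only observable difference after the patch is the lost assertion,
which, as the commit message notes, could only fire from code that is
already undefined behavior.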

Acked-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Paul Durrant <paul@xen.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: John Snow <jsnow@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
v2: Reword (Markus)
---
 hw/display/artist.c         | 2 +-
 hw/display/cg3.c            | 2 +-
 hw/display/sm501.c          | 2 +-
 hw/display/tcx.c            | 4 ++--
 hw/display/vga-isa.c        | 2 +-
 hw/i2c/imx_i2c.c            | 2 +-
 hw/i2c/mpc_i2c.c            | 2 +-
 hw/ide/piix.c               | 2 +-
 hw/misc/macio/pmu.c         | 2 +-
 hw/net/ftgmac100.c          | 3 +--
 hw/net/imx_fec.c            | 2 +-
 hw/nubus/nubus-device.c     | 2 +-
 hw/pci-host/bonito.c        | 2 +-
 hw/ppc/spapr.c              | 2 +-
 hw/sh4/sh_pci.c             | 2 +-
 hw/xen/xen-legacy-backend.c | 2 +-
 16 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/hw/display/artist.c b/hw/display/artist.c
index 753dbb9a77..7e2a4556bd 100644
--- a/hw/display/artist.c
+++ b/hw/display/artist.c
@@ -1353,7 +1353,7 @@ static void artist_realizefn(DeviceState *dev, Error **errp)
     s->cursor_height = 32;
     s->cursor_width = 32;
 
-    s->con = graphic_console_init(DEVICE(dev), 0, &artist_ops, s);
+    s->con = graphic_console_init(dev, 0, &artist_ops, s);
     qemu_console_resize(s->con, s->width, s->height);
 }
 
diff --git a/hw/display/cg3.c b/hw/display/cg3.c
index a1ede10394..f7f1c199ce 100644
--- a/hw/display/cg3.c
+++ b/hw/display/cg3.c
@@ -321,7 +321,7 @@ static void cg3_realizefn(DeviceState *dev, Error **errp)
 
     sysbus_init_irq(sbd, &s->irq);
 
-    s->con = graphic_console_init(DEVICE(dev), 0, &cg3_ops, s);
+    s->con = graphic_console_init(dev, 0, &cg3_ops, s);
     qemu_console_resize(s->con, s->width, s->height);
 }
 
diff --git a/hw/display/sm501.c b/hw/display/sm501.c
index de0ab9d977..2a564889bd 100644
--- a/hw/display/sm501.c
+++ b/hw/display/sm501.c
@@ -1839,7 +1839,7 @@ static void sm501_init(SM501State *s, DeviceState *dev,
                                 &s->twoD_engine_region);
 
     /* create qemu graphic console */
-    s->con = graphic_console_init(DEVICE(dev), 0, &sm501_ops, s);
+    s->con = graphic_console_init(dev, 0, &sm501_ops, s);
 }
 
 static const VMStateDescription vmstate_sm501_state = {
diff --git a/hw/display/tcx.c b/hw/display/tcx.c
index 76de16e8ea..1fb45b1aab 100644
--- a/hw/display/tcx.c
+++ b/hw/display/tcx.c
@@ -868,9 +868,9 @@ static void tcx_realizefn(DeviceState *dev, Error **errp)
     sysbus_init_irq(sbd, &s->irq);
 
     if (s->depth == 8) {
-        s->con = graphic_console_init(DEVICE(dev), 0, &tcx_ops, s);
+        s->con = graphic_console_init(dev, 0, &tcx_ops, s);
     } else {
-        s->con = graphic_console_init(DEVICE(dev), 0, &tcx24_ops, s);
+        s->con = graphic_console_init(dev, 0, &tcx24_ops, s);
     }
     s->thcmisc = 0;
 
diff --git a/hw/display/vga-isa.c b/hw/display/vga-isa.c
index 0633ed382c..3aaeeeca1e 100644
--- a/hw/display/vga-isa.c
+++ b/hw/display/vga-isa.c
@@ -74,7 +74,7 @@ static void vga_isa_realizefn(DeviceState *dev, Error **errp)
                                         0x000a0000,
                                         vga_io_memory, 1);
     memory_region_set_coalescing(vga_io_memory);
-    s->con = graphic_console_init(DEVICE(dev), 0, s->hw_ops, s);
+    s->con = graphic_console_init(dev, 0, s->hw_ops, s);
 
     memory_region_add_subregion(isa_address_space(isadev),
                                 VBE_DISPI_LFB_PHYSICAL_ADDRESS,
diff --git a/hw/i2c/imx_i2c.c b/hw/i2c/imx_i2c.c
index 30b9aea247..2e02e1c4fa 100644
--- a/hw/i2c/imx_i2c.c
+++ b/hw/i2c/imx_i2c.c
@@ -305,7 +305,7 @@ static void imx_i2c_realize(DeviceState *dev, Error **errp)
                           IMX_I2C_MEM_SIZE);
     sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
     sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->irq);
-    s->bus = i2c_init_bus(DEVICE(dev), NULL);
+    s->bus = i2c_init_bus(dev, NULL);
 }
 
 static void imx_i2c_class_init(ObjectClass *klass, void *data)
diff --git a/hw/i2c/mpc_i2c.c b/hw/i2c/mpc_i2c.c
index 0aa1be3ce7..9a724f3a3e 100644
--- a/hw/i2c/mpc_i2c.c
+++ b/hw/i2c/mpc_i2c.c
@@ -332,7 +332,7 @@ static void mpc_i2c_realize(DeviceState *dev, Error **errp)
     memory_region_init_io(&i2c->iomem, OBJECT(i2c), &i2c_ops, i2c,
                           "mpc-i2c", 0x14);
     sysbus_init_mmio(SYS_BUS_DEVICE(dev), &i2c->iomem);
-    i2c->bus = i2c_init_bus(DEVICE(dev), "i2c");
+    i2c->bus = i2c_init_bus(dev, "i2c");
 }
 
 static void mpc_i2c_class_init(ObjectClass *klass, void *data)
diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index 3b2de4c312..b402a93636 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -193,7 +193,7 @@ int pci_piix3_xen_ide_unplug(DeviceState *dev, bool aux)
             blk_unref(blk);
         }
     }
-    qdev_reset_all(DEVICE(dev));
+    qdev_reset_all(dev);
     return 0;
 }
 
diff --git a/hw/misc/macio/pmu.c b/hw/misc/macio/pmu.c
index b8466a4a3f..4b7def9096 100644
--- a/hw/misc/macio/pmu.c
+++ b/hw/misc/macio/pmu.c
@@ -758,7 +758,7 @@ static void pmu_realize(DeviceState *dev, Error **errp)
 
     if (s->has_adb) {
         qbus_create_inplace(&s->adb_bus, sizeof(s->adb_bus), TYPE_ADB_BUS,
-                            DEVICE(dev), "adb.0");
+                            dev, "adb.0");
         s->adb_poll_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, pmu_adb_poll, s);
         s->adb_poll_mask = 0xffff;
         s->autopoll_rate_ms = 20;
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
index 041ed21017..25ebee7ec2 100644
--- a/hw/net/ftgmac100.c
+++ b/hw/net/ftgmac100.c
@@ -1035,8 +1035,7 @@ static void ftgmac100_realize(DeviceState *dev, Error **errp)
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
 
     s->nic = qemu_new_nic(&net_ftgmac100_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), DEVICE(dev)->id,
-                          s);
+                          object_get_typename(OBJECT(dev)), dev->id, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index a35c33683e..7adcc9df65 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -1323,7 +1323,7 @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
 
     s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
                           object_get_typename(OBJECT(dev)),
-                          DEVICE(dev)->id, s);
+                          dev->id, s);
 
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
diff --git a/hw/nubus/nubus-device.c b/hw/nubus/nubus-device.c
index 01ccad9e8e..ffe78a8823 100644
--- a/hw/nubus/nubus-device.c
+++ b/hw/nubus/nubus-device.c
@@ -156,7 +156,7 @@ void nubus_register_rom(NubusDevice *dev, const uint8_t *rom, uint32_t size,
 
 static void nubus_device_realize(DeviceState *dev, Error **errp)
 {
-    NubusBus *nubus = NUBUS_BUS(qdev_get_parent_bus(DEVICE(dev)));
+    NubusBus *nubus = NUBUS_BUS(qdev_get_parent_bus(dev));
     NubusDevice *nd = NUBUS_DEVICE(dev);
     char *name;
     hwaddr slot_offset;
diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
index cc6545c8a8..f212796044 100644
--- a/hw/pci-host/bonito.c
+++ b/hw/pci-host/bonito.c
@@ -606,7 +606,7 @@ static void bonito_pcihost_realize(DeviceState *dev, Error **errp)
     BonitoState *bs = BONITO_PCI_HOST_BRIDGE(dev);
 
     memory_region_init(&bs->pci_mem, OBJECT(dev), "pci.mem", BONITO_PCILO_SIZE);
-    phb->bus = pci_register_root_bus(DEVICE(dev), "pci",
+    phb->bus = pci_register_root_bus(dev, "pci",
                                      pci_bonito_set_irq, pci_bonito_map_irq,
                                      dev, &bs->pci_mem, get_system_io(),
                                      0x28, 32, TYPE_PCI_BUS);
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index 9a2bd501aa..3337f5e79c 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4031,7 +4031,7 @@ static void spapr_phb_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
     /* hotplug hooks should check it's enabled before getting this far */
     assert(drc);
 
-    spapr_drc_attach(drc, DEVICE(dev), &local_err);
+    spapr_drc_attach(drc, dev, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         return;
diff --git a/hw/sh4/sh_pci.c b/hw/sh4/sh_pci.c
index 08f2fc1dde..0a3e86f949 100644
--- a/hw/sh4/sh_pci.c
+++ b/hw/sh4/sh_pci.c
@@ -129,7 +129,7 @@ static void sh_pci_device_realize(DeviceState *dev, Error **errp)
     for (i = 0; i < 4; i++) {
         sysbus_init_irq(sbd, &s->irq[i]);
     }
-    phb->bus = pci_register_root_bus(DEVICE(dev), "pci",
+    phb->bus = pci_register_root_bus(dev, "pci",
                                      sh_pci_set_irq, sh_pci_map_irq,
                                      s->irq,
                                      get_system_memory(),
diff --git a/hw/xen/xen-legacy-backend.c b/hw/xen/xen-legacy-backend.c
index 4a373b2373..f9d013811a 100644
--- a/hw/xen/xen-legacy-backend.c
+++ b/hw/xen/xen-legacy-backend.c
@@ -705,7 +705,7 @@ int xen_be_init(void)
 
     xen_sysdev = qdev_create(NULL, TYPE_XENSYSDEV);
     qdev_init_nofail(xen_sysdev);
-    xen_sysbus = qbus_create(TYPE_XENSYSBUS, DEVICE(xen_sysdev), "xen-sysbus");
+    xen_sysbus = qbus_create(TYPE_XENSYSBUS, xen_sysdev, "xen-sysbus");
     qbus_set_bus_hotplug_handler(xen_sysbus, &error_abort);
 
     return 0;
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Mon May 04 10:14:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:14:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVY7h-00032A-J3; Mon, 04 May 2020 10:14:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y3Bs=6S=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVY7g-000325-Ji
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:14:52 +0000
X-Inumbo-ID: 16f8e2d0-8df0-11ea-9d0c-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16f8e2d0-8df0-11ea-9d0c-12813bfff9fa;
 Mon, 04 May 2020 10:14:51 +0000 (UTC)
X-SBRS: 2.7
X-MesageID: 16644716
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,351,1583211600"; d="scan'208";a="16644716"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <qemu-devel@nongnu.org>
Subject: [PATCH] xen: fix build without pci passthrough
Date: Mon, 4 May 2020 12:14:43 +0200
Message-ID: <20200504101443.3165-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

has_igd_gfx_passthru is only available when QEMU is built with
CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
code without checking if it's available.

Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org
---
 hw/xen/xen-common.c | 4 ++++
 hw/xen/xen_pt.h     | 7 +++++++
 2 files changed, 11 insertions(+)

diff --git a/hw/xen/xen-common.c b/hw/xen/xen-common.c
index a15070f7f6..c800862419 100644
--- a/hw/xen/xen-common.c
+++ b/hw/xen/xen-common.c
@@ -127,6 +127,7 @@ static void xen_change_state_handler(void *opaque, int running,
     }
 }
 
+#ifdef CONFIG_XEN_PCI_PASSTHROUGH
 static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
 {
     return has_igd_gfx_passthru;
@@ -136,6 +137,7 @@ static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
 {
     has_igd_gfx_passthru = value;
 }
+#endif
 
 static void xen_setup_post(MachineState *ms, AccelState *accel)
 {
@@ -197,11 +199,13 @@ static void xen_accel_class_init(ObjectClass *oc, void *data)
 
     compat_props_add(ac->compat_props, compat, G_N_ELEMENTS(compat));
 
+#ifdef CONFIG_XEN_PCI_PASSTHROUGH
     object_class_property_add_bool(oc, "igd-passthru",
         xen_get_igd_gfx_passthru, xen_set_igd_gfx_passthru,
         &error_abort);
     object_class_property_set_description(oc, "igd-passthru",
         "Set on/off to enable/disable igd passthrou", &error_abort);
+#endif
 }
 
 #define TYPE_XEN_ACCEL ACCEL_CLASS_NAME("xen")
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index 179775db7b..660dd8a008 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -1,6 +1,7 @@
 #ifndef XEN_PT_H
 #define XEN_PT_H
 
+#include "qemu/osdep.h"
 #include "hw/xen/xen_common.h"
 #include "hw/pci/pci.h"
 #include "xen-host-pci-device.h"
@@ -322,7 +323,13 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
                                             unsigned int domain,
                                             unsigned int bus, unsigned int slot,
                                             unsigned int function);
+
+#ifdef CONFIG_XEN_PCI_PASSTHROUGH
 extern bool has_igd_gfx_passthru;
+#else
+# define has_igd_gfx_passthru false
+#endif
+
 static inline bool is_igd_vga_passthrough(XenHostPCIDevice *dev)
 {
     return (has_igd_gfx_passthru
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon May 04 10:18:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:18:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYBZ-0003BP-4c; Mon, 04 May 2020 10:18:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVYBY-0003BK-BB
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:18:52 +0000
X-Inumbo-ID: a658ed62-8df0-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a658ed62-8df0-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 10:18:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 05460AC53;
 Mon,  4 May 2020 10:18:51 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <548a9fc5-c251-5d8b-d297-4788d60b801d@xen.org>
 <36944bda-14a8-0134-cd1d-1f08becb28b0@suse.com>
 <898479ac-fd5c-48f4-28cb-8bbb2dc60d83@xen.org>
 <443018dd-d30e-037d-b7b1-d531d81bed15@suse.com>
 <3c581eac-9b2f-493c-f86e-2377450a6a2a@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cb829776-b18d-3535-1869-da9dc2232ec8@suse.com>
Date: Mon, 4 May 2020 12:18:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <3c581eac-9b2f-493c-f86e-2377450a6a2a@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04.05.2020 11:54, Julien Grall wrote:
> Hi Jan,
> 
> On 04/05/2020 10:37, Jan Beulich wrote:
>> On 04.05.2020 11:30, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 04/05/2020 10:18, Jan Beulich wrote:
>>>> On 30.04.2020 17:35, Julien Grall wrote:
>>>>> On 30/04/2020 15:50, Jan Beulich wrote:
>>>>>> On 30.04.2020 16:25, Julien Grall wrote:
>>>>>>> EXPERT mode is currently used to gate any options that are in technical
>>>>>>> preview or not security supported At the moment, the only way to select
>>>>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>>>>
>>>>>>> However, if the user forget to add the option of one of the make
>>>>>>> command (even a clean), then .config will get rewritten. This may lead
>>>>>>> to a rather frustrating experience as it is difficult to diagnostic the
>>>>>>> issue.
>>>>>>
>>>>>> Is / will this still be true after Anthony's rework of the build
>>>>>> system? Right now we already have
>>>>>>
>>>>>> clean-targets := %clean
>>>>>> no-dot-config-targets := $(clean-targets) \
>>>>>>                             ...
>>>>>
>>>>> I haven't tried Anthony's rework yet. But I guess the problem would
>>>>> be the same if you forget to add XEN_CONFIG_EXPERT=y on make.
>>>>
>>>> Why? xen/.config would get re-written only if kconfig got run in
>>>> the first place. It is my understanding that no-dot-config-targets
>>>> exist to avoid including .config, and as a result make won't find
>>>> a need anymore to cause it to re-made if out of date.
>>>
>>> kconfig may be executed because you change one of the */Kconfig file.
>>> So if you happen to forget XEN_CONFIG_EXPERT=y on your build command
>>> line, then you will have your .config rewritten without expert options.
>>
>> That's still a build system issue then (if this is really what happens):
>> Dependencies of xen/.config shouldn't be evaluated as long as it doesn't
>> get used.
> 
> I am not sure to understand what you mean by "doesn't get used here". When you build Xen, xen/.config is a dependency for the auto-generated header. So 'make' will actually check whether there are any modification in */Kconfig.

But you were talking about "make clean", weren't you?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 10:23:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYFy-0003zw-Pf; Mon, 04 May 2020 10:23:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7wWN=6S=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jVYFx-0003zr-0p
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:23:25 +0000
X-Inumbo-ID: 47702ca6-8df1-11ea-9d0e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47702ca6-8df1-11ea-9d0e-12813bfff9fa;
 Mon, 04 May 2020 10:23:21 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jVYFr-0005mk-1E; Mon, 04 May 2020 10:23:19 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jVYFq-0007t2-Q3; Mon, 04 May 2020 10:23:18 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <548a9fc5-c251-5d8b-d297-4788d60b801d@xen.org>
 <36944bda-14a8-0134-cd1d-1f08becb28b0@suse.com>
 <898479ac-fd5c-48f4-28cb-8bbb2dc60d83@xen.org>
 <443018dd-d30e-037d-b7b1-d531d81bed15@suse.com>
 <3c581eac-9b2f-493c-f86e-2377450a6a2a@xen.org>
 <cb829776-b18d-3535-1869-da9dc2232ec8@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f64a6200-3ab0-0783-8eed-2c7ec66af484@xen.org>
Date: Mon, 4 May 2020 11:23:16 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <cb829776-b18d-3535-1869-da9dc2232ec8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 04/05/2020 11:18, Jan Beulich wrote:
> On 04.05.2020 11:54, Julien Grall wrote:
>> Hi Jan,
>>
>> On 04/05/2020 10:37, Jan Beulich wrote:
>>> On 04.05.2020 11:30, Julien Grall wrote:
>>>> Hi Jan,
>>>>
>>>> On 04/05/2020 10:18, Jan Beulich wrote:
>>>>> On 30.04.2020 17:35, Julien Grall wrote:
>>>>>> On 30/04/2020 15:50, Jan Beulich wrote:
>>>>>>> On 30.04.2020 16:25, Julien Grall wrote:
>>>>>>>> EXPERT mode is currently used to gate any options that are in technical
>>>>>>>> preview or not security supported At the moment, the only way to select
>>>>>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>>>>>
>>>>>>>> However, if the user forget to add the option of one of the make
>>>>>>>> command (even a clean), then .config will get rewritten. This may lead
>>>>>>>> to a rather frustrating experience as it is difficult to diagnostic the
>>>>>>>> issue.
>>>>>>>
>>>>>>> Is / will this still be true after Anthony's rework of the build
>>>>>>> system? Right now we already have
>>>>>>>
>>>>>>> clean-targets := %clean
>>>>>>> no-dot-config-targets := $(clean-targets) \
>>>>>>>                              ...
>>>>>>
>>>>>> I haven't tried Anthony's rework yet. But I guess the problem would
>>>>>> be the same if you forget to add XEN_CONFIG_EXPERT=y on make.
>>>>>
>>>>> Why? xen/.config would get re-written only if kconfig got run in
>>>>> the first place. It is my understanding that no-dot-config-targets
>>>>> exist to avoid including .config, and as a result make won't find
>>>>> a need anymore to cause it to re-made if out of date.
>>>>
>>>> kconfig may be executed because you change one of the */Kconfig file.
>>>> So if you happen to forget XEN_CONFIG_EXPERT=y on your build command
>>>> line, then you will have your .config rewritten without expert options.
>>>
>>> That's still a build system issue then (if this is really what happens):
>>> Dependencies of xen/.config shouldn't be evaluated as long as it doesn't
>>> get used.
>>
>> I am not sure to understand what you mean by "doesn't get used here". When you build Xen, xen/.config is a dependency for the auto-generated header. So 'make' will actually check whether there are any modification in */Kconfig.
> 
> But you were talking about "make clean", weren't you?

In the commit message, yes... You asked whether this was true, and I
answered that I hadn't had a chance to test Anthony's rework yet. However,
I also pointed out that it wouldn't solve the issue for a plain 'make'.

I assumed that your 'why?' was related to the plain 'make'.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 04 10:36:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYRx-00052p-FC; Mon, 04 May 2020 10:35:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1i/5=6S=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jVYRv-00052j-Tu
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:35:48 +0000
X-Inumbo-ID: 03ac53c7-8df3-11ea-9d10-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 03ac53c7-8df3-11ea-9d10-12813bfff9fa;
 Mon, 04 May 2020 10:35:47 +0000 (UTC)
Received: from mail-wm1-f72.google.com (mail-wm1-f72.google.com
 [209.85.128.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-225-0aJGlC4TOKa8VYfm0kvVpw-1; Mon, 04 May 2020 06:35:42 -0400
X-MC-Unique: 0aJGlC4TOKa8VYfm0kvVpw-1
Received: by mail-wm1-f72.google.com with SMTP id 71so3269765wmb.8
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 03:35:42 -0700 (PDT)
X-Received: by 2002:a5d:66c5:: with SMTP id k5mr9244267wrw.17.1588588541487;
 Mon, 04 May 2020 03:35:41 -0700 (PDT)
X-Google-Smtp-Source: APiQypKyJu1SMZNPQhE0tcEAXzN/rWndTUAqQVqX8t6qJaWFe5I/Z+N5VZ1io0MxIvHn6HTRdzqEog==
X-Received: by 2002:a5d:66c5:: with SMTP id k5mr9244239wrw.17.1588588541200;
 Mon, 04 May 2020 03:35:41 -0700 (PDT)
Received: from [192.168.1.39] (26.red-88-21-207.staticip.rima-tde.net.
 [88.21.207.26])
 by smtp.gmail.com with ESMTPSA id l19sm13335878wmj.14.2020.05.04.03.35.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 04 May 2020 03:35:40 -0700 (PDT)
Subject: Re: [PATCH] xen: fix build without pci passthrough
To: Roger Pau Monne <roger.pau@citrix.com>, qemu-devel@nongnu.org
References: <20200504101443.3165-1-roger.pau@citrix.com>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Message-ID: <ccf11b67-4aaa-5fb2-e23f-674380b47a13@redhat.com>
Date: Mon, 4 May 2020 12:35:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <20200504101443.3165-1-roger.pau@citrix.com>
Content-Language: en-US
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 5/4/20 12:14 PM, Roger Pau Monne wrote:
> has_igd_gfx_passthru is only available when QEMU is built with
> CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
> code without checking if it's available.
> 
> Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

See Kconfig fix suggested here:
https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg61844.html

> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> ---
>   hw/xen/xen-common.c | 4 ++++
>   hw/xen/xen_pt.h     | 7 +++++++
>   2 files changed, 11 insertions(+)
> 
> diff --git a/hw/xen/xen-common.c b/hw/xen/xen-common.c
> index a15070f7f6..c800862419 100644
> --- a/hw/xen/xen-common.c
> +++ b/hw/xen/xen-common.c
> @@ -127,6 +127,7 @@ static void xen_change_state_handler(void *opaque, int running,
>       }
>   }
> 
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>   static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
>   {
>       return has_igd_gfx_passthru;
> @@ -136,6 +137,7 @@ static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
>   {
>       has_igd_gfx_passthru = value;
>   }
> +#endif
> 
>   static void xen_setup_post(MachineState *ms, AccelState *accel)
>   {
> @@ -197,11 +199,13 @@ static void xen_accel_class_init(ObjectClass *oc, void *data)
> 
>       compat_props_add(ac->compat_props, compat, G_N_ELEMENTS(compat));
> 
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>       object_class_property_add_bool(oc, "igd-passthru",
>           xen_get_igd_gfx_passthru, xen_set_igd_gfx_passthru,
>           &error_abort);
>       object_class_property_set_description(oc, "igd-passthru",
>           "Set on/off to enable/disable igd passthrou", &error_abort);
> +#endif
>   }
> 
>   #define TYPE_XEN_ACCEL ACCEL_CLASS_NAME("xen")
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index 179775db7b..660dd8a008 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -1,6 +1,7 @@
>   #ifndef XEN_PT_H
>   #define XEN_PT_H
> 
> +#include "qemu/osdep.h"
>   #include "hw/xen/xen_common.h"
>   #include "hw/pci/pci.h"
>   #include "xen-host-pci-device.h"
> @@ -322,7 +323,13 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
>                                               unsigned int domain,
>                                               unsigned int bus, unsigned int slot,
>                                               unsigned int function);
> +
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>   extern bool has_igd_gfx_passthru;
> +#else
> +# define has_igd_gfx_passthru false
> +#endif
> +
>   static inline bool is_igd_vga_passthrough(XenHostPCIDevice *dev)
>   {
>       return (has_igd_gfx_passthru
> 



From xen-devel-bounces@lists.xenproject.org Mon May 04 10:36:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:36:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYT1-00057R-Qo; Mon, 04 May 2020 10:36:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8vuU=6S=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jVYT0-00057G-AI
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:36:54 +0000
X-Inumbo-ID: 2b631c74-8df3-11ea-ae69-bc764e2007e4
Received: from mail-io1-xd43.google.com (unknown [2607:f8b0:4864:20::d43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b631c74-8df3-11ea-ae69-bc764e2007e4;
 Mon, 04 May 2020 10:36:53 +0000 (UTC)
Received: by mail-io1-xd43.google.com with SMTP id c2so11717397iow.7
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 03:36:53 -0700 (PDT)
MIME-Version: 1.0
References: <20200429030409.9406-1-gorbak25@gmail.com>
 <20200429030409.9406-3-gorbak25@gmail.com>
In-Reply-To: <20200429030409.9406-3-gorbak25@gmail.com>
From: Paul Durrant <xadimgnik@gmail.com>
Date: Mon, 4 May 2020 11:36:41 +0100
Message-ID: <CAAgS=SmOQqJco70uK5Ept=ex5nf6WsB2YO0EAnC7S=3ocnJ=6Q@mail.gmail.com>
Subject: RE: [PATCH v2 2/2] Improve legacy vbios handling
To: Grzegorz Uriasz <gorbak25@gmail.com>, qemu-devel@nongnu.org
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Artur Puzio <artur@puzio.waw.pl>,
 Stefano Stabellini <sstabellini@kernel.org>, jakub@bartmin.ski,
 marmarek@invisiblethingslab.com, Anthony Perard <anthony.perard@citrix.com>,
 Jakub Nowak <j.nowak26@student.uw.edu.pl>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Grzegorz Uriasz <gorbak25@gmail.com>
> Sent: 29 April 2020 04:04
> To: qemu-devel@nongnu.org
> Cc: Grzegorz Uriasz <gorbak25@gmail.com>; marmarek@invisiblethingslab.com;
> artur@puzio.waw.pl; jakub@bartmin.ski; j.nowak26@student.uw.edu.pl;
> Stefano Stabellini <sstabellini@kernel.org>; Anthony Perard
> <anthony.perard@citrix.com>; Paul Durrant <paul@xen.org>;
> xen-devel@lists.xenproject.org
> Subject: [PATCH v2 2/2] Improve legacy vbios handling
>
> The current method of getting the vbios is broken - it just isn't
> working on any device I've tested - the reason for this is explained
> in the previous patch. The vbios is polymorphic, and getting a proper
> unmodified copy is often not possible without reverse engineering the
> firmware. We don't need an unmodified copy for most purposes - an
> unmodified copy is only needed for initializing the bios framebuffer,
> and providing the bios with a corrupted copy of the rom won't do any
> damage, as the bios will just ignore the rom.
>
> After the i915 driver takes over, the vbios is only needed for reading
> some metadata/configuration data. I've tested that not having any kind
> of vbios in the guest actually works fine, but on older generations of
> IGD there are some slight hiccups. To maximize compatibility, the best
> approach is to copy the results of the vbios execution directly to the
> guest. It turns out the vbios is always present at a hardcoded address
> in a memory range reserved for real mode - all we need to do is memcpy
> it into the guest.
>
> The following patch does two things:
> 1) When pci_assign_dev_load_option_rom fails to read the vbios from
> sysfs (this works only when the igd is not the boot gpu, which is
> unlikely to happen), it falls back to using /dev/mem to copy the vbios
> directly to the guest. Using /dev/mem should be fine, as there is more
> Xen specific pci code which also relies on /dev/mem.

Why bother with sysfs at all if it is unlikely to work?

> 2) When dealing with IGD in the more generic code, we skip the
> allocation of the rom resource. This prevents a malicious guest from
> modifying the vbios on the host, which could otherwise be used to
> attack the i915 driver there (attach igd to guest, guest modifies
> vbios, the guest is terminated and the igd is reattached to the host,
> i915 driver in the host uses data from the modified vbios). It is also
> needed so as not to overwrite the proper shadow copy made earlier.
>
> I've tested this patch and it works fine - the guest isn't complaining
> about the missing vbios tables, and the pci config space in the guest
> looks fine.
>
> Signed-off-by: Grzegorz Uriasz <gorbak25@gmail.com>
> ---
>  hw/xen/xen_pt.c          |  8 +++++--
>  hw/xen/xen_pt_graphics.c | 48 +++++++++++++++++++++++++++++++++++++---
>  hw/xen/xen_pt_load_rom.c |  2 +-
>  3 files changed, 52 insertions(+), 6 deletions(-)
>
> diff --git a/hw/xen/xen_pt.c b/hw/xen/xen_pt.c
> index b91082cb8b..ffc3559dd4 100644
> --- a/hw/xen/xen_pt.c
> +++ b/hw/xen/xen_pt.c
> @@ -483,8 +483,12 @@ static int xen_pt_register_regions(XenPCIPassthroughState *s, uint16_t *cmd)
>                     i, r->size, r->base_addr, type);
>      }
>
> -    /* Register expansion ROM address */
> -    if (d->rom.base_addr && d->rom.size) {
> +    /*
> +     * Register expansion ROM address. If we are dealing with a ROM
> +     * shadow copy for legacy vga devices then don't bother to map it
> +     * as previous code creates a proper shadow copy
> +     */
> +    if (d->rom.base_addr && d->rom.size && !(is_igd_vga_passthrough(d))) {

You don't need the brackets around is_igd_vga_passthrough()

  Paul


>          uint32_t bar_data = 0;
>
>          /* Re-set BAR reported by OS, otherwise ROM can't be read. */
> diff --git a/hw/xen/xen_pt_graphics.c b/hw/xen/xen_pt_graphics.c
> index a3bc7e3921..fe0ef2685c 100644
> --- a/hw/xen/xen_pt_graphics.c
> +++ b/hw/xen/xen_pt_graphics.c
> @@ -129,7 +129,7 @@ int xen_pt_unregister_vga_regions(XenHostPCIDevice *dev)
>      return 0;
>  }
>
> -static void *get_vgabios(XenPCIPassthroughState *s, int *size,
> +static void *get_sysfs_vgabios(XenPCIPassthroughState *s, int *size,
>                         XenHostPCIDevice *dev)
>  {
>      return pci_assign_dev_load_option_rom(&s->dev, size,
> @@ -137,6 +137,45 @@ static void *get_sysfs_vgabios(XenPCIPassthroughState *s, int *size,
>                                            dev->dev, dev->func);
>  }
>
> +static void xen_pt_direct_vbios_copy(XenPCIPassthroughState *s, Error **errp)
> +{
> +    int fd = -1;
> +    void *guest_bios = NULL;
> +    void *host_vbios = NULL;
> +    /* This is always 32 pages in the real mode reserved region */
> +    int bios_size = 32 << XC_PAGE_SHIFT;
> +    int vbios_addr = 0xc0000;
> +
> +    fd = open("/dev/mem", O_RDONLY);
> +    if (fd == -1) {
> +        error_setg(errp, "Can't open /dev/mem: %s", strerror(errno));
> +        return;
> +    }
> +    host_vbios = mmap(NULL, bios_size,
> +            PROT_READ, MAP_ANONYMOUS | MAP_PRIVATE, fd, vbios_addr);
> +    close(fd);
> +
> +    if (host_vbios == MAP_FAILED) {
> +        error_setg(errp, "Failed to mmap host vbios: %s", strerror(errno));
> +        return;
> +    }
> +
> +    memory_region_init_ram(&s->dev.rom, OBJECT(&s->dev),
> +            "legacy_vbios.rom", bios_size, &error_abort);
> +    guest_bios = memory_region_get_ram_ptr(&s->dev.rom);
> +    memcpy(guest_bios, host_vbios, bios_size);
> +
> +    if (munmap(host_vbios, bios_size) == -1) {
> +        XEN_PT_LOG(&s->dev, "Failed to unmap host vbios: %s\n", strerror(errno));
> +    }
> +
> +    cpu_physical_memory_write(vbios_addr, guest_bios, bios_size);
> +    memory_region_set_address(&s->dev.rom, vbios_addr);
> +    pci_register_bar(&s->dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_SPACE_MEMORY, &s->dev.rom);
> +    s->dev.has_rom = true;
> +    XEN_PT_LOG(&s->dev, "Legacy VBIOS registered\n");
> +}
> +
>  /* Refer to Seabios. */
>  struct rom_header {
>      uint16_t signature;
> @@ -179,9 +218,11 @@ void xen_pt_setup_vga(XenPCIPassthroughState *s, XenHostPCIDevice *dev,
>          return;
>      }
>
> -    bios = get_vgabios(s, &bios_size, dev);
> +    bios = get_sysfs_vgabios(s, &bios_size, dev);
>      if (!bios) {
> -        error_setg(errp, "VGA: Can't get VBIOS");
> +        XEN_PT_LOG(&s->dev, "Unable to get host VBIOS from sysfs - "
> +                            "falling back to a direct copy of memory ranges\n");
> +        xen_pt_direct_vbios_copy(s, errp);
>          return;
>      }
>
> @@ -223,6 +264,7 @@ void xen_pt_setup_vga(XenPCIPassthroughState *s, XenHostPCIDevice *dev,
>
>      /* Currently we fixed this address as a primary for legacy BIOS. */
>      cpu_physical_memory_write(0xc0000, bios, bios_size);
> +    XEN_PT_LOG(&s->dev, "Legacy VBIOS registered\n");
>  }
>
>  uint32_t igd_read_opregion(XenPCIPassthroughState *s)
> diff --git a/hw/xen/xen_pt_load_rom.c b/hw/xen/xen_pt_load_rom.c
> index 9f100dc159..8cd9aa84dc 100644
> --- a/hw/xen/xen_pt_load_rom.c
> +++ b/hw/xen/xen_pt_load_rom.c
> @@ -65,7 +65,7 @@ void *pci_assign_dev_load_option_rom(PCIDevice *dev,
>          goto close_rom;
>      }
>
> -    pci_register_bar(dev, PCI_ROM_SLOT, 0, &dev->rom);
> +    pci_register_bar(dev, PCI_ROM_SLOT, PCI_BASE_ADDRESS_SPACE_MEMORY, &dev->rom);
>      dev->has_rom = true;
>      *size = st.st_size;
>  close_rom:
> --
> 2.26.1


From xen-devel-bounces@lists.xenproject.org Mon May 04 10:45:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:45:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYbE-00064b-MI; Mon, 04 May 2020 10:45:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y3Bs=6S=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVYbD-00064R-DY
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:45:23 +0000
X-Inumbo-ID: 5a884974-8df4-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a884974-8df4-11ea-ae69-bc764e2007e4;
 Mon, 04 May 2020 10:45:22 +0000 (UTC)
Date: Mon, 4 May 2020 12:45:12 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Subject: Re: [PATCH] xen: fix build without pci passthrough
Message-ID: <20200504104512.GA1353@Air-de-Roger>
References: <20200504101443.3165-1-roger.pau@citrix.com>
 <ccf11b67-4aaa-5fb2-e23f-674380b47a13@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ccf11b67-4aaa-5fb2-e23f-674380b47a13@redhat.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-devel@nongnu.org,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 04, 2020 at 12:35:39PM +0200, Philippe Mathieu-Daudé wrote:
> Hi Roger,
> 
> On 5/4/20 12:14 PM, Roger Pau Monne wrote:
> > has_igd_gfx_passthru is only available when QEMU is built with
> > CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
> > code without checking if it's available.
> > 
> > Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> See Kconfig fix suggested here:
> https://www.mail-archive.com/xen-devel@lists.xenproject.org/msg61844.html

Having it available in Kconfig is indeed fine, but this still needs
some kind of configure check AFAIK, as it's only available on Linux.

I'm certainly missing some context, but whether XEN_IGD_PASSTHROUGH
gets defined in Kconfig or not shouldn't really matter for this patch,
as we would still need to gate the code properly so it's not built
when PCI passthrough (or whatever name the option has) is not enabled?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 04 10:47:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 10:47:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYdW-0006CJ-4I; Mon, 04 May 2020 10:47:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVYdT-0006CD-Vx
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 10:47:44 +0000
X-Inumbo-ID: ae8d5802-8df4-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae8d5802-8df4-11ea-b07b-bc764e2007e4;
 Mon, 04 May 2020 10:47:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D9951AB3D;
 Mon,  4 May 2020 10:47:43 +0000 (UTC)
Subject: Re: [PATCH for-4.14 3/3] xen/x86: atomic: Don't allow to write
 atomically in a pointer to const
To: Julien Grall <julien@xen.org>
References: <20200502160700.19573-1-julien@xen.org>
 <20200502160700.19573-4-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9270c034-4a12-19b4-459f-45c95c9a5c48@suse.com>
Date: Mon, 4 May 2020 12:47:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200502160700.19573-4-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 18:07, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, write_atomic() will happily write to a pointer to const.
> While there are no such uses in Xen, it would be best to catch them at
> compilation time.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
albeit ...

> --- a/xen/include/asm-x86/atomic.h
> +++ b/xen/include/asm-x86/atomic.h
> @@ -63,6 +63,8 @@ void __bad_atomic_size(void);
>  
>  #define write_atomic(p, x) ({                             \
>      typeof(*(p)) __x = (x);                               \
> +    /* Check that the pointer is not const */             \
> +    void *__maybe_unused p_ = &__x;                       \

... along the lines of the similar case with guest handles I'd
like to suggest for the comment to be more precise: It's not
the pointer's const-ness you're after, but the pointed to
object's. Maybe "Check that the pointer is not to a const
type" or even just "Check that the pointer is not to const"?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 11:03:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 11:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYsp-0000Be-MF; Mon, 04 May 2020 11:03:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7wWN=6S=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jVYso-0000BV-QT
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 11:03:34 +0000
X-Inumbo-ID: e5b10ebc-8df6-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5b10ebc-8df6-11ea-b07b-bc764e2007e4;
 Mon, 04 May 2020 11:03:34 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jVYsm-0006ZE-TQ; Mon, 04 May 2020 11:03:32 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jVYsm-0001wn-MJ; Mon, 04 May 2020 11:03:32 +0000
Subject: Re: [PATCH for-4.14 3/3] xen/x86: atomic: Don't allow to write
 atomically in a pointer to const
To: Jan Beulich <jbeulich@suse.com>
References: <20200502160700.19573-1-julien@xen.org>
 <20200502160700.19573-4-julien@xen.org>
 <9270c034-4a12-19b4-459f-45c95c9a5c48@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <154a6790-3c9a-ed2b-e4df-9fb5c9315d9e@xen.org>
Date: Mon, 4 May 2020 12:03:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <9270c034-4a12-19b4-459f-45c95c9a5c48@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 04/05/2020 11:47, Jan Beulich wrote:
> On 02.05.2020 18:07, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, write_atomic() will happily write to a pointer to const.
>> While there are no such uses in Xen, it would be best to catch them at
>> compilation time.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> albeit ...
> 
>> --- a/xen/include/asm-x86/atomic.h
>> +++ b/xen/include/asm-x86/atomic.h
>> @@ -63,6 +63,8 @@ void __bad_atomic_size(void);
>>   
>>   #define write_atomic(p, x) ({                             \
>>       typeof(*(p)) __x = (x);                               \
>> +    /* Check that the pointer is not const */             \
>> +    void *__maybe_unused p_ = &__x;                       \
> 
> ... along the lines of the similar case with guest handles I'd
> like to suggest for the comment to be more precise: It's not
> the pointer's const-ness you're after, but the pointed to
> object's. Maybe "Check that the pointer is not to a const
> type" or even just "Check that the pointer is not to const"?

I am happy with "Check that the pointer is not to a const type".

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 04 11:10:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 11:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVYzf-0001DO-J7; Mon, 04 May 2020 11:10:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kdZW=6S=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jVYze-0001DJ-FI
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 11:10:38 +0000
X-Inumbo-ID: e1fc48da-8df7-11ea-9d13-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1fc48da-8df7-11ea-9d13-12813bfff9fa;
 Mon, 04 May 2020 11:10:38 +0000 (UTC)
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
Thread-Topic: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from
 the menuconfig directly
Thread-Index: AQHWHvtEfLlMb5EF90y7GzIGxHVxJ6iRncgAgAYL5QA=
Date: Mon, 4 May 2020 11:10:14 +0000
Message-ID: <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
In-Reply-To: <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.60.0.2.5)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <5AAE4E5113AE7D4C9AC5619F07D42E5F@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
> 
>> In order to make easier to experiment, the option EXPERT can now be
>> selected from the menuconfig rather than make command line. This does
>> not change the fact a kernel with EXPERT mode selected will not be
>> security supported.
> 
> Well, if I'm not mis-remembering it was on purpose to make it more
> difficult for people to declare themselves "experts". FAOD I'm not
> meaning to imply I don't see and accept the frustration aspect you
> mention further up. The two need to be carefully weighed against
> one another.

I don’t think we need to make it difficult for people to declare themselves experts, particularly as “all” it means at the moment is, “Can build something which is not security supported”.  People who are building their own hypervisors are already pretty well advanced; I think we can let them shoot themselves in the foot if they want to.

 -George


From xen-devel-bounces@lists.xenproject.org Mon May 04 11:52:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 11:52:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVZdd-0004Xh-SV; Mon, 04 May 2020 11:51:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVZdc-0004Xa-FO
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 11:51:56 +0000
X-Inumbo-ID: a6cc47be-8dfd-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6cc47be-8dfd-11ea-ae69-bc764e2007e4;
 Mon, 04 May 2020 11:51:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 331C8AC11;
 Mon,  4 May 2020 11:51:56 +0000 (UTC)
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: Juergen Gross <jgross@suse.com>
References: <20200430152848.20275-1-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d1b322c2-98d8-b3a3-1f48-2af89cf9407e@suse.com>
Date: Mon, 4 May 2020 13:51:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200430152848.20275-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.04.2020 17:28, Juergen Gross wrote:
> The dirty_cpu field of struct vcpu denotes which cpu still holds data
> of a vcpu. All accesses to this field should be atomic in case the
> vcpu could just be running, as it is accessed without any lock held
> in most cases.
> 
> There are some instances where accesses are not atomically done, and
> even worse where multiple accesses are done when a single one would
> be mandated.
> 
> Correct that in order to avoid potential problems.

Beyond the changes you're making, what about the assignment in
startup_cpu_idle_loop()? And while less important, dump_domains()
also has a use that I think would better be converted for
completeness.

> @@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>      {
>          /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>          flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
> +        ASSERT(read_atomic(&next->dirty_cpu) == VCPU_CPU_CLEAN);

ASSERT(!is_vcpu_dirty_cpu(read_atomic(&next->dirty_cpu))) ?

> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>  
>  void sync_vcpu_execstate(struct vcpu *v)
>  {
> -    if ( v->dirty_cpu == smp_processor_id() )
> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
> +
> +    if ( dirty_cpu == smp_processor_id() )
>          sync_local_execstate();
> -    else if ( vcpu_cpu_dirty(v) )
> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>      {
>          /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>      }
> +    ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu ||
> +           dirty_cpu == VCPU_CPU_CLEAN);

!is_vcpu_dirty_cpu(dirty_cpu) again? Also perhaps flip both
sides of the || (to have the cheaper one first), and maybe

    if ( is_vcpu_dirty_cpu(dirty_cpu) )
        ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu);

as the longer assertion string literal isn't really of that
much extra value.
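[Editor's note: a minimal user-space sketch of the snapshot pattern under review. The `vcpu_stub` type, `VCPU_CPU_CLEAN` value, and `sync_target()` helper are stand-ins invented here, with C11 atomics standing in for Xen's read_atomic(); only the shape (read the racy field once, then use the local copy for both the test and the flush target) mirrors the patch.]

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdbool.h>

/* Stand-ins for Xen's definitions; names and values are illustrative only. */
#define VCPU_CPU_CLEAN  (~0u)

struct vcpu_stub {
    _Atomic unsigned int dirty_cpu;
};

static bool is_vcpu_dirty_cpu(unsigned int cpu)
{
    return cpu != VCPU_CPU_CLEAN;
}

/*
 * The reviewed pattern: snapshot dirty_cpu once with an atomic read, then
 * use that local copy for both the dirtiness test and the flush target,
 * so the two uses can never disagree even if the field changes meanwhile.
 */
static unsigned int sync_target(struct vcpu_stub *v, unsigned int this_cpu)
{
    unsigned int dirty_cpu = atomic_load(&v->dirty_cpu);

    if ( dirty_cpu == this_cpu )
        return this_cpu;            /* local sync path */
    if ( is_vcpu_dirty_cpu(dirty_cpu) )
        return dirty_cpu;           /* remote flush path */
    return VCPU_CPU_CLEAN;          /* nothing to do */
}
```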

However, having stared at it for a while now - is this race
free? I can see this being fine in the (initial) case of
dirty_cpu == smp_processor_id(), but if this is for a foreign
CPU, can't the vCPU have gone back to that same CPU again in
the meantime?

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
>  
>  static inline bool vcpu_cpu_dirty(const struct vcpu *v)
>  {
> -    return is_vcpu_dirty_cpu(v->dirty_cpu);
> +    return is_vcpu_dirty_cpu(read_atomic((unsigned int *)&v->dirty_cpu));

As per your communication with Julien I understand the cast
will go away again.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:30:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:30:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaEk-00082I-L4; Mon, 04 May 2020 12:30:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVaEi-00082C-Sa
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:30:16 +0000
X-Inumbo-ID: 018df5e4-8e03-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 018df5e4-8e03-11ea-ae69-bc764e2007e4;
 Mon, 04 May 2020 12:30:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0B642ABD0;
 Mon,  4 May 2020 12:30:15 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: allow for more fine grained assisted flush
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200430091725.80656-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <56d86cd8-9c0b-fe85-fa2f-5b5a9cf89cef@suse.com>
Date: Mon, 4 May 2020 14:30:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200430091725.80656-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 30.04.2020 11:17, Roger Pau Monne wrote:
> Improve the assisted flush by expanding the interface and allowing for
> more fine grained TLB flushes to be issued using the HVMOP_flush_tlbs
> hypercall. Support for such advanced flushes is signaled in CPUID
> using the XEN_HVM_CPUID_ADVANCED_FLUSH flag.
> 
> The new features make use of the NULL parameter so far passed in the
> hypercall in order to convey extra data to perform more selective
> flushes: a virtual address, an order field, a flags field and finally a
> vCPU bitmap. Note that not all features are implemented as part of
> this patch, but are already added to the interface in order to avoid
> having to introduce yet a new CPUID flag when the new features are
> added.
> 
> The feature currently implemented is the usage of a guest provided
> vCPU bitmap in order to signal which vCPUs require a TLB flush,
> instead of assuming all vCPUs must be flushed. Note that not
> implementing the rest of the features just makes the flush less
> efficient, but it's still correct and safe.

A risk of not supporting these right away is that guest bugs may go
unnoticed for quite some time. Just as a remark, not as a request
to do the implementation right away.

> --- a/xen/arch/x86/guest/xen/xen.c
> +++ b/xen/arch/x86/guest/xen/xen.c
> @@ -326,7 +326,27 @@ static void __init e820_fixup(struct e820map *e820)
>  
>  static int flush_tlb(const cpumask_t *mask, const void *va, unsigned int flags)
>  {
> -    return xen_hypercall_hvm_op(HVMOP_flush_tlbs, NULL);
> +    xen_hvm_flush_tlbs_t op = {
> +        .va = (unsigned long)va,
> +        .order = (flags - 1) & FLUSH_ORDER_MASK,

I consider such an expression as reasonable to use when there's a
single, central place (in flushtlb.c). For uses elsewhere (and
then better mirrored to the original place) I think we want to
have a macro inverting FLUSH_ORDER(), e.g. FLUSH_GET_ORDER().
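[Editor's note: a sketch of what such an inverse macro could look like. The FLUSH_ORDER()/FLUSH_ORDER_MASK encoding shown (order biased by 1, kept in the low byte) mirrors Xen's x86 flushtlb.h, but treat the exact values, and the FLUSH_TLB_GLOBAL bit, as assumptions for illustration.]

```c
#include <assert.h>

/* Assumed encoding, mirroring x86 flushtlb.h: the flush order is stored
 * biased by 1 in the low byte of the flags word. */
#define FLUSH_ORDER_MASK   0xff
#define FLUSH_ORDER(x)     ((x) + 1)

/* The suggested inverse, so call sites need not open-code
 * "(flags - 1) & FLUSH_ORDER_MASK" themselves. */
#define FLUSH_GET_ORDER(f) ((((f) & FLUSH_ORDER_MASK) - 1) & FLUSH_ORDER_MASK)

/* Some unrelated flag bit; value assumed for the demonstration. */
#define FLUSH_TLB_GLOBAL   0x200
```

With this pair, the hypercall wrapper's `.order = (flags - 1) & FLUSH_ORDER_MASK` becomes `.order = FLUSH_GET_ORDER(flags)`, and the encoding stays defined in one place.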

> +        .flags = ((flags & FLUSH_TLB_GLOBAL) ? HVMOP_flush_global : 0) |
> +                 ((flags & FLUSH_VA_VALID) ? HVMOP_flush_va_valid : 0),
> +        .mask_size = BITS_TO_LONGS(nr_cpu_ids),
> +    };
> +    static int8_t __read_mostly advanced_flush = -1;
> +
> +    if ( advanced_flush == -1 )
> +    {
> +        uint32_t eax, ebx, ecx, edx;
> +
> +        ASSERT(xen_cpuid_base);
> +        cpuid(xen_cpuid_base + 4, &eax, &ebx, &ecx, &edx);
> +        advanced_flush = (eax & XEN_HVM_CPUID_ADVANCED_FLUSH) ? 1 : 0;

Use the more conventional (in our code base) !! here? Also use
cpuid_eax(), to avoid several dead locals?
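[Editor's note: the combined suggestion, sketched in user space. The leaf offset and feature bit value are assumptions, and fake_cpuid_eax() is a stand-in for the real cpuid_eax() helper; only the shape (probe once, cache in a static tristate, fold the bit with !!) reflects the review.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical bit position; the patch defines the real value. */
#define XEN_HVM_CPUID_ADVANCED_FLUSH  (1u << 4)

/* Stand-in for cpuid_eax(); pretends the feature is advertised. */
static uint32_t fake_cpuid_eax(uint32_t leaf)
{
    (void)leaf;
    return XEN_HVM_CPUID_ADVANCED_FLUSH;
}

/* Probe CPUID once, cache the answer, and avoid the dead ebx/ecx/edx
 * locals by asking only for eax. */
static bool advanced_flush_supported(uint32_t cpuid_base)
{
    static int8_t advanced_flush = -1;

    if ( advanced_flush == -1 )
        advanced_flush = !!(fake_cpuid_eax(cpuid_base + 4) &
                            XEN_HVM_CPUID_ADVANCED_FLUSH);

    return advanced_flush;
}
```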

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -4011,17 +4011,42 @@ static void hvm_s3_resume(struct domain *d)
>      }
>  }
>  
> -static bool always_flush(void *ctxt, struct vcpu *v)
> +static bool flush_check(void *mask, struct vcpu *v)

const twice?

>  {
> -    return true;
> +    return mask ? test_bit(v->vcpu_id, (unsigned long *)mask) : true;

And a 3rd time?

>  }
>  
> -static int hvmop_flush_tlb_all(void)
> +static int hvmop_flush_tlb(XEN_GUEST_HANDLE_PARAM(xen_hvm_flush_tlbs_t) uop)
>  {
> +    xen_hvm_flush_tlbs_t op;

This could move into the more narrow scope below.

> +    DECLARE_BITMAP(mask, HVM_MAX_VCPUS) = { };
> +    bool valid_mask = false;
> +
>      if ( !is_hvm_domain(current->domain) )
>          return -EINVAL;
>  
> -    return paging_flush_tlb(always_flush, NULL) ? 0 : -ERESTART;
> +    if ( !guest_handle_is_null(uop) )
> +    {
> +        if ( copy_from_guest(&op, uop, 1) )
> +            return -EFAULT;
> +
> +        /*
> +         * TODO: implement support for the other features present in
> +         * xen_hvm_flush_tlbs_t: flushing a specific virtual address and not
> +         * flushing global mappings.
> +         */
> +
> +        if ( op.mask_size > ARRAY_SIZE(mask) )
> +            return -EINVAL;

While a proper safeguard for the implementation, this looks rather
arbitrary from the guest's pov: Bits beyond the guest's vCPU count
aren't meaningful anyway. They could be ignored by not copying here,
rather than by never inspecting them in flush_check(). And ignoring
some bits here would then call for this to be done consistently for
all of them, i.e. not returning -EINVAL.
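[Editor's note: a sketch of the "ignore by not copying" alternative. The constants and the clamped_copy_elems() helper are invented for illustration; the real limits and copy logic live in the Xen sources.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative constants; the real ones come from Xen headers. */
#define HVM_MAX_VCPUS     128
#define BITS_PER_LONG     (8 * sizeof(unsigned long))
#define BITS_TO_LONGS(n)  (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

/*
 * The suggestion, sketched: rather than failing with -EINVAL when the
 * guest passes a mask larger than the implementation's bitmap, copy only
 * as many elements as fit and silently ignore the rest, since bits
 * beyond the vCPU count carry no meaning anyway.
 */
static size_t clamped_copy_elems(uint32_t guest_mask_size)
{
    size_t limit = BITS_TO_LONGS(HVM_MAX_VCPUS);

    return guest_mask_size < limit ? guest_mask_size : limit;
}
```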

> +        if ( copy_from_guest(mask, op.vcpu_mask, op.mask_size) )
> +            return -EFAULT;
> +
> +        valid_mask = true;
> +    }
> +
> +    return paging_flush_tlb(flush_check, valid_mask ? mask : NULL) ? 0
> +                                                                   : -ERESTART;

Just as a suggestion, this might look a little less odd when wrapped
as

    return paging_flush_tlb(flush_check,
                            valid_mask ? mask : NULL) ? 0 : -ERESTART;

But it's really up to you.

> @@ -5017,7 +5042,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
>          break;
>  
>      case HVMOP_flush_tlbs:
> -        rc = guest_handle_is_null(arg) ? hvmop_flush_tlb_all() : -EINVAL;
> +        rc = hvmop_flush_tlb(guest_handle_cast(arg, xen_hvm_flush_tlbs_t));

There's nothing to be passed back so maybe you could even cast to a
const handle here?

> --- a/xen/include/public/hvm/hvm_op.h
> +++ b/xen/include/public/hvm/hvm_op.h
> @@ -99,8 +99,29 @@ DEFINE_XEN_GUEST_HANDLE(xen_hvm_set_pci_link_route_t);
>  
>  #endif /* __XEN_INTERFACE_VERSION__ < 0x00040900 */
>  
> -/* Flushes all VCPU TLBs: @arg must be NULL. */
> +/*
> + * Flushes all VCPU TLBs: @arg can be NULL or xen_hvm_flush_tlbs_t.
> + *
> + * Support for passing a xen_hvm_flush_tlbs_t parameter is signaled in CPUID,
> + * see XEN_HVM_CPUID_ADVANCED_FLUSH.
> + */
>  #define HVMOP_flush_tlbs          5
> +struct xen_hvm_flush_tlbs {
> +    /* Virtual address to be flushed. */
> +    uint64_t va;
> +    uint16_t order;
> +    uint16_t flags;
> +/* Flush global mappings. */
> +#define HVMOP_flush_global      (1u << 0)
> +/* VA for the flush has a valid mapping. */
> +#define HVMOP_flush_va_valid    (1u << 1)
> +    /* Number of uint64_t elements in vcpu_mask. */
> +    uint32_t mask_size;
> +    /* Bitmask of vcpus that should be flushed. */
> +    XEN_GUEST_HANDLE(const_uint64) vcpu_mask;

This will make the structure have different size for 64- and 32-bit
callers. Apart from altp2m interfaces there's no precedent of a
handle in the hvmop structures, so I wonder whether this wouldn't
better be replaced, e.g. by having an effectively variable size
array at the end of the struct. Simply forcing suitable padding,
e.g. by using uint64_aligned_t for the first field, won't help, as
this would lead to 32-bit callers not having suitable control over
the upper 32 bits of what Xen will take as the handle. (And of
course right now both uint64_aligned_t and XEN_GUEST_HANDLE_64 are
Xen+tools constructs only.)
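[Editor's note: one way to get a layout identical for 32- and 64-bit callers, as suggested above, is to replace the embedded handle with a variable-length array at the end of the structure. The struct below is only a sketch of that idea, under an invented name, not the interface that was agreed.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical alternative to embedding XEN_GUEST_HANDLE(const_uint64):
 * an inline, variable-length vCPU bitmap.  Every field has a fixed size
 * and offset, so 32-bit and 64-bit callers see the same binary layout
 * and Xen never interprets guest-uncontrolled upper handle bits.
 */
struct xen_hvm_flush_tlbs_alt {
    uint64_t va;            /* virtual address to flush */
    uint16_t order;
    uint16_t flags;
    uint32_t mask_size;     /* number of uint64_t elements that follow */
    uint64_t vcpu_mask[];   /* flexible array member, mask_size elements */
};
```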

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:34:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:34:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaJD-0008E2-BH; Mon, 04 May 2020 12:34:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QXc9=6S=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jVaJC-0008Dx-HW
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:34:54 +0000
X-Inumbo-ID: a7531d24-8e03-11ea-b07b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7531d24-8e03-11ea-b07b-bc764e2007e4;
 Mon, 04 May 2020 12:34:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588595693;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=VJ9czVXLihgM9UipBJi3mo9l4BtGdOy0Wr/OhyZ+xio=;
 b=hYki99Y2DkGmRZ6dmRiRH4xLOoJmGpSbSCFfV8B2lUQewzQhzC9K0vSM
 gGd+Q8CjySR+9Wgd2lvPlacid4+OATmoHZKlN9ON9i6pGFJia+uAFHA+w
 JsneeW4hGMdwrEhZtFIT1obZgWu/nm8HhUyw7oWIurkU2ieultaZ2SEv+ Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: P0L8QXjcZgfw9moBcyl/BuW8tmnVQo1O4S0NDpS7YEVym5d2YKyz4Ja0sZY9bQk3BjsCD9faC/
 PTsNWTrHuie3YOsLUfonPRfB9kVwFJel4HnP6YXHOT4dUBvwmK9mndbK6bt2Bfa0rbeX7LrQa4
 lFuD5PzjUC+IcSrIn3mbPU69qSTkkhmxbNrsOFJ9bWgAnMhnGnmJkgLAKmMDVY44AJsRdVRvS3
 ygVfe3xL3F+G1SNX3ZoZGcnIiC3NU3zZIIasMWhmjwqUv1q1k8Ksl3I4aNk1Xjh/2cDRnk6qFb
 1Xo=
X-SBRS: 2.7
X-MesageID: 16650907
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,352,1583211600"; d="scan'208";a="16650907"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24240.3047.877655.345428@mariner.uk.xensource.com>
Date: Mon, 4 May 2020 13:34:47 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
In-Reply-To: <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
> > Well, if I'm not mis-remembering it was on purpose to make it more
> > difficult for people to declare themselves "experts". FAOD I'm not
> > meaning to imply I don't see and accept the frustration aspect you
> > mention further up. The two need to be carefully weighed against
> > one another.

Yes, it was on purpose.  However, I had my doubts at the time and
I think experience has shown that this was a mistake.

> I don’t think we need to make it difficult for people to declare
> themselves experts, particularly as “all” it means at the moment is,
> “Can build something which is not security supported”.  People who
> are building their own hypervisors are already pretty well advanced;
> I think we can let them shoot themselves in the foot if they want
> to.

Precisely.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:41:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:41:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaPV-0000iu-Ix; Mon, 04 May 2020 12:41:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WsoH=6S=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jVaPT-0000in-OS
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:41:23 +0000
X-Inumbo-ID: 8f76f472-8e04-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f76f472-8e04-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 12:41:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 53199ABD0;
 Mon,  4 May 2020 12:41:23 +0000 (UTC)
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: Jan Beulich <jbeulich@suse.com>
References: <20200430152848.20275-1-jgross@suse.com>
 <d1b322c2-98d8-b3a3-1f48-2af89cf9407e@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9d4fd1cd-173f-5128-6a73-ac2c6d679f93@suse.com>
Date: Mon, 4 May 2020 14:41:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <d1b322c2-98d8-b3a3-1f48-2af89cf9407e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04.05.20 13:51, Jan Beulich wrote:
> On 30.04.2020 17:28, Juergen Gross wrote:
>> The dirty_cpu field of struct vcpu denotes which cpu still holds data
>> of a vcpu. All accesses to this field should be atomic in case the
>> vcpu could just be running, as it is accessed without any lock held
>> in most cases.
>>
>> There are some instances where accesses are not atomically done, and
>> even worse where multiple accesses are done when a single one would
>> be mandated.
>>
>> Correct that in order to avoid potential problems.
> 
> Beyond the changes you're making, what about the assignment in
> startup_cpu_idle_loop()? And while less important, dump_domains()
> also has a use that I think would better be converted for
> completeness.

startup_cpu_idle_loop() is not important, as it is set before any
scheduling activity might occur on a cpu. But I can change both
instances.

> 
>> @@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
>>       {
>>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>           flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>> +        ASSERT(read_atomic(&next->dirty_cpu) == VCPU_CPU_CLEAN);
> 
> ASSERT(!is_vcpu_dirty_cpu(read_atomic(&next->dirty_cpu))) ?

Yes, this is better.

> 
>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>   
>>   void sync_vcpu_execstate(struct vcpu *v)
>>   {
>> -    if ( v->dirty_cpu == smp_processor_id() )
>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>> +
>> +    if ( dirty_cpu == smp_processor_id() )
>>           sync_local_execstate();
>> -    else if ( vcpu_cpu_dirty(v) )
>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>       {
>>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>       }
>> +    ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu ||
>> +           dirty_cpu == VCPU_CPU_CLEAN);
> 
> !is_vcpu_dirty_cpu(dirty_cpu) again? Also perhaps flip both
> sides of the || (to have the cheaper one first), and maybe
> 
>      if ( is_vcpu_dirty_cpu(dirty_cpu) )
>          ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu);
> 
> as the longer assertion string literal isn't really of that
> much extra value.

I can do that, provided we can be sure the compiler will drop the
test in a non-debug build.

> 
> However, having stared at it for a while now - is this race
> free? I can see this being fine in the (initial) case of
> dirty_cpu == smp_processor_id(), but if this is for a foreign
> CPU, can't the vCPU have gone back to that same CPU again in
> the meantime?

This should never happen. Either the vcpu in question is paused,
or it has been forced off the cpu due to not being allowed to run
there (e.g. affinity has been changed, or cpu is about to be
removed from cpupool). I can add a comment explaining it.

> 
>> --- a/xen/include/xen/sched.h
>> +++ b/xen/include/xen/sched.h
>> @@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
>>   
>>   static inline bool vcpu_cpu_dirty(const struct vcpu *v)
>>   {
>> -    return is_vcpu_dirty_cpu(v->dirty_cpu);
>> +    return is_vcpu_dirty_cpu(read_atomic((unsigned int *)&v->dirty_cpu));
> 
> As per your communication with Julien I understand the cast
> will go away again.

Yes, I think so.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:44:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:44:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaSJ-0000tF-1U; Mon, 04 May 2020 12:44:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVaSH-0000tA-PB
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:44:17 +0000
X-Inumbo-ID: f75eb6c4-8e04-11ea-9d1c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f75eb6c4-8e04-11ea-9d1c-12813bfff9fa;
 Mon, 04 May 2020 12:44:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4276FAF30;
 Mon,  4 May 2020 12:44:18 +0000 (UTC)
Subject: Re: [PATCH 01/16] x86/traps: Drop last_extable_addr
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3ec06cb1-6b31-2fde-6de4-23938acab73a@suse.com>
Date: Mon, 4 May 2020 14:44:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> The only user of this facility is dom_crash_sync_extable() by passing 0 into
> asm_domain_crash_synchronous().  The common error cases are already covered
> with show_page_walk(), leaving only %ss/%fs selector/segment errors in the
> compat case.
> 
> Point at dom_crash_sync_extable in the error message, which is about as good
> as the error hints from other users of asm_domain_crash_synchronous(), and
> drop last_extable_addr.

While I'm not entirely opposed, I'd like you to clarify that you indeed
mean to (mostly) revert your own improvement from 6 or 7 years back
(commit 8e0da8c07f4f). I'm also surprised to find this as part of the
series it's in - in what way does this logic get in the way of CET-SS?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:44:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:44:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaSx-0000xm-As; Mon, 04 May 2020 12:44:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4iI9=6S=gmail.com=tcminyard@srs-us1.protection.inumbo.net>)
 id 1jVaSw-0000xZ-2B
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:44:58 +0000
X-Inumbo-ID: 0f4785cc-8e05-11ea-b07b-bc764e2007e4
Received: from mail-ot1-x343.google.com (unknown [2607:f8b0:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f4785cc-8e05-11ea-b07b-bc764e2007e4;
 Mon, 04 May 2020 12:44:57 +0000 (UTC)
Received: by mail-ot1-x343.google.com with SMTP id j4so8676515otr.11
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 05:44:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:date:from:to:cc:subject:message-id:reply-to:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=yKN6Orp+qv0eDJWOFRBgwbnE7XXmJgLTrkQ1xQ02x5A=;
 b=uXSbf6BiNG1Gcm+nuKiWEP/RxvwTQWMrIgdTL3kKjBDv8WE6ros7gvZfE9zL2USjsG
 0BkUWsY2P5FhaXwpZ6OcsaW7d4Ln9QAyETxkr73sUFDF2fWHHTCVNRUjqYYTUZD6G9XB
 WuhCehuchlG4DHONe0wEDwKbIBudTpcNdjJZ/z2MxB7O3y2ZDMz5gCuQ84hgihZZN7Bu
 0fruxYSpSrsaLTtqcd8cP5mTCMiq9mjCqmLqz3skh1Jvc0t53/COVbdHc+9NyxVf2oUO
 0Oz2D48RKabl6U3QfWLIqCXers6EhjFdijSQt0ytAMinx9ZzsD70J3+a2Kw+9zavg9PG
 ESHA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
 :reply-to:references:mime-version:content-disposition:in-reply-to
 :user-agent;
 bh=yKN6Orp+qv0eDJWOFRBgwbnE7XXmJgLTrkQ1xQ02x5A=;
 b=b0kSQj6Cc/0/O+UOPNq6zGgfMutAKewVEFnS5q6umsGzPWWhPYeKF6ntD477nvgj8/
 6g6RKQYdPMIqbXsaOQvln8ijBk4JdNX3lcjeXKn4Du63UNnzKwm7wKgR+PVc5rgi+nAo
 M1EZzGdkydcYsbzikvC13nue3xN7Boe5CqZz1UvLt+ASA0aI1Y17YKf0p2P9TuJc+LwD
 SNMX5Jui+kBp0DXVWNI37GJnomBWQ8kbWCKHr9zcNGiXJsn2Ll2oTCDZFzibOB9egJgk
 or5CYqzHp6bLHVUVmW+kCoFjqVeFmIKYKRP6FUoPAA/PsN1Ie/r4HNt/sE2MoIS01RPh
 p5Yw==
X-Gm-Message-State: AGi0PuYZTYyp6eis0JHNNX0PPPZaRllGINm6fv3enmaMl4BfzeVAE6c4
 oCJAiCc17R5Ck4cpX7wmug==
X-Google-Smtp-Source: APiQypIOqTOQMS4dBVAstFsVtlfi1AFsQvXTSS8ZJ2tZdhq7OyDzQ+epAuX1MH4cO3IUn/1d+HhhrA==
X-Received: by 2002:a05:6830:1bd4:: with SMTP id
 v20mr12996393ota.353.1588596296537; 
 Mon, 04 May 2020 05:44:56 -0700 (PDT)
Received: from serve.minyard.net (serve.minyard.net. [2001:470:b8f6:1b::1])
 by smtp.gmail.com with ESMTPSA id u197sm2382367oie.7.2020.05.04.05.44.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 04 May 2020 05:44:55 -0700 (PDT)
Received: from minyard.net (unknown
 [IPv6:2001:470:b8f6:1b:8b39:c3f3:f502:5c4e])
 by serve.minyard.net (Postfix) with ESMTPSA id A6F3E180042;
 Mon,  4 May 2020 12:44:54 +0000 (UTC)
Date: Mon, 4 May 2020 07:44:53 -0500
From: Corey Minyard <minyard@acm.org>
To: Julien Grall <julien@xen.org>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Message-ID: <20200504124453.GI9902@minyard.net>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: minyard@acm.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, May 02, 2020 at 06:48:27PM +0100, Julien Grall wrote:
> 
> 
> On 02/05/2020 18:35, Corey Minyard wrote:
> > On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
> > No.  If I am understanding this correctly, all the memory in dom0 below
> > 1GB would have to be physically below 1GB.
> 
> Well, dom0 is directly mapped. In other words, dom0 physical address == host
> physical address. So if we allocate memory below 1GB in Xen, then it will be
> mapped below 1GB in dom0 as well.
> 
> The patch I suggested would try to allocate as much memory as possible below
> 1GB. I believe this should do the trick for you here.

Yes, that does seem to do the trick:

root@raspberrypi4-64-xen:~# xl info
host                   : raspberrypi4-64-xen
release                : 4.19.115-v8
version                : #1 SMP PREEMPT Thu Apr 16 13:53:57 UTC 2020
machine                : aarch64
nr_cpus                : 4
max_cpu_id             : 3
nr_nodes               : 1
cores_per_socket       : 1
threads_per_core       : 1
cpu_mhz                : 54.000
hw_caps                : 00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000
virt_caps              : hvm hap
total_memory           : 3956
free_memory            : 3634
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 13
xen_extra              : .0
xen_version            : 4.13.0
xen_caps               : xen-3.0-aarch64 xen-3.0-armv7l
xen_scheduler          : credit2
xen_pagesize           : 4096
platform_params        : virt_start=0x200000
xen_changeset          : Tue Dec 17 14:19:49 2019 +0000 git:a2e84d8e42-dirty
xen_commandline        : console=dtuart dtuart=/soc/serial@7e215040 sync_console dom0_mem=256M bootscrub=0
cc_compiler            : aarch64-montavista-linux-gcc (GCC) 9.3.0
cc_compile_by          : xen-4.13+gitAUT
cc_compile_domain      : mvista-cgx
cc_compile_date        : 2019-12-17
build_id               : b0e9b4af9d83f67953e1640976f0720452a88f6a
xend_config_format     : 4

Thanks for the fix.

-corey

> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:48:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaW4-0001AC-Qd; Mon, 04 May 2020 12:48:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SbWK=6S=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVaW3-0001A6-Tl
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:48:11 +0000
X-Inumbo-ID: 82b78106-8e05-11ea-b07b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82b78106-8e05-11ea-b07b-bc764e2007e4;
 Mon, 04 May 2020 12:48:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588596491;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=of9Bp0YG2XahqOQbz/YM+020tm4DXyqwFSBcCWLgl3c=;
 b=dBkNoXJqBBieRPVdbDHj6Q5wGktmf3bl+P8rhiHMM1U7wK+BVGfWsG52
 dYbZozwnYJxbSMgppgMJtDf10zGhV35q24pgsdF0X7kDhornycstMQMeC
 sNtOr4/HXiJImT1GBlEnBsaQOiQFDchzgGl1b+5WQTxXG9VTWLWxyODLw k=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: /JSZgeJBN8B6+g9eVWrT5wUa6A+V6EHzivDdQuAznTamy7nFtOybPIWxZwXuys64fD2bDZvocH
 9o/I61CwuckcEWz7H7RCus3C1LldQQc7VuYarJIAEWC/yjK/KHfjOK+Vjao6iBKumxm/U7orYX
 RuAV1R81YcTUFrO1twgNZkPJKhzdn9TtAkjSRTHJ7WygbZue5GgnBOlNoviWSqNIgfJdD2xjrf
 5WE9O+O1NgcMJsuOgBX1cAX6cKIPaeaoOGphoBylOROnnrIoPLTpXhWt1ulDuAEcNz3zT1OMMu
 kQM=
X-SBRS: 2.7
X-MesageID: 16651872
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,352,1583211600"; d="scan'208";a="16651872"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/ucode/intel: Writeback and invalidate caches before
 updating microcode
Date: Mon, 4 May 2020 13:47:52 +0100
Message-ID: <20200504124752.29806-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Ashok Raj <ashok.raj@intel.com>

Updating microcode is less error prone when caches have been flushed, and may
even be required depending on what exactly the microcode is updating. For
example, some of the issues around certain Broadwell parts can be addressed by
doing a full cache flush.

Signed-off-by: Ashok Raj <ashok.raj@intel.com>
Signed-off-by: Borislav Petkov <bp@suse.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
[Linux commit 91df9fdf51492aec9fed6b4cbd33160886740f47, ported to Xen]
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/microcode/intel.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/x86/cpu/microcode/intel.c b/xen/arch/x86/cpu/microcode/intel.c
index a9f4d6e829..d031196d4c 100644
--- a/xen/arch/x86/cpu/microcode/intel.c
+++ b/xen/arch/x86/cpu/microcode/intel.c
@@ -25,6 +25,7 @@
 #include <xen/init.h>
 
 #include <asm/msr.h>
+#include <asm/system.h>
 
 #include "private.h"
 
@@ -267,6 +268,8 @@ static int apply_microcode(const struct microcode_patch *patch)
     if ( microcode_update_match(patch) != NEW_UCODE )
         return -EINVAL;
 
+    wbinvd();
+
     /* write microcode via MSR 0x79 */
     wrmsrl(MSR_IA32_UCODE_WRITE, (unsigned long)patch->data);
     wrmsrl(MSR_IA32_UCODE_REV, 0x0ULL);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon May 04 12:48:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:48:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaWZ-0001Dr-6H; Mon, 04 May 2020 12:48:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVaWX-0001D4-Ui
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:48:41 +0000
X-Inumbo-ID: 94da3392-8e05-11ea-9d1c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94da3392-8e05-11ea-9d1c-12813bfff9fa;
 Mon, 04 May 2020 12:48:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 047FBABD0;
 Mon,  4 May 2020 12:48:41 +0000 (UTC)
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200430152848.20275-1-jgross@suse.com>
 <d1b322c2-98d8-b3a3-1f48-2af89cf9407e@suse.com>
 <9d4fd1cd-173f-5128-6a73-ac2c6d679f93@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eafd8c4a-26a1-9db9-97d7-e78c629e9a0a@suse.com>
Date: Mon, 4 May 2020 14:48:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <9d4fd1cd-173f-5128-6a73-ac2c6d679f93@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04.05.2020 14:41, Jürgen Groß wrote:
> On 04.05.20 13:51, Jan Beulich wrote:
>> On 30.04.2020 17:28, Juergen Gross wrote:
>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>     void sync_vcpu_execstate(struct vcpu *v)
>>>   {
>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>> +
>>> +    if ( dirty_cpu == smp_processor_id() )
>>>           sync_local_execstate();
>>> -    else if ( vcpu_cpu_dirty(v) )
>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>       {
>>>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>       }
>>> +    ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu ||
>>> +           dirty_cpu == VCPU_CPU_CLEAN);
>>
>> !is_vcpu_dirty_cpu(dirty_cpu) again? Also perhaps flip both
>> sides of the || (to have the cheaper one first), and maybe
>>
>>      if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>          ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu);
>>
>> as the longer assertion string literal isn't really of that
>> much extra value.
> 
> I can do that, in case we can be sure the compiler will drop the
> test in case of a non-debug build.

Modern gcc will, afaik; no idea about clang, though.

>> However, having stared at it for a while now - is this race
>> free? I can see this being fine in the (initial) case of
>> dirty_cpu == smp_processor_id(), but if this is for a foreign
>> CPU, can't the vCPU have gone back to that same CPU again in
>> the meantime?
> 
> This should never happen. Either the vcpu in question is paused,
> or it has been forced off the cpu due to not being allowed to run
> there (e.g. its affinity has been changed, or the cpu is about to be
> removed from its cpupool). I can add a comment explaining it.

There is a time window from late in flush_mask() to the assertion
you add. All sorts of things can happen during this window on
other CPUs. IOW what guarantees the vCPU not getting unpaused or
its affinity getting changed yet another time?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:52:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:52:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVaaK-0002BW-CY; Mon, 04 May 2020 12:52:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVaaJ-0002BQ-3V
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:52:35 +0000
X-Inumbo-ID: 1f94ab02-8e06-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f94ab02-8e06-11ea-b07b-bc764e2007e4;
 Mon, 04 May 2020 12:52:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 38604ABD0;
 Mon,  4 May 2020 12:52:35 +0000 (UTC)
Subject: Re: [PATCH] x86/ucode/intel: Writeback and invalidate caches before
 updating microcode
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200504124752.29806-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <961ecc3e-0450-473a-8c85-17c7f9aa827b@suse.com>
Date: Mon, 4 May 2020 14:52:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200504124752.29806-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04.05.2020 14:47, Andrew Cooper wrote:
> From: Ashok Raj <ashok.raj@intel.com>
> 
> Updating microcode is less error prone when caches have been flushed, and may
> even be required depending on what exactly the microcode is updating. For
> example, some of the issues around certain Broadwell parts can be addressed by
> doing a full cache flush.
> 
> Signed-off-by: Ashok Raj <ashok.raj@intel.com>
> Signed-off-by: Borislav Petkov <bp@suse.de>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> [Linux commit 91df9fdf51492aec9fed6b4cbd33160886740f47, ported to Xen]
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon May 04 12:54:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVac5-0002Lf-Om; Mon, 04 May 2020 12:54:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bZZa=6S=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1jVac3-0002LZ-E0
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:54:24 +0000
X-Inumbo-ID: 5fa87390-8e06-11ea-9d1d-12813bfff9fa
Received: from mailout1.w1.samsung.com (unknown [210.118.77.11])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5fa87390-8e06-11ea-9d1d-12813bfff9fa;
 Mon, 04 May 2020 12:54:21 +0000 (UTC)
Received: from eucas1p1.samsung.com (unknown [182.198.249.206])
 by mailout1.w1.samsung.com (KnoxPortal) with ESMTP id
 20200504125420euoutp01cd3261afc5d7b4a137945f489a18d304~L1G2lw0NK2849828498euoutp01A
 for <xen-devel@lists.xenproject.org>; Mon,  4 May 2020 12:54:20 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout1.w1.samsung.com
 20200504125420euoutp01cd3261afc5d7b4a137945f489a18d304~L1G2lw0NK2849828498euoutp01A
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
 s=mail20170921; t=1588596861;
 bh=Ttmk3z/wgaMlZxwZtUmC4+W+gIbK2TZtfskAp6FVn7g=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=M+d1o5ZVhUVN1ZfQPrZfcMnshXvBbdGH9uQSDnzkA1rkF5Fa0TTbDqJeIa2JLYN5a
 ri+j49X6WhbENiIDYSQRM4Zu38uH++zQsA9puDZohEvHoy3o0dQGSx5gGmkOqaAl5V
 jX6UUXADZApBPRcOQoC87q7cSHMnDNZekKucTcDA=
Received: from eusmges2new.samsung.com (unknown [203.254.199.244]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTP id
 20200504125420eucas1p2a81c07be105dda54ab34624f355a272f~L1G2WAnsK2246922469eucas1p2m;
 Mon,  4 May 2020 12:54:20 +0000 (GMT)
Received: from eucas1p2.samsung.com ( [182.198.249.207]) by
 eusmges2new.samsung.com (EUCPMTA) with SMTP id 20.E2.60679.C7010BE5; Mon,  4
 May 2020 13:54:20 +0100 (BST)
Received: from eusmtrp2.samsung.com (unknown [182.198.249.139]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTPA id
 20200504125420eucas1p2387a795af11e62779e8aa7f7673a8562~L1G194XiM2235822358eucas1p2q;
 Mon,  4 May 2020 12:54:20 +0000 (GMT)
Received: from eusmgms1.samsung.com (unknown [182.198.249.179]) by
 eusmtrp2.samsung.com (KnoxPortal) with ESMTP id
 20200504125420eusmtrp2717a6a2fb13d54ea48e921ea03000013~L1G19MkLc2826928269eusmtrp2X;
 Mon,  4 May 2020 12:54:20 +0000 (GMT)
X-AuditID: cbfec7f4-0e5ff7000001ed07-af-5eb0107c4555
Received: from eusmtip2.samsung.com ( [203.254.199.222]) by
 eusmgms1.samsung.com (EUCPMTA) with SMTP id 4C.69.08375.C7010BE5; Mon,  4
 May 2020 13:54:20 +0100 (BST)
Received: from AMDC2765.digital.local (unknown [106.120.51.73]) by
 eusmtip2.samsung.com (KnoxPortal) with ESMTPA id
 20200504125419eusmtip21a00d04f6d566f3a535fb519094872c2~L1G1UE7360241302413eusmtip2Z;
 Mon,  4 May 2020 12:54:19 +0000 (GMT)
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 15/21] drm: xen: fix sg_table nents vs. orig_nents misuse
Date: Mon,  4 May 2020 14:53:53 +0200
Message-Id: <20200504125359.5678-15-m.szyprowski@samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200504125359.5678-1-m.szyprowski@samsung.com>
X-CMS-MailID: 20200504125420eucas1p2387a795af11e62779e8aa7f7673a8562
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200504125420eucas1p2387a795af11e62779e8aa7f7673a8562
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200504125420eucas1p2387a795af11e62779e8aa7f7673a8562
References: <20200504125017.5494-1-m.szyprowski@samsung.com>
 <20200504125359.5678-1-m.szyprowski@samsung.com>
 <CGME20200504125420eucas1p2387a795af11e62779e8aa7f7673a8562@eucas1p2.samsung.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 David Airlie <airlied@linux.ie>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Daniel Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
 linux-arm-kernel@lists.infradead.org,
 Marek Szyprowski <m.szyprowski@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the number of
entries created in the DMA address space. However, the subsequent calls to
dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be made with the original
number of entries passed to dma_map_sg. The sg_table->nents field in turn
holds the result of the dma_map_sg call, as stated in
include/linux/scatterlist.h. Adapt the code to obey those rules.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
For more information, see '[PATCH v2 00/21] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread: https://lkml.org/lkml/2020/5/4/373
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e0..ba4bdc5 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -215,7 +215,7 @@ struct drm_gem_object *
 		return ERR_PTR(ret);
 
 	DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
-		  size, sgt->nents);
+		  size, sgt->orig_nents);
 
 	return &xen_obj->base;
 }
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Mon May 04 12:54:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 12:54:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVac9-0002ME-0q; Mon, 04 May 2020 12:54:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bZZa=6S=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1jVac8-0002Lu-E1
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 12:54:28 +0000
X-Inumbo-ID: 6108c5c8-8e06-11ea-9d1d-12813bfff9fa
Received: from mailout2.w1.samsung.com (unknown [210.118.77.12])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6108c5c8-8e06-11ea-9d1d-12813bfff9fa;
 Mon, 04 May 2020 12:54:24 +0000 (UTC)
Received: from eucas1p2.samsung.com (unknown [182.198.249.207])
 by mailout2.w1.samsung.com (KnoxPortal) with ESMTP id
 20200504125423euoutp0292de7261d76489d5285c5aa08e293d18~L1G43A3CK1779417794euoutp02d
 for <xen-devel@lists.xenproject.org>; Mon,  4 May 2020 12:54:23 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.w1.samsung.com
 20200504125423euoutp0292de7261d76489d5285c5aa08e293d18~L1G43A3CK1779417794euoutp02d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
 s=mail20170921; t=1588596863;
 bh=Q3uTnE0pCgvHZKaralFzdEAhk4lnCxC2sn9hvLGfyXI=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=u8jdqki4WvlAP22BOY0JysA2WwnbFY7k30IQuiD8e7Pg3nRmFZlVZDJCKGR1pPVVA
 9Uf3RD0XZ0F0COg1ni+rkpZRnmEEK6Sf0mlG0k7euQhkAnQKDhGd6UQAn2w5LFM++y
 4CZ0k/kI7ucNo5f23Eho17C8lgyY42aK5XKPqbz4=
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTP id
 20200504125423eucas1p2ebcf5bcb67d9e5a919928ba62f968d29~L1G4f_XI52250922509eucas1p29;
 Mon,  4 May 2020 12:54:23 +0000 (GMT)
Received: from eucas1p1.samsung.com ( [182.198.249.206]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id 7C.12.60698.E7010BE5; Mon,  4
 May 2020 13:54:22 +0100 (BST)
Received: from eusmtrp2.samsung.com (unknown [182.198.249.139]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTPA id
 20200504125422eucas1p206476912d5137bcad804bccbd75ed2f0~L1G4HMxes1833418334eucas1p2H;
 Mon,  4 May 2020 12:54:22 +0000 (GMT)
Received: from eusmgms1.samsung.com (unknown [182.198.249.179]) by
 eusmtrp2.samsung.com (KnoxPortal) with ESMTP id
 20200504125422eusmtrp23b7055eba07f8554b4f4ee1eae479a56~L1G4GhHu82826928269eusmtrp2b;
 Mon,  4 May 2020 12:54:22 +0000 (GMT)
X-AuditID: cbfec7f5-a0fff7000001ed1a-61-5eb0107e72b4
Received: from eusmtip2.samsung.com ( [203.254.199.222]) by
 eusmgms1.samsung.com (EUCPMTA) with SMTP id 3F.69.08375.E7010BE5; Mon,  4
 May 2020 13:54:22 +0100 (BST)
Received: from AMDC2765.digital.local (unknown [106.120.51.73]) by
 eusmtip2.samsung.com (KnoxPortal) with ESMTPA id
 20200504125421eusmtip25b5c1e2266031611705f4c5013c31891~L1G3WvG4w0241002410eusmtip2J;
 Mon,  4 May 2020 12:54:21 +0000 (GMT)
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: [PATCH v2 18/21] xen: gntdev: fix sg_table nents vs. orig_nents misuse
Date: Mon,  4 May 2020 14:53:56 +0200
Message-Id: <20200504125359.5678-18-m.szyprowski@samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200504125359.5678-1-m.szyprowski@samsung.com>
X-Brightmail-Tracker: H4sIAAAAAAAAA0WSWUwTYRSF/WfrUCkOhcQ/uECqYjRhExNHcE14mEQfUBPXWCw6AQIt2qFu
 D4pQFatVFAkIRBqiQQsKlLaixqUFLQpUQDRQRSGgphqiWChRtNhxXN6+e+65OTc3l0SlT/Ew
 MkOVw6pViiwZIcasj785o45Q9fJYj2UZrXc+QeiG0jqc7io5jtFT1vMo3TP+maCv1zxCaMOD
 RLqibwk91jOI0Kahlzj9/E4FQd9o6RfRti/DOF1zM4WeMBcha2YwtZdrAXPPa8AYk/EUwdzy
 DuDM29MOhGm8cpR55RtCmaLeasA0f+nBmLt9uQQz+s6FMWfNRsDUmV9gjMc0Nzlou3jFHjYr
 Yz+rjlm1S5x+rnzd3kvBB0vdVXgumJToAElCail0WLfpQAAppa4B+PG1RgfEfh4D8HTFW0wo
 PAC6Cl0I7+IHLF3FiNCoBrBTX439GzEOngC8i6DioG5ER/AcSh0HsFUfyJtQqgmFlcVuEd8I
 oTbAk/lXf5swagEct53CeZZQK2Fbf+OfuHBYU/8Q5XcN8OuX9AsF2S2CA/WEwEmw0duJChwC
 PzrMIoFnw6nblb83hVQ+gIPOGyKhOAPg87xSILgS4Wvnd4IPQKlFsO5OjCCvhdqvhbhwoyDY
 OxLMy6gfL1hLUEGWwIITUsEdCcscN//F2jq7/6zDQOtwh0i4TzOA2u4CtBCEl/0PMwBgBDNZ
 DadMY7l4FXsgmlMoOY0qLXp3ttIE/B/W5nOMN4H7P1LtgCKBLFCy1VMnl+KK/dwhpR1AEpWF
 Spq0fkmyR3HoMKvOTlFrsljODmaRmGymJL7KvVNKpSly2EyW3cuq/3YRMiAsF6x3f46iSrg5
 F5Nt9nK5ZoMvMXkeYt8RSU6LxV+stgwvj5i0G8zeUUlB6m4T1Yo86/ikqUcSOHTqYlh2HqVJ
 +Pk+KSnKlxNBOMvm9q7cqLWLpzcEbrK+KjZY5PKhCeXmVGX7mjxXrdZ1rGP+G2b9h30tmYqR
 lC33BxrG2vtDZRiXrohbjKo5xS+3tDAHXQMAAA==
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFlrLIsWRmVeSWpSXmKPExsVy+t/xe7p1AhviDDa1K1r0njvJZLFxxnpW
 i4vTW1ks/m+byGxx5et7NouVq48yWSzYb20x56aRxZcrD5ksNj2+xmpxedccNou1R+6yWxz8
 8ITVYvW6eIvvWyYzOfB7rJm3htFj77cFLB6bVnWyeWz/9oDV4373cSaPzUvqPW7/e8zsMfnG
 ckaPwx+usHjsvtnA5vHx6S0Wj74tqxg91m+5yuLxeZNcAF+Unk1RfmlJqkJGfnGJrVK0oYWR
 nqGlhZ6RiaWeobF5rJWRqZK+nU1Kak5mWWqRvl2CXkb/bO+CmYIVM14uYm1g/M3bxcjJISFg
 IrH14lSmLkYuDiGBpYwSK2b/ZINIyEicnNbACmELS/y51sUGUfSJUWLr0y1MIAk2AUOJrrcQ
 CRGBTkaJad0f2UESzAKHmCW2XpcEsYUF/CU+/JoG1sAioCrx9WAn2FReAVuJ03c3M0FskJdY
 veEAcxcjBwcnUHxmrzpIWEggX+Lu038sExj5FjAyrGIUSS0tzk3PLTbUK07MLS7NS9dLzs/d
 xAiMqm3Hfm7ewXhpY/AhRgEORiUe3ojP6+OEWBPLiitzDzFKcDArifDuaAEK8aYkVlalFuXH
 F5XmpBYfYjQFumkis5Rocj4w4vNK4g1NDc0tLA3Njc2NzSyUxHk7BA7GCAmkJ5akZqemFqQW
 wfQxcXBKNTAG5aWJznJNDp2+8r+Hwtx1YuGPTJsOn7bas4vV/Q7j7W+P17545ymbtJXhlLtq
 f8XKkHsxey8IGdx+c3yVVzTPQQeR7c2rTKp7bXv5vY4a2/7Ufh4ovvvKo3mrfnr3Oomnhwtf
 Lubxj9DU4VwlVSnVzPprml+15pHvPyu+zTL7kHJ5QevZoqNKLMUZiYZazEXFiQChzi19wAIA
 AA==
X-CMS-MailID: 20200504125422eucas1p206476912d5137bcad804bccbd75ed2f0
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200504125422eucas1p206476912d5137bcad804bccbd75ed2f0
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200504125422eucas1p206476912d5137bcad804bccbd75ed2f0
References: <20200504125017.5494-1-m.szyprowski@samsung.com>
 <20200504125359.5678-1-m.szyprowski@samsung.com>
 <CGME20200504125422eucas1p206476912d5137bcad804bccbd75ed2f0@eucas1p2.samsung.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
 linux-arm-kernel@lists.infradead.org,
 Marek Szyprowski <m.szyprowski@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
number of entries created in the DMA address space. However, subsequent
calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be made with
the original number of entries passed to dma_map_sg. sg_table->nents in
turn holds the result of the dma_map_sg call, as stated in
include/linux/scatterlist.h. Adapt the code to obey those rules.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
For more information, see '[PATCH v2 00/21] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread: https://lkml.org/lkml/2020/5/4/373
---
 drivers/xen/gntdev-dmabuf.c | 10 ++++++----
 1 file changed, 6 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 75d3bb9..ed749fd 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -248,7 +248,7 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 		if (sgt) {
 			if (gntdev_dmabuf_attach->dir != DMA_NONE)
 				dma_unmap_sg_attrs(attach->dev, sgt->sgl,
-						   sgt->nents,
+						   sgt->orig_nents,
 						   gntdev_dmabuf_attach->dir,
 						   DMA_ATTR_SKIP_CPU_SYNC);
 			sg_free_table(sgt);
@@ -288,8 +288,10 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 	sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages,
 				  gntdev_dmabuf->nr_pages);
 	if (!IS_ERR(sgt)) {
-		if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-				      DMA_ATTR_SKIP_CPU_SYNC)) {
+		sgt->nents = dma_map_sg_attrs(attach->dev, sgt->sgl,
+					      sgt->orig_nents, dir,
+					      DMA_ATTR_SKIP_CPU_SYNC);
+		if (!sgt->nents) {
 			sg_free_table(sgt);
 			kfree(sgt);
 			sgt = ERR_PTR(-ENOMEM);
@@ -625,7 +627,7 @@ static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
 
 	/* Now convert sgt to array of pages and check for page validity. */
 	i = 0;
-	for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+	for_each_sg_page(sgt->sgl, &sg_iter, sgt->orig_nents, 0) {
 		struct page *page = sg_page_iter_page(&sg_iter);
 		/*
 		 * Check if page is valid: this can happen if we are given
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Mon May 04 13:08:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 13:08:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVapb-0003X8-Ad; Mon, 04 May 2020 13:08:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVapa-0003X3-Ai
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 13:08:22 +0000
X-Inumbo-ID: 5431174a-8e08-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5431174a-8e08-11ea-9887-bc764e2007e4;
 Mon, 04 May 2020 13:08:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5D240ABD0;
 Mon,  4 May 2020 13:08:22 +0000 (UTC)
Subject: Re: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
Date: Mon, 4 May 2020 15:08:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> For one, they render the vector in a different base.
> 
> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
> mnemonic, which starts bringing the code/diagnostics in line with the Intel
> and AMD manuals.

For this "bringing in line" purpose I'd like to see whether you could
live with some adjustments to how you're currently doing things:
- NMI is nowhere prefixed by #, hence I think we'd better not do so
  either; may require embedding the #-es in the names[] table, or not
  using N() for NMI
- neither Coprocessor Segment Overrun nor vector 0x0f has a mnemonic,
  and hence I think we shouldn't invent one; just treat them like
  other reserved vectors (of which at least vector 0x09 indeed is one
  on x86-64)?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 13:20:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 13:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVb0z-00054e-DL; Mon, 04 May 2020 13:20:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVb0y-00054Z-FU
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 13:20:08 +0000
X-Inumbo-ID: f7bf6654-8e09-11ea-9d1f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7bf6654-8e09-11ea-9d1f-12813bfff9fa;
 Mon, 04 May 2020 13:20:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 33593AEA1;
 Mon,  4 May 2020 13:20:06 +0000 (UTC)
Subject: Re: [PATCH 03/16] x86/traps: Factor out exception_fixup() and make
 printing consistent
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f7cb696a-5c2c-4aa6-d379-ed77772b7c35@suse.com>
Date: Mon, 4 May 2020 15:20:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -774,10 +774,27 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>            trapnr, vec_name(trapnr), regs->error_code);
>  }
>  
> +static bool exception_fixup(struct cpu_user_regs *regs, bool print)
> +{
> +    unsigned long fixup = search_exception_table(regs);
> +
> +    if ( unlikely(fixup == 0) )
> +        return false;
> +
> +    /* Can currently be triggered by guests.  Make sure we ratelimit. */
> +    if ( IS_ENABLED(CONFIG_DEBUG) && print )

I didn't think we consider dprintk()-s a possible security issue.
Why, then, would we consider a printk() hidden behind
IS_ENABLED(CONFIG_DEBUG) to be one? IOW I think one of XENLOG_GUEST
and IS_ENABLED(CONFIG_DEBUG) wants dropping.

> @@ -1466,12 +1468,11 @@ void do_page_fault(struct cpu_user_regs *regs)
>          if ( pf_type != real_fault )
>              return;
>  
> -        if ( likely((fixup = search_exception_table(regs)) != 0) )
> +        if ( likely(exception_fixup(regs, false)) )
>          {
>              perfc_incr(copy_user_faults);
>              if ( unlikely(regs->error_code & PFEC_reserved_bit) )
>                  reserved_bit_page_fault(addr, regs);
> -            regs->rip = fixup;

I'm afraid this modification can't validly be pulled ahead -
show_execution_state(), as called from reserved_bit_page_fault(),
wants to / should print the original RIP, not the fixed up one.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 13:22:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 13:22:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVb3G-0005DM-UH; Mon, 04 May 2020 13:22:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVb3F-0005DF-Mh
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 13:22:29 +0000
X-Inumbo-ID: 4d6c1034-8e0a-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d6c1034-8e0a-11ea-ae69-bc764e2007e4;
 Mon, 04 May 2020 13:22:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 22D41AEA1;
 Mon,  4 May 2020 13:22:30 +0000 (UTC)
Subject: Re: [PATCH 04/16] x86/smpboot: Write the top-of-stack block in
 cpu_smpboot_alloc()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2051a940-2cca-1e40-17cf-232a0b16f6c3@suse.com>
Date: Mon, 4 May 2020 15:22:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> This allows the AP boot assembly to use per-cpu variables, and brings the
> semantics closer to that of the BSP, which can use per-cpu variables from the
> start of day.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon May 04 13:53:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 13:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVbWt-0007hb-Ap; Mon, 04 May 2020 13:53:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVbWs-0007hW-PB
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 13:53:06 +0000
X-Inumbo-ID: 942213f8-8e0e-11ea-9d23-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 942213f8-8e0e-11ea-9d23-12813bfff9fa;
 Mon, 04 May 2020 13:53:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7F55BAD2C;
 Mon,  4 May 2020 13:53:06 +0000 (UTC)
Subject: Re: [PATCH 05/16] x86/shstk: Introduce Supervisor Shadow Stack support
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d0347fec-3ccb-daa7-5c4d-f0e74b5fb247@suse.com>
Date: Mon, 4 May 2020 15:52:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -34,6 +34,9 @@ config ARCH_DEFCONFIG
>  config INDIRECT_THUNK
>  	def_bool $(cc-option,-mindirect-branch-register)
>  
> +config HAS_AS_CET
> +	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)

I see you add as-instr here as a side effect. Until the other
similar checks get converted, I think for the time being we should
use the old model, to have all such checks in one place. I realize
this means you can't have a Kconfig dependency then.

Also why do you check multiple insns, when just one (like we do
elsewhere) would suffice?

The crucial insns to check are those which got changed pretty
close before the release of 2.29 (in the cover letter you
mention 2.32): One of incssp{d,q}, setssbsy, or saveprevssp.
There weren't official binutils releases with the original
insns, but distros may still have picked up intermediate
snapshots.

> @@ -97,6 +100,20 @@ config HVM
>  
>  	  If unsure, say Y.
>  
> +config XEN_SHSTK
> +	bool "Supervisor Shadow Stacks"
> +	depends on HAS_AS_CET && EXPERT = "y"
> +	default y
> +        ---help---
> +          Control-flow Enforcement Technology (CET) is a set of features in
> +          hardware designed to combat Return-oriented Programming (ROP, also
> +          call/jump COP/JOP) attacks.  Shadow Stacks are one CET feature
> +          designed to provide return address protection.
> +
> +          This option arranges for Xen to use CET-SS for its own protection.
> +          When CET-SS is active, 32bit PV guests cannot be used.  Backwards
> +          compatibility can be provided via the PV Shim mechanism.

Indentation looks odd here - the whole help section should
start with hard tabs, I think.

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -95,6 +95,36 @@ unsigned long __initdata highmem_start;
>  size_param("highmem-start", highmem_start);
>  #endif
>  
> +static bool __initdata opt_xen_shstk = true;
> +
> +static int parse_xen(const char *s)
> +{
> +    const char *ss;
> +    int val, rc = 0;
> +
> +    do {
> +        ss = strchr(s, ',');
> +        if ( !ss )
> +            ss = strchr(s, '\0');
> +
> +        if ( (val = parse_boolean("shstk", s, ss)) >= 0 )
> +        {
> +#ifdef CONFIG_XEN_SHSTK
> +            opt_xen_shstk = val;
> +#else
> +            no_config_param("XEN_SHSTK", "xen", s, ss);
> +#endif
> +        }
> +        else
> +            rc = -EINVAL;
> +
> +        s = ss + 1;
> +    } while ( *ss );
> +
> +    return rc;
> +}
> +custom_param("xen", parse_xen);

What's the idea here going forward, i.e. why the new top level
"xen" option? Almost all options are for Xen itself, after all.
Did you perhaps mean this to be "cet"?

Also you surely meant to document this new option?

> --- a/xen/scripts/Kconfig.include
> +++ b/xen/scripts/Kconfig.include
> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>  # Return y if the linker supports <flag>, n otherwise
>  ld-option = $(success,$(LD) -v $(1))
>  
> +# $(as-instr,<instr>)
> +# Return y if the assembler supports <instr>, n otherwise
> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)

CLANG_FLAGS caught my eye here, then noticing that cc-option
also uses it. Anthony - what's the deal with this? It doesn't
look to get defined anywhere, and I also don't see what's
clang-specific about these constructs.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 13:53:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 13:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVbXN-0007iz-Jl; Mon, 04 May 2020 13:53:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WsoH=6S=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jVbXN-0007is-0B
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 13:53:37 +0000
X-Inumbo-ID: a54d28fd-8e0e-11ea-9d23-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a54d28fd-8e0e-11ea-9d23-12813bfff9fa;
 Mon, 04 May 2020 13:53:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7DABFACD8;
 Mon,  4 May 2020 13:53:36 +0000 (UTC)
Subject: Re: [PATCH v1.1 2/3] xen/sched: fix theoretical races accessing
 vcpu->dirty_cpu
To: Jan Beulich <jbeulich@suse.com>
References: <20200430152848.20275-1-jgross@suse.com>
 <d1b322c2-98d8-b3a3-1f48-2af89cf9407e@suse.com>
 <9d4fd1cd-173f-5128-6a73-ac2c6d679f93@suse.com>
 <eafd8c4a-26a1-9db9-97d7-e78c629e9a0a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <03e06725-a8ae-dfc1-04b3-44963871ee3a@suse.com>
Date: Mon, 4 May 2020 15:53:33 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <eafd8c4a-26a1-9db9-97d7-e78c629e9a0a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04.05.20 14:48, Jan Beulich wrote:
> On 04.05.2020 14:41, Jürgen Groß wrote:
>> On 04.05.20 13:51, Jan Beulich wrote:
>>> On 30.04.2020 17:28, Juergen Gross wrote:
>>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>>      void sync_vcpu_execstate(struct vcpu *v)
>>>>    {
>>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>>> +
>>>> +    if ( dirty_cpu == smp_processor_id() )
>>>>            sync_local_execstate();
>>>> -    else if ( vcpu_cpu_dirty(v) )
>>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>>        {
>>>>            /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>>        }
>>>> +    ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu ||
>>>> +           dirty_cpu == VCPU_CPU_CLEAN);
>>>
>>> !is_vcpu_dirty_cpu(dirty_cpu) again? Also perhaps flip both
>>> sides of the || (to have the cheaper one first), and maybe
>>>
>>>       if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>           ASSERT(read_atomic(&v->dirty_cpu) != dirty_cpu);
>>>
>>> as the longer assertion string literal isn't really of that
>>> much extra value.
>>
>> I can do that, in case we can be sure the compiler will drop the
>> test in case of a non-debug build.
> 
> Modern gcc will afaik; no idea about clang though.

I'll go with both tests inside the ASSERT(), as this will hurt debug
build only.

> 
>>> However, having stared at it for a while now - is this race
>>> free? I can see this being fine in the (initial) case of
>>> dirty_cpu == smp_processor_id(), but if this is for a foreign
>>> CPU, can't the vCPU have gone back to that same CPU again in
>>> the meantime?
>>
>> This should never happen. Either the vcpu in question is paused,
>> or it has been forced off the cpu due to not being allowed to run
>> there (e.g. affinity has been changed, or cpu is about to be
>> removed from cpupool). I can add a comment explaining it.
> 
> There is a time window from late in flush_mask() to the assertion
> you add. All sorts of things can happen during this window on
> other CPUs. IOW what guarantees the vCPU not getting unpaused or
> its affinity getting changed yet another time?

Hmm, very unlikely, but not impossible (especially in the nested virt case).

This makes me wonder whether adding the call to sync_vcpu_execstate() to
sched_unit_migrate_finish() was really a good idea. There might be cases
where it is not necessary (e.g. in the case you are mentioning, but with
a much larger time window), or where it is more expensive than doing it
the lazy way.

Dropping this call to sync_vcpu_execstate() would remove the race you
are mentioning completely.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 04 14:10:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 14:10:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVbnb-00012z-2L; Mon, 04 May 2020 14:10:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVbna-00012t-2S
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 14:10:22 +0000
X-Inumbo-ID: fd466a12-8e10-11ea-9d24-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fd466a12-8e10-11ea-9d24-12813bfff9fa;
 Mon, 04 May 2020 14:10:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E40A7AC6C;
 Mon,  4 May 2020 14:10:21 +0000 (UTC)
Subject: Re: [PATCH 06/16] x86/traps: Implement #CP handler and extend #PF for
 shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-7-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <561914ce-d076-8f1a-e74b-7c37567480a1@suse.com>
Date: Mon, 4 May 2020 16:10:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> @@ -1457,6 +1451,10 @@ void do_page_fault(struct cpu_user_regs *regs)
>      {
>          enum pf_type pf_type = spurious_page_fault(addr, regs);
>  
> +        /* Any fault on a shadow stack access is a bug in Xen. */
> +        if ( error_code & PFEC_shstk )
> +            goto fatal;

Not going through the full spurious_page_fault() in this case
would seem desirable, as would at least a corresponding
adjustment to __page_fault_type(). Perhaps such an adjustment
could then avoid the change (and the need for the goto) here?

> @@ -1906,6 +1905,43 @@ void do_debug(struct cpu_user_regs *regs)
>      pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>  }
>  
> +void do_entry_CP(struct cpu_user_regs *regs)

If there's a plan to change other exception handlers to this naming
scheme too, I can live with the intermediate inconsistency.
Otherwise, though, I'd prefer a name closer to what other handlers
use, e.g. do_control_prot(). Same for the asm entry point then.

> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>          entrypoint 1b
>  
>          /* Reserved exceptions, heading towards do_reserved_trap(). */
> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
> +        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || \
> +                vec == TRAP_virtualisation || (vec > X86_EXC_CP && vec < TRAP_nr)

Use the shorter X86_EXC_VE here, since you don't want/need to
retain what was there before? (I'd be fine with you changing
the two other TRAP_* too at this occasion.)

> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -68,6 +68,7 @@
>  #define PFEC_reserved_bit   (_AC(1,U) << 3)
>  #define PFEC_insn_fetch     (_AC(1,U) << 4)
>  #define PFEC_prot_key       (_AC(1,U) << 5)
> +#define PFEC_shstk         (_AC(1,U) << 6)

One too few padding blanks?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 14:25:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 14:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVc1d-0001z7-CU; Mon, 04 May 2020 14:24:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVc1b-0001z2-Tt
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 14:24:51 +0000
X-Inumbo-ID: 03bc855a-8e13-11ea-9d26-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03bc855a-8e13-11ea-9d26-12813bfff9fa;
 Mon, 04 May 2020 14:24:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D785DAA7C;
 Mon,  4 May 2020 14:24:51 +0000 (UTC)
Subject: Re: [PATCH 07/16] x86/shstk: Re-layout the stack block for shadow
 stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-8-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8b6e03ee-545d-eada-457e-79c183a72d6d@suse.com>
Date: Mon, 4 May 2020 16:24:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-8-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -732,14 +732,14 @@ void load_system_tables(void)
>  		.rsp2 = 0x8600111111111111ul,
>  
>  		/*
> -		 * MCE, NMI and Double Fault handlers get their own stacks.
> +		 * #DB, NMI, DF and #MCE handlers get their own stacks.

Then also #DF and #MC?

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -6002,25 +6002,18 @@ void memguard_unguard_range(void *p, unsigned long l)
>  
>  void memguard_guard_stack(void *p)
>  {
> -    /* IST_MAX IST pages + at least 1 guard page + primary stack. */
> -    BUILD_BUG_ON((IST_MAX + 1) * PAGE_SIZE + PRIMARY_STACK_SIZE > STACK_SIZE);
> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>  
> -    memguard_guard_range(p + IST_MAX * PAGE_SIZE,
> -                         STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
> +    p += 5 * PAGE_SIZE;

The literal 5 here and ...

> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>  }
>  
>  void memguard_unguard_stack(void *p)
>  {
> -    memguard_unguard_range(p + IST_MAX * PAGE_SIZE,
> -                           STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
> -}
> -
> -bool memguard_is_stack_guard_page(unsigned long addr)
> -{
> -    addr &= STACK_SIZE - 1;
> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_RW);
>  
> -    return addr >= IST_MAX * PAGE_SIZE &&
> -           addr < STACK_SIZE - PRIMARY_STACK_SIZE;
> +    p += 5 * PAGE_SIZE;

... here could do with macro-izing: IST_MAX + 1 would already be
a little better, I guess.

Preferably with adjustments along these lines
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 14:46:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 14:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVcMb-0003iZ-7h; Mon, 04 May 2020 14:46:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IZT0=6S=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jVcMa-0003iU-KT
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 14:46:32 +0000
X-Inumbo-ID: 0a6fa942-8e16-11ea-9d2c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a6fa942-8e16-11ea-9d2c-12813bfff9fa;
 Mon, 04 May 2020 14:46:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 926DDAA7C;
 Mon,  4 May 2020 14:46:31 +0000 (UTC)
Message-ID: <598252160488c067b65c51bb91bdd5446ba67989.camel@suse.com>
Subject: Re: [PATCH v3] sched: print information about scheduling granularity
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>, Sergey Dyasli <sergey.dyasli@citrix.com>
Date: Mon, 04 May 2020 16:46:28 +0200
In-Reply-To: <b8f74570-fc9f-61c5-7e52-c2a50e8350dc@suse.com>
References: <20200422093010.12940-1-sergey.dyasli@citrix.com>
 <b8f74570-fc9f-61c5-7e52-c2a50e8350dc@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-kjjKoyzQMaEXrnSEV1t0"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-kjjKoyzQMaEXrnSEV1t0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-04-22 at 12:50 +0200, Jan Beulich wrote:
> On 22.04.2020 11:30, Sergey Dyasli wrote:
> > +struct sched_gran_name {
> > +    enum sched_gran mode;
> > +    const char *name;
> > +};
> > +
> > +static const struct sched_gran_name sg_name[] = {
> > +    {SCHED_GRAN_cpu, "cpu"},
> > +    {SCHED_GRAN_core, "core"},
> > +    {SCHED_GRAN_socket, "socket"},
> > +};
>
> Personally I think that in cases like this one it is more
> (space) efficient to use char[8] or so for the second
> struct member.
>
Yeah, I think that is actually better.

Sergey, can you do it like this?

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-kjjKoyzQMaEXrnSEV1t0
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl6wKsQACgkQFkJ4iaW4
c+72HA//Zg6x3lTdZhoTyGvdeUnZkkcw+m93Vweer8vXzK5OUDEVgB1gaZWrtdkc
0kxeCASHzFXSqAUXEOa9tkyUlXR9Exez3nyvJZGPx//2FMxemrlSZ55X+Cy7kKgZ
DucE9LNF+AXr3NbxdU0x8sJECaH8Ke34GyAYWdDDHpt0HHelbiKWXradPdZsiZzu
kIz2cLXOajNaQebaCfMcklB/0I8sdZCUrvVhkW/UnKyl8oYSyurcoRUOOgMe3bVN
TpNaKZau43hSzpyDZ6KhxrR+em7SyaglHZQ01gyV/wfFDsIfwaB/qvBVu0rOyt1F
2q17KMWlY0MGJQ/e0tzTBbDE/uu3E1I9CnyL81lNhxpR84hSPv/5WWJQG2idl6Ik
KNCTNaeWhmYb4xqShHj4P9bjcsXSQn0B1c6jOxojlsAalFq2KFC6XSjY+jFnx7ZE
9Tq3pGl3uasmVx+LoJVL28kkht+B5OpwoIjZjH53rp0Bq9Y2Ei52HsaQ+arJdjyP
HK3VV33Jt2BL14Sqd08+T5oiPy9hJ3VrpXMJAit2DzWLNlsjH/uICMh6EeItBPzN
WKOvsXJ0ZUpWbIm86/q9oEIa6xKZKLGchXniB8gJlyq9vJ7DNJp8T524u7tYKVPN
f+XHDAFg4c4xvUddGMsLwury7X5kLpDkon0jT4r6O9TTCOyeKmg=
=yrbM
-----END PGP SIGNATURE-----

--=-kjjKoyzQMaEXrnSEV1t0--



From xen-devel-bounces@lists.xenproject.org Mon May 04 14:56:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 14:56:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVcVd-0004bu-5a; Mon, 04 May 2020 14:55:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVcVc-0004bp-Ch
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 14:55:52 +0000
X-Inumbo-ID: 5894f554-8e17-11ea-9d30-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5894f554-8e17-11ea-9d30-12813bfff9fa;
 Mon, 04 May 2020 14:55:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3C469AA7C;
 Mon,  4 May 2020 14:55:52 +0000 (UTC)
Subject: Re: [PATCH 08/16] x86/shstk: Create shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-9-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c31d9d4d-76ea-1391-5f7d-fbb3f47e16ce@suse.com>
Date: Mon, 4 May 2020 16:55:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-9-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -748,6 +748,25 @@ void load_system_tables(void)
>  		.bitmap = IOBMP_INVALID_OFFSET,
>  	};
>  
> +	/* Set up the shadow stack IST. */
> +	if ( cpu_has_xen_shstk ) {

This being a Linux style function, you want to omit the blanks
immediately inside the parentheses, both here and in the for()
below.

> +		unsigned int i;
> +		uint64_t *ist_ssp = this_cpu(tss_page).ist_ssp;
> +
> +		/* Must point at the supervisor stack token. */
> +		ist_ssp[IST_MCE] = stack_top + (IST_MCE * 0x400) - 8;
> +		ist_ssp[IST_NMI] = stack_top + (IST_NMI * 0x400) - 8;
> +		ist_ssp[IST_DB]  = stack_top + (IST_DB  * 0x400) - 8;
> +		ist_ssp[IST_DF]  = stack_top + (IST_DF  * 0x400) - 8;

Introduce a constant for 0x400, to then also be used in the
invocations of write_sss_token()?

> +		/* Poison unused entries. */
> +		for ( i = IST_MAX;
> +		      i < ARRAY_SIZE(this_cpu(tss_page).ist_ssp); ++i )
> +			ist_ssp[i] = 0x8600111111111111ul;

IST_MAX == IST_DF, so you're overwriting one token here.

> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -434,7 +434,8 @@ struct __packed tss64 {
>      uint16_t :16, bitmap;
>  };
>  struct tss_page {
> -    struct tss64 __aligned(PAGE_SIZE) tss;
> +    uint64_t __aligned(PAGE_SIZE) ist_ssp[8];
> +    struct tss64 tss;
>  };

Just curious - any particular reason you put this ahead of the TSS?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 04 15:06:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 15:06:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVcg1-0005Xc-6w; Mon, 04 May 2020 15:06:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NHsq=6S=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVcg0-0005XR-2k
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 15:06:36 +0000
X-Inumbo-ID: d8625fdc-8e18-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8625fdc-8e18-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 15:06:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 13E88ABCC;
 Mon,  4 May 2020 15:06:36 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: extend x86_insn_is_mem_write() coverage
Message-ID: <5bf829b6-c60d-7849-e2a5-f84485849197@suse.com>
Date: Mon, 4 May 2020 17:06:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Several insns were missed when this function was first added. As far as
insns already supported by the emulator go - SMSW and {,V}STMXCSR were
wrongly considered r/o insns so far.

Insns like the VMX, SVM, or CET-SS ones, PTWRITE, or AMD's new SNP ones
are intentionally not covered just yet. VMPTRST is put there just to
complete the respective group.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -11551,13 +11551,39 @@ x86_insn_is_mem_write(const struct x86_e
         break;
 
     case X86EMUL_OPC(0x0f, 0x01):
-        return !(state->modrm_reg & 6); /* SGDT / SIDT */
+        switch ( state->modrm_reg & 7 )
+        {
+        case 0: /* SGDT */
+        case 1: /* SIDT */
+        case 4: /* SMSW */
+            return true;
+        }
+        break;
+
+    case X86EMUL_OPC(0x0f, 0xae):
+        switch ( state->modrm_reg & 7 )
+        {
+        case 0: /* FXSAVE */
+        case 3: /* {,V}STMXCSR */
+        case 4: /* XSAVE */
+        case 6: /* XSAVEOPT */
+            return true;
+        }
+        break;
 
     case X86EMUL_OPC(0x0f, 0xba):
         return (state->modrm_reg & 7) > 4; /* BTS / BTR / BTC */
 
     case X86EMUL_OPC(0x0f, 0xc7):
-        return (state->modrm_reg & 7) == 1; /* CMPXCHG{8,16}B */
+        switch ( state->modrm_reg & 7 )
+        {
+        case 1: /* CMPXCHG{8,16}B */
+        case 4: /* XSAVEC */
+        case 5: /* XSAVES */
+        case 7: /* VMPTRST */
+            return true;
+        }
+        break;
     }
 
     return false;


From xen-devel-bounces@lists.xenproject.org Mon May 04 15:08:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 15:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVci9-0005fb-Mn; Mon, 04 May 2020 15:08:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SbWK=6S=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVci8-0005fV-GI
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 15:08:48 +0000
X-Inumbo-ID: 26ba02c0-8e19-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26ba02c0-8e19-11ea-b9cf-bc764e2007e4;
 Mon, 04 May 2020 15:08:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588604927;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=wjSY2GhoO17AjmZTCXOxGZ+UkDFTIAB/CbmgGVGaI+c=;
 b=gmRlYsxETJDOEcBZwQXlhGwTrqMeTNkHX3UbYD0VnSs1LbDgAMftVKlr
 BoqERimWUPkpLGartVagA+EpacUk3Z2PvMabHwxEkxkHuueQLiYWulL4x
 B785681bJlkRsz059ukYbAwJmvAoZqB/bRjgFbPyTwwRMi0BhLhyVSGk2 U=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: Xz63FMZyf8hWLY7PLjITaPwT89WVg3nUrHNbIzXwYddsEqqDLDpOppoTcPfQrJP/P0SQ2GJXjn
 K5FFOFdBTsS6PTjJKfi1kllQ2wjtF3ut6CVUv1bFU5r+AokzGPS8pKBRHW+54KB3JDlnnVA9pj
 Ji+tvxrh+qFBAgl0IeODq33hzLTTj7DSKUQQ2eNB3vwjQF9lIzKxi6Kc/EUSE517T9jEsY26F8
 1iYCGw1UUtl/wEl0BqFwrgxn/UUgyrET+vxi6qXAJZI0zxHQHrz9C8EgC4sesTThLXYtaLWrHC
 Xxk=
X-SBRS: 2.7
X-MesageID: 17376503
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,352,1583211600"; d="scan'208";a="17376503"
Subject: Re: [PATCH 08/16] x86/shstk: Create shadow stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-9-andrew.cooper3@citrix.com>
 <c31d9d4d-76ea-1391-5f7d-fbb3f47e16ce@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <92816b17-448f-d1ed-3579-393292120565@citrix.com>
Date: Mon, 4 May 2020 16:08:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <c31d9d4d-76ea-1391-5f7d-fbb3f47e16ce@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 15:55, Jan Beulich wrote:
>> +		/* Poison unused entries. */
>> +		for ( i = IST_MAX;
>> +		      i < ARRAY_SIZE(this_cpu(tss_page).ist_ssp); ++i )
>> +			ist_ssp[i] = 0x8600111111111111ul;
> IST_MAX == IST_DF, so you're overwriting one token here.

And failing to poison entry 0.  This was a bad rearrangement when
tidying the series up.

Unfortunately, testing the #DF path isn't terribly easy.

>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -434,7 +434,8 @@ struct __packed tss64 {
>>      uint16_t :16, bitmap;
>>  };
>>  struct tss_page {
>> -    struct tss64 __aligned(PAGE_SIZE) tss;
>> +    uint64_t __aligned(PAGE_SIZE) ist_ssp[8];
>> +    struct tss64 tss;
>>  };
> Just curious - any particular reason you put this ahead of the TSS?

Yes.  Reduced chance of interacting with a buggy IO bitmap offset.

Furthermore, we could do away with most of the IO emulation quirking,
and the #GP path overhead, if we actually constructed a real IO bitmap
for dom0.  That would require using the 8k following the TSS.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 04 16:31:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 16:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVe02-0004vi-2P; Mon, 04 May 2020 16:31:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y3Bs=6S=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVe01-0004vd-1u
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 16:31:21 +0000
X-Inumbo-ID: af0370d4-8e24-11ea-9d37-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id af0370d4-8e24-11ea-9d37-12813bfff9fa;
 Mon, 04 May 2020 16:31:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588609881;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=UJ2gD1f2Ep4coTy/WS32mz12r1T5ew6EbcALSFcpFP4=;
 b=gEigX4dhZggxvcN3Hz7rbD+f1sLgss5P6mk5ymOB5MFag6xE/IaTzcOQ
 ST0vu2mWvyVKHZ0LaKzGuJWCoJyHsAlBeH9jvWoGvDmISfu9upR+iuHFX
 jWKXKJwuUE88674ohZ8VS5u+JZHVe9L4V/jsEgJXuyJHTU558y/ShtSSj s=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: BgK4pZaj3VCW72WVVesuZdmnrMhFY+dHu0CHuL+K/0nvouHf5PtiVRFYCy6xDD1psLrKxHcTcG
 cAozNAqecSOkAtx19eRiyP/erGtdW1yl9kPnl5YGYPbq5/04fPYSop1cWwlPOYVqKN4FRe8PXV
 nUAVo4HZRUupmdzs9qKX8pRZc3oMr1Ja+oLothXfCLf+facTIpNapNzDxC84rr84VkywljusEt
 Fuq+BJWj6r8JMtUTwNTxpJAcgXQqTLJbI0yX6nGaztFtGn0f/6oSEnDnDOlP1plEznflLqpHqh
 m0w=
X-SBRS: 2.7
X-MesageID: 16680042
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,352,1583211600"; d="scan'208";a="16680042"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/hvm: simplify hvm_physdev_op allowance control
Date: Mon, 4 May 2020 18:31:03 +0200
Message-ID: <20200504163103.7798-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

PVHv1 dom0 was given access to all PHYSDEVOP hypercalls, and this
special-casing was not removed when the PVHv1 code was removed. As a
result the switch in hvm_physdev_op was more complicated than
required, and relied on PVHv2 dom0 not having PIRQ support in order
to prevent access to some PV-specific PHYSDEVOPs.

Fix this by moving the default case to the bottom of the switch, since
there's no need for any fall through now. Also remove the hardware
domain check, as all the not explicitly listed PHYSDEVOPs are
forbidden for HVM domains.

Finally tighten the condition to allow usage of
PHYSDEVOP_pci_mmcfg_reserved: apart from having vPCI enabled it should
only be used by the hardware domain. Note that the code in
do_physdev_op is already restricting the call to privileged domains
only, but it can be further restricted to the hardware domain only, as
other privileged domains don't have access to MMCFG regions anyway.

Overall no functional change should arise from this change.

Reported-by: Julien Grall <jgrall@amazon.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/hypercall.c | 12 ++++++------
 1 file changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index 17ba0fe91b..c41c2179c9 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -82,26 +82,26 @@ static long hvm_grant_table_op(
 static long hvm_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
     const struct vcpu *curr = current;
+    const struct domain *currd = curr->domain;
 
     switch ( cmd )
     {
-    default:
-        if ( !is_hardware_domain(curr->domain) )
-            return -ENOSYS;
-        /* fall through */
     case PHYSDEVOP_map_pirq:
     case PHYSDEVOP_unmap_pirq:
     case PHYSDEVOP_eoi:
     case PHYSDEVOP_irq_status_query:
     case PHYSDEVOP_get_free_pirq:
-        if ( !has_pirq(curr->domain) )
+        if ( !has_pirq(currd) )
             return -ENOSYS;
         break;
 
     case PHYSDEVOP_pci_mmcfg_reserved:
-        if ( !has_vpci(curr->domain) )
+        if ( !has_vpci(currd) || !is_hardware_domain(currd) )
             return -ENOSYS;
         break;
+
+    default:
+        return -ENOSYS;
     }
 
     if ( !curr->hcall_compat )
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon May 04 17:12:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 17:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVedt-0008Ev-4X; Mon, 04 May 2020 17:12:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SbWK=6S=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVeds-0008Eq-F9
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 17:12:32 +0000
X-Inumbo-ID: 6faef42a-8e2a-11ea-9d3b-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6faef42a-8e2a-11ea-9d3b-12813bfff9fa;
 Mon, 04 May 2020 17:12:30 +0000 (UTC)
Subject: Re: [PATCH] x86emul: extend x86_insn_is_mem_write() coverage
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <5bf829b6-c60d-7849-e2a5-f84485849197@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f0fea332-6f79-70a3-502a-6e8e3193115e@citrix.com>
Date: Mon, 4 May 2020 18:12:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <5bf829b6-c60d-7849-e2a5-f84485849197@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 16:06, Jan Beulich wrote:
> Several insns were missed when this function was first added. As far as
> insns already supported by the emulator go - SMSW and {,V}STMXCSR were
> wrongly considered r/o insns so far.
>
> Insns like the VMX, SVM, or CET-SS ones, PTWRITE, or AMD's new SNP ones
> are intentionally not covered just yet. VMPTRST is put there just to
> complete the respective group.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 04 17:14:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 17:14:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVefM-0008Ko-G7; Mon, 04 May 2020 17:14:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SbWK=6S=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVefK-0008Kf-Qs
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 17:14:02 +0000
X-Inumbo-ID: a6295cf2-8e2a-11ea-9d3b-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a6295cf2-8e2a-11ea-9d3b-12813bfff9fa;
 Mon, 04 May 2020 17:14:02 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: simplify hvm_physdev_op allowance control
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
References: <20200504163103.7798-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6a06c35c-c7b6-a37b-32ab-07b462ad9c03@citrix.com>
Date: Mon, 4 May 2020 18:13:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200504163103.7798-1-roger.pau@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <jgrall@amazon.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 17:31, Roger Pau Monne wrote:
> PVHv1 dom0 was given access to all PHYSDEVOP hypercalls, and such
> restriction was not removed when PVHv1 code was removed. As a result
> the switch in hvm_physdev_op was more complicated than required, and
> relied on PVHv2 dom0 not having PIRQ support in order to prevent
> access to some PV specific PHYSDEVOPs.
>
> Fix this by moving the default case to the bottom of the switch, since
> there's no need for any fall through now. Also remove the hardware
> domain check, as all PHYSDEVOPs not explicitly listed are forbidden
> for HVM domains.
>
> Finally tighten the condition to allow usage of
> PHYSDEVOP_pci_mmcfg_reserved: apart from having vPCI enabled it should
> only be used by the hardware domain. Note that the code in
> do_physdev_op is already restricting the call to privileged domains
> only, but it can be further restricted to the hardware domain only, as
> other privileged domains don't have access to MMCFG regions anyway.
>
> Overall, no functional change is expected.
>
> Reported-by: Julien Grall <jgrall@amazon.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 04 20:55:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 04 May 2020 20:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVi6p-0000qS-AK; Mon, 04 May 2020 20:54:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pjZ3=6S=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jVi6o-0000qN-Df
 for xen-devel@lists.xenproject.org; Mon, 04 May 2020 20:54:38 +0000
X-Inumbo-ID: 76f9c5a6-8e49-11ea-ae69-bc764e2007e4
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76f9c5a6-8e49-11ea-ae69-bc764e2007e4;
 Mon, 04 May 2020 20:54:37 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id c10so168164qka.4
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 13:54:36 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
In-Reply-To: <20200504124453.GI9902@minyard.net>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 4 May 2020 13:54:24 -0700
Message-ID: <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: minyard@acm.org
Content-Type: multipart/mixed; boundary="0000000000001cfeef05a4d8bdca"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--0000000000001cfeef05a4d8bdca
Content-Type: text/plain; charset="UTF-8"

Hi Julien,

thanks for your patch -- just like Corey I tried it out, and it seems
to work fine and gets me further. At this point I'm pretty sure I'm
past the initial bootstrapping issues and into what can basically be
described as a Xen DMA issue of some kind (so I will likely need
Stefano's help to debug this further). I'm attaching verbose logs, but
the culprit seems to be:

[    2.534292] Unable to handle kernel paging request at virtual
address 000000000026c340
[    2.542373] Mem abort info:
[    2.545257]   ESR = 0x96000004
[    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
[    2.553877]   SET = 0, FnV = 0
[    2.557023]   EA = 0, S1PTW = 0
[    2.560297] Data abort info:
[    2.563258]   ISV = 0, ISS = 0x00000004
[    2.567208]   CM = 0, WnR = 0
[    2.570294] [000000000026c340] user address but active_mm is swapper
[    2.576783] Internal error: Oops: 96000004 [#1] SMP
[    2.581784] Modules linked in:
[    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted 5.6.1-default #9
[    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
[    2.597256] Workqueue: events deferred_probe_work_func
[    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
[    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
[    2.612696] lr : dma_free_attrs+0x98/0xd0
[    2.616827] sp : ffff800011db3970
[    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
[    2.625695] x27: 0000000000001000 x26: ffff000037b68410
[    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
[    2.636583] x23: 0000000000000000 x22: 0000000000000000
[    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
[    2.647461] x19: ffff000037b68410 x18: 0000000000000010
[    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
[    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
[    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
[    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
[    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
[    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
[    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
[    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
[    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
[    2.701899] Call trace:
[    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
[    2.709367]  dma_free_attrs+0x98/0xd0
[    2.713143]  rpi_firmware_property_list+0x1e4/0x240
[    2.718146]  rpi_firmware_property+0x6c/0xb0
[    2.722535]  rpi_firmware_probe+0xf0/0x1e0
[    2.726760]  platform_drv_probe+0x50/0xa0
[    2.730879]  really_probe+0xd8/0x438
[    2.734567]  driver_probe_device+0xdc/0x130
[    2.738870]  __device_attach_driver+0x88/0x108
[    2.743434]  bus_for_each_drv+0x78/0xc8
[    2.747386]  __device_attach+0xd4/0x158
[    2.751337]  device_initial_probe+0x10/0x18
[    2.755649]  bus_probe_device+0x90/0x98
[    2.759590]  deferred_probe_work_func+0x88/0xd8
[    2.764244]  process_one_work+0x1f0/0x3c0
[    2.768369]  worker_thread+0x138/0x570
[    2.772234]  kthread+0x118/0x120
[    2.775571]  ret_from_fork+0x10/0x18
[    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001 (f8626800)
[    2.785492] ---[ end trace 4c435212e349f45f ]---
[    2.793340] usb 1-1: New USB device found, idVendor=2109,
idProduct=3431, bcdDevice= 4.20
[    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
[    2.808297] usb 1-1: Product: USB2.0 Hub
[    2.813710] hub 1-1:1.0: USB hub found
[    2.817117] hub 1-1:1.0: 4 ports detected

This is bailing out right here:
     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125

FWIW (since I modified the source to actually print what was returned
right before it bails) we get:
   buf[1] == 0x800000004
   buf[2] == 0x00000001

Status 0x800000004 is of course suspicious since it is not even listed here:
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14

So it appears that this DMA request path is somehow busted and it
would be really nice to figure out why.

Thanks,
Roman.

On Mon, May 4, 2020 at 5:44 AM Corey Minyard <minyard@acm.org> wrote:
>
> On Sat, May 02, 2020 at 06:48:27PM +0100, Julien Grall wrote:
> >
> >
> > On 02/05/2020 18:35, Corey Minyard wrote:
> > > On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
> > > No.  If I am understanding this correctly, all the memory in dom0 below
> > > 1GB would have to be physically below 1GB.
> >
> > Well, dom0 is directly mapped. In other words, dom0 physical address == host
> > physical address. So if we allocate memory below 1GB in Xen, then it will
> > mapped below 1GB on dom0 as well.
> >
> > The patch I suggested would try to allocate as much as possible memory below
> > 1GB. I believe this should do the trick for you here.
>
> Yes, that does seem to do the trick:
>
> root@raspberrypi4-64-xen:~# xl info
> host                   : raspberrypi4-64-xen
> release                : 4.19.115-v8
> version                : #1 SMP PREEMPT Thu Apr 16 13:53:57 UTC 2020
> machine                : aarch64
> nr_cpus                : 4
> max_cpu_id             : 3
> nr_nodes               : 1
> cores_per_socket       : 1
> threads_per_core       : 1
> cpu_mhz                : 54.000
> hw_caps                : 00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000
> virt_caps              : hvm hap
> total_memory           : 3956
> free_memory            : 3634
> sharing_freed_memory   : 0
> sharing_used_memory    : 0
> outstanding_claims     : 0
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 13
> xen_extra              : .0
> xen_version            : 4.13.0
> xen_caps               : xen-3.0-aarch64 xen-3.0-armv7l
> xen_scheduler          : credit2
> xen_pagesize           : 4096
> platform_params        : virt_start=0x200000
> xen_changeset          : Tue Dec 17 14:19:49 2019 +0000 git:a2e84d8e42-dirty
> xen_commandline        : console=dtuart dtuart=/soc/serial@7e215040 sync_console dom0_mem=256M bootscrub=0
> cc_compiler            : aarch64-montavista-linux-gcc (GCC) 9.3.0
> cc_compile_by          : xen-4.13+gitAUT
> cc_compile_domain      : mvista-cgx
> cc_compile_date        : 2019-12-17
> build_id               : b0e9b4af9d83f67953e1640976f0720452a88f6a
> xend_config_format     : 4
>
> Thanks for the fix.
>
> -corey
>
> >
> > Cheers,
> >
> > --
> > Julien Grall

--0000000000001cfeef05a4d8bdca
Content-Type: application/octet-stream; name="xen.verbose.log"
Content-Disposition: attachment; filename="xen.verbose.log"
Content-Transfer-Encoding: base64
Content-ID: <f_k9syp7qa0>
X-Attachment-Id: f_k9syp7qa0

VXNpbmcgbW9kdWxlcyBwcm92aWRlZCBieSBib290bG9hZGVyIGluIEZEVApYZW4gNC4xNC11bnN0
YWJsZSAoYy9zIFRodSBBcHIgMzAgMTA6NDU6MDkgMjAyMCArMDIwMCBnaXQ6MDEzNWJlOC1kaXJ0
eSkgRUZJIGxvYWRlcgpXYXJuaW5nOiBDb3VsZCBub3QgcXVlcnkgdmFyaWFibGUgc3RvcmU6IDB4
ODAwMDAwMDAwMDAwMDAwMwotIFVBUlQgZW5hYmxlZCAtCi0gQm9vdCBDUFUgYm9vdGluZyAtCi0g
Q3VycmVudCBFTCAwMDAwMDAwOCAtCi0gSW5pdGlhbGl6ZSBDUFUgLQotIFR1cm5pbmcgb24gcGFn
aW5nIC0KLSBSZWFkeSAtCihYRU4pIENoZWNraW5nIGZvciBpbml0cmQgaW4gL2Nob3NlbgooWEVO
KSBSQU06IDAwMDAwMDAwMDAwMDEwMDAgLSAwMDAwMDAwMDA3ZWYxZmZmCihYRU4pIFJBTTogMDAw
MDAwMDAwN2VmMjAwMCAtIDAwMDAwMDAwMDdmMGRmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDA3ZjBl
MDAwIC0gMDAwMDAwMDAyYmM0N2ZmZgooWEVOKSBSQU06IDAwMDAwMDAwMmJjNDgwMDAgLSAwMDAw
MDAwMDJiYzUwZmZmCihYRU4pIFJBTTogMDAwMDAwMDAyYmM1MTAwMCAtIDAwMDAwMDAwMmJkOTVm
ZmYKKFhFTikgUkFNOiAwMDAwMDAwMDJiZDk2MDAwIC0gMDAwMDAwMDAyZDc4MGZmZgooWEVOKSBS
QU06IDAwMDAwMDAwMmQ3ODEwMDAgLSAwMDAwMDAwMDNjOWY2ZmZmCihYRU4pIFJBTTogMDAwMDAw
MDAzYzlmNzAwMCAtIDAwMDAwMDAwM2M5ZjhmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDNjOWZiMDAw
IC0gMDAwMDAwMDAzYzlmZGZmZgooWEVOKSBSQU06IDAwMDAwMDAwM2M5ZmUwMDAgLSAwMDAwMDAw
MDNjYjA4ZmZmCihYRU4pIFJBTTogMDAwMDAwMDAzY2IxMDAwMCAtIDAwMDAwMDAwM2NiMTBmZmYK
KFhFTikgUkFNOiAwMDAwMDAwMDNjYjEyMDAwIC0gMDAwMDAwMDAzY2IxM2ZmZgooWEVOKSBSQU06
IDAwMDAwMDAwM2NiMWIwMDAgLSAwMDAwMDAwMDNjYjFjZmZmCihYRU4pIFJBTTogMDAwMDAwMDAz
Y2IxZTAwMCAtIDAwMDAwMDAwM2RmM2ZmZmYKKFhFTikgUkFNOiAwMDAwMDAwMDNkZjUwMDAwIC0g
MDAwMDAwMDAzZGZmZmZmZgooWEVOKSBSQU06IDAwMDAwMDAwNDAwMDAwMDAgLSAwMDAwMDAwMGZi
ZmZmZmZmCihYRU4pCihYRU4pIE1PRFVMRVswXTogMDAwMDAwMDAyYmM1MTAwMCAtIDAwMDAwMDAw
MmJkOTUwZDggWGVuCihYRU4pIE1PRFVMRVsxXTogMDAwMDAwMDAyYmM0OTAwMCAtIDAwMDAwMDAw
MmJjNTEwMDAgRGV2aWNlIFRyZWUKKFhFTikgTU9EVUxFWzJdOiAwMDAwMDAwMDJiZDlkMDAwIC0g
MDAwMDAwMDAyZDY3YzIwMCBLZXJuZWwKKFhFTikKKFhFTikgQ01ETElORVswMDAwMDAwMDJiZDlk
MDAwXTpjaG9zZW4gZWFybHlwcmludGs9eGVuIGNvbnNvbGU9aHZjMCByb290ZGVsYXk9MTAKKFhF
TikKKFhFTikgQ29tbWFuZCBsaW5lOiBkb20wX21lbT01MTJNCihYRU4pIERvbWFpbiBoZWFwIGlu
aXRpYWxpc2VkCihYRU4pIEJvb3RpbmcgdXNpbmcgRGV2aWNlIFRyZWUKKFhFTikgIC0+IHVuZmxh
dHRlbl9kZXZpY2VfdHJlZSgpCihYRU4pIFVuZmxhdHRlbmluZyBkZXZpY2UgdHJlZToKKFhFTikg
bWFnaWM6IDB4ZDAwZGZlZWQKKFhFTikgc2l6ZTogMHgwMDgwMDAKKFhFTikgdmVyc2lvbjogMHgw
MDAwMTEKKFhFTikgICBzaXplIGlzIDB4MTIzMDAgYWxsb2NhdGluZy4uLgooWEVOKSAgIHVuZmxh
dHRlbmluZyA4MDAwZjdmYzAwMDAuLi4KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgIC0+CihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGFsaWFzZXMgLT4gYWxpYXNlcwooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBjaG9zZW4gLT4gY2hvc2VuCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1vZHVsZUAyYmQ5
ZDAwMCAtPiBtb2R1bGUKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcmVzZXJ2ZWQtbWVtb3J5IC0+
IHJlc2VydmVkLW1lbW9yeQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBsaW51eCxjbWEgLT4gbGlu
dXgsY21hCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHRoZXJtYWwtem9uZXMgLT4gdGhlcm1hbC16
b25lcwooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjcHUtdGhlcm1hbCAtPiBjcHUtdGhlcm1hbAoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0cmlwcyAtPiB0cmlwcwooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBjcHUtY3JpdCAtPiBjcHUtY3JpdAooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjb29saW5n
LW1hcHMgLT4gY29vbGluZy1tYXBzCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNvYyAtPiBzb2MK
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdGltZXJAN2UwMDMwMDAgLT4gdGltZXIKKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgdHhwQDdlMDA0MDAwIC0+IHR4cAooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBjcHJtYW5AN2UxMDEwMDAgLT4gY3BybWFuCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1haWxi
b3hAN2UwMGI4ODAgLT4gbWFpbGJveAooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncGlvQDdlMjAw
MDAwIC0+IGdwaW8KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZHBpX2dwaW8wIC0+IGRwaV9ncGlv
MAooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBlbW1jX2dwaW8yMiAtPiBlbW1jX2dwaW8yMgooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBlbW1jX2dwaW8zNCAtPiBlbW1jX2dwaW8zNAooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBlbW1jX2dwaW80OCAtPiBlbW1jX2dwaW80OAooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBncGNsazBfZ3BpbzQgLT4gZ3BjbGswX2dwaW80CihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGdwY2xrMV9ncGlvNSAtPiBncGNsazFfZ3BpbzUKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
Z3BjbGsxX2dwaW80MiAtPiBncGNsazFfZ3BpbzQyCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdw
Y2xrMV9ncGlvNDQgLT4gZ3BjbGsxX2dwaW80NAooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncGNs
azJfZ3BpbzYgLT4gZ3BjbGsyX2dwaW82CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwY2xrMl9n
cGlvNDMgLT4gZ3BjbGsyX2dwaW80MwooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmMwX2dwaW8w
IC0+IGkyYzBfZ3BpbzAKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjMF9ncGlvMjggLT4gaTJj
MF9ncGlvMjgKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjMF9ncGlvNDQgLT4gaTJjMF9ncGlv
NDQKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjMV9ncGlvMiAtPiBpMmMxX2dwaW8yCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGkyYzFfZ3BpbzQ0IC0+IGkyYzFfZ3BpbzQ0CihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGp0YWdfZ3BpbzIyIC0+IGp0YWdfZ3BpbzIyCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHBjbV9ncGlvMTggLT4gcGNtX2dwaW8xOAooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBw
Y21fZ3BpbzI4IC0+IHBjbV9ncGlvMjgKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2Rob3N0X2dw
aW80OCAtPiBzZGhvc3RfZ3BpbzQ4CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaTBfZ3Bpbzcg
LT4gc3BpMF9ncGlvNwooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGkwX2dwaW8zNSAtPiBzcGkw
X2dwaW8zNQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGkxX2dwaW8xNiAtPiBzcGkxX2dwaW8x
NgooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGkyX2dwaW80MCAtPiBzcGkyX2dwaW80MAooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MF9ncGlvMTQgLT4gdWFydDBfZ3BpbzE0CihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHVhcnQwX2N0c3J0c19ncGlvMTYgLT4gdWFydDBfY3RzcnRzX2dwaW8x
NgooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MF9jdHNydHNfZ3BpbzMwIC0+IHVhcnQwX2N0
c3J0c19ncGlvMzAKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdWFydDBfZ3BpbzMyIC0+IHVhcnQw
X2dwaW8zMgooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MF9ncGlvMzYgLT4gdWFydDBfZ3Bp
bzM2CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQwX2N0c3J0c19ncGlvMzggLT4gdWFydDBf
Y3RzcnRzX2dwaW8zOAooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MV9ncGlvMTQgLT4gdWFy
dDFfZ3BpbzE0CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQxX2N0c3J0c19ncGlvMTYgLT4g
dWFydDFfY3RzcnRzX2dwaW8xNgooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MV9ncGlvMzIg
LT4gdWFydDFfZ3BpbzMyCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQxX2N0c3J0c19ncGlv
MzAgLT4gdWFydDFfY3RzcnRzX2dwaW8zMAooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0MV9n
cGlvNDAgLT4gdWFydDFfZ3BpbzQwCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQxX2N0c3J0
c19ncGlvNDIgLT4gdWFydDFfY3RzcnRzX2dwaW80MgooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBn
cGNsazBfZ3BpbzQ5IC0+IGdwY2xrMF9ncGlvNDkKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGlu
LWdwY2xrIC0+IHBpbi1ncGNsawooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncGNsazFfZ3BpbzUw
IC0+IGdwY2xrMV9ncGlvNTAKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLWdwY2xrIC0+IHBp
bi1ncGNsawooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncGNsazJfZ3BpbzUxIC0+IGdwY2xrMl9n
cGlvNTEKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLWdwY2xrIC0+IHBpbi1ncGNsawooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBpMmMwX2dwaW80NiAtPiBpMmMwX2dwaW80NgooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBwaW4tc2RhIC0+IHBpbi1zZGEKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
cGluLXNjbCAtPiBwaW4tc2NsCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyYzFfZ3BpbzQ2IC0+
IGkyYzFfZ3BpbzQ2CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zZGEgLT4gcGluLXNkYQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc2NsIC0+IHBpbi1zY2wKKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgaTJjM19ncGlvMiAtPiBpMmMzX2dwaW8yCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHBpbi1zZGEgLT4gcGluLXNkYQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc2NsIC0+IHBp
bi1zY2wKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjM19ncGlvNCAtPiBpMmMzX2dwaW80CihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zZGEgLT4gcGluLXNkYQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwaW4tc2NsIC0+IHBpbi1zY2wKKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjNF9n
cGlvNiAtPiBpMmM0X2dwaW82CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zZGEgLT4gcGlu
-sda
(XEN) fixed up name for pin-scl -> pin-scl
(XEN) fixed up name for i2c4_gpio8 -> i2c4_gpio8
(XEN) fixed up name for pin-sda -> pin-sda
(XEN) fixed up name for pin-scl -> pin-scl
(XEN) fixed up name for i2c5_gpio10 -> i2c5_gpio10
(XEN) fixed up name for pin-sda -> pin-sda
(XEN) fixed up name for pin-scl -> pin-scl
(XEN) fixed up name for i2c5_gpio12 -> i2c5_gpio12
(XEN) fixed up name for pin-sda -> pin-sda
(XEN) fixed up name for pin-scl -> pin-scl
(XEN) fixed up name for i2c6_gpio0 -> i2c6_gpio0
(XEN) fixed up name for pin-sda -> pin-sda
(XEN) fixed up name for pin-scl -> pin-scl
(XEN) fixed up name for i2c6_gpio22 -> i2c6_gpio22
(XEN) fixed up name for pin-sda -> pin-sda
(XEN) fixed up name for pin-scl -> pin-scl
(XEN) fixed up name for i2c_slave_gpio8 -> i2c_slave_gpio8
(XEN) fixed up name for pins-i2c-slave -> pins-i2c-slave
(XEN) fixed up name for jtag_gpio48 -> jtag_gpio48
(XEN) fixed up name for pins-jtag -> pins-jtag
(XEN) fixed up name for mii_gpio28 -> mii_gpio28
(XEN) fixed up name for pins-mii -> pins-mii
(XEN) fixed up name for mii_gpio36 -> mii_gpio36
(XEN) fixed up name for pins-mii -> pins-mii
(XEN) fixed up name for pcm_gpio50 -> pcm_gpio50
(XEN) fixed up name for pins-pcm -> pins-pcm
(XEN) fixed up name for pwm0_0_gpio12 -> pwm0_0_gpio12
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm0_0_gpio18 -> pwm0_0_gpio18
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm1_0_gpio40 -> pwm1_0_gpio40
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm0_1_gpio13 -> pwm0_1_gpio13
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm0_1_gpio19 -> pwm0_1_gpio19
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm1_1_gpio41 -> pwm1_1_gpio41
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm0_1_gpio45 -> pwm0_1_gpio45
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm0_0_gpio52 -> pwm0_0_gpio52
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for pwm0_1_gpio53 -> pwm0_1_gpio53
(XEN) fixed up name for pin-pwm -> pin-pwm
(XEN) fixed up name for rgmii_gpio35 -> rgmii_gpio35
(XEN) fixed up name for pin-start-stop -> pin-start-stop
(XEN) fixed up name for pin-rx-ok -> pin-rx-ok
(XEN) fixed up name for rgmii_irq_gpio34 -> rgmii_irq_gpio34
(XEN) fixed up name for pin-irq -> pin-irq
(XEN) fixed up name for rgmii_irq_gpio39 -> rgmii_irq_gpio39
(XEN) fixed up name for pin-irq -> pin-irq
(XEN) fixed up name for rgmii_mdio_gpio28 -> rgmii_mdio_gpio28
(XEN) fixed up name for pins-mdio -> pins-mdio
(XEN) fixed up name for rgmii_mdio_gpio37 -> rgmii_mdio_gpio37
(XEN) fixed up name for pins-mdio -> pins-mdio
(XEN) fixed up name for spi0_gpio46 -> spi0_gpio46
(XEN) fixed up name for pins-spi -> pins-spi
(XEN) fixed up name for spi2_gpio46 -> spi2_gpio46
(XEN) fixed up name for pins-spi -> pins-spi
(XEN) fixed up name for spi3_gpio0 -> spi3_gpio0
(XEN) fixed up name for pins-spi -> pins-spi
(XEN) fixed up name for spi4_gpio4 -> spi4_gpio4
(XEN) fixed up name for pins-spi -> pins-spi
(XEN) fixed up name for spi5_gpio12 -> spi5_gpio12
(XEN) fixed up name for pins-spi -> pins-spi
(XEN) fixed up name for spi6_gpio18 -> spi6_gpio18
(XEN) fixed up name for pins-spi -> pins-spi
(XEN) fixed up name for uart2_gpio0 -> uart2_gpio0
(XEN) fixed up name for pin-tx -> pin-tx
(XEN) fixed up name for pin-rx -> pin-rx
(XEN) fixed up name for uart2_ctsrts_gpio2 -> uart2_ctsrts_gpio2
(XEN) fixed up name for pin-cts -> pin-cts
(XEN) fixed up name for pin-rts -> pin-rts
(XEN) fixed up name for uart3_gpio4 -> uart3_gpio4
(XEN) fixed up name for pin-tx -> pin-tx
(XEN) fixed up name for pin-rx -> pin-rx
(XEN) fixed up name for uart3_ctsrts_gpio6 -> uart3_ctsrts_gpio6
(XEN) fixed up name for pin-cts -> pin-cts
(XEN) fixed up name for pin-rts -> pin-rts
(XEN) fixed up name for uart4_gpio8 -> uart4_gpio8
(XEN) fixed up name for pin-tx -> pin-tx
(XEN) fixed up name for pin-rx -> pin-rx
(XEN) fixed up name for uart4_ctsrts_gpio10 -> uart4_ctsrts_gpio10
(XEN) fixed up name for pin-cts -> pin-cts
(XEN) fixed up name for pin-rts -> pin-rts
(XEN) fixed up name for uart5_gpio12 -> uart5_gpio12
(XEN) fixed up name for pin-tx -> pin-tx
(XEN) fixed up name for pin-rx -> pin-rx
(XEN) fixed up name for uart5_ctsrts_gpio14 -> uart5_ctsrts_gpio14
(XEN) fixed up name for pin-cts -> pin-cts
(XEN) fixed up name for pin-rts -> pin-rts
(XEN) fixed up name for gpioout -> gpioout
(XEN) fixed up name for alt0 -> alt0
(XEN) fixed up name for serial@7e201000 -> serial
(XEN) fixed up name for bluetooth -> bluetooth
(XEN) fixed up name for mmc@7e202000 -> mmc
(XEN) fixed up name for i2s@7e203000 -> i2s
(XEN) fixed up name for spi@7e204000 -> spi
(XEN) fixed up name for i2c@7e205000 -> i2c
(XEN) fixed up name for dpi@7e208000 -> dpi
(XEN) fixed up name for dsi@7e209000 -> dsi
(XEN) fixed up name for aux@7e215000 -> aux
(XEN) fixed up name for serial@7e215040 -> serial
(XEN) fixed up name for spi@7e215080 -> spi
(XEN) fixed up name for spi@7e2150c0 -> spi
(XEN) fixed up name for pwm@7e20c000 -> pwm
(XEN) fixed up name for sdhci@7e300000 -> sdhci
(XEN) fixed up name for wifi@1 -> wifi
(XEN) fixed up name for hvs@7e400000 -> hvs
(XEN) fixed up name for dsi@7e700000 -> dsi
(XEN) fixed up name for i2c@7e804000 -> i2c
(XEN) fixed up name for vec@7e806000 -> vec
(XEN) fixed up name for usb@7e980000 -> usb
(XEN) fixed up name for local_intc@40000000 -> local_intc
(XEN) fixed up name for interrupt-controller@40041000 -> interrupt-controller
(XEN) fixed up name for avs-monitor@7d5d2000 -> avs-monitor
(XEN) fixed up name for thermal -> thermal
(XEN) fixed up name for dma@7e007000 -> dma
(XEN) fixed up name for watchdog@7e100000 -> watchdog
(XEN) fixed up name for rng@7e104000 -> rng
(XEN) fixed up name for serial@7e201400 -> serial
(XEN) fixed up name for serial@7e201600 -> serial
(XEN) fixed up name for serial@7e201800 -> serial
(XEN) fixed up name for serial@7e201a00 -> serial
(XEN) fixed up name for spi@7e204600 -> spi
(XEN) fixed up name for spi@7e204800 -> spi
(XEN) fixed up name for spi@7e204a00 -> spi
(XEN) fixed up name for spi@7e204c00 -> spi
(XEN) fixed up name for i2c@7e205600 -> i2c
(XEN) fixed up name for i2c@7e205800 -> i2c
(XEN) fixed up name for i2c@7e205a00 -> i2c
(XEN) fixed up name for i2c@7e205c00 -> i2c
(XEN) fixed up name for pwm@7e20c800 -> pwm
(XEN) fixed up name for emmc2@7e340000 -> emmc2
(XEN) fixed up name for firmware -> firmware
(XEN) fixed up name for gpio -> gpio
(XEN) fixed up name for power -> power
(XEN) fixed up name for mailbox@7e00b840 -> mailbox
(XEN) fixed up name for clocks -> clocks
(XEN) fixed up name for clk-osc -> clk-osc
(XEN) fixed up name for clk-usb -> clk-usb
(XEN) fixed up name for phy -> phy
(XEN) fixed up name for arm-pmu -> arm-pmu
(XEN) fixed up name for timer -> timer
(XEN) fixed up name for cpus -> cpus
(XEN) fixed up name for cpu@0 -> cpu
(XEN) fixed up name for cpu@1 -> cpu
(XEN) fixed up name for cpu@2 -> cpu
(XEN) fixed up name for cpu@3 -> cpu
(XEN) fixed up name for scb -> scb
(XEN) fixed up name for pcie@7d500000 -> pcie
(XEN) fixed up name for ethernet@7d580000 -> ethernet
(XEN) fixed up name for mdio@e14 -> mdio
(XEN) fixed up name for ethernet-phy@1 -> ethernet-phy
(XEN) fixed up name for leds -> leds
(XEN) fixed up name for act -> act
(XEN) fixed up name for pwr -> pwr
(XEN) fixed up name for wifi-pwrseq -> wifi-pwrseq
(XEN) fixed up name for sd_io_1v8_reg -> sd_io_1v8_reg
(XEN)  <- unflatten_device_tree()
(XEN) adding DT alias:serial0: stem=serial id=0 node=/soc/serial@7e201000
(XEN) adding DT alias:serial1: stem=serial id=1 node=/soc/serial@7e215040
(XEN) adding DT alias:ethernet0: stem=ethernet id=0 node=/scb/ethernet@7d580000
(XEN) adding DT alias:pcie0: stem=pcie id=0 node=/scb/pcie@7d500000
(XEN) Platform: Raspberry Pi 4
(XEN) Taking dtuart configuration from /chosen/stdout-path
(XEN) Looking for dtuart at "serial1", options "115200n8"
(XEN) DT: ** translation for device /soc/serial@7e215040 **
(XEN) DT: bus is default (na=1, ns=1) on /soc
(XEN) DT: translating address:<3> 7e215040<3>
(XEN) DT: parent bus is default (na=2, ns=1) on /
(XEN) DT: walking ranges...
(XEN) DT: default map, cp=7e000000, s=1800000, da=7e215040
(XEN) DT: parent translation for:<3> 00000000<3> fe000000<3>
(XEN) DT: with offset: 215040
(XEN) DT: one level translation:<3> 00000000<3> fe215040<3>
(XEN) DT: reached root node
(XEN) dt_device_get_raw_irq: dev=/soc/serial@7e215040, index=0
(XEN)  using 'interrupts' property
(XEN)  intspec=0 intlen=3
(XEN)  intsize=3 intlen=3
(XEN) dt_irq_map_raw: par=/soc/interrupt-controller@40041000,intspec=[0x00000000 0x0000005d...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/soc/interrupt-controller@40041000, size=3
(XEN)  -> addrsize=1
(XEN)  -> got Xen 4.14-unstable
(XEN) Xen version 4.14-unstable (@) (gcc (Alpine 6.4.0) 6.4.0) debug=y  Mon May  4 18:52:27 UTC 2020
(XEN) Latest ChangeSet: Thu Apr 30 10:45:09 2020 +0200 git:0135be8-dirty
(XEN) build-id: 21eda96d52ddc24d0e4988b41bd4e7b7045714a6
(XEN) Processor: 410fd083: "ARM Limited", variant: 0x0, part 0xd08, rev 0x3
(XEN) 64-bit Execution:
(XEN)   Processor Features: 0000000000002222 0000000000000000
(XEN)     Exception Levels: EL3:64+32 EL2:64+32 EL1:64+32 EL0:64+32
(XEN)     Extensions: FloatingPoint AdvancedSIMD
(XEN)   Debug Features: 0000000010305106 0000000000000000
(XEN)   Auxiliary Features: 0000000000000000 0000000000000000
(XEN)   Memory Model Features: 0000000000001124 0000000000000000
(XEN)   ISA Features:  0000000000010000 0000000000000000
(XEN) 32-bit Execution:
(XEN)   Processor Features: 00000131:00011011
(XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
(XEN)     Extensions: GenericTimer Security
(XEN)   Debug Features: 03010066
(XEN)   Auxiliary Features: 00000000
(XEN)   Memory Model Features: 10201105 40000000 01260000 02102211
(XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00010001
(XEN) SMP: Allowing 4 CPUs
(XEN) enabled workaround for: ARM erratum 1319537
(XEN) dt_device_get_raw_irq: dev=/timer, index=0
(XEN)  using 'interrupts' property
(XEN)  intspec=1 intlen=12
(XEN)  intsize=3 intlen=12
(XEN) dt_irq_map_raw: par=/soc/interrupt-controller@40041000,intspec=[0x00000001 0x0000000d...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/soc/interrupt-controller@40041000, size=3
(XEN)  -> addrsize=1
(XEN)  -> got it !
(XEN) dt_device_get_raw_irq: dev=/timer, index=1
(XEN)  using 'interrupts' property
(XEN)  intspec=1 intlen=12
(XEN)  intsize=3 intlen=12
(XEN) dt_irq_map_raw: par=/soc/interrupt-controller@40041000,intspec=[0x00000001 0x0000000e...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/soc/interrupt-controller@40041000, size=3
(XEN)  -> addrsize=1
(XEN)  -> got it !
(XEN) dt_device_get_raw_irq: dev=/timer, index=2
(XEN)  using 'interrupts' property
(XEN)  intspec=1 intlen=12
(XEN)  intsize=3 intlen=12
(XEN) dt_irq_map_raw: par=/soc/interrupt-controller@40041000,intspec=[0x00000001 0x0000000b...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/soc/interrupt-controller@40041000, size=3
(XEN)  -> addrsize=1
(XEN)  -> got it !
(XEN) dt_device_get_raw_irq: dev=/timer, index=3
(XEN)  using 'interrupts' property
(XEN)  intspec=1 intlen=12
(XEN)  intsize=3 intlen=12
(XEN) dt_irq_map_raw: par=/soc/interrupt-controller@40041000,intspec=[0x00000001 0x0000000a...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/soc/interrupt-controller@40041000, size=3
(XEN)  -> addrsize=1
(XEN)  -> got it !
(XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 54000 KHz
(XEN) DT: ** translation for device /soc/interrupt-controller@40041000 **
(XEN) DT: bus is default (na=1, ns=1) on /soc
(XEN) DT: translating address:<3> 40041000<3>
(XEN) DT: parent bus is default (na=2, ns=1) on /
(XEN) DT: walking ranges...
(XEN) DT: default map, cp=7e000000, s=1800000, da=40041000
(XEN) DT: default map, cp=7c000000, s=2000000, da=40041000
(XEN) DT: default map, cp=40000000, s=800000, da=40041000
(XEN) DT: parent translation for:<3> 00000000<3> ff800000<3>
(XEN) DT: with offset: 41000
(XEN) DT: one level translation:<3> 00000000<3> ff841000<3>
(XEN) DT: reached root node
(XEN) DT: ** translation for device /soc/interrupt-controller@40041000 **
(XEN) DT: bus is default (na=1, ns=1) on /soc
(XEN) DT: translating address:<3> 40042000<3>
(XEN) DT: parent bus is default (na=2, ns=1) on /
(XEN) DT: walking ranges...
(XEN) DT: default map, cp=7e000000, s=1800000, da=40042000
(XEN) DT: default map, cp=7c000000, s=2000000, da=40042000
(XEN) DT: default map, cp=40000000, s=800000, da=40042000
(XEN) DT: parent translation for:<3> 00000000<3> ff800000<3>
(XEN) DT: with offset: 42000
(XEN) DT: one level translation:<3> 00000000<3> ff842000<3>
(XEN) DT: reached root node
(XEN) DT: ** translation for device /soc/interrupt-controller@40041000 **
(XEN) DT: bus is default (na=1, ns=1) on /soc
(XEN) DT: translating address:<3> 40044000<3>
(XEN) DT: parent bus is default (na=2, ns=1) on /
(XEN) DT: walking ranges...
(XEN) DT: default map, cp=7e000000, s=1800000, da=40044000
(XEN) DT: default map, cp=7c000000, s=2000000, da=40044000
(XEN) DT: default map, cp=40000000, s=800000, da=40044000
(XEN) DT: parent translation for:<3> 00000000<3> ff800000<3>
(XEN) DT: with offset: 44000
(XEN) DT: one level translation:<3> 00000000<3> ff844000<3>
(XEN) DT: reached root node
(XEN) DT: ** translation for device /soc/interrupt-controller@40041000 **
(XEN) DT: bus is default (na=1, ns=1) on /soc
(XEN) DT: translating address:<3> 40046000<3>
(XEN) DT: parent bus is default (na=2, ns=1) on /
(XEN) DT: walking ranges...
(XEN) DT: default map, cp=7e000000, s=1800000, da=40046000
(XEN) DT: default map, cp=7c000000, s=2000000, da=40046000
(XEN) DT: default map, cp=40000000, s=800000, da=40046000
(XEN) DT: parent translation for:<3> 00000000<3> ff800000<3>
(XEN) DT: with offset: 46000
(XEN) DT: one level translation:<3> 00000000<3> ff846000<3>
(XEN) DT: reached root node
(XEN) dt_device_get_raw_irq: dev=/soc/interrupt-controller@40041000, index=0
(XEN)  using 'interrupts' property
(XEN)  intspec=1 intlen=3
(XEN)  intsize=3 intlen=3
(XEN) dt_irq_map_raw: par=/soc/interrupt-controller@40041000,intspec=[0x00000001 0x00000009...],ointsize=3
(XEN) dt_irq_map_raw: ipar=/soc/interrupt-controller@40041000, size=3
(XEN)  -> addrsize=1
(XEN)  -> got it !
(XEN) GICv2 initialization:
(XEN)         gic_dist_addr=00000000ff841000
(XEN)         gic_cpu_addr=00000000ff842000
(XEN)         gic_hyp_addr=00000000ff844000
(XEN)         gic_vcpu_addr=00000000ff846000
(XEN)         gic_maintenance_irq=25
(XEN) GICv2: 256 lines, 4 cpus, secure (IID 0200143b).
(XEN) XSM Framework v1.0.0 initialized
(XEN) Initialising XSM SILO mode
(XEN) Using scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Initializing Credit2 scheduler
(XEN)  load_precision_shift: 18
(XEN)  load_window_shift: 30
(XEN)  underload_balance_tolerance: 0
(XEN)  overload_balance_tolerance: -3
(XEN)  runqueues arrangement: socket
(XEN)  cap enforcement granularity: 10ms
(XEN) load tracking window length 1073741824 ns
(XEN) Allocated console ring of 32 KiB.
(XEN) CPU0: Guest atomics will try 6 times before pausing the domain
(XEN) Bringing up CPU1
- CPU 00000001 booting -
- Current EL 00000008 -
- Initialize CPU -
- Turning on paging -
- Ready -
(XEN) CPU1: Guest atomics will try 5 times before pausing the domain
(XEN) CPU 1 booted.
(XEN) Bringing up CPU2
- CPU 00000002 booting -
- Current EL 00000008 -
- Initialize CPU -
- Turning on paging -
- Ready -
(XEN) CPU2: Guest atomics will try 5 times before pausing the domain
(XEN) CPU 2 booted.
(XEN) Bringing up CPU3
- CPU 00000003 booting -
- Current EL 00000008 -
- Initialize CPU -
- Turning on paging -
- Ready -
(XEN) CPU atomicsy 5 time pausingain
(XE booted.Brought s
(XEN)tualisatbled
(X 44-bit  44-bit -bit VMI P2M: 4 ith orde, VTCR 04
(XEN)cpu 0 toe 0
(XEt cpu one, activXEN) Add1 to run
                (XEN) Au 2 to r0
(XEN)cpu 3 toe 0
(XEnatives:g with a 00000008 -> 000d4ccc
( LOADING0 ***
(ding d0 rom boot@ 00000000
rant tab: 0x000000-0x000000o(XEN) BANK[0] 0x000000-0x000000 ((XEN) BA00000030x0000003(128MB)
(XEe /
(XEq_number
           (XEN) /ough = 1 0
(XENif / is he IOMMU it
(XEq_number
           (XEN) hliases
_irq_num=/aliase /aliaserough = = 0
(XE if /alibehind t and addN) dt_ir: dev=/a(XEN) haosen
(Xrq_numbechosen
hosen pah = 1 na
                (XEN) C/chosen d the IOadd it
_irq_num=/chosenhandle /odule@2b(XEN)   (matched handle d-memorydt_irq_nev=/reseory
(XErved-memthrough r = 0
(ck if /rmemory i the IOMdd it
(irq_numb/reserve
(XEN) reservedlinux,cm dt_irq_dev=/resmory/lin(XEN) /rmemory/l passthr naddr =) Check rved-memx,cma isthe IOMMd it
(Xrq_numbereservedlinux,cm handle -zones
_irq_num=/therma
                (XEN) /zones pah = 1 na
                (XEN) C/thermals behindMU and aXEN) dt_er: dev=-zones
ndle /thnes/cpu-
                (XEN) dmber: deal-zonesrmal
(Xrmal-zonhermal pgh = 1 n
(XEN)  /thermacpu-therehind thand add ) dt_irq dev=/thnes/cpu-
                (XEN) hhermal-z-thermal(XEN) dtber: devl-zones/mal/trip /thermacpu-thers passth1 naddr N) Checkrmal-zonhermal/tbehind t and addN) dt_ir: dev=/tones/cpu/trips
ndle /thnes/cpu-trips/cp(XEN) dtber: devl-zones/mal/tripit
(XENal-zonesrmal/tririt pass= 1 naddXEN) Chehermal-z-thermalpu-crit d the IOadd it
_irq_num=/thermacpu-thers/cpu-cr) handlel-zones/mal/cool
(XEN) umber: dmal-zoneermal/cops
(XENal-zonesrmal/coos passth1 naddr N) Checkrmal-zonhermal/caps is be IOMMU it
(XEN_number:ermal-zothermal/maps
t_irq_nuv=/soc
oc passt 1 naddrEN) Checc is behIOMMU an
(XEN) umber: d
              (XEN) hoc/timer0
(XEN)number: /timer@7
                     (XEN)  nterruptrty
(XEpec=0 in
           (XEN)  3 intlenN) dt_de_raw_irqoc/timer0, index)  usingupts' pr(XEN)  i intlen=)  intsilen=12
_irq_mapr=/soc/i-control1000,int00000000040...],=3
(XEN_map_rawsoc/intentroller0, size=  -> add
                (XEN)  t !
(XEvice_get: dev=/s@7e00300=1
(XEN 'interroperty
ntspec=012
(XENze=3 int(XEN) dt_raw: panterruptler@4004spec=[0x 0x00000ointsize) dt_irq: ipar=/rrupt-co@40041003
-> got iN) dt_de_raw_irqoc/timer0, index)  usingupts' pr(XEN)  i intlen=)  intsilen=12
_irq_mapr=/soc/i-control1000,int00000000042...],=3
(XEN_map_rawsoc/intentroller0, size=  -> add
                (XEN)  t !
(XEvice_get: dev=/s@7e00300=3
(XEN 'interroperty
ntspec=012
(XENze=3 int(XEN) dt_raw: panterruptler@4004spec=[0x 0x00000ointsize) dt_irq: ipar=/rrupt-co@40041003
-> got iN) /soc/003000 pgh = 1 n
(XEN)  /soc/ti3000 is he IOMMU it
(XEq_numberoc/timer0
(XEN)'interruperty
(tspec=0 2
(XEN)e=3 intlXEN) dt_et_raw_i/soc/tim000, indEN)  usirrupts'
(XEN) =0 intleEN)  intntlen=12dt_irq_mpar=/socpt-contr041000,i0x00000000040...ze=3
(Xrq_map_r=/soc/incontroll000, sizN)  -> a1
(XEN) it !
(device_grq: dev=er@7e003ex=0
(Xng 'inteproperty intspecn=12
(Xsize=3 i
(XEN) ap_raw: /interruoller@40ntspec=[00 0x000],ointsiEN) dt_iaw: iparterrupt-er@40041e=3
(XEddrsize=  -> gotXEN)   -
(XEN) e_get_raev=/soc/003000,
                (XEN)  nterruptrty
(XEpec=0 in
t_irq_mapar=/socpt-contr041000, (XEN)  -ze=1upt-co@4004100c=[0x00000000041tsize=3
intsize==12propeN)  intstlen=12@7eindex=1
(XEq_map_rasoc/intentroller0,intspe00000 0x...],oin
                (XEN) dp_raw: i/interruoller@40size=3
> addrsiEN)  ->
(XEN)  97
(XEvice_get: dev=/s@7e00300=2
(XEN 'interroperty
ntspec=012
(XENze=3 int(XEN) dt_raw: panterruptler@4004spec=[0x 0x00000ointsize) dt_irq: ipar=/rrupt-co@40041003
-> got iN) dt_de_raw_irqoc/timer0, index)  usingupts' pr(XEN)  i intlen=)  intsilen=12
_irq_mapr=/soc/i-control1000,int00000000042...],=3
(XEN_map_rawsoc/intentroller0, size=  -> add
                (XEN)  t !
(XERQ: 98
_device_irq: devmer@7e00dex=3
(ing 'int propert  intspeen=12
(tsize=3 2
(XEN)map_raw:c/interrroller@4intspec=000 0x00.],ointsXEN) dt_raw: ipanterruptler@4004ze=3
(Xaddrsize)  -> go(XEN) dtget_raw_=/soc/ti3000, inXEN)  userrupts'y
(XEN)c=0 intlXEN)  inintlen=1 dt_irq_ par=/soupt-cont0041000,[0x00000000043..ize=3
(irq_map_r=/soc/i-control1000, siEN)  -> =1
(XENt it !
- IRQ: 9 DT: ** ion for soc/time00 **
( bus is (na=1, n/soc
(Xtranslatess:<3> <3>
(XEarent buault (na) on /
: walkin...
(XEefault me000000,00, da=7
T: reachnode          (XEN) Dt translr:<3> 00> fe0000XEN) DT:fset: 30) DT: ontranslat000000003000<3>
t_irq_nuv=/soc/t000Type=5
intspec==3rroperty
(XENze=3 intXEN) dt_et_raw_i/soc/txp0, index)  usingupts' pr(XEN)  i intlen=  intsizen=3
(Xrq_map_r/soc/intontrolle00,intsp000000 0b...],oi
-> addrsXEN)  ->!nterrroller@4 size=3
(XEN)p@7e0040hrough = = 1
(Xk if /so004000 i the IOMdd it
(irq_numb/soc/txp0
(XEN)'interruperty
(tspec=0
intsize==3ntleN) dt_de_raw_irqoc/txp@7 index=0 using 'ts' propEN)  intntlen=3
(XEN_map_rawoc/intertroller@,intspec0000 0x0..],oint(XEN) dt_raw: ipinterrupller@400ize=3
( addrsizN)  -> g
                 (XEN) d_get_rawv=/soc/t000, indEN)  usirrupts'
(XEN) =0 intleN)  intstlen=3
_irq_mapr=/soc/i-control1000,int0000000004b...],=3
(XEN_map_rawsoc/intentroller0, size=  -> add
                (XEN)  t !
T: ** trbn for dec/txp@7e*
(XEN) is defa1, ns=1)
(XEN) slating <3> 7e00
                      (XEN) Dt bus is (na=2,  /
(XENlking ra
            (XEN) Dlt map, 000, s=1da=7e004N) DT: panslatio> 000000000000<3 DT: wit: 4000
: one leslation:0000<3> <3>
(XEeached r
(XEN) : 00fe000fe00402e=5
(XEe /soc/c101000
_irq_num=/soc/cp01000
(c/cprman0 passth1 naddr N) Check/cprman@ is behiOMMU and
                (XEN) dmber: deprman@7e(XEN) DTnslationice /soc7e101000N) DT: bfault (n1) on /s) DT: trg addres101000<3 DT: paris defau, ns=1) EN) DT: ranges.. DT: def, cp=7e0=180000001000
( parent ion for:0000<3> <3>
(XEith offs00
(XENe level ion:<3> <3> fe10
                (XEN) Ded root EN)   - fe10100003000 P2
                (XEN) hoc/mailb880
(XEq_numberoc/mailb880
intspec==3rroperty
(XENze=3 intXEN) dt_et_raw_i/soc/mai0b880, i(XEN)  uterruptsty
(XENec=0 intXEN)  inintlen=3dt_irq_mpar=/socpt-contr041000,i0x000000000021...ze=3
(Xrq_map_r=/soc/incontroll000, sizN)  -> a1
(XEN) it !
(c/mailbo80 passt 1 naddrEN) Checc/mailbo80 is be IOMMU at
(XEN)number: /mailbox0
(XEN)'interruperty
(tspec=0
(XEN) =3 intleN) dt_de_raw_irqoc/mailb880, indEN)  usirrupts'
(XEN) =0 intleN)  intstlen=3
_irq_mapr=/soc/i-control1000,int000000000021...],=3
(XEN_map_rawsoc/intentroller0, size=  -> add
                (XEN)  t !
(XEvice_get: dev=/sox@7e00bex=0
(Xng 'inteproperty intspecn=3
(XEize=3 in(XEN) dt_raw: panterruptler@4004spec=[0x 0x00000ointsize) dt_irq: ipar=/rrupt-co@40041003
-> got iN)   - I(XEN) DTnslationice /soc@7e00b88EN) DT: efault (=1) on /N) DT: tng addree00b880<) DT: pa is defa2, ns=1)XEN) DT: ranges.) DT: dep, cp=7es=18000000b880
: parenttion for00000<3>0<3>
(Xwith off0
(XEN) level ton:<3> 03> fe00b(XEN) DTd root nN)   - Me00b880 b8c0 P2M(XEN) hac/gpio@7
                (XEN) dmber: depio@7e20EN)  usirrupts'
(XEN) =0 intleEN)  intntlen=12dt_devicw_irq: dgpio@7e2ndex=0
sing 'in' proper)  intsplen=12
ntsize=312
(XEN_map_rawoc/intertroller@,intspec0000 0x0..],oint(XEN) dt_raw: ipinterrupller@400ize=3
( addrsizN)  -> g
                 (XEN) d_get_rawv=/soc/g0000, inXEN)  userrupts'y
(XEN)c=0 intlXEN)  inintlen=1 dt_irq_ par=/soupt-cont0041000,[0x00000000072..ize=3
(irq_map_r=/soc/i-control1000, siEN)  -> =1
(XENt it !
_device_irq: devio@7e200ex=2
(Xng 'inteproperty intspecn=12
(Xsize=3 i
(XEN) ap_raw: /interruoller@40ntspec=[00 0x000],ointsiEN) dt_iaw: iparterrupt-er@40041e=3
(XEddrsize=  -> gotXEN) dt_et_raw_i/soc/gpi00, indeN)  usinrupts' p
t_irq_maar=/soc/t-contro41000,inx00000000074...]e=3                (XEN)  0 intlenN)  intstlen=12
(XEq_map_ra/soc/intontrolle00, size)  -> ad
(XEN) it !
intsize==12propeN)  intstlen=12 ifio@7e200ehind thand add ) dt_irq dev=/soe200000
(XEvice_get: dev=/s7e2000000
(XEN)'interruperty
(tspec=0 2
(XEN)e=3 intlXEN) dt_raw: parterrupt-er@40041pec=[0x00x000000intsize= dt_irq_ ipar=/srupt-con40041000
(XEN) size=1
> got it) dt_devraw_irq:c/gpio@7 index=0 using 'ts' propEN)  intntlen=12 intsizen=12
(Xrq_map_r/soc/intontrolle00,intsp000000 01...],oi
-> addrsXEN)  ->!nterrroller@4 size=3
(XEN): 145
(device_grq: dev=o@7e2000x=1
intspec==12roperty
(XEize=3 in
           (XEN) dp_raw: pinterrupller@400tspec=[00 0x0000,ointsizN) dt_irw: ipar=2... 1...
(XEN) *** Seriato DOM0 TRL-a' tes to swut)
(XEN) Freed 344kBmory.
(XEN) d0v0: vGICDled wordx000000fto ICACTXEN) d0v: unhand write 0fffffff IVER8
0v0: vGIndled wo 0x00000f to ICAT(XEN) d0D: unhand write ffffffffTIVER16
(XEN) ICD: unhord writ00ffffffACTIVER2 d0v0: vhandled te 0x000fff to I28
(XENvGICD: u word wr0000ffffICACTIVE 0.00000ng Linuxical CPU00000 [03]
[  00] Linun 5.6.1-(root@2f8d) (gcc 8.3.0 (.3.0)) #u Apr 9  UTC 202 0.00000ne modelrry Pi 4
[    ] Xen 4.rt found0.000000etting Eeters fr
[    0 efi: UEound.
00000] Rmemory: CMA memoat 0x00000000, siB
[  00] OF:  mem: ind node l, compatshared-d
[    0 NUMA: Nonfigurand
000000] DE_DATA 3ed92c0-ff]00-0x000fffff]
[ 000] Zon:
[   0]   DMAem 0x00000000-0x37ffffff 0.0000032    em   0.000ormal   [    0.0ovable zt for ea
[    0 Early mde range 0.00000e   0: [00000010x0000000f]
[  00]   no[mem 0x00000000-0037ffff   0.000tmem set0 [mem 001000000000037ff[    0.0sci: pro conduitfrom DT.0.000000PSCIv1.1d in fir
                [    0.psci: Usdard PSCunction    0.000i: Trustgration ired
.000000] Embeddees/cpu s192 d3178
[   0] Detec I-cache
[    ] CPU fedetectedctor har
                [    0.CPU feattected: ive Stor Disable0.000000atures: : ARM er19367
00000] Boneliststy group Total p9024
000000] ache hasentries:order: 7 bytes,
.000000]o-init: f, heap f, heap 262144linear)
[    ] MemoryK/524288ble (127el code,wdata, 6ata, 467 758K bsK reserv6K cma-r
[    ] randomndom_u64from __ke_create578 withit=0
[0000] SLign=64, 3, MinOb CPUs=4,
.000000]CU restrPUs from=480 to ds=4.
00000] rcalculat of schelistments 10 jif[    0.0cu: Adjuometry fanout_ler_cpu_id   0.000IRQS: 64s: 64, pted irqs   0.000h_timer:mer(s) rt 54.00M).
[  00] cloc arch_syr: mask:fffffffcles: 0x6, max_i44079520
[    0 sched_c bits atresoluti wraps e80465111    0.00nsole: cmmy devi
[    ] printke [hvc0]
[    ] Calibrlay loopd), valuated usi frequen.00 Bogoj=540000 0.00164ax: defa68 minim
[    0 LSM: Serameworki
emluZwowMTg3Ml0gTWhlIGhhc2ggdHJpZXM6IDFlcjogMSwgOHMsIGxpbmVhICAwLjAwMTl0cG9p
bnQtY2ggdGFibGUgIDEwMjQgKG8gODE5MiBieWVhcikKWzQ0NDJdIHhldGFibGU6IEdsZXMgdXNp
bm4gMSBsYXlvICAwLjAwNDV0IHRhYmxlIHplZApbIDU2M10geGVuIFVzaW5nIEZkIEFCSQowNDY0
OF0gWGlhbGl6aW5nWyAgICAwLjBjdTogSGllciBTUkNVIGltdGlvbi4KMDgzMjZdIEVjZXMgd2ls
bGF2YWlsYWJsICAwLjAwODYgQnJpbmdpbm9uZGFyeSBDCjAwOTQ2Nl0gdGlhbGl6aW5kIHdyaXQw
MGZmZmZmZkFDVElWRVIwLjAwOTMyMF1kIFBJUFQgSW4gQ1BVMQogICAgICAgICAgICAgICAgWyAg
ICAwLkNQVTE6IEJvb25kYXJ5IHAgMHgwMDAwMHg0MTBmZDA4TikgZDB2Mjp1bmhhbmRsZXJpdGUg
MHgwZmZmZmYgdG9FUjAKKFhFIHZHSUNEOiBkIHdvcmQgdzAwMDAwZmZmIElDQUNUSVYgIDAuMDEw
NGN0ZWQgUElQZSBvbiBDUFUgMC4wMTA1MmluaXRpYWxpMgpbICAgNl0gQ1BVMjpzZWNvbmRhcnNv
ciAweDAwIFsweDQxMGZbICAgIDAuMGV0ZWN0ZWQgYWNoZSBvbiAgICAgMC4wMW46IGluaXRpY3B1
MwpbMTQzM10gQ1BlZCBzZWNvbmNlc3NvciAwMDAzIFsweDQKWyAgICBdIHNtcDogQnAgMSBub2Rl
Ci4wMTE3OTlddHVyZXM6IGQgMzItYml0IG9ydC4KWyA4MjhdIENQVXM6IGRldGVjMzIgaW5zdHIK
WyAgICAwIENQVTogQWwgc3RhcnRlZApbICAgIDAgYWx0ZXJuYWF0Y2hpbmcgb2RlClsgNzE4XSBk
ZXZuaXRpYWxpeiAgMC4wNDgyc3RlcmVkIGNpZXIgZW11bG5kbGVyCjQ4MzYzXSBSZCBzZXRlbmRv
biBoYW5kbCAgMC4wNDgzUiBkaXNhYmxvIGxhY2sgbwogICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICBbICAgIDAuY2xvY2tzb3VmaWVzOiBtYWZmZmZmZiBtczog
MHhmZmZheF9pZGxlXzI2MDQ0NjI3ClsgICAgXSBmdXRleCBsZSBlbnRyaSAob3JkZXI6NiBieXRl
cywKWyAgICBdIHBpbmN0cmluaXRpYWxpdHJsIHN1YnNbICAgIDAuMGhlcm1hbF9zc3RlcmVkIHRv
dmVybm9yIGFyZScKWzE5MTddIHRoczogUmVnaXNlcm1hbCBnb2JhbmdfYmFuICAwLjA1MTltYWxf
c3lzOnJlZCB0aGVycm5vciAnc3QKLjA1MjE5NV0gcHJlc2VudGxpZC5lbCBnb3Zlcm5fc3BhY2Un
ClsyODc5XSBORXRlcmVkIHByYW1pbHkgMTYwLjA1NDg1NXJlYWxsb2NhS2lCIHBvb2xtaWMgYWxs
bwpbICAgIDAgYXVkaXQ6IHppbmcgbmV0c3lzIChkaXMKMDU3NDc4XSB0bGJfeGVuOjogb25seSBh
bGxvY2F0ZSAgc29mdHdhcml0OiB0IGF1ZGl0KDAgc3RhdGU9aWVkIGF1ZGl0PTAgcmVzPTEwLjA1
Njg5M2FrcG9pbnQ6IGJyZWFrcG80IHdhdGNocGlzdGVycy4KLjA3OTMwNF0gcmVnaXN0ZSBNaUIg
cGFncHJlLWFsbG9wYWdlcyAgIDAuMDYwaWFsOiBBTUJVQVJUIGRyaSAgIDAuMDc5ZVRMQiByZWcx
LjAwIEdpQnplLCBwcmUtZCAwIHBhZ2UgMC4wNzkyNkxCIHJlZ2lzLjAgTWlCIHAsIHByZS1hbDAg
cGFnZXMKNzkzMzddIEhlZ2lzdGVyZWlCIHBhZ2UgZS1hbGxvY2FnZXMKWyAwMjFdIGNyeV9jcHVf
cWxlIDEwMDAKMDA3MDVdIEFlcnByZXRlcmQuClsgIDM0XSB4ZW46IEluaXRpYWxsbG9vbiBkciAg
ICAwLjEwbW11OiBEZWZhaW4gdHlwZWF0ZWQKMDIzNDJdIHZvYWRlZAowMjc4NF0gU3lzdGVtIGlu
ZApbICAgNF0gdXNiY29zdGVyZWQgbmZhY2UgZHJpcwpbICAgMF0gdXNiY29zdGVyZWQgbmZhY2Ug
ZHJpClsgICAgMCB1c2Jjb3JlZXJlZCBuZXdkcml2ZXIgdSAgMC4xMDM2cGh5X2dlbmUgcGh5IHN1
cG5vdCBmb3VuIGR1bW15IHIKWyAgICBdIHBwc19jb3hQUFMgQVBJcmVnaXN0ZXIgIDAuMTA0MGNv
cmU6IFNvZXIuIDUuMy5yaWdodCAyMFJvZG9sZm8gIDxnaW9tZXQuaXQ+Cls0MDY1XSBQVHN1cHBv
cnQgZWQKLjEwNTYyNl1sOiBJbml0aS4wClsgICAgMCBOZXRMYWJlaW4gaGFzaCAyOApbICA4MV0g
TmV0THJvdG9jb2xzRUxFRCBDSVBJUFNPCls1NzYxXSBOZSB1bmxhYmVsaWMgYWxsb3dmYXVsdAow
NjQ4Nl0gY2NlOiBTd2l0Y2xvY2tzb3Vfc3lzX2NvdSAgICAwLjEwUzogRGlzayBxdW90XzYuNiAg
MC4xMDY4IERxdW90LWNoIHRhYmxlICA1MTIgKG9yMDk2IGJ5dGUgIDAuMTA3MCBQblAgQUNQbGVk
Ci4xMTk0OTBdYWJsaXNoZWRibGUgZW50cjYgKG9yZGVyNjggYnl0ZXMpaW5lYXIpClsgICA3XSBU
Q1AgYiB0YWJsZSBlNDA5NiAob3I2NTUzNiBieWVhcikKWzk2NDddIFRDdGFibGVzIGNkIChlc3Rh
YjA5NiBiaW5kCiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFsgICAgMC5VRFAgaGFz
aG50cmllczogZXI6IDEsIDhzLCBsaW5lYSAgMC4xMTk4TGl0ZSBoYXNlbnRyaWVzOmRlcjogMSwg
ZXMsIGxpbmUgICAwLjEyMDogUmVnaXN0dG9jb2wgZmEKICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFsg
ICAgMC5QQ0k6IENMUywgZGVmYXVsICAgIDAuMTJtIFsxXTogSG5vdCBhdmFpWyAgICAwLjFuaXRp
YWxpcyB0cnVzdGVkcwpbICAgNF0gd29ya2lpbWVzdGFtcCBtYXhfb3JkY2tldF9vcmQgICAgMC4x
NHVkOiBsb2FkICAwLjE0MzdzaGZzOiB2ZTAgKDIwMDkvaGlsbGlwIEwKICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgWyAgICAwLjlwOiBJbnN0OWZzIDlwMjBzeXN0ZW0gcwogICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgWyAgICAwLktleSB0
eXBlcmljIHJlZ2kKLjE2ODA0Nl1heWVyIFNDU2MgKGJzZykgZXJzaW9uIDBkIChtYWpvclsgICAg
MC4xbyBzY2hlZHVlYWRsaW5lIGVkICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgWyAgICAwLkFzeW1tZXRyYXJzZXIg
J3hpc3RlcmVkClsgIDkyXSBpbyBzIGt5YmVyIHJkClsgICA2XSBpbyBzY2JmcSByZWdpCjE3NDY4
Nl0gZSBmZDUwMDAgICAgICBNRTAwMDAwMC4uZmZmZiAtPiAwMDAwdHJvbGxlcnZlcnNpb246ICAg
IDAuMTdjbS1wY2llIC5wY2llOiBoZ2UgL3NjYi8wMDAwMCByYVsgICAgMC4xcmNtLXBjaWUwLnBj
aWU6ICByYW5nZSBmIC9zY2IvcGMwMDAsIHVzaTAwLWZmXQpbNDc0NF0gYnJmZDUwMDAwMCBJQiBN
RU0gMDAwMC4uMHhmZiAtPiAweDAwClsgIDIzXSBicmNtNTAwMDAwLnBrIHVwLCA1IChTU0MpCjI4
ODg4XSBiIGZkNTAwMDBQQ0kgaG9zdHRvIGJ1cyAwCiAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgWyAgICAwLnBjaV9idXMgIHJvb3QgYnVjZSBbYnVzCjIyOTAzMV0gOjAwOjAwLjAyNzEx
XSB0eWFzcyAweDA2ICAgIDAuMjJpIDAwMDA6MFBNRSMgc3Vwcm9tIEQwIERYRU4pIHBoeTY6ZDB2
MiBQIGNtZD0yNTpsZW1lbnRlZHBoeXNkZXYuMiBQSFlTREUxNTogbm90IHRlZCkKLjIzMTU2Ml0w
OjAxOjAwLjozNDgzXSB0bGFzcyAweDBbICAgIDAuMmNpIDAwMDA6IHJlZyAweDEweDAwMDAwMDAw
ZmZmIDY0ICAgIDAuMjNpIDAwMDA6MFBNRSMgc3Vwcm9tIEQwIEQoWEVOKSBwaDE2OmQwdjIgUCBj
bWQ9MTVwbGVtZW50ZTAuMjMxOTcyMDA6MDE6MDBlZCB0byBhZHRocm91Z2ggU0ktWCBtaWcKWyAg
ICBdIHBjaV9idTE6IGJ1c25fcyAwMS1mZl11cGRhdGVkIFsgICAgMC4yY2kgMDAwMDogQkFSIDE0
OmQgW21lbSAwMDAtMHg2MDAKMjM0MDk2XSA6MDA6MDAuMGlkZ2UgdG8gIFttZW0gMDAwLTB4NjA2
NGJpdF0KWyAgICBdIHBjaSAwMC4wOiAgIGJyZG93IFttZW0wMDAwLTB4Nl0KWyAgIDhdIHBjaWVw
OjAwOjAwLjBuZyBkZXZpYy0+IDAwMDIpMC4yMzQ0ODlydCAwMDAwOiBQTUU6IFNpd2l0aCBJUlEg
ICAwLjIzNGVwb3J0IDAwLjA6IEFFUjogd2l0aCBJUiAgICAwLjIzaSAwMDAwOjBlbmFibGluZygw
MDAwIC0+CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgIFsgICAgMC54ZW46eGVuX0V2ZW50LWNodmljZSBpbnMK
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIFsgICAgMC5J
bml0aWFsaSBwdmNhbGxzZCBkcml2ZXIwLjI0OTc0NTogODI1MC8xdmVyLCAzMiBSUSBzaGFyaWVk
Ci4yNjE0MDJdZm86IFVuYWJ0ZWN0IGNhY3JjaHkgZm9yCiAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgWyAgICAwLmJyZDogbW9kZWQKLjMxMTQ2Nl1uaXRpYWxpemlvbjogOC40OjEvcHJv
dG8KWyAgICBdIGRyYmQ6ClsgICAgXSBkcmJkOiBlZCBhcyBibGNlIG1ham9yICAgIDAuMzFtMjgz
NS1wbzgzNS1wb3dlY29tIEJDTTJyIGRvbWFpbgpbICAgIF0gd2lyZWd1b3dlZGlwcyB0czogcGFz
czAuMzE2MTI3YXJkOiBub25lciBzZWxmLWFzcwouNjg5MTc1XXQgZmQ1ODAwbmV0OiB1c2ltIEV0
aGVybjpbICAgIDAuNmlyZWd1YXJkYXJkIDEuMC4uIFNlZSB3d2FyZC5jb20gcm1hdGlvbi4wLjY4
NzAzNGFyZDogQ29wQykgMjAxNS1vbiBBLiBEbzxKYXNvbkB6Pi4gQWxsIFJzZXJ2ZWQuCiAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgWyAgICAwLmJjbWdlbmV0MC5ldGhlcm5lZCB0byBn
ZWxvY2sKWzkyNjVdIGJjZDU4MDAwMC46IEdFTkVUIDogMHgwMDAwMC42ODkzMDNldCBmZDU4MHJu
ZXQ6IGZhZ2V0IGVuZXRjawpbICAzN10gYmNtZzgwMDAwLmV0ZmFpbGVkIHRldC1lZWUgYyAgICAw
LjcwYnBoeTogYmNJSSBidXM6CiAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgWyAgICAwLjc4NjU3Nl0gdW5vIHVuaW1hYzk6IEJyb2FkQUMg
TURJTyAgICAwLjc4N19uZXRmcm9uYWxpc2luZyB1YWwgZXRoZXZlcgpbIDU1NV0geGhjMDA6MDE6
MDAgSG9zdCBDbwo3ODgxNTRdICAwMDAwOjAxY2MgcGFyYW00MWViIGhjaSAweDEwMCBxMDAwMDAw
MDAKWyAgICBdIHVzYiB1c1VTQiBkZXZpLCBpZFZlbmQgaWRQcm9kdSBiY2REZXZpClsgICAgXSB1
c2IgdXNVU0IgZGV2aWdzOiBNZnI9Y3Q9MiwgU2Vlcj0xCls4Nzc4XSB1c1Byb2R1Y3Q6c3QgQ29u
dHJbICAgIDAuN3NiIHVzYjE6dHVyZXI6IEwuMS1kZWZhdWhjZApbIDg0MF0gdXNiZXJpYWxOdW0w
OjAxOjAwLiAwLjc4OTM3LTA6MS4wOiBmb3VuZAo4OTQyOF0gaC4wOiAxIHBvdGVkClsgODQxXSB4
aGMwMDowMTowMCBIb3N0IENvCjc4OTkyOV0gIDAwMDA6MDFvc3Qgc3VwcCAzLjAgU3Vwc2lnbmVt
YmVyIDIKWyAgICAwIHVzYiB1c2JuJ3Qga25vd29yaXRobXMgZm9yIHRoaXNpc2FibGluZ1sgICAg
MC43c2IgdXNiMjogZGV2aWNlIGRWZW5kb3I9UHJvZHVjdD1kRGV2aWNlPVsgICAgMC43c2IgdXNi
MjogZGV2aWNlICBNZnI9MywgMiwgU2VyaWExClsgICA0XSB1c2IgdWR1Y3Q6IHhIQ29udHJvbGwg
IDAuNzkwMnVzYjI6IE1hZXI6IExpbnVkZWZhdWx0Cjc5MDc4Nl0gMS4wOiBVU0JuZGJlcjE6MDAu
MApbICA0OF0gaHViICA0IHBvcnRzZAouNzkxODY2XXY6IFBTLzIgdmljZSBjb21hbGwgbWljZTAu
Nzk0NDI2NS13ZHQgYmN0OiBCcm9hZDgzNSB3YXRjZXIKWyAgMjRdIHhlbl93ZHQ6IGluaSAodGlt
ZW91b3dheW91dD0gIDAuNzk1OGk6IFNlY3VybCBIb3N0IENyIEludGVyZmVyClsgIDcxXSBzZGhj
aWdodChjKSBzc21hbgouODAxMzQxXToga2VybmVsZSBpcyAtMDAgIDAuODAxMzogUmVnaXN0dG9j
b2xzICggMC44MDE0NSBDb25uZWN0IHRhYmxlIGNkIChzaXplPW1vcnk9NjRLdmVyIHVzYmggIDAu
Nzk4MmlkOiBVU0IgIGRyaXZlcgogICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgWyAgICAw
LklQVlM6IGlwZC4KWyAgNjFdIGlwaXBuZCBNUExTIDQgdHVubmVsZXIKWyAgOTJdIGdyZTpyIElQ
djQgZGV4b3IgZHJpICAgMC44MDJfQ0xVU1RFUnRlcklQIFZlOCBsb2FkZWRmdWxseQo4MDU0MDld
IGlzdGVyZWQgIGZhbWlseSAgIDAuODA1NCBSZWdpc3Rlb2NvbCBmYW0KICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgWyAgICAwLkJyaWRnZSBmbmcgcmVnaXNb
ICAgIDAuODAyMXE6IDgwTiBTdXBwb3IKICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICBbICAgIDAuOXBuZXQ6IElnIDlQMjAwMApbICAgIF0gSW5pdGlhZW4gdHJhbnMgOXBmcwou
ODA2NDQ5XSBjb21waWxlMDkgY2VydGkKWyAgICAwIExvYWRlZCBydCAnQnVpbHV0b2dlbmVybmVs
IGtleTowNDNkM2QyYThlOGZhY2QxZCcKWyAgMzldIHpzd2FkIHVzaW5nIC96YnVkCjExNzA2XSBL
Ll9mc2NyeXBlcmVkClsxNzMyXSBLZWZzY3J5cHQgZWQKWyAgNTRdIEtleSByeXB0LXByb2cgcmVn
aXN0ICAgIDAuODEyMDEwMDAuc3R5QU1BMCBheGZlMjAxMDAgMTUsIGJhcyAwKSBpcyBhZXYyClsg
MjA3XSBzZXJhbDA6IHR0eXlBTUEwIHJlClsgICAgXSBVbmFibGVsZSBrZXJuZSByZXF1ZXN0dWFs
IGFkZHIwMDAwMDAyNiAgICAwLjgybSBhYm9ydCBbICAgIDAuOCBFU1IgPSAwNApbICAgM10gICBF
QyBEQUJUIChjdSksIElMID0KWyAgICAwICAgU0VUID09IDAKLjgyMTIwMl0wLCBXblIgPSAgMC44
MjEyMDAwMDAwMDJzZXIgYWRkcmFjdGl2ZV9tcHBlcjAwMDA0ClsxMjU4XSBJbnJyb3I6IE9vMDAw
NCBbIzFbICAgIDAuOG9kdWxlcyBsOgpbICAgM10gQ1BVOiA0IENvbW06IDM6MSBOb3QgNS42LjEt
ZGUKWyAgICBdIEhhcmR3YSBSYXNwYmVyTW9kZWwgQiAgICAgMC44MnJrcXVldWU6ZGVmZXJyZWRv
cmtfZnVuYzAuODIxNDIxOiA2MDAwMDAgZGFpZiAtUApbICAgIF0gcGMgOiB4bGJfZnJlZV8rMHgx
OTgvMCAgICAwLjgyIDogZG1hX2ZzKzB4OTgvMCAgIDAuODIxOiBmZmZmODA3MApbICAyN10geDI5
OjAxMWIyYjk3MDAwMDAwMDAKWyAgICBdIHgyNzogMDAwMDEwMDAgZjAwMDAzM2IKODIxNzAwXSBm
MDAwMDMxYzY6IGZmZmY4Mjg4ICBbICAgIDAueDI1OiAwMDAwMDAwMSB4MjAwMGY0MDAwICAgIDAu
ODIzOiAwMDAwMDAwMCB4MjI6MDAwMDAwMDAgIDAuODIxNiBmZmZmODAwMCB4MjA6IGYxNzliNTQ4
IDAuODIxNjcyZmZmMDAwMDN4MTg6IDAwMDAwMDAxClsxNzI4XSB4MTE2MDMxMTE2IDAwMTBhNWI0
ClsgIDU2XSB4MTM6MDAwMDAwM2EwMDAwMDAwMApbICAgIF0geDExOiAwMDAwMDAwMCAwMDAwMDAw
MAo4MjE5MjJdIDAwMDAwMDAyIDogMDAwMDBiNDAgIFsgICAgMC54OSA6IGZmZjJiODQwIHg4MDAw
MzNhNDQgICAgMC44MiA6IGZmZmYwNDgwIHg2IDowMjZkNWYwMSAgMC44MjE4IDAwMDAwMDAwIHg0
IDogMDAwMDAwMDAgMC44MjE4OTUwMDAwMDAwZngyIDogZmZmMDAwMDAKWzE5NTFdIENhOgpbICAg
M10gIHhlbl9mcmVlX2NvaDE5OC8weDFkIDAuODIyMDBmcmVlX2F0dDB4ZDAKWzIwMjddICByYXJl
X3Byb3B0KzB4MTQwL1sgICAgMC44cnBpX2Zpcm1wZXJ0eSsweApbICAgIDAgIHJwaV9maXJvYmUr
MHhmClsgICAgMCAgcGxhdGZvcm9iZSsweDUKLjgyMjI1NF1faW5pdGlhbHgxMC8weDE4MC44MjIy
Nzlyb2JlX2Rldi8weDk4MC44MjIxNTNyX3Byb2JlX3hkYy8weDEzIDAuODIyMTd2aWNlX2F0dGVy
KzB4ODgvWyAgICAwLjhidXNfZm9yXysweDc4LzB4ICAwLjgyMjJldmljZV9hdDQvMHgxNTgKMjIz
MDNdICBfcHJvYmVfdysweDg4LzB4ICAwLjgyMjNjZXNzX29uZTFmMC8weDNjIDAuODIyMzVlcl90
aHJlYTB4NTcwCjIyMzc5XSAgMHgxMTgvMHggICAwLjgyMnRfZnJvbV9mLzB4MTgKMjI0MzNdIENj
ZmMwMCBmMjM3YWU0MDAgIChmODYyNjggICAgMC44Mi1bIGVuZCB0ZTUwOTVhYWUtLQpbICAgIDEu
MTU2NTIwXSB1c2V3IGhpZ2gtQiBkZXZpY2UyIHVzaW5nClsgICAgMS4xODk0OTZdIHVzZXcgVVNC
IGR1bmQsIGlkVjA5LCBpZFByMzEsIGJjZEQuMjAKMTg5NTg3XSAgUHJvZHVjdCBIdWJnczogTWZk
dWN0PTEsIG1iZXI9MApbMDkyNF0gaHUwOiBVU0IgaApbICAgIF0gaHViIDEtIHBvcnRzIGQKWyAg
ICAxLjUxNjUxOV0gdXMgbmV3IGxvd1NCIGRldmljIDMgdXNpbmdkClsgICAgMS42NTg5OTddIHVz
IE5ldyBVU0Jmb3VuZCwgaTAzZWUsIGlkNTYwMSwgYmMgMS4wMAo1OTA0OF0gdTogTmV3IFVTIHN0
cmluZ3MgUHJvZHVjdGFsTnVtYmVyICAxLjY1OTAxLTEuMzogUE1pdHN1bWkgb2FyZApbOTExNV0g
dXMgTWFudWZhY2l0c3VtaSBFClsgICAgMSBpbnB1dDogRWxlY3RyaWMgVVNCIEtleSAvZGV2aWNl
cm0vc2NiL2ZwY2llL3BjaTAwMDA6MDA6MDowMTowMC4tMS8xLTEuMy4wLzAwMDM6MS4wMDAxL2l1
dDAKWyAgICAxLjczNzEyNF0gaGljIDAwMDM6MC4wMDAxOiBpcmF3MDogVVMuMDAgS2V5YnRzdW1p
IEVsaXRzdW1pIFVhcmRdIG9uIDowMTowMC4wdXQwCg==
--0000000000001cfeef05a4d8bdca--


From xen-devel-bounces@lists.xenproject.org Tue May 05 03:52:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 03:52:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVod3-0006sC-HX; Tue, 05 May 2020 03:52:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uk1a=6T=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jVod2-0006s7-C3
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 03:52:20 +0000
X-Inumbo-ID: d14f1fbc-8e83-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d14f1fbc-8e83-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 03:52:19 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 85F56206CC;
 Tue,  5 May 2020 03:52:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588650738;
 bh=u8IJ5l9i78MtyuRFThT6LJOAnz40GV8d96t0Ql3bsV4=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=onY/dlOGASbUswlzO5guNeBsRifxRIuagWHthWGEPlMBe8SPnrsvzh1yEq5yA24nY
 RXeJ7Y4bQupe+YCJhjimX61qTdjOa9P0qD0o9G6XhZJV7cdmANg8Ul5p+/N/fnrhSK
 BRIJUar5WB4ZCvtz9rCi6vZ1Axe22EMk6V3fkXIk=
Date: Mon, 4 May 2020 20:52:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Roman Shaposhnik <roman@zededa.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 4 May 2020, Roman Shaposhnik wrote:
> Hi Julien,
> 
> thanks for your patch -- just like Corey I tried it out and it seems to
> work fine and gets
> me further. At this point, I'm pretty sure I'm past initial
> bootstrapping issues and into
> what can be basically described as Xen DMA issue of some kind (so I'm
> pretty sure
> I will need Stefano's help to debug this further). I'm attaching
> verbose logs, but the
> culprit seems to be:
> 
> [    2.534292] Unable to handle kernel paging request at virtual
> address 000000000026c340
> [    2.542373] Mem abort info:
> [    2.545257]   ESR = 0x96000004
> [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
> [    2.553877]   SET = 0, FnV = 0
> [    2.557023]   EA = 0, S1PTW = 0
> [    2.560297] Data abort info:
> [    2.563258]   ISV = 0, ISS = 0x00000004
> [    2.567208]   CM = 0, WnR = 0
> [    2.570294] [000000000026c340] user address but active_mm is swapper
> [    2.576783] Internal error: Oops: 96000004 [#1] SMP
> [    2.581784] Modules linked in:
> [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted 5.6.1-default #9
> [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
> [    2.597256] Workqueue: events deferred_probe_work_func
> [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
> [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
> [    2.612696] lr : dma_free_attrs+0x98/0xd0
> [    2.616827] sp : ffff800011db3970
> [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
> [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
> [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
> [    2.636583] x23: 0000000000000000 x22: 0000000000000000
> [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
> [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
> [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
> [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
> [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
> [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
> [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
> [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
> [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
> [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
> [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
> [    2.701899] Call trace:
> [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
> [    2.709367]  dma_free_attrs+0x98/0xd0
> [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
> [    2.718146]  rpi_firmware_property+0x6c/0xb0
> [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
> [    2.726760]  platform_drv_probe+0x50/0xa0
> [    2.730879]  really_probe+0xd8/0x438
> [    2.734567]  driver_probe_device+0xdc/0x130
> [    2.738870]  __device_attach_driver+0x88/0x108
> [    2.743434]  bus_for_each_drv+0x78/0xc8
> [    2.747386]  __device_attach+0xd4/0x158
> [    2.751337]  device_initial_probe+0x10/0x18
> [    2.755649]  bus_probe_device+0x90/0x98
> [    2.759590]  deferred_probe_work_func+0x88/0xd8
> [    2.764244]  process_one_work+0x1f0/0x3c0
> [    2.768369]  worker_thread+0x138/0x570
> [    2.772234]  kthread+0x118/0x120
> [    2.775571]  ret_from_fork+0x10/0x18
> [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001 (f8626800)
> [    2.785492] ---[ end trace 4c435212e349f45f ]---
> [    2.793340] usb 1-1: New USB device found, idVendor=2109,
> idProduct=3431, bcdDevice= 4.20
> [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> [    2.808297] usb 1-1: Product: USB2.0 Hub
> [    2.813710] hub 1-1:1.0: USB hub found
> [    2.817117] hub 1-1:1.0: 4 ports detected
> 
> This is bailing out right here:
>      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
> 
> FWIW (since I modified the source to actually print what was returned
> right before it bails) we get:
>    buf[1] == 0x800000004
>    buf[2] == 0x00000001
> 
> Status 0x800000004 is of course suspicious since it is not even listed here:
>     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
> 
> So it appears that this DMA request path is somehow busted and it
> would be really nice to figure out why.

You have actually discovered a genuine bug in the recent xen dma rework
in Linux. Congrats :-)

I am doing some guesswork here, but from what I read in the thread and
the information in this email, I think this patch might fix the issue.
If it doesn't, please add a few printks in
drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and let me know
where exactly it crashes.


diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
index b9cc11e887ed..ff4677ed9788 100644
--- a/include/xen/arm/page-coherent.h
+++ b/include/xen/arm/page-coherent.h
@@ -8,12 +8,17 @@
 static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
 		dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
 {
+	void *cpu_addr;
+	if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle, &cpu_addr))
+		return cpu_addr;
 	return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
 }
 
 static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
 		void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
 {
+	if (dma_release_from_dev_coherent(hwdev, get_order(size), cpu_addr))
+		return;
 	dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
 }
 


From xen-devel-bounces@lists.xenproject.org Tue May 05 03:53:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 03:53:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVoe9-0006wJ-Re; Tue, 05 May 2020 03:53:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FNY7=6T=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jVoe8-0006wB-6e
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 03:53:28 +0000
X-Inumbo-ID: f7972ed0-8e83-11ea-9d7a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7972ed0-8e83-11ea-9d7a-12813bfff9fa;
 Tue, 05 May 2020 03:53:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Vzm5FhmC92WygCmP/BBW7FEiEwT2eVMsQEGqGu1d5OI=; b=tAPelbYKgiCvp4w+l2lsCgV8Y
 e3Ll1NKGBJBl+Pz6TuyRYgxBoWINKGkcLXez+V8LEuvx62qkLxoBIpueLyX3lT6JZEM9wzybEU5RE
 6WZI4igwfr51ne4eKI7xinL/Rq3dN6Y3EmJYk4ru4FMl6Oo9nknRGrI8elOOLtyCjk54c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVoe3-0007md-6d; Tue, 05 May 2020 03:53:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVoe2-0006on-St; Tue, 05 May 2020 03:53:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jVoe1-0002bn-HW; Tue, 05 May 2020 03:53:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-149986-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 149986: trouble: blocked/broken/pass
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:<job
 status>:broken:regression
 xen-unstable-smoke:test-armhf-armhf-xl:<job status>:broken:regression
 xen-unstable-smoke:build-amd64:<job status>:broken:regression
 xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
 xen-unstable-smoke:build-amd64:host-install(4):broken:regression
 xen-unstable-smoke:test-armhf-armhf-xl:host-install(4):broken:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=fe36a173d110fd792f5e337e208a5ed714df1536
X-Osstest-Versions-That: xen=0135be8bd8cd60090298f02310691b688d95c3a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 05 May 2020 03:53:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 149986 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/149986/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm                 <job status>                 broken
 test-armhf-armhf-xl             <job status>                 broken
 build-amd64                     <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 149888
 build-amd64                   4 host-install(4)        broken REGR. vs. 149888
 test-armhf-armhf-xl           4 host-install(4)        broken REGR. vs. 149888

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  fe36a173d110fd792f5e337e208a5ed714df1536
baseline version:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8

Last test of basis   149888  2020-04-30 09:00:53 Z    4 days
Testing same since   149986  2020-05-05 01:24:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  pass    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          broken  
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-xsm broken
broken-job test-armhf-armhf-xl broken
broken-job build-amd64 broken
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step test-armhf-armhf-xl host-install(4)

Not pushing.

------------------------------------------------------------
commit fe36a173d110fd792f5e337e208a5ed714df1536
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Apr 30 10:47:14 2020 +0100

    x86/amd: Initial support for Fam19h processors
    
    Fam19h is very similar to Fam17h in these regards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 0c9751b53c2ee135fd484a03fd47f3bb5fbe63b8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:54:35 2020 +0200

    x86/HyperV: correct hv_hcall_page for xen.efi build
    
    Along the lines of what the not reverted part of 3c4b2eef4941 ("x86:
    refine link time stub area related assertion") did, we need to transform
    the absolute HV_HCALL_PAGE into the image base relative hv_hcall_page
    (or else there'd be no need for two distinct symbols). Otherwise
    mkreloc, as used for generating the base relocations of xen.efi, will
    spit out warnings like "Difference at .text:0009b74f is 0xc0000000
    (expected 0x40000000)". As long as the offending relocations are PC
    relative ones, the generated binary is correct afaict, but if there ever
    was the absolute address stored, xen.efi would miss a fixup for it.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b0f666c569b8af6a51ab8aeec3664d6acd1abee9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:53:42 2020 +0200

    x86/EFI: correct section offsets in mkreloc diagnostics
    
    These are more helpful if they point at the address where the relocated
    value starts, rather than at the specific byte of the difference.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 17b997aa1edb9eb8d9bd1958457ff50927f46832
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Mon May 4 11:53:01 2020 +0200

    x86/hap: be more selective with assisted TLB flush
    
    When doing an assisted flush on HAP, the purpose of the
    on_selected_cpus call is just to trigger a vmexit on remote CPUs that
    are in guest context, and hence just using is_vcpu_dirty_cpu is too
    lax; also check that the vCPU is running. Due to the lazy context
    switching done by Xen, dirty_cpu won't always be cleared when the
    guest vCPU is not running, and hence relying on is_running allows
    more fine-grained control of whether the vCPU is actually running.
    
    I've measured the time of the non-local branch of flush_area_mask
    inside the shim running with 32vCPUs over 100000 executions and
    averaged the result on a large Westmere system (80 ways total). The
    figures were fetched during the boot of a SLES 11 PV guest. The
    results are as follows (less is better):
    
    Non assisted flush with x2APIC:      112406ns
    Assisted flush without this patch:   820450ns
    Assisted flush with this patch:        8330ns
    
    While there, also pass NULL as the data parameter of on_selected_cpus,
    as the dummy handler doesn't consume the data in any way.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 8d928648fd816f97ba3ebe98ab5d4b4a7def58ff
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:51:47 2020 +0200

    xenoprof: limit scope of types and #define-s
    
    Quite a few of the items are used by xenoprof.c only, so move them there
    to limit their visibility as well as the amount of re-building needed in
    case of changes. Also drop the inclusion of the public header there.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit e83719b53a9be1c69033b3ded8051d47e3dadab8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:51:18 2020 +0200

    xenoprof: drop unused struct xenoprof fields
    
    Both is_primary and domain_ready are only ever written to. Drop both
    fields and restrict structure visibility to just the one involved CU.
    While doing so (and just for starters) make "is_compat" properly bool.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 7f6a6e8c0a400d1a073b083fe0b7d25ef74b14e0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:48:13 2020 +0200

    xenoprof: adjust ordering of page sharing vs domain type setting
    
    Buffer pages should be shared with "ignored" or "active" guests only
    (besides, obviously, the primary profiling domain). Hence domain type
    should be set to "ignored" before unsharing from the primary domain
    (which implies even a previously "passive" domain may then access its
    buffers, albeit that's not very useful unless it gets promoted to
    "active" subsequently), i.e. such that no further writes of records to
    the buffer would occur, and (at least for consistency) also before
    sharing it (with the calling domain) from the XENOPROF_get_buffer path.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 05 05:37:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 05:37:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVqGU-00073m-Bo; Tue, 05 May 2020 05:37:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jKJh=6T=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jVqGS-000733-RA
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 05:37:08 +0000
X-Inumbo-ID: 7540a65a-8e92-11ea-9887-bc764e2007e4
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7540a65a-8e92-11ea-9887-bc764e2007e4;
 Tue, 05 May 2020 05:37:07 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id g185so1182170qke.7
 for <xen-devel@lists.xenproject.org>; Mon, 04 May 2020 22:37:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=++FzMEFfP1PBiT6l2HAcsqAZsEvUSdWRF0jHzpVko30=;
 b=fgIX9mM5A6XWOFHl+Jxyy4JjYI/xejc3U3JhVdm0DvOHJEib8drIIWmJlCncdN9QDE
 feOn9mMEeskbm7bLJdPccgpHTXZzlnltAUB2z8YHxuJ30pDoL91fn4AiRF3peIC1Dynm
 hapqK1ELZSciba+Xt7tQm7tBldm7NzG3lpjZglkQsxt6IO1MdqQmBOXr9e+nok4kXfws
 aUO4OpyaGJ7NSzSHQrpniHvVvsVutCy6pbVDbaqQzbScNy9EdC2N7n14TWDZBSOQY2bu
 H8ZRpP5QcBkVZJh9/u5Hmi5F6azYdEvoZVQGcXc7uUtdMpiM6HmHId4OEQzHybypnmSt
 dIPA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=++FzMEFfP1PBiT6l2HAcsqAZsEvUSdWRF0jHzpVko30=;
 b=TlXUHusxg9B4t+aOjchYhEy8qMRJcpYf/X7uV2cGYRDBwif18Dwa+qEo4H2fQfEFfE
 tAD5hEQ7x4POfhx9bDck91uI/iPAl3FrEQEDUKDKqSXwjjzqYijTJRcOJYkkshAaDx+h
 K+kzlYYKc1UFu+ol2bMi8jY7/xuDhVxsKGJTkdTCS+jp53a9d/asdECajLZZXN5VFw+k
 Rludttt6Gng5v8KyMlEuPtSojXV4kK9QcBvWmcNprVuzHADiEtkZWPtafpi56vwJsUCs
 L7nEpQ7nf3qCKBILZdkhKCTsL066XdvLCmuJFxCTAtP63yjRGNqGv4kvJkGuWzX2r4nx
 I3pQ==
X-Gm-Message-State: AGi0PuZq65H0dh1ZTJLSUBVRVlOGOZSxArVM92EeJ4VKbEiJB8dFxhMr
 Y+FldB+P1lqSp17e0ieUSPt30yFnQkwWgyMVhgAe/Q==
X-Google-Smtp-Source: APiQypLdMzCxpZMthhkTK84Nqls92tR7z98aEK9DX7Sbp014IAlhU5ZoRudas0ZKbamdilthldnfvpN07hZxyuPOj3A=
X-Received: by 2002:a05:620a:16d6:: with SMTP id
 a22mr1760812qkn.291.1588657026736; 
 Mon, 04 May 2020 22:37:06 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 4 May 2020 22:36:55 -0700
Message-ID: <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, minyard@acm.org,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 4, 2020 at 8:52 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Mon, 4 May 2020, Roman Shaposhnik wrote:
> > Hi Julien,
> >
> > thanks for your patch -- just like Corey I tried it out and it seems
> > to work fine and gets me further. At this point, I'm pretty sure I'm
> > past the initial bootstrapping issues and into what can basically be
> > described as a Xen DMA issue of some kind (so I'm pretty sure I will
> > need Stefano's help to debug this further). I'm attaching verbose
> > logs, but the culprit seems to be:
> >
> > [    2.534292] Unable to handle kernel paging request at virtual
> > address 000000000026c340
> > [    2.542373] Mem abort info:
> > [    2.545257]   ESR = 0x96000004
> > [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
> > [    2.553877]   SET = 0, FnV = 0
> > [    2.557023]   EA = 0, S1PTW = 0
> > [    2.560297] Data abort info:
> > [    2.563258]   ISV = 0, ISS = 0x00000004
> > [    2.567208]   CM = 0, WnR = 0
> > [    2.570294] [000000000026c340] user address but active_mm is swapper
> > [    2.576783] Internal error: Oops: 96000004 [#1] SMP
> > [    2.581784] Modules linked in:
> > [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted 5.6.1-default #9
> > [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
> > [    2.597256] Workqueue: events deferred_probe_work_func
> > [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
> > [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
> > [    2.612696] lr : dma_free_attrs+0x98/0xd0
> > [    2.616827] sp : ffff800011db3970
> > [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
> > [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
> > [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
> > [    2.636583] x23: 0000000000000000 x22: 0000000000000000
> > [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
> > [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
> > [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
> > [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
> > [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
> > [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
> > [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
> > [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
> > [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
> > [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
> > [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
> > [    2.701899] Call trace:
> > [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
> > [    2.709367]  dma_free_attrs+0x98/0xd0
> > [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
> > [    2.718146]  rpi_firmware_property+0x6c/0xb0
> > [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
> > [    2.726760]  platform_drv_probe+0x50/0xa0
> > [    2.730879]  really_probe+0xd8/0x438
> > [    2.734567]  driver_probe_device+0xdc/0x130
> > [    2.738870]  __device_attach_driver+0x88/0x108
> > [    2.743434]  bus_for_each_drv+0x78/0xc8
> > [    2.747386]  __device_attach+0xd4/0x158
> > [    2.751337]  device_initial_probe+0x10/0x18
> > [    2.755649]  bus_probe_device+0x90/0x98
> > [    2.759590]  deferred_probe_work_func+0x88/0xd8
> > [    2.764244]  process_one_work+0x1f0/0x3c0
> > [    2.768369]  worker_thread+0x138/0x570
> > [    2.772234]  kthread+0x118/0x120
> > [    2.775571]  ret_from_fork+0x10/0x18
> > [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001 (f8626800)
> > [    2.785492] ---[ end trace 4c435212e349f45f ]---
> > [    2.793340] usb 1-1: New USB device found, idVendor=2109,
> > idProduct=3431, bcdDevice= 4.20
> > [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> > [    2.808297] usb 1-1: Product: USB2.0 Hub
> > [    2.813710] hub 1-1:1.0: USB hub found
> > [    2.817117] hub 1-1:1.0: 4 ports detected
> >
> > This is bailing out right here:
> >      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
> >
> > FWIW (since I modified the source to actually print what was returned
> > right before it bails) we get:
> >    buf[1] == 0x800000004
> >    buf[2] == 0x00000001
> >
> > Status 0x800000004 is of course suspicious since it is not even listed here:
> >     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
> >
> > So it appears that this DMA request path is somehow busted and it
> > would be really nice to figure out why.
>
> You have actually discovered a genuine bug in the recent xen dma rework
> in Linux. Congrats :-)

Nice! ;-)

> I am doing some guesswork here, but from what I read in the thread and
> the information in this email I think this patch might fix the issue.
> If it doesn't fix the issue please add a few printks in
> drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and please let me
> know where exactly it crashes.
>
>
> diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
> index b9cc11e887ed..ff4677ed9788 100644
> --- a/include/xen/arm/page-coherent.h
> +++ b/include/xen/arm/page-coherent.h
> @@ -8,12 +8,17 @@
>  static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
>                 dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
>  {
> +       void *cpu_addr;
> +       if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle, &cpu_addr))
> +               return cpu_addr;
>         return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
>  }
>
>  static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
>                 void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
>  {
> +       if (dma_release_from_dev_coherent(hwdev, get_order(size), cpu_addr))
> +               return;
>         dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
>  }

I applied the patch, but it didn't help; after adding printk's it
turns out it surprisingly crashes right inside this (rather
convoluted, if you ask me) if statement:
    https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/xen/swiotlb-xen.c?h=v5.6.1#n349

So it makes sense that the patch didn't help -- we never hit that
xen_free_coherent_pages.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Tue May 05 06:16:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 06:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVqsH-0001up-JW; Tue, 05 May 2020 06:16:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVqsG-0001uk-5q
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 06:16:12 +0000
X-Inumbo-ID: ea2236aa-8e97-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea2236aa-8e97-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 06:16:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2BDCBAC85;
 Tue,  5 May 2020 06:16:12 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3] x86/PV: remove unnecessary toggle_guest_pt() overhead
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <24d8b606-f74b-9367-d67e-e952838c7048@suse.com>
Date: Tue, 5 May 2020 08:16:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While the mere updating of ->pv_cr3 and ->root_pgt_changed isn't overly
expensive (but is still needed only for the toggle_guest_mode() path), the
effect of the latter on the exit-to-guest path is not insignificant.
Move the logic into toggle_guest_mode(), on the basis that
toggle_guest_pt() will always be invoked in pairs, yet we can't safely
undo the setting of root_pgt_changed during the second of these
invocations.

While at it, add a comment ahead of toggle_guest_pt() to clarify its
intended usage.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Add comment ahead of toggle_guest_pt().
v2: Extend description.

--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -393,18 +393,10 @@ bool __init xpti_pcid_enabled(void)
 
 static void _toggle_guest_pt(struct vcpu *v)
 {
-    const struct domain *d = v->domain;
-    struct cpu_info *cpu_info = get_cpu_info();
     unsigned long cr3;
 
     v->arch.flags ^= TF_kernel_mode;
     update_cr3(v);
-    if ( d->arch.pv.xpti )
-    {
-        cpu_info->root_pgt_changed = true;
-        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
-                           (d->arch.pv.pcid ? get_pcid_bits(v, true) : 0);
-    }
 
     /*
      * Don't flush user global mappings from the TLB. Don't tick TLB clock.
@@ -412,15 +404,11 @@ static void _toggle_guest_pt(struct vcpu
      * In shadow mode, though, update_cr3() may need to be accompanied by a
      * TLB flush (for just the incoming PCID), as the top level page table may
      * have changed behind our backs. To be on the safe side, suppress the
-     * no-flush unconditionally in this case. The XPTI CR3 write, if enabled,
-     * will then need to be a flushing one too.
+     * no-flush unconditionally in this case.
      */
     cr3 = v->arch.cr3;
-    if ( shadow_mode_enabled(d) )
-    {
+    if ( shadow_mode_enabled(v->domain) )
         cr3 &= ~X86_CR3_NOFLUSH;
-        cpu_info->pv_cr3 &= ~X86_CR3_NOFLUSH;
-    }
     write_cr3(cr3);
 
     if ( !(v->arch.flags & TF_kernel_mode) )
@@ -436,6 +424,8 @@ static void _toggle_guest_pt(struct vcpu
 
 void toggle_guest_mode(struct vcpu *v)
 {
+    const struct domain *d = v->domain;
+
     ASSERT(!is_pv_32bit_vcpu(v));
 
     /* %fs/%gs bases can only be stale if WR{FS,GS}BASE are usable. */
@@ -449,8 +439,27 @@ void toggle_guest_mode(struct vcpu *v)
     asm volatile ( "swapgs" );
 
     _toggle_guest_pt(v);
+
+    if ( d->arch.pv.xpti )
+    {
+        struct cpu_info *cpu_info = get_cpu_info();
+
+        cpu_info->root_pgt_changed = true;
+        cpu_info->pv_cr3 = __pa(this_cpu(root_pgt)) |
+                           (d->arch.pv.pcid ? get_pcid_bits(v, true) : 0);
+        /*
+         * As in _toggle_guest_pt() the XPTI CR3 write needs to be a TLB-
+         * flushing one too for shadow mode guests.
+         */
+        if ( shadow_mode_enabled(d) )
+            cpu_info->pv_cr3 &= ~X86_CR3_NOFLUSH;
+    }
 }
 
+/*
+ * Must be called in matching pairs without returning to guest context
+ * in between.
+ */
 void toggle_guest_pt(struct vcpu *v)
 {
     if ( !is_pv_32bit_vcpu(v) )


From xen-devel-bounces@lists.xenproject.org Tue May 05 06:26:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 06:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVr24-0002n0-Is; Tue, 05 May 2020 06:26:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVr23-0002mv-UC
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 06:26:19 +0000
X-Inumbo-ID: 538d687b-8e99-11ea-9d8c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 538d687b-8e99-11ea-9d8c-12813bfff9fa;
 Tue, 05 May 2020 06:26:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 020FAAB8F;
 Tue,  5 May 2020 06:26:19 +0000 (UTC)
Subject: Re: [PATCH v2 4/4] x86: adjustments to guest handle treatment
To: Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <julien@xen.org>, 
 Stefano Stabellini <sstabellini@kernel.org>
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
 <e820e1b9-7a7e-21f3-1ea0-d939de1905dd@suse.com>
 <20200422082610.GA28601@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0b43670b-cc0b-0b0b-ef24-4734de35d4b7@suse.com>
Date: Tue, 5 May 2020 08:26:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200422082610.GA28601@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.04.2020 10:26, Roger Pau Monné wrote:
> On Tue, Apr 21, 2020 at 11:13:23AM +0200, Jan Beulich wrote:
>> First of all avoid excessive conversions. copy_{from,to}_guest(), for
>> example, work fine with all of XEN_GUEST_HANDLE{,_64,_PARAM}().
>>
>> Further
>> - do_physdev_op_compat() didn't use the param form for its parameter,
>> - {hap,shadow}_track_dirty_vram() wrongly used the param form,
>> - compat processor Px logic failed to check compatibility of native and
>>   compat structures not further converted.
>>
>> As this eliminates all users of guest_handle_from_param() and as there's
>> no real need to allow for conversions in both directions, drop the
>> macros as well.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>[...]
>> --- a/xen/drivers/acpi/pmstat.c
>> +++ b/xen/drivers/acpi/pmstat.c
>> @@ -492,7 +492,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op
>>      return ret;
>>  }
>>  
>> -int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
>> +int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
> 
> Nit: switch to uint32_t while there?
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Unless I hear objections, I intend to commit this in a day or two
with the suggested change made and the R-b given. Of course a
formally required ack for the Arm-side dropping of
guest_handle_from_param() would still be nice ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 06:31:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 06:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVr77-0003bF-5s; Tue, 05 May 2020 06:31:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVr76-0003bA-62
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 06:31:32 +0000
X-Inumbo-ID: 0e19caf9-8e9a-11ea-9d8c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e19caf9-8e9a-11ea-9d8c-12813bfff9fa;
 Tue, 05 May 2020 06:31:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E757EAB8F;
 Tue,  5 May 2020 06:31:31 +0000 (UTC)
Subject: Re: [PATCH v2 1/4] x86/mm: no-one passes a NULL domain to
 init_xen_l4_slots()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
 <8787b72e-c71e-b75d-2ca0-0c6fe7c8259f@suse.com>
 <20200421164055.GW28601@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4779dde6-6582-1776-ea9b-a2cd46ac3bc3@suse.com>
Date: Tue, 5 May 2020 08:31:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200421164055.GW28601@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew,

On 21.04.2020 18:40, Roger Pau Monné wrote:
> On Tue, Apr 21, 2020 at 11:11:03AM +0200, Jan Beulich wrote:
>> Drop the NULL checks - they've been introduced by commit 8d7b633ada
>> ("x86/mm: Consolidate all Xen L4 slot writing into
>> init_xen_l4_slots()") for no apparent reason.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

you weren't entirely happy with the change because of the
possible (or, as you state, necessary) need to undo this. I
still think in the current shape the NULL checks are
pointless and hence would better go away. Re-introducing them
(adjusted to whatever shape the function may be in by that
time) is not that big of a problem. May I ask that you
explicitly clarify whether you actively NAK the patch, accept
it going in with Roger's R-b, or would be willing to ack it?

Thanks, Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 08:10:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsej-0003jM-26; Tue, 05 May 2020 08:10:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsei-0003jH-4I
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:10:20 +0000
X-Inumbo-ID: db91352c-8ea7-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db91352c-8ea7-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 08:10:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 99C70AC1F;
 Tue,  5 May 2020 08:10:19 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v8 00/12] x86emul: further work
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Date: Tue, 5 May 2020 10:10:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The RDPRU patch is still at least partly RFC. I'd appreciate it,
though, if at least some of the series could go in sooner rather
than later. Note in particular that there's no strong ordering
throughout the entire series, i.e. certain later parts could be
arranged to go in earlier. This is also specifically the case for
what is now the last patch.

 1: x86emul: disable FPU/MMX/SIMD insn emulation when !HVM
 2: x86emul: support MOVDIR{I,64B} insns
 3: x86emul: support ENQCMD insn
 4: x86emul: support SERIALIZE
 5: x86emul: support X{SUS,RES}LDTRK
 6: x86/HVM: make hvmemul_blk() capable of handling r/o operations
 7: x86emul: support FNSTENV and FNSAVE
 8: x86emul: support FLDENV and FRSTOR
 9: x86emul: support FXSAVE/FXRSTOR
10: x86/HVM: scale MPERF values reported to guests (on AMD)
11: x86emul: support RDPRU
12: x86/HVM: don't needlessly intercept APERF/MPERF/TSC MSR reads

Main changes from v7 are the new patch 6 and quite a bit of re-work
of what is now patch 9. See individual patches for revision details.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 08:12:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:12:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVshD-0003q3-GW; Tue, 05 May 2020 08:12:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVshC-0003px-B7
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:12:54 +0000
X-Inumbo-ID: 3754f43e-8ea8-11ea-9d93-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3754f43e-8ea8-11ea-9d93-12813bfff9fa;
 Tue, 05 May 2020 08:12:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0CC59AC1F;
 Tue,  5 May 2020 08:12:53 +0000 (UTC)
Subject: [PATCH v8 01/12] x86emul: disable FPU/MMX/SIMD insn emulation when
 !HVM
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <6110ec4d-36a7-efa7-fb86-069ec5b83ac2@suse.com>
Date: Tue, 5 May 2020 10:12:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In a pure PV environment (the PV shim in particular) we don't really
need emulation of all these. To limit #ifdef-ary, utilize some of the
CASE_*() macros we have, by providing variants expanding to
(effectively) nothing (really a label, which in turn requires passing
-Wno-unused-label to the compiler when building such configurations).

Due to the mixture of macro and #ifdef use, the placement of some of
the #ifdef-s is a little arbitrary.

The resulting object file's .text is less than half its original size,
and the file also appears to compile a little more quickly.

This is meant as a first step; more parts can likely be disabled down
the road.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v7: Integrate into this series. Re-base.
---
I'll be happy to take suggestions on how to avoid -Wno-unused-label.
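For readers unfamiliar with the trick described above, here is a
minimal, purely hypothetical sketch (the real CASE_*() macros below
take prefix/opcode arguments and paste several case labels): when the
feature is compiled out, the macro degenerates into an ordinary,
unused label, which keeps the switch body syntactically valid but
trips -Wunused-label, hence the need for the flag.

```c
/* Hypothetical miniature of the CASE_*() trick; not the actual Xen
 * macros.  With FEATURE_SIMD defined the macro yields a real case
 * label; without it, only a plain (unused) label remains, so the
 * opcode is no longer dispatched but the code still compiles. */
#define FEATURE_SIMD 1          /* drop this to get the disabled variant */

#ifdef FEATURE_SIMD
# define CASE_SIMD(opc) case (opc)
#else
# define CASE_SIMD(opc) simd_dead_##opc /* unused label => -Wno-unused-label */
#endif

int dispatch(int op)
{
    switch ( op )
    {
    CASE_SIMD(0x10):
        return 1;               /* reachable only when FEATURE_SIMD is set */
    default:
        return 0;
    }
}
```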

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -73,6 +73,9 @@ obj-y += vm_event.o
 obj-y += xstate.o
 extra-y += asm-macros.i
 
+ifneq ($(CONFIG_HVM),y)
+x86_emulate.o: CFLAGS-y += -Wno-unused-label
+endif
 x86_emulate.o: x86_emulate/x86_emulate.c x86_emulate/x86_emulate.h
 
 efi-y := $(shell if [ ! -r $(BASEDIR)/include/xen/compile.h -o \
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -42,6 +42,12 @@
     }                                                      \
 })
 
+#ifndef CONFIG_HVM
+# define X86EMUL_NO_FPU
+# define X86EMUL_NO_MMX
+# define X86EMUL_NO_SIMD
+#endif
+
 #include "x86_emulate/x86_emulate.c"
 
 int x86emul_read_xcr(unsigned int reg, uint64_t *val,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3492,6 +3492,7 @@ x86_decode(
             op_bytes = 4;
         break;
 
+#ifndef X86EMUL_NO_SIMD
     case simd_packed_int:
         switch ( vex.pfx )
         {
@@ -3557,6 +3558,7 @@ x86_decode(
     case simd_256:
         op_bytes = 32;
         break;
+#endif /* !X86EMUL_NO_SIMD */
 
     default:
         op_bytes = 0;
@@ -3711,6 +3713,7 @@ x86_emulate(
         break;
     }
 
+#ifndef X86EMUL_NO_SIMD
     /* With a memory operand, fetch the mask register in use (if any). */
     if ( ea.type == OP_MEM && evex.opmsk &&
          _get_fpu(fpu_type = X86EMUL_FPU_opmask, ctxt, ops) == X86EMUL_OKAY )
@@ -3741,6 +3744,7 @@ x86_emulate(
         put_fpu(X86EMUL_FPU_opmask, false, state, ctxt, ops);
         fpu_type = X86EMUL_FPU_none;
     }
+#endif /* !X86EMUL_NO_SIMD */
 
     /* Decode (but don't fetch) the destination operand: register or memory. */
     switch ( d & DstMask )
@@ -4386,11 +4390,13 @@ x86_emulate(
         singlestep = _regs.eflags & X86_EFLAGS_TF;
         break;
 
+#ifndef X86EMUL_NO_FPU
     case 0x9b:  /* wait/fwait */
         host_and_vcpu_must_have(fpu);
         get_fpu(X86EMUL_FPU_wait);
         emulate_fpu_insn_stub(b);
         break;
+#endif
 
     case 0x9c: /* pushf */
         if ( (_regs.eflags & X86_EFLAGS_VM) &&
@@ -4800,6 +4806,7 @@ x86_emulate(
         break;
     }
 
+#ifndef X86EMUL_NO_FPU
     case 0xd8: /* FPU 0xd8 */
         host_and_vcpu_must_have(fpu);
         get_fpu(X86EMUL_FPU_fpu);
@@ -5134,6 +5141,7 @@ x86_emulate(
             }
         }
         break;
+#endif /* !X86EMUL_NO_FPU */
 
     case 0xe0 ... 0xe2: /* loop{,z,nz} */ {
         unsigned long count = get_loop_count(&_regs, ad_bytes);
@@ -6079,6 +6087,8 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0x19) ... X86EMUL_OPC(0x0f, 0x1f): /* nop */
         break;
 
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0x0e): /* femms */
         host_and_vcpu_must_have(3dnow);
         asm volatile ( "femms" );
@@ -6099,39 +6109,71 @@ x86_emulate(
         state->simd_size = simd_other;
         goto simd_0f_imm8;
 
-#define CASE_SIMD_PACKED_INT(pfx, opc)       \
+#endif /* !X86EMUL_NO_MMX */
+
+#if !defined(X86EMUL_NO_SIMD) && !defined(X86EMUL_NO_MMX)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
     case X86EMUL_OPC(pfx, opc):              \
     case X86EMUL_OPC_66(pfx, opc)
-#define CASE_SIMD_PACKED_INT_VEX(pfx, opc)   \
+#elif !defined(X86EMUL_NO_SIMD)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
+    case X86EMUL_OPC_66(pfx, opc)
+#elif !defined(X86EMUL_NO_MMX)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
+    case X86EMUL_OPC(pfx, opc)
+#else
+# define CASE_SIMD_PACKED_INT(pfx, opc) C##pfx##_##opc
+#endif
+
+#ifndef X86EMUL_NO_SIMD
+
+# define CASE_SIMD_PACKED_INT_VEX(pfx, opc)  \
     CASE_SIMD_PACKED_INT(pfx, opc):          \
     case X86EMUL_OPC_VEX_66(pfx, opc)
 
-#define CASE_SIMD_ALL_FP(kind, pfx, opc)     \
+# define CASE_SIMD_ALL_FP(kind, pfx, opc)    \
     CASE_SIMD_PACKED_FP(kind, pfx, opc):     \
     CASE_SIMD_SCALAR_FP(kind, pfx, opc)
-#define CASE_SIMD_PACKED_FP(kind, pfx, opc)  \
+# define CASE_SIMD_PACKED_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind(pfx, opc):        \
     case X86EMUL_OPC##kind##_66(pfx, opc)
-#define CASE_SIMD_SCALAR_FP(kind, pfx, opc)  \
+# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind##_F3(pfx, opc):   \
     case X86EMUL_OPC##kind##_F2(pfx, opc)
-#define CASE_SIMD_SINGLE_FP(kind, pfx, opc)  \
+# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind(pfx, opc):        \
     case X86EMUL_OPC##kind##_F3(pfx, opc)
 
-#define CASE_SIMD_ALL_FP_VEX(pfx, opc)       \
+# define CASE_SIMD_ALL_FP_VEX(pfx, opc)      \
     CASE_SIMD_ALL_FP(, pfx, opc):            \
     CASE_SIMD_ALL_FP(_VEX, pfx, opc)
-#define CASE_SIMD_PACKED_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_PACKED_FP_VEX(pfx, opc)   \
     CASE_SIMD_PACKED_FP(, pfx, opc):         \
     CASE_SIMD_PACKED_FP(_VEX, pfx, opc)
-#define CASE_SIMD_SCALAR_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc)   \
     CASE_SIMD_SCALAR_FP(, pfx, opc):         \
     CASE_SIMD_SCALAR_FP(_VEX, pfx, opc)
-#define CASE_SIMD_SINGLE_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc)   \
     CASE_SIMD_SINGLE_FP(, pfx, opc):         \
     CASE_SIMD_SINGLE_FP(_VEX, pfx, opc)
 
+#else
+
+# define CASE_SIMD_PACKED_INT_VEX(pfx, opc)  \
+    CASE_SIMD_PACKED_INT(pfx, opc)
+
+# define CASE_SIMD_ALL_FP(kind, pfx, opc)    C##kind##pfx##_##opc
+# define CASE_SIMD_PACKED_FP(kind, pfx, opc) Cp##kind##pfx##_##opc
+# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) Cs##kind##pfx##_##opc
+# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) C##kind##pfx##_##opc
+
+# define CASE_SIMD_ALL_FP_VEX(pfx, opc)    CASE_SIMD_ALL_FP(, pfx, opc)
+# define CASE_SIMD_PACKED_FP_VEX(pfx, opc) CASE_SIMD_PACKED_FP(, pfx, opc)
+# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc) CASE_SIMD_SCALAR_FP(, pfx, opc)
+# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc) CASE_SIMD_SINGLE_FP(, pfx, opc)
+
+#endif
+
     CASE_SIMD_SCALAR_FP(, 0x0f, 0x2b):     /* movnts{s,d} xmm,mem */
         host_and_vcpu_must_have(sse4a);
         /* fall through */
@@ -6269,6 +6311,8 @@ x86_emulate(
         insn_bytes = EVEX_PFX_BYTES + 2;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x12):       /* movlpd m64,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0x12):   /* vmovlpd m64,xmm,xmm */
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x13):   /* movlp{s,d} xmm,m64 */
@@ -6375,6 +6419,8 @@ x86_emulate(
         avx512_vlen_check(false);
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0x20): /* mov cr,reg */
     case X86EMUL_OPC(0x0f, 0x21): /* mov dr,reg */
     case X86EMUL_OPC(0x0f, 0x22): /* mov reg,cr */
@@ -6401,6 +6447,8 @@ x86_emulate(
             goto done;
         break;
 
+#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD)
+
     case X86EMUL_OPC_66(0x0f, 0x2a):       /* cvtpi2pd mm/m64,xmm */
         if ( ea.type == OP_REG )
         {
@@ -6412,6 +6460,8 @@ x86_emulate(
         op_bytes = (b & 4) && (vex.pfx & VEX_PREFIX_DOUBLE_MASK) ? 16 : 8;
         goto simd_0f_fp;
 
+#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */
+
     CASE_SIMD_SCALAR_FP_VEX(0x0f, 0x2a):   /* {,v}cvtsi2s{s,d} r/m,xmm */
         if ( vex.opcx == vex_none )
         {
@@ -6758,6 +6808,8 @@ x86_emulate(
             dst.val = src.val;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0x4a):    /* kadd{w,q} k,k,k */
         if ( !vex.w )
             host_and_vcpu_must_have(avx512dq);
@@ -6812,6 +6864,8 @@ x86_emulate(
         generate_exception_if(!vex.l || vex.w, EXC_UD);
         goto opmask_common;
 
+#endif /* X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x50):   /* movmskp{s,d} xmm,reg */
                                            /* vmovmskp{s,d} {x,y}mm,reg */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xd7):  /* pmovmskb {,x}mm,reg */
@@ -6895,6 +6949,8 @@ x86_emulate(
                          evex.w);
         goto avx512f_all_fp;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x5b):   /* cvt{ps,dq}2{dq,ps} xmm/mem,xmm */
                                            /* vcvt{ps,dq}2{dq,ps} {x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_F3(0x0f, 0x5b):       /* cvttps2dq xmm/mem,xmm */
@@ -6925,6 +6981,8 @@ x86_emulate(
         op_bytes = 16 << evex.lr;
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x60): /* punpcklbw {,x}mm/mem,{,x}mm */
                                           /* vpunpcklbw {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x61): /* punpcklwd {,x}mm/mem,{,x}mm */
@@ -6951,6 +7009,7 @@ x86_emulate(
                                           /* vpackusbw {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6b): /* packsswd {,x}mm/mem,{,x}mm */
                                           /* vpacksswd {x,y}mm/mem,{x,y}mm,{x,y}mm */
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_66(0x0f, 0x6c):     /* punpcklqdq xmm/m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0x6c): /* vpunpcklqdq {x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_66(0x0f, 0x6d):     /* punpckhqdq xmm/m128,xmm */
@@ -7035,6 +7094,7 @@ x86_emulate(
                                           /* vpsubd {x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_66(0x0f, 0xfb):     /* psubq xmm/m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0xfb): /* vpsubq {x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif /* !X86EMUL_NO_SIMD */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfc): /* paddb {,x}mm/mem,{,x}mm */
                                           /* vpaddb {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfd): /* paddw {,x}mm/mem,{,x}mm */
@@ -7042,6 +7102,7 @@ x86_emulate(
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfe): /* paddd {,x}mm/mem,{,x}mm */
                                           /* vpaddd {x,y}mm/mem,{x,y}mm,{x,y}mm */
     simd_0f_int:
+#ifndef X86EMUL_NO_SIMD
         if ( vex.opcx != vex_none )
         {
     case X86EMUL_OPC_VEX_66(0x0f38, 0x00): /* vpshufb {x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7083,11 +7144,14 @@ x86_emulate(
         }
         if ( vex.pfx )
             goto simd_0f_sse2;
+#endif /* !X86EMUL_NO_SIMD */
     simd_0f_mmx:
         host_and_vcpu_must_have(mmx);
         get_fpu(X86EMUL_FPU_mmx);
         goto simd_0f_common;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xf6): /* vpsadbw [xyz]mm/mem,[xyz]mm,[xyz]mm */
         generate_exception_if(evex.opmsk, EXC_UD);
         /* fall through */
@@ -7181,6 +7245,8 @@ x86_emulate(
         generate_exception_if(!evex.w, EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6e): /* mov{d,q} r/m,{,x}mm */
                                           /* vmov{d,q} r/m,xmm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x7e): /* mov{d,q} {,x}mm,r/m */
@@ -7222,6 +7288,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6e): /* vmov{d,q} r/m,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x7e): /* vmov{d,q} xmm,r/m */
         generate_exception_if((evex.lr || evex.opmsk || evex.brs ||
@@ -7294,11 +7362,15 @@ x86_emulate(
         d |= TwoOp;
         /* fall through */
     case X86EMUL_OPC_66(0x0f, 0xd6):     /* movq xmm,xmm/m64 */
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
     case X86EMUL_OPC(0x0f, 0x6f):        /* movq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0x7f):        /* movq mm,mm/m64 */
+#endif
         op_bytes = 8;
         goto simd_0f_int;
 
+#ifndef X86EMUL_NO_SIMD
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x70):/* pshuf{w,d} $imm8,{,x}mm/mem,{,x}mm */
                                          /* vpshufd $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_F3(0x0f, 0x70):     /* pshufhw $imm8,xmm/m128,xmm */
@@ -7307,12 +7379,15 @@ x86_emulate(
     case X86EMUL_OPC_VEX_F2(0x0f, 0x70): /* vpshuflw $imm8,{x,y}mm/mem,{x,y}mm */
         d = (d & ~SrcMask) | SrcMem | TwoOp;
         op_bytes = vex.pfx ? 16 << vex.l : 8;
+#endif
     simd_0f_int_imm8:
         if ( vex.opcx != vex_none )
         {
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0e): /* vpblendw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0f): /* vpalignr $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x42): /* vmpsadbw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif
             if ( vex.l )
             {
     simd_0f_imm8_avx2:
@@ -7320,6 +7395,7 @@ x86_emulate(
             }
             else
             {
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x08): /* vroundps $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x09): /* vroundpd $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0a): /* vroundss $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7327,6 +7403,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0c): /* vblendps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0d): /* vblendpd $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x40): /* vdpps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif
     simd_0f_imm8_avx:
                 host_and_vcpu_must_have(avx);
             }
@@ -7360,6 +7437,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x70): /* vpshufd $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x70): /* vpshufhw $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F2(0x0f, 0x70): /* vpshuflw $imm8,[xyz]mm/mem,[xyz]mm{k} */
@@ -7418,6 +7497,9 @@ x86_emulate(
         opc[1] = modrm;
         opc[2] = imm1;
         insn_bytes = PFX_BYTES + 3;
+
+#endif /* X86EMUL_NO_SIMD */
+
     simd_0f_reg_only:
         opc[insn_bytes - PFX_BYTES] = 0xc3;
 
@@ -7428,6 +7510,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x71): /* Grp12 */
         switch ( modrm_reg & 7 )
         {
@@ -7459,6 +7543,9 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0x73):        /* Grp14 */
         switch ( modrm_reg & 7 )
         {
@@ -7468,6 +7555,9 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_MMX */
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x73):
     case X86EMUL_OPC_VEX_66(0x0f, 0x73):
         switch ( modrm_reg & 7 )
@@ -7498,7 +7588,12 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+
+#ifndef X86EMUL_NO_MMX
     case X86EMUL_OPC(0x0f, 0x77):        /* emms */
+#endif
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX(0x0f, 0x77):    /* vzero{all,upper} */
         if ( vex.opcx != vex_none )
         {
@@ -7544,6 +7639,7 @@ x86_emulate(
 #endif
         }
         else
+#endif /* !X86EMUL_NO_SIMD */
         {
             host_and_vcpu_must_have(mmx);
             get_fpu(X86EMUL_FPU_mmx);
@@ -7557,6 +7653,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 1;
         goto simd_0f_reg_only;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x78):     /* Grp17 */
         switch ( modrm_reg & 7 )
         {
@@ -7654,6 +7752,8 @@ x86_emulate(
         op_bytes = 8;
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0x80) ... X86EMUL_OPC(0x0f, 0x8f): /* jcc (near) */
         if ( test_cc(b, _regs.eflags) )
             jmp_rel((int32_t)src.val);
@@ -7664,6 +7764,8 @@ x86_emulate(
         dst.val = test_cc(b, _regs.eflags);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0x91):    /* kmov{w,q} k,mem */
     case X86EMUL_OPC_VEX_66(0x0f, 0x91): /* kmov{b,d} k,mem */
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
@@ -7812,6 +7914,8 @@ x86_emulate(
         dst.type = OP_NONE;
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
         msr_val = 0;
         fail_if(ops->cpuid == NULL);
@@ -7908,6 +8012,7 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
+#ifndef X86EMUL_NO_SIMD
         case 2: /* ldmxcsr */
             generate_exception_if(vex.pfx, EXC_UD);
             vcpu_must_have(sse);
@@ -7926,6 +8031,7 @@ x86_emulate(
             get_fpu(vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm);
             asm volatile ( "stmxcsr %0" : "=m" (dst.val) );
             break;
+#endif /* X86EMUL_NO_SIMD */
 
         case 5: /* lfence */
             fail_if(modrm_mod != 3);
@@ -7974,6 +8080,8 @@ x86_emulate(
         }
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
@@ -7988,6 +8096,8 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_F3(0x0f, 0xae): /* Grp15 */
         fail_if(modrm_mod != 3);
         generate_exception_if((modrm_reg & 4) || !mode_64bit(), EXC_UD);
@@ -8227,6 +8337,8 @@ x86_emulate(
         }
         goto simd_0f_imm8_avx;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_ALL_FP(_EVEX, 0x0f, 0xc2): /* vcmp{p,s}{s,d} $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
         generate_exception_if((evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK) ||
                                (ea.type != OP_REG && evex.brs &&
@@ -8253,6 +8365,8 @@ x86_emulate(
         insn_bytes = EVEX_PFX_BYTES + 3;
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xc3): /* movnti */
         /* Ignore the non-temporal hint for now. */
         vcpu_must_have(sse2);
@@ -8267,6 +8381,8 @@ x86_emulate(
         ea.type = OP_MEM;
         goto simd_0f_int_imm8;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xc4):   /* vpinsrw $imm8,r32/m16,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x20): /* vpinsrb $imm8,r32/m8,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x22): /* vpinsr{d,q} $imm8,r/m,xmm,xmm */
@@ -8284,6 +8400,8 @@ x86_emulate(
         state->simd_size = simd_other;
         goto avx512f_imm8_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xc5):  /* pextrw $imm8,{,x}mm,reg */
                                            /* vpextrw $imm8,xmm,reg */
         generate_exception_if(vex.l, EXC_UD);
@@ -8299,6 +8417,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         goto simd_0f_to_gpr;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0xc6): /* vshufp{s,d} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK),
                               EXC_UD);
@@ -8313,6 +8433,8 @@ x86_emulate(
         avx512_vlen_check(false);
         goto simd_imm8_zmm;
 
+#endif /* X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xc7): /* Grp9 */
     {
         union {
@@ -8503,6 +8625,8 @@ x86_emulate(
         }
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd2): /* vpsrld xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd3): /* vpsrlq xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe2): /* vpsra{d,q} xmm/m128,[xyz]mm,[xyz]mm{k} */
@@ -8524,12 +8648,18 @@ x86_emulate(
         generate_exception_if(evex.w != (b & 1), EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0xd4):        /* paddq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0xf4):        /* pmuludq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0xfb):        /* psubq mm/m64,mm */
         vcpu_must_have(sse2);
         goto simd_0f_mmx;
 
+#endif /* !X86EMUL_NO_MMX */
+#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD)
+
     case X86EMUL_OPC_F3(0x0f, 0xd6):     /* movq2dq mm,xmm */
     case X86EMUL_OPC_F2(0x0f, 0xd6):     /* movdq2q xmm,mm */
         generate_exception_if(ea.type != OP_REG, EXC_UD);
@@ -8537,6 +8667,9 @@ x86_emulate(
         host_and_vcpu_must_have(mmx);
         goto simd_0f_int;
 
+#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0xe7):        /* movntq mm,m64 */
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
         sfence = true;
@@ -8552,6 +8685,9 @@ x86_emulate(
         vcpu_must_have(mmxext);
         goto simd_0f_mmx;
 
+#endif /* !X86EMUL_NO_MMX */
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xda): /* vpminub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xde): /* vpmaxub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe4): /* vpmulhuw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -8572,6 +8708,8 @@ x86_emulate(
         op_bytes = 8 << (!!(vex.pfx & VEX_PREFIX_DOUBLE_MASK) + vex.l);
         goto simd_0f_cvt;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* {,v}maskmov{q,dqu} {,x}mm,{,x}mm */
         generate_exception_if(ea.type != OP_REG, EXC_UD);
         if ( vex.opcx != vex_none )
@@ -8675,6 +8813,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX_66(0x0f38, 0x19): /* vbroadcastsd xmm/m64,ymm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x1a): /* vbroadcastf128 m128,ymm */
         generate_exception_if(!vex.l, EXC_UD);
@@ -9257,6 +9397,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_66(0x0f38, 0x82): /* invpcid reg,m128 */
         vcpu_must_have(invpcid);
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
@@ -9299,6 +9441,8 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x83): /* vpmultishiftqb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(!evex.w, EXC_UD);
         host_and_vcpu_must_have(avx512_vbmi);
@@ -9862,6 +10006,8 @@ x86_emulate(
         generate_exception_if(evex.brs || evex.opmsk, EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f38, 0xf0): /* movbe m,r */
     case X86EMUL_OPC(0x0f38, 0xf1): /* movbe r,m */
         vcpu_must_have(movbe);
@@ -10027,6 +10173,8 @@ x86_emulate(
                             : "0" ((uint32_t)src.val), "rm" (_regs.edx) );
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x01): /* vpermpd $imm8,ymm/m256,ymm */
         generate_exception_if(!vex.l || !vex.w, EXC_UD);
@@ -10087,6 +10235,8 @@ x86_emulate(
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
 
+#endif /* X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT(0x0f3a, 0x0f): /* palignr $imm8,{,x}mm/mem,{,x}mm */
         host_and_vcpu_must_have(ssse3);
         if ( vex.pfx )
@@ -10114,6 +10264,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 4;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x42): /* vdbpsadbw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w, EXC_UD);
         /* fall through */
@@ -10612,6 +10764,8 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         goto simd_0f_imm8_avx;
 
+#endif /* X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_VEX_F2(0x0f3a, 0xf0): /* rorx imm,r/m,r */
         vcpu_must_have(bmi2);
         generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
@@ -10626,6 +10780,8 @@ x86_emulate(
             asm ( "rorl %b1,%k0" : "=g" (dst.val) : "c" (imm1), "0" (src.val) );
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */
@@ -10661,6 +10817,8 @@ x86_emulate(
         host_and_vcpu_must_have(xop);
         goto simd_0f_imm8_ymm;
 
+#endif /* X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_XOP(09, 0x01): /* XOP Grp1 */
         switch ( modrm_reg & 7 )
         {
@@ -10720,6 +10878,8 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_XOP(09, 0x82): /* vfrczss xmm/m128,xmm */
     case X86EMUL_OPC_XOP(09, 0x83): /* vfrczsd xmm/m128,xmm */
         generate_exception_if(vex.l, EXC_UD);
@@ -10775,6 +10935,8 @@ x86_emulate(
         host_and_vcpu_must_have(xop);
         goto simd_0f_ymm;
 
+#endif /* X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_XOP(0a, 0x10): /* bextr imm,r/m,r */
     {
         uint8_t *buf = get_stub(stub);



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:13:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:13:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVshY-0003u8-TQ; Tue, 05 May 2020 08:13:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVshX-0003tz-E3
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:13:15 +0000
X-Inumbo-ID: 4423c85c-8ea8-11ea-9d93-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4423c85c-8ea8-11ea-9d93-12813bfff9fa;
 Tue, 05 May 2020 08:13:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 86720AC1F;
 Tue,  5 May 2020 08:13:15 +0000 (UTC)
Subject: [PATCH v8 02/12] x86emul: support MOVDIR{I,64B} insns
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <04e52d0a-fcce-eba4-0341-3b8838c0faae@suse.com>
Date: Tue, 5 May 2020 10:13:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce a new blk() hook, paralleling the rmw() one in a certain way,
but intended for larger data sizes; hence its HVM intermediate handling
function doesn't fall back to splitting the operation if the requested
virtual address can't be mapped.

Note that SDM revision 071 doesn't specify exception behavior for
ModRM.mod == 0b11; assuming #UD here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
v7: Add blk_NONE. Move  harness'es setting of .blk. Correct indentation.
    Re-base.
v6: Fold MOVDIRI and MOVDIR64B changes again. Use blk() for both. All
    tags dropped.
v5: Introduce/use ->blk() hook. Correct asm() operands.
v4: Split MOVDIRI and MOVDIR64B and move this one ahead. Re-base.
v3: Update description.
---
(SDE: -tnt)
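As a data-movement illustration only (a hypothetical helper, not part
of this patch; the actual emulation goes through the blk() hook and
x86_emul_blk() in the diff below), MOVDIR64B copies one 64-byte chunk
to a 64-byte-aligned destination, faulting on a misaligned one. A
plain memcpy() models only the copy, not the single non-torn 64-byte
store real hardware issues:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the data movement MOVDIR64B emulation has to
 * perform: copy one 64-byte line into a 64-byte-aligned destination.
 * Returns 0 on success, -1 for the misaligned-destination fault case. */
int emul_movdir64b(void *dst, const void *src)
{
    if ( (uintptr_t)dst & 63 )
        return -1;              /* destination must be 64-byte aligned */
    memcpy(dst, src, 64);
    return 0;
}
```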

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -652,6 +652,18 @@ static int cmpxchg(
     return X86EMUL_OKAY;
 }
 
+static int blk(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    return x86_emul_blk((void *)offset, p_data, bytes, eflags, state, ctxt);
+}
+
 static int read_segment(
     enum x86_segment seg,
     struct segment_register *reg,
@@ -721,6 +733,7 @@ static struct x86_emulate_ops emulops =
     .insn_fetch = fetch,
     .write      = write,
     .cmpxchg    = cmpxchg,
+    .blk        = blk,
     .read_segment = read_segment,
     .cpuid      = emul_test_cpuid,
     .read_cr    = emul_test_read_cr,
@@ -2339,6 +2352,50 @@ int main(int argc, char **argv)
         goto fail;
     printf("okay\n");
 
+    printf("%-40s", "Testing movdiri %edx,(%ecx)...");
+    if ( stack_exec && cpu_has_movdiri )
+    {
+        instr[0] = 0x0f; instr[1] = 0x38; instr[2] = 0xf9; instr[3] = 0x11;
+
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)memset(res, -1, 16);
+        regs.edx = 0x44332211;
+
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[4]) ||
+             res[0] != 0x44332211 || ~res[1] )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing movdir64b 144(%edx),%ecx...");
+    if ( stack_exec && cpu_has_movdir64b )
+    {
+        instr[0] = 0x66; instr[1] = 0x0f; instr[2] = 0x38; instr[3] = 0xf8;
+        instr[4] = 0x8a; instr[5] = 0x90; instr[8] = instr[7] = instr[6] = 0;
+
+        regs.eip = (unsigned long)&instr[0];
+        for ( i = 0; i < 64; ++i )
+            res[i] = i - 20;
+        regs.edx = (unsigned long)res;
+        regs.ecx = (unsigned long)(res + 16);
+
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[9]) ||
+             res[15] != -5 || res[32] != 12 )
+            goto fail;
+        for ( i = 16; i < 32; ++i )
+            if ( res[i] != i )
+                goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
     if ( stack_exec && cpu_has_mmx )
     {
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -154,6 +154,8 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_avx512_vnni (cp.feat.avx512_vnni && xcr0_mask(0xe6))
 #define cpu_has_avx512_bitalg (cp.feat.avx512_bitalg && xcr0_mask(0xe6))
 #define cpu_has_avx512_vpopcntdq (cp.feat.avx512_vpopcntdq && xcr0_mask(0xe6))
+#define cpu_has_movdiri    cp.feat.movdiri
+#define cpu_has_movdir64b  cp.feat.movdir64b
 #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6))
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -47,6 +47,7 @@ $(call as-option-add,CFLAGS,CC,"rdseed %
 $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
+$(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR)
 
 # GAS's idea of true is -1.  Clang's idea is 1
 $(call as-option-add,CFLAGS,CC,\
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1441,6 +1441,44 @@ static int hvmemul_rmw(
     return rc;
 }
 
+static int hvmemul_blk(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    struct hvm_emulate_ctxt *hvmemul_ctxt =
+        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
+    unsigned long addr;
+    uint32_t pfec = PFEC_page_present | PFEC_write_access;
+    int rc;
+    void *mapping = NULL;
+
+    rc = hvmemul_virtual_to_linear(
+        seg, offset, bytes, NULL, hvm_access_write, hvmemul_ctxt, &addr);
+    if ( rc != X86EMUL_OKAY || !bytes )
+        return rc;
+
+    if ( is_x86_system_segment(seg) )
+        pfec |= PFEC_implicit;
+    else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
+        pfec |= PFEC_user_mode;
+
+    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
+    if ( IS_ERR(mapping) )
+        return ~PTR_ERR(mapping);
+    if ( !mapping )
+        return X86EMUL_UNHANDLEABLE;
+
+    rc = x86_emul_blk(mapping, p_data, bytes, eflags, state, ctxt);
+    hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt);
+
+    return rc;
+}
+
 static int hvmemul_write_discard(
     enum x86_segment seg,
     unsigned long offset,
@@ -2512,6 +2550,7 @@ static const struct x86_emulate_ops hvm_
     .write         = hvmemul_write,
     .rmw           = hvmemul_rmw,
     .cmpxchg       = hvmemul_cmpxchg,
+    .blk           = hvmemul_blk,
     .validate      = hvmemul_validate,
     .rep_ins       = hvmemul_rep_ins,
     .rep_outs      = hvmemul_rep_outs,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -548,6 +548,8 @@ static const struct ext0f38_table {
     [0xf1] = { .to_mem = 1, .two_op = 1 },
     [0xf2 ... 0xf3] = {},
     [0xf5 ... 0xf7] = {},
+    [0xf8] = { .simd_size = simd_other },
+    [0xf9] = { .to_mem = 1, .two_op = 1 /* Mov */ },
 };
 
 /* Shift values between src and dst sizes of pmov{s,z}x{b,w,d}{w,d,q}. */
@@ -851,6 +853,10 @@ struct x86_emulate_state {
         rmw_xchg,
         rmw_xor,
     } rmw;
+    enum {
+        blk_NONE,
+        blk_movdir,
+    } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
     uint8_t sib_index, sib_scale;
     uint8_t rex_prefix;
@@ -1914,6 +1920,8 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg)
 #define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq)
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
+#define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
+#define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
@@ -2722,10 +2730,12 @@ x86_decode_0f38(
     {
     case 0x00 ... 0xef:
     case 0xf2 ... 0xf5:
-    case 0xf7 ... 0xff:
+    case 0xf7 ... 0xf8:
+    case 0xfa ... 0xff:
         op_bytes = 0;
         /* fall through */
     case 0xf6: /* adcx / adox */
+    case 0xf9: /* movdiri */
         ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
         break;
 
@@ -10173,6 +10183,34 @@ x86_emulate(
                             : "0" ((uint32_t)src.val), "rm" (_regs.edx) );
         break;
 
+    case X86EMUL_OPC_66(0x0f38, 0xf8): /* movdir64b r,m512 */
+        host_and_vcpu_must_have(movdir64b);
+        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        src.val = truncate_ea(*dst.reg);
+        generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
+                              EXC_GP, 0);
+        fail_if(!ops->blk);
+        state->blk = blk_movdir;
+        BUILD_BUG_ON(sizeof(*mmvalp) < 64);
+        if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64,
+                             ctxt)) != X86EMUL_OKAY ||
+             (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags,
+                            state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        state->simd_size = simd_none;
+        break;
+
+    case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */
+        host_and_vcpu_must_have(movdiri);
+        generate_exception_if(dst.type != OP_MEM, EXC_UD);
+        fail_if(!ops->blk);
+        state->blk = blk_movdir;
+        if ( (rc = ops->blk(dst.mem.seg, dst.mem.off, &src.val, op_bytes,
+                            &_regs.eflags, state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        dst.type = OP_NONE;
+        break;
+
 #ifndef X86EMUL_NO_SIMD
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */
@@ -11431,6 +11469,77 @@ int x86_emul_rmw(
 
     return X86EMUL_OKAY;
 }
+
+int x86_emul_blk(
+    void *ptr,
+    void *data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    switch ( state->blk )
+    {
+        /*
+         * Throughout this switch(), memory clobbers are used to compensate
+         * that other operands may not properly express the (full) memory
+         * ranges covered.
+         */
+    case blk_movdir:
+        switch ( bytes )
+        {
+#ifdef __x86_64__
+        case sizeof(uint32_t):
+# ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(uint32_t *)data), "r" (ptr) : "memory" );
+# else
+            /* movdiri %esi, (%rdi) */
+            asm ( ".byte 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(uint32_t *)data), "D" (ptr) : "memory" );
+# endif
+            break;
+#endif
+
+        case sizeof(unsigned long):
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(unsigned long *)data), "r" (ptr) : "memory" );
+#else
+            /* movdiri %rsi, (%rdi) */
+            asm ( ".byte 0x48, 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(unsigned long *)data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        case 64:
+            if ( ((unsigned long)ptr & 0x3f) )
+            {
+                ASSERT_UNREACHABLE();
+                return X86EMUL_UNHANDLEABLE;
+            }
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdir64b (%0), %1" :: "r" (data), "r" (ptr) : "memory" );
+#else
+            /* movdir64b (%rsi), %rdi */
+            asm ( ".byte 0x66, 0x0f, 0x38, 0xf8, 0x3e"
+                  :: "S" (data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    return X86EMUL_OKAY;
+}
 
 static void __init __maybe_unused build_assertions(void)
 {
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -310,6 +310,22 @@ struct x86_emulate_ops
         struct x86_emulate_ctxt *ctxt);
 
     /*
+     * blk: Emulate a large (block) memory access.
+     * @p_data: [IN/OUT] (optional) Pointer to source/destination buffer.
+     * @eflags: [IN/OUT] Pointer to EFLAGS to be updated according to
+     *                   instruction effects.
+     * @state:  [IN/OUT] Pointer to (opaque) emulator state.
+     */
+    int (*blk)(
+        enum x86_segment seg,
+        unsigned long offset,
+        void *p_data,
+        unsigned int bytes,
+        uint32_t *eflags,
+        struct x86_emulate_state *state,
+        struct x86_emulate_ctxt *ctxt);
+
+    /*
      * validate: Post-decode, pre-emulate hook to allow caller controlled
      * filtering.
      */
@@ -793,6 +809,14 @@ x86_emul_rmw(
     unsigned int bytes,
     uint32_t *eflags,
     struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt);
+int
+x86_emul_blk(
+    void *ptr,
+    void *data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt);
 
 static inline void x86_emul_hw_exception(
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -118,6 +118,8 @@
 #define cpu_has_avx512_bitalg   boot_cpu_has(X86_FEATURE_AVX512_BITALG)
 #define cpu_has_avx512_vpopcntdq boot_cpu_has(X86_FEATURE_AVX512_VPOPCNTDQ)
 #define cpu_has_rdpid           boot_cpu_has(X86_FEATURE_RDPID)
+#define cpu_has_movdiri         boot_cpu_has(X86_FEATURE_MOVDIRI)
+#define cpu_has_movdir64b       boot_cpu_has(X86_FEATURE_MOVDIR64B)
 
 /* CPUID level 0x80000007.edx */
 #define cpu_has_itsc            boot_cpu_has(X86_FEATURE_ITSC)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,6 +238,8 @@ XEN_CPUFEATURE(AVX512_BITALG, 6*32+12) /
 XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14) /*A  POPCNT for vectors of DW/QW */
 XEN_CPUFEATURE(RDPID,         6*32+22) /*A  RDPID instruction */
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
+XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
+XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(ITSC,          7*32+ 8) /*   Invariant TSC */



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:13:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:13:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsiB-0003yP-6g; Tue, 05 May 2020 08:13:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsi9-0003y9-IX
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:13:53 +0000
X-Inumbo-ID: 5b0dba96-8ea8-11ea-9d93-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5b0dba96-8ea8-11ea-9d93-12813bfff9fa;
 Tue, 05 May 2020 08:13:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F08DCAC1F;
 Tue,  5 May 2020 08:13:53 +0000 (UTC)
Subject: [PATCH v8 03/12] x86emul: support ENQCMD insns
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <099d03d0-2846-2a3d-93ec-2d10dab12655@suse.com>
Date: Tue, 5 May 2020 10:13:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Note that the ISA extensions document revision 038 doesn't specify
exception behavior for ModRM.mod == 0b11; assuming #UD here.

No tests are being added to the harness - this would be quite hard, as
we can't just issue the insns against RAM. Their similarity to
MOVDIR64B should make the test case there good enough to cover any
fundamental flaws.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: This doesn't (can't) consult PASID translation tables yet, as we
     have no VMX code for this so far. I guess for this we will want to
     replace the direct ->read_msr(MSR_IA32_PASID, ...) with a new
     ->read_pasid() hook.
---
v7: Re-base.
v6: Re-base.
v5: New.

--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -48,6 +48,7 @@ $(call as-option-add,CFLAGS,CC,"clwb (%r
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
 $(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR)
+$(call as-option-add,CFLAGS,CC,"enqcmd (%rax)$$(comma)%rax",-DHAVE_AS_ENQCMD)
 
 # GAS's idea of true is -1.  Clang's idea is 1
 $(call as-option-add,CFLAGS,CC,\
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -855,6 +855,7 @@ struct x86_emulate_state {
     } rmw;
     enum {
         blk_NONE,
+        blk_enqcmd,
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -901,6 +902,7 @@ typedef union {
     uint64_t __attribute__ ((aligned(16))) xmm[2];
     uint64_t __attribute__ ((aligned(32))) ymm[4];
     uint64_t __attribute__ ((aligned(64))) zmm[8];
+    uint32_t data32[16];
 } mmval_t;
 
 /*
@@ -1922,6 +1924,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
 #define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
 #define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
+#define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
@@ -10200,6 +10203,36 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+    case X86EMUL_OPC_F2(0x0f38, 0xf8): /* enqcmd r,m512 */
+    case X86EMUL_OPC_F3(0x0f38, 0xf8): /* enqcmds r,m512 */
+        host_and_vcpu_must_have(enqcmd);
+        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(vex.pfx != vex_f2 && !mode_ring0(), EXC_GP, 0);
+        src.val = truncate_ea(*dst.reg);
+        generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
+                              EXC_GP, 0);
+        fail_if(!ops->blk);
+        BUILD_BUG_ON(sizeof(*mmvalp) < 64);
+        if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64,
+                             ctxt)) != X86EMUL_OKAY )
+            goto done;
+        if ( vex.pfx == vex_f2 ) /* enqcmd */
+        {
+            fail_if(!ops->read_msr);
+            if ( (rc = ops->read_msr(MSR_IA32_PASID,
+                                     &msr_val, ctxt)) != X86EMUL_OKAY )
+                goto done;
+            generate_exception_if(!(msr_val & PASID_VALID), EXC_GP, 0);
+            mmvalp->data32[0] = MASK_EXTR(msr_val, PASID_PASID_MASK);
+        }
+        mmvalp->data32[0] &= ~0x7ff00000;
+        state->blk = blk_enqcmd;
+        if ( (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags,
+                            state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        state->simd_size = simd_none;
+        break;
+
     case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */
         host_and_vcpu_must_have(movdiri);
         generate_exception_if(dst.type != OP_MEM, EXC_UD);
@@ -11480,11 +11513,36 @@ int x86_emul_blk(
 {
     switch ( state->blk )
     {
+        bool zf;
+
         /*
          * Throughout this switch(), memory clobbers are used to compensate
          * that other operands may not properly express the (full) memory
          * ranges covered.
          */
+    case blk_enqcmd:
+        ASSERT(bytes == 64);
+        if ( ((unsigned long)ptr & 0x3f) )
+        {
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        *eflags &= ~EFLAGS_MASK;
+#ifdef HAVE_AS_ENQCMD
+        asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %0")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : [src] "r" (data), [dst] "r" (ptr) : "memory" );
+#else
+        /* enqcmds (%rsi), %rdi */
+        asm ( ".byte 0xf3, 0x0f, 0x38, 0xf8, 0x3e"
+              ASM_FLAG_OUT(, "; setz %[zf]")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : "S" (data), "D" (ptr) : "memory" );
+#endif
+        if ( zf )
+            *eflags |= X86_EFLAGS_ZF;
+        break;
+
     case blk_movdir:
         switch ( bytes )
         {
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -120,6 +120,7 @@
 #define cpu_has_rdpid           boot_cpu_has(X86_FEATURE_RDPID)
 #define cpu_has_movdiri         boot_cpu_has(X86_FEATURE_MOVDIRI)
 #define cpu_has_movdir64b       boot_cpu_has(X86_FEATURE_MOVDIR64B)
+#define cpu_has_enqcmd          boot_cpu_has(X86_FEATURE_ENQCMD)
 
 /* CPUID level 0x80000007.edx */
 #define cpu_has_itsc            boot_cpu_has(X86_FEATURE_ITSC)
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -420,6 +420,10 @@
 #define MSR_IA32_TSC_DEADLINE		0x000006E0
 #define MSR_IA32_ENERGY_PERF_BIAS	0x000001b0
 
+#define MSR_IA32_PASID			0x00000d93
+#define  PASID_PASID_MASK		0x000fffff
+#define  PASID_VALID			0x80000000
+
 /* Platform Shared Resource MSRs */
 #define MSR_IA32_CMT_EVTSEL		0x00000c8d
 #define MSR_IA32_CMT_EVTSEL_UE_MASK	0x0000ffff
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -240,6 +240,7 @@ XEN_CPUFEATURE(RDPID,         6*32+22) /
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
 XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
 XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */
+XEN_CPUFEATURE(ENQCMD,        6*32+29) /*   ENQCMD{,S} instructions */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(ITSC,          7*32+ 8) /*   Invariant TSC */



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:14:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsio-00044E-Gz; Tue, 05 May 2020 08:14:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsin-000444-IA
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:14:33 +0000
X-Inumbo-ID: 72f9738e-8ea8-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72f9738e-8ea8-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 08:14:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 1A073AFF0;
 Tue,  5 May 2020 08:14:34 +0000 (UTC)
Subject: [PATCH v8 04/12] x86emul: support SERIALIZE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <0bbbf95e-48ec-ee73-5234-52cf9c6c06d8@suse.com>
Date: Tue, 5 May 2020 10:14:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... enabling its use by all guest kinds at the same time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v7: Re-base.
v6: New.

--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -214,6 +214,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"avx512-4vnniw",0x00000007,  0, CPUID_REG_EDX,  2,  1},
         {"avx512-4fmaps",0x00000007,  0, CPUID_REG_EDX,  3,  1},
         {"md-clear",     0x00000007,  0, CPUID_REG_EDX, 10,  1},
+        {"serialize",    0x00000007,  0, CPUID_REG_EDX, 14,  1},
         {"cet-ibt",      0x00000007,  0, CPUID_REG_EDX, 20,  1},
         {"ibrsb",        0x00000007,  0, CPUID_REG_EDX, 26,  1},
         {"stibp",        0x00000007,  0, CPUID_REG_EDX, 27,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -161,6 +161,7 @@ static const char *const str_7d0[32] =
 
     [10] = "md-clear",
     /* 12 */                [13] = "tsx-force-abort",
+    [14] = "serialize",
 
     [18] = "pconfig",
     [20] = "cet-ibt",
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -158,6 +158,7 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_movdir64b  cp.feat.movdir64b
 #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6))
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
+#define cpu_has_serialize  cp.feat.serialize
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 
 #define cpu_has_xgetbv1   (cpu_has_xsave && cp.xstate.xgetbv1)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1927,6 +1927,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
+#define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 
 #define vcpu_must_have(feat) \
@@ -5660,6 +5661,18 @@ x86_emulate(
                 goto done;
             break;
 
+        case 0xe8:
+            switch ( vex.pfx )
+            {
+            case vex_none: /* serialize */
+                host_and_vcpu_must_have(serialize);
+                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
+                break;
+            default:
+                goto unimplemented_insn;
+            }
+            break;
+
         case 0xf8: /* swapgs */
             generate_exception_if(!mode_64bit(), EXC_UD);
             generate_exception_if(!mode_ring0(), EXC_GP, 0);
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -129,6 +129,7 @@
 #define cpu_has_avx512_4vnniw   boot_cpu_has(X86_FEATURE_AVX512_4VNNIW)
 #define cpu_has_avx512_4fmaps   boot_cpu_has(X86_FEATURE_AVX512_4FMAPS)
 #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)
+#define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
 
 /* CPUID level 0x00000007:1.eax */
 #define cpu_has_avx512_bf16     boot_cpu_has(X86_FEATURE_AVX512_BF16)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -258,6 +258,7 @@ XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /
 XEN_CPUFEATURE(AVX512_4FMAPS, 9*32+ 3) /*A  AVX512 Multiply Accumulation Single Precision */
 XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /*A  VERW clears microarchitectural buffers */
 XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */
+XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*A  SERIALIZE insn */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:14:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:14:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsj7-00048R-Pu; Tue, 05 May 2020 08:14:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsj7-00048J-6f
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:14:53 +0000
X-Inumbo-ID: 7e9b7354-8ea8-11ea-9d93-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7e9b7354-8ea8-11ea-9d93-12813bfff9fa;
 Tue, 05 May 2020 08:14:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9CE49AFF1;
 Tue,  5 May 2020 08:14:53 +0000 (UTC)
Subject: [PATCH v8 05/12] x86emul: support X{SUS,RES}LDTRK
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <6e7500d2-262c-29c7-b9be-3fc9be26d198@suse.com>
Date: Tue, 5 May 2020 10:14:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There's nothing to be done by the emulator, as we unconditionally abort
any XBEGIN.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v6: New.

--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -208,6 +208,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"avx512-vnni",  0x00000007,  0, CPUID_REG_ECX, 11,  1},
         {"avx512-bitalg",0x00000007,  0, CPUID_REG_ECX, 12,  1},
         {"avx512-vpopcntdq",0x00000007,0,CPUID_REG_ECX, 14,  1},
+        {"tsxldtrk",     0x00000007,  0, CPUID_REG_ECX, 16,  1},
         {"rdpid",        0x00000007,  0, CPUID_REG_ECX, 22,  1},
         {"cldemote",     0x00000007,  0, CPUID_REG_ECX, 25,  1},
 
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -128,6 +128,7 @@ static const char *const str_7c0[32] =
     [10] = "vpclmulqdq",       [11] = "avx512_vnni",
     [12] = "avx512_bitalg",
     [14] = "avx512_vpopcntdq",
+    [16] = "tsxldtrk",
 
     [22] = "rdpid",
     /* 24 */                   [25] = "cldemote",
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1921,6 +1921,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_vnni() (ctxt->cpuid->feat.avx512_vnni)
 #define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg)
 #define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq)
+#define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
 #define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
 #define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
@@ -5668,6 +5669,20 @@ x86_emulate(
                 host_and_vcpu_must_have(serialize);
                 asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
                 break;
+            case vex_f2: /* xsusldtrk */
+                vcpu_must_have(tsxldtrk);
+                break;
+            default:
+                goto unimplemented_insn;
+            }
+            break;
+
+        case 0xe9:
+            switch ( vex.pfx )
+            {
+            case vex_f2: /* xresldtrk */
+                vcpu_must_have(tsxldtrk);
+                break;
             default:
                 goto unimplemented_insn;
             }
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -236,6 +236,7 @@ XEN_CPUFEATURE(VPCLMULQDQ,    6*32+10) /
 XEN_CPUFEATURE(AVX512_VNNI,   6*32+11) /*A  Vector Neural Network Instrs */
 XEN_CPUFEATURE(AVX512_BITALG, 6*32+12) /*A  Support for VPOPCNT[B,W] and VPSHUFBITQMB */
 XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14) /*A  POPCNT for vectors of DW/QW */
+XEN_CPUFEATURE(TSXLDTRK,      6*32+16) /*A  TSX load tracking suspend/resume insns */
 XEN_CPUFEATURE(RDPID,         6*32+22) /*A  RDPID instruction */
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
 XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -284,6 +284,9 @@ def crunch_numbers(state):
         # as dependent features simplifies Xen's logic, and prevents the guest
         # from seeing implausible configurations.
         IBRSB: [STIBP, SSBD],
+
+        # In principle the TSXLDTRK insns could also be considered independent.
+        RTM: [TSXLDTRK],
     }
 
     deep_features = tuple(sorted(deps.keys()))
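An aside on the gen-cpuid.py hunk above: the `RTM: [TSXLDTRK]` entry makes TSXLDTRK a deep dependency of RTM, so a featureset with RTM hidden also has TSXLDTRK stripped, preventing guests from seeing implausible configurations. A minimal standalone C sketch of that sanitisation step (the function name and bit positions are hypothetical, chosen only for illustration — the real Xen featureset spreads these bits across several 32-bit words):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical bit positions within a single featureset word. */
#define FEAT_RTM       4
#define FEAT_TSXLDTRK  16

/* Clear dependent features whose ancestor is clear, mirroring what the
 * RTM: [TSXLDTRK] dependency in gen-cpuid.py arranges for. */
static uint64_t sanitise_featureset(uint64_t fs)
{
    if ( !(fs & (1ULL << FEAT_RTM)) )
        fs &= ~(1ULL << FEAT_TSXLDTRK);

    return fs;
}
```

With RTM clear, a stray TSXLDTRK bit is dropped; with RTM set, TSXLDTRK survives untouched.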



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:15:36 2020
Subject: [PATCH v8 06/12] x86/HVM: make hvmemul_blk() capable of handling r/o
 operations
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <1587789a-b0d6-6d18-99fc-a94bbea52d7b@suse.com>
Date: Tue, 5 May 2020 10:15:28 +0200
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>

In preparation for handling e.g. FLDENV or {F,FX,X}RSTOR here as well.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v8: New (could be folded into "x86emul: support MOVDIR{I,64B} insns",
    but would invalidate Paul's R-b there).

--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1453,7 +1453,7 @@ static int hvmemul_blk(
     struct hvm_emulate_ctxt *hvmemul_ctxt =
         container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
     unsigned long addr;
-    uint32_t pfec = PFEC_page_present | PFEC_write_access;
+    uint32_t pfec = PFEC_page_present;
     int rc;
     void *mapping = NULL;
 
@@ -1462,6 +1462,9 @@ static int hvmemul_blk(
     if ( rc != X86EMUL_OKAY || !bytes )
         return rc;
 
+    if ( x86_insn_is_mem_write(state, ctxt) )
+        pfec |= PFEC_write_access;
+
     if ( is_x86_system_segment(seg) )
         pfec |= PFEC_implicit;
     else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
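The hunk above defers setting PFEC_write_access until the emulator knows the instruction actually writes memory, so read-only block operations (FLDENV, {F,FX,X}RSTOR) no longer demand a writable mapping. A standalone sketch of the resulting flag derivation (the constants match the architectural page-fault error code bits; the helper name and boolean parameters are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

#define PFEC_page_present (1U << 0)
#define PFEC_write_access (1U << 1)
#define PFEC_user_mode    (1U << 2)

/* Derive the error-code flags used for the access check: always
 * "present", write access only for writing insns, user mode when the
 * access is performed with DPL-3 stack segment semantics. */
static uint32_t blk_pfec(int is_mem_write, int dpl3)
{
    uint32_t pfec = PFEC_page_present;

    if ( is_mem_write )
        pfec |= PFEC_write_access;
    if ( dpl3 )
        pfec |= PFEC_user_mode;

    return pfec;
}
```

A read-only operation at DPL 0 thus checks for mere presence, while a user-mode write checks all three bits.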



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:16:03 2020
Subject: [PATCH v8 07/12] x86emul: support FNSTENV and FNSAVE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <9a2afbb1-af92-2c7d-9fde-d8d8e4563a2a@suse.com>
Date: Tue, 5 May 2020 10:15:56 +0200
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

To avoid introducing another boolean into the emulator state, the
rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode
distinction (which affects the structure layout, albeit not its size) to
x86_emul_blk().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: Modern CPUs fill the full 16-bit padding fields in the 32-bit
     structures with all ones (i.e. contrary to what the comments for
     FIP and FDP suggest). We may want to mirror this as well (for the
     real mode variant), even though those fields' contents are
     unspecified.
---
v7: New.

--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -120,6 +120,7 @@ static inline bool xcr0_mask(uint64_t ma
 }
 
 #define cache_line_size() (cp.basic.clflush_size * 8)
+#define cpu_has_fpu        cp.basic.fpu
 #define cpu_has_mmx        cp.basic.mmx
 #define cpu_has_fxsr       cp.basic.fxsr
 #define cpu_has_sse        cp.basic.sse
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -748,6 +748,25 @@ static struct x86_emulate_ops emulops =
 
 #define MMAP_ADDR 0x100000
 
+/*
+ * 64-bit OSes may not (be able to) properly restore the two selectors in
+ * the FPU environment. Zap them so that memcmp() on two saved images will
+ * work regardless of whether a context switch occurred in the middle.
+ */
+static void zap_fpsel(unsigned int *env, bool is_32bit)
+{
+    if ( is_32bit )
+    {
+        env[4] &= ~0xffff;
+        env[6] &= ~0xffff;
+    }
+    else
+    {
+        env[2] &= ~0xffff;
+        env[3] &= ~0xffff;
+    }
+}
+
 #ifdef __x86_64__
 # define STKVAL_DISP 64
 static const struct {
@@ -2394,6 +2413,62 @@ int main(int argc, char **argv)
         printf("okay\n");
     }
     else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing fnstenv 4(%ecx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t three = 3;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fstenv %0"
+                       : "=m" (res[9]) : "m" (three) : "memory" );
+        zap_fpsel(&res[9], true);
+        instr[0] = 0xd9; instr[1] = 0x71; instr[2] = 0x04;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)res;
+        res[8] = 0xaa55aa55;
+        rc = x86_emulate(&ctxt, &emulops);
+        zap_fpsel(&res[1], true);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 1, res + 9, 28) ||
+             res[8] != 0xaa55aa55 ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing 16-bit fnsave (%ecx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t five = 5;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fsaves %0"
+                       : "=m" (res[25]) : "m" (five) : "memory" );
+        zap_fpsel(&res[25], false);
+        asm volatile ( "frstors %0" :: "m" (res[25]) : "memory" );
+        instr[0] = 0x66; instr[1] = 0xdd; instr[2] = 0x31;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)res;
+        res[23] = 0xaa55aa55;
+        res[24] = 0xaa55aa55;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res, res + 25, 94) ||
+             (res[23] >> 16) != 0xaa55 ||
+             res[24] != 0xaa55aa55 ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
         printf("skipped\n");
 
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -856,6 +856,9 @@ struct x86_emulate_state {
     enum {
         blk_NONE,
         blk_enqcmd,
+#ifndef X86EMUL_NO_FPU
+        blk_fst, /* FNSTENV, FNSAVE */
+#endif
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -897,6 +900,50 @@ struct x86_emulate_state {
 #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
 #endif
 
+#ifndef X86EMUL_NO_FPU
+struct x87_env16 {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint16_t ftw;
+    union {
+        struct {
+            uint16_t fip_lo;
+            uint16_t fop:11, :1, fip_hi:4;
+            uint16_t fdp_lo;
+            uint16_t :12, fdp_hi:4;
+        } real;
+        struct {
+            uint16_t fip;
+            uint16_t fcs;
+            uint16_t fdp;
+            uint16_t fds;
+        } prot;
+    } mode;
+};
+
+struct x87_env32 {
+    uint32_t fcw:16, :16;
+    uint32_t fsw:16, :16;
+    uint32_t ftw:16, :16;
+    union {
+        struct {
+            /* some CPUs/FPUs also store the full FIP here */
+            uint32_t fip_lo:16, :16;
+            uint32_t fop:11, :1, fip_hi:16, :4;
+            /* some CPUs/FPUs also store the full FDP here */
+            uint32_t fdp_lo:16, :16;
+            uint32_t :12, fdp_hi:16, :4;
+        } real;
+        struct {
+            uint32_t fip;
+            uint32_t fcs:16, fop:11, :5;
+            uint32_t fdp;
+            uint32_t fds:16, :16;
+        } prot;
+    } mode;
+};
+#endif
+
 typedef union {
     uint64_t mmx;
     uint64_t __attribute__ ((aligned(16))) xmm[2];
@@ -4912,9 +4959,19 @@ x86_emulate(
                     goto done;
                 emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
                 break;
-            case 6: /* fnstenv - TODO */
+            case 6: /* fnstenv */
+                fail_if(!ops->blk);
+                state->blk = blk_fst;
+                /* REX is meaningless for this insn by this point. */
+                rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                    op_bytes > 2 ? sizeof(struct x87_env32)
+                                                 : sizeof(struct x87_env16),
+                                    &_regs.eflags,
+                                    state, ctxt)) != X86EMUL_OKAY )
+                    goto done;
                 state->fpu_ctrl = true;
-                goto unimplemented_insn;
+                break;
             case 7: /* fnstcw m2byte */
                 state->fpu_ctrl = true;
             fpu_memdst16:
@@ -5068,9 +5125,21 @@ x86_emulate(
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
             case 4: /* frstor - TODO */
-            case 6: /* fnsave - TODO */
                 state->fpu_ctrl = true;
                 goto unimplemented_insn;
+            case 6: /* fnsave */
+                fail_if(!ops->blk);
+                state->blk = blk_fst;
+                /* REX is meaningless for this insn by this point. */
+                rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                    op_bytes > 2 ? sizeof(struct x87_env32) + 80
+                                                 : sizeof(struct x87_env16) + 80,
+                                    &_regs.eflags,
+                                    state, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                state->fpu_ctrl = true;
+                break;
             case 7: /* fnstsw m2byte */
                 state->fpu_ctrl = true;
                 goto fpu_memdst16;
@@ -11542,6 +11611,12 @@ int x86_emul_blk(
     switch ( state->blk )
     {
         bool zf;
+        struct {
+            struct x87_env32 env;
+            struct {
+               uint8_t bytes[10];
+            } freg[8];
+        } fpstate;
 
         /*
          * Throughout this switch(), memory clobbers are used to compensate
@@ -11571,6 +11646,93 @@ int x86_emul_blk(
             *eflags |= X86_EFLAGS_ZF;
         break;
 
+#ifndef X86EMUL_NO_FPU
+
+    case blk_fst:
+        ASSERT(!data);
+
+        if ( bytes > sizeof(fpstate.env) )
+            asm ( "fnsave %0" : "=m" (fpstate) );
+        else
+            asm ( "fnstenv %0" : "=m" (fpstate.env) );
+
+        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env):
+        case sizeof(fpstate):
+            if ( !state->rex_prefix )
+            {
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                unsigned int fop = fpstate.env.mode.prot.fop;
+
+                memset(&fpstate.env.mode, 0, sizeof(fpstate.env.mode));
+                fpstate.env.mode.real.fip_lo = fip;
+                fpstate.env.mode.real.fip_hi = fip >> 16;
+                fpstate.env.mode.real.fop = fop;
+                fpstate.env.mode.real.fdp_lo = fdp;
+                fpstate.env.mode.real.fdp_hi = fdp >> 16;
+            }
+            memcpy(ptr, &fpstate.env, sizeof(fpstate.env));
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
+            if ( state->rex_prefix )
+            {
+                struct x87_env16 *env = ptr;
+
+                env->fcw = fpstate.env.fcw;
+                env->fsw = fpstate.env.fsw;
+                env->ftw = fpstate.env.ftw;
+                env->mode.prot.fip = fpstate.env.mode.prot.fip;
+                env->mode.prot.fcs = fpstate.env.mode.prot.fcs;
+                env->mode.prot.fdp = fpstate.env.mode.prot.fdp;
+                env->mode.prot.fds = fpstate.env.mode.prot.fds;
+            }
+            else
+            {
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                struct x87_env16 env = {
+                    .fcw = fpstate.env.fcw,
+                    .fsw = fpstate.env.fsw,
+                    .ftw = fpstate.env.ftw,
+                    .mode.real.fip_lo = fip,
+                    .mode.real.fip_hi = fip >> 16,
+                    .mode.real.fop = fpstate.env.mode.prot.fop,
+                    .mode.real.fdp_lo = fdp,
+                    .mode.real.fdp_hi = fdp >> 16
+                };
+
+                memcpy(ptr, &env, sizeof(env));
+            }
+            if ( bytes == sizeof(struct x87_env16) )
+                ptr = NULL;
+            else
+                ptr += sizeof(struct x87_env16);
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+            memcpy(ptr, fpstate.freg, sizeof(fpstate.freg));
+        break;
+
+#endif /* X86EMUL_NO_FPU */
+
     case blk_movdir:
         switch ( bytes )
         {
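A detail worth calling out in the blk_fst handling above: in real/VM86 mode the x87 environment stores a 20-bit *linear* instruction pointer, which the code reconstructs from the protected-mode pair as fip + (fcs << 4) and then splits into a 16-bit low half plus a 4-bit high field. A self-contained sketch of that conversion (struct and function names are invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

struct real_fip {
    uint16_t lo;        /* low 16 bits of the linear address */
    unsigned int hi:4;  /* high 4 bits, stored alongside FOP */
};

/* Convert a protected-mode CS:IP pair into the real-mode layout by
 * forming the 20-bit linear address fip + (fcs << 4) and splitting it. */
static struct real_fip to_real(uint16_t fip, uint16_t fcs)
{
    uint32_t lin = fip + ((uint32_t)fcs << 4);
    struct real_fip r = { .lo = lin & 0xffff, .hi = (lin >> 16) & 0xf };

    return r;
}
```

For example, CS:IP of 2000:1234 yields linear 0x21234, i.e. lo = 0x1234 and hi = 0x2.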



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:16:25 2020
Subject: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
Date: Tue, 5 May 2020 10:16:20 +0200
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

While the Intel SDM claims that FRSTOR itself may raise #MF upon
completion, Intel has confirmed this to be a documentation error, to be
corrected in due course; the behavior matches FLDENV, as the old hard
copy manuals describe it. Without that confirmation we would have had to
emulate the insn by filling st(N) in a suitable order, followed by
FLDENV.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v7: New.

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -2442,6 +2442,27 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing fldenv 8(%edx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        asm volatile ( "fnstenv %0\n\t"
+                       "fninit"
+                       : "=m" (res[2]) :: "memory" );
+        zap_fpsel(&res[2], true);
+        instr[0] = 0xd9; instr[1] = 0x62; instr[2] = 0x08;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fnstenv %0" : "=m" (res[9]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 2, res + 9, 28) ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing 16-bit fnsave (%ecx)...");
     if ( stack_exec && cpu_has_fpu )
     {
@@ -2468,6 +2489,31 @@ int main(int argc, char **argv)
             goto fail;
         printf("okay\n");
     }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing frstor (%edx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t seven = 7;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fnsave %0\n\t"
+                       : "=&m" (res[0]) : "m" (seven) : "memory" );
+        zap_fpsel(&res[0], true);
+        instr[0] = 0xdd; instr[1] = 0x22;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fnsave %0" : "=m" (res[27]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res, res + 27, 108) ||
+             (regs.eip != (unsigned long)&instr[2]) )
+            goto fail;
+        printf("okay\n");
+    }
     else
         printf("skipped\n");
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -857,6 +857,7 @@ struct x86_emulate_state {
         blk_NONE,
         blk_enqcmd,
 #ifndef X86EMUL_NO_FPU
+        blk_fld, /* FLDENV, FRSTOR */
         blk_fst, /* FNSTENV, FNSAVE */
 #endif
         blk_movdir,
@@ -4948,21 +4949,14 @@ x86_emulate(
                 dst.bytes = 4;
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
-            case 4: /* fldenv - TODO */
-                state->fpu_ctrl = true;
-                goto unimplemented_insn;
-            case 5: /* fldcw m2byte */
-                state->fpu_ctrl = true;
-            fpu_memsrc16:
-                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
-                                     2, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
-                break;
+            case 4: /* fldenv */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
             case 6: /* fnstenv */
                 fail_if(!ops->blk);
-                state->blk = blk_fst;
-                /* REX is meaningless for this insn by this point. */
+                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
+                /* REX is meaningless for these insns by this point. */
                 rex_prefix = in_protmode(ctxt, ops);
                 if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
                                     op_bytes > 2 ? sizeof(struct x87_env32)
@@ -4972,6 +4966,14 @@ x86_emulate(
                     goto done;
                 state->fpu_ctrl = true;
                 break;
+            case 5: /* fldcw m2byte */
+                state->fpu_ctrl = true;
+            fpu_memsrc16:
+                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
+                                     2, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
+                break;
             case 7: /* fnstcw m2byte */
                 state->fpu_ctrl = true;
             fpu_memdst16:
@@ -5124,13 +5126,14 @@ x86_emulate(
                 dst.bytes = 8;
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
-            case 4: /* frstor - TODO */
-                state->fpu_ctrl = true;
-                goto unimplemented_insn;
+            case 4: /* frstor */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
             case 6: /* fnsave */
                 fail_if(!ops->blk);
-                state->blk = blk_fst;
-                /* REX is meaningless for this insn by this point. */
+                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
+                /* REX is meaningless for these insns by this point. */
                 rex_prefix = in_protmode(ctxt, ops);
                 if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
                                     op_bytes > 2 ? sizeof(struct x87_env32) + 80
@@ -11648,6 +11651,89 @@ int x86_emul_blk(
 
 #ifndef X86EMUL_NO_FPU
 
+    case blk_fld:
+        ASSERT(!data);
+
+        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env):
+        case sizeof(fpstate):
+            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
+            if ( !state->rex_prefix )
+            {
+                unsigned int fip = fpstate.env.mode.real.fip_lo +
+                                   (fpstate.env.mode.real.fip_hi << 16);
+                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
+                                   (fpstate.env.mode.real.fdp_hi << 16);
+                unsigned int fop = fpstate.env.mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
+        {
+            const struct x87_env16 *env = ptr;
+
+            fpstate.env.fcw = env->fcw;
+            fpstate.env.fsw = env->fsw;
+            fpstate.env.ftw = env->ftw;
+
+            if ( state->rex_prefix )
+            {
+                fpstate.env.mode.prot.fip = env->mode.prot.fip;
+                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
+                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
+                fpstate.env.mode.prot.fds = env->mode.prot.fds;
+                fpstate.env.mode.prot.fop = 0; /* unknown */
+            }
+            else
+            {
+                unsigned int fip = env->mode.real.fip_lo +
+                                   (env->mode.real.fip_hi << 16);
+                unsigned int fdp = env->mode.real.fdp_lo +
+                                   (env->mode.real.fdp_hi << 16);
+                unsigned int fop = env->mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(*env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(*env);
+            break;
+        }
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+        {
+            memcpy(fpstate.freg, ptr, sizeof(fpstate.freg));
+            asm volatile ( "frstor %0" :: "m" (fpstate) );
+        }
+        else
+            asm volatile ( "fldenv %0" :: "m" (fpstate.env) );
+        break;
+
     case blk_fst:
         ASSERT(!data);
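The blk_fld case above performs the inverse of blk_fst's conversion: from the 20-bit linear FIP recovered out of the real-mode image, it synthesizes a protected-mode selector:offset pair as fip & 0xf and fip >> 4, which maps back to the very same linear address. A standalone sketch (the helper name is invented for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Decompose a 20-bit real-mode linear address into an offset below 16
 * and a 16-bit selector, such that off + (sel << 4) == lin again. */
static void from_real(uint32_t lin, uint16_t *fip, uint16_t *fcs)
{
    *fip = lin & 0xf;
    *fcs = lin >> 4;
}
```

Any selector:offset pair mapping to the same linear address would do for loading the FPU; this particular split keeps the offset minimal.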
 



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:16:50 2020
Subject: [PATCH v8 09/12] x86emul: support FXSAVE/FXRSTOR
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <ea1db2c5-3dd7-f1c8-c051-e39f0dffc94e@suse.com>
Date: Tue, 5 May 2020 10:16:45 +0200
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

Note that neither FPU selector handling nor MXCSR mask saving honors,
for now, differences between the host and guest visible featuresets.

Since on Intel operation of the insns with CR4.OSFXSR=0 is
implementation dependent, take the easiest route there: simply don't
look at the bit in the first place. For AMD and the like the behavior
is well defined, so it gets handled together with FFXSR.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v8: Respect EFER.FFXSE and CR4.OSFXSR. Correct wrong X86EMUL_NO_*
    dependencies. Reduce #ifdef-ary.
v7: New.

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -767,6 +767,12 @@ static void zap_fpsel(unsigned int *env,
     }
 }
 
+static void zap_xfpsel(unsigned int *env)
+{
+    env[3] &= ~0xffff;
+    env[5] &= ~0xffff;
+}
+
 #ifdef __x86_64__
 # define STKVAL_DISP 64
 static const struct {
@@ -2517,6 +2523,91 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing fxsave 4(%ecx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        const uint16_t nine = 9;
+
+        memset(res + 0x80, 0xcc, 0x400);
+        if ( cpu_has_sse2 )
+            asm volatile ( "pcmpeqd %xmm7, %xmm7\n\t"
+                           "pxor %xmm6, %xmm6\n\t"
+                           "psubw %xmm7, %xmm6" );
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fxsave %0"
+                       : "=m" (res[0x100]) : "m" (nine) : "memory" );
+        zap_xfpsel(&res[0x100]);
+        instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x41; instr[3] = 0x04;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)(res + 0x7f);
+        memset(res + 0x100 + 0x74, 0x33, 0x30);
+        memset(res + 0x80 + 0x74, 0x33, 0x30);
+        rc = x86_emulate(&ctxt, &emulops);
+        zap_xfpsel(&res[0x80]);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x80, res + 0x100, 0x200) ||
+             (regs.eip != (unsigned long)&instr[4]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing fxrstor -4(%ecx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        const uint16_t eleven = 11;
+
+        memset(res + 0x80, 0xcc, 0x400);
+        asm volatile ( "fxsave %0" : "=m" (res[0x80]) :: "memory" );
+        zap_xfpsel(&res[0x80]);
+        if ( cpu_has_sse2 )
+            asm volatile ( "pxor %xmm7, %xmm6\n\t"
+                           "pxor %xmm7, %xmm3\n\t"
+                           "pxor %xmm7, %xmm0\n\t"
+                           "pxor %xmm7, %xmm7" );
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %0\n\t"
+                       :: "m" (eleven) );
+        instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x49; instr[3] = 0xfc;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)(res + 0x81);
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fxsave %0" : "=m" (res[0x100]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x100, res + 0x80, 0x200) ||
+             (regs.eip != (unsigned long)&instr[4]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+#ifdef __x86_64__
+    printf("%-40s", "Testing fxsaveq 8(%edx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        memset(res + 0x80, 0xcc, 0x400);
+        asm volatile ( "fxsaveq %0" : "=m" (res[0x100]) :: "memory" );
+        instr[0] = 0x48; instr[1] = 0x0f; instr[2] = 0xae; instr[3] = 0x42; instr[4] = 0x08;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)(res + 0x7e);
+        memset(res + 0x100 + 0x74, 0x33, 0x30);
+        memset(res + 0x80 + 0x74, 0x33, 0x30);
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x80, res + 0x100, 0x200) ||
+             (regs.eip != (unsigned long)&instr[5]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+#endif
+
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
     if ( stack_exec && cpu_has_mmx )
     {
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -30,6 +30,13 @@ struct cpuid_policy cp;
 static char fpu_save_area[4096] __attribute__((__aligned__((64))));
 static bool use_xsave;
 
+/*
+ * Re-use the area above also as scratch space for the emulator itself.
+ * (When debugging the emulator, care needs to be taken when inserting
+ * printf() or alike function calls into regions using this.)
+ */
+#define FXSAVE_AREA ((struct x86_fxsr *)fpu_save_area)
+
 void emul_save_fpu_state(void)
 {
     if ( use_xsave )
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -860,6 +860,11 @@ struct x86_emulate_state {
         blk_fld, /* FLDENV, FRSTOR */
         blk_fst, /* FNSTENV, FNSAVE */
 #endif
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        blk_fxrstor,
+        blk_fxsave,
+#endif
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -953,6 +958,29 @@ typedef union {
     uint32_t data32[16];
 } mmval_t;
 
+struct x86_fxsr {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint16_t ftw:8, :8;
+    uint16_t fop;
+    union {
+        struct {
+            uint32_t offs;
+            uint32_t sel:16, :16;
+        };
+        uint64_t addr;
+    } fip, fdp;
+    uint32_t mxcsr;
+    uint32_t mxcsr_mask;
+    struct {
+        uint8_t data[10];
+        uint8_t _[6];
+    } fpreg[8];
+    uint64_t __attribute__ ((aligned(16))) xmm[16][2];
+    uint64_t _[6];
+    uint64_t avl[6];
+};
+
 /*
  * While proper alignment gets specified above, this doesn't get honored by
  * the compiler for automatic variables. Use this helper to instantiate a
@@ -1910,6 +1938,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_cmov()        (ctxt->cpuid->basic.cmov)
 #define vcpu_has_clflush()     (ctxt->cpuid->basic.clflush)
 #define vcpu_has_mmx()         (ctxt->cpuid->basic.mmx)
+#define vcpu_has_fxsr()        (ctxt->cpuid->basic.fxsr)
 #define vcpu_has_sse()         (ctxt->cpuid->basic.sse)
 #define vcpu_has_sse2()        (ctxt->cpuid->basic.sse2)
 #define vcpu_has_sse3()        (ctxt->cpuid->basic.sse3)
@@ -8125,6 +8154,47 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        case 0: /* fxsave */
+        case 1: /* fxrstor */
+            generate_exception_if(vex.pfx, EXC_UD);
+            vcpu_must_have(fxsr);
+            generate_exception_if(ea.type != OP_MEM, EXC_UD);
+            generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16,
+                                              ctxt, ops),
+                                  EXC_GP, 0);
+            fail_if(!ops->blk);
+            op_bytes =
+#ifdef __x86_64__
+                !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) :
+#endif
+                sizeof(struct x86_fxsr);
+            if ( amd_like(ctxt) )
+            {
+                if ( !ops->read_cr ||
+                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                    cr4 = X86_CR4_OSFXSR;
+                if ( !ops->read_msr ||
+                     ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
+                    msr_val = 0;
+                if ( !(cr4 & X86_CR4_OSFXSR) ||
+                     (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
+                    op_bytes = offsetof(struct x86_fxsr, xmm[0]);
+            }
+            /*
+             * This could also be X86EMUL_FPU_mmx, but it shouldn't be
+             * X86EMUL_FPU_xmm, as we don't want CR4.OSFXSR checked.
+             */
+            get_fpu(X86EMUL_FPU_fpu);
+            state->blk = modrm_reg & 1 ? blk_fxrstor : blk_fxsave;
+            if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                sizeof(struct x86_fxsr), &_regs.eflags,
+                                state, ctxt)) != X86EMUL_OKAY )
+                goto done;
+            break;
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
 #ifndef X86EMUL_NO_SIMD
         case 2: /* ldmxcsr */
             generate_exception_if(vex.pfx, EXC_UD);
@@ -11611,6 +11681,8 @@ int x86_emul_blk(
     struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt)
 {
+    int rc = X86EMUL_OKAY;
+
     switch ( state->blk )
     {
         bool zf;
@@ -11819,6 +11891,77 @@ int x86_emul_blk(
 
 #endif /* X86EMUL_NO_FPU */
 
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+
+    case blk_fxrstor:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(state->op_bytes <= bytes);
+
+        if ( state->op_bytes < sizeof(*fxsr) )
+        {
+            if ( state->rex_prefix & REX_W )
+            {
+                /*
+                 * The only way to force fxsaveq on a wide range of gas
+                 * versions. On older versions the rex64 prefix works only if
+                 * we force an addressing mode that doesn't require extended
+                 * registers.
+                 */
+                asm volatile ( ".byte 0x48; fxsave (%1)"
+                               : "=m" (*fxsr) : "R" (fxsr) );
+            }
+            else
+                asm volatile ( "fxsave %0" : "=m" (*fxsr) );
+        }
+
+        memcpy(fxsr, ptr, min(state->op_bytes,
+                              (unsigned int)offsetof(struct x86_fxsr, _)));
+        memset(fxsr->_, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, _));
+
+        generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, EXC_GP, 0);
+
+        if ( state->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraints are this way. */
+            asm volatile ( ".byte 0x48; fxrstor (%0)"
+                           :: "R" (fxsr), "m" (*fxsr) );
+        }
+        else
+            asm volatile ( "fxrstor %0" :: "m" (*fxsr) );
+        break;
+    }
+
+    case blk_fxsave:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(state->op_bytes <= bytes);
+
+        if ( state->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraint are this way. */
+            asm volatile ( ".byte 0x48; fxsave (%0)"
+                           :: "R" (state->op_bytes < sizeof(*fxsr) ? fxsr : ptr)
+                           : "memory" );
+        }
+        else
+            asm volatile ( "fxsave (%0)"
+                           :: "r" (state->op_bytes < sizeof(*fxsr) ? fxsr : ptr)
+                           : "memory" );
+        if ( state->op_bytes < sizeof(*fxsr) )
+            memcpy(ptr, fxsr, state->op_bytes);
+        break;
+    }
+
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
     case blk_movdir:
         switch ( bytes )
         {
@@ -11872,7 +12015,8 @@ int x86_emul_blk(
         return X86EMUL_UNHANDLEABLE;
     }
 
-    return X86EMUL_OKAY;
+ done:
+    return rc;
 }
 
 static void __init __maybe_unused build_assertions(void)
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -42,6 +42,8 @@
     }                                                      \
 })
 
+#define FXSAVE_AREA current->arch.fpu_ctxt
+
 #ifndef CONFIG_HVM
 # define X86EMUL_NO_FPU
 # define X86EMUL_NO_MMX



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:17:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:17:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVslY-0004fX-GB; Tue, 05 May 2020 08:17:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVslX-0004fI-3D
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:17:23 +0000
X-Inumbo-ID: d75ce5c2-8ea8-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d75ce5c2-8ea8-11ea-b07b-bc764e2007e4;
 Tue, 05 May 2020 08:17:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 84C8EAFF2;
 Tue,  5 May 2020 08:17:22 +0000 (UTC)
Subject: [PATCH v8 09/12] x86/HVM: scale MPERF values reported to guests (on
 AMD)
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <c14edd2c-3cd8-c9b4-0cc0-7cbf2c672127@suse.com>
Date: Tue, 5 May 2020 10:17:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

AMD's PM specifies that MPERF (and its r/o counterpart) reads are
affected by the TSC ratio. Hence, when processing such reads in
software, we too should scale the values. While we don't currently
expose the underlying feature flags, we do allow the MSRs to be read
nevertheless, and RDPRU is going to expose the values even to user
space.

Furthermore, since the feature flags aren't exposed, this change has
the effect of making the two MSRs properly inaccessible for reads.

Note that writes to MPERF (and APERF) continue to be unsupported.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.
---
I did consider whether to put the code in guest_rdmsr() instead, but
decided that it's better to have it next to TSC handling.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3478,6 +3478,22 @@ int hvm_msr_read_intercept(unsigned int
         *msr_content = v->arch.hvm.msr_tsc_adjust;
         break;
 
+    case MSR_MPERF_RD_ONLY:
+        if ( !d->arch.cpuid->extd.efro )
+        {
+            goto gp_fault;
+
+    case MSR_IA32_MPERF:
+            if ( !(d->arch.cpuid->basic.raw[6].c &
+                   CPUID6_ECX_APERFMPERF_CAPABILITY) )
+                goto gp_fault;
+        }
+        if ( rdmsr_safe(msr, *msr_content) )
+            goto gp_fault;
+        if ( d->arch.cpuid->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
+            *msr_content = hvm_get_guest_tsc_fixed(v, *msr_content);
+        break;
+
     case MSR_APIC_BASE:
         *msr_content = vcpu_vlapic(v)->hw.apic_base_msr;
         break;
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -405,6 +405,9 @@
 #define MSR_IA32_MPERF			0x000000e7
 #define MSR_IA32_APERF			0x000000e8
 
+#define MSR_MPERF_RD_ONLY		0xc00000e7
+#define MSR_APERF_RD_ONLY		0xc00000e8
+
 #define MSR_IA32_THERM_CONTROL		0x0000019a
 #define MSR_IA32_THERM_INTERRUPT	0x0000019b
 #define MSR_IA32_THERM_STATUS		0x0000019c



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:18:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsmK-0004oI-PN; Tue, 05 May 2020 08:18:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsmJ-0004o0-Dr
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:18:11 +0000
X-Inumbo-ID: f503539a-8ea8-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f503539a-8ea8-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 08:18:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 464D4AFF4;
 Tue,  5 May 2020 08:18:12 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v8 10/12] x86/HVM: scale MPERF values reported to guests (on
 AMD)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <5da4ed2e-8eb8-0b18-3c1f-9d419371c08a@suse.com>
Date: Tue, 5 May 2020 10:18:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

AMD's PM specifies that MPERF (and its r/o counterpart) reads are
affected by the TSC ratio. Hence, when processing such reads in
software, we too should scale the values. While we don't currently
expose the underlying feature flags, we do allow the MSRs to be read
nevertheless, and RDPRU is going to expose the values even to user
space.

Furthermore, since the feature flags aren't exposed, this change has
the effect of making the two MSRs properly inaccessible for reads.

Note that writes to MPERF (and APERF) continue to be unsupported.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: New.
---
I did consider whether to put the code in guest_rdmsr() instead, but
decided that it's better to have it next to TSC handling.

--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3478,6 +3478,22 @@ int hvm_msr_read_intercept(unsigned int
         *msr_content = v->arch.hvm.msr_tsc_adjust;
         break;
 
+    case MSR_MPERF_RD_ONLY:
+        if ( !d->arch.cpuid->extd.efro )
+        {
+            goto gp_fault;
+
+    case MSR_IA32_MPERF:
+            if ( !(d->arch.cpuid->basic.raw[6].c &
+                   CPUID6_ECX_APERFMPERF_CAPABILITY) )
+                goto gp_fault;
+        }
+        if ( rdmsr_safe(msr, *msr_content) )
+            goto gp_fault;
+        if ( d->arch.cpuid->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )
+            *msr_content = hvm_get_guest_tsc_fixed(v, *msr_content);
+        break;
+
     case MSR_APIC_BASE:
         *msr_content = vcpu_vlapic(v)->hw.apic_base_msr;
         break;
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -405,6 +405,9 @@
 #define MSR_IA32_MPERF			0x000000e7
 #define MSR_IA32_APERF			0x000000e8
 
+#define MSR_MPERF_RD_ONLY		0xc00000e7
+#define MSR_APERF_RD_ONLY		0xc00000e8
+
 #define MSR_IA32_THERM_CONTROL		0x0000019a
 #define MSR_IA32_THERM_INTERRUPT	0x0000019b
 #define MSR_IA32_THERM_STATUS		0x0000019c



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:19:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:19:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsnH-0004uy-2q; Tue, 05 May 2020 08:19:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsnF-0004um-Lc
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:19:09 +0000
X-Inumbo-ID: 17b1e99c-8ea9-11ea-9d93-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 17b1e99c-8ea9-11ea-9d93-12813bfff9fa;
 Tue, 05 May 2020 08:19:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 773EFAFF0;
 Tue,  5 May 2020 08:19:10 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] x86/HVM: scale MPERF values reported to guests
 (on AMD)
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <c14edd2c-3cd8-c9b4-0cc0-7cbf2c672127@suse.com>
Message-ID: <d4ae14f1-52c2-1dfb-439b-d0b09b36282a@suse.com>
Date: Tue, 5 May 2020 10:19:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <c14edd2c-3cd8-c9b4-0cc0-7cbf2c672127@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 10:17, Jan Beulich wrote:
> AMD's PM specifies that MPERF (and its r/o counterpart) reads are
> affected by the TSC ratio. Hence when processing such reads in software
> we too should scale the values. While we don't currently (yet) expose
> the underlying feature flags, besides us allowing the MSRs to be read
> nevertheless, RDPRU is going to expose the values even to user space.
> 
> Furthermore, due to the not exposed feature flags, this change has the
> effect of making properly inaccessible (for reads) the two MSRs.
> 
> Note that writes to MPERF (and APERF) continue to be unsupported.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Sorry, just re-sent this one with correct numbering.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 08:19:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsnb-0004z3-Bx; Tue, 05 May 2020 08:19:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsna-0004ym-5p
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:19:30 +0000
X-Inumbo-ID: 23b5f4fe-8ea9-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23b5f4fe-8ea9-11ea-b07b-bc764e2007e4;
 Tue, 05 May 2020 08:19:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9AC9EAFF5;
 Tue,  5 May 2020 08:19:30 +0000 (UTC)
Subject: [PATCH v8 11/12] x86emul: support RDPRU
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <9cd2134c-b5fa-bcda-0ace-da138ff03a1d@suse.com>
Date: Tue, 5 May 2020 10:19:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While the PM doesn't say so, this assumes that the MPERF value read
this way gets scaled the same way as when it is read through RDMSR.

Also introduce the SVM-related constants while at it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v6: Re-base.
v5: The CPUID field used is just 8 bits wide.
v4: Add GENERAL2_INTERCEPT_RDPRU and VMEXIT_RDPRU enumerators. Fold
    handling of out of bounds indexes into switch(). Avoid
    recalculate_misc() clobbering what recalculate_cpu_policy() has
    done. Re-base.
v3: New.
---
RFC: Andrew promised to take care of the CPUID side of this; re-base
     over his work once available.

--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -264,6 +264,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
 
         {"clzero",       0x80000008, NA, CPUID_REG_EBX,  0,  1},
         {"rstr-fp-err-ptrs", 0x80000008, NA, CPUID_REG_EBX, 2, 1},
+        {"rdpru",        0x80000008, NA, CPUID_REG_EBX,  4,  1},
         {"wbnoinvd",     0x80000008, NA, CPUID_REG_EBX,  9,  1},
         {"ibpb",         0x80000008, NA, CPUID_REG_EBX, 12,  1},
         {"ppin",         0x80000008, NA, CPUID_REG_EBX, 23,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -148,6 +148,8 @@ static const char *const str_e8b[32] =
     [ 0] = "clzero",
     [ 2] = "rstr-fp-err-ptrs",
 
+    [ 4] = "rdpru",
+
     /* [ 8] */            [ 9] = "wbnoinvd",
 
     [12] = "ibpb",
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -683,6 +683,13 @@ static int read_msr(
 {
     switch ( reg )
     {
+    case 0x000000e8: /* APERF */
+    case 0xc00000e8: /* APERF_RD_ONLY */
+#define APERF_LO_VALUE 0xAEAEAEAE
+#define APERF_HI_VALUE 0xEAEAEAEA
+        *val = ((uint64_t)APERF_HI_VALUE << 32) | APERF_LO_VALUE;
+        return X86EMUL_OKAY;
+
     case 0xc0000080: /* EFER */
         *val = ctxt->addr_size > 32 ? 0x500 /* LME|LMA */ : 0;
         return X86EMUL_OKAY;
@@ -2421,6 +2428,30 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing rdpru...");
+    instr[0] = 0x0f; instr[1] = 0x01; instr[2] = 0xfd;
+    regs.eip = (unsigned long)&instr[0];
+    regs.ecx = 1;
+    regs.eflags = EFLAGS_ALWAYS_SET;
+    rc = x86_emulate(&ctxt, &emulops);
+    if ( (rc != X86EMUL_OKAY) ||
+         (regs.eax != APERF_LO_VALUE) || (regs.edx != APERF_HI_VALUE) ||
+         !(regs.eflags & X86_EFLAGS_CF) ||
+         (regs.eip != (unsigned long)&instr[3]) )
+        goto fail;
+    if ( ctxt.cpuid->extd.rdpru_max < 0xffff )
+    {
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = ctxt.cpuid->extd.rdpru_max + 1;
+        regs.eflags = EFLAGS_ALWAYS_SET | X86_EFLAGS_CF;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) || regs.eax || regs.edx ||
+             (regs.eflags & X86_EFLAGS_CF) ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+    }
+    printf("okay\n");
+
     printf("%-40s", "Testing fnstenv 4(%ecx)...");
     if ( stack_exec && cpu_has_fpu )
     {
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -84,6 +84,8 @@ bool emul_test_init(void)
     cp.feat.avx512pf = cp.feat.avx512f;
     cp.feat.rdpid = true;
     cp.extd.clzero = true;
+    cp.extd.rdpru = true;
+    cp.extd.rdpru_max = 1;
 
     if ( cpu_has_xsave )
     {
@@ -156,11 +158,11 @@ int emul_test_cpuid(
     }
 
     /*
-     * The emulator doesn't itself use CLZERO, so we can always run the
+     * The emulator doesn't itself use CLZERO/RDPRU, so we can always run the
      * respective test(s).
      */
     if ( leaf == 0x80000008 )
-        res->b |= 1U << 0;
+        res->b |= (1U << 0) | (1U << 4);
 
     return X86EMUL_OKAY;
 }
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -243,8 +243,6 @@ static void recalculate_misc(struct cpui
     /* Most of Power/RAS hidden from guests. */
     p->extd.raw[0x7].a = p->extd.raw[0x7].b = p->extd.raw[0x7].c = 0;
 
-    p->extd.raw[0x8].d = 0;
-
     switch ( p->x86_vendor )
     {
     case X86_VENDOR_INTEL:
@@ -263,6 +261,7 @@ static void recalculate_misc(struct cpui
 
         p->extd.raw[0x8].a &= 0x0000ffff;
         p->extd.raw[0x8].c = 0;
+        p->extd.raw[0x8].d = 0;
         break;
 
     case X86_VENDOR_AMD:
@@ -281,6 +280,7 @@ static void recalculate_misc(struct cpui
 
         p->extd.raw[0x8].a &= 0x0000ffff; /* GuestMaxPhysAddr hidden. */
         p->extd.raw[0x8].c &= 0x0003f0ff;
+        p->extd.raw[0x8].d &= 0xffff0000;
 
         p->extd.raw[0x9] = EMPTY_LEAF;
 
@@ -643,6 +643,11 @@ void recalculate_cpuid_policy(struct dom
 
     p->extd.maxlinaddr = p->extd.lm ? 48 : 32;
 
+    if ( p->extd.rdpru )
+        p->extd.rdpru_max = min(p->extd.rdpru_max, max->extd.rdpru_max);
+    else
+        p->extd.rdpru_max = 0;
+
     recalculate_xstate(p);
     recalculate_misc(p);
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1967,6 +1967,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_fma4()        (ctxt->cpuid->extd.fma4)
 #define vcpu_has_tbm()         (ctxt->cpuid->extd.tbm)
 #define vcpu_has_clzero()      (ctxt->cpuid->extd.clzero)
+#define vcpu_has_rdpru()       (ctxt->cpuid->extd.rdpru)
 #define vcpu_has_wbnoinvd()    (ctxt->cpuid->extd.wbnoinvd)
 
 #define vcpu_has_bmi1()        (ctxt->cpuid->feat.bmi1)
@@ -5855,6 +5856,50 @@ x86_emulate(
                 limit -= sizeof(zero);
             }
             break;
+
+        case 0xfd: /* rdpru */
+            vcpu_must_have(rdpru);
+
+            if ( !mode_ring0() )
+            {
+                fail_if(!ops->read_cr);
+                if ( (rc = ops->read_cr(4, &cr4, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                generate_exception_if(cr4 & X86_CR4_TSD, EXC_UD);
+            }
+
+            switch ( _regs.ecx | -(_regs.ecx > ctxt->cpuid->extd.rdpru_max) )
+            {
+            case 0:  n = MSR_IA32_MPERF; break;
+            case 1:  n = MSR_IA32_APERF; break;
+            default: n = 0; break;
+            }
+
+            _regs.eflags &= ~EFLAGS_MASK;
+            if ( n )
+            {
+                fail_if(!ops->read_msr);
+                switch ( rc = ops->read_msr(n, &msr_val, ctxt) )
+                {
+                case X86EMUL_OKAY:
+                    _regs.eflags |= X86_EFLAGS_CF;
+                    break;
+
+                case X86EMUL_EXCEPTION:
+                    x86_emul_reset_event(ctxt);
+                    rc = X86EMUL_OKAY;
+                    break;
+
+                default:
+                    goto done;
+                }
+            }
+
+            if ( !(_regs.eflags & X86_EFLAGS_CF) )
+                msr_val = 0;
+            _regs.r(dx) = msr_val >> 32;
+            _regs.r(ax) = (uint32_t)msr_val;
+            break;
         }
 
 #define _GRP7(mod, reg) \
--- a/xen/include/asm-x86/hvm/svm/vmcb.h
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h
@@ -74,7 +74,8 @@ enum GenericIntercept2bits
     GENERAL2_INTERCEPT_MONITOR = 1 << 10,
     GENERAL2_INTERCEPT_MWAIT   = 1 << 11,
     GENERAL2_INTERCEPT_MWAIT_CONDITIONAL = 1 << 12,
-    GENERAL2_INTERCEPT_XSETBV  = 1 << 13
+    GENERAL2_INTERCEPT_XSETBV  = 1 << 13,
+    GENERAL2_INTERCEPT_RDPRU   = 1 << 14,
 };
 
 
@@ -298,6 +299,7 @@ enum VMEXIT_EXITCODE
     VMEXIT_MWAIT            = 139, /* 0x8b */
     VMEXIT_MWAIT_CONDITIONAL= 140, /* 0x8c */
     VMEXIT_XSETBV           = 141, /* 0x8d */
+    VMEXIT_RDPRU            = 142, /* 0x8e */
     VMEXIT_NPF              = 1024, /* 0x400, nested paging fault */
     VMEXIT_INVALID          =  -1
 };
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -250,6 +250,7 @@ XEN_CPUFEATURE(EFRO,          7*32+10) /
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */
 XEN_CPUFEATURE(RSTR_FP_ERR_PTRS, 8*32+ 2) /*A  (F)X{SAVE,RSTOR} always saves/restores FPU Error pointers */
+XEN_CPUFEATURE(RDPRU,         8*32+ 4) /*A  RDPRU instruction */
 XEN_CPUFEATURE(WBNOINVD,      8*32+ 9) /*   WBNOINVD instruction */
 XEN_CPUFEATURE(IBPB,          8*32+12) /*A  IBPB support only (no IBRS, used by AMD) */
 XEN_CPUFEATURE(AMD_PPIN,      8*32+23) /*   Protected Processor Inventory Number */
--- a/xen/include/xen/lib/x86/cpuid.h
+++ b/xen/include/xen/lib/x86/cpuid.h
@@ -263,7 +263,7 @@ struct cpuid_policy
                 struct { DECL_BITFIELD(e8b); };
             };
             uint32_t nc:8, :4, apic_id_size:4, :16;
-            uint32_t /* d */:32;
+            uint8_t :8, :8, rdpru_max, :8;
         };
     } extd;
 



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:20:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVsoS-0005kA-NW; Tue, 05 May 2020 08:20:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVsoR-0005jv-Jk
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:20:23 +0000
X-Inumbo-ID: 4380a45a-8ea9-11ea-9d95-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4380a45a-8ea9-11ea-9d95-12813bfff9fa;
 Tue, 05 May 2020 08:20:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EEA90AFF3;
 Tue,  5 May 2020 08:20:23 +0000 (UTC)
Subject: [PATCH v8 12/12] x86/HVM: don't needlessly intercept APERF/MPERF/TSC
 MSR reads
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Message-ID: <e92b6c1a-b2c3-13e7-116c-4772c851dd0b@suse.com>
Date: Tue, 5 May 2020 10:20:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If the hardware can handle accesses, we should allow it to do so. This
way we can expose EFRO to HVM guests, and "all" that's left for exposing
APERF/MPERF is to figure out how to handle writes to these MSRs. (Note
that the leaf 6 guest CPUID checks will evaluate to false for now, as
recalculate_misc() zaps the entire leaf.)

For TSC the intercepts are made to mirror the RDTSC ones.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Kevin Tian <kevin.tian@intel.com>
---
v4: Make TSC intercepts mirror RDTSC ones. Re-base.
v3: New.

--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -595,6 +595,7 @@ static void svm_cpuid_policy_changed(str
     struct vmcb_struct *vmcb = svm->vmcb;
     const struct cpuid_policy *cp = v->domain->arch.cpuid;
     u32 bitmap = vmcb_get_exception_intercepts(vmcb);
+    unsigned int mode;
 
     if ( opt_hvm_fep ||
          (v->domain->arch.cpuid->x86_vendor != boot_cpu_data.x86_vendor) )
@@ -607,6 +608,17 @@ static void svm_cpuid_policy_changed(str
     /* Give access to MSR_PRED_CMD if the guest has been told about it. */
     svm_intercept_msr(v, MSR_PRED_CMD,
                       cp->extd.ibpb ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW);
+
+    /* Allow direct reads from APERF/MPERF if permitted by the policy. */
+    mode = cp->basic.raw[6].c & CPUID6_ECX_APERFMPERF_CAPABILITY
+           ? MSR_INTERCEPT_WRITE : MSR_INTERCEPT_RW;
+    svm_intercept_msr(v, MSR_IA32_APERF, mode);
+    svm_intercept_msr(v, MSR_IA32_MPERF, mode);
+
+    /* Allow direct access to their r/o counterparts if permitted. */
+    mode = cp->extd.efro ? MSR_INTERCEPT_NONE : MSR_INTERCEPT_RW;
+    svm_intercept_msr(v, MSR_APERF_RD_ONLY, mode);
+    svm_intercept_msr(v, MSR_MPERF_RD_ONLY, mode);
 }
 
 void svm_sync_vmcb(struct vcpu *v, enum vmcb_sync_state new_state)
@@ -860,7 +872,10 @@ static void svm_set_rdtsc_exiting(struct
     {
         general1_intercepts |= GENERAL1_INTERCEPT_RDTSC;
         general2_intercepts |= GENERAL2_INTERCEPT_RDTSCP;
+        svm_enable_intercept_for_msr(v, MSR_IA32_TSC);
     }
+    else
+        svm_intercept_msr(v, MSR_IA32_TSC, MSR_INTERCEPT_WRITE);
 
     vmcb_set_general1_intercepts(vmcb, general1_intercepts);
     vmcb_set_general2_intercepts(vmcb, general2_intercepts);
--- a/xen/arch/x86/hvm/svm/vmcb.c
+++ b/xen/arch/x86/hvm/svm/vmcb.c
@@ -108,6 +108,7 @@ static int construct_vmcb(struct vcpu *v
     {
         vmcb->_general1_intercepts |= GENERAL1_INTERCEPT_RDTSC;
         vmcb->_general2_intercepts |= GENERAL2_INTERCEPT_RDTSCP;
+        svm_intercept_msr(v, MSR_IA32_TSC, MSR_INTERCEPT_WRITE);
     }
 
     /* Guest segment limits. */
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -1141,8 +1141,13 @@ static int construct_vmcs(struct vcpu *v
         vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_CS, VMX_MSR_RW);
         vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_ESP, VMX_MSR_RW);
         vmx_clear_msr_intercept(v, MSR_IA32_SYSENTER_EIP, VMX_MSR_RW);
+
+        if ( !(v->arch.hvm.vmx.exec_control & CPU_BASED_RDTSC_EXITING) )
+            vmx_clear_msr_intercept(v, MSR_IA32_TSC, VMX_MSR_R);
+
         if ( paging_mode_hap(d) && (!is_iommu_enabled(d) || iommu_snoop) )
             vmx_clear_msr_intercept(v, MSR_IA32_CR_PAT, VMX_MSR_RW);
+
         if ( (vmexit_ctl & VM_EXIT_CLEAR_BNDCFGS) &&
              (vmentry_ctl & VM_ENTRY_LOAD_BNDCFGS) )
             vmx_clear_msr_intercept(v, MSR_IA32_BNDCFGS, VMX_MSR_RW);
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -589,6 +589,18 @@ static void vmx_cpuid_policy_changed(str
         vmx_clear_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
     else
         vmx_set_msr_intercept(v, MSR_FLUSH_CMD, VMX_MSR_RW);
+
+    /* Allow direct reads from APERF/MPERF if permitted by the policy. */
+    if ( cp->basic.raw[6].c & CPUID6_ECX_APERFMPERF_CAPABILITY )
+    {
+        vmx_clear_msr_intercept(v, MSR_IA32_APERF, VMX_MSR_R);
+        vmx_clear_msr_intercept(v, MSR_IA32_MPERF, VMX_MSR_R);
+    }
+    else
+    {
+        vmx_set_msr_intercept(v, MSR_IA32_APERF, VMX_MSR_R);
+        vmx_set_msr_intercept(v, MSR_IA32_MPERF, VMX_MSR_R);
+    }
 }
 
 int vmx_guest_x86_mode(struct vcpu *v)
@@ -1254,7 +1266,12 @@ static void vmx_set_rdtsc_exiting(struct
     vmx_vmcs_enter(v);
     v->arch.hvm.vmx.exec_control &= ~CPU_BASED_RDTSC_EXITING;
     if ( enable )
+    {
         v->arch.hvm.vmx.exec_control |= CPU_BASED_RDTSC_EXITING;
+        vmx_set_msr_intercept(v, MSR_IA32_TSC, VMX_MSR_R);
+    }
+    else
+        vmx_clear_msr_intercept(v, MSR_IA32_TSC, VMX_MSR_R);
     vmx_update_cpu_exec_control(v);
     vmx_vmcs_exit(v);
 }
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -245,7 +245,7 @@ XEN_CPUFEATURE(ENQCMD,        6*32+29) /
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(ITSC,          7*32+ 8) /*   Invariant TSC */
-XEN_CPUFEATURE(EFRO,          7*32+10) /*   APERF/MPERF Read Only interface */
+XEN_CPUFEATURE(EFRO,          7*32+10) /*S  APERF/MPERF Read Only interface */
 
 /* AMD-defined CPU features, CPUID level 0x80000008.ebx, word 8 */
 XEN_CPUFEATURE(CLZERO,        8*32+ 0) /*A  CLZERO instruction */



From xen-devel-bounces@lists.xenproject.org Tue May 05 08:46:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 08:46:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVtDs-0007ZG-Ta; Tue, 05 May 2020 08:46:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QDd3=6T=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1jVtDq-0007ZB-To
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 08:46:39 +0000
X-Inumbo-ID: ec88aacc-8eac-11ea-b9cf-bc764e2007e4
Received: from mailout1.w1.samsung.com (unknown [210.118.77.11])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec88aacc-8eac-11ea-b9cf-bc764e2007e4;
 Tue, 05 May 2020 08:46:34 +0000 (UTC)
Received: from eucas1p1.samsung.com (unknown [182.198.249.206])
 by mailout1.w1.samsung.com (KnoxPortal) with ESMTP id
 20200505084633euoutp01489177e970e71a84010fda34500631e3~MFXyz1WIZ0326103261euoutp01l
 for <xen-devel@lists.xenproject.org>; Tue,  5 May 2020 08:46:33 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout1.w1.samsung.com
 20200505084633euoutp01489177e970e71a84010fda34500631e3~MFXyz1WIZ0326103261euoutp01l
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
 s=mail20170921; t=1588668393;
 bh=XbOrFi14wZnLLR+D8IQye6yZyzXRz7ibxgFZt2Xt8y8=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=hL5JSfDcU+DCHzbwlON4/36WnmwtQhza0RHC2bCgARLmMp2RhZKAc+e0zyqjiP/1E
 glBuz9elbxgLBS8EGLKEhcrp4Y+FSuHU0sajYwt75ne7L1JIXLr4BuzKlG3L/6GC8D
 ab7TDZY/PRhoDjK8/5gupLnJe/5NTLLCa0/oEedE=
Received: from eusmges1new.samsung.com (unknown [203.254.199.242]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTP id
 20200505084633eucas1p1bef72d98018191e2ab095d6b5cde6d45~MFXyiloBc0600906009eucas1p1M;
 Tue,  5 May 2020 08:46:33 +0000 (GMT)
Received: from eucas1p2.samsung.com ( [182.198.249.207]) by
 eusmges1new.samsung.com (EUCPMTA) with SMTP id 26.CF.61286.9E721BE5; Tue,  5
 May 2020 09:46:33 +0100 (BST)
Received: from eusmtrp1.samsung.com (unknown [182.198.249.138]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTPA id
 20200505084633eucas1p26a6a3f44c64955aadec834bed027e522~MFXyIHf071949019490eucas1p2L;
 Tue,  5 May 2020 08:46:33 +0000 (GMT)
Received: from eusmgms2.samsung.com (unknown [182.198.249.180]) by
 eusmtrp1.samsung.com (KnoxPortal) with ESMTP id
 20200505084633eusmtrp153afdf239488ef4df2fa03d0ecbc82de~MFXyHcV0-0942309423eusmtrp1f;
 Tue,  5 May 2020 08:46:33 +0000 (GMT)
X-AuditID: cbfec7f2-ef1ff7000001ef66-ad-5eb127e955d2
Received: from eusmtip1.samsung.com ( [203.254.199.221]) by
 eusmgms2.samsung.com (EUCPMTA) with SMTP id 1C.21.07950.9E721BE5; Tue,  5
 May 2020 09:46:33 +0100 (BST)
Received: from AMDC2765.digital.local (unknown [106.120.51.73]) by
 eusmtip1.samsung.com (KnoxPortal) with ESMTPA id
 20200505084632eusmtip1c357efec5cd7eb62ab62b23c5f3547be~MFXxkQbKo0309503095eusmtip1v;
 Tue,  5 May 2020 08:46:32 +0000 (GMT)
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: [PATCH v3 16/25] xen: gntdev: fix common struct sg_table related
 issues
Date: Tue,  5 May 2020 10:46:05 +0200
Message-Id: <20200505084614.30424-16-m.szyprowski@samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200505084614.30424-1-m.szyprowski@samsung.com>
X-CMS-MailID: 20200505084633eucas1p26a6a3f44c64955aadec834bed027e522
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200505084633eucas1p26a6a3f44c64955aadec834bed027e522
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200505084633eucas1p26a6a3f44c64955aadec834bed027e522
References: <20200505083926.28503-1-m.szyprowski@samsung.com>
 <20200505084614.30424-1-m.szyprowski@samsung.com>
 <CGME20200505084633eucas1p26a6a3f44c64955aadec834bed027e522@eucas1p2.samsung.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
 linux-arm-kernel@lists.infradead.org,
 Marek Szyprowski <m.szyprowski@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The Documentation/DMA-API-HOWTO.txt states that dma_map_sg returns the
number of the created entries in the DMA address space. However the
subsequent calls to dma_sync_sg_for_{device,cpu} and dma_unmap_sg must be
called with the original number of entries passed to dma_map_sg. The
sg_table->nents in turn holds the result of the dma_map_sg call, as stated
in include/linux/scatterlist.h. A common mistake was to ignore the result
of the dma_map_sg function and not use sg_table->orig_nents at all.

To avoid such issues, let's use the common dma-mapping wrappers operating
directly on the struct sg_table objects and adjust references to
nents and orig_nents respectively.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
For more information, see '[PATCH v3 00/25] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread: https://lkml.org/lkml/2020/5/5/187
---
 drivers/xen/gntdev-dmabuf.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 75d3bb9..4b22785 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -247,8 +247,7 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 
 		if (sgt) {
 			if (gntdev_dmabuf_attach->dir != DMA_NONE)
-				dma_unmap_sg_attrs(attach->dev, sgt->sgl,
-						   sgt->nents,
+				dma_unmap_sgtable_attrs(attach->dev, sgt,
 						   gntdev_dmabuf_attach->dir,
 						   DMA_ATTR_SKIP_CPU_SYNC);
 			sg_free_table(sgt);
@@ -288,7 +287,7 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 	sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages,
 				  gntdev_dmabuf->nr_pages);
 	if (!IS_ERR(sgt)) {
-		if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
+		if (dma_map_sgtable_attrs(attach->dev, sgt, dir,
 				      DMA_ATTR_SKIP_CPU_SYNC)) {
 			sg_free_table(sgt);
 			kfree(sgt);
@@ -625,7 +624,7 @@ static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
 
 	/* Now convert sgt to array of pages and check for page validity. */
 	i = 0;
-	for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+	for_each_sg_page(sgt->sgl, &sg_iter, sgt->orig_nents, 0) {
 		struct page *page = sg_page_iter_page(&sg_iter);
 		/*
 		 * Check if page is valid: this can happen if we are given
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Tue May 05 09:25:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 09:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVtpB-0002Nz-Dt; Tue, 05 May 2020 09:25:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVtpA-0002Nr-4E
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 09:25:12 +0000
X-Inumbo-ID: 513e6bc8-8eb2-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 513e6bc8-8eb2-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 09:25:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588670711;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=99RC9b/V5iDp9O+PzyOxH6obFMvEPeacU7D+AkWN0Os=;
 b=XagVZvfwgcfPeUU4ZzdnaL9Eel0kRwwb/r+cYlTZ/st2NFQQUm9V3/eM
 n476cwPgTqJ0/gqa9tBsFg5E292nxvKDKO3TZIduUB+hvCtt63ogKFvSJ
 rwbD21GZdX+FB8kkQcgdVBiQrwir28fcS3C4eHV/jtHx89hcdCwXtcYD3 Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-SBRS: 2.7
X-MesageID: 17012785
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,354,1583211600"; d="scan'208";a="17012785"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 1/3] x86/mm: do not attempt to convert _PAGE_GNTTAB to a
 boolean
Date: Tue, 5 May 2020 11:24:52 +0200
Message-ID: <20200505092454.9161-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200505092454.9161-1-roger.pau@citrix.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Clang 10 complains with:

mm.c:1239:10: error: converting the result of '<<' to a boolean always evaluates to true
      [-Werror,-Wtautological-constant-compare]
    if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
         ^
xen/include/asm/x86_64/page.h:161:25: note: expanded from macro '_PAGE_GNTTAB'
#define _PAGE_GNTTAB (1U<<22)
                        ^

Remove the conversion of _PAGE_GNTTAB to a boolean, since the AND
operation performed afterwards will already yield false if the value
of the macro is 0.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/mm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 355c50ff91..27069d2451 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1236,7 +1236,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
      * (Note that the undestroyable active grants are not a security hole in
      * Xen. All active grants can safely be cleaned up when the domain dies.)
      */
-    if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
+    if ( (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
          !l1e_owner->is_shutting_down && !l1e_owner->is_dying )
     {
         gdprintk(XENLOG_WARNING,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 05 09:25:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 09:25:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVtpG-0002P3-Mu; Tue, 05 May 2020 09:25:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVtpF-0002Ow-1H
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 09:25:17 +0000
X-Inumbo-ID: 522c3eca-8eb2-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 522c3eca-8eb2-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 09:25:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588670712;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=ydy5pb6EJChjFoO7PsuBYhKfJuc95DxbV5NMzU75wZ0=;
 b=S25wKUGHlcOBIx6NOiqjQF82P4ywu847wecibu8LirNzZA0mtwO5n7Z1
 W4dm/q1h+vB0At6bM/0XYF4Ki6aY7yT9Ug94ENBEZDaE//c92EtOk2HwX
 kqCt/1cCkIlQJCsy34MCdG6gL7dmZrrt2B+HHs9U4zeQnMq/AZ/BAftT3 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-SBRS: 2.7
X-MesageID: 17012789
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,354,1583211600"; d="scan'208";a="17012789"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Date: Tue, 5 May 2020 11:24:53 +0200
Message-ID: <20200505092454.9161-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200505092454.9161-1-roger.pau@citrix.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The path provided by EXTRA_PREFIX should be added to the search path
of the configure script, like it's done in Config.mk. Not doing so
makes the search path for configure differ from the search path used
by the build.
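The flag-extension logic the patch adds can be sketched standalone. This is an illustrative shell fragment, not the patched m4 itself, assuming a hypothetical prefix of /opt/xen; it shows how a non-empty EXTRA_PREFIX extends the preprocessor and linker search paths the same way Config.mk does.

```shell
# Standalone sketch of the configure-time logic (hypothetical /opt/xen prefix).
EXTRA_PREFIX=/opt/xen
CPPFLAGS=""
LDFLAGS=""

# Mirror Config.mk: extend the header and library search paths.
if [ -n "$EXTRA_PREFIX" ]; then
    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
fi

echo "CPPFLAGS:$CPPFLAGS"
echo "LDFLAGS:$LDFLAGS"
```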

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Please re-run autoconf.sh after applying.
---
 m4/set_cflags_ldflags.m4 | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
index cbad3c10b0..08f5c983cc 100644
--- a/m4/set_cflags_ldflags.m4
+++ b/m4/set_cflags_ldflags.m4
@@ -15,6 +15,10 @@ for ldflag in $APPEND_LIB
 do
     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
 done
+if [ ! -z $EXTRA_PREFIX ]; then
+    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
+    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
+fi
 CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
 LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"])
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 05 09:25:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 09:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVtp8-0002Nl-59; Tue, 05 May 2020 09:25:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVtp7-0002Ng-LN
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 09:25:09 +0000
X-Inumbo-ID: 4f7acb60-8eb2-11ea-9d9d-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f7acb60-8eb2-11ea-9d9d-12813bfff9fa;
 Tue, 05 May 2020 09:25:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588670708;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=V2xxiJi6Vs/2wqarbDZISrlp3POhWUfUovLsxoF85tk=;
 b=ZvsCwAcculCw5BLNsGvhScGwDmafr+dmZoAcuIJ/LPf7pVWDVOHeOrX5
 Kaz7RI0JjLdYLDU9yaU/mpzkXMERrqv1z1+jNfJxnT2vtEFuPh0hzLSvQ
 kYZmxp7PWVV72RPRtetCvWtP3vos+7q+OTj4otoB+zD7FPQnvr1wAM5jh g=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: sMjqAbiTLUk7yBkRd7fO9+DhePiH/RmIQDTn2bM7GhjFvnuCnr6lSLZizC3eiY++ahyIpM39P+
 YGcgJNgRPa1DaB1XJqB47fr4Ykdu8Qh7I2zWkANymBYlA+9pstD+G4LDEO2xoHBStusAj/cmVS
 CarqLGuGL0wlq8Ys1wDiTY0Q+xP8AYzgE9D5hzTM4dLbCxWDINVquE8jGMKdpQqLZkxcTNp9Ch
 1FpoKrkTarcbyIYvN7d1u9vAYP/EIHenR3a0fBswrZVlvjH8QB2C1cjiby+umDP7lwiiQgMgss
 q9A=
X-SBRS: 2.7
X-MesageID: 17141822
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,354,1583211600"; d="scan'208";a="17141822"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 0/3] build: fixes for clang 10
Date: Tue, 5 May 2020 11:24:51 +0200
Message-ID: <20200505092454.9161-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

Patches 1 and 3 are fixes needed in order to build Xen with Clang 10.
Patch 2 fixes a configure bug I found while working on the Clang 10
build.

Thanks, Roger.

Roger Pau Monne (3):
  x86/mm: do not attempt to convert _PAGE_GNTTAB to a boolean
  configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
  tools/libxl: disable clang indentation check for the disk parser

 m4/set_cflags_ldflags.m4    |  4 ++++
 tools/libxl/libxlu_disk_l.l | 11 +++++++++++
 xen/arch/x86/mm.c           |  2 +-
 3 files changed, 16 insertions(+), 1 deletion(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 05 09:25:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 09:25:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVtpK-0002Pj-WB; Tue, 05 May 2020 09:25:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVtpJ-0002PG-EK
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 09:25:21 +0000
X-Inumbo-ID: 567332c4-8eb2-11ea-9d9d-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 567332c4-8eb2-11ea-9d9d-12813bfff9fa;
 Tue, 05 May 2020 09:25:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588670720;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=0+aC6uoM/qymQoZzM91I3BVdkUyiXIlm5H5MQBXrOls=;
 b=MM6v7NuhHd5fT7TqZZPmt5dK93mp0lKjjnRti4568gJlv7FBMl5wqKyp
 5MbT+zaeLkZtZjOvZzy6y9FvrUx7ioBtzyGEm7ZZ+6XdLclY6Y0c/pg3x
 zMPYKZJIzUMa+ROE8TIRL4Xzifw4z8UFpxS2Y0ZxFnFrFGjNbzFo9MR/K k=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: TxkxxY4pQWtscr4wjKnycYrOsRO5VMlITzvjiod2o8rPEoGSz7tbts61pxx2TkA2SKawCIATGF
 ybKZD/zznx6cnIDX9KyFWLPwc5SBIBxl6Rbf8msFVGLkCYrTNx5arkW/Zh71U7u5/SzsVINnD1
 1ctC+xS0vZXzjnpZcIjwd8b3l87uUie0VW0AOXAo+04y0jRWcqJfg3JHyCT2MBkspZDp9tTTnr
 Q7Y5OMwHpyBxEm8qnHB/u3f3JN6C6hRubG8frgvMTvGJQDUc9xrOr7yGsFX4m3+XmigP+XQs5Y
 6WU=
X-SBRS: 2.7
X-MesageID: 16772808
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,354,1583211600"; d="scan'208";a="16772808"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 3/3] tools/libxl: disable clang indentation check for the disk
 parser
Date: Tue, 5 May 2020 11:24:54 +0200
Message-ID: <20200505092454.9161-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200505092454.9161-1-roger.pau@citrix.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Clang 10 complains with:

13: error: misleading indentation; statement is not part of the previous 'if'
      [-Werror,-Wmisleading-indentation]
            if ( ! yyg->yy_state_buf )
            ^
libxlu_disk_l.c:1259:9: note: previous statement is here
        if ( ! yyg->yy_state_buf )
        ^

This is caused by the missing braces in single-line statements and the
wrong indentation in the flex-generated code. Fix it by disabling the
warning for that specific file; I haven't found a way to force flex to
emit braces around single-line statements in conditional blocks.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Please re-generate libxlu_disk_l.c before committing.
---
 tools/libxl/libxlu_disk_l.l | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/libxl/libxlu_disk_l.l b/tools/libxl/libxlu_disk_l.l
index 97039a2800..7a46f4a30c 100644
--- a/tools/libxl/libxlu_disk_l.l
+++ b/tools/libxl/libxlu_disk_l.l
@@ -36,6 +36,17 @@
 
 #define YY_NO_INPUT
 
+/* The code generated by flex is missing braces in single-line statements and
+ * is not properly indented, which triggers the clang misleading-indentation
+ * check that has been part of -Wall since clang 10. In order to safely
+ * disable it on clang versions that don't implement the diagnostic,
+ * also disable the unknown-pragma and unknown-warning-option warnings. */
+#ifdef __clang__
+# pragma clang diagnostic ignored "-Wunknown-pragmas"
+# pragma clang diagnostic ignored "-Wunknown-warning-option"
+# pragma clang diagnostic ignored "-Wmisleading-indentation"
+#endif
+
 /* Some versions of flex have a bug (Fedora bugzilla 612465) which causes
  * it to fail to declare these functions, which it defines.  So declare
  * them ourselves.  Hopefully we won't have to simultaneously support
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 05 10:19:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 10:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVufa-0006ni-89; Tue, 05 May 2020 10:19:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ph1I=6T=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jVufZ-0006nd-Ag
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 10:19:21 +0000
X-Inumbo-ID: e179ae27-8eb9-11ea-9da1-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id e179ae27-8eb9-11ea-9da1-12813bfff9fa;
 Tue, 05 May 2020 10:19:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588673959;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=1P/5QYytElvztL4q85MDX6gLDmepTUOXbdNhCTBxhz4=;
 b=GR15LwTL3fnshm0QX/LZNScTDuQjZ4cBRQR6TS9LmWbGPXHWy4Z99bfh3ZLv34MDK+jpNV
 1QliD0BfZTFNVCsMI0/iu7Lc2zGTPv39F6OpmL7jG3xNhrgDX5a3ttq5jGiLd9UZvWIJNR
 QH1f818apt5zk8bZ/oRaVhqDAt8qNz0=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-260-2AN3aTbAPjeur6A4AMgvKQ-1; Tue, 05 May 2020 06:19:14 -0400
X-MC-Unique: 2AN3aTbAPjeur6A4AMgvKQ-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 5FA98107ACCD;
 Tue,  5 May 2020 10:19:13 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-113-6.ams2.redhat.com [10.36.113.6])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 48EAE627D9;
 Tue,  5 May 2020 10:19:10 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id C9E3D11358BE; Tue,  5 May 2020 12:19:08 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 02/10] xen: Fix and improve handling of device_add usb-host
 errors
Date: Tue,  5 May 2020 12:19:00 +0200
Message-Id: <20200505101908.6207-3-armbru@redhat.com>
In-Reply-To: <20200505101908.6207-1-armbru@redhat.com>
References: <20200505101908.6207-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

usbback_portid_add() leaks the error when qdev_device_add() fails.
Fix that.  While there, use the error to improve the error message.

The qemu_opts_from_qdict() similarly leaks on failure.  But any
failure there is a programming error.  Pass &error_abort.

Fixes: 816ac92ef769f9ffc534e49a1bb6177bddce7aa2
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
---
 hw/usb/xen-usb.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 961190d0f7..4d266d7bb4 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -30,6 +30,7 @@
 #include "hw/usb.h"
 #include "hw/xen/xen-legacy-backend.h"
 #include "monitor/qdev.h"
+#include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qstring.h"
 
@@ -755,13 +756,16 @@ static void usbback_portid_add(struct usbback_info *usbif, unsigned port,
     qdict_put_int(qdict, "port", port);
     qdict_put_int(qdict, "hostbus", atoi(busid));
     qdict_put_str(qdict, "hostport", portname);
-    opts = qemu_opts_from_qdict(qemu_find_opts("device"), qdict, &local_err);
-    if (local_err) {
-        goto err;
-    }
+    opts = qemu_opts_from_qdict(qemu_find_opts("device"), qdict,
+                                &error_abort);
     usbif->ports[port - 1].dev = USB_DEVICE(qdev_device_add(opts, &local_err));
     if (!usbif->ports[port - 1].dev) {
-        goto err;
+        qobject_unref(qdict);
+        xen_pv_printf(&usbif->xendev, 0,
+                      "device %s could not be opened: %s\n",
+                      busid, error_get_pretty(local_err));
+        error_free(local_err);
+        return;
     }
     qobject_unref(qdict);
     speed = usbif->ports[port - 1].dev->speed;
@@ -793,11 +797,6 @@ static void usbback_portid_add(struct usbback_info *usbif, unsigned port,
     usbback_hotplug_enq(usbif, port);
 
     TR_BUS(&usbif->xendev, "port %d attached\n", port);
-    return;
-
-err:
-    qobject_unref(qdict);
-    xen_pv_printf(&usbif->xendev, 0, "device %s could not be opened\n", busid);
 }
 
 static void usbback_process_port(struct usbback_info *usbif, unsigned port)
-- 
2.21.1



From xen-devel-bounces@lists.xenproject.org Tue May 05 11:07:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 11:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVvPK-0002N6-TF; Tue, 05 May 2020 11:06:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Sfw0=6T=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jVvPJ-0002N1-BN
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 11:06:37 +0000
X-Inumbo-ID: 7ce87a80-8ec0-11ea-9daa-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ce87a80-8ec0-11ea-9daa-12813bfff9fa;
 Tue, 05 May 2020 11:06:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=o6t4XEAWupBQ7XmDt0+Vd5JHQn6V1VVS150N+FXhM2w=; b=7Lt4gWISKrG9ePh0o1ZJnfhFsQ
 93z1oftshVhjhiOT0HNao9nHEja7pa9jVgi6RU7OLzhpPkzM1wfaHyWRkzVv2vVge8huzVUebnQDL
 6GfC9TLWOHrlgstDd97zafKR6GzSZp520i6MDhi08XbT9xDnekMyEXqcM9ikbIep592E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jVvPI-0000Dz-El; Tue, 05 May 2020 11:06:36 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jVvPI-0003lw-4N; Tue, 05 May 2020 11:06:36 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] x86/traps: fix an off-by-one error
Date: Tue,  5 May 2020 12:06:30 +0100
Message-Id: <37b7ec049ff82f92cc6724a743867e1cd9365f5b.1588676790.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

stack++ can walk into the next page, in which case unmap_domain_page()
unmaps the wrong page, corrupting the mapcache and memory. Fix this by
remembering the page that was actually mapped.

This was found during work on direct map removal. For now, the idle
domain does not have a mapcache and uses the direct map, so no errors
occur in practice.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
---
 xen/arch/x86/traps.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 33e5d21ece..f033a804a3 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -300,6 +300,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
     int i;
     unsigned long *stack, addr;
     unsigned long mask = STACK_SIZE;
+    void *stack_page = NULL;
 
     /* Avoid HVM as we don't know what the stack looks like. */
     if ( is_hvm_vcpu(v) )
@@ -328,7 +329,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
         vcpu = maddr_get_owner(read_cr3()) == v->domain ? v : NULL;
         if ( !vcpu )
         {
-            stack = do_page_walk(v, (unsigned long)stack);
+            stack_page = stack = do_page_walk(v, (unsigned long)stack);
             if ( (unsigned long)stack < PAGE_SIZE )
             {
                 printk("Inaccessible guest memory.\n");
@@ -358,7 +359,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
     if ( mask == PAGE_SIZE )
     {
         BUILD_BUG_ON(PAGE_SIZE == STACK_SIZE);
-        unmap_domain_page(stack);
+        unmap_domain_page(stack_page);
     }
     if ( i == 0 )
         printk("Stack empty.");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 05 11:36:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 11:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVvrq-0004os-Eq; Tue, 05 May 2020 11:36:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3eM=6T=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVvrp-0004on-Qm
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 11:36:05 +0000
X-Inumbo-ID: 9a5311bc-8ec4-11ea-9887-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a5311bc-8ec4-11ea-9887-bc764e2007e4;
 Tue, 05 May 2020 11:36:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588678564;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=lTq7CICoEDGNph2EL4IdzyBFICd4qnvEDY+8bJyNrWk=;
 b=U2dJUsaJl+/DCJ66udtViXKWuppljyZejpf0T3TMqNdlzDSt/LsLzO28
 vR9nplt6u20s9tJ9H7AKwixlBdxhfIQ/JKzjlQ4l6xFKJ0ImGBx2J+yql
 Ii/YiNiRyjT3iJ28Ls83Jr6f9edcXH7jR4giV7aH2Xy439BYDFBZHUkpW 0=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: +9OMVbVe8bsqxerofCQojVVSJyGyInTtLhDaYmw6qBpxuZUC7IWDfRCsomMF1Sk0kEeIq46sW7
 lCfPoaAIBIg8OD1cRafzRt+0pin7F+f8c7piPTvRqzfXfXLQq4n1QGICSmSNUYWyZOi7g0qAYQ
 mUiivOvUttrQSnZU3+gsp+ArEQOB+zHQRRioUO8ss3yAsXhsBdQHOcZzEzTul+qg469zniVR9X
 nPxfjCxL2Gn9/Rl2RLnjbo1YbIbJQjQ96UeupiGKNSn+C9glpDvzUMTfiLi8hyTaF28sYEEJ9f
 itY=
X-SBRS: 2.7
X-MesageID: 17453108
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,354,1583211600"; d="scan'208";a="17453108"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/pv: Compile out emul-gate-op in !CONFIG_PV32 builds
Date: Tue, 5 May 2020 12:35:37 +0100
Message-ID: <20200505113537.29968-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The caller is already guarded by is_pv_32bit_vcpu().

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/pv/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/pv/Makefile b/xen/arch/x86/pv/Makefile
index cf28434ba9..75b01b0062 100644
--- a/xen/arch/x86/pv/Makefile
+++ b/xen/arch/x86/pv/Makefile
@@ -2,7 +2,7 @@ obj-y += callback.o
 obj-y += descriptor-tables.o
 obj-y += domain.o
 obj-y += emulate.o
-obj-y += emul-gate-op.o
+obj-$(CONFIG_PV32) += emul-gate-op.o
 obj-y += emul-inv-op.o
 obj-y += emul-priv-op.o
 obj-$(CONFIG_GRANT_TABLE) += grant_table.o
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 05 11:43:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 11:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVvyc-0005ee-8V; Tue, 05 May 2020 11:43:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3eM=6T=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVvya-0005eZ-NE
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 11:43:04 +0000
X-Inumbo-ID: 9456eb70-8ec5-11ea-9dab-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9456eb70-8ec5-11ea-9dab-12813bfff9fa;
 Tue, 05 May 2020 11:43:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588678983;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=31zhRKWgpHkbWlayt6sZYj3rSqkFq6i7AiR+Kki79xM=;
 b=IdEVt6T/edDXGhbXGyzdYV6BUpjJ32lnTHsXSwrmO+Ehwxy9nwOdckbP
 Sg8YRueWxM8V0NAtZjYnCcH0YoPO/zcsaEgR1O19F5hcT4DFsKK44caBz
 J4dm8mD+YutVFgRr976kAZ2ZZ+WxAqpn+ZgaYqCOHz5uAMwzZZSCk1VmM 8=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: VNq9br66eKc0dRkhyp6a8k7+W7qo8Ef2Uk/EamQMXwxBJ5LS3FXQJu36AVsrCkdMZSxz8GORjc
 X1Y85i2Ta/jH3b5d0VBYwXp88hT6jef0QYYhiyMw9R9CJeDURfUVG5jsEEJ6dl0woerE50Ztax
 LhC5mW1uVXS1Zsogf9zVJco9H/C9tYmNq2N1V9vbl93Facea3+FiF/vhqvn4YNFGX7KVc+GWSA
 gCr/N+f8fr4ggSSHcOc0E13y0Ij2xjkqNof/7A9hpapq/M64nnRevc6s/POkY4pX7ElbAdxNko
 7TE=
X-SBRS: 2.7
X-MesageID: 17149542
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,354,1583211600"; d="scan'208";a="17149542"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/pv: Prune include lists
Date: Tue, 5 May 2020 12:42:36 +0100
Message-ID: <20200505114236.31269-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Several of these in particular haven't been pruned since the logic was all
part of arch/x86/traps.c.

Some adjustments to header files are required to avoid compile errors:
 * emulate.h needs xen/sched.h because gdt_ldt_desc_ptr() uses v->vcpu_id.
 * mmconfig.h needs to forward declare acpi_table_header.
 * shadow.h and trace.h need to have uint*_t in scope before including the Xen
   public headers.  For shadow.h, reorder the includes.  For trace.h, include
   types.h.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/pv/callback.c      |  5 -----
 xen/arch/x86/pv/emul-gate-op.c  | 19 -------------------
 xen/arch/x86/pv/emul-inv-op.c   | 18 ------------------
 xen/arch/x86/pv/emul-priv-op.c  |  9 +--------
 xen/arch/x86/pv/emulate.h       |  2 ++
 xen/arch/x86/pv/ro-page-fault.c |  8 --------
 xen/arch/x86/pv/shim.c          |  3 ---
 xen/arch/x86/x86_64/mmconfig.h  |  1 +
 xen/include/asm-x86/shadow.h    |  3 ++-
 xen/include/xen/trace.h         |  1 +
 10 files changed, 7 insertions(+), 62 deletions(-)

diff --git a/xen/arch/x86/pv/callback.c b/xen/arch/x86/pv/callback.c
index 106c16ed01..97242cd3d4 100644
--- a/xen/arch/x86/pv/callback.c
+++ b/xen/arch/x86/pv/callback.c
@@ -19,15 +19,10 @@
 #include <xen/event.h>
 #include <xen/hypercall.h>
 #include <xen/guest_access.h>
-#include <xen/lib.h>
-#include <xen/sched.h>
 #include <compat/callback.h>
 #include <compat/nmi.h>
 
-#include <asm/current.h>
-#include <asm/nmi.h>
 #include <asm/shared.h>
-#include <asm/traps.h>
 
 #include <public/callback.h>
 
diff --git a/xen/arch/x86/pv/emul-gate-op.c b/xen/arch/x86/pv/emul-gate-op.c
index 3c7f6d70bc..61e65ce521 100644
--- a/xen/arch/x86/pv/emul-gate-op.c
+++ b/xen/arch/x86/pv/emul-gate-op.c
@@ -20,25 +20,6 @@
  */
 
 #include <xen/err.h>
-#include <xen/errno.h>
-#include <xen/event.h>
-#include <xen/guest_access.h>
-#include <xen/iocap.h>
-#include <xen/spinlock.h>
-#include <xen/trace.h>
-
-#include <asm/apic.h>
-#include <asm/debugreg.h>
-#include <asm/hpet.h>
-#include <asm/hypercall.h>
-#include <asm/mc146818rtc.h>
-#include <asm/p2m.h>
-#include <asm/pv/traps.h>
-#include <asm/shared.h>
-#include <asm/traps.h>
-#include <asm/x86_emulate.h>
-
-#include <xsm/xsm.h>
 
 #include "emulate.h"
 
diff --git a/xen/arch/x86/pv/emul-inv-op.c b/xen/arch/x86/pv/emul-inv-op.c
index 91d05790c2..59e3edc8c4 100644
--- a/xen/arch/x86/pv/emul-inv-op.c
+++ b/xen/arch/x86/pv/emul-inv-op.c
@@ -19,26 +19,8 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/errno.h>
-#include <xen/event.h>
-#include <xen/guest_access.h>
-#include <xen/iocap.h>
-#include <xen/spinlock.h>
 #include <xen/trace.h>
 
-#include <asm/apic.h>
-#include <asm/debugreg.h>
-#include <asm/hpet.h>
-#include <asm/hypercall.h>
-#include <asm/mc146818rtc.h>
-#include <asm/p2m.h>
-#include <asm/pv/traps.h>
-#include <asm/shared.h>
-#include <asm/traps.h>
-#include <asm/x86_emulate.h>
-
-#include <xsm/xsm.h>
-
 #include "emulate.h"
 
 static int emulate_forced_invalid_op(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index e24b84f46a..3b705299cf 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -19,25 +19,18 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/errno.h>
+#include <xen/domain_page.h>
 #include <xen/event.h>
 #include <xen/guest_access.h>
 #include <xen/iocap.h>
-#include <xen/spinlock.h>
-#include <xen/trace.h>
 
 #include <asm/amd.h>
-#include <asm/apic.h>
 #include <asm/debugreg.h>
 #include <asm/hpet.h>
 #include <asm/hypercall.h>
 #include <asm/mc146818rtc.h>
-#include <asm/p2m.h>
 #include <asm/pv/domain.h>
-#include <asm/pv/traps.h>
 #include <asm/shared.h>
-#include <asm/traps.h>
-#include <asm/x86_emulate.h>
 
 #include <xsm/xsm.h>
 
diff --git a/xen/arch/x86/pv/emulate.h b/xen/arch/x86/pv/emulate.h
index fd2aa0a484..4b845b08e3 100644
--- a/xen/arch/x86/pv/emulate.h
+++ b/xen/arch/x86/pv/emulate.h
@@ -1,6 +1,8 @@
 #ifndef __PV_EMULATE_H__
 #define __PV_EMULATE_H__
 
+#include <xen/sched.h>
+
 #include <asm/processor.h>
 #include <asm/x86_emulate.h>
 
diff --git a/xen/arch/x86/pv/ro-page-fault.c b/xen/arch/x86/pv/ro-page-fault.c
index a920fb5e15..0eedb70002 100644
--- a/xen/arch/x86/pv/ro-page-fault.c
+++ b/xen/arch/x86/pv/ro-page-fault.c
@@ -20,15 +20,7 @@
  * along with this program; If not, see <http://www.gnu.org/licenses/>.
  */
 
-#include <xen/guest_access.h>
-#include <xen/rangeset.h>
-#include <xen/sched.h>
 #include <xen/trace.h>
-
-#include <asm/domain.h>
-#include <asm/mm.h>
-#include <asm/pci.h>
-#include <asm/pv/mm.h>
 #include <asm/shadow.h>
 
 #include "emulate.h"
diff --git a/xen/arch/x86/pv/shim.c b/xen/arch/x86/pv/shim.c
index 31264582cc..3a0525c209 100644
--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -25,14 +25,11 @@
 #include <xen/iocap.h>
 #include <xen/param.h>
 #include <xen/shutdown.h>
-#include <xen/types.h>
 #include <xen/consoled.h>
 #include <xen/pv_console.h>
 
-#include <asm/apic.h>
 #include <asm/dom0_build.h>
 #include <asm/guest.h>
-#include <asm/pv/mm.h>
 
 #include <public/arch-x86/cpuid.h>
 #include <public/hvm/params.h>
diff --git a/xen/arch/x86/x86_64/mmconfig.h b/xen/arch/x86/x86_64/mmconfig.h
index 2e836848ad..4d3b9fcbdd 100644
--- a/xen/arch/x86/x86_64/mmconfig.h
+++ b/xen/arch/x86/x86_64/mmconfig.h
@@ -75,6 +75,7 @@ static inline void mmio_config_writel(void __iomem *pos, u32 val)
 }
 
 /* function prototypes */
+struct acpi_table_header;
 int acpi_parse_mcfg(struct acpi_table_header *header);
 int pci_mmcfg_reserved(uint64_t address, unsigned int segment,
                        unsigned int start_bus, unsigned int end_bus,
diff --git a/xen/include/asm-x86/shadow.h b/xen/include/asm-x86/shadow.h
index 907c71f497..8335862c87 100644
--- a/xen/include/asm-x86/shadow.h
+++ b/xen/include/asm-x86/shadow.h
@@ -22,7 +22,6 @@
 #ifndef _XEN_SHADOW_H
 #define _XEN_SHADOW_H
 
-#include <public/domctl.h>
 #include <xen/sched.h>
 #include <xen/perfc.h>
 #include <xen/domain_page.h>
@@ -31,6 +30,8 @@
 #include <asm/p2m.h>
 #include <asm/spec_ctrl.h>
 
+#include <public/domctl.h>
+
 /*****************************************************************************
  * Macros to tell which shadow paging mode a domain is in*/
 
diff --git a/xen/include/xen/trace.h b/xen/include/xen/trace.h
index af925bcfcc..3cbb542af8 100644
--- a/xen/include/xen/trace.h
+++ b/xen/include/xen/trace.h
@@ -28,6 +28,7 @@ extern int tb_init_done;
 #define tb_init_done false
 #endif
 
+#include <xen/types.h>
 #include <public/sysctl.h>
 #include <public/trace.h>
 #include <asm/trace.h>
-- 
2.11.0
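[Editor's sketch, not part of the original mail: the mmconfig.h hunk above relies on a standard C idiom, where forward-declaring a struct tag lets prototypes take pointers to it without including the header that defines it. The names below mirror the patch but the bodies are hypothetical placeholders for illustration.]

```c
#include <assert.h>
#include <stddef.h>

/* Forward declaration: an incomplete type is enough for any prototype
 * or code that only stores or passes a pointer to the struct. */
struct acpi_table_header;
int acpi_parse_mcfg(struct acpi_table_header *header);

/* A translation unit that never dereferences the pointer compiles
 * fine against the incomplete type -- no ACPI header needed. */
static struct acpi_table_header *saved;

int acpi_parse_mcfg(struct acpi_table_header *header)
{
    saved = header;   /* pointer assignment needs no full definition */
    return 0;
}
```

Only code that dereferences the pointer (and therefore needs the struct layout) has to include the full definition.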



From xen-devel-bounces@lists.xenproject.org Tue May 05 12:36:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 12:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVwo1-0001KR-Mh; Tue, 05 May 2020 12:36:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVwo0-0001KM-Um
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 12:36:12 +0000
X-Inumbo-ID: ff7bbca8-8ecc-11ea-9db2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff7bbca8-8ecc-11ea-9db2-12813bfff9fa;
 Tue, 05 May 2020 12:36:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BB20BAB3D;
 Tue,  5 May 2020 12:36:11 +0000 (UTC)
Subject: Re: [PATCH v8 07/12] x86emul: support FNSTENV and FNSAVE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <9a2afbb1-af92-2c7d-9fde-d8d8e4563a2a@suse.com>
Message-ID: <947612c1-8d6b-c484-e080-0d35bfb9bb3c@suse.com>
Date: Tue, 5 May 2020 14:36:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <9a2afbb1-af92-2c7d-9fde-d8d8e4563a2a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 10:15, Jan Beulich wrote:
> @@ -11542,6 +11611,12 @@ int x86_emul_blk(
>      switch ( state->blk )
>      {
>          bool zf;
> +        struct {
> +            struct x87_env32 env;
> +            struct {
> +               uint8_t bytes[10];
> +            } freg[8];
> +        } fpstate;

This also needs #ifndef X86EMUL_NO_FPU around it for !HVM builds
to work.

Jan
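[Editor's sketch, not part of the original mail: a minimal standalone illustration of the guard Jan asks for, assuming the `X86EMUL_NO_FPU` convention used elsewhere in x86_emulate. The `x87_env32` layout here is a placeholder, not the real emulator definition.]

```c
#include <stdint.h>

struct x87_env32 { uint8_t raw[28]; };   /* placeholder layout */

int blk_fpstate_size(void)
{
#ifndef X86EMUL_NO_FPU
    /* FPU-only local: builds that define X86EMUL_NO_FPU (e.g. the
     * !HVM configuration under discussion) never see it, so no
     * unused-variable breakage there. */
    struct {
        struct x87_env32 env;
        struct { uint8_t bytes[10]; } freg[8];
    } fpstate;

    return (int)sizeof(fpstate);         /* 28 + 8*10 = 108, no padding */
#else
    return 0;
#endif
}
```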



From xen-devel-bounces@lists.xenproject.org Tue May 05 13:01:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:01:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVxCV-0003jy-Rf; Tue, 05 May 2020 13:01:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZEp/=6T=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jVxCV-0003jt-0A
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:01:31 +0000
X-Inumbo-ID: 897eea30-8ed0-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 897eea30-8ed0-11ea-b9cf-bc764e2007e4;
 Tue, 05 May 2020 13:01:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Jh8l5a0f3O0hS8f0yce1Ky168iVxZ8G1XGOqcTqcB3I=; b=jiV9fTXaIDDuv0Tk9mgBKSrlHw
 efTzMFDUKcCekr6y4NQ04DEnzvsI5bYSiIVcEcIu/UJZ1E0FL3Z7cNf5UDd8Dw1j2cSVqSRUldROA
 1iZjAF8N73O6JNxwQwzp5QrJ/Uwh6MY1sPjCpOGeWYl4aPO02NeQfiWYbqTuLJFPCA/M=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jVxCN-0002J9-9t; Tue, 05 May 2020 13:01:23 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jVxCN-0002PQ-1p; Tue, 05 May 2020 13:01:23 +0000
Subject: Re: [PATCH v5] docs/designs: re-work the xenstore migration
 document...
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20200428150624.265-1-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <2bf7dc13-65a4-317e-2c8c-bd6972dbb35a@xen.org>
Date: Tue, 5 May 2020 14:01:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200428150624.265-1-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Paul,

On 28/04/2020 16:06, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... to specify a separate migration stream that will also be suitable for
> live update.
> 
> The original scope of the document was to support non-cooperative migration
> of guests [1] but, since then, live update of xenstored has been brought into
> scope. Thus it makes more sense to define a separate image format for
> serializing xenstore state that is suitable for both purposes.
> 
> The document has been limited to specifying a new image format. The mechanism
> for acquiring the image for live update or migration is not covered as that
> is more appropriately dealt with by a patch to docs/misc/xenstore.txt. It is
> also expected that, when the first implementation of live update or migration
> making use of this specification is committed, that the document is moved from
> docs/designs into docs/specs.
> 
> NOTE: It will only be necessary to save and restore state for active xenstore
>        connections, but the documentation for 'RESUME' in xenstore.txt implies
>        otherwise. That command is unused so this patch deletes it from the
>        specification.
> 
> [1] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/designs/non-cooperative-migration.md
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Reviewed-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

I will wait until tomorrow afternoon (BST) to give others an opportunity
to comment.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 05 13:13:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:13:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVxOB-0004eZ-MY; Tue, 05 May 2020 13:13:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uapr=6T=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jVxOA-0004eU-RW
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:13:34 +0000
X-Inumbo-ID: 3884906a-8ed2-11ea-9db6-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3884906a-8ed2-11ea-9db6-12813bfff9fa;
 Tue, 05 May 2020 13:13:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id ECF3DAFA0
 for <xen-devel@lists.xenproject.org>; Tue,  5 May 2020 13:13:34 +0000 (UTC)
Subject: Re: [PATCH v5] docs/designs: re-work the xenstore migration
 document...
To: xen-devel@lists.xenproject.org
References: <20200428150624.265-1-paul@xen.org>
 <2bf7dc13-65a4-317e-2c8c-bd6972dbb35a@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <fb319876-41eb-e785-a197-92440187a135@suse.com>
Date: Tue, 5 May 2020 15:13:31 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2bf7dc13-65a4-317e-2c8c-bd6972dbb35a@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.20 15:01, Julien Grall wrote:
> Hi Paul,
> 
> On 28/04/2020 16:06, Paul Durrant wrote:
>> From: Paul Durrant <pdurrant@amazon.com>
>>
>> ... to specify a separate migration stream that will also be suitable for
>> live update.
>>
>> The original scope of the document was to support non-cooperative 
>> migration
>> of guests [1] but, since then, live update of xenstored has been 
>> brought into
>> scope. Thus it makes more sense to define a separate image format for
>> serializing xenstore state that is suitable for both purposes.
>>
>> The document has been limited to specifying a new image format. The 
>> mechanism
>> for acquiring the image for live update or migration is not covered as 
>> that
>> is more appropriately dealt with by a patch to docs/misc/xenstore.txt. 
>> It is
>> also expected that, when the first implementation of live update or 
>> migration
>> making use of this specification is committed, that the document is 
>> moved from
>> docs/designs into docs/specs.
>>
>> NOTE: It will only be necessary to save and restore state for active 
>> xenstore
>>        connections, but the documentation for 'RESUME' in xenstore.txt 
>> implies
>>        otherwise. That command is unused so this patch deletes it from 
>> the
>>        specification.

Could someone from Citrix please verify that XAPI isn't using XS_RESUME?


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 05 13:31:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVxfY-0006Hc-B3; Tue, 05 May 2020 13:31:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVxfX-0006HX-C2
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:31:31 +0000
X-Inumbo-ID: ba41ed80-8ed4-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba41ed80-8ed4-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 13:31:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588685490;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=HLVryuYvm4E3GILTVgMqdrirvdvs/GBYf/P2QgtxmpU=;
 b=hmrognyqlJiTJWgntkhFrCjImiCKNvEA0gyJhGV3LMP+2J7vkIceBia/
 tQIrh3kt6kI0oHI9FF97163a/YR2cTcbmCSsd1QbUwrAb07YWGEw5Mx0W
 ueOlzH5jPruhhbGmfYX0IpbSw343j0svyvjAz1+HXccaLhOsXWoykhgpr o=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: pnEiRSEHGfgLAxTTvX3RXcnzx6E0EPonEqdf26gbgkX0x1XPOTrCWEC7d+M5X+G6TAnWNonmSm
 ZxfszX5DGlhnRvGDm79wjt1FTUBTte6DjrMoAj8z7s1UZWxDSqSqbiGT6dKDpMpwd1rgavlDxP
 PAkvNoRLbn9XPjGM8/GNGGnwfJ0/Lhd667c9qXTL6H+LvV1i3Fct98AXK13vD7/YhZkRof0GfA
 1Puus/L8VxzUMJV3NfhMMyrgwRBPUSrpZ2+LyRYGhcUOkLFnHODNQs4acGjOhRMPzxorObZiM7
 oiE=
X-SBRS: 2.7
X-MesageID: 17031866
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17031866"
Date: Tue, 5 May 2020 15:31:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/pv: Compile out emul-gate-op in !CONFIG_PV32 builds
Message-ID: <20200505133119.GB1353@Air-de-Roger>
References: <20200505113537.29968-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200505113537.29968-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 12:35:37PM +0100, Andrew Cooper wrote:
> The caller is already guarded by is_pv_32bit_vcpu().
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 05 13:38:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:38:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVxmI-0006Ti-3C; Tue, 05 May 2020 13:38:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVxmG-0006Td-0i
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:38:28 +0000
X-Inumbo-ID: b2c56041-8ed5-11ea-9dbe-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2c56041-8ed5-11ea-9dbe-12813bfff9fa;
 Tue, 05 May 2020 13:38:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 77D65AB3D;
 Tue,  5 May 2020 13:38:28 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: fix an off-by-one error
To: Hongyan Xia <hx242@xen.org>
References: <37b7ec049ff82f92cc6724a743867e1cd9365f5b.1588676790.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8c3d6a2c-316c-f7fc-a2b0-3ea12721867d@suse.com>
Date: Tue, 5 May 2020 15:38:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <37b7ec049ff82f92cc6724a743867e1cd9365f5b.1588676790.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 13:06, Hongyan Xia wrote:
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -300,6 +300,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
>      int i;
>      unsigned long *stack, addr;
>      unsigned long mask = STACK_SIZE;
> +    void *stack_page = NULL;
>  
>      /* Avoid HVM as we don't know what the stack looks like. */
>      if ( is_hvm_vcpu(v) )
> @@ -328,7 +329,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
>          vcpu = maddr_get_owner(read_cr3()) == v->domain ? v : NULL;
>          if ( !vcpu )
>          {
> -            stack = do_page_walk(v, (unsigned long)stack);
> +            stack_page = stack = do_page_walk(v, (unsigned long)stack);
>              if ( (unsigned long)stack < PAGE_SIZE )
>              {
>                  printk("Inaccessible guest memory.\n");
> @@ -358,7 +359,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
>      if ( mask == PAGE_SIZE )
>      {
>          BUILD_BUG_ON(PAGE_SIZE == STACK_SIZE);
> -        unmap_domain_page(stack);
> +        unmap_domain_page(stack_page);
>      }

With this I think you want to change the whole construct here to

    if ( stack_page )
        unmap_domain_page(stack_page);

i.e. with the then no longer relevant BUILD_BUG_ON() also dropped.

What's more important though - please also fix the same issue in
compat_show_guest_stack(). Unless I'm mistaken of course, in which
case it would be nice if the description could mention why the
other similar function isn't affected.

Jan
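[Editor's sketch, not part of the original mail: the fix under review boils down to keeping the pointer returned by the mapping call separate from the cursor that walks the stack, so the unmap gets the original page-aligned mapping back. All functions below are hypothetical stand-ins for `do_page_walk()`/`unmap_domain_page()`.]

```c
#include <assert.h>
#include <stdlib.h>

static void *mapped;                      /* stands in for the domheap mapping */

static unsigned long *map_stack(void)     /* stands in for do_page_walk() */
{
    mapped = calloc(64, sizeof(unsigned long));
    return mapped;
}

static void unmap_stack(void *p)          /* stands in for unmap_domain_page() */
{
    assert(p == mapped);                  /* must receive the original pointer */
    free(p);
}

int walk_ok(void)
{
    unsigned long *stack, *stack_page;

    /* Remember the mapping in stack_page; let stack advance freely. */
    stack_page = stack = map_stack();
    stack += 8;                           /* cursor moves during the walk */
    assert(*stack == 0);                  /* calloc'd, still in bounds */

    if ( stack_page )                     /* the guard Jan suggests */
        unmap_stack(stack_page);
    return 1;
}
```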


From xen-devel-bounces@lists.xenproject.org Tue May 05 13:47:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVxvH-0007Ka-Vi; Tue, 05 May 2020 13:47:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVxvH-0007KV-94
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:47:47 +0000
X-Inumbo-ID: 0026d37c-8ed7-11ea-9dbf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0026d37c-8ed7-11ea-9dbf-12813bfff9fa;
 Tue, 05 May 2020 13:47:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E47E7ABB2;
 Tue,  5 May 2020 13:47:47 +0000 (UTC)
Subject: Re: [PATCH 1/3] x86/mm: do not attempt to convert _PAGE_GNTTAB to a
 boolean
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <20332b18-960c-a180-8150-55fae60bdc6e@suse.com>
Date: Tue, 5 May 2020 15:47:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200505092454.9161-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 11:24, Roger Pau Monne wrote:
> Clang 10 complains with:
> 
> mm.c:1239:10: error: converting the result of '<<' to a boolean always evaluates to true
>       [-Werror,-Wtautological-constant-compare]
>     if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
>          ^
> xen/include/asm/x86_64/page.h:161:25: note: expanded from macro '_PAGE_GNTTAB'
> #define _PAGE_GNTTAB (1U<<22)
>                         ^

This is a rather odd warning. Do they also warn for "if ( 0 )"
or "do { } while ( 0 )", as we use in various places? There's
no difference to me between a plain number and a constant
composed via an expression.

> Remove the conversion of _PAGE_GNTTAB to a boolean, since the and
> operation performed afterwards will already return false if the value
> of the macro is 0.

I'm sorry, but no. The expression was put there on purpose by
0932210ac095 ("x86: Address "Bitwise-and with zero
CONSTANT_EXPRESSION_RESULT" Coverity issues"), and the
description there is clearly telling us that this wants to stay
unless Coverity changed in the meantime. Otherwise I'm afraid
a more elaborate solution will be needed to please both. Or a
more simplistic one, like using "#if _PAGE_GNTTAB" around the
construct.
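
[Editorial sketch] The preprocessor-level variant suggested above can be illustrated with a minimal standalone example. This is not the actual Xen code; the helper function and its name are made up for illustration:

```c
#define _PAGE_GNTTAB (1U << 22)   /* power-of-two flag, as in the clang report */

/*
 * Hypothetical helper: moving the "is the flag non-zero?" test to the
 * preprocessor avoids both clang's -Wtautological-constant-compare (there
 * is no runtime "constant && ..." expression left) and Coverity's
 * "bitwise-and with zero constant" complaint (the bitwise test compiles
 * out entirely when the flag is 0).
 */
static int is_gnttab_mapping(unsigned int l1e_flags)
{
#if _PAGE_GNTTAB
    return (l1e_flags & _PAGE_GNTTAB) != 0;
#else
    return 0;
#endif
}
```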

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 13:51:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVxzJ-00088c-Gu; Tue, 05 May 2020 13:51:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVxzI-00088V-MG
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:51:56 +0000
X-Inumbo-ID: 9502ac0a-8ed7-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9502ac0a-8ed7-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 13:51:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 861EAAC7B;
 Tue,  5 May 2020 13:51:57 +0000 (UTC)
Subject: Re: [PATCH] x86/pv: Prune include lists
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505114236.31269-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <460a0a7a-17b4-5f02-943d-4b637d2a089c@suse.com>
Date: Tue, 5 May 2020 15:51:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200505114236.31269-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 13:42, Andrew Cooper wrote:
> Several of these in particular haven't been pruned since the logic was all
> part of arch/x86/traps.c
> 
> Some adjustments to header files are required to avoid compile errors:
>  * emulate.h needs xen/sched.h because gdt_ldt_desc_ptr() uses v->vcpu_id.
>  * mmconfig.h needs to forward declare acpi_table_header.
>  * shadow.h and trace.h need to have uint*_t in scope before including the Xen
>    public headers.  For shadow.h, reorder the includes.  For trace.h, include
>    types.h
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue May 05 13:53:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:53:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVy0Q-0008Dx-R8; Tue, 05 May 2020 13:53:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVy0P-0008Ds-O8
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:53:05 +0000
X-Inumbo-ID: be0d9cd6-8ed7-11ea-9dbf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be0d9cd6-8ed7-11ea-9dbf-12813bfff9fa;
 Tue, 05 May 2020 13:53:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 81B33AB3D;
 Tue,  5 May 2020 13:53:06 +0000 (UTC)
Subject: Re: [PATCH] x86/pv: Compile out emul-gate-op in !CONFIG_PV32 builds
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505113537.29968-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5098aa8d-fc4d-42fd-2c6f-3ea30de7556e@suse.com>
Date: Tue, 5 May 2020 15:53:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200505113537.29968-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 13:35, Andrew Cooper wrote:
> The caller is already guarded by is_pv_32bit_vcpu().
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Tue May 05 13:55:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 13:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVy2U-0008Lo-77; Tue, 05 May 2020 13:55:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FNY7=6T=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jVy2S-0008Lf-KZ
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:55:12 +0000
X-Inumbo-ID: 08d3adb4-8ed8-11ea-9dbf-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08d3adb4-8ed8-11ea-9dbf-12813bfff9fa;
 Tue, 05 May 2020 13:55:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=W4pkNyNFjzX9Kmz+1LK/Gv+vtCnwXvMLRVLXlicRwhY=; b=vLvLHV55PxpVxW6W3vMBNm6ta
 bGCcrL4+khkWxh0tKxL36zAEFdaRt/vmeZvyzFsZjAlEa7QsuvZ3rrsuok94IzYk7dHkoJNhNUh5i
 zK7HfMmP4/1AF795YgG6b8/rF8lWhgAIZm39mbXaH2i0AOPlrt/fyPSiux0Qv9CR73VTc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVy2P-0003If-T0; Tue, 05 May 2020 13:55:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jVy2P-0002pM-Ju; Tue, 05 May 2020 13:55:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jVy2P-0005FA-Hq; Tue, 05 May 2020 13:55:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150024-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150024: trouble: blocked/broken
X-Osstest-Failures: xen-unstable-smoke:build-amd64:<job
 status>:broken:regression
 xen-unstable-smoke:build-arm64-xsm:<job status>:broken:regression
 xen-unstable-smoke:build-armhf:<job status>:broken:regression
 xen-unstable-smoke:build-arm64-xsm:host-install(4):broken:regression
 xen-unstable-smoke:build-amd64:host-install(4):broken:regression
 xen-unstable-smoke:build-armhf:host-install(4):broken:regression
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=fe36a173d110fd792f5e337e208a5ed714df1536
X-Osstest-Versions-That: xen=0135be8bd8cd60090298f02310691b688d95c3a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 05 May 2020 13:55:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150024 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150024/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                     <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-armhf                     <job status>                 broken
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 149888
 build-amd64                   4 host-install(4)        broken REGR. vs. 149888
 build-armhf                   4 host-install(4)        broken REGR. vs. 149888

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  fe36a173d110fd792f5e337e208a5ed714df1536
baseline version:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8

Last test of basis   149888  2020-04-30 09:00:53 Z    5 days
Testing same since   149986  2020-05-05 01:24:46 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              broken  
 build-amd64                                                  broken  
 build-armhf                                                  broken  
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-amd64 broken
broken-job build-arm64-xsm broken
broken-job build-armhf broken
broken-step build-arm64-xsm host-install(4)
broken-step build-amd64 host-install(4)
broken-step build-armhf host-install(4)

Not pushing.

------------------------------------------------------------
commit fe36a173d110fd792f5e337e208a5ed714df1536
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu Apr 30 10:47:14 2020 +0100

    x86/amd: Initial support for Fam19h processors
    
    Fam19h is very similar to Fam17h in these regards.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 0c9751b53c2ee135fd484a03fd47f3bb5fbe63b8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:54:35 2020 +0200

    x86/HyperV: correct hv_hcall_page for xen.efi build
    
    Along the lines of what the not reverted part of 3c4b2eef4941 ("x86:
    refine link time stub area related assertion") did, we need to transform
    the absolute HV_HCALL_PAGE into the image base relative hv_hcall_page
    (or else there'd be no need for two distinct symbols). Otherwise
    mkreloc, as used for generating the base relocations of xen.efi, will
    spit out warnings like "Difference at .text:0009b74f is 0xc0000000
    (expected 0x40000000)". As long as the offending relocations are PC
    relative ones, the generated binary is correct afaict, but if there ever
    was the absolute address stored, xen.efi would miss a fixup for it.
    
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b0f666c569b8af6a51ab8aeec3664d6acd1abee9
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:53:42 2020 +0200

    x86/EFI: correct section offsets in mkreloc diagnostics
    
    These are more helpful if they point at the address where the relocated
    value starts, rather than at the specific byte of the difference.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 17b997aa1edb9eb8d9bd1958457ff50927f46832
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Mon May 4 11:53:01 2020 +0200

    x86/hap: be more selective with assisted TLB flush
    
    When doing an assisted flush on HAP the purpose of the
    on_selected_cpus is just to trigger a vmexit on remote CPUs that are
    in guest context, and hence just using is_vcpu_dirty_cpu is too lax,
    also check that the vCPU is running. Due to the lazy context switching
    done by Xen dirty_cpu won't always be cleared when the guest vCPU is
    not running, and hence relying on is_running allows more fine grained
    control of whether the vCPU is actually running.
    
    I've measured the time of the non-local branch of flush_area_mask
    inside the shim running with 32vCPUs over 100000 executions and
    averaged the result on a large Westmere system (80 ways total). The
    figures were fetched during the boot of a SLES 11 PV guest. The
    results are as follows (less is better):
    
    Non assisted flush with x2APIC:      112406ns
    Assisted flush without this patch:   820450ns
    Assisted flush with this patch:        8330ns
    
    While there also pass NULL as the data parameter of on_selected_cpus,
    the dummy handler doesn't consume the data in any way.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 8d928648fd816f97ba3ebe98ab5d4b4a7def58ff
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:51:47 2020 +0200

    xenoprof: limit scope of types and #define-s
    
    Quite a few of the items are used by xenoprof.c only, so move them there
    to limit their visibility as well as the amount of re-building needed in
    case of changes. Also drop the inclusion of the public header there.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit e83719b53a9be1c69033b3ded8051d47e3dadab8
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:51:18 2020 +0200

    xenoprof: drop unused struct xenoprof fields
    
    Both is_primary and domain_ready are only ever written to. Drop both
    fields and restrict structure visibility to just the one involved CU.
    While doing so (and just for starters) make "is_compat" properly bool.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>

commit 7f6a6e8c0a400d1a073b083fe0b7d25ef74b14e0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 4 11:48:13 2020 +0200

    xenoprof: adjust ordering of page sharing vs domain type setting
    
    Buffer pages should be shared with "ignored" or "active" guests only
    (besides, obviously, the primary profiling domain). Hence domain type
    should be set to "ignored" before unsharing from the primary domain
    (which implies even a previously "passive" domain may then access its
    buffers, albeit that's not very useful unless it gets promoted to
    "active" subsequently), i.e. such that no further writes of records to
    the buffer would occur, and (at least for consistency) also before
    sharing it (with the calling domain) from the XENOPROF_get_buffer path.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:00:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVy76-00007b-0d; Tue, 05 May 2020 14:00:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=56xU=6T=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jVy75-00007V-KP
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 13:59:59 +0000
X-Inumbo-ID: b4ca7800-8ed8-11ea-b9cf-bc764e2007e4
Received: from mail-ed1-x532.google.com (unknown [2a00:1450:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4ca7800-8ed8-11ea-b9cf-bc764e2007e4;
 Tue, 05 May 2020 13:59:59 +0000 (UTC)
Received: by mail-ed1-x532.google.com with SMTP id w2so1858624edx.4
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 06:59:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=O9OCBMkBxpYV7nwYvW253+iHNM16Xw+0gCVniK4rZ0A=;
 b=Quia9v/xRYgVnTDwV1BgYeICL+6UIoPhizH6hPPF4DkNcTtvHJl5Ty9UDZWF1Wj0AF
 FS3KgmAq6KlzYeXdYQ3OA++1Q4I0seBrL8agnF3UoSJLLSKmOeC9DQqvrhlT51K/e1v3
 xgF6OTVucZjrirHv5xQwRe0P3H9l5KoUo+hkDyXcUw1NGQ9dQeGxOR/N8raXLcTN0s5g
 wPBq8vTmzTMQayESPKqeEdcHb8eiqjxUByqyltjrifpPut7NUtSbThKGFcm/cKMcc+tQ
 RrGmgDnKGOp8PJc1PEzW2mY7Ht46+7iAqPL4RGQCqPpT46h6N8Zf18TpGw7OoG96GT/c
 phFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=O9OCBMkBxpYV7nwYvW253+iHNM16Xw+0gCVniK4rZ0A=;
 b=Dly9hIY12hAjM1B9MrKYrdPcmNJfLHPK4kjY16nuBjabx7fPz0Q0DI7pyN0VILhaKT
 +e4KkkV07DKso2aJkcmPEyiLOMYm7n38KGqaDiRarkcVuEmKGVHy732HoBGl7mobylSp
 OAFqx7pecCn93v+TsNRgAhQlhdJ0ney7L/opfaIywyJsGzRmoML0vtQphEJa/pxPG39A
 UY+4nbAlj8/U0SGjaaFIUgcstnOrmMq0nzZg2hKR/jWXSYdfHJcjQ5LxX68weAdTYjF0
 9w4iQhHGCisuxei3HgJRn8/Iq6mXyYMt1IebFbY/rHl2gef2MO161CHDsCPHBIWy3XxE
 ai6A==
X-Gm-Message-State: AGi0PuacaDDE1Z+2DCHw+2UINFjDXJuZ+hi4TiaMTCtfhe8u5R0PboJn
 tQbtwNhYXilRQvAIzMfkFrQ=
X-Google-Smtp-Source: APiQypKMaCbhuhWKhU6qOX1gyF4ylKwmgItCJ0Vgnfw2kOhlHOT2eRuqxGQbmSbMf9zCagFUVC55Eg==
X-Received: by 2002:aa7:c0d2:: with SMTP id j18mr2756750edp.283.1588687198204; 
 Tue, 05 May 2020 06:59:58 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id n7sm280474edt.69.2020.05.05.06.59.56
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 05 May 2020 06:59:57 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Markus Armbruster'" <armbru@redhat.com>,
	<qemu-devel@nongnu.org>
References: <20200505101908.6207-1-armbru@redhat.com>
 <20200505101908.6207-3-armbru@redhat.com>
In-Reply-To: <20200505101908.6207-3-armbru@redhat.com>
Subject: RE: [PATCH v2 02/10] xen: Fix and improve handling of device_add
 usb-host errors
Date: Tue, 5 May 2020 14:59:56 +0100
Message-ID: <004001d622e5$75fd2d70$61f78850$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQD2kCtjJmTvNkmzRgy8FtMMvttLfQNDV1uLqj6PYxA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Anthony Perard' <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Gerd Hoffmann' <kraxel@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Markus Armbruster <armbru@redhat.com>
> Sent: 05 May 2020 11:19
> To: qemu-devel@nongnu.org
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Anthony Perard <anthony.perard@citrix.com>; Paul
> Durrant <paul@xen.org>; Gerd Hoffmann <kraxel@redhat.com>; xen-devel@lists.xenproject.org
> Subject: [PATCH v2 02/10] xen: Fix and improve handling of device_add usb-host errors
> 
> usbback_portid_add() leaks the error when qdev_device_add() fails.
> Fix that.  While there, use the error to improve the error message.
> 
> The qemu_opts_from_qdict() similarly leaks on failure.  But any
> failure there is a programming error.  Pass &error_abort.
> 
> Fixes: 816ac92ef769f9ffc534e49a1bb6177bddce7aa2
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: Gerd Hoffmann <kraxel@redhat.com>
> Cc: xen-devel@lists.xenproject.org
> Signed-off-by: Markus Armbruster <armbru@redhat.com>

Acked-by: Paul Durrant <paul@xen.org>
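
[Editorial sketch] The leak described in the patch above follows a common Error-API shape: a callee fills in an error object that the caller then neither reports nor frees. The sketch below uses simplified stand-ins, not QEMU's real qapi/error implementation, and the callee name is invented:

```c
#include <stdlib.h>

/* Minimal stand-in for QEMU's Error object -- illustrative only. */
typedef struct Error { const char *msg; } Error;

static void error_setg(Error **errp, const char *msg)
{
    if (errp) {
        *errp = malloc(sizeof(**errp));
        (*errp)->msg = msg;
    }
}

static void error_free(Error *err)
{
    free(err);
}

/* Hypothetical callee that can fail, filling *errp on error. */
static int do_add(int ok, Error **errp)
{
    if (!ok) {
        error_setg(errp, "device_add failed");
        return -1;
    }
    return 0;
}
```

If the caller discards the filled-in `Error *` without calling `error_free()` (or reporting it), the object leaks; that is the bug class the patch fixes.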



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:01:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:01:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVy8Y-0000y0-CZ; Tue, 05 May 2020 14:01:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Sfw0=6T=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jVy8W-0000xq-RN
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:01:28 +0000
X-Inumbo-ID: ea4b69c6-8ed8-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea4b69c6-8ed8-11ea-b07b-bc764e2007e4;
 Tue, 05 May 2020 14:01:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=j/16uOTQEqCwgl++GGTs/nDrWl9pSAixBruWvTsWLr4=; b=zxNSDZ16cd14dAzwrlMkywDFQP
 c0wtyJbR9LgD6rsgA9f+0uysV1k3GzHNWL+YYnC3mfvtTaKAUWQ3mVaNE7/o0oQZR4FWrIYNXRrVz
 IGWBi9WofPOeBg8cnbYWNs4IT21RHddLsONhR09yXxFnoCy37E1qJqxHfY8XwfHhfyoI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jVy8V-0003WJ-Da; Tue, 05 May 2020 14:01:27 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=freeip.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jVy8V-0006Oc-2k; Tue, 05 May 2020 14:01:27 +0000
Message-ID: <45b6e79141e638c2930ccdfbc26a0de54034c525.camel@xen.org>
Subject: Re: [PATCH] x86/traps: fix an off-by-one error
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Date: Tue, 05 May 2020 15:01:25 +0100
In-Reply-To: <8c3d6a2c-316c-f7fc-a2b0-3ea12721867d@suse.com>
References: <37b7ec049ff82f92cc6724a743867e1cd9365f5b.1588676790.git.hongyxia@amazon.com>
 <8c3d6a2c-316c-f7fc-a2b0-3ea12721867d@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 2020-05-05 at 15:38 +0200, Jan Beulich wrote:
> On 05.05.2020 13:06, Hongyan Xia wrote:
> > --- a/xen/arch/x86/traps.c
> > +++ b/xen/arch/x86/traps.c
> > @@ -300,6 +300,7 @@ static void show_guest_stack(struct vcpu *v,
> > const struct cpu_user_regs *regs)
> >      int i;
> >      unsigned long *stack, addr;
> >      unsigned long mask = STACK_SIZE;
> > +    void *stack_page = NULL;
> >  
> >      /* Avoid HVM as we don't know what the stack looks like. */
> >      if ( is_hvm_vcpu(v) )
> > @@ -328,7 +329,7 @@ static void show_guest_stack(struct vcpu *v,
> > const struct cpu_user_regs *regs)
> >          vcpu = maddr_get_owner(read_cr3()) == v->domain ? v :
> > NULL;
> >          if ( !vcpu )
> >          {
> > -            stack = do_page_walk(v, (unsigned long)stack);
> > +            stack_page = stack = do_page_walk(v, (unsigned
> > long)stack);
> >              if ( (unsigned long)stack < PAGE_SIZE )
> >              {
> >                  printk("Inaccessible guest memory.\n");
> > @@ -358,7 +359,7 @@ static void show_guest_stack(struct vcpu *v,
> > const struct cpu_user_regs *regs)
> >      if ( mask == PAGE_SIZE )
> >      {
> >          BUILD_BUG_ON(PAGE_SIZE == STACK_SIZE);
> > -        unmap_domain_page(stack);
> > +        unmap_domain_page(stack_page);
> >      }
> 
> With this I think you want to change the whole construct here to
> 
>     if ( stack_page )
>         unmap_domain_page(stack_page);
> 
> i.e. with the then no longer relevant BUILD_BUG_ON() also dropped.

I wonder if such a construct is better written with UNMAP_DOMAIN_PAGE(),
since that tolerates NULL and nullifies the pointer afterwards to
prevent misuse.
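
[Editorial sketch] The nullifying-wrapper behaviour referred to above can be sketched as follows. The unmap function here is a counting stub, not Xen's actual definition:

```c
#include <stddef.h>

/* Stub standing in for the real unmap primitive; it just counts calls
 * and tolerates NULL, as the discussion assumes. */
static int unmap_calls;
static void unmap_domain_page(const void *ptr)
{
    if ( ptr )
        unmap_calls++;
}

/* Sketch of the capitalised wrapper: unmap, then nullify the lvalue so
 * a stale pointer cannot be unmapped twice by mistake. */
#define UNMAP_DOMAIN_PAGE(p) do { \
    unmap_domain_page(p);         \
    (p) = NULL;                   \
} while ( 0 )
```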

> What's more important though - please also fix the same issue in
> compat_show_guest_stack(). Unless I'm mistaken of course, in which
> case it would be nice if the description could mention why the
> other similar function isn't affected.

Compat suffers from the same problem. Thanks for pointing that out.

Hongyan



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:11:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:11:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyI3-0001qq-C7; Tue, 05 May 2020 14:11:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lOzX=6T=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jVyI2-0001ql-Gc
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:11:18 +0000
X-Inumbo-ID: 48f6212d-8eda-11ea-9dc0-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 48f6212d-8eda-11ea-9dc0-12813bfff9fa;
 Tue, 05 May 2020 14:11:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588687877;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=Lv42nvGup2zH6vXJrBtIDvIjrn5fvVYc9R2+p+zlZFk=;
 b=Ny/ikc2Y7B0UXvAyZg4N/VjauO8FDJvnhWlBpAMfJFhjSOkLivmUJQ4C
 1aWSo0zQ8lLNwL/lS3ymrcv1TAVvY7dr4lub4bxgYC9Ji8cD9cEnpCwEb
 P1UDAzLQqE2KXEYcy063sntPG6GZtYsaJ0Br9vUAr94W+Q7fEm9riGQc8 I=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: gH/OnwiMyaJMR33rEHZ5urwEnIXr+DV43xXRkVu6uLZ22cAlAm8KXX+Uht3/xjkmB77MO727kc
 JqWREo7LB9cKqX3rb3YJr+LNQJ0YrNfSd+HwAnpMDb9gp6T3r7D+hZWfnN3Fm0LJkwne/7JzWe
 gz7BaVuFD9ezTKBGJbVCvjswCRfFU6fbxqdN0XYpCHls42hdmVZKsGkKvi1HI4nI9w0+GjqAD9
 Evq/8vYx3WKWQPlm800lY3ukCK1m+8y0O4a8ZsYL1p+p/LE5XfClyXWn8iN4acCJCZqfZAPWna
 WNI=
X-SBRS: 2.7
X-MesageID: 17470581
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17470581"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24241.29697.707595.28182@mariner.uk.xensource.com>
Date: Tue, 5 May 2020 15:11:13 +0100
To: Sander Eikelenboom <linux@eikelenboom.it>
Subject: Re: xen-4.13 tools/xentop.c backport request
In-Reply-To: <a8a6764c-fa1d-5a8d-5470-adf149e4dfda@eikelenboom.it>
References: <a8a6764c-fa1d-5a8d-5470-adf149e4dfda@eikelenboom.it>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Sander Eikelenboom writes ("xen-4.13 tools/xentop.c backport request"):
> If I'm not mistaken you do the tools backports.
> 
> I just noticed that the problem that is fixed by commit: 
> 4b5b431edd984b26f43b3efc7de465f3560a949e tools/xentop: Fix calculation of used memory
> 
> is already present in the xen-4.13 branch (older releases are unaffected).
> Unfortunately I didn't check before, so I didn't include a "backport tag".
> 
> If it wasn't already on your backport list, please consider backporting / applying this one.

It wasn't.  I have done this.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyIW-0001tC-LV; Tue, 05 May 2020 14:11:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVyIU-0001t3-U8
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:11:46 +0000
X-Inumbo-ID: 59f351d4-8eda-11ea-9887-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59f351d4-8eda-11ea-9887-bc764e2007e4;
 Tue, 05 May 2020 14:11:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588687905;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=mt9W80B1cr2KdxQEtpC2LWjZtPVcOMOx1aAqW1ajV0k=;
 b=aRSI8zOL/BJF5nTNL20mk32SA762ksj+sMRWA6cA8VE7zQeT2IPK40HZ
 xPG/DfMCJi4lD/6qxH6DYG0OHI5Z+4b+AKY68OvgnDvREEpZb1/bEFIYM
 zoM1awkDCqds42PYQK58zmhAFlozEFRpYmGCQx2on89YzDv3mWHbcBbjJ A=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: w+cUjwm19U3XJkb1qVAJ5lIOTxogNuQ5K+AfTTU8v3T/wnkZkVRA+AcSZzGDlAZeDA63LFRW0S
 ygZ/v9Oz/lvpZX1FLDgwnkUCNm6C2sr6Hz7M6LN7CrrHz2IoBjqmhcaFdQNIO/buTLXmLBNYOK
 XfhIi/o2rCOAmHVGKeYsQY+GRqTEq5xh5eHfBJaAClSZDpTXi+BxnoHxk/I5+5ln9FRzJTDnJC
 HWcq2SFv+/zX7S93801l2vQF/9DBe++c+kAhgEQMfS2MzRT/3dN7wNJJl6xosge+TUto69ePK5
 2e8=
X-SBRS: 2.7
X-MesageID: 17164708
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17164708"
Date: Tue, 5 May 2020 16:11:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 1/3] x86/mm: do not attempt to convert _PAGE_GNTTAB to a
 boolean
Message-ID: <20200505141138.GC1353@Air-de-Roger>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-2-roger.pau@citrix.com>
 <20332b18-960c-a180-8150-55fae60bdc6e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20332b18-960c-a180-8150-55fae60bdc6e@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 03:47:43PM +0200, Jan Beulich wrote:
> 
> On 05.05.2020 11:24, Roger Pau Monne wrote:
> > Clang 10 complains with:
> > 
> > mm.c:1239:10: error: converting the result of '<<' to a boolean always evaluates to true
> >       [-Werror,-Wtautological-constant-compare]
> >     if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
> >          ^
> > xen/include/asm/x86_64/page.h:161:25: note: expanded from macro '_PAGE_GNTTAB'
> > #define _PAGE_GNTTAB (1U<<22)
> >                         ^
> 
> This is a rather odd warning. Do they also warn for "if ( 0 )"
> or "do { } while ( 0 )", as we use in various places? There's
> no difference to me between a plain number and a constant
> composed via an expression.

Using plain 0 is fine; they just seem to dislike using << for some
reason that escapes me. It seems intended to catch bugs where || is
wrongly used instead of | when setting flags, e.g.:

https://github.com/haproxy/haproxy/issues/588

> 
> > Remove the conversion of _PAGE_GNTTAB to a boolean, since the and
> > operation performed afterwards will already return false if the value
> > of the macro is 0.
> 
> I'm sorry, but no. The expression was put there on purpose by
> 0932210ac095 ("x86: Address "Bitwise-and with zero
> CONSTANT_EXPRESSION_RESULT" Coverity issues"), and the
> description there is clearly telling us that this wants to stay
> unless Coverity changed in the meantime. Otherwise I'm afraid
> a more elaborate solution will be needed to please both.

Clang is fine with changing this to _PAGE_GNTTAB != 0. Would you be
OK with this approach?

> Or a
> more simplistic one, like using "#if _PAGE_GNTTAB" around the
> construct.

Yes, that's the other solution I had in mind.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:15:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyMU-00026H-6m; Tue, 05 May 2020 14:15:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MjzU=6T=arndb.de=arnd@srs-us1.protection.inumbo.net>)
 id 1jVyMT-00026C-1L
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:15:53 +0000
X-Inumbo-ID: eccb781a-8eda-11ea-9887-bc764e2007e4
Received: from mout.kundenserver.de (unknown [212.227.17.13])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eccb781a-8eda-11ea-9887-bc764e2007e4;
 Tue, 05 May 2020 14:15:52 +0000 (UTC)
Received: from localhost.localdomain ([149.172.19.189]) by
 mrelayeu.kundenserver.de (mreue106 [212.227.15.145]) with ESMTPA (Nemesis) id
 1M9nEJ-1jSNK23B49-005rjT; Tue, 05 May 2020 16:15:47 +0200
From: Arnd Bergmann <arnd@arndb.de>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>
Subject: [PATCH] xenbus: avoid stack overflow warning
Date: Tue,  5 May 2020 16:15:37 +0200
Message-Id: <20200505141546.824573-1-arnd@arndb.de>
X-Mailer: git-send-email 2.26.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Provags-ID: V03:K1:A6tw1zrTOAQW5yW+MxCDJvOopLSS/Ga3YzKkHajnYByzROmhf/Z
 UCjmaSkpNjyjF6nnYkDrgoDTQcQKOsy9GqpcEYeDBFITW4FIlbQVan7MDSfBvVH5vKDNaMG
 I57/Bb6kf/hHfTGPMDXpo9MiGKtA/lQKOkRk2jrdXtJteCeqspXA4ZRf6fi0zCQjBphQskO
 vo1QzWOdqSAs5HBhhLKyg==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:AVDw9QU2Bq0=:RtvByz9YZEMaNPXxSQQaan
 cDa1tb8w5qa0/seyrGG5bEJDTWS1bQ6DU6Z3caE8AVZWP1DLGjx3JqQeK/1xFSA7XkiiMJyhk
 g9yWhIyFP3lr0xAKhrdIAeNeM227xqBDcoaJzkkSz7XLlBTcbuKxl+ZBf6c7IvRpJTium5Wpy
 wbna0RoQzxMBC5A1ijVWpzJPTQ7zvqxiSFQvyLxdIYvMvoRuaB1v0r0X2ffQfaZ/Czv/uqxyY
 E9XMnNAgucakN6lhB6eQ6MOsvvookARXupmUbbnYbrkjFuSavXZ5pPP4YBTM6PZq3heRt0Ffb
 JXcfBQqCOIzDgg7z68QhXEjzhQzOU4skSyZfx4/PAjDqxOMtEfkTgAIUF067/bJHW4AdVoJMY
 X+QbTqcaz/Ajz1Bf4V8+HNe3gCPNTAZr5Ybk5lRqkDoEqY/Ks4zyMQ/eR4+CxwQJ9LlAs7EQJ
 h6UvsBe6lcNJlmhfiIaAlxlQjzHwYJ2LmHLpAEq9YFUPDzY2vcYxrehJKsZD3z9sR9EMwrX1I
 SCBTxeQ5jjvIOIGLuslnh9tYC1VW3T9yt9QsuzcNAm2vg28xGYLkrNxvRpFlltV+SGz4EriaI
 ZKIpN/t8zQ1fTUnto9/GFaHTg0xIITo1QroKaSzEh2NeBZt5Z1ioW6I3pIWOY2xIhgHsRK/NH
 UIcynKaRVx7azRrZE9UI8sHcKSwYGlA0KhQM/SBjFqUO6NKyqrqMAwhYjVYdRmoSIMoeKxDMm
 ZE/GCuMVoM+OE+Jo4mT72blCEhI8HrcXl2M0M3aXX9f7P/C/S9P0oo9BzRoS655scEfNndInz
 7spEwdABS87Ga/MwDfUpMag/fwUWsaGdnX00Lewh0xtIkUjZQc=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Arnd Bergmann <arnd@arndb.de>,
 Wei Liu <wl@xen.org>, Yan Yankovskyi <yyankovskyi@gmail.com>,
 linux-kernel@vger.kernel.org, clang-built-linux@googlegroups.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The __xenbus_map_ring() function has two large arrays, 'map' and
'unmap' on its stack. When clang decides to inline it into its caller,
xenbus_map_ring_valloc_hvm(), the total stack usage exceeds the warning
limit for stack size on 32-bit architectures.

drivers/xen/xenbus/xenbus_client.c:592:12: error: stack frame size of 1104 bytes in function 'xenbus_map_ring_valloc_hvm' [-Werror,-Wframe-larger-than=]

As far as I can tell, other compilers don't inline it here, so we get
no warning, but the stack usage is actually the same. It is possible
for both arrays to use the same location on the stack, but the compiler
cannot prove that this is safe because they get passed to external
functions that may end up using them until they go out of scope.

Move the two arrays into separate basic blocks to limit their scope
and force them to occupy less stack in total, regardless of the
inlining decision.

Signed-off-by: Arnd Bergmann <arnd@arndb.de>
---
 drivers/xen/xenbus/xenbus_client.c | 74 +++++++++++++++++-------------
 1 file changed, 41 insertions(+), 33 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 040d2a43e8e3..23ca70378e36 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -470,54 +470,62 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 			     unsigned int flags,
 			     bool *leaked)
 {
-	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
-	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
 	int i, j;
 	int err = GNTST_okay;
 
-	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
-		return -EINVAL;
+	{
+		struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
 
-	for (i = 0; i < nr_grefs; i++) {
-		memset(&map[i], 0, sizeof(map[i]));
-		gnttab_set_map_op(&map[i], addrs[i], flags, gnt_refs[i],
-				  dev->otherend_id);
-		handles[i] = INVALID_GRANT_HANDLE;
-	}
+		if (nr_grefs > XENBUS_MAX_RING_GRANTS)
+			return -EINVAL;
 
-	gnttab_batch_map(map, i);
+		for (i = 0; i < nr_grefs; i++) {
+			memset(&map[i], 0, sizeof(map[i]));
+			gnttab_set_map_op(&map[i], addrs[i], flags,
+					  gnt_refs[i], dev->otherend_id);
+			handles[i] = INVALID_GRANT_HANDLE;
+		}
+
+		gnttab_batch_map(map, i);
 
-	for (i = 0; i < nr_grefs; i++) {
-		if (map[i].status != GNTST_okay) {
-			err = map[i].status;
-			xenbus_dev_fatal(dev, map[i].status,
+		for (i = 0; i < nr_grefs; i++) {
+			if (map[i].status != GNTST_okay) {
+				err = map[i].status;
+				xenbus_dev_fatal(dev, map[i].status,
 					 "mapping in shared page %d from domain %d",
 					 gnt_refs[i], dev->otherend_id);
-			goto fail;
-		} else
-			handles[i] = map[i].handle;
+				goto fail;
+			} else
+				handles[i] = map[i].handle;
+		}
 	}
-
 	return GNTST_okay;
 
  fail:
-	for (i = j = 0; i < nr_grefs; i++) {
-		if (handles[i] != INVALID_GRANT_HANDLE) {
-			memset(&unmap[j], 0, sizeof(unmap[j]));
-			gnttab_set_unmap_op(&unmap[j], (phys_addr_t)addrs[i],
-					    GNTMAP_host_map, handles[i]);
-			j++;
+	{
+		struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
+
+		for (i = j = 0; i < nr_grefs; i++) {
+			if (handles[i] != INVALID_GRANT_HANDLE) {
+				memset(&unmap[j], 0, sizeof(unmap[j]));
+				gnttab_set_unmap_op(&unmap[j],
+						    (phys_addr_t)addrs[i],
+						    GNTMAP_host_map,
+						    handles[i]);
+				j++;
+			}
 		}
-	}
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap, j))
-		BUG();
+		if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
+					      unmap, j))
+			BUG();
 
-	*leaked = false;
-	for (i = 0; i < j; i++) {
-		if (unmap[i].status != GNTST_okay) {
-			*leaked = true;
-			break;
+		*leaked = false;
+		for (i = 0; i < j; i++) {
+			if (unmap[i].status != GNTST_okay) {
+				*leaked = true;
+				break;
+			}
 		}
 	}
 
-- 
2.26.0



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:18:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:18:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyOS-0002Cx-J6; Tue, 05 May 2020 14:17:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Sfw0=6T=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jVyOR-0002Cs-R4
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:17:55 +0000
X-Inumbo-ID: 367d83fe-8edb-11ea-9dc1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 367d83fe-8edb-11ea-9dc1-12813bfff9fa;
 Tue, 05 May 2020 14:17:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aznXzJb7bC80HkraGpCr4P91jZn6zGG8FU6thEDrtOc=; b=nMMasNV2zYjmI8eZikgVoC9nv8
 yFMIKabmvs6M73fuuPfanBJk6yC3c4IeaQ5J9LZs+4lLMpb9/yNoFdTEfFfVQ7HD97wcNbkr2lx3u
 rWucL6h1tLqJgqn5Tz9ASK5ebMXU42Y02jdFIR64MqZsLjcyI6EmzyozELx2yvxUCMfU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jVyOQ-0003pp-Tn; Tue, 05 May 2020 14:17:54 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jVyOQ-0007dV-H3; Tue, 05 May 2020 14:17:54 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2] x86/traps: fix an off-by-one error
Date: Tue,  5 May 2020 15:17:35 +0100
Message-Id: <f825eca729ee9fab872e4ef1b0af10d0a7b3d852.1588688249.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

stack++ can advance the pointer into the next page, in which case
unmap_domain_page() will unmap the wrong page, causing mapcache and
memory corruption. Fix this by unmapping the originally mapped page.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v2:
- tweak how the unmap is handled.
- fix the bug in compat as well.
- remove part of the commit message which was not accurate.
---
 xen/arch/x86/traps.c | 22 ++++++++++------------
 1 file changed, 10 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 33e5d21ece..73c6218660 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -236,6 +236,7 @@ static void compat_show_guest_stack(struct vcpu *v,
                                     int debug_stack_lines)
 {
     unsigned int i, *stack, addr, mask = STACK_SIZE;
+    void *stack_page = NULL;
 
     stack = (unsigned int *)(unsigned long)regs->esp;
     printk("Guest stack trace from esp=%08lx:\n ", (unsigned long)stack);
@@ -258,7 +259,7 @@ static void compat_show_guest_stack(struct vcpu *v,
                 break;
         if ( !vcpu )
         {
-            stack = do_page_walk(v, (unsigned long)stack);
+            stack_page = stack = do_page_walk(v, (unsigned long)stack);
             if ( (unsigned long)stack < PAGE_SIZE )
             {
                 printk("Inaccessible guest memory.\n");
@@ -285,11 +286,9 @@ static void compat_show_guest_stack(struct vcpu *v,
         printk(" %08x", addr);
         stack++;
     }
-    if ( mask == PAGE_SIZE )
-    {
-        BUILD_BUG_ON(PAGE_SIZE == STACK_SIZE);
-        unmap_domain_page(stack);
-    }
+
+    UNMAP_DOMAIN_PAGE(stack_page);
+
     if ( i == 0 )
         printk("Stack empty.");
     printk("\n");
@@ -300,6 +299,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
     int i;
     unsigned long *stack, addr;
     unsigned long mask = STACK_SIZE;
+    void *stack_page = NULL;
 
     /* Avoid HVM as we don't know what the stack looks like. */
     if ( is_hvm_vcpu(v) )
@@ -328,7 +328,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
         vcpu = maddr_get_owner(read_cr3()) == v->domain ? v : NULL;
         if ( !vcpu )
         {
-            stack = do_page_walk(v, (unsigned long)stack);
+            stack_page = stack = do_page_walk(v, (unsigned long)stack);
             if ( (unsigned long)stack < PAGE_SIZE )
             {
                 printk("Inaccessible guest memory.\n");
@@ -355,11 +355,9 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
         printk(" %p", _p(addr));
         stack++;
     }
-    if ( mask == PAGE_SIZE )
-    {
-        BUILD_BUG_ON(PAGE_SIZE == STACK_SIZE);
-        unmap_domain_page(stack);
-    }
+
+    UNMAP_DOMAIN_PAGE(stack_page);
+
     if ( i == 0 )
         printk("Stack empty.");
     printk("\n");
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:20:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:20:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyRE-0002zh-4x; Tue, 05 May 2020 14:20:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=56xU=6T=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jVyRD-0002zZ-2t
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:20:47 +0000
X-Inumbo-ID: 9c51549e-8edb-11ea-9887-bc764e2007e4
Received: from mail-ed1-x52a.google.com (unknown [2a00:1450:4864:20::52a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c51549e-8edb-11ea-9887-bc764e2007e4;
 Tue, 05 May 2020 14:20:46 +0000 (UTC)
Received: by mail-ed1-x52a.google.com with SMTP id r16so1926384edw.5
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 07:20:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=ssmMzSYbkAlQNENJ0+sfsbcrFdtSKUKyOj/WFKOO5TU=;
 b=iY6eg7bgOWB4PUYqdbPEFGOZdZ30mqmktDJR2lX6xz13oFnTBgKzuGqTzfCkTIP7T1
 2NYkeFEq928l9lBdT9Fe9rNQV2djGM2PYjezT8E9/RyRPq5ymXrUHpHEPxSvPzzPq4RC
 LmSfs5KRz7jCgbu913uNiZ7s8oGQW3TVkLbjbMDrwyc5//8mkbwgvvWVHZnbZiD5HyX6
 3997SX8BkGA/upCHO/l9uwFDpxSIlEzDPP7cky4HOwZdkhlVHhLaW2TxM+6/KuPXfyHe
 wfQKVrsHpQOR2FONMw/P/MLQab6EP3jw7PVDu8qoyStjpnIQ59u468NYH9vypOItWGV6
 S70w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=ssmMzSYbkAlQNENJ0+sfsbcrFdtSKUKyOj/WFKOO5TU=;
 b=ZdEAjDYsd4PsOdNgI7onfGA1OiR3812SNzXQA0HVx9P6F9Tfq3XlFymXJ/3Z3gRTaV
 6IwONfM+rMN5fPi297uNaghU94gij5C7cIWjt8r0R9tNBmzicjH15q0FPb/nsStN2BKi
 mswxqpHtkEdzhoSVo65qe6udqKzY9CipaQXIBe7pVcXrt/SBPnGFQfwiF+377Y+L4YCn
 cLPx4McBBgrnvBq1Pu3/unzHZveg1jtA0k3RipgzBlCEZDM062aO3eMriSSo/89G7Hp2
 nrY3/7wNyuUKmwbpL/nq025SdjGzqJ+Lici7K/WOhI4Wr/2BHch3mG1JusCOlJoQPlT0
 oBag==
X-Gm-Message-State: AGi0PuZoDB8VQKrcF4TLKO6Yzbyj3HGIeGpKmT4w41+ZP0IiTz5DXgie
 ejhy3C8dOq7EgJKW4xfArTc=
X-Google-Smtp-Source: APiQypLO9+FQU3b6jb1gtgRf+odkGpyNe+f10RvplGzuGo96OzQJLrYCnPnFyZjSvS3jnT0HPW985Q==
X-Received: by 2002:a50:f381:: with SMTP id g1mr2857730edm.219.1588688445712; 
 Tue, 05 May 2020 07:20:45 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id s9sm290038edy.33.2020.05.05.07.20.44
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 05 May 2020 07:20:45 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <1587789a-b0d6-6d18-99fc-a94bbea52d7b@suse.com>
In-Reply-To: <1587789a-b0d6-6d18-99fc-a94bbea52d7b@suse.com>
Subject: RE: [PATCH v8 06/12] x86/HVM: make hvmemul_blk() capable of handling
 r/o operations
Date: Tue, 5 May 2020 15:20:43 +0100
Message-ID: <004301d622e8$5d81c0f0$188542d0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQF4q+H3Wm1UTucZ9GNJEhcAR5TDKQK9B+Q6qT6QcoA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, 'Wei Liu' <wl@xen.org>,
 'Roger Pau Monne' <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 05 May 2020 09:15
> To: xen-devel@lists.xenproject.org
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Wei Liu <wl@xen.org>; Roger Pau Monne
> <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>
> Subject: [PATCH v8 06/12] x86/HVM: make hvmemul_blk() capable of handling r/o operations
> 
> In preparation for handling e.g. FLDENV or {F,FX,X}RSTOR here as well.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:28:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:28:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyYo-0003Q8-Gw; Tue, 05 May 2020 14:28:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3eM=6T=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVyYn-0003Q3-Ro
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:28:37 +0000
X-Inumbo-ID: b4902408-8edc-11ea-9dc1-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4902408-8edc-11ea-9dc1-12813bfff9fa;
 Tue, 05 May 2020 14:28:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588688916;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=GVfuPbUQsqeIhGJoG8naDKUltMlZiudeC+6lsAOKUQ8=;
 b=aIzSUz/OVNWTRva1/0CdaO4GJzj5F/gjW9Ga55SqEfL8p2aarlnYjN1g
 Wt846UO92B2QeHYPAZ8miUiML5NuIsfO+9wWwRabbk9vf4sbGHCvrCzRB
 +1f4T2fqik/+6qXJ0C/zr1rrdmwlx/EetDVe0d4thuliJMOTmg7Vjlkhf o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: sqBHqNaj+V6A6wrt75k4CeaLMaQ9pdNnijpRDxGzrH0H32oA9Y5ymumUO2IHjXxrGzzIA47z+d
 xzMLRUBMY0WW8AAHNZPwWmWxmpOK5jyuo8XW34LU56i1obKDQcyfgTTdN9xN9OKo2FbCM/JEvK
 zczJ8ARQqzSGchK4ui6t/R5xpDx6j07g7QYJ2Raz4FhS61lNlq2yvMKR5IVqw9ctuFg9Xp4Zkf
 Vj0pf2KB2GWuxdw6OoyredLHA2P/F27mY4EuerEOrHOmVqEuEnqEHh+oPL02ew8EDFco0t3EDP
 YjY=
X-SBRS: 2.7
X-MesageID: 17472687
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17472687"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/pv: Fix Clang build with !CONFIG_PV32
Date: Tue, 5 May 2020 15:28:10 +0100
Message-ID: <20200505142810.14827-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Clang 3.5 doesn't do enough dead-code-elimination to drop the compat_gdt
reference, resulting in a linker failure:

  hidden symbol `per_cpu__compat_gdt' isn't defined

Drop the local variable, and move evaluation of this_cpu(compat_gdt) to within
the guarded region.

Reported-by: Roger Pau Monné <roger.pau@citrix.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/common.c | 7 +++----
 1 file changed, 3 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 131ff03fcf..63f3893c7a 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -711,8 +711,6 @@ void load_system_tables(void)
 	struct tss64 *tss = &this_cpu(tss_page).tss;
 	seg_desc_t *gdt =
 		this_cpu(gdt) - FIRST_RESERVED_GDT_ENTRY;
-	seg_desc_t *compat_gdt =
-		this_cpu(compat_gdt) - FIRST_RESERVED_GDT_ENTRY;
 
 	const struct desc_ptr gdtr = {
 		.base = (unsigned long)gdt,
@@ -753,8 +751,9 @@ void load_system_tables(void)
 	_set_tssldt_desc(gdt + TSS_ENTRY, (unsigned long)tss,
 			 sizeof(*tss) - 1, SYS_DESC_tss_avail);
 	if ( IS_ENABLED(CONFIG_PV32) )
-		_set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
-				 sizeof(*tss) - 1, SYS_DESC_tss_busy);
+		_set_tssldt_desc(
+			this_cpu(compat_gdt) - FIRST_RESERVED_GDT_ENTRY + TSS_ENTRY,
+			(unsigned long)tss, sizeof(*tss) - 1, SYS_DESC_tss_busy);
 
 	per_cpu(full_gdt_loaded, cpu) = false;
 	lgdt(&gdtr);
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:34:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVye3-0004GL-6r; Tue, 05 May 2020 14:34:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uapr=6T=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jVye1-0004GG-SJ
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:34:01 +0000
X-Inumbo-ID: 75bd3e9a-8edd-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75bd3e9a-8edd-11ea-b9cf-bc764e2007e4;
 Tue, 05 May 2020 14:34:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 003A9AB3D;
 Tue,  5 May 2020 14:34:01 +0000 (UTC)
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: Arnd Bergmann <arnd@arndb.de>, Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <20200505141546.824573-1-arnd@arndb.de>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
Date: Tue, 5 May 2020 16:33:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200505141546.824573-1-arnd@arndb.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>, linux-kernel@vger.kernel.org,
 clang-built-linux@googlegroups.com, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.20 16:15, Arnd Bergmann wrote:
> The __xenbus_map_ring() function has two large arrays, 'map' and
> 'unmap' on its stack. When clang decides to inline it into its caller,
> xenbus_map_ring_valloc_hvm(), the total stack usage exceeds the warning
> limit for stack size on 32-bit architectures.
> 
> drivers/xen/xenbus/xenbus_client.c:592:12: error: stack frame size of 1104 bytes in function 'xenbus_map_ring_valloc_hvm' [-Werror,-Wframe-larger-than=]
> 
> As far as I can tell, other compilers don't inline it here, so we get
> no warning, but the stack usage is actually the same. It is possible
> for both arrays to use the same location on the stack, but the compiler
> cannot prove that this is safe because they get passed to external
> functions that may end up using them until they go out of scope.
> 
> Move the two arrays into separate basic blocks to limit the scope
> and force them to occupy less stack in total, regardless of the
> inlining decision.

Why don't you put both arrays into a union?


Juergen

> 
> Signed-off-by: Arnd Bergmann <arnd@arndb.de>
> ---
>   drivers/xen/xenbus/xenbus_client.c | 74 +++++++++++++++++-------------
>   1 file changed, 41 insertions(+), 33 deletions(-)
> 
> diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
> index 040d2a43e8e3..23ca70378e36 100644
> --- a/drivers/xen/xenbus/xenbus_client.c
> +++ b/drivers/xen/xenbus/xenbus_client.c
> @@ -470,54 +470,62 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
>   			     unsigned int flags,
>   			     bool *leaked)
>   {
> -	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
> -	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
>   	int i, j;
>   	int err = GNTST_okay;
>   
> -	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
> -		return -EINVAL;
> +	{
> +		struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
>   
> -	for (i = 0; i < nr_grefs; i++) {
> -		memset(&map[i], 0, sizeof(map[i]));
> -		gnttab_set_map_op(&map[i], addrs[i], flags, gnt_refs[i],
> -				  dev->otherend_id);
> -		handles[i] = INVALID_GRANT_HANDLE;
> -	}
> +		if (nr_grefs > XENBUS_MAX_RING_GRANTS)
> +			return -EINVAL;
>   
> -	gnttab_batch_map(map, i);
> +		for (i = 0; i < nr_grefs; i++) {
> +			memset(&map[i], 0, sizeof(map[i]));
> +			gnttab_set_map_op(&map[i], addrs[i], flags,
> +					  gnt_refs[i], dev->otherend_id);
> +			handles[i] = INVALID_GRANT_HANDLE;
> +		}
> +
> +		gnttab_batch_map(map, i);
>   
> -	for (i = 0; i < nr_grefs; i++) {
> -		if (map[i].status != GNTST_okay) {
> -			err = map[i].status;
> -			xenbus_dev_fatal(dev, map[i].status,
> +		for (i = 0; i < nr_grefs; i++) {
> +			if (map[i].status != GNTST_okay) {
> +				err = map[i].status;
> +				xenbus_dev_fatal(dev, map[i].status,
>   					 "mapping in shared page %d from domain %d",
>   					 gnt_refs[i], dev->otherend_id);
> -			goto fail;
> -		} else
> -			handles[i] = map[i].handle;
> +				goto fail;
> +			} else
> +				handles[i] = map[i].handle;
> +		}
>   	}
> -
>   	return GNTST_okay;
>   
>    fail:
> -	for (i = j = 0; i < nr_grefs; i++) {
> -		if (handles[i] != INVALID_GRANT_HANDLE) {
> -			memset(&unmap[j], 0, sizeof(unmap[j]));
> -			gnttab_set_unmap_op(&unmap[j], (phys_addr_t)addrs[i],
> -					    GNTMAP_host_map, handles[i]);
> -			j++;
> +	{
> +		struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
> +
> +		for (i = j = 0; i < nr_grefs; i++) {
> +			if (handles[i] != INVALID_GRANT_HANDLE) {
> +				memset(&unmap[j], 0, sizeof(unmap[j]));
> +				gnttab_set_unmap_op(&unmap[j],
> +						    (phys_addr_t)addrs[i],
> +						    GNTMAP_host_map,
> +						    handles[i]);
> +				j++;
> +			}
>   		}
> -	}
>   
> -	if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap, j))
> -		BUG();
> +		if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref,
> +					      unmap, j))
> +			BUG();
>   
> -	*leaked = false;
> -	for (i = 0; i < j; i++) {
> -		if (unmap[i].status != GNTST_okay) {
> -			*leaked = true;
> -			break;
> +		*leaked = false;
> +		for (i = 0; i < j; i++) {
> +			if (unmap[i].status != GNTST_okay) {
> +				*leaked = true;
> +				break;
> +			}
>   		}
>   	}
>   
> 



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:46:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVypi-0005KZ-Ng; Tue, 05 May 2020 14:46:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVyph-0005KU-Gn
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:46:05 +0000
X-Inumbo-ID: 2550c4de-8edf-11ea-9dc5-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2550c4de-8edf-11ea-9dc5-12813bfff9fa;
 Tue, 05 May 2020 14:46:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588689964;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=Z/06VXz1QCIMRlvIrPWSVg2baNeBhGKRScRV3SSV5vg=;
 b=bOKQY1RTkPIYGO1eOugAY7ujYlxrreSsMeBBGfjzpTVKFP3tX6PANHM4
 L4Be5jBc7wG/azkiF7igGcmHs/1LRAQQRvu+Z6hBRJ7ArQCwpvJ+PS1j/
 EoqLPflGrflE4taPfu4G6quzoxbJiFww11dmSwvVo5aEOcq7wbGJOsovp 0=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: ZHb8yAuz7SXyRwf1x6QFARxzfAsQX1CEgLISabU3b+K9JkX8WspZcgJ3ol4nixTybrrH9XYv02
 L7IFfw7dOqbPJYdvNQ5CzZsIddfVqGVmzq+b3SOKFZ1c3OPzJC7YojEHMfPWqNv9Fx4MshrGPs
 PVGzj9DLjuJTTVkIefntrcxv6B+J+BAUq2Mld72Y5pnNb31bztj7kLcr9m46tv0+QJl7rFLfVJ
 +awM6KXQKJV4BGQgV8l4xSr/0RK76uWd9vZ0uqmlDZtFSLxacnuHdMpfSJvP7yH4ZTJ+LIvFAa
 w84=
X-SBRS: 2.7
X-MesageID: 17474732
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17474732"
Date: Tue, 5 May 2020 16:45:57 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/pv: Fix Clang build with !CONFIG_PV32
Message-ID: <20200505144557.GD1353@Air-de-Roger>
References: <20200505142810.14827-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200505142810.14827-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 03:28:10PM +0100, Andrew Cooper wrote:
> Clang 3.5 doesn't do enough dead-code-elimination to drop the compat_gdt
> reference, resulting in a linker failure:
> 
>   hidden symbol `per_cpu__compat_gdt' isn't defined
> 
> Drop the local variable, and move evaluation of this_cpu(compat_gdt) to within
> the guarded region.
> 
> Reported-by: Roger Pau Monné <roger.pau@citrix.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Tested-and-reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks!


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:48:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:48:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVysJ-0005T8-6G; Tue, 05 May 2020 14:48:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVysH-0005T3-Uu
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:48:45 +0000
X-Inumbo-ID: 84e55e00-8edf-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84e55e00-8edf-11ea-b9cf-bc764e2007e4;
 Tue, 05 May 2020 14:48:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 96118AB3D;
 Tue,  5 May 2020 14:48:46 +0000 (UTC)
Subject: Re: [PATCH 09/16] x86/cpu: Adjust enable_nmis() to be shadow stack
 compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-10-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <478340f1-4238-1419-eeb7-c8c2ed7103a6@suse.com>
Date: Tue, 5 May 2020 16:48:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-10-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> When executing an IRET-to-self, the shadow stack must agree with the regular
> stack.  We can't manipulate SSP directly, so have to fake a shadow IRET frame
> by executing 3 CALLs, then editing the result to look correct.
> 
> This is not a fastpath, is called on the BSP long before CET can be set up,
> and may be called on the crash path after CET is disabled.  Use the fact that
> INCSSP is allocated from the hint nop space to construct a test for CET being
> active which is safe on all processors.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit with a question which may make a further change necessary:

> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -544,17 +544,40 @@ static inline void enable_nmis(void)
>  {
>      unsigned long tmp;
>  
> -    asm volatile ( "mov %%rsp, %[tmp]     \n\t"
> -                   "push %[ss]            \n\t"
> -                   "push %[tmp]           \n\t"
> -                   "pushf                 \n\t"
> -                   "push %[cs]            \n\t"
> -                   "lea 1f(%%rip), %[tmp] \n\t"
> -                   "push %[tmp]           \n\t"
> -                   "iretq; 1:             \n\t"
> -                   : [tmp] "=&r" (tmp)
> +    asm volatile ( "mov     %%rsp, %[rsp]        \n\t"
> +                   "lea    .Ldone(%%rip), %[rip] \n\t"
> +#ifdef CONFIG_XEN_SHSTK
> +                   /* Check for CET-SS being active. */
> +                   "mov    $1, %k[ssp]           \n\t"
> +                   "rdsspq %[ssp]                \n\t"
> +                   "cmp    $1, %k[ssp]           \n\t"
> +                   "je     .Lshstk_done          \n\t"
> +
> +                   /* Push 3 words on the shadow stack */
> +                   ".rept 3                      \n\t"
> +                   "call 1f; nop; 1:             \n\t"
> +                   ".endr                        \n\t"
> +
> +                   /* Fixup to be an IRET shadow stack frame */
> +                   "wrssq  %q[cs], -1*8(%[ssp])  \n\t"
> +                   "wrssq  %[rip], -2*8(%[ssp])  \n\t"
> +                   "wrssq  %[ssp], -3*8(%[ssp])  \n\t"
> +
> +                   ".Lshstk_done:"
> +#endif
> +                   /* Write an IRET regular frame */
> +                   "push   %[ss]                 \n\t"
> +                   "push   %[rsp]                \n\t"
> +                   "pushf                        \n\t"
> +                   "push   %q[cs]                \n\t"
> +                   "push   %[rip]                \n\t"
> +                   "iretq                        \n\t"
> +                   ".Ldone:                      \n\t"
> +                   : [rip] "=&r" (tmp),
> +                     [rsp] "=&r" (tmp),
> +                     [ssp] "=&r" (tmp)

Even after an hour of reading and searching through the gcc docs
I can't convince myself that this utilizes defined behavior. We
do tie multiple outputs to the same C variable elsewhere, yes,
but only in cases where we don't really care about the register,
or where the register is a fixed one anyway. What I can't find
is a clear statement that gcc wouldn't ever, now or in the
future, use the same register for all three outputs. I think one
can deduce this in certain ways, and experimentation also seems
to confirm it, but still.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:51:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:51:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyum-0006FI-Kb; Tue, 05 May 2020 14:51:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lOzX=6T=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jVyul-0006FD-Oq
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:51:19 +0000
X-Inumbo-ID: dfc4f380-8edf-11ea-9dc5-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfc4f380-8edf-11ea-9dc5-12813bfff9fa;
 Tue, 05 May 2020 14:51:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588690277;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=TrJXnP9oN7aALC3qD7BVGxz5pYfWGDjBdXPiEkjL7fQ=;
 b=AHCmDevP4bDTLCQ6YGlwfpnaIWrHl6KYYy97J15ce4uQxmIth+SfOrem
 /k4aDeHsrSP9kn32HbEUiTjiwlDiUXeAtxirPf2mF6rUrnPlT7CJghypc
 hUGZ/Q6tFTX4wHyOts1QhzDw/t+L32plxdcsE9FEwX54EJlcIIm9v32rC Y=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: O6GxdFOmOXxi7XSVicIvQDxDU840K6m9A/opAn05pGoAMq6iGWE4DLfw1mUPTi0eLgNJ5RA3Vk
 DseqLCYYGvHVrg8QWXAhYjMEnNXv4Wbu8Bj3x8WvonhoxnvlR9RRSDoe1saU4mtKmYaPop8a91
 gLiyGF371fFvBprXz1M0JT6+XfD54S0kkBkhK+zYGrNfiiVAwNfQAdY64SmxAvwo7Hj7PsPcZg
 pja1ikEaCDptRFeeivkPJTY8q5EUTrQlNpGJfKtoql636pbFeqBUFiuHOcREwnmFEkjjcZUIz/
 LHs=
X-SBRS: 2.7
X-MesageID: 17168958
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17168958"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24241.32091.790503.338211@mariner.uk.xensource.com>
Date: Tue, 5 May 2020 15:51:07 +0100
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: Backports to 4.13
In-Reply-To: <352f0b08-d869-1d57-a357-246574cb9b55@citrix.com>
References: <352f0b08-d869-1d57-a357-246574cb9b55@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew Cooper writes ("Backports to 4.13"):
> On the tools side of things, f50a4f6e244c aafae0e800e9 2a62c22715bf
> d79cc6bc2bac 0729830cc425 are all bugs in CPUID and/or migration. "Fix
> HVM_PARAM_PAE_ENABLED handling" is only for 4.13, whereas all the others
> are back to 4.7 (technically speaking).

Done.

There seem to have been quite few requests, but there were a lot of
fixes tagged with Backport or Fixes.  I have now applied these (not
just to 4.13 but to all the supported trees so back to 4.10, as
applicable; plus one non-security bugfix I considered important enough
to go to the security-supported 4.9).

Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:52:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyvZ-0006JW-1u; Tue, 05 May 2020 14:52:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVyvX-0006JM-Hb
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:52:07 +0000
X-Inumbo-ID: fbcdf52e-8edf-11ea-9dc5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbcdf52e-8edf-11ea-9dc5-12813bfff9fa;
 Tue, 05 May 2020 14:52:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7855DAB3D;
 Tue,  5 May 2020 14:52:07 +0000 (UTC)
Subject: Re: [PATCH] x86/pv: Fix Clang build with !CONFIG_PV32
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505142810.14827-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3434eaa5-d8ce-fa8f-17a8-19e9739121d8@suse.com>
Date: Tue, 5 May 2020 16:52:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200505142810.14827-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 16:28, Andrew Cooper wrote:
> @@ -753,8 +751,9 @@ void load_system_tables(void)
>  	_set_tssldt_desc(gdt + TSS_ENTRY, (unsigned long)tss,
>  			 sizeof(*tss) - 1, SYS_DESC_tss_avail);
>  	if ( IS_ENABLED(CONFIG_PV32) )
> -		_set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
> -				 sizeof(*tss) - 1, SYS_DESC_tss_busy);
> +		_set_tssldt_desc(
> +			this_cpu(compat_gdt) - FIRST_RESERVED_GDT_ENTRY + TSS_ENTRY,
> +			(unsigned long)tss, sizeof(*tss) - 1, SYS_DESC_tss_busy);

Isn't indentation here off by 4 compared to what we
normally do with extremely large argument expressions?
Other than this lgtm.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:53:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:53:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVywx-0006TF-D3; Tue, 05 May 2020 14:53:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVyww-0006TA-0z
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:53:34 +0000
X-Inumbo-ID: 300e958a-8ee0-11ea-9dc5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 300e958a-8ee0-11ea-9dc5-12813bfff9fa;
 Tue, 05 May 2020 14:53:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DACC1ACC3;
 Tue,  5 May 2020 14:53:33 +0000 (UTC)
Subject: Re: Backports to 4.13
To: Ian Jackson <ian.jackson@citrix.com>
References: <352f0b08-d869-1d57-a357-246574cb9b55@citrix.com>
 <24241.32091.790503.338211@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5ac1c474-27d6-8df5-996c-35726cb819cc@suse.com>
Date: Tue, 5 May 2020 16:53:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <24241.32091.790503.338211@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 16:51, Ian Jackson wrote:
> Andrew Cooper writes ("Backports to 4.13"):
>> On the tools side of things, f50a4f6e244c aafae0e800e9 2a62c22715bf
>> d79cc6bc2bac 0729830cc425 are all fixes for bugs in CPUID and/or
>> migration.  "Fix HVM_PARAM_PAE_ENABLED handling" is only for 4.13,
>> whereas all the others go back to 4.7 (technically speaking).
> 
> Done.
> 
> There seem to have been rather few explicit requests, but there were a
> lot of fixes tagged with Backport or Fixes.  I have now applied these
> (not just to 4.13 but to all the supported trees, so back to 4.10, as
> applicable; plus one non-security bugfix I considered important enough
> to go to the security-supported 4.9).

Why back to 4.10? Only 4.12 and 4.13 are fully supported at this point,
afaict. Older trees should only get security fixes imo, with extremely
few exceptions.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:55:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVyzC-0006fu-1a; Tue, 05 May 2020 14:55:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVyzB-0006fl-0J
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:55:53 +0000
X-Inumbo-ID: 83999448-8ee0-11ea-9dc6-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83999448-8ee0-11ea-9dc6-12813bfff9fa;
 Tue, 05 May 2020 14:55:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D783EABD1;
 Tue,  5 May 2020 14:55:53 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: fix an off-by-one error
To: Hongyan Xia <hx242@xen.org>
References: <37b7ec049ff82f92cc6724a743867e1cd9365f5b.1588676790.git.hongyxia@amazon.com>
 <8c3d6a2c-316c-f7fc-a2b0-3ea12721867d@suse.com>
 <45b6e79141e638c2930ccdfbc26a0de54034c525.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ac455651-b8d2-7fe5-ff4f-d9c24eb10be1@suse.com>
Date: Tue, 5 May 2020 16:55:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <45b6e79141e638c2930ccdfbc26a0de54034c525.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 16:01, Hongyan Xia wrote:
> On Tue, 2020-05-05 at 15:38 +0200, Jan Beulich wrote:
>> On 05.05.2020 13:06, Hongyan Xia wrote:
>>> @@ -358,7 +359,7 @@ static void show_guest_stack(struct vcpu *v,
>>> const struct cpu_user_regs *regs)
>>>      if ( mask == PAGE_SIZE )
>>>      {
>>>          BUILD_BUG_ON(PAGE_SIZE == STACK_SIZE);
>>> -        unmap_domain_page(stack);
>>> +        unmap_domain_page(stack_page);
>>>      }
>>
>> With this I think you want to change the whole construct here to
>>
>>     if ( stack_page )
>>         unmap_domain_page(stack_page);
>>
>> i.e. with the then no longer relevant BUILD_BUG_ON() also dropped.
> 
> I wonder if such a construct is better with UNMAP_DOMAIN_PAGE(), since
> it deals with NULL and will nullify it to prevent misuse.

In the case here I think I agree. For the future, may I ask that
you wait to send a new version until the discussion of the
previous one has settled?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 14:57:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:57:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVz0f-0006vc-PV; Tue, 05 May 2020 14:57:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWWA=6T=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jVz0e-0006vV-18
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:57:24 +0000
X-Inumbo-ID: b847467c-8ee0-11ea-ae69-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b847467c-8ee0-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 14:57:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588690640;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=jpg7LSRrdpRK6MQdn8AuGg28BoPfjPRlErfxvAtNaHw=;
 b=FYxKJRnY20Wgkf3Z7mqwm4WeSKcljjsHkeL8rTsHK7xIUM2b2mVlxj23
 357ndltpw46axP53Cl1HJ67qV6Kv/0Ike5uGeFZ4wbuW2qY+eqWWHOs3d
 gf+oDvGSFpPcrwzzXXHmf7QsNVILxmvMIIhG1xXBwoFINB/fA9724zuNm w=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: 4x16cARGe7unI3ZgSDVGTtkJHmYxjfgOdIa05dG8ZKkrtZah3zgGxOhAYaDCWaHNPuqk0phdWG
 abGpw7aBunaYxtfssOLC8P0GffcBB/vz8Env/1bskU2jFn2giw1v4RZKF1vOdANZB2jL6hiEfe
 v0FKE/wpfWjLEK9kiQmFd4SxHX1PxTMcxAU4TJYhYt5dyxdvywL57ljke4FHG7la7LG/XdWB1B
 P1a68h2IUnLhMR06TF5CaWpYBNqfne1U93porHWKrgRYqQBF95nXQjHa8YEz8XLbI+bJEwezSf
 ue4=
X-SBRS: 2.7
X-MesageID: 17080200
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17080200"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH RFC] automation: split logs into separate files
Date: Tue, 5 May 2020 16:57:10 +0200
Message-ID: <20200505145710.17630-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Doug Goldstein <cardoe@cardoe.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The aim of this patch is to make it easier to digest the output of the
gitlab jobs, as the current output is IMO too long and that makes it
hard to spot what went wrong. So this patch does the following:
 - Rename build.log to run.log.
 - Split each build's log into a logs folder, using the
   build{-kconfig}.log file name. Note the default kconfig (either
   random or not) will use the name build.log.
 - Also save the output from kconfig as kconfig{-kconfig}.log.
 - Drop the build and configure output from the default gitlab output.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
I find the output from the gitlab tests hard to consume, as it's
thousands of lines long. With this change such output is split into
several smaller files that are IMO easier to consume, and the shortened
default output should also make it easier to identify exactly which
step went wrong.
---
 automation/gitlab-ci/build.yaml |  3 ++-
 automation/scripts/build        | 18 +++++++++++-------
 2 files changed, 13 insertions(+), 8 deletions(-)

diff --git a/automation/gitlab-ci/build.yaml b/automation/gitlab-ci/build.yaml
index 1e61d30c85..697fe7cc55 100644
--- a/automation/gitlab-ci/build.yaml
+++ b/automation/gitlab-ci/build.yaml
@@ -2,9 +2,10 @@
   stage: build
   image: registry.gitlab.com/xen-project/xen/${CONTAINER}
   script:
-    - ./automation/scripts/build 2>&1 | tee build.log
+    - ./automation/scripts/build 2>&1 | tee run.log
   artifacts:
     paths:
+      - logs/
       - binaries/
       - xen-config
       - '*.log'
diff --git a/automation/scripts/build b/automation/scripts/build
index 0cd0f3971d..704b428f86 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -8,11 +8,14 @@ cc-ver()
     $CC -dumpversion | awk -F. '{ printf "0x%02x%02x%02x", $1, $2, $3 }'
 }
 
+mkdir logs
+
 # random config or default config
 if [[ "${RANDCONFIG}" == "y" ]]; then
-    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig
+    make -C xen KCONFIG_ALLCONFIG=tools/kconfig/allrandom.config randconfig \
+         > logs/kconfig.log 2>&1
 else
-    make -C xen defconfig
+    make -C xen defconfig > logs/kconfig.log 2>&1
 fi
 
 # build up our configure options
@@ -38,9 +41,9 @@ if [[ "${CC}" == "gcc" && `cc-ver` -lt 0x040600 ]]; then
     cfgargs+=("--with-system-seabios=/bin/false")
 fi
 
-./configure "${cfgargs[@]}"
+./configure "${cfgargs[@]}" > logs/configure.log 2>&1
 
-make -j$(nproc) dist
+make -j$(nproc) dist > logs/build.log 2>&1
 
 # Extract artifacts to avoid getting rewritten by customised builds
 cp xen/.config xen-config
@@ -58,8 +61,9 @@ esac
 cfg_dir="automation/configs/${arch}"
 for cfg in `ls ${cfg_dir}`; do
     echo "Building $cfg"
-    make -j$(nproc) -C xen clean
+    make -j$(nproc) -C xen clean > /dev/null 2>&1
     rm -f xen/.config
-    make -C xen KBUILD_DEFCONFIG=../../../../${cfg_dir}/${cfg} XEN_CONFIG_EXPERT=y defconfig
-    make -j$(nproc) -C xen XEN_CONFIG_EXPERT=y
+    make -C xen KBUILD_DEFCONFIG=../../../../${cfg_dir}/${cfg} \
+         XEN_CONFIG_EXPERT=y defconfig > logs/kconfig-${cfg}.log 2>&1
+    make -j$(nproc) -C xen XEN_CONFIG_EXPERT=y > logs/build-${cfg}.log 2>&1
 done
-- 
2.26.2
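The redirection pattern used throughout the script above boils down to a small idiom (a sketch using echo and a temporary directory as stand-ins for the real kconfig/configure/build invocations; the file names mirror the patch):

```shell
#!/bin/bash
# Sketch of the per-step logging pattern from the patch, with echo as a
# stand-in for the real kconfig/configure/build invocations.
set -e

workdir=$(mktemp -d)
cd "$workdir"
mkdir logs

# Each step sends both stdout and stderr to its own file under logs/,
# keeping the top-level CI output short.
echo "defconfig output" > logs/kconfig.log   2>&1
echo "configure output" > logs/configure.log 2>&1
echo "build output"     > logs/build.log     2>&1

# The per-config rebuild loop would name its logs build-${cfg}.log.
test -s logs/kconfig.log && test -s logs/build.log && echo ok
```

The logs/ directory then only needs to be listed once under `artifacts: paths:` for gitlab to pick up every per-step file.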



From xen-devel-bounces@lists.xenproject.org Tue May 05 14:59:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 14:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVz2H-00076D-Ai; Tue, 05 May 2020 14:59:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVz2G-000768-Hc
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 14:59:04 +0000
X-Inumbo-ID: f5ceee5a-8ee0-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5ceee5a-8ee0-11ea-9887-bc764e2007e4;
 Tue, 05 May 2020 14:59:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6199DABB2;
 Tue,  5 May 2020 14:59:05 +0000 (UTC)
Subject: Re: [PATCH 1/3] x86/mm: do not attempt to convert _PAGE_GNTTAB to a
 boolean
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-2-roger.pau@citrix.com>
 <20332b18-960c-a180-8150-55fae60bdc6e@suse.com>
 <20200505141138.GC1353@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <26ec20bb-411f-c16a-40ff-417c8c5ce777@suse.com>
Date: Tue, 5 May 2020 16:59:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200505141138.GC1353@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 16:11, Roger Pau Monné wrote:
> On Tue, May 05, 2020 at 03:47:43PM +0200, Jan Beulich wrote:
>> On 05.05.2020 11:24, Roger Pau Monne wrote:
>>> Remove the conversion of _PAGE_GNTTAB to a boolean, since the AND
>>> operation performed afterwards will already yield false if the value
>>> of the macro is 0.
>>
>> I'm sorry, but no. The expression was put there on purpose by
>> 0932210ac095 ("x86: Address "Bitwise-and with zero
>> CONSTANT_EXPRESSION_RESULT" Coverity issues"), and the
>> description there is clearly telling us that this wants to stay
>> unless Coverity changed in the meantime. Otherwise I'm afraid
>> a more elaborate solution will be needed to please both.
> 
> Clang is fine with changing this to _PAGE_GNTTAB != 0. Would you be
> OK with this approach?

I'd be okay with it, but then I guess I'd prefer ...

>> Or a
>> more simplistic one, like using "#if _PAGE_GNTTAB" around the
>> construct.
> 
> Yes, that's the other solution I had in mind.

... this one. Let's see if Andrew has a clear opinion either
way - it was he who addressed the original Coverity issue after
all.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 15:01:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 15:01:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVz4j-0007v3-OY; Tue, 05 May 2020 15:01:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MjzU=6T=arndb.de=arnd@srs-us1.protection.inumbo.net>)
 id 1jVz4j-0007uy-1g
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 15:01:37 +0000
X-Inumbo-ID: 4f9d86e5-8ee1-11ea-9dc6-12813bfff9fa
Received: from mout.kundenserver.de (unknown [212.227.126.134])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f9d86e5-8ee1-11ea-9dc6-12813bfff9fa;
 Tue, 05 May 2020 15:01:36 +0000 (UTC)
Received: from mail-qk1-f174.google.com ([209.85.222.174]) by
 mrelayeu.kundenserver.de (mreue012 [212.227.15.129]) with ESMTPSA (Nemesis)
 id 1N0Ip5-1jBP5c18a7-00xLdy for <xen-devel@lists.xenproject.org>; Tue, 05 May
 2020 17:01:35 +0200
Received: by mail-qk1-f174.google.com with SMTP id c10so2574414qka.4
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 08:01:34 -0700 (PDT)
X-Gm-Message-State: AGi0PuZlLKX3+47mdNkiSAKNhkHFhBzn7pT1c0KPY9Ju5ofMB+7NClnY
 4DnhnhhvIy2D3973ZxC23s1sRQHH89kXQOHSyek=
X-Google-Smtp-Source: APiQypKEmmDkWyjdjHqC8UEv4XIcDMi+fpqsIc8gLYnt2vwvk7Vu3FpHhRwPKldfTYlGKweDKG4ilgk/kiL3dUraQdk=
X-Received: by 2002:a37:aa82:: with SMTP id t124mr3769549qke.3.1588690894009; 
 Tue, 05 May 2020 08:01:34 -0700 (PDT)
MIME-Version: 1.0
References: <20200505141546.824573-1-arnd@arndb.de>
 <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
In-Reply-To: <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
From: Arnd Bergmann <arnd@arndb.de>
Date: Tue, 5 May 2020 17:01:18 +0200
X-Gmail-Original-Message-ID: <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
Message-ID: <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Provags-ID: V03:K1:XAi3NAIG9q+S2wymV1p9JIzTW3zSz38g/JiAiisYkR6zwCP6Dna
 MHXv3KEcH2jfJns33Ph0LwCW3t18yc2Sq0kfyuYssjp2YzA8KFlqxwZ7pKC0ElUtdV8+Fe7
 iMVSBuXeJhRYF7rpBhCu/3io+8u3J/eZkVvljJDetdEyE14V60PtNTtoF04CXwYSejLuT/3
 qVR2yIso+CerZ2ruX0F2A==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:uEQeqHjKmH4=:BiP2CXFuxso7Gi1ciPJnG6
 sAjgeQnIDKr3oNyWo7Z6DzbOexPT2Hpk5PYE3cZ8SZzJ9PV719IzvJMSF/QaIWPChS/MlDDv1
 bedNneesIFiWfVGsy+6Hu+HNK4u37GZeskPsres6S8fczSg+q6Lah8FxKsNBbYYgkP6r2cmzJ
 A0/isdIGI+gOwKPIk5727Cl9oRR4dwo0ClFnZS7b7INe2JZcmOY+4uALZPYiSTXrZssiw+AJj
 h9Fe+sdL51ThHzhsPdTMH5c7JKcEBUNya1b+N50fJoFxY++etPP+WCS7T8Hman6oGBRDSwLae
 O+aQNZq3fq56JJzTQQ8QeMf819lDGfvx6Hf99nM3pxzsEU3cQVUD6yKicTX8vuHUEkEXPYd9X
 uiaPnZO39UVI/nUtdY85TX/7BrRmuW3+sT30qMDpU/IN+Sl0CWt1LuOpmnuBI/pwGkSTQdktv
 7afX798AtBhAQc6/Phrths6uMG2j+otmym98kcw9I9pm0SzE94Iw6u7h9DgaXXYiVAjWzMUBE
 3ht1VLgnVYA4kpXmruCaqCThRR9vJZ9bl5JaZqY0lrfeDpIRRIzZavzMJy5iALkAtmttzEdX/
 E5GN8Ovg0AnadvS6sR6hg9woklq1/mTCSH7ssorjhkHdiMZ6fvGzinDB+/y8hPmwujEFNY4Jj
 hEL03hweNS9w8zG16B8aOBPsBDqH5ISJoUEMBFsWPfnxVLDi/8BgetK40kaf9H4zGKDwN4605
 LOhEEbNStFFN/830ZP9j1TJcv3lie/wqTMYWv44ixHuZfnHf/GU7WlZv6O6jxRRAFZbQIFYlG
 wboHmiatJ4MWr2Chto1uHS1YHpLsWAnURE6b9jdB7DupSTzugU=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 5, 2020 at 4:34 PM Jürgen Groß <jgross@suse.com> wrote:
> On 05.05.20 16:15, Arnd Bergmann wrote:
> > The __xenbus_map_ring() function has two large arrays, 'map' and
> > 'unmap' on its stack. When clang decides to inline it into its caller,
> > xenbus_map_ring_valloc_hvm(), the total stack usage exceeds the warning
> > limit for stack size on 32-bit architectures.
> >
> > drivers/xen/xenbus/xenbus_client.c:592:12: error: stack frame size of 1104 bytes in function 'xenbus_map_ring_valloc_hvm' [-Werror,-Wframe-larger-than=]
> >
> > As far as I can tell, other compilers don't inline it here, so we get
> > no warning, but the stack usage is actually the same. It is possible
> > for both arrays to use the same location on the stack, but the compiler
> > cannot prove that this is safe because they get passed to external
> > functions that may end up using them until they go out of scope.
> >
> > Move the two arrays into separate basic blocks to limit the scope
> > and force them to occupy less stack in total, regardless of the
> > inlining decision.
>
> Why don't you put both arrays into a union?

I considered that as well, and don't really mind either way. I think it
does get a bit ugly whatever we do. If you prefer the union, I can
respin the patch that way.

      Arnd


From xen-devel-bounces@lists.xenproject.org Tue May 05 15:03:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 15:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVz6I-00080i-3l; Tue, 05 May 2020 15:03:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVz6G-00080Y-Hy
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 15:03:12 +0000
X-Inumbo-ID: 88744570-8ee1-11ea-9dc6-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88744570-8ee1-11ea-9dc6-12813bfff9fa;
 Tue, 05 May 2020 15:03:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 625D8ABD1;
 Tue,  5 May 2020 15:03:11 +0000 (UTC)
Subject: Re: [PATCH v2] x86/traps: fix an off-by-one error
To: Hongyan Xia <hx242@xen.org>
References: <f825eca729ee9fab872e4ef1b0af10d0a7b3d852.1588688249.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c2ed7881-226c-cdf4-23a1-8e011b01263b@suse.com>
Date: Tue, 5 May 2020 17:03:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <f825eca729ee9fab872e4ef1b0af10d0a7b3d852.1588688249.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 16:17, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> stack++ can go into the next page and unmap_domain_page() will unmap the
> wrong one, causing mapcache and memory corruption. Fix.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Tue May 05 15:06:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 15:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVz97-0008Ay-IB; Tue, 05 May 2020 15:06:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3eM=6T=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jVz95-0008At-Tb
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 15:06:07 +0000
X-Inumbo-ID: eff2a016-8ee1-11ea-9dc8-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eff2a016-8ee1-11ea-9dc8-12813bfff9fa;
 Tue, 05 May 2020 15:06:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588691164;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=uzRIosQwWM4rnlL5lCtWVUmTdxQNkiKBVfaaqWmhZNo=;
 b=CLNrLS1ukTAk1Hxb4EgLIS/CKyhOrXez4Po9GdwiPGK0GmLyKY75xENm
 UJHG90/8AGUrSHFVpmaFXPibHFVwVQkP7Ha5RPIga+DIqvei+d6H05Pv5
 5KaMb9F5PV9++ejiNpoL0jzc/cY3zLYQMN73Us+EbJfDWhJDFYLopeqdI w=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: nrXnuxROmWD5NfrrBYUDWyyZHGdEV5CPjtYeo3y1c9XoL+9UyqVKMecel/711Tmvdj/H6EvQcE
 6KA1p7enQhs1nBxEYmQBrwbupOhal+VkXsJIsgyoc0noY2zYC2bxPBf/yFd+46wAhDWsslCm9R
 SD3XMDdN5SaUZPzKswYgZ803gZ8OO7QQLB8ewB5uitMpB+4Sh6HntYZV9dHnVmtYR8F+8xDNrH
 8x/cHAqU1S5o1b1fPMaSsKC7s2kCn+rmm1FKUmYESjNSLICgV+Am3WYKm4VdVP9eQqUG8JJMR2
 q3A=
X-SBRS: 2.7
X-MesageID: 17045496
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17045496"
Subject: Re: [PATCH] x86/pv: Fix Clang build with !CONFIG_PV32
To: Jan Beulich <jbeulich@suse.com>
References: <20200505142810.14827-1-andrew.cooper3@citrix.com>
 <3434eaa5-d8ce-fa8f-17a8-19e9739121d8@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a423db54-601b-b445-5f66-507301c78102@citrix.com>
Date: Tue, 5 May 2020 16:05:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <3434eaa5-d8ce-fa8f-17a8-19e9739121d8@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 15:52, Jan Beulich wrote:
> On 05.05.2020 16:28, Andrew Cooper wrote:
>> @@ -753,8 +751,9 @@ void load_system_tables(void)
>>  	_set_tssldt_desc(gdt + TSS_ENTRY, (unsigned long)tss,
>>  			 sizeof(*tss) - 1, SYS_DESC_tss_avail);
>>  	if ( IS_ENABLED(CONFIG_PV32) )
>> -		_set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
>> -				 sizeof(*tss) - 1, SYS_DESC_tss_busy);
>> +		_set_tssldt_desc(
>> +			this_cpu(compat_gdt) - FIRST_RESERVED_GDT_ENTRY + TSS_ENTRY,
>> +			(unsigned long)tss, sizeof(*tss) - 1, SYS_DESC_tss_busy);
> Isn't indentation here off by 4 compared to what we
> normally do with extremely large argument expressions?

No.  This is Linux style (therefore 8-space tabs), not Xen style (4 spaces).

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 05 15:16:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 15:16:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVzIe-0000es-Mk; Tue, 05 May 2020 15:16:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GqFM=6T=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1jVzId-0000en-S8
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 15:15:59 +0000
X-Inumbo-ID: 529e26c6-8ee3-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 529e26c6-8ee3-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 15:15:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588691758;
 h=from:to:subject:date:message-id:references:in-reply-to:
 content-id:content-transfer-encoding:mime-version;
 bh=liSVRsl0Xx4iwB/mkVtl/D04ZJpCKai472FPetG79Xo=;
 b=HLM0ywOF+LaTbILswg4skTAE/2S+YnJxTUsSUWC4xacm/6PLMv2g/7cF
 sz740f/KNM8PQf17WJ3TkZ0oa+HfUvR6qJo7+i70qzP76vIoUCmfhaJ6T
 y5XBI+hDIS3z4o784Y4eC/Fz3bmQy12eKl++GCaHWA9M4ub8+CLgdpctg I=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=edvin.torok@citrix.com;
 spf=Pass smtp.mailfrom=edvin.torok@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 edvin.torok@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="edvin.torok@citrix.com";
 x-sender="edvin.torok@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 edvin.torok@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="edvin.torok@citrix.com";
 x-sender="edvin.torok@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="edvin.torok@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: lckBtrF/ypPuUEq9XWclWdz7AAnbRDq4pjkpAFDBuSXq30E1YAVRXb/T7qg9nMvjFzPkMBFFyy
 sje0kAX/OD432QyVSYhUSnOlWgLQF904kZa6ZCjFbYkmAEBzJLphWnTCIg2KQoSpT/6Jzx+btu
 2lLBYPolzp1uxz5qsCn+w3YsbCHJR62XYEBOiEzBq1U1+7veS4m1qvLQXNFbX7qzhJLp9qXgMn
 sx4RehsK69cf2KD4/jZPIUgKRKUtu987AvZnRUMdFXVIJLWpIo02b5Vg9Q3AvnwkY8NcjuVvUV
 WDI=
X-SBRS: 2.7
X-MesageID: 17479141
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,355,1583211600"; d="scan'208";a="17479141"
From: Edwin Torok <edvin.torok@citrix.com>
To: "jgross@suse.com" <jgross@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v5] docs/designs: re-work the xenstore migration
 document...
Thread-Topic: [PATCH v5] docs/designs: re-work the xenstore migration
 document...
Thread-Index: AQHWIvASEjEPMqXWqJ3UeMFTJvZDPw==
Date: Tue, 5 May 2020 15:15:54 +0000
Message-ID: <3fa9445d4677a9a6c24fb3aaee08913ad5c13a34.camel@citrix.com>
References: <20200428150624.265-1-paul@xen.org>
 <2bf7dc13-65a4-317e-2c8c-bd6972dbb35a@xen.org>
 <fb319876-41eb-e785-a197-92440187a135@suse.com>
In-Reply-To: <fb319876-41eb-e785-a197-92440187a135@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <2B7F26E536C3574CAEABC0B14BD55951@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 2020-05-05 at 14:13 +0100, Jürgen Groß wrote:
> On 05.05.20 15:01, Julien Grall wrote:
> > Hi Paul,
> > 
> > On 28/04/2020 16:06, Paul Durrant wrote:
> > > From: Paul Durrant <pdurrant@amazon.com>
> > > 
> > > ... to specify a separate migration stream that will also be
> > > suitable for
> > > live update.
> > > 
> > > The original scope of the document was to support non-
> > > cooperative
> > > migration
> > > of guests [1] but, since then, live update of xenstored has been
> > > brought into
> > > scope. Thus it makes more sense to define a separate image format
> > > for
> > > serializing xenstore state that is suitable for both purposes.
> > > 
> > > The document has been limited to specifying a new image format.
> > > The
> > > mechanism
> > > for acquiring the image for live update or migration is not
> > > covered as
> > > that
> > > is more appropriately dealt with by a patch to
> > > docs/misc/xenstore.txt.
> > > It is
> > > also expected that, when the first implementation of live update
> > > or
> > > migration
> > > making use of this specification is committed, that the document
> > > is
> > > moved from
> > > docs/designs into docs/specs.
> > > 
> > > NOTE: It will only be necessary to save and restore state for
> > > active
> > > xenstore
> > >         connections, but the documentation for 'RESUME' in
> > > xenstore.txt
> > > implies
> > >         otherwise. That command is unused so this patch deletes it
> > > from
> > > the
> > >         specification.
> 
> Could someone from Citrix please verify that XAPI isn't using
> XS_RESUME?

The implementation of XS_RESUME in oxenstored doesn't do much: it seems
to be just a way for Dom0 to check whether a domain exists or not, and
for a domain to check whether they are Dom0 or not.
If the domain exists, then the resume implementation just returns `()`,
i.e. does nothing.

I can't find any references to Xs.resume in xenopsd (or the other XAPI
repos that I got checked out), so I think it can be safely removed from
the spec and client libraries (I'd keep it in the actual oxenstored
implementation just in case some guest does call it).

Best regards,
--Edwin

> 
> 
> Juergen
> 


From xen-devel-bounces@lists.xenproject.org Tue May 05 15:20:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 15:20:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVzNB-0001Rs-9R; Tue, 05 May 2020 15:20:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5wz9=6T=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jVzNA-0001Rn-Jh
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 15:20:40 +0000
X-Inumbo-ID: fa484050-8ee3-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa484050-8ee3-11ea-b07b-bc764e2007e4;
 Tue, 05 May 2020 15:20:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4D74FAE67;
 Tue,  5 May 2020 15:20:41 +0000 (UTC)
Subject: Re: [PATCH] x86/pv: Fix Clang build with !CONFIG_PV32
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505142810.14827-1-andrew.cooper3@citrix.com>
 <3434eaa5-d8ce-fa8f-17a8-19e9739121d8@suse.com>
 <a423db54-601b-b445-5f66-507301c78102@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51841820-8dbf-ff41-5348-355a9379d74e@suse.com>
Date: Tue, 5 May 2020 17:20:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <a423db54-601b-b445-5f66-507301c78102@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 17:05, Andrew Cooper wrote:
> On 05/05/2020 15:52, Jan Beulich wrote:
>> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>>
>> On 05.05.2020 16:28, Andrew Cooper wrote:
>>> @@ -753,8 +751,9 @@ void load_system_tables(void)
>>>  	_set_tssldt_desc(gdt + TSS_ENTRY, (unsigned long)tss,
>>>  			 sizeof(*tss) - 1, SYS_DESC_tss_avail);
>>>  	if ( IS_ENABLED(CONFIG_PV32) )
>>> -		_set_tssldt_desc(compat_gdt + TSS_ENTRY, (unsigned long)tss,
>>> -				 sizeof(*tss) - 1, SYS_DESC_tss_busy);
>>> +		_set_tssldt_desc(
>>> +			this_cpu(compat_gdt) - FIRST_RESERVED_GDT_ENTRY + TSS_ENTRY,
>>> +			(unsigned long)tss, sizeof(*tss) - 1, SYS_DESC_tss_busy);
>> Isn't indentation here off by 4 compared to what we
>> normally do with extremely large argument expressions?
> 
> No.  This is Linux style (therefore 8-space tabs), not Xen style (4 spaces).

Oh, right - didn't pay attention at all to this being tabs, sorry.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 05 15:26:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 15:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jVzSw-0001iW-44; Tue, 05 May 2020 15:26:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uapr=6T=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jVzSu-0001iR-Tv
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 15:26:36 +0000
X-Inumbo-ID: ce535e0c-8ee4-11ea-9dca-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce535e0c-8ee4-11ea-9dca-12813bfff9fa;
 Tue, 05 May 2020 15:26:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5E14AAE37;
 Tue,  5 May 2020 15:26:37 +0000 (UTC)
Subject: Re: [PATCH v5] docs/designs: re-work the xenstore migration
 document...
To: Edwin Torok <edvin.torok@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20200428150624.265-1-paul@xen.org>
 <2bf7dc13-65a4-317e-2c8c-bd6972dbb35a@xen.org>
 <fb319876-41eb-e785-a197-92440187a135@suse.com>
 <3fa9445d4677a9a6c24fb3aaee08913ad5c13a34.camel@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <25c0675d-a55a-e7fd-8026-594c3148099c@suse.com>
Date: Tue, 5 May 2020 17:26:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <3fa9445d4677a9a6c24fb3aaee08913ad5c13a34.camel@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.20 17:15, Edwin Torok wrote:
> On Tue, 2020-05-05 at 14:13 +0100, Jürgen Groß wrote:
>> On 05.05.20 15:01, Julien Grall wrote:
>>> Hi Paul,
>>>
>>> On 28/04/2020 16:06, Paul Durrant wrote:
>>>> From: Paul Durrant <pdurrant@amazon.com>
>>>>
>>>> ... to specify a separate migration stream that will also be
>>>> suitable for
>>>> live update.
>>>>
>>>> The original scope of the document was to support non-
>>>> cooperative
>>>> migration
>>>> of guests [1] but, since then, live update of xenstored has been
>>>> brought into
>>>> scope. Thus it makes more sense to define a separate image format
>>>> for
>>>> serializing xenstore state that is suitable for both purposes.
>>>>
>>>> The document has been limited to specifying a new image format.
>>>> The
>>>> mechanism
>>>> for acquiring the image for live update or migration is not
>>>> covered as
>>>> that
>>>> is more appropriately dealt with by a patch to
>>>> docs/misc/xenstore.txt.
>>>> It is
>>>> also expected that, when the first implementation of live update
>>>> or
>>>> migration
>>>> making use of this specification is committed, that the document
>>>> is
>>>> moved from
>>>> docs/designs into docs/specs.
>>>>
>>>> NOTE: It will only be necessary to save and restore state for
>>>> active
>>>> xenstore
>>>>         connections, but the documentation for 'RESUME' in
>>>> xenstore.txt
>>>> implies
>>>>         otherwise. That command is unused so this patch deletes it
>>>> from
>>>> the
>>>>         specification.
>>
>> Could someone from Citrix please verify that XAPI isn't using
>> XS_RESUME?
> 
> The implementation of XS_RESUME in oxenstored doesn't do much: it seems
> to be just a way for Dom0 to check whether a domain exists or not, and
> for a domain to check whether they are Dom0 or not.
> If the domain exists, then the resume implementation just returns `()`,
> i.e. does nothing.
> 
> I can't find any references to Xs.resume in xenopsd (or the other XAPI
> repos that I got checked out), so I think it can be safely removed from
> the spec and client libraries (I'd keep it in the actual oxenstored
> implementation just in case some guest does call it).

Thanks for the confirmation.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 05 16:02:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 16:02:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW01X-0005Tu-03; Tue, 05 May 2020 16:02:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uapr=6T=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jW01V-0005Tp-RO
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 16:02:21 +0000
X-Inumbo-ID: cc9e4f9a-8ee9-11ea-9dcd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc9e4f9a-8ee9-11ea-9dcd-12813bfff9fa;
 Tue, 05 May 2020 16:02:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8B845AEFD;
 Tue,  5 May 2020 16:02:21 +0000 (UTC)
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: Arnd Bergmann <arnd@arndb.de>
References: <20200505141546.824573-1-arnd@arndb.de>
 <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
 <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <48893239-dde9-4e94-040d-859f4348816d@suse.com>
Date: Tue, 5 May 2020 18:02:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.20 17:01, Arnd Bergmann wrote:
> On Tue, May 5, 2020 at 4:34 PM Jürgen Groß <jgross@suse.com> wrote:
>> On 05.05.20 16:15, Arnd Bergmann wrote:
>>> The __xenbus_map_ring() function has two large arrays, 'map' and
>>> 'unmap' on its stack. When clang decides to inline it into its caller,
>>> xenbus_map_ring_valloc_hvm(), the total stack usage exceeds the warning
>>> limit for stack size on 32-bit architectures.
>>>
>>> drivers/xen/xenbus/xenbus_client.c:592:12: error: stack frame size of 1104 bytes in function 'xenbus_map_ring_valloc_hvm' [-Werror,-Wframe-larger-than=]
>>>
>>> As far as I can tell, other compilers don't inline it here, so we get
>>> no warning, but the stack usage is actually the same. It is possible
>>> for both arrays to use the same location on the stack, but the compiler
>>> cannot prove that this is safe because they get passed to external
>>> functions that may end up using them until they go out of scope.
>>>
>>> Move the two arrays into separate basic blocks to limit the scope
>>> and force them to occupy less stack in total, regardless of the
>>> inlining decision.
>>
>> Why don't you put both arrays into a union?
> 
> I considered that as well, and don't really mind either way. I think it does
> get a bit ugly whatever we do. If you prefer the union, I can respin the
> patch that way.

Hmm, thinking more about it I think the real clean solution would be to
extend struct map_ring_valloc_hvm to cover the pv case, too, to add the
map and unmap arrays (possibly as a union) to it and to allocate it
dynamically instead of having it on the stack.

Would you be fine doing this?


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 05 16:12:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 16:12:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW0B9-0006Mk-0L; Tue, 05 May 2020 16:12:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QztI=6T=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jW0B7-0006Mf-Vq
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 16:12:18 +0000
X-Inumbo-ID: 3061e662-8eeb-11ea-b9cf-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3061e662-8eeb-11ea-b9cf-bc764e2007e4;
 Tue, 05 May 2020 16:12:17 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 045G3fkV164222;
 Tue, 5 May 2020 16:12:11 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=gl4ufvD0inhnCx10cJzHgIvAaRMgE8TFgjDbAje8y1s=;
 b=bEGCIuuyMFjWXJSMoiJZCpl4FwR/jI38y/0wtriI4fX9EdHYi4VnsQ7QxUICLOv2DE8Y
 gD7RQuMed5aoNLuYWO2qK8Z6unYQ7XWcFIfIiN5iNiEXvFCBt9FXO/aQnyE4vHf/+wdc
 vXoE2pF7+rfpXozWJBm2EVcDzrXCerCOOmBv2ulwK0mLqgBPIACBtKZ6+ORsZspdYC21
 AlNfvqEhuLcdA6d9YCSbDNPgFegoRjWvYXLL4MFaRScKII54P4jIONELsSHcvhQIdSGa
 rXUAjk0rb0dw+IZ61iZuvSP3ZpcA2oAWt2gTd/Nex3z8Qy1wwGFpKuQ8G+BmbP6ygroJ fw== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 30s1gn5j01-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 05 May 2020 16:12:10 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 045G8Wnl016936;
 Tue, 5 May 2020 16:12:09 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3020.oracle.com with ESMTP id 30sjk01e4c-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 05 May 2020 16:12:09 +0000
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 045GC2NH009468;
 Tue, 5 May 2020 16:12:02 GMT
Received: from [10.39.218.134] (/10.39.218.134)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 05 May 2020 09:12:02 -0700
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Arnd Bergmann <arnd@arndb.de>
References: <20200505141546.824573-1-arnd@arndb.de>
 <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
 <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
 <48893239-dde9-4e94-040d-859f4348816d@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <656d0dc4-6c4f-29df-be7b-4cb70c2c0a5e@oracle.com>
Date: Tue, 5 May 2020 12:12:00 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.4.1
MIME-Version: 1.0
In-Reply-To: <48893239-dde9-4e94-040d-859f4348816d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9612
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 mlxscore=0 phishscore=0
 bulkscore=0 malwarescore=0 spamscore=0 mlxlogscore=999 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005050127
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9612
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 suspectscore=0 mlxscore=0
 spamscore=0 clxscore=1011 priorityscore=1501 bulkscore=0 phishscore=0
 impostorscore=0 malwarescore=0 lowpriorityscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005050126
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 5/5/20 12:02 PM, Jürgen Groß wrote:
> On 05.05.20 17:01, Arnd Bergmann wrote:
>> On Tue, May 5, 2020 at 4:34 PM Jürgen Groß <jgross@suse.com> wrote:
>>> On 05.05.20 16:15, Arnd Bergmann wrote:
>>>> The __xenbus_map_ring() function has two large arrays, 'map' and
>>>> 'unmap' on its stack. When clang decides to inline it into its caller,
>>>> xenbus_map_ring_valloc_hvm(), the total stack usage exceeds the
>>>> warning
>>>> limit for stack size on 32-bit architectures.
>>>>
>>>> drivers/xen/xenbus/xenbus_client.c:592:12: error: stack frame size
>>>> of 1104 bytes in function 'xenbus_map_ring_valloc_hvm'
>>>> [-Werror,-Wframe-larger-than=]
>>>>
>>>> As far as I can tell, other compilers don't inline it here, so we get
>>>> no warning, but the stack usage is actually the same. It is possible
>>>> for both arrays to use the same location on the stack, but the
>>>> compiler
>>>> cannot prove that this is safe because they get passed to external
>>>> functions that may end up using them until they go out of scope.
>>>>
>>>> Move the two arrays into separate basic blocks to limit the scope
>>>> and force them to occupy less stack in total, regardless of the
>>>> inlining decision.
>>>
>>> Why don't you put both arrays into a union?
>>
>> I considered that as well, and don't really mind either way. I think
>> it does
>> get a bit ugly whatever we do. If you prefer the union, I can respin the
>> patch that way.
>
> Hmm, thinking more about it I think the real clean solution would be to
> extend struct map_ring_valloc_hvm to cover the pv case, too, to add the
> map and unmap arrays (possibly as a union) to it and to allocate it
> dynamically instead of having it on the stack.
>
> Would you be fine doing this?



Another option might be to factor out/modify code from 
xenbus_unmap_ring() and call the resulting code from
__xenbus_map_ring()'s fail path.


-boris



From xen-devel-bounces@lists.xenproject.org Tue May 05 16:34:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 16:34:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW0WM-00085z-ST; Tue, 05 May 2020 16:34:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uapr=6T=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jW0WL-00085u-Rq
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 16:34:13 +0000
X-Inumbo-ID: 4086b646-8eee-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4086b646-8eee-11ea-9887-bc764e2007e4;
 Tue, 05 May 2020 16:34:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 34C5BAFB8;
 Tue,  5 May 2020 16:34:14 +0000 (UTC)
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Arnd Bergmann <arnd@arndb.de>
References: <20200505141546.824573-1-arnd@arndb.de>
 <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
 <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
 <48893239-dde9-4e94-040d-859f4348816d@suse.com>
 <656d0dc4-6c4f-29df-be7b-4cb70c2c0a5e@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <02dcf5df-f447-d29b-052e-c3fcd71a214f@suse.com>
Date: Tue, 5 May 2020 18:34:10 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <656d0dc4-6c4f-29df-be7b-4cb70c2c0a5e@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.20 18:12, Boris Ostrovsky wrote:
> 
> On 5/5/20 12:02 PM, Jürgen Groß wrote:
>> On 05.05.20 17:01, Arnd Bergmann wrote:
>>> On Tue, May 5, 2020 at 4:34 PM Jürgen Groß <jgross@suse.com> wrote:
>>>> On 05.05.20 16:15, Arnd Bergmann wrote:
>>>>> The __xenbus_map_ring() function has two large arrays, 'map' and
>>>>> 'unmap' on its stack. When clang decides to inline it into its caller,
>>>>> xenbus_map_ring_valloc_hvm(), the total stack usage exceeds the
>>>>> warning
>>>>> limit for stack size on 32-bit architectures.
>>>>>
>>>>> drivers/xen/xenbus/xenbus_client.c:592:12: error: stack frame size
>>>>> of 1104 bytes in function 'xenbus_map_ring_valloc_hvm'
>>>>> [-Werror,-Wframe-larger-than=]
>>>>>
>>>>> As far as I can tell, other compilers don't inline it here, so we get
>>>>> no warning, but the stack usage is actually the same. It is possible
>>>>> for both arrays to use the same location on the stack, but the
>>>>> compiler
>>>>> cannot prove that this is safe because they get passed to external
>>>>> functions that may end up using them until they go out of scope.
>>>>>
>>>>> Move the two arrays into separate basic blocks to limit the scope
>>>>> and force them to occupy less stack in total, regardless of the
>>>>> inlining decision.
>>>>
>>>> Why don't you put both arrays into a union?
>>>
>>> I considered that as well, and don't really mind either way. I think
>>> it does
>>> get a bit ugly whatever we do. If you prefer the union, I can respin the
>>> patch that way.
>>
>> Hmm, thinking more about it, I think the really clean solution would be to
>> extend struct map_ring_valloc_hvm to cover the pv case, too, to add the
>> map and unmap arrays (possibly as a union) to it and to allocate it
>> dynamically instead of having it on the stack.
>>
>> Would you be fine doing this?
> 
> 
> 
> Another option might be to factor out/modify code from
> xenbus_unmap_ring() and call the resulting code from
> __xenbus_map_ring()'s fail path.

This will still allocate large arrays on the stack. If we ever increase
the max ring page order, it will explode.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 05 17:22:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 17:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW1GK-0003mx-Mc; Tue, 05 May 2020 17:21:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZEp/=6T=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jW1GJ-0003ms-86
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 17:21:43 +0000
X-Inumbo-ID: e271c531-8ef4-11ea-9ddc-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e271c531-8ef4-11ea-9ddc-12813bfff9fa;
 Tue, 05 May 2020 17:21:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=nL2MO+nQpNNpHXiaN/n7s+tnvyO+S+JUbNAor9wMmAU=; b=NLGcLMI2o9Z6phESRAHMx8j7kN
 bku6HR7z6vLzKALp8T4XxbVPENL2adiHE5AZ5HQlWs5+ywWcDPBCaKHFwYSgHUOknVSr4MPJNt5P1
 0KFaKiJTtFKlELarHMsQufn3EcylMg2jlhQjBGxHnazGigXQS8BHdJ4miVzstLAfQnBQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jW1GG-0008IE-OY; Tue, 05 May 2020 17:21:40 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jW1GG-0005hD-Hb; Tue, 05 May 2020 17:21:40 +0000
Subject: Re: [PATCH v3] tools/xenstore: don't store domU's mfn of ring page in
 xenstored
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20200430053842.4376-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <419d1386-a634-60e2-5d5b-e90c9300dd3f@xen.org>
Date: Tue, 5 May 2020 18:21:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200430053842.4376-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Juergen,

On 30/04/2020 06:38, Juergen Gross wrote:
> The XS_INTRODUCE command has two parameters: the mfn (or better: gfn)
> of the domain's xenstore ring page and the event channel of the
> domain for communicating with Xenstore.
> 
> The gfn is not really needed. It is stored in the per-domain struct
> in xenstored, and in case of another XS_INTRODUCE for the domain it
> is tested to match the original value. If it doesn't match, the
> command is aborted with EINVAL; otherwise the event channel to the
> domain is recreated.
> 
> As XS_INTRODUCE is limited to dom0 and there is no real downside to
> recreating the event channel, just omit the test for the gfn to
> match, and don't return EINVAL for multiple XS_INTRODUCE calls.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 05 17:33:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 17:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW1Rb-0004iz-Na; Tue, 05 May 2020 17:33:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3eM=6T=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jW1Ra-0004iu-VC
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 17:33:23 +0000
X-Inumbo-ID: 8414a992-8ef6-11ea-9ddd-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8414a992-8ef6-11ea-9ddd-12813bfff9fa;
 Tue, 05 May 2020 17:33:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588700002;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=LuqC5+kPUQnfZRlEPz2H/N4Ap20VVvbFi1E/NmLAuSI=;
 b=XoEJ0gdjxlfCP75Nb6rm7re213FXsbrl04eySZ8/GJbFxiofofVbhoUb
 sGhePbGpUlXQbc+sgZSFBJ4m3P4WewBeO9TSmviEgWiBQwOxxGvl7azp4
 z1SPg9FwZjvBgZ1HQcvW3ecddzLpFoyUP1KoKEIuVhD9oag1yfcuEbgz9 8=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: j1NnrzWPlsyEvBcjuEECcy4HD08SlgiEiRgCgF470WATJhLzgTus2tta+lDMJgaDCrn1ladMn+
 Zsgaf0iHBWvYJM2dzwex1o1kYAu8qNsY+Re9UUBokKy0/eQGRLKPqRtI0OY0sFabHYduQL2S36
 84CuJNSK5l0Si4XX/G0nyU1LNf7ywFZ9O44T5NHvvGsJgPOSLg7Zu7wzginPoi/RUa4jJQhCcE
 3HQ/hcaoevbW649JdRCfIS6YuQmr0F0w+1VrRE0KFMNBdWd+tydQcEncmIRRatmgIgoFS9ZBoN
 9aE=
X-SBRS: 2.7
X-MesageID: 17494119
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,356,1583211600"; d="scan'208";a="17494119"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/svm: Clean up vmcbcleanbits_t handling
Date: Tue, 5 May 2020 18:32:50 +0100
Message-ID: <20200505173250.5916-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Rework the vmcbcleanbits_t definitions to use bool, drop 'fields' from the
namespace, place the comments unambiguously, and include each bit's
position.

In svm_vmexit_handler(), don't bother conditionally writing ~0 or 0 based on
hardware support.  The field is entirely ignored on older hardware (and we're
already setting reserved cleanbits anyway).

In nsvm_vmcb_prepare4vmrun(), simplify the logic massively by dropping the
vcleanbit_set() macro in favour of a vmcbcleanbits_t local variable, which
only gets filled when clean bits were previously valid.  Fix up the style on
impacted lines.

No practical change in behaviour.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/svm/nestedsvm.c   | 45 +++++++++++++++++++-------------------
 xen/arch/x86/hvm/svm/svm.c         | 10 ++++-----
 xen/arch/x86/hvm/svm/svmdebug.c    |  2 +-
 xen/include/asm-x86/hvm/svm/vmcb.h | 45 ++++++++++++++------------------------
 4 files changed, 44 insertions(+), 58 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/nestedsvm.c b/xen/arch/x86/hvm/svm/nestedsvm.c
index bbd06e342e..998790af1b 100644
--- a/xen/arch/x86/hvm/svm/nestedsvm.c
+++ b/xen/arch/x86/hvm/svm/nestedsvm.c
@@ -343,7 +343,7 @@ static int nsvm_vcpu_hostrestore(struct vcpu *v, struct cpu_user_regs *regs)
     n1vmcb->exit_int_info.raw = 0;
 
     /* Cleanbits */
-    n1vmcb->cleanbits.bytes = 0;
+    n1vmcb->cleanbits.raw = 0;
 
     return 0;
 }
@@ -423,7 +423,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     struct nestedvcpu *nv = &vcpu_nestedhvm(v);
     struct nestedsvm *svm = &vcpu_nestedsvm(v);
     struct vmcb_struct *ns_vmcb, *n1vmcb, *n2vmcb;
-    bool_t vcleanbits_valid;
+    vmcbcleanbits_t clean = {};
     int rc;
     uint64_t cr0;
 
@@ -435,17 +435,13 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     ASSERT(n2vmcb != NULL);
 
     /* Check if virtual VMCB cleanbits are valid */
-    vcleanbits_valid = 1;
-    if ( svm->ns_ovvmcb_pa == INVALID_PADDR )
-        vcleanbits_valid = 0;
-    if (svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr)
-        vcleanbits_valid = 0;
-
-#define vcleanbit_set(_name)	\
-    (vcleanbits_valid && ns_vmcb->cleanbits.fields._name)
+    if ( svm->ns_ovvmcb_pa != INVALID_PADDR &&
+         svm->ns_ovvmcb_pa == nv->nv_vvmcxaddr )
+        clean = ns_vmcb->cleanbits;
 
     /* Enable l2 guest intercepts */
-    if (!vcleanbit_set(intercepts)) {
+    if ( !clean.intercepts )
+    {
         svm->ns_cr_intercepts = ns_vmcb->_cr_intercepts;
         svm->ns_dr_intercepts = ns_vmcb->_dr_intercepts;
         svm->ns_exception_intercepts = ns_vmcb->_exception_intercepts;
@@ -492,7 +488,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     n2vmcb->_tsc_offset = n1vmcb->_tsc_offset + ns_vmcb->_tsc_offset;
 
     /* Nested IO permission bitmaps */
-    rc = nsvm_vmrun_permissionmap(v, vcleanbit_set(iopm));
+    rc = nsvm_vmrun_permissionmap(v, clean.iopm);
     if (rc)
         return rc;
 
@@ -502,7 +498,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     n2vmcb->tlb_control = ns_vmcb->tlb_control;
 
     /* Virtual Interrupts */
-    if (!vcleanbit_set(tpr)) {
+    if ( !clean.tpr )
+    {
         n2vmcb->_vintr = ns_vmcb->_vintr;
         n2vmcb->_vintr.fields.intr_masking = 1;
     }
@@ -520,9 +517,9 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     n2vmcb->event_inj = ns_vmcb->event_inj;
 
     /* LBR and other virtualization */
-    if (!vcleanbit_set(lbr)) {
+    if ( !clean.lbr )
         svm->ns_virt_ext = ns_vmcb->virt_ext;
-    }
+
     n2vmcb->virt_ext.bytes =
         n1vmcb->virt_ext.bytes | ns_vmcb->virt_ext.bytes;
 
@@ -533,7 +530,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
      */
 
     /* Segments */
-    if (!vcleanbit_set(seg)) {
+    if ( !clean.seg )
+    {
         n2vmcb->es = ns_vmcb->es;
         n2vmcb->cs = ns_vmcb->cs;
         n2vmcb->ss = ns_vmcb->ss;
@@ -541,7 +539,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
         /* CPL */
         n2vmcb->_cpl = ns_vmcb->_cpl;
     }
-    if (!vcleanbit_set(dt)) {
+    if ( !clean.dt )
+    {
         n2vmcb->gdtr = ns_vmcb->gdtr;
         n2vmcb->idtr = ns_vmcb->idtr;
     }
@@ -614,7 +613,8 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     }
 
     /* DRn */
-    if (!vcleanbit_set(dr)) {
+    if ( !clean.dr )
+    {
         n2vmcb->_dr7 = ns_vmcb->_dr7;
         n2vmcb->_dr6 = ns_vmcb->_dr6;
     }
@@ -637,11 +637,11 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
      */
 
     /* PAT */
-    if (!vcleanbit_set(np)) {
+    if ( !clean.np )
         n2vmcb->_g_pat = ns_vmcb->_g_pat;
-    }
 
-    if (!vcleanbit_set(lbr)) {
+    if ( !clean.lbr )
+    {
         /* Debug Control MSR */
         n2vmcb->_debugctlmsr = ns_vmcb->_debugctlmsr;
 
@@ -653,7 +653,7 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     }
 
     /* Cleanbits */
-    n2vmcb->cleanbits.bytes = 0;
+    n2vmcb->cleanbits.raw = 0;
 
     rc = svm_vmcb_isvalid(__func__, ns_vmcb, v, true);
     if (rc) {
@@ -673,7 +673,6 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
     regs->rsp = ns_vmcb->rsp;
     regs->rflags = ns_vmcb->rflags;
 
-#undef vcleanbit_set
     return 0;
 }
 
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 5950e4d52b..aeebeaf873 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -345,7 +345,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
     else
         vmcb->event_inj.raw = 0;
 
-    vmcb->cleanbits.bytes = 0;
+    vmcb->cleanbits.raw = 0;
     paging_update_paging_modes(v);
 
     return 0;
@@ -693,12 +693,12 @@ static void svm_set_segment_register(struct vcpu *v, enum x86_segment seg,
     case x86_seg_ds:
     case x86_seg_es:
     case x86_seg_ss: /* cpl */
-        vmcb->cleanbits.fields.seg = 0;
+        vmcb->cleanbits.seg = 0;
         break;
 
     case x86_seg_gdtr:
     case x86_seg_idtr:
-        vmcb->cleanbits.fields.dt = 0;
+        vmcb->cleanbits.dt = 0;
         break;
 
     case x86_seg_fs:
@@ -980,7 +980,7 @@ static void svm_ctxt_switch_to(struct vcpu *v)
     svm_restore_dr(v);
 
     svm_vmsave_pa(per_cpu(host_vmcb, cpu));
-    vmcb->cleanbits.bytes = 0;
+    vmcb->cleanbits.raw = 0;
     svm_tsc_ratio_load(v);
 
     if ( cpu_has_msr_tsc_aux )
@@ -2594,7 +2594,7 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
 
     hvm_maybe_deassert_evtchn_irq();
 
-    vmcb->cleanbits.bytes = cpu_has_svm_cleanbits ? ~0u : 0u;
+    vmcb->cleanbits.raw = ~0u;
 
     /* Event delivery caused this intercept? Queue for redelivery. */
     if ( unlikely(vmcb->exit_int_info.v) &&
diff --git a/xen/arch/x86/hvm/svm/svmdebug.c b/xen/arch/x86/hvm/svm/svmdebug.c
index 91f5d9400c..ba26b6a80b 100644
--- a/xen/arch/x86/hvm/svm/svmdebug.c
+++ b/xen/arch/x86/hvm/svm/svmdebug.c
@@ -83,7 +83,7 @@ void svm_vmcb_dump(const char *from, const struct vmcb_struct *vmcb)
     printk("KernGSBase = 0x%016"PRIx64" PAT = 0x%016"PRIx64"\n",
            vmcb->kerngsbase, vmcb_get_g_pat(vmcb));
     printk("H_CR3 = 0x%016"PRIx64" CleanBits = %#x\n",
-           vmcb_get_h_cr3(vmcb), vmcb->cleanbits.bytes);
+           vmcb_get_h_cr3(vmcb), vmcb->cleanbits.raw);
 
     /* print out all the selectors */
     printk("       sel attr  limit   base\n");
diff --git a/xen/include/asm-x86/hvm/svm/vmcb.h b/xen/include/asm-x86/hvm/svm/vmcb.h
index c2e1972feb..4ed69d535c 100644
--- a/xen/include/asm-x86/hvm/svm/vmcb.h
+++ b/xen/include/asm-x86/hvm/svm/vmcb.h
@@ -384,34 +384,21 @@ typedef union
 
 typedef union
 {
-    uint32_t bytes;
-    struct
-    {
-        /* cr_intercepts, dr_intercepts, exception_intercepts,
-         * general{1,2}_intercepts, pause_filter_count, tsc_offset */
-        uint32_t intercepts: 1;
-        /* iopm_base_pa, msrpm_base_pa */
-        uint32_t iopm: 1;
-        /* guest_asid */
-        uint32_t asid: 1;
-        /* vintr */
-        uint32_t tpr: 1;
-        /* np_enable, h_cr3, g_pat */
-        uint32_t np: 1;
-        /* cr0, cr3, cr4, efer */
-        uint32_t cr: 1;
-        /* dr6, dr7 */
-        uint32_t dr: 1;
-        /* gdtr, idtr */
-        uint32_t dt: 1;
-        /* cs, ds, es, ss, cpl */
-        uint32_t seg: 1;
-        /* cr2 */
-        uint32_t cr2: 1;
-        /* debugctlmsr, last{branch,int}{to,from}ip */
-        uint32_t lbr: 1;
-        uint32_t resv: 21;
-    } fields;
+    struct {
+        bool intercepts:1; /* 0:  cr/dr/exception/general1/2_intercepts,
+                            *     pause_filter_count, tsc_offset */
+        bool iopm:1;       /* 1:  iopm_base_pa, msrpm_base_pa */
+        bool asid:1;       /* 2:  guest_asid */
+        bool tpr:1;        /* 3:  vintr */
+        bool np:1;         /* 4:  np_enable, h_cr3, g_pat */
+        bool cr:1;         /* 5:  cr0, cr3, cr4, efer */
+        bool dr:1;         /* 6:  dr6, dr7 */
+        bool dt:1;         /* 7:  gdtr, idtr */
+        bool seg:1;        /* 8:  cs, ds, es, ss, cpl */
+        bool cr2:1;        /* 9:  cr2 */
+        bool lbr:1;        /* 10: debugctlmsr, last{branch,int}{to,from}ip */
+    };
+    uint32_t raw;
 } vmcbcleanbits_t;
 
 #define IOPM_SIZE   (12 * 1024)
@@ -604,7 +591,7 @@ vmcb_set_ ## name(struct vmcb_struct *vmcb,       \
                   type value)                     \
 {                                                 \
     vmcb->_ ## name = value;                      \
-    vmcb->cleanbits.fields.cleanbit = 0;          \
+    vmcb->cleanbits.cleanbit = false;             \
 }                                                 \
 static inline type                                \
 vmcb_get_ ## name(const struct vmcb_struct *vmcb) \
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 05 18:26:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 18:26:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW2Gc-0000dU-MB; Tue, 05 May 2020 18:26:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u3eM=6T=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jW2Gb-0000dP-V6
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 18:26:05 +0000
X-Inumbo-ID: de76600e-8efd-11ea-9de6-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de76600e-8efd-11ea-9de6-12813bfff9fa;
 Tue, 05 May 2020 18:26:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588703160;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=ozbTOxlqkPrt33bDOSsjlry+LZ5xJKDHpuhB5Brbp+4=;
 b=iPkDnDdPtyl4S8f9C0W/7KHrL/8NkiXgZi3bbVzt7xJtibQlPfpWD6sM
 xqTU30OKTLcRsiVr9YVBEfP39R4y8jTeXg1cPLrOKBa0XB2Ka6y0t0x1i
 z+Opz6XjNmnFMs9qgOJ9LlE91+376ZI9oH41xoYqhlzkw24ZhMpGSpwvk 8=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: NzTDGMX+nHT7lgCRmFpXIaAAlMbE74v4a8FfpBFqO8jqZkVPiTNV275vF85+kVH7egwwd3xEhr
 pq4gtmxVkIhlBUTHOVMq1tmnn11xd3s/F+04B3746mysXtKy1QRq4X+Gpi0KF0u4AQ35v1os7y
 XuZdDCP48BlmpkhswZgK0UUPWCljIKNLRigOQkoCnIjph5o0+9/FYlMBI7Rn2VJFzUt8WQQf2Z
 t/4dVIQEvc9GsEMEBk9AtONcm1Q2wRDz+qbciu695xD9aIA2mO/LXT8K+GoLk1ZtmfFvbanGxz
 ivA=
X-SBRS: 2.7
X-MesageID: 17067254
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,356,1583211600"; d="scan'208";a="17067254"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/svm: Use flush-by-asid when available
Date: Tue, 5 May 2020 19:25:39 +0100
Message-ID: <20200505182539.12247-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

AMD Fam15h processors introduced the flush-by-asid feature for finer-grained
TLB flushing.

Flushing everything including ASID 0 (i.e. Xen context) is an unnecessarily
large hammer, and never necessary when only guest TLBs need invalidating.

When available, use TLB_CTRL_FLUSH_ASID in preference to TLB_CTRL_FLUSH_ALL.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

The APM currently describes tlb_control encoding 1 as "Flush entire
TLB (Should be used only by legacy hypervisors.)".  AMD have agreed that this
is misleading and should say "legacy hardware" instead.

This change does come with a minor observed perf improvement on Fam17h
hardware, of ~0.6s over ~22s for my XTF pagewalk test.  This test will spot
TLB flushing issues, but isn't optimal for spotting the perf increase from
better flushing.  There were no observed differences for Fam15h, but this
could simply mean that the measured code footprint was larger than the TLB on
this CPU.
---
 xen/arch/x86/hvm/svm/asid.c       | 9 ++++++---
 xen/include/asm-x86/hvm/svm/svm.h | 1 +
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/asid.c b/xen/arch/x86/hvm/svm/asid.c
index 9be90058c7..ab06dd3f3a 100644
--- a/xen/arch/x86/hvm/svm/asid.c
+++ b/xen/arch/x86/hvm/svm/asid.c
@@ -18,6 +18,7 @@
 #include <asm/amd.h>
 #include <asm/hvm/nestedhvm.h>
 #include <asm/hvm/svm/asid.h>
+#include <asm/hvm/svm/svm.h>
 
 void svm_asid_init(const struct cpuinfo_x86 *c)
 {
@@ -47,15 +48,17 @@ void svm_asid_handle_vmrun(void)
     if ( p_asid->asid == 0 )
     {
         vmcb_set_guest_asid(vmcb, 1);
-        /* TODO: investigate using TLB_CTRL_FLUSH_ASID here instead. */
-        vmcb->tlb_control = TLB_CTRL_FLUSH_ALL;
+        vmcb->tlb_control =
+            cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID : TLB_CTRL_FLUSH_ALL;
         return;
     }
 
     if ( vmcb_get_guest_asid(vmcb) != p_asid->asid )
         vmcb_set_guest_asid(vmcb, p_asid->asid);
 
-    vmcb->tlb_control = need_flush ? TLB_CTRL_FLUSH_ALL : TLB_CTRL_NO_FLUSH;
+    vmcb->tlb_control =
+        !need_flush ? TLB_CTRL_NO_FLUSH :
+        cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID : TLB_CTRL_FLUSH_ALL;
 }
 
 /*
diff --git a/xen/include/asm-x86/hvm/svm/svm.h b/xen/include/asm-x86/hvm/svm/svm.h
index 16a994ec74..cd71402cbb 100644
--- a/xen/include/asm-x86/hvm/svm/svm.h
+++ b/xen/include/asm-x86/hvm/svm/svm.h
@@ -79,6 +79,7 @@ extern u32 svm_feature_flags;
 #define cpu_has_svm_svml      cpu_has_svm_feature(SVM_FEATURE_SVML)
 #define cpu_has_svm_nrips     cpu_has_svm_feature(SVM_FEATURE_NRIPS)
 #define cpu_has_svm_cleanbits cpu_has_svm_feature(SVM_FEATURE_VMCBCLEAN)
+#define cpu_has_svm_flushbyasid cpu_has_svm_feature(SVM_FEATURE_FLUSHBYASID)
 #define cpu_has_svm_decode    cpu_has_svm_feature(SVM_FEATURE_DECODEASSISTS)
 #define cpu_has_svm_vgif      cpu_has_svm_feature(SVM_FEATURE_VGIF)
 #define cpu_has_pause_filter  cpu_has_svm_feature(SVM_FEATURE_PAUSEFILTER)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 05 18:39:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 18:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW2Tp-0001bE-US; Tue, 05 May 2020 18:39:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FNY7=6T=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jW2To-0001b9-LE
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 18:39:44 +0000
X-Inumbo-ID: c95a9418-8eff-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c95a9418-8eff-11ea-b07b-bc764e2007e4;
 Tue, 05 May 2020 18:39:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7+PnDWNsw5K4MI2qWyqSObZBEh5jUuNccNIbw94zz84=; b=4vw0fKU4ve6jNcatznZ1WEN0c
 XrBCEXy6PwmYupIC3JkNkxxzY4mEgWjFbKWWcpJ7yJAlXrSKQoJ8WhMkEaG+n8deFT3A0TU+ewQFe
 kyqigE3Jd6Z83FhUgpK9YUTAOW6jGl8ab4pcYi0EjZKwBA5hTqJ7H/SrsQO3y2k8ROzOA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW2Tn-0001MP-85; Tue, 05 May 2020 18:39:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW2Tn-00017M-00; Tue, 05 May 2020 18:39:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jW2Tm-0004lJ-Vk; Tue, 05 May 2020 18:39:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150037-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150037: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=2e3d87cc734a895ef5b486926274a178836b67a9
X-Osstest-Versions-That: xen=0135be8bd8cd60090298f02310691b688d95c3a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 05 May 2020 18:39:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150037 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150037/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  2e3d87cc734a895ef5b486926274a178836b67a9
baseline version:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8

Last test of basis   149888  2020-04-30 09:00:53 Z    5 days
Failing since        149986  2020-05-05 01:24:46 Z    0 days    3 attempts
Testing same since   150037  2020-05-05 16:00:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0135be8bd8..2e3d87cc73  2e3d87cc734a895ef5b486926274a178836b67a9 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 05 20:13:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 20:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW3wA-0001W1-CN; Tue, 05 May 2020 20:13:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FNY7=6T=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jW3w9-0001Vw-Dp
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 20:13:05 +0000
X-Inumbo-ID: d10c2cd2-8f0c-11ea-9deb-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d10c2cd2-8f0c-11ea-9deb-12813bfff9fa;
 Tue, 05 May 2020 20:12:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=me9HXYCTkFKfWEiAealr4c2q1A+SrTSogJT5lQtzors=; b=dmtKonIyX7B4LVBL5JYwSIBRT
 wielwBTGZ6h79ZP1bUtupm7vNZOluQetlk2Rux4rYUEsAfyR22JkS0JA9E2JGyPfX9g54ZBM1+W4j
 pEBN/pyj7tie9b8yXQ+A4cKJn4P9ztvs6D7aTrDMWB7Eip2leW/+cDElqLkLT8YuT/diA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW3w3-00038C-Hi; Tue, 05 May 2020 20:12:59 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW3w3-0004nq-8R; Tue, 05 May 2020 20:12:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jW3w3-0005D3-7h; Tue, 05 May 2020 20:12:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150045-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150045: tolerable trouble: pass/starved - PUSHED
X-Osstest-Failures: ovmf:test-amd64-amd64-xl-qemuu-ovmf-amd64:hosts-allocate:starved:nonblocking
 ovmf:test-amd64-i386-xl-qemuu-ovmf-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: ovmf=de15e7c2651ada46cc649c5b3c8c0c145354ac04
X-Osstest-Versions-That: ovmf=e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 05 May 2020 20:12:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150045 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150045/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ovmf-amd64  2 hosts-allocate             starved n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  2 hosts-allocate              starved n/a

version targeted for testing:
 ovmf                 de15e7c2651ada46cc649c5b3c8c0c145354ac04
baseline version:
 ovmf                 e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6

Last test of basis   149891  2020-04-30 16:10:01 Z    5 days
Testing same since   150045  2020-05-05 16:09:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ard.biesheuvel@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         starved 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          starved 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e54310451f..de15e7c265  de15e7c2651ada46cc649c5b3c8c0c145354ac04 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 05 20:57:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 20:57:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW4dE-0004ry-Rk; Tue, 05 May 2020 20:57:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MjzU=6T=arndb.de=arnd@srs-us1.protection.inumbo.net>)
 id 1jW4dD-0004rt-DS
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 20:57:35 +0000
X-Inumbo-ID: 0a95c2be-8f13-11ea-ae69-bc764e2007e4
Received: from mout.kundenserver.de (unknown [212.227.17.13])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a95c2be-8f13-11ea-ae69-bc764e2007e4;
 Tue, 05 May 2020 20:57:34 +0000 (UTC)
Received: from mail-qv1-f41.google.com ([209.85.219.41]) by
 mrelayeu.kundenserver.de (mreue107 [212.227.15.145]) with ESMTPSA (Nemesis)
 id 1N49Qd-1j5ECr2dDl-0107Nk for <xen-devel@lists.xenproject.org>; Tue, 05 May
 2020 22:57:32 +0200
Received: by mail-qv1-f41.google.com with SMTP id y19so1784160qvv.4
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 13:57:32 -0700 (PDT)
X-Gm-Message-State: AGi0PuY8zWWwp807FEScVvJ8Zm1FJbWCnA5VF4tFUVJLKqRheNkX9n9B
 NSfByNeCpI7+T/wZr88IIHvYnlNGuLdNwhK33GQ=
X-Google-Smtp-Source: APiQypJrgFYMvWYQxR3/H4be6yfzqH6LIElWNTCfGS8MCkKF2CDVLP6zcvtTKAL/xjFld7TVJiEQt5NIfuQ2sfTN1L0=
X-Received: by 2002:ad4:4a8b:: with SMTP id h11mr4439312qvx.210.1588712251082; 
 Tue, 05 May 2020 13:57:31 -0700 (PDT)
MIME-Version: 1.0
References: <20200505141546.824573-1-arnd@arndb.de>
 <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
 <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
 <48893239-dde9-4e94-040d-859f4348816d@suse.com>
In-Reply-To: <48893239-dde9-4e94-040d-859f4348816d@suse.com>
From: Arnd Bergmann <arnd@arndb.de>
Date: Tue, 5 May 2020 22:57:15 +0200
X-Gmail-Original-Message-ID: <CAK8P3a2_7+_a_cwDK1cwfrJX4azQJhd_Y0xB18cCUn6=p7fVsg@mail.gmail.com>
Message-ID: <CAK8P3a2_7+_a_cwDK1cwfrJX4azQJhd_Y0xB18cCUn6=p7fVsg@mail.gmail.com>
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: Jürgen Groß <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Provags-ID: V03:K1:eSPut2LR8tLz9Gzz4klRf8e6S9M70N85/jjDmGdciw76UpH+DKg
 Br6YHWXJF1V9PdpF7cCOCME8zBq3T+F4THcpTx9HGFNvgavsbUFoi/bcAEAPf84zpq8/Bqd
 a4srMQ4BndtOstdNzP4Pwo5sy4pPTUBmJ0WqflcoNn+IQvGjfWx7liVUA1wFSTAtdmSgzY+
 akwufJWkjOPvm1jBzbAnA==
X-Spam-Flag: NO
X-UI-Out-Filterresults: notjunk:1;V03:K0:WxzfDs83W4I=:vtkzVRGM2JCTD4CZbpLuqZ
 MEYBZS4ZM8eTEeSrw2k32hXmS3rosOYTAO+AwW5XoIDEYo6pfSsdY81uVMiZStVyxBfM4YXSE
 jf6aYfCyWE2NK8S8SM/radEhX+nraQvwmTCJL901WhDWdD3JGkHbiKtWcQBEHcKuKjFLV3Ms0
 U1QTD8uZ3Uy3ohslLg4SA3Z5dt2nfy2UHFVeVLj7FKQz2txWyWwGiLiYugvXePZcuhNmVx3DN
 goej5W9nfSxtFqyrv9zQGS1Bp2dN8HIidNGf+AhwoI/W3DiiUDdJGxcWPuY/AlcvvldCG5pcW
 lzyhCMjooz9t9h4CMzSsbEyfN5XikLQPoATrItKHcVX5RQqXhYGsoDaC1qQMXsXMmun9EYkQH
 eyu3Q4aZOTZeR0KSnMqmCUQ6G1HWQz25g0lsaBPo4VFa8cluVYomC9ujsVQM+VByiOhQdkc6W
 vGcJu0/SgVrcQLPwY6m6A2UnJxza5kIZCTc39cXviKk8fi2u85ZyG47xXhOmEhzk8niF8vdXM
 g78FLyrpN5zZSpxCm899i9/KbKchHQhqV6P/QND7k4PwvwpkKHhQ4hPo72xj+fc5vgIFpBVm/
 gRq6Fasdg4CC9216JHrFC9CIewSEsvg7eAmGAxrmri1PAESDcByaI/0tWBok0Wk45HixECbVs
 xlO1y82meQcZN/PV33kgg/Ybnlvl1sGPsV/DPKhe5ddVH1QVq/RFbf08A96fjER61QmM53uTy
 OUEni7QZj8c2Z7H4SnXDVkZFbKbV+ZjsOnNQSFNdKvTWR0iGlexJgL8EpnQtxhSpVoti2fEJp
 HRfztUO0zNRtYxUe8WkxWy1WighfLMakJ1Nb1Ur5F1fZMSkhzc=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 5, 2020 at 6:02 PM Jürgen Groß <jgross@suse.com> wrote:
> On 05.05.20 17:01, Arnd Bergmann wrote:
> > On Tue, May 5, 2020 at 4:34 PM Jürgen Groß <jgross@suse.com> wrote:
> >> On 05.05.20 16:15, Arnd Bergmann wrote:
> >
> > I considered that as well, and don't really mind either way. I think it does
> > get a bit ugly whatever we do. If you prefer the union, I can respin the
> > patch that way.
>
> Hmm, thinking more about it I think the real clean solution would be to
> extend struct map_ring_valloc_hvm to cover the pv case, too, to add the
> map and unmap arrays (possibly as a union) to it and to allocate it
> dynamically instead of having it on the stack.
>
> Would you be fine doing this?

This is a little more complex than I'd want to do without doing any testing
(and no, I don't want to do the testing either) ;-)

It does sound like a better approach though.

      Arnd


From xen-devel-bounces@lists.xenproject.org Tue May 05 21:59:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 21:59:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW5aJ-0001Mg-E8; Tue, 05 May 2020 21:58:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NSdO=6T=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1jW5aH-0001Mb-Fs
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 21:58:37 +0000
X-Inumbo-ID: 920d9908-8f1b-11ea-b9cf-bc764e2007e4
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 920d9908-8f1b-11ea-b9cf-bc764e2007e4;
 Tue, 05 May 2020 21:58:36 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id b188so44782qkd.9
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 14:58:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:in-reply-to;
 bh=UA/FDN/AZmzTyESLz4DzMK+ojUvHNwC6JyTvHuyLC6I=;
 b=D+MigUZpu4+NbAqaht/10YOmvc1yZGsK8eQvPAMo/HPfTRFq5dpRBOtv3D6YxNSv+J
 tH3PrbMQnc57orn0DQGERvRprLeJVXlExc172W5pvr86HLms/2bwaeoLh+bNJDyn4PWl
 BXv/ZlNVbUfTF6cG4qY3Q+CKqtv9+Ak5S5cE03YqZjBReH/ZHny7HbJMShIwrGnZfuo8
 fChO7iL7K59s8BQFC3o8wZTZv5hLf2GJXVvNx0gmSisFCBAPNAivpcx4ylKeQNTsDBOt
 vc5yZBFHrAqwv0Gkq9Ikg/9ENs99S6cWZHmhPEIFfpb74nfo5k1pgUM0clxIvFPeCP7F
 gPiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to;
 bh=UA/FDN/AZmzTyESLz4DzMK+ojUvHNwC6JyTvHuyLC6I=;
 b=EX7Wqi65Rc/rpzkaq8S7IpADw718l+my/V1tz/8aGJQwVLK2GKgDp+kLRueECAOShn
 oB6ZagDg+8ezTNysvm+20wHNZSy9cEevCWnFi72odu2bfe7VWDrkLJ42WB2qGhVh3luO
 CHujnXRIsoyHhRploVqOK9v49zUjN1WO8gwNskJzG0lspZKoZ5yTGF6EDh0kEskCvlZd
 0qTNdlrZ2z4aQy7xtP8X/2GfnUfWFIkV3IceQ9rMyEl91jAYj0pSC0zIR+Z0ZtDxdR+e
 kDksfo9H8CwyOSexkN9Uz3eUwfluREnLFLb7l/MfOcWwsKZN8/tLJKXc97/vIDxt9Nei
 PjqA==
X-Gm-Message-State: AGi0PuZn10ZvytqSB4v3eiUo2pqOnuDJceCoaRwPUJoQ3y/3n1DfBRYl
 wZPxE5STtoUDdx7I61pU0Ag=
X-Google-Smtp-Source: APiQypJahDcebQMtoyWAHu+3eWyNvl+B3/0PMtj32ASJuoh0Qnr+LHHIwIj5m0QAwzxGGdj5jUFBmg==
X-Received: by 2002:a05:620a:7e8:: with SMTP id
 k8mr5818499qkk.183.1588715916144; 
 Tue, 05 May 2020 14:58:36 -0700 (PDT)
Received: from piano ([216.186.244.35])
 by smtp.gmail.com with ESMTPSA id d26sm145362qkk.69.2020.05.05.14.58.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 05 May 2020 14:58:35 -0700 (PDT)
Date: Tue, 5 May 2020 16:58:33 -0500
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: Ard Biesheuvel <ardb@kernel.org>
Subject: Re: [RFC] UEFI Secure Boot on Xen Hosts
Message-ID: <20200505215833.GB301373@piano>
References: <20200429225108.GA54201@bobbye-pc>
 <ebdd7b4a-767b-1b72-a344-78b190f42ceb@suse.com>
 <20200430111501.336wte64pwsfptze@tomti.i.net-space.pl>
 <CAMj1kXF1vRtqbGS-eptB682h1xJ8LYQi74YaTZgM1nyYcpFsYA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAMj1kXF1vRtqbGS-eptB682h1xJ8LYQi74YaTZgM1nyYcpFsYA@mail.gmail.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: michal.zygowski@3mdeb.com, Daniel Kiper <daniel.kiper@oracle.com>,
 krystian.hebel@3mdeb.com, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, olivier.lambert@vates.fr, piotr.krol@3mdeb.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 01:41:12PM +0200, Ard Biesheuvel wrote:
> On Thu, 30 Apr 2020 at 13:15, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> > Anyway, the advantage of this new protocol is that it uses UEFI API to
> > load and execute PE kernel and its modules. This means that it will be
> > architecture independent. However, IIRC, if we want to add new modules
> > types to the Xen then we have to teach all bootloaders in the wild about
> > new GUIDs. Ard, am I correct?
> >
> 
> Well, if you are passing multiple files that are not interchangeable,
> each bootloader will need to do something, right? It could be another
> GUID, or we could extend the initrd media device path to take
> discriminators.
> 
> So today, we have
> 
> VendorMedia(5568e427-68fc-4f3d-ac74-ca555231cc68)
> 
> which identifies /the/ initrd on Linux. As long as this keeps working
> as intended, I have no objections extending this to
> 
> VendorMedia(5568e427-68fc-4f3d-ac74-ca555231cc68)/File(...)
> 
> etc, if we can agree that omitting the File() part means the unnamed
> initrd, and that L"initrd" is reserved as a file name. That way, you
> can choose any abstract file name you want, but please note that the
> loader still needs to provide some kind of mapping of how these
> abstract file names relate to actual files selected by the user.

This seems reasonable to me and I can't think of any good reason right
now, if we go down this path, to not just extend the initrd media device
path (as opposed to creating one that is Xen-specific).  It definitely
beats having a GUID per boot module since the number of modules changes
per Xen deployment, so there would need to be some range of GUIDs
representing modules specifically for Xen, and some rules as to how they
are mapped to real files.

It does seem simpler to ask the loader for "dom0's kernel" or "dom1's
initrd" and to have the loader use these references to grab real files.

> One thing to keep in mind, though, is that this protocol goes away
> after ExitBootServices().
> 

Roger that.


Thanks,

-Bobby


From xen-devel-bounces@lists.xenproject.org Tue May 05 22:04:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 22:04:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW5fa-0002CR-2N; Tue, 05 May 2020 22:04:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FNY7=6T=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jW5fZ-0002CM-5I
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 22:04:05 +0000
X-Inumbo-ID: 54328e6c-8f1c-11ea-9e00-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54328e6c-8f1c-11ea-9e00-12813bfff9fa;
 Tue, 05 May 2020 22:04:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AA3aCfrNfdynb4LdrfCkVNc6OiJdtm0TYvFNv/rsb3s=; b=SnMhGGcnGDdZOCSiZyVJQ++2m
 3g1J/FxsoMVnOn663UubbjP4nbgF6cKZetgC5D98jipNGUbgdoAp3MdRDB0vPyHQazpCbDJ4Yt/1P
 PQvqru4AllPv6Q+A9SNrUGHUh2O9ghss5b9ZO7Yphx/J02c6ah0zAW6BiNiwnrYP8ayXA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW5fW-0005D8-5M; Tue, 05 May 2020 22:04:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW5fV-0003Ji-U1; Tue, 05 May 2020 22:04:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jW5fV-0003mu-TE; Tue, 05 May 2020 22:04:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150049-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150049: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=779efdbb502b38c66b774b124fa0ceed254875bd
X-Osstest-Versions-That: xen=2e3d87cc734a895ef5b486926274a178836b67a9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 05 May 2020 22:04:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150049 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150049/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  779efdbb502b38c66b774b124fa0ceed254875bd
baseline version:
 xen                  2e3d87cc734a895ef5b486926274a178836b67a9

Last test of basis   150037  2020-05-05 16:00:51 Z    0 days
Testing same since   150049  2020-05-05 20:00:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ashok Raj <ashok.raj@intel.com>
  Borislav Petkov <bp@suse.de>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Thomas Gleixner <tglx@linutronix.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   2e3d87cc73..779efdbb50  779efdbb502b38c66b774b124fa0ceed254875bd -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 05 22:34:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 05 May 2020 22:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW69K-0004fh-Ij; Tue, 05 May 2020 22:34:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uk1a=6T=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jW69J-0004fc-1i
 for xen-devel@lists.xenproject.org; Tue, 05 May 2020 22:34:49 +0000
X-Inumbo-ID: a03b458f-8f20-11ea-9e02-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a03b458f-8f20-11ea-9e02-12813bfff9fa;
 Tue, 05 May 2020 22:34:48 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 3B453206FA;
 Tue,  5 May 2020 22:34:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588718087;
 bh=8nkRRYLCCCBSFdsSfU/gwyvFbWvN5TeiE7UTyrrQ8E8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=2D0VFBOdF9u77613bgq1E/m7tMK1+wCKmjqkdAgVqhbCGq+jgvKuRGHiElux7oUnH
 zvRdbvN8tCbEFHiGAjZPjYVxSq2a+20tbyV37omS0aQcH6wfxRlfqxDogPiKol6MMo
 sCntOQNC6lUVNV01ambPbOjQ/1qDYJRS6Erdbsz8=
Date: Tue, 5 May 2020 15:34:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Roman Shaposhnik <roman@zededa.com>, jgross@suse.com, 
 boris.ostrovsky@oracle.com
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-256487737-1588716576=:14706"
Content-ID: <alpine.DEB.2.21.2005051509460.14706@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: peng.fan@nxp.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, minyard@acm.org, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-256487737-1588716576=:14706
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2005051509461.14706@sstabellini-ThinkPad-T480s>

+ Boris, Jürgen

See the crash Roman is seeing booting dom0 on the Raspberry Pi. It is
related to the recent xen dma_ops changes. Possibly the same thing
reported by Peng here:

https://marc.info/?l=linux-kernel&m=158805976230485&w=2

An in-depth analysis below.


On Mon, 4 May 2020, Roman Shaposhnik wrote:
> > > [    2.534292] Unable to handle kernel paging request at virtual
> > > address 000000000026c340
> > > [    2.542373] Mem abort info:
> > > [    2.545257]   ESR = 0x96000004
> > > [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
> > > [    2.553877]   SET = 0, FnV = 0
> > > [    2.557023]   EA = 0, S1PTW = 0
> > > [    2.560297] Data abort info:
> > > [    2.563258]   ISV = 0, ISS = 0x00000004
> > > [    2.567208]   CM = 0, WnR = 0
> > > [    2.570294] [000000000026c340] user address but active_mm is swapper
> > > [    2.576783] Internal error: Oops: 96000004 [#1] SMP
> > > [    2.581784] Modules linked in:
> > > [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted 5.6.1-default #9
> > > [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
> > > [    2.597256] Workqueue: events deferred_probe_work_func
> > > [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
> > > [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
> > > [    2.612696] lr : dma_free_attrs+0x98/0xd0
> > > [    2.616827] sp : ffff800011db3970
> > > [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
> > > [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
> > > [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
> > > [    2.636583] x23: 0000000000000000 x22: 0000000000000000
> > > [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
> > > [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
> > > [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
> > > [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
> > > [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
> > > [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
> > > [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
> > > [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
> > > [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
> > > [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
> > > [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
> > > [    2.701899] Call trace:
> > > [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
> > > [    2.709367]  dma_free_attrs+0x98/0xd0
> > > [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
> > > [    2.718146]  rpi_firmware_property+0x6c/0xb0
> > > [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
> > > [    2.726760]  platform_drv_probe+0x50/0xa0
> > > [    2.730879]  really_probe+0xd8/0x438
> > > [    2.734567]  driver_probe_device+0xdc/0x130
> > > [    2.738870]  __device_attach_driver+0x88/0x108
> > > [    2.743434]  bus_for_each_drv+0x78/0xc8
> > > [    2.747386]  __device_attach+0xd4/0x158
> > > [    2.751337]  device_initial_probe+0x10/0x18
> > > [    2.755649]  bus_probe_device+0x90/0x98
> > > [    2.759590]  deferred_probe_work_func+0x88/0xd8
> > > [    2.764244]  process_one_work+0x1f0/0x3c0
> > > [    2.768369]  worker_thread+0x138/0x570
> > > [    2.772234]  kthread+0x118/0x120
> > > [    2.775571]  ret_from_fork+0x10/0x18
> > > [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001 (f8626800)
> > > [    2.785492] ---[ end trace 4c435212e349f45f ]---
> > > [    2.793340] usb 1-1: New USB device found, idVendor=2109,
> > > idProduct=3431, bcdDevice= 4.20
> > > [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> > > [    2.808297] usb 1-1: Product: USB2.0 Hub
> > > [    2.813710] hub 1-1:1.0: USB hub found
> > > [    2.817117] hub 1-1:1.0: 4 ports detected
> > >
> > > This is bailing out right here:
> > >      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
> > >
> > > FYIW (since I modified the source to actually print what was returned
> > > right before it bails) we get:
> > >    buf[1] == 0x800000004
> > >    buf[2] == 0x00000001
> > >
> > > Status 0x800000004 is of course suspicious since it is not even listed here:
> > >     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
> > >
> > > So it appears that this DMA request path is somehow busted and it
> > > would be really nice to figure out why.
> >
> > You have actually discovered a genuine bug in the recent xen dma rework
> > in Linux. Congrats :-)
> 
> Nice! ;-)
> 
> > I am doing some guesswork here, but from what I read in the thread and
> > the information in this email I think this patch might fix the issue.
> > If it doesn't fix the issue please add a few printks in
> > drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and please let me
> > know where exactly it crashes.
> >
> >
> > diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
> > index b9cc11e887ed..ff4677ed9788 100644
> > --- a/include/xen/arm/page-coherent.h
> > +++ b/include/xen/arm/page-coherent.h
> > @@ -8,12 +8,17 @@
> >  static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
> >                 dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
> >  {
> > +       void *cpu_addr;
> > +       if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle, &cpu_addr))
> > +               return cpu_addr;
> >         return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
> >  }
> >
> >  static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
> >                 void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
> >  {
> > +       if (dma_release_from_dev_coherent(hwdev, get_order(size), cpu_addr))
> > +               return;
> >         dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> >  }
> 
> Applied the patch, but it didn't help and after printk's it turns out
> it surprisingly crashes right inside this (rather convoluted if you
> ask me) if statement:
>     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/xen/swiotlb-xen.c?h=v5.6.1#n349
> 
> So it makes sense that the patch didn't help -- we never hit that
> xen_free_coherent_pages.
 
The crash happens here:

	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
		     range_straddles_page_boundary(phys, size)) &&
	    TestClearPageXenRemapped(virt_to_page(vaddr)))
		xen_destroy_contiguous_region(phys, order);

I don't know exactly what is causing the crash. Is it the WARN_ON somehow?
Is it TestClearPageXenRemapped? Neither should cause a crash in theory.


But I do know that there are problems with that if statement on ARM. It
can trigger under either of the following conditions:

1) dev_addr + size - 1 > dma_mask
2) range_straddles_page_boundary(phys, size)


The first condition might happen after bef4d2037d214 because
dma_direct_alloc might not respect the device dma_mask. It is actually a
bug and I would like to keep the WARN_ON for that. The patch I sent
yesterday (https://marc.info/?l=xen-devel&m=158865080224504) should
solve that issue. But Roman is telling us that the crash still persists.

The second condition is completely normal and not an error on ARM,
because dom0 is 1:1 mapped. It is not an issue if the address range
straddles a page boundary. We certainly shouldn't WARN (or crash).

So, I suggest something similar to Peng's patch, appended.

Roman, does it solve your problem?


diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d27762c6f8..994ca3a4b653 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -346,9 +346,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
-		     range_straddles_page_boundary(phys, size)) &&
-	    TestClearPageXenRemapped(virt_to_page(vaddr)))
+	WARN_ON(dev_addr + size - 1 > dma_mask);
+	if (TestClearPageXenRemapped(virt_to_page(vaddr)))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
--8323329-256487737-1588716576=:14706--


From xen-devel-bounces@lists.xenproject.org Wed May 06 01:26:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 01:26:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW8pB-0007jG-4g; Wed, 06 May 2020 01:26:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U6FA=6U=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jW8p9-0007jB-7D
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 01:26:11 +0000
X-Inumbo-ID: 8fc7d0ba-8f38-11ea-9e13-12813bfff9fa
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8fc7d0ba-8f38-11ea-9e13-12813bfff9fa;
 Wed, 06 May 2020 01:26:08 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0461NDJA146859;
 Wed, 6 May 2020 01:25:56 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=RBUCu1m+DN/gBGjTacwyDSSmflDLVdgn4dNMEIao+Ks=;
 b=IkjOnp9a9drwVWnSGnYocaX1lB/CsO1H9ktPAEPRwk5/4MOfWVd1PR/Vsj5wybMO0hz2
 q52Eqh1cfqlCSCP12fcfzxvgmEqbvJnnIh/uB/RCOMUSY/YCvdzHUF88VEaiB3plAvMG
 SWhwrde4ZzRvV5mK6hBMSH51FiqsLZaY+ljDopzV9VfXYUt2sSs2D0lc3NJV9GC8qRWU
 asn04frwhD7r+FOqHjb33/M8a9AS+9Of0Y2AGo2xhBjVqOK7BMA2olwOJYmB8t4qi3UB
 M0gJbur1+VREtK0ACh4J+rZ1iosgsSx7SchfeKgFpMY23SqKh2KCVenhqvyjjLvYca+N Eg== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 30s0tmfscc-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 06 May 2020 01:25:56 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 0461NPwt133970;
 Wed, 6 May 2020 01:25:55 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by aserp3020.oracle.com with ESMTP id 30sjngfawy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 06 May 2020 01:25:55 +0000
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 0461Pr0j004305;
 Wed, 6 May 2020 01:25:53 GMT
Received: from [10.39.218.134] (/10.39.218.134)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Tue, 05 May 2020 18:25:53 -0700
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, jgross@suse.com
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <8d3f7599-3afe-1662-da08-e4a20cfa5094@oracle.com>
Date: Tue, 5 May 2020 21:25:51 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.4.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9612
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 adultscore=0 phishscore=0
 mlxlogscore=999 bulkscore=0 malwarescore=0 spamscore=0 suspectscore=2
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005060006
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9612
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 mlxscore=0
 priorityscore=1501 lowpriorityscore=0 spamscore=0 suspectscore=2
 phishscore=0 clxscore=1011 bulkscore=0 mlxlogscore=999 adultscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2003020000 definitions=main-2005060006
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: peng.fan@nxp.com, Julien Grall <julien@xen.org>, minyard@acm.org,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 5/5/20 6:34 PM, Stefano Stabellini wrote:
>
> 
> The crash happens here:
>
> 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> 		     range_straddles_page_boundary(phys, size)) &&
> 	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> 		xen_destroy_contiguous_region(phys, order);
>
> I don't know exactly what is causing the crash. Is it the WARN_ON somehow?
> Is it TestClearPageXenRemapped? Neither should cause a crash in theory.



This doesn't look like a warning to me. Would it be possible to see what
xen_swiotlb_free_coherent+0x198 corresponds to in the code? I don't know if
./scripts/faddr2line works for ARM, but could you try


./scripts/faddr2line vmlinux xen_swiotlb_free_coherent+0x198


-boris






From xen-devel-bounces@lists.xenproject.org Wed May 06 01:44:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 01:44:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW96X-0000wA-Ls; Wed, 06 May 2020 01:44:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VINC=6U=epam.com=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1jW96W-0000w4-IK
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 01:44:08 +0000
X-Inumbo-ID: 12bbb868-8f3b-11ea-9887-bc764e2007e4
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.54]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12bbb868-8f3b-11ea-9887-bc764e2007e4;
 Wed, 06 May 2020 01:44:07 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Dg0TjyqMo3ZmVj9F55z6c+JO9PA9WgdLzHJJR3pEmpRGOvVLdRLwtl53Q8clxMy64RjD+rtHu2krfS68xgx5x6aIyB0S5hS5aEVOf6FcT/yl9TQCByTYHRIx7YFnzA9njyRBUzY2KsbxHAf+JhzTvNuN+tAmNDyevUmMqB//DBES0S5edq0EzBZswfVvtCNb6t8aSrYG62W2mGe4OMuTlaXj3z9MneULUJDpE4nr2JDYJ/Y9BXX4Q7uRk+i7PozbRRTWXmwruND80yeEEdubCQ25Mjumw47bkb5mdjT2dyZeUpgR/TCTYLMsldN1ycHvRmsiathU7MIO7rOHBSol2Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fKnDljvhrbKgsAXtvfIFdp/jMPzwjBEMHTOWfMN9AuQ=;
 b=TBLOjqNjmeaSxgq4Q3vkqNNixfKHh/X769rn383jtR6vsEmBTug5x5EjybgVa7LsYkPaAohAtpnVZG283Ha/jPxXarIop748HcJOX2R99QBYEaQuxa7y2YwBRnmjJqtHvAeP7A+eNH4BTYfog2h4Le1hcHos/aP+Ds+ro9yeUm7ZjsaJzkRa4+bw1zuKUN+CtS1r9t5RzR8rfzn21BCWuBc2ZMJiEHuI60tWca3PY0gs7npLLIvN9XWT987EbEcfMQLDBbMuXWlyaAZq1n92xSgnbNSEl5FM5hooxhmnEajVIVa/GSfAkKeEyn2ICEh6M1YLZYWvfaXHorGFra3kEw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fKnDljvhrbKgsAXtvfIFdp/jMPzwjBEMHTOWfMN9AuQ=;
 b=JExpJt+0eZ84/gLrD9CQXsSq+Ew0Q2huxoyOIBroNPnp3e56mAfaN00StmtdgNsDglsOchqmxZ7GQKkOp4W4dhoYP2F750BfL82CKA1x3z4QgInBS3zHdGL3FubgDxwrOtXHeYJcrbePBM/Z/xN8YM3Ya1XXGZGcVcS1cSdz+GI=
Received: from AM0PR03MB4148.eurprd03.prod.outlook.com (2603:10a6:208:c7::10)
 by AM0PR03MB6145.eurprd03.prod.outlook.com (2603:10a6:20b:15b::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.28; Wed, 6 May
 2020 01:44:05 +0000
Received: from AM0PR03MB4148.eurprd03.prod.outlook.com
 ([fe80::84c0:df74:747c:17bf]) by AM0PR03MB4148.eurprd03.prod.outlook.com
 ([fe80::84c0:df74:747c:17bf%7]) with mapi id 15.20.2958.030; Wed, 6 May 2020
 01:44:05 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: [PATCH] optee: immediately free buffers that are released by OP-TEE
Thread-Topic: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
Thread-Index: AQHWI0fTRy+rlPvmsUKZMLlR49t2iQ==
Date: Wed, 6 May 2020 01:44:05 +0000
Message-ID: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [85.223.209.22]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 150b7557-17b5-44fc-611d-08d7f15ef651
x-ms-traffictypediagnostic: AM0PR03MB6145:|AM0PR03MB6145:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <AM0PR03MB61458D214F0253150C974F47E6A40@AM0PR03MB6145.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-forefront-prvs: 03950F25EC
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: AfDzHHT1oaA+DB/wpTL5BTxXN5432XKuzxII94Mbb7zkYDcZTuDzUv34mc0dMN3dSwJK51aUN4yqpdW7egynaH0dgLK2s+7bFyR6VEAjvVz2zGDc+yGn8jB2Ls4YkXgEPbhqwNmmt+2SAyh6BTWiMnBpu6boxzW1IwfeFKvkj6onQ2rr/bGHkXqettqna/5MfzibhRv1f57iNoCXx29rJm6VZHEiF+N7W/Is0KhcUYMnIdXH22SkbrTA2w8+DL2o9YnzyGL7OMok4oyzDgtyB2cAvTI1sSpPWwZJMX59WwdeK0YwdItTVsQj7m2plLoeH84dUmUcO51EQBig3Hgic1ZEnEmgNDlWJ+vTcCT2X3o+tQPeQuSOQhe/81htreaesteElrZCSxsTX+87y6IqR5la4HOsnzmUIT8gYaTMdPokUBG4XIZhheFxY+t0DRD8dImSUi5l+N/5dEy4CPGcxUzGubmCrEoXDItIb4S9Zu4u6HzZog6bI5yNSKrytPPo9hoyO0fPcdpr94nOnTHJtFcqHlvzNilvCbCwCdse2dI=
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:AM0PR03MB4148.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(39860400002)(366004)(396003)(136003)(376002)(346002)(33430700001)(71200400001)(33440700001)(66446008)(64756008)(316002)(2616005)(66476007)(66946007)(66556008)(26005)(54906003)(1076003)(5660300002)(6506007)(186003)(55236004)(86362001)(36756003)(6512007)(6486002)(478600001)(8676002)(6916009)(76116006)(4326008)(2906002)(8936002)(14773001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: P5y37iZRNkpjlW75UxorGWa8vNZqMOVYiJE4yXB5VZCzkVfQF6B5nCKZU+syJHTW5Xtg5BspRtR20mPak870dfUmKoRoAgNV4FoJpq1jXVslEEY5BYpLnT1gdYUtvy6uvSz8LdwWwDvnqxpLGaPC+rNokx2ekcc3CY+8iz3/GSIYaR5LsEklnW3D/WgbdsBJ7T+xYfAYlRJfp6d81vOyhAseLTboGFwWLD0VNRUwQvvuM+Rnl178RJeAGISF/RI83iL4tfR+lYhYPQM5TOBk00q3pIRosoIuzgTKODku8LguVp+jce5cQOkkC2TKWtEq2IvnKywS6MYAyzchd1RdLyWG9OzxJ8serp9bUR6DeX/r708rWh6cKQx4gHwoZs4lWZ/3lhwXkPHWtr1m7r1u94xE4DFY6qwDDAVupm+yisXqgoMS4FaadpiQ4BKSRO5xwN84qbo/wV7r4LDYQI4ilkJUgor+IW+zZseHSpXmyHdO9ipDBKgrgpxTlT09jbGxYQ+XKoiYLXKPzZQaZVzl2b/nbOCNSfZe0x5Z/8++rB/H+5JvbumB8T/p2NqPYtQoi1WpBYIJWNlls0ArYKoEn2TMFlmffIwJMjS9s7wlADOjX9UrsWgR5S/BRNGrlVFE8YFRG505+m4U+d+Sjjw4Wk25fq3Lebyw1N7d2zpplOJm0lx10ekBfPQdpjefgZVaIqpIZjyE5nmJkYPTLP0bEqZ2J3mA4WHu6gprwYU0+Q12gaXXiu2Clw+jyNz5HtlteeHESyr6yoEUBUDdCQcNmaOJgcY8SdHlVJdx6JzHrIk=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 150b7557-17b5-44fc-611d-08d7f15ef651
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 May 2020 01:44:05.7133 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: vsa1WHBaM3sSg+wC1Z7KH3dPlrN/ErEpy+zR+BmZJbwSJYEqz6Xr0H7OG3ScGENOWwF5Dvwbe1mmI6Z9jYxr7BDayk5HpMhN+gK/2IYEO98=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB6145
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The Normal World can share a buffer with OP-TEE for two reasons:
1. Some client application wants to exchange data with a TA
2. OP-TEE asks for a shared buffer for its internal needs

The second case was handled more strictly than necessary:

1. In an RPC request, OP-TEE asks for a buffer
2. NW allocates the buffer and provides it via the RPC response
3. Xen pins the pages and translates the data
4. Xen provides the buffer to OP-TEE
5. OP-TEE uses it
6. OP-TEE sends a request to free the buffer
7. NW frees the buffer and sends the RPC response
8. Xen unpins the pages and forgets about the buffer

The problem is that Xen should forget about the buffer in between stages
6 and 7. I.e. the right flow should be like this:

6. OP-TEE sends a request to free the buffer
7. Xen unpins the pages and forgets about the buffer
8. NW frees the buffer and sends the RPC response

This is because OP-TEE internally frees the buffer before sending the
"free SHM buffer" request, so we have no reason to hold a reference to
this buffer anymore. Moreover, on multiprocessor systems the NW has time
to reuse the buffer cookie for another buffer; Xen complained about this
and denied the new buffer registration. I have seen this issue while
running tests on an i.MX SoC.

So, this patch corrects that behavior by freeing the buffer earlier,
when handling the RPC return from OP-TEE.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/tee/optee.c | 24 ++++++++++++++++++++----
 1 file changed, 20 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 6a035355db..af19fc31f8 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -1099,6 +1099,26 @@ static int handle_rpc_return(struct optee_domain *ctx,
         if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
             call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a;
 
+        /*
+         * OP-TEE signals that it frees the buffer that it requested
+         * before. This is the right time for us to do the same.
+         */
+        if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
+        {
+            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
+
+            free_optee_shm_buf(ctx, cookie);
+
+            /*
+             * This should never happen. We have a bug either in
+             * OP-TEE or in the mediator.
+             */
+            if ( call->rpc_data_cookie && call->rpc_data_cookie != cookie )
+                gprintk(XENLOG_ERR,
+                        "Saved RPC cookie does not correspond to OP-TEE's (%"PRIx64" != %"PRIx64")\n",
+                        call->rpc_data_cookie, cookie);
+            call->rpc_data_cookie = 0;
+        }
         unmap_domain_page(shm_rpc->xen_arg);
     }
 
@@ -1464,10 +1484,6 @@ static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
             }
             break;
         case OPTEE_RPC_CMD_SHM_FREE:
-            free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
-            if ( call->rpc_data_cookie ==
-                 shm_rpc->xen_arg->params[0].u.value.b )
-                call->rpc_data_cookie = 0;
             break;
         default:
             break;
-- 
2.25.0


From xen-devel-bounces@lists.xenproject.org Wed May 06 01:58:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 01:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jW9Jt-0001rv-TL; Wed, 06 May 2020 01:57:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jW9Js-0001qy-Pm
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 01:57:56 +0000
X-Inumbo-ID: 01041b86-8f3d-11ea-9e14-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01041b86-8f3d-11ea-9e14-12813bfff9fa;
 Wed, 06 May 2020 01:57:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=SG6xOQB1a7vvSvOS//bYDk4ObQd1BRp6IrZxNA6MxSQ=; b=VCKB/4LlL8KATxU5kex5BnBdA
 quT+F8Q7AmD0uambqtksgMagvwPR1k4MgaO3Zi3fEoysfQ+EuL8DfPNt3H3HvWpsbLXsrWpIh5+A3
 NUULV2ncPeDECqDKR06F9x2dc5s6UQtzTIZRhaxzr8XoB0b/9G8nDiDuts5REJdEhArjE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW9Jr-0007eG-U4; Wed, 06 May 2020 01:57:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jW9Jr-0002Tx-Jg; Wed, 06 May 2020 01:57:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jW9Jr-0004rN-Fg; Wed, 06 May 2020 01:57:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150039-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.10-testing test] 150039: tolerable trouble: fail/pass/starved
 - PUSHED
X-Osstest-Failures: xen-4.10-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.10-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.10-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=b922c4431df33ed5b724e53c3f3348e470ddd349
X-Osstest-Versions-That: xen=24d62e126296b3f67dabdd49887d41d8ed69487f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 01:57:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150039 xen-4.10-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150039/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop             fail like 149676
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2 fail like 149676
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  b922c4431df33ed5b724e53c3f3348e470ddd349
baseline version:
 xen                  24d62e126296b3f67dabdd49887d41d8ed69487f

Last test of basis   149676  2020-04-15 13:44:41 Z   20 days
Testing same since   150039  2020-05-05 16:06:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   24d62e1262..b922c4431d  b922c4431df33ed5b724e53c3f3348e470ddd349 -> stable-4.10


From xen-devel-bounces@lists.xenproject.org Wed May 06 02:58:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 02:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWAGZ-0007Ci-3V; Wed, 06 May 2020 02:58:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWAGY-0007Cd-8e
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 02:58:34 +0000
X-Inumbo-ID: 757d9b9c-8f45-11ea-9e1a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 757d9b9c-8f45-11ea-9e1a-12813bfff9fa;
 Wed, 06 May 2020 02:58:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zRM7HYdQ3GWcap82iMzD8yH82Hb8Cury8NMlOPz4As4=; b=htPapxo/ZOLZJFfHXo45v50D3
 eduCZX8LpIZNh1Wz7ZM80XkEMwwxkDXRIsWoV7xmdBwq1fAABnGsLhOGiKPXFaPSV3nfHc8T0DkID
 c7dVkE3+soeuioTnl3P8OcNFmvv3wMj2AkOMm5YWT/h8d/LIeB+M7HCaH6RM9B3oHiJpg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWAGR-0000nu-Ak; Wed, 06 May 2020 02:58:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWAGR-0005Vz-2g; Wed, 06 May 2020 02:58:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWAGR-0004E9-1Y; Wed, 06 May 2020 02:58:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150038-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 150038: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:regression
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
X-Osstest-Versions-That: xen=45c90737d5f0c8bf479adcd8cb88450f1998e55c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 02:58:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150038 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150038/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop       fail REGR. vs. 149649

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop      fail blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail blocked in 149649
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149649
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149649
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149649
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149649
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
baseline version:
 xen                  45c90737d5f0c8bf479adcd8cb88450f1998e55c

Last test of basis   149649  2020-04-14 13:35:25 Z   21 days
Testing same since   150038  2020-05-05 16:06:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Apr 3 13:03:40 2020 +0100

    tools/xenstore: fix a use after free problem in xenstored
    
    Commit 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object
    twice") introduced a potential use after free problem in
    domain_cleanup(): after calling talloc_unlink() for domain->conn
    domain->conn is set to NULL. The problem is that domain is registered
    as talloc child of domain->conn, so it might be freed by the
    talloc_unlink() call.
    
    With Xenstore being single threaded there are normally no concurrent
    memory allocations running and freeing a virtual memory area normally
    doesn't result in that area no longer being accessible. A problem
    could occur only in case either a signal received results in some
    memory allocation done in the signal handler (SIGHUP is a primary
    candidate leading to reopening the log file), or in case the talloc
    framework would do some internal memory allocation during freeing of
    the memory (which would lead to clobbering of the freed domain
    structure).
    
    Fixes: 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object twice")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    (cherry picked from commit bb2a34fd740e9a26be9e2244f1a5b4cef439e5a8)
    (cherry picked from commit dc5176d0f9434e275e0be1df8d0518e243798beb)
    (cherry picked from commit a997ffe678e698ff2b4c89ae5a98661d12247fef)
    (cherry picked from commit 48e8564435aca590f1c292ab7bb1f3dbc6b75693)
    (cherry picked from commit 1e722e6971539eab4f484affd60490cbc8429951)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 06 03:23:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 03:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWAey-0001Bg-9O; Wed, 06 May 2020 03:23:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1E00=6U=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1jWAew-0001Bb-EK
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 03:23:46 +0000
X-Inumbo-ID: fd7969a6-8f48-11ea-ae69-bc764e2007e4
Received: from mail-pl1-x62d.google.com (unknown [2607:f8b0:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd7969a6-8f48-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 03:23:44 +0000 (UTC)
Received: by mail-pl1-x62d.google.com with SMTP id u22so1687475plq.12
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 20:23:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=EuSnHYhuX98oLNcKm7E5rTHeA+lvdIgafMbgv9phvdo=;
 b=r3FbSPTSndy+TMhL/jKc9Te1I1VIJuId0mtrhC7YTn+Z0V7niOlEhkWe/HhSW/ceVP
 mcJCU2Zau3YmAXABhOcmRwbod1ppyb4F9PDTeJLrh1lJDuhiM5lYoB6H0VCSp2F0k05B
 FFO1x12zH98dS2r9dlYofrPzRhMfk65nxmLJTpceu9bW4RKS5sg42UNJL5IDo74v7RJi
 nSyGbftZuY+R26jcOVuD+ltBH0xosD+cxfh/f6T5i0Mb4f1IhjYNwY6uId0joZfD3hoA
 TA4ZMtbAS4ueVUgCHo3mj+yytrhQjNhOTgtHxPWWu1Cioz6keoTKzAkjJJ8aF+LROGlC
 qnvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=EuSnHYhuX98oLNcKm7E5rTHeA+lvdIgafMbgv9phvdo=;
 b=j4TF26p3NFWT0FjTgEx4kDi4YFvdg7IvwZlD78ySdYkdEbBd6DvvegCXx5jDSz4FmC
 n2Z0gyfjLhe+HycCoXpG7DUNc5QFntE1k4QtWXLFpDH4CMhvq2OFSSlraDm4/3DtPTZl
 5Wgz9XQOec0OFSNc/0yOzt4RMIQdcBbYc1Wkl1xM1PzxIPbeEh2y4rLAIlMn9UVk1P4g
 coObizpNvFnnU/m9hpt3ocQu9WZ9kq/UQFVLQZ7r3Tgnj8RE4IfGJKBKbHMu7s/z0lmv
 fKT5kmfWBYZ8WDs+a89s92oVYoLRshL9HHanCG5luZiM7HTh6VjLovPB6EoluOBbDkH8
 HWDw==
X-Gm-Message-State: AGi0PuZYC7ZAa3/M/VKK224Iy480V28EWh8ptQC7xNgeOmnNCUdxBp9R
 35JcYodPyXcaahq2FOpUldwsePXKGAU=
X-Google-Smtp-Source: APiQypLIhQ2x3Iu/AmQZCBWZ3NO381fj1eqYibhPClVwE9gUS4NoO9/9UnU0X4Rua/m2GpKURuWTaA==
X-Received: by 2002:a17:902:b187:: with SMTP id s7mr6428301plr.0.1588735422924; 
 Tue, 05 May 2020 20:23:42 -0700 (PDT)
Received: from desktop.ice.pyrology.org
 (static-50-53-74-115.bvtn.or.frontiernet.net. [50.53.74.115])
 by smtp.gmail.com with ESMTPSA id b3sm441021pga.48.2020.05.05.20.23.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 05 May 2020 20:23:42 -0700 (PDT)
From: Christopher Clark <christopher.w.clark@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH] docs/designs: domB design document
Date: Tue,  5 May 2020 20:23:12 -0700
Message-Id: <20200506032312.878-1-christopher.w.clark@gmail.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Adam Schwalm <adam.schwalm@starlab.io>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Rich Persaud <persaur@gmail.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Adds a design document for DomB.

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---


This is a Request for Comments on this draft design document, which
describes the motivation and design for the funded development of DomB.
We invite discussion of it on this month’s Xen Community Call on the
7th of May.



 docs/designs/disaggregated-launch.md | 297 +++++++++++++++++++++++++++
 1 file changed, 297 insertions(+)
 create mode 100644 docs/designs/disaggregated-launch.md

diff --git a/docs/designs/disaggregated-launch.md b/docs/designs/disaggregated-launch.md
new file mode 100644
index 0000000000..de4db1baed
--- /dev/null
+++ b/docs/designs/disaggregated-launch.md
@@ -0,0 +1,297 @@
+# Introduction
+
+Born out of improving support for Dynamic Root of Trust for Measurement (DRTM),
+the DomB project is focused on restructuring the system launch of Xen. It
+provides a security architecture that builds on the principles of Least
+Privilege and Strong Isolation, achieving this through the disaggregation of
+system functions. DomB enables this with the introduction of a boot domain that
+works in conjunction with the hypervisor to provide the ability to launch
+multiple domains as part of host boot while maintaining a least privilege
+implementation.
+
+While the DomB project’s inception was, and continues to be, driven by a focus
+on security through disaggregation, there are multiple use cases with a
+non-security focus that would benefit from or depend upon the ability to
+launch multiple domains at host boot. This is demonstrated by the need that
+drove the implementation of the dom0less capability in the ARM branch of Xen.
+As such, the design for DomB has been developed to allow for a flexible
+solution that is reusable across multiple use cases. This should ensure that
+DomB is capable, widely exercised, comprehensively tested, and well understood
+by the Xen community.
+
+Looking across those who have expressed interest in or discussed a need for
+launching multiple domains at host boot, the majority fall into one of the
+following four use cases.
+
+  * Deprivileging Dom0: disaggregating and/or eliminating one or more of Dom0’s roles
+
+  * System partitioning: dividing a system’s resources and/or functions among a set of domains
+
+  * Low-latency boot and reboot: fast launch of the initial and possibly final set of domains
+
+  * Isolated multiplexing of singular system resources: enabling dedicated domains to manage and multiplex a system resource of which only one exists but to which every domain must believe it has exclusive access
+
+It is with this understanding that the DomB project developed its multiple
+domain boot capability for Xen. The remainder of this document is a detailed
+explanation of multiple domain boot: the objectives it strives to achieve, the
+structure behind the approach, the sequence of events in a boot, a contrast
+with ARM’s dom0less, closing with some exemplar implementations.
+
+## Terminology
+
+To help ensure clarity, the following terminology is used throughout this
+document:
+
+* __Host Boot__: the system startup of Xen using the configuration provided by the bootloader
+* __Classic Boot__: the existing host boot that ends with the launch of a single domain (Dom0)
+* __Multiple Domain Boot__: a host boot that ends with the launch of one or more domains
+* __Boot Domain__: a domain with limited privileges launched by the hypervisor during a Multiple Domain Boot that is responsible for assisting with higher level operations of the domain setup process
+* __Initial Domain__: a domain other than the boot domain that is started as part of a multiple domain boot
+* __Crash Domain__: a fallback domain that the hypervisor may launch in the event of a detectable error during the multiple domain boot process
+* __Basic Configuration__: the minimal information Xen requires to instantiate a domain instance
+* __Extended Configuration__: any configuration options for a domain beyond its Basic Configuration
+
+
+# Approach
+
+At the outset of the design of DomB a core set of objectives was established.
+These objectives were viewed as the bar to strive towards in order to minimize
+the impact on existing Xen environments.
+
+## Objectives
+
+* In general strive to maintain compatibility with existing Xen behavior
+ 
+* A default build of Xen should be capable of booting both styles of launch: Dom0 or DomB
+    - Preferred that it be managed via two KCONFIG options to govern inclusion of support for each style
+* The selection between classic boot and multiple domain boot should be automatic
+    - Preferred that it not require a kernel command line parameter for selection
+* It should not require modification to boot loaders
+* It should provide a user friendly interface for its configuration and management
+* It must provide a fallback to console access in the event of misconfiguration
+* It should be able to boot an x86 Xen environment without the need for a Dom0 domain
+
+## A top level look at DomB
+
+Before delving into DomB, a good starting point is an understanding of the
+process of creating a domain. This process starts with the core configuration,
+which is the information the hypervisor requires to make the call to
+`domain_create`, followed by the extended configuration used by the toolstack
+to provide a domain with any additional configuration information. Until the
+extended configuration is completed, a domain has access to no resources
+except its allocated vcpus and memory. The exception to this is Dom0, to which
+the hypervisor explicitly grants control of and access to all system
+resources, except those that only the hypervisor should control. This
+exception for Dom0 is driven by the system structure: a monolithic Dom0 domain
+predates the introduction of support for disaggregation into Xen, and multiple
+roles within the Xen system are assigned to Dom0 by default.
+
+The primary focus of the DomB approach is how to assign the roles
+traditionally granted to Dom0 to one or more domains at host boot. While the
+statement is simple to make, the implications are by no means trivial. This
+also explains why the DomB approach is orthogonal to the existing dom0less
+capability. The dom0less capability focuses on enabling the launch of multiple
+domains in parallel with Dom0 at host boot. A corollary of dom0less is that
+systems that don’t require Dom0 after all guest domains have started are able
+to do the host boot without a Dom0, though it should be noted that it may be
+possible to start Dom0 at a later point. With DomB, by contrast, the approach
+of separating Dom0’s roles requires the ability to launch multiple domains at
+host boot. The direct consequences of this approach are profound and provide a
+myriad of possible configurations.
+
+To enable the DomB approach a new alternative path for host boot within the
+hypervisor must be introduced. This alternative path effectively branches just
+before Dom0 construction and begins an alternate means of system construction.
+The determination of whether this alternate path should be taken is made by
+inspecting the Multiboot2 (MB2) chain. If the bootloader has loaded a specific
+configuration, as described later, Xen is able to detect that a DomB
+configuration has been provided. Once a DomB configuration is detected, this
+alternate path can be thought of as occurring in four phases: configuration
+parsing, domain creation, domain launch preparation, and boot finalization.
+
+The configuration parsing phase begins with Xen parsing the MB2 chain to
+understand the content of the multiboot modules provided. It will then load
+any microcode or XSM policy it discovers. In the domain creation phase, for
+each domain configuration Xen finds, it parses the configuration to construct
+the necessary domain definition, instantiates an instance of the domain, and
+leaves it in a paused state. When all domain configurations have been
+instantiated as domains, if one of them is flagged as the Boot Domain, that
+domain will be unpaused, starting the domain launch preparation phase. If no
+Boot Domain is defined, the domain launch preparation phase is skipped and Xen
+triggers the boot finalization phase.
+
+The domain launch preparation phase is an optional checkpoint for the
+execution of a workload-specific domain, the Boot Domain. While the Boot
+Domain is the first domain to run and has some degree of control over the
+system, similar to Dom0, it is extremely restricted in both system resource
+access and hypervisor operations. Its purpose is to:
+
+* Access the configuration provided by the bootloader
+* Finalize the configuration of the domains
+* Conduct any setup and launch related operations
+* Do an ordered unpause of domains that require an ordered start
+
+When the Boot Domain has completed, it will notify the hypervisor that it is
+done, triggering the boot finalization phase.
+
+The hypervisor handles the boot finalization phase, which is effectively a
+clean-up phase. The steps taken by the hypervisor, not necessarily in
+implementation order, are as follows:
+
+* Free the MB2 module chain
+* If a Boot Domain exists, reclaim its resources
+* Unpause any domains still in a paused state
+* Note that the Boot Domain uses a reserved domid and thus can never be respawned
+
+While the focus thus far has been on how the DomB capability will work, it is
+worth mentioning what it does not do or prevent. It does not stop or inhibit
+assigning the control domain role, which gives a domain the ability to create,
+start, stop, restart, and destroy domains. In particular it is still possible
+to construct a domain with all the privileged roles, i.e. a Dom0, with or
+without the domain id being zero. In fact, whatever limitations are imposed
+now become fully configurable, without the risk of circumvention by an
+all-privileged domain.
+
+## Structuring of DomB
+
+The structure of DomB is built around the existing capabilities of the MB2
+protocol. This approach was driven by the objective not to require
+modifications to the boot loader. As a result the DomB capability neither
+relies on nor precludes any specific BIOS boot protocol, i.e. legacy BIOS boot
+or UEFI boot. The only requirement is that the boot loader supports the MB2
+protocol.
+
+### Multiboot2
+
+The MB2 protocol has no concept of a manifest to tell the initial kernel what
+is contained in the chain, leaving it to the kernel to impose a loading
+convention, use magic number identification, or both. Unfortunately, for
+passing multiple kernels, ramdisks, and domain configurations along with the
+modules already passed, there is no sane convention that could be imposed, and
+magic number identification is nearly impossible given the objective of not
+imposing unnecessary complexity on the hypervisor.
+
+As alluded to above, a manifest is needed that describes the contents of the
+MB2 chain and how they relate within a Xen context. To address this need the
+Launch Control Module (LCM) was designed to provide such a manifest. The LCM
+was designed to have a specific set of properties:
+
+* minimize the complexity of the parsing logic required by the hypervisor
+* allow for expanding and optional configuration fragments without breaking backwards compatibility
+
+To enable automatic detection of a DomB configuration, the LCM must be the first
+MB2 module in the MB2 module chain. The LCM has a magic number in its first four
+bytes to enable the hypervisor to detect its presence. The contents of the LCM
+are a series of Tag, Length, Value (TLV) entries, which allow:
+
+* for the hypervisor to parse only those entries it understands,
+* for packing custom information for a custom boot domain,
+* the ability to use a new LCM with an older hypervisor,
+* and the ability to use an older LCM with a new hypervisor.
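The compatibility properties above fall out of the TLV walk itself. Below is a minimal sketch of what such a walk might look like; the magic value, structure layout, and tag numbers are hypothetical illustrations, not the actual LCM format:

```c
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical LCM layout: a 4-byte magic number followed by TLV entries. */
#define LCM_MAGIC 0x4d434c00u          /* placeholder value, not the real magic */

struct lcm_entry_hdr {
    uint32_t tag;                      /* entry type */
    uint32_t len;                      /* length of the value that follows */
};

enum {
    LCM_TAG_MODULES = 1,               /* MB2 module description */
    LCM_TAG_CHECKSUM = 2,              /* module checksum entry */
    LCM_TAG_DOMAIN = 3,                /* domain configuration description */
};

/* Walk the TLV entries, counting those of a given tag and skipping any
 * entry whose tag is not understood -- this is what allows a newer LCM
 * to be consumed by an older hypervisor and vice versa. */
static int lcm_count_entries(const uint8_t *lcm, size_t size, uint32_t want)
{
    size_t off = sizeof(uint32_t);     /* skip the magic number */
    uint32_t magic;
    int count = 0;

    if (size < sizeof(magic))
        return -1;
    memcpy(&magic, lcm, sizeof(magic));
    if (magic != LCM_MAGIC)
        return -1;                     /* not an LCM: fall back to Dom0 boot */

    while (off + sizeof(struct lcm_entry_hdr) <= size) {
        struct lcm_entry_hdr hdr;

        memcpy(&hdr, lcm + off, sizeof(hdr));
        off += sizeof(hdr);
        if (hdr.len > size - off)
            return -1;                 /* truncated entry */
        if (hdr.tag == want)
            count++;
        off += hdr.len;                /* unknown tags are simply skipped */
    }
    return count;
}
```

Because unknown tags are skipped by their recorded length rather than rejected, an older parser tolerates new entry types and a newer parser tolerates their absence.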
+
+In the initial implementation the provided LCM entries include an entry that
+describes each MB2 module in the chain, one or more checksum entries providing
+checksums of the MB2 modules, and one or more domain configuration description
+entries. The module description is a simple dynamic array of index and type,
+while a checksum entry provides an algorithm description and digest. The more
+intricate entry is the domain configuration description. It consists of static
+fields detailing the type of configuration it contains and indices into the
+MB2 modules for its kernel and ramdisk, allowing multiple domain
+configurations to use the same MB2 module for their kernel and/or ramdisk. The
+selected configuration type results in one of two static Basic Configurations
+for use by the hypervisor to build the domain. The last piece of information
+that can be carried in a domain configuration is an optional Extended
+Configuration, which is a free-form data buffer.
+
+### LCM Utility
+
+An objective of this project was not to require modification to bootloaders.
+To maintain that requirement a utility, lcm-tool, will be provided for
+generating the LCM. The utility will be capable of consuming a user-friendly
+description; for the initial implementation this will be JSON. When processing
+a configuration, the utility will be able to generate checksums of each MB2
+module so that a Boot Domain can consume those checksums to verify the
+integrity and correctness of the MB2 module chain.
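As a rough illustration of that record-then-verify flow, the sketch below uses FNV-1a purely as a stand-in digest; a real implementation would use whichever algorithm the checksum entry's algorithm field names (e.g. a SHA-2 family hash):

```c
#include <stdint.h>
#include <stddef.h>

/* Placeholder digest (FNV-1a) standing in for the real algorithm named by
 * a checksum entry's algorithm field -- illustration only, not collision
 * resistant and not suitable for integrity protection in practice. */
static uint64_t module_digest(const uint8_t *mod, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ull;

    for (size_t i = 0; i < len; i++) {
        h ^= mod[i];
        h *= 0x100000001b3ull;
    }
    return h;
}

/* lcm-tool records a digest per MB2 module at build time; a Boot Domain
 * recomputes it at boot and compares, rejecting a modified module chain. */
static int module_verify(const uint8_t *mod, size_t len, uint64_t recorded)
{
    return module_digest(mod, len) == recorded;
}
```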
+
+### Xen hypervisor
+
+The new host boot flow was previously discussed at a high level; the
+configuration parsing and domain creation phases are expanded upon here. The
+hypervisor will iterate through the entries in the LCM, processing each entry
+type that it understands. The MB2 module description entry is used to identify
+whether any MB2 modules contain microcode or an XSM policy. As the hypervisor
+processes domain configuration entries, it will construct each domain using
+the MB2 modules and the Basic Configuration information. Once it has iterated
+through all the entries in the LCM, if a constructed domain has the Boot
+Domain attribute it will then be unpaused. Otherwise the hypervisor will start
+the boot finalization phase.
+
+### Boot Domain
+
+Traditionally domain creation was controlled by the user within the Dom0
+environment, where custom toolstacks could be implemented to impose
+requirements on the process. The Boot Domain is a means to let the user
+maintain that control over domain creation, but within a limited-privilege
+environment. The Boot Domain will have access to the LCM and the MB2 module
+chain, along with access to a subset of the hypercall operations. When the
+Boot Domain is finished it will notify the hypervisor through a hypercall op.
+
+### Crash Domain
+
+With the existing Dom0 host boot path, when a failure occurs there are several
+assumptions that can safely be made to get the user to a console for
+troubleshooting. With the DomB host boot path those assumptions can no longer be
+made, thus a means is needed to get the user to a console in the case of a
+recoverable failure. To handle this situation DomB introduces the concept of a
+crash domain. The crash domain is configured by a domain configuration entry in
+the LCM and is the only domain that will not be unpaused at boot finalization.
+
+## A Detailed Flow of a DomB Boot
+
+Provided here is an ordered flow of a DomB boot, with logic decision points
+highlighted. Not all branch points are recorded, in particular the variety of
+error conditions that may occur.
+
+0. Hypervisor Startup:
+
+1. Inspect first MB2 module
+    a. Is the module an LCM?
+        i. YES: proceed with the DomB host boot path
+        ii. NO: proceed with a Dom0 host boot path
+
+2. Iterate through the LCM entries looking for the MB2 module description entry
+    a. Check if any of the MB2 modules are microcode or policy and if so, load
+
+3. Iterate through the LCM entries processing all domain description entries
+    a. Use the details from the Basic Configuration to call `domain_create`
+    b. Record if a domain is flagged as the Boot Domain
+    c. Record if a domain is flagged as the Crash Domain
+
+4. Was a Boot Domain created
+    a. YES:
+        i. Attach console to Boot Domain
+        ii. Unpause Boot Domain
+        iii. Goto Boot Domain (step 5)
+    b. NO: Goto Boot Finalization (step 9)
+
+5. Boot Domain:
+
+6. Boot Domain comes online and may do any of the following actions
+    a. Process the LCM
+    b. Validate the MB2 chain
+    c. Make additional configuration settings for staged domains
+    d. Unpause any precursor domains
+    e. Set any runtime configurations
+
+7. Boot Domain does any necessary cleanup
+
+8. Boot Domain may make a hypercall op call to signal it is finished
+    i. Hypervisor reclaims all Boot Domain resources
+    ii. Hypervisor records that the Boot Domain ran
+    iii. Goto Boot Finalization (step 9)
+
+9. Boot Finalization
+
+10. If a configured domain was flagged to have the console, the hypervisor (re)assigns it
+
+11. If no Boot Domain was launched, the hypervisor iterates through domains unpausing any domain not flagged as the crash domain
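The decision points above can be condensed into a small control-flow skeleton. Every name below is a hypothetical stand-in for the corresponding numbered step, with each step recorded in a trace so the ordering is visible:

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical stand-ins for the numbered steps; each appends a marker so
 * the ordering of the flow can be observed. */
static char trace[128];

static void step(const char *s)
{
    strcat(trace, s);
    strcat(trace, ";");
}

static void domb_host_boot(bool have_lcm, bool have_boot_domain)
{
    trace[0] = '\0';
    if (!have_lcm) {                   /* step 1: first MB2 module not an LCM */
        step("dom0-boot");
        return;
    }
    step("load-ucode-policy");         /* step 2: microcode / XSM policy */
    step("create-domains");            /* step 3: domain_create per entry */
    if (have_boot_domain) {            /* step 4 */
        step("unpause-boot-domain");   /* steps 5-8 run in the Boot Domain */
        step("boot-domain-done");
    }
    step("boot-finalization");         /* steps 9-10 */
    if (!have_boot_domain)
        step("unpause-all");           /* step 11: all but the crash domain */
}
```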
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 06 04:20:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 04:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWBXR-00060l-Qx; Wed, 06 May 2020 04:20:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=POZY=6U=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jWBXP-0005gD-Rk
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 04:20:03 +0000
X-Inumbo-ID: db0248b8-8f50-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db0248b8-8f50-11ea-b07b-bc764e2007e4;
 Wed, 06 May 2020 04:20:02 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id h26so344344qtu.8
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 21:20:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=v4sxVkW1RAfUO6uH6SANXyFvtNXk2seAii2bb50iF1w=;
 b=QulVLVJgrqu0wa8By35J/mLxpDQrKEk0GiTz1XCDB+MIgcxX9sxhB8k3+7/ikOmTq3
 vvmA86dgjGW6oaQvhHTpmdfN+3kP2GZxDkcchVEN6xdj9ug19X4qr8Bwy7I19vZlVjVL
 O7M957QT/nzo5Xy3gHm4wDjqOlLKH7rOYcV/Dq8X+fnZnc8vosSUYJF0f4rvojFFlNpZ
 lfEnMaCmkGXHJIznPKcAcGGhvYeKO5oq6toK6NVbymRzlJ5Wx6JLcTRPqfJOWGoQlaZh
 9i/URJ57fKMIywhf/3wH4gt2XHCAPXP/Vnl2QC49R/8Rd7FUuRHnIj2A0ZWCabMXB4u6
 vl3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=v4sxVkW1RAfUO6uH6SANXyFvtNXk2seAii2bb50iF1w=;
 b=nD57K6R/f1mvoCHWBn4w9uzalG1j9pK4WY5wmceTh/VJeRCJs6hkgxDkYGU0gBqDiN
 E+XAfIWXeBOvaqnOHGm7+iyuLFIi+uXEP7iZFuv6YR20mr05qjLC/UbGCBFSLaYb/R5u
 sBG1zKzNmGiGF6LuvdGSKziQ2jNCeiT7rq97LMDo5JUdLnhUV4buJkE27BBA7SfrhjvL
 JoDuvroTMO/NINuwCGrENhT7IhF2eMKTKRxVIm2KQi8O3wYUfcRSS52CG5Rg0EdiO465
 /cucWJCq7j8V8pT8doIRXsDsY21yuZw9EhYOVxqIyhzNA7AOAks0S51M5s/WtkaKrgT0
 0bAA==
X-Gm-Message-State: AGi0Pua2XEja2JHF3F7e4ZEQRCNIkG97ZMVxVDOe2EcC3rJwlUa/mlf4
 679ttXSJM4gA05FNLDp8Y4pUmXGZV2CL3qRw9j6z2A==
X-Google-Smtp-Source: APiQypIXC8ZUZ+RMxwQ3B8pFIIXGlPjwkiT6mxhRPyzG3SI1P9MCQ2Pvt1yIfxzBbRtrLOJzCSyxwQPnAomxQveAN08=
X-Received: by 2002:ac8:2c78:: with SMTP id e53mr6354937qta.365.1588738801846; 
 Tue, 05 May 2020 21:20:01 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
From: Roman Shaposhnik <roman@zededa.com>
Date: Tue, 5 May 2020 21:19:50 -0700
Message-ID: <CAMmSBy9vgeCw4LCtKuWaq8SDw16+pJX4F=i6gA3SN8V=BBENjA@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, peng.fan@nxp.com,
 Julien Grall <julien@xen.org>, minyard@acm.org, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 5, 2020 at 3:34 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> + Boris, Jürgen
>
> See the crash Roman is seeing booting dom0 on the Raspberry Pi. It is
> related to the recent xen dma_ops changes. Possibly the same thing
> reported by Peng here:
>
> https://marc.info/?l=linux-kernel&m=158805976230485&w=2
>
> An in-depth analysis below.
>
>
> On Mon, 4 May 2020, Roman Shaposhnik wrote:
> > > > [    2.534292] Unable to handle kernel paging request at virtual
> > > > address 000000000026c340
> > > > [    2.542373] Mem abort info:
> > > > [    2.545257]   ESR = 0x96000004
> > > > [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
> > > > [    2.553877]   SET = 0, FnV = 0
> > > > [    2.557023]   EA = 0, S1PTW = 0
> > > > [    2.560297] Data abort info:
> > > > [    2.563258]   ISV = 0, ISS = 0x00000004
> > > > [    2.567208]   CM = 0, WnR = 0
> > > > [    2.570294] [000000000026c340] user address but active_mm is swapper
> > > > [    2.576783] Internal error: Oops: 96000004 [#1] SMP
> > > > [    2.581784] Modules linked in:
> > > > [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted 5.6.1-default #9
> > > > [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
> > > > [    2.597256] Workqueue: events deferred_probe_work_func
> > > > [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
> > > > [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
> > > > [    2.612696] lr : dma_free_attrs+0x98/0xd0
> > > > [    2.616827] sp : ffff800011db3970
> > > > [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
> > > > [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
> > > > [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
> > > > [    2.636583] x23: 0000000000000000 x22: 0000000000000000
> > > > [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
> > > > [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
> > > > [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
> > > > [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
> > > > [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
> > > > [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
> > > > [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
> > > > [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
> > > > [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
> > > > [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
> > > > [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
> > > > [    2.701899] Call trace:
> > > > [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
> > > > [    2.709367]  dma_free_attrs+0x98/0xd0
> > > > [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
> > > > [    2.718146]  rpi_firmware_property+0x6c/0xb0
> > > > [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
> > > > [    2.726760]  platform_drv_probe+0x50/0xa0
> > > > [    2.730879]  really_probe+0xd8/0x438
> > > > [    2.734567]  driver_probe_device+0xdc/0x130
> > > > [    2.738870]  __device_attach_driver+0x88/0x108
> > > > [    2.743434]  bus_for_each_drv+0x78/0xc8
> > > > [    2.747386]  __device_attach+0xd4/0x158
> > > > [    2.751337]  device_initial_probe+0x10/0x18
> > > > [    2.755649]  bus_probe_device+0x90/0x98
> > > > [    2.759590]  deferred_probe_work_func+0x88/0xd8
> > > > [    2.764244]  process_one_work+0x1f0/0x3c0
> > > > [    2.768369]  worker_thread+0x138/0x570
> > > > [    2.772234]  kthread+0x118/0x120
> > > > [    2.775571]  ret_from_fork+0x10/0x18
> > > > [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001 (f8626800)
> > > > [    2.785492] ---[ end trace 4c435212e349f45f ]---
> > > > [    2.793340] usb 1-1: New USB device found, idVendor=2109,
> > > > idProduct=3431, bcdDevice= 4.20
> > > > [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> > > > [    2.808297] usb 1-1: Product: USB2.0 Hub
> > > > [    2.813710] hub 1-1:1.0: USB hub found
> > > > [    2.817117] hub 1-1:1.0: 4 ports detected
> > > >
> > > > This is bailing out right here:
> > > >      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
> > > >
> > > > FYIW (since I modified the source to actually print what was returned
> > > > right before it bails) we get:
> > > >    buf[1] == 0x800000004
> > > >    buf[2] == 0x00000001
> > > >
> > > > Status 0x800000004 is of course suspicious since it is not even listed here:
> > > >     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
> > > >
> > > > So it appears that this DMA request path is somehow busted and it
> > > > would be really nice to figure out why.
> > >
> > > You have actually discovered a genuine bug in the recent xen dma rework
> > > in Linux. Congrats :-)
> >
> > Nice! ;-)
> >
> > > I am doing some guesswork here, but from what I read in the thread and
> > > the information in this email I think this patch might fix the issue.
> > > If it doesn't fix the issue please add a few printks in
> > > drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and please let me
> > > know where exactly it crashes.
> > >
> > >
> > > diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
> > > index b9cc11e887ed..ff4677ed9788 100644
> > > --- a/include/xen/arm/page-coherent.h
> > > +++ b/include/xen/arm/page-coherent.h
> > > @@ -8,12 +8,17 @@
> > >  static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
> > >                 dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
> > >  {
> > > +       void *cpu_addr;
> > > +       if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle, &cpu_addr))
> > > +               return cpu_addr;
> > >         return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
> > >  }
> > >
> > >  static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
> > >                 void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
> > >  {
> > > +       if (dma_release_from_dev_coherent(hwdev, get_order(size), cpu_addr))
> > > +               return;
> > >         dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> > >  }
> >
> > Applied the patch, but it didn't help and after printk's it turns out
> > it surprisingly crashes right inside this (rather convoluted if you
> > ask me) if statement:
> >     https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/xen/swiotlb-xen.c?h=v5.6.1#n349
> >
> > So it makes sense that the patch didn't help -- we never hit that
> > xen_free_coherent_pages.
>
> The crash happens here:
>
>         if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
>                      range_straddles_page_boundary(phys, size)) &&
>             TestClearPageXenRemapped(virt_to_page(vaddr)))
>                 xen_destroy_contiguous_region(phys, order);
>
> I don't know exactly what is causing the crash. Is it the WARN_ON somehow?
> Is it TestClearPageXenRemapped? Neither should cause a crash in theory.
>
>
> But I do know that there are problems with that if statement on ARM. It
> can trigger for one of the following conditions:
>
> 1) dev_addr + size - 1 > dma_mask
> 2) range_straddles_page_boundary(phys, size)
>
>
> The first condition might happen after bef4d2037d214 because
> dma_direct_alloc might not respect the device dma_mask. It is actually a
> bug and I would like to keep the WARN_ON for that. The patch I sent
> yesterday (https://marc.info/?l=xen-devel&m=158865080224504) should
> solve that issue. But Roman is telling us that the crash still persists.
>
> The second condition is completely normal and not an error on ARM
> because dom0 is 1:1 mapped. It is not an issue if the address range is
> straddling a page boundary. We certainly shouldn't WARN (or crash).
>
> So, I suggest something similar to Peng's patch, appended.
>
> Roman, does it solve your problem?
>
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index b6d27762c6f8..994ca3a4b653 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -346,9 +346,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>         /* Convert the size to actually allocated. */
>         size = 1UL << (order + XEN_PAGE_SHIFT);
>
> -       if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> -                    range_straddles_page_boundary(phys, size)) &&
> -           TestClearPageXenRemapped(virt_to_page(vaddr)))
> +       WARN_ON(dev_addr + size - 1 > dma_mask);
> +       if (TestClearPageXenRemapped(virt_to_page(vaddr)))
>                 xen_destroy_contiguous_region(phys, order);
>
>         xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);


Ok, so "the plot thickens" it seems ;-)

It definitely crashes inside the
TestClearPageXenRemapped(virt_to_page(vaddr)) statement.

Now, to test which part of it -- I unpacked the statement, which also
let me see what page virt_to_page(vaddr) returns, by doing:
    struct page *page = virt_to_page(vaddr);
Turns out the value of the page (as per %px) is 000000000026a340
(which doesn't seem odd to me -- but maybe it will to you, Stefano).

That address, however, doesn't seem to be mapped -- when I try adding a simple:
    printk(KERN_CRIT "page flags %ld\n", page->flags);
it crashes trying to dereference the page.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed May 06 05:12:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 05:12:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWCM9-00026O-TE; Wed, 06 May 2020 05:12:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3H5S=6U=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jWCM8-00026G-B6
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 05:12:28 +0000
X-Inumbo-ID: 2d595d52-8f58-11ea-9e1c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d595d52-8f58-11ea-9e1c-12813bfff9fa;
 Wed, 06 May 2020 05:12:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E896DACD5;
 Wed,  6 May 2020 05:12:28 +0000 (UTC)
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: Arnd Bergmann <arnd@arndb.de>
References: <20200505141546.824573-1-arnd@arndb.de>
 <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
 <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
 <48893239-dde9-4e94-040d-859f4348816d@suse.com>
 <CAK8P3a2_7+_a_cwDK1cwfrJX4azQJhd_Y0xB18cCUn6=p7fVsg@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2c6e4b36-6618-1889-55c4-16eeb1ef6f57@suse.com>
Date: Wed, 6 May 2020 07:12:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAK8P3a2_7+_a_cwDK1cwfrJX4azQJhd_Y0xB18cCUn6=p7fVsg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.20 22:57, Arnd Bergmann wrote:
> On Tue, May 5, 2020 at 6:02 PM Jürgen Groß <jgross@suse.com> wrote:
>> On 05.05.20 17:01, Arnd Bergmann wrote:
>>> On Tue, May 5, 2020 at 4:34 PM Jürgen Groß <jgross@suse.com> wrote:
>>>> On 05.05.20 16:15, Arnd Bergmann wrote:
>>>
>>> I considered that as well, and don't really mind either way. I think it does
>>> get a bit ugly whatever we do. If you prefer the union, I can respin the
>>> patch that way.
>>
>> Hmm, thinking more about it I think the real clean solution would be to
>> extend struct map_ring_valloc_hvm to cover the pv case, too, to add the
>> map and unmap arrays (possibly as a union) to it and to allocate it
>> dynamically instead of having it on the stack.
>>
>> Would you be fine doing this?
> 
> This is a little more complex than I'd want to do without doing any testing
> (and no, I don't want to do the testing either) ;-)
> 
> It does sound like a better approach though.

I take it that you are fine with me writing the patch and adding you as
"Reported-by:"?


Juergen


From xen-devel-bounces@lists.xenproject.org Wed May 06 05:42:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 05:42:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWCol-0004Y6-8U; Wed, 06 May 2020 05:42:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3H5S=6U=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jWCok-0004Y1-HC
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 05:42:02 +0000
X-Inumbo-ID: 4f06062c-8f5c-11ea-9e1d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f06062c-8f5c-11ea-9e1d-12813bfff9fa;
 Wed, 06 May 2020 05:42:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 50553AA7C;
 Wed,  6 May 2020 05:42:03 +0000 (UTC)
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, boris.ostrovsky@oracle.com
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
Date: Wed, 6 May 2020 07:41:58 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: peng.fan@nxp.com, Julien Grall <julien@xen.org>, minyard@acm.org,
 jeff.kubascik@dornerworks.com, Julien Grall <julien.grall@arm.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.05.20 00:34, Stefano Stabellini wrote:
> + Boris, Jürgen
> 
> See the crash Roman is seeing booting dom0 on the Raspberry Pi. It is
> related to the recent xen dma_ops changes. Possibly the same thing
> reported by Peng here:
> 
> https://marc.info/?l=linux-kernel&m=158805976230485&w=2
> 
> An in-depth analysis below.
> 
> 
> On Mon, 4 May 2020, Roman Shaposhnik wrote:
>>>> [    2.534292] Unable to handle kernel paging request at virtual
>>>> address 000000000026c340
>>>> [    2.542373] Mem abort info:
>>>> [    2.545257]   ESR = 0x96000004
>>>> [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
>>>> [    2.553877]   SET = 0, FnV = 0
>>>> [    2.557023]   EA = 0, S1PTW = 0
>>>> [    2.560297] Data abort info:
>>>> [    2.563258]   ISV = 0, ISS = 0x00000004
>>>> [    2.567208]   CM = 0, WnR = 0
>>>> [    2.570294] [000000000026c340] user address but active_mm is swapper
>>>> [    2.576783] Internal error: Oops: 96000004 [#1] SMP
>>>> [    2.581784] Modules linked in:
>>>> [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted 5.6.1-default #9
>>>> [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
>>>> [    2.597256] Workqueue: events deferred_probe_work_func
>>>> [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
>>>> [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
>>>> [    2.612696] lr : dma_free_attrs+0x98/0xd0
>>>> [    2.616827] sp : ffff800011db3970
>>>> [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
>>>> [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
>>>> [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
>>>> [    2.636583] x23: 0000000000000000 x22: 0000000000000000
>>>> [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
>>>> [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
>>>> [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
>>>> [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
>>>> [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
>>>> [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
>>>> [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
>>>> [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
>>>> [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
>>>> [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
>>>> [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
>>>> [    2.701899] Call trace:
>>>> [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
>>>> [    2.709367]  dma_free_attrs+0x98/0xd0
>>>> [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
>>>> [    2.718146]  rpi_firmware_property+0x6c/0xb0
>>>> [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
>>>> [    2.726760]  platform_drv_probe+0x50/0xa0
>>>> [    2.730879]  really_probe+0xd8/0x438
>>>> [    2.734567]  driver_probe_device+0xdc/0x130
>>>> [    2.738870]  __device_attach_driver+0x88/0x108
>>>> [    2.743434]  bus_for_each_drv+0x78/0xc8
>>>> [    2.747386]  __device_attach+0xd4/0x158
>>>> [    2.751337]  device_initial_probe+0x10/0x18
>>>> [    2.755649]  bus_probe_device+0x90/0x98
>>>> [    2.759590]  deferred_probe_work_func+0x88/0xd8
>>>> [    2.764244]  process_one_work+0x1f0/0x3c0
>>>> [    2.768369]  worker_thread+0x138/0x570
>>>> [    2.772234]  kthread+0x118/0x120
>>>> [    2.775571]  ret_from_fork+0x10/0x18
>>>> [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001 (f8626800)
>>>> [    2.785492] ---[ end trace 4c435212e349f45f ]---
>>>> [    2.793340] usb 1-1: New USB device found, idVendor=2109,
>>>> idProduct=3431, bcdDevice= 4.20
>>>> [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
>>>> [    2.808297] usb 1-1: Product: USB2.0 Hub
>>>> [    2.813710] hub 1-1:1.0: USB hub found
>>>> [    2.817117] hub 1-1:1.0: 4 ports detected
>>>>
>>>> This is bailing out right here:
>>>>       https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
>>>>
>>>> FYIW (since I modified the source to actually print what was returned
>>>> right before it bails) we get:
>>>>     buf[1] == 0x800000004
>>>>     buf[2] == 0x00000001
>>>>
>>>> Status 0x800000004 is of course suspicious since it is not even listed here:
>>>>      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
>>>>
>>>> So it appears that this DMA request path is somehow busted and it
>>>> would be really nice to figure out why.
>>>
>>> You have actually discovered a genuine bug in the recent xen dma rework
>>> in Linux. Congrats :-)
>>
>> Nice! ;-)
>>
>>> I am doing some guesswork here, but from what I read in the thread and
>>> the information in this email I think this patch might fix the issue.
>>> If it doesn't fix the issue please add a few printks in
>>> drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and please let me
>>> know where exactly it crashes.
>>>
>>>
>>> diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
>>> index b9cc11e887ed..ff4677ed9788 100644
>>> --- a/include/xen/arm/page-coherent.h
>>> +++ b/include/xen/arm/page-coherent.h
>>> @@ -8,12 +8,17 @@
>>>   static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
>>>                  dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
>>>   {
>>> +       void *cpu_addr;
>>> +       if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle, &cpu_addr))
>>> +               return cpu_addr;
>>>          return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
>>>   }
>>>
>>>   static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
>>>                  void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
>>>   {
>>> +       if (dma_release_from_dev_coherent(hwdev, get_order(size), cpu_addr))
>>> +               return;
>>>          dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
>>>   }
>>
>> Applied the patch, but it didn't help and after printk's it turns out
>> it surprisingly crashes right inside this (rather convoluted if you
>> ask me) if statement:
>>      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/xen/swiotlb-xen.c?h=v5.6.1#n349
>>
>> So it makes sense that the patch didn't help -- we never hit that
>> xen_free_coherent_pages.
>   
> The crash happens here:
> 
> 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> 		     range_straddles_page_boundary(phys, size)) &&
> 	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> 		xen_destroy_contiguous_region(phys, order);
> 
> I don't know exactly what is causing the crash. Is it the WARN_ON somehow?
> Is it TestClearPageXenRemapped? Neither should cause a crash in theory.
> 
> 
> But I do know that there are problems with that if statement on ARM. It
> can trigger for one of the following conditions:
> 
> 1) dev_addr + size - 1 > dma_mask
> 2) range_straddles_page_boundary(phys, size)
> 
> 
> The first condition might happen after bef4d2037d214 because
> dma_direct_alloc might not respect the device dma_mask. It is actually a
> bug and I would like to keep the WARN_ON for that. The patch I sent
> yesterday (https://marc.info/?l=xen-devel&m=158865080224504) should
> solve that issue. But Roman is telling us that the crash still persists.
> 
> The second condition is completely normal and not an error on ARM
> because dom0 is 1:1 mapped. It is not an issue if the address range is
> straddling a page boundary. We certainly shouldn't WARN (or crash).
> 
> So, I suggest something similar to Peng's patch, appended.
> 
> Roman, does it solve your problem?
> 
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index b6d27762c6f8..994ca3a4b653 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -346,9 +346,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>   	/* Convert the size to actually allocated. */
>   	size = 1UL << (order + XEN_PAGE_SHIFT);
>   
> -	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> -		     range_straddles_page_boundary(phys, size)) &&
> -	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> +	WARN_ON(dev_addr + size - 1 > dma_mask);
> +	if (TestClearPageXenRemapped(virt_to_page(vaddr)))
>   		xen_destroy_contiguous_region(phys, order);

This is a very bad idea for x86. Calling xen_destroy_contiguous_region()
in cases where it shouldn't be called causes subtle bugs in memory
management, like pages becoming visible twice, resulting in weird
memory inconsistencies.

What you want here is probably an architecture-specific test.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed May 06 06:40:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 06:40:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWDit-0000hh-R3; Wed, 06 May 2020 06:40:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWDis-0000UY-Iv
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 06:40:02 +0000
X-Inumbo-ID: 68eb97fc-8f64-11ea-9e1d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68eb97fc-8f64-11ea-9e1d-12813bfff9fa;
 Wed, 06 May 2020 06:40:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=85euq4PY5hn0XBah8fSk0292YHaeIe7YTbdnNugYj2k=; b=1O2XiVYkctOsU3aQTZ5JFpiLZ
 0b12fOX3WeFHUh/0Nobj1W3OZ6bapuHD2+zNPg8U3FIWxzvgLeOBl0M/ZFnNLJis8++CV5y5P61Wn
 l2NwUPAWS3c3Zr3pE7bPBuXK8FjLydBDqJSnPiEdaetIJR4VN9Cyz427ukWFyErkXinjk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWDiq-0005PX-LA; Wed, 06 May 2020 06:40:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWDiq-0002fP-Am; Wed, 06 May 2020 06:40:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWDiq-0007T2-9E; Wed, 06 May 2020 06:40:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150041-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 150041: tolerable trouble: fail/pass/starved
 - PUSHED
X-Osstest-Failures: xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=e43fc14ec58329813af876ed3b30899a04d65a08
X-Osstest-Versions-That: xen=e6a2681148382e9227f54b70a5df8e895914c877
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 06:40:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150041 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150041/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    17 guest-localmigrate/x10       fail  like 149845
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  e43fc14ec58329813af876ed3b30899a04d65a08
baseline version:
 xen                  e6a2681148382e9227f54b70a5df8e895914c877

Last test of basis   149845  2020-04-28 03:00:06 Z    8 days
Testing same since   150041  2020-05-05 16:06:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <julien@xen.org>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e6a2681148..e43fc14ec5  e43fc14ec58329813af876ed3b30899a04d65a08 -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Wed May 06 06:59:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 06:59:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWE1E-0002Aa-34; Wed, 06 May 2020 06:59:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=oCMl=6U=kernel.org=ardb@srs-us1.protection.inumbo.net>)
 id 1jWE1D-0002AV-3m
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 06:58:59 +0000
X-Inumbo-ID: 0eb5494c-8f67-11ea-b07b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0eb5494c-8f67-11ea-b07b-bc764e2007e4;
 Wed, 06 May 2020 06:58:58 +0000 (UTC)
Received: from mail-il1-f175.google.com (mail-il1-f175.google.com
 [209.85.166.175])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 8F22320936
 for <xen-devel@lists.xenproject.org>; Wed,  6 May 2020 06:58:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588748337;
 bh=Ks479yn5aVQr+JYUmqEOjDXZVAqGpQ/rIdxivLx95fA=;
 h=References:In-Reply-To:From:Date:Subject:To:Cc:From;
 b=Z75NbGVXlZKdfDB8TaKIII1u2Z0wfX2NGPFIXlUCzJ4CQMU2iN/hD+l5KiD8CjdUz
 HJahgNa3Ga4lwyGTpK+xgNFl9AGf4G+hP/EE1LIFpHa8v3YhYyBDWnTaq4ZAkkkqws
 5D6ThZRL99DujMDWEG2+I0ML55Q57SXpKaaJ5GJY=
Received: by mail-il1-f175.google.com with SMTP id f82so738559ilh.8
 for <xen-devel@lists.xenproject.org>; Tue, 05 May 2020 23:58:57 -0700 (PDT)
X-Gm-Message-State: AGi0PuY6d/YoSCn786aw7YPr0dNHW0hL+VGxk6OzuUuwG/SAZgWbe9Qd
 tpnwf3qmGh2T/LuYuxLriqdoWtQw5il59jcjkDs=
X-Google-Smtp-Source: APiQypJ+j9WB0CXDzEMS0djQWwsuQmQ1YFMfpWuKDz5YblY1xozgDCyQueyoHt/K75Od17jx0p9Uopjbk7a9keIA0ls=
X-Received: by 2002:a92:39dd:: with SMTP id h90mr7636723ilf.80.1588748336958; 
 Tue, 05 May 2020 23:58:56 -0700 (PDT)
MIME-Version: 1.0
References: <20200429225108.GA54201@bobbye-pc>
 <ebdd7b4a-767b-1b72-a344-78b190f42ceb@suse.com>
 <20200430111501.336wte64pwsfptze@tomti.i.net-space.pl>
 <CAMj1kXF1vRtqbGS-eptB682h1xJ8LYQi74YaTZgM1nyYcpFsYA@mail.gmail.com>
 <20200505215833.GB301373@piano>
In-Reply-To: <20200505215833.GB301373@piano>
From: Ard Biesheuvel <ardb@kernel.org>
Date: Wed, 6 May 2020 08:58:46 +0200
X-Gmail-Original-Message-ID: <CAMj1kXH1TsHKS9wS4br8_YYYXh-6sWes=2vAK7_MZG7RRu0YFA@mail.gmail.com>
Message-ID: <CAMj1kXH1TsHKS9wS4br8_YYYXh-6sWes=2vAK7_MZG7RRu0YFA@mail.gmail.com>
Subject: Re: [RFC] UEFI Secure Boot on Xen Hosts
To: Bobby Eshleman <bobbyeshleman@gmail.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: michal.zygowski@3mdeb.com, Daniel Kiper <daniel.kiper@oracle.com>,
 krystian.hebel@3mdeb.com, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, olivier.lambert@vates.fr, piotr.krol@3mdeb.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 5 May 2020 at 23:58, Bobby Eshleman <bobbyeshleman@gmail.com> wrote:
>
> On Thu, Apr 30, 2020 at 01:41:12PM +0200, Ard Biesheuvel wrote:
> > On Thu, 30 Apr 2020 at 13:15, Daniel Kiper <daniel.kiper@oracle.com> wrote:
> > > Anyway, the advantage of this new protocol is that it uses UEFI API to
> > > load and execute PE kernel and its modules. This means that it will be
> > > architecture independent. However, IIRC, if we want to add new modules
> > > types to the Xen then we have to teach all bootloaders in the wild about
> > > new GUIDs. Ard, am I correct?
> > >
> >
> > Well, if you are passing multiple files that are not interchangeable,
> > each bootloader will need to do something, right? It could be another
> > GUID, or we could extend the initrd media device path to take
> > discriminators.
> >
> > So today, we have
> >
> > VendorMedia(5568e427-68fc-4f3d-ac74-ca555231cc68)
> >
> > which identifies /the/ initrd on Linux. As long as this keeps working
> > as intended, I have no objections extending this to
> >
> > VendorMedia(5568e427-68fc-4f3d-ac74-ca555231cc68)/File(...)
> >
> > etc, if we can agree that omitting the File() part means the unnamed
> > initrd, and that L"initrd" is reserved as a file name. That way, you
> > can choose any abstract file name you want, but please note that the
> > loader still needs to provide some kind of mapping of how these
> > abstract file names relate to actual files selected by the user.
>
> This seems reasonable to me and I can't think of any good reason right
> now, if we go down this path, to not just extend the initrd media device
> path (as opposed to creating one that is Xen-specific).  It definitely
> beats having a GUID per boot module since the number of modules changes
> per Xen deployment, so there would need to be some range of GUIDs
> representing modules specifically for Xen, and some rules as to how they
> are mapped to real files.
>
> It does seem simpler to ask the loader for "dom0's kernel" or "dom1's
> initrd" and to have the loader use these references to grab real files.
>

Yes. And note that using a single GUID + File component is easier on
the implementation too: the protocol implementation has to be
registered only once, and the single callback that exists will be
invoked with different values for 'FilePath', corresponding with
different values for File(). For example, in [0], this maps to the
check at line 120, where we currently only consider the
'end-of-device-path' terminator type to be valid, but this could
easily be extended to parse the contents of a subsequent file node and
resolve it to grab some actual contents.

[0] https://github.com/u-boot/u-boot/commit/ec80b4735a593961fe701cc3a5d717d4739b0fd0
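The single-GUID-plus-File() scheme described above can be modeled in a few
lines of C. This is a standalone sketch, not the actual edk2 or U-Boot
implementation: the node layout follows the UEFI device path wire format,
but the function and constant names here are illustrative only.

```c
#include <stdint.h>
#include <string.h>

/* UEFI device path node header (wire format: 4-byte header, payload after). */
struct dp_hdr {
    uint8_t  type;
    uint8_t  subtype;
    uint16_t length;   /* total node length, including this header */
};

#define DP_TYPE_MEDIA   0x04
#define DP_TYPE_END     0x7f
#define DP_SUB_VENDOR   0x03   /* vendor-defined media node (carries a GUID) */
#define DP_SUB_FILEPATH 0x04   /* file path media node (UTF-16 string) */

/* 5568e427-68fc-4f3d-ac74-ca555231cc68 in GUID mixed-endian byte order. */
static const uint8_t initrd_guid[16] = {
    0x27, 0xe4, 0x68, 0x55, 0xfc, 0x68, 0x3d, 0x4f,
    0xac, 0x74, 0xca, 0x55, 0x52, 0x31, 0xcc, 0x68,
};

enum initrd_path { NOT_OURS = 0, UNNAMED_INITRD, NAMED_FILE };

/*
 * Classify a device path per the scheme discussed in the thread: a
 * VendorMedia node carrying the initrd GUID, followed either directly by
 * the end node (the unnamed initrd) or by a File() node naming a module.
 */
static enum initrd_path classify(const uint8_t *p, const uint16_t **name)
{
    struct dp_hdr h;

    memcpy(&h, p, sizeof(h));
    if (h.type != DP_TYPE_MEDIA || h.subtype != DP_SUB_VENDOR ||
        h.length != sizeof(h) + 16 ||
        memcmp(p + sizeof(h), initrd_guid, 16))
        return NOT_OURS;

    p += h.length;
    memcpy(&h, p, sizeof(h));
    if (h.type == DP_TYPE_END)
        return UNNAMED_INITRD;          /* omitted File() part */
    if (h.type == DP_TYPE_MEDIA && h.subtype == DP_SUB_FILEPATH) {
        *name = (const uint16_t *)(p + sizeof(h));  /* abstract file name */
        return NAMED_FILE;
    }
    return NOT_OURS;
}
```

A loader following this scheme would register its LoadFile2-style protocol
once and dispatch on the classification, mapping abstract names (e.g. a
hypothetical L"dom0-kernel") to the actual files selected by the user.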


> > One thing to keep in mind, though, is that this protocol goes away
> > after ExitBootServices().
> >
>
> Roger that.
>
>
> Thanks,
>
> -Bobby


From xen-devel-bounces@lists.xenproject.org Wed May 06 07:09:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 07:09:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWEB0-00037c-2u; Wed, 06 May 2020 07:09:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=raAm=6U=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jWEAy-00037X-NM
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 07:09:04 +0000
X-Inumbo-ID: 7733b64c-8f68-11ea-9e1f-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7733b64c-8f68-11ea-9e1f-12813bfff9fa;
 Wed, 06 May 2020 07:09:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588748943;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=1mvcrLmH/R/bgzMfV8ZlI0HANXQ1/5k5Kq3wxVMcWZc=;
 b=EnS1UHSq8Q/qfCNj6w7XV+le2R+mQmruCs+r2svIS92cErmZhkADtvv/
 zCcRzkcknwKxbxx4UQ+1mCUv15gq8YnMY71dlACK/2q2koDIQ2qyHTYky
 ojY741cP+MAUINUGBfBOrHbMXjXM4k//9NaOhG+VU3lsaHdMCVolTzu2S o=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: 8Lq6aG8/yWSZuXBMccfeU/gD7OsL6KqCiFrUa6LlHklnpqH2yeqFTJjfVFNpwp5TQdEmxdk6eD
 lclFMFelQEPyGbaEtku8M6zAYw/uVeS6HXzPWVC0BF9mby1dGPNbeiXiR5yaZB4p/Ev2dYceV4
 VdPKMn2kft02F+123SHPNKLYjhdfmxxSJgVksOwHHl0bVXB2Bnd39EoUwTpYjnR39EUVOkTEOX
 MGQpWoz9Dt6jxkr9fWAII/5zusL+E4u7pybc5lJGtefYQ47IYAMfViSIQ6qCDg9ZD4Xjp1oZ/i
 0XM=
X-SBRS: 2.7
X-MesageID: 17105430
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,358,1583211600"; d="scan'208";a="17105430"
Date: Wed, 6 May 2020 09:08:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/svm: Use flush-by-asid when available
Message-ID: <20200506070852.GE1353@Air-de-Roger>
References: <20200505182539.12247-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200505182539.12247-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 07:25:39PM +0100, Andrew Cooper wrote:
> AMD Fam15h processors introduced the flush-by-asid feature, for more
> fine-grained flushing.
> 
> Flushing everything, including ASID 0 (i.e. Xen context), is an unnecessarily
> large hammer, and is never necessary when only guest TLBs need
> invalidating.
> 
> When available, use TLB_CTRL_FLUSH_ASID in preference to TLB_CTRL_FLUSH_ALL.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I should also look into restricting the usage of FLUSH_HVM_ASID_CORE
and instead performing more fine-grained per-vCPU flushes when possible,
since FLUSH_HVM_ASID_CORE resets the pCPU ASID generation, forcing a
new ASID to be allocated for all vCPUs running on that pCPU.

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> The APM currently describes tlb_control encoding 1 as "Flush entire
> TLB (Should be used only by legacy hypervisors.)".  AMD have agreed that this
> is misleading and should say "legacy hardware" instead.

AFAICT, using TLB_CTRL_FLUSH_ASID on hardware that doesn't support the
feature has not been confirmed to be safe? I.e. TLB_CTRL_FLUSH_ALL is a
subset of TLB_CTRL_FLUSH_ASID from a bitmap PoV, so if those extra bits
were ignored on older hardware, it would be safe to use the latter
unconditionally.

> This change does come with a minor observed perf improvement on Fam17h
> hardware, of ~0.6s over ~22s for my XTF pagewalk test.  This test will spot
> TLB flushing issues, but isn't optimal for spotting the perf increase from
> better flushing.  There were no observed differences for Fam15h, but this
> could simply mean that the measured code footprint was larger than the TLB on
> this CPU.
> ---
>  xen/arch/x86/hvm/svm/asid.c       | 9 ++++++---
>  xen/include/asm-x86/hvm/svm/svm.h | 1 +
>  2 files changed, 7 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/asid.c b/xen/arch/x86/hvm/svm/asid.c
> index 9be90058c7..ab06dd3f3a 100644
> --- a/xen/arch/x86/hvm/svm/asid.c
> +++ b/xen/arch/x86/hvm/svm/asid.c
> @@ -18,6 +18,7 @@
>  #include <asm/amd.h>
>  #include <asm/hvm/nestedhvm.h>
>  #include <asm/hvm/svm/asid.h>
> +#include <asm/hvm/svm/svm.h>
>  
>  void svm_asid_init(const struct cpuinfo_x86 *c)
>  {
> @@ -47,15 +48,17 @@ void svm_asid_handle_vmrun(void)
>      if ( p_asid->asid == 0 )
>      {
>          vmcb_set_guest_asid(vmcb, 1);
> -        /* TODO: investigate using TLB_CTRL_FLUSH_ASID here instead. */
> -        vmcb->tlb_control = TLB_CTRL_FLUSH_ALL;
> +        vmcb->tlb_control =
> +            cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID : TLB_CTRL_FLUSH_ALL;
>          return;
>      }
>  
>      if ( vmcb_get_guest_asid(vmcb) != p_asid->asid )
>          vmcb_set_guest_asid(vmcb, p_asid->asid);
>  
> -    vmcb->tlb_control = need_flush ? TLB_CTRL_FLUSH_ALL : TLB_CTRL_NO_FLUSH;
> +    vmcb->tlb_control =
> +        !need_flush ? TLB_CTRL_NO_FLUSH :
> +        cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID : TLB_CTRL_FLUSH_ALL;

Since this code structure is used in two places I would consider
locally introducing something like:

#define TLB_CTRL_FLUSH_CMD (cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID \
                                                    : TLB_CTRL_FLUSH_ALL)

To abstract it away.
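For illustration, with such a helper both assignments collapse to a single ternary on need_flush. A minimal self-contained sketch (the enum values follow the APM tlb_control encodings; cpu_has_svm_flushbyasid is modelled here as a plain variable rather than Xen's real CPUID-derived predicate):

```c
#include <stdbool.h>

/* tlb_control encodings from the AMD APM: 0 = no flush, 1 = flush
 * entire TLB, 3 = flush this guest's ASID. */
enum tlb_ctrl {
    TLB_CTRL_NO_FLUSH   = 0,
    TLB_CTRL_FLUSH_ALL  = 1,
    TLB_CTRL_FLUSH_ASID = 3,
};

/* Stand-in for Xen's cpu_has_svm_flushbyasid feature test. */
static bool cpu_has_svm_flushbyasid;

/* The suggested helper: pick the finest flush the hardware supports. */
#define TLB_CTRL_FLUSH_CMD (cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID \
                                                    : TLB_CTRL_FLUSH_ALL)

/* Both call sites in the quoted hunk then reduce to this shape. */
static enum tlb_ctrl pick_tlb_control(bool need_flush)
{
    return need_flush ? TLB_CTRL_FLUSH_CMD : TLB_CTRL_NO_FLUSH;
}
```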

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 06 08:55:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 08:55:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWFpL-00040l-AJ; Wed, 06 May 2020 08:54:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dr4E=6U=nxp.com=peng.fan@srs-us1.protection.inumbo.net>)
 id 1jWFpJ-00040f-GA
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 08:54:49 +0000
X-Inumbo-ID: 3c0ab19d-8f77-11ea-9e23-12813bfff9fa
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.68]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c0ab19d-8f77-11ea-9e23-12813bfff9fa;
 Wed, 06 May 2020 08:54:46 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J7/kZLlwnOZLqjLjrkXLcxFU0NE3TKPV4QBNa2TYZNftcczB1iWIgt29QSDFn522KunPSkbezHrjbVg+kiyryCLLZDGQQahN6IFW9i6ZJDDtdL6r3YoSKp4f0dDBCR3WZyDFZRPULHudWFVM3V6jL7fsuJZduwYPfh5gUnG2SydASHkeTIud3U9RH42jzqB4ryj0JmYbxVmz7Z58XM9Z4NQ4o48uyiCitAsd8APTgD4nqtDgtnu63o2kHg6ycF65jsHxnm1yY8yoMhginfS39NSvMuoi4NaB2GaXAHLcICfcntNZiX+2A3AMCkuendNQ9W6xL8jbkIyjZWvBKEuJwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=diqS4PCpcmyNey4bv0bHlyYUExILkB6FLm8ijjN2J3Q=;
 b=S+67vkNIW8k2k9XJR9aI0yUX4Uehr/DO2Zhl3ShN6nxt41DzhOSNLqrAiPcidtQTRE7HjZCpz0SB3r3agdsLzeEO73xM+t0BU9Ljx21nFyRudVWoycFvT/fPeBXeu/ERINupfkTQ61AJnUy+F3OYoTRmdC4KgCTl2f/Bb3qm7nXlQphltmji43foyD17RQ7tEerc3qzgZ6SPxmv2B0NQe8zopqQ/NmI8/j/j6HLD7hOcvVh0oA6WY2uwJiXp14MTyOpJwzH44W6SdR8Zj/qFVAMvQpZHkO7UZwHt6nvCeZWQpue/QmLJu4t0zkC+yU8RrOEnlTIObjlqav4gx3TP5Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=nxp.com; dmarc=pass action=none header.from=nxp.com; dkim=pass
 header.d=nxp.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=nxp.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=diqS4PCpcmyNey4bv0bHlyYUExILkB6FLm8ijjN2J3Q=;
 b=P2JgxUZXzqU6jkJd9RqnwF54I7J7Hj206LkdLEMQHUHMv8HJhi+lf3gScSDu8CIg5NOLwWNBo3nOxWF2MRdNafKIa7g8gdifO62D6E+24N3nejJjzKl95Q6iSKDse3iwKKEKnpEKAMPLQtSNa53DUn4mYXiqZQwkGdnfaAH4pOA=
Received: from DB6PR0402MB2760.eurprd04.prod.outlook.com (2603:10a6:4:a1::14)
 by DB6PR0402MB2805.eurprd04.prod.outlook.com (2603:10a6:4:94::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.20; Wed, 6 May
 2020 08:54:44 +0000
Received: from DB6PR0402MB2760.eurprd04.prod.outlook.com
 ([fe80::d17b:d767:19c3:b871]) by DB6PR0402MB2760.eurprd04.prod.outlook.com
 ([fe80::d17b:d767:19c3:b871%6]) with mapi id 15.20.2979.028; Wed, 6 May 2020
 08:54:44 +0000
From: Peng Fan <peng.fan@nxp.com>
To: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Roman Shaposhnik <roman@zededa.com>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>
Subject: RE: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Thread-Topic: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Thread-Index: AQHWH18dQjCrgLweTkufDj1dUDc6u6iTHE+AgADgr4CAABO6gIAAnxoAgABhlICAAAOggIACz9iAgACIxQCAAHTCgIAAHTyAgAEcYgCAAHdcAIAANOLw
Date: Wed, 6 May 2020 08:54:44 +0000
Message-ID: <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
In-Reply-To: <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=nxp.com;
x-originating-ip: [119.31.174.71]
x-ms-publictraffictype: Email
x-ms-office365-filtering-ht: Tenant
x-ms-office365-filtering-correlation-id: 1fcbb6b9-6be8-48c2-bbe3-08d7f19b1f74
x-ms-traffictypediagnostic: DB6PR0402MB2805:
x-microsoft-antispam-prvs: <DB6PR0402MB2805C5F06195858367FDBDD288A40@DB6PR0402MB2805.eurprd04.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-forefront-prvs: 03950F25EC
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:DB6PR0402MB2760.eurprd04.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(376002)(346002)(136003)(366004)(39860400002)(396003)(33430700001)(33440700001)(55016002)(76116006)(71200400001)(66574014)(66446008)(4326008)(44832011)(64756008)(66556008)(478600001)(66476007)(5660300002)(186003)(26005)(66946007)(9686003)(2906002)(45080400002)(86362001)(966005)(7696005)(110136005)(7416002)(30864003)(83080400001)(54906003)(33656002)(8936002)(52536014)(53546011)(316002)(8676002)(6506007);
 DIR:OUT; SFP:1101; 
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: nxp.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1fcbb6b9-6be8-48c2-bbe3-08d7f19b1f74
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 May 2020 08:54:44.3984 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 686ea1d3-bc2b-4c6f-a92c-d99c5c301635
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: flHF2NzccAypnlfZdZBEhYK+LYW+Ucgn5jXUnZik/5PxFosgElOOMFDNK8Qd+6g6NPLxUGa8Mve+4Uv78cfKKg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0402MB2805
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, "minyard@acm.org" <minyard@acm.org>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

PiBTdWJqZWN0OiBSZTogVHJvdWJsZXMgcnVubmluZyBYZW4gb24gUmFzcGJlcnJ5IFBpIDQgd2l0
aCA1LjYuMSBEb21VDQo+IA0KPiBPbiAwNi4wNS4yMCAwMDozNCwgU3RlZmFubyBTdGFiZWxsaW5p
IHdyb3RlOg0KPiA+ICsgQm9yaXMsIErDvHJnZW4NCj4gPg0KPiA+IFNlZSB0aGUgY3Jhc2ggUm9t
YW4gaXMgc2VlaW5nIGJvb3RpbmcgZG9tMCBvbiB0aGUgUmFzcGJlcnJ5IFBpLiBJdCBpcw0KPiA+
IHJlbGF0ZWQgdG8gdGhlIHJlY2VudCB4ZW4gZG1hX29wcyBjaGFuZ2VzLiBQb3NzaWJseSB0aGUg
c2FtZSB0aGluZw0KPiA+IHJlcG9ydGVkIGJ5IFBlbmcgaGVyZToNCg0KWWVzLiBJdCBpcyBzYW1l
IGlzc3VlLg0KDQo+ID4NCj4gPiBodHRwczovL2V1cjAxLnNhZmVsaW5rcy5wcm90ZWN0aW9uLm91
dGxvb2suY29tLz91cmw9aHR0cHMlM0ElMkYlMkZtYXJjDQo+ID4gLmluZm8lMkYlM0ZsJTNEbGlu
dXgta2VybmVsJTI2bSUzRDE1ODgwNTk3NjIzMDQ4NSUyNnclM0QyJmFtDQo+IHA7ZGF0YT0wMiUN
Cj4gPg0KPiA3QzAxJTdDcGVuZy5mYW4lNDBueHAuY29tJTdDYWI5OGIyZGI5NDQ4NDE0MWE4YWQw
OGQ3ZjE4MDMzNzIlNw0KPiBDNjg2ZWExZA0KPiA+DQo+IDNiYzJiNGM2ZmE5MmNkOTljNWMzMDE2
MzUlN0MwJTdDMCU3QzYzNzI0MzQwNTIzMzU3MjM1NCZhbXA7c2QNCj4gYXRhPTBZcjVoDQo+ID4g
Umc0JTJGdUFwc0JwVHdJSUw0U3RaVSUyRlVBNTVvUDVMZm5mbXRqNEhnJTNEJmFtcDtyZXNlcnZl
ZD0wDQo+ID4NCj4gPiBBbiBpbi1kZXB0aCBhbmFseXNpcyBiZWxvdy4NCj4gPg0KPiA+DQo+ID4g
T24gTW9uLCA0IE1heSAyMDIwLCBSb21hbiBTaGFwb3NobmlrIHdyb3RlOg0KPiA+Pj4+IFsgICAg
Mi41MzQyOTJdIFVuYWJsZSB0byBoYW5kbGUga2VybmVsIHBhZ2luZyByZXF1ZXN0IGF0IHZpcnR1
YWwNCj4gPj4+PiBhZGRyZXNzIDAwMDAwMDAwMDAyNmMzNDANCj4gPj4+PiBbICAgIDIuNTQyMzcz
XSBNZW0gYWJvcnQgaW5mbzoNCj4gPj4+PiBbICAgIDIuNTQ1MjU3XSAgIEVTUiA9IDB4OTYwMDAw
MDQNCj4gPj4+PiBbICAgIDIuNTQ4NDIxXSAgIEVDID0gMHgyNTogREFCVCAoY3VycmVudCBFTCks
IElMID0gMzIgYml0cw0KPiA+Pj4+IFsgICAgMi41NTM4NzddICAgU0VUID0gMCwgRm5WID0gMA0K
PiA+Pj4+IFsgICAgMi41NTcwMjNdICAgRUEgPSAwLCBTMVBUVyA9IDANCj4gPj4+PiBbICAgIDIu
NTYwMjk3XSBEYXRhIGFib3J0IGluZm86DQo+ID4+Pj4gWyAgICAyLjU2MzI1OF0gICBJU1YgPSAw
LCBJU1MgPSAweDAwMDAwMDA0DQo+ID4+Pj4gWyAgICAyLjU2NzIwOF0gICBDTSA9IDAsIFduUiA9
IDANCj4gPj4+PiBbICAgIDIuNTcwMjk0XSBbMDAwMDAwMDAwMDI2YzM0MF0gdXNlciBhZGRyZXNz
IGJ1dCBhY3RpdmVfbW0gaXMNCj4gc3dhcHBlcg0KPiA+Pj4+IFsgICAgMi41NzY3ODNdIEludGVy
bmFsIGVycm9yOiBPb3BzOiA5NjAwMDAwNCBbIzFdIFNNUA0KPiA+Pj4+IFsgICAgMi41ODE3ODRd
IE1vZHVsZXMgbGlua2VkIGluOg0KPiA+Pj4+IFsgICAgMi41ODQ5NTBdIENQVTogMyBQSUQ6IDEz
NSBDb21tOiBrd29ya2VyLzM6MSBOb3QgdGFpbnRlZA0KPiA1LjYuMS1kZWZhdWx0ICM5DQo+ID4+
Pj4gWyAgICAyLjU5MTk3MF0gSGFyZHdhcmUgbmFtZTogUmFzcGJlcnJ5IFBpIDQgTW9kZWwgQiAo
RFQpDQo+ID4+Pj4gWyAgICAyLjU5NzI1Nl0gV29ya3F1ZXVlOiBldmVudHMgZGVmZXJyZWRfcHJv
YmVfd29ya19mdW5jDQo+ID4+Pj4gWyAgICAyLjYwMjUwOV0gcHN0YXRlOiA2MDAwMDAwNSAoblpD
diBkYWlmIC1QQU4gLVVBTykNCj4gPj4+PiBbICAgIDIuNjA3NDMxXSBwYyA6IHhlbl9zd2lvdGxi
X2ZyZWVfY29oZXJlbnQrMHgxOTgvMHgxZDgNCj4gPj4+PiBbICAgIDIuNjEyNjk2XSBsciA6IGRt
YV9mcmVlX2F0dHJzKzB4OTgvMHhkMA0KPiA+Pj4+IFsgICAgMi42MTY4MjddIHNwIDogZmZmZjgw
MDAxMWRiMzk3MA0KPiA+Pj4+IFsgICAgMi42MjAyNDJdIHgyOTogZmZmZjgwMDAxMWRiMzk3MCB4
Mjg6IDAwMDAwMDAwMDAwZjdiMDANCj4gPj4+PiBbICAgIDIuNjI1Njk1XSB4Mjc6IDAwMDAwMDAw
MDAwMDEwMDAgeDI2OiBmZmZmMDAwMDM3YjY4NDEwDQo+ID4+Pj4gWyAgICAyLjYzMTEyOV0geDI1
OiAwMDAwMDAwMDAwMDAwMDAxIHgyNDogMDAwMDAwMDBmN2IwMDAwMA0KPiA+Pj4+IFsgICAgMi42
MzY1ODNdIHgyMzogMDAwMDAwMDAwMDAwMDAwMCB4MjI6IDAwMDAwMDAwMDAwMDAwMDANCj4gPj4+
PiBbICAgIDIuNjQyMDE3XSB4MjE6IGZmZmY4MDAwMTFiMGQwMDAgeDIwOiBmZmZmODAwMDExNzli
NTQ4DQo+ID4+Pj4gWyAgICAyLjY0NzQ2MV0geDE5OiBmZmZmMDAwMDM3YjY4NDEwIHgxODogMDAw
MDAwMDAwMDAwMDAxMA0KPiA+Pj4+IFsgICAgMi42NTI5MDVdIHgxNzogZmZmZjAwMDAzNWQ2NmEw
MCB4MTY6IDAwMDAwMDAwZGVhZGJlZWYNCj4gPj4+PiBbICAgIDIuNjU4MzQ4XSB4MTU6IGZmZmZm
ZmZmZmZmZmZmZmYgeDE0OiBmZmZmODAwMDExNzliNTQ4DQo+ID4+Pj4gWyAgICAyLjY2Mzc5Ml0g
eDEzOiBmZmZmODAwMDkxZGIzN2I3IHgxMjogZmZmZjgwMDAxMWRiMzdiZg0KPiA+Pj4+IFsgICAg
Mi42NjkyMzZdIHgxMTogZmZmZjgwMDAxMTdjNzAwMCB4MTA6IGZmZmY4MDAwMTFkYjM3NDANCj4g
Pj4+PiBbICAgIDIuNjc0NjgwXSB4OSA6IDAwMDAwMDAwZmZmZmZmZDAgeDggOiBmZmZmODAwMDEw
NzFlOTgwDQo+ID4+Pj4gWyAgICAyLjY4MDEyNF0geDcgOiAwMDAwMDAwMDAwMDAwMTMyIHg2IDog
ZmZmZjgwMDAxMTk3YTZhYg0KPiA+Pj4+IFsgICAgMi42ODU1NjhdIHg1IDogMDAwMDAwMDAwMDAw
MDAwMCB4NCA6IDAwMDAwMDAwMDAwMDAwMDANCj4gPj4+PiBbICAgIDIuNjkxMDEyXSB4MyA6IDAw
MDAwMDAwZjdiMDAwMDAgeDIgOiBmZmZmZmRmZmZmZTAwMDAwDQo+ID4+Pj4gWyAgICAyLjY5NjQ2
NV0geDEgOiAwMDAwMDAwMDAwMjZjMzQwIHgwIDogMDAwMDAyMDAwMDQ2YzM0MA0KPiA+Pj4+IFsg
ICAgMi43MDE4OTldIENhbGwgdHJhY2U6DQo+ID4+Pj4gWyAgICAyLjcwNDQ1Ml0gIHhlbl9zd2lv
dGxiX2ZyZWVfY29oZXJlbnQrMHgxOTgvMHgxZDgNCj4gPj4+PiBbICAgIDIuNzA5MzY3XSAgZG1h
X2ZyZWVfYXR0cnMrMHg5OC8weGQwDQo+ID4+Pj4gWyAgICAyLjcxMzE0M10gIHJwaV9maXJtd2Fy
ZV9wcm9wZXJ0eV9saXN0KzB4MWU0LzB4MjQwDQo+ID4+Pj4gWyAgICAyLjcxODE0Nl0gIHJwaV9m
aXJtd2FyZV9wcm9wZXJ0eSsweDZjLzB4YjANCj4gPj4+PiBbICAgIDIuNzIyNTM1XSAgcnBpX2Zp
cm13YXJlX3Byb2JlKzB4ZjAvMHgxZTANCj4gPj4+PiBbICAgIDIuNzI2NzYwXSAgcGxhdGZvcm1f
ZHJ2X3Byb2JlKzB4NTAvMHhhMA0KPiA+Pj4+IFsgICAgMi43MzA4NzldICByZWFsbHlfcHJvYmUr
MHhkOC8weDQzOA0KPiA+Pj4+IFsgICAgMi43MzQ1NjddICBkcml2ZXJfcHJvYmVfZGV2aWNlKzB4
ZGMvMHgxMzANCj4gPj4+PiBbICAgIDIuNzM4ODcwXSAgX19kZXZpY2VfYXR0YWNoX2RyaXZlcisw
eDg4LzB4MTA4DQo+ID4+Pj4gWyAgICAyLjc0MzQzNF0gIGJ1c19mb3JfZWFjaF9kcnYrMHg3OC8w
eGM4DQo+ID4+Pj4gWyAgICAyLjc0NzM4Nl0gIF9fZGV2aWNlX2F0dGFjaCsweGQ0LzB4MTU4DQo+
ID4+Pj4gWyAgICAyLjc1MTMzN10gIGRldmljZV9pbml0aWFsX3Byb2JlKzB4MTAvMHgxOA0KPiA+
Pj4+IFsgICAgMi43NTU2NDldICBidXNfcHJvYmVfZGV2aWNlKzB4OTAvMHg5OA0KPiA+Pj4+IFsg
ICAgMi43NTk1OTBdICBkZWZlcnJlZF9wcm9iZV93b3JrX2Z1bmMrMHg4OC8weGQ4DQo+ID4+Pj4g
WyAgICAyLjc2NDI0NF0gIHByb2Nlc3Nfb25lX3dvcmsrMHgxZjAvMHgzYzANCj4gPj4+PiBbICAg
IDIuNzY4MzY5XSAgd29ya2VyX3RocmVhZCsweDEzOC8weDU3MA0KPiA+Pj4+IFsgICAgMi43NzIy
MzRdICBrdGhyZWFkKzB4MTE4LzB4MTIwDQo+ID4+Pj4gWyAgICAyLjc3NTU3MV0gIHJldF9mcm9t
X2ZvcmsrMHgxMC8weDE4DQo+ID4+Pj4gWyAgICAyLjc3OTI2M10gQ29kZTogZDM0Y2ZjMDAgZjJk
ZmJmZTIgZDM3YWU0MDAgOGIwMjAwMDENCj4gKGY4NjI2ODAwKQ0KPiA+Pj4+IFsgICAgMi43ODU0
OTJdIC0tLVsgZW5kIHRyYWNlIDRjNDM1MjEyZTM0OWY0NWYgXS0tLQ0KPiA+Pj4+IFsgICAgMi43
OTMzNDBdIHVzYiAxLTE6IE5ldyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0yMTA5LA0KPiA+
Pj4+IGlkUHJvZHVjdD0zNDMxLCBiY2REZXZpY2U9IDQuMjANCj4gPj4+PiBbICAgIDIuODAxMDM4
XSB1c2IgMS0xOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MCwgUHJvZHVjdD0xLA0KPiBT
ZXJpYWxOdW1iZXI9MA0KPiA+Pj4+IFsgICAgMi44MDgyOTddIHVzYiAxLTE6IFByb2R1Y3Q6IFVT
QjIuMCBIdWINCj4gPj4+PiBbICAgIDIuODEzNzEwXSBodWIgMS0xOjEuMDogVVNCIGh1YiBmb3Vu
ZA0KPiA+Pj4+IFsgICAgMi44MTcxMTddIGh1YiAxLTE6MS4wOiA0IHBvcnRzIGRldGVjdGVkDQo+
ID4+Pj4NCj4gPj4+PiBUaGlzIGlzIGJhaWxpbmcgb3V0IHJpZ2h0IGhlcmU6DQo+ID4+Pj4NCj4g
Pj4+PiBodHRwczovL2V1cjAxLnNhZmVsaW5rcy5wcm90ZWN0aW9uLm91dGxvb2suY29tLz91cmw9
aHR0cHMlM0ElMkYlMkZnDQo+ID4+Pj4NCj4gaXQua2VybmVsLm9yZyUyRnB1YiUyRnNjbSUyRmxp
bnV4JTJGa2VybmVsJTJGZ2l0JTJGc3RhYmxlJTJGbGludXguZw0KPiA+Pj4+DQo+IGl0JTJGdHJl
ZSUyRmRyaXZlcnMlMkZmaXJtd2FyZSUyRnJhc3BiZXJyeXBpLmMlM0ZoJTNEdjUuNi4xJTIzbjEy
NQ0KPiAmDQo+ID4+Pj4NCj4gYW1wO2RhdGE9MDIlN0MwMSU3Q3BlbmcuZmFuJTQwbnhwLmNvbSU3
Q2FiOThiMmRiOTQ0ODQxNDFhOGFkMA0KPiA4ZDdmMTgNCj4gPj4+Pg0KPiAwMzM3MiU3QzY4NmVh
MWQzYmMyYjRjNmZhOTJjZDk5YzVjMzAxNjM1JTdDMCU3QzAlN0M2MzcyNDM0MDUyDQo+IDMzNTcy
Mw0KPiA+Pj4+DQo+IDU0JmFtcDtzZGF0YT1oMWR5SGtiJTJGc2lmVXFIM1owbTN1SUljcVVoWFZ3
aEhTJTJGdCUyRlZ2aWclMkZvDQo+IHU0JTNEDQo+ID4+Pj4gJmFtcDtyZXNlcnZlZD0wDQo+ID4+
Pj4NCj4gPj4+PiBGWUlXIChzaW5jZSBJIG1vZGlmaWVkIHRoZSBzb3VyY2UgdG8gYWN0dWFsbHkg
cHJpbnQgd2hhdCB3YXMNCj4gPj4+PiByZXR1cm5lZCByaWdodCBiZWZvcmUgaXQgYmFpbHMpIHdl
IGdldDoNCj4gPj4+PiAgICAgYnVmWzFdID09IDB4ODAwMDAwMDA0DQo+ID4+Pj4gICAgIGJ1Zlsy
XSA9PSAweDAwMDAwMDAxDQo+ID4+Pj4NCj4gPj4+PiBTdGF0dXMgMHg4MDAwMDAwMDQgaXMgb2Yg
Y291cnNlIHN1c3BpY2lvdXMgc2luY2UgaXQgaXMgbm90IGV2ZW4gbGlzdGVkDQo+IGhlcmU6DQo+
ID4+Pj4NCj4gPj4+PiBodHRwczovL2V1cjAxLnNhZmVsaW5rcy5wcm90ZWN0aW9uLm91dGxvb2su
Y29tLz91cmw9aHR0cHMlM0ElMkYlMkZnDQo+ID4+Pj4NCj4gaXQua2VybmVsLm9yZyUyRnB1YiUy
RnNjbSUyRmxpbnV4JTJGa2VybmVsJTJGZ2l0JTJGc3RhYmxlJTJGbGludXguZw0KPiA+Pj4+DQo+
IGl0JTJGdHJlZSUyRmluY2x1ZGUlMkZzb2MlMkZiY20yODM1JTJGcmFzcGJlcnJ5cGktZmlybXdh
cmUuaCUyM24xNA0KPiAmDQo+ID4+Pj4NCj4gYW1wO2RhdGE9MDIlN0MwMSU3Q3BlbmcuZmFuJTQw
bnhwLmNvbSU3Q2FiOThiMmRiOTQ0ODQxNDFhOGFkMA0KPiA4ZDdmMTgNCj4gPj4+Pg0KPiAwMzM3
MiU3QzY4NmVhMWQzYmMyYjRjNmZhOTJjZDk5YzVjMzAxNjM1JTdDMCU3QzAlN0M2MzcyNDM0MDUy
DQo+IDMzNTcyMw0KPiA+Pj4+DQo+IDU0JmFtcDtzZGF0YT0zeU1tJTJGdWpIQ1ZmJTJGaWdwTkxF
OGhjQldWY3ZkR3JaR3Y1VHVxZU16TVYwDQo+IFUlM0QmYW1wDQo+ID4+Pj4gO3Jlc2VydmVkPTAN
Cj4gPj4+Pg0KPiA+Pj4+IFNvIGl0IGFwcGVhcnMgdGhhdCB0aGlzIERNQSByZXF1ZXN0IHBhdGgg
aXMgc29tZWhvdyBidXN0ZWQgYW5kIGl0DQo+ID4+Pj4gd291bGQgYmUgcmVhbGx5IG5pY2UgdG8g
ZmlndXJlIG91dCB3aHkuDQo+ID4+Pg0KPiA+Pj4gWW91IGhhdmUgYWN0dWFsbHkgZGlzY292ZXJl
ZCBhIGdlbnVpbmUgYnVnIGluIHRoZSByZWNlbnQgeGVuIGRtYQ0KPiA+Pj4gcmV3b3JrIGluIExp
bnV4LiBDb25ncmF0cyA6LSkNCj4gPj4NCj4gPj4gTmljZSEgOy0pDQo+ID4+DQo+ID4+PiBJIGFt
IGRvaW5nIHNvbWUgZ3Vlc3N3b3JrIGhlcmUsIGJ1dCBmcm9tIHdoYXQgSSByZWFkIGluIHRoZSB0
aHJlYWQNCj4gPj4+IGFuZCB0aGUgaW5mb3JtYXRpb24gaW4gdGhpcyBlbWFpbCBJIHRoaW5rIHRo
aXMgcGF0Y2ggbWlnaHQgZml4IHRoZSBpc3N1ZS4NCj4gPj4+IElmIGl0IGRvZXNuJ3QgZml4IHRo
ZSBpc3N1ZSBwbGVhc2UgYWRkIGEgZmV3IHByaW50a3MgaW4NCj4gPj4+IGRyaXZlcnMveGVuL3N3
aW90bGIteGVuLmM6eGVuX3N3aW90bGJfZnJlZV9jb2hlcmVudCBhbmQgcGxlYXNlIGxldA0KPiA+
Pj4gbWUga25vdyB3aGVyZSBleGFjdGx5IGl0IGNyYXNoZXMuDQo+ID4+Pg0KPiA+Pj4NCj4gPj4+
IGRpZmYgLS1naXQgYS9pbmNsdWRlL3hlbi9hcm0vcGFnZS1jb2hlcmVudC5oDQo+ID4+PiBiL2lu
Y2x1ZGUveGVuL2FybS9wYWdlLWNvaGVyZW50LmggaW5kZXggYjljYzExZTg4N2VkLi5mZjQ2Nzdl
ZDk3ODgNCj4gPj4+IDEwMDY0NA0KPiA+Pj4gLS0tIGEvaW5jbHVkZS94ZW4vYXJtL3BhZ2UtY29o
ZXJlbnQuaA0KPiA+Pj4gKysrIGIvaW5jbHVkZS94ZW4vYXJtL3BhZ2UtY29oZXJlbnQuaA0KPiA+
Pj4gQEAgLTgsMTIgKzgsMTcgQEANCj4gPj4+ICAgc3RhdGljIGlubGluZSB2b2lkICp4ZW5fYWxs
b2NfY29oZXJlbnRfcGFnZXMoc3RydWN0IGRldmljZSAqaHdkZXYsDQo+IHNpemVfdCBzaXplLA0K
PiA+Pj4gICAgICAgICAgICAgICAgICBkbWFfYWRkcl90ICpkbWFfaGFuZGxlLCBnZnBfdCBmbGFn
cywgdW5zaWduZWQNCj4gbG9uZyBhdHRycykNCj4gPj4+ICAgew0KPiA+Pj4gKyAgICAgICB2b2lk
ICpjcHVfYWRkcjsNCj4gPj4+ICsgICAgICAgaWYgKGRtYV9hbGxvY19mcm9tX2Rldl9jb2hlcmVu
dChod2Rldiwgc2l6ZSwgZG1hX2hhbmRsZSwNCj4gJmNwdV9hZGRyKSkNCj4gPj4+ICsgICAgICAg
ICAgICAgICByZXR1cm4gY3B1X2FkZHI7DQo+ID4+PiAgICAgICAgICByZXR1cm4gZG1hX2RpcmVj
dF9hbGxvYyhod2Rldiwgc2l6ZSwgZG1hX2hhbmRsZSwgZmxhZ3MsDQo+IGF0dHJzKTsNCj4gPj4+
ICAgfQ0KPiA+Pj4NCj4gPj4+ICAgc3RhdGljIGlubGluZSB2b2lkIHhlbl9mcmVlX2NvaGVyZW50
X3BhZ2VzKHN0cnVjdCBkZXZpY2UgKmh3ZGV2LA0KPiBzaXplX3Qgc2l6ZSwNCj4gPj4+ICAgICAg
ICAgICAgICAgICAgdm9pZCAqY3B1X2FkZHIsIGRtYV9hZGRyX3QgZG1hX2hhbmRsZSwNCj4gdW5z
aWduZWQgbG9uZyBhdHRycykNCj4gPj4+ICAgew0KPiA+Pj4gKyAgICAgICBpZiAoZG1hX3JlbGVh
c2VfZnJvbV9kZXZfY29oZXJlbnQoaHdkZXYsIGdldF9vcmRlcihzaXplKSwNCj4gY3B1X2FkZHIp
KQ0KPiA+Pj4gKyAgICAgICAgICAgICAgIHJldHVybjsNCj4gPj4+ICAgICAgICAgIGRtYV9kaXJl
Y3RfZnJlZShod2Rldiwgc2l6ZSwgY3B1X2FkZHIsIGRtYV9oYW5kbGUsIGF0dHJzKTsNCj4gPj4+
ICAgfQ0KPiA+Pg0KPiA+PiBBcHBsaWVkIHRoZSBwYXRjaCwgYnV0IGl0IGRpZG4ndCBoZWxwIGFu
ZCBhZnRlciBwcmludGsncyBpdCB0dXJucyBvdXQNCj4gPj4gaXQgc3VycHJpc2luZ2x5IGNyYXNo
ZXMgcmlnaHQgaW5zaWRlIHRoaXMgKHJhdGhlciBjb252b2x1dGVkIGlmIHlvdQ0KPiA+PiBhc2sg
bWUpIGlmIHN0YXRlbWVudDoNCj4gPj4NCj4gPj4gaHR0cHM6Ly9ldXIwMS5zYWZlbGlua3MucHJv
dGVjdGlvbi5vdXRsb29rLmNvbS8/dXJsPWh0dHBzJTNBJTJGJTJGZ2l0DQo+ID4+IC5rZXJuZWwu
b3JnJTJGcHViJTJGc2NtJTJGbGludXglMkZrZXJuZWwlMkZnaXQlMkZzdGFibGUlMkZsaW51eC5n
aQ0KPiB0JTINCj4gPj4NCj4gRnRyZWUlMkZkcml2ZXJzJTJGeGVuJTJGc3dpb3RsYi14ZW4uYyUz
RmglM0R2NS42LjElMjNuMzQ5JmFtcDtkYXQNCj4gYT0wMg0KPiA+PiAlN0MwMSU3Q3BlbmcuZmFu
JTQwbnhwLmNvbSU3Q2FiOThiMmRiOTQ0ODQxNDFhOGFkMDhkN2YxODAzMw0KPiA3MiU3QzY4NmVh
DQo+ID4+DQo+IDFkM2JjMmI0YzZmYTkyY2Q5OWM1YzMwMTYzNSU3QzAlN0MwJTdDNjM3MjQzNDA1
MjMzNTcyMzU0JmFtcDsNCj4gc2RhdGE9RnUNCj4gPj4NCj4gQldHQUVnJTJGa2JzbllJSUdIbWlJ
Q1RxeSUyQmNnWnM3ViUyQk1uZUp1bSUyQlR0cyUzRCZhbXA7cmVzDQo+IGVydmVkPTANCj4gPj4N
Cj4gPj4gU28gaXQgbWFrZXMgc2Vuc2UgdGhhdCB0aGUgcGF0Y2ggZGlkbid0IGhlbHAgLS0gd2Ug
bmV2ZXIgaGl0IHRoYXQNCj4gPj4geGVuX2ZyZWVfY29oZXJlbnRfcGFnZXMuDQo+ID4NCj4gPiBU
aGUgY3Jhc2ggaGFwcGVucyBoZXJlOg0KPiA+DQo+ID4gCWlmICghV0FSTl9PTigoZGV2X2FkZHIg
KyBzaXplIC0gMSA+IGRtYV9tYXNrKSB8fA0KPiA+IAkJICAgICByYW5nZV9zdHJhZGRsZXNfcGFn
ZV9ib3VuZGFyeShwaHlzLCBzaXplKSkgJiYNCj4gPiAJICAgIFRlc3RDbGVhclBhZ2VYZW5SZW1h
cHBlZCh2aXJ0X3RvX3BhZ2UodmFkZHIpKSkNCj4gPiAJCXhlbl9kZXN0cm95X2NvbnRpZ3VvdXNf
cmVnaW9uKHBoeXMsIG9yZGVyKTsNCj4gPg0KPiA+IEkgZG9uJ3Qga25vdyBleGFjdGx5IHdoYXQg
aXMgY2F1c2luZyB0aGUgY3Jhc2guIElzIGl0IHRoZSBXQVJOX09ODQo+IHNvbWVob3c/DQo+ID4g
SXMgaXQgVGVzdENsZWFyUGFnZVhlblJlbWFwcGVkPyBOZWl0aGVyIHNob3VsZCBjYXVzZSBhIGNy
YXNoIGluIHRoZW9yeS4NCj4gPg0KPiA+DQo+ID4gQnV0IEkgZG8ga25vdyB0aGF0IHRoZXJlIGFy
ZSBwcm9ibGVtcyB3aXRoIHRoYXQgaWYgc3RhdGVtZW50IG9uIEFSTS4NCj4gPiBJdCBjYW4gdHJp
Z2dlciBmb3Igb25lIG9mIHRoZSBmb2xsb3dpbmcgY29uZGl0aW9uczoNCj4gPg0KPiA+IDEpIGRl
dl9hZGRyICsgc2l6ZSAtIDEgPiBkbWFfbWFzaw0KPiA+IDIpIHJhbmdlX3N0cmFkZGxlc19wYWdl
X2JvdW5kYXJ5KHBoeXMsIHNpemUpDQoNCg0KZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL3N3aW90
bGIteGVuLmMgYi9kcml2ZXJzL3hlbi9zd2lvdGxiLXhlbi5jDQppbmRleCBiZDNhMTBkZmFjMTUu
LjMzYjAyN2NiMGIyYSAxMDA2NDQNCi0tLSBhL2RyaXZlcnMveGVuL3N3aW90bGIteGVuLmMNCisr
KyBiL2RyaXZlcnMveGVuL3N3aW90bGIteGVuLmMNCkBAIC0zNDYsNiArMzQ2LDcgQEAgeGVuX3N3
aW90bGJfZnJlZV9jb2hlcmVudChzdHJ1Y3QgZGV2aWNlICpod2Rldiwgc2l6ZV90IHNpemUsIHZv
aWQgKnZhZGRyLA0KICAgICAgICAvKiBDb252ZXJ0IHRoZSBzaXplIHRvIGFjdHVhbGx5IGFsbG9j
YXRlZC4gKi8NCiAgICAgICAgc2l6ZSA9IDFVTCA8PCAob3JkZXIgKyBYRU5fUEFHRV9TSElGVCk7
DQoNCisgICAgICAgcHJpbnRrKCIleCAleCAlcHggJXggJXB4XG4iLCBkZXZfYWRkciArIHNpemUg
LSAxID4gZG1hX21hc2ssIHJhbmdlX3N0cmFkZGxlc19wYWdlX2JvdW5kYXJ5KHBoeXMsIHNpemUp
LCB2aXJ0X3RvX3BhZ2UodmFkZHIpLCBwaHlzLCB2YWRkcik7DQogICAgICAgIGlmICghV0FSTl9P
TigoZGV2X2FkZHIgKyBzaXplIC0gMSA+IGRtYV9tYXNrKSB8fA0KICAgICAgICAgICAgICAgICAg
ICAgcmFuZ2Vfc3RyYWRkbGVzX3BhZ2VfYm91bmRhcnkocGh5cywgc2l6ZSkpICYmDQogICAgICAg
ICAgICBUZXN0Q2xlYXJQYWdlWGVuUmVtYXBwZWQodmlydF90b19wYWdlKHZhZGRyKSkpDQoNCg0K
SW4gbXkgY2FzZToNCjAgMCAwMDAwMDAwMDAwMjcxZjQwIGJjMDAwMDAwIGZmZmY4MDAwMTFjN2Qw
MDANCg0KDQpUaGUgYWxsb2MgcGF0aCBpbiBteSBzaWRlOg0KMzE0ICAgICAgICAgcGh5cyA9ICpk
bWFfaGFuZGxlOw0KMzE1ICAgICAgICAgZGV2X2FkZHIgPSB4ZW5fcGh5c190b19idXMocGh5cyk7
DQozMTYgICAgICAgICBpZiAoKChkZXZfYWRkciArIHNpemUgLSAxIDw9IGRtYV9tYXNrKSkgJiYN
CjMxNyAgICAgICAgICAgICAhcmFuZ2Vfc3RyYWRkbGVzX3BhZ2VfYm91bmRhcnkocGh5cywgc2l6
ZSkpDQozMTggICAgICAgICAgICAgICAgICpkbWFfaGFuZGxlID0gZGV2X2FkZHI7DQoNClNvIEkg
YW0gY29uZnVzZWQgd2h5IHRoZSBmcmVlIHBhdGggbmVlZHMgY2xlYXIgeGVuIHJlbWFwLg0KDQpU
aGFua3MsDQpQZW5nLg0KDQo+ID4NCj4gPg0KPiA+IFRoZSBmaXJzdCBjb25kaXRpb24gbWlnaHQg
aGFwcGVuIGFmdGVyIGJlZjRkMjAzN2QyMTQgYmVjYXVzZQ0KPiA+IGRtYV9kaXJlY3RfYWxsb2Mg
bWlnaHQgbm90IHJlc3BlY3QgdGhlIGRldmljZSBkbWFfbWFzay4gSXQgaXMgYWN0dWFsbHkNCj4g
PiBhIGJ1ZyBhbmQgSSB3b3VsZCBsaWtlIHRvIGtlZXAgdGhlIFdBUk5fT04gZm9yIHRoYXQuIFRo
ZSBwYXRjaCBJIHNlbnQNCj4gPiB5ZXN0ZXJkYXkNCj4gPg0KPiAoaHR0cHM6Ly9ldXIwMS5zYWZl
bGlua3MucHJvdGVjdGlvbi5vdXRsb29rLmNvbS8/dXJsPWh0dHBzJTNBJTJGJTJGbWFyYy4NCj4g
aW5mbyUyRiUzRmwlM0R4ZW4tZGV2ZWwlMjZtJTNEMTU4ODY1MDgwMjI0NTA0JmFtcDtkYXRhPTAy
JTdDMA0KPiAxJTdDcGVuZy5mYW4lNDBueHAuY29tJTdDYWI5OGIyZGI5NDQ4NDE0MWE4YWQwOGQ3
ZjE4MDMzNzIlN0M2OA0KPiA2ZWExZDNiYzJiNGM2ZmE5MmNkOTljNWMzMDE2MzUlN0MwJTdDMCU3
QzYzNzI0MzQwNTIzMzU3MjM1NCZhDQo+IG1wO3NkYXRhPTV3Q0doaTZIdVhWZWdQdmx1N2dtYnd2
OXlFWjVYUUtidXFxUHc1Wmw4c3MlM0QmYW1wO3JlDQo+IHNlcnZlZD0wKSBzaG91bGQgc29sdmUg
dGhhdCBpc3N1ZS4gQnV0IFJvbWFuIGlzIHRlbGxpbmcgdXMgdGhhdCB0aGUgY3Jhc2ggc3RpbGwN
Cj4gcGVyc2lzdHMuDQo+ID4NCj4gPiBUaGUgc2Vjb25kIGNvbmRpdGlvbiBpcyBjb21wbGV0ZWx5
IG5vcm1hbCBhbmQgbm90IGFuIGVycm9yIG9uIEFSTQ0KPiA+IGJlY2F1c2UgZG9tMCBpcyAxOjEg
bWFwcGVkLiBJdCBpcyBub3QgYW4gaXNzdWUgaWYgdGhlIGFkZHJlc3MgcmFuZ2UgaXMNCj4gPiBz
dHJhZGRsaW5nIGEgcGFnZSBib3VuZGFyeS4gV2UgY2VydGFpbmx5IHNob3VsZG4ndCBXQVJOIChv
ciBjcmFzaCkuDQo+ID4NCj4gPiBTbywgSSBzdWdnZXN0IHNvbWV0aGluZyBzaW1pbGFyIHRvIFBl
bmcncyBwYXRjaCwgYXBwZW5kZWQuDQo+ID4NCj4gPiBSb21hbiwgZG9lcyBpdCBzb2x2ZSB5b3Vy
IHByb2JsZW0/DQo+ID4NCj4gPg0KPiA+IGRpZmYgLS1naXQgYS9kcml2ZXJzL3hlbi9zd2lvdGxi
LXhlbi5jIGIvZHJpdmVycy94ZW4vc3dpb3RsYi14ZW4uYw0KPiA+IGluZGV4IGI2ZDI3NzYyYzZm
OC4uOTk0Y2EzYTRiNjUzIDEwMDY0NA0KPiA+IC0tLSBhL2RyaXZlcnMveGVuL3N3aW90bGIteGVu
LmMNCj4gPiArKysgYi9kcml2ZXJzL3hlbi9zd2lvdGxiLXhlbi5jDQo+ID4gQEAgLTM0Niw5ICsz
NDYsOCBAQCB4ZW5fc3dpb3RsYl9mcmVlX2NvaGVyZW50KHN0cnVjdCBkZXZpY2UgKmh3ZGV2LA0K
PiBzaXplX3Qgc2l6ZSwgdm9pZCAqdmFkZHIsDQo+ID4gICAJLyogQ29udmVydCB0aGUgc2l6ZSB0
byBhY3R1YWxseSBhbGxvY2F0ZWQuICovDQo+ID4gICAJc2l6ZSA9IDFVTCA8PCAob3JkZXIgKyBY
RU5fUEFHRV9TSElGVCk7DQo+ID4NCj4gPiAtCWlmICghV0FSTl9PTigoZGV2X2FkZHIgKyBzaXpl
IC0gMSA+IGRtYV9tYXNrKSB8fA0KPiA+IC0JCSAgICAgcmFuZ2Vfc3RyYWRkbGVzX3BhZ2VfYm91
bmRhcnkocGh5cywgc2l6ZSkpICYmDQo+ID4gLQkgICAgVGVzdENsZWFyUGFnZVhlblJlbWFwcGVk
KHZpcnRfdG9fcGFnZSh2YWRkcikpKQ0KPiA+ICsJV0FSTl9PTihkZXZfYWRkciArIHNpemUgLSAx
ID4gZG1hX21hc2spOw0KPiA+ICsJaWYgKFRlc3RDbGVhclBhZ2VYZW5SZW1hcHBlZCh2aXJ0X3Rv
X3BhZ2UodmFkZHIpKSkNCj4gPiAgIAkJeGVuX2Rlc3Ryb3lfY29udGlndW91c19yZWdpb24ocGh5
cywgb3JkZXIpOw0KPiANCj4gVGhpcyBpcyBhIHZlcnkgYmFkIGlkZWEgZm9yIHg4Ni4gQ2FsbGlu
ZyB4ZW5fZGVzdHJveV9jb250aWd1b3VzX3JlZ2lvbigpIGZvcg0KPiBjYXNlcyB3aGVyZSBpdCBz
aG91bGRuJ3QgYmUgY2FsbGVkIGlzIGNhdXNpbmcgc3VidGxlIGJ1Z3MgaW4gbWVtb3J5DQo+IG1h
bmFnZW1lbnQsIGxpa2UgcGFnZXMgYmVpbmcgdmlzaWJsZSB0d2ljZSBhbmQgdGh1cyByZXN1bHRp
bmcgaW4gd2VpcmQNCj4gbWVtb3J5IGluY29uc2lzdGVuY2llcy4NCj4gDQo+IFdoYXQgeW91IHdh
bnQgaGVyZSBpcyBwcm9iYWJseSBhbiBhcmNoaXRlY3R1cmUgc3BlY2lmaWMgdGVzdC4NCj4gDQo+
IA0KPiBKdWVyZ2VuDQo=


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:25:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:25:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGJ4-0006Wr-Nv; Wed, 06 May 2020 09:25:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=raAm=6U=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jWGJ3-0006Wm-3e
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:25:33 +0000
X-Inumbo-ID: 8845a496-8f7b-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8845a496-8f7b-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 09:25:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588757132;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=dQyVmYvJunWuhXAX5BimlQbMM137IBIQu09YRvhqy58=;
 b=gcGmcEGmxl5XnmHymUv6kVB07MmLboP+1X9aEsKRBK0Dd86w1V8KL9v9
 6zn+UqWOUVzmuhBEehooQcNCv2mhEYk9jffWj51mVHuO3dZxK4qOFP3uT
 D2CGyTLA+YXuQ5JxXy7GEyx68m2NcO8iW5RaII1HYmhJTHKaDh+AgxBDg 8=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-SBRS: 2.7
X-MesageID: 17235407
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,358,1583211600"; d="scan'208";a="17235407"
Date: Wed, 6 May 2020 11:25:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/svm: Clean up vmcbcleanbits_t handling
Message-ID: <20200506092521.GF1353@Air-de-Roger>
References: <20200505173250.5916-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200505173250.5916-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 06:32:50PM +0100, Andrew Cooper wrote:
> Rework the vmcbcleanbits_t definitions to use bool, drop 'fields' from the
> namespace, position the comments in an unambiguous position, and include the
> bit position.
> 
> In svm_vmexit_handler(), don't bother conditionally writing ~0 or 0 based on
> hardware support.  The field was entirely unused and ignored on older
> hardware (and we're already setting reserved cleanbits anyway).
> 
> In nsvm_vmcb_prepare4vmrun(), simplify the logic massivly by dropping the
                                                        ^e
> vcleanbit_set() macro using a vmcbcleanbits_t local variable which only gets
> filled in the case that clean bits were valid previously.  Fix up the style on
> impacted lines.
> 
> No practical change in behaviour.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 5950e4d52b..aeebeaf873 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -345,7 +345,7 @@ static int svm_vmcb_restore(struct vcpu *v, struct hvm_hw_cpu *c)
>      else
>          vmcb->event_inj.raw = 0;
>  
> -    vmcb->cleanbits.bytes = 0;
> +    vmcb->cleanbits.raw = 0;
>      paging_update_paging_modes(v);
>  
>      return 0;
> @@ -693,12 +693,12 @@ static void svm_set_segment_register(struct vcpu *v, enum x86_segment seg,
>      case x86_seg_ds:
>      case x86_seg_es:
>      case x86_seg_ss: /* cpl */
> -        vmcb->cleanbits.fields.seg = 0;
> +        vmcb->cleanbits.seg = 0;
>          break;
>  
>      case x86_seg_gdtr:
>      case x86_seg_idtr:
> -        vmcb->cleanbits.fields.dt = 0;
> +        vmcb->cleanbits.dt = 0;

Nit: using false here (and above) would be better, since the fields
are now booleans.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGJF-0006XK-Vs; Wed, 06 May 2020 09:25:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+RWK=6U=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jWGJF-0006XF-2n
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:25:45 +0000
X-Inumbo-ID: 8f7ec288-8f7b-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f7ec288-8f7b-11ea-b07b-bc764e2007e4;
 Wed, 06 May 2020 09:25:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588757144;
 h=from:to:subject:date:message-id:content-id:
 content-transfer-encoding:mime-version;
 bh=zOR7oP0vIGWuptoMR316WLNacqZpyL9csw+Bz1qBQCg=;
 b=c0SNTGyKuMirVzDiWhUGeyixFRPXtIZuyA40F32vqAzaRZmRUS+SuyzR
 FBsqrT+l48CKt2oc8YfbiY50dNIcofbBjglsDfxnacCvX9cJzB78D/eJx
 gXLEwuTMnra9F6fayEI6wtdf/ATtJNiIK9gyU6+fegN72Yi8TEWXhWqVa M=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: Cxt1s9HfSv2neARbaPlUv403uldiZzVK7krGhjhLtw9wLJYX4ZKqtVOpFjS/PNn/wNYxHfPmRh
 HG84E7RCIHyuoUBTwx+4zPYIXh52PJum5FXgqH1wISXiyyPdJJ6PS/wo9Dck/iVR2L/qawmLiN
 PtVNGfY0b+eKYSSvQIUgnotdQ+hYompTDZCVYRe09EAjhnF/JBGpqmvKbf+IBM6oUoNtsO7b5b
 jrZuryBxwOoAf6bUAmIp4kGE5xHGlxWxcF3JgJJX+6UDpDpK2FT3vkIiByLL9NfRQzc/zh2sy3
 cKw=
X-SBRS: 2.7
X-MesageID: 17235415
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,358,1583211600"; d="scan'208";a="17235415"
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: [ANNOUNCE] Call for agenda items for 7 May Community Call @ 15:00 UTC
Thread-Topic: [ANNOUNCE] Call for agenda items for 7 May Community Call @
 15:00 UTC
Thread-Index: AQHWI4hPEIh5dR/9x0+t1y63Yoi16g==
Date: Wed, 6 May 2020 09:25:40 +0000
Message-ID: <076A5C6D-A3FA-46D9-8640-90BC77B066CE@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.60.0.2.5)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <C8330C5A39FD2F4092F3B3AA02149F02@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

The proposed agenda is in  and https://cryptpad.fr/pad/#/2/pad/edit/qPQUEQEv3nJJ97clS8b2KdtP/ you can edit to add items.  Alternatively, you can reply to this mail directly.

Agenda items appreciated a few days before the call: please put your name besides items if you edit the document.

Note the following administrative conventions for the call:
* Unless agreed otherwise in the previous meeting, the call is on the 1st Thursday of each month at 1600 British Time (either GMT or BST)
* I usually send out a meeting reminder a few days before with a provisional agenda

* If you want to be CC'ed please add or remove yourself from the sign-up-sheet at https://cryptpad.fr/pad/#/2/pad/edit/D9vGzihPxxAOe6RFPz0sRCf+/

Best Regards
George



== Dial-in Information ==
## Meeting time
15:00 - 16:00 UTC (during BST)
Further International meeting times: https://www.timeanddate.com/worldclock/meetingdetails.html?year=2020&month=5&day=7&hour=15&min=0&sec=0&p1=1234&p2=37&p3=224&p4=179


## Dial in details
Web: https://www.gotomeet.me/GeorgeDunlap

You can also dial in using your phone.
Access Code: 168-682-109

China (Toll Free): 4008 811084
Germany: +49 692 5736 7317
Poland (Toll Free): 00 800 1124759
Ukraine (Toll Free): 0 800 50 1733
United Kingdom: +44 330 221 0088
United States: +1 (571) 317-3129
Spain: +34 932 75 2004


More phone numbers
Australia: +61 2 9087 3604
Austria: +43 7 2081 5427
Argentina (Toll Free): 0 800 444 3375
Bahrain (Toll Free): 800 81 111
Belarus (Toll Free): 8 820 0011 0400
Belgium: +32 28 93 7018
Brazil (Toll Free): 0 800 047 4906
Bulgaria (Toll Free): 00800 120 4417
Canada: +1 (647) 497-9391
Chile (Toll Free): 800 395 150
Colombia (Toll Free): 01 800 518 4483
Czech Republic (Toll Free): 800 500448
Denmark: +45 32 72 03 82
Finland: +358 923 17 0568
France: +33 170 950 594
Greece (Toll Free): 00 800 4414 3838
Hong Kong (Toll Free): 30713169906-886-965
Hungary (Toll Free): (06) 80 986 255
Iceland (Toll Free): 800 7204
India (Toll Free): 18002669272
Indonesia (Toll Free): 007 803 020 5375
Ireland: +353 15 360 728
Israel (Toll Free): 1 809 454 830
Italy: +39 0 247 92 13 01
Japan (Toll Free): 0 120 663 800
Korea, Republic of (Toll Free): 00798 14 207 4914
Luxembourg (Toll Free): 800 85158
Malaysia (Toll Free): 1 800 81 6854
Mexico (Toll Free): 01 800 522 1133
Netherlands: +31 207 941 377
New Zealand: +64 9 280 6302
Norway: +47 21 93 37 51
Panama (Toll Free): 00 800 226 7928
Peru (Toll Free): 0 800 77023
Philippines (Toll Free): 1 800 1110 1661
Portugal (Toll Free): 800 819 575
Romania (Toll Free): 0 800 410 029
Russian Federation (Toll Free): 8 800 100 6203
Saudi Arabia (Toll Free): 800 844 3633
Singapore (Toll Free): 18007231323
South Africa (Toll Free): 0 800 555 447
Sweden: +46 853 527 827
Switzerland: +41 225 4599 78
Taiwan (Toll Free): 0 800 666 854
Thailand (Toll Free): 001 800 011 023
Turkey (Toll Free): 00 800 4488 23683
United Arab Emirates (Toll Free): 800 044 40439
Uruguay (Toll Free): 0004 019 1018
Viet Nam (Toll Free): 122 80 481

First GoToMeeting? Let's do a quick system check:

https://link.gotomeeting.com/system-check


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:34:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:34:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGRw-0007Tm-S3; Wed, 06 May 2020 09:34:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jP1D=6U=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jWGRv-0007Th-Mm
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:34:43 +0000
X-Inumbo-ID: d0d74e98-8f7c-11ea-9e29-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0d74e98-8f7c-11ea-9e29-12813bfff9fa;
 Wed, 06 May 2020 09:34:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=BP8uYz/aYWuKgcLUpwpdowh/5shn/LrRYs5btpMcU/E=; b=Qq/O+DK/vZdH44A7vk0UATJ7YG
 tBSh4Zpmk5wsZADQNk+xPLf2b0dj9rvsIyrz3bM4gfIslftWwp4t/vBFN/Hhia9TljOLbNBdc3p7Q
 mPeo93jX8gm6rqhJ/ypTp+Uki270gIHwFB4oVfMucy4spOvSWTUgonj6+7cwt/tndPVI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jWGRu-0000jH-03; Wed, 06 May 2020 09:34:42 +0000
Received: from 44.142.6.51.dyn.plus.net ([51.6.142.44] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jWGRt-0008BZ-M6; Wed, 06 May 2020 09:34:41 +0000
Date: Wed, 6 May 2020 10:34:39 +0100
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v3] tools/xenstore: don't store domU's mfn of ring page
 in xenstored
Message-ID: <20200506093439.eqmuc26dglgiqdah@debian>
References: <20200430053842.4376-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200430053842.4376-1-jgross@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 07:38:42AM +0200, Juergen Gross wrote:
> The XS_INTRODUCE command has two parameters: the mfn (or better: gfn)
> of the domain's xenstore ring page and the event channel of the
> domain for communicating with Xenstore.
> 
> The gfn is not really needed. It is stored in the per-domain struct
> in xenstored and in case of another XS_INTRODUCE for the domain it
> is tested to match the original value. If it doesn't match the
> command is aborted via EINVAL, otherwise the event channel to the
> domain is recreated.
> 
> As XS_INTRODUCE is limited to dom0 and there is no real downside to
> recreating the event channel, just omit the test for the gfn to
> match and don't return EINVAL for multiple XS_INTRODUCE calls.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:37:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGUA-0007aw-8s; Wed, 06 May 2020 09:37:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cKFb=6U=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWGU8-0007aq-Se
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:37:00 +0000
X-Inumbo-ID: 2150d66e-8f7d-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2150d66e-8f7d-11ea-b07b-bc764e2007e4;
 Wed, 06 May 2020 09:36:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 26DD8AC4A;
 Wed,  6 May 2020 09:37:00 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] Arm: fix build with CONFIG_DTB_FILE set
Message-ID: <f4c6b07e-c5de-ba75-c1ce-1475939f10af@suse.com>
Date: Wed, 6 May 2020 11:36:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Recent changes no longer allow modification of AFLAGS. The needed
conversion was apparently missed in 2740d96efdd3 ("xen/build: have the
root Makefile generates the CFLAGS").

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/arm/Makefile
+++ b/xen/arch/arm/Makefile
@@ -68,7 +68,7 @@ extra-y += $(TARGET_SUBARCH)/head.o
 
 ifdef CONFIG_DTB_FILE
 obj-y += dtb.o
-AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
+AFLAGS-y += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
 endif
 
 ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
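The convention the fix relies on can be demonstrated with a toy Makefile (this is not Xen's build system; the variable names besides AFLAGS-y are invented): per-directory makefiles append to the `-y` suffixed variable, and the final assembler flags are derived from it, so appending to plain AFLAGS is silently ignored.

```shell
# Toy demo of the flags-y convention: AFLAGS-y accumulates, and the
# effective AFLAGS is computed from it by the (toy) top-level logic.
cat > /tmp/aflags-demo.mk <<'EOF'
AFLAGS-y += -DCONFIG_DTB_FILE=\"board.dtb\"
AFLAGS := -D__ASSEMBLY__ $(AFLAGS-y)
$(info $(AFLAGS))
all: ;
EOF
make -s -f /tmp/aflags-demo.mk
```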


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:37:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:37:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGUw-0007eW-IV; Wed, 06 May 2020 09:37:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jP1D=6U=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jWGUu-0007eJ-Oj
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:37:48 +0000
X-Inumbo-ID: 3f51dbe0-8f7d-11ea-9e29-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f51dbe0-8f7d-11ea-9e29-12813bfff9fa;
 Wed, 06 May 2020 09:37:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AiuIkPHt3//36IVZTKF7/3+gbZ6ov+Qa2jsC3AwA2hA=; b=ljaSZaEWRBJ7gqe4JfVcfMe6j9
 guPjBgeZrrxnVC4blPUKfxmcwOV7hkYOk/idRCcKewjpACaAbEigAfErkEwMUgQFXs97dJjQdn2+A
 SKSvifHsjknlPRs7GhIYwL56fRMoUcktEO/TxAaYTNpZ2MaNA2RwcahX69U7fz5MJUgY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jWGUs-0000mz-MC; Wed, 06 May 2020 09:37:46 +0000
Received: from 44.142.6.51.dyn.plus.net ([51.6.142.44] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jWGUs-0008PF-Db; Wed, 06 May 2020 09:37:46 +0000
Date: Wed, 6 May 2020 10:37:43 +0100
From: Wei Liu <wl@xen.org>
To: Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: [PATCH] x86/hyperv: stash and use the configured max VP index
Message-ID: <20200506093743.ztkvh5fwv2c4hw2r@debian>
References: <20200429104144.17816-1-liuwe@microsoft.com>
 <20200429114718.zclpy6r6sbxuo6ph@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <20200430101558.GA28601@Air-de-Roger>
 <20200430102118.GB28601@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200430102118.GB28601@Air-de-Roger>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <liuwe@microsoft.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Michael Kelley <mikelley@microsoft.com>, Jan Beulich <jbeulich@suse.com>,
 Xen Development List <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 30, 2020 at 12:21:18PM +0200, Roger Pau Monné wrote:
> On Thu, Apr 30, 2020 at 12:15:58PM +0200, Roger Pau Monné wrote:
> > On Wed, Apr 29, 2020 at 11:47:18AM +0000, Wei Liu wrote:
> > > On Wed, Apr 29, 2020 at 11:41:44AM +0100, Wei Liu wrote:
> > > > The value returned from CPUID is the maximum number for virtual
> > > > processors supported by Hyper-V. It could be larger than the maximum
> > > > number of virtual processors configured.
> > > > 
> > > > Stash the configured number into a variable and use it in calculations.
> > > > 
> > > > Signed-off-by: Wei Liu <liuwe@microsoft.com>
> > > > ---
> > > >  xen/arch/x86/guest/hyperv/hyperv.c  | 4 ++++
> > > >  xen/arch/x86/guest/hyperv/private.h | 1 +
> > > >  xen/arch/x86/guest/hyperv/tlb.c     | 2 +-
> > > >  xen/arch/x86/guest/hyperv/util.c    | 2 +-
> > > >  4 files changed, 7 insertions(+), 2 deletions(-)
> > > > 
> > > > diff --git a/xen/arch/x86/guest/hyperv/hyperv.c b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > index 91a6782cd986..84221b751453 100644
> > > > --- a/xen/arch/x86/guest/hyperv/hyperv.c
> > > > +++ b/xen/arch/x86/guest/hyperv/hyperv.c
> > > > @@ -33,6 +33,7 @@ DEFINE_PER_CPU_READ_MOSTLY(void *, hv_input_page);
> > > >  DEFINE_PER_CPU_READ_MOSTLY(void *, hv_vp_assist);
> > > >  DEFINE_PER_CPU_READ_MOSTLY(unsigned int, hv_vp_index);
> > > >  
> > > > +unsigned int __read_mostly hv_max_vp_index;
> > > >  static bool __read_mostly hcall_page_ready;
> > > >  
> > > >  static uint64_t generate_guest_id(void)
> > > > @@ -143,6 +144,9 @@ static int setup_hypercall_pcpu_arg(void)
> > > >      rdmsrl(HV_X64_MSR_VP_INDEX, vp_index_msr);
> > > >      this_cpu(hv_vp_index) = vp_index_msr;
> > > >  
> > > > +    if ( vp_index_msr > hv_max_vp_index )
> > > > +        hv_max_vp_index = vp_index_msr;
> > > > +
> > > >      return 0;
> > > >  }
> > > >  
> > > > diff --git a/xen/arch/x86/guest/hyperv/private.h b/xen/arch/x86/guest/hyperv/private.h
> > > > index 354fc7f685a7..fea3e291e944 100644
> > > > --- a/xen/arch/x86/guest/hyperv/private.h
> > > > +++ b/xen/arch/x86/guest/hyperv/private.h
> > > > @@ -28,6 +28,7 @@
> > > >  DECLARE_PER_CPU(void *, hv_input_page);
> > > >  DECLARE_PER_CPU(void *, hv_vp_assist);
> > > >  DECLARE_PER_CPU(unsigned int, hv_vp_index);
> > > > +extern unsigned int hv_max_vp_index;
> > > >  
> > > >  static inline unsigned int hv_vp_index(unsigned int cpu)
> > > >  {
> > > > diff --git a/xen/arch/x86/guest/hyperv/tlb.c b/xen/arch/x86/guest/hyperv/tlb.c
> > > > index 1d723d6ee679..0a44071481bd 100644
> > > > --- a/xen/arch/x86/guest/hyperv/tlb.c
> > > > +++ b/xen/arch/x86/guest/hyperv/tlb.c
> > > > @@ -166,7 +166,7 @@ int hyperv_flush_tlb(const cpumask_t *mask, const void *va,
> > > >          {
> > > >              unsigned int vpid = hv_vp_index(cpu);
> > > >  
> > > > -            if ( vpid >= ms_hyperv.max_vp_index )
> > > > +            if ( vpid >= hv_max_vp_index )
> > > 
> > > I think the >= should be changed to > here.
> > 
> > I agree. With this fixed:
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> FWIW, I think it should also be nice to add an ASSERT_UNREACHABLE in
> the if body, as now it shouldn't be possible for vpid to be greater
> than hv_max_vp_index unless something has gone really wrong?

At some point I will initialise vpid to (uint32_t)-1 so it could go over
hv_max_vp_index if there is a bug in the code.

Wei.

> 
> Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:42:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:42:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGYw-0008Ur-3J; Wed, 06 May 2020 09:41:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C64T=6U=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWGYt-0008Uk-TA
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:41:55 +0000
X-Inumbo-ID: d2818546-8f7d-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2818546-8f7d-11ea-b07b-bc764e2007e4;
 Wed, 06 May 2020 09:41:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RKQOmLoat1cFrMBPDuFyWmD8jgcx1D0Dku4nqxilMYQ=; b=yKcN10tuYBtr7991e5VV1n/fgq
 bdD7HB1BfVEzFR8YhJjwGEgoIHhnUoiz1JMhSUJUizqNaClMg9q28qT8bPhvFLMcHyNy+kybQ5uNS
 3QmyHPmJm7ToI5/NebfX/xIsviEccEWib9JTbUKj/YVMg69onmQ5WeoTxXHzz55Si6/g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWGYs-0000sH-6W; Wed, 06 May 2020 09:41:54 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWGYr-0000Bt-Vc; Wed, 06 May 2020 09:41:54 +0000
Subject: Re: [PATCH] Arm: fix build with CONFIG_DTB_FILE set
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <f4c6b07e-c5de-ba75-c1ce-1475939f10af@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a0e21f63-f367-d32c-8d4c-baf4f9a5b21d@xen.org>
Date: Wed, 6 May 2020 10:41:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <f4c6b07e-c5de-ba75-c1ce-1475939f10af@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 06/05/2020 10:36, Jan Beulich wrote:
> Recent changes no longer allow modification of AFLAGS. The needed
> conversion was apparently missed in 2740d96efdd3 ("xen/build: have the
> root Makefile generates the CFLAGS").
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> 
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -68,7 +68,7 @@ extra-y += $(TARGET_SUBARCH)/head.o
>   
>   ifdef CONFIG_DTB_FILE
>   obj-y += dtb.o
> -AFLAGS += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
> +AFLAGS-y += -DCONFIG_DTB_FILE=\"$(CONFIG_DTB_FILE)\"
>   endif
>   
>   ALL_OBJS := $(TARGET_SUBARCH)/head.o $(ALL_OBJS)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:45:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGcI-0000Dy-NX; Wed, 06 May 2020 09:45:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C64T=6U=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWGcH-0000Ds-HV
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:45:25 +0000
X-Inumbo-ID: 4f0981a5-8f7e-11ea-9e2b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f0981a5-8f7e-11ea-9e2b-12813bfff9fa;
 Wed, 06 May 2020 09:45:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YTAfR6Ikyr6Y/khl6iA0fhEplgR7Z4Y3RkEVfoPbqoc=; b=xUO5jh5Sf1YvzpbCKj3LhpcP/v
 uL/FE1O91wnQZSLOnzvgowv11WruYb65ppD5L2WXUSH04qSg/IFE8U8ICQ/ymw3yIv7YP8dDR/Cwb
 TKDTEIQ0q3/9HeAo3+B2YaNaKi4lPyMwNGwiIlyEuY8+ENIIrdHpWM0KvGf7LuKR/q5c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWGcG-0000wh-AV; Wed, 06 May 2020 09:45:24 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWGcG-0000VQ-3g; Wed, 06 May 2020 09:45:24 +0000
Subject: Re: [PATCH v2 4/4] x86: adjustments to guest handle treatment
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
 <e820e1b9-7a7e-21f3-1ea0-d939de1905dd@suse.com>
 <20200422082610.GA28601@Air-de-Roger>
 <0b43670b-cc0b-0b0b-ef24-4734de35d4b7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6eaa1d25-7a91-d2ef-db01-20c5cb5101c4@xen.org>
Date: Wed, 6 May 2020 10:45:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <0b43670b-cc0b-0b0b-ef24-4734de35d4b7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 05/05/2020 07:26, Jan Beulich wrote:
> On 22.04.2020 10:26, Roger Pau Monné wrote:
>> On Tue, Apr 21, 2020 at 11:13:23AM +0200, Jan Beulich wrote:
>>> First of all avoid excessive conversions. copy_{from,to}_guest(), for
>>> example, work fine with all of XEN_GUEST_HANDLE{,_64,_PARAM}().
>>>
>>> Further
>>> - do_physdev_op_compat() didn't use the param form for its parameter,
>>> - {hap,shadow}_track_dirty_vram() wrongly used the param form,
>>> - compat processor Px logic failed to check compatibility of native and
>>>    compat structures not further converted.
>>>
>>> As this eliminates all users of guest_handle_from_param() and as there's
>>> no real need to allow for conversions in both directions, drop the
>>> macros as well.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> [...]
>>> --- a/xen/drivers/acpi/pmstat.c
>>> +++ b/xen/drivers/acpi/pmstat.c
>>> @@ -492,7 +492,7 @@ int do_pm_op(struct xen_sysctl_pm_op *op
>>>       return ret;
>>>   }
>>>   
>>> -int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE_PARAM(uint32) pdc)
>>> +int acpi_set_pdc_bits(u32 acpi_id, XEN_GUEST_HANDLE(uint32) pdc)
>>
>> Nit: switch to uint32_t while there?
>>
>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Unless I hear objections, I'm intending to commit this then in a
> day or two with the suggested change made and the R-b given. Of
> course a formally required ack for the Arm side dropping of
> guest_handle_from_param() would still be nice ...

Sorry, I missed the small change on Arm:

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:46:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:46:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGde-0000Jv-2l; Wed, 06 May 2020 09:46:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C64T=6U=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWGdd-0000Jq-7s
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:46:49 +0000
X-Inumbo-ID: 81779c98-8f7e-11ea-9e2b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81779c98-8f7e-11ea-9e2b-12813bfff9fa;
 Wed, 06 May 2020 09:46:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5Flmg0pC/A30wPca0pIylAF4z9Ya865mavKNqv7oous=; b=h2b0c/g6ixht7EC8+j+sBOUT4j
 tMTM8Y0sZ6fHqzbSRag9638Bif5MMohLRxxU+3FKsvZlz4GnbefZ5OZSCsQFK+GTE3wVxcszqWPSw
 ht2EzA09sh2BAZ3jTyVGCe2G9jCdraxUQDxVCGzL/AsHzL+XBhyZt4QyxCozdGC4pk1o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWGdb-0000yo-Ji; Wed, 06 May 2020 09:46:47 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWGdb-0000Yt-DB; Wed, 06 May 2020 09:46:47 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <debb2604-a239-46f7-740c-d4ac9968b4a5@xen.org>
Date: Wed, 6 May 2020 10:46:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi George,

On 04/05/2020 12:10, George Dunlap wrote:
> 
> 
>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>
>>> In order to make it easier to experiment, the option EXPERT can now
>>> be selected from the menuconfig rather than the make command line.
>>> This does not change the fact that a kernel with EXPERT mode selected
>>> will not be security supported.
>>
>> Well, if I'm not mis-remembering it was on purpose to make it more
>> difficult for people to declare themselves "experts". FAOD I'm not
>> meaning to imply I don't see and accept the frustration aspect you
>> mention further up. The two need to be carefully weighed against
>> one another.
> 
> I don’t think we need to make it difficult for people to declare themselves experts, particularly as “all” it means at the moment is, “Can build something which is not security supported”.  People who are building their own hypervisors are already pretty well advanced; I think we can let them shoot themselves in the foot if they want to.

Assuming I reword the commit message regarding "make clean", could I 
consider this as an acked-by?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:47:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:47:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGe6-0000NC-BG; Wed, 06 May 2020 09:47:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C64T=6U=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWGe4-0000N3-Ss
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:47:16 +0000
X-Inumbo-ID: 91c26ec0-8f7e-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91c26ec0-8f7e-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 09:47:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=obrGnkdPlKPx3aYfivWzsx15XeV2cLLhfAQVJKC9a8M=; b=x9D1JWipEzBdFK3O3pIEfqUU+C
 M0Rqi1ilw69f3NoaAGSaLNgeSDGOPlA9N2g4LKGi2dm19BFf5A9zE43dRTEvnUZfzmWwyq8vdKssO
 7ju3gbLv/fLSe7uaPOLcK2qbCX1ooqPsfLDQVIG+Zfkf6LKSF1zw+5Oc09P3cQvwBd+I=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWGe3-0000zj-3Z; Wed, 06 May 2020 09:47:15 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWGe2-0000aS-Sj; Wed, 06 May 2020 09:47:15 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Ian Jackson <ian.jackson@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
Date: Wed, 6 May 2020 10:47:12 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <24240.3047.877655.345428@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Ian,

On 04/05/2020 13:34, Ian Jackson wrote:
> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>> difficult for people to declare themselves "experts". FAOD I'm not
>>> meaning to imply I don't see and accept the frustration aspect you
>>> mention further up. The two need to be carefully weighed against
>>> one another.
> 
> Yes, it was on purpose.  However, I had my doubts at the time and
> I think experience has shown that this was a mistake.
> 
>> I don’t think we need to make it difficult for people to declare
>> themselves experts, particularly as “all” it means at the moment is,
>> “Can build something which is not security supported”.  People who
>> are building their own hypervisors are already pretty well advanced;
>> I think we can let them shoot themselves in the foot if they want
>> to.
> 
> Precisely.

Can I consider this as an Acked-by? :)

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 06 09:51:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 09:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGiM-0001EO-T4; Wed, 06 May 2020 09:51:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C64T=6U=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWGiK-0001EJ-TY
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 09:51:40 +0000
X-Inumbo-ID: 2f2b09ec-8f7f-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f2b09ec-8f7f-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 09:51:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=J6qrzWUYkk4A0UVI37gtnTDXBKm8T2IKX9n7JBeYnYU=; b=f8DISzF0GgL1R1L0Otj/Tnudpj
 DgyVf9dQOM4O2N91YDUDZNqLAXlhM/QLD+XUYl5ma4XHzfoyx2Fz4mM7Y4tfpyXalWGHhar0ngQH1
 m/QEQyQ0RINTgbOrEjd4ohTlCErEc/kV0kjfu0lJ0ec++7DpKKKw2dystVybYQJES/sk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWGiJ-00015u-UO; Wed, 06 May 2020 09:51:39 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWGiJ-0000nR-OE; Wed, 06 May 2020 09:51:39 +0000
Subject: Re: [PATCH 1/2] xen/Kconfig: define EXPERT a bool rather than a string
To: Jan Beulich <jbeulich@suse.com>
References: <20200430124343.29886-1-julien@xen.org>
 <20200430124343.29886-2-julien@xen.org>
 <d069d81b-24bf-1aac-3009-63e90a45af4b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4a0e868f-ce85-db62-ae22-1bde2aa11be2@xen.org>
Date: Wed, 6 May 2020 10:51:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <d069d81b-24bf-1aac-3009-63e90a45af4b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 30/04/2020 15:32, Jan Beulich wrote:
> On 30.04.2020 14:43, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Since commit f80fe2b34f08 "xen: Update Kconfig to Linux v5.4" EXPERT
>> can only have two values (enabled or disabled). So switch from a string
>> to a bool.
>>
>> Take the opportunity to replace all "EXPERT = y" to "EXPERT".
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>
> with a remark:
> 
>> --- a/xen/arch/arm/Kconfig
>> +++ b/xen/arch/arm/Kconfig
>> @@ -33,7 +33,7 @@ source "arch/Kconfig"
>>   
>>   config ACPI
>>   	bool
>> -	prompt "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT = "y"
>> +	prompt "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
>>   	depends on ARM_64
>>   	---help---
>>   
>> @@ -51,7 +51,7 @@ config GICV3
>>   
>>   config HAS_ITS
>>           bool
>> -        prompt "GICv3 ITS MSI controller support" if EXPERT = "y"
>> +        prompt "GICv3 ITS MSI controller support" if EXPERT
>>           depends on GICV3 && !NEW_VGIC
> 
> Could I talk you into switching ones like the above (looks like
> there aren't further ones) to ...
> 
>> @@ -81,7 +81,7 @@ config SBSA_VUART_CONSOLE
>>   	  SBSA Generic UART implements a subset of ARM PL011 UART.
>>   
>>   config ARM_SSBD
>> -	bool "Speculative Store Bypass Disable" if EXPERT = "y"
>> +	bool "Speculative Store Bypass Disable" if EXPERT
> 
> ... this more compact form on this occasion?

I will do the switch on commit if there are no more comments.
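For reference, the two equivalent Kconfig spellings discussed in the quoted hunks can be sketched as follows (the FOO symbol and prompt text are illustrative, not from the patch):

```
# Two-line form, as in the ACPI/HAS_ITS hunks:
config FOO
	bool
	prompt "Foo support" if EXPERT

# Compact form suggested above, as in the ARM_SSBD hunk:
config FOO
	bool "Foo support" if EXPERT
```

Both attach the prompt to the bool symbol and make it user-visible only when EXPERT is enabled; the compact form merely folds the prompt attribute into the type declaration.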

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 06 10:00:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 10:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGqz-0002AB-Pr; Wed, 06 May 2020 10:00:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gf/M=6U=citrix.com=sergey.dyasli@srs-us1.protection.inumbo.net>)
 id 1jWGqy-0002A6-E2
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 10:00:36 +0000
X-Inumbo-ID: 6d71ac5a-8f80-11ea-9e2f-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d71ac5a-8f80-11ea-9e2f-12813bfff9fa;
 Wed, 06 May 2020 10:00:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588759235;
 h=from:to:cc:subject:date:message-id:mime-version;
 bh=VEHvrNdna954yOi6N0z6Tc4Wo5AXNtDxRKBvEml7kTc=;
 b=cbOEilXB9+3pBBG4NT5xnXz5/vkBSzNeGLEv1HRMFpTh12hitIulIoOj
 6JjmNVW+1xAmH8Str78M4rPq27ZyuW1Bo422xQl4E68fY9psPzvduMz3l
 Ec1pg502Bd97H4ojoHMIs44IPt1ILG0hdFYCDZb3MvqWM0/ozvWcjhQbQ w=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=sergey.dyasli@citrix.com;
 spf=Pass smtp.mailfrom=sergey.dyasli@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 sergey.dyasli@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="sergey.dyasli@citrix.com";
 x-sender="sergey.dyasli@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 sergey.dyasli@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="sergey.dyasli@citrix.com";
 x-sender="sergey.dyasli@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="sergey.dyasli@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: boEVyDM4bTUastmKGN5ISboGEULGiFlWhpfVdliKwfx348M754JSoXHT597CA5tSng11X+70sX
 1JYVG9MyIkwe58nVBX2ByB2Qltj/QoyNOnPL2tZ1Fo7evp71qDws/pM4izodaBm5erNDhJO8Zg
 X96JAQie/0/cWy9qy0Rh2Ml+F/pdIQNaqrJoXN9iYmb2SJm7Rp79jHrbDOvXelhweQisbIeafS
 Gpd8uIB0UGZqm4A4fe0Gxb1MbPwQuaJj/ZhAUGpuO1C0HaTragqqpNV2HT8NPI6+w5v/pCLzmH
 w3o=
X-SBRS: 2.7
X-MesageID: 17114949
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,358,1583211600"; d="scan'208";a="17114949"
From: Sergey Dyasli <sergey.dyasli@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v4] sched: print information about scheduling granularity
Date: Wed, 6 May 2020 11:00:24 +0100
Message-ID: <20200506100024.17387-1-sergey.dyasli@citrix.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Sergey Dyasli <sergey.dyasli@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Dario Faggioli <dfaggioli@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Currently it might not be obvious which scheduling mode (e.g. core
scheduling) is being used by the scheduler. Alleviate this by printing
additional information about the selected granularity per cpupool.

Note: per-cpupool granularity selection is not implemented yet. Every
      cpupool gets its granularity from the single global value.

Take this opportunity to introduce struct sched_gran_name array and
refactor sched_select_granularity().

Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
---
v4:
- use char[8]

v3:
- use const char*
- use sched_gran_name array instead of switch
- updated commit message

v2:
- print information on a separate line
- use per-cpupool granularity
- updated commit message

CC: Juergen Gross <jgross@suse.com>
CC: Dario Faggioli <dfaggioli@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
---
 xen/common/sched/cpupool.c | 51 +++++++++++++++++++++++++++++++-------
 1 file changed, 42 insertions(+), 9 deletions(-)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index d40345b585..97c2d5b3c1 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -40,19 +40,50 @@ static DEFINE_SPINLOCK(cpupool_lock);
 static enum sched_gran __read_mostly opt_sched_granularity = SCHED_GRAN_cpu;
 static unsigned int __read_mostly sched_granularity = 1;
 
+struct sched_gran_name {
+    enum sched_gran mode;
+    char name[8];
+};
+
+static const struct sched_gran_name sg_name[] = {
+    {SCHED_GRAN_cpu, "cpu"},
+    {SCHED_GRAN_core, "core"},
+    {SCHED_GRAN_socket, "socket"},
+};
+
+static void sched_gran_print(enum sched_gran mode, unsigned int gran)
+{
+    const char *name = "";
+    unsigned int i;
+
+    for ( i = 0; i < ARRAY_SIZE(sg_name); i++ )
+    {
+        if ( mode == sg_name[i].mode )
+        {
+            name = sg_name[i].name;
+            break;
+        }
+    }
+
+    printk("Scheduling granularity: %s, %u CPU%s per sched-resource\n",
+           name, gran, gran == 1 ? "" : "s");
+}
+
 #ifdef CONFIG_HAS_SCHED_GRANULARITY
 static int __init sched_select_granularity(const char *str)
 {
-    if ( strcmp("cpu", str) == 0 )
-        opt_sched_granularity = SCHED_GRAN_cpu;
-    else if ( strcmp("core", str) == 0 )
-        opt_sched_granularity = SCHED_GRAN_core;
-    else if ( strcmp("socket", str) == 0 )
-        opt_sched_granularity = SCHED_GRAN_socket;
-    else
-        return -EINVAL;
+    unsigned int i;
 
-    return 0;
+    for ( i = 0; i < ARRAY_SIZE(sg_name); i++ )
+    {
+        if ( strcmp(sg_name[i].name, str) == 0 )
+        {
+            opt_sched_granularity = sg_name[i].mode;
+            return 0;
+        }
+    }
+
+    return -EINVAL;
 }
 custom_param("sched-gran", sched_select_granularity);
 #endif
@@ -115,6 +146,7 @@ static void __init cpupool_gran_init(void)
         warning_add(fallback);
 
     sched_granularity = gran;
+    sched_gran_print(opt_sched_granularity, sched_granularity);
 }
 
 unsigned int cpupool_get_granularity(const struct cpupool *c)
@@ -911,6 +943,7 @@ void dump_runq(unsigned char key)
     {
         printk("Cpupool %d:\n", (*c)->cpupool_id);
         printk("Cpus: %*pbl\n", CPUMASK_PR((*c)->cpu_valid));
+        sched_gran_print((*c)->gran, cpupool_get_granularity(*c));
         schedule_dump(*c);
     }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 06 10:09:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 10:09:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWGzb-0002Pm-Pa; Wed, 06 May 2020 10:09:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWGza-0002Ph-6K
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 10:09:30 +0000
X-Inumbo-ID: ab374c56-8f81-11ea-9e2f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab374c56-8f81-11ea-9e2f-12813bfff9fa;
 Wed, 06 May 2020 10:09:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Rs83auLvWHR1A8C+uW4C6W6bn0GQbYuof6sZFA0Go/Q=; b=PhcstV5JXZyB4UxXI41h72F7H
 c74t/pJyiPRNrf+7/e272FxWxHum5YBkV7m1SePlox7l7zdSa4S4izKMBsUGs3pSzeZC9oFdMy1/j
 mAtnPAlfzsvHTsDVdGqi1R2wS/xdMhUrZnC8RTRDWdNhnwWbbmTJB1wrg0IeqH9qJthio=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWGzX-0001WB-9G; Wed, 06 May 2020 10:09:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWGzW-000685-Jq; Wed, 06 May 2020 10:09:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWGzW-0001sm-Ic; Wed, 06 May 2020 10:09:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150042-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 150042: tolerable trouble: fail/pass/starved
 - PUSHED
X-Osstest-Failures: xen-4.13-testing:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=d2aecd86c4481291b260869c47cf0a9a02321564
X-Osstest-Versions-That: xen=35b80b2a011416383466f21e32cb72cf73df491b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 10:09:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150042 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150042/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149836
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149836
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  d2aecd86c4481291b260869c47cf0a9a02321564
baseline version:
 xen                  35b80b2a011416383466f21e32cb72cf73df491b

Last test of basis   149836  2020-04-27 13:36:20 Z    8 days
Testing same since   150042  2020-05-05 16:06:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <julien@xen.org>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Michael Young <m.a.young@durham.ac.uk>
  Sander Eikelenboom <linux@eikelenboom.it>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>
  YOUNG, MICHAEL A <m.a.young@durham.ac.uk>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   35b80b2a01..d2aecd86c4  d2aecd86c4481291b260869c47cf0a9a02321564 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed May 06 10:15:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 10:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWH5E-0003Uf-GE; Wed, 06 May 2020 10:15:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWH5D-0003Ua-3U
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 10:15:19 +0000
X-Inumbo-ID: 7b8683ea-8f82-11ea-9e31-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b8683ea-8f82-11ea-9e31-12813bfff9fa;
 Wed, 06 May 2020 10:15:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Cg9bnIiB+c2pIaHpzub87FSIrF1dLWCeFXH6eHPUunU=; b=OVIrvXuWz4FMeLJTMjgR/k3d8
 d8gO8pfdgfOjr82Ku5WqUbF/C9H9Yw1y02wbzGET/0+MrOHmyQauOM6+0q7kL2+hiZYZz9/blA3TE
 DE/qZGl9L5QRbdJ9c3pujwJVb8sWhB7+4rUOoMh7JyQ+UBIKW+J2GoaGdTTZMO7ECDIBI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWH5A-0001gF-Pc; Wed, 06 May 2020 10:15:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWH5A-0006Nf-GZ; Wed, 06 May 2020 10:15:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWH5A-0004x2-G3; Wed, 06 May 2020 10:15:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150055-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150055: all pass - PUSHED
X-Osstest-Versions-This: xen=779efdbb502b38c66b774b124fa0ceed254875bd
X-Osstest-Versions-That: xen=0135be8bd8cd60090298f02310691b688d95c3a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 10:15:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150055 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150055/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  779efdbb502b38c66b774b124fa0ceed254875bd
baseline version:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8

Last test of basis   149910  2020-05-03 09:19:24 Z    3 days
Testing same since   150055  2020-05-06 09:19:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ashok Raj <ashok.raj@intel.com>
  Borislav Petkov <bp@suse.de>
  Hongyan Xia <hongyxia@amazon.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Thomas Gleixner <tglx@linutronix.de>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0135be8bd8..779efdbb50  779efdbb502b38c66b774b124fa0ceed254875bd -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed May 06 10:29:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 10:29:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWHIc-0004Qt-Qp; Wed, 06 May 2020 10:29:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=8/PJ=6U=arndb.de=arnd@srs-us1.protection.inumbo.net>)
 id 1jWHIb-0004Qo-LO
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 10:29:09 +0000
X-Inumbo-ID: 6ad4651a-8f84-11ea-9887-bc764e2007e4
Received: from mout.kundenserver.de (unknown [212.227.17.13])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ad4651a-8f84-11ea-9887-bc764e2007e4;
 Wed, 06 May 2020 10:29:08 +0000 (UTC)
Received: from mail-qk1-f170.google.com ([209.85.222.170]) by
 mrelayeu.kundenserver.de (mreue108 [212.227.15.145]) with ESMTPSA (Nemesis)
 id 1Mn2iP-1ip0aK0jCB-00k4J1 for <xen-devel@lists.xenproject.org>; Wed, 06 May
 2020 12:29:07 +0200
Received: by mail-qk1-f170.google.com with SMTP id b6so1297684qkh.11
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 03:29:06 -0700 (PDT)
X-Gm-Message-State: AGi0PuZkYMq+fogU5Y/X54TUJuatwgT1PGwe6+/WRRIdIqE7HFRBa68H
 QhfU/n6gZcghIH1gXTgJ01kXRqYhQHkX3Q9UAsk=
X-Google-Smtp-Source: APiQypLplZDjvgilEtNOvewSJ4/Rigx9zKpijezGZjbjJHJFpHNinyeAHrYJil4GkKYLbPsidhx2J3DNgSsgw1MT/WA=
X-Received: by 2002:a37:aa82:: with SMTP id t124mr7608771qke.3.1588760945935; 
 Wed, 06 May 2020 03:29:05 -0700 (PDT)
MIME-Version: 1.0
References: <20200505141546.824573-1-arnd@arndb.de>
 <30d49e6d-570b-f6fd-3a6f-628abcc8b127@suse.com>
 <CAK8P3a0mWH=Zcq180+cTRMpqOkGt05xDP1+kCTP6yc9grAg2VQ@mail.gmail.com>
 <48893239-dde9-4e94-040d-859f4348816d@suse.com>
 <CAK8P3a2_7+_a_cwDK1cwfrJX4azQJhd_Y0xB18cCUn6=p7fVsg@mail.gmail.com>
 <2c6e4b36-6618-1889-55c4-16eeb1ef6f57@suse.com>
In-Reply-To: <2c6e4b36-6618-1889-55c4-16eeb1ef6f57@suse.com>
From: Arnd Bergmann <arnd@arndb.de>
Date: Wed, 6 May 2020 12:28:49 +0200
X-Gmail-Original-Message-ID: <CAK8P3a05wLCy0GT88mc451h3uXuU86aZ7XC=YXYXi12J0dFJkw@mail.gmail.com>
Message-ID: <CAK8P3a05wLCy0GT88mc451h3uXuU86aZ7XC=YXYXi12J0dFJkw@mail.gmail.com>
Subject: Re: [PATCH] xenbus: avoid stack overflow warning
To: Jürgen Groß <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Spam-Flag: NO
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Yan Yankovskyi <yyankovskyi@gmail.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
 clang-built-linux <clang-built-linux@googlegroups.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 6, 2020 at 7:12 AM Jürgen Groß <jgross@suse.com> wrote:
>
> On 05.05.20 22:57, Arnd Bergmann wrote:
> > On Tue, May 5, 2020 at 6:02 PM Jürgen Groß <jgross@suse.com> wrote:
> >> On 05.05.20 17:01, Arnd Bergmann wrote:
> >>> On Tue, May 5, 2020 at 4:34 PM Jürgen Groß <jgross@suse.com> wrote:
> >>>> On 05.05.20 16:15, Arnd Bergmann wrote:
> >>>
> >>> I considered that as well, and don't really mind either way. I think it does
> >>> get a bit ugly whatever we do. If you prefer the union, I can respin the
> >>> patch that way.
> >>
> >> Hmm, thinking more about it I think the real clean solution would be to
> >> extend struct map_ring_valloc_hvm to cover the pv case, too, to add the
> >> map and unmap arrays (possibly as a union) to it and to allocate it
> >> dynamically instead of having it on the stack.
> >>
> >> Would you be fine doing this?
> >
> > This is a little more complex than I'd want to do without doing any testing
> > (and no, I don't want to do the testing either) ;-)
> >
> > It does sound like a better approach though.
>
> I take this as you are fine with me writing the patch and adding you as
> "Reported-by:"?

Yes, definitely. Thanks!

     Arnd
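[Editorial note: the refactoring Jürgen suggests above, replacing large on-stack map/unmap arrays with one dynamically allocated struct, can be sketched in plain C roughly as follows. All names here (struct ring_valloc_info, ring_valloc_map, MAX_RING_GRANTS) are illustrative only, not the real kernel identifiers in drivers/xen/xenbus, and malloc/free stand in for the kernel's allocators.]

```c
#include <assert.h>
#include <stdlib.h>

#define MAX_RING_GRANTS 16 /* illustrative bound, not the kernel's value */

/*
 * Sketch of the suggested shape: instead of keeping two large arrays on
 * the caller's stack (the source of the frame-size warning), bundle them
 * into one struct that is heap-allocated for the duration of the
 * operation.  The union mirrors the "possibly as a union" idea: the map
 * and unmap phases never need both arrays at the same time.
 */
struct ring_valloc_info {
    union {
        long map[MAX_RING_GRANTS];
        long unmap[MAX_RING_GRANTS];
    } u;
    unsigned int nr_grants;
};

/* Map nr grants; returns 0 on success, -1 on bad input or allocation failure. */
static int ring_valloc_map(unsigned int nr, long *first_out)
{
    struct ring_valloc_info *info;
    unsigned int i;

    if (nr == 0 || nr > MAX_RING_GRANTS)
        return -1;

    info = calloc(1, sizeof(*info)); /* arrays live on the heap, not the stack */
    if (!info)
        return -1;

    info->nr_grants = nr;
    for (i = 0; i < nr; i++)
        info->u.map[i] = (long)i + 1; /* stand-in for real grant handles */

    *first_out = info->u.map[0];
    free(info);
    return 0;
}
```

The trade-off is that the caller must now handle an allocation failure, but the stack frame shrinks by the full size of the struct, which is what silences the compiler warning.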


From xen-devel-bounces@lists.xenproject.org Wed May 06 12:59:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 12:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWJe9-0007wz-7x; Wed, 06 May 2020 12:59:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWJe7-0007wu-Sd
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 12:59:31 +0000
X-Inumbo-ID: 6c5f072c-8f99-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c5f072c-8f99-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 12:59:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Aew3Sz2vaT7MmIqzAMXdtMq6kcMB5I3ZdECPzbAhJQ0=; b=6yp1vq7GaAfWQUlFhYk+mUWf0
 peBbW9ZwvdKy+Lz/AGYdttTkr044DHoXk1PxUzALg/wPVhwSslH7r5FcWAG8zpcvOwZLTvWtPshbJ
 gWEJFqlFbPr7w4BHnsO7qxgC8Yc0Z2Mf/HhhuNNy6OYR704P3dFYbkRkvK5WTq3oS705I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWJe5-0004eS-P9; Wed, 06 May 2020 12:59:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWJe5-0007He-GJ; Wed, 06 May 2020 12:59:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWJe5-0003C3-FT; Wed, 06 May 2020 12:59:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150056-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150056: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=b58dc6dbfa3a038c5a22f06861a7652da80eca28
X-Osstest-Versions-That: xen=779efdbb502b38c66b774b124fa0ceed254875bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 12:59:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150056 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150056/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b58dc6dbfa3a038c5a22f06861a7652da80eca28
baseline version:
 xen                  779efdbb502b38c66b774b124fa0ceed254875bd

Last test of basis   150049  2020-05-05 20:00:32 Z    0 days
Testing same since   150056  2020-05-06 10:00:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Wei Liu <liuwe@microsoft.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   779efdbb50..b58dc6dbfa  b58dc6dbfa3a038c5a22f06861a7652da80eca28 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 06 13:00:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 13:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWJfJ-0000El-J4; Wed, 06 May 2020 13:00:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jP1D=6U=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jWJfI-0000Ee-Fj
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 13:00:44 +0000
X-Inumbo-ID: 987ce40a-8f99-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 987ce40a-8f99-11ea-9887-bc764e2007e4;
 Wed, 06 May 2020 13:00:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=5Qt1IN0amKd3IMVto1s7SIFtL7Lw/I28kzlL/23oAc8=; b=Z51/CjEBZNHsLnpUAVYZStz08c
 683Kq79KmfdhjtiEBTD5/IjGtehwro6uUXBofNfKKX/BAO5vvEB+TgHClKVus3tuj8bg+KvhXTee3
 yAoSl5kPJoptTQr5TvEVG2Bjx3dDJLjE3sW2rYd8G5mGsi+k/mEXVcl6Q+RMq07JhBwM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jWJfE-0004ha-Ki; Wed, 06 May 2020 13:00:40 +0000
Received: from 44.142.6.51.dyn.plus.net ([51.6.142.44] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jWJfE-0005cV-BW; Wed, 06 May 2020 13:00:40 +0000
Date: Wed, 6 May 2020 14:00:37 +0100
From: Wei Liu <wl@xen.org>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Subject: Re: [PATCH v17 2/2] xen/tools: VM forking toolstack side
Message-ID: <20200506130037.mklkimsaetmzqu6h@debian>
References: <70ea4889e30ed35760329331ddfeb279fcd80786.1587655725.git.tamas.lengyel@intel.com>
 <e416eac0c986fd1aba5f576d9b065a6f47660b2c.1587655725.git.tamas.lengyel@intel.com>
 <CABfawhnxoQbehu-bvT7Uhd808rsjjDsB87O=CKqHDsrBUvur-g@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CABfawhnxoQbehu-bvT7Uhd808rsjjDsB87O=CKqHDsrBUvur-g@mail.gmail.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 01, 2020 at 07:59:45AM -0600, Tamas K Lengyel wrote:
> On Thu, Apr 23, 2020 at 9:33 AM Tamas K Lengyel <tamas.lengyel@intel.com> wrote:
> >
> > Add the necessary bits to implement the "xl fork-vm" command. The command allows the
> > user to specify how to launch the device model allowing for a late-launch model
> > in which the user can execute the fork without the device model and decide to
> > only later launch it.
> >
> > Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> 
> Patch ping. If nothing else at least the libxc parts would be nice to
> get merged before the freeze.

Changes to libxc look good to me.

Please split it out to a patch with proper commit message.

Wei.


From xen-devel-bounces@lists.xenproject.org Wed May 06 13:07:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 13:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWJlV-0000Ul-9t; Wed, 06 May 2020 13:07:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jP1D=6U=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jWJlT-0000Ug-85
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 13:07:07 +0000
X-Inumbo-ID: 7c8c7ee4-8f9a-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c8c7ee4-8f9a-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 13:07:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=N/ZFp1530uz7aUGqA/uN7FSwOLHjjgeceoA48t2DuOQ=; b=mWANcB0oL6o3Mn77Ta/XPPR4fr
 58yBWulZSQOFoKLohOrEtgHMKhrEyvNaI/AjT7+XKomvSXHL7fwomVQ6KwKBa87KO78ndYUK/3XaQ
 vBhhLkvyH9JKYDz1W7IK6xPbNQGs26GV20s8jter0GtQWGjNnmsViksivKw1743O7nT0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jWJlR-0004qB-Ub; Wed, 06 May 2020 13:07:05 +0000
Received: from 44.142.6.51.dyn.plus.net ([51.6.142.44] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jWJlR-0005zO-Ja; Wed, 06 May 2020 13:07:05 +0000
Date: Wed, 6 May 2020 14:07:02 +0100
From: Wei Liu <wl@xen.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Message-ID: <20200506130702.ahzjezahqg7pnznv@debian>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200505092454.9161-3-roger.pau@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 11:24:53AM +0200, Roger Pau Monne wrote:
> The path provided by EXTRA_PREFIX should be added to the search path
> of the configure script, like it's done in Config.mk. Not doing so
> makes the search path for configure differ from the search path used
> by the build.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed May 06 13:07:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 13:07:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWJm5-0000X6-J3; Wed, 06 May 2020 13:07:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jP1D=6U=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jWJm5-0000X1-2J
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 13:07:45 +0000
X-Inumbo-ID: 933b9cba-8f9a-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 933b9cba-8f9a-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 13:07:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JsUk+4tBVrIV3bONlj1imIJNVURzu5HaN2YUTeQbEvQ=; b=eMZ4pACHtLo5apavIQNr9dgUSX
 6Do/q5JdMyaHDnffNYTlBovXFWMUbrHnx3BJST4cvxlJn5aQQINtLuXT93IGrOtFQp0ZCzDrlYSXY
 v0bffQqpJOfZCEvrVqXWzze6pQvP9FhKzFe7KcDYVAd8gsJIeUz6Ix/nkbH6hk028dRI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jWJm4-0004qK-8I; Wed, 06 May 2020 13:07:44 +0000
Received: from 44.142.6.51.dyn.plus.net ([51.6.142.44] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jWJm3-00063V-Vy; Wed, 06 May 2020 13:07:44 +0000
Date: Wed, 6 May 2020 14:07:41 +0100
From: Wei Liu <wl@xen.org>
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH 3/3] tools/libxl: disable clang indentation check for the
 disk parser
Message-ID: <20200506130741.lgpi4gduon7cqnup@debian>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-4-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200505092454.9161-4-roger.pau@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 11:24:54AM +0200, Roger Pau Monne wrote:
> Clang 10 complains with:
> 
> 13: error: misleading indentation; statement is not part of the previous 'if'
>       [-Werror,-Wmisleading-indentation]
>             if ( ! yyg->yy_state_buf )
>             ^
> libxlu_disk_l.c:1259:9: note: previous statement is here
>         if ( ! yyg->yy_state_buf )
>         ^
> 
> This is due to the missing braces in single-line statements and the wrong
> indentation. Fix this by disabling the warning for that specific file.
> I haven't found a way to force flex to add braces around single line
> statements in conditional blocks.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed May 06 13:08:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 13:08:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWJnB-0000gN-UM; Wed, 06 May 2020 13:08:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qiEG=6U=gmail.com=malus.brandywine@srs-us1.protection.inumbo.net>)
 id 1jWJnA-0000gD-Kk
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 13:08:52 +0000
X-Inumbo-ID: bb06f898-8f9a-11ea-ae69-bc764e2007e4
Received: from mail-vs1-xe41.google.com (unknown [2607:f8b0:4864:20::e41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb06f898-8f9a-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 13:08:51 +0000 (UTC)
Received: by mail-vs1-xe41.google.com with SMTP id y185so922949vsy.8
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 06:08:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=crH71PSGlX60gi001b8Ir4Yx3HaKDCUVktUUEZ0DKlQ=;
 b=lwROdwQX55h2fNMLfeIp+OJAprU8EbaHRc7Zr3mJ7HcckK+rGsB+9hmS+xQms2/+WS
 6VpfqS/dsPKrOXsrcqauYsf/tVvG8S64gvbBTZsOES788eeNa3aKmH+shQ6XZSZmj3dB
 TfUUjRL94a4F6pEPPlkAVDdVDV2i6WE6cEYJx+pQ1ojWPAH4RT7+oXcBKHJ8n1DTNbgR
 YECXvE+RWy0cplu6tiO/MfOb7zpnBuIqBWs6OCZOZN+c26B/+m7Zqp2b9wrt7KD5MgUK
 pcCRH3CBk7+obUCqxIQS85A13QEkFzpcWmMiFWF5Qro5vj1uvYhEAvYzNd6BFV+HkI6Z
 l+sg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=crH71PSGlX60gi001b8Ir4Yx3HaKDCUVktUUEZ0DKlQ=;
 b=NTdIWcqBA9AfAnM000v01ymYRZUpHwf7gNIRtkCLQa567HZuR+Il3zrClvKg4GOxVj
 HbYq0JVwf+UbeIDk4UnEg/IV/M6xAW+OUIJ2ZdEQbAVzEQdiTsyZp1ZrrfzpI2JweGUi
 onMtXvC1HNL1MQM1zAGs4dsL8dvp+R7Li6ctzoDXW99MRXZVI2kHVSPUniRAcOWmCFRO
 VOZu8uJAJgIV+rInaB7krCVC+boYEQO5hFjHezU2njg7YwL/O7jp2JvALAy/BgygRLq8
 /NKayDVjnI4to5K3sagEUTshQUKT2ACKxEwYjirwGbUZHGGJWUNgNRMT0vtB3CvVLHs5
 V8xg==
X-Gm-Message-State: AGi0PuYUiqGc48zHRiCvxj79/E3woMIogvuPfGwnWwo4gksv8ECduZr7
 WJodurXqAO2HtsTAxMH0piQ5KuyEHCGSLLa3Jyc=
X-Google-Smtp-Source: APiQypIkSXiFyvOPbcO+LAFbpJXRSjrAxP1nFsYX1a3mgTzq9GSmflqfCcbF1g7FlrBNJnd+3XgKiedK5hVUkSCno3I=
X-Received: by 2002:a67:80d6:: with SMTP id b205mr7620486vsd.57.1588770530891; 
 Wed, 06 May 2020 06:08:50 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
In-Reply-To: <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
From: Nataliya Korovkina <malus.brandywine@gmail.com>
Date: Wed, 6 May 2020 09:08:39 -0400
Message-ID: <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Peng Fan <peng.fan@nxp.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "minyard@acm.org" <minyard@acm.org>, Roman Shaposhnik <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

What I found out: rpi_firmware_property_list() allocates memory from the
dma_atomic_pool, which is mapped into the VMALLOC region, so virt_to_page()
is not valid in this case.

Thanks,
Nataliya

On Wed, May 6, 2020 at 4:57 AM Peng Fan <peng.fan@nxp.com> wrote:
>
> > Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
> >
> > On 06.05.20 00:34, Stefano Stabellini wrote:
> > > + Boris, Jürgen
> > >
> > > See the crash Roman is seeing booting dom0 on the Raspberry Pi. It is
> > > related to the recent xen dma_ops changes. Possibly the same thing
> > > reported by Peng here:
>
> Yes. It is the same issue.
>
> > >
> > > https://marc.info/?l=linux-kernel&m=158805976230485&w=2
> > >
> > > An in-depth analysis below.
> > >
> > >
> > > On Mon, 4 May 2020, Roman Shaposhnik wrote:
> > >>>> [    2.534292] Unable to handle kernel paging request at virtual
> > >>>> address 000000000026c340
> > >>>> [    2.542373] Mem abort info:
> > >>>> [    2.545257]   ESR = 0x96000004
> > >>>> [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
> > >>>> [    2.553877]   SET = 0, FnV = 0
> > >>>> [    2.557023]   EA = 0, S1PTW = 0
> > >>>> [    2.560297] Data abort info:
> > >>>> [    2.563258]   ISV = 0, ISS = 0x00000004
> > >>>> [    2.567208]   CM = 0, WnR = 0
> > >>>> [    2.570294] [000000000026c340] user address but active_mm is
> > swapper
> > >>>> [    2.576783] Internal error: Oops: 96000004 [#1] SMP
> > >>>> [    2.581784] Modules linked in:
> > >>>> [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted
> > 5.6.1-default #9
> > >>>> [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
> > >>>> [    2.597256] Workqueue: events deferred_probe_work_func
> > >>>> [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
> > >>>> [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
> > >>>> [    2.612696] lr : dma_free_attrs+0x98/0xd0
> > >>>> [    2.616827] sp : ffff800011db3970
> > >>>> [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
> > >>>> [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
> > >>>> [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
> > >>>> [    2.636583] x23: 0000000000000000 x22: 0000000000000000
> > >>>> [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
> > >>>> [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
> > >>>> [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
> > >>>> [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
> > >>>> [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
> > >>>> [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
> > >>>> [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
> > >>>> [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
> > >>>> [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
> > >>>> [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
> > >>>> [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
> > >>>> [    2.701899] Call trace:
> > >>>> [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
> > >>>> [    2.709367]  dma_free_attrs+0x98/0xd0
> > >>>> [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
> > >>>> [    2.718146]  rpi_firmware_property+0x6c/0xb0
> > >>>> [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
> > >>>> [    2.726760]  platform_drv_probe+0x50/0xa0
> > >>>> [    2.730879]  really_probe+0xd8/0x438
> > >>>> [    2.734567]  driver_probe_device+0xdc/0x130
> > >>>> [    2.738870]  __device_attach_driver+0x88/0x108
> > >>>> [    2.743434]  bus_for_each_drv+0x78/0xc8
> > >>>> [    2.747386]  __device_attach+0xd4/0x158
> > >>>> [    2.751337]  device_initial_probe+0x10/0x18
> > >>>> [    2.755649]  bus_probe_device+0x90/0x98
> > >>>> [    2.759590]  deferred_probe_work_func+0x88/0xd8
> > >>>> [    2.764244]  process_one_work+0x1f0/0x3c0
> > >>>> [    2.768369]  worker_thread+0x138/0x570
> > >>>> [    2.772234]  kthread+0x118/0x120
> > >>>> [    2.775571]  ret_from_fork+0x10/0x18
> > >>>> [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001
> > (f8626800)
> > >>>> [    2.785492] ---[ end trace 4c435212e349f45f ]---
> > >>>> [    2.793340] usb 1-1: New USB device found, idVendor=2109,
> > >>>> idProduct=3431, bcdDevice= 4.20
> > >>>> [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> > >>>> [    2.808297] usb 1-1: Product: USB2.0 Hub
> > >>>> [    2.813710] hub 1-1:1.0: USB hub found
> > >>>> [    2.817117] hub 1-1:1.0: 4 ports detected
> > >>>>
> > >>>> This is bailing out right here:
> > >>>>
> > >>>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
> > >>>>
> > >>>> FYIW (since I modified the source to actually print what was
> > >>>> returned right before it bails) we get:
> > >>>>     buf[1] == 0x800000004
> > >>>>     buf[2] == 0x00000001
> > >>>>
> > >>>> Status 0x800000004 is of course suspicious since it is not even listed here:
> > >>>>
> > >>>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
> > >>>>
> > >>>> So it appears that this DMA request path is somehow busted and it
> > >>>> would be really nice to figure out why.
> > >>>
> > >>> You have actually discovered a genuine bug in the recent xen dma
> > >>> rework in Linux. Congrats :-)
> > >>
> > >> Nice! ;-)
> > >>
> > >>> I am doing some guesswork here, but from what I read in the thread
> > >>> and the information in this email I think this patch might fix the issue.
> > >>> If it doesn't fix the issue please add a few printks in
> > >>> drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and please let
> > >>> me know where exactly it crashes.
> > >>>
> > >>>
> > >>> diff --git a/include/xen/arm/page-coherent.h
> > >>> b/include/xen/arm/page-coherent.h index b9cc11e887ed..ff4677ed9788
> > >>> 100644
> > >>> --- a/include/xen/arm/page-coherent.h
> > >>> +++ b/include/xen/arm/page-coherent.h
> > >>> @@ -8,12 +8,17 @@
> > >>>   static inline void *xen_alloc_coherent_pages(struct device *hwdev,
> > size_t size,
> > >>>                  dma_addr_t *dma_handle, gfp_t flags, unsigned
> > long attrs)
> > >>>   {
> > >>> +       void *cpu_addr;
> > >>> +       if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle,
> > &cpu_addr))
> > >>> +               return cpu_addr;
> > >>>          return dma_direct_alloc(hwdev, size, dma_handle, flags,
> > attrs);
> > >>>   }
> > >>>
> > >>>   static inline void xen_free_coherent_pages(struct device *hwdev,
> > size_t size,
> > >>>                  void *cpu_addr, dma_addr_t dma_handle,
> > unsigned long attrs)
> > >>>   {
> > >>> +       if (dma_release_from_dev_coherent(hwdev, get_order(size),
> > cpu_addr))
> > >>> +               return;
> > >>>          dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> > >>>   }
> > >>
> > >> Applied the patch, but it didn't help and after printk's it turns out
> > >> it surprisingly crashes right inside this (rather convoluted if you
> > >> ask me) if statement:
> > >>
> > >> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/xen/swiotlb-xen.c?h=v5.6.1#n349
> > >>
> > >> So it makes sense that the patch didn't help -- we never hit that
> > >> xen_free_coherent_pages.
> > >
> > > The crash happens here:
> > >
> > >     if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> > >                  range_straddles_page_boundary(phys, size)) &&
> > >         TestClearPageXenRemapped(virt_to_page(vaddr)))
> > >             xen_destroy_contiguous_region(phys, order);
> > >
> > > I don't know exactly what is causing the crash. Is it the WARN_ON
> > somehow?
> > > Is it TestClearPageXenRemapped? Neither should cause a crash in theory.
> > >
> > >
> > > But I do know that there are problems with that if statement on ARM.
> > > It can trigger for one of the following conditions:
> > >
> > > 1) dev_addr + size - 1 > dma_mask
> > > 2) range_straddles_page_boundary(phys, size)
>
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index bd3a10dfac15..33b027cb0b2a 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -346,6 +346,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>         /* Convert the size to actually allocated. */
>         size = 1UL << (order + XEN_PAGE_SHIFT);
>
> +       printk("%x %x %px %x %px\n", dev_addr + size - 1 > dma_mask, range_straddles_page_boundary(phys, size), virt_to_page(vaddr), phys, vaddr);
>         if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
>                      range_straddles_page_boundary(phys, size)) &&
>             TestClearPageXenRemapped(virt_to_page(vaddr)))
>
>
> In my case:
> 0 0 0000000000271f40 bc000000 ffff800011c7d000
>
>
> The alloc path in my side:
> 314         phys = *dma_handle;
> 315         dev_addr = xen_phys_to_bus(phys);
> 316         if (((dev_addr + size - 1 <= dma_mask)) &&
> 317             !range_straddles_page_boundary(phys, size))
> 318                 *dma_handle =3D dev_addr;
>
> So I am confused why the free path needs clear xen remap.
>
> Thanks,
> Peng.
>
> > >
> > >
> > > The first condition might happen after bef4d2037d214 because
> > > dma_direct_alloc might not respect the device dma_mask. It is actually
> > > a bug and I would like to keep the WARN_ON for that. The patch I sent
> > > yesterday
> > >
> > > (https://marc.info/?l=xen-devel&m=158865080224504) should solve that
> > > issue. But Roman is telling us that the crash still persists.
> > >
> > > The second condition is completely normal and not an error on ARM
> > > because dom0 is 1:1 mapped. It is not an issue if the address range is
> > > straddling a page boundary. We certainly shouldn't WARN (or crash).
> > >
> > > So, I suggest something similar to Peng's patch, appended.
> > >
> > > Roman, does it solve your problem?
> > >
> > >
> > > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > > index b6d27762c6f8..994ca3a4b653 100644
> > > --- a/drivers/xen/swiotlb-xen.c
> > > +++ b/drivers/xen/swiotlb-xen.c
> > > @@ -346,9 +346,8 @@ xen_swiotlb_free_coherent(struct device *hwdev,
> > size_t size, void *vaddr,
> > >     /* Convert the size to actually allocated. */
> > >     size = 1UL << (order + XEN_PAGE_SHIFT);
> > >
> > > -   if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> > > -                range_straddles_page_boundary(phys, size)) &&
> > > -       TestClearPageXenRemapped(virt_to_page(vaddr)))
> > > +   WARN_ON(dev_addr + size - 1 > dma_mask);
> > > +   if (TestClearPageXenRemapped(virt_to_page(vaddr)))
> > >             xen_destroy_contiguous_region(phys, order);
> >
> > This is a very bad idea for x86. Calling xen_destroy_contiguous_region() for
> > cases where it shouldn't be called is causing subtle bugs in memory
> > management, like pages being visible twice and thus resulting in weird
> > memory inconsistencies.
> >
> > What you want here is probably an architecture specific test.
> >
> >
> > Juergen


From xen-devel-bounces@lists.xenproject.org Wed May 06 13:17:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 13:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWJvk-0001bI-Tc; Wed, 06 May 2020 13:17:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TX5G=6U=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jWJvj-0001bB-M9
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 13:17:43 +0000
X-Inumbo-ID: f79d6246-8f9b-11ea-b9cf-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f79d6246-8f9b-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 13:17:43 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id h4so2554070wmb.4
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 06:17:42 -0700 (PDT)
MIME-Version: 1.0
References: <70ea4889e30ed35760329331ddfeb279fcd80786.1587655725.git.tamas.lengyel@intel.com>
 <e416eac0c986fd1aba5f576d9b065a6f47660b2c.1587655725.git.tamas.lengyel@intel.com>
 <CABfawhnxoQbehu-bvT7Uhd808rsjjDsB87O=CKqHDsrBUvur-g@mail.gmail.com>
 <20200506130037.mklkimsaetmzqu6h@debian>
In-Reply-To: <20200506130037.mklkimsaetmzqu6h@debian>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Wed, 6 May 2020 07:17:05 -0600
Message-ID: <CABfawhniFVz05fc1TWwvjOL2nfM9JcMB3AbXeX7GsJojMLToyg@mail.gmail.com>
Subject: Re: [PATCH v17 2/2] xen/tools: VM forking toolstack side
To: Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 6, 2020 at 7:00 AM Wei Liu <wl@xen.org> wrote:
>
> On Fri, May 01, 2020 at 07:59:45AM -0600, Tamas K Lengyel wrote:
> > On Thu, Apr 23, 2020 at 9:33 AM Tamas K Lengyel <tamas.lengyel@intel.com> wrote:
> > >
> > > Add necessary bits to implement "xl fork-vm" commands. The command allows the
> > > user to specify how to launch the device model allowing for a late-launch model
> > > in which the user can execute the fork without the device model and decide to
> > > only later launch it.
> > >
> > > Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> >
> > Patch ping. If nothing else at least the libxc parts would be nice to
> > get merged before the freeze.
>
> Changes to libxc looks good to me.
>
> Please split it out to a patch with proper commit message.
>

Sounds good, will do.

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Wed May 06 13:42:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 13:42:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWKJI-00043F-5K; Wed, 06 May 2020 13:42:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Srcy=6U=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1jWKJG-000438-Se
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 13:42:02 +0000
X-Inumbo-ID: 5b09d884-8f9f-11ea-9887-bc764e2007e4
Received: from mga04.intel.com (unknown [192.55.52.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b09d884-8f9f-11ea-9887-bc764e2007e4;
 Wed, 06 May 2020 13:41:57 +0000 (UTC)
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by fmsmga104.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 06 May 2020 06:41:55 -0700
Received: from srcuster-mobl11.amr.corp.intel.com (HELO localhost.localdomain)
 ([10.209.81.167])
 by FMSMGA003.fm.intel.com with ESMTP; 06 May 2020 06:41:54 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v18 2/2] tools/libxl: VM forking toolstack side
Date: Wed,  6 May 2020 06:41:44 -0700
Message-Id: <b91c338ab8165b6e228b46bbd1853eb140ab69c7.1588772376.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <a59dabe3a40d4f3709d3ad6ca605523f180c2dc5.1588772376.git.tamas.lengyel@intel.com>
References: <a59dabe3a40d4f3709d3ad6ca605523f180c2dc5.1588772376.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the necessary bits to implement the "xl fork-vm" command. The command allows
the user to specify how to launch the device model, allowing for a late-launch
model in which the user can run the fork without the device model and decide to
launch it only later.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 docs/man/xl.1.pod.in         |  49 +++++
 tools/libxl/libxl.h          |  12 ++
 tools/libxl/libxl_create.c   | 359 ++++++++++++++++++++---------------
 tools/libxl/libxl_dm.c       |   2 +-
 tools/libxl/libxl_dom.c      |  43 ++++-
 tools/libxl/libxl_internal.h |   7 +
 tools/libxl/libxl_types.idl  |   1 +
 tools/libxl/libxl_x86.c      |  42 ++++
 tools/xl/Makefile            |   2 +-
 tools/xl/xl.h                |   5 +
 tools/xl/xl_cmdtable.c       |  15 ++
 tools/xl/xl_forkvm.c         | 149 +++++++++++++++
 tools/xl/xl_vmcontrol.c      |  14 ++
 13 files changed, 535 insertions(+), 165 deletions(-)
 create mode 100644 tools/xl/xl_forkvm.c
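
The late-launch model above works because a fork restore is routed through the
same "restoring, not running bootloader" path as a migration restore or soft
reset. A minimal standalone C sketch of that predicate (the helper name is
illustrative; the real check is the extended condition in
initiate_domain_create):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-in for the condition this patch extends in
 * initiate_domain_create: a migration restore, a soft reset, or a
 * VM-fork restore (non-NULL dm_restore_file) all skip the bootloader. */
static bool skip_bootloader(bool restore, bool soft_reset,
                            const char *dm_restore_file)
{
    return restore || soft_reset || dm_restore_file != NULL;
}
```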

diff --git a/docs/man/xl.1.pod.in b/docs/man/xl.1.pod.in
index 09339282e6..67b4e8588a 100644
--- a/docs/man/xl.1.pod.in
+++ b/docs/man/xl.1.pod.in
@@ -708,6 +708,55 @@ above).
 
 =back
 
+=item B<fork-vm> [I<OPTIONS>] I<domain-id>
+
+Create a fork of a running VM.  The domain is paused after the operation and
+remains paused while forks of it exist.  Experimental and x86 only.
+Forks can only be made of domains with HAP enabled and on Intel hardware.  The
+parent domain must be created with the xl toolstack and its configuration must
+not manually define max_grant_frames, max_maptrack_frames or max_event_channels.
+
+B<OPTIONS>
+
+=over 4
+
+=item B<-p>
+
+Leave the fork paused after creating it.
+
+=item B<--launch-dm>
+
+Specify whether the device model (QEMU) should be launched for the fork. Late
+launch allows starting the device model for an already running fork.
+
+=item B<-C>
+
+The config file to use when launching the device model.  Currently required when
+launching the device model.  Most config settings MUST match the parent domain
+exactly; only the VM name, disk path and network configuration may be changed.
+
+=item B<-Q>
+
+The path to the qemu save file to use when launching the device model.  Currently
+required when launching the device model.
+
+=item B<--fork-reset>
+
+Perform a reset operation on an already running fork.  Note that resetting may
+be less performant than creating a new fork, depending on how much memory the
+fork has deduplicated during its runtime.
+
+=item B<--max-vcpus>
+
+Specify a max-vcpus value matching the parent domain's when not launching the dm.
+
+=item B<--allow-iommu>
+
+Allow forking a domain that has the IOMMU enabled. Only compatible with
+forks using --launch-dm no.
+
+=back
+
 =item B<sharing> [I<domain-id>]
 
 Display the number of shared pages for a specified domain. If no domain is
diff --git a/tools/libxl/libxl.h b/tools/libxl/libxl.h
index 71709dc585..4bbb0a773d 100644
--- a/tools/libxl/libxl.h
+++ b/tools/libxl/libxl.h
@@ -2666,6 +2666,18 @@ int libxl_psr_get_hw_info(libxl_ctx *ctx, libxl_psr_feat_type type,
                           unsigned int lvl, unsigned int *nr,
                           libxl_psr_hw_info **info);
 void libxl_psr_hw_info_list_free(libxl_psr_hw_info *list, unsigned int nr);
+
+int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t max_vcpus,
+                         bool allow_with_iommu, uint32_t *domid)
+                         LIBXL_EXTERNAL_CALLERS_ONLY;
+
+int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
+                                uint32_t domid,
+                                const libxl_asyncprogress_how *aop_console_how)
+                                LIBXL_EXTERNAL_CALLERS_ONLY;
+
+int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid)
+                            LIBXL_EXTERNAL_CALLERS_ONLY;
 #endif
 
 /* misc */
diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5a043df15f..1a930c2de7 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -538,12 +538,12 @@ out:
     return ret;
 }
 
-int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
-                       libxl__domain_build_state *state,
-                       uint32_t *domid, bool soft_reset)
+static int libxl__domain_make_xs_entries(libxl__gc *gc, libxl_domain_config *d_config,
+                                         libxl__domain_build_state *state,
+                                         uint32_t domid)
 {
     libxl_ctx *ctx = libxl__gc_owner(gc);
-    int ret, rc, nb_vm;
+    int rc, nb_vm;
     const char *dom_type;
     char *uuid_string;
     char *dom_path, *vm_path, *libxl_path;
@@ -555,9 +555,6 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     /* convenience aliases */
     libxl_domain_create_info *info = &d_config->c_info;
-    libxl_domain_build_info *b_info = &d_config->b_info;
-
-    assert(soft_reset || *domid == INVALID_DOMID);
 
     uuid_string = libxl__uuid2string(gc, info->uuid);
     if (!uuid_string) {
@@ -565,137 +562,7 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
         goto out;
     }
 
-    if (!soft_reset) {
-        struct xen_domctl_createdomain create = {
-            .ssidref = info->ssidref,
-            .max_vcpus = b_info->max_vcpus,
-            .max_evtchn_port = b_info->event_channels,
-            .max_grant_frames = b_info->max_grant_frames,
-            .max_maptrack_frames = b_info->max_maptrack_frames,
-        };
-
-        if (info->type != LIBXL_DOMAIN_TYPE_PV) {
-            create.flags |= XEN_DOMCTL_CDF_hvm;
-            create.flags |=
-                libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
-            create.flags |=
-                libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
-        }
-
-        assert(info->passthrough != LIBXL_PASSTHROUGH_DEFAULT);
-        LOG(DETAIL, "passthrough: %s",
-            libxl_passthrough_to_string(info->passthrough));
-
-        if (info->passthrough != LIBXL_PASSTHROUGH_DISABLED)
-            create.flags |= XEN_DOMCTL_CDF_iommu;
-
-        if (info->passthrough == LIBXL_PASSTHROUGH_SYNC_PT)
-            create.iommu_opts |= XEN_DOMCTL_IOMMU_no_sharept;
-
-        /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
-        libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid);
-
-        ret = libxl__arch_domain_prepare_config(gc, d_config, &create);
-        if (ret < 0) {
-            LOGED(ERROR, *domid, "fail to get domain config");
-            rc = ERROR_FAIL;
-            goto out;
-        }
-
-        for (;;) {
-            uint32_t local_domid;
-            bool recent;
-
-            if (info->domid == RANDOM_DOMID) {
-                uint16_t v;
-
-                ret = libxl__random_bytes(gc, (void *)&v, sizeof(v));
-                if (ret < 0)
-                    break;
-
-                v &= DOMID_MASK;
-                if (!libxl_domid_valid_guest(v))
-                    continue;
-
-                local_domid = v;
-            } else {
-                local_domid = info->domid; /* May not be valid */
-            }
-
-            ret = xc_domain_create(ctx->xch, &local_domid, &create);
-            if (ret < 0) {
-                /*
-                 * If we generated a random domid and creation failed
-                 * because that domid already exists then simply try
-                 * again.
-                 */
-                if (errno == EEXIST && info->domid == RANDOM_DOMID)
-                    continue;
-
-                LOGED(ERROR, local_domid, "domain creation fail");
-                rc = ERROR_FAIL;
-                goto out;
-            }
-
-            /* A new domain now exists */
-            *domid = local_domid;
-
-            rc = libxl__is_domid_recent(gc, local_domid, &recent);
-            if (rc)
-                goto out;
-
-            /* The domid is not recent, so we're done */
-            if (!recent)
-                break;
-
-            /*
-             * If the domid was specified then there's no point in
-             * trying again.
-             */
-            if (libxl_domid_valid_guest(info->domid)) {
-                LOGED(ERROR, local_domid, "domain id recently used");
-                rc = ERROR_FAIL;
-                goto out;
-            }
-
-            /*
-             * The domain is recent and so cannot be used. Clear domid
-             * here since, if xc_domain_destroy() fails below there is
-             * little point calling it again in the error path.
-             */
-            *domid = INVALID_DOMID;
-
-            ret = xc_domain_destroy(ctx->xch, local_domid);
-            if (ret < 0) {
-                LOGED(ERROR, local_domid, "domain destroy fail");
-                rc = ERROR_FAIL;
-                goto out;
-            }
-
-            /* The domain was successfully destroyed, so we can try again */
-        }
-
-        rc = libxl__arch_domain_save_config(gc, d_config, state, &create);
-        if (rc < 0)
-            goto out;
-    }
-
-    /*
-     * If soft_reset is set the the domid will have been valid on entry.
-     * If it was not set then xc_domain_create() should have assigned a
-     * valid value. Either way, if we reach this point, domid should be
-     * valid.
-     */
-    assert(libxl_domid_valid_guest(*domid));
-
-    ret = xc_cpupool_movedomain(ctx->xch, info->poolid, *domid);
-    if (ret < 0) {
-        LOGED(ERROR, *domid, "domain move fail");
-        rc = ERROR_FAIL;
-        goto out;
-    }
-
-    dom_path = libxl__xs_get_dompath(gc, *domid);
+    dom_path = libxl__xs_get_dompath(gc, domid);
     if (!dom_path) {
         rc = ERROR_FAIL;
         goto out;
@@ -703,12 +570,12 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     vm_path = GCSPRINTF("/vm/%s", uuid_string);
     if (!vm_path) {
-        LOGD(ERROR, *domid, "cannot allocate create paths");
+        LOGD(ERROR, domid, "cannot allocate create paths");
         rc = ERROR_FAIL;
         goto out;
     }
 
-    libxl_path = libxl__xs_libxl_path(gc, *domid);
+    libxl_path = libxl__xs_libxl_path(gc, domid);
     if (!libxl_path) {
         rc = ERROR_FAIL;
         goto out;
@@ -719,10 +586,10 @@ int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
 
     roperm[0].id = 0;
     roperm[0].perms = XS_PERM_NONE;
-    roperm[1].id = *domid;
+    roperm[1].id = domid;
     roperm[1].perms = XS_PERM_READ;
 
-    rwperm[0].id = *domid;
+    rwperm[0].id = domid;
     rwperm[0].perms = XS_PERM_NONE;
 
 retry_transaction:
@@ -740,7 +607,7 @@ retry_transaction:
                     noperm, ARRAY_SIZE(noperm));
 
     xs_write(ctx->xsh, t, GCSPRINTF("%s/vm", dom_path), vm_path, strlen(vm_path));
-    rc = libxl__domain_rename(gc, *domid, 0, info->name, t);
+    rc = libxl__domain_rename(gc, domid, 0, info->name, t);
     if (rc)
         goto out;
 
@@ -830,7 +697,7 @@ retry_transaction:
 
     vm_list = libxl_list_vm(ctx, &nb_vm);
     if (!vm_list) {
-        LOGD(ERROR, *domid, "cannot get number of running guests");
+        LOGD(ERROR, domid, "cannot get number of running guests");
         rc = ERROR_FAIL;
         goto out;
     }
@@ -854,7 +721,7 @@ retry_transaction:
             t = 0;
             goto retry_transaction;
         }
-        LOGED(ERROR, *domid, "domain creation ""xenstore transaction commit failed");
+        LOGED(ERROR, domid, "domain creation ""xenstore transaction commit failed");
         rc = ERROR_FAIL;
         goto out;
     }
@@ -866,6 +733,155 @@ retry_transaction:
     return rc;
 }
 
+int libxl__domain_make(libxl__gc *gc, libxl_domain_config *d_config,
+                       libxl__domain_build_state *state,
+                       uint32_t *domid, bool soft_reset)
+{
+    libxl_ctx *ctx = libxl__gc_owner(gc);
+    int ret, rc;
+
+    /* convenience aliases */
+    libxl_domain_create_info *info = &d_config->c_info;
+    libxl_domain_build_info *b_info = &d_config->b_info;
+
+    assert(soft_reset || *domid == INVALID_DOMID);
+
+    if (!soft_reset) {
+        struct xen_domctl_createdomain create = {
+            .ssidref = info->ssidref,
+            .max_vcpus = b_info->max_vcpus,
+            .max_evtchn_port = b_info->event_channels,
+            .max_grant_frames = b_info->max_grant_frames,
+            .max_maptrack_frames = b_info->max_maptrack_frames,
+        };
+
+        if (info->type != LIBXL_DOMAIN_TYPE_PV) {
+            create.flags |= XEN_DOMCTL_CDF_hvm;
+            create.flags |=
+                libxl_defbool_val(info->hap) ? XEN_DOMCTL_CDF_hap : 0;
+            create.flags |=
+                libxl_defbool_val(info->oos) ? 0 : XEN_DOMCTL_CDF_oos_off;
+        }
+
+        assert(info->passthrough != LIBXL_PASSTHROUGH_DEFAULT);
+        LOG(DETAIL, "passthrough: %s",
+            libxl_passthrough_to_string(info->passthrough));
+
+        if (info->passthrough != LIBXL_PASSTHROUGH_DISABLED)
+            create.flags |= XEN_DOMCTL_CDF_iommu;
+
+        if (info->passthrough == LIBXL_PASSTHROUGH_SYNC_PT)
+            create.iommu_opts |= XEN_DOMCTL_IOMMU_no_sharept;
+
+        /* Ultimately, handle is an array of 16 uint8_t, same as uuid */
+        libxl_uuid_copy(ctx, (libxl_uuid *)&create.handle, &info->uuid);
+
+        ret = libxl__arch_domain_prepare_config(gc, d_config, &create);
+        if (ret < 0) {
+            LOGED(ERROR, *domid, "fail to get domain config");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        for (;;) {
+            uint32_t local_domid;
+            bool recent;
+
+            if (info->domid == RANDOM_DOMID) {
+                uint16_t v;
+
+                ret = libxl__random_bytes(gc, (void *)&v, sizeof(v));
+                if (ret < 0)
+                    break;
+
+                v &= DOMID_MASK;
+                if (!libxl_domid_valid_guest(v))
+                    continue;
+
+                local_domid = v;
+            } else {
+                local_domid = info->domid; /* May not be valid */
+            }
+
+            ret = xc_domain_create(ctx->xch, &local_domid, &create);
+            if (ret < 0) {
+                /*
+                 * If we generated a random domid and creation failed
+                 * because that domid already exists then simply try
+                 * again.
+                 */
+                if (errno == EEXIST && info->domid == RANDOM_DOMID)
+                    continue;
+
+                LOGED(ERROR, local_domid, "domain creation fail");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+
+            /* A new domain now exists */
+            *domid = local_domid;
+
+            rc = libxl__is_domid_recent(gc, local_domid, &recent);
+            if (rc)
+                goto out;
+
+            /* The domid is not recent, so we're done */
+            if (!recent)
+                break;
+
+            /*
+             * If the domid was specified then there's no point in
+             * trying again.
+             */
+            if (libxl_domid_valid_guest(info->domid)) {
+                LOGED(ERROR, local_domid, "domain id recently used");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+
+            /*
+             * The domain is recent and so cannot be used. Clear domid
+             * here since, if xc_domain_destroy() fails below there is
+             * little point calling it again in the error path.
+             */
+            *domid = INVALID_DOMID;
+
+            ret = xc_domain_destroy(ctx->xch, local_domid);
+            if (ret < 0) {
+                LOGED(ERROR, local_domid, "domain destroy fail");
+                rc = ERROR_FAIL;
+                goto out;
+            }
+
+            /* The domain was successfully destroyed, so we can try again */
+        }
+
+        rc = libxl__arch_domain_save_config(gc, d_config, state, &create);
+        if (rc < 0)
+            goto out;
+    }
+
+    /*
+     * If soft_reset is set then the domid will have been valid on entry.
+     * If it was not set then xc_domain_create() should have assigned a
+     * valid value. Either way, if we reach this point, domid should be
+     * valid.
+     */
+    assert(libxl_domid_valid_guest(*domid));
+
+    ret = xc_cpupool_movedomain(ctx->xch, info->poolid, *domid);
+    if (ret < 0) {
+        LOGED(ERROR, *domid, "domain move fail");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    rc = libxl__domain_make_xs_entries(gc, d_config, state, *domid);
+
+out:
+    return rc;
+}
+
 static int store_libxl_entry(libxl__gc *gc, uint32_t domid,
                              libxl_domain_build_info *b_info)
 {
@@ -1192,15 +1208,31 @@ static void initiate_domain_create(libxl__egc *egc,
     ret = libxl__domain_config_setdefault(gc,d_config,domid);
     if (ret) goto error_out;
 
-    ret = libxl__domain_make(gc, d_config, dbs, &domid, dcs->soft_reset);
-    if (ret) {
-        LOGD(ERROR, domid, "cannot make domain: %d", ret);
+    if (!d_config->dm_restore_file)
+    {
+        ret = libxl__domain_make(gc, d_config, dbs, &domid, dcs->soft_reset);
+        if (ret) {
+            LOGD(ERROR, domid, "cannot make domain: %d", ret);
+            ret = ERROR_FAIL;
+            goto error_out;
+        }
+
         dcs->guest_domid = domid;
+    } else if (dcs->guest_domid != INVALID_DOMID) {
+        domid = dcs->guest_domid;
+
+        ret = libxl__domain_make_xs_entries(gc, d_config, &dcs->build_state, domid);
+        if (ret) {
+            LOGD(ERROR, domid, "cannot make domain: %d", ret);
+            ret = ERROR_FAIL;
+            goto error_out;
+        }
+    } else {
+        LOGD(ERROR, domid, "cannot make domain");
         ret = ERROR_FAIL;
         goto error_out;
     }
 
-    dcs->guest_domid = domid;
     dcs->sdss.dm.guest_domid = 0; /* means we haven't spawned */
 
     /* post-4.13 todo: move these next bits of defaulting to
@@ -1236,7 +1268,7 @@ static void initiate_domain_create(libxl__egc *egc,
     if (ret)
         goto error_out;
 
-    if (dbs->restore || dcs->soft_reset) {
+    if (dbs->restore || dcs->soft_reset || d_config->dm_restore_file) {
         LOGD(DEBUG, domid, "restoring, not running bootloader");
         domcreate_bootloader_done(egc, &dcs->bl, 0);
     } else  {
@@ -1312,7 +1344,16 @@ static void domcreate_bootloader_done(libxl__egc *egc,
     dcs->sdss.dm.callback = domcreate_devmodel_started;
     dcs->sdss.callback = domcreate_devmodel_started;
 
-    if (restore_fd < 0 && !dcs->soft_reset) {
+    if (restore_fd < 0 && !dcs->soft_reset && !d_config->dm_restore_file) {
+        rc = libxl__domain_build(gc, d_config, domid, state);
+        domcreate_rebuild_done(egc, dcs, rc);
+        return;
+    }
+
+    if (d_config->dm_restore_file) {
+        dcs->srs.dcs = dcs;
+        dcs->srs.ao = ao;
+        state->forked_vm = true;
         rc = libxl__domain_build(gc, d_config, domid, state);
         domcreate_rebuild_done(egc, dcs, rc);
         return;
@@ -1510,6 +1551,7 @@ static void domcreate_rebuild_done(libxl__egc *egc,
     /* convenience aliases */
     const uint32_t domid = dcs->guest_domid;
     libxl_domain_config *const d_config = dcs->guest_config;
+    libxl__domain_build_state *const state = &dcs->build_state;
 
     if (ret) {
         LOGD(ERROR, domid, "cannot (re-)build domain: %d", ret);
@@ -1517,6 +1559,9 @@ static void domcreate_rebuild_done(libxl__egc *egc,
         goto error_out;
     }
 
+    if (d_config->dm_restore_file)
+        state->saved_state = GCSPRINTF("%s", d_config->dm_restore_file);
+
     store_libxl_entry(gc, domid, &d_config->b_info);
 
     libxl__multidev_begin(ao, &dcs->multidev);
@@ -1947,7 +1992,7 @@ static void domain_create_cb(libxl__egc *egc,
                              libxl__domain_create_state *dcs,
                              int rc, uint32_t domid);
 
-static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
+int libxl__do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
                             uint32_t *domid, int restore_fd, int send_back_fd,
                             const libxl_domain_restore_params *params,
                             const libxl_asyncop_how *ao_how,
@@ -1960,6 +2005,8 @@ static int do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
     GCNEW(cdcs);
     cdcs->dcs.ao = ao;
     cdcs->dcs.guest_config = d_config;
+    cdcs->dcs.guest_domid = *domid;
+
     libxl_domain_config_init(&cdcs->dcs.guest_config_saved);
     libxl_domain_config_copy(ctx, &cdcs->dcs.guest_config_saved, d_config);
     cdcs->dcs.restore_fd = cdcs->dcs.libxc_fd = restore_fd;
@@ -2204,8 +2251,8 @@ int libxl_domain_create_new(libxl_ctx *ctx, libxl_domain_config *d_config,
                             const libxl_asyncprogress_how *aop_console_how)
 {
     unset_disk_colo_restore(d_config);
-    return do_domain_create(ctx, d_config, domid, -1, -1, NULL,
-                            ao_how, aop_console_how);
+    return libxl__do_domain_create(ctx, d_config, domid, -1, -1, NULL,
+                                  ao_how, aop_console_how);
 }
 
 int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
@@ -2221,8 +2268,8 @@ int libxl_domain_create_restore(libxl_ctx *ctx, libxl_domain_config *d_config,
         unset_disk_colo_restore(d_config);
     }
 
-    return do_domain_create(ctx, d_config, domid, restore_fd, send_back_fd,
-                            params, ao_how, aop_console_how);
+    return libxl__do_domain_create(ctx, d_config, domid, restore_fd, send_back_fd,
+                                   params, ao_how, aop_console_how);
 }
 
 int libxl_domain_soft_reset(libxl_ctx *ctx,
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f4007bbe50..b615f1fc88 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2803,7 +2803,7 @@ static void device_model_spawn_outcome(libxl__egc *egc,
 
     libxl__domain_build_state *state = dmss->build_state;
 
-    if (state->saved_state) {
+    if (state->saved_state && !state->forked_vm) {
         ret2 = unlink(state->saved_state);
         if (ret2) {
             LOGED(ERROR, dmss->guest_domid, "%s: failed to remove device-model state %s",
diff --git a/tools/libxl/libxl_dom.c b/tools/libxl/libxl_dom.c
index 71cb578923..9e47162f67 100644
--- a/tools/libxl/libxl_dom.c
+++ b/tools/libxl/libxl_dom.c
@@ -249,9 +249,12 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     libxl_domain_build_info *const info = &d_config->b_info;
     libxl_ctx *ctx = libxl__gc_owner(gc);
     char *xs_domid, *con_domid;
-    int rc;
+    int rc = 0;
     uint64_t size;
 
+    if (state->forked_vm)
+        goto skip_fork;
+
     if (xc_domain_max_vcpus(ctx->xch, domid, info->max_vcpus) != 0) {
         LOG(ERROR, "Couldn't set max vcpu count");
         return ERROR_FAIL;
@@ -362,7 +365,6 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         }
     }
 
-
     rc = libxl__arch_extra_memory(gc, info, &size);
     if (rc < 0) {
         LOGE(ERROR, "Couldn't get arch extra constant memory size");
@@ -374,6 +376,11 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
         return ERROR_FAIL;
     }
 
+    rc = libxl__arch_domain_create(gc, d_config, domid);
+    if (rc)
+        goto out;
+
+skip_fork:
     xs_domid = xs_read(ctx->xsh, XBT_NULL, "/tool/xenstored/domid", NULL);
     state->store_domid = xs_domid ? atoi(xs_domid) : 0;
     free(xs_domid);
@@ -385,8 +392,7 @@ int libxl__build_pre(libxl__gc *gc, uint32_t domid,
     state->store_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->store_domid);
     state->console_port = xc_evtchn_alloc_unbound(ctx->xch, domid, state->console_domid);
 
-    rc = libxl__arch_domain_create(gc, d_config, domid);
-
+out:
     return rc;
 }
 
@@ -444,6 +450,9 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid,
     char **ents;
     int i, rc;
 
+    if (state->forked_vm)
+        goto skip_fork;
+
     if (info->num_vnuma_nodes && !info->num_vcpu_soft_affinity) {
         rc = set_vnuma_affinity(gc, domid, info);
         if (rc)
@@ -466,6 +475,7 @@ int libxl__build_post(libxl__gc *gc, uint32_t domid,
         }
     }
 
+skip_fork:
     ents = libxl__calloc(gc, 12 + (info->max_vcpus * 2) + 2, sizeof(char *));
     ents[0] = "memory/static-max";
     ents[1] = GCSPRINTF("%"PRId64, info->max_memkb);
@@ -728,14 +738,16 @@ static int hvm_build_set_params(xc_interface *handle, uint32_t domid,
                                 libxl_domain_build_info *info,
                                 int store_evtchn, unsigned long *store_mfn,
                                 int console_evtchn, unsigned long *console_mfn,
-                                domid_t store_domid, domid_t console_domid)
+                                domid_t store_domid, domid_t console_domid,
+                                bool forked_vm)
 {
     struct hvm_info_table *va_hvm;
     uint8_t *va_map, sum;
     uint64_t str_mfn, cons_mfn;
     int i;
 
-    if (info->type == LIBXL_DOMAIN_TYPE_HVM) {
+    if (info->type == LIBXL_DOMAIN_TYPE_HVM && !forked_vm)
+    {
         va_map = xc_map_foreign_range(handle, domid,
                                       XC_PAGE_SIZE, PROT_READ | PROT_WRITE,
                                       HVM_INFO_PFN);
@@ -1051,6 +1063,23 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     struct xc_dom_image *dom = NULL;
     bool device_model = info->type == LIBXL_DOMAIN_TYPE_HVM ? true : false;
 
+    if (state->forked_vm)
+    {
+        rc = hvm_build_set_params(ctx->xch, domid, info, state->store_port,
+                                  &state->store_mfn, state->console_port,
+                                  &state->console_mfn, state->store_domid,
+                                  state->console_domid, state->forked_vm);
+
+        if ( rc )
+            return rc;
+
+        return xc_dom_gnttab_seed(ctx->xch, domid, true,
+                                  state->console_mfn,
+                                  state->store_mfn,
+                                  state->console_domid,
+                                  state->store_domid);
+    }
+
     xc_dom_loginit(ctx->xch);
 
     /*
@@ -1175,7 +1204,7 @@ int libxl__build_hvm(libxl__gc *gc, uint32_t domid,
     rc = hvm_build_set_params(ctx->xch, domid, info, state->store_port,
                                &state->store_mfn, state->console_port,
                                &state->console_mfn, state->store_domid,
-                               state->console_domid);
+                               state->console_domid, false);
     if (rc != 0) {
         LOG(ERROR, "hvm build set params failed");
         goto out;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index e5effd2ad1..b25cb201b1 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -1374,6 +1374,7 @@ typedef struct {
 
     char *saved_state;
     int dm_monitor_fd;
+    bool forked_vm;
 
     libxl__file_reference pv_kernel;
     libxl__file_reference pv_ramdisk;
@@ -4822,6 +4823,12 @@ _hidden int libxl__domain_pvcontrol(libxl__egc *egc,
 /* Check whether a domid is recent */
 int libxl__is_domid_recent(libxl__gc *gc, uint32_t domid, bool *recent);
 
+_hidden int libxl__do_domain_create(libxl_ctx *ctx, libxl_domain_config *d_config,
+                                    uint32_t *domid, int restore_fd, int send_back_fd,
+                                    const libxl_domain_restore_params *params,
+                                    const libxl_asyncop_how *ao_how,
+                                    const libxl_asyncprogress_how *aop_console_how);
+
 #endif
 
 /*
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index f7c473be74..2bb5e6319e 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -958,6 +958,7 @@ libxl_domain_config = Struct("domain_config", [
     ("on_watchdog", libxl_action_on_shutdown),
     ("on_crash", libxl_action_on_shutdown),
     ("on_soft_reset", libxl_action_on_shutdown),
+    ("dm_restore_file", string, {'const': True}),
     ], dir=DIR_IN)
 
 libxl_diskinfo = Struct("diskinfo", [
diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index f8bc828e62..f6d7daa8fe 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -2,6 +2,7 @@
 #include "libxl_arch.h"
 
 #include <xc_dom.h>
+#include <xen-xsm/flask/flask.h>
 
 int libxl__arch_domain_prepare_config(libxl__gc *gc,
                                       libxl_domain_config *d_config,
@@ -842,6 +843,47 @@ int libxl__arch_passthrough_mode_setdefault(libxl__gc *gc,
     return rc;
 }
 
+/*
+ * The parent domain is expected to be created with default settings for
+ * - max_evtch_port
+ * - max_grant_frames
+ * - max_maptrack_frames
+ */
+int libxl_domain_fork_vm(libxl_ctx *ctx, uint32_t pdomid, uint32_t max_vcpus,
+                         bool allow_with_iommu, uint32_t *domid)
+{
+    int rc;
+    struct xen_domctl_createdomain create = {0};
+    create.flags |= XEN_DOMCTL_CDF_hvm;
+    create.flags |= XEN_DOMCTL_CDF_hap;
+    create.flags |= XEN_DOMCTL_CDF_oos_off;
+    create.arch.emulation_flags = (XEN_X86_EMU_ALL & ~XEN_X86_EMU_VPCI);
+    create.ssidref = SECINITSID_DOMU;
+    create.max_vcpus = max_vcpus;
+    create.max_evtchn_port = 1023;
+    create.max_grant_frames = LIBXL_MAX_GRANT_FRAMES_DEFAULT;
+    create.max_maptrack_frames = LIBXL_MAX_MAPTRACK_FRAMES_DEFAULT;
+
+    if ( (rc = xc_domain_create(ctx->xch, domid, &create)) )
+        return rc;
+
+    if ( (rc = xc_memshr_fork(ctx->xch, pdomid, *domid, allow_with_iommu)) )
+        xc_domain_destroy(ctx->xch, *domid);
+
+    return rc;
+}
+
+int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
+                                uint32_t domid,
+                                const libxl_asyncprogress_how *aop_console_how)
+{
+    return libxl__do_domain_create(ctx, d_config, &domid, -1, -1, 0, 0, aop_console_how);
+}
+
+int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid)
+{
+    return xc_memshr_fork_reset(ctx->xch, domid);
+}
 
 /*
  * Local variables:
diff --git a/tools/xl/Makefile b/tools/xl/Makefile
index af4912e67a..073222233b 100644
--- a/tools/xl/Makefile
+++ b/tools/xl/Makefile
@@ -15,7 +15,7 @@ LDFLAGS += $(PTHREAD_LDFLAGS)
 CFLAGS_XL += $(CFLAGS_libxenlight)
 CFLAGS_XL += -Wshadow
 
-XL_OBJS-$(CONFIG_X86) = xl_psr.o
+XL_OBJS-$(CONFIG_X86) = xl_psr.o xl_forkvm.o
 XL_OBJS = xl.o xl_cmdtable.o xl_sxp.o xl_utils.o $(XL_OBJS-y)
 XL_OBJS += xl_parse.o xl_cpupool.o xl_flask.o
 XL_OBJS += xl_vtpm.o xl_block.o xl_nic.o xl_usb.o
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 06569c6c4a..1105c34b15 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -31,6 +31,7 @@ struct cmd_spec {
 };
 
 struct domain_create {
+    uint32_t ddomid; /* fork launch dm for this domid */
     int debug;
     int daemonize;
     int monitor; /* handle guest reboots etc */
@@ -45,6 +46,7 @@ struct domain_create {
     const char *config_file;
     char *extra_config; /* extra config string */
     const char *restore_file;
+    const char *dm_restore_file;
     char *colo_proxy_script;
     bool userspace_colo_proxy;
     int migrate_fd; /* -1 means none */
@@ -128,6 +130,8 @@ int main_pciassignable_remove(int argc, char **argv);
 int main_pciassignable_list(int argc, char **argv);
 #ifndef LIBXL_HAVE_NO_SUSPEND_RESUME
 int main_restore(int argc, char **argv);
+int main_fork_launch_dm(int argc, char **argv);
+int main_fork_reset(int argc, char **argv);
 int main_migrate_receive(int argc, char **argv);
 int main_save(int argc, char **argv);
 int main_migrate(int argc, char **argv);
@@ -212,6 +216,7 @@ int main_psr_cat_cbm_set(int argc, char **argv);
 int main_psr_cat_show(int argc, char **argv);
 int main_psr_mba_set(int argc, char **argv);
 int main_psr_mba_show(int argc, char **argv);
+int main_fork_vm(int argc, char **argv);
 #endif
 int main_qemu_monitor_command(int argc, char **argv);
 
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 08335394e5..ef634abf32 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -187,6 +187,21 @@ struct cmd_spec cmd_table[] = {
       "Restore a domain from a saved state",
       "- for internal use only",
     },
+#if defined(__i386__) || defined(__x86_64__)
+    { "fork-vm",
+      &main_fork_vm, 0, 1,
+      "Fork a domain from the running parent domid. Experimental. Most config settings must match parent.",
+      "[options] <Domid>",
+      "-h                           Print this help.\n"
+      "-C <config>                  Use config file for VM fork.\n"
+      "-Q <qemu-save-file>          Use qemu save file for VM fork.\n"
+      "--launch-dm <yes|no|late>    Launch device model (QEMU) for VM fork.\n"
+      "--fork-reset                 Reset VM fork.\n"
+      "--max-vcpus                  Specify max-vcpus matching the parent domain when not launching dm.\n"
+      "-p                           Do not unpause fork VM after operation.\n"
+      "-d                           Enable debug messages.\n"
+    },
+#endif
 #endif
     { "dump-core",
       &main_dump_core, 0, 1,
diff --git a/tools/xl/xl_forkvm.c b/tools/xl/xl_forkvm.c
new file mode 100644
index 0000000000..46805b84f3
--- /dev/null
+++ b/tools/xl/xl_forkvm.c
@@ -0,0 +1,149 @@
+/*
+ * Copyright 2020 Intel Corporation
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ */
+
+#include <fcntl.h>
+#include <inttypes.h>
+#include <stdlib.h>
+#include <sys/stat.h>
+#include <sys/types.h>
+#include <sys/utsname.h>
+#include <time.h>
+#include <unistd.h>
+
+#include <libxl.h>
+#include <libxl_utils.h>
+#include <libxlutil.h>
+
+#include "xl.h"
+#include "xl_utils.h"
+#include "xl_parse.h"
+
+int main_fork_vm(int argc, char **argv)
+{
+    int rc, debug = 0;
+    uint32_t domid_in = INVALID_DOMID, domid_out = INVALID_DOMID;
+    int launch_dm = 1;
+    bool reset = 0;
+    bool pause = 0;
+    bool allow_iommu = 0;
+    const char *config_file = NULL;
+    const char *dm_restore_file = NULL;
+    uint32_t max_vcpus = 0;
+
+    int opt;
+    static struct option opts[] = {
+        {"launch-dm", 1, 0, 'l'},
+        {"fork-reset", 0, 0, 'r'},
+        {"max-vcpus", 1, 0, 'm'},
+        {"allow-iommu", 1, 0, 'i'},
+        COMMON_LONG_OPTS
+    };
+
+    SWITCH_FOREACH_OPT(opt, "phdC:Q:l:rm:i", opts, "fork-vm", 1) {
+    case 'd':
+        debug = 1;
+        break;
+    case 'p':
+        pause = 1;
+        break;
+    case 'm':
+        max_vcpus = atoi(optarg);
+        break;
+    case 'C':
+        config_file = optarg;
+        break;
+    case 'Q':
+        dm_restore_file = optarg;
+        break;
+    case 'l':
+        if ( !strcmp(optarg, "no") )
+            launch_dm = 0;
+        if ( !strcmp(optarg, "yes") )
+            launch_dm = 1;
+        if ( !strcmp(optarg, "late") )
+            launch_dm = 2;
+        break;
+    case 'r':
+        reset = 1;
+        break;
+    case 'i':
+        allow_iommu = 1;
+        break;
+    default:
+        fprintf(stderr, "Unimplemented option(s)\n");
+        return EXIT_FAILURE;
+    }
+
+    if (argc-optind == 1) {
+        domid_in = atoi(argv[optind]);
+    } else {
+        help("fork-vm");
+        return EXIT_FAILURE;
+    }
+
+    if (launch_dm && (!config_file || !dm_restore_file)) {
+        fprintf(stderr, "Currently you must provide both -C and -Q options\n");
+        return EXIT_FAILURE;
+    }
+
+    if (reset) {
+        domid_out = domid_in;
+        if (libxl_domain_fork_reset(ctx, domid_in))
+            return EXIT_FAILURE;
+    }
+
+    if (launch_dm == 2 || reset) {
+        domid_out = domid_in;
+        rc = EXIT_SUCCESS;
+    } else {
+        if ( !max_vcpus )
+        {
+            fprintf(stderr, "Currently you must specify the parent's max_vcpus for this option\n");
+            return EXIT_FAILURE;
+        }
+
+        rc = libxl_domain_fork_vm(ctx, domid_in, max_vcpus, allow_iommu, &domid_out);
+    }
+
+    if (rc == EXIT_SUCCESS) {
+        if ( launch_dm ) {
+            struct domain_create dom_info;
+            memset(&dom_info, 0, sizeof(dom_info));
+            dom_info.ddomid = domid_out;
+            dom_info.dm_restore_file = dm_restore_file;
+            dom_info.debug = debug;
+            dom_info.paused = pause;
+            dom_info.config_file = config_file;
+            dom_info.migrate_fd = -1;
+            dom_info.send_back_fd = -1;
+            rc = create_domain(&dom_info) < 0 ? EXIT_FAILURE : EXIT_SUCCESS;
+        } else if ( !pause )
+            rc = libxl_domain_unpause(ctx, domid_out, NULL);
+    }
+
+    if (rc == EXIT_SUCCESS)
+        fprintf(stderr, "fork-vm command successfully returned domid: %u\n", domid_out);
+    else if ( domid_out != INVALID_DOMID )
+        libxl_domain_destroy(ctx, domid_out, 0);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/xl/xl_vmcontrol.c b/tools/xl/xl_vmcontrol.c
index 17b4514c94..c64123d0a1 100644
--- a/tools/xl/xl_vmcontrol.c
+++ b/tools/xl/xl_vmcontrol.c
@@ -676,6 +676,12 @@ int create_domain(struct domain_create *dom_info)
 
     int restoring = (restore_file || (migrate_fd >= 0));
 
+#if defined(__i386__) || defined(__x86_64__)
+    /* VM forking */
+    uint32_t ddomid = dom_info->ddomid; /* launch dm for this domain iff set */
+    const char *dm_restore_file = dom_info->dm_restore_file;
+#endif
+
     libxl_domain_config_init(&d_config);
 
     if (restoring) {
@@ -928,6 +934,14 @@ start:
          * restore/migrate-receive it again.
          */
         restoring = 0;
+#if defined(__i386__) || defined(__x86_64__)
+    } else if ( ddomid ) {
+        d_config.dm_restore_file = dm_restore_file;
+        ret = libxl_domain_fork_launch_dm(ctx, &d_config, ddomid,
+                                          autoconnect_console_how);
+        domid = ddomid;
+        ddomid = INVALID_DOMID;
+#endif
     } else if (domid_soft_reset != INVALID_DOMID) {
         /* Do soft reset. */
         ret = libxl_domain_soft_reset(ctx, &d_config, domid_soft_reset,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed May 06 13:42:07 2020
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v18 1/2] tools/libxc: add VM forking functions
Date: Wed,  6 May 2020 06:41:43 -0700
Message-Id: <a59dabe3a40d4f3709d3ad6ca605523f180c2dc5.1588772376.git.tamas.lengyel@intel.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>

Add functions to issue VM forking hypercalls

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 tools/libxc/include/xenctrl.h | 14 ++++++++++++++
 tools/libxc/xc_memshr.c       | 26 ++++++++++++++++++++++++++
 2 files changed, 40 insertions(+)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 5f25c5a6d4..0a6ff93229 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2232,6 +2232,20 @@ int xc_memshr_range_share(xc_interface *xch,
                           uint64_t first_gfn,
                           uint64_t last_gfn);
 
+int xc_memshr_fork(xc_interface *xch,
+                   uint32_t source_domain,
+                   uint32_t client_domain,
+                   bool allow_with_iommu);
+
+/*
+ * Note: this function is only intended to be used on short-lived forks that
+ * haven't yet acquired a lot of memory. In case the fork has a lot of memory
+ * it is likely more performant to create a new fork with xc_memshr_fork.
+ *
+ * With VMs that have a lot of memory this call may block for a long time.
+ */
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t forked_domain);
+
 /* Debug calls: return the number of pages referencing the shared frame backing
  * the input argument. Should be one or greater.
  *
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 97e2e6a8d9..2300cc7075 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -239,6 +239,32 @@ int xc_memshr_debug_gref(xc_interface *xch,
     return xc_memshr_memop(xch, domid, &mso);
 }
 
+int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid,
+                   bool allow_with_iommu)
+{
+    xen_mem_sharing_op_t mso;
+
+    memset(&mso, 0, sizeof(mso));
+
+    mso.op = XENMEM_sharing_op_fork;
+    mso.u.fork.parent_domain = pdomid;
+
+    if ( allow_with_iommu )
+        mso.u.fork.flags |= XENMEM_FORK_WITH_IOMMU_ALLOWED;
+
+    return xc_memshr_memop(xch, domid, &mso);
+}
+
+int xc_memshr_fork_reset(xc_interface *xch, uint32_t domid)
+{
+    xen_mem_sharing_op_t mso;
+
+    memset(&mso, 0, sizeof(mso));
+    mso.op = XENMEM_sharing_op_fork_reset;
+
+    return xc_memshr_memop(xch, domid, &mso);
+}
+
 int xc_memshr_audit(xc_interface *xch)
 {
     xen_mem_sharing_op_t mso;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed May 06 13:43:14 2020
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Nataliya Korovkina <malus.brandywine@gmail.com>,
 Peng Fan <peng.fan@nxp.com>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
Date: Wed, 6 May 2020 09:42:44 -0400
In-Reply-To: <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
Cc: Jürgen Groß <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "minyard@acm.org" <minyard@acm.org>, Roman Shaposhnik <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>


On 5/6/20 9:08 AM, Nataliya Korovkina wrote:
> Hello,
>
> What I found out: rpi_firmware_property_list() allocates memory from
> dma_atomic_pool, which is mapped in the VMALLOC region, so virt_to_page()
> cannot be used in this case.


So then it seems it didn't go through xen_swiotlb_alloc_coherent(). In
which case it has no business calling xen_swiotlb_free_coherent().


-boris






From xen-devel-bounces@lists.xenproject.org Wed May 06 13:48:50 2020
Date: Wed, 6 May 2020 08:48:38 -0500
From: Corey Minyard <minyard@acm.org>
To: Julien Grall <julien@xen.org>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
Message-ID: <20200506134838.GM9902@minyard.net>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
In-Reply-To: <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
Reply-To: minyard@acm.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>

On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
> Hi,
> 
> On 02/05/2020 03:16, Corey Minyard wrote:
> > On Fri, May 01, 2020 at 06:06:11PM -0700, Roman Shaposhnik wrote:
> > > On Fri, May 1, 2020 at 4:42 AM Corey Minyard <minyard@acm.org> wrote:
> > > > 
> > > > On Thu, Apr 30, 2020 at 07:20:05PM -0700, Roman Shaposhnik wrote:
> > > > > Hi!
> > > > > 
> > > > > I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
> > > > > upstream kernel. The kernel itself works perfectly well
> > > > > on the board. When I try booting it as Dom0 under Xen,
> > > > > it goes into a stacktrace (attached).
> > > > 
> > > > Getting Xen working on the Pi4 requires a lot of moving parts, and they
> > > > all have to be right.
> > > 
> > > Tell me about it! It is a pretty frustrating journey to align
> > > everything just right.
> > > On the other hand -- it seems worthwhile to enable RPi as an ARM development
> > > platform for Xen given how ubiquitous it is.
> > > 
> > > Hence me trying to combine a pristine upstream kernel (5.6.1) with
> > > pristine upstream Xen to enable a 100% upstream developer workflow on RPi.
> > > 
> > > > > Looking at what nice folks over at Dornerworks have previously
> > > > > done to make RPi kernels boot as Dom0 I've come across these
> > > > > 3 patches:
> > > > >      https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
> > > > > 
> > > > > The first patch seems irrelevant (unless I'm missing something
> > > > > really basic here).
> > > > 
> > > > It might be irrelevant for your configuration, assuming that Xen gets
> > > > the right information from EFI.  I haven't tried EFI booting.
> > > 
> > > I'm doing a bit of a belt-and-suspenders strategy really -- I'm actually
> > > using a UEFI u-boot implementation that pre-populates device trees,
> > > and then I'm also forcing an extra copy to be loaded explicitly
> > > via the GRUB devicetree command (GRUB runs as a UEFI payload).
> > > 
> > > I also have access to the semi-official TianoCore RPi4 port that seems
> > > to be working pretty well: https://github.com/pftf/RPi4/releases/tag/v1.5
> > > for booting all sorts of UEFI payloads on RPi4.
> > > 
> > > > > The 2nd patch applied with no issue (but
> > > > > I don't think it is related) but the 3rd patch failed to apply on
> > > > > account of 5.6.1 kernel no longer having:
> > > > >      dev->archdata.dev_dma_ops
> > > > > E.g.
> > > > >      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
> > > > > 
> > > > > I've tried to emulate the effect of the patch by simply introducing
> > > > > a static variable that would signal that we already initialized
> > > > > dev->dma_ops -- but that didn't help at all.
> > > > > 
> > > > > I'm CCing Jeff Kubascik, one of the original authors of that patch,
> > > > > to see if maybe they have any suggestions on how this may be dealt
> > > > > with.
> > > > > 
> > > > > Any advice would be greatly appreciated!
> > > > 
> > > > What's your Pi4 config.txt file look like? The GIC is not being handled
> > > > correctly, and I'm guessing that configuration is wrong.  You can't use
> > > > the stock config.txt file with Xen, you have to modify the configuration a
> > > > bit.
> > > 
> > > Understood. I'm actually using a custom one:
> > >      https://github.com/lf-edge/eve/blob/master/pkg/u-boot/rpi/config.txt
> > > 
> > > I could swear that I had a GIC setting in it -- but apparently not -- so
> > > I added the following at the top of what you could see at the URL above:
> > > 
> > > total_mem=4096
> > > enable_gic=1
> > > 
> > > > I think just adding:
> > > > 
> > > > enable_gic=1
> > > > total_mem=1024
> > > 
> > > Right -- but my board has 4G memory -- so I think what I did above should work.
> > 
> > Nope.  If you say 4096M of RAM, your issue is almost certainly DMA, but
> > it's not (just) the Linux code.  On the Raspberry Pi 4, several devices
> > cannot DMA to above 1024M of RAM, but Xen does not handle this.  The
> > 1024M of RAM is a limitation you will have to live with until Xen has a
> > fix.
> 
> IIUC, dom0 would need to have some memory below 1GB for this to work, am I
> correct?

FYI, this also seems to fix the issue with HDMI not working.

-corey

> 
> If so could you try the following patch?
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 430708753642..002f49dba74b 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -282,7 +282,7 @@ static void __init allocate_memory_11(struct domain *d,
>       */
>      while ( order >= min_low_order )
>      {
> -        for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
> +        for ( bits = order ; bits <= (lowmem ? 30 : PADDR_BITS); bits++ )
>          {
>              pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
>              if ( pg != NULL )
> @@ -313,7 +313,7 @@ static void __init allocate_memory_11(struct domain *d,
>      order = get_allocation_size(kinfo->unassigned_mem);
>      while ( kinfo->unassigned_mem && kinfo->mem.nr_banks < NR_MEM_BANKS )
>      {
> -        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(32) : 0);
> +        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(30) : 0);
>          if ( !pg )
>          {
>              order --;
> 
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 06 13:56:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 13:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWKXC-0005Iw-3C; Wed, 06 May 2020 13:56:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=C64T=6U=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWKXA-0005Ir-5c
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 13:56:24 +0000
X-Inumbo-ID: 5e5b8620-8fa1-11ea-9e59-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e5b8620-8fa1-11ea-9e59-12813bfff9fa;
 Wed, 06 May 2020 13:56:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1lswStQ5X/DxTvWsOwHI2BqoFyBPwn8dMbSpITfxVxY=; b=E6GVERQadkYW8Vvx43BuI4jRqN
 PF30f3U8f8rRDB/lMaRMgSNaSCR2fD4Kn1rUoALY0NCQy1fgSoqR6dmdfk30OFtSimh0/y/8RC8bN
 yi3Tmb9pQOHiJYGltchs3XQQ1g3VJWjzzZ/6Tbg9MDn0Y+NMAavA6XrXXSVk70Kcbztc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWKX5-0005l6-CF; Wed, 06 May 2020 13:56:19 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWKX5-00013D-4m; Wed, 06 May 2020 13:56:19 +0000
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: minyard@acm.org
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200506134838.GM9902@minyard.net>
From: Julien Grall <julien@xen.org>
Message-ID: <b7ef47a7-4e47-d26b-d4aa-e13ecb9c8ca2@xen.org>
Date: Wed, 6 May 2020 14:56:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200506134838.GM9902@minyard.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 06/05/2020 14:48, Corey Minyard wrote:
> On Sat, May 02, 2020 at 12:46:14PM +0100, Julien Grall wrote:
>> Hi,
>>
>> On 02/05/2020 03:16, Corey Minyard wrote:
>>> On Fri, May 01, 2020 at 06:06:11PM -0700, Roman Shaposhnik wrote:
>>>> On Fri, May 1, 2020 at 4:42 AM Corey Minyard <minyard@acm.org> wrote:
>>>>>
>>>>> On Thu, Apr 30, 2020 at 07:20:05PM -0700, Roman Shaposhnik wrote:
>>>>>> Hi!
>>>>>>
>>>>>> I'm trying to run Xen on Raspberry Pi 4 with 5.6.1 stock,
>>>>>> upstream kernel. The kernel itself works perfectly well
>>>>>> on the board. When I try booting it as Dom0 under Xen,
>>>>>> it goes into a stacktrace (attached).
>>>>>
>>>>> Getting Xen working on the Pi4 requires a lot of moving parts, and they
>>>>> all have to be right.
>>>>
>>>> Tell me about it! It is a pretty frustrating journey to align
>>>> everything just right.
>>>> On the other hand -- it seems worthwhile to enable RPi as an ARM development
>>>> platform for Xen given how ubiquitous it is.
>>>>
>>>> Hence me trying to combine a pristine upstream kernel (5.6.1) with
>>>> pristine upstream Xen to enable a 100% upstream developer workflow on RPi.
>>>>
>>>>>> Looking at what nice folks over at Dornerworks have previously
>>>>>> done to make RPi kernels boot as Dom0 I've come across these
>>>>>> 3 patches:
>>>>>>       https://github.com/dornerworks/xen-rpi4-builder/tree/master/patches/linux
>>>>>>
>>>>>> The first patch seems irrelevant (unless I'm missing something
>>>>>> really basic here).
>>>>>
>>>>> It might be irrelevant for your configuration, assuming that Xen gets
>>>>> the right information from EFI.  I haven't tried EFI booting.
>>>>
>>>> I'm doing a bit of a belt-and-suspenders strategy really -- I'm actually
>>>> using a UEFI u-boot implementation that pre-populates device trees,
>>>> and then I'm also forcing an extra copy to be loaded explicitly
>>>> via the GRUB devicetree command (GRUB runs as a UEFI payload).
>>>>
>>>> I also have access to the semi-official TianoCore RPi4 port that seems
>>>> to be working pretty well: https://github.com/pftf/RPi4/releases/tag/v1.5
>>>> for booting all sorts of UEFI payloads on RPi4.
>>>>
>>>>>> The 2nd patch applied with no issue (but
>>>>>> I don't think it is related) but the 3rd patch failed to apply on
>>>>>> account of 5.6.1 kernel no longer having:
>>>>>>       dev->archdata.dev_dma_ops
>>>>>> E.g.
>>>>>>       https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/arch/arm64/mm/dma-mapping.c?h=v5.6.1#n55
>>>>>>
>>>>>> I've tried to emulate the effect of the patch by simply introducing
>>>>>> a static variable that would signal that we already initialized
>>>>>> dev->dma_ops -- but that didn't help at all.
>>>>>>
>>>>>> I'm CCing Jeff Kubascik, one of the original authors of that patch,
>>>>>> to see if maybe they have any suggestions on how this may be dealt
>>>>>> with.
>>>>>>
>>>>>> Any advice would be greatly appreciated!
>>>>>
>>>>> What's your Pi4 config.txt file look like? The GIC is not being handled
>>>>> correctly, and I'm guessing that configuration is wrong.  You can't use
>>>>> the stock config.txt file with Xen, you have to modify the configuration a
>>>>> bit.
>>>>
>>>> Understood. I'm actually using a custom one:
>>>>       https://github.com/lf-edge/eve/blob/master/pkg/u-boot/rpi/config.txt
>>>>
>>>> I could swear that I had a GIC setting in it -- but apparently not -- so
>>>> I added the following at the top of what you could see at the URL above:
>>>>
>>>> total_mem=4096
>>>> enable_gic=1
>>>>
>>>>> I think just adding:
>>>>>
>>>>> enable_gic=1
>>>>> total_mem=1024
>>>>
>>>> Right -- but my board has 4G memory -- so I think what I did above should work.
>>>
>>> Nope.  If you say 4096M of RAM, your issue is almost certainly DMA, but
>>> it's not (just) the Linux code.  On the Raspberry Pi 4, several devices
>>> cannot DMA to above 1024M of RAM, but Xen does not handle this.  The
>>> 1024M of RAM is a limitation you will have to live with until Xen has a
>>> fix.
>>
>> IIUC, dom0 would need to have some memory below 1GB for this to work, am I
>> correct?
> 
> FYI, this also seems to fix the issue with HDMI not working.

Thank you for the testing! I will have a look at how I can properly 
upstream this fix (I think we want to keep the 4GB limit for other platforms).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 06 15:11:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 15:11:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWLhO-0003Ga-QP; Wed, 06 May 2020 15:11:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=cKFb=6U=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWLhN-0003G4-GP
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 15:11:01 +0000
X-Inumbo-ID: caa20962-8fab-11ea-9e81-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id caa20962-8fab-11ea-9e81-12813bfff9fa;
 Wed, 06 May 2020 15:10:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0188EAEE2;
 Wed,  6 May 2020 15:11:00 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: Clean up vmcbcleanbits_t handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505173250.5916-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <961921e3-c882-dad0-837e-71644f8bf208@suse.com>
Date: Wed, 6 May 2020 17:10:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200505173250.5916-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 19:32, Andrew Cooper wrote:
> @@ -435,17 +435,13 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
>      ASSERT(n2vmcb != NULL);
>  
>      /* Check if virtual VMCB cleanbits are valid */
> -    vcleanbits_valid = 1;
> -    if ( svm->ns_ovvmcb_pa == INVALID_PADDR )
> -        vcleanbits_valid = 0;
> -    if (svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr)
> -        vcleanbits_valid = 0;
> -
> -#define vcleanbit_set(_name)	\
> -    (vcleanbits_valid && ns_vmcb->cleanbits.fields._name)
> +    if ( svm->ns_ovvmcb_pa != INVALID_PADDR &&
> +         svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr )
> +        clean = ns_vmcb->cleanbits;

It looks to me as if the proper inversion of the original condition
would mean == on the right side of &&, not != .

> --- a/xen/include/asm-x86/hvm/svm/vmcb.h
> +++ b/xen/include/asm-x86/hvm/svm/vmcb.h
> @@ -384,34 +384,21 @@ typedef union
>  
>  typedef union
>  {
> -    uint32_t bytes;
> -    struct
> -    {
> -        /* cr_intercepts, dr_intercepts, exception_intercepts,
> -         * general{1,2}_intercepts, pause_filter_count, tsc_offset */
> -        uint32_t intercepts: 1;
> -        /* iopm_base_pa, msrpm_base_pa */
> -        uint32_t iopm: 1;
> -        /* guest_asid */
> -        uint32_t asid: 1;
> -        /* vintr */
> -        uint32_t tpr: 1;
> -        /* np_enable, h_cr3, g_pat */
> -        uint32_t np: 1;
> -        /* cr0, cr3, cr4, efer */
> -        uint32_t cr: 1;
> -        /* dr6, dr7 */
> -        uint32_t dr: 1;
> -        /* gdtr, idtr */
> -        uint32_t dt: 1;
> -        /* cs, ds, es, ss, cpl */
> -        uint32_t seg: 1;
> -        /* cr2 */
> -        uint32_t cr2: 1;
> -        /* debugctlmsr, last{branch,int}{to,from}ip */
> -        uint32_t lbr: 1;
> -        uint32_t resv: 21;
> -    } fields;
> +    struct {
> +        bool intercepts:1; /* 0:  cr/dr/exception/general1/2_intercepts,
> +                            *     pause_filter_count, tsc_offset */

Could I talk you into omitting the 1/2 part, as there's going to
be a 3 for at least MCOMMIT? Just "general" ought to be clear
enough, I would think.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 06 15:17:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 15:17:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWLnA-0003SJ-JC; Wed, 06 May 2020 15:17:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3H5S=6U=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jWLn9-0003SE-FW
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 15:16:59 +0000
X-Inumbo-ID: a0b6ffda-8fac-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0b6ffda-8fac-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 15:16:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 57E58AE07;
 Wed,  6 May 2020 15:17:00 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] xen/sched: always modify vcpu pause flags atomically
Date: Wed,  6 May 2020 17:16:55 +0200
Message-Id: <20200506151655.26445-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

credit2 is currently modifying the pause flags of vcpus non-atomically
via sched_set_pause_flags() and sched_clear_pause_flags(). This is
dangerous as there are cases where the pause flags are modified without
any lock held.

So drop the non-atomic pause flag modification functions and rename the
atomic ones dropping the _atomic suffix.

Fixes: a76255b4266516 ("xen/sched: make credit2 scheduler vcpu agnostic.")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
It should be noted that this issue wasn't introduced by core scheduling,
as credit2 was already using the non-atomic __set_bit() and
__clear_bit() variants before.
---
 xen/common/sched/credit.c  |  4 ++--
 xen/common/sched/private.h | 22 +---------------------
 2 files changed, 3 insertions(+), 23 deletions(-)

diff --git a/xen/common/sched/credit.c b/xen/common/sched/credit.c
index 93d89da278..d0aa017c64 100644
--- a/xen/common/sched/credit.c
+++ b/xen/common/sched/credit.c
@@ -453,7 +453,7 @@ static inline void __runq_tickle(const struct csched_unit *new)
                     SCHED_UNIT_STAT_CRANK(cur, kicked_away);
                     SCHED_UNIT_STAT_CRANK(cur, migrate_r);
                     SCHED_STAT_CRANK(migrate_kicked_away);
-                    sched_set_pause_flags_atomic(cur->unit, _VPF_migrating);
+                    sched_set_pause_flags(cur->unit, _VPF_migrating);
                 }
                 /* Tickle cpu anyway, to let new preempt cur. */
                 SCHED_STAT_CRANK(tickled_busy_cpu);
@@ -973,7 +973,7 @@ csched_unit_acct(struct csched_private *prv, unsigned int cpu)
         {
             SCHED_UNIT_STAT_CRANK(svc, migrate_r);
             SCHED_STAT_CRANK(migrate_running);
-            sched_set_pause_flags_atomic(currunit, _VPF_migrating);
+            sched_set_pause_flags(currunit, _VPF_migrating);
             /*
              * As we are about to tickle cpu, we should clear its bit in
              * idlers. But, if we are here, it means there is someone running
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index 367811a12f..b9a5b4c01c 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -172,7 +172,7 @@ static inline void sched_set_pause_flags(struct sched_unit *unit,
     struct vcpu *v;
 
     for_each_sched_unit_vcpu ( unit, v )
-        __set_bit(bit, &v->pause_flags);
+        set_bit(bit, &v->pause_flags);
 }
 
 /* Clear a bit in pause_flags of all vcpus of a unit. */
@@ -181,26 +181,6 @@ static inline void sched_clear_pause_flags(struct sched_unit *unit,
 {
     struct vcpu *v;
 
-    for_each_sched_unit_vcpu ( unit, v )
-        __clear_bit(bit, &v->pause_flags);
-}
-
-/* Set a bit in pause_flags of all vcpus of a unit via atomic updates. */
-static inline void sched_set_pause_flags_atomic(struct sched_unit *unit,
-                                                unsigned int bit)
-{
-    struct vcpu *v;
-
-    for_each_sched_unit_vcpu ( unit, v )
-        set_bit(bit, &v->pause_flags);
-}
-
-/* Clear a bit in pause_flags of all vcpus of a unit via atomic updates. */
-static inline void sched_clear_pause_flags_atomic(struct sched_unit *unit,
-                                                  unsigned int bit)
-{
-    struct vcpu *v;
-
     for_each_sched_unit_vcpu ( unit, v )
         clear_bit(bit, &v->pause_flags);
 }
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Wed May 06 15:53:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 15:53:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWMMT-0006dG-Dv; Wed, 06 May 2020 15:53:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWMMR-0006dB-Mv
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 15:53:27 +0000
X-Inumbo-ID: b87fb882-8fb1-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b87fb882-8fb1-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 15:53:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fSaDVIciZbcVN6Wo2ZD3qzLzeZ8f51LNhCHrcPPSBfo=; b=r1uVv8HSgT46BObUW56QDmnNX
 q4KrnKV7ikOUZtEm5z053ERaGGPI14gWImN2sbc4i07oMKb1gmFFnnaY1FbKjIiDKJDeyYMQm2aKv
 Yheiv/dWeF/u0iDRlEPw/Q9tMviZgn8OQ+kjcpK3WcLW9uBIxcL7CQA+vw1XDEB2ZjtBU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWMMP-0007z7-Cu; Wed, 06 May 2020 15:53:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWMMP-0000n4-3K; Wed, 06 May 2020 15:53:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWMMP-0008Pv-0X; Wed, 06 May 2020 15:53:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150040-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 150040: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.11-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:debian-hvm-install:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.11-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.11-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.11-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=7dd2ac39e40f0afe1cc6d879bfe65cbf19520cab
X-Osstest-Versions-That: xen=96a8b5bc48be2ae9691369849036453f8850135b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 15:53:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150040 xen-4.11-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150040/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 10 debian-hvm-install fail never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  7dd2ac39e40f0afe1cc6d879bfe65cbf19520cab
baseline version:
 xen                  96a8b5bc48be2ae9691369849036453f8850135b

Last test of basis   149701  2020-04-17 12:06:32 Z   19 days
Testing same since   150040  2020-05-05 16:06:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Juergen Gross <jgross@suse.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   96a8b5bc48..7dd2ac39e4  7dd2ac39e40f0afe1cc6d879bfe65cbf19520cab -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed May 06 16:14:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 16:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWMgu-0000Xe-Fa; Wed, 06 May 2020 16:14:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qiEG=6U=gmail.com=malus.brandywine@srs-us1.protection.inumbo.net>)
 id 1jWMgt-0000XZ-22
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 16:14:35 +0000
X-Inumbo-ID: acbf19a4-8fb4-11ea-ae69-bc764e2007e4
Received: from mail-vs1-xe2e.google.com (unknown [2607:f8b0:4864:20::e2e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id acbf19a4-8fb4-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 16:14:34 +0000 (UTC)
Received: by mail-vs1-xe2e.google.com with SMTP id a5so1364194vsm.7
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 09:14:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=l3mdOtu577kPSbiCaat4KPi2GMtAE32pt3egFdx040Y=;
 b=KPANkYJMK9xkhrEW3tQNVuLFiEV16dJGY3sR150cIg2XHHEv25y9/LJX7fdGOVxcR8
 wuqgipzvEJKr+39+wAxRC4a6mnypp/v0x+sdhgkYSjd9F0R+GFQY/gwt4MBjHNbMuhU6
 2jK7aCfCUMI0BBYD80lCpDpx/QD01z/ObdPDQg3nV3asHyklWO0Bi/94V9wpq/x1FGKU
 7zST94iC1ywWrJ0GSfv7iQQQTQADmpsE+ZLEd5oUK83RMkYTO3wQ0dDe6h7E+EFG/VHg
 Y1XW5aH4Fy8oscxLVzjebZwTTWDiWQrxnLjbCzCMr4YxVrnktEhF9k111mU8BOdslJM9
 Dxjw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=l3mdOtu577kPSbiCaat4KPi2GMtAE32pt3egFdx040Y=;
 b=hpRWZ07ZHMX6eHzkctLoGtLCdHiFsRBm4kirwvJSNrJy5+MxbA+DWj0gqYmecxSKdY
 wCN0+tCcXWcfYGx3bEhdhPb167KOAo2BnoPqmx+PO2yDydOdyRozw2c09e4Wkx20CNVa
 Lh7jou+31YB9G5PhUIMpQ8FYCksYIMqlWqeBtnVFRWOQvmXGZuEaSemfoPF5OgUAhZbA
 bRwrNy5FZv1lb3jO+Ae+L5yc4ClBs6fUq7lXgxxBGIt0rp4ECdZ9k+F07X04sbluNJFX
 YqAZcoWyMCZqD40oZvPfhBAZrIOEUIs3JFF5dV7pAtcQgBHjULfJx+E324Rm2+ss3/4U
 OA8g==
X-Gm-Message-State: AGi0PuYJYwNvdpcWCJZASyYKMeJYUQGiN772KVCnMaMHyIjo/LyJoDyX
 e1Ia/mPsP8FCyHJEs5dSBRnSia30owjgHc72bak=
X-Google-Smtp-Source: APiQypLNpB2Sj/pWyvcXNlaPIw1tzPhFhoMqKURGqGI+nUHR+ly4iTdXr09t+e+YRVuUr54vVc5DYkgc8yGt00m4lm4=
X-Received: by 2002:a67:80d6:: with SMTP id b205mr8894264vsd.57.1588781674048; 
 Wed, 06 May 2020 09:14:34 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
In-Reply-To: <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
From: Nataliya Korovkina <malus.brandywine@gmail.com>
Date: Wed, 6 May 2020 12:14:22 -0400
Message-ID: <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Peng Fan <peng.fan@nxp.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, "minyard@acm.org" <minyard@acm.org>,
 Roman Shaposhnik <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 6, 2020 at 9:43 AM Boris Ostrovsky
<boris.ostrovsky@oracle.com> wrote:
>
>
> On 5/6/20 9:08 AM, Nataliya Korovkina wrote:
> > Hello,
> >
> > What I found out: rpi_firmware_property_list() allocates memory from
> > dma_atomic_pool which was mapped to VMALLOC region, so virt_to_page()
> > is not eligible in this case.
>
>
> So then it seems it didn't go through xen_swiotlb_alloc_coherent(). In
> which case it has no business calling xen_swiotlb_free_coherent().
>
>
> -boris
>
>
>
>

It does go through it.
dma_alloc_coherent() indirectly calls xen_swiotlb_alloc_coherent();
xen_alloc_coherent_pages() then eventually calls arch_dma_alloc() in
remap.c, which successfully allocates pages from the atomic pool.

The patch Julien offered for domain_build.c moved the Dom0 banks into
the first GB of RAM.
That masked the previous symptom (a crash during allocation), because
we now avoid the path where we mark a page "XenMapped".

But the symptom still remains in xen_swiotlb_free_coherent(), because
"TestPage..." is called unconditionally, and virt_to_page() is not
applicable to such allocations.

Thanks,
Nataliya


From xen-devel-bounces@lists.xenproject.org Wed May 06 16:40:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 16:40:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWN62-0002xS-O8; Wed, 06 May 2020 16:40:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qF4D=6U=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jWN61-0002xM-3j
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 16:40:33 +0000
X-Inumbo-ID: 4d23be88-8fb8-11ea-b9cf-bc764e2007e4
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d23be88-8fb8-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 16:40:32 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id h4so3292745wmb.4
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 09:40:32 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=MkI6n+fViqiHgtCJ+QWC+BPhYGKK5stRiIiXuexn7Fg=;
 b=K7xe6adkFQG3t9eaIUCWmA8uZOZA3tuIzdYrxrWeaiWX/IiCzi0YDyjdMNWTWOotKG
 8K0G0MKnvb9Vnlm1LtEvFdXqifTunmogP4Xmtx6kIKsEXz/Ma2v39sV5ElbKKMnmbjqR
 HmBgUOR55OCJ/nOT6+qvXsIJvyU/WnDwKJTakoYddQ9gG8ipAIXxocqNaAy/jbW01xQ3
 sqoQY0tEoma8UZhEN3upDmKvQfMiPZ+fS5hsWgvwZd324orxtzf9zeQkINDgiEiO8ua4
 hCc4LdoK5fGKWMgUhQhgS+Bzyv3TtsEY/QYtz1542TIcXk6ToT8rJizedAdmKXoq0UXD
 qchg==
X-Gm-Message-State: AGi0PuZcLvsADVgSnODozKUgsOH7iuKNGQWQs159cNAaqHlPrBv6gCGg
 3IesB/sP/rui6450pwU0Qr4=
X-Google-Smtp-Source: APiQypJXVodWL27zHkWTWGS2VT/N7zjssoQQz1SHi2w7+RdFqzfuz3UtnYfDDIaYTWoBWy2EyTXyuA==
X-Received: by 2002:a05:600c:14d4:: with SMTP id
 i20mr5755869wmh.118.1588783231650; 
 Wed, 06 May 2020 09:40:31 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id z132sm910961wmc.29.2020.05.06.09.40.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 06 May 2020 09:40:31 -0700 (PDT)
Date: Wed, 6 May 2020 16:40:29 +0000
From: Wei Liu <wl@xen.org>
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Subject: Re: [PATCH v18 1/2] tools/libxc: add VM forking functions
Message-ID: <20200506164029.ako4t7yubxslhcg2@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <a59dabe3a40d4f3709d3ad6ca605523f180c2dc5.1588772376.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <a59dabe3a40d4f3709d3ad6ca605523f180c2dc5.1588772376.git.tamas.lengyel@intel.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 06, 2020 at 06:41:43AM -0700, Tamas K Lengyel wrote:
> Add functions to issue VM forking hypercalls
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed May 06 16:44:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 16:44:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWN9i-00036z-7y; Wed, 06 May 2020 16:44:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+hl2=6U=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jWN9g-00036t-J4
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 16:44:20 +0000
X-Inumbo-ID: d4250d42-8fb8-11ea-9887-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4250d42-8fb8-11ea-9887-bc764e2007e4;
 Wed, 06 May 2020 16:44:18 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id x4so3316973wmj.1
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 09:44:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=JGb1EsNymSTckzPGaVVggD9lhdcR4lnc1L48Drw2mgM=;
 b=DyHpAYZxEvQ5y5kC1bVmLOnoT9PgUaokBGT1e4r1SJSpT/TXad/liqZwgxFwPnUl2b
 BItJDamG23d+NU9hJDa9D3uoxfAvg71GAF0K44I0pptcmd4rCx/luOSad+WnSeTH2S7d
 2Oea3ttvCan/7U8UtV4Wjtq2OulMgJuVTnpH1UYD6NomXnwWZVbmEdrrzpdRBkGeLVtT
 xigpKQ11za/j3r6Tlxppa9hWReDTp2NomzSPoqQIR5UPVSsXbddPIdHEm90E3I6qoBpD
 cosM4M7FWoYK5amKLKma+xegkAc1KGlVMRA4DgBVaweHsxdhvTTYU/lEkCuK8oUL0Unu
 gfZA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=JGb1EsNymSTckzPGaVVggD9lhdcR4lnc1L48Drw2mgM=;
 b=KLemtokeBzqMnWJiNjQDAzgtPniVhly1RjBImZwfX3m/OLxmwDGXZiIEoScy8189JO
 z2ULKVAVhXtFz7JVucPkmL9v8uMVQsA/XkvrHtvkIwQol+SX42L9+mZQqZMtRIJDBC/p
 Hhz7Kf8TJYVNasntYET9ti7AMv9QO5tUaboe9rNM2kQlwy0hvpb7Mrg05lZbGMQvzvm6
 2dtzVJXfBtv/hk9evvzwhiLv5FNOG7kc7752HxFj8SUSJbVQ3uqMkU7AM2indyTEhdKN
 siaHWjEVyUC36fHqdiRLUEmDK7nqOLUh6YkV4YyAwa6lOazUUIOsF7+GJBI5TCVsNVym
 vGrg==
X-Gm-Message-State: AGi0PuammGfBcRXKuUJGndEpHJ3Z3OyYaoFhrKTfFLNffh+SZku1G52B
 +Ju4942iCeGXpJjz12CNEQU=
X-Google-Smtp-Source: APiQypL2bpf6GgoTago2PS59p5RSm/g5IiKgKZokh8uecB2Xv5CAltshveOwdC5JOKUD96m1HogACQ==
X-Received: by 2002:a1c:e906:: with SMTP id q6mr5287419wmc.62.1588783457916;
 Wed, 06 May 2020 09:44:17 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id u16sm1084130wrq.17.2020.05.06.09.44.16
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 06 May 2020 09:44:17 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
In-Reply-To: <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
Subject: RE: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Date: Wed, 6 May 2020 17:44:15 +0100
Message-ID: <009601d623c5$9547abc0$bfd70340$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQIOC3/NwyZzJjhdRz0oBcI7Is/lxwF9QIbqAnK/qAyoC/QOkA==
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 29 April 2020 12:02
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger
> Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for save/restore of 'domain' context
> 
> On 07.04.2020 19:38, Paul Durrant wrote:
> > +void __init domain_register_save_type(unsigned int tc, const char *name,
> > +                                      bool per_vcpu,
> > +                                      domain_save_handler save,
> > +                                      domain_load_handler load)
> > +{
> > +    BUG_ON(tc > ARRAY_SIZE(handlers));
> 
> >=

Yes.

> 
> > +    ASSERT(!handlers[tc].save);
> > +    ASSERT(!handlers[tc].load);
> > +
> > +    handlers[tc].name = name;
> > +    handlers[tc].per_vcpu = per_vcpu;
> > +    handlers[tc].save = save;
> > +    handlers[tc].load = load;
> > +}
> > +
> > +int domain_save_begin(struct domain_context *c, unsigned int tc,
> > +                      const char *name, const struct vcpu *v, size_t len)
> 
> I find it quite odd for a function like this one to take a
> struct vcpu * rather than a struct domain *. See also below
> comment on the vcpu_id field in the public header.

I guess struct domain + vcpu_id can be used.

> 
> > +{
> > +    int rc;
> > +
> > +    if ( c->log )
> > +        gdprintk(XENLOG_INFO, "%pv save: %s (%lu)\n", v, name,
> > +                 (unsigned long)len);
> > +
> > +    BUG_ON(tc != c->desc.typecode);
> > +    BUG_ON(v->vcpu_id != c->desc.vcpu_id);
> > +
> > +    ASSERT(!c->data_len);
> > +    c->data_len = c->desc.length = len;
> > +
> > +    rc = c->copy.write(c->priv, &c->desc, sizeof(c->desc));
> > +    if ( rc )
> > +        return rc;
> > +
> > +    c->desc.length = 0;
> > +
> > +    return 0;
> > +}
> > +
> > +int domain_save_data(struct domain_context *c, const void *src, size_t len)
> > +{
> > +    if ( c->desc.length + len > c->data_len )
> > +        return -ENOSPC;
> > +
> > +    c->desc.length += len;
> > +
> > +    return c->copy.write(c->priv, src, len);
> > +}
> > +
> > +int domain_save_end(struct domain_context *c)
> 
> I'm sure there is a reason for the split into three load/save
> functions (begin/data/end), but I can't figure it and the
> description also doesn't explain it. They're all used together
> only afaics, in domain_{save,load}_entry(). Or wait, there are
> DOMAIN_{SAVE,LOAD}_BEGIN() macros apparently allowing separate
> use of ..._begin(), but then it's still not clear why ..._end()
> need to be separate from ..._data().

The split is to avoid the need to double-buffer in the save code,
shared info being a case in point. If the entire save record needs to
be written in one call then the shared info content would need to be
copied into a newly allocated save record and then copied again into
the aggregate context buffer.

>=20
> > +int domain_save(struct domain *d, domain_write_entry write, void =
*priv,
> > +                bool dry_run)
> > +{
> > +    struct domain_context c =3D {
> > +        .copy.write =3D write,
> > +        .priv =3D priv,
> > +        .log =3D !dry_run,
> > +    };
> > +    struct domain_save_header h =3D {
>=20
> const? Perhaps even static?
>=20

Ok, static is a good idea.

> > +        .magic = DOMAIN_SAVE_MAGIC,
> > +        .version = DOMAIN_SAVE_VERSION,
> > +    };
> > +    struct domain_save_header e;
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    ASSERT(d != current->domain);
> > +
> > +    if ( d->is_dying )
> > +        return -EINVAL;
> 
> Could I talk you into using less generic an error code here, e.g.
> -ESRCH or -ENXIO? There look to be further uses of EINVAL that
> may want replacing.
> 

I'm going to drop this check as it creates a problem for live update,
where we actually do need to extract and restore state for dying
domains.

> > +    domain_pause(d);
> > +
> > +    c.desc.typecode = DOMAIN_SAVE_CODE(HEADER);
> > +
> > +    rc = DOMAIN_SAVE_ENTRY(HEADER, &c, d->vcpu[0], &h, sizeof(h));
> > +    if ( rc )
> > +        goto out;
> > +
> > +    for ( i = 0; i < ARRAY_SIZE(handlers); i++ )
> > +    {
> > +        domain_save_handler save = handlers[i].save;
> > +
> > +        if ( !save )
> > +            continue;
> > +
> > +        memset(&c.desc, 0, sizeof(c.desc));
> > +        c.desc.typecode = i;
> > +
> > +        if ( handlers[i].per_vcpu )
> > +        {
> > +            struct vcpu *v;
> > +
> > +            for_each_vcpu ( d, v )
> > +            {
> > +                c.desc.vcpu_id = v->vcpu_id;
> > +
> > +                rc = save(v, &c, dry_run);
> > +                if ( rc )
> > +                    goto out;
> > +            }
> > +        }
> > +        else
> > +        {
> > +            rc = save(d->vcpu[0], &c, dry_run);
> > +            if ( rc )
> > +                goto out;
> > +        }
> > +    }
> > +
> > +    memset(&c.desc, 0, sizeof(c.desc));
> > +    c.desc.typecode = DOMAIN_SAVE_CODE(END);
> > +
> > +    rc = DOMAIN_SAVE_ENTRY(END, &c, d->vcpu[0], &e, 0);
> 
> By the looks of it you're passing uninitialized e here; it's just
> that the struct has no members. It would look less odd if you used
> NULL here. Otherwise please don't use literal 0, but sizeof() for
> the last parameter.

I'll init the 'e'.

> 
> > +int domain_load_begin(struct domain_context *c, unsigned int tc,
> > +                      const char *name, const struct vcpu *v, size_t len,
> > +                      bool exact)
> > +{
> > +    if ( c->log )
> > +        gdprintk(XENLOG_INFO, "%pv load: %s (%lu)\n", v, name,
> > +                 (unsigned long)len);
> > +
> > +    BUG_ON(tc != c->desc.typecode);
> > +    BUG_ON(v->vcpu_id != c->desc.vcpu_id);
> > +
> > +    if ( (exact && (len != c->desc.length)) ||
> > +         (len < c->desc.length) )
> > +        return -EINVAL;
> 
> How about
> 
>     if ( exact ? len != c->desc.length
>                : len < c->desc.length )
> 

Yes, that doesn't look too bad.

> ? I'm also unsure about the < - don't you mean > instead? Too
> little data would be compensated by zero padding, but too
> much data can't be dealt with. But maybe I'm getting the sense
> of len wrong ...

I think the < is correct. The caller needs to have at least enough space
to accommodate the context record.
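The semantics being debated can be pinned down with a small standalone check. Here `len` is the space the caller provides and `rec_len` is the record length from the descriptor; the names and the free-standing function are illustrative, not the patch's actual code. In exact mode the two must match; in relaxed mode the caller merely needs room for the whole record, which is why `<` (too little space) is the failure case.

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * exact: caller's buffer must be exactly the record size.
 * !exact: caller's buffer must be at least the record size; a shorter
 *         record is tolerated (the remainder can be zero padded).
 */
static int check_load_len(size_t len, size_t rec_len, bool exact)
{
    if ( exact ? len != rec_len : len < rec_len )
        return -EINVAL;
    return 0;
}
```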

> > +int domain_load(struct domain *d, domain_read_entry read, void *priv)
> > +{
> > +    struct domain_context c = {
> > +        .copy.read = read,
> > +        .priv = priv,
> > +        .log = true,
> > +    };
> > +    struct domain_save_header h;
> > +    int rc;
> > +
> > +    ASSERT(d != current->domain);
> > +
> > +    if ( d->is_dying )
> > +        return -EINVAL;
> > +
> > +    rc = c.copy.read(c.priv, &c.desc, sizeof(c.desc));
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( c.desc.typecode != DOMAIN_SAVE_CODE(HEADER) || c.desc.vcpu_id ||
> > +         c.desc.flags )
> > +        return -EINVAL;
> > +
> > +    rc = DOMAIN_LOAD_ENTRY(HEADER, &c, d->vcpu[0], &h, sizeof(h), true);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( h.magic != DOMAIN_SAVE_MAGIC || h.version != DOMAIN_SAVE_VERSION )
> > +        return -EINVAL;
> > +
> > +    domain_pause(d);
> > +
> > +    for (;;)
> > +    {
> > +        unsigned int i;
> > +        unsigned int flags;
> > +        domain_load_handler load;
> > +        struct vcpu *v;
> > +
> > +        rc = c.copy.read(c.priv, &c.desc, sizeof(c.desc));
> > +        if ( rc )
> > +            break;
> > +
> > +        rc = -EINVAL;
> > +
> > +        flags = c.desc.flags;
> > +        if ( flags & ~DOMAIN_SAVE_FLAG_IGNORE )
> > +            break;
> > +
> > +        if ( c.desc.typecode == DOMAIN_SAVE_CODE(END) ) {
> 
> Brace placement.
> 

Oops, yes.

> > +            if ( !(flags & DOMAIN_SAVE_FLAG_IGNORE) )
> > +                rc = DOMAIN_LOAD_ENTRY(END, &c, d->vcpu[0], NULL, 0, true);
>=20
> The public header says DOMAIN_SAVE_FLAG_IGNORE is invalid for
> END.
> 

Indeed, and it should be... don't know why I put that in there.

> > +            break;
> > +        }
> > +
> > +        i = c.desc.typecode;
> > +        if ( i >= ARRAY_SIZE(handlers) )
> > +            break;
> > +
> > +        if ( (!handlers[i].per_vcpu && c.desc.vcpu_id) ||
> > +             (c.desc.vcpu_id >= d->max_vcpus) )
> > +            break;
> > +
> > +        v = d->vcpu[c.desc.vcpu_id];
> > +
> > +        if ( flags & DOMAIN_SAVE_FLAG_IGNORE )
> > +        {
> > +            /* Sink the data */
> > +            rc = domain_load_entry(&c, c.desc.typecode, "IGNORED",
> > +                                   v, NULL, c.desc.length, true);
> 
> IOW the read handlers need to be able to deal with a NULL dst?
> Sounds a little fragile to me. Is there a reason
> domain_load_data() can't skip the call to read()?

Something has to move the cursor so sinking the data using a NULL dst
seemed like the best way to avoid the need for a separate callback
function.
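The "NULL dst sinks the data but still advances the cursor" behaviour can be modelled with a toy in-memory reader. The `struct rdr`/`rdr_read` names are invented for this sketch; the real read callback is whatever the caller of `domain_load()` supplies.

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

/* Toy reader over a fixed buffer, standing in for the load callback. */
struct rdr {
    const uint8_t *buf;
    size_t cursor, size;
};

static int rdr_read(struct rdr *r, void *dst, size_t len)
{
    if ( r->cursor + len > r->size )
        return -EIO;
    if ( dst )                        /* NULL dst: sink the data */
        memcpy(dst, r->buf + r->cursor, len);
    r->cursor += len;                 /* cursor advances either way */
    return 0;
}
```

Whether the handlers or the core does the sinking, the invariant is the same: an ignored record must consume exactly `desc.length` bytes of the stream so the next descriptor is read from the right place.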

> 
> > --- /dev/null
> > +++ b/xen/include/public/save.h
> > @@ -0,0 +1,84 @@
> > +/*
> > + * save.h
> > + *
> > + * Structure definitions for common PV/HVM domain state that is held by
> > + * Xen and must be saved along with the domain's memory.
> > + *
> > + * Copyright Amazon.com Inc. or its affiliates.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > + * of this software and associated documentation files (the "Software"), to
> > + * deal in the Software without restriction, including without limitation the
> > + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> > + * sell copies of the Software, and to permit persons to whom the Software is
> > + * furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> > + * DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#ifndef __XEN_PUBLIC_SAVE_H__
> > +#define __XEN_PUBLIC_SAVE_H__
> > +
> > +#include "xen.h"
> > +
> > +/* Each entry is preceded by a descriptor */
> > +struct domain_save_descriptor {
> 
> Throughout this new public header, let's please play nice in name
> space terms: Prefix global scope items with xen_ / XEN_
> respectively, and don't introduce violations of the standard's
> rules (e.g. _DOMAIN_SAVE_FLAG_IGNORE below). The latter point also
> goes for naming outside of the public header, of course.

Sorry, I just didn't pay enough attention to the cut'n'paste from the
hvm save.h; I'll fix these up.

> 
> > +    uint16_t typecode;
> > +    /*
> > +     * Each entry will contain either to global or per-vcpu domain state.
> 
> s/contain/apply/ or drop "to"?

Yes, 'to' should be dropped.

> 
> > +     * Entries relating to global state should have zero in this field.
> 
> Is there a reason to specify zero?
> 

Not particularly but I thought it best to at least specify a value in
all cases.

> > +     */
> > +    uint16_t vcpu_id;
> 
> Seeing (possibly) multi-instance records in HVM state (PIC, IO-APIC, HPET),
> wouldn't it be better to generalize this to ID, meaning "vCPU ID" just for
> per-vCPU state?
> 

True, a generic 'instance' would be needed for such records. I'll do as
you suggest.

> > +    uint32_t flags;
> > +    /*
> > +     * When restoring state this flag can be set in a descriptor to cause
> > +     * its content to be ignored.
> > +     *
> > +     * NOTE: It is invalid to set this flag for HEADER or END records (see
> > +     *       below).
> > +#define _DOMAIN_SAVE_FLAG_IGNORE 0
> > +#define DOMAIN_SAVE_FLAG_IGNORE (1u << _DOMAIN_SAVE_FLAG_IGNORE)
>=20
> As per your reply to Julien I think this wants further clarification on
> the intentions with this flag. I'm also uncertain it is a good idea to
> allow such partial restores - there may be dependencies between records,
> and hence things can easily go wrong in perhaps non-obvious ways.
> 

OK, I'll drop this for now. It could be added later if it is needed.

> > +    /* Entry length not including this descriptor */
> > +    uint64_t length;
>=20
> Do you really envision descriptors with more than 4Gb of data? If so,
> for 32-bit purposes wouldn't this want to be uint64_aligned_t?
> 

I don't think we'll ever see a single record that big. I'll drop to 32
bits.

> > +};
> > +
> > +/*
> > + * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
> > + * binds these things together.
> > + */
> > +#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
> > +    struct __DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
> > +
> > +#define DOMAIN_SAVE_CODE(_x) \
> > +    (sizeof(((struct __DOMAIN_SAVE_TYPE_##_x *)(0))->c))
> > +#define DOMAIN_SAVE_TYPE(_x) \
> > +    typeof(((struct __DOMAIN_SAVE_TYPE_##_x *)(0))->t)
>=20
> Just like the typeof() I don't think the sizeof() needs an outer
> pair of parentheses. I also don't see why 0 gets parenthesized.
> 

Cut'n'paste... I'll fix it.
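The trick these macros rely on is worth spelling out: the record's type code is encoded as the dimension of a `char` array, so `sizeof()` recovers it as an integer constant expression, while the `t` member carries the record's C type for a `typeof()`-based accessor (a GNU extension, which is part of why the review suggests restricting the header to `__XEN__`/`__XEN_TOOLS__`). The sketch below reproduces the pattern in isolation; the `domain_save_header` layout here is a guess for illustration, not the real Xen structure, and a nonzero code is used since `char c[0]` is not valid standard C (Jan's point about END).

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The type/code binding trick from the patch, in isolation. */
#define DECLARE_DOMAIN_SAVE_TYPE(x, code, type) \
    struct __DOMAIN_SAVE_TYPE_##x { char c[code]; type t; };

/* sizeof() of the char array yields the code as a constant expression;
 * the pointer is never dereferenced because sizeof() is unevaluated. */
#define DOMAIN_SAVE_CODE(x) \
    sizeof(((struct __DOMAIN_SAVE_TYPE_##x *)NULL)->c)

/* Illustrative record types (layouts are assumptions, not Xen's). */
struct domain_save_header { uint32_t magic, version; };

DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header)
DECLARE_DOMAIN_SAVE_TYPE(SHARED_INFO, 2, uint64_t)
```

The benefit over a plain `#define HEADER_CODE 1` is that one declaration binds the symbolic name, the numeric code, and the payload type together, so macros like `DOMAIN_SAVE_ENTRY(HEADER, ...)` can derive all three from a single token.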

> > +/* Terminating entry */
> > +struct domain_save_end {};
> > +DECLARE_DOMAIN_SAVE_TYPE(END, 0, struct domain_save_end);
>=20
> If the header gets a __XEN__ || __XEN_TOOLS__ restriction, as
> suggested by Julien, using 0 here may be fine. If not, char[0]
> and typeof() aren't legal plain C and hence would need to be
> avoided.
> 

I'll restrict to tools.

> > --- /dev/null
> > +++ b/xen/include/xen/save.h
> > @@ -0,0 +1,152 @@
> > +/*
> > + * save.h: support routines for save/restore
> > + *
> > + * Copyright Amazon.com Inc. or its affiliates.
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> > + * more details.
> > + *
> > + * You should have received a copy of the GNU General Public License along with
> > + * this program; If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#ifndef __XEN_SAVE_H__
> > +#define __XEN_SAVE_H__
> > +
> > +#include <xen/sched.h>
> > +#include <xen/types.h>
> > +#include <xen/init.h>
> 
> Please sort these.
> 

Ok.

> > +#include <public/xen.h>
> > +#include <public/save.h>
> 
> The latter includes the former anyway - is the former necessary
> for some reason nevertheless?
> 

No. I suspect it was at one point, but it can be dropped.

> > +struct domain_context;
> > +
> > +int domain_save_begin(struct domain_context *c, unsigned int tc,
> > +                      const char *name, const struct vcpu *v, size_t len);
> > +
> > +#define DOMAIN_SAVE_BEGIN(_x, _c, _v, _len) \
> > +        domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_len))
>=20
> In new code I'd like to ask for no leading underscores on macro
> parameters as well as no unnecessary parenthesization of macro
> parameters (e.g. when they simply get passed on to another macro
> or function without being part of a larger expression).

Personally I think it is generally good practice to parenthesize, but I
can drop it if you prefer.

> 
> > +int domain_save_data(struct domain_context *c, const void *data, size_t len);
> > +int domain_save_end(struct domain_context *c);
> > +
> > +static inline int domain_save_entry(struct domain_context *c,
> > +                                    unsigned int tc, const char *name,
> > +                                    const struct vcpu *v, const void *src,
> > +                                    size_t len)
> > +{
> > +    int rc;
> > +
> > +    rc = domain_save_begin(c, tc, name, v, len);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc = domain_save_data(c, src, len);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    return domain_save_end(c);
> > +}
> > +
> > +#define DOMAIN_SAVE_ENTRY(_x, _c, _v, _src, _len) \
> > +    domain_save_entry((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_src), (_len))
> > +
> > +int domain_load_begin(struct domain_context *c, unsigned int tc,
> > +                      const char *name, const struct vcpu *v, size_t len,
> > +                      bool exact);
> > +
> > +#define DOMAIN_LOAD_BEGIN(_x, _c, _v, _len, _exact) \
> > +        domain_load_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_len), \
> > +                          (_exact));
> > +
> > +int domain_load_data(struct domain_context *c, void *data, size_t len);
> > +int domain_load_end(struct domain_context *c);
> > +
> > +static inline int domain_load_entry(struct domain_context *c,
> > +                                    unsigned int tc, const char *name,
> > +                                    const struct vcpu *v, void *dst,
> > +                                    size_t len, bool exact)
> > +{
> > +    int rc;
> > +
> > +    rc = domain_load_begin(c, tc, name, v, len, exact);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc = domain_load_data(c, dst, len);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    return domain_load_end(c);
> > +}
> > +
> > +#define DOMAIN_LOAD_ENTRY(_x, _c, _v, _dst, _len, _exact) \
> > +    domain_load_entry((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_dst), (_len), \
> > +                          (_exact))
> > +
> > +/*
> > + * The 'dry_run' flag indicates that the caller of domain_save() (see
> > + * below) is not trying to actually acquire the data, only the size
> > + * of the data. The save handler can therefore limit work to only that
> > + * which is necessary to call DOMAIN_SAVE_BEGIN/ENTRY() with an accurate
> > + * value for '_len'.
> > + */
> > +typedef int (*domain_save_handler)(const struct vcpu *v,
> > +                                   struct domain_context *h,
> > +                                   bool dry_run);
> > +typedef int (*domain_load_handler)(struct vcpu *v,
> > +                                   struct domain_context *h);
>=20
> Does a load handler have a need to modify struct domain_context?
> 

Not sure. I'll try const-ing.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Wed May 06 16:49:38 2020
Subject: Re: [PATCH] x86/svm: Clean up vmcbcleanbits_t handling
To: Jan Beulich <jbeulich@suse.com>
References: <20200505173250.5916-1-andrew.cooper3@citrix.com>
 <961921e3-c882-dad0-837e-71644f8bf208@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d64d8593-9d88-2d42-69dc-c1d8b7018c99@citrix.com>
Date: Wed, 6 May 2020 17:49:23 +0100
In-Reply-To: <961921e3-c882-dad0-837e-71644f8bf208@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>

On 06/05/2020 16:10, Jan Beulich wrote:
> On 05.05.2020 19:32, Andrew Cooper wrote:
>> @@ -435,17 +435,13 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
>>      ASSERT(n2vmcb != NULL);
>>  
>>      /* Check if virtual VMCB cleanbits are valid */
>> -    vcleanbits_valid = 1;
>> -    if ( svm->ns_ovvmcb_pa == INVALID_PADDR )
>> -        vcleanbits_valid = 0;
>> -    if (svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr)
>> -        vcleanbits_valid = 0;
>> -
>> -#define vcleanbit_set(_name)	\
>> -    (vcleanbits_valid && ns_vmcb->cleanbits.fields._name)
>> +    if ( svm->ns_ovvmcb_pa != INVALID_PADDR &&
>> +         svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr )
>> +        clean = ns_vmcb->cleanbits;
> It looks to me as if the proper inversion of the original condition
> would mean == on the right side of &&, not != .

Oops, yes.  Fixed.
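The inversion bug is a textbook De Morgan slip: the original clears the flag when `pa == INVALID_PADDR` *or* `pa != nv_vvmcxaddr`, so the positive form is `pa != INVALID_PADDR && pa == nv_vvmcxaddr`, with `==` (not `!=`) on the right of the `&&`. A minimal check, with the two conditions reduced to free-standing functions (names and the `INVALID_PADDR` stand-in are illustrative):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define INVALID_PADDR (~(uint64_t)0)

/* Original form: start valid, clear on either condition. */
static bool valid_orig(uint64_t pa, uint64_t vvmcx)
{
    bool valid = true;

    if ( pa == INVALID_PADDR )
        valid = false;
    if ( pa != vvmcx )
        valid = false;
    return valid;
}

/* De Morgan: !(A || B) == (!A && !B), so validity is
 * pa != INVALID_PADDR && pa == vvmcx  -- note the '=='. */
static bool valid_new(uint64_t pa, uint64_t vvmcx)
{
    return pa != INVALID_PADDR && pa == vvmcx;
}
```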

>
>> --- a/xen/include/asm-x86/hvm/svm/vmcb.h
>> +++ b/xen/include/asm-x86/hvm/svm/vmcb.h
>> @@ -384,34 +384,21 @@ typedef union
>>  
>>  typedef union
>>  {
>> -    uint32_t bytes;
>> -    struct
>> -    {
>> -        /* cr_intercepts, dr_intercepts, exception_intercepts,
>> -         * general{1,2}_intercepts, pause_filter_count, tsc_offset */
>> -        uint32_t intercepts: 1;
>> -        /* iopm_base_pa, msrpm_base_pa */
>> -        uint32_t iopm: 1;
>> -        /* guest_asid */
>> -        uint32_t asid: 1;
>> -        /* vintr */
>> -        uint32_t tpr: 1;
>> -        /* np_enable, h_cr3, g_pat */
>> -        uint32_t np: 1;
>> -        /* cr0, cr3, cr4, efer */
>> -        uint32_t cr: 1;
>> -        /* dr6, dr7 */
>> -        uint32_t dr: 1;
>> -        /* gdtr, idtr */
>> -        uint32_t dt: 1;
>> -        /* cs, ds, es, ss, cpl */
>> -        uint32_t seg: 1;
>> -        /* cr2 */
>> -        uint32_t cr2: 1;
>> -        /* debugctlmsr, last{branch,int}{to,from}ip */
>> -        uint32_t lbr: 1;
>> -        uint32_t resv: 21;
>> -    } fields;
>> +    struct {
>> +        bool intercepts:1; /* 0:  cr/dr/exception/general1/2_intercepts,
>> +                            *     pause_filter_count, tsc_offset */
> Could I talk you into omitting the 1/2 part, as there's going to
> be a 3 for at least MCOMMIT? Just "general" ought to be clear
> enough, I would think.

Can do.  I'm not overly happy about this spilling onto two lines, but I
can't think of how to usefully shrink the comment further without losing
information.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 06 16:50:29 2020
From: Wei Liu <wl@xen.org>
To: Xen Development List <xen-devel@lists.xenproject.org>
Subject: [PATCH] libxl: update libxlu_disk_l.[ch]
Date: Wed,  6 May 2020 17:50:18 +0100
Message-Id: <20200506165018.32209-1-wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

Use flex 2.6.4 that is shipped in Debian Buster.

Signed-off-by: Wei Liu <wl@xen.org>
---
Do this because Roger posted a patch to fix the clang build, which requires
updating the same files. I don't want to bury his changes in unrelated
ones.
---
 tools/libxl/libxlu_disk_l.c | 867 ++++++++++++++++++++++--------------
 tools/libxl/libxlu_disk_l.h | 474 +++++++++++++++++---
 2 files changed, 947 insertions(+), 394 deletions(-)

diff --git a/tools/libxl/libxlu_disk_l.c b/tools/libxl/libxlu_disk_l.c
index 944990732b0b..b0ac3a865a61 100644
--- a/tools/libxl/libxlu_disk_l.c
+++ b/tools/libxl/libxlu_disk_l.c
@@ -1,10 +1,7 @@
 #line 2 "libxlu_disk_l.c"
-#line 31 "libxlu_disk_l.l"
 #include "libxl_osdeps.h" /* must come before any other headers */
 
-
-
-#line 8 "libxlu_disk_l.c"
+#line 5 "libxlu_disk_l.c"
 
 #define  YY_INT_ALIGNED short int
 
@@ -12,12 +9,222 @@
 
 #define FLEX_SCANNER
 #define YY_FLEX_MAJOR_VERSION 2
-#define YY_FLEX_MINOR_VERSION 5
-#define YY_FLEX_SUBMINOR_VERSION 39
+#define YY_FLEX_MINOR_VERSION 6
+#define YY_FLEX_SUBMINOR_VERSION 4
 #if YY_FLEX_SUBMINOR_VERSION > 0
 #define FLEX_BETA
 #endif
 
+#ifdef yy_create_buffer
+#define xlu__disk_yy_create_buffer_ALREADY_DEFINED
+#else
+#define yy_create_buffer xlu__disk_yy_create_buffer
+#endif
+
+#ifdef yy_delete_buffer
+#define xlu__disk_yy_delete_buffer_ALREADY_DEFINED
+#else
+#define yy_delete_buffer xlu__disk_yy_delete_buffer
+#endif
+
+#ifdef yy_scan_buffer
+#define xlu__disk_yy_scan_buffer_ALREADY_DEFINED
+#else
+#define yy_scan_buffer xlu__disk_yy_scan_buffer
+#endif
+
+#ifdef yy_scan_string
+#define xlu__disk_yy_scan_string_ALREADY_DEFINED
+#else
+#define yy_scan_string xlu__disk_yy_scan_string
+#endif
+
+#ifdef yy_scan_bytes
+#define xlu__disk_yy_scan_bytes_ALREADY_DEFINED
+#else
+#define yy_scan_bytes xlu__disk_yy_scan_bytes
+#endif
+
+#ifdef yy_init_buffer
+#define xlu__disk_yy_init_buffer_ALREADY_DEFINED
+#else
+#define yy_init_buffer xlu__disk_yy_init_buffer
+#endif
+
+#ifdef yy_flush_buffer
+#define xlu__disk_yy_flush_buffer_ALREADY_DEFINED
+#else
+#define yy_flush_buffer xlu__disk_yy_flush_buffer
+#endif
+
+#ifdef yy_load_buffer_state
+#define xlu__disk_yy_load_buffer_state_ALREADY_DEFINED
+#else
+#define yy_load_buffer_state xlu__disk_yy_load_buffer_state
+#endif
+
+#ifdef yy_switch_to_buffer
+#define xlu__disk_yy_switch_to_buffer_ALREADY_DEFINED
+#else
+#define yy_switch_to_buffer xlu__disk_yy_switch_to_buffer
+#endif
+
+#ifdef yypush_buffer_state
+#define xlu__disk_yypush_buffer_state_ALREADY_DEFINED
+#else
+#define yypush_buffer_state xlu__disk_yypush_buffer_state
+#endif
+
+#ifdef yypop_buffer_state
+#define xlu__disk_yypop_buffer_state_ALREADY_DEFINED
+#else
+#define yypop_buffer_state xlu__disk_yypop_buffer_state
+#endif
+
+#ifdef yyensure_buffer_stack
+#define xlu__disk_yyensure_buffer_stack_ALREADY_DEFINED
+#else
+#define yyensure_buffer_stack xlu__disk_yyensure_buffer_stack
+#endif
+
+#ifdef yylex
+#define xlu__disk_yylex_ALREADY_DEFINED
+#else
+#define yylex xlu__disk_yylex
+#endif
+
+#ifdef yyrestart
+#define xlu__disk_yyrestart_ALREADY_DEFINED
+#else
+#define yyrestart xlu__disk_yyrestart
+#endif
+
+#ifdef yylex_init
+#define xlu__disk_yylex_init_ALREADY_DEFINED
+#else
+#define yylex_init xlu__disk_yylex_init
+#endif
+
+#ifdef yylex_init_extra
+#define xlu__disk_yylex_init_extra_ALREADY_DEFINED
+#else
+#define yylex_init_extra xlu__disk_yylex_init_extra
+#endif
+
+#ifdef yylex_destroy
+#define xlu__disk_yylex_destroy_ALREADY_DEFINED
+#else
+#define yylex_destroy xlu__disk_yylex_destroy
+#endif
+
+#ifdef yyget_debug
+#define xlu__disk_yyget_debug_ALREADY_DEFINED
+#else
+#define yyget_debug xlu__disk_yyget_debug
+#endif
+
+#ifdef yyset_debug
+#define xlu__disk_yyset_debug_ALREADY_DEFINED
+#else
+#define yyset_debug xlu__disk_yyset_debug
+#endif
+
+#ifdef yyget_extra
+#define xlu__disk_yyget_extra_ALREADY_DEFINED
+#else
+#define yyget_extra xlu__disk_yyget_extra
+#endif
+
+#ifdef yyset_extra
+#define xlu__disk_yyset_extra_ALREADY_DEFINED
+#else
+#define yyset_extra xlu__disk_yyset_extra
+#endif
+
+#ifdef yyget_in
+#define xlu__disk_yyget_in_ALREADY_DEFINED
+#else
+#define yyget_in xlu__disk_yyget_in
+#endif
+
+#ifdef yyset_in
+#define xlu__disk_yyset_in_ALREADY_DEFINED
+#else
+#define yyset_in xlu__disk_yyset_in
+#endif
+
+#ifdef yyget_out
+#define xlu__disk_yyget_out_ALREADY_DEFINED
+#else
+#define yyget_out xlu__disk_yyget_out
+#endif
+
+#ifdef yyset_out
+#define xlu__disk_yyset_out_ALREADY_DEFINED
+#else
+#define yyset_out xlu__disk_yyset_out
+#endif
+
+#ifdef yyget_leng
+#define xlu__disk_yyget_leng_ALREADY_DEFINED
+#else
+#define yyget_leng xlu__disk_yyget_leng
+#endif
+
+#ifdef yyget_text
+#define xlu__disk_yyget_text_ALREADY_DEFINED
+#else
+#define yyget_text xlu__disk_yyget_text
+#endif
+
+#ifdef yyget_lineno
+#define xlu__disk_yyget_lineno_ALREADY_DEFINED
+#else
+#define yyget_lineno xlu__disk_yyget_lineno
+#endif
+
+#ifdef yyset_lineno
+#define xlu__disk_yyset_lineno_ALREADY_DEFINED
+#else
+#define yyset_lineno xlu__disk_yyset_lineno
+#endif
+
+#ifdef yyget_column
+#define xlu__disk_yyget_column_ALREADY_DEFINED
+#else
+#define yyget_column xlu__disk_yyget_column
+#endif
+
+#ifdef yyset_column
+#define xlu__disk_yyset_column_ALREADY_DEFINED
+#else
+#define yyset_column xlu__disk_yyset_column
+#endif
+
+#ifdef yywrap
+#define xlu__disk_yywrap_ALREADY_DEFINED
+#else
+#define yywrap xlu__disk_yywrap
+#endif
+
+#ifdef yyalloc
+#define xlu__disk_yyalloc_ALREADY_DEFINED
+#else
+#define yyalloc xlu__disk_yyalloc
+#endif
+
+#ifdef yyrealloc
+#define xlu__disk_yyrealloc_ALREADY_DEFINED
+#else
+#define yyrealloc xlu__disk_yyrealloc
+#endif
+
+#ifdef yyfree
+#define xlu__disk_yyfree_ALREADY_DEFINED
+#else
+#define yyfree xlu__disk_yyfree
+#endif
+
 /* First, we deal with  platform-specific or compiler-specific issues. */
 
 /* begin standard C headers. */
@@ -88,40 +295,32 @@ typedef unsigned int flex_uint32_t;
 #define UINT32_MAX             (4294967295U)
 #endif
 
+#ifndef SIZE_MAX
+#define SIZE_MAX               (~(size_t)0)
+#endif
+
 #endif /* ! C99 */
 
 #endif /* ! FLEXINT_H */
 
-#ifdef __cplusplus
+/* begin standard C++ headers. */
 
-/* The "const" storage-class-modifier is valid. */
-#define YY_USE_CONST
-
-#else	/* ! __cplusplus */
-
-/* C99 requires __STDC__ to be defined as 1. */
-#if defined (__STDC__)
-
-#define YY_USE_CONST
-
-#endif	/* defined (__STDC__) */
-#endif	/* ! __cplusplus */
-
-#ifdef YY_USE_CONST
+/* TODO: this is always defined, so inline it */
 #define yyconst const
+
+#if defined(__GNUC__) && __GNUC__ >= 3
+#define yynoreturn __attribute__((__noreturn__))
 #else
-#define yyconst
+#define yynoreturn
 #endif
 
 /* Returned upon end-of-file. */
 #define YY_NULL 0
 
-/* Promotes a possibly negative, possibly signed char to an unsigned
- * integer for use as an array index.  If the signed char is negative,
- * we want to instead treat it as an 8-bit unsigned char, hence the
- * double cast.
+/* Promotes a possibly negative, possibly signed char to an
+ *   integer in range [0..255] for use as an array index.
  */
-#define YY_SC_TO_UI(c) ((unsigned int) (unsigned char) c)
+#define YY_SC_TO_UI(c) ((YY_CHAR) (c))
 
 /* An opaque pointer. */
 #ifndef YY_TYPEDEF_YY_SCANNER_T
@@ -145,20 +344,16 @@ typedef void* yyscan_t;
  * definition of BEGIN.
  */
 #define BEGIN yyg->yy_start = 1 + 2 *
-
 /* Translate the current start state into a value that can be later handed
  * to BEGIN to return to the state.  The YYSTATE alias is for lex
  * compatibility.
  */
 #define YY_START ((yyg->yy_start - 1) / 2)
 #define YYSTATE YY_START
-
 /* Action number for EOF rule of a given start state. */
 #define YY_STATE_EOF(state) (YY_END_OF_BUFFER + state + 1)
-
 /* Special action meaning "start processing a new file". */
-#define YY_NEW_FILE xlu__disk_yyrestart(yyin ,yyscanner )
-
+#define YY_NEW_FILE yyrestart( yyin , yyscanner )
 #define YY_END_OF_BUFFER_CHAR 0
 
 /* Size of default input buffer. */
@@ -191,7 +386,7 @@ typedef size_t yy_size_t;
 #define EOB_ACT_CONTINUE_SCAN 0
 #define EOB_ACT_END_OF_FILE 1
 #define EOB_ACT_LAST_MATCH 2
-
+    
     #define YY_LESS_LINENO(n)
     #define YY_LINENO_REWIND_TO(ptr)
     
@@ -208,7 +403,6 @@ typedef size_t yy_size_t;
 		YY_DO_BEFORE_ACTION; /* set up yytext again */ \
 		} \
 	while ( 0 )
-
 #define unput(c) yyunput( c, yyg->yytext_ptr , yyscanner )
 
 #ifndef YY_STRUCT_YY_BUFFER_STATE
@@ -223,12 +417,12 @@ struct yy_buffer_state
 	/* Size of input buffer in bytes, not including room for EOB
 	 * characters.
 	 */
-	yy_size_t yy_buf_size;
+	int yy_buf_size;
 
 	/* Number of characters read into yy_ch_buf, not including EOB
 	 * characters.
 	 */
-	yy_size_t yy_n_chars;
+	int yy_n_chars;
 
 	/* Whether we "own" the buffer - i.e., we know we created it,
 	 * and can realloc() it to grow it, and should free() it to
@@ -251,7 +445,7 @@ struct yy_buffer_state
 
     int yy_bs_lineno; /**< The line count. */
     int yy_bs_column; /**< The column count. */
-    
+
 	/* Whether to try to fill the input buffer when we reach the
 	 * end of it.
 	 */
@@ -268,7 +462,7 @@ struct yy_buffer_state
 	 * possible backing-up.
 	 *
 	 * When we actually see the EOF, we change the status to "new"
-	 * (via xlu__disk_yyrestart()), so that the user can continue scanning by
+	 * (via yyrestart()), so that the user can continue scanning by
 	 * just pointing yyin at a new input file.
 	 */
 #define YY_BUFFER_EOF_PENDING 2
@@ -285,71 +479,65 @@ struct yy_buffer_state
 #define YY_CURRENT_BUFFER ( yyg->yy_buffer_stack \
                           ? yyg->yy_buffer_stack[yyg->yy_buffer_stack_top] \
                           : NULL)
-
 /* Same as previous macro, but useful when we know that the buffer stack is not
  * NULL or when we need an lvalue. For internal use only.
  */
 #define YY_CURRENT_BUFFER_LVALUE yyg->yy_buffer_stack[yyg->yy_buffer_stack_top]
 
-void xlu__disk_yyrestart (FILE *input_file ,yyscan_t yyscanner );
-void xlu__disk_yy_switch_to_buffer (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
-YY_BUFFER_STATE xlu__disk_yy_create_buffer (FILE *file,int size ,yyscan_t yyscanner );
-void xlu__disk_yy_delete_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
-void xlu__disk_yy_flush_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
-void xlu__disk_yypush_buffer_state (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
-void xlu__disk_yypop_buffer_state (yyscan_t yyscanner );
+void yyrestart ( FILE *input_file , yyscan_t yyscanner );
+void yy_switch_to_buffer ( YY_BUFFER_STATE new_buffer , yyscan_t yyscanner );
+YY_BUFFER_STATE yy_create_buffer ( FILE *file, int size , yyscan_t yyscanner );
+void yy_delete_buffer ( YY_BUFFER_STATE b , yyscan_t yyscanner );
+void yy_flush_buffer ( YY_BUFFER_STATE b , yyscan_t yyscanner );
+void yypush_buffer_state ( YY_BUFFER_STATE new_buffer , yyscan_t yyscanner );
+void yypop_buffer_state ( yyscan_t yyscanner );
 
-static void xlu__disk_yyensure_buffer_stack (yyscan_t yyscanner );
-static void xlu__disk_yy_load_buffer_state (yyscan_t yyscanner );
-static void xlu__disk_yy_init_buffer (YY_BUFFER_STATE b,FILE *file ,yyscan_t yyscanner );
+static void yyensure_buffer_stack ( yyscan_t yyscanner );
+static void yy_load_buffer_state ( yyscan_t yyscanner );
+static void yy_init_buffer ( YY_BUFFER_STATE b, FILE *file , yyscan_t yyscanner );
+#define YY_FLUSH_BUFFER yy_flush_buffer( YY_CURRENT_BUFFER , yyscanner)
 
-#define YY_FLUSH_BUFFER xlu__disk_yy_flush_buffer(YY_CURRENT_BUFFER ,yyscanner)
+YY_BUFFER_STATE yy_scan_buffer ( char *base, yy_size_t size , yyscan_t yyscanner );
+YY_BUFFER_STATE yy_scan_string ( const char *yy_str , yyscan_t yyscanner );
+YY_BUFFER_STATE yy_scan_bytes ( const char *bytes, int len , yyscan_t yyscanner );
 
-YY_BUFFER_STATE xlu__disk_yy_scan_buffer (char *base,yy_size_t size ,yyscan_t yyscanner );
-YY_BUFFER_STATE xlu__disk_yy_scan_string (yyconst char *yy_str ,yyscan_t yyscanner );
-YY_BUFFER_STATE xlu__disk_yy_scan_bytes (yyconst char *bytes,yy_size_t len ,yyscan_t yyscanner );
-
-void *xlu__disk_yyalloc (yy_size_t ,yyscan_t yyscanner );
-void *xlu__disk_yyrealloc (void *,yy_size_t ,yyscan_t yyscanner );
-void xlu__disk_yyfree (void * ,yyscan_t yyscanner );
-
-#define yy_new_buffer xlu__disk_yy_create_buffer
+void *yyalloc ( yy_size_t , yyscan_t yyscanner );
+void *yyrealloc ( void *, yy_size_t , yyscan_t yyscanner );
+void yyfree ( void * , yyscan_t yyscanner );
 
+#define yy_new_buffer yy_create_buffer
 #define yy_set_interactive(is_interactive) \
 	{ \
 	if ( ! YY_CURRENT_BUFFER ){ \
-        xlu__disk_yyensure_buffer_stack (yyscanner); \
+        yyensure_buffer_stack (yyscanner); \
 		YY_CURRENT_BUFFER_LVALUE =    \
-            xlu__disk_yy_create_buffer(yyin,YY_BUF_SIZE ,yyscanner); \
+            yy_create_buffer( yyin, YY_BUF_SIZE , yyscanner); \
 	} \
 	YY_CURRENT_BUFFER_LVALUE->yy_is_interactive = is_interactive; \
 	}
-
 #define yy_set_bol(at_bol) \
 	{ \
 	if ( ! YY_CURRENT_BUFFER ){\
-        xlu__disk_yyensure_buffer_stack (yyscanner); \
+        yyensure_buffer_stack (yyscanner); \
 		YY_CURRENT_BUFFER_LVALUE =    \
-            xlu__disk_yy_create_buffer(yyin,YY_BUF_SIZE ,yyscanner); \
+            yy_create_buffer( yyin, YY_BUF_SIZE , yyscanner); \
 	} \
 	YY_CURRENT_BUFFER_LVALUE->yy_at_bol = at_bol; \
 	}
-
 #define YY_AT_BOL() (YY_CURRENT_BUFFER_LVALUE->yy_at_bol)
 
-#define xlu__disk_yywrap(yyscanner) 1
+#define xlu__disk_yywrap(yyscanner) (/*CONSTCOND*/1)
 #define YY_SKIP_YYWRAP
-
-typedef unsigned char YY_CHAR;
+typedef flex_uint8_t YY_CHAR;
 
 typedef int yy_state_type;
 
 #define yytext_ptr yytext_r
 
-static yy_state_type yy_get_previous_state (yyscan_t yyscanner );
-static yy_state_type yy_try_NUL_trans (yy_state_type current_state  ,yyscan_t yyscanner);
-static int yy_get_next_buffer (yyscan_t yyscanner );
-static void yy_fatal_error (yyconst char msg[] ,yyscan_t yyscanner );
+static yy_state_type yy_get_previous_state ( yyscan_t yyscanner );
+static yy_state_type yy_try_NUL_trans ( yy_state_type current_state  , yyscan_t yyscanner);
+static int yy_get_next_buffer ( yyscan_t yyscanner );
+static void yynoreturn yy_fatal_error ( const char* msg , yyscan_t yyscanner );
 
 /* Done after the current pattern has been matched and before the
  * corresponding action - sets up yytext.
@@ -357,11 +545,10 @@ static void yy_fatal_error (yyconst char msg[] ,yyscan_t yyscanner );
 #define YY_DO_BEFORE_ACTION \
 	yyg->yytext_ptr = yy_bp; \
 	yyg->yytext_ptr -= yyg->yy_more_len; \
-	yyleng = (size_t) (yy_cp - yyg->yytext_ptr); \
+	yyleng = (int) (yy_cp - yyg->yytext_ptr); \
 	yyg->yy_hold_char = *yy_cp; \
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
-
 #define YY_NUM_RULES 36
 #define YY_END_OF_BUFFER 37
 /* This struct is not used in this scanner,
@@ -371,7 +558,7 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static yyconst flex_int16_t yy_acclist[575] =
+static const flex_int16_t yy_acclist[575] =
     {   0,
        35,   35,   37,   33,   34,   36, 8193,   33,   34,   36,
     16385, 8193,   33,   36,16385,   33,   34,   36,   34,   36,
@@ -438,7 +625,7 @@ static yyconst flex_int16_t yy_acclist[575] =
        21,   23,   12,   33
     } ;
 
-static yyconst flex_int16_t yy_accept[356] =
+static const flex_int16_t yy_accept[356] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -481,7 +668,7 @@ static yyconst flex_int16_t yy_accept[356] =
       570,  571,  573,  575,  575
     } ;
 
-static yyconst flex_int32_t yy_ec[256] =
+static const YY_CHAR yy_ec[256] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    2,    3,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -513,7 +700,7 @@ static yyconst flex_int32_t yy_ec[256] =
         1,    1,    1,    1,    1
     } ;
 
-static yyconst flex_int32_t yy_meta[35] =
+static const YY_CHAR yy_meta[35] =
     {   0,
         1,    1,    2,    3,    1,    1,    1,    1,    4,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -521,7 +708,7 @@ static yyconst flex_int32_t yy_meta[35] =
         1,    1,    1,    1
     } ;
 
-static yyconst flex_int16_t yy_base[424] =
+static const flex_int16_t yy_base[424] =
     {   0,
         0,    0,  901,  900,  902,  897,   33,   36,  905,  905,
        45,   63,   31,   42,   51,   52,  890,   33,   65,   67,
@@ -572,7 +759,7 @@ static yyconst flex_int16_t yy_base[424] =
       883,  887,  891
     } ;
 
-static yyconst flex_int16_t yy_def[424] =
+static const flex_int16_t yy_def[424] =
     {   0,
       354,    1,  355,  355,  354,  356,  357,  357,  354,  354,
       358,  358,   12,   12,   12,   12,   12,   12,   12,   12,
@@ -623,7 +810,7 @@ static yyconst flex_int16_t yy_def[424] =
       354,  354,  354
     } ;
 
-static yyconst flex_int16_t yy_nxt[940] =
+static const flex_int16_t yy_nxt[940] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   18,   19,   17,   17,
@@ -730,7 +917,7 @@ static yyconst flex_int16_t yy_nxt[940] =
       354,  354,  354,  354,  354,  354,  354,  354,  354
     } ;
 
-static yyconst flex_int16_t yy_chk[940] =
+static const flex_int16_t yy_chk[940] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -1001,8 +1188,9 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #undef DPC /* needs to be defined differently in the actual lexer */
 #define DPC ((DiskParseContext*)yyextra)
 
+#line 1192 "libxlu_disk_l.c"
 
-#line 1006 "libxlu_disk_l.c"
+#line 1194 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1032,8 +1220,8 @@ struct yyguts_t
     size_t yy_buffer_stack_max; /**< capacity of stack. */
     YY_BUFFER_STATE * yy_buffer_stack; /**< Stack as an array. */
     char yy_hold_char;
-    yy_size_t yy_n_chars;
-    yy_size_t yyleng_r;
+    int yy_n_chars;
+    int yyleng_r;
     char *yy_c_buf_p;
     int yy_init;
     int yy_start;
@@ -1064,44 +1252,44 @@ struct yyguts_t
 
     }; /* end struct yyguts_t */
 
-static int yy_init_globals (yyscan_t yyscanner );
+static int yy_init_globals ( yyscan_t yyscanner );
 
-int xlu__disk_yylex_init (yyscan_t* scanner);
+int yylex_init (yyscan_t* scanner);
 
-int xlu__disk_yylex_init_extra (YY_EXTRA_TYPE user_defined,yyscan_t* scanner);
+int yylex_init_extra ( YY_EXTRA_TYPE user_defined, yyscan_t* scanner);
 
 /* Accessor methods to globals.
    These are made visible to non-reentrant scanners for convenience. */
 
-int xlu__disk_yylex_destroy (yyscan_t yyscanner );
+int yylex_destroy ( yyscan_t yyscanner );
 
-int xlu__disk_yyget_debug (yyscan_t yyscanner );
+int yyget_debug ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_debug (int debug_flag ,yyscan_t yyscanner );
+void yyset_debug ( int debug_flag , yyscan_t yyscanner );
 
-YY_EXTRA_TYPE xlu__disk_yyget_extra (yyscan_t yyscanner );
+YY_EXTRA_TYPE yyget_extra ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_extra (YY_EXTRA_TYPE user_defined ,yyscan_t yyscanner );
+void yyset_extra ( YY_EXTRA_TYPE user_defined , yyscan_t yyscanner );
 
-FILE *xlu__disk_yyget_in (yyscan_t yyscanner );
+FILE *yyget_in ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_in  (FILE * in_str ,yyscan_t yyscanner );
+void yyset_in  ( FILE * _in_str , yyscan_t yyscanner );
 
-FILE *xlu__disk_yyget_out (yyscan_t yyscanner );
+FILE *yyget_out ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_out  (FILE * out_str ,yyscan_t yyscanner );
+void yyset_out  ( FILE * _out_str , yyscan_t yyscanner );
 
-yy_size_t xlu__disk_yyget_leng (yyscan_t yyscanner );
+			int yyget_leng ( yyscan_t yyscanner );
 
-char *xlu__disk_yyget_text (yyscan_t yyscanner );
+char *yyget_text ( yyscan_t yyscanner );
 
-int xlu__disk_yyget_lineno (yyscan_t yyscanner );
+int yyget_lineno ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
+void yyset_lineno ( int _line_number , yyscan_t yyscanner );
 
-int xlu__disk_yyget_column  (yyscan_t yyscanner );
+int yyget_column  ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
+void yyset_column ( int _column_no , yyscan_t yyscanner );
 
 /* Macros after this point can all be overridden by user definitions in
  * section 1.
@@ -1109,26 +1297,29 @@ void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
 
 #ifndef YY_SKIP_YYWRAP
 #ifdef __cplusplus
-extern "C" int xlu__disk_yywrap (yyscan_t yyscanner );
+extern "C" int yywrap ( yyscan_t yyscanner );
 #else
-extern int xlu__disk_yywrap (yyscan_t yyscanner );
+extern int yywrap ( yyscan_t yyscanner );
 #endif
 #endif
 
+#ifndef YY_NO_UNPUT
+    
+#endif
+
 #ifndef yytext_ptr
-static void yy_flex_strncpy (char *,yyconst char *,int ,yyscan_t yyscanner);
+static void yy_flex_strncpy ( char *, const char *, int , yyscan_t yyscanner);
 #endif
 
 #ifdef YY_NEED_STRLEN
-static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
+static int yy_flex_strlen ( const char * , yyscan_t yyscanner);
 #endif
 
 #ifndef YY_NO_INPUT
-
 #ifdef __cplusplus
-static int yyinput (yyscan_t yyscanner );
+static int yyinput ( yyscan_t yyscanner );
 #else
-static int input (yyscan_t yyscanner );
+static int input ( yyscan_t yyscanner );
 #endif
 
 #endif
@@ -1148,7 +1339,7 @@ static int input (yyscan_t yyscanner );
 /* This used to be an fputs(), but since the string might contain NUL's,
  * we now use fwrite().
  */
-#define ECHO do { if (fwrite( yytext, yyleng, 1, yyout )) {} } while (0)
+#define ECHO do { if (fwrite( yytext, (size_t) yyleng, 1, yyout )) {} } while (0)
 #endif
 
 /* Gets input and stuffs it into "buf".  number of characters read, or YY_NULL,
@@ -1172,7 +1363,7 @@ static int input (yyscan_t yyscanner );
 	else \
 		{ \
 		errno=0; \
-		while ( (result = fread(buf, 1, (yy_size_t) max_size, yyin)) == 0 && ferror(yyin)) \
+		while ( (result = (int) fread(buf, 1, (yy_size_t) max_size, yyin)) == 0 && ferror(yyin)) \
 			{ \
 			if( errno != EINTR) \
 				{ \
@@ -1213,9 +1404,9 @@ static int input (yyscan_t yyscanner );
 #ifndef YY_DECL
 #define YY_DECL_IS_OURS 1
 
-extern int xlu__disk_yylex (yyscan_t yyscanner);
+extern int yylex (yyscan_t yyscanner);
 
-#define YY_DECL int xlu__disk_yylex (yyscan_t yyscanner)
+#define YY_DECL int yylex (yyscan_t yyscanner)
 #endif /* !YY_DECL */
 
 /* Code executed at the beginning of each rule, after yytext and yyleng
@@ -1227,7 +1418,7 @@ extern int xlu__disk_yylex (yyscan_t yyscanner);
 
 /* Code executed at the end of each rule. */
 #ifndef YY_BREAK
-#define YY_BREAK break;
+#define YY_BREAK /*LINTED*/break;
 #endif
 
 #define YY_RULE_SETUP \
@@ -1237,9 +1428,9 @@ extern int xlu__disk_yylex (yyscan_t yyscanner);
  */
 YY_DECL
 {
-	register yy_state_type yy_current_state;
-	register char *yy_cp, *yy_bp;
-	register int yy_act;
+	yy_state_type yy_current_state;
+	char *yy_cp, *yy_bp;
+	int yy_act;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
 	if ( !yyg->yy_init )
@@ -1252,9 +1443,9 @@ YY_DECL
 
         /* Create the reject buffer large enough to save one state per allowed character. */
         if ( ! yyg->yy_state_buf )
-            yyg->yy_state_buf = (yy_state_type *)xlu__disk_yyalloc(YY_STATE_BUF_SIZE  ,yyscanner);
+            yyg->yy_state_buf = (yy_state_type *)yyalloc(YY_STATE_BUF_SIZE  , yyscanner);
             if ( ! yyg->yy_state_buf )
-                YY_FATAL_ERROR( "out of dynamic memory in xlu__disk_yylex()" );
+                YY_FATAL_ERROR( "out of dynamic memory in yylex()" );
 
 		if ( ! yyg->yy_start )
 			yyg->yy_start = 1;	/* first start state */
@@ -1266,28 +1457,29 @@ YY_DECL
 			yyout = stdout;
 
 		if ( ! YY_CURRENT_BUFFER ) {
-			xlu__disk_yyensure_buffer_stack (yyscanner);
+			yyensure_buffer_stack (yyscanner);
 			YY_CURRENT_BUFFER_LVALUE =
-				xlu__disk_yy_create_buffer(yyin,YY_BUF_SIZE ,yyscanner);
+				yy_create_buffer( yyin, YY_BUF_SIZE , yyscanner);
 		}
 
-		xlu__disk_yy_load_buffer_state(yyscanner );
+		yy_load_buffer_state( yyscanner );
 		}
 
 	{
 #line 166 "libxlu_disk_l.l"
 
 
+#line 169 "libxlu_disk_l.l"
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1284 "libxlu_disk_l.c"
+#line 1476 "libxlu_disk_l.c"
 
-	while ( 1 )		/* loops until end-of-file is reached */
+	while ( /*CONSTCOND*/1 )		/* loops until end-of-file is reached */
 		{
 		yyg->yy_more_len = 0;
 		if ( yyg->yy_more_flag )
 			{
-			yyg->yy_more_len = yyg->yy_c_buf_p - yyg->yytext_ptr;
+			yyg->yy_more_len = (int) (yyg->yy_c_buf_p - yyg->yytext_ptr);
 			yyg->yy_more_flag = 0;
 			}
 		yy_cp = yyg->yy_c_buf_p;
@@ -1308,14 +1500,14 @@ YY_DECL
 yy_match:
 		do
 			{
-			register YY_CHAR yy_c = yy_ec[YY_SC_TO_UI(*yy_cp)] ;
+			YY_CHAR yy_c = yy_ec[YY_SC_TO_UI(*yy_cp)] ;
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
 				if ( yy_current_state >= 355 )
-					yy_c = yy_meta[(unsigned int) yy_c];
+					yy_c = yy_meta[yy_c];
 				}
-			yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
+			yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
@@ -1369,135 +1561,135 @@ do_action:	/* This label is used only to access EOF actions. */
 case 1:
 /* rule 1 can match eol */
 YY_RULE_SETUP
-#line 170 "libxlu_disk_l.l"
+#line 171 "libxlu_disk_l.l"
 { /* ignore whitespace before parameters */ }
 	YY_BREAK
 /* ordinary parameters setting enums or strings */
 case 2:
 /* rule 2 can match eol */
 YY_RULE_SETUP
-#line 174 "libxlu_disk_l.l"
+#line 175 "libxlu_disk_l.l"
 { STRIP(','); setformat(DPC, FROMEQUALS); }
 	YY_BREAK
 case 3:
 YY_RULE_SETUP
-#line 176 "libxlu_disk_l.l"
+#line 177 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 4:
 YY_RULE_SETUP
-#line 177 "libxlu_disk_l.l"
+#line 178 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 1; }
 	YY_BREAK
 case 5:
 YY_RULE_SETUP
-#line 178 "libxlu_disk_l.l"
+#line 179 "libxlu_disk_l.l"
 { DPC->disk->is_cdrom = 0; }
 	YY_BREAK
 case 6:
 /* rule 6 can match eol */
 YY_RULE_SETUP
-#line 179 "libxlu_disk_l.l"
+#line 180 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown value for type"); }
 	YY_BREAK
 case 7:
 /* rule 7 can match eol */
 YY_RULE_SETUP
-#line 181 "libxlu_disk_l.l"
+#line 182 "libxlu_disk_l.l"
 { STRIP(','); setaccess(DPC, FROMEQUALS); }
 	YY_BREAK
 case 8:
 /* rule 8 can match eol */
 YY_RULE_SETUP
-#line 182 "libxlu_disk_l.l"
+#line 183 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("backend", backend_domname, FROMEQUALS); }
 	YY_BREAK
 case 9:
 /* rule 9 can match eol */
 YY_RULE_SETUP
-#line 183 "libxlu_disk_l.l"
+#line 184 "libxlu_disk_l.l"
 { STRIP(','); setbackendtype(DPC,FROMEQUALS); }
 	YY_BREAK
 case 10:
 /* rule 10 can match eol */
 YY_RULE_SETUP
-#line 185 "libxlu_disk_l.l"
+#line 186 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("vdev", vdev, FROMEQUALS); }
 	YY_BREAK
 case 11:
 /* rule 11 can match eol */
 YY_RULE_SETUP
-#line 186 "libxlu_disk_l.l"
+#line 187 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 	YY_BREAK
 case 12:
 YY_RULE_SETUP
-#line 187 "libxlu_disk_l.l"
+#line 188 "libxlu_disk_l.l"
 { DPC->disk->direct_io_safe = 1; }
 	YY_BREAK
 case 13:
 YY_RULE_SETUP
-#line 188 "libxlu_disk_l.l"
+#line 189 "libxlu_disk_l.l"
 { libxl_defbool_set(&DPC->disk->discard_enable, true); }
 	YY_BREAK
 case 14:
 YY_RULE_SETUP
-#line 189 "libxlu_disk_l.l"
+#line 190 "libxlu_disk_l.l"
 { libxl_defbool_set(&DPC->disk->discard_enable, false); }
 	YY_BREAK
 /* Note that the COLO configuration settings should be considered unstable.
   * They may change incompatibly in future versions of Xen. */
 case 15:
 YY_RULE_SETUP
-#line 192 "libxlu_disk_l.l"
+#line 193 "libxlu_disk_l.l"
 { libxl_defbool_set(&DPC->disk->colo_enable, true); }
 	YY_BREAK
 case 16:
 YY_RULE_SETUP
-#line 193 "libxlu_disk_l.l"
+#line 194 "libxlu_disk_l.l"
 { libxl_defbool_set(&DPC->disk->colo_enable, false); }
 	YY_BREAK
 case 17:
 /* rule 17 can match eol */
 YY_RULE_SETUP
-#line 194 "libxlu_disk_l.l"
+#line 195 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
 	YY_BREAK
 case 18:
 /* rule 18 can match eol */
 YY_RULE_SETUP
-#line 195 "libxlu_disk_l.l"
+#line 196 "libxlu_disk_l.l"
 { STRIP(','); setcoloport(DPC, FROMEQUALS); }
 	YY_BREAK
 case 19:
 /* rule 19 can match eol */
 YY_RULE_SETUP
-#line 196 "libxlu_disk_l.l"
+#line 197 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
 	YY_BREAK
 case 20:
 /* rule 20 can match eol */
 YY_RULE_SETUP
-#line 197 "libxlu_disk_l.l"
+#line 198 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
-#line 198 "libxlu_disk_l.l"
+#line 199 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("hidden-disk", hidden_disk, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
 case 22:
 YY_RULE_SETUP
-#line 202 "libxlu_disk_l.l"
+#line 203 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
 case 23:
 /* rule 23 can match eol */
 YY_RULE_SETUP
-#line 206 "libxlu_disk_l.l"
+#line 207 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
@@ -1505,7 +1697,7 @@ YY_RULE_SETUP
    * matched the whole string, so these patterns take precedence */
 case 24:
 YY_RULE_SETUP
-#line 213 "libxlu_disk_l.l"
+#line 214 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
@@ -1514,7 +1706,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 25:
 YY_RULE_SETUP
-#line 219 "libxlu_disk_l.l"
+#line 220 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1533,12 +1725,12 @@ case 26:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 232 "libxlu_disk_l.l"
+#line 233 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 27:
 YY_RULE_SETUP
-#line 233 "libxlu_disk_l.l"
+#line 234 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 28:
@@ -1546,7 +1738,7 @@ case 28:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 234 "libxlu_disk_l.l"
+#line 235 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 29:
@@ -1554,7 +1746,7 @@ case 29:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 235 "libxlu_disk_l.l"
+#line 236 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 30:
@@ -1562,7 +1754,7 @@ case 30:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 236 "libxlu_disk_l.l"
+#line 237 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 31:
@@ -1570,13 +1762,13 @@ case 31:
 yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 237 "libxlu_disk_l.l"
+#line 238 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 32:
 /* rule 32 can match eol */
 YY_RULE_SETUP
-#line 239 "libxlu_disk_l.l"
+#line 240 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
@@ -1586,7 +1778,7 @@ YY_RULE_SETUP
 case 33:
 /* rule 33 can match eol */
 YY_RULE_SETUP
-#line 246 "libxlu_disk_l.l"
+#line 247 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1615,7 +1807,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 34:
 YY_RULE_SETUP
-#line 272 "libxlu_disk_l.l"
+#line 273 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
@@ -1623,17 +1815,17 @@ YY_RULE_SETUP
 	YY_BREAK
 case 35:
 YY_RULE_SETUP
-#line 276 "libxlu_disk_l.l"
+#line 277 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
 case 36:
 YY_RULE_SETUP
-#line 279 "libxlu_disk_l.l"
+#line 280 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1637 "libxlu_disk_l.c"
+#line 1829 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -1652,7 +1844,7 @@ YY_FATAL_ERROR( "flex scanner jammed" );
 			/* We're scanning a new file or input source.  It's
 			 * possible that this happened because the user
 			 * just pointed yyin at a new source and called
-			 * xlu__disk_yylex().  If so, then we have to assure
+			 * yylex().  If so, then we have to assure
 			 * consistency between YY_CURRENT_BUFFER and our
 			 * globals.  Here is the right place to do so, because
 			 * this is the first action (other than possibly a
@@ -1712,7 +1904,7 @@ YY_FATAL_ERROR( "flex scanner jammed" );
 				{
 				yyg->yy_did_buffer_switch_on_eof = 0;
 
-				if ( xlu__disk_yywrap(yyscanner ) )
+				if ( yywrap( yyscanner ) )
 					{
 					/* Note: because we've taken care in
 					 * yy_get_next_buffer() to have set up
@@ -1766,7 +1958,7 @@ YY_FATAL_ERROR( "flex scanner jammed" );
 	} /* end of action switch */
 		} /* end of scanning one token */
 	} /* end of user's declarations */
-} /* end of xlu__disk_yylex */
+} /* end of yylex */
 
 /* yy_get_next_buffer - try to read in a new buffer
  *
@@ -1778,9 +1970,9 @@ YY_FATAL_ERROR( "flex scanner jammed" );
 static int yy_get_next_buffer (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
-	register char *dest = YY_CURRENT_BUFFER_LVALUE->yy_ch_buf;
-	register char *source = yyg->yytext_ptr;
-	register int number_to_move, i;
+	char *dest = YY_CURRENT_BUFFER_LVALUE->yy_ch_buf;
+	char *source = yyg->yytext_ptr;
+	int number_to_move, i;
 	int ret_val;
 
 	if ( yyg->yy_c_buf_p > &YY_CURRENT_BUFFER_LVALUE->yy_ch_buf[yyg->yy_n_chars + 1] )
@@ -1809,7 +2001,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	/* Try to read more data. */
 
 	/* First move last chars to start of buffer. */
-	number_to_move = (int) (yyg->yy_c_buf_p - yyg->yytext_ptr) - 1;
+	number_to_move = (int) (yyg->yy_c_buf_p - yyg->yytext_ptr - 1);
 
 	for ( i = 0; i < number_to_move; ++i )
 		*(dest++) = *(source++);
@@ -1848,7 +2040,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		if ( number_to_move == YY_MORE_ADJ )
 			{
 			ret_val = EOB_ACT_END_OF_FILE;
-			xlu__disk_yyrestart(yyin  ,yyscanner);
+			yyrestart( yyin  , yyscanner);
 			}
 
 		else
@@ -1862,12 +2054,15 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	else
 		ret_val = EOB_ACT_CONTINUE_SCAN;
 
-	if ((yy_size_t) (yyg->yy_n_chars + number_to_move) > YY_CURRENT_BUFFER_LVALUE->yy_buf_size) {
+	if ((yyg->yy_n_chars + number_to_move) > YY_CURRENT_BUFFER_LVALUE->yy_buf_size) {
 		/* Extend the array by 50%, plus the number we really need. */
-		yy_size_t new_size = yyg->yy_n_chars + number_to_move + (yyg->yy_n_chars >> 1);
-		YY_CURRENT_BUFFER_LVALUE->yy_ch_buf = (char *) xlu__disk_yyrealloc((void *) YY_CURRENT_BUFFER_LVALUE->yy_ch_buf,new_size ,yyscanner );
+		int new_size = yyg->yy_n_chars + number_to_move + (yyg->yy_n_chars >> 1);
+		YY_CURRENT_BUFFER_LVALUE->yy_ch_buf = (char *) yyrealloc(
+			(void *) YY_CURRENT_BUFFER_LVALUE->yy_ch_buf, (yy_size_t) new_size , yyscanner );
 		if ( ! YY_CURRENT_BUFFER_LVALUE->yy_ch_buf )
 			YY_FATAL_ERROR( "out of dynamic memory in yy_get_next_buffer()" );
+		/* "- 2" to take care of EOB's */
+		YY_CURRENT_BUFFER_LVALUE->yy_buf_size = (int) (new_size - 2);
 	}
 
 	yyg->yy_n_chars += number_to_move;
@@ -1883,8 +2078,8 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 
     static yy_state_type yy_get_previous_state (yyscan_t yyscanner)
 {
-	register yy_state_type yy_current_state;
-	register char *yy_cp;
+	yy_state_type yy_current_state;
+	char *yy_cp;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
 	yy_current_state = yyg->yy_start;
@@ -1894,14 +2089,14 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 
 	for ( yy_cp = yyg->yytext_ptr + YY_MORE_ADJ; yy_cp < yyg->yy_c_buf_p; ++yy_cp )
 		{
-		register YY_CHAR yy_c = (*yy_cp ? yy_ec[YY_SC_TO_UI(*yy_cp)] : 1);
+		YY_CHAR yy_c = (*yy_cp ? yy_ec[YY_SC_TO_UI(*yy_cp)] : 1);
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
 			if ( yy_current_state >= 355 )
-				yy_c = yy_meta[(unsigned int) yy_c];
+				yy_c = yy_meta[yy_c];
 			}
-		yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
+		yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 		*yyg->yy_state_ptr++ = yy_current_state;
 		}
 
@@ -1915,17 +2110,17 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
  */
     static yy_state_type yy_try_NUL_trans  (yy_state_type yy_current_state , yyscan_t yyscanner)
 {
-	register int yy_is_jam;
+	int yy_is_jam;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner; /* This var may be unused depending upon options. */
 
-	register YY_CHAR yy_c = 1;
+	YY_CHAR yy_c = 1;
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
 		if ( yy_current_state >= 355 )
-			yy_c = yy_meta[(unsigned int) yy_c];
+			yy_c = yy_meta[yy_c];
 		}
-	yy_current_state = yy_nxt[yy_base[yy_current_state] + (unsigned int) yy_c];
+	yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 	yy_is_jam = (yy_current_state == 354);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
@@ -1934,6 +2129,10 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	return yy_is_jam ? 0 : yy_current_state;
 }
 
+#ifndef YY_NO_UNPUT
+
+#endif
+
 #ifndef YY_NO_INPUT
 #ifdef __cplusplus
     static int yyinput (yyscan_t yyscanner)
@@ -1959,7 +2158,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 
 		else
 			{ /* need more input */
-			yy_size_t offset = yyg->yy_c_buf_p - yyg->yytext_ptr;
+			int offset = (int) (yyg->yy_c_buf_p - yyg->yytext_ptr);
 			++yyg->yy_c_buf_p;
 
 			switch ( yy_get_next_buffer( yyscanner ) )
@@ -1976,14 +2175,14 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 					 */
 
 					/* Reset buffer status. */
-					xlu__disk_yyrestart(yyin ,yyscanner);
+					yyrestart( yyin , yyscanner);
 
 					/*FALLTHROUGH*/
 
 				case EOB_ACT_END_OF_FILE:
 					{
-					if ( xlu__disk_yywrap(yyscanner ) )
-						return EOF;
+					if ( yywrap( yyscanner ) )
+						return 0;
 
 					if ( ! yyg->yy_did_buffer_switch_on_eof )
 						YY_NEW_FILE;
@@ -2014,34 +2213,34 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
  * @param yyscanner The scanner object.
  * @note This function does not reset the start condition to @c INITIAL .
  */
-    void xlu__disk_yyrestart  (FILE * input_file , yyscan_t yyscanner)
+    void yyrestart  (FILE * input_file , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
 	if ( ! YY_CURRENT_BUFFER ){
-        xlu__disk_yyensure_buffer_stack (yyscanner);
+        yyensure_buffer_stack (yyscanner);
 		YY_CURRENT_BUFFER_LVALUE =
-            xlu__disk_yy_create_buffer(yyin,YY_BUF_SIZE ,yyscanner);
+            yy_create_buffer( yyin, YY_BUF_SIZE , yyscanner);
 	}
 
-	xlu__disk_yy_init_buffer(YY_CURRENT_BUFFER,input_file ,yyscanner);
-	xlu__disk_yy_load_buffer_state(yyscanner );
+	yy_init_buffer( YY_CURRENT_BUFFER, input_file , yyscanner);
+	yy_load_buffer_state( yyscanner );
 }
 
 /** Switch to a different input buffer.
  * @param new_buffer The new input buffer.
  * @param yyscanner The scanner object.
  */
-    void xlu__disk_yy_switch_to_buffer  (YY_BUFFER_STATE  new_buffer , yyscan_t yyscanner)
+    void yy_switch_to_buffer  (YY_BUFFER_STATE  new_buffer , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
 	/* TODO. We should be able to replace this entire function body
 	 * with
-	 *		xlu__disk_yypop_buffer_state();
-	 *		xlu__disk_yypush_buffer_state(new_buffer);
+	 *		yypop_buffer_state();
+	 *		yypush_buffer_state(new_buffer);
      */
-	xlu__disk_yyensure_buffer_stack (yyscanner);
+	yyensure_buffer_stack (yyscanner);
 	if ( YY_CURRENT_BUFFER == new_buffer )
 		return;
 
@@ -2054,17 +2253,17 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		}
 
 	YY_CURRENT_BUFFER_LVALUE = new_buffer;
-	xlu__disk_yy_load_buffer_state(yyscanner );
+	yy_load_buffer_state( yyscanner );
 
 	/* We don't actually know whether we did this switch during
-	 * EOF (xlu__disk_yywrap()) processing, but the only time this flag
-	 * is looked at is after xlu__disk_yywrap() is called, so it's safe
+	 * EOF (yywrap()) processing, but the only time this flag
+	 * is looked at is after yywrap() is called, so it's safe
 	 * to go ahead and always set it.
 	 */
 	yyg->yy_did_buffer_switch_on_eof = 1;
 }
 
-static void xlu__disk_yy_load_buffer_state  (yyscan_t yyscanner)
+static void yy_load_buffer_state  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 	yyg->yy_n_chars = YY_CURRENT_BUFFER_LVALUE->yy_n_chars;
@@ -2079,35 +2278,35 @@ static void xlu__disk_yy_load_buffer_state  (yyscan_t yyscanner)
  * @param yyscanner The scanner object.
  * @return the allocated buffer state.
  */
-    YY_BUFFER_STATE xlu__disk_yy_create_buffer  (FILE * file, int  size , yyscan_t yyscanner)
+    YY_BUFFER_STATE yy_create_buffer  (FILE * file, int  size , yyscan_t yyscanner)
 {
 	YY_BUFFER_STATE b;
     
-	b = (YY_BUFFER_STATE) xlu__disk_yyalloc(sizeof( struct yy_buffer_state ) ,yyscanner );
+	b = (YY_BUFFER_STATE) yyalloc( sizeof( struct yy_buffer_state ) , yyscanner );
 	if ( ! b )
-		YY_FATAL_ERROR( "out of dynamic memory in xlu__disk_yy_create_buffer()" );
+		YY_FATAL_ERROR( "out of dynamic memory in yy_create_buffer()" );
 
 	b->yy_buf_size = size;
 
 	/* yy_ch_buf has to be 2 characters longer than the size given because
 	 * we need to put in 2 end-of-buffer characters.
 	 */
-	b->yy_ch_buf = (char *) xlu__disk_yyalloc(b->yy_buf_size + 2 ,yyscanner );
+	b->yy_ch_buf = (char *) yyalloc( (yy_size_t) (b->yy_buf_size + 2) , yyscanner );
 	if ( ! b->yy_ch_buf )
-		YY_FATAL_ERROR( "out of dynamic memory in xlu__disk_yy_create_buffer()" );
+		YY_FATAL_ERROR( "out of dynamic memory in yy_create_buffer()" );
 
 	b->yy_is_our_buffer = 1;
 
-	xlu__disk_yy_init_buffer(b,file ,yyscanner);
+	yy_init_buffer( b, file , yyscanner);
 
 	return b;
 }
 
 /** Destroy the buffer.
- * @param b a buffer created with xlu__disk_yy_create_buffer()
+ * @param b a buffer created with yy_create_buffer()
  * @param yyscanner The scanner object.
  */
-    void xlu__disk_yy_delete_buffer (YY_BUFFER_STATE  b , yyscan_t yyscanner)
+    void yy_delete_buffer (YY_BUFFER_STATE  b , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
@@ -2118,28 +2317,28 @@ static void xlu__disk_yy_load_buffer_state  (yyscan_t yyscanner)
 		YY_CURRENT_BUFFER_LVALUE = (YY_BUFFER_STATE) 0;
 
 	if ( b->yy_is_our_buffer )
-		xlu__disk_yyfree((void *) b->yy_ch_buf ,yyscanner );
+		yyfree( (void *) b->yy_ch_buf , yyscanner );
 
-	xlu__disk_yyfree((void *) b ,yyscanner );
+	yyfree( (void *) b , yyscanner );
 }
 
 /* Initializes or reinitializes a buffer.
  * This function is sometimes called more than once on the same buffer,
- * such as during a xlu__disk_yyrestart() or at EOF.
+ * such as during a yyrestart() or at EOF.
  */
-    static void xlu__disk_yy_init_buffer  (YY_BUFFER_STATE  b, FILE * file , yyscan_t yyscanner)
+    static void yy_init_buffer  (YY_BUFFER_STATE  b, FILE * file , yyscan_t yyscanner)
 
 {
 	int oerrno = errno;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
-	xlu__disk_yy_flush_buffer(b ,yyscanner);
+	yy_flush_buffer( b , yyscanner);
 
 	b->yy_input_file = file;
 	b->yy_fill_buffer = 1;
 
-    /* If b is the current buffer, then xlu__disk_yy_init_buffer was _probably_
-     * called from xlu__disk_yyrestart() or through yy_get_next_buffer.
+    /* If b is the current buffer, then yy_init_buffer was _probably_
+     * called from yyrestart() or through yy_get_next_buffer.
      * In that case, we don't want to reset the lineno or column.
      */
     if (b != YY_CURRENT_BUFFER){
@@ -2156,7 +2355,7 @@ static void xlu__disk_yy_load_buffer_state  (yyscan_t yyscanner)
  * @param b the buffer state to be flushed, usually @c YY_CURRENT_BUFFER.
  * @param yyscanner The scanner object.
  */
-    void xlu__disk_yy_flush_buffer (YY_BUFFER_STATE  b , yyscan_t yyscanner)
+    void yy_flush_buffer (YY_BUFFER_STATE  b , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 	if ( ! b )
@@ -2177,7 +2376,7 @@ static void xlu__disk_yy_load_buffer_state  (yyscan_t yyscanner)
 	b->yy_buffer_status = YY_BUFFER_NEW;
 
 	if ( b == YY_CURRENT_BUFFER )
-		xlu__disk_yy_load_buffer_state(yyscanner );
+		yy_load_buffer_state( yyscanner );
 }
 
 /** Pushes the new state onto the stack. The new state becomes
@@ -2186,15 +2385,15 @@ static void xlu__disk_yy_load_buffer_state  (yyscan_t yyscanner)
  *  @param new_buffer The new state.
  *  @param yyscanner The scanner object.
  */
-void xlu__disk_yypush_buffer_state (YY_BUFFER_STATE new_buffer , yyscan_t yyscanner)
+void yypush_buffer_state (YY_BUFFER_STATE new_buffer , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 	if (new_buffer == NULL)
 		return;
 
-	xlu__disk_yyensure_buffer_stack(yyscanner);
+	yyensure_buffer_stack(yyscanner);
 
-	/* This block is copied from xlu__disk_yy_switch_to_buffer. */
+	/* This block is copied from yy_switch_to_buffer. */
 	if ( YY_CURRENT_BUFFER )
 		{
 		/* Flush out information for old buffer. */
@@ -2208,8 +2407,8 @@ void xlu__disk_yypush_buffer_state (YY_BUFFER_STATE new_buffer , yyscan_t yyscan
 		yyg->yy_buffer_stack_top++;
 	YY_CURRENT_BUFFER_LVALUE = new_buffer;
 
-	/* copied from xlu__disk_yy_switch_to_buffer. */
-	xlu__disk_yy_load_buffer_state(yyscanner );
+	/* copied from yy_switch_to_buffer. */
+	yy_load_buffer_state( yyscanner );
 	yyg->yy_did_buffer_switch_on_eof = 1;
 }
 
@@ -2217,19 +2416,19 @@ void xlu__disk_yypush_buffer_state (YY_BUFFER_STATE new_buffer , yyscan_t yyscan
  *  The next element becomes the new top.
  *  @param yyscanner The scanner object.
  */
-void xlu__disk_yypop_buffer_state (yyscan_t yyscanner)
+void yypop_buffer_state (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 	if (!YY_CURRENT_BUFFER)
 		return;
 
-	xlu__disk_yy_delete_buffer(YY_CURRENT_BUFFER ,yyscanner);
+	yy_delete_buffer(YY_CURRENT_BUFFER , yyscanner);
 	YY_CURRENT_BUFFER_LVALUE = NULL;
 	if (yyg->yy_buffer_stack_top > 0)
 		--yyg->yy_buffer_stack_top;
 
 	if (YY_CURRENT_BUFFER) {
-		xlu__disk_yy_load_buffer_state(yyscanner );
+		yy_load_buffer_state( yyscanner );
 		yyg->yy_did_buffer_switch_on_eof = 1;
 	}
 }
@@ -2237,7 +2436,7 @@ void xlu__disk_yypop_buffer_state (yyscan_t yyscanner)
 /* Allocates the stack if it does not exist.
  *  Guarantees space for at least one push.
  */
-static void xlu__disk_yyensure_buffer_stack (yyscan_t yyscanner)
+static void yyensure_buffer_stack (yyscan_t yyscanner)
 {
 	yy_size_t num_to_alloc;
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
@@ -2248,15 +2447,15 @@ static void xlu__disk_yyensure_buffer_stack (yyscan_t yyscanner)
 		 * scanner will even need a stack. We use 2 instead of 1 to avoid an
 		 * immediate realloc on the next call.
          */
-		num_to_alloc = 1;
-		yyg->yy_buffer_stack = (struct yy_buffer_state**)xlu__disk_yyalloc
+      num_to_alloc = 1; /* After all that talk, this was set to 1 anyways... */
+		yyg->yy_buffer_stack = (struct yy_buffer_state**)yyalloc
 								(num_to_alloc * sizeof(struct yy_buffer_state*)
 								, yyscanner);
 		if ( ! yyg->yy_buffer_stack )
-			YY_FATAL_ERROR( "out of dynamic memory in xlu__disk_yyensure_buffer_stack()" );
-								  
+			YY_FATAL_ERROR( "out of dynamic memory in yyensure_buffer_stack()" );
+
 		memset(yyg->yy_buffer_stack, 0, num_to_alloc * sizeof(struct yy_buffer_state*));
-				
+
 		yyg->yy_buffer_stack_max = num_to_alloc;
 		yyg->yy_buffer_stack_top = 0;
 		return;
@@ -2265,15 +2464,15 @@ static void xlu__disk_yyensure_buffer_stack (yyscan_t yyscanner)
 	if (yyg->yy_buffer_stack_top >= (yyg->yy_buffer_stack_max) - 1){
 
 		/* Increase the buffer to prepare for a possible push. */
-		int grow_size = 8 /* arbitrary grow size */;
+		yy_size_t grow_size = 8 /* arbitrary grow size */;
 
 		num_to_alloc = yyg->yy_buffer_stack_max + grow_size;
-		yyg->yy_buffer_stack = (struct yy_buffer_state**)xlu__disk_yyrealloc
+		yyg->yy_buffer_stack = (struct yy_buffer_state**)yyrealloc
 								(yyg->yy_buffer_stack,
 								num_to_alloc * sizeof(struct yy_buffer_state*)
 								, yyscanner);
 		if ( ! yyg->yy_buffer_stack )
-			YY_FATAL_ERROR( "out of dynamic memory in xlu__disk_yyensure_buffer_stack()" );
+			YY_FATAL_ERROR( "out of dynamic memory in yyensure_buffer_stack()" );
 
 		/* zero only the new slots.*/
 		memset(yyg->yy_buffer_stack + yyg->yy_buffer_stack_max, 0, grow_size * sizeof(struct yy_buffer_state*));
@@ -2285,9 +2484,9 @@ static void xlu__disk_yyensure_buffer_stack (yyscan_t yyscanner)
  * @param base the character buffer
  * @param size the size in bytes of the character buffer
  * @param yyscanner The scanner object.
- * @return the newly allocated buffer state object. 
+ * @return the newly allocated buffer state object.
  */
-YY_BUFFER_STATE xlu__disk_yy_scan_buffer  (char * base, yy_size_t  size , yyscan_t yyscanner)
+YY_BUFFER_STATE yy_scan_buffer  (char * base, yy_size_t  size , yyscan_t yyscanner)
 {
 	YY_BUFFER_STATE b;
     
@@ -2295,69 +2494,69 @@ YY_BUFFER_STATE xlu__disk_yy_scan_buffer  (char * base, yy_size_t  size , yyscan
 	     base[size-2] != YY_END_OF_BUFFER_CHAR ||
 	     base[size-1] != YY_END_OF_BUFFER_CHAR )
 		/* They forgot to leave room for the EOB's. */
-		return 0;
+		return NULL;
 
-	b = (YY_BUFFER_STATE) xlu__disk_yyalloc(sizeof( struct yy_buffer_state ) ,yyscanner );
+	b = (YY_BUFFER_STATE) yyalloc( sizeof( struct yy_buffer_state ) , yyscanner );
 	if ( ! b )
-		YY_FATAL_ERROR( "out of dynamic memory in xlu__disk_yy_scan_buffer()" );
+		YY_FATAL_ERROR( "out of dynamic memory in yy_scan_buffer()" );
 
-	b->yy_buf_size = size - 2;	/* "- 2" to take care of EOB's */
+	b->yy_buf_size = (int) (size - 2);	/* "- 2" to take care of EOB's */
 	b->yy_buf_pos = b->yy_ch_buf = base;
 	b->yy_is_our_buffer = 0;
-	b->yy_input_file = 0;
+	b->yy_input_file = NULL;
 	b->yy_n_chars = b->yy_buf_size;
 	b->yy_is_interactive = 0;
 	b->yy_at_bol = 1;
 	b->yy_fill_buffer = 0;
 	b->yy_buffer_status = YY_BUFFER_NEW;
 
-	xlu__disk_yy_switch_to_buffer(b ,yyscanner );
+	yy_switch_to_buffer( b , yyscanner );
 
 	return b;
 }
 
-/** Setup the input buffer state to scan a string. The next call to xlu__disk_yylex() will
+/** Setup the input buffer state to scan a string. The next call to yylex() will
  * scan from a @e copy of @a str.
  * @param yystr a NUL-terminated string to scan
  * @param yyscanner The scanner object.
  * @return the newly allocated buffer state object.
  * @note If you want to scan bytes that may contain NUL values, then use
- *       xlu__disk_yy_scan_bytes() instead.
+ *       yy_scan_bytes() instead.
  */
-YY_BUFFER_STATE xlu__disk_yy_scan_string (yyconst char * yystr , yyscan_t yyscanner)
+YY_BUFFER_STATE yy_scan_string (const char * yystr , yyscan_t yyscanner)
 {
     
-	return xlu__disk_yy_scan_bytes(yystr,strlen(yystr) ,yyscanner);
+	return yy_scan_bytes( yystr, (int) strlen(yystr) , yyscanner);
 }
 
-/** Setup the input buffer state to scan the given bytes. The next call to xlu__disk_yylex() will
+/** Setup the input buffer state to scan the given bytes. The next call to yylex() will
  * scan from a @e copy of @a bytes.
  * @param yybytes the byte buffer to scan
  * @param _yybytes_len the number of bytes in the buffer pointed to by @a bytes.
  * @param yyscanner The scanner object.
  * @return the newly allocated buffer state object.
  */
-YY_BUFFER_STATE xlu__disk_yy_scan_bytes  (yyconst char * yybytes, yy_size_t  _yybytes_len , yyscan_t yyscanner)
+YY_BUFFER_STATE yy_scan_bytes  (const char * yybytes, int  _yybytes_len , yyscan_t yyscanner)
 {
 	YY_BUFFER_STATE b;
 	char *buf;
 	yy_size_t n;
-	yy_size_t i;
+	int i;
     
 	/* Get memory for full buffer, including space for trailing EOB's. */
-	n = _yybytes_len + 2;
-	buf = (char *) xlu__disk_yyalloc(n ,yyscanner );
+	n = (yy_size_t) (_yybytes_len + 2);
+	buf = (char *) yyalloc( n , yyscanner );
 	if ( ! buf )
-		YY_FATAL_ERROR( "out of dynamic memory in xlu__disk_yy_scan_bytes()" );
+		YY_FATAL_ERROR( "out of dynamic memory in yy_scan_bytes()" );
 
 	for ( i = 0; i < _yybytes_len; ++i )
 		buf[i] = yybytes[i];
 
 	buf[_yybytes_len] = buf[_yybytes_len+1] = YY_END_OF_BUFFER_CHAR;
 
-	b = xlu__disk_yy_scan_buffer(buf,n ,yyscanner);
+	b = yy_scan_buffer( buf, n , yyscanner);
 	if ( ! b )
-		YY_FATAL_ERROR( "bad buffer in xlu__disk_yy_scan_bytes()" );
+		YY_FATAL_ERROR( "bad buffer in yy_scan_bytes()" );
 
 	/* It's okay to grow etc. this buffer, and we should throw it
 	 * away when we're done.
@@ -2371,9 +2570,11 @@ YY_BUFFER_STATE xlu__disk_yy_scan_bytes  (yyconst char * yybytes, yy_size_t  _yy
 #define YY_EXIT_FAILURE 2
 #endif
 
-static void yy_fatal_error (yyconst char* msg , yyscan_t yyscanner)
+static void yynoreturn yy_fatal_error (const char* msg , yyscan_t yyscanner)
 {
-    	(void) fprintf( stderr, "%s\n", msg );
+	struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
+	(void)yyg;
+	fprintf( stderr, "%s\n", msg );
 	exit( YY_EXIT_FAILURE );
 }
 
@@ -2399,7 +2600,7 @@ static void yy_fatal_error (yyconst char* msg , yyscan_t yyscanner)
 /** Get the user-defined data for this scanner.
  * @param yyscanner The scanner object.
  */
-YY_EXTRA_TYPE xlu__disk_yyget_extra  (yyscan_t yyscanner)
+YY_EXTRA_TYPE yyget_extra  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     return yyextra;
@@ -2408,10 +2609,10 @@ YY_EXTRA_TYPE xlu__disk_yyget_extra  (yyscan_t yyscanner)
 /** Get the current line number.
  * @param yyscanner The scanner object.
  */
-int xlu__disk_yyget_lineno  (yyscan_t yyscanner)
+int yyget_lineno  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
-    
+
         if (! YY_CURRENT_BUFFER)
             return 0;
     
@@ -2421,10 +2622,10 @@ int xlu__disk_yyget_lineno  (yyscan_t yyscanner)
 /** Get the current column number.
  * @param yyscanner The scanner object.
  */
-int xlu__disk_yyget_column  (yyscan_t yyscanner)
+int yyget_column  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
-    
+
         if (! YY_CURRENT_BUFFER)
             return 0;
     
@@ -2434,7 +2635,7 @@ int xlu__disk_yyget_column  (yyscan_t yyscanner)
 /** Get the input stream.
  * @param yyscanner The scanner object.
  */
-FILE *xlu__disk_yyget_in  (yyscan_t yyscanner)
+FILE *yyget_in  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     return yyin;
@@ -2443,7 +2644,7 @@ FILE *xlu__disk_yyget_in  (yyscan_t yyscanner)
 /** Get the output stream.
  * @param yyscanner The scanner object.
  */
-FILE *xlu__disk_yyget_out  (yyscan_t yyscanner)
+FILE *yyget_out  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     return yyout;
@@ -2452,7 +2653,7 @@ FILE *xlu__disk_yyget_out  (yyscan_t yyscanner)
 /** Get the length of the current token.
  * @param yyscanner The scanner object.
  */
-yy_size_t xlu__disk_yyget_leng  (yyscan_t yyscanner)
+int yyget_leng  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     return yyleng;
@@ -2462,7 +2663,7 @@ yy_size_t xlu__disk_yyget_leng  (yyscan_t yyscanner)
  * @param yyscanner The scanner object.
  */
 
-char *xlu__disk_yyget_text  (yyscan_t yyscanner)
+char *yyget_text  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     return yytext;
@@ -2472,90 +2673,88 @@ char *xlu__disk_yyget_text  (yyscan_t yyscanner)
  * @param user_defined The data to be associated with this scanner.
  * @param yyscanner The scanner object.
  */
-void xlu__disk_yyset_extra (YY_EXTRA_TYPE  user_defined , yyscan_t yyscanner)
+void yyset_extra (YY_EXTRA_TYPE  user_defined , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     yyextra = user_defined ;
 }
 
 /** Set the current line number.
- * @param line_number
+ * @param _line_number line number
  * @param yyscanner The scanner object.
  */
-void xlu__disk_yyset_lineno (int  line_number , yyscan_t yyscanner)
+void yyset_lineno (int  _line_number , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
         /* lineno is only valid if an input buffer exists. */
         if (! YY_CURRENT_BUFFER )
-           YY_FATAL_ERROR( "xlu__disk_yyset_lineno called with no buffer" );
+           YY_FATAL_ERROR( "yyset_lineno called with no buffer" );
     
-    yylineno = line_number;
+    yylineno = _line_number;
 }
 
 /** Set the current column.
- * @param line_number
+ * @param _column_no column number
  * @param yyscanner The scanner object.
  */
-void xlu__disk_yyset_column (int  column_no , yyscan_t yyscanner)
+void yyset_column (int  _column_no , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
         /* column is only valid if an input buffer exists. */
         if (! YY_CURRENT_BUFFER )
-           YY_FATAL_ERROR( "xlu__disk_yyset_column called with no buffer" );
+           YY_FATAL_ERROR( "yyset_column called with no buffer" );
     
-    yycolumn = column_no;
+    yycolumn = _column_no;
 }
 
 /** Set the input stream. This does not discard the current
  * input buffer.
- * @param in_str A readable stream.
+ * @param _in_str A readable stream.
  * @param yyscanner The scanner object.
- * @see xlu__disk_yy_switch_to_buffer
+ * @see yy_switch_to_buffer
  */
-void xlu__disk_yyset_in (FILE *  in_str , yyscan_t yyscanner)
+void yyset_in (FILE *  _in_str , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
-    yyin = in_str ;
+    yyin = _in_str ;
 }
 
-void xlu__disk_yyset_out (FILE *  out_str , yyscan_t yyscanner)
+void yyset_out (FILE *  _out_str , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
-    yyout = out_str ;
+    yyout = _out_str ;
 }
 
-int xlu__disk_yyget_debug  (yyscan_t yyscanner)
+int yyget_debug  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     return yy_flex_debug;
 }
 
-void xlu__disk_yyset_debug (int  bdebug , yyscan_t yyscanner)
+void yyset_debug (int  _bdebug , yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
-    yy_flex_debug = bdebug ;
+    yy_flex_debug = _bdebug ;
 }
 
 /* Accessor methods for yylval and yylloc */
 
 /* User-visible API */
 
-/* xlu__disk_yylex_init is special because it creates the scanner itself, so it is
+/* yylex_init is special because it creates the scanner itself, so it is
  * the ONLY reentrant function that doesn't take the scanner as the last argument.
  * That's why we explicitly handle the declaration, instead of using our macros.
  */
-
-int xlu__disk_yylex_init(yyscan_t* ptr_yy_globals)
-
+int yylex_init(yyscan_t* ptr_yy_globals)
 {
     if (ptr_yy_globals == NULL){
         errno = EINVAL;
         return 1;
     }
 
-    *ptr_yy_globals = (yyscan_t) xlu__disk_yyalloc ( sizeof( struct yyguts_t ), NULL );
+    *ptr_yy_globals = (yyscan_t) yyalloc ( sizeof( struct yyguts_t ), NULL );
 
     if (*ptr_yy_globals == NULL){
         errno = ENOMEM;
@@ -2568,39 +2767,37 @@ int xlu__disk_yylex_init(yyscan_t* ptr_yy_globals)
     return yy_init_globals ( *ptr_yy_globals );
 }
 
-/* xlu__disk_yylex_init_extra has the same functionality as xlu__disk_yylex_init, but follows the
+/* yylex_init_extra has the same functionality as yylex_init, but follows the
  * convention of taking the scanner as the last argument. Note however, that
  * this is a *pointer* to a scanner, as it will be allocated by this call (and
  * is the reason, too, why this function also must handle its own declaration).
- * The user defined value in the first argument will be available to xlu__disk_yyalloc in
+ * The user defined value in the first argument will be available to yyalloc in
  * the yyextra field.
  */
-
-int xlu__disk_yylex_init_extra(YY_EXTRA_TYPE yy_user_defined,yyscan_t* ptr_yy_globals )
-
+int yylex_init_extra( YY_EXTRA_TYPE yy_user_defined, yyscan_t* ptr_yy_globals )
 {
     struct yyguts_t dummy_yyguts;
 
-    xlu__disk_yyset_extra (yy_user_defined, &dummy_yyguts);
+    yyset_extra (yy_user_defined, &dummy_yyguts);
 
     if (ptr_yy_globals == NULL){
         errno = EINVAL;
         return 1;
     }
-	
-    *ptr_yy_globals = (yyscan_t) xlu__disk_yyalloc ( sizeof( struct yyguts_t ), &dummy_yyguts );
-	
+
+    *ptr_yy_globals = (yyscan_t) yyalloc ( sizeof( struct yyguts_t ), &dummy_yyguts );
+
     if (*ptr_yy_globals == NULL){
         errno = ENOMEM;
         return 1;
     }
-    
+
     /* By setting to 0xAA, we expose bugs in
     yy_init_globals. Leave at 0x00 for releases. */
     memset(*ptr_yy_globals,0x00,sizeof(struct yyguts_t));
-    
-    xlu__disk_yyset_extra (yy_user_defined, *ptr_yy_globals);
-    
+
+    yyset_extra (yy_user_defined, *ptr_yy_globals);
+
     return yy_init_globals ( *ptr_yy_globals );
 }
 
@@ -2608,13 +2805,13 @@ static int yy_init_globals (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
     /* Initialization is the same as for the non-reentrant scanner.
-     * This function is called from xlu__disk_yylex_destroy(), so don't allocate here.
+     * This function is called from yylex_destroy(), so don't allocate here.
      */
 
-    yyg->yy_buffer_stack = 0;
+    yyg->yy_buffer_stack = NULL;
     yyg->yy_buffer_stack_top = 0;
     yyg->yy_buffer_stack_max = 0;
-    yyg->yy_c_buf_p = (char *) 0;
+    yyg->yy_c_buf_p = NULL;
     yyg->yy_init = 0;
     yyg->yy_start = 0;
 
@@ -2632,45 +2829,45 @@ static int yy_init_globals (yyscan_t yyscanner)
     yyin = stdin;
     yyout = stdout;
 #else
-    yyin = (FILE *) 0;
-    yyout = (FILE *) 0;
+    yyin = NULL;
+    yyout = NULL;
 #endif
 
     /* For future reference: Set errno on error, since we are called by
-     * xlu__disk_yylex_init()
+     * yylex_init()
      */
     return 0;
 }
 
-/* xlu__disk_yylex_destroy is for both reentrant and non-reentrant scanners. */
-int xlu__disk_yylex_destroy  (yyscan_t yyscanner)
+/* yylex_destroy is for both reentrant and non-reentrant scanners. */
+int yylex_destroy  (yyscan_t yyscanner)
 {
     struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
 
     /* Pop the buffer stack, destroying each element. */
 	while(YY_CURRENT_BUFFER){
-		xlu__disk_yy_delete_buffer(YY_CURRENT_BUFFER ,yyscanner );
+		yy_delete_buffer( YY_CURRENT_BUFFER , yyscanner );
 		YY_CURRENT_BUFFER_LVALUE = NULL;
-		xlu__disk_yypop_buffer_state(yyscanner);
+		yypop_buffer_state(yyscanner);
 	}
 
 	/* Destroy the stack itself. */
-	xlu__disk_yyfree(yyg->yy_buffer_stack ,yyscanner);
+	yyfree(yyg->yy_buffer_stack , yyscanner);
 	yyg->yy_buffer_stack = NULL;
 
     /* Destroy the start condition stack. */
-        xlu__disk_yyfree(yyg->yy_start_stack ,yyscanner );
+        yyfree( yyg->yy_start_stack , yyscanner );
         yyg->yy_start_stack = NULL;
 
-    xlu__disk_yyfree ( yyg->yy_state_buf , yyscanner);
+    yyfree ( yyg->yy_state_buf , yyscanner);
     yyg->yy_state_buf  = NULL;
 
     /* Reset the globals. This is important in a non-reentrant scanner so the next time
-     * xlu__disk_yylex() is called, initialization will occur. */
+     * yylex() is called, initialization will occur. */
     yy_init_globals( yyscanner);
 
     /* Destroy the main struct (reentrant only). */
-    xlu__disk_yyfree ( yyscanner , yyscanner );
+    yyfree ( yyscanner , yyscanner );
     yyscanner = NULL;
     return 0;
 }
@@ -2680,18 +2877,21 @@ int xlu__disk_yylex_destroy  (yyscan_t yyscanner)
  */
 
 #ifndef yytext_ptr
-static void yy_flex_strncpy (char* s1, yyconst char * s2, int n , yyscan_t yyscanner)
+static void yy_flex_strncpy (char* s1, const char * s2, int n , yyscan_t yyscanner)
 {
-	register int i;
+	struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
+	(void)yyg;
+
+	int i;
 	for ( i = 0; i < n; ++i )
 		s1[i] = s2[i];
 }
 #endif
 
 #ifdef YY_NEED_STRLEN
-static int yy_flex_strlen (yyconst char * s , yyscan_t yyscanner)
+static int yy_flex_strlen (const char * s , yyscan_t yyscanner)
 {
-	register int n;
+	int n;
 	for ( n = 0; s[n]; ++n )
 		;
 
@@ -2699,13 +2899,18 @@ static int yy_flex_strlen (yyconst char * s , yyscan_t yyscanner)
 }
 #endif
 
-void *xlu__disk_yyalloc (yy_size_t  size , yyscan_t yyscanner)
+void *yyalloc (yy_size_t  size , yyscan_t yyscanner)
 {
-	return (void *) malloc( size );
+	struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
+	(void)yyg;
+	return malloc(size);
 }
 
-void *xlu__disk_yyrealloc  (void * ptr, yy_size_t  size , yyscan_t yyscanner)
+void *yyrealloc  (void * ptr, yy_size_t  size , yyscan_t yyscanner)
 {
+	struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
+	(void)yyg;
+
 	/* The cast to (char *) in the following accommodates both
 	 * implementations that use char* generic pointers, and those
 	 * that use void* generic pointers.  It works with the latter
@@ -2713,14 +2918,16 @@ void *xlu__disk_yyrealloc  (void * ptr, yy_size_t  size , yyscan_t yyscanner)
 	 * any pointer type to void*, and deal with argument conversions
 	 * as though doing an assignment.
 	 */
-	return (void *) realloc( (char *) ptr, size );
+	return realloc(ptr, size);
 }
 
-void xlu__disk_yyfree (void * ptr , yyscan_t yyscanner)
+void yyfree (void * ptr , yyscan_t yyscanner)
 {
-	free( (char *) ptr );	/* see xlu__disk_yyrealloc() for (char *) cast */
+	struct yyguts_t * yyg = (struct yyguts_t*)yyscanner;
+	(void)yyg;
+	free( (char *) ptr );	/* see yyrealloc() for (char *) cast */
 }
 
 #define YYTABLES_NAME "yytables"
 
-#line 278 "libxlu_disk_l.l"
+#line 280 "libxlu_disk_l.l"
diff --git a/tools/libxl/libxlu_disk_l.h b/tools/libxl/libxlu_disk_l.h
index 4d60d5c563b5..9275a3ab5547 100644
--- a/tools/libxl/libxlu_disk_l.h
+++ b/tools/libxl/libxlu_disk_l.h
@@ -3,12 +3,9 @@
 #define xlu__disk_yyIN_HEADER 1
 
 #line 6 "libxlu_disk_l.h"
-#line 31 "libxlu_disk_l.l"
 #include "libxl_osdeps.h" /* must come before any other headers */
 
-
-
-#line 12 "libxlu_disk_l.h"
+#line 9 "libxlu_disk_l.h"
 
 #define  YY_INT_ALIGNED short int
 
@@ -16,12 +13,222 @@
 
 #define FLEX_SCANNER
 #define YY_FLEX_MAJOR_VERSION 2
-#define YY_FLEX_MINOR_VERSION 5
-#define YY_FLEX_SUBMINOR_VERSION 39
+#define YY_FLEX_MINOR_VERSION 6
+#define YY_FLEX_SUBMINOR_VERSION 4
 #if YY_FLEX_SUBMINOR_VERSION > 0
 #define FLEX_BETA
 #endif
 
+#ifdef yy_create_buffer
+#define xlu__disk_yy_create_buffer_ALREADY_DEFINED
+#else
+#define yy_create_buffer xlu__disk_yy_create_buffer
+#endif
+
+#ifdef yy_delete_buffer
+#define xlu__disk_yy_delete_buffer_ALREADY_DEFINED
+#else
+#define yy_delete_buffer xlu__disk_yy_delete_buffer
+#endif
+
+#ifdef yy_scan_buffer
+#define xlu__disk_yy_scan_buffer_ALREADY_DEFINED
+#else
+#define yy_scan_buffer xlu__disk_yy_scan_buffer
+#endif
+
+#ifdef yy_scan_string
+#define xlu__disk_yy_scan_string_ALREADY_DEFINED
+#else
+#define yy_scan_string xlu__disk_yy_scan_string
+#endif
+
+#ifdef yy_scan_bytes
+#define xlu__disk_yy_scan_bytes_ALREADY_DEFINED
+#else
+#define yy_scan_bytes xlu__disk_yy_scan_bytes
+#endif
+
+#ifdef yy_init_buffer
+#define xlu__disk_yy_init_buffer_ALREADY_DEFINED
+#else
+#define yy_init_buffer xlu__disk_yy_init_buffer
+#endif
+
+#ifdef yy_flush_buffer
+#define xlu__disk_yy_flush_buffer_ALREADY_DEFINED
+#else
+#define yy_flush_buffer xlu__disk_yy_flush_buffer
+#endif
+
+#ifdef yy_load_buffer_state
+#define xlu__disk_yy_load_buffer_state_ALREADY_DEFINED
+#else
+#define yy_load_buffer_state xlu__disk_yy_load_buffer_state
+#endif
+
+#ifdef yy_switch_to_buffer
+#define xlu__disk_yy_switch_to_buffer_ALREADY_DEFINED
+#else
+#define yy_switch_to_buffer xlu__disk_yy_switch_to_buffer
+#endif
+
+#ifdef yypush_buffer_state
+#define xlu__disk_yypush_buffer_state_ALREADY_DEFINED
+#else
+#define yypush_buffer_state xlu__disk_yypush_buffer_state
+#endif
+
+#ifdef yypop_buffer_state
+#define xlu__disk_yypop_buffer_state_ALREADY_DEFINED
+#else
+#define yypop_buffer_state xlu__disk_yypop_buffer_state
+#endif
+
+#ifdef yyensure_buffer_stack
+#define xlu__disk_yyensure_buffer_stack_ALREADY_DEFINED
+#else
+#define yyensure_buffer_stack xlu__disk_yyensure_buffer_stack
+#endif
+
+#ifdef yylex
+#define xlu__disk_yylex_ALREADY_DEFINED
+#else
+#define yylex xlu__disk_yylex
+#endif
+
+#ifdef yyrestart
+#define xlu__disk_yyrestart_ALREADY_DEFINED
+#else
+#define yyrestart xlu__disk_yyrestart
+#endif
+
+#ifdef yylex_init
+#define xlu__disk_yylex_init_ALREADY_DEFINED
+#else
+#define yylex_init xlu__disk_yylex_init
+#endif
+
+#ifdef yylex_init_extra
+#define xlu__disk_yylex_init_extra_ALREADY_DEFINED
+#else
+#define yylex_init_extra xlu__disk_yylex_init_extra
+#endif
+
+#ifdef yylex_destroy
+#define xlu__disk_yylex_destroy_ALREADY_DEFINED
+#else
+#define yylex_destroy xlu__disk_yylex_destroy
+#endif
+
+#ifdef yyget_debug
+#define xlu__disk_yyget_debug_ALREADY_DEFINED
+#else
+#define yyget_debug xlu__disk_yyget_debug
+#endif
+
+#ifdef yyset_debug
+#define xlu__disk_yyset_debug_ALREADY_DEFINED
+#else
+#define yyset_debug xlu__disk_yyset_debug
+#endif
+
+#ifdef yyget_extra
+#define xlu__disk_yyget_extra_ALREADY_DEFINED
+#else
+#define yyget_extra xlu__disk_yyget_extra
+#endif
+
+#ifdef yyset_extra
+#define xlu__disk_yyset_extra_ALREADY_DEFINED
+#else
+#define yyset_extra xlu__disk_yyset_extra
+#endif
+
+#ifdef yyget_in
+#define xlu__disk_yyget_in_ALREADY_DEFINED
+#else
+#define yyget_in xlu__disk_yyget_in
+#endif
+
+#ifdef yyset_in
+#define xlu__disk_yyset_in_ALREADY_DEFINED
+#else
+#define yyset_in xlu__disk_yyset_in
+#endif
+
+#ifdef yyget_out
+#define xlu__disk_yyget_out_ALREADY_DEFINED
+#else
+#define yyget_out xlu__disk_yyget_out
+#endif
+
+#ifdef yyset_out
+#define xlu__disk_yyset_out_ALREADY_DEFINED
+#else
+#define yyset_out xlu__disk_yyset_out
+#endif
+
+#ifdef yyget_leng
+#define xlu__disk_yyget_leng_ALREADY_DEFINED
+#else
+#define yyget_leng xlu__disk_yyget_leng
+#endif
+
+#ifdef yyget_text
+#define xlu__disk_yyget_text_ALREADY_DEFINED
+#else
+#define yyget_text xlu__disk_yyget_text
+#endif
+
+#ifdef yyget_lineno
+#define xlu__disk_yyget_lineno_ALREADY_DEFINED
+#else
+#define yyget_lineno xlu__disk_yyget_lineno
+#endif
+
+#ifdef yyset_lineno
+#define xlu__disk_yyset_lineno_ALREADY_DEFINED
+#else
+#define yyset_lineno xlu__disk_yyset_lineno
+#endif
+
+#ifdef yyget_column
+#define xlu__disk_yyget_column_ALREADY_DEFINED
+#else
+#define yyget_column xlu__disk_yyget_column
+#endif
+
+#ifdef yyset_column
+#define xlu__disk_yyset_column_ALREADY_DEFINED
+#else
+#define yyset_column xlu__disk_yyset_column
+#endif
+
+#ifdef yywrap
+#define xlu__disk_yywrap_ALREADY_DEFINED
+#else
+#define yywrap xlu__disk_yywrap
+#endif
+
+#ifdef yyalloc
+#define xlu__disk_yyalloc_ALREADY_DEFINED
+#else
+#define yyalloc xlu__disk_yyalloc
+#endif
+
+#ifdef yyrealloc
+#define xlu__disk_yyrealloc_ALREADY_DEFINED
+#else
+#define yyrealloc xlu__disk_yyrealloc
+#endif
+
+#ifdef yyfree
+#define xlu__disk_yyfree_ALREADY_DEFINED
+#else
+#define yyfree xlu__disk_yyfree
+#endif
+
 /* First, we deal with  platform-specific or compiler-specific issues. */
 
 /* begin standard C headers. */
@@ -92,29 +299,23 @@ typedef unsigned int flex_uint32_t;
 #define UINT32_MAX             (4294967295U)
 #endif
 
+#ifndef SIZE_MAX
+#define SIZE_MAX               (~(size_t)0)
+#endif
+
 #endif /* ! C99 */
 
 #endif /* ! FLEXINT_H */
 
-#ifdef __cplusplus
-
-/* The "const" storage-class-modifier is valid. */
-#define YY_USE_CONST
-
-#else	/* ! __cplusplus */
-
-/* C99 requires __STDC__ to be defined as 1. */
-#if defined (__STDC__)
-
-#define YY_USE_CONST
-
-#endif	/* defined (__STDC__) */
-#endif	/* ! __cplusplus */
+/* begin standard C++ headers. */
 
-#ifdef YY_USE_CONST
+/* TODO: this is always defined, so inline it */
 #define yyconst const
+
+#if defined(__GNUC__) && __GNUC__ >= 3
+#define yynoreturn __attribute__((__noreturn__))
 #else
-#define yyconst
+#define yynoreturn
 #endif
 
 /* An opaque pointer. */
@@ -169,12 +370,12 @@ struct yy_buffer_state
 	/* Size of input buffer in bytes, not including room for EOB
 	 * characters.
 	 */
-	yy_size_t yy_buf_size;
+	int yy_buf_size;
 
 	/* Number of characters read into yy_ch_buf, not including EOB
 	 * characters.
 	 */
-	yy_size_t yy_n_chars;
+	int yy_n_chars;
 
 	/* Whether we "own" the buffer - i.e., we know we created it,
 	 * and can realloc() it to grow it, and should free() it to
@@ -197,7 +398,7 @@ struct yy_buffer_state
 
     int yy_bs_lineno; /**< The line count. */
     int yy_bs_column; /**< The column count. */
-    
+
 	/* Whether to try to fill the input buffer when we reach the
 	 * end of it.
 	 */
@@ -208,23 +409,23 @@ struct yy_buffer_state
 	};
 #endif /* !YY_STRUCT_YY_BUFFER_STATE */
 
-void xlu__disk_yyrestart (FILE *input_file ,yyscan_t yyscanner );
-void xlu__disk_yy_switch_to_buffer (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
-YY_BUFFER_STATE xlu__disk_yy_create_buffer (FILE *file,int size ,yyscan_t yyscanner );
-void xlu__disk_yy_delete_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
-void xlu__disk_yy_flush_buffer (YY_BUFFER_STATE b ,yyscan_t yyscanner );
-void xlu__disk_yypush_buffer_state (YY_BUFFER_STATE new_buffer ,yyscan_t yyscanner );
-void xlu__disk_yypop_buffer_state (yyscan_t yyscanner );
+void yyrestart ( FILE *input_file , yyscan_t yyscanner );
+void yy_switch_to_buffer ( YY_BUFFER_STATE new_buffer , yyscan_t yyscanner );
+YY_BUFFER_STATE yy_create_buffer ( FILE *file, int size , yyscan_t yyscanner );
+void yy_delete_buffer ( YY_BUFFER_STATE b , yyscan_t yyscanner );
+void yy_flush_buffer ( YY_BUFFER_STATE b , yyscan_t yyscanner );
+void yypush_buffer_state ( YY_BUFFER_STATE new_buffer , yyscan_t yyscanner );
+void yypop_buffer_state ( yyscan_t yyscanner );
 
-YY_BUFFER_STATE xlu__disk_yy_scan_buffer (char *base,yy_size_t size ,yyscan_t yyscanner );
-YY_BUFFER_STATE xlu__disk_yy_scan_string (yyconst char *yy_str ,yyscan_t yyscanner );
-YY_BUFFER_STATE xlu__disk_yy_scan_bytes (yyconst char *bytes,yy_size_t len ,yyscan_t yyscanner );
+YY_BUFFER_STATE yy_scan_buffer ( char *base, yy_size_t size , yyscan_t yyscanner );
+YY_BUFFER_STATE yy_scan_string ( const char *yy_str , yyscan_t yyscanner );
+YY_BUFFER_STATE yy_scan_bytes ( const char *bytes, int len , yyscan_t yyscanner );
 
-void *xlu__disk_yyalloc (yy_size_t ,yyscan_t yyscanner );
-void *xlu__disk_yyrealloc (void *,yy_size_t ,yyscan_t yyscanner );
-void xlu__disk_yyfree (void * ,yyscan_t yyscanner );
+void *yyalloc ( yy_size_t , yyscan_t yyscanner );
+void *yyrealloc ( void *, yy_size_t , yyscan_t yyscanner );
+void yyfree ( void * , yyscan_t yyscanner );
 
-#define xlu__disk_yywrap(yyscanner) 1
+#define xlu__disk_yywrap(yyscanner) (/*CONSTCOND*/1)
 #define YY_SKIP_YYWRAP
 
 #define yytext_ptr yytext_r
@@ -247,42 +448,42 @@ void xlu__disk_yyfree (void * ,yyscan_t yyscanner );
 #define YY_EXTRA_TYPE void *
 #endif
 
-int xlu__disk_yylex_init (yyscan_t* scanner);
+int yylex_init (yyscan_t* scanner);
 
-int xlu__disk_yylex_init_extra (YY_EXTRA_TYPE user_defined,yyscan_t* scanner);
+int yylex_init_extra ( YY_EXTRA_TYPE user_defined, yyscan_t* scanner);
 
 /* Accessor methods to globals.
    These are made visible to non-reentrant scanners for convenience. */
 
-int xlu__disk_yylex_destroy (yyscan_t yyscanner );
+int yylex_destroy ( yyscan_t yyscanner );
 
-int xlu__disk_yyget_debug (yyscan_t yyscanner );
+int yyget_debug ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_debug (int debug_flag ,yyscan_t yyscanner );
+void yyset_debug ( int debug_flag , yyscan_t yyscanner );
 
-YY_EXTRA_TYPE xlu__disk_yyget_extra (yyscan_t yyscanner );
+YY_EXTRA_TYPE yyget_extra ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_extra (YY_EXTRA_TYPE user_defined ,yyscan_t yyscanner );
+void yyset_extra ( YY_EXTRA_TYPE user_defined , yyscan_t yyscanner );
 
-FILE *xlu__disk_yyget_in (yyscan_t yyscanner );
+FILE *yyget_in ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_in  (FILE * in_str ,yyscan_t yyscanner );
+void yyset_in  ( FILE * _in_str , yyscan_t yyscanner );
 
-FILE *xlu__disk_yyget_out (yyscan_t yyscanner );
+FILE *yyget_out ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_out  (FILE * out_str ,yyscan_t yyscanner );
+void yyset_out  ( FILE * _out_str , yyscan_t yyscanner );
 
-yy_size_t xlu__disk_yyget_leng (yyscan_t yyscanner );
+			int yyget_leng ( yyscan_t yyscanner );
 
-char *xlu__disk_yyget_text (yyscan_t yyscanner );
+char *yyget_text ( yyscan_t yyscanner );
 
-int xlu__disk_yyget_lineno (yyscan_t yyscanner );
+int yyget_lineno ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_lineno (int line_number ,yyscan_t yyscanner );
+void yyset_lineno ( int _line_number , yyscan_t yyscanner );
 
-int xlu__disk_yyget_column  (yyscan_t yyscanner );
+int yyget_column  ( yyscan_t yyscanner );
 
-void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
+void yyset_column ( int _column_no , yyscan_t yyscanner );
 
 /* Macros after this point can all be overridden by user definitions in
  * section 1.
@@ -290,18 +491,18 @@ void xlu__disk_yyset_column (int column_no ,yyscan_t yyscanner );
 
 #ifndef YY_SKIP_YYWRAP
 #ifdef __cplusplus
-extern "C" int xlu__disk_yywrap (yyscan_t yyscanner );
+extern "C" int yywrap ( yyscan_t yyscanner );
 #else
-extern int xlu__disk_yywrap (yyscan_t yyscanner );
+extern int yywrap ( yyscan_t yyscanner );
 #endif
 #endif
 
 #ifndef yytext_ptr
-static void yy_flex_strncpy (char *,yyconst char *,int ,yyscan_t yyscanner);
+static void yy_flex_strncpy ( char *, const char *, int , yyscan_t yyscanner);
 #endif
 
 #ifdef YY_NEED_STRLEN
-static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
+static int yy_flex_strlen ( const char * , yyscan_t yyscanner);
 #endif
 
 #ifndef YY_NO_INPUT
@@ -329,9 +530,9 @@ static int yy_flex_strlen (yyconst char * ,yyscan_t yyscanner);
 #ifndef YY_DECL
 #define YY_DECL_IS_OURS 1
 
-extern int xlu__disk_yylex (yyscan_t yyscanner);
+extern int yylex (yyscan_t yyscanner);
 
-#define YY_DECL int xlu__disk_yylex (yyscan_t yyscanner)
+#define YY_DECL int yylex (yyscan_t yyscanner)
 #endif /* !YY_DECL */
 
 /* yy_get_previous_state - get the state just before the EOB char was reached */
@@ -348,8 +549,153 @@ extern int xlu__disk_yylex (yyscan_t yyscanner);
 #undef YY_DECL
 #endif
 
-#line 278 "libxlu_disk_l.l"
+#ifndef xlu__disk_yy_create_buffer_ALREADY_DEFINED
+#undef yy_create_buffer
+#endif
+#ifndef xlu__disk_yy_delete_buffer_ALREADY_DEFINED
+#undef yy_delete_buffer
+#endif
+#ifndef xlu__disk_yy_scan_buffer_ALREADY_DEFINED
+#undef yy_scan_buffer
+#endif
+#ifndef xlu__disk_yy_scan_string_ALREADY_DEFINED
+#undef yy_scan_string
+#endif
+#ifndef xlu__disk_yy_scan_bytes_ALREADY_DEFINED
+#undef yy_scan_bytes
+#endif
+#ifndef xlu__disk_yy_init_buffer_ALREADY_DEFINED
+#undef yy_init_buffer
+#endif
+#ifndef xlu__disk_yy_flush_buffer_ALREADY_DEFINED
+#undef yy_flush_buffer
+#endif
+#ifndef xlu__disk_yy_load_buffer_state_ALREADY_DEFINED
+#undef yy_load_buffer_state
+#endif
+#ifndef xlu__disk_yy_switch_to_buffer_ALREADY_DEFINED
+#undef yy_switch_to_buffer
+#endif
+#ifndef xlu__disk_yypush_buffer_state_ALREADY_DEFINED
+#undef yypush_buffer_state
+#endif
+#ifndef xlu__disk_yypop_buffer_state_ALREADY_DEFINED
+#undef yypop_buffer_state
+#endif
+#ifndef xlu__disk_yyensure_buffer_stack_ALREADY_DEFINED
+#undef yyensure_buffer_stack
+#endif
+#ifndef xlu__disk_yylex_ALREADY_DEFINED
+#undef yylex
+#endif
+#ifndef xlu__disk_yyrestart_ALREADY_DEFINED
+#undef yyrestart
+#endif
+#ifndef xlu__disk_yylex_init_ALREADY_DEFINED
+#undef yylex_init
+#endif
+#ifndef xlu__disk_yylex_init_extra_ALREADY_DEFINED
+#undef yylex_init_extra
+#endif
+#ifndef xlu__disk_yylex_destroy_ALREADY_DEFINED
+#undef yylex_destroy
+#endif
+#ifndef xlu__disk_yyget_debug_ALREADY_DEFINED
+#undef yyget_debug
+#endif
+#ifndef xlu__disk_yyset_debug_ALREADY_DEFINED
+#undef yyset_debug
+#endif
+#ifndef xlu__disk_yyget_extra_ALREADY_DEFINED
+#undef yyget_extra
+#endif
+#ifndef xlu__disk_yyset_extra_ALREADY_DEFINED
+#undef yyset_extra
+#endif
+#ifndef xlu__disk_yyget_in_ALREADY_DEFINED
+#undef yyget_in
+#endif
+#ifndef xlu__disk_yyset_in_ALREADY_DEFINED
+#undef yyset_in
+#endif
+#ifndef xlu__disk_yyget_out_ALREADY_DEFINED
+#undef yyget_out
+#endif
+#ifndef xlu__disk_yyset_out_ALREADY_DEFINED
+#undef yyset_out
+#endif
+#ifndef xlu__disk_yyget_leng_ALREADY_DEFINED
+#undef yyget_leng
+#endif
+#ifndef xlu__disk_yyget_text_ALREADY_DEFINED
+#undef yyget_text
+#endif
+#ifndef xlu__disk_yyget_lineno_ALREADY_DEFINED
+#undef yyget_lineno
+#endif
+#ifndef xlu__disk_yyset_lineno_ALREADY_DEFINED
+#undef yyset_lineno
+#endif
+#ifndef xlu__disk_yyget_column_ALREADY_DEFINED
+#undef yyget_column
+#endif
+#ifndef xlu__disk_yyset_column_ALREADY_DEFINED
+#undef yyset_column
+#endif
+#ifndef xlu__disk_yywrap_ALREADY_DEFINED
+#undef yywrap
+#endif
+#ifndef xlu__disk_yyget_lval_ALREADY_DEFINED
+#undef yyget_lval
+#endif
+#ifndef xlu__disk_yyset_lval_ALREADY_DEFINED
+#undef yyset_lval
+#endif
+#ifndef xlu__disk_yyget_lloc_ALREADY_DEFINED
+#undef yyget_lloc
+#endif
+#ifndef xlu__disk_yyset_lloc_ALREADY_DEFINED
+#undef yyset_lloc
+#endif
+#ifndef xlu__disk_yyalloc_ALREADY_DEFINED
+#undef yyalloc
+#endif
+#ifndef xlu__disk_yyrealloc_ALREADY_DEFINED
+#undef yyrealloc
+#endif
+#ifndef xlu__disk_yyfree_ALREADY_DEFINED
+#undef yyfree
+#endif
+#ifndef xlu__disk_yytext_ALREADY_DEFINED
+#undef yytext
+#endif
+#ifndef xlu__disk_yyleng_ALREADY_DEFINED
+#undef yyleng
+#endif
+#ifndef xlu__disk_yyin_ALREADY_DEFINED
+#undef yyin
+#endif
+#ifndef xlu__disk_yyout_ALREADY_DEFINED
+#undef yyout
+#endif
+#ifndef xlu__disk_yy_flex_debug_ALREADY_DEFINED
+#undef yy_flex_debug
+#endif
+#ifndef xlu__disk_yylineno_ALREADY_DEFINED
+#undef yylineno
+#endif
+#ifndef xlu__disk_yytables_fload_ALREADY_DEFINED
+#undef yytables_fload
+#endif
+#ifndef xlu__disk_yytables_destroy_ALREADY_DEFINED
+#undef yytables_destroy
+#endif
+#ifndef xlu__disk_yyTABLES_NAME_ALREADY_DEFINED
+#undef yyTABLES_NAME
+#endif
+
+#line 280 "libxlu_disk_l.l"
 
-#line 354 "libxlu_disk_l.h"
+#line 700 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
 #endif /* xlu__disk_yyHEADER_H */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 06 17:17:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 17:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWNfA-0005qW-Kc; Wed, 06 May 2020 17:16:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZA5M=6U=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jWNf8-0005qR-Vc
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 17:16:51 +0000
X-Inumbo-ID: 5f5ed380-8fbd-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f5ed380-8fbd-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 17:16:50 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 49CED20708;
 Wed,  6 May 2020 17:16:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588785409;
 bh=Z6i2tGad7I5om8sKqamcWlkOEEXLp2HDAdb+ww7BBt8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=Wh5LqfVDBWL0H0ggLZnIlJ+CT3Y7OxDeYlhpseLtfuNJA6fdkW8kgQGP3+btRIMzt
 wqDUv3ccqA9HTkBx1nNzGGVjX1Y+Aa/iKHDzklJv3LaVRGq9oYC+jLoU4jA2ROtaRS
 XVOMCEMSxVTGLuuxfs839h1YkbckYWtcaP/jABBs=
Date: Wed, 6 May 2020 10:16:48 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Peng Fan <peng.fan@nxp.com>
Subject: RE: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
Message-ID: <alpine.DEB.2.21.2005061015050.14706@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1151900323-1588785409=:14706"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "minyard@acm.org" <minyard@acm.org>, Roman Shaposhnik <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1151900323-1588785409=:14706
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 6 May 2020, Peng Fan wrote:
> > Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
> > 
> > On 06.05.20 00:34, Stefano Stabellini wrote:
> > > + Boris, Jürgen
> > >
> > > See the crash Roman is seeing booting dom0 on the Raspberry Pi. It is
> > > related to the recent xen dma_ops changes. Possibly the same thing
> > > reported by Peng here:
> 
> Yes. It is same issue.
> 
> > >
> > > https://marc.info/?l=linux-kernel&m=158805976230485&w=2
> > >
> > > An in-depth analysis below.
> > >
> > >
> > > On Mon, 4 May 2020, Roman Shaposhnik wrote:
> > >>>> [    2.534292] Unable to handle kernel paging request at virtual
> > >>>> address 000000000026c340
> > >>>> [    2.542373] Mem abort info:
> > >>>> [    2.545257]   ESR = 0x96000004
> > >>>> [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
> > >>>> [    2.553877]   SET = 0, FnV = 0
> > >>>> [    2.557023]   EA = 0, S1PTW = 0
> > >>>> [    2.560297] Data abort info:
> > >>>> [    2.563258]   ISV = 0, ISS = 0x00000004
> > >>>> [    2.567208]   CM = 0, WnR = 0
> > >>>> [    2.570294] [000000000026c340] user address but active_mm is
> > swapper
> > >>>> [    2.576783] Internal error: Oops: 96000004 [#1] SMP
> > >>>> [    2.581784] Modules linked in:
> > >>>> [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted
> > 5.6.1-default #9
> > >>>> [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
> > >>>> [    2.597256] Workqueue: events deferred_probe_work_func
> > >>>> [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
> > >>>> [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
> > >>>> [    2.612696] lr : dma_free_attrs+0x98/0xd0
> > >>>> [    2.616827] sp : ffff800011db3970
> > >>>> [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
> > >>>> [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
> > >>>> [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
> > >>>> [    2.636583] x23: 0000000000000000 x22: 0000000000000000
> > >>>> [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
> > >>>> [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
> > >>>> [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
> > >>>> [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
> > >>>> [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
> > >>>> [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
> > >>>> [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
> > >>>> [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
> > >>>> [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
> > >>>> [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
> > >>>> [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
> > >>>> [    2.701899] Call trace:
> > >>>> [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
> > >>>> [    2.709367]  dma_free_attrs+0x98/0xd0
> > >>>> [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
> > >>>> [    2.718146]  rpi_firmware_property+0x6c/0xb0
> > >>>> [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
> > >>>> [    2.726760]  platform_drv_probe+0x50/0xa0
> > >>>> [    2.730879]  really_probe+0xd8/0x438
> > >>>> [    2.734567]  driver_probe_device+0xdc/0x130
> > >>>> [    2.738870]  __device_attach_driver+0x88/0x108
> > >>>> [    2.743434]  bus_for_each_drv+0x78/0xc8
> > >>>> [    2.747386]  __device_attach+0xd4/0x158
> > >>>> [    2.751337]  device_initial_probe+0x10/0x18
> > >>>> [    2.755649]  bus_probe_device+0x90/0x98
> > >>>> [    2.759590]  deferred_probe_work_func+0x88/0xd8
> > >>>> [    2.764244]  process_one_work+0x1f0/0x3c0
> > >>>> [    2.768369]  worker_thread+0x138/0x570
> > >>>> [    2.772234]  kthread+0x118/0x120
> > >>>> [    2.775571]  ret_from_fork+0x10/0x18
> > >>>> [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001
> > (f8626800)
> > >>>> [    2.785492] ---[ end trace 4c435212e349f45f ]---
> > >>>> [    2.793340] usb 1-1: New USB device found, idVendor=2109,
> > >>>> idProduct=3431, bcdDevice= 4.20
> > >>>> [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1,
> > SerialNumber=0
> > >>>> [    2.808297] usb 1-1: Product: USB2.0 Hub
> > >>>> [    2.813710] hub 1-1:1.0: USB hub found
> > >>>> [    2.817117] hub 1-1:1.0: 4 ports detected
> > >>>>
> > >>>> This is bailing out right here:
> > >>>>
> > >>>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
> > >>>>
> > >>>> FYIW (since I modified the source to actually print what was
> > >>>> returned right before it bails) we get:
> > >>>>     buf[1] == 0x800000004
> > >>>>     buf[2] == 0x00000001
> > >>>>
> > >>>> Status 0x800000004 is of course suspicious since it is not even listed
> > here:
> > >>>>
> > >>>> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
> > >>>>
> > >>>> So it appears that this DMA request path is somehow busted and it
> > >>>> would be really nice to figure out why.
> > >>>
> > >>> You have actually discovered a genuine bug in the recent xen dma
> > >>> rework in Linux. Congrats :-)
> > >>
> > >> Nice! ;-)
> > >>
> > >>> I am doing some guesswork here, but from what I read in the thread
> > >>> and the information in this email I think this patch might fix the issue.
> > >>> If it doesn't fix the issue please add a few printks in
> > >>> drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and please let
> > >>> me know where exactly it crashes.
> > >>>
> > >>>
> > >>> diff --git a/include/xen/arm/page-coherent.h
> > >>> b/include/xen/arm/page-coherent.h index b9cc11e887ed..ff4677ed9788
> > >>> 100644
> > >>> --- a/include/xen/arm/page-coherent.h
> > >>> +++ b/include/xen/arm/page-coherent.h
> > >>> @@ -8,12 +8,17 @@
> > >>>   static inline void *xen_alloc_coherent_pages(struct device *hwdev,
> > size_t size,
> > >>>                  dma_addr_t *dma_handle, gfp_t flags, unsigned
> > long attrs)
> > >>>   {
> > >>> +       void *cpu_addr;
> > >>> +       if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle,
> > &cpu_addr))
> > >>> +               return cpu_addr;
> > >>>          return dma_direct_alloc(hwdev, size, dma_handle, flags,
> > attrs);
> > >>>   }
> > >>>
> > >>>   static inline void xen_free_coherent_pages(struct device *hwdev,
> > size_t size,
> > >>>                  void *cpu_addr, dma_addr_t dma_handle,
> > unsigned long attrs)
> > >>>   {
> > >>> +       if (dma_release_from_dev_coherent(hwdev, get_order(size),
> > cpu_addr))
> > >>> +               return;
> > >>>          dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> > >>>   }
> > >>
> > >> Applied the patch, but it didn't help and after printk's it turns out
> > >> it surprisingly crashes right inside this (rather convoluted if you
> > >> ask me) if statement:
> > >>
> > >> https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/xen/swiotlb-xen.c?h=v5.6.1#n349
> > >>
> > >> So it makes sense that the patch didn't help -- we never hit that
> > >> xen_free_coherent_pages.
> > >
> > > The crash happens here:
> > >
> > > 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> > > 		     range_straddles_page_boundary(phys, size)) &&
> > > 	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> > > 		xen_destroy_contiguous_region(phys, order);
> > >
> > > I don't know exactly what is causing the crash. Is it the WARN_ON
> > somehow?
> > > Is it TestClearPageXenRemapped? Neither should cause a crash in theory.
> > >
> > >
> > > But I do know that there are problems with that if statement on ARM.
> > > It can trigger for one of the following conditions:
> > >
> > > 1) dev_addr + size - 1 > dma_mask
> > > 2) range_straddles_page_boundary(phys, size)
> 
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index bd3a10dfac15..33b027cb0b2a 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -346,6 +346,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>         /* Convert the size to actually allocated. */
>         size = 1UL << (order + XEN_PAGE_SHIFT);
> 
> +       printk("%x %x %px %x %px\n", dev_addr + size - 1 > dma_mask, range_straddles_page_boundary(phys, size), virt_to_page(vaddr), phys, vaddr);
>         if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
>                      range_straddles_page_boundary(phys, size)) &&
>             TestClearPageXenRemapped(virt_to_page(vaddr)))
> 
> 
> In my case:
> 0 0 0000000000271f40 bc000000 ffff800011c7d000
> 
> 
> The alloc path in my side:
> 314         phys = *dma_handle;
> 315         dev_addr = xen_phys_to_bus(phys);
> 316         if (((dev_addr + size - 1 <= dma_mask)) &&
> 317             !range_straddles_page_boundary(phys, size))
> 318                 *dma_handle = dev_addr;
> 
> So I am confused why the free path needs clear xen remap.

Thanks, this is useful information; now I understand what is going on.

In your case xen_swiotlb_free_coherent shouldn't take the body of the if
at all, but it still has to evaluate the if condition. So the condition
would come out false for you, but evaluation never gets that far,
because virt_to_page crashes first.
--8323329-1151900323-1588785409=:14706--


From xen-devel-bounces@lists.xenproject.org Wed May 06 17:32:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 17:32:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWNu9-0007QV-03; Wed, 06 May 2020 17:32:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZA5M=6U=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jWNu7-0007QQ-4s
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 17:32:19 +0000
X-Inumbo-ID: 889f9232-8fbf-11ea-b07b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 889f9232-8fbf-11ea-b07b-bc764e2007e4;
 Wed, 06 May 2020 17:32:18 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6ECA8206B8;
 Wed,  6 May 2020 17:32:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588786337;
 bh=DhYRRTYsd0ZGJxyFjp1HPYez84XZ12Uzat8+4ao82Aw=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=0goh2pqnlYmSdrCI90NanZqGKutyva/b8ZDldg3bSZpoCWkruOVfhvzr9JfrK++hY
 BDWvpykkr2kgVTOY65986A9OGtJjMdTddBf9pN9FABVP6G5k4f7vWMPKzEQ6KGCkzy
 kh7TmAxIKJ7rx6TYIVLLV8B/wqtlMft7tPBwhAUo=
Date: Wed, 6 May 2020 10:32:16 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
Message-ID: <alpine.DEB.2.21.2005061018020.14706@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1633917079-1588785724=:14706"
Content-ID: <alpine.DEB.2.21.2005061023130.14706@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: peng.fan@nxp.com, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, minyard@acm.org,
 Roman Shaposhnik <roman@zededa.com>, jeff.kubascik@dornerworks.com,
 Julien Grall <julien.grall@arm.com>, xen-devel@lists.xenproject.org,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1633917079-1588785724=:14706
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2005061023131.14706@sstabellini-ThinkPad-T480s>

On Wed, 6 May 2020, Jürgen Groß wrote:
> On 06.05.20 00:34, Stefano Stabellini wrote:
> > + Boris, Jürgen
> > 
> > See the crash Roman is seeing booting dom0 on the Raspberry Pi. It is
> > related to the recent xen dma_ops changes. Possibly the same thing
> > reported by Peng here:
> > 
> > https://marc.info/?l=linux-kernel&m=158805976230485&w=2
> > 
> > An in-depth analysis below.
> > 
> > 
> > On Mon, 4 May 2020, Roman Shaposhnik wrote:
> > > > > [    2.534292] Unable to handle kernel paging request at virtual address 000000000026c340
> > > > > [    2.542373] Mem abort info:
> > > > > [    2.545257]   ESR = 0x96000004
> > > > > [    2.548421]   EC = 0x25: DABT (current EL), IL = 32 bits
> > > > > [    2.553877]   SET = 0, FnV = 0
> > > > > [    2.557023]   EA = 0, S1PTW = 0
> > > > > [    2.560297] Data abort info:
> > > > > [    2.563258]   ISV = 0, ISS = 0x00000004
> > > > > [    2.567208]   CM = 0, WnR = 0
> > > > > [    2.570294] [000000000026c340] user address but active_mm is swapper
> > > > > [    2.576783] Internal error: Oops: 96000004 [#1] SMP
> > > > > [    2.581784] Modules linked in:
> > > > > [    2.584950] CPU: 3 PID: 135 Comm: kworker/3:1 Not tainted 5.6.1-default #9
> > > > > [    2.591970] Hardware name: Raspberry Pi 4 Model B (DT)
> > > > > [    2.597256] Workqueue: events deferred_probe_work_func
> > > > > [    2.602509] pstate: 60000005 (nZCv daif -PAN -UAO)
> > > > > [    2.607431] pc : xen_swiotlb_free_coherent+0x198/0x1d8
> > > > > [    2.612696] lr : dma_free_attrs+0x98/0xd0
> > > > > [    2.616827] sp : ffff800011db3970
> > > > > [    2.620242] x29: ffff800011db3970 x28: 00000000000f7b00
> > > > > [    2.625695] x27: 0000000000001000 x26: ffff000037b68410
> > > > > [    2.631129] x25: 0000000000000001 x24: 00000000f7b00000
> > > > > [    2.636583] x23: 0000000000000000 x22: 0000000000000000
> > > > > [    2.642017] x21: ffff800011b0d000 x20: ffff80001179b548
> > > > > [    2.647461] x19: ffff000037b68410 x18: 0000000000000010
> > > > > [    2.652905] x17: ffff000035d66a00 x16: 00000000deadbeef
> > > > > [    2.658348] x15: ffffffffffffffff x14: ffff80001179b548
> > > > > [    2.663792] x13: ffff800091db37b7 x12: ffff800011db37bf
> > > > > [    2.669236] x11: ffff8000117c7000 x10: ffff800011db3740
> > > > > [    2.674680] x9 : 00000000ffffffd0 x8 : ffff80001071e980
> > > > > [    2.680124] x7 : 0000000000000132 x6 : ffff80001197a6ab
> > > > > [    2.685568] x5 : 0000000000000000 x4 : 0000000000000000
> > > > > [    2.691012] x3 : 00000000f7b00000 x2 : fffffdffffe00000
> > > > > [    2.696465] x1 : 000000000026c340 x0 : 000002000046c340
> > > > > [    2.701899] Call trace:
> > > > > [    2.704452]  xen_swiotlb_free_coherent+0x198/0x1d8
> > > > > [    2.709367]  dma_free_attrs+0x98/0xd0
> > > > > [    2.713143]  rpi_firmware_property_list+0x1e4/0x240
> > > > > [    2.718146]  rpi_firmware_property+0x6c/0xb0
> > > > > [    2.722535]  rpi_firmware_probe+0xf0/0x1e0
> > > > > [    2.726760]  platform_drv_probe+0x50/0xa0
> > > > > [    2.730879]  really_probe+0xd8/0x438
> > > > > [    2.734567]  driver_probe_device+0xdc/0x130
> > > > > [    2.738870]  __device_attach_driver+0x88/0x108
> > > > > [    2.743434]  bus_for_each_drv+0x78/0xc8
> > > > > [    2.747386]  __device_attach+0xd4/0x158
> > > > > [    2.751337]  device_initial_probe+0x10/0x18
> > > > > [    2.755649]  bus_probe_device+0x90/0x98
> > > > > [    2.759590]  deferred_probe_work_func+0x88/0xd8
> > > > > [    2.764244]  process_one_work+0x1f0/0x3c0
> > > > > [    2.768369]  worker_thread+0x138/0x570
> > > > > [    2.772234]  kthread+0x118/0x120
> > > > > [    2.775571]  ret_from_fork+0x10/0x18
> > > > > [    2.779263] Code: d34cfc00 f2dfbfe2 d37ae400 8b020001 (f8626800)
> > > > > [    2.785492] ---[ end trace 4c435212e349f45f ]---
> > > > > [    2.793340] usb 1-1: New USB device found, idVendor=2109, idProduct=3431, bcdDevice= 4.20
> > > > > [    2.801038] usb 1-1: New USB device strings: Mfr=0, Product=1, SerialNumber=0
> > > > > [    2.808297] usb 1-1: Product: USB2.0 Hub
> > > > > [    2.813710] hub 1-1:1.0: USB hub found
> > > > > [    2.817117] hub 1-1:1.0: 4 ports detected
> > > > > 
> > > > > This is bailing out right here:
> > > > >       https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/firmware/raspberrypi.c?h=v5.6.1#n125
> > > > > 
> > > > > FWIW (since I modified the source to actually print what was returned
> > > > > right before it bails) we get:
> > > > >     buf[1] == 0x800000004
> > > > >     buf[2] == 0x00000001
> > > > > 
> > > > > Status 0x800000004 is of course suspicious since it is not even listed
> > > > > here:
> > > > >      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/include/soc/bcm2835/raspberrypi-firmware.h#n14
> > > > > 
> > > > > So it appears that this DMA request path is somehow busted and it
> > > > > would be really nice to figure out why.
> > > > 
> > > > You have actually discovered a genuine bug in the recent xen dma rework
> > > > in Linux. Congrats :-)
> > > 
> > > Nice! ;-)
> > > 
> > > > I am doing some guesswork here, but from what I read in the thread and
> > > > the information in this email I think this patch might fix the issue.
> > > > If it doesn't fix the issue please add a few printks in
> > > > drivers/xen/swiotlb-xen.c:xen_swiotlb_free_coherent and please let me
> > > > know where exactly it crashes.
> > > > 
> > > > 
> > > > diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
> > > > index b9cc11e887ed..ff4677ed9788 100644
> > > > --- a/include/xen/arm/page-coherent.h
> > > > +++ b/include/xen/arm/page-coherent.h
> > > > @@ -8,12 +8,17 @@
> > > >   static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
> > > >                  dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
> > > >   {
> > > > +       void *cpu_addr;
> > > > +       if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle, &cpu_addr))
> > > > +               return cpu_addr;
> > > >          return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
> > > >   }
> > > > 
> > > >   static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
> > > >                  void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
> > > >   {
> > > > +       if (dma_release_from_dev_coherent(hwdev, get_order(size), cpu_addr))
> > > > +               return;
> > > >          dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
> > > >   }
> > > 
> > > Applied the patch, but it didn't help and after printk's it turns out
> > > it surprisingly crashes right inside this (rather convoluted if you
> > > ask me) if statement:
> > >      https://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git/tree/drivers/xen/swiotlb-xen.c?h=v5.6.1#n349
> > > 
> > > So it makes sense that the patch didn't help -- we never hit that
> > > xen_free_coherent_pages.
> >   The crash happens here:
> > 
> > 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> > 		     range_straddles_page_boundary(phys, size)) &&
> > 	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> > 		xen_destroy_contiguous_region(phys, order);
> > 
> > I don't know exactly what is causing the crash. Is it the WARN_ON somehow?
> > Is it TestClearPageXenRemapped? Neither should cause a crash in theory.
> > 
> > 
> > But I do know that there are problems with that if statement on ARM. It
> > can trigger for one of the following conditions:
> > 
> > 1) dev_addr + size - 1 > dma_mask
> > 2) range_straddles_page_boundary(phys, size)
> > 
> > 
> > The first condition might happen after bef4d2037d214 because
> > dma_direct_alloc might not respect the device dma_mask. It is actually a
> > bug and I would like to keep the WARN_ON for that. The patch I sent
> > yesterday (https://marc.info/?l=xen-devel&m=158865080224504) should
> > solve that issue. But Roman is telling us that the crash still persists.
> > 
> > The second condition is completely normal and not an error on ARM
> > because dom0 is 1:1 mapped. It is not an issue if the address range is
> > straddling a page boundary. We certainly shouldn't WARN (or crash).
> > 
> > So, I suggest something similar to Peng's patch, appended.
> > 
> > Roman, does it solve your problem?
> > 
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index b6d27762c6f8..994ca3a4b653 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -346,9 +346,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
> >   	/* Convert the size to actually allocated. */
> >   	size = 1UL << (order + XEN_PAGE_SHIFT);
> >  
> > -	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> > -		     range_straddles_page_boundary(phys, size)) &&
> > -	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> > +	WARN_ON(dev_addr + size - 1 > dma_mask);
> > +	if (TestClearPageXenRemapped(virt_to_page(vaddr)))
> >   		xen_destroy_contiguous_region(phys, order);
> 
> This is a very bad idea for x86. Calling xen_destroy_contiguous_region()
> for cases where it shouldn't be called is causing subtle bugs in memory
> management, like pages being visible twice and thus resulting in weird
> memory inconsistencies.

I understand what you are saying but there is one thing I am missing:
on x86 how could it happen that either:

  dev_addr + size - 1 > dma_mask

or
  range_straddles_page_boundary(phys, size)

evaluates to true? It shouldn't, right? If it does, it is an unexpected
error condition, is that right?

If that is the case, does it even make sense to continue at all?
Wouldn't it be better to do something like:

 
#ifdef CONFIG_X86
	if (WARN_ON((dev_addr + size - 1 > dma_mask) ||
		    range_straddles_page_boundary(phys, size)))
		return;
#endif

	if (TestClearPageXenRemapped(virt_to_page(vaddr)))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);


Or do you think that xen_free_coherent_pages should actually be called
anyway?
--8323329-1633917079-1588785724=:14706--


From xen-devel-bounces@lists.xenproject.org Wed May 06 17:34:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 17:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWNwb-0007Xe-Dz; Wed, 06 May 2020 17:34:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZA5M=6U=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jWNwa-0007XZ-7U
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 17:34:52 +0000
X-Inumbo-ID: e3d6f9d8-8fbf-11ea-9e9c-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e3d6f9d8-8fbf-11ea-9e9c-12813bfff9fa;
 Wed, 06 May 2020 17:34:51 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 907A3208DB;
 Wed,  6 May 2020 17:34:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588786491;
 bh=l2feMkeFS934VyyRnn8p8wtcjjDqVIr8WQ/8tuUm/yA=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=sKy0N27nidrypjU5T3up89YU8vhjekMOOnsBeD4d35nR6Xpm2yXIK7EBLZ7J6UE3o
 7xHPc9LG9XFmgjnvAmKVm5E012Bn/np5iJiYFzQFJp9s2er3Dve2977boP2tTHKCkY
 Lylr6zSC73KaXZ4IgFS/Ju+NSbJuULKqv0gpkgUU=
Date: Wed, 6 May 2020 10:34:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Nataliya Korovkina <malus.brandywine@gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>,
 Peng Fan <peng.fan@nxp.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, "minyard@acm.org" <minyard@acm.org>,
 Roman Shaposhnik <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 6 May 2020, Nataliya Korovkina wrote:
> On Wed, May 6, 2020 at 9:43 AM Boris Ostrovsky
> <boris.ostrovsky@oracle.com> wrote:
> >
> >
> > On 5/6/20 9:08 AM, Nataliya Korovkina wrote:
> > > Hello,
> > >
> > > What I found out: rpi_firmware_property_list() allocates memory from
> > > dma_atomic_pool which was mapped to VMALLOC region, so virt_to_page()
> > > is not eligible in this case.
> >
> >
> > So then it seems it didn't go through xen_swiotlb_alloc_coherent(). In
> > which case it has no business calling xen_swiotlb_free_coherent().
> >
> >
> > -boris
> >
> >
> >
> >
> 
> It does go.
> dma_alloc_coherent() indirectly calls xen_swiotlb_alloc_coherent(),
> then  xen_alloc_coherent_pages() eventually calls arch_dma_alloc() in
> remap.c which successfully allocates pages from atomic pool.
> 
> The patch Julien offered for domain_build.c moved the Dom0 banks into the
> first GB of RAM.
> So it masked the previous symptom (a crash during allocation) because
> now we avoid the pathway where we mark a page "XenMapped".
> 
> But the symptom still remains in xen_swiotlb_free_coherent() because
> "TestPage..." is called unconditionally. virt_to_page() is not
> applicable to such allocations.

Ah! So this is the crux of the issue. I saw this kind of problem before
on ARM32 (in fact there are several comments warning not to use
virt_to_phys on ARM in drivers/xen/swiotlb-xen.c).


So, to recap we have 2 issues as far as I can tell:

- virt_to_page not working in some cases on ARM, leading to a crash
- WARN_ON for range_straddles_page_boundary which is normal on ARM

The appended patch addresses them by:

- using pfn_to_page instead of virt_to_page
- moving the WARN_ON under a #ifdef (Juergen might have a better
  suggestion on how to rework the WARN_ON)

Please let me know if this patch works!

Cheers,

Stefano


diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d27762c6f8..0a40ac332a4c 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -322,7 +322,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
 			return NULL;
 		}
-		SetPageXenRemapped(virt_to_page(ret));
+		SetPageXenRemapped(pfn_to_page(PFN_DOWN(phys)));
 	}
 	memset(ret, 0, size);
 	return ret;
@@ -346,9 +346,14 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
-	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
-		     range_straddles_page_boundary(phys, size)) &&
-	    TestClearPageXenRemapped(virt_to_page(vaddr)))
+#ifdef CONFIG_X86
+	if (WARN_ON((dev_addr + size - 1 > dma_mask) ||
+		    range_straddles_page_boundary(phys, size))) {
+		return;
+	}
+#endif
+
+	if (TestClearPageXenRemapped(pfn_to_page(PFN_DOWN(phys))))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);


From xen-devel-bounces@lists.xenproject.org Wed May 06 17:36:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 17:36:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWNxe-0007dv-S5; Wed, 06 May 2020 17:35:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U6FA=6U=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jWNxd-0007di-Jt
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 17:35:57 +0000
X-Inumbo-ID: 0a97fb62-8fc0-11ea-ae69-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a97fb62-8fc0-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 17:35:56 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 046HXi8s081802;
 Wed, 6 May 2020 17:35:36 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=FIwZ3rzBCAfZY5z+8UcntK8KqaM2MzbinvvrYfNtQlo=;
 b=hBB0lWSd2/EeLg2syIL++TEafHWUJvuKc78kZDvjD+T/e+UTFB1lDXU58iJ8Yaz1c/cw
 xH3JQWYF7MZZSE5CqAl85PAePJou8L42NeTkRCPeG8NhcnVmfl5/7XEH8DpbHLFvyuy5
 IIw3j1m/7WSdxcTAIAO8EHV5XHMJyrqOGlk16XyB6zc85Ol8MI0PSmFW1szGxop8MupM
 uBtch6ZO9YoCXJx29v89IR1jWG/F8HfnfNih2lsieedvsXVZh6c3ucDx1KYpzeK2kS9W
 qYB3S8jq5blnjEw9KovxY7UihChZRyJHrn+rtJE/ANKb0qOZejD/QmlJnmwfdsaG8psh wQ== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 30s1gnbmfn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 06 May 2020 17:35:36 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 046HZYHM090814;
 Wed, 6 May 2020 17:35:35 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 30sjnk55v4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 06 May 2020 17:35:35 +0000
Received: from abhmp0018.oracle.com (abhmp0018.oracle.com [141.146.116.24])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 046HZLwX031315;
 Wed, 6 May 2020 17:35:22 GMT
Received: from [10.39.253.250] (/10.39.253.250)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 06 May 2020 10:35:21 -0700
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Nataliya Korovkina <malus.brandywine@gmail.com>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <a802a0d5-3ae3-97f5-af58-2e58123fec22@oracle.com>
Date: Wed, 6 May 2020 13:35:19 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.4.1
MIME-Version: 1.0
In-Reply-To: <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9613
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 adultscore=0 phishscore=0
 mlxlogscore=999 bulkscore=0 malwarescore=0 spamscore=0 suspectscore=2
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005060142
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9613
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 suspectscore=2 mlxscore=0
 spamscore=0 clxscore=1015 priorityscore=1501 bulkscore=0 phishscore=0
 impostorscore=0 malwarescore=0 lowpriorityscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005060142
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Peng Fan <peng.fan@nxp.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, "minyard@acm.org" <minyard@acm.org>,
 Roman Shaposhnik <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 5/6/20 12:14 PM, Nataliya Korovkina wrote:
> On Wed, May 6, 2020 at 9:43 AM Boris Ostrovsky
> <boris.ostrovsky@oracle.com> wrote:
>>
>> On 5/6/20 9:08 AM, Nataliya Korovkina wrote:
>>> Hello,
>>>
>>> What I found out: rpi_firmware_property_list() allocates memory from
>>> dma_atomic_pool which was mapped to VMALLOC region, so virt_to_page()
>>> is not eligible in this case.
>>
>> So then it seems it didn't go through xen_swiotlb_alloc_coherent(). In
>> which case it has no business calling xen_swiotlb_free_coherent().
>>
>>
>> -boris
>>
>>
>>
>>
> It does go.
> dma_alloc_coherent() indirectly calls xen_swiotlb_alloc_coherent(),
> then  xen_alloc_coherent_pages() eventually calls arch_dma_alloc() in
> remap.c which successfully allocates pages from atomic pool.


Yes, I was looking at x86's implementation of xen_alloc_coherent_pages().


>
> The patch Julien offered for domain_build.c moved the Dom0 banks into the
> first GB of RAM.
> So it masked the previous symptom (a crash during allocation) because
> now we avoid the pathway where we mark a page "XenMapped".
>
> But the symptom still remains in xen_swiotlb_free_coherent() because
> "TestPage..." is called unconditionally. virt_to_page() is not
> applicable to such allocations.


Perhaps we just need to make sure we are using the right virt-to-page
method. Something like


diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d2776..f224e69 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
        int order = get_order(size);
        phys_addr_t phys;
        u64 dma_mask = DMA_BIT_MASK(32);
+       struct page *pg;
 
        if (hwdev && hwdev->coherent_dma_mask)
                dma_mask = hwdev->coherent_dma_mask;
@@ -346,9 +347,12 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
        /* Convert the size to actually allocated. */
        size = 1UL << (order + XEN_PAGE_SHIFT);
 
+       pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) : virt_to_page(vaddr);
+       BUG_ON(!pg);
+
        if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
                     range_straddles_page_boundary(phys, size)) &&
-           TestClearPageXenRemapped(virt_to_page(vaddr)))
+           TestClearPageXenRemapped(pg))
                xen_destroy_contiguous_region(phys, order);
 
        xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);


(I have not tested this at all)



From xen-devel-bounces@lists.xenproject.org Wed May 06 17:39:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 17:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWO0u-0007oD-BA; Wed, 06 May 2020 17:39:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wdQ7=6U=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWO0s-0007o8-Tz
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 17:39:18 +0000
X-Inumbo-ID: 82354b34-8fc0-11ea-b9cf-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82354b34-8fc0-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 17:39:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588786758;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=gNJeaSFtUXhaF0WRCgtPP9MBU8Vk1oM89jkeKYoBEos=;
 b=P4E1YHuRXgEhfm7XCzkY8rf7xs5QAdlwMNXNzcUeXOF52mTx0mQF0fyT
 0jC5dIH5O8c1Z/nXQbh+XX46E2sJlZVQhN1urA2qsrGA4Kaa6ORzd2w11
 ed3DYyWl6piY0q0s21i7Un5wJR4ACldU+wajB9ttjN/EJEOBHv8vzpV6e g=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: 8gYCjedZNdxxXMvt0NqJrUQU58Iqm/Sg3jwZ+y5KHNvDl6oB/ct5azkOj4LVk8mA0PoF+lJWDO
 X9/uzf4UkYdsQVxj1M58GoCjP79G/f6TJyEF9J8p32URiSr69vAl9wiZpCLJCnJp39drDfdlSo
 uhXnvM3JOwf2xKshIpds7ZSJqeeHqLTwK7HJceuqzYHDsfSZfXS8nFtryLjKDmwlSA+vxXu8WN
 BKMTml6Z9UnER+NT+RCc+E78/gxkzHN7gYDTMFqzO2djCCHtKoISmv9tbozO7x9e24gZOfzK/G
 Xi8=
X-SBRS: 2.7
X-MesageID: 17157782
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,360,1583211600"; d="scan'208";a="17157782"
Subject: Re: [PATCH] x86/svm: Use flush-by-asid when available
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200505182539.12247-1-andrew.cooper3@citrix.com>
 <20200506070852.GE1353@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <630df298-1647-ad66-047e-5c98e1ff1546@citrix.com>
Date: Wed, 6 May 2020 18:39:02 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200506070852.GE1353@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06/05/2020 08:08, Roger Pau Monné wrote:
> On Tue, May 05, 2020 at 07:25:39PM +0100, Andrew Cooper wrote:
>> AMD Fam15h processors introduced the flush-by-asid feature, for
>> finer-grained flushing purposes.
>>
>> Flushing everything including ASID 0 (i.e. Xen context) is an unnecessarily
>> large hammer, and never necessary in the context of guest TLBs needing
>> invalidating.
>>
>> When available, use TLB_CTRL_FLUSH_ASID in preference to TLB_CTRL_FLUSH_ALL.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
> I should also look into restricting the usage of FLUSH_HVM_ASID_CORE
> and instead perform more fine grained per-vCPU flushes when possible,
> since FLUSH_HVM_ASID_CORE resets the pCPU ASID generation forcing a
> new ASID to be allocated for all vCPUs running on that pCPU.
>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>
>> The APM currently describes tlb_control encoding 1 as "Flush entire
>> TLB (Should be used only by legacy hypervisors.)".  AMD have agreed that this
>> is misleading and should say "legacy hardware" instead.
> AFAICT it hasn't been established that using TLB_CTRL_FLUSH_ASID on
> hardware not supporting the feature is safe? Ie: TLB_CTRL_FLUSH_ALL is a
> subset of TLB_CTRL_FLUSH_ASID from a bitmap PoV, so if those bits were
> ignored on older hardware it should be safe to use it unconditionally.

So as far as I can tell, TLB_CTRL_FLUSH_ASID is safe to use on older
hardware, but I was told in no uncertain terms by an AMD architect that
we can't rely on that.

Hence this patch not being a simple s/TLB_CTRL_FLUSH_ALL/TLB_CTRL_FLUSH_ASID/
in asid.c.

>
>> This change does come with a minor observed perf improvement on Fam17h
>> hardware, of ~0.6s over ~22s for my XTF pagewalk test.  This test will spot
>> TLB flushing issues, but isn't optimal for spotting the perf increase from
>> better flushing.  There were no observed differences for Fam15h, but this
>> could simply mean that the measured code footprint was larger than the TLB on
>> this CPU.
>> ---
>>  xen/arch/x86/hvm/svm/asid.c       | 9 ++++++---
>>  xen/include/asm-x86/hvm/svm/svm.h | 1 +
>>  2 files changed, 7 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/svm/asid.c b/xen/arch/x86/hvm/svm/asid.c
>> index 9be90058c7..ab06dd3f3a 100644
>> --- a/xen/arch/x86/hvm/svm/asid.c
>> +++ b/xen/arch/x86/hvm/svm/asid.c
>> @@ -18,6 +18,7 @@
>>  #include <asm/amd.h>
>>  #include <asm/hvm/nestedhvm.h>
>>  #include <asm/hvm/svm/asid.h>
>> +#include <asm/hvm/svm/svm.h>
>>  
>>  void svm_asid_init(const struct cpuinfo_x86 *c)
>>  {
>> @@ -47,15 +48,17 @@ void svm_asid_handle_vmrun(void)
>>      if ( p_asid->asid == 0 )
>>      {
>>          vmcb_set_guest_asid(vmcb, 1);
>> -        /* TODO: investigate using TLB_CTRL_FLUSH_ASID here instead. */
>> -        vmcb->tlb_control = TLB_CTRL_FLUSH_ALL;
>> +        vmcb->tlb_control =
>> +            cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID : TLB_CTRL_FLUSH_ALL;
>>          return;
>>      }
>>  
>>      if ( vmcb_get_guest_asid(vmcb) != p_asid->asid )
>>          vmcb_set_guest_asid(vmcb, p_asid->asid);
>>  
>> -    vmcb->tlb_control = need_flush ? TLB_CTRL_FLUSH_ALL : TLB_CTRL_NO_FLUSH;
>> +    vmcb->tlb_control =
>> +        !need_flush ? TLB_CTRL_NO_FLUSH :
>> +        cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID : TLB_CTRL_FLUSH_ALL;
> Since this code structure is used in two places I would consider
> locally introducing something like:
>
> #define TLB_CTRL_FLUSH_CMD (cpu_has_svm_flushbyasid ? TLB_CTRL_FLUSH_ASID \
>                                                     : TLB_CTRL_FLUSH_ALL)
>
> To abstract it away.

Right, but TLB_CTRL_FLUSH_CMD is easy to confuse with a constant in the same
namespace as TLB_CTRL_FLUSH_*, and the logic isn't going to survive a
conversion to finer-grained flushing in exactly this form.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 06 17:43:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 17:43:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWO4o-00009g-Sy; Wed, 06 May 2020 17:43:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1k+i=6U=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1jWO4n-00009b-1v
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 17:43:21 +0000
X-Inumbo-ID: 0fb8c4b8-8fc1-11ea-b07b-bc764e2007e4
Received: from NAM04-CO1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe4d::606])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fb8c4b8-8fc1-11ea-b07b-bc764e2007e4;
 Wed, 06 May 2020 17:43:14 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Un9SsL4KemWwktdneznDc1S5sXAJEnxeD8dfjQ0dpX6HvAgOufDEpnEptpApfL1jIgAY6oERLk93mf3JpfTumKArH9UE42Datp5vxT/a/8CIrtxfscI1HplknNFuCs+k242hKkT+VwYO3gR+Z2YKP80dTRKxIQTc4Y8vquIN2a6TQyFDBd91IiZTIA3MVRob+Qn3IaTZDXV5QYu9yb3Lr9tGGZJ+xKbOiVcTkvBmupicVGWiSl2QKehgyg5R6CuovvpWkPKdZ+Gggklq6LfFp2bl6jCkeGXna0kdavfojlKdBOFdS01omSyAjF+CxiUKyG2pFNUXEjt8xkHYt3VIdg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/mzZkmMA7oaaKbkudavSrrrfxrNdSmUWpy3H2kpSAVo=;
 b=fwyO2AB2Q/oR+L9knpLqbsUPqoq+5Fivli4CSXPXyGW3Kbrb3kXLqZbS3kQioka8vfzAKZpFbldGdcRfhcYYPtxNvJp7qCCIAxxLCtVnVNOjP20aZj/q2n7HWJofHSC7YBuEgKSjb3Y/htDXu3rV6zpZ/sOLocbOWx+LRRYyUSY61rMUcnmE53r3XTTytvibYTcN3m025agS+W357ajFTXYvEczWsDPf5AI2E7u9DDUArbs+Jh1HCw5ghcYJOGDqa/cre2Wgx0cAa6Emumkjx3Spwi4bJqabID6yX2uE44E+w6rOsqFrLWgpSlEp6TSLAZKPWlbpLBZHQaNfQRC/TQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=cardoe.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/mzZkmMA7oaaKbkudavSrrrfxrNdSmUWpy3H2kpSAVo=;
 b=Y2SN3GN/ExUH4WX04nkFVDlmB+1Jtw+kP8aeZ2z8JPc8yEL1nht518FZwHZ5j7U/LMsgZMO7e16n5Xp9RxLqxkAlbrDZBaUuecGWMX2y9+BhjuB7ZQHq3KvSj/Diqiux4V9U9PuMUXLcQ/tI4HkymrNScFV7Jdcg0Nh2Re9DXWQ=
Received: from BL0PR02CA0005.namprd02.prod.outlook.com (2603:10b6:207:3c::18)
 by MWHPR02MB2399.namprd02.prod.outlook.com (2603:10b6:300:5c::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.29; Wed, 6 May
 2020 17:43:12 +0000
Received: from BL2NAM02FT034.eop-nam02.prod.protection.outlook.com
 (2603:10b6:207:3c:cafe::9c) by BL0PR02CA0005.outlook.office365.com
 (2603:10b6:207:3c::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.28 via Frontend
 Transport; Wed, 6 May 2020 17:43:12 +0000
Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; cardoe.com; dkim=none (message not signed)
 header.d=none;cardoe.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 BL2NAM02FT034.mail.protection.outlook.com (10.152.77.161) with Microsoft SMTP
 Server id 15.20.2958.27 via Frontend Transport; Wed, 6 May 2020 17:43:12
 +0000
Received: from [149.199.38.66] (port=39372 helo=xsj-pvapsmtp01)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jWO4V-0002KV-CB; Wed, 06 May 2020 10:43:03 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jWO4d-0004W7-Rm; Wed, 06 May 2020 10:43:11 -0700
Received: from [172.19.2.220] (helo=localhost)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <stefanos@xilinx.com>)
 id 1jWO4a-0004VR-6f; Wed, 06 May 2020 10:43:08 -0700
Date: Wed, 6 May 2020 10:43:07 -0700 (PDT)
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: cardoe@cardoe.com, wl@xen.org
Subject: FuSa (temporary) repository for Xen safety documents
Message-ID: <alpine.DEB.2.21.2005051552410.14706@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(6029001)(7916004)(376002)(346002)(396003)(136003)(39860400002)(46966005)(33430700001)(70206006)(81166007)(336012)(8676002)(107886003)(47076004)(44832011)(8936002)(316002)(9686003)(70586007)(426003)(4326008)(9786002)(2906002)(82740400003)(33716001)(4744005)(82310400002)(33440700001)(356005)(26005)(478600001)(966005)(186003)(5660300002);
 DIR:OUT; SFP:1101; 
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 69bed9f1-091b-4d4c-8c04-08d7f1e4f2ad
X-MS-TrafficTypeDiagnostic: MWHPR02MB2399:
X-Microsoft-Antispam-PRVS: <MWHPR02MB2399A63D0BB692714B3F737BA0A40@MWHPR02MB2399.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-Forefront-PRVS: 03950F25EC
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: rfQX1sQLrd9JLkWeqSFp6pxtBIOAfauMpGEYDL/ol93ka2fFCliaEO2iEOiJ1RzN+KWpfLZp2M6/7frFknDt+CB3E1OWcEZR84t+bALpDlgW5l4bBHPn/gcvnk2mc4ElJXh0zFYlbcVWkue8nhe2d1MjH77SixcY7ELUdrVmPfb+565GufDVkog4JYFTBIKwZen4GRUavQGZaNvuN3vT9zX+m4hbahzBTdcGyNBaE/Vo1Mi4Xi413kDPzeRNY0YCA2HWg7u//y1Z5WfJY7hejb/IembvMnGoz1vP4zAGTmR80T3OL94wOdgv2+MEljFjbrmu9DajkNrvIQ1iJodbmDL723OZzn31FxKbAgf8TMUDIddp2NpssKxzI3Af8upEVfLbDHIXU/TgQi0bD+YvPF4QyW9WhmqM1KzxQwYOKmRk8dIrAxS7+0WheMILcb05f4xFDtISeOakCgy8u2UjEHdH2M26cN9U2g6itDGF9+q0JUM7+0PWoUgD/wah/58ts9m84XAfZTXZZJZQxK2mAeUEJJwFozX7u6W7rXPCPNyYF8TO1bygb6z5jqjoPn28NrS/Bp8E9c4RP8+NEopw6L3dBL8upZjcP/+TBiFcJYQGsSiNclHmMKt3nlzzttCcJwfZ7ZNZWolRDq4lYT+0zAnjeIA0lBnO4sKQb+lhS34=
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2020 17:43:12.2421 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 69bed9f1-091b-4d4c-8c04-08d7f1e4f2ad
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR02MB2399
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, stefano.stabellini@xilinx.com,
 George.Dunlap@citrix.com, nathalie@xilinx.com, fusa-sig@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

I would like to let you know that I plan to create a FuSa.git repository
under:

https://gitlab.com/xen-project

to host the work-in-progress Xen safety certification documents.

The goal is to have those documents under xen.git, as they need to be
linked to corresponding source code, but we opted for having a separate
repository in the short term to allow easier access and contributions
from safety experts that are not familiar with the git-send-email
workflow.

Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Wed May 06 18:25:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 18:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWOjK-0003UM-6g; Wed, 06 May 2020 18:25:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RiaG=6U=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jWOjI-0003UE-Qr
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 18:25:12 +0000
X-Inumbo-ID: ec357f26-8fc6-11ea-ae69-bc764e2007e4
Received: from mail-ed1-x534.google.com (unknown [2a00:1450:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec357f26-8fc6-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 18:25:12 +0000 (UTC)
Received: by mail-ed1-x534.google.com with SMTP id k22so2866499eds.6
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 11:25:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:from:date:message-id:subject:to:cc;
 bh=oasTas5SRun+aE61FbdyG8rxUBcdLQkbJe3EILiKuFA=;
 b=OWWI0gJSHVXEniK2Ck3fD+cZzXy/D6U6ajZdJvVyeYgBLDNium4fuqly8EKmlPcwUm
 UdISgKXSF+urLKFpN5n45G7eAz1ABEeBKd/aGVbeI519WdIVaDXOxT5wYYtqx+Tka3Qp
 oAnw8Dm+CDfsxCc7vqnsildX+AcxYXyfBn4qHeYhcaxhYKrWgt0SDhE+70Zb2cvRRU4P
 PgVWCLPvT3uFkij9ask+wZHJe/8N6ehE5xwjq0uhg3QSyyP9zmur/e1a8GXUGRlPIu8Z
 TaaGhjsNBweCpdQMjLZMU4yv6uX4qS1q790n/rf9IjKKH4GYgkWo54gBfgBv2sEaDLXF
 I32g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
 bh=oasTas5SRun+aE61FbdyG8rxUBcdLQkbJe3EILiKuFA=;
 b=uPogLHpmuvXVXVcYrG6FHRL4cDc/MspfWHr8j9S3yDmfVzI/MF9e9dbjxl6Z0kaHnA
 YHWU5WtqdYQGKnP8RfgyRoX2gwYQNxuX//MCvIgZJnJl6aEOoPwAuMbNE4fKy2dFMLJj
 ccB7kLVYznRFt0UhHsTnqyz1AP+vkj+rTfg6LsMSR74wlBE8jOoSznmCiiBc5QnbO1vC
 x9y3wQ4Aiuu+ycyDyX39N/srF8gY4KdidpnE5uS9JAToNz7ZQMwQD3wa7jV3hwUVNA0K
 WLuH4jg6Xjy3jpvxEibmMr58Ut5GcYweSd6PpUlxVxFEwZcaWisntcOgMtVvia5X1hx2
 epgA==
X-Gm-Message-State: AGi0Puabkeejo9A5ML2lEDBAS2IAXz2wrNbkoq2Vi2PVafkgkkUdfrcJ
 HjmgrCMCtZt9dja8PHu55E1L0y/K4Ig=
X-Google-Smtp-Source: APiQypKyT+dVcCjn3awEHVi92/W8H1k47Hq5m0vZrmE2m1K2ji/qKHTlxPyo7QH6tLBgI7HKrO0nIw==
X-Received: by 2002:a05:6402:1052:: with SMTP id
 e18mr8595892edu.63.1588789511241; 
 Wed, 06 May 2020 11:25:11 -0700 (PDT)
Received: from mail-wm1-f41.google.com (mail-wm1-f41.google.com.
 [209.85.128.41])
 by smtp.gmail.com with ESMTPSA id co22sm283330edb.30.2020.05.06.11.25.10
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 06 May 2020 11:25:10 -0700 (PDT)
Received: by mail-wm1-f41.google.com with SMTP id v4so5571062wme.1
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 11:25:10 -0700 (PDT)
X-Received: by 2002:a7b:cdfa:: with SMTP id p26mr5732409wmj.186.1588789510571; 
 Wed, 06 May 2020 11:25:10 -0700 (PDT)
MIME-Version: 1.0
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 6 May 2020 12:24:34 -0600
X-Gmail-Original-Message-ID: <CABfawhkhL3e9zzpyTtU6N1fF6tPQ7BrF6cpRTu2EJntbjPia3Q@mail.gmail.com>
Message-ID: <CABfawhkhL3e9zzpyTtU6N1fF6tPQ7BrF6cpRTu2EJntbjPia3Q@mail.gmail.com>
Subject: QEMU-Xen build failure
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,
on a recent checkout of the Xen staging source I ran into the
following build error with QEMU upstream:

tools/qemu-xen-dir-remote/slirp/src/ip_input.c:330:5: error: ISO C90
forbids mixed declarations and code
[-Werror=declaration-after-statement]
     int delta = (char *)q - (m->m_flags & M_EXT ? m->m_ext : m->m_dat);

Tamas


From xen-devel-bounces@lists.xenproject.org Wed May 06 19:56:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 19:56:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWQ9V-0002EA-06; Wed, 06 May 2020 19:56:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWQ9U-0002E5-6O
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 19:56:20 +0000
X-Inumbo-ID: a6e9b880-8fd3-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6e9b880-8fd3-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 19:56:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1m2DBWu3rQmq8AJUSGlVynDHWrRyEDRFZu5EZ5vbwYM=; b=n7UBIE4Z72fowN9xa7HuQv7Ji
 ap9ZDHjZsGE6434j+HuhezPFODizSsfVX60OLPChzJXvbbSLaXW/TXjWgHY2OA6NL/EOKcIp8LYl1
 bsYJbU7BRIL4er47oMzxgFkkUvbulxj2Q/7Xxm1WQjeLvAkZlCBa98wbhUSAne59j/cMk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWQ9S-0004ec-PR; Wed, 06 May 2020 19:56:18 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWQ9S-0002wU-G3; Wed, 06 May 2020 19:56:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWQ9S-0008Nd-FX; Wed, 06 May 2020 19:56:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150059-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150059: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=8a6b1665d987d043c12dc723d758a7d2ca765264
X-Osstest-Versions-That: xen=b58dc6dbfa3a038c5a22f06861a7652da80eca28
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 19:56:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150059 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150059/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a6b1665d987d043c12dc723d758a7d2ca765264
baseline version:
 xen                  b58dc6dbfa3a038c5a22f06861a7652da80eca28

Last test of basis   150056  2020-05-06 10:00:41 Z    0 days
Testing same since   150059  2020-05-06 17:02:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b58dc6dbfa..8a6b1665d9  8a6b1665d987d043c12dc723d758a7d2ca765264 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 06 21:11:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 21:11:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWRJu-0000AF-KO; Wed, 06 May 2020 21:11:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=POZY=6U=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jWRJt-0000AA-BW
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 21:11:09 +0000
X-Inumbo-ID: 1ade0426-8fde-11ea-b9cf-bc764e2007e4
Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ade0426-8fde-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 21:11:08 +0000 (UTC)
Received: by mail-qv1-xf43.google.com with SMTP id v10so1592871qvr.2
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 14:11:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=YCKsd7vQTI2DrN3+bSBaktJZhRsZLMdEUxM/uGHPzSY=;
 b=dtVb/aesF3jYzY7SMImdHHXdPNLlzBUTlRJurxB1eelQ8Av5yBt9sTeBbHmIWbfANX
 U+TYQbXJ751gzf6WEGWkWlprH2iZ6BmAaC6YfmcBoYru67+8BGtqqCyX08x8Wi98zGNj
 SU3bW1ejIG2bkj01QyEMEfVwK2dpUQWIkX6u1oSyCbB153RgRqPqhUA0Cn1v9sGPR0O6
 UPROdSZ1LcRghtfBLl6XrcxDe6FsuyynAP27FnpQKRzPNgCL1b3T9Yfbujn81piaJELu
 vkXJfN9sd4jjFPFTp2/9l1vqTnYPTAllkCBBi3tm29R04BXsvVT40h+wgmJ19uz4gNZs
 V84w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=YCKsd7vQTI2DrN3+bSBaktJZhRsZLMdEUxM/uGHPzSY=;
 b=Vd9VhGsRAb1VAUL6bOYHUXtN7MH8LwrzcPsp2NOxVuBZzfV+D3MM4j594p0n1nvzZd
 YdDkNumq9IPR29gPaOgWdGCIzd7t4g4U5iRtWVdYAN/3D7sK0Gy1KiBk57eUIoAAbqCO
 uBmVGKpJCsBGRFEN3bs/9J7sqMGz5nvS7LSBn0otj6ZnYoAIa3Gbq3+UmwUO8M9HPjNY
 jCmzaBm+8a2gi2PHl11QQYOmtEJqHlkPid/k2IUSfDC8aJkSpOPb6A3cBnluYdgV/Ha7
 KgOgPKoPTkE8Gxnsob0WAd4LKVM9HEuoPSy50oBpqQ3rk0Bwl5J2YKAivsq8ueE9KuKh
 w8Ow==
X-Gm-Message-State: AGi0PubZ+FywEoJVp17RhM3BFZH8ym0lih2jk32oAg/S/Rtq5eT0MKYm
 Q9vJGdDXFBXZKM4ifxbJWCpK+6sj9nT2I1LoRcEPXw==
X-Google-Smtp-Source: APiQypKoJFNQ7KZcQOonxz5ICneu5FS6xlsQzr6mQUbkGKNetEAotUCUkFyVjcp5gsDstAHZpAsjl2IBm/vQmN/0GXs=
X-Received: by 2002:a05:6214:3e2:: with SMTP id
 cf2mr9520302qvb.193.1588799468059; 
 Wed, 06 May 2020 14:11:08 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <a802a0d5-3ae3-97f5-af58-2e58123fec22@oracle.com>
In-Reply-To: <a802a0d5-3ae3-97f5-af58-2e58123fec22@oracle.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 6 May 2020 14:10:56 -0700
Message-ID: <CAMmSBy_2o6tsE1fsu_9=h8erOd9yHV-+ZkduTGqa-Gw7ra3mVQ@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Peng Fan <peng.fan@nxp.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, "minyard@acm.org" <minyard@acm.org>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 6, 2020 at 10:36 AM Boris Ostrovsky
<boris.ostrovsky@oracle.com> wrote:
>
>
> On 5/6/20 12:14 PM, Nataliya Korovkina wrote:
> > On Wed, May 6, 2020 at 9:43 AM Boris Ostrovsky
> > <boris.ostrovsky@oracle.com> wrote:
> >>
> >> On 5/6/20 9:08 AM, Nataliya Korovkina wrote:
> >>> Hello,
> >>>
> >>> What I found out: rpi_firmware_property_list() allocates memory from
> >>> dma_atomic_pool which was mapped to VMALLOC region, so virt_to_page()
> >>> is not eligible in this case.
> >>
> >> So then it seems it didn't go through xen_swiotlb_alloc_coherent(). In
> >> which case it has no business calling xen_swiotlb_free_coherent().
> >>
> >>
> >> -boris
> >>
> >>
> >>
> >>
> > It does go through it.
> > dma_alloc_coherent() indirectly calls xen_swiotlb_alloc_coherent();
> > then xen_alloc_coherent_pages() eventually calls arch_dma_alloc() in
> > remap.c, which successfully allocates pages from the atomic pool.
>
>
> Yes, I was looking at x86's implementation of xen_alloc_coherent_pages().
>
>
> >
> > The patch Julien offered for domain_build.c moved the Dom0 banks into
> > the first GB of RAM.
> > So it masked the previous symptom (a crash during allocation) because
> > we now avoid the code path where we mark a page "XenMapped".
> >
> > But the symptom still remains in xen_swiotlb_free_coherent() because
> > "TestClearPage..." is called unconditionally, and virt_to_page() is
> > not applicable to such allocations.
>
>
> Perhaps we just need to make sure we are using the right virt-to-page
> method. Something like
>
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index b6d2776..f224e69 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>         int order = get_order(size);
>         phys_addr_t phys;
>         u64 dma_mask = DMA_BIT_MASK(32);
> +       struct page *pg;
>
>         if (hwdev && hwdev->coherent_dma_mask)
>                 dma_mask = hwdev->coherent_dma_mask;
> @@ -346,9 +347,12 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>         /* Convert the size to actually allocated. */
>         size = 1UL << (order + XEN_PAGE_SHIFT);
>
> +       pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
> +                                     virt_to_page(vaddr);
> +       BUG_ON(!pg);
> +
>         if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
>                      range_straddles_page_boundary(phys, size)) &&
> -           TestClearPageXenRemapped(virt_to_page(vaddr)))
> +           TestClearPageXenRemapped(pg))
>                 xen_destroy_contiguous_region(phys, order);
>
>         xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
>
>
> (I have not tested this at all)

That's where I come in ;-)

It appears that your patch gets us closest to a fully functional 5.6.x
kernel under Xen on RPi4.

Thank you so much for that!

However, here's an interesting side-effect I'm now observing: with your
patch plus the original patch from Stefano (the one that only touches
include/xen/arm/page-coherent.h) I can now boot my RPi4 into a pretty
functional system.

However, this only works if I allocate 512M (or less) of memory to
Dom0. If I try to go higher (820M, for example, or all the way to 1G),
I start getting:

[    3.195851] mmc0: unrecognised SCR structure version 7
[    3.200454] mmc0: error -22 whilst initialising SD card

and my SD card stays offline.

This is pretty reproducible, and I'm guessing it's still related to
some kind of DMA issue?

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed May 06 21:12:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 21:12:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWRKv-0000EK-UZ; Wed, 06 May 2020 21:12:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=POZY=6U=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jWRKv-0000EE-7C
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 21:12:13 +0000
X-Inumbo-ID: 41113ca8-8fde-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41113ca8-8fde-11ea-ae69-bc764e2007e4;
 Wed, 06 May 2020 21:12:12 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id v18so1581991qvx.9
 for <xen-devel@lists.xenproject.org>; Wed, 06 May 2020 14:12:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=v0M0geFf2CIFOdjuI4WtDFPTDw9uIxElYDkiHB5MMOY=;
 b=ip7SgBctBhYte9yPUa73ivSsmvvFI9de3s3qPGbObX60tts6VHu6qwiurwpt7hm2lY
 7hjE6Cl7rWz67Q19xfHfw+XW/JtV9LAPutuazimZWxV9aHjSKG+/xgz1Y2OMikRw399u
 Yk5qqrGzcAwufu1dKjr62NkGCIFmmtnZIbhZygIlLurHHzUZjPIxitwHQFIVaSKLO+mg
 slWvdEZJa1Rnd0YgDl6/Qba0bH53/pRrLvVmjLt0DGwRiOtZ2f07SUu4VK2f2YtzLvV1
 WvobQNEDxRvJfKtP5UGkQFa6vaWZabBFe9HuQ7RJlaLm0vFTeINQZ5dUkRjPCbYjBFFy
 SOug==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=v0M0geFf2CIFOdjuI4WtDFPTDw9uIxElYDkiHB5MMOY=;
 b=cYPaTEEtZLmHeALU/QxFY+OC2TCuimefLCvRcdixInA5q3Q3vRbVm9WDsTSUsZxmi1
 cH3M6FzmXGWN2OiJsjoyNdXEogfWju+g4u3HwbEAJbd51mPXTjRmvwcTYJbiFiPp7GYw
 sMTfqFwJR/8jZTYMQ8KmCsuPXOym9Ej2NXmqlOkPhuT6et0FPNWch2Wil3nRid9eqsIX
 /MF+EjsQmd0sKQr/GS+MSqaXTjO1g78bZezDI1WmSPis736ohlN2Shs5Y58Or8bf7VPB
 hpUv7BUfepKlkEZHDiR68HADilhTEilBvX5G+LAiCFt5jZiFBkOhblvDCM8VnmBrmEa0
 e2bA==
X-Gm-Message-State: AGi0Puam1gD8kOvIiX/gzl9V5QhRw67Z1b3hkKhBkE4eXyDa+saP/PIX
 Hr0Wz05Fcfd2qMZSlhFrzNJhV6Lvk8JANLGqGnHwfTU4VEM=
X-Google-Smtp-Source: APiQypKJ4ZEXM05uFEqDd94l5YhOHtYXStiuFx2ERrJkboRvaAXx5ZL++jF6mfzFEvQZHyFYlai2j2lZEncnoFIhrCk=
X-Received: by 2002:ad4:452d:: with SMTP id l13mr10263875qvu.19.1588799532193; 
 Wed, 06 May 2020 14:12:12 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200501114201.GE9902@minyard.net>
 <CAMmSBy_h9=5cjMv3+-BKHYhBikgna731DoA+t-8FK-2tbPUH-A@mail.gmail.com>
 <20200502021647.GG9902@minyard.net>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 6 May 2020 14:12:00 -0700
Message-ID: <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Peng Fan <peng.fan@nxp.com>, Julien Grall <julien@xen.org>,
 "minyard@acm.org" <minyard@acm.org>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 6, 2020 at 10:34 AM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> On Wed, 6 May 2020, Nataliya Korovkina wrote:
> > On Wed, May 6, 2020 at 9:43 AM Boris Ostrovsky
> > <boris.ostrovsky@oracle.com> wrote:
> > >
> > >
> > > On 5/6/20 9:08 AM, Nataliya Korovkina wrote:
> > > > Hello,
> > > >
> > > > What I found out: rpi_firmware_property_list() allocates memory from
> > > > the dma_atomic_pool, which is mapped into the VMALLOC region, so
> > > > virt_to_page() is not applicable in this case.
> > >
> > >
> > > So then it seems it didn't go through xen_swiotlb_alloc_coherent(). In
> > > which case it has no business calling xen_swiotlb_free_coherent().
> > >
> > >
> > > -boris
> > >
> >
> > It does go through it.
> > dma_alloc_coherent() indirectly calls xen_swiotlb_alloc_coherent();
> > then xen_alloc_coherent_pages() eventually calls arch_dma_alloc() in
> > remap.c, which successfully allocates pages from the atomic pool.
> >
> > The patch Julien offered for domain_build.c moved the Dom0 banks into
> > the first GB of RAM.
> > So it masked the previous symptom (a crash during allocation) because
> > we now avoid the code path where we mark a page "XenMapped".
> >
> > But the symptom still remains in xen_swiotlb_free_coherent() because
> > "TestClearPage..." is called unconditionally, and virt_to_page() is
> > not applicable to such allocations.
>
> Ah! So this is the crux of the issue. I saw this kind of problem before
> on ARM32 (in fact there are several comments warning not to use
> virt_to_phys on ARM in drivers/xen/swiotlb-xen.c).
>
>
> So, to recap we have 2 issues as far as I can tell:
>
> - virt_to_page not working in some cases on ARM, leading to a crash
> - WARN_ON for range_straddles_page_boundary which is normal on ARM
>
> The appended patch addresses them by:
>
> - using pfn_to_page instead of virt_to_page
> - moving the WARN_ON under a #ifdef (Juergen might have a better
>   suggestion on how to rework the WARN_ON)
>
> Please let me know if this patch works!
>
> Cheers,
>
> Stefano
>
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index b6d27762c6f8..0a40ac332a4c 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -322,7 +322,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>                         xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>                         return NULL;
>                 }
> -               SetPageXenRemapped(virt_to_page(ret));
> +               SetPageXenRemapped(pfn_to_page(PFN_DOWN(phys)));
>         }
>         memset(ret, 0, size);
>         return ret;
> @@ -346,9 +346,14 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>         /* Convert the size to actually allocated. */
>         size = 1UL << (order + XEN_PAGE_SHIFT);
>
> -       if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> -                    range_straddles_page_boundary(phys, size)) &&
> -           TestClearPageXenRemapped(virt_to_page(vaddr)))
> +#ifdef CONFIG_X86
> +       if (WARN_ON(dev_addr + size - 1 > dma_mask) ||
> +                   range_straddles_page_boundary(phys, size)) {
> +           return;
> +       }
> +#endif
> +
> +       if (TestClearPageXenRemapped(pfn_to_page(PFN_DOWN(phys))))
>                 xen_destroy_contiguous_region(phys, order);
>
>         xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);

Stefano, with your patch applied, I'm still getting:

[    0.590705] Unable to handle kernel paging request at virtual address fffffe0003700000

However, Boris's patch seems to get us much closer. It would be awesome
if you could take a look at that (plus the additional DMA issue that
seems to depend on how much memory I allocate to Dom0).

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed May 06 22:04:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 06 May 2020 22:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWS9T-0004M2-Md; Wed, 06 May 2020 22:04:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LxRp=6U=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWS9R-0004Lx-Py
 for xen-devel@lists.xenproject.org; Wed, 06 May 2020 22:04:25 +0000
X-Inumbo-ID: 8b612e38-8fe5-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b612e38-8fe5-11ea-b9cf-bc764e2007e4;
 Wed, 06 May 2020 22:04:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NyEFsybpj+HCW6xWbv+w/PODAJxcVFjh9seQEOBDMug=; b=Dkk2yqpJtpUKWx29txKXhZWh8
 JPV2+OWkrrb+vFeyWuRKUBrga3ukAuzz5Mc5QkhO/ZDzXqL3LIrVyjrfuzL2bWL+qqPQUXvWRUPwb
 FpxyAGW7g7DuySbk+W4xNF8rvkHFe3QAt4SzcvYOiRuaq/ypc92NRgp1FZgue8Ipwpqow=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWS9P-000777-Hr; Wed, 06 May 2020 22:04:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWS9P-0000pp-97; Wed, 06 May 2020 22:04:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWS9P-0001Iz-8U; Wed, 06 May 2020 22:04:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150043-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150043: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=f19d118bed77bb95681b07f4e76dbb700c16918d
X-Osstest-Versions-That: qemuu=2ef486e76d64436be90f7359a3071fb2a56ce835
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 06 May 2020 22:04:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150043 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150043/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 149911
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149911
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149911
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149911
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149911
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149911
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149911
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                f19d118bed77bb95681b07f4e76dbb700c16918d
baseline version:
 qemuu                2ef486e76d64436be90f7359a3071fb2a56ce835

Last test of basis   149911  2020-05-03 14:06:59 Z    3 days
Testing same since   150043  2020-05-05 16:07:08 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  Anthoine Bourgeois <anthoine.bourgeois@gmail.com>
  Chen Qun <kuhn.chenqun@huawei.com>
  Daniel Brodsky <dnbrdsky@gmail.com>
  David Gibson <david@gibson.dropbear.id.au>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Fredrik Strupe <fredrik@strupe.net>
  Gerd Hoffmann <kraxel@redhat.com>
  John Snow <jsnow@redhat.com>
  Julia Suvorova <jusual@redhat.com>
  Keith Busch <kbusch@kernel.org>
  Kwangwoo Lee <kwangwoo.lee@sk.com>
  Laurent Vivier <laurent@vivier.eu>
  Li Feng <fengli@smartx.com>
  Liran Alon <liran.alon@oracle.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Michael Walle <michael@walle.cc>
  Mikhail Gusarov <dottedmag@dottedmag.net>
  Peter Maydell <peter.maydell@linaro.org>
  Peter Turschmid <peter.turschm@nutanix.com>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Raphael Norwitz <raphael.norwitz@nutanix.com>
  Shameer Kolothum <shameerali.kolothum.thodi@huawei.com>
  Simran Singhal <singhalsimran0@gmail.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Wainer dos Santos Moschetta <wainersm@redhat.com>
  Yuval Shaia <yuval.shaia.ml@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   2ef486e76d..f19d118bed  f19d118bed77bb95681b07f4e76dbb700c16918d -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu May 07 05:17:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 05:17:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWYtl-000374-NW; Thu, 07 May 2020 05:16:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWYtk-00036z-CH
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 05:16:40 +0000
X-Inumbo-ID: ead86e44-9021-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ead86e44-9021-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 05:16:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=XcDd8nPExeM6quyvRC4yF+WRdx/0GjgoES9jDaQwu6I=; b=iTmLR92ETexAFjwETlELrOC1B
 rB47aPJ8nSSzjoy5KnoAbwrepAgqNOXz+58xyFn0FWkG1lCeW7wIDUC5KaJ1+Gi2+6zDR/bXk9klv
 NQxBaZ0TgL4gHhUjm8UGo/02N52rZw5DZgBmod5dGFDqiGEvP6gwtIAayzqJQCxL/r37Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWYtd-0005f7-FK; Thu, 07 May 2020 05:16:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWYtd-0000kS-4L; Thu, 07 May 2020 05:16:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWYtd-0004xq-3h; Thu, 07 May 2020 05:16:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150053-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150053: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=0756415f147dda15a417bd79eef9a62027d176e6
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 05:16:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150053 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150053/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              0756415f147dda15a417bd79eef9a62027d176e6
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  110 days
Failing since        146211  2020-01-18 04:18:52 Z  110 days  100 attempts
Testing same since   150053  2020-05-06 04:19:32 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17112 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 07 05:25:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 05:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWZ2N-0003xn-Ji; Thu, 07 May 2020 05:25:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWZ2M-0003xi-TX
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 05:25:34 +0000
X-Inumbo-ID: 2c927518-9023-11ea-9edf-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c927518-9023-11ea-9edf-12813bfff9fa;
 Thu, 07 May 2020 05:25:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZuD1ErAqNR2kCDx9Srm+tHDg3JUhN5r/9xJanYeWR+s=; b=fTa30iTI6cm68MzCtDnmPt2kA
 5Bsu7ApXn8UZe529lUVeT980nTGNfiR9KAUUb8Ay6F2832KFxpfXh9+859yPPDMUu8RZurgjazLko
 y6rhnBVPQYVKWDDWPGQYmzh+FX5lLhdY11YhMSm/WBDBS5ZpSfrsQIrsmlF2m/02RzSNQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWZ2L-0005oK-A6; Thu, 07 May 2020 05:25:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWZ2K-00010Y-KF; Thu, 07 May 2020 05:25:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWZ2K-0001XA-JU; Thu, 07 May 2020 05:25:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150050-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150050: all pass - PUSHED
X-Osstest-Versions-This: ovmf=f159102a130fac9b416418eb9b6fa35b63426ca5
X-Osstest-Versions-That: ovmf=de15e7c2651ada46cc649c5b3c8c0c145354ac04
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 05:25:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150050 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150050/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f159102a130fac9b416418eb9b6fa35b63426ca5
baseline version:
 ovmf                 de15e7c2651ada46cc649c5b3c8c0c145354ac04

Last test of basis   150045  2020-05-05 16:09:42 Z    1 days
Testing same since   150050  2020-05-05 20:40:09 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   de15e7c265..f159102a13  f159102a130fac9b416418eb9b6fa35b63426ca5 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 07 05:58:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 05:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWZYQ-0006Rc-5u; Thu, 07 May 2020 05:58:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWZYP-0006RX-8z
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 05:58:41 +0000
X-Inumbo-ID: c6dab8b6-9027-11ea-9edf-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6dab8b6-9027-11ea-9edf-12813bfff9fa;
 Thu, 07 May 2020 05:58:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yLZODkWlcA92lQKHd+ywtMInRR+W7kjCqDJ8zsD3xcs=; b=Sa6dIc6huWI4z8IvzSSMNlZk4
 70r/gE4HjgCBDXgE3xedQ2HvJHYur0Hri1u1RxskWrHiXL1av8lavZUNHThVYlSOwAgVdDPUstCDO
 XrsrVxlH8G+eyD/ZPIVXVJa73XSqDsdxqnFToii+sfECAlYoROOZ/VfjgcqxoJt7y2s0Q=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWZYD-0006Nm-U9; Thu, 07 May 2020 05:58:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWZYD-0002Y6-D3; Thu, 07 May 2020 05:58:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWZYD-0006da-C2; Thu, 07 May 2020 05:58:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150044-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150044: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:windows-install:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=47cf1b422e6093aee2a3e55d5e162112a2c69870
X-Osstest-Versions-That: linux=f66ed1ebbfde37631fba289f7c399eaa70632abf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 05:58:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150044 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150044/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64 10 windows-install   fail REGR. vs. 149906

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 149906
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 149906

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149906
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149906
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149906
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149906
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149906
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149906
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149906
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149906
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                47cf1b422e6093aee2a3e55d5e162112a2c69870
baseline version:
 linux                f66ed1ebbfde37631fba289f7c399eaa70632abf

Last test of basis   149906  2020-05-02 19:08:56 Z    4 days
Failing since        149912  2020-05-03 18:38:53 Z    3 days    2 attempts
Testing same since   150044  2020-05-05 16:08:58 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alan Stern <stern@rowland.harvard.edu>
  Arnd Bergmann <arnd@arndb.de>
  Artem Borisov <dedsa2002@gmail.com>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
  David Sterba <dsterba@suse.com>
  Dexuan Cui <decui@microsoft.com>
  Eric Biggers <ebiggers@google.com>
  Fabian Schindlatz <fabian.schindlatz@fau.de>
  Filipe Manana <fdmanana@suse.com>
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Geert Uytterhoeven <geert@linux-m68k.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gustavo A. R. Silva <gustavo@embeddedor.com>
  Hans de Goede <hdegoede@redhat.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jia He <justin.he@arm.com>
  Jiri Kosina <jkosina@suse.cz>
  Joerg Roedel <jroedel@suse.de>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kees Cook <keescook@chromium.org>
  Kevin Hao <haokexin@gmail.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Michael S. Tsirkin <mst@redhat.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Will Deacon <will@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1017 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 07 06:25:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 06:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWZyA-0000SY-60; Thu, 07 May 2020 06:25:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWZy8-0000ST-U3
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 06:25:16 +0000
X-Inumbo-ID: 819fb91f-902b-11ea-9ee2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 819fb91f-902b-11ea-9ee2-12813bfff9fa;
 Thu, 07 May 2020 06:25:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4BFCBAF13;
 Thu,  7 May 2020 06:25:14 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: Clean up vmcbcleanbits_t handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505173250.5916-1-andrew.cooper3@citrix.com>
 <961921e3-c882-dad0-837e-71644f8bf208@suse.com>
 <d64d8593-9d88-2d42-69dc-c1d8b7018c99@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <44291471-004c-915e-224a-cdab4c306676@suse.com>
Date: Thu, 7 May 2020 08:25:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d64d8593-9d88-2d42-69dc-c1d8b7018c99@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.05.2020 18:49, Andrew Cooper wrote:
> On 06/05/2020 16:10, Jan Beulich wrote:
>> On 05.05.2020 19:32, Andrew Cooper wrote:
>>> @@ -435,17 +435,13 @@ static int nsvm_vmcb_prepare4vmrun(struct vcpu *v, struct cpu_user_regs *regs)
>>>      ASSERT(n2vmcb != NULL);
>>>  
>>>      /* Check if virtual VMCB cleanbits are valid */
>>> -    vcleanbits_valid = 1;
>>> -    if ( svm->ns_ovvmcb_pa == INVALID_PADDR )
>>> -        vcleanbits_valid = 0;
>>> -    if (svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr)
>>> -        vcleanbits_valid = 0;
>>> -
>>> -#define vcleanbit_set(_name)	\
>>> -    (vcleanbits_valid && ns_vmcb->cleanbits.fields._name)
>>> +    if ( svm->ns_ovvmcb_pa != INVALID_PADDR &&
>>> +         svm->ns_ovvmcb_pa != nv->nv_vvmcxaddr )
>>> +        clean = ns_vmcb->cleanbits;
>> It looks to me as if the proper inversion of the original condition
>> would mean == on the right side of &&, not != .
> 
> Oops, yes.  Fixed.

And then
Reviewed-by: Jan Beulich <jbeulich@suse.com>

>>> --- a/xen/include/asm-x86/hvm/svm/vmcb.h
>>> +++ b/xen/include/asm-x86/hvm/svm/vmcb.h
>>> @@ -384,34 +384,21 @@ typedef union
>>>  
>>>  typedef union
>>>  {
>>> -    uint32_t bytes;
>>> -    struct
>>> -    {
>>> -        /* cr_intercepts, dr_intercepts, exception_intercepts,
>>> -         * general{1,2}_intercepts, pause_filter_count, tsc_offset */
>>> -        uint32_t intercepts: 1;
>>> -        /* iopm_base_pa, msrpm_base_pa */
>>> -        uint32_t iopm: 1;
>>> -        /* guest_asid */
>>> -        uint32_t asid: 1;
>>> -        /* vintr */
>>> -        uint32_t tpr: 1;
>>> -        /* np_enable, h_cr3, g_pat */
>>> -        uint32_t np: 1;
>>> -        /* cr0, cr3, cr4, efer */
>>> -        uint32_t cr: 1;
>>> -        /* dr6, dr7 */
>>> -        uint32_t dr: 1;
>>> -        /* gdtr, idtr */
>>> -        uint32_t dt: 1;
>>> -        /* cs, ds, es, ss, cpl */
>>> -        uint32_t seg: 1;
>>> -        /* cr2 */
>>> -        uint32_t cr2: 1;
>>> -        /* debugctlmsr, last{branch,int}{to,from}ip */
>>> -        uint32_t lbr: 1;
>>> -        uint32_t resv: 21;
>>> -    } fields;
>>> +    struct {
>>> +        bool intercepts:1; /* 0:  cr/dr/exception/general1/2_intercepts,
>>> +                            *     pause_filter_count, tsc_offset */
>> Could I talk you into omitting the 1/2 part, as there's going to
>> be a 3 for at least MCOMMIT? Just "general" ought to be clear
>> enough, I would think.
> 
> Can do.  I'm not overly happy about this spilling onto two lines, but I
> can't think of how to usefully shrink the comment further without losing
> information.

The line split is unavoidable if we want the enumeration to be
sensible at all. I have no issue with this, to be honest.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 07:21:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 07:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWaqj-0005G6-KU; Thu, 07 May 2020 07:21:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWaqi-0005G1-40
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 07:21:40 +0000
X-Inumbo-ID: 64319ae0-9033-11ea-9ee5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64319ae0-9033-11ea-9ee5-12813bfff9fa;
 Thu, 07 May 2020 07:21:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B2C4CABB2;
 Thu,  7 May 2020 07:21:40 +0000 (UTC)
Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: paul@xen.org
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
Date: Thu, 7 May 2020 09:21:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <009601d623c5$9547abc0$bfd70340$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.05.2020 18:44, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 29 April 2020 12:02
>>
>> On 07.04.2020 19:38, Paul Durrant wrote:
>>> +int domain_load_begin(struct domain_context *c, unsigned int tc,
>>> +                      const char *name, const struct vcpu *v, size_t len,
>>> +                      bool exact)
>>> +{
>>> +    if ( c->log )
>>> +        gdprintk(XENLOG_INFO, "%pv load: %s (%lu)\n", v, name,
>>> +                 (unsigned long)len);
>>> +
>>> +    BUG_ON(tc != c->desc.typecode);
>>> +    BUG_ON(v->vcpu_id != c->desc.vcpu_id);
>>> +
>>> +    if ( (exact && (len != c->desc.length)) ||
>>> +         (len < c->desc.length) )
>>> +        return -EINVAL;
>>
>> How about
>>
>>     if ( exact ? len != c->desc.length
>>                : len < c->desc.length )
>>
> 
> Yes, that doesn't look too bad.
> 
>> ? I'm also unsure about the < - don't you mean > instead? Too
>> little data would be compensated by zero padding, but too
>> much data can't be dealt with. But maybe I'm getting the sense
>> of len wrong ...
> 
> I think the < is correct. The caller needs to have at least enough
> space to accommodate the context record.

But this is load, not save - the caller supplies the data. If
there's less data than can be fit, it'll be zero-extended. If
there's too much data, the excess you don't know what to do
with (it might be okay to tolerate it being all zero).

>>> +        if ( (!handlers[i].per_vcpu && c.desc.vcpu_id) ||
>>> +             (c.desc.vcpu_id >= d->max_vcpus) )
>>> +            break;
>>> +
>>> +        v = d->vcpu[c.desc.vcpu_id];
>>> +
>>> +        if ( flags & DOMAIN_SAVE_FLAG_IGNORE )
>>> +        {
>>> +            /* Sink the data */
>>> +            rc = domain_load_entry(&c, c.desc.typecode, "IGNORED",
>>> +                                   v, NULL, c.desc.length, true);
>>
>> IOW the read handlers need to be able to deal with a NULL dst?
>> Sounds a little fragile to me. Is there a reason
>> domain_load_data() can't skip the call to read()?
> 
> Something has to move the cursor so sinking the data using a
> NULL dst seemed like the best way to avoid the need for a
> separate callback function.

And domain_load_data() can't itself advance the cursor in such
a case, with no callback involved at all?

>>> +    uint16_t typecode;
>>> +    /*
>>> +     * Each entry will contain either to global or per-vcpu domain state.
>>> +     * Entries relating to global state should have zero in this field.
>>
>> Is there a reason to specify zero?
>>
> 
> Not particularly but I thought it best to at least specify a value in all cases.

A specific value is certainly a good idea; I wonder though whether
an obviously invalid one (like ~0) wouldn't be better then,
allowing this ID to later be assigned meaning in such cases if
need be.

>>> +     */
>>> +    uint16_t vcpu_id;
>>
>> Seeing (possibly) multi-instance records in HVM state (PIC, IO-APIC, HPET),
>> wouldn't it be better to generalize this to ID, meaning "vCPU ID" just for
>> per-vCPU state?
> 
> True, a generic 'instance' would be needed for such records. I'll do as you suggest.

Which, along with my comment on domain_save_begin() taking a
struct vcpu * right now, will then indeed require changing
to a (struct domain *, unsigned int instance) tuple, I guess.

>>> +#define DOMAIN_SAVE_BEGIN(_x, _c, _v, _len) \
>>> +        domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_len))
>>
>> In new code I'd like to ask for no leading underscores on macro
>> parameters as well as no unnecessary parenthesization of macro
>> parameters (e.g. when they simply get passed on to another macro
>> or function without being part of a larger expression).
> 
> Personally I think it is generally good practice to parenthesize
> but I can drop if you prefer.

To be honest - it's more than just "prefer": Excess parentheses
hamper readability. There shouldn't be too few, and since macros
already require quite a lot of them, imo we should strive to
have exactly as many as are needed.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 07:34:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 07:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWb37-0006Ac-Qq; Thu, 07 May 2020 07:34:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcBF=6V=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jWb36-0006AX-Fm
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 07:34:28 +0000
X-Inumbo-ID: 2e03ce4e-9035-11ea-ae69-bc764e2007e4
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e03ce4e-9035-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 07:34:27 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id t12so4520952edw.3
 for <xen-devel@lists.xenproject.org>; Thu, 07 May 2020 00:34:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=pr1cp3PhdwgQmdUwHzDCUMPbJGv0zkM1yqKo1lf845A=;
 b=NfoOGi+ahTDjbKJG2lG14lwH7iSxhTrkJUmqVJSTLXYozjtmOux/S5QHz0L4LWPz9Z
 2zQU4g6m03lmNSXtUmcvKHnVTVY2j2gpuaT4yc5KCZxl3dP2w491eLZZvjxf9PDs38uB
 oifhjS43U31l8CpQXlGNNVAsuZLJeSx4zpXIKi7RBzEmJgdKddYHbXo0VHryMXiZPp5k
 WnByOx0BJVwjxfYCHLqDfaSoKGLmu1SE1qpwtV9XVqBQHJrUVOoYCfYBlgjjjXJ95VkM
 pYYyLFH4opfITv/jnVg8bb0BilZFuNEmYg5vZKcgAvSNA09z6lJLNErApiVFur879Ifw
 R2XA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=pr1cp3PhdwgQmdUwHzDCUMPbJGv0zkM1yqKo1lf845A=;
 b=UBhGkwticCL0zbspZmi2RlRC+kEWmP1YuCirKHGdzsIfocyzq/O1LMtrI4O7FUsK89
 UWYxzVCI9q4TQ7+TBVjq6M52pXjz4BctLKCtY/cX8RCIHMKA8g5Mn/40QI4TrfA3YslN
 u+9P6sqVNerg6hCvnJnAaA1kp4IZKbrGI/4z9LNyi52VFZHMbDpjAKvmrgMjFuQVhHrE
 AKvOohk79wEzk7H3GEtnmRPY5uOGvU7ilVY3RUpJ5XI7ZKgtxp6COqg0/XlhnFtKa9Cu
 9g/lNHqz9tDB0K8dlxVb3Hg62oyWuTTLl0fwP9it+VYrzlrqXuZCB109LRcMbPNPtfsp
 Hltg==
X-Gm-Message-State: AGi0PuYAWJZvi2sJu0I5+jdLDAh6SLIe3836Xo7Jq/SIjdcVt8bTi96B
 +SI7gHRoFk9ooVqaZ9JSPq4=
X-Google-Smtp-Source: APiQypJbQNI9HfziaJCUbnJQ7tETXi9MC6H6a/wTCwCubc0/g6xBFxfwaGkuJNLaREY2bN5q5HqzLA==
X-Received: by 2002:a05:6402:1b08:: with SMTP id
 by8mr10504521edb.286.1588836866472; 
 Thu, 07 May 2020 00:34:26 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id n26sm548782edo.36.2020.05.07.00.34.24
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 07 May 2020 00:34:25 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
 <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
In-Reply-To: <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
Subject: RE: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Date: Thu, 7 May 2020 08:34:24 +0100
Message-ID: <00ab01d62441$ef2630e0$cd7292a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQIOC3/NwyZzJjhdRz0oBcI7Is/lxwF9QIbqAnK/qAwCoq+JvwFlXK23p+yqL4A=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 07 May 2020 08:22
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>;
> 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap'
> <george.dunlap@citrix.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>;
> 'Julien Grall' <julien@xen.org>; 'Stefano Stabellini' <sstabellini@kernel.org>;
> 'Wei Liu' <wl@xen.org>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>;
> 'Roger Pau Monné' <roger.pau@citrix.com>
> Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for
> save/restore of 'domain' context
>
> On 06.05.2020 18:44, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 29 April 2020 12:02
> >>
> >> On 07.04.2020 19:38, Paul Durrant wrote:
> >>> +int domain_load_begin(struct domain_context *c, unsigned int tc,
> >>> +                      const char *name, const struct vcpu *v, size_t len,
> >>> +                      bool exact)
> >>> +{
> >>> +    if ( c->log )
> >>> +        gdprintk(XENLOG_INFO, "%pv load: %s (%lu)\n", v, name,
> >>> +                 (unsigned long)len);
> >>> +
> >>> +    BUG_ON(tc != c->desc.typecode);
> >>> +    BUG_ON(v->vcpu_id != c->desc.vcpu_id);
> >>> +
> >>> +    if ( (exact && (len != c->desc.length)) ||
> >>> +         (len < c->desc.length) )
> >>> +        return -EINVAL;
> >>
> >> How about
> >>
> >>     if ( exact ? len != c->desc.length
> >>                : len < c->desc.length )
> >>
> >
> > Yes, that doesn't look too bad.
> >
> >> ? I'm also unsure about the < - don't you mean > instead? Too
> >> little data would be compensated by zero padding, but too
> >> much data can't be dealt with. But maybe I'm getting the sense
> >> of len wrong ...
> >
> > I think the < is correct. The caller needs to have at least enough
> > space to accommodate the context record.
>
> But this is load, not save - the caller supplies the data. If
> there's less data than can be fit, it'll be zero-extended. If
> there's too much data, the excess you don't know what to do
> with (it might be okay to tolerate it being all zero).
>

But this is a callback. The outer load function iterates over the
records calling the appropriate handler for each one. Those handlers
then call this function saying how much data they expect and whether
they want exactly that amount, or whether they can tolerate less (i.e.
zero-extend). Hence len < c->desc.length is an error, because it means
the descriptor contains more data than the handler knows how to handle.

> >>> +        if ( (!handlers[i].per_vcpu && c.desc.vcpu_id) ||
> >>> +             (c.desc.vcpu_id >= d->max_vcpus) )
> >>> +            break;
> >>> +
> >>> +        v = d->vcpu[c.desc.vcpu_id];
> >>> +
> >>> +        if ( flags & DOMAIN_SAVE_FLAG_IGNORE )
> >>> +        {
> >>> +            /* Sink the data */
> >>> +            rc = domain_load_entry(&c, c.desc.typecode, "IGNORED",
> >>> +                                   v, NULL, c.desc.length, true);
> >>
> >> IOW the read handlers need to be able to deal with a NULL dst?
> >> Sounds a little fragile to me. Is there a reason
> >> domain_load_data() can't skip the call to read()?
> >
> > Something has to move the cursor so sinking the data using a
> > NULL dst seemed like the best way to avoid the need for a
> > separate callback function.
>
> And domain_load_data() can't itself advance the cursor in such
> a case, with no callback involved at all?
>

How could it do that without a callback? Doing such a thing negates the
utility of the ignore flag anyway since it implies the caller is going
to edit the load stream in advance. Since I'm going to drop the ignore
flag in v3 anyway, this is all a bit academic.

> >>> +    uint16_t typecode;
> >>> +    /*
> >>> +     * Each entry will contain either to global or per-vcpu domain state.
> >>> +     * Entries relating to global state should have zero in this field.
> >>
> >> Is there a reason to specify zero?
> >>
> >
> > Not particularly but I thought it best to at least specify a value in all cases.
>
> A specific value is certainly a good idea; I wonder though whether
> an obviously invalid one (like ~0) wouldn't be better then,
> allowing this ID to later be assigned meaning in such cases if
> need be.
>

Ok, that sounds reasonable. I'll #define something for convenience.

> >>> +     */
> >>> +    uint16_t vcpu_id;
> >>
> >> Seeing (possibly) multi-instance records in HVM state (PIC, IO-APIC, HPET),
> >> wouldn't it be better to generalize this to ID, meaning "vCPU ID" just for
> >> per-vCPU state?
> >
> > True, a generic 'instance' would be needed for such records. I'll do
> > as you suggest.
>
> Which, along with my comment on domain_save_begin() taking a
> struct vcpu * right now, will then indeed require changing
> to a (struct domain *, unsigned int instance) tuple, I guess.
>

Yes.

  Paul



From xen-devel-bounces@lists.xenproject.org Thu May 07 07:40:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 07:40:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWb8O-0006Kp-Ga; Thu, 07 May 2020 07:39:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWb8N-0006Kk-Ji
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 07:39:55 +0000
X-Inumbo-ID: f13b0fc6-9035-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f13b0fc6-9035-11ea-b9cf-bc764e2007e4;
 Thu, 07 May 2020 07:39:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 17780ADD2;
 Thu,  7 May 2020 07:39:56 +0000 (UTC)
Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: paul@xen.org
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
 <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
 <00ab01d62441$ef2630e0$cd7292a0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4ac9fc33-393d-d550-c656-94814d8f5467@suse.com>
Date: Thu, 7 May 2020 09:39:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <00ab01d62441$ef2630e0$cd7292a0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 09:34, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 07 May 2020 08:22
>>
>> On 06.05.2020 18:44, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 29 April 2020 12:02
>>>>
>>>> On 07.04.2020 19:38, Paul Durrant wrote:
>>>>> +int domain_load_begin(struct domain_context *c, unsigned int tc,
>>>>> +                      const char *name, const struct vcpu *v, size_t len,
>>>>> +                      bool exact)
>>>>> +{
>>>>> +    if ( c->log )
>>>>> +        gdprintk(XENLOG_INFO, "%pv load: %s (%lu)\n", v, name,
>>>>> +                 (unsigned long)len);
>>>>> +
>>>>> +    BUG_ON(tc != c->desc.typecode);
>>>>> +    BUG_ON(v->vcpu_id != c->desc.vcpu_id);
>>>>> +
>>>>> +    if ( (exact && (len != c->desc.length)) ||
>>>>> +         (len < c->desc.length) )
>>>>> +        return -EINVAL;
>>>>
>>>> How about
>>>>
>>>>     if ( exact ? len != c->desc.length
>>>>                : len < c->desc.length )
>>>>
>>>
>>> Yes, that doesn't look too bad.
>>>
>>>> ? I'm also unsure about the < - don't you mean > instead? Too
>>>> little data would be compensated by zero padding, but too
>>>> much data can't be dealt with. But maybe I'm getting the sense
>>>> of len wrong ...
>>>
>>> I think the < is correct. The caller needs to have at least enough
>>> space to accommodate the context record.
>>
>> But this is load, not save - the caller supplies the data. If
>> there's less data than can be fit, it'll be zero-extended. If
>> there's too much data, the excess you don't know what to do
>> with (it might be okay to tolerate it being all zero).
>>
> 
> But this is a callback. The outer load function iterates over
> the records calling the appropriate handler for each one. Those
> handlers then call this function saying how much data they
> expect and whether they want exactly that amount, or whether
> they can tolerate less (i.e. zero-extend). Hence
> len < c->desc.length is an error, because it means the
> descriptor contains more data than the handler knows how to
> handle.

Oh, I see - "But maybe I'm getting the sense of len wrong ..."
then indeed applies.

Any thoughts on tolerating the excess data being zero?

Jan
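The length check being debated can be modelled in isolation. The sketch below is illustrative only: load_length_ok is a hypothetical stand-in for the comparison inside domain_load_begin, with "expected" playing the role of len (what the handler asks for) and "actual" the role of c->desc.length (what the record carries).

```c
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical model of the check: with exact=true the lengths must
 * match; otherwise a SHORTER record is tolerated (the handler would
 * zero-extend it), but a LONGER record is rejected, since it carries
 * data the handler does not know how to interpret. */
static bool load_length_ok(size_t expected, size_t actual, bool exact)
{
    return !(exact ? expected != actual
                   : expected < actual);
}
```

A short record passes only in the non-exact case; a long record never passes under this formulation.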


From xen-devel-bounces@lists.xenproject.org Thu May 07 07:45:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 07:45:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWbDc-00078Q-6I; Thu, 07 May 2020 07:45:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcBF=6V=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jWbDa-00078L-JS
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 07:45:18 +0000
X-Inumbo-ID: b1c1fe62-9036-11ea-ae69-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1c1fe62-9036-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 07:45:17 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id m12so355260wmc.0
 for <xen-devel@lists.xenproject.org>; Thu, 07 May 2020 00:45:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=Q6iZqKK/iHuwFM/LAaovynXP4aAGFMqMvRLsceV5SNE=;
 b=A1kJ6z7nL8McT7QXfnekaDP4cqS6iADy1J922uglLxpRamYkQx8slWIphinkxbMwaO
 Jg7f/KYE92niYFxsbJWXXd408a3w1dm8V+EeQSaAz9/TfNQ0e4PRlE1IU+Mc5k1SizU1
 Re4eAcTgUKW9IiNdkabzjN7w4Fmaq41JRllcmECeRFxmW+pxfeeCiSkVe1VDRWIrkreW
 0TjgqI//tBRjdqFkzWAbZ3e4JCCEBfXJ72cA/R9T6E2hez2x+xLWVWlyCcfjOmb4XWUq
 S6/1MSpaiF/pEPH/fQwMsLr1w98mfAlQzFEr+Zoct/dS0AYktO/4tLb8n5A8J0gOJ7X9
 yiRg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=Q6iZqKK/iHuwFM/LAaovynXP4aAGFMqMvRLsceV5SNE=;
 b=FVauDAJIVns1jjYsg2DtvqiVZ3kSJ/zuJnMiVDrS3m0nzr6SGJpMjZFCaxNAC1WlSC
 xc5z2xWmaZUS54xkaEKmXYxD7xJcC0XDX6JxLuql5S1pTITzoKbAfD29htD/wibRNIjo
 DMRjLrRqkunGqYlJXK8t6OhepFFxctjylCDr5MldOO8AwzNdRA5zRWh6QXfrwWf2VBZj
 El7/MiYQpdfNDFSpl7bYpc5Nmd4axOc/YvKyJ7LU+HwISX36dts04hrly7SDLmssLH4n
 luThwBcOwsUIRqEh2KkBruKVkirQ3hKR0TDsA2jfGloOHesPFGf7jm5yIWivMMx+SMje
 MDxA==
X-Gm-Message-State: AGi0PuYhbejhDs0oK017cBC/6FymqgPU5ZBA5H+3GzLxlILGZrGCf9kZ
 L0DFGH/YZ4fbSMzDUvv6w6U=
X-Google-Smtp-Source: APiQypKIcL3axkw4PeLPYkyKJ4jSmxuu6pWFtiZMdNZakOXtayNIgQugT7Cpk3nsDnPYswKBYjbIsA==
X-Received: by 2002:a7b:c5d4:: with SMTP id n20mr9217441wmk.92.1588837516753; 
 Thu, 07 May 2020 00:45:16 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id b22sm14760861wmj.1.2020.05.07.00.45.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 07 May 2020 00:45:16 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
 <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
 <00ab01d62441$ef2630e0$cd7292a0$@xen.org>
 <4ac9fc33-393d-d550-c656-94814d8f5467@suse.com>
In-Reply-To: <4ac9fc33-393d-d550-c656-94814d8f5467@suse.com>
Subject: RE: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Date: Thu, 7 May 2020 08:45:14 +0100
Message-ID: <00ac01d62443$72c88140$585983c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQIOC3/NwyZzJjhdRz0oBcI7Is/lxwF9QIbqAnK/qAwCoq+JvwFlXK23AoxL9wsCIkLQPqfHOe9A
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 07 May 2020 08:40
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' =
<pdurrant@amazon.com>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'George Dunlap' =
<george.dunlap@citrix.com>; 'Ian Jackson'
> <ian.jackson@eu.citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano =
Stabellini'
> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Volodymyr Babchuk' =
<Volodymyr_Babchuk@epam.com>;
> 'Roger Pau Monn=C3=A9' <roger.pau@citrix.com>
> Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for =
save/restore of 'domain' context
>=20
> On 07.05.2020 09:34, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 07 May 2020 08:22
> >>
> >> On 06.05.2020 18:44, Paul Durrant wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: 29 April 2020 12:02
> >>>>
> >>>> On 07.04.2020 19:38, Paul Durrant wrote:
> >>>>> +int domain_load_begin(struct domain_context *c, unsigned int =
tc,
> >>>>> +                      const char *name, const struct vcpu *v, =
size_t len,
> >>>>> +                      bool exact)
> >>>>> +{
> >>>>> +    if ( c->log )
> >>>>> +        gdprintk(XENLOG_INFO, "%pv load: %s (%lu)\n", v, name,
> >>>>> +                 (unsigned long)len);
> >>>>> +
> >>>>> +    BUG_ON(tc !=3D c->desc.typecode);
> >>>>> +    BUG_ON(v->vcpu_id !=3D c->desc.vcpu_id);
> >>>>> +
> >>>>> +    if ( (exact && (len !=3D c->desc.length)) ||
> >>>>> +         (len < c->desc.length) )
> >>>>> +        return -EINVAL;
> >>>>
> >>>> How about
> >>>>
> >>>>     if ( exact ? len !=3D c->desc.length
> >>>>                : len < c->desc.length )
> >>>>
> >>>
> >>> Yes, that doesn't look too bad.
> >>>
> >>>> ? I'm also unsure about the < - don't you mean > instead? Too
> >>>> little data would be compensated by zero padding, but too
> >>>> much data can't be dealt with. But maybe I'm getting the sense
> >>>> of len wrong ...
> >>>
> >>> I think the < is correct. The caller needs to have at least enough
> >>> space to accommodate the context record.
> >>
> >> But this is load, not save - the caller supplies the data. If
> >> there's less data than can be fit, it'll be zero-extended. If
> >> there's too much data, the excess you don't know what to do
> >> with (it might be okay to tolerate it being all zero).
> >>
> >
> > But this is a callback. The outer load function iterates over
> > the records calling the appropriate handler for each one. Those
> > handlers then call this function saying how much data they
> > expect and whether they want exactly that amount, or whether
> > they can tolerate less (i.e. zero-extend). Hence
> > len < c->desc.length is an error, because it means the
> > descriptor contains more data than the handler knows how to
> > handle.
>=20
> Oh, I see - "But maybe I'm getting the sense of len wrong ..."
> then indeed applies.
>=20
> Any thoughts on tolerating the excess data being zero?
>=20

Well the point of the check here is to not tolerate excess data... Are =
you suggesting that it might be a reasonable idea? If so, then yes, =
insisting it is all zero would be an alternative.

  Paul




From xen-devel-bounces@lists.xenproject.org Thu May 07 08:15:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 08:15:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWbgU-0001i7-I9; Thu, 07 May 2020 08:15:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWbgT-0001i2-Lq
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 08:15:09 +0000
X-Inumbo-ID: dd586ee0-903a-11ea-9eec-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd586ee0-903a-11ea-9eec-12813bfff9fa;
 Thu, 07 May 2020 08:15:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 366EBADFE;
 Thu,  7 May 2020 08:15:10 +0000 (UTC)
Subject: Re: [RFC PATCH] docs/designs: domB design document
To: Christopher Clark <christopher.w.clark@gmail.com>
References: <20200506032312.878-1-christopher.w.clark@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fa40a5f7-39e4-bd30-1e1d-89311cfe2ff7@suse.com>
Date: Thu, 7 May 2020 10:15:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200506032312.878-1-christopher.w.clark@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Adam Schwalm <adam.schwalm@starlab.io>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Rich Persaud <persaur@gmail.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 06.05.2020 05:23, Christopher Clark wrote:
> +It is this understanding that the DomB project used as the basis for
> +developing its multiple domain boot capability for Xen. The remainder of
> +this document gives a detailed explanation of multiple domain boot: the
> +objectives it strives to achieve, the structure behind the approach, the
> +sequence of events in a boot, a contrast with ARM’s dom0less, and some
> +exemplar implementations.

May I ask that along with dom0less you also explain the (non-)relationship
to the late-Dom0 model we've been having for a number of years? Some
aspects of what the boot domain does look, to me, quite similar.

Apart from this one immediate detail I'm curious about (and that I also
don't know/recall how it gets handled with dom0less): Death of Dom0 in a
traditional setup is a signal to Xen to reboot the host. With any of the
boot time created domains not functioning anymore, the intended purpose
of the host may no longer be fulfilled. But of course there may be a
subset of "optional" domains. As a result - are there any intentions
towards identifying under what conditions it may be better to reboot the
host than wait for human interaction?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 08:17:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 08:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWbix-0001p7-0M; Thu, 07 May 2020 08:17:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWbiv-0001p1-FE
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 08:17:41 +0000
X-Inumbo-ID: 380a8508-903b-11ea-9eec-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 380a8508-903b-11ea-9eec-12813bfff9fa;
 Thu, 07 May 2020 08:17:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7343CADD2;
 Thu,  7 May 2020 08:17:42 +0000 (UTC)
Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: paul@xen.org
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
 <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
 <00ab01d62441$ef2630e0$cd7292a0$@xen.org>
 <4ac9fc33-393d-d550-c656-94814d8f5467@suse.com>
 <00ac01d62443$72c88140$585983c0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <98b80dfd-ae48-7d91-a164-3fd9c294f74d@suse.com>
Date: Thu, 7 May 2020 10:17:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <00ac01d62443$72c88140$585983c0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 09:45, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 07 May 2020 08:40
>> To: paul@xen.org
>> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>; 'Andrew Cooper'
>> <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Ian Jackson'
>> <ian.jackson@eu.citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano Stabellini'
>> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>;
>> 'Roger Pau Monné' <roger.pau@citrix.com>
>> Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for save/restore of 'domain' context
>>
>> On 07.05.2020 09:34, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 07 May 2020 08:22
>>>>
>>>> On 06.05.2020 18:44, Paul Durrant wrote:
>>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>>> Sent: 29 April 2020 12:02
>>>>>>
>>>>>> On 07.04.2020 19:38, Paul Durrant wrote:
>>>>>>> +int domain_load_begin(struct domain_context *c, unsigned int tc,
>>>>>>> +                      const char *name, const struct vcpu *v, size_t len,
>>>>>>> +                      bool exact)
>>>>>>> +{
>>>>>>> +    if ( c->log )
>>>>>>> +        gdprintk(XENLOG_INFO, "%pv load: %s (%lu)\n", v, name,
>>>>>>> +                 (unsigned long)len);
>>>>>>> +
>>>>>>> +    BUG_ON(tc != c->desc.typecode);
>>>>>>> +    BUG_ON(v->vcpu_id != c->desc.vcpu_id);
>>>>>>> +
>>>>>>> +    if ( (exact && (len != c->desc.length)) ||
>>>>>>> +         (len < c->desc.length) )
>>>>>>> +        return -EINVAL;
>>>>>>
>>>>>> How about
>>>>>>
>>>>>>     if ( exact ? len != c->desc.length
>>>>>>                : len < c->desc.length )
>>>>>>
>>>>>
>>>>> Yes, that doesn't look too bad.
>>>>>
>>>>>> ? I'm also unsure about the < - don't you mean > instead? Too
>>>>>> little data would be compensated by zero padding, but too
>>>>>> much data can't be dealt with. But maybe I'm getting the sense
>>>>>> of len wrong ...
>>>>>
>>>>> I think the < is correct. The caller needs to have at least enough
>>>>> space to accommodate the context record.
>>>>
>>>> But this is load, not save - the caller supplies the data. If
>>>> there's less data than can be fit, it'll be zero-extended. If
>>>> there's too much data, the excess you don't know what to do
>>>> with (it might be okay to tolerate it being all zero).
>>>>
>>>
>>> But this is a callback. The outer load function iterates over
>>> the records calling the appropriate handler for each one. Those
>>> handlers then call this function saying how much data they
>>> expect and whether they want exactly that amount, or whether
>>> they can tolerate less (i.e. zero-extend). Hence
>>> len < c->desc.length is an error, because it means the
>>> descriptor contains more data than the handler knows how to
>>> handle.
>>
>> Oh, I see - "But maybe I'm getting the sense of len wrong ..."
>> then indeed applies.
>>
>> Any thoughts on tolerating the excess data being zero?
>>
> 
> Well the point of the check here is to not tolerate excess data...
> Are you suggesting that it might be a reasonable idea?

Well - it looks to be the obvious counterpart to zero-extending.
I'm not going to assert though that I've thought through all
possible consequences...

Jan
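The alternative Jan floats here — accepting a record longer than the handler expects, provided every excess byte is zero — can be sketched as follows. The helper name and signature are hypothetical, not part of the patch under review.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical helper: a record no longer than 'expected' is always
 * fine (short records are zero-extended by the handler); a longer one
 * is acceptable only if all bytes beyond 'expected' are zero, which
 * mirrors the zero-extension done on the short side. */
static bool excess_is_zero(const uint8_t *buf, size_t expected,
                           size_t actual)
{
    size_t i;

    if ( actual <= expected )
        return true; /* no excess data at all */

    for ( i = expected; i < actual; i++ )
        if ( buf[i] != 0 )
            return false;

    return true;
}
```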


From xen-devel-bounces@lists.xenproject.org Thu May 07 08:22:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 08:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWbnI-0002cj-Iy; Thu, 07 May 2020 08:22:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWbnH-0002ce-Dd
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 08:22:11 +0000
X-Inumbo-ID: d8e5ca64-903b-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8e5ca64-903b-11ea-b9cf-bc764e2007e4;
 Thu, 07 May 2020 08:22:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9FB0CADFE;
 Thu,  7 May 2020 08:22:12 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: Use flush-by-asid when available
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200505182539.12247-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <98450ecd-7259-1651-ca6e-58ec1e03ae7f@suse.com>
Date: Thu, 7 May 2020 10:22:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200505182539.12247-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05.05.2020 20:25, Andrew Cooper wrote:
> AMD Fam15h processors introduced the flush-by-asid feature, for more
> fine-grained flushing.
> 
> Flushing everything including ASID 0 (i.e. Xen context) is an unnecessarily
> large hammer, and never necessary when only guest TLBs need invalidating.
> 
> When available, use TLB_CTRL_FLUSH_ASID in preference to TLB_CTRL_FLUSH_ALL.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
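The selection the commit message describes amounts to a simple feature-gated choice of TLB control value. The sketch below is illustrative; the constant values follow the AMD VMCB TLB_CONTROL encodings (1 = flush entire TLB, 3 = flush this guest's ASID), but the helper and its name are hypothetical, not the patch's actual code.

```c
#include <stdbool.h>

/* Illustrative VMCB TLB_CONTROL encodings (per AMD's manuals). */
#define TLB_CTRL_FLUSH_ALL   1u  /* flush all ASIDs, incl. ASID 0 (Xen) */
#define TLB_CTRL_FLUSH_ASID  3u  /* flush only the guest's ASID */

/* Hypothetical sketch: prefer the narrower flush when the CPU
 * advertises flush-by-asid, else fall back to the big hammer. */
static unsigned int choose_tlb_ctrl(bool cpu_has_flush_by_asid)
{
    return cpu_has_flush_by_asid ? TLB_CTRL_FLUSH_ASID
                                 : TLB_CTRL_FLUSH_ALL;
}
```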



From xen-devel-bounces@lists.xenproject.org Thu May 07 08:36:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 08:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWc0e-0003YI-RS; Thu, 07 May 2020 08:36:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K0Sz=6V=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWc0c-0003YD-QJ
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 08:35:58 +0000
X-Inumbo-ID: c6351422-903d-11ea-9eed-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6351422-903d-11ea-9eed-12813bfff9fa;
 Thu, 07 May 2020 08:35:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=M+Bj3RU/k3t9vnxO+jWI4dODdoXu2tklo8vSxTm72SU=; b=oFIzYgxaEQj7WnVRCZvv7E9flC
 lsqn6h3Y3Mj2JyLs0bQNgh/arZNVzyR2cCKB+StEur6UWePo7xXss/OnEMitBt/eWgHhZmADgIxFs
 cstEW27l9uuYQdEW8BjTaCCMROm9knaIzRT/rEtMw7hWe90Ih5vcBfdtWYr3ed8rKL1A=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWc0Y-0001Vw-L1; Thu, 07 May 2020 08:35:54 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWc0Y-0005GW-DH; Thu, 07 May 2020 08:35:54 +0000
Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Jan Beulich <jbeulich@suse.com>, paul@xen.org
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
 <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2a911fec-82dd-9d97-791a-02dd4b328eb6@xen.org>
Date: Thu, 7 May 2020 09:35:51 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 07/05/2020 08:21, Jan Beulich wrote:
> On 06.05.2020 18:44, Paul Durrant wrote:
>>>> +#define DOMAIN_SAVE_BEGIN(_x, _c, _v, _len) \
>>>> +        domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_len))
>>>
>>> In new code I'd like to ask for no leading underscores on macro
>>> parameters as well as no unnecessary parenthesization of macro
>>> parameters (e.g. when they simply get passed on to another macro
>>> or function without being part of a larger expression).
>>
>> Personally I think it is generally good practice to parenthesize
>> but I can drop if you prefer.
> 
> To be honest - it's more than just "prefer": Excess parentheses
> hamper readability. There shouldn't be too few, and since macros
> already require quite a lot of them, imo we should strive to
> have exactly as many as are needed.

While I understand that too many parentheses may make the code worse, in 
the case of macros, adding them for each argument is good practice. It 
is pretty simple to follow and avoids the mistake of forgetting to 
protect an argument correctly.

So I would let the contributor decide whether to protect all the macro 
arguments or only on an as-needed basis.
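The pitfall Julien's rule guards against is the classic one below (an illustrative example, not from the patch): an expression argument binds incorrectly inside an unparenthesized expansion.

```c
/* Without parentheses, SQUARE_UNSAFE(2 + 1) expands to (2 + 1 * 2 + 1),
 * which evaluates to 5 instead of the intended 9.  Parenthesizing each
 * argument in the expansion avoids the precedence surprise. */
#define SQUARE_UNSAFE(x)  (x * x)
#define SQUARE_SAFE(x)    ((x) * (x))
```

Jan's counterpoint stands for the narrower case where an argument is merely forwarded to another macro or function, since there the extra parentheses change nothing and only add noise.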

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 07 08:58:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 08:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWcMI-0005G8-P3; Thu, 07 May 2020 08:58:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWcMH-0005G3-Ez
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 08:58:21 +0000
X-Inumbo-ID: e612c5f2-9040-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e612c5f2-9040-11ea-9887-bc764e2007e4;
 Thu, 07 May 2020 08:58:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 86FD0ACB1;
 Thu,  7 May 2020 08:58:21 +0000 (UTC)
Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Julien Grall <julien@xen.org>
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
 <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
 <2a911fec-82dd-9d97-791a-02dd4b328eb6@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <21494488-9dd9-c196-73fa-a82c99c8bc52@suse.com>
Date: Thu, 7 May 2020 10:58:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <2a911fec-82dd-9d97-791a-02dd4b328eb6@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 paul@xen.org, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 10:35, Julien Grall wrote:
> 
> 
> On 07/05/2020 08:21, Jan Beulich wrote:
>> On 06.05.2020 18:44, Paul Durrant wrote:
>>>>> +#define DOMAIN_SAVE_BEGIN(_x, _c, _v, _len) \
>>>>> +        domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_len))
>>>>
>>>> In new code I'd like to ask for no leading underscores on macro
>>>> parameters as well as no unnecessary parenthesization of macro
>>>> parameters (e.g. when they simply get passed on to another macro
>>>> or function without being part of a larger expression).
>>>
>>> Personally I think it is generally good practice to parenthesize
>>> but I can drop them if you prefer.
>>
>> To be honest - it's more than just "prefer": Excess parentheses
>> hamper readability. There shouldn't be too few, and since macros
>> already require quite a lot of them, imo we should strive to
>> have exactly as many as are needed.
> 
> While I understand that too many parentheses may make the code
> worse, in the case of macros, adding them for each argument is
> good practice. This is pretty simple to follow and avoids the
> mistake of forgetting to protect an argument correctly.
> 
> So I would let the contributor decide whether he wants to
> protect all the macro arguments or only on an as-needed basis.

This is a possible alternative position to take, accepting the
worse readability. But then this would need to be applied in
full, please (as far as possible - operands to # or ## of
course can't be parenthesized):

#define DOMAIN_SAVE_BEGIN(x, c, v, len) \
        domain_save_begin((c), DOMAIN_SAVE_CODE((x)), #x, (v), (len))

which might look a little odd even to you?

As to readability - I'm sure you realize that from time to time
one needs to look at preprocessor output, where parentheses used
like this and the parentheses also applied inside the invoked
macro add to one another. All in all I don't buy that the
"avoids forgetting to protect an argument correctly" argument
outweighs the downsides. We're doing this work not on an
occasional basis, but as a job. There should be a minimum
level of care everyone applies.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 09:00:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 09:00:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWcOY-00061h-5l; Thu, 07 May 2020 09:00:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Plze=6V=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jWcOW-00061Z-Nv
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 09:00:40 +0000
X-Inumbo-ID: 394d30ea-9041-11ea-9887-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 394d30ea-9041-11ea-9887-bc764e2007e4;
 Thu, 07 May 2020 09:00:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588842040;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=HwkpoQSjNc5m7+JivnVXv/WyxYqKh3dTmzrZR2JHESk=;
 b=BFlnYleMZ1K4TJC8lCqJTcCo0TZM3lXUZ5m5e8uqA3HI1j2stYLLHVCT
 ibygWr3j4f+y2mbU6JIkWwuWfSqISBDZ5x45/QCfn0WN//21mB9OIFLw+
 an61ghDRu1N09pZ/niWDvd6jtkxFtHByCSLvJvn0jLIpmfmMY3CS7jb/i 8=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: Tz2zzIGUh7TEabyWXqnpXH4d5xOh82p4IKFWCYSnY7EzABhZQH0noXSkpxq2KrekLZ1FFFUcWN
 3idpEbpYXLqDgX9wjlEbzncLb/cC25nnTBa5pg18iGT+hzzL9O0NF3Q1kBE4fkcjpZbMAM06SC
 OLjQKmhLzCaI+1Oz8PxclGRCNNSE4tAoHSWJbv6q7h8JUZ/0jJxxQ9xJXLdpNsB4lFNKDJewIY
 +righeNx10AJzps1Uw3HLgzHHkvig/gLcnsxVuR/IXQF6r1VAoUqZ7OqRk2+EUk0vH6Zb9aEXK
 xh0=
X-SBRS: 2.7
X-MesageID: 17324804
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,363,1583211600"; d="scan'208";a="17324804"
Date: Thu, 7 May 2020 11:00:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH] libxl: update libxlu_disk_l.[ch]
Message-ID: <20200507090030.GG1353@Air-de-Roger>
References: <20200506165018.32209-1-wl@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200506165018.32209-1-wl@xen.org>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Xen Development List <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 06, 2020 at 05:50:18PM +0100, Wei Liu wrote:
> Use the flex 2.6.4 that is shipped in Debian Buster.
> 
> Signed-off-by: Wei Liu <wl@xen.org>
> ---
> Do this because Roger posted a patch to fix the clang build, which
> requires updating the same files. I don't want to bury his changes
> in unrelated ones.

Hm, I'm not sure it's helpful, but all I can do is Ack this:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 07 09:32:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 09:32:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWcsi-0008VV-OU; Thu, 07 May 2020 09:31:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K0Sz=6V=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWcsh-0008VQ-Mc
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 09:31:51 +0000
X-Inumbo-ID: 949909fc-9045-11ea-9ef4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 949909fc-9045-11ea-9ef4-12813bfff9fa;
 Thu, 07 May 2020 09:31:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pSi7F3KbCx6AP3mkUt8WMJcapHRbJY1wIic9DL/q9Qs=; b=1ndKInbQCSEwEjAfgPepM2/+zC
 dGI/UTQ2vbSTCO4Ms4KH0LfkPZP9t+J4A6R09Uj8/nn8nGBq6C5c75gpyNR4YGYf+fJ9Dy4LbskP3
 bp6Zr7g5cx37dtyqHUP7lpVdKiLyIixSNTHAl6xcThxWuNtW1+8iYQdWjztIOxM8oULA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWcse-0002YD-32; Thu, 07 May 2020 09:31:48 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWcsd-0000Sr-Rm; Thu, 07 May 2020 09:31:47 +0000
Subject: Re: [PATCH v2 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Jan Beulich <jbeulich@suse.com>
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-2-paul@xen.org>
 <d5013c9d-b1a9-6139-a120-741332d6e086@suse.com>
 <009601d623c5$9547abc0$bfd70340$@xen.org>
 <63772836-3b3c-753a-18e5-d9fe3a6666a2@suse.com>
 <2a911fec-82dd-9d97-791a-02dd4b328eb6@xen.org>
 <21494488-9dd9-c196-73fa-a82c99c8bc52@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9eceaf8b-84b8-ba9a-8643-2f78f9852815@xen.org>
Date: Thu, 7 May 2020 10:31:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <21494488-9dd9-c196-73fa-a82c99c8bc52@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 paul@xen.org, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 07/05/2020 09:58, Jan Beulich wrote:
> On 07.05.2020 10:35, Julien Grall wrote:
>>
>>
>> On 07/05/2020 08:21, Jan Beulich wrote:
>>> On 06.05.2020 18:44, Paul Durrant wrote:
>>>>>> +#define DOMAIN_SAVE_BEGIN(_x, _c, _v, _len) \
>>>>>> +        domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_v), (_len))
>>>>>
>>>>> In new code I'd like to ask for no leading underscores on macro
>>>>> parameters as well as no unnecessary parenthesization of macro
>>>>> parameters (e.g. when they simply get passed on to another macro
>>>>> or function without being part of a larger expression).
>>>>
>>>> Personally I think it is generally good practice to parenthesize
>>>> but I can drop them if you prefer.
>>>
>>> To be honest - it's more than just "prefer": Excess parentheses
>>> hamper readability. There shouldn't be too few, and since macros
>>> already require quite a lot of them, imo we should strive to
>>> have exactly as many as are needed.
>>
>> While I understand that too many parentheses may make the code
>> worse, in the case of macros, adding them for each argument is
>> good practice. This is pretty simple to follow and avoids the
>> mistake of forgetting to protect an argument correctly.
>>
>> So I would let the contributor decide whether he wants to
>> protect all the macro arguments or only on an as-needed basis.
> 
> This is a possible alternative position to take, accepting the
> worse readability. But then this would need to be applied in
> full, please (as far as possible - operands to # or ## of
> course can't be parenthesized):
> 
> #define DOMAIN_SAVE_BEGIN(x, c, v, len) \
>          domain_save_begin((c), DOMAIN_SAVE_CODE((x)), #x, (v), (len))
> 
> which might look a little odd even to you?

One pair of parentheses looks wasteful, but it doesn't bother me that much.

> 
> As to readability - I'm sure you realize that from time to time
> one needs to look at preprocessor output, where parentheses used
> like this and the parentheses also applied inside the invoked
> macro add to one another. All in all I don't buy that the
> "avoids forgetting to protect an argument correctly" argument
> outweighs the downsides. We're doing this work not on an
> occasional basis, but as a job. There should be a minimum
> level of care everyone applies.

I am sure you have heard about "human error" in the past. Even if you 
do something all day long, you will make a mistake one day. By 
requesting a contributor to limit the number of parentheses, you 
increase the chance he/she will screw up one day.

You might think reviewers will catch such an error, but XSA-316 proved 
that it is possible to miss one. I was the author of the patch and you 
were one of the reviewers. As you can see, even an experienced 
contributor and reviewer can make mistakes... we are all human after 
all ;).

I am not asking to be over-protective, but we should not lay traps for 
other contributors to fall into accidentally. In this case, Paul 
thinks the parentheses will help him, so I would not force him to 
remove them and possibly make a mistake.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 07 10:21:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 10:21:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWde9-0004AV-PD; Thu, 07 May 2020 10:20:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWde8-0004AQ-W0
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 10:20:53 +0000
X-Inumbo-ID: 69638bfc-904c-11ea-9ef8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69638bfc-904c-11ea-9ef8-12813bfff9fa;
 Thu, 07 May 2020 10:20:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Z7gA/E28pdiBB8JYeXY45vaOb7I6Ltpjwm7iJ0i2gJM=; b=H8iE4N3PCl1cZOV+XIyXjsMgh
 n1uf291j7R5XIJrYPpEDjqugorQ+3nJnOogLDaa83HmVGbvm/QZV5GZRAtM3b8gOdTwScIsg+Rfub
 zIxjWvT0VSBbN7sqtauvZLWjca3HckNbxlOmD4b2QmtZHwnl13KYVDslV4bEEuWYOUpVY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWde0-0003Ug-JZ; Thu, 07 May 2020 10:20:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWde0-0006qJ-Al; Thu, 07 May 2020 10:20:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWde0-0006B1-8y; Thu, 07 May 2020 10:20:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150048-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150048: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=2e3d87cc734a895ef5b486926274a178836b67a9
X-Osstest-Versions-That: xen=0135be8bd8cd60090298f02310691b688d95c3a8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 10:20:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150048 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150048/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 149908

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149908
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149908
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149908
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149908
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149908
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149908
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149908
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149908
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149908
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149908
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  2e3d87cc734a895ef5b486926274a178836b67a9
baseline version:
 xen                  0135be8bd8cd60090298f02310691b688d95c3a8

Last test of basis   149908  2020-05-03 01:54:24 Z    4 days
Testing same since   150048  2020-05-05 19:06:53 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   0135be8bd8..2e3d87cc73  2e3d87cc734a895ef5b486926274a178836b67a9 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 07 11:17:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 11:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWeX3-00005r-Ht; Thu, 07 May 2020 11:17:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AJJs=6V=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jWeX1-00005m-Tp
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 11:17:35 +0000
X-Inumbo-ID: 5996b340-9054-11ea-9f00-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5996b340-9054-11ea-9f00-12813bfff9fa;
 Thu, 07 May 2020 11:17:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D4D23B0B8;
 Thu,  7 May 2020 11:17:35 +0000 (UTC)
Subject: Re: [PATCH v7 03/12] docs: add feature document for Xen hypervisor
 sysfs-like support
To: George Dunlap <George.Dunlap@citrix.com>
References: <20200402154616.16927-1-jgross@suse.com>
 <20200402154616.16927-4-jgross@suse.com>
 <AB2368EA-FE62-4735-8064-98DE220B6F9E@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3d3f8302-5730-e7b7-ab91-7d9038472353@suse.com>
Date: Thu, 7 May 2020 13:17:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <AB2368EA-FE62-4735-8064-98DE220B6F9E@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.04.20 15:55, George Dunlap wrote:
> 
>> On Apr 2, 2020, at 4:46 PM, Juergen Gross <JGross@suse.com> wrote:
> [snip]
>> +* {VALUE, VALUE, ... } -- a list of possible values separated by "," and
>> +  enclosed in "{" and "}".
> [snip]
>> +So an entry could look like this:
>> +
>> +    /cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID on"}) [w,X86,PV]
>> +
>> +Possible values would be "No" or a list of "dom0", "domU", and "PCID on".
> 
> One thing that wasn’t clear to me here:  Does the “list” look like
> 
>      “dom0", “domU", “PCID on”
> 
> or
> 
>      {dom0, domU, PCID on}
> 
> or
> 
>      {“dom0”, “domU”, “PCID on”}
> 
> ?

This is only the notation used in the document.

The actual entry could then be e.g.

   dom0 domU

> 
> Another question that occurs to me from asking this question: in a case like this, are we generally expecting to have options with spaces in them (like “PCID on”)? And/or, are we expecting the strings themselves to have quotes in them? If so, commands to manipulate these will start to look a little gnarly:
> 
>      xenhypfs write <path> “{\”dom0\”, \”PCID on\”}”

You've got a point here. Spaces in single items seem to be a bad idea,
especially in lists.

And as said above: the '{', ',' and the quotes are notation only;
they are not part of the entry content.

> 
> It seems like it would be nicer to be able to write:
> 
>      xenhypfs write <path> "{dom0, PCID-on}”

This would then be:

   xenhypfs write <path> "dom0 PCID-on"

> 
> (Maybe this will be made more clear later in the series, just thought I’d share my thoughts / confusion here.)

I'll make this clearer.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 07 11:35:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 11:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWeoF-0001kq-8O; Thu, 07 May 2020 11:35:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=AJJs=6V=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jWeoD-0001kh-EP
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 11:35:21 +0000
X-Inumbo-ID: d50b286a-9056-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d50b286a-9056-11ea-b07b-bc764e2007e4;
 Thu, 07 May 2020 11:35:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3BAB0AB8F;
 Thu,  7 May 2020 11:35:22 +0000 (UTC)
Subject: Re: [PATCH v7 05/12] libs: add libxenhypfs
To: George Dunlap <George.Dunlap@citrix.com>
References: <20200402154616.16927-1-jgross@suse.com>
 <20200402154616.16927-6-jgross@suse.com>
 <936102D2-0655-43EA-B52A-DED46E9E07D0@citrix.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <57e40d81-1e06-bb71-9b27-f955654cf681@suse.com>
Date: Thu, 7 May 2020 13:35:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <936102D2-0655-43EA-B52A-DED46E9E07D0@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.04.20 16:53, George Dunlap wrote:
> 
> 
>> On Apr 2, 2020, at 4:46 PM, Juergen Gross <JGross@suse.com> wrote:
>>
>> Add the new library libxenhypfs for access to the hypervisor filesystem.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Acked-by: Wei Liu <wl@xen.org>
> 
> Just a few questions...
> 
>> +/* Returned buffer and dirent should be freed via free(). */
>> +void *xenhypfs_read_raw(xenhypfs_handle *fshdl, const char *path,
>> +                        struct xenhypfs_dirent **dirent);
>> +
>> +/* Returned buffer should be freed via free(). */
>> +char *xenhypfs_read(xenhypfs_handle *fshdl, const char *path);
> 
> What’s the difference between these two?  And what’s the `dirent` argument to xenhypfs_read_raw() for?

xenhypfs_read() returns the printable content (if possible),
e.g. "1" for an entry of type integer containing 0x00000001.

xenhypfs_read_raw() returns the raw content (e.g. the binary
value 0x00000001 in the above example) and the entry format data
in dirent (type, size and encoding of the entry).
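
The printable conversion described for an integer entry can be sketched
as follows. This is an illustration of the behaviour, not the actual
libxenhypfs code; uint_entry_to_string is a hypothetical helper:

```c
#include <inttypes.h>
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

/* Hypothetical sketch of the conversion xenhypfs_read() is described
 * as performing for an integer entry: the raw entry content is a
 * little-endian binary value of the given size, and the printable
 * form is its decimal representation. */
static void uint_entry_to_string(const uint8_t *raw, size_t size,
                                 char *out, size_t outlen)
{
    uint64_t v = 0;
    size_t i;

    for ( i = 0; i < size; i++ )   /* accumulate little-endian bytes */
        v |= (uint64_t)raw[i] << (8 * i);

    snprintf(out, outlen, "%" PRIu64, v);
}
```

With this, a 4-byte raw entry {0x01, 0x00, 0x00, 0x00} yields the
printable string "1", matching the example above.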


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 07 11:36:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 11:36:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWepc-0001pg-Jg; Thu, 07 May 2020 11:36:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWepb-0001pZ-Fj
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 11:36:47 +0000
X-Inumbo-ID: 075cf230-9057-11ea-9f06-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 075cf230-9057-11ea-9f06-12813bfff9fa;
 Thu, 07 May 2020 11:36:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4w2ixTxHAJSJ68WOTZVyCo234QMFE7co+9vASLrLby4=; b=sR6HWNzoD4+UzfAIvApzLfjKs
 KEzYKJ3ahqSjUTqmtOnaEF+ZRdYGNyivK3zdBw2ZBBHVLSHkpsHL5JlFkU9MuFopkzBBLLlzhZFAO
 3nqJZJjcgWbkXB1pxG8q71dpvkrsIGGY1Sfde4CQxz36OE1LUILToZADWFM/Gf3pEquNQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWepY-0004wF-IY; Thu, 07 May 2020 11:36:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWepY-0002vD-9B; Thu, 07 May 2020 11:36:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWepY-00068j-81; Thu, 07 May 2020 11:36:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150051-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 150051: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:regression
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:guest-stop:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-localmigrate/x10:fail:heisenbug
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
X-Osstest-Versions-That: xen=45c90737d5f0c8bf479adcd8cb88450f1998e55c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 11:36:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150051 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150051/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail in 150038 REGR. vs. 149649

Tests which are failing intermittently (not blocking):
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 15 guest-stop fail pass in 150038
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 150038
 test-amd64-i386-xl-qemut-ws16-amd64 14 guest-localmigrate  fail pass in 150038
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail pass in 150038

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop      fail blocked in 149649
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail in 150038 blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail in 150038 blocked in 149649
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop  fail in 150038 like 149649
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail in 150038 like 149649
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149649
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149649
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149649
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
baseline version:
 xen                  45c90737d5f0c8bf479adcd8cb88450f1998e55c

Last test of basis   149649  2020-04-14 13:35:25 Z   22 days
Testing same since   150038  2020-05-05 16:06:01 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Apr 3 13:03:40 2020 +0100

    tools/xenstore: fix a use after free problem in xenstored
    
    Commit 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object
    twice") introduced a potential use after free problem in
    domain_cleanup(): after calling talloc_unlink() for domain->conn,
    domain->conn is set to NULL. The problem is that domain is registered
    as talloc child of domain->conn, so it might be freed by the
    talloc_unlink() call.
    
    With Xenstore being single threaded there are normally no concurrent
    memory allocations running and freeing a virtual memory area normally
    doesn't result in that area no longer being accessible. A problem
    could occur only in case either a signal received results in some
    memory allocation done in the signal handler (SIGHUP is a primary
    candidate leading to reopening the log file), or in case the talloc
    framework would do some internal memory allocation during freeing of
    the memory (which would lead to clobbering of the freed domain
    structure).
    
    Fixes: 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object twice")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    (cherry picked from commit bb2a34fd740e9a26be9e2244f1a5b4cef439e5a8)
    (cherry picked from commit dc5176d0f9434e275e0be1df8d0518e243798beb)
    (cherry picked from commit a997ffe678e698ff2b4c89ae5a98661d12247fef)
    (cherry picked from commit 48e8564435aca590f1c292ab7bb1f3dbc6b75693)
    (cherry picked from commit 1e722e6971539eab4f484affd60490cbc8429951)
(qemu changes not included)
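The hazard the commit message describes can be illustrated with a minimal, self-contained C sketch. This is not the xenstored code: plain malloc()/free() stand in for the talloc hierarchy, and the structure layout and function name are hypothetical. The point it demonstrates is the ordering fix: because `domain` is a child allocation of `domain->conn`, the back-pointer must be cleared before the connection is released, since the release may free `domain` itself.

```c
#include <stdlib.h>

/* Hypothetical simplified structures mirroring the commit message.
 * In xenstored these live in a talloc hierarchy where 'domain' is a
 * child allocation of 'conn', so releasing conn releases domain too. */
struct domain;

struct connection {
    struct domain *domain;
};

struct domain {
    struct connection *conn;
};

/* Sketch of the safe cleanup ordering the commit describes.
 * Buggy order (what the fix removes):
 *     talloc_unlink(domain->conn);  -- may free 'domain' as well
 *     domain->conn = NULL;          -- store through freed memory (UAF)
 * Safe order: take a local copy, clear the field, then release.
 * Here plain free() stands in for talloc_unlink(); with real talloc,
 * freeing conn would free domain implicitly.
 * Returns 1 if a connection was released, 0 if there was none. */
static int domain_cleanup_safe(struct domain *domain)
{
    struct connection *conn = domain->conn;

    if (!conn)
        return 0;

    /* Drop the pointers first: after the releases below, 'domain'
     * storage must not be touched again. */
    domain->conn = NULL;
    conn->domain = NULL;

    free(domain);   /* with talloc this happens as a child of conn */
    free(conn);
    return 1;
}
```

With real talloc the same discipline applies: any field of an object that may be freed as a side effect of talloc_unlink() has to be read and cleared before the unlink, never after.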


From xen-devel-bounces@lists.xenproject.org Thu May 07 12:04:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 12:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWfGO-0004MI-9J; Thu, 07 May 2020 12:04:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWfGM-0004MD-PN
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 12:04:26 +0000
X-Inumbo-ID: e360d636-905a-11ea-9f0b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e360d636-905a-11ea-9f0b-12813bfff9fa;
 Thu, 07 May 2020 12:04:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bPrZf1HEF9mtWWciblODPaxXO2TSG0MIyHVmLx+bklY=; b=34N+0XRwOl7x4MD69R0XymfTw
 T7nz0YLYMzzp+Yj5A3Gp2nNbhQIGNLJ+aiczKofLiOtuhpUEc9DF0fEyyjWTwjNJxPl3iMBFgE4+B
 6Zt2zzID2p+d59eOBnZ3JbKpL5S96yqYpTcR105DLLXoPteG/PEfMQSawbMFvstHFzVEk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWfGI-0005TL-9O; Thu, 07 May 2020 12:04:22 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWfGH-0003tz-Se; Thu, 07 May 2020 12:04:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWfGH-0005Fc-Rl; Thu, 07 May 2020 12:04:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150054-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150054: regressions - FAIL
X-Osstest-Failures: linux-5.4:build-amd64-pvops:kernel-build:fail:regression
 linux-5.4:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
X-Osstest-Versions-That: linux=9895e0ac338a8060e6947f897397c21c4d78d80d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 12:04:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150054 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150054/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             6 kernel-build             fail REGR. vs. 149905

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5
baseline version:
 linux                9895e0ac338a8060e6947f897397c21c4d78d80d

Last test of basis   149905  2020-05-02 16:38:26 Z    4 days
Testing same since   150054  2020-05-06 06:41:12 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Aharon Landau <aharonl@mellanox.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alaa Hleihel <alaa@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Arun Easi <aeasi@marvell.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Christoph Hellwig <hch@lst.de>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@intel.com>
  David Disseldorp <ddiss@suse.de>
  David Howells <dhowells@redhat.com>
  David Sterba <dsterba@suse.com>
  Dexuan Cui <decui@microsoft.com>
  Douglas Anderson <dianders@chromium.org>
  Filipe Manana <fdmanana@suse.com>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hui Wang <hui.wang@canonical.com>
  Iuliana Prodan <iuliana.prodan@nxp.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Joerg Roedel <jroedel@suse.de>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marek Behún <marek.behun@nic.cz>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Liu <liumartin@google.com>
  Martin Wilck <mwilck@suse.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Niklas Cassel <niklas.cassel@wdc.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Paul Moore <paul@paul-moore.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rayagonda Kokatanur <rayagonda.kokatanur@broadcom.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  ryan_chen <ryan_chen@aspeedtech.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Shawn Guo <shawnguo@kernel.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sumit Semwal <sumit.semwal@linaro.org>
  Sunwook Eom <speed.eom@samsung.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Khoruzhick <anarsoul@gmail.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vinod Koul <vkoul@kernel.org>
  Wei Liu <wei.liu@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yan Zhao <yan.y.zhao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            fail    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1726 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:17:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWgP2-0001sW-MV; Thu, 07 May 2020 13:17:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWgP1-0001sN-2p
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:17:27 +0000
X-Inumbo-ID: 17f75d52-9065-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17f75d52-9065-11ea-b07b-bc764e2007e4;
 Thu, 07 May 2020 13:17:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6CA0BB01C;
 Thu,  7 May 2020 13:17:27 +0000 (UTC)
Subject: Re: [PATCH 10/16] x86/cpu: Adjust reset_stack_and_jump() to be shadow
 stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-11-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4c0dfd8f-38c0-ca32-886d-94cb4698e63b@suse.com>
Date: Thu, 7 May 2020 15:17:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-11-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> We need to unwind up to the supervisor token.  See the comment for details.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/include/asm-x86/current.h | 42 +++++++++++++++++++++++++++++++++++++++---
>  1 file changed, 39 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
> index 99b66a0087..2a7b728b1e 100644
> --- a/xen/include/asm-x86/current.h
> +++ b/xen/include/asm-x86/current.h
> @@ -124,13 +124,49 @@ unsigned long get_stack_dump_bottom (unsigned long sp);
>  # define CHECK_FOR_LIVEPATCH_WORK ""
>  #endif
>  
> +#ifdef CONFIG_XEN_SHSTK
> +/*
> + * We need to unwind the primary shadow stack to its supervisor token, located
> + * at 0x5ff8 from the base of the stack blocks.
> + *
> + * Read the shadow stack pointer, subtract it from 0x5ff8, divide by 8 to get
> + * the number of slots needing popping.
> + *
> + * INCSSPQ can't pop more than 255 entries.  We shouldn't ever need to pop
> + * that many entries, and getting this wrong will cause us to #DF later.
> + */
> +# define SHADOW_STACK_WORK                      \
> +    "mov $1, %[ssp];"                           \
> +    "rdsspd %[ssp];"                            \
> +    "cmp $1, %[ssp];"                           \
> +    "je 1f;" /* CET not active?  Skip. */       \
> +    "mov $"STR(0x5ff8)", %[val];"               \

As per comments on earlier patches, I think it would be nice if
this wasn't a literal number here, but tied to actual stack
layout via some suitable expression. An option might be to use
0xff8 (or the constant to be introduced for it in the earlier
patch) here and ...

> +    "and $"STR(STACK_SIZE - 1)", %[ssp];"       \

... PAGE_SIZE here.

> +    "sub %[ssp], %[val];"                       \
> +    "shr $3, %[val];"                           \
> +    "cmp $255, %[val];"                         \
> +    "jle 2f;"                                   \

Perhaps better "jbe", treating the unsigned values as such?

> +    "ud2a;"                                     \
> +    "2: incsspq %q[val];"                       \
> +    "1:"
> +#else
> +# define SHADOW_STACK_WORK ""
> +#endif
> +
>  #define switch_stack_and_jump(fn, instr)                                \
>      ({                                                                  \
> +        unsigned int tmp;                                               \
>          __asm__ __volatile__ (                                          \
> -            "mov %0,%%"__OP"sp;"                                        \
> +            "cmc;"                                                      \
> +            SHADOW_STACK_WORK                                           \
> +            "mov %[stk], %%rsp;"                                        \
>              instr                                                       \
> -             "jmp %c1"                                                  \
> -            : : "r" (guest_cpu_user_regs()), "i" (fn) : "memory" );     \
> +            "jmp %c[fun];"                                              \
> +            : [val] "=&r" (tmp),                                        \
> +              [ssp] "=&r" (tmp)                                         \

See my concern on the earlier similar construct.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:22:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWgTh-0002gY-DK; Thu, 07 May 2020 13:22:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWgTf-0002gT-KP
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:22:15 +0000
X-Inumbo-ID: c420bab0-9065-11ea-9f15-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c420bab0-9065-11ea-9f15-12813bfff9fa;
 Thu, 07 May 2020 13:22:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BAA05AF6D;
 Thu,  7 May 2020 13:22:16 +0000 (UTC)
Subject: Re: [PATCH 11/16] x86/spec-ctrl: Adjust DO_OVERWRITE_RSB to be shadow
 stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-12-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3c01d6c3-5a61-018b-13df-6e75256f6d6c@suse.com>
Date: Thu, 7 May 2020 15:22:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-12-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> @@ -114,6 +114,16 @@
>      sub $1, %ecx
>      jnz .L\@_fill_rsb_loop
>      mov %\tmp, %rsp                 /* Restore old %rsp */
> +
> +#ifdef CONFIG_XEN_SHSTK
> +    mov $1, %ecx
> +    rdsspd %ecx
> +    cmp $1, %ecx
> +    je .L\@_shstk_done
> +    mov $64, %ecx                   /* 64 * 4 bytes, given incsspd */
> +    incsspd %ecx                    /* Restore old SSP */
> +.L\@_shstk_done:
> +#endif

The latest here I wonder why you don't use alternatives patching.
I thought that's what you've introduced the synthetic feature
flag for.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:22:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:22:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWgUB-0002k5-MZ; Thu, 07 May 2020 13:22:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Plze=6V=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jWgUA-0002jx-9A
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:22:46 +0000
X-Inumbo-ID: d634a900-9065-11ea-9f16-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d634a900-9065-11ea-9f16-12813bfff9fa;
 Thu, 07 May 2020 13:22:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588857765;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=ohO7AnoRpCbzgs5lJJjQilidAWSjQyWi/0LojJaAd5g=;
 b=W4UWVyhsZrldF7CPOcNp+ElQMNkFBKVu2dGzcuF2cHWx15InN7cHaGjR
 BFl1MzD7LKE5ECCrzHHgtEFMGo8m1j7yhrcb/AZUMygmDXNGwjuqTNlTF
 W3m0h3al0QB+ieyWBDwuBejMfGCRdtZB2sPbcX3TH2C0NpNKnuDN7wJMz I=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: U8wNX+1i06jVBHHKqZ2ABG+xdmUlD413NO8xppoppu2nvmqGYVejyT6T1KhczayLAJtpCVtGj1
 FmrtELFMVMhNv1uK4X+Q2o4dYn2OAtbKV93nQE/lZNZVZQBIOz+hC6dnuReM57wkMhEY26Xl0e
 Vnjk46ZEbB3aX2Kg3IwDkhZG3ZvP6MkE88wsqFVT0fJ5tQLvbGxS1Hyq/rnOFqmB7qMPZjyQ7p
 i0RGtmLMkxXBqRU4JAmZkgZyNVIekapXWEEzZOMCPFRBhvOH7oPll0i83qmMM6iuoxlQ1qW8UB
 lSQ=
X-SBRS: 2.7
X-MesageID: 17344624
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,363,1583211600"; d="scan'208";a="17344624"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/idle: prevent entering C6 with in service interrupts on
 Intel
Date: Thu, 7 May 2020 15:22:36 +0200
Message-ID: <20200507132236.26010-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Apply a workaround for Intel errata CLX30: "A Pending Fixed Interrupt
May Be Dispatched Before an Interrupt of The Same Priority Completes".

It's not clear which models are affected, as the erratum is listed in
the "Second Generation Intel Xeon Scalable Processors" specification
update, but the issue has been seen as far back as Nehalem processors.
Apply the workaround to all Intel processors; the condition can be
relaxed later.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 docs/misc/xen-command-line.pandoc |  8 ++++++++
 xen/arch/x86/acpi/cpu_idle.c      | 22 +++++++++++++++++++++-
 xen/arch/x86/cpu/mwait-idle.c     |  3 +++
 xen/include/asm-x86/cpuidle.h     |  1 +
 4 files changed, 33 insertions(+), 1 deletion(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index ee12b0f53f..6e868a2185 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -652,6 +652,14 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
 additionally a trace buffer of the specified size is allocated per cpu.
 The debug trace feature is only enabled in debugging builds of Xen.
 
+### disable-c6-isr
+> `= <boolean>`
+
+> Default: `true for Intel CPUs`
+
+Workaround for Intel erratum CLX30. Prevent entering C6 idle states with
+in-service local APIC interrupts. Enabled by default for all Intel CPUs.
+
 ### dma_bits
 > `= <integer>`
 
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index b83446e77d..5023fea148 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -573,6 +573,25 @@ static bool errata_c6_eoi_workaround(void)
     return (fix_needed && cpu_has_pending_apic_eoi());
 }
 
+static int8_t __read_mostly disable_c6_isr = -1;
+boolean_param("disable-c6-isr", disable_c6_isr);
+
+/*
+ * Errata CLX30: A Pending Fixed Interrupt May Be Dispatched Before an
+ * Interrupt of The Same Priority Completes.
+ *
+ * Prevent entering C6 if there are pending lapic interrupts, or else the
+ * processor might dispatch further pending interrupts before the first one has
+ * been completed.
+ */
+bool errata_c6_isr_workaround(void)
+{
+    if ( unlikely(disable_c6_isr == -1) )
+        disable_c6_isr = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
+
+    return disable_c6_isr && cpu_has_pending_apic_eoi();
+}
+
 void update_last_cx_stat(struct acpi_processor_power *power,
                          struct acpi_processor_cx *cx, uint64_t ticks)
 {
@@ -676,7 +695,8 @@ static void acpi_processor_idle(void)
         return;
     }
 
-    if ( (cx->type == ACPI_STATE_C3) && errata_c6_eoi_workaround() )
+    if ( (cx->type == ACPI_STATE_C3) &&
+         (errata_c6_eoi_workaround() || errata_c6_isr_workaround()) )
         cx = power->safe_state;
 
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index b81937966e..e14cdaeed7 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -770,6 +770,9 @@ static void mwait_idle(void)
 		return;
 	}
 
+	if (cx->type == ACPI_STATE_C3 && errata_c6_isr_workaround())
+		cx = power->safe_state;
+
 	eax = cx->address;
 	cstate = ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
 
diff --git a/xen/include/asm-x86/cpuidle.h b/xen/include/asm-x86/cpuidle.h
index 5d7dffd228..8b9d6fdb15 100644
--- a/xen/include/asm-x86/cpuidle.h
+++ b/xen/include/asm-x86/cpuidle.h
@@ -26,4 +26,5 @@ void update_idle_stats(struct acpi_processor_power *,
 void update_last_cx_stat(struct acpi_processor_power *,
                          struct acpi_processor_cx *, uint64_t);
 
+bool errata_c6_isr_workaround(void);
 #endif /* __X86_ASM_CPUIDLE_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 07 13:25:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:25:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWgX5-0002vf-5V; Thu, 07 May 2020 13:25:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWgX4-0002vX-33
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:25:46 +0000
X-Inumbo-ID: 41172195-9066-11ea-9f16-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 41172195-9066-11ea-9f16-12813bfff9fa;
 Thu, 07 May 2020 13:25:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588857946;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=tmB6z3p1qvF6/X9wtkwFDp/kOrzHN6RtY/T+wRJL+qY=;
 b=BEKSK1tZZDfXIWYIGMEOpLnqQIM/0IRMo9/8q/1Vf0pUAjsQfp39w08+
 DxQh6cDBTLgqYiDl3gsE5Q6U26/uF/tS/LBZO+4PEueMKF0DtdpRsLVaW
 MM7bFToZ9YU0gkU3M0QwomCMSE6NQGeO8mm2BBt5Mwj6LDqVz8reFd+Jh g=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: 5B91iZU6GiMi0Qa9HGqChNkYNtitwrQRk5OeFoA7btNgRad+9lG0RgkjzuHQAV5N/ZdS9HmwQX
 2OQ2ApOLjRRXk7TKtyEIYbS4txD9RfxVUQfhqGWfnnlGdsohvM9o7ykdn4jXzPcQ7KZLIw5Zqs
 49+t9csNLLf4feyEc+KZUzh5h6SuhJBeBZFV5zroF6D0YDASMdxhqwTkTjM+HQpGIyAprvqKlp
 7+gk4M7lesIKa6kmFjqRYWqlxJUGlIXsZbbgXUFTRD+9FFfwzrINek6rvZTsEa7devf+89iJeN
 pGo=
X-SBRS: 2.7
X-MesageID: 16986654
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,363,1583211600"; d="scan'208";a="16986654"
Subject: Re: [PATCH 11/16] x86/spec-ctrl: Adjust DO_OVERWRITE_RSB to be shadow
 stack compatible
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-12-andrew.cooper3@citrix.com>
 <3c01d6c3-5a61-018b-13df-6e75256f6d6c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bdf2d248-aabb-8913-1d79-355623b47dfb@citrix.com>
Date: Thu, 7 May 2020 14:25:31 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <3c01d6c3-5a61-018b-13df-6e75256f6d6c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07/05/2020 14:22, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> @@ -114,6 +114,16 @@
>>      sub $1, %ecx
>>      jnz .L\@_fill_rsb_loop
>>      mov %\tmp, %rsp                 /* Restore old %rsp */
>> +
>> +#ifdef CONFIG_XEN_SHSTK
>> +    mov $1, %ecx
>> +    rdsspd %ecx
>> +    cmp $1, %ecx
>> +    je .L\@_shstk_done
>> +    mov $64, %ecx                   /* 64 * 4 bytes, given incsspd */
>> +    incsspd %ecx                    /* Restore old SSP */
>> +.L\@_shstk_done:
>> +#endif
> The latest here I wonder why you don't use alternatives patching.
> I thought that's what you've introduced the synthetic feature
> flag for.

We're already in the middle of an alternative and they don't nest.  More
importantly, this path gets used on the BSP, after patching and before
CET gets enabled.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:35:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWggg-0003nV-46; Thu, 07 May 2020 13:35:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWgge-0003nQ-Uu
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:35:40 +0000
X-Inumbo-ID: a3b4bd4c-9067-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3b4bd4c-9067-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 13:35:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 428A0ACFA;
 Thu,  7 May 2020 13:35:41 +0000 (UTC)
Subject: Re: [PATCH 12/16] x86/extable: Adjust extable handling to be shadow
 stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-13-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1e80c672-9308-f7ad-67ea-69d83d69bc03@suse.com>
Date: Thu, 7 May 2020 15:35:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-13-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -778,6 +778,28 @@ static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>                 vec_name(regs->entry_vector), regs->error_code,
>                 _p(regs->rip), _p(regs->rip), _p(fixup));
>  
> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
> +    {
> +        unsigned long ssp;
> +
> +        asm ("rdsspq %0" : "=r" (ssp) : "0" (1) );
> +        if ( ssp != 1 )
> +        {
> +            unsigned long *ptr = _p(ssp);
> +
> +            /* Search for %rip in the shadow stack, ... */
> +            while ( *ptr != regs->rip )
> +                ptr++;

Wouldn't it be better to bound the loop, as it shouldn't search past
(strictly speaking, not even to) the next page boundary? Also, you
don't care about the top of the stack (being the to-be-restored SSP),
do you? I.e. maybe

            while ( *++ptr != regs->rip )
                ;

?

And then - isn't searching for a specific RIP value alone prone to
error, in case it matches an ordinary return address? I.e.
wouldn't you better search for a matching RIP accompanied by a
suitable pointer into the shadow stack and a matching CS value?
Otherwise, ...

> +            ASSERT(ptr[1] == __HYPERVISOR_CS);

... also assert that ptr[-1] points into the shadow stack?

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -708,7 +708,16 @@ exception_with_ints_disabled:
>          call  search_pre_exception_table
>          testq %rax,%rax                 # no fixup code for faulting EIP?
>          jz    1b
> -        movq  %rax,UREGS_rip(%rsp)
> +        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
> +
> +#ifdef CONFIG_XEN_SHSTK
> +        mov    $1, %edi
> +        rdsspq %rdi
> +        cmp    $1, %edi
> +        je     .L_exn_shstk_done
> +        wrssq  %rax, (%rdi)             # fixup shadow stack
> +.L_exn_shstk_done:
> +#endif

Again avoid the conditional jump by using alternatives patching?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:36:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWghd-0003rX-Dz; Thu, 07 May 2020 13:36:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kxRa=6V=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jWghb-0003rM-Ij
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:36:39 +0000
X-Inumbo-ID: c6af4b50-9067-11ea-9f1a-12813bfff9fa
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6af4b50-9067-11ea-9f1a-12813bfff9fa;
 Thu, 07 May 2020 13:36:38 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id g13so6416594wrb.8
 for <xen-devel@lists.xenproject.org>; Thu, 07 May 2020 06:36:38 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:content-transfer-encoding
 :in-reply-to:user-agent;
 bh=aqfwh8LlbsD879/cOAoWdiZy98LeMnr8OnnCE7r4Kz0=;
 b=bBiqCnM3FeJ3Lc9Pc/j77/XSCjrqfVOxfQkFpfAqsVACQSLJK6fbrodgYIXli+Cw4C
 ZQzlZhokdZiM7a7fD2gRjqAv6k2Ga6HDwK1ZJ5pBCG+MKFBe6/C229mcKeag+ng9thjM
 U82dMF12wcsNviyf8eMq36KWIF6FfmtCDMfdpUEewQJQEVKZEedp14wJWV+lk7TjLGy/
 LGguP3xT+YeTbXirzh8ssxNWE59F2xkIOzGxxlxGClx5u1KxZyAWboV+PSY7yUaM195r
 sBmJwKdtUhlsIvefe4ZACgPS7/8H0v1Fe352BsKrxiVCwi8tPdmYvY5M51bS24m4HTZ6
 CtPQ==
X-Gm-Message-State: AGi0PuYEGX55ybQgSWIBDvS/nk9/J7RQoPUEHGL/gbRp1uHTkuhNcbRp
 9kycAP3G05CzW8KojarVhoI=
X-Google-Smtp-Source: APiQypIdMbXwteHL7OjI77bEv0/7jEvRMW9TAmtFHoh85TYk2qOD5+ZN9zlZ5t37PTaqSaOrE3bhLQ==
X-Received: by 2002:adf:f4c4:: with SMTP id h4mr16269863wrp.142.1588858597404; 
 Thu, 07 May 2020 06:36:37 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a10sm4487353wrp.0.2020.05.07.06.36.36
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 07 May 2020 06:36:36 -0700 (PDT)
Date: Thu, 7 May 2020 13:36:35 +0000
From: Wei Liu <wl@xen.org>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH] libxl: update libxlu_disk_l.[ch]
Message-ID: <20200507133635.tgcufdk7mepfuxv4@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200506165018.32209-1-wl@xen.org>
 <20200507090030.GG1353@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200507090030.GG1353@Air-de-Roger>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Xen Development List <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 07, 2020 at 11:00:30AM +0200, Roger Pau Monné wrote:
> On Wed, May 06, 2020 at 05:50:18PM +0100, Wei Liu wrote:
> > Use flex 2.6.4 that is shipped in Debian Buster.
> > 
> > Signed-off-by: Wei Liu <wl@xen.org>
> > ---
> > Do this because Roger posted a patch to fix clang build, which requires
> > updating the same files. I don't want to bury his changes in unrelated
> > ones.
> 
> Hm, I'm not sure it's helpful, but all I can do is Ack this:
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks, Roger.

Thanks.

I will commit this on Friday if I don't hear objection.

Wei.


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:38:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:38:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWgjO-0003zw-QV; Thu, 07 May 2020 13:38:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWgjO-0003zq-48
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:38:30 +0000
X-Inumbo-ID: 09028b66-9068-11ea-9f1a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 09028b66-9068-11ea-9f1a-12813bfff9fa;
 Thu, 07 May 2020 13:38:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4D582AC44;
 Thu,  7 May 2020 13:38:31 +0000 (UTC)
Subject: Re: [PATCH 11/16] x86/spec-ctrl: Adjust DO_OVERWRITE_RSB to be shadow
 stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-12-andrew.cooper3@citrix.com>
 <3c01d6c3-5a61-018b-13df-6e75256f6d6c@suse.com>
 <bdf2d248-aabb-8913-1d79-355623b47dfb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cdaafe00-9b2d-b638-e921-1f910fc64310@suse.com>
Date: Thu, 7 May 2020 15:38:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <bdf2d248-aabb-8913-1d79-355623b47dfb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 15:25, Andrew Cooper wrote:
> On 07/05/2020 14:22, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> @@ -114,6 +114,16 @@
>>>      sub $1, %ecx
>>>      jnz .L\@_fill_rsb_loop
>>>      mov %\tmp, %rsp                 /* Restore old %rsp */
>>> +
>>> +#ifdef CONFIG_XEN_SHSTK
>>> +    mov $1, %ecx
>>> +    rdsspd %ecx
>>> +    cmp $1, %ecx
>>> +    je .L\@_shstk_done
>>> +    mov $64, %ecx                   /* 64 * 4 bytes, given incsspd */
>>> +    incsspd %ecx                    /* Restore old SSP */
>>> +.L\@_shstk_done:
>>> +#endif
>> Here, at the latest, I wonder why you don't use alternatives patching.
>> I thought that's what you've introduced the synthetic feature
>> flag for.
> 
> We're already in the middle of an alternative and they don't nest.  More
> importantly, this path gets used on the BSP, after patching and before
> CET gets enabled.

Oh, I should have noticed this. The first point could be dealt with,
but I agree the second pretty much rules out patching.

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:47:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:47:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWgra-0004rb-LN; Thu, 07 May 2020 13:46:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWgrZ-0004rW-Fs
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:46:57 +0000
X-Inumbo-ID: 374223a0-9069-11ea-9f1c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 374223a0-9069-11ea-9f1c-12813bfff9fa;
 Thu, 07 May 2020 13:46:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4DCEBAB64;
 Thu,  7 May 2020 13:46:58 +0000 (UTC)
Subject: Re: [PATCH 13/16] x86/ioemul: Rewrite stub generation to be shadow
 stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-14-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e36b8093-9288-b4ef-810a-583c0b44c626@suse.com>
Date: Thu, 7 May 2020 15:46:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-14-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> The logic is completely undocumented and almost impossible to follow.  It
> actually uses return-oriented programming.  Rewrite it to conform to more
> normal call mechanics, and leave a big comment explaining things.  As well as
> the code being easier to follow, it will execute faster as it isn't fighting
> the branch predictor.
> 
> Move the ioemul_handle_quirk() function pointer from traps.c to
> ioport_emulate.c.  There is no reason for it to live outside the two
> translation units which use it.  Alter the behaviour to return the number of
> bytes written into the stub.
> 
> Access the addresses of the host/guest helpers with extern const char arrays.
> Nothing good will come of C thinking they are regular functions.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> --
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> Posted previously on its perf benefits alone, but here is the real reason
> behind the change.
> ---
>  xen/arch/x86/ioport_emulate.c  | 11 ++---
>  xen/arch/x86/pv/emul-priv-op.c | 91 +++++++++++++++++++++++++++++++-----------
>  xen/arch/x86/pv/gpr_switch.S   | 37 +++++------------
>  xen/arch/x86/traps.c           |  3 --
>  xen/include/asm-x86/io.h       |  3 +-
>  5 files changed, 85 insertions(+), 60 deletions(-)
> 
> diff --git a/xen/arch/x86/ioport_emulate.c b/xen/arch/x86/ioport_emulate.c
> index 499c1f6056..f7511a9c49 100644
> --- a/xen/arch/x86/ioport_emulate.c
> +++ b/xen/arch/x86/ioport_emulate.c
> @@ -8,7 +8,10 @@
>  #include <xen/sched.h>
>  #include <xen/dmi.h>
>  
> -static bool ioemul_handle_proliant_quirk(
> +unsigned int (*ioemul_handle_quirk)(
> +    u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);

As requested for the standalone patch - would you mind adding
__read_mostly on this occasion? And I notice now that use of
uint8_t would also be more appropriate.

> --- a/xen/arch/x86/pv/gpr_switch.S
> +++ b/xen/arch/x86/pv/gpr_switch.S
> @@ -9,59 +9,42 @@
>  
>  #include <asm/asm_defns.h>
>  
> -ENTRY(host_to_guest_gpr_switch)
> -        movq  (%rsp), %rcx
> -        movq  %rdi, (%rsp)
> +/* Load guest GPRs.  Parameter in %rdi, clobbers all registers. */
> +ENTRY(load_guest_gprs)
>          movq  UREGS_rdx(%rdi), %rdx
> -        pushq %rbx
>          movq  UREGS_rax(%rdi), %rax
>          movq  UREGS_rbx(%rdi), %rbx
> -        pushq %rbp
>          movq  UREGS_rsi(%rdi), %rsi
>          movq  UREGS_rbp(%rdi), %rbp
> -        pushq %r12
> -        movq  UREGS_r8(%rdi), %r8
> +        movq  UREGS_r8 (%rdi), %r8
>          movq  UREGS_r12(%rdi), %r12
> -        pushq %r13
> -        movq  UREGS_r9(%rdi), %r9
> +        movq  UREGS_r9 (%rdi), %r9
>          movq  UREGS_r13(%rdi), %r13
> -        pushq %r14
>          movq  UREGS_r10(%rdi), %r10
>          movq  UREGS_r14(%rdi), %r14
> -        pushq %r15
>          movq  UREGS_r11(%rdi), %r11
>          movq  UREGS_r15(%rdi), %r15
> -        pushq %rcx /* dummy push, filled by guest_to_host_gpr_switch pointer */
> -        pushq %rcx
> -        leaq  guest_to_host_gpr_switch(%rip),%rcx
> -        movq  %rcx,8(%rsp)
>          movq  UREGS_rcx(%rdi), %rcx
>          movq  UREGS_rdi(%rdi), %rdi
>          ret
>  
> -ENTRY(guest_to_host_gpr_switch)
> +/* Save guest GPRs.  Parameter on the stack above the return address. */
> +ENTRY(save_guest_gprs)
>          pushq %rdi
> -        movq  7*8(%rsp), %rdi
> +        movq  2*8(%rsp), %rdi
>          movq  %rax, UREGS_rax(%rdi)
> -        popq  UREGS_rdi(%rdi)
> +        popq        UREGS_rdi(%rdi)
>          movq  %r15, UREGS_r15(%rdi)
>          movq  %r11, UREGS_r11(%rdi)
> -        popq  %r15
>          movq  %r14, UREGS_r14(%rdi)
>          movq  %r10, UREGS_r10(%rdi)
> -        popq  %r14
>          movq  %r13, UREGS_r13(%rdi)
> -        movq  %r9, UREGS_r9(%rdi)
> -        popq  %r13
> +        movq  %r9,  UREGS_r9 (%rdi)
>          movq  %r12, UREGS_r12(%rdi)
> -        movq  %r8, UREGS_r8(%rdi)
> -        popq  %r12
> +        movq  %r8,  UREGS_r8 (%rdi)
>          movq  %rbp, UREGS_rbp(%rdi)
>          movq  %rsi, UREGS_rsi(%rdi)
> -        popq  %rbp
>          movq  %rbx, UREGS_rbx(%rdi)
>          movq  %rdx, UREGS_rdx(%rdi)
> -        popq  %rbx
>          movq  %rcx, UREGS_rcx(%rdi)
> -        popq  %rcx
>          ret

Now that these are ordinary functions (with just a non-standard
call convention), I think - as said before - they should be marked
STT_FUNC in the object. With that, as also said before, I don't
think using char[] for the C-side declarations (leading to
STT_OBJECT) is appropriate - linkers are imo fine to warn about
such a mismatch. To prevent the declaration from being used for
an actual call, use e.g. __attribute__((__warning__())) as already
suggested.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 13:49:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 13:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWgtv-0004zz-5R; Thu, 07 May 2020 13:49:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWgtu-0004zs-8k
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 13:49:22 +0000
X-Inumbo-ID: 8dc266f4-9069-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8dc266f4-9069-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 13:49:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 82BF8AC6C;
 Thu,  7 May 2020 13:49:23 +0000 (UTC)
Subject: Re: [PATCH 14/16] x86/alt: Adjust _alternative_instructions() to not
 create shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-15-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f5d6963d-1db4-1da0-60dd-c7d5734f01bf@suse.com>
Date: Thu, 7 May 2020 15:49:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-15-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> The current alternatives algorithm clears CR0.WP and writes into .text.  This
> has a side effect of the mappings becoming shadow stacks once CET is active.
> 
> Adjust _alternative_instructions() to clean up after itself.  This involves
> extending the set of bits modify_xen_mappings() can modify to include Dirty
> (and Accessed for good measure).
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Thu May 07 14:12:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 14:12:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWhG6-0007Or-2l; Thu, 07 May 2020 14:12:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWhG4-0007Om-NU
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 14:12:16 +0000
X-Inumbo-ID: c001bfc4-906c-11ea-9f22-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c001bfc4-906c-11ea-9f22-12813bfff9fa;
 Thu, 07 May 2020 14:12:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E0F11AA7C;
 Thu,  7 May 2020 14:12:16 +0000 (UTC)
Subject: Re: [PATCH 15/16] x86/entry: Adjust guest paths to be shadow stack
 compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-16-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2df78612-2c24-32de-186a-c402e188478c@suse.com>
Date: Thu, 7 May 2020 16:12:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-16-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> The SYSCALL/SYSEXIT paths need to use {SET,CLR}SSBSY.

I take it you mean SYSRET, not SYSEXIT. I do think though that you
also need to deal with the SYSENTER entry point we have.

> --- a/xen/arch/x86/x86_64/compat/entry.S
> +++ b/xen/arch/x86/x86_64/compat/entry.S
> @@ -198,7 +198,7 @@ ENTRY(cr4_pv32_restore)
>  
>  /* See lstar_enter for entry register state. */
>  ENTRY(cstar_enter)
> -        /* sti could live here when we don't switch page tables below. */
> +        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK

I don't see why you delete the comment here (or elsewhere). While
I recall you not really wanting them there, I still think they're
useful to have, and they shouldn't be deleted as a side effect of
an entirely unrelated change. Of course they need to live after
your insertions then.

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -194,6 +194,15 @@ restore_all_guest:
>          movq  8(%rsp),%rcx            # RIP
>          ja    iret_exit_to_guest
>  
> +        /* Clear the supervisor shadow stack token busy bit. */
> +.macro rag_clrssbsy
> +        push %rax
> +        rdsspq %rax
> +        clrssbsy (%rax)
> +        pop %rax
> +.endm
> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK

In principle you could get away without spilling %rax:

        cmpl  $1,%ecx
        ja    iret_exit_to_guest

        /* Clear the supervisor shadow stack token busy bit. */
.macro rag_clrssbsy
        rdsspq %rcx
        clrssbsy (%rcx)
.endm
        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
        movq  8(%rsp),%rcx            # RIP
        cmpw  $FLAT_USER_CS32,16(%rsp)# CS
        movq  32(%rsp),%rsp           # RSP
        je    1f
        sysretq
1:      sysretl

        ALIGN
/* No special register assumptions. */
iret_exit_to_guest:
        movq  8(%rsp),%rcx            # RIP
        andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
        ...

Also - what about CLRSSBSY failing? It would seem easier to diagnose
this right here than via the presumable #DF on the next entry into
Xen. At the very least I think it deserves a comment if an error case
does not get handled.

Somewhat similar for SETSSBSY, except there things get complicated by
it raising #CP instead of setting EFLAGS.CF: Aiui it would require us
to handle #CP on an IST stack in order to avoid #DF there.

> @@ -877,6 +886,14 @@ handle_ist_exception:
>          movl  $UREGS_kernel_sizeof/8,%ecx
>          movq  %rdi,%rsp
>          rep   movsq
> +
> +        /* Switch Shadow Stacks */
> +.macro ist_switch_shstk
> +        rdsspq %rdi
> +        clrssbsy (%rdi)
> +        setssbsy
> +.endm

Could you extend the comment to mention the caveat that you point
out in the description?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 14:39:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 14:39:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWhg7-0000sk-Mv; Thu, 07 May 2020 14:39:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWhg6-0000sf-GF
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 14:39:10 +0000
X-Inumbo-ID: 82aebdc4-9070-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82aebdc4-9070-11ea-9887-bc764e2007e4;
 Thu, 07 May 2020 14:39:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gSYmJ0hDIZLoDAOyHAU95Plp5/df3pghogEczXFw4PU=; b=1Yl98ZMbC9OxrqkAqlaNF7rLO
 Zpm51PXt4YDE/ak2cnCwEbqo/uOMO8UT2/NXjkF3e+f1DTDdFskc2npV5jUXGMm9MfskMacLZIDEk
 FBYfqLGMYeCE0p9huiRMsuv9rVNvuv8OifrS8Nve6GC649GT7xNBikH9ONjWTjvJ6bBmw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWhg5-0008St-0x; Thu, 07 May 2020 14:39:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWhg4-0002vY-Oe; Thu, 07 May 2020 14:39:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWhg4-0007A4-Nl; Thu, 07 May 2020 14:39:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150062-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150062: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-xsm:xen-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=eea5d63a221a8f36a3ed5b1189fe619d4fa1fde2
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 14:39:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150062 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150062/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-xsm                6 xen-build                fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              eea5d63a221a8f36a3ed5b1189fe619d4fa1fde2
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  111 days
Failing since        146211  2020-01-18 04:18:52 Z  110 days  101 attempts
Testing same since   150062  2020-05-07 05:18:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17243 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 07 14:44:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 14:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWhku-0001im-AN; Thu, 07 May 2020 14:44:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8pIK=6V=amazon.com=prvs=389e06f2e=jgrall@srs-us1.protection.inumbo.net>)
 id 1jWhkt-0001ig-0C
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 14:44:07 +0000
X-Inumbo-ID: 33fd2fac-9071-11ea-9887-bc764e2007e4
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33fd2fac-9071-11ea-9887-bc764e2007e4;
 Thu, 07 May 2020 14:44:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1588862647; x=1620398647;
 h=to:cc:from:subject:message-id:date:mime-version:
 content-transfer-encoding;
 bh=b4jvw6bVDJC92arf13SnRUtWEV9x0v/aLL5pNKXnqms=;
 b=Wd11rcaIeTPyWY4HMvuadkfh+gkbH8E3Kg13V62LHzx3Wv5zdUD7w4Wb
 5qizHO8Ti6ENP0Aw5w45P2PH55SY6GpK9/bed6/oaEaQBW7a8ggx5FJP4
 edPQd3PQEv6ko1FJlOpUqToHnAeJKjX5ue6t6tR3atYDjlKIJbs8X9Iud Q=;
IronPort-SDR: kG6Ge/Fjn4Tu/K62QgCu76tKJa5rCmY2MU8AJ62krU1Vn9TrEjg7nWjrhTCPgSMJigCwoShHVb
 sW8JS3d3XNGQ==
X-IronPort-AV: E=Sophos;i="5.73,364,1583193600"; d="scan'208";a="29094016"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-1d-5dd976cd.us-east-1.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 07 May 2020 14:43:54 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-5dd976cd.us-east-1.amazon.com (Postfix) with ESMTPS
 id 673ECA2399; Thu,  7 May 2020 14:43:53 +0000 (UTC)
Received: from EX13D08UEE001.ant.amazon.com (10.43.62.126) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 7 May 2020 14:43:50 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEE001.ant.amazon.com (10.43.62.126) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 7 May 2020 14:43:50 +0000
Received: from a483e7b01a66.ant.amazon.com (10.85.91.252) by
 mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Thu, 7 May 2020 14:43:49 +0000
To: "committers@xenproject.org" <committers@xenproject.org>
From: Julien Grall <jgrall@amazon.com>
Subject: Xen Coding style
Message-ID: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
Date: Thu, 7 May 2020 15:43:49 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "Woodhouse, David" <dwmw@amazon.co.uk>, "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

This was originally discussed during the last community call.

A major part of the comments during review is related to coding style 
issues. This can lead to frustration for the contributor, as the current 
CODING_STYLE is quite light and the choice is often down to a matter of 
taste from the maintainers.

In the past, Lars tried to address the problem by introducing a coding 
style checker (see [1] and [2]). However, the work came to a stop because 
we couldn't agree on what the Xen coding style is.

I would like to suggest a different approach. Rather than trying to 
agree on all the rules one by one, I propose to have a vote on different 
coding styles and pick the preferred one.

The list of coding styles would come from the community. It could be 
coding styles already used in the open source community or new coding 
styles (for instance a contributor could write down his/her 
understanding of Xen Coding Style).

Are the committers happy with this approach? If so, I could send a 
formal call for coding styles.

Cheers,

[1] <20190718144317.23307-1-tamas@tklengyel.com>
[2] <CAOcoXZavLnHhNc7mmHnO5Gi_vq-0j1FCgvpXh0NimAewXX8e1g@mail.gmail.com>


From xen-devel-bounces@lists.xenproject.org Thu May 07 14:54:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 14:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWhup-0002cP-Ho; Thu, 07 May 2020 14:54:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWhuo-0002cK-Gs
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 14:54:22 +0000
X-Inumbo-ID: a1dced18-9072-11ea-9f32-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a1dced18-9072-11ea-9f32-12813bfff9fa;
 Thu, 07 May 2020 14:54:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5A892AD12;
 Thu,  7 May 2020 14:54:22 +0000 (UTC)
Subject: Re: [PATCH 16/16] x86/shstk: Activate Supervisor Shadow Stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-17-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eacafb0a-a049-5bca-7a43-c9c3deb26054@suse.com>
Date: Thu, 7 May 2020 16:54:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200501225838.9866-17-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 02.05.2020 00:58, Andrew Cooper wrote:
> --- a/xen/arch/x86/acpi/wakeup_prot.S
> +++ b/xen/arch/x86/acpi/wakeup_prot.S
> @@ -1,3 +1,8 @@
> +#include <asm/config.h>

Why is this needed? Afaics assembly files, just like C ones, get
xen/config.h included from the compiler command line.

> @@ -48,6 +59,48 @@ ENTRY(s3_resume)
>          pushq   %rax
>          lretq
>  1:
> +#ifdef CONFIG_XEN_SHSTK
> +	/*
> +         * Restoring SSP is a little convoluted, because we are intercepting
> +         * the middle of an in-use shadow stack.  Write a temporary supervisor
> +         * token under the stack, so SETSSBSY takes us where we want, then
> +         * reset MSR_PL0_SSP to its usual value and pop the temporary token.

What do you mean by "takes us where we want"? I take it "us" is really
SSP here?

> +         */
> +        mov     saved_rsp(%rip), %rdi
> +        cmpq    $1, %rdi
> +        je      .L_shstk_done
> +
> +        /* Write a supervisor token under SSP. */
> +        sub     $8, %rdi
> +        mov     %rdi, (%rdi)
> +
> +        /* Load it into MSR_PL0_SSP. */
> +        mov     $MSR_PL0_SSP, %ecx
> +        mov     %rdi, %rdx
> +        shr     $32, %rdx
> +        mov     %edi, %eax
> +
> +        /* Enable CET. */
> +        mov     $MSR_S_CET, %ecx
> +        xor     %edx, %edx
> +        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
> +        wrmsr
> +
> +        /* Activate our temporary token. */
> +        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ebx
> +        mov     %rbx, %cr4
> +        setssbsy
> +
> +        /* Reset MSR_PL0_SSP back to its expected value. */
> +        and     $~(STACK_SIZE - 1), %eax
> +        or      $0x5ff8, %eax
> +        wrmsr

Ahead of this WRMSR neither %ecx nor %edx look to have their intended
values anymore. Also, there is again a magic 0x5ff8 here (and at
least one more further down).

> --- a/xen/arch/x86/boot/x86_64.S
> +++ b/xen/arch/x86/boot/x86_64.S
> @@ -28,8 +28,36 @@ ENTRY(__high_start)
>          lretq
>  1:
>          test    %ebx,%ebx
> -        jnz     start_secondary
> +        jz      .L_bsp
>  
> +        /* APs.  Set up shadow stacks before entering C. */
> +
> +        testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
> +                CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
> +        je      .L_ap_shstk_done
> +
> +        mov     $MSR_S_CET, %ecx
> +        xor     %edx, %edx
> +        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
> +        wrmsr
> +
> +        mov     $MSR_PL0_SSP, %ecx
> +        mov     %rsp, %rdx
> +        shr     $32, %rdx
> +        mov     %esp, %eax
> +        and     $~(STACK_SIZE - 1), %eax
> +        or      $0x5ff8, %eax
> +        wrmsr
> +
> +        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
> +        mov     %rcx, %cr4
> +        setssbsy

Since the token doesn't get written here, could you make the comment
say where this happens? I have to admit that I had to go through
earlier patches to find it again.

> +.L_ap_shstk_done:
> +        call    start_secondary
> +        BUG     /* start_secondary() shouldn't return. */

This conversion from a jump to CALL is unrelated and hence would be
better mentioned in the description, imo.

> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -323,6 +323,11 @@ void __init early_cpu_init(void)
>  	       x86_cpuid_vendor_to_str(c->x86_vendor), c->x86, c->x86,
>  	       c->x86_model, c->x86_model, c->x86_mask, eax);
>  
> +	if (c->cpuid_level >= 7) {
> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
> +		c->x86_capability[cpufeat_word(X86_FEATURE_CET_SS)] = ecx;
> +	}

How about moving the leaf 7 code from generic_identify() here as
a whole?

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -664,6 +664,13 @@ static void __init noreturn reinit_bsp_stack(void)
>      stack_base[0] = stack;
>      memguard_guard_stack(stack);
>  
> +    if ( cpu_has_xen_shstk )
> +    {
> +        wrmsrl(MSR_PL0_SSP, (unsigned long)stack + 0x5ff8);
> +        wrmsrl(MSR_S_CET, CET_SHSTK_EN | CET_WRSS_EN);
> +        asm volatile ("setssbsy" ::: "memory");
> +    }

Same as for APs - a brief comment pointing at where the token was
written would seem helpful.

Could you also have the patch description say a word on the choice
of enabling CET_WRSS_EN uniformly and globally?

> @@ -985,6 +992,21 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>      /* This must come before e820 code because it sets paddr_bits. */
>      early_cpu_init();
>  
> +    /* Choose shadow stack early, to set infrastructure up appropriately. */
> +    if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
> +    {
> +        printk("Enabling Supervisor Shadow Stacks\n");
> +
> +        setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
> +#ifdef CONFIG_PV32
> +        if ( opt_pv32 )
> +        {
> +            opt_pv32 = 0;
> +            printk("  - Disabling PV32 due to Shadow Stacks\n");
> +        }
> +#endif

I think this deserves an explanation, either in a comment or in
the patch description.

> @@ -1721,6 +1743,10 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>  
>      alternative_branches();
>  
> +    /* Defer CR4.CET until alternatives have finished playing with CR4.WP */
> +    if ( cpu_has_xen_shstk )
> +        set_in_cr4(X86_CR4_CET);

Nit: CR0.WP (in the comment)

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 15:07:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 15:07:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWi7E-0003Zs-To; Thu, 07 May 2020 15:07:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWi7D-0003Zn-Ob
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 15:07:11 +0000
X-Inumbo-ID: 6a1b8414-9074-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a1b8414-9074-11ea-b9cf-bc764e2007e4;
 Thu, 07 May 2020 15:07:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1G07s6hFYY2+hJ1htkGRQMQF9w8Tm74ss2bdDxDkiK8=; b=w8cp4xfQwgFWGkKYP0u/ySV2B
 ZwGVJKkec6KzWosqBnV+TIeJSy9A2TrF3EZwCxLDhbdQ/0OBVDojr4mjCKVmtmWiR8EsCYlXxWxTp
 R4CCejxUNfsQdP7r3Gba/d8+WL5Oq1eXaTDS3VbgzzW9w1FwJqpkT234WlersBS95kUmI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWi77-0000ah-Mq; Thu, 07 May 2020 15:07:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWi77-0003wT-9h; Thu, 07 May 2020 15:07:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWi77-0007uB-6f; Thu, 07 May 2020 15:07:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150069-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150069: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=40675b4b874cb9fee0d4f0e12bb3e153ee1c135a
X-Osstest-Versions-That: xen=8a6b1665d987d043c12dc723d758a7d2ca765264
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 15:07:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150069 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150069/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  40675b4b874cb9fee0d4f0e12bb3e153ee1c135a
baseline version:
 xen                  8a6b1665d987d043c12dc723d758a7d2ca765264

Last test of basis   150059  2020-05-06 17:02:47 Z    0 days
Testing same since   150069  2020-05-07 12:01:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8a6b1665d9..40675b4b87  40675b4b874cb9fee0d4f0e12bb3e153ee1c135a -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 07 15:24:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 15:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWiNd-0005CW-CA; Thu, 07 May 2020 15:24:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWiNc-0005CR-4t
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 15:24:08 +0000
X-Inumbo-ID: ca563e30-9076-11ea-9f3b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ca563e30-9076-11ea-9f3b-12813bfff9fa;
 Thu, 07 May 2020 15:24:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0r/0VbGtJ55PRX+KBwV+FN6AMsWKsMkkCymabN1qhco=; b=vCJD8S9Bz+uMQFWtW8ZXES6BE
 SC0GsjB4k0nCuOv9yUpLnT1RVOVFQ1KD73eWQDd3t8Kwcq0bvtiKhe0S5DIQtjCG8gnxKzm8THtus
 sW1eH0tLY02beC1g6trMg7ccu9JmSDue+E1eFNgEqjtg17YqT2EvwcblunuDaHvSLMi60=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWiNa-0000tv-6u; Thu, 07 May 2020 15:24:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWiNZ-0004lB-NQ; Thu, 07 May 2020 15:24:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWiNZ-0001V3-Mn; Thu, 07 May 2020 15:24:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150063-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150063: all pass - PUSHED
X-Osstest-Versions-This: ovmf=8293e6766a884918a6b608c64543caab49870597
X-Osstest-Versions-That: ovmf=f159102a130fac9b416418eb9b6fa35b63426ca5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 15:24:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150063 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150063/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8293e6766a884918a6b608c64543caab49870597
baseline version:
 ovmf                 f159102a130fac9b416418eb9b6fa35b63426ca5

Last test of basis   150050  2020-05-05 20:40:09 Z    1 days
Testing same since   150063  2020-05-07 05:27:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Abner Chang <abner.chang@hpe.com>
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Eric Dong <eric.dong@intel.com>
  Guomin Jiang <guomin.jiang@intel.com>
  Hao A Wu <hao.a.wu@intel.com>
  Krzysztof Koch <krzysztof.koch@arm.com>
  Kun Qin <kuqin@microsoft.com>
  Leo Duran <leo.duran@amd.com>
  Nikita Leshenko <nikita.leshchenko@oracle.com>
  Siyuan Fu <siyuan.fu@intel.com>
  Wei6 Xu <wei6.xu@intel.com>
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f159102a13..8293e6766a  8293e6766a884918a6b608c64543caab49870597 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 07 15:35:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 15:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWiY9-00065O-E7; Thu, 07 May 2020 15:35:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWiY9-00065J-5I
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 15:35:01 +0000
X-Inumbo-ID: 4c7dffaa-9078-11ea-9f3b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4c7dffaa-9078-11ea-9f3b-12813bfff9fa;
 Thu, 07 May 2020 15:34:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=DgY03X4u8Qi2ulA0Fz0tJM5bC8bDNaYACHBkEng/2uA=; b=e8yGdo5U6p1o11hnEFFDjyeiD
 CIakciJa+QLP8PA5v2/VnTh0qEG374XyQsKGhaSA+msp1J6SZBz7xrT6iNrzPfvrN8nSS+KAimexQ
 GSCcbMN69HDd/la4SRQCo60L3MmzScu095hwGXijZatZ2czdwk9cRogWVN2YPdGvNSOlw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWiY2-000169-1O; Thu, 07 May 2020 15:34:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWiY1-0005Db-M4; Thu, 07 May 2020 15:34:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWiY1-0006ME-LZ; Thu, 07 May 2020 15:34:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150061-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150061: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=570a9214827e3d42f7173c4d4c9f045b99834cf0
X-Osstest-Versions-That: qemuu=f19d118bed77bb95681b07f4e76dbb700c16918d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 15:34:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150061 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150061/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150043
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150043
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150043
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150043
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150043
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150043
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150043
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                570a9214827e3d42f7173c4d4c9f045b99834cf0
baseline version:
 qemuu                f19d118bed77bb95681b07f4e76dbb700c16918d

Last test of basis   150043  2020-05-05 16:07:08 Z    1 days
Testing same since   150061  2020-05-06 22:06:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Alistair Francis <alistair.francis@wdc.com>
  Eric Blake <eblake@redhat.com>
  Joaquin de Andres <me@xcancerberox.com.ar>
  John Snow <jsnow@redhat.com>
  KONRAD Frederic <frederic.konrad@adacore.com>
  Li-Wen Hsu <lwhsu@freebsd.org>
  Li-Wen Hsu <lwhsu@lwhsu.org>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   f19d118bed..570a921482  570a9214827e3d42f7173c4d4c9f045b99834cf0 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu May 07 15:50:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 15:50:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWinB-0007en-OJ; Thu, 07 May 2020 15:50:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWinA-0007ei-Gz
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 15:50:32 +0000
X-Inumbo-ID: 7ac3c85c-907a-11ea-ae69-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ac3c85c-907a-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 15:50:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588866631;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=K6ZR3PXY4Dd0L8GPKdMvC5aZb+GvqcltHa0T2DMQd9k=;
 b=WVFpTEZgMM8d9STeEKQcEmkGLWa7nraVk/NIdPLGycGxooLnmQNnUwt4
 wQtkcGdn12xGiZ00qEDcmL3hDUTi8quk+cLK4tzLgO9iqJKVGpULgsTHT
 sGbI2SmoVqBIgDNZFiOUS/8+f2ngRYM+IBqzntdCeH+pt1DDYlTPXdBpy 4=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: iFNkUIlYTvERwQDDESQtbzzPgDl/Qi+1+4Vrv83R7ryE58arRLw/vo9dRbseKkQa3FehAcaqX+
 9tcastObtS3ym4RMoVpMMEnxbbV+byx9sqH5L28FG8zwGITZKFStzLcG4CkEpU81dRj6AbnOng
 JIOKbiRO75Sa9s5+wfImQrR/ionsQRlVTQt7yoI2VB4AFG2iK0ou4mbVlnJQdSLEJQtrA0Co0Y
 8f/wmoQxB5+SOLruUm6hyXgDaoun3JHXOaknQVWdkRKMex0qW4lCyghQuv72oG3xfeW78NKQBL
 O0o=
X-SBRS: 2.7
X-MesageID: 17273235
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,364,1583211600"; d="scan'208";a="17273235"
Subject: Re: [PATCH 15/16] x86/entry: Adjust guest paths to be shadow stack
 compatible
To: Jan Beulich <jbeulich@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-16-andrew.cooper3@citrix.com>
 <2df78612-2c24-32de-186a-c402e188478c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <70d7b0e0-a599-6a19-5ace-af4f169545b3@citrix.com>
Date: Thu, 7 May 2020 16:50:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2df78612-2c24-32de-186a-c402e188478c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07/05/2020 15:12, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> The SYSCALL/SYSEXIT paths need to use {SET,CLR}SSBSY.
> I take it you mean SYSRET, not SYSEXIT.

I do, sorry.

> I do think though that you
> also need to deal with the SYSENTER entry point we have.

Oh - so we do.

>> --- a/xen/arch/x86/x86_64/compat/entry.S
>> +++ b/xen/arch/x86/x86_64/compat/entry.S
>> @@ -198,7 +198,7 @@ ENTRY(cr4_pv32_restore)
>>  
>>  /* See lstar_enter for entry register state. */
>>  ENTRY(cstar_enter)
>> -        /* sti could live here when we don't switch page tables below. */
>> +        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
> I don't see why you delete the comment here (or elsewhere). While
> I recall you not really wanting them there, I still think they're
> useful to have, and they shouldn't be deleted as a side effect of
> an entirely unrelated change. Of course they need to live after
> your insertions then.

Do you not remember Juergen's performance-testing results concerning this
comment?  The results were provably worse.

It is a useless comment.  Sure, it's technically accurate, but so is an
arbitrarily large number of other comments about how we could permute
the code.

It has already been concluded that we won't be making the suggested
change.  A /* TODO - doing X will make the system slower */ isn't
something we should have; it adds to the complexity of the code and
tricks people into thinking that something should be done.

>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -194,6 +194,15 @@ restore_all_guest:
>>          movq  8(%rsp),%rcx            # RIP
>>          ja    iret_exit_to_guest
>>  
>> +        /* Clear the supervisor shadow stack token busy bit. */
>> +.macro rag_clrssbsy
>> +        push %rax
>> +        rdsspq %rax
>> +        clrssbsy (%rax)
>> +        pop %rax
>> +.endm
>> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
> In principle you could get away without spilling %rax:
>
>         cmpl  $1,%ecx
>         ja    iret_exit_to_guest
>
>         /* Clear the supervisor shadow stack token busy bit. */
> .macro rag_clrssbsy
>         rdsspq %rcx
>         clrssbsy (%rcx)
> .endm
>         ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>         movq  8(%rsp),%rcx            # RIP
>         cmpw  $FLAT_USER_CS32,16(%rsp)# CS
>         movq  32(%rsp),%rsp           # RSP
>         je    1f
>         sysretq
> 1:      sysretl
>
>         ALIGN
> /* No special register assumptions. */
> iret_exit_to_guest:
>         movq  8(%rsp),%rcx            # RIP
>         andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
>         ...
>
> Also - what about CLRSSBSY failing? It would seem easier to diagnose
> this right here than when getting presumably #DF upon next entry into
> Xen. At the very least I think it deserves a comment if an error case
> does not get handled.

I did consider this, but ultimately decided against it.

You can't have an unlikely block inside an alternative block because the
jmp's displacement doesn't get fixed up.  Keeping everything inline puts
an incorrectly statically-predicted branch in program flow.

Most importantly, however, the SYSRET path is vastly less common than
the IRET path.  There is no easy way to proactively spot problems in
the IRET path, which means that conditions leading to a problem are
already far more likely to manifest as #DF, so there is very little
value in adding complexity to the SYSRET path in the first place.

> Somewhat similar for SETSSBSY, except there things get complicated by
> it raising #CP instead of setting EFLAGS.CF: Aiui it would require us
> to handle #CP on an IST stack in order to avoid #DF there.

Right, but having #CP as IST gives us far worse problems.

Being able to spot #CP vs #DF doesn't help usefully.  It's still some
arbitrary period of time after the damage was done.

Any nesting of #CP (including fault on IRET out) results in losing
program state and entering an infinite loop.

The cases which end up as #DF are properly fatal to the system, and we
at least get a clean crash out of it.

>> @@ -877,6 +886,14 @@ handle_ist_exception:
>>          movl  $UREGS_kernel_sizeof/8,%ecx
>>          movq  %rdi,%rsp
>>          rep   movsq
>> +
>> +        /* Switch Shadow Stacks */
>> +.macro ist_switch_shstk
>> +        rdsspq %rdi
>> +        clrssbsy (%rdi)
>> +        setssbsy
>> +.endm
> Could you extend the comment to mention the caveat that you point
> out in the description?

Ok.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 07 15:58:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 15:58:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWiuo-0007sx-Nz; Thu, 07 May 2020 15:58:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=i2mO=6V=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jWium-0007ss-VN
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 15:58:25 +0000
X-Inumbo-ID: 940bd7a4-907b-11ea-9f41-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 940bd7a4-907b-11ea-9f41-12813bfff9fa;
 Thu, 07 May 2020 15:58:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588867102;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=4EKKX4DD41dlWkrxPkF9kh2E0OsCPUD9b25qsC4iepo=;
 b=S96BpJtQ8JmRi9Efdtr/w9KchRQoVPW2mlmHoviueuQtf3ON7vRhzTTiv21heYavcNl9tF
 3UmRn/s/OXhxNdi+YZQlN8X5b9QqATxo7kg+TPhDr3BPTmbL8eV9Hgyom2c+JE0ijBHeWM
 gcDdSv4CdyvcKmh9uHoPs0cjLNq/BnE=
Received: from mail-wm1-f69.google.com (mail-wm1-f69.google.com
 [209.85.128.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-112-NZn3ngqNOsaB4LN0LqWKrQ-1; Thu, 07 May 2020 11:58:19 -0400
X-MC-Unique: NZn3ngqNOsaB4LN0LqWKrQ-1
Received: by mail-wm1-f69.google.com with SMTP id n17so2767598wmi.3
 for <xen-devel@lists.xenproject.org>; Thu, 07 May 2020 08:58:19 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=IuiGhxyK+bFATZ+BUnJ0ElhU+c2I1ep1eGewd6Bycyk=;
 b=F3c1/slaDP/jVztwn4KXiaaKbP6HhcMpumRYeOE0MGMNfNF5oM56as+jQsHoih3AT7
 xuQZTd8un4BpM1JzWQFJeeE6Tp+daxEuaeNXtZviQmxCUtmT4N5hjCGKeMoA/fSJ1lhN
 YIGDIiE6ag1gudatiAfXawqlTwfVdrUmR3Y/+0zE2gD9EayoI6WsVXaANe2K8Sr94Oai
 BbBrkifwG1ary/WhnMDjeOq+nhtuGNn9RsP2xl9FOiyjScrpBiSPXfDX3i8xFN0Qe8/0
 inQQWa06/TAtyrrF6eLJQvdxCJfARTuZpIgsoDE4koFXzFLXYMYSnyBcwfkHHqvm9lwk
 WnFQ==
X-Gm-Message-State: AGi0PuaAFGfM6ZIX8irF8PGyjKQgPsFkzH/AotlSiRBGpIfuhq2p+W5e
 1Nx/Mox4Aq0FPiruEkkqHl0q/8aOG+srBXsA+pHe4aWe9OcylIGe6tQ1EkUog8GUmhwdYMXfLNy
 n8Ti/AGD3sWoRFfMcL4t5ITD8fms=
X-Received: by 2002:a1c:2b81:: with SMTP id r123mr10978866wmr.34.1588867097403; 
 Thu, 07 May 2020 08:58:17 -0700 (PDT)
X-Google-Smtp-Source: APiQypK1Ezf6gAWP66zkKrbj9OpjlzAO894wr22cGVpQzjySS8uDR/sLKI6jnyOT+YRkiE0IIfw3Pw==
X-Received: by 2002:a1c:2b81:: with SMTP id r123mr10978817wmr.34.1588867096950; 
 Thu, 07 May 2020 08:58:16 -0700 (PDT)
Received: from localhost.localdomain (187.red-88-21-201.staticip.rima-tde.net.
 [88.21.201.187])
 by smtp.gmail.com with ESMTPSA id d9sm9260216wrg.60.2020.05.07.08.58.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 07 May 2020 08:58:16 -0700 (PDT)
From: Philippe Mathieu-Daudé <philmd@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH] accel: Move Xen accelerator code under accel/xen/
Date: Thu,  7 May 2020 17:58:13 +0200
Message-Id: <20200507155813.16169-1-philmd@redhat.com>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Igor Mammedov <imammedo@redhat.com>,
 Juan Quintela <quintela@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Paul Durrant <paul@xen.org>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Philippe Mathieu-Daudé <philmd@redhat.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This code is not related to hardware emulation.
Move it under accel/ with the other hypervisors.

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
We could also move the memory management functions from
hw/i386/xen/xen-hvm.c but it is not trivial.

Necessary step to remove "exec/ram_addr.h" in the next series.
---
 include/exec/ram_addr.h                    |  2 +-
 include/hw/xen/xen.h                       | 11 ------
 include/sysemu/xen.h                       | 40 ++++++++++++++++++++++
 hw/xen/xen-common.c => accel/xen/xen-all.c   |  3 ++
 hw/acpi/piix4.c                            |  2 +-
 hw/i386/pc.c                               |  1 +
 hw/i386/pc_piix.c                          |  1 +
 hw/i386/pc_q35.c                           |  1 +
 hw/i386/xen/xen-hvm.c                      |  1 +
 hw/i386/xen/xen_platform.c                 |  1 +
 hw/isa/piix3.c                             |  1 +
 hw/pci/msix.c                              |  1 +
 migration/savevm.c                         |  2 +-
 softmmu/vl.c                               |  2 +-
 stubs/xen-hvm.c                            |  9 -----
 target/i386/cpu.c                          |  2 +-
 MAINTAINERS                                |  2 ++
 accel/Makefile.objs                        |  1 +
 accel/xen/Makefile.objs                    |  1 +
 hw/xen/Makefile.objs                       |  2 +-
 20 files changed, 60 insertions(+), 26 deletions(-)
 create mode 100644 include/sysemu/xen.h
 rename hw/xen/xen-common.c => accel/xen/xen-all.c (99%)
 create mode 100644 accel/xen/Makefile.objs

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..4e05292f91 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -21,7 +21,7 @@
 
 #ifndef CONFIG_USER_ONLY
 #include "cpu.h"
-#include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "sysemu/tcg.h"
 #include "exec/ramlist.h"
 #include "exec/ramblock.h"
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 5ac1c6dc55..771dd447f2 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -20,13 +20,6 @@ extern uint32_t xen_domid;
 extern enum xen_mode xen_mode;
 extern bool xen_domid_restrict;
 
-extern bool xen_allowed;
-
-static inline bool xen_enabled(void)
-{
-    return xen_allowed;
-}
-
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
@@ -39,10 +32,6 @@ void xenstore_store_pv_console_info(int i, struct Chardev *chr);
 
 void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory);
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
-                   struct MemoryRegion *mr, Error **errp);
-void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
-
 void xen_register_framebuffer(struct MemoryRegion *mr);
=20
 #endif /* QEMU_HW_XEN_H */
diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
new file mode 100644
index 0000000000..609d2d4184
--- /dev/null
+++ b/include/sysemu/xen.h
@@ -0,0 +1,40 @@
+/*
+ * QEMU Xen support
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef SYSEMU_XEN_H
+#define SYSEMU_XEN_H
+
+#ifdef CONFIG_XEN
+
+extern bool xen_allowed;
+
+#define xen_enabled() (xen_allowed)
+
+#ifndef CONFIG_USER_ONLY
+void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
+void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
+                   struct MemoryRegion *mr, Error **errp);
+#endif
+
+#else /* !CONFIG_XEN */
+
+#define xen_enabled() 0
+#ifndef CONFIG_USER_ONLY
+static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+    abort();
+}
+static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
+                                 MemoryRegion *mr, Error **errp)
+{
+    abort();
+}
+#endif
+
+#endif /* CONFIG_XEN */
+
+#endif
diff --git a/hw/xen/xen-common.c b/accel/xen/xen-all.c
similarity index 99%
rename from hw/xen/xen-common.c
rename to accel/xen/xen-all.c
index a15070f7f6..5c64571cb8 100644
--- a/hw/xen/xen-common.c
+++ b/accel/xen/xen-all.c
@@ -16,6 +16,7 @@
 #include "hw/xen/xen_pt.h"
 #include "chardev/char.h"
 #include "sysemu/accel.h"
+#include "sysemu/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/misc.h"
 #include "migration/global_state.h"
@@ -31,6 +32,8 @@
     do { } while (0)
 #endif
 
+bool xen_allowed;
+
 xc_interface *xen_xc;
 xenforeignmemory_handle *xen_fmem;
 xendevicemodel_handle *xen_dmod;
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 964d6f5990..daed273687 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -30,6 +30,7 @@
 #include "hw/acpi/acpi.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "qapi/error.h"
 #include "qemu/range.h"
 #include "exec/address-spaces.h"
@@ -41,7 +42,6 @@
 #include "hw/mem/nvdimm.h"
 #include "hw/acpi/memory_hotplug.h"
 #include "hw/acpi/acpi_dev_interface.h"
-#include "hw/xen/xen.h"
 #include "migration/vmstate.h"
 #include "hw/core/cpu.h"
 #include "trace.h"
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 97e345faea..1a599e1de9 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -56,6 +56,7 @@
 #include "sysemu/tcg.h"
 #include "sysemu/numa.h"
 #include "sysemu/kvm.h"
+#include "sysemu/xen.h"
 #include "sysemu/qtest.h"
 #include "sysemu/reset.h"
 #include "sysemu/runstate.h"
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 3862e5120e..c00472b4c5 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -53,6 +53,7 @@
 #include "cpu.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
+#include "sysemu/xen.h"
 #ifdef CONFIG_XEN
 #include <xen/hvm/hvm_info_table.h>
 #include "hw/xen/xen_pt.h"
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 3349e38a4c..e929749d8e 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -36,6 +36,7 @@
 #include "hw/rtc/mc146818rtc.h"
 #include "hw/xen/xen.h"
 #include "sysemu/kvm.h"
+#include "sysemu/xen.h"
 #include "hw/kvm/clock.h"
 #include "hw/pci-host/q35.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 82ece6b9e7..041303a2fa 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -28,6 +28,7 @@
 #include "qemu/range.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "sysemu/xen-mapcache.h"
 #include "trace.h"
 #include "exec/address-spaces.h"
diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
index 0f7b05e5e1..a1492fdecd 100644
--- a/hw/i386/xen/xen_platform.c
+++ b/hw/i386/xen/xen_platform.c
@@ -33,6 +33,7 @@
 #include "hw/xen/xen-legacy-backend.h"
 #include "trace.h"
 #include "exec/address-spaces.h"
+#include "sysemu/xen.h"
 #include "sysemu/block-backend.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index fd1c78879f..1a5267e19f 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -28,6 +28,7 @@
 #include "hw/irq.h"
 #include "hw/isa/isa.h"
 #include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/reset.h"
 #include "sysemu/runstate.h"
diff --git a/hw/pci/msix.c b/hw/pci/msix.c
index 29187898f2..2c7ead7667 100644
--- a/hw/pci/msix.c
+++ b/hw/pci/msix.c
@@ -19,6 +19,7 @@
 #include "hw/pci/msix.h"
 #include "hw/pci/pci.h"
 #include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "migration/qemu-file-types.h"
 #include "migration/vmstate.h"
 #include "qemu/range.h"
diff --git a/migration/savevm.c b/migration/savevm.c
index c00a6807d9..b979ea6e7f 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -28,7 +28,6 @@
 
 #include "qemu/osdep.h"
 #include "hw/boards.h"
-#include "hw/xen/xen.h"
 #include "net/net.h"
 #include "migration.h"
 #include "migration/snapshot.h"
@@ -59,6 +58,7 @@
 #include "sysemu/replay.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "qjson.h"
 #include "migration/colo.h"
 #include "qemu/bitmap.h"
diff --git a/softmmu/vl.c b/softmmu/vl.c
index afd2615fb3..0344e5fd2e 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -36,6 +36,7 @@
 #include "sysemu/runstate.h"
 #include "sysemu/seccomp.h"
 #include "sysemu/tcg.h"
+#include "sysemu/xen.h"
 
 #include "qemu/error-report.h"
 #include "qemu/sockets.h"
@@ -178,7 +179,6 @@ static NotifierList exit_notifiers =
 static NotifierList machine_init_done_notifiers =
     NOTIFIER_LIST_INITIALIZER(machine_init_done_notifiers);
 
-bool xen_allowed;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
 bool xen_domid_restrict;
diff --git a/stubs/xen-hvm.c b/stubs/xen-hvm.c
index b7d53b5e2f..6954a5b696 100644
--- a/stubs/xen-hvm.c
+++ b/stubs/xen-hvm.c
@@ -35,11 +35,6 @@ int xen_is_pirq_msi(uint32_t msi_data)
     return 0;
 }
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
-                   Error **errp)
-{
-}
-
 qemu_irq *xen_interrupt_controller_init(void)
 {
     return NULL;
@@ -49,10 +44,6 @@ void xen_register_framebuffer(MemoryRegion *mr)
 {
 }
 
-void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
-{
-}
-
 void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
 {
 }
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 9c256ab159..f9b3ef1ef2 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -29,6 +29,7 @@
 #include "sysemu/reset.h"
 #include "sysemu/hvf.h"
 #include "sysemu/cpus.h"
+#include "sysemu/xen.h"
 #include "kvm_i386.h"
 #include "sev_i386.h"
 
@@ -54,7 +55,6 @@
 #include "hw/i386/topology.h"
 #ifndef CONFIG_USER_ONLY
 #include "exec/address-spaces.h"
-#include "hw/xen/xen.h"
 #include "hw/i386/apic_internal.h"
 #include "hw/boards.h"
 #endif
diff --git a/MAINTAINERS b/MAINTAINERS
index 1f84e3ae2c..95ddddfb1d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -438,6 +438,7 @@ M: Paul Durrant <paul@xen.org>
 L: xen-devel@lists.xenproject.org
 S: Supported
 F: */xen*
+F: accel/xen/*
 F: hw/9pfs/xen-9p*
 F: hw/char/xen_console.c
 F: hw/display/xenfb.c
@@ -451,6 +452,7 @@ F: hw/i386/xen/
 F: hw/pci-host/xen_igd_pt.c
 F: include/hw/block/dataplane/xen*
 F: include/hw/xen/
+F: include/sysemu/xen.h
 F: include/sysemu/xen-mapcache.h
 
 Guest CPU Cores (HAXM)
diff --git a/accel/Makefile.objs b/accel/Makefile.objs
index 17e5ac6061..ff72f0d030 100644
--- a/accel/Makefile.objs
+++ b/accel/Makefile.objs
@@ -2,4 +2,5 @@ common-obj-$(CONFIG_SOFTMMU) += accel.o
 obj-$(call land,$(CONFIG_SOFTMMU),$(CONFIG_POSIX)) += qtest.o
 obj-$(CONFIG_KVM) += kvm/
 obj-$(CONFIG_TCG) += tcg/
+obj-$(CONFIG_XEN) += xen/
 obj-y += stubs/
diff --git a/accel/xen/Makefile.objs b/accel/xen/Makefile.objs
new file mode 100644
index 0000000000..7482cfb436
--- /dev/null
+++ b/accel/xen/Makefile.objs
@@ -0,0 +1 @@
+obj-y += xen-all.o
diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index 84df60a928..340b2c5096 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -1,5 +1,5 @@
 # xen backend driver support
-common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-common.o xen-bus.o xen-bus-helper.o xen-backend.o
+common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
 
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) +=3D xen-host-pci-device.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_graphics.o xen_pt_msi.o
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Thu May 07 16:15:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 16:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWjB9-0001bU-Av; Thu, 07 May 2020 16:15:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0ibL=6V=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWjB7-0001bP-L9
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 16:15:17 +0000
X-Inumbo-ID: efbf1174-907d-11ea-9f47-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id efbf1174-907d-11ea-9f47-12813bfff9fa;
 Thu, 07 May 2020 16:15:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 082CFABCE;
 Thu,  7 May 2020 16:15:17 +0000 (UTC)
Subject: Re: [PATCH 15/16] x86/entry: Adjust guest paths to be shadow stack
 compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-16-andrew.cooper3@citrix.com>
 <2df78612-2c24-32de-186a-c402e188478c@suse.com>
 <70d7b0e0-a599-6a19-5ace-af4f169545b3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fa78f626-18a1-bd95-b446-8ade5e9282a6@suse.com>
Date: Thu, 7 May 2020 18:15:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <70d7b0e0-a599-6a19-5ace-af4f169545b3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 17:50, Andrew Cooper wrote:
> On 07/05/2020 15:12, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/x86_64/compat/entry.S
>>> +++ b/xen/arch/x86/x86_64/compat/entry.S
>>> @@ -198,7 +198,7 @@ ENTRY(cr4_pv32_restore)
>>>  
>>>  /* See lstar_enter for entry register state. */
>>>  ENTRY(cstar_enter)
>>> -        /* sti could live here when we don't switch page tables below. */
>>> +        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
>> I don't see why you delete the comment here (or elsewhere). While
>> I recall you not really wanting them there, I still think they're
>> useful to have, and they shouldn't be deleted as a side effect of
>> an entirely unrelated change. Of course they need to live after
>> your insertions then.
> 
> Do you not remember Juergen's performance testing results concerning
> this comment?  The results were provably worse.
> 
> It is a useless comment.  Sure, it's technically accurate, but so are an
> arbitrarily large number of other comments about how we could permute
> the code.
> 
> It has already been concluded that we won't be making the suggested
> change.  Having a /* TODO - doing X will make the system slower */ isn't
> something we should have: it adds to the complexity of the code and
> tricks people into thinking that something should be done.

A separate patch is still the way to go then, with reference to
the claimed performance testing results.

>>> --- a/xen/arch/x86/x86_64/entry.S
>>> +++ b/xen/arch/x86/x86_64/entry.S
>>> @@ -194,6 +194,15 @@ restore_all_guest:
>>>          movq  8(%rsp),%rcx            # RIP
>>>          ja    iret_exit_to_guest
>>>  
>>> +        /* Clear the supervisor shadow stack token busy bit. */
>>> +.macro rag_clrssbsy
>>> +        push %rax
>>> +        rdsspq %rax
>>> +        clrssbsy (%rax)
>>> +        pop %rax
>>> +.endm
>>> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>> In principle you could get away without spilling %rax:
>>
>>         cmpl  $1,%ecx
>>         ja    iret_exit_to_guest
>>
>>         /* Clear the supervisor shadow stack token busy bit. */
>> .macro rag_clrssbsy
>>         rdsspq %rcx
>>         clrssbsy (%rcx)
>> .endm
>>         ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>>         movq  8(%rsp),%rcx            # RIP
>>         cmpw  $FLAT_USER_CS32,16(%rsp)# CS
>>         movq  32(%rsp),%rsp           # RSP
>>         je    1f
>>         sysretq
>> 1:      sysretl
>>
>>         ALIGN
>> /* No special register assumptions. */
>> iret_exit_to_guest:
>>         movq  8(%rsp),%rcx            # RIP
>>         andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
>>         ...
>>
>> Also - what about CLRSSBSY failing? It would seem easier to diagnose
>> this right here than when getting presumably #DF upon next entry into
>> Xen. At the very least I think it deserves a comment if an error case
>> does not get handled.
> 
> I did consider this, but ultimately decided against it.
> 
> You can't have an unlikely block inside an alternative block because the
> jmp's displacement doesn't get fixed up.

We do fix up unconditional JMP/CALL displacements; I don't
see why we couldn't also do so for conditional ones.

>  Keeping everything inline puts
> an incorrect statically-predicted branch in program flow.
> 
> Most importantly however, is that the SYSRET path is vastly less common
> than the IRET path.  There is no easy way to proactively spot problems
> in the IRET path, which means that conditions leading to a problem are
> already far more likely to manifest as #DF, so there is very little
> value in adding complexity to the SYSRET path in the first place.

The SYSRET path being uncommon is a problem by itself imo, if
that's indeed the case. I'm sure I've suggested before that
we convert frames to TRAP_syscall ones whenever possible,
such that we wouldn't go the slower IRET path.

>> Somewhat similar for SETSSBSY, except there things get complicated by
>> it raising #CP instead of setting EFLAGS.CF: Aiui it would require us
>> to handle #CP on an IST stack in order to avoid #DF there.
> 
> Right, but having #CP as IST gives us far worse problems.
> 
> Being able to spot #CP vs #DF doesn't help usefully.  It's still some
> arbitrary period of time after the damage was done.
> 
> Any nesting of #CP (including fault on IRET out) results in losing
> program state and entering an infinite loop.
> 
> The cases which end up as #DF are properly fatal to the system, and we
> at least get a clean crash out of it.

May I suggest that all of this gets spelled out in at least
the description of the patch, so that it can be properly
understood (and, if need be, revisited) later on?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 07 16:39:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 16:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWjYc-0003Jf-4L; Thu, 07 May 2020 16:39:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=slKb=6V=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jWjYa-0003Ja-Nc
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 16:39:32 +0000
X-Inumbo-ID: 536933c6-9081-11ea-9f4d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 536933c6-9081-11ea-9f4d-12813bfff9fa;
 Thu, 07 May 2020 16:39:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6D835B1AC;
 Thu,  7 May 2020 16:39:33 +0000 (UTC)
Message-ID: <b1cde2cfb29068e98f848af41e260ad635ac5fa4.camel@suse.com>
Subject: Re: [PATCH] xen/sched: always modify vcpu pause flags atomically
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Date: Thu, 07 May 2020 18:39:28 +0200
In-Reply-To: <20200506151655.26445-1-jgross@suse.com>
References: <20200506151655.26445-1-jgross@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-28z0u6N0CAUKbwJByzYD"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-28z0u6N0CAUKbwJByzYD
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-05-06 at 17:16 +0200, Juergen Gross wrote:
> credit2 is currently modifying the pause flags of vcpus non-atomically
> via sched_set_pause_flags() and sched_clear_pause_flags(). This is
> dangerous as there are cases where the pause flags are modified without
> any lock held.
> 
Right.

> So drop the non-atomic pause flag modification functions and rename the
> atomic ones, dropping the _atomic suffix.
> 
> Fixes: a76255b4266516 ("xen/sched: make credit2 scheduler vcpu agnostic.")
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

> ---
> It should be noted that this issue wasn't introduced by core scheduling,
> as even before that credit2 was using the non-atomic __set_bit() and
> __clear_bit() variants.
>
Yes. I can see that in 222234f2ad17185 ("xen: credit2: use non-atomic
cpumask and bit operations"), where svc->flags was switched to
non-atomic updates (for those it is true that they're always accessed
while holding locks), a non-atomic update of the _VPF_migrating
pause->flags bit also slipped in, but that was clearly a mistake. :-(

I believe (though I haven't checked this part too thoroughly) that it
was the only one back then. Afterwards, when another instance was added
in __runq_tickle(), we found the already existing one and followed suit.

Good catch, and thanks. :-)

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-28z0u6N0CAUKbwJByzYD
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl60OcAACgkQFkJ4iaW4
c+7zHhAArUenfxoCOwbGoMzRknFnF3qY9XIrCsSR5jH81/E+k6gx/OZ6wwA37M12
GD3DGdv32pEhZBzvhmQg5hAq/ux+6JdXGU6OlC8PYX77u4/AYFOj+ez3SXQhgoVF
RJE+QTN3HGC1Zivdy13s0nNg6Sd9zRJ3mBcjGk8rTERMw30fizVsHZauLTUn5aII
grj1jPtMxXfqQWhnwtbrrdymKZinAManFAkjcPK9ex5QTrQzr7e0rc/qXeUvDjtc
mGj/T1tGgbSp4bVvuxgdZm5GkkCJUU36K4C1bNjTvScUah3kyOldEGSvI6vN2z9B
h6DLnD9n4LRdYtvtqDKRNcRt8tFYgTiHesETPXWvSMgUgpAVLpbLgFBJiPPYh9GA
hAyNVrUtzOCUL11WZ90Zr+CMaypS1mupqMEXhNoXnlmC9rnoGthvptidV+UM8Mzk
Ad53iiTX8Hxzo8cKVgShD5Esq/0jaaaFKn49XjPaTNvZ7eV4/75YfJeDi3Db/tn6
V3Q2IuVfP8tAB8JsZW/FqEJluyo99dJNbxtEp4ZolSU+57uHUC98gsUcEZ8N8yYf
urLjmUqhC2XYuBA3pb6sg2tvQxd6jXNToXP3AHfoGdTheGfu5wUtSSF5BSenQVmM
hikYn77ipV7J87kT9/QYxdj1MYoOCUC9s3n/xFWHsI5Dr4hVzlM=
=sweq
-----END PGP SIGNATURE-----

--=-28z0u6N0CAUKbwJByzYD--



From xen-devel-bounces@lists.xenproject.org Thu May 07 16:43:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 16:43:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWjck-00046g-M2; Thu, 07 May 2020 16:43:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=slKb=6V=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jWjck-00046a-4J
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 16:43:50 +0000
X-Inumbo-ID: ed0577a6-9081-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed0577a6-9081-11ea-9887-bc764e2007e4;
 Thu, 07 May 2020 16:43:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BF37FABE2;
 Thu,  7 May 2020 16:43:50 +0000 (UTC)
Message-ID: <17751c12cdc39acdfaba3a0416ce2b36025a03cc.camel@suse.com>
Subject: Re: [PATCH RESEND 1/2] xen/Kconfig: define EXPERT a bool rather
 than a string
From: Dario Faggioli <dfaggioli@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Date: Thu, 07 May 2020 18:43:45 +0200
In-Reply-To: <20200430142548.23751-2-julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-2-julien@xen.org>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-I/JwIHgu+OPLjEEgn4hQ"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-I/JwIHgu+OPLjEEgn4hQ
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-04-30 at 15:25 +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Since commit f80fe2b34f08 "xen: Update Kconfig to Linux v5.4" EXPERT
> can only have two values (enabled or disabled). So switch it from a
> string to a bool.
> 
> Take the opportunity to replace all "EXPERT = y" with "EXPERT".
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/Kconfig                     |  3 +--
>  xen/Kconfig.debug               |  2 +-
>  xen/arch/arm/Kconfig            | 10 +++++-----
>  xen/arch/x86/Kconfig            |  6 +++---
>  xen/common/Kconfig              | 14 +++++++-------
>  xen/common/sched/Kconfig        |  2 +-
>
I think I'm being copied on this patch due to the hunk that changes
the file above.

Such hunk is:

Acked-by: Dario Faggioli <dfaggioli@suse.com>

FWIW, I agree with the basic idea of the patch series too.
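
[Editorial aside: the shape of the change being acked, as a Kconfig sketch; the prompt strings here are illustrative, not the exact ones in the Xen tree.]

```kconfig
# Before: string-valued symbol, so dependent symbols had to test
#   depends on EXPERT = "y"
config EXPERT
	string "Configure EXPERT features"

# After: boolean symbol, so dependents simply test
#   depends on EXPERT
config EXPERT
	bool "Configure EXPERT features"
```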

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-I/JwIHgu+OPLjEEgn4hQ
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl60OsEACgkQFkJ4iaW4
c+4kiBAAp2nTBavhsadoZOh2zPgIW5jF8yiohBJ6dxUZi6SMnLA0BZu+t9ratJv5
P8fvaLqIN9ZnhLqgWOUazxK3GmdUSkVJ8XJ1nrUoyCKxlt0K7qbhnUxX1H0uYLfP
zwhJM7mHOIf0srBpLQ2EQ+Z4AM1zaZ0/LkLqGEkK/Z7zqWQ21KGWWAdvrRbTGSXy
EmQaFFRrvVzc2StNpQ84HcEg7U96fJIwXdFHAn9mkf9Wla9LXYWa/CLy0vu//ppW
vmnILpw8jlx5ZxEY+2JgUeb1/YzjmdrdiPcK0HktRRJhOpxrLcG9Ai7uwIBjYKKn
SfFVKf1goOY/j39X1QG+UpCv4iRZobJQsnAd70Cb9VHETg71a5fVlCqeMWkiV27S
Y8efJauJugIvO51mT8nc70Q3x3fXlH0AVkbyxMqK9tFWOmyBBMK2mCfSVx2RJ+Ve
qAW4+tpReJGVuC/ZufA0V4e1vRxCG9DRoYtzC24+Dq+8tXzCMNSxSNf7nIVR83BQ
tXcquawn+XkiUIc19GASrKV85nRs3d6giUdoWky9VB4vZ18kLzim7+ogNrvmnok4
QKC9yXi56NXCI3RVrL+5h5wnbSJNUn6ze/q3oiDyD3WT8lwPOEkxtFcgKrCeiYRv
+lWzyAGVqpEoIucmmA92fuja9J21vbhNOj9Lf+SdMp1xFx3bvos=
=Z+2q
-----END PGP SIGNATURE-----

--=-I/JwIHgu+OPLjEEgn4hQ--



From xen-devel-bounces@lists.xenproject.org Thu May 07 16:45:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 16:45:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWjeB-0004D8-4j; Thu, 07 May 2020 16:45:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=slKb=6V=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jWje9-0004D3-OX
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 16:45:17 +0000
X-Inumbo-ID: 215c21bc-9082-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 215c21bc-9082-11ea-b9cf-bc764e2007e4;
 Thu, 07 May 2020 16:45:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E35E6ABCE;
 Thu,  7 May 2020 16:45:18 +0000 (UTC)
Message-ID: <a554266eacebffea74889d71b67dd207dcd4c574.camel@suse.com>
Subject: Re: [PATCH v4] sched: print information about scheduling granularity
From: Dario Faggioli <dfaggioli@suse.com>
To: Sergey Dyasli <sergey.dyasli@citrix.com>, xen-devel@lists.xenproject.org
Date: Thu, 07 May 2020 18:45:14 +0200
In-Reply-To: <20200506100024.17387-1-sergey.dyasli@citrix.com>
References: <20200506100024.17387-1-sergey.dyasli@citrix.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-VUUsSBVlpTEsqI+1+SMP"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-VUUsSBVlpTEsqI+1+SMP
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-05-06 at 11:00 +0100, Sergey Dyasli wrote:
> Currently it might not be obvious which scheduling mode (e.g.
> core-scheduling) is being used by the scheduler. Alleviate this by
> printing additional information about the selected granularity
> per-cpupool.
> 
> Note: per-cpupool granularity selection is not implemented yet. Every
>       cpupool gets its granularity from the single global value.
> 
> Take this opportunity to introduce a struct sched_gran_name array and
> refactor sched_select_granularity().
> 
> Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>
Acked-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-VUUsSBVlpTEsqI+1+SMP
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl60OxoACgkQFkJ4iaW4
c+7oLQ/+MMty1zl0XXjkcJXzIQf52YCUgeBgpW0xFSryEsHy5L3w3q6f1bq7PY+J
LY9wLnd7qumSqnwZpnVY5MSpaNXXrDH6WmU88TKz2x7rGeoyqYkNvjWAUjdivQO7
3iW0bojerXY9FMYbt7ILkWeQ1EKCf4hX3vzGcKVDyffL55U51OYqch00RUBHaUK2
0gjX/o7zFlZeQyRggtJnDucAuW+7QwhymdJTHLqgbNAQbz/+RZodabxWyaSE0WOc
rlQ52qyznenMdYv22N5eub0zejH1XKm1OW41tP0EKwBdC9sP6QFj0ED0+mAwvw7S
rM1jhStlBiu7hUvj30+uVQCq6g6wBYmVYLWOJdcYgJ0hiQVUlceY6Dv6/4IVKR2d
/SFw27K5bKSsdoSBYWsKecMqMlWP8i+BJdUNKuXZ18Y0+HzDC5cXGUkxBZNDDFLn
sTGppGe28JX9vLEz493u/VowR37p8Mz3/aU4jjULiI0gUn1rb96/LTxWDxbHUmPC
PQgIChqKN59dXMiBSeRVrIJhRPXVQV4ujPqtOAauRxmPIQy0CECFN48+WlJzCP+V
QW429IOzujtvpDoJspYKKxp/WHUwa95CEpuTjaLpnptv6sseVtp1eI//A4NdEU3y
LgA3R+++waTn1jf6IFLAi8i3xrT8cQC+r/XJ6T30hH+YKW/dkfQ=
=Z/9k
-----END PGP SIGNATURE-----

--=-VUUsSBVlpTEsqI+1+SMP--



From xen-devel-bounces@lists.xenproject.org Thu May 07 17:01:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 17:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWjtm-0005rc-Jz; Thu, 07 May 2020 17:01:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gPsh=6V=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jWjtl-0005rX-Ad
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 17:01:25 +0000
X-Inumbo-ID: 5e5d6af9-9084-11ea-9f50-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e5d6af9-9084-11ea-9f50-12813bfff9fa;
 Thu, 07 May 2020 17:01:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588870880;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=ur5VtiotyWcbSJIqszELjrS4++33DvzXvBT6MW4p10Q=;
 b=RGtckSUVWwU6jM4lOM7eYdC3r1zZs9Sb+qEDH3kaU+YoqY6cTuTv/Y39
 WlTi/tVGB6552+joY0oZ53TJ2RbbZo7q9ZlfBlkhVyN3VSZfoW6j4nczs
 B13srbzMz/sVIL6wgdRzFD+yZdr2deVK9YCpjs3ntDL8vRp0Eacm8OXvo o=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: MfqJggDCYZAEjZN18NUPuygfIq++wVfxO4jIuFdwtOzdGVTibxfvHx0ZuLUIdmm0hYm+BfB98y
 QTeaIhwh9InKdxl7ZzCqWoUVztu6ai/Y6vNsIRpcXMqPXjSfJ/iCgWuijJs81C4niZZLI9ClYT
 NMLmayTyoA1z1vjibx7/fFfLczVX3x37tMxKmXu3Takm710/kdpd7nmpSWeWtHZE3BjZfVHn7i
 m1kOrqifomdgMFDm64FDzldb3RLvA2UWnUttV+Haxb86/xkgDWoGSRLMiKmWQFJ0+2rvHX7tv8
 gB4=
X-SBRS: 2.7
X-MesageID: 17687438
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,364,1583211600"; d="scan'208";a="17687438"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24244.16076.627203.282982@mariner.uk.xensource.com>
Date: Thu, 7 May 2020 18:01:00 +0100
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
In-Reply-To: <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
> On 04/05/2020 13:34, Ian Jackson wrote:
> > George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
> >> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
> >>> Well, if I'm not mis-remembering it was on purpose to make it more
> >>> difficult for people to declare themselves "experts". FAOD I'm not
> >>> meaning to imply I don't see and accept the frustration aspect you
> >>> mention further up. The two need to be carefully weighed against
> >>> one another.
> > 
> > Yes, it was on purpose.  However, I had my doubts at the time and
> > I think experience has shown that this was a mistake.
> > 
> >> I don’t think we need to make it difficult for people to declare
> >> themselves experts, particularly as “all” it means at the moment is,
> >> “Can build something which is not security supported”.  People who
> >> are building their own hypervisors are already pretty well advanced;
> >> I think we can let them shoot themselves in the foot if they want
> >> to.
> > 
> > Precisely.
> 
> Can I consider this as an Acked-by? :)

I am happy with the principle of the change.  I haven't reviewed the
details of the commit message etc.

I reviewed the thread and there were two concerns raised:

 * The question of principle.  I disagree with this concern
   because I approve of the principle of the patch.

 * Some detail about the precise justification as written in
   the commit message, regarding `clean' targets.  Apparently the
   assertion may not be completely true.  I haven't seen a proposed
   alternative wording.

I don't feel I should ack a controversial patch with an unresolved
wording issue.  Can you tell me what your proposed wording is?
To avoid blocking this change I would be happy to review your wording
and see if it meets my reading of the stated objection.

Alternatively, if you can get agreement from others on the wording of
the commit message, you may put my ack on the patch.

HTH,
Ian.
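For context, the change under discussion makes the Kconfig EXPERT symbol selectable from the menuconfig interface rather than only via a build-time environment variable. A minimal sketch of that kind of change follows; the symbol name EXPERT is from the thread, but the prompt text and help are illustrative, not the actual patch:

```kconfig
config EXPERT
	bool "Configure EXPERT (unsupported) features"
	help
	  Illustrative sketch only: exposing EXPERT as an ordinary bool
	  prompt lets it be toggled directly from menuconfig, instead of
	  requiring the XEN_CONFIG_EXPERT environment variable to be set
	  when invoking the build.
```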


From xen-devel-bounces@lists.xenproject.org Thu May 07 17:27:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 17:27:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWkIS-0007bx-Qj; Thu, 07 May 2020 17:26:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWkIR-0007bs-7a
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 17:26:55 +0000
X-Inumbo-ID: f190c9a0-9087-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f190c9a0-9087-11ea-b9cf-bc764e2007e4;
 Thu, 07 May 2020 17:26:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588872415;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=LR34SWfeleLjzXZQ9LoxWEPpFUaT9bSTuTKCJAVw5K4=;
 b=VAMNH38wr5MsyyNgwlAqBCDPj12aPQH8nwP7ejahgBvZr+vhLtDWwCCN
 HQm7247Udsf6ByLwPmqdiZklxh5D6Eto5G9hJjHxuKhU8cJzAoumPukwf
 SF3neczHtBKz9oD0l0kjUPXKA9lFsdTNgUaNDFfkTL3dewr7w3rH35rkM U=;
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: eS/OeqLO/8MeKOgG/u67CxxTmP7DRjAKrh1xQb6ZUXI8aaDnEeEFanrlowPWLCEZA42aQT+BXZ
 8B1shmrGbf96Q7zunQPnGObyN/6ftTwoNZ7kuMBa8LVGqBxkJvxapSJHcz5B57fPjWTZK9jr/5
 YU7FWFAvgNfwKChzBqLefOx80WBPhYLqaluU7NO2/aJNO/aN2WpS3xfh9yom9LDVxwewRhdB6k
 k9WCwOR9WgSCUUs0XVWpLMdkPOZ/g5v7FFuZWzS4Uv9HckPRQEo9s13LTSG/KJegpUn+U1hH0S
 0uo=
X-SBRS: 2.7
X-MesageID: 16990167
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,364,1583211600"; d="scan'208";a="16990167"
Subject: Re: [PATCH v2 1/4] x86/mm: no-one passes a NULL domain to
 init_xen_l4_slots()
To: Jan Beulich <jbeulich@suse.com>
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
 <8787b72e-c71e-b75d-2ca0-0c6fe7c8259f@suse.com>
 <20200421164055.GW28601@Air-de-Roger>
 <4779dde6-6582-1776-ea9b-a2cd46ac3bc3@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a40d1289-d88b-db93-1e6f-8f44c9c96bcf@citrix.com>
Date: Thu, 7 May 2020 18:26:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4779dde6-6582-1776-ea9b-a2cd46ac3bc3@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 07:31, Jan Beulich wrote:
>
> Andrew,
>
> On 21.04.2020 18:40, Roger Pau Monné wrote:
>> On Tue, Apr 21, 2020 at 11:11:03AM +0200, Jan Beulich wrote:
>>> Drop the NULL checks - they've been introduced by commit 8d7b633ada
>>> ("x86/mm: Consolidate all Xen L4 slot writing into
>>> init_xen_l4_slots()") for no apparent reason.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> you weren't entirely happy with the change because of the
> possible (or, as you state, necessary) need to undo this. I
> still think in the current shape the NULL checks are
> pointless and hence would better go away. Re-introducing them
> (adjusted to whatever shape the function may be in by that
> time) is not that big of a problem. May I ask that you
> explicitly clarify whether you actively NAK the patch, accept
> it going in with Roger's R-b, or would be willing to ack it?

I'm not going to nack it, because that would be petty, but I still don't
think it is a good use of your time to be making more work for someone
in the future to revert.

However, if you wish to take the patch with Roger's R-b, then please fix
the stale commit message, seeing as this is v2 and I explained exactly
why it was done like that.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 07 17:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 17:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWkVQ-0000jH-TT; Thu, 07 May 2020 17:40:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcBF=6V=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jWkVP-0000jC-Kn
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 17:40:19 +0000
X-Inumbo-ID: d1459c00-9089-11ea-ae69-bc764e2007e4
Received: from mail-wm1-x32b.google.com (unknown [2a00:1450:4864:20::32b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1459c00-9089-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 17:40:18 +0000 (UTC)
Received: by mail-wm1-x32b.google.com with SMTP id h4so7491040wmb.4
 for <xen-devel@lists.xenproject.org>; Thu, 07 May 2020 10:40:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=eaPyYTkYXBrXu/xH0NnuAlo+Tq2jEVRxDirxaSw4Dm4=;
 b=Ag3/IzoGlN3au/qY+FZpa2CRlAewKiyWW+S5kVf2ol4AXBrd54gIBzulDIW0UcAgvk
 7QEOG5ezi6c5yG0dvMa3fBA4XhAv7mf8JM0wyzADxytMZtnopGSmz3wxcP5dbw6dMVH8
 waJJUudnOaFisZEtev+OiOyqhyTHMZboI+r7KuDy57mkFl+RCyRNkT0mM4YgzIqxkZoh
 QKQESboX15XGVw7SDeLMKqHpogruNBMS6gyNMZvJVQ1x950l0QK4uJ7KqCrnjOEsDVqL
 dLYC/NIF1YFk8qJVjGLif0CZtdFxYqKhDAGYM1hw3O89yo91/25cu3Dwhd08bGhU5wQk
 g8UQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=eaPyYTkYXBrXu/xH0NnuAlo+Tq2jEVRxDirxaSw4Dm4=;
 b=E2uYOsQy6/PoZ+o/lushgk7o0u6Y75soSrc0Mo4FvZasT6noQbygRrsTRUHBYtatIm
 B/9c6e6+1RMRow/ujhgBfXWMlgwtNTjLoTD0Wo2rOEHDlMOpJL76KfztOuOFXR5qqVnw
 Te52S1nbj3S92onFiMwDBAKm9whNRr07ahsGJccLyxsatQNfEbvy9ayyQaHLSCI17M/Z
 B51tm7jPZ/rQtI1aGBTP77CnYdA+/S/kfmVe52TDUL7Q7zE1zG9z73IEG2tW52Z8orLB
 0lfjBxI52zz+sW9Xo8O7qbxQrXNLzZHkGCFjvJ++ci2Opl0brxRgqP8nFOXgN3tQVtzh
 HRnQ==
X-Gm-Message-State: AGi0PuYsAE5oQIBt93w7TH6AQZrhckTxP91sctuCz2riWEHTp2oKyUtw
 Ip52P2kr/WEQ8FQmHXZKX9c=
X-Google-Smtp-Source: APiQypJCeKi1dsLuz1/XdQ4gHUKarUmee4ON98uMkUjEOZrkzHu3A/+yV1UEtRqanV14aWFbNTS9yQ==
X-Received: by 2002:a1c:7d4b:: with SMTP id y72mr11731981wmc.11.1588873217964; 
 Thu, 07 May 2020 10:40:17 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id i74sm9154972wri.49.2020.05.07.10.40.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 07 May 2020 10:40:17 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>,
 <qemu-devel@nongnu.org>
References: <20200507155813.16169-1-philmd@redhat.com>
In-Reply-To: <20200507155813.16169-1-philmd@redhat.com>
Subject: RE: [PATCH] accel: Move Xen accelerator code under accel/xen/
Date: Thu, 7 May 2020 18:40:15 +0100
Message-ID: <00d101d62496$9261b6e0$b72524a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQLCd2e1MKAbxFSNuZsafCzsSnsP7KbEPi8w
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Eduardo Habkost' <ehabkost@redhat.com>, 'Juan Quintela' <quintela@redhat.com>,
 "'Michael S. Tsirkin'" <mst@redhat.com>,
 "'Dr. David Alan Gilbert'" <dgilbert@redhat.com>,
 'Aleksandar Markovic' <aleksandar.qemu.devel@gmail.com>,
 'Paolo Bonzini' <pbonzini@redhat.com>,
 'Marcel Apfelbaum' <marcel.apfelbaum@gmail.com>,
 'Igor Mammedov' <imammedo@redhat.com>,
 'Anthony Perard' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 'Aurelien Jarno' <aurelien@aurel32.net>, 'Richard Henderson' <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Philippe Mathieu-Daudé <philmd@redhat.com>
> Sent: 07 May 2020 16:58
> To: qemu-devel@nongnu.org
> Cc: Philippe Mathieu-Daudé <philmd@redhat.com>; xen-devel@lists.xenproject.org;
> Stefano Stabellini <sstabellini@kernel.org>; Aleksandar Markovic
> <aleksandar.qemu.devel@gmail.com>; Aurelien Jarno <aurelien@aurel32.net>;
> Paolo Bonzini <pbonzini@redhat.com>; Igor Mammedov <imammedo@redhat.com>;
> Eduardo Habkost <ehabkost@redhat.com>; Richard Henderson <rth@twiddle.net>;
> Marcel Apfelbaum <marcel.apfelbaum@gmail.com>; Dr. David Alan Gilbert
> <dgilbert@redhat.com>; Juan Quintela <quintela@redhat.com>; Michael S.
> Tsirkin <mst@redhat.com>; Paul Durrant <paul@xen.org>; Anthony Perard
> <anthony.perard@citrix.com>
> Subject: [PATCH] accel: Move Xen accelerator code under accel/xen/
>
> This code is not related to hardware emulation.
> Move it under accel/ with the other hypervisors.
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Paul Durrant <paul@xen.org>



From xen-devel-bounces@lists.xenproject.org Thu May 07 18:12:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 18:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWkzu-0003Iv-Ey; Thu, 07 May 2020 18:11:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWkzt-0003Iq-3k
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 18:11:49 +0000
X-Inumbo-ID: 366c445f-908e-11ea-9f57-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 366c445f-908e-11ea-9f57-12813bfff9fa;
 Thu, 07 May 2020 18:11:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588875107;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=HCX/I0dHjDqSDMQrZwaa838VtWpjdp6sj0lX5rmhDDM=;
 b=EdevowK+AiE13Dk3uQ/K4psz8n6qWLOEOjWJLhiIsogJn9oBOeR+G85F
 Zvgz2XMgiQ5Sch65qyix3+DwTmjpaJZmn6z4LiZ38dDfwWl3eSqM6ezz0
 URPuZ+BVMxfYRIeuCgqQmVcR/CLB99wYG8VTi8YInQOyef4mSLlVk4WUE 4=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: BLPE3jx3rl/A5v3ti9yAe8vxgtwxlpGgNGU+uMjjzSPKD8EnLpw74AiJJhAxSnjdt1utZesCJ3
 OvbYuGnFjNnWs1nsfsiuGYlMgWOtZkotnokZv3Wr5DnYAajbJbh04Zf1h4bOdAZod130o6oG8t
 Gh2jkVqhUvcqGC2dUkoFNmOXteTQT4k79R2uHUsz0Xd1IJcnGF9IIRwedMV3vknbtTp4Pq9Jqj
 +GAi8s+r0re8KiEsK8946gvcE8WvctKmg45UiHDhJQW3eUYzYCAS6trodvErydi20z1YgdGN3p
 Trk=
X-SBRS: 2.7
X-MesageID: 17288560
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,364,1583211600"; d="scan'208";a="17288560"
Subject: Re: [PATCH v8 01/12] x86emul: disable FPU/MMX/SIMD insn emulation
 when !HVM
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <6110ec4d-36a7-efa7-fb86-069ec5b83ac2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a9badaa5-cae0-f2b0-7801-5355f342ad4b@citrix.com>
Date: Thu, 7 May 2020 19:11:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <6110ec4d-36a7-efa7-fb86-069ec5b83ac2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:12, Jan Beulich wrote:
> In a pure PV environment (the PV shim in particular) we don't really
> need emulation of all these. To limit #ifdef-ary utilize some of the
> CASE_*() macros we have, by providing variants expanding to
> (effectively) nothing (really a label, which in turn requires passing
> -Wno-unused-label to the compiler when building such configurations).
>
> Due to the mixture of macro and #ifdef use, the placement of some of
> the #ifdef-s is a little arbitrary.
>
> The resulting object file's .text is less than half the size of the
> original, and looks to also be compiling a little more quickly.
>
> This is meant as a first step; more parts can likely be disabled down
> the road.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v7: Integrate into this series. Re-base.
> ---
> I'll be happy to take suggestions on how to avoid -Wno-unused-label.

I already gave you one, along with a far less invasive change.

It is not interesting to be able to conditionally compile these things
separately.  A build of Xen will either need everything, or just the
integer group.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 07 18:30:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 18:30:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWlIA-0004vd-4y; Thu, 07 May 2020 18:30:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWlI8-0004vY-Tl
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 18:30:40 +0000
X-Inumbo-ID: d9f58b2e-9090-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9f58b2e-9090-11ea-b9cf-bc764e2007e4;
 Thu, 07 May 2020 18:30:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588876239;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Bn0REZY4DYzTaHDC1DeENYqjKeohP8/OFI0dK20j1Oc=;
 b=HKZuls+8XHU5q4eDl8ziOrFPzt57RV5CV0WE19jod/Tzfs19GqJzJ2lK
 Th631Vw0HKaK7mVdk3ZtKFUH+6mgT/IN+xS4iUwQRKwAv/39vV9jPcvlD
 c9C5jX88QJqTbaDXSRUSYvbP+SeaSdg6aYlvsvWT2LFhf/B3VC3UErhL4 8=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: lTj4GoppG+Sk4bMBahGjFEaeaKNdNtMGKbUjyhZJ9LBdt9ZKgyiTCKUrPWPALdAyDliHWs2Khl
 Ilq65Z6NGBtBqYg/YDsiBmXNL+j2jDLfpPN2KwLzd9BLwMS8L1a2MR2+Wsq0mAPlDAWICe4dbx
 uW6yS7iR/XiC9ATn7Kut1lTflTO9fZ0k+N8L+SyitszhUbu4CbUeN3/ajdkOOj8DxcoDOc6F+A
 +qbajDXyMJ1zhnF5E9dkKa1gjgXrMUXlrj1Tb07RToBVEtyG4tvj3nKiOMGGGXz0DOoAaFmois
 nQo=
X-SBRS: 2.7
X-MesageID: 17696929
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,364,1583211600"; d="scan'208";a="17696929"
Subject: Re: [PATCH v8 02/12] x86emul: support MOVDIR{I,64B} insns
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <04e52d0a-fcce-eba4-0341-3b8838c0faae@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <37726d04-6fb8-5747-82ae-d206737905cf@citrix.com>
Date: Thu, 7 May 2020 19:30:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <04e52d0a-fcce-eba4-0341-3b8838c0faae@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:13, Jan Beulich wrote:
> Introduce a new blk() hook, paralleling the rmw() one in a certain way,
> but being intended for larger data sizes, and hence its HVM intermediate
> handling function doesn't fall back to splitting the operation if the
> requested virtual address can't be mapped.
>
> Note that SDM revision 071 doesn't specify exception behavior for
> ModRM.mod == 0b11; assuming #UD here.

Still stale?  It does #UD on current hardware, and will cease to #UD in
the future when the encoding space gets used for something else.

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 07 18:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 18:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWlLx-00056E-UC; Thu, 07 May 2020 18:34:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=slKb=6V=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jWlLx-000569-5S
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 18:34:37 +0000
X-Inumbo-ID: 66da9cf0-9091-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66da9cf0-9091-11ea-9887-bc764e2007e4;
 Thu, 07 May 2020 18:34:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8F817AD6C;
 Thu,  7 May 2020 18:34:37 +0000 (UTC)
Message-ID: <7bdf9bd021ff4bd1131a8a41f42b37d6559f600f.camel@suse.com>
Subject: Re: [PATCH 1/3] xen/sched: allow rcu work to happen when syncing
 cpus in core scheduling
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Date: Thu, 07 May 2020 20:34:33 +0200
In-Reply-To: <20200430151559.1464-2-jgross@suse.com>
References: <20200430151559.1464-1-jgross@suse.com>
 <20200430151559.1464-2-jgross@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-tzvKayGra/mLBcidFyB9"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-tzvKayGra/mLBcidFyB9
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
> With RCU barriers moved from tasklets to normal RCU processing cpu
> offlining in core scheduling might deadlock due to cpu
> synchronization
> required by RCU processing and core scheduling concurrently.
>
> Fix that by bailing out from core scheduling synchronization in case
> of pending RCU work. Additionally the RCU softirq is now required to
> be of higher priority than the scheduling softirqs in order to do
> RCU processing before entering the scheduler again, as bailing out
> from
> the core scheduling synchronization requires to raise another softirq
> SCHED_SLAVE, which would bypass RCU processing again.
>
> Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
In general, I'm fine with this patch and it can have my:

Acked-by: Dario Faggioli <dfaggioli@suse.com>

I'd ask for one thing, but that doesn't affect the ack, as it's not
"my" code. :-)

> diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
> index b4724f5c8b..1f6c4783da 100644
> --- a/xen/include/xen/softirq.h
> +++ b/xen/include/xen/softirq.h
> @@ -4,10 +4,10 @@
>  /* Low-latency softirqs come first in the following list. */
>  enum {
>      TIMER_SOFTIRQ = 0,
> +    RCU_SOFTIRQ,
>      SCHED_SLAVE_SOFTIRQ,
>      SCHEDULE_SOFTIRQ,
>      NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
> -    RCU_SOFTIRQ,
>      TASKLET_SOFTIRQ,
>      NR_COMMON_SOFTIRQS
>  };
>
So, until now, it was kind of intuitive (at least, it was to me :-) )
that we want TIMER_SOFTIRQ first, and the SCHEDULE one right after it.
And the comment above the enum ("Low-latency softirqs come first in the
following list"), although brief, is effective.

With the introduction of SCHED_SLAVE, things became slightly more
complex, but it is still not too much of a reach to figure out that we
want it above SCHEDULE, and the reasons for that.

Now that we're moving RCU from (almost) the very bottom to up here, I
think we need some more info there in the code. Sure, all the bits and
pieces are there in the changelogs, but I think it would be rather
helpful to have them easily available to people trying to understand or
modify this code, e.g., with a comment.

I was also thinking that, even better than a comment, would be a
(build-time?) BUG_ON checking that RCU has a smaller value than both
SCHED_SLAVE and SCHEDULE. Not here, of course, but maybe close to some
piece of code that relies on this assumption. Something that, if I
tomorrow put the SCHED* ones on top again, would catch my attention and
tell me that I either take care of that code path too, or I can't do it.

However, I'm not sure whether, e.g., the other hunk of this patch would
be a suitable place for something like this. And I can't, off the top
of my head, think of a really good place to put it.
Therefore, I'm "only" asking for the comment... but if you (or others)
have ideas, that'd be cool. :-)
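
Just to illustrate what I have in mind, here is a stand-alone sketch
(not a patch): C11 static_assert standing in for Xen's BUILD_BUG_ON,
and the enum simply copied from the hunk above rather than the real
header:

```c
#include <assert.h>

/*
 * Stand-alone mirror of the reordered enum from xen/include/xen/softirq.h
 * (a copy for illustration, not the real Xen header).
 */
enum {
    TIMER_SOFTIRQ = 0,
    RCU_SOFTIRQ,
    SCHED_SLAVE_SOFTIRQ,
    SCHEDULE_SOFTIRQ,
    NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
    TASKLET_SOFTIRQ,
    NR_COMMON_SOFTIRQS
};

/*
 * Fail the build if anyone moves the SCHED* softirqs back above RCU:
 * bailing out of core-sched sync raises SCHED_SLAVE, so RCU must be
 * processed first or it would be bypassed again.
 */
static_assert(RCU_SOFTIRQ < SCHED_SLAVE_SOFTIRQ,
              "RCU work must be done before core-sched synchronization");
static_assert(SCHED_SLAVE_SOFTIRQ < SCHEDULE_SOFTIRQ,
              "SCHED_SLAVE must keep priority over SCHEDULE");
```

Placed next to the code that relies on the ordering, this would be
self-documenting in exactly the way I mean.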

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-tzvKayGra/mLBcidFyB9
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl60VLkACgkQFkJ4iaW4
c+6mdRAAyGsYwy5ZeAYgtoLaXPBpAQXg36zl35RzQyVFnAJ92N8BGDRSQUfOOTSp
anX+pWE79941pJdVMzCiWzBzFqYDwGM9w+y1xntaJOYQxX9aTP4ub/zTAJEgMBtI
uTuHGSCPEe0pNckeICYBEDEGSadtPU0FOIQqrh45c16oQSLu7aJONdJE9CH2LMaI
Km+HP4CuIirvJmJNwoF4PqNAxKG0rSoOkUQTYwNd/wR4flQF/St7kZ++uL+Xj6Wq
gPWVe70wHGhf4szxEUuzMnRRzwT6apijZvUr8wDyXv5yfiddAw9pKDwPA/8rwctG
YNJw8ezagWenTMjbhJUwRoZKRSmMTy1bzDjywR8RZoAPA0A8dvtP5UNp3OhZXKrx
Lfvn2ZAETTw5OdOf6LBHtdNCgvGoqxlTbMHqeifWL2fKHe0JsZjIWqk5WEdxVmqJ
h0KjQEyTZKdHYRxigB6FmRbFseOh0SERsjgt6UgeCmKKIUeE1goDnp56OgAHZCiX
/93IOcyNxm8b1f59XfFCVVB4jhit8XQwRSLGE+jejq48LXKAqyPm8h2W/WlGBTIo
6TSj8yVKZim2T3dNVbUFKLNTLNeja2CW6opY2qpSNobi2NZfS74PpzcASe1wc6KM
ztfFmYvi/DS69S/RVANvuLg5E3/eO6HQxL2nbWX0g7k5UxA5y3Q=
=6PTh
-----END PGP SIGNATURE-----

--=-tzvKayGra/mLBcidFyB9--



From xen-devel-bounces@lists.xenproject.org Thu May 07 18:36:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 18:36:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWlNR-0005DZ-AZ; Thu, 07 May 2020 18:36:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=slKb=6V=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jWlNP-0005DU-PC
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 18:36:07 +0000
X-Inumbo-ID: 9c8410d4-9091-11ea-9f60-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9c8410d4-9091-11ea-9f60-12813bfff9fa;
 Thu, 07 May 2020 18:36:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0C5CBADCE;
 Thu,  7 May 2020 18:36:07 +0000 (UTC)
Message-ID: <e02432bf8ea8ca85a31176de1a1f9e429c84b243.camel@suse.com>
Subject: Re: [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Date: Thu, 07 May 2020 20:36:04 +0200
In-Reply-To: <20200430151559.1464-4-jgross@suse.com>
References: <20200430151559.1464-1-jgross@suse.com>
 <20200430151559.1464-4-jgross@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-yHozix7pt+5nKfA5jTcL"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-yHozix7pt+5nKfA5jTcL
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
> Commit cb563d7665f2 ("xen/sched: support core scheduling for moving
> cpus to/from cpupools") introduced a regression when trying to remove
> an offline cpu from a cpupool, as the system would crash in this
> situation.
>
> Fix that by testing the cpu to be online.
>
> Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving
> cpus to/from cpupools")
> Signed-off-by: Juergen Gross <jgross@suse.com>
>
Acked-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-yHozix7pt+5nKfA5jTcL
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl60VRQACgkQFkJ4iaW4
c+4imw//blPkeUMBPocu+vJT2lP3DkM6Qs0eB2OVEgzsoLBcwcWQFYnhy3ieZND6
QWWN0/yWXJpvvgDHOWP0Gn+PcefgAjG4iHJmAwoJb/0xa6GNA09i/us5l/9SdWg0
oJOFmwrF9XDgufFvILD9OFpPMRVaETnqkqX8PWfn1IOWzRIbk/W9PUBqj/gII27X
VfbfO8NgohQv+6HFNElTnGJKIcew6+sEETfc9sTvKepsjGZwoYrpIQtRVlcqq/bX
eW8RLpA4VifUorP+kYbWnFTlJ2Fi0GWVKUbRk8ivTVRqghNoSLq54kzGz/3El4ui
C2Vi9HrHaQd254V3MzYZsQQ+gXnWdJz9bCNtpb4Z9EpwPNPvMfBmNpJRDKAyHaHU
3buhGtUD0+pBD2SDfnB3wqUX7gGGhTVhSiJZXWywVW0JfjySezMZPWdkVu4eZZzz
XlRq25kywnpBW2+FByRqk6SGXqi81HMyFMOQs0lPhcvOnY+S4Rv4lckpXkTxOtul
2peO0YjYvM3y4fgOCrLb+KR3THK6+ajBMN07Anz9PpCFR6bJQuHmodBVX1dOMIGy
pawVdG6qMRW2NgdzMWBptanmQosRnL3i8gyukcWCOR7ITDjJJH03SJlD0qAemlJL
5FwvXkuvtRr2vI9/l2ssfBOrihvK/vXkgWwcUfCZQ4rmKyF1oT4=
=qjbJ
-----END PGP SIGNATURE-----

--=-yHozix7pt+5nKfA5jTcL--



From xen-devel-bounces@lists.xenproject.org Thu May 07 18:47:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 18:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWlYW-00067F-Ew; Thu, 07 May 2020 18:47:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWlYV-00067A-Ci
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 18:47:35 +0000
X-Inumbo-ID: 36e45ca0-9093-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36e45ca0-9093-11ea-b07b-bc764e2007e4;
 Thu, 07 May 2020 18:47:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=4F9fMWGLyZGgsvttY1hQe/jufaLQbldWcx47S0nLvH8=; b=ysuQH9jIchBOlHvVhtx7cwTgZ
 dc374bITcgJcHn2wSuhCciDTnJrBqDfXOUTvo27y9vWkcFp61MgfljlpbOlDpIWRnXQf5AGh+fyRQ
 kbiCM8WHiig8gpX06M2csiQtxapcYNbg9D8FgMNEytJUUZa9TuBWCQ4aaOFyCH0om93Fk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWlYU-0005EO-5r; Thu, 07 May 2020 18:47:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWlYT-000617-MV; Thu, 07 May 2020 18:47:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWlYT-00019J-Lb; Thu, 07 May 2020 18:47:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150078-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150078: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=64b1da5a2fcf37e3542c277fde194ff3e8bba2d2
X-Osstest-Versions-That: xen=40675b4b874cb9fee0d4f0e12bb3e153ee1c135a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 18:47:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150078 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150078/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  64b1da5a2fcf37e3542c277fde194ff3e8bba2d2
baseline version:
 xen                  40675b4b874cb9fee0d4f0e12bb3e153ee1c135a

Last test of basis   150069  2020-05-07 12:01:25 Z    0 days
Testing same since   150078  2020-05-07 16:00:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   40675b4b87..64b1da5a2f  64b1da5a2fcf37e3542c277fde194ff3e8bba2d2 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 07 18:59:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 18:59:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWlkK-00072I-Ms; Thu, 07 May 2020 18:59:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWlkJ-00072D-5r
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 18:59:47 +0000
X-Inumbo-ID: ea3e8560-9094-11ea-9f67-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea3e8560-9094-11ea-9f67-12813bfff9fa;
 Thu, 07 May 2020 18:59:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588877986;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=LnPmYM3ZIWR8fY2XZ/1m4Q8XN2yWUNpWWtz164UryAc=;
 b=SIZ/cl/vP/WSnBKBvEG9nCKVZdSi6HiVTzgafN28plVKVJajIGUR6y1R
 CS3rNhFsPZ+PBf+Ptof0vJhj4VIJuacw97myL8C6fCiFcRD0eacUl/Zfh
 T/P3/35tRqJfSYvuD/rALRgKLad6SS7RsQ8O2Y6kWC6vcWhVQ6u3Zva/Y M=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: Q5uRWlnG0+tlH0bGWkwoZ6/ZlIDqAjgGXdOa5aPCyZiPYERYl/440Ns3OVWX3KRzNkmHrM+MWR
 Ieu/WeEM3eRx5btQaKkxlM7M4AlLjUPo6HWjAOKTf9XsfWzYe4WwR9nvyX5Isn4gZC4VEnzoZN
 2wQV6SkxDKQOLphaVcxL3bOIxoppQPPdvKKesNEXeq/dkaWfZxv66QGrfXa+ykz0eAA3U7c6zT
 H82HgtRFUy5nPqIQKxv2KnJhEhUv4OnuhEfIIJ4L7ONRO7KjyLNhWKR/lG00UvFEhWPLRfKTiW
 Ah8=
X-SBRS: 2.7
X-MesageID: 17382956
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,364,1583211600"; d="scan'208";a="17382956"
Subject: Re: [PATCH v8 03/12] x86emul: support ENQCMD insns
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <099d03d0-2846-2a3d-93ec-2d10dab12655@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4fdaeefb-9593-789d-9f73-510e89d6df43@citrix.com>
Date: Thu, 7 May 2020 19:59:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <099d03d0-2846-2a3d-93ec-2d10dab12655@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:13, Jan Beulich wrote:
> Note that the ISA extensions document revision 038 doesn't specify
> exception behavior for ModRM.mod == 0b11; assuming #UD here.

Stale.

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -11480,11 +11513,36 @@ int x86_emul_blk(
>  {
>      switch ( state->blk )
>      {
> +        bool zf;
> +
>          /*
>           * Throughout this switch(), memory clobbers are used to compensate
>           * that other operands may not properly express the (full) memory
>           * ranges covered.
>           */
> +    case blk_enqcmd:
> +        ASSERT(bytes == 64);
> +        if ( ((unsigned long)ptr & 0x3f) )
> +        {
> +            ASSERT_UNREACHABLE();
> +            return X86EMUL_UNHANDLEABLE;
> +        }
> +        *eflags &= ~EFLAGS_MASK;
> +#ifdef HAVE_AS_ENQCMD
> +        asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %0")

%[zf]

> +              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
> +              : [src] "r" (data), [dst] "r" (ptr) : "memory" );

Can't src get away with being "m" (*data)?  There is no need to force it
into a single register, even if it is overwhelmingly likely to end up
with %rsi scheduled here.

> +#else
> +        /* enqcmds (%rsi), %rdi */
> +        asm ( ".byte 0xf3, 0x0f, 0x38, 0xf8, 0x3e"
> +              ASM_FLAG_OUT(, "; setz %[zf]")
> +              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
> +              : "S" (data), "D" (ptr) : "memory" );
> +#endif
> +        if ( zf )
> +            *eflags |= X86_EFLAGS_ZF;
> +        break;
> +
>      case blk_movdir:
>          switch ( bytes )
>          {
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -420,6 +420,10 @@
>  #define MSR_IA32_TSC_DEADLINE		0x000006E0
>  #define MSR_IA32_ENERGY_PERF_BIAS	0x000001b0
>  
> +#define MSR_IA32_PASID			0x00000d93
> +#define  PASID_PASID_MASK		0x000fffff
> +#define  PASID_VALID			0x80000000
> +

Above the legacy line please as this is using the newer style, and drop
_IA32.  Intel's ideal of architectural-ness isn't interesting or worth
the added code volume.

PASID_PASID_MASK isn't great, but I can't suggest anything better, and
MSR_PASID_MASK doesn't work either.

Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 07 19:33:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 19:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWmGJ-0001kj-DM; Thu, 07 May 2020 19:32:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWmGI-0001ke-7u
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 19:32:50 +0000
X-Inumbo-ID: 88904901-9099-11ea-9f6d-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88904901-9099-11ea-9f6d-12813bfff9fa;
 Thu, 07 May 2020 19:32:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588879968;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=r0Yky+EEud5x4Qlp7k2WR29CxCDqDGb5BK7r8BrVoDs=;
 b=L+PdsM/jHK8EVuD6n96yNF5j30vsBRApmg9JdoT54Yv2dN1abwqm8WDS
 qGIBFqJO6pNl/nwzWf9opkbSSMSdYgqMNPI9wsSnhI6kJ/a7sGdAlTcNv
 ggrsKc1wmKWfSXQKVHGXfGo6NoLy4VYNXLM4eF3ThyvxnbOUJze4k0cve Q=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: w/LBYwjt8Y4xr0/Q1IpXNaALAIdfLBt/pGEAug0279zwFvA+/IcRIvVPNqYHn1qlnIoQKz0OWJ
 UC12FGZRY65cTeafzFOZY1sufk+cqvTohY8inlsRwu3WrEXJTh2oEENCqB332ohRbq9LA9VyVw
 by45Mcer2frjZt9Knus7XlkdrKXhgfsJUTroZUdOyt++8XV4odSdeCV38YnA6t2qfUIgivyOf3
 uRdcbAoJR3bmWXrupvq1KrrF7aPq6IAG2wbEP5jo/gI5foqIEZc42u0T2EVzOJ7KlO690XYPDD
 t+M=
X-SBRS: 2.7
X-MesageID: 17386065
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,364,1583211600"; d="scan'208";a="17386065"
Subject: Re: [PATCH v8 04/12] x86emul: support SERIALIZE
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <0bbbf95e-48ec-ee73-5234-52cf9c6c06d8@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <64de91ff-41ae-baf1-1119-0ba39df32275@citrix.com>
Date: Thu, 7 May 2020 20:32:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <0bbbf95e-48ec-ee73-5234-52cf9c6c06d8@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:14, Jan Beulich wrote:
> ... enabling its use by all guest kinds at the same time.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> @@ -5660,6 +5661,18 @@ x86_emulate(
>                  goto done;
>              break;
>  
> +        case 0xe8:
> +            switch ( vex.pfx )
> +            {
> +            case vex_none: /* serialize */
> +                host_and_vcpu_must_have(serialize);
> +                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );

There is very little need for an actual implementation here.  The VMExit
to get here is good enough.

The only question is whether pre-unrestricted_guest Intel boxes are
liable to find this in real mode code.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 07 20:13:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 20:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWmtZ-00055b-Lv; Thu, 07 May 2020 20:13:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWmtY-00055W-2n
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 20:13:24 +0000
X-Inumbo-ID: 338ceaf2-909f-11ea-9f74-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 338ceaf2-909f-11ea-9f74-12813bfff9fa;
 Thu, 07 May 2020 20:13:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588882403;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=eAz42w4G407xqjU+5HJ38ES5QkCaO1FFdhD88UNFf44=;
 b=LbHiunsagYh4EkvvHme7eRsKrHtIjKQ4B9n+1nyVfRrardnvA/7StYkH
 ZRWFLjEp8ce8lN4bN7xVx/kuvmXv5YfmQ3vA3sewTpxPIxi51tMu4ti03
 7AdZJIBThXtSktJe7qEHYty9/UmVaP/5BW4tKiQjIGozPa3mEe1XB3AJJ E=;
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: v0YBqqoymGMg5tPVAle9c9O241E8UdcsTWn4f4cKvbplTqfI9+YPsvb+mfJb+E1KYlS1kxHalJ
 SXTLrnQWBM8slV6BAx96IxeX8vrtPOBxFDU+PuOCvGrT5ukrbNNzt6Ye9reZwo/1ea2mQy/1x/
 QbRuFoTZZ1MhSOqjdApMWZrvIzyjiAhE4OYXDxmPR4z8btxd1GQazoSUlwaaEzFosRh2kR6lRP
 E/NyUZguhTFp2mQ2hjMpLEjjFtRkPJWmxVzT0ZzAmQXl6nT1UAjtuo9PI+zhV9N2+/v0i/sSqo
 df8=
X-SBRS: 2.7
X-MesageID: 17388994
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,365,1583211600"; d="scan'208";a="17388994"
Subject: Re: [PATCH v8 05/12] x86emul: support X{SUS,RES}LDTRK
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <6e7500d2-262c-29c7-b9be-3fc9be26d198@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <feb3a6ed-b6b9-959c-8eb1-fb6ecfff034c@citrix.com>
Date: Thu, 7 May 2020 21:13:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <6e7500d2-262c-29c7-b9be-3fc9be26d198@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:14, Jan Beulich wrote:
> --- a/xen/tools/gen-cpuid.py
> +++ b/xen/tools/gen-cpuid.py
> @@ -284,6 +284,9 @@ def crunch_numbers(state):
>          # as dependent features simplifies Xen's logic, and prevents the guest
>          # from seeing implausible configurations.
>          IBRSB: [STIBP, SSBD],
> +
> +        # In principle the TSXLDTRK insns could also be considered independent.
> +        RTM: [TSXLDTRK],

Why the link?  There is no relevant interaction AFAICT.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 07 20:29:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 20:29:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWn94-00064Y-9I; Thu, 07 May 2020 20:29:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G3Wl=6V=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jWn92-00064S-NT
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 20:29:24 +0000
X-Inumbo-ID: 6f5e0ea6-90a1-11ea-9887-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f5e0ea6-90a1-11ea-9887-bc764e2007e4;
 Thu, 07 May 2020 20:29:24 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7B66B208CA;
 Thu,  7 May 2020 20:29:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588883361;
 bh=C/FjtARnN2djUbFPhCW9yNk/5S3TNogKH8l7LXGiYC4=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=gwIjJPwh6mEzOkMf/Tm/IoswjhNZx/MzBVpfXnGoP0ZGaQglVkoTK7gFLlotlMN7z
 NMDKd3WpkjX47AA9FjTu0l8PTR6Z5yL/84JxDVVBOSVTToPbYAxsaasye4LTCCW+eO
 m68+qlOBr+RTl/bgnl5I3MkydVRB2E3MfeoY2ilI=
Date: Thu, 7 May 2020 13:29:21 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH for-4.14 1/3] xen/arm: atomic: Allow read_atomic() to be
 used in more cases
In-Reply-To: <20200502160700.19573-2-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2005071325210.14706@sstabellini-ThinkPad-T480s>
References: <20200502160700.19573-1-julien@xen.org>
 <20200502160700.19573-2-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, 2 May 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The current implementation of read_atomic() on Arm does not allow:
>     1) Reading a value through a pointer to const, because the temporary
>     variable would be const and therefore cannot be assigned to. This
>     can be solved by using a union of the type and a char[0].
>     2) Reading a pointer value (e.g. void *), because the switch
>     contains casts to types whose size differs from that of a pointer.
>     This can be solved by introducing a static inline for the switch
>     and using void * for the pointer.
> 
> Reported-by: Juergen Gross <jgross@suse.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  xen/include/asm-arm/atomic.h | 37 +++++++++++++++++++++++++++---------
>  1 file changed, 28 insertions(+), 9 deletions(-)
> 
> diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
> index e81bf80e305c..3c3d6bb04ee8 100644
> --- a/xen/include/asm-arm/atomic.h
> +++ b/xen/include/asm-arm/atomic.h
> @@ -71,18 +71,37 @@ build_add_sized(add_u32_sized, "", WORD, uint32_t)
>  #undef build_atomic_write
>  #undef build_add_sized
>  
> +void __bad_atomic_read(const volatile void *p, void *res);
>  void __bad_atomic_size(void);
>  
> +static always_inline void read_atomic_size(const volatile void *p,
> +                                           void *res,
> +                                           unsigned int size)
> +{
> +    switch ( size )
> +    {
> +    case 1:
> +        *(uint8_t *)res = read_u8_atomic(p);
> +        break;
> +    case 2:
> +        *(uint16_t *)res = read_u16_atomic(p);
> +        break;
> +    case 4:
> +        *(uint32_t *)res = read_u32_atomic(p);
> +        break;
> +    case 8:
> +        *(uint64_t *)res = read_u64_atomic(p);
> +        break;
> +    default:
> +        __bad_atomic_read(p, res);
> +        break;
> +    }
> +}
> +
>  #define read_atomic(p) ({                                               \
> -    typeof(*p) __x;                                                     \
> -    switch ( sizeof(*p) ) {                                             \
> -    case 1: __x = (typeof(*p))read_u8_atomic((uint8_t *)p); break;      \
> -    case 2: __x = (typeof(*p))read_u16_atomic((uint16_t *)p); break;    \
> -    case 4: __x = (typeof(*p))read_u32_atomic((uint32_t *)p); break;    \
> -    case 8: __x = (typeof(*p))read_u64_atomic((uint64_t *)p); break;    \
> -    default: __x = 0; __bad_atomic_size(); break;                       \
> -    }                                                                   \
> -    __x;                                                                \
> +    union { typeof(*p) val; char c[0]; } x_;                            \
> +    read_atomic_size(p, x_.c, sizeof(*p));                              \

Wouldn't it be better to pass x_ as follows:

    read_atomic_size(p, &x_, sizeof(*p));

?

In my tests both ways work. I prefer the version with &x_, but given
that it works either way:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



> +    x_.val;                                                             \
>  })
>  
>  #define write_atomic(p, x) ({                                           \
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 07 20:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 20:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWn9N-00065u-I9; Thu, 07 May 2020 20:29:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G3Wl=6V=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jWn9M-00065k-Qs
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 20:29:44 +0000
X-Inumbo-ID: 7c3f565d-90a1-11ea-9f76-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c3f565d-90a1-11ea-9f76-12813bfff9fa;
 Thu, 07 May 2020 20:29:44 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 47AF8208CA;
 Thu,  7 May 2020 20:29:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588883383;
 bh=JHkHdgGEDMlboKH3/jlAnR93QkRSEmRfXO6kUy5ex3M=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=D4Gu7P9sSSztVHChckZqQu3JWKf0L2sfukj1KfUsOLN/ZtEE/4MnRtO7BtbjYdTUZ
 ZELNhFm3ewOFa/veNRX0E8bBl1tR/rXvXoJiIi3Ggc8LTtzT/LljGVO3HWHTdJNqK3
 LHyhLzkU4CJ67+TVGuVWgCCZ6hv3JycbFGIdNDKY=
Date: Thu, 7 May 2020 13:29:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH for-4.14 2/3] xen/arm: atomic: Rewrite write_atomic()
In-Reply-To: <20200502160700.19573-3-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2005071329310.14706@sstabellini-ThinkPad-T480s>
References: <20200502160700.19573-1-julien@xen.org>
 <20200502160700.19573-3-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, 2 May 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The current implementation of write_atomic() has two issues:
>     1) It cannot be used to write a pointer value, because the switch
>     contains casts to sizes other than that of a pointer.
>     2) It will happily allow writing through a pointer to const.
> 
> Additionally, the Arm implementation returns a value while the x86
> implementation no longer does, since commit 2934148a0773 "x86:
> simplify a few macros / inline functions". There are no users of the
> return value, so it is fine to drop it.
> 
> The switch is now moved into a static inline helper, allowing the
> compiler to reject const pointers while also allowing pointer values
> to be written.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  xen/include/asm-arm/atomic.h | 40 ++++++++++++++++++++++++++----------
>  1 file changed, 29 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/include/asm-arm/atomic.h b/xen/include/asm-arm/atomic.h
> index 3c3d6bb04ee8..ac2798d095eb 100644
> --- a/xen/include/asm-arm/atomic.h
> +++ b/xen/include/asm-arm/atomic.h
> @@ -98,23 +98,41 @@ static always_inline void read_atomic_size(const volatile void *p,
>      }
>  }
>  
> +static always_inline void write_atomic_size(volatile void *p,
> +                                            void *val,
> +                                            unsigned int size)
> +{
> +    switch ( size )
> +    {
> +    case 1:
> +        write_u8_atomic(p, *(uint8_t *)val);
> +        break;
> +    case 2:
> +        write_u16_atomic(p, *(uint16_t *)val);
> +        break;
> +    case 4:
> +        write_u32_atomic(p, *(uint32_t *)val);
> +        break;
> +    case 8:
> +        write_u64_atomic(p, *(uint64_t *)val);
> +        break;
> +    default:
> +        __bad_atomic_size();
> +        break;
> +    }
> +}
> +
>  #define read_atomic(p) ({                                               \
>      union { typeof(*p) val; char c[0]; } x_;                            \
>      read_atomic_size(p, x_.c, sizeof(*p));                              \
>      x_.val;                                                             \
>  })
>  
> -#define write_atomic(p, x) ({                                           \
> -    typeof(*p) __x = (x);                                               \
> -    switch ( sizeof(*p) ) {                                             \
> -    case 1: write_u8_atomic((uint8_t *)p, (uint8_t)__x); break;         \
> -    case 2: write_u16_atomic((uint16_t *)p, (uint16_t)__x); break;      \
> -    case 4: write_u32_atomic((uint32_t *)p, (uint32_t)__x); break;      \
> -    case 8: write_u64_atomic((uint64_t *)p, (uint64_t)__x); break;      \
> -    default: __bad_atomic_size(); break;                                \
> -    }                                                                   \
> -    __x;                                                                \
> -})
> +#define write_atomic(p, x)                                              \
> +    do {                                                                \
> +        typeof(*p) x_ = (x);                                            \
> +        write_atomic_size(p, &x_, sizeof(*p));                          \
> +    } while ( false )
>  
>  #define add_sized(p, x) ({                                              \
>      typeof(*(p)) __x = (x);                                             \
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 07 20:33:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 20:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWnCZ-0006xF-2p; Thu, 07 May 2020 20:33:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K0Sz=6V=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWnCY-0006x9-8b
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 20:33:02 +0000
X-Inumbo-ID: f251d90a-90a1-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f251d90a-90a1-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 20:33:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bBjs2qmXjdr8iYkSW/QaZAeiIo8xT/zoUOuguI3EQzs=; b=Zqr5Ch9LasTEkftvYbGB0EheSc
 axT6Yhfsx5Zc6xrI+W6kI+qD7wj4qBCU4KnFD8zHjJotxYRtcpQtOXRcWGjXAm0dXRTTjw/3bBY05
 hRa15D4BTrvySaOL3yAz3384t9CPHHqdRqM7Zpd4cCZvT966mkb1zQEklsZaqCPUMrXE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWnCU-0007FD-UC; Thu, 07 May 2020 20:32:58 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWnCU-000626-Lq; Thu, 07 May 2020 20:32:58 +0000
Subject: Re: [PATCH for-4.14 1/3] xen/arm: atomic: Allow read_atomic() to be
 used in more cases
To: Stefano Stabellini <sstabellini@kernel.org>
References: <20200502160700.19573-1-julien@xen.org>
 <20200502160700.19573-2-julien@xen.org>
 <alpine.DEB.2.21.2005071325210.14706@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <0db53f23-197c-0dcc-b89f-274597ebc32d@xen.org>
Date: Thu, 7 May 2020 21:32:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005071325210.14706@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 07/05/2020 21:29, Stefano Stabellini wrote:
>>   #define read_atomic(p) ({                                               \
>> -    typeof(*p) __x;                                                     \
>> -    switch ( sizeof(*p) ) {                                             \
>> -    case 1: __x = (typeof(*p))read_u8_atomic((uint8_t *)p); break;      \
>> -    case 2: __x = (typeof(*p))read_u16_atomic((uint16_t *)p); break;    \
>> -    case 4: __x = (typeof(*p))read_u32_atomic((uint32_t *)p); break;    \
>> -    case 8: __x = (typeof(*p))read_u64_atomic((uint64_t *)p); break;    \
>> -    default: __x = 0; __bad_atomic_size(); break;                       \
>> -    }                                                                   \
>> -    __x;                                                                \
>> +    union { typeof(*p) val; char c[0]; } x_;                            \
>> +    read_atomic_size(p, x_.c, sizeof(*p));                              \
> 
> Wouldn't it be better to pass x_ as follows:
> 
>      read_atomic_size(p, &x_, sizeof(*p));

I am not sure I understand this. Are you suggesting passing a pointer 
to the union?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 07 20:34:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 20:34:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWnDn-00072O-DU; Thu, 07 May 2020 20:34:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=G3Wl=6V=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jWnDm-00072J-HI
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 20:34:18 +0000
X-Inumbo-ID: 1fae9406-90a2-11ea-9f78-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fae9406-90a2-11ea-9f78-12813bfff9fa;
 Thu, 07 May 2020 20:34:18 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6FEFF208CA;
 Thu,  7 May 2020 20:34:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588883657;
 bh=K4kKz0E9D4cZDWNdpdaH4tEnjnTvj/V3B4vBoBgTNQ8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=K3ak1FkozC4HXibn3pcZBnK/rWHPTT7SfG65AewkbjlOhD3KQwqV4OferNh4Omfpb
 onv928ukxsrErWR4FmLRtYZ5fBfV9VDOgKTSu4uz3jawJs3c2g3OgYuSbveROKDpFz
 4hxpJw7jcWHOJ5Rm9Iy1dUa8w4BRVgCX3MPaC4h0=
Date: Thu, 7 May 2020 13:34:16 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH for-4.14 1/3] xen/arm: atomic: Allow read_atomic() to be
 used in more cases
In-Reply-To: <0db53f23-197c-0dcc-b89f-274597ebc32d@xen.org>
Message-ID: <alpine.DEB.2.21.2005071333480.14706@sstabellini-ThinkPad-T480s>
References: <20200502160700.19573-1-julien@xen.org>
 <20200502160700.19573-2-julien@xen.org>
 <alpine.DEB.2.21.2005071325210.14706@sstabellini-ThinkPad-T480s>
 <0db53f23-197c-0dcc-b89f-274597ebc32d@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 7 May 2020, Julien Grall wrote:
> Hi,
> 
> On 07/05/2020 21:29, Stefano Stabellini wrote:
> > >   #define read_atomic(p) ({
> > > \
> > > -    typeof(*p) __x;                                                     \
> > > -    switch ( sizeof(*p) ) {                                             \
> > > -    case 1: __x = (typeof(*p))read_u8_atomic((uint8_t *)p); break;      \
> > > -    case 2: __x = (typeof(*p))read_u16_atomic((uint16_t *)p); break;    \
> > > -    case 4: __x = (typeof(*p))read_u32_atomic((uint32_t *)p); break;    \
> > > -    case 8: __x = (typeof(*p))read_u64_atomic((uint64_t *)p); break;    \
> > > -    default: __x = 0; __bad_atomic_size(); break;                       \
> > > -    }                                                                   \
> > > -    __x;                                                                \
> > > +    union { typeof(*p) val; char c[0]; } x_;                            \
> > > +    read_atomic_size(p, x_.c, sizeof(*p));                              \
> > 
> > Wouldn't it be better to pass x_ as follows:
> > 
> >      read_atomic_size(p, &x_, sizeof(*p));
> 
> I am not sure I understand this. Are you suggesting passing a pointer to the
> union?

Yes. Would it cause a problem that I couldn't spot?


From xen-devel-bounces@lists.xenproject.org Thu May 07 20:34:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 20:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWnEB-00075C-Ml; Thu, 07 May 2020 20:34:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=53Et=6V=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jWnE9-00074t-UZ
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 20:34:42 +0000
X-Inumbo-ID: 2d204bd4-90a2-11ea-9f78-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d204bd4-90a2-11ea-9f78-12813bfff9fa;
 Thu, 07 May 2020 20:34:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588883681;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=f0Uw6mGiBkQ8TwNJ5g8wMfPKIHQUD5J++nwUo0Pr1FM=;
 b=HiDLKQ/j+3wJu9zrL4ZU04MKN7+Z5GLntcvJ3qC0lK+0ZxIesde+zUGj
 1owgHYPWR90RRqNbT0W7jCCbnG76FUWrDxHDsj48y9x6KjAa6iZQG6CEf
 M09Wc5BSNRp+OSSBFHo0kRyZFruRj2EtjsefV5zAQ0L/J8AT+qO/N6kGU c=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: 4l840KJWxXZaKt8DTNrayIgsKYM2HkaSsrFt6xy+Fg/JIuvs6xa3b3dpCehkFlx7B8FGfwVyY0
 BJ+b8MOAubfsGLlvTC1rLbI9LHvX6qZMbJzpsFm6+mnPK3Emc7BYurABEbvipqmqm5DJRREo9C
 wIvZp8KvUWc2pFHWY+/KBY8BLTIkllUZ3GsyeH8XkrvvguPUPBFBVyCXYwvXkzpav7GI1g7OeC
 Qf/FNz5eCvaAMsVUHvfxW6N51D0X0FuNRaPCnJkVTmtTKeyngFQX7krvokLDEanSdjkwfLjHgF
 2iY=
X-SBRS: 2.7
X-MesageID: 17278392
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,365,1583211600"; d="scan'208";a="17278392"
Subject: Re: [PATCH v8 06/12] x86/HVM: make hvmemul_blk() capable of handling
 r/o operations
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <1587789a-b0d6-6d18-99fc-a94bbea52d7b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <507d4ced-d6ff-dfd9-d6e5-0a732c334de1@citrix.com>
Date: Thu, 7 May 2020 21:34:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <1587789a-b0d6-6d18-99fc-a94bbea52d7b@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:15, Jan Beulich wrote:
> In preparation for handling e.g. FLDENV or {F,FX,X}RSTOR here as well.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v8: New (could be folded into "x86emul: support MOVDIR{I,64B} insns",
>     but would invalidate Paul's R-b there).
>
> --- a/xen/arch/x86/hvm/emulate.c
> +++ b/xen/arch/x86/hvm/emulate.c
> @@ -1453,7 +1453,7 @@ static int hvmemul_blk(
>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>      unsigned long addr;
> -    uint32_t pfec = PFEC_page_present | PFEC_write_access;
> +    uint32_t pfec = PFEC_page_present;
>      int rc;
>      void *mapping = NULL;
>  
> @@ -1462,6 +1462,9 @@ static int hvmemul_blk(
>      if ( rc != X86EMUL_OKAY || !bytes )
>          return rc;
>  
> +    if ( x86_insn_is_mem_write(state, ctxt) )
> +        pfec |= PFEC_write_access;
> +

For the instructions with two memory operands, it conflates the
read-only side with the read-write side.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 07 20:50:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 20:50:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWnT7-0000L0-2U; Thu, 07 May 2020 20:50:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=K0Sz=6V=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWnT5-0000Kv-C7
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 20:50:07 +0000
X-Inumbo-ID: 553b7d62-90a4-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 553b7d62-90a4-11ea-b07b-bc764e2007e4;
 Thu, 07 May 2020 20:50:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JQ8IQvrhyCA0rZHhIC7vOYfDAdOXESt3Mi7LQpYwfYY=; b=1chNlyX4WoXOqad1pEBcq/Fvvn
 /YrHVzCK27o9wFtZS7v9PfZ9Mw7cMenMbzWD1dvVACInoW1d6AUrJQxS4V8We2/yIbpfazI+sqTJW
 7m9fJuMG+13BJZPcd7UHksTrYMixcG/kOC1U/4TIRWowL1176joSRFbVm9kJQ3CDC2bU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWnT1-0007YP-8m; Thu, 07 May 2020 20:50:03 +0000
Received: from [54.239.6.177] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWnT1-0007J8-1J; Thu, 07 May 2020 20:50:03 +0000
Subject: Re: [PATCH for-4.14 1/3] xen/arm: atomic: Allow read_atomic() to be
 used in more cases
To: Stefano Stabellini <sstabellini@kernel.org>
References: <20200502160700.19573-1-julien@xen.org>
 <20200502160700.19573-2-julien@xen.org>
 <alpine.DEB.2.21.2005071325210.14706@sstabellini-ThinkPad-T480s>
 <0db53f23-197c-0dcc-b89f-274597ebc32d@xen.org>
 <alpine.DEB.2.21.2005071333480.14706@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <adeb1be6-3288-a441-fe89-28a70d3b5de4@xen.org>
Date: Thu, 7 May 2020 21:50:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005071333480.14706@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano,

On 07/05/2020 21:34, Stefano Stabellini wrote:
> On Thu, 7 May 2020, Julien Grall wrote:
>> Hi,
>>
>> On 07/05/2020 21:29, Stefano Stabellini wrote:
>>>>    #define read_atomic(p) ({                                           \
>>>> -    typeof(*p) __x;                                                     \
>>>> -    switch ( sizeof(*p) ) {                                             \
>>>> -    case 1: __x = (typeof(*p))read_u8_atomic((uint8_t *)p); break;      \
>>>> -    case 2: __x = (typeof(*p))read_u16_atomic((uint16_t *)p); break;    \
>>>> -    case 4: __x = (typeof(*p))read_u32_atomic((uint32_t *)p); break;    \
>>>> -    case 8: __x = (typeof(*p))read_u64_atomic((uint64_t *)p); break;    \
>>>> -    default: __x = 0; __bad_atomic_size(); break;                       \
>>>> -    }                                                                   \
>>>> -    __x;                                                                \
>>>> +    union { typeof(*p) val; char c[0]; } x_;                            \
>>>> +    read_atomic_size(p, x_.c, sizeof(*p));                              \
>>>
>>> Wouldn't it be better to pass x_ as follows:
>>>
>>>       read_atomic_size(p, &x_, sizeof(*p));
>>
>> I am not sure I understand this. Are you suggesting to pass a pointer to
>> the union?
> 
> Yes. Would it cause a problem that I couldn't spot?

You defeat the purpose of a union by casting it to something else (even 
if it is a void *).

The goal of the union is to be able to access a value in different ways 
through its members. So x_.c is more union-friendly and makes it easier 
to understand why it was implemented like this.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 07 21:55:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 21:55:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWoTw-0005J1-7G; Thu, 07 May 2020 21:55:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWoTv-0005Iw-Cf
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 21:55:03 +0000
X-Inumbo-ID: 65991c10-90ad-11ea-9f88-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 65991c10-90ad-11ea-9f88-12813bfff9fa;
 Thu, 07 May 2020 21:54:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=j3XaVXR5PgZ6P9nXFfto1emlLWymC/6ABFE57uw7ws0=; b=x1zK63iZXZdE3uk4UsBSf9XH0
 D4PuFnRBYZck+D3Ej65ZXp+w+3SP0y/Dham6g1DJ3tyqsJqt3sjzTCjnkJL+6jMZj0yDnKKfjupGQ
 7EE7FftaJkEr2OrUXnff3ln7GP/G9Ezf/Qf6ddT6ex6oAvEAN7I6uKzbpSZUG4IiAEle8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWoTr-0000JH-Gf; Thu, 07 May 2020 21:54:59 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWoTr-0007Bk-4J; Thu, 07 May 2020 21:54:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWoTr-0005Rl-3T; Thu, 07 May 2020 21:54:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150080-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150080: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=35b819c45c4603fdb1d400925d6b2e6f8689a9d5
X-Osstest-Versions-That: xen=64b1da5a2fcf37e3542c277fde194ff3e8bba2d2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 21:54:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150080 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150080/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  35b819c45c4603fdb1d400925d6b2e6f8689a9d5
baseline version:
 xen                  64b1da5a2fcf37e3542c277fde194ff3e8bba2d2

Last test of basis   150078  2020-05-07 16:00:49 Z    0 days
Testing same since   150080  2020-05-07 19:01:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   64b1da5a2f..35b819c45c  35b819c45c4603fdb1d400925d6b2e6f8689a9d5 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 07 22:48:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 22:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWpJu-0000zB-Du; Thu, 07 May 2020 22:48:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WU5u=6V=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jWpJt-0000z6-C3
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 22:48:45 +0000
X-Inumbo-ID: e6cd8b02-90b4-11ea-9f8c-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6cd8b02-90b4-11ea-9f8c-12813bfff9fa;
 Thu, 07 May 2020 22:48:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588891723;
 h=from:to:subject:date:message-id:content-id:
 content-transfer-encoding:mime-version;
 bh=N86GxH6ihLvzvmdPTDb+qmvyWT8evRMPlMJt4BgFShA=;
 b=aVrZkzyFQ2l8EsPw+fKyuLtltpz1Rn7IsEhVEX+xgNLyxps7DF56IUYM
 YvJgKmGO1/4/24H568h/cHKleVvFGOoAzkk1K4SkkpN9fzsEDmETxX6dU
 dWRMutgYuPZs6P6YlM7kuWShPl3dmK25c8nzpiZ/CdGmKF8cg0iB7Zqip E=;
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
IronPort-SDR: qQYWGvmgZFLLM/3qGqt8OhYxIeMDu9Jso036cUH/DOmYv03MAaiNv6GV9WaMSqpnsYClaY2Ygc
 il16AuV+0FSwwgVJ4JqeBUXByffOVE3FR9VE1FmVFj/hUfEOCnbm8+zcmLNW/voVLTFqapwivy
 7/yQMlp1tftd37HdK3c1MYiy/PdUPJ6IWBOImQSH4b1SNVVZJtJRlKjPdjzT1GXPXcLZjMrq7H
 WnbIDUZfOaKvULPQ3i3sCUb2Ce75WHUa23d7y8RpT67jCCQL35DOlvVwU6CAhUMWmbKHaZpj1M
 JV4=
X-SBRS: 2.7
X-MesageID: 17307801
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,365,1583211600"; d="scan'208";a="17307801"
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: XenSummit 2020 *will* be held virtually in June!
Thread-Topic: XenSummit 2020 *will* be held virtually in June!
Thread-Index: AQHWJMGm0AmOdlVrOUaNRapVF0rYbw==
Date: Thu, 7 May 2020 22:48:39 +0000
Message-ID: <562E87BB-FAEE-42B3-BEF4-6E7A4D65269D@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.60.0.2.5)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <8DA4A1E15FEE7244B8D1E365912687E5@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We’re still ironing out all the details, but it’s absolutely confirmed that XenSummit 2020 will be held virtually in June.

In addition, the new version of the Design Sessions website is now live:

https://design-sessions.xenproject.org

Make space on your calendars, and submit your design sessions, and watch this space for further information!

 -George Dunlap


From xen-devel-bounces@lists.xenproject.org Thu May 07 23:33:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 07 May 2020 23:33:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWq1F-0005D0-Br; Thu, 07 May 2020 23:33:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aphx=6V=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWq1E-0005Cv-9O
 for xen-devel@lists.xenproject.org; Thu, 07 May 2020 23:33:32 +0000
X-Inumbo-ID: 2844ca86-90bb-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2844ca86-90bb-11ea-ae69-bc764e2007e4;
 Thu, 07 May 2020 23:33:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=enc5iAHDfv7rNf1icsLvs3wyZaW6fP0Ej1utEl/X65k=; b=P3nBbGrxkMLNl8BVTzUzyVVzr
 jS5PvsFZxbRWS8wSYe7X0ZQMwRXqZ0R/6W+9CskOVGRqt+N4rshIMsCvAmfrrcX3wS+9BEGfY56ga
 ueFjNbwYNWv4K63jrZKoG0Wi5DhnVN2D8h9OPVi4HElaUEBok4tDvkgwmXbVMNSEk++Xg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWq1B-00027U-Hw; Thu, 07 May 2020 23:33:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWq1B-0004kp-7a; Thu, 07 May 2020 23:33:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWq1B-0002Yw-6n; Thu, 07 May 2020 23:33:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150064-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150064: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=a811c1fa0a02c062555b54651065899437bacdbe
X-Osstest-Versions-That: linux=f66ed1ebbfde37631fba289f7c399eaa70632abf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 07 May 2020 23:33:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150064 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150064/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 149906

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149906
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 149906
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 149906
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149906
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 149906
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149906
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 149906
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 149906
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149906
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                a811c1fa0a02c062555b54651065899437bacdbe
baseline version:
 linux                f66ed1ebbfde37631fba289f7c399eaa70632abf

Last test of basis   149906  2020-05-02 19:08:56 Z    5 days
Failing since        149912  2020-05-03 18:38:53 Z    4 days    3 attempts
Testing same since   150064  2020-05-07 06:00:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ahmed Abdelsalam <ahabdels@gmail.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Elder <elder@linaro.org>
  Allen Pais <allen.pais@oracle.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anthony Felice <tony.felice@timesys.com>
  Antoine Tenart <antoine.tenart@bootlin.com>
  Archana Patni <archana.patni@intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Artem Borisov <dedsa2002@gmail.com>
  Aya Levin <ayal@mellanox.com>
  Baruch Siach <baruch@tkos.co.il>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Bjørn Mork <bjorn@mork.no>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Clay McClure <clay@daemons.net>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dan Murphy <dmurphy@ti.com>
  Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
  David Ahern <dsahern@kernel.org>
  David E. Box <david.e.box@intel.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Dejin Zheng <zhengdejin5@gmail.com>
  Dexuan Cui <decui@microsoft.com>
  Divagar Mohandass <divagar.mohandass@intel.com>
  Douglas Anderson <dianders@chromium.org>
  Enric Balletbo i Serra <enric.balletbo@collabora.com>
  Eran Ben Elisha <eranbe@mellanox.com>
  Erez Shitrit <erezsh@mellanox.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Fabian Schindlatz <fabian.schindlatz@fau.de>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Gavin Shan <gshan@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Geert Uytterhoeven <geert@linux-m68k.org>
  George Spelvin <lkml@sdf.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Guillaume Nault <gnault@redhat.com>
  Gustavo A. R. Silva <gustavo@embeddedor.com>
  Gwendal Grignou <gwendal@chromium.org>
  Hans de Goede <hdegoede@redhat.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ido Schimmel <idosch@mellanox.com>
  Igor Russkikh <irusskikh@marvell.com>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Kicinski <kubakici@wp.pl>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jason Yan <yanaijie@huawei.com>
  Jeff Kirsher <jeffrey.t.kirsher@intel.com>
  Jia He <justin.he@arm.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Pirko <jiri@mellanox.com>
  Joerg Roedel <jroedel@suse.de>
  Jon Maloy <jmaloy@redhat.com>
  Jules Irenge <jbi.octave@gmail.com>
  Julia Lawall <Julia.Lawall@inria.fr>
  Julian Wiedmann <jwi@linux.ibm.com>
  Juliet Kim <julietk@linux.vnet.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kees Cook <keescook@chromium.org>
  Kevin Hao <haokexin@gmail.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lu Baolu <baolu.lu@linux.intel.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Matt Jolly <Kangie@footclan.ninja>
  Maxim Petrov <mmrmaximuzz@gmail.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael S. Tsirkin <mst@redhat.com>
  Moshe Shemesh <moshe@mellanox.com>
  Murali Karicheri <m-karicheri2@ti.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Neil Horman <nhorman@tuxdriver.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Paolo Abeni <pabeni@redhat.com>
  Parav Pandit <parav@mellanox.com>
  Qiushi Wu <wu000273@umn.edu>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rahul Lakkireddy <rahul.lakkireddy@chelsio.com>
  Richard Clark <richard.xnu.clark@gmail.com>
  Richard Cochran <richardcochran@gmail.com>
  Roi Dayan <roid@mellanox.com>
  Roman Mashak <mrv@mojatatu.com>
  Saeed Mahameed <saeedm@mellanox.com>
  Scott Dial <scott@scottdial.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shannon Nelson <snelson@pensando.io>
  Shay Agroskin <shayagr@amazon.com>
  Simon Wunderlich <sw@simonwunderlich.de>
  Soheil Hassas Yeganeh <soheil@google.com>
  Srinivas Pandruvada <srinivas.pandruvada@linux.intel.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefan Priebe - Profihost AG <s.priebe@profihost.ag>
  Stefano Garzarella <sgarzare@redhat.com>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Sven Eckelmann <sven@narfation.org>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Tariq Toukan <tariqt@mellanox.com>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tuong Lien <tuong.t.lien@dektech.com.au>
  Vasundhara Volam <vasundhara-v.volam@broadcom.com>
  Vinicius Costa Gomes <vinicius.gomes@intel.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiongfeng Wang <wangxiongfeng2@huawei.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Ying Xue <ying.xue@windriver.com>
  Yoshiyuki Kurauchi <ahochauwaaaaa@gmail.com>
  YueHaibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   f66ed1ebbfde..a811c1fa0a02  a811c1fa0a02c062555b54651065899437bacdbe -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri May 08 05:25:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 05:25:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWvVj-0006hx-CZ; Fri, 08 May 2020 05:25:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWvVi-0006hs-QF
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 05:25:22 +0000
X-Inumbo-ID: 4bae50b2-90ec-11ea-9fc2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4bae50b2-90ec-11ea-9fc2-12813bfff9fa;
 Fri, 08 May 2020 05:25:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=S3Mqk8UlwhtVS9RrgI6IOfpoF4qbafQEMt31s4K3LKQ=; b=QcVMOcmq4oWEmmNx92PWPXcHc
 DA70M8R7Li0TV9c9Z5pl+54Dz2TVkiBQFVWMI6gDNYIyzFz4vil3lVcCOJz+baY9fWP5AiXOd7zIN
 GYTDfZB0JpYKp5wCsYlUp0677MHpZ34lSxDgCF4VADRU6vmirrUVuhZlny1q8ChGKO3eU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWvVb-0007cd-QH; Fri, 08 May 2020 05:25:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWvVb-00035E-DU; Fri, 08 May 2020 05:25:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWvVb-0006QE-Ae; Fri, 08 May 2020 05:25:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150067-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150067: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=8a6b1665d987d043c12dc723d758a7d2ca765264
X-Osstest-Versions-That: xen=2e3d87cc734a895ef5b486926274a178836b67a9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 05:25:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150067 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150067/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150048

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150048
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150048
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150048
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150048
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150048
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150048
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150048
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150048
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150048
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8a6b1665d987d043c12dc723d758a7d2ca765264
baseline version:
 xen                  2e3d87cc734a895ef5b486926274a178836b67a9

Last test of basis   150048  2020-05-05 19:06:53 Z    2 days
Testing same since   150067  2020-05-07 10:22:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ashok Raj <ashok.raj@intel.com>
  Borislav Petkov <bp@suse.de>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>
  Thomas Gleixner <tglx@linutronix.de>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   2e3d87cc73..8a6b1665d9  8a6b1665d987d043c12dc723d758a7d2ca765264 -> master


From xen-devel-bounces@lists.xenproject.org Fri May 08 05:29:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 05:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWvZJ-0006qW-UK; Fri, 08 May 2020 05:29:05 +0000
Resent-Date: Fri, 08 May 2020 05:29:05 +0000
Resent-Message-Id: <E1jWvZJ-0006qW-UK@lists.xenproject.org>
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TvG7=6W=patchew.org=no-reply@srs-us1.protection.inumbo.net>)
 id 1jWvZJ-0006qR-8C
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 05:29:05 +0000
X-Inumbo-ID: d26ba53a-90ec-11ea-ae69-bc764e2007e4
Received: from sender4-of-o53.zoho.com (unknown [136.143.188.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d26ba53a-90ec-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 05:29:01 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1588915714; cv=none; 
 d=zohomail.com; s=zohoarc; 
 b=gGOLUL5jv1FqajGDS9KQXITm2iGvRqwwcft3VSyDoaqc0SQuIMwP1X48RDUGfEMdzXZU9PCqnYOuwBejtGILhVWzXzQZF7j4o8zR2+4D0ZpjsuemOkDDAxtm07rl9ftMqPB6wO66xr4xy5moRVrhctKeg3HoZ/pdG66O3MterLQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com;
 s=zohoarc; t=1588915714;
 h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:Reply-To:Subject:To;
 bh=n7E/tdChMElF0r6ONrCQIBFdxsWeY+gIjsdn0eo3/6M=; 
 b=mts2fZ0Pq2nOGyXAK53IVO+dEfE8xgKfJO9LNTa7A/LpWtIrB09L+cttv0eVKE79dYmjH3au5i5gvr75G5e+Dpwb2hRE4ERRfVrimLLNuBAsKebalzj9VtPrQaU20CJK3AGhAFVgPiv3yaLUQZh/l2siOq/20eu4PBT0lPsQigs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
 spf=pass  smtp.mailfrom=no-reply@patchew.org;
 dmarc=pass header.from=<no-reply@patchew.org>
 header.from=<no-reply@patchew.org>
Received: from [172.17.0.3] (23.253.156.214 [23.253.156.214]) by
 mx.zohomail.com with SMTPS id 1588915712512575.8488904048985;
 Thu, 7 May 2020 22:28:32 -0700 (PDT)
Message-ID: <158891571032.29923.11626722130579684161@45ef0f9c86ae>
In-Reply-To: <20200507155813.16169-1-philmd@redhat.com>
Subject: Re: [PATCH] accel: Move Xen accelerator code under accel/xen/
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
Resent-From: 
From: no-reply@patchew.org
To: philmd@redhat.com
Date: Thu, 7 May 2020 22:28:32 -0700 (PDT)
X-ZohoMailClient: External
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: qemu-devel@nongnu.org
Cc: sstabellini@kernel.org, ehabkost@redhat.com, quintela@redhat.com,
 paul@xen.org, dgilbert@redhat.com, qemu-devel@nongnu.org,
 xen-devel@lists.xenproject.org, aleksandar.qemu.devel@gmail.com,
 mst@redhat.com, marcel.apfelbaum@gmail.com, pbonzini@redhat.com,
 anthony.perard@citrix.com, imammedo@redhat.com, philmd@redhat.com,
 aurelien@aurel32.net, rth@twiddle.net
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Patchew URL: https://patchew.org/QEMU/20200507155813.16169-1-philmd@redhat.com/



Hi,

This series failed the docker-quick@centos7 build test. Please find the testing commands and
their output below. If you have Docker installed, you can probably reproduce it
locally.

=== TEST SCRIPT BEGIN ===
#!/bin/bash
make docker-image-centos7 V=1 NETWORK=1
time make docker-test-quick@centos7 SHOW_ENV=1 J=14 NETWORK=1
=== TEST SCRIPT END ===

/tmp/qemu-test/src/tests/qtest/libqtest.c:175: kill_qemu() detected QEMU death from signal 6 (Aborted) (core dumped)
  TEST    check-unit: tests/check-qdict
  TEST    iotest-qcow2: 007
ERROR - too few tests run (expected 5, got 0)
make: *** [check-qtest-aarch64] Error 1
make: *** Waiting for unfinished jobs....
  TEST    check-qtest-x86_64: tests/qtest/fdc-test
  TEST    check-unit: tests/check-block-qdict
---
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['sudo', '-n', 'docker', 'run', '--label', 'com.qemu.instance.uuid=e594b974940444968b77f9424310ca3c', '-u', '1003', '--security-opt', 'seccomp=unconfined', '--rm', '-e', 'TARGET_LIST=', '-e', 'EXTRA_CONFIGURE_OPTS=', '-e', 'V=', '-e', 'J=14', '-e', 'DEBUG=', '-e', 'SHOW_ENV=1', '-e', 'CCACHE_DIR=/var/tmp/ccache', '-v', '/home/patchew2/.cache/qemu-docker-ccache:/var/tmp/ccache:z', '-v', '/var/tmp/patchew-tester-tmp-p2n4ope2/src/docker-src.2020-05-08-01.15.14.21618:/var/tmp/qemu:z,ro', 'qemu:centos7', '/var/tmp/qemu/run', 'test-quick']' returned non-zero exit status 2.
filter=--filter=label=com.qemu.instance.uuid=e594b974940444968b77f9424310ca3c
make[1]: *** [docker-run] Error 1
make[1]: Leaving directory `/var/tmp/patchew-tester-tmp-p2n4ope2/src'
make: *** [docker-run-test-quick@centos7] Error 2

real    13m16.276s
user    0m8.185s


The full log is available at
http://patchew.org/logs/20200507155813.16169-1-philmd@redhat.com/testing.docker-quick@centos7/?type=message.
---
Email generated automatically by Patchew [https://patchew.org/].
Please send your feedback to patchew-devel@redhat.com


From xen-devel-bounces@lists.xenproject.org Fri May 08 05:35:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 05:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWvez-0007fL-Jd; Fri, 08 May 2020 05:34:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EkJI=6W=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jWvex-0007fG-Os
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 05:34:55 +0000
X-Inumbo-ID: a57e6b92-90ed-11ea-b07b-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a57e6b92-90ed-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 05:34:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588916094;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=j9cIxYZ4M6GBlMag/rjs0c76h4zJpAwA0JgCL0kqyZo=;
 b=DnBg/zKtiJLM8rB2xAKVzSKB1Z8eFZ/EXEV836lEpDf3FaZP5WTQimxmS4KgG9f79dENrf
 LKgj/Ks00BFZxH3c8gbGXevG+KX++Q4k3DbfRTaCaExdLoU0wnJcjGzCz6rcYLkUeXBFiL
 0q7HhgQ/xyoqsk9HoBWYf9f9f2K3vtc=
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-431-FufhlvFyOYyQVXkNUc9kSg-1; Fri, 08 May 2020 01:34:52 -0400
X-MC-Unique: FufhlvFyOYyQVXkNUc9kSg-1
Received: by mail-wr1-f72.google.com with SMTP id e14so344350wrv.11
 for <xen-devel@lists.xenproject.org>; Thu, 07 May 2020 22:34:52 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=gUS4ClGn6G8XqFXHNiGlmCwk06xms6ktvah597TE/uY=;
 b=XIyo1iMPAI5+ZiFbvEKWN4IOBeguv3XFYhyIz67ej4sCk2nq3gwm2tv3V7kHsYJcjT
 A6v1zoM/SXDWihppY1TYjXU+xzgwm8oKgWhTrYA5brURpOsOoFc3hdxuiwqARQyx0ulG
 Kd+OICwlE9d011wAWanZF81V+THP2AKWEt3SHdXeqyHrZiZthcBODAhSRSA4lOAcSbnA
 E8VWHQQxitWmtEpvpmZWUS9xi+F6rZtEOJKYA8T7GA9gHR6LTeJ8z5fagcMxl8rhJA8x
 wMD1cPGbus6CnL+kqjJ7WSjYIlunnE9T0lb/Jzti71/lKarTdpqLTajuw2VyHFk+ETdr
 GncA==
X-Gm-Message-State: AGi0PuYauyuoo3HIG5itEObCzj0XS54LihwaaL8M9l5Z64HIiOy8P2NZ
 L6EgQG/av0sB/tLm/ElHVd4ElGdceObl+SbE5Z2/himxg1AwE/yOP/D1+u7sw1zacAjK37mBNOK
 6fLSy0k1ajPGa4ZE5B+lgqieJfpM=
X-Received: by 2002:a1c:4e15:: with SMTP id g21mr13547747wmh.29.1588916091366; 
 Thu, 07 May 2020 22:34:51 -0700 (PDT)
X-Google-Smtp-Source: APiQypI61Fm7WBN/SXhfhIYSOo0bFejh0pT6oDtMsWaGpFR+cCbdXjuMM/JbC85hgfYNOcJ8QGT+TQ==
X-Received: by 2002:a1c:4e15:: with SMTP id g21mr13547723wmh.29.1588916091120; 
 Thu, 07 May 2020 22:34:51 -0700 (PDT)
Received: from [192.168.1.39] (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id a67sm12158464wmc.30.2020.05.07.22.34.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 07 May 2020 22:34:50 -0700 (PDT)
Subject: Re: [PATCH] accel: Move Xen accelerator code under accel/xen/
To: qemu-devel@nongnu.org, Paolo Bonzini <pbonzini@redhat.com>
References: <20200507155813.16169-1-philmd@redhat.com>
From: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-ID: <d84e5541-da7a-4ead-6277-3b144744f58b@redhat.com>
Date: Fri, 8 May 2020 07:34:48 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <20200507155813.16169-1-philmd@redhat.com>
Content-Language: en-US
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Juan Quintela <quintela@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, Paul Durrant <paul@xen.org>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Igor Mammedov <imammedo@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Aurelien Jarno <aurelien@aurel32.net>, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/7/20 5:58 PM, Philippe Mathieu-Daudé wrote:
> This code is not related to hardware emulation.
> Move it under accel/ with the other hypervisors.
>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> We could also move the memory management functions from
> hw/i386/xen/xen-hvm.c but it is not trivial.
>
> Necessary step to remove "exec/ram_addr.h" in the next series.
> ---
>   include/exec/ram_addr.h                    |  2 +-
>   include/hw/xen/xen.h                       | 11 ------
>   include/sysemu/xen.h                       | 40 ++++++++++++++++++++++
>   hw/xen/xen-common.c => accel/xen/xen-all.c |  3 ++
>   hw/acpi/piix4.c                            |  2 +-
>   hw/i386/pc.c                               |  1 +
>   hw/i386/pc_piix.c                          |  1 +
>   hw/i386/pc_q35.c                           |  1 +
>   hw/i386/xen/xen-hvm.c                      |  1 +
>   hw/i386/xen/xen_platform.c                 |  1 +
>   hw/isa/piix3.c                             |  1 +
>   hw/pci/msix.c                              |  1 +
>   migration/savevm.c                         |  2 +-
>   softmmu/vl.c                               |  2 +-
>   stubs/xen-hvm.c                            |  9 -----
>   target/i386/cpu.c                          |  2 +-
>   MAINTAINERS                                |  2 ++
>   accel/Makefile.objs                        |  1 +
>   accel/xen/Makefile.objs                    |  1 +
>   hw/xen/Makefile.objs                       |  2 +-
>   20 files changed, 60 insertions(+), 26 deletions(-)
>   create mode 100644 include/sysemu/xen.h
>   rename hw/xen/xen-common.c => accel/xen/xen-all.c (99%)
>   create mode 100644 accel/xen/Makefile.objs
>
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 5e59a3d8d7..4e05292f91 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -21,7 +21,7 @@
>
>   #ifndef CONFIG_USER_ONLY
>   #include "cpu.h"
> -#include "hw/xen/xen.h"
> +#include "sysemu/xen.h"
>   #include "sysemu/tcg.h"
>   #include "exec/ramlist.h"
>   #include "exec/ramblock.h"
> diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
> index 5ac1c6dc55..771dd447f2 100644
> --- a/include/hw/xen/xen.h
> +++ b/include/hw/xen/xen.h
> @@ -20,13 +20,6 @@ extern uint32_t xen_domid;
>   extern enum xen_mode xen_mode;
>   extern bool xen_domid_restrict;
>
> -extern bool xen_allowed;
> -
> -static inline bool xen_enabled(void)
> -{
> -    return xen_allowed;
> -}
> -
>   int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
>   void xen_piix3_set_irq(void *opaque, int irq_num, int level);
>   void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
> @@ -39,10 +32,6 @@ void xenstore_store_pv_console_info(int i, struct Chardev *chr);
>
>   void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory);
>
> -void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
> -                   struct MemoryRegion *mr, Error **errp);
> -void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
> -
>   void xen_register_framebuffer(struct MemoryRegion *mr);
>
>   #endif /* QEMU_HW_XEN_H */
> diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
> new file mode 100644
> index 0000000000..609d2d4184
> --- /dev/null
> +++ b/include/sysemu/xen.h
> @@ -0,0 +1,40 @@
> +/*
> + * QEMU Xen support
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef SYSEMU_XEN_H
> +#define SYSEMU_XEN_H
> +
> +#ifdef CONFIG_XEN
> +
> +extern bool xen_allowed;
> +
> +#define xen_enabled() (xen_allowed)
> +
> +#ifndef CONFIG_USER_ONLY
> +void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
> +void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
> +                   struct MemoryRegion *mr, Error **errp);
> +#endif
> +
> +#else /* !CONFIG_XEN */
> +
> +#define xen_enabled() 0
> +#ifndef CONFIG_USER_ONLY
> +static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
> +{
> +    abort();
> +}
> +static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
> +                                 MemoryRegion *mr, Error **errp)
> +{
> +    abort();
> +}
> +#endif

Unfortunately this triggers:

#1  0x00007fca778a5895 abort (libc.so.6)
#2  0x000055f51ebd3e12 xen_hvm_modified_memory (qemu-system-aarch64)
#3  0x000055f51ebd44c9 cpu_physical_memory_set_dirty_range (qemu-system-aarch64)
#4  0x000055f51ebd9de4 ram_block_add (qemu-system-aarch64)
#5  0x000055f51ebda2e3 qemu_ram_alloc_internal (qemu-system-aarch64)
#6  0x000055f51ebda3be qemu_ram_alloc (qemu-system-aarch64)
#7  0x000055f51ec3db9b memory_region_init_ram_shared_nomigrate (qemu-system-aarch64)
#8  0x000055f51ef089e6 ram_backend_memory_alloc (qemu-system-aarch64)
#9  0x000055f51ef081ae host_memory_backend_memory_complete (qemu-system-aarch64)
#10 0x000055f51f2b624b user_creatable_complete (qemu-system-aarch64)
#11 0x000055f51ed93382 create_default_memdev (qemu-system-aarch64)
#12 0x000055f51ed96d1a qemu_init (qemu-system-aarch64)
#13 0x000055f51f3b14bb main (qemu-system-aarch64)
#14 0x00007fca778a6f43 __libc_start_main (libc.so.6)
#15 0x000055f51ebd27de _start (qemu-system-aarch64)

I'll resend adding the following patch checking for xen_enabled() before
calling, as we do with other accelerators:

-- >8 --
diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
@@ -330,7 +330,9 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
          }
      }

-    xen_hvm_modified_memory(start, length);
+    if (xen_enabled()) {
+        xen_hvm_modified_memory(start, length);
+    }
  }

  #if !defined(_WIN32)
@@ -388,7 +390,9 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
              }
          }

-        xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
+        if (xen_enabled()) {
+            xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
+        }
      } else {
          uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
---



From xen-devel-bounces@lists.xenproject.org Fri May 08 05:54:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 05:54:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWvxb-0000tQ-A5; Fri, 08 May 2020 05:54:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jWvxa-0000tL-3V
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 05:54:10 +0000
X-Inumbo-ID: 5480bf1c-90f0-11ea-9fc4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5480bf1c-90f0-11ea-9fc4-12813bfff9fa;
 Fri, 08 May 2020 05:54:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 13E08AA55;
 Fri,  8 May 2020 05:54:09 +0000 (UTC)
Subject: Re: [PATCH 1/3] xen/sched: allow rcu work to happen when syncing cpus
 in core scheduling
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <20200430151559.1464-1-jgross@suse.com>
 <20200430151559.1464-2-jgross@suse.com>
 <7bdf9bd021ff4bd1131a8a41f42b37d6559f600f.camel@suse.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <dac6bdf3-17a1-aeea-f14b-7154222589b4@suse.com>
Date: Fri, 8 May 2020 07:54:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <7bdf9bd021ff4bd1131a8a41f42b37d6559f600f.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.20 20:34, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
>> With RCU barriers moved from tasklets to normal RCU processing, cpu
>> offlining in core scheduling might deadlock due to cpu synchronization
>> being required by RCU processing and core scheduling concurrently.
>>
>> Fix that by bailing out of core scheduling synchronization in case of
>> pending RCU work. Additionally the RCU softirq is now required to be
>> of higher priority than the scheduling softirqs in order to do RCU
>> processing before entering the scheduler again, as bailing out of the
>> core scheduling synchronization requires raising another softirq,
>> SCHED_SLAVE, which would otherwise bypass RCU processing again.
>>
>> Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> In general, I'm fine with this patch and it can have my:
> 
> Acked-by: Dario Faggioli <dfaggioli@suse.com>
> 
> I'd ask for one thing, but that doesn't affect the ack, as it's not
> "my" code. :-)
> 
>> diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
>> index b4724f5c8b..1f6c4783da 100644
>> --- a/xen/include/xen/softirq.h
>> +++ b/xen/include/xen/softirq.h
>> @@ -4,10 +4,10 @@
>>   /* Low-latency softirqs come first in the following list. */
>>   enum {
>>       TIMER_SOFTIRQ = 0,
>> +    RCU_SOFTIRQ,
>>       SCHED_SLAVE_SOFTIRQ,
>>       SCHEDULE_SOFTIRQ,
>>       NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
>> -    RCU_SOFTIRQ,
>>       TASKLET_SOFTIRQ,
>>       NR_COMMON_SOFTIRQS
>>   };
>>
> So, until now, it was kind of intuitive (at least, it was to me :-) )
> that the TIMER_SOFTIRQ, we want it first, and the SCHEDULE one right
> after it. And the comment above the enum ("Low-latency softirqs come
> first in the following list"), although brief, is effective.
> 
> With the introduction of SCHED_SLAVE, things became slightly more
> complex, but it still is not too far a reach to figure out the fact
> that we want it to be above SCHEDULE, and the reasons for that.
> 
> Now that we're moving RCU from (almost) the very bottom to up here, I
> think we need some more info, there in the code. Sure all the bits and
> pieces are there in the changelogs, but I think it would be rather
> helpful to have them easily available to people trying to understand or
> modifying this code, e.g., with a comment.

That's reasonable.

> 
> I was also thinking that, even better than a comment, would be a
> (build?) BUG_ON if RCU does not have a smaller value than SCHED_SLAVE and SCHEDULE.
> Not here, of course, but maybe close to some piece of code that relies
> on this assumption. Something that, if I tomorrow put the SCHED* ones
> on top again, would catch my attention and tell me that I either take
> care of that code path too, or I can't do it.
> 
> However, I'm not sure whether, e.g., the other hunk of this patch would
> be a suitable place for something like this. And I can't, off the
> top of my head, think of a really good place to put it.
> Therefore, I'm "only" asking for the comment... but if you (or others)
> have ideas, that'd be cool. :-)

I think the other hunk is exactly where the BUILD_BUG_ON() should be.
And this is a perfect place for the comment, too, as its placement will
explain the context very well.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 08 06:43:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 06:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWwjP-0004wM-E0; Fri, 08 May 2020 06:43:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWwjO-0004wH-0V
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 06:43:34 +0000
X-Inumbo-ID: 39567590-90f7-11ea-9fc8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39567590-90f7-11ea-9fc8-12813bfff9fa;
 Fri, 08 May 2020 06:43:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=G2vpSPcfEKj/EsR1W7dutBuBJ8VQCG6k/a95+gh/l8Q=; b=REAq04qZZfsIUt2HFu2HhXArD
 eHkf9yhlxRhScv7nVeQ9xImTs8leTmtn2t52AblqKw1a0xTu8l7BB4hYBAYdCgY87X9akvL0PWcMf
 TwrBgNFyO3qk/zaeBi4olZ63Lg46JL5rujVPgYYDtB4Eb/pTyHNVjBwLyvOJg7exKdgC8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWwjH-0000gC-VJ; Fri, 08 May 2020 06:43:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWwjH-00077G-N4; Fri, 08 May 2020 06:43:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWwjH-0007uj-MC; Fri, 08 May 2020 06:43:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150068-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 150068: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:regression
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:heisenbug
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
X-Osstest-Versions-That: xen=45c90737d5f0c8bf479adcd8cb88450f1998e55c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 06:43:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150068 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150068/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail in 150038 REGR. vs. 149649

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail pass in 150038

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop       fail blocked in 149649
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop       fail blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop      fail blocked in 149649
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop fail in 150038 blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail in 150038 blocked in 149649
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 150038 like 149649
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail in 150038 like 149649
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149649
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149649
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149649
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149649
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
baseline version:
 xen                  45c90737d5f0c8bf479adcd8cb88450f1998e55c

Last test of basis   149649  2020-04-14 13:35:25 Z   23 days
Testing same since   150038  2020-05-05 16:06:01 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Apr 3 13:03:40 2020 +0100

    tools/xenstore: fix a use after free problem in xenstored
    
    Commit 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object
    twice") introduced a potential use-after-free problem in
    domain_cleanup(): after calling talloc_unlink() for domain->conn,
    domain->conn is set to NULL. The problem is that domain is registered
    as a talloc child of domain->conn, so it might be freed by the
    talloc_unlink() call.
    
    With Xenstore being single threaded there are normally no concurrent
    memory allocations running, and freeing a virtual memory area normally
    doesn't make that area inaccessible. A problem could occur only if
    either a received signal results in a memory allocation in the signal
    handler (SIGHUP is a primary candidate, as it leads to reopening the
    log file), or the talloc framework does some internal memory
    allocation while freeing the memory (which would clobber the freed
    domain structure).
    
    Fixes: 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object twice")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    (cherry picked from commit bb2a34fd740e9a26be9e2244f1a5b4cef439e5a8)
    (cherry picked from commit dc5176d0f9434e275e0be1df8d0518e243798beb)
    (cherry picked from commit a997ffe678e698ff2b4c89ae5a98661d12247fef)
    (cherry picked from commit 48e8564435aca590f1c292ab7bb1f3dbc6b75693)
    (cherry picked from commit 1e722e6971539eab4f484affd60490cbc8429951)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 08 06:57:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 06:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWwxC-0005rk-N1; Fri, 08 May 2020 06:57:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jWwxB-0005rf-P4
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 06:57:49 +0000
X-Inumbo-ID: 39962f8a-90f9-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39962f8a-90f9-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 06:57:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6C1FZBs//uQvS2f2/W+y0yNy/2/FZ4Yw9LqMOYw8SYY=; b=Ttpvom1lS26O4+Cl/hKoc1dtA
 8x8GShDMnUVLIADdfAWjvgSpo00MIe+w/AOjjwNqk1nbI5MAFt2ZBcjrWFINtatxcsnA8zVUIVPvt
 6BItxuedqmEjEqYiFUR+uhEZWL5hyY+oCkreF8bhYjtTk49mLt+GTo2iLqTV9aUvoY4k0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWwx9-0000wX-Cg; Fri, 08 May 2020 06:57:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jWwx9-0007b7-32; Fri, 08 May 2020 06:57:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jWwx9-0007vm-29; Fri, 08 May 2020 06:57:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150070-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150070: regressions - FAIL
X-Osstest-Failures: linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
X-Osstest-Versions-That: linux=9895e0ac338a8060e6947f897397c21c4d78d80d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 06:57:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150070 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150070/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 149905

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 149905

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5
baseline version:
 linux                9895e0ac338a8060e6947f897397c21c4d78d80d

Last test of basis   149905  2020-05-02 16:38:26 Z    5 days
Testing same since   150054  2020-05-06 06:41:12 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Aharon Landau <aharonl@mellanox.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alaa Hleihel <alaa@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Arun Easi <aeasi@marvell.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Christoph Hellwig <hch@lst.de>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@intel.com>
  David Disseldorp <ddiss@suse.de>
  David Howells <dhowells@redhat.com>
  David Sterba <dsterba@suse.com>
  Dexuan Cui <decui@microsoft.com>
  Douglas Anderson <dianders@chromium.org>
  Filipe Manana <fdmanana@suse.com>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hui Wang <hui.wang@canonical.com>
  Iuliana Prodan <iuliana.prodan@nxp.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Joerg Roedel <jroedel@suse.de>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marek Behún <marek.behun@nic.cz>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Liu <liumartin@google.com>
  Martin Wilck <mwilck@suse.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Niklas Cassel <niklas.cassel@wdc.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Paul Moore <paul@paul-moore.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rayagonda Kokatanur <rayagonda.kokatanur@broadcom.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  ryan_chen <ryan_chen@aspeedtech.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Shawn Guo <shawnguo@kernel.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sumit Semwal <sumit.semwal@linaro.org>
  Sunwook Eom <speed.eom@samsung.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Khoruzhick <anarsoul@gmail.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vinod Koul <vkoul@kernel.org>
  Wei Liu <wei.liu@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yan Zhao <yan.y.zhao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1726 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 08 07:14:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 07:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWxCk-0007Y9-8j; Fri, 08 May 2020 07:13:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWxCi-0007Xz-VX
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 07:13:52 +0000
X-Inumbo-ID: 77e9aa6c-90fb-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77e9aa6c-90fb-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 07:13:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3E717AA4F;
 Fri,  8 May 2020 07:13:53 +0000 (UTC)
Subject: Re: [PATCH v8 06/12] x86/HVM: make hvmemul_blk() capable of handling
 r/o operations
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <1587789a-b0d6-6d18-99fc-a94bbea52d7b@suse.com>
 <507d4ced-d6ff-dfd9-d6e5-0a732c334de1@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <27a0e367-6b63-716c-185e-7d4325db1acb@suse.com>
Date: Fri, 8 May 2020 09:13:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <507d4ced-d6ff-dfd9-d6e5-0a732c334de1@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 22:34, Andrew Cooper wrote:
> On 05/05/2020 09:15, Jan Beulich wrote:
>> In preparation for handling e.g. FLDENV or {F,FX,X}RSTOR here as well.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v8: New (could be folded into "x86emul: support MOVDIR{I,64B} insns",
>>     but would invalidate Paul's R-b there).
>>
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -1453,7 +1453,7 @@ static int hvmemul_blk(
>>      struct hvm_emulate_ctxt *hvmemul_ctxt =
>>          container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
>>      unsigned long addr;
>> -    uint32_t pfec = PFEC_page_present | PFEC_write_access;
>> +    uint32_t pfec = PFEC_page_present;
>>      int rc;
>>      void *mapping = NULL;
>>  
>> @@ -1462,6 +1462,9 @@ static int hvmemul_blk(
>>      if ( rc != X86EMUL_OKAY || !bytes )
>>          return rc;
>>  
>> +    if ( x86_insn_is_mem_write(state, ctxt) )
>> +        pfec |= PFEC_write_access;
>> +
> 
> For the instructions with two memory operands, it conflates the
> read-only side with the read-write side.

"Conflates" is probably the wrong word? The bug you point out is
really in x86_insn_is_mem_write(), in that so far it would return
false for MOVDIR64B and ENQCMD{,S}. As a result I'll fix the
issue there while, at the same time, folding this patch into
"x86emul: support MOVDIR{I,64B} insns".

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 07:19:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 07:19:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWxIR-0007r6-NX; Fri, 08 May 2020 07:19:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWxIQ-0007qy-8V
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 07:19:46 +0000
X-Inumbo-ID: 4b07971a-90fc-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b07971a-90fc-11ea-b9cf-bc764e2007e4;
 Fri, 08 May 2020 07:19:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7E6C2AFDB;
 Fri,  8 May 2020 07:19:47 +0000 (UTC)
Subject: Re: [PATCH v8 02/12] x86emul: support MOVDIR{I,64B} insns
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <04e52d0a-fcce-eba4-0341-3b8838c0faae@suse.com>
 <37726d04-6fb8-5747-82ae-d206737905cf@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <46538792-1dc8-d990-65c7-ccff88721e22@suse.com>
Date: Fri, 8 May 2020 09:19:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <37726d04-6fb8-5747-82ae-d206737905cf@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 20:30, Andrew Cooper wrote:
> On 05/05/2020 09:13, Jan Beulich wrote:
>> Introduce a new blk() hook, paralleling the rmw() one in a certain way,
>> but being intended for larger data sizes, and hence its HVM intermediate
>> handling function doesn't fall back to splitting the operation if the
>> requested virtual address can't be mapped.
>>
>> Note that SDM revision 071 doesn't specify exception behavior for
>> ModRM.mod == 0b11; assuming #UD here.
> 
> Still stale?  It does #UD on current hardware, and will cease to #UD in
> the future when the encoding space gets used for something else.

What do you mean by "still stale"? Other insns allowing for only
memory operands have the #UD spelled out in the doc. Are you
implying by the 2nd sentence that it should rather be
"goto unrecognized_insn"? I'm afraid we're not very consistent
yet with what we do in such cases; I could certainly work
towards improving this, but the question is whether it is really
sensible in all cases to assume such partially unused encodings
may get used in the future.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Paul Durrant <paul@xen.org>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks, but as per my reply to your patch 6 comment the patch
here will need to be revised, so I'll not apply this just yet
unless you indicate up front that you're fine with me keeping it.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 07:32:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 07:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWxUV-0001F4-RJ; Fri, 08 May 2020 07:32:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWxUU-0001Ez-D1
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 07:32:14 +0000
X-Inumbo-ID: 071e06cf-90fe-11ea-9fcd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 071e06cf-90fe-11ea-9fcd-12813bfff9fa;
 Fri, 08 May 2020 07:32:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D2996AB5C;
 Fri,  8 May 2020 07:32:13 +0000 (UTC)
Subject: Re: [PATCH v8 03/12] x86emul: support ENQCMD insns
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <099d03d0-2846-2a3d-93ec-2d10dab12655@suse.com>
 <4fdaeefb-9593-789d-9f73-510e89d6df43@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0c9b92b1-9caf-741b-eaaa-18c652e6b504@suse.com>
Date: Fri, 8 May 2020 09:32:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <4fdaeefb-9593-789d-9f73-510e89d6df43@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 20:59, Andrew Cooper wrote:
> On 05/05/2020 09:13, Jan Beulich wrote:
>> Note that the ISA extensions document revision 038 doesn't specify
>> exception behavior for ModRM.mod == 0b11; assuming #UD here.
> 
> Stale.

In which way (beyond the question of whether to use
goto unrecognized_insn in the code instead)? The doc doesn't
mention ModRM.mod specifics in any way.

>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -11480,11 +11513,36 @@ int x86_emul_blk(
>>  {
>>      switch ( state->blk )
>>      {
>> +        bool zf;
>> +
>>          /*
>>           * Throughout this switch(), memory clobbers are used to compensate
>>           * that other operands may not properly express the (full) memory
>>           * ranges covered.
>>           */
>> +    case blk_enqcmd:
>> +        ASSERT(bytes == 64);
>> +        if ( ((unsigned long)ptr & 0x3f) )
>> +        {
>> +            ASSERT_UNREACHABLE();
>> +            return X86EMUL_UNHANDLEABLE;
>> +        }
>> +        *eflags &= ~EFLAGS_MASK;
>> +#ifdef HAVE_AS_ENQCMD
>> +        asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %0")
> 
> %[zf]

Oops, indeed.

>> +              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
>> +              : [src] "r" (data), [dst] "r" (ptr) : "memory" );
> 
> Can't src get away with being "m" (*data)?  There is no need to force it
> into a single register, even if it is overwhelmingly likely to end up
> with %rsi scheduled here.

Well, *data can't be used, as data is of void * type. It would
need a suitable cast on it, but since that's not going to avoid
the memory clobber I didn't think it was worth it (see also the
comment ahead of the switch()).

>> --- a/xen/include/asm-x86/msr-index.h
>> +++ b/xen/include/asm-x86/msr-index.h
>> @@ -420,6 +420,10 @@
>>  #define MSR_IA32_TSC_DEADLINE		0x000006E0
>>  #define MSR_IA32_ENERGY_PERF_BIAS	0x000001b0
>>  
>> +#define MSR_IA32_PASID			0x00000d93
>> +#define  PASID_PASID_MASK		0x000fffff
>> +#define  PASID_VALID			0x80000000
>> +
> 
> Above the legacy line please as this is using the newer style,

Ah, yes, I should have remembered to re-base this over your
change there.

> and drop
> _IA32.  Intel's ideal of architectural-ness isn't interesting or worth
> the added code volume.

We'd been there before, and you know I disagree. I think it
is wrong for me to make the change, but I will do so just
to retain your ack.

> PASID_PASID_MASK isn't great, but I can't suggest anything better, and
> MSR_PASID_MASK doesn't work either.
> 
> Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 07:34:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 07:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWxWX-0001MD-7T; Fri, 08 May 2020 07:34:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWxWV-0001M6-6n
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 07:34:19 +0000
X-Inumbo-ID: 532c4b64-90fe-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 532c4b64-90fe-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 07:34:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 403E5AF68;
 Fri,  8 May 2020 07:34:20 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] x86emul: support SERIALIZE
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <0bbbf95e-48ec-ee73-5234-52cf9c6c06d8@suse.com>
 <64de91ff-41ae-baf1-1119-0ba39df32275@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0c5a03c6-6c4f-c6ec-e474-71a2badd1c9c@suse.com>
Date: Fri, 8 May 2020 09:34:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <64de91ff-41ae-baf1-1119-0ba39df32275@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 21:32, Andrew Cooper wrote:
> On 05/05/2020 09:14, Jan Beulich wrote:
>> ... enabling its use by all guest kinds at the same time.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks.

>> @@ -5660,6 +5661,18 @@ x86_emulate(
>>                  goto done;
>>              break;
>>  
>> +        case 0xe8:
>> +            switch ( vex.pfx )
>> +            {
>> +            case vex_none: /* serialize */
>> +                host_and_vcpu_must_have(serialize);
>> +                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
> 
> There is very little need for an actual implementation here.  The VMExit
> to get here is good enough.
> 
> The only question is whether pre-unrestricted_guest Intel boxes are
> liable to find this in real mode code.

Apart from this we also shouldn't assume HVM in the core emulator,
I think.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 07:35:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 07:35:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWxXA-0001Po-H9; Fri, 08 May 2020 07:35:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XRfz=6W=epam.com=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jWxX9-0001Pf-US
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 07:35:00 +0000
X-Inumbo-ID: 69dfb3e8-90fe-11ea-9fcd-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.87]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69dfb3e8-90fe-11ea-9fcd-12813bfff9fa;
 Fri, 08 May 2020 07:34:58 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=kL4iEETBB9dNaWgerJOu0cp66WrkVNOiP8Y2YVTdyRUyJRHm6CsEXTOyN0zTnhZ4OhA1DdR08um998nO57EriFBBX5NT5xGPSzyyshfv/heOK1FqpQa5uuofdSbiAulujXIBoqW4y9M8GANKZ+76LfOtGu0/HcpMCe6KPoZr1afzuoXHxaRZw327aRlDmj/5PJHK/xX4bAArcjMKYTUXI8Tpo2mWfRs8lnk7HK9UPgaUhScYUuM0R5MDBk4FIf51wzrwSNhvPkwEOF80WROCRGkHDmwL/mS6APYyxWWxSfmSQaCDydh0iJi3Wg0P3PVzOQoJdJeiV1h6t1ryl22WsA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nurOTnXNAH6y5TE/bgfWbV4zWOoQBSLQbBmndLpMI3w=;
 b=IDTnBW4iH8hj+mMQmWwZojCN6tABM27OSoFxNXm/5NvUBq+Q56QXnp3yLwhr+Rxhxi1NpKPsN8OXBZsIJTx4VPXLlnl3rSzBbgR99bi7yh6K5+9NiisY+ekuD2GyEZWbnTaytwDs9gPBYA+aXOJrW18vWjZIaJdR58WpOA+5XCgrlRmW73DMBEJHPpca1gcEtU/YdLvnTSgjWnFoUqgzYpEcUAN6xDz4Y4aOx5s+Z1kk5N6i0pDHiZojvlURJP+kffQeaEhDkwNoEP5Dc8AlIR8CZ2WBGzVqcAfLyKyujPk8/A4neyoaCNMVEf0lAm4ndRHpZypSHePrCwglXlW29w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=nurOTnXNAH6y5TE/bgfWbV4zWOoQBSLQbBmndLpMI3w=;
 b=IbuDY2mu52M3uZcmFCAQb0lDdsbU3IKASyUMgGNStzw5tcDUNhvsXPbNDWRX4F0aUUioiVq3ZQzYlimYVHpIz8z38aZfxx0a1JebTHhyryPtHdLMU39DnY+n0ykmYzMUOgOjPU6G0tuFU8J3Zbe5K2bp5D+hhRApFlmIuiHzsZk=
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com (2603:10a6:803:72::14)
 by VI1PR03MB3998.eurprd03.prod.outlook.com (2603:10a6:803:72::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.28; Fri, 8 May
 2020 07:34:57 +0000
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4]) by VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4%7]) with mapi id 15.20.2979.028; Fri, 8 May 2020
 07:34:56 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Dan Carpenter <dan.carpenter@oracle.com>
Subject: Re: [bug report] drm/xen-front: Add support for Xen PV display
 frontend
Thread-Topic: [bug report] drm/xen-front: Add support for Xen PV display
 frontend
Thread-Index: AQHWF8n8HYXBbl5N3kSvZpSjxllEBqiDciuAgAAEjwCAGnAHgA==
Date: Fri, 8 May 2020 07:34:56 +0000
Message-ID: <84a349ac-d162-48c6-afc1-38f1bbf49b75@epam.com>
References: <20200421104522.GA86681@mwanda>
 <28cc7f7c-fe0a-fd06-d330-73531b818a79@epam.com> <20200421115112.GB2682@kadam>
In-Reply-To: <20200421115112.GB2682@kadam>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: oracle.com; dkim=none (message not signed)
 header.d=none;oracle.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: fd28cdd0-365f-4883-b0af-08d7f3224e99
x-ms-traffictypediagnostic: VI1PR03MB3998:
x-microsoft-antispam-prvs: <VI1PR03MB399814E6195BA3111FB3089FE7A20@VI1PR03MB3998.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-forefront-prvs: 039735BC4E
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: U2NH65MU5yihgeBr21gTBaNA5pKSOS93VNPSe2cV03t8cpycM2JUhd4aGmbQK1YZRXnUlmgdimPzhvTM0PGe0bx9FrOn3NDdHeM3T/JmJjzFGd6BeBEliiUFPwHuaz7smbrgPVLQ7DfeCCqW8Q3u2EcnZ/N9DhFfjiY6SmNI/ruNWFyBtCdNTAbuQXqu34F++jbWJS15wcgvFb8yRyEAIlWqb5uekAr1IQLFV27PUBRRUnxhmTt4i/8t+hHbCEcEw1x6ygBYYRCAxxexXw3vDSntkMD89bDQSwrqeToY5tSV9v0MG8G8S9Ag5j7P3ZCO++xWTyETHnIKCx/vKEx3gB9FLVEBEKIIX3/y+I0wMvsa4H2gSwInMbp4cSvLWuH/9rpVUawiWPmD6DEPQX7MBCdTjxZp00Zt+9sFhn0hDrJrmiAutfM032R1OJOjDPKhRWJTeZZnRFYkNjza0uhwBN13Sgat2rCitWuRBFdTfEH4ByYxDXjdA3/L/qFfTuvyocfhnzWEWHkTGTLtKgRrgQ==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR03MB3998.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(366004)(376002)(39860400002)(136003)(396003)(346002)(33430700001)(6486002)(86362001)(8936002)(2616005)(83300400001)(76116006)(2906002)(6506007)(53546011)(26005)(83280400001)(83310400001)(83320400001)(83290400001)(6512007)(478600001)(54906003)(6916009)(66556008)(64756008)(4326008)(66446008)(8676002)(36756003)(66946007)(31696002)(31686004)(66476007)(33440700001)(5660300002)(316002)(71200400001)(186003);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: uvhbIQP+mk4LaOqVaEVv86UkpTCe85lkumWC8aLALk20DnfxSADF5zg63GdExLyCLaGt9QAJNhKu02vCf1JTsUr2dyaLALypTFULsXoQF38Cg4/TW3ARbrmRKyetAUjGdDZooZ1xqU3ajY0wK2XLITN2RNHDIcJnbVQJIc5exr6sez6+LYdnhw9ZOtyncn7fdncNp11AgZbCXcLuo7CrTVhFunXJiCav+6BDEBPBlkyJljoK4VkllLXhyCBGr1LSbMUKkXh9iYB3FEZB0GL+1ApqWm4HgG+djSgCUo2FrXonPLkkvqKjjXrD0XiYsy/HNYMX71XPrI+UK/OQHN0t+jCjYExMV6QxKkKLdfFzXvK1ulBhl21POB44OL6Yv4LriXi7i0+Ty7WG5NwGw4RjpiMEbIL2RsqAjv4B9HmBAhyzRhs8x1Xr3IrH4T3WAn2rEDwEqAvQDSbqAGwGO7ROe/OGIfp8zJL4n6LkXk8CmD0=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <148263C525E6EE48AA2FDEDE1FF7B131@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: fd28cdd0-365f-4883-b0af-08d7f3224e99
X-MS-Exchange-CrossTenant-originalarrivaltime: 08 May 2020 07:34:56.8057 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: gbW1+S+PzWbhvHr9Hd/dWx37wuj/UWGvNrVACYOfvY7CP08gnIqDY1ks6QAbaYRFgirlSXyuJxVb3YjYM4s1JRZYFjw1pK9pX3Ki8eXCctH4y92UvyO+bIBS0lEgsmQ5
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB3998
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "kernel-janitors@vger.kernel.org" <kernel-janitors@vger.kernel.org>,
 "dri-devel@lists.freedesktop.org" <dri-devel@lists.freedesktop.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 4/21/20 2:51 PM, Dan Carpenter wrote:
> It turns out there aren't that many of these in xen.
>
> $ grep IS_ERR_OR_NULL drivers/gpu/drm/xen/ -Rn
> drivers/gpu/drm/xen/xen_drm_front_kms.c:63:     if (IS_ERR_OR_NULL(fb))
> drivers/gpu/drm/xen/xen_drm_front_gem.c:86:     if (IS_ERR_OR_NULL(xen_obj))
> drivers/gpu/drm/xen/xen_drm_front_gem.c:120:    if (IS_ERR_OR_NULL(xen_obj->pages)) {
> drivers/gpu/drm/xen/xen_drm_front_gem.c:139:    if (IS_ERR_OR_NULL(xen_obj))
> drivers/gpu/drm/xen/xen_drm_front_gem.c:197:    if (IS_ERR_OR_NULL(xen_obj))
> drivers/gpu/drm/xen/xen_drm_front.c:403:        if (IS_ERR_OR_NULL(obj)) {
>
> They're all wrong, because if the pointer was really NULL then it's
> not handled correctly and would eventually lead to a runtime failure.

It seems that you were right and I can simply change all IS_ERR_OR_NULL
to just IS_ERR

I am planning a series of patches later on, so I'll include this fix as well

>
> Normally Smatch is smart enough to know that the pointer isn't NULL so
> it doesn't generate a warning but yesterday I introduced a bug in Smatch
> by mistake.  It's fixed now.
>
> regards,
> dan carpenter
>
Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Fri May 08 07:38:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 07:38:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWxaA-0001bK-0H; Fri, 08 May 2020 07:38:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWxa9-0001bF-5J
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 07:38:05 +0000
X-Inumbo-ID: da0033bc-90fe-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da0033bc-90fe-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 07:38:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 44060AF68;
 Fri,  8 May 2020 07:38:06 +0000 (UTC)
Subject: Re: [PATCH v8 05/12] x86emul: support X{SUS,RES}LDTRK
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <6e7500d2-262c-29c7-b9be-3fc9be26d198@suse.com>
 <feb3a6ed-b6b9-959c-8eb1-fb6ecfff034c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b5f9438b-471d-bf32-3f4c-11287060938c@suse.com>
Date: Fri, 8 May 2020 09:38:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <feb3a6ed-b6b9-959c-8eb1-fb6ecfff034c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 22:13, Andrew Cooper wrote:
> On 05/05/2020 09:14, Jan Beulich wrote:
>> --- a/xen/tools/gen-cpuid.py
>> +++ b/xen/tools/gen-cpuid.py
>> @@ -284,6 +284,9 @@ def crunch_numbers(state):
>>          # as dependent features simplifies Xen's logic, and prevents the guest
>>          # from seeing implausible configurations.
>>          IBRSB: [STIBP, SSBD],
>> +
>> +        # In principle the TSXLDTRK insns could also be considered independent.
>> +        RTM: [TSXLDTRK],
> 
> Why the link?  There is no relevant interaction AFAICT.

Do the insns make any sense without TSX? Anyway - hence the
comment, and if you're convinced the connection does not
need making, I'd be okay dropping it. I would assume though
that we'd better hide TSXLDTRK whenever we hide RTM, which
is most easily achieved by having a connection here.

Jan
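For context, the table being edited drives a transitive "hide" pass: clearing a feature also clears everything listed as depending on it, which is how an `RTM: [TSXLDTRK]` entry makes TSXLDTRK disappear whenever RTM is hidden. A minimal C sketch of that propagation, over a hypothetical hard-coded slice of the table (the real table is built by xen/tools/gen-cpuid.py from annotated comments):

```c
#include <stdbool.h>

/* Hypothetical, hard-coded slice of the dependency table. */
enum feature { RTM, TSXLDTRK, IBRSB, STIBP, SSBD, NR_FEATURES };

#define MAX_DEPS 2
static const int deps[NR_FEATURES][MAX_DEPS] = {
    [RTM]      = { TSXLDTRK, -1 },   /* the entry under discussion */
    [IBRSB]    = { STIBP, SSBD },    /* existing entry quoted in the patch */
    [TSXLDTRK] = { -1, -1 },
    [STIBP]    = { -1, -1 },
    [SSBD]     = { -1, -1 },
};

/* Clearing a feature recursively clears its listed dependents. */
static void hide(bool visible[NR_FEATURES], int f)
{
    if (!visible[f])
        return;
    visible[f] = false;
    for (int i = 0; i < MAX_DEPS; i++)
        if (deps[f][i] >= 0)
            hide(visible, deps[f][i]);
}

/* Helper: does hiding `parent` also hide `child`? */
static bool hidden_with(int parent, int child)
{
    bool visible[NR_FEATURES];
    for (int i = 0; i < NR_FEATURES; i++)
        visible[i] = true;
    hide(visible, parent);
    return !visible[child];
}
```

This is why the dependency edge is the easiest way to guarantee "hide TSXLDTRK whenever we hide RTM": no per-feature special case is needed in the consumer.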


From xen-devel-bounces@lists.xenproject.org Fri May 08 07:46:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 07:46:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWxhi-0002R6-Q9; Fri, 08 May 2020 07:45:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWxhh-0002R1-2V
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 07:45:53 +0000
X-Inumbo-ID: f088047e-90ff-11ea-9fcd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f088047e-90ff-11ea-9fcd-12813bfff9fa;
 Fri, 08 May 2020 07:45:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 72260AFCF;
 Fri,  8 May 2020 07:45:53 +0000 (UTC)
Subject: Re: [PATCH v2 1/4] x86/mm: no-one passes a NULL domain to
 init_xen_l4_slots()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <9d4b738a-4487-6bfc-3076-597d074c7b47@suse.com>
 <8787b72e-c71e-b75d-2ca0-0c6fe7c8259f@suse.com>
 <20200421164055.GW28601@Air-de-Roger>
 <4779dde6-6582-1776-ea9b-a2cd46ac3bc3@suse.com>
 <a40d1289-d88b-db93-1e6f-8f44c9c96bcf@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ede5db03-bf12-baff-055f-569bc5c11f2c@suse.com>
Date: Fri, 8 May 2020 09:45:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a40d1289-d88b-db93-1e6f-8f44c9c96bcf@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 19:26, Andrew Cooper wrote:
> On 05/05/2020 07:31, Jan Beulich wrote:
>> On 21.04.2020 18:40, Roger Pau Monné wrote:
>>> On Tue, Apr 21, 2020 at 11:11:03AM +0200, Jan Beulich wrote:
>>>> Drop the NULL checks - they've been introduced by commit 8d7b633ada
>>>> ("x86/mm: Consolidate all Xen L4 slot writing into
>>>> init_xen_l4_slots()") for no apparent reason.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>> you weren't entirely happy with the change because of the
>> possible (or, as you state, necessary) need to undo this. I
>> still think in the current shape the NULL checks are
>> pointless and hence would better go away. Re-introducing them
>> (adjusted to whatever shape the function may be in by that
>> time) is not that big of a problem. May I ask that you
>> explicitly clarify whether you actively NAK the patch, accept
>> it going in with Roger's R-b, or would be willing to ack it?
> 
> I'm not going to nack it, because that would be petty, but I still don't
> think it is a useful use of your time to be making more work for someone
> in the future to revert.
> 
> However, if you wish to take the patch with Roger's R-b, then please fix
> the stale commit message, seeing as this is v2 and I explained exactly
> why it was done like that.

Is "... without giving a reason; I'm told this was done in anticipation
of the function potentially getting called with a NULL argument" any
better? I don't think the commit message here was stale, as said commit
indeed gives no explanation, yet all call sites pass non-NULL.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 08:07:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 08:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWy2s-0004fn-1w; Fri, 08 May 2020 08:07:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EkJI=6W=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jWy2r-0004fi-1e
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 08:07:45 +0000
X-Inumbo-ID: fea55bc7-9102-11ea-9fcd-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id fea55bc7-9102-11ea-9fcd-12813bfff9fa;
 Fri, 08 May 2020 08:07:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588925263;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=lsVYqGBtTkxgpJepH+D6TJxoO78gJAP4vg9zK14/HUc=;
 b=BFOsSxeChGhwyvitefLCkmoIxOsdwRsDiHA0OwZBwPWIkhnxei/Q3QBVF04nZX3sw6kFu4
 JRef51DWnWS1nkaT2TgS5fzUWJQIUtU4hayBYeUoqfX/WVK4KBZFMmnzKz2nYeJux1D9ah
 KJS+vrOihfAn2UNvPKcxW8+vpuXvBQs=
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-451-Qc-WHqHzMhiYSOsomuqowA-1; Fri, 08 May 2020 04:07:41 -0400
X-MC-Unique: Qc-WHqHzMhiYSOsomuqowA-1
Received: by mail-wr1-f71.google.com with SMTP id g10so514129wrr.10
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 01:07:41 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=zjPOOqjFPt4tAr2EgDf+wraBoTpFRAsGJe9UIuI6ceQ=;
 b=bZerd3EaqWHyZDIfUVm4SNyCRL2xV68BDBgGWiVT2oNjMZD89h8HKSs986AlfdCxDl
 VEdAhuamCmcJNykppYUfUt09Eq+GZyhf9awRsZKCVPrlSXnPmtaYCwS+qpddo5TWqrjc
 Wlf5hbYo/NnZKkiICSWt36k6iTC9NSHP+sVLiZdl0QC57ybcW/52krW9cs1s+b8xZj6L
 eTA9zw3vcKoN89Q6Dj/OqQm7VR10EsOadwBHWwNV25WRAX8VlhvJkJNSGdKZDbUJX2Y6
 Bf1KLI/j3fZaHlvMKdjLCnTh9VSRVYUKO7cOlru0MPgPPPlhvN5qvohencoZ03ryULrB
 sHkA==
X-Gm-Message-State: AGi0PubFagOQbLfDxW2IVZnxfqSZh5Hq/W31+DJzrvLtjkwO+x34tJGa
 sYK1SsGaNJhgv1o86SKTZGfxr6/xeoaKhFqBHtKlVeONRBlanc/+Q7L73Oh3cRHcXE7bCRzXEd2
 cIXS7grmuOCw/9kzoaMoJ5VfmcsI=
X-Received: by 2002:a7b:cbc6:: with SMTP id n6mr15838021wmi.155.1588925260460; 
 Fri, 08 May 2020 01:07:40 -0700 (PDT)
X-Google-Smtp-Source: APiQypIe+hKHXvjzeDcgPVLdfTohkt78fgSMzmCSKrRSdn3j/e/e+bwypmyK/i5YdwMWhV4+LWV5Tg==
X-Received: by 2002:a7b:cbc6:: with SMTP id n6mr15837999wmi.155.1588925260304; 
 Fri, 08 May 2020 01:07:40 -0700 (PDT)
Received: from x1w.redhat.com (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id b82sm12514617wmh.1.2020.05.08.01.07.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 08 May 2020 01:07:39 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 0/2] accel: Move Xen accelerator code under accel/xen/
Date: Fri,  8 May 2020 10:07:36 +0200
Message-Id: <20200508080738.2646-1-philmd@redhat.com>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8;
	text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Juan Quintela <quintela@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Igor Mammedov <imammedo@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Supersedes: <20200507155813.16169-1-philmd@redhat.com>

Philippe Mathieu-Daudé (2):
  exec: Check Xen is enabled before calling the Xen API
  accel: Move Xen accelerator code under accel/xen/

 include/exec/ram_addr.h                    | 10 ++++--
 include/hw/xen/xen.h                       | 11 ------
 include/sysemu/xen.h                       | 40 ++++++++++++++++++++++
 hw/xen/xen-common.c => accel/xen/xen-all.c |  3 ++
 hw/acpi/piix4.c                            |  2 +-
 hw/i386/pc.c                               |  1 +
 hw/i386/pc_piix.c                          |  1 +
 hw/i386/pc_q35.c                           |  1 +
 hw/i386/xen/xen-hvm.c                      |  1 +
 hw/i386/xen/xen_platform.c                 |  1 +
 hw/isa/piix3.c                             |  1 +
 hw/pci/msix.c                              |  1 +
 migration/savevm.c                         |  2 +-
 softmmu/vl.c                               |  2 +-
 stubs/xen-hvm.c                            |  9 -----
 target/i386/cpu.c                          |  2 +-
 MAINTAINERS                                |  2 ++
 accel/Makefile.objs                        |  1 +
 accel/xen/Makefile.objs                    |  1 +
 hw/xen/Makefile.objs                       |  2 +-
 20 files changed, 66 insertions(+), 28 deletions(-)
 create mode 100644 include/sysemu/xen.h
 rename hw/xen/xen-common.c => accel/xen/xen-all.c (99%)
 create mode 100644 accel/xen/Makefile.objs

-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Fri May 08 08:07:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 08:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWy2x-0004g4-A0; Fri, 08 May 2020 08:07:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EkJI=6W=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jWy2w-0004fy-0x
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 08:07:50 +0000
X-Inumbo-ID: 01779709-9103-11ea-9fcd-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 01779709-9103-11ea-9fcd-12813bfff9fa;
 Fri, 08 May 2020 08:07:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588925268;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=SuYs7UxkZXmIi+x04iSR/3r2g41gsXRftXIHWrzE+iQ=;
 b=NGroNXsKEX5qDXVmC1U4ONJEQ5jQZ5LD+MhZYZGIcHfai0MS1IJEPGPn/Pr6Bu3GkB0u9W
 1r6MJ5P7O54L4OJwKIRuHKsy+DG8Mum4Tvmpm3VtWhKHhth6a0H9qpVl3Dx12CxSVBync+
 f/n3VW6+ThrWHBhMivZz/u4H1Gv/kJE=
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-259-c2fDB09uMeWEdiVzAw6cTQ-1; Fri, 08 May 2020 04:07:46 -0400
X-MC-Unique: c2fDB09uMeWEdiVzAw6cTQ-1
Received: by mail-wm1-f71.google.com with SMTP id h184so4819731wmf.5
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 01:07:46 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=KmYk5oCCH0iOigKwyECVQLnLIP/7UbsMDa2NDl9JB4g=;
 b=gmJZgfBhH/GAfX+cnBpwegW8OxYPsOibxYXU71tNPyaaHKvnVpa+vRcUzCX2osBeGG
 YBXqkndj2gKXxfRgJ47Iat+CycGGPu8A207XS8a3RqbIsCs76CQJJlLZRGN0DR5tQhFG
 yCAhn5y39ILHTQob30v9XV54IvZH2R+zSyYPUxxcy2rhkqPxfYEImIa1LmX8rvb2diyd
 z9xblt1UogJZqXI+IrmheIi408koJwZEGwf2f0TcaDPVog+v8jM/NulQugVkff7SsVZZ
 stTfqbEEKSNM1T2iv94WQqthZXWgSJFjG32B35jVbkwLHIFb7a11IIs3gT8jVnBe4Aid
 uiNg==
X-Gm-Message-State: AGi0Pua0oE+EOW3gvautcNSnX7TADVHsQdB4lDv7GCug1+f4H0yruxWY
 Slrt5bNgHcu7hvivE8oXeoqft2XlN2E7VWqjIk7yYS/uPDCM0cYLrdCMLtOIYJ+nX5wfPmTdFt2
 VNjwKcB6FD0hYBZo96TUyUzAm5yI=
X-Received: by 2002:adf:80ee:: with SMTP id 101mr1546305wrl.156.1588925265455; 
 Fri, 08 May 2020 01:07:45 -0700 (PDT)
X-Google-Smtp-Source: APiQypKXOBfLV/4JiC9sd75VwtKJJwa7gBFo2sTQ0zCOhSHeGR5hqWbusQ4EiH/EH8+RYbtPLF5MyQ==
X-Received: by 2002:adf:80ee:: with SMTP id 101mr1546287wrl.156.1588925265280; 
 Fri, 08 May 2020 01:07:45 -0700 (PDT)
Received: from x1w.redhat.com (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id x13sm12113449wmc.5.2020.05.08.01.07.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 08 May 2020 01:07:44 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 1/2] exec: Check Xen is enabled before calling the Xen API
Date: Fri,  8 May 2020 10:07:37 +0200
Message-Id: <20200508080738.2646-2-philmd@redhat.com>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200508080738.2646-1-philmd@redhat.com>
References: <20200508080738.2646-1-philmd@redhat.com>
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8;
	text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Juan Quintela <quintela@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Igor Mammedov <imammedo@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 include/exec/ram_addr.h | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..dd8713179e 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -330,7 +330,9 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
         }
     }
 
-    xen_hvm_modified_memory(start, length);
+    if (xen_enabled()) {
+        xen_hvm_modified_memory(start, length);
+    }
 }
 
 #if !defined(_WIN32)
@@ -388,7 +390,9 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
             }
         }
 
-        xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
+        if (xen_enabled()) {
+            xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
+        }
     } else {
         uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
 
-- 
2.21.3
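The reason these guards are needed becomes clear with patch 2/2 of the series: once the !CONFIG_XEN stubs become g_assert_not_reached(), a caller may only invoke the hook after checking xen_enabled(). A hypothetical standalone sketch of that caller pattern — the names mirror the patch but this is illustration, not QEMU code:

```c
#include <stdbool.h>

static bool xen_allowed;   /* set when the accelerator initialises */
static int  hook_calls;    /* counts invocations of the real hook */

static bool xen_enabled(void)
{
    return xen_allowed;
}

/* Stand-in for xen_hvm_modified_memory(); with the series applied,
 * reaching it while Xen is disabled would trip g_assert_not_reached()
 * in the stub, so it must never run in that case. */
static void xen_hvm_modified_memory_sketch(void)
{
    hook_calls++;
}

/* The caller pattern introduced by patch 1/2: guard, then call. */
static void set_dirty_range_sketch(void)
{
    if (xen_enabled()) {
        xen_hvm_modified_memory_sketch();
    }
}

/* Helper for exercising the guard. */
static int calls_when(bool enabled)
{
    xen_allowed = enabled;
    hook_calls = 0;
    set_dirty_range_sketch();
    return hook_calls;
}
```

With the guard in place the hook is reached only when the accelerator is active, which is what lets patch 2/2 replace the empty stubs with asserting ones.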



From xen-devel-bounces@lists.xenproject.org Fri May 08 08:07:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 08:07:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWy34-0004hx-If; Fri, 08 May 2020 08:07:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EkJI=6W=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jWy33-0004hg-L7
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 08:07:57 +0000
X-Inumbo-ID: 062fc1c4-9103-11ea-b9cf-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 062fc1c4-9103-11ea-b9cf-bc764e2007e4;
 Fri, 08 May 2020 08:07:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588925275;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=0D6ku1Fj3sv09Nz0FJZBwLmyqalpCYFmnTu7MzH1x8k=;
 b=T5dcpcRNiqRYoEZIvDelCiBGopEskZEqN0RRQdA/KNk7P0ntUIFljwqFS4BbxQajCE8lSo
 Wuo1g4meNNoTAP7luxS6HrWXhAv1npC/ERupu+TrtboASQWZ1RebPwnhZrq5rCbWOh/vpS
 4Gn1v7JFUi3f7qqddI3lSwQB5hURZzU=
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-172-wK91ju-APhyHuAAngtkDaw-1; Fri, 08 May 2020 04:07:53 -0400
X-MC-Unique: wK91ju-APhyHuAAngtkDaw-1
Received: by mail-wm1-f71.google.com with SMTP id s12so4815243wmj.6
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 01:07:53 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=miY+K7Sb8ybAl2E/6rB5uwDxovdnaZE85VV9pE7PSqs=;
 b=Zqm9OgC3xIiYHBQ2UWaicGdPtFaztKBQq59xOSdN0LyhVr683i8C3riRg1n6Pz9OC/
 afy9Csau6LWivfH/qtWoptFoS40BYLx2xvJWgJWPfINm72REivGklvlByaylMZ8njsNz
 /u6GwAUYQM0J6wqPnVLsgJnByoLwHHhbBkB1KC+jl4ZjJUMzt6xnbBW4myOgK8T9MqKY
 +QCvx9ArxwrPVqDVhBCkww0TEMTQUHKYgTSs4VmNZxPKJW5maqYvQTfn3HpAb1w+qv4s
 jl21iWb6IjxveLuBZYl1Lkr07bTJkqJmfWYrVGKLrggQjwcWpmIvgwLBR54EXHekOPhQ
 3oqQ==
X-Gm-Message-State: AGi0PubOOgBC/DwaUY+XPnHBnVVPXwFXLNYE10WG968fon2Nw2vTatbG
 5aAztKmowA/xIJBtWvgzVhaWiUjGI1QTIhb2ahaDbaVzuL5RfEiQrxk3ZXiOvFIJz70QOVEwigE
 GcRM6GJPeuHQKsVSlT1brbyMWn0w=
X-Received: by 2002:a05:600c:22d3:: with SMTP id
 19mr15538270wmg.110.1588925270878; 
 Fri, 08 May 2020 01:07:50 -0700 (PDT)
X-Google-Smtp-Source: APiQypLuTWxIPWO2+aSHv8dXWWjP5IWC2qSSybKxl2GopqOi2SIyKSxSrJh4rD/Z6g3NJEhB4P0UhA==
X-Received: by 2002:a05:600c:22d3:: with SMTP id
 19mr15538217wmg.110.1588925270387; 
 Fri, 08 May 2020 01:07:50 -0700 (PDT)
Received: from x1w.redhat.com (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id z1sm12125185wmf.15.2020.05.08.01.07.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 08 May 2020 01:07:49 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v2 2/2] accel: Move Xen accelerator code under accel/xen/
Date: Fri,  8 May 2020 10:07:38 +0200
Message-Id: <20200508080738.2646-3-philmd@redhat.com>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200508080738.2646-1-philmd@redhat.com>
References: <20200508080738.2646-1-philmd@redhat.com>
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8;
	text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Juan Quintela <quintela@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Igor Mammedov <imammedo@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This code is not related to hardware emulation.
Move it under accel/ with the other hypervisors.

Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
We could also move the memory management functions from
hw/i386/xen/xen-hvm.c but it is not trivial.

v2: Use g_assert_not_reached() instead of abort()
---
 include/exec/ram_addr.h                    |  2 +-
 include/hw/xen/xen.h                       | 11 ------
 include/sysemu/xen.h                       | 40 ++++++++++++++++++++++
 hw/xen/xen-common.c => accel/xen/xen-all.c |  3 ++
 hw/acpi/piix4.c                            |  2 +-
 hw/i386/pc.c                               |  1 +
 hw/i386/pc_piix.c                          |  1 +
 hw/i386/pc_q35.c                           |  1 +
 hw/i386/xen/xen-hvm.c                      |  1 +
 hw/i386/xen/xen_platform.c                 |  1 +
 hw/isa/piix3.c                             |  1 +
 hw/pci/msix.c                              |  1 +
 migration/savevm.c                         |  2 +-
 softmmu/vl.c                               |  2 +-
 stubs/xen-hvm.c                            |  9 -----
 target/i386/cpu.c                          |  2 +-
 MAINTAINERS                                |  2 ++
 accel/Makefile.objs                        |  1 +
 accel/xen/Makefile.objs                    |  1 +
 hw/xen/Makefile.objs                       |  2 +-
 20 files changed, 60 insertions(+), 26 deletions(-)
 create mode 100644 include/sysemu/xen.h
 rename hw/xen/xen-common.c => accel/xen/xen-all.c (99%)
 create mode 100644 accel/xen/Makefile.objs

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index dd8713179e..a94809f89b 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -21,7 +21,7 @@
 
 #ifndef CONFIG_USER_ONLY
 #include "cpu.h"
-#include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "sysemu/tcg.h"
 #include "exec/ramlist.h"
 #include "exec/ramblock.h"
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 5ac1c6dc55..771dd447f2 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -20,13 +20,6 @@ extern uint32_t xen_domid;
 extern enum xen_mode xen_mode;
 extern bool xen_domid_restrict;
 
-extern bool xen_allowed;
-
-static inline bool xen_enabled(void)
-{
-    return xen_allowed;
-}
-
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
@@ -39,10 +32,6 @@ void xenstore_store_pv_console_info(int i, struct Chardev *chr);
 
 void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory);
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
-                   struct MemoryRegion *mr, Error **errp);
-void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
-
 void xen_register_framebuffer(struct MemoryRegion *mr);
 
 #endif /* QEMU_HW_XEN_H */
diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
new file mode 100644
index 0000000000..e3d30e327f
--- /dev/null
+++ b/include/sysemu/xen.h
@@ -0,0 +1,40 @@
+/*
+ * QEMU Xen support
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef SYSEMU_XEN_H
+#define SYSEMU_XEN_H
+
+#ifdef CONFIG_XEN
+
+extern bool xen_allowed;
+
+#define xen_enabled() (xen_allowed)
+
+#ifndef CONFIG_USER_ONLY
+void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
+void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
+                   struct MemoryRegion *mr, Error **errp);
+#endif
+
+#else /* !CONFIG_XEN */
+
+#define xen_enabled() 0
+#ifndef CONFIG_USER_ONLY
+static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+    g_assert_not_reached();
+}
+static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
+                                 MemoryRegion *mr, Error **errp)
+{
+    g_assert_not_reached();
+}
+#endif
+
+#endif /* CONFIG_XEN */
+
+#endif
diff --git a/hw/xen/xen-common.c b/accel/xen/xen-all.c
similarity index 99%
rename from hw/xen/xen-common.c
rename to accel/xen/xen-all.c
index a15070f7f6..5c64571cb8 100644
--- a/hw/xen/xen-common.c
+++ b/accel/xen/xen-all.c
@@ -16,6 +16,7 @@
 #include "hw/xen/xen_pt.h"
 #include "chardev/char.h"
 #include "sysemu/accel.h"
+#include "sysemu/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/misc.h"
 #include "migration/global_state.h"
@@ -31,6 +32,8 @@
     do { } while (0)
 #endif
 
+bool xen_allowed;
+
 xc_interface *xen_xc;
 xenforeignmemory_handle *xen_fmem;
 xendevicemodel_handle *xen_dmod;
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 964d6f5990..daed273687 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -30,6 +30,7 @@
 #include "hw/acpi/acpi.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "qapi/error.h"
 #include "qemu/range.h"
 #include "exec/address-spaces.h"
@@ -41,7 +42,6 @@
 #include "hw/mem/nvdimm.h"
 #include "hw/acpi/memory_hotplug.h"
 #include "hw/acpi/acpi_dev_interface.h"
-#include "hw/xen/xen.h"
 #include "migration/vmstate.h"
 #include "hw/core/cpu.h"
 #include "trace.h"
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 97e345faea..1a599e1de9 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -56,6 +56,7 @@
 #include "sysemu/tcg.h"
 #include "sysemu/numa.h"
 #include "sysemu/kvm.h"
+#include "sysemu/xen.h"
 #include "sysemu/qtest.h"
 #include "sysemu/reset.h"
 #include "sysemu/runstate.h"
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 3862e5120e..c00472b4c5 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -53,6 +53,7 @@
 #include "cpu.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
+#include "sysemu/xen.h"
 #ifdef CONFIG_XEN
 #include <xen/hvm/hvm_info_table.h>
 #include "hw/xen/xen_pt.h"
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 3349e38a4c..e929749d8e 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -36,6 +36,7 @@
 #include "hw/rtc/mc146818rtc.h"
 #include "hw/xen/xen.h"
 #include "sysemu/kvm.h"
+#include "sysemu/xen.h"
 #include "hw/kvm/clock.h"
 #include "hw/pci-host/q35.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 82ece6b9e7..041303a2fa 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -28,6 +28,7 @@
 #include "qemu/range.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "sysemu/xen-mapcache.h"
 #include "trace.h"
 #include "exec/address-spaces.h"
diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
index 0f7b05e5e1..a1492fdecd 100644
--- a/hw/i386/xen/xen_platform.c
+++ b/hw/i386/xen/xen_platform.c
@@ -33,6 +33,7 @@
 #include "hw/xen/xen-legacy-backend.h"
 #include "trace.h"
 #include "exec/address-spaces.h"
+#include "sysemu/xen.h"
 #include "sysemu/block-backend.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index fd1c78879f..1a5267e19f 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -28,6 +28,7 @@
 #include "hw/irq.h"
 #include "hw/isa/isa.h"
 #include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/reset.h"
 #include "sysemu/runstate.h"
diff --git a/hw/pci/msix.c b/hw/pci/msix.c
index 29187898f2..2c7ead7667 100644
--- a/hw/pci/msix.c
+++ b/hw/pci/msix.c
@@ -19,6 +19,7 @@
 #include "hw/pci/msix.h"
 #include "hw/pci/pci.h"
 #include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "migration/qemu-file-types.h"
 #include "migration/vmstate.h"
 #include "qemu/range.h"
diff --git a/migration/savevm.c b/migration/savevm.c
index c00a6807d9..b979ea6e7f 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -28,7 +28,6 @@
 
 #include "qemu/osdep.h"
 #include "hw/boards.h"
-#include "hw/xen/xen.h"
 #include "net/net.h"
 #include "migration.h"
 #include "migration/snapshot.h"
@@ -59,6 +58,7 @@
 #include "sysemu/replay.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "qjson.h"
 #include "migration/colo.h"
 #include "qemu/bitmap.h"
diff --git a/softmmu/vl.c b/softmmu/vl.c
index afd2615fb3..0344e5fd2e 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -36,6 +36,7 @@
 #include "sysemu/runstate.h"
 #include "sysemu/seccomp.h"
 #include "sysemu/tcg.h"
+#include "sysemu/xen.h"
 
 #include "qemu/error-report.h"
 #include "qemu/sockets.h"
@@ -178,7 +179,6 @@ static NotifierList exit_notifiers =
 static NotifierList machine_init_done_notifiers =
     NOTIFIER_LIST_INITIALIZER(machine_init_done_notifiers);
 
-bool xen_allowed;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
 bool xen_domid_restrict;
diff --git a/stubs/xen-hvm.c b/stubs/xen-hvm.c
index b7d53b5e2f..6954a5b696 100644
--- a/stubs/xen-hvm.c
+++ b/stubs/xen-hvm.c
@@ -35,11 +35,6 @@ int xen_is_pirq_msi(uint32_t msi_data)
     return 0;
 }
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
-                   Error **errp)
-{
-}
-
 qemu_irq *xen_interrupt_controller_init(void)
 {
     return NULL;
@@ -49,10 +44,6 @@ void xen_register_framebuffer(MemoryRegion *mr)
 {
 }
 
-void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
-{
-}
-
 void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
 {
 }
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 9c256ab159..f9b3ef1ef2 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -29,6 +29,7 @@
 #include "sysemu/reset.h"
 #include "sysemu/hvf.h"
 #include "sysemu/cpus.h"
+#include "sysemu/xen.h"
 #include "kvm_i386.h"
 #include "sev_i386.h"
 
@@ -54,7 +55,6 @@
 #include "hw/i386/topology.h"
 #ifndef CONFIG_USER_ONLY
 #include "exec/address-spaces.h"
-#include "hw/xen/xen.h"
 #include "hw/i386/apic_internal.h"
 #include "hw/boards.h"
 #endif
diff --git a/MAINTAINERS b/MAINTAINERS
index 1f84e3ae2c..95ddddfb1d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -438,6 +438,7 @@ M: Paul Durrant <paul@xen.org>
 L: xen-devel@lists.xenproject.org
 S: Supported
 F: */xen*
+F: accel/xen/*
 F: hw/9pfs/xen-9p*
 F: hw/char/xen_console.c
 F: hw/display/xenfb.c
@@ -451,6 +452,7 @@ F: hw/i386/xen/
 F: hw/pci-host/xen_igd_pt.c
 F: include/hw/block/dataplane/xen*
 F: include/hw/xen/
+F: include/sysemu/xen.h
 F: include/sysemu/xen-mapcache.h
 
 Guest CPU Cores (HAXM)
diff --git a/accel/Makefile.objs b/accel/Makefile.objs
index 17e5ac6061..ff72f0d030 100644
--- a/accel/Makefile.objs
+++ b/accel/Makefile.objs
@@ -2,4 +2,5 @@ common-obj-$(CONFIG_SOFTMMU) += accel.o
 obj-$(call land,$(CONFIG_SOFTMMU),$(CONFIG_POSIX)) += qtest.o
 obj-$(CONFIG_KVM) += kvm/
 obj-$(CONFIG_TCG) += tcg/
+obj-$(CONFIG_XEN) += xen/
 obj-y += stubs/
diff --git a/accel/xen/Makefile.objs b/accel/xen/Makefile.objs
new file mode 100644
index 0000000000..7482cfb436
--- /dev/null
+++ b/accel/xen/Makefile.objs
@@ -0,0 +1 @@
+obj-y += xen-all.o
diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index 84df60a928..340b2c5096 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -1,5 +1,5 @@
 # xen backend driver support
-common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-common.o xen-bus.o xen-bus-helper.o xen-backend.o
+common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
 
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_graphics.o xen_pt_msi.o
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Fri May 08 08:10:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 08:10:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWy5Q-0005am-Vr; Fri, 08 May 2020 08:10:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jWy5P-0005ad-8Y
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 08:10:23 +0000
X-Inumbo-ID: 5ca8c028-9103-11ea-9fcd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ca8c028-9103-11ea-9fcd-12813bfff9fa;
 Fri, 08 May 2020 08:10:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 89D70AAC7;
 Fri,  8 May 2020 08:10:23 +0000 (UTC)
Subject: Re: [PATCH v8 01/12] x86emul: disable FPU/MMX/SIMD insn emulation
 when !HVM
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <6110ec4d-36a7-efa7-fb86-069ec5b83ac2@suse.com>
 <a9badaa5-cae0-f2b0-7801-5355f342ad4b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <62c1b64a-5ddb-d6d0-26e7-519eb2695d84@suse.com>
Date: Fri, 8 May 2020 10:10:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a9badaa5-cae0-f2b0-7801-5355f342ad4b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07.05.2020 20:11, Andrew Cooper wrote:
> On 05/05/2020 09:12, Jan Beulich wrote:
>> In a pure PV environment (the PV shim in particular) we don't really
>> need emulation of all these. To limit #ifdef-ary utilize some of the
>> CASE_*() macros we have, by providing variants expanding to
>> (effectively) nothing (really a label, which in turn requires passing
>> -Wno-unused-label to the compiler when building such configurations).
>>
>> Due to the mixture of macro and #ifdef use, the placement of some of
>> the #ifdef-s is a little arbitrary.
>>
>> The resulting object file's .text is less than half the size of the
>> original, and it also appears to compile a little more quickly.
>>
>> This is meant as a first step; more parts can likely be disabled down
>> the road.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v7: Integrate into this series. Re-base.
>> ---
>> I'll be happy to take suggestions allowing to avoid -Wno-unused-label.
> 
> I already gave you one, along with a far less invasive change.
> 
> It is not interesting to be able to conditionally compile these things
> separately.  A build of Xen will either need everything, or just the
> integer group.

And I had replied to that, indicating my (at least partial)
disagreement as well as asking for clarification on an apparently
incomplete sentence in your response.

Note that in an initial (3 months earlier) reply you did say

"On that subject, it would be very helpful to at least be able to
 configure reduced builds from these utilities."

to which I responded:

"Yes, I too have been thinking this way. I may get there eventually."

I specifically meant FPU-less, MMX-less, and SIMD-less each on their
own.

I'm also not at all convinced that, as you say, "a build of Xen will
either need everything, or just the integer group". Yes, for now I
unilaterally disable all three for !HVM here, but that's just
because we know of no PV guests trying to update their page tables
in "interesting" ways. Long term I think this would better be a
separate Kconfig option (or multiple ones, if we stick to keeping
the three groups here to have separate controls), merely defaulting
to !HVM. I could of course switch to such an approach right away.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 08:19:56 2020
Subject: Re: [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool
To: Juergen Gross <jgross@suse.com>
References: <20200430151559.1464-1-jgross@suse.com>
 <20200430151559.1464-4-jgross@suse.com>
 <e02432bf8ea8ca85a31176de1a1f9e429c84b243.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7c0c5c02-7bf8-8aa1-4c5d-586d18290d2f@suse.com>
Date: Fri, 8 May 2020 10:19:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e02432bf8ea8ca85a31176de1a1f9e429c84b243.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>

On 07.05.2020 20:36, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
>> Commit cb563d7665f2 ("xen/sched: support core scheduling for moving
>> cpus to/from cpupools") introduced a regression when trying to remove
>> an offline cpu from a cpupool, as the system would crash in this
>> situation.
>>
>> Fix that by testing whether the cpu is online.
>>
>> Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving
>> cpus to/from cpupools")
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>
> Acked-by: Dario Faggioli <dfaggioli@suse.com>

Jürgen,

it looks like this is independent of the earlier two patches
and hence could go in, while I understand there'll be v2 for
the earlier ones. Please confirm.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 08:29:11 2020
Subject: Re: [PATCH 3/3] xen/cpupool: fix removing cpu from a cpupool
To: Jan Beulich <jbeulich@suse.com>
References: <20200430151559.1464-1-jgross@suse.com>
 <20200430151559.1464-4-jgross@suse.com>
 <e02432bf8ea8ca85a31176de1a1f9e429c84b243.camel@suse.com>
 <7c0c5c02-7bf8-8aa1-4c5d-586d18290d2f@suse.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <515da70d-1149-0de7-0592-78416496ed62@suse.com>
Date: Fri, 8 May 2020 10:29:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <7c0c5c02-7bf8-8aa1-4c5d-586d18290d2f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>

On 08.05.20 10:19, Jan Beulich wrote:
> On 07.05.2020 20:36, Dario Faggioli wrote:
>> On Thu, 2020-04-30 at 17:15 +0200, Juergen Gross wrote:
>>> Commit cb563d7665f2 ("xen/sched: support core scheduling for moving
>>> cpus to/from cpupools") introduced a regression when trying to remove
>>> an offline cpu from a cpupool, as the system would crash in this
>>> situation.
>>>
>>> Fix that by testing whether the cpu is online.
>>>
>>> Fixes: cb563d7665f2 ("xen/sched: support core scheduling for moving
>>> cpus to/from cpupools")
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>
>> Acked-by: Dario Faggioli <dfaggioli@suse.com>
> 
> Jürgen,
> 
> it looks like this is independent of the earlier two patches
> and hence could go in, while I understand there'll be v2 for
> the earlier ones. Please confirm.

Confirmed.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 08 08:36:21 2020
Subject: Re: Xen Coding style
To: Julien Grall <jgrall@amazon.com>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
Date: Fri, 8 May 2020 10:36:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "committers@xenproject.org" <committers@xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>, "paul@xen.org" <paul@xen.org>

On 07.05.2020 16:43, Julien Grall wrote:
> This was originally discussed during the last community call.
> 
> A large share of review comments relate to coding style issues. This can be frustrating for contributors, as the current CODING_STYLE is quite light and the choice is often down to the maintainers' taste.
> 
> In the past, Lars tried to address the problem by introducing a coding style checker (see [1] and [2]). However, the work came to a stop because we couldn't agree on what the Xen coding style is.
> 
> I would like to suggest a different approach. Rather than trying to agree on all the rules one by one, I propose to have a vote on different coding styles and pick the preferred one.
> 
> The list of coding styles would come from the community. It could be coding styles already used in the open source community or new coding styles (for instance a contributor could write down his/her understanding of Xen Coding Style).
> 
> Are the committers happy with this approach?

I'm not, sorry, and I'm pretty sure I indicated so before. For one, I
don't think picking an arbitrary style other than the one we currently use
is going to be helpful: it would mean a significant amount of code churn all
over the code base. Instead I think the basic current principle should
remain: Imported files, if not heavily changed, are permitted to retain
their original style, while "our" files get written in "our" style.
(Recording which one's which is still tbd.)

I don't think though this necessarily implies "to agree on all the rules
one by one" - we could also settle on there explicitly being freedom
beyond what's spelled out explicitly. I'd not be happy with this, as it
would lead to a lot of inconsistencies over time, but it's still an
option. Obviously there's the wide range of middle ground to agree on
some more rules to become written down (I did propose a few over time,
without - iirc - getting helpful, if any, feedback), while leaving the
rest up to the author.

The main thing we need to settle on imo is whether unwritten rules
count. Me being picky isn't because of me liking to be, but because of
me thinking that a consistent code base is quite helpful. If consensus
is that consistency is not a goal, I'll accept this (with some
grumbling).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 08:40:00 2020
From: Juan Quintela <quintela@redhat.com>
To: Philippe Mathieu-Daudé <philmd@redhat.com>
Subject: Re: [PATCH v2 1/2] exec: Check Xen is enabled before calling the Xen
 API
In-Reply-To: <20200508080738.2646-2-philmd@redhat.com> ("Philippe
 Mathieu-Daudé"'s message of "Fri, 8 May 2020 10:07:37 +0200")
References: <20200508080738.2646-1-philmd@redhat.com>
 <20200508080738.2646-2-philmd@redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
Date: Fri, 08 May 2020 10:39:51 +0200
Message-ID: <87wo5mes6g.fsf@secure.mitica>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Reply-To: quintela@redhat.com
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Paul Durrant <paul@xen.org>, qemu-devel@nongnu.org,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Igor Mammedov <imammedo@redhat.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Richard Henderson <rth@twiddle.net>

Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
>  include/exec/ram_addr.h | 8 ++++++--
>  1 file changed, 6 insertions(+), 2 deletions(-)
>
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 5e59a3d8d7..dd8713179e 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -330,7 +330,9 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
>          }
>      }
> 
> -    xen_hvm_modified_memory(start, length);
> +    if (xen_enabled()) {
> +        xen_hvm_modified_memory(start, length);
> +    }
>  }
> 
>  #if !defined(_WIN32)
> @@ -388,7 +390,9 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>              }
>          }
> 
> -        xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
> +        if (xen_enabled()) {
> +            xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
> +        }
>      } else {
>          uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;

I don't object to moving the Xen code to accel/.  But I think this
change is bad.

In the following patch:
- You export xen_allowed
  (OK, it was already exported, but I think it shouldn't be)

(master)$ find . -type f | xargs grep xen_allowed
./hw/xen/xen-common.c:    ac->allowed = &xen_allowed;
./include/hw/xen/xen.h:extern bool xen_allowed;
./include/hw/xen/xen.h:    return xen_allowed;
./softmmu/vl.c:bool xen_allowed;

These are all the users I can find.

And xen_hvm_modified_memory() is an empty function if Xen is not
compiled in.  When Xen is compiled in, the first thing it checks is:

   if (unlikely(xen_in_migration)) {

That is far more restrictive than xen_enabled().

So I think it is better to drop this patch and keep the next one,
but just un-export xen_allowed.

What do you think?

Later, Juan.




From xen-devel-bounces@lists.xenproject.org Fri May 08 09:32:29 2020
Subject: Re: [PATCH v2 1/2] exec: Check Xen is enabled before calling the Xen
 API
To: quintela@redhat.com
References: <20200508080738.2646-1-philmd@redhat.com>
 <20200508080738.2646-2-philmd@redhat.com> <87wo5mes6g.fsf@secure.mitica>
From: Philippe Mathieu-Daudé <philmd@redhat.com>
Message-ID: <0731df30-b0c4-c7bf-6194-42cd6cc90ba5@redhat.com>
Date: Fri, 8 May 2020 11:31:59 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <87wo5mes6g.fsf@secure.mitica>
Content-Language: en-US
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Paul Durrant <paul@xen.org>, qemu-devel@nongnu.org,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Igor Mammedov <imammedo@redhat.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Juan,

On 5/8/20 10:39 AM, Juan Quintela wrote:
> Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
>> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
>> ---
>>   include/exec/ram_addr.h | 8 ++++++--
>>   1 file changed, 6 insertions(+), 2 deletions(-)
>>
>> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
>> index 5e59a3d8d7..dd8713179e 100644
>> --- a/include/exec/ram_addr.h
>> +++ b/include/exec/ram_addr.h
>> @@ -330,7 +330,9 @@ static inline void cpu_physical_memory_set_dirty_range(ram_addr_t start,
>>           }
>>       }
>>
>> -    xen_hvm_modified_memory(start, length);
>> +    if (xen_enabled()) {
>> +        xen_hvm_modified_memory(start, length);
>> +    }
>>   }
>>
>>   #if !defined(_WIN32)
>> @@ -388,7 +390,9 @@ static inline void cpu_physical_memory_set_dirty_lebitmap(unsigned long *bitmap,
>>               }
>>           }
>>
>> -        xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
>> +        if (xen_enabled()) {
>> +            xen_hvm_modified_memory(start, pages << TARGET_PAGE_BITS);
>> +        }
>>       } else {
>>           uint8_t clients = tcg_enabled() ? DIRTY_CLIENTS_ALL : DIRTY_CLIENTS_NOCODE;
>
> I don't object to moving the Xen code to accel/.  But I think that this
> change is bad.
>
> On the following patch:
> - You export xen_allowed
>    (ok, it was already exported, but I think it shouldn't be)
>
> (master)$ find . -type f | xargs grep xen_allowed
> ./hw/xen/xen-common.c:    ac->allowed = &xen_allowed;
> ./include/hw/xen/xen.h:extern bool xen_allowed;
> ./include/hw/xen/xen.h:    return xen_allowed;
> ./softmmu/vl.c:bool xen_allowed;
>
> These are all the users that I can find.
>
> And xen_hvm_modified_memory() is an empty function if Xen is not
> compiled in.  And in the case that Xen is compiled in, the first thing
> that it checks is:
>
>     if (unlikely(xen_in_migration)) {
>
> That is way more restrictive than xen_enabled().
>
> So, I think that it is better to drop this patch and keep the next one,
> but just un-export xen_allowed.
>
> What do you think?

I blindly trust your judgement on this :) I'd rather not touch this code,
but as it happens to be in "exec/ram_addr.h" I had to modify it.

Thanks for your reviews!

>
> Later, Juan.



From xen-devel-bounces@lists.xenproject.org Fri May 08 10:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 10:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWzng-0006Rq-MM; Fri, 08 May 2020 10:00:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nu4B=6W=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jWznf-0006Rl-3m
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 10:00:11 +0000
X-Inumbo-ID: b2d7086a-9112-11ea-9fda-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2d7086a-9112-11ea-9fda-12813bfff9fa;
 Fri, 08 May 2020 10:00:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=O0OkjHYnq9+w0lp7tGf40PLsmsbfev6xtdny3rnhGCY=; b=0CS1EtfUX/3KVpMeldXsj6CFjN
 DKFqx2anGlH6JRa0D+fi+vi6o2dlNH2rJdXMCf2hwfkqCkseUPx7dMDK/WmwFSBjDPLUEWBW5rO6W
 EmKw2PElhcXdFOeRb+DUQUbvUq2Z3bEb05xGQ51p2PN4ZSWk2Iuilp9pvSCmb47+pv4w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jWznX-0004zI-K6; Fri, 08 May 2020 10:00:03 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jWznX-0004CR-Bp; Fri, 08 May 2020 10:00:03 +0000
Subject: Re: Xen Coding style
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <jgrall@amazon.com>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
Date: Fri, 8 May 2020 11:00:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "committers@xenproject.org" <committers@xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>, "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 08/05/2020 09:36, Jan Beulich wrote:
> On 07.05.2020 16:43, Julien Grall wrote:
>> This was originally discussed during the last community call.
>>
>> A major part of the comments during review is related to coding style issues. This can lead to frustration for the contributor, as the current CODING_STYLE is quite light and the choice is often down to a matter of taste from the maintainers.
>>
>> In the past, Lars tried to address the problem by introducing a coding style checker (see [1] and [2]). However, the work came to a stop because we couldn't agree on what the Xen coding style is.
>>
>> I would like to suggest a different approach. Rather than trying to agree on all the rules one by one, I propose to have a vote on different coding styles and pick the preferred one.
>>
>> The list of coding styles would come from the community. It could be coding styles already used in the open source community or new coding styles (for instance a contributor could write down his/her understanding of Xen Coding Style).
>>
>> Are the committers happy with this approach?
> 
> I'm not, sorry, and I'm pretty sure I indicated so before. For one I
> don't think picking an arbitrary other style than what we currently use
> is going to be helpful: It'll be a significant amount of code churn all
> over the code base. Instead I think the basic current principle should
> remain: Imported files, if not heavily changed, are permitted to retain
> their original style, while "our" files get written in "our" style.
> (Recording which one's which is still tbd.)

There are existing coding styles that are quite close to the one used by 
Xen (such as BSD). We could either use them as-is or make small changes 
to fit Xen.

> 
> I don't think though this necessarily implies "to agree on all the rules
> one by one" - we could also settle on there explicitly being freedom
> beyond what's spelled out explicitly. I'd not be happy with this, as it
> would lead to a lot of inconsistencies over time, but it's still an
> option. Obviously there's the wide range of middle ground to agree on
> some more rules to become written down (I did propose a few over time,
> without - iirc - getting helpful, if any, feedback), while leaving the
> rest up to the author.
> 
> The main thing we need to settle on imo is whether unwritten rules
> count. Me being picky isn't because of me liking to be, but because of
> me thinking that a consistent code base is quite helpful. If consensus
> is that consistency is not a goal, I'll accept this (with some
> grumbling).

Consistency is important, but it should not be based on unwritten 
rules. We should be able to write a script that checks whether a 
patch contains any violations.

The end goal is to reduce the workload on the reviewer and the tension 
within the community.

You seem to be the maintainer with the most unwritten rules. Would you 
mind having a try at writing a coding style based on them?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 08 10:02:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 10:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jWzq1-0006YU-46; Fri, 08 May 2020 10:02:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EkJI=6W=redhat.com=philmd@srs-us1.protection.inumbo.net>)
 id 1jWzq0-0006YO-9i
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 10:02:36 +0000
X-Inumbo-ID: 0a2da182-9113-11ea-9887-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0a2da182-9113-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 10:02:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588932154;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=oyCNTnHhHdPCntB2iymiBhEaboMzl925iCHCrhY71n4=;
 b=hwJOKcIrpNjQTBxF3WQ8EW34p20a1we0qWpZYfsEtngu0oxNsT14WWpUS9O7Jv5TY7VYTD
 EsaADOqsykvctmlw9Q0c3th0bKYYK8KrZnYCjzNNoeZ5y3Jiy0IUy8XUq/MLHbUMd+AIn3
 PWcSdSXV/FT9HNROleZJF85ySREoUpM=
Received: from mail-wm1-f71.google.com (mail-wm1-f71.google.com
 [209.85.128.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-355-8EvEfsrrOui9LQTXbs6Bkw-1; Fri, 08 May 2020 06:02:32 -0400
X-MC-Unique: 8EvEfsrrOui9LQTXbs6Bkw-1
Received: by mail-wm1-f71.google.com with SMTP id t62so4945108wma.0
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 03:02:32 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=SKDSDKBFGH1/yF7r5qYkc8O5JBfQtvmM/G+KMrgETOc=;
 b=ZV6huE0vuz1rk+iFfIMngHl48PoDzCz1XVdkkwHkkv0IS0Bf0a5FPONwJgDGRvuwPZ
 +c605c+k7NwQx68IA7xROG97fmSy1h5nOHxXfIzhpkEUefmIp4c3drbsuK2NMblMX6VE
 PZceL8njVGzKtLRMEVn3mhy4UC1BQKrSOFHu6YOsK7+oToZXIeg+nF4ACwznjVZ9+Np5
 DXcl1ett+StmVyNU8vMFsCmRiXBXh7FrWN98gPqorUlDcCrplJ0cBJkLsOKM7sHOWr7M
 CsGESbqu0RtkKVehAOAYeFtGIettWJjT3/F7j6Bckf6XsQm0Z2xoIE0I4YLRtMgPfOzX
 1wcw==
X-Gm-Message-State: AGi0Puap7QvduoEdySLpI89G/uGBuHr5KbQsdwZfWHdyGw0PvOOiz5w9
 SBrYNtZXq/nUBSO104xt8k+YABk/YRAa58UHRS/AJgAYeis8jH3e27Kzac9oCTzrNlhiuoyEqll
 HpPuLJeCwVGnHvsMT7a6Leop6MOo=
X-Received: by 2002:a7b:c190:: with SMTP id y16mr16328228wmi.50.1588932150122; 
 Fri, 08 May 2020 03:02:30 -0700 (PDT)
X-Google-Smtp-Source: APiQypLPhAbRGhflJFROxMiZWnCSGY5E0wlYSAYj3SE6hJkY+ietmgeypZ09pVmPJaBWQzpPPtqY7g==
X-Received: by 2002:a7b:c190:: with SMTP id y16mr16328190wmi.50.1588932149734; 
 Fri, 08 May 2020 03:02:29 -0700 (PDT)
Received: from x1w.redhat.com (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id u2sm2645822wrd.40.2020.05.08.03.02.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 08 May 2020 03:02:24 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3] accel: Move Xen accelerator code under accel/xen/
Date: Fri,  8 May 2020 12:02:22 +0200
Message-Id: <20200508100222.7112-1-philmd@redhat.com>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Juan Quintela <quintela@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Igor Mammedov <imammedo@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This code is not related to hardware emulation.
Move it under accel/ with the other hypervisors.

Reviewed-by: Paul Durrant <paul@xen.org>
Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
We could also move the memory management functions from
hw/i386/xen/xen-hvm.c but it is not trivial.

v2: Use g_assert_not_reached() instead of abort()
v3: (quintela)
 - Do not expose xen_allowed
 - Do not abort in xen_hvm_modified_memory
---
 include/exec/ram_addr.h                    |  2 +-
 include/hw/xen/xen.h                       | 11 -------
 include/sysemu/xen.h                       | 38 ++++++++++++++++++++++
 hw/xen/xen-common.c => accel/xen/xen-all.c |  8 +++++
 hw/acpi/piix4.c                            |  2 +-
 hw/i386/pc.c                               |  1 +
 hw/i386/pc_piix.c                          |  1 +
 hw/i386/pc_q35.c                           |  1 +
 hw/i386/xen/xen-hvm.c                      |  1 +
 hw/i386/xen/xen_platform.c                 |  1 +
 hw/isa/piix3.c                             |  1 +
 hw/pci/msix.c                              |  1 +
 migration/savevm.c                         |  2 +-
 softmmu/vl.c                               |  2 +-
 stubs/xen-hvm.c                            |  9 -----
 target/i386/cpu.c                          |  2 +-
 MAINTAINERS                                |  2 ++
 accel/Makefile.objs                        |  1 +
 accel/xen/Makefile.objs                    |  1 +
 hw/xen/Makefile.objs                       |  2 +-
 20 files changed, 63 insertions(+), 26 deletions(-)
 create mode 100644 include/sysemu/xen.h
 rename hw/xen/xen-common.c => accel/xen/xen-all.c (98%)
 create mode 100644 accel/xen/Makefile.objs

diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
index 5e59a3d8d7..4e05292f91 100644
--- a/include/exec/ram_addr.h
+++ b/include/exec/ram_addr.h
@@ -21,7 +21,7 @@
 
 #ifndef CONFIG_USER_ONLY
 #include "cpu.h"
-#include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "sysemu/tcg.h"
 #include "exec/ramlist.h"
 #include "exec/ramblock.h"
diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
index 5ac1c6dc55..771dd447f2 100644
--- a/include/hw/xen/xen.h
+++ b/include/hw/xen/xen.h
@@ -20,13 +20,6 @@ extern uint32_t xen_domid;
 extern enum xen_mode xen_mode;
 extern bool xen_domid_restrict;
 
-extern bool xen_allowed;
-
-static inline bool xen_enabled(void)
-{
-    return xen_allowed;
-}
-
 int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
 void xen_piix3_set_irq(void *opaque, int irq_num, int level);
 void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
@@ -39,10 +32,6 @@ void xenstore_store_pv_console_info(int i, struct Chardev *chr);
 
 void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory);
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
-                   struct MemoryRegion *mr, Error **errp);
-void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
-
 void xen_register_framebuffer(struct MemoryRegion *mr);
 
 #endif /* QEMU_HW_XEN_H */
diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
new file mode 100644
index 0000000000..1ca292715e
--- /dev/null
+++ b/include/sysemu/xen.h
@@ -0,0 +1,38 @@
+/*
+ * QEMU Xen support
+ *
+ * This work is licensed under the terms of the GNU GPL, version 2 or later.
+ * See the COPYING file in the top-level directory.
+ */
+
+#ifndef SYSEMU_XEN_H
+#define SYSEMU_XEN_H
+
+#ifdef CONFIG_XEN
+
+bool xen_enabled(void);
+
+#ifndef CONFIG_USER_ONLY
+void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
+void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
+                   struct MemoryRegion *mr, Error **errp);
+#endif
+
+#else /* !CONFIG_XEN */
+
+#define xen_enabled() 0
+#ifndef CONFIG_USER_ONLY
+static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
+{
+    /* nothing */
+}
+static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
+                                 MemoryRegion *mr, Error **errp)
+{
+    g_assert_not_reached();
+}
+#endif
+
+#endif /* CONFIG_XEN */
+
+#endif
diff --git a/hw/xen/xen-common.c b/accel/xen/xen-all.c
similarity index 98%
rename from hw/xen/xen-common.c
rename to accel/xen/xen-all.c
index a15070f7f6..4f22c53731 100644
--- a/hw/xen/xen-common.c
+++ b/accel/xen/xen-all.c
@@ -16,6 +16,7 @@
 #include "hw/xen/xen_pt.h"
 #include "chardev/char.h"
 #include "sysemu/accel.h"
+#include "sysemu/xen.h"
 #include "sysemu/runstate.h"
 #include "migration/misc.h"
 #include "migration/global_state.h"
@@ -31,6 +32,13 @@
     do { } while (0)
 #endif
 
+static bool xen_allowed;
+
+bool xen_enabled(void)
+{
+    return xen_allowed;
+}
+
 xc_interface *xen_xc;
 xenforeignmemory_handle *xen_fmem;
 xendevicemodel_handle *xen_dmod;
diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
index 964d6f5990..daed273687 100644
--- a/hw/acpi/piix4.c
+++ b/hw/acpi/piix4.c
@@ -30,6 +30,7 @@
 #include "hw/acpi/acpi.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "qapi/error.h"
 #include "qemu/range.h"
 #include "exec/address-spaces.h"
@@ -41,7 +42,6 @@
 #include "hw/mem/nvdimm.h"
 #include "hw/acpi/memory_hotplug.h"
 #include "hw/acpi/acpi_dev_interface.h"
-#include "hw/xen/xen.h"
 #include "migration/vmstate.h"
 #include "hw/core/cpu.h"
 #include "trace.h"
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index 97e345faea..1a599e1de9 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -56,6 +56,7 @@
 #include "sysemu/tcg.h"
 #include "sysemu/numa.h"
 #include "sysemu/kvm.h"
+#include "sysemu/xen.h"
 #include "sysemu/qtest.h"
 #include "sysemu/reset.h"
 #include "sysemu/runstate.h"
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index 3862e5120e..c00472b4c5 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -53,6 +53,7 @@
 #include "cpu.h"
 #include "qapi/error.h"
 #include "qemu/error-report.h"
+#include "sysemu/xen.h"
 #ifdef CONFIG_XEN
 #include <xen/hvm/hvm_info_table.h>
 #include "hw/xen/xen_pt.h"
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 3349e38a4c..e929749d8e 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -36,6 +36,7 @@
 #include "hw/rtc/mc146818rtc.h"
 #include "hw/xen/xen.h"
 #include "sysemu/kvm.h"
+#include "sysemu/xen.h"
 #include "hw/kvm/clock.h"
 #include "hw/pci-host/q35.h"
 #include "hw/qdev-properties.h"
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 82ece6b9e7..041303a2fa 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -28,6 +28,7 @@
 #include "qemu/range.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "sysemu/xen-mapcache.h"
 #include "trace.h"
 #include "exec/address-spaces.h"
diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
index 0f7b05e5e1..a1492fdecd 100644
--- a/hw/i386/xen/xen_platform.c
+++ b/hw/i386/xen/xen_platform.c
@@ -33,6 +33,7 @@
 #include "hw/xen/xen-legacy-backend.h"
 #include "trace.h"
 #include "exec/address-spaces.h"
+#include "sysemu/xen.h"
 #include "sysemu/block-backend.h"
 #include "qemu/error-report.h"
 #include "qemu/module.h"
diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
index fd1c78879f..1a5267e19f 100644
--- a/hw/isa/piix3.c
+++ b/hw/isa/piix3.c
@@ -28,6 +28,7 @@
 #include "hw/irq.h"
 #include "hw/isa/isa.h"
 #include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "sysemu/sysemu.h"
 #include "sysemu/reset.h"
 #include "sysemu/runstate.h"
diff --git a/hw/pci/msix.c b/hw/pci/msix.c
index 29187898f2..2c7ead7667 100644
--- a/hw/pci/msix.c
+++ b/hw/pci/msix.c
@@ -19,6 +19,7 @@
 #include "hw/pci/msix.h"
 #include "hw/pci/pci.h"
 #include "hw/xen/xen.h"
+#include "sysemu/xen.h"
 #include "migration/qemu-file-types.h"
 #include "migration/vmstate.h"
 #include "qemu/range.h"
diff --git a/migration/savevm.c b/migration/savevm.c
index c00a6807d9..b979ea6e7f 100644
--- a/migration/savevm.c
+++ b/migration/savevm.c
@@ -28,7 +28,6 @@
 
 #include "qemu/osdep.h"
 #include "hw/boards.h"
-#include "hw/xen/xen.h"
 #include "net/net.h"
 #include "migration.h"
 #include "migration/snapshot.h"
@@ -59,6 +58,7 @@
 #include "sysemu/replay.h"
 #include "sysemu/runstate.h"
 #include "sysemu/sysemu.h"
+#include "sysemu/xen.h"
 #include "qjson.h"
 #include "migration/colo.h"
 #include "qemu/bitmap.h"
diff --git a/softmmu/vl.c b/softmmu/vl.c
index afd2615fb3..0344e5fd2e 100644
--- a/softmmu/vl.c
+++ b/softmmu/vl.c
@@ -36,6 +36,7 @@
 #include "sysemu/runstate.h"
 #include "sysemu/seccomp.h"
 #include "sysemu/tcg.h"
+#include "sysemu/xen.h"
 
 #include "qemu/error-report.h"
 #include "qemu/sockets.h"
@@ -178,7 +179,6 @@ static NotifierList exit_notifiers =
 static NotifierList machine_init_done_notifiers =
     NOTIFIER_LIST_INITIALIZER(machine_init_done_notifiers);
 
-bool xen_allowed;
 uint32_t xen_domid;
 enum xen_mode xen_mode = XEN_EMULATE;
 bool xen_domid_restrict;
diff --git a/stubs/xen-hvm.c b/stubs/xen-hvm.c
index b7d53b5e2f..6954a5b696 100644
--- a/stubs/xen-hvm.c
+++ b/stubs/xen-hvm.c
@@ -35,11 +35,6 @@ int xen_is_pirq_msi(uint32_t msi_data)
     return 0;
 }
 
-void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
-                   Error **errp)
-{
-}
-
 qemu_irq *xen_interrupt_controller_init(void)
 {
     return NULL;
@@ -49,10 +44,6 @@ void xen_register_framebuffer(MemoryRegion *mr)
 {
 }
 
-void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
-{
-}
-
 void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
 {
 }
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 9c256ab159..f9b3ef1ef2 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -29,6 +29,7 @@
 #include "sysemu/reset.h"
 #include "sysemu/hvf.h"
 #include "sysemu/cpus.h"
+#include "sysemu/xen.h"
 #include "kvm_i386.h"
 #include "sev_i386.h"
 
@@ -54,7 +55,6 @@
 #include "hw/i386/topology.h"
 #ifndef CONFIG_USER_ONLY
 #include "exec/address-spaces.h"
-#include "hw/xen/xen.h"
 #include "hw/i386/apic_internal.h"
 #include "hw/boards.h"
 #endif
diff --git a/MAINTAINERS b/MAINTAINERS
index 1f84e3ae2c..95ddddfb1d 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -438,6 +438,7 @@ M: Paul Durrant <paul@xen.org>
 L: xen-devel@lists.xenproject.org
 S: Supported
 F: */xen*
+F: accel/xen/*
 F: hw/9pfs/xen-9p*
 F: hw/char/xen_console.c
 F: hw/display/xenfb.c
@@ -451,6 +452,7 @@ F: hw/i386/xen/
 F: hw/pci-host/xen_igd_pt.c
 F: include/hw/block/dataplane/xen*
 F: include/hw/xen/
+F: include/sysemu/xen.h
 F: include/sysemu/xen-mapcache.h
 
 Guest CPU Cores (HAXM)
diff --git a/accel/Makefile.objs b/accel/Makefile.objs
index 17e5ac6061..ff72f0d030 100644
--- a/accel/Makefile.objs
+++ b/accel/Makefile.objs
@@ -2,4 +2,5 @@ common-obj-$(CONFIG_SOFTMMU) += accel.o
 obj-$(call land,$(CONFIG_SOFTMMU),$(CONFIG_POSIX)) += qtest.o
 obj-$(CONFIG_KVM) += kvm/
 obj-$(CONFIG_TCG) += tcg/
+obj-$(CONFIG_XEN) += xen/
 obj-y += stubs/
diff --git a/accel/xen/Makefile.objs b/accel/xen/Makefile.objs
new file mode 100644
index 0000000000..7482cfb436
--- /dev/null
+++ b/accel/xen/Makefile.objs
@@ -0,0 +1 @@
+obj-y += xen-all.o
diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
index 84df60a928..340b2c5096 100644
--- a/hw/xen/Makefile.objs
+++ b/hw/xen/Makefile.objs
@@ -1,5 +1,5 @@
 # xen backend driver support
-common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-common.o xen-bus.o xen-bus-helper.o xen-backend.o
+common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
 
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
 obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_graphics.o xen_pt_msi.o
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Fri May 08 10:58:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 10:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX0iC-0002Dn-J6; Fri, 08 May 2020 10:58:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lkD8=6W=redhat.com=quintela@srs-us1.protection.inumbo.net>)
 id 1jX0iA-0002Di-L9
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 10:58:34 +0000
X-Inumbo-ID: dc3a9053-911a-11ea-9fde-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id dc3a9053-911a-11ea-9fde-12813bfff9fa;
 Fri, 08 May 2020 10:58:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588935513;
 h=from:from:reply-to:reply-to:subject:subject:date:date:
 message-id:message-id:to:to:cc:cc:mime-version:mime-version:
 content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=cyCZiIMXk1Y28WNZr1AkdRRZAp+HQiTOsMdv91zL4Pg=;
 b=QQUbzO+JSdcsqbIaOv1SvSM/3F0mcMFySX5iuq5vOO1ij3aSYiBZkTYyyxlezd3gshmEIv
 3nQZc5JQrtMAdH00wUh4HclE3j7YC4sYzF+aEuDVlBhBAO7DP0kSB3tMQGsI/Ct1SQ1aTD
 ODG6VmNC+2CTLdcif4XGpRYtcRyG2Ps=
Received: from mail-wr1-f69.google.com (mail-wr1-f69.google.com
 [209.85.221.69]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-227-xhbTINf7PuWx1-meMsQw0g-1; Fri, 08 May 2020 06:58:29 -0400
X-MC-Unique: xhbTINf7PuWx1-meMsQw0g-1
Received: by mail-wr1-f69.google.com with SMTP id d1so698010wru.6
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 03:58:29 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:in-reply-to:references
 :user-agent:reply-to:date:message-id:mime-version
 :content-transfer-encoding;
 bh=cyCZiIMXk1Y28WNZr1AkdRRZAp+HQiTOsMdv91zL4Pg=;
 b=I2M23CMEW8TaDKB8kXnEFzZPiGote1KDyK/cyKc88K7OYy6uL0tjRmeqUjEd2BPUSx
 ahO5xu5yuPgplt6YhqWanyNcR4aJhYyrBA4SrsG5Tn0ppGcYqGlpufkq/4YdhY1m3wWY
 DteN9kkNCkbdiTgMWUVShvT0Smv/XMGvP9xDms5LepYPoMYBT8RoYSy0H1XY8oYAiPb+
 CF+E0NrWCMpv+1LoaBR72VTvD1k7d1GQfvQl+V9yFWAk336knNR7DuxJzecamlViNKnX
 7gxOcvJZ5xstXWTAuvaYYQC8GNSw7GF2MNL3oBbnI4CrEn4uNHtV1s/KCZERapBGKSpC
 7z3Q==
X-Gm-Message-State: AGi0PuawHNyYWTGEBRSoloaX+OpET5ti4KBaM50/uUzKSclxB7bcxhhb
 /MVPb5OyXoseddUXyrGOdr7hIOW3ud0dJlCTgrnhP9eOPMk3mjunj0MmlQw9E0nuNNktjpL0h7g
 l7dpYUtygpz3nX0M5Ehhe/Hy0FMc=
X-Received: by 2002:a05:600c:24cf:: with SMTP id
 15mr14926199wmu.94.1588935508552; 
 Fri, 08 May 2020 03:58:28 -0700 (PDT)
X-Google-Smtp-Source: APiQypL77NI34TbDDfERPCrxkNpOhsGcev4KtdXrtLP6r59dCngh8fw4wlAQoNf/ixkyz3h4JhDTQA==
X-Received: by 2002:a05:600c:24cf:: with SMTP id
 15mr14926177wmu.94.1588935508281; 
 Fri, 08 May 2020 03:58:28 -0700 (PDT)
Received: from localhost (trasno.trasno.org. [83.165.45.250])
 by smtp.gmail.com with ESMTPSA id r11sm1265076wrv.14.2020.05.08.03.58.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 08 May 2020 03:58:27 -0700 (PDT)
From: Juan Quintela <quintela@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Subject: Re: [PATCH v3] accel: Move Xen accelerator code under accel/xen/
In-Reply-To: <20200508100222.7112-1-philmd@redhat.com> ("Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Fri, 8 May 2020 12:02:22
 +0200")
References: <20200508100222.7112-1-philmd@redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
Date: Fri, 08 May 2020 12:58:26 +0200
Message-ID: <87368ad771.fsf@secure.mitica>
MIME-Version: 1.0
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: quintela@redhat.com
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-devel@nongnu.org,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Igor Mammedov <imammedo@redhat.com>,
 Aurelien Jarno <aurelien@aurel32.net>, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daudé <philmd@redhat.com> wrote:
> This code is not related to hardware emulation.
> Move it under accel/ with the other hypervisors.
>
> Reviewed-by: Paul Durrant <paul@xen.org>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>

Reviewed-by: Juan Quintela <quintela@redhat.com>



From xen-devel-bounces@lists.xenproject.org Fri May 08 11:20:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 11:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX13O-0004Xl-Cw; Fri, 08 May 2020 11:20:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jX13N-0004Xf-Lr
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 11:20:29 +0000
X-Inumbo-ID: eb6de648-911d-11ea-9fe0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb6de648-911d-11ea-9fe0-12813bfff9fa;
 Fri, 08 May 2020 11:20:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 97044AF3B;
 Fri,  8 May 2020 11:20:29 +0000 (UTC)
Subject: Re: Xen Coding style
To: Julien Grall <julien@xen.org>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
Date: Fri, 8 May 2020 13:20:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>, "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 12:00, Julien Grall wrote:
> You seem to be the maintainer with the most unwritten rules. Would
> you mind having a try at writing a coding style based on them?

On the basis that even small, single-aspect patches to CODING_STYLE
have been ignored [1], I don't think this would be a good use of my
time. If I were promised (reasonable) feedback, I could take what I
have and try to add at least a few more things based on what I find
myself commenting on most frequently. But really I'd prefer it to
be done the other way around - for people to look at the patches
already sent, and for me to only subsequently send more. After all,
if even those adjustments are controversial, I don't think we
could settle on others.

Jan

[1] https://lists.xenproject.org/archives/html/xen-devel/2019-07/msg01234.html


From xen-devel-bounces@lists.xenproject.org Fri May 08 11:37:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 11:37:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX1Jm-0005VN-Oh; Fri, 08 May 2020 11:37:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jX1Jl-0005VI-Hl
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 11:37:25 +0000
X-Inumbo-ID: 48b65b58-9120-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48b65b58-9120-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 11:37:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2nwCorm8Nh+YfA6sdYpf699VKcnyBdHNlW1g2XUWb3E=; b=wxuGClX15kArb0ea9EAv5XnBP
 ePYW8Dak3AUybToBiu1TUh93k+u0c94op8I1wH7L5JB1aRRJynNL59RW2owpT9J6Wr52GSHQVHze+
 1eCn12QxXZ/UK9Ija8+p5yep6PFZW8LjpzOQpQuWl68AIAijf2fPsaJGDGxfk1SCgQTKg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX1Jj-0006lT-4L; Fri, 08 May 2020 11:37:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX1Ji-0005g5-SA; Fri, 08 May 2020 11:37:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jX1Ji-0008Bz-P3; Fri, 08 May 2020 11:37:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150073-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 150073: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.13-testing:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=9649b83b2ab4707de79da42307f8757e317bf217
X-Osstest-Versions-That: xen=d2aecd86c4481291b260869c47cf0a9a02321564
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 11:37:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150073 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150073/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150042

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150042
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  9649b83b2ab4707de79da42307f8757e317bf217
baseline version:
 xen                  d2aecd86c4481291b260869c47cf0a9a02321564

Last test of basis   150042  2020-05-05 16:06:37 Z    2 days
Testing same since   150073  2020-05-07 13:06:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ashok Raj <ashok.raj@intel.com>
  Borislav Petkov <bp@suse.de>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Thomas Gleixner <tglx@linutronix.de>
  Varad Gautam <vrd@amazon.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d2aecd86c4..9649b83b2a  9649b83b2ab4707de79da42307f8757e317bf217 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Fri May 08 11:57:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 11:57:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX1d9-0007HM-RC; Fri, 08 May 2020 11:57:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jX1d8-0007HH-7Y
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 11:57:26 +0000
X-Inumbo-ID: 1551bb56-9123-11ea-9fe1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1551bb56-9123-11ea-9fe1-12813bfff9fa;
 Fri, 08 May 2020 11:57:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ql7pKNwfC4STMAsAPjonhakOgMLt+0bEShk51ir4E+o=; b=ivpHjY1JVASifDTdPRYYXiMr3
 f8eZgA2DYPlp6EgnpdXzCE62qzxmIO9AAAj9HFq0fzNJO1Vc1plW8H4UQ1/Wzw/C4EnD3efYom+PI
 JBjUf5UeGpqiVL1vFHblKnGRO9FN7vi8tiW1gpFGzt64J0YE60TBKpxHdXAv0tTjI+yeg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX1d7-00078N-DB; Fri, 08 May 2020 11:57:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX1d6-0006u2-U3; Fri, 08 May 2020 11:57:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jX1d6-0004Fp-TT; Fri, 08 May 2020 11:57:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150088-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150088: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
X-Osstest-Versions-That: xen=35b819c45c4603fdb1d400925d6b2e6f8689a9d5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 11:57:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150088 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150088/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
baseline version:
 xen                  35b819c45c4603fdb1d400925d6b2e6f8689a9d5

Last test of basis   150080  2020-05-07 19:01:35 Z    0 days
Testing same since   150088  2020-05-08 09:00:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   35b819c45c..e0d92d9bd7  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 08 12:19:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 12:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX1yX-0000ej-3t; Fri, 08 May 2020 12:19:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nu4B=6W=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jX1yW-0000ee-CH
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 12:19:32 +0000
X-Inumbo-ID: 2b767932-9126-11ea-9fe3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b767932-9126-11ea-9fe3-12813bfff9fa;
 Fri, 08 May 2020 12:19:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9qzjDPmPt7faBUrRFNEd9DTpu6660M2T8Wn5nIMWlrU=; b=emYcw3Jh/e/FDlp3Wf1HQRnqLE
 3OTCP0UHbOls/55yGtwvaNnbBNNls3Ql6MpUGIpK1b9n+94uwEZkxDxISxyvtzeFI7dg2WRaWyamR
 2VI1rMcNcAOiufO49NzDFz9ptKRTwbI8fnYAo//+UnGJvfoXxbNt4bjJG7wUELwfgKCU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jX1yR-0007YC-PR; Fri, 08 May 2020 12:19:27 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jX1yR-00056U-IM; Fri, 08 May 2020 12:19:27 +0000
Subject: Re: Xen Coding style
To: Jan Beulich <jbeulich@suse.com>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
 <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
Date: Fri, 8 May 2020 13:19:25 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>, "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 08/05/2020 12:20, Jan Beulich wrote:
> On 08.05.2020 12:00, Julien Grall wrote:
>> You seem to be the maintainer with the most unwritten rules. Would
>> you mind having a try at writing a coding style based on them?
> 
> On the basis that even small, single aspect patches to CODING_STYLE
> have been ignored [1],

Your thread is one of the examples of why I started this one. Agreeing 
on rules one by one doesn't work: it either results in bikeshedding or 
there is not enough interest to review each rule individually.

> I don't think this would be a good use of my
> time.

I would have assumed that the current situation (i.e. 
nitpicking/bikeshedding on the ML) is not a good use of your time :).

I would be happy to put in some effort to help get the coding style 
right; however, I believe focusing on an overall coding style would be 
a better use of everyone's time than a rule-by-rule discussion.

> If I was promised (reasonable) feedback, I could take what I
> have and try to add at least a few more things based on what I find
> myself commenting on more frequently. But really I'd prefer it to
> be done the other way around - for people to look at the patches
> already sent, and for me to only subsequently send more. After all,
> if already those adjustments are controversial, I don't think we
> could settle on others.

While I understand this requires a further investment on your part, I am 
afraid it would be painful for someone else to go through all the 
existing coding-style bikeshedding and infer your unwritten rules.

It might be more beneficial for that person to continue the work done by 
Tamas and Viktor in the past (see my previous e-mail). This may mean 
adopting an existing coding style (BSD) and then tweaking it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 08 12:43:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 12:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2M1-0002zo-1F; Fri, 08 May 2020 12:43:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jX2Lz-0002zj-7Z
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 12:43:47 +0000
X-Inumbo-ID: 8dc88ee2-9129-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8dc88ee2-9129-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 12:43:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9DCE1ACC2;
 Fri,  8 May 2020 12:43:46 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
Message-ID: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
Date: Fri, 8 May 2020 14:43:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The op has a register/unregister flag, and hence registration shouldn't
happen unilaterally. Introduce unregister_vpci_mmcfg_handler() to handle
this case.

Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -558,6 +558,47 @@ int register_vpci_mmcfg_handler(struct d
     return 0;
 }
 
+int unregister_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
+                                  unsigned int start_bus, unsigned int end_bus,
+                                  unsigned int seg)
+{
+    struct hvm_mmcfg *mmcfg;
+    int rc = -ENOENT;
+
+    ASSERT(is_hardware_domain(d));
+
+    if ( start_bus > end_bus )
+        return -EINVAL;
+
+    write_lock(&d->arch.hvm.mmcfg_lock);
+
+    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
+        if ( mmcfg->addr == addr + (start_bus << 20) &&
+             mmcfg->segment == seg &&
+             mmcfg->start_bus == start_bus &&
+             mmcfg->size == ((end_bus - start_bus + 1) << 20) )
+        {
+            list_del(&mmcfg->next);
+            if ( !list_empty(&d->arch.hvm.mmcfg_regions) )
+                xfree(mmcfg);
+            else
+            {
+                /*
+                 * Cannot unregister the MMIO handler - leave a fake entry
+                 * on the list.
+                 */
+                memset(mmcfg, 0, sizeof(*mmcfg));
+                list_add(&mmcfg->next, &d->arch.hvm.mmcfg_regions);
+            }
+            rc = 0;
+            break;
+        }
+
+    write_unlock(&d->arch.hvm.mmcfg_lock);
+
+    return rc;
+}
+
 void destroy_vpci_mmcfg(struct domain *d)
 {
     struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -559,12 +559,18 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
         if ( !ret && has_vpci(currd) )
         {
             /*
-             * For HVM (PVH) domains try to add the newly found MMCFG to the
-             * domain.
+             * For HVM (PVH) domains try to add/remove the reported MMCFG
+             * to/from the domain.
              */
-            ret = register_vpci_mmcfg_handler(currd, info.address,
-                                              info.start_bus, info.end_bus,
-                                              info.segment);
+            if ( info.flags & XEN_PCI_MMCFG_RESERVED )
+                ret = register_vpci_mmcfg_handler(currd, info.address,
+                                                  info.start_bus, info.end_bus,
+                                                  info.segment);
+            else
+                ret = unregister_vpci_mmcfg_handler(currd, info.address,
+                                                    info.start_bus,
+                                                    info.end_bus,
+                                                    info.segment);
         }
 
         break;
--- a/xen/include/asm-x86/hvm/io.h
+++ b/xen/include/asm-x86/hvm/io.h
@@ -178,6 +178,9 @@ void register_vpci_portio_handler(struct
 int register_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
                                 unsigned int start_bus, unsigned int end_bus,
                                 unsigned int seg);
+int unregister_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
+                                  unsigned int start_bus, unsigned int end_bus,
+                                  unsigned int seg);
 /* Destroy tracked MMCFG areas. */
 void destroy_vpci_mmcfg(struct domain *d);
 


From xen-devel-bounces@lists.xenproject.org Fri May 08 12:47:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 12:47:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2PV-00037r-Hk; Fri, 08 May 2020 12:47:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4FLg=6W=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jX2PT-00037l-EX
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 12:47:23 +0000
X-Inumbo-ID: 0fe4b810-912a-11ea-ae69-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0fe4b810-912a-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 12:47:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588942042;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=fsnozb6kIQGYwomOyBKftMgcGklRNkmQU34LmQ0rQgU=;
 b=ai7F2GO2OT2lWJ7rMgsugh4SNiaiYrGBG4mwVEVXpm/Mf375CDa//61e6Hvc1/xmWxGm+S
 Y82K/K5Ey7tSgsVaQQnVoQiqRodbSSUndbnypRNUmbAv80G5OuVa/GZYSaGPZRKp+BWYxs
 S0NWMkJ2LNvt/6Q+am+8cdkqo7UhMmU=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-381-xHC9BoC8MPSLoyVaICWZKA-1; Fri, 08 May 2020 08:47:17 -0400
X-MC-Unique: xHC9BoC8MPSLoyVaICWZKA-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 541CB107ACF2;
 Fri,  8 May 2020 12:47:13 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-113-6.ams2.redhat.com [10.36.113.6])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 60BF76ACF1;
 Fri,  8 May 2020 12:47:06 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id D67E811358BC; Fri,  8 May 2020 14:47:04 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH v2 3/3] hw: Remove unnecessary DEVICE() cast
References: <20200504100735.10269-1-f4bug@amsat.org>
 <20200504100735.10269-4-f4bug@amsat.org>
Date: Fri, 08 May 2020 14:47:04 +0200
In-Reply-To: <20200504100735.10269-4-f4bug@amsat.org> ("Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Mon, 4 May 2020 12:07:35
 +0200")
Message-ID: <87r1vuy4on.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>, John Snow <jsnow@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daudé <f4bug@amsat.org> writes:

> The DEVICE() macro is defined as:
>
>   #define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)
>
> which expands to:
>
>   ((DeviceState *)object_dynamic_cast_assert((Object *)(obj), (name),
>                                              __FILE__, __LINE__,
>                                              __func__))
>
> This assertion can only fail when @obj points to something other
> than its stated type, i.e. when we're in undefined behavior country.
>
> Remove the unnecessary DEVICE() casts when we already know the
> pointer is of DeviceState type.
>
> Patch created mechanically using spatch with this script:
>
>   @@
>   typedef DeviceState;
>   DeviceState *s;
>   @@
>   -   DEVICE(s)
>   +   s
>
> Acked-by: David Gibson <david@gibson.dropbear.id.au>
> Acked-by: Paul Durrant <paul@xen.org>
> Reviewed-by: Markus Armbruster <armbru@redhat.com>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> Acked-by: John Snow <jsnow@redhat.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Lovely.

Reviewed-by: Markus Armbruster <armbru@redhat.com>

Now repeat this exercise for each QOM type cast macro :)



From xen-devel-bounces@lists.xenproject.org Fri May 08 12:50:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 12:50:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2S3-0003SU-0l; Fri, 08 May 2020 12:50:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4FLg=6W=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jX2S2-0003N1-Jy
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 12:50:02 +0000
X-Inumbo-ID: 6e7bca80-912a-11ea-9887-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 6e7bca80-912a-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 12:50:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588942201;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=mr4uqpTXg1sXYxmIwAO3pDUOz2ZcsDiSAVP7r/LI/bA=;
 b=NG2sgFXCVisgjvMMA+Mrl5ZJ9QmVS8CKnuUifi5RQgxfQ7hhA+EYmgaWJHvlw6lWtnJzGr
 qdooTjjn+MBVzKFZv/4X/b6ckO+7zoMou7K9+p1CzTIr9APTxK939BlRjka89s79lX3jK9
 7HCyrTOintJAiG85/FqUcWkaaGWuKMM=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-373-4YAIeqf8OaueleRfKAlSYA-1; Fri, 08 May 2020 08:50:00 -0400
X-MC-Unique: 4YAIeqf8OaueleRfKAlSYA-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id ABB70800688;
 Fri,  8 May 2020 12:49:56 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-113-6.ams2.redhat.com [10.36.113.6])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 500E85D9CA;
 Fri,  8 May 2020 12:49:50 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id E80F711358BC; Fri,  8 May 2020 14:49:48 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH v2 2/3] various: Remove unnecessary OBJECT() cast
References: <20200504100735.10269-1-f4bug@amsat.org>
 <20200504100735.10269-3-f4bug@amsat.org>
Date: Fri, 08 May 2020 14:49:48 +0200
In-Reply-To: <20200504100735.10269-3-f4bug@amsat.org> ("Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Mon, 4 May 2020 12:07:34
 +0200")
Message-ID: <87mu6iy4k3.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>, qemu-ppc@nongnu.org,
 John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, Corey Minyard <cminyard@mvista.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daudé <f4bug@amsat.org> writes:

> The OBJECT() macro is defined as:
>
>   #define OBJECT(obj) ((Object *)(obj))
>
> which expands to:
>
>   ((Object *)object_dynamic_cast_assert((Object *)(obj), (name),
>                                         __FILE__, __LINE__, __func__))

Nope :)

> This assertion can only fail when @obj points to something other
> than its stated type, i.e. when we're in undefined behavior country.

There is no assertion.

> Remove the unnecessary OBJECT() casts when we already know the
> pointer is of Object type.
>
> Patch created mechanically using spatch with this script:
>
>   @@
>   typedef Object;
>   Object *o;
>   @@
>   -   OBJECT(o)
>   +   o
>
> Acked-by: Cornelia Huck <cohuck@redhat.com>
> Acked-by: Corey Minyard <cminyard@mvista.com>
> Acked-by: John Snow <jsnow@redhat.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
> ---
> v2: Reword (Markus)

My rewording suggestion applied to PATCH 3, not to this one.

With v2's commit message:
Reviewed-by: Markus Armbruster <armbru@redhat.com>



From xen-devel-bounces@lists.xenproject.org Fri May 08 12:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 12:50:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2St-0003wr-Ar; Fri, 08 May 2020 12:50:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4FLg=6W=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jX2Ss-0003wh-HS
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 12:50:54 +0000
X-Inumbo-ID: 8db9f854-912a-11ea-b07b-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 8db9f854-912a-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 12:50:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1588942253;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=zudctO3EkR+8Y0s3mx6MbBIauAULqPgX14OPAMKsxPM=;
 b=GyG3gvoZIszpvvKVWhZ0elNqWCCN4vkybmpJiEVQscR5ZIa5exV7asPCw/XFbvpL1fDRlc
 GqolCWINcY3fyXorw/H3QG00EXRb60CwdLwgNTSXcoPiDXIslBHrY+87FPaYNXu/L1JYJX
 gmy6SSah2UEjEVkb8Qg+o4KOfW6Gx4o=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-76-1-TUPxSPMNaE1-cw9UJ2Pg-1; Fri, 08 May 2020 08:50:52 -0400
X-MC-Unique: 1-TUPxSPMNaE1-cw9UJ2Pg-1
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DFAC2107ACCA;
 Fri,  8 May 2020 12:50:48 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-113-6.ams2.redhat.com [10.36.113.6])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 8B92B5C1BE;
 Fri,  8 May 2020 12:50:42 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 0D88F11358BC; Fri,  8 May 2020 14:50:41 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH v2 1/3] target: Remove unnecessary CPU() cast
References: <20200504100735.10269-1-f4bug@amsat.org>
 <20200504100735.10269-2-f4bug@amsat.org>
Date: Fri, 08 May 2020 14:50:41 +0200
In-Reply-To: <20200504100735.10269-2-f4bug@amsat.org> ("Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Mon, 4 May 2020 12:07:33
 +0200")
Message-ID: <87imh6y4im.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>, John Snow <jsnow@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daudé <f4bug@amsat.org> writes:

> The CPU() macro is defined as:
>
>   #define CPU(obj) ((CPUState *)(obj))
>
> which expands to:
>
>   ((CPUState *)object_dynamic_cast_assert((Object *)(obj), (name),
>                                           __FILE__, __LINE__, __func__))
>
> This assertion can only fail when @obj points to something other
> than its stated type, i.e. when we're in undefined behavior country.
>
> Remove the unnecessary CPU() casts when we already know the pointer
> is of CPUState type.
>
> Patch created mechanically using spatch with this script:
>
>   @@
>   typedef CPUState;
>   CPUState *s;
>   @@
>   -   CPU(s)
>   +   s
>
> Acked-by: David Gibson <david@gibson.dropbear.id.au>
> Reviewed-by: Cédric Le Goater <clg@kaod.org>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Markus Armbruster <armbru@redhat.com>



From xen-devel-bounces@lists.xenproject.org Fri May 08 12:54:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 12:54:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2W6-000494-SF; Fri, 08 May 2020 12:54:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX2W5-00048x-Cl
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 12:54:13 +0000
X-Inumbo-ID: 0311327a-912b-11ea-9fe5-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0311327a-912b-11ea-9fe5-12813bfff9fa;
 Fri, 08 May 2020 12:54:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588942451;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=5aNK6xTRklf7K2GontuyiPQPstkJsRBOU6hDzb0VAQA=;
 b=WWxF9dxtweloiGhqQpibu3Ugrb1Ruf6BhqXvka/MjIUvzG+Q0YmCdLvb
 vfKhbhIMvMQJN6ZNwelF/4BsRgiYscu98HW71nrzKZ8pulpCGKGOt/MoZ
 C4RWn2biSyqNtMhdE16KmDS6O7/2bYa4VEBi9dTeTRDkiR+wfoCdDPptT g=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 4+f0sYDL0We9F1/Uxd/U+80BDXDR48WPKMcJpAmkXHZZO6y/ktO1Q1csYKu0az2Ts24IDRwmQ8
 Zpma19IxKdIWC+eAWpK7CR/vIwLTiUQyCLsjljMd5KOZnBumiEnElg6a3jVtuaudfNLmAP42B7
 miEEwdYXuVW8QuFTcjmrKh1EVeQ2a2E+Dx5rIHdpLPe60tr3nXpRSfZANmsZXozxeSt/vn9lGY
 Tdz1u5akYnVoAhFP+gWgB3Gez2F09leHkXQCu7VijoTR7wtV1z8gnxQpv0oN6DbXtVdR4wJDA6
 fkY=
X-SBRS: 2.7
X-MesageID: 17756052
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,367,1583211600"; d="scan'208";a="17756052"
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c8826959-8dbe-cc39-80b0-8ba03c6a6f30@citrix.com>
Date: Fri, 8 May 2020 13:54:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08/05/2020 13:43, Jan Beulich wrote:
> The op has a register/unregister flag, and hence registration shouldn't
> happen unilaterally. Introduce unregister_vpci_mmcfg_handler() to handle
> this case.
>
> Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I agree in principle that registration shouldn't be unilateral, but why
on earth does the API behave like that to begin with?

There is no provision to move or update MMCFG regions in any spec I'm
aware of, and hardware cannot in practice update memory routing like this.

Under what circumstances should we tolerate an unregister in the first
place?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 12:55:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 12:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2Xf-0004GJ-C9; Fri, 08 May 2020 12:55:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWPr=6W=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jX2Xe-0004GE-03
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 12:55:50 +0000
X-Inumbo-ID: 3d522502-912b-11ea-9887-bc764e2007e4
Received: from mail-wr1-x430.google.com (unknown [2a00:1450:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d522502-912b-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 12:55:49 +0000 (UTC)
Received: by mail-wr1-x430.google.com with SMTP id i15so1711946wrx.10
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 05:55:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=+ORZwHK2pQ3ZWxXmEBHMv2QvgRrdA781Ku91+W+/YJ4=;
 b=QidqxMQgXKZ4l3Gx7jKY1/28sx6PSUK9rSC6pwiqohRpmgD3QU7XmlnN7TOHXfdHq+
 4uIJCEVf4qFyRXWlbLd+aIAbGl6zYX9iRha+AsiEL+YdQQfogGnZKD1wPII7coFmBWuw
 kevFW8VqGWDvBSVlIDMFBTodH6ECx8KoV3CYAVT+bT89nynFnLPWV3chtfBdQATi/+LI
 RZ8STdet+5zYRGvHr4jBz8k8pyY/4OHuRJJqrOnsiCaMevQyZ3fHnwnlrm/mgv1OWo6g
 wV30lgyDnC+4HaQXgu2MORsR1RYjx4OWhZCcLZxkHRPeZtwwZwsRtVkVcDnrc9DGS693
 fD0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=+ORZwHK2pQ3ZWxXmEBHMv2QvgRrdA781Ku91+W+/YJ4=;
 b=SLn4abFB/Upabev7VsGmJdXeQKl+2ZuDZYkNaL8yHo/zE7eRMBrVklDkn8U81OxuqI
 tiGTbA33ymcIK7TsYf8U3de29STErg6epPJOiuAOeFEFRcpAI0NLXb/TniFc1OfNv5fk
 ZWaJctyd5VNzqUHmI/r2pOBo4agtfS5UbsyXwDHZU4K0WZMk0SsjBLo419GGVyaK6Uz4
 f5BVk2uEz84LCH6++qJbMxTP5w75ZJcsSJKqTIkdGjrP2gvXtVBdPDuCeb2jq3x09baA
 0tB/6pG0J9UAlWIqB+O+2MqIoCkh2PYS3fpDeQsEF9gTEZi2mFsgcydccIIcPHDrSXkQ
 ty7g==
X-Gm-Message-State: AGi0PuadlgjqK8ENXSQh+rNqguJxjjrdmo12MgJMoR65HkcKJ/oK5o4D
 QjWofWK/io9a/eRA6NDt3eKBUpGCOdf1/OSgOuQ=
X-Google-Smtp-Source: APiQypJlQP/b59FA2T4ciVtc3yA92ItNMNOyJBoIfQ3RXzpGRsqud3VcliyH3RcBxTdpfjGXWzsSmLYqLb9MIf3ro0k=
X-Received: by 2002:adf:e450:: with SMTP id t16mr2968110wrm.301.1588942548196; 
 Fri, 08 May 2020 05:55:48 -0700 (PDT)
MIME-Version: 1.0
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
 <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
 <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
In-Reply-To: <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 8 May 2020 06:55:13 -0600
Message-ID: <CABfawhnvKgoP_EE7An8FDgJ11Ta8_gOo7tZ_J98sB_+Qir7=Yg@mail.gmail.com>
Subject: Re: Xen Coding style
To: Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "paul@xen.org" <paul@xen.org>, Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 8, 2020 at 6:21 AM Julien Grall <julien@xen.org> wrote:
>
> Hi Jan,
>
> On 08/05/2020 12:20, Jan Beulich wrote:
> > On 08.05.2020 12:00, Julien Grall wrote:
> >> You seem to be the maintainer with the most unwritten rules. Would
> >> you mind to have a try at writing a coding style based on it?
> >
> > On the basis that even small, single aspect patches to CODING_STYLE
> > have been ignored [1],
>
> Your thread is one of the examples of why I started this thread. Agreeing
> on specific rules doesn't work, because it either results in bikeshedding
> or there is not enough interest to review rule by rule.
>
> > I don't think this would be a good use of my
> > time.
>
> I would have assumed that the current situation (i.e.
> nitpicking/bikeshedding on the ML) is not a good use of your time :).
>
> I would be happy to put in some effort to help get the coding style
> right; however, I believe focusing on an overall coding style would be a
> better use of everyone's time than a rule-by-rule discussion.
>
> > If I was promised (reasonable) feedback, I could take what I
> > have and try to add at least a few more things based on what I find
> > myself commenting on more frequently. But really I'd prefer it to
> > be done the other way around - for people to look at the patches
> > already sent, and for me to only subsequently send more. After all,
> > if already those adjustments are controversial, I don't think we
> > could settle on others.
> While I understand this requires another investment on your part, I am
> afraid it is going to be painful for someone else to go through all the
> existing coding style bikeshedding and infer your unwritten rules.
>
> It might be more beneficial for that person to pursue the work done by
> Tamas and Viktor in the past (see my previous e-mail). This may mean
> adopting an existing coding style (BSD) and then tweaking it.

Thanks Julien for restarting this discussion. IMHO, agreeing on a set of
style rules up front and then applying them universally all at once is
not going to be productive, since we are so all over the place. Instead,
I would recommend we start piece by piece. We introduce a baseline style
checker, and then maintainers can decide when and if they want to move
their code-base under the automated style checker. That way we have a
baseline, and each maintainer can decide on their own terms when and in
what form they want their files to be style checked as well. The upside
of this route is, I think, pretty clear: we can have at least partial
automation even while we figure out what to do with some of the more
problematic files and quirks in our code-base. I would highly prefer
this route, since I would immediately bring all the files I maintain
over to the automated checker, just so I never again have to deal with
this manually. Which style we use really doesn't matter to me (BSD was
very close, with some minor tweaks), nor does the tool we use to check
it, just as long as we have _something_.

Cheers,
Tamas


From xen-devel-bounces@lists.xenproject.org Fri May 08 13:00:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 13:00:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2cL-00056Z-VV; Fri, 08 May 2020 13:00:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX2cK-00056U-9n
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 13:00:40 +0000
X-Inumbo-ID: e8a1e1cc-912b-11ea-9feb-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e8a1e1cc-912b-11ea-9feb-12813bfff9fa;
 Fri, 08 May 2020 13:00:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588942836;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Vz/36Lck5MGVk0jqxRBoQQn2SfYZWrKQXZon82oEWZc=;
 b=Lqm4wDSqe356658T/z/FFMQx0LyYdT2nA/1QkSBbGt4v66gr2BnV0Xd4
 MEmN5aLdViINY7fJPBAQqM46e4GS1cXpncpeV+SWOEyc7nviTEmUMqg4w
 BT+lHroi4458bPQQe4jHWD1mqte6rdoaa4D3fOB5QeeMIlmIVA1evSDQ/ w=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: YzNcvFTcTZXuo6n0SV7gNiDmO/7AyPCD5yPoTf+GIGe2A1BkuznUk5gXblMcXyKtSrVGv1rh38
 PSllyHL440PqJXrowpR38stOXQKbecyDCCQG8B2+hlbDnl2gPIBQY6r2xqaZcMQ6E5qQb7/gO3
 +ckIOqevnwrgGbYKIIPQ++3t0B5cuBpXY2BOfJfWHgcJ6QdyVVxHByLc8wvKqFedRSRVat2krD
 taL57druuj8j1KXRtXavXr0C4hOeUfvu7ujC+aKYYOnpUmoEjuOsZbnRG7lL3E/wT4Kz4GnsXc
 myg=
X-SBRS: 2.7
X-MesageID: 17084357
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,367,1583211600"; d="scan'208";a="17084357"
Subject: Re: [PATCH v8 04/12] x86emul: support SERIALIZE
To: Jan Beulich <jbeulich@suse.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <0bbbf95e-48ec-ee73-5234-52cf9c6c06d8@suse.com>
 <64de91ff-41ae-baf1-1119-0ba39df32275@citrix.com>
 <0c5a03c6-6c4f-c6ec-e474-71a2badd1c9c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fe12bd3d-37f6-111c-e738-dde0b42d2d3d@citrix.com>
Date: Fri, 8 May 2020 14:00:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <0c5a03c6-6c4f-c6ec-e474-71a2badd1c9c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08/05/2020 08:34, Jan Beulich wrote:
>>> @@ -5660,6 +5661,18 @@ x86_emulate(
>>>                  goto done;
>>>              break;
>>>  
>>> +        case 0xe8:
>>> +            switch ( vex.pfx )
>>> +            {
>>> +            case vex_none: /* serialize */
>>> +                host_and_vcpu_must_have(serialize);
>>> +                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
>> There is very little need for an actual implementation here.  The VMExit
>> to get here is good enough.
>>
>> The only question is whether pre-unrestricted_guest Intel boxes are
>> liable to find this in real mode code.
> Apart from this we also shouldn't assume HVM in the core emulator,
> I think.

It's not assuming HVM.  It's just that there is no way we can end up
emulating this instruction from PV context.

If we could end up here in PV context, the exception causing us to
emulate to begin with would be good enough as well.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 13:09:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 13:09:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2kN-0005Jo-R3; Fri, 08 May 2020 13:08:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HLRL=6W=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jX2kM-0005Jj-NB
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 13:08:58 +0000
X-Inumbo-ID: 137ac9a8-912d-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 137ac9a8-912d-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 13:08:57 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id s8so1770083wrt.9
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 06:08:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:subject:to:cc:references:from:message-id:date:user-agent
 :mime-version:in-reply-to:content-language:content-transfer-encoding;
 bh=SfoftjUdJMP/vemqd09MfiuAR5q0nKJtH0beyh9JsQg=;
 b=buOYjOXc2/tIRGGk9mjFWqjrviH/6shur17U6RKWI2VfnGj0jhu/j1HwE055G95CxE
 RT2I+OVK3GxUqG4LwYG7coF2qnGOdV8GiaQqJkVhH0R4vx8q8MrNCC4aF/Axd01eqJ0a
 hvnWfuoZUPxaxHjak7DxgRmHuOxReHOkUaO6yJsDTeTTTUoLEr5AW/tfQqJVBC60cX3e
 A1x+fyJINBTWYPbcniWG+sU7UIefEakcisBAjkBt5tLrZn0J8Pr6Pu11r9pRiRfMDNYE
 ZBcx0gCAdPOHOQw9hrhP3bWhJkECk7KmGOZKGAS1kgQBSq6r3eBaxmZ9w4GzmgZtWi5Y
 OT5A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:subject:to:cc:references:from:message-id
 :date:user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=SfoftjUdJMP/vemqd09MfiuAR5q0nKJtH0beyh9JsQg=;
 b=T91kyJA7TKWQz4kd183Fw87MJyKScnFMUzSf7hL9q/LlrME6d29CNlpS0nAINUB/r+
 Amq5XvV4FrqvIo1MakxfRLhzq84eZ8bEFGg3hdS7mCPSGQ+BLyznCiZQJQofzi1zN8Fv
 thJ452Wljkh/rXqBKMJfhZkTue6OHj4OPwTjDWQFbOQliACyFqmSVlZ8ObvaBPvXb+XM
 /SZfVCZmpj8tsigy/Ke3v4b2a3pwgW7ZvG8OlhGz0nZqQHxU7g8xU8c1FXfqtL6meuHS
 E3uAV70NYkZ06MLXRP5+qmKSgoBz864z01eQfqIJAUMdzRN6bbsLCFDTmmJbQ7N5RFSb
 WLdA==
X-Gm-Message-State: AGi0PuZMRJGyGebZRAE4xOfrGuxQFdgeMfonOm0iDe355EKjeXMTh/OH
 5mecK8e9O33LDilxa4sRPv8=
X-Google-Smtp-Source: APiQypKW2ZHWmhB+ydNX9xkm44FQ6geGPfM0Pb3LxCGEVG3UmAv4x62XFMba1P1sJsmshnoslGVWWg==
X-Received: by 2002:adf:e9d0:: with SMTP id l16mr2782543wrn.69.1588943337081; 
 Fri, 08 May 2020 06:08:57 -0700 (PDT)
Received: from [192.168.1.38] (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id s14sm12369462wmh.18.2020.05.08.06.08.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 08 May 2020 06:08:56 -0700 (PDT)
Subject: Re: [PATCH v2 2/3] various: Remove unnecessary OBJECT() cast
To: Markus Armbruster <armbru@redhat.com>
References: <20200504100735.10269-1-f4bug@amsat.org>
 <20200504100735.10269-3-f4bug@amsat.org> <87mu6iy4k3.fsf@dusky.pond.sub.org>
From: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
Message-ID: <b23abb5d-7222-c6ad-7314-a4138cb5d062@amsat.org>
Date: Fri, 8 May 2020 15:08:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.5.0
MIME-Version: 1.0
In-Reply-To: <87mu6iy4k3.fsf@dusky.pond.sub.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 David Hildenbrand <david@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?Q?C=c3=a9dric_Le_Goater?= <clg@kaod.org>, qemu-ppc@nongnu.org,
 John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, Corey Minyard <cminyard@mvista.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/8/20 2:49 PM, Markus Armbruster wrote:
> Philippe Mathieu-Daudé <f4bug@amsat.org> writes:
> 
>> The OBJECT() macro is defined as:
>>
>>    #define OBJECT(obj) ((Object *)(obj))
>>
>> which expands to:
>>
>>    ((Object *)object_dynamic_cast_assert((Object *)(obj), (name),
>>                                          __FILE__, __LINE__, __func__))
> 
> Nope :)
> 
>> This assertion can only fail when @obj points to something other
>> than its stated type, i.e. when we're in undefined behavior country.
> 
> There is no assertion.
> 
>> Remove the unnecessary OBJECT() casts when we already know the
>> pointer is of Object type.
>>
>> Patch created mechanically using spatch with this script:
>>
>>    @@
>>    typedef Object;
>>    Object *o;
>>    @@
>>    -   OBJECT(o)
>>    +   o
>>
>> Acked-by: Cornelia Huck <cohuck@redhat.com>
>> Acked-by: Corey Minyard <cminyard@mvista.com>
>> Acked-by: John Snow <jsnow@redhat.com>
>> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
>> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
>> ---
>> v2: Reword (Markus)
> 
> My rewording suggestion applied to PATCH 3, not to this one.

OK.

> 
> With v2's commit message;
> Reviewed-by: Markus Armbruster <armbru@redhat.com>

Are you willing to take these patches? In that case, are you OK with 
taking 1 & 3 while I resend 2?

Thanks,

Phil.


From xen-devel-bounces@lists.xenproject.org Fri May 08 13:15:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 13:15:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX2qT-00068l-IL; Fri, 08 May 2020 13:15:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX2qS-00068g-7E
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 13:15:16 +0000
X-Inumbo-ID: f451b2b6-912d-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f451b2b6-912d-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 13:15:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588943715;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=x57zNJaVGCkHcBWyzF+hCSFPTDobTi6hMb959CM+xfo=;
 b=fyrRLJEbIcXfWg1839GtnsH/N+b+/8kLJpNy3BqxKXJrgNzdWaPMSwHz
 POx6/sfjV2Cj17gLMyC1xAdGf076axR3gyTJXqtqNcGIpTLvtC7+8hzJa
 2aDiMYdD2Kf2xIPDTiSDMnGvPsUa89Nfr3KObFgRG319sl/IMrPQb/nXe 0=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: bt12wJy/PwrWpBHByaCRzFdxppSg2IcmjwHEuExgJl3gFYILCnPSgjyWbwsOimc+j1VI+83XIq
 ML5XdNY2qwIiv0SJAfeaf5C6zsjh1lTTEWVY0Ir6YwEsTSIzEHFMK3/3icFLFXf5qGf/x7d4jN
 ChRRLxi5AA5xzMKruzjpBOP6SMfA+F785Og44txyoHY/s22Jjy7iL2DtFWQOnRcKrcST+2elOe
 gPMQ4JgzRxWjr2Gt98cMN8umj2WZzMB8fVMS130E6X9noMotvpVXfrqdmjfe9v1rwmfLFzTcgl
 e6A=
X-SBRS: 2.7
X-MesageID: 17758085
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,367,1583211600"; d="scan'208";a="17758085"
Subject: Re: [PATCH v8 05/12] x86emul: support X{SUS,RES}LDTRK
To: Jan Beulich <jbeulich@suse.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <6e7500d2-262c-29c7-b9be-3fc9be26d198@suse.com>
 <feb3a6ed-b6b9-959c-8eb1-fb6ecfff034c@citrix.com>
 <b5f9438b-471d-bf32-3f4c-11287060938c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9fa8ceb3-fd4b-e754-2c82-92f134603e34@citrix.com>
Date: Fri, 8 May 2020 14:15:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b5f9438b-471d-bf32-3f4c-11287060938c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08/05/2020 08:38, Jan Beulich wrote:
>
> On 07.05.2020 22:13, Andrew Cooper wrote:
>> On 05/05/2020 09:14, Jan Beulich wrote:
>>> --- a/xen/tools/gen-cpuid.py
>>> +++ b/xen/tools/gen-cpuid.py
>>> @@ -284,6 +284,9 @@ def crunch_numbers(state):
>>>          # as dependent features simplifies Xen's logic, and prevents the guest
>>>          # from seeing implausible configurations.
>>>          IBRSB: [STIBP, SSBD],
>>> +
>>> +        # In principle the TSXLDTRK insns could also be considered independent.
>>> +        RTM: [TSXLDTRK],
>> Why the link?  There is no relevant interaction AFAICT.
> Do the insns make any sense without TSX? Anyway - hence the
> comment, and if you're convinced the connection does not
> need making, I'd be okay dropping it. I would assume though
> that we'd better hide TSXLDTRK whenever we hide RTM, which
> is most easily achieved by having a connection here.

Actually - that is a very good point.  I expect there will (or should)
be an interaction with MSR_TSX_CTRL, as it has CPUID-hiding functionality.

For now, could I ask you to not expose this to guests in this patch?

For the emulator side of things alone I think this is ok (although
looking over it a second time, we could really do with a comment in the
code explaining that we're never in an RTM region, hence the nop behaviour).

I'll follow up with Intel, and we can figure out the CPUID derivation
details at a later point.

If you're happy with this plan, then A-by to save a round trip.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 13:38:09 2020
Date: Fri, 8 May 2020 15:37:20 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
Message-ID: <20200508133720.GH1353@Air-de-Roger>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>

On Tue, May 05, 2020 at 10:16:20AM +0200, Jan Beulich wrote:
> While the Intel SDM claims that FRSTOR itself may raise #MF upon
> completion, this was confirmed by Intel to be a doc error, which will be
> corrected in due course; the behavior is like FLDENV's, as old hard copy
> manuals describe it. Otherwise we'd have to emulate the insn by filling
> st(N) in suitable order, followed by FLDENV.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v7: New.
> 
> --- a/tools/tests/x86_emulator/test_x86_emulator.c
> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
> @@ -2442,6 +2442,27 @@ int main(int argc, char **argv)
>      else
>          printf("skipped\n");
>  
> +    printf("%-40s", "Testing fldenv 8(%edx)...");

Likely a stupid question, but why the added 8? edx will contain the
memory address used to save the state by fnstenv, so I would expect
fldenv to just load from there?

> +    if ( stack_exec && cpu_has_fpu )
> +    {
> +        asm volatile ( "fnstenv %0\n\t"
> +                       "fninit"
> +                       : "=m" (res[2]) :: "memory" );

Why do you need the memory clobber here? I assume it's because res is
of type unsigned int and hence the output constraint doesn't cover the
full area that fnstenv will actually write?

> +        zap_fpsel(&res[2], true);
> +        instr[0] = 0xd9; instr[1] = 0x62; instr[2] = 0x08;
> +        regs.eip = (unsigned long)&instr[0];
> +        regs.edx = (unsigned long)res;
> +        rc = x86_emulate(&ctxt, &emulops);
> +        asm volatile ( "fnstenv %0" : "=m" (res[9]) :: "memory" );
> +        if ( (rc != X86EMUL_OKAY) ||
> +             memcmp(res + 2, res + 9, 28) ||
> +             (regs.eip != (unsigned long)&instr[3]) )
> +            goto fail;
> +        printf("okay\n");
> +    }
> +    else
> +        printf("skipped\n");
> +
>      printf("%-40s", "Testing 16-bit fnsave (%ecx)...");
>      if ( stack_exec && cpu_has_fpu )
>      {
> @@ -2468,6 +2489,31 @@ int main(int argc, char **argv)
>              goto fail;
>          printf("okay\n");
>      }
> +    else
> +        printf("skipped\n");
> +
> +    printf("%-40s", "Testing frstor (%edx)...");
> +    if ( stack_exec && cpu_has_fpu )
> +    {
> +        const uint16_t seven = 7;
> +
> +        asm volatile ( "fninit\n\t"
> +                       "fld1\n\t"
> +                       "fidivs %1\n\t"
> +                       "fnsave %0\n\t"
> +                       : "=&m" (res[0]) : "m" (seven) : "memory" );
> +        zap_fpsel(&res[0], true);
> +        instr[0] = 0xdd; instr[1] = 0x22;
> +        regs.eip = (unsigned long)&instr[0];
> +        regs.edx = (unsigned long)res;
> +        rc = x86_emulate(&ctxt, &emulops);
> +        asm volatile ( "fnsave %0" : "=m" (res[27]) :: "memory" );
> +        if ( (rc != X86EMUL_OKAY) ||
> +             memcmp(res, res + 27, 108) ||
> +             (regs.eip != (unsigned long)&instr[2]) )
> +            goto fail;
> +        printf("okay\n");
> +    }
>      else
>          printf("skipped\n");
>  
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -857,6 +857,7 @@ struct x86_emulate_state {
>          blk_NONE,
>          blk_enqcmd,
>  #ifndef X86EMUL_NO_FPU
> +        blk_fld, /* FLDENV, FRSTOR */
>          blk_fst, /* FNSTENV, FNSAVE */
>  #endif
>          blk_movdir,
> @@ -4948,21 +4949,14 @@ x86_emulate(
>                  dst.bytes = 4;
>                  emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
>                  break;
> -            case 4: /* fldenv - TODO */
> -                state->fpu_ctrl = true;
> -                goto unimplemented_insn;
> -            case 5: /* fldcw m2byte */
> -                state->fpu_ctrl = true;
> -            fpu_memsrc16:
> -                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
> -                                     2, ctxt)) != X86EMUL_OKAY )
> -                    goto done;
> -                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
> -                break;
> +            case 4: /* fldenv */
> +                /* Raise #MF now if there are pending unmasked exceptions. */
> +                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);

Maybe it would make sense to have a wrapper for fnop?

> +                /* fall through */
>              case 6: /* fnstenv */
>                  fail_if(!ops->blk);
> -                state->blk = blk_fst;
> -                /* REX is meaningless for this insn by this point. */
> +                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
> +                /* REX is meaningless for these insns by this point. */
>                  rex_prefix = in_protmode(ctxt, ops);
>                  if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
>                                      op_bytes > 2 ? sizeof(struct x87_env32)
> @@ -4972,6 +4966,14 @@ x86_emulate(
>                      goto done;
>                  state->fpu_ctrl = true;
>                  break;
> +            case 5: /* fldcw m2byte */
> +                state->fpu_ctrl = true;
> +            fpu_memsrc16:
> +                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
> +                                     2, ctxt)) != X86EMUL_OKAY )
> +                    goto done;
> +                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
> +                break;
>              case 7: /* fnstcw m2byte */
>                  state->fpu_ctrl = true;
>              fpu_memdst16:
> @@ -5124,13 +5126,14 @@ x86_emulate(
>                  dst.bytes = 8;
>                  emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
>                  break;
> -            case 4: /* frstor - TODO */
> -                state->fpu_ctrl = true;
> -                goto unimplemented_insn;
> +            case 4: /* frstor */
> +                /* Raise #MF now if there are pending unmasked exceptions. */
> +                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
> +                /* fall through */
>              case 6: /* fnsave */
>                  fail_if(!ops->blk);
> -                state->blk = blk_fst;
> -                /* REX is meaningless for this insn by this point. */
> +                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
> +                /* REX is meaningless for these insns by this point. */
>                  rex_prefix = in_protmode(ctxt, ops);
>                  if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
>                                      op_bytes > 2 ? sizeof(struct x87_env32) + 80
> @@ -11648,6 +11651,89 @@ int x86_emul_blk(
>  
>  #ifndef X86EMUL_NO_FPU
>  
> +    case blk_fld:
> +        ASSERT(!data);
> +
> +        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
> +        switch ( bytes )
> +        {
> +        case sizeof(fpstate.env):
> +        case sizeof(fpstate):
> +            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
> +            if ( !state->rex_prefix )
> +            {
> +                unsigned int fip = fpstate.env.mode.real.fip_lo +
> +                                   (fpstate.env.mode.real.fip_hi << 16);
> +                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
> +                                   (fpstate.env.mode.real.fdp_hi << 16);
> +                unsigned int fop = fpstate.env.mode.real.fop;
> +
> +                fpstate.env.mode.prot.fip = fip & 0xf;
> +                fpstate.env.mode.prot.fcs = fip >> 4;
> +                fpstate.env.mode.prot.fop = fop;
> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
> +                fpstate.env.mode.prot.fds = fdp >> 4;

I've found the layouts in the SDM vol. 1, but I haven't been able to
find the translation mechanism from real to protected. Could you
maybe add a reference here?

> +            }
> +
> +            if ( bytes == sizeof(fpstate.env) )
> +                ptr = NULL;
> +            else
> +                ptr += sizeof(fpstate.env);
> +            break;
> +
> +        case sizeof(struct x87_env16):
> +        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
> +        {
> +            const struct x87_env16 *env = ptr;
> +
> +            fpstate.env.fcw = env->fcw;
> +            fpstate.env.fsw = env->fsw;
> +            fpstate.env.ftw = env->ftw;
> +
> +            if ( state->rex_prefix )
> +            {
> +                fpstate.env.mode.prot.fip = env->mode.prot.fip;
> +                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
> +                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
> +                fpstate.env.mode.prot.fds = env->mode.prot.fds;
> +                fpstate.env.mode.prot.fop = 0; /* unknown */
> +            }
> +            else
> +            {
> +                unsigned int fip = env->mode.real.fip_lo +
> +                                   (env->mode.real.fip_hi << 16);
> +                unsigned int fdp = env->mode.real.fdp_lo +
> +                                   (env->mode.real.fdp_hi << 16);
> +                unsigned int fop = env->mode.real.fop;
> +
> +                fpstate.env.mode.prot.fip = fip & 0xf;
> +                fpstate.env.mode.prot.fcs = fip >> 4;
> +                fpstate.env.mode.prot.fop = fop;
> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
> +                fpstate.env.mode.prot.fds = fdp >> 4;

This looks mostly the same as the translation done above, so maybe it
could be abstracted into a macro to avoid the code repetition?
(i.e. fpstate_real_to_prot(src, dst) or some such).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 08 13:46:24 2020
Subject: Re: [PATCH] x86/idle: prevent entering C6 with in service interrupts
 on Intel
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200507132236.26010-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3d147b74-81dd-83b8-7035-67c5ceb72c5f@suse.com>
Date: Fri, 8 May 2020 15:46:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200507132236.26010-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org

On 07.05.2020 15:22, Roger Pau Monne wrote:
> Apply a workaround for Intel errata CLX30: "A Pending Fixed Interrupt
> May Be Dispatched Before an Interrupt of The Same Priority Completes".
> 
> It's not clear which models are affected, as the erratum is listed in
> the "Second Generation Intel Xeon Scalable Processors" specification
> update, but the issue has been seen as far back as Nehalem processors.
> Apply the workaround to all Intel processors; the condition can be
> relaxed later.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  docs/misc/xen-command-line.pandoc |  8 ++++++++
>  xen/arch/x86/acpi/cpu_idle.c      | 22 +++++++++++++++++++++-
>  xen/arch/x86/cpu/mwait-idle.c     |  3 +++
>  xen/include/asm-x86/cpuidle.h     |  1 +
>  4 files changed, 33 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index ee12b0f53f..6e868a2185 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -652,6 +652,14 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>  additionally a trace buffer of the specified size is allocated per cpu.
>  The debug trace feature is only enabled in debugging builds of Xen.
>  
> +### disable-c6-isr
> +> `= <boolean>`
> +
> +> Default: `true for Intel CPUs`
> +
> +Workaround for Intel erratum CLX30. Prevent entering C6 idle states with
> +in-service local APIC interrupts. Enabled by default for all Intel CPUs.
> +
>  ### dma_bits
>  > `= <integer>`
>  
> diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
> index b83446e77d..5023fea148 100644
> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -573,6 +573,25 @@ static bool errata_c6_eoi_workaround(void)
>      return (fix_needed && cpu_has_pending_apic_eoi());
>  }
>  
> +static int8_t __read_mostly disable_c6_isr = -1;
> +boolean_param("disable-c6-isr", disable_c6_isr);
> +
> +/*
> + * Errata CLX30: A Pending Fixed Interrupt May Be Dispatched Before an
> + * Interrupt of The Same Priority Completes.
> + *
> + * Prevent entering C6 if there are pending lapic interrupts, or else the
> + * processor might dispatch further pending interrupts before the first one has
> + * been completed.
> + */
> +bool errata_c6_isr_workaround(void)
> +{
> +    if ( unlikely(disable_c6_isr == -1) )
> +        disable_c6_isr = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
> +
> +    return disable_c6_isr && cpu_has_pending_apic_eoi();

This check being the same as the one in errata_c6_eoi_workaround(),
why don't you simply extend that function? (See also below.)
Also both the variable and the command line option descriptor could
go inside the function, to limit their scopes.

> @@ -676,7 +695,8 @@ static void acpi_processor_idle(void)
>          return;
>      }
>  
> -    if ( (cx->type == ACPI_STATE_C3) && errata_c6_eoi_workaround() )
> +    if ( (cx->type == ACPI_STATE_C3) &&
> +         (errata_c6_eoi_workaround() || errata_c6_isr_workaround()) )
>          cx = power->safe_state;

I realize you only add to existing code, but I'm afraid this
existing code cannot be safely added to. Already prior to
your change there was a curious mismatch of C3 and C6 on this
line, and I don't see how ACPI_STATE_C3 correlates with
"core C6" state. Now this may have been the convention for
Nehalem/Westmere systems, but already the mwait-idle entries
for these CPU models have 4 entries (albeit such that they
match this scheme). As a result I think this at the very
least needs to be >=, not ==, even more so now that you want
to cover all Intel CPUs.

Obviously (I think) it is a mistake that mwait-idle doesn't
also activate this workaround, which imo is another reason to
stick to just errata_c6_eoi_workaround().

> --- a/xen/arch/x86/cpu/mwait-idle.c
> +++ b/xen/arch/x86/cpu/mwait-idle.c
> @@ -770,6 +770,9 @@ static void mwait_idle(void)
>  		return;
>  	}
>  
> +	if (cx->type == ACPI_STATE_C3 && errata_c6_isr_workaround())
> +		cx = power->safe_state;

Here it becomes even more relevant I think that == not be
used, as the static tables list deeper C-states; it's just
that the SKX table, which also gets used for CLX afaict, has
no entries beyond C6-SKX. Others with deeper ones presumably
have the deeper C-states similarly affected (or not) by this
erratum.

For reference, mwait_idle_cpu_init() has

		hint = flg2MWAIT(cpuidle_state_table[cstate].flags);
		state = MWAIT_HINT2CSTATE(hint) + 1;
		...
		cx->type = state;

i.e. the value you compare against is derived from the static
table entries. For Nehalem/Westmere this means that what goes
under ACPI_STATE_C3 is indeed C6-NHM, and this looks to match
for all the non-Atoms, but for none of the Atoms. Now Atoms
could easily be unaffected, but (just to take an example) if
C6-SNB was affected, surely C7-SNB would be, too.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 13:49:40 2020
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
 <c8826959-8dbe-cc39-80b0-8ba03c6a6f30@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <16ed4b91-fdb3-d2c5-9a3c-4aa7ff172b98@suse.com>
Date: Fri, 8 May 2020 15:49:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <c8826959-8dbe-cc39-80b0-8ba03c6a6f30@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>

On 08.05.2020 14:54, Andrew Cooper wrote:
> On 08/05/2020 13:43, Jan Beulich wrote:
>> The op has a register/unregister flag, and hence registration shouldn't
>> happen unilaterally. Introduce unregister_vpci_mmcfg_handler() to handle
>> this case.
>>
>> Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I agree in principle that registration shouldn't be unilateral, but why
> on earth does the API behave like that to begin with?
> 
> There is no provision to move or update MMCFG regions in any spec I'm
> aware of, and hardware cannot in practice update memory routing like this.
> 
> Under what circumstances should we tolerate an unregister in the first
> place?

Hot unplug of an entire segment, for example.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 13:59:10 2020
Subject: Re: [PATCH v8 04/12] x86emul: support SERIALIZE
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <0bbbf95e-48ec-ee73-5234-52cf9c6c06d8@suse.com>
 <64de91ff-41ae-baf1-1119-0ba39df32275@citrix.com>
 <0c5a03c6-6c4f-c6ec-e474-71a2badd1c9c@suse.com>
 <fe12bd3d-37f6-111c-e738-dde0b42d2d3d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a4a9298-3446-a863-2e24-4c81244594dd@suse.com>
Date: Fri, 8 May 2020 15:59:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <fe12bd3d-37f6-111c-e738-dde0b42d2d3d@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>

On 08.05.2020 15:00, Andrew Cooper wrote:
> On 08/05/2020 08:34, Jan Beulich wrote:
>>>> @@ -5660,6 +5661,18 @@ x86_emulate(
>>>>                  goto done;
>>>>              break;
>>>>  
>>>> +        case 0xe8:
>>>> +            switch ( vex.pfx )
>>>> +            {
>>>> +            case vex_none: /* serialize */
>>>> +                host_and_vcpu_must_have(serialize);
>>>> +                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
>>> There is very little need for an actual implementation here.  The VMExit
>>> to get here is good enough.
>>>
>>> The only question is whether pre-unrestricted_guest Intel boxes are
>>> liable to find this in real mode code.
>> Apart from this we also shouldn't assume HVM in the core emulator,
>> I think.
> 
> It's not assuming HVM.  It's just that there is no way we can end up
> emulating this instruction from PV context.
> 
> If we could end up here in PV context, the exception causing us to
> emulate to begin with would be good enough as well.

With the current way of where/how emulation gets involved -
yes. I'd like to remind you though of the 4-insn window
shadow code tries to emulate over for PAE guests. There
is no intervening #VMEXIT there.

So do you want me to drop the asm() then, and switch from
host_and_vcpu_must_have(serialize) to just
vcpu_must_have(serialize)?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 14:18:40 2020
Subject: Re: Xen Coding style
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>, Julien Grall <julien@xen.org>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
 <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
 <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
 <CABfawhnvKgoP_EE7An8FDgJ11Ta8_gOo7tZ_J98sB_+Qir7=Yg@mail.gmail.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <10ee3601-fc34-5714-30ea-abd2f2c03cfe@suse.com>
Date: Fri, 8 May 2020 16:18:27 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CABfawhnvKgoP_EE7An8FDgJ11Ta8_gOo7tZ_J98sB_+Qir7=Yg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: "paul@xen.org" <paul@xen.org>, Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>

On 08.05.20 14:55, Tamas K Lengyel wrote:
> On Fri, May 8, 2020 at 6:21 AM Julien Grall <julien@xen.org> wrote:
>>
>> Hi Jan,
>>
>> On 08/05/2020 12:20, Jan Beulich wrote:
>>> On 08.05.2020 12:00, Julien Grall wrote:
>>>> You seem to be the maintainer with the most unwritten rules. Would
>>>> you mind to have a try at writing a coding style based on it?
>>>
>>> On the basis that even small, single aspect patches to CODING_STYLE
>>> have been ignored [1],
>>
>> Your thread is one of the example why I started this thread. Agreeing on
>> specific rule doesn't work because it either result to bikesheding or
>> there is not enough interest to review rule by rule.
>>
>>> I don't think this would be a good use of my
>>> time.
>>
>> I would have assumed that the current situation (i.e
>> nitpicking/bikeshedding on the ML) is not a good use of your time :).
>>
>> I would be happy to put some effort to help getting the coding style
>> right, however I believe focusing on an overall coding style would value
>> everyone's time better than a rule by rule discussion.
>>
>>> If I was promised (reasonable) feedback, I could take what I
>>> have and try to add at least a few more things based on what I find
>>> myself commenting on more frequently. But really I'd prefer it to
>>> be done the other way around - for people to look at the patches
>>> already sent, and for me to only subsequently send more. After all,
>>> if already those adjustments are controversial, I don't think we
>>> could settle on others.
>> While I understand this requires another investment from your part, I am
>> afraid it is going to be painful for someone else to go through all the
>> existing coding style bikeshedding and infer your unwritten rules.
>>
>> It might be more beneficial for that person to pursue the work done by
>> Tamas and Viktor in the past (see my previous e-mail). This may mean to
>> adopt an existing coding style (BSD) and then tweak it.
> 
> Thanks Julien for restarting this discussion. IMHO agreeing on a set
> of style rules ahead and then applying universally all at once is not
> going to be productive since we are so all over the place. Instead, I
> would recommend we start piece-by-piece. We introduce a baseline style
> checker, then maintainers can decide when and if they want to move
> their code-base to be under the automated style checker. That way we
> have a baseline and each maintainer can decide on their own term when
> they want to have their files be also style checked and in what form.
> The upside of this route I think is pretty clear: we can have at least
> partial automation even while we figure out what to do with some of
> the more problematic files and quirks that are in our code-base. I
> would highly prefer this route since I would immediately bring all
> files I maintain over to the automated checker just so I never ever
> have to deal with this again manually. What style is in use to me
> really doesn't matter, BSD was very close with some minor tweaks, or
> even what we use to check the style just as long as we have
> _something_.

Wouldn't it make more sense to have a patch checker instead, accepting
only patches which change code according to the style guide? That
wouldn't require changing complete files at a time.
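A patch-level checker of this shape is easy to sketch. The following is a minimal illustration (not an existing Xen tool) that flags problems only on lines a unified diff adds, so nobody has to convert whole files at once; the three rules are hypothetical stand-ins loosely modelled on CODING_STYLE:

```python
import re

# Hypothetical rules, loosely modelled on Xen's CODING_STYLE; a real
# checker would read them from per-directory configuration.
RULES = [
    (re.compile(r"\t"), "hard tab"),
    (re.compile(r"[ \t]+$"), "trailing whitespace"),
    (re.compile(r"^.{81,}$"), "line longer than 80 characters"),
]

def check_patch(diff_text):
    """Flag rule violations only on lines a unified diff adds."""
    problems = []
    filename, lineno = None, 0
    for raw in diff_text.splitlines():
        if raw.startswith("+++ "):
            # New-file header: remember which file the hunks belong to.
            filename = raw[4:].split("\t")[0]
        elif raw.startswith("@@"):
            # Hunk header: pick up the starting line in the new file.
            m = re.search(r"\+(\d+)", raw)
            lineno = int(m.group(1)) - 1 if m else 0
        elif raw.startswith("+"):
            # An added line: the only kind the checker complains about.
            lineno += 1
            for pattern, message in RULES:
                if pattern.search(raw[1:]):
                    problems.append((filename, lineno, message))
        elif not raw.startswith("-"):
            # Context line: advances the new-file line counter.
            lineno += 1
    return problems
```

Because only added lines are inspected, pre-existing style violations in untouched code never block a patch, which is exactly the property that avoids whole-file conversions.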


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 08 14:24:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 14:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX3vH-0003nG-AN; Fri, 08 May 2020 14:24:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XWPr=6W=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jX3vF-0003mk-GW
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 14:24:17 +0000
X-Inumbo-ID: 988df232-9137-11ea-ae69-bc764e2007e4
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 988df232-9137-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 14:24:16 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id g13so2116795wrb.8
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 07:24:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=o9g/QE37gJ+MfpEr/Vqy47zFTs3KaFk/uh9jWxrFkEk=;
 b=ZrBzYHENGc6qrUH1gzmpsOeqFQ1WFwYF6+5i9cIH3WmHFi/Jzt3G1/cDDjpRcub8cD
 +Dt/W4R5idw2wU+ewMSWREzi+rGZ2PIif4ZISmfuBZjrqmSD0+lzZhtIlMjMDORmkCHd
 +6S6x5hZMZ0hRsLIdZVXWxg38mzfFEYjFSP+iI2f/ElZ3B46fH6jwz73IR/bIOw6ycr2
 TTQSrijiD9PHLm4wzPnE+t39P+3k8ZCuym8BCwtQzV4JPuYijW/uJ9bbjRQChWIaXAb0
 7y/M9fcaZtBo5RWDGCFkx9JSp1JnAdsYrOZyg5yeThISvHDaam/ywcL249Q7oMZpfT++
 lJJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=o9g/QE37gJ+MfpEr/Vqy47zFTs3KaFk/uh9jWxrFkEk=;
 b=PIyVmB+M60hDhUNzMYenleji0GvXeZCYX/KOpbKkDGiOe09ZDV8/pPpsJ1/Z6Hvcni
 PkuY/9Aqk1uwtIDVo8vMY0KVzOkspdx63z9qT9me2vqKC5Y7BY0wdCvWzXc8gyEF/FBs
 6h7PA7vGsc2iO7Ve+3xTRXUggP1HtB995jTp4p/OkXNn1/upShnmjBjQUqgxJE9vED5l
 HkZfkM9orfWR4JHIZyCAjsRXqRUeQLCigXRVadOs0mi0ZXLLxLHQA1madNI4aUyQ0gzy
 TQqaxLJLMo96pY/uZW71Ombu4ilvCbtlSQ5ZiGrlcnxOFfcjUBOsofRJbUftRDxiOCz2
 17gg==
X-Gm-Message-State: AGi0PubYxpysCxttV71mHp0AkyEmPaY8SdQzS4J6cjECgIcRdBYTGuGR
 AN5cAcAQTiDNZS65aDJ6po38MMEBegK0HfNXQJ0=
X-Google-Smtp-Source: APiQypKX7WcBOE22gJNn5i5LqO5X6xVHxqnz6BEMdnoAd2gX3sZYNPdz9ZMahlkof5Kdy/atrs8Q8OBekaDUU2EJdH4=
X-Received: by 2002:adf:fecf:: with SMTP id q15mr3166643wrs.259.1588947855220; 
 Fri, 08 May 2020 07:24:15 -0700 (PDT)
MIME-Version: 1.0
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
 <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
 <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
 <CABfawhnvKgoP_EE7An8FDgJ11Ta8_gOo7tZ_J98sB_+Qir7=Yg@mail.gmail.com>
 <10ee3601-fc34-5714-30ea-abd2f2c03cfe@suse.com>
In-Reply-To: <10ee3601-fc34-5714-30ea-abd2f2c03cfe@suse.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 8 May 2020 08:23:40 -0600
Message-ID: <CABfawhkRcVavY+gkyvPfUTDdr1Xf=Xsmap11mgCi8cwcNyR=Ug@mail.gmail.com>
Subject: Re: Xen Coding style
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, "paul@xen.org" <paul@xen.org>,
 Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 8, 2020 at 8:18 AM Jürgen Groß <jgross@suse.com> wrote:
>
> On 08.05.20 14:55, Tamas K Lengyel wrote:
> > On Fri, May 8, 2020 at 6:21 AM Julien Grall <julien@xen.org> wrote:
> >>
> >> Hi Jan,
> >>
> >> On 08/05/2020 12:20, Jan Beulich wrote:
> >>> On 08.05.2020 12:00, Julien Grall wrote:
> >>>> You seem to be the maintainer with the most unwritten rules. Would
> >>>> you mind to have a try at writing a coding style based on it?
> >>>
> >>> On the basis that even small, single aspect patches to CODING_STYLE
> >>> have been ignored [1],
> >>
> >> Your thread is one of the example why I started this thread. Agreeing on
> >> specific rule doesn't work because it either result to bikesheding or
> >> there is not enough interest to review rule by rule.
> >>
> >>> I don't think this would be a good use of my
> >>> time.
> >>
> >> I would have assumed that the current situation (i.e
> >> nitpicking/bikeshedding on the ML) is not a good use of your time :).
> >>
> >> I would be happy to put some effort to help getting the coding style
> >> right, however I believe focusing on an overall coding style would value
> >> everyone's time better than a rule by rule discussion.
> >>
> >>> If I was promised (reasonable) feedback, I could take what I
> >>> have and try to add at least a few more things based on what I find
> >>> myself commenting on more frequently. But really I'd prefer it to
> >>> be done the other way around - for people to look at the patches
> >>> already sent, and for me to only subsequently send more. After all,
> >>> if already those adjustments are controversial, I don't think we
> >>> could settle on others.
> >> While I understand this requires another investment from your part, I am
> >> afraid it is going to be painful for someone else to go through all the
> >> existing coding style bikeshedding and infer your unwritten rules.
> >>
> >> It might be more beneficial for that person to pursue the work done by
> >> Tamas and Viktor in the past (see my previous e-mail). This may mean to
> >> adopt an existing coding style (BSD) and then tweak it.
> >
> > Thanks Julien for restarting this discussion. IMHO agreeing on a set
> > of style rules ahead and then applying universally all at once is not
> > going to be productive since we are so all over the place. Instead, I
> > would recommend we start piece-by-piece. We introduce a baseline style
> > checker, then maintainers can decide when and if they want to move
> > their code-base to be under the automated style checker. That way we
> > have a baseline and each maintainer can decide on their own term when
> > they want to have their files be also style checked and in what form.
> > The upside of this route I think is pretty clear: we can have at least
> > partial automation even while we figure out what to do with some of
> > the more problematic files and quirks that are in our code-base. I
> > would highly prefer this route since I would immediately bring all
> > files I maintain over to the automated checker just so I never ever
> > have to deal with this again manually. What style is in use to me
> > really doesn't matter, BSD was very close with some minor tweaks, or
> > even what we use to check the style just as long as we have
> > _something_.
>
> Wouldn't it make more sense to have a patch checker instead and accept
> only patches which change code according to the style guide? This
> wouldn't require to change complete files at a time.

In theory, yes. But in practice this would require agreeing on a style
that applies to every patch touching any file within Xen. We can't seem
to do that, because there are too many exceptions, corner cases and
maintainer personal preferences that apply only to a subset of the
codebase. So AFAICT what you propose doesn't seem to be a viable way to
start.

Tamas


From xen-devel-bounces@lists.xenproject.org Fri May 08 14:43:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 14:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4DG-0005QS-CR; Fri, 08 May 2020 14:42:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX4DE-0005QK-OU
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 14:42:52 +0000
X-Inumbo-ID: 318be46a-913a-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 318be46a-913a-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 14:42:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 60276AF76;
 Fri,  8 May 2020 14:42:53 +0000 (UTC)
Subject: Re: Xen Coding style
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
 <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
 <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
 <CABfawhnvKgoP_EE7An8FDgJ11Ta8_gOo7tZ_J98sB_+Qir7=Yg@mail.gmail.com>
 <10ee3601-fc34-5714-30ea-abd2f2c03cfe@suse.com>
 <CABfawhkRcVavY+gkyvPfUTDdr1Xf=Xsmap11mgCi8cwcNyR=Ug@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d0be31c1-f5fe-30ba-9c1a-5d37333d2479@suse.com>
Date: Fri, 8 May 2020 16:42:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CABfawhkRcVavY+gkyvPfUTDdr1Xf=Xsmap11mgCi8cwcNyR=Ug@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, "paul@xen.org" <paul@xen.org>,
 Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.20 16:23, Tamas K Lengyel wrote:
> On Fri, May 8, 2020 at 8:18 AM Jürgen Groß <jgross@suse.com> wrote:
>>
>> On 08.05.20 14:55, Tamas K Lengyel wrote:
>>> On Fri, May 8, 2020 at 6:21 AM Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi Jan,
>>>>
>>>> On 08/05/2020 12:20, Jan Beulich wrote:
>>>>> On 08.05.2020 12:00, Julien Grall wrote:
>>>>>> You seem to be the maintainer with the most unwritten rules. Would
>>>>>> you mind to have a try at writing a coding style based on it?
>>>>>
>>>>> On the basis that even small, single aspect patches to CODING_STYLE
>>>>> have been ignored [1],
>>>>
>>>> Your thread is one of the example why I started this thread. Agreeing on
>>>> specific rule doesn't work because it either result to bikesheding or
>>>> there is not enough interest to review rule by rule.
>>>>
>>>>> I don't think this would be a good use of my
>>>>> time.
>>>>
>>>> I would have assumed that the current situation (i.e
>>>> nitpicking/bikeshedding on the ML) is not a good use of your time :).
>>>>
>>>> I would be happy to put some effort to help getting the coding style
>>>> right, however I believe focusing on an overall coding style would value
>>>> everyone's time better than a rule by rule discussion.
>>>>
>>>>> If I was promised (reasonable) feedback, I could take what I
>>>>> have and try to add at least a few more things based on what I find
>>>>> myself commenting on more frequently. But really I'd prefer it to
>>>>> be done the other way around - for people to look at the patches
>>>>> already sent, and for me to only subsequently send more. After all,
>>>>> if already those adjustments are controversial, I don't think we
>>>>> could settle on others.
>>>> While I understand this requires another investment from your part, I am
>>>> afraid it is going to be painful for someone else to go through all the
>>>> existing coding style bikeshedding and infer your unwritten rules.
>>>>
>>>> It might be more beneficial for that person to pursue the work done by
>>>> Tamas and Viktor in the past (see my previous e-mail). This may mean to
>>>> adopt an existing coding style (BSD) and then tweak it.
>>>
>>> Thanks Julien for restarting this discussion. IMHO agreeing on a set
>>> of style rules ahead and then applying universally all at once is not
>>> going to be productive since we are so all over the place. Instead, I
>>> would recommend we start piece-by-piece. We introduce a baseline style
>>> checker, then maintainers can decide when and if they want to move
>>> their code-base to be under the automated style checker. That way we
>>> have a baseline and each maintainer can decide on their own term when
>>> they want to have their files be also style checked and in what form.
>>> The upside of this route I think is pretty clear: we can have at least
>>> partial automation even while we figure out what to do with some of
>>> the more problematic files and quirks that are in our code-base. I
>>> would highly prefer this route since I would immediately bring all
>>> files I maintain over to the automated checker just so I never ever
>>> have to deal with this again manually. What style is in use to me
>>> really doesn't matter, BSD was very close with some minor tweaks, or
>>> even what we use to check the style just as long as we have
>>> _something_.
>>
>> Wouldn't it make more sense to have a patch checker instead and accept
>> only patches which change code according to the style guide? This
>> wouldn't require to change complete files at a time.
> 
> In theory, yes. But in practice this would require that we can agree
> on a style that applies to all patches that touch any file within Xen.
> We can't seem to do that because there are too many exceptions and
> corner-cases and personal-preferences of maintainers that apply only
> to a subset of the codebase. So AFAICT what you propose doesn't seem
> to be a viable way to start.

I think long ago we already agreed to have a control file which tells a
style checker which style to apply (if any). As a start we could have a
patch checker checking only the commit message (does it have a
Signed-off-by: etc.). The next step would be to add the control file,
plus a framework to split the patch into the changed file hunks and pass
each hunk to the correct checking script (which might do nothing in the
beginning). And then we could add some logic to the individual checkers.
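The staged plan above can be sketched in a few lines. Everything here is made up for illustration (the control-table format, the function names, the path patterns are not from any existing Xen tool): stage one checks only the commit message, later stages split the diff per file and look up which style, if any, the control table assigns:

```python
import fnmatch

# Hypothetical control table: glob pattern -> style name, or None for
# "no checking yet" (per-file checkers may do nothing in the beginning).
STYLE_MAP = [
    ("xen/arch/x86/*", "xen"),
    ("tools/*", None),
]

def check_commit_message(message):
    """Stage one: only insist on a Signed-off-by: tag."""
    return any(line.startswith("Signed-off-by:")
               for line in message.splitlines())

def split_hunks(diff_text):
    """Group a unified diff's lines by the file they modify."""
    hunks, current = {}, None
    for line in diff_text.splitlines():
        if line.startswith("+++ "):
            current = line[4:].split("\t")[0]
            hunks[current] = []
        elif current is not None:
            hunks[current].append(line)
    return hunks

def style_for(path):
    """Pick the style the control table assigns to a path, if any."""
    for pattern, style in STYLE_MAP:
        if fnmatch.fnmatch(path, pattern):
            return style
    return None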


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 08 14:43:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 14:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4DE-0005Q2-01; Fri, 08 May 2020 14:42:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jX4DD-0005Px-0M
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 14:42:51 +0000
X-Inumbo-ID: 305bf198-913a-11ea-a00c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 305bf198-913a-11ea-a00c-12813bfff9fa;
 Fri, 08 May 2020 14:42:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 973C3AE47;
 Fri,  8 May 2020 14:42:51 +0000 (UTC)
Subject: Re: [PATCH v8 05/12] x86emul: support X{SUS,RES}LDTRK
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <6e7500d2-262c-29c7-b9be-3fc9be26d198@suse.com>
 <feb3a6ed-b6b9-959c-8eb1-fb6ecfff034c@citrix.com>
 <b5f9438b-471d-bf32-3f4c-11287060938c@suse.com>
 <9fa8ceb3-fd4b-e754-2c82-92f134603e34@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <940a0a21-e63c-85a8-4fc9-c24547f9854f@suse.com>
Date: Fri, 8 May 2020 16:42:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <9fa8ceb3-fd4b-e754-2c82-92f134603e34@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 15:15, Andrew Cooper wrote:
> On 08/05/2020 08:38, Jan Beulich wrote:
>> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>>
>> On 07.05.2020 22:13, Andrew Cooper wrote:
>>> On 05/05/2020 09:14, Jan Beulich wrote:
>>>> --- a/xen/tools/gen-cpuid.py
>>>> +++ b/xen/tools/gen-cpuid.py
>>>> @@ -284,6 +284,9 @@ def crunch_numbers(state):
>>>>          # as dependent features simplifies Xen's logic, and prevents the guest
>>>>          # from seeing implausible configurations.
>>>>          IBRSB: [STIBP, SSBD],
>>>> +
>>>> +        # In principle the TSXLDTRK insns could also be considered independent.
>>>> +        RTM: [TSXLDTRK],
>>> Why the link?  There is no relevant interaction AFAICT.
>> Do the insns make any sense without TSX? Anyway - hence the
>> comment, and if you're convinced the connection does not
>> need making, I'd be okay dropping it. I would assume though
>> that we'd better hide TSXLDTRK whenever we hide RTM, which
>> is most easily achieved by having a connection here.
> 
> Actually - that is a very good point.  I expect there will (or should)
> be an interaction with MSR_TSX_CTRL, as it has CPUID-hiding functionality.
> 
> For now, could I ask you to not expose this to guests in this patch?

As per our irc discussion, I'd make it 'a' then instead of 'A'.
Will need to wait for gen-cpuid.py to accept 'a' then.
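The value of the RTM -> TSXLDTRK link debated above is transitive hiding: a simplified illustration of the dependency crunching (not gen-cpuid.py itself; the table is a two-entry excerpt mirroring the hunk quoted earlier):

```python
# Excerpt-style dependency table: feature -> features depending on it.
DEPS = {
    "IBRSB": ["STIBP", "SSBD"],
    "RTM": ["TSXLDTRK"],
}

def hidden_with(feature):
    """Every feature that disappears once `feature` is hidden."""
    hidden, stack = set(), [feature]
    while stack:
        f = stack.pop()
        if f not in hidden:
            hidden.add(f)
            # Dependents of a hidden feature are hidden too.
            stack.extend(DEPS.get(f, []))
    return hidden
```

So hiding RTM automatically hides TSXLDTRK, without any special-casing at the point where RTM is suppressed.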

> For the emulator side of things alone I think this is ok (although
> looking over it a second time, we could really do with a comment in the
> code explaining that we're never in an RTM region, hence the nop behaviour).

I've added

                /*
                 * We're never in a transactional region when coming here
                 * - nothing else to do.
                 */

to both paths.

> I'll follow up with Intel, and we can figure out the CPUID derivation
> details at a later point.
> 
> If you're happy with this plan, then A-by to save a round trip.

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 14:48:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 14:48:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4J2-0005hs-1h; Fri, 08 May 2020 14:48:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Ij8=6W=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jX4J0-0005hn-Kz
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 14:48:50 +0000
X-Inumbo-ID: 065aa9b1-913b-11ea-a00d-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 065aa9b1-913b-11ea-a00d-12813bfff9fa;
 Fri, 08 May 2020 14:48:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588949329;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=BX34D3fECH+PeVZ3NA/2gZzeW1JvaHPOZiM4xbvXJLs=;
 b=UKytWx41y0tFDhLTlWhbhMXrgNMwJjLr6z7Sn5udMZWkrqL4jDfpc+XA
 OOKxP59C2w4cAElA/+YGy+pAs2W/M70EEmtBS/rFBa13Gtu75PQJavRre
 g3OyL1DNX43cmkpTXUDxuO4D4gMuZKtL5W+BNzatoA/LPCXIY5m1Xf7dx o=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: AsQAxb7mBQNkVZ6P+L/cIw5KMUL1iOUrtEZ5Y/K393oKqie/3UPYkSJGkLuCySREmmL5X9u82C
 OggPhWP+59iDd8mZ8akjXLZ/NEbJB5gv1iyJFEqAITtDSahMcJN7sksckfW8Ep4zI+fJgfMqD8
 /nFF3hWgOQjaX9fch8YcztZlBwmjMZWsoxIVv/s/oQAJpzHXmHkGbYpfcllLvaLHzANg+lz3Ca
 hpImTUlKen+XUni3KvK1FPh7JP2DYDCSzpL/+dc2vEZVpv2h8h4hiMYzcWBDig4BnNu/cMtoCk
 A5M=
X-SBRS: 2.7
X-MesageID: 17766602
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,367,1583211600"; d="scan'208";a="17766602"
Date: Fri, 8 May 2020 16:48:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
Message-ID: <20200508144820.GI1353@Air-de-Roger>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
 <c8826959-8dbe-cc39-80b0-8ba03c6a6f30@citrix.com>
 <16ed4b91-fdb3-d2c5-9a3c-4aa7ff172b98@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <16ed4b91-fdb3-d2c5-9a3c-4aa7ff172b98@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 08, 2020 at 03:49:35PM +0200, Jan Beulich wrote:
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> 
> On 08.05.2020 14:54, Andrew Cooper wrote:
> > On 08/05/2020 13:43, Jan Beulich wrote:
> >> The op has a register/unregister flag, and hence registration shouldn't
> >> happen unilaterally. Introduce unregister_vpci_mmcfg_handler() to handle
> >> this case.
> >>
> >> Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > I agree in principle that registration shouldn't be unilateral, but why
> > on earth does the API behave like that to begin with?
> > 
> > There is no provision to move or update MMCFG regions in any spec I'm
> > aware of, and hardware cannot in practice update memory routing like this.
> > 
> > Under what circumstances should we tolerate an unregister in the first
> > place?
> 
> Hot unplug of an entire segment, for example.

An OS could also rebalance the resources of a host bridge, according
to the PCI firmware spec, in which case _CBA should be re-evaluated.

I'm not sure whether rebalancing would work anyway, or if there even
are systems that support this and OSes that would attempt to do it,
but since we have the interface for this let's try to do something
sensible.

The other option is simply returning -EOPNOTSUPP. If the domain doesn't
try to access devices residing on the hot-unplugged segment it shouldn't
make much of a difference; rebalancing is the case where Xen must
support add/remove in order to relocate the ECAM areas.
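For reference, the shape of the fix under discussion is just honouring the flag the op already carries. A toy sketch (names and data structures are illustrative stand-ins, not Xen's actual internals):

```python
# Hypothetical flag bit: set means "reserve/register the MMCFG region",
# clear means "unregister it" (e.g. on segment hot-unplug).
MMCFG_REGISTER = 1

registered = set()  # toy model of the tracked MMCFG regions, by segment

def handle_mmcfg_reserved(segment, flags):
    """Dispatch on the flag instead of always registering (the bug fixed)."""
    if flags & MMCFG_REGISTER:
        registered.add(segment)
    else:
        registered.discard(segment)
    return 0
```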

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 08 15:03:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:03:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4XP-0007LG-D0; Fri, 08 May 2020 15:03:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Ij8=6W=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jX4XO-0007LB-Kb
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:03:42 +0000
X-Inumbo-ID: 1984abce-913d-11ea-b07b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1984abce-913d-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 15:03:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588950220;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=FwIDSNXsGMfTsMPP+jwB/EU2ayOwIRuK6CAOtUnYjCw=;
 b=Fm03/03SoiT/9tCrZi3CKv6AgBsUJd4UxSU3KcMOzRr5a0uSzBmyv78b
 8jubL+wWTY+J1RDWF57tj5KwJBdbHE3pOvpJyAYoAoXNv/U2uWffUMRtG
 4MruiwObJdymjVgESvvVr2yxRc5iwH1IA2XTSzedwRbuHPceqoZ0Kd714 s=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: H5Im5vXfE0UWUo+Rvkj7MbX3sNPeEG/41REr6BiDqzjtfnFlTAI+f2B8AkJREXUV7eSAR9Qr1p
 QTCMHs0LrxObQNkqQwPau40VNUsGEzabdMBiKo10rVCAdE0dyxKa4c3L/qETv9770qf1dKiHYo
 l81eza+vmrVgDHBQ1TTay3mztpQNodMC9Xm3cOVwNv8mI+yyNyQ5SXFXrEy7OJgboRcacew90v
 Zuaf/rs4vkoLgLaHGrFbAjlHhUDJMxR/2DQWH8b9ISeMsTOKlgmXf6GoUJA2G6NZ2T/kfNM1Wy
 D+A=
X-SBRS: 2.7
X-MesageID: 17357810
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17357810"
Date: Fri, 8 May 2020 17:03:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
Message-ID: <20200508150312.GJ1353@Air-de-Roger>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 08, 2020 at 02:43:38PM +0200, Jan Beulich wrote:
> The op has a register/unregister flag, and hence registration shouldn't
> happen unilaterally. Introduce unregister_vpci_mmcfg_handler() to handle
> this case.
> 
> Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -558,6 +558,47 @@ int register_vpci_mmcfg_handler(struct d
>      return 0;
>  }
>  
> +int unregister_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> +                                  unsigned int start_bus, unsigned int end_bus,
> +                                  unsigned int seg)
> +{
> +    struct hvm_mmcfg *mmcfg;
> +    int rc = -ENOENT;
> +
> +    ASSERT(is_hardware_domain(d));
> +
> +    if ( start_bus > end_bus )
> +        return -EINVAL;
> +
> +    write_lock(&d->arch.hvm.mmcfg_lock);
> +
> +    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
> +        if ( mmcfg->addr == addr + (start_bus << 20) &&
> +             mmcfg->segment == seg &&
> +             mmcfg->start_bus == start_bus &&
> +             mmcfg->size == ((end_bus - start_bus + 1) << 20) )
> +        {
> +            list_del(&mmcfg->next);
> +            if ( !list_empty(&d->arch.hvm.mmcfg_regions) )
> +                xfree(mmcfg);
> +            else
> +            {
> +                /*
> +                 * Cannot unregister the MMIO handler - leave a fake entry
> +                 * on the list.
> +                 */
> +                memset(mmcfg, 0, sizeof(*mmcfg));
> +                list_add(&mmcfg->next, &d->arch.hvm.mmcfg_regions);

Instead of leaving this zombie entry around maybe we could add a
static bool in register_vpci_mmcfg_handler to signal whether the MMIO
intercept has been registered?

> +            }
> +            rc = 0;
> +            break;
> +        }
> +
> +    write_unlock(&d->arch.hvm.mmcfg_lock);
> +
> +    return rc;
> +}
> +
>  void destroy_vpci_mmcfg(struct domain *d)
>  {
>      struct list_head *mmcfg_regions = &d->arch.hvm.mmcfg_regions;
> --- a/xen/arch/x86/physdev.c
> +++ b/xen/arch/x86/physdev.c
> @@ -559,12 +559,18 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>          if ( !ret && has_vpci(currd) )
>          {
>              /*
> -             * For HVM (PVH) domains try to add the newly found MMCFG to the
> -             * domain.
> +             * For HVM (PVH) domains try to add/remove the reported MMCFG
> +             * to/from the domain.
>               */
> -            ret = register_vpci_mmcfg_handler(currd, info.address,
> -                                              info.start_bus, info.end_bus,
> -                                              info.segment);
> +            if ( info.flags & XEN_PCI_MMCFG_RESERVED )

Do you think you could also add a small note in physdev.h regarding
the fact that XEN_PCI_MMCFG_RESERVED is used to register an MMCFG
region, and not setting it would imply an unregister request?

It's not obvious to me from the name of the flag.

Thanks, Roger.
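The address/size matching in the proposed unregister helper can be checked in isolation: each PCI bus decodes 1 MiB of ECAM space (256 devfn x 4 KiB of config space), hence the << 20 shifts in the quoted patch. A self-contained sketch with illustrative names:

```c
#include <stdint.h>

/* Each bus decodes 1MiB of ECAM space: 256 devfn * 4KiB config space. */
#define ECAM_BUS_SHIFT 20

/* Base address actually covered when registering buses [start_bus, ...]. */
static uint64_t ecam_region_base(uint64_t addr, unsigned int start_bus)
{
    return addr + ((uint64_t)start_bus << ECAM_BUS_SHIFT);
}

/* Size of the region covering buses start_bus..end_bus inclusive. */
static uint64_t ecam_region_size(unsigned int start_bus, unsigned int end_bus)
{
    return (uint64_t)(end_bus - start_bus + 1) << ECAM_BUS_SHIFT;
}
```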


From xen-devel-bounces@lists.xenproject.org Fri May 08 15:04:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:04:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4Xl-0007MZ-Ld; Fri, 08 May 2020 15:04:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jX4Xk-0007MQ-Mt
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:04:04 +0000
X-Inumbo-ID: 27bbffc6-913d-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27bbffc6-913d-11ea-b9cf-bc764e2007e4;
 Fri, 08 May 2020 15:04:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9CF03AE65;
 Fri,  8 May 2020 15:04:05 +0000 (UTC)
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
 <20200508133720.GH1353@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
Date: Fri, 8 May 2020 17:04:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508133720.GH1353@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 15:37, Roger Pau Monné wrote:
> On Tue, May 05, 2020 at 10:16:20AM +0200, Jan Beulich wrote:
>> --- a/tools/tests/x86_emulator/test_x86_emulator.c
>> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
>> @@ -2442,6 +2442,27 @@ int main(int argc, char **argv)
>>      else
>>          printf("skipped\n");
>>  
>> +    printf("%-40s", "Testing fldenv 8(%edx)...");
> 
> Likely a stupid question, but why the added 8? edx will contain the
> memory address used to save the state by fnstenv, so I would expect
> fldenv to just load from there?

The 8 is just to vary ModR/M encodings across the various
tests - it's an arbitrary choice.

>> +    if ( stack_exec && cpu_has_fpu )
>> +    {
>> +        asm volatile ( "fnstenv %0\n\t"
>> +                       "fninit"
>> +                       : "=m" (res[2]) :: "memory" );
> 
> Why do you need the memory clobber here? I assume it's because res is
> of type unsigned int and hence doesn't have the right size that
> fnstenv will actually write to?

Correct.

>> @@ -4948,21 +4949,14 @@ x86_emulate(
>>                  dst.bytes = 4;
>>                  emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
>>                  break;
>> -            case 4: /* fldenv - TODO */
>> -                state->fpu_ctrl = true;
>> -                goto unimplemented_insn;
>> -            case 5: /* fldcw m2byte */
>> -                state->fpu_ctrl = true;
>> -            fpu_memsrc16:
>> -                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
>> -                                     2, ctxt)) != X86EMUL_OKAY )
>> -                    goto done;
>> -                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
>> -                break;
>> +            case 4: /* fldenv */
>> +                /* Raise #MF now if there are pending unmasked exceptions. */
>> +                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
> 
> Maybe it would make sense to have a wrapper for fnop?

I'm not convinced that would be worth it.

>> @@ -11648,6 +11651,89 @@ int x86_emul_blk(
>>  
>>  #ifndef X86EMUL_NO_FPU
>>  
>> +    case blk_fld:
>> +        ASSERT(!data);
>> +
>> +        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
>> +        switch ( bytes )
>> +        {
>> +        case sizeof(fpstate.env):
>> +        case sizeof(fpstate):
>> +            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
>> +            if ( !state->rex_prefix )
>> +            {
>> +                unsigned int fip = fpstate.env.mode.real.fip_lo +
>> +                                   (fpstate.env.mode.real.fip_hi << 16);
>> +                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
>> +                                   (fpstate.env.mode.real.fdp_hi << 16);
>> +                unsigned int fop = fpstate.env.mode.real.fop;
>> +
>> +                fpstate.env.mode.prot.fip = fip & 0xf;
>> +                fpstate.env.mode.prot.fcs = fip >> 4;
>> +                fpstate.env.mode.prot.fop = fop;
>> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
>> +                fpstate.env.mode.prot.fds = fdp >> 4;
> 
> I've found the layouts in the SDM vol. 1, but I haven't been able to
> found the translation mechanism from real to protected. Could you
> maybe add a reference here?

A reference to some piece of documentation? I don't think this
is spelled out anywhere. It's also only one of various possible
ways of doing the translation, but among them the most flexible
one for possible consumers of the data (because of using the
smallest possible offsets into the segments).
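The translation described here is plain arithmetic: the 20-bit real-mode linear FIP/FDP value is split so the selector carries everything above the low 4 bits, giving the smallest possible offset into the segment. A self-contained illustration (field names are made up for the sketch, not the emulator's):

```c
#include <stdint.h>

/* Illustrative only: split a real-mode linear FIP/FDP value into a
 * selector:offset pair using the smallest possible offset, as the
 * discussion above describes (offset = low 4 bits, selector = rest). */
struct seg_off {
    uint16_t sel;
    uint32_t off;
};

static struct seg_off linear_to_prot(uint32_t linear)
{
    struct seg_off r = {
        .sel = (uint16_t)(linear >> 4),
        .off = linear & 0xf,
    };
    return r;
}
```

Reversing the split (sel << 4) + off recovers the original linear address, which is why this choice loses no information.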

>> +            }
>> +
>> +            if ( bytes == sizeof(fpstate.env) )
>> +                ptr = NULL;
>> +            else
>> +                ptr += sizeof(fpstate.env);
>> +            break;
>> +
>> +        case sizeof(struct x87_env16):
>> +        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
>> +        {
>> +            const struct x87_env16 *env = ptr;
>> +
>> +            fpstate.env.fcw = env->fcw;
>> +            fpstate.env.fsw = env->fsw;
>> +            fpstate.env.ftw = env->ftw;
>> +
>> +            if ( state->rex_prefix )
>> +            {
>> +                fpstate.env.mode.prot.fip = env->mode.prot.fip;
>> +                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
>> +                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
>> +                fpstate.env.mode.prot.fds = env->mode.prot.fds;
>> +                fpstate.env.mode.prot.fop = 0; /* unknown */
>> +            }
>> +            else
>> +            {
>> +                unsigned int fip = env->mode.real.fip_lo +
>> +                                   (env->mode.real.fip_hi << 16);
>> +                unsigned int fdp = env->mode.real.fdp_lo +
>> +                                   (env->mode.real.fdp_hi << 16);
>> +                unsigned int fop = env->mode.real.fop;
>> +
>> +                fpstate.env.mode.prot.fip = fip & 0xf;
>> +                fpstate.env.mode.prot.fcs = fip >> 4;
>> +                fpstate.env.mode.prot.fop = fop;
>> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
>> +                fpstate.env.mode.prot.fds = fdp >> 4;
> 
> This looks mostly the same as the translation done above, so maybe
> could be abstracted anyway in a macro to avoid the code repetition?
> (ie: fpstate_real_to_prot(src, dst) or some such).

Just the 5 assignments could be put in an inline function, but
if we also wanted to abstract away the declarations with their
initializers, it would need to be a macro because of the
different types of fpstate.env and *env. While I'd generally
prefer inline functions, the macro would have the benefit that
it could be #define-d / #undef-d right inside this case block.
Thoughts? 

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 15:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4Yq-0007TP-0h; Fri, 08 May 2020 15:05:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX4Yo-0007TI-UB
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:05:10 +0000
X-Inumbo-ID: 4f4dd21c-913d-11ea-a013-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f4dd21c-913d-11ea-a013-12813bfff9fa;
 Fri, 08 May 2020 15:05:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588950310;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Rxz5ayXkVhQcIZe8LnlmVx26dYJADMhWFsWDJ7DcR+c=;
 b=Ed5Yg5e24wJFhqYzyHi6CyJjZhcdEDRnCP8LQFEyWNDeXgUNMqHbKHXM
 lLaLl+J1UDO5oOWDzzJvNwG3lcqAVoA/hxqA3lq3K1LBB7xgcUAbcl+U9
 ze2CYMYRgCflVNpTo83fOPdoP/AXMY5Gpb8HyEyhcoN/THtVvkCoXzE6m M=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: sX6pPtWO1xcemqcgkJxls9p1PABP8wgYuEg4xBm7z6sdf/cSQPr9qEKqJyvFMsdWbzy+IX55Hi
 M7W39NEzmuPYbAGGVCGRfyVtfZTs52EXnMWxx+KGl5AgTshv4D+JYTZlFbs2tkeKerOj18pa/f
 27/e1JTX52a0NqMD0nlStyNHeGKUaGxR8I1E9L35J0R4iMxiAoKvsjUyiKNriom9YlpJHFOMV2
 ILIXQmJHbpM9a1cC/J+pygppAekK9Nv1ShlQv5NArIUImPgyIXwfgMHRD0zvQkey4UT7acAp7d
 fOw=
X-SBRS: 2.7
X-MesageID: 17358004
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17358004"
Subject: Re: [PATCH v8 04/12] x86emul: support SERIALIZE
To: Jan Beulich <jbeulich@suse.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <0bbbf95e-48ec-ee73-5234-52cf9c6c06d8@suse.com>
 <64de91ff-41ae-baf1-1119-0ba39df32275@citrix.com>
 <0c5a03c6-6c4f-c6ec-e474-71a2badd1c9c@suse.com>
 <fe12bd3d-37f6-111c-e738-dde0b42d2d3d@citrix.com>
 <1a4a9298-3446-a863-2e24-4c81244594dd@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <883156dc-dd47-203d-8d64-2a4f025270f3@citrix.com>
Date: Fri, 8 May 2020 16:05:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <1a4a9298-3446-a863-2e24-4c81244594dd@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08/05/2020 14:59, Jan Beulich wrote:
> On 08.05.2020 15:00, Andrew Cooper wrote:
>> On 08/05/2020 08:34, Jan Beulich wrote:
>>>>> @@ -5660,6 +5661,18 @@ x86_emulate(
>>>>>                  goto done;
>>>>>              break;
>>>>>  
>>>>> +        case 0xe8:
>>>>> +            switch ( vex.pfx )
>>>>> +            {
>>>>> +            case vex_none: /* serialize */
>>>>> +                host_and_vcpu_must_have(serialize);
>>>>> +                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
>>>> There is very little need for an actual implementation here.  The VMExit
>>>> to get here is good enough.
>>>>
>>>> The only question is whether pre-unrestricted_guest Intel boxes are
>>>> liable to find this in real mode code.
>>> Apart from this we also shouldn't assume HVM in the core emulator,
>>> I think.
>> It's not assuming HVM.  It's just that there is no way we can end up
>> emulating this instruction from PV context.
>>
>> If we could end up here in PV context, the exception causing us to
>> emulate to begin with would be good enough as well.
> With the current way of where/how emulation gets involved -
> yes. I'd like to remind you though of the 4-insn window
> shadow code tries to emulate over for PAE guests. There
> is no intervening #VMEXIT there.
>
> So do you want me to drop the asm() then, and switch from
> host_and_vcpu_must_have(serialize) to just
> vcpu_must_have(serialize)?

No - keep it as is.  This isn't a fastpath, and it is much safer to
assume there might be something we've forgotten.  (Or perhaps some new
future behaviour included in the definition of architecturally serialising).

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 15:11:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4f6-0008Np-Tl; Fri, 08 May 2020 15:11:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=t3Kb=6W=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jX4f4-0008Nj-Ro
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:11:38 +0000
X-Inumbo-ID: 36689394-913e-11ea-a015-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 36689394-913e-11ea-a015-12813bfff9fa;
 Fri, 08 May 2020 15:11:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A9454AB64;
 Fri,  8 May 2020 15:11:39 +0000 (UTC)
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
 <20200508150312.GJ1353@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <70c8b4f4-b690-c031-3b90-1776d872d171@suse.com>
Date: Fri, 8 May 2020 17:11:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508150312.GJ1353@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 17:03, Roger Pau Monné wrote:
> On Fri, May 08, 2020 at 02:43:38PM +0200, Jan Beulich wrote:
>> --- a/xen/arch/x86/hvm/io.c
>> +++ b/xen/arch/x86/hvm/io.c
>> @@ -558,6 +558,47 @@ int register_vpci_mmcfg_handler(struct d
>>      return 0;
>>  }
>>  
>> +int unregister_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
>> +                                  unsigned int start_bus, unsigned int end_bus,
>> +                                  unsigned int seg)
>> +{
>> +    struct hvm_mmcfg *mmcfg;
>> +    int rc = -ENOENT;
>> +
>> +    ASSERT(is_hardware_domain(d));
>> +
>> +    if ( start_bus > end_bus )
>> +        return -EINVAL;
>> +
>> +    write_lock(&d->arch.hvm.mmcfg_lock);
>> +
>> +    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
>> +        if ( mmcfg->addr == addr + (start_bus << 20) &&
>> +             mmcfg->segment == seg &&
>> +             mmcfg->start_bus == start_bus &&
>> +             mmcfg->size == ((end_bus - start_bus + 1) << 20) )
>> +        {
>> +            list_del(&mmcfg->next);
>> +            if ( !list_empty(&d->arch.hvm.mmcfg_regions) )
>> +                xfree(mmcfg);
>> +            else
>> +            {
>> +                /*
>> +                 * Cannot unregister the MMIO handler - leave a fake entry
>> +                 * on the list.
>> +                 */
>> +                memset(mmcfg, 0, sizeof(*mmcfg));
>> +                list_add(&mmcfg->next, &d->arch.hvm.mmcfg_regions);
> 
> Instead of leaving this zombie entry around maybe we could add a
> static bool in register_vpci_mmcfg_handler to signal whether the MMIO
> intercept has been registered?

That was my initial plan indeed, but registration is per-domain.

>> --- a/xen/arch/x86/physdev.c
>> +++ b/xen/arch/x86/physdev.c
>> @@ -559,12 +559,18 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>>          if ( !ret && has_vpci(currd) )
>>          {
>>              /*
>> -             * For HVM (PVH) domains try to add the newly found MMCFG to the
>> -             * domain.
>> +             * For HVM (PVH) domains try to add/remove the reported MMCFG
>> +             * to/from the domain.
>>               */
>> -            ret = register_vpci_mmcfg_handler(currd, info.address,
>> -                                              info.start_bus, info.end_bus,
>> -                                              info.segment);
>> +            if ( info.flags & XEN_PCI_MMCFG_RESERVED )
> 
> Do you think you could also add a small note in physdev.h regarding
> the fact that XEN_PCI_MMCFG_RESERVED is used to register an MMCFG
> region, and not setting it would imply an unregister request?
> 
> It's not obvious to me from the name of the flag.

The main purpose of the flag is to identify whether a region can be
used (because of having been found marked suitably reserved by
firmware). The flag not set effectively means "region is not marked
reserved". You pointing this out makes me wonder whether instead I
should simply expand the if() in context, without making it behave
like unregistration. Then again we'd have no way to unregister a
region, and hence (ab)using this function for this purpose seems to
make sense (and, afaict, not require any code changes elsewhere).
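The semantics being discussed reduce to a single branch on the flag; a hypothetical standalone illustration (XEN_PCI_MMCFG_RESERVED is the real flag name from physdev.h, assumed here to be bit 0, while the two handlers are stand-ins):

```c
/* Assumed value (bit 0); the real definition lives in
 * xen/include/public/physdev.h. */
#define XEN_PCI_MMCFG_RESERVED 1

/* Stand-ins for register_vpci_mmcfg_handler()/the unregister path. */
static int stub_register(void)   { return 1; }
static int stub_unregister(void) { return 2; }

/* Flag set => region is marked suitably reserved by firmware, so
 * register it; flag clear => treated as an unregister request. */
static int dispatch_mmcfg(unsigned int flags)
{
    return (flags & XEN_PCI_MMCFG_RESERVED) ? stub_register()
                                            : stub_unregister();
}
```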

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 08 15:28:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4ul-0000wj-Ca; Fri, 08 May 2020 15:27:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX4uk-0000wd-HM
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:27:50 +0000
X-Inumbo-ID: 796f4096-9140-11ea-9887-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 796f4096-9140-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 15:27:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588951670;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=rWD7JG3rxsZ/VbK7A2DoYPoErxiSrz+GJ4tgtyShxYc=;
 b=G/Tq+DRooWqrCv+CfXcTjjbvxF1dbNutf4hcZn/iUI+iOod0ub3PT2Dt
 7JBegNbFCW4RJ3TE+3D1/EhfCBm/0PTskWysJfql+UsctxnyGfgFeqcp4
 0kBdOd8/0dmR5R9L/yYFpl394Q0X04bFI+XekO07SbyPorYVLqqjGBC9M o=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: G0qHJbCARlPKqDc3xWvLoKCwNkkQ/OjZUm9DlvVedlJ+Xbqzlj+D9pmd0Q2EFW4U9eszAfKF/L
 LckIdMDsOWhfLYbCdoOxYrAdH4JshgQlwlTZbBpMyHN/YltCcrcd6h+E4JJqEwT70fE2iQaCwu
 ehKigCvM7t55ZRfCe95gfs+LxGh7ZGtnPhuyZbgmSEErT5nSEUEyvNBMlTTAKHi97pJ5pKwIov
 QZb/dbY3MnDG9SpjznqRrkCyPaeVp1xzKEPuRzCkSnrDDqIPKoKQWrNMHeSSr20Y9PR5Z62rtc
 uVc=
X-SBRS: 2.7
X-MesageID: 17098576
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17098576"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/gen-cpuid: Distinguish default vs max in feature
 annotations
Date: Fri, 8 May 2020 16:27:29 +0100
Message-ID: <20200508152729.14295-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Allow lowercase a/s/h to be used to annotate a non-default feature.

However, until the toolstack migration logic is fixed, it is not safe
to activate them yet.  Tolerate the annotations, but ignore them for now.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

This patch has been pending the toolstack work for several months now, and we
want to start using the max tags for x86 emul work.
---
 xen/include/public/arch-x86/cpufeatureset.h | 2 ++
 xen/tools/gen-cpuid.py                      | 5 +++++
 2 files changed, 7 insertions(+)

diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index e2749245f3..0ffab6c57b 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -87,6 +87,8 @@ enum {
  *   'A' = All guests.
  *   'S' = All HVM guests (not PV guests).
  *   'H' = HVM HAP guests (not PV or HVM Shadow guests).
+ *   Upper case => Available by default
+ *   Lower case => Can be opted-in to, but not available by default.
  */
 
 /* Intel-defined CPU features, CPUID level 0x00000001.edx, word 0 */
diff --git a/xen/tools/gen-cpuid.py b/xen/tools/gen-cpuid.py
index af5610a5e6..d90a2d85c7 100755
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -23,6 +23,7 @@ def __init__(self, input, output):
         self.raw = {
             '!': set(),
             'A': set(), 'S': set(), 'H': set(),
+            'a': set(), 's': set(), 'h': set(),
         }
 
         # State calculated
@@ -133,9 +134,13 @@ def crunch_numbers(state):
     state.hvm_shadow_def = state.pv_def | state.raw['S']
     state.hvm_hap_def = state.hvm_shadow_def | state.raw['H']
 
+    # TODO: Ignore def/max split until the toolstack migration logic is fixed
     state.pv_max = state.pv_def
     state.hvm_shadow_max = state.hvm_shadow_def
     state.hvm_hap_max = state.hvm_hap_def
+    # state.pv_max = state.raw['A'] | state.raw['a']
+    # state.hvm_shadow_max = state.pv_max | state.raw['S'] | state.raw['s']
+    # state.hvm_hap_max = state.hvm_shadow_max | state.raw['H'] | state.raw['h']
 
     #
     # Feature dependency information.
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:31:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:31:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX4xs-0001i4-SG; Fri, 08 May 2020 15:31:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX4xr-0001hy-9O
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:31:03 +0000
X-Inumbo-ID: ec95f682-9140-11ea-9887-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec95f682-9140-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 15:31:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588951862;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=5+MGsT7EyqtQLeICh9rFqTjuvOdRffCurchcqUClatM=;
 b=SpzoObUFHmGI9TOl2R32lkSmYSCMZo6u6qmutbiuxGD9KHDt4Siz7tHj
 Zij2csP21QlR7oRxx6lZXKbciQuu1OLgCTTYjk7jIUb8L1AKjZtGum69t
 K7Xr5fpmpz74ISfacng6wAQLVCyQKfo17i1duu9W3deiAEawCrsWY53kd g=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: GpVAFiZb+9fWBIHW5kac0SEi9E6bmNffs3++sIDrqvpFinArVS4ufgdGtyQ2GiHYx3WiE7/b5Y
 VwXYGXgRY3heO6s0Av0tL9GNvmnQ83Lo9LDsWIob4GJbbu9ATvcbVjvzagfa4yEQI2clN3Dm6Y
 eTidIsvPEIm8NOIemLbroYfmmhY0wie42bJMPAyQTRj6OTs69aruL+W2GkYxJQP/zRMN6I3JGP
 p7H1ralItCjcHnsspQHm42+OgIbsTdiB+6xP3hiwzTSeORW4Mcjk0Jj+zeSx9lIzXTKJJNdGR9
 oDw=
X-SBRS: 2.7
X-MesageID: 17453940
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17453940"
Subject: Re: [PATCH] tools/libxc: Reduce feature handling complexity in
 xc_cpuid_apply_policy()
To: Xen-devel <xen-devel@lists.xenproject.org>
References: <20200303182326.16739-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f2d8a8cc-a949-22f0-0e26-0df2c7d5889f@citrix.com>
Date: Fri, 8 May 2020 16:30:58 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200303182326.16739-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Tools ping?

~Andrew

On 03/03/2020 18:23, Andrew Cooper wrote:
> xc_cpuid_apply_policy() is gaining extra parameters to untangle CPUID
> complexity in Xen.  While an improvement in general, it does have the
> unfortunate side effect of duplicating some settings across multiple
> parameters.
>
> Rearrange the logic to only consider 'pae' if no explicit featureset is
> provided.  This reduces the complexity for callers who have already provided a
> pae setting in the featureset.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Ian Jackson <Ian.Jackson@citrix.com>
> ---
>  tools/libxc/include/xenctrl.h | 6 ++++++
>  tools/libxc/xc_cpuid_x86.c    | 7 +++++--
>  2 files changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
> index fc6e57a1a0..8d13a7e20b 100644
> --- a/tools/libxc/include/xenctrl.h
> +++ b/tools/libxc/include/xenctrl.h
> @@ -1798,6 +1798,12 @@ int xc_cpuid_set(xc_interface *xch,
>                   const unsigned int *input,
>                   const char **config,
>                   char **config_transformed);
> +/*
> + * Make adjustments to the CPUID settings for a domain.
> + *
> + * Either pass a full new @featureset (and @nr_features), or adjust individual
> + * features (@pae).
> + */
>  int xc_cpuid_apply_policy(xc_interface *xch,
>                            uint32_t domid,
>                            const uint32_t *featureset,
> diff --git a/tools/libxc/xc_cpuid_x86.c b/tools/libxc/xc_cpuid_x86.c
> index 5ced6d18b9..f045b03223 100644
> --- a/tools/libxc/xc_cpuid_x86.c
> +++ b/tools/libxc/xc_cpuid_x86.c
> @@ -532,6 +532,11 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid,
>  
>          cpuid_featureset_to_policy(feat, p);
>      }
> +    else
> +    {
> +        if ( di.hvm )
> +            p->basic.pae = pae;
> +    }
>  
>      if ( !di.hvm )
>      {
> @@ -615,8 +620,6 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid,
>              break;
>          }
>  
> -        p->basic.pae = pae;
> -
>          /*
>           * These settings are necessary to cause earlier HVM_PARAM_NESTEDHVM /
>           * XEN_DOMCTL_disable_migrate settings to be reflected correctly in



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51A-0001sR-Bv; Fri, 08 May 2020 15:34:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX519-0001sG-D5
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:27 +0000
X-Inumbo-ID: 660d27a6-9141-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 660d27a6-9141-11ea-b9cf-bc764e2007e4;
 Fri, 08 May 2020 15:34:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D2E17AE64;
 Fri,  8 May 2020 15:34:27 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 01/12] xen/vmx: let opt_ept_ad always reflect the current
 setting
Date: Fri,  8 May 2020 17:34:10 +0200
Message-Id: <20200508153421.24525-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In case opt_ept_ad has not been set explicitly by the user via command
line or runtime parameter, it is treated as "no" on Avoton CPUs.

Change that handling by explicitly setting opt_ept_ad to 0 for this CPU
type if no user value has been set.

By moving this into the (renamed) boot-time initialization function of
vmcs.c, _vmx_cpu_up() can be made static.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 22 +++++++++++++++-------
 xen/arch/x86/hvm/vmx/vmx.c         |  4 +---
 xen/include/asm-x86/hvm/vmx/vmcs.h |  3 +--
 3 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 4c23645454..221af9737a 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -315,10 +315,6 @@ static int vmx_init_vmcs_config(void)
 
         if ( !opt_ept_ad )
             _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
-        else if ( /* Work around Erratum AVR41 on Avoton processors. */
-                  boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4d &&
-                  opt_ept_ad < 0 )
-            _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
 
         /*
          * Additional sanity checking before using EPT:
@@ -652,7 +648,7 @@ void vmx_cpu_dead(unsigned int cpu)
     vmx_pi_desc_fixup(cpu);
 }
 
-int _vmx_cpu_up(bool bsp)
+static int _vmx_cpu_up(bool bsp)
 {
     u32 eax, edx;
     int rc, bios_locked, cpu = smp_processor_id();
@@ -2108,9 +2104,21 @@ static void vmcs_dump(unsigned char ch)
     printk("**************************************\n");
 }
 
-void __init setup_vmcs_dump(void)
+int __init vmx_vmcs_init(void)
 {
-    register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
+    int ret;
+
+    if ( opt_ept_ad < 0 )
+        /* Work around Erratum AVR41 on Avoton processors. */
+        opt_ept_ad = !(boot_cpu_data.x86 == 6 &&
+                       boot_cpu_data.x86_model == 0x4d);
+
+    ret = _vmx_cpu_up(true);
+
+    if ( !ret )
+        register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
+
+    return ret;
 }
 
 static void __init __maybe_unused build_assertions(void)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 6efa80e422..11a4dd94cf 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2482,7 +2482,7 @@ const struct hvm_function_table * __init start_vmx(void)
 {
     set_in_cr4(X86_CR4_VMXE);
 
-    if ( _vmx_cpu_up(true) )
+    if ( vmx_vmcs_init() )
     {
         printk("VMX: failed to initialise.\n");
         return NULL;
@@ -2553,8 +2553,6 @@ const struct hvm_function_table * __init start_vmx(void)
         vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
     }
 
-    setup_vmcs_dump();
-
     lbr_tsx_fixup_check();
     bdf93_fixup_check();
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 95c1dea7b8..906810592f 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -21,11 +21,10 @@
 #include <xen/mm.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
-extern void setup_vmcs_dump(void);
+extern int vmx_vmcs_init(void);
 extern int  vmx_cpu_up_prepare(unsigned int cpu);
 extern void vmx_cpu_dead(unsigned int cpu);
 extern int  vmx_cpu_up(void);
-extern int  _vmx_cpu_up(bool bsp);
 extern void vmx_cpu_down(void);
 
 struct vmcs_struct {
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51A-0001sX-Ja; Fri, 08 May 2020 15:34:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX519-0001sH-IE
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:27 +0000
X-Inumbo-ID: 66285f1c-9141-11ea-a01d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66285f1c-9141-11ea-a01d-12813bfff9fa;
 Fri, 08 May 2020 15:34:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D96F3AE6F;
 Fri,  8 May 2020 15:34:27 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 02/12] xen: add a generic way to include binary files as
 variables
Date: Fri,  8 May 2020 17:34:11 +0200
Message-Id: <20200508153421.24525-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a new script, xen/tools/binfile, for including a binary file at
build time, making it usable in the hypervisor via a pointer and a
size variable.

Make use of that generic tool in xsm.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wl@xen.org>
---
V3:
- new patch

V4:
- add alignment parameter (Jan Beulich)
- use .Lend instead of . (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                   |  1 +
 xen/tools/binfile            | 41 ++++++++++++++++++++++++++++++++++++
 xen/xsm/flask/Makefile       |  5 ++++-
 xen/xsm/flask/flask-policy.S | 16 --------------
 4 files changed, 46 insertions(+), 17 deletions(-)
 create mode 100755 xen/tools/binfile
 delete mode 100644 xen/xsm/flask/flask-policy.S

diff --git a/.gitignore b/.gitignore
index 9c8a31f896..4059829f6e 100644
--- a/.gitignore
+++ b/.gitignore
@@ -314,6 +314,7 @@ xen/test/livepatch/*.livepatch
 xen/tools/kconfig/.tmp_gtkcheck
 xen/tools/kconfig/.tmp_qtcheck
 xen/tools/symbols
+xen/xsm/flask/flask-policy.S
 xen/xsm/flask/include/av_perm_to_string.h
 xen/xsm/flask/include/av_permissions.h
 xen/xsm/flask/include/class_to_string.h
diff --git a/xen/tools/binfile b/xen/tools/binfile
new file mode 100755
index 0000000000..7bb35a5178
--- /dev/null
+++ b/xen/tools/binfile
@@ -0,0 +1,41 @@
+#!/bin/sh
+# usage: binfile [-i] [-a <align>] <target-src.S> <binary-file> <varname>
+# -a <align>  align data at 2^<align> boundary (default: byte alignment)
+# -i          add to .init.rodata (default: .rodata) section
+
+section=""
+align=0
+
+OPTIND=1
+while getopts "ia:" opt; do
+    case "$opt" in
+    i)
+        section=".init"
+        ;;
+    a)
+        align=$OPTARG
+        ;;
+    esac
+done
+
+target=$1
+binsource=$2
+varname=$3
+
+cat <<EOF >$target
+#include <asm/asm_defns.h>
+
+        .section $section.rodata, "a", %progbits
+
+        .p2align $align
+        .global $varname
+$varname:
+        .incbin "$binsource"
+.Lend:
+
+        .type $varname, %object
+        .size $varname, .Lend - $varname
+
+        .global ${varname}_size
+        ASM_INT(${varname}_size, .Lend - $varname)
+EOF
diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index eebfceecc5..d8486fc7e4 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -39,6 +39,9 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
 obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
 flask-policy.o: policy.bin
 
+flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
+	$(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
+
 FLASK_BUILD_DIR := $(CURDIR)
 POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
 
@@ -48,4 +51,4 @@ policy.bin: FORCE
 
 .PHONY: clean
 clean::
-	rm -f $(ALL_H_FILES) *.o $(DEPS_RM) policy.* $(POLICY_SRC)
+	rm -f $(ALL_H_FILES) *.o $(DEPS_RM) policy.* $(POLICY_SRC) flask-policy.S
diff --git a/xen/xsm/flask/flask-policy.S b/xen/xsm/flask/flask-policy.S
deleted file mode 100644
index d38aa39964..0000000000
--- a/xen/xsm/flask/flask-policy.S
+++ /dev/null
@@ -1,16 +0,0 @@
-#include <asm/asm_defns.h>
-
-        .section .init.rodata, "a", %progbits
-
-/* const unsigned char xsm_flask_init_policy[] __initconst */
-        .global xsm_flask_init_policy
-xsm_flask_init_policy:
-        .incbin "policy.bin"
-.Lend:
-
-        .type xsm_flask_init_policy, %object
-        .size xsm_flask_init_policy, . - xsm_flask_init_policy
-
-/* const unsigned int __initconst xsm_flask_init_policy_size */
-        .global xsm_flask_init_policy_size
-        ASM_INT(xsm_flask_init_policy_size, .Lend - xsm_flask_init_policy)
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51F-0001tb-Qx; Fri, 08 May 2020 15:34:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51E-0001ss-AT
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:32 +0000
X-Inumbo-ID: 663ca1de-9141-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 663ca1de-9141-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 15:34:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2B838AE79;
 Fri,  8 May 2020 15:34:28 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 03/12] docs: add feature document for Xen hypervisor
 sysfs-like support
Date: Fri,  8 May 2020 17:34:12 +0200
Message-Id: <20200508153421.24525-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.

Initially there will be only basic support: entries can be added from
the hypervisor itself only, and there is a simple hypercall interface
for reading the data.

Add a feature document to serve as the basis for a discussion of the
desired functionality and the entries to add.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V1:
- remove the "--" prefixes of the sub-commands of the user tool
  (Jan Beulich)
- rename xenfs to xenhypfs (Jan Beulich)
- add "tree" and "write" options to user tool

V2:
- move example tree to the paths description (Ian Jackson)
- specify allowed characters for keys and values (Ian Jackson)

V3:
- correct introduction (writable entries)

V4:
- add list specification
- add entry example (Julien Grall)
- correct date and Xen version (Julien Grall)
- add ARM64 as possible architecture (Julien Grall)
- add version description to the feature doc (Jan Beulich)

V8:
- clarify syntax used in hypfs-paths.pandoc (George Dunlap)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/features/hypervisorfs.pandoc |  92 +++++++++++++++++++++++++
 docs/misc/hypfs-paths.pandoc      | 107 ++++++++++++++++++++++++++++++
 2 files changed, 199 insertions(+)
 create mode 100644 docs/features/hypervisorfs.pandoc
 create mode 100644 docs/misc/hypfs-paths.pandoc

diff --git a/docs/features/hypervisorfs.pandoc b/docs/features/hypervisorfs.pandoc
new file mode 100644
index 0000000000..a0a0ead057
--- /dev/null
+++ b/docs/features/hypervisorfs.pandoc
@@ -0,0 +1,92 @@
+% Hypervisor FS
+% Revision 1
+
+\clearpage
+
+# Basics
+---------------- ---------------------
+         Status: **Supported**
+
+  Architectures: all
+
+     Components: Hypervisor, toolstack
+---------------- ---------------------
+
+# Overview
+
+The Hypervisor FS is a hierarchical name-value store for reporting
+information to guests, especially dom0. It is similar to the Linux
+kernel's sysfs. Entries and directories are created by the hypervisor,
+while the toolstack is able to use a hypercall to query the entry
+values or (if allowed by the hypervisor) to modify them.
+
+# User details
+
+With:
+
+    xenhypfs ls <path>
+
+the user can list the entries of a specific path of the FS. Using:
+
+    xenhypfs cat <path>
+
+the content of an entry can be retrieved. Using:
+
+    xenhypfs write <path> <string>
+
+a writable entry can be modified. With:
+
+    xenhypfs tree
+
+the complete Hypervisor FS entry tree can be printed.
+
+The FS paths are documented in `docs/misc/hypfs-paths.pandoc`.
+
+# Technical details
+
+Access to the hypervisor filesystem is done via the new stable hypercall
+__HYPERVISOR_filesystem_op. This hypercall supports a sub-command
+XEN_HYPFS_OP_get_version which will return the highest version of the
+interface supported by the hypervisor. Additions to the interface need
+to bump the interface version. The hypervisor is required to support the
+previous interface versions, too (this implies that additions will always
+require new sub-commands in order to allow the hypervisor to decide which
+version of the interface to use).
+
+* hypercall interface specification
+    * `xen/include/public/hypfs.h`
+* hypervisor internal files
+    * `xen/include/xen/hypfs.h`
+    * `xen/common/hypfs.c`
+* `libxenhypfs`
+    * `tools/libs/libxenhypfs/*`
+* `xenhypfs`
+    * `tools/misc/xenhypfs.c`
+* path documentation
+    * `docs/misc/hypfs-paths.pandoc`
+
+# Testing
+
+Any new parameters or hardware mitigations should be verified to show up
+correctly in the filesystem.
+
+# Areas for improvement
+
+* More detailed access rights
+* Entries per domain and/or per cpupool
+
+# Known issues
+
+* None
+
+# References
+
+* None
+
+# History
+
+------------------------------------------------------------------------
+Date       Revision Version  Notes
+---------- -------- -------- -------------------------------------------
+2020-01-23 1        Xen 4.14 Document written
+---------- -------- -------- -------------------------------------------
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
new file mode 100644
index 0000000000..39539fa1b5
--- /dev/null
+++ b/docs/misc/hypfs-paths.pandoc
@@ -0,0 +1,107 @@
+# Xenhypfs Paths
+
+This document attempts to define all the paths which are available
+in the Xen hypervisor file system (hypfs).
+
+The hypervisor file system can be accessed via the xenhypfs tool.
+
+## Notation
+
+The hypervisor file system is similar to the Linux kernel's sysfs.
+In this document directories are always specified with a trailing "/".
+
+The following notation conventions apply:
+
+        DIRECTORY/
+
+        PATH = VALUES [TAGS]
+
+The first syntax defines a directory. It normally contains related
+entries and the general scope of the directory is described.
+
+The second syntax defines a file entry containing values which are
+either set by the hypervisor or, if the file is writable, can be set
+by the user.
+
+PATH can contain simple regex constructs following the Perl compatible
+regexp syntax described in pcre(3) or perlre(1).
+
+A hypervisor file system entry name can be any 0-delimited byte string
+not containing any '/' character. The names "." and ".." are reserved
+for file system internal use.
+
+VALUES are strings and can take the following forms (note that this represents
+only the syntax used in this document):
+
+* STRING -- an arbitrary 0-delimited byte string.
+* INTEGER -- An integer, in decimal representation unless otherwise
+  noted.
+* "a literal string" -- literal strings are contained within quotes.
+* (VALUE | VALUE | ... ) -- a set of alternatives. Alternatives are
+  separated by a "|" and all the alternatives are enclosed in "(" and
+  ")".
+* {VALUE, VALUE, ... } -- a list of possible values separated by "," and
+  enclosed in "{" and "}".
+
+Additional TAGS may follow as a comma separated set of the following
+tags enclosed in square brackets.
+
+* w -- Path is writable by the user. This capability is usually
+  limited to the control domain (e.g. dom0).
+* ARM | ARM32 | ARM64 | X86: the path is available for the respective
+  architecture only.
+* PV -- Path is valid for PV capable hypervisors only.
+* HVM -- Path is valid for HVM capable hypervisors only.
+* CONFIG_* -- Path is valid only in case the hypervisor was built with
+  the respective config option.
+
+So an entry could look like this:
+
+    /cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]
+
+Possible values would be "No" or a list of "dom0", "domU", and "PCID-on" with
+the list elements separated by spaces, e.g. "dom0 PCID-on".
+The entry would be writable and it would exist on X86 only and only if the
+hypervisor is configured to support PV guests.
+
+## Example
+
+A populated Xen hypervisor file system might look like the following example:
+
+    /
+        buildinfo/           directory containing build-time data
+            config           contents of .config file used to build Xen
+        cpu-bugs/            x86: directory of cpu bug information
+            l1tf             "Vulnerable" or "Not vulnerable"
+            mds              "Vulnerable" or "Not vulnerable"
+            meltdown         "Vulnerable" or "Not vulnerable"
+            spec-store-bypass "Vulnerable" or "Not vulnerable"
+            spectre-v1       "Vulnerable" or "Not vulnerable"
+            spectre-v2       "Vulnerable" or "Not vulnerable"
+            mitigations/     directory of mitigation settings
+                bti-thunk    "N/A", "RETPOLINE", "LFENCE" or "JMP"
+                spec-ctrl    "No", "IBRS+" or "IBRS-"
+                ibpb         "No" or "Yes"
+                l1d-flush    "No" or "Yes"
+                md-clear     "No" or "VERW"
+                l1tf-barrier "No" or "Yes"
+            active-hvm/      directory for mitigations active in HVM domains
+                msr-spec-ctrl "No" or "Yes"
+                rsb          "No" or "Yes"
+                eager-fpu    "No" or "Yes"
+                md-clear     "No" or "Yes"
+            active-pv/       directory for mitigations active in PV domains
+                msr-spec-ctrl "No" or "Yes"
+                rsb          "No" or "Yes"
+                eager-fpu    "No" or "Yes"
+                md-clear     "No" or "Yes"
+                xpti         "No" or list of "dom0", "domU", "PCID-on"
+                l1tf-shadow  "No" or list of "dom0", "domU"
+        params/              directory with hypervisor parameter values
+                             (boot/runtime parameters)
+
+## General Paths
+
+#### /
+
+The root of the hypervisor file system.
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51G-0001tw-3J; Fri, 08 May 2020 15:34:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51E-0001t1-Dy
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:32 +0000
X-Inumbo-ID: 660c5d3b-9141-11ea-a01d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 660c5d3b-9141-11ea-a01d-12813bfff9fa;
 Fri, 08 May 2020 15:34:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 457DBAE7D;
 Fri,  8 May 2020 15:34:28 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
Date: Fri,  8 May 2020 17:34:13 +0200
Message-Id: <20200508153421.24525-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the infrastructure for the hypervisor filesystem.

This includes the hypercall interface and the base functions for
entry creation, deletion and modification.

In order to avoid repeating the same pattern in each place where adding a
new node should BUG_ON() failure, the helpers for adding a node
(hypfs_add_dir() and hypfs_add_leaf()) take a nofault parameter which
causes a BUG() in case of failure.
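As an aside, the nofault convention can be sketched in isolation like this
(add_node() and the injected error are hypothetical stand-ins for
illustration, not code from this patch):

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for Xen's BUG_ON(): crash on a condition that must not occur. */
#define BUG_ON(cond) assert(!(cond))

/* Stand-in for the real add_entry(); 'fail' injects an error for the demo. */
static int add_node(bool fail)
{
    return fail ? -EEXIST : 0;
}

/*
 * The pattern used by hypfs_add_dir()/hypfs_add_leaf(): callers which
 * cannot sensibly handle failure (e.g. boot-time setup of static nodes)
 * pass nofault = true and get a BUG() instead of an error code.
 */
static int hypfs_add_node(bool fail, bool nofault)
{
    int ret = add_node(fail);

    BUG_ON(nofault && ret);
    return ret;
}
```

A boot-time caller would pass nofault = true and never check the return
value; runtime callers pass false and handle the error themselves.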

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V1:
- rename files from filesystem.* to hypfs.*
- add dummy write entry support
- rename hypercall filesystem_op to hypfs_op
- add support for unsigned integer entries

V2:
- test new entry name to be valid

V3:
- major rework, especially by supporting binary contents of entries
- addressed all comments

V4:
- sort #includes alphabetically (Wei Liu)
- add public interface structures to xlat.lst (Jan Beulich)
- let DIRENTRY_SIZE() add 1 for trailing nul byte (Jan Beulich)
- remove hypfs_add_entry() (Jan Beulich)
- len -> ulen (Jan Beulich)
- switch sequence of tests in hypfs_get_entry_rel() (Jan Beulich)
- add const qualifier (Jan Beulich)
- return -ENOBUFS if only direntry but no entry contents are returned
  (Jan Beulich)
- use xmalloc() instead of xzalloc() (Jan Beulich)
- better error handling in hypfs_write_leaf() (Jan Beulich)
- return -EOPNOTSUPP for unknown sub-command (Jan Beulich)
- use plain integers for enum-like constants in public interface
  (Jan Beulich)
- rename XEN_HYPFS_OP_read_contents to XEN_HYPFS_OP_read (Jan Beulich)
- add some comments in include/public/hypfs.h (Jan Beulich)
- use const_char for user parameter path (Jan Beulich)
- add helpers for XEN_HYPFS_TYPE_BOOL and XEN_HYPFS_TYPE_INT entry
  definitions (Jan Beulich)
- make statically defined entries __read_mostly (Jan Beulich)

V5:
- switch to xsm for privilege check

V6:
- use memchr() for testing correct string length (Jan Beulich)
- reject writing to non-string leafs with wrong length (Jan Beulich)
- only support bools of natural size (Julien Grall)
- adjust blank padding in header (Jan Beulich)
- adjust comments in public header (Jan Beulich)
- rename hypfs_string_set() and add comment (Jan Beulich)
- add common HYPFS_INIT helper macro (Jan Beulich)
- really check structures added to xlat.lst (Jan Beulich)
- add missing xsm parts (Jan Beulich)

V7:
- simplify compat check (Jan Beulich)
- add max write size (Jan Beulich)
- better length testing of written string (Jan Beulich)

V8:
- add Kconfig item CONFIG_HYPFS (Jan Beulich)
- init write pointer in HYPFS_*_INIT_WRITABLE() (Jan Beulich)
- expand write ASSERT()s (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/flask/policy/modules/dom0.te  |   2 +-
 xen/arch/arm/traps.c                |   3 +
 xen/arch/x86/hvm/hypercall.c        |   3 +
 xen/arch/x86/hypercall.c            |   3 +
 xen/arch/x86/pv/hypercall.c         |   3 +
 xen/common/Kconfig                  |  11 +
 xen/common/Makefile                 |   1 +
 xen/common/hypfs.c                  | 354 ++++++++++++++++++++++++++++
 xen/include/Makefile                |   1 +
 xen/include/public/hypfs.h          | 127 ++++++++++
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hypercall.h         |  10 +
 xen/include/xen/hypfs.h             | 122 ++++++++++
 xen/include/xlat.lst                |   2 +
 xen/include/xsm/dummy.h             |   6 +
 xen/include/xsm/xsm.h               |   6 +
 xen/xsm/dummy.c                     |   1 +
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   2 +
 19 files changed, 663 insertions(+), 1 deletion(-)
 create mode 100644 xen/common/hypfs.c
 create mode 100644 xen/include/public/hypfs.h
 create mode 100644 xen/include/xen/hypfs.h

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 272f6a4f75..20925e38a2 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -11,7 +11,7 @@ allow dom0_t xen_t:xen {
 	mtrr_del mtrr_read microcode physinfo quirk writeconsole readapic
 	writeapic privprofile nonprivprofile kexec firmware sleep frequency
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op
-	getscheduler setscheduler
+	getscheduler setscheduler hypfs_op
 };
 allow dom0_t xen_t:xen2 {
 	resource_op psr_cmt_op psr_alloc pmu_ctrl get_symbol
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 30c4c1830b..8f40d0e0b6 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1381,6 +1381,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
 #ifdef CONFIG_ARGO
     HYPERCALL(argo_op, 5),
 #endif
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op, 5),
+#endif
 };
 
 #ifndef NDEBUG
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index c41c2179c9..b6ccaf4457 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -150,6 +150,9 @@ static const hypercall_table_t hvm_hypercall_table[] = {
 #endif
     HYPERCALL(xenpmu_op),
     COMPAT_CALL(dm_op),
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op),
+#endif
     HYPERCALL(arch_1)
 };
 
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 7f299d45c6..dd00983005 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -72,6 +72,9 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #ifdef CONFIG_HVM
     ARGS(hvm_op, 2),
     ARGS(dm_op, 3),
+#endif
+#ifdef CONFIG_HYPFS
+    ARGS(hypfs_op, 5),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index b0d1d0ed77..53a52360fa 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -84,6 +84,9 @@ const hypercall_table_t pv_hypercall_table[] = {
 #ifdef CONFIG_HVM
     HYPERCALL(hvm_op),
     COMPAT_CALL(dm_op),
+#endif
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index a6914fcae9..2515e053c8 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -116,6 +116,17 @@ config SPECULATIVE_HARDEN_BRANCH
 
 endmenu
 
+config HYPFS
+	bool "Hypervisor file system support"
+	default y
+	---help---
+	  Support the Xen hypervisor file system. This file system is used
+	  to present various hypervisor-internal data to dom0 and, in some
+	  cases, to allow modifying settings. Disabling this support might
+	  result in some features not being available.
+
+	  If unsure, say Y.
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8cde65370..98b7904dcd 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -10,6 +10,7 @@ obj-y += domain.o
 obj-y += event_2l.o
 obj-y += event_channel.o
 obj-y += event_fifo.o
+obj-$(CONFIG_HYPFS) += hypfs.o
 obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
 obj-$(CONFIG_GRANT_TABLE) += grant_table.o
 obj-y += guestcopy.o
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
new file mode 100644
index 0000000000..a873d507d1
--- /dev/null
+++ b/xen/common/hypfs.c
@@ -0,0 +1,354 @@
+/******************************************************************************
+ *
+ * hypfs.c
+ *
+ * Simple sysfs-like file system for the hypervisor.
+ */
+
+#include <xen/err.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/hypfs.h>
+#include <xen/lib.h>
+#include <xen/rwlock.h>
+#include <public/hypfs.h>
+
+#ifdef CONFIG_COMPAT
+#include <compat/hypfs.h>
+CHECK_hypfs_dirlistentry;
+#endif
+
+#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
+#define DIRENTRY_SIZE(name_len) \
+    (DIRENTRY_NAME_OFF +        \
+     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
+
+static DEFINE_RWLOCK(hypfs_lock);
+
+HYPFS_DIR_INIT(hypfs_root, "");
+
+static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
+{
+    int ret = -ENOENT;
+    struct hypfs_entry *e;
+
+    write_lock(&hypfs_lock);
+
+    list_for_each_entry ( e, &parent->dirlist, list )
+    {
+        int cmp = strcmp(e->name, new->name);
+
+        if ( cmp > 0 )
+        {
+            ret = 0;
+            list_add_tail(&new->list, &e->list);
+            break;
+        }
+        if ( cmp == 0 )
+        {
+            ret = -EEXIST;
+            break;
+        }
+    }
+
+    if ( ret == -ENOENT )
+    {
+        ret = 0;
+        list_add_tail(&new->list, &parent->dirlist);
+    }
+
+    if ( !ret )
+    {
+        unsigned int sz = strlen(new->name);
+
+        parent->e.size += DIRENTRY_SIZE(sz);
+    }
+
+    write_unlock(&hypfs_lock);
+
+    return ret;
+}
+
+int hypfs_add_dir(struct hypfs_entry_dir *parent,
+                  struct hypfs_entry_dir *dir, bool nofault)
+{
+    int ret;
+
+    ret = add_entry(parent, &dir->e);
+    BUG_ON(nofault && ret);
+
+    return ret;
+}
+
+int hypfs_add_leaf(struct hypfs_entry_dir *parent,
+                   struct hypfs_entry_leaf *leaf, bool nofault)
+{
+    int ret;
+
+    if ( !leaf->content )
+        ret = -EINVAL;
+    else
+        ret = add_entry(parent, &leaf->e);
+    BUG_ON(nofault && ret);
+
+    return ret;
+}
+
+static int hypfs_get_path_user(char *buf,
+                               XEN_GUEST_HANDLE_PARAM(const_char) uaddr,
+                               unsigned long ulen)
+{
+    if ( ulen > XEN_HYPFS_MAX_PATHLEN )
+        return -EINVAL;
+
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(buf, 0, ulen) != buf + ulen - 1 )
+        return -EINVAL;
+
+    return 0;
+}
+
+static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
+                                               const char *path)
+{
+    const char *end;
+    struct hypfs_entry *entry;
+    unsigned int name_len;
+
+    if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
+        return NULL;
+
+    if ( !*path )
+        return &dir->e;
+
+    end = strchr(path, '/');
+    if ( !end )
+        end = strchr(path, '\0');
+    name_len = end - path;
+
+    list_for_each_entry ( entry, &dir->dirlist, list )
+    {
+        int cmp = strncmp(path, entry->name, name_len);
+        struct hypfs_entry_dir *d = container_of(entry,
+                                                 struct hypfs_entry_dir, e);
+
+        if ( cmp < 0 )
+            return NULL;
+        if ( !cmp && strlen(entry->name) == name_len )
+            return *end ? hypfs_get_entry_rel(d, end + 1) : entry;
+    }
+
+    return NULL;
+}
+
+struct hypfs_entry *hypfs_get_entry(const char *path)
+{
+    if ( path[0] != '/' )
+        return NULL;
+
+    return hypfs_get_entry_rel(&hypfs_root, path + 1);
+}
+
+int hypfs_read_dir(const struct hypfs_entry *entry,
+                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_entry_dir *d;
+    const struct hypfs_entry *e;
+    unsigned int size = entry->size;
+
+    d = container_of(entry, const struct hypfs_entry_dir, e);
+
+    list_for_each_entry ( e, &d->dirlist, list )
+    {
+        struct xen_hypfs_dirlistentry direntry;
+        unsigned int e_namelen = strlen(e->name);
+        unsigned int e_len = DIRENTRY_SIZE(e_namelen);
+
+        direntry.e.pad = 0;
+        direntry.e.type = e->type;
+        direntry.e.encoding = e->encoding;
+        direntry.e.content_len = e->size;
+        direntry.e.max_write_len = e->max_size;
+        direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
+        if ( copy_to_guest(uaddr, &direntry, 1) )
+            return -EFAULT;
+
+        if ( copy_to_guest_offset(uaddr, DIRENTRY_NAME_OFF,
+                                  e->name, e_namelen + 1) )
+            return -EFAULT;
+
+        guest_handle_add_offset(uaddr, e_len);
+
+        ASSERT(e_len <= size);
+        size -= e_len;
+    }
+
+    return 0;
+}
+
+int hypfs_read_leaf(const struct hypfs_entry *entry,
+                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_entry_leaf *l;
+
+    l = container_of(entry, const struct hypfs_entry_leaf, e);
+
+    return copy_to_guest(uaddr, l->content, entry->size) ? -EFAULT : 0;
+}
+
+static int hypfs_read(const struct hypfs_entry *entry,
+                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    struct xen_hypfs_direntry e;
+    long ret = -EINVAL;
+
+    if ( ulen < sizeof(e) )
+        goto out;
+
+    e.pad = 0;
+    e.type = entry->type;
+    e.encoding = entry->encoding;
+    e.content_len = entry->size;
+    e.max_write_len = entry->max_size;
+
+    ret = -EFAULT;
+    if ( copy_to_guest(uaddr, &e, 1) )
+        goto out;
+
+    ret = -ENOBUFS;
+    if ( ulen < entry->size + sizeof(e) )
+        goto out;
+
+    guest_handle_add_offset(uaddr, sizeof(e));
+
+    ret = entry->read(entry, uaddr);
+
+ out:
+    return ret;
+}
+
+int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    char *buf;
+    int ret;
+
+    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
+         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
+        return -EDOM;
+
+    buf = xmalloc_array(char, ulen);
+    if ( !buf )
+        return -ENOMEM;
+
+    ret = -EFAULT;
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        goto out;
+
+    ret = -EINVAL;
+    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
+         memchr(buf, 0, ulen) != (buf + ulen - 1) )
+        goto out;
+
+    ret = 0;
+    memcpy(leaf->write_ptr, buf, ulen);
+    leaf->e.size = ulen;
+
+ out:
+    xfree(buf);
+    return ret;
+}
+
+int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    bool buf;
+
+    ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL &&
+           leaf->e.size == sizeof(bool) &&
+           leaf->e.max_size == sizeof(bool));
+
+    if ( ulen != leaf->e.max_size )
+        return -EDOM;
+
+    if ( copy_from_guest(&buf, uaddr, ulen) )
+        return -EFAULT;
+
+    *(bool *)leaf->write_ptr = buf;
+
+    return 0;
+}
+
+static int hypfs_write(struct hypfs_entry *entry,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    struct hypfs_entry_leaf *l;
+
+    if ( !entry->write )
+        return -EACCES;
+
+    ASSERT(entry->max_size);
+
+    if ( ulen > entry->max_size )
+        return -ENOSPC;
+
+    l = container_of(entry, struct hypfs_entry_leaf, e);
+
+    return entry->write(l, uaddr, ulen);
+}
+
+long do_hypfs_op(unsigned int cmd,
+                 XEN_GUEST_HANDLE_PARAM(const_char) arg1, unsigned long arg2,
+                 XEN_GUEST_HANDLE_PARAM(void) arg3, unsigned long arg4)
+{
+    int ret;
+    struct hypfs_entry *entry;
+    static char path[XEN_HYPFS_MAX_PATHLEN];
+
+    if ( xsm_hypfs_op(XSM_PRIV) )
+        return -EPERM;
+
+    if ( cmd == XEN_HYPFS_OP_get_version )
+        return XEN_HYPFS_VERSION;
+
+    if ( cmd == XEN_HYPFS_OP_write_contents )
+        write_lock(&hypfs_lock);
+    else
+        read_lock(&hypfs_lock);
+
+    ret = hypfs_get_path_user(path, arg1, arg2);
+    if ( ret )
+        goto out;
+
+    entry = hypfs_get_entry(path);
+    if ( !entry )
+    {
+        ret = -ENOENT;
+        goto out;
+    }
+
+    switch ( cmd )
+    {
+    case XEN_HYPFS_OP_read:
+        ret = hypfs_read(entry, arg3, arg4);
+        break;
+
+    case XEN_HYPFS_OP_write_contents:
+        ret = hypfs_write(entry, arg3, arg4);
+        break;
+
+    default:
+        ret = -EOPNOTSUPP;
+        break;
+    }
+
+ out:
+    if ( cmd == XEN_HYPFS_OP_write_contents )
+        write_unlock(&hypfs_lock);
+    else
+        read_unlock(&hypfs_lock);
+
+    return ret;
+}
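The ordering logic of add_entry() above can be modelled stand-alone (a plain
singly linked list instead of Xen's list_head, and no locking; insert_sorted()
is a hypothetical name, not from this patch):

```c
#include <errno.h>
#include <string.h>

/* Simplified directory entry: just a name and a next pointer. */
struct node {
    const char *name;
    struct node *next;
};

/*
 * Insert 'new' so the list stays sorted in strcmp() order, mirroring
 * add_entry(): stop at the first existing name comparing greater, and
 * reject duplicate names with -EEXIST.
 */
static int insert_sorted(struct node **head, struct node *new)
{
    struct node **pp;

    for ( pp = head; *pp; pp = &(*pp)->next )
    {
        int cmp = strcmp((*pp)->name, new->name);

        if ( cmp == 0 )
            return -EEXIST;
        if ( cmp > 0 )
            break;
    }
    new->next = *pp;
    *pp = new;
    return 0;
}
```

Keeping the list sorted at insertion time is what lets hypfs_get_entry_rel()
stop its lookup early as soon as an entry comparing greater is reached.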
diff --git a/xen/include/Makefile b/xen/include/Makefile
index 2a10725d68..089314dc72 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -9,6 +9,7 @@ headers-y := \
     compat/event_channel.h \
     compat/features.h \
     compat/grant_table.h \
+    compat/hypfs.h \
     compat/kexec.h \
     compat/memory.h \
     compat/nmi.h \
diff --git a/xen/include/public/hypfs.h b/xen/include/public/hypfs.h
new file mode 100644
index 0000000000..f1c1b3e935
--- /dev/null
+++ b/xen/include/public/hypfs.h
@@ -0,0 +1,127 @@
+/******************************************************************************
+ * Xen Hypervisor Filesystem
+ *
+ * Copyright (c) 2019, SUSE Software Solutions Germany GmbH
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __XEN_PUBLIC_HYPFS_H__
+#define __XEN_PUBLIC_HYPFS_H__
+
+#include "xen.h"
+
+/*
+ * Definitions for the __HYPERVISOR_hypfs_op hypercall.
+ */
+
+/* Highest version number of the hypfs interface currently defined. */
+#define XEN_HYPFS_VERSION      1
+
+/* Maximum length of a path in the filesystem. */
+#define XEN_HYPFS_MAX_PATHLEN  1024
+
+struct xen_hypfs_direntry {
+    uint8_t type;
+#define XEN_HYPFS_TYPE_DIR     0
+#define XEN_HYPFS_TYPE_BLOB    1
+#define XEN_HYPFS_TYPE_STRING  2
+#define XEN_HYPFS_TYPE_UINT    3
+#define XEN_HYPFS_TYPE_INT     4
+#define XEN_HYPFS_TYPE_BOOL    5
+    uint8_t encoding;
+#define XEN_HYPFS_ENC_PLAIN    0
+#define XEN_HYPFS_ENC_GZIP     1
+    uint16_t pad;              /* Returned as 0. */
+    uint32_t content_len;      /* Current length of data. */
+    uint32_t max_write_len;    /* Max. length for writes (0 if read-only). */
+};
+
+struct xen_hypfs_dirlistentry {
+    struct xen_hypfs_direntry e;
+    /* Offset in bytes to next entry (0 == this is the last entry). */
+    uint16_t off_next;
+    /* Zero terminated entry name, possibly with some padding for alignment. */
+    char name[XEN_FLEX_ARRAY_DIM];
+};
+
+/*
+ * Hypercall operations.
+ */
+
+/*
+ * XEN_HYPFS_OP_get_version
+ *
+ * Read highest interface version supported by the hypervisor.
+ *
+ * Possible return values:
+ * >0: highest supported interface version
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_get_version     0
+
+/*
+ * XEN_HYPFS_OP_read
+ *
+ * Read a filesystem entry.
+ *
+ * Returns the direntry and contents of an entry in the buffer supplied by the
+ * caller (struct xen_hypfs_direntry with the contents following directly
+ * after it).
+ * The data buffer must be at least the size of the direntry returned. If the
+ * data buffer is not large enough for all the data, -ENOBUFS is returned and
+ * no entry data is copied, but the direntry will contain the size needed for
+ * the returned data.
+ * The format of the contents is according to its entry type and encoding.
+ * The contents of a directory are multiple struct xen_hypfs_dirlistentry
+ * items.
+ *
+ * arg1: XEN_GUEST_HANDLE(path name)
+ * arg2: length of path name (including trailing zero byte)
+ * arg3: XEN_GUEST_HANDLE(data buffer written by hypervisor)
+ * arg4: data buffer size
+ *
+ * Possible return values:
+ * 0: success
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_read              1
+
+/*
+ * XEN_HYPFS_OP_write_contents
+ *
+ * Write contents of a filesystem entry.
+ *
+ * Writes an entry with the contents of a buffer supplied by the caller.
+ * The data type and encoding can't be changed. The size can be changed only
+ * for blobs and strings.
+ *
+ * arg1: XEN_GUEST_HANDLE(path name)
+ * arg2: length of path name (including trailing zero byte)
+ * arg3: XEN_GUEST_HANDLE(content buffer read by hypervisor)
+ * arg4: content buffer size
+ *
+ * Possible return values:
+ * 0: success
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_write_contents    2
+
+#endif /* __XEN_PUBLIC_HYPFS_H__ */
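A stand-alone illustration of the off_next chaining a XEN_HYPFS_OP_read of a
directory produces (locally re-declared, simplified structures with a
fixed-size name field for the demo; not the ABI definitions above):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Simplified records; the real ABI type is struct xen_hypfs_dirlistentry. */
struct direntry {
    uint8_t type, encoding;
    uint16_t pad;
    uint32_t content_len, max_write_len;
};
struct dirlistentry {
    struct direntry e;
    uint16_t off_next;   /* byte offset to the next record, 0 == last */
    char name[10];       /* fixed size here; a flexible array in the ABI */
};

/* Walk a directory listing buffer by following off_next; count records. */
static unsigned int count_entries(const void *buf, size_t size)
{
    const unsigned char *p = buf;
    unsigned int n = 0;

    while ( size >= sizeof(struct dirlistentry) )
    {
        const struct dirlistentry *d = (const void *)p;

        n++;
        if ( !d->off_next )
            break;
        p += d->off_next;
        size -= d->off_next;
    }
    return n;
}
```

The same walk, with each record's name and content appended after it, is what
a dom0 consumer would perform on the buffer filled by hypfs_read_dir().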
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 75b1619d0d..945ef30273 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -130,6 +130,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_argo_op              39
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
+#define __HYPERVISOR_hypfs_op             42
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index d82a293377..655acc7f47 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -150,6 +150,16 @@ do_dm_op(
     unsigned int nr_bufs,
     XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
 
+#ifdef CONFIG_HYPFS
+extern long
+do_hypfs_op(
+    unsigned int cmd,
+    XEN_GUEST_HANDLE_PARAM(const_char) arg1,
+    unsigned long arg2,
+    XEN_GUEST_HANDLE_PARAM(void) arg3,
+    unsigned long arg4);
+#endif
+
 #ifdef CONFIG_COMPAT
 
 extern int
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
new file mode 100644
index 0000000000..279bcf3b4c
--- /dev/null
+++ b/xen/include/xen/hypfs.h
@@ -0,0 +1,122 @@
+#ifndef __XEN_HYPFS_H__
+#define __XEN_HYPFS_H__
+
+#ifdef CONFIG_HYPFS
+#include <xen/list.h>
+#include <xen/string.h>
+#include <public/hypfs.h>
+
+struct hypfs_entry_leaf;
+
+struct hypfs_entry {
+    unsigned short type;
+    unsigned short encoding;
+    unsigned int size;
+    unsigned int max_size;
+    const char *name;
+    struct list_head list;
+    int (*read)(const struct hypfs_entry *entry,
+                XEN_GUEST_HANDLE_PARAM(void) uaddr);
+    int (*write)(struct hypfs_entry_leaf *leaf,
+                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
+};
+
+struct hypfs_entry_leaf {
+    struct hypfs_entry e;
+    union {
+        const void *content;
+        void *write_ptr;
+    };
+};
+
+struct hypfs_entry_dir {
+    struct hypfs_entry e;
+    struct list_head dirlist;
+};
+
+#define HYPFS_DIR_INIT(var, nam)                  \
+    struct hypfs_entry_dir __read_mostly var = {  \
+        .e.type = XEN_HYPFS_TYPE_DIR,             \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
+        .e.name = nam,                            \
+        .e.size = 0,                              \
+        .e.max_size = 0,                          \
+        .e.list = LIST_HEAD_INIT(var.e.list),     \
+        .e.read = hypfs_read_dir,                 \
+        .dirlist = LIST_HEAD_INIT(var.dirlist),   \
+    }
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
+    struct hypfs_entry_leaf __read_mostly var = { \
+        .e.type = typ,                            \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
+        .e.name = nam,                            \
+        .e.max_size = msz,                        \
+        .e.read = hypfs_read_leaf,                \
+    }
+
+/* Content and size need to be set via hypfs_string_set_reference(). */
+#define HYPFS_STRING_INIT(var, nam)               \
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+
+/*
+ * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
+ * to str, so any later modification of *str should be followed by a call
+ * to hypfs_string_set_reference() in order to update the size of the node
+ * data.
+ */
+static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
+                                              const char *str)
+{
+    leaf->content = str;
+    leaf->e.size = strlen(str) + 1;
+}
+
+#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
+    struct hypfs_entry_leaf __read_mostly var = {        \
+        .e.type = typ,                                   \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
+        .e.name = nam,                                   \
+        .e.size = sizeof(contvar),                       \
+        .e.max_size = wr ? sizeof(contvar) : 0,          \
+        .e.read = hypfs_read_leaf,                       \
+        .e.write = wr,                                   \
+        .content = &contvar,                             \
+    }
+
+#define HYPFS_UINT_INIT(var, nam, contvar)                       \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, NULL)
+#define HYPFS_UINT_INIT_WRITABLE(var, nam, contvar)              \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
+                         hypfs_write_leaf)
+
+#define HYPFS_INT_INIT(var, nam, contvar)                        \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, NULL)
+#define HYPFS_INT_INIT_WRITABLE(var, nam, contvar)               \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \
+                         hypfs_write_leaf)
+
+#define HYPFS_BOOL_INIT(var, nam, contvar)                       \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, NULL)
+#define HYPFS_BOOL_INIT_WRITABLE(var, nam, contvar)              \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
+                         hypfs_write_bool)
+
+extern struct hypfs_entry_dir hypfs_root;
+
+struct hypfs_entry *hypfs_get_entry(const char *path);
+int hypfs_add_dir(struct hypfs_entry_dir *parent,
+                  struct hypfs_entry_dir *dir, bool nofault);
+int hypfs_add_leaf(struct hypfs_entry_dir *parent,
+                   struct hypfs_entry_leaf *leaf, bool nofault);
+int hypfs_read_dir(const struct hypfs_entry *entry,
+                   XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_read_leaf(const struct hypfs_entry *entry,
+                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
+int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
+#endif
+
+#endif /* __XEN_HYPFS_H__ */
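The HYPFS_FIXEDSIZE_INIT() family above builds on nested designated
initializers; the same pattern in a stand-alone form (generic struct and
macro names chosen for the sketch, not the ones from this header):

```c
/* Simplified stand-ins for struct hypfs_entry / struct hypfs_entry_leaf. */
struct entry {
    unsigned short type;
    unsigned int size, max_size;
    const char *name;
};

struct leaf {
    struct entry e;
    const void *content;
};

/*
 * Define a leaf bound to an existing variable. max_size doubles as the
 * "writable" indicator: sizeof(contvar) if writable, else 0 -- the same
 * trick HYPFS_FIXEDSIZE_INIT() uses to derive the guest-visible
 * max_write_len.
 */
#define LEAF_INIT(var, typ, nam, contvar, writable)      \
    struct leaf var = {                                  \
        .e.type = (typ),                                 \
        .e.name = (nam),                                 \
        .e.size = sizeof(contvar),                       \
        .e.max_size = (writable) ? sizeof(contvar) : 0,  \
        .content = &(contvar),                           \
    }
```

Binding the node to the variable's address means reads always see the
current value; no separate update call is needed for fixed-size leafs.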
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 95f5e5592b..0921d4a8d0 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -86,6 +86,8 @@
 ?	vcpu_hvm_context		hvm/hvm_vcpu.h
 ?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
 ?	vcpu_hvm_x86_64			hvm/hvm_vcpu.h
+?	hypfs_direntry			hypfs.h
+?	hypfs_dirlistentry		hypfs.h
 ?	kexec_exec			kexec.h
 !	kexec_image			kexec.h
 !	kexec_range			kexec.h
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 295dd67c48..2368acebed 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -434,6 +434,12 @@ static XSM_INLINE int xsm_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
     return xsm_default_action(action, current->domain, NULL);
 }
 
+static XSM_INLINE int xsm_hypfs_op(XSM_DEFAULT_VOID)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, NULL);
+}
+
 static XSM_INLINE long xsm_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index e22d6160b5..a80bcf3e42 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -127,6 +127,7 @@ struct xsm_operations {
     int (*resource_setup_misc) (void);
 
     int (*page_offline)(uint32_t cmd);
+    int (*hypfs_op)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 #ifdef CONFIG_COMPAT
@@ -536,6 +537,11 @@ static inline int xsm_page_offline(xsm_default_t def, uint32_t cmd)
     return xsm_ops->page_offline(cmd);
 }
 
+static inline int xsm_hypfs_op(xsm_default_t def)
+{
+    return xsm_ops->hypfs_op();
+}
+
 static inline long xsm_do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return xsm_ops->do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5705e52791..d4cce68089 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -103,6 +103,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, resource_setup_misc);
 
     set_to_dummy_if_null(ops, page_offline);
+    set_to_dummy_if_null(ops, hypfs_op);
     set_to_dummy_if_null(ops, hvm_param);
     set_to_dummy_if_null(ops, hvm_control);
     set_to_dummy_if_null(ops, hvm_param_nested);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4649e6fd95..a2c78e445c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1173,6 +1173,11 @@ static inline int flask_page_offline(uint32_t cmd)
     }
 }
 
+static inline int flask_hypfs_op(void)
+{
+    return domain_has_xen(current->domain, XEN__HYPFS_OP);
+}
+
 static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
 {
     return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PHYSMAP);
@@ -1812,6 +1817,7 @@ static struct xsm_operations flask_ops = {
     .resource_setup_misc = flask_resource_setup_misc,
 
     .page_offline = flask_page_offline,
+    .hypfs_op = flask_hypfs_op,
     .hvm_param = flask_hvm_param,
     .hvm_control = flask_hvm_param,
     .hvm_param_nested = flask_hvm_param_nested,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c055c14c26..c9e385fb9b 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -67,6 +67,8 @@ class xen
     lockprof
 # XEN_SYSCTL_cpupool_op
     cpupool_op
+# hypfs hypercall
+    hypfs_op
 # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_getinfo, XEN_SYSCTL_sched_id, XEN_DOMCTL_SCHEDOP_getvcpuinfo
     getscheduler
 # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_putinfo, XEN_DOMCTL_SCHEDOP_putvcpuinfo
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51K-0001wS-JV; Fri, 08 May 2020 15:34:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51J-0001vu-9v
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:37 +0000
X-Inumbo-ID: 66464586-9141-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66464586-9141-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 15:34:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D9802AE71;
 Fri,  8 May 2020 15:34:27 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 00/12] Add hypervisor sysfs-like support
Date: Fri,  8 May 2020 17:34:09 +0200
Message-Id: <20200508153421.24525-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.

This is a first implementation of that idea, adding the basic
functionality on the hypervisor and tools side. The interface for any
user program making use of this "xen-hypfs" is a new library,
"libxenhypfs", with a stable interface.

The series adds read-only nodes with buildinfo data and writable
nodes with runtime parameters. xl is switched to use the new file
system for modifying runtime parameters, and the old sysctl
interface for that purpose is dropped.
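
To make the shape of such a store concrete, here is a minimal toy model of
a hierarchical name-value store with sysfs-like path lookup and a
writable-flag check on writes. It is purely illustrative: the names
(Node, get_entry, write_entry) and the sample paths are invented for this
sketch, and the real hypfs lives inside the hypervisor and is reached via
a hypercall, not a Python dict.

```python
# Toy model of a hierarchical name-value store (illustrative only).
class Node:
    def __init__(self, value=None, writable=False):
        self.value = value          # None marks a directory
        self.writable = writable
        self.children = {}          # name -> Node (empty for leaves)

    def is_dir(self):
        return self.value is None

def get_entry(root, path):
    """Walk an absolute path like /buildinfo/version/major."""
    node = root
    for part in path.strip("/").split("/"):
        if part:
            node = node.children[part]   # raises KeyError if absent
    return node

def write_entry(root, path, value):
    node = get_entry(root, path)
    if not node.writable:
        raise PermissionError(path)
    node.value = value

# Build a tiny tree: read-only buildinfo plus one writable runtime parameter.
root = Node()
root.children["buildinfo"] = Node()
root.children["buildinfo"].children["changeset"] = Node("deadbeef")
root.children["params"] = Node()
root.children["params"].children["loglvl"] = Node("warning", writable=True)
```

Writing to a read-only node fails, while the runtime parameter accepts a
new value; that is the same split between buildinfo nodes and runtime
parameters the series implements.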

Changes in V8:
- addressed review comments
- added CONFIG_HYPFS config option

Changes in V7:
- old patch 1 already applied
- add new patch 1 (carved out and modified from patch 9)
- addressed review comments
- modified public interface to have a max write size instead of a
  writable flag only

Changes in V6:
- added new patches 1, 10, 11, 12
- addressed review comments
- modified interface for creating nodes for runtime parameters

Changes in V5:
- switched to xsm for privilege check

Changes in V4:
- former patch 2 removed as already committed
- addressed review comments

Changes in V3:
- major rework, especially by supporting binary contents of entries
- added several new patches (1, 2, 7)
- full support of all runtime parameters
- support of writing entries (especially runtime parameters)

Changes in V2:
- all comments to V1 addressed
- added man-page for xenhypfs tool
- added runtime parameter read access for string parameters

Changes in V1:
- renamed xenfs -> xenhypfs
- added writable entries support at the interface level and in the
  xenhypfs tool
- added runtime parameter read access (integer type only for now)
- added docs/misc/hypfs-paths.pandoc for path descriptions

Juergen Gross (12):
  xen/vmx: let opt_ept_ad always reflect the current setting
  xen: add a generic way to include binary files as variables
  docs: add feature document for Xen hypervisor sysfs-like support
  xen: add basic hypervisor filesystem support
  libs: add libxenhypfs
  tools: add xenfs tool
  xen: provide version information in hypfs
  xen: add /buildinfo/config entry to hypervisor filesystem
  xen: add runtime parameter access support to hypfs
  tools/libxl: use libxenhypfs for setting xen runtime parameters
  tools/libxc: remove xc_set_parameters()
  xen: remove XEN_SYSCTL_set_parameter support

 .gitignore                          |   6 +
 docs/features/hypervisorfs.pandoc   |  92 +++++
 docs/man/xenhypfs.1.pod             |  61 ++++
 docs/misc/hypfs-paths.pandoc        | 165 +++++++++
 tools/Rules.mk                      |   8 +-
 tools/flask/policy/modules/dom0.te  |   4 +-
 tools/libs/Makefile                 |   1 +
 tools/libs/hypfs/Makefile           |  16 +
 tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
 tools/libs/hypfs/include/xenhypfs.h |  90 +++++
 tools/libs/hypfs/libxenhypfs.map    |  10 +
 tools/libs/hypfs/xenhypfs.pc.in     |  10 +
 tools/libxc/include/xenctrl.h       |   1 -
 tools/libxc/xc_misc.c               |  21 --
 tools/libxl/Makefile                |   3 +-
 tools/libxl/libxl.c                 |  53 ++-
 tools/libxl/libxl_internal.h        |   1 +
 tools/libxl/xenlight.pc.in          |   2 +-
 tools/misc/Makefile                 |   6 +
 tools/misc/xenhypfs.c               | 192 ++++++++++
 tools/xl/xl_misc.c                  |   1 -
 xen/arch/arm/traps.c                |   3 +
 xen/arch/arm/xen.lds.S              |  12 +-
 xen/arch/x86/hvm/hypercall.c        |   3 +
 xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
 xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
 xen/arch/x86/hypercall.c            |   3 +
 xen/arch/x86/pv/domain.c            |  24 +-
 xen/arch/x86/pv/hypercall.c         |   3 +
 xen/arch/x86/xen.lds.S              |  12 +-
 xen/common/Kconfig                  |  22 ++
 xen/common/Makefile                 |  13 +
 xen/common/grant_table.c            |  62 +++-
 xen/common/hypfs.c                  | 385 ++++++++++++++++++++
 xen/common/kernel.c                 |  82 ++++-
 xen/common/sysctl.c                 |  36 --
 xen/drivers/char/console.c          |  77 +++-
 xen/include/Makefile                |   1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
 xen/include/public/hypfs.h          | 127 +++++++
 xen/include/public/sysctl.h         |  19 +-
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hypercall.h         |  10 +
 xen/include/xen/hypfs.h             | 124 +++++++
 xen/include/xen/kernel.h            |   3 +
 xen/include/xen/lib.h               |   1 -
 xen/include/xen/param.h             | 123 +++++--
 xen/include/xlat.lst                |   2 +
 xen/include/xsm/dummy.h             |   6 +
 xen/include/xsm/xsm.h               |   6 +
 xen/tools/binfile                   |  41 +++
 xen/xsm/dummy.c                     |   1 +
 xen/xsm/flask/Makefile              |   5 +-
 xen/xsm/flask/flask-policy.S        |  16 -
 xen/xsm/flask/hooks.c               |   9 +-
 xen/xsm/flask/policy/access_vectors |   4 +-
 56 files changed, 2379 insertions(+), 190 deletions(-)
 create mode 100644 docs/features/hypervisorfs.pandoc
 create mode 100644 docs/man/xenhypfs.1.pod
 create mode 100644 docs/misc/hypfs-paths.pandoc
 create mode 100644 tools/libs/hypfs/Makefile
 create mode 100644 tools/libs/hypfs/core.c
 create mode 100644 tools/libs/hypfs/include/xenhypfs.h
 create mode 100644 tools/libs/hypfs/libxenhypfs.map
 create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
 create mode 100644 tools/misc/xenhypfs.c
 create mode 100644 xen/common/hypfs.c
 create mode 100644 xen/include/public/hypfs.h
 create mode 100644 xen/include/xen/hypfs.h
 create mode 100755 xen/tools/binfile
 delete mode 100644 xen/xsm/flask/flask-policy.S

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51K-0001wo-SF; Fri, 08 May 2020 15:34:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51J-0001vy-E4
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:37 +0000
X-Inumbo-ID: 66285f20-9141-11ea-a01d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66285f20-9141-11ea-a01d-12813bfff9fa;
 Fri, 08 May 2020 15:34:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2F168AEC4;
 Fri,  8 May 2020 15:34:29 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 08/12] xen: add /buildinfo/config entry to hypervisor
 filesystem
Date: Fri,  8 May 2020 17:34:17 +0200
Message-Id: <20200508153421.24525-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the /buildinfo/config entry to the hypervisor filesystem. This
entry contains the .config file used to build the hypervisor.
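
Since the V3 changelog below notes that the data is stored in gzip format
(exposed with the XEN_HYPFS_ENC_GZIP encoding), a consumer reading
/buildinfo/config has to decompress the blob before use. A small sketch,
with the byte string standing in for what a hypfs read would return:

```python
import gzip

# Sample data standing in for xen/.config; not the real file contents.
kconfig = b"CONFIG_HYPFS=y\nCONFIG_HYPFS_CONFIG=y\n"

blob = gzip.compress(kconfig)      # what the build step embeds in the image
restored = gzip.decompress(blob)   # what a reader of /buildinfo/config must do
assert restored == kconfig
```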

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- store data in gzip format
- use binfile mechanism to create data file
- move code to kernel.c

V6:
- add config item for the /buildinfo/config (Jan Beulich)
- make config related variables const in kernel.h (Jan Beulich)

V7:
- update doc (Jan Beulich)
- use "rm -f" in Makefile (Jan Beulich)

V8:
- add dependency on CONFIG_HYPFS
- use macro for definition of leaf (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                   |  2 ++
 docs/misc/hypfs-paths.pandoc |  4 ++++
 xen/common/Kconfig           | 11 +++++++++++
 xen/common/Makefile          | 12 ++++++++++++
 xen/common/kernel.c          | 11 +++++++++++
 xen/include/xen/kernel.h     |  3 +++
 6 files changed, 43 insertions(+)

diff --git a/.gitignore b/.gitignore
index ce3ef23d45..a58de1fd4a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -298,6 +298,8 @@ xen/arch/*/efi/boot.c
 xen/arch/*/efi/compat.c
 xen/arch/*/efi/efi.h
 xen/arch/*/efi/runtime.c
+xen/common/config_data.S
+xen/common/config.gz
 xen/include/headers*.chk
 xen/include/asm
 xen/include/asm-*/asm-offsets.h
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index d730caf394..9a76bc383b 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -135,6 +135,10 @@ Information about the compile domain.
 
 The compiler used to build Xen.
 
+#### /buildinfo/config = STRING [CONFIG_HYPFS_CONFIG]
+
+The contents of the `xen/.config` file at the time of the hypervisor build.
+
 #### /buildinfo/version/
 
 A directory containing version information of the hypervisor.
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 2515e053c8..b36cd74a24 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -127,6 +127,17 @@ config HYPFS
 
 	  If unsure, say Y.
 
+config HYPFS_CONFIG
+	bool "Provide hypervisor .config via hypfs entry"
+	default y
+	depends on HYPFS
+	---help---
+	  When enabled the contents of the .config file used to build the
+	  hypervisor are provided via the hypfs entry /buildinfo/config.
+
+	  Disable this option in case you want to spare some memory or you
+	  want to hide the .config contents from dom0.
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index 98b7904dcd..d6d7bad2a9 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -1,6 +1,7 @@
 obj-$(CONFIG_ARGO) += argo.o
 obj-y += bitmap.o
 obj-y += bsearch.o
+obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
@@ -73,3 +74,14 @@ obj-$(CONFIG_UBSAN) += ubsan/
 
 obj-$(CONFIG_NEEDS_LIBELF) += libelf/
 obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
+
+config.gz: ../.config
+	gzip -c $< >$@
+
+config_data.o: config.gz
+
+config_data.S: $(XEN_ROOT)/xen/tools/binfile
+	$(XEN_ROOT)/xen/tools/binfile $@ config.gz xen_config_data
+
+clean::
+	rm -f config_data.S config.gz 2>/dev/null
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index db7bd23fcb..f8f41820d5 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -390,6 +390,10 @@ static HYPFS_STRING_INIT(compile_date, "compile_date");
 static HYPFS_STRING_INIT(compile_domain, "compile_domain");
 static HYPFS_STRING_INIT(extra, "extra");
 
+#ifdef CONFIG_HYPFS_CONFIG
+static HYPFS_STRING_INIT(config, "config");
+#endif
+
 static int __init buildinfo_init(void)
 {
     hypfs_add_dir(&hypfs_root, &buildinfo, true);
@@ -415,6 +419,13 @@ static int __init buildinfo_init(void)
     hypfs_add_leaf(&version, &major, true);
     hypfs_add_leaf(&version, &minor, true);
 
+#ifdef CONFIG_HYPFS_CONFIG
+    config.e.encoding = XEN_HYPFS_ENC_GZIP;
+    config.e.size = xen_config_data_size;
+    config.content = &xen_config_data;
+    hypfs_add_leaf(&buildinfo, &config, true);
+#endif
+
     return 0;
 }
 __initcall(buildinfo_init);
diff --git a/xen/include/xen/kernel.h b/xen/include/xen/kernel.h
index 548b64da9f..02e3281f52 100644
--- a/xen/include/xen/kernel.h
+++ b/xen/include/xen/kernel.h
@@ -100,5 +100,8 @@ extern enum system_state {
 
 bool_t is_active_kernel_text(unsigned long addr);
 
+extern const char xen_config_data;
+extern const unsigned int xen_config_data_size;
+
 #endif /* _LINUX_KERNEL_H */
 
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51P-00020K-8D; Fri, 08 May 2020 15:34:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51O-0001zb-Ba
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:42 +0000
X-Inumbo-ID: 66eec602-9141-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66eec602-9141-11ea-b9cf-bc764e2007e4;
 Fri, 08 May 2020 15:34:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BA2D6AE8C;
 Fri,  8 May 2020 15:34:28 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 06/12] tools: add xenfs tool
Date: Fri,  8 May 2020 17:34:15 +0200
Message-Id: <20200508153421.24525-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the xenhypfs tool for accessing the hypervisor filesystem.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
V1:
- rename to xenhypfs
- don't use "--" for subcommands
- add write support

V2:
- escape non-printable characters per default with cat subcommand
  (Ian Jackson)
- add -b option to cat subcommand (Ian Jackson)
- add man page

V3:
- adapt to new hypfs interface

V7:
- added missing bool for ls output

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore              |   1 +
 docs/man/xenhypfs.1.pod |  61 +++++++++++++
 tools/misc/Makefile     |   6 ++
 tools/misc/xenhypfs.c   | 192 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 260 insertions(+)
 create mode 100644 docs/man/xenhypfs.1.pod
 create mode 100644 tools/misc/xenhypfs.c

diff --git a/.gitignore b/.gitignore
index 41b1d4c8a1..ce3ef23d45 100644
--- a/.gitignore
+++ b/.gitignore
@@ -368,6 +368,7 @@ tools/libxl/test_timedereg
 tools/libxl/test_fdderegrace
 tools/firmware/etherboot/eb-roms.h
 tools/firmware/etherboot/gpxe-git-snapshot.tar.gz
+tools/misc/xenhypfs
 tools/misc/xenwatchdogd
 tools/misc/xen-hvmcrash
 tools/misc/xen-lowmemd
diff --git a/docs/man/xenhypfs.1.pod b/docs/man/xenhypfs.1.pod
new file mode 100644
index 0000000000..37aa488fcc
--- /dev/null
+++ b/docs/man/xenhypfs.1.pod
@@ -0,0 +1,61 @@
+=head1 NAME
+
+xenhypfs - Xen tool to access Xen hypervisor file system
+
+=head1 SYNOPSIS
+
+B<xenhypfs> I<subcommand> [I<options>] [I<args>]
+
+=head1 DESCRIPTION
+
+The B<xenhypfs> program is used to access the Xen hypervisor file system.
+It can be used to show the available entries, to show their contents and
+(if allowed) to modify their contents.
+
+=head1 SUBCOMMANDS
+
+=over 4
+
+=item B<ls> I<path>
+
+List the available entries below I<path>.
+
+=item B<cat> [I<-b>] I<path>
+
+Show the contents of the entry specified by I<path>. Non-printable characters
+other than white space characters (like tab, new line) will be shown as
+B<\xnn> (B<nn> being a two digit hex number) unless the option B<-b> is
+specified.
+
+=item B<write> I<path> I<value>
+
+Set the contents of the entry specified by I<path> to I<value>.
+
+=item B<tree>
+
+Show all the entries of the file system as a tree.
+
+=back
+
+=head1 RETURN CODES
+
+=over 4
+
+=item B<0>
+
+Success
+
+=item B<1>
+
+Invalid usage (e.g. unknown subcommand, unknown option, missing parameter).
+
+=item B<2>
+
+Entry not found while traversing the tree.
+
+=item B<3>
+
+Access right violation.
+
+=back
+
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 63947bfadc..9fdb13597f 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -24,6 +24,7 @@ INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
 INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
 INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
 INSTALL_SBIN                   += xencov
+INSTALL_SBIN                   += xenhypfs
 INSTALL_SBIN                   += xenlockprof
 INSTALL_SBIN                   += xenperf
 INSTALL_SBIN                   += xenpm
@@ -86,6 +87,9 @@ xenperf: xenperf.o
 xenpm: xenpm.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xenhypfs: xenhypfs.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenhypfs) $(APPEND_LDFLAGS)
+
 xenlockprof: xenlockprof.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
@@ -94,6 +98,8 @@ xen-hptool.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-hptool: xen-hptool.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
+xenhypfs.o: CFLAGS += $(CFLAGS_libxenhypfs)
+
 # xen-mfndump incorrectly uses libxc internals
 xen-mfndump.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-mfndump: xen-mfndump.o
diff --git a/tools/misc/xenhypfs.c b/tools/misc/xenhypfs.c
new file mode 100644
index 0000000000..158b901f42
--- /dev/null
+++ b/tools/misc/xenhypfs.c
@@ -0,0 +1,192 @@
+#define _GNU_SOURCE
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <xenhypfs.h>
+
+static struct xenhypfs_handle *hdl;
+
+static int usage(void)
+{
+    fprintf(stderr, "usage: xenhypfs ls <path>\n");
+    fprintf(stderr, "       xenhypfs cat [-b] <path>\n");
+    fprintf(stderr, "       xenhypfs write <path> <val>\n");
+    fprintf(stderr, "       xenhypfs tree\n");
+
+    return 1;
+}
+
+static void xenhypfs_print_escaped(char *string)
+{
+    char *c;
+
+    for (c = string; *c; c++) {
+        if (isgraph(*c) || isspace(*c))
+            printf("%c", *c);
+        else
+            printf("\\x%02x", *c);
+    }
+    printf("\n");
+}
+
+static int xenhypfs_cat(int argc, char *argv[])
+{
+    int ret = 0;
+    char *result;
+    char *path;
+    bool bin = false;
+
+    switch (argc) {
+    case 1:
+        path = argv[0];
+        break;
+
+    case 2:
+        if (strcmp(argv[0], "-b"))
+            return usage();
+        bin = true;
+        path = argv[1];
+        break;
+
+    default:
+        return usage();
+    }
+
+    result = xenhypfs_read(hdl, path);
+    if (!result) {
+        perror("could not read");
+        ret = 3;
+    } else {
+        if (!bin)
+            printf("%s\n", result);
+        else
+            xenhypfs_print_escaped(result);
+        free(result);
+    }
+
+    return ret;
+}
+
+static int xenhypfs_wr(char *path, char *val)
+{
+    int ret;
+
+    ret = xenhypfs_write(hdl, path, val);
+    if (ret) {
+        perror("could not write");
+        ret = 3;
+    }
+
+    return ret;
+}
+
+static char *xenhypfs_type(struct xenhypfs_dirent *ent)
+{
+    char *res;
+
+    switch (ent->type) {
+    case xenhypfs_type_dir:
+        res = "<dir>   ";
+        break;
+    case xenhypfs_type_blob:
+        res = "<blob>  ";
+        break;
+    case xenhypfs_type_string:
+        res = "<string>";
+        break;
+    case xenhypfs_type_uint:
+        res = "<uint>  ";
+        break;
+    case xenhypfs_type_int:
+        res = "<int>   ";
+        break;
+    case xenhypfs_type_bool:
+        res = "<bool>  ";
+        break;
+    default:
+        res = "<\?\?\?>   ";
+        break;
+    }
+
+    return res;
+}
+
+static int xenhypfs_ls(char *path)
+{
+    struct xenhypfs_dirent *ent;
+    unsigned int n, i;
+    int ret = 0;
+
+    ent = xenhypfs_readdir(hdl, path, &n);
+    if (!ent) {
+        perror("could not read dir");
+        ret = 3;
+    } else {
+        for (i = 0; i < n; i++)
+            printf("%s r%c %s\n", xenhypfs_type(ent + i),
+                   ent[i].is_writable ? 'w' : '-', ent[i].name);
+
+        free(ent);
+    }
+
+    return ret;
+}
+
+static int xenhypfs_tree_sub(char *path, unsigned int depth)
+{
+    struct xenhypfs_dirent *ent;
+    unsigned int n, i;
+    int ret = 0;
+    char *p;
+
+    ent = xenhypfs_readdir(hdl, path, &n);
+    if (!ent)
+        return 2;
+
+    for (i = 0; i < n; i++) {
+        printf("%*s%s%s\n", depth * 2, "", ent[i].name,
+               ent[i].type == xenhypfs_type_dir ? "/" : "");
+        if (ent[i].type == xenhypfs_type_dir) {
+            asprintf(&p, "%s%s%s", path, (depth == 1) ? "" : "/", ent[i].name);
+            if (xenhypfs_tree_sub(p, depth + 1))
+                ret = 2;
+        }
+    }
+
+    free(ent);
+
+    return ret;
+}
+
+static int xenhypfs_tree(void)
+{
+    printf("/\n");
+
+    return xenhypfs_tree_sub("/", 1);
+}
+
+int main(int argc, char *argv[])
+{
+    int ret;
+
+    hdl = xenhypfs_open(NULL, 0);
+
+    if (!hdl) {
+        fprintf(stderr, "Could not open libxenhypfs\n");
+        ret = 2;
+    } else if (argc >= 3 && !strcmp(argv[1], "cat"))
+        ret = xenhypfs_cat(argc - 2, argv + 2);
+    else if (argc == 3 && !strcmp(argv[1], "ls"))
+        ret = xenhypfs_ls(argv[2]);
+    else if (argc == 4 && !strcmp(argv[1], "write"))
+        ret = xenhypfs_wr(argv[2], argv[3]);
+    else if (argc == 2 && !strcmp(argv[1], "tree"))
+        ret = xenhypfs_tree();
+    else
+        ret = usage();
+
+    xenhypfs_close(hdl);
+
+    return ret;
+}
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51P-00020o-IF; Fri, 08 May 2020 15:34:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51O-0001zf-EC
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:42 +0000
X-Inumbo-ID: 674b1723-9141-11ea-a01d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 674b1723-9141-11ea-a01d-12813bfff9fa;
 Fri, 08 May 2020 15:34:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C664CAEF8;
 Fri,  8 May 2020 15:34:29 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 11/12] tools/libxc: remove xc_set_parameters()
Date: Fri,  8 May 2020 17:34:20 +0200
Message-Id: <20200508153421.24525-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Juergen Gross <jgross@suse.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There is no user of xc_set_parameters() left, so remove it.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V6:
- new patch
---
 tools/libxc/include/xenctrl.h |  1 -
 tools/libxc/xc_misc.c         | 21 ---------------------
 2 files changed, 22 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 0a6ff93229..feb20319b6 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1226,7 +1226,6 @@ int xc_readconsolering(xc_interface *xch,
                        int clear, int incremental, uint32_t *pindex);
 
 int xc_send_debug_keys(xc_interface *xch, const char *keys);
-int xc_set_parameters(xc_interface *xch, const char *params);
 
 typedef struct xen_sysctl_physinfo xc_physinfo_t;
 typedef struct xen_sysctl_cputopo xc_cputopo_t;
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index fe477bf344..3820394413 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -187,27 +187,6 @@ int xc_send_debug_keys(xc_interface *xch, const char *keys)
     return ret;
 }
 
-int xc_set_parameters(xc_interface *xch, const char *params)
-{
-    int ret, len = strlen(params);
-    DECLARE_SYSCTL;
-    DECLARE_HYPERCALL_BOUNCE_IN(params, len);
-
-    if ( xc_hypercall_bounce_pre(xch, params) )
-        return -1;
-
-    sysctl.cmd = XEN_SYSCTL_set_parameter;
-    set_xen_guest_handle(sysctl.u.set_parameter.params, params);
-    sysctl.u.set_parameter.size = len;
-    memset(sysctl.u.set_parameter.pad, 0, sizeof(sysctl.u.set_parameter.pad));
-
-    ret = do_sysctl(xch, &sysctl);
-
-    xc_hypercall_bounce_post(xch, params);
-
-    return ret;
-}
-
 int xc_physinfo(xc_interface *xch,
                 xc_physinfo_t *put_info)
 {
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51U-000256-Tv; Fri, 08 May 2020 15:34:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51T-00023x-Al
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:47 +0000
X-Inumbo-ID: 6707a9a6-9141-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6707a9a6-9141-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 15:34:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F2E63AEA6;
 Fri,  8 May 2020 15:34:28 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 07/12] xen: provide version information in hypfs
Date: Fri,  8 May 2020 17:34:16 +0200
Message-Id: <20200508153421.24525-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Provide version and compile information in the /buildinfo/ node of the
Xen hypervisor file system. As this information is accessible by dom0
only, no additional security problem arises.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V3:
- new patch

V4:
- add __read_mostly annotations (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc | 45 ++++++++++++++++++++++++++++++++++
 xen/common/kernel.c          | 47 ++++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 39539fa1b5..d730caf394 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -105,3 +105,48 @@ A populated Xen hypervisor file system might look like the following example:
 #### /
 
 The root of the hypervisor file system.
+
+#### /buildinfo/
+
+A directory containing static information generated while building the
+hypervisor.
+
+#### /buildinfo/changeset = STRING
+
+Git commit of the hypervisor.
+
+#### /buildinfo/compileinfo/
+
+A directory containing information about compilation of Xen.
+
+#### /buildinfo/compileinfo/compile_by = STRING
+
+Information who compiled the hypervisor.
+
+#### /buildinfo/compileinfo/compile_date = STRING
+
+Date of the hypervisor compilation.
+
+#### /buildinfo/compileinfo/compile_domain = STRING
+
+Information about the compile domain.
+
+#### /buildinfo/compileinfo/compiler = STRING
+
+The compiler used to build Xen.
+
+#### /buildinfo/version/
+
+A directory containing version information of the hypervisor.
+
+#### /buildinfo/version/extra = STRING
+
+Extra version information.
+
+#### /buildinfo/version/major = INTEGER
+
+The major version of Xen.
+
+#### /buildinfo/version/minor = INTEGER
+
+The minor version of Xen.
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 572e3fc07d..db7bd23fcb 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -13,6 +13,7 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
+#include <xen/hypfs.h>
 #include <xsm/xsm.h>
 #include <asm/current.h>
 #include <public/version.h>
@@ -373,6 +374,52 @@ void __init do_initcalls(void)
         (*call)();
 }
 
+#ifdef CONFIG_HYPFS
+static unsigned int __read_mostly major_version;
+static unsigned int __read_mostly minor_version;
+
+static HYPFS_DIR_INIT(buildinfo, "buildinfo");
+static HYPFS_DIR_INIT(compileinfo, "compileinfo");
+static HYPFS_DIR_INIT(version, "version");
+static HYPFS_UINT_INIT(major, "major", major_version);
+static HYPFS_UINT_INIT(minor, "minor", minor_version);
+static HYPFS_STRING_INIT(changeset, "changeset");
+static HYPFS_STRING_INIT(compiler, "compiler");
+static HYPFS_STRING_INIT(compile_by, "compile_by");
+static HYPFS_STRING_INIT(compile_date, "compile_date");
+static HYPFS_STRING_INIT(compile_domain, "compile_domain");
+static HYPFS_STRING_INIT(extra, "extra");
+
+static int __init buildinfo_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &buildinfo, true);
+
+    hypfs_string_set_reference(&changeset, xen_changeset());
+    hypfs_add_leaf(&buildinfo, &changeset, true);
+
+    hypfs_add_dir(&buildinfo, &compileinfo, true);
+    hypfs_string_set_reference(&compiler, xen_compiler());
+    hypfs_string_set_reference(&compile_by, xen_compile_by());
+    hypfs_string_set_reference(&compile_date, xen_compile_date());
+    hypfs_string_set_reference(&compile_domain, xen_compile_domain());
+    hypfs_add_leaf(&compileinfo, &compiler, true);
+    hypfs_add_leaf(&compileinfo, &compile_by, true);
+    hypfs_add_leaf(&compileinfo, &compile_date, true);
+    hypfs_add_leaf(&compileinfo, &compile_domain, true);
+
+    major_version = xen_major_version();
+    minor_version = xen_minor_version();
+    hypfs_add_dir(&buildinfo, &version, true);
+    hypfs_string_set_reference(&extra, xen_extra_version());
+    hypfs_add_leaf(&version, &extra, true);
+    hypfs_add_leaf(&version, &major, true);
+    hypfs_add_leaf(&version, &minor, true);
+
+    return 0;
+}
+__initcall(buildinfo_init);
+#endif
+
 # define DO(fn) long do_##fn
 
 #endif
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51V-00025S-6z; Fri, 08 May 2020 15:34:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51T-000242-Ea
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:47 +0000
X-Inumbo-ID: 674b1722-9141-11ea-a01d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 674b1722-9141-11ea-a01d-12813bfff9fa;
 Fri, 08 May 2020 15:34:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AAA1BAEE9;
 Fri,  8 May 2020 15:34:29 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 10/12] tools/libxl: use libxenhypfs for setting xen runtime
 parameters
Date: Fri,  8 May 2020 17:34:19 +0200
Message-Id: <20200508153421.24525-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Instead of xc_set_parameters(), use xenhypfs_write() for setting
hypervisor parameters.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V6:
- new patch
---
 tools/Rules.mk               |  2 +-
 tools/libxl/Makefile         |  3 +-
 tools/libxl/libxl.c          | 53 ++++++++++++++++++++++++++++++++----
 tools/libxl/libxl_internal.h |  1 +
 tools/libxl/xenlight.pc.in   |  2 +-
 tools/xl/xl_misc.c           |  1 -
 6 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index ad6073fcad..883a193f9e 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -178,7 +178,7 @@ CFLAGS += -O2 -fomit-frame-pointer
 endif
 
 CFLAGS_libxenlight = -I$(XEN_XENLIGHT) $(CFLAGS_libxenctrl) $(CFLAGS_xeninclude)
-SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
+SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore) $(SHLIB_libxenhypfs)
 LDLIBS_libxenlight = $(SHDEPS_libxenlight) $(XEN_XENLIGHT)/libxenlight$(libextension)
 SHLIB_libxenlight  = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_XENLIGHT)
 
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 69fcf21577..a89ebab0b4 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -20,7 +20,7 @@ LIBUUID_LIBS += -luuid
 endif
 
 LIBXL_LIBS =
-LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
+LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenhypfs) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
 ifeq ($(CONFIG_LIBNL),y)
 LIBXL_LIBS += $(LIBNL3_LIBS)
 endif
@@ -33,6 +33,7 @@ CFLAGS_LIBXL += $(CFLAGS_libxentoolcore)
 CFLAGS_LIBXL += $(CFLAGS_libxenevtchn)
 CFLAGS_LIBXL += $(CFLAGS_libxenctrl)
 CFLAGS_LIBXL += $(CFLAGS_libxenguest)
+CFLAGS_LIBXL += $(CFLAGS_libxenhypfs)
 CFLAGS_LIBXL += $(CFLAGS_libxenstore)
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index f60fd3e4fd..621acc88f3 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -663,15 +663,56 @@ int libxl_set_parameters(libxl_ctx *ctx, char *params)
 {
     int ret;
     GC_INIT(ctx);
+    char *par, *val, *end, *path;
+    xenhypfs_handle *hypfs;
 
-    ret = xc_set_parameters(ctx->xch, params);
-    if (ret < 0) {
-        LOGEV(ERROR, ret, "setting parameters");
-        GC_FREE;
-        return ERROR_FAIL;
+    hypfs = xenhypfs_open(ctx->lg, 0);
+    if (!hypfs) {
+        LOGE(ERROR, "opening Xen hypfs");
+        ret = ERROR_FAIL;
+        goto out;
     }
+
+    while (isblank(*params))
+        params++;
+
+    for (par = params; *par; par = end) {
+        end = strchr(par, ' ');
+        if (!end)
+            end = par + strlen(par);
+
+        val = strchr(par, '=');
+        if (val > end)
+            val = NULL;
+        if (!val && !strncmp(par, "no", 2)) {
+            path = libxl__sprintf(gc, "/params/%s", par + 2);
+            path[end - par - 2 + 8] = 0;
+            val = "no";
+            par += 2;
+        } else {
+            path = libxl__sprintf(gc, "/params/%s", par);
+            path[val - par + 8] = 0;
+            val = libxl__strndup(gc, val + 1, end - val - 1);
+        }
+
+        LOG(DEBUG, "setting node \"%s\" to value \"%s\"", path, val);
+        ret = xenhypfs_write(hypfs, path, val);
+        if (ret < 0) {
+            LOGE(ERROR, "setting parameters");
+            ret = ERROR_FAIL;
+            goto out;
+        }
+
+        while (isblank(*end))
+            end++;
+    }
+
+    ret = 0;
+
+out:
+    xenhypfs_close(hypfs);
     GC_FREE;
-    return 0;
+    return ret;
 }
 
 static int fd_set_flags(libxl_ctx *ctx, int fd,
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index e5effd2ad1..b85b771659 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -56,6 +56,7 @@
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
 #include <xenguest.h>
+#include <xenhypfs.h>
 #include <xc_dom.h>
 
 #include <xen-tools/libs.h>
diff --git a/tools/libxl/xenlight.pc.in b/tools/libxl/xenlight.pc.in
index c0f769fd20..6b351ba096 100644
--- a/tools/libxl/xenlight.pc.in
+++ b/tools/libxl/xenlight.pc.in
@@ -9,4 +9,4 @@ Description: The Xenlight library for Xen hypervisor
 Version: @@version@@
 Cflags: -I${includedir}
 Libs: @@libsflag@@${libdir} -lxenlight
-Requires.private: xentoollog,xenevtchn,xencontrol,xenguest,xenstore
+Requires.private: xentoollog,xenevtchn,xencontrol,xenguest,xenstore,xenhypfs
diff --git a/tools/xl/xl_misc.c b/tools/xl/xl_misc.c
index 20ed605f4f..08f0fb6dc9 100644
--- a/tools/xl/xl_misc.c
+++ b/tools/xl/xl_misc.c
@@ -168,7 +168,6 @@ int main_set_parameters(int argc, char **argv)
 
     if (libxl_set_parameters(ctx, params)) {
         fprintf(stderr, "cannot set parameters: %s\n", params);
-        fprintf(stderr, "Use \"xl dmesg\" to look for possible reason.\n");
         return EXIT_FAILURE;
     }
 
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51Z-0002AA-MG; Fri, 08 May 2020 15:34:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51Y-00028l-B6
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:52 +0000
X-Inumbo-ID: 679af65c-9141-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 679af65c-9141-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 15:34:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 28DAFAF01;
 Fri,  8 May 2020 15:34:30 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 12/12] xen: remove XEN_SYSCTL_set_parameter support
Date: Fri,  8 May 2020 17:34:21 +0200
Message-Id: <20200508153421.24525-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The functionality of XEN_SYSCTL_set_parameter is available via hypfs
now, so it can be removed.

This allows removing the kernel_param structure for runtime parameters
by moving the only remaining structure element into the hypfs node
structure of the runtime parameters.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V6:
- new patch

V7:
- only comment out definition of XEN_SYSCTL_set_parameter (Jan Beulich)

V8:
- rebase to use CONFIG_HYPFS

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/flask/policy/modules/dom0.te  |  2 +-
 xen/arch/arm/xen.lds.S              |  5 --
 xen/arch/x86/hvm/vmx/vmcs.c         | 38 ++++++-------
 xen/arch/x86/xen.lds.S              |  5 --
 xen/common/hypfs.c                  | 12 +---
 xen/common/kernel.c                 | 11 ----
 xen/common/sysctl.c                 | 36 ------------
 xen/include/public/sysctl.h         | 19 +------
 xen/include/xen/hypfs.h             |  5 --
 xen/include/xen/lib.h               |  1 -
 xen/include/xen/param.h             | 87 +++++------------------------
 xen/xsm/flask/hooks.c               |  3 -
 xen/xsm/flask/policy/access_vectors |  2 -
 13 files changed, 34 insertions(+), 192 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 20925e38a2..0a63ce15b6 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -16,7 +16,7 @@ allow dom0_t xen_t:xen {
 allow dom0_t xen_t:xen2 {
 	resource_op psr_cmt_op psr_alloc pmu_ctrl get_symbol
 	get_cpu_levelling_caps get_cpu_featureset livepatch_op
-	coverage_op set_parameter
+	coverage_op
 };
 
 # Allow dom0 to use all XENVER_ subops that have checks.
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 0a6efe96cf..a795497ac8 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -54,11 +54,6 @@ SECTIONS
        *(.data.rel.ro)
        *(.data.rel.ro.*)
 
-       . = ALIGN(POINTER_ALIGN);
-       __param_start = .;
-       *(.data.param)
-       __param_end = .;
-
        __proc_info_start = .;
        *(.proc.info)
        __proc_info_end = .;
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 1746385d13..ca94c2bedc 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -71,27 +71,6 @@ static bool __read_mostly opt_ept_pml = true;
 static s8 __read_mostly opt_ept_ad = -1;
 int8_t __read_mostly opt_ept_exec_sp = -1;
 
-#ifdef CONFIG_HYPFS
-static char opt_ept_setting[10];
-
-static void update_ept_param(void)
-{
-    if ( opt_ept_exec_sp >= 0 )
-        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
-                 opt_ept_exec_sp);
-}
-
-static void __init init_ept_param(struct param_hypfs *par)
-{
-    update_ept_param();
-    custom_runtime_set_var(par, opt_ept_setting);
-}
-#else
-static void update_ept_param(void)
-{
-}
-#endif
-
 static int __init parse_ept_param(const char *s)
 {
     const char *ss;
@@ -118,6 +97,22 @@ static int __init parse_ept_param(const char *s)
 }
 custom_param("ept", parse_ept_param);
 
+#ifdef CONFIG_HYPFS
+static char opt_ept_setting[10];
+
+static void update_ept_param(void)
+{
+    if ( opt_ept_exec_sp >= 0 )
+        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
+                 opt_ept_exec_sp);
+}
+
+static void __init init_ept_param(struct param_hypfs *par)
+{
+    update_ept_param();
+    custom_runtime_set_var(par, opt_ept_setting);
+}
+
 static int parse_ept_param_runtime(const char *s);
 custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
 
@@ -172,6 +167,7 @@ static int parse_ept_param_runtime(const char *s)
 
     return 0;
 }
+#endif
 
 /* Dynamic (run-time adjusted) execution control flags. */
 u32 vmx_pin_based_exec_control __read_mostly;
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 3ed020e26b..0273f79152 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -128,11 +128,6 @@ SECTIONS
        *(.ex_table.pre)
        __stop___pre_ex_table = .;
 
-       . = ALIGN(POINTER_ALIGN);
-       __param_start = .;
-       *(.data.param)
-       __param_end = .;
-
 #if defined(CONFIG_HAS_VPCI) && defined(CONFIG_LATE_HWDOM)
        . = ALIGN(POINTER_ALIGN);
        __start_vpci_array = .;
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index e4de0a9eef..434b6c1308 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -302,7 +302,7 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
         goto out;
 
     p = container_of(leaf, struct param_hypfs, hypfs);
-    ret = p->param->par.func(buf);
+    ret = p->func(buf);
 
     if ( !ret )
         leaf->e.size = ulen;
@@ -383,13 +383,3 @@ long do_hypfs_op(unsigned int cmd,
 
     return ret;
 }
-
-void hypfs_write_lock(void)
-{
-    write_lock(&hypfs_lock);
-}
-
-void hypfs_write_unlock(void)
-{
-    write_unlock(&hypfs_lock);
-}
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 3b8b0d8ca5..bb37c7c9bb 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -196,17 +196,6 @@ static void __init _cmdline_parse(const char *cmdline)
     parse_params(cmdline, __setup_start, __setup_end);
 }
 
-int runtime_parse(const char *line)
-{
-    int ret;
-
-    hypfs_write_lock();
-    ret = parse_params(line, __param_start, __param_end);
-    hypfs_write_unlock();
-
-    return ret;
-}
-
 /**
  *    cmdline_parse -- parses the xen command line.
  * If CONFIG_CMDLINE is set, it would be parsed prior to @cmdline.
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 1c6a817476..ec916424e5 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -471,42 +471,6 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
             copyback = 1;
         break;
 
-    case XEN_SYSCTL_set_parameter:
-    {
-#define XEN_SET_PARAMETER_MAX_SIZE 1023
-        char *params;
-
-        if ( op->u.set_parameter.pad[0] || op->u.set_parameter.pad[1] ||
-             op->u.set_parameter.pad[2] )
-        {
-            ret = -EINVAL;
-            break;
-        }
-        if ( op->u.set_parameter.size > XEN_SET_PARAMETER_MAX_SIZE )
-        {
-            ret = -E2BIG;
-            break;
-        }
-        params = xmalloc_bytes(op->u.set_parameter.size + 1);
-        if ( !params )
-        {
-            ret = -ENOMEM;
-            break;
-        }
-        if ( copy_from_guest(params, op->u.set_parameter.params,
-                             op->u.set_parameter.size) )
-            ret = -EFAULT;
-        else
-        {
-            params[op->u.set_parameter.size] = 0;
-            ret = runtime_parse(params);
-        }
-
-        xfree(params);
-
-        break;
-    }
-
     default:
         ret = arch_do_sysctl(op, u_sysctl);
         copyback = 0;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 3a08c512e8..f635c0c2db 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1026,22 +1026,6 @@ struct xen_sysctl_livepatch_op {
     } u;
 };
 
-/*
- * XEN_SYSCTL_set_parameter
- *
- * Change hypervisor parameters at runtime.
- * The input string is parsed similar to the boot parameters.
- * Parameters are a single string terminated by a NUL byte of max. size
- * characters. Multiple settings can be specified by separating them
- * with blanks.
- */
-
-struct xen_sysctl_set_parameter {
-    XEN_GUEST_HANDLE_64(const_char) params; /* IN: pointer to parameters. */
-    uint16_t size;                          /* IN: size of parameters. */
-    uint16_t pad[3];                        /* IN: MUST be zero. */
-};
-
 #if defined(__i386__) || defined(__x86_64__)
 /*
  * XEN_SYSCTL_get_cpu_policy (x86 specific)
@@ -1106,7 +1090,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_get_cpu_levelling_caps        25
 #define XEN_SYSCTL_get_cpu_featureset            26
 #define XEN_SYSCTL_livepatch_op                  27
-#define XEN_SYSCTL_set_parameter                 28
+/* #define XEN_SYSCTL_set_parameter                 28 */
 #define XEN_SYSCTL_get_cpu_policy                29
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
@@ -1135,7 +1119,6 @@ struct xen_sysctl {
         struct xen_sysctl_cpu_levelling_caps cpu_levelling_caps;
         struct xen_sysctl_cpu_featureset    cpu_featureset;
         struct xen_sysctl_livepatch_op      livepatch;
-        struct xen_sysctl_set_parameter     set_parameter;
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_sysctl_cpu_policy        cpu_policy;
 #endif
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 095a0c9bee..3ba1eb4971 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -119,11 +119,6 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
-void hypfs_write_lock(void);
-void hypfs_write_unlock(void);
-#else
-static inline void hypfs_write_lock(void) {}
-static inline void hypfs_write_unlock(void) {}
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 2d7a054931..e5b0a007b8 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -75,7 +75,6 @@
 struct domain;
 
 void cmdline_parse(const char *cmdline);
-int runtime_parse(const char *line);
 int parse_bool(const char *s, const char *e);
 
 /**
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index 4893de9bd5..ca032bf4f3 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -27,9 +27,6 @@ struct kernel_param {
 };
 
 extern const struct kernel_param __setup_start[], __setup_end[];
-extern const struct kernel_param __param_start[], __param_end[];
-
-#define __dataparam       __used_section(".data.param")
 
 #define __param(att)      static const att \
     __attribute__((__aligned__(sizeof(void *)))) struct kernel_param
@@ -79,14 +76,12 @@ extern const struct kernel_param __param_start[], __param_end[];
         { .name = setup_str_ign,            \
           .type = OPT_IGNORE }
 
-#define __rtparam         __param(__dataparam)
-
 #ifdef CONFIG_HYPFS
 
 struct param_hypfs {
-    const struct kernel_param *param;
     struct hypfs_entry_leaf hypfs;
     void (*init_leaf)(struct param_hypfs *par);
+    int (*func)(const char *);
 };
 
 extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
@@ -109,28 +104,17 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
 
 /* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
 #define custom_runtime_only_param(_name, _var, initfunc) \
-    __rtparam __rtpar_##_var = \
-      { .name = _name, \
-          .type = OPT_CUSTOM, \
-          .par.func = _var }; \
     __paramfs __parfs_##_var = \
-        { .param = &__rtpar_##_var, \
-          .init_leaf = initfunc, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = _name, \
           .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_custom }
+          .hypfs.e.write = hypfs_write_custom, \
+          .init_leaf = initfunc, \
+          .func = _var }
 #define boolean_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_BOOL, \
-          .len = sizeof(_var) + \
-                 BUILD_BUG_ON_ZERO(sizeof(_var) != sizeof(bool)), \
-          .par.var = &_var }; \
     __paramfs __parfs_##_var = \
-        { .param = &__rtpar_##_var, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = _name, \
           .hypfs.e.size = sizeof(_var), \
@@ -139,14 +123,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_bool, \
           .hypfs.content = &_var }
 #define integer_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_UINT, \
-          .len = sizeof(_var), \
-          .par.var = &_var }; \
     __paramfs __parfs_##_var = \
-        { .param = &__rtpar_##_var, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = _name, \
           .hypfs.e.size = sizeof(_var), \
@@ -155,14 +133,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_leaf, \
           .hypfs.content = &_var }
 #define size_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_SIZE, \
-          .len = sizeof(_var), \
-          .par.var = &_var }; \
     __paramfs __parfs_##_var = \
-        { .param = &__rtpar_##_var, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = _name, \
           .hypfs.e.size = sizeof(_var), \
@@ -171,14 +143,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_leaf, \
           .hypfs.content = &_var }
 #define string_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_STR, \
-          .len = sizeof(_var), \
-          .par.var = &_var }; \
     __paramfs __parfs_##_var = \
-        { .param = &__rtpar_##_var, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = _name, \
           .hypfs.e.size = sizeof(_var), \
@@ -194,36 +160,11 @@ struct param_hypfs {
 
 #define param_2_parfs(par)  NULL
 
-#define custom_runtime_only_param(_name, _var, initfunc) \
-    __rtparam __rtpar_##_var = \
-      { .name = _name, \
-          .type = OPT_CUSTOM, \
-          .par.func = _var }
-#define boolean_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_BOOL, \
-          .len = sizeof(_var) + \
-                 BUILD_BUG_ON_ZERO(sizeof(_var) != sizeof(bool)), \
-          .par.var = &_var }
-#define integer_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_UINT, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
-#define size_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_SIZE, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
-#define string_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_STR, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
+#define custom_runtime_only_param(_name, _var, initfunc)
+#define boolean_runtime_only_param(_name, _var)
+#define integer_runtime_only_param(_name, _var)
+#define size_runtime_only_param(_name, _var)
+#define string_runtime_only_param(_name, _var)
 
 #define custom_runtime_set_var(parfs, var)
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a2c78e445c..a314bf85ce 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -822,9 +822,6 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_coverage_op:
         return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
                                     XEN2__COVERAGE_OP, NULL);
-    case XEN_SYSCTL_set_parameter:
-        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
-                                    XEN2__SET_PARAMETER, NULL);
 
     default:
         return avc_unknown_permission("sysctl", cmd);
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c9e385fb9b..b87c99ea98 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -99,8 +99,6 @@ class xen2
     livepatch_op
 # XEN_SYSCTL_coverage_op
     coverage_op
-# XEN_SYSCTL_set_parameter
-    set_parameter
 }
 
 # Classes domain and domain2 consist of operations that a domain performs on
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51a-0002Aj-2G; Fri, 08 May 2020 15:34:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51Y-00028s-Ef
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:52 +0000
X-Inumbo-ID: 66285f1e-9141-11ea-a01d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 66285f1e-9141-11ea-a01d-12813bfff9fa;
 Fri, 08 May 2020 15:34:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 85033AE7F;
 Fri,  8 May 2020 15:34:28 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 05/12] libs: add libxenhypfs
Date: Fri,  8 May 2020 17:34:14 +0200
Message-Id: <20200508153421.24525-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the new library libxenhypfs for access to the hypervisor filesystem.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
V1:
- rename to libxenhypfs
- add xenhypfs_write()

V3:
- major rework due to new hypervisor interface
- add decompression capability

V4:
- add dependency on libz in pkgconfig file (Wei Liu)

V7:
- don't assume the hypervisor's sizeof(bool) is the same as in user land

V8:
- add some comments regarding the semantics of the lib functions
  (George Dunlap)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                          |   2 +
 tools/Rules.mk                      |   6 +
 tools/libs/Makefile                 |   1 +
 tools/libs/hypfs/Makefile           |  16 +
 tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
 tools/libs/hypfs/include/xenhypfs.h |  90 +++++
 tools/libs/hypfs/libxenhypfs.map    |  10 +
 tools/libs/hypfs/xenhypfs.pc.in     |  10 +
 8 files changed, 671 insertions(+)
 create mode 100644 tools/libs/hypfs/Makefile
 create mode 100644 tools/libs/hypfs/core.c
 create mode 100644 tools/libs/hypfs/include/xenhypfs.h
 create mode 100644 tools/libs/hypfs/libxenhypfs.map
 create mode 100644 tools/libs/hypfs/xenhypfs.pc.in

diff --git a/.gitignore b/.gitignore
index 4059829f6e..41b1d4c8a1 100644
--- a/.gitignore
+++ b/.gitignore
@@ -110,6 +110,8 @@ tools/libs/evtchn/headers.chk
 tools/libs/evtchn/xenevtchn.pc
 tools/libs/gnttab/headers.chk
 tools/libs/gnttab/xengnttab.pc
+tools/libs/hypfs/headers.chk
+tools/libs/hypfs/xenhypfs.pc
 tools/libs/call/headers.chk
 tools/libs/call/xencall.pc
 tools/libs/foreignmemory/headers.chk
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5b8cf748ad..ad6073fcad 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -19,6 +19,7 @@ XEN_LIBXENGNTTAB   = $(XEN_ROOT)/tools/libs/gnttab
 XEN_LIBXENCALL     = $(XEN_ROOT)/tools/libs/call
 XEN_LIBXENFOREIGNMEMORY = $(XEN_ROOT)/tools/libs/foreignmemory
 XEN_LIBXENDEVICEMODEL = $(XEN_ROOT)/tools/libs/devicemodel
+XEN_LIBXENHYPFS    = $(XEN_ROOT)/tools/libs/hypfs
 XEN_LIBXC          = $(XEN_ROOT)/tools/libxc
 XEN_XENLIGHT       = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
@@ -132,6 +133,11 @@ SHDEPS_libxendevicemodel = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLI
 LDLIBS_libxendevicemodel = $(SHDEPS_libxendevicemodel) $(XEN_LIBXENDEVICEMODEL)/libxendevicemodel$(libextension)
 SHLIB_libxendevicemodel  = $(SHDEPS_libxendevicemodel) -Wl,-rpath-link=$(XEN_LIBXENDEVICEMODEL)
 
+CFLAGS_libxenhypfs = -I$(XEN_LIBXENHYPFS)/include $(CFLAGS_xeninclude)
+SHDEPS_libxenhypfs = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_xencall)
+LDLIBS_libxenhypfs = $(SHDEPS_libxenhypfs) $(XEN_LIBXENHYPFS)/libxenhypfs$(libextension)
+SHLIB_libxenhypfs  = $(SHDEPS_libxenhypfs) -Wl,-rpath-link=$(XEN_LIBXENHYPFS)
+
 # code which compiles against libxenctrl get __XEN_TOOLS__ and
 # therefore sees the unstable hypercall interfaces.
 CFLAGS_libxenctrl = -I$(XEN_LIBXC)/include $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) $(CFLAGS_xeninclude) -D__XEN_TOOLS__
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 88901e7341..69cdfb5975 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -9,6 +9,7 @@ SUBDIRS-y += gnttab
 SUBDIRS-y += call
 SUBDIRS-y += foreignmemory
 SUBDIRS-y += devicemodel
+SUBDIRS-y += hypfs
 
 ifeq ($(CONFIG_RUMP),y)
 SUBDIRS-y := toolcore
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
new file mode 100644
index 0000000000..06dd449929
--- /dev/null
+++ b/tools/libs/hypfs/Makefile
@@ -0,0 +1,16 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+MAJOR    = 1
+MINOR    = 0
+LIBNAME  := hypfs
+USELIBS  := toollog toolcore call
+
+APPEND_LDFLAGS += -lz
+
+SRCS-y                 += core.c
+
+include ../libs.mk
+
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENHYPFS)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/hypfs/core.c b/tools/libs/hypfs/core.c
new file mode 100644
index 0000000000..befdc66e96
--- /dev/null
+++ b/tools/libs/hypfs/core.c
@@ -0,0 +1,536 @@
+/*
+ * Copyright (c) 2019 SUSE Software Solutions Germany GmbH
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define __XEN_TOOLS__ 1
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include <string.h>
+#include <zlib.h>
+
+#include <xentoollog.h>
+#include <xenhypfs.h>
+#include <xencall.h>
+#include <xentoolcore_internal.h>
+
+#include <xen/xen.h>
+#include <xen/hypfs.h>
+
+#define BUF_SIZE 4096
+
+struct xenhypfs_handle {
+    xentoollog_logger *logger, *logger_tofree;
+    unsigned int flags;
+    xencall_handle *xcall;
+};
+
+xenhypfs_handle *xenhypfs_open(xentoollog_logger *logger,
+                               unsigned open_flags)
+{
+    xenhypfs_handle *fshdl = calloc(1, sizeof(*fshdl));
+
+    if (!fshdl)
+        return NULL;
+
+    fshdl->flags = open_flags;
+    fshdl->logger = logger;
+    fshdl->logger_tofree = NULL;
+
+    if (!fshdl->logger) {
+        fshdl->logger = fshdl->logger_tofree =
+            (xentoollog_logger*)
+            xtl_createlogger_stdiostream(stderr, XTL_PROGRESS, 0);
+        if (!fshdl->logger)
+            goto err;
+    }
+
+    fshdl->xcall = xencall_open(fshdl->logger, 0);
+    if (!fshdl->xcall)
+        goto err;
+
+    /* No need to remember supported version, we only support V1. */
+    if (xencall1(fshdl->xcall, __HYPERVISOR_hypfs_op,
+                 XEN_HYPFS_OP_get_version) < 0)
+        goto err;
+
+    return fshdl;
+
+err:
+    xtl_logger_destroy(fshdl->logger_tofree);
+    xencall_close(fshdl->xcall);
+    free(fshdl);
+    return NULL;
+}
+
+int xenhypfs_close(xenhypfs_handle *fshdl)
+{
+    if (!fshdl)
+        return 0;
+
+    xencall_close(fshdl->xcall);
+    xtl_logger_destroy(fshdl->logger_tofree);
+    free(fshdl);
+    return 0;
+}
+
+static int xenhypfs_get_pathbuf(xenhypfs_handle *fshdl, const char *path,
+                                char **path_buf)
+{
+    int ret = -1;
+    int path_sz;
+
+    if (!fshdl) {
+        errno = EBADF;
+        goto out;
+    }
+
+    path_sz = strlen(path) + 1;
+    if (path_sz > XEN_HYPFS_MAX_PATHLEN)
+    {
+        errno = ENAMETOOLONG;
+        goto out;
+    }
+
+    *path_buf = xencall_alloc_buffer(fshdl->xcall, path_sz);
+    if (!*path_buf) {
+        errno = ENOMEM;
+        goto out;
+    }
+    strcpy(*path_buf, path);
+
+    ret = path_sz;
+
+ out:
+    return ret;
+}
+
+static void *xenhypfs_inflate(void *in_data, size_t *sz)
+{
+    unsigned char *workbuf;
+    void *content = NULL;
+    unsigned int out_sz;
+    z_stream z = { .opaque = NULL };
+    int ret;
+
+    workbuf = malloc(BUF_SIZE);
+    if (!workbuf)
+        return NULL;
+
+    z.next_in = in_data;
+    z.avail_in = *sz;
+    ret = inflateInit2(&z, MAX_WBITS + 32); /* 32 == gzip */
+
+    for (*sz = 0; ret == Z_OK; *sz += out_sz) {
+        z.next_out = workbuf;
+        z.avail_out = BUF_SIZE;
+        ret = inflate(&z, Z_SYNC_FLUSH);
+        if (ret != Z_OK && ret != Z_STREAM_END)
+            break;
+
+        out_sz = z.next_out - workbuf;
+        content = realloc(content, *sz + out_sz);
+        if (!content) {
+            ret = Z_MEM_ERROR;
+            break;
+        }
+        memcpy(content + *sz, workbuf, out_sz);
+    }
+
+    inflateEnd(&z);
+    if (ret != Z_STREAM_END) {
+        free(content);
+        content = NULL;
+        errno = EIO;
+    }
+    free(workbuf);
+    return content;
+}
+
+static void xenhypfs_set_attrs(struct xen_hypfs_direntry *entry,
+                               struct xenhypfs_dirent *dirent)
+{
+    dirent->size = entry->content_len;
+
+    switch(entry->type) {
+    case XEN_HYPFS_TYPE_DIR:
+        dirent->type = xenhypfs_type_dir;
+        break;
+    case XEN_HYPFS_TYPE_BLOB:
+        dirent->type = xenhypfs_type_blob;
+        break;
+    case XEN_HYPFS_TYPE_STRING:
+        dirent->type = xenhypfs_type_string;
+        break;
+    case XEN_HYPFS_TYPE_UINT:
+        dirent->type = xenhypfs_type_uint;
+        break;
+    case XEN_HYPFS_TYPE_INT:
+        dirent->type = xenhypfs_type_int;
+        break;
+    case XEN_HYPFS_TYPE_BOOL:
+        dirent->type = xenhypfs_type_bool;
+        break;
+    default:
+        dirent->type = xenhypfs_type_blob;
+    }
+
+    switch (entry->encoding) {
+    case XEN_HYPFS_ENC_PLAIN:
+        dirent->encoding = xenhypfs_enc_plain;
+        break;
+    case XEN_HYPFS_ENC_GZIP:
+        dirent->encoding = xenhypfs_enc_gzip;
+        break;
+    default:
+        dirent->encoding = xenhypfs_enc_plain;
+        dirent->type = xenhypfs_type_blob;
+    }
+
+    dirent->is_writable = entry->max_write_len;
+}
+
+void *xenhypfs_read_raw(xenhypfs_handle *fshdl, const char *path,
+                        struct xenhypfs_dirent **dirent)
+{
+    void *retbuf = NULL, *content = NULL;
+    char *path_buf = NULL;
+    const char *name;
+    struct xen_hypfs_direntry *entry;
+    int ret;
+    int sz, path_sz;
+
+    *dirent = NULL;
+    ret = xenhypfs_get_pathbuf(fshdl, path, &path_buf);
+    if (ret < 0)
+        goto out;
+
+    path_sz = ret;
+
+    for (sz = BUF_SIZE;; sz = sizeof(*entry) + entry->content_len) {
+        if (retbuf)
+            xencall_free_buffer(fshdl->xcall, retbuf);
+
+        retbuf = xencall_alloc_buffer(fshdl->xcall, sz);
+        if (!retbuf) {
+            errno = ENOMEM;
+            goto out;
+        }
+        entry = retbuf;
+
+        ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op, XEN_HYPFS_OP_read,
+                       (unsigned long)path_buf, path_sz,
+                       (unsigned long)retbuf, sz);
+        if (!ret)
+            break;
+
+        if (ret != ENOBUFS) {
+            errno = -ret;
+            goto out;
+        }
+    }
+
+    content = malloc(entry->content_len);
+    if (!content)
+        goto out;
+    memcpy(content, entry + 1, entry->content_len);
+
+    name = strrchr(path, '/');
+    if (!name)
+        name = path;
+    else {
+        name++;
+        if (!*name)
+            name--;
+    }
+    *dirent = calloc(1, sizeof(struct xenhypfs_dirent) + strlen(name) + 1);
+    if (!*dirent) {
+        free(content);
+        content = NULL;
+        errno = ENOMEM;
+        goto out;
+    }
+    (*dirent)->name = (char *)(*dirent + 1);
+    strcpy((*dirent)->name, name);
+    xenhypfs_set_attrs(entry, *dirent);
+
+ out:
+    ret = errno;
+    xencall_free_buffer(fshdl->xcall, path_buf);
+    xencall_free_buffer(fshdl->xcall, retbuf);
+    errno = ret;
+
+    return content;
+}
+
+char *xenhypfs_read(xenhypfs_handle *fshdl, const char *path)
+{
+    char *buf, *ret_buf = NULL;
+    struct xenhypfs_dirent *dirent;
+    int ret;
+
+    buf = xenhypfs_read_raw(fshdl, path, &dirent);
+    if (!buf)
+        goto out;
+
+    switch (dirent->encoding) {
+    case xenhypfs_enc_plain:
+        break;
+    case xenhypfs_enc_gzip:
+        ret_buf = xenhypfs_inflate(buf, &dirent->size);
+        if (!ret_buf)
+            goto out;
+        free(buf);
+        buf = ret_buf;
+        ret_buf = NULL;
+        break;
+    }
+
+    switch (dirent->type) {
+    case xenhypfs_type_dir:
+        errno = EISDIR;
+        break;
+    case xenhypfs_type_blob:
+        errno = EDOM;
+        break;
+    case xenhypfs_type_string:
+        ret_buf = buf;
+        buf = NULL;
+        break;
+    case xenhypfs_type_uint:
+    case xenhypfs_type_bool:
+        switch (dirent->size) {
+        case 1:
+            ret = asprintf(&ret_buf, "%"PRIu8, *(uint8_t *)buf);
+            break;
+        case 2:
+            ret = asprintf(&ret_buf, "%"PRIu16, *(uint16_t *)buf);
+            break;
+        case 4:
+            ret = asprintf(&ret_buf, "%"PRIu32, *(uint32_t *)buf);
+            break;
+        case 8:
+            ret = asprintf(&ret_buf, "%"PRIu64, *(uint64_t *)buf);
+            break;
+        default:
+            ret = -1;
+            errno = EDOM;
+        }
+        if (ret < 0)
+            ret_buf = NULL;
+        break;
+    case xenhypfs_type_int:
+        switch (dirent->size) {
+        case 1:
+            ret = asprintf(&ret_buf, "%"PRId8, *(int8_t *)buf);
+            break;
+        case 2:
+            ret = asprintf(&ret_buf, "%"PRId16, *(int16_t *)buf);
+            break;
+        case 4:
+            ret = asprintf(&ret_buf, "%"PRId32, *(int32_t *)buf);
+            break;
+        case 8:
+            ret = asprintf(&ret_buf, "%"PRId64, *(int64_t *)buf);
+            break;
+        default:
+            ret = -1;
+            errno = EDOM;
+        }
+        if (ret < 0)
+            ret_buf = NULL;
+        break;
+    }
+
+ out:
+    ret = errno;
+    free(buf);
+    free(dirent);
+    errno = ret;
+
+    return ret_buf;
+}
+
+struct xenhypfs_dirent *xenhypfs_readdir(xenhypfs_handle *fshdl,
+                                         const char *path,
+                                         unsigned int *num_entries)
+{
+    void *buf, *curr;
+    int ret;
+    char *names;
+    struct xenhypfs_dirent *ret_buf = NULL, *dirent;
+    unsigned int n = 0, name_sz = 0;
+    struct xen_hypfs_dirlistentry *entry;
+
+    buf = xenhypfs_read_raw(fshdl, path, &dirent);
+    if (!buf)
+        goto out;
+
+    if (dirent->type != xenhypfs_type_dir ||
+        dirent->encoding != xenhypfs_enc_plain) {
+        errno = ENOTDIR;
+        goto out;
+    }
+
+    if (dirent->size) {
+        curr = buf;
+        for (n = 1;; n++) {
+            entry = curr;
+            name_sz += strlen(entry->name) + 1;
+            if (!entry->off_next)
+                break;
+
+            curr += entry->off_next;
+        }
+    }
+
+    ret_buf = malloc(n * sizeof(*ret_buf) + name_sz);
+    if (!ret_buf)
+        goto out;
+
+    *num_entries = n;
+    names = (char *)(ret_buf + n);
+    curr = buf;
+    for (n = 0; n < *num_entries; n++) {
+        entry = curr;
+        xenhypfs_set_attrs(&entry->e, ret_buf + n);
+        ret_buf[n].name = names;
+        strcpy(names, entry->name);
+        names += strlen(entry->name) + 1;
+        curr += entry->off_next;
+    }
+
+ out:
+    ret = errno;
+    free(buf);
+    free(dirent);
+    errno = ret;
+
+    return ret_buf;
+}
+
+int xenhypfs_write(xenhypfs_handle *fshdl, const char *path, const char *val)
+{
+    void *buf = NULL;
+    char *path_buf = NULL, *val_end;
+    int ret, saved_errno;
+    int sz, path_sz;
+    struct xen_hypfs_direntry *entry;
+    uint64_t mask;
+
+    ret = xenhypfs_get_pathbuf(fshdl, path, &path_buf);
+    if (ret < 0)
+        goto out;
+
+    path_sz = ret;
+    ret = -1;
+
+    sz = BUF_SIZE;
+    buf = xencall_alloc_buffer(fshdl->xcall, sz);
+    if (!buf) {
+        errno = ENOMEM;
+        goto out;
+    }
+
+    ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op, XEN_HYPFS_OP_read,
+                   (unsigned long)path_buf, path_sz,
+                   (unsigned long)buf, sizeof(*entry));
+    if (ret && errno != ENOBUFS)
+        goto out;
+    ret = -1;
+    entry = buf;
+    if (!entry->max_write_len) {
+        errno = EACCES;
+        goto out;
+    }
+    if (entry->encoding != XEN_HYPFS_ENC_PLAIN) {
+        /* Writing compressed data currently not supported. */
+        errno = EDOM;
+        goto out;
+    }
+
+    switch (entry->type) {
+    case XEN_HYPFS_TYPE_STRING:
+        if (sz < strlen(val) + 1) {
+            sz = strlen(val) + 1;
+            xencall_free_buffer(fshdl->xcall, buf);
+            buf = xencall_alloc_buffer(fshdl->xcall, sz);
+            if (!buf) {
+                errno = ENOMEM;
+                goto out;
+            }
+        }
+        sz = strlen(val) + 1;
+        strcpy(buf, val);
+        break;
+    case XEN_HYPFS_TYPE_UINT:
+        sz = entry->content_len;
+        errno = 0;
+        *(unsigned long long *)buf = strtoull(val, &val_end, 0);
+        if (errno || !*val || *val_end)
+            goto out;
+        mask = ~0ULL << (8 * sz);
+        if ((*(uint64_t *)buf & mask) && ((*(uint64_t *)buf & mask) != mask)) {
+            errno = ERANGE;
+            goto out;
+        }
+        break;
+    case XEN_HYPFS_TYPE_INT:
+        sz = entry->content_len;
+        errno = 0;
+        *(unsigned long long *)buf = strtoll(val, &val_end, 0);
+        if (errno || !*val || *val_end)
+            goto out;
+        mask = (sz == 8) ? 0 : ~0ULL << (8 * sz);
+        if ((*(uint64_t *)buf & mask) && ((*(uint64_t *)buf & mask) != mask)) {
+            errno = ERANGE;
+            goto out;
+        }
+        break;
+    case XEN_HYPFS_TYPE_BOOL:
+        sz = entry->content_len;
+        *(unsigned long long *)buf = 0;
+        if (!strcmp(val, "1") || !strcmp(val, "on") || !strcmp(val, "yes") ||
+            !strcmp(val, "true") || !strcmp(val, "enable"))
+            *(unsigned long long *)buf = 1;
+        else if (strcmp(val, "0") && strcmp(val, "no") && strcmp(val, "off") &&
+                 strcmp(val, "false") && strcmp(val, "disable")) {
+            errno = EDOM;
+            goto out;
+        }
+        break;
+    default:
+        /* No support for other types (yet). */
+        errno = EDOM;
+        goto out;
+    }
+
+    ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op,
+                   XEN_HYPFS_OP_write_contents,
+                   (unsigned long)path_buf, path_sz,
+                   (unsigned long)buf, sz);
+
+ out:
+    saved_errno = errno;
+    xencall_free_buffer(fshdl->xcall, path_buf);
+    xencall_free_buffer(fshdl->xcall, buf);
+    errno = saved_errno;
+    return ret;
+}
diff --git a/tools/libs/hypfs/include/xenhypfs.h b/tools/libs/hypfs/include/xenhypfs.h
new file mode 100644
index 0000000000..ab157edceb
--- /dev/null
+++ b/tools/libs/hypfs/include/xenhypfs.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) 2019 SUSE Software Solutions Germany GmbH
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef XENHYPFS_H
+#define XENHYPFS_H
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <sys/types.h>
+
+/* Callers who don't care don't need to #include <xentoollog.h> */
+struct xentoollog_logger;
+
+typedef struct xenhypfs_handle xenhypfs_handle;
+
+struct xenhypfs_dirent {
+    char *name;
+    size_t size;
+    enum {
+        xenhypfs_type_dir,
+        xenhypfs_type_blob,
+        xenhypfs_type_string,
+        xenhypfs_type_uint,
+        xenhypfs_type_int,
+        xenhypfs_type_bool
+    } type;
+    enum {
+        xenhypfs_enc_plain,
+        xenhypfs_enc_gzip
+    } encoding;
+    bool is_writable;
+};
+
+xenhypfs_handle *xenhypfs_open(struct xentoollog_logger *logger,
+                               unsigned int open_flags);
+int xenhypfs_close(xenhypfs_handle *fshdl);
+
+/*
+ * Return the raw contents of a Xen hypfs entry and its dirent containing
+ * the size, type and encoding.
+ * Returned buffer and dirent should be freed via free().
+ */
+void *xenhypfs_read_raw(xenhypfs_handle *fshdl, const char *path,
+                        struct xenhypfs_dirent **dirent);
+
+/*
+ * Return the contents of a Xen hypfs entry as a string.
+ * Returned buffer should be freed via free().
+ */
+char *xenhypfs_read(xenhypfs_handle *fshdl, const char *path);
+
+/*
+ * Return the contents of a Xen hypfs directory in form of an array of
+ * dirents.
+ * Returned buffer should be freed via free().
+ */
+struct xenhypfs_dirent *xenhypfs_readdir(xenhypfs_handle *fshdl,
+                                         const char *path,
+                                         unsigned int *num_entries);
+
+/*
+ * Write a Xen hypfs entry with a value. The value is converted from a string
+ * to the appropriate type.
+ */
+int xenhypfs_write(xenhypfs_handle *fshdl, const char *path, const char *val);
+
+#endif /* XENHYPFS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libs/hypfs/libxenhypfs.map b/tools/libs/hypfs/libxenhypfs.map
new file mode 100644
index 0000000000..47f1edda3e
--- /dev/null
+++ b/tools/libs/hypfs/libxenhypfs.map
@@ -0,0 +1,10 @@
+VERS_1.0 {
+	global:
+		xenhypfs_open;
+		xenhypfs_close;
+		xenhypfs_read_raw;
+		xenhypfs_read;
+		xenhypfs_readdir;
+		xenhypfs_write;
+	local: *; /* Do not expose anything by default */
+};
diff --git a/tools/libs/hypfs/xenhypfs.pc.in b/tools/libs/hypfs/xenhypfs.pc.in
new file mode 100644
index 0000000000..92a262c7a2
--- /dev/null
+++ b/tools/libs/hypfs/xenhypfs.pc.in
@@ -0,0 +1,10 @@
+prefix=@@prefix@@
+includedir=@@incdir@@
+libdir=@@libdir@@
+
+Name: Xenhypfs
+Description: The Xenhypfs library for Xen hypervisor
+Version: @@version@@
+Cflags: -I${includedir} @@cflagslocal@@
+Libs: @@libsflag@@${libdir} -lxenhypfs
+Requires.private: xentoolcore,xentoollog,xencall,z
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:34:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX51e-0002G6-KX; Fri, 08 May 2020 15:34:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX51d-0002Ef-Bv
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:34:57 +0000
X-Inumbo-ID: 6799b7a6-9141-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6799b7a6-9141-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 15:34:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7E8C4AEDA;
 Fri,  8 May 2020 15:34:29 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v8 09/12] xen: add runtime parameter access support to hypfs
Date: Fri,  8 May 2020 17:34:18 +0200
Message-Id: <20200508153421.24525-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200508153421.24525-1-jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add support to read and modify values of hypervisor runtime parameters
via the hypervisor file system.

As runtime parameters can be modified via a sysctl, too, this path has
to take the hypfs rw_lock as writer.

For custom runtime parameters the connection between the parameter
value and the file system is established via an init function, which
sets the initial value (if needed) and the leaf properties.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- complete rework
- support custom parameters, too
- support parameter writing

V6:
- rewording in docs/misc/hypfs-paths.pandoc (Jan Beulich)
- use memchr() (Jan Beulich)
- use strlcat() (Jan Beulich)
- rework to use a custom parameter init function instead of a reference
  to a content variable, allowing the default strings to be dropped
- style correction (Jan Beulich)
- dropping param_append_str() in favor of a custom function at its only
  use site

V7:
- fine tune some parameter initializations (Jan Beulich)
- call custom_runtime_set_var() after updating the value
- modify alignment in Arm linker script to 4 (Jan Beulich)

V8:
- modify alignment in Arm linker script to 8 (Julien Grall)
- fix ept runtime parameter reporting (Jan Beulich)
- rebase to support CONFIG_HYPFS

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/arch/arm/xen.lds.S       |   7 ++
 xen/arch/x86/hvm/vmx/vmcs.c  |  29 +++++++-
 xen/arch/x86/pv/domain.c     |  24 ++++++-
 xen/arch/x86/xen.lds.S       |   7 ++
 xen/common/grant_table.c     |  62 ++++++++++++++----
 xen/common/hypfs.c           |  41 ++++++++++++
 xen/common/kernel.c          |  27 +++++++-
 xen/drivers/char/console.c   |  77 ++++++++++++++++++++--
 xen/include/xen/hypfs.h      |   7 ++
 xen/include/xen/param.h      | 124 ++++++++++++++++++++++++++++++++++-
 11 files changed, 390 insertions(+), 24 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 9a76bc383b..a111c6f25c 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -154,3 +154,12 @@ The major version of Xen.
 #### /buildinfo/version/minor = INTEGER
 
 The minor version of Xen.
+
+#### /params/
+
+A directory of runtime parameters.
+
+#### /params/*
+
+The individual parameters. The description of the different parameters can be
+found in `docs/misc/xen-command-line.pandoc`.
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index a497f6a48d..0a6efe96cf 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -89,6 +89,13 @@ SECTIONS
        __start_schedulers_array = .;
        *(.data.schedulers)
        __end_schedulers_array = .;
+
+#ifdef CONFIG_HYPFS
+       . = ALIGN(8);
+       __paramhypfs_start = .;
+       *(.data.paramhypfs)
+       __paramhypfs_end = .;
+#endif
        *(.data.rel)
        *(.data.rel.*)
        CONSTRUCTORS
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 221af9737a..1746385d13 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -71,6 +71,27 @@ static bool __read_mostly opt_ept_pml = true;
 static s8 __read_mostly opt_ept_ad = -1;
 int8_t __read_mostly opt_ept_exec_sp = -1;
 
+#ifdef CONFIG_HYPFS
+static char opt_ept_setting[10];
+
+static void update_ept_param(void)
+{
+    if ( opt_ept_exec_sp >= 0 )
+        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
+                 opt_ept_exec_sp);
+}
+
+static void __init init_ept_param(struct param_hypfs *par)
+{
+    update_ept_param();
+    custom_runtime_set_var(par, opt_ept_setting);
+}
+#else
+static void update_ept_param(void)
+{
+}
+#endif
+
 static int __init parse_ept_param(const char *s)
 {
     const char *ss;
@@ -97,6 +118,9 @@ static int __init parse_ept_param(const char *s)
 }
 custom_param("ept", parse_ept_param);
 
+static int parse_ept_param_runtime(const char *s);
+custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
+
 static int parse_ept_param_runtime(const char *s)
 {
     struct domain *d;
@@ -115,6 +139,10 @@ static int parse_ept_param_runtime(const char *s)
 
     opt_ept_exec_sp = val;
 
+    update_ept_param();
+    custom_runtime_set_var(param_2_parfs(parse_ept_param_runtime),
+                           opt_ept_setting);
+
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
     {
@@ -144,7 +172,6 @@ static int parse_ept_param_runtime(const char *s)
 
     return 0;
 }
-custom_runtime_only_param("ept", parse_ept_param_runtime);
 
 /* Dynamic (run-time adjusted) execution control flags. */
 u32 vmx_pin_based_exec_control __read_mostly;
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 0a4a5bd001..6bfce9a76a 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -52,9 +52,27 @@ static __read_mostly enum {
     PCID_OFF,
     PCID_ALL,
     PCID_XPTI,
-    PCID_NOXPTI
+    PCID_NOXPTI,
+    PCID_END
 } opt_pcid = PCID_XPTI;
 
+#ifdef CONFIG_HYPFS
+static const char opt_pcid_2_string[PCID_END][7] = {
+    [PCID_OFF] = "off",
+    [PCID_ALL] = "on",
+    [PCID_XPTI] = "xpti",
+    [PCID_NOXPTI] = "noxpti"
+};
+
+static void __init opt_pcid_init(struct param_hypfs *par)
+{
+    custom_runtime_set_var(par, opt_pcid_2_string[opt_pcid]);
+}
+#endif
+
+static int parse_pcid(const char *s);
+custom_runtime_param("pcid", parse_pcid, opt_pcid_init);
+
 static int parse_pcid(const char *s)
 {
     int rc = 0;
@@ -87,9 +105,11 @@ static int parse_pcid(const char *s)
         break;
     }
 
+    custom_runtime_set_var(param_2_parfs(parse_pcid),
+                           opt_pcid_2_string[opt_pcid]);
+
     return rc;
 }
-custom_runtime_param("pcid", parse_pcid);
 
 static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 0e3a733cab..3ed020e26b 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -279,6 +279,13 @@ SECTIONS
        __start_schedulers_array = .;
        *(.data.schedulers)
        __end_schedulers_array = .;
+
+#ifdef CONFIG_HYPFS
+       . = ALIGN(8);
+       __paramhypfs_start = .;
+       *(.data.paramhypfs)
+       __paramhypfs_end = .;
+#endif
   } :text
 
   DECL_SECTION(.data) {
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5ef7ff940d..a9eeee8ff9 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -85,8 +85,43 @@ struct grant_table {
     struct grant_table_arch arch;
 };
 
-static int parse_gnttab_limit(const char *param, const char *arg,
-                              unsigned int *valp)
+unsigned int __read_mostly opt_max_grant_frames = 64;
+static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
+
+#ifdef CONFIG_HYPFS
+#define GRANT_CUSTOM_VAL_SZ  12
+static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
+static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
+
+static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
+                              char *parval)
+{
+    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
+    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
+}
+
+static void __init gnttab_max_frames_init(struct param_hypfs *par)
+{
+    update_gnttab_par(par, opt_max_grant_frames, opt_max_grant_frames_val);
+}
+
+static void __init max_maptrack_frames_init(struct param_hypfs *par)
+{
+    update_gnttab_par(par, opt_max_maptrack_frames,
+                      opt_max_maptrack_frames_val);
+}
+#else
+#define opt_max_grant_frames_val    NULL
+#define opt_max_maptrack_frames_val NULL
+
+static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
+                              char *parval)
+{
+}
+#endif
+
+static int parse_gnttab_limit(struct param_hypfs *par, const char *arg,
+                              unsigned int *valp, char *parval)
 {
     const char *e;
     unsigned long val;
@@ -99,28 +134,33 @@ static int parse_gnttab_limit(const char *param, const char *arg,
         return -ERANGE;
 
     *valp = val;
+    update_gnttab_par(par, val, parval);
 
     return 0;
 }
 
-unsigned int __read_mostly opt_max_grant_frames = 64;
+static int parse_gnttab_max_frames(const char *arg);
+custom_runtime_param("gnttab_max_frames", parse_gnttab_max_frames,
+                     gnttab_max_frames_init);
 
 static int parse_gnttab_max_frames(const char *arg)
 {
-    return parse_gnttab_limit("gnttab_max_frames", arg,
-                              &opt_max_grant_frames);
+    return parse_gnttab_limit(param_2_parfs(parse_gnttab_max_frames),
+                              arg, &opt_max_grant_frames,
+                              opt_max_grant_frames_val);
 }
-custom_runtime_param("gnttab_max_frames", parse_gnttab_max_frames);
 
-static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
+static int parse_gnttab_max_maptrack_frames(const char *arg);
+custom_runtime_param("gnttab_max_maptrack_frames",
+                     parse_gnttab_max_maptrack_frames,
+                     max_maptrack_frames_init);
 
 static int parse_gnttab_max_maptrack_frames(const char *arg)
 {
-    return parse_gnttab_limit("gnttab_max_maptrack_frames", arg,
-                              &opt_max_maptrack_frames);
+    return parse_gnttab_limit(param_2_parfs(parse_gnttab_max_maptrack_frames),
+                              arg, &opt_max_maptrack_frames,
+                              opt_max_maptrack_frames_val);
 }
-custom_runtime_param("gnttab_max_maptrack_frames",
-                     parse_gnttab_max_maptrack_frames);
 
 #ifndef GNTTAB_MAX_VERSION
 #define GNTTAB_MAX_VERSION 2
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index a873d507d1..e4de0a9eef 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -10,6 +10,7 @@
 #include <xen/hypercall.h>
 #include <xen/hypfs.h>
 #include <xen/lib.h>
+#include <xen/param.h>
 #include <xen/rwlock.h>
 #include <public/hypfs.h>
 
@@ -281,6 +282,36 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
     return 0;
 }
 
+int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    struct param_hypfs *p;
+    char *buf;
+    int ret;
+
+    buf = xzalloc_array(char, ulen);
+    if ( !buf )
+        return -ENOMEM;
+
+    ret = -EFAULT;
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        goto out;
+
+    ret = -EDOM;
+    if ( memchr(buf, 0, ulen) != (buf + ulen - 1) )
+        goto out;
+
+    p = container_of(leaf, struct param_hypfs, hypfs);
+    ret = p->param->par.func(buf);
+
+    if ( !ret )
+        leaf->e.size = ulen;
+
+ out:
+    xfree(buf);
+    return ret;
+}
+
 static int hypfs_write(struct hypfs_entry *entry,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
 {
@@ -352,3 +383,13 @@ long do_hypfs_op(unsigned int cmd,
 
     return ret;
 }
+
+void hypfs_write_lock(void)
+{
+    write_lock(&hypfs_lock);
+}
+
+void hypfs_write_unlock(void)
+{
+    write_unlock(&hypfs_lock);
+}
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f8f41820d5..3b8b0d8ca5 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -198,7 +198,13 @@ static void __init _cmdline_parse(const char *cmdline)
 
 int runtime_parse(const char *line)
 {
-    return parse_params(line, __param_start, __param_end);
+    int ret;
+
+    hypfs_write_lock();
+    ret = parse_params(line, __param_start, __param_end);
+    hypfs_write_unlock();
+
+    return ret;
 }
 
 /**
@@ -429,6 +435,25 @@ static int __init buildinfo_init(void)
     return 0;
 }
 __initcall(buildinfo_init);
+
+static HYPFS_DIR_INIT(params, "params");
+
+static int __init param_init(void)
+{
+    struct param_hypfs *param;
+
+    hypfs_add_dir(&hypfs_root, &params, true);
+
+    for ( param = __paramhypfs_start; param < __paramhypfs_end; param++ )
+    {
+        if ( param->init_leaf )
+            param->init_leaf(param);
+        hypfs_add_leaf(&params, &param->hypfs, true);
+    }
+
+    return 0;
+}
+__initcall(param_init);
 #endif
 
 # define DO(fn) long do_##fn
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 913ae1b66a..692b7278f6 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -75,12 +75,35 @@ enum con_timestamp_mode
     TSM_DATE_MS,       /* [YYYY-MM-DD HH:MM:SS.mmm] */
     TSM_BOOT,          /* [SSSSSS.uuuuuu] */
     TSM_RAW,           /* [XXXXXXXXXXXXXXXX] */
+    TSM_END
 };
 
 static enum con_timestamp_mode __read_mostly opt_con_timestamp_mode = TSM_NONE;
 
+#ifdef CONFIG_HYPFS
+static const char con_timestamp_mode_2_string[TSM_END][7] = {
+    [TSM_NONE] = "none",
+    [TSM_DATE] = "date",
+    [TSM_DATE_MS] = "datems",
+    [TSM_BOOT] = "boot",
+    [TSM_RAW] = "raw"
+};
+
+static void con_timestamp_mode_upd(struct param_hypfs *par)
+{
+    const char *val = con_timestamp_mode_2_string[opt_con_timestamp_mode];
+
+    custom_runtime_set_var_sz(par, val, 7);
+}
+#else
+static void con_timestamp_mode_upd(struct param_hypfs *par)
+{
+}
+#endif
+
 static int parse_console_timestamps(const char *s);
-custom_runtime_param("console_timestamps", parse_console_timestamps);
+custom_runtime_param("console_timestamps", parse_console_timestamps,
+                     con_timestamp_mode_upd);
 
 /* conring_size: allows a large console ring than default (16kB). */
 static uint32_t __initdata opt_conring_size;
@@ -143,6 +166,39 @@ static int __read_mostly xenlog_guest_lower_thresh =
 static int parse_loglvl(const char *s);
 static int parse_guest_loglvl(const char *s);
 
+#ifdef CONFIG_HYPFS
+#define LOGLVL_VAL_SZ 16
+static char xenlog_val[LOGLVL_VAL_SZ];
+static char xenlog_guest_val[LOGLVL_VAL_SZ];
+
+static char *lvl2opt[] = { "none", "error", "warning", "info", "all" };
+
+static void xenlog_update_val(int lower, int upper, char *val)
+{
+    snprintf(val, LOGLVL_VAL_SZ, "%s/%s", lvl2opt[lower], lvl2opt[upper]);
+}
+
+static void __init xenlog_init(struct param_hypfs *par)
+{
+    xenlog_update_val(xenlog_lower_thresh, xenlog_upper_thresh, xenlog_val);
+    custom_runtime_set_var(par, xenlog_val);
+}
+
+static void __init xenlog_guest_init(struct param_hypfs *par)
+{
+    xenlog_update_val(xenlog_guest_lower_thresh, xenlog_guest_upper_thresh,
+                      xenlog_guest_val);
+    custom_runtime_set_var(par, xenlog_guest_val);
+}
+#else
+#define xenlog_val       NULL
+#define xenlog_guest_val NULL
+
+static void xenlog_update_val(int lower, int upper, char *val)
+{
+}
+#endif
+
 /*
  * <lvl> := none|error|warning|info|debug|all
  * loglvl=<lvl_print_always>[/<lvl_print_ratelimit>]
@@ -151,8 +207,8 @@ static int parse_guest_loglvl(const char *s);
  * Similar definitions for guest_loglvl, but applies to guest tracing.
  * Defaults: loglvl=warning ; guest_loglvl=none/warning
  */
-custom_runtime_param("loglvl", parse_loglvl);
-custom_runtime_param("guest_loglvl", parse_guest_loglvl);
+custom_runtime_param("loglvl", parse_loglvl, xenlog_init);
+custom_runtime_param("guest_loglvl", parse_guest_loglvl, xenlog_guest_init);
 
 static atomic_t print_everything = ATOMIC_INIT(0);
 
@@ -173,7 +229,7 @@ static int __parse_loglvl(const char *s, const char **ps)
     return 2; /* sane fallback */
 }
 
-static int _parse_loglvl(const char *s, int *lower, int *upper)
+static int _parse_loglvl(const char *s, int *lower, int *upper, char *val)
 {
     *lower = *upper = __parse_loglvl(s, &s);
     if ( *s == '/' )
@@ -181,18 +237,21 @@ static int _parse_loglvl(const char *s, int *lower, int *upper)
     if ( *upper < *lower )
         *upper = *lower;
 
+    xenlog_update_val(*lower, *upper, val);
+
     return *s ? -EINVAL : 0;
 }
 
 static int parse_loglvl(const char *s)
 {
-    return _parse_loglvl(s, &xenlog_lower_thresh, &xenlog_upper_thresh);
+    return _parse_loglvl(s, &xenlog_lower_thresh, &xenlog_upper_thresh,
+                         xenlog_val);
 }
 
 static int parse_guest_loglvl(const char *s)
 {
     return _parse_loglvl(s, &xenlog_guest_lower_thresh,
-                         &xenlog_guest_upper_thresh);
+                         &xenlog_guest_upper_thresh, xenlog_guest_val);
 }
 
 static char *loglvl_str(int lvl)
@@ -727,13 +786,17 @@ static int printk_prefix_check(char *p, char **pp)
 
 static int parse_console_timestamps(const char *s)
 {
+    struct param_hypfs *par = param_2_parfs(parse_console_timestamps);
+
     switch ( parse_bool(s, NULL) )
     {
     case 0:
         opt_con_timestamp_mode = TSM_NONE;
+        con_timestamp_mode_upd(par);
         return 0;
     case 1:
         opt_con_timestamp_mode = TSM_DATE;
+        con_timestamp_mode_upd(par);
         return 0;
     }
     if ( *s == '\0' || /* Compat for old booleanparam() */
@@ -750,6 +813,8 @@ static int parse_console_timestamps(const char *s)
     else
         return -EINVAL;
 
+    con_timestamp_mode_upd(par);
+
     return 0;
 }
 
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 279bcf3b4c..095a0c9bee 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -117,6 +117,13 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
+int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
+void hypfs_write_lock(void);
+void hypfs_write_unlock(void);
+#else
+static inline void hypfs_write_lock(void) {}
+static inline void hypfs_write_unlock(void) {}
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index a1dc3ba8f0..4893de9bd5 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -1,6 +1,7 @@
 #ifndef _XEN_PARAM_H
 #define _XEN_PARAM_H
 
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/stdbool.h>
@@ -80,7 +81,120 @@ extern const struct kernel_param __param_start[], __param_end[];
 
 #define __rtparam         __param(__dataparam)
 
-#define custom_runtime_only_param(_name, _var) \
+#ifdef CONFIG_HYPFS
+
+struct param_hypfs {
+    const struct kernel_param *param;
+    struct hypfs_entry_leaf hypfs;
+    void (*init_leaf)(struct param_hypfs *par);
+};
+
+extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
+
+#define __paramhypfs      __used_section(".data.paramhypfs")
+
+#define __paramfs         static __paramhypfs \
+    __attribute__((__aligned__(sizeof(void *)))) struct param_hypfs
+
+#define custom_runtime_set_var_sz(parfs, var, sz) \
+    { \
+        (parfs)->hypfs.content = var; \
+        (parfs)->hypfs.e.max_size = sz; \
+        (parfs)->hypfs.e.size = strlen(var) + 1; \
+    }
+#define custom_runtime_set_var(parfs, var) \
+    custom_runtime_set_var_sz(parfs, var, sizeof(var))
+
+#define param_2_parfs(par) &__parfs_##par
+
+/* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
+#define custom_runtime_only_param(_name, _var, initfunc) \
+    __rtparam __rtpar_##_var = \
+      { .name = _name, \
+          .type = OPT_CUSTOM, \
+          .par.func = _var }; \
+    __paramfs __parfs_##_var = \
+        { .param = &__rtpar_##_var, \
+          .init_leaf = initfunc, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = _name, \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_custom }
+#define boolean_runtime_only_param(_name, _var) \
+    __rtparam __rtpar_##_var = \
+        { .name = _name, \
+          .type = OPT_BOOL, \
+          .len = sizeof(_var) + \
+                 BUILD_BUG_ON_ZERO(sizeof(_var) != sizeof(bool)), \
+          .par.var = &_var }; \
+    __paramfs __parfs_##_var = \
+        { .param = &__rtpar_##_var, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = _name, \
+          .hypfs.e.size = sizeof(_var), \
+          .hypfs.e.max_size = sizeof(_var), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_bool, \
+          .hypfs.content = &_var }
+#define integer_runtime_only_param(_name, _var) \
+    __rtparam __rtpar_##_var = \
+        { .name = _name, \
+          .type = OPT_UINT, \
+          .len = sizeof(_var), \
+          .par.var = &_var }; \
+    __paramfs __parfs_##_var = \
+        { .param = &__rtpar_##_var, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = _name, \
+          .hypfs.e.size = sizeof(_var), \
+          .hypfs.e.max_size = sizeof(_var), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &_var }
+#define size_runtime_only_param(_name, _var) \
+    __rtparam __rtpar_##_var = \
+        { .name = _name, \
+          .type = OPT_SIZE, \
+          .len = sizeof(_var), \
+          .par.var = &_var }; \
+    __paramfs __parfs_##_var = \
+        { .param = &__rtpar_##_var, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = _name, \
+          .hypfs.e.size = sizeof(_var), \
+          .hypfs.e.max_size = sizeof(_var), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &_var }
+#define string_runtime_only_param(_name, _var) \
+    __rtparam __rtpar_##_var = \
+        { .name = _name, \
+          .type = OPT_STR, \
+          .len = sizeof(_var), \
+          .par.var = &_var }; \
+    __paramfs __parfs_##_var = \
+        { .param = &__rtpar_##_var, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = _name, \
+          .hypfs.e.size = sizeof(_var), \
+          .hypfs.e.max_size = sizeof(_var), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &_var }
+
+#else
+
+struct param_hypfs {
+};
+
+#define param_2_parfs(par)  NULL
+
+#define custom_runtime_only_param(_name, _var, initfunc) \
     __rtparam __rtpar_##_var = \
       { .name = _name, \
           .type = OPT_CUSTOM, \
@@ -111,9 +225,13 @@ extern const struct kernel_param __param_start[], __param_end[];
           .len = sizeof(_var), \
           .par.var = &_var }
 
-#define custom_runtime_param(_name, _var) \
+#define custom_runtime_set_var(parfs, var)
+
+#endif
+
+#define custom_runtime_param(_name, _var, initfunc) \
     custom_param(_name, _var); \
-    custom_runtime_only_param(_name, _var)
+    custom_runtime_only_param(_name, _var, initfunc)
 #define boolean_runtime_param(_name, _var) \
     boolean_param(_name, _var); \
     boolean_runtime_only_param(_name, _var)
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 15:37:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:37:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150072-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 150072: tolerable FAIL - PUSHED
X-Osstest-Versions-This: xen=c26841f0aad0af887a6b3658d71959c4946e480d
X-Osstest-Versions-That: xen=e43fc14ec58329813af876ed3b30899a04d65a08
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 15:37:18 +0000

flight 150072 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150072/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    17 guest-localmigrate/x10       fail  like 150041
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  c26841f0aad0af887a6b3658d71959c4946e480d
baseline version:
 xen                  e43fc14ec58329813af876ed3b30899a04d65a08

Last test of basis   150041  2020-05-05 16:06:33 Z    2 days
Testing same since   150072  2020-05-07 13:06:22 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Ashok Raj <ashok.raj@intel.com>
  Borislav Petkov <bp@suse.de>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Thomas Gleixner <tglx@linutronix.de>
  Varad Gautam <vrd@amazon.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e43fc14ec5..c26841f0aa  c26841f0aad0af887a6b3658d71959c4946e480d -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Fri May 08 15:50:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:50:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX5Go-0004sE-OU; Fri, 08 May 2020 15:50:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nu4B=6W=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jX5Gn-0004s9-ID
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:50:37 +0000
X-Inumbo-ID: a865cd4a-9143-11ea-a02a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a865cd4a-9143-11ea-a02a-12813bfff9fa;
 Fri, 08 May 2020 15:50:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ryni20iStrFgE63HRr0c2L62ozRLJ8mQqDXVcSy1G0o=; b=v9yBpfz4yeys70013FohC/h6T5
 Fgi4eE0l3EUrF+9UnvJWXDXtCBMTesYR1eYfsTuKhyNiS7w2uf2FngMXoB2Yvr1c2vIPrfpr6i04S
 mqz8KcJS/TjKSyRBQQpHUK9d8v8B2E1da+SZU51cC846HRYAqIiwhyT0ZTwvARHZ5NqE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jX5Gj-0003Fz-1u; Fri, 08 May 2020 15:50:33 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jX5Gi-0002Xh-MG; Fri, 08 May 2020 15:50:32 +0000
Subject: Re: Xen Coding style
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
 <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
 <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
 <CABfawhnvKgoP_EE7An8FDgJ11Ta8_gOo7tZ_J98sB_+Qir7=Yg@mail.gmail.com>
 <10ee3601-fc34-5714-30ea-abd2f2c03cfe@suse.com>
 <CABfawhkRcVavY+gkyvPfUTDdr1Xf=Xsmap11mgCi8cwcNyR=Ug@mail.gmail.com>
 <d0be31c1-f5fe-30ba-9c1a-5d37333d2479@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f24a6c04-e11d-ae64-3d9a-cb3ad1ac3c14@xen.org>
Date: Fri, 8 May 2020 16:50:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d0be31c1-f5fe-30ba-9c1a-5d37333d2479@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "paul@xen.org" <paul@xen.org>, Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 08/05/2020 15:42, Jürgen Groß wrote:
> On 08.05.20 16:23, Tamas K Lengyel wrote:
>> On Fri, May 8, 2020 at 8:18 AM Jürgen Groß <jgross@suse.com> wrote:
>>>
>>> On 08.05.20 14:55, Tamas K Lengyel wrote:
>>>> On Fri, May 8, 2020 at 6:21 AM Julien Grall <julien@xen.org> wrote:
>>>>>
>>>>> Hi Jan,
>>>>>
>>>>> On 08/05/2020 12:20, Jan Beulich wrote:
>>>>>> On 08.05.2020 12:00, Julien Grall wrote:
>>>>>>> You seem to be the maintainer with the most unwritten rules. Would
>>>>>>> you mind having a try at writing a coding style based on them?
>>>>>>
>>>>>> On the basis that even small, single aspect patches to CODING_STYLE
>>>>>> have been ignored [1],
>>>>>
>>>>> Your thread is one of the examples of why I started this thread.
>>>>> Agreeing on a specific rule doesn't work, because it either results
>>>>> in bikeshedding or there is not enough interest to review rule by
>>>>> rule.
>>>>>
>>>>>> I don't think this would be a good use of my
>>>>>> time.
>>>>>
>>>>> I would have assumed that the current situation (i.e.
>>>>> nitpicking/bikeshedding on the ML) is not a good use of your time :).
>>>>>
>>>>> I would be happy to put in some effort to help get the coding style
>>>>> right; however, I believe focusing on an overall coding style would
>>>>> be a better use of everyone's time than a rule-by-rule discussion.
>>>>>
>>>>>> If I was promised (reasonable) feedback, I could take what I
>>>>>> have and try to add at least a few more things based on what I find
>>>>>> myself commenting on more frequently. But really I'd prefer it to
>>>>>> be done the other way around - for people to look at the patches
>>>>>> already sent, and for me to only subsequently send more. After all,
>>>>>> if already those adjustments are controversial, I don't think we
>>>>>> could settle on others.
>>>>> While I understand this requires another investment on your part,
>>>>> I am afraid it is going to be painful for someone else to go through
>>>>> all the existing coding style bikeshedding and infer your unwritten
>>>>> rules.
>>>>>
>>>>> It might be more beneficial for that person to pursue the work done
>>>>> by Tamas and Viktor in the past (see my previous e-mail). This may
>>>>> mean adopting an existing coding style (BSD) and then tweaking it.
>>>>
>>>> Thanks Julien for restarting this discussion. IMHO agreeing on a set
>>>> of style rules ahead and then applying universally all at once is not
>>>> going to be productive since we are so all over the place. Instead, I
>>>> would recommend we start piece-by-piece. We introduce a baseline style
>>>> checker, then maintainers can decide when and if they want to move
>>>> their code-base to be under the automated style checker. That way we
>>>> have a baseline and each maintainer can decide on their own terms when
>>>> they want to have their files be also style checked and in what form.
>>>> The upside of this route I think is pretty clear: we can have at least
>>>> partial automation even while we figure out what to do with some of
>>>> the more problematic files and quirks that are in our code-base. I
>>>> would highly prefer this route since I would immediately bring all
>>>> files I maintain over to the automated checker just so I never ever
>>>> have to deal with this again manually. Which style is used really
>>>> doesn't matter to me (BSD was very close, with some minor tweaks),
>>>> nor does the tool we use to check it, just as long as we have
>>>> _something_.
>>>
>>> Wouldn't it make more sense to have a patch checker instead and accept
>>> only patches which change code according to the style guide? This
>>> wouldn't require to change complete files at a time.

So what are you going to do if a new contributor is unfortunate enough 
to modify code that doesn't pass the coding style checker? Are we going 
to require them to fix the existing coding style before submitting their 
patch?

>>
>> In theory, yes. But in practice this would require that we can agree
>> on a style that applies to all patches that touch any file within Xen.
>> We can't seem to do that because there are too many exceptions and
>> corner-cases and personal-preferences of maintainers that apply only
>> to a subset of the codebase. So AFAICT what you propose doesn't seem
>> to be a viable way to start.
> 
> I think long ago we already agreed to have a control file which tells a
> style checker which style to apply (if any). As a start we could have a
> patch checker checking only the commit message (has it a Signed-off-by:
> etc.). The next step would be to add the control file, and the framework
> to split the patch into the changed file hunks and passing each hunk to
> the correct checking script (might be doing nothing in the beginning).
> And then we could add some logic to the single checkers.
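
The staged approach described above could look roughly like this (a 
purely illustrative sketch; the control-file format, path patterns, and 
style names are invented for the example, not anything agreed on the 
list):

```python
import re

# Hypothetical control file mapping path patterns to a style name;
# the last, catch-all entry means "no checking" by default.
CONTROL = {
    r"^xen/.*\.c$": "xen",
    r"^tools/libxl/.*": "libxl",
    r"^.*$": None,
}

def check_commit_message(msg):
    """Step 1: a patch checker looking only at the commit message."""
    return bool(re.search(r"^Signed-off-by: .+ <.+@.+>", msg, re.M))

def style_for(path):
    """Step 2: the control file decides which style (if any) applies."""
    for pattern, style in CONTROL.items():
        if re.match(pattern, path):
            return style
    return None

def split_hunks(patch):
    """Step 3: split a unified diff into (path, hunk-lines) pairs so
    each hunk can be passed to the checker for its file's style.
    Simplified: a real implementation would handle file boundaries."""
    hunks, path = [], None
    for line in patch.splitlines():
        if line.startswith("+++ b/"):
            path = line[len("+++ b/"):]
        elif line.startswith("@@") and path:
            hunks.append((path, []))
        elif hunks and line[:1] in "+- ":
            hunks[-1][1].append(line)
    return hunks
```

The per-style checking scripts would then start out as no-ops, exactly 
as suggested, and grow logic rule by rule.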

Yes, the framework can be written now (patches are welcome). But this 
doesn't resolve the underlying problem: aside from files imported from 
Linux, what is the coding style of Xen?

The best way to describe it at the moment is as a collection of unwritten 
rules that can differ even between maintainers of the same component. I 
don't really see how we could write a style checker based on that.

The best way to reduce the burden on reviewers, and the bikeshedding, is 
to formalize our coding style. With that, we can effectively write a 
style checker and all sorts of other tools.

I would prefer to have a common one, but we could consider small quirks 
per component if necessary.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 08 15:56:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 15:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX5MS-00054c-EH; Fri, 08 May 2020 15:56:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=JXU2=6W=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jX5MR-00054X-N1
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 15:56:27 +0000
X-Inumbo-ID: 78ef45a4-9144-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78ef45a4-9144-11ea-ae69-bc764e2007e4;
 Fri, 08 May 2020 15:56:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 19500AD35;
 Fri,  8 May 2020 15:56:28 +0000 (UTC)
Subject: Re: Xen Coding style
To: Julien Grall <julien@xen.org>, Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <ad26bbdc-5209-ce0c-7956-f8b08e6c2492@amazon.com>
 <8771c424-6340-10e5-1c1f-d72271ab8c1b@suse.com>
 <38926d4e-3429-58bc-f43c-514aed253a8e@xen.org>
 <3b55f045-c6a0-af62-c607-3a85d38cea25@suse.com>
 <63d1ceac-81bb-c916-d403-6f227b4d0ea8@xen.org>
 <CABfawhnvKgoP_EE7An8FDgJ11Ta8_gOo7tZ_J98sB_+Qir7=Yg@mail.gmail.com>
 <10ee3601-fc34-5714-30ea-abd2f2c03cfe@suse.com>
 <CABfawhkRcVavY+gkyvPfUTDdr1Xf=Xsmap11mgCi8cwcNyR=Ug@mail.gmail.com>
 <d0be31c1-f5fe-30ba-9c1a-5d37333d2479@suse.com>
 <f24a6c04-e11d-ae64-3d9a-cb3ad1ac3c14@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <1d6bab23-8652-e010-dc54-fcc91974adf1@suse.com>
Date: Fri, 8 May 2020 17:56:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <f24a6c04-e11d-ae64-3d9a-cb3ad1ac3c14@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "paul@xen.org" <paul@xen.org>, Julien Grall <jgrall@amazon.com>,
 "committers@xenproject.org" <committers@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Woodhouse,
 David" <dwmw@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.20 17:50, Julien Grall wrote:
> 
> 
> On 08/05/2020 15:42, Jürgen Groß wrote:
>> On 08.05.20 16:23, Tamas K Lengyel wrote:
>>> On Fri, May 8, 2020 at 8:18 AM Jürgen Groß <jgross@suse.com> wrote:
>>>>
>>>> On 08.05.20 14:55, Tamas K Lengyel wrote:
>>>>> On Fri, May 8, 2020 at 6:21 AM Julien Grall <julien@xen.org> wrote:
>>>>>>
>>>>>> Hi Jan,
>>>>>>
>>>>>> On 08/05/2020 12:20, Jan Beulich wrote:
>>>>>>> On 08.05.2020 12:00, Julien Grall wrote:
>>>>>>>> You seem to be the maintainer with the most unwritten rules. Would
>>>>>>>> you mind having a try at writing a coding style based on them?
>>>>>>>
>>>>>>> On the basis that even small, single aspect patches to CODING_STYLE
>>>>>>> have been ignored [1],
>>>>>>
>>>>>> Your thread is one of the examples of why I started this thread.
>>>>>> Agreeing on a specific rule doesn't work, because it either results
>>>>>> in bikeshedding or there is not enough interest to review rule by
>>>>>> rule.
>>>>>>
>>>>>>> I don't think this would be a good use of my
>>>>>>> time.
>>>>>>
>>>>>> I would have assumed that the current situation (i.e.
>>>>>> nitpicking/bikeshedding on the ML) is not a good use of your time :).
>>>>>>
>>>>>> I would be happy to put in some effort to help get the coding style
>>>>>> right; however, I believe focusing on an overall coding style would
>>>>>> be a better use of everyone's time than a rule-by-rule discussion.
>>>>>>
>>>>>>> If I was promised (reasonable) feedback, I could take what I
>>>>>>> have and try to add at least a few more things based on what I find
>>>>>>> myself commenting on more frequently. But really I'd prefer it to
>>>>>>> be done the other way around - for people to look at the patches
>>>>>>> already sent, and for me to only subsequently send more. After all,
>>>>>>> if already those adjustments are controversial, I don't think we
>>>>>>> could settle on others.
>>>>>> While I understand this requires another investment on your part,
>>>>>> I am afraid it is going to be painful for someone else to go through
>>>>>> all the existing coding style bikeshedding and infer your unwritten
>>>>>> rules.
>>>>>>
>>>>>> It might be more beneficial for that person to pursue the work done
>>>>>> by Tamas and Viktor in the past (see my previous e-mail). This may
>>>>>> mean adopting an existing coding style (BSD) and then tweaking it.
>>>>>
>>>>> Thanks Julien for restarting this discussion. IMHO agreeing on a set
>>>>> of style rules ahead and then applying universally all at once is not
>>>>> going to be productive since we are so all over the place. Instead, I
>>>>> would recommend we start piece-by-piece. We introduce a baseline style
>>>>> checker, then maintainers can decide when and if they want to move
>>>>> their code-base to be under the automated style checker. That way we
>>>>> have a baseline and each maintainer can decide on their own terms when
>>>>> they want to have their files be also style checked and in what form.
>>>>> The upside of this route I think is pretty clear: we can have at least
>>>>> partial automation even while we figure out what to do with some of
>>>>> the more problematic files and quirks that are in our code-base. I
>>>>> would highly prefer this route since I would immediately bring all
>>>>> files I maintain over to the automated checker just so I never ever
>>>>> have to deal with this again manually. Which style is used really
>>>>> doesn't matter to me (BSD was very close, with some minor tweaks),
>>>>> nor does the tool we use to check it, just as long as we have
>>>>> _something_.
>>>>
>>>> Wouldn't it make more sense to have a patch checker instead and accept
>>>> only patches which change code according to the style guide? This
>>>> wouldn't require to change complete files at a time.
> 
> So what are you going to do if a new contributor is unfortunate enough 
> to modify code that doesn't pass the coding style checker? Are we going 
> to require them to fix the existing coding style before submitting 
> their patch?

The modified portions should comply with the coding style, same as in 
the Linux kernel.
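
In that model only the lines a patch adds or changes are checked, much 
as Linux's checkpatch.pl does, so untouched legacy code is never 
flagged. A minimal, purely illustrative sketch of diff-only checking 
(the rules shown are examples, not agreed Xen rules):

```python
def check_added_lines(patch, max_width=80):
    """Report style problems only on the '+' lines of a unified diff."""
    problems = []
    for n, line in enumerate(patch.splitlines(), 1):
        # Skip context/removed lines and the '+++ b/...' file header.
        if not line.startswith("+") or line.startswith("+++"):
            continue
        code = line[1:]
        if "\t" in code:
            problems.append((n, "hard tab (example rule: spaces only)"))
        if len(code) > max_width:
            problems.append((n, f"line longer than {max_width} columns"))
        if code != code.rstrip():
            problems.append((n, "trailing whitespace"))
    return problems
```

A contributor touching an old file would then only need to clean up the 
lines their patch actually modifies.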

> 
>>>
>>> In theory, yes. But in practice this would require that we can agree
>>> on a style that applies to all patches that touch any file within Xen.
>>> We can't seem to do that because there are too many exceptions and
>>> corner-cases and personal-preferences of maintainers that apply only
>>> to a subset of the codebase. So AFAICT what you propose doesn't seem
>>> to be a viable way to start.
>>
>> I think long ago we already agreed to have a control file which tells a
>> style checker which style to apply (if any). As a start we could have a
>> patch checker checking only the commit message (has it a Signed-off-by:
>> etc.). The next step would be to add the control file, and the framework
>> to split the patch into the changed file hunks and passing each hunk to
>> the correct checking script (might be doing nothing in the beginning).
>> And then we could add some logic to the single checkers.
> 
> Yes, the framework can be written now (patches are welcome). But this 
> doesn't resolve the underlying problem: aside from files imported from 
> Linux, what is the coding style of Xen?

The checker for Xen style needs to be written, yes.

> The best way to describe it at the moment is as a collection of 
> unwritten rules that can differ even between maintainers of the same 
> component. I don't really see how we could write a style checker based 
> on that.

The style checker's rules must be written down, of course.

And today there are already some rules documented.

> 
> The best way to reduce the burden on reviewers, and the bikeshedding, 
> is to formalize our coding style. With that, we can effectively write 
> a style checker and all sorts of other tools.

Correct. A checker should only check what is written down. Not more,
maybe less in the beginning.
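
One way to guarantee that the checker checks only what is written down 
is to drive it from a declarative rule list, so every diagnostic 
corresponds one-to-one to a documented rule (the rule wording and 
patterns below are invented for illustration, not taken from 
CODING_STYLE):

```python
import re

# Each entry pairs a documented rule, quoted as it might appear in a
# style document, with a machine-checkable pattern.  Nothing is
# checked unless a written rule exists for it.
WRITTEN_RULES = [
    ("Indentation is done with spaces, not tabs", re.compile(r"\t")),
    ("No trailing whitespace", re.compile(r"[ \t]+$")),
]

def check_line(line):
    """Return the documented rules a single line violates."""
    return [text for text, pat in WRITTEN_RULES if pat.search(line)]
```

Adding a check then always means first adding (or citing) a written 
rule, which keeps the tool and the document in sync.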

> I would prefer to have a common one, but we could consider small 
> quirks per component if necessary.

I would say: only Xen style, Linux kernel style, libxl style (why do we
have different styles in one project???), maybe BSD, and maybe no style
at all.

Ah yes, and Makefile style, doc style, Kconfig style, ...


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 08 16:09:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 16:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX5Yo-0006aT-Lo; Fri, 08 May 2020 16:09:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Ij8=6W=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jX5Yn-0006aO-Kz
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 16:09:13 +0000
X-Inumbo-ID: 40f3d034-9146-11ea-a030-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40f3d034-9146-11ea-a030-12813bfff9fa;
 Fri, 08 May 2020 16:09:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588954153;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=KOX86speoIalQT0X5DM5CUM7PMw5d3xQsKvfUmiKhpg=;
 b=KjjdfQLr/GSA2AANgKhJpcB17lkcOWDybpKpod/+jSJiB240IQeaDG2Y
 GlOXr58IPHYRiQ8/KebDBIj2HRXZKF4xeXlx61hEdOYgsTi3ff2v2gl2W
 SzZX4Fp13x0E2xFBBwpiyZLRID6g6DH2w6V5KFK3dSsmQK66RFtuqqvWF E=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 7mWj5OG18WUiZhGexg3G3pxCV8+q65xvfkIDkBxk/UX0tY5tJIOs2EOxBbv7bBRVBRsEQl/6gq
 6fgvyT0UeUhUrjyvWZIm5iaEaJmwYA7SmeBbhWiYyIDmpu7LVYb2A8wzEJw3Ux4DXRkebiN6fB
 tfpMM3tZWqYoAhqx1ICe36XMgeivfAr9tvZS7UeEJINqQsk5IweJ2006J6aKywMIoMl2bQUQVA
 1p01Ttjv9dN6DjoqXe2Ly3op4c9cWg1GjsDKKsBA1c+fl6TjxpwEf6KaUrpGi9g3zeXmu8xt9J
 nr4=
X-SBRS: 2.7
X-MesageID: 17075937
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17075937"
Date: Fri, 8 May 2020 18:08:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
Message-ID: <20200508160851.GK1353@Air-de-Roger>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
 <20200508150312.GJ1353@Air-de-Roger>
 <70c8b4f4-b690-c031-3b90-1776d872d171@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <70c8b4f4-b690-c031-3b90-1776d872d171@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 08, 2020 at 05:11:35PM +0200, Jan Beulich wrote:
> On 08.05.2020 17:03, Roger Pau Monné wrote:
> > On Fri, May 08, 2020 at 02:43:38PM +0200, Jan Beulich wrote:
> >> --- a/xen/arch/x86/hvm/io.c
> >> +++ b/xen/arch/x86/hvm/io.c
> >> @@ -558,6 +558,47 @@ int register_vpci_mmcfg_handler(struct d
> >>      return 0;
> >>  }
> >>  
> >> +int unregister_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> >> +                                  unsigned int start_bus, unsigned int end_bus,
> >> +                                  unsigned int seg)
> >> +{
> >> +    struct hvm_mmcfg *mmcfg;
> >> +    int rc = -ENOENT;
> >> +
> >> +    ASSERT(is_hardware_domain(d));
> >> +
> >> +    if ( start_bus > end_bus )
> >> +        return -EINVAL;
> >> +
> >> +    write_lock(&d->arch.hvm.mmcfg_lock);
> >> +
> >> +    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
> >> +        if ( mmcfg->addr == addr + (start_bus << 20) &&
> >> +             mmcfg->segment == seg &&
> >> +             mmcfg->start_bus == start_bus &&
> >> +             mmcfg->size == ((end_bus - start_bus + 1) << 20) )
> >> +        {
> >> +            list_del(&mmcfg->next);
> >> +            if ( !list_empty(&d->arch.hvm.mmcfg_regions) )
> >> +                xfree(mmcfg);
> >> +            else
> >> +            {
> >> +                /*
> >> +                 * Cannot unregister the MMIO handler - leave a fake entry
> >> +                 * on the list.
> >> +                 */
> >> +                memset(mmcfg, 0, sizeof(*mmcfg));
> >> +                list_add(&mmcfg->next, &d->arch.hvm.mmcfg_regions);
> > 
> > Instead of leaving this zombie entry around maybe we could add a
> > static bool in register_vpci_mmcfg_handler to signal whether the MMIO
> > intercept has been registered?
> 
> That was my initial plan indeed, but registration is per-domain.

Indeed, this would work now because it's only used by the hardware
domain, but it's not a good move long term.

What about splitting the registration into a separate
register_vpci_mmio_handler and calling it from hvm_domain_initialise,
like it's done for register_vpci_portio_handler?

That might be cleaner long term, sorry if it's more work.

> >> --- a/xen/arch/x86/physdev.c
> >> +++ b/xen/arch/x86/physdev.c
> >> @@ -559,12 +559,18 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
> >>          if ( !ret && has_vpci(currd) )
> >>          {
> >>              /*
> >> -             * For HVM (PVH) domains try to add the newly found MMCFG to the
> >> -             * domain.
> >> +             * For HVM (PVH) domains try to add/remove the reported MMCFG
> >> +             * to/from the domain.
> >>               */
> >> -            ret = register_vpci_mmcfg_handler(currd, info.address,
> >> -                                              info.start_bus, info.end_bus,
> >> -                                              info.segment);
> >> +            if ( info.flags & XEN_PCI_MMCFG_RESERVED )
> > 
> > Do you think you could also add a small note in physdev.h regarding
> > the fact that XEN_PCI_MMCFG_RESERVED is used to register a MMCFG
> > region, and not setting it would imply an unregister request?
> > 
> > It's not obvious to me from the name of the flag.
> 
> The main purpose of the flag is to identify whether a region can be
> used (because of having been found marked suitably reserved by
> firmware). The flag not set effectively means "region is not marked
> reserved".

Looking at pci_mmcfg_arch_disable, should the region then also be
removed from mmio_ro_ranges? (kind of tangential to this patch)

> You pointing this out makes me wonder whether instead I
> should simply expand the if() in context, without making it behave
> like unregistration. Then again we'd have no way to unregister a
> region, and hence (ab)using this function for this purpose seems to
> make sense (and, afaict, not require any code changes elsewhere).

Right now the only user of PHYSDEVOP_pci_mmcfg_reserved that I know of
is Linux, and AFAICT it always sets the XEN_PCI_MMCFG_RESERVED flag (at
least upstream).

I don't mind that much what we end up doing, as long as it's
documented in physdev.h. There's no documentation of that physdevop
hypercall at all, so if we provide proper documentation I would be
fine with treating a call with no flags as an unregistration request
(which is kind of what we already do for a classic PV hardware
domain).
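Something along these lines next to the flag definition in physdev.h
would capture it; the wording is just a suggestion, not the committed
interface:

```c
/*
 * Report a PCI MMCFG (ECAM) region to the hypervisor.
 *
 * XEN_PCI_MMCFG_RESERVED set: firmware has marked the region as
 * suitably reserved, and Xen may register it and allow config space
 * accesses through it.
 *
 * XEN_PCI_MMCFG_RESERVED clear: the region is not (or no longer)
 * marked reserved; a previously reported region is unregistered.
 */
```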

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 08 16:22:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 16:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX5lK-0008Co-UA; Fri, 08 May 2020 16:22:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Ij8=6W=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jX5lJ-0008Cg-Kz
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 16:22:09 +0000
X-Inumbo-ID: 0ff317b6-9148-11ea-9887-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ff317b6-9148-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 16:22:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588954928;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=NfoLnuPCoU1UJG0a6YeuDpHB4+57UytJprgA800ZQiQ=;
 b=JNh7fgwyYUPsppK95wExAuLxILt/zxZu5R3Bn4jQLbYzu281e977NRNC
 0C9K2YaX2Crt4FnBUoUASN1gTHBUr7k7zopS84pp3B6HHqFK3qqLA0f/h
 ivqjXHiPplSmKnNCrJieXKd92MwJKrB6O3Poo5DfjrG/8p/Jw9zNsZ5Ew E=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: q1dhdRVf6RUsqoOq+0ooojt/r6aPIoQ+fjH8EdTDiThnvpi2e9GQi4724QY5Cm+42FhADY1vzU
 qD+sX2GCynFYMb+Eg+c+NLQqNYTIuTo2hG2OyhbyKf1My0vqsVhC6Eg0cZGpba1noMJCNjF9gg
 PMB7FP9eqV5/2QXW+gyR5+h+k0ItEbaryyyk+GGaxnqOYnG37lLtfDaHZsgLmQQTQ/LTIw0x6W
 M1hBocE6v5Ecip6v51ewnbOTr6sCjrg695e5THdRybolGLRX28SMNDUUINNxJ+sdjnN5FIpYa+
 Ds4=
X-SBRS: 2.7
X-MesageID: 17458741
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17458741"
Date: Fri, 8 May 2020 18:21:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
Message-ID: <20200508162155.GL1353@Air-de-Roger>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
 <20200508133720.GH1353@Air-de-Roger>
 <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 08, 2020 at 05:04:02PM +0200, Jan Beulich wrote:
> On 08.05.2020 15:37, Roger Pau Monné wrote:
> > On Tue, May 05, 2020 at 10:16:20AM +0200, Jan Beulich wrote:
> >> --- a/tools/tests/x86_emulator/test_x86_emulator.c
> >> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
> >> @@ -11648,6 +11651,89 @@ int x86_emul_blk(
> >>  
> >>  #ifndef X86EMUL_NO_FPU
> >>  
> >> +    case blk_fld:
> >> +        ASSERT(!data);
> >> +
> >> +        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
> >> +        switch ( bytes )
> >> +        {
> >> +        case sizeof(fpstate.env):
> >> +        case sizeof(fpstate):
> >> +            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
> >> +            if ( !state->rex_prefix )
> >> +            {
> >> +                unsigned int fip = fpstate.env.mode.real.fip_lo +
> >> +                                   (fpstate.env.mode.real.fip_hi << 16);
> >> +                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
> >> +                                   (fpstate.env.mode.real.fdp_hi << 16);
> >> +                unsigned int fop = fpstate.env.mode.real.fop;
> >> +
> >> +                fpstate.env.mode.prot.fip = fip & 0xf;
> >> +                fpstate.env.mode.prot.fcs = fip >> 4;
> >> +                fpstate.env.mode.prot.fop = fop;
> >> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
> >> +                fpstate.env.mode.prot.fds = fdp >> 4;
> > 
> > I've found the layouts in the SDM vol. 1, but I haven't been able to
> > find the translation mechanism from real to protected. Could you
> > maybe add a reference here?
> 
> A reference to some piece of documentation? I don't think this
> is spelled out anywhere. It's also only one of various possible
> ways of doing the translation, but among them the most flexible
> one for possible consumers of the data (because of using the
> smallest possible offsets into the segments).

Having this written down as a comment would help, but maybe that's
just because I'm not familiar at all with all this stuff.

Again, likely a very stupid question, but I would expect:

fpstate.env.mode.prot.fip = fip;

Without the mask.

> >> +            }
> >> +
> >> +            if ( bytes == sizeof(fpstate.env) )
> >> +                ptr = NULL;
> >> +            else
> >> +                ptr += sizeof(fpstate.env);
> >> +            break;
> >> +
> >> +        case sizeof(struct x87_env16):
> >> +        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
> >> +        {
> >> +            const struct x87_env16 *env = ptr;
> >> +
> >> +            fpstate.env.fcw = env->fcw;
> >> +            fpstate.env.fsw = env->fsw;
> >> +            fpstate.env.ftw = env->ftw;
> >> +
> >> +            if ( state->rex_prefix )
> >> +            {
> >> +                fpstate.env.mode.prot.fip = env->mode.prot.fip;
> >> +                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
> >> +                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
> >> +                fpstate.env.mode.prot.fds = env->mode.prot.fds;
> >> +                fpstate.env.mode.prot.fop = 0; /* unknown */
> >> +            }
> >> +            else
> >> +            {
> >> +                unsigned int fip = env->mode.real.fip_lo +
> >> +                                   (env->mode.real.fip_hi << 16);
> >> +                unsigned int fdp = env->mode.real.fdp_lo +
> >> +                                   (env->mode.real.fdp_hi << 16);
> >> +                unsigned int fop = env->mode.real.fop;
> >> +
> >> +                fpstate.env.mode.prot.fip = fip & 0xf;
> >> +                fpstate.env.mode.prot.fcs = fip >> 4;
> >> +                fpstate.env.mode.prot.fop = fop;
> >> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
> >> +                fpstate.env.mode.prot.fds = fdp >> 4;
> > 
> > This looks mostly the same as the translation done above, so maybe
> > could be abstracted anyway in a macro to avoid the code repetition?
> > (ie: fpstate_real_to_prot(src, dst) or some such).
> 
> Just the 5 assignments could be put in an inline function, but
> if we also wanted to abstract away the declarations with their
> initializers, it would need to be a macro because of the
> different types of fpstate.env and *env. While I'd generally
> prefer inline functions, the macro would have the benefit that
> it could be #define-d / #undef-d right inside this case block.
> Thoughts? 

I think a macro would be fine.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 08 17:24:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 17:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX6jV-0004qj-Gu; Fri, 08 May 2020 17:24:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5Ij8=6W=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jX6jU-0004qe-Qo
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 17:24:20 +0000
X-Inumbo-ID: c011ce5a-9150-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c011ce5a-9150-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 17:24:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588958660;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=a4Rvv8S+IPmbc9kmP71z6n6Wz2alFja5VdgAXPtIEI8=;
 b=Y7tr7/lRfg9utwYEcKnSKDtN4DHxpJHU9z8gtjf6FkOz18k1eFNa/LnB
 18A+xD4vwK4flNZ3HZ2NcA5A86sOhX7qHpRrKE0vunBz7G2o26x3c92Un
 T016lSADGZfNdFzOkWWL5byb1IbD2bqS2LRKZTbENipKIS4gPeYt2dMhG 4=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-Ironport-Dmarc-Check-Result: validskip
IronPort-SDR: VNt+codr5fEJdm6Hnyif6ikrWE1lGnEJpLq42oPIZ1+vcN9YRMNhuwhh1LqLN+LSoy1hkwxAFV
 hkRkMPp2qx19H+la4OjEdBbqV4kTXAnML+PFSrnaT0zBT4QDKBPf591Y7BEp+jmcjGCMHZ3KbB
 yAlATYXhL/4G4wUSnDbf7/ihwhow1tEBZpFKCj6Ka8O958j/iiFWOQPzhmz3iDwnbz1XOvlIkA
 uh8Db20KVYyznnk9AKYj/6XTQE4dhqqHsfiXFNNbffhtV8RFomPm9iFtbIskw2TSDUPsvV+EEl
 zIs=
X-SBRS: 2.7
X-MesageID: 17109192
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17109192"
Date: Fri, 8 May 2020 19:24:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/idle: prevent entering C6 with in service interrupts
 on Intel
Message-ID: <20200508172411.GM1353@Air-de-Roger>
References: <20200507132236.26010-1-roger.pau@citrix.com>
 <3d147b74-81dd-83b8-7035-67c5ceb72c5f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <3d147b74-81dd-83b8-7035-67c5ceb72c5f@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 08, 2020 at 03:46:08PM +0200, Jan Beulich wrote:
> On 07.05.2020 15:22, Roger Pau Monne wrote:
> > diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
> > index b83446e77d..5023fea148 100644
> > --- a/xen/arch/x86/acpi/cpu_idle.c
> > +++ b/xen/arch/x86/acpi/cpu_idle.c
> > @@ -573,6 +573,25 @@ static bool errata_c6_eoi_workaround(void)
> >      return (fix_needed && cpu_has_pending_apic_eoi());
> >  }
> >  
> > +static int8_t __read_mostly disable_c6_isr = -1;
> > +boolean_param("disable-c6-isr", disable_c6_isr);
> > +
> > +/*
> > + * Errata CLX30: A Pending Fixed Interrupt May Be Dispatched Before an
> > + * Interrupt of The Same Priority Completes.
> > + *
> > + * Prevent entering C6 if there are pending lapic interrupts, or else the
> > + * processor might dispatch further pending interrupts before the first one has
> > + * been completed.
> > + */
> > +bool errata_c6_isr_workaround(void)
> > +{
> > +    if ( unlikely(disable_c6_isr == -1) )
> > +        disable_c6_isr = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
> > +
> > +    return disable_c6_isr && cpu_has_pending_apic_eoi();
> 
> This check being the same as in errata_c6_eoi_workaround(),
> why don't you simply extend that function? (See also below.)
> Also both variable and command line option descriptor could
> go inside the function, to limit their scopes.

Since this is actually a superset (as it covers all Intel CPUs), I
should delete the previous (more restricted) workaround matching
logic.

> > @@ -676,7 +695,8 @@ static void acpi_processor_idle(void)
> >          return;
> >      }
> >  
> > -    if ( (cx->type == ACPI_STATE_C3) && errata_c6_eoi_workaround() )
> > +    if ( (cx->type == ACPI_STATE_C3) &&
> > +         (errata_c6_eoi_workaround() || errata_c6_isr_workaround()) )
> >          cx = power->safe_state;
> 
> I realize you only add to existing code, but I'm afraid this
> existing code cannot be safely added to. Already prior to
> your change there was a curious mismatch of C3 and c6 on this
> line, and I don't see how ACPI_STATE_C3 correlates with
> "core C6" state. Now this may have been the convention for
> Nehalem/Westmere systems, but already the mwait-idle entries
> for these CPU models have 4 entries (albeit such that they
> match this scheme). As a result I think this at the very
> least needs to be >=, not ==, even more so now that you want
> to cover all Intel CPUs.

Hm, I think this is because AFAICT acpi_processor_idle only
understands up to ACPI_STATE_C3; passing a type > ACPI_STATE_C3 will
just cause it to fall back to C0. I've adjusted the comparison to use
>= instead, as a safety improvement in case we add support for more
states to acpi_processor_idle.

> Obviously (I think) it is a mistake that mwait-idle doesn't
> also activate this workaround, which imo is another reason to
> stick to just errata_c6_eoi_workaround().

I assumed the previous workaround didn't apply to any of the CPUs
supported by the mwait-idle driver, since it seems to only support
recent-ish models.

> > --- a/xen/arch/x86/cpu/mwait-idle.c
> > +++ b/xen/arch/x86/cpu/mwait-idle.c
> > @@ -770,6 +770,9 @@ static void mwait_idle(void)
> >  		return;
> >  	}
> >  
> > +	if (cx->type == ACPI_STATE_C3 && errata_c6_isr_workaround())
> > +		cx = power->safe_state;
> 
> Here it becomes even more relevant I think that == not be
> used, as the static tables list deeper C-states; it's just
> that the SKX table, which also gets used for CLX afaict, has
> no entries beyond C6-SKX. Others with deeper ones presumably
> have the deeper C-states similarly affected (or not) by this
> erratum.
> 
> For reference, mwait_idle_cpu_init() has
> 
> 		hint = flg2MWAIT(cpuidle_state_table[cstate].flags);
> 		state = MWAIT_HINT2CSTATE(hint) + 1;
>                 ...
> 		cx->type = state;
> 
> i.e. the value you compare against is derived from the static
> table entries. For Nehalem/Westmere this means that what goes
> under ACPI_STATE_C3 is indeed C6-NHM, and this looks to match
> for all the non-Atoms, but for none of the Atoms. Now Atoms
> could easily be unaffected, but (just to take an example) if
> C6-SNB was affected, surely C7-SNB would be, too.

Yes, I've adjusted this to use cx->type >= 3 instead. I have to admit
I'm confused by the states being named C6-* while the cstate type
reported by Xen will be 3 (I would instead expect the type to be 6 in
order to match the name).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 08 17:49:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 17:49:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX77d-0006cM-KZ; Fri, 08 May 2020 17:49:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jX77b-0006cH-Rr
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 17:49:15 +0000
X-Inumbo-ID: 380822c6-9154-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 380822c6-9154-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 17:49:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FtlcKGIASCUq4WLs6dTmBa17DXpPoInCY8pdakIC9Zw=; b=eEqQ4uQUgEms/1eiOyMDlv5tb
 iv0QutVaOVZHw6s7zAxMEKlvjt/Hixb+Ix8OGcahYcjUtsL9Ei8jV0FNf+ocOTH1cH/9HO6VIZpZu
 nWinpr58TNRUhyJId3xPbXtP0xq5vQzdAHW41xHEsTV6LIzVrsH4d4Uz6RGR19yZWGJhw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX77V-0005wH-0a; Fri, 08 May 2020 17:49:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX77U-00024i-P0; Fri, 08 May 2020 17:49:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jX77U-000556-OM; Fri, 08 May 2020 17:49:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150077-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150077: regressions - FAIL
X-Osstest-Failures: qemu-mainline:build-i386-xsm:xen-build:fail:regression
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=3c7adbc67d9a5c3e992a4dd13b8704464daaad5b
X-Osstest-Versions-That: qemuu=570a9214827e3d42f7173c4d4c9f045b99834cf0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 17:49:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150077 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150077/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm                6 xen-build                fail REGR. vs. 150061

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150061
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150061
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150061
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150061
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150061
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150061
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3c7adbc67d9a5c3e992a4dd13b8704464daaad5b
baseline version:
 qemuu                570a9214827e3d42f7173c4d4c9f045b99834cf0

Last test of basis   150061  2020-05-06 22:06:59 Z    1 days
Testing same since   150077  2020-05-07 15:37:29 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Chen Qun <kuhn.chenqun@huawei.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Gibson <david@gibson.dropbear.id.au>
  Eric Auger <eric.auger@redhat.com>
  Greg Kurz <groug@kaod.org>
  Maxim Levitsky <mlevitsk@redhat.com>
  Nicholas Piggin <npiggin@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>
  Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Tong Ho <tong.ho@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 770 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 08 17:58:14 2020
Subject: Re: [PATCH v8 07/12] x86emul: support FNSTENV and FNSAVE
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <9a2afbb1-af92-2c7d-9fde-d8d8e4563a2a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0e222090-c989-45b5-65be-efb09e7b9bb9@citrix.com>
Date: Fri, 8 May 2020 18:58:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <9a2afbb1-af92-2c7d-9fde-d8d8e4563a2a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>

On 05/05/2020 09:15, Jan Beulich wrote:
> To avoid introducing another boolean into emulator state, the
> rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode
> info (affecting structure layout, albeit not size) to x86_emul_blk().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: The full 16-bit padding fields in the 32-bit structures get filled
>      with all ones by modern CPUs (i.e. other than the comment says for

You really mean "unlike" here, rather than "other".  They do not have
the same meaning in this context.

(I think you're also missing a "what", which I'm guessing is just an
oversight.)

>      FIP and FDP). We may want to mirror this as well (for the real mode
>      variant), even if those fields' contents are unspecified.

This is surprising behaviour, but I expect it dates back to external x87
processors and default MMIO behaviour.

If it is entirely consistent, it might be nice to match.  OTOH, the
manuals are very clear that it is reserved, which I think gives us the
liberty to use the easier implementation.

> ---
> v7: New.
>
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -897,6 +900,50 @@ struct x86_emulate_state {
>  #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
>  #endif
>  
> +#ifndef X86EMUL_NO_FPU
> +struct x87_env16 {
> +    uint16_t fcw;
> +    uint16_t fsw;
> +    uint16_t ftw;
> +    union {
> +        struct {
> +            uint16_t fip_lo;
> +            uint16_t fop:11, :1, fip_hi:4;
> +            uint16_t fdp_lo;
> +            uint16_t :12, fdp_hi:4;
> +        } real;
> +        struct {
> +            uint16_t fip;
> +            uint16_t fcs;
> +            uint16_t fdp;
> +            uint16_t fds;
> +        } prot;
> +    } mode;
> +};
> +
> +struct x87_env32 {
> +    uint32_t fcw:16, :16;
> +    uint32_t fsw:16, :16;
> +    uint32_t ftw:16, :16;

uint16_t fcw, :16;
uint16_t fsw, :16;
uint16_t ftw, :16;

which reduces the number of 16 bit bitfields.

> +    union {
> +        struct {
> +            /* some CPUs/FPUs also store the full FIP here */
> +            uint32_t fip_lo:16, :16;
> +            uint32_t fop:11, :1, fip_hi:16, :4;
> +            /* some CPUs/FPUs also store the full FDP here */
> +            uint32_t fdp_lo:16, :16;
> +            uint32_t :12, fdp_hi:16, :4;

Annoyingly, two of these lines can't use uint16_t.  I'm torn as to
whether to suggest converting the other two which can.

> +        } real;
> +        struct {
> +            uint32_t fip;
> +            uint32_t fcs:16, fop:11, :5;
> +            uint32_t fdp;
> +            uint32_t fds:16, :16;

These two can be converted safely.

> @@ -4912,9 +4959,19 @@ x86_emulate(
>                      goto done;
>                  emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
>                  break;
> -            case 6: /* fnstenv - TODO */
> +            case 6: /* fnstenv */
> +                fail_if(!ops->blk);
> +                state->blk = blk_fst;
> +                /* REX is meaningless for this insn by this point. */
> +                rex_prefix = in_protmode(ctxt, ops);

Code like this is why I have such a strong objection to obfuscating macros.

It reads as if you're updating a local variable, alongside a comment
explaining that it is meaningless.

It is critically important for clarity that the comment state that
you're (ab)using the field to pass information into ->blk(), and I'd go
so far as suggesting

/*state->*/rex_prefix = in_protmode(ctxt, ops);

to reinforce the point that rex_prefix isn't a local variable, seeing
the obfuscation prevents a real state->rex_prefix from working.

> +                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
> +                                    op_bytes > 2 ? sizeof(struct x87_env32)
> +                                                 : sizeof(struct x87_env16),
> +                                    &_regs.eflags,
> +                                    state, ctxt)) != X86EMUL_OKAY )
> +                    goto done;
>                  state->fpu_ctrl = true;
> -                goto unimplemented_insn;
> +                break;
>              case 7: /* fnstcw m2byte */
>                  state->fpu_ctrl = true;
>              fpu_memdst16:
> @@ -5068,9 +5125,21 @@ x86_emulate(
>                  emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
>                  break;
>              case 4: /* frstor - TODO */
> -            case 6: /* fnsave - TODO */
>                  state->fpu_ctrl = true;
>                  goto unimplemented_insn;
> +            case 6: /* fnsave */
> +                fail_if(!ops->blk);
> +                state->blk = blk_fst;
> +                /* REX is meaningless for this insn by this point. */
> +                rex_prefix = in_protmode(ctxt, ops);
> +                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
> +                                    op_bytes > 2 ? sizeof(struct x87_env32) + 80
> +                                                 : sizeof(struct x87_env16) + 80,
> +                                    &_regs.eflags,
> +                                    state, ctxt)) != X86EMUL_OKAY )
> +                    goto done;
> +                state->fpu_ctrl = true;
> +                break;
>              case 7: /* fnstsw m2byte */
>                  state->fpu_ctrl = true;
>                  goto fpu_memdst16;
> @@ -11542,6 +11611,12 @@ int x86_emul_blk(
>      switch ( state->blk )
>      {
>          bool zf;
> +        struct {
> +            struct x87_env32 env;
> +            struct {
> +               uint8_t bytes[10];
> +            } freg[8];
> +        } fpstate;
>  
>          /*
>           * Throughout this switch(), memory clobbers are used to compensate
> @@ -11571,6 +11646,93 @@ int x86_emul_blk(
>              *eflags |= X86_EFLAGS_ZF;
>          break;
>  
> +#ifndef X86EMUL_NO_FPU
> +
> +    case blk_fst:
> +        ASSERT(!data);
> +
> +        if ( bytes > sizeof(fpstate.env) )
> +            asm ( "fnsave %0" : "=m" (fpstate) );
> +        else
> +            asm ( "fnstenv %0" : "=m" (fpstate.env) );

We have 4 possible sizes to deal with here - the 16 and 32bit formats of
prot vs real/vm86 modes, and it is not clear (code wise) why
sizeof(fpstate.env) is a suitable boundary.

Given that these are legacy instructions and not a hotpath in the
slightest, it is possibly faster (by removing the branch) and definitely
more obvious to use fnsave unconditionally, and derive all of the
smaller layouts that way.

Critically however, it prevents us from needing a CVE/XSA if ... [bottom
comment]

> +
> +        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
> +        switch ( bytes )
> +        {
> +        case sizeof(fpstate.env):
> +        case sizeof(fpstate):

These case labels don't match up in any kind of obvious way to the
caller.  I think you need /* 32bit FNSAVE */ and /* 32bit FNSTENV */
here, and

> +            if ( !state->rex_prefix )

if ( !state->rex_prefix ) /* Convert 32bit prot to 32bit real/vm86 format */

here.

> +            {
> +                unsigned int fip = fpstate.env.mode.prot.fip +
> +                                   (fpstate.env.mode.prot.fcs << 4);
> +                unsigned int fdp = fpstate.env.mode.prot.fdp +
> +                                   (fpstate.env.mode.prot.fds << 4);
> +                unsigned int fop = fpstate.env.mode.prot.fop;
> +
> +                memset(&fpstate.env.mode, 0, sizeof(fpstate.env.mode));
> +                fpstate.env.mode.real.fip_lo = fip;
> +                fpstate.env.mode.real.fip_hi = fip >> 16;
> +                fpstate.env.mode.real.fop = fop;
> +                fpstate.env.mode.real.fdp_lo = fdp;
> +                fpstate.env.mode.real.fdp_hi = fdp >> 16;
> +            }
> +            memcpy(ptr, &fpstate.env, sizeof(fpstate.env));
> +            if ( bytes == sizeof(fpstate.env) )
> +                ptr = NULL;
> +            else
> +                ptr += sizeof(fpstate.env);
> +            break;
> +
> +        case sizeof(struct x87_env16):
> +        case sizeof(struct x87_env16) + sizeof(fpstate.freg):

Similarly, this wants /* 16bit FNSAVE */ and /* 16bit FNSTENV */.  I'm
tempted to suggest a literal 80 rather than sizeof(fpstate.freg) to
match the caller.

> +            if ( state->rex_prefix )

/* Convert 32bit prot to 16bit prot format */

> +            {
> +                struct x87_env16 *env = ptr;
> +
> +                env->fcw = fpstate.env.fcw;
> +                env->fsw = fpstate.env.fsw;
> +                env->ftw = fpstate.env.ftw;
> +                env->mode.prot.fip = fpstate.env.mode.prot.fip;
> +                env->mode.prot.fcs = fpstate.env.mode.prot.fcs;
> +                env->mode.prot.fdp = fpstate.env.mode.prot.fdp;
> +                env->mode.prot.fds = fpstate.env.mode.prot.fds;
> +            }
> +            else
> +            {

/* Convert 32bit prot to 16bit real/vm86 format */

> +                unsigned int fip = fpstate.env.mode.prot.fip +
> +                                   (fpstate.env.mode.prot.fcs << 4);
> +                unsigned int fdp = fpstate.env.mode.prot.fdp +
> +                                   (fpstate.env.mode.prot.fds << 4);
> +                struct x87_env16 env = {
> +                    .fcw = fpstate.env.fcw,
> +                    .fsw = fpstate.env.fsw,
> +                    .ftw = fpstate.env.ftw,
> +                    .mode.real.fip_lo = fip,
> +                    .mode.real.fip_hi = fip >> 16,
> +                    .mode.real.fop = fpstate.env.mode.prot.fop,
> +                    .mode.real.fdp_lo = fdp,
> +                    .mode.real.fdp_hi = fdp >> 16
> +                };
> +
> +                memcpy(ptr, &env, sizeof(env));
> +            }
> +            if ( bytes == sizeof(struct x87_env16) )
> +                ptr = NULL;
> +            else
> +                ptr += sizeof(struct x87_env16);
> +            break;
> +
> +        default:
> +            ASSERT_UNREACHABLE();
> +            return X86EMUL_UNHANDLEABLE;
> +        }
> +
> +        if ( ptr )
> +            memcpy(ptr, fpstate.freg, sizeof(fpstate.freg));

... we get here accidentally, and then copy stack rubble into the guest.

~Andrew

> +        break;
> +
> +#endif /* X86EMUL_NO_FPU */
> +
>      case blk_movdir:
>          switch ( bytes )
>          {
>



From xen-devel-bounces@lists.xenproject.org Fri May 08 18:19:57 2020
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f60dbeea-1054-486f-d1f5-ab9da46c951a@citrix.com>
Date: Fri, 8 May 2020 19:19:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:16, Jan Beulich wrote:
> While the Intel SDM claims that FRSTOR itself may raise #MF upon
> completion, Intel have confirmed this to be a doc error which will be
> corrected in due course; behavior is like FLDENV, as old hard copy
> manuals describe it. Otherwise we'd have to emulate the insn by filling
> st(N) in suitable order, followed by FLDENV.

I wouldn't bother calling this out at all.  We know the doc is going to
be corrected in the next revision, and "what we would have done if the
doc error were in fact accurate" only adds confusion to an already
complicated change.

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -857,6 +857,7 @@ struct x86_emulate_state {
>          blk_NONE,
>          blk_enqcmd,
>  #ifndef X86EMUL_NO_FPU
> +        blk_fld, /* FLDENV, FRSTOR */
>          blk_fst, /* FNSTENV, FNSAVE */
>  #endif
>          blk_movdir,
> @@ -4948,21 +4949,14 @@ x86_emulate(
>                  dst.bytes = 4;
>                  emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
>                  break;
> -            case 4: /* fldenv - TODO */
> -                state->fpu_ctrl = true;
> -                goto unimplemented_insn;
> -            case 5: /* fldcw m2byte */
> -                state->fpu_ctrl = true;
> -            fpu_memsrc16:
> -                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
> -                                     2, ctxt)) != X86EMUL_OKAY )
> -                    goto done;
> -                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
> -                break;

The commit message should however note that the larger-than-expected
diff is purely code motion for case 5, arranging for fldenv to fall
through into fnstenv.

Alternatively, could be pulled out into an earlier patch.

> +            case 4: /* fldenv */
> +                /* Raise #MF now if there are pending unmasked exceptions. */
> +                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
> +                /* fall through */
>              case 6: /* fnstenv */
>                  fail_if(!ops->blk);
> -                state->blk = blk_fst;
> -                /* REX is meaningless for this insn by this point. */
> +                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
> +                /* REX is meaningless for these insns by this point. */
>                  rex_prefix = in_protmode(ctxt, ops);
>                  if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
>                                      op_bytes > 2 ? sizeof(struct x87_env32)
> @@ -4972,6 +4966,14 @@ x86_emulate(
>                      goto done;
>                  state->fpu_ctrl = true;
>                  break;
> +            case 5: /* fldcw m2byte */
> +                state->fpu_ctrl = true;
> +            fpu_memsrc16:
> +                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
> +                                     2, ctxt)) != X86EMUL_OKAY )
> +                    goto done;
> +                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
> +                break;
>              case 7: /* fnstcw m2byte */
>                  state->fpu_ctrl = true;
>              fpu_memdst16:
> @@ -5124,13 +5126,14 @@ x86_emulate(
>                  dst.bytes = 8;
>                  emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
>                  break;
> -            case 4: /* frstor - TODO */
> -                state->fpu_ctrl = true;
> -                goto unimplemented_insn;
> +            case 4: /* frstor */
> +                /* Raise #MF now if there are pending unmasked exceptions. */
> +                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
> +                /* fall through */
>              case 6: /* fnsave */
>                  fail_if(!ops->blk);
> -                state->blk = blk_fst;
> -                /* REX is meaningless for this insn by this point. */
> +                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
> +                /* REX is meaningless for these insns by this point. */
>                  rex_prefix = in_protmode(ctxt, ops);
>                  if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
>                                      op_bytes > 2 ? sizeof(struct x87_env32) + 80
> @@ -11648,6 +11651,89 @@ int x86_emul_blk(
>  
>  #ifndef X86EMUL_NO_FPU
>  
> +    case blk_fld:
> +        ASSERT(!data);
> +
> +        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
> +        switch ( bytes )
> +        {
> +        case sizeof(fpstate.env):
> +        case sizeof(fpstate):
> +            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
> +            if ( !state->rex_prefix )
> +            {
> +                unsigned int fip = fpstate.env.mode.real.fip_lo +
> +                                   (fpstate.env.mode.real.fip_hi << 16);
> +                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
> +                                   (fpstate.env.mode.real.fdp_hi << 16);
> +                unsigned int fop = fpstate.env.mode.real.fop;
> +
> +                fpstate.env.mode.prot.fip = fip & 0xf;
> +                fpstate.env.mode.prot.fcs = fip >> 4;
> +                fpstate.env.mode.prot.fop = fop;
> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
> +                fpstate.env.mode.prot.fds = fdp >> 4;

The same comments about comments from the previous patch apply here.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 18:30:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 18:30:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX7l5-0001zC-Gy; Fri, 08 May 2020 18:30:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX7l4-0001r3-JU
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 18:30:02 +0000
X-Inumbo-ID: eda15cf6-9159-11ea-a04f-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eda15cf6-9159-11ea-a04f-12813bfff9fa;
 Fri, 08 May 2020 18:30:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588962602;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=2KpXGx6X5lyc4nxPmX/ONwbZuGOzOBl4nWLEcsbxJzI=;
 b=bvL3cNr6ZeUNJ5bekBwlxUrEvZ5rIbgAFVSxFEXbI6xvEiR5fCV/MDNF
 QwVK/3znIrHKZdGzZxY8EtysZAYwLiNzUp4o7CuFgK8A6vbJQGVwKw+t/
 /FyxpVt7KZUm+DsUK9C1etcun2NL5gvkIiV4GPxNqZtyN82IRg8nNW0yq g=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-Ironport-Dmarc-Check-Result: validskip
IronPort-SDR: cUc48mnZP7MFZVEFWyAexv07RfZf63yd213EVwqW1xp8Xn0lOSTiXAu7MRXRhpbeiE6YUEzPfo
 dkAPxso0TLj/pdf0u50CNvNufwujt8LlELgCS5QoYCzvtmTzAXIXpAsJ+fwRF3KCQ4p+4qyIRh
 py3MPrmc39ftGN+GQwaLn+5jmGJpnYXQOZlkGmsr1Y0IMxLiTAc7cK4KQEG7d2S0fMmmh4GhD5
 2xypPovxHggb0t3/E8MIyGBYBOIKD/HsbIhZy9R8jUiDu3Fs9Hy9OaCp0dHYQanpX4zkykGzoR
 eVk=
X-SBRS: 2.7
X-MesageID: 17114416
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17114416"
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
 <20200508133720.GH1353@Air-de-Roger>
 <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a8342bf8-d866-b507-9420-0384545e9a4f@citrix.com>
Date: Fri, 8 May 2020 19:29:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08/05/2020 16:04, Jan Beulich wrote:
>>> +            }
>>> +
>>> +            if ( bytes == sizeof(fpstate.env) )
>>> +                ptr = NULL;
>>> +            else
>>> +                ptr += sizeof(fpstate.env);
>>> +            break;
>>> +
>>> +        case sizeof(struct x87_env16):
>>> +        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
>>> +        {
>>> +            const struct x87_env16 *env = ptr;
>>> +
>>> +            fpstate.env.fcw = env->fcw;
>>> +            fpstate.env.fsw = env->fsw;
>>> +            fpstate.env.ftw = env->ftw;
>>> +
>>> +            if ( state->rex_prefix )
>>> +            {
>>> +                fpstate.env.mode.prot.fip = env->mode.prot.fip;
>>> +                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
>>> +                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
>>> +                fpstate.env.mode.prot.fds = env->mode.prot.fds;
>>> +                fpstate.env.mode.prot.fop = 0; /* unknown */
>>> +            }
>>> +            else
>>> +            {
>>> +                unsigned int fip = env->mode.real.fip_lo +
>>> +                                   (env->mode.real.fip_hi << 16);
>>> +                unsigned int fdp = env->mode.real.fdp_lo +
>>> +                                   (env->mode.real.fdp_hi << 16);
>>> +                unsigned int fop = env->mode.real.fop;
>>> +
>>> +                fpstate.env.mode.prot.fip = fip & 0xf;
>>> +                fpstate.env.mode.prot.fcs = fip >> 4;
>>> +                fpstate.env.mode.prot.fop = fop;
>>> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
>>> +                fpstate.env.mode.prot.fds = fdp >> 4;
>> This looks mostly the same as the translation done above, so maybe
>> could be abstracted anyway in a macro to avoid the code repetition?
>> (ie: fpstate_real_to_prot(src, dst) or some such).
> Just the 5 assignments could be put in an inline function, but
> if we also wanted to abstract away the declarations with their
> initializers, it would need to be a macro because of the
> different types of fpstate.env and *env. While I'd generally
> prefer inline functions, the macro would have the benefit that
> it could be #define-d / #undef-d right inside this case block.
> Thoughts?

Code like this is large in terms of volume, but it is crystal clear
(with the requested comments in place) and easy to follow.

I don't see how attempting to abstract out two small portions is going
to be an improvement.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 19:32:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 19:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX8iy-0007H3-5O; Fri, 08 May 2020 19:31:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX8iw-0007Gy-Ie
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 19:31:54 +0000
X-Inumbo-ID: 91ca2e90-9162-11ea-a061-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91ca2e90-9162-11ea-a061-12813bfff9fa;
 Fri, 08 May 2020 19:31:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588966314;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Yj1X4kaVtelg71QUbAwLQpZu8oQuNOJ2I/AnJIv7FYY=;
 b=aSxgq4NZ4vJXXRxsJeGxJc1dAW5Hgem7PSpVdYHDdRQUCveHhgFByeYx
 DRzLyLUSEkZMn0dRQN4HZT+ydKXG2Dz8Xw+j4QEdNRb+273tEnVUR4jYg
 Mbl3/omoFYppe1SZSqUlrUH2XmOtwf4t3a6D1f94gBsH4nu1Sj2BiCkf8 Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-Ironport-Dmarc-Check-Result: validskip
IronPort-SDR: oE/DjZX68ROKNquvq5Vr+v0FJ9oCOFaV+zzDy1p95J/CtDSnKC7v3WU2PfSYZN8v2kK9NEEHc8
 S5ZRa3sQZZDflDbitG11hsJHFuWyaS1U7fife9Th52LHALRY5c4fIUv2Nh3SPAO9NO+AGn14Bq
 LIjUbGX35JJAxbGYodmgyq9Aoi9RG29G3KDx1d68G1huwhqLIZkG8sFFXkbwR8ydZt1iJ+AB8A
 +8KaSjH+3+qhScvZUCj87MbpLrLfpXtWK/psCvG7lVQqeaOAY7ic+n/VSQoc7MT8QG/wQY9GVK
 wVM=
X-SBRS: 2.7
X-MesageID: 17361964
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,368,1583211600"; d="scan'208";a="17361964"
Subject: Re: [PATCH v8 09/12] x86emul: support FXSAVE/FXRSTOR
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <ea1db2c5-3dd7-f1c8-c051-e39f0dffc94e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4f0da795-a148-e1f3-bd97-caeb84d702cb@citrix.com>
Date: Fri, 8 May 2020 20:31:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <ea1db2c5-3dd7-f1c8-c051-e39f0dffc94e@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:16, Jan Beulich wrote:
> Note that FPU selector handling as well as MXCSR mask saving for now
> does not honor differences between host and guest visible featuresets.
>
> While for Intel operation of the insns with CR4.OSFXSR=0 is
> implementation dependent, use the easiest solution there: Simply don't
> look at the bit in the first place. For AMD and alike the behavior is
> well defined, so it gets handled together with FFXSR.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v8: Respect EFER.FFXSE and CR4.OSFXSR. Correct wrong X86EMUL_NO_*
>     dependencies. Reduce #ifdef-ary.
> v7: New.
>
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -953,6 +958,29 @@ typedef union {
>      uint32_t data32[16];
>  } mmval_t;
>  
> +struct x86_fxsr {
> +    uint16_t fcw;
> +    uint16_t fsw;
> +    uint16_t ftw:8, :8;

uint8_t

> +    uint16_t fop;
> +    union {
> +        struct {
> +            uint32_t offs;
> +            uint32_t sel:16, :16;

uint16_t

> +        };
> +        uint64_t addr;
> +    } fip, fdp;
> +    uint32_t mxcsr;
> +    uint32_t mxcsr_mask;
> +    struct {
> +        uint8_t data[10];
> +        uint8_t _[6];

I'd be tempted to suggest uint16_t :16, :16, :16; here, which gets rid
of the named field, or to name it rsvd explicitly to match below.

> +    } fpreg[8];
> +    uint64_t __attribute__ ((aligned(16))) xmm[16][2];
> +    uint64_t _[6];

This field however is used below.  It would be clearer if named 'rsvd'.

> +    uint64_t avl[6];
> +};
> +
>  /*
>   * While proper alignment gets specified above, this doesn't get honored by
>   * the compiler for automatic variables. Use this helper to instantiate a
> @@ -1910,6 +1938,7 @@ amd_like(const struct x86_emulate_ctxt *
>  #define vcpu_has_cmov()        (ctxt->cpuid->basic.cmov)
>  #define vcpu_has_clflush()     (ctxt->cpuid->basic.clflush)
>  #define vcpu_has_mmx()         (ctxt->cpuid->basic.mmx)
> +#define vcpu_has_fxsr()        (ctxt->cpuid->basic.fxsr)
>  #define vcpu_has_sse()         (ctxt->cpuid->basic.sse)
>  #define vcpu_has_sse2()        (ctxt->cpuid->basic.sse2)
>  #define vcpu_has_sse3()        (ctxt->cpuid->basic.sse3)
> @@ -8125,6 +8154,47 @@ x86_emulate(
>      case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
>          switch ( modrm_reg & 7 )
>          {
> +#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
> +    !defined(X86EMUL_NO_SIMD)
> +        case 0: /* fxsave */
> +        case 1: /* fxrstor */
> +            generate_exception_if(vex.pfx, EXC_UD);
> +            vcpu_must_have(fxsr);
> +            generate_exception_if(ea.type != OP_MEM, EXC_UD);
> +            generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16,
> +                                              ctxt, ops),
> +                                  EXC_GP, 0);
> +            fail_if(!ops->blk);
> +            op_bytes =
> +#ifdef __x86_64__
> +                !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) :
> +#endif
> +                sizeof(struct x86_fxsr);
> +            if ( amd_like(ctxt) )
> +            {
> +                if ( !ops->read_cr ||
> +                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
> +                    cr4 = X86_CR4_OSFXSR;

Why do we want to assume OSFXSR in the case of a read_cr() failure,
rather than bailing on the entire instruction?

> +                if ( !ops->read_msr ||
> +                     ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
> +                    msr_val = 0;
> +                if ( !(cr4 & X86_CR4_OSFXSR) ||
> +                     (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
> +                    op_bytes = offsetof(struct x86_fxsr, xmm[0]);

This is a very peculiar optimisation in AMD processors...  (but your
logic here does agree with the description AFAICT)

> +            }
> +            /*
> +             * This could also be X86EMUL_FPU_mmx, but it shouldn't be
> +             * X86EMUL_FPU_xmm, as we don't want CR4.OSFXSR checked.
> +             */
> +            get_fpu(X86EMUL_FPU_fpu);
> +            state->blk = modrm_reg & 1 ? blk_fxrstor : blk_fxsave;
> +            if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
> +                                sizeof(struct x86_fxsr), &_regs.eflags,
> +                                state, ctxt)) != X86EMUL_OKAY )
> +                goto done;
> +            break;
> +#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
> +
>  #ifndef X86EMUL_NO_SIMD
>          case 2: /* ldmxcsr */
>              generate_exception_if(vex.pfx, EXC_UD);
> @@ -11611,6 +11681,8 @@ int x86_emul_blk(
>      struct x86_emulate_state *state,
>      struct x86_emulate_ctxt *ctxt)
>  {
> +    int rc = X86EMUL_OKAY;
> +
>      switch ( state->blk )
>      {
>          bool zf;
> @@ -11819,6 +11891,77 @@ int x86_emul_blk(
>  
>  #endif /* X86EMUL_NO_FPU */
>  
> +#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
> +    !defined(X86EMUL_NO_SIMD)
> +
> +    case blk_fxrstor:
> +    {
> +        struct x86_fxsr *fxsr = FXSAVE_AREA;
> +
> +        ASSERT(!data);
> +        ASSERT(bytes == sizeof(*fxsr));
> +        ASSERT(state->op_bytes <= bytes);
> +
> +        if ( state->op_bytes < sizeof(*fxsr) )
> +        {
> +            if ( state->rex_prefix & REX_W )
> +            {
> +                /*
> +                 * The only way to force fxsaveq on a wide range of gas
> +                 * versions. On older versions the rex64 prefix works only if
> +                 * we force an addressing mode that doesn't require extended
> +                 * registers.
> +                 */
> +                asm volatile ( ".byte 0x48; fxsave (%1)"
> +                               : "=m" (*fxsr) : "R" (fxsr) );
> +            }
> +            else
> +                asm volatile ( "fxsave %0" : "=m" (*fxsr) );
> +        }
> +
> +        memcpy(fxsr, ptr, min(state->op_bytes,
> +                              (unsigned int)offsetof(struct x86_fxsr, _)));
> +        memset(fxsr->_, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, _));

I'm completely lost trying to follow what's going on here.  Why are we
constructing something different to what the guest gave us?

> +
> +        generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, EXC_GP, 0);
> +
> +        if ( state->rex_prefix & REX_W )
> +        {
> +            /* See above for why operand/constraints are this way. */
> +            asm volatile ( ".byte 0x48; fxrstor (%0)"
> +                           :: "R" (fxsr), "m" (*fxsr) );
> +        }
> +        else
> +            asm volatile ( "fxrstor %0" :: "m" (*fxsr) );
> +        break;
> +    }
> +
> +    case blk_fxsave:
> +    {
> +        struct x86_fxsr *fxsr = FXSAVE_AREA;
> +
> +        ASSERT(!data);
> +        ASSERT(bytes == sizeof(*fxsr));
> +        ASSERT(state->op_bytes <= bytes);
> +
> +        if ( state->rex_prefix & REX_W )
> +        {
> +            /* See above for why operand/constraint are this way. */
> +            asm volatile ( ".byte 0x48; fxsave (%0)"
> +                           :: "R" (state->op_bytes < sizeof(*fxsr) ? fxsr : ptr)
> +                           : "memory" );
> +        }
> +        else
> +            asm volatile ( "fxsave (%0)"
> +                           :: "r" (state->op_bytes < sizeof(*fxsr) ? fxsr : ptr)
> +                           : "memory" );
> +        if ( state->op_bytes < sizeof(*fxsr) )
> +            memcpy(ptr, fxsr, state->op_bytes);

I think this logic would be clearer to follow with:

void *buf = state->op_bytes < sizeof(*fxsr) ? fxsr : ptr;

...

if ( buf != ptr )
    memcpy(ptr, fxsr, state->op_bytes);

This more clearly highlights the "we either fxsave'd straight into the
destination pointer, or into a local buffer if we only want a subset"
property.

> --- a/xen/arch/x86/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate.c
> @@ -42,6 +42,8 @@
>      }                                                      \
>  })
>  
> +#define FXSAVE_AREA current->arch.fpu_ctxt

How safe is this?  Don't we already use this buffer to recover the old
state in the case of an exception?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 20:16:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 20:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX9Pr-0002FO-I8; Fri, 08 May 2020 20:16:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jX9Pp-0002FJ-QI
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 20:16:13 +0000
X-Inumbo-ID: c0fcf4e4-9168-11ea-a067-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0fcf4e4-9168-11ea-a067-12813bfff9fa;
 Fri, 08 May 2020 20:16:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bDzexYkLSEW4/t6p1GjlgbhA5/DrMK5QDn8oK3olGJg=; b=BKGStMT/hLRNxuRqxo1IiJIzE
 DE47w8EyhS826HGI8ouWL587CrzBej5h4m+s3FwQCTnXdOoeN4IIwcIUKw676qfeoKYU15si9AX2H
 vZgZYHBtCiAJix/Mdwr+CsRdMDvjT0FWWKX/76WmRHyLslwNMk+uGdxzKTrMP0l7gC0sA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX9Pk-0000L6-ND; Fri, 08 May 2020 20:16:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jX9Pk-00011C-C4; Fri, 08 May 2020 20:16:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jX9Pk-0004HZ-64; Fri, 08 May 2020 20:16:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150083-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150083: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=23bf93884c6346206e87c0f14d93f905e8c81267
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 20:16:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150083 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150083/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              23bf93884c6346206e87c0f14d93f905e8c81267
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  112 days
Failing since        146211  2020-01-18 04:18:52 Z  111 days  102 attempts
Testing same since   150083  2020-05-08 04:18:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17347 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 08 20:33:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 20:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jX9fz-0003tx-6n; Fri, 08 May 2020 20:32:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jX9fx-0003ts-O4
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 20:32:53 +0000
X-Inumbo-ID: 16ceacda-916b-11ea-9887-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 16ceacda-916b-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 20:32:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588969973;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=lvcHfMrPoxy3lfH9QSTjWGiNH2/W59CeABGN2RP453Y=;
 b=d3ct67v+yu+aVKURHK/vShqTADatG0FviMjUtPd2qkIDz9Roz2c/OBDQ
 THjEpf5UVqZm/6c1u+RNruwHKBmjPmH8Tr40Vdq8FT9+fdVmPhJeSTJTW
 0uk8JVvcSU1YQsCh1d6mrG/c9QQ+UusZKGzvBCFFYlZbjYStIYMX1RASX U=;
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-Ironport-Dmarc-Check-Result: validskip
IronPort-SDR: gLEO9oAMiGRmKnl3GtWt0sX7AQPZZ1BwmIWR7VMFy2qVlsq6amL74aGQSuJrccZJEuvSXPTsJX
 6blFDBlgf0HbtDdDpeEpCaVj0m0PIURi9a5lf3RI89aLJG85AqcLoBaEqsfCn2Dm04vU/Ztnxv
 R3JftELIBv5iFSMVOm142oJT3uvqD2+GdKYH4igu9rAK9+GDHte3Z7ZyfMAov/QouF9nooY5Xl
 Oh6xs/y3sK83m6QcWYmEWbgVjOyBpz96bi9owHQ4OcV7LtWlV0Hud3qiTdiBkGVa7J8Me1ad7t
 O/k=
X-SBRS: 2.7
X-MesageID: 17123184
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,369,1583211600"; d="scan'208";a="17123184"
Subject: Re: [PATCH v8 10/12] x86/HVM: scale MPERF values reported to guests
 (on AMD)
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <5da4ed2e-8eb8-0b18-3c1f-9d419371c08a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8bf8b3c9-1cec-0943-4b98-75b4a787a344@citrix.com>
Date: Fri, 8 May 2020 21:32:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <5da4ed2e-8eb8-0b18-3c1f-9d419371c08a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:18, Jan Beulich wrote:
> AMD's PM specifies that MPERF (and its r/o counterpart) reads are
> affected by the TSC ratio. Hence when processing such reads in software
> we too should scale the values. While we don't currently (yet) expose
> the underlying feature flags, besides us allowing the MSRs to be read
> nevertheless, RDPRU is going to expose the values even to user space.
>
> Furthermore, due to the not exposed feature flags, this change has the
> effect of making properly inaccessible (for reads) the two MSRs.
>
> Note that writes to MPERF (and APERF) continue to be unsupported.

Linux is now using MPERF/APERF for its frequency-invariant scheduling
logic.  Irritatingly, via its read/write alias rather than its read-only
alias.  Even more irritatingly, Intel's reference algorithm recommends
writing to both, despite this being far less efficient than (one
of) AMD's (two) algorithm(s) which tells you just to subtract the values
you last sampled.
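The subtract-only style can be sketched as follows (illustrative C only; `effective_freq` and the sample struct are hypothetical names, not Linux or Xen code):

```c
#include <stdint.h>

/* One APERF/MPERF sample pair. */
struct amperf_sample {
    uint64_t aperf;
    uint64_t mperf;
};

/* Effective frequency as (delta APERF / delta MPERF) * scale,
 * computed from the previous and current readings.  Unsigned
 * subtraction handles counter wraparound modulo 2^64. */
static uint64_t effective_freq(const struct amperf_sample *prev,
                               const struct amperf_sample *cur,
                               uint64_t scale)
{
    uint64_t da = cur->aperf - prev->aperf;
    uint64_t dm = cur->mperf - prev->mperf;

    if (dm == 0)
        return 0; /* no reference cycles elapsed; avoid divide-by-zero */

    return da * scale / dm;
}
```

Because only deltas are needed, this scheme performs no MSR writes at all, which is what makes it cheaper than the write-zero-to-both scheme.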

On the one hand, I'm tempted to suggest that we offer EFRO on Intel and
update Linux to use it.  OTOH, that would VMExit as Intel CPUs don't
understand the EFRO interface.

I can't see any sane way to virtualise the write behaviour for MPERF/APERF.
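For reference, the read-side scaling the patch applies on AMD uses the TSC ratio, which AMD's PM documents as an 8.32 fixed-point multiplier. A minimal sketch of that scaling step (a hypothetical helper, not Xen's hvm_get_guest_tsc_fixed(), and ignoring overflow for very large counts):

```c
#include <stdint.h>

/* Scale a raw host MPERF reading by a guest TSC ratio in AMD's
 * 8.32 fixed-point format (integer part in bits 39:32, fraction
 * in bits 31:0).  Illustrative only; production code would use a
 * 128-bit multiply to avoid overflow. */
static uint64_t scale_mperf(uint64_t host_mperf, uint64_t tsc_ratio)
{
    uint64_t ipart = tsc_ratio >> 32;
    uint64_t fpart = tsc_ratio & 0xffffffffULL;

    return host_mperf * ipart + ((host_mperf * fpart) >> 32);
}
```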

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v3: New.
> ---
> I did consider whether to put the code in guest_rdmsr() instead, but
> decided that it's better to have it next to TSC handling.

Please do put it in guest_rdmsr().  This is code hygiene just as much as
bool_t or style fixes are.

The relationship to TSC is passing-at-best.

>
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -3478,6 +3478,22 @@ int hvm_msr_read_intercept(unsigned int
>          *msr_content = v->arch.hvm.msr_tsc_adjust;
>          break;
>  
> +    case MSR_MPERF_RD_ONLY:
> +        if ( !d->arch.cpuid->extd.efro )
> +        {
> +            goto gp_fault;
> +
> +    case MSR_IA32_MPERF:
> +            if ( !(d->arch.cpuid->basic.raw[6].c &
> +                   CPUID6_ECX_APERFMPERF_CAPABILITY) )
> +                goto gp_fault;
> +        }
> +        if ( rdmsr_safe(msr, *msr_content) )
> +            goto gp_fault;
> +        if ( d->arch.cpuid->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON) )

I suspect we want to gain amd_like() outside of the emulator.

> +            *msr_content = hvm_get_guest_tsc_fixed(v, *msr_content);
> +        break;
> +
>      case MSR_APIC_BASE:
>          *msr_content = vcpu_vlapic(v)->hw.apic_base_msr;
>          break;
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -405,6 +405,9 @@
>  #define MSR_IA32_MPERF			0x000000e7
>  #define MSR_IA32_APERF			0x000000e8
>  
> +#define MSR_MPERF_RD_ONLY		0xc00000e7
> +#define MSR_APERF_RD_ONLY		0xc00000e8

S/RD_ONLY/RO/ ?  No loss of meaning.  Also, above the legacy line please.

~Andrew

> +
>  #define MSR_IA32_THERM_CONTROL		0x0000019a
>  #define MSR_IA32_THERM_INTERRUPT	0x0000019b
>  #define MSR_IA32_THERM_STATUS		0x0000019c
>



From xen-devel-bounces@lists.xenproject.org Fri May 08 21:04:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 21:04:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXAAR-0006TL-Tm; Fri, 08 May 2020 21:04:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M6n8=6W=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jXAAR-0006TG-0T
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 21:04:23 +0000
X-Inumbo-ID: 7d828f1a-916f-11ea-a071-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d828f1a-916f-11ea-a071-12813bfff9fa;
 Fri, 08 May 2020 21:04:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1588971862;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Xt1kL6CbUuB4s9c/xNAlc5HpI87uPFG7NIUqE70ub6Q=;
 b=QNY/MycQMKfD7WskuLaSKzC9vRCt4izP+MfB3VAaqz2dS1pqDE8B7B1W
 aTKdm/Dd6DXn73rjokEzZGW/c2tTeM+dt6ktGKm8QcXtQFWFx4chDXVGD
 gxviTblK8dAZK6YlCx/aP/WDtHP48DJPLyjUgMiwaIzCBexv1t2aD69RN k=;
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
X-Ironport-Dmarc-Check-Result: validskip
IronPort-SDR: gvEq1nHMnQBL3kJRzNlmeSRMD51Jsf8m9tpPAbfGsUMP2mEyH+Ygw9e7hvVjWv/RQ/bSz6Mrvu
 k5PlFOat2DCORtgZ9PyvcnjvzcZ5orhDLZgGayn7z12NQhsgGv/VLWwpOD6q10NTAxtrlxrpb6
 tu8X83AhhFjrRUHVory4/r2s0hf8TyWUe4ABZ0iYSD3MIG855BO07ZFFR3ihvGebwVlxLUUCsv
 5fRSSpAMRVXcpk++MMxm1fTcncpobCbtJDdXfjp8anVOr5DfsTcvqDfpYz6N3Mtq7pNqLr05PG
 0V4=
X-SBRS: 2.7
X-MesageID: 17795292
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,369,1583211600"; d="scan'208";a="17795292"
Subject: Re: [PATCH v8 12/12] x86/HVM: don't needlessly intercept
 APERF/MPERF/TSC MSR reads
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <e92b6c1a-b2c3-13e7-116c-4772c851dd0b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <81cc74ce-0a53-d5cd-3513-af3af6382815@citrix.com>
Date: Fri, 8 May 2020 22:04:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <e92b6c1a-b2c3-13e7-116c-4772c851dd0b@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 09:20, Jan Beulich wrote:
> If the hardware can handle accesses, we should allow it to do so. This
> way we can expose EFRO to HVM guests,

I'm reminded now of the conversation I'm sure we've had before, although
I have a feeling it was on IRC.

APERF/MPERF (including the EFRO interface on AMD) are free running
counters but only in C0.  The raw values are not synchronised across
threads.

A vCPU which gets rescheduled has a 50% chance of finding one or both
values going backwards, and a 100% chance of a totally bogus calculation.

There is no point exposing APERF/MPERF to guests.  It can only be used
safely in hypervisor context, on AMD hardware with a CLGI/STGI region,
or on Intel hardware in an NMI handler if you trust that a machine check
isn't going to ruin your day.

VMs have no way of achieving the sampling requirements, and have a fair
chance of getting a plausible-but-wrong answer.

The only possibility to do this safely is on a VM which is pinned to a
pCPU for its lifetime, but even then I'm unconvinced of the correctness.

I don't think we should be exposing this functionality to guests at all,
although I might be persuaded if someone wanting to use it in a VM can
provide a concrete justification of why the above problems won't get in
their way.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 08 21:05:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 21:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXABD-0006WC-7O; Fri, 08 May 2020 21:05:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXABC-0006W5-7r
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 21:05:10 +0000
X-Inumbo-ID: 93f94b26-916f-11ea-a071-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 93f94b26-916f-11ea-a071-12813bfff9fa;
 Fri, 08 May 2020 21:04:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QNvYtdR5Uc2Ft2Ku/DDJcDjBPZj6mPOwKe96P4gCJgk=; b=riT0xJmPahodlWBrjYIwvUGgF
 7HEJ9tBy13GEwl8yE+9QSv/4yYEerIM4gmBUc5YGjxcrRaNeiCaN5rZxUByid5nH//5r1Dfr89sfz
 GsZNH5a0iuNYRYpPEnEhetMowR+iih5BVyU1bgtanlw76OT5RXANxIA+TSMEau5G+xptA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXAB1-0001qf-K5; Fri, 08 May 2020 21:04:59 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXAB1-0003iV-8m; Fri, 08 May 2020 21:04:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXAB1-0007NA-89; Fri, 08 May 2020 21:04:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150082-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150082: all pass - PUSHED
X-Osstest-Versions-This: ovmf=3a3713e62cfad00d78bb938b0d9fb1eedaeff314
X-Osstest-Versions-That: ovmf=8293e6766a884918a6b608c64543caab49870597
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 21:04:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150082 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150082/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3a3713e62cfad00d78bb938b0d9fb1eedaeff314
baseline version:
 ovmf                 8293e6766a884918a6b608c64543caab49870597

Last test of basis   150063  2020-05-07 05:27:13 Z    1 days
Testing same since   150082  2020-05-08 04:09:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8293e6766a..3a3713e62c  3a3713e62cfad00d78bb938b0d9fb1eedaeff314 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 08 22:09:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 22:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXBBJ-0003AR-6n; Fri, 08 May 2020 22:09:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UrFz=6W=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jXBBI-0003AM-06
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 22:09:20 +0000
X-Inumbo-ID: 902e1ea0-9178-11ea-b07b-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 902e1ea0-9178-11ea-b07b-bc764e2007e4;
 Fri, 08 May 2020 22:09:19 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 048M3qiu161659;
 Fri, 8 May 2020 22:09:17 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=from : to : cc :
 subject : date : message-id; s=corp-2020-01-29;
 bh=kRJKGgXEcBvkOnkZ00J/8oVZj7ZZskCjtSJZuvcjbbk=;
 b=EkLLPV6+6vDj7ywobWZNfkVpk42vo4sS6Wavr2iWASmCYK+NZexIWqJKpxtkhn63wQtc
 wg7wFDZVVRob2exoB5k+88R3KfzYN92b2An084ssEc7X+glnExqPHU36Czrwp6ErPhnG
 1kBnoJxXMoujtsoNJJGWGf89MgM8QihpfoSBJGWtZ+MzxfZTjRgn2yhP9/4EpbbAGfEV
 hGmeHbj5qaDkwFUH35qkoJD5I9IAz1FzgOrSmuK+4BnrT+y3blK7GcPryYxbQ72tTY8X
 +Q0qMMUAM5J5B7lWZs9C3YOfWClf705gbobq0U85IyCEy8JrbkDDr0PSS9lD+Tq59lv/ 7A== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 30vtewwe50-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 08 May 2020 22:09:17 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 048M5xSe011008;
 Fri, 8 May 2020 22:07:16 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by aserp3020.oracle.com with ESMTP id 30vte193h4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 08 May 2020 22:07:16 +0000
Received: from abhmp0017.oracle.com (abhmp0017.oracle.com [141.146.116.23])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 048M76d6019813;
 Fri, 8 May 2020 22:07:06 GMT
Received: from ovs104.us.oracle.com (/10.149.224.204)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 08 May 2020 15:07:06 -0700
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: [PATCH] xen/cpuhotplug: Fix initial CPU offlining for PV(H) guests
Date: Fri,  8 May 2020 18:28:43 -0400
Message-Id: <1588976923-3667-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9615
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999
 phishscore=0
 bulkscore=0 malwarescore=0 suspectscore=0 adultscore=0 mlxscore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2003020000 definitions=main-2005080187
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9615
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 mlxscore=0 mlxlogscore=999
 malwarescore=0 spamscore=0 priorityscore=1501 lowpriorityscore=0
 impostorscore=0 suspectscore=0 adultscore=0 clxscore=1015 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005080187
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 sstabellini@kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Commit a926f81d2f6c ("xen/cpuhotplug: Replace cpu_up/down() with
device_online/offline()") replaced cpu_down() with a device_offline()
call, which requires that the CPU has been registered beforehand. This
registration, however, happens later, from topology_init(), which
is called as a subsys_initcall(). setup_vcpu_hotplug_event(), on the
other hand, is invoked earlier, during arch_initcall().

As a result, booting a PV(H) guest with vcpus < maxvcpus causes a crash.

Move setup_vcpu_hotplug_event() (and therefore setup_cpu_watcher()) to
late_initcall(). In addition, instead of performing all offlining steps
in setup_cpu_watcher(), simply call disable_hotplug_cpu().

Fixes: a926f81d2f6c ("xen/cpuhotplug: Replace cpu_up/down() with device_online/offline()")
Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 drivers/xen/cpu_hotplug.c | 8 +++-----
 1 file changed, 3 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/cpu_hotplug.c b/drivers/xen/cpu_hotplug.c
index ec975de..b96b11e 100644
--- a/drivers/xen/cpu_hotplug.c
+++ b/drivers/xen/cpu_hotplug.c
@@ -93,10 +93,8 @@ static int setup_cpu_watcher(struct notifier_block *notifier,
 	(void)register_xenbus_watch(&cpu_watch);
 
 	for_each_possible_cpu(cpu) {
-		if (vcpu_online(cpu) == 0) {
-			device_offline(get_cpu_device(cpu));
-			set_cpu_present(cpu, false);
-		}
+		if (vcpu_online(cpu) == 0)
+			disable_hotplug_cpu(cpu);
 	}
 
 	return NOTIFY_DONE;
@@ -119,5 +117,5 @@ static int __init setup_vcpu_hotplug_event(void)
 	return 0;
 }
 
-arch_initcall(setup_vcpu_hotplug_event);
+late_initcall(setup_vcpu_hotplug_event);
 
-- 
1.8.3.1



From xen-devel-bounces@lists.xenproject.org Fri May 08 23:17:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 08 May 2020 23:17:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXCEd-0000Oy-5i; Fri, 08 May 2020 23:16:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wqRt=6W=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXCEc-0000Ot-25
 for xen-devel@lists.xenproject.org; Fri, 08 May 2020 23:16:50 +0000
X-Inumbo-ID: fd959104-9181-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd959104-9181-11ea-9887-bc764e2007e4;
 Fri, 08 May 2020 23:16:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=tbd7CxT8G8lkjIdAitNa0ZC+oCKw9gZo5se5LPZr3vQ=; b=oyijEQRTLneJSHsOr8lSGWnWy
 pjP0qN5bUMZ07BuX0NIOC80cFVqHSccTtgK+0sKwjyx03L0Qj0qzCQSPlipQMWJ/Xk/06CslB/Cyu
 Kw9yqQc7edIsivIHqPW2MilFyRxNSQbcj7cRwvVP4TAQaLKqPpe7U+cRLw2UvdipqgMgQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXCEZ-0004pg-N4; Fri, 08 May 2020 23:16:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXCEZ-0000Xi-F9; Fri, 08 May 2020 23:16:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXCEZ-0006Mp-E8; Fri, 08 May 2020 23:16:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150081-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150081: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=192ffb7515839b1cc8457e0a8c1e09783de019d3
X-Osstest-Versions-That: linux=a811c1fa0a02c062555b54651065899437bacdbe
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 08 May 2020 23:16:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150081 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150081/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150064
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150064
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150064
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150064
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150064
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150064
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150064
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150064
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150064
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                192ffb7515839b1cc8457e0a8c1e09783de019d3
baseline version:
 linux                a811c1fa0a02c062555b54651065899437bacdbe

Last test of basis   150064  2020-05-07 06:00:34 Z    1 days
Testing same since   150081  2020-05-07 23:40:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alan Maguire <alan.maguire@oracle.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Fangrui Song <maskray@google.com>
  James Morse <james.morse@arm.com>
  Jim Mattson <jmattson@google.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marc Zyngier <maz@kernel.org>
  Mark Rutland <mark.rutland@arm.com>
  Masami Hiramatsu <mhiramat@kernel.org>
  Maxim Levitsky <mlevitsk@redhat.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Xu <peterx@redhat.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Wei Yang <richard.weiyang@gmail.com>
  Will Deacon <will@kernel.org>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yiwei Zhang <zzyiwei@google.com>
  Yunfeng Ye <yeyunfeng@huawei.com>
  Zenghui Yu <yuzenghui@huawei.com>
  Zou Wei <zou_wei@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   a811c1fa0a02..192ffb751583  192ffb7515839b1cc8457e0a8c1e09783de019d3 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat May 09 00:02:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 00:02:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXCwm-00051i-Mt; Sat, 09 May 2020 00:02:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=z/aV=6X=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1jXCwl-00051a-Ed
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 00:02:27 +0000
X-Inumbo-ID: 5ddf9da6-9188-11ea-9887-bc764e2007e4
Received: from mail-oi1-x244.google.com (unknown [2607:f8b0:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ddf9da6-9188-11ea-9887-bc764e2007e4;
 Sat, 09 May 2020 00:02:26 +0000 (UTC)
Received: by mail-oi1-x244.google.com with SMTP id 19so10020996oiy.8
 for <xen-devel@lists.xenproject.org>; Fri, 08 May 2020 17:02:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=CW4Nd9vXCE/Qs2L5YU+ExaCL2Yjneh25CTD52QLhGvk=;
 b=PZzTzpNp0lAHotjAZmWJyYeRF1LrCL0hJMDkrIR5nzhVaN1IabmauQft2kjTJVNYp+
 R8TdgmFqWaH3inlni1N0GHo+S+fKiL0FQpPwKY0MMpAFLvTK/x9Jz8nZADv7YQ6SfP5Y
 YrvboQ1bEL0dlTbkf0M84jVxFGgJCwGbAoneGsIvaIrRdRMgau6EJHfeM3mD/QQAIbIq
 0ZeMrH9RZvlXxlqbqUy42XgztR+hpposz89U8k0/NA4LRFmB2teKAsypu6xQ4I4xzfN3
 aT424EKUrwR96ad6g0HtjSFTZeyTFBkQxkzd7cBG0q6iihYiOGZr1D7VnwqiVjRVcrtO
 j1cQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=CW4Nd9vXCE/Qs2L5YU+ExaCL2Yjneh25CTD52QLhGvk=;
 b=gDTOwkAWgOP7fSEOQro+W8idkS+HK5st7oS7MB2QanIvFx0v/eH/F3yTJ0PoYID8RM
 PQBWAIQ2g2f5FIgL3DkKF0teWWXkhnJs72ruBB7LnKpqgLRU7r6WhNjqltmewEgqQ993
 C0Z+1r/9fvyFavjmcaDUFj0KhZtL0bUwdgYm9YzoEr+3D8/FlzGADHoL6nzCjeZadkus
 5pMNjSCiajJosTVrlUbOEoxMxPn7KjkKO/7a7TsukDoh539nzl3TFDizAELablB0lqzG
 g66I960aJaoDOMj/6ZFFFwAG3UCtJfTLAuNLY8LAk9mAORHF1rBqSo2l5sCQGngajsEb
 mRdg==
X-Gm-Message-State: AGi0PuYrp6KTZFo4rUZhDNQavvhrBafR7Y+ymVCCkGMkXtDJWjE/98Q/
 ArIyGXjysQLMusiv7xVCwyBuMGUWtIStH7S6s8E=
X-Google-Smtp-Source: APiQypJWLxm6Lt/slTUnN3Q4qlcdvPKayHizn3P9I9KsqhmuphZMQJ4QYsEaZ7IIb60dhcrm3HRfb9dwYsrsZU/d9Yk=
X-Received: by 2002:aca:480b:: with SMTP id v11mr11859281oia.20.1588982546157; 
 Fri, 08 May 2020 17:02:26 -0700 (PDT)
MIME-Version: 1.0
References: <20200506032312.878-1-christopher.w.clark@gmail.com>
 <fa40a5f7-39e4-bd30-1e1d-89311cfe2ff7@suse.com>
In-Reply-To: <fa40a5f7-39e4-bd30-1e1d-89311cfe2ff7@suse.com>
From: Christopher Clark <christopher.w.clark@gmail.com>
Date: Fri, 8 May 2020 17:02:15 -0700
Message-ID: <CACMJ4Ga79KA9LiL5znJp_2bbCXnv1AQYoGT1i2huHotr_6z8Zw@mail.gmail.com>
Subject: Re: [RFC PATCH] docs/designs: domB design document
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Adam Schwalm <adam.schwalm@starlab.io>,
 "Daniel P . Smith" <dpsmith@apertussolutions.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Rich Persaud <persaur@gmail.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 7, 2020 at 1:15 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 06.05.2020 05:23, Christopher Clark wrote:
> > +It is with this understanding as presented that the DomB project used as the
> > +basis for the development of its multiple domain boot capability for Xen. Within
> > +the remainder of this document is a detailed explanation of the multiple domain
> > +boot, the objectives it strives to achieve, the structure behind the approach,
> > +the sequence events in a boot, a contrasting with ARM's dom0less, and closing
> > +with some exemplar implementations.
>
> May I ask that along with dom0less you also explain the (non-)relationship
> to the late-Dom0 model we've been having for a number of years? Some
> aspects of what the boot domain does look, to me, quite similar.

I haven't seen the term 'late-Dom0' used before, so am inferring that
you might mean the 'late hardware domain' feature of Xen, in which
case: yes, thanks - we will add a contrast section on how DomB relates
to that. At this (early) point, I think that we should be able to
retire/replace Xen's late hardware domain implementation in favour of
DomB, if that direction is acceptable to the community; so we will
look into how it relates.

> Apart from this one immediate detail I'm curious about (and that I also
> don't know/recall how it gets handled with dom0less): Death of Dom0 in a
> traditional setup is a signal to Xen to reboot the host. With any of the
> boot time created domains not functioning anymore, the intended purpose
> of the host may no longer be fulfilled. But of course there may be a
> subset of "optional" domains. As a result - are there any intentions
> towards identifying under what conditions it may be better to reboot the
> host than wait for human interaction?

Excellent point; we have given it some limited consideration -- eg.
something should be able to be expressed in the per-domain metadata
supplied in the Launch Control Module, to set state that is held in
Xen's domain struct for acting upon on domain exit -- but it is not
included in the design document yet and indeed it ought to be.

Thanks for the review.

Christopher


From xen-devel-bounces@lists.xenproject.org Sat May 09 00:07:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 00:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXD1D-0005BN-8w; Sat, 09 May 2020 00:07:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7o+2=6X=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jXD1C-0005BI-IF
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 00:07:02 +0000
X-Inumbo-ID: 01cca92c-9189-11ea-a08a-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01cca92c-9189-11ea-a08a-12813bfff9fa;
 Sat, 09 May 2020 00:07:01 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E6B47217BA;
 Sat,  9 May 2020 00:07:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588982821;
 bh=SrxvNwD47o7Ig+7SER7oobjUeKIf3mAVkEkZ01FPrrA=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=hirxPktpfP3uPIPH1mPW9XSzXkH+0kx8m88lgYhTf8R4IsEMVBHrYQxV+z+bLE18C
 U4BSKj+FQXPj+cGtH8LaN10TGoZ28etY3NyMxGOOs0J4busH3aPLi+OM4KvjoKVkeb
 jcILSe7TerHOgSu8F6MJVAoXCGYDm/83SFLxxGmI=
Date: Fri, 8 May 2020 17:06:54 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 08/12] xen/arm: if is_domain_direct_mapped use native
 addresses for GICv2
In-Reply-To: <4384aed5-97cd-ef93-5512-b41c0124b072@xen.org>
Message-ID: <alpine.DEB.2.21.2005081525270.26167@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-8-sstabellini@kernel.org>
 <05375484-43f2-9d4b-205a-b9dcf4ee5d8e@xen.org>
 <alpine.DEB.2.21.2004301412460.28941@sstabellini-ThinkPad-T480s>
 <4384aed5-97cd-ef93-5512-b41c0124b072@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 1 May 2020, Julien Grall wrote:
> On 01/05/2020 02:26, Stefano Stabellini wrote:
> > On Wed, 15 Apr 2020, Julien Grall wrote:
> > > On 15/04/2020 02:02, Stefano Stabellini wrote:
> > > > Today we use native addresses to map the GICv2 for Dom0 and fixed
> > > > addresses for DomUs.
> > > > 
> > > > This patch changes the behavior so that native addresses are used for
> > > > any domain that is_domain_direct_mapped.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > > ---
> > > >    xen/arch/arm/domain_build.c | 10 +++++++---
> > > >    xen/arch/arm/vgic-v2.c      | 12 ++++++------
> > > >    xen/arch/arm/vgic/vgic-v2.c |  2 +-
> > > >    xen/include/asm-arm/vgic.h  |  1 +
> > > >    4 files changed, 15 insertions(+), 10 deletions(-)
> > > > 
> > > > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > > > index 627e0c5e8e..303bee60f6 100644
> > > > --- a/xen/arch/arm/domain_build.c
> > > > +++ b/xen/arch/arm/domain_build.c
> > > > @@ -1643,8 +1643,12 @@ static int __init make_gicv2_domU_node(struct
> > > > kernel_info *kinfo)
> > > >        int res = 0;
> > > >        __be32 reg[(GUEST_ROOT_ADDRESS_CELLS + GUEST_ROOT_SIZE_CELLS) *
> > > > 2];
> > > >        __be32 *cells;
> > > > +    struct domain *d = kinfo->d;
> > > > +    char buf[38];
> > > >    -    res = fdt_begin_node(fdt,
> > > > "interrupt-controller@"__stringify(GUEST_GICD_BASE));
> > > > +    snprintf(buf, sizeof(buf), "interrupt-controller@%"PRIx64,
> > > > +             d->arch.vgic.dbase);
> > > > +    res = fdt_begin_node(fdt, buf);
> > > >        if ( res )
> > > >            return res;
> > > >    @@ -1666,9 +1670,9 @@ static int __init make_gicv2_domU_node(struct
> > > > kernel_info *kinfo)
> > > >          cells = &reg[0];
> > > >        dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
> > > > GUEST_ROOT_SIZE_CELLS,
> > > > -                       GUEST_GICD_BASE, GUEST_GICD_SIZE);
> > > > +                       d->arch.vgic.dbase, GUEST_GICD_SIZE);
> > > >        dt_child_set_range(&cells, GUEST_ROOT_ADDRESS_CELLS,
> > > > GUEST_ROOT_SIZE_CELLS,
> > > > -                       GUEST_GICC_BASE, GUEST_GICC_SIZE);
> > > > +                       d->arch.vgic.cbase, GUEST_GICC_SIZE);
> > > 
> > > You can't use the native base address but not the native size. In
> > > particular, this is going to break any GIC using the 8KB alias.
> > 
> > I don't follow why it could cause problems with a GIC using the 8KB
> > alias. Maybe an example (even a fake example) would help.
> 
> The GICC interface is composed of two 4KB pages. In some implementations,
> each page starts at a 64KB-aligned address. Each page is also aliased every
> 4KB within its 64KB region.
> 
> For guests, we don't expose the full 128KB region but only part of it (8KB).
> So the guest interface is the same regardless of the underlying
> implementation of the GIC.
> 
> vgic.cbase points to the beginning of the first region. So what you are
> mapping is the first 8KB of the first region. The second region is not mapped
> at all.
> 
> As the pages are aliased, the trick we use is to map from vgic.cbase + 60KB
> (vgic_v2.hw.aliased_offset). This means the two pages will then be
> contiguous in guest physical memory.

I understood the issue and I fixed it by applying
vgic_v2.hw.aliased_offset to vbase and cbase. (Although only vbase is
actually necessary as far as I can tell.)
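To make the aliasing trick concrete, here is a minimal standalone sketch of the arithmetic (the constants, names, and the example base address are assumptions for illustration; the real computation lives in Xen's gic-v2/vgic-v2 code):

```c
#include <assert.h>
#include <stdint.h>

#define SZ_4K   0x1000UL
#define SZ_64K  0x10000UL
#define GUEST_GICC_SIZE (2 * SZ_4K)   /* guests always see an 8KB GICC */

/*
 * On hardware where each of the two GICC 4KB pages sits at the start of
 * its own 64KB region (and is aliased every 4KB within it), mapping the
 * guest's 8KB GICC from cbase + 60KB makes the last alias of page 0 and
 * the first copy of page 1 contiguous in guest physical memory.
 */
static uint64_t gicc_guest_map_base(uint64_t cbase)
{
    uint64_t aliased_offset = SZ_64K - SZ_4K;   /* 60KB */
    return cbase + aliased_offset;
}
```

Mapping GUEST_GICC_SIZE bytes from `gicc_guest_map_base(cbase)` then covers one alias of each hardware page back to back.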


From xen-devel-bounces@lists.xenproject.org Sat May 09 00:07:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 00:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXD1E-0005Bh-H6; Sat, 09 May 2020 00:07:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/fco=6X=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1jXD1D-0005BT-MO
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 00:07:03 +0000
X-Inumbo-ID: 029ef5e4-9189-11ea-9887-bc764e2007e4
Received: from NAM10-BN7-obe.outbound.protection.outlook.com (unknown
 [40.107.92.79]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 029ef5e4-9189-11ea-9887-bc764e2007e4;
 Sat, 09 May 2020 00:07:03 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Zme4ZBlaCjpSqGiNjvCVs7dyL0ePyKo3wqnZxDJwUnAcK3pc1j97k/Jx/owzGXcl20Y/LsNgoHCBPogfOhsjXVDyZou/B3LElhmOPnpSq+yYeqok85syuWPFHB+WM+Tbf7GAH1dMhxwtZRxJjwSC/M2O12nAklK0h26914itUtd9agMM4TcdC73CEqBscD+05Sarac0CY1NiyP3F9JJB39IsRdU/4dFVuTXmJ/245AFHWUNRwN0XCgh8eNyvTj6MfiBJS2BgkUbyPxc/qkJfzWw0lNlGNY1ho4BTHYH2o21Z9HGFIb4JIBBylJM0OWxw2zZMgJcZHshtgSEpvfvnuQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZH0sWIKra/opr9d1Bhi9Dt35s70uZNG4dRQKOY59tIA=;
 b=QmX+03nsLzEHP8NDNeQDdbhzkxYhWjZ8zeBDLtmjYSjigs67FWKaYcIyx/KVMHEn59NZVj8NEmyp8OtQB0e/0fU2be0oTJFR0R2CphSeZlA7uKXZLWOjASnZqtaIOtEGkKSMzpFU6lLkoNaaHSA0lDXRvUPUa/QjDEWePwzuNsaY0ewYuKvlwGKHCqM9dhbCkhX7LKyflzS/tq9z3Cbi91fkdGTRpaemkg4y05cRQjcKBSR20242sNCfVVivh9jNEeR57r3ivQgmcdGlMb9RjAxYsZ4xDZCVP0+f/8uFcsQF2DWKzaIT97sU4yo8Uf0SfQroFdVnU9hX//egYoIX2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=epam.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZH0sWIKra/opr9d1Bhi9Dt35s70uZNG4dRQKOY59tIA=;
 b=DQ9mzv/yZQ0oASqiM9G/O6BLXdqBtusc7OZEyyUwbWlLfuwuhTwYXE7gt6/6yFJxuc25ocpX05eUyHA0FavlXmsl2Qdhiq1kGAKFYKKhGeSL9Gs6l3k9/asKuCddZ7GiDp4Ve1q66hN7CfH7nPB7YWSCehZIksz7BDdAGsXgT8Y=
Received: from CY4PR06CA0038.namprd06.prod.outlook.com (2603:10b6:903:77::24)
 by CH2PR02MB6950.namprd02.prod.outlook.com (2603:10b6:610:5c::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.33; Sat, 9 May
 2020 00:06:58 +0000
Received: from CY1NAM02FT022.eop-nam02.prod.protection.outlook.com
 (2603:10b6:903:77:cafe::c1) by CY4PR06CA0038.outlook.office365.com
 (2603:10b6:903:77::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.26 via Frontend
 Transport; Sat, 9 May 2020 00:06:58 +0000
Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 CY1NAM02FT022.mail.protection.outlook.com (10.152.75.185) with Microsoft SMTP
 Server id 15.20.2979.29 via Frontend Transport; Sat, 9 May 2020 00:06:58
 +0000
Received: from [149.199.38.66] (port=58476 helo=xsj-pvapsmtp01)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jXD0w-0000AB-BR; Fri, 08 May 2020 17:06:46 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jXD17-0002lm-S7; Fri, 08 May 2020 17:06:57 -0700
Received: from xsj-pvapsmtp01 (mailman.xilinx.com [149.199.38.66])
 by xsj-smtp-dlp2.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 04906oPU027484; 
 Fri, 8 May 2020 17:06:51 -0700
Received: from [172.19.2.220] (helo=localhost)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <stefanos@xilinx.com>)
 id 1jXD10-0002kr-SF; Fri, 08 May 2020 17:06:50 -0700
Date: Fri, 8 May 2020 17:06:50 -0700 (PDT)
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 09/12] xen/arm: if is_domain_direct_mapped use native
 addresses for GICv3
In-Reply-To: <ab281b4d-c78f-15c3-57d3-85d0cef7bec8@xen.org>
Message-ID: <alpine.DEB.2.21.2005081458530.26167@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-9-sstabellini@kernel.org>
 <923411c5-37d4-c86e-c5a8-8acd8a6830e7@xen.org>
 <alpine.DEB.2.21.2004301613220.28941@sstabellini-ThinkPad-T480s>
 <ab281b4d-c78f-15c3-57d3-85d0cef7bec8@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(7916004)(346002)(396003)(136003)(376002)(39860400002)(46966005)(33430700001)(8676002)(8936002)(6916009)(70586007)(70206006)(9786002)(336012)(426003)(82310400002)(47076004)(33440700001)(82740400003)(356005)(478600001)(81166007)(53546011)(26005)(33716001)(4326008)(186003)(107886003)(316002)(44832011)(54906003)(9686003)(2906002)(5660300002);
 DIR:OUT; SFP:1101; 
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6406708f-257e-4951-6315-08d7f3ace3ff
X-MS-TrafficTypeDiagnostic: CH2PR02MB6950:
X-Microsoft-Antispam-PRVS: <CH2PR02MB695010FF5712BDB7A2145D0BA0A30@CH2PR02MB6950.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-Forefront-PRVS: 03982FDC1D
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: rFm//OQDTs+cyb+jcewL7XtgezIvBWYSw0sK8w5Bj+rKG4FQITha15M2w8bccis7qocct7r5m57FnuPnIjF4iLzbKWucobmcD218eQVajGU03IuViFkczzQaNODZUqBXF1ny0Qv6M6lotQ4NPolfeYYP3H3fNBrbZoSsq1cxv6xqsv2TMD5SmNsTVcD87oVm/x+MlCoc+ajQ1KDoO5PIS7kbAMVPSVXVJ6hjU9P4dFfC+JEq0Y/32b67EDHfzLJNu+e7B68av5seugLeNLXjPCqT+5yZbZ2+XAD0ex+bZvHMr+kpn8bCyVkePo7Wsa/6cmpN0J9UOzjFeLbX+jCHNAiiMEiWwwfUUVeV+cfVVRCas9pUZF2Z3SJr2OnWkwjEi0HJaG96RHiwIC0YLvGp2aGEoxSaW8HDbm6QsoT22rvDqkP8UM7ke1Ga+gPOvtGvB6erpxxRXC87rRVT1zt3FLO67jXprM1RcuktCNAf921i9BvSlKPvK5is1yn/VuINSSXim2GfcOQA6x7WPSU9DH1XKnqETk/5Xhq/ELHtc8tL/geFO0A0EAd/xIGksyVM4iuxI2EPXCcLpFfwa8apog==
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2020 00:06:58.1234 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6406708f-257e-4951-6315-08d7f3ace3ff
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR02MB6950
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 1 May 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 01/05/2020 02:31, Stefano Stabellini wrote:
> > On Wed, 15 Apr 2020, Julien Grall wrote:
> > > > diff --git a/xen/arch/arm/vgic-v3.c b/xen/arch/arm/vgic-v3.c
> > > > index 4e60ba15cc..4cf430f865 100644
> > > > --- a/xen/arch/arm/vgic-v3.c
> > > > +++ b/xen/arch/arm/vgic-v3.c
> > > > @@ -1677,13 +1677,25 @@ static int vgic_v3_domain_init(struct domain *d)
> > > 
> > > 
> > > I think you also want to modify vgic_v3_max_rdist_count().
> > 
> > I don't think so: domUs, even direct-mapped ones, still only get 1 rdist
> > region. This patch does not change the layout of the domU GIC; it only
> > finds a "hole" in the physical address space to make sure there are no
> > conflicts (or at least to minimize the chance of conflicts).
> 
> How do you know the "hole" is big enough?
> 
> > 
> > > >         * Domain 0 gets the hardware address.
> > > >         * Guests get the virtual platform layout.
> > > 
> > > This comment needs to be updated.
> > 
> > Yep, I'll do that
> > 
> > 
> > > >         */
> > > > -    if ( is_hardware_domain(d) )
> > > > +    if ( is_domain_direct_mapped(d) )
> > > >        {
> > > >            unsigned int first_cpu = 0;
> > > > +        unsigned int nr_rdist_regions;
> > > >              d->arch.vgic.dbase = vgic_v3_hw.dbase;
> > > >    -        for ( i = 0; i < vgic_v3_hw.nr_rdist_regions; i++ )
> > > > +        if ( is_hardware_domain(d) )
> > > > +        {
> > > > +            nr_rdist_regions = vgic_v3_hw.nr_rdist_regions;
> > > > +            d->arch.vgic.intid_bits = vgic_v3_hw.intid_bits;
> > > > +        }
> > > > +        else
> > > > +        {
> > > > +            nr_rdist_regions = 1;
> > > 
> > > What guarantees that the rdist region will be big enough to cater for
> > > all the re-distributors of your domain?
> > 
> > Good point. I'll add an explicit check for that with at least a warning.
> > I don't think we want to return an error because the configuration is
> > still likely to work.
> 
> No, it is not going to work. Imagine you have a guest with 3 vCPUs but the
> first re-distributor region can only cater for 2 re-distributors. How would
> it be fine to continue in that case?
> 
> For dom0, we are re-using the same regions, but possibly not all of them.
> Why can't we do that for domUs?

I implemented what you suggested.
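For concreteness, the kind of capacity check discussed above could be sketched as follows (the region layout, the 128KB-per-redistributor stride, and all names here are illustrative assumptions, not the actual Xen implementation):

```c
#include <assert.h>
#include <stdint.h>

#define GICV3_GICR_SIZE (2 * 0x10000UL)  /* 128KB per re-distributor (RD + SGI frames) */

struct rdist_region {
    uint64_t base;
    uint64_t size;
};

/*
 * Walk the hardware rdist regions (as dom0 does) and return how many of
 * them are needed to cover nr_vcpus re-distributors, or -1 if they do
 * not all fit -- in which case domain creation should fail outright
 * rather than continue with uncovered vCPUs.
 */
static int rdist_regions_needed(const struct rdist_region *regs,
                                unsigned int nr_regs, unsigned int nr_vcpus)
{
    unsigned int covered = 0, i;

    for ( i = 0; i < nr_regs; i++ )
    {
        covered += regs[i].size / GICV3_GICR_SIZE;
        if ( covered >= nr_vcpus )
            return i + 1;
    }
    return -1;
}
```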


From xen-devel-bounces@lists.xenproject.org Sat May 09 00:07:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 00:07:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXD1O-0005Dj-PQ; Sat, 09 May 2020 00:07:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7o+2=6X=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jXD1N-0005DK-Gu
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 00:07:13 +0000
X-Inumbo-ID: 088992b6-9189-11ea-b9cf-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 088992b6-9189-11ea-b9cf-bc764e2007e4;
 Sat, 09 May 2020 00:07:13 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 62B232184D;
 Sat,  9 May 2020 00:07:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1588982832;
 bh=tjhorcnghY2eQGqm8jtdi7hkjMizB/NV5vnFe7r3qB0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=tvmA8Z5PxtpE/mYTUzjg09AeAFBCebgebQ1aNTko38hVinQQsB71HQnhYh1uJOrrb
 X54Zqb1aWavq3jCxptLrsJQnJFBl6ewkNGd11CmsblvZ55Lo3FLs3pVo37LJ7fnuME
 bhMA7ANdxwBhqpXT+Wovr9x68qs7m0/dc5miaSLs=
Date: Fri, 8 May 2020 17:07:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 03/12] xen/arm: introduce 1:1 mapping for domUs
In-Reply-To: <77d2858c-768c-e2a1-e2c9-32cb1612512d@xen.org>
Message-ID: <alpine.DEB.2.21.2005081351340.26167@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-3-sstabellini@kernel.org>
 <3f26f6a0-89bd-cbec-f07f-90a08fa60e26@xen.org>
 <alpine.DEB.2.21.2004301417070.28941@sstabellini-ThinkPad-T480s>
 <77d2858c-768c-e2a1-e2c9-32cb1612512d@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 1 May 2020, Julien Grall wrote:
> On 01/05/2020 02:26, Stefano Stabellini wrote:
> > On Wed, 15 Apr 2020, Julien Grall wrote:
> > > On 15/04/2020 02:02, Stefano Stabellini wrote:
> > > > In some cases it is desirable to map domU memory 1:1 (guest physical ==
> > > > physical). For instance, because we want to assign a device to the domU
> > > > but the IOMMU is not present or cannot be used. In these cases, other
> > > > mechanisms should be used for DMA protection, e.g. an MPU.
> > > 
> > > I am not against this; however, the documentation should clearly explain
> > > that you are making your platform insecure unless you have other means
> > > for DMA protection.
> > 
> > I'll expand the docs
> > 
> > 
> > > > 
> > > > This patch introduces a new device tree option for dom0less guests to
> > > > request a domain to be directly mapped. It also specifies the memory
> > > > ranges. This patch documents the new attribute and parses it at boot
> > > > time. (However, the implementation of 1:1 mapping is missing and we
> > > > just BUG() out at the moment.) Finally, the patch sets the new direct_map
> > > > flag for DomU domains.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > > ---
> > > >    docs/misc/arm/device-tree/booting.txt | 13 +++++++
> > > >    docs/misc/arm/passthrough-noiommu.txt | 35 ++++++++++++++++++
> > > >    xen/arch/arm/domain_build.c           | 52
> > > > +++++++++++++++++++++++++--
> > > >    3 files changed, 98 insertions(+), 2 deletions(-)
> > > >    create mode 100644 docs/misc/arm/passthrough-noiommu.txt
> > > > 
> > > > diff --git a/docs/misc/arm/device-tree/booting.txt
> > > > b/docs/misc/arm/device-tree/booting.txt
> > > > index 5243bc7fd3..fce5f7ed5a 100644
> > > > --- a/docs/misc/arm/device-tree/booting.txt
> > > > +++ b/docs/misc/arm/device-tree/booting.txt
> > > > @@ -159,6 +159,19 @@ with the following properties:
> > > >        used, or GUEST_VPL011_SPI+1 if vpl011 is enabled, whichever is
> > > >        greater.
> > > >    +- direct-map
> > > > +
> > > > +    Optional. An array of integer pairs specifying addresses and sizes.
> > > > +    direct-map requests that the memory of the domain be 1:1 mapped to
> > > > +    the memory ranges specified as arguments. Only sizes that are a
> > > > +    power-of-two number of pages are allowed.
> > > > +
> > > > +- #direct-map-addr-cells and #direct-map-size-cells
> > > > +
> > > > +    The number of cells to use for the addresses and for the sizes in
> > > > +    direct-map. Default and maximum are 2 cells for both addresses and
> > > > +    sizes.
> > > > +
> > > 
> > > As this is going to be mostly used for passthrough, can't we take
> > > advantage of the partial device-tree and describe the memory region
> > > using a memory node?
> > 
> > With the system device tree bindings that are under discussion, the role
> > of the partial device tree might be reduced going forward, and it might
> > even go away in the long term. For this reason, I would prefer not to add
> > more things to the partial device tree.
> 
> Was the interface you suggested approved by the committee behind system
> device tree? If not, we will still have to support your proposal + whatever
> the committee comes up with. So I am not entirely sure why using the partial
> device-tree would be an issue.

Not yet


> It is actually better to keep everything in the partial device-tree, as it
> would avoid clashing with whatever you come up with for the system device
> tree.

I don't think we want to support both in the long term. The closer we stay
to it, the easier the transition will be.


> Also, I don't think the partial device-tree could ever go away, at least in
> Xen. This is an external interface we provide to the user; removing it would
> mean users would not be able to upgrade from Xen 4.x to 4.y without a major
> rewrite of their DT.

I don't want to put the memory ranges inside the multiboot,device-tree
module because that is clearly for device assignment:

"Device Assignment (Passthrough) is supported by adding another module,
alongside the kernel and ramdisk, with the device tree fragment
corresponding to the device node to assign to the guest."

One could do 1:1 memory mapping without device assignment.


Genuine question: did we write down any compatibility promise on that
interface? If so, do you know where? I'd like to take a look.

In any case, backward-compatible interfaces can be deprecated, although it
takes time. Alternatively, the interface could be made optional (even for
device assignment). So expanding its scope beyond device assignment
configuration is not a good idea.
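To illustrate the proposed binding, a rough sketch of parsing the direct-map (address, size) pairs with the default 2+2 cell layout might look like this (the helper names and layout here are assumptions for illustration, not code from the patch):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Fold 'cells' big-endian 32-bit device-tree cells into one integer. */
static uint64_t dt_read_cells(const uint8_t *p, unsigned int cells)
{
    uint64_t val = 0;
    unsigned int i;

    for ( i = 0; i < cells; i++, p += 4 )
        val = (val << 32) |
              ((uint64_t)p[0] << 24 | (uint64_t)p[1] << 16 |
               (uint64_t)p[2] << 8  | (uint64_t)p[3]);
    return val;
}

struct direct_map_range {
    uint64_t addr;
    uint64_t size;
};

/*
 * Parse an array of (address, size) pairs, with
 * #direct-map-addr-cells and #direct-map-size-cells both at their
 * default of 2, as in the proposed binding. Returns the number of
 * ranges parsed, capped at max_ranges.
 */
static unsigned int parse_direct_map(const uint8_t *prop, size_t len,
                                     struct direct_map_range *out,
                                     unsigned int max_ranges)
{
    const unsigned int addr_cells = 2, size_cells = 2;
    const size_t pair = (addr_cells + size_cells) * 4;
    unsigned int n = 0;

    while ( len >= pair && n < max_ranges )
    {
        out[n].addr = dt_read_cells(prop, addr_cells);
        out[n].size = dt_read_cells(prop + addr_cells * 4, size_cells);
        prop += pair;
        len -= pair;
        n++;
    }
    return n;
}
```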


From xen-devel-bounces@lists.xenproject.org Sat May 09 00:07:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 00:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXD1h-0005IV-2o; Sat, 09 May 2020 00:07:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/fco=6X=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1jXD1f-0005HV-DR
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 00:07:31 +0000
X-Inumbo-ID: 12c47b24-9189-11ea-ae69-bc764e2007e4
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (unknown
 [40.107.94.72]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12c47b24-9189-11ea-ae69-bc764e2007e4;
 Sat, 09 May 2020 00:07:30 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R7HVosScciPnnBGfHsRzlbBRGX/aCtR7cEwz97ZZXRN4ZlPlWmYJsgcc4rpPlo6NU5jGqfCZg2rj8eRn4XOcvoR26Mgs14uDfd1jZsOQwF90ZRmPEhyA7xwVJPWeBgh/+NcNvJHX0R7PNiU3vON4gKW9GonyFuGNVSpnOwW9nhGeMeD+Gib5GpP/UgYjTSAp7JicQObblifRvllTMWiFCztGb+fNGUnitXsIauUufWOfrPd2cAHqYRwcUl+M7eYyB1HIRPCny4DCMSuXEOSeRW+LYTk+lh2XWV9MjbngeD7s2+KJLkGZnAV32dLtlCwUJwmx66slWZ9c68N6X4VINw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XojeCPjyUHJaldrlTDJ9nO2YyaD6JRL0I9Lrsh9Rm68=;
 b=Qlm4O3ErfjYrX1XZvmYvwGemPUZEYnqtgAKazOBSEwyWAW/UX4CIUkos/AfkDsTDszRWw43zFBIJ3UkFXgfznx6gNM7VmESpCNyTL4Vqm7AYfm56cC0uVuHxFdvxoINMYnIIFlMwYYqMLJI6eEjfSdFhJSTPxKyp2L25Hn03+ioF02sfAQPmBH1y4spknw5Nyj5Yc2SkK1puKq+GNw+GIxTI2+ZFg5DidjmW87Osu9u778fCx+0cyeCgsBengWRf1ryL7etmlXXnX7WLuCkTct7NEMmmhC4EVr/U09KE7+5BlVHIlhDOMu+w27q5m6RmJLtWE0SczIZ9Ob6zTt6FbA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=epam.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XojeCPjyUHJaldrlTDJ9nO2YyaD6JRL0I9Lrsh9Rm68=;
 b=KEXuF2Z01JnXYIAQ6nGZZZBK/1iJHepgImTviyNsVycUFakgTKghPWXpXJ987GS+Diz/uZWNtnZZxwpT8YE17ZgZCjFjtxeLIes3BbmjvO3yjqH9twWDc8oVq4Z4GoxIBPmc2f1QbkHWr3m+V6/SqW4c46RlwjVO1xzs2+XNt9E=
Received: from DM5PR13CA0003.namprd13.prod.outlook.com (2603:10b6:3:23::13) by
 DM6PR02MB5355.namprd02.prod.outlook.com (2603:10b6:5:47::20) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.2958.20; Sat, 9 May 2020 00:07:29 +0000
Received: from CY1NAM02FT058.eop-nam02.prod.protection.outlook.com
 (2603:10b6:3:23:cafe::fc) by DM5PR13CA0003.outlook.office365.com
 (2603:10b6:3:23::13) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.12 via Frontend
 Transport; Sat, 9 May 2020 00:07:29 +0000
Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 CY1NAM02FT058.mail.protection.outlook.com (10.152.74.149) with Microsoft SMTP
 Server id 15.20.2979.29 via Frontend Transport; Sat, 9 May 2020 00:07:28
 +0000
Received: from [149.199.38.66] (port=58649 helo=xsj-pvapsmtp01)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jXD1Q-0000Aj-JC; Fri, 08 May 2020 17:07:16 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jXD1c-0002nr-3s; Fri, 08 May 2020 17:07:28 -0700
Received: from xsj-pvapsmtp01 (xsj-pvapsmtp01.xilinx.com [149.199.38.66])
 by xsj-smtp-dlp2.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 04907I0P027994; 
 Fri, 8 May 2020 17:07:18 -0700
Received: from [172.19.2.220] (helo=localhost)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <stefanos@xilinx.com>)
 id 1jXD1S-0002nd-39; Fri, 08 May 2020 17:07:18 -0700
Date: Fri, 8 May 2020 17:07:17 -0700 (PDT)
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 10/12] xen/arm: if is_domain_direct_mapped use native
 UART address for vPL011
In-Reply-To: <7176c924-eb16-959e-53cd-c73db88f65db@xen.org>
Message-ID: <alpine.DEB.2.21.2005081601400.26167@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-10-sstabellini@kernel.org>
 <05b46414-12c3-5f79-f4b1-46cf8750d28c@xen.org>
 <alpine.DEB.2.21.2004301319380.28941@sstabellini-ThinkPad-T480s>
 <7176c924-eb16-959e-53cd-c73db88f65db@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(7916004)(346002)(136003)(376002)(396003)(39860400002)(46966005)(33430700001)(426003)(8676002)(8936002)(44832011)(82740400003)(81166007)(356005)(5660300002)(82310400002)(47076004)(2906002)(70206006)(26005)(6916009)(478600001)(316002)(33440700001)(9686003)(336012)(186003)(107886003)(33716001)(70586007)(9786002)(54906003)(4326008)(53546011);
 DIR:OUT; SFP:1101; 
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4fbc0ecb-0939-4738-8377-08d7f3acf605
X-MS-TrafficTypeDiagnostic: DM6PR02MB5355:
X-Microsoft-Antispam-PRVS: <DM6PR02MB5355E3BC01770FA5E64411D9A0A30@DM6PR02MB5355.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-Forefront-PRVS: 03982FDC1D
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: jJ7a4Vf4H7dhL4Hg+SOUbGPGbJzdlo07zrPex28zWCZa3f+nSfN9Le9Bp09tqX8VWI4LvNn9DZceq1EFGve+Z3BTkA6G3Eq4Nx9mzCPTzMkbCX4Ht3K3XrGZrsZIQKOsL3VO8vgUcOVesZCWgfA+gu4bT1ttYWupRnq+K935d2VwJo4tRBhoQFA49n43/crad9eG1rvp2uQD+BFRjyZtPtVxvNcZx5l5fBPASuuwJGmojL2+KZwcetVSNSJtWY2rTaUY+fbHl9uSHLtNmSMY3rSPKxz6eiKKJ5JRlHdVKTH2N+9zsVtwB9M31isgCeatCmqC8+qqTuHmR1cFQb1LVuO/+EuxnYHvOTU9hgACIGCMakQ8XcbdUD9BECFqcYGqhJS8i5ppwmhC/k+R58FpNfImMYA+S7BunC5jHRJgyvL1rh6V8Cq26JfxqvSzAIdOOBXYAjQptHwU5ZFB0I2JJ2mh6P4KF7bYZwqw5u8aShXxn7wXJCPMjGKWfQJrrNonK2HtHU0ccfBZ3aqegMc6YH4OWrGcFnQlaJsHIzo5/fh27TOYs6KyVsWo5oHqo7IoWJ+lmTb8EtHn+DRTV104ug==
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 09 May 2020 00:07:28.3625 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4fbc0ecb-0939-4738-8377-08d7f3acf605
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR02MB5355
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 1 May 2020, Julien Grall wrote:
> On 01/05/2020 02:26, Stefano Stabellini wrote:
> > On Wed, 15 Apr 2020, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 15/04/2020 02:02, Stefano Stabellini wrote:
> > > > We always use a fixed address to map the vPL011 to domains. The address
> > > > could be a problem for domains that are directly mapped.
> > > > 
> > > > Instead, for domains that are directly mapped, reuse the address of the
> > > > physical UART on the platform to avoid potential clashes.
> > > 
> > > How do you know the physical UART MMIO region is big enough to fit the
> > > PL011?
> > 
> > That cannot happen because the vPL011 MMIO size is 1 page, which is the
> > minimum, right?
> 
> No, there are platforms out there with multiple UARTs in the same page (see
> sunxi for instance).

But if there are multiple UARTs sharing the same page, and the first one
is used by Xen, there is no way to assign one of the secondary UARTs to
a domU. So there would be no problem choosing the physical UART address
for the virtual PL011.
 
 
> > > What if the user wants to assign the physical UART to the domain + the
> > > vpl011?
> > 
> > A user can assign a UART to the domain, but it cannot assign the UART
> > used by Xen (DTUART), which is the one we are using here to get the
> > physical information.
> > 
> > 
> > (If there is no UART used by Xen then we'll fall back to the virtual
> > addresses. If they conflict we return error and let the user fix the
> > configuration.)
> 
> If there is no UART in Xen, how will the user know that the addresses
> conflicted? Earlyprintk?

That's a good question. Yes, I think earlyprintk would be the only way.


From xen-devel-bounces@lists.xenproject.org Sat May 09 06:57:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 06:57:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXJPY-00032y-1n; Sat, 09 May 2020 06:56:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqM1=6X=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXJPV-00032p-TY
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 06:56:33 +0000
X-Inumbo-ID: 36cec176-91c2-11ea-a0b8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 36cec176-91c2-11ea-a0b8-12813bfff9fa;
 Sat, 09 May 2020 06:56:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kEh4XF6oiT07673javSuQKv/+X+UxZqgDbBEBiLRVv0=; b=dwQJTCWMRII9/KI/aKxoHAoUi
 UPhLFX4+UT7dpQFC1zXs4XewyjH15sHo99EM4y+kKiKFijuAUP12nYxnHUk1R1qFJF/V1JtnDOnil
 SToOWgUhgsyQoc/M4b7/1Qpdr3Lx4kWOOS9ST+cTaRzSSYfzms6ymHmZKIQiroKtdo2nw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXJPT-0003t1-HR; Sat, 09 May 2020 06:56:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXJPT-0003DO-8m; Sat, 09 May 2020 06:56:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXJPT-0004d1-6s; Sat, 09 May 2020 06:56:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150084-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150084: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=35b819c45c4603fdb1d400925d6b2e6f8689a9d5
X-Osstest-Versions-That: xen=8a6b1665d987d043c12dc723d758a7d2ca765264
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 09 May 2020 06:56:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150084 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150084/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 150067

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150067

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150067
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150067
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150067
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150067
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150067
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150067
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150067
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150067
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150067
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150067
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  35b819c45c4603fdb1d400925d6b2e6f8689a9d5
baseline version:
 xen                  8a6b1665d987d043c12dc723d758a7d2ca765264

Last test of basis   150067  2020-05-07 10:22:45 Z    1 days
Testing same since   150084  2020-05-08 05:26:57 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 35b819c45c4603fdb1d400925d6b2e6f8689a9d5
Author: Sergey Dyasli <sergey.dyasli@citrix.com>
Date:   Wed May 6 11:00:24 2020 +0100

    sched: print information about scheduling granularity
    
    Currently it might not be obvious which scheduling mode (e.g. core-
    scheduling) is being used by the scheduler. Alleviate this by printing
    additional information about the selected granularity per-cpupool.
    
    Note: per-cpupool granularity selection is not implemented yet. Every
          cpupool gets its granularity from the single global value.
    
    Take this opportunity to introduce struct sched_gran_name array and
    refactor sched_select_granularity().
    
    Signed-off-by: Sergey Dyasli <sergey.dyasli@citrix.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>

commit 64b1da5a2fcf37e3542c277fde194ff3e8bba2d2
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Feb 12 18:37:04 2019 +0000

    x86/svm: Use flush-by-asid when available
    
    AMD Fam15h processors introduced the flush-by-asid feature, for more fine
    grain flushing purposes.
    
    Flushing everything including ASID 0 (i.e. Xen context) is an unnecessarily
    large hammer, and never necessary in the context of guest TLBs needing
    invalidating.
    
    When available, use TLB_CTRL_FLUSH_ASID in preference to TLB_CTRL_FLUSH_ALL.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 06e6f3176804b7eabfd028ec3777a69668fad36a
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue Apr 21 18:18:08 2020 +0100

    x86/svm: Clean up vmcbcleanbits_t handling
    
    Rework the vmcbcleanbits_t definitions to use bool, drop 'fields' from the
    namespace, position the comments in an unambiguous position, and include the
    bit position.
    
    In svm_vmexit_handler(), don't bother conditionally writing ~0 or 0 based on
    hardware support.  The field was entirely unused and ignored on older
    hardware (and we're already setting reserved cleanbits anyway).
    
    In nsvm_vmcb_prepare4vmrun(), simplify the logic massively by dropping the
    vcleanbit_set() macro using a vmcbcleanbits_t local variable which only gets
    filled in the case that clean bits were valid previously.  Fix up the style on
    impacted lines.
    
    No practical change in behaviour.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 40675b4b874cb9fee0d4f0e12bb3e153ee1c135a
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu May 7 13:18:24 2020 +0200

    Arm: fix build with CONFIG_DTB_FILE set
    
    Recent changes no longer allow modification of AFLAGS. The needed
    conversion was apparently missed in 2740d96efdd3 ("xen/build: have the
    root Makefile generates the CFLAGS").
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit dd5520f9df05b84116ad57f16fff45323042cc63
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu May 7 13:15:13 2020 +0200

    x86: adjustments to guest handle treatment
    
    First of all avoid excessive conversions. copy_{from,to}_guest(), for
    example, work fine with all of XEN_GUEST_HANDLE{,_64,_PARAM}().
    
    Further
    - do_physdev_op_compat() didn't use the param form for its parameter,
    - {hap,shadow}_track_dirty_vram() wrongly used the param form,
    - compat processor Px logic failed to check compatibility of native and
      compat structures not further converted.
    
    As this eliminates all users of guest_handle_from_param() and as there's
    no real need to allow for conversions in both directions, drop the
    macros as well.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat May 09 07:47:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 07:47:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXKD8-0007DQ-2I; Sat, 09 May 2020 07:47:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqM1=6X=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXKD7-0007DL-1U
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 07:47:49 +0000
X-Inumbo-ID: 5f895b60-91c9-11ea-a0c7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f895b60-91c9-11ea-a0c7-12813bfff9fa;
 Sat, 09 May 2020 07:47:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=M+Jca/3ybmD8Dm0O5pEUdejoXlY/2gzf9J2wHxJ21yk=; b=WNJWuJxnrYLSMtRU0P4DmA6Po
 slpCQd2FAL7tLiro9l4lui2Z+vdZ5YZlksCxmaWBVMe0lovxuZMCwuYS53K6FLLt1TNTTeUyGtP/7
 KkmaGfig26ji29Dqz7OS/Y4aW4tUCxMgmxIEbqgLSi9U/XepfyxQiGnh/VZwI+jELMWdA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXKD4-0004pg-Cd; Sat, 09 May 2020 07:47:46 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXKD4-0006f5-14; Sat, 09 May 2020 07:47:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXKD4-0003BN-0R; Sat, 09 May 2020 07:47:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150085-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 150085: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:regression
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-localmigrate/x10:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
X-Osstest-Versions-That: xen=45c90737d5f0c8bf479adcd8cb88450f1998e55c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 09 May 2020 07:47:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150085 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150085/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop       fail REGR. vs. 149649

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail in 150068 pass in 150085
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail pass in 150068

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail blocked in 149649
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop fail in 150068 blocked in 149649
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop fail in 150068 blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop fail in 150068 blocked in 149649
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop  fail in 150068 like 149649
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 150068 like 149649
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149649
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149649
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
baseline version:
 xen                  45c90737d5f0c8bf479adcd8cb88450f1998e55c

Last test of basis   149649  2020-04-14 13:35:25 Z   24 days
Testing same since   150038  2020-05-05 16:06:01 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Apr 3 13:03:40 2020 +0100

    tools/xenstore: fix a use after free problem in xenstored
    
    Commit 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object
    twice") introduced a potential use after free problem in
    domain_cleanup(): after calling talloc_unlink() for domain->conn
    domain->conn is set to NULL. The problem is that domain is registered
    as talloc child of domain->conn, so it might be freed by the
    talloc_unlink() call.
    
    With Xenstore being single threaded there are normally no concurrent
    memory allocations running and freeing a virtual memory area normally
    doesn't result in that area no longer being accessible. A problem
    could occur only in case either a signal received results in some
    memory allocation done in the signal handler (SIGHUP is a primary
    candidate leading to reopening the log file), or in case the talloc
    framework would do some internal memory allocation during freeing of
    the memory (which would lead to clobbering of the freed domain
    structure).
    
    Fixes: 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object twice")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    (cherry picked from commit bb2a34fd740e9a26be9e2244f1a5b4cef439e5a8)
    (cherry picked from commit dc5176d0f9434e275e0be1df8d0518e243798beb)
    (cherry picked from commit a997ffe678e698ff2b4c89ae5a98661d12247fef)
    (cherry picked from commit 48e8564435aca590f1c292ab7bb1f3dbc6b75693)
    (cherry picked from commit 1e722e6971539eab4f484affd60490cbc8429951)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat May 09 09:46:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 09:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXM3n-0000dC-En; Sat, 09 May 2020 09:46:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqM1=6X=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXM3m-0000d7-Gc
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 09:46:18 +0000
X-Inumbo-ID: edffb4b0-91d9-11ea-a0d2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id edffb4b0-91d9-11ea-a0d2-12813bfff9fa;
 Sat, 09 May 2020 09:46:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UTpWxfEn0pA1FKErYTl+pgQz3QfZEgHokMxQiPtb0Vg=; b=zpgSne5aA/dmuIzQxucckpGZ4
 BA5ctLtmb2ZoBL+NAmX9NzUcLA57rpzwGjZH5Vn5Gd2E94Sl73bfYkyVAPrsS+J07HKk5VQtpWlko
 YlQ8vnoyR9TkAU2N7J0iHX0VCSh7ae+2ThvxYKQQo3PWcTqRRDn3Wm763wEcUB/+nSSTw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXM3l-0007Vw-9w; Sat, 09 May 2020 09:46:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXM3l-0003PI-0h; Sat, 09 May 2020 09:46:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXM3k-00072J-UB; Sat, 09 May 2020 09:46:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150086-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150086: regressions - FAIL
X-Osstest-Failures: linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-5.4:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
 linux-5.4:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
X-Osstest-Versions-That: linux=9895e0ac338a8060e6947f897397c21c4d78d80d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 09 May 2020 09:46:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150086 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150086/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 149905

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore          fail pass in 150070
 test-amd64-amd64-libvirt-vhd 17 guest-start/debian.repeat  fail pass in 150070
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 150070

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 16 guest-localmigrate fail in 150070 REGR. vs. 149905

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5
baseline version:
 linux                9895e0ac338a8060e6947f897397c21c4d78d80d

Last test of basis   149905  2020-05-02 16:38:26 Z    6 days
Testing same since   150054  2020-05-06 06:41:12 Z    3 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Aharon Landau <aharonl@mellanox.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alaa Hleihel <alaa@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Arun Easi <aeasi@marvell.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Christoph Hellwig <hch@lst.de>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@intel.com>
  David Disseldorp <ddiss@suse.de>
  David Howells <dhowells@redhat.com>
  David Sterba <dsterba@suse.com>
  Dexuan Cui <decui@microsoft.com>
  Douglas Anderson <dianders@chromium.org>
  Filipe Manana <fdmanana@suse.com>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hui Wang <hui.wang@canonical.com>
  Iuliana Prodan <iuliana.prodan@nxp.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Joerg Roedel <jroedel@suse.de>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marek Behún <marek.behun@nic.cz>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Liu <liumartin@google.com>
  Martin Wilck <mwilck@suse.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Niklas Cassel <niklas.cassel@wdc.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Paul Moore <paul@paul-moore.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rayagonda Kokatanur <rayagonda.kokatanur@broadcom.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  ryan_chen <ryan_chen@aspeedtech.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Shawn Guo <shawnguo@kernel.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sumit Semwal <sumit.semwal@linaro.org>
  Sunwook Eom <speed.eom@samsung.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Khoruzhick <anarsoul@gmail.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vinod Koul <vkoul@kernel.org>
  Wei Liu <wei.liu@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yan Zhao <yan.y.zhao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1726 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 09 09:57:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 09:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXME3-0001XL-Gi; Sat, 09 May 2020 09:56:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LYqh=6X=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jXME2-0001XG-Hl
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 09:56:54 +0000
X-Inumbo-ID: 69746630-91db-11ea-a0d4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69746630-91db-11ea-a0d4-12813bfff9fa;
 Sat, 09 May 2020 09:56:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=U7T9L4VBA06Qbk4y86VhRGArJj2r/9HhmmdCI1mU0s0=; b=ieghrjCCB+oM2V24UGQRGs4auX
 qlH/5zJWLkyfYjzSDqzx8YsO7kps3G7ln/ryTxrgOgKoKbYEQnJ3+UAhOxXNyWfmXyZcdrQlpYpge
 CuxqMaAfqdKFGS8yp3M5khn/pvjhdT99NUPGQiaFu4emJiCRbeP2l8OOtzgvNBHAX+Ik=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jXME1-0007hk-0e; Sat, 09 May 2020 09:56:53 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jXME0-00074h-Ls; Sat, 09 May 2020 09:56:52 +0000
Subject: Re: [PATCH 03/12] xen/arm: introduce 1:1 mapping for domUs
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-3-sstabellini@kernel.org>
 <3f26f6a0-89bd-cbec-f07f-90a08fa60e26@xen.org>
 <alpine.DEB.2.21.2004301417070.28941@sstabellini-ThinkPad-T480s>
 <77d2858c-768c-e2a1-e2c9-32cb1612512d@xen.org>
 <alpine.DEB.2.21.2005081351340.26167@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <df25b809-035d-aa78-664c-69855ace5f60@xen.org>
Date: Sat, 9 May 2020 10:56:50 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005081351340.26167@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 09/05/2020 01:07, Stefano Stabellini wrote:
> On Fri, 1 May 2020, Julien Grall wrote:
>> On 01/05/2020 02:26, Stefano Stabellini wrote:
>>> On Wed, 15 Apr 2020, Julien Grall wrote:
>>>> On 15/04/2020 02:02, Stefano Stabellini wrote:
>>>>> In some cases it is desirable to map domU memory 1:1 (guest physical ==
>>>>> physical.) For instance, because we want to assign a device to the domU
>>>>> but the IOMMU is not present or cannot be used. In these cases, other
>>>>> mechanisms should be used for DMA protection, e.g. a MPU.
>>>>
>>>> I am not against this, however the documentation should clearly explain
>>>> that
>>>> you are making your platform insecure unless you have other mean for DMA
>>>> protection.
>>>
>>> I'll expand the docs
>>>
>>>
>>>>>
>>>>> This patch introduces a new device tree option for dom0less guests to
>>>>> request a domain to be directly mapped. It also specifies the memory
>>>>> ranges. This patch documents the new attribute and parses it at boot
>>>>> time. (However, the implementation of 1:1 mapping is missing and just
>>>>> BUG() out at the moment.)  Finally the patch sets the new direct_map
>>>>> flag for DomU domains.
>>>>>
>>>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>>> ---
>>>>>     docs/misc/arm/device-tree/booting.txt | 13 +++++++
>>>>>     docs/misc/arm/passthrough-noiommu.txt | 35 ++++++++++++++++++
>>>>>     xen/arch/arm/domain_build.c           | 52
>>>>> +++++++++++++++++++++++++--
>>>>>     3 files changed, 98 insertions(+), 2 deletions(-)
>>>>>     create mode 100644 docs/misc/arm/passthrough-noiommu.txt
>>>>>
>>>>> diff --git a/docs/misc/arm/device-tree/booting.txt
>>>>> b/docs/misc/arm/device-tree/booting.txt
>>>>> index 5243bc7fd3..fce5f7ed5a 100644
>>>>> --- a/docs/misc/arm/device-tree/booting.txt
>>>>> +++ b/docs/misc/arm/device-tree/booting.txt
>>>>> @@ -159,6 +159,19 @@ with the following properties:
>>>>>         used, or GUEST_VPL011_SPI+1 if vpl011 is enabled, whichever is
>>>>>         greater.
>>>>>     +- direct-map
>>>>> +
>>>>> +    Optional. An array of integer pairs specifying addresses and sizes.
>>>>> +    direct_map requests the memory of the domain to be 1:1 mapped with
>>>>> +    the memory ranges specified as argument. Only sizes that are a
>>>>> +    power of two number of pages are allowed.
>>>>> +
>>>>> +- #direct-map-addr-cells and #direct-map-size-cells
>>>>> +
>>>>> +    The number of cells to use for the addresses and for the sizes in
>>>>> +    direct-map. Default and maximum are 2 cells for both addresses and
>>>>> +    sizes.
>>>>> +
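[Editorial illustration, not part of the quoted patch: a hypothetical dom0less domU node using the proposed binding. Node layout loosely follows the existing docs/misc/arm/device-tree/booting.txt conventions; all names and values here are made up for illustration.]

```dts
/* Illustrative sketch only: a dom0less domU requesting a 1:1
 * mapping of 256MB at guest/host address 0x60000000.
 * The size must be a power-of-two number of pages. */
chosen {
    domU1 {
        compatible = "xen,domain";
        cpus = <1>;
        memory = <0x0 0x40000>;        /* 256MB, expressed in KB */

        #direct-map-addr-cells = <2>;  /* 2 cells per address (default) */
        #direct-map-size-cells = <2>;  /* 2 cells per size (default) */
        /* one <address size> pair: 0x60000000, 0x10000000 (256MB) */
        direct-map = <0x0 0x60000000 0x0 0x10000000>;
    };
};
```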
>>>>
>>>> As this is going to be mostly used for passthrough, can't we take
>>>> advantage of
>>>> the partial device-tree and describe the memory region using memory node?
>>>
>>> With the system device tree bindings that are under discussion the role
>>> of the partial device tree might be reduce going forward, and might even
>>> go away in the long term. For this reason, I would prefer not to add
>>> more things to the partial device tree.
>>
>> Was the interface you suggested approved by the committee behind system device
>> tree? If not, we will still have to support your proposal + whatever the
>> committee come up with. So I am not entirely sure why using the partial
>> device-tree will be an issue.
> 
> Not yet

This answer...

> 
> 
>> It is actually better to keep everything in the partial device-tree as it
>> would avoid to clash with whatever you come up with the system device tree.
> 
> I don't think we want to support both in the long term. The closer we
> are to it the better for transitioning.

... raises the question of how your solution is going to be closer. Do you 
mind providing more details on the system device-tree?

> 
> 
>> Also, I don't think the partial device-tree could ever go away at least in
>> Xen. This is an external interface we provide to the user, removing it would
>> mean users would not be able to upgrade from Xen 4.x to 4.y without any major
>> rewrite of there DT.
> 
> I don't want to put the memory ranges inside the multiboot,device-tree
> module because that is clearly for device assignment:
> 
> "Device Assignment (Passthrough) is supported by adding another module,
> alongside the kernel and ramdisk, with the device tree fragment
> corresponding to the device node to assign to the guest."

Thank you for copying the documentation here... As you know, when the 
partial device-tree was designed, it was focused only on device 
assignment. However, I don't see how this prevents us from extending it 
to more use cases.

Describing the RAM regions in the partial device-tree means you have a 
single place where you can understand the memory layout of your guest.

You also have much more flexibility for describing your guests than via 
the /chosen node, and you avoid clashing with the rest of the host 
device-tree.

> 
> One could do 1:1 memory mapping without device assignment.
 >
> Genuine question: did we write down any compatibility promise on that
> interface? If so, do you know where? I'd like to take a look.

Nothing is written down in Xen; however, Device-Tree bindings are meant 
to be stable.

It would be a pretty bad user experience if you had to rewrite your 
device-tree when upgrading from Xen 4.14 to Xen 5.x. It would also make 
roll-back more difficult, as there would be more dependencies between 
components.

> In any case backward compatible interfaces can be deprecated although it
> takes time. Alternatively it could be made optional (even for device
> assignment). So expanding its scope beyond device assignment
> configuration it is not a good idea.

What would be the replacement? I still haven't seen any sign of the 
so-called system device-tree.

At the moment, I can better picture how this can work with the partial 
device-tree. One of the advantages is that you could describe your guest 
layout in one place and then re-use it to boot a guest from either the 
toolstack (not implemented yet) or the hypervisor.

I could change my mind if it turns out to be genuinely more complicated 
to implement in Xen and/or you provide more details on how this is going 
to work out with the system device-tree.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 09 10:12:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 10:12:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXMSd-0003G9-1W; Sat, 09 May 2020 10:11:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LYqh=6X=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jXMSc-0003G4-38
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 10:11:58 +0000
X-Inumbo-ID: 840e224a-91dd-11ea-a0d8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 840e224a-91dd-11ea-a0d8-12813bfff9fa;
 Sat, 09 May 2020 10:11:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xuXBYx3OsRINyPoJ3zzADil0gvkKS1x7O3jX++2b3mA=; b=Rc93M1zRpGrI/2qbCX0nr9TkAA
 BOYVVTaYXKJr51QwTiROdacUGRfSzAKREN08kEEu1irIVi8PFeCS24XcXgfGqUc/R888p8/TjsWGO
 0CYEgiO2wyhW9qpbScuCI2eZoKpUVAw/iGSRUOT11W7sUAlcSRxvdVFkgfrDf9D/XN4c=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jXMSb-00084G-F6; Sat, 09 May 2020 10:11:57 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jXMSb-0008Gf-8J; Sat, 09 May 2020 10:11:57 +0000
Subject: Re: [PATCH 10/12] xen/arm: if is_domain_direct_mapped use native UART
 address for vPL011
To: Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-10-sstabellini@kernel.org>
 <05b46414-12c3-5f79-f4b1-46cf8750d28c@xen.org>
 <alpine.DEB.2.21.2004301319380.28941@sstabellini-ThinkPad-T480s>
 <7176c924-eb16-959e-53cd-c73db88f65db@xen.org>
 <alpine.DEB.2.21.2005081601400.26167@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <8c01cb1a-0745-3eca-a45d-09c9297163ce@xen.org>
Date: Sat, 9 May 2020 11:11:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005081601400.26167@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano,

On 09/05/2020 01:07, Stefano Stabellini wrote:
> On Fri, 1 May 2020, Julien Grall wrote:
>> On 01/05/2020 02:26, Stefano Stabellini wrote:
>>> On Wed, 15 Apr 2020, Julien Grall wrote:
>>>> Hi Stefano,
>>>>
>>>> On 15/04/2020 02:02, Stefano Stabellini wrote:
>>>>> We always use a fix address to map the vPL011 to domains. The address
>>>>> could be a problem for domains that are directly mapped.
>>>>>
>>>>> Instead, for domains that are directly mapped, reuse the address of the
>>>>> physical UART on the platform to avoid potential clashes.
>>>>
>>>> How do you know the physical UART MMIO region is big enough to fit the
>>>> PL011?
>>>
>>> That cannot be because the vPL011 MMIO size is 1 page, which is the
>>> minimum right?
>>
>> No, there are platforms out with multiple UARTs in the same page (see sunxi
>> for instance).
> 
> But if there are multiple UARTs sharing the same page, and the first one
> is used by Xen, there is no way to assign one of the secondary UARTs to
> a domU. So there would be no problem choosing the physical UART address
> for the virtual PL011.

AFAICT, nothing prevents a user from assigning such a UART to a dom0less 
guest today. It would not be safe, but it should work.

If you want to make it safe, then you would need to trap the MMIO 
accesses so they can be sanitized. For a UART device, I don't think the 
overhead would be too bad.

Anyway, the only thing I request is to add a sanity check in the code to 
help the user diagnose any potential clash.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 09 12:43:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 12:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXOpJ-00073b-Sz; Sat, 09 May 2020 12:43:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqM1=6X=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXOpI-00073W-Nn
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 12:43:32 +0000
X-Inumbo-ID: b07df7fa-91f2-11ea-a0e7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b07df7fa-91f2-11ea-a0e7-12813bfff9fa;
 Sat, 09 May 2020 12:43:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NoP1VZ4F1qw0Unacz3u7mox1NKM+jNr5sj3LRDCv09A=; b=gzK2+zDAVtOJrTPK7bQTIQmxG
 yVU8n0nozpM3nrYlQjjdjp0F2eCfocjoG+gQwT+JppYtygrpXIb0BEHzu1CBkMj0CkRW34pO8oExL
 DLb4kkJcSHgMHehCKUWtgoBA6Kh4yl7bkIKn4O1nAdRxa7GguZ1DRF9ALGxbEe1jTPXko=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXOpH-0002Lu-JA; Sat, 09 May 2020 12:43:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXOpH-0003eT-AH; Sat, 09 May 2020 12:43:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXOpH-00033z-9c; Sat, 09 May 2020 12:43:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150093-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150093: all pass - PUSHED
X-Osstest-Versions-This: ovmf=c8543b8d830d22882dab4ece47f0413f9c6eb431
X-Osstest-Versions-That: ovmf=3a3713e62cfad00d78bb938b0d9fb1eedaeff314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 09 May 2020 12:43:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150093 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150093/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 c8543b8d830d22882dab4ece47f0413f9c6eb431
baseline version:
 ovmf                 3a3713e62cfad00d78bb938b0d9fb1eedaeff314

Last test of basis   150082  2020-05-08 04:09:39 Z    1 days
Testing same since   150093  2020-05-08 21:10:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Leif Lindholm <leif@nuviainc.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Sean Brogan <sean.brogan@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3a3713e62c..c8543b8d83  c8543b8d830d22882dab4ece47f0413f9c6eb431 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 09 14:31:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 14:31:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXQUy-0007b0-Nm; Sat, 09 May 2020 14:30:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqM1=6X=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXQUx-0007av-QS
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 14:30:39 +0000
X-Inumbo-ID: a326c078-9201-11ea-a0ff-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a326c078-9201-11ea-a0ff-12813bfff9fa;
 Sat, 09 May 2020 14:30:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EMCFxcTUcrxy5uQJlvxAT0HNf5BtVmk5HWbSfSWJDRo=; b=sbmOKkLE4UiDwhLMBm/nb/MGn
 g00cwGvVBZvzz4xbhXEX4pXn8NTi1X0bzwjNuD52byaGgl5nGEbDe0uWyaOaTsEaMdIUt50iweqni
 RfwTLsyFYowEqRH+kOzKBN6Np3aCY+b3/DyMOjVey8I50VBq3126nfC90iG48E2YV2I1s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXQUp-0004LP-Km; Sat, 09 May 2020 14:30:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXQUp-0002aR-6l; Sat, 09 May 2020 14:30:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXQUp-000810-6A; Sat, 09 May 2020 14:30:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150089-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150089: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=c88f1ffc19e38008a1c33ae039482a860aa7418c
X-Osstest-Versions-That: qemuu=570a9214827e3d42f7173c4d4c9f045b99834cf0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 09 May 2020 14:30:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150089 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150089/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 150061

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150061
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150061
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150061
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150061
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150061
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150061
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c88f1ffc19e38008a1c33ae039482a860aa7418c
baseline version:
 qemuu                570a9214827e3d42f7173c4d4c9f045b99834cf0

Last test of basis   150061  2020-05-06 22:06:59 Z    2 days
Failing since        150077  2020-05-07 15:37:29 Z    1 days    2 attempts
Testing same since   150089  2020-05-08 17:52:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alberto Garcia <berto@igalia.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Chen Qun <kuhn.chenqun@huawei.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Greg Kurz <groug@kaod.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Nicholas Piggin <npiggin@gmail.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Tong Ho <tong.ho@xilinx.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Wei Wang <wei.w.wang@intel.com>
  Yi Sun <yi.y.sun@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    




Not pushing.

(No revision log; it would be 1649 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 09 14:37:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 14:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXQbI-0007nf-E2; Sat, 09 May 2020 14:37:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=b8++=6X=gmail.com=rikard.falkeborn@srs-us1.protection.inumbo.net>)
 id 1jXPq8-0003d0-Ug
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 13:48:29 +0000
X-Inumbo-ID: c2386058-91fb-11ea-ae69-bc764e2007e4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2386058-91fb-11ea-ae69-bc764e2007e4;
 Sat, 09 May 2020 13:48:27 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id a21so4609892ljb.9
 for <xen-devel@lists.xenproject.org>; Sat, 09 May 2020 06:48:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=97riR7h5mctZpNCGSCsUzKUdKKhONjRhdoAsaVf8nyw=;
 b=aBClKuiBIJWNgNo6gaPKpc6o8Bx2D2tB1J4jkUiu91comKnJCm77qMPje6ENn0inC7
 6q/P3ambDpvAHWkzRBxKtcy79WwPgS5TRjDdfsMoAfXlKRR7hPwn8M78VW2yg+FdwZSo
 7wn9gyzcJN/ewxvhBC9cq/JwxQSeLTAbLCpWJRm6Dw50kqa39v/wjQkqT38YW8Zv2Q3R
 CpvIV4tJEGnGc50ZCCTEfVJ+g8LVtNF4TbzWBsv1P8848fFZsPBp2RSsCuoOomZ42NgY
 7jrHoG6JbxAgk0QiB+kFGs0XNvQB9AgMDbIfNr7PGv34g2nncRK1e5kYDyfGr6Cx/7mj
 3P4A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=97riR7h5mctZpNCGSCsUzKUdKKhONjRhdoAsaVf8nyw=;
 b=UNLIL4q4aSfmaye5xv7luQG8mskoHGGUgddCxZSY3vFKfzZ554IKXo/PdsiuXVR5sg
 VJ6L65C960j804iwg4mQuEPX+YxF2pt0JYYivludZfhP9NzsW9Wwee2I99d1tZ4khS9H
 hl6/M4xYHXX5LlqXq0+XSWHsovC/9noZ42ptvpf/1tXPehyzPoRk8PMSrJ4NlYvSheXh
 EyY85sQDG8gOHw/cA44I5tXS+RhIy7peG0UxCLWpRYHjO5eBAfwVSosuZhWtYT2RpnLM
 6yBltdULgjT8qFyqIhV86DspZqK5xhJDonswVBxzfp9aHGU6YoR1aCaczoChSTPWd4mg
 Ismw==
X-Gm-Message-State: AOAM532HSLcV5IpdhsLvGNxQtRV7GKGERuYpN+5TWc3SCoyX8A/03CLx
 ArdeUdRAfpVOQdPMcOO/pT0=
X-Google-Smtp-Source: ABdhPJxpQ/5qZk8wrc4rdW4hRVVzSjbl+JycdDYBicx5ny4EL+DgCn8XoLsyjaFo973ka6XmUYd3Rw==
X-Received: by 2002:a2e:2a82:: with SMTP id q124mr4991852ljq.155.1589032106552; 
 Sat, 09 May 2020 06:48:26 -0700 (PDT)
Received: from localhost.localdomain (h-158-174-22-22.NA.cust.bahnhof.se.
 [158.174.22.22])
 by smtp.gmail.com with ESMTPSA id a24sm3928093ljk.10.2020.05.09.06.48.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 09 May 2020 06:48:25 -0700 (PDT)
From: Rikard Falkeborn <rikard.falkeborn@gmail.com>
To: boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org
Subject: [PATCH] xen-platform: Constify dev_pm_ops
Date: Sat,  9 May 2020 15:47:55 +0200
Message-Id: <20200509134755.15038-1-rikard.falkeborn@gmail.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Sat, 09 May 2020 14:37:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 Rikard Falkeborn <rikard.falkeborn@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

dev_pm_ops is never modified, so mark it const to allow the compiler to
put it in read-only memory.

Before:
   text    data     bss     dec     hex filename
   2457    1668     256    4381    111d drivers/xen/platform-pci.o

After:
   text    data     bss     dec     hex filename
   2681    1444     256    4381    111d drivers/xen/platform-pci.o

Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>
---
 drivers/xen/platform-pci.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/platform-pci.c b/drivers/xen/platform-pci.c
index 59e85e408c23..dd911e1ff782 100644
--- a/drivers/xen/platform-pci.c
+++ b/drivers/xen/platform-pci.c
@@ -168,7 +168,7 @@ static const struct pci_device_id platform_pci_tbl[] = {
 	{0,}
 };
 
-static struct dev_pm_ops platform_pm_ops = {
+static const struct dev_pm_ops platform_pm_ops = {
 	.resume_noirq =   platform_pci_resume,
 };
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Sat May 09 15:27:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 15:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXRNs-0003Wj-Fq; Sat, 09 May 2020 15:27:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqM1=6X=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXRNr-0003We-9A
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 15:27:23 +0000
X-Inumbo-ID: 93acda12-9209-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93acda12-9209-11ea-b9cf-bc764e2007e4;
 Sat, 09 May 2020 15:27:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JFo+NMfGTqMEHYbeJ2L9ErauQtf31U0fs1mpzGyQkGc=; b=KMQdNXJRqbJCFH8PxiOX87cth
 D4DvhXTj8oxIMemTlM1a9ieNALCGC3EmPWcAqwiAAPckwFVhAfaRBuiJMA/q/1zQcZS20BkODR8lp
 APlfwWNQYpTznhPXL92r+//UIqhoEK3hYsdRGX+fxZ47dU4hoAJYNgTiMLKzH5hQIlldM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXRNp-0005Nb-Ha; Sat, 09 May 2020 15:27:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXRNo-0005WD-Un; Sat, 09 May 2020 15:27:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXRNo-0001iv-U5; Sat, 09 May 2020 15:27:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150099-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150099: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=23bf93884c6346206e87c0f14d93f905e8c81267
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 09 May 2020 15:27:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150099 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150099/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              23bf93884c6346206e87c0f14d93f905e8c81267
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  113 days
Failing since        146211  2020-01-18 04:18:52 Z  112 days  103 attempts
Testing same since   150083  2020-05-08 04:18:46 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17347 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 09 18:42:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 09 May 2020 18:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXUPw-0003fv-G3; Sat, 09 May 2020 18:41:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqM1=6X=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXUPu-0003fq-PK
 for xen-devel@lists.xenproject.org; Sat, 09 May 2020 18:41:42 +0000
X-Inumbo-ID: b5887d42-9224-11ea-a12f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5887d42-9224-11ea-a12f-12813bfff9fa;
 Sat, 09 May 2020 18:41:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hIFG/rQkWDbyMieFMsjrnUc8UAbZ2pHZkNZKhKf9vzQ=; b=hS32hRZsPymKkrlJ5UiaL5xv+
 I0Qn1BzDsdOIIRMSjrOiJLizsZ1XevUqBTvJ8gebu9JaZ9TaElBLHdNKYXfv42R9RBAxIzPbON85l
 EgmCky+boj0efv0yrEtZqIkVsJIHT+j1NZuKPuiVBikFCvYn+XLe2OVGpyggbaGJnlORA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXUPm-00014u-R0; Sat, 09 May 2020 18:41:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXUPm-0005EH-JU; Sat, 09 May 2020 18:41:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXUPm-0003Lc-Ic; Sat, 09 May 2020 18:41:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150098-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150098: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=d5eeab8d7e269e8cfc53b915bccd7bd30485bcbf
X-Osstest-Versions-That: linux=192ffb7515839b1cc8457e0a8c1e09783de019d3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 09 May 2020 18:41:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150098 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150098/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-boot     fail REGR. vs. 150081

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 150081

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150081
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150081
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150081
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150081
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150081
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150081
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150081
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150081
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d5eeab8d7e269e8cfc53b915bccd7bd30485bcbf
baseline version:
 linux                192ffb7515839b1cc8457e0a8c1e09783de019d3

Last test of basis   150081  2020-05-07 23:40:11 Z    1 days
Testing same since   150098  2020-05-08 23:19:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Arnd Bergmann <arnd@arndb.de>
  Arun Easi <aeasi@marvell.com>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Aymeric Agon-Rambosson <aymeric.agon@yandex.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian King <brking@linux.vnet.ibm.com>
  Bryan O'Donoghue <bryan.odonoghue@linaro.org>
  ChenTao <chentao107@huawei.com>
  Christian Gromm <christian.gromm@microchip.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Kolesa <daniel@octaforge.org>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  Dennis Zhou <dennis@kernel.org>
  Evan Quan <evan.quan@amd.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Georgi Djakov <georgi.djakov@linaro.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurchetan Singh <gurchetansingh@chromium.org>
  H. Nikolaus Schaller <hns@goldelico.com>
  Haibo Chen <haibo.chen@nxp.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Henry Willard <henry.willard@oracle.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ivan Delalande <colona@arista.com>
  James Hilliard <james.hilliard1@gmail.com>
  James Morris <jmorris@namei.org>
  Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
  Jeff Layton <jlayton@kernel.org>
  Jeffrey Hugo <jhugo@codeaurora.org>
  Jeremy Linton <jeremy.linton@arm.com>
  Johan Hovold <johan@kernel.org>
  Johannes Weiner <hannes@cmpxchg.org>
  John Stultz <john.stultz@linaro.org>
  Kees Cook <keescook@chromium.org>
  Khazhismel Kumykov <khazhy@google.com>
  Kishon Vijay Abraham I <kishon@ti.com>
  KP Singh <kpsingh@google.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Chamberlain <mcgrof@kernel.org>
  Luis Henriques <lhenriques@suse.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Maciej Grochowski <maciej.grochowski@pm.me>
  Manfred Spraul <manfred@colorfullife.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Matt Jolly <Kangie@footclan.ninja>
  Maxime Ripard <maxime@cerno.tech>
  Mel Gorman <mgorman@techsingularity.net>
  Michal Hocko <mhocko@kernel.org>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@xilinx.com>
  Michel Dänzer <mdaenzer@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Nicolas Pitre <nico@fluxnic.net>
  Nilesh Javali <njavali@marvell.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.com>
  Oscar Carter <oscar.carter@gmx.com>
  Paul Cercueil <paul@crapouillou.net>
  Peter Chen <peter.chen@nxp.com>
  Prashant Malani <pmalani@chromium.org>
  Qiwu Chen <chenqiwu@xiaomi.com>
  Qiwu Chen <qiwuchen55@gmail.com>
  Quinn Tran <qutran@marvell.com>
  Rafael Aquini <aquini@redhat.com>
  Roman Li <roman.li@amd.com>
  Roman Penyaev <rpenyaev@suse.de>
  Sage Weil <sage@redhat.com>
  Sage Weil <sweil@redhat.com>
  Saravana Kannan <saravanak@google.com>
  Sean Paul <seanpaul@chromium.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com>
  Sung Lee <sung.lee@amd.com>
  Tejun Heo <tj@kernel.org>
  Thierry Reding <treding@nvidia.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Waiman Long <longman@redhat.com>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Yafang Shao <laoar.shao@gmail.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2626 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 10 01:19:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 01:19:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXac4-00089q-0C; Sun, 10 May 2020 01:18:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXac2-00089k-KR
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 01:18:38 +0000
X-Inumbo-ID: 2c24fa8e-925c-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c24fa8e-925c-11ea-9887-bc764e2007e4;
 Sun, 10 May 2020 01:18:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=MMZ5bNVbLLr3dBMposmxEXggHhbWQttg03J0gKgY0rc=; b=obsIMUL/dOdD/njsAdby59FH5
 BNDIuBILS5d8tEENyiT+vNGoiuorWqBfDUFktCEDN3oHrMOifn0EkggPHs5BjD8wV+Ccdaq0wgK6C
 II27ERfadjFfvaNc6FLLiOSdxgqem/Amlqnm+JPm21wPbUAK8QoRSOjh6moLPjnQ1dBCM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXac0-0006Zo-6r; Sun, 10 May 2020 01:18:36 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXabz-0003WD-V3; Sun, 10 May 2020 01:18:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXabz-0003Cs-Sl; Sun, 10 May 2020 01:18:35 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150100-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150100: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
X-Osstest-Versions-That: xen=8a6b1665d987d043c12dc723d758a7d2ca765264
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 01:18:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150100 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150100/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150067

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150067
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150067
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150067
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150067
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150067
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150067
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150067
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150067
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150067
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150067
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
baseline version:
 xen                  8a6b1665d987d043c12dc723d758a7d2ca765264

Last test of basis   150067  2020-05-07 10:22:45 Z    2 days
Failing since        150084  2020-05-08 05:26:57 Z    1 days    2 attempts
Testing same since   150100  2020-05-09 06:57:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8a6b1665d9..e0d92d9bd7  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9 -> master


From xen-devel-bounces@lists.xenproject.org Sun May 10 02:17:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 02:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXbX4-00059A-6G; Sun, 10 May 2020 02:17:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXbX3-000595-Gy
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 02:17:33 +0000
X-Inumbo-ID: 6435194c-9264-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6435194c-9264-11ea-9887-bc764e2007e4;
 Sun, 10 May 2020 02:17:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wYpFKOEY+51f/cwEeAyXZpxFtEmA+6kgI3sFewZTdr8=; b=eOsH1ncm2UYfPJ+JSp955I4vS
 kfT7nnCCV+h23Kb+9Ta9YvstKFwGuitfysBpjIsnvn0GURG2dNqzT/ponA81+KShBWPk3a1jFMxQw
 ETVT/XQHrhGcc1b20tDFT1gyQydmV5+Z/09q1MUnd4K56wVkAaDJgOY7gixhu1GDxXxXA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXbWw-00086Q-5l; Sun, 10 May 2020 02:17:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXbWv-0007kF-UQ; Sun, 10 May 2020 02:17:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXbWv-0003s3-Te; Sun, 10 May 2020 02:17:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150102-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 150102: regressions - trouble:
 fail/pass/starved
X-Osstest-Failures: xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:regression
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-localmigrate/x10:fail:heisenbug
 xen-4.9-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:guest-stop:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
X-Osstest-Versions-That: xen=45c90737d5f0c8bf479adcd8cb88450f1998e55c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 02:17:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150102 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150102/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop       fail REGR. vs. 149649

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-win7-amd64 16 guest-localmigrate/x10 fail in 150085 pass in 150102
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 17 guest-stop  fail pass in 150085

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop      fail blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail blocked in 149649
 test-amd64-amd64-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 150085 like 149649
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149649
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149649
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 149649
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149649
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
baseline version:
 xen                  45c90737d5f0c8bf479adcd8cb88450f1998e55c

Last test of basis   149649  2020-04-14 13:35:25 Z   25 days
Testing same since   150038  2020-05-05 16:06:01 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
Author: Juergen Gross <jgross@suse.com>
Date:   Fri Apr 3 13:03:40 2020 +0100

    tools/xenstore: fix a use after free problem in xenstored
    
    Commit 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object
    twice") introduced a potential use after free problem in
    domain_cleanup(): after calling talloc_unlink() for domain->conn,
    domain->conn is set to NULL. The problem is that domain is registered
    as a talloc child of domain->conn, so it might be freed by the
    talloc_unlink() call.
    
    With Xenstore being single threaded there are normally no concurrent
    memory allocations running and freeing a virtual memory area normally
    doesn't result in that area no longer being accessible. A problem
    could occur only in case either a signal received results in some
    memory allocation done in the signal handler (SIGHUP is a primary
    candidate leading to reopening the log file), or in case the talloc
    framework would do some internal memory allocation during freeing of
    the memory (which would lead to clobbering of the freed domain
    structure).
    
    Fixes: 562a1c0f7ef3fb ("tools/xenstore: dont unlink connection object twice")
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    (cherry picked from commit bb2a34fd740e9a26be9e2244f1a5b4cef439e5a8)
    (cherry picked from commit dc5176d0f9434e275e0be1df8d0518e243798beb)
    (cherry picked from commit a997ffe678e698ff2b4c89ae5a98661d12247fef)
    (cherry picked from commit 48e8564435aca590f1c292ab7bb1f3dbc6b75693)
    (cherry picked from commit 1e722e6971539eab4f484affd60490cbc8429951)
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sun May 10 04:58:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 04:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXe23-00046f-1L; Sun, 10 May 2020 04:57:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXe21-00046a-SB
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 04:57:41 +0000
X-Inumbo-ID: c60778ca-927a-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c60778ca-927a-11ea-9887-bc764e2007e4;
 Sun, 10 May 2020 04:57:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QZGT7F0VY9el6BeOiK7xPm472eYUr1O9pu3M0BIOnUQ=; b=DzH02/aJqBCXUW/j5XlmVWRZP
 7dIgpnl1PlUpz3CSZ9fiZKmwvq70gutCPZ/itHx94lX9qIGMFiMJYKhus0wif82uxnqduaNNqUpt0
 IiC2ErnOkJZ2DnpEG2X4FR6FTgN7aaCxqMOOkajT1mLV0p2IAv2P01j8pi7dwB1WwSM6U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXe1z-0007C1-5s; Sun, 10 May 2020 04:57:39 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXe1y-00079W-Q3; Sun, 10 May 2020 04:57:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXe1y-0002ze-PD; Sun, 10 May 2020 04:57:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150104-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150104: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
X-Osstest-Versions-That: linux=9895e0ac338a8060e6947f897397c21c4d78d80d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 04:57:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150104 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150104/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 149905
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5
baseline version:
 linux                9895e0ac338a8060e6947f897397c21c4d78d80d

Last test of basis   149905  2020-05-02 16:38:26 Z    7 days
Testing same since   150054  2020-05-06 06:41:12 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Aharon Landau <aharonl@mellanox.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alaa Hleihel <alaa@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Arun Easi <aeasi@marvell.com>
  Benjamin Herrenschmidt <benh@kernel.crashing.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Christoph Hellwig <hch@lst.de>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Vetter <daniel.vetter@intel.com>
  David Disseldorp <ddiss@suse.de>
  David Howells <dhowells@redhat.com>
  David Sterba <dsterba@suse.com>
  Dexuan Cui <decui@microsoft.com>
  Douglas Anderson <dianders@chromium.org>
  Filipe Manana <fdmanana@suse.com>
  Gabriel Krisman Bertazi <krisman@collabora.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hui Wang <hui.wang@canonical.com>
  Iuliana Prodan <iuliana.prodan@nxp.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Joerg Roedel <jroedel@suse.de>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Marek Behún <marek.behun@nic.cz>
  Martin Blumenstingl <martin.blumenstingl@googlemail.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Liu <liumartin@google.com>
  Martin Wilck <mwilck@suse.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Niklas Cassel <niklas.cassel@wdc.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Paul Moore <paul@paul-moore.com>
  Qu Wenruo <wqu@suse.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rayagonda Kokatanur <rayagonda.kokatanur@broadcom.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  ryan_chen <ryan_chen@aspeedtech.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Shawn Guo <shawnguo@kernel.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sumit Semwal <sumit.semwal@linaro.org>
  Sunwook Eom <speed.eom@samsung.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasily Averin <vvs@virtuozzo.com>
  Vasily Khoruzhick <anarsoul@gmail.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincenzo Frascino <vincenzo.frascino@arm.com>
  Vinod Koul <vkoul@kernel.org>
  Wei Liu <wei.liu@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yan Zhao <yan.y.zhao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   9895e0ac338a..592465e6a54b  592465e6a54ba8104969f3b73b58df262c5be5f5 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sun May 10 08:34:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 08:34:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXhPK-0006AK-8Q; Sun, 10 May 2020 08:33:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXhPJ-0006AF-0z
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 08:33:57 +0000
X-Inumbo-ID: f8f5a022-9298-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8f5a022-9298-11ea-b07b-bc764e2007e4;
 Sun, 10 May 2020 08:33:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UZNCAFBIyIhpeV3Fe0dt6xKVHq/8oLsvV5iM6WFccSs=; b=q5+TOaLpcT62BbmT0ascLbvQc
 rAtSAbR7jtdA6pvEcUWSYkoQe3p8gdhGNiKkcu67156xfEwIpP+ykahmLvn0Y5g6E0+TiN7+ofReF
 1lTH+My2y7IegCylFLaSUqaoYQv50b8isIb457AUbnxRE8JhXsZS4xCcPVkH94i2rGHi0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXhPB-00041S-Ie; Sun, 10 May 2020 08:33:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXhPB-0003Tr-4A; Sun, 10 May 2020 08:33:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXhPB-0007rE-3Z; Sun, 10 May 2020 08:33:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150109-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150109: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=c88f1ffc19e38008a1c33ae039482a860aa7418c
X-Osstest-Versions-That: qemuu=570a9214827e3d42f7173c4d4c9f045b99834cf0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 08:33:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150109 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150109/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150061
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150061
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150061
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150061
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150061
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150061
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c88f1ffc19e38008a1c33ae039482a860aa7418c
baseline version:
 qemuu                570a9214827e3d42f7173c4d4c9f045b99834cf0

Last test of basis   150061  2020-05-06 22:06:59 Z    3 days
Failing since        150077  2020-05-07 15:37:29 Z    2 days    3 attempts
Testing same since   150089  2020-05-08 17:52:18 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alberto Garcia <berto@igalia.com>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Alexey Krasikov <alex-krasikov@yandex-team.ru>
  Chen Qun <kuhn.chenqun@huawei.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniele Buono <dbuono@linux.vnet.ibm.com>
  David Gibson <david@gibson.dropbear.id.au>
  David Hildenbrand <david@redhat.com>
  Dr. David Alan Gilbert <dgilbert@redhat.com>
  Eric Auger <eric.auger@redhat.com>
  Eric Blake <eblake@redhat.com>
  Greg Kurz <groug@kaod.org>
  Keqian Zhu <zhukeqian1@huawei.com>
  Kevin Wolf <kwolf@redhat.com>
  Mao Zhongyi <maozhongyi@cmss.chinamobile.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Max Reitz <mreitz@redhat.com>
  Maxim Levitsky <mlevitsk@redhat.com>
  Nicholas Piggin <npiggin@gmail.com>
  Pan Nengyuan <pannengyuan@huawei.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Richard Henderson <richard.henderson@linaro.org>
  Richard W.M. Jones <rjones@redhat.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Suraj Jitindar Singh <sjitindarsingh@gmail.com>
  Tong Ho <tong.ho@xilinx.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Wei Wang <wei.w.wang@intel.com>
  Yi Sun <yi.y.sun@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   570a921482..c88f1ffc19  c88f1ffc19e38008a1c33ae039482a860aa7418c -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun May 10 09:46:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 09:46:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXiXE-0003SO-MU; Sun, 10 May 2020 09:46:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXiXC-0003SJ-Fn
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 09:46:10 +0000
X-Inumbo-ID: 138d4386-92a3-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 138d4386-92a3-11ea-b07b-bc764e2007e4;
 Sun, 10 May 2020 09:46:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=t4YVXObO3dHoBiVBBG88BtwK3y23h0k1jKKPBdFMiNc=; b=ku6y+9kYc5mAuUM7VtC0KDn4W
 Ha+7hTTYbLV8R0AXRqqfR/umwO4bU68CZLSUa95SME+K6ZTktb9E9bDn/tmoWBlgCCDSFMK+YEhvx
 wb/ORoxdFTD6kOtD0XPoJF9uNHM0j2izGL7LAifm0gXfqb26H2Mp4oJ5X0YMWtG5+n6VE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXiXB-0005SS-67; Sun, 10 May 2020 09:46:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXiXA-00069w-US; Sun, 10 May 2020 09:46:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXiXA-0007I7-Tt; Sun, 10 May 2020 09:46:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150123-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150123: all pass - PUSHED
X-Osstest-Versions-This: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
X-Osstest-Versions-That: xen=779efdbb502b38c66b774b124fa0ceed254875bd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 09:46:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150123 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150123/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
baseline version:
 xen                  779efdbb502b38c66b774b124fa0ceed254875bd

Last test of basis   150055  2020-05-06 09:19:00 Z    4 days
Testing same since   150123  2020-05-10 09:19:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>
  Tamas K Lengyel <tamas.lengyel@intel.com>
  Wei Liu <liuwe@microsoft.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   779efdbb50..e0d92d9bd7  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 10 11:28:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 11:28:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXk7k-0003F1-Fj; Sun, 10 May 2020 11:28:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXk7j-0003Ew-3e
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 11:27:59 +0000
X-Inumbo-ID: 49cce4de-92b1-11ea-a182-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 49cce4de-92b1-11ea-a182-12813bfff9fa;
 Sun, 10 May 2020 11:27:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aXbDLd+0hCNQy6Z0D571efCcYssgMbc/NnJmQJzTKMg=; b=JGhc4ojt58As1IYkVnDW4qIyF
 cJ0D1VFxMyko4hNb0Dbo6CBw+Q+hPmWD7Jiatq6/NKGxEH+NkRt8Erc+2ww7l+0VwMkbXh2Jue+sz
 QWJTxDanfWp1rbMOAU6M292SIMcpVH316nu8K8rHPWads5PoS6p2roUZRXg7CuguBOWtc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXk7d-0007bj-4V; Sun, 10 May 2020 11:27:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXk7b-00018t-En; Sun, 10 May 2020 11:27:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXk7b-0003II-Cn; Sun, 10 May 2020 11:27:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150121-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150121: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf:xen-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:build-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=23bf93884c6346206e87c0f14d93f905e8c81267
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 11:27:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150121 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150121/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf                   6 xen-build                fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              23bf93884c6346206e87c0f14d93f905e8c81267
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  114 days
Failing since        146211  2020-01-18 04:18:52 Z  113 days  104 attempts
Testing same since   150083  2020-05-08 04:18:46 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17347 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 10 14:57:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 14:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXnNz-0003ML-7n; Sun, 10 May 2020 14:56:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXnNy-0003MG-Hn
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 14:56:58 +0000
X-Inumbo-ID: 7adc99b2-92ce-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7adc99b2-92ce-11ea-b07b-bc764e2007e4;
 Sun, 10 May 2020 14:56:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8wcE70Cigy/V3qYYGvkPXVU2xaIpyARMtuWh4guxn64=; b=cHP5MubGZinWM7UfHH5sJNmFf
 QkpLfSEN414FnGsLw9heKxr6ZfajDF6O8yc/D7DHm4/yxTJMAHBuqPT/7iQMURnO/asaavxaPIANY
 HDuldfJvDf7G/GPxgLB++yj6ND6vgk9bF2d2RptuzxunbJIV6IO0wtO5oPsGyo+2A+pTs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXnNq-0003OC-Rr; Sun, 10 May 2020 14:56:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXnNq-0004at-FQ; Sun, 10 May 2020 14:56:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXnNq-0006Dc-El; Sun, 10 May 2020 14:56:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150117-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150117: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-boot:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=d5eeab8d7e269e8cfc53b915bccd7bd30485bcbf
X-Osstest-Versions-That: linux=192ffb7515839b1cc8457e0a8c1e09783de019d3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 14:56:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150117 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150117/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-amd64 7 xen-boot fail in 150098 pass in 150117
 test-armhf-armhf-xl-rtds     12 guest-start                fail pass in 150098

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 150081

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds    13 migrate-support-check fail in 150098 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 150098 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150081
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150081
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150081
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150081
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150081
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150081
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150081
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150081
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d5eeab8d7e269e8cfc53b915bccd7bd30485bcbf
baseline version:
 linux                192ffb7515839b1cc8457e0a8c1e09783de019d3

Last test of basis   150081  2020-05-07 23:40:11 Z    2 days
Testing same since   150098  2020-05-08 23:19:29 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Arnd Bergmann <arnd@arndb.de>
  Arun Easi <aeasi@marvell.com>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Aymeric Agon-Rambosson <aymeric.agon@yandex.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brian King <brking@linux.vnet.ibm.com>
  Bryan O'Donoghue <bryan.odonoghue@linaro.org>
  ChenTao <chentao107@huawei.com>
  Christian Gromm <christian.gromm@microchip.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Kolesa <daniel@octaforge.org>
  Dave Airlie <airlied@redhat.com>
  David Hildenbrand <david@redhat.com>
  Dennis Zhou <dennis@kernel.org>
  Evan Quan <evan.quan@amd.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Georgi Djakov <georgi.djakov@linaro.org>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gurchetan Singh <gurchetansingh@chromium.org>
  H. Nikolaus Schaller <hns@goldelico.com>
  Haibo Chen <haibo.chen@nxp.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Henry Willard <henry.willard@oracle.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ivan Delalande <colona@arista.com>
  James Hilliard <james.hilliard1@gmail.com>
  James Morris <jmorris@namei.org>
  Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
  Jeff Layton <jlayton@kernel.org>
  Jeffrey Hugo <jhugo@codeaurora.org>
  Jeremy Linton <jeremy.linton@arm.com>
  Johan Hovold <johan@kernel.org>
  Johannes Weiner <hannes@cmpxchg.org>
  John Stultz <john.stultz@linaro.org>
  Kees Cook <keescook@chromium.org>
  Khazhismel Kumykov <khazhy@google.com>
  Kishon Vijay Abraham I <kishon@ti.com>
  KP Singh <kpsingh@google.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Chamberlain <mcgrof@kernel.org>
  Luis Henriques <lhenriques@suse.com>
  Lukas Bulwahn <lukas.bulwahn@gmail.com>
  Maciej Grochowski <maciej.grochowski@pm.me>
  Manfred Spraul <manfred@colorfullife.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Matt Jolly <Kangie@footclan.ninja>
  Maxime Ripard <maxime@cerno.tech>
  Mel Gorman <mgorman@techsingularity.net>
  Michal Hocko <mhocko@kernel.org>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@xilinx.com>
  Michel Dänzer <mdaenzer@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Nicolas Pitre <nico@fluxnic.net>
  Nilesh Javali <njavali@marvell.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.com>
  Oscar Carter <oscar.carter@gmx.com>
  Paul Cercueil <paul@crapouillou.net>
  Peter Chen <peter.chen@nxp.com>
  Prashant Malani <pmalani@chromium.org>
  Qiwu Chen <chenqiwu@xiaomi.com>
  Qiwu Chen <qiwuchen55@gmail.com>
  Quinn Tran <qutran@marvell.com>
  Rafael Aquini <aquini@redhat.com>
  Roman Li <roman.li@amd.com>
  Roman Penyaev <rpenyaev@suse.de>
  Sage Weil <sage@redhat.com>
  Sage Weil <sweil@redhat.com>
  Saravana Kannan <saravanak@google.com>
  Sean Paul <seanpaul@chromium.org>
  Sergei Trofimovich <slyfox@gentoo.org>
  Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com>
  Sung Lee <sung.lee@amd.com>
  Tejun Heo <tj@kernel.org>
  Thierry Reding <treding@nvidia.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Waiman Long <longman@redhat.com>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Yafang Shao <laoar.shao@gmail.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   192ffb751583..d5eeab8d7e26  d5eeab8d7e269e8cfc53b915bccd7bd30485bcbf -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun May 10 15:58:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 15:58:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXoKm-0008HY-IZ; Sun, 10 May 2020 15:57:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXoKm-0008HT-8r
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 15:57:44 +0000
X-Inumbo-ID: f859df00-92d6-11ea-a1a7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f859df00-92d6-11ea-a1a7-12813bfff9fa;
 Sun, 10 May 2020 15:57:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=XVdlcgkIMTs0Rfmnza3axCxR3YTluL4+L+sKKEsh5ps=; b=5qA5mKkaA158w228rqTfE7lOF
 jKxnwxfAWwqVgxyySwZzCaaWPsv3Z6VD9mqCYfvfpC956vSWPqqUUGpfHqkYfNEGcOMXeie5TUnc1
 Dftjq0JwfSCIxXeM3CiRtLq2b9XsnAffCq0xok+Ip60JF1YW9892KhidSm0VpijLkEl+Q=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXoKf-0004bS-90; Sun, 10 May 2020 15:57:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXoKe-00009Q-V2; Sun, 10 May 2020 15:57:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXoKe-0005Jm-UU; Sun, 10 May 2020 15:57:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150125-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150125: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=190c60f12db469472476041ecd0e6c9a0d4b0f8a
X-Osstest-Versions-That: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 15:57:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150125 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150125/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  190c60f12db469472476041ecd0e6c9a0d4b0f8a
baseline version:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9

Last test of basis   150088  2020-05-08 09:00:36 Z    2 days
Testing same since   150125  2020-05-10 13:00:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e0d92d9bd7..190c60f12d  190c60f12db469472476041ecd0e6c9a0d4b0f8a -> smoke


From xen-devel-bounces@lists.xenproject.org Sun May 10 18:19:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 18:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXqXg-0003PG-Tn; Sun, 10 May 2020 18:19:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXqXg-0003PB-Bz
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 18:19:12 +0000
X-Inumbo-ID: bb896a46-92ea-11ea-a1ba-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb896a46-92ea-11ea-a1ba-12813bfff9fa;
 Sun, 10 May 2020 18:19:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZOEVlSvyVfkAKEYglIGQc4v+0nUq21g89loBOxDJzQw=; b=dtsjTGElcaB8K5kFd7xX8VcE/
 0A8xEIw1TzvoW5N/RtVbCF4jMA3juAZQYpIfeGMqxShI7A2BSDfwcVYIeWuo30rlu1bdlWm6ZXXBh
 lsCFs/pKHVJ/VdcCHgyMZ1KJ+hCY4txqZkBNebcgXIkuRd8uX1Kcayfc93a73MslHTYTo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXqXZ-00084d-55; Sun, 10 May 2020 18:19:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXqXY-0003Yl-TD; Sun, 10 May 2020 18:19:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXqXY-000589-Rj; Sun, 10 May 2020 18:19:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150119-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150119: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
X-Osstest-Versions-That: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 18:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150119 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150119/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150100
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150100
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150100
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150100
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150100
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150100
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150100
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150100
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150100
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150100
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150100
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
baseline version:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9

Last test of basis   150119  2020-05-10 01:51:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 10 20:52:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 20:52:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXsw4-0007pF-Ki; Sun, 10 May 2020 20:52:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXsw3-0007pA-4Z
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 20:52:31 +0000
X-Inumbo-ID: 295b115e-9300-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 295b115e-9300-11ea-9887-bc764e2007e4;
 Sun, 10 May 2020 20:52:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=alNAKjUzgZ4AtOXoJwswHGvAsMKqYE2hcoLO85LhN/k=; b=JUerLnt4CpCP7/lJ0Miu1K6RJ
 046CQY6Sea2tsL1aN6U3n5RBtLRx6gG+ZjQRJTQIXoQvkuDNz8NC/IsIeOc8PDOrJeOnsxvv2Ukhw
 /or/o0/W+YE4mKyYVUqPwgorUzsDCDtIx5WcyScoHy1u8pqOCB8Y1mqP+sUVv04eRBVpg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXsw0-0002mK-VU; Sun, 10 May 2020 20:52:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXsw0-0004ku-NW; Sun, 10 May 2020 20:52:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXsw0-0007G7-Mo; Sun, 10 May 2020 20:52:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150120-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.9-testing test] 150120: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Failures: xen-4.9-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:guest-stop:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:heisenbug
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:windows-install:fail:heisenbug
 xen-4.9-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-start/win.repeat:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-localmigrate/x10:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.9-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.9-testing:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
X-Osstest-Versions-That: xen=45c90737d5f0c8bf479adcd8cb88450f1998e55c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 20:52:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150120 xen-4.9-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150120/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 17 guest-stop fail in 150102 pass in 150120
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop fail in 150102 pass in 150120
 test-armhf-armhf-xl-rtds     12 guest-start                fail pass in 150102
 test-amd64-i386-xl-qemut-win7-amd64 10 windows-install     fail pass in 150102

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop      fail blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop      fail blocked in 149649
 test-amd64-amd64-xl-qemuu-ws16-amd64 16 guest-localmigrate/x10 fail in 150102 blocked in 149649
 test-amd64-i386-xl-qemut-win7-amd64 16 guest-localmigrate/x10 fail in 150102 like 149649
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 150102 like 149649
 test-armhf-armhf-xl-rtds    13 migrate-support-check fail in 150102 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 150102 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149649
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149649
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149649
 test-amd64-amd64-xl-qemut-ws16-amd64 18 guest-start/win.repeat fail like 149649
 test-amd64-i386-xl-qemut-ws16-amd64 16 guest-localmigrate/x10 fail like 149649
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  93cc305d1f3e7c6949a8f4116446624fa2dbfdf4
baseline version:
 xen                  45c90737d5f0c8bf479adcd8cb88450f1998e55c

Last test of basis   149649  2020-04-14 13:35:25 Z   26 days
Testing same since   150038  2020-05-05 16:06:01 Z    5 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   45c90737d5..93cc305d1f  93cc305d1f3e7c6949a8f4116446624fa2dbfdf4 -> stable-4.9


From xen-devel-bounces@lists.xenproject.org Sun May 10 23:25:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 10 May 2020 23:25:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jXvJo-0003gr-NT; Sun, 10 May 2020 23:25:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d3Yd=6Y=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jXvJn-0003gm-3q
 for xen-devel@lists.xenproject.org; Sun, 10 May 2020 23:25:11 +0000
X-Inumbo-ID: 7a10b198-9315-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a10b198-9315-11ea-ae69-bc764e2007e4;
 Sun, 10 May 2020 23:25:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HKZPKNMKf1cGiWdUPjwGiNDF1mMBbheDWcGVMfP/8jk=; b=ofCoX8/kMXrqGMTBSl0HE4pog
 IQsBlpPAPwR58NeVE162ih7t2vYtFMnAopI9MotNVfhSAgvwKs4KQ3lqVb9uFk2Ou4SllHTKkpcHt
 f0rdxwh517AgHdRYM7z1Bx5LlqsA29bNeDhxbO4Wb3qFLavh/smUZAjamBF8Stt4EYhYk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXvJf-0005pJ-MX; Sun, 10 May 2020 23:25:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jXvJf-0004G1-D6; Sun, 10 May 2020 23:25:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jXvJf-0002aR-CC; Sun, 10 May 2020 23:25:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150122-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150122: regressions - FAIL
X-Osstest-Failures: linux-5.4:test-amd64-i386-xl-shadow:xen-boot:fail:regression
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=f015b86259a520ad886523d9ec6fdb0ed80edc38
X-Osstest-Versions-That: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 10 May 2020 23:25:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150122 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150122/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-shadow     7 xen-boot                 fail REGR. vs. 150104

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150070
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150086
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                f015b86259a520ad886523d9ec6fdb0ed80edc38
baseline version:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5

Last test of basis   150104  2020-05-09 09:48:15 Z    1 days
Testing same since   150122  2020-05-10 08:40:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Ma <aaron.ma@canonical.com>
  AceLan Kao <acelan.kao@canonical.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@linaro.org>
  Alexei Starovoitov <ast@kernel.org>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andrzej Hajda <a.hajda@samsung.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Andy Yan <andy.yan@rock-chips.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brendan Higgins <brendanhiggins@google.com>
  Brian Cain <bcain@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Christoph Hellwig <hch@lst.de>
  David S. Miller <davem@davemloft.net>
  Doug Berger <opendmb@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Jere Leppänen <jere.leppanen@nokia.com>
  Jeremie Francois (on alpha) <jeremie.francois@gmail.com>
  Jia He <justin.he@arm.com>
  Jiri Slaby <jslaby@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  Julien Beraud <julien.beraud@orolia.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lee Jones <lee.jones@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Chamberlain <mcgrof@kernel.org>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Matt Roper <matthew.d.roper@intel.com>
  Matthias Blankertz <matthias.blankertz@cetitec.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Pavel Machek <pavel@denx.de>
  Qian Cai <cai@lca.pw>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roman Gilg <subdiff@gmail.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Sandeep Raghuraman <sandy.8925@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Thomas Pedersen <thomas@adapt-ip.com>
  Tuowen Zhao <ztuowen@gmail.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Zhan Liu <zhan.liu@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1335 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 11 06:40:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 06:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY26h-0008Pw-LH; Mon, 11 May 2020 06:40:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nxab=6Z=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jY26h-0008Pr-6a
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 06:40:07 +0000
X-Inumbo-ID: 3e50ac76-9352-11ea-a1f0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e50ac76-9352-11ea-a1f0-12813bfff9fa;
 Mon, 11 May 2020 06:40:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RfIxyj7GJQmj/Ata0f9f3a8+XLiyw5Z+DziYC4v837U=; b=aFFXQgMV/GQyoMoSoP9zo9EAw
 b0bFdGc51xDS8nHFM67Nokpb2aXot9mFEjzRbj6EKkL//6IQv+3sgcM13o2B56BQB+yosH2mZdy2i
 t7Sku+Ig/7+10Apn1tAx1mwnJXB6C3WIQU7drAEApU/eKjhnaJattm+XhnXbMk4vqhDJo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY26d-0001BK-LM; Mon, 11 May 2020 06:40:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY26d-0002Mh-96; Mon, 11 May 2020 06:40:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jY26d-00028h-8F; Mon, 11 May 2020 06:40:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150126-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150126: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=e99332e7b4cda6e60f5b5916cf9943a79dbef902
X-Osstest-Versions-That: linux=d5eeab8d7e269e8cfc53b915bccd7bd30485bcbf
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 11 May 2020 06:40:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150126 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150126/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150098

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150117
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150117
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150117
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150117
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150117
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150117
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150117
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150117
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150117
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e99332e7b4cda6e60f5b5916cf9943a79dbef902
baseline version:
 linux                d5eeab8d7e269e8cfc53b915bccd7bd30485bcbf

Last test of basis   150117  2020-05-09 19:09:53 Z    1 days
Testing same since   150126  2020-05-10 14:58:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andreas Schwab <schwab@suse.de>
  Anup Patel <anup.patel@wdc.com>
  Atish Patra <atish.patra@wdc.com>
  Jens Axboe <axboe@kernel.dk>
  Linus Torvalds <torvalds@linux-foundation.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Vincent Chen <vincent.chen@sifive.com>
  Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
  Yash Shah <yash.shah@sifive.com>
  Zong Li <zong.li@sifive.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   d5eeab8d7e26..e99332e7b4cd  e99332e7b4cda6e60f5b5916cf9943a79dbef902 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon May 11 07:26:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 07:26:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY2p9-0003Pe-FS; Mon, 11 May 2020 07:26:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY2p8-0003PZ-Iz
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 07:26:02 +0000
X-Inumbo-ID: a9d470fc-9358-11ea-a1f3-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9d470fc-9358-11ea-a1f3-12813bfff9fa;
 Mon, 11 May 2020 07:26:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0D896B32A;
 Mon, 11 May 2020 07:26:02 +0000 (UTC)
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
 <20200508133720.GH1353@Air-de-Roger>
 <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
 <a8342bf8-d866-b507-9420-0384545e9a4f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dece70e0-3140-eb4a-b6c7-0bf904cb6f2a@suse.com>
Date: Mon, 11 May 2020 09:25:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a8342bf8-d866-b507-9420-0384545e9a4f@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 20:29, Andrew Cooper wrote:
> On 08/05/2020 16:04, Jan Beulich wrote:
>>>> +            }
>>>> +
>>>> +            if ( bytes == sizeof(fpstate.env) )
>>>> +                ptr = NULL;
>>>> +            else
>>>> +                ptr += sizeof(fpstate.env);
>>>> +            break;
>>>> +
>>>> +        case sizeof(struct x87_env16):
>>>> +        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
>>>> +        {
>>>> +            const struct x87_env16 *env = ptr;
>>>> +
>>>> +            fpstate.env.fcw = env->fcw;
>>>> +            fpstate.env.fsw = env->fsw;
>>>> +            fpstate.env.ftw = env->ftw;
>>>> +
>>>> +            if ( state->rex_prefix )
>>>> +            {
>>>> +                fpstate.env.mode.prot.fip = env->mode.prot.fip;
>>>> +                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
>>>> +                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
>>>> +                fpstate.env.mode.prot.fds = env->mode.prot.fds;
>>>> +                fpstate.env.mode.prot.fop = 0; /* unknown */
>>>> +            }
>>>> +            else
>>>> +            {
>>>> +                unsigned int fip = env->mode.real.fip_lo +
>>>> +                                   (env->mode.real.fip_hi << 16);
>>>> +                unsigned int fdp = env->mode.real.fdp_lo +
>>>> +                                   (env->mode.real.fdp_hi << 16);
>>>> +                unsigned int fop = env->mode.real.fop;
>>>> +
>>>> +                fpstate.env.mode.prot.fip = fip & 0xf;
>>>> +                fpstate.env.mode.prot.fcs = fip >> 4;
>>>> +                fpstate.env.mode.prot.fop = fop;
>>>> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
>>>> +                fpstate.env.mode.prot.fds = fdp >> 4;
>>> This looks mostly the same as the translation done above, so maybe
>>> could be abstracted anyway in a macro to avoid the code repetition?
>>> (ie: fpstate_real_to_prot(src, dst) or some such).
>> Just the 5 assignments could be put in an inline function, but
>> if we also wanted to abstract away the declarations with their
>> initializers, it would need to be a macro because of the
>> different types of fpstate.env and *env. While I'd generally
>> prefer inline functions, the macro would have the benefit that
>> it could be #define-d / #undef-d right inside this case block.
>> Thoughts?
> 
> Code like this is large in terms of volume, but it is completely crystal
> clear (with the requested comments in place) and easy to follow.
> 
> I don't see how attempting to abstract out two small portions is going
> to be an improvement.

Okay, easier for me if I don't need to touch it. Roger, can you
live with that?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 07:29:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 07:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY2sV-0003Yf-VI; Mon, 11 May 2020 07:29:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY2sU-0003YZ-8D
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 07:29:30 +0000
X-Inumbo-ID: 26481850-9359-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26481850-9359-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 07:29:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 71875B052;
 Mon, 11 May 2020 07:29:31 +0000 (UTC)
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
 <20200508133720.GH1353@Air-de-Roger>
 <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
 <20200508162155.GL1353@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7f289c91-da38-55bc-a49a-dd80e60958d4@suse.com>
Date: Mon, 11 May 2020 09:29:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508162155.GL1353@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 18:21, Roger Pau Monné wrote:
> On Fri, May 08, 2020 at 05:04:02PM +0200, Jan Beulich wrote:
>> On 08.05.2020 15:37, Roger Pau Monné wrote:
>>> On Tue, May 05, 2020 at 10:16:20AM +0200, Jan Beulich wrote:
>>>> --- a/tools/tests/x86_emulator/test_x86_emulator.c
>>>> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
>>>> @@ -11648,6 +11651,89 @@ int x86_emul_blk(
>>>>  
>>>>  #ifndef X86EMUL_NO_FPU
>>>>  
>>>> +    case blk_fld:
>>>> +        ASSERT(!data);
>>>> +
>>>> +        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
>>>> +        switch ( bytes )
>>>> +        {
>>>> +        case sizeof(fpstate.env):
>>>> +        case sizeof(fpstate):
>>>> +            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
>>>> +            if ( !state->rex_prefix )
>>>> +            {
>>>> +                unsigned int fip = fpstate.env.mode.real.fip_lo +
>>>> +                                   (fpstate.env.mode.real.fip_hi << 16);
>>>> +                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
>>>> +                                   (fpstate.env.mode.real.fdp_hi << 16);
>>>> +                unsigned int fop = fpstate.env.mode.real.fop;
>>>> +
>>>> +                fpstate.env.mode.prot.fip = fip & 0xf;
>>>> +                fpstate.env.mode.prot.fcs = fip >> 4;
>>>> +                fpstate.env.mode.prot.fop = fop;
>>>> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
>>>> +                fpstate.env.mode.prot.fds = fdp >> 4;
>>>
>>> I've found the layouts in the SDM vol. 1, but I haven't been able to
>>> found the translation mechanism from real to protected. Could you
>>> maybe add a reference here?
>>
>> A reference to some piece of documentation? I don't think this
>> is spelled out anywhere. It's also only one of various possible
>> ways of doing the translation, but among them the most flexible
>> one for possible consumers of the data (because of using the
>> smallest possible offsets into the segments).
> 
> Having this written down as a comment would help, but maybe that's
> just because I'm not familiar at all with all this stuff.
> 
> Again, likely a very stupid question, but I would expect:
> 
> fpstate.env.mode.prot.fip = fip;
> 
> Without the mask.

How so? A linear address can be decomposed into a real/vm86-mode
ssss:oooo pair in many ways, but what you suggest is not one
of them. The other extreme to the one chosen would be

                fpstate.env.mode.prot.fip = fip & 0xffff;
                fpstate.env.mode.prot.fcs = (fip >> 4) & 0xf000;

Except that when doing it this way, even the full insn (or for
fcs:fdp the full operand) may not be accessible through the
resulting ssss, due to segment wraparound.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 07:32:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 07:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY2uv-0004KL-ON; Mon, 11 May 2020 07:32:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY2uu-0004KE-JG
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 07:32:00 +0000
X-Inumbo-ID: 7d0c3f72-9359-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d0c3f72-9359-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 07:31:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 00914AC90;
 Mon, 11 May 2020 07:31:56 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 2/2] xen/xenbus: let xenbus_map_ring_valloc() return errno
 values only
Date: Mon, 11 May 2020 09:31:51 +0200
Message-Id: <20200511073151.19043-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200511073151.19043-1-jgross@suse.com>
References: <20200511073151.19043-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Today xenbus_map_ring_valloc() can return either a negative errno
value (-ENOMEM or -EINVAL) or a grant status value. This is a mess,
as e.g. -ENOMEM and GNTST_eagain have the same numeric value.

Fix that by turning all grant mapping errors into -ENOENT. This is
no problem as all callers of xenbus_map_ring_valloc() only use the
return value to print an error message, and in case of mapping errors
the grant status value has already been printed by __xenbus_map_ring()
before.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/xenbus/xenbus_client.c | 22 ++++++----------------
 1 file changed, 6 insertions(+), 16 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index d8e5c5e4fa67..5e6b256ca916 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -456,8 +456,7 @@ EXPORT_SYMBOL_GPL(xenbus_free_evtchn);
  * Map @nr_grefs pages of memory into this domain from another
  * domain's grant table.  xenbus_map_ring_valloc allocates @nr_grefs
  * pages of virtual address space, maps the pages to that address, and
- * sets *vaddr to that address.  Returns 0 on success, and GNTST_*
- * (see xen/include/interface/grant_table.h) or -ENOMEM / -EINVAL on
+ * sets *vaddr to that address.  Returns 0 on success, and -errno on
  * error. If an error is returned, device will switch to
  * XenbusStateClosing and the error message will be saved in XenStore.
  */
@@ -477,18 +476,11 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs,
 		return -ENOMEM;
 
 	info->node = kzalloc(sizeof(*info->node), GFP_KERNEL);
-	if (!info->node) {
+	if (!info->node)
 		err = -ENOMEM;
-		goto out;
-	}
-
-	err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);
-
-	/* Some hypervisors are buggy and can return 1. */
-	if (err > 0)
-		err = GNTST_general_error;
+	else
+		err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);
 
- out:
 	kfree(info->node);
 	kfree(info);
 	return err;
@@ -507,7 +499,6 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 			     bool *leaked)
 {
 	int i, j;
-	int err = GNTST_okay;
 
 	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
 		return -EINVAL;
@@ -522,7 +513,6 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 
 	for (i = 0; i < nr_grefs; i++) {
 		if (info->map[i].status != GNTST_okay) {
-			err = info->map[i].status;
 			xenbus_dev_fatal(dev, info->map[i].status,
 					 "mapping in shared page %d from domain %d",
 					 gnt_refs[i], dev->otherend_id);
@@ -531,7 +521,7 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 			handles[i] = info->map[i].handle;
 	}
 
-	return GNTST_okay;
+	return 0;
 
  fail:
 	for (i = j = 0; i < nr_grefs; i++) {
@@ -554,7 +544,7 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 		}
 	}
 
-	return err;
+	return -ENOENT;
 }
 
 /**
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 07:32:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 07:32:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY2ur-0004JK-Bu; Mon, 11 May 2020 07:31:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY2up-0004JD-KF
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 07:31:55 +0000
X-Inumbo-ID: 7d0520ca-9359-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d0520ca-9359-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 07:31:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B0E8BAC20;
 Mon, 11 May 2020 07:31:56 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 clang-built-linux@googlegroups.com
Subject: [PATCH 0/2] xen/xenbus: some cleanups
Date: Mon, 11 May 2020 09:31:49 +0200
Message-Id: <20200511073151.19043-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Avoid allocating large amounts of data on the stack in
xenbus_map_ring_valloc(), plus some related return value cleanups.

Juergen Gross (2):
  xen/xenbus: avoid large structs and arrays on the stack
  xen/xenbus: let xenbus_map_ring_valloc() return errno values only

 drivers/xen/xenbus/xenbus_client.c | 133 ++++++++++++++---------------
 1 file changed, 64 insertions(+), 69 deletions(-)

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 07:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 07:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY2v1-0004M4-1u; Mon, 11 May 2020 07:32:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY2uz-0004Le-KL
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 07:32:05 +0000
X-Inumbo-ID: 7d0814a6-9359-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d0814a6-9359-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 07:31:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D0575AC5B;
 Mon, 11 May 2020 07:31:56 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 clang-built-linux@googlegroups.com
Subject: [PATCH 1/2] xen/xenbus: avoid large structs and arrays on the stack
Date: Mon, 11 May 2020 09:31:50 +0200
Message-Id: <20200511073151.19043-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200511073151.19043-1-jgross@suse.com>
References: <20200511073151.19043-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Arnd Bergmann <arnd@arndb.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

xenbus_map_ring_valloc() and its sub-functions put quite large
structs and arrays on the stack. This is problematic at runtime, but
might also result in build failures (e.g. with clang, due to
-Werror,-Wframe-larger-than=... being used).

Fix that by moving most of the data from the stack into a dynamically
allocated struct. Performance is no issue here, as
xenbus_map_ring_valloc() is used only when adding a new PV device to
a backend driver.

While at it move some duplicated code from pv/hvm specific mapping
functions to the single caller.

Reported-by: Arnd Bergmann <arnd@arndb.de>
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/xenbus/xenbus_client.c | 127 +++++++++++++++--------------
 1 file changed, 66 insertions(+), 61 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_client.c b/drivers/xen/xenbus/xenbus_client.c
index 040d2a43e8e3..d8e5c5e4fa67 100644
--- a/drivers/xen/xenbus/xenbus_client.c
+++ b/drivers/xen/xenbus/xenbus_client.c
@@ -69,11 +69,27 @@ struct xenbus_map_node {
 	unsigned int   nr_handles;
 };
 
+struct map_ring_valloc {
+	struct xenbus_map_node *node;
+
+	/* Why do we need two arrays? See comment of __xenbus_map_ring */
+	union {
+		unsigned long addrs[XENBUS_MAX_RING_GRANTS];
+		pte_t *ptes[XENBUS_MAX_RING_GRANTS];
+	};
+	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
+
+	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
+	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
+
+	unsigned int idx;	/* HVM only. */
+};
+
 static DEFINE_SPINLOCK(xenbus_valloc_lock);
 static LIST_HEAD(xenbus_valloc_pages);
 
 struct xenbus_ring_ops {
-	int (*map)(struct xenbus_device *dev,
+	int (*map)(struct xenbus_device *dev, struct map_ring_valloc *info,
 		   grant_ref_t *gnt_refs, unsigned int nr_grefs,
 		   void **vaddr);
 	int (*unmap)(struct xenbus_device *dev, void *vaddr);
@@ -449,12 +465,32 @@ int xenbus_map_ring_valloc(struct xenbus_device *dev, grant_ref_t *gnt_refs,
 			   unsigned int nr_grefs, void **vaddr)
 {
 	int err;
+	struct map_ring_valloc *info;
+
+	*vaddr = NULL;
+
+	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
+		return -EINVAL;
+
+	info = kzalloc(sizeof(*info), GFP_KERNEL);
+	if (!info)
+		return -ENOMEM;
+
+	info->node = kzalloc(sizeof(*info->node), GFP_KERNEL);
+	if (!info->node) {
+		err = -ENOMEM;
+		goto out;
+	}
+
+	err = ring_ops->map(dev, info, gnt_refs, nr_grefs, vaddr);
 
-	err = ring_ops->map(dev, gnt_refs, nr_grefs, vaddr);
 	/* Some hypervisors are buggy and can return 1. */
 	if (err > 0)
 		err = GNTST_general_error;
 
+ out:
+	kfree(info->node);
+	kfree(info);
 	return err;
 }
 EXPORT_SYMBOL_GPL(xenbus_map_ring_valloc);
@@ -466,12 +502,10 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 			     grant_ref_t *gnt_refs,
 			     unsigned int nr_grefs,
 			     grant_handle_t *handles,
-			     phys_addr_t *addrs,
+			     struct map_ring_valloc *info,
 			     unsigned int flags,
 			     bool *leaked)
 {
-	struct gnttab_map_grant_ref map[XENBUS_MAX_RING_GRANTS];
-	struct gnttab_unmap_grant_ref unmap[XENBUS_MAX_RING_GRANTS];
 	int i, j;
 	int err = GNTST_okay;
 
@@ -479,23 +513,22 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
 		return -EINVAL;
 
 	for (i = 0; i < nr_grefs; i++) {
-		memset(&map[i], 0, sizeof(map[i]));
-		gnttab_set_map_op(&map[i], addrs[i], flags, gnt_refs[i],
-				  dev->otherend_id);
+		gnttab_set_map_op(&info->map[i], info->phys_addrs[i], flags,
+				  gnt_refs[i], dev->otherend_id);
 		handles[i] = INVALID_GRANT_HANDLE;
 	}
 
-	gnttab_batch_map(map, i);
+	gnttab_batch_map(info->map, i);
 
 	for (i = 0; i < nr_grefs; i++) {
-		if (map[i].status != GNTST_okay) {
-			err = map[i].status;
-			xenbus_dev_fatal(dev, map[i].status,
+		if (info->map[i].status != GNTST_okay) {
+			err = info->map[i].status;
+			xenbus_dev_fatal(dev, info->map[i].status,
 					 "mapping in shared page %d from domain %d",
 					 gnt_refs[i], dev->otherend_id);
 			goto fail;
 		} else
-			handles[i] = map[i].handle;
+			handles[i] = info->map[i].handle;
 	}
 
 	return GNTST_okay;
@@ -503,19 +536,19 @@ static int __xenbus_map_ring(struct xenbus_device *dev,
  fail:
 	for (i = j = 0; i < nr_grefs; i++) {
 		if (handles[i] != INVALID_GRANT_HANDLE) {
-			memset(&unmap[j], 0, sizeof(unmap[j]));
-			gnttab_set_unmap_op(&unmap[j], (phys_addr_t)addrs[i],
+			gnttab_set_unmap_op(&info->unmap[j],
+					    info->phys_addrs[i],
 					    GNTMAP_host_map, handles[i]);
 			j++;
 		}
 	}
 
-	if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, unmap, j))
+	if (HYPERVISOR_grant_table_op(GNTTABOP_unmap_grant_ref, info->unmap, j))
 		BUG();
 
 	*leaked = false;
 	for (i = 0; i < j; i++) {
-		if (unmap[i].status != GNTST_okay) {
+		if (info->unmap[i].status != GNTST_okay) {
 			*leaked = true;
 			break;
 		}
@@ -566,21 +599,12 @@ static int xenbus_unmap_ring(struct xenbus_device *dev, grant_handle_t *handles,
 	return err;
 }
 
-struct map_ring_valloc_hvm
-{
-	unsigned int idx;
-
-	/* Why do we need two arrays? See comment of __xenbus_map_ring */
-	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
-	unsigned long addrs[XENBUS_MAX_RING_GRANTS];
-};
-
 static void xenbus_map_ring_setup_grant_hvm(unsigned long gfn,
 					    unsigned int goffset,
 					    unsigned int len,
 					    void *data)
 {
-	struct map_ring_valloc_hvm *info = data;
+	struct map_ring_valloc *info = data;
 	unsigned long vaddr = (unsigned long)gfn_to_virt(gfn);
 
 	info->phys_addrs[info->idx] = vaddr;
@@ -590,38 +614,27 @@ static void xenbus_map_ring_setup_grant_hvm(unsigned long gfn,
 }
 
 static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
+				      struct map_ring_valloc *info,
 				      grant_ref_t *gnt_ref,
 				      unsigned int nr_grefs,
 				      void **vaddr)
 {
-	struct xenbus_map_node *node;
+	struct xenbus_map_node *node = info->node;
 	int err;
 	void *addr;
 	bool leaked = false;
-	struct map_ring_valloc_hvm info = {
-		.idx = 0,
-	};
 	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
 
-	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
-		return -EINVAL;
-
-	*vaddr = NULL;
-
-	node = kzalloc(sizeof(*node), GFP_KERNEL);
-	if (!node)
-		return -ENOMEM;
-
 	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
 	if (err)
 		goto out_err;
 
 	gnttab_foreach_grant(node->hvm.pages, nr_grefs,
 			     xenbus_map_ring_setup_grant_hvm,
-			     &info);
+			     info);
 
 	err = __xenbus_map_ring(dev, gnt_ref, nr_grefs, node->handles,
-				info.phys_addrs, GNTMAP_host_map, &leaked);
+				info, GNTMAP_host_map, &leaked);
 	node->nr_handles = nr_grefs;
 
 	if (err)
@@ -641,11 +654,13 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
 	spin_unlock(&xenbus_valloc_lock);
 
 	*vaddr = addr;
+	info->node = NULL;
+
 	return 0;
 
  out_xenbus_unmap_ring:
 	if (!leaked)
-		xenbus_unmap_ring(dev, node->handles, nr_grefs, info.addrs);
+		xenbus_unmap_ring(dev, node->handles, nr_grefs, info->addrs);
 	else
 		pr_alert("leaking %p size %u page(s)",
 			 addr, nr_pages);
@@ -653,7 +668,6 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
 	if (!leaked)
 		free_xenballooned_pages(nr_pages, node->hvm.pages);
  out_err:
-	kfree(node);
 	return err;
 }
 
@@ -677,39 +691,29 @@ EXPORT_SYMBOL_GPL(xenbus_unmap_ring_vfree);
 
 #ifdef CONFIG_XEN_PV
 static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
+				     struct map_ring_valloc *info,
 				     grant_ref_t *gnt_refs,
 				     unsigned int nr_grefs,
 				     void **vaddr)
 {
-	struct xenbus_map_node *node;
+	struct xenbus_map_node *node = info->node;
 	struct vm_struct *area;
-	pte_t *ptes[XENBUS_MAX_RING_GRANTS];
-	phys_addr_t phys_addrs[XENBUS_MAX_RING_GRANTS];
 	int err = GNTST_okay;
 	int i;
 	bool leaked;
 
-	*vaddr = NULL;
-
-	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
-		return -EINVAL;
-
-	node = kzalloc(sizeof(*node), GFP_KERNEL);
-	if (!node)
-		return -ENOMEM;
-
-	area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, ptes);
+	area = alloc_vm_area(XEN_PAGE_SIZE * nr_grefs, info->ptes);
 	if (!area) {
 		kfree(node);
 		return -ENOMEM;
 	}
 
 	for (i = 0; i < nr_grefs; i++)
-		phys_addrs[i] = arbitrary_virt_to_machine(ptes[i]).maddr;
+		info->phys_addrs[i] =
+			arbitrary_virt_to_machine(info->ptes[i]).maddr;
 
 	err = __xenbus_map_ring(dev, gnt_refs, nr_grefs, node->handles,
-				phys_addrs,
-				GNTMAP_host_map | GNTMAP_contains_pte,
+				info, GNTMAP_host_map | GNTMAP_contains_pte,
 				&leaked);
 	if (err)
 		goto failed;
@@ -722,6 +726,8 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
 	spin_unlock(&xenbus_valloc_lock);
 
 	*vaddr = area->addr;
+	info->node = NULL;
+
 	return 0;
 
 failed:
@@ -730,7 +736,6 @@ static int xenbus_map_ring_valloc_pv(struct xenbus_device *dev,
 	else
 		pr_alert("leaking VM area %p size %u page(s)", area, nr_grefs);
 
-	kfree(node);
 	return err;
 }
 
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 07:42:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 07:42:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY35D-0005PA-2R; Mon, 11 May 2020 07:42:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY35B-0005P5-Fu
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 07:42:37 +0000
X-Inumbo-ID: faa844b6-935a-11ea-a1f4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id faa844b6-935a-11ea-a1f4-12813bfff9fa;
 Mon, 11 May 2020 07:42:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 15307AC90;
 Mon, 11 May 2020 07:42:37 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH] xen/pvcalls-back: test for errors when calling
 backend_connect()
Date: Mon, 11 May 2020 09:42:31 +0200
Message-Id: <20200511074231.19794-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

backend_connect() can fail, so only switch the device to the connected
state if no error occurred.

Fixes: 0a9c75c2c7258f2 ("xen/pvcalls: xenbus state handling")
Cc: stable@vger.kernel.org
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/pvcalls-back.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
index cf4ce3e9358d..41a18ece029a 100644
--- a/drivers/xen/pvcalls-back.c
+++ b/drivers/xen/pvcalls-back.c
@@ -1088,7 +1088,8 @@ static void set_backend_state(struct xenbus_device *dev,
 		case XenbusStateInitialised:
 			switch (state) {
 			case XenbusStateConnected:
-				backend_connect(dev);
+				if (backend_connect(dev))
+					return;
 				xenbus_switch_state(dev, XenbusStateConnected);
 				break;
 			case XenbusStateClosing:
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 07:43:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 07:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY35w-0005TL-Bo; Mon, 11 May 2020 07:43:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY35w-0005TF-3i
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 07:43:24 +0000
X-Inumbo-ID: 175bc38a-935b-11ea-a1f4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 175bc38a-935b-11ea-a1f4-12813bfff9fa;
 Mon, 11 May 2020 07:43:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6B08EAC5B;
 Mon, 11 May 2020 07:43:25 +0000 (UTC)
Subject: Re: [PATCH] x86/gen-cpuid: Distinguish default vs max in feature
 annotations
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200508152729.14295-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e6380a05-d67a-b3a8-a624-ba5c161a8c53@suse.com>
Date: Mon, 11 May 2020 09:43:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508152729.14295-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 17:27, Andrew Cooper wrote:
> @@ -133,9 +134,13 @@ def crunch_numbers(state):
>      state.hvm_shadow_def = state.pv_def | state.raw['S']
>      state.hvm_hap_def = state.hvm_shadow_def | state.raw['H']
>  
> +    # TODO: Ignore def/max split until the toolstack migration logic is fixed
>      state.pv_max = state.pv_def
>      state.hvm_shadow_max = state.hvm_shadow_def
>      state.hvm_hap_max = state.hvm_hap_def
> +    # state.pv_max = state.raw['A'] | state.raw['a']
> +    # state.hvm_shadow_max = state.pv_max | state.raw['S'] | state.raw['s']
> +    # state.hvm_hap_max = state.hvm_shadow_max | state.raw['H'] | state.raw['h']

While it doesn't matter yet in comment form, when actually enabling
this it would seem more expressive to me written as

    state.pv_max = state.pv_def | state.raw['a']
    state.hvm_shadow_max = state.hvm_shadow_def | state.pv_max | state.raw['s']
    state.hvm_hap_max = state.hvm_hap_def | state.hvm_shadow_max | state.raw['h']

Thoughts?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 08:03:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 08:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY3P1-0007wE-F4; Mon, 11 May 2020 08:03:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY3Oz-0007w9-UG
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 08:03:05 +0000
X-Inumbo-ID: d76dcf04-935d-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d76dcf04-935d-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 08:03:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589184184;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=eQPhma5Pn61HAy8fBts1O06trrH60bgL+l04thBt9K4=;
 b=I2VySybojpQUBxYcZB++hqI2OHtbLV8LN9UgLEzEkJi567k1Z9ezra2a
 7vbpPCTO7RVgSqwM8vHiZO9Dq+Gcf1xEpSniH8gL8RAYyUn54wTy2y2Fi
 yN4V9D6WACqV03Q5Y7/mlkJ+yAcDPOfO697BniFbrdfknU1S5mUjNZrgY k=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 3N7rWcvWOgLIvGrCxE5yX0bJ/l6TnC7oAbnPLslfmYA7iVwQdmt96OuP9/4maF+eGNlmTWdhiO
 zGxvwpbhbnPVRxlnb5RPnqcpzJ6U9jnwsTLQJOtpZvIQhSeyeOH1DykpKGhErW3F7fLHDueAGQ
 Hmr/giycBIIItjpYyKXYyusB5MqqqHdQSWX3aCBuIEGuJMGCga2vcvUQsO/78OATsivr0Uv1hF
 yAZL8f24dZ6ff6AyQhoM/bRTFwVn4ckYHPH/vASRrhBZ/GON1P9k7hSLzuWftxjdqBxz46sY7F
 XA8=
X-SBRS: 2.7
X-MesageID: 17879859
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17879859"
Date: Mon, 11 May 2020 10:02:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
Message-ID: <20200511080255.GN1353@Air-de-Roger>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
 <20200508133720.GH1353@Air-de-Roger>
 <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
 <a8342bf8-d866-b507-9420-0384545e9a4f@citrix.com>
 <dece70e0-3140-eb4a-b6c7-0bf904cb6f2a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <dece70e0-3140-eb4a-b6c7-0bf904cb6f2a@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 09:25:54AM +0200, Jan Beulich wrote:
> On 08.05.2020 20:29, Andrew Cooper wrote:
> > On 08/05/2020 16:04, Jan Beulich wrote:
> >>>> +            }
> >>>> +
> >>>> +            if ( bytes == sizeof(fpstate.env) )
> >>>> +                ptr = NULL;
> >>>> +            else
> >>>> +                ptr += sizeof(fpstate.env);
> >>>> +            break;
> >>>> +
> >>>> +        case sizeof(struct x87_env16):
> >>>> +        case sizeof(struct x87_env16) + sizeof(fpstate.freg):
> >>>> +        {
> >>>> +            const struct x87_env16 *env = ptr;
> >>>> +
> >>>> +            fpstate.env.fcw = env->fcw;
> >>>> +            fpstate.env.fsw = env->fsw;
> >>>> +            fpstate.env.ftw = env->ftw;
> >>>> +
> >>>> +            if ( state->rex_prefix )
> >>>> +            {
> >>>> +                fpstate.env.mode.prot.fip = env->mode.prot.fip;
> >>>> +                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
> >>>> +                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
> >>>> +                fpstate.env.mode.prot.fds = env->mode.prot.fds;
> >>>> +                fpstate.env.mode.prot.fop = 0; /* unknown */
> >>>> +            }
> >>>> +            else
> >>>> +            {
> >>>> +                unsigned int fip = env->mode.real.fip_lo +
> >>>> +                                   (env->mode.real.fip_hi << 16);
> >>>> +                unsigned int fdp = env->mode.real.fdp_lo +
> >>>> +                                   (env->mode.real.fdp_hi << 16);
> >>>> +                unsigned int fop = env->mode.real.fop;
> >>>> +
> >>>> +                fpstate.env.mode.prot.fip = fip & 0xf;
> >>>> +                fpstate.env.mode.prot.fcs = fip >> 4;
> >>>> +                fpstate.env.mode.prot.fop = fop;
> >>>> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
> >>>> +                fpstate.env.mode.prot.fds = fdp >> 4;
> >>> This looks mostly the same as the translation done above, so maybe
> >>> could be abstracted anyway in a macro to avoid the code repetition?
> >>> (ie: fpstate_real_to_prot(src, dst) or some such).
> >> Just the 5 assignments could be put in an inline function, but
> >> if we also wanted to abstract away the declarations with their
> >> initializers, it would need to be a macro because of the
> >> different types of fpstate.env and *env. While I'd generally
> >> prefer inline functions, the macro would have the benefit that
> >> it could be #define-d / #undef-d right inside this case block.
> >> Thoughts?
> > 
> > Code like this is large in terms of volume, but it is completely crystal
> > clear (with the requested comments in place) and easy to follow.
> > 
> > I don't see how attempting to abstract out two small portions is going
> > to be an improvement.
> 
> Okay, easier for me if I don't need to touch it. Roger, can you
> live with that?

Yes, that's fine.

Thanks.
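For reference, the real-mode fixups in the quoted hunk can be sketched
standalone in Python (field names mirror the quoted C code; the sample
values are invented for illustration):

```python
# Sketch of the 16-bit-env real-mode translation from the quoted hunk:
# the env splits the 20-bit instruction/data pointers into a low 16-bit
# word and a high 4-bit field, which are recombined and then re-expressed
# as a protected-mode-style selector:offset pair.
def real_env_to_prot(fip_lo, fip_hi, fdp_lo, fdp_hi, fop):
    # Recombine the split real-mode pointers into linear addresses.
    fip = fip_lo + (fip_hi << 16)
    fdp = fdp_lo + (fdp_hi << 16)
    # Smallest-offset decomposition, as in the quoted code.
    return {
        "fip": fip & 0xf, "fcs": fip >> 4,
        "fop": fop,
        "fdp": fdp & 0xf, "fds": fdp >> 4,
    }

prot = real_env_to_prot(fip_lo=0x2345, fip_hi=0x1,
                        fdp_lo=0x0010, fdp_hi=0x0, fop=0x59d)
# Reassembling ssss:oooo recovers the original linear address.
assert (prot["fcs"] << 4) + prot["fip"] == 0x12345
assert (prot["fds"] << 4) + prot["fdp"] == 0x00010
```

This is the repeated five-assignment pattern the review discussed
abstracting; as agreed above, the open-coded form stays.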


From xen-devel-bounces@lists.xenproject.org Mon May 11 08:12:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 08:12:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY3Xn-0000zq-7w; Mon, 11 May 2020 08:12:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY3Xl-0000zl-48
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 08:12:09 +0000
X-Inumbo-ID: 1b0a63c0-935f-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b0a63c0-935f-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 08:12:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6E202ADC2;
 Mon, 11 May 2020 08:12:09 +0000 (UTC)
Subject: Re: [PATCH] xen-platform: Constify dev_pm_ops
To: Rikard Falkeborn <rikard.falkeborn@gmail.com>,
 boris.ostrovsky@oracle.com, sstabellini@kernel.org
References: <20200509134755.15038-1-rikard.falkeborn@gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <03d892fb-eb90-19e1-9d23-71d4b9df75ff@suse.com>
Date: Mon, 11 May 2020 10:12:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200509134755.15038-1-rikard.falkeborn@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 09.05.20 15:47, Rikard Falkeborn wrote:
> dev_pm_ops is never modified, so mark it const to allow the compiler to
> put it in read-only memory.
> 
> Before:
>     text    data     bss     dec     hex filename
>     2457    1668     256    4381    111d drivers/xen/platform-pci.o
> 
> After:
>     text    data     bss     dec     hex filename
>     2681    1444     256    4381    111d drivers/xen/platform-pci.o
> 
> Signed-off-by: Rikard Falkeborn <rikard.falkeborn@gmail.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 11 08:14:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 08:14:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY3aU-00018z-OZ; Mon, 11 May 2020 08:14:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY3aU-00018u-1E
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 08:14:58 +0000
X-Inumbo-ID: 8002b156-935f-11ea-a1f5-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8002b156-935f-11ea-a1f5-12813bfff9fa;
 Mon, 11 May 2020 08:14:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589184898;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=UeP7ACtncW/BHcF27i5MYIvueaaUCsS3M5fQzbqtlQg=;
 b=hxKgY0eeeQYkitehVGFIRsfkwEHdGh+fNxhL+ghiMTKfpmGMZrsSZUjL
 73ss9jgtLXtZ/kEjH55k9bFPGX6NwQk2yPMGgnOB9TTW2iydmtfJnNr2Y
 fQl7dGufRz3KnTrUTGgKFSjVmofkHlFnaRr+P2Jd7wuxpVjj/j4ij/PgF 4=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: uf4chRphwK/uoRtRN4O4u9d0tVna4inFzaSQNb5d1CRDb6ML6/pR5CzUX+b1g1bcmeNi6iJB1c
 5baNIPhYKnNXPq76JySvTYpGNsTzIOQsMBqUDbwARBgDGGxziT+Y7zzWdGX1sOw1visIsgo7bp
 R01fVZAY0+Eb9/9/55QMKWjUdtnPTEFeLjJeLXNtArLjP49lKne2kZSGyh884b4ab7p7FupsA9
 na1dphxv8wrfboVK+/yJFSLNXKA0XYdTFQkGvJm9WRn69ivchb6rWC/ztbrXaOg67lKHeJyojd
 BNk=
X-SBRS: 2.7
X-MesageID: 17184668
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17184668"
Date: Mon, 11 May 2020 10:14:49 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <qemu-devel@nongnu.org>
Subject: Re: [PATCH] xen: fix build without pci passthrough
Message-ID: <20200511081449.GO1353@Air-de-Roger>
References: <20200504101443.3165-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200504101443.3165-1-roger.pau@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ping?

On Mon, May 04, 2020 at 12:14:43PM +0200, Roger Pau Monne wrote:
> has_igd_gfx_passthru is only available when QEMU is built with
> CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
> code without checking if it's available.
> 
> Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> ---
>  hw/xen/xen-common.c | 4 ++++
>  hw/xen/xen_pt.h     | 7 +++++++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/hw/xen/xen-common.c b/hw/xen/xen-common.c
> index a15070f7f6..c800862419 100644
> --- a/hw/xen/xen-common.c
> +++ b/hw/xen/xen-common.c
> @@ -127,6 +127,7 @@ static void xen_change_state_handler(void *opaque, int running,
>      }
>  }
>  
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>  static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
>  {
>      return has_igd_gfx_passthru;
> @@ -136,6 +137,7 @@ static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
>  {
>      has_igd_gfx_passthru = value;
>  }
> +#endif
>  
>  static void xen_setup_post(MachineState *ms, AccelState *accel)
>  {
> @@ -197,11 +199,13 @@ static void xen_accel_class_init(ObjectClass *oc, void *data)
>  
>      compat_props_add(ac->compat_props, compat, G_N_ELEMENTS(compat));
>  
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>      object_class_property_add_bool(oc, "igd-passthru",
>          xen_get_igd_gfx_passthru, xen_set_igd_gfx_passthru,
>          &error_abort);
>      object_class_property_set_description(oc, "igd-passthru",
>          "Set on/off to enable/disable igd passthrough", &error_abort);
> +#endif
>  }
>  
>  #define TYPE_XEN_ACCEL ACCEL_CLASS_NAME("xen")
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index 179775db7b..660dd8a008 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -1,6 +1,7 @@
>  #ifndef XEN_PT_H
>  #define XEN_PT_H
>  
> +#include "qemu/osdep.h"
>  #include "hw/xen/xen_common.h"
>  #include "hw/pci/pci.h"
>  #include "xen-host-pci-device.h"
> @@ -322,7 +323,13 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
>                                              unsigned int domain,
>                                              unsigned int bus, unsigned int slot,
>                                              unsigned int function);
> +
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>  extern bool has_igd_gfx_passthru;
> +#else
> +# define has_igd_gfx_passthru false
> +#endif
> +
>  static inline bool is_igd_vga_passthrough(XenHostPCIDevice *dev)
>  {
>      return (has_igd_gfx_passthru
> -- 
> 2.26.2
> 


From xen-devel-bounces@lists.xenproject.org Mon May 11 08:16:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 08:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY3bf-0001FD-34; Mon, 11 May 2020 08:16:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY3be-0001F7-0v
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 08:16:10 +0000
X-Inumbo-ID: ab12708e-935f-11ea-a1f5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab12708e-935f-11ea-a1f5-12813bfff9fa;
 Mon, 11 May 2020 08:16:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3810CB07B;
 Mon, 11 May 2020 08:16:11 +0000 (UTC)
Subject: Re: [PATCH] xen/cpuhotplug: Fix initial CPU offlining for PV(H) guests
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <1588976923-3667-1-git-send-email-boris.ostrovsky@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8a94d819-5143-58e2-9bd2-1cc341ba3a80@suse.com>
Date: Mon, 11 May 2020 10:16:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <1588976923-3667-1-git-send-email-boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: sstabellini@kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 09.05.20 00:28, Boris Ostrovsky wrote:
> Commit a926f81d2f6c ("xen/cpuhotplug: Replace cpu_up/down() with
> device_online/offline()") replaced cpu_down() with device_offline()
> call which requires that the CPU has been registered before. This
> registration, however, happens later from topology_init() which
> is called as subsys_initcall(). setup_vcpu_hotplug_event(), on the
> other hand, is invoked earlier, during arch_initcall().
> 
> As a result, booting a PV(H) guest with vcpus < maxvcpus causes a crash.
> 
> Move setup_vcpu_hotplug_event() (and therefore setup_cpu_watcher()) to
> late_initcall(). In addition, instead of performing all offlining steps
> in setup_cpu_watcher() simply call disable_hotplug_cpu().
> 
> Fixes: a926f81d2f6c ("xen/cpuhotplug: Replace cpu_up/down() with device_online/offline()")
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 11 09:22:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 09:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY4dh-00077g-Ml; Mon, 11 May 2020 09:22:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY4dg-00077b-MH
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 09:22:20 +0000
X-Inumbo-ID: e96a233c-9368-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e96a233c-9368-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 09:22:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589188939;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=D7nx2jdoCYvV/D9YRYzAO324Y5cO6ldbjgmzPw+3qIk=;
 b=NGKQpm9jfFIsz1LcdN1ZU2xUBoDJvRM/ozH9yM9UNT06NdAApe/bXafU
 t3mFP1GRVjaoC+9MtkDFiN9DW+WiXm1YZtcWeX4W0xDcgBbpiNRqCIfMn
 zH9DaFCYMdX+xqv+2QrUkmFUVHxuBeJU2Kz2PM74LRs0WQmpRuIGlNjUV s=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 31Ivv0DxsjcOlFyHLdVkB6zKskcIFcSc+WG0oc3xfMfQtg4d5niSNsKcGDTyQ+pOAk8Lg7pKKf
 TZWymNU/ZoNM17TnPt3L1o75EK4j0zV8bEZnkQH9mm3812FL8j+/3YIp+rBI6KSHcLYmED4pwO
 FQw1LqCa8XJFfI9Zmb3dbBC/AENMUfpJ6rH/Z+fASp1T5jsXSI2cu8nC7/wWa0K5OzVeZM19H4
 rpkiBHeQqx0oSga2dXYr9YaF3+6gq0DVgPkhMwAsIs/slpwkZ+tCQ4icMIQNnreNo9Dau5vMtv
 +Gs=
X-SBRS: 2.7
X-MesageID: 17884784
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17884784"
Date: Mon, 11 May 2020 11:22:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v8 08/12] x86emul: support FLDENV and FRSTOR
Message-ID: <20200511092211.GA35422@Air-de-Roger>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <09fe2c18-0037-af71-93be-87261051e2a2@suse.com>
 <20200508133720.GH1353@Air-de-Roger>
 <4b6f4353-066e-351d-597d-4455193ff95f@suse.com>
 <20200508162155.GL1353@Air-de-Roger>
 <7f289c91-da38-55bc-a49a-dd80e60958d4@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <7f289c91-da38-55bc-a49a-dd80e60958d4@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 09:29:27AM +0200, Jan Beulich wrote:
> On 08.05.2020 18:21, Roger Pau Monné wrote:
> > On Fri, May 08, 2020 at 05:04:02PM +0200, Jan Beulich wrote:
> >> On 08.05.2020 15:37, Roger Pau Monné wrote:
> >>> On Tue, May 05, 2020 at 10:16:20AM +0200, Jan Beulich wrote:
> >>>> --- a/tools/tests/x86_emulator/test_x86_emulator.c
> >>>> +++ b/tools/tests/x86_emulator/test_x86_emulator.c
> >>>> @@ -11648,6 +11651,89 @@ int x86_emul_blk(
> >>>>  
> >>>>  #ifndef X86EMUL_NO_FPU
> >>>>  
> >>>> +    case blk_fld:
> >>>> +        ASSERT(!data);
> >>>> +
> >>>> +        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
> >>>> +        switch ( bytes )
> >>>> +        {
> >>>> +        case sizeof(fpstate.env):
> >>>> +        case sizeof(fpstate):
> >>>> +            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
> >>>> +            if ( !state->rex_prefix )
> >>>> +            {
> >>>> +                unsigned int fip = fpstate.env.mode.real.fip_lo +
> >>>> +                                   (fpstate.env.mode.real.fip_hi << 16);
> >>>> +                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
> >>>> +                                   (fpstate.env.mode.real.fdp_hi << 16);
> >>>> +                unsigned int fop = fpstate.env.mode.real.fop;
> >>>> +
> >>>> +                fpstate.env.mode.prot.fip = fip & 0xf;
> >>>> +                fpstate.env.mode.prot.fcs = fip >> 4;
> >>>> +                fpstate.env.mode.prot.fop = fop;
> >>>> +                fpstate.env.mode.prot.fdp = fdp & 0xf;
> >>>> +                fpstate.env.mode.prot.fds = fdp >> 4;
> >>>
> >>> I've found the layouts in the SDM vol. 1, but I haven't been able to
> >>> found the translation mechanism from real to protected. Could you
> >>> maybe add a reference here?
> >>
> >> A reference to some piece of documentation? I don't think this
> >> is spelled out anywhere. It's also only one of various possible
> >> ways of doing the translation, but among them the most flexible
> >> one for possible consumers of the data (because of using the
> >> smallest possible offsets into the segments).
> > 
> > Having this written down as a comment would help, but maybe that's
> > just because I'm not familiar at all with all this stuff.
> > 
> > Again, likely a very stupid question, but I would expect:
> > 
> > fpstate.env.mode.prot.fip = fip;
> > 
> > Without the mask.
> 
> How that? A linear address has many ways of decomposing into a
> real/vm86 mode ssss:oooo pair, but what you suggest is not one
> of them. The other extreme to the one chosen would be
> 
>                 fpstate.env.mode.prot.fip = fip & 0xffff;
>                 fpstate.env.mode.prot.fcs = (fip >> 4) & 0xf000;
> 
> Except that when doing it this way, even the full insn (or for
> fcs:fdp the full operand) may not be accessible through the
> resulting ssss, due to segment wraparound.

Thanks for the explanation. I see it's better to keep only the lower
4 bits in the offset, in order to prevent wraparound.
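The two decompositions being compared can be sketched numerically (a
standalone illustration in Python, not emulator code; the sample address
is invented):

```python
# A 20-bit real-mode linear address can be split into ssss:oooo many ways.

def seg_off_small(linear):
    """Decomposition used in the patch: smallest possible offset."""
    return linear >> 4, linear & 0xf          # (segment, offset)

def seg_off_large(linear):
    """The other extreme Jan describes: largest possible 16-bit offset."""
    return (linear >> 4) & 0xf000, linear & 0xffff

def to_linear(seg, off):
    """Reassemble, truncating to a 20-bit real-mode address."""
    return ((seg << 4) + off) & 0xfffff

fip = 0x12345
# Both decompositions round-trip back to the same linear address.
for decompose in (seg_off_small, seg_off_large):
    seg, off = decompose(fip)
    assert to_linear(seg, off) == fip

# But with the small-offset form, bytes following the address remain
# reachable through the same segment without the 16-bit offset wrapping.
_, off = seg_off_small(fip)
assert off + 15 <= 0xffff
```

The small-offset split keeps the whole insn (or operand) addressable from
the resulting ssss, which is the wraparound concern raised above.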

Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 11 09:35:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 09:35:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY4pv-00089c-Pb; Mon, 11 May 2020 09:34:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HsYJ=6Z=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jY4pu-00089X-JW
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 09:34:58 +0000
X-Inumbo-ID: adaca124-936a-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adaca124-936a-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 09:34:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=mStpTBGyY9S8LM11HslJemktFveGhQ7ZfYEFGpEbwQg=; b=H+E53ZZEgDnTRKZ+sJZOe6UaPf
 1/R35G7nsxLOiUIyNFAhZeI+n0JyZBKwTpiMST7uoLaIsVt0Oln/R72SBWJex4Fb5Yx6nBEcBe5K4
 I41hlTg73oKwYyyDHk05W2KNvi8eox2bj6CUfVJysU0NgFR/UqQrgooRya+rNmDFFYR8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jY4pt-00022e-4s; Mon, 11 May 2020 09:34:57 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jY4ps-0005Ov-Ty; Mon, 11 May 2020 09:34:57 +0000
Subject: Re: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
Date: Mon, 11 May 2020 10:34:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Volodymyr,

On 06/05/2020 02:44, Volodymyr Babchuk wrote:
> Normal World can share buffer with OP-TEE for two reasons:
> 1. Some client application wants to exchange data with TA
> 2. OP-TEE asks for shared buffer for internal needs
> 
> The second case was handled more strictly than necessary:
> 
> 1. In RPC request OP-TEE asks for buffer
> 2. NW allocates buffer and provides it via RPC response
> 3. Xen pins pages and translates data
> 4. Xen provides buffer to OP-TEE
> 5. OP-TEE uses it
> 6. OP-TEE sends request to free the buffer
> 7. NW frees the buffer and sends the RPC response
> 8. Xen unpins pages and forgets about the buffer
> 
> The problem is that Xen should forget about the buffer in between stages 6
> and 7. I.e. the right flow should be like this:
> 
> 6. OP-TEE sends request to free the buffer
> 7. Xen unpins pages and forgets about the buffer
> 8. NW frees the buffer and sends the RPC response
> 
> This is because OP-TEE internally frees the buffer before sending the
> "free SHM buffer" request. So we have no reason to hold a reference to
> this buffer anymore. Moreover, in multiprocessor systems NW has time
> to reuse the buffer cookie for another buffer. Xen complained about this
> and denied the new buffer registration. I have seen this issue while
> running tests on iMX SoC.
> 
> So, this patch basically corrects that behavior by freeing the buffer
> earlier, when handling RPC return from OP-TEE.
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> ---
>   xen/arch/arm/tee/optee.c | 24 ++++++++++++++++++++----
>   1 file changed, 20 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> index 6a035355db..af19fc31f8 100644
> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -1099,6 +1099,26 @@ static int handle_rpc_return(struct optee_domain *ctx,
>           if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
>               call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a;
>   
> +        /*
> +         * OP-TEE signals that it frees the buffer that it requested
> +         * before. This is the right for us to do the same.
> +         */
> +        if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
> +        {
> +            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
> +
> +            free_optee_shm_buf(ctx, cookie);
> +
> +            /*
> +             * This should never happen. We have a bug either in the
> +             * OP-TEE or in the mediator.
> +             */
> +            if ( call->rpc_data_cookie && call->rpc_data_cookie != cookie )
> +                gprintk(XENLOG_ERR,
> +                        "Saved RPC cookie does not corresponds to OP-TEE's (%"PRIx64" != %"PRIx64")\n",

s/corresponds/correspond/

> +                        call->rpc_data_cookie, cookie);

IIUC, if you free the wrong SHM buffer then your guest is likely to be 
running incorrectly afterwards. So shouldn't we crash the guest to avoid 
further issues?

> +            call->rpc_data_cookie = 0;
> +        }
>           unmap_domain_page(shm_rpc->xen_arg);
>       }
>   
> @@ -1464,10 +1484,6 @@ static void handle_rpc_cmd(struct optee_domain *ctx, struct cpu_user_regs *regs,
>               }
>               break;
>           case OPTEE_RPC_CMD_SHM_FREE:
> -            free_optee_shm_buf(ctx, shm_rpc->xen_arg->params[0].u.value.b);
> -            if ( call->rpc_data_cookie ==
> -                 shm_rpc->xen_arg->params[0].u.value.b )
> -                call->rpc_data_cookie = 0;
>               break;
>           default:
>               break;
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 11 10:11:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5Op-0003D4-3O; Mon, 11 May 2020 10:11:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jY5On-0003Cz-9q
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:11:01 +0000
X-Inumbo-ID: b645fe34-936f-11ea-b07b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b645fe34-936f-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 10:11:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589191860;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=FJoMCbjbJhOvYHy5EPW+wuBFemCOUU0k9wl/bu+6Bsc=;
 b=KsBMvY2DV/0/wbDmM5Z65YcF4JyPhGQ3UFlZsOyEgIrsLRXnBEY1Zj6e
 QeIfv3sAhTMnWbBrkYzLQMkrwkAR45xSPmkHGERwiVXVfRT7DmfivkFfD
 WS8I5z6ELY24LVu/v4wvNI+v376HL6npX/uz2LVCeaBjTFT9BwTJbGZw1 g=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: CdnZXZRQ0ENSIfshlULNDRMrprxrdCSpdfgDQYAfxBTUDmqEs7wM/ly2AaShST4uQ9ALXQjcTw
 +4u21gS3ysdjlJ5tLGQY1FMk6jhjpRMIjc4z78O+6ZkLfwyC38pHR30wnJt1EFMJT6W510tRWg
 hz2Izq4D2OC8STdJS5aNApQxPqVnDzblnbRB957RrYAlUS/igwOPWkjgApEcO2k3r4/DuyG7pO
 huCcETzn6uMxYphf+OOvCV7Rso7zQya1JtG/a52gQBGuBMrvepCfAA2MDI3IyYlf8BaJwR5KH1
 T5I=
X-SBRS: 2.7
X-MesageID: 17479590
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17479590"
Subject: Re: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
To: Julien Grall <julien@xen.org>, Volodymyr Babchuk
 <Volodymyr_Babchuk@epam.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
 <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f4e1cc2b-97bf-d242-8f1b-e72083f378be@citrix.com>
Date: Mon, 11 May 2020 11:10:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11/05/2020 10:34, Julien Grall wrote:
> Hi Volodymyr,
>
> On 06/05/2020 02:44, Volodymyr Babchuk wrote:
>> Normal World can share buffer with OP-TEE for two reasons:
>> 1. Some client application wants to exchange data with TA
>> 2. OP-TEE asks for shared buffer for internal needs
>>
>> The second case was handled more strictly than necessary:
>>
>> 1. In RPC request OP-TEE asks for buffer
>> 2. NW allocates buffer and provides it via RPC response
>> 3. Xen pins pages and translates data
>> 4. Xen provides buffer to OP-TEE
>> 5. OP-TEE uses it
>> 6. OP-TEE sends request to free the buffer
>> 7. NW frees the buffer and sends the RPC response
>> 8. Xen unpins pages and forgets about the buffer
>>
>> The problem is that Xen should forget about the buffer in between stages 6
>> and 7. I.e. the right flow should be like this:
>>
>> 6. OP-TEE sends request to free the buffer
>> 7. Xen unpins pages and forgets about the buffer
>> 8. NW frees the buffer and sends the RPC response
>>
>> This is because OP-TEE internally frees the buffer before sending the
>> "free SHM buffer" request. So we have no reason to hold a reference to
>> this buffer anymore. Moreover, in multiprocessor systems NW has time
>> to reuse the buffer cookie for another buffer. Xen complained about this
>> and denied the new buffer registration. I have seen this issue while
>> running tests on iMX SoC.
>>
>> So, this patch basically corrects that behavior by freeing the buffer
>> earlier, when handling RPC return from OP-TEE.
>>
>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>> ---
>>   xen/arch/arm/tee/optee.c | 24 ++++++++++++++++++++----
>>   1 file changed, 20 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>> index 6a035355db..af19fc31f8 100644
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -1099,6 +1099,26 @@ static int handle_rpc_return(struct
>> optee_domain *ctx,
>>           if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
>>               call->rpc_buffer_type =
>> shm_rpc->xen_arg->params[0].u.value.a;
>>   +        /*
>> +         * OP-TEE signals that it frees the buffer that it requested
>> +         * before. This is the right for us to do the same.
>> +         */
>> +        if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
>> +        {
>> +            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
>> +
>> +            free_optee_shm_buf(ctx, cookie);
>> +
>> +            /*
>> +             * This should never happen. We have a bug either in the
>> +             * OP-TEE or in the mediator.
>> +             */
>> +            if ( call->rpc_data_cookie && call->rpc_data_cookie !=
>> cookie )
>> +                gprintk(XENLOG_ERR,
>> +                        "Saved RPC cookie does not corresponds to
>> OP-TEE's (%"PRIx64" != %"PRIx64")\n",
>
> s/corresponds/correspond/
>
>> +                        call->rpc_data_cookie, cookie);
>
> IIUC, if you free the wrong SHM buffer then your guest is likely to be
> running incorrectly afterwards. So shouldn't we crash the guest to
> avoid further issues?

No - crashing the guest prohibits testing of the interface, and/or the
guest realising it screwed up and dumping enough state to usefully debug
what is going on.

Furthermore, if userspace could trigger this path, we'd have to issue an
XSA.

Crashing the guest is almost never the right thing to do, and definitely
not appropriate for a bad parameter.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 10:12:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5QS-0003Mg-R6; Mon, 11 May 2020 10:12:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nxab=6Z=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jY5QS-0003Mb-0S
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:12:44 +0000
X-Inumbo-ID: f3511c28-936f-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f3511c28-936f-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 10:12:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hDlJBtonvltBwDud1T1U9Attg2O76gjqdcTfRn5Kee0=; b=F7rX/x1HpfsQuXaR6WGX1sEPf
 0ssjh69HDpQhzvkrOIbc7qR6r2Ir18+PpfOTbjtEFPb4lOF9uotVUP8okzpz0jjETMQjdClmA/HjA
 YADvPZZcKYipp7MGF8owL/6dKJSpiP06S5ZmrZczyLiGrke+D4dO1a9Tah0sd2I5OYbGY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY5QP-0002uR-U3; Mon, 11 May 2020 10:12:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY5QP-0007be-CN; Mon, 11 May 2020 10:12:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jY5QP-0001u0-Bl; Mon, 11 May 2020 10:12:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150128-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150128: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=190c60f12db469472476041ecd0e6c9a0d4b0f8a
X-Osstest-Versions-That: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 11 May 2020 10:12:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150128 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150128/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150119
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150119
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150119
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150119
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150119
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150119
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150119
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150119
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150119
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150119
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150119
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  190c60f12db469472476041ecd0e6c9a0d4b0f8a
baseline version:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9

Last test of basis   150119  2020-05-10 01:51:25 Z    1 days
Testing same since   150128  2020-05-10 18:37:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e0d92d9bd7..190c60f12d  190c60f12db469472476041ecd0e6c9a0d4b0f8a -> master


From xen-devel-bounces@lists.xenproject.org Mon May 11 10:18:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5Vz-0003au-IX; Mon, 11 May 2020 10:18:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY5Vy-0003ap-KP
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:18:26 +0000
X-Inumbo-ID: bfc2315d-9370-11ea-a1fe-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bfc2315d-9370-11ea-a1fe-12813bfff9fa;
 Mon, 11 May 2020 10:18:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589192305;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=toaCxfPBf5Yk338+hhpOIhTaQZYVwK3BeAiQLNq/LKc=;
 b=WlCIL18TQ1zZjBPM/bslttIoLwqgC0Wp0d49cnvocJ0m767ae87hH0bi
 TKT3sw9d8hD1QUHH3oPeocpeZ3j7DVS8LThBpAdzrQsrKdBIkZMp5fJsN
 pKTKhL174zVIEPj6w/kMMmAxJ/RE1a/voHAgMAdDZ83PYTUzcR2mYz827 c=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: Gmi11WsyMls3IOeIrSYnF6b62AFjF+vU8utGL2vILPN+kpy0Gl/0jK+kz1r7YWRmaWxAFxbk0M
 ves/jAZqZEc22LBhYWxHcMw/ilYnd7YGgiMDdEPY6btT91vqKH99RRtghh2OMduvBptJhslmG8
 KquW1njBhonN/R3KFTCvhzcd9XNlJK1cfviyx9HA4NWgFic+IB9Uqnd2gevjerztllXxEIL4J/
 bOQK8KkCFMK8pVMyu8H6yrCNtOInB+Noin/s14zw2+dOZcs2zoqM6IeLlgLpGIh33iVcFPFLQW
 Q68=
X-SBRS: 2.7
X-MesageID: 17480036
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17480036"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2] x86/idle: prevent entering C6 with in service interrupts
 on Intel
Date: Mon, 11 May 2020 12:17:53 +0200
Message-ID: <20200511101753.36610-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Apply a workaround for Intel errata CLX30: "A Pending Fixed Interrupt
May Be Dispatched Before an Interrupt of The Same Priority Completes".

It's not clear which models are affected, as the erratum is listed in
the "Second Generation Intel Xeon Scalable Processors" specification
update, but the issue has been seen as far back as Nehalem processors.
Apply the workaround to all Intel processors; the condition can be
relaxed later.
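
Outside the patch itself, the gating logic described above can be sketched
as follows (a hedged illustration: the helper names and explicit parameters
are hypothetical, simplified from errata_c6_eoi_workaround() and the
ACPI_STATE_C3 check in the diff below, with vendor and pending-EOI state
passed in so the logic can be exercised on its own):

```c
#include <stdbool.h>

/* Hedged sketch, not the actual Xen code: after this patch the fix is
 * applied on all Intel CPUs whenever a local APIC EOI is pending. */
static bool c6_isr_workaround_needed(bool vendor_is_intel,
                                     bool has_pending_apic_eoi)
{
    return vendor_is_intel && has_pending_apic_eoi;
}

/* Deep C-state selection: fall back to the "safe" shallow state when the
 * target is C3 or deeper and the workaround fires (ACPI_STATE_C3 == 3). */
static int select_cstate(int target_type, int safe_type,
                         bool vendor_is_intel, bool pending_eoi)
{
    if ( target_type >= 3 &&
         c6_isr_workaround_needed(vendor_is_intel, pending_eoi) )
        return safe_type;
    return target_type;
}
```

The sketch mirrors the `cx->type >= ACPI_STATE_C3` condition used in both
the acpi and mwait idle drivers in this version of the patch.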

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Unify workaround with errata_c6_eoi_workaround.
 - Properly check state in both acpi and mwait drivers.
---
 docs/misc/xen-command-line.pandoc |  8 ++++++++
 xen/arch/x86/acpi/cpu_idle.c      | 23 ++++++++++++-----------
 xen/arch/x86/cpu/mwait-idle.c     |  3 +++
 xen/include/asm-x86/cpuidle.h     |  1 +
 4 files changed, 24 insertions(+), 11 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index ee12b0f53f..6e868a2185 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -652,6 +652,14 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
 additionally a trace buffer of the specified size is allocated per cpu.
 The debug trace feature is only enabled in debugging builds of Xen.
 
+### disable-c6-isr
+> `= <boolean>`
+
+> Default: `true for Intel CPUs`
+
+Workaround for Intel errata CLX30: prevent entering C6 idle states with
+in-service local APIC interrupts. Enabled by default for all Intel CPUs.
+
 ### dma_bits
 > `= <integer>`
 
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index b83446e77d..8ef05aeba3 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -555,20 +555,21 @@ void trace_exit_reason(u32 *irq_traced)
  * There was an errata with some Core i7 processors that an EOI transaction 
  * may not be sent if software enters core C6 during an interrupt service 
  * routine. So we don't enter deep Cx state if there is an EOI pending.
+ *
+ * Errata CLX30: A Pending Fixed Interrupt May Be Dispatched Before an
+ * Interrupt of The Same Priority Completes.
+ *
+ * Prevent entering C6 if there are pending lapic interrupts, or else the
+ * processor might dispatch further pending interrupts before the first one has
+ * been completed.
  */
-static bool errata_c6_eoi_workaround(void)
+bool errata_c6_eoi_workaround(void)
 {
-    static int8_t fix_needed = -1;
+    static int8_t __read_mostly fix_needed = -1;
+    boolean_param("disable-c6-isr", fix_needed);
 
     if ( unlikely(fix_needed == -1) )
-    {
-        int model = boot_cpu_data.x86_model;
-        fix_needed = (cpu_has_apic && !directed_eoi_enabled &&
-                      (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
-                      (boot_cpu_data.x86 == 6) &&
-                      ((model == 0x1a) || (model == 0x1e) || (model == 0x1f) ||
-                       (model == 0x25) || (model == 0x2c) || (model == 0x2f)));
-    }
+        fix_needed = boot_cpu_data.x86_vendor == X86_VENDOR_INTEL;
 
     return (fix_needed && cpu_has_pending_apic_eoi());
 }
@@ -676,7 +677,7 @@ static void acpi_processor_idle(void)
         return;
     }
 
-    if ( (cx->type == ACPI_STATE_C3) && errata_c6_eoi_workaround() )
+    if ( (cx->type >= ACPI_STATE_C3) && errata_c6_eoi_workaround() )
         cx = power->safe_state;
 
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index b81937966e..bb017c488f 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -770,6 +770,9 @@ static void mwait_idle(void)
 		return;
 	}
 
+	if (cx->type >= 3 && errata_c6_eoi_workaround())
+		cx = power->safe_state;
+
 	eax = cx->address;
 	cstate = ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
 
diff --git a/xen/include/asm-x86/cpuidle.h b/xen/include/asm-x86/cpuidle.h
index 5d7dffd228..13879f58a1 100644
--- a/xen/include/asm-x86/cpuidle.h
+++ b/xen/include/asm-x86/cpuidle.h
@@ -26,4 +26,5 @@ void update_idle_stats(struct acpi_processor_power *,
 void update_last_cx_stat(struct acpi_processor_power *,
                          struct acpi_processor_cx *, uint64_t);
 
+bool errata_c6_eoi_workaround(void);
 #endif /* __X86_ASM_CPUIDLE_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon May 11 10:21:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:21:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5ZM-0004Li-2Y; Mon, 11 May 2020 10:21:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY5ZK-0004Ld-Kr
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:21:54 +0000
X-Inumbo-ID: 3bae6d30-9371-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3bae6d30-9371-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 10:21:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589192514;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=c4qVm48SiE+9k82it/FvAxSBf0G4nzK3DjxTozoPa7Y=;
 b=ftww0LVqOinIxyw8+tPfkf8QpMg3X80ne0Z0SJ7ZQtqtjZy0lWV++5xw
 RUK+6WnNC0s2YC+9Q7DmL548+xoyapfOKwWZZJoc+gDWjANpnchOvEPV7
 MyoScFKppun+NdaxXQCzYraI8WbMiCXckHCweWJG/JYgQSKsXp/8fWpSz 0=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: s3GUxM4AJt54wrwnnts09AYyXrm/w/IKEDf+Dtun/2y+h9eYhqDz29aWd1iyr1Tz55kF7xQoAR
 IJTZ/x81FD1aNKOxObvDlHutOQ+/QgJyTvlhrDdzRXO2mpkgDRt+6AOxRWOz+wen2Z/jq2ovWf
 mlSVTfCwyvjG6QmUTjpKrxZ8G36wU3p+UTQIe/zyVsBOjGkCz/Km2duT0Fc81us+p75qeTB/xx
 BpY0bcxzWHdDP12ks56kBufq4woQYwTtAWdBoANKAkNZRA44Wyy9gW+2fWkE+LBTy9GUTCEYfB
 YaQ=
X-SBRS: 2.7
X-MesageID: 17192982
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17192982"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/mwait: remove unneeded local variables
Date: Mon, 11 May 2020 12:21:28 +0200
Message-ID: <20200511102128.36840-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Remove the eax and cstate local variables; the values can be fetched
directly from acpi_processor_cx without any transformation.

No functional change.
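
For reference, the dropped cstate derivation decoded the MWAIT hint held in
cx->address as sketched below (hedged: the 4-bit field width and mask are
restated from the removed lines themselves, not taken from Xen's headers):

```c
/* Hedged sketch of the removed computation:
 *   cstate = ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
 * where the hint's low nibble selects the substate and the next nibble
 * selects the (C-state number - 1). */
#define MWAIT_SUBSTATE_SIZE 4
#define MWAIT_CSTATE_MASK   0xf

static unsigned int mwait_hint_to_cstate(unsigned int eax_hint)
{
    /* Bits 7:4 of the hint encode the C-state; bits 3:0 the substate. */
    return ((eax_hint >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
}
```

After this change the code simply uses cx->type for the lapic timer
reliability check and passes cx->address to mwait_idle_with_hints()
unmodified.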

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/mwait-idle.c | 11 ++++-------
 1 file changed, 4 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index bb017c488f..6d10ac32b8 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -721,7 +721,7 @@ static void mwait_idle(void)
 	unsigned int cpu = smp_processor_id();
 	struct acpi_processor_power *power = processor_powers[cpu];
 	struct acpi_processor_cx *cx = NULL;
-	unsigned int eax, next_state, cstate;
+	unsigned int next_state;
 	u64 before, after;
 	u32 exp = 0, pred = 0, irq_traced[4] = { 0 };
 
@@ -773,9 +773,6 @@ static void mwait_idle(void)
 	if (cx->type >= 3 && errata_c6_eoi_workaround())
 		cx = power->safe_state;
 
-	eax = cx->address;
-	cstate = ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
-
 #if 0 /* XXX Can we/do we need to do something similar on Xen? */
 	/*
 	 * leave_mm() to avoid costly and often unnecessary wakeups
@@ -785,7 +782,7 @@ static void mwait_idle(void)
 		leave_mm(cpu);
 #endif
 
-	if (!(lapic_timer_reliable_states & (1 << cstate)))
+	if (!(lapic_timer_reliable_states & (1 << cx->type)))
 		lapic_timer_off();
 
 	before = alternative_call(cpuidle_get_tick);
@@ -794,7 +791,7 @@ static void mwait_idle(void)
 	update_last_cx_stat(power, cx, before);
 
 	if (cpu_is_haltable(cpu))
-		mwait_idle_with_hints(eax, MWAIT_ECX_INTERRUPT_BREAK);
+		mwait_idle_with_hints(cx->address, MWAIT_ECX_INTERRUPT_BREAK);
 
 	after = alternative_call(cpuidle_get_tick);
 
@@ -807,7 +804,7 @@ static void mwait_idle(void)
 	update_idle_stats(power, cx, before, after);
 	local_irq_enable();
 
-	if (!(lapic_timer_reliable_states & (1 << cstate)))
+	if (!(lapic_timer_reliable_states & (1 << cx->type)))
 		lapic_timer_on();
 
 	rcu_idle_exit(cpu);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon May 11 10:27:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:27:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5e8-0004X6-PN; Mon, 11 May 2020 10:26:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HsYJ=6Z=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jY5e7-0004X1-Sg
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:26:51 +0000
X-Inumbo-ID: ed4ef032-9371-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed4ef032-9371-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 10:26:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=GoavD/MUgf+PSwDniYE0pGVYwTzzjpu0yCLvKrwJIPM=; b=3xgzBAkOl6QCdU6GL3H0sdgjht
 W0wNbGQpmK7lOgAB9d0pULooLFtqPGe+GBIbwhl+ICacyIxrTVkL2foC7X2fn5fpIbqoW1LgPZoMN
 1qK4qD6xH25nkYL/WAMRkcibTXaIY5HNow9vYOPexdr2RxBQZq0NmTwhQJNUqBne6XtQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jY5e6-0003DG-Ga; Mon, 11 May 2020 10:26:50 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jY5e6-0000cZ-5E; Mon, 11 May 2020 10:26:50 +0000
Subject: Re: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
 <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
 <f4e1cc2b-97bf-d242-8f1b-e72083f378be@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <cc94a26c-3364-163a-4ec6-f410c17c671f@xen.org>
Date: Mon, 11 May 2020 11:26:48 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <f4e1cc2b-97bf-d242-8f1b-e72083f378be@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Andrew,

On 11/05/2020 11:10, Andrew Cooper wrote:
> On 11/05/2020 10:34, Julien Grall wrote:
>> Hi Volodymyr,
>>
>> On 06/05/2020 02:44, Volodymyr Babchuk wrote:
>>> Normal World can share a buffer with OP-TEE for two reasons:
>>> 1. Some client application wants to exchange data with TA
>>> 2. OP-TEE asks for shared buffer for internal needs
>>>
>>> The second case was handled more strictly than necessary:
>>>
>>> 1. In RPC request OP-TEE asks for buffer
>>> 2. NW allocates buffer and provides it via RPC response
>>> 3. Xen pins pages and translates data
>>> 4. Xen provides buffer to OP-TEE
>>> 5. OP-TEE uses it
>>> 6. OP-TEE sends request to free the buffer
>>> 7. NW frees the buffer and sends the RPC response
>>> 8. Xen unpins pages and forgets about the buffer
>>>
>>> The problem is that Xen should forget about the buffer between stages
>>> 6 and 7. I.e. the right flow should be like this:
>>>
>>> 6. OP-TEE sends request to free the buffer
>>> 7. Xen unpins pages and forgets about the buffer
>>> 8. NW frees the buffer and sends the RPC response
>>>
>>> This is because OP-TEE internally frees the buffer before sending the
>>> "free SHM buffer" request. So we have no reason to hold reference for
>>> this buffer anymore. Moreover, in multiprocessor systems NW have time
>>> to reuse buffer cookie for another buffer. Xen complained about this
>>> and denied the new buffer registration. I have seen this issue while
>>> running tests on iMX SoC.
>>>
>>> So, this patch basically corrects that behavior by freeing the buffer
>>> earlier, when handling RPC return from OP-TEE.
>>>
>>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>> ---
>>>    xen/arch/arm/tee/optee.c | 24 ++++++++++++++++++++----
>>>    1 file changed, 20 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>> index 6a035355db..af19fc31f8 100644
>>> --- a/xen/arch/arm/tee/optee.c
>>> +++ b/xen/arch/arm/tee/optee.c
>>> @@ -1099,6 +1099,26 @@ static int handle_rpc_return(struct
>>> optee_domain *ctx,
>>>            if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
>>>                call->rpc_buffer_type =
>>> shm_rpc->xen_arg->params[0].u.value.a;
>>>    +        /*
>>> +         * OP-TEE signals that it frees the buffer that it requested
>>> +         * before. This is the right time for us to do the same.
>>> +         */
>>> +        if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
>>> +        {
>>> +            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
>>> +
>>> +            free_optee_shm_buf(ctx, cookie);
>>> +
>>> +            /*
>>> +             * This should never happen. We have a bug either in the
>>> +             * OP-TEE or in the mediator.
>>> +             */
>>> +            if ( call->rpc_data_cookie && call->rpc_data_cookie !=
>>> cookie )
>>> +                gprintk(XENLOG_ERR,
>>> +                        "Saved RPC cookie does not corresponds to
>>> OP-TEE's (%"PRIx64" != %"PRIx64")\n",
>>
>> s/corresponds/correspond/
>>
>>> +                        call->rpc_data_cookie, cookie);
>>
>> IIUC, if you free the wrong SHM buffer then your guest is likely to be
>> running incorrectly afterwards. So shouldn't we crash the guest to
>> avoid further issue?
> 
> No - crashing the guest prohibits testing of the interface, and/or the
> guest realising it screwed up and dumping enough state to usefully debug
> what is going on.

The comment in the code suggests it is a bug in the OP-TEE/mediator:

/*
  * This should never happen. We have a bug either in the
  * OP-TEE or in the mediator.
  */

So I am not sure why this would be the guest's fault here.

> 
> Furthermore, if userspace could trigger this path, we'd have to issue an
> XSA.

Why so? We don't issue XSAs for hypercalls issued through privcmd. While
these are not hypercalls, they are close enough, as they use smc (Secure
Monitor Call) and hvc (Hypervisor Call). Both are only accessible from
kernel mode.

> 
> Crashing the guest is almost never the right thing to do, and definitely
> not appropriate for a bad parameter.

AFAICT, the bad parameter comes not from the guest but from the OP-TEE
firmware (or the mediator) itself. If OP-TEE or the mediator is returning
a buggy value, it may mean the isolation is broken. So I don't think
simply printing a message and continuing is the right thing to do.
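
To make the check under discussion concrete, here is a hedged sketch
(hypothetical helper, not the mediator's actual code) of the consistency
test the patch performs on an OPTEE_RPC_CMD_SHM_FREE return, before
deciding whether to merely log or take stronger action:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hedged sketch: compare the cookie OP-TEE reports in the RPC against the
 * one the mediator saved for the in-flight call.  A saved cookie of 0
 * means no RPC data buffer was being tracked.  Mirrors the quoted
 * condition: call->rpc_data_cookie && call->rpc_data_cookie != cookie. */
static bool shm_free_cookie_consistent(uint64_t saved_cookie,
                                       uint64_t reported_cookie)
{
    return saved_cookie == 0 || saved_cookie == reported_cookie;
}
```

When this returns false, the patch prints a gprintk() message; the stricter
alternative argued for above would treat the mismatch as a fatal
OP-TEE/mediator bug.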

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 11 10:32:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5j6-0005LB-DO; Mon, 11 May 2020 10:32:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY5j5-0005L6-Mk
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:31:59 +0000
X-Inumbo-ID: a3d56d36-9372-11ea-a1fe-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3d56d36-9372-11ea-a1fe-12813bfff9fa;
 Mon, 11 May 2020 10:31:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589193118;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=ZK+6clCz3pAtg+4NKrRjeCXG99veStf7aGxOGbS5Yvo=;
 b=KGDRHPy2wk2BbD1C546nwiqGnyAXBG9DgwGdvHmyZA38/CkMofiqsiAU
 PyDfUt6QDurZccKD9svfGrn9KRIQaJtMglIOCjjdlH/lWEIL0YiQsjEo8
 4LyrNGOtXQaGxu0+3OjpLgOtdrzjI77C/KnsKzqp++uoSR8lT3GCXJJNN A=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 3IjGuaCHeywdi6HXrhDLnTrVtwNziCdCwERsjbo0ezf/ozYLrC7sdAFVwtHoI1EiHbCsREhaKt
 Let24G9jtgRMe1GOGK61Ej/KnG/de66pMWavl9Lm62OiJtIc2XHC06S7mcFGkQjXtKuM0t6sJF
 R7aS4S/9aaLhU31RSrlMItl5tCVyF8cT+A4N+CH5TGNwsx8d5m3Fg6gkSTfCRhsGitYObyRKqf
 CLLgqIsyXO7ZbaDw80EELZgwUK10r++J8LpQF9/pgV+tMP3jOiwhhZQZkYOaDUHRHz6NtVDcaC
 oy8=
X-SBRS: 2.7
X-MesageID: 17219151
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17219151"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] changelog: add relevant changes during 4.14 development window
Date: Mon, 11 May 2020 12:31:45 +0200
Message-ID: <20200511103145.37098-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Community Manager <community.manager@xenproject.org>,
 Paul Durrant <paul@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add entries for the relevant changes I've been working on during the
4.14 development time frame. Mostly performance improvements related
to pvshim scalability issues when running with a high number of vCPUs.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 CHANGELOG.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index b11e9bc4e3..554eeb6a12 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -8,6 +8,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 
 ### Added
  - This file and MAINTAINERS entry.
+ - Use x2APIC mode whenever available, regardless of interrupt remapping
+   support.
+ - Performance improvements to guest assisted TLB flushes, either when using
+   the Xen hypercall interface or the viridian one.
+ - Assorted pvshim performance and scalability improvements plus some bug
+   fixes.
 
 ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Mon May 11 10:38:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:38:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5pP-0005Yw-4T; Mon, 11 May 2020 10:38:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nxab=6Z=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jY5pN-0005Yr-Tr
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:38:29 +0000
X-Inumbo-ID: 8d115168-9373-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d115168-9373-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 10:38:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PU87V9bmqjpv+4YCnw2SHq/EDUNYgSyzYotBo4VBtCI=; b=ZgnOvtUACP1reNdh7UL2dH938
 Ho3HqpcmwcdJqpgcX1ERtm04kwWOMeGp9PjS0zo8zWFovXyJjP7xwrSIpkmdX2pfBKjYaYQ+OdG2N
 V5pza2RiEl4IUp6xAC8jdb0zlmqoMzdoluyMaWtFAfrM45I3VM2uFRxRX9/KS1K2FTbuI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY5pM-0003Ro-CT; Mon, 11 May 2020 10:38:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY5pM-0000na-1Z; Mon, 11 May 2020 10:38:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jY5pM-0008Cy-12; Mon, 11 May 2020 10:38:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150131-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150131: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=23bf93884c6346206e87c0f14d93f905e8c81267
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 11 May 2020 10:38:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150131 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150131/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              23bf93884c6346206e87c0f14d93f905e8c81267
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  115 days
Failing since        146211  2020-01-18 04:18:52 Z  114 days  105 attempts
Testing same since   150083  2020-05-08 04:18:46 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17347 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 11 10:38:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:38:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5pp-0005b9-Di; Mon, 11 May 2020 10:38:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jY5pn-0005az-C0
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:38:55 +0000
X-Inumbo-ID: 9c4ca31c-9373-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c4ca31c-9373-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 10:38:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589193534;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=2HZ8R5wunFDPYcA0cnORPcZAxJX0ODZv6tShorw+oXk=;
 b=AkeADt24UIVEV9xdpRkBWU0/GgKAl7tXubkhT1vS6BMgAIvMoNfEt+YY
 A0hzAANMOOk6+runAYDsu4eB5Xb9dGQqgPBOvOqd8CMu7Bz++S2/DQ5Um
 0GZmHNi8CQZdLCZMPnYafWBet13Sb2t0R13lysARV+r3O13CUBpMsFZtF 0=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: p0kxgfcqjRfrYBGkHTPe8yQpaL+DXNm6mYmEnmk+HKEk0fIx9Xv1OtPMHsaG/VXMa6c+avO4hL
 0rU6REGNNPcXsRvWnjmj+B9Z1Pj5Jc4+A+ul4vHiFRDFnozV947HU5opI7b+b7iBF4icQzHu1z
 w5Q9Afjn2dFeWtKe+q0kW+0mu9YPHh9bTmenNyzrZRscBI0M7wOqIhf3iyipWO2sP2xwLd6Fz8
 4dvXqE9i22Rvo+CgfXM05x2sU25fRqZWOwxDktH2Cbw1rOa3LKP5xndM6PTuG6qnJeKQnhxFTI
 pU8=
X-SBRS: 2.7
X-MesageID: 17573114
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,379,1583211600"; d="scan'208";a="17573114"
Subject: Re: [PATCH v2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
References: <20200511101753.36610-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f3471cee-342e-c169-f3eb-34f559892336@citrix.com>
Date: Mon, 11 May 2020 11:38:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200511101753.36610-1-roger.pau@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11/05/2020 11:17, Roger Pau Monne wrote:
> Apply a workaround for Intel errata CLX30: "A Pending Fixed Interrupt
> May Be Dispatched Before an Interrupt of The Same Priority Completes".
>
> It's not clear which models are affected, as the erratum is listed in
> the "Second Generation Intel Xeon Scalable Processors" specification
> update, but the issue has been seen as far back as Nehalem processors.

Really?  I'm only aware of it being Haswell and later.

CLX30 was just one example I gave you.  It is public in all the
specification updates going back, and appears for example as SKX100, BDX99, etc.

> Apply the workaround to all Intel processors, the condition can be
> relaxed later.

Nothing in the code checks ISR, so we're applying "no power saving"
unilaterally rather than in the very rare corner case that it occurs.

I'm also not aware of it affecting Atom processors.

This will cripple anything running on battery power, and is therefore
not an appropriate fix in this form.

~Andrew
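[Editorial note: Andrew's point is that a targeted fix would consult the
local APIC's In-Service Register before demoting the C-state, instead of
disabling deep C-states unconditionally. A minimal standalone sketch of
that decision logic (illustrative only, not the Xen implementation; the
256-bit ISR is modelled as eight 32-bit words, matching how the APIC
exposes it):]

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * The local APIC In-Service Register is 256 bits wide, read as eight
 * 32-bit words.  A targeted workaround would only avoid C6 when some
 * vector is actually in service, i.e. when any ISR bit is set.
 */
static bool isr_has_in_service(const uint32_t isr[8])
{
    for ( unsigned int i = 0; i < 8; i++ )
        if ( isr[i] )
            return true;

    return false;
}
```

In the hypervisor the eight words would come from reads of the ISR
register block; the function above only shows the check that the patch,
as posted, does not perform.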


From xen-devel-bounces@lists.xenproject.org Mon May 11 10:40:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:40:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY5rS-0006Nw-P1; Mon, 11 May 2020 10:40:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY5rS-0006Nq-9J
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:40:38 +0000
X-Inumbo-ID: d9993bb8-9373-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d9993bb8-9373-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 10:40:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D24C7AC64;
 Mon, 11 May 2020 10:40:38 +0000 (UTC)
Subject: Re: [PATCH] x86/idle: prevent entering C6 with in service interrupts
 on Intel
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200507132236.26010-1-roger.pau@citrix.com>
 <3d147b74-81dd-83b8-7035-67c5ceb72c5f@suse.com>
 <20200508172411.GM1353@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <991f9773-b173-b374-bb22-aff24a930630@suse.com>
Date: Mon, 11 May 2020 12:40:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508172411.GM1353@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 19:24, Roger Pau Monné wrote:
> On Fri, May 08, 2020 at 03:46:08PM +0200, Jan Beulich wrote:
>> On 07.05.2020 15:22, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/cpu/mwait-idle.c
>>> +++ b/xen/arch/x86/cpu/mwait-idle.c
>>> @@ -770,6 +770,9 @@ static void mwait_idle(void)
>>>  		return;
>>>  	}
>>>  
>>> +	if (cx->type == ACPI_STATE_C3 && errata_c6_isr_workaround())
>>> +		cx = power->safe_state;
>>
>> Here it becomes even more relevant I think that == not be
>> used, as the static tables list deeper C-states; it's just
>> that the SKX table, which also gets used for CLX afaict, has
>> no entries beyond C6-SKX. Others with deeper ones presumably
>> have the deeper C-states similarly affected (or not) by this
>> erratum.
>>
>> For reference, mwait_idle_cpu_init() has
>>
>> 		hint = flg2MWAIT(cpuidle_state_table[cstate].flags);
>> 		state = MWAIT_HINT2CSTATE(hint) + 1;
>>                 ...
>> 		cx->type = state;
>>
>> i.e. the value you compare against is derived from the static
>> table entries. For Nehalem/Westmere this means that what goes
>> under ACPI_STATE_C3 is indeed C6-NHM, and this looks to match
>> for all the non-Atoms, but for none of the Atoms. Now Atoms
>> could easily be unaffected, but (just to take an example) if
>> C6-SNB was affected, surely C7-SNB would be, too.
> 
> Yes, I've adjusted this to use cx->type >= 3 instead. I have to admit
> I'm confused by the name of the states being C6-* while the cstate
> value reported by Xen will be 3 (I would instead expect the type to be
> 6 in order to match the name).

Well, the problem is the disagreement in numbering between ACPI
and what the various CPU specs actually use.

Jan
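[Editorial note: the numbering disagreement Jan describes can be made
concrete with a simplified, standalone reduction of the hint-to-type
computation quoted earlier in the thread (macro shape reconstructed from
the mwait-idle sources and simplified here, so treat it as a sketch):]

```c
/*
 * Simplified reconstruction of the computation quoted from
 * mwait_idle_cpu_init(): the MWAIT hint encodes the hardware C-state
 * in its upper nibble, and the ACPI-style type is that nibble plus 1.
 */
#define MWAIT_SUBSTATE_SIZE 4

static unsigned int mwait_hint_to_acpi_type(unsigned int hint)
{
    return ((hint >> MWAIT_SUBSTATE_SIZE) & 0xf) + 1;
}
```

C6-NHM is advertised with hint 0x20, giving type 3 (ACPI_STATE_C3),
which is why Roger sees 3 rather than 6.  Deeper states such as C7-SNB
(hint 0x30) give type 4, which a `>= 3` check catches and an `== 3`
check would miss.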


From xen-devel-bounces@lists.xenproject.org Mon May 11 10:55:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 10:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY65z-0007PM-1R; Mon, 11 May 2020 10:55:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=XH06=6Z=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jY65y-0007PH-9F
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 10:55:38 +0000
X-Inumbo-ID: f27d400a-9375-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f27d400a-9375-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 10:55:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HAToxlyt/mFYVlJ7+5L4GLnIWhxntlLpjFjlfawAXfM=; b=coif0YvS/myvScrZfU0V8kmzMd
 wWq6eHqILV/Pc9KmZEHzCNp3b2xUBusgWBtB4r2OFahUuU8+Ib3QQ82DwFiyMNpaoRSgUpAHnEIuG
 /N3SjgLkBF1g4C1TuomPAQ49hV16CmEJqAIyWKX9jGz/Qz1EGqwl0tQN5nQbnEVZ//zE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jY65v-0003n3-Ff; Mon, 11 May 2020 10:55:35 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jY65v-0002pC-4V; Mon, 11 May 2020 10:55:35 +0000
Message-ID: <ade804fe6e456f68db4cb01e005e02ce7c2976b7.camel@xen.org>
Subject: Re: [PATCH v6 12/15] x86/smpboot: switch pl*e to use new APIs in
 clone_mapping
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Date: Mon, 11 May 2020 11:55:33 +0100
In-Reply-To: <88709097-661e-ce7b-1a46-1dcecf029428@suse.com>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <a1c29e58a5d40748413e8088ad88ba4319a328d4.1587735799.git.hongyxia@amazon.com>
 <88709097-661e-ce7b-1a46-1dcecf029428@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 2020-04-30 at 17:15 +0200, Jan Beulich wrote:
> On 24.04.2020 16:09, Hongyan Xia wrote:
> > From: Wei Liu <wei.liu2@citrix.com>
> 
> Nit: Why the emphasis on pl*e in the title? Is there anything left
> unconverted in the function? IOW how about "switch clone_mapping()
> to new page table APIs"?

The title seems stale. Will fix.

> ...
> > @@ -724,48 +724,61 @@ static int clone_mapping(const void *ptr,
> > root_pgentry_t *rpt)
> >          }
> >      }
> >  
> > +    UNMAP_DOMAIN_PAGE(pl1e);
> > +    UNMAP_DOMAIN_PAGE(pl2e);
> > +    UNMAP_DOMAIN_PAGE(pl3e);
> > +
> >      if ( !(root_get_flags(rpt[root_table_offset(linear)]) &
> > _PAGE_PRESENT) )
> >      {
> > -        pl3e = alloc_xen_pagetable();
> > -        if ( !pl3e )
> > +        mfn_t l3mfn = alloc_xen_pagetable_new();
> > +
> > +        if ( mfn_eq(l3mfn, INVALID_MFN) )
> >              goto out;
> > +
> > +        pl3e = map_domain_page(l3mfn);
> 
> Seeing this recur (from other patches) I wonder whether we wouldn't
> better make map_domain_page() accept INVALID_MFN and return NULL in
> this case. In cases like the one here it would eliminate the need
> for several local variables. Of course the downside of this is that
> then we'll have to start checking map_domain_page()'s return value.
> A middle ground could be to have
> 
> void *alloc_mapped_pagetable(mfn_t *mfn);
> 
> allowing to pass in NULL if the MFN is of no interest.

I would say that when the caller requires a new Xen page table
allocation, almost all the time both the mfn and the virt are needed
(off the top of my head I cannot think of a case where we would pass
in NULL; you almost always need the mfn to write new page table
entries), so I think the benefit of this is just compressing two calls
into one, which I am not quite sure is worth it.
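[Editorial note: for concreteness, the helper Jan sketches could look
roughly like this. This is a standalone sketch with the Xen allocator
and mapping calls stubbed out via malloc; the names and types are
stand-ins, not the hypervisor's actual implementation:]

```c
#include <stdlib.h>

/* Stand-in type and stubs so the shape compiles outside Xen. */
typedef struct { unsigned long m; } mfn_t;
#define INVALID_MFN_VAL (~0UL)

static mfn_t stub_alloc_xen_pagetable(void)
{
    return (mfn_t){ (unsigned long)malloc(4096) };
}

static void *stub_map_domain_page(mfn_t m)
{
    return (void *)m.m;
}

/*
 * Jan's proposed middle ground: allocate and map in one call,
 * optionally reporting the MFN.  Returns NULL on allocation failure.
 */
static void *alloc_mapped_pagetable(mfn_t *mfn)
{
    mfn_t m = stub_alloc_xen_pagetable();

    if ( !m.m || m.m == INVALID_MFN_VAL )
        return NULL;

    if ( mfn )              /* caller may pass NULL if uninterested */
        *mfn = m;

    return stub_map_domain_page(m);
}
```

As Hongyan notes, callers almost always want the MFN too, so in
practice the `mfn` argument would rarely be NULL; the main effect is
collapsing the alloc/map pair at each call site.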

> > @@ -781,6 +794,9 @@ static int clone_mapping(const void *ptr,
> > root_pgentry_t *rpt)
> >  
> >      rc = 0;
> >   out:
> > +    UNMAP_DOMAIN_PAGE(pl1e);
> > +    UNMAP_DOMAIN_PAGE(pl2e);
> > +    UNMAP_DOMAIN_PAGE(pl3e);
> >      return rc;
> >  }
> 
> I don't think the writing of NULL into the variables is necessary
> here. And if the needed if()-s are of concern, then perhaps we
> should consider making unmap_domain_page() finally accept NULL as
> input?

I usually don't have a problem with this because a sane compiler would
definitely remove the unnecessary clearing, so I would use the macro
version as much as possible.  I am okay with moving the NULL check
into unmap() itself, but note that this also needs changes on the Arm
side.

Hongyan
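[Editorial note: the clearing behaviour under discussion can be shown
with a reduced model of the macro. The shape is inferred from the
thread, so this is a sketch rather than the tree's actual definition:
the upper-case form unmaps and then NULLs the caller's variable, so a
second unmap at an `out:` label becomes a no-op once
`unmap_domain_page()` tolerates NULL.]

```c
#include <stddef.h>

static unsigned int unmap_calls;   /* counts real unmaps, for illustration */

/* The NULL-tolerant variant Jan suggests making standard. */
static void unmap_domain_page(const void *va)
{
    if ( !va )
        return;

    unmap_calls++;
}

/*
 * Upper-case macro form: unmap, then clear the caller's pointer so a
 * later cleanup path cannot double-unmap.
 */
#define UNMAP_DOMAIN_PAGE(p) do {   \
    unmap_domain_page(p);           \
    (p) = NULL;                     \
} while ( 0 )
```

With this shape, the unconditional `UNMAP_DOMAIN_PAGE()` calls at the
`out:` label in clone_mapping() are safe even when an earlier path
already unmapped and cleared the variables.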



From xen-devel-bounces@lists.xenproject.org Mon May 11 11:07:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 11:07:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY6Hp-0008Nb-0J; Mon, 11 May 2020 11:07:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY6Hn-0008NW-KH
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 11:07:51 +0000
X-Inumbo-ID: a738d3fa-9377-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a738d3fa-9377-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 11:07:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589195271;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
Date: Mon, 11 May 2020 13:07:43 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
Message-ID: <20200511110743.GB35422@Air-de-Roger>
References: <20200511101753.36610-1-roger.pau@citrix.com>
 <f3471cee-342e-c169-f3eb-34f559892336@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <f3471cee-342e-c169-f3eb-34f559892336@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 11:38:49AM +0100, Andrew Cooper wrote:
> On 11/05/2020 11:17, Roger Pau Monne wrote:
> > Apply a workaround for Intel errata CLX30: "A Pending Fixed Interrupt
> > May Be Dispatched Before an Interrupt of The Same Priority Completes".
> >
> > It's not clear which models are affected, as the errata is listed in
> > the "Second Generation Intel Xeon Scalable Processors" specification
> > update, but the issue has been seen as far back as Nehalem processors.
> 
> Really?  I'm only aware of it being Haswell and later.
> 
> CLX30 was just one single example I gave you.  It is public in all the
> specification updates going backwards, and is for example SKX100, BDX99 etc.

Right, will update accordingly then.

> > Apply the workaround to all Intel processors, the condition can be
> > relaxed later.
> 
> Nothing in the code checks ISR, so we're applying "no power saving"
> unilaterally rather than in the very rare corner case that it occurs.

We don't check ISR directly, but rather the stack of interrupts
pending an EOI, which should match the vectors set in ISR, as vectors
that can be masked are not held pending there. I can check ISR
directly if that's any better, but AFAICT using
cpu_has_pending_apic_eoi is equally effective and faster.

> I'm also not aware of it affecting Atom processors.
> 
> This will cripple anything running on battery power, and is therefore
> not an appropriate fix in this form.

TBH, I've tried it in its current form and it doesn't trigger that

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 11 11:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 11:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY6bx-0001fw-8p; Mon, 11 May 2020 11:28:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY6bv-0001fi-IU
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 11:28:39 +0000
X-Inumbo-ID: 8c28f088-937a-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c28f088-937a-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 11:28:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 63D35AECD;
 Mon, 11 May 2020 11:28:35 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 1/3] xen/sched: allow rcu work to happen when syncing cpus
 in core scheduling
Date: Mon, 11 May 2020 13:28:27 +0200
Message-Id: <20200511112829.5500-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200511112829.5500-1-jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Sergey Dyasli <sergey.dyasli@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

With RCU barriers moved from tasklets to normal RCU processing, cpu
offlining in core scheduling might deadlock due to the cpu
synchronization required concurrently by RCU processing and core
scheduling.

Fix that by bailing out of the core scheduling synchronization in case
of pending RCU work. Additionally, the RCU softirq is now required to
be of higher priority than the scheduling softirqs in order to do the
RCU processing before entering the scheduler again, as bailing out of
the core scheduling synchronization requires raising another softirq,
SCHED_SLAVE, which would otherwise bypass RCU processing again.

Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Dario Faggioli <dfaggioli@suse.com>
---
V2:
- add BUILD_BUG_ON() and comment (Dario Faggioli)
---
 xen/common/sched/core.c   | 13 ++++++++++---
 xen/include/xen/softirq.h |  2 +-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d94b95285f..5df66cbf9b 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2457,13 +2457,20 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             v = unit2vcpu_cpu(prev, cpu);
         }
         /*
-         * Coming from idle might need to do tasklet work.
+         * Check for any work to be done which might need cpu synchronization.
+         * This is either pending RCU work, or tasklet work when coming from
+         * idle. It is mandatory that RCU softirqs are of higher priority
+         * than scheduling ones as otherwise a deadlock might occur.
          * In order to avoid deadlocks we can't do that here, but have to
-         * continue the idle loop.
+         * schedule the previous vcpu again, which will lead to the desired
+         * processing to be done.
          * Undo the rendezvous_in_cnt decrement and schedule another call of
          * sched_slave().
          */
-        if ( is_idle_unit(prev) && sched_tasklet_check_cpu(cpu) )
+        BUILD_BUG_ON(RCU_SOFTIRQ > SCHED_SLAVE_SOFTIRQ ||
+                     RCU_SOFTIRQ > SCHEDULE_SOFTIRQ);
+        if ( rcu_pending(cpu) ||
+             (is_idle_unit(prev) && sched_tasklet_check_cpu(cpu)) )
         {
             struct vcpu *vprev = current;
 
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index b4724f5c8b..1f6c4783da 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -4,10 +4,10 @@
 /* Low-latency softirqs come first in the following list. */
 enum {
     TIMER_SOFTIRQ = 0,
+    RCU_SOFTIRQ,
     SCHED_SLAVE_SOFTIRQ,
     SCHEDULE_SOFTIRQ,
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
-    RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 11:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 11:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY6br-0001fS-Oq; Mon, 11 May 2020 11:28:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY6bq-0001fI-II
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 11:28:34 +0000
X-Inumbo-ID: 8c16cd4a-937a-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c16cd4a-937a-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 11:28:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2AF4EAD73;
 Mon, 11 May 2020 11:28:35 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 0/3] xen: Fix some bugs in scheduling
Date: Mon, 11 May 2020 13:28:26 +0200
Message-Id: <20200511112829.5500-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Some problems I found while trying to track down an issue with
cpu on-/offlining in core scheduling mode.

Juergen Gross (3):
  xen/sched: allow rcu work to happen when syncing cpus in core
    scheduling
  xen/sched: don't call sync_vcpu_execstate() in
    sched_unit_migrate_finish()
  xen/sched: fix latent races accessing vcpu->dirty_cpu

 xen/arch/x86/domain.c     | 16 +++++++++++-----
 xen/common/domain.c       |  2 +-
 xen/common/keyhandler.c   |  2 +-
 xen/common/sched/core.c   | 18 ++++++++++--------
 xen/include/xen/sched.h   |  2 +-
 xen/include/xen/softirq.h |  2 +-
 6 files changed, 25 insertions(+), 17 deletions(-)

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 11:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 11:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY6c2-0001gW-Hp; Mon, 11 May 2020 11:28:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY6c0-0001gC-JS
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 11:28:44 +0000
X-Inumbo-ID: 8c77ccd0-937a-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c77ccd0-937a-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 11:28:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BD775AE91;
 Mon, 11 May 2020 11:28:35 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 3/3] xen/sched: fix latent races accessing vcpu->dirty_cpu
Date: Mon, 11 May 2020 13:28:29 +0200
Message-Id: <20200511112829.5500-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200511112829.5500-1-jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The dirty_cpu field of struct vcpu denotes which cpu still holds data
of a vcpu. All accesses to this field should be atomic in case the
vcpu could just be running, as it is accessed without any lock held
in most cases. In particular, sync_local_execstate() and
context_switch() running concurrently for the same vcpu risk failing.

There are some instances where accesses are not done atomically, and,
even worse, where multiple accesses are done when a single one would
be mandated.

Correct that in order to avoid potential problems.

Add some assertions to verify dirty_cpu is handled properly.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- convert all accesses to v->dirty_cpu to atomic ones (Jan Beulich)
- drop cast (Julien Grall)
---
 xen/arch/x86/domain.c   | 16 +++++++++++-----
 xen/common/domain.c     |  2 +-
 xen/common/keyhandler.c |  2 +-
 xen/include/xen/sched.h |  2 +-
 4 files changed, 14 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4428190d5..2e5717b983 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -183,7 +183,7 @@ void startup_cpu_idle_loop(void)
 
     ASSERT(is_idle_vcpu(v));
     cpumask_set_cpu(v->processor, v->domain->dirty_cpumask);
-    v->dirty_cpu = v->processor;
+    write_atomic(&v->dirty_cpu, v->processor);
 
     reset_stack_and_jump(idle_loop);
 }
@@ -1769,6 +1769,7 @@ static void __context_switch(void)
 
     if ( !is_idle_domain(pd) )
     {
+        ASSERT(read_atomic(&p->dirty_cpu) == cpu);
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
         pd->arch.ctxt_switch->from(p);
@@ -1832,7 +1833,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 {
     unsigned int cpu = smp_processor_id();
     const struct domain *prevd = prev->domain, *nextd = next->domain;
-    unsigned int dirty_cpu = next->dirty_cpu;
+    unsigned int dirty_cpu = read_atomic(&next->dirty_cpu);
 
     ASSERT(prev != next);
     ASSERT(local_irq_is_enabled());
@@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
         flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
+        ASSERT(!vcpu_cpu_dirty(next));
     }
 
     _update_runstate_area(prev);
@@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
 
 void sync_vcpu_execstate(struct vcpu *v)
 {
-    if ( v->dirty_cpu == smp_processor_id() )
+    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
+
+    if ( dirty_cpu == smp_processor_id() )
         sync_local_execstate();
-    else if ( vcpu_cpu_dirty(v) )
+    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
-        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
+        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
     }
+    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
+           read_atomic(&v->dirty_cpu) != dirty_cpu);
 }
 
 static int relinquish_memory(
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..70ff05eefc 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -158,7 +158,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
 
     v->domain = d;
     v->vcpu_id = vcpu_id;
-    v->dirty_cpu = VCPU_CPU_CLEAN;
+    write_atomic(&v->dirty_cpu, VCPU_CPU_CLEAN);
 
     spin_lock_init(&v->virq_lock);
 
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 87bd145374..68364e987d 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -316,7 +316,7 @@ static void dump_domains(unsigned char key)
                        vcpu_info(v, evtchn_upcall_pending),
                        !vcpu_event_delivery_is_enabled(v));
                 if ( vcpu_cpu_dirty(v) )
-                    printk("dirty_cpu=%u", v->dirty_cpu);
+                    printk("dirty_cpu=%u", read_atomic(&v->dirty_cpu));
                 printk("\n");
                 printk("    pause_count=%d pause_flags=%lx\n",
                        atomic_read(&v->pause_count), v->pause_flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 6101761d25..ac53519d7f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
 
 static inline bool vcpu_cpu_dirty(const struct vcpu *v)
 {
-    return is_vcpu_dirty_cpu(v->dirty_cpu);
+    return is_vcpu_dirty_cpu(read_atomic(&v->dirty_cpu));
 }
 
 void vcpu_block(void);
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 11:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 11:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY6bs-0001fY-0i; Mon, 11 May 2020 11:28:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tfUY=6Z=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jY6br-0001fN-61
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 11:28:35 +0000
X-Inumbo-ID: 8c1673a4-937a-11ea-a200-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c1673a4-937a-11ea-a200-12813bfff9fa;
 Mon, 11 May 2020 11:28:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7D308AEC1;
 Mon, 11 May 2020 11:28:35 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
Date: Mon, 11 May 2020 13:28:28 +0200
Message-Id: <20200511112829.5500-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200511112829.5500-1-jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

With the introduction of core scheduling support,
sched_unit_migrate_finish() gained a call to sync_vcpu_execstate(), as
it was believed to always be called as a result of vcpu migration.

In case a vcpu is migrated away from a physical cpu only for a short
period of time this might not be true, so drop the call and let the
lazy state syncing do its job.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- new patch
---
 xen/common/sched/core.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 5df66cbf9b..cb49a8bc02 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1078,12 +1078,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
     sched_spin_unlock_double(old_lock, new_lock, flags);
 
     if ( old_cpu != new_cpu )
-    {
-        /* Vcpus are moved to other pcpus, commit their states to memory. */
-        for_each_sched_unit_vcpu ( unit, v )
-            sync_vcpu_execstate(v);
         sched_move_irqs(unit);
-    }
 
     /* Wake on new CPU. */
     for_each_sched_unit_vcpu ( unit, v )
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 11 11:33:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 11:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY6gu-0002jq-66; Mon, 11 May 2020 11:33:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tL/r=6Z=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jY6gt-0002jl-Km
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 11:33:47 +0000
X-Inumbo-ID: 46a59c4a-937b-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46a59c4a-937b-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 11:33:46 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id l11so4590295wru.0
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 04:33:46 -0700 (PDT)
X-Gm-Message-State: AGi0PubDjmdRzJlNqkhU1rpkfaXEkk9iFkRVJEBxq91KCgO7sbxvd/Kv
 IjMEBSKqtYaZkEg0tmofFgE=
X-Google-Smtp-Source: APiQypItTJc0a/Dyy2ztp469G12LZGZVCDJLuGadxEshDSX9SeDBDctzYseSeFOoHhSQc+496GxWtw==
X-Received: by 2002:adf:f1c3:: with SMTP id z3mr19731530wro.201.1589196826026; 
 Mon, 11 May 2020 04:33:46 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id g184sm11752793wmg.1.2020.05.11.04.33.44
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 11 May 2020 04:33:45 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Roger Pau Monne'" <roger.pau@citrix.com>,
 <xen-devel@lists.xenproject.org>
References: <20200511103145.37098-1-roger.pau@citrix.com>
In-Reply-To: <20200511103145.37098-1-roger.pau@citrix.com>
Subject: RE: [PATCH] changelog: add relevant changes during 4.14 development
 window
Date: Mon, 11 May 2020 12:33:44 +0100
Message-ID: <000d01d62788$07d16fd0$17744f70$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQCqoBtxXRpJrkR7COjPNOw6eVixCar5z8Ew
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Community Manager' <community.manager@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Roger Pau Monne <roger.pau@citrix.com>
> Sent: 11 May 2020 11:32
> To: xen-devel@lists.xenproject.org
> Cc: Roger Pau Monne <roger.pau@citrix.com>; Paul Durrant <paul@xen.org>;
> Community Manager <community.manager@xenproject.org>
> Subject: [PATCH] changelog: add relevant changes during 4.14 development window
> 
> Add entries for the relevant changes I've been working on during the
> 4.14 development time frame. Mostly performance improvements related
> to pvshim scalability issues when running with high number of vCPUs.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Paul Durrant <paul@xen.org>

> ---
>  CHANGELOG.md | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index b11e9bc4e3..554eeb6a12 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -8,6 +8,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
> 
>  ### Added
>   - This file and MAINTAINERS entry.
> + - Use x2APIC mode whenever available, regardless of interrupt remapping
> +   support.
> + - Performance improvements to guest assisted TLB flushes, either when using
> +   the Xen hypercall interface or the viridian one.
> + - Assorted pvshim performance and scalability improvements plus some bug
> +   fixes.
> 
>  ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
> 
> --
> 2.26.2




From xen-devel-bounces@lists.xenproject.org Mon May 11 11:47:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 11:47:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY6tg-0003gV-IF; Mon, 11 May 2020 11:47:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nxab=6Z=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jY6tf-0003gQ-1E
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 11:46:59 +0000
X-Inumbo-ID: 1939ed7c-937d-11ea-a204-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1939ed7c-937d-11ea-a204-12813bfff9fa;
 Mon, 11 May 2020 11:46:49 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY6tT-0004td-Sx; Mon, 11 May 2020 11:46:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY6tT-0003VC-GU; Mon, 11 May 2020 11:46:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jY6tT-0000z4-Ex; Mon, 11 May 2020 11:46:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150130-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150130: regressions - FAIL
X-Osstest-Failures: linux-5.4:build-arm64:xen-build:fail:regression
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 linux-5.4:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
 linux-5.4:build-arm64-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=f015b86259a520ad886523d9ec6fdb0ed80edc38
X-Osstest-Versions-That: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 11 May 2020 11:46:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150130 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150130/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 150104

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-start.2            fail REGR. vs. 150104

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150104
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                f015b86259a520ad886523d9ec6fdb0ed80edc38
baseline version:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5

Last test of basis   150104  2020-05-09 09:48:15 Z    2 days
Testing same since   150122  2020-05-10 08:40:44 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Ma <aaron.ma@canonical.com>
  AceLan Kao <acelan.kao@canonical.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@linaro.org>
  Alexei Starovoitov <ast@kernel.org>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andrzej Hajda <a.hajda@samsung.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Andy Yan <andy.yan@rock-chips.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brendan Higgins <brendanhiggins@google.com>
  Brian Cain <bcain@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Christoph Hellwig <hch@lst.de>
  David S. Miller <davem@davemloft.net>
  Doug Berger <opendmb@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Jere Leppänen <jere.leppanen@nokia.com>
  Jeremie Francois (on alpha) <jeremie.francois@gmail.com>
  Jia He <justin.he@arm.com>
  Jiri Slaby <jslaby@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  Julien Beraud <julien.beraud@orolia.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lee Jones <lee.jones@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Chamberlain <mcgrof@kernel.org>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Matt Roper <matthew.d.roper@intel.com>
  Matthias Blankertz <matthias.blankertz@cetitec.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Pavel Machek <pavel@denx.de>
  Qian Cai <cai@lca.pw>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roman Gilg <subdiff@gmail.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Sandeep Raghuraman <sandy.8925@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Thomas Pedersen <thomas@adapt-ip.com>
  Tuowen Zhao <ztuowen@gmail.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Zhan Liu <zhan.liu@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1335 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 11 12:58:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 12:58:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY80n-0001DD-E2; Mon, 11 May 2020 12:58:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nxab=6Z=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jY80m-0001D8-4e
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 12:58:24 +0000
X-Inumbo-ID: 183925a1-9387-11ea-a20d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 183925a1-9387-11ea-a20d-12813bfff9fa;
 Mon, 11 May 2020 12:58:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/6ITCS+ztydVoT9i75ZX6QkKIGYVIFlVsofbvcH+LYs=; b=LinEscMCqYRNcWM3e3t0QmKeR
 gWVtPJy5Piq/okWVRoJ0M8ZHEpdqJkVi2OIePUJuKc5qkVlJ/fkDgAQfebEig+HBzOwC5ZjjAXshx
 sPwsgrfGJZ/Jc3JNei8cBGlnPu2wkIp2gKft4OlPjnHUD+ohAEbc/ogPg/24jEIzHcYXI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY80k-0006KG-Cc; Mon, 11 May 2020 12:58:22 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jY80k-0006e2-4Y; Mon, 11 May 2020 12:58:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jY80k-0003Nl-3n; Mon, 11 May 2020 12:58:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150134-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150134: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
X-Osstest-Versions-That: xen=190c60f12db469472476041ecd0e6c9a0d4b0f8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 11 May 2020 12:58:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150134 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150134/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
baseline version:
 xen                  190c60f12db469472476041ecd0e6c9a0d4b0f8a

Last test of basis   150125  2020-05-10 13:00:51 Z    0 days
Testing same since   150134  2020-05-11 10:01:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   190c60f12d..a82582b1af  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 11 13:10:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 13:10:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY8CC-0002rs-GL; Mon, 11 May 2020 13:10:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY8CB-0002rf-KC
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 13:10:11 +0000
X-Inumbo-ID: bdb9b43a-9388-11ea-a20f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bdb9b43a-9388-11ea-a20f-12813bfff9fa;
 Mon, 11 May 2020 13:10:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C8B29AFD4;
 Mon, 11 May 2020 13:10:11 +0000 (UTC)
Subject: Re: [PATCH] changelog: add relevant changes during 4.14 development
 window
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200511103145.37098-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9f783539-6a36-08eb-c141-bd0f76e5acfd@suse.com>
Date: Mon, 11 May 2020 15:10:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200511103145.37098-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Community Manager <community.manager@xenproject.org>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 12:31, Roger Pau Monne wrote:
> Add entries for the relevant changes I've been working on during the
> 4.14 development time frame. Mostly performance improvements related
> to pvshim scalability issues when running with high number of vCPUs.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  CHANGELOG.md | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index b11e9bc4e3..554eeb6a12 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -8,6 +8,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>  
>  ### Added
>   - This file and MAINTAINERS entry.
> + - Use x2APIC mode whenever available, regardless of interrupt remapping
> +   support.
> + - Performance improvements to guest assisted TLB flushes, either when using
> +   the Xen hypercall interface or the viridian one.
> + - Assorted pvshim performance and scalability improvements plus some bug
> +   fixes.

Wouldn't most/all of these better go under a "### Changed" heading?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 13:30:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 13:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY8Vm-0004df-7U; Mon, 11 May 2020 13:30:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HsYJ=6Z=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jY8Vk-0004da-LV
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 13:30:24 +0000
X-Inumbo-ID: 9179fee0-938b-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9179fee0-938b-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 13:30:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=e35MejOoQGnah4MSylnPgLhQkJiOrK7aN4Ngfk5skUE=; b=N+BV6k2i3vStDV1zdgOFodUHDm
 dmaqoRfmc98BCIQAkt9JAVuhDsIlOUnQ8rR+ZMDLHci4x1A+b98wHufjSfIkYeUgKwCCLtaZdCRRn
 E5Vz9rVRorNASYv9DoZKW9723rp81LOGQ5zQJCq44rwhHvOVLyed3al2chP7a58HzdNg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jY8Vi-000700-FX; Mon, 11 May 2020 13:30:22 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jY8Vi-0004MT-8A; Mon, 11 May 2020 13:30:22 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Ian Jackson <ian.jackson@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
Date: Mon, 11 May 2020 14:30:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24244.16076.627203.282982@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Ian,

Thank you for the clarification.

On 07/05/2020 18:01, Ian Jackson wrote:
> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>> On 04/05/2020 13:34, Ian Jackson wrote:
>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>> mention further up. The two need to be carefully weighed against
>>>>> one another.
>>>
>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>> I think experience has shown that this was a mistake.
>>>
>>>> I don’t think we need to make it difficult for people to declare
>>>> themselves experts, particularly as “all” it means at the moment is,
>>>> “Can build something which is not security supported”.  People who
>>>> are building their own hypervisors are already pretty well advanced;
>>>> I think we can let them shoot themselves in the foot if they want
>>>> to.
>>>
>>> Precisely.
>>
>> Can I consider this as an Acked-by? :)
> 
> I am happy with the principle of the change.  I haven't reviewed the
> details of the commit message etc.
> 
> I reviewed the thread and there were two concerns raised:
> 
>   * The question of principle.  I disagree with this concern
>     because I approve of the principle of the patch.
> 
>   * Some detail about the precise justification as written in
>     the commit message, regarding `clean' targets.  Apparently the
>     assertion may not be completely true.  I haven't seen a proposed
>     alternative wording.

I have checked the latest staging: the `clean` target doesn't trash 
.config anymore.

> 
> I don't feel I should ack a controversial patch with an unresolved
> wording issue.  Can you tell me what your proposed wording is ?
> To avoid blocking this change I would be happy to review your wording
> and see if it meets my reading of the stated objection.

Here is a suggested rewording:

"EXPERT mode is currently used to gate any options that are in technical
preview or not security supported. At the moment, the only way to select
it is to use XEN_CONFIG_EXPERT=y on the make command line.

However, if the user forgets to add the option when (re)building or when 
using menuconfig, then .config will get rewritten. This may lead to a 
rather frustrating experience, as it is difficult to diagnose the
issue.

A lot of the options behind EXPERT would benefit from being more 
accessible, so users can experiment with them and voice any concerns 
before they are fully supported.

So rather than making it really difficult to experiment with or tweak 
your Xen (for instance by adding a built-in command line), this option 
can now be selected from the menuconfig.

This doesn't change the fact that a Xen with EXPERT mode selected will 
not be security supported.
"

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 11 13:40:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 13:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY8fm-0005X0-6d; Mon, 11 May 2020 13:40:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY8fl-0005Wv-1A
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 13:40:45 +0000
X-Inumbo-ID: 02b32888-938d-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02b32888-938d-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 13:40:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 34D70AD35;
 Mon, 11 May 2020 13:40:45 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
Date: Mon, 11 May 2020 15:40:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 15:30, Julien Grall wrote:
> Hi Ian,
> 
> Thank you for the clarification.
> 
> On 07/05/2020 18:01, Ian Jackson wrote:
>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>> mention further up. The two need to be carefully weighed against
>>>>>> one another.
>>>>
>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>> I think experience has shown that this was a mistake.
>>>>
>>>>> I don’t think we need to make it difficult for people to declare
>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>> “Can build something which is not security supported”.  People who
>>>>> are building their own hypervisors are already pretty well advanced;
>>>>> I think we can let them shoot themselves in the foot if they want
>>>>> to.
>>>>
>>>> Precisely.
>>>
>>> Can I consider this as an Acked-by? :)
>>
>> I am happy with the principle of the change.  I haven't reviewed the
>> details of the commit message etc.
>>
>> I reviewed the thread and there were two concerns raised:
>>
>>   * The question of principle.  I disagree with this concern
>>     because I approve of the principle of the patch.
>>
>>   * Some detail about the precise justification as written in
>>     the commit message, regarding `clean' targets.  Apparently the
>>     assertion may not be completely true.  I haven't seen a proposed
>>     alternative wording.
> 
> I have checked the latest staging: the `clean` target doesn't trash 
> .config anymore.
> 
>>
>> I don't feel I should ack a controversial patch with an unresolved
>> wording issue.  Can you tell me what your proposed wording is ?
>> To avoid blocking this change I would be happy to review your wording
>> and see if it meets my reading of the stated objection.
> 
> Here is a suggested rewording:
> 
> "EXPERT mode is currently used to gate any options that are in technical
> preview or not security supported. At the moment, the only way to select
> it is to use XEN_CONFIG_EXPERT=y on the make command line.
> 
> However, if the user forgets to add the option when (re)building or when 
> using menuconfig, then .config will get rewritten. This may lead to a 
> rather frustrating experience, as it is difficult to diagnose the
> issue.

To me this looks very similar to e.g. not suitably overriding the
default toolchain binaries, if one has a need to build with newer
ones than what a distro provides. According to some of my routinely
built configs, both can be done by putting suitable entries into
./.config (not xen/.config), removing the need to remember to add
either to the make command line.
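
For instance (hypothetical contents, of course; adjust variables and
paths to taste):

```make
# ./.config at the root of the xen.git tree (note: not xen/.config).
# Picked up by the top-level build, so these no longer need to be
# typed on every make invocation.
XEN_CONFIG_EXPERT := y
CC := gcc-10
```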

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 13:40:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 13:40:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY8fr-0005XI-Ee; Mon, 11 May 2020 13:40:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nuJv=6Z=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jY8fq-0005XC-BX
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 13:40:50 +0000
X-Inumbo-ID: 05b65c62-938d-11ea-a213-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05b65c62-938d-11ea-a213-12813bfff9fa;
 Mon, 11 May 2020 13:40:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589204448;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=9oTiTTjCyUNvNnSlPh1UEle4mQ5Pfvepo/yfLiBn9CM=;
 b=aB+MLfFQxJKPCg+UmlTt17B6bsFYDoD0cBi9LX/eHd2N2HDaOXNjayBw
 bv+1ck0T9r4FbMU19VrxLR2eliWWz1au+aINOWZxjMK0s7NQ8X4mjGgxN
 ZVBcXa/JtcB+sGXPXZ/ZJ3aCTXoGi/6ByXUx/VXBq3t8HNibFkKgb0uTD M=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 anthony.perard@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 anthony.perard@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=anthony.perard@citrix.com;
 spf=Pass smtp.mailfrom=anthony.perard@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: v4HspzHLgAwb5U+o8xK7e97+x5w+49kXorekB1vj78YFEH9Wd9qlRxlXJJL+0asnvYjx2qPUM5
 pibhlWrrGhEHVITsOSnh2VOm6PteQbcD606OPKE7rAViWuCoi87vxhQ1J6hDaBqz9QlTchTTu5
 pTsdOzlboZwc/3C4gj2zhYZJcb1lkF/HYMEpl+Qv+dgsA7kX0tkA8g9r/jVdwvqeO+lR105Vr+
 vE5jtb+NTwUSKQIvR1pRlcmo+oUwiYvbQEF4J0vQF4BCbkbNOijMsGsCj3ASwBurOY8Rzefa95
 +1E=
X-SBRS: 2.7
X-MesageID: 17496335
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17496335"
Date: Mon, 11 May 2020 14:40:43 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH] xen: fix build without pci passthrough
Message-ID: <20200511134043.GH2116@perard.uk.xensource.com>
References: <20200504101443.3165-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200504101443.3165-1-roger.pau@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 04, 2020 at 12:14:43PM +0200, Roger Pau Monne wrote:
> has_igd_gfx_passthru is only available when QEMU is built with
> CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
> code without checking if it's available.
> 
> Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> ---
>  hw/xen/xen-common.c | 4 ++++
>  hw/xen/xen_pt.h     | 7 +++++++
>  2 files changed, 11 insertions(+)
> 
> diff --git a/hw/xen/xen-common.c b/hw/xen/xen-common.c
> index a15070f7f6..c800862419 100644
> --- a/hw/xen/xen-common.c
> +++ b/hw/xen/xen-common.c
> @@ -127,6 +127,7 @@ static void xen_change_state_handler(void *opaque, int running,
>      }
>  }
>  
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>  static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
>  {
>      return has_igd_gfx_passthru;
> @@ -136,6 +137,7 @@ static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
>  {
>      has_igd_gfx_passthru = value;
>  }
> +#endif
>  
>  static void xen_setup_post(MachineState *ms, AccelState *accel)
>  {
> @@ -197,11 +199,13 @@ static void xen_accel_class_init(ObjectClass *oc, void *data)
>  
>      compat_props_add(ac->compat_props, compat, G_N_ELEMENTS(compat));
>  
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>      object_class_property_add_bool(oc, "igd-passthru",
>          xen_get_igd_gfx_passthru, xen_set_igd_gfx_passthru,
>          &error_abort);
>      object_class_property_set_description(oc, "igd-passthru",
>          "Set on/off to enable/disable igd passthrou", &error_abort);
> +#endif

It might not be a good idea to have the presence of a class property
depend on what is built in. Instead, I think it would be better to
return an error when QEMU is built without xen_pt and a user is trying
to enable it. That can be done by calling error_setg(errp,
"nop") in xen_set_igd_gfx_passthru().
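
Roughly what I have in mind (QEMU's Object/Error plumbing is stubbed
out here, so this is only a self-contained model of the behaviour, not
the real code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/* Stand-in for QEMU's Error object and error_setg(); only enough to
 * model the behaviour being suggested. */
typedef struct Error { const char *msg; } Error;

static void error_setg(Error **errp, const char *msg)
{
    *errp = malloc(sizeof(**errp));
    (*errp)->msg = msg;
}

/* Deliberately not defined, simulating a QEMU built without xen_pt:
 * #define CONFIG_XEN_PCI_PASSTHROUGH
 */

#ifdef CONFIG_XEN_PCI_PASSTHROUGH
static bool has_igd_gfx_passthru;
#endif

/* The property stays registered unconditionally; enabling it simply
 * fails when the passthrough code was configured out. */
static void xen_set_igd_gfx_passthru(bool value, Error **errp)
{
#ifdef CONFIG_XEN_PCI_PASSTHROUGH
    has_igd_gfx_passthru = value;
#else
    if (value)
        error_setg(errp, "IGD passthrough not compiled into this QEMU");
#endif
}
```

This way only an attempt to *enable* the property is rejected on builds
without xen_pt; disabling it remains a harmless no-op.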

>  }
>  
>  #define TYPE_XEN_ACCEL ACCEL_CLASS_NAME("xen")
> diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> index 179775db7b..660dd8a008 100644
> --- a/hw/xen/xen_pt.h
> +++ b/hw/xen/xen_pt.h
> @@ -1,6 +1,7 @@
>  #ifndef XEN_PT_H
>  #define XEN_PT_H
>  
> +#include "qemu/osdep.h"

Why do you need osdep?

>  #include "hw/xen/xen_common.h"
>  #include "hw/pci/pci.h"
>  #include "xen-host-pci-device.h"
> @@ -322,7 +323,13 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
>                                              unsigned int domain,
>                                              unsigned int bus, unsigned int slot,
>                                              unsigned int function);
> +
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>  extern bool has_igd_gfx_passthru;
> +#else
> +# define has_igd_gfx_passthru false
> +#endif

I don't quite like the use of a define here. Could you introduce a
function that returns a bool instead? And defining that function in
hw/xen/xen.h like xen_enabled() would be fine, I think.
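
For instance (the function name here is just an example):

```c
#include <assert.h>
#include <stdbool.h>

/* Accessor replacing the "#define has_igd_gfx_passthru false" trick:
 * callers always go through a real function, and only the body differs
 * depending on the build configuration. */
#ifdef CONFIG_XEN_PCI_PASSTHROUGH
extern bool has_igd_gfx_passthru;

static inline bool xen_igd_gfx_pt_enabled(void)
{
    return has_igd_gfx_passthru;
}
#else
static inline bool xen_igd_gfx_pt_enabled(void)
{
    return false;
}
#endif
```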

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon May 11 13:48:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 13:48:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY8mw-0005qn-79; Mon, 11 May 2020 13:48:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY8mv-0005qi-Tf
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 13:48:09 +0000
X-Inumbo-ID: 0c23108a-938e-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c23108a-938e-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 13:48:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 57B83AE2D;
 Mon, 11 May 2020 13:46:42 +0000 (UTC)
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
 <20200508150312.GJ1353@Air-de-Roger>
 <70c8b4f4-b690-c031-3b90-1776d872d171@suse.com>
 <20200508160851.GK1353@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d1248c00-ae0b-e2aa-be41-e27d27dce379@suse.com>
Date: Mon, 11 May 2020 15:46:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508160851.GK1353@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 18:08, Roger Pau Monné wrote:
> On Fri, May 08, 2020 at 05:11:35PM +0200, Jan Beulich wrote:
>> On 08.05.2020 17:03, Roger Pau Monné wrote:
>>> On Fri, May 08, 2020 at 02:43:38PM +0200, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/hvm/io.c
>>>> +++ b/xen/arch/x86/hvm/io.c
>>>> @@ -558,6 +558,47 @@ int register_vpci_mmcfg_handler(struct d
>>>>      return 0;
>>>>  }
>>>>  
>>>> +int unregister_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
>>>> +                                  unsigned int start_bus, unsigned int end_bus,
>>>> +                                  unsigned int seg)
>>>> +{
>>>> +    struct hvm_mmcfg *mmcfg;
>>>> +    int rc = -ENOENT;
>>>> +
>>>> +    ASSERT(is_hardware_domain(d));
>>>> +
>>>> +    if ( start_bus > end_bus )
>>>> +        return -EINVAL;
>>>> +
>>>> +    write_lock(&d->arch.hvm.mmcfg_lock);
>>>> +
>>>> +    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
>>>> +        if ( mmcfg->addr == addr + (start_bus << 20) &&
>>>> +             mmcfg->segment == seg &&
>>>> +             mmcfg->start_bus == start_bus &&
>>>> +             mmcfg->size == ((end_bus - start_bus + 1) << 20) )
>>>> +        {
>>>> +            list_del(&mmcfg->next);
>>>> +            if ( !list_empty(&d->arch.hvm.mmcfg_regions) )
>>>> +                xfree(mmcfg);
>>>> +            else
>>>> +            {
>>>> +                /*
>>>> +                 * Cannot unregister the MMIO handler - leave a fake entry
>>>> +                 * on the list.
>>>> +                 */
>>>> +                memset(mmcfg, 0, sizeof(*mmcfg));
>>>> +                list_add(&mmcfg->next, &d->arch.hvm.mmcfg_regions);
>>>
>>> Instead of leaving this zombie entry around maybe we could add a
>>> static bool in register_vpci_mmcfg_handler to signal whether the MMIO
>>> intercept has been registered?
>>
>> That was my initial plan indeed, but registration is per-domain.
> 
> Indeed, this would work now because it's only used by the hardware
> domain, but it's not a good move long term.
> 
> What about splitting the registration into a
> register_vpci_mmio_handler and call it from hvm_domain_initialise
> like it's done for register_vpci_portio_handler?

No, the goal is to not register unneeded handlers. But see below -
I'll likely ditch the function anyway.

>>>> --- a/xen/arch/x86/physdev.c
>>>> +++ b/xen/arch/x86/physdev.c
>>>> @@ -559,12 +559,18 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
>>>>          if ( !ret && has_vpci(currd) )
>>>>          {
>>>>              /*
>>>> -             * For HVM (PVH) domains try to add the newly found MMCFG to the
>>>> -             * domain.
>>>> +             * For HVM (PVH) domains try to add/remove the reported MMCFG
>>>> +             * to/from the domain.
>>>>               */
>>>> -            ret = register_vpci_mmcfg_handler(currd, info.address,
>>>> -                                              info.start_bus, info.end_bus,
>>>> -                                              info.segment);
>>>> +            if ( info.flags & XEN_PCI_MMCFG_RESERVED )
>>>
>>> Do you think you could also add a small note in physdev.h regarding
>>> the fact that XEN_PCI_MMCFG_RESERVED is used to register a MMCFG
>>> region, and not setting it would imply an unregister request?
>>>
>>> It's not obvious to me from the name of the flag.
>>
>> The main purpose of the flag is to identify whether a region can be
>> used (because of having been found marked suitably reserved by
>> firmware). The flag not set effectively means "region is not marked
>> reserved".
> 
> Looking at pci_mmcfg_arch_disable, should the region then also be
> removed from mmio_ro_ranges? (kind of tangential to this patch)

If it's truly unregistration - yes. But ...

>> You pointing this out makes me wonder whether instead I
>> should simply expand the if() in context, without making it behave
>> like unregistration. Then again we'd have no way to unregister a
>> region, and hence (ab)using this function for this purpose seems to
>> makes sense (and, afaict, not require any code changes elsewhere).
> 
> Right now the only user I know of PHYSDEVOP_pci_mmcfg_reserved is
> Linux, and AFAICT it always sets the XEN_PCI_MMCFG_RESERVED flag (at
> least upstream).

... I've looked at our forward port, where this was first introduced.
There we made the call in all cases, with the flag indicating what is
wanted. Therefore I don't think we want to assign the flag being
clear the meaning of "unregistration". I'll therefore switch to the
simpler change of just expanding the if().
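
I.e. in effect (toy model rather than the real physdev.c code):

```c
#include <assert.h>

/* Toy model of "just expand the if()": a set XEN_PCI_MMCFG_RESERVED
 * flag leads to registration, while a clear flag is a no-op rather
 * than an unregistration request. */
#define XEN_PCI_MMCFG_RESERVED 1u

static unsigned int registrations;

static int register_vpci_mmcfg_handler(void)
{
    registrations++;
    return 0;
}

static int mmcfg_reserved(unsigned int flags)
{
    if ( flags & XEN_PCI_MMCFG_RESERVED )
        return register_vpci_mmcfg_handler();

    return 0; /* region not marked reserved: nothing to do */
}
```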

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 13:57:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 13:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY8wJ-0006jS-73; Mon, 11 May 2020 13:57:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HsYJ=6Z=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jY8wI-0006jN-Ha
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 13:57:50 +0000
X-Inumbo-ID: 666823e0-938f-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 666823e0-938f-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 13:57:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FyiP+7pmc3k8A4yk2jOn/XBu06jfl+1JFA9puXYHPgM=; b=VPqRd8hlRtLZ+QJlDk/p+hcQjN
 g3M79f5ZujKZOd1MP1HninE7iZypGQYVu6UdWGF/bvu0q+aRd6LRJPIaJ4o9wR16n2IhKaJYvyJA5
 xgeZGsNgtTTtbPdKxbY+PHHuyFIrZU+qXX8mEf4eWmGGuCVkVwVIOeJLloTiWHnkyBcA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jY8wG-0007ZU-7b; Mon, 11 May 2020 13:57:48 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jY8wG-00062C-02; Mon, 11 May 2020 13:57:48 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
Date: Mon, 11 May 2020 14:57:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 11/05/2020 14:40, Jan Beulich wrote:
> On 11.05.2020 15:30, Julien Grall wrote:
>> Hi Ian,
>>
>> Thank you for the clarification.
>>
>> On 07/05/2020 18:01, Ian Jackson wrote:
>>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>>> mention further up. The two need to be carefully weighed against
>>>>>>> one another.
>>>>>
>>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>>> I think experience has shown that this was a mistake.
>>>>>
>>>>>> I don’t think we need to make it difficult for people to declare
>>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>>> “Can build something which is not security supported”.  People who
>>>>>> are building their own hypervisors are already pretty well advanced;
>>>>>> I think we can let them shoot themselves in the foot if they want
>>>>>> to.
>>>>>
>>>>> Precisely.
>>>>
>>>> Can I consider this as an Acked-by? :)
>>>
>>> I am happy with the principle of the change.  I haven't reviewed the
>>> details of the commit message etc.
>>>
>>> I reviewed the thread and there were two concerns raised:
>>>
>>>    * The question of principle.  I disagree with this concern
>>>      because I approve of the principle of the patch.
>>>
>>>    * Some detail about the precise justification as written in
>>>      the commit message, regarding `clean' targets.  Apparently the
>>>      assertion may not be completely true.  I haven't seen a proposed
>>>      alternative wording.
>>
>> I have checked the latest staging: the `clean` target doesn't trash
>> .config anymore.
>>
>>>
>>> I don't feel I should ack a controversial patch with an unresolved
>>> wording issue.  Can you tell me what your proposed wording is ?
>>> To avoid blocking this change I would be happy to review your wording
>>> and see if it meets my reading of the stated objection.
>>
>> Here is a suggested rewording:
>>
>> "EXPERT mode is currently used to gate any options that are in technical
>> preview or not security supported. At the moment, the only way to select
>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>
>> However, if the user forgets to add the option when (re)building or when
>> using menuconfig, then .config will get rewritten. This may lead to a
>> rather frustrating experience as it is difficult to diagnose the
>> issue.
> 
> To me this looks very similar to e.g. not suitably overriding the
> default toolchain binaries, if one has a need to build with newer
> ones than what a distro provides. According to some of my routinely
> built configs both can be done by putting suitable entries into
> ./.config (not xen/.config), removing the need to remember adding
> either to the make command line.

I have never heard of ./.config before. So what are you referring to?

But yes, there are ways to make it permanent. But that involves hacking
the Xen source. This is not a great approach because if you need to
bisect, then you have to remember to apply the change every time. It also
doesn't work if you have to build for multiple different targets from the
same source.

The compiler is another option that would be worth moving into
Kconfig. I have lost count of the number of times I have mixed up the
Arm64, Arm32 and x86 compilers when building Xen...
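
[Editorial note: the failure mode described above can be sketched with a toy example. This is a simplified stand-in, not the real Xen build system; it only illustrates that a variable passed on the make command line applies to that one invocation, so a later invocation that forgets it silently falls back to the default.]

```shell
# Simplified stand-in for the behaviour described above: XEN_CONFIG_EXPERT
# passed on the command line is honoured only for that invocation;
# forgetting it on the next run silently reverts to the default.
dir=$(mktemp -d) && cd "$dir"
cat > Makefile <<'EOF'
XEN_CONFIG_EXPERT ?= n
all: ; @echo "expert=$(XEN_CONFIG_EXPERT)"
EOF
make -s XEN_CONFIG_EXPERT=y   # expert=y
make -s                       # expert=n -- the setting did not persist
```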

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:07:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY95o-0007jO-4b; Mon, 11 May 2020 14:07:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY95n-0007jJ-0t
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:07:39 +0000
X-Inumbo-ID: c3e4897d-9390-11ea-a214-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3e4897d-9390-11ea-a214-12813bfff9fa;
 Mon, 11 May 2020 14:07:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D234EAD33;
 Mon, 11 May 2020 14:07:38 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <803876ce-503a-2e97-f310-0413e515970b@suse.com>
Date: Mon, 11 May 2020 16:07:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 15:57, Julien Grall wrote:
> Hi,
> 
> On 11/05/2020 14:40, Jan Beulich wrote:
>> On 11.05.2020 15:30, Julien Grall wrote:
>>> Hi Ian,
>>>
>>> Thank you for the clarification.
>>>
>>> On 07/05/2020 18:01, Ian Jackson wrote:
>>>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>>>> mention further up. The two need to be carefully weighed against
>>>>>>>> one another.
>>>>>>
>>>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>>>> I think experience has shown that this was a mistake.
>>>>>>
>>>>>>> I don’t think we need to make it difficult for people to declare
>>>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>>>> “Can build something which is not security supported”.  People who
>>>>>>> are building their own hypervisors are already pretty well advanced;
>>>>>>> I think we can let them shoot themselves in the foot if they want
>>>>>>> to.
>>>>>>
>>>>>> Precisely.
>>>>>
>>>>> Can I consider this as an Acked-by? :)
>>>>
>>>> I am happy with the principle of the change.  I haven't reviewed the
>>>> details of the commit message etc.
>>>>
>>>> I reviewed the thread and there were two concerns raised:
>>>>
>>>>    * The question of principle.  I disagree with this concern
>>>>      because I approve of the principle of the patch.
>>>>
>>>>    * Some detail about the precise justification as written in
>>>>      the commit message, regarding `clean' targets.  Apparently the
>>>>      assertion may not be completely true.  I haven't seen a proposed
>>>>      alternative wording.
>>>
>>> I have checked the latest staging; the `clean` target doesn't trash
>>> .config anymore.
>>>
>>>>
>>>> I don't feel I should ack a controversial patch with an unresolved
>>>> wording issue.  Can you tell me what your proposed wording is ?
>>>> To avoid blocking this change I would be happy to review your wording
>>>> and see if it meets my reading of the stated objection.
>>>
>>> Here is a suggested rewording:
>>>
>>> "EXPERT mode is currently used to gate any options that are in technical
>>> preview or not security supported. At the moment, the only way to select
>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>
>>> However, if the user forgets to add the option when (re)building or when
>>> using menuconfig, then .config will get rewritten. This may lead to a
>>> rather frustrating experience as it is difficult to diagnose the
>>> issue.
>>
>> To me this looks very similar to e.g. not suitably overriding the
>> default toolchain binaries, if one has a need to build with newer
>> ones than what a distro provides. According to some of my routinely
>> built configs both can be done by putting suitable entries into
>> ./.config (not xen/.config), removing the need to remember adding
>> either to the make command line.
> 
> I have never heard of ./.config before. So what are you referring to?

I'm referring to this line in ./Config.mk:

-include $(XEN_ROOT)/.config
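
[Editorial note: the mechanism Jan refers to can be sketched with a toy example, a simplified stand-in for the real tree. It assumes only what the line above states: Config.mk does `-include $(XEN_ROOT)/.config`, so make variables placed in a top-level .config are picked up on every invocation.]

```shell
# Toy demonstration of how "-include .config" lets a top-level .config
# persist make variables (such as XEN_CONFIG_EXPERT=y) across invocations,
# so they need not be repeated on every make command line.
dir=$(mktemp -d) && cd "$dir"
printf 'XEN_CONFIG_EXPERT := y\n' > .config
cat > Makefile <<'EOF'
-include .config
all: ; @echo "expert=$(XEN_CONFIG_EXPERT)"
EOF
make -s   # expert=y, with nothing on the command line
```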

> But yes, there are ways to make it permanent. But that involves hacking
> the Xen source.

Why would there be any need for a source modification? Just like
xen/.config, ./.config is not considered part of the source.

> This is not a great approach because if you need to
> bisect, then you have to remember to apply the change every time. It also
> doesn't work if you have to build for multiple different targets from the
> same source.

Why wouldn't it? I'm doing exactly this, far beyond just x86 and
Arm builds, and it all works fine. (It would work even better
with out-of-tree builds, but it looks like Anthony is getting us
there.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:11:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:11:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY99j-00006C-Lz; Mon, 11 May 2020 14:11:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HsYJ=6Z=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jY99h-000067-NC
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:11:41 +0000
X-Inumbo-ID: 55fd17e8-9391-11ea-a214-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 55fd17e8-9391-11ea-a214-12813bfff9fa;
 Mon, 11 May 2020 14:11:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=To+MbXWFEZdpU38r7NtAo/xrgp96aDKRXfuoKZbjyTs=; b=krqYsOqjckHYabN3zBu+PGSjAJ
 ZzqS+kSEdyIewDpqXoA5DL1LOnolCL+pFUn1VxEWoiB/wpxkc0ruvDt5uK2/6AdRC1MqLiv5FjFQ6
 0NwYtdrtIj17yO+UryXk3JOLK1I8Enr6/4K1EHtdHRyVC9i+OhiLmNufYxRPrUjmQSoE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jY99f-0007xV-R1; Mon, 11 May 2020 14:11:39 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jY99f-00079M-Eh; Mon, 11 May 2020 14:11:39 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
Date: Mon, 11 May 2020 15:11:37 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <803876ce-503a-2e97-f310-0413e515970b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 11/05/2020 15:07, Jan Beulich wrote:
> On 11.05.2020 15:57, Julien Grall wrote:
>> Hi,
>>
>> On 11/05/2020 14:40, Jan Beulich wrote:
>>> On 11.05.2020 15:30, Julien Grall wrote:
>>>> Hi Ian,
>>>>
>>>> Thank you for the clarification.
>>>>
>>>> On 07/05/2020 18:01, Ian Jackson wrote:
>>>>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>>>>> mention further up. The two need to be carefully weighed against
>>>>>>>>> one another.
>>>>>>>
>>>>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>>>>> I think experience has shown that this was a mistake.
>>>>>>>
>>>>>>>> I don’t think we need to make it difficult for people to declare
>>>>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>>>>> “Can build something which is not security supported”.  People who
>>>>>>>> are building their own hypervisors are already pretty well advanced;
>>>>>>>> I think we can let them shoot themselves in the foot if they want
>>>>>>>> to.
>>>>>>>
>>>>>>> Precisely.
>>>>>>
>>>>>> Can I consider this as an Acked-by? :)
>>>>>
>>>>> I am happy with the principle of the change.  I haven't reviewed the
>>>>> details of the commit message etc.
>>>>>
>>>>> I reviewed the thread and there were two concerns raised:
>>>>>
>>>>>     * The question of principle.  I disagree with this concern
>>>>>       because I approve of the principle of the patch.
>>>>>
>>>>>     * Some detail about the precise justification as written in
>>>>>       the commit message, regarding `clean' targets.  Apparently the
>>>>>       assertion may not be completely true.  I haven't seen a proposed
>>>>>       alternative wording.
>>>>
>>>> I have checked the latest staging; the `clean` target doesn't trash
>>>> .config anymore.
>>>>
>>>>>
>>>>> I don't feel I should ack a controversial patch with an unresolved
>>>>> wording issue.  Can you tell me what your proposed wording is ?
>>>>> To avoid blocking this change I would be happy to review your wording
>>>>> and see if it meets my reading of the stated objection.
>>>>
>>>> Here is a suggested rewording:
>>>>
>>>> "EXPERT mode is currently used to gate any options that are in technical
>>>> preview or not security supported. At the moment, the only way to select
>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>
>>>> However, if the user forgets to add the option when (re)building or when
>>>> using menuconfig, then .config will get rewritten. This may lead to a
>>>> rather frustrating experience as it is difficult to diagnose the
>>>> issue.
>>>
>>> To me this looks very similar to e.g. not suitably overriding the
>>> default toolchain binaries, if one has a need to build with newer
>>> ones than what a distro provides. According to some of my routinely
>>> built configs both can be done by putting suitable entries into
>>> ./.config (not xen/.config), removing the need to remember adding
>>> either to the make command line.
>>
>> I have never heard of ./.config before. So what are you referring to?
> 
> I'm referring to this line in ./Config.mk:
> 
> -include $(XEN_ROOT)/.config

Great, another undocumented way to do things...

> 
>> But yes, there are ways to make it permanent. But that involves hacking
>> the Xen source.
> 
> Why would there be any need for a source modification? Just like
> xen/.config, ./.config is not considered part of the source.
> 
>> This is not a great approach because if you need to
>> bisect, then you have to remember to apply the change every time. It also
>> doesn't work if you have to build for multiple different targets from the
>> same source.
> 
> Why wouldn't it? I'm doing exactly this, far beyond just x86 and
> Arm builds, and it all works fine. (It would work even better
> with out-of-tree builds, but it looks like Anthony is getting us
> there.)

... let me ask it differently. Are you concerned by my wording or by the
fact that we make it easier for a user to enable EXPERT mode?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:15:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:15:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9Cx-0000H2-52; Mon, 11 May 2020 14:15:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY9Cv-0000Gt-J1
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:15:01 +0000
X-Inumbo-ID: ccb61560-9391-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ccb61560-9391-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 14:15:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E2D4EAB76;
 Mon, 11 May 2020 14:15:01 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
Date: Mon, 11 May 2020 16:14:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 16:11, Julien Grall wrote:
> Hi,
> 
> On 11/05/2020 15:07, Jan Beulich wrote:
>> On 11.05.2020 15:57, Julien Grall wrote:
>>> Hi,
>>>
>>> On 11/05/2020 14:40, Jan Beulich wrote:
>>>> On 11.05.2020 15:30, Julien Grall wrote:
>>>>> Hi Ian,
>>>>>
>>>>> Thank you for the clarification.
>>>>>
>>>>> On 07/05/2020 18:01, Ian Jackson wrote:
>>>>>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>>>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>>>>>> mention further up. The two need to be carefully weighed against
>>>>>>>>>> one another.
>>>>>>>>
>>>>>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>>>>>> I think experience has shown that this was a mistake.
>>>>>>>>
>>>>>>>>> I don’t think we need to make it difficult for people to declare
>>>>>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>>>>>> “Can build something which is not security supported”.  People who
>>>>>>>>> are building their own hypervisors are already pretty well advanced;
>>>>>>>>> I think we can let them shoot themselves in the foot if they want
>>>>>>>>> to.
>>>>>>>>
>>>>>>>> Precisely.
>>>>>>>
>>>>>>> Can I consider this as an Acked-by? :)
>>>>>>
>>>>>> I am happy with the principle of the change.  I haven't reviewed the
>>>>>> details of the commit message etc.
>>>>>>
>>>>>> I reviewed the thread and there were two concerns raised:
>>>>>>
>>>>>>     * The question of principle.  I disagree with this concern
>>>>>>       because I approve of the principle of the patch.
>>>>>>
>>>>>>     * Some detail about the precise justification as written in
>>>>>>       the commit message, regarding `clean' targets.  Apparently the
>>>>>>       assertion may not be completely true.  I haven't seen a proposed
>>>>>>       alternative wording.
>>>>>
>>>>> I have checked the latest staging; the `clean` target doesn't trash
>>>>> .config anymore.
>>>>>
>>>>>>
>>>>>> I don't feel I should ack a controversial patch with an unresolved
>>>>>> wording issue.  Can you tell me what your proposed wording is ?
>>>>>> To avoid blocking this change I would be happy to review your wording
>>>>>> and see if it meets my reading of the stated objection.
>>>>>
>>>>> Here is a suggested rewording:
>>>>>
>>>>> "EXPERT mode is currently used to gate any options that are in technical
>>>>> preview or not security supported. At the moment, the only way to select
>>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>>
>>>>> However, if the user forgets to add the option when (re)building or when
>>>>> using menuconfig, then .config will get rewritten. This may lead to a
>>>>> rather frustrating experience as it is difficult to diagnose the
>>>>> issue.
>>>>
>>>> To me this looks very similar to e.g. not suitably overriding the
>>>> default toolchain binaries, if one has a need to build with newer
>>>> ones than what a distro provides. According to some of my routinely
>>>> built configs both can be done by putting suitable entries into
>>>> ./.config (not xen/.config), removing the need to remember adding
>>>> either to the make command line.
>>>
>>> I have never heard of ./.config before. So what are you referring to?
>>
>> I'm referring to this line in ./Config.mk:
>>
>> -include $(XEN_ROOT)/.config
> 
> Great, another undocumented way to do things...
> 
>>
>>> But yes, there are ways to make it permanent. But that involves hacking
>>> the Xen source.
>>
>> Why would there be any need for a source modification? Just like
>> xen/.config, ./.config is not considered part of the source.
>>
>>> This is not a great approach because if you need to
>>> bisect, then you have to remember to apply the change every time. It also
>>> doesn't work if you have to build for multiple different targets from the
>>> same source.
>>
>> Why wouldn't it? I'm doing exactly this, far beyond just x86 and
>> Arm builds, and it all works fine. (It would work even better
>> with out-of-tree builds, but it looks like Anthony is getting us
>> there.)
> 
> ... let me ask it differently. Are you concerned by my wording or by the
> fact that we make it easier for a user to enable EXPERT mode?

I'm trying to make the point that your patch, to me, looks to be
trying to overcome a problem for which we have had a solution all
along.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:28:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:28:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9Pa-0001Fg-BW; Mon, 11 May 2020 14:28:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2nSL=6Z=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jY9PZ-0001Fb-CR
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:28:05 +0000
X-Inumbo-ID: 9f5b155a-9393-11ea-a215-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f5b155a-9393-11ea-a215-12813bfff9fa;
 Mon, 11 May 2020 14:28:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589207284;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=CtEB7EXwJGNdwiT8BuA/fSZx76lVbT4EwRvG+OpaDlE=;
 b=CkaDeGvZqug3ryKlt/DIybn1kPEQevJ3Nxuz+BjHoLeY/ytfCZDEygjz
 0ttVymM/lkuyoL5iMJloI7F3/t8BNy8HN1CRNb9albXl6IWS9DxeefW6C
 eW+fnChdjSsqM5F8963kYqqEGyd1dxu4I8ves0lVSMaFnHlLPGkNzMA2A M=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: VaDVOzUPScAxQJ3TEnvE4oivfmLEbxaFnbzU8CJXilDkteVwNeahGbcwNUNHsleKuyXUONJfH0
 JY4HHRs0CitcZm84amwXgGGr+6wdU5lF0hDULlb7D1gFLzWXkSyLmEudGlIKu4laGNf5UbZKPP
 HapQPbq7SZn+2DlURGgEaPAff5OBkmtoq4AGsfkZrO+7vxNrttUV8YxHaokJ/n/s46mjMMsfI5
 VgLKhoVnJwh+RGFzgL+JaAYORGJGqASWrhJ7pvntdzy2nHnYaaOiJW8a1g0LPZDP/VAPEC1ZmC
 oaU=
X-SBRS: 2.7
X-MesageID: 17240834
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17240834"
From: George Dunlap <George.Dunlap@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
Thread-Topic: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from
 the menuconfig directly
Thread-Index: AQHWHvtEfLlMb5EF90y7GzIGxHVxJ6iRncgAgAYL5QCAABeggIAC9dcAgAILiACABg54AIAAAuMAgAAExoCAAAK+AIAAASKAgAAEkoA=
Date: Mon, 11 May 2020 14:27:59 +0000
Message-ID: <498C2068-6E57-465C-80E2-A689438D2F79@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
In-Reply-To: <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <69AAD4A5D64DBB4ABD465B20C075E7A2@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 11, 2020, at 3:11 PM, Julien Grall <julien@xen.org> wrote:
> 
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> 
> Hi,
> 
> On 11/05/2020 15:07, Jan Beulich wrote:
>> On 11.05.2020 15:57, Julien Grall wrote:
>>> 
>>> I have never heard of ./.config before. So what are you referring to?
>> I'm referring to this line in ./Config.mk:
>> -include $(XEN_ROOT)/.config
> 
> Great, another undocumented way to do things...

Oh, is that not documented?  It's quite venerable at this point -- if it's not documented, that should change.  (Although I guess there's an argument that everything which would be included there should be added to either Kconfig or configure.)

I don't think I would have thought to add XEN_CONFIG_EXPERT=y to .config to prevent the issue Julien is seeing.  That may be part of the reason why this doesn't bother you as much as it does him.

From a UI perspective, I think that's a poor UI -- enabling CONFIG_EXPERT from the menuconfig is more discoverable and more in line with what people expect.

 -George
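The two mechanisms under discussion, passing the variable on every make invocation versus persisting it in a tree-root config file, can be sketched as follows (the include line is quoted verbatim earlier in the thread; the example variable assignment and its syntax are assumptions, not verbatim Xen build code):

```make
# Sketch, under the assumptions above: ./Config.mk carries an optional
# include of a config file at the root of the whole source tree
# (this line is quoted verbatim in the thread):
-include $(XEN_ROOT)/.config

# So a tree-root ./.config containing, for example:
#   XEN_CONFIG_EXPERT ?= y
# persists the setting across builds, instead of having to remember
# "make XEN_CONFIG_EXPERT=y" on every invocation. Note this file is
# distinct from xen/.config, which is generated by Kconfig.
```

Because `-include` (unlike `include`) silently ignores a missing file, a tree without a top-level .config builds unchanged.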


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:32:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:32:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9TM-000227-0j; Mon, 11 May 2020 14:32:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HsYJ=6Z=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jY9TK-000222-C2
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:31:58 +0000
X-Inumbo-ID: 2ac84c0c-9394-11ea-a218-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ac84c0c-9394-11ea-a218-12813bfff9fa;
 Mon, 11 May 2020 14:31:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QvLqClO0VU5s88qCAN1HUjc1Bvco4H+r7Pt7jrgc59Q=; b=H/tKsRWlYRVZcZfRKQAnXEM+Sp
 ebxcJKqmVHqdK/g1HvDHCBrFcI1LouAMnuEGK9VTTzar5o2My1d1jl0YDLdUYqwH3AMVkeor5yk7o
 rEFcsYlTDoxZuZhEp533BQ/P8p1p8PRSzDc2JEgGDxBig080nRSokrZblDxDamYT8da8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jY9TH-0008NC-MB; Mon, 11 May 2020 14:31:55 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jY9TH-0007x9-7o; Mon, 11 May 2020 14:31:55 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ac8f85db-8d94-b616-9cdc-3c996f3a1d43@xen.org>
Date: Mon, 11 May 2020 15:31:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 11/05/2020 15:14, Jan Beulich wrote:
> On 11.05.2020 16:11, Julien Grall wrote:
>> Hi,
>>
>> On 11/05/2020 15:07, Jan Beulich wrote:
>>> On 11.05.2020 15:57, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 11/05/2020 14:40, Jan Beulich wrote:
>>>>> On 11.05.2020 15:30, Julien Grall wrote:
>>>>>> Hi Ian,
>>>>>>
>>>>>> Thank you for the clarification.
>>>>>>
>>>>>> On 07/05/2020 18:01, Ian Jackson wrote:
>>>>>>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>>>>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>>>>>>> mention further up. The two need to be carefully weighed against
>>>>>>>>>>> one another.
>>>>>>>>>
>>>>>>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>>>>>>> I think experience has shown that this was a mistake.
>>>>>>>>>
>>>>>>>>>> I don’t think we need to make it difficult for people to declare
>>>>>>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>>>>>>> “Can build something which is not security supported”.  People who
>>>>>>>>>> are building their own hypervisors are already pretty well advanced;
>>>>>>>>>> I think we can let them shoot themselves in the foot if they want
>>>>>>>>>> to.
>>>>>>>>>
>>>>>>>>> Precisely.
>>>>>>>>
>>>>>>>> Can I consider this as an Acked-by? :)
>>>>>>>
>>>>>>> I am happy with the principle of the change.  I haven't reviewed the
>>>>>>> details of the commit message etc.
>>>>>>>
>>>>>>> I reviewed the thread and there were two concerns raised:
>>>>>>>
>>>>>>>      * The question of principle.  I disagree with this concern
>>>>>>>        because I approve of principle of the patch.
>>>>>>>
>>>>>>>      * Some detail about the precise justification as written in
>>>>>>>        the commit message, regarding `clean' targets.  Apparently the
>>>>>>>        assertion may not be completely true.  I haven't seen a proposed
>>>>>>>        alternative wording.
>>>>>>
>>>>>> I have checked the latest staging: the `clean` target doesn't trash
>>>>>> .config anymore.
>>>>>>
>>>>>>>
>>>>>>> I don't feel I should ack a controversial patch with an unresolved
>>>>>>> wording issue.  Can you tell me what your proposed wording is ?
>>>>>>> To avoid blocking this change I would be happy to review your wording
>>>>>>> and see if it meets my reading of the stated objection.
>>>>>>
>>>>>> Here a suggested rewording:
>>>>>>
>>>>>> "EXPERT mode is currently used to gate any options that are in technical
>>>>>> preview or not security supported. At the moment, the only way to select
>>>>>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>>>>>
>>>>>> However, if the user forgets to add the option when (re)building or when
>>>>>> using menuconfig, then .config will get rewritten. This may lead to a
>>>>>> rather frustrating experience as it is difficult to diagnose the
>>>>>> issue.
>>>>>
>>>>> To me this looks very similar to e.g. not suitably overriding the
>>>>> default toolchain binaries, if one has a need to build with newer
>>>>> ones than what a distro provides. According to some of my routinely
>>>>> built configs both can be done by putting suitable entries into
>>>>> ./.config (not xen/.config), removing the need to remember adding
>>>>> either to the make command line.
>>>>
>>>> I have never heard of ./.config before. So what are you referring to?
>>>
>>> I'm referring to this line in ./Config.mk:
>>>
>>> -include $(XEN_ROOT)/.config
>>
>> Great, another undocumented way to do things...
>>
>>>
>>>> But yes, there are ways to make it permanent. But that involves hacking
>>>> Xen source.
>>>
>>> Why would there be any need for a source modification? Just like
>>> xen/.config, ./.config is not considered part of the source.
>>>
>>>> This is not a great approach because if you need to
>>>> bisect, then you have to remember to apply the change every time. It also
>>>> doesn't work if you have to build for multiple different targets from the
>>>> same source.
>>>
>>> Why wouldn't it? I'm doing exactly this, far beyond just x86 and
>>> Arm builds, and it all works fine. (It would work even better
>>> with out-of-tree builds, but it looks like Anthony is getting us
>>> there.)
>>
>> ... let me ask it differently. Are you concerned about my wording or by
>> the fact that we make it easier for a user to enable EXPERT mode?
> 
> I'm trying to make the point that your patch, to me, looks to be
> trying to overcome a problem for which we have had a solution all
> the time.

For a start, AFAICT, your solution is not documented anywhere at all... 
It would be possible to document it, though.

However, having two .config files for managing Xen options is not great. 
You have to remember where to add each option, and you need to provide 
both files if you want someone else to reproduce your setup.

So I would still like to push for a single .config.

As an aside, if you look at the placement of ./.config, it is meant to 
apply to the full tree. XEN_CONFIG_EXPERT is an option specific to the 
hypervisor, so it is a bit counterintuitive to put it in that file.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:35:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9Wa-0002BA-GI; Mon, 11 May 2020 14:35:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jY9WZ-0002B4-3J
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:35:19 +0000
X-Inumbo-ID: a1a37b14-9394-11ea-a218-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a1a37b14-9394-11ea-a218-12813bfff9fa;
 Mon, 11 May 2020 14:35:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589207718;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=+fKZx7eXtkts7N/PkZvF7HfM4waLmeva/WULtQOP5y8=;
 b=faK28273DUpRww7yYYgbNr/k9aQ2+/WroQN0DIbikWRXoqlSthJfFGmE
 mJFNF/P/33vND3gz2WKx69Um21YBgmfiL67ciRmgSSR+GO3dzO1McGLLv
 jR0liPsPK+ssPKMSdPoLb+lyQ/NgHTP2XCYQvpGpwE8pdlkTtWS7y4zi4 g=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: T2QMMqIcwm5uMqzbkmhQ3DJWln281zDUnA8XRzlJXT0DCx0es705OFG9GjZE9Rhqxvatcs5Ljw
 h+xDQ0/Zoa7XwIuP8OamI8v3GkqtMhaZJtalFdLnljOLJnRKUEPu+DWRupVJ3LRIxypTCVi9/S
 JEPu1HhI/BkFReWiBWlpl1n7X3+w9PN5MWWB1vbp/xXBMVTTP2xexoVLmLP4HU+rkQqkcK34bC
 zYjwZpbl7iR8PWzzvt7xXR9/Idr25ue7nplfbJymqeQFTWKHVIegXL4e+idoXvNa0muZdf6fGI
 Sno=
X-SBRS: 2.7
X-MesageID: 17595735
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17595735"
Date: Mon, 11 May 2020 16:35:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
Message-ID: <20200511143510.GC35422@Air-de-Roger>
References: <2ee1a3cb-3b40-6e6d-5308-1cdf9f6c521e@suse.com>
 <20200508150312.GJ1353@Air-de-Roger>
 <70c8b4f4-b690-c031-3b90-1776d872d171@suse.com>
 <20200508160851.GK1353@Air-de-Roger>
 <d1248c00-ae0b-e2aa-be41-e27d27dce379@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d1248c00-ae0b-e2aa-be41-e27d27dce379@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 03:46:38PM +0200, Jan Beulich wrote:
> On 08.05.2020 18:08, Roger Pau Monné wrote:
> > On Fri, May 08, 2020 at 05:11:35PM +0200, Jan Beulich wrote:
> >> On 08.05.2020 17:03, Roger Pau Monné wrote:
> >>> On Fri, May 08, 2020 at 02:43:38PM +0200, Jan Beulich wrote:
> >>>> --- a/xen/arch/x86/hvm/io.c
> >>>> +++ b/xen/arch/x86/hvm/io.c
> >>>> @@ -558,6 +558,47 @@ int register_vpci_mmcfg_handler(struct d
> >>>>      return 0;
> >>>>  }
> >>>>  
> >>>> +int unregister_vpci_mmcfg_handler(struct domain *d, paddr_t addr,
> >>>> +                                  unsigned int start_bus, unsigned int end_bus,
> >>>> +                                  unsigned int seg)
> >>>> +{
> >>>> +    struct hvm_mmcfg *mmcfg;
> >>>> +    int rc = -ENOENT;
> >>>> +
> >>>> +    ASSERT(is_hardware_domain(d));
> >>>> +
> >>>> +    if ( start_bus > end_bus )
> >>>> +        return -EINVAL;
> >>>> +
> >>>> +    write_lock(&d->arch.hvm.mmcfg_lock);
> >>>> +
> >>>> +    list_for_each_entry ( mmcfg, &d->arch.hvm.mmcfg_regions, next )
> >>>> +        if ( mmcfg->addr == addr + (start_bus << 20) &&
> >>>> +             mmcfg->segment == seg &&
> >>>> +             mmcfg->start_bus == start_bus &&
> >>>> +             mmcfg->size == ((end_bus - start_bus + 1) << 20) )
> >>>> +        {
> >>>> +            list_del(&mmcfg->next);
> >>>> +            if ( !list_empty(&d->arch.hvm.mmcfg_regions) )
> >>>> +                xfree(mmcfg);
> >>>> +            else
> >>>> +            {
> >>>> +                /*
> >>>> +                 * Cannot unregister the MMIO handler - leave a fake entry
> >>>> +                 * on the list.
> >>>> +                 */
> >>>> +                memset(mmcfg, 0, sizeof(*mmcfg));
> >>>> +                list_add(&mmcfg->next, &d->arch.hvm.mmcfg_regions);
> >>>
> >>> Instead of leaving this zombie entry around maybe we could add a
> >>> static bool in register_vpci_mmcfg_handler to signal whether the MMIO
> >>> intercept has been registered?
> >>
> >> That was my initial plan indeed, but registration is per-domain.
> > 
> > Indeed, this would work now because it's only used by the hardware
> > domain, but it's not a good move long term.
> > 
> > What about splitting the registration into a
> > register_vpci_mmio_handler and call it from hvm_domain_initialise
> > like it's done for register_vpci_portio_handler?
> 
> No, the goal is to not register unneeded handlers. But see below -
> I'll likely ditch the function anyway.
> 
> >>>> --- a/xen/arch/x86/physdev.c
> >>>> +++ b/xen/arch/x86/physdev.c
> >>>> @@ -559,12 +559,18 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
> >>>>          if ( !ret && has_vpci(currd) )
> >>>>          {
> >>>>              /*
> >>>> -             * For HVM (PVH) domains try to add the newly found MMCFG to the
> >>>> -             * domain.
> >>>> +             * For HVM (PVH) domains try to add/remove the reported MMCFG
> >>>> +             * to/from the domain.
> >>>>               */
> >>>> -            ret = register_vpci_mmcfg_handler(currd, info.address,
> >>>> -                                              info.start_bus, info.end_bus,
> >>>> -                                              info.segment);
> >>>> +            if ( info.flags & XEN_PCI_MMCFG_RESERVED )
> >>>
> >>> Do you think you could also add a small note in physdev.h regarding
> >>> the fact that XEN_PCI_MMCFG_RESERVED is used to register a MMCFG
> >>> region, and not setting it would imply an unregister request?
> >>>
> >>> It's not obvious to me from the name of the flag.
> >>
> >> The main purpose of the flag is to identify whether a region can be
> >> used (because of having been found marked suitably reserved by
> >> firmware). The flag not set effectively means "region is not marked
> >> reserved".
> > 
> > Looking at pci_mmcfg_arch_disable, should the region then also be
> > removed from mmio_ro_ranges? (kind of tangential to this patch)
> 
> If it's truly unregistration - yes. But ...
> 
> >> You pointing this out makes me wonder whether instead I
> >> should simply expand the if() in context, without making it behave
> >> like unregistration. Then again we'd have no way to unregister a
> >> region, and hence (ab)using this function for this purpose seems to
> >> make sense (and, afaict, not require any code changes elsewhere).
> > 
> > Right now the only user I know of PHYSDEVOP_pci_mmcfg_reserved is
> > Linux, and AFAICT it always sets the XEN_PCI_MMCFG_RESERVED flag (at
> > least upstream).
> 
> ... I've looked at our forward port, where this was first introduced.
> There we made the call in all cases, with the flag indicating what is
> wanted. Therefore I don't think we want to assign the flag being
> clear the meaning of "unregistration". I'll therefore switch to the
> simpler change of just expanding the if().

I'm not opposed to this. Leaving the vpci MMIO handlers in place for
disabled regions is fine; writes will be ignored and reads will return ~0.

This will prevent a PVH hardware domain from accessing those broken
MMCFG regions if it really wants to, but I think it's similar to how a
classic PV dom0 would behave (with the exception that in that case the
domain would be allowed to read from the MMCFG area).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9lc-0003qc-TS; Mon, 11 May 2020 14:50:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY9lb-0003qX-Td
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:50:51 +0000
X-Inumbo-ID: ce780ed0-9396-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce780ed0-9396-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 14:50:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 781EAAC77;
 Mon, 11 May 2020 14:50:52 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly
 register a region
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <5df03dc2-6bbd-6f5f-e2f8-18ed2062eb0f@suse.com>
Date: Mon, 11 May 2020 16:50:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The op has an "is reserved" flag, and hence registration shouldn't
happen unilaterally.

Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Don't treat XEN_PCI_MMCFG_RESERVED clear as an unregistration
    request.

--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -557,7 +557,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_H
 
         ret = pci_mmcfg_reserved(info.address, info.segment,
                                  info.start_bus, info.end_bus, info.flags);
-        if ( !ret && has_vpci(currd) )
+        if ( !ret && has_vpci(currd) && (info.flags & XEN_PCI_MMCFG_RESERVED) )
         {
             /*
              * For HVM (PVH) domains try to add the newly found MMCFG to the


From xen-devel-bounces@lists.xenproject.org Mon May 11 14:53:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 14:53:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9oT-0003zI-BS; Mon, 11 May 2020 14:53:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jY9oS-0003zC-56
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 14:53:48 +0000
X-Inumbo-ID: 37947dcc-9397-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37947dcc-9397-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 14:53:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589208828;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=8x/qahwqikLhTKltH5G8F8Fn+ZigWA9Ayn1oIa2+6yI=;
 b=GEX5ya+w/o/5w8Km+fWkulJlIZsLAyjRxyy/pfCYpjdrjh8j+y0L2v43
 2GsCX3XC/+qmIPsLvXERZSsA6rIn1cUPFLWPSTkuubE4qq2tyFX77x6Fe
 0LM5noiABVW9Ynra8ruKX2speJRkq2GX3IAepS5DSSRU17bjjk0/NplmD I=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 7R3ic9toiaykhYNmVUq9jQaGi4XMCpaOnFxHpaSGz+Q5uraBh92ojoMI/GBa8fp6ktx3KS3q6I
 VX2YCzOmW8BdsyH6KxiQIbj1HxkvXHG3HgQoE9UOZZl87w52w+2GBYf1WoklEXGfIW7mdfce3n
 5k7GAViWwXONNYqcgFFC8jqknbofxoD60d+XFo/FZ9Y+lI6ku6R/Cy779iPNert22OP6wj02UV
 gSiIZ5I+60brbMMK4uMLhRP2wvEvXrbupBXMvp5C0NS89fd6HGQgIVWKkjW5WEgb+JAuflTRSy
 +AI=
X-SBRS: 2.7
X-MesageID: 17489351
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17489351"
Subject: Re: [PATCH 01/16] x86/traps: Drop last_extable_addr
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-2-andrew.cooper3@citrix.com>
 <3ec06cb1-6b31-2fde-6de4-23938acab73a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <64cb6793-c3a0-e396-c6cc-50f5a36e19a6@citrix.com>
Date: Mon, 11 May 2020 15:53:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <3ec06cb1-6b31-2fde-6de4-23938acab73a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 13:44, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> The only user of this facility is dom_crash_sync_extable() by passing 0 into
>> asm_domain_crash_synchronous().  The common error cases are already covered
>> with show_page_walk(), leaving only %ss/%fs selector/segment errors in the
>> compat case.
>>
>> Point at dom_crash_sync_extable in the error message, which is about as good
>> as the error hints from other users of asm_domain_crash_synchronous(), and
>> drop last_extable_addr.
> While I'm not entirely opposed, I'd like you to clarify that you indeed
> mean to (mostly) revert your own improvement from 6 or 7 years back
> (commit 8e0da8c07f4f). I'm also surprised to find this as part of the
> series it's in - in what way does this logic get in the way of CET-SS?

It was part of the exception_fixup() cleanup.  The first 4 patches are not
specifically related to CET-SS.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:00:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:00:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9uh-0004sr-1i; Mon, 11 May 2020 15:00:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jY9ug-0004sm-1p
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:00:14 +0000
X-Inumbo-ID: 1c57e64c-9398-11ea-a21f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c57e64c-9398-11ea-a21f-12813bfff9fa;
 Mon, 11 May 2020 15:00:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 04F33AD08;
 Mon, 11 May 2020 15:00:12 +0000 (UTC)
Subject: Re: [PATCH 01/16] x86/traps: Drop last_extable_addr
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-2-andrew.cooper3@citrix.com>
 <3ec06cb1-6b31-2fde-6de4-23938acab73a@suse.com>
 <64cb6793-c3a0-e396-c6cc-50f5a36e19a6@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c2740228-ff78-e9d3-41c4-6a5402029259@suse.com>
Date: Mon, 11 May 2020 17:00:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <64cb6793-c3a0-e396-c6cc-50f5a36e19a6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 16:53, Andrew Cooper wrote:
> On 04/05/2020 13:44, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> The only user of this facility is dom_crash_sync_extable() by passing 0 into
>>> asm_domain_crash_synchronous().  The common error cases are already covered
>>> with show_page_walk(), leaving only %ss/%fs selector/segment errors in the
>>> compat case.
>>>
>>> Point at dom_crash_sync_extable in the error message, which is about as good
>>> as the error hints from other users of asm_domain_crash_synchronous(), and
>>> drop last_extable_addr.
>> While I'm not entirely opposed, I'd like you to clarify that you indeed
>> mean to (mostly) revert your own improvement from 6 or 7 years back
>> (commit 8e0da8c07f4f). I'm also surprised to find this as part of the
>> series it's in - in what way does this logic get in the way of CET-SS?
> 
> It was part of the exception_fixup() cleanup.  The first 4 patches are not
> specifically related to CET-SS.

"Cleanup" doesn't mean it's needed as a prereq for the CET-SS, so I'm
afraid it's still not really clear to me whether we indeed want to
effectively revert the earlier change, which had a reason to be done
the way it was done.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:01:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jY9vf-0004xI-GH; Mon, 11 May 2020 15:01:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jY9vd-0004x9-Ii
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:01:13 +0000
X-Inumbo-ID: 407cce0c-9398-11ea-9887-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 407cce0c-9398-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 15:01:12 +0000 (UTC)
Subject: Re: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-3-andrew.cooper3@citrix.com>
 <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8f1d68b1-895a-d2a6-4dcb-55b688b03336@citrix.com>
Date: Mon, 11 May 2020 16:01:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 14:08, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> For one, they render the vector in a different base.
>>
>> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
>> mnemonic, which starts bringing the code/diagnostics in line with the Intel
>> and AMD manuals.
> For this "bringing in line" purpose I'd like to see whether you could
> live with some adjustments to how you're currently doing things:
> - NMI is nowhere prefixed by #, hence I think we'd better not do so
>   either; may require embedding the #-es in the names[] table, or not
>   using N() for NMI

No-one is going to get confused by seeing #NMI in an error message.  I
don't mind juggling the existing names table, but anything more
complicated is overkill.

> - neither Coprocessor Segment Overrun nor vector 0x0f have a mnemonic
>   and hence I think we shouldn't invent one; just treat them like
>   other reserved vectors (of which at least vector 0x09 indeed is one
>   on x86-64)?

This I disagree with.  Coprocessor Segment Overrun *is* its name in both
manuals, and the avoidance of vector 0xf is clearly documented as well,
due to it being the default PIC Spurious Interrupt Vector.

Neither CSO nor SPV is expected to be encountered in practice, but if
they are, highlighting them is a damn sight more helpful than pretending
they don't exist.
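For concreteness, a mnemonic lookup of the kind being debated could look roughly like this (an illustrative sketch, not the actual Xen patch; the mnemonics follow the Intel/AMD manuals, vec_name() is the function name from the commit message, and the CSO/SPV entries reflect the position argued above rather than settled code):

```c
#include <stddef.h>

/* Mnemonics per the Intel SDM / AMD APM; 9 (CSO) and 15 (SPV) are the
 * two contested entries in this thread. */
static const char *const exc_names[] = {
    "DE",  "DB",  "NMI", "BP",  "OF",  "BR",  "UD",  "NM",
    "DF",  "CSO", "TS",  "NP",  "SS",  "GP",  "PF",  "SPV",
    "MF",  "AC",  "MC",  "XM",  "VE",  "CP",
};

const char *vec_name(unsigned int vec)
{
    /* Reserved/unknown vectors get a placeholder rather than an
     * invented mnemonic - the alternative Jan argues for. */
    if ( vec >= sizeof(exc_names) / sizeof(exc_names[0]) ||
         exc_names[vec] == NULL )
        return "??";

    return exc_names[vec];
}
```

With a table like this, a reserved vector such as 0x18 renders as "??" alongside its number, while vectors with documented names render as their mnemonic.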

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYA3G-0005FM-B7; Mon, 11 May 2020 15:09:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Y5i=6Z=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYA3F-0005FH-8A
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:09:05 +0000
X-Inumbo-ID: 5a3e830c-9399-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a3e830c-9399-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 15:09:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3975DAF0F;
 Mon, 11 May 2020 15:09:06 +0000 (UTC)
Subject: Re: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-3-andrew.cooper3@citrix.com>
 <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
 <8f1d68b1-895a-d2a6-4dcb-55b688b03336@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b1ef905c-dab6-d1c3-4673-4c06c7e94a0a@suse.com>
Date: Mon, 11 May 2020 17:09:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8f1d68b1-895a-d2a6-4dcb-55b688b03336@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 17:01, Andrew Cooper wrote:
> On 04/05/2020 14:08, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> For one, they render the vector in a different base.
>>>
>>> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
>>> mnemonic, which starts bringing the code/diagnostics in line with the Intel
>>> and AMD manuals.
>> For this "bringing in line" purpose I'd like to see whether you could
>> live with some adjustments to how you're currently doing things:
>> - NMI is nowhere prefixed by #, hence I think we'd better not do so
>>   either; may require embedding the #-es in the names[] table, or not
>>   using N() for NMI
> 
> No-one is going to get confused by seeing #NMI in an error message.  I
> don't mind juggling the existing names table, but anything more
> complicated is overkill.
> 
>> - neither Coprocessor Segment Overrun nor vector 0x0f have a mnemonic
>>   and hence I think we shouldn't invent one; just treat them like
>>   other reserved vectors (of which at least vector 0x09 indeed is one
>>   on x86-64)?
> 
> This I disagree with.  Coprocessor Segment Overrun *is* its name in both
> manuals, and the avoidance of vector 0xf is clearly documented as well,
> due to it being the default PIC Spurious Interrupt Vector.
> 
> Neither CSO nor SPV is expected to be encountered in practice, but if
> they are, highlighting them is a damn sight more helpful than pretending
> they don't exist.

How is them occurring (and getting logged with their vector numbers)
any different from other reserved, acronym-less vectors? I particularly
didn't suggest to pretend they don't exist; instead I did suggest that
they are as reserved as, say, vector 0x18. By inventing an acronym and
logging this instead of the vector number you'll make people other than
you have to look up what the odd acronym means iff such an exception
ever got raised.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:14:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:14:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYA8P-00063g-Vn; Mon, 11 May 2020 15:14:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYA8P-00063b-0J
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:14:25 +0000
X-Inumbo-ID: 1526cc6a-939a-11ea-a220-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1526cc6a-939a-11ea-a220-12813bfff9fa;
 Mon, 11 May 2020 15:14:18 +0000 (UTC)
Subject: Re: [PATCH 03/16] x86/traps: Factor out exception_fixup() and make
 printing consistent
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-4-andrew.cooper3@citrix.com>
 <f7cb696a-5c2c-4aa6-d379-ed77772b7c35@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a397dd69-2384-a4af-d127-9189a730a554@citrix.com>
Date: Mon, 11 May 2020 16:14:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <f7cb696a-5c2c-4aa6-d379-ed77772b7c35@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 14:20, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -774,10 +774,27 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>>            trapnr, vec_name(trapnr), regs->error_code);
>>  }
>>  
>> +static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>> +{
>> +    unsigned long fixup = search_exception_table(regs);
>> +
>> +    if ( unlikely(fixup == 0) )
>> +        return false;
>> +
>> +    /* Can currently be triggered by guests.  Make sure we ratelimit. */
>> +    if ( IS_ENABLED(CONFIG_DEBUG) && print )
> I didn't think we consider dprintk()-s a possible security issue.
> Why would we consider so a printk() hidden behind
> IS_ENABLED(CONFIG_DEBUG)? IOW I think one of XENLOG_GUEST and
> IS_ENABLED(CONFIG_DEBUG) wants dropping.

Who said anything about a security issue?

I'm deliberately not using dprintk() for the reasons explained in the
commit message, so IS_ENABLED(CONFIG_DEBUG) is to cover that.

XENLOG_GUEST is because everything (other than the boot path) hitting
this is caused directly by a guest action, and needs rate-limiting.  This
was not consistent before.
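The pattern being discussed - look the faulting IP up in the exception table and, on a hit, redirect execution to the recovery address - can be mocked up self-contained as follows (the table layout, addresses, and plain printf() are illustrative stand-ins for Xen's real extable and its ratelimited, CONFIG_DEBUG-gated printk()):

```c
#include <stdbool.h>
#include <stdio.h>

struct cpu_user_regs { unsigned long rip; };

/* Each entry maps a faulting instruction address to its recovery stub. */
struct exception_entry { unsigned long addr, cont; };

static const struct exception_entry extable[] = {
    { 0x1000, 0x1080 },
    { 0x2000, 0x20c0 },
};

static unsigned long search_exception_table(const struct cpu_user_regs *regs)
{
    for ( unsigned int i = 0; i < sizeof(extable) / sizeof(extable[0]); i++ )
        if ( extable[i].addr == regs->rip )
            return extable[i].cont;

    return 0;
}

static bool exception_fixup(struct cpu_user_regs *regs, bool print)
{
    unsigned long fixup = search_exception_table(regs);

    if ( fixup == 0 )
        return false;

    if ( print )  /* in the patch, also gated on IS_ENABLED(CONFIG_DEBUG) */
        printf("Fixup: [%#lx] => [%#lx]\n", regs->rip, fixup);

    regs->rip = fixup;  /* resume at the recovery stub */
    return true;
}
```

The point of folding regs->rip = fixup into the helper is that callers no longer juggle a raw fixup address; the page-fault caller's behaviour change this causes is exactly what the review comment above is about.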

>
>> @@ -1466,12 +1468,11 @@ void do_page_fault(struct cpu_user_regs *regs)
>>          if ( pf_type != real_fault )
>>              return;
>>  
>> -        if ( likely((fixup = search_exception_table(regs)) != 0) )
>> +        if ( likely(exception_fixup(regs, false)) )
>>          {
>>              perfc_incr(copy_user_faults);
>>              if ( unlikely(regs->error_code & PFEC_reserved_bit) )
>>                  reserved_bit_page_fault(addr, regs);
>> -            regs->rip = fixup;
> I'm afraid this modification can't validly be pulled ahead -
> show_execution_state(), as called from reserved_bit_page_fault(),
> wants to / should print the original RIP, not the fixed up one.

This path is bogus to begin with.  Any RSVD pagefault (other than the
Shadow MMIO fastpath, handled earlier) is a bug in Xen and should be
fatal rather than just a warning on extable-tagged instructions.

Amongst other things, it would constitute an L1TF-vulnerable gadget.
The MMIO fastpath is only safe-ish because it also inverts the upper
address bits.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:28:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYALS-0006zW-6c; Mon, 11 May 2020 15:27:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2nSL=6Z=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYALQ-0006zR-Kq
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:27:52 +0000
X-Inumbo-ID: f9977114-939b-11ea-a222-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f9977114-939b-11ea-a222-12813bfff9fa;
 Mon, 11 May 2020 15:27:51 +0000 (UTC)
From: George Dunlap <George.Dunlap@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
Thread-Topic: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from
 the menuconfig directly
Thread-Index: AQHWHvtEfLlMb5EF90y7GzIGxHVxJ6iRncgAgAYL5QCAABeggIAC9dcAgAILiACABg54AIAAIM8A
Date: Mon, 11 May 2020 15:27:46 +0000
Message-ID: <2716ACC1-E38D-45F9-8C07-D99FF846330E@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
In-Reply-To: <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <0EE21F3C4624CA4D8B688630C49563B5@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 11, 2020, at 2:30 PM, Julien Grall <julien@xen.org> wrote:
> 
> Hi Ian,
> 
> Thank you for the clarification.
> 
> On 07/05/2020 18:01, Ian Jackson wrote:
>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>> mention further up. The two need to be carefully weighed against
>>>>>> one another.
>>>> 
>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>> I think experience has shown that this was a mistake.
>>>> 
>>>>> I don’t think we need to make it difficult for people to declare
>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>> “Can build something which is not security supported”.  People who
>>>>> are building their own hypervisors are already pretty well advanced;
>>>>> I think we can let them shoot themselves in the foot if they want
>>>>> to.
>>>> 
>>>> Precisely.
>>> 
>>> Can I consider this as an Acked-by? :)
>> I am happy with the principle of the change.  I haven't reviewed the
>> details of the commit message etc.
>> I reviewed the thread and there were two concerns raised:
>>  * The question of principle.  I disagree with this concern
>>    because I approve of the principle of the patch.
>>  * Some detail about the precise justification as written in
>>    the commit message, regarding `clean' targets.  Apparently the
>>    assertion may not be completely true.  I haven't seen a proposed
>>    alternative wording.
> 
> I have checked the latest staging, the `clean` target doesn't trash .config anymore.
> 
>> I don't feel I should ack a controversial patch with an unresolved
>> wording issue.  Can you tell me what your proposed wording is ?
>> To avoid blocking this change I would be happy to review your wording
>> and see if it meets my reading of the stated objection.
> 
> Here a suggested rewording:
> 
> "EXPERT mode is currently used to gate any options that are in technical
> preview or not security supported At the moment, the only way to select
> it is to use XEN_CONFIG_EXPERT=y on the make command line.
> 
> However, if the user forget to add the option when (re)building or when using menuconfig, then .config will get rewritten. This may lead to a rather frustrating experience as it is difficult to diagnostic the
> issue.
> 
> A lot of the options behind EXPERT would benefit to be more accessible so user can experiment with it and voice any concern before they are fully be supported.
> 
> So rather than making really difficult to experiment or tweak your Xen (for instance by adding a built-in command line), this option can now be selected from the menuconfig.
> 
> This doesn't change the fact a Xen with EXPERT mode selected will not be security supported.
> "

How about this, clarifying that top-level .config is an option, but that it’s still better to put it in menuconfig?  (Also note a number of grammar tweaks.)

—

EXPERT mode is currently used to gate any options that are in technical
preview or not security supported. At the moment, this is selected by adding XEN_CONFIG_EXPERT=y on the make command line, or to the (currently undocumented) top-level .config file.

This makes the option very unintuitive to use: If the user forgets to add the option when (re)building or when using menuconfig, then xen/.config will be silently rewritten, leading to behavior which is very difficult to diagnose.  Adding XEN_CONFIG_EXPERT=y to the top-level .config is not obvious behavior, particularly as the file is undocumented.

A lot of the options behind EXPERT would benefit from being more accessible so users can experiment with them and voice any concerns before they are fully supported.

To make this option more discoverable and consistent to use, make it possible to select it from the menuconfig.

This doesn't change the fact a Xen with EXPERT mode selected will not be security supported.

—

 -George


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:29:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:29:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYANN-00077J-JB; Mon, 11 May 2020 15:29:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HsYJ=6Z=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYANM-00077E-2a
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:29:52 +0000
X-Inumbo-ID: 4193bcac-939c-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4193bcac-939c-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 15:29:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=IAa6CEQzUpKX9xK0f/kTrCmsLxzMGB9ZkNttFZ3ddU8=; b=SC66WyQbC5N5hzdTGCHEYVBI+X
 KcQDok6+QYJDVSlTAYAFxfkx7CsRJKQ78hjHZ2ikrpcQqyCEv7+s5wKPW7TtoX5Tbr1jHYx/s6mPH
 r6tobP7eyVEIZ0Pc+XVr5FN53J4CAQoMzSnJFlcpBW54JTyUqlVlAskCQOWmNs1ERYKI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYANJ-0001AX-Uy; Mon, 11 May 2020 15:29:49 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYANJ-0002pd-J9; Mon, 11 May 2020 15:29:49 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: George Dunlap <George.Dunlap@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <2716ACC1-E38D-45F9-8C07-D99FF846330E@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7048ed1e-1ed4-c594-108b-4bb4cbb77eaf@xen.org>
Date: Mon, 11 May 2020 16:29:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <2716ACC1-E38D-45F9-8C07-D99FF846330E@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <JBeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi George,

On 11/05/2020 16:27, George Dunlap wrote:
> 
> 
>> On May 11, 2020, at 2:30 PM, Julien Grall <julien@xen.org> wrote:
>>
>> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>>
>> Hi Ian,
>>
>> Thank you for the clarification.
>>
>> On 07/05/2020 18:01, Ian Jackson wrote:
>>> Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>> On 04/05/2020 13:34, Ian Jackson wrote:
>>>>> George Dunlap writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>> On Apr 30, 2020, at 3:50 PM, Jan Beulich <JBeulich@suse.com> wrote:
>>>>>>> Well, if I'm not mis-remembering it was on purpose to make it more
>>>>>>> difficult for people to declare themselves "experts". FAOD I'm not
>>>>>>> meaning to imply I don't see and accept the frustration aspect you
>>>>>>> mention further up. The two need to be carefully weighed against
>>>>>>> one another.
>>>>>
>>>>> Yes, it was on purpose.  However, I had my doubts at the time and
>>>>> I think experience has shown that this was a mistake.
>>>>>
>>>>>> I don’t think we need to make it difficult for people to declare
>>>>>> themselves experts, particularly as “all” it means at the moment is,
>>>>>> “Can build something which is not security supported”.  People who
>>>>>> are building their own hypervisors are already pretty well advanced;
>>>>>> I think we can let them shoot themselves in the foot if they want
>>>>>> to.
>>>>>
>>>>> Precisely.
>>>>
>>>> Can I consider this as an Acked-by? :)
>>> I am happy with the principle of the change.  I haven't reviewed the
>>> details of the commit message etc.
>>> I reviewed the thread and there were two concerns raised:
>>>   * The question of principle.  I disagree with this concern
>>>     because I approve of the principle of the patch.
>>>   * Some detail about the precise justification as written in
>>>     the commit message, regarding `clean' targets.  Apparently the
>>>     assertion may not be completely true.  I haven't seen a proposed
>>>     alternative wording.
>>
>> I have checked the latest staging; the `clean` target doesn't trash .config anymore.
>>
>>> I don't feel I should ack a controversial patch with an unresolved
>>> wording issue.  Can you tell me what your proposed wording is ?
>>> To avoid blocking this change I would be happy to review your wording
>>> and see if it meets my reading of the stated objection.
>>
>> Here is a suggested rewording:
>>
>> "EXPERT mode is currently used to gate any options that are in technical
>> preview or not security supported At the moment, the only way to select
>> it is to use XEN_CONFIG_EXPERT=y on the make command line.
>>
>> However, if the user forget to add the option when (re)building or when using menuconfig, then .config will get rewritten. This may lead to a rather frustrating experience as it is difficult to diagnostic the
>> issue.
>>
>> A lot of the options behind EXPERT would benefit to be more accessible so user can experiment with it and voice any concern before they are fully be supported.
>>
>> So rather than making really difficult to experiment or tweak your Xen (for instance by adding a built-in command line), this option can now be selected from the menuconfig.
>>
>> This doesn't change the fact a Xen with EXPERT mode selected will not be security supported.
>> "
> 
> How about this, clarifying that top-level .config is an option, but that it’s still better to put it in menuconfig?  (Also note a number of grammar tweaks.)
> 
> —
> 
> EXPERT mode is currently used to gate any options that are in technical
> preview or not security supported. At the moment, this is selected by adding XEN_CONFIG_EXPERT=y on the make command line, or to the (currently undocumented) top-level .config file.
> 
> This makes the option very unintuitive to use: If the user forgets to add the option when (re)building or when using menuconfig, then xen/.config will be silently rewritten, leading to behavior which is very difficult to diagnose.  Adding XEN_CONFIG_EXPERT=y to the top-level .config is not obvious behavior, particularly as the file is undocumented.
> 
> A lot of the options behind EXPERT would benefit from being more accessible so users can experiment with them and voice any concerns before they are fully supported.
> 
> To make this option more discoverable and consistent to use, make it possible to select it from the menuconfig.
> 
> This doesn't change the fact a Xen with EXPERT mode selected will not be security supported.

I am happy with this wording.

Cheers,

-- 
Julien Grall
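[For readers following along, the change being discussed boils down to giving EXPERT a visible prompt in Kconfig. A minimal sketch of what that might look like — illustrative only; the option text and help wording here are not taken from the actual patch:]

```kconfig
config EXPERT
	bool "Configure EXPERT features"
	help
	  This option gates features which are in technical preview or
	  otherwise not security supported.

	  A Xen built with EXPERT enabled is not security supported.
```

With a prompt like this, `make -C xen menuconfig` can toggle the option directly, instead of requiring XEN_CONFIG_EXPERT=y in the environment on every invocation.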


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:46:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYAdA-0000Lo-6H; Mon, 11 May 2020 15:46:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYAd9-0000Lj-6D
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:46:11 +0000
X-Inumbo-ID: 887e9202-939e-11ea-a225-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 887e9202-939e-11ea-a225-12813bfff9fa;
 Mon, 11 May 2020 15:46:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589211970;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=ay3lVMQuKXg3xeonyfw3lTv7xk1dkOWJ+SpboOyyZys=;
 b=RSv+e1WHjetkAh/tHcuW7KXM58WHLk14OIbXu91RCReZ5h8AboTDJDNm
 l/+1jJjo/Gw+bIEMzeRuP/Tk31/bDRBaRS3X0hJGQQcJd3ajOqok96xDW
 qBXAp2nynElzufZQS4v0R204h8WtGpEKJnDFFNwfNuQ6G8bVi9KSbFnQZ 8=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: BIEdjZobJADB4Mioh+CCRcH4RWWhTKmZGTIxFlb5rqto/car2qxTTdep9LV3VYbDIDuEnm3DgO
 NTjmx4RJnW8asjg+N6Mf9xfMXUgByQ49E84QhHebI2CshQ3zK/abqfDq3yZk1600ggXnJBHtrm
 D8MGSAv9HmCWnXu/39UKpe7CWggqqRuGBUvVfwoxWscdQqd/bA9dEv126324zNcs4o4g0glsKW
 auEnKIRO1eWMJkWu7keEVmRmknD1Q1Fr3njQIf3nZdNyHDtGeA3pVYSPreUlzlD8EC98xKfpzw
 Ll0=
X-SBRS: 2.7
X-MesageID: 17495273
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17495273"
Subject: Re: [PATCH 05/16] x86/shstk: Introduce Supervisor Shadow Stack support
To: Jan Beulich <jbeulich@suse.com>, Anthony Perard <anthony.perard@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-6-andrew.cooper3@citrix.com>
 <d0347fec-3ccb-daa7-5c4d-f0e74b5fb247@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <00302d53-499a-7f6e-76a5-a5eec4e11252@citrix.com>
Date: Mon, 11 May 2020 16:46:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <d0347fec-3ccb-daa7-5c4d-f0e74b5fb247@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 14:52, Jan Beulich wrote:
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -34,6 +34,9 @@ config ARCH_DEFCONFIG
>>  config INDIRECT_THUNK
>>  	def_bool $(cc-option,-mindirect-branch-register)
>>  
>> +config HAS_AS_CET
>> +	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)
> I see you add as-instr here as a side effect. Until the other
> similar checks get converted, I think for the time being we should
> use the old model, to have all such checks in one place. I realize
> this means you can't have a Kconfig dependency then.

No.  That's like asking me to keep on using bool_t, and you are the
first person to point out and object to that in newly submitted patches.

> Also why do you check multiple insns, when just one (like we do
> elsewhere) would suffice?

Because CET-SS and CET-IBT are orthogonal ABIs, but you wanted a single
CET symbol, rather than a CET_SS symbol.

I picked a sample of various instructions to get broad coverage of CET
without including every instruction.

> The crucial insns to check are those which got changed pretty
> close before the release of 2.29 (in the cover letter you
> mention 2.32): One of incssp{d,q}, setssbsy, or saveprevssp.
> There weren't official binutils releases with the original
> insns, but distros may still have picked up intermediate
> snapshots.

I've got zero interest in catering to distros which are still using
obsolete pre-release toolchains.  Bleeding edge toolchains are one
thing, but this is like asking us to support the middle changeset of the
series introducing CET, which is absolutely a non-starter.

If the instructions were missing from real binutils releases, then
obviously we should exclude those releases, but that doesn't appear to
be the case.

>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -95,6 +95,36 @@ unsigned long __initdata highmem_start;
>>  size_param("highmem-start", highmem_start);
>>  #endif
>>  
>> +static bool __initdata opt_xen_shstk = true;
>> +
>> +static int parse_xen(const char *s)
>> +{
>> +    const char *ss;
>> +    int val, rc = 0;
>> +
>> +    do {
>> +        ss = strchr(s, ',');
>> +        if ( !ss )
>> +            ss = strchr(s, '\0');
>> +
>> +        if ( (val = parse_boolean("shstk", s, ss)) >= 0 )
>> +        {
>> +#ifdef CONFIG_XEN_SHSTK
>> +            opt_xen_shstk = val;
>> +#else
>> +            no_config_param("XEN_SHSTK", "xen", s, ss);
>> +#endif
>> +        }
>> +        else
>> +            rc = -EINVAL;
>> +
>> +        s = ss + 1;
>> +    } while ( *ss );
>> +
>> +    return rc;
>> +}
>> +custom_param("xen", parse_xen);
> What's the idea here going forward, i.e. why the new top level
> "xen" option? Almost all options are for Xen itself, after all.
> Did you perhaps mean this to be "cet"?

I forgot an RFC for this, as I couldn't think of anything better.  "cet"
as a top level option isn't going to gain more than {no-}shstk and
{no-}ibt as suboptions.

>> --- a/xen/scripts/Kconfig.include
>> +++ b/xen/scripts/Kconfig.include
>> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>>  # Return y if the linker supports <flag>, n otherwise
>>  ld-option = $(success,$(LD) -v $(1))
>>  
>> +# $(as-instr,<instr>)
>> +# Return y if the assembler supports <instr>, n otherwise
>> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
> CLANG_FLAGS caught my eye here, then noticing that cc-option
> also uses it. Anthony - what's the deal with this? It doesn't
> look to get defined anywhere, and I also don't see what clang-
> specific about these constructs.

This is inherited from Linux.  There is obviously a reason.

However, I'm not interested in diving into that rabbit hole in an
unrelated series.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:56:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:56:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYAnP-0001FC-74; Mon, 11 May 2020 15:56:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jYAnO-0001F7-Bd
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:56:46 +0000
X-Inumbo-ID: 03095043-93a0-11ea-a226-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03095043-93a0-11ea-a226-12813bfff9fa;
 Mon, 11 May 2020 15:56:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589212604;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=87sgAQ15qhIOWSaRctS80Wh/3gY+1va5Yr3D2qTSETM=;
 b=TMXdhktIXL/ppSIE1ZG1DZv72rROw9OO+PL5+DksF3wQAg7b145YJXND
 2vjn1IxFLi8r4u+vh5r0PPVq+ABCiaHi2UfoNPcL1l/GrDdekkTUvIcyx
 XahtXjQALkL3nH7LotODktr+UgC9AkeGnp3yUezENVLNhHb3D+O7bNm+N w=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: D6aacQA9vn7Kd3nNvVuNtLnB3B0aGXg1m438x8f99UzvkGWsHoO9Nj+mCA1tmpeOvF2GFGWzjb
 TXv8QqI8Qu5KlmAEtBTv+YqRzF2n2wehTO2p/Hj7PxHYSeShWVTdnoQgZ02ftno12b56gv4O+i
 Uoy+tV9rLxoe6Ub2VbwNyyRbCAdLecpR+pCdXgXckcx52gqKb2uQpRxxQAt2nDkLKJEiwZntf4
 m0k32Mu3Oo7KiKv62C2pGOJu/bxhHFjNDqcIcvVnivPj7PdDU9rvIShUzq6XGFcGjg7BMdeskU
 Xto=
X-SBRS: 2.7
X-MesageID: 17604081
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17604081"
Date: Mon, 11 May 2020 17:56:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
Message-ID: <20200511155637.GD35422@Air-de-Roger>
References: <20200511101753.36610-1-roger.pau@citrix.com>
 <f3471cee-342e-c169-f3eb-34f559892336@citrix.com>
 <20200511110743.GB35422@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200511110743.GB35422@Air-de-Roger>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 01:07:43PM +0200, Roger Pau Monné wrote:
> On Mon, May 11, 2020 at 11:38:49AM +0100, Andrew Cooper wrote:
> > On 11/05/2020 11:17, Roger Pau Monne wrote:
> > > Apply a workaround for Intel errata CLX30: "A Pending Fixed Interrupt
> > > May Be Dispatched Before an Interrupt of The Same Priority Completes".
> > >
> > > It's not clear which models are affected, as the erratum is listed in
> > > the "Second Generation Intel Xeon Scalable Processors" specification
> > > update, but the issue has been seen as far back as Nehalem processors.
> > 
> > Really?  I'm only aware of it being Haswell and later.

So I've found the following related errata:

BDX99: E7-8800 v4, E7-4800 v4 (Broadwell)
CLX30: 2nd Gen Xeon Scalable (Cascade Lake)
SKX100: Xeon Scalable (Skylake)
CFW125: E-2100, E-2200 (Kaby Lake)
BDF104: E5-2600 v4 (Broadwell)
BDH85: i7 LGA2011 v3 socket (Broadwell)
BDM135: 5th Gen Intel Core, Core-M, Mobile Intel Pentium/Celeron (Broadwell)
KBW131: E3-1200 v6 (Kaby Lake)

So my plan would be to cover all chips from Broadwell to Cascade Lake;
that is: Broadwell, Skylake, Kaby Lake, Coffee Lake,
{Cannon/Whiskey/Amber} Lake and Cascade Lake. I haven't found any
mention of the issue in the Haswell specification updates, and the one
for Xeon E7 v3 was last updated in April 2020.

I think the list of IDs to match against should be:

#define INTEL_FAM6_MODEL(m) { X86_VENDOR_INTEL, 6, m, X86_FEATURE_ALWAYS }
{
    /* Broadwell */
    INTEL_FAM6_MODEL(0x47),
    INTEL_FAM6_MODEL(0x3d),
    INTEL_FAM6_MODEL(0x4f),
    INTEL_FAM6_MODEL(0x56),
    /* Skylake (client) */
    INTEL_FAM6_MODEL(0x5e),
    INTEL_FAM6_MODEL(0x4e),
    /* {Sky/Cascade}lake (server) */
    INTEL_FAM6_MODEL(0x55),
    /* {Kaby/Coffee/Whiskey/Amber} Lake */
    INTEL_FAM6_MODEL(0x9e),
    INTEL_FAM6_MODEL(0x8e),
    /* Cannon Lake */
    INTEL_FAM6_MODEL(0x66),
    {}
}

Let me know if that sounds sensible.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 11 15:58:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 15:58:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYAon-0001MR-IA; Mon, 11 May 2020 15:58:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AmMB=6Z=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jYAom-0001MM-JL
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 15:58:12 +0000
X-Inumbo-ID: 364e725d-93a0-11ea-a226-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 364e725d-93a0-11ea-a226-12813bfff9fa;
 Mon, 11 May 2020 15:58:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589212692;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=JC5sPfldlBy30kmzvDg365erw9XH1ut1ZN6wk1PR950=;
 b=MCDgisCJbBeaYDu8fjFQQVF3APjPLBnu6IW6fjyN5WXMkdW29NVhWVom
 x3BqZ4cY8apO/seW1OviQp+ByKenAW0CPVdKYMt6AR7GJgXfosO6x1bzQ
 uGNjDUZylO00+WCVcbdz9vOgrsu4hu4RTQgQMZwhmshqVjZnn6mYQcVDa Q=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: r8wtShB3/pQv64BOw5IebljoD2MikdygoswJ4vemMPMVnym1XOTXnVHMmb5rUMhIl6yG5G3FMt
 loJDJMJE9NxApaeysPWJzw44hv8SMLrcGVW8NXWnpDlYgY5GSahkFQNNYvedYA42QM4KNClVmU
 RAZlOi2FwJv87frLCADxDAytUtlmLfQwqITtrSteqxtfGcxPhddmqTq/5NNO0rRvrzFgnVL8Kh
 jT98VtoflPh2IgwIXdU6E4D4qPdjx1c/1i8vnzstGRyOSuahIAZT6tROCkzEzxiREszXG5eFCe
 GBY=
X-SBRS: 2.7
X-MesageID: 17496405
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17496405"
Date: Mon, 11 May 2020 17:58:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not
 blindly register a region
Message-ID: <20200511155802.GE35422@Air-de-Roger>
References: <5df03dc2-6bbd-6f5f-e2f8-18ed2062eb0f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5df03dc2-6bbd-6f5f-e2f8-18ed2062eb0f@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 04:50:47PM +0200, Jan Beulich wrote:
> The op has a "is reserved" flag, and hence registration shouldn't
> happen unilaterally.
> 
> Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 11 17:12:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 17:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYByT-0008LU-Tt; Mon, 11 May 2020 17:12:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xqE7=6Z=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jYByS-0008LP-FU
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 17:12:16 +0000
X-Inumbo-ID: 8f59cd2e-93aa-11ea-b07b-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f59cd2e-93aa-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 17:12:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589217135;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=qxw/OiIRWrI6NnbVgTiTR9UQjL3DT+kEU7WtKYsgPrk=;
 b=fuVGiQT4QvmuTiOc8qjfgJowIZoNFVZRqYdcNY2O8Almadhf2ZqjatWE
 RXR1Jebmbdb5ElxlsK9Re1kr0XNdYX78poudwP948bt1AGvXNXKQGVRYA
 K3xv5uiYQVv+RMq3nHf5QUHzb0J/KLF+WaySWatB48lsjZywmrC8wqByY g=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: zCZ4fhcncFNcK03AQumN08SOxzTXMCE8u2e6JOiuCyVRyUkZmZBi24vARVXbubka3KN0dhxmpb
 nc3d96hAWBCjRY+yVyZ8l9TI36z8Ia/MRWMSHaDdUvdITyzFFrbQeFqdco6eHQID2d9VHbb7BT
 mCpLM1mZydjHqJWOX3uAQvDHX0wd+CDQ2tteT+/CKZRqLEQKfN9SPxvWRAMg085fsEo1pXukLf
 FGijOWJa2k5a6Nx7/kcwnhCc9jezOluri9Kn0lgKdF+Di1+TUBtLYTVmnUElcNAdjZAEPWD6IB
 anA=
X-SBRS: 2.7
X-MesageID: 17929351
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17929351"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24249.34665.704224.478005@mariner.uk.xensource.com>
Date: Mon, 11 May 2020 18:12:09 +0100
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
In-Reply-To: <7048ed1e-1ed4-c594-108b-4bb4cbb77eaf@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <2716ACC1-E38D-45F9-8C07-D99FF846330E@citrix.com>
 <7048ed1e-1ed4-c594-108b-4bb4cbb77eaf@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <JBeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Julien Grall writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
> I am happy with this wording.

Thanks for engaging with the commit message issue.

This patch is:

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

with any of the suggested commit message wordings.  I trust your
judgement to pick a suitable wording.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 11 17:14:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 17:14:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYC0j-0008Tc-Aw; Mon, 11 May 2020 17:14:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xqE7=6Z=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jYC0h-0008TT-BQ
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 17:14:35 +0000
X-Inumbo-ID: e2150bc8-93aa-11ea-9887-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2150bc8-93aa-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 17:14:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589217274;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=QdEpiE9Ok4AbFHTujsfNJOBuCfXMJVyazrV+1Fil8NM=;
 b=RwdRWcpLe9FJ6sFnYanyJUYzg2CSedirCK9bhkSqgaAIueGPHovUGHQY
 fPChX6VIopvQOgetdFaBjGovYBscbPpvry0/7kxMCIb9xPZyMfseiXnid
 kPo6DRjiaOXKYYFL9p6vm+cFEcK8VCwSmFczj/ayt39TEkejia1JbYtLh w=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: IuTxPdF11qpUfdKXQbT6rN3NWu2ZouDu9Jyea7Lv62nn8a2ten5/etzDRSQ6sGFSeMibcwu1wm
 Dm/jGjoQMNuqUBlNuLzLUbuRuJKZiSetOmehThNNKPFDPoh/vHaAYcrkmMGGVU78ytdZ1bUwmV
 9zCiuFP2m3KdC1f27xoLsrpB3RyJv+SOV6pzstRqzDH0u41EZlkKpf/469AnuDvOTbqV8rxj3V
 WZtC+7biVFja68zkB2ZgMVohkjwZfPzrR73dvFNLmJDbXwuMjYjJzBo9ZiAjy1PfwkXXZTUg9k
 q7g=
X-SBRS: 2.7
X-MesageID: 17611570
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17611570"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24249.34804.568523.847077@mariner.uk.xensource.com>
Date: Mon, 11 May 2020 18:14:28 +0100
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
In-Reply-To: <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
> I'm trying to make the point that your patch, to me, looks to be
> trying to overcome a problem for which we have had a solution all
> the time.

Thanks for this clear statement of your objection.  I'm afraid I don't
agree.  Even though .config exists (and is even used by osstest, so I
know about it) I don't think it is as good as having it in
menuconfig.

I agree with George's comments:

| From a UI perspective, I think that’s a poor UI — enabling
| CONFIG_EXPERT from the menuconfig is more discoverable and more in
| line with what people expect.

and I haven't seen good reasons (in my opinion) for diverging from
that discoverability and expectation.

So I have given my ack in the other mail.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 11 17:17:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 17:17:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYC3Q-0000B4-SL; Mon, 11 May 2020 17:17:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nxab=6Z=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYC3Q-0000Ay-2g
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 17:17:24 +0000
X-Inumbo-ID: 407f130c-93ab-11ea-a234-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 407f130c-93ab-11ea-a234-12813bfff9fa;
 Mon, 11 May 2020 17:17:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ItjSiiyHvU0rssvYrFRTetLq6+mbZM5ehA/qggEbfnk=; b=5N7gA2t6uqK8vW64g3JMofTrj
 cVKEVJm9vPszrNpSasgJ+9jp9L8wkLDFlG7x46m90hr8Rg0YmSnOxfKO9TpPEDRvDTwVxrWs/7s65
 aApA5kqI86CSrjVlKNQMy+jbinZOKYDelROdmQZLv+ZCq+uHdZ2Y8u3FTOgql57py0Uh4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYC3D-0003uW-Nr; Mon, 11 May 2020 17:17:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYC3D-0005K1-BB; Mon, 11 May 2020 17:17:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYC3D-0006bW-AA; Mon, 11 May 2020 17:17:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150133-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150133: regressions - FAIL
X-Osstest-Failures: linux-linus:build-amd64-libvirt:libvirt-build:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=2ef96a5bb12be62ef75b5828c0aab838ebb29cb8
X-Osstest-Versions-That: linux=e99332e7b4cda6e60f5b5916cf9943a79dbef902
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 11 May 2020 17:17:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150133 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150133/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 150126

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150126

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150126
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150126
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150126
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150126
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150126
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150126
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150126
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150126
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150126
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                2ef96a5bb12be62ef75b5828c0aab838ebb29cb8
baseline version:
 linux                e99332e7b4cda6e60f5b5916cf9943a79dbef902

Last test of basis   150126  2020-05-10 14:58:51 Z    1 days
Testing same since   150133  2020-05-11 06:42:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexey Dobriyan <adobriyan@gmail.com>
  Anton Eidelman <anton@lightbitslabs.com>
  Christoph Hellwig <hch@lst.de>
  Ingo Molnar <mingo@kernel.org>
  Jann Horn <jannh@google.com>
  Jens Axboe <axboe@kernel.dk>
  Joerg Roedel <jroedel@suse.de>
  John Garry <john.garry@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Julia Lawall <Julia.Lawall@inria.fr>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Linus Torvalds <torvalds@linux-foundation.org>
  Miroslav Benes <mbenes@suse.cz>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qian Cai <cai@lca.pw>
  Randy Dunlap <rdunlap@infradead.org>
  Rick Edgecombe <rick.p.edgecombe@intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Yufen Yu <yuyufen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 985 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 11 17:20:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 17:20:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYC6A-0000zO-C9; Mon, 11 May 2020 17:20:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYC69-0000zJ-OD
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 17:20:13 +0000
X-Inumbo-ID: ac24b670-93ab-11ea-9887-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac24b670-93ab-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 17:20:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589217612;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=V4AWX8xeCPie/Pw176bJsSk4E5fPG4JRdKC6J9P9dDg=;
 b=QFqBSWaA6HOav11pto/aKiwVNzUVsjFH5CZcC0eX8W81tEW6eyTWRyym
 Cdjsq82BN9Sa6A40Q8zgrZxm9YJ8jRa8JmmV3xP6zSToSyOyuajyTJFhF
 uOVZKbFOcin/vPoO0QJtu8Ba4JuwULKn+D39npteyf+/sRSrX6f6+yEFF g=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 2lDwPoN6XwKW149XCgLpOVaYISXx3JP8MRpoHwDL3QnAsLr/LdrTBu+d6TjHE0zUkisYk8FYjM
 kwZcmUgCJKsDycWv9gOgLXHEPtahQBPta0cMaDQgC8eN7TiMXvSo36StWSS9gRkoiHghuKSp15
 HMlLiCr0+utEgtzdmbkvHFCD385NVKX2hKYue9X/CC3YDF6EUW+Bi8SWzD1Js7Ll3pgDiWA9gE
 /APtBN76qln9diU4uWQCDk3fAIkBmhdeBLuL2P6aRN5JBxWAo7oQ474wUM50PhwLV+aIlu8Sra
 6eU=
X-SBRS: 2.7
X-MesageID: 17930201
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17930201"
Subject: Re: [PATCH 06/16] x86/traps: Implement #CP handler and extend #PF for
 shadow stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-7-andrew.cooper3@citrix.com>
 <561914ce-d076-8f1a-e74b-7c37567480a1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b7cbee0d-2215-e600-e755-48a329e0786d@citrix.com>
Date: Mon, 11 May 2020 18:20:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <561914ce-d076-8f1a-e74b-7c37567480a1@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 15:10, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> @@ -1457,6 +1451,10 @@ void do_page_fault(struct cpu_user_regs *regs)
>>      {
>>          enum pf_type pf_type = spurious_page_fault(addr, regs);
>>  
>> +        /* Any fault on a shadow stack access is a bug in Xen. */
>> +        if ( error_code & PFEC_shstk )
>> +            goto fatal;
> Not going through the full spurious_page_fault() in this case
> would seem desirable, as would be at least a respective
> adjustment to __page_fault_type(). Perhaps such an adjustment
> could then avoid the change (and the need for goto) here?

This seems to do a lot of things which have little/nothing to do with
spurious faults.

In particular, we don't need to disable interrupts to look at
PFEC_shstk, or RSVD for that matter.

>> @@ -1906,6 +1905,43 @@ void do_debug(struct cpu_user_regs *regs)
>>      pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
>>  }
>>  
>> +void do_entry_CP(struct cpu_user_regs *regs)
> If there's a plan to change other exception handlers to this naming
> scheme too, I can live with the intermediate inconsistency.

Yes.  This isn't the first time I've introduced this naming scheme
either, but the emulator patch got stuck in trivialities.

> Otherwise, though, I'd prefer a name closer to what other handlers
> use, e.g. do_control_prot(). Same for the asm entry point then.

These names are impossible to search for, because there is no
consistency even amongst the similarly named ones.

>
>> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>>          entrypoint 1b
>>  
>>          /* Reserved exceptions, heading towards do_reserved_trap(). */
>> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
>> +        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || \
>> +                vec == TRAP_virtualisation || (vec > X86_EXC_CP && vec < TRAP_nr)
> Use the shorter X86_EXC_VE here, since you don't want/need to
> retain what was there before? (I'd be fine with you changing
> the two other TRAP_* too at this occasion.)

Ok.

Having played this game several times now, I'm considering reworking how
do_reserved_trap() gets set up, because we've now got two places which
can go wrong and result in an unhelpful assert early in boot, rather
than a build-time failure.  (The one thing I'm not sure how to do is
usefully turn this into a build-time failure.)

>
>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -68,6 +68,7 @@
>>  #define PFEC_reserved_bit   (_AC(1,U) << 3)
>>  #define PFEC_insn_fetch     (_AC(1,U) << 4)
>>  #define PFEC_prot_key       (_AC(1,U) << 5)
>> +#define PFEC_shstk         (_AC(1,U) << 6)
> One too few padding blanks?

Oops.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 17:48:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 17:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYCXI-0002pP-L2; Mon, 11 May 2020 17:48:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYCXH-0002pK-EO
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 17:48:15 +0000
X-Inumbo-ID: 96939368-93af-11ea-b07b-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96939368-93af-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 17:48:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589219294;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=1YuLqSJhS24tb+cRPtgbpGsjxbGohZ1x4vJW0qivluQ=;
 b=ZyMnIVwvNNxqXkiZsfX6EWxfPXe6ckie+qgTyxgegjIy2nO+IJqHdkle
 4HWaB+vcz9aF1IsTR0M6ljw51MEyEn7AEySkiDdj/r0U0Y8c3UkGelRrX
 2hFUBIf/pYX+VoJwZica5eDdv3LJ2J0QK//Jyu0NBGpI3AKtXT6XfONjC g=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 000k+0R0nErGb8NbHz0FFJQ5jkiWORs981K8+ywmfEg2ECNkXaLzuKfeGqxZ6w8+MgNgToU0GM
 XzKUNr/OQS1v5utgFig7Vul30YuI8GdX7gGd9sC0wlZIPjzUmVZ1ZKWShwfj85Vab8f0IapLFn
 RoNmqaG+VG0HeqkzeZ3iO88Q3y80ZDMMLMNWDEjRRgX3t6NOXZOpOhOPv51NlCIwlHz+e+k2fh
 ic7Y5menFrEfwlJyFxMVS2t02hyHNmhGMZBW+IpJ+rMGZ/2J2g2ZoBFiC51HVumYpkcerfOWGc
 dC8=
X-SBRS: 2.7
X-MesageID: 17932759
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,380,1583211600"; d="scan'208";a="17932759"
Subject: Re: [PATCH 07/16] x86/shstk: Re-layout the stack block for shadow
 stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-8-andrew.cooper3@citrix.com>
 <8b6e03ee-545d-eada-457e-79c183a72d6d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <51eca0a6-48ff-9169-2c41-c1cadace1d02@citrix.com>
Date: Mon, 11 May 2020 18:48:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <8b6e03ee-545d-eada-457e-79c183a72d6d@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 04/05/2020 15:24, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -732,14 +732,14 @@ void load_system_tables(void)
>>  		.rsp2 = 0x8600111111111111ul,
>>  
>>  		/*
>> -		 * MCE, NMI and Double Fault handlers get their own stacks.
>> +		 * #DB, NMI, DF and #MCE handlers get their own stacks.
> Then also #DF and #MC?

Ok.

>
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -6002,25 +6002,18 @@ void memguard_unguard_range(void *p, unsigned long l)
>>  
>>  void memguard_guard_stack(void *p)
>>  {
>> -    /* IST_MAX IST pages + at least 1 guard page + primary stack. */
>> -    BUILD_BUG_ON((IST_MAX + 1) * PAGE_SIZE + PRIMARY_STACK_SIZE > STACK_SIZE);
>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>>  
>> -    memguard_guard_range(p + IST_MAX * PAGE_SIZE,
>> -                         STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
>> +    p += 5 * PAGE_SIZE;
> The literal 5 here and ...
>
>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>>  }
>>  
>>  void memguard_unguard_stack(void *p)
>>  {
>> -    memguard_unguard_range(p + IST_MAX * PAGE_SIZE,
>> -                           STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
>> -}
>> -
>> -bool memguard_is_stack_guard_page(unsigned long addr)
>> -{
>> -    addr &= STACK_SIZE - 1;
>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_RW);
>>  
>> -    return addr >= IST_MAX * PAGE_SIZE &&
>> -           addr < STACK_SIZE - PRIMARY_STACK_SIZE;
>> +    p += 5 * PAGE_SIZE;
> ... here could do with macro-izing: IST_MAX + 1 would already be
> a little better, I guess.

The problem is that "IST_MAX + 1" is now less meaningful than a literal
5, because the 5 at least obviously matches up with the comment
describing which page does what.

~Andrew

>
> Preferably with adjustments along these lines
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
> Jan



From xen-devel-bounces@lists.xenproject.org Mon May 11 18:01:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 18:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYCk2-0004Tn-Vm; Mon, 11 May 2020 18:01:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lxCE=6Z=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jYCk1-0004Ti-O1
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 18:01:25 +0000
X-Inumbo-ID: 6d2813da-93b1-11ea-a23d-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d2813da-93b1-11ea-a23d-12813bfff9fa;
 Mon, 11 May 2020 18:01:24 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04BHtoZZ165453;
 Mon, 11 May 2020 18:01:18 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type; s=corp-2020-01-29;
 bh=3kDmNoqAEMZCIXkcb0qOJul4sO1nAcYTG6P0n3tKFVY=;
 b=J08lgend++x1QqGWYhELUriMEny26k3fyjyCtHH4swF4nr0HlxIWmjm6gN2ExsQJmF8H
 /fyLBmkUUnyvCvbRxnSJiC80XcRPP1JK6TdepnYLyEbAMDOyn3iDl2UYnhx4+WytAj5t
 goJLQvhUFu3rVXh5yQKzSQUp2M/XfxaBHWQJFpafh66wNATudAi0/SMvS6l4AekMYfMT
 vYHpmrhFcBsMuVllX98JKAXFyrWCiMX9vWtV1AqOgtngOkLwEv4Uf90ndO9S1H+Ny+w3
 Lbvt7xEibUHNB/tGl63mq4lCiEiqzaDzHO/+Yz+WH3rDFOY7wXj2Of1xVvuflRzBe/Hy mg== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2130.oracle.com with ESMTP id 30x3gmenme-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 11 May 2020 18:01:18 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04BHht2K117945;
 Mon, 11 May 2020 18:01:18 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 30x6ewggc4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 11 May 2020 18:01:18 +0000
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04BI1Gwt006639;
 Mon, 11 May 2020 18:01:16 GMT
Received: from [10.39.250.101] (/10.39.250.101)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 11 May 2020 11:01:16 -0700
Subject: Re: [PATCH 1/2] xen/xenbus: avoid large structs and arrays on the
 stack
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, clang-built-linux@googlegroups.com
References: <20200511073151.19043-1-jgross@suse.com>
 <20200511073151.19043-2-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
X-Pep-Version: 2.0
Message-ID: <965e1d65-3a0c-3a71-ca58-2b5c04f98bce@oracle.com>
Date: Mon, 11 May 2020 14:01:00 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200511073151.19043-2-jgross@suse.com>
Content-Type: multipart/mixed; boundary="------------444D946E3E378382AC37167D"
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9618
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 spamscore=0 phishscore=0
 mlxlogscore=999 mlxscore=0 malwarescore=0 bulkscore=0 suspectscore=2
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005110139
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9618
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 mlxlogscore=999
 clxscore=1015 spamscore=0 lowpriorityscore=0 phishscore=0 bulkscore=0
 malwarescore=0 priorityscore=1501 mlxscore=0 suspectscore=2
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2003020000 definitions=main-2005110139
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Arnd Bergmann <arnd@arndb.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.
--------------444D946E3E378382AC37167D
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit

On 5/11/20 3:31 AM, Juergen Gross wrote:
> 
>  static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,


I wonder whether we can drop valloc/vfree from xenbus_ring_ops' names.


> +				      struct map_ring_valloc *info,
>  				      grant_ref_t *gnt_ref,
>  				      unsigned int nr_grefs,
>  				      void **vaddr)
>  {
> -	struct xenbus_map_node *node;
> +	struct xenbus_map_node *node = info->node;
>  	int err;
>  	void *addr;
>  	bool leaked = false;
> -	struct map_ring_valloc_hvm info = {
> -		.idx = 0,
> -	};
>  	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
> 
> -	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
> -		return -EINVAL;
> -
> -	*vaddr = NULL;
> -
> -	node = kzalloc(sizeof(*node), GFP_KERNEL);
> -	if (!node)
> -		return -ENOMEM;
> -
>  	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
>  	if (err)
>  		goto out_err;
> 
>  	gnttab_foreach_grant(node->hvm.pages, nr_grefs,
>  			     xenbus_map_ring_setup_grant_hvm,
> -			     &info);
> +			     info);
> 
>  	err = __xenbus_map_ring(dev, gnt_ref, nr_grefs, node->handles,
> -				info.phys_addrs, GNTMAP_host_map, &leaked);
> +				info, GNTMAP_host_map, &leaked);
>  	node->nr_handles = nr_grefs;
> 
>  	if (err)
> @@ -641,11 +654,13 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
>  	spin_unlock(&xenbus_valloc_lock);
> 
>  	*vaddr = addr;
> +	info->node = NULL;


Is this so that xenbus_map_ring_valloc() doesn't free it accidentally?


-boris


> +
>  	return 0;
> 
> 


--------------444D946E3E378382AC37167D
Content-Type: application/pgp-keys;
 name="pEpkey.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="pEpkey.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

mQINBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrI
tCzP8FUVPQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6
ukOB7igy2PGqZd+MMDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0P
VYR5hyvhyf6VIfGuvqIsvJw5C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi
5w4LXDbF+3oholKYCkPwxmGdK8MUIdkMd7iYdKqiP4W6FKQou/lC3jvOceGupEoD
V9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlomwoVWc0xBZboQguhauQqrBFoo
HO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2FXnIChrYxR6S0ijS
qUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2SfjZwK+G
ETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMG
m5PLHDSP0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQAB
tDNCb3JpcyBPc3Ryb3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xl
LmNvbT6JAjgEEwECACIFAlH8CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheA
AAoJEIredpCGysGyasEP/j5xApopUf4g9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4
bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9zJimBDrhLkDI3Zsx2CafL4pMJ
vpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaSVGx3tsQIAr7MsQxi
lMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaKjImqWhU9
CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPg
EsEJXSr9tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJo
lFM2i4Z0k40BnFU/kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLl
VxdO4qvJkv92SzZz4538az1Tm+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1
z4qu7udYCuuV/4xDEhslUq1+GcNDjAhBnNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs
4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2ohWwveNeRTkxh+2x1Qb3GT46u
iQEcBBABAgAGBQJXFiMoAAoJEKXZdqUyumTAq/oH/2P6KTIO7dGbFl8ed3QgZ4nX
1YeMc+CLCO9m4m+sOaLHgD/NYgPA4/ZwvCU9B/40HEKziq7sAkuEURrIeXyLwrmI
wRsdPYXO4IBoEKafA51A5sAhLJy1POFcs2WI1f3n0bQfx2hgQCE9S1yjgyO+3t+z
+slt3MR6kt1cW3lG4dXzrKNTCTEyMWlqLJELHWA0Ja/8hF0Z1gteOOb2ol1kjB2v
ZPDJYVnhD4yzraE2lgqaRkNA2l5pUZ7p0T06/5MdNKY1NxOqv3zLXNNjxvRYiSVD
F35K05OAyuMKKKgayaLJbLXwkhqSpiNGw0k+cWZTzXO32szxXo4z2Ek766b8GBeJ
ARwEEAEIAAYFAlcWI68ACgkQbE5iBDHJmA3+uAf/W+4NitsClOyDCFoFpPwEhYqM
qxMmyzax27P8s1ZLaOxQvjAaQxUIPVEABbx86yHcsZvzLXXwqosYjy4RWxC+0dMb
oQ8l2oKqrSPa/buCTRG7zBHisQLKyDHQw0aWnVW041P1s4pIs58DUIovTP4MpNPf
qx8+dDGXDpYOEwnmxTLXKfZMyDnq1QtTXQ576HxPt70U/xZ1ZLlCsPsBhiIdjvi9
7jxALwqVd+FyfVDSVm67H5ek+d3ygolA4mJIFjPzmfBXFEbycnrNhc7ZAnfYAipz
6ZPNWmbv69hiKvWLi3uQbEJz4JOLH2xtlDvuCtYKY1F90EPLBj6zW0y8N0F1zYkC
HAQQAQIABgUCVyN1mQAKCRBSpQ6E8dsJv8WtEACEt9gA3neve9p0PpJWfKSDNDn7
3ncvtsHrkHFgVMzdizZoq4MAUKSNun6qH0g8EAx1gydyQBpbUL8GHQJCMdMPe+Jn
C0k+yMOlUcKrGfmJ4+iNj79XON2Xi4toHdkVNorAftT0TCUMmBa7yFQJpoqOJMbF
0c+ONoYE8FlZ+3Irb5kDscIp3f/GRY6LgGF9IPWCLixfohoKgkDhyTH+JxAurhjG
nb2zBX296hPU09RE4pzj4aXsh/plaon79Z4BQgkdUu4NH6h8EWJbwIuWebfgBv5K
wJZfgqXhLyXG1DtmmG2fjqxbjRg6hOSTRQpfQnxj9gDSnlzOsnTSsfxrbVCIoHEu
bkM5Y12F/gHOnh0WxLe/FWMzNLu1VwU+TtqZOb70DEPNPMq3d46TImvEMQll9i8R
pHdFQywY4Pc1FydUpBjWiXD9qKusgBXrubG6roZL5G/twYMT4mAiZuG2hfNWfNDR
75XzFbMPjI/Qyp8Ya4MikuIEuZ6MF+Qqb1kiE7x1rOACFXm9R6zQS1+FRj7emasX
d6MZL2XTXvN+ppR1XP8OO27Sk3i83aBXO/syb5YpgYT8/VWb9oQClrWj7bWL5YC8
tNmgW/70AMaAOkVZw50K6jrSLC4jPbfVP6a5kefGgn4UbmNYH1D0yH1yiPM0BBb8
/F9pHV2tFVrmDIBJl4kBHAQQAQIABgUCVxYftwAKCRBcW6PzLGbE0RALB/wLz6Tf
MFpB7fX8M9Hz1XCkU/s9PmYsFjcONPBFCjQSgll+UzpCSiFpH7nYJ80yaWGWskhP
0yJjYtqwPU0h4YQq8paTLZqypWt9zoQzs/km2rRvpvcKVhR4vKOrbOa1To4/LRAa
jCsvAgQI1ay9LWxIzbA7WrA4fEFiaIdyHExD6Y8g08xQqGCG9Tv3xM36YN/oWjlQ
UOMsOz2Bxc9M8c4PeEFSzksoQDEXRY9PR6F5oIy2YPegrTRjKqXurWyaEZIvu2fG
uC8r+NGKN3LQbJsBW5m5Y1eCpzcXBlww4C6g70V2zKT+FTp4J1goU4WcDXsEBr7B
hHzWvm7RygPA7NwXuQINBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc
6mnky03qyymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf
1yIoxbIpDbffnuyzkuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2
LNzZ3/on4dnEc/qd+ZZFlOQ4KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7Y
ALB/11FO2nBB7zw7HAUYqJeHutCwxm7iBDNt0g9fhviNcJzagqJ1R7aPjtjBoYvK
kbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzCgM2R4qqUXmxFIS4Bee+gnJi0
Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pwXIDcEq8MXzPBbxwH
KJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ2ydg7dBh
Da6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRV
ko6xCBU4SQARAQABiQIfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIyw
R6jTqix6/fL0Ip8Gjpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9f
JKMl7F3SEgpYaiKEcHfoKGdh30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N
4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJPAIIANYvJaD8xA7sYUXGTzOXDh2THWSv
mEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTju3qcaOM6i/m4hqtvsI1cOORM
VwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+luqoqAF/AEGsNZTrwH
JYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUdt3Iq9hdj
pU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws
86PHdthhFm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm
4xrh/PJO6c1THqdQ19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOT
nkKE6QcA4zckFepUkfmBV1wMJg6OxFYd01z+a+oL
=3D3tCZ
-----END PGP PUBLIC KEY BLOCK-----

--------------444D946E3E378382AC37167D--


From xen-devel-bounces@lists.xenproject.org Mon May 11 18:32:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 18:32:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYDDs-00070n-Ft; Mon, 11 May 2020 18:32:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lxCE=6Z=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jYDDr-00070i-BH
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 18:32:15 +0000
X-Inumbo-ID: b954dea6-93b5-11ea-a241-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b954dea6-93b5-11ea-a241-12813bfff9fa;
 Mon, 11 May 2020 18:32:10 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04BISS5A024294;
 Mon, 11 May 2020 18:32:08 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type; s=corp-2020-01-29;
 bh=0Ey/kzTRW0zR9w4JP8XW1qIbWybWgGoabtUmZPgIkpo=;
 b=zYc4mFUVMSlX3AVF0YDffObZ2Kkywl4qWE/KVYywaao9U7a+qhA/Zb9hgjQcRpuXRieY
 2PJ3ioU3PpCT3nZk8M50yLPG9S5F8xi/PWGaoABCZOIePiAJWLFpeZqdAOjqj9lNFR00
 BWtow/7DZsM5tv7VYZFma4co2T9hp6j+5WrOrF6fpAWlD2lWOcC5GimNdw8ZxilDrLgV
 Auy+tEBJPFdBLGnWpNFoBf6zVJL9VZUYO59tRVYoQGidFWgG/px8G8nErzTZ+U0FpjKZ
 7JplPk5CSj0JsY3bE32wcnmbyKYuPp528Ay06e10UvwsKE3VRfYAsg4GMmNvjEN3R7RL 7A== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2130.oracle.com with ESMTP id 30x3gmetps-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Mon, 11 May 2020 18:32:08 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04BIS8KM068892;
 Mon, 11 May 2020 18:32:07 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by userp3030.oracle.com with ESMTP id 30x6ewhy8w-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 11 May 2020 18:32:07 +0000
Received: from abhmp0003.oracle.com (abhmp0003.oracle.com [141.146.116.9])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04BIW6NB006674;
 Mon, 11 May 2020 18:32:06 GMT
Received: from [10.39.250.101] (/10.39.250.101)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Mon, 11 May 2020 11:32:06 -0700
Subject: Re: [PATCH 2/2] xen/xenbus: let xenbus_map_ring_valloc() return errno
 values only
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20200511073151.19043-1-jgross@suse.com>
 <20200511073151.19043-3-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
X-Pep-Version: 2.0
Message-ID: <692e23f8-1eb7-4fb2-7375-e85cb27dfab0@oracle.com>
Date: Mon, 11 May 2020 14:32:05 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200511073151.19043-3-jgross@suse.com>
Content-Type: multipart/mixed; boundary="------------AE3EE8FF91A004904F775E68"
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9618
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 spamscore=0 phishscore=0
 mlxlogscore=999 mlxscore=0 malwarescore=0 bulkscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2003020000
 definitions=main-2005110141
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9618
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 mlxlogscore=999
 clxscore=1015 spamscore=0 lowpriorityscore=0 phishscore=0 bulkscore=0
 malwarescore=0 priorityscore=1501 mlxscore=0 suspectscore=0
 impostorscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2003020000 definitions=main-2005110141
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.
--------------AE3EE8FF91A004904F775E68
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

On 5/11/20 3:31 AM, Juergen Gross wrote:
> Today xenbus_map_ring_valloc() can return either a negative errno
> value (-ENOMEM or -EINVAL) or a grant status value. This is a mess as
> e.g. -ENOMEM and GNTST_eagain have the same numeric value.
>
> Fix that by turning all grant mapping errors into -ENOENT. This is
> no problem as all callers of xenbus_map_ring_valloc() only use the
> return value to print an error message, and in case of mapping errors
> the grant status value has already been printed by __xenbus_map_ring()
> before.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>





--------------AE3EE8FF91A004904F775E68--


From xen-devel-bounces@lists.xenproject.org Mon May 11 18:48:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 18:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYDTM-00081v-TW; Mon, 11 May 2020 18:48:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYDTL-000815-Mm
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 18:48:15 +0000
X-Inumbo-ID: f8055200-93b7-11ea-9887-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8055200-93b7-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 18:48:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589222894;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=MYaZwDsAVkRp8lvSQHR65VKbP/HlTMO1e62r3ITEeXI=;
 b=Sd36zreFrh2yVwGCgH3dOZji5FcJ9CstDoXEeymrwQ20K5jtYyMcq1+4
 VnrRkLA1E/m/qU/1g7KZeJ/TtPNUyBdlRb43j7fAs3uZ95lqr6WjyUrTb
 I4DSSIc5mJqr8qKzYugpi5hykNBep7al0lutX1nEDbdamwwp0zZ2vCcER A=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 7g4L4vCvNt84P54kGjwLw/IMQV/puJ/qU5RaJX/vjuvrhGyAA+0H94JNedWFKPVyx99kV3gOmx
 gmdkH3cowYxCXOERf0cv4JgQ+q/FiP4ZEu7pQMrvHl5jCkriPaW3oYJoYGo5sh7U2uxX+4aO+s
 Vly6v9TpMqBVlulXprHry9gG6o8QL8eSyRl9MpmWQjYcOhRxcQIDKTexmoAJ+abJApXBVIMfPV
 VfbquG26Vz+7/CjPfoW88OzAce+REAlw6TA+WN+VDiUna3J1ETzwfHIf/5qRnSkZDpwEhx42bw
 yyk=
X-SBRS: 2.7
X-MesageID: 17619118
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,381,1583211600"; d="scan'208";a="17619118"
Subject: Re: [PATCH 09/16] x86/cpu: Adjust enable_nmis() to be shadow stack
 compatible
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-10-andrew.cooper3@citrix.com>
 <478340f1-4238-1419-eeb7-c8c2ed7103a6@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <544840f1-9c8a-ec97-44f3-5ec263b8428f@citrix.com>
Date: Mon, 11 May 2020 19:48:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <478340f1-4238-1419-eeb7-c8c2ed7103a6@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 15:48, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> When executing an IRET-to-self, the shadow stack must agree with the regular
>> stack.  We can't manipulate SSP directly, so have to fake a shadow IRET frame
>> by executing 3 CALLs, then editing the result to look correct.
>>
>> This is not a fastpath, is called on the BSP long before CET can be set up,
>> and may be called on the crash path after CET is disabled.  Use the fact that
>> INCSSP is allocated from the hint nop space to construct a test for CET being
>> active which is safe on all processors.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> albeit with a question which may make a further change necessary:
>
>> --- a/xen/include/asm-x86/processor.h
>> +++ b/xen/include/asm-x86/processor.h
>> @@ -544,17 +544,40 @@ static inline void enable_nmis(void)
>>  {
>>      unsigned long tmp;
>>  
>> -    asm volatile ( "mov %%rsp, %[tmp]     \n\t"
>> -                   "push %[ss]            \n\t"
>> -                   "push %[tmp]           \n\t"
>> -                   "pushf                 \n\t"
>> -                   "push %[cs]            \n\t"
>> -                   "lea 1f(%%rip), %[tmp] \n\t"
>> -                   "push %[tmp]           \n\t"
>> -                   "iretq; 1:             \n\t"
>> -                   : [tmp] "=&r" (tmp)
>> +    asm volatile ( "mov     %%rsp, %[rsp]        \n\t"
>> +                   "lea    .Ldone(%%rip), %[rip] \n\t"
>> +#ifdef CONFIG_XEN_SHSTK
>> +                   /* Check for CET-SS being active. */
>> +                   "mov    $1, %k[ssp]           \n\t"
>> +                   "rdsspq %[ssp]                \n\t"
>> +                   "cmp    $1, %k[ssp]           \n\t"
>> +                   "je     .Lshstk_done          \n\t"
>> +
>> +                   /* Push 3 words on the shadow stack */
>> +                   ".rept 3                      \n\t"
>> +                   "call 1f; nop; 1:             \n\t"
>> +                   ".endr                        \n\t"
>> +
>> +                   /* Fixup to be an IRET shadow stack frame */
>> +                   "wrssq  %q[cs], -1*8(%[ssp])  \n\t"
>> +                   "wrssq  %[rip], -2*8(%[ssp])  \n\t"
>> +                   "wrssq  %[ssp], -3*8(%[ssp])  \n\t"
>> +
>> +                   ".Lshstk_done:"
>> +#endif
>> +                   /* Write an IRET regular frame */
>> +                   "push   %[ss]                 \n\t"
>> +                   "push   %[rsp]                \n\t"
>> +                   "pushf                        \n\t"
>> +                   "push   %q[cs]                \n\t"
>> +                   "push   %[rip]                \n\t"
>> +                   "iretq                        \n\t"
>> +                   ".Ldone:                      \n\t"
>> +                   : [rip] "=&r" (tmp),
>> +                     [rsp] "=&r" (tmp),
>> +                     [ssp] "=&r" (tmp)
> Even after an hour of reading and searching through the gcc docs
> I can't convince myself that this utilizes defined behavior. We
> do tie multiple outputs to the same C variable elsewhere, yes,
> but only in cases where we don't really care about the register,
> or where the register is a fixed one anyway. What I can't find
> is a clear statement that gcc wouldn't ever, now or in the
> future, use the same register for all three outputs. I think one
> can deduce this in certain ways, and experimentation also seems
> to confirm it, but still.

I don't see how different local variables will make any difference.  The
variables themselves will be dropped for being write-only before
register scheduling happens.

GCC and Clang reliably use the callee preserved registers first, then
start spilling caller preserved registers, and finally refuse to
assemble as soon as we hit 16 registers of this form, citing impossible
constraints (GCC) or too many registers (Clang).

The important point is that they are listed as separate outputs, so
cannot share register allocations.  If this were to be violated, loads
of code would malfunction.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 20:08:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 20:08:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYEiN-0006BK-SM; Mon, 11 May 2020 20:07:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYEiM-0006BF-5Y
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 20:07:50 +0000
X-Inumbo-ID: 160aa344-93c3-11ea-a24d-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 160aa344-93c3-11ea-a24d-12813bfff9fa;
 Mon, 11 May 2020 20:07:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589227669;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=5CxL0G0DnbnCGUkgIk9oli1/xZEqe1Hx0HCv+dYZnEs=;
 b=AdvwuSisgZKzRU64dn9KWlaE/93zqU9uFTsts1Of95USNor6AJ2Xwe9m
 7Fcs9YyejEMBfXBDomG7wjeYf2y5bPzViYdAl1C4ihz+dbNxy0qZ3xrAE
 2QkRI6b8hTpF+FM44+ORj5ImhxaIqF0aCvV7QgsBk94EC9OHCgKBOr8pS 4=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: AmCvXBpqQGe40+yP2Fk0H2wO9yR9USNe/T0nbqCUFxyNw0FPHhvVqz4ylJeeAEDafRXgyHes4S
 dbj0kdELT6MfXqu0ptfv/dls8cPHK84UzaFWJ6V7gZvGr5ZS2TkeNVUfpAVcJ2nYEef/N/8ZzF
 YK0L2+xyFk+9B/WbP2qSA3P0BJmlSAm6Ulbp+bJH1U0wLunxlphrtSPp9/pHBHhRtaQNmWQRkg
 Jb+71ccocn7FMdf9X81AojbOTPS1QGlPf5pnrvCxaaYMVYjJLc1en2DqBczvLzp2Xvrp2EEduA
 KSY=
X-SBRS: 2.7
X-MesageID: 17273758
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,381,1583211600"; d="scan'208";a="17273758"
Subject: Re: [PATCH 10/16] x86/cpu: Adjust reset_stack_and_jump() to be shadow
 stack compatible
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-11-andrew.cooper3@citrix.com>
 <4c0dfd8f-38c0-ca32-886d-94cb4698e63b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <49be63a5-1edc-cfc4-2081-e6ce8ae87a70@citrix.com>
Date: Mon, 11 May 2020 21:07:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4c0dfd8f-38c0-ca32-886d-94cb4698e63b@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07/05/2020 14:17, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> We need to unwind up to the supervisor token.  See the comment for details.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>>  xen/include/asm-x86/current.h | 42 +++++++++++++++++++++++++++++++++++++++---
>>  1 file changed, 39 insertions(+), 3 deletions(-)
>>
>> diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
>> index 99b66a0087..2a7b728b1e 100644
>> --- a/xen/include/asm-x86/current.h
>> +++ b/xen/include/asm-x86/current.h
>> @@ -124,13 +124,49 @@ unsigned long get_stack_dump_bottom (unsigned long sp);
>>  # define CHECK_FOR_LIVEPATCH_WORK ""
>>  #endif
>>  
>> +#ifdef CONFIG_XEN_SHSTK
>> +/*
>> + * We need to unwind the primary shadow stack to its supervisor token, located
>> + * at 0x5ff8 from the base of the stack blocks.
>> + *
>> + * Read the shadow stack pointer, subtract it from 0x5ff8, divide by 8 to get
>> + * the number of slots needing popping.
>> + *
>> + * INCSSPQ can't pop more than 255 entries.  We shouldn't ever need to pop
>> + * that many entries, and getting this wrong will cause us to #DF later.
>> + */
>> +# define SHADOW_STACK_WORK                      \
>> +    "mov $1, %[ssp];"                           \
>> +    "rdsspd %[ssp];"                            \
>> +    "cmp $1, %[ssp];"                           \
>> +    "je 1f;" /* CET not active?  Skip. */       \
>> +    "mov $"STR(0x5ff8)", %[val];"               \
> As per comments on earlier patches, I think it would be nice if
> this wasn't a literal number here, but tied to actual stack
> layout via some suitable expression. An option might be to use
> 0xff8 (or the constant to be introduced for it in the earlier
> patch) here and ...
>
>> +    "and $"STR(STACK_SIZE - 1)", %[ssp];"       \
> ... PAGE_SIZE here.

It is important to use STACK_SIZE here and not PAGE_SIZE to trigger...

>> +    "sub %[ssp], %[val];"                       \
>> +    "shr $3, %[val];"                           \
>> +    "cmp $255, %[val];"                         \
>> +    "jle 2f;"                                   \

... this condition if we try to reset&jump from more than 4k away from
0x5ff8, e.g. from an IST stack.

Whatever happens we're going to crash, but given that we're talking
about imm32's here,

> Perhaps better "jbe", treating the unsigned values as such?

What I really want is actually to open-code an UNLIKELY() region, seeing
as none of our infrastructure works inside inline asm.  Same for...

>
>> +    "ud2a;"                                     \

... this to turn into a real BUG.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 20:09:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 20:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYEjj-0006Ge-6g; Mon, 11 May 2020 20:09:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7br7=6Z=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1jYEjh-0006GW-6j
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 20:09:13 +0000
X-Inumbo-ID: 47e6995e-93c3-11ea-9887-bc764e2007e4
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47e6995e-93c3-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 20:09:12 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id g185so11211912qke.7
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 13:09:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=content-transfer-encoding:from:mime-version:subject:date:message-id
 :references:cc:in-reply-to:to;
 bh=F4bkj/28xd+PhlgWdwkIBOf4x6G5mIF4O897eian0TA=;
 b=Crl27/u0PjgeyhfocQ82pnyfEvnEyuKnHfF5Nopjdb5nQ496rDIovgOuTdi6wN6V92
 je4VfqffJCcSvu4Aa5KgbNllMbypGSes4ogkQOs3YJ9+ZWYssMvDVudR9BlGjL/RmhT3
 CCTR5B4EJKoeLgGTtbF6SJV9XWeUO9BlQkUFa7bUoVn+LKcXlogyZACJP9FYafYiRo5h
 TMpThI7YvIL/nXByYjvpWSwUacKMs1iJZHQvoFKFrsvP8Fk7BdPfae84lMOAyEfwyzx+
 Zn6fE6oHhgijys4A1BCExf0Fu22lTokJUQtYALZQ+UliiaKpgIjA70z1W95hIwVCCTu8
 XqoA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:content-transfer-encoding:from:mime-version
 :subject:date:message-id:references:cc:in-reply-to:to;
 bh=F4bkj/28xd+PhlgWdwkIBOf4x6G5mIF4O897eian0TA=;
 b=qk+FJvwJLQjX2vDV89oCofYcXnQma7FCi9w69XD/GuKUiAxCigXGkOgAbzkOQp31UR
 bXPsVieCZUVKU17MFZjtJnrRCYtjz35P2AG5bdDiVjU8Qn1zlPWREEQ1EErhruAeslVp
 FOsoj/V7GLSNEndfVlGxjbTczoN2yhixuNrrPak6KwG8jv3gtsxTkcAPWgYC8qJEQzDz
 gLk9wJC/WGWSV56vSu21SReCrlGQElbTc/VoJB+/PauhGollmuju2baoYOP2pPkiIO7y
 gdwVUF0hycPKM6lklGfUmZGTKSI1pWt/lkmm7ysmjRW+hqVdEmHORApsmpynYqNjoh/j
 sdmw==
X-Gm-Message-State: AGi0PuauX500VVsfiqcxxdyaSfp+Id0ro0H2SlT2HpkMf/QUGmCIBHaT
 wYOZgt1WIOa1wxewNo2Yb0c=
X-Google-Smtp-Source: APiQypJb6wJJzXiCuo0y5BjFpzVTj71BhNnQqwhKN1ZuJpd9xQYRbBbAPs0s/Z+OfIJbZMves9lJuA==
X-Received: by 2002:a37:a0d5:: with SMTP id
 j204mr17571901qke.128.1589227751820; 
 Mon, 11 May 2020 13:09:11 -0700 (PDT)
Received: from [100.64.72.249] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id p68sm9140371qka.56.2020.05.11.13.07.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 11 May 2020 13:08:25 -0700 (PDT)
Content-Type: multipart/alternative;
 boundary=Apple-Mail-790DACCD-FD2E-48C3-A81C-734C3B2439C2
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: XenSummit 2020 *will* be held virtually in June!
Date: Mon, 11 May 2020 16:07:56 -0400
Message-Id: <FFE251E7-4DFD-414F-8E14-7B814956E7A4@gmail.com>
References: <562E87BB-FAEE-42B3-BEF4-6E7A4D65269D@citrix.com>
In-Reply-To: <562E87BB-FAEE-42B3-BEF4-6E7A4D65269D@citrix.com>
To: George Dunlap <George.Dunlap@citrix.com>
X-Mailer: iPad Mail (17E262)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--Apple-Mail-790DACCD-FD2E-48C3-A81C-734C3B2439C2
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: quoted-printable

On May 7, 2020, at 18:50, George Dunlap <George.Dunlap@citrix.com> wrote:
>=20
> =EF=BB=BFWe=E2=80=99re still ironing out all the details, but it=E2=80=99s=
 absolutely confirmed that XenSummit 2020 will be held virtually in June.
>=20
> In addition, the new version of the Design Sessions website is now live:
>=20
> https://design-sessions.xenproject.org
>=20
> Make space on your calendars, and submit your design sessions, and watch t=
his space for further information!

Thanks for the update.  Might be worth adding the above note and expected pr=
icing to one of the public status pages and Twitter, while the new schedule i=
s being worked out:

https://events.linuxfoundation.org/xen-summit/attend/novel-coronavirus-updat=
e/
https://xenproject.org/2020/03/26/xenproject-developer-and-design-summit-upd=
ate-in-light-of-covid-19/
https://events.linuxfoundation.org/xen-summit/

Rich=

--Apple-Mail-790DACCD-FD2E-48C3-A81C-734C3B2439C2
Content-Type: text/html;
	charset=utf-8
Content-Transfer-Encoding: quoted-printable

<html><head><meta http-equiv=3D"content-type" content=3D"text/html; charset=3D=
utf-8"></head><body dir=3D"auto"><div dir=3D"ltr">On May 7, 2020, at 18:50, G=
eorge Dunlap &lt;George.Dunlap@citrix.com&gt; wrote:</div><div dir=3D"ltr"><=
blockquote type=3D"cite"><br></blockquote></div><blockquote type=3D"cite"><d=
iv dir=3D"ltr">=EF=BB=BF<span>We=E2=80=99re still ironing out all the detail=
s, but it=E2=80=99s absolutely confirmed that XenSummit 2020 will be held vi=
rtually in June.</span><br><span></span><br><span>In addition, the new versi=
on of the Design Sessions website is now live:</span><br><span></span><br><s=
pan>https://design-sessions.xenproject.org</span><br><span></span><br><span>=
Make space on your calendars, and submit your design sessions, and watch thi=
s space for further information!</span></div></blockquote><br><div>Thanks fo=
r the update. &nbsp;Might be worth adding the above note and expected pricin=
g to one of the public status pages and Twitter, while the new schedule is b=
eing worked out:</div><div><br></div><div><a href=3D"https://events.linuxfou=
ndation.org/xen-summit/attend/novel-coronavirus-update/">https://events.linu=
xfoundation.org/xen-summit/attend/novel-coronavirus-update/</a></div><div><a=
 href=3D"https://xenproject.org/2020/03/26/xenproject-developer-and-design-s=
ummit-update-in-light-of-covid-19/">https://xenproject.org/2020/03/26/xenpro=
ject-developer-and-design-summit-update-in-light-of-covid-19/</a></div><div>=
<a href=3D"https://events.linuxfoundation.org/xen-summit/">https://events.li=
nuxfoundation.org/xen-summit/</a></div><div><br></div><div>Rich</div></body>=
</html>=

--Apple-Mail-790DACCD-FD2E-48C3-A81C-734C3B2439C2--


From xen-devel-bounces@lists.xenproject.org Mon May 11 20:09:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 20:09:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYEk6-0006K4-F7; Mon, 11 May 2020 20:09:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7br7=6Z=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1jYEk5-0006Jn-4H
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 20:09:37 +0000
X-Inumbo-ID: 56609eee-93c3-11ea-9887-bc764e2007e4
Received: from mail-qt1-x82a.google.com (unknown [2607:f8b0:4864:20::82a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56609eee-93c3-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 20:09:36 +0000 (UTC)
Received: by mail-qt1-x82a.google.com with SMTP id x12so9146311qts.9
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 13:09:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=content-transfer-encoding:from:mime-version:subject:message-id:date
 :cc:to; bh=oz5AXjOGQthPcUXOgDz1OHf+zg0JkhY7kehWPRBVgxY=;
 b=kmFRIHRRfr4Ani0a7IxTXopzQuWHfzuzuea8x/jQrhyeG1n0nx2gzOG2HFrhBXT5mF
 H0hkdN5IAVTgMXSyVsQkDm35mo4lRLyO4FbbJt9BUNLpRNKMLxNREYJtqZoH4LmE5n+N
 oxSLkpiWzZHocp7mceH8Ycx4Ed94LHkADVsgCL3ImvaAD94dua8f9OC1k20zggVFPqc7
 7ahK3ASIArI8cShnxwc5fU/qKcWaa4azZVk6ZP7gza6WHxNCpsb0xFb9ZN6tHjSrNbeW
 1ohcH4J96yHrA2oZYf0cRkTJQ686PfEc7SSlFAyJoY/s0T9O7O/oTvVWDMeJzwmiJxcO
 zR3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:content-transfer-encoding:from:mime-version
 :subject:message-id:date:cc:to;
 bh=oz5AXjOGQthPcUXOgDz1OHf+zg0JkhY7kehWPRBVgxY=;
 b=W4hMsFhylTwdAh+cCSp0h6HQ8NWvH7+h5aUj4i2gTmPl/VWTa9QCW/G66K/BKpHHN1
 2KBZwHA5v8NEXVsRftIzd5hENmMRtbBF4brdQMxmIevaPS9pU4qu2PqW5QelXHa04KAW
 QVH3qDYwbubbLC6ez6WdMI8QeY3/BrRG/PV3Z2mrOfpxVmv7i9QynIZxSdv7lrjKLD5s
 8u8Km2eNnzCYERdvMH/eg0hk910N/KGxsuHUfNXhwXNITShPIHKw9ZsqHwjcKV/am+lR
 UvHpwcZ5AfpDqOLwl6A6/ED2MhgOHlUgqWwqGbysagtywi6VJmS2k/2bymM6okSD3Ry5
 MEcg==
X-Gm-Message-State: AOAM530fgdhbWJWL0F9hBDlxQL/+7VRTzCCjzqKDuHC7mNPFobzHx/8r
 pYO9R6XS7//HG47mOQBsE9s=
X-Google-Smtp-Source: ABdhPJw3d2dAvzuoTxsyRxbYLjPccq/1z6acL7WAUnaM5GcMrTs+tYzQicFYNvDWTwYpmRpJFq5XaQ==
X-Received: by 2002:ac8:554c:: with SMTP id o12mr2372434qtr.89.1589227776294; 
 Mon, 11 May 2020 13:09:36 -0700 (PDT)
Received: from [100.64.72.249] ([173.245.215.240])
 by smtp.gmail.com with ESMTPSA id n9sm9160921qkn.10.2020.05.11.13.09.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 11 May 2020 13:09:16 -0700 (PDT)
Content-Type: multipart/alternative;
 boundary=Apple-Mail-99C10892-B47C-4B47-8C56-CA777856C245
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: XenSummit 2020 *will* be held virtually in June!
Message-Id: <DD52C1F7-DD4B-4718-8EB6-6A18047B8A10@gmail.com>
Date: Mon, 11 May 2020 16:08:29 -0400
To: George Dunlap <George.Dunlap@citrix.com>
X-Mailer: iPad Mail (17E262)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--Apple-Mail-99C10892-B47C-4B47-8C56-CA777856C245
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: quoted-printable

> On May 7, 2020, at 18:50, George Dunlap <George.Dunlap@citrix.com> wrote:
> We’re still ironing out all the details, but it’s absolutely confirmed that
> XenSummit 2020 will be held virtually in June.
>
> In addition, the new version of the Design Sessions website is now live:
>
> https://design-sessions.xenproject.org
>
> Make space on your calendars, submit your design sessions, and watch this
> space for further information!

Thanks for the update.  Might be worth adding the above note and expected
pricing to one of the public status pages and Twitter, while the new schedule
is being worked out:

https://events.linuxfoundation.org/xen-summit/attend/novel-coronavirus-update/
https://xenproject.org/2020/03/26/xenproject-developer-and-design-summit-update-in-light-of-covid-19/
https://events.linuxfoundation.org/xen-summit/

Rich


--Apple-Mail-99C10892-B47C-4B47-8C56-CA777856C245--


From xen-devel-bounces@lists.xenproject.org Mon May 11 20:19:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 20:19:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYEtP-0007HQ-Dd; Mon, 11 May 2020 20:19:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3lka=6Z=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYEtO-0007HL-Rd
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 20:19:14 +0000
X-Inumbo-ID: ae025f9c-93c4-11ea-9887-bc764e2007e4
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae025f9c-93c4-11ea-9887-bc764e2007e4;
 Mon, 11 May 2020 20:19:13 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id u20so734011ljo.1
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 13:19:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=T5kY+GMis5F+wvvF0php5urmVU2PMNnTQwEVgNiABEg=;
 b=f6fjOl+JmhNL4PDwNK6vwOmikNCaXVj40g8EHtmZzBoajkanEJ4V266PFYoW0bWu2n
 zZQo4PZ2OcOK9RDN2e7LOYoBOkks7gDae55gPLaKAo7+MLZAICoPx3uuxUHP3BANyO2F
 6WMZTBPevgBodk+YGHJ6UaP+EianacWMAJf6hTc7kz5I+gixJUrGfRHvcnxlYKmt6OKm
 9npKyIGRCMH63O2og85Cjkv1mprA063qaDGXsPYV+paSXKEUiQuc3Sc76Y446j97f+vn
 wsv/pw5WloXVVVOjUNaoXEKxjnLkeb26HgvV4ZC/R3MzAUnALtSO456VL3pPXwyG1Oel
 pqhg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=T5kY+GMis5F+wvvF0php5urmVU2PMNnTQwEVgNiABEg=;
 b=d7roO+pyJGCDtOWt693vtB06ilhs3Rp9z787zvOc4QFqASLAS6UYU2Xrl8OXT73CGF
 oJvnKTQSLSaF9GNn+NEA2ArfNMbbbzTHx9NhkflgAfmoo5UB/yBfkUR3W2SGJoWM7R97
 pgy0VztQVk7TUP1ybh4HPwi5xutUqJpRSq/+ByLOMbyrzOOGAbHHudQPzzDzBaVbhcbu
 BddWRtc7AiyID0F1uH8Yq6SzZZ6ivMUFjzrXP1aG5SE7wQyWLQ6puZ7uLwaEq6fzEjeY
 gwLPnx7nzmRPHQOIBYm/IqYzMCeKHGJf4DoHfHZAAfH/tXcoiEBtddxdgn8RDy6nFWSC
 1zcQ==
X-Gm-Message-State: AOAM530ik2usrmvrhR0YEK5ZI0twcxrs68L7NmmECaVXT0EeGzQB2IGS
 4Pwk/cf0CYLFUdTUnGpMl27jAOoPngJSydzddG2p0Q==
X-Google-Smtp-Source: ABdhPJxCfp+QaFoMiwvAyLI6hD+yJ+kmMA4RER9mktwiexYy/vSsHJwA+7TjjPNpxbKM/663SNH5YBngKFQThs+xBDE=
X-Received: by 2002:a2e:8018:: with SMTP id j24mr11599288ljg.246.1589228352065; 
 Mon, 11 May 2020 13:19:12 -0700 (PDT)
MIME-Version: 1.0
References: <20200428040433.23504-1-jandryuk@gmail.com>
In-Reply-To: <20200428040433.23504-1-jandryuk@gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 11 May 2020 16:19:00 -0400
Message-ID: <CAKf6xptOrADAOfiFsjKknw9j5qcO4k+c=AQxDLFDt+u2N3y5vQ@mail.gmail.com>
Subject: Re: [PATCH v5 00/21] Add support for qemu-xen running in a
 Linux-based stubdomain
To: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Jan Beulich <jbeulich@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Eric Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ping?

-Jason

On Tue, Apr 28, 2020 at 12:05 AM Jason Andryuk <jandryuk@gmail.com> wrote:
>
> Hi,
>
> In coordination with Marek, I'm making a submission of his patches for Linux
> stubdomain device-model support.  I made a few of my own additions, but Marek
> did the heavy lifting.  Thank you, Marek.
>
> Below is mostly the v4 cover letter with a few additions.
>
> The general idea is to allow freely setting device_model_version and
> device_model_stubdomain_override, and to choose the right options based on
> this choice.  Also, allow specifying the path to the stubdomain
> kernel/ramdisk, for greater flexibility.
>
> The first two patches add documentation about the expected
> toolstack-stubdomain-qemu interface, both for MiniOS stubdomains and Linux
> stubdomains.
>
> The initial version has no QMP support: in the initial patches it is
> completely disabled, which means no suspend/restore and no PCI passthrough.
>
> Later patches add support for QMP over a libvchan connection.  The actual
> connection is made in a separate process.  As discussed at Xen Summit 2019,
> this allows applying some basic checks and/or filtering (not part of this
> series), to limit libxl's exposure to a potentially malicious stubdomain.
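The forwarding core of such a proxy can be sketched in plain C.  This is an illustrative model only, not the actual vchan-socket-proxy code (which drives a libxenvchan ring with its event machinery rather than plain file descriptors); the function name is made up:

```c
#include <assert.h>
#include <string.h>
#include <unistd.h>

/* Illustrative model of the proxy's forwarding core: move one chunk of
 * data from fd 'from' to fd 'to'.  Returns the number of bytes moved,
 * 0 on EOF, or -1 on error.  The real vchan-socket-proxy runs the
 * equivalent of this in an event loop over a libxenvchan ring and a
 * UNIX socket; a filtering step could inspect 'buf' before the write,
 * which is the kind of check the separate proxy process enables. */
static ssize_t proxy_step(int from, int to)
{
    char buf[4096];
    ssize_t n = read(from, buf, sizeof(buf));

    if ( n > 0 && write(to, buf, (size_t)n) != n )
        return -1;          /* short write to the peer: treat as error */
    return n;
}
```

Because the forwarding runs in its own process, a compromised stubdomain can at worst feed bad bytes into the proxy, not into libxl directly.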
>
> Jason's additions ensure the qmp-proxy (vchan-socket-proxy) processes and
> sockets are cleaned up and add some documentation.
>
> The actual stubdomain implementation is here:
>
>     https://github.com/marmarek/qubes-vmm-xen-stubdom-linux
>     (branch for-upstream, tag for-upstream-v3)
>
> See the readme there for build instructions.  Marek's version requires
> dracut.  I have hacked up a version installable with initramfs-tools:
>
>    https://github.com/jandryuk/qubes-vmm-xen-stubdom-linux
>    (branch initramfs-tools)
>
> A few comments/questions about the stubdomain code:
>
> 1. There are extra patches for qemu that are necessary to run it in a
> stubdomain.  While it is desirable to upstream them, I think that can be
> done after merging the libxl part.  Stubdomain qemu builds will in most
> cases be separate anyway, to limit qemu's dependencies (and so the
> stubdomain size).
>
> 2. By default the Linux hvc-xen console frontend is unreliable for data
> transfer (qemu state save/restore): it drops data sent faster than the
> client is reading it.  To fix this, the console device needs to be switched
> into raw mode (`stty raw /dev/hvc1`).  Restoring qemu state is especially
> tricky, as the switch would need to happen before opening the device, but
> stty (obviously) needs to open the device first.  To solve this problem, for
> now the repository contains a kernel patch which changes the default for all
> hvc consoles.  Again, this isn't a practical problem, as the kernel for the
> stubdomain is built separately.  But it would be nice to have something
> working with a vanilla kernel.  I see these options:
>   - convert it to a kernel cmdline parameter (hvc_console_raw=1 ?)
>   - use channels instead of consoles (and on the kernel side change the
>     default to "raw" only for channels); while in theory a better design,
>     the libxl part will be more complex, as channels can be connected to
>     sockets but not files, so libxl would need to read/write to it exactly
>     when qemu writes/reads the data, not before/after as is done now
>
> 3. Mini-OS stubdoms use the dmargs xenstore key as a string.  Linux
> stubdoms use dmargs as a directory of numbered entries.  Should they have
> different names?
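To make the raw-mode workaround in point 2 concrete: from userspace, `stty raw /dev/hvc1` boils down to the termios calls below.  This is an illustrative sketch, not code from the series; /dev/hvc1 is the device from the discussion above, and any tty fd (e.g. a pty) behaves the same way:

```c
#include <fcntl.h>
#include <termios.h>

/* Switch a terminal fd into raw mode, as `stty raw` would: no echo, no
 * canonical line buffering, no flow-control mangling of the byte stream,
 * so bulk data (like a qemu state stream) passes through unmodified.
 * For the stubdomain case the fd would be an open /dev/hvc1. */
static int make_raw(int fd)
{
    struct termios tio;

    if ( tcgetattr(fd, &tio) )
        return -1;
    cfmakeraw(&tio);                 /* clears ICANON, ECHO, ISIG, ... */
    return tcsetattr(fd, TCSANOW, &tio) ? -1 : 0;
}
```

The chicken-and-egg problem described above is exactly that this sequence requires an open fd, while the restore data may already be arriving before the device can be opened and switched.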
>
> Remaining parts for eliminating dom0's instance of qemu:
>  - do not force QDISK backend for CDROM
>  - multiple consoles support in xenconsoled
>
> Changes in v2:
>  - apply review comments by Jason Andryuk
> Changes in v3:
>  - rework qemu arguments handling (separate xenstore keys, instead of a
>    \x1b separator)
>  - add QMP over libvchan, instead of console
>  - add protocol documentation
>  - a lot of minor changes, see individual patches for full changes list
>  - split xenconsoled patches into separate series
> Changes in v4:
>  - extract vchan connection into a separate process
>  - rebase on master
>  - various fixes
> Changes in v5:
>  - Marek: apply review comments from Jason Andryuk
>  - Jason: Clean up qmp-proxy processes and sockets
>
> Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Cc: Simon Gaiser <simon@invisiblethingslab.com>
> Cc: Eric Shelton <eshelton@pobox.com>
> Cc: Ian Jackson <ian.jackson@citrix.com>
> Cc: Wei Liu <wl@xen.org>
>
> Eric Shelton (1):
>   libxl: Handle Linux stubdomain specific QEMU options.
>
> Jason Andryuk (5):
>   docs: Add device-model-domid to xenstore-paths
>   libxl: Check stubdomain kernel & ramdisk presence
>   libxl: Refactor kill_device_model to libxl__kill_xs_path
>   libxl: Kill vchan-socket-proxy when cleaning up qmp
>   tools: Clean up vchan-socket-proxy socket
>
> Marek Marczykowski-Górecki (15):
>   Document ioemu MiniOS stubdomain protocol
>   Document ioemu Linux stubdomain protocol
>   libxl: fix qemu-trad cmdline for no sdl/vnc case
>   libxl: Allow running qemu-xen in stubdomain
>   libxl: write qemu arguments into separate xenstore keys
>   xl: add stubdomain related options to xl config parser
>   tools/libvchan: notify server when client is connected
>   libxl: add save/restore support for qemu-xen in stubdomain
>   tools: add missing libxenvchan cflags
>   tools: add simple vchan-socket-proxy
>   libxl: use vchan for QMP access with Linux stubdomain
>   Regenerate autotools files
>   libxl: require qemu in dom0 even if stubdomain is in use
>   libxl: ignore emulated IDE disks beyond the first 4
>   libxl: consider also qemu in stubdomain in libxl__dm_active check
>
>  .gitignore                          |   1 +
>  configure                           |  14 +-
>  docs/configure                      |  14 +-
>  docs/man/xl.cfg.5.pod.in            |  27 +-
>  docs/misc/stubdom.txt               | 103 ++++++
>  docs/misc/xenstore-paths.pandoc     |   5 +
>  stubdom/configure                   |  14 +-
>  tools/Rules.mk                      |   2 +-
>  tools/config.h.in                   |   3 +
>  tools/configure                     |  46 ++-
>  tools/configure.ac                  |   9 +
>  tools/libvchan/Makefile             |   8 +-
>  tools/libvchan/init.c               |   3 +
>  tools/libvchan/vchan-socket-proxy.c | 500 ++++++++++++++++++++++++++++
>  tools/libxl/libxl_aoutils.c         |  32 ++
>  tools/libxl/libxl_create.c          |  46 ++-
>  tools/libxl/libxl_dm.c              | 484 +++++++++++++++++++++------
>  tools/libxl/libxl_domain.c          |   7 +
>  tools/libxl/libxl_internal.h        |  22 ++
>  tools/libxl/libxl_mem.c             |   6 +-
>  tools/libxl/libxl_qmp.c             |  27 +-
>  tools/libxl/libxl_types.idl         |   3 +
>  tools/xl/xl_parse.c                 |   7 +
>  23 files changed, 1205 insertions(+), 178 deletions(-)
>  create mode 100644 tools/libvchan/vchan-socket-proxy.c
>
> --
> 2.20.1
>


From xen-devel-bounces@lists.xenproject.org Mon May 11 21:10:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 21:10:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYFgE-0002zR-ER; Mon, 11 May 2020 21:09:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYFgD-0002zM-EV
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 21:09:41 +0000
X-Inumbo-ID: b9f6ffd6-93cb-11ea-b9cf-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9f6ffd6-93cb-11ea-b9cf-bc764e2007e4;
 Mon, 11 May 2020 21:09:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589231380;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=NY1XJFJ2Rvqi+mx/4eGIROK01Agkif3wcfVYsifAoOA=;
 b=dMBJiWjHUWj1S8+2q18dPqhEMBVoimLp823VHnzC5S3I91X+lWEAeLtj
 4L0pREUs2KHD0kCd+mKoZ0d7UEFmLly+nti9QOtWfZBf8b6G5X961HnHS
 UE/cq7kfoR50WZPQ4tYf+eHS1W4U3jaNy0Uw+fsgKFbZqEimkQ6TyvEA6 Y=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: Y0mSk5D3d2QPa6BziZmUJM4wAqpHNCaDUNhNTJ/Mj7cdi28qB9rpEWt0r2/cLRinOYmj0SqhkX
 tCb5ZubbL8joJ7eVIyKHavYMoMBIrWF7ff2v9n8GFAya05kUeqS5MIkjGe04roRALvbQT6DLL4
 K3OY1Sty3HPdinX44mOob9evmdpM252zL3SnedTKqyfcBrb5OSToqlJxyoXKPYll5+Sv7jiwts
 098t0C1itG6RFZZJYbOFPUkR3zc/N/Y2rmDaEUoeNlCpVehaQHZxHxembJ5ACmGC7LrxUlqOeV
 dJo=
X-SBRS: 2.7
X-MesageID: 17523707
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,381,1583211600"; d="scan'208";a="17523707"
Subject: Re: [PATCH 12/16] x86/extable: Adjust extable handling to be shadow
 stack compatible
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-13-andrew.cooper3@citrix.com>
 <1e80c672-9308-f7ad-67ea-69d83d69bc03@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <974f631e-3a82-3da4-124d-f4bf2bef89e2@citrix.com>
Date: Mon, 11 May 2020 22:09:34 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <1e80c672-9308-f7ad-67ea-69d83d69bc03@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07/05/2020 14:35, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -778,6 +778,28 @@ static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>>                 vec_name(regs->entry_vector), regs->error_code,
>>                 _p(regs->rip), _p(regs->rip), _p(fixup));
>>  
>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>> +    {
>> +        unsigned long ssp;
>> +
>> +        asm ("rdsspq %0" : "=r" (ssp) : "0" (1) );
>> +        if ( ssp != 1 )
>> +        {
>> +            unsigned long *ptr = _p(ssp);
>> +
>> +            /* Search for %rip in the shadow stack, ... */
>> +            while ( *ptr != regs->rip )
>> +                ptr++;
> Wouldn't it be better to bound the loop, as it shouldn't search past
> (strictly speaking not even to) the next page boundary? Also you
> don't care about the top of the stack (being the to be restored SSP),
> do you? I.e. maybe
>
>             while ( *++ptr != regs->rip )
>                 ;
>
> ?
>
> And then - isn't searching for a specific RIP value alone prone to
> error, in case it matches an ordinary return address? I.e.
> wouldn't you better search for a matching RIP accompanied by a
> suitable pointer into the shadow stack and a matching CS value?
> Otherwise, ...
>
>> +            ASSERT(ptr[1] == __HYPERVISOR_CS);
> ... also assert that ptr[-1] points into the shadow stack?

So this is the problem I was talking about: the previous context's SSP
isn't stored anywhere helpful.

What we are in practice doing is looking 2 or 3 words up the shadow
stack (depending on exactly how deep our call graph is), to the shadow
IRET frame matching the real IRET frame which regs is pointing to.

Both IRET frames were pushed in the process of generating the exception,
and we've already matched regs->rip to the exception table record.  We
need to fix up regs->rip and the shadow lip to the fixup address.

As we are always fixing up an exception generated from Xen context, we
know that ptr[1] == __HYPERVISOR_CS and ptr[-1] == &ptr[2], as we haven't
switched shadow stack as part of taking this exception.  However, this
second point is fragile to exception handlers moving onto IST stacks.

We can't encounter regs->rip in the shadow stack between the current SSP
and the IRET frame we're looking to fix up, or we would have taken a
recursive fault and not reached exception_fixup() to begin with.

Therefore, the loop is reasonably bounded in all cases.

Sadly, there is no RDSS instruction, so we can't actually use shadow
stack reads to spot if we underflowed the shadow stack, and there is no
useful alternative to panic() if we fail to find the shadow IRET frame.
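The walk described above can be modelled in plain C.  This is an illustrative model of the loop's logic only, not the Xen implementation: a flat array stands in for the shadow stack, the CS constant is merely a stand-in for __HYPERVISOR_CS, and the real store uses wrssq rather than a plain write:

```c
#include <stddef.h>
#include <stdint.h>

#define HYPERVISOR_CS 0xe008ULL   /* illustrative stand-in for __HYPERVISOR_CS */

/* Model of the fixup walk: scan up the shadow stack from 'ssp' for the
 * shadow IRET frame whose saved LIP equals the faulting 'rip' and whose
 * next word is the hypervisor CS, then patch the LIP to the 'fixup'
 * address (wrssq in the real code).  'limit' bounds the walk, in the
 * spirit of Jan's suggestion; returns the patched slot, or NULL if no
 * matching frame is found within the bound. */
static uint64_t *fixup_shadow_lip(uint64_t *ssp, uint64_t rip,
                                  uint64_t fixup, size_t limit)
{
    for ( size_t i = 0; i + 1 < limit; i++ )
        if ( ssp[i] == rip && ssp[i + 1] == HYPERVISOR_CS )
        {
            ssp[i] = fixup;
            return &ssp[i];
        }
    return NULL;
}
```

Matching on both the LIP and the adjacent CS word models the stronger check discussed in the review, rather than matching the RIP value alone.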

>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -708,7 +708,16 @@ exception_with_ints_disabled:
>>          call  search_pre_exception_table
>>          testq %rax,%rax                 # no fixup code for faulting EIP?
>>          jz    1b
>> -        movq  %rax,UREGS_rip(%rsp)
>> +        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
>> +
>> +#ifdef CONFIG_XEN_SHSTK
>> +        mov    $1, %edi
>> +        rdsspq %rdi
>> +        cmp    $1, %edi
>> +        je     .L_exn_shstk_done
>> +        wrssq  %rax, (%rdi)             # fixup shadow stack
>> +.L_exn_shstk_done:
>> +#endif
> Again avoid the conditional jump by using alternatives patching?

Well - that depends on whether we're likely to gain any new content in
the pre exception table.

As it stands, it is only the IRET(s) to userspace, so it would be safe to
turn this into an unconditional alternative.  Even in the crash case, we
won't be returning to guest context after having started the crash
teardown path.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 21:42:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 21:42:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYGBO-000676-RU; Mon, 11 May 2020 21:41:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U+Ik=6Z=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYGBN-000671-9B
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 21:41:53 +0000
X-Inumbo-ID: 38ec6229-93d0-11ea-a25b-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38ec6229-93d0-11ea-a25b-12813bfff9fa;
 Mon, 11 May 2020 21:41:52 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 5FA83206E6;
 Mon, 11 May 2020 21:41:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589233311;
 bh=/fl3uka4BT5IGqZ+Q4Z4mtPFibQXcT2JNziQkXT1B1Y=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=2YJjy8kxgUoeP+gliRPlKr1nW7XHzo4oL9gHBwx3pAz42LG3n/w4NH/igzJJstIct
 HW4nO1ydZ86YE7re2cLTdml2H2TafeyC+oDL4wo5Mqhxuf/9gDzc8Ci/QsQsQ3UlFR
 LhIZV1ZAwA1Eky4PnR4qG9ZAi+2mr3HhB9+YdsHQ=
Date: Mon, 11 May 2020 14:41:50 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] xen/pvcalls-back: test for errors when calling
 backend_connect()
In-Reply-To: <20200511074231.19794-1-jgross@suse.com>
Message-ID: <alpine.DEB.2.21.2005111440210.26167@sstabellini-ThinkPad-T480s>
References: <20200511074231.19794-1-jgross@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 11 May 2020, Juergen Gross wrote:
> backend_connect() can fail, so switch the device to connected only if
> no error occurred.
> 
> Fixes: 0a9c75c2c7258f2 ("xen/pvcalls: xenbus state handling")
> Cc: stable@vger.kernel.org
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>  drivers/xen/pvcalls-back.c | 3 ++-
>  1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
> index cf4ce3e9358d..41a18ece029a 100644
> --- a/drivers/xen/pvcalls-back.c
> +++ b/drivers/xen/pvcalls-back.c
> @@ -1088,7 +1088,8 @@ static void set_backend_state(struct xenbus_device *dev,
>  		case XenbusStateInitialised:
>  			switch (state) {
>  			case XenbusStateConnected:
> -				backend_connect(dev);
> +				if (backend_connect(dev))
> +					return;

Do you think it would make sense to WARN?

>  				xenbus_switch_state(dev, XenbusStateConnected);
>  				break;
>  			case XenbusStateClosing:
> -- 
> 2.26.1
> 
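For concreteness, the transition under discussion can be modelled in userspace with stubbed-out kernel calls.  This is a toy sketch only: the stubs are made up, and the fprintf in the error path merely stands in for the WARN being proposed, which is not part of the applied patch:

```c
#include <stdio.h>

/* Stubs standing in for the kernel functions used in pvcalls-back.c. */
static int connect_rc;      /* simulated backend_connect() result  */
static int xenbus_state;    /* last state passed to switch_state   */

enum { XenbusStateConnected = 4 };  /* value from xen/interface/io/xenbus.h */

static int backend_connect(void) { return connect_rc; }
static void xenbus_switch_state(int state) { xenbus_state = state; }

/* The fixed transition: only advertise Connected when backend_connect()
 * succeeded; on failure, optionally warn and stay in the old state. */
static void on_connected(void)
{
    int err = backend_connect();

    if ( err )
    {
        fprintf(stderr, "pvcalls-back: connect failed: %d\n", err);
        return;
    }
    xenbus_switch_state(XenbusStateConnected);
}
```

The point of the fix is visible in the model: without the early return, the device would claim XenbusStateConnected even when backend_connect() failed.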


From xen-devel-bounces@lists.xenproject.org Mon May 11 21:46:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 21:46:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYGFO-0006Gv-CQ; Mon, 11 May 2020 21:46:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYGFM-0006Gp-N6
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 21:46:00 +0000
X-Inumbo-ID: cd5bf946-93d0-11ea-a25b-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd5bf946-93d0-11ea-a25b-12813bfff9fa;
 Mon, 11 May 2020 21:46:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589233560;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=uRyRnhqFgzKxenC2tvIs2NePODGr3rM03NA2zgymcN4=;
 b=A2ejr17omt27E73jFOn8TgxW8PfImrUO/T6DHDRHhDJBMeanxwrDTWmK
 5aKtktiku4fomMcxRdigfUPpK+o7pvubPRcVys3ZAjXRcQOGLAd/zBpnf
 u33FOQNK6gqfVgSLDnk5xYbCMJfyQTx657enIgxDGsDh5iTHTb8WwdcBu I=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: L8L0g6U9Zw1f49zs/whRgmEYhOV4Ql4ZVnjDELpjs82Ywf8cYHtxZf9QMYflpUQXbRCoQxbjd1
 bGXAm/DTTGXkuiKM/gHVx/5WHWt8z3+6alE0VXrfk0rdiYZ34YpR5wKEoDgnJL42e52ZynzfuO
 dGlqKKYo7+1TH3sjkPE4zdi/srSroMC6N+wxqQj5j4Zpx27dtUU2ekuORJH9AaDtBZoRATUK28
 E3KPxQtRQnpoh4goV8liaOiMFelunqPlHw/easV2N+LM8NZlwyXTvMxCEMbZoA7mGha5wdGgoa
 GtA=
X-SBRS: 2.7
X-MesageID: 17253186
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,381,1583211600"; d="scan'208";a="17253186"
Subject: Re: [PATCH 15/16] x86/entry: Adjust guest paths to be shadow stack
 compatible
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-16-andrew.cooper3@citrix.com>
 <2df78612-2c24-32de-186a-c402e188478c@suse.com>
 <70d7b0e0-a599-6a19-5ace-af4f169545b3@citrix.com>
 <fa78f626-18a1-bd95-b446-8ade5e9282a6@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b2afaa93-c738-dcfd-cbc7-147e48cd24ee@citrix.com>
Date: Mon, 11 May 2020 22:45:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <fa78f626-18a1-bd95-b446-8ade5e9282a6@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07/05/2020 17:15, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/x86_64/entry.S
>>>> +++ b/xen/arch/x86/x86_64/entry.S
>>>> @@ -194,6 +194,15 @@ restore_all_guest:
>>>>          movq  8(%rsp),%rcx            # RIP
>>>>          ja    iret_exit_to_guest
>>>>  
>>>> +        /* Clear the supervisor shadow stack token busy bit. */
>>>> +.macro rag_clrssbsy
>>>> +        push %rax
>>>> +        rdsspq %rax
>>>> +        clrssbsy (%rax)
>>>> +        pop %rax
>>>> +.endm
>>>> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>>> In principle you could get away without spilling %rax:
>>>
>>>         cmpl  $1,%ecx
>>>         ja    iret_exit_to_guest
>>>
>>>         /* Clear the supervisor shadow stack token busy bit. */
>>> .macro rag_clrssbsy
>>>         rdsspq %rcx
>>>         clrssbsy (%rcx)
>>> .endm
>>>         ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>>>         movq  8(%rsp),%rcx            # RIP
>>>         cmpw  $FLAT_USER_CS32,16(%rsp)# CS
>>>         movq  32(%rsp),%rsp           # RSP
>>>         je    1f
>>>         sysretq
>>> 1:      sysretl
>>>
>>>         ALIGN
>>> /* No special register assumptions. */
>>> iret_exit_to_guest:
>>>         movq  8(%rsp),%rcx            # RIP
>>>         andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
>>>         ...
>>>
>>> Also - what about CLRSSBSY failing? It would seem easier to diagnose
>>> this right here than when getting presumably #DF upon next entry into
>>> Xen. At the very least I think it deserves a comment if an error case
>>> does not get handled.
>> I did consider this, but ultimately decided against it.
>>
>> You can't have an unlikely block inside an alternative block because the
>> jmp's displacement doesn't get fixed up.
> We do fix up unconditional JMP/CALL displacements; I don't
> see why we couldn't also do so for conditional ones.

Only for the first instruction in the block.

We do not decode the entire block of instructions and fix up each
displacement.

>
>>   Keeping everything inline puts
>> an incorrect statically-predicted branch in program flow.
>>
>> Most importantly however, is that the SYSRET path is vastly less common
>> than the IRET path.  There is no easy way to proactively spot problems
>> in the IRET path, which means that conditions leading to a problem are
>> already far more likely to manifest as #DF, so there is very little
>> value in adding complexity to the SYSRET path in the first place.
> The SYSRET path being uncommon is a problem by itself imo, if
> that's indeed the case. I'm sure I've suggested before that
> we convert frames to TRAP_syscall ones whenever possible,
> such that we wouldn't go the slower IRET path.

It is not possible to convert any.

The opportunistic SYSRET logic in Linux loses you performance in
reality.  It's just that the extra conditionals are very highly predicted
and totally dominated by the ring transition cost.

You can create a synthetic test case where the opportunistic logic is a
performance win, but the chance of encountering real-world code where
TRAP_syscall is clear and %r11 and %rcx happen to match flags/rip is
about 1 in 2^128.

It is very much not worth the extra code and cycles taken to implement.

>>> Somewhat similar for SETSSBSY, except there things get complicated by
>>> it raising #CP instead of setting EFLAGS.CF: Aiui it would require us
>>> to handle #CP on an IST stack in order to avoid #DF there.
>> Right, but having #CP as IST gives us far worse problems.
>>
>> Being able to spot #CP vs #DF doesn't help usefully.  It's still some
>> arbitrary period of time after the damage was done.
>>
>> Any nesting of #CP (including fault on IRET out) results in losing
>> program state and entering an infinite loop.
>>
>> The cases which end up as #DF are properly fatal to the system, and we
>> at least get a clean crash out of it.
> May I suggest that all of this gets spelled out in at least
> the description of the patch, so that it can be properly
> understood (and, if need be, revisited) later on?

Is this really the right patch to do that?

I do eventually plan to put a whole load of these kinds of details into
the hypervisor guide.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 11 22:37:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 22:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYH2x-0001xk-Bs; Mon, 11 May 2020 22:37:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U+Ik=6Z=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYH2w-0001xf-5v
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 22:37:14 +0000
X-Inumbo-ID: f4fc208d-93d7-11ea-a260-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4fc208d-93d7-11ea-a260-12813bfff9fa;
 Mon, 11 May 2020 22:37:13 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id B0AEC2070B;
 Mon, 11 May 2020 22:37:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589236633;
 bh=faqbtFB4H4RaJrawnQWAiM8zc0Du1EsVBGZThV+qQi0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=MGwM7pHEkN2CUXvMpM7AhYZC10Bn4P4gPwXAgo9kqZWzIFyzjhm5/cLCg9oJnM5bB
 CcRfy0+Yc2Y/ctFd83IU/VS0gYNDh75AyD+OxCuKYzrCQoNt9biGq29oAnEnNDv9nI
 eR4y3z+BLrEPqtkoI7BvPRNbJueHA9J9AE1/yB4w=
Date: Mon, 11 May 2020 15:37:12 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
In-Reply-To: <f4e1cc2b-97bf-d242-8f1b-e72083f378be@citrix.com>
Message-ID: <alpine.DEB.2.21.2005111534160.26167@sstabellini-ThinkPad-T480s>
References: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
 <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
 <f4e1cc2b-97bf-d242-8f1b-e72083f378be@citrix.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1829641898-1589236632=:26167"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--8323329-1829641898-1589236632=:26167
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Mon, 11 May 2020, Andrew Cooper wrote:
> On 11/05/2020 10:34, Julien Grall wrote:
> > Hi Volodymyr,
> >
> > On 06/05/2020 02:44, Volodymyr Babchuk wrote:
> >> Normal World can share buffer with OP-TEE for two reasons:
> >> 1. Some client application wants to exchange data with TA
> >> 2. OP-TEE asks for shared buffer for internal needs
> >>
> >> The second case was handled more strictly than necessary:
> >>
> >> 1. In RPC request OP-TEE asks for buffer
> >> 2. NW allocates buffer and provides it via RPC response
> >> 3. Xen pins pages and translates data
> >> 4. Xen provides buffer to OP-TEE
> >> 5. OP-TEE uses it
> >> 6. OP-TEE sends request to free the buffer
> >> 7. NW frees the buffer and sends the RPC response
> >> 8. Xen unpins pages and forgets about the buffer
> >>
> >> The problem is that Xen should forget about the buffer between stages
> >> 6 and 7. I.e. the right flow should be like this:
> >>
> >> 6. OP-TEE sends request to free the buffer
> >> 7. Xen unpins pages and forgets about the buffer
> >> 8. NW frees the buffer and sends the RPC response
> >>
> >> This is because OP-TEE internally frees the buffer before sending the
> >> "free SHM buffer" request. So we have no reason to hold a reference to
> >> this buffer anymore. Moreover, in multiprocessor systems the NW has
> >> time to reuse the buffer cookie for another buffer. Xen complained
> >> about this and denied the new buffer registration. I have seen this
> >> issue while running tests on an iMX SoC.
> >>
> >> So, this patch basically corrects that behavior by freeing the buffer
> >> earlier, when handling RPC return from OP-TEE.
> >>
> >> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> >> ---
> >>   xen/arch/arm/tee/optee.c | 24 ++++++++++++++++++++----
> >>   1 file changed, 20 insertions(+), 4 deletions(-)
> >>
> >> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> >> index 6a035355db..af19fc31f8 100644
> >> --- a/xen/arch/arm/tee/optee.c
> >> +++ b/xen/arch/arm/tee/optee.c
> >> @@ -1099,6 +1099,26 @@ static int handle_rpc_return(struct
> >> optee_domain *ctx,
> >>           if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
> >>               call->rpc_buffer_type =
> >> shm_rpc->xen_arg->params[0].u.value.a;
> >>   +        /*
> >> +         * OP-TEE signals that it frees the buffer that it requested
> >> +         * before. This is the right time for us to do the same.
> >> +         */
> >> +        if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
> >> +        {
> >> +            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
> >> +
> >> +            free_optee_shm_buf(ctx, cookie);
> >> +
> >> +            /*
> >> +             * This should never happen. We have a bug either in the
> >> +             * OP-TEE or in the mediator.
> >> +             */
> >> +            if ( call->rpc_data_cookie && call->rpc_data_cookie !=
> >> cookie )
> >> +                gprintk(XENLOG_ERR,
> >> +                        "Saved RPC cookie does not corresponds to
> >> OP-TEE's (%"PRIx64" != %"PRIx64")\n",
> >
> > s/corresponds/correspond/
> >
> >> +                        call->rpc_data_cookie, cookie);
> >
> > IIUC, if you free the wrong SHM buffer then your guest is likely to be
> > running incorrectly afterwards. So shouldn't we crash the guest to
> > avoid further issues?
> 
> No - crashing the guest prohibits testing of the interface, and/or the
> guest realising it screwed up and dumping enough state to usefully debug
> what is going on.
> 
> Furthermore, if userspace could trigger this path, we'd have to issue an
> XSA.
> 
> Crashing the guest is almost never the right thing to do, and definitely
> not appropriate for a bad parameter.

Maybe we want to close the OPTEE interface for the guest, instead of
crashing the whole VM. I.e. freeing the OPTEE context for the domain
(d->arch.tee)?

But I think the patch is good as it is honestly.
--8323329-1829641898-1589236632=:26167--


From xen-devel-bounces@lists.xenproject.org Mon May 11 22:58:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 22:58:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYHNM-0003et-0F; Mon, 11 May 2020 22:58:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nZYY=6Z=xilinx.com=stefanos@srs-us1.protection.inumbo.net>)
 id 1jYHNK-0003eo-Iv
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 22:58:18 +0000
X-Inumbo-ID: e6fd6876-93da-11ea-b07b-bc764e2007e4
Received: from NAM02-BL2-obe.outbound.protection.outlook.com (unknown
 [40.107.75.42]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6fd6876-93da-11ea-b07b-bc764e2007e4;
 Mon, 11 May 2020 22:58:17 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PiU3pPYgCT8zd5wAa2Mm1Bs7sDX8BPvvqIw8JdlrLGfyBdj70i1SRxLnnium+u1/+pRbjvcD445rHMLVoM/S+yxSmdZTLFK/f5uAO2sP74E6L4KTDYVn8Gr9WYvZSOLoe88OAj0vb2zTM2PgFPCTmiv7q/sg2XduRByNZc4H4KEziMLNTw7f9sHtUcuzUbSnSW/SPMoGMSqwpZRjUdMODZvfOHBeLdbKBmjE/dYFD6S+VbQzvo7GaHI/38HA0jHSZ4thfR4pVtZSoV54VSf0Ce6xpV+SfVnbs0TbcPx8JF2xf6pVHZdpbbjNYXcnlmou7zjWnYNFQ/oM+5Onos4qIQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K5CEMu3z68dH/mrbR1Lvb8MxsL5xs/aDZWZz3J8TNTY=;
 b=Ubvb4aMUbcNVLQaqIScRxF6o4px3aG4OYfTW86630kNEhd2SKes+an6cf+HJRnUf1x9HWxqgaLluYiA/RqQObPEmKUJlFnAo0i/K7u5QhInFfbmNnyKsLKSXfNpBk5pxEwOFUzoWmS9WfXzzhwUjrUMlVw14qE9gq5hgT0fU34orYjNZJZawO+wmvAB3kN2wNOjv7Acqe60xJ8bn85GXER7o2IOTJ1ZIyiRERDuUP2fJrYV9+PH+oE3ql3RVBMSkGnUXGZgcJS1HYqDNQJCFm+45+BFAOAk3qXXc19/hdGvw0FwQZBecR9VrcSMUNXgylo6Er8N7FGr+/Sbonpx4rw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 149.199.60.83) smtp.rcpttodomain=epam.com smtp.mailfrom=xilinx.com;
 dmarc=bestguesspass action=none header.from=xilinx.com; dkim=none (message
 not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=xilinx.onmicrosoft.com; s=selector2-xilinx-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K5CEMu3z68dH/mrbR1Lvb8MxsL5xs/aDZWZz3J8TNTY=;
 b=QkiXt4TtLDKaS9ZIQFGmlizuLnTdfmU/ofnUewFpPws8ISfhISPBezh+7vX9WHLmqNnIZkvLCDcCOXcDukrS3ynznPrZDXE/QQjtyHILg5Hh49e4ohBfhvSeig6KkSvAxa6VNVYcNNp7H5Vr/MoFcOeIDwYupbm1aspRIOGzrDs=
Received: from MN2PR12CA0024.namprd12.prod.outlook.com (2603:10b6:208:a8::37)
 by BY5PR02MB7060.namprd02.prod.outlook.com (2603:10b6:a03:236::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.35; Mon, 11 May
 2020 22:58:15 +0000
Received: from BL2NAM02FT005.eop-nam02.prod.protection.outlook.com
 (2603:10b6:208:a8:cafe::b9) by MN2PR12CA0024.outlook.office365.com
 (2603:10b6:208:a8::37) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.28 via Frontend
 Transport; Mon, 11 May 2020 22:58:15 +0000
Authentication-Results: spf=pass (sender IP is 149.199.60.83)
 smtp.mailfrom=xilinx.com; epam.com; dkim=none (message not signed)
 header.d=none;epam.com; dmarc=bestguesspass action=none
 header.from=xilinx.com;
Received-SPF: Pass (protection.outlook.com: domain of xilinx.com designates
 149.199.60.83 as permitted sender) receiver=protection.outlook.com;
 client-ip=149.199.60.83; helo=xsj-pvapsmtpgw01;
Received: from xsj-pvapsmtpgw01 (149.199.60.83) by
 BL2NAM02FT005.mail.protection.outlook.com (10.152.76.252) with Microsoft SMTP
 Server id 15.20.2979.29 via Frontend Transport; Mon, 11 May 2020 22:58:14
 +0000
Received: from [149.199.38.66] (port=34857 helo=xsj-pvapsmtp01)
 by xsj-pvapsmtpgw01 with esmtp (Exim 4.90)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jYHN0-0004nZ-Dp; Mon, 11 May 2020 15:57:58 -0700
Received: from [127.0.0.1] (helo=localhost)
 by xsj-pvapsmtp01 with smtp (Exim 4.63)
 (envelope-from <stefano.stabellini@xilinx.com>)
 id 1jYHNF-0001MK-Sm; Mon, 11 May 2020 15:58:13 -0700
Received: from xsj-pvapsmtp01 (mail.xilinx.com [149.199.38.66] (may be forged))
 by xsj-smtp-dlp2.xlnx.xilinx.com (8.13.8/8.13.1) with ESMTP id 04BMw9pn002845; 
 Mon, 11 May 2020 15:58:09 -0700
Received: from [172.19.222.70] (helo=localhost)
 by xsj-pvapsmtp01 with esmtp (Exim 4.63)
 (envelope-from <stefanos@xilinx.com>)
 id 1jYHNB-0001M3-Fc; Mon, 11 May 2020 15:58:09 -0700
Date: Mon, 11 May 2020 15:58:09 -0700 (PDT)
From: Stefano Stabellini <stefano.stabellini@xilinx.com>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 10/12] xen/arm: if is_domain_direct_mapped use native
 UART address for vPL011
In-Reply-To: <8c01cb1a-0745-3eca-a45d-09c9297163ce@xen.org>
Message-ID: <alpine.DEB.2.21.2005111543330.26167@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-10-sstabellini@kernel.org>
 <05b46414-12c3-5f79-f4b1-46cf8750d28c@xen.org>
 <alpine.DEB.2.21.2004301319380.28941@sstabellini-ThinkPad-T480s>
 <7176c924-eb16-959e-53cd-c73db88f65db@xen.org>
 <alpine.DEB.2.21.2005081601400.26167@sstabellini-ThinkPad-T480s>
 <8c01cb1a-0745-3eca-a45d-09c9297163ce@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-RCIS-Action: ALLOW
X-TM-AS-Product-Ver: IMSS-7.1.0.1224-8.2.0.1013-23620.005
X-TM-AS-User-Approved-Sender: Yes;Yes
X-EOPAttributedMessage: 0
X-MS-Office365-Filtering-HT: Tenant
X-Forefront-Antispam-Report: CIP:149.199.60.83; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:xsj-pvapsmtpgw01; PTR:unknown-60-83.xilinx.com; CAT:NONE;
 SFTY:;
 SFS:(7916004)(376002)(346002)(39860400002)(136003)(396003)(46966005)(33430700001)(54906003)(33716001)(70586007)(53546011)(26005)(316002)(186003)(47076004)(82310400002)(33440700001)(82740400003)(70206006)(8676002)(426003)(8936002)(356005)(478600001)(81166007)(2906002)(9786002)(9686003)(4326008)(44832011)(336012)(6916009)(5660300002)(42866002);
 DIR:OUT; SFP:1101; 
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dea80352-44ca-4e90-0f26-08d7f5fec942
X-MS-TrafficTypeDiagnostic: BY5PR02MB7060:
X-Microsoft-Antispam-PRVS: <BY5PR02MB70602B158118E359DB604DBFA0A10@BY5PR02MB7060.namprd02.prod.outlook.com>
X-Auto-Response-Suppress: DR, RN, NRN, OOF, AutoReply
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-Forefront-PRVS: 04004D94E2
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: MaeGaLXcpgKumLqDb11fEekkt9OjEYr7kXp2RY/pQjiX4clHBjnqyB/CXGqykRN4HyWcen17kdC/d1AOs/W6EDuIzZuF/WsixFQXM/w/4e3fOAxnHjvVhXrk1AGQBRo+Dq0QhYNL9NY3EPVIMJzt+Wclhf7d6N43dMsXBlB3Ld05/KW4TawytQgOscudEqnGs7RofKyD4plJgOcj8hr75kCSoZLjjg7WGI/SiSgaisdRgxtFsTW2K37NEeXYvBgMS3f+kLFt7FIOgDtYo8kOVw4ckhEuEmqz1ckHZJooZ4dsxyvAA7ePYHdXq8WR4bXKcJVwrlidtYPAK9mJoffhH47hMINlWprPLEAVARdcGVeVzG2YxqWUL/mBGst+bzLh7i7jJLicdXtFCkrXWTP3zDSfFYZbiOSAhmCQbgOsrHjlUYYTAwvHrvb1maA0JBtAvT5CXSUUlr+6yDAsBj2h/UQEGZMLY3H9Wn0TQ1lxPOCB119pl6RZV0VcV2iuT4YpN1ECr3c6+MDsmgM6w0lsvxo6rH5yh3kpkUpZ6VpenAP1lcNETJIiHvTj5mPxv4vhSQ8tia1tje0sk8Vvmnc+2aLlFqKul1yVbIWCP6rBeVw=
X-OriginatorOrg: xilinx.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2020 22:58:14.3182 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dea80352-44ca-4e90-0f26-08d7f5fec942
X-MS-Exchange-CrossTenant-Id: 657af505-d5df-48d0-8300-c31994686c5c
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=657af505-d5df-48d0-8300-c31994686c5c; Ip=[149.199.60.83];
 Helo=[xsj-pvapsmtpgw01]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR02MB7060
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, 9 May 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 09/05/2020 01:07, Stefano Stabellini wrote:
> > On Fri, 1 May 2020, Julien Grall wrote:
> > > On 01/05/2020 02:26, Stefano Stabellini wrote:
> > > > On Wed, 15 Apr 2020, Julien Grall wrote:
> > > > > Hi Stefano,
> > > > > 
> > > > > On 15/04/2020 02:02, Stefano Stabellini wrote:
> > > > > > We always use a fixed address to map the vPL011 to domains. The
> > > > > > address could be a problem for domains that are directly mapped.
> > > > > > 
> > > > > > Instead, for domains that are directly mapped, reuse the address of
> > > > > > the
> > > > > > physical UART on the platform to avoid potential clashes.
> > > > > 
> > > > > How do you know the physical UART MMIO region is big enough to fit the
> > > > > PL011?
> > > > 
> > > > That cannot be because the vPL011 MMIO size is 1 page, which is the
> > > > minimum right?
> > > 
> > > No, there are platforms out there with multiple UARTs in the same
> > > page (see sunxi for instance).
> > 
> > But if there are multiple UARTs sharing the same page, and the first one
> > is used by Xen, there is no way to assign one of the secondary UARTs to
> > a domU. So there would be no problem choosing the physical UART address
> > for the virtual PL011.
> 
> AFAICT, nothing prevents a user from assigning such a UART to a dom0less
> guest today. It would not be safe, but it should work.
>
> If you want to make it safe, then you would need to trap the MMIO accesses
> so they can be sanitized. For a UART device, I don't think the overhead
> would be too bad.
> 
> Anyway, the only thing I request is to add a sanity check in the code to
> help the user diagnose any potential clash.

OK thanks for clarifying, I'll do that.


From xen-devel-bounces@lists.xenproject.org Mon May 11 23:17:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 23:17:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYHfI-0005NL-Ls; Mon, 11 May 2020 23:16:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nxab=6Z=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYHfG-0005NG-NI
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 23:16:50 +0000
X-Inumbo-ID: 79eaf37c-93dd-11ea-a264-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79eaf37c-93dd-11ea-a264-12813bfff9fa;
 Mon, 11 May 2020 23:16:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wP/dpOcgzHbUBVe5iJrK8wIdmf08HKhin1dtSPyvvUs=; b=0/hnntx5+gr+yGlELEi7/cq4N
 7zD0l3QEV8aju1dqYmbs2CROVjU38GES6yjEyCtxf79ff4GNgZ/+K/MinZQMmmnz28PBFHFSRiFIs
 i64b5180gCX/QYlA6cWvlMXP5/Lsg99dFXN38ER9Q3+uvLjjMJOyvx4LPUKXigSka4lOU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYHf8-0002zZ-Sr; Mon, 11 May 2020 23:16:42 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYHf8-000888-Ja; Mon, 11 May 2020 23:16:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYHf8-0006fO-If; Mon, 11 May 2020 23:16:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150135-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150135: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-armhf-armhf-xl-arndale:xen-boot:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
 xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=190c60f12db469472476041ecd0e6c9a0d4b0f8a
X-Osstest-Versions-That: xen=190c60f12db469472476041ecd0e6c9a0d4b0f8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 11 May 2020 23:16:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150135 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150135/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 150128 pass in 150135
 test-armhf-armhf-xl-arndale   7 xen-boot                   fail pass in 150128
 test-amd64-amd64-xl-rtds     16 guest-localmigrate         fail pass in 150128
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 150128

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail in 150128 blocked in 150135
 test-armhf-armhf-xl-arndale 13 migrate-support-check fail in 150128 never pass
 test-armhf-armhf-xl-arndale 14 saverestore-support-check fail in 150128 never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150128
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150128
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150128
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150128
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150128
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150128
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150128
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150128
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150128
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  190c60f12db469472476041ecd0e6c9a0d4b0f8a
baseline version:
 xen                  190c60f12db469472476041ecd0e6c9a0d4b0f8a

Last test of basis   150135  2020-05-11 10:14:21 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 11 23:46:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 11 May 2020 23:46:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYI84-0007r8-23; Mon, 11 May 2020 23:46:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQL3=6Z=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYI82-0007r3-V5
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 23:46:35 +0000
X-Inumbo-ID: a4b427a0-93e1-11ea-ae69-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4b427a0-93e1-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 23:46:33 +0000 (UTC)
Subject: Re: [PATCH 16/16] x86/shstk: Activate Supervisor Shadow Stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-17-andrew.cooper3@citrix.com>
 <eacafb0a-a049-5bca-7a43-c9c3deb26054@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b2e2c7ec-6ed7-b77d-5e0a-6b0f716c451d@citrix.com>
Date: Tue, 12 May 2020 00:46:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <eacafb0a-a049-5bca-7a43-c9c3deb26054@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 07/05/2020 15:54, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
>> --- a/xen/arch/x86/acpi/wakeup_prot.S
>> +++ b/xen/arch/x86/acpi/wakeup_prot.S
>> @@ -1,3 +1,8 @@
>> +#include <asm/config.h>
> Why is this needed? Afaics assembly files, just like C ones, get
> xen/config.h included from the compiler command line.

I'll double check, but I do recall it being necessary.

>> @@ -48,6 +59,48 @@ ENTRY(s3_resume)
>>          pushq   %rax
>>          lretq
>>  1:
>> +#ifdef CONFIG_XEN_SHSTK
>> +	/*
>> +         * Restoring SSP is a little convoluted, because we are intercepting
>> +         * the middle of an in-use shadow stack.  Write a temporary supervisor
>> +         * token under the stack, so SETSSBSY takes us where we want, then
>> +         * reset MSR_PL0_SSP to its usual value and pop the temporary token.
> What do you mean by "takes us where we want"? I take it "us" is really
> SSP here?

Load the SSP that we want.  SETSSBSY is the only instruction which can
do a fairly arbitrary load of SSP, but it still has to complete the
check and activation of the supervisor token.

>> +         */
>> +        mov     saved_rsp(%rip), %rdi
>> +        cmpq    $1, %rdi
>> +        je      .L_shstk_done
>> +
>> +        /* Write a supervisor token under SSP. */
>> +        sub     $8, %rdi
>> +        mov     %rdi, (%rdi)
>> +
>> +        /* Load it into MSR_PL0_SSP. */
>> +        mov     $MSR_PL0_SSP, %ecx
>> +        mov     %rdi, %rdx
>> +        shr     $32, %rdx
>> +        mov     %edi, %eax
>> +
>> +        /* Enable CET. */
>> +        mov     $MSR_S_CET, %ecx
>> +        xor     %edx, %edx
>> +        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
>> +        wrmsr
>> +
>> +        /* Activate our temporary token. */
>> +        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ebx
>> +        mov     %rbx, %cr4
>> +        setssbsy
>> +
>> +        /* Reset MSR_PL0_SSP back to its expected value. */
>> +        and     $~(STACK_SIZE - 1), %eax
>> +        or      $0x5ff8, %eax
>> +        wrmsr
> Ahead of this WRMSR neither %ecx nor %edx look to have their intended
> values anymore. Also there is a again a magic 0x5ff8 here (and at
> least one more further down).

There is another bug in this version which I spotted and fixed.  The
write of the supervisor shadow stack token has to be done after CET is
enabled and with WRSSQ because the mapping is already read-only.

>> --- a/xen/arch/x86/boot/x86_64.S
>> +++ b/xen/arch/x86/boot/x86_64.S
>> @@ -28,8 +28,36 @@ ENTRY(__high_start)
>>          lretq
>>  1:
>>          test    %ebx,%ebx
>> -        jnz     start_secondary
>> +        jz      .L_bsp
>>  
>> +        /* APs.  Set up shadow stacks before entering C. */
>> +
>> +        testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
>> +                CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
>> +        je      .L_ap_shstk_done
>> +
>> +        mov     $MSR_S_CET, %ecx
>> +        xor     %edx, %edx
>> +        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
>> +        wrmsr
>> +
>> +        mov     $MSR_PL0_SSP, %ecx
>> +        mov     %rsp, %rdx
>> +        shr     $32, %rdx
>> +        mov     %esp, %eax
>> +        and     $~(STACK_SIZE - 1), %eax
>> +        or      $0x5ff8, %eax
>> +        wrmsr
>> +
>> +        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
>> +        mov     %rcx, %cr4
>> +        setssbsy
> Since the token doesn't get written here, could you make the comment
> say where this happens? I have to admit that I had to go through
> earlier patches to find it again.

Ok.

>
>> +.L_ap_shstk_done:
>> +        call    start_secondary
>> +        BUG     /* start_secondary() shouldn't return. */
> This conversion from a jump to CALL is unrelated and hence would
> better be mentioned in the description imo.
>
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -323,6 +323,11 @@ void __init early_cpu_init(void)
>>  	       x86_cpuid_vendor_to_str(c->x86_vendor), c->x86, c->x86,
>>  	       c->x86_model, c->x86_model, c->x86_mask, eax);
>>  
>> +	if (c->cpuid_level >= 7) {
>> +		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
>> +		c->x86_capability[cpufeat_word(X86_FEATURE_CET_SS)] = ecx;
>> +	}
> How about moving the leaf 7 code from generic_identify() here as
> a whole?

In the past, we've deliberately not done that to avoid code gaining a
reliance on the pre-cached values.

I have a plan to rework this substantially when I move microcode loading
to the start of __start_xen(), at which point early_cpu_init() will
disappear and become the BSP's regular cpu_init().

Until then, we shouldn't cache unnecessary leaves this early.

>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -664,6 +664,13 @@ static void __init noreturn reinit_bsp_stack(void)
>>      stack_base[0] = stack;
>>      memguard_guard_stack(stack);
>>  
>> +    if ( cpu_has_xen_shstk )
>> +    {
>> +        wrmsrl(MSR_PL0_SSP, (unsigned long)stack + 0x5ff8);
>> +        wrmsrl(MSR_S_CET, CET_SHSTK_EN | CET_WRSS_EN);
>> +        asm volatile ("setssbsy" ::: "memory");
>> +    }
> Same as for APs - a brief comment pointing at where the token was
> written would seem helpful.
>
> Could you also have the patch description say a word on the choice
> of enabling CET_WRSS_EN uniformly and globally?

That is an area for possible improvement.  For now, it is unilaterally
enabled for simplicity.

None of the places where we need to use WRSSQ are fast paths: it is used
in extable recovery, the S3 path, and enable_nmi().

We can get away with an RDMSR/WRMSR/WRMSR sequence, which keeps us safe
from ROP gadgets and from the problems of poisoning a read-mostly default.

>> @@ -985,6 +992,21 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>      /* This must come before e820 code because it sets paddr_bits. */
>>      early_cpu_init();
>>  
>> +    /* Choose shadow stack early, to set infrastructure up appropriately. */
>> +    if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
>> +    {
>> +        printk("Enabling Supervisor Shadow Stacks\n");
>> +
>> +        setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
>> +#ifdef CONFIG_PV32
>> +        if ( opt_pv32 )
>> +        {
>> +            opt_pv32 = 0;
>> +            printk("  - Disabling PV32 due to Shadow Stacks\n");
>> +        }
>> +#endif
> I think this deserves an explanation, either in a comment or in
> the patch description.

Probably both.

>
>> @@ -1721,6 +1743,10 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>  
>>      alternative_branches();
>>  
>> +    /* Defer CR4.CET until alternatives have finished playing with CR4.WP */
>> +    if ( cpu_has_xen_shstk )
>> +        set_in_cr4(X86_CR4_CET);
> Nit: CR0.WP (in the comment)

Oops.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 12 00:34:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 00:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYIsU-00046v-Tj; Tue, 12 May 2020 00:34:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EIy0=62=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYIsS-00046q-Qf
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 00:34:32 +0000
X-Inumbo-ID: 57bde9de-93e8-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57bde9de-93e8-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 00:34:30 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYIsP-00059K-VL; Tue, 12 May 2020 00:34:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYIsP-0003ID-H4; Tue, 12 May 2020 00:34:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYIsP-0001xw-DN; Tue, 12 May 2020 00:34:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150136-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150136: regressions - FAIL
X-Osstest-Failures: linux-5.4:build-arm64:xen-build:fail:regression
 linux-5.4:test-amd64-amd64-xl-multivcpu:guest-localmigrate/x10:fail:heisenbug
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 linux-5.4:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
 linux-5.4:build-arm64-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
 linux-5.4:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=f015b86259a520ad886523d9ec6fdb0ed80edc38
X-Osstest-Versions-That: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 00:34:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150136 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150136/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build      fail in 150130 REGR. vs. 150104

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-multivcpu 18 guest-localmigrate/x10    fail pass in 150130

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-start.2  fail in 150130 REGR. vs. 150104

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 150130 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 150130 n/a
 build-arm64-libvirt           1 build-check(1)           blocked in 150130 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 150130 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 150130 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 150130 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 150130 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 150130 n/a
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150086
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150104
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                f015b86259a520ad886523d9ec6fdb0ed80edc38
baseline version:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5

Last test of basis   150104  2020-05-09 09:48:15 Z    2 days
Testing same since   150122  2020-05-10 08:40:44 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Ma <aaron.ma@canonical.com>
  AceLan Kao <acelan.kao@canonical.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@linaro.org>
  Alexei Starovoitov <ast@kernel.org>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andrzej Hajda <a.hajda@samsung.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Andy Yan <andy.yan@rock-chips.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brendan Higgins <brendanhiggins@google.com>
  Brian Cain <bcain@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Christoph Hellwig <hch@lst.de>
  David S. Miller <davem@davemloft.net>
  Doug Berger <opendmb@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Jere Leppänen <jere.leppanen@nokia.com>
  Jeremie Francois (on alpha) <jeremie.francois@gmail.com>
  Jia He <justin.he@arm.com>
  Jiri Slaby <jslaby@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  Julien Beraud <julien.beraud@orolia.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lee Jones <lee.jones@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Chamberlain <mcgrof@kernel.org>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Matt Roper <matthew.d.roper@intel.com>
  Matthias Blankertz <matthias.blankertz@cetitec.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Pavel Machek <pavel@denx.de>
  Qian Cai <cai@lca.pw>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roman Gilg <subdiff@gmail.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Sandeep Raghuraman <sandy.8925@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Thomas Pedersen <thomas@adapt-ip.com>
  Tuowen Zhao <ztuowen@gmail.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Zhan Liu <zhan.liu@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1335 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 12 01:11:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 01:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYJRY-0008PS-S9; Tue, 12 May 2020 01:10:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4tCB=62=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYJRY-0008PN-9w
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 01:10:48 +0000
X-Inumbo-ID: 696d9652-93ed-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 696d9652-93ed-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 01:10:47 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A204320722;
 Tue, 12 May 2020 01:10:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589245847;
 bh=cSFwD1+I8gzgQ6JIea/1dIP90y0woUlDtgmTMw5D0xM=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=AfNUWN+eHXwrPKR8FX/olDTjta3/5LcQk8tqZcyhZcyeX8N5RQ6EGcKTSrpQuvdXk
 WrkiaPVmgRnCvAGrAFmrPzYIBEyP9wcO8zzfeyISn/8hvT0pu3mOxkX0Ih5lssOi7t
 Xvz/yO4ZpjVVup5B36zErOZ/VMkXYmZkeQXyWTvU=
Date: Mon, 11 May 2020 18:10:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 05/12] xen: introduce reserve_heap_pages
In-Reply-To: <86e8fa89-c6f5-6c9e-4f3e-7f98e8e12c6a@xen.org>
Message-ID: <alpine.DEB.2.21.2005111750240.26167@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-5-sstabellini@kernel.org>
 <3129ab49-5898-9d2e-8fbb-d1fcaf6cdec7@suse.com>
 <alpine.DEB.2.21.2004291510270.28941@sstabellini-ThinkPad-T480s>
 <a316ed70-da35-8be0-a092-d992e56563d2@xen.org>
 <alpine.DEB.2.21.2004300928240.28941@sstabellini-ThinkPad-T480s>
 <86e8fa89-c6f5-6c9e-4f3e-7f98e8e12c6a@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 andrew.cooper3@citrix.com, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com,
 "Woodhouse, David" <dwmw@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 30 Apr 2020, Julien Grall wrote:
> On 30/04/2020 18:00, Stefano Stabellini wrote:
> > On Thu, 30 Apr 2020, Julien Grall wrote:
> > > > > > +    pg = maddr_to_page(start);
> > > > > > +    node = phys_to_nid(start);
> > > > > > +    zone = page_to_zone(pg);
> > > > > > +    page_list_del(pg, &heap(node, zone, order));
> > > > > > +
> > > > > > +    __alloc_heap_pages(pg, order, memflags, d);
> > > > > 
> > > > > I agree with Julien in not seeing how this can be safe / correct.
> > > > 
> > > > I haven't seen any issues so far in my testing -- I imagine it is
> > > > because there aren't many memory allocations after setup_mm() and before
> > > > create_domUs()  (which on ARM is called just before
> > > > domain_unpause_by_systemcontroller at the end of start_xen.)
> > > 
> > > I am not sure why you exclude setup_mm(). Any memory allocated (boot
> > > allocator, xenheap) can clash with your regions. The main memory
> > > allocations
> > > are for the frametable and dom0. I would say you were lucky not to hit
> > > them.
> > 
> > Maybe it is because Xen typically allocates memory top-down? So if I
> > chose a high range then I would see a failure? But I have been mostly
> > testing with ranges close to the beginning of RAM (as opposed to
> > ranges close to the end of RAM).
> 
> I haven't looked at the details of the implementation, but you can try to
> specify dom0 addresses for your domU. You should see a failure.

I managed to reproduce a failure by choosing the top address range. On
Xilinx ZynqMP the memory is:

  reg = <0x0 0x0 0x0 0x7ff00000 0x8 0x0 0x0 0x80000000>;

And I chose:

  fdt set /chosen/domU0 direct-map <0x0 0x10000000 0x10000000 0x8 0x70000000 0x10000000>

Resulting in:

(XEN) *** LOADING DOMU cpus=1 memory=80000KB ***
(XEN) Loading d1 kernel from boot module @ 0000000007200000
(XEN) Loading ramdisk from boot module @ 0000000008200000
(XEN) direct_map start=0x00000010000000 size=0x00000010000000
(XEN) direct_map start=0x00000870000000 size=0x00000010000000
(XEN) Data Abort Trap. Syndrome=0x5
(XEN) Walking Hypervisor VA 0x2403480018 on CPU0 via TTBR 0x0000000000f05000
(XEN) 0TH[0x0] = 0x0000000000f08f7f
(XEN) 1ST[0x90] = 0x0000000000000000
(XEN) CPU0: Unexpected Trap: Data Abort

[...]

(XEN) Xen call trace:
(XEN)    [<000000000021a65c>] page_alloc.c#alloc_pages_from_buddy+0x15c/0x5d0 (PC)
(XEN)    [<000000000021b43c>] reserve_domheap_pages+0xc4/0x148 (LR)

Anything other than the very top of memory works.
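
The crash boils down to an interval overlap: the requested direct-map range
collides with pages Xen has already handed out near the top of RAM. A minimal
sketch of the half-open overlap check one would want before claiming such a
range (the helper name is made up for illustration, not Xen's API):

```c
#include <stdbool.h>
#include <stdint.h>

/* True if [s1, s1 + l1) and [s2, s2 + l2) share at least one byte. */
static bool ranges_overlap(uint64_t s1, uint64_t l1, uint64_t s2, uint64_t l2)
{
    return s1 < s2 + l2 && s2 < s1 + l1;
}
```

With the ZynqMP layout above, the second direct-map range (start 0x870000000,
size 0x10000000) ends exactly at the top of the second RAM bank (0x880000000),
so any allocation already made at the very top of memory trips this check.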


> > > > - in construct_domU, add the range to xenheap and reserve it with
> > > > reserve_heap_pages
> > > 
> > > I am afraid you can't give the regions to the allocator and then allocate
> > > them. The allocator is free to use any page for its own purpose or exclude
> > > them.
> > > 
> > > AFAICT, the allocator doesn't have a list of pages in use. It only keeps
> > > track of free pages. So we can make the content of struct page_info look
> > > like it was allocated by the allocator.
> > > 
> > > We would need to be careful when giving a page back to the allocator, as
> > > the page would need to be initialized (see [1]). This may not be a
> > > concern for Dom0less, as the domain may never be destroyed, but it
> > > matters from a correctness PoV.
> > > 
> > > For LiveUpdate, the original Xen will carve out space to use by the boot
> > > allocator in the new Xen. But I think this is not necessary in your
> > > context.
> > > 
> > > It should be sufficient to exclude the page from the boot allocators (as
> > > we do
> > > for other modules).
> > > 
> > > One potential issue that can arise is that there is no easy way today to
> > > differentiate between pages allocated and pages not yet initialized. To
> > > make the code robust, we need to prevent a page from being used in two
> > > places. So for LiveUpdate we mark them with a special value, which is
> > > used afterwards to check we are effectively using a reserved page.
> > > 
> > > I hope this helps.
> > 
> > Thanks for writing all of this down but I haven't understood some of it.
> > 
> > For the sake of this discussion let's say that we managed to "reserve"
> > the range early enough like we do for other modules, as you wrote.
> > 
> > At the point where we want to call reserve_heap_pages() we would call
> > init_heap_pages() just before it. We are still relatively early at boot
> > so there aren't any concurrent memory operations. Why doesn't this work?
> 
> Because init_heap_pages() may exclude some pages (for instance MFN 0 is carved
> out) or use pages for its internal structure (see init_node_heap()). So you
> can't expect to be able to allocate the exact same region by
> reserve_heap_pages().

But it can't possibly use any of the pages it is trying to add to the
heap, right?

We have reserved a certain range; we tell init_heap_pages to add the
range to the heap; init_node_heap gets called and ends up calling
xmalloc. There is no way xmalloc can use any memory from that
particular range, because it is not in the heap yet. That should be safe.
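
The ordering argument can be made concrete with a toy free-list model (all
names below are hypothetical, not Xen's API): an allocator can only hand out
pages that have already been added to its free list, so a range that is added
and immediately reserved, with no allocation in between, cannot have been
taken by anyone else.

```c
#include <stdbool.h>

#define NPAGES 16

static bool free_page[NPAGES];   /* toy "heap": true = page is allocatable */

/* Toy stand-in for init_heap_pages(): hand [start, end) to the allocator. */
static void toy_init_heap(int start, int end)
{
    for (int i = start; i < end; i++)
        free_page[i] = true;
}

/* Toy allocator: return the first free page, or -1 if none. */
static int toy_alloc(void)
{
    for (int i = 0; i < NPAGES; i++)
        if (free_page[i]) {
            free_page[i] = false;
            return i;
        }
    return -1;
}

/* Toy stand-in for reserve_heap_pages(): claim exactly [start, end). */
static bool toy_reserve(int start, int end)
{
    for (int i = start; i < end; i++)
        if (!free_page[i])
            return false;        /* some page was already taken */
    for (int i = start; i < end; i++)
        free_page[i] = false;
    return true;
}
```

The caveat this toy deliberately ignores is Julien's point above: a real
init_heap_pages() may itself drop pages (e.g. MFN 0) or consume some for its
own metadata, so the subsequent reserve can fail even with no concurrent
allocations at all.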

The init_node_heap code is a bit hard to follow, but I went through it
and couldn't spot anything that could cause any issues (MFN 0 aside,
which is a bit special). Am I missing something?


From xen-devel-bounces@lists.xenproject.org Tue May 12 03:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 03:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYLm1-0003vf-SB; Tue, 12 May 2020 03:40:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rfNj=62=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYLlz-0003bO-S4
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 03:40:03 +0000
X-Inumbo-ID: 43668558-9402-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43668558-9402-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 03:40:03 +0000 (UTC)
Received: by mail-qt1-x844.google.com with SMTP id l1so6456716qtp.6
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 20:40:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=TplU775Hdr/fV2y9OYZMwvrtT2WBw0Ib2Gh0FHeM5+k=;
 b=gckY5BnnwAUYllb5uQrbl+92MqXXVKscqygtsrdF00NV2+Gtb4cRGmo/Rd6rlaX7YT
 0S7MOTrxfbGh2vqvhtKQbKlQJDXSfx/MAokwmOpFmPao7d8NilYNqDZq2zzPiPGrqN6K
 /ycPwnnndTSCr0i2SsoX1oASl5Wr0daazE2WX+UIFoTM2bqVezG24iJpayYnX7zSxao0
 4ycn0jWatpvbwidl7s/Lv5u3KwPdfCwSKILntaz2mOqHS4p/XsqTewQMsULxwEun0Tw/
 Qyl0XWL/SQ0Em34UYswwoWqT2XKw9lIFZVcJxXYTUgJFU7YFO3v8i90CItNBwANPVyNB
 4w3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=TplU775Hdr/fV2y9OYZMwvrtT2WBw0Ib2Gh0FHeM5+k=;
 b=PUe/Js7Fbq4K0KGF/mF77gId+MGEIA/FkAze4zg4oQBBGrPeAYOQ/I6YdtdIhG2pp5
 XyduBwS3OoroHhsKi6yA2I5Ya59RT/abxnBrcMMdthOAJ7U17n4yRoXCJKRZ7TkxrJuD
 FddgRa2+CjZ+8j1Bk+Y72DudR4Q07yTZ/hrjiYiaIj+cd0bq6C3sqP27vlzRqiXNUPt/
 3swIFT9NNnzrL3lCKYxWytSzB5GDmwEK98U/crdGj2A7K8da7tKNeF7Udh/InEZlQ1yL
 HBlo/BVG2WHOt+wsGJHUceXmPn0pXh+fZjh9nhI9JmL85yCP+ZbbVf9Qq+9i+ZkHR/Wv
 F3VQ==
X-Gm-Message-State: AOAM531y3jOx/quJ5Mt+3vZk/k2QfH+USI92Jz3gVUGPMzf3Feyaf2iS
 pFBmDLTVnxOsMhJ4kWds00S6hjkz
X-Google-Smtp-Source: ABdhPJxksLm506g3irhARDNGuFxiKZEuL2rKK24I53mzUFs+9NDpOEw1QuymX/3Ne+YML3mL95C9FQ==
X-Received: by 2002:ac8:227d:: with SMTP id p58mr574088qtp.180.1589254802410; 
 Mon, 11 May 2020 20:40:02 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:54fc:f6ff:ea10:3d73])
 by smtp.gmail.com with ESMTPSA id w35sm1107092qtk.51.2020.05.11.20.40.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 11 May 2020 20:40:01 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/2] xen/x86: Disable fcf-protection when necessary to build
Date: Mon, 11 May 2020 23:39:47 -0400
Message-Id: <20200512033948.3507-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200512033948.3507-1-jandryuk@gmail.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ubuntu gcc-9 enables -fcf-protection by default, which conflicts with
-mindirect-branch=extern and prevents building the hypervisor with
CONFIG_INDIRECT_THUNK:
xmalloc.h:81:1: error: ‘-mindirect-branch’ and ‘-fcf-protection’ are not
compatible

Detect this incompatible combination, and add -fcf-protection=none to
allow the build to succeed.  To actually generate the error, the
compiled program must include a function.
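
For illustration only (not part of the patch), the makefile probe can be
sketched as a small hypothetical helper; the compiler name "gcc" and the
diagnostic substring are assumptions taken from the error text above:

```python
import os
import shutil
import subprocess

# Rough sketch of the arch.mk probe (hypothetical helper, not the patch
# itself).  Note the test input defines a function: as the commit message
# says, an empty translation unit does not trigger the diagnostic.
def indirect_branch_flags(cc: str = "gcc") -> str:
    base = "-mindirect-branch=thunk-extern"
    if shutil.which(cc) is None:
        return base  # no compiler available to probe; assume the plain flag
    result = subprocess.run(
        [cc, base, "-S", "-o", os.devnull, "-x", "c", "-"],
        input="void foo(void) {}\n",
        capture_output=True,
        text=True,
        env={**os.environ, "LANG": "C"},
    )
    if "not compatible" in result.stderr:
        return base + " -fcf-protection=none"
    return base

print(indirect_branch_flags())
```

On a toolchain that enables -fcf-protection by default the probe appends
-fcf-protection=none; on any other compiler it returns the plain flag.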

CC: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/arch.mk | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
index 2a51553edb..3aa6ce521a 100644
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -63,7 +63,16 @@ CFLAGS += -mno-red-zone -fpic -fno-asynchronous-unwind-tables
 CFLAGS += -mno-sse $(call cc-option,$(CC),-mskip-rax-setup)
 
 # Compile with thunk-extern, indirect-branch-register if avaiable.
-CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch=thunk-extern
+# Some versions of gcc error: "‘-mindirect-branch’ and ‘-fcf-protection’ are
+# not compatible".  For those, we need to disable cf-protection with
+# -fcf-protection=none
+cc-mindirect-branch = $(shell if test -n "`echo 'void foo(void) {};' | \
+      LANG=C $(CC) -mindirect-branch=thunk-extern -S -o /dev/null -x c - 2>&1 | \
+      grep -- '-mindirect-branch.*-fcf-protection.*are not compatible' -`"; \
+    then echo "-mindirect-branch=thunk-extern -fcf-protection=none"; \
+    else echo "-mindirect-branch=thunk-extern"; fi ;)
+
+CFLAGS-$(CONFIG_INDIRECT_THUNK) += $(call cc-mindirect-branch)
 CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch-register
 CFLAGS-$(CONFIG_INDIRECT_THUNK) += -fno-jump-tables
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 12 03:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 03:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYLm6-000417-4D; Tue, 12 May 2020 03:40:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rfNj=62=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYLm4-000410-S8
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 03:40:08 +0000
X-Inumbo-ID: 45cf3aba-9402-11ea-b07b-bc764e2007e4
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45cf3aba-9402-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 03:40:07 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id r3so5788571qvm.1
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 20:40:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=fUSe4sPjrvSj8MkkBiwu4oYkVAy7jThrwleho5JbiVw=;
 b=c1M1X3dPW7QnQPjbFpKT7WhAdgxtg4rq9TM7hm5+PSgae+tVgAk0JBMT0ZwBeSwujK
 uZfh4bYl2V9rpND3rgVchQd6asXnmTwXhhbOxY8XbY93n443iKWwfzsZuihjsdwA21mZ
 s0xTpK8fDNciZg078A4XzJTMyhk+SpbAogyy/sr6+UFhHR5/GAQ6gXDsDDxUC8eorjaq
 QPW5lIF7S6arOwtLmGa0T/l4vQJAhS6SoelNd/BBPhKg2qV4P1rP2gKaUyALEejJp0gZ
 QnInLjfLJjP9Y6LcywP4p/bc9FCcHOxPF4HnQ63li6GccJS1FMtt6k8WDtO8BsEt7wDa
 vtOA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=fUSe4sPjrvSj8MkkBiwu4oYkVAy7jThrwleho5JbiVw=;
 b=Iy6yKbE4zJDwdR9NwlSjTJzWRhzXF+6Nj11GfGsgWEyS2ouzTv/nqWGUP4QkEScg4N
 xTyIsMVwWUwWwEAdsb2wS5NaxmwmAPPmJBKAedQeDNlDWDrxvgYxEYAadfk7jDlg69ws
 d/2Si5u2sKm42g0eG/vciqRiPf4UhXoDfxNpUlenEH7mxqq6fQzqyFu/SEAZvCH4To4J
 jbHIl9BUE3hzV79lW+zvLQldh7anHbJvesRZwyfv/erKg+TGVdmPveNq+wWNR/7T4Rw9
 E1V4Lx8UPNigDcl9B2IcsIUnFdHjfa5RYqPi/rI1QBKA3tNNLmtcDKXcSwu8bIs+YgAi
 kihA==
X-Gm-Message-State: AGi0PubRqJmInhCW1Fpbd3Z6S7WEKKbdUs2Ab6+hrX8GHAToEmMfnmmo
 EDtdStNgAvzzXEmENyPIFyrFE6Np
X-Google-Smtp-Source: APiQypLRzxmHa0L58r2SmRhhbfGpiE9hN1MiCxm5pUopE/A9Pmr4BwSf/4tYae5vzJKjB77AHsPb2w==
X-Received: by 2002:ad4:548b:: with SMTP id q11mr18798391qvy.129.1589254806451; 
 Mon, 11 May 2020 20:40:06 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:54fc:f6ff:ea10:3d73])
 by smtp.gmail.com with ESMTPSA id w35sm1107092qtk.51.2020.05.11.20.40.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 11 May 2020 20:40:05 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 2/2] x86/boot: Drop .note.gnu.property in build32.lds
Date: Mon, 11 May 2020 23:39:48 -0400
Message-Id: <20200512033948.3507-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200512033948.3507-1-jandryuk@gmail.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

reloc.S and cmdline.S are arrays of executable bytes, generated from
compiled object files, for inclusion in head.S.  Object files generated
by an -fcf-protection toolchain include a .note.gnu.property section.  The
way reloc.S and cmdline.S are generated, the bytes of .note.gnu.property
become the start of the .S files.  When head.S calls reloc or
cmdline_parse_early, those note bytes are executed instead of the
intended .text section.  This results in an early crash in reloc.
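
For reference, .note.gnu.property is a standard ELF note; a minimal sketch
of its header layout follows (the type value 5 is NT_GNU_PROPERTY_TYPE_0
from elf.h; the property payload here is a placeholder, not real data):

```python
import struct

# Illustrative sketch of the ELF note emitted by -fcf-protection
# (.note.gnu.property).  These are the kinds of bytes that, placed ahead
# of .text in the flat binary, get executed as if they were instructions.
def build_gnu_property_note(props: bytes) -> bytes:
    name = b"GNU\0"
    # Header is three little-endian 32-bit words: namesz, descsz, type.
    return struct.pack("<III", len(name), len(props), 5) + name + props

def parse_note_header(blob: bytes):
    namesz, descsz, ntype = struct.unpack_from("<III", blob, 0)
    name = blob[12:12 + namesz].rstrip(b"\0")
    return name, ntype, descsz

note = build_gnu_property_note(b"\x00" * 16)
print(parse_note_header(note))  # (b'GNU', 5, 16)
```

The note header bytes are valid data but meaningless as x86 code, which is
why executing them in place of .text crashes early.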

Discard the .note.gnu.property section when linking to avoid the extra
bytes.

Stefan Bader also noticed that build32.mk requires -fcf-protection=none
or else the hypervisor will not boot.
https://bugs.launchpad.net/ubuntu/+source/gcc-9/+bug/1863260

CC: Stefan Bader <stefan.bader@canonical.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/boot/build32.lds | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/x86/boot/build32.lds b/xen/arch/x86/boot/build32.lds
index da35aee910..0f69765579 100644
--- a/xen/arch/x86/boot/build32.lds
+++ b/xen/arch/x86/boot/build32.lds
@@ -48,5 +48,10 @@ SECTIONS
          * Please check build32.mk for more details.
          */
         /* *(.got.plt) */
+        /*
+         * The note will end up before the .text section and be
+         * incorrectly executed as instructions.
+         */
+        *(.note.gnu.property)
   }
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 12 03:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 03:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYLlw-0003NL-Jp; Tue, 12 May 2020 03:40:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rfNj=62=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYLlv-0003NG-0f
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 03:39:59 +0000
X-Inumbo-ID: 40994586-9402-11ea-9887-bc764e2007e4
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40994586-9402-11ea-9887-bc764e2007e4;
 Tue, 12 May 2020 03:39:58 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id f83so12247152qke.13
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 20:39:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=bBz+Wc4i15Xaa7Low+S4PQ1PMoRJwgeHjDfzODciz8o=;
 b=D4r+etRuXVxrryh503ZaEC9beU5bc6ueG+kH0Rf+Sd7ay849aiEOdhJ2qqpvwXHQ2Y
 LWd+O3nkW9A76qsEOMWY/qPI5aSvlDbrzMvCJOmxcixBaEV9BvEoTJSGWiuiR/1m2kCc
 IRmVFThDK94I+JFvn/EIYDXMwy35nhuYn/ijHZSmacbiLUzfExt8rjfkk7Id0aKJUP9N
 fVbA7d6CyB5ayml5pTPDNpBiMeSK28icF5gDAouFkf0EcB5c8m9pDsigkXy34PRTsHqX
 rVzC9dpxmWINGEE1742xwzhee2ai3B1OiS6FE/G1ibJ6YzcUaA9g06joEswbL7EcX2kR
 qTJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=bBz+Wc4i15Xaa7Low+S4PQ1PMoRJwgeHjDfzODciz8o=;
 b=Tfne6WTvI4L69Z64I8KQnifxsuG08wXAL0AbpYr6mMzdBA1DtnMEve70B2qH3Doymm
 W0YAIe0CCiOe5IUd+1QzR1Ezqg7hrCtWoYtmM9R+u0r2RUA9CHpfb0fJ5kqhvcDW0v0z
 pI76bQbHfWyCPjoCsOf9FcXRaTOgoa1AkxyGcmZ2iEMZeTxzU7w/+KApRQtTo4Lv+Fhp
 kpftEk50qbPrQKYTrWe8UZlGgpOEYksBTDMdy8avwx4bxqcW8O/vcx5qrNIkED1hky6o
 RVId2Efm9hN5vhZUuwjJcICW5ppMI1Fk3aHkI/DqF678pth/Rx9oBrEFSz7CzgNa4uCV
 IRdA==
X-Gm-Message-State: AGi0PuYAe+FowLEaqYONVJNyp0OLUZANYnPIcjljicpAEwOSWeIu+gkV
 rN2vS8oxHA5bKpqK42SGOkpPx8TS
X-Google-Smtp-Source: APiQypI9yIaAepvPI+TXIcnJmSWXzL8apyulQs63gvY3BBJ0mLroccr7+04ljGsChcmzx8LLiKJBWA==
X-Received: by 2002:a37:628b:: with SMTP id
 w133mr18860699qkb.185.1589254797642; 
 Mon, 11 May 2020 20:39:57 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:54fc:f6ff:ea10:3d73])
 by smtp.gmail.com with ESMTPSA id w35sm1107092qtk.51.2020.05.11.20.39.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 11 May 2020 20:39:56 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 0/2] Fixups for fcf-protection
Date: Mon, 11 May 2020 23:39:46 -0400
Message-Id: <20200512033948.3507-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Two patches to fix building with a cf-protection toolchain.  The first
handles the case where compilation fails with "error:
‘-mindirect-branch’ and ‘-fcf-protection’ are not compatible".

The second fixes a runtime error that prevented Xen booting in legacy
mode.

I still haven't figured out exactly what is wrong with rombios and/or
ipxe.

Jason Andryuk (2):
  xen/x86: Disable fcf-protection when necessary to build
  x86/boot: Drop .note.gnu.property in build32.lds

 xen/arch/x86/arch.mk          | 11 ++++++++++-
 xen/arch/x86/boot/build32.lds |  5 +++++
 2 files changed, 15 insertions(+), 1 deletion(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 12 03:57:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 03:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYM2e-0005Ao-LQ; Tue, 12 May 2020 03:57:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EIy0=62=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYM2d-0005Ah-Sy
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 03:57:15 +0000
X-Inumbo-ID: a725280e-9404-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a725280e-9404-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 03:57:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=tWXPgMUoiBlNxcofibncyXZMuI+NchnzPfpLvOvO5gI=; b=TzGmG3usK08dt3Y12tU9TFTQp
 MIR09lsUax/xD41FuJQef6t50KqPU+kvzfCnpePCmKgvrD9wTSIqDNRJPPYtaLPDHvVCIjL+KfMma
 QfqS7yBbbcPL0Xxv5jfJkPaZpegiH9r9Lm88C2IpTmiAJX7kjNBm8oV/v2lY7YdzSgKpw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYM2W-0002ct-As; Tue, 12 May 2020 03:57:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYM2V-0006CR-W8; Tue, 12 May 2020 03:57:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYM2V-0007fm-US; Tue, 12 May 2020 03:57:07 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150139-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150139: regressions - FAIL
X-Osstest-Failures: qemu-mainline:build-arm64:xen-build:fail:regression
 qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=de2f658b6bb422ec0e0fa94a49e476018602eeea
X-Osstest-Versions-That: qemuu=c88f1ffc19e38008a1c33ae039482a860aa7418c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 03:57:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150139 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150139/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                   6 xen-build                fail REGR. vs. 150109

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150109
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150109
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150109
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150109
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150109
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150109
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                de2f658b6bb422ec0e0fa94a49e476018602eeea
baseline version:
 qemuu                c88f1ffc19e38008a1c33ae039482a860aa7418c

Last test of basis   150109  2020-05-09 14:41:45 Z    2 days
Testing same since   150139  2020-05-11 16:07:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Cédric Le Goater <clg@kaod.org>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Joel Stanley <joel@jms.id.au>
  Laurent Desnogues <laurent.desnogues@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 606 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 12 04:13:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 04:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYMIB-0006sw-7Z; Tue, 12 May 2020 04:13:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X8tc=6Z=gmail.com=buycomputer40@srs-us1.protection.inumbo.net>)
 id 1jYCSp-0002kc-9a
 for xen-devel@lists.xenproject.org; Mon, 11 May 2020 17:43:39 +0000
X-Inumbo-ID: f14ec2c4-93ae-11ea-ae69-bc764e2007e4
Received: from mail-lf1-x135.google.com (unknown [2a00:1450:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f14ec2c4-93ae-11ea-ae69-bc764e2007e4;
 Mon, 11 May 2020 17:43:37 +0000 (UTC)
Received: by mail-lf1-x135.google.com with SMTP id h26so8248822lfg.6
 for <xen-devel@lists.xenproject.org>; Mon, 11 May 2020 10:43:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:from:date:message-id:subject:to;
 bh=Rz9GSpBeCBVlmgkJ2vbN4985npc/UY216IuzIizL/3s=;
 b=PFyKs65eeDagZLt6Q/slL4WI5gymdG3RucGkbu8yX9RQFjCaaCvr+NpHO1EWs6uRQx
 TSbhfIsDQHauAMCnX1lxH2pqg7aUZ53TPxRuhmRw4UCCMyqyFJ6TVk2+fg32FAumk+mV
 GvbqD0BaPgNCU5v46LQ3hnXu0Kcc6N0zJY/VSgUlARXARYS0hnxFzuoi9i7sfARHPsHE
 IvVhasxdwC+tFX2/UO7S8rgW3nIeBjHJTmSXmmjb/uDzZ7nD1Hv2IoTE7qMRm5iqudpn
 ASzKEAAlvq64eMT4FmDExGnLRDrL+6zRECmUiJYa+MUoh4fc4s588lGEDdiAvO0n+4F5
 ZZCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
 bh=Rz9GSpBeCBVlmgkJ2vbN4985npc/UY216IuzIizL/3s=;
 b=dkXVP2M7Mu1Ah1MNBkzcNSJrgRsjTCDx/hhWDSwUPphjzQR4mIQoj+pxtxZPz+R+QP
 LbLKOAhS398yK8H2SFeVnwYWS4JMckWhUhRsf4i4Hn9oluNb9XfQaJHyvRUtOz7pLaPv
 9gpwUV7zKQo1zLwQZgEBOmdHs+OPyR9jyfs3cBT2Dgjncc3JrVWwKgEM6/R1QbXkvsUu
 ptTWfS96f8mgweSKLDk1761nNYz587ji3j2adXewew8nDKcwr+lZKJzfGkWvlSYHOD6h
 2J4AuPRbmgWgNKX6mj5SdPmvOjK13pWl63XKVDWW0oxQDm/sy8Skex0tIgZp8zTK+2Qm
 OXKw==
X-Gm-Message-State: AOAM530LKTT+pFJQoDMTWJR7rVUe86x0O4SG/ZgnAM6ZIQNKVQRRiWnU
 KP0ONms2S9xFDYyQVIHvWN/mu7+mC2F+BUBUbSPlpw==
X-Google-Smtp-Source: ABdhPJztX1bA2V/jjAQwsIbC7q2G61zPqSsDFWH44To5H7AKuNh9ozhiAeaPn3kEktzO9tfPC05GglhMx4lZU+vM2W4=
X-Received: by 2002:a19:7418:: with SMTP id v24mr11825050lfe.15.1589219016030; 
 Mon, 11 May 2020 10:43:36 -0700 (PDT)
MIME-Version: 1.0
From: buy computer <buycomputer40@gmail.com>
Date: Mon, 11 May 2020 20:43:24 +0300
Message-ID: <CANSXg2FGtiDT05sQUpSAshAsdP4wSjPgQbfw_+aKJuAzSwvJuQ@mail.gmail.com>
Subject: iommu=no-igfx
To: xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary="000000000000ebeab605a562e2b0"
X-Mailman-Approved-At: Tue, 12 May 2020 04:13:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000ebeab605a562e2b0
Content-Type: multipart/alternative; boundary="000000000000ebeab305a562e2ae"

--000000000000ebeab305a562e2ae
Content-Type: text/plain; charset="UTF-8"

Hi!

I've been working on a Windows 10 HVM on a Debian 10 dom0. When I was first
trying to create the VM, I was getting IOMMU errors. I had a hard time
figuring out what to do about this, and finally discovered that adding
iommu=no-igfx to the Xen command line in GRUB stopped the errors.

Unfortunately, without graphics support the VM is understandably slow, and
can crash. I was also only now pointed to the documentation
<https://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#iommu>,
which asks that any errors fixed by using iommu=no-igfx be reported.

I'm attaching a copy of the xl dmesg output from before I added no-igfx;
the IOMMU errors are at the bottom. If there is a better way for me to
share the output, please let me know.
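For reference, a minimal sketch of how the workaround is typically applied on a Debian dom0. The variable name assumes Debian's grub-xen integration; it may differ on other setups, and appending blindly will shadow any options already set there:

```shell
# Hypothetical sketch, assuming Debian's grub-xen integration: the option
# goes on the Xen (hypervisor) command line, not the dom0 kernel line.
# Then regenerate the GRUB configuration.
echo 'GRUB_CMDLINE_XEN_DEFAULT="iommu=no-igfx"' | sudo tee -a /etc/default/grub
sudo update-grub
```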


--000000000000ebeab305a562e2ae--

--000000000000ebeab605a562e2b0
Content-Type: text/x-log; charset="US-ASCII"; name="xendmesg.log"
Content-Disposition: attachment; filename="xendmesg.log"
Content-Transfer-Encoding: base64
Content-ID: <f_ka2ry02p0>
X-Attachment-Id: f_ka2ry02p0

KFhFTikgcGFyYW1ldGVyICJwbGFjZWhvbGRlciIgdW5rbm93biEKKFhFTikgcGFyYW1ldGVyICJu
by1yZWFsLW1vZGUiIHVua25vd24hCihYRU4pIHBhcmFtZXRlciAiZWRkIiB1bmtub3duIQooWEVO
KSBYZW4gdmVyc2lvbiA0LjExLjQtcHJlIChEZWJpYW4gNC4xMS4zKzI0LWcxNGI2MmFiM2U1LTF+
ZGViMTB1MSkgKHBrZy14ZW4tZGV2ZWxAbGlzdHMuYWxpb3RoLmRlYmlhbi5vcmcpIChnY2MgKERl
YmlhbiA4LjMuMC02KSA4LjMuMCkgZGVidWc9biAgV2VkIEphbiAgOCAyMDoxNjo1MSBVVEMgMjAy
MAooWEVOKSBCb290bG9hZGVyOiBHUlVCIDIuMDIrZGZzZzEtMjAKKFhFTikgQ29tbWFuZCBsaW5l
OiBwbGFjZWhvbGRlciBuby1yZWFsLW1vZGUgZWRkPW9mZgooWEVOKSBYZW4gaW1hZ2UgbG9hZCBi
YXNlIGFkZHJlc3M6IDB4N2JlMDAwMDAKKFhFTikgVmlkZW8gaW5mb3JtYXRpb246CihYRU4pICBW
R0EgaXMgZ3JhcGhpY3MgbW9kZSAxOTIweDEwODAsIDMyIGJwcAooWEVOKSBEaXNjIGluZm9ybWF0
aW9uOgooWEVOKSAgRm91bmQgMCBNQlIgc2lnbmF0dXJlcwooWEVOKSAgRm91bmQgMSBFREQgaW5m
b3JtYXRpb24gc3RydWN0dXJlcwooWEVOKSBFRkkgUkFNIG1hcDoKKFhFTikgIDAwMDAwMDAwMDAw
MDAwMDAgLSAwMDAwMDAwMDAwMDlmMDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMDAwMDlmMDAw
IC0gMDAwMDAwMDAwMDEwMDAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDAwMTAwMDAwIC0g
MDAwMDAwMDA3Y2NiYTAwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDA3Y2NiYTAwMCAtIDAwMDAw
MDAwN2ZiZTcwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDA3ZmJlNzAwMCAtIDAwMDAwMDAw
N2ZjYWEwMDAgKEFDUEkgTlZTKQooWEVOKSAgMDAwMDAwMDA3ZmNhYTAwMCAtIDAwMDAwMDAwN2Zk
MGYwMDAgKEFDUEkgZGF0YSkKKFhFTikgIDAwMDAwMDAwN2ZkMGYwMDAgLSAwMDAwMDAwMDdmZDEw
MDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMDdmZDEwMDAwIC0gMDAwMDAwMDA4ODAwMDAwMCAo
cmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDg4MDAwMDAwIC0gMDAwMDAwMDA4ODIwMDAwMCAodXNh
YmxlKQooWEVOKSAgMDAwMDAwMDA4ODIwMDAwMCAtIDAwMDAwMDAwOGM4MDAwMDAgKHJlc2VydmVk
KQooWEVOKSAgMDAwMDAwMDBmZTAxMDAwMCAtIDAwMDAwMDAwZmUwMTEwMDAgKHJlc2VydmVkKQoo
WEVOKSAgMDAwMDAwMDEwMDAwMDAwMCAtIDAwMDAwMDA4NzE4MDAwMDAgKHVzYWJsZSkKKFhFTikg
QUNQSTogUlNEUCA3RkQwRTAxNCwgMDAyNCAocjIgTEVOT1ZPKQooWEVOKSBBQ1BJOiBYU0RUIDdG
RDBDMTg4LCAwMTBDIChyMSBMRU5PVk8gVFAtUjBZICAgICAgICAxQjAgUFRFQyAgICAgICAgMikK
KFhFTikgQUNQSTogRkFDUCA3RTlFNjAwMCwgMDExNCAocjYgTEVOT1ZPIFRQLVIwWSAgICAgICAg
MUIwIFBURUMgICAgICAgIDIpCihYRU4pIEFDUEk6IERTRFQgN0U5QkUwMDAsIDIzMTFCIChyMiBM
RU5PVk8gQ0ZMICAgICAgMjAxNzAwMDEgSU5UTCAyMDE2MDQyMikKKFhFTikgQUNQSTogRkFDUyA3
RkMwNTAwMCwgMDA0MAooWEVOKSBBQ1BJOiBTU0RUIDdFQTkzMDAwLCAxQjFDIChyMiBMRU5PVk8g
IENwdVNzZHQgICAgIDMwMDAgSU5UTCAyMDE2MDUyNykKKFhFTikgQUNQSTogU1NEVCA3RUE5MjAw
MCwgMDU2RCAocjIgTEVOT1ZPICAgIEN0ZHBCICAgICAxMDAwIElOVEwgMjAxNjA1MjcpCihYRU4p
IEFDUEk6IFNTRFQgN0VBNTkwMDAsIDM5OTYgKHIyIExFTk9WTyBEcHRmVGFibCAgICAgMTAwMCBJ
TlRMIDIwMTYwNTI3KQooWEVOKSBBQ1BJOiBTU0RUIDdFOUVCMDAwLCAzMTNEIChyMiBMRU5PVk8g
IFNhU3NkdCAgICAgIDMwMDAgSU5UTCAyMDE2MDUyNykKKFhFTikgQUNQSTogU1NEVCA3RTlFQTAw
MCwgMDYxMiAocjIgTEVOT1ZPIFRwbTJUYWJsICAgICAxMDAwIElOVEwgMjAxNjA1MjcpCihYRU4p
IEFDUEk6IFRQTTIgN0U5RTkwMDAsIDAwMzQgKHI0IExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBVRUZJIDdGQzFBMDAwLCAwMDQyIChyMSBMRU5PVk8g
VFAtUjBZICAgICAgICAxQjAgUFRFQyAgICAgICAgMikKKFhFTikgQUNQSTogU1NEVCA3RTlFNzAw
MCwgMDUzOCAocjIgTEVOT1ZPIFBlcmZUdW5lICAgICAxMDAwIElOVEwgMjAxNjA1MjcpCihYRU4p
IEFDUEk6IEhQRVQgN0U5RTUwMDAsIDAwMzggKHIxIExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBBUElDIDdFOUU0MDAwLCAwMTJDIChyMyBMRU5PVk8g
VFAtUjBZICAgICAgICAxQjAgUFRFQyAgICAgICAgMikKKFhFTikgQUNQSTogTUNGRyA3RTlFMzAw
MCwgMDAzQyAocjEgTEVOT1ZPIFRQLVIwWSAgICAgICAgMUIwIFBURUMgICAgICAgIDIpCihYRU4p
IEFDUEk6IEVDRFQgN0U5RTIwMDAsIDAwNTMgKHIxIExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBTU0RUIDdFOUJCMDAwLCAyMzA0IChyMiBMRU5PVk8g
UHJvalNzZHQgICAgICAgMTAgSU5UTCAyMDE2MDUyNykKKFhFTikgQUNQSTogQk9PVCA3RTlCQTAw
MCwgMDAyOCAocjEgTEVOT1ZPIFRQLVIwWSAgICAgICAgMUIwIFBURUMgICAgICAgIDIpCihYRU4p
IEFDUEk6IFNMSUMgN0U5QjkwMDAsIDAxNzYgKHIxIExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBTU0RUIDdFOUI4MDAwLCAwQ0UzIChyMiBMRU5PVk8g
VXNiQ1RhYmwgICAgIDEwMDAgSU5UTCAyMDE2MDUyNykKKFhFTikgQUNQSTogTFBJVCA3RTlCNzAw
MCwgMDA1QyAocjEgTEVOT1ZPIFRQLVIwWSAgICAgICAgMUIwIFBURUMgICAgICAgIDIpCihYRU4p
IEFDUEk6IFdTTVQgN0U5QjYwMDAsIDAwMjggKHIxIExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBTU0RUIDdFOUI0MDAwLCAxNDlGIChyMiBMRU5PVk8g
VGJ0VHlwZUMgICAgICAgIDAgSU5UTCAyMDE2MDUyNykKKFhFTikgQUNQSTogREJHUCA3RTlCMzAw
MCwgMDAzNCAocjEgTEVOT1ZPIFRQLVIwWSAgICAgICAgMUIwIFBURUMgICAgICAgIDIpCihYRU4p
IEFDUEk6IERCRzIgN0U5QjIwMDAsIDAwNTQgKHIwIExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBNU0RNIDdFOUIxMDAwLCAwMDU1IChyMyBMRU5PVk8g
VFAtUjBZICAgICAgICAxQjAgUFRFQyAgICAgICAgMikKKFhFTikgQUNQSTogQkFUQiA3RTk5QzAw
MCwgMDA0QSAocjIgTEVOT1ZPIFRQLVIwWSAgICAgICAgMUIwIFBURUMgICAgICAgIDIpCihYRU4p
IEFDUEk6IE5ITFQgN0RCOTkwMDAsIDAwMkQgKHIwIExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBETUFSIDdEQjk4MDAwLCAwMEE4IChyMSBMRU5PVk8g
VFAtUjBZICAgICAgICAxQjAgUFRFQyAgICAgICAgMikKKFhFTikgQUNQSTogRlBEVCA3REI5NjAw
MCwgMDA0NCAocjEgTEVOT1ZPIFRQLVIwWSAgICAgICAgMUIwIFBURUMgICAgICAgIDIpCihYRU4p
IEFDUEk6IEJHUlQgN0RCOTUwMDAsIDAwMzggKHIxIExFTk9WTyBUUC1SMFkgICAgICAgIDFCMCBQ
VEVDICAgICAgICAyKQooWEVOKSBBQ1BJOiBVRUZJIDdGQzAwMDAwLCAwMTE2IChyMSBMRU5PVk8g
VFAtUjBZICAgICAgICAxQjAgUFRFQyAgICAgICAgMikKKFhFTikgU3lzdGVtIFJBTTogMzI0ODZN
QiAoMzMyNjYwMjRrQikKKFhFTikgRG9tYWluIGhlYXAgaW5pdGlhbGlzZWQKKFhFTikgQUNQSTog
SW52YWxpZCBzbGVlcCBjb250cm9sL3N0YXR1cyByZWdpc3RlciBkYXRhOiAwOjB4ODoweDMgMDow
eDg6MHgzCihYRU4pIEFDUEk6IDMyLzY0WCBGQUNTIGFkZHJlc3MgbWlzbWF0Y2ggaW4gRkFEVCAt
IDdmYzA1MDAwLzAwMDAwMDAwMDAwMDAwMDAsIHVzaW5nIDMyCihYRU4pIElPQVBJQ1swXTogYXBp
Y19pZCAyLCB2ZXJzaW9uIDMyLCBhZGRyZXNzIDB4ZmVjMDAwMDAsIEdTSSAwLTExOQooWEVOKSBF
bmFibGluZyBBUElDIG1vZGU6ICBGbGF0LiAgVXNpbmcgMSBJL08gQVBJQ3MKKFhFTikgU3dpdGNo
ZWQgdG8gQVBJQyBkcml2ZXIgeDJhcGljX2NsdXN0ZXIKKFhFTikgeHN0YXRlOiBzaXplOiAweDQ0
MCBhbmQgc3RhdGVzOiAweDFmCihYRU4pIFNwZWN1bGF0aXZlIG1pdGlnYXRpb24gZmFjaWxpdGll
czoKKFhFTikgICBIYXJkd2FyZSBmZWF0dXJlczogSUJSUy9JQlBCIFNUSUJQIEwxRF9GTFVTSCBT
U0JEIFJEQ0xfTk8gU0tJUF9MMURGTAooWEVOKSAgIENvbXBpbGVkLWluIHN1cHBvcnQ6IElORElS
RUNUX1RIVU5LIFNIQURPV19QQUdJTkcKKFhFTikgICBYZW4gc2V0dGluZ3M6IEJUSS1UaHVuayBK
TVAsIFNQRUNfQ1RSTDogSUJSUysgU1NCRC0sIE90aGVyOiBJQlBCCihYRU4pICAgU3VwcG9ydCBm
b3IgVk1zOiBQVjogTVNSX1NQRUNfQ1RSTCBSU0IgRUFHRVJfRlBVLCBIVk06IE1TUl9TUEVDX0NU
UkwgUlNCIEVBR0VSX0ZQVQooWEVOKSAgIFhQVEkgKDY0LWJpdCBQViBvbmx5KTogRG9tMCBkaXNh
YmxlZCwgRG9tVSBkaXNhYmxlZAooWEVOKSAgIFBWIEwxVEYgc2hhZG93aW5nOiBEb20wIGRpc2Fi
bGVkLCBEb21VIGRpc2FibGVkCihYRU4pIFVzaW5nIHNjaGVkdWxlcjogU01QIENyZWRpdCBTY2hl
ZHVsZXIgKGNyZWRpdCkKKFhFTikgUGxhdGZvcm0gdGltZXIgaXMgMjMuOTk5TUh6IEhQRVQKKFhF
TikgRGV0ZWN0ZWQgMTk5Mi4xMDMgTUh6IHByb2Nlc3Nvci4KKFhFTikgSW5pdGluZyBtZW1vcnkg
c2hhcmluZy4KKFhFTikgUENJOiBOb3QgdXNpbmcgTUNGRyBmb3Igc2VnbWVudCAwMDAwIGJ1cyAw
MC1mZgooWEVOKSBJbnRlbCBWVC1kIGlvbW11IDAgc3VwcG9ydGVkIHBhZ2Ugc2l6ZXM6IDRrQiwg
Mk1CLCAxR0IuCihYRU4pIEludGVsIFZULWQgaW9tbXUgMSBzdXBwb3J0ZWQgcGFnZSBzaXplczog
NGtCLCAyTUIsIDFHQi4KKFhFTikgSW50ZWwgVlQtZCBTbm9vcCBDb250cm9sIG5vdCBlbmFibGVk
LgooWEVOKSBJbnRlbCBWVC1kIERvbTAgRE1BIFBhc3N0aHJvdWdoIG5vdCBlbmFibGVkLgooWEVO
KSBJbnRlbCBWVC1kIFF1ZXVlZCBJbnZhbGlkYXRpb24gZW5hYmxlZC4KKFhFTikgSW50ZWwgVlQt
ZCBJbnRlcnJ1cHQgUmVtYXBwaW5nIGVuYWJsZWQuCihYRU4pIEludGVsIFZULWQgUG9zdGVkIElu
dGVycnVwdCBub3QgZW5hYmxlZC4KKFhFTikgSW50ZWwgVlQtZCBTaGFyZWQgRVBUIHRhYmxlcyBl
bmFibGVkLgooWEVOKSBJL08gdmlydHVhbGlzYXRpb24gZW5hYmxlZAooWEVOKSAgLSBEb20wIG1v
ZGU6IFJlbGF4ZWQKKFhFTikgSW50ZXJydXB0IHJlbWFwcGluZyBlbmFibGVkCihYRU4pIEVuYWJs
ZWQgZGlyZWN0ZWQgRU9JIHdpdGggaW9hcGljX2Fja19vbGQgb24hCihYRU4pIEVOQUJMSU5HIElP
LUFQSUMgSVJRcwooWEVOKSAgLT4gVXNpbmcgb2xkIEFDSyBtZXRob2QKKFhFTikgQWxsb2NhdGVk
IGNvbnNvbGUgcmluZyBvZiAxNiBLaUIuCihYRU4pIFZNWDogU3VwcG9ydGVkIGFkdmFuY2VkIGZl
YXR1cmVzOgooWEVOKSAgLSBBUElDIE1NSU8gYWNjZXNzIHZpcnR1YWxpc2F0aW9uCihYRU4pICAt
IEFQSUMgVFBSIHNoYWRvdwooWEVOKSAgLSBFeHRlbmRlZCBQYWdlIFRhYmxlcyAoRVBUKQooWEVO
KSAgLSBWaXJ0dWFsLVByb2Nlc3NvciBJZGVudGlmaWVycyAoVlBJRCkKKFhFTikgIC0gVmlydHVh
bCBOTUkKKFhFTikgIC0gTVNSIGRpcmVjdC1hY2Nlc3MgYml0bWFwCihYRU4pICAtIFVucmVzdHJp
Y3RlZCBHdWVzdAooWEVOKSAgLSBWTSBGdW5jdGlvbnMKKFhFTikgIC0gVmlydHVhbGlzYXRpb24g
RXhjZXB0aW9ucwooWEVOKSAgLSBQYWdlIE1vZGlmaWNhdGlvbiBMb2dnaW5nCihYRU4pIEhWTTog
QVNJRHMgZW5hYmxlZC4KKFhFTikgVk1YOiBEaXNhYmxpbmcgZXhlY3V0YWJsZSBFUFQgc3VwZXJw
YWdlcyBkdWUgdG8gQ1ZFLTIwMTgtMTIyMDcKKFhFTikgSFZNOiBWTVggZW5hYmxlZAooWEVOKSBI
Vk06IEhhcmR3YXJlIEFzc2lzdGVkIFBhZ2luZyAoSEFQKSBkZXRlY3RlZAooWEVOKSBIVk06IEhB
UCBwYWdlIHNpemVzOiA0a0IsIDJNQiwgMUdCCihYRU4pIEJyb3VnaHQgdXAgOCBDUFVzCihYRU4p
IERvbTAgaGFzIG1heGltdW0gODg4IFBJUlFzCihYRU4pICBYZW4gIGtlcm5lbDogNjQtYml0LCBs
c2IsIGNvbXBhdDMyCihYRU4pICBEb20wIGtlcm5lbDogNjQtYml0LCBQQUUsIGxzYiwgcGFkZHIg
MHgxMDAwMDAwIC0+IDB4MjgyYzAwMAooWEVOKSBQSFlTSUNBTCBNRU1PUlkgQVJSQU5HRU1FTlQ6
CihYRU4pICBEb20wIGFsbG9jLjogICAwMDAwMDAwODUwMDAwMDAwLT4wMDAwMDAwODU0MDAwMDAw
ICg4MTQ3MjY3IHBhZ2VzIHRvIGJlIGFsbG9jYXRlZCkKKFhFTikgIEluaXQuIHJhbWRpc2s6IDAw
MDAwMDA4NmYyNTQwMDAtPjAwMDAwMDA4NzE3ZmZkNmMKKFhFTikgVklSVFVBTCBNRU1PUlkgQVJS
QU5HRU1FTlQ6CihYRU4pICBMb2FkZWQga2VybmVsOiBmZmZmZmZmZjgxMDAwMDAwLT5mZmZmZmZm
ZjgyODJjMDAwCihYRU4pICBJbml0LiByYW1kaXNrOiAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAw
MDAwMDAwMDAwCihYRU4pICBQaHlzLU1hY2ggbWFwOiAwMDAwMDA4MDAwMDAwMDAwLT4wMDAwMDA4
MDAzZTViNzc4CihYRU4pICBTdGFydCBpbmZvOiAgICBmZmZmZmZmZjgyODJjMDAwLT5mZmZmZmZm
ZjgyODJjNGI4CihYRU4pICBYZW5zdG9yZSByaW5nOiAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAw
MDAwMDAwMDAwCihYRU4pICBDb25zb2xlIHJpbmc6ICAwMDAwMDAwMDAwMDAwMDAwLT4wMDAwMDAw
MDAwMDAwMDAwCihYRU4pICBQYWdlIHRhYmxlczogICBmZmZmZmZmZjgyODJkMDAwLT5mZmZmZmZm
ZjgyODQ2MDAwCihYRU4pICBCb290IHN0YWNrOiAgICBmZmZmZmZmZjgyODQ2MDAwLT5mZmZmZmZm
ZjgyODQ3MDAwCihYRU4pICBUT1RBTDogICAgICAgICBmZmZmZmZmZjgwMDAwMDAwLT5mZmZmZmZm
ZjgyYzAwMDAwCihYRU4pICBFTlRSWSBBRERSRVNTOiBmZmZmZmZmZjgyNDU2MTgwCihYRU4pIERv
bTAgaGFzIG1heGltdW0gOCBWQ1BVcwooWEVOKSBJbml0aWFsIGxvdyBtZW1vcnkgdmlycSB0aHJl
c2hvbGQgc2V0IGF0IDB4NDAwMCBwYWdlcy4KKFhFTikgU2NydWJiaW5nIEZyZWUgUkFNIG9uIDEg
bm9kZXMgdXNpbmcgNCBDUFVzCihYRU4pIC4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4u
Li4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uLi4uZG9uZS4KKFhFTikgU3RkLiBMb2ds
ZXZlbDogRXJyb3JzIGFuZCB3YXJuaW5ncwooWEVOKSBHdWVzdCBMb2dsZXZlbDogTm90aGluZyAo
UmF0ZS1saW1pdGVkOiBFcnJvcnMgYW5kIHdhcm5pbmdzKQooWEVOKSAqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioKKFhFTikgQm9vdGVkIG9uIE1MUERT
L01GQkRTLXZ1bG5lcmFibGUgaGFyZHdhcmUgd2l0aCBTTVQvSHlwZXJ0aHJlYWRpbmcKKFhFTikg
ZW5hYmxlZC4gIE1pdGlnYXRpb25zIHdpbGwgbm90IGJlIGZ1bGx5IGVmZmVjdGl2ZS4gIFBsZWFz
ZQooWEVOKSBjaG9vc2UgYW4gZXhwbGljaXQgc210PTxib29sPiBzZXR0aW5nLiAgU2VlIFhTQS0y
OTcuCihYRU4pICoqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKgooWEVOKSAzLi4uIDIuLi4gMS4uLiAKKFhFTikgWGVuIGlzIHJlbGlucXVpc2hpbmcgVkdB
IGNvbnNvbGUuCihYRU4pICoqKiBTZXJpYWwgaW5wdXQgLT4gRE9NMCAodHlwZSAnQ1RSTC1hJyB0
aHJlZSB0aW1lcyB0byBzd2l0Y2ggaW5wdXQgdG8gWGVuKQooWEVOKSBGcmVlZCA0NzZrQiBpbml0
IG1lbW9yeQooWEVOKSBbVlQtRF1ETUFSOltETUEgUmVhZF0gUmVxdWVzdCBkZXZpY2UgWzAwMDA6
MDA6MDIuMF0gZmF1bHQgYWRkciA3NmM2MTVkMDAwLCBpb21tdSByZWcgPSBmZmZmODJjMDAwYTBj
MDAwCihYRU4pIFtWVC1EXURNQVI6IHJlYXNvbiAwNiAtIFBURSBSZWFkIGFjY2VzcyBpcyBub3Qg
c2V0CihYRU4pIFtWVC1EXURNQVI6W0RNQSBSZWFkXSBSZXF1ZXN0IGRldmljZSBbMDAwMDowMDow
Mi4wXSBmYXVsdCBhZGRyIDc2YzYxNWQwMDAsIGlvbW11IHJlZyA9IGZmZmY4MmMwMDBhMGMwMDAK
KFhFTikgW1ZULURdRE1BUjogcmVhc29uIDA2IC0gUFRFIFJlYWQgYWNjZXNzIGlzIG5vdCBzZXQK
KFhFTikgW1ZULURdRE1BUjpbRE1BIFJlYWRdIFJlcXVlc3QgZGV2aWNlIFswMDAwOjAwOjAyLjBd
IGZhdWx0IGFkZHIgNzZjNjE1ZDAwMCwgaW9tbXUgcmVnID0gZmZmZjgyYzAwMGEwYzAwMAooWEVO
KSBbVlQtRF1ETUFSOiByZWFzb24gMDYgLSBQVEUgUmVhZCBhY2Nlc3MgaXMgbm90IHNldAooWEVO
KSBbVlQtRF1ETUFSOltETUEgUmVhZF0gUmVxdWVzdCBkZXZpY2UgWzAwMDA6MDA6MDIuMF0gZmF1
bHQgYWRkciA3NmM2MTVkMDAwLCBpb21tdSByZWcgPSBmZmZmODJjMDAwYTBjMDAwCihYRU4pIFtW
VC1EXURNQVI6IHJlYXNvbiAwNiAtIFBURSBSZWFkIGFjY2VzcyBpcyBub3Qgc2V0CihYRU4pIFtW
VC1EXURNQVI6W0RNQSBSZWFkXSBSZXF1ZXN0IGRldmljZSBbMDAwMDowMDowMi4wXSBmYXVsdCBh
ZGRyIDc2YzYxNWQwMDAsIGlvbW11IHJlZyA9IGZmZmY4MmMwMDBhMGMwMDAKKFhFTikgW1ZULURd
RE1BUjogcmVhc29uIDA2IC0gUFRFIFJlYWQgYWNjZXNzIGlzIG5vdCBzZXQKKFhFTikgcHJpbnRr
OiAyMDAgbWVzc2FnZXMgc3VwcHJlc3NlZC4KKFhFTikgW1ZULURdRE1BUjpbRE1BIFJlYWRdIFJl
cXVlc3QgZGV2aWNlIFswMDAwOjAwOjAyLjBdIGZhdWx0IGFkZHIgNzZjNjE1ZDAwMCwgaW9tbXUg
cmVnID0gZmZmZjgyYzAwMGEwYzAwMAooWEVOKSBwcmludGs6IDE0NSBtZXNzYWdlcyBzdXBwcmVz
c2VkLgooWEVOKSBbVlQtRF1ETUFSOltETUEgUmVhZF0gUmVxdWVzdCBkZXZpY2UgWzAwMDA6MDA6
MDIuMF0gZmF1bHQgYWRkciAyMTJlZDY5MDAwLCBpb21tdSByZWcgPSBmZmZmODJjMDAwYTBjMDAw
CihYRU4pIHByaW50azogNTAxIG1lc3NhZ2VzIHN1cHByZXNzZWQuCihYRU4pIFtWVC1EXURNQVI6
W0RNQSBSZWFkXSBSZXF1ZXN0IGRldmljZSBbMDAwMDowMDowMi4wXSBmYXVsdCBhZGRyIDIxMmVk
NjkwMDAsIGlvbW11IHJlZyA9IGZmZmY4MmMwMDBhMGMwMDAKKFhFTikgcHJpbnRrOiA1OTkgbWVz
c2FnZXMgc3VwcHJlc3NlZC4KKFhFTikgW1ZULURdRE1BUjpbRE1BIFJlYWRdIFJlcXVlc3QgZGV2
aWNlIFswMDAwOjAwOjAyLjBdIGZhdWx0IGFkZHIgMjEyZWQ2OTAwMCwgaW9tbXUgcmVnID0gZmZm
ZjgyYzAwMGEwYzAwMAooWEVOKSBwcmludGs6IDU5OSBtZXNzYWdlcyBzdXBwcmVzc2VkLgooWEVO
KSBbVlQtRF1ETUFSOltETUEgUmVhZF0gUmVxdWVzdCBkZXZpY2UgWzAwMDA6MDA6MDIuMF0gZmF1
bHQgYWRkciAxZjI3ZDkxMDAwLCBpb21tdSByZWcgPSBmZmZmODJjMDAwYTBjMDAwCihYRU4pIHBy
aW50azogNTk5IG1lc3NhZ2VzIHN1cHByZXNzZWQuCihYRU4pIFtWVC1EXURNQVI6W0RNQSBSZWFk
XSBSZXF1ZXN0IGRldmljZSBbMDAwMDowMDowMi4wXSBmYXVsdCBhZGRyIDRmNmVlYTQwMDAsIGlv
bW11IHJlZyA9IGZmZmY4MmMwMDBhMGMwMDAKKFhFTikgcHJpbnRrOiA1OTkgbWVzc2FnZXMgc3Vw
cHJlc3NlZC4KKFhFTikgW1ZULURdRE1BUjpbRE1BIFJlYWRdIFJlcXVlc3QgZGV2aWNlIFswMDAw
OjAwOjAyLjBdIGZhdWx0IGFkZHIgNzQ3MGVjYjAwMCwgaW9tbXUgcmVnID0gZmZmZjgyYzAwMGEw
YzAwMAooWEVOKSBwcmludGs6IDU5OSBtZXNzYWdlcyBzdXBwcmVzc2VkLgooWEVOKSBbVlQtRF1E
TUFSOltETUEgUmVhZF0gUmVxdWVzdCBkZXZpY2UgWzAwMDA6MDA6MDIuMF0gZmF1bHQgYWRkciA0
ZjZlZWE0MDAwLCBpb21tdSByZWcgPSBmZmZmODJjMDAwYTBjMDAwCihYRU4pIHByaW50azogNTk5
IG1lc3NhZ2VzIHN1cHByZXNzZWQuCihYRU4pIFtWVC1EXURNQVI6W0RNQSBSZWFkXSBSZXF1ZXN0
IGRldmljZSBbMDAwMDowMDowMi4wXSBmYXVsdCBhZGRyIDRmNmVlYTQwMDAsIGlvbW11IHJlZyA9
IGZmZmY4MmMwMDBhMGMwMDAKKFhFTikgcHJpbnRrOiA1OTkgbWVzc2FnZXMgc3VwcHJlc3NlZC4K
KFhFTikgW1ZULURdRE1BUjpbRE1BIFJlYWRdIFJlcXVlc3QgZGV2aWNlIFswMDAwOjAwOjAyLjBd
IGZhdWx0IGFkZHIgNGY2ZWVhNDAwMCwgaW9tbXUgcmVnID0gZmZmZjgyYzAwMGEwYzAwMAooWEVO
KSBwcmludGs6IDU5OSBtZXNzYWdlcyBzdXBwcmVzc2VkLgooWEVOKSBbVlQtRF1ETUFSOltETUEg
UmVhZF0gUmVxdWVzdCBkZXZpY2UgWzAwMDA6MDA6MDIuMF0gZmF1bHQgYWRkciA3NDcwZWNiMDAw
LCBpb21tdSByZWcgPSBmZmZmODJjMDAwYTBjMDAwCihYRU4pIHByaW50azogNTk5IG1lc3NhZ2Vz
IHN1cHByZXNzZWQuCihYRU4pIFtWVC1EXURNQVI6W0RNQSBSZWFkXSBSZXF1ZXN0IGRldmljZSBb
MDAwMDowMDowMi4wXSBmYXVsdCBhZGRyIDc0NzBlY2IwMDAsIGlvbW11IHJlZyA9IGZmZmY4MmMw
MDBhMGMwMDAKKFhFTikgcHJpbnRrOiA1OTkgbWVzc2FnZXMgc3VwcHJlc3NlZC4KKFhFTikgW1ZU
LURdRE1BUjpbRE1BIFJlYWRdIFJlcXVlc3QgZGV2aWNlIFswMDAwOjAwOjAyLjBdIGZhdWx0IGFk
ZHIgNzQ3MGVjYjAwMCwgaW9tbXUgcmVnID0gZmZmZjgyYzAwMGEwYzAwMAooWEVOKSBwcmludGs6
IDU5OSBtZXNzYWdlcyBzdXBwcmVzc2VkLgooWEVOKSBbVlQtRF1ETUFSOltETUEgUmVhZF0gUmVx
dWVzdCBkZXZpY2UgWzAwMDA6MDA6MDIuMF0gZmF1bHQgYWRkciA3NDcwZWNiMDAwLCBpb21tdSBy
ZWcgPSBmZmZmODJjMDAwYTBjMDAwCihYRU4pIHByaW50azogNTk5IG1lc3NhZ2VzIHN1cHByZXNz
ZWQuCihYRU4pIFtWVC1EXURNQVI6W0RNQSBSZWFkXSBSZXF1ZXN0IGRldmljZSBbMDAwMDowMDow
Mi4wXSBmYXVsdCBhZGRyIDFmMjdkOTEwMDAsIGlvbW11IHJlZyA9IGZmZmY4MmMwMDBhMGMwMDAK
KFhFTikgcHJpbnRrOiA1OTkgbWVzc2FnZXMgc3VwcHJlc3NlZC4KKFhFTikgW1ZULURdRE1BUjpb
RE1BIFJlYWRdIFJlcXVlc3QgZGV2aWNlIFswMDAwOjAwOjAyLjBdIGZhdWx0IGFkZHIgNGY2ZWVh
NDAwMCwgaW9tbXUgcmVnID0gZmZmZjgyYzAwMGEwYzAwMAooWEVOKSBwcmludGs6IDYwMSBtZXNz
YWdlcyBzdXBwcmVzc2VkLgooWEVOKSBbVlQtRF1ETUFSOltETUEgUmVhZF0gUmVxdWVzdCBkZXZp
Y2UgWzAwMDA6MDA6MDIuMF0gZmF1bHQgYWRkciAxZjI3ZDkxMDAwLCBpb21tdSByZWcgPSBmZmZm
ODJjMDAwYTBjMDAwCihYRU4pIHByaW50azogNTk5IG1lc3NhZ2VzIHN1cHByZXNzZWQuCihYRU4p
IFtWVC1EXURNQVI6W0RNQSBSZWFkXSBSZXF1ZXN0IGRldmljZSBbMDAwMDowMDowMi4wXSBmYXVs
dCBhZGRyIDc0NzBlY2IwMDAsIGlvbW11IHJlZyA9IGZmZmY4MmMwMDBhMGMwMDAKKFhFTikgcHJp
bnRrOiA1OTkgbWVzc2FnZXMgc3VwcHJlc3NlZC4KKFhFTikgW1ZULURdRE1BUjpbRE1BIFJlYWRd
IFJlcXVlc3QgZGV2aWNlIFswMDAwOjAwOjAyLjBdIGZhdWx0IGFkZHIgNzQ3MGVjYjAwMCwgaW9t
bXUgcmVnID0gZmZmZjgyYzAwMGEwYzAwMAooWEVOKSBwcmludGs6IDU5OSBtZXNzYWdlcyBzdXBw
cmVzc2VkLgooWEVOKSBbVlQtRF1ETUFSOltETUEgUmVhZF0gUmVxdWVzdCBkZXZpY2UgWzAwMDA6
MDA6MDIuMF0gZmF1bHQgYWRkciAxZjI3ZDkxMDAwLCBpb21tdSByZWcgPSBmZmZmODJjMDAwYTBj
MDAwCg==
--000000000000ebeab605a562e2b0--


From xen-devel-bounces@lists.xenproject.org Tue May 12 04:24:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 04:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYMSu-0007o9-9P; Tue, 12 May 2020 04:24:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pTU5=62=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jYMSt-0007o4-85
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 04:24:23 +0000
X-Inumbo-ID: 7400ccfe-9408-11ea-a275-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7400ccfe-9408-11ea-a275-12813bfff9fa;
 Tue, 12 May 2020 04:24:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A3E99AEDF;
 Tue, 12 May 2020 04:24:23 +0000 (UTC)
Subject: Re: [PATCH] xen/pvcalls-back: test for errors when calling
 backend_connect()
To: Stefano Stabellini <sstabellini@kernel.org>
References: <20200511074231.19794-1-jgross@suse.com>
 <alpine.DEB.2.21.2005111440210.26167@sstabellini-ThinkPad-T480s>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <17512c98-b309-7b83-8c9c-cc8d43a495a2@suse.com>
Date: Tue, 12 May 2020 06:24:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005111440210.26167@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.20 23:41, Stefano Stabellini wrote:
> On Mon, 11 May 2020, Juergen Gross wrote:
>> backend_connect() can fail, so switch the device to connected only if
>> no error occurred.
>>
>> Fixes: 0a9c75c2c7258f2 ("xen/pvcalls: xenbus state handling")
>> Cc: stable@vger.kernel.org
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
> 
> 
>> ---
>>   drivers/xen/pvcalls-back.c | 3 ++-
>>   1 file changed, 2 insertions(+), 1 deletion(-)
>>
>> diff --git a/drivers/xen/pvcalls-back.c b/drivers/xen/pvcalls-back.c
>> index cf4ce3e9358d..41a18ece029a 100644
>> --- a/drivers/xen/pvcalls-back.c
>> +++ b/drivers/xen/pvcalls-back.c
>> @@ -1088,7 +1088,8 @@ static void set_backend_state(struct xenbus_device *dev,
>>   		case XenbusStateInitialised:
>>   			switch (state) {
>>   			case XenbusStateConnected:
>> -				backend_connect(dev);
>> +				if (backend_connect(dev))
>> +					return;
> 
> Do you think it would make sense to WARN?

There already should be an error message (either due to a failed
grant mapping or a failed memory allocation).


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 12 05:51:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 05:51:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYNpC-00073P-2C; Tue, 12 May 2020 05:51:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EIy0=62=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYNpA-00073K-4C
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 05:51:28 +0000
X-Inumbo-ID: 9bcd4e72-9414-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9bcd4e72-9414-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 05:51:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UYCUO7/mXcAMTdwRLzYOIWd/3shTYLAEhY9saEQET10=; b=59mml8WK9yZsEPArjKKXedq/M
 qMJsLJWLjUD9H/CHm9ThkOahsuwTKJRN1PCNJkBDzsySeCRt5Yf1QzB5EhBwlt1Pw5HHpTIsXH+iw
 USQuhqBnLCb8KtVVFdD/TTtJ2+AaHZApZD9uPdMmCA0tcH79rNoqYncMhbuTq2vZEeVlU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYNp4-0005R0-1N; Tue, 12 May 2020 05:51:22 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYNp3-0003af-MY; Tue, 12 May 2020 05:51:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYNp3-0004G9-Lz; Tue, 12 May 2020 05:51:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150141-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150141: all pass - PUSHED
X-Osstest-Versions-This: ovmf=9378310dd877b99be1da398f39e82e0501aca372
X-Osstest-Versions-That: ovmf=c8543b8d830d22882dab4ece47f0413f9c6eb431
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 05:51:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150141 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150141/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9378310dd877b99be1da398f39e82e0501aca372
baseline version:
 ovmf                 c8543b8d830d22882dab4ece47f0413f9c6eb431

Last test of basis   150093  2020-05-08 21:10:51 Z    3 days
Testing same since   150141  2020-05-11 19:39:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bret Barkelew <bret.barkelew@microsoft.com>
  Lendacky, Thomas <thomas.lendacky@amd.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   c8543b8d83..9378310dd8  9378310dd877b99be1da398f39e82e0501aca372 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 12 07:00:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 07:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYOu5-0004PQ-Ls; Tue, 12 May 2020 07:00:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB0z=62=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jYOu4-0004PK-Gf
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 07:00:36 +0000
X-Inumbo-ID: 45568d56-941e-11ea-ae69-bc764e2007e4
Received: from mail-wm1-x333.google.com (unknown [2a00:1450:4864:20::333])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45568d56-941e-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 07:00:32 +0000 (UTC)
Received: by mail-wm1-x333.google.com with SMTP id z72so12371510wmc.2
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 00:00:32 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=txIDyB7XC8tZKLfnlalgGrEM/4fVGjUtnwmnJEocRGA=;
 b=BUmdZMezX1+Z5MKQa9G7TP6V/232cCpjbA6TQUHmvPfH6RjhjqjWRwVYH/NFRReXvh
 R/9gEFCAIepjicSrtqaLV3k3jM6FT7+u3R6Jgbff7mLr4eQ/aBT4AjbxdGFxKeOHpE/1
 OZctIeOuzbH5ZIseuZ9mjbtL/MqFJqOqFCURnWNrhoCkYfZaPKM8WOWYNosQlpCwcCrB
 U0Ggsy3SBNsNOr0lb5CAQ/EoGyxjvqWAWIpkg7DvEb9/NLTd9xXqqnZoMwGzGB5qI966
 3XiAONBjxmg2n1UcePsLclE2GhSywWgJtcmSpEBGdsDWHr7kmFCFWNh57w/5tZT3kg85
 ZkbQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=txIDyB7XC8tZKLfnlalgGrEM/4fVGjUtnwmnJEocRGA=;
 b=EKlSKP/sFFy4Wbp26fAYYZU7ywRQiNnxecPYIb2BXSszUposua6i3FPMyq9VRd83Do
 5vug4VaZg7uMTMRZA9YfwPQCMfwXfHrW2BwQGcT38WS3dWiJlJsgDENHgqP4JaDGJ/4z
 nzwYT9+WZjvuZuPIxkIU0BRqTPJzeUznwGPOVPetB24oy/zC3aCa6OhvGiuUFplWoo2D
 u1lNqTrKPXFNunPY/dkNEha9lt/4m8quXiggIgHBgbSHeHwVrjKzPNiF0j+nxAApNiPo
 c8E2DfzrDKV2MplljggTJAREy2d6lHvrwr7IZqt7p+OZ8XvvmepCXhnqhEuqQbCtgxfj
 +VLQ==
X-Gm-Message-State: AGi0PuZt7AUbpSU74T9kBnfFq6/DFnu8hR4BmQ6lVr8ln/ML5QOF5xlS
 iYDcY/Dqdoercjdaaur/AsY=
X-Google-Smtp-Source: APiQypJE9nNwBRqddbh239Y2LT2ofMC3xojGfMdo2Ac3ymnEIbCGjV+p6ybl8OCjnG82fAIykC+wEw==
X-Received: by 2002:a7b:cb86:: with SMTP id m6mr33504111wmi.64.1589266831768; 
 Tue, 12 May 2020 00:00:31 -0700 (PDT)
Received: from localhost.localdomain (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id i17sm30322243wml.23.2020.05.12.00.00.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 00:00:31 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 2/3] various: Remove unnecessary OBJECT() cast
Date: Tue, 12 May 2020 09:00:19 +0200
Message-Id: <20200512070020.22782-3-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200512070020.22782-1-f4bug@amsat.org>
References: <20200512070020.22782-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Markus Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>,
 Corey Minyard <cminyard@mvista.com>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The OBJECT() macro is defined as:

  #define OBJECT(obj) ((Object *)(obj))

Remove the unnecessary OBJECT() casts when we already know the
pointer is of Object type.

Patch created mechanically using spatch with this script:

  @@
  typedef Object;
  Object *o;
  @@
  -   OBJECT(o)
  +   o

Acked-by: Cornelia Huck <cohuck@redhat.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Acked-by: John Snow <jsnow@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/core/bus.c                       | 2 +-
 hw/ide/ahci-allwinner.c             | 2 +-
 hw/ipmi/smbus_ipmi.c                | 2 +-
 hw/microblaze/petalogix_ml605_mmu.c | 8 ++++----
 hw/s390x/sclp.c                     | 2 +-
 monitor/misc.c                      | 3 +--
 qom/object.c                        | 4 ++--
 7 files changed, 11 insertions(+), 12 deletions(-)

diff --git a/hw/core/bus.c b/hw/core/bus.c
index 3dc0a825f0..4ea5870de8 100644
--- a/hw/core/bus.c
+++ b/hw/core/bus.c
@@ -25,7 +25,7 @@
 
 void qbus_set_hotplug_handler(BusState *bus, Object *handler, Error **errp)
 {
-    object_property_set_link(OBJECT(bus), OBJECT(handler),
+    object_property_set_link(OBJECT(bus), handler,
                              QDEV_HOTPLUG_HANDLER_PROPERTY, errp);
 }
 
diff --git a/hw/ide/ahci-allwinner.c b/hw/ide/ahci-allwinner.c
index bb8393d2b6..8536b9eb5a 100644
--- a/hw/ide/ahci-allwinner.c
+++ b/hw/ide/ahci-allwinner.c
@@ -90,7 +90,7 @@ static void allwinner_ahci_init(Object *obj)
     SysbusAHCIState *s = SYSBUS_AHCI(obj);
     AllwinnerAHCIState *a = ALLWINNER_AHCI(obj);
 
-    memory_region_init_io(&a->mmio, OBJECT(obj), &allwinner_ahci_mem_ops, a,
+    memory_region_init_io(&a->mmio, obj, &allwinner_ahci_mem_ops, a,
                           "allwinner-ahci", ALLWINNER_AHCI_MMIO_SIZE);
     memory_region_add_subregion(&s->ahci.mem, ALLWINNER_AHCI_MMIO_OFF,
                                 &a->mmio);
diff --git a/hw/ipmi/smbus_ipmi.c b/hw/ipmi/smbus_ipmi.c
index 2a9470d9df..f1a0148755 100644
--- a/hw/ipmi/smbus_ipmi.c
+++ b/hw/ipmi/smbus_ipmi.c
@@ -329,7 +329,7 @@ static void smbus_ipmi_init(Object *obj)
 {
     SMBusIPMIDevice *sid = SMBUS_IPMI(obj);
 
-    ipmi_bmc_find_and_link(OBJECT(obj), (Object **) &sid->bmc);
+    ipmi_bmc_find_and_link(obj, (Object **) &sid->bmc);
 }
 
 static void smbus_ipmi_get_fwinfo(struct IPMIInterface *ii, IPMIFwInfo *info)
diff --git a/hw/microblaze/petalogix_ml605_mmu.c b/hw/microblaze/petalogix_ml605_mmu.c
index 0a2640c40b..52dcea9abd 100644
--- a/hw/microblaze/petalogix_ml605_mmu.c
+++ b/hw/microblaze/petalogix_ml605_mmu.c
@@ -150,9 +150,9 @@ petalogix_ml605_init(MachineState *machine)
     qdev_set_nic_properties(eth0, &nd_table[0]);
     qdev_prop_set_uint32(eth0, "rxmem", 0x1000);
     qdev_prop_set_uint32(eth0, "txmem", 0x1000);
-    object_property_set_link(OBJECT(eth0), OBJECT(ds),
+    object_property_set_link(OBJECT(eth0), ds,
                              "axistream-connected", &error_abort);
-    object_property_set_link(OBJECT(eth0), OBJECT(cs),
+    object_property_set_link(OBJECT(eth0), cs,
                              "axistream-control-connected", &error_abort);
     qdev_init_nofail(eth0);
     sysbus_mmio_map(SYS_BUS_DEVICE(eth0), 0, AXIENET_BASEADDR);
@@ -163,9 +163,9 @@ petalogix_ml605_init(MachineState *machine)
     cs = object_property_get_link(OBJECT(eth0),
                                   "axistream-control-connected-target", NULL);
     qdev_prop_set_uint32(dma, "freqhz", 100 * 1000000);
-    object_property_set_link(OBJECT(dma), OBJECT(ds),
+    object_property_set_link(OBJECT(dma), ds,
                              "axistream-connected", &error_abort);
-    object_property_set_link(OBJECT(dma), OBJECT(cs),
+    object_property_set_link(OBJECT(dma), cs,
                              "axistream-control-connected", &error_abort);
     qdev_init_nofail(dma);
     sysbus_mmio_map(SYS_BUS_DEVICE(dma), 0, AXIDMA_BASEADDR);
diff --git a/hw/s390x/sclp.c b/hw/s390x/sclp.c
index ede056b3ef..4132286db7 100644
--- a/hw/s390x/sclp.c
+++ b/hw/s390x/sclp.c
@@ -322,7 +322,7 @@ void s390_sclp_init(void)
 
     object_property_add_child(qdev_get_machine(), TYPE_SCLP, new,
                               NULL);
-    object_unref(OBJECT(new));
+    object_unref(new);
     qdev_init_nofail(DEVICE(new));
 }
 
diff --git a/monitor/misc.c b/monitor/misc.c
index 9723b466cd..f5207cd242 100644
--- a/monitor/misc.c
+++ b/monitor/misc.c
@@ -1837,8 +1837,7 @@ void object_add_completion(ReadLineState *rs, int nb_args, const char *str)
 static int qdev_add_hotpluggable_device(Object *obj, void *opaque)
 {
     GSList **list = opaque;
-    DeviceState *dev = (DeviceState *)object_dynamic_cast(OBJECT(obj),
-                                                          TYPE_DEVICE);
+    DeviceState *dev = (DeviceState *)object_dynamic_cast(obj, TYPE_DEVICE);
 
     if (dev == NULL) {
         return 0;
diff --git a/qom/object.c b/qom/object.c
index be700e831f..07c1443d0e 100644
--- a/qom/object.c
+++ b/qom/object.c
@@ -762,7 +762,7 @@ Object *object_new_with_propv(const char *typename,
         }
     }
 
-    object_unref(OBJECT(obj));
+    object_unref(obj);
     return obj;
 
  error:
@@ -1687,7 +1687,7 @@ void object_property_add_child(Object *obj, const char *name,
         return;
     }
 
-    type = g_strdup_printf("child<%s>", object_get_typename(OBJECT(child)));
+    type = g_strdup_printf("child<%s>", object_get_typename(child));
 
     op = object_property_add(obj, name, type, object_get_child_property, NULL,
                              object_finalize_child_property, child, &local_err);
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Tue May 12 07:00:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 07:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYOtw-0004OV-1j; Tue, 12 May 2020 07:00:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB0z=62=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jYOtu-0004OQ-KT
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 07:00:26 +0000
X-Inumbo-ID: 412bb198-941e-11ea-ae69-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 412bb198-941e-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 07:00:25 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id m24so10868744wml.2
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 00:00:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=v6fg/mU4Bk/0q03fr2d5YzMbkGZ67G/czubLvjRut6E=;
 b=GgXgmFz8IY+pa9ocN2/XvYf1Z5J3CRDjyV+8xqeFnikvU5jCbQLf+YGXawi1zXF1wK
 nEp/GbVlzP17pUvUFZkuS9f2GXLimVNCaQnAurujidToXag6PvgDmn4tpcRFsH7reZQC
 MoAWgBH6I6b5/OgmqJ3AEr4tBCEoxIuIHZKm9nLR6k5jqXT5QzsgTNJsm0Y8zTWC17ed
 +8+skZr4g0xTxz1dfSh5eKksAqonVo7b7KGs47GDQFSxdg06PQdaimERZEF9qH0cDLZp
 rciTN8f2otxSJeW3UBVU7QboWmu2WO1IudLXV7wEwGqOsGivDgP9LXihMZbbmnP7e08A
 ViAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :mime-version:content-transfer-encoding;
 bh=v6fg/mU4Bk/0q03fr2d5YzMbkGZ67G/czubLvjRut6E=;
 b=rHfYkLo/i5cPot0MjYFyI/qZ9bTQVsJzJFw8meoD9o50RjcxyELZ0npKb6/Sj80l21
 CznAeDp3G4MO8dnG4L/O2XLzmIt4nHnUucP5xvKjxfjm/6CbHRzMFOiSAO1t9b9T82hm
 DLQIeTmgHwUhrXf+Rf7TZk67SWhAfBRVj6D6hWL5MOi8UbH8+UmIgdTp5zutj3qt369m
 kd5tfcOSXjMW5zcp3kzkTWnu3aH76qSOuCRPykn/BmSdtiBOrmI70+0YiP216TAdFBq3
 cZ44lnGYgxyF2Hqglnm/3dGsL05fJbXK3hBcuhP2Thlfx4w+5P4GVPfDsbRQ0r1iHNCs
 jftw==
X-Gm-Message-State: AGi0Pua2m6yeDPLP+32fXgRHdkE44zwdCMTDzuk4NEPbW7jrSec4Rb+R
 DpGeeUmZrar8o/hSTZ/mDfo=
X-Google-Smtp-Source: APiQypIT2HPCX3fabSDeTGNUfM5dRg5+16ERzkptJxDXeBDyVSY92wAyQPNH6JP5vKrTXy89Xka+RA==
X-Received: by 2002:a1c:790b:: with SMTP id l11mr13129954wme.2.1589266824836; 
 Tue, 12 May 2020 00:00:24 -0700 (PDT)
Received: from localhost.localdomain (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id i17sm30322243wml.23.2020.05.12.00.00.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 00:00:24 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 0/3] various: Remove unnecessary casts
Date: Tue, 12 May 2020 09:00:17 +0200
Message-Id: <20200512070020.22782-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Markus Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Remove unnecessary casts using Coccinelle scripts.

The CPU()/OBJECT() patches don't introduce any logical change;
the DEVICE() one removes various OBJECT_CHECK() calls.

Since v2:
- Fixed patch #2 description (Markus)
- Add A-b/R-b tags

Since v1:
- Reword description (Markus)
- Add A-b/R-b tags

Philippe Mathieu-Daudé (3):
  target: Remove unnecessary CPU() cast
  various: Remove unnecessary OBJECT() cast
  hw: Remove unnecessary DEVICE() cast

 hw/core/bus.c                       | 2 +-
 hw/display/artist.c                 | 2 +-
 hw/display/cg3.c                    | 2 +-
 hw/display/sm501.c                  | 2 +-
 hw/display/tcx.c                    | 4 ++--
 hw/display/vga-isa.c                | 2 +-
 hw/i2c/imx_i2c.c                    | 2 +-
 hw/i2c/mpc_i2c.c                    | 2 +-
 hw/ide/ahci-allwinner.c             | 2 +-
 hw/ide/piix.c                       | 2 +-
 hw/ipmi/smbus_ipmi.c                | 2 +-
 hw/microblaze/petalogix_ml605_mmu.c | 8 ++++----
 hw/misc/macio/pmu.c                 | 2 +-
 hw/net/ftgmac100.c                  | 3 +--
 hw/net/imx_fec.c                    | 2 +-
 hw/nubus/nubus-device.c             | 2 +-
 hw/pci-host/bonito.c                | 2 +-
 hw/ppc/spapr.c                      | 2 +-
 hw/s390x/sclp.c                     | 2 +-
 hw/sh4/sh_pci.c                     | 2 +-
 hw/xen/xen-legacy-backend.c         | 2 +-
 monitor/misc.c                      | 3 +--
 qom/object.c                        | 4 ++--
 target/ppc/mmu_helper.c             | 2 +-
 24 files changed, 29 insertions(+), 31 deletions(-)

-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Tue May 12 07:00:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 07:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYOuA-0004Qf-Up; Tue, 12 May 2020 07:00:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB0z=62=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jYOu9-0004QG-GY
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 07:00:41 +0000
X-Inumbo-ID: 4795615a-941e-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x341.google.com (unknown [2a00:1450:4864:20::341])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4795615a-941e-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 07:00:36 +0000 (UTC)
Received: by mail-wm1-x341.google.com with SMTP id g12so21970982wmh.3
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 00:00:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=oSI7PBjKYrlIMMlntGk8wizK4yL1nnvGLGhYtbWUWo8=;
 b=cG086jRJCGPkxKN1DRiVA0aRdMcFWXY/hWZ60En9VhgxhJYdb/f9yNdDtFI5Wa6pVt
 RUU+hEL+A0zY/3W8Q7Vj2t83lEW/tk6988md4dGV+OSHU4U/6hE2Fx8O285MV72fLCU0
 mb075igcAHTqhcYN9MDWSeYLjbMYnTIXrI6FqV8Knte97T9Eyz22rk4hc1chLLrfNMOp
 +8Lot1yBfZxA+1DikHUsyPlbKtLGpcfuvtqA1BYcskX3NwSHAc4guOgnPup8oxmPptXZ
 +Fh4L9BQtZnI+dZJsgtiKvV3TwFmofy1Qnnlv3vadxZ2OnGZxJ67J8bIwm5kWqoxXSJy
 P+Lg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=oSI7PBjKYrlIMMlntGk8wizK4yL1nnvGLGhYtbWUWo8=;
 b=jE9vdU+vNPlUl2X5Bs5fWJmvx9vJQ0NjE1GeqTCsKKJN2Ivw08+fcQ76ubPYr7BPzz
 ZMSUeUfEKSQ9PGTMuTwz+Fe40Cj3tGsuPDlJ3MxH+7QwrYsv4rtosB7BoRVLBrhA7qZ0
 YW9sUtrlMgZGkFsaEmI5VVrQ/IaGweam6U+895/pUyuqZ3q9ds7eGKIwx9oM1Y9F2zUz
 uklUgxsVIsqVjKS+XTONJ7SMtLnInXRQ6en0mfjpbpVfkvzMphDBpgtXoQNAfXO5NBJ8
 Dk4wQ9LjXokYG9Cexx0YEXOmrspFWKzrOwNsNVNR1AihI4dL8AzAijvsgWdWrmPDh7UU
 Eihw==
X-Gm-Message-State: AGi0Pub2b2WxfPxf6kwr/TFCcBBDwXGkIAMJhvhckmgyf+ZLncaPFSeP
 SGajsTOh35M0TWCmdCHTEW8=
X-Google-Smtp-Source: APiQypLsBLLCcST2brDrKB9sPzgFFZ9YK6z/Vm4O7scXhxYR1i+RylXGb9AaZGF2wkdLmTwPriT+rg==
X-Received: by 2002:a1c:4e0e:: with SMTP id g14mr6756908wmh.0.1589266835360;
 Tue, 12 May 2020 00:00:35 -0700 (PDT)
Received: from localhost.localdomain (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id i17sm30322243wml.23.2020.05.12.00.00.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 00:00:34 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 3/3] hw: Remove unnecessary DEVICE() cast
Date: Tue, 12 May 2020 09:00:20 +0200
Message-Id: <20200512070020.22782-4-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200512070020.22782-1-f4bug@amsat.org>
References: <20200512070020.22782-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Markus Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The DEVICE() macro is defined as:

  #define DEVICE(obj) OBJECT_CHECK(DeviceState, (obj), TYPE_DEVICE)

which expands to:

  ((DeviceState *)object_dynamic_cast_assert((Object *)(obj), TYPE_DEVICE,
                                             __FILE__, __LINE__,
                                             __func__))

This assertion can only fail when @obj points to something other
than its stated type, i.e. when we're in undefined behavior country.

Remove the unnecessary DEVICE() casts when we already know the
pointer is of DeviceState type.

Patch created mechanically using spatch with this script:

  @@
  typedef DeviceState;
  DeviceState *s;
  @@
  -   DEVICE(s)
  +   s

Acked-by: David Gibson <david@gibson.dropbear.id.au>
Acked-by: Paul Durrant <paul@xen.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Acked-by: John Snow <jsnow@redhat.com>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/display/artist.c         | 2 +-
 hw/display/cg3.c            | 2 +-
 hw/display/sm501.c          | 2 +-
 hw/display/tcx.c            | 4 ++--
 hw/display/vga-isa.c        | 2 +-
 hw/i2c/imx_i2c.c            | 2 +-
 hw/i2c/mpc_i2c.c            | 2 +-
 hw/ide/piix.c               | 2 +-
 hw/misc/macio/pmu.c         | 2 +-
 hw/net/ftgmac100.c          | 3 +--
 hw/net/imx_fec.c            | 2 +-
 hw/nubus/nubus-device.c     | 2 +-
 hw/pci-host/bonito.c        | 2 +-
 hw/ppc/spapr.c              | 2 +-
 hw/sh4/sh_pci.c             | 2 +-
 hw/xen/xen-legacy-backend.c | 2 +-
 16 files changed, 17 insertions(+), 18 deletions(-)

diff --git a/hw/display/artist.c b/hw/display/artist.c
index 753dbb9a77..7e2a4556bd 100644
--- a/hw/display/artist.c
+++ b/hw/display/artist.c
@@ -1353,7 +1353,7 @@ static void artist_realizefn(DeviceState *dev, Error **errp)
     s->cursor_height = 32;
     s->cursor_width = 32;
 
-    s->con = graphic_console_init(DEVICE(dev), 0, &artist_ops, s);
+    s->con = graphic_console_init(dev, 0, &artist_ops, s);
     qemu_console_resize(s->con, s->width, s->height);
 }
 
diff --git a/hw/display/cg3.c b/hw/display/cg3.c
index a1ede10394..f7f1c199ce 100644
--- a/hw/display/cg3.c
+++ b/hw/display/cg3.c
@@ -321,7 +321,7 @@ static void cg3_realizefn(DeviceState *dev, Error **errp)
 
     sysbus_init_irq(sbd, &s->irq);
 
-    s->con = graphic_console_init(DEVICE(dev), 0, &cg3_ops, s);
+    s->con = graphic_console_init(dev, 0, &cg3_ops, s);
     qemu_console_resize(s->con, s->width, s->height);
 }
 
diff --git a/hw/display/sm501.c b/hw/display/sm501.c
index de0ab9d977..2a564889bd 100644
--- a/hw/display/sm501.c
+++ b/hw/display/sm501.c
@@ -1839,7 +1839,7 @@ static void sm501_init(SM501State *s, DeviceState *dev,
                                 &s->twoD_engine_region);
 
     /* create qemu graphic console */
-    s->con = graphic_console_init(DEVICE(dev), 0, &sm501_ops, s);
+    s->con = graphic_console_init(dev, 0, &sm501_ops, s);
 }
 
 static const VMStateDescription vmstate_sm501_state = {
diff --git a/hw/display/tcx.c b/hw/display/tcx.c
index 76de16e8ea..1fb45b1aab 100644
--- a/hw/display/tcx.c
+++ b/hw/display/tcx.c
@@ -868,9 +868,9 @@ static void tcx_realizefn(DeviceState *dev, Error **errp)
     sysbus_init_irq(sbd, &s->irq);
 
     if (s->depth == 8) {
-        s->con = graphic_console_init(DEVICE(dev), 0, &tcx_ops, s);
+        s->con = graphic_console_init(dev, 0, &tcx_ops, s);
     } else {
-        s->con = graphic_console_init(DEVICE(dev), 0, &tcx24_ops, s);
+        s->con = graphic_console_init(dev, 0, &tcx24_ops, s);
     }
     s->thcmisc = 0;
 
diff --git a/hw/display/vga-isa.c b/hw/display/vga-isa.c
index 0633ed382c..3aaeeeca1e 100644
--- a/hw/display/vga-isa.c
+++ b/hw/display/vga-isa.c
@@ -74,7 +74,7 @@ static void vga_isa_realizefn(DeviceState *dev, Error **errp)
                                         0x000a0000,
                                         vga_io_memory, 1);
     memory_region_set_coalescing(vga_io_memory);
-    s->con = graphic_console_init(DEVICE(dev), 0, s->hw_ops, s);
+    s->con = graphic_console_init(dev, 0, s->hw_ops, s);
 
     memory_region_add_subregion(isa_address_space(isadev),
                                 VBE_DISPI_LFB_PHYSICAL_ADDRESS,
diff --git a/hw/i2c/imx_i2c.c b/hw/i2c/imx_i2c.c
index 30b9aea247..2e02e1c4fa 100644
--- a/hw/i2c/imx_i2c.c
+++ b/hw/i2c/imx_i2c.c
@@ -305,7 +305,7 @@ static void imx_i2c_realize(DeviceState *dev, Error **errp)
                           IMX_I2C_MEM_SIZE);
     sysbus_init_mmio(SYS_BUS_DEVICE(dev), &s->iomem);
     sysbus_init_irq(SYS_BUS_DEVICE(dev), &s->irq);
-    s->bus = i2c_init_bus(DEVICE(dev), NULL);
+    s->bus = i2c_init_bus(dev, NULL);
 }
 
 static void imx_i2c_class_init(ObjectClass *klass, void *data)
diff --git a/hw/i2c/mpc_i2c.c b/hw/i2c/mpc_i2c.c
index 0aa1be3ce7..9a724f3a3e 100644
--- a/hw/i2c/mpc_i2c.c
+++ b/hw/i2c/mpc_i2c.c
@@ -332,7 +332,7 @@ static void mpc_i2c_realize(DeviceState *dev, Error **errp)
     memory_region_init_io(&i2c->iomem, OBJECT(i2c), &i2c_ops, i2c,
                           "mpc-i2c", 0x14);
     sysbus_init_mmio(SYS_BUS_DEVICE(dev), &i2c->iomem);
-    i2c->bus = i2c_init_bus(DEVICE(dev), "i2c");
+    i2c->bus = i2c_init_bus(dev, "i2c");
 }
 
 static void mpc_i2c_class_init(ObjectClass *klass, void *data)
diff --git a/hw/ide/piix.c b/hw/ide/piix.c
index 3b2de4c312..b402a93636 100644
--- a/hw/ide/piix.c
+++ b/hw/ide/piix.c
@@ -193,7 +193,7 @@ int pci_piix3_xen_ide_unplug(DeviceState *dev, bool aux)
             blk_unref(blk);
         }
     }
-    qdev_reset_all(DEVICE(dev));
+    qdev_reset_all(dev);
     return 0;
 }
 
diff --git a/hw/misc/macio/pmu.c b/hw/misc/macio/pmu.c
index b8466a4a3f..4b7def9096 100644
--- a/hw/misc/macio/pmu.c
+++ b/hw/misc/macio/pmu.c
@@ -758,7 +758,7 @@ static void pmu_realize(DeviceState *dev, Error **errp)
 
     if (s->has_adb) {
         qbus_create_inplace(&s->adb_bus, sizeof(s->adb_bus), TYPE_ADB_BUS,
-                            DEVICE(dev), "adb.0");
+                            dev, "adb.0");
         s->adb_poll_timer = timer_new_ms(QEMU_CLOCK_VIRTUAL, pmu_adb_poll, s);
         s->adb_poll_mask = 0xffff;
         s->autopoll_rate_ms = 20;
diff --git a/hw/net/ftgmac100.c b/hw/net/ftgmac100.c
index 041ed21017..25ebee7ec2 100644
--- a/hw/net/ftgmac100.c
+++ b/hw/net/ftgmac100.c
@@ -1035,8 +1035,7 @@ static void ftgmac100_realize(DeviceState *dev, Error **errp)
     qemu_macaddr_default_if_unset(&s->conf.macaddr);
 
     s->nic = qemu_new_nic(&net_ftgmac100_info, &s->conf,
-                          object_get_typename(OBJECT(dev)), DEVICE(dev)->id,
-                          s);
+                          object_get_typename(OBJECT(dev)), dev->id, s);
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
 
diff --git a/hw/net/imx_fec.c b/hw/net/imx_fec.c
index a35c33683e..7adcc9df65 100644
--- a/hw/net/imx_fec.c
+++ b/hw/net/imx_fec.c
@@ -1323,7 +1323,7 @@ static void imx_eth_realize(DeviceState *dev, Error **errp)
 
     s->nic = qemu_new_nic(&imx_eth_net_info, &s->conf,
                           object_get_typename(OBJECT(dev)),
-                          DEVICE(dev)->id, s);
+                          dev->id, s);
 
     qemu_format_nic_info_str(qemu_get_queue(s->nic), s->conf.macaddr.a);
 }
diff --git a/hw/nubus/nubus-device.c b/hw/nubus/nubus-device.c
index 01ccad9e8e..ffe78a8823 100644
--- a/hw/nubus/nubus-device.c
+++ b/hw/nubus/nubus-device.c
@@ -156,7 +156,7 @@ void nubus_register_rom(NubusDevice *dev, const uint8_t *rom, uint32_t size,
 
 static void nubus_device_realize(DeviceState *dev, Error **errp)
 {
-    NubusBus *nubus = NUBUS_BUS(qdev_get_parent_bus(DEVICE(dev)));
+    NubusBus *nubus = NUBUS_BUS(qdev_get_parent_bus(dev));
     NubusDevice *nd = NUBUS_DEVICE(dev);
     char *name;
     hwaddr slot_offset;
diff --git a/hw/pci-host/bonito.c b/hw/pci-host/bonito.c
index cc6545c8a8..f212796044 100644
--- a/hw/pci-host/bonito.c
+++ b/hw/pci-host/bonito.c
@@ -606,7 +606,7 @@ static void bonito_pcihost_realize(DeviceState *dev, Error **errp)
     BonitoState *bs = BONITO_PCI_HOST_BRIDGE(dev);
 
     memory_region_init(&bs->pci_mem, OBJECT(dev), "pci.mem", BONITO_PCILO_SIZE);
-    phb->bus = pci_register_root_bus(DEVICE(dev), "pci",
+    phb->bus = pci_register_root_bus(dev, "pci",
                                      pci_bonito_set_irq, pci_bonito_map_irq,
                                      dev, &bs->pci_mem, get_system_io(),
                                      0x28, 32, TYPE_PCI_BUS);
diff --git a/hw/ppc/spapr.c b/hw/ppc/spapr.c
index c18eab0a23..b058ce37a8 100644
--- a/hw/ppc/spapr.c
+++ b/hw/ppc/spapr.c
@@ -4024,7 +4024,7 @@ static void spapr_phb_plug(HotplugHandler *hotplug_dev, DeviceState *dev,
     /* hotplug hooks should check it's enabled before getting this far */
     assert(drc);
 
-    spapr_drc_attach(drc, DEVICE(dev), &local_err);
+    spapr_drc_attach(drc, dev, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         return;
diff --git a/hw/sh4/sh_pci.c b/hw/sh4/sh_pci.c
index 08f2fc1dde..0a3e86f949 100644
--- a/hw/sh4/sh_pci.c
+++ b/hw/sh4/sh_pci.c
@@ -129,7 +129,7 @@ static void sh_pci_device_realize(DeviceState *dev, Error **errp)
     for (i = 0; i < 4; i++) {
         sysbus_init_irq(sbd, &s->irq[i]);
     }
-    phb->bus = pci_register_root_bus(DEVICE(dev), "pci",
+    phb->bus = pci_register_root_bus(dev, "pci",
                                      sh_pci_set_irq, sh_pci_map_irq,
                                      s->irq,
                                      get_system_memory(),
diff --git a/hw/xen/xen-legacy-backend.c b/hw/xen/xen-legacy-backend.c
index 4a373b2373..f9d013811a 100644
--- a/hw/xen/xen-legacy-backend.c
+++ b/hw/xen/xen-legacy-backend.c
@@ -705,7 +705,7 @@ int xen_be_init(void)
 
     xen_sysdev = qdev_create(NULL, TYPE_XENSYSDEV);
     qdev_init_nofail(xen_sysdev);
-    xen_sysbus = qbus_create(TYPE_XENSYSBUS, DEVICE(xen_sysdev), "xen-sysbus");
+    xen_sysbus = qbus_create(TYPE_XENSYSBUS, xen_sysdev, "xen-sysbus");
     qbus_set_bus_hotplug_handler(xen_sysbus, &error_abort);
 
     return 0;
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Tue May 12 07:00:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 07:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYOu0-0004Oy-Di; Tue, 12 May 2020 07:00:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cB0z=62=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jYOtz-0004Or-GS
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 07:00:31 +0000
X-Inumbo-ID: 43340e86-941e-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43340e86-941e-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 07:00:29 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id y3so13931704wrt.1
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 00:00:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=uzSFoHvo+LUbrscxwLzY12kA7xl8hv+snls8859rSqE=;
 b=MKr7oaKIpdvu+7LhBS2ehzLga85rG20OmN/PQECwyNW6HJtJzgNbzn6/VSmt4HvpaE
 PCKkjxQ0SA9tDrjwV2vTRFt33MrBgYxXvJmW3xKfeh/jSPmVCKrB3mCNpYWS26HX3o5O
 hlh2XRl77sADIzjOQD3FhYzhGwWsvB5BiIRbRqJcq1rHxnkK6RXA18djF/X0TIRbJyDn
 w32lo0jVIS5hyOvbEactOn62WitR6SoI48eG9jU8v5NFciT+6Ug1P8t2rQ2nPvMv008M
 V6i7jvjYu8DN9SMM2V5Ykxmbtqbcxq4MNOc0Hft0mb7XvhhkZNICRLM4vW532M5F8WP6
 cBgg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=uzSFoHvo+LUbrscxwLzY12kA7xl8hv+snls8859rSqE=;
 b=JidafzB5H+LPJQv4oaQ/speldc7wgCoBnvHe6NTVd2t6bP0VNd2QmhRuJidv3fJyBW
 NWCH/skb2TrrfoZFP8redYjvcNlOMwK25QFCDsEGGzDQnlciRcENgefhSlVAPCyTzs1P
 P0n8p37nTfITRxF5WxPGEEKbYSYsjK8BCBMy4VY6KanhQejoXINdKaR6dHC1CzFIdscs
 glkKFiFCYwgsZ4vibO5+mBwHmOOTJhn2W4yyXI4g3Lndc5Ar1VlkFEfO/hKoscMYo3A5
 sGjBil1zgOzYWjgPAGP0Caj/NIFqFbwYeQr9cJVtq8VtlbHJsnyWvQQMgBi/FEE3JtcH
 vTpw==
X-Gm-Message-State: AGi0PuaeVxdWTd54Rfc2aOhZCHmNxYQw2a3I2wyKaeC4SM7gqsXC4jHx
 hEC2raz5MurGrR+ooOU9z+0=
X-Google-Smtp-Source: APiQypIs8M6gmDURfLOfQu5RtZ1Um8Xav+XMDqYjUSpzJEC+9fnPgiw4rV1mWuxxF/9/Y3/iVbp1ow==
X-Received: by 2002:a5d:6b8c:: with SMTP id n12mr23302823wrx.107.1589266828240; 
 Tue, 12 May 2020 00:00:28 -0700 (PDT)
Received: from localhost.localdomain (17.red-88-21-202.staticip.rima-tde.net.
 [88.21.202.17])
 by smtp.gmail.com with ESMTPSA id i17sm30322243wml.23.2020.05.12.00.00.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 00:00:27 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 1/3] target: Remove unnecessary CPU() cast
Date: Tue, 12 May 2020 09:00:18 +0200
Message-Id: <20200512070020.22782-2-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200512070020.22782-1-f4bug@amsat.org>
References: <20200512070020.22782-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>,
 BALATON Zoltan <balaton@eik.bme.hu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Markus Armbruster <armbru@redhat.com>, Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The CPU() macro is defined as:

  #define CPU(obj) ((CPUState *)(obj))

i.e. a plain pointer cast with no runtime check. Unlike the
OBJECT_CHECK()-based QOM cast macros, which expand to:

  ((CPUState *)object_dynamic_cast_assert((Object *)(obj), (name),
                                          __FILE__, __LINE__, __func__))

that assertion can only fail when @obj points to something other
than its stated type, i.e. when we're in undefined behavior country.

Remove the unnecessary CPU() casts when we already know the pointer
is of CPUState type.

Patch created mechanically using spatch with this script:

  @@
  typedef CPUState;
  CPUState *s;
  @@
  -   CPU(s)
  +   s

Acked-by: David Gibson <david@gibson.dropbear.id.au>
Reviewed-by: Cédric Le Goater <clg@kaod.org>
Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/ppc/mmu_helper.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/ppc/mmu_helper.c b/target/ppc/mmu_helper.c
index 86c667b094..8972714775 100644
--- a/target/ppc/mmu_helper.c
+++ b/target/ppc/mmu_helper.c
@@ -1820,7 +1820,7 @@ static inline void do_invalidate_BAT(CPUPPCState *env, target_ulong BATu,
     if (((end - base) >> TARGET_PAGE_BITS) > 1024) {
         /* Flushing 1024 4K pages is slower than a complete flush */
         LOG_BATS("Flush all BATs\n");
-        tlb_flush(CPU(cs));
+        tlb_flush(cs);
         LOG_BATS("Flush done\n");
         return;
     }
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Tue May 12 07:17:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 07:17:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYPAU-0005hm-JI; Tue, 12 May 2020 07:17:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OTN4=62=canonical.com=stefan.bader@srs-us1.protection.inumbo.net>)
 id 1jYPAT-0005hh-4A
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 07:17:33 +0000
X-Inumbo-ID: a4fbf78a-9420-11ea-a27c-12813bfff9fa
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4fbf78a-9420-11ea-a27c-12813bfff9fa;
 Tue, 12 May 2020 07:17:32 +0000 (UTC)
Received: from 1.general.smb.uk.vpn ([10.172.193.28])
 by youngberry.canonical.com with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <stefan.bader@canonical.com>)
 id 1jYPAN-00031U-FU; Tue, 12 May 2020 07:17:27 +0000
Subject: Re: [PATCH 0/2] Fixups for fcf-protection
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
References: <20200512033948.3507-1-jandryuk@gmail.com>
From: Stefan Bader <stefan.bader@canonical.com>
Autocrypt: addr=stefan.bader@canonical.com; prefer-encrypt=mutual; keydata=
 xsFNBE5mmXEBEADoM0yd6ERIuH2sQjbCGtrt0SFCbpAuOgNy7LSDJw2vZHkZ1bLPtpojdQId
 258o/4V+qLWaWLjbQdadzodnVUsvb+LUKJhFRB1kmzVYNxiu7AtxOnNmUn9dl1oS90IACo1B
 BpaMIunnKu1pp7s3sfzWapsNMwHbYVHXyJeaPFtMqOxd1V7bNEAC9uNjqJ3IG15f5/50+N+w
 LGkd5QJmp6Hs9RgCXQMDn989+qFnJga390C9JPWYye0sLjQeZTuUgdhebP0nvciOlKwaOC8v
 K3UwEIbjt+eL18kBq4VBgrqQiMupmTP9oQNYEgk2FiW3iAQ9BXE8VGiglUOF8KIe/2okVjdO
 nl3VgOHumV+emrE8XFOB2pgVmoklYNvOjaIV7UBesO5/16jbhGVDXskpZkrP/Ip+n9XD/EJM
 ismF8UcvcL4aPwZf9J03fZT4HARXuig/GXdK7nMgCRChKwsAARjw5f8lUx5iR1wZwSa7HhHP
 rAclUzjFNK2819/Ke5kM1UuT1X9aqL+uLYQEDB3QfJmdzVv5vHON3O7GOfaxBICo4Z5OdXSQ
 SRetiJ8YeUhKpWSqP59PSsbJg+nCKvWfkl/XUu5cFO4V/+NfivTttnoFwNhi/4lrBKZDhGVm
 6Oo/VytPpGHXt29npHb8x0NsQOsfZeam9Z5ysmePwH/53Np8NQARAQABzTVTdGVmYW4gQmFk
 ZXIgKENhbm9uaWNhbCkgPHN0ZWZhbi5iYWRlckBjYW5vbmljYWwuY29tPsLBrgQTAQoAQQIb
 AwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAIZARYhBNtdfMrzmU4zldpNPuhnXe7L7s6jBQJd
 7mXwBQkRwqX/ACEJEOhnXe7L7s6jFiEE2118yvOZTjOV2k0+6Gdd7svuzqPCmBAAsTPnhe+A
 iFiLyoLCqSikRlerZ9i20wUwQyRbd0Dtj+bl+eY+z9Bix+mfsu1ByYMYHPhb1gMv8oP7VgXV
 bX6/Ojw1BN5HTYMmSxpPHauLLMj7NL1hj9zQS/Jq45Zryz1i8j2XM36BaA4rIQrjXmfJteNT
 kUQwAXqMCMnvRP4M95mMYGCSgM8oFEo7cMGA5XbeusCIzH1ReoBtxJRTiLWZ7o9NloBtJ4iI
 4850l8+Ak/ySLdC4YXdy3bd0suU9qZ5wIKAfhkEwZvxlAuFF8s1hqjR1sNdypD45IWXakZOi
 ILX0wmPWKbUJrwNz3slG7OTE4UpF9cD2tixXLsBX/+l9XLfHm1PR8lC3PhQVThDOGL/TxTbC
 CX22wnE/YsK1yhdrsP7d6F73ZxA2ytBejpco3O84WhfMMHOhVT1JhO/XZj3vMQIkbXUX5CYO
 KiC53L6Kir5H1oqAxQi6CcKHjku5m5HKP2q4BJifm9/9jLAwvm9JUo1DX7SNw7++TCNrhsxT
 SKe0DSx7y6ONUxX9dclvzQ+2CFlVUv/7GqcCkaKUh1rn5ARuN087xeM2UU7xwiF7PzX7ybrz
 Juermy995788k4RnqOXEblcjJvcH+TKBljqSKY8t4tyPErgVUfm16JIHQh+kQydA0uuMPNda
 CYD8GTtU3Jw9g4q9mdi4E2ADvATOwU0ETmaZcQEQAMKRF+5LVwDHTbJOyT2DIBqlxCelvoQF
 rLjQKH8g+swXaIbgXQJfqm4q5uONVuovqMQrKSyo9vntW71YC5/LhGW/c7DNrKaZaTTQJE4J
 VK4RX7duKQFOcX+X5VUK9xw2WkewAMwudxoBO9I6PWIJd6KTE0CTYsDeD0fy7PuVBSGOeLTm
 LEFkYMZtrEHo52aHnyryT+KihEQfKp+V5KDXOm4HFgYpW6DZ1pctK9AjvDn15g78vViku27W
 wzOpHJh1JTIKI1xcM78qjbbWjY192pD0oRPVrPxBOwGdl5OyOyThWdjCNz1kRg3ssBNauHPy
 +AjZ4/zSVfFeb2THzU25uc4/Gdrm+D0OHFkSOjwD7MThzltC5IIncZOc5qVewDPQvCTUfWcX
 CLNSq+Y8jx4CpkZ5mgnjT24Nw2LYGtH5bsyNfE8zmTgzbMyI18i80GNyUEsT+buetzE0s6TX
 P8pCIVVlCG0deg5NRaYg1n4TcYglPYNOgXFShoRbYZ1fSuOoR6ttRqijpIFfsfGaMDOx40P0
 hq0ZPGA34SElSIhYrhQ4ffjd6sHseBr3xZ4TNlOrtbY2/Ceo5UCrYSWc+EesP3ydYgFk84S7
 rGCLK9UV9ckaZEExEFH7yEGN9fTrjecurfBg6tls18/x0lVBngbEjo4tNzBg2CJ+qn9IgnMT
 W2CTABEBAAHCwZMEGAEKACYCGwwWIQTbXXzK85lOM5XaTT7oZ13uy+7OowUCXe5mRQUJEYdS
 1AAhCRDoZ13uy+7OoxYhBNtdfMrzmU4zldpNPuhnXe7L7s6jGfAP/jjsc4PD0+wfaP2L2wbi
 n53N1itsRaWD7IqpUZPuzZ1dQVzjKQnvY6yhstXqyYNFgQ+wV4O5m0I+ih+fKDLJQmQpG+Dd
 YoMA9iYiaPy3/fAxXcOoVEfCWvwzlYY6TY324ReRCCM5JFfCv6SK5ETzi+rpXYtiD6MLTJMt
 sqCCdXEHbURBFC/1nKUaC61umaiE8eEcS9p51EqdJKa97HbGJlKBilgHwUjv1kwrMqVuGJne
 LVkk+DVRWDltv6ZETl/LGkXc52gkRZ5/EHk0m9loA5lyy4ximp3GJmTzUXHa/TrBXFjdkd5Y
 6Ovn61ufBqEdU6OBOya9jLnAyvMJr5H9PDZZ4ajs32kb4HSyyuZebb+i2Thgh9e4pig7unB9
 Kn9BFQgndzqvEiKLCs3L2CUasHOgiRiUms/QjVBwpw1MzGatT4vguBbitoto81/sSUQLxF+s
 WdAYX7ip7puyrWZgWAAni+FduwXrOq9mBhH+GUKvZMjVWeq/qZnMkUuPeWPvK1YIsc29/cci
 wM8DhQgaQnLE+jLHbKiMfYq/g8d2laVPZMcxS15o9SZ5agrw8eIPKtZBFPX3w+m5qEWLhOOf
 33iBEBq9ULnimnNa6UR4X6IQk2TRticdXOlcGQmLwSpDiTFqUMEbchHEoXF9Y6rrl00IEoC1
 2Iat+yfjuNhlNAJs
Message-ID: <3542ecb3-6f4e-2408-ea9f-9b03ac23688e@canonical.com>
Date: Tue, 12 May 2020 09:17:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200512033948.3507-1-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.20 05:39, Jason Andryuk wrote:
> Two patches to fix building with a cf-protection toolchain.  The first
> handles the case where the compiler fails to run with "error:
> ‘-mindirect-branch’ and ‘-fcf-protection’ are not compatible".
> 
> The second fixes a runtime error that prevented Xen booting in legacy
> mode.

That might be better than just disabling fcf-protection (which is what was
done in Ubuntu, lacking a better solution).

Not sure whether it has already been hit, but that additional .note section
breaks the build of the emulator, as the generated headers become gigantic:

https://git.launchpad.net/ubuntu/+source/xen/tree/debian/patches/1001-strip-note-gnu-property.patch?h=ubuntu/focal

-Stefan
> 
> I still haven't figured out exactly what is wrong with rombios and/or
> ipxe.
> 
> Jason Andryuk (2):
>   xen/x86: Disable fcf-protection when necessary to build
>   x86/boot: Drop .note.gnu.properties in build32.lds
> 
>  xen/arch/x86/arch.mk          | 11 ++++++++++-
>  xen/arch/x86/boot/build32.lds |  5 +++++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 



From xen-devel-bounces@lists.xenproject.org Tue May 12 07:18:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 07:18:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYPBY-0005mI-TH; Tue, 12 May 2020 07:18:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYPBX-0005m8-N1
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 07:18:39 +0000
X-Inumbo-ID: cce54652-9420-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cce54652-9420-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 07:18:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 83FE2B175;
 Tue, 12 May 2020 07:18:40 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Ian Jackson <ian.jackson@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
 <24249.34804.568523.847077@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
Date: Tue, 12 May 2020 09:18:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24249.34804.568523.847077@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 19:14, Ian Jackson wrote:
> Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>> I'm trying to make the point that your patch, to me, looks to be
>> trying to overcome a problem for which we have had a solution all
>> the time.
> 
> Thanks for this clear statement of your objection.  I'm afraid I don't
> agree.  Even though .config exists (and is even used by osstest, so I
> know about it) I don't think it is as good as having it in
> menuconfig.

But you realize that my objection is (was) directed more at the reasoning
behind the change than at the change itself. If, as a community,
we decide to undo what we might now call a mistake, and if we're ready
to deal with the consequences, so be it.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 07:48:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 07:48:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYPea-0008L2-Af; Tue, 12 May 2020 07:48:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EIy0=62=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYPeZ-0008Kx-AE
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 07:48:39 +0000
X-Inumbo-ID: fa2241b6-9424-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa2241b6-9424-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 07:48:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HANhGf5MHG6jaVtJ11gw2uzFIx+a8y+3RzfUKp+1Ybk=; b=WSrlZY951EGpTKeAyD5oH8gBp
 bNaEr0oGwiP7Uh39KyqzbF7fjKlLRvjr51ExvACWS3k6Ik4K1n15zuBUC9JwEDBxzq9ElW39G2Twp
 CdWFrNYhZ9RUUcRFeztBax9PaKUytlJNWR7r0yQB1mo21a0m5ylHKaNR24FJIAEbC8WBQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYPeS-00081t-A4; Tue, 12 May 2020 07:48:32 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYPeR-0000S3-Vl; Tue, 12 May 2020 07:48:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYPeR-0000mM-K0; Tue, 12 May 2020 07:48:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150140-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150140: regressions - FAIL
X-Osstest-Failures: linux-linus:build-amd64-libvirt:libvirt-build:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=2ef96a5bb12be62ef75b5828c0aab838ebb29cb8
X-Osstest-Versions-That: linux=e99332e7b4cda6e60f5b5916cf9943a79dbef902
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 07:48:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150140 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150140/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build  fail in 150133 REGR. vs. 150126

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore          fail pass in 150133
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 150133
 test-armhf-armhf-xl-vhd      15 guest-start/debian.repeat  fail pass in 150133

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 16 guest-localmigrate fail in 150133 REGR. vs. 150126

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)          blocked in 150133 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)           blocked in 150133 n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 150133 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)           blocked in 150133 n/a
 test-amd64-amd64-libvirt      1 build-check(1)           blocked in 150133 n/a
 test-armhf-armhf-xl-rtds 16 guest-start/debian.repeat fail in 150133 like 150126
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150126
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150126
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150126
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150126
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150126
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150126
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150126
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150126
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                2ef96a5bb12be62ef75b5828c0aab838ebb29cb8
baseline version:
 linux                e99332e7b4cda6e60f5b5916cf9943a79dbef902

Last test of basis   150126  2020-05-10 14:58:51 Z    1 days
Testing same since   150133  2020-05-11 06:42:17 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexey Dobriyan <adobriyan@gmail.com>
  Anton Eidelman <anton@lightbitslabs.com>
  Christoph Hellwig <hch@lst.de>
  Ingo Molnar <mingo@kernel.org>
  Jann Horn <jannh@google.com>
  Jens Axboe <axboe@kernel.dk>
  Joerg Roedel <jroedel@suse.de>
  John Garry <john.garry@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Julia Lawall <Julia.Lawall@inria.fr>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Linus Torvalds <torvalds@linux-foundation.org>
  Miroslav Benes <mbenes@suse.cz>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qian Cai <cai@lca.pw>
  Randy Dunlap <rdunlap@infradead.org>
  Rick Edgecombe <rick.p.edgecombe@intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Yufen Yu <yuyufen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 985 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 12 08:58:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 08:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYQjZ-00069J-Tb; Tue, 12 May 2020 08:57:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WwR0=62=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYQjY-00069E-Ke
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 08:57:52 +0000
X-Inumbo-ID: a76d6b9f-942e-11ea-a280-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a76d6b9f-942e-11ea-a280-12813bfff9fa;
 Tue, 12 May 2020 08:57:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=B4xokOh/D/iyUg0ldO5myuFkrr6clNuIMI+vqJeHaZ8=; b=nr8Fp5yjIR7Uxb46BD0QYwZMCr
 as+DGMzAF24VHViJkFW4xYh8E/Yt4WRabF1K1g7QU6kgmqzf2r8bkSnyztIhJwKlvv8EJsfAnMqIc
 ic+gkTuU6kfaMoxLO9xDzpPQgYh5wN5yTTRRviuOZXHkv+RRc3Dpwk5paMqO34UjqJGo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYQjS-0001Vz-7Q; Tue, 12 May 2020 08:57:46 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYQjR-0006aX-RC; Tue, 12 May 2020 08:57:46 +0000
Subject: Re: [PATCH 05/12] xen: introduce reserve_heap_pages
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-5-sstabellini@kernel.org>
 <3129ab49-5898-9d2e-8fbb-d1fcaf6cdec7@suse.com>
 <alpine.DEB.2.21.2004291510270.28941@sstabellini-ThinkPad-T480s>
 <a316ed70-da35-8be0-a092-d992e56563d2@xen.org>
 <alpine.DEB.2.21.2004300928240.28941@sstabellini-ThinkPad-T480s>
 <86e8fa89-c6f5-6c9e-4f3e-7f98e8e12c6a@xen.org>
 <alpine.DEB.2.21.2005111750240.26167@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <99681366-3439-8974-e699-b5550e5a0e9e@xen.org>
Date: Tue, 12 May 2020 09:57:43 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005111750240.26167@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, andrew.cooper3@citrix.com,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com,
 "Woodhouse, David" <dwmw@amazon.co.uk>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 12/05/2020 02:10, Stefano Stabellini wrote:
> On Thu, 30 Apr 2020, Julien Grall wrote:
>> On 30/04/2020 18:00, Stefano Stabellini wrote:
>>> On Thu, 30 Apr 2020, Julien Grall wrote:
>>>>>>> +    pg = maddr_to_page(start);
>>>>>>> +    node = phys_to_nid(start);
>>>>>>> +    zone = page_to_zone(pg);
>>>>>>> +    page_list_del(pg, &heap(node, zone, order));
>>>>>>> +
>>>>>>> +    __alloc_heap_pages(pg, order, memflags, d);
>>>>>>
>>>>>> I agree with Julien in not seeing how this can be safe / correct.
>>>>>
>>>>> I haven't seen any issues so far in my testing -- I imagine it is
>>>>> because there aren't many memory allocations after setup_mm() and before
>>>>> create_domUs()  (which on ARM is called just before
>>>>> domain_unpause_by_systemcontroller at the end of start_xen.)
>>>>
>>>> I am not sure why you exclude setup_mm(). Any memory allocated (boot
>>>> allocator, xenheap) can clash with your regions. The main memory
>>>> allocations are for the frametable and dom0. I would say you were lucky
>>>> not to hit them.
>>>
>>> Maybe it is because Xen typically allocates memory top-down? So if I
>>> chose a high range then I would see a failure? But I have been mostly
>>> testing with ranges close to the beginning of RAM (as opposed to
>>> ranges close to the end of RAM.)
>>
>> I haven't looked at the details of the implementation, but you can try to
>> specify dom0 addresses for your domU. You should see a failure.
> 
> I managed to reproduce a failure by choosing the top address range. On
> Xilinx ZynqMP the memory is:
> 
>    reg = <0x0 0x0 0x0 0x7ff00000 0x8 0x0 0x0 0x80000000>;
> 
> And I chose:
> 
>    fdt set /chosen/domU0 direct-map <0x0 0x10000000 0x10000000 0x8 0x70000000 0x10000000>
> 
> Resulting in:
> 
> (XEN) *** LOADING DOMU cpus=1 memory=80000KB ***
> (XEN) Loading d1 kernel from boot module @ 0000000007200000
> (XEN) Loading ramdisk from boot module @ 0000000008200000
> (XEN) direct_map start=0x00000010000000 size=0x00000010000000
> (XEN) direct_map start=0x00000870000000 size=0x00000010000000
> (XEN) Data Abort Trap. Syndrome=0x5
> (XEN) Walking Hypervisor VA 0x2403480018 on CPU0 via TTBR 0x0000000000f05000
> (XEN) 0TH[0x0] = 0x0000000000f08f7f
> (XEN) 1ST[0x90] = 0x0000000000000000
> (XEN) CPU0: Unexpected Trap: Data Abort
> 
> [...]
> 
> (XEN) Xen call trace:
> (XEN)    [<000000000021a65c>] page_alloc.c#alloc_pages_from_buddy+0x15c/0x5d0 (PC)
> (XEN)    [<000000000021b43c>] reserve_domheap_pages+0xc4/0x148 (LR)

This isn't what I was expecting. If there is any failure, I would expect 
an error message, not a data abort. However...

> 
> Anything other than the very top of memory works.

... I am very confused by this. Are you suggesting that with your series 
you can allocate the same range for Dom0 and a DomU without any trouble?

> 
>>>>> - in construct_domU, add the range to xenheap and reserve it with
>>>>> reserve_heap_pages
>>>>
>>>> I am afraid you can't give the regions to the allocator and then allocate
>>>> them. The allocator is free to use any page for its own purpose or exclude
>>>> them.
>>>>
>>>> AFAICT, the allocator doesn't have a list of pages in use. It only keeps
>>>> track of free pages. So we can make the content of struct page_info look
>>>> like it was allocated by the allocator.
>>>>
>>>> We would need to be careful when giving a page back to the allocator, as
>>>> the page would need to be initialized (see [1]). This may not be a concern
>>>> for Dom0less, as the domain may never be destroyed, but it will be from a
>>>> correctness PoV.
>>>>
>>>> For LiveUpdate, the original Xen will carve out space to be used by the
>>>> boot allocator in the new Xen. But I think this is not necessary in your
>>>> context.
>>>>
>>>> It should be sufficient to exclude the page from the boot allocators (as
>>>> we do
>>>> for other modules).
>>>>
>>>> One potential issue that can arise is that there is no easy way today to
>>>> differentiate between allocated pages and pages not yet initialized. To
>>>> make the code robust, we need to prevent a page from being used in two
>>>> places. So for LiveUpdate we are marking them with a special value; this
>>>> is used afterwards to check that we are effectively using a reserved page.
>>>>
>>>> I hope this helps.
>>>
>>> Thanks for writing all of this down but I haven't understood some of it.
>>>
>>> For the sake of this discussion let's say that we managed to "reserve"
>>> the range early enough like we do for other modules, as you wrote.
>>>
>>> At the point where we want to call reserve_heap_pages() we would call
>>> init_heap_pages() just before it. We are still relatively early at boot
>>> so there aren't any concurrent memory operations. Why this doesn't work?
>>
>> Because init_heap_pages() may exclude some pages (for instance MFN 0 is carved
>> out) or use pages for its internal structures (see init_node_heap()). So you
>> can't expect to be able to allocate the exact same region with
>> reserve_heap_pages().
> 
> But it can't possibly use any of the pages it is trying to add to the
> heap, right?

Yes it can; there are already multiple examples in the buddy allocator.

> 
> We have reserved a certain range, we tell init_heap_pages to add the
> range to the heap, init_node_heap gets called and it ends up calling
> xmalloc. There is no way xmalloc can use any memory from that
> particular range because it is not in the heap yet. That should be safe.

If you look carefully at the code, you will notice:

     else if ( *use_tail && nr >= needed &&
               arch_mfn_in_directmap(mfn + nr) &&
               (!xenheap_bits ||
                !((mfn + nr - 1) >> (xenheap_bits - PAGE_SHIFT))) )
     {
         _heap[node] = mfn_to_virt(mfn + nr - needed);
         avail[node] = mfn_to_virt(mfn + nr - 1) +
                       PAGE_SIZE - sizeof(**avail) * NR_ZONES;
     }

This is one of the conditions under which the allocator will use a few 
pages from the region for itself.

> The init_node_heap code is a bit hard to follow but I went through it
> and couldn't spot anything that could cause any issues (MFN 0 aside,
> which is a bit special). Am I missing something?
Aside from what I wrote above: as soon as you give a page to an allocator, 
you waive the right to decide what the page is used for. The allocator is 
free to use the page for bookkeeping, or even to carve out the page because 
it can't deal with it.

So I don't really see how giving a region to the allocator and then 
expecting to get the exact same region back from a later call is ever 
going to be safe.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 12 09:01:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 09:01:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYQn6-0006xc-G7; Tue, 12 May 2020 09:01:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ihkH=62=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1jYQn3-0006xW-Hp
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 09:01:31 +0000
X-Inumbo-ID: 280edbe8-942f-11ea-b07b-bc764e2007e4
Received: from mailout1.w1.samsung.com (unknown [210.118.77.11])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 280edbe8-942f-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 09:01:25 +0000 (UTC)
Received: from eucas1p2.samsung.com (unknown [182.198.249.207])
 by mailout1.w1.samsung.com (KnoxPortal) with ESMTP id
 20200512090124euoutp01e49ae4acac62bcad757dedd8c151dec2~OPFvtM29R2629726297euoutp01h
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 09:01:24 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout1.w1.samsung.com
 20200512090124euoutp01e49ae4acac62bcad757dedd8c151dec2~OPFvtM29R2629726297euoutp01h
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
 s=mail20170921; t=1589274084;
 bh=FXPMiPZ7c5SpQvubIUWbXGofE01C1Z/1aYgGs4t5Rxs=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=BP901js1GRZjmNF1SwtMzosbRHEa1o6G1icsOMfwQ9l93Oo6cjLR8VbhmXr29uccy
 EzUKr/OsIwN4BiL5PtbIzmQlpt7pg3irpJ5bC67B1Z9FUC2y4flIg8qoLG7+nHikyy
 0dmXxNldyN/xM4k5urp6yaKbjBtAc9MHt8oSP+VI=
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTP id
 20200512090123eucas1p27aea9b6c10466074bd204422df08d17e~OPFvPqjYq3098030980eucas1p2J;
 Tue, 12 May 2020 09:01:23 +0000 (GMT)
Received: from eucas1p2.samsung.com ( [182.198.249.207]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id 75.7B.60698.3E56ABE5; Tue, 12
 May 2020 10:01:23 +0100 (BST)
Received: from eusmtrp1.samsung.com (unknown [182.198.249.138]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTPA id
 20200512090123eucas1p268736ef6e202c23e8be77c56873f415f~OPFu8MRmS2180321803eucas1p27;
 Tue, 12 May 2020 09:01:23 +0000 (GMT)
Received: from eusmgms2.samsung.com (unknown [182.198.249.180]) by
 eusmtrp1.samsung.com (KnoxPortal) with ESMTP id
 20200512090123eusmtrp120653e9289ee4b94bb938846d5e1609a~OPFu7bCGL0188101881eusmtrp1V;
 Tue, 12 May 2020 09:01:23 +0000 (GMT)
X-AuditID: cbfec7f5-a29ff7000001ed1a-44-5eba65e3ce95
Received: from eusmtip1.samsung.com ( [203.254.199.221]) by
 eusmgms2.samsung.com (EUCPMTA) with SMTP id F2.03.07950.3E56ABE5; Tue, 12
 May 2020 10:01:23 +0100 (BST)
Received: from AMDC2765.digital.local (unknown [106.120.51.73]) by
 eusmtip1.samsung.com (KnoxPortal) with ESMTPA id
 20200512090122eusmtip1628d2b6fc16039b5ec284bb2840665fe~OPFuTMsTB1181011810eusmtip1r;
 Tue, 12 May 2020 09:01:22 +0000 (GMT)
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: [PATCH v4 27/38] xen: gntdev: fix common struct sg_table related
 issues
Date: Tue, 12 May 2020 11:00:47 +0200
Message-Id: <20200512090058.14910-27-m.szyprowski@samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200512090058.14910-1-m.szyprowski@samsung.com>
X-Brightmail-Tracker: H4sIAAAAAAAAA0WSe0hTURzHOffe7V7FyW0qHlQUFoUFaT6iG4oUJt1/DCukh2gtvejQTdt8
 ZEWJZsic5pKlmJaF4PuRrWmKmo+5mWW+H7NQU6MM0fJRmlmbV+2/7+/z+37P93A4BMrv5NgR
 IkksI5UIowRcc0zTufb+0DTTEHL4z6w9ldHThVDPc6s5VF9OKkb91ShRanBlgUuVlmsRqrDF
 i8ofc6eWB6cQqnZ6mEMNNORzqcqOjzjVujjDoX6qs5HjlnTF4wpAN60WYnTd6iSHnkjXIfSL
 ojv0+OY0SmePFgO6fXEQoxvHkrj091kDRmeqywBdrR7C6KVaxwDeJXPvMCZKFM9IXX2umEcU
 pGXiMb221wum2tEkUGolB2YEJD3hbK8alwNzgk+WADiykc9lh2UA83/83t4sAdi6qeXsRJSG
 coRdFAM4qsjCdiODdXrE5OKSblA+L+eatDWZCqA+w8JkQsl0FKoqy7aOsiLPwrwNA27SGLkP
 6r89Rk2aR/pAg3YcYeucYHnN6y1uZuQTTza22iBpwGFR8wiXNZ2Ey/cHtrUVnNOpcVY7wO5s
 xXYgBcCpnkqcHRQADiTnAtblBT/0rBvThPF+B2B1gyuLT8C+ftUWhqQlHJ3fY8KoUT7Q5KAs
 5sG0e3zWvR/m6ap2a1t7+1FW07CmxLD9jloAteUKkAWc8v6XFQJQBmyZOJk4nJF5SJgEF5lQ
 LIuThLuERotrgfF3dW/qVupB88bVNkASQGDBS3N/FcLnCONlieI2AAlUYM27KzIiXpgw8QYj
 jb4sjYtiZG3AnsAEtjyPZ1+D+WS4MJaJZJgYRrqzRQgzuyTgj6/bOJA9KhuNdsKuacw3chLB
 pwu7Yo76LZx/6dWo7BAOOQc2DdUnWLgGOaw8xfUd8kSVyDtQsVcZ/CvimP8R0ZxjYoBkOPmT
 9Vho7WlnlWfn27V3vjpR3rVHb4Jawvwu5AyMnPG/7btOz4hTnM/Jb1alx3x+eCtWeTFQnHLq
 iwCTRQjdDqJSmfAfUBbuYVkDAAA=
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFlrGIsWRmVeSWpSXmKPExsVy+t/xu7qPU3fFGTz/xmbRe+4kk8XGGetZ
 LS5Ob2Wx+L9tIrPFla/v2SxWrj7KZLFgv7XFnJtGFl+uPGSy2PT4GqvF5V1z2CzWHrnLbnHw
 wxNWi+9bJjM58HmsmbeG0WPvtwUsHtu/PWD1uN99nMlj85J6j9v/HjN7TL6xnNHj8IcrLB67
 bzaweXx8eovFo2/LKkaP9Vuusnh83iQXwBulZ1OUX1qSqpCRX1xiqxRtaGGkZ2hpoWdkYqln
 aGwea2VkqqRvZ5OSmpNZllqkb5eglzG3o4+94IJ4xdyHh5kbGFcKdzFyckgImEhMvLWaqYuR
 i0NIYCmjxL3rM9kgEjISJ6c1sELYwhJ/rnWxQRR9YpQ42/yABSTBJmAo0fUWIiEi0MkoMa37
 IzuIwywwmVni2errQHM5OIQFAiSmvuYDaWARUJU48XoeM4jNK2AncevobSaIDfISqzccAItz
 AsXvz/8DtkBIoFDi0dW3LBMY+RYwMqxiFEktLc5Nzy020itOzC0uzUvXS87P3cQIjKNtx35u
 2cHY9S74EKMAB6MSD2+H0c44IdbEsuLK3EOMEhzMSiK8LZlAId6UxMqq1KL8+KLSnNTiQ4ym
 QEdNZJYSTc4HxnheSbyhqaG5haWhubG5sZmFkjhvh8DBGCGB9MSS1OzU1ILUIpg+Jg5OqQZG
 Ba+Ub1VHxD9sT/5RO83mwGInkclS+/W8P06vU82Qf/K1czKPkJ/95kkfdYOm8pj7fvRhke3b
 51ih+Ct1XVX+limzVJ5+E+hLtquTeP37shD7jeNvDHSLP2lzzxb4fSbUUNksdvM2Q86pUr1f
 y6OetU8sKHasnPnadw87xwq7hAjhv1N1PAKUWIozEg21mIuKEwEHRpQquQIAAA==
X-CMS-MailID: 20200512090123eucas1p268736ef6e202c23e8be77c56873f415f
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200512090123eucas1p268736ef6e202c23e8be77c56873f415f
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200512090123eucas1p268736ef6e202c23e8be77c56873f415f
References: <20200512085710.14688-1-m.szyprowski@samsung.com>
 <20200512090058.14910-1-m.szyprowski@samsung.com>
 <CGME20200512090123eucas1p268736ef6e202c23e8be77c56873f415f@eucas1p2.samsung.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
 linux-arm-kernel@lists.infradead.org,
 Marek Szyprowski <m.szyprowski@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
returns the number of entries created in the DMA address space. However,
the subsequent calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg()
must be made with the original number of entries passed to dma_map_sg().

struct sg_table is a common structure for describing a non-contiguous
memory buffer, widely used in the DRM and graphics subsystems. It
consists of a scatterlist with memory pages and DMA addresses (the sgl
entry), as well as the numbers of scatterlist entries: CPU pages
(the orig_nents entry) and DMA-mapped pages (the nents entry).

It turned out that it was a common mistake to misuse the nents and
orig_nents entries, calling the DMA-mapping functions with a wrong number
of entries or ignoring the number of mapped entries returned by the
dma_map_sg() function.

To avoid such issues, let's use the common dma-mapping wrappers operating
directly on struct sg_table objects and use the scatterlist page
iterators where possible. This almost always hides the references to the
nents and orig_nents entries, making the code robust, easier to follow
and copy/paste safe.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
For more information, see '[PATCH v4 00/38] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread:
https://lore.kernel.org/dri-devel/20200512085710.14688-1-m.szyprowski@samsung.com/T/
---
 drivers/xen/gntdev-dmabuf.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 75d3bb9..ba6cad8 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -247,10 +247,9 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 
 		if (sgt) {
 			if (gntdev_dmabuf_attach->dir != DMA_NONE)
-				dma_unmap_sg_attrs(attach->dev, sgt->sgl,
-						   sgt->nents,
-						   gntdev_dmabuf_attach->dir,
-						   DMA_ATTR_SKIP_CPU_SYNC);
+				dma_unmap_sgtable(attach->dev, sgt,
+						  gntdev_dmabuf_attach->dir,
+						  DMA_ATTR_SKIP_CPU_SYNC);
 			sg_free_table(sgt);
 		}
 
@@ -288,8 +287,8 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 	sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages,
 				  gntdev_dmabuf->nr_pages);
 	if (!IS_ERR(sgt)) {
-		if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-				      DMA_ATTR_SKIP_CPU_SYNC)) {
+		if (dma_map_sgtable(attach->dev, sgt, dir,
+				    DMA_ATTR_SKIP_CPU_SYNC)) {
 			sg_free_table(sgt);
 			kfree(sgt);
 			sgt = ERR_PTR(-ENOMEM);
@@ -625,7 +624,7 @@ static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
 
 	/* Now convert sgt to array of pages and check for page validity. */
 	i = 0;
-	for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+	for_each_sgtable_page(sgt, &sg_iter, 0) {
 		struct page *page = sg_page_iter_page(&sg_iter);
 		/*
 		 * Check if page is valid: this can happen if we are given
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Tue May 12 09:13:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 09:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYQyi-0007s7-Bs; Tue, 12 May 2020 09:13:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pTU5=62=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jYQyh-0007s2-Dg
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 09:13:31 +0000
X-Inumbo-ID: d8840a38-9430-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8840a38-9430-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 09:13:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id CFD8EAC6C;
 Tue, 12 May 2020 09:13:31 +0000 (UTC)
Subject: Re: [PATCH 1/2] xen/xenbus: avoid large structs and arrays on the
 stack
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 clang-built-linux@googlegroups.com
References: <20200511073151.19043-1-jgross@suse.com>
 <20200511073151.19043-2-jgross@suse.com>
 <965e1d65-3a0c-3a71-ca58-2b5c04f98bce@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <aa93722b-100a-c750-cf51-bcaf9defbd03@suse.com>
Date: Tue, 12 May 2020 11:13:27 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <965e1d65-3a0c-3a71-ca58-2b5c04f98bce@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Arnd Bergmann <arnd@arndb.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.20 20:01, Boris Ostrovsky wrote:
> On 5/11/20 3:31 AM, Juergen Gross wrote:
>>   
>>   static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
> 
> 
> I wonder whether we can drop valloc/vfree from xenbus_ring_ops' names.

I can do that.

> 
> 
>> +				      struct map_ring_valloc *info,
>>   				      grant_ref_t *gnt_ref,
>>   				      unsigned int nr_grefs,
>>   				      void **vaddr)
>>   {
>> -	struct xenbus_map_node *node;
>> +	struct xenbus_map_node *node = info->node;
>>   	int err;
>>   	void *addr;
>>   	bool leaked = false;
>> -	struct map_ring_valloc_hvm info = {
>> -		.idx = 0,
>> -	};
>>   	unsigned int nr_pages = XENBUS_PAGES(nr_grefs);
>>   
>> -	if (nr_grefs > XENBUS_MAX_RING_GRANTS)
>> -		return -EINVAL;
>> -
>> -	*vaddr = NULL;
>> -
>> -	node = kzalloc(sizeof(*node), GFP_KERNEL);
>> -	if (!node)
>> -		return -ENOMEM;
>> -
>>   	err = alloc_xenballooned_pages(nr_pages, node->hvm.pages);
>>   	if (err)
>>   		goto out_err;
>>   
>>   	gnttab_foreach_grant(node->hvm.pages, nr_grefs,
>>   			     xenbus_map_ring_setup_grant_hvm,
>> -			     &info);
>> +			     info);
>>   
>>   	err = __xenbus_map_ring(dev, gnt_ref, nr_grefs, node->handles,
>> -				info.phys_addrs, GNTMAP_host_map, &leaked);
>> +				info, GNTMAP_host_map, &leaked);
>>   	node->nr_handles = nr_grefs;
>>   
>>   	if (err)
>> @@ -641,11 +654,13 @@ static int xenbus_map_ring_valloc_hvm(struct xenbus_device *dev,
>>   	spin_unlock(&xenbus_valloc_lock);
>>   
>>   	*vaddr = addr;
>> +	info->node = NULL;
> 
> 
> Is this so that xenbus_map_ring_valloc() doesn't free it accidentally?

Yes.


Juergen



From xen-devel-bounces@lists.xenproject.org Tue May 12 10:00:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 10:00:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYRiB-0003cM-2o; Tue, 12 May 2020 10:00:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYRiA-0003cH-Bk
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 10:00:30 +0000
X-Inumbo-ID: 68a9c2e6-9437-11ea-a289-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68a9c2e6-9437-11ea-a289-12813bfff9fa;
 Tue, 12 May 2020 10:00:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EEC04AA55;
 Tue, 12 May 2020 10:00:30 +0000 (UTC)
To: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: odd pte_mfn_to_pfn() behavior
Message-ID: <b4c54557-e91a-adf4-8506-02dcc2b81f63@suse.com>
Date: Tue, 12 May 2020 12:00:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jürgen, Boris,

in figuring out why a W+X mapping warning has magically disappeared
in 5.6 (I now at least know that it was for the shared info fixmap
page), besides finding two logic errors in 2ae27137b2db ("x86: mm:
convert dump_pagetables to use walk_page_range") - one in the way
st->prot_levels[] gets maintained, the other the truncation of
64-bit PAE PTEs to 32 bits when calling note_page() - I've also
noticed that the observed (in note_page()) PTE for the shared info
page is now 0x62 (not present, PFN 0). Up to 5.5, only the PTE flags
were involved, and pte_flags() behaves better than pte_val() for
this particular PTE: pte_mfn_to_pfn() zaps _PAGE_PRESENT and the
frame number when it can't translate the MFN to a PFN.

Presumably this behavior is acceptable in most cases, but it
clearly is wrong here and would - even with the aforementioned bugs
addressed - still prevent the W+X warning from reappearing. I have
no idea whether it would be acceptable for the generic framework to
be changed to use pte_flags() again instead of pte_val().

Having looked at this code (and the prior logic) I have to admit
that I haven't yet been able to figure out why I've seen these W+X
mapping warnings for the shared info page only ever on 32-bit.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 10:08:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 10:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYRqF-0003p8-V6; Tue, 12 May 2020 10:08:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WwR0=62=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYRqE-0003p3-UX
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 10:08:51 +0000
X-Inumbo-ID: 92f99548-9438-11ea-a289-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92f99548-9438-11ea-a289-12813bfff9fa;
 Tue, 12 May 2020 10:08:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=oJxlv/d0KiHi8eDk6YvTUJWuKtgj0TZOw+6y6VZF2Uw=; b=c6PAJHo5et+fk6tP2VBR/IlNpJ
 NbN06sit/CeYnZuBSAUuSf6/HdSSvnIXdDspU+ulXd0y3UhXlnAU5FnP0npL0JiZ8MgrmktaymFN8
 +z7tbNSj4Dgq7CpP+zNL/fXRoVRWD8Uw6dECFiA81VE5Iy69K+NmMIBFm53y0Jo19NT4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYRqB-00033B-Vi; Tue, 12 May 2020 10:08:47 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYRqB-0002RU-Nu; Tue, 12 May 2020 10:08:47 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
 <24249.34804.568523.847077@mariner.uk.xensource.com>
 <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8dc17648-c669-eec7-2ecd-81245fa8d517@xen.org>
Date: Tue, 12 May 2020 11:08:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 12/05/2020 08:18, Jan Beulich wrote:
> On 11.05.2020 19:14, Ian Jackson wrote:
>> Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>> I'm trying to make the point that your patch, to me, looks to be
>>> trying to overcome a problem for which we have had a solution all
>>> the time.
>>
>> Thanks for this clear statement of your objection.  I'm afraid I don't
>> agree.  Even though .config exists (and is even used by osstest, so I
>> know about it) I don't think it is as good as having it in
>> menuconfig.
> 
> But you realize that my objection is (was) more towards the reasoning
> behind the change, than towards the change itself. If, as a community,
> we decide to undo what we might now call a mistake, and if we're ready
> to deal with the consequences, so be it.

Would you mind explaining the fallout you expect from this patch? Are 
you worried more people may contact security@xen.org for non-security issues?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 12 10:15:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 10:15:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYRwy-0004gk-Pt; Tue, 12 May 2020 10:15:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYRwx-0004gf-15
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 10:15:47 +0000
X-Inumbo-ID: 8b42f46b-9439-11ea-a28a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b42f46b-9439-11ea-a28a-12813bfff9fa;
 Tue, 12 May 2020 10:15:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8B2F7AC61;
 Tue, 12 May 2020 10:15:47 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
 <24249.34804.568523.847077@mariner.uk.xensource.com>
 <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
 <8dc17648-c669-eec7-2ecd-81245fa8d517@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cabaa3f1-5eca-c7a5-e819-1c7487c4add1@suse.com>
Date: Tue, 12 May 2020 12:15:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8dc17648-c669-eec7-2ecd-81245fa8d517@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 12:08, Julien Grall wrote:
> On 12/05/2020 08:18, Jan Beulich wrote:
>> On 11.05.2020 19:14, Ian Jackson wrote:
>>> Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>> I'm trying to make the point that your patch, to me, looks to be
>>>> trying to overcome a problem for which we have had a solution all
>>>> the time.
>>>
>>> Thanks for this clear statement of your objection.  I'm afraid I don't
>>> agree.  Even though .config exists (and is even used by osstest, so I
>>> know about it) I don't think it is as good as having it in
>>> menuconfig.
>>
>> But you realize that my objection is (was) more towards the reasoning
>> behind the change, than towards the change itself. If, as a community,
>> we decide to undo what we might now call a mistake, and if we're ready
>> to deal with the consequences, so be it.
> 
> Would you mind explaining the fallout you expect from this patch? Are 
> you worried more people may contact security@xen.org for non-security issues?

That's one possible thing that might happen. But even more generally,
the likelihood will increase that people report issues without noticing
that those issues depend on their choice of configuration. We'll have
to both take this into consideration and ask back for the specific
.config they've used.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 10:58:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 10:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYScd-00080V-0I; Tue, 12 May 2020 10:58:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYScb-00080Q-2A
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 10:58:49 +0000
X-Inumbo-ID: 8e341eaa-943f-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e341eaa-943f-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 10:58:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DBFF4ABB2;
 Tue, 12 May 2020 10:58:49 +0000 (UTC)
To: Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: use of "stat -"
Message-ID: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
Date: Tue, 12 May 2020 12:58:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

now that I've been able to do a little bit of work from the office
again, I've run into a regression from b72682c602b8 "scripts: Use
stat to check lock claim". On one of my older machines I've noticed
guests wouldn't launch anymore, which I've tracked down to the
system having an old stat on it. Yes, the commit says the needed
behavior has been available since 2009, but please let's not forget
that we continue to support building with tool chains from about
2007.

Putting in place and using newer tool chain versions without
touching the base distro is pretty straightforward. Replacing the
coreutils package isn't, and there's not even an override available
by which one could point at an alternative one. Hence I think
bumping the minimum required versions of basic tools should be
done even more carefully than bumping required tool chain versions
(which we've not dared to do in years). After having things
successfully working again with a full revert, I'm now going to
experiment with adapting behavior to stat's capabilities. Would
something like that be acceptable (if it works out)?
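
Such a capability check might look like this (an illustrative sketch
only, not the actual locking script; the variable name and the fallback
comments are hypothetical):

```shell
# Probe whether the installed stat(1) accepts "-", meaning the file
# open on standard input - the behavior b72682c602b8 relies on, and
# which per its commit message has been in coreutils since about 2009.
if stat -L -c '%D.%i' - </dev/null >/dev/null 2>&1; then
    stat_dash=yes   # modern stat: the new lock check can be used as-is
else
    stat_dash=no    # old stat: would need the pre-b72682c602b8 method
fi
echo "stat '-' supported: $stat_dash"
```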

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 11:00:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 11:00:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYSeC-0000MR-CG; Tue, 12 May 2020 11:00:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WwR0=62=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYSeA-0000MM-VX
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 11:00:27 +0000
X-Inumbo-ID: c8ed7528-943f-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8ed7528-943f-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 11:00:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NLaF4lxv8ZeQ+uHNhOEK7cs0zYlDraOV3q7r2ycHm7w=; b=c8qaO9cr4pGeFPXaNF/rOdgoX4
 RGtpdESDY01JiQ0FXTVDbsCRzfvTO5ipWnfydPDO61RChQUyZCwupLPj24urdhdbHmxt9JLPzIdQo
 Rw2tvdcyCMMgx7gZExxF8qFrTRDeJCev2Taie3y5ao1n8TB6WQ8iH9mq6kZUevuljCec=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYSe8-000487-Ob; Tue, 12 May 2020 11:00:24 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYSe8-0005BQ-HF; Tue, 12 May 2020 11:00:24 +0000
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Jan Beulich <jbeulich@suse.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
 <24249.34804.568523.847077@mariner.uk.xensource.com>
 <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
 <8dc17648-c669-eec7-2ecd-81245fa8d517@xen.org>
 <cabaa3f1-5eca-c7a5-e819-1c7487c4add1@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <71e63752-b6b0-9578-e79a-0bef30a7c50a@xen.org>
Date: Tue, 12 May 2020 12:00:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <cabaa3f1-5eca-c7a5-e819-1c7487c4add1@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 12/05/2020 11:15, Jan Beulich wrote:
> On 12.05.2020 12:08, Julien Grall wrote:
>> On 12/05/2020 08:18, Jan Beulich wrote:
>>> On 11.05.2020 19:14, Ian Jackson wrote:
>>>> Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>> I'm trying to make the point that your patch, to me, looks to be
>>>>> trying to overcome a problem for which we have had a solution all
>>>>> the time.
>>>>
>>>> Thanks for this clear statement of your objection.  I'm afraid I don't
>>>> agree.  Even though .config exists (and is even used by osstest, so I
>>>> know about it) I don't think it is as good as having it in
>>>> menuconfig.
>>>
>>> But you realize that my objection is (was) more towards the reasoning
>>> behind the change, than towards the change itself. If, as a community,
>>> we decide to undo what we might now call a mistake, and if we're ready
>>> to deal with the consequences, so be it.
>>
>> Would you mind explaining the fallout you expect from this patch? Are
>> you worried more people may contact security@xen.org for non-security issues?
> 
> That's one possible thing that might happen. But even more generally,
> the likelihood will increase that people report issues without noticing
> that those issues depend on their choice of configuration.
I agree that you are going to get more reports because there are more 
users trying new things. So inevitably, you will get more incomplete 
reports. This is always the downside of allowing more flexibility.

But we also need to look at the upside. I can see 2 advantages:
     1) It will be easier to try upcoming features (e.g. Argo). The more 
testing and input, the more chance a feature will be a success.
     2) It will be easier to tailor Xen (such as a built-in command line).

In both cases, you make Xen more compelling because you allow people to 
experiment and make it more flexible. IMHO, this is one of the best ways 
to attract users and possible new contributors/reviewers to the Xen 
community.

>  We'll
> have to both take this into consideration and ask back for the
> specific .config they've used.
Correct me if I am wrong, but this is not specific to EXPERT mode. 
You can already select different options that will affect the behavior 
of the hypervisor. For instance, on x86, you can disable PV guest 
support. How do you figure that out today without asking for the .config?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 12 11:03:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 11:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYSh2-0000Ul-Rb; Tue, 12 May 2020 11:03:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYSh1-0000Ue-Oh
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 11:03:23 +0000
X-Inumbo-ID: 31e662e2-9440-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31e662e2-9440-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 11:03:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6DE69ABB2;
 Tue, 12 May 2020 11:03:24 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: Julien Grall <julien@xen.org>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
 <24249.34804.568523.847077@mariner.uk.xensource.com>
 <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
 <8dc17648-c669-eec7-2ecd-81245fa8d517@xen.org>
 <cabaa3f1-5eca-c7a5-e819-1c7487c4add1@suse.com>
 <71e63752-b6b0-9578-e79a-0bef30a7c50a@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b292694a-cc82-f844-6312-f2b13f7a55ba@suse.com>
Date: Tue, 12 May 2020 13:03:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <71e63752-b6b0-9578-e79a-0bef30a7c50a@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 13:00, Julien Grall wrote:
> Hi Jan,
> 
> On 12/05/2020 11:15, Jan Beulich wrote:
>> On 12.05.2020 12:08, Julien Grall wrote:
>>> On 12/05/2020 08:18, Jan Beulich wrote:
>>>> On 11.05.2020 19:14, Ian Jackson wrote:
>>>>> Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>> I'm trying to make the point that your patch, to me, looks to be
>>>>>> trying to overcome a problem for which we have had a solution all
>>>>>> the time.
>>>>>
>>>>> Thanks for this clear statement of your objection.  I'm afraid I don't
>>>>> agree.  Even though .config exists (and is even used by osstest, so I
>>>>> know about it) I don't think it is as good as having it in
>>>>> menuconfig.
>>>>
>>>> But you realize that my objection is (was) more towards the reasoning
>>>> behind the change, than towards the change itself. If, as a community,
>>>> we decide to undo what we might now call a mistake, and if we're ready
>>>> to deal with the consequences, so be it.
>>>
>>> Would you mind explaining the fallout you expect from this patch? Are
>>> you worried more people may contact security@xen.org for non-security issues?
>>
>> That's one possible thing that might happen. But even more generally,
>> the likelihood will increase that people report issues without noticing
>> that those issues depend on their choice of configuration.
> I agree that you are going to get more reports because there are more 
> users trying new things. So inevitably, you will get more incomplete 
> reports. This is always the downside of allowing more flexibility.
> 
> But we also need to look at the upside. I can see 2 advantages:
>      1) It will be easier to try upcoming features (e.g. Argo). The more 
> testing and input, the more chance a feature will be a success.
>      2) It will be easier to tailor Xen (such as a built-in command line).
> 
> In both cases, you make Xen more compelling because you allow people to 
> experiment and make it more flexible. IMHO, this is one of the best ways 
> to attract users and possible new contributors/reviewers to the Xen 
> community.

I'm fully aware of the upsides.

>>  We'll
>> have to both take this into consideration and ask back for the
>> specific .config they've used.
> Correct me if I am wrong, but this is not really specific to EXPERT mode. 
> You can already select different options that will affect the behavior 
> of the hypervisor. For instance, on x86, you can disable PV guest 
> support. How do you figure that out today without asking for the .config?

I didn't say this is a new problem; I indicated this is going to
become more likely to be one.

Jan
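[Editorial sketch] The .config dependence discussed above lends itself to a quick mechanical check when triaging a report. The snippet below uses a mock .config written on the fly; the file contents are illustrative assumptions, not output from a real Xen build:

```shell
# Sketch: report which optional features a build was configured with,
# using a mock .config (contents here are assumptions for illustration).
config="$(mktemp)"
cat > "$config" <<'EOF'
CONFIG_PV=y
CONFIG_ARGO=y
# CONFIG_EXPERT is not set
EOF

# A useful bug report would state e.g. whether PV guest support was built in:
pv=$(grep -c '^CONFIG_PV=y' "$config" || true)
expert=$(grep -c '^CONFIG_EXPERT=y' "$config" || true)
echo "PV=$pv EXPERT=$expert"   # prints: PV=1 EXPERT=0
rm -f "$config"
```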


From xen-devel-bounces@lists.xenproject.org Tue May 12 11:05:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 11:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYSjN-0000eZ-9D; Tue, 12 May 2020 11:05:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDyx=62=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYSjL-0000eT-Uc
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 11:05:48 +0000
X-Inumbo-ID: 87471a9c-9440-11ea-a28b-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 87471a9c-9440-11ea-a28b-12813bfff9fa;
 Tue, 12 May 2020 11:05:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589281546;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=bsvc5OkKSrVvGeEPB+neIoYMqkJLthxyQFEIp5Ci5JI=;
 b=QHKxj8Lvfqau7wMd+3kbWFHz1v33pP8ief7Md2cPvr3twVJraJEjpkn3
 7DSNxtWXe7oDaWqbI3PHXierkN4tm8/MOm8EkL3Xx0rEYgxOgS+kQx+wB
 cyOkqxzyYmoI6QG5P4Hy9H1so4g2V43+TpxCIVpg5GsBkyiWcxr17dcQd 4=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: VW/hK7HEe6VpHT7pchFaIkU/0wnsQ+4XhPam97BV0iT7hyxbbJVMIBMbVspOBXhvXnoBPazCL6
 IBV4myy+0iqJOlWqSy7Wc8gIW2QpOQ5mCZsa5BASETbD8O7yHI70efGNFlIr473BpHxozPy8OO
 eRdCjV4sb3a2kRI3SqAU54NZ2bOjHQ6TwG7mzNz+eS5yXMo3P+e/GIXq65+mhFlGXm3QrxFATa
 dgdVKArtgrx1PxKxRhE4nxT8sRpV2uQQ9KkDNj5KQEFJ4aUpbtGu5Q90r6mDLgJHGh8XoW80E5
 +s0=
X-SBRS: 2.7
X-MesageID: 17566535
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,383,1583211600"; d="scan'208";a="17566535"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
Thread-Topic: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from
 the menuconfig directly
Thread-Index: AQHWHvtEfLlMb5EF90y7GzIGxHVxJ6iRncgAgAYL5QCAABeggIAC9dcAgAILiACABg54AIAAAuMAgAAExoCAAAK+AIAAASKAgAAA7oCAADIoAIAA69aAgAAvjYCAAAH0AIAADHgAgAAA1YCAAACngA==
Date: Tue, 12 May 2020 11:05:41 +0000
Message-ID: <8B924F4F-197A-4951-9A14-B15164D3BB27@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
 <24249.34804.568523.847077@mariner.uk.xensource.com>
 <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
 <8dc17648-c669-eec7-2ecd-81245fa8d517@xen.org>
 <cabaa3f1-5eca-c7a5-e819-1c7487c4add1@suse.com>
 <71e63752-b6b0-9578-e79a-0bef30a7c50a@xen.org>
 <b292694a-cc82-f844-6312-f2b13f7a55ba@suse.com>
In-Reply-To: <b292694a-cc82-f844-6312-f2b13f7a55ba@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <D36C0768583CD94E91D0B30222CCB518@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On May 12, 2020, at 12:03 PM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> 
> On 12.05.2020 13:00, Julien Grall wrote:
>> Hi Jan,
>> 
>> On 12/05/2020 11:15, Jan Beulich wrote:
>>> On 12.05.2020 12:08, Julien Grall wrote:
>>>> On 12/05/2020 08:18, Jan Beulich wrote:
>>>>> On 11.05.2020 19:14, Ian Jackson wrote:
>>>>>> Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>> I'm trying to make the point that your patch, to me, looks to be
>>>>>>> trying to overcome a problem for which we have had a solution all
>>>>>>> the time.
>>>>>> 
>>>>>> Thanks for this clear statement of your objection.  I'm afraid I don't
>>>>>> agree.  Even though .config exists (and is even used by osstest, so I
>>>>>> know about it) I don't think it is as good as having it in
>>>>>> menuconfig.
>>>>> 
>>>>> But you realize that my objection is (was) more towards the reasoning
>>>>> behind the change, than towards the change itself. If, as a community,
>>>>> we decide to undo what we might now call a mistake, and if we're ready
>>>>> to deal with the consequences, so be it.
>>>> 
>>>> Would you mind explaining the fallout you expect from this patch? Are
>>>> you worried more people may contact security@xen.org for non-security issues?
>>> 
>>> That's one possible thing that might happen. But even more generally
>>> the likelihood will increase that people report issues without paying
>>> attention to whether they depend on their choice of configuration.
>> I agree that you are going to get more reports because there are more 
>> users trying new things. So inevitably, you will get more incomplete 
>> reports. This is always the downside of allowing more flexibility.
>> 
>> But we also need to look at the upside. I can see 2 advantages:
>>     1) It will be easier to try upcoming features (e.g. Argo). The more 
>> testing and input, the better the chance a feature will be a success.
>>     2) It will be easier to tailor Xen (such as a built-in command line).
>> 
>> In both cases, you make Xen more compelling because you allow 
>> experimentation and make it more flexible. IMHO, this is one of the best ways 
>> to attract users and possible new contributors/reviewers to the Xen community.
> 
> I'm fully aware of the upsides.
> 
>>> We'll
>>> have to both take this into consideration and ask back for the
>>> specific .config they've used.
>> Correct me if I am wrong, but this is not really specific to EXPERT mode. 
>> You can already select different options that will affect the behavior 
>> of the hypervisor. For instance, on x86, you can disable PV guest 
>> support. How do you figure that out today without asking for the .config?
> 
> I didn't say this is a new problem; I indicated this is going to
> become more likely to be one.

I feel like there’s a misunderstanding here — Jan, are you simply explaining yourself and/or making sure that we all understand the implications of our choice?  Or are you arguing against acceptance in an implicitly Nack-ing manner?

I understood Jan to be doing the former; and that, as such, with Ian’s ack, this patch (with the modified commit message) can go in.

 -George


From xen-devel-bounces@lists.xenproject.org Tue May 12 11:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 11:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYSvV-0001b2-Cs; Tue, 12 May 2020 11:18:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYSvU-0001ax-5d
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 11:18:20 +0000
X-Inumbo-ID: 455ead51-9442-11ea-a28d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 455ead51-9442-11ea-a28d-12813bfff9fa;
 Tue, 12 May 2020 11:18:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BDFD2ABB2;
 Tue, 12 May 2020 11:18:17 +0000 (UTC)
Subject: Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the
 menuconfig directly
To: George Dunlap <George.Dunlap@citrix.com>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-3-julien@xen.org>
 <3a4ec020-f626-031e-73a6-b2eee97ab9e8@suse.com>
 <123BE55A-AACB-4FE3-94E2-1559ED73DD09@citrix.com>
 <24240.3047.877655.345428@mariner.uk.xensource.com>
 <1d8eb504-51e9-b8e7-f1af-862760f0f15d@xen.org>
 <24244.16076.627203.282982@mariner.uk.xensource.com>
 <09d729ad-58a7-1f4b-c779-5fd81d7009a4@xen.org>
 <4017f7f0-744b-70ff-8fa4-b33c53a8b9e2@suse.com>
 <ca115637-5e84-2990-65e8-e2f04ec0ddb5@xen.org>
 <803876ce-503a-2e97-f310-0413e515970b@suse.com>
 <bbc12f81-7854-ad72-63ee-3ec94b378bf9@xen.org>
 <bf6a9ed3-fd7e-c588-9f72-8084dab1f556@suse.com>
 <24249.34804.568523.847077@mariner.uk.xensource.com>
 <88487e23-88f7-2ce8-8292-7e97ed8902c5@suse.com>
 <8dc17648-c669-eec7-2ecd-81245fa8d517@xen.org>
 <cabaa3f1-5eca-c7a5-e819-1c7487c4add1@suse.com>
 <71e63752-b6b0-9578-e79a-0bef30a7c50a@xen.org>
 <b292694a-cc82-f844-6312-f2b13f7a55ba@suse.com>
 <8B924F4F-197A-4951-9A14-B15164D3BB27@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c50b3208-2e31-4f3c-fb15-89c46e1b0ea4@suse.com>
Date: Tue, 12 May 2020 13:18:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8B924F4F-197A-4951-9A14-B15164D3BB27@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 13:05, George Dunlap wrote:
> 
> 
>> On May 12, 2020, at 12:03 PM, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>>
>> On 12.05.2020 13:00, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 12/05/2020 11:15, Jan Beulich wrote:
>>>> On 12.05.2020 12:08, Julien Grall wrote:
>>>>> On 12/05/2020 08:18, Jan Beulich wrote:
>>>>>> On 11.05.2020 19:14, Ian Jackson wrote:
>>>>>>> Jan Beulich writes ("Re: [PATCH RESEND 2/2] xen: Allow EXPERT mode to be selected from the menuconfig directly"):
>>>>>>>> I'm trying to make the point that your patch, to me, looks to be
>>>>>>>> trying to overcome a problem for which we have had a solution all
>>>>>>>> the time.
>>>>>>>
>>>>>>> Thanks for this clear statement of your objection.  I'm afraid I don't
>>>>>>> agree.  Even though .config exists (and is even used by osstest, so I
>>>>>>> know about it) I don't think it is as good as having it in
>>>>>>> menuconfig.
>>>>>>
>>>>>> But you realize that my objection is (was) more towards the reasoning
>>>>>> behind the change, than towards the change itself. If, as a community,
>>>>>> we decide to undo what we might now call a mistake, and if we're ready
>>>>>> to deal with the consequences, so be it.
>>>>>
>>>>> Would you mind explaining the fallout you expect from this patch? Are
>>>>> you worried more people may contact security@xen.org for non-security issues?
>>>>
>>>> That's one possible thing that might happen. But even more generally
>>>> the likelihood will increase that people report issues without paying
>>>> attention to whether they depend on their choice of configuration.
>>> I agree that you are going to get more reports because there are more 
>>> users trying new things. So inevitably, you will get more incomplete 
>>> reports. This is always the downside of allowing more flexibility.
>>>
>>> But we also need to look at the upside. I can see 2 advantages:
>>>     1) It will be easier to try upcoming features (e.g. Argo). The more 
>>> testing and input, the better the chance a feature will be a success.
>>>     2) It will be easier to tailor Xen (such as a built-in command line).
>>>
>>> In both cases, you make Xen more compelling because you allow 
>>> experimentation and make it more flexible. IMHO, this is one of the best ways 
>>> to attract users and possible new contributors/reviewers to the Xen community.
>>
>> I'm fully aware of the upsides.
>>
>>>> We'll
>>>> have to both take this into consideration and ask back for the
>>>> specific .config they've used.
>>> Correct me if I am wrong, but this is not really specific to EXPERT mode. 
>>> You can already select different options that will affect the behavior 
>>> of the hypervisor. For instance, on x86, you can disable PV guest 
>>> support. How do you figure that out today without asking for the .config?
>>
>> I didn't say this is a new problem; I indicated this is going to
>> become more likely to be one.
> 
> I feel like there’s a misunderstanding here — Jan, are you simply
> explaining yourself and/or making sure that we all understand the
> implications of our choice?  Or are you arguing against acceptance
> in an implicitly Nack-ing manner?

The former - it would have seemed impolite if I hadn't replied to
Julien's question.

> I understood Jan to be doing the former; and that, as such, with
> Ian’s ack, this patch (with the modified commit message) can go in.

Indeed. Looks like I'm the only one concerned about the extra
effort anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 11:37:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 11:37:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYTEB-0003Hh-4m; Tue, 12 May 2020 11:37:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WwR0=62=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYTEA-0003Hc-Ic
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 11:37:38 +0000
X-Inumbo-ID: fa9e830a-9444-11ea-a28f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa9e830a-9444-11ea-a28f-12813bfff9fa;
 Tue, 12 May 2020 11:37:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=F/354qPX+xUwC4ROKvWcSQki1iTKgf+VhKci2ATn1vE=; b=2H2hSV7363YUakpCJ91QlaTyvz
 qMz7qsNeWT9UPs6xWETXMq1DvkBDYVHzaW6WPdEc6h3RJuBwGxDqW98m9OtHjYXFIp/GdSNQPt/8x
 XKJI0zlnJiud55nMmIe0amrU3LHGs8765BD3dyYOdbQoKcLTpF1AmT4CM9FYHg9dv5vs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYTDz-0004tX-OP; Tue, 12 May 2020 11:37:27 +0000
Received: from 54-240-197-231.amazon.com ([54.240.197.231]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYTDz-00074G-C8; Tue, 12 May 2020 11:37:27 +0000
Subject: Re: [PATCH RESEND 1/2] xen/Kconfig: define EXPERT a bool rather than
 a string
To: xen-devel@lists.xenproject.org
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-2-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <cf8cb895-9eed-0bbd-37b8-73d82392c619@xen.org>
Date: Tue, 12 May 2020 12:37:24 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200430142548.23751-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>,
 Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

It would be good to have an ack for the small Arm changes. I will wait 
until tomorrow morning (UK time) and commit it if I see no objections.

Cheers,

On 30/04/2020 15:25, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Since commit f80fe2b34f08 "xen: Update Kconfig to Linux v5.4" EXPERT
> can only have two values (enabled or disabled). So switch from a string
> to a bool.
> 
> Take the opportunity to replace all EXPERT = "y" checks with plain EXPERT.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   xen/Kconfig                     |  3 +--
>   xen/Kconfig.debug               |  2 +-
>   xen/arch/arm/Kconfig            | 10 +++++-----
>   xen/arch/x86/Kconfig            |  6 +++---
>   xen/common/Kconfig              | 14 +++++++-------
>   xen/common/sched/Kconfig        |  2 +-
>   xen/drivers/passthrough/Kconfig |  2 +-
>   7 files changed, 19 insertions(+), 20 deletions(-)
> 
> diff --git a/xen/Kconfig b/xen/Kconfig
> index 073042f46730..120b5f412993 100644
> --- a/xen/Kconfig
> +++ b/xen/Kconfig
> @@ -35,8 +35,7 @@ config DEFCONFIG_LIST
>   	default ARCH_DEFCONFIG
>   
>   config EXPERT
> -	string
> -	default y if "$(XEN_CONFIG_EXPERT)" = "y"
> +	def_bool y if "$(XEN_CONFIG_EXPERT)" = "y"
>   
>   config LTO
>   	bool "Link Time Optimisation"
> diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
> index ee6ee33b69be..fad3050d4f7b 100644
> --- a/xen/Kconfig.debug
> +++ b/xen/Kconfig.debug
> @@ -11,7 +11,7 @@ config DEBUG
>   
>   	  You probably want to say 'N' here.
>   
> -if DEBUG || EXPERT = "y"
> +if DEBUG || EXPERT
>   
>   config CRASH_DEBUG
>   	bool "Crash Debugging Support"
> diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> index d51f66072e2e..6a43576dac5e 100644
> --- a/xen/arch/arm/Kconfig
> +++ b/xen/arch/arm/Kconfig
> @@ -33,7 +33,7 @@ source "arch/Kconfig"
>   
>   config ACPI
>   	bool
> -	prompt "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT = "y"
> +	prompt "ACPI (Advanced Configuration and Power Interface) Support" if EXPERT
>   	depends on ARM_64
>   	---help---
>   
> @@ -51,7 +51,7 @@ config GICV3
>   
>   config HAS_ITS
>           bool
> -        prompt "GICv3 ITS MSI controller support" if EXPERT = "y"
> +        prompt "GICv3 ITS MSI controller support" if EXPERT
>           depends on GICV3 && !NEW_VGIC
>   
>   config HVM
> @@ -81,7 +81,7 @@ config SBSA_VUART_CONSOLE
>   	  SBSA Generic UART implements a subset of ARM PL011 UART.
>   
>   config ARM_SSBD
> -	bool "Speculative Store Bypass Disable" if EXPERT = "y"
> +	bool "Speculative Store Bypass Disable" if EXPERT
>   	depends on HAS_ALTERNATIVE
>   	default y
>   	help
> @@ -91,7 +91,7 @@ config ARM_SSBD
>   	  If unsure, say Y.
>   
>   config HARDEN_BRANCH_PREDICTOR
> -	bool "Harden the branch predictor against aliasing attacks" if EXPERT = "y"
> +	bool "Harden the branch predictor against aliasing attacks" if EXPERT
>   	default y
>   	help
>   	  Speculation attacks against some high-performance processors rely on
> @@ -108,7 +108,7 @@ config HARDEN_BRANCH_PREDICTOR
>   	  If unsure, say Y.
>   
>   config TEE
> -	bool "Enable TEE mediators support" if EXPERT = "y"
> +	bool "Enable TEE mediators support" if EXPERT
>   	default n
>   	help
>   	  This option enables generic TEE mediators support. It allows guests
> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index a69be983d6f3..3237cb2f31f4 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -112,7 +112,7 @@ config BIGMEM
>   	  If unsure, say N.
>   
>   config HVM_FEP
> -	bool "HVM Forced Emulation Prefix support" if EXPERT = "y"
> +	bool "HVM Forced Emulation Prefix support" if EXPERT
>   	default DEBUG
>   	depends on HVM
>   	---help---
> @@ -132,7 +132,7 @@ config HVM_FEP
>   
>   config TBOOT
>   	def_bool y
> -	prompt "Xen tboot support" if EXPERT = "y"
> +	prompt "Xen tboot support" if EXPERT
>   	select CRYPTO
>   	---help---
>   	  Allows support for Trusted Boot using the Intel(R) Trusted Execution
> @@ -217,7 +217,7 @@ config HYPERV_GUEST
>   endif
>   
>   config MEM_SHARING
> -	bool "Xen memory sharing support" if EXPERT = "y"
> +	bool "Xen memory sharing support" if EXPERT
>   	depends on HVM
>   
>   endmenu
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index a6914fcae98b..fe9b41f72128 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -12,7 +12,7 @@ config CORE_PARKING
>   	bool
>   
>   config GRANT_TABLE
> -	bool "Grant table support" if EXPERT = "y"
> +	bool "Grant table support" if EXPERT
>   	default y
>   	---help---
>   	  Grant table provides a generic mechanism to memory sharing
> @@ -128,7 +128,7 @@ config KEXEC
>   	  If unsure, say Y.
>   
>   config EFI_SET_VIRTUAL_ADDRESS_MAP
> -    bool "EFI: call SetVirtualAddressMap()" if EXPERT = "y"
> +    bool "EFI: call SetVirtualAddressMap()" if EXPERT
>       ---help---
>         Call EFI SetVirtualAddressMap() runtime service to setup memory map for
>         further runtime services. According to UEFI spec, it isn't strictly
> @@ -139,7 +139,7 @@ config EFI_SET_VIRTUAL_ADDRESS_MAP
>   
>   config XENOPROF
>   	def_bool y
> -	prompt "Xen Oprofile Support" if EXPERT = "y"
> +	prompt "Xen Oprofile Support" if EXPERT
>   	depends on X86
>   	---help---
>   	  Xen OProfile (Xenoprof) is a system-wide profiler for Xen virtual
> @@ -176,7 +176,7 @@ config XSM_FLASK
>   
>   config XSM_FLASK_AVC_STATS
>   	def_bool y
> -	prompt "Maintain statistics on the FLASK access vector cache" if EXPERT = "y"
> +	prompt "Maintain statistics on the FLASK access vector cache" if EXPERT
>   	depends on XSM_FLASK
>   	---help---
>   	  Maintain counters on the access vector cache that can be viewed using
> @@ -249,7 +249,7 @@ config LATE_HWDOM
>   	  If unsure, say N.
>   
>   config ARGO
> -	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT = "y"
> +	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT
>   	---help---
>   	  Enables a hypercall for domains to ask the hypervisor to perform
>   	  data transfer of messages between domains.
> @@ -321,7 +321,7 @@ config SUPPRESS_DUPLICATE_SYMBOL_WARNINGS
>   	  build becoming overly verbose.
>   
>   config CMDLINE
> -	string "Built-in hypervisor command string" if EXPERT = "y"
> +	string "Built-in hypervisor command string" if EXPERT
>   	default ""
>   	---help---
>   	  Enter arguments here that should be compiled into the hypervisor
> @@ -354,7 +354,7 @@ config DOM0_MEM
>   	  Leave empty if you are not sure what to specify.
>   
>   config TRACEBUFFER
> -	bool "Enable tracing infrastructure" if EXPERT = "y"
> +	bool "Enable tracing infrastructure" if EXPERT
>   	default y
>   	---help---
>   	  Enable tracing infrastructure and pre-defined tracepoints within Xen.
> diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig
> index 883ac87cab65..61231aacaa1c 100644
> --- a/xen/common/sched/Kconfig
> +++ b/xen/common/sched/Kconfig
> @@ -1,5 +1,5 @@
>   menu "Schedulers"
> -	visible if EXPERT = "y"
> +	visible if EXPERT
>   
>   config SCHED_CREDIT
>   	bool "Credit scheduler support"
> diff --git a/xen/drivers/passthrough/Kconfig b/xen/drivers/passthrough/Kconfig
> index e7e62ccd63c3..73f4ad89ecbc 100644
> --- a/xen/drivers/passthrough/Kconfig
> +++ b/xen/drivers/passthrough/Kconfig
> @@ -14,7 +14,7 @@ config ARM_SMMU
>   	  ARM SMMU architecture.
>   
>   config IPMMU_VMSA
> -	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs" if EXPERT = "y"
> +	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs" if EXPERT
>   	depends on ARM_64
>   	---help---
>   	  Support for implementations of the Renesas IPMMU-VMSA found
> 

-- 
Julien Grall
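[Editorial sketch] The semantic effect of the string-to-bool conversion in this companion patch can be mirrored in shell. The two functions below are hypothetical stand-ins for the two Kconfig spellings (`if EXPERT = "y"` before, `if EXPERT` after); this illustrates the equivalence the commit relies on, it is not Kconfig itself:

```shell
# Illustration only: mirrors the Kconfig condition change in shell.
# Before the patch, dependent options tested:  if DEBUG || EXPERT = "y"
# After the patch, they test:                  if DEBUG || EXPERT
EXPERT=y

old_style() { [ "$EXPERT" = "y" ]; }  # string equality against "y" (old string symbol)
new_style() { [ -n "$EXPERT" ]; }     # plain truth test (new bool symbol)

# With EXPERT enabled, both spellings make the option visible:
old_style && new_style && echo "option visible under both spellings"
```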


From xen-devel-bounces@lists.xenproject.org Tue May 12 13:05:58 2020
Subject: Re: [PATCH 03/16] x86/traps: Factor out exception_fixup() and make
 printing consistent
To: Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Date: Tue, 12 May 2020 15:05:32 +0200
Message-ID: <afd75bde-9adf-d8cf-f8cf-24cb1b753253@suse.com>
In-Reply-To: <a397dd69-2384-a4af-d127-9189a730a554@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-4-andrew.cooper3@citrix.com>
 <f7cb696a-5c2c-4aa6-d379-ed77772b7c35@suse.com>
 <a397dd69-2384-a4af-d127-9189a730a554@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>

On 11.05.2020 17:14, Andrew Cooper wrote:
> On 04/05/2020 14:20, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/traps.c
>>> +++ b/xen/arch/x86/traps.c
>>> @@ -774,10 +774,27 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>>>            trapnr, vec_name(trapnr), regs->error_code);
>>>  }
>>>  
>>> +static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>>> +{
>>> +    unsigned long fixup = search_exception_table(regs);
>>> +
>>> +    if ( unlikely(fixup == 0) )
>>> +        return false;
>>> +
>>> +    /* Can currently be triggered by guests.  Make sure we ratelimit. */
>>> +    if ( IS_ENABLED(CONFIG_DEBUG) && print )
>> I didn't think we consider dprintk()-s a possible security issue.
>> Why then would we consider a printk() hidden behind
>> IS_ENABLED(CONFIG_DEBUG) to be one? IOW I think one of XENLOG_GUEST and
>> IS_ENABLED(CONFIG_DEBUG) wants dropping.
> 
> Who said anything about a security issue?

The need to rate limit is (among other aspects) to prevent a
(logspam) security issue, isn't it?

> I'm deliberately not using dprintk() for the reasons explained in the
> commit message, so IS_ENABLED(CONFIG_DEBUG) is to cover that.

Which is fine, and which I understood.

> XENLOG_GUEST is because everything (other than the boot path) hitting
> this is caused directly by a guest action, and needs rate-limiting.  This
> was not consistent before.

But why do we need both CONFIG_DEBUG and XENLOG_GUEST? In a debug
build I think we'd better know of such events under the control
of the general log level setting, not under that of guest specific
messages.

>>> @@ -1466,12 +1468,11 @@ void do_page_fault(struct cpu_user_regs *regs)
>>>          if ( pf_type != real_fault )
>>>              return;
>>>  
>>> -        if ( likely((fixup = search_exception_table(regs)) != 0) )
>>> +        if ( likely(exception_fixup(regs, false)) )
>>>          {
>>>              perfc_incr(copy_user_faults);
>>>              if ( unlikely(regs->error_code & PFEC_reserved_bit) )
>>>                  reserved_bit_page_fault(addr, regs);
>>> -            regs->rip = fixup;
>> I'm afraid this modification can't validly be pulled ahead -
>> show_execution_state(), as called from reserved_bit_page_fault(),
>> wants to / should print the original RIP, not the fixed up one.
> 
> This path is bogus to begin with.  Any RSVD pagefault (other than the
> Shadow MMIO fastpath, handled earlier) is a bug in Xen and should be
> fatal rather than just a warning on extable-tagged instructions.

I could see reasons to convert to a fatal exception (without looking
up fixups), but then in a separate change with suitable justification.
I'd like to point out though that this would convert a logspam kind
of DoS to a Xen crash one, in case such a PTE would indeed be created
anywhere by mistake.

However, it is not helpful to the diagnosis of such a problem if the
logged address points at the fixup code rather than the faulting one.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 13:23:33 2020
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH v2 1/1] golang/xenlight: add NameToDomid and DomidToName
 util functions
Date: Tue, 12 May 2020 13:23:18 +0000
Message-ID: <47D89BFE-68EE-4E2F-80C0-6E6444CBDB04@citrix.com>
References: <cover.1587682041.git.rosbrookn@ainfosec.com>
 <73e709cf0a87f3728de438e4a3b849462fd098ac.1587682041.git.rosbrookn@ainfosec.com>
 <59C12770-F12A-45B7-AB62-8BB3A0DC96D8@citrix.com>
 <CAEBZRSdtmyuKP4+yv8-2FjAkmBAc8L9MNr9r5cT4yUcyNM_v=g@mail.gmail.com>
In-Reply-To: <CAEBZRSdtmyuKP4+yv8-2FjAkmBAc8L9MNr9r5cT4yUcyNM_v=g@mail.gmail.com>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Content-Type: multipart/mixed;
 boundary="_002_47D89BFE68EE4E2F80C06E6444CBDB04citrixcom_"
MIME-Version: 1.0

--_002_47D89BFE68EE4E2F80C06E6444CBDB04citrixcom_
Content-Type: text/plain; charset="utf-8"
Content-ID: <4E57966DCBB0F64DAD62B0E218DFCBC3@citrix.com>
Content-Transfer-Encoding: 8bit


> On Apr 27, 2020, at 7:51 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> On Mon, Apr 27, 2020 at 8:59 AM George Dunlap <George.Dunlap@citrix.com> wrote:
>> 
>> 
>> 
>>> On Apr 24, 2020, at 4:02 AM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>>> 
>>> Many exported functions in xenlight require a domid as an argument. Make
>>> it easier for package users to use these functions by adding wrappers
>>> for the libxl utility functions libxl_name_to_domid and
>>> libxl_domid_to_name.
>>> 
>>> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>>> ---
>>> tools/golang/xenlight/xenlight.go | 38 ++++++++++++++++++++++++++++++-
>>> 1 file changed, 37 insertions(+), 1 deletion(-)
>>> 
>>> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
>>> index 6b4f492550..d1d30b63e1 100644
>>> --- a/tools/golang/xenlight/xenlight.go
>>> +++ b/tools/golang/xenlight/xenlight.go
>>> @@ -21,13 +21,13 @@ package xenlight
>>> #cgo LDFLAGS: -lxenlight -lyajl -lxentoollog
>>> #include <stdlib.h>
>>> #include <libxl.h>
>>> +#include <libxl_utils.h>
>>> 
>>> static const libxl_childproc_hooks childproc_hooks = { .chldowner = libxl_sigchld_owner_mainloop };
>>> 
>>> void xenlight_set_chldproc(libxl_ctx *ctx) {
>>>      libxl_childproc_setmode(ctx, &childproc_hooks, NULL);
>>> }
>>> -
>>> */
>>> import "C"
>>> 
>>> @@ -75,6 +75,10 @@ var libxlErrors = map[Error]string{
>>>      ErrorFeatureRemoved:               "Feature removed",
>>> }
>>> 
>>> +const (
>>> +     DomidInvalid Domid = ^Domid(0)
>> 
>> Not to be a stickler, but are we positive that this will always result in the same value as C.INVALID_DOMID?
>> 
>> There are some functions that will return INVALID_DOMID, or accept INVALID_DOMID as an argument.  Users of the `xenlight` package will presumably need to compare against this value and/or pass it in.
>> 
>> It seems like we could:
>> 
>> 1. Rely on language lawyering to justify our assumption that ^Domid(0) will always be the same as C “~0”
>> 
>> 2. On marshalling into / out of C, convert C.INVALID_DOMID to DomidInvalid
>> 
>> 3. Set Domid = C.INVALID_DOMID
> 
> I just tested this way, and it does not work as expected. Since
> C.INVALID_DOMID is #define'd, cgo does not know it is intended to be
> used as uint32_t, and decides to declare C.INVALID_DOMID as int. So,
> C.INVALID_DOMID = ^int(0) = -1, which overflows Domid.
> 
> I tried a few things in the cgo preamble without any luck.
> Essentially, one cannot define a const uint32_t in C space, and use
> that to define a const in Go space.
> 
> Any ideas?

The following seems to work for me.  In the C section:

// #define INVALID_DOMID_TYPED ((uint32_t)INVALID_DOMID)

Then:

    DomidInvalid Domid = C.INVALID_DOMID_TYPED

Attached is a PoC.  What do you think?

 -George

--_002_47D89BFE68EE4E2F80C06E6444CBDB04citrixcom_
Content-Type: application/octet-stream; name="invalid-domid-test.go"
Content-Description: invalid-domid-test.go
Content-Disposition: attachment; filename="invalid-domid-test.go"; size=315;
	creation-date="Tue, 12 May 2020 13:23:18 GMT";
	modification-date="Tue, 12 May 2020 13:23:18 GMT"
Content-ID: <B88E1AB4B3F5A44CB78B48815091B9F8@citrix.com>
Content-Transfer-Encoding: 8bit

package main

import (
	"fmt"
)

// #include <stdint.h>
// #define INVALID_DOMID ~0
//
// #define INVALID_DOMID_VALUE ((uint32_t)INVALID_DOMID)
import "C"

type Domid uint32

const DomidInvalid Domid = C.INVALID_DOMID_VALUE

func main() {
	fmt.Printf("INVALID_DOMID value: %v\n", DomidInvalid)
}

--_002_47D89BFE68EE4E2F80C06E6444CBDB04citrixcom_--
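The invariant George's PoC relies on - that Go's `^Domid(0)` on a `uint32`-based type yields the same bit pattern as C's `((uint32_t)~0)`, i.e. libxl's INVALID_DOMID - can be checked without cgo at all. The following standalone sketch (the `Domid` name mirrors the xenlight type; it is not the actual package) demonstrates it:

```go
package main

import "fmt"

// Domid mirrors the xenlight package's underlying uint32 domain-ID type.
type Domid uint32

// In Go, ^x on an unsigned typed value flips every bit of the type's
// width, so ^Domid(0) is 0xFFFFFFFF - the same bit pattern C yields
// for ((uint32_t)~0).
const DomidInvalid Domid = ^Domid(0)

func main() {
	fmt.Printf("DomidInvalid = %d (%#x)\n", DomidInvalid, uint64(DomidInvalid))
	// prints: DomidInvalid = 4294967295 (0xffffffff)
}
```

This is the "language lawyering" option from the thread: the Go specification fixes the width of `uint32`, so the equivalence holds on every platform, independent of cgo's constant typing.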


From xen-devel-bounces@lists.xenproject.org Tue May 12 13:55:11 2020
Subject: Re: [PATCH 05/16] x86/shstk: Introduce Supervisor Shadow Stack support
To: Andrew Cooper <andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Date: Tue, 12 May 2020 15:54:58 +0200
Message-ID: <dc585fec-e325-70a4-94e3-32205d84b1ea@suse.com>
In-Reply-To: <00302d53-499a-7f6e-76a5-a5eec4e11252@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-6-andrew.cooper3@citrix.com>
 <d0347fec-3ccb-daa7-5c4d-f0e74b5fb247@suse.com>
 <00302d53-499a-7f6e-76a5-a5eec4e11252@citrix.com>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>

On 11.05.2020 17:46, Andrew Cooper wrote:
> On 04/05/2020 14:52, Jan Beulich wrote:
>>
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/Kconfig
>>> +++ b/xen/arch/x86/Kconfig
>>> @@ -34,6 +34,9 @@ config ARCH_DEFCONFIG
>>>  config INDIRECT_THUNK
>>>  	def_bool $(cc-option,-mindirect-branch-register)
>>>  
>>> +config HAS_AS_CET
>>> +	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)
>> I see you add as-instr here as a side effect. Until the other
>> similar checks get converted, I think for the time being we should
>> use the old model, to have all such checks in one place. I realize
>> this means you can't have a Kconfig dependency then.
> 
> No.  That's like asking me to keep on using bool_t, and you are the
> first person to point out and object to that in newly submitted patches.

These are entirely different things. The bool_t => bool
conversion is agreed upon. The conversion to record tool chain
capabilities in xen/.config isn't. I've raised my reservations
against this elsewhere. I can be convinced, but not by trying to
introduce such functionality as a side effect of an unrelated
change.

>> Also why do you check multiple insns, when just one (like we do
>> elsewhere) would suffice?
> 
> Because CET-SS and CET-IBT are orthogonal ABIs, but you wanted a single
> CET symbol, rather than a CET_SS symbol.
> 
> I picked a sample of various instructions to get broad coverage of CET
> without including every instruction.

I wanted HAS_AS_CET rather than HAS_AS_CET_SS and HAS_AS_CET_IBT
because both got added to gas at the same time, and hence there's
little point in having separate symbols. If there was a reason to
assume assemblers might be out there which support one but not
the other, then we indeed ought to have two symbols.

>> The crucial insns to check are those which got changed pretty
>> close before the release of 2.29 (in the cover letter you
>> mention 2.32): One of incssp{d,q}, setssbsy, or saveprevssp.
>> There weren't official binutils releases with the original
>> insns, but distros may still have picked up intermediate
>> snapshots.
> 
> I've got zero interest in catering to distros which are still using
> obsolete pre-release toolchains.  Bleeding edge toolchains are one
> thing, but this is like asking us to support the middle changeset of the
> series introducing CET, which is absolutely a non-starter.
> 
> If the instructions were missing from real binutils releases, then
> obviously we should exclude those releases, but that doesn't appear to
> be the case.

But you realize that there's no special effort needed? We merely
need to check for the right insns. Their operands not matching
for binutils from the intermediate time window is enough for our
purposes. With my remark I merely meant to guide which of the
three insns you've picked needs to remain, and which would imo
better be dropped.

>>> --- a/xen/arch/x86/setup.c
>>> +++ b/xen/arch/x86/setup.c
>>> @@ -95,6 +95,36 @@ unsigned long __initdata highmem_start;
>>>  size_param("highmem-start", highmem_start);
>>>  #endif
>>>  
>>> +static bool __initdata opt_xen_shstk = true;
>>> +
>>> +static int parse_xen(const char *s)
>>> +{
>>> +    const char *ss;
>>> +    int val, rc = 0;
>>> +
>>> +    do {
>>> +        ss = strchr(s, ',');
>>> +        if ( !ss )
>>> +            ss = strchr(s, '\0');
>>> +
>>> +        if ( (val = parse_boolean("shstk", s, ss)) >= 0 )
>>> +        {
>>> +#ifdef CONFIG_XEN_SHSTK
>>> +            opt_xen_shstk = val;
>>> +#else
>>> +            no_config_param("XEN_SHSTK", "xen", s, ss);
>>> +#endif
>>> +        }
>>> +        else
>>> +            rc = -EINVAL;
>>> +
>>> +        s = ss + 1;
>>> +    } while ( *ss );
>>> +
>>> +    return rc;
>>> +}
>>> +custom_param("xen", parse_xen);
>> What's the idea here going forward, i.e. why the new top level
>> "xen" option? Almost all options are for Xen itself, after all.
>> Did you perhaps mean this to be "cet"?
> 
> I forgot an RFC for this, as I couldn't think of anything better.  "cet"
> as a top level option isn't going to gain more than {no-}shstk and
> {no-}ibt as suboptions.

Imo that's still better than "xen=".

>>> --- a/xen/scripts/Kconfig.include
>>> +++ b/xen/scripts/Kconfig.include
>>> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>>>  # Return y if the linker supports <flag>, n otherwise
>>>  ld-option = $(success,$(LD) -v $(1))
>>>  
>>> +# $(as-instr,<instr>)
>>> +# Return y if the assembler supports <instr>, n otherwise
>>> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
>> CLANG_FLAGS caught my eye here, and then I noticed that cc-option
>> also uses it. Anthony - what's the deal with this? It doesn't
>> look to be defined anywhere, and I also don't see what's clang-
>> specific about these constructs.
> 
> This is inherited from Linux.  There is obviously a reason.
> 
> However, I'm not interested in diving into that rabbit hole in an
> unrelated series.

That's fine - my question was directed at Anthony after all.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 13:56:17 2020
From: osstest service owner <osstest-admin@xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [libvirt test] 150146: regressions - FAIL
Date: Tue, 12 May 2020 13:56:05 +0000
Message-ID: <osstest-150146-mainreport@xen.org>

flight 150146 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150146/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-pvops              6 kernel-build             fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f5418b427e7d2f26803880309478de9103680826
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  116 days
Failing since        146211  2020-01-18 04:18:52 Z  115 days  106 attempts
Testing same since   150146  2020-05-12 04:18:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 17515 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 12 13:58:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 13:58:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYVQo-0006bQ-Td; Tue, 12 May 2020 13:58:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYVQn-0006bK-Mx
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 13:58:49 +0000
X-Inumbo-ID: b410fac6-9458-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b410fac6-9458-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 13:58:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 03A5EAE56;
 Tue, 12 May 2020 13:58:50 +0000 (UTC)
Subject: Re: [PATCH 06/16] x86/traps: Implement #CP handler and extend #PF for
 shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-7-andrew.cooper3@citrix.com>
 <561914ce-d076-8f1a-e74b-7c37567480a1@suse.com>
 <b7cbee0d-2215-e600-e755-48a329e0786d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2bab5512-2bcb-7b92-f4ac-d21e89b748d8@suse.com>
Date: Tue, 12 May 2020 15:58:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <b7cbee0d-2215-e600-e755-48a329e0786d@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 19:20, Andrew Cooper wrote:
> On 04/05/2020 15:10, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> @@ -1457,6 +1451,10 @@ void do_page_fault(struct cpu_user_regs *regs)
>>>      {
>>>          enum pf_type pf_type = spurious_page_fault(addr, regs);
>>>  
>>> +        /* Any fault on a shadow stack access is a bug in Xen. */
>>> +        if ( error_code & PFEC_shstk )
>>> +            goto fatal;
>> Not going through the full spurious_page_fault() in this case
>> would seem desirable, as would be at least a respective
>> adjustment to __page_fault_type(). Perhaps such an adjustment
>> could then avoid the change (and the need for goto) here?
> 
> This seems to do a lot of things which have little/nothing to do with
> spurious faults.
> 
> In particular, we don't need to disable interrupts to look at
> PFEC_shstk, or RSVD for that matter.

Perhaps even more so a reason to make spurious_page_fault()
return a new enum pf_type enumerator? In any event your reply
looks more like a "yes" to my suggestion than an objection,
but I may be getting it entirely wrong ...

Jan
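[Editor's sketch] The early-exit being debated above can be modeled in isolation: classify the #PF error code before deciding whether the interrupts-disabled spurious-fault walk is needed at all. The bit positions below are the architectural x86 #PF error-code bits (SS is bit 6 per the Intel SDM); the function name, enum, and overall shape are hypothetical illustrations, not Xen's actual code.

```c
#include <stdint.h>

/* Architectural x86 #PF error-code bits. */
#define PFEC_present  (1u << 0)
#define PFEC_write    (1u << 1)
#define PFEC_user     (1u << 2)
#define PFEC_rsvd     (1u << 3)
#define PFEC_shstk    (1u << 6)   /* shadow-stack access (CET) */

enum pf_disposition {
    PF_SPURIOUS_CHECK,  /* fall through to the spurious-fault analysis */
    PF_FATAL,           /* treated as a bug in the hypervisor */
};

/*
 * Hypothetical classifier: a fault tagged as a shadow-stack access is
 * fatal immediately, without entering the (interrupts-disabled)
 * spurious-fault path -- the separation Andrew argues for above.
 */
static enum pf_disposition classify_fault(uint32_t error_code)
{
    if ( error_code & PFEC_shstk )
        return PF_FATAL;

    return PF_SPURIOUS_CHECK;
}
```

Whether this lives as an early check in do_page_fault() or as a new enumerator returned by spurious_page_fault() (Jan's suggestion) only moves where the branch sits; the classification itself needs no locking or IRQ masking.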


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:08:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYVZV-0007Z6-S5; Tue, 12 May 2020 14:07:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYVZV-0007Z1-GL
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:07:49 +0000
X-Inumbo-ID: f5c65852-9459-11ea-a2af-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5c65852-9459-11ea-a2af-12813bfff9fa;
 Tue, 12 May 2020 14:07:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9F3CDAA7C;
 Tue, 12 May 2020 14:07:50 +0000 (UTC)
Subject: Re: [PATCH 07/16] x86/shstk: Re-layout the stack block for shadow
 stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-8-andrew.cooper3@citrix.com>
 <8b6e03ee-545d-eada-457e-79c183a72d6d@suse.com>
 <51eca0a6-48ff-9169-2c41-c1cadace1d02@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ad8a9e10-62a5-c107-3c61-56a8e1eb0bac@suse.com>
Date: Tue, 12 May 2020 16:07:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <51eca0a6-48ff-9169-2c41-c1cadace1d02@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 19:48, Andrew Cooper wrote:
> On 04/05/2020 15:24, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/cpu/common.c
>>> +++ b/xen/arch/x86/cpu/common.c
>>> @@ -732,14 +732,14 @@ void load_system_tables(void)
>>>  		.rsp2 = 0x8600111111111111ul,
>>>  
>>>  		/*
>>> -		 * MCE, NMI and Double Fault handlers get their own stacks.
>>> +		 * #DB, NMI, DF and #MCE handlers get their own stacks.
>> Then also #DF and #MC?
> 
> Ok.
> 
>>
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -6002,25 +6002,18 @@ void memguard_unguard_range(void *p, unsigned long l)
>>>  
>>>  void memguard_guard_stack(void *p)
>>>  {
>>> -    /* IST_MAX IST pages + at least 1 guard page + primary stack. */
>>> -    BUILD_BUG_ON((IST_MAX + 1) * PAGE_SIZE + PRIMARY_STACK_SIZE > STACK_SIZE);
>>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>>>  
>>> -    memguard_guard_range(p + IST_MAX * PAGE_SIZE,
>>> -                         STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
>>> +    p += 5 * PAGE_SIZE;
>> The literal 5 here and ...
>>
>>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>>>  }
>>>  
>>>  void memguard_unguard_stack(void *p)
>>>  {
>>> -    memguard_unguard_range(p + IST_MAX * PAGE_SIZE,
>>> -                           STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
>>> -}
>>> -
>>> -bool memguard_is_stack_guard_page(unsigned long addr)
>>> -{
>>> -    addr &= STACK_SIZE - 1;
>>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_RW);
>>>  
>>> -    return addr >= IST_MAX * PAGE_SIZE &&
>>> -           addr < STACK_SIZE - PRIMARY_STACK_SIZE;
>>> +    p += 5 * PAGE_SIZE;
>> ... here could do with macro-izing: IST_MAX + 1 would already be
>> a little better, I guess.
> 
> The problem is that "IST_MAX + 1" is now less meaningful than a literal
> 5, because at least 5 obviously matches up with the comment describing
> which page does what.

If you don't like IST_MAX+1, can I at least talk you into introducing
a separate constant? This is the only way to connect together the
various places where it is used. We can't be sure that we're not going
to touch this code anymore, ever. And if we do, it'll help if related
places are easy to spot.

Jan
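[Editor's sketch] Jan's request is the classic "name the magic number" refactor: derive every use of the offset from one definition so the stack layout can change in exactly one place, and so grep finds all related sites. A standalone illustration follows; the constant name and the IST_MAX value of 4 are hypothetical, not taken from the actual series.

```c
#define PAGE_SIZE          4096UL
#define IST_MAX            4    /* hypothetical: #DF, NMI, #MC, #DB */

/*
 * Hypothetical layout constant: the slot index of the guard page that
 * sits between the IST pages and the primary stack.  Both guard/unguard
 * sites compute their offset from this one definition instead of each
 * repeating a literal 5.
 */
#define PRIMARY_GUARD_SLOT (IST_MAX + 1)

/* Byte offset of the guard page within the per-CPU stack block. */
static unsigned long guard_page_offset(void)
{
    return PRIMARY_GUARD_SLOT * PAGE_SIZE;
}
```

With this, memguard_guard_stack() and memguard_unguard_stack() would both say `p += guard_page_offset();` (or use the macro directly), and a future layout change touches only the definition and its comment.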


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:08:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYVZt-0007bG-6B; Tue, 12 May 2020 14:08:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EIy0=62=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYVZr-0007b7-OY
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:08:11 +0000
X-Inumbo-ID: ff492030-9459-11ea-a2af-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff492030-9459-11ea-a2af-12813bfff9fa;
 Tue, 12 May 2020 14:08:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HaJpeQrshNGcReKJwnBopWzKpQisUMEYXDaqIp/iuC0=; b=4OQCK7zgvVZQh5n9Z2hgu2HvM
 n4UhrQGJqdsuIBo/3TrrTTErbg5iF5pNnZY0+zMdhHFiYtLZ7QLp4TdjdLq1Izej2e5lOvcR+Ni+y
 wD/tRpFHgTZ7T8U7h8V+/yVHHiUsVjfvThDH7Cs41F4ltsQDHlzjrTqwR9q8yk5CywxhQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYVZk-00082G-90; Tue, 12 May 2020 14:08:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYVZk-0005dG-3C; Tue, 12 May 2020 14:08:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYVZk-0004RZ-2W; Tue, 12 May 2020 14:08:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150142-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150142: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
X-Osstest-Versions-That: xen=190c60f12db469472476041ecd0e6c9a0d4b0f8a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 14:08:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150142 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150142/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150135
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150135
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150135
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150135
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150135
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150135
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150135
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150135
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150135
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150135
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
baseline version:
 xen                  190c60f12db469472476041ecd0e6c9a0d4b0f8a

Last test of basis   150135  2020-05-11 10:14:21 Z    1 days
Testing same since   150142  2020-05-11 23:37:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   190c60f12d..a82582b1af  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54 -> master


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:20:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYVlC-0000Cg-F6; Tue, 12 May 2020 14:19:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Uja8=62=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jYVlB-0000Ca-EG
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:19:53 +0000
X-Inumbo-ID: a411d25a-945b-11ea-a2b1-12813bfff9fa
Received: from mail-wm1-f43.google.com (unknown [209.85.128.43])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a411d25a-945b-11ea-a2b1-12813bfff9fa;
 Tue, 12 May 2020 14:19:50 +0000 (UTC)
Received: by mail-wm1-f43.google.com with SMTP id m24so12358552wml.2
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 07:19:50 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=JNzLOzK3zqQaTGgwDwdkMFE+RsO3Jbe8dnugUcCb/oY=;
 b=XKPbxyxPfau+HBUBk95H4uiYMArzv98df4gIzumIIQLjJ/8i/eghbF2frnsTzqMvm9
 ALvtv4upYXohoI870I7YqrPRXEG+ZuNmlaqWYQM+z0T9Uxk9+dz2CXixHrbqxGNBU8Ed
 88wsSuYhmdrrJAAQBQBHMt8alojtlhpeMOOjkaWKGmzodEcpo8PQ5avJLQHfXETm2bi/
 xqyA+VsdNJfOExSvNxYu9M/ZV4sMAMni+rsOg0qfbDr++anHyvokqCyl7j6WNvTS3dA4
 bBzWlauhfRFbb0ZkmuOwGOEMfGV5KwfocRdSYmdyn1fB6wZfY4OYw//cTzkzl1bnY/ks
 bfNg==
X-Gm-Message-State: AGi0PuazHb6zzG15BlxNnAzVmptbH7p6/yRBUoc1C0ONlLeJMj+OAYTL
 Qthfv+jgZATOEnbJpEJPd8U=
X-Google-Smtp-Source: APiQypKxUSY/Ei5/fk56Fi65Ht9AesMwP09vvQDI9OOSdiFzHHK4q51a+KLd+JP9XP9c6LLn/d9SAw==
X-Received: by 2002:a05:600c:4102:: with SMTP id
 j2mr40139842wmi.159.1589293190080; 
 Tue, 12 May 2020 07:19:50 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id d1sm14095035wrc.26.2020.05.12.07.19.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 07:19:49 -0700 (PDT)
Date: Tue, 12 May 2020 14:19:47 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: use of "stat -"
Message-ID: <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 12:58:46PM +0200, Jan Beulich wrote:
> Hello,
> 
> now that I've been able to do a little bit of work from the office
> again, I've run into a regression from b72682c602b8 "scripts: Use
> stat to check lock claim". On one of my older machines I've noticed
> guests wouldn't launch anymore, which I've tracked down to the
> system having an old stat on it. Yes, the commit says the needed
> behavior has been available since 2009, but please let's not forget
> that we continue to support building with tool chains from about
> 2007.
> 
> Putting in place and using newer tool chain versions without
> touching the base distro is pretty straightforward. Replacing the
> coreutils package isn't, and there's not even an override available
> by which one could point at an alternative one. Hence I think
> bumping the minimum required versions of basic tools should be
> done even more carefully than bumping required tool chain versions
> (which we've not dared to do in years). After having things
> successfully working again with a full revert, I'm now going to
> experiment with adapting behavior to stat's capabilities. Would
> something like that be acceptable (if it works out)?

Are you asking for that patch to be reverted?

Wei.

> 
> Jan
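
A capability probe along the lines Jan describes might look like the
sketch below (illustrative only — the variable name and fallback branch
are invented here and are not the actual xen scripts code). It relies on
newer GNU coreutils treating "-" as "operate on file descriptor 0",
while an older stat tries to stat a literal file named "-" and fails:

```shell
# Probe whether this stat(1) accepts "-" to mean "stat fd 0", a GNU
# coreutils feature available since 2009 but absent from older base
# distros.  Redirecting stdin from /dev/null gives the new stat a
# valid descriptor to fstat; the old stat errors out instead.
if stat -c '%i' - </dev/null >/dev/null 2>&1; then
    have_stat_dash=yes   # can check the lock claim via "stat -"
else
    have_stat_dash=no    # adapt behaviour / fall back to the old check
fi
echo "$have_stat_dash"
```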


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:24:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:24:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYVpq-00012R-7s; Tue, 12 May 2020 14:24:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EIy0=62=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYVpp-00012M-Oz
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:24:41 +0000
X-Inumbo-ID: 4d07e23c-945c-11ea-a2b3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d07e23c-945c-11ea-a2b3-12813bfff9fa;
 Tue, 12 May 2020 14:24:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=egfCiy+/i6Rmg0gvXEd1VNjrXot/V4F8Wr32zLfaZ6I=; b=csPFE84NEdi2R649M26UB2O06
 RYHzbgXCI2Ndjj3jZzl5+TeoxPPsAdUxRTskVv9k9u4aa9FVCGDxGJWtPaZSQUdYYkKoZuqPr7eeH
 AOW6Ygs6OddW9mWMvXPzQzMr+zhlPk6OiQJmf7wdHJugh6rtr17gaHHKp1Xj2GClz229M=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYVpg-0008N0-UI; Tue, 12 May 2020 14:24:32 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYVpg-0006BV-Hi; Tue, 12 May 2020 14:24:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYVpg-0005o4-H3; Tue, 12 May 2020 14:24:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150143: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-amd64-xl-rtds:guest-saverestore:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=f015b86259a520ad886523d9ec6fdb0ed80edc38
X-Osstest-Versions-That: linux=592465e6a54ba8104969f3b73b58df262c5be5f5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 14:24:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150143 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150143/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     15 guest-saverestore            fail  like 150086
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150086
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                f015b86259a520ad886523d9ec6fdb0ed80edc38
baseline version:
 linux                592465e6a54ba8104969f3b73b58df262c5be5f5

Last test of basis   150104  2020-05-09 09:48:15 Z    3 days
Testing same since   150122  2020-05-10 08:40:44 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aaron Ma <aaron.ma@canonical.com>
  AceLan Kao <acelan.kao@canonical.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Elder <elder@linaro.org>
  Alexei Starovoitov <ast@kernel.org>
  Amadeusz Sławiński <amadeuszx.slawinski@linux.intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andrzej Hajda <a.hajda@samsung.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Andy Yan <andy.yan@rock-chips.com>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Brendan Higgins <brendanhiggins@google.com>
  Brian Cain <bcain@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Christoph Hellwig <hch@lst.de>
  David S. Miller <davem@davemloft.net>
  Doug Berger <opendmb@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hans de Goede <hdegoede@redhat.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Jere Leppänen <jere.leppanen@nokia.com>
  Jeremie Francois (on alpha) <jeremie.francois@gmail.com>
  Jia He <justin.he@arm.com>
  Jiri Slaby <jslaby@suse.cz>
  Johannes Berg <johannes.berg@intel.com>
  Julien Beraud <julien.beraud@orolia.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lee Jones <lee.jones@linaro.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Chamberlain <mcgrof@kernel.org>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Matt Roper <matthew.d.roper@intel.com>
  Matthias Blankertz <matthias.blankertz@cetitec.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Nathan Chancellor <natechancellor@gmail.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paulo Alcantara (SUSE) <pc@cjr.nz>
  Paulo Alcantara <pc@cjr.nz>
  Pavel Machek <pavel@denx.de>
  Qian Cai <cai@lca.pw>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Roman Gilg <subdiff@gmail.com>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Sandeep Raghuraman <sandy.8925@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Takashi Iwai <tiwai@suse.de>
  Thadeu Lima de Souza Cascardo <cascardo@canonical.com>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thinh Nguyen <thinhn@synopsys.com>
  Thomas Pedersen <thomas@adapt-ip.com>
  Tuowen Zhao <ztuowen@gmail.com>
  Tyler Hicks <tyhicks@linux.microsoft.com>
  Vamshi K Sthambamkadi <vamshi.k.sthambamkadi@gmail.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Zhan Liu <zhan.liu@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   592465e6a54b..f015b86259a5  f015b86259a520ad886523d9ec6fdb0ed80edc38 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:31:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:31:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYVwU-0001tr-W3; Tue, 12 May 2020 14:31:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYVwT-0001tm-Hz
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:31:33 +0000
X-Inumbo-ID: 465d0268-945d-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 465d0268-945d-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 14:31:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4F718AD03;
 Tue, 12 May 2020 14:31:34 +0000 (UTC)
Subject: Re: [PATCH 12/16] x86/extable: Adjust extable handling to be shadow
 stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-13-andrew.cooper3@citrix.com>
 <1e80c672-9308-f7ad-67ea-69d83d69bc03@suse.com>
 <974f631e-3a82-3da4-124d-f4bf2bef89e2@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <59fcdaf0-f877-7a90-9bf4-9e41b1bbcea7@suse.com>
Date: Tue, 12 May 2020 16:31:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <974f631e-3a82-3da4-124d-f4bf2bef89e2@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 23:09, Andrew Cooper wrote:
> On 07/05/2020 14:35, Jan Beulich wrote:
>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/traps.c
>>> +++ b/xen/arch/x86/traps.c
>>> @@ -778,6 +778,28 @@ static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>>>                 vec_name(regs->entry_vector), regs->error_code,
>>>                 _p(regs->rip), _p(regs->rip), _p(fixup));
>>>  
>>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>>> +    {
>>> +        unsigned long ssp;
>>> +
>>> +        asm ("rdsspq %0" : "=r" (ssp) : "0" (1) );
>>> +        if ( ssp != 1 )
>>> +        {
>>> +            unsigned long *ptr = _p(ssp);
>>> +
>>> +            /* Search for %rip in the shadow stack, ... */
>>> +            while ( *ptr != regs->rip )
>>> +                ptr++;
>> Wouldn't it be better to bound the loop, as it shouldn't search past
>> (strictly speaking not even to) the next page boundary? Also you
>> don't care about the top of the stack (being the to be restored SSP),
>> do you? I.e. maybe
>>
>>             while ( *++ptr != regs->rip )
>>                 ;
>>
>> ?
>>
>> And then - isn't searching for a specific RIP value alone prone to
>> error, in case a it matches an ordinary return address? I.e.
>> wouldn't you better search for a matching RIP accompanied by a
>> suitable pointer into the shadow stack and a matching CS value?
>> Otherwise, ...
>>
>>> +            ASSERT(ptr[1] == __HYPERVISOR_CS);
>> ... also assert that ptr[-1] points into the shadow stack?
> 
> So this is the problem I was talking about: the previous context's
> SSP isn't stored anywhere helpful.
> 
> What we are in practice doing is looking 2 or 3 words up the shadow
> stack (depending on exactly how deep our call graph is), to the shadow
> IRET frame matching the real IRET frame which regs is pointing to.
> 
> Both IRET frames were pushed in the process of generating the exception,
> and we've already matched regs->rip to the exception table record.  We
> need to fix up regs->rip and the shadow lip to the fixup address.
> 
> As we are always fixing up an exception generated from Xen context, we
> know that ptr[1] == __HYPERVISOR_CS, and ptr[-1] == &ptr[2], as we
> haven't switched shadow stack as part of taking this exception. 
> However, this second point is fragile to exception handlers moving onto IST.
> 
> We can't encounter regs->rip in the shadow stack between the current SSP
> and the IRET frame we're looking to fix up, or we would have taken a
> recursive fault and not reached exception_fixup() to begin with.

I'm afraid I don't follow here. Consider a function (also)
involved in exception handling having this code sequence:

    call    func
    mov     (%rax), %eax

If the fault we're handling occurred on the MOV and
exception_fixup() is a descendant of func(), then the first
instance of an address on the shadow stack pointing at this
MOV is going to be the one which did not fault.

> Therefore, the loop is reasonably bounded in all cases.
> 
> Sadly, there is no RDSS instruction, so we can't actually use shadow
> stack reads to spot if we underflowed the shadow stack, and there is no
> useful alternative to panic() if we fail to find the shadow IRET frame.

But afaics you don't panic() in this case. Instead you continue
looping until (presumably) you hit some form of fault.
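[For illustration only: Jan's suggested hardening — bound the search to
the current shadow stack page and require an adjacent __HYPERVISOR_CS
word — can be sketched in user space roughly as below. The page size,
the selector value and the helper name are illustrative assumptions,
not Xen's actual implementation.]

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

#define PAGE_SIZE     4096UL
#define HYPERVISOR_CS 0xe008UL  /* illustrative selector value only */

/*
 * Sketch of the bounded search discussed above: scan upwards from the
 * current shadow stack pointer for the shadow IRET frame, requiring
 * the word after the candidate RIP to be the hypervisor CS, and give
 * up at the next page boundary instead of looping unboundedly.
 * Returns the RIP slot on success; NULL (which in the hypervisor
 * would be panic() material) on failure.
 */
static unsigned long *find_shstk_iret_frame(unsigned long *ssp,
                                            unsigned long rip)
{
    unsigned long *ptr = ssp;
    /* First word of the next page; ptr[1] must stay below it. */
    unsigned long *end =
        (unsigned long *)(((uintptr_t)ssp | (PAGE_SIZE - 1)) + 1);

    for ( ; ptr + 1 < end; ptr++ )
        if ( ptr[0] == rip && ptr[1] == HYPERVISOR_CS )
            return ptr;

    return NULL;
}
```

As the MOV example shows, even this can in principle match a stale
ordinary return address, so a fuller check would also verify that the
saved-SSP word sits next to the frame and points just above it.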

>>> --- a/xen/arch/x86/x86_64/entry.S
>>> +++ b/xen/arch/x86/x86_64/entry.S
>>> @@ -708,7 +708,16 @@ exception_with_ints_disabled:
>>>          call  search_pre_exception_table
>>>          testq %rax,%rax                 # no fixup code for faulting EIP?
>>>          jz    1b
>>> -        movq  %rax,UREGS_rip(%rsp)
>>> +        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
>>> +
>>> +#ifdef CONFIG_XEN_SHSTK
>>> +        mov    $1, %edi
>>> +        rdsspq %rdi
>>> +        cmp    $1, %edi
>>> +        je     .L_exn_shstk_done
>>> +        wrssq  %rax, (%rdi)             # fixup shadow stack
>>> +.L_exn_shstk_done:
>>> +#endif
>> Again avoid the conditional jump by using alternatives patching?
> 
> Well - that depends on whether we're likely to gain any new content in
> the pre exception table.
> 
> As it stands, it is only the IRET(s) to userspace so would be safe to
> turn this into an unconditional alternative.  Even in the crash case, we
> won't be returning to guest context after having started the crash
> teardown path.

Ah, right - perhaps indeed better keep it as is then.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:33:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:33:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYVyf-000206-DJ; Tue, 12 May 2020 14:33:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYVye-000201-3c
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:33:48 +0000
X-Inumbo-ID: 9670d7e8-945d-11ea-a2b8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9670d7e8-945d-11ea-a2b8-12813bfff9fa;
 Tue, 12 May 2020 14:33:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B5247ACCE;
 Tue, 12 May 2020 14:33:48 +0000 (UTC)
Subject: Re: use of "stat -"
To: Wei Liu <wl@xen.org>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
Date: Tue, 12 May 2020 16:33:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 16:19, Wei Liu wrote:
> On Tue, May 12, 2020 at 12:58:46PM +0200, Jan Beulich wrote:
>> now that I've been able to do a little bit of work from the office
>> again, I've run into a regression from b72682c602b8 "scripts: Use
>> stat to check lock claim". On one of my older machines I've noticed
>> guests wouldn't launch anymore, which I've tracked down to the
>> system having an old stat on it. Yes, the commit says the needed
>> behavior has been available since 2009, but please let's not forget
>> that we continue to support building with tool chains from about
>> 2007.
>>
>> Putting in place and using newer tool chain versions without
>> touching the base distro is pretty straightforward. Replacing the
>> coreutils package isn't, and there's not even an override available
>> by which one could point at an alternative one. Hence I think
>> bumping the minimum required versions of basic tools should be
>> done even more carefully than bumping required tool chain versions
>> (which we've not dared to do in years). After having things
>> successfully working again with a full revert, I'm now going to
>> experiment with adapting behavior to stat's capabilities. Would
>> something like that be acceptable (if it works out)?
> 
> Are you asking for reverting that patch?

Well, I assume the patch has its merits, even if I don't really
understand what they are from its description. I'm instead asking
whether something like the below (meanwhile tested) would be
acceptable.

Jan

--- unstable.orig/tools/hotplug/Linux/locking.sh
+++ unstable/tools/hotplug/Linux/locking.sh
@@ -42,6 +42,12 @@ claim_lock()
     # it being possible to safely remove the lockfile when done.
     # See below for a correctness proof.
     local stat
+    local use_stat
+    local rightfile
+    if stat - 2>/dev/null >/dev/null
+    then
+        use_stat=y
+    fi
     while true; do
         eval "exec $_lockfd<>$_lockfile"
         flock -x $_lockfd || return $?
@@ -56,7 +62,17 @@ claim_lock()
         # WW.XXX
         # YY.ZZZ
         # which need to be separated and compared.
-        if stat=$( stat -L -c '%D.%i' - $_lockfile 0<&$_lockfd 2>/dev/null )
+        if [ x$use_stat != xy ]
+        then
+            # Fall back to the original Perl approach.
+            rightfile=$( perl -e '
+                open STDIN, "<&'$_lockfd'" or die $!;
+                my $fd_inum = (stat STDIN)[1]; die $! unless defined $fd_inum;
+                my $file_inum = (stat $ARGV[0])[1];
+                print "y\n" if $fd_inum eq $file_inum;
+                                 ' "$_lockfile" )
+            if [ x$rightfile = xy ]; then break; fi
+        elif stat=$( stat -L -c '%D.%i' - $_lockfile 0<&$_lockfd 2>/dev/null )
         then
             local file_stat
             local fd_stat

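[For reference, the capability probe the patch above relies on can be
exercised standalone. This is only a sketch of the detection idea; the
thread says the "-" behaviour of GNU stat dates from 2009, which is the
version note assumed here.]

```shell
#!/bin/sh
# Probe whether this stat(1) accepts "-", i.e. fstat()s the file
# descriptor open on stdin, as claim_lock() uses it above.  Older
# coreutils (pre-2009, per the thread) reject the operand, which is
# exactly the case the fallback path has to handle.
if stat -L -c '%D.%i' - </dev/null >/dev/null 2>&1
then
    use_stat=y
else
    use_stat=n
fi
echo "stat '-' supported: $use_stat"
```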



From xen-devel-bounces@lists.xenproject.org Tue May 12 14:37:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:37:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYW1f-0002At-0U; Tue, 12 May 2020 14:36:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDyx=62=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYW1c-0002An-VN
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:36:53 +0000
X-Inumbo-ID: 046be59e-945e-11ea-a2bb-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 046be59e-945e-11ea-a2bb-12813bfff9fa;
 Tue, 12 May 2020 14:36:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589294211;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=eYqQ8fIPQoZurpCTyEVO9wqqBYCFIDGSp7VjbM0Cb1A=;
 b=Vs4uw6dds9COu1FDbBlPP7xolygPgilk6x2HkmKBP0nCnhHRWFDrWysb
 SMEZMGEDNzTLTObl/ZAwgTJzefQeNj8wQU+EYFaEeGFGMnoOL9/Cv4rQX
 qRm26rlavAwF5+Cw0/xIrVdu1A154xU0hYolDPadysMQc7Lql2E2dbuoc k=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 7RM7K6YbycKcovFyNYE4+HpYMyMXkX7le1fdcEMSnpr5Mhjy3H0oihw3AB8yT+YqSVo8dNWfMw
 qlG2XiZIANh41UHx/rruHlThEmk2QiQGW3oeCJGULgtpm27bICRya3GahviCzKqKP9QOdtzmQ7
 p2Q5pssdji48PPhR1U86i96nb5DnfGBx5CqlNRfNe/BIev2di7lWu6pD48vl9Lqg97mWRlvCxz
 Tqlb85E+xEIKMyYsSgK4g2L8e0t7dLVPkaDYSn5MtZxkQR3cvWl+MVH9RSF9nNQnPNVU2e9zx9
 iBE=
X-SBRS: 2.7
X-MesageID: 17599552
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17599552"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Topic: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Index: AQHWHzfMuMNvM1D6EUuQ5HJvPIpidKikdZCA
Date: Tue, 12 May 2020 14:36:47 +0000
Message-ID: <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
In-Reply-To: <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <8E084319BF30EC4A89535278BFABBD96@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On Apr 30, 2020, at 10:39 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> Initialize the xenlight Go module using the xenbits git-http URL,
> xenbits.xen.org/git-http/xen.git/tools/golang/xenlight, and update the
> XEN_GOCODE_URL variable in tools/Rules.mk accordingly.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> ---
> tools/Rules.mk               | 2 +-
> tools/golang/xenlight/go.mod | 1 +
> 2 files changed, 2 insertions(+), 1 deletion(-)
> create mode 100644 tools/golang/xenlight/go.mod
> 
> diff --git a/tools/Rules.mk b/tools/Rules.mk
> index 5b8cf748ad..ca33cc7b31 100644
> --- a/tools/Rules.mk
> +++ b/tools/Rules.mk
> @@ -36,7 +36,7 @@ debug ?= y
> debug_symbols ?= $(debug)
> 
> XEN_GOPATH        = $(XEN_ROOT)/tools/golang
> -XEN_GOCODE_URL    = golang.xenproject.org
> +XEN_GOCODE_URL    = xenbits.xen.org/git-http/xen.git/tools/golang

The primary effect of this will be to install the code in
$PREFIX/share/gocode/xenbits.xen.org/git-http/xen.git/tools/golang/xenlight
when making debballs or doing `make install`.

I don’t immediately see the advantage of that, particularly if we’re
still thinking about having a “prettier” path at some point in the
future.  What was your thinking here?

> ifeq ($(debug_symbols),y)
> CFLAGS += -g3
> diff --git a/tools/golang/xenlight/go.mod b/tools/golang/xenlight/go.mod
> new file mode 100644
> index 0000000000..232d102153
> --- /dev/null
> +++ b/tools/golang/xenlight/go.mod
> @@ -0,0 +1 @@
> +module xenbits.xen.org/git-http/xen.git/tools/golang/xenlight

This should probably be s/xen/xenproject/;

If you want I could check in a version of this patch with just the
go.mod, with that change.

 -George


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:48:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:48:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWCT-000350-0h; Tue, 12 May 2020 14:48:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYWCS-00034v-Cu
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:48:04 +0000
X-Inumbo-ID: 94d13d68-945f-11ea-a2c0-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94d13d68-945f-11ea-a2c0-12813bfff9fa;
 Tue, 12 May 2020 14:48:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589294884;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=jVjRbiYFDETmm85nMp64hRUUwpS0KFOXOZJIum9tDOM=;
 b=VpRj226SmuSgbbkyc8wkJ0QCH6F0JNfiUsAv7TqionBywCl0ukMglL7t
 f6hJm4M4nT+cOUdiRpPRAUzTkXX2a5ua8YSlA6I6/fQZqerX7rHFJ9bYB
 28Ms0+fgOLCOxyvrCRF/fGCW9w4I6wfHjCCJsRf+tPA4MZDGTVFZZz4JR Q=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: x+LMrm/A+JJLmLQzg+ujblcMvjaQmSsyIHvKNhEa5f99tijtFf2LUNrQ51fcwND+MkO3tYrSEo
 u2ZoXs3zsAkxnbeiPO5GhagHiVEF2hlLeJliKo6seBnZghzDHvnWXEHXcDbuLvUwxpb2V73yGk
 hAb90R+ng0ORnk7K9MO1SS4PywzFXgM85++PFGd+6kf5Yltq/3vpvDwnT3z0F4dt2emYDQ4bNU
 bt0xLRQ3g55ZC/kY4TVG35DS3hNlRULCTe667GVEevdasSUhl5GHb6HPTi65wqHNYXGCCSm3As
 u94=
X-SBRS: 2.7
X-MesageID: 17347226
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17347226"
Subject: Re: use of "stat -"
To: Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
Date: Tue, 12 May 2020 15:47:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Jason
 Andryuk <jandryuk@gmail.com>, Ian Jackson <ian.jackson@eu.citrix.com>, Paul
 Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12/05/2020 15:33, Jan Beulich wrote:
> On 12.05.2020 16:19, Wei Liu wrote:
>> On Tue, May 12, 2020 at 12:58:46PM +0200, Jan Beulich wrote:
>>> now that I've been able to do a little bit of work from the office
>>> again, I've run into a regression from b72682c602b8 "scripts: Use
>>> stat to check lock claim". On one of my older machines I've noticed
>>> guests wouldn't launch anymore, which I've tracked down to the
>>> system having an old stat on it. Yes, the commit says the needed
>>> behavior has been available since 2009, but please let's not forget
>>> that we continue to support building with tool chains from about
>>> 2007.
>>>
>>> Putting in place and using newer tool chain versions without
>>> touching the base distro is pretty straightforward. Replacing the
>>> coreutils package isn't, and there's not even an override available
>>> by which one could point at an alternative one. Hence I think
>>> bumping the minimum required versions of basic tools should be
>>> done even more carefully than bumping required tool chain versions
>>> (which we've not dared to do in years). After having things
>>> successfully working again with a full revert, I'm now going to
>>> experiment with adapting behavior to stat's capabilities. Would
>>> something like that be acceptable (if it works out)?
>> Are you asking for reverting that patch?
> Well, I assume the patch has its merits, even if I don't really
> understand what they are from its description.

What is in any way unclear about the final paragraph in the commit message?

> I'm instead asking
> whether something like the below (meanwhile tested) would be
> acceptable.

Not really, seeing as removing perl was the whole point.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:48:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWCg-00035o-9i; Tue, 12 May 2020 14:48:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDyx=62=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYWCf-00035e-Gy
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:48:17 +0000
X-Inumbo-ID: 9ce8e4ce-945f-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ce8e4ce-945f-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 14:48:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589294896;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=f/9zfxNObESnNvBeKDkJZHqMMVvQvh9szwVIA2UPEwQ=;
 b=dWIgdzf07HPpi08Ki9KHbkpnE/lhbUttAaG7JnrScyFvayvl3LLQ4s6N
 ywc3t7HPu64EaPMOeuFhL3itzYAPBtY6sFXQFvrNccCT/lZkjSmci6zFU
 FNhpZYe8/TpeUrZQplI7+O0SZl2l52KDAawr+kil6Sp/b9TmsDeUpdLp1 k=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: m7D3Vy4AYQh2kVocbDNCOZ5G3ChcmdBFJkBd3LZ6tF3cfOzGm5xDO9Ns1y7PlrAt/3EBGxBXjt
 nO86xXpe0RsyjW7Toa65NWKgnfc/wN4yDQw6DyW49ghPmHQ4xiKdLJB2qiGwK1pH3u1PiushXK
 70A6mbpFMkRzCn4KLuPyW2sof/ol/uAVR1V/GM1Kmq8oEshkdn0HK4hOgHwo9SAaAKVOyasYcT
 thqQiJp/mZbEuiaGR2ChSQ/JDfvH7+cwJAJh5cFw9F/ZLrTRSO2IxIdlQlUc/7oF+ibadDnpPp
 2E0=
X-SBRS: 2.7
X-MesageID: 18015676
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="18015676"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 1/3] golang/xenlight: re-track generated go code
Thread-Topic: [PATCH 1/3] golang/xenlight: re-track generated go code
Thread-Index: AQHWHzfOsKEBjrjqSkC89EI6NA5mTKikeMEA
Date: Tue, 12 May 2020 14:48:12 +0000
Message-ID: <772DD16D-DA6B-4009-8B9B-B8187ED06136@citrix.com>
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <2ccb1ae4ffa3f00a13ce303df0e4a44d249861c2.1588282027.git.rosbrookn@ainfosec.com>
In-Reply-To: <2ccb1ae4ffa3f00a13ce303df0e4a44d249861c2.1588282027.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <2585D2C045F1D3429F476AA1F543B893@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>, Ian
 Jackson <Ian.Jackson@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On Apr 30, 2020, at 10:38 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>=20
> Commit df669de074c395a3b2eeb975fddd3da4c148da13 un-tracked the generated
> Go code, but it was decided that we actually keep the generated code
> in-tree.
>=20
> Undo the changes to ignore the generated code, and re-generate it.
>=20
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:50:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWF9-0003wP-OC; Tue, 12 May 2020 14:50:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9uWS=62=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYWF8-0003wI-1g
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:50:50 +0000
X-Inumbo-ID: f7c61a42-945f-11ea-b07b-bc764e2007e4
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7c61a42-945f-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 14:50:49 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id f18so13961915lja.13
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 07:50:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=pUXqzPRwKc5es8JMb7JTyBoF2txoWiHqL8hOELqpXxA=;
 b=vONPMWinD+FUp5soNwqt2Bc6gVEFpFgQd0srt9kx73anU3wjOmLtC0lS82Z+JRrLL7
 VaAFU3vI48UXeZ/wFCAQHKcCFCtPd9grzB6YOnNjou8wij9khtvd23xRW9vMgAX57gAk
 ARKL/OiaDzbxTWhQwFcYSlNZmvggl/tBiI2iIJMwNlLr2iyaCPdDRoCmZmBgLPP+oYn0
 1ymjoP+NdKThT29fLbXQZUuDgvVWT/4mIG3aqmtXZencOXi1FsRQxm1jJtlKjsBId0Ml
 Wjb132M3uUxp4Ac+3MTNDc704KYEj1d9s9d1OqXXRteyPB0ZKLQ+vvvWXsplUB+yqjvm
 O2ew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=pUXqzPRwKc5es8JMb7JTyBoF2txoWiHqL8hOELqpXxA=;
 b=Hx2H/DrrPqagshM0n7QIMbRwK9B+C3VTFSbwvGTVij9kcpoWdLClYrociEqgGgAJ3w
 TkGEm8rrmbjOV3PzDCWOjhqtBfhu2FwfgXL5iH87JeqND/E+3d+p+czbXcaPsor8qPrk
 tWMdU3BVStE5TgMTif2qqneuflKLX9YbGds0gs+rhkdW/02TncHHDw3wxAr2UaVgaHZ/
 O3sCmTC3ibbvrN7pzeUut1pguDbA8s3w71SIZeMNp+HjOgolCdbtcJXyElo27FBzOVKy
 0zx7N1IyhS4gKiA/5vxv4ELwM8tEgq2mjVyjz007mAnObGxDpNliJW/F2CvEAIE76oQ0
 LuMw==
X-Gm-Message-State: AOAM530xV6SB19bnFtAwiXiSYjQa1eMz+awZ/C5bqyXD7Um7+YAbUrGX
 mGQ6S/XTsuseeN6FHiwIPS8rtEGqzekp014l47I=
X-Google-Smtp-Source: ABdhPJyhuOmZ5kgyx7PEzutZc/kB1yw+GXfBfKt3GFrhmLh7QGgbNV6+m3/jHbIQr7bjMTbuL8Cx141as+znoe4z/ZY=
X-Received: by 2002:a05:651c:c8:: with SMTP id
 8mr13185719ljr.182.1589295048261; 
 Tue, 12 May 2020 07:50:48 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1587682041.git.rosbrookn@ainfosec.com>
 <73e709cf0a87f3728de438e4a3b849462fd098ac.1587682041.git.rosbrookn@ainfosec.com>
 <59C12770-F12A-45B7-AB62-8BB3A0DC96D8@citrix.com>
 <CAEBZRSdtmyuKP4+yv8-2FjAkmBAc8L9MNr9r5cT4yUcyNM_v=g@mail.gmail.com>
 <47D89BFE-68EE-4E2F-80C0-6E6444CBDB04@citrix.com>
In-Reply-To: <47D89BFE-68EE-4E2F-80C0-6E6444CBDB04@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Tue, 12 May 2020 10:50:36 -0400
Message-ID: <CAEBZRScrMTobo8a5scX4sUeh1W59T2-A9kuDkGqbxNLa4-JfQA@mail.gmail.com>
Subject: Re: [PATCH v2 1/1] golang/xenlight: add NameToDomid and DomidToName
 util functions
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> > I just tested this way, and it does not work as expected. Since
> > C.INVALID_DOMID is #define'd, cgo does not know it is intended to be
> > used as uint32_t, and decides to declare C.INVALID_DOMID as int. So,
> > C.INVALID_DOMID = ^int(0) = -1, which overflows Domid.
> >
> > I tried a few things in the cgo preamble without any luck.
> > Essentially, one cannot define a const uint32_t in C space, and use
> > that to define a const in Go space.
> >
> > Any ideas?
>
> The following seems to work for me.  In the C section:
>
> // #define INVALID_DOMID_TYPED ((uint32_t)INVALID_DOMID)
>
> Then:
>
>     DomidInvalid Domid = C.INVALID_DOMID_TYPED
>
> Attached is a PoC.  What do you think?

Yes, that looks good.

Thanks,
-NR


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:56:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWKU-00049n-CD; Tue, 12 May 2020 14:56:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYWKS-00049i-Eh
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:56:20 +0000
X-Inumbo-ID: bbdf7843-9460-11ea-a2c2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbdf7843-9460-11ea-a2c2-12813bfff9fa;
 Tue, 12 May 2020 14:56:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 02876AC58;
 Tue, 12 May 2020 14:56:19 +0000 (UTC)
Subject: Re: [PATCH 15/16] x86/entry: Adjust guest paths to be shadow stack
 compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-16-andrew.cooper3@citrix.com>
 <2df78612-2c24-32de-186a-c402e188478c@suse.com>
 <70d7b0e0-a599-6a19-5ace-af4f169545b3@citrix.com>
 <fa78f626-18a1-bd95-b446-8ade5e9282a6@suse.com>
 <b2afaa93-c738-dcfd-cbc7-147e48cd24ee@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6a99236c-4f7b-7cf0-a4a8-f505ef64356d@suse.com>
Date: Tue, 12 May 2020 16:56:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <b2afaa93-c738-dcfd-cbc7-147e48cd24ee@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 23:45, Andrew Cooper wrote:
> On 07/05/2020 17:15, Jan Beulich wrote:
>>>>> --- a/xen/arch/x86/x86_64/entry.S
>>>>> +++ b/xen/arch/x86/x86_64/entry.S
>>>>> @@ -194,6 +194,15 @@ restore_all_guest:
>>>>>          movq  8(%rsp),%rcx            # RIP
>>>>>          ja    iret_exit_to_guest
>>>>>  
>>>>> +        /* Clear the supervisor shadow stack token busy bit. */
>>>>> +.macro rag_clrssbsy
>>>>> +        push %rax
>>>>> +        rdsspq %rax
>>>>> +        clrssbsy (%rax)
>>>>> +        pop %rax
>>>>> +.endm
>>>>> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>>>> In principle you could get away without spilling %rax:
>>>>
>>>>         cmpl  $1,%ecx
>>>>         ja    iret_exit_to_guest
>>>>
>>>>         /* Clear the supervisor shadow stack token busy bit. */
>>>> .macro rag_clrssbsy
>>>>         rdsspq %rcx
>>>>         clrssbsy (%rcx)
>>>> .endm
>>>>         ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>>>>         movq  8(%rsp),%rcx            # RIP
>>>>         cmpw  $FLAT_USER_CS32,16(%rsp)# CS
>>>>         movq  32(%rsp),%rsp           # RSP
>>>>         je    1f
>>>>         sysretq
>>>> 1:      sysretl
>>>>
>>>>         ALIGN
>>>> /* No special register assumptions. */
>>>> iret_exit_to_guest:
>>>>         movq  8(%rsp),%rcx            # RIP
>>>>         andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
>>>>         ...
>>>>
>>>> Also - what about CLRSSBSY failing? It would seem easier to diagnose
>>>> this right here than when getting presumably #DF upon next entry into
>>>> Xen. At the very least I think it deserves a comment if an error case
>>>> does not get handled.
>>> I did consider this, but ultimately decided against it.
>>>
>>> You can't have an unlikely block inside an alternative block because the
>>> jmp's displacement doesn't get fixed up.
>> We do fix up unconditional JMP/CALL displacements; I don't
>> see why we couldn't also do so for conditional ones.
> 
> Only for the first instruction in the block.
> 
> We do not decode the entire block of instructions and fix up each
> displacement.

Right, but that's not overly difficult to overcome - simply split
the ALTERNATIVE in two.
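Concretely, such a split might look roughly like this (an illustrative sketch only, not committed code; the failure label and diagnostic path are hypothetical). The conditional branch becomes the first instruction of its own replacement, so its displacement does get fixed up:

```asm
.macro rag_clrssbsy
        rdsspq %rcx
        clrssbsy (%rcx)
.endm
        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
        /* CLRSSBSY reports failure via EFLAGS.CF; branch to a
         * (hypothetical) diagnostic path instead of waiting for a
         * #DF on the next entry into Xen. */
        ALTERNATIVE "", "jc .L_clrssbsy_failed", X86_FEATURE_XEN_SHSTK
```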

>>>   Keeping everything inline puts
>>> an incorrect statically-predicted branch in program flow.
>>>
>>> Most importantly however, is that the SYSRET path is vastly less common
>>> than the IRET path.  There is no easy way to proactively spot problems
>>> in the IRET path, which means that conditions leading to a problem are
>>> already far more likely to manifest as #DF, so there is very little
>>> value in adding complexity to the SYSRET path in the first place.
>> The SYSRET path being uncommon is a problem by itself imo, if
>> that's indeed the case. I'm sure I've suggested before that
>> we convert frames to TRAP_syscall ones whenever possible,
>> such that we wouldn't go the slower IRET path.
> 
> It is not possible to convert any.
> 
> The opportunistic SYSRET logic in Linux loses you performance in
> reality.  It's just that the extra conditionals are very highly predicted
> and totally dominated by the ring transition cost.
> 
> You can create a synthetic test case where the opportunistic logic makes
> a performance win, but the chances of encountering real-world code where
> TRAP_syscall is clear and %r11 and %rcx match flags/rip are about 1 in 2^128.
> 
> It is very much not worth the extra code and cycles taken to implement.

Oops, yes, for a moment I forgot this minor detail of %rcx/%r11.

>>>> Somewhat similar for SETSSBSY, except there things get complicated by
>>>> it raising #CP instead of setting EFLAGS.CF: Aiui it would require us
>>>> to handle #CP on an IST stack in order to avoid #DF there.
>>> Right, but having #CP as IST gives us far worse problems.
>>>
>>> Being able to spot #CP vs #DF doesn't help usefully.  It's still some
>>> arbitrary period of time after the damage was done.
>>>
>>> Any nesting of #CP (including fault on IRET out) results in losing
>>> program state and entering an infinite loop.
>>>
>>> The cases which end up as #DF are properly fatal to the system, and we
>>> at least get a clean crash out of it.
>> May I suggest that all of this gets spelled out in at least
>> the description of the patch, so that it can be properly
>> understood (and, if need be, revisited) later on?
> 
> Is this really the right patch to do that?
> 
> I do eventually plan to put a whole load of this kinds of details into
> the hypervisor guide.

Well, as you can see, having some of these considerations and
decisions spelled out would already have helped review here.
Whether this is exactly the right patch I'm not sure, but I'd
find it quite helpful if such detail were available at least
for cross-referencing.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 14:59:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 14:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWNv-0004Hd-SE; Tue, 12 May 2020 14:59:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYWNu-0004HX-JW
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 14:59:54 +0000
X-Inumbo-ID: 3b93949e-9461-11ea-a2c2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b93949e-9461-11ea-a2c2-12813bfff9fa;
 Tue, 12 May 2020 14:59:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 865DEAB64;
 Tue, 12 May 2020 14:59:55 +0000 (UTC)
Subject: Re: use of "stat -"
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
Date: Tue, 12 May 2020 16:59:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 16:47, Andrew Cooper wrote:
> On 12/05/2020 15:33, Jan Beulich wrote:
>> On 12.05.2020 16:19, Wei Liu wrote:
>>> On Tue, May 12, 2020 at 12:58:46PM +0200, Jan Beulich wrote:
>>>> now that I've been able to do a little bit of work from the office
>>>> again, I've run into a regression from b72682c602b8 "scripts: Use
>>>> stat to check lock claim". On one of my older machines I've noticed
>>>> guests wouldn't launch anymore, which I've tracked down to the
>>>> system having an old stat on it. Yes, the commit says the needed
>>>> behavior has been available since 2009, but please let's not forget
>>>> that we continue to support building with tool chains from about
>>>> 2007.
>>>>
>>>> Putting in place and using newer tool chain versions without
>>>> touching the base distro is pretty straightforward. Replacing the
>>>> coreutils package isn't, and there's not even an override available
>>>> by which one could point at an alternative one. Hence I think
>>>> bumping the minimum required versions of basic tools should be
>>>> done even more carefully than bumping required tool chain versions
>>>> (which we've not dared to do in years). After having things
>>>> successfully working again with a full revert, I'm now going to
>>>> experiment with adapting behavior to stat's capabilities. Would
>>>> something like that be acceptable (if it works out)?
>>> Are you asking for reverting that patch?
>> Well, I assume the patch has its merits, even if I don't really
>> understand what they are from its description.
> 
> What is in any way unclear about the final paragraph in the commit message?

This being the last sentence instead of the first (or even the
subject) makes it look like a nice side effect rather than the
goal of the change.

>> I'm instead asking
>> whether something like the below (meanwhile tested) would be
>> acceptable.
> 
> Not really, seeing as removing perl was the whole point.

The suggested change doesn't re-introduce a runtime dependency on
Perl, _except_ on very old systems.
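The capability check Jan describes can be sketched like this (illustrative only, not the actual patch; the real lock script compares device and inode of the file behind an open fd, and its perl fallback is elided here):

```sh
#!/bin/sh
# Probe whether the installed stat supports the "-" operand, i.e. statting
# the open file behind stdin. coreutils has supported this since 2009;
# older systems would take the perl fallback branch instead.
if stat -L -c '%D:%i' - </dev/null >/dev/null 2>&1; then
    have_stat_dash=yes   # modern coreutils: no runtime perl dependency
else
    have_stat_dash=no    # old coreutils: fall back to a perl fstat
fi
echo "stat '-' supported: $have_stat_dash"
```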

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 15:06:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 15:06:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWU4-0005B3-MP; Tue, 12 May 2020 15:06:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9uWS=62=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYWU3-0005Ay-MD
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 15:06:15 +0000
X-Inumbo-ID: 1f4da0ec-9462-11ea-b07b-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f4da0ec-9462-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 15:06:14 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id w10so3334491ljo.0
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 08:06:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=F40EM8E52DER+n4TYSS96riz/1NrSyOfyA55JCJymHY=;
 b=r3hHsqNgOC8gC7wzF8+6K7mPNCPF/bltuC4+ioxglXDX2ovbEcAamXH5OWj9VHcqf4
 2Km/I+COpf7Y7zV7B8nHbV4cYzWeB5CGwy+DkKRho+5ZWoV34cjFVWdvNl6sPbJB9I1w
 7VLTbbwJl9yWcFJVRXfglLgO9CYoObA2OiWEQNmPwmFt20tAZSfAQqcQ+WUjkKU4LSoL
 Ef4sQIpNXYzRoHvJlIUu+cmnyvrGaeFwcrMF7FH2bXupDeQs/m3Fkg4Ua/QOM3TvopDL
 i3GVMtDAfa+JHdnP6STFDMgl9XHbc7XlbPDAbNUUVRV7Mn3d+2aSD2ufPYxgfuMsFMD3
 FAPQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=F40EM8E52DER+n4TYSS96riz/1NrSyOfyA55JCJymHY=;
 b=FTn6BNGuL5oW2ISERUE6YgNT8TxrxqqAhMUMlA5YkxA1g06yztlDatFbXe23V6iJn9
 x4gFoQQLrII2w99/SYBdbo3t5OG7Iz/O8E3xtvfM46YKz6Gdt2QuBRFA7gKZ+gR0OaHz
 wlOz9GhFnm2vKNA6YtwChUGM0Ltxip6CcufzbWaPQk/mxRkl1xB97pEhahlPv+7Tk5Ad
 JrexakOXSb/PmOQrIe2GhAfzW1rIlvV/vHWbzNfP3hs0vYznkAceo7S72REWmeqwHReY
 hTFiYsHkwLJw/b3nDecru4AE90ly8eq4XB6cZJmOS4iHFysDr/6TQJX4SEQgMUsiMUJb
 XVrA==
X-Gm-Message-State: AOAM532q50Bmb+BhnGRQY0BDDlLXteXLmsG9Wy8/7m+qBeaNUJ0EMmlL
 dwVduaFf9qGN+gCRcxy4DZU/yM3OweV9MCPUjGw=
X-Google-Smtp-Source: ABdhPJyKYGH1JQvyO/Fis31DhghOlDD0/EhkBD+osrprtbtTvgqXQvrp9ewW87qhGek7YSERPTBEHpp3fSn05zqWkMQ=
X-Received: by 2002:a2e:8047:: with SMTP id p7mr13336451ljg.206.1589295973596; 
 Tue, 12 May 2020 08:06:13 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
 <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
In-Reply-To: <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Tue, 12 May 2020 11:06:02 -0400
Message-ID: <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
Subject: Re: [PATCH 2/3] golang/xenlight: init xenlight go module
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 10:36 AM George Dunlap <George.Dunlap@citrix.com> wrote:
>
>
>
> > On Apr 30, 2020, at 10:39 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> >
> > Initialize the xenlight Go module using the xenbits git-http URL,
> > xenbits.xen.org/git-http/xen.git/tools/golang/xenlight, and update the
> > XEN_GOCODE_URL variable in tools/Rules.mk accordingly.
> >
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> > ---
> > tools/Rules.mk               | 2 +-
> > tools/golang/xenlight/go.mod | 1 +
> > 2 files changed, 2 insertions(+), 1 deletion(-)
> > create mode 100644 tools/golang/xenlight/go.mod
> >
> > diff --git a/tools/Rules.mk b/tools/Rules.mk
> > index 5b8cf748ad..ca33cc7b31 100644
> > --- a/tools/Rules.mk
> > +++ b/tools/Rules.mk
> > @@ -36,7 +36,7 @@ debug ?= y
> > debug_symbols ?= $(debug)
> >
> > XEN_GOPATH        = $(XEN_ROOT)/tools/golang
> > -XEN_GOCODE_URL    = golang.xenproject.org
> > +XEN_GOCODE_URL    = xenbits.xen.org/git-http/xen.git/tools/golang
>
> The primary effect of this will be to install the code in $PREFIX/share/gocode/xenbits.xen.org/git-http/xen.git/tools/golang/xenlight when making debballs or doing `make install`.
>
> I don’t immediately see the advantage of that, particularly if we’re still thinking about having a “prettier” path at some point in the future.  What was your thinking here?

With the module being defined as `xenbits.xen.org/...`, the `build`
Make target will fail as-is for a module-aware version of go (because
it cannot find a module named `golang.xenproject.org/xenlight`). So,
the reason for this change is to preserve the existing functionality
of that Make target. Changing XEN_GOCODE_URL seemed like the correct
change, but I'm open to suggestions.

>
> > ifeq ($(debug_symbols),y)
> > CFLAGS += -g3
> > diff --git a/tools/golang/xenlight/go.mod b/tools/golang/xenlight/go.mod
> > new file mode 100644
> > index 0000000000..232d102153
> > --- /dev/null
> > +++ b/tools/golang/xenlight/go.mod
> > @@ -0,0 +1 @@
> > +module xenbits.xen.org/git-http/xen.git/tools/golang/xenlight
>
> This should probably be s/xen/xenproject/;

AFAICT, that's the correct URL, e.g. [1] and [2]. Am I missing something?

Thanks,
-NR

[1] https://pkg.go.dev/mod/xenbits.xen.org/git-http/xen.git
[2] https://xenbits.xen.org/gitweb/?p=xen.git;a=summary


From xen-devel-bounces@lists.xenproject.org Tue May 12 15:32:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 15:32:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWt7-0007ab-Nc; Tue, 12 May 2020 15:32:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4tCB=62=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYWt6-0007aW-Gk
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 15:32:08 +0000
X-Inumbo-ID: bd36369a-9465-11ea-9887-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd36369a-9465-11ea-9887-bc764e2007e4;
 Tue, 12 May 2020 15:32:07 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C887B206B8;
 Tue, 12 May 2020 15:32:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589297527;
 bh=7pIOx69oiwQOJ+OHt9vkiN1vVRFe69uFl3ubYDQxJ8I=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=WDm6SszikhmcD90xjjfAOqsv91/XX9jUfNXU2dXdsinRsWBDWLtJxQFauTtcbt9Tc
 dQLTNJFjSQVneBg0v1BmfSjHycXalmYEIZqBwV7V4NuopGe3FG1tAn6FYOZevNvDoR
 HPy/jU9VBkFjZkANtibEypc/mqc+EGW6o7nqaRA4=
Date: Tue, 12 May 2020 08:32:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH RESEND 1/2] xen/Kconfig: define EXPERT a bool rather than
 a string
In-Reply-To: <cf8cb895-9eed-0bbd-37b8-73d82392c619@xen.org>
Message-ID: <alpine.DEB.2.21.2005120831520.26167@sstabellini-ThinkPad-T480s>
References: <20200430142548.23751-1-julien@xen.org>
 <20200430142548.23751-2-julien@xen.org>
 <cf8cb895-9eed-0bbd-37b8-73d82392c619@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 12 May 2020, Julien Grall wrote:
> Hi,
> 
> It would be good to have an ack for the small Arm changes.

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> I will wait until
> tomorrow morning (UK time) and commit it if I see no objection.
> 
> Cheers,
> 
> On 30/04/2020 15:25, Julien Grall wrote:
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > Since commit f80fe2b34f08 "xen: Update Kconfig to Linux v5.4" EXPERT
> > can only have two values (enabled or disabled). So switch from a string
> > to a bool.
> > 
> > Take the opportunity to replace all "EXPERT = y" to "EXPERT".
> > 
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > ---
> >   xen/Kconfig                     |  3 +--
> >   xen/Kconfig.debug               |  2 +-
> >   xen/arch/arm/Kconfig            | 10 +++++-----
> >   xen/arch/x86/Kconfig            |  6 +++---
> >   xen/common/Kconfig              | 14 +++++++-------
> >   xen/common/sched/Kconfig        |  2 +-
> >   xen/drivers/passthrough/Kconfig |  2 +-
> >   7 files changed, 19 insertions(+), 20 deletions(-)
> > 
> > diff --git a/xen/Kconfig b/xen/Kconfig
> > index 073042f46730..120b5f412993 100644
> > --- a/xen/Kconfig
> > +++ b/xen/Kconfig
> > @@ -35,8 +35,7 @@ config DEFCONFIG_LIST
> >   	default ARCH_DEFCONFIG
> >     config EXPERT
> > -	string
> > -	default y if "$(XEN_CONFIG_EXPERT)" = "y"
> > +	def_bool y if "$(XEN_CONFIG_EXPERT)" = "y"
> >     config LTO
> >   	bool "Link Time Optimisation"
> > diff --git a/xen/Kconfig.debug b/xen/Kconfig.debug
> > index ee6ee33b69be..fad3050d4f7b 100644
> > --- a/xen/Kconfig.debug
> > +++ b/xen/Kconfig.debug
> > @@ -11,7 +11,7 @@ config DEBUG
> >     	  You probably want to say 'N' here.
> >   -if DEBUG || EXPERT = "y"
> > +if DEBUG || EXPERT
> >     config CRASH_DEBUG
> >   	bool "Crash Debugging Support"
> > diff --git a/xen/arch/arm/Kconfig b/xen/arch/arm/Kconfig
> > index d51f66072e2e..6a43576dac5e 100644
> > --- a/xen/arch/arm/Kconfig
> > +++ b/xen/arch/arm/Kconfig
> > @@ -33,7 +33,7 @@ source "arch/Kconfig"
> >     config ACPI
> >   	bool
> > -	prompt "ACPI (Advanced Configuration and Power Interface) Support" if
> > EXPERT = "y"
> > +	prompt "ACPI (Advanced Configuration and Power Interface) Support" if
> > EXPERT
> >   	depends on ARM_64
> >   	---help---
> >   @@ -51,7 +51,7 @@ config GICV3
> >     config HAS_ITS
> >           bool
> > -        prompt "GICv3 ITS MSI controller support" if EXPERT = "y"
> > +        prompt "GICv3 ITS MSI controller support" if EXPERT
> >           depends on GICV3 && !NEW_VGIC
> >     config HVM
> > @@ -81,7 +81,7 @@ config SBSA_VUART_CONSOLE
> >   	  SBSA Generic UART implements a subset of ARM PL011 UART.
> >     config ARM_SSBD
> > -	bool "Speculative Store Bypass Disable" if EXPERT = "y"
> > +	bool "Speculative Store Bypass Disable" if EXPERT
> >   	depends on HAS_ALTERNATIVE
> >   	default y
> >   	help
> > @@ -91,7 +91,7 @@ config ARM_SSBD
> >   	  If unsure, say Y.
> >     config HARDEN_BRANCH_PREDICTOR
> > -	bool "Harden the branch predictor against aliasing attacks" if EXPERT
> > = "y"
> > +	bool "Harden the branch predictor against aliasing attacks" if EXPERT
> >   	default y
> >   	help
> >   	  Speculation attacks against some high-performance processors rely on
> > @@ -108,7 +108,7 @@ config HARDEN_BRANCH_PREDICTOR
> >   	  If unsure, say Y.
> >     config TEE
> > -	bool "Enable TEE mediators support" if EXPERT = "y"
> > +	bool "Enable TEE mediators support" if EXPERT
> >   	default n
> >   	help
> >   	  This option enables generic TEE mediators support. It allows guests
> > diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> > index a69be983d6f3..3237cb2f31f4 100644
> > --- a/xen/arch/x86/Kconfig
> > +++ b/xen/arch/x86/Kconfig
> > @@ -112,7 +112,7 @@ config BIGMEM
> >   	  If unsure, say N.
> >     config HVM_FEP
> > -	bool "HVM Forced Emulation Prefix support" if EXPERT = "y"
> > +	bool "HVM Forced Emulation Prefix support" if EXPERT
> >   	default DEBUG
> >   	depends on HVM
> >   	---help---
> > @@ -132,7 +132,7 @@ config HVM_FEP
> >     config TBOOT
> >   	def_bool y
> > -	prompt "Xen tboot support" if EXPERT = "y"
> > +	prompt "Xen tboot support" if EXPERT
> >   	select CRYPTO
> >   	---help---
> >   	  Allows support for Trusted Boot using the Intel(R) Trusted Execution
> > @@ -217,7 +217,7 @@ config HYPERV_GUEST
> >   endif
> >     config MEM_SHARING
> > -	bool "Xen memory sharing support" if EXPERT = "y"
> > +	bool "Xen memory sharing support" if EXPERT
> >   	depends on HVM
> >     endmenu
> > diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> > index a6914fcae98b..fe9b41f72128 100644
> > --- a/xen/common/Kconfig
> > +++ b/xen/common/Kconfig
> > @@ -12,7 +12,7 @@ config CORE_PARKING
> >   	bool
> >     config GRANT_TABLE
> > -	bool "Grant table support" if EXPERT = "y"
> > +	bool "Grant table support" if EXPERT
> >   	default y
> >   	---help---
> >   	  Grant table provides a generic mechanism to memory sharing
> > @@ -128,7 +128,7 @@ config KEXEC
> >   	  If unsure, say Y.
> >     config EFI_SET_VIRTUAL_ADDRESS_MAP
> > -    bool "EFI: call SetVirtualAddressMap()" if EXPERT = "y"
> > +    bool "EFI: call SetVirtualAddressMap()" if EXPERT
> >       ---help---
> >         Call EFI SetVirtualAddressMap() runtime service to setup memory map
> > for
> >         further runtime services. According to UEFI spec, it isn't strictly
> > @@ -139,7 +139,7 @@ config EFI_SET_VIRTUAL_ADDRESS_MAP
> >     config XENOPROF
> >   	def_bool y
> > -	prompt "Xen Oprofile Support" if EXPERT = "y"
> > +	prompt "Xen Oprofile Support" if EXPERT
> >   	depends on X86
> >   	---help---
> >   	  Xen OProfile (Xenoprof) is a system-wide profiler for Xen virtual
> > @@ -176,7 +176,7 @@ config XSM_FLASK
> >     config XSM_FLASK_AVC_STATS
> >   	def_bool y
> > -	prompt "Maintain statistics on the FLASK access vector cache" if
> > EXPERT = "y"
> > +	prompt "Maintain statistics on the FLASK access vector cache" if
> > EXPERT
> >   	depends on XSM_FLASK
> >   	---help---
> >   	  Maintain counters on the access vector cache that can be viewed
> > using
> > @@ -249,7 +249,7 @@ config LATE_HWDOM
> >   	  If unsure, say N.
> >     config ARGO
> > -	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT =
> > "y"
> > +	bool "Argo: hypervisor-mediated interdomain communication" if EXPERT
> >   	---help---
> >   	  Enables a hypercall for domains to ask the hypervisor to perform
> >   	  data transfer of messages between domains.
> > @@ -321,7 +321,7 @@ config SUPPRESS_DUPLICATE_SYMBOL_WARNINGS
> >   	  build becoming overly verbose.
> >     config CMDLINE
> > -	string "Built-in hypervisor command string" if EXPERT = "y"
> > +	string "Built-in hypervisor command string" if EXPERT
> >   	default ""
> >   	---help---
> >   	  Enter arguments here that should be compiled into the hypervisor
> > @@ -354,7 +354,7 @@ config DOM0_MEM
> >   	  Leave empty if you are not sure what to specify.
> >     config TRACEBUFFER
> > -	bool "Enable tracing infrastructure" if EXPERT = "y"
> > +	bool "Enable tracing infrastructure" if EXPERT
> >   	default y
> >   	---help---
> >   	  Enable tracing infrastructure and pre-defined tracepoints within
> > Xen.
> > diff --git a/xen/common/sched/Kconfig b/xen/common/sched/Kconfig
> > index 883ac87cab65..61231aacaa1c 100644
> > --- a/xen/common/sched/Kconfig
> > +++ b/xen/common/sched/Kconfig
> > @@ -1,5 +1,5 @@
> >   menu "Schedulers"
> > -	visible if EXPERT = "y"
> > +	visible if EXPERT
> >     config SCHED_CREDIT
> >   	bool "Credit scheduler support"
> > diff --git a/xen/drivers/passthrough/Kconfig
> > b/xen/drivers/passthrough/Kconfig
> > index e7e62ccd63c3..73f4ad89ecbc 100644
> > --- a/xen/drivers/passthrough/Kconfig
> > +++ b/xen/drivers/passthrough/Kconfig
> > @@ -14,7 +14,7 @@ config ARM_SMMU
> >   	  ARM SMMU architecture.
> >     config IPMMU_VMSA
> > -	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs" if EXPERT = "y"
> > +	bool "Renesas IPMMU-VMSA found in R-Car Gen3 SoCs" if EXPERT
> >   	depends on ARM_64
> >   	---help---
> >   	  Support for implementations of the Renesas IPMMU-VMSA found
> > 
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Tue May 12 15:32:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 15:32:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYWtG-0007av-Vl; Tue, 12 May 2020 15:32:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYWtG-0007aq-93
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 15:32:18 +0000
X-Inumbo-ID: c2a84208-9465-11ea-a2c8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2a84208-9465-11ea-a2c8-12813bfff9fa;
 Tue, 12 May 2020 15:32:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A3526AB64;
 Tue, 12 May 2020 15:32:18 +0000 (UTC)
Subject: Re: [PATCH 2/2] x86/boot: Drop .note.gnu.properties in build32.lds
To: Jason Andryuk <jandryuk@gmail.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <20200512033948.3507-3-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <69dd92f0-5b23-7a3d-3568-feab20874f97@suse.com>
Date: Tue, 12 May 2020 17:32:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200512033948.3507-3-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 05:39, Jason Andryuk wrote:
> reloc.S and cmdline.S are generated from compiled object files as
> arrays of executable bytes for inclusion in head.S.  Object files generated by
> an -fcf-protection toolchain include a .note.gnu.property section.  The
> way reloc.S and cmdline.S are generated, the bytes of .note.gnu.property
> become the start of the .S files.  When head.S calls reloc or
> cmdline_parse_early, those note bytes are executed instead of the
> intended .text section.  This results in an early crash in reloc.

I may be misremembering, but I vaguely recall some similar change
suggestion. What I'm missing here is some form of statement as to
whether this is legitimate tool chain behavior, or a bug, and
hence whether this is a fix or a workaround.

> Discard the .note.gnu.property section when linking to avoid the extra
> bytes.

If we go this route (and if, as per above, I'm misremembering,
meaning we didn't reject such a change earlier on), why would we
not strip .note and .note.* in one go?

> Stefan Bader also noticed that build32.mk requires -fcf-protection=none
> or else the hypervisor will not boot.
> https://bugs.launchpad.net/ubuntu/+source/gcc-9/+bug/1863260

How's this related to the change here?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 15:47:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 15:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYX7k-0000DE-Af; Tue, 12 May 2020 15:47:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDyx=62=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYX7i-0000D9-Pe
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 15:47:14 +0000
X-Inumbo-ID: d911e056-9467-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d911e056-9467-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 15:47:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589298434;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=1oxtECHPG/0CvmNLFoQ1Z0B2TgsYdj1TGNyduqlHtec=;
 b=OwJBGNGKawrkcW/bWPvHQ8TtMv5CZnYQFfUDmYX7vjjSZeKRTntbpXXf
 ge3rAnz8+zDvZvuFmqyZ/MEXF4+yBrJJyDs7TJmmUzQ9tshRRfLPqt8Gk
 Dkfcn7oK6zeruNg+77w+9ufRmk84NoqWu5Y1YwisCIji8I6Zka59CqD/T 8=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: mIzLOdDsRl7Fk+E4TpFYhzCJp1P8MgjH0zxxr7bJZMq93nUR/KqO7iJrKh92AC48mhaW+wTLtf
 BIMciGu8NI1yUjq+i5Oi1T1YFqo99GUz+w+tDHyBM1gpd9yUl3O2+nzQFoqegep0pqMAiBipen
 huqBLN9y7qGQMZfDnAvqHb90PDFaSvNwVX2UxTFbmN+BPRVwAbyHUEbNC+FJjkyXVOwBQPAs3l
 RutdmcA9vwlANYu/w91mnLiRqioAgj9KxVua2b8LwjXxqh/15lKETUmoaLYGJn9I5iX7KDQJ4S
 Qvg=
X-SBRS: 2.7
X-MesageID: 17355183
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17355183"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Topic: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Index: AQHWHzfMuMNvM1D6EUuQ5HJvPIpidKikdZCAgAAILACAAAt9gA==
Date: Tue, 12 May 2020 15:47:09 +0000
Message-ID: <AC8F9121-EA81-4461-A963-F195BE2C070A@citrix.com>
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
 <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
 <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
In-Reply-To: <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <4D4FC0C147D761449A6F7E39CA16AC2A@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 12, 2020, at 4:06 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> 
> On Tue, May 12, 2020 at 10:36 AM George Dunlap <George.Dunlap@citrix.com> wrote:
>> 
>> 
>> 
>>> On Apr 30, 2020, at 10:39 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>>> 
>>> +module xenbits.xen.org/git-http/xen.git/tools/golang/xenlight
>> 
>> This should probably be s/xen/xenproject/;
> 
> AFAICT, that's the correct URL, e.g. [1] and [2]. Am I missing something?

For trademark reasons, when we joined the Linux Foundation, we did a big rebranding from “Xen” to “XenProject”.  (See [1] for more details.)  The xen.org domains are just for backwards compatibility. :-)

-George

[1] https://xenproject.org/2013/04/17/upcoming-changes-to-the-xen-websites/


From xen-devel-bounces@lists.xenproject.org Tue May 12 15:52:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 15:52:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXCx-00011Z-V6; Tue, 12 May 2020 15:52:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rfNj=62=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYXCw-00011U-VZ
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 15:52:39 +0000
X-Inumbo-ID: 9a465e82-9468-11ea-b9cf-bc764e2007e4
Received: from mail-lf1-x129.google.com (unknown [2a00:1450:4864:20::129])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a465e82-9468-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 15:52:38 +0000 (UTC)
Received: by mail-lf1-x129.google.com with SMTP id a4so10979626lfh.12
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 08:52:38 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=Rw2zspztliujrBSlCejt1mULr8uYRGTb7Bs3TU+w8Z4=;
 b=DlzrmPO/lXQL8s3PQpz9yZUQ/kMmbfebLxIQtdayfZTmfrJq2ziKiDUmj3vgv0160E
 cjthc958h06SS/n5GCmfGiFTyCAmwFQGngGmYQmLTLGsC/J6y2/2a0ocv0QLs3zfa+fh
 KMN9FoQZYqNS63xZBya4f556kej/aDiOogIlpAJleO6+iEtNzCnrin8k0JLP0rC9xoWi
 Eqz9gKKq/7CeVuSU6036v5ZRKwlUzj6VJQDVTrokuj5mN/2lm/t/LM9nW+lOqVtgOJ9S
 UHvdUPfE4PeGnuxupSaM8zUKIto7u29YZnJPRgIMz+duGInRW/mUTf03yZU5KZ0o1HvO
 /RKw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=Rw2zspztliujrBSlCejt1mULr8uYRGTb7Bs3TU+w8Z4=;
 b=KzWbDL69J8vll3Zq9uyedVYCuF4GeffkMsdRc5mERH9K+a1ODQV3QzVGYNI+jNch0T
 HuaB87voeMKT/h1kJXiCWVSM0ShqBB/H6T4cs8yMBX0qgeg04dD+u05PRQUmIr21tK/w
 AOFB284LpJ1ec5JQlQ4c2iQpc4DKS9zo2McTkoNsvRMEklYUpWC1n4/GxzBiqO+vpPg6
 CFlyjO0+fKRGy4WjN/aiRc1pjedhKdBP8IglTwU6QmbDirAETO/bXE9DTgLatwj7TbFB
 lywf2ICfXOU/f1HOC118/CWspw/dX/fEoBj+A4tjhaX3917nUmJcMaltBFK9sHy3AM1M
 fBPA==
X-Gm-Message-State: AOAM530M4dzILmH5Hdlq3Szrnt2BOp/myB6RdpOCWMW5w3s0toIasYpC
 3sMoXeiFgspP/kuLRznSlOzemZKABRPzr+Zoti4=
X-Google-Smtp-Source: ABdhPJxqQtqYiVUb1/vgFN7Zs7KiVWVWysMEQmDL+qno6j0+R718FBCMjY6VyN2yLopkBC8YxUCD5AFxrxIPXz4dVXE=
X-Received: by 2002:a19:e041:: with SMTP id g1mr14519604lfj.70.1589298756903; 
 Tue, 12 May 2020 08:52:36 -0700 (PDT)
MIME-Version: 1.0
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
 <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
In-Reply-To: <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 12 May 2020 11:52:25 -0400
Message-ID: <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
Subject: Re: use of "stat -"
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 10:59 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 12.05.2020 16:47, Andrew Cooper wrote:
> > On 12/05/2020 15:33, Jan Beulich wrote:
> >> On 12.05.2020 16:19, Wei Liu wrote:
> >>> On Tue, May 12, 2020 at 12:58:46PM +0200, Jan Beulich wrote:
> >>>> now that I've been able to do a little bit of work from the office
> >>>> again, I've run into a regression from b72682c602b8 "scripts: Use
> >>>> stat to check lock claim". On one of my older machines I've noticed
> >>>> guests wouldn't launch anymore, which I've tracked down to the
> >>>> system having an old stat on it. Yes, the commit says the needed
> >>>> behavior has been available since 2009, but please let's not forget
> >>>> that we continue to support building with tool chains from about
> >>>> 2007.

Sorry for regressing your old system setup.  Out of curiosity, what OS
version are you using?

> >>>> Putting in place and using newer tool chain versions without
> >>>> touching the base distro is pretty straightforward. Replacing the
> >>>> coreutils package isn't, and there's not even an override available
> >>>> by which one could point at an alternative one. Hence I think
> >>>> bumping the minimum required versions of basic tools should be
> >>>> done even more carefully than bumping required tool chain versions
> >>>> (which we've not dared to do in years). After having things
> >>>> successfully working again with a full revert, I'm now going to
> >>>> experiment with adapting behavior to stat's capabilities. Would
> >>>> something like that be acceptable (if it works out)?
> >>> Are you asking for reverting that patch?
> >> Well, I assume the patch has its merits, even if I don't really
> >> understand what they are from its description.
> >
> > What is in any away unclear about the final paragraph in the commit message?
>
> This being the last sentence instead of the first (or even the
> subject) makes it look like this is a nice side effect, not
> like this was the goal of the change.

I see how the motivation wasn't conveyed properly in the commit
message.  It was captured in the cover letter, but that doesn't make
it into the repo. :(

> >> I'm instead asking
> >> whether something like the below (meanwhile tested) would be
> >> acceptable.
> >
> > Not really, seeing as removing perl was the whole point.
>
> The suggested change doesn't re-introduce a runtime dependency on
> Perl, _except_ on very old systems.

Yes, the runtime detection looks okay.  However, Ian may not like
testing `stat -` since he previously disliked the extra overhead of
calling sed.

v1 of the patchset created a dedicated C utility, but Ian preferred
using stat(1).
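For concreteness, the runtime capability check being discussed could look
something like this (a hypothetical sketch, not the patch Jan tested;
the function and variable names are invented):

```shell
# Hypothetical sketch: probe once whether this stat(1) accepts "-"
# (i.e. stat the open file descriptor on stdin), and fall back to perl
# only on coreutils too old (pre-2009) to support it.
lockfile=$(mktemp)

if stat -L -c '%d:%i' - <"$lockfile" >/dev/null 2>&1; then
    # Modern coreutils: identify the file behind open fd 3 directly.
    fd_ident() { stat -L -c '%d:%i' - <&3; }
else
    # Old coreutils: same device:inode pair via perl's stat().
    fd_ident() { perl -e '@s = stat shift; print "$s[0]:$s[1]"' /dev/fd/3; }
fi

exec 3<"$lockfile"
# The lock claim is valid iff the fd and the path still name one file.
[ "$(fd_ident)" = "$(stat -L -c '%d:%i' "$lockfile")" ] && echo "lock file unchanged"
rm -f "$lockfile"
```

The probe itself is cheap and happens once per invocation, so the perl
dependency only ever gets exercised on very old systems.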

Qubes takes a different approach to removing perl: it bypasses stat-ing
the FD altogether.  "Use plain flock on open FD. This makes the locking
mechanism not tolerate removing the lock file once created."
https://github.com/QubesOS/qubes-vmm-xen/blob/xen-4.13/patch-tools-hotplug-drop-perl-usage-in-locking-mechanism.patch
So their lock files persist even when no process holds the lock.
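A minimal sketch of that flock-based scheme (hypothetical file names,
not the Qubes patch itself):

```shell
# Hold an exclusive lock on an open fd instead of stat-ing the fd: the
# lock lives as long as fd 9 stays open, at the cost that the lock file
# must never be removed once created.
lockfile=${TMPDIR:-/tmp}/demo-hotplug.lock
exec 9>"$lockfile"          # open (and create) the lock file on fd 9
flock -x 9                  # block until we hold the exclusive lock
echo "lock acquired on $lockfile"
# ... critical section: the lock is held while fd 9 stays open ...
flock -u 9                  # release; the file itself persists
```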

I was just looking to remove perl since it's a large dependency for
this single use.  I'm not tied to a particular approach.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 12 15:58:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 15:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXIP-0001EY-Mz; Tue, 12 May 2020 15:58:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYXIO-0001EO-A9
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 15:58:16 +0000
X-Inumbo-ID: 6348ec5a-9469-11ea-a2cd-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6348ec5a-9469-11ea-a2cd-12813bfff9fa;
 Tue, 12 May 2020 15:58:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589299096;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=XJtRyoWkbnyGP96LNpNOc6p2euzgTk05FDsqLU/e4Q8=;
 b=KlQi2KSBMdA7hVlbVjwJihdVnStfKCFRbW0+nrGyfCJk1GEi6IHPlXuO
 4XLOvPcBiMOpfXGgmnj2OkI9xcV0SiN7KNGZDIk+FQlgYiRhbB9PUUkb0
 cprfKe9U7UMkZBTqzpBXVrdNU2Y6aFhw+CxgkoaTaC6OfBgbHUZuBL1Ub 8=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: zd44B1xrkETwMRCWIwqUb8Cd0Ds0m+uwNTW4de+Sq4y/ikYsGRpZLnT6TuDoSH8t2yutrhXtvn
 3k+0W9pKlklBoHJW6FgOnvWOCGTEpYNRiGzBL5pUD9SW8Z/FCEVlBJ3bsnJQua9R8wYBhle6Pz
 EQqLkX14pg+jW4FvdMTExblZX2Urrqsq8CrY/EJByCRsW0lfHkW2N4ApL77sTCu71kBy3G/Jsi
 noWbvpTX++SXCye/jT+p4UPN/TwUmdxTHBk4UzPurRCOhhVPkFpjpP/59bqIRPvLHkBqpCfcPa
 dd4=
X-SBRS: 2.7
X-MesageID: 17356193
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17356193"
Subject: Re: [PATCH 2/2] x86/boot: Drop .note.gnu.properties in build32.lds
To: Jan Beulich <jbeulich@suse.com>, Jason Andryuk <jandryuk@gmail.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <20200512033948.3507-3-jandryuk@gmail.com>
 <69dd92f0-5b23-7a3d-3568-feab20874f97@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <372f83e4-6016-cc10-a8e6-970d644eb561@citrix.com>
Date: Tue, 12 May 2020 16:58:09 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <69dd92f0-5b23-7a3d-3568-feab20874f97@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12/05/2020 16:32, Jan Beulich wrote:
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>
> On 12.05.2020 05:39, Jason Andryuk wrote:
>> reloc.S and cmdline.S are generated from compiled object files as
>> arrays of executable bytes for inclusion in head.S.  Object files generated by
>> an -fcf-protection toolchain include a .note.gnu.property section.  The
>> way reloc.S and cmdline.S are generated, the bytes of .note.gnu.property
>> become the start of the .S files.  When head.S calls reloc or
>> cmdline_parse_early, those note bytes are executed instead of the
>> intended .text section.  This results in an early crash in reloc.
> I may be misremembering, but I vaguely recall some similar change
> suggestion. What I'm missing here is some form of statement as to
> whether this is legitimate tool chain behavior, or a bug, and
> hence whether this is a fix or a workaround.

The linker is free to position unreferenced sections anywhere it wishes.

It is deeply unhelpful behaviour, but neither the Binutils nor the Clang
developers consider it something that needs fixing.

One option might be to use --orphan-handling=error so unexpected
toolchain behaviour breaks the build, or in this case perhaps =discard
might be better.
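For illustration, the discard variant could look roughly like this in
build32.lds (a hypothetical fragment, not a proposed patch):

```
SECTIONS
{
        /* ... existing output sections ... */

        /* Drop all note sections so the toolchain cannot place a
         * .note.gnu.property (or any other orphan note) ahead of
         * .text in the flat binary. */
        /DISCARD/ : {
                *(.note)
                *(.note.*)
        }
}
```

whereas --orphan-handling=error would go on the LD command line in
build32.mk, so any future orphan section fails the build loudly rather
than silently landing somewhere unfortunate.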

>> Discard the .note.gnu.property section when linking to avoid the extra
>> bytes.
> If we go this route (and if, as per above, I'm misremembering,
> meaning we didn't reject such a change earlier on), why would we
> not strip .note and .note.* in one go?
>
>> Stefan Bader also noticed that build32.mk requires -fcf-protection=none
>> or else the hypervisor will not boot.
>> https://bugs.launchpad.net/ubuntu/+source/gcc-9/+bug/1863260
> How's this related to the change here?

I think there is a bit of confusion as to exactly what is going on.

Ubuntu defaults -fcf-protection to enabled, which has the side effect of
turning on CET: the compiler inserts ENDBR{32,64} instructions and generates
a .note.gnu.property section indicating that the binary is CET-IBT compatible.
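The note is easy to reproduce in isolation (assuming an x86 gcc recent
enough to accept -fcf-protection; the file names are arbitrary):

```shell
# Compile a trivial TU with branch protection and dump its notes; on an
# -fcf-protection toolchain this shows the .note.gnu.property section
# carrying the x86 feature bits (IBT/SHSTK).
src=$(mktemp --suffix=.c)
obj="${src%.c}.o"
echo 'int main(void) { return 0; }' > "$src"
if gcc -c -fcf-protection=full "$src" -o "$obj" 2>/dev/null; then
    # Unless discarded at link time, this section's bytes end up at the
    # start of the generated reloc.S/cmdline.S blobs.
    readelf -S "$obj" | grep note.gnu.property
    readelf -n "$obj"
fi
rm -f "$src" "$obj"
```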

ENDBR* instructions come from the Hint Nop space so are safe on older
processors, but do ultimately add to binary bloat.  It also occurs to me
that it likely breaks livepatch.

The reason Xen fails to boot is purely to do with the position of
.note.gnu.property, not the ENDBR* instructions.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 12 15:58:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 15:58:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXIP-0001Ef-V8; Tue, 12 May 2020 15:58:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9uWS=62=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYXIO-0001ET-Tk
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 15:58:16 +0000
X-Inumbo-ID: 63dc07ec-9469-11ea-b9cf-bc764e2007e4
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63dc07ec-9469-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 15:58:16 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id d22so4965291lfm.11
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 08:58:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=6I2U4Zp1WU1URlas1ELBN65ErA5DxK5M5EbOQI4eRJg=;
 b=YosAvIxKZrx+gdjP8L8pZcMkP4r9ygtODG1sfhuYLV2fkj9GF5vA6XzC0zSB5CX17D
 H4Hvnpwhzn2g6LqhaLdb1ns21mMZrb19NPSmlp0wKvoK58j04NJmQPQd04S+mrUNpJLu
 Fmatq7os0iBmf64rLkby+kDsMwObs9KwE7tJ6vXu8EXFHhsifggzOy85HPuz26D2hOb5
 BHMLw4dPyR2zeTU+ngkpufLGEmJJEYhR2OamDnbhZo6vGmYdsRGXW36jYkBFWjL/F5fz
 iU7LcPa1lOWKcV9ZaR7QqX/Ai7j6lcXlYPGMyurTPDbIKcJ9M4vqS7o7LAAGC+rzJiFz
 ywxA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=6I2U4Zp1WU1URlas1ELBN65ErA5DxK5M5EbOQI4eRJg=;
 b=Qm0nBvzLnDBG6iqNYUxdkajrUDaHI7jbh8EwfhWKDE1jhvLxdVgznZx/8j5ZM6/v+W
 VggpxlT3hGeQIHd4Wg9b0ccErC1RYX5yt22cZ3Xmt7Hx7I5EMo4MrKwRQB2mbDRcwxGm
 ODe3ForgnHQzeD22WWBXbK2tgk6U5qKmCgDJRcz30+oMfpKcXCVQD5sU7MArkwSoBS48
 jG61OPICEq78FyiUFgw9FjXPrvAb0D6bQ1Whs/bL8pCJPHoOC6u5tkwT8lxlelvWkW7r
 CWOdhhQ/ZILwDHdq3JsOLeVp+fKVLYzDSXSVETrFFKVwr9aIQBbuOx1TMB1hRz/2JMXs
 A33g==
X-Gm-Message-State: AOAM5316wInYLFHpNHah28zP/r0e+vFR28SwM0JcICJ9QshHd8qaiyDN
 bpL7a2SHcuVJK5h/r6AFe2HMC4GS8/50wiyE7pw=
X-Google-Smtp-Source: ABdhPJx4ezPcb0722JZ7HfS00b8/G6UbRYl+xvNzq5s/tBT+M3OzbdB0QfEuwKMfDDM/4/bPXLgqvx5uV+nIy1uUqm0=
X-Received: by 2002:ac2:4da1:: with SMTP id h1mr14624104lfe.152.1589299095160; 
 Tue, 12 May 2020 08:58:15 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
 <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
 <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
 <AC8F9121-EA81-4461-A963-F195BE2C070A@citrix.com>
In-Reply-To: <AC8F9121-EA81-4461-A963-F195BE2C070A@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Tue, 12 May 2020 11:58:03 -0400
Message-ID: <CAEBZRSe37UZZY23QA55WbNYMohwLVs4k4F4UFwV7iUqOeE=caA@mail.gmail.com>
Subject: Re: [PATCH 2/3] golang/xenlight: init xenlight go module
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> For trademark reasons, when we joined the Linux Foundation, we did a big rebranding from “Xen” to “XenProject”.  (See [1] for more details.)  The xen.org domains are just for backwards compatibility. :-)

Ah, thanks! I'll fix that.

-NR


From xen-devel-bounces@lists.xenproject.org Tue May 12 16:11:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 16:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXVT-0003R4-2N; Tue, 12 May 2020 16:11:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qSkR=62=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYXVS-0003Qz-Bf
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 16:11:46 +0000
X-Inumbo-ID: 45cefa8d-946b-11ea-a2d0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45cefa8d-946b-11ea-a2d0-12813bfff9fa;
 Tue, 12 May 2020 16:11:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F2862AC53;
 Tue, 12 May 2020 16:11:46 +0000 (UTC)
Subject: Re: [PATCH 2/2] x86/boot: Drop .note.gnu.properties in build32.lds
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <20200512033948.3507-3-jandryuk@gmail.com>
 <69dd92f0-5b23-7a3d-3568-feab20874f97@suse.com>
 <372f83e4-6016-cc10-a8e6-970d644eb561@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5e976f0f-3a9b-6159-f4bf-b9f8b6c578d2@suse.com>
Date: Tue, 12 May 2020 18:11:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <372f83e4-6016-cc10-a8e6-970d644eb561@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 17:58, Andrew Cooper wrote:
> On 12/05/2020 16:32, Jan Beulich wrote:
>> On 12.05.2020 05:39, Jason Andryuk wrote:
>>> Discard the .note.gnu.property section when linking to avoid the extra
>>> bytes.
>> If we go this route (and if, as per above, I'm misremembering,
>> meaning we didn't reject such a change earlier on), why would we
>> not strip .note and .note.* in one go?
>>
>>> Stefan Bader also noticed that build32.mk requires -fcf-protection=none
>>> or else the hypervisor will not boot.
>>> https://bugs.launchpad.net/ubuntu/+source/gcc-9/+bug/1863260
>> How's this related to the change here?
> 
> I think there is a bit of confusion as to exactly what is going on.
> 
> Ubuntu defaults -fcf-protection to enabled, which has a side effect of
> turning on CET, which inserts ENDBR{32,64} instructions and generates
> .note.gnu.properties indicating that the binary is CET-IBT compatible.

I.e. in principle this 2nd patch wouldn't be necessary at all if
we forced -fcf-protection=none unilaterally, and provided build32.mk
properly inherits CFLAGS. Discarding note sections may still be
a desirable thing to do though ...
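
[A discard rule of the shape being discussed might look like the sketch
below — an illustration of stripping .note and .note.* in one go, not the
actual build32.lds change from the patch:]

```ld
/* Hypothetical /DISCARD/ rule: drop all note sections in one go,
 * as suggested, rather than only .note.gnu.property. */
SECTIONS
{
    /DISCARD/ : {
        *(.note)
        *(.note.*)
    }
}
```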

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 12 16:13:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 16:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXXV-0003XR-Fe; Tue, 12 May 2020 16:13:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYXXT-0003XJ-U6
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 16:13:51 +0000
X-Inumbo-ID: 90c9e146-946b-11ea-a2d0-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90c9e146-946b-11ea-a2d0-12813bfff9fa;
 Tue, 12 May 2020 16:13:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589300031;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=nkn7Wt1Uo3TggDhSxw2xw1E5I1XEGPr21R5zH/K1+Kg=;
 b=IhZZyOOgCBHzcIY0KIPa5b6M66myZ+rFSJnwobAymcHHxA1aUHni8esH
 4i5eaqZtWrRVteTOHkuUyQ1h0w6A/fluAG6+ZdzAJ7fR42NOjiz4fD8zg
 Y1y61sOwgHv5JC8Xzve8C+CDWYOkicqStY4llXe94hG4Fwns4TMgzM7Ls I=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: UHA0EXrdwzH8OPjNA0UPGF57MjE1LJclRoFHiuLAFG/WhduyFYNAJ7mdtY1earuCA4ohjRyj2q
 r81nDz+3foZLu56w+1xGi5qNyPOTM8p4cwAzhNJjxHkMmkfnmSKuVIjhKfz4OrCpzFumNVcCFx
 6//HygwizG7FWLAQ6IzTIxR1MyQRkkjQJDX5Pt/Y/FKf5p39+1485FiYu6fN0ko02RJCHP/0kM
 MuXF8KeEcRulGDKKrpczNAi1Q8BtalV6QOoDR7dNIBsX0pk2I0r1QU77Ih6iHinoalGgiE5YCe
 KAQ=
X-SBRS: 2.7
X-MesageID: 17358402
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17358402"
Subject: Re: [PATCH 2/2] x86/boot: Drop .note.gnu.properties in build32.lds
To: Jan Beulich <jbeulich@suse.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <20200512033948.3507-3-jandryuk@gmail.com>
 <69dd92f0-5b23-7a3d-3568-feab20874f97@suse.com>
 <372f83e4-6016-cc10-a8e6-970d644eb561@citrix.com>
 <5e976f0f-3a9b-6159-f4bf-b9f8b6c578d2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <511378d3-038f-fa6d-905c-6697ed9ecf1b@citrix.com>
Date: Tue, 12 May 2020 17:13:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <5e976f0f-3a9b-6159-f4bf-b9f8b6c578d2@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Wei
 Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12/05/2020 17:11, Jan Beulich wrote:
> On 12.05.2020 17:58, Andrew Cooper wrote:
>> On 12/05/2020 16:32, Jan Beulich wrote:
>>> On 12.05.2020 05:39, Jason Andryuk wrote:
>>>> Discard the .note.gnu.property section when linking to avoid the extra
>>>> bytes.
>>> If we go this route (and if, as per above, I'm misremembering,
>>> meaning we didn't reject such a change earlier on), why would we
>>> not strip .note and .note.* in one go?
>>>
>>>> Stefan Bader also noticed that build32.mk requires -fcf-protection=none
>>>> or else the hypervisor will not boot.
>>>> https://bugs.launchpad.net/ubuntu/+source/gcc-9/+bug/1863260
>>> How's this related to the change here?
>> I think there is a bit of confusion as to exactly what is going on.
>>
>> Ubuntu defaults -fcf-protection to enabled, which has a side effect of
>> turning on CET, which inserts ENDBR{32,64} instructions and generates
>> .note.gnu.properties indicating that the binary is CET-IBT compatible.
> I.e. in principle this 2nd patch wouldn't be necessary at all if
> we forced -fcf-protection=none unilaterally, and provided build32.mk
> properly inherits CFLAGS. Discarding note sections may still be
> a desirable thing to do though ...

Even if we disable unilaterally for now, we'll still want to re-enable
at some point, so this will come and bite us again.

I'm currently testing Ubuntu's default behaviour in combination with
livepatch, because I bet there are further breakages to be found.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 12 16:14:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 16:14:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXYE-0003bZ-P3; Tue, 12 May 2020 16:14:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYXYD-0003bQ-FI
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 16:14:37 +0000
X-Inumbo-ID: abd963f8-946b-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abd963f8-946b-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 16:14:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589300075;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=MYHVgIVo9jvXLejtNrbXy9XdAcnvJU2n1xm/4DSuPGs=;
 b=LxST1de8ynfu+u7OarlRGMJ1uaHZEZv0MTT+VFoY46PmJUUkFKln7g/6
 k8czC1DwtVKDhw/tFHSiZ08LVZgvHUzsElHiLzTOM/ueTm7txH+p2SxXe
 voF3XHAkOXhR+egubJWiuKivEqU2KICA+hy5wEgH7PwaDZL0T9ekcsjWW I=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: mgzBGGFjidN/99PQ2OrbS9ToHCO1nogmQvIc/fAxbuJRXysrFvTimlBMYM5sHZUaIFXOuaiSej
 Po6hdEOPLVnqE0ZhXtUAqJ/SUUhyqyIxWfBJ139LS0BrK8bKE6WTRdyL+/TqrN/XHnk4l8N16r
 vamD8OQqXl4P3ukWK2vtoFyf9gJufgNXoVWVM8h/RVeHUfYtuvSlbfyf4SRXQk9g72kJMIi2as
 NmUpksrptk1aEzggKrNVT2EpNe7BtWg02AqGPzLrA+J/sfbfghqNM5NAbN2T3bn2aqR97stUUL
 HQ4=
X-SBRS: 2.7
X-MesageID: 18027030
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="18027030"
Subject: Re: [PATCH 12/16] x86/extable: Adjust extable handling to be shadow
 stack compatible
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-13-andrew.cooper3@citrix.com>
 <1e80c672-9308-f7ad-67ea-69d83d69bc03@suse.com>
 <974f631e-3a82-3da4-124d-f4bf2bef89e2@citrix.com>
 <59fcdaf0-f877-7a90-9bf4-9e41b1bbcea7@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <876b7c21-8354-8461-12b2-baf19b0426de@citrix.com>
Date: Tue, 12 May 2020 17:14:30 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <59fcdaf0-f877-7a90-9bf4-9e41b1bbcea7@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12/05/2020 15:31, Jan Beulich wrote:
> On 11.05.2020 23:09, Andrew Cooper wrote:
>> On 07/05/2020 14:35, Jan Beulich wrote:
>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/traps.c
>>>> +++ b/xen/arch/x86/traps.c
>>>> @@ -778,6 +778,28 @@ static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>>>>                 vec_name(regs->entry_vector), regs->error_code,
>>>>                 _p(regs->rip), _p(regs->rip), _p(fixup));
>>>>  
>>>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>>>> +    {
>>>> +        unsigned long ssp;
>>>> +
>>>> +        asm ("rdsspq %0" : "=r" (ssp) : "0" (1) );
>>>> +        if ( ssp != 1 )
>>>> +        {
>>>> +            unsigned long *ptr = _p(ssp);
>>>> +
>>>> +            /* Search for %rip in the shadow stack, ... */
>>>> +            while ( *ptr != regs->rip )
>>>> +                ptr++;
>>> Wouldn't it be better to bound the loop, as it shouldn't search past
>>> (strictly speaking not even to) the next page boundary? Also you
>>> don't care about the top of the stack (being the to be restored SSP),
>>> do you? I.e. maybe
>>>
>>>             while ( *++ptr != regs->rip )
>>>                 ;
>>>
>>> ?
>>>
>>> And then - isn't searching for a specific RIP value alone prone to
>>> error, in case a it matches an ordinary return address? I.e.
>>> wouldn't you better search for a matching RIP accompanied by a
>>> suitable pointer into the shadow stack and a matching CS value?
>>> Otherwise, ...
>>>
>>>> +            ASSERT(ptr[1] == __HYPERVISOR_CS);
>>> ... also assert that ptr[-1] points into the shadow stack?
>> So this is the problem I was talking about that the previous contexts
>> SSP isn't stored anywhere helpful.
>>
>> What we are in practice doing is looking 2 or 3 words up the shadow
>> stack (depending on exactly how deep our call graph is), to the shadow
>> IRET frame matching the real IRET frame which regs is pointing to.
>>
>> Both IRET frames were pushed in the process of generating the exception,
>> and we've already matched regs->rip to the exception table record.  We
>> need to fix up regs->rip and the shadow lip to the fixup address.
>>
>> As we are always fixing up an exception generated from Xen context, we
>> know that ptr[1] == __HYPERVISOR_CS, and *ptr[-1] = &ptr[2], as we
>> haven't switched shadow stack as part of taking this exception. 
>> However, this second point is fragile to exception handlers moving onto IST.
>>
>> We can't encounter regs->rip in the shadow stack between the current SSP
>> and the IRET frame we're looking to fix up, or we would have taken a
>> recursive fault and not reached exception_fixup() to begin with.
> I'm afraid I don't follow here. Consider a function (also)
> involved in exception handling having this code sequence:
>
>     call    func
>     mov     (%rax), %eax
>
> If the fault we're handling occurred on the MOV and
> exception_fixup() is a descendant of func(), then the first
> instance of an address on the shadow stack pointing at this
> MOV is going to be the one which did not fault.

No.  The moment `call func` returns, the address you're looking to match
is rubble, no longer on the stack.  Numerically, it will be located at
SSP-8 when the fault for the MOV is generated.

In this exact case, it would be clobbered by the shadow IRET frame, but
even if it were deeper in the call tree, we would still never encounter
it when walking up the shadow stack from SSP.

The only things you will find on the shadow stack are the shadow IRET
frame, handle_exception_saved(), do_*() and fixup_exception(), except that
I'd not like to fix the behaviour to require exactly two function calls
of depth for fixup_exception().

>> Therefore, the loop is reasonably bounded in all cases.
>>
>> Sadly, there is no RDSS instruction, so we can't actually use shadow
>> stack reads to spot if we underflowed the shadow stack, and there is no
>> useful alternative to panic() if we fail to find the shadow IRET frame.
> But afaics you don't panic() in this case. Instead you continue
> looping until (presumably) you hit some form of fault.
>
>>>> --- a/xen/arch/x86/x86_64/entry.S
>>>> +++ b/xen/arch/x86/x86_64/entry.S
>>>> @@ -708,7 +708,16 @@ exception_with_ints_disabled:
>>>>          call  search_pre_exception_table
>>>>          testq %rax,%rax                 # no fixup code for faulting EIP?
>>>>          jz    1b
>>>> -        movq  %rax,UREGS_rip(%rsp)
>>>> +        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
>>>> +
>>>> +#ifdef CONFIG_XEN_SHSTK
>>>> +        mov    $1, %edi
>>>> +        rdsspq %rdi
>>>> +        cmp    $1, %edi
>>>> +        je     .L_exn_shstk_done
>>>> +        wrssq  %rax, (%rdi)             # fixup shadow stack
>>>> +.L_exn_shstk_done:
>>>> +#endif
>>> Again avoid the conditional jump by using alternatives patching?
>> Well - that depends on whether we're likely to gain any new content in
>> the pre exception table.
>>
>> As it stands, it is only the IRET(s) to userspace so would be safe to
>> turn this into an unconditional alternative.  Even in the crash case, we
>> won't be returning to guest context after having started the crash
>> teardown path.
> Ah, right - perhaps indeed better keep it as is then.

That was my reasoning.  It is a path we never expect to execute with
well-behaved guests, so I erred on the safe side.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 12 16:17:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 16:17:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXb3-0003o5-BC; Tue, 12 May 2020 16:17:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rfNj=62=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYXb1-0003o0-Pw
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 16:17:31 +0000
X-Inumbo-ID: 1416caa0-946c-11ea-ae69-bc764e2007e4
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1416caa0-946c-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 16:17:30 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id z22so11097312lfd.0
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 09:17:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=xq9364n6y8yNNWKyS1z9p9kRD+YN2mN7eiq5HouVxC4=;
 b=CekUhOP2v5QHDlYyMrg/XwAGCGp7UotOiRbET+h3WGCb2kEVYWiwNLSXJtXB2q4e6E
 Sq2cHhFTlE+zt604vYAH2+LuO0m2AUWi+1QpIDc74vQGVBAGVEZUot6xU//7QvFyXMDH
 w+KpMNkLySTw+CFVxA5SiNKrhlOUjPAP9oHciZ+e0T71++NL5ueoHiKtQfplpJ0a3ROs
 PcpRoNVm2qMd/AH2wYFDC0Y9xzq37RE5PTjXgiUotwE/NMJbT2ViUVs7V3+8gN2bCS7D
 rucMbTpMCBfTQCR/gAOezyDMs9aiQkV7qc9RvRfo7CCB3M9OtGNJFzk1vjJOdkGlDP29
 xxSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=xq9364n6y8yNNWKyS1z9p9kRD+YN2mN7eiq5HouVxC4=;
 b=mgUzNUbykYtiT0ttMq3xcGz5iacE0InUObSj5F7pemHWekE7ZK+Sd4gSacflw0jnkn
 ZbW6wb07Xke7ad0RJ+r9d/+H1RRS1Mklq7IVjrZv0Yni/ZCtidZp1qtbzMyg+pRux88D
 UMR4Jrt4oY/rbAgBwewMbsILv/VyEBDrACyPagbIx8KebsWjjYrI6PhLTKlengwyVHfv
 M0lK6/Q+GkG7kABsOLLAQFKwhn69uACb84tEyZ0pvc1X6Z+nD6Y0ihqDro1htc3F8Ei4
 3zcGTPd8zPjhoxK6jgR6PSQBtT2AOAvf6J6msPcfp69AM9CpTJmthnVxZpk3uOxw5lYD
 uGMg==
X-Gm-Message-State: AOAM532N5L6j8Hmu9d0R8j3fP9AWwJkOK0al+OeI2p+g1MjshPhNSmoW
 kbLZrFUVLDEkxHCtGup+lf5gpNx+eg30yrQbK7I=
X-Google-Smtp-Source: ABdhPJzatdahXip62o3iHBSUyTxWzUDRavSMSzMw8K+AUyHMR7cpG1ORS5pzXfrxT+Q0hxrrvqMaP5kMJH+kBUI5GLQ=
X-Received: by 2002:a19:7004:: with SMTP id h4mr14098963lfc.148.1589300249753; 
 Tue, 12 May 2020 09:17:29 -0700 (PDT)
MIME-Version: 1.0
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <20200512033948.3507-3-jandryuk@gmail.com>
 <69dd92f0-5b23-7a3d-3568-feab20874f97@suse.com>
 <372f83e4-6016-cc10-a8e6-970d644eb561@citrix.com>
In-Reply-To: <372f83e4-6016-cc10-a8e6-970d644eb561@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 12 May 2020 12:17:16 -0400
Message-ID: <CAKf6xpsmYpXSkSoVxafcRMH8dQ2DJ6rOfp+ah=RyoS6DheUj4A@mail.gmail.com>
Subject: Re: [PATCH 2/2] x86/boot: Drop .note.gnu.properties in build32.lds
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefan Bader <stefan.bader@canonical.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 11:58 AM Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
>
> On 12/05/2020 16:32, Jan Beulich wrote:
> > On 12.05.2020 05:39, Jason Andryuk wrote:
> >> reloc.S and cmdline.S are arrays of executable bytes for inclusion in
> >> head.S, generated from compiled object files.  Object files generated by
> >> an -fcf-protection toolchain include a .note.gnu.property section.  The
> >> way reloc.S and cmdline.S are generated, the bytes of .note.gnu.property
> >> become the start of the .S files.  When head.S calls reloc or
> >> cmdline_parse_early, those note bytes are executed instead of the
> >> intended .text section.  This results in an early crash in reloc.
> > I may be misremembering, but I vaguely recall some similar change
> > suggestion. What I'm missing here is some form of statement as to
> > whether this is legitimate tool chain behavior, or a bug, and
> > hence whether this is a fix or a workaround.
>
> The linker is free to position unreferenced sections anywhere it wishes.
>
> It is deeply unhelpful behaviour, but neither Binutils nor Clang
> developers think it is something wanting fixing.
>
> One option might be to use --orphan-handling=error so unexpected
> toolchain behaviour breaks the build, or in this case perhaps =discard
> might be better.

The toolchain uses .note.gnu.property to flag object files as
supporting Intel CET (Control-flow Enforcement Technology) enabled by
-fcf-protection.  The linker/loader uses the note to know if CET
should be enabled or disabled.  CET can only be enabled if the
application and all libraries support it.

So it's legitimate to flag compiled objects with .note.gnu.property.
The .S files generated by build32.mk are ... interesting.  It seems
like they should contain only the runtime code & data, so we don't want
the .note in there.  So I guess this is a workaround for how the .S
files are generated?  My first attempt added -R .note.gnu.property, FYI.

I'm not familiar enough with the linker options Andrew references to
know, off the top of my head, how usable they are.

-fcf-protection=none could also be specified in CFLAGS in build32.mk
to avoid generating the note.
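
[In Makefile terms that could be a fragment along these lines — a sketch
only, assuming a cc-option-style probe like the helper in Xen's Config.mk,
since older compilers do not accept the flag at all:]

```make
# Hypothetical build32.mk fragment: disable CF protection so no
# ENDBR instructions or .note.gnu.property section are emitted.
# Probe for the flag first, as pre-CET compilers reject it.
CFLAGS += $(call cc-option,$(CC),-fcf-protection=none,)
```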

> >> Discard the .note.gnu.property section when linking to avoid the extra
> >> bytes.
> > If we go this route (and if, as per above, I'm misremembering,
> > meaning we didn't reject such a change earlier on), why would we
> > not strip .note and .note.* in one go?

Maybe?  I made the conservative change since they weren't previously discarded.

> >> Stefan Bader also noticed that build32.mk requires -fcf-protection=none
> >> or else the hypervisor will not boot.
> >> https://bugs.launchpad.net/ubuntu/+source/gcc-9/+bug/1863260
> > How's this related to the change here?
>
> I think there is a bit of confusion as to exactly what is going on.
>
> Ubuntu defaults -fcf-protection to enabled, which has a side effect of
> turning on CET, which inserts ENDBR{32,64} instructions and generates
> .note.gnu.properties indicating that the binary is CET-IBT compatible.
>
> ENDBR* instructions come from the Hint Nop space so are safe on older
> processors, but do ultimately add to binary bloat.  It also occurs to me
> that it likely breaks livepatch.
>
> The reason Xen fails to boot is purely to do with the position of
> .note.gnu.properties, not the ENDBR* instructions.

Yes.

I referenced Stefan's bug since it specifically called out build32.mk
as problematic even after supplying -fcf-protection=none for a
hypervisor build.  I was trying to give credit and reference a helpful
bug entry.  I don't know how Xen handles such things, but I am fine
dropping it.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 12 16:29:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 16:29:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYXm6-0004iC-B8; Tue, 12 May 2020 16:28:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYXm6-0004i7-2T
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 16:28:58 +0000
X-Inumbo-ID: ac4c4e5e-946d-11ea-a2d8-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac4c4e5e-946d-11ea-a2d8-12813bfff9fa;
 Tue, 12 May 2020 16:28:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589300936;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=9ntlV26wEcF4XSi0g3CxgD+0WFwGxy7ndEp9NSgUC9o=;
 b=f5hvJZ26SLmrpXoi96zv1gyWv6P4jcRtUsqwV80kD18sJr45kk6n+gxq
 D+cRwgIXFH0xvLbmPyNG8rhGyZqg18mxJ6e4X2IkXwhD7BRH6VPyWcPN7
 0S8mVpUAmv3ALuaXvEkk4yvPB4reB0z7Wx6seQdmlqmV57vfUXsvYjCue g=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: +u5DxnAfrkcTxy86LoqbYdWGWgD4MDiNoRN4uyQr9EyenJJIeAcXbodTFOdmmyak2oVBpDV6hg
 RiDfOTSFS0LWue0y5Z1Fqv7GNdzmzQuHOriNg9WVWIe2BSWaK0I6uNlnH72X6I4x/1d8wJJOwE
 PfIG72bG3sz5okjDrUuB6imabWL7kOmnjDq4lEVFVWKLyDha0gd1CVDLGRInI3ljQrNRB7jpkj
 IAXHjxdKlR2bWaKOPvpuPk7dFzLe9kMUTuRVPC/O38BljzxfvWLg1+kOi/7/mBn10Rekb50ef3
 o+I=
X-SBRS: 2.7
X-MesageID: 18028713
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="18028713"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/guest: Fix assembler warnings with newer binutils
Date: Tue, 12 May 2020 17:28:30 +0100
Message-ID: <20200512162830.5912-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

GAS of at least version 2.34 complains:

  hypercall_page.S: Assembler messages:
  hypercall_page.S:24: Warning: symbol 'HYPERCALL_set_trap_table' already has its type set
  ...
  hypercall_page.S:71: Warning: symbol 'HYPERCALL_arch_7' already has its type set

This is because the whole page is declared as STT_OBJECT, and then every
hypercall within it is declared as STT_FUNC.  As these are function-like and
in .text, retain the STT_FUNC type and drop STT_OBJECT.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

Alternative suggestions welcome.  I've got half a mind to strip the unused
hypercalls, as a large chunk of them are PV only and will never be used by a
PVH Xen.  This would also remove the existing alias between hypercall_page and
HYPERCALL_set_trap_table.
---
 xen/arch/x86/guest/xen/hypercall_page.S | 1 -
 1 file changed, 1 deletion(-)

diff --git a/xen/arch/x86/guest/xen/hypercall_page.S b/xen/arch/x86/guest/xen/hypercall_page.S
index 6485e9150e..9673846b7d 100644
--- a/xen/arch/x86/guest/xen/hypercall_page.S
+++ b/xen/arch/x86/guest/xen/hypercall_page.S
@@ -8,7 +8,6 @@
 GLOBAL(hypercall_page)
          /* Poisoned with `ret` for safety before hypercalls are set up. */
         .fill PAGE_SIZE, 1, 0xc3
-        .type hypercall_page, STT_OBJECT
         .size hypercall_page, PAGE_SIZE
 
 /*
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 12 17:21:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 17:21:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYYaG-00015q-BB; Tue, 12 May 2020 17:20:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDyx=62=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYYaE-00015l-T8
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 17:20:46 +0000
X-Inumbo-ID: ea038eac-9474-11ea-b9cf-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea038eac-9474-11ea-b9cf-bc764e2007e4;
 Tue, 12 May 2020 17:20:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589304046;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=Rz/OpGpQ+brWc/9YfLNcQ3O+xh9dhjXSF8ukKqAjjOI=;
 b=iWrKn0m2odbclo5nIe5Ok+7IErAwPvaDelVBu/z1zCiGx11v0b/g5v8m
 iT0gpjldbYlFiAUO6UoTDw1maeMU/52ZJs5Sf787YmRua6wNF90Fp/iOe
 IY7nlzprtdpPZ21xhpHxXYIHjp+GceV34sqoPoP05bbbPX71xs+XGdx/J 4=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: puPhiaKIFfltbd3Jd+DFXII/F/u6J0arkaLVLJtPPootupbEL+p1C1LwNu5daRrnIFoo+8ERlC
 eboSZutgWyjLkHRazGGqUk13arY05bvggI2TWF3I9mhX6czwNbaPgXTPwLpGyAPWy8Lh/yw7rP
 FkLuBFa0zTJtmWnN0AOyJOCE/cjlCLRk2UA6ylloTA1F7SKeEDEozKnyZr6IGGuXeoju6Zrp2y
 iDJe9dvuylw/k+a4zGvORq8ikxz5pn15pxfIXiEFrRJ8iCNU8oU8aSgBIIxc8EgE2NVip3Rxey
 rgs=
X-SBRS: 2.7
X-MesageID: 17365587
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17365587"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 3/3] golang/xenlight: add necessary module/package
 documentation
Thread-Topic: [PATCH 3/3] golang/xenlight: add necessary module/package
 documentation
Thread-Index: AQHWHzfORxaz+/l/VEu6tN0ApHJiJKikozQA
Date: Tue, 12 May 2020 17:20:08 +0000
Message-ID: <16919263-9167-4BB1-9583-F7899FE3A246@citrix.com>
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <9f5000ceea14e6818e491df38eba161c800b4cf8.1588282027.git.rosbrookn@ainfosec.com>
In-Reply-To: <9f5000ceea14e6818e491df38eba161c800b4cf8.1588282027.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <E87C769079464643A2178899FBED12A6@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gQXByIDMwLCAyMDIwLCBhdCAxMDozOSBQTSwgTmljayBSb3Nicm9vayA8cm9zYnJv
b2tuQGdtYWlsLmNvbT4gd3JvdGU6DQo+IA0KPiBBZGQgYSBSRUFETUUgYW5kIHBhY2thZ2UgY29t
bWVudCBnaXZpbmcgYSBicmllZiBvdmVydmlldyBvZiB0aGUgcGFja2FnZS4NCj4gVGhlc2UgYWxz
byBoZWxwIHBrZy5nby5kZXYgZ2VuZXJhdGUgYmV0dGVyIGRvY3VtZW50YXRpb24uDQo+IA0KPiBB
bHNvLCBhZGQgYSBjb3B5IG9mIFhlbidzIGxpY2Vuc2UgdG8gdG9vbHMvZ29sYW5nL3hlbmxpZ2h0
LiBUaGlzIGlzDQo+IHJlcXVpcmVkIGZvciB0aGUgcGFja2FnZSB0byBiZSBzaG93biBvbiBwa2cu
Z28uZGV2IGFuZCBhZGRlZCB0byB0aGUNCj4gZGVmYXVsdCBtb2R1bGUgcHJveHksIHByb3h5Lmdv
bGFuZy5vcmcuDQoNCmxpYnhsIGlzIGFjdHVhbGx5IHVuZGVyIHRoZSBMR1BMOyBJIGd1ZXNzIHdl
IHdhbnQgdGhlIHhlbmxpZ2h0IHBhY2thZ2UgdG8gYmUgdGhlIHNhbWU/DQoNCkFzIGRpc2N1c3Nl
ZCBiZWZvcmUsIGFyZ3VhYmx5IGRpc3RyaWJ1dGluZyB0aGUgKi5nZW4uZ28gZmlsZXMgd29u4oCZ
dCBiZSBzdWZmaWNpZW50IHRvIGNvbXBseSB3aXRoIHRoZSBsaWNlbnNlLCBiZWNhdXNlIHRoZXkg
YXJlIG5vdCB0aGUg4oCccHJlZmVycmVkIGZvcm0gb2YgbW9kaWZpY2F0aW9u4oCdOiB0aGF0IHdv
dWxkIGJlIGxpYnhsX3R5cGVzLmlkbCwgYWxvbmcgd2l0aCB0aGUgcHl0aG9uIGdlbmVyYXRvcnMu
DQoNCk9UT0gsIEkgc3VwcG9zZSB0aGF04oCZcyBzb3J0IG9mIEdvb2dsZeKAmXMgcHJvYmxlbSBp
biBzb21lIHdheXMuLi4NCg0KPiANCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2dvbGFuZy94ZW5saWdo
dC9SRUFETUUubWQgYi90b29scy9nb2xhbmcveGVubGlnaHQvUkVBRE1FLm1kDQo+IG5ldyBmaWxl
IG1vZGUgMTAwNjQ0DQo+IGluZGV4IDAwMDAwMDAwMDAuLjQyMjQwZTIxMzINCj4gLS0tIC9kZXYv
bnVsbA0KPiArKysgYi90b29scy9nb2xhbmcveGVubGlnaHQvUkVBRE1FLm1kDQo+IEBAIC0wLDAg
KzEsMTcgQEANCj4gKyMgeGVubGlnaHQNCj4gKw0KPiArIyMgQWJvdXQNCj4gKw0KPiArVGhlIHhl
bmxpZ2h0IHBhY2thZ2UgcHJvdmlkZXMgR28gYmluZGluZ3MgdG8gWGVuJ3MgbGlieGwgQyBsaWJy
YXJ5IHZpYSBjZ28uIFRoZSBwYWNrYWdlIGlzIGN1cnJlbnRseSBpbiBhbiB1bnN0YWJsZSAicHJl
dmlldyIgc3RhdGUuDQoNCldlIHNob3VsZCBwcm9iYWJseSB0cnkgdG8gc2xvdCBpdCBpbnRvIG9u
ZSBvZiB0aGUgb2ZmaWNpYWwgdGVybXMgd2UgdXNlIGluIFNVUFBPUlQubWQgKGFuZCBhbHNvIGFk
ZCBpdCB0byBTVVBQT1JULm1kISkuDQoNCkFUTSB5b3UgY2Fu4oCZdCBldmVuIGRvIGEgZnVsbCBW
TSBsaWZlY3ljbGUgd2l0aCBpdCBwcm9wZXJseSAodG8gaW5jbHVkZSBoYXJ2ZXN0aW5nIGRlc3Ry
b3llZCBkb21haW5zKTsgc28gSSB0aGluayBpdCB3b3VsZCBjb21lIHVuZGVyIOKAnGV4cGVyaW1l
bnRhbOKAnS4NCg0KPiBUaGlzIG1lYW5zIHRoZSBwYWNrYWdlIGlzIHJlYWR5IGZvciBpbml0aWFs
IHVzZSBhbmQgZXZhbHVhdGlvbiwgYnV0IGlzIG5vdCB5ZXQgZnVsbHkgZnVuY3Rpb25hbC4gTmFt
ZWx5LCBvbmx5IGEgc3Vic2V0IG9mIGxpYnhsJ3MgQVBJIGlzIGltcGxlbWVudGVkLCBhbmQgYnJl
YWtpbmcgY2hhbmdlcyBtYXkgb2NjdXIgaW4gZnV0dXJlIHBhY2thZ2UgdmVyc2lvbnMuDQo+ICsN
Cj4gK011Y2ggb2YgdGhlIHBhY2thZ2UgaXMgZ2VuZXJhdGVkIHVzaW5nIHRoZSBsaWJ4bCBJREwu
IENoYW5nZXMgdG8gdGhlIGdlbmVyYXRlZCBjb2RlIGNhbiBiZSBtYWRlIGJ5IG1vZGlmeWluZyBg
dG9vbHMvZ29sYW5nL3hlbmxpZ2h0L2dlbmdvdHlwZXMucHlgIGluIHRoZSB4ZW4uZ2l0IHRyZWUu
DQo+ICsNCj4gKyMjIEdldHRpbmcgU3RhcnRlZA0KPiArDQo+ICtgYGBnbw0KPiAraW1wb3J0ICgN
Cj4gKyAgICAgICAgInhlbmJpdHMueGVuLm9yZy9naXQtaHR0cC94ZW4uZ2l0L3Rvb2xzL2dvbGFu
Zy94ZW5saWdodCINCj4gKykNCj4gK2BgYA0KPiArDQo+ICtUaGUgbW9kdWxlIGlzIG5vdCB5ZXQg
dGFnZ2VkIGluZGVwZW5kZW50bHkgb2YgeGVuLmdpdCwgc28gZXhwZWN0IHRvIHNlZSBgdjAuMC4w
LTxkYXRlPi08Z2l0IGhhc2g+YCBhcyB0aGUgcGFja2FnZSB2ZXJzaW9uLiBJZiB5b3Ugd2FudCB0
byBwb2ludCB0byBhIFhlbiByZWxlYXNlLCBzdWNoIGFzIDQuMTQuMCwgeW91IGNhbiBydW4gYGdv
IGdldCB4ZW5iaXRzLnhlbi5vcmcvZ2l0LWh0dHAveGVuLmdpdC90b29scy9nb2xhbmcveGVubGln
aHRAUkVMRUFTRS00LjE0LjBgLg0KDQpJIHRoaW5rIEkgd291bGQgc2F5IHNvbWV0aGluZyBsaWtl
Og0KDQrigJQNCg0KVGhlIG1vZHVsZSBpcyBub3QgeWV0IHRhZ2dlZCBpbmRlcGVuZGVudGx5IG9m
IHhlbi5naXQ7IGlmIHlvdSBkb27igJl0IHNwZWNpZnkgdGhlIHZlcnNpb24sIHlvdeKAmWxsIGdl
dCB0aGUgbW9zdCByZWNlbnQgZGV2ZWxvcG1lbnQgdmVyc2lvbiwgd2hpY2ggaXMgcHJvYmFibHkg
bm90IHdoYXQgeW91IHdhbnQuICBBIGJldHRlciBvcHRpb24gd291bGQgYmUgdG8gc3BlY2lmeSBh
IFhlbiByZWxlYXNlIHRhZzsgZm9yIGluc3RhbmNlOiBgZ28gZ2V0IHhlbmJpdHPigKYuL3hlbmxp
Z2h0QFJFTEVBU0UtNC4xNC4xMGAuDQoNCuKAlA0KDQoNCj4gZGlmZiAtLWdpdCBhL3Rvb2xzL2dv
bGFuZy94ZW5saWdodC94ZW5saWdodC5nbyBiL3Rvb2xzL2dvbGFuZy94ZW5saWdodC94ZW5saWdo
dC5nbw0KPiBpbmRleCA2YjRmNDkyNTUwLi4zZWFhNWEzZDYzIDEwMDY0NA0KPiAtLS0gYS90b29s
cy9nb2xhbmcveGVubGlnaHQveGVubGlnaHQuZ28NCj4gKysrIGIvdG9vbHMvZ29sYW5nL3hlbmxp
Z2h0L3hlbmxpZ2h0LmdvDQo+IEBAIC0xNCw2ICsxNCw4IEBADQo+ICAqIFlvdSBzaG91bGQgaGF2
ZSByZWNlaXZlZCBhIGNvcHkgb2YgdGhlIEdOVSBMZXNzZXIgR2VuZXJhbCBQdWJsaWMNCj4gICog
TGljZW5zZSBhbG9uZyB3aXRoIHRoaXMgbGlicmFyeTsgSWYgbm90LCBzZWUgPGh0dHA6Ly93d3cu
Z251Lm9yZy9saWNlbnNlcy8+Lg0KPiAgKi8NCj4gKw0KPiArLy8gUGFja2FnZSB4ZW5saWdodCBw
cm92aWRlcyBiaW5kaW5ncyB0byBYZW4ncyBsaWJ4bCBDIGxpYnJhcnkuDQo+IHBhY2thZ2UgeGVu
bGlnaHQNCg0KDQpXaWxsIHRoaXMgY29tbWVudCByZXBsYWNlIHRoZSBjb21tZW50IGFib3ZlIGl0
IGluIHRoZSBwa2cuZ28uZGV2IG1vZHVsZSBwYWdlPw0KDQogLUdlb3JnZQ0KDQo=


From xen-devel-bounces@lists.xenproject.org Tue May 12 17:30:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 17:30:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYYk1-0001yT-BZ; Tue, 12 May 2020 17:30:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDyx=62=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYYjz-0001yO-LS
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 17:30:51 +0000
X-Inumbo-ID: 52ceba82-9476-11ea-ae69-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52ceba82-9476-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 17:30:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589304650;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=jeEIi6O9DfW5vd4wbQe4U3bA76B9Sp2NDnFWHd3O+WU=;
 b=XmjDJv1er7sNIU1DA8Ftza6Pxw49ENwYwM35yxuXIYVCqRZfSoa0SNti
 o3XdByvUjBRn24M+fE4BtfO5OiiAKSg+enSLmokEbHK0vSU/s2iFBsMf6
 z2yBFUUCRhyMUpmpGiScCVRa0akghsU+xZl4iEfL09y5qpfhSNlFpMvPw w=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: aO5QLcyCmQjQNXIF5GPVKYjW+RDf+QaENEIIZGUxy32tXnmKkc/FvsPB7eHKF5/Qnh6zdU9Etc
 y+nEtLtu2dCIh3W5qMFZU0ZkD3K0wTOknJV0e4JWiMWTT8EAkU5eSP8I+dq6JGX/OWJ1zAD4dd
 8CnlwoiCVrOUiYateEwNsjGE1858SEp04bLA8RSas04pNXOBoVriyJgQLTGaQb93oaP/3+2/yF
 g2GUbwYi26SGFmjS6zVs4Tue/1DZwtlquckmTNXPlu528LRZF2uMAw1hPjGVARRussn9xGGcew
 UvM=
X-SBRS: 2.7
X-MesageID: 17618266
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17618266"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Topic: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Index: AQHWHzfMuMNvM1D6EUuQ5HJvPIpidKikdZCAgAAILACAAChygA==
Date: Tue, 12 May 2020 17:30:47 +0000
Message-ID: <294923FB-07D4-4CEB-9B29-3450DB454000@citrix.com>
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
 <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
 <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
In-Reply-To: <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <1F7213EFDCE91645BAFE4403BE39DC72@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQo+IE9uIE1heSAxMiwgMjAyMCwgYXQgNDowNiBQTSwgTmljayBSb3Nicm9vayA8cm9zYnJvb2tu
QGdtYWlsLmNvbT4gd3JvdGU6DQo+IA0KPiBbQ0FVVElPTiAtIEVYVEVSTkFMIEVNQUlMXSBETyBO
T1QgcmVwbHksIGNsaWNrIGxpbmtzLCBvciBvcGVuIGF0dGFjaG1lbnRzIHVubGVzcyB5b3UgaGF2
ZSB2ZXJpZmllZCB0aGUgc2VuZGVyIGFuZCBrbm93IHRoZSBjb250ZW50IGlzIHNhZmUuDQo+IA0K
PiBPbiBUdWUsIE1heSAxMiwgMjAyMCBhdCAxMDozNiBBTSBHZW9yZ2UgRHVubGFwIDxHZW9yZ2Uu
RHVubGFwQGNpdHJpeC5jb20+IHdyb3RlOg0KPj4gDQo+PiANCj4+IA0KPj4+IE9uIEFwciAzMCwg
MjAyMCwgYXQgMTA6MzkgUE0sIE5pY2sgUm9zYnJvb2sgPHJvc2Jyb29rbkBnbWFpbC5jb20+IHdy
b3RlOg0KPj4+IA0KPj4+IEluaXRpYWxpemUgdGhlIHhlbmxpZ2h0IEdvIG1vZHVsZSB1c2luZyB0
aGUgeGVuYml0cyBnaXQtaHR0cCBVUkwsDQo+Pj4geGVuYml0cy54ZW4ub3JnL2dpdC1odHRwL3hl
bi5naXQvdG9vbHMvZ29sYW5nL3hlbmxpZ2h0LCBhbmQgdXBkYXRlIHRoZQ0KPj4+IFhFTl9HT0NP
REVfVVJMIHZhcmlhYmxlIGluIHRvb2xzL1J1bGVzLm1rIGFjY29yZGluZ2x5Lg0KPj4+IA0KPj4+
IFNpZ25lZC1vZmYtYnk6IE5pY2sgUm9zYnJvb2sgPHJvc2Jyb29rbkBhaW5mb3NlYy5jb20+DQo+
Pj4gLS0tDQo+Pj4gdG9vbHMvUnVsZXMubWsgICAgICAgICAgICAgICB8IDIgKy0NCj4+PiB0b29s
cy9nb2xhbmcveGVubGlnaHQvZ28ubW9kIHwgMSArDQo+Pj4gMiBmaWxlcyBjaGFuZ2VkLCAyIGlu
c2VydGlvbnMoKyksIDEgZGVsZXRpb24oLSkNCj4+PiBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
Z29sYW5nL3hlbmxpZ2h0L2dvLm1vZA0KPj4+IA0KPj4+IGRpZmYgLS1naXQgYS90b29scy9SdWxl
cy5tayBiL3Rvb2xzL1J1bGVzLm1rDQo+Pj4gaW5kZXggNWI4Y2Y3NDhhZC4uY2EzM2NjN2IzMSAx
MDA2NDQNCj4+PiAtLS0gYS90b29scy9SdWxlcy5taw0KPj4+ICsrKyBiL3Rvb2xzL1J1bGVzLm1r
DQo+Pj4gQEAgLTM2LDcgKzM2LDcgQEAgZGVidWcgPz0geQ0KPj4+IGRlYnVnX3N5bWJvbHMgPz0g
JChkZWJ1ZykNCj4+PiANCj4+PiBYRU5fR09QQVRIICAgICAgICA9ICQoWEVOX1JPT1QpL3Rvb2xz
L2dvbGFuZw0KPj4+IC1YRU5fR09DT0RFX1VSTCAgICA9IGdvbGFuZy54ZW5wcm9qZWN0Lm9yZw0K
Pj4+ICtYRU5fR09DT0RFX1VSTCAgICA9IHhlbmJpdHMueGVuLm9yZy9naXQtaHR0cC94ZW4uZ2l0
L3Rvb2xzL2dvbGFuZw0KPj4gDQo+PiBUaGUgcHJpbWFyeSBlZmZlY3Qgb2YgdGhpcyB3aWxsIGJl
IHRvIGluc3RhbGwgdGhlIGNvZGUgaW4gJFBSRUZJWC9zaGFyZS9nb2NvZGUveGVuYml0cy54ZW4u
b3JnL2dpdC1odHRwL3hlbi5naXQvdG9vbHMvZ29sYW5nL3hlbmxpZ2h0IHdoZW4gbWFraW5nIGRl
YmJhbGxzIG9yIGRvaW5nIGBtYWtlIGluc3RhbGxgLg0KPj4gDQo+PiBJIGRvbuKAmXQgaW1tZWRp
YXRlbHkgc2VlIHRoZSBhZHZhbnRhZ2Ugb2YgdGhhdCwgcGFydGljdWxhcmx5IGlmIHdl4oCZcmUg
c3RpbGwgdGhpbmtpbmcgYWJvdXQgaGF2aW5nIGEg4oCccHJldHRpZXLigJ0gcGF0aCBhdCBzb21l
IHBvaW50IGluIHRoZSBmdXR1cmUuICBXaGF0IHdhcyB5b3VyIHRoaW5raW5nIGhlcmU/DQo+IA0K
PiBXaXRoIHRoZSBtb2R1bGUgYmVpbmcgZGVmaW5lZCBhcyBgeGVuYml0cy54ZW4ub3JnLy4uLmAs
IHRoZSBgYnVpbGRgDQo+IE1ha2UgdGFyZ2V0IHdpbGwgZmFpbCBhcy1pcyBmb3IgYSBtb2R1bGUt
YXdhcmUgdmVyc2lvbiBvZiBnbyAoYmVjYXVzZQ0KPiBpdCBjYW5ub3QgZmluZCBhIG1vZHVsZSBu
YW1lZCBgZ29sYW5nLnhlbnByb2plY3Qub3JnL3hlbmxpZ2h0YCkuIFNvLA0KPiB0aGUgcmVhc29u
IGZvciB0aGlzIGNoYW5nZSBpcyB0byBwcmVzZXJ2ZSB0aGUgZXhpc3RpbmcgZnVuY3Rpb25hbGl0
eQ0KPiBvZiB0aGF0IE1ha2UgdGFyZ2V0LiBDaGFuZ2luZyBYRU5fR09DT0RFX1VSTCBzZWVtZWQg
bGlrZSB0aGUgY29ycmVjdA0KPiBjaGFuZ2UsIGJ1dCBJJ20gb3BlbiB0byBzdWdnZXN0aW9ucy4N
Cg0KT2guICBCdXQgbm8sIHRoYXTigJlzIG5vdCBhdCBhbGwgd2hhdCB3ZSB3YW50Lg0KDQpUaGUg
d2hvbGUgcG9pbnQgb2YgcnVubmluZyDigJhnbyBidWlsZOKAmSBpcyB0byBtYWtlIHN1cmUgdGhh
dCAqdGhlIGNvZGUgd2UganVzdCBjb3BpZWQqIOKAlCB0aGUgY29kZSByaWdodCBub3cgaW4gb3Vy
IG93biBsb2NhbCB0cmVlLCBwZXJoYXBzIHdoaWNoIHdhcyBqdXN0IGdlbmVyYXRlZCDigJQgYWN0
dWFsbHkgY29tcGlsZXMuDQoNCkl0IGxvb2tzIGxpa2Ugd2hlbiB3ZSBhZGQgdGhlIGBnby5tb2Rg
IGZ1cnRoZXIgdXAgdGhlIHRyZWUsIGl0IG1ha2VzIGBnbyBidWlsZGAgaWdub3JlIHRoZSBHT1BB
VEggZW52aXJvbm1lbnQgdmFyaWFibGUgd2XigJlyZSBnaXZpbmcgaXQsIHdoaWNoIGNhdXNlcyB0
aGUgYnVpbGQgZmFpbHVyZS4gIEJ1dCB5b3VyIOKAnGZpeOKAnSBkb2VzbuKAmXQgbWFrZSBpdCB1
c2UgdGhlIGluLXRyZWUgZ28gY29kZSBhZ2FpbjsgaW5zdGVhZCBpdCBsb29rcyBsaWtlIGl0IGNh
dXNlcyBgZ28gYnVpbGRgIGNvbW1hbmQgdG8gZ28gYW5kIGZldGNoIHRoZSBtb3N0IHJlY2VudCBg
bWFzdGVyYCB2ZXJzaW9uIGZyb20geGVuYml0cywgaWdub3JpbmcgdGhlIGdvIGNvZGUgaW4gdGhl
IHRyZWUgY29tcGxldGVseS4gOi0pDQoNCiAtR2Vvcmdl


From xen-devel-bounces@lists.xenproject.org Tue May 12 17:38:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 17:38:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYYqo-0002Ay-8j; Tue, 12 May 2020 17:37:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XDyx=62=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYYqm-0002At-8N
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 17:37:52 +0000
X-Inumbo-ID: 4d346152-9477-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d346152-9477-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 17:37:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589305071;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=2T3V+rV8+2TUN3SI5RXR5n8osG3EUMONUkvXvpWY8m4=;
 b=DRqoYVeACQHyzRj7mB9P+w6Q1Mb92OFAaY6R/ChY/8nAJat/Rp6OCHqF
 /DejvnFhHVqIusZhhVn2DMb98O2YkCfSwY/adRFXSYadblzcDsLqAAL+/
 rBJU/UZUI6Yh/fqeyw0b1OYGQNUmn+x9u/hDD1lgeBBgcYx0CBUs0dus/ M=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: JpvFQzTKMZm6U24rXJNmWKQoSKl6Won99KvBdwpcbfIPeuCB/HiVPB2K/O91dVuCKnWv3lpLdk
 CweSbrWgCyc3L/VV+OatnQ361oPzp14q3tlbiD2EenX712PcMySYE/FR6LUv2KaTIX3zf0tTcB
 Ic1GH2JtO8dqNiHXgzReN+4LQtllny0sdavce8iW7iGw50tCSRl9ym8QmhycJjxTLiVs4SX51U
 69IMqSwaG73l91fTzxGEZpTV6WPnd3tKMhbOazIqBFBgBbRGOUwV99tvVOmPgv9dJq6smqW58F
 2uA=
X-SBRS: 2.7
X-MesageID: 17367309
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17367309"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Topic: [PATCH 2/3] golang/xenlight: init xenlight go module
Thread-Index: AQHWHzfMuMNvM1D6EUuQ5HJvPIpidKikdZCAgAAILACAAChygIAAAfSA
Date: Tue, 12 May 2020 17:37:47 +0000
Message-ID: <0B2B28B0-C857-443B-B73A-6A5A31039889@citrix.com>
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
 <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
 <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
 <294923FB-07D4-4CEB-9B29-3450DB454000@citrix.com>
In-Reply-To: <294923FB-07D4-4CEB-9B29-3450DB454000@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <1AD7FAC507A6F441837E5DEE461EA4AB@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gTWF5IDEyLCAyMDIwLCBhdCA2OjMwIFBNLCBHZW9yZ2UgRHVubGFwIDxnZW9yZ2Uu
ZHVubGFwQGNpdHJpeC5jb20+IHdyb3RlOg0KPiANCj4+IA0KPj4gT24gTWF5IDEyLCAyMDIwLCBh
dCA0OjA2IFBNLCBOaWNrIFJvc2Jyb29rIDxyb3Nicm9va25AZ21haWwuY29tPiB3cm90ZToNCj4+
IA0KPj4gW0NBVVRJT04gLSBFWFRFUk5BTCBFTUFJTF0gRE8gTk9UIHJlcGx5LCBjbGljayBsaW5r
cywgb3Igb3BlbiBhdHRhY2htZW50cyB1bmxlc3MgeW91IGhhdmUgdmVyaWZpZWQgdGhlIHNlbmRl
ciBhbmQga25vdyB0aGUgY29udGVudCBpcyBzYWZlLg0KPj4gDQo+PiBPbiBUdWUsIE1heSAxMiwg
MjAyMCBhdCAxMDozNiBBTSBHZW9yZ2UgRHVubGFwIDxHZW9yZ2UuRHVubGFwQGNpdHJpeC5jb20+
IHdyb3RlOg0KPj4+IA0KPj4+IA0KPj4+IA0KPj4+PiBPbiBBcHIgMzAsIDIwMjAsIGF0IDEwOjM5
IFBNLCBOaWNrIFJvc2Jyb29rIDxyb3Nicm9va25AZ21haWwuY29tPiB3cm90ZToNCj4+Pj4gDQo+
Pj4+IEluaXRpYWxpemUgdGhlIHhlbmxpZ2h0IEdvIG1vZHVsZSB1c2luZyB0aGUgeGVuYml0cyBn
aXQtaHR0cCBVUkwsDQo+Pj4+IHhlbmJpdHMueGVuLm9yZy9naXQtaHR0cC94ZW4uZ2l0L3Rvb2xz
L2dvbGFuZy94ZW5saWdodCwgYW5kIHVwZGF0ZSB0aGUNCj4+Pj4gWEVOX0dPQ09ERV9VUkwgdmFy
aWFibGUgaW4gdG9vbHMvUnVsZXMubWsgYWNjb3JkaW5nbHkuDQo+Pj4+IA0KPj4+PiBTaWduZWQt
b2ZmLWJ5OiBOaWNrIFJvc2Jyb29rIDxyb3Nicm9va25AYWluZm9zZWMuY29tPg0KPj4+PiAtLS0N
Cj4+Pj4gdG9vbHMvUnVsZXMubWsgICAgICAgICAgICAgICB8IDIgKy0NCj4+Pj4gdG9vbHMvZ29s
YW5nL3hlbmxpZ2h0L2dvLm1vZCB8IDEgKw0KPj4+PiAyIGZpbGVzIGNoYW5nZWQsIDIgaW5zZXJ0
aW9ucygrKSwgMSBkZWxldGlvbigtKQ0KPj4+PiBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvZ29s
YW5nL3hlbmxpZ2h0L2dvLm1vZA0KPj4+PiANCj4+Pj4gZGlmZiAtLWdpdCBhL3Rvb2xzL1J1bGVz
Lm1rIGIvdG9vbHMvUnVsZXMubWsNCj4+Pj4gaW5kZXggNWI4Y2Y3NDhhZC4uY2EzM2NjN2IzMSAx
MDA2NDQNCj4+Pj4gLS0tIGEvdG9vbHMvUnVsZXMubWsNCj4+Pj4gKysrIGIvdG9vbHMvUnVsZXMu
bWsNCj4+Pj4gQEAgLTM2LDcgKzM2LDcgQEAgZGVidWcgPz0geQ0KPj4+PiBkZWJ1Z19zeW1ib2xz
ID89ICQoZGVidWcpDQo+Pj4+IA0KPj4+PiBYRU5fR09QQVRIICAgICAgICA9ICQoWEVOX1JPT1Qp
L3Rvb2xzL2dvbGFuZw0KPj4+PiAtWEVOX0dPQ09ERV9VUkwgICAgPSBnb2xhbmcueGVucHJvamVj
dC5vcmcNCj4+Pj4gK1hFTl9HT0NPREVfVVJMICAgID0geGVuYml0cy54ZW4ub3JnL2dpdC1odHRw
L3hlbi5naXQvdG9vbHMvZ29sYW5nDQo+Pj4gDQo+Pj4gVGhlIHByaW1hcnkgZWZmZWN0IG9mIHRo
aXMgd2lsbCBiZSB0byBpbnN0YWxsIHRoZSBjb2RlIGluICRQUkVGSVgvc2hhcmUvZ29jb2RlL3hl
bmJpdHMueGVuLm9yZy9naXQtaHR0cC94ZW4uZ2l0L3Rvb2xzL2dvbGFuZy94ZW5saWdodCB3aGVu
IG1ha2luZyBkZWJiYWxscyBvciBkb2luZyBgbWFrZSBpbnN0YWxsYC4NCj4+PiANCj4+PiBJIGRv
buKAmXQgaW1tZWRpYXRlbHkgc2VlIHRoZSBhZHZhbnRhZ2Ugb2YgdGhhdCwgcGFydGljdWxhcmx5
IGlmIHdl4oCZcmUgc3RpbGwgdGhpbmtpbmcgYWJvdXQgaGF2aW5nIGEg4oCccHJldHRpZXLigJ0g
cGF0aCBhdCBzb21lIHBvaW50IGluIHRoZSBmdXR1cmUuICBXaGF0IHdhcyB5b3VyIHRoaW5raW5n
IGhlcmU/DQo+PiANCj4+IFdpdGggdGhlIG1vZHVsZSBiZWluZyBkZWZpbmVkIGFzIGB4ZW5iaXRz
Lnhlbi5vcmcvLi4uYCwgdGhlIGBidWlsZGANCj4+IE1ha2UgdGFyZ2V0IHdpbGwgZmFpbCBhcy1p
cyBmb3IgYSBtb2R1bGUtYXdhcmUgdmVyc2lvbiBvZiBnbyAoYmVjYXVzZQ0KPj4gaXQgY2Fubm90
IGZpbmQgYSBtb2R1bGUgbmFtZWQgYGdvbGFuZy54ZW5wcm9qZWN0Lm9yZy94ZW5saWdodGApLiBT
bywNCj4+IHRoZSByZWFzb24gZm9yIHRoaXMgY2hhbmdlIGlzIHRvIHByZXNlcnZlIHRoZSBleGlz
dGluZyBmdW5jdGlvbmFsaXR5DQo+PiBvZiB0aGF0IE1ha2UgdGFyZ2V0LiBDaGFuZ2luZyBYRU5f
R09DT0RFX1VSTCBzZWVtZWQgbGlrZSB0aGUgY29ycmVjdA0KPj4gY2hhbmdlLCBidXQgSSdtIG9w
ZW4gdG8gc3VnZ2VzdGlvbnMuDQo+IA0KPiBPaC4gIEJ1dCBubywgdGhhdOKAmXMgbm90IGF0IGFs
bCB3aGF0IHdlIHdhbnQuDQo+IA0KPiBUaGUgd2hvbGUgcG9pbnQgb2YgcnVubmluZyDigJhnbyBi
dWlsZOKAmSBpcyB0byBtYWtlIHN1cmUgdGhhdCAqdGhlIGNvZGUgd2UganVzdCBjb3BpZWQqIOKA
lCB0aGUgY29kZSByaWdodCBub3cgaW4gb3VyIG93biBsb2NhbCB0cmVlLCBwZXJoYXBzIHdoaWNo
IHdhcyBqdXN0IGdlbmVyYXRlZCDigJQgYWN0dWFsbHkgY29tcGlsZXMuDQo+IA0KPiBJdCBsb29r
cyBsaWtlIHdoZW4gd2UgYWRkIHRoZSBgZ28ubW9kYCBmdXJ0aGVyIHVwIHRoZSB0cmVlLCBpdCBt
YWtlcyBgZ28gYnVpbGRgIGlnbm9yZSB0aGUgR09QQVRIIGVudmlyb25tZW50IHZhcmlhYmxlIHdl
4oCZcmUgZ2l2aW5nIGl0LCB3aGljaCBjYXVzZXMgdGhlIGJ1aWxkIGZhaWx1cmUuICBCdXQgeW91
ciDigJxmaXjigJ0gZG9lc27igJl0IG1ha2UgaXQgdXNlIHRoZSBpbi10cmVlIGdvIGNvZGUgYWdh
aW47IGluc3RlYWQgaXQgbG9va3MgbGlrZSBpdCBjYXVzZXMgYGdvIGJ1aWxkYCBjb21tYW5kIHRv
IGdvIGFuZCBmZXRjaCB0aGUgbW9zdCByZWNlbnQgYG1hc3RlcmAgdmVyc2lvbiBmcm9tIHhlbmJp
dHMsIGlnbm9yaW5nIHRoZSBnbyBjb2RlIGluIHRoZSB0cmVlIGNvbXBsZXRlbHkuIDotKQ0KDQpP
Sywgc28gYWN0dWFsbHkgd2hhdCB5b3Ugd2FudCB0byBkbyBpcyByZXBsYWNlIHRoZSBgZ28gaW5z
dGFsbCAteCAkT1RIRVJfUExBQ0VfV0VfSlVTVF9DT1BJRURfVEhFX0ZJTEVTYCB3aXRoIGBnbyBi
dWlsZCAteGAgaW4gdGhlIE1ha2VmaWxlLg0KDQogLUdlb3JnZQ==


From xen-devel-bounces@lists.xenproject.org Tue May 12 17:52:23 2020
From: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH 1/2] xen/build: fixup path to merge_config.sh
Date: Tue, 12 May 2020 13:52:05 -0400
Message-Id: <20200512175206.20314-2-stewart.hildebrand@dornerworks.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>

This resolves the following observed error:

/bin/sh: /path/to/xen/xen/../xen/scripts/kconfig/merge_config.sh: No such file or directory

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
---
 xen/tools/kconfig/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/tools/kconfig/Makefile b/xen/tools/kconfig/Makefile
index ef2f2336c4..fd37f4386a 100644
--- a/xen/tools/kconfig/Makefile
+++ b/xen/tools/kconfig/Makefile
@@ -93,7 +93,7 @@ configfiles=$(wildcard $(srctree)/kernel/configs/$@ $(srctree)/arch/$(SRCARCH)/c
 
 %.config: $(obj)/conf
 	$(if $(call configfiles),, $(error No configuration exists for this target on this architecture))
-	$(Q)$(CONFIG_SHELL) $(srctree)/scripts/kconfig/merge_config.sh -m .config $(configfiles)
+	$(Q)$(CONFIG_SHELL) $(srctree)/tools/kconfig/merge_config.sh -m .config $(configfiles)
 	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
 
 PHONY += kvmconfig
-- 
2.26.2
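
For readers unfamiliar with the script whose path is being fixed here: `merge_config.sh -m` folds one or more config fragments into a base .config, with later fragments overriding earlier settings of the same symbol. A toy stand-in for that override behavior (illustration only, not the real script, which also warns about redefinitions):

```shell
# merge_frags FRAGMENT...: toy stand-in for merge_config.sh -m semantics.
# Later fragments override earlier CONFIG_* settings of the same symbol.
merge_frags() {
    out=$(mktemp)
    : > "$out"
    for frag in "$@"; do
        while IFS= read -r line; do
            case "$line" in
                CONFIG_*=*)
                    key=${line%%=*}
                    # Drop any earlier setting of this symbol, keep the new one
                    grep -v "^${key}=" "$out" > "$out.tmp" || true
                    mv "$out.tmp" "$out"
                    printf '%s\n' "$line" >> "$out"
                    ;;
            esac
        done < "$frag"
    done
    cat "$out"
    rm -f "$out"
}
```

So with a defconfig fragment setting `CONFIG_DEBUG=n` and a custom fragment setting `CONFIG_DEBUG=y`, the merged output carries the later `=y` value.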



From xen-devel-bounces@lists.xenproject.org Tue May 12 17:52:23 2020
From: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH 2/2] xen/build: use the correct kconfig makefile
Date: Tue, 12 May 2020 13:52:06 -0400
Message-Id: <20200512175206.20314-3-stewart.hildebrand@dornerworks.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>

This resolves the following observed error:

#
# merged configuration written to .config (needs make)
#
make -f /path/to/xen/xen/../xen/Makefile olddefconfig
make[2]: Entering directory '/path/to/xen/xen'
make[2]: *** No rule to make target 'olddefconfig'.  Stop.
make[2]: Leaving directory '/path/to/xen/xen'
tools/kconfig/Makefile:95: recipe for target 'custom.config' failed

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

---
It's possible there are other places where the Makefile path will need
to be changed. This just happened to be the one that failed for me.
---
 xen/tools/kconfig/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/tools/kconfig/Makefile b/xen/tools/kconfig/Makefile
index fd37f4386a..f39521a0ed 100644
--- a/xen/tools/kconfig/Makefile
+++ b/xen/tools/kconfig/Makefile
@@ -94,7 +94,7 @@ configfiles=$(wildcard $(srctree)/kernel/configs/$@ $(srctree)/arch/$(SRCARCH)/c
 %.config: $(obj)/conf
 	$(if $(call configfiles),, $(error No configuration exists for this target on this architecture))
 	$(Q)$(CONFIG_SHELL) $(srctree)/tools/kconfig/merge_config.sh -m .config $(configfiles)
-	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+	$(Q)$(MAKE) -f $(srctree)/tools/kconfig/Makefile.kconfig olddefconfig
 
 PHONY += kvmconfig
 kvmconfig: kvm_guest.config
-- 
2.26.2
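
The `olddefconfig` target that this rule invokes takes the merged .config and silently fills in every symbol it does not mention with its Kconfig default. A toy stand-in for that fill-in step (illustration only; real kconfig also evaluates dependencies between symbols):

```shell
# olddef_fill CONFIG DEFAULTS: toy stand-in for `make olddefconfig`.
# Keeps every symbol already set in CONFIG, and appends the default
# value from DEFAULTS for any symbol CONFIG does not mention.
olddef_fill() {
    config=$1; defaults=$2
    cat "$config"
    while IFS= read -r line; do
        case "$line" in
            CONFIG_*=*)
                key=${line%%=*}
                # Only emit the default if the user has not set this symbol
                grep -q "^${key}=" "$config" || printf '%s\n' "$line"
                ;;
        esac
    done < "$defaults"
}
```

This is why the path in the recipe matters: if `make -f` points at a Makefile with no `olddefconfig` rule, the merge succeeds but the fill-in step fails exactly as shown in the log above.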



From xen-devel-bounces@lists.xenproject.org Tue May 12 17:52:23 2020
From: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH 0/2] xen/build: fix kconfig errors during config merge
Date: Tue, 12 May 2020 13:52:04 -0400
Message-Id: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>

This series fixes a couple of kconfig errors that I observed while
invoking a build with a defconfig and config fragment.

I invoked the build as follows:

cat > xen/arch/arm/configs/custom.config <<EOF
CONFIG_DEBUG=y
CONFIG_SCHED_ARINC653=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
EOF
make -C xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig custom.config
make XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- dist-xen -j $(nproc)
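
After a defconfig-plus-fragment merge like the one above, a quick sanity check is to confirm that every symbol from the fragment actually survived into the final .config (a generic helper, not part of this series):

```shell
# check_fragment_applied FRAGMENT MERGED: print each CONFIG_* line from
# FRAGMENT that is absent from MERGED; return nonzero if any is missing.
check_fragment_applied() {
    fragment=$1; merged=$2; missing=0
    while IFS= read -r line; do
        case "$line" in
            CONFIG_*=*)
                grep -qxF "$line" "$merged" || { echo "missing: $line"; missing=1; }
                ;;
        esac
    done < "$fragment"
    return $missing
}
```

For the fragment above one would run e.g. `check_fragment_applied xen/arch/arm/configs/custom.config xen/.config` before starting the long `dist-xen` build.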

Stewart Hildebrand (2):
  xen/build: fixup path to merge_config.sh
  xen/build: use the correct kconfig makefile

 xen/tools/kconfig/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 12 18:47:48 2020
Subject: Re: [PATCH 0/2] Fixups for fcf-protection
To: Stefan Bader <stefan.bader@canonical.com>, Jason Andryuk
 <jandryuk@gmail.com>, <xen-devel@lists.xenproject.org>
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <3542ecb3-6f4e-2408-ea9f-9b03ac23688e@canonical.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2fbc4be8-c992-1703-168c-a4124a0fd02e@citrix.com>
Date: Tue, 12 May 2020 19:47:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <3542ecb3-6f4e-2408-ea9f-9b03ac23688e@canonical.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
Cc: Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>

On 12/05/2020 08:17, Stefan Bader wrote:
> On 12.05.20 05:39, Jason Andryuk wrote:
>> Two patches to fix building with a cf-protection toolchain.  The first
>> handles the case where the compiler fails to run with "error:
>> ‘-mindirect-branch’ and ‘-fcf-protection’ are not compatible".
>>
>> The second fixes a runtime error that prevented Xen booting in legacy
>> mode.
> That might be better than just disabling fcf-protection as well (which was done
> in Ubuntu lacking a better solution).

It is a GCC bug

https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93654

Fixed for 10, and 9.4

>
> Not sure it was already hit but that additional .note section breaks the build
> of the emulator as generated headers become gigantic:
>
> https://git.launchpad.net/ubuntu/+source/xen/tree/debian/patches/1001-strip-note-gnu-property.patch?h=ubuntu/focal

4.6G of notes?!?  That is surely a toolchain bug.

~Andrew
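
The usual way build systems cope with combinations like `-mindirect-branch` plus `-fcf-protection` is a cc-option-style probe: compile a trivial translation unit with the candidate flags and drop them if the compiler objects. A generic sketch of that probe (not the actual fix from Jason's series):

```shell
# cc_accepts_flags COMPILER FLAG...: succeed only if COMPILER accepts
# the given flags while compiling a trivial translation unit.
cc_accepts_flags() {
    compiler=$1; shift
    printf 'int main(void) { return 0; }\n' |
        "$compiler" "$@" -x c -c -o /dev/null - 2>/dev/null
}
```

A Makefile would then add a flag such as `-fcf-protection` to CFLAGS only when the probe succeeds, so affected GCC versions simply build without it.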


From xen-devel-bounces@lists.xenproject.org Tue May 12 19:09:24 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150144-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150144: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=de2f658b6bb422ec0e0fa94a49e476018602eeea
X-Osstest-Versions-That: qemuu=c88f1ffc19e38008a1c33ae039482a860aa7418c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 19:09:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150144 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150144/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150109
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150109
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150109
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150109
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150109
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150109
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                de2f658b6bb422ec0e0fa94a49e476018602eeea
baseline version:
 qemuu                c88f1ffc19e38008a1c33ae039482a860aa7418c

Last test of basis   150109  2020-05-09 14:41:45 Z    3 days
Testing same since   150139  2020-05-11 16:07:09 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Cédric Le Goater <clg@kaod.org>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Joel Stanley <joel@jms.id.au>
  Laurent Desnogues <laurent.desnogues@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   c88f1ffc19..de2f658b6b  de2f658b6bb422ec0e0fa94a49e476018602eeea -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue May 12 19:10:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 19:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYaIA-0002F5-Kf; Tue, 12 May 2020 19:10:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYaI9-0002Ex-NV
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 19:10:13 +0000
X-Inumbo-ID: 3440d754-9484-11ea-a2f8-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3440d754-9484-11ea-a2f8-12813bfff9fa;
 Tue, 12 May 2020 19:10:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589310613;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=Ok9vrPMEnIt9CRPWXOchhDesY4y4P9LTshix2i9L3qQ=;
 b=ID9RmqsL7cmMaS00C8Nod54vIITeqEkbpBYpWOi/cWaT5IwZJ4troi7o
 u95wm/1WCRUdrAfUcCfiDCiQlhyRlurKslvZoVS8OGa1s4hgqQyb60/nj
 9kFo9CRto0SKgh0tLI+jzXfonE6yuAqkSorXn9xSzIqYe9hWrXk5ViiL2 8=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: njvaU9IXbt00zFhAduUe/8NtmSADjHQsV+3Wjl0M0KWc+3Lw/I0+363CavNUfMufDk1n4q3C0d
 yo4g7ExsfhXiqKl5Qczxnl2r/urw7JVagCnq4JwV0a/s9P78ji44FRgnQI5fRyPAaNXP4+klop
 N1LgzB4cgl4cqTPTt0uYzGOysN8sKoKMi9ywmSsrMFFwZuh6Mq3wcwHw1wbW3UvMkQgsTYwUw8
 L2JHV5dfAdIwqlGsR8ySMhCtHZuepkqOTOb6b9zuxKGmZQyLUESFF5btk1KBvkVPs4Q9hFgg3g
 NSo=
X-SBRS: 2.7
X-MesageID: 17347110
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17347110"
Subject: Re: [PATCH 2/2] x86/boot: Drop .note.gnu.properties in build32.lds
To: Jason Andryuk <jandryuk@gmail.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <20200512033948.3507-3-jandryuk@gmail.com>
 <69dd92f0-5b23-7a3d-3568-feab20874f97@suse.com>
 <372f83e4-6016-cc10-a8e6-970d644eb561@citrix.com>
 <CAKf6xpsmYpXSkSoVxafcRMH8dQ2DJ6rOfp+ah=RyoS6DheUj4A@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <cfdcff79-e196-9ec8-79bf-c811ec6cd529@citrix.com>
Date: Tue, 12 May 2020 20:10:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpsmYpXSkSoVxafcRMH8dQ2DJ6rOfp+ah=RyoS6DheUj4A@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefan Bader <stefan.bader@canonical.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12/05/2020 17:17, Jason Andryuk wrote:
> On Tue, May 12, 2020 at 11:58 AM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> On 12/05/2020 16:32, Jan Beulich wrote:
>>> On 12.05.2020 05:39, Jason Andryuk wrote:
>>>> reloc.S and cmdline.S are generated, as arrays of executable bytes for
>>>> inclusion in head.S, from compiled object files.  Object files generated by
>>>> an -fcf-protection toolchain include a .note.gnu.property section.  The
>>>> way reloc.S and cmdline.S are generated, the bytes of .note.gnu.property
>>>> become the start of the .S files.  When head.S calls reloc or
>>>> cmdline_parse_early, those note bytes are executed instead of the
>>>> intended .text section.  This results in an early crash in reloc.
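To make the failure mode above concrete, the bytes-to-.S step can be sketched schematically.  This is an illustration of the od-to-.byte technique only, not the actual build32.mk recipe (the file names and exact pipeline here are assumptions): whatever bytes sit at the front of the flat binary, intended code or a stray note section, are what gets executed when head.S calls into the array.

```shell
# Schematic sketch (NOT the real build32.mk recipe): turn a flat binary
# into an assembly file of .byte directives.  If the linker placed a
# .note.gnu.property section ahead of .text, its bytes would appear
# first in this output and be executed on entry.
printf '\x90\xc3' > stub.bin        # nop; ret
od -An -v -t x1 stub.bin |
    sed -e 's/ \([0-9a-f][0-9a-f]\)/ .byte 0x\1;/g' > stub.S
cat stub.S                          # -> " .byte 0x90; .byte 0xc3;"
```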
>>> I may be misremembering, but I vaguely recall some similar change
>>> suggestion. What I'm missing here is some form of statement as to
>>> whether this is legitimate tool chain behavior, or a bug, and
>>> hence whether this is a fix or a workaround.
>> The linker is free to position unreferenced sections anywhere it wishes.
>>
>> It is deeply unhelpful behaviour, but neither Binutils nor Clang
>> developers think it is something wanting fixing.
>>
>> One option might be to use --orphan-handling=error so unexpected
>> toolchain behaviour breaks the build, or in this case perhaps =discard
>> might be better.
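For reference, GNU ld (2.26 and later) accepts --orphan-handling=MODE with MODE one of place, warn, error or discard.  A hypothetical sketch of wiring it into the 32bit link step; the variable and target names here are assumptions, not the real build32.mk:

```make
# Illustrative only -- not the actual build32.mk.  =error makes any
# orphan section (e.g. .note.gnu.property) a hard build failure;
# =discard silently drops orphans instead.
LDFLAGS_ORPHANS := --orphan-handling=error

%.lnk: %.o
	$(LD) $(LDFLAGS_ORPHANS) -N -T build32.lds -o $@ $<
```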
> The toolchain uses .note.gnu.property to flag object files as
> supporting Intel CET (Control-flow Enforcement Technology) enabled by
> -fcf-protection.  The linker/loader uses the note to know if CET
> should be enabled or disabled.  CET can only be enabled if the
> application and all libraries support it.

Right, except we're a kernel here (rather than userspace), so the
practicalities are different.

> So it's legitimate to flag compiled objects with .note.gnu.property.
> The .S files generated by build32.mk are .. interesting.  It seems
> like they should only be the runtime code & data, so we don't want the
> .note in there.

Yes...  Self-hosted relocatable 32bit code is tricky at the best of
times, and this is a very good example of how not to do it.

I've got a plan to get rid of it completely, but it needs a bit more of
the "switch to kbuild" series to go in first.

>   So I guess this is a workaround for how the .S files
> are generated?  My first attempt added -R .note.gnu.property, fyi.
>
> I'm not familiar with the linker options Andrew references, to know
> how usable they are off the top of my head.
>
> -fcf-protection=none could also be specified in CFLAGS in build32.mk
> to avoid generating the note.
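That suggestion might look like the following in build32.mk; a sketch only, assuming the usual Xen/Kbuild cc-option helper for probing flags (older GCCs do not understand -fcf-protection at all, so the flag needs guarding):

```make
# Illustrative only -- not the actual build32.mk.  cc-option expands to
# the flag if $(CC) accepts it, and to nothing otherwise, so builds
# with pre--fcf-protection compilers are unaffected.
CFLAGS += $(call cc-option,$(CC),-fcf-protection=none)
```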
>
>>>> Discard the .note.gnu.property section when linking to avoid the extra
>>>> bytes.
>>> If we go this route (and if, as per above, I'm misremembering,
>>> meaning we didn't reject such a change earlier on), why would we
>>> not strip .note and .note.* in one go?
> Maybe?  I made the conservative change since they weren't previously discarded.
>
>>>> Stefan Bader also noticed that build32.mk requires -fcf-protection=none
>>>> or else the hypervisor will not boot.
>>>> https://bugs.launchpad.net/ubuntu/+source/gcc-9/+bug/1863260
>>> How's this related to the change here?
>> I think there is a bit of confusion as to exactly what is going on.
>>
>> Ubuntu defaults -fcf-protection to enabled, which has a side effect of
>> turning on CET, which inserts ENDBR{32,64} instructions and generates
>> .note.gnu.property sections indicating that the binary is CET-IBT compatible.
>>
>> ENDBR* instructions come from the Hint Nop space so are safe on older
>> processors, but do ultimately add to binary bloat.  It also occurs to me
>> that it likely breaks livepatch.
>>
>> The reason Xen fails to boot is purely to do with the position of
>> .note.gnu.property, not the ENDBR* instructions.
> Yes.
>
> I referenced Stefan's bug since it specifically called out build32.mk
> as problematic even after supplying -fcf-protection=none for a
> hypervisor build.  I was trying to give credit and reference a helpful
> bug entry.  I don't know how Xen handles such things, but I am fine
> dropping it.

Typically a Reported-by: $PERSON <$EMAIL> tag, but frankly it would have
been nice if anyone had posted any of these problems to xen-devel six
months ago, when they were first discovered.

So far, we're at one definite (and fixed) toolchain bug, one
obvious-but-not-yet-debugged toolchain bug, a robustness fix in Xen for
the 32bit mess, and an override of a system default, and that's before
getting to the iPXE issues.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 12 19:11:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 19:11:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYaJM-0002OM-3y; Tue, 12 May 2020 19:11:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYaJK-0002OF-Q7
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 19:11:26 +0000
X-Inumbo-ID: 5fc2edc2-9484-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5fc2edc2-9484-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 19:11:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589310685;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=koGuhiMVnPOG2QDwjwN95ApMyENVW3xkRxCOG8gCMLs=;
 b=Gpuqk/C/RYoMkkwbFHKdLRPz2by0FkgDeM5JM4i97kJX7MvvGp5UXRpg
 VdCu2eKLo1tJsiSuiCB9x+wIutB2xnWjgyNIrQeCjB3sNo/cgvI11X0gt
 9w6tavJXvDVxD4Ea7me+z9rpthex9GXTP5Wh52zOw3wGhCWp/Lh6DAsSK U=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: vkZGNXi74RKObYukMDCiIiiczA99NXOOM5tOGNdVz5UCWqCqkESy7Eb09Kn1B48dCU9FMQlwS9
 cV/PotgV//XMYenCdaor1+BjiR9qh33dAwn0uNQESH7uOPEFQAPIB3hT/fCqyOslXn+HWMiPuQ
 YAhghx3PnDB0YCVzeLezYRpTqyB/nphxgexIfpWtJ54Hy5BwyL6Qv0njrj2Jn6ZNF1+8gA2/hk
 zirupSWBh2BM0SO/qeaRMdpMj5CLvNv3HlrifGvNbpqCTw1BZi5HtZ03QSX5sOr9ay9VbfONMt
 PeQ=
X-SBRS: 2.7
X-MesageID: 17376180
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17376180"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/build32: Discard all orphaned sections
Date: Tue, 12 May 2020 20:11:08 +0100
Message-ID: <20200512191108.6461-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Linkers may put orphaned sections ahead of .text, which breaks the calling
requirements.  A concrete example is Ubuntu's GCC-9 default of enabling
-fcf-protection, which causes us to try to execute .note.gnu.property during
Xen's boot.

Put .got.plt in its own output section, as it specifically needs preserving
from the linker's point of view, and discard everything else.  This will
hopefully be more robust to other unexpected toolchain properties.

Fixes boot from an Ubuntu build of Xen.

Reported-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jason Andryuk <jandryuk@gmail.com>
CC: Stefan Bader <stefan.bader@canonical.com>
---
 xen/arch/x86/boot/build32.lds | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/boot/build32.lds b/xen/arch/x86/boot/build32.lds
index da35aee910..97454b40ff 100644
--- a/xen/arch/x86/boot/build32.lds
+++ b/xen/arch/x86/boot/build32.lds
@@ -31,7 +31,7 @@ SECTIONS
         *(.bss.*)
   }
 
-  /DISCARD/ : {
+  .got.plt : {
         /*
          * PIC/PIE executable contains .got.plt section even if it is not linked
          * with dynamic libraries. In such case it is just placeholder for
@@ -47,6 +47,14 @@ SECTIONS
          *
          * Please check build32.mk for more details.
          */
-        /* *(.got.plt) */
+        *(.got.plt)
+  }
+
+  /DISCARD/ : {
+        /*
+         * Discard everything else, to prevent linkers from putting
+         * orphaned sections ahead of .text, which needs to be first.
+         */
+        *(*)
   }
 }
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 12 19:11:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 19:11:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYaJi-0002Qw-Dc; Tue, 12 May 2020 19:11:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYaJi-0002Qq-0x
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 19:11:50 +0000
X-Inumbo-ID: 6dae7a82-9484-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6dae7a82-9484-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 19:11:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589310709;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=oMCdGSDqIXHcN3yrHbB182v6Q6Qj/Ianz2JO6oKJY2I=;
 b=IX6PR4MFFU/bz0O8a5EdaZZhfcUZ5iIOJXQWMMOcycD/60yfqHylwgju
 lf3hSuEqiqWHWprWJCPeccx/xKWzzuyGvozVy7ifd5hRop9jP5vnTAIgY
 iJruB7MYZAbMupvQ0kVbfOL0lew45mS8tDx4GDa4yO1joJFkLQqHAjFP8 8=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: PBwiuavzafFtcce7TY/uOXgN1tTXnJ+s1m+WMDEnJTkyadmsH7TPFfUwRDQDuQmV67huSJee6e
 /G4x1e/Igmwppwy/Nt1vKUIE90uZP+K59wXFR32m/yrPVyrKP3RcfuTLgRATacAcWvRqJdWmdC
 f447dNQL5Gw2m32E1Tq3re8yrcrqM6pG4w32pbHR+UbRRuMLz2ZzJBCk6h7AFtp55pN1ONzEU6
 VK/Q/586PvfBzNP+bDcTCI7QwoujHHGXhH3DKEviXZPl7hM0doEuxxqciXYC1xike67kCeT0UD
 zNM=
X-SBRS: 2.7
X-MesageID: 17620682
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17620682"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/build: Unilaterally disable -fcf-protection
Date: Tue, 12 May 2020 20:11:16 +0100
Message-ID: <20200512191116.6851-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

See comment for details.  Works around a GCC-9 bug which breaks the build on
Ubuntu.

Reported-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jason Andryuk <jandryuk@gmail.com>
CC: Stefan Bader <stefan.bader@canonical.com>

Sorry for messing you around with how to fix this.  I'd neglected to consider
the CONFIG_LIVEPATCH interaction.  With that extra observation, there is no
point having the extra complexity given that the result with CET-IBT and
Retpoline still isn't usable.
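For context, the cc-option-add probe used in the patch below amounts to
"compile a trivial program with the flag; add it to CFLAGS only if that
succeeds".  A standalone shell sketch of that probe (hedged: ${CC:-cc} and
the trivial test program are illustrative assumptions, not Xen's actual
build machinery):

```shell
#!/bin/sh
# Sketch of a cc-option style probe: try compiling an empty program with
# the candidate flag; only add it to CFLAGS when the compiler accepts it,
# so toolchains without -fcf-protection support still build.
CC=${CC:-cc}
flag="-fcf-protection=none"
if printf 'int main(void) { return 0; }\n' | \
   "$CC" "$flag" -x c -c -o /dev/null - 2>/dev/null; then
    CFLAGS="${CFLAGS:+$CFLAGS }$flag"
fi
printf 'CFLAGS: %s\n' "$CFLAGS"
```

Either outcome is acceptable: with a capable compiler CFLAGS gains the flag,
with an older one CFLAGS is simply left alone.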
---
 xen/arch/x86/arch.mk         | 9 +++++++++
 xen/arch/x86/boot/build32.mk | 1 +
 2 files changed, 10 insertions(+)

diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
index 2a51553edb..93e30e4bea 100644
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -67,6 +67,15 @@ CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch=thunk-extern
 CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch-register
 CFLAGS-$(CONFIG_INDIRECT_THUNK) += -fno-jump-tables
 
+# Xen doesn't support CET-IBT yet.  At a minimum, logic is required to
+# enable it for supervisor use, but the Livepatch functionality needs
+# to learn not to overwrite ENDBR64 instructions.
+#
+# Furthermore, Ubuntu enables -fcf-protection by default, along with a
+# buggy version of GCC-9 which objects to it in combination with
+# -mindirect-branch=thunk-extern (Fixed in GCC 10, 9.4).
+$(call cc-option-add,CFLAGS,CC,-fcf-protection=none)
+
 # If supported by the compiler, reduce stack alignment to 8 bytes. But allow
 # this to be overridden elsewhere.
 $(call cc-option-add,CFLAGS-stack-boundary,CC,-mpreferred-stack-boundary=3)
diff --git a/xen/arch/x86/boot/build32.mk b/xen/arch/x86/boot/build32.mk
index 48c7407c00..5a00755512 100644
--- a/xen/arch/x86/boot/build32.mk
+++ b/xen/arch/x86/boot/build32.mk
@@ -3,6 +3,7 @@ CFLAGS =
 include $(XEN_ROOT)/Config.mk
 
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
+$(call cc-option-add,CFLAGS,CC,-fcf-protection=none)
 
 CFLAGS += -Werror -fno-asynchronous-unwind-tables -fno-builtin -g0 -msoft-float
 CFLAGS += -I$(XEN_ROOT)/xen/include
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 12 19:32:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 19:32:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYad9-0004Cl-4s; Tue, 12 May 2020 19:31:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rfNj=62=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYad8-0004Cg-7F
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 19:31:54 +0000
X-Inumbo-ID: 3b740962-9487-11ea-9887-bc764e2007e4
Received: from mail-lj1-x235.google.com (unknown [2a00:1450:4864:20::235])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b740962-9487-11ea-9887-bc764e2007e4;
 Tue, 12 May 2020 19:31:53 +0000 (UTC)
Received: by mail-lj1-x235.google.com with SMTP id a21so15007323ljj.11
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 12:31:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
 bh=7EpZ3W6+R10EcvysXCT5zhaMgB5ltwhnG0M6maYRBm8=;
 b=J3/AN0Xenf3hEXhDrPeakJgk+xIJCgMd1MjC4P2YFJ8ZoYZ2HQUC6RIdn3uxXlNJe0
 JEOQk42o7EAukz/Enp8pg6ZhSzUM+IST3AKrKy6bFDHokKDsWyXQoMB2hgibM+cZIxvG
 fp3wqzrWfP/c3Q0qx4E25I+j9HeyKLOL3/z/em8pzLCJbDHh/1eiMu/4rEsNMM2z8g3y
 d2bvd8WUDnkEfa6GCMpjFSpQvawaDcwIUb7nnpyPhWcatzmIGixUl/dzkNOeeBc65ptl
 Fl+53pVt3Ph23MTduT5CLJ/Gr/ceG0c/sHbyE1c2k/j11akrKVMXlDSRBifcZ6Pw9/aY
 P7SA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to;
 bh=7EpZ3W6+R10EcvysXCT5zhaMgB5ltwhnG0M6maYRBm8=;
 b=UTfYEZ56q8dZVaou66aW1YE9u5m3C0n4kCQRND0FNF9JNNZAeoyneYbSrIRnwT3iaz
 dmBys18FlViXfJAzomUf/JNFHY5uLlTekO5Km9Ga5Aj0/yeMVLUILH+4iEg5fZmqa2R4
 dFJZCITiiHQWg9XCsLjA2z1leS54ic2Lle2EKQD+p0VjEHk9gRYOnak32svDPyCzHQYo
 0pDYEu2yuOKszXBaTaTaTLdky3XSdMKjuH8HBmLJ9Qeyu8W2TlSmluFxi/KYYlnjVCRH
 piQtzHMGraJxUCsjKhPYw5bS6l8+JUU8Ep4d52jnxmJ3CAQIqngtsUKyC9KxGGVseU+x
 V8Ow==
X-Gm-Message-State: AOAM533OPLx2MhyN2RndunCtgTj7HC/wxWTR8mcTWoUFDF257Kw3FaPm
 vgHbMJPGS0dYR+WgM8dj71C4yX4CZtZO4MsuR10=
X-Google-Smtp-Source: ABdhPJwXKliJUQcenUT1P94I9xPO+d+gfY9ydpGuiMS5jPbn4C52wkeDpwQ2SSyqMdabEVTUxJ7KgxFtJlo4hi6cqxQ=
X-Received: by 2002:a2e:8108:: with SMTP id d8mr14083989ljg.184.1589311912162; 
 Tue, 12 May 2020 12:31:52 -0700 (PDT)
MIME-Version: 1.0
References: <3ed7eb87-070c-28ea-4f8a-aa4421cea93a@citrix.com>
 <5ea8173d.1c69fb81.915ba.8400@mx.google.com>
 <c242b963-ae80-1ca0-9b4d-fe2c8f66b6a2@citrix.com>
 <34cc563f-9e05-b55c-54f4-55104d2d42b5@citrix.com>
In-Reply-To: <34cc563f-9e05-b55c-54f4-55104d2d42b5@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 12 May 2020 15:31:40 -0400
Message-ID: <CAKf6xpuf83H6PQXORX9A-S2d5Zzsvc_S6gQsQgKPZLJFoXw-9g@mail.gmail.com>
Subject: rombios triple fault with -fcf-protection
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is the output from a failed rombios boot:

(d11) Invoking ROMBIOS ...
(XEN) stdvga.c:173:d11v0 entering stdvga mode
(d11) VGABios $Id: vgabios.c,v 1.67 2008/01/27 09:44:12 vruppert Exp $
(d11) Bochs BIOS - build: 06/23/99
(d11) $Revision: 1.221 $ $Date: 2008/12/07 17:32:29 $
(d11) Options: apmbios pcibios eltorito PMM
(d11)
(d11) ata0 master: QEMU HARDDISK ATA-7 Hard-Disk ( 112 MBytes)
(d11)
(XEN) MMIO emulation failed (1): d11v0 32bit @ 0008:ffffffff ->
(XEN) d11v0 Triple fault - invoking HVM shutdown action 1
(XEN) *** Dumping Dom11 vcpu#0 state: ***
(XEN) ----[ Xen-4.14-unstable  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    0008:[<00000000ffffffff>]
(XEN) RFLAGS: 0000000000010086   CONTEXT: hvm guest (d11v0)
(XEN) rax: 0000000043001000   rbx: 000000000009c30e   rcx: 0000000000000000
(XEN) rdx: 0000000000000000   rsi: 000000000000031e   rdi: 0000000000000020
(XEN) rbp: 00000000000002de   rsp: 000000000009ef48   r8:  0000000000000000
(XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 0000000000000011   cr4: 0000000000000000
(XEN) cr3: 0000000000400000   cr2: 0000000000000000
(XEN) fsb: 0000000000000000   gsb: 00000000000c9000   gss: 0000000000000002
(XEN) ds: 0018   es: 0018   fs: 0000   gs: c900   ss: 0018   cs: 0008

For a traditional stubdom with sdl enabled, I also see the init_message of iPXE:
iPXE (http://ipxe.org) 00:04.0 C900 PCI2.10 PMM_

The underscore might be the cursor?

The VM reboots.  If I set on_reboot="preserve" and use gdbsx:

(gdb) target remote :1234
Remote debugging using :1234
0xdeadf00d in ?? ()
(gdb) bt
#0  0xdeadf00d in ?? ()
(gdb) i r
eax            0x0                 0
ecx            0x0                 0
edx            0xe5a08016          -442466282
ebx            0x87                135
esp            0x23aa              0x23aa <memset+22>
ebp            0x0                 0x0
esi            0x0                 0
edi            0x0                 0
eip            0xdeadf00d          0xdeadf00d
eflags         0xdeadbeef          [ CF PF ZF SF IF DF OF RF AC VIF ID ]
cs             0xdeadf00d          -559026163
ss             0xdeadbeef          -559038737
ds             0x5ef890            6224016
es             0x0                 0
fs             0x5ef868            6223976
gs             0x0                 0


From xen-devel-bounces@lists.xenproject.org Tue May 12 19:50:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 19:50:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYauv-0005qj-Pn; Tue, 12 May 2020 19:50:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eNOS=62=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1jYauu-0005qe-FO
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 19:50:16 +0000
X-Inumbo-ID: cc8c20cc-9489-11ea-a2fe-12813bfff9fa
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc8c20cc-9489-11ea-a2fe-12813bfff9fa;
 Tue, 12 May 2020 19:50:15 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 04CJo6Vi096417
 (version=TLSv1.2 cipher=DHE-RSA-AES128-GCM-SHA256 bits=128 verify=NO);
 Tue, 12 May 2020 15:50:12 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 04CJo5Yu096416;
 Tue, 12 May 2020 12:50:05 -0700 (PDT) (envelope-from ehem)
Date: Tue, 12 May 2020 12:50:05 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: use of "stat -"
Message-ID: <20200512195005.GA96154@mattapan.m5p.com>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
 <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
 <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
User-Agent: Mutt/1.11.4 (2019-03-13)
X-Spam-Status: No, score=0.3 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
 autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 11:52:25AM -0400, Jason Andryuk wrote:
> I was just looking to remove perl since it's a large dependency for
> this single use.  I'm not tied to a particular approach.

Have you tried to remove Perl from a system?  This is possible, but on
many systems there will be hundreds or thousands of other programs which
already cause Perl to be installed.

Perl doesn't have the mindshare it once did, but it is far from dead.  A
new desktop Linux installation might have fewer than a hundred programs
depending on Perl.  A new developer Linux installation will likely have
hundreds.  On a decades-old system like the one Jan is testing, there
will likely be thousands.

Removing dependencies is good.  Perhaps this is looking a few years into
the future, when Perl is even less common.
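For reference, the "stat -" idiom from the subject line reads the file
already open on standard input, which is what makes it a perl-free way to
query an open descriptor.  A minimal sketch (assuming GNU coreutils 8.27 or
later; BSD stat has no "-" argument):

```shell
#!/bin/sh
# "stat -" performs fstat() on stdin rather than taking a path, so it can
# query a file through an already-open descriptor without perl.
tmp=$(mktemp)
printf 'hello' > "$tmp"          # 5 bytes
size=$(stat -c %s - < "$tmp")    # GNU stat; %s = size in bytes
rm -f "$tmp"
echo "$size"                     # prints 5
```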


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue May 12 19:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 19:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYazD-000614-BY; Tue, 12 May 2020 19:54:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Sh4E=62=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYazB-00060z-EI
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 19:54:41 +0000
X-Inumbo-ID: 69eaf21e-948a-11ea-a2fe-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69eaf21e-948a-11ea-a2fe-12813bfff9fa;
 Tue, 12 May 2020 19:54:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589313279;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=qEIXMZOTP1NgAqtNXm0llZAxH+be2t7QOrP5aZNerqA=;
 b=ZTkZUyAJeWOUVqdrgbIat5HSZ6e/l0zwLPlMcb/KhxzDqIrY7eggScKX
 m+2ZxKFnX8VGYFxrEfXdXkOLo/bLpXj/T8TiB5o9sDgKQGdAx7nURZmgT
 z7V2yw8Mjxxefy3UoafKXwXXSjCfa5+jlnZsXCe6XXNNP8qAVops8yFJP o=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: Pyb4WEtry9WWIvmHCzgxvLtc5ZA+8ZWIKSIR1ovn/b/BeQF+L58DXldauWP8bNMGRvQHKYJvBe
 qxAl2hUPqRuBe1zXc6/CJD9ntdOHXL3Ifh0BOiXZa0ciW4p3f6KD8PjK6ra5wiYII47rOH9UCr
 89zzjRZYAkaV+iv3NrDhSlpiPwZVXVics8y8ueU7uij/Wt/g6Tol/rCHyGJ/fO08tNbEaKfydG
 AFriAses9eWJ3yTc+RSU+YjEuXTNYcyyJYb8ckfo0nUFmCumw5VNysn+3yCY1005im1/da0Mo2
 PZA=
X-SBRS: 2.7
X-MesageID: 17630810
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,384,1583211600"; d="scan'208";a="17630810"
Subject: Re: use of "stat -"
To: Elliott Mitchell <ehem+xen@m5p.com>, Jason Andryuk <jandryuk@gmail.com>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
 <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
 <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
 <20200512195005.GA96154@mattapan.m5p.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <049e0022-f9c1-6dc9-3360-d25d88eeb97f@citrix.com>
Date: Tue, 12 May 2020 20:54:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200512195005.GA96154@mattapan.m5p.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12/05/2020 20:50, Elliott Mitchell wrote:
> On Tue, May 12, 2020 at 11:52:25AM -0400, Jason Andryuk wrote:
>> I was just looking to remove perl since it's a large dependency for
>> this single use.  I'm not tied to a particular approach.
> Have you tried to remove Perl from a system?  This is possible, but on
> many systems there will be hundreds or thousands of other programs which
> already cause Perl to be installed.
>
> Perl doesn't have the mindshare it once did, but it is far from dead.  A
> new desktop Linux installation might have fewer than a hundred programs
> depending on Perl.  A new developer Linux installation will likely have
> hundreds.  On a decades-old system like the one Jan is testing, there
> will likely be thousands.
>
> Removing dependencies is good.  Perhaps this is looking a few years into
> the future, when Perl is even less common.

Not everyone has a fully fat Linux running as dom0.  There are real
systems using Xen which have already successfully purged perl.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 12 22:00:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 22:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYcwQ-0007Qk-NO; Tue, 12 May 2020 21:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9uWS=62=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYcwP-0007Qf-Hh
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 21:59:57 +0000
X-Inumbo-ID: ea17d0d4-949b-11ea-9887-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea17d0d4-949b-11ea-9887-bc764e2007e4;
 Tue, 12 May 2020 21:59:56 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id o14so14395722ljp.4
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 14:59:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=1kBnuCyLYWMDCij7B9yRQ+3dh3pWBlQAb15NdI1M4/s=;
 b=TYf7Iw09yr6QDw2iGLTkkSQD76o+BhADrPLJoB6ZSgenzELnS/IhxwMi91nIRrsX9G
 A+0sBVKVJo+WNu3ru64Bk4smr4zPXHWaALGSGnlQtUnqkKl+afKKqBFx1gM55r+/QnWS
 ElrTqGDKNOVn4NDI5DAkaU4xgj+f6zFiLIFXnNe9rC1tW7YSYmK2fTgqdCkm39WYq8VK
 hdjdQ9z5G3woW2yjibOTNU3RSH1/qBLSw5L1dbrVAGb8p+CiybBfMoH9igoUDKR0qjZ1
 j2Gow+ndm6fPKf+iW3PxYoUDkBdHivZTe8o9DDqjqFVXsE7lm7y7XSzc/qWz04Vlgz8L
 biPw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=1kBnuCyLYWMDCij7B9yRQ+3dh3pWBlQAb15NdI1M4/s=;
 b=Ho+pbsBtDSv0ETcQ3ehkMuPCFst/7DvmcWOptHn8JE8q8G37pzvqNRZWFQFn/EougX
 +f8miZixNibTKIhbbiY171GO9vzgRVMR3DIT+xBlPe7k8D2SfDJJE7FVf4+0h0X7JgIT
 SD4PW6PYd+/qU+hCl8AkV2p9h7lCDz6++ReWQQNCkx0ddwXqyOnNuUJAJkrlwGEvJCIa
 a5FAj3mmSh/8KfFw+Zft+PlEKOOgjOqw4IKHAEHLon1awh76cuDsaRviZKjZL4thDKzM
 OifIWBNQPM3HPiYEmyO/wM40BkEnjYHli5APaIcarR9/iT30mSVZDPH23Eu/NDUxEzbH
 p0gg==
X-Gm-Message-State: AOAM533ZtWZF5udrAXupn60YP6DdNCCxG0GhLxk5yXV3unDh5NcWRqKE
 xpmG9CoY/MMX8btg/LNO7954HzRZvpbCyYXklg8=
X-Google-Smtp-Source: ABdhPJwvt5RYwmZtpejdCCBF80XIZtKj204yF7jL3X4rqtP1DLQrLLj+xEt7Qg2SUvpyWB4cg/U/aJURcN4vkorcPi4=
X-Received: by 2002:a05:651c:c8:: with SMTP id
 8mr14222697ljr.182.1589320795068; 
 Tue, 12 May 2020 14:59:55 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <9f5000ceea14e6818e491df38eba161c800b4cf8.1588282027.git.rosbrookn@ainfosec.com>
 <16919263-9167-4BB1-9583-F7899FE3A246@citrix.com>
In-Reply-To: <16919263-9167-4BB1-9583-F7899FE3A246@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Tue, 12 May 2020 17:59:43 -0400
Message-ID: <CAEBZRSd90nhgL7K2Z2BUN=3NShpBO4Awsx3hdO5WSY8NZHzPBQ@mail.gmail.com>
Subject: Re: [PATCH 3/3] golang/xenlight: add necessary module/package
 documentation
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 1:20 PM George Dunlap <George.Dunlap@citrix.com> wrote:
>
>
>
> > On Apr 30, 2020, at 10:39 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> >
> > Add a README and package comment giving a brief overview of the package.
> > These also help pkg.go.dev generate better documentation.
> >
> > Also, add a copy of Xen's license to tools/golang/xenlight. This is
> > required for the package to be shown on pkg.go.dev and added to the
> > default module proxy, proxy.golang.org.
>
> libxl is actually under the LGPL; I guess we want the xenlight package to be the same?

Thanks, yes it should probably be the same.

> As discussed before, arguably distributing the *.gen.go files won’t be sufficient to comply with the license, because they are not the “preferred form of modification”: that would be libxl_types.idl, along with the python generators.
>
> OTOH, I suppose that’s sort of Google’s problem in some ways...

Yeah, I guess that's true. Besides, modules are not intended to be
modified (they are stored in a read-only cache by default), so I don't
think we need to worry about that until a separate repo is involved.
Anyone who looks on pkg.go.dev for now will see that xen.git is the
source repository.

> >
> > diff --git a/tools/golang/xenlight/README.md b/tools/golang/xenlight/README.md
> > new file mode 100644
> > index 0000000000..42240e2132
> > --- /dev/null
> > +++ b/tools/golang/xenlight/README.md
> > @@ -0,0 +1,17 @@
> > +# xenlight
> > +
> > +## About
> > +
> > +The xenlight package provides Go bindings to Xen's libxl C library via cgo. The package is currently in an unstable "preview" state.
>
> We should probably try to slot it into one of the official terms we use in SUPPORT.md (and also add it to SUPPORT.md!).
>
> ATM you can’t even do a full VM lifecycle with it properly (to include harvesting destroyed domains); so I think it would come under “experimental”.

Thanks, I wasn't aware of these definitions in SUPPORT.md.

> > This means the package is ready for initial use and evaluation, but is not yet fully functional. Namely, only a subset of libxl's API is implemented, and breaking changes may occur in future package versions.
> > +
> > +Much of the package is generated using the libxl IDL. Changes to the generated code can be made by modifying `tools/golang/xenlight/gengotypes.py` in the xen.git tree.
> > +
> > +## Getting Started
> > +
> > +```go
> > +import (
> > +        "xenbits.xen.org/git-http/xen.git/tools/golang/xenlight"
> > +)
> > +```
> > +
> > +The module is not yet tagged independently of xen.git, so expect to see `v0.0.0-<date>-<git hash>` as the package version. If you want to point to a Xen release, such as 4.14.0, you can run `go get xenbits.xen.org/git-http/xen.git/tools/golang/xenlight@RELEASE-4.14.0`.
>
> I think I would say something like:
>
> =E2=80=94
>
> The module is not yet tagged independently of xen.git; if you don’t specify the version, you’ll get the most recent development version, which is probably not what you want.  A better option would be to specify a Xen release tag; for instance: `go get xenbits…./xenlight@RELEASE-4.14.10`.

That sounds much better. Thanks.

> > diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> > index 6b4f492550..3eaa5a3d63 100644
> > --- a/tools/golang/xenlight/xenlight.go
> > +++ b/tools/golang/xenlight/xenlight.go
> > @@ -14,6 +14,8 @@
> >  * You should have received a copy of the GNU Lesser General Public
> >  * License along with this library; If not, see <http://www.gnu.org/licenses/>.
> >  */
> > +
> > +// Package xenlight provides bindings to Xen's libxl C library.
> > package xenlight
>
>
> Will this comment replace the comment above it in the pkg.go.dev
> module page?

Yes, this should be recognized as the "package comment" now.

-NR


From xen-devel-bounces@lists.xenproject.org Tue May 12 22:55:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 22:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYdno-0003uG-RX; Tue, 12 May 2020 22:55:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eNOS=62=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1jYdno-0003uB-2I
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 22:55:08 +0000
X-Inumbo-ID: 9fb7f69c-94a3-11ea-b07b-bc764e2007e4
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fb7f69c-94a3-11ea-b07b-bc764e2007e4;
 Tue, 12 May 2020 22:55:07 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.15.2/8.15.2) with ESMTPS id 04CMswfQ001680
 (version=TLSv1.2 cipher=DHE-RSA-AES128-GCM-SHA256 bits=128 verify=NO);
 Tue, 12 May 2020 18:55:04 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.15.2/8.15.2/Submit) id 04CMsw1C001679;
 Tue, 12 May 2020 15:54:58 -0700 (PDT) (envelope-from ehem)
Date: Tue, 12 May 2020 15:54:58 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: use of "stat -"
Message-ID: <20200512225458.GA1530@mattapan.m5p.com>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
 <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
 <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
 <20200512195005.GA96154@mattapan.m5p.com>
 <049e0022-f9c1-6dc9-3360-d25d88eeb97f@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <049e0022-f9c1-6dc9-3360-d25d88eeb97f@citrix.com>
User-Agent: Mutt/1.11.4 (2019-03-13)
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
 autolearn_force=no version=3.4.4
X-Spam-Checker-Version: SpamAssassin 3.4.4 (2020-01-24) on mattapan.m5p.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 08:54:26PM +0100, Andrew Cooper wrote:
> On 12/05/2020 20:50, Elliott Mitchell wrote:
> > On Tue, May 12, 2020 at 11:52:25AM -0400, Jason Andryuk wrote:
> >> I was just looking to remove perl since it's a large dependency for
> >> this single use.  I'm not tied to a particular approach.

> > Removing dependencies is good.  Perhaps this is looking a few years into
> > the future where Perl is even less common.
> 
> Not everyone has a fully fat Linux running as dom0.  There are real
> systems using Xen which have already successfully purged perl.

Gah!  Misread an earlier message.  Upon rereading, it seems my thinking
was wrong.  Yes, it is pretty reasonable to set up a system without
Perl.

Sorry for interrupting with the braindead comment.


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Tue May 12 23:17:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 23:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYe9M-0005fD-M2; Tue, 12 May 2020 23:17:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EIy0=62=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYe9L-0005f8-8X
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 23:17:23 +0000
X-Inumbo-ID: bab51670-94a6-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bab51670-94a6-11ea-ae69-bc764e2007e4;
 Tue, 12 May 2020 23:17:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jUzovW6o7XttNwy73mp8XuOCuValj61AxWu/fRlQYp4=; b=OqNPvIpO3FbUsDptDDR7KoCKG
 jaqrMDrtR6A9DIGpwFW9muxbuJ+71pjuRWEdQg3irnCMoupM2tvqiPOzJ1ZYILPpZz9Aw4xht2r54
 HfF4Kv56uooNrJmZl0PTJ8IZYvxqUpSAhBuLi47KQejOsC3yGNAkGQCVSONgei7P67Z9Q=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYe9I-0003EY-Dk; Tue, 12 May 2020 23:17:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYe9I-0008Gb-5B; Tue, 12 May 2020 23:17:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYe9I-0003IC-4g; Tue, 12 May 2020 23:17:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150148-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150148: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=152036d1379ffd6985262743dcf6b0f9c75f83a4
X-Osstest-Versions-That: linux=e99332e7b4cda6e60f5b5916cf9943a79dbef902
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 12 May 2020 23:17:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150148 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150148/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150126
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150126
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150126
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150126
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150126
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150126
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150126
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150126
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150126
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                152036d1379ffd6985262743dcf6b0f9c75f83a4
baseline version:
 linux                e99332e7b4cda6e60f5b5916cf9943a79dbef902

Last test of basis   150126  2020-05-10 14:58:51 Z    2 days
Failing since        150133  2020-05-11 06:42:17 Z    1 days    3 attempts
Testing same since   150148  2020-05-12 07:51:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexey Dobriyan <adobriyan@gmail.com>
  Anton Eidelman <anton@lightbitslabs.com>
  Christoph Hellwig <hch@lst.de>
  Chuck Lever <chuck.lever@oracle.com>
  Ingo Molnar <mingo@kernel.org>
  Jann Horn <jannh@google.com>
  Jens Axboe <axboe@kernel.dk>
  Joerg Roedel <jroedel@suse.de>
  John Garry <john.garry@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Julia Lawall <Julia.Lawall@inria.fr>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Linus Torvalds <torvalds@linux-foundation.org>
  Miroslav Benes <mbenes@suse.cz>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Qian Cai <cai@lca.pw>
  Randy Dunlap <rdunlap@infradead.org>
  Rick Edgecombe <rick.p.edgecombe@intel.com>
  Sagi Grimberg <sagi@grimberg.me>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Tejun Heo <tj@kernel.org>
  Thomas Gleixner <tglx@linutronix.de>
  Yufen Yu <yuyufen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   e99332e7b4cd..152036d1379f  152036d1379ffd6985262743dcf6b0f9c75f83a4 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Tue May 12 23:37:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 12 May 2020 23:37:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYeSU-0007LN-D1; Tue, 12 May 2020 23:37:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9uWS=62=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYeST-0007LI-Ls
 for xen-devel@lists.xenproject.org; Tue, 12 May 2020 23:37:09 +0000
X-Inumbo-ID: 7e9f1246-94a9-11ea-9887-bc764e2007e4
Received: from mail-lf1-x12b.google.com (unknown [2a00:1450:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e9f1246-94a9-11ea-9887-bc764e2007e4;
 Tue, 12 May 2020 23:37:08 +0000 (UTC)
Received: by mail-lf1-x12b.google.com with SMTP id 8so9312479lfp.4
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 16:37:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=c8OXBEOBsZxAuz+wtveao7AIP9q5uFAQmVv2I4n8v/Y=;
 b=Ubl2/3C1UHclGBsf0UVUgHFxH026AYLF+OyeWHtzn6wES4hNF/ColxeZMCJR2ps3Kq
 IXGU8F+75V0q59EQjpppm40tgVf3G+OvjQ3nX5r43cW8jqZbdOPd6zd+UuzLjbei3ShX
 FcFBIhK5hbnp6lgPZ5oSLtLYOkPoCAzS41bt54kn5vFyoGIpN8SQyabBWwPcnQQvzvp9
 dnwRHWJX/ZIOewNdflVqrtLreA2H/3EKhkXUQdfvHwAUpe0fpKRII+YmZBxw+UOkOxOz
 xQ1KHIGPIbs6Cr8y1TWSNU/NzoaCakPVbuZm9aHcQgC31cjj58Y8NL68dI2grJDeOZoH
 4p0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=c8OXBEOBsZxAuz+wtveao7AIP9q5uFAQmVv2I4n8v/Y=;
 b=RdNSW7aDScTZ6lxilZoj493unNwaK/WELHGqmtPAhhVwMTmotpDkG2uPsCNSwu5/6V
 6uG16PCZxPkLXIi8wK+uEj0qwi/cDigzVtPc7BDMfYStnLR+mKcPFBC/RuWS+mAPYQzt
 ROQoF49OsLhrY+9vr5WQeOs4B43ST4+lM6dUTYswUbg/98O669vdqIHgYLqC5kzr+27c
 kLydqlzUx/8Zc0gho3xkZXnGsbS46NwXXJZ2KUuNSyiAw8J04PMlvSmvpwx0UXPXZozv
 FSo6QzhxzHwuEpclABWoOoyxxmJ84iSmxpZayrPiwW9L0GjaTowkmgV4zFasZLlxB4kQ
 MywQ==
X-Gm-Message-State: AOAM5332h5Nyb8iNlyPwEEsWICqIz6G1xrY+4TbnRcn3M+mdy9/56/aU
 dDJYuDsX4A7vsdfJKQ0sP27egDITLntiwxPQJqA=
X-Google-Smtp-Source: ABdhPJw8xRMgxRg/LloyAaGN8zxkmSEaExUQJnCATegUctJuJMwNWCT5HrwXlb+5ZLgKIRyjy0MahbqLZ+5oscm9TRA=
X-Received: by 2002:ac2:5384:: with SMTP id g4mr16115722lfh.1.1589326627823;
 Tue, 12 May 2020 16:37:07 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1588282027.git.rosbrookn@ainfosec.com>
 <c38afab85d9fc005edade229896008a4ad5a1929.1588282027.git.rosbrookn@ainfosec.com>
 <3ED0B49D-123C-4925-B3A0-4FA0B44DF9F0@citrix.com>
 <CAEBZRSdWCJo9kBnNv6Jqt76E3fb8DDX6C4zndMtvrhngEzGHxg@mail.gmail.com>
 <294923FB-07D4-4CEB-9B29-3450DB454000@citrix.com>
In-Reply-To: <294923FB-07D4-4CEB-9B29-3450DB454000@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Tue, 12 May 2020 19:36:56 -0400
Message-ID: <CAEBZRSeFptOcLQ5sVyP6k4cmaqUwx9=HabnzPXfwBb+ih3nduQ@mail.gmail.com>
Subject: Re: [PATCH 2/3] golang/xenlight: init xenlight go module
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> The whole point of running ‘go build’ is to make sure that *the code
> we just copied* — the code right now in our own local tree, perhaps
> which was just generated — actually compiles.

I understand that, and in my tests that's the outcome.

> It looks like when we add the `go.mod` further up the tree, it makes
> `go build` ignore the GOPATH environment variable we’re giving it,
> which causes the build failure.  But your “fix” doesn’t make it use
> the in-tree go code again; instead it looks like it causes the
> `go build` command to go and fetch the most recent `master` version
> from xenbits, ignoring the go code in the tree completely. :-)

The GOPATH is "ignored" because it is obsolete in module-aware Go
(this is one of the primary motivations for modules). The build fails
(without changing XEN_GOCODE_URL) because xenproject.golang.org is
*not* the module in the local tree, and the subsequent fetch fails.
However, when the correct import path (after updating XEN_GOCODE_URL)
is used, Go is smart enough to realize we're trying to build our local
module and will not do a fetch.

However, I'm more than happy to just use `go build` instead of `go
install` in that make target.

Thanks
-NR


From xen-devel-bounces@lists.xenproject.org Wed May 13 00:34:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 00:34:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYfLG-0004KQ-1X; Wed, 13 May 2020 00:33:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVKH=63=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYfLE-0004KL-24
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 00:33:44 +0000
X-Inumbo-ID: 65a8f86d-94b1-11ea-a31d-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 65a8f86d-94b1-11ea-a31d-12813bfff9fa;
 Wed, 13 May 2020 00:33:42 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 02B082064A;
 Wed, 13 May 2020 00:33:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589330021;
 bh=JYNlqx2nVMGWRHsMq87EGcMcXPQc5y50+2Ut0JkUSk0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=vWuapgk1JqanidL0A+uFhKkmKD6TBPWPpfJrBnMeDDrDc2hynivnqMLbMct++E+W4
 6zAE+nWkagUB+kWHj0dhjkLsyBUaG6aSoNRSwPad5T5K/fxWQGzAQdBUF7zcWLoQWv
 waQ8i9Tvcu04CMgG6GPgh6/QXFFX+aSbUtcuQQLs=
Date: Tue, 12 May 2020 17:33:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: boris.ostrovsky@oracle.com, jgross@suse.com, julien@xen.org
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <4f6ef562-e213-8952-66d6-0f083bf8c593@xen.org>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-1114903866-1589329903=:26167"
Content-ID: <alpine.DEB.2.21.2005121732000.26167@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peng Fan <peng.fan@nxp.com>, Stefano Stabellini <sstabellini@kernel.org>,
 "minyard@acm.org" <minyard@acm.org>, roman@zededa.com,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1114903866-1589329903=:26167
Content-Type: text/plain; CHARSET=US-ASCII
Content-ID: <alpine.DEB.2.21.2005121732001.26167@sstabellini-ThinkPad-T480s>

I worked with Roman to do several more tests and here is an update on
the situation. We don't know why my patch didn't work when Boris'
patch [1] did; both of them should have worked the same way.

Anyway, we continued with Boris' patch to debug the new mmc error,
which looks like this:

[    3.084464] mmc0: unrecognised SCR structure version 15
[    3.089176] mmc0: error -22 whilst initialising SD card

I asked to add a lot of tracing, see attached diff. We discovered a bug
in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
xen_swiotlb_init, start_dma_addr is not set correctly. This one-line
patch fixes it:

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 0a40ac332a4c..b75c43356eba 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
         * IO TLB memory already allocated. Just use it.
         */
        if (io_tlb_start != 0) {
+               start_dma_addr = io_tlb_start;
                xen_io_tlb_start = phys_to_virt(io_tlb_start);
                goto end;
        }

Unfortunately it doesn't solve the mmc0 error.


As you might notice from the logs, none of the other interesting printks
printed anything, which seems to mean that the memory allocated by
xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should be
just fine.

I am starting to run out of ideas. Do you guys have any suggestions on
what the issue could be, or on what could be done to track it down?

Cheers,

Stefano


On Wed, 6 May 2020, Roman Shaposhnik wrote:
> > So, to recap we have 2 issues as far as I can tell:
> >
> > - virt_to_page not working in some cases on ARM, leading to a crash
> > - WARN_ON for range_straddles_page_boundary which is normal on ARM
> >
> > The appended patch addresses them by:
> >
> > - using pfn_to_page instead of virt_to_page
> > - moving the WARN_ON under a #ifdef (Juergen might have a better
> >   suggestion on how to rework the WARN_ON)
> >
> > Please let me know if this patch works!
> >
> > Cheers,
> >
> > Stefano
> >
> >
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index b6d27762c6f8..0a40ac332a4c 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -322,7 +322,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
> >                         xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
> >                         return NULL;
> >                 }
> > -               SetPageXenRemapped(virt_to_page(ret));
> > +               SetPageXenRemapped(pfn_to_page(PFN_DOWN(phys)));
> >         }
> >         memset(ret, 0, size);
> >         return ret;
> > @@ -346,9 +346,14 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
> >         /* Convert the size to actually allocated. */
> >         size = 1UL << (order + XEN_PAGE_SHIFT);
> >
> > -       if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
> > -                    range_straddles_page_boundary(phys, size)) &&
> > -           TestClearPageXenRemapped(virt_to_page(vaddr)))
> > +#ifdef CONFIG_X86
> > +       if (WARN_ON(dev_addr + size - 1 > dma_mask) ||
> > +                   range_straddles_page_boundary(phys, size)) {
> > +           return;
> > +       }
> > +#endif
> > +
> > +       if (TestClearPageXenRemapped(pfn_to_page(PFN_DOWN(phys))))
> >                 xen_destroy_contiguous_region(phys, order);
> >
> >         xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
> 
> Stefano, with your patch applied, I'm still getting:
> 
> [    0.590705] Unable to handle kernel paging request at virtual
> address fffffe0003700000
> 
> However, Boris' patch seems to get us much closer. It would be awesome
> if you could take a look at that (plus the additional DMA issue that
> seems to depend on how much memory I allocate to Dom0).
--8323329-1114903866-1589329903=:26167
Content-Type: text/plain; NAME=xen.full.log.txt
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2005121731430.26167@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: ATTACHMENT; FILENAME=xen.full.log.txt

VXNpbmcgbW9kdWxlcyBwcm92aWRlZCBieSBib290bG9hZGVyIGluIEZEVA0K
WGVuIDQuMTQtdW5zdGFibGUgKGMvcyBUaHUgQXByIDMwIDEwOjQ1OjA5IDIw
MjAgKzAyMDAgZ2l0OjAxMzViZTgtZGlydHkpIEVGSSBsb2FkZXINCldhcm5p
bmc6IENvdWxkIG5vdCBxdWVyeSB2YXJpYWJsZSBzdG9yZTogMHg4MDAwMDAw
MDAwMDAwMDAzDQotIFVBUlQgZW5hYmxlZCAtDQotIEJvb3QgQ1BVIGJvb3Rp
bmcgLQ0KLSBDdXJyZW50IEVMIDAwMDAwMDA4IC0NCi0gSW5pdGlhbGl6ZSBD
UFUgLQ0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtDQotIFJlYWR5IC0NCihYRU4p
IENoZWNraW5nIGZvciBpbml0cmQgaW4gL2Nob3Nlbg0KKFhFTikgUkFNOiAw
MDAwMDAwMDAwMDAxMDAwIC0gMDAwMDAwMDAwN2VmMWZmZg0KKFhFTikgUkFN
OiAwMDAwMDAwMDA3ZWYyMDAwIC0gMDAwMDAwMDAwN2YwZGZmZg0KKFhFTikg
UkFNOiAwMDAwMDAwMDA3ZjBlMDAwIC0gMDAwMDAwMDAyYmMxYWZmZg0KKFhF
TikgUkFNOiAwMDAwMDAwMDJiYzFiMDAwIC0gMDAwMDAwMDAyYmMyOWZmZg0K
KFhFTikgUkFNOiAwMDAwMDAwMDJiYzJhMDAwIC0gMDAwMDAwMDAyYmQ2ZWZm
Zg0KKFhFTikgUkFNOiAwMDAwMDAwMDJiZDZmMDAwIC0gMDAwMDAwMDAyZDc4
MGZmZg0KKFhFTikgUkFNOiAwMDAwMDAwMDJkNzgxMDAwIC0gMDAwMDAwMDAz
YzlmNmZmZg0KKFhFTikgUkFNOiAwMDAwMDAwMDNjOWY3MDAwIC0gMDAwMDAw
MDAzYzlmOGZmZg0KKFhFTikgUkFNOiAwMDAwMDAwMDNjOWZiMDAwIC0gMDAw
MDAwMDAzYzlmZGZmZg0KKFhFTikgUkFNOiAwMDAwMDAwMDNjOWZlMDAwIC0g
MDAwMDAwMDAzY2IwOGZmZg0KKFhFTikgUkFNOiAwMDAwMDAwMDNjYjEwMDAw
IC0gMDAwMDAwMDAzY2IxMGZmZg0KKFhFTikgUkFNOiAwMDAwMDAwMDNjYjEy
MDAwIC0gMDAwMDAwMDAzY2IxM2ZmZg0KKFhFTikgUkFNOiAwMDAwMDAwMDNj
YjFiMDAwIC0gMDAwMDAwMDAzY2IxY2ZmZg0KKFhFTikgUkFNOiAwMDAwMDAw
MDNjYjFlMDAwIC0gMDAwMDAwMDAzZGYzZmZmZg0KKFhFTikgUkFNOiAwMDAw
MDAwMDNkZjUwMDAwIC0gMDAwMDAwMDAzZGZmZmZmZg0KKFhFTikgUkFNOiAw
MDAwMDAwMDQwMDAwMDAwIC0gMDAwMDAwMDBmYmZmZmZmZg0KKFhFTikNCihY
RU4pIE1PRFVMRVswXTogMDAwMDAwMDAyYmMyYTAwMCAtIDAwMDAwMDAwMmJk
NmUwZDggWGVuDQooWEVOKSBNT0RVTEVbMV06IDAwMDAwMDAwMmJjMWMwMDAg
LSAwMDAwMDAwMDJiYzJhMDAwIERldmljZSBUcmVlDQooWEVOKSBNT0RVTEVb
Ml06IDAwMDAwMDAwMmJkN2MwMDAgLSAwMDAwMDAwMDJkNjdjYTAwIEtlcm5l
bA0KKFhFTikNCihYRU4pIENNRExJTkVbMDAwMDAwMDAyYmQ3YzAwMF06Y2hv
c2VuIGNvbnNvbGU9aHZjMCBlYXJseXByaW50az14ZW4gbm9tb2Rlc2V0IHJv
b3RkZWxheT0xMA0KKFhFTikNCihYRU4pIENvbW1hbmQgbGluZTogZG9tMF9t
ZW09ODIwTSBkb20wX21heF92Y3B1cz0xDQooWEVOKSBEb21haW4gaGVhcCBp
bml0aWFsaXNlZA0KKFhFTikgQm9vdGluZyB1c2luZyBEZXZpY2UgVHJlZQ0K
KFhFTikgIC0+IHVuZmxhdHRlbl9kZXZpY2VfdHJlZSgpDQooWEVOKSBVbmZs
YXR0ZW5pbmcgZGV2aWNlIHRyZWU6DQooWEVOKSBtYWdpYzogMHhkMDBkZmVl
ZA0KKFhFTikgc2l6ZTogMHgwMGUwMDANCihYRU4pIHZlcnNpb246IDB4MDAw
MDExDQooWEVOKSAgIHNpemUgaXMgMHgxYmE0MCBhbGxvY2F0aW5nLi4uDQoo
WEVOKSAgIHVuZmxhdHRlbmluZyA4MDAwZjdmYzAwMDAuLi4NCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yICAtPg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
YWxpYXNlcyAtPiBhbGlhc2VzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBj
aG9zZW4gLT4gY2hvc2VuDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBtb2R1
bGVAMmJkN2MwMDAgLT4gbW9kdWxlDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciByZXNlcnZlZC1tZW1vcnkgLT4gcmVzZXJ2ZWQtbWVtb3J5DQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBsaW51eCxjbWEgLT4gbGludXgsY21hDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciB0aGVybWFsLXpvbmVzIC0+IHRoZXJtYWwt
em9uZXMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNwdS10aGVybWFsIC0+
IGNwdS10aGVybWFsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjb29saW5n
LW1hcHMgLT4gY29vbGluZy1tYXBzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBzb2MgLT4gc29jDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0aW1lckA3
ZTAwMzAwMCAtPiB0aW1lcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdHhw
QDdlMDA0MDAwIC0+IHR4cA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY3By
bWFuQDdlMTAxMDAwIC0+IGNwcm1hbg0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgbWFpbGJveEA3ZTAwYjg4MCAtPiBtYWlsYm94DQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBncGlvQDdlMjAwMDAwIC0+IGdwaW8NCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIGRwaV9ncGlvMCAtPiBkcGlfZ3BpbzANCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGVtbWNfZ3BpbzIyIC0+IGVtbWNfZ3BpbzIyDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBlbW1jX2dwaW8zNCAtPiBlbW1jX2dw
aW8zNA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZW1tY19ncGlvNDggLT4g
ZW1tY19ncGlvNDgNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwY2xrMF9n
cGlvNCAtPiBncGNsazBfZ3BpbzQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGdwY2xrMV9ncGlvNSAtPiBncGNsazFfZ3BpbzUNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGdwY2xrMV9ncGlvNDIgLT4gZ3BjbGsxX2dwaW80Mg0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgZ3BjbGsxX2dwaW80NCAtPiBncGNsazFf
Z3BpbzQ0DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncGNsazJfZ3BpbzYg
LT4gZ3BjbGsyX2dwaW82DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncGNs
azJfZ3BpbzQzIC0+IGdwY2xrMl9ncGlvNDMNCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIGkyYzBfZ3BpbzAgLT4gaTJjMF9ncGlvMA0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgaTJjMF9ncGlvMjggLT4gaTJjMF9ncGlvMjgNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGkyYzBfZ3BpbzQ0IC0+IGkyYzBfZ3BpbzQ0
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmMxX2dwaW8yIC0+IGkyYzFf
Z3BpbzINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyYzFfZ3BpbzQ0IC0+
IGkyYzFfZ3BpbzQ0DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBqdGFnX2dw
aW8yMiAtPiBqdGFnX2dwaW8yMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
cGNtX2dwaW8xOCAtPiBwY21fZ3BpbzE4DQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBwY21fZ3BpbzI4IC0+IHBjbV9ncGlvMjgNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHNkaG9zdF9ncGlvNDggLT4gc2Rob3N0X2dwaW80OA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpMF9ncGlvNyAtPiBzcGkwX2dwaW83
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGkwX2dwaW8zNSAtPiBzcGkw
X2dwaW8zNQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpMV9ncGlvMTYg
LT4gc3BpMV9ncGlvMTYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaTJf
Z3BpbzQwIC0+IHNwaTJfZ3BpbzQwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciB1YXJ0MF9ncGlvMTQgLT4gdWFydDBfZ3BpbzE0DQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciB1YXJ0MF9jdHNydHNfZ3BpbzE2IC0+IHVhcnQwX2N0c3J0
c19ncGlvMTYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQwX2N0c3J0
c19ncGlvMzAgLT4gdWFydDBfY3RzcnRzX2dwaW8zMA0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgdWFydDBfZ3BpbzMyIC0+IHVhcnQwX2dwaW8zMg0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgdWFydDBfZ3BpbzM2IC0+IHVhcnQwX2dw
aW8zNg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdWFydDBfY3RzcnRzX2dw
aW8zOCAtPiB1YXJ0MF9jdHNydHNfZ3BpbzM4DQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciB1YXJ0MV9ncGlvMTQgLT4gdWFydDFfZ3BpbzE0DQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciB1YXJ0MV9jdHNydHNfZ3BpbzE2IC0+IHVhcnQx
X2N0c3J0c19ncGlvMTYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQx
X2dwaW8zMiAtPiB1YXJ0MV9ncGlvMzINCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIHVhcnQxX2N0c3J0c19ncGlvMzAgLT4gdWFydDFfY3RzcnRzX2dwaW8z
MA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdWFydDFfZ3BpbzQwIC0+IHVh
cnQxX2dwaW80MA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdWFydDFfY3Rz
cnRzX2dwaW80MiAtPiB1YXJ0MV9jdHNydHNfZ3BpbzQyDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBncGNsazBfZ3BpbzQ5IC0+IGdwY2xrMF9ncGlvNDkN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1ncGNsayAtPiBwaW4tZ3Bj
bGsNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwY2xrMV9ncGlvNTAgLT4g
Z3BjbGsxX2dwaW81MA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLWdw
Y2xrIC0+IHBpbi1ncGNsaw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZ3Bj
bGsyX2dwaW81MSAtPiBncGNsazJfZ3BpbzUxDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwaW4tZ3BjbGsgLT4gcGluLWdwY2xrDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBpMmMwX2dwaW80NiAtPiBpMmMwX2dwaW80Ng0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgcGluLXNkYSAtPiBwaW4tc2RhDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBwaW4tc2NsIC0+IHBpbi1zY2wNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIGkyYzFfZ3BpbzQ2IC0+IGkyYzFfZ3BpbzQ2DQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc2RhIC0+IHBpbi1zZGENCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zY2wgLT4gcGluLXNjbA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjM19ncGlvMiAtPiBpMmMzX2dwaW8y
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc2RhIC0+IHBpbi1zZGEN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zY2wgLT4gcGluLXNjbA0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjM19ncGlvNCAtPiBpMmMzX2dw
aW80DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc2RhIC0+IHBpbi1z
ZGENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zY2wgLT4gcGluLXNj
bA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjNF9ncGlvNiAtPiBpMmM0
X2dwaW82DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc2RhIC0+IHBp
bi1zZGENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zY2wgLT4gcGlu
LXNjbA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjNF9ncGlvOCAtPiBp
MmM0X2dwaW84DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc2RhIC0+
IHBpbi1zZGENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1zY2wgLT4g
cGluLXNjbA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjNV9ncGlvMTAg
LT4gaTJjNV9ncGlvMTANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1z
ZGEgLT4gcGluLXNkYQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLXNj
bCAtPiBwaW4tc2NsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmM1X2dw
aW8xMiAtPiBpMmM1X2dwaW8xMg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
cGluLXNkYSAtPiBwaW4tc2RhDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBw
aW4tc2NsIC0+IHBpbi1zY2wNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGky
YzZfZ3BpbzAgLT4gaTJjNl9ncGlvMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgcGluLXNkYSAtPiBwaW4tc2RhDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBwaW4tc2NsIC0+IHBpbi1zY2wNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IGkyYzZfZ3BpbzIyIC0+IGkyYzZfZ3BpbzIyDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwaW4tc2RhIC0+IHBpbi1zZGENCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHBpbi1zY2wgLT4gcGluLXNjbA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgaTJjX3NsYXZlX2dwaW84IC0+IGkyY19zbGF2ZV9ncGlvOA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgcGlucy1pMmMtc2xhdmUgLT4gcGlucy1p
MmMtc2xhdmUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGp0YWdfZ3BpbzQ4
IC0+IGp0YWdfZ3BpbzQ4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW5z
LWp0YWcgLT4gcGlucy1qdGFnDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBt
aWlfZ3BpbzI4IC0+IG1paV9ncGlvMjgNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIHBpbnMtbWlpIC0+IHBpbnMtbWlpDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBtaWlfZ3BpbzM2IC0+IG1paV9ncGlvMzYNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHBpbnMtbWlpIC0+IHBpbnMtbWlpDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBwY21fZ3BpbzUwIC0+IHBjbV9ncGlvNTANCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHBpbnMtcGNtIC0+IHBpbnMtcGNtDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBwd20wXzBfZ3BpbzEyIC0+IHB3bTBfMF9ncGlv
MTINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1wd20gLT4gcGluLXB3
bQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHdtMF8wX2dwaW8xOCAtPiBw
d20wXzBfZ3BpbzE4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tcHdt
IC0+IHBpbi1wd20NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHB3bTFfMF9n
cGlvNDAgLT4gcHdtMV8wX2dwaW80MA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgcGluLXB3bSAtPiBwaW4tcHdtDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBwd20wXzFfZ3BpbzEzIC0+IHB3bTBfMV9ncGlvMTMNCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIHBpbi1wd20gLT4gcGluLXB3bQ0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3IgcHdtMF8xX2dwaW8xOSAtPiBwd20wXzFfZ3BpbzE5DQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tcHdtIC0+IHBpbi1wd20NCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHB3bTFfMV9ncGlvNDEgLT4gcHdtMV8x
X2dwaW80MQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLXB3bSAtPiBw
aW4tcHdtDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwd20wXzFfZ3BpbzQ1
IC0+IHB3bTBfMV9ncGlvNDUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBp
bi1wd20gLT4gcGluLXB3bQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHdt
MF8wX2dwaW81MiAtPiBwd20wXzBfZ3BpbzUyDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwaW4tcHdtIC0+IHBpbi1wd20NCihYRU4pIGZpeGVkIHVwIG5h
bWUgZm9yIHB3bTBfMV9ncGlvNTMgLT4gcHdtMF8xX2dwaW81Mw0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgcGluLXB3bSAtPiBwaW4tcHdtDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciByZ21paV9ncGlvMzUgLT4gcmdtaWlfZ3BpbzM1
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tc3RhcnQtc3RvcCAtPiBw
aW4tc3RhcnQtc3RvcA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLXJ4
LW9rIC0+IHBpbi1yeC1vaw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igcmdt
aWlfaXJxX2dwaW8zNCAtPiByZ21paV9pcnFfZ3BpbzM0DQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBwaW4taXJxIC0+IHBpbi1pcnENCihYRU4pIGZpeGVk
IHVwIG5hbWUgZm9yIHJnbWlpX2lycV9ncGlvMzkgLT4gcmdtaWlfaXJxX2dw
aW8zOQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLWlycSAtPiBwaW4t
aXJxDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciByZ21paV9tZGlvX2dwaW8y
OCAtPiByZ21paV9tZGlvX2dwaW8yOA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgcGlucy1tZGlvIC0+IHBpbnMtbWRpbw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgcmdtaWlfbWRpb19ncGlvMzcgLT4gcmdtaWlfbWRpb19ncGlvMzcN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbnMtbWRpbyAtPiBwaW5zLW1k
aW8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaTBfZ3BpbzQ2IC0+IHNw
aTBfZ3BpbzQ2DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW5zLXNwaSAt
PiBwaW5zLXNwaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpMl9ncGlv
NDYgLT4gc3BpMl9ncGlvNDYNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBp
bnMtc3BpIC0+IHBpbnMtc3BpDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBz
cGkzX2dwaW8wIC0+IHNwaTNfZ3BpbzANCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIHBpbnMtc3BpIC0+IHBpbnMtc3BpDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBzcGk0X2dwaW80IC0+IHNwaTRfZ3BpbzQNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIHBpbnMtc3BpIC0+IHBpbnMtc3BpDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBzcGk1X2dwaW8xMiAtPiBzcGk1X2dwaW8xMg0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgcGlucy1zcGkgLT4gcGlucy1zcGkNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHNwaTZfZ3BpbzE4IC0+IHNwaTZfZ3BpbzE4
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW5zLXNwaSAtPiBwaW5zLXNw
aQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgdWFydDJfZ3BpbzAgLT4gdWFy
dDJfZ3BpbzANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi10eCAtPiBw
aW4tdHgNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1yeCAtPiBwaW4t
cngNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQyX2N0c3J0c19ncGlv
MiAtPiB1YXJ0Ml9jdHNydHNfZ3BpbzINCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIHBpbi1jdHMgLT4gcGluLWN0cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgcGluLXJ0cyAtPiBwaW4tcnRzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciB1YXJ0M19ncGlvNCAtPiB1YXJ0M19ncGlvNA0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgcGluLXR4IC0+IHBpbi10eA0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgcGluLXJ4IC0+IHBpbi1yeA0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgdWFydDNfY3RzcnRzX2dwaW82IC0+IHVhcnQzX2N0c3J0c19ncGlvNg0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLWN0cyAtPiBwaW4tY3RzDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tcnRzIC0+IHBpbi1ydHMNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHVhcnQ0X2dwaW84IC0+IHVhcnQ0X2dw
aW84DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tdHggLT4gcGluLXR4
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwaW4tcnggLT4gcGluLXJ4DQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0NF9jdHNydHNfZ3BpbzEwIC0+
IHVhcnQ0X2N0c3J0c19ncGlvMTANCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHBpbi1jdHMgLT4gcGluLWN0cw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
cGluLXJ0cyAtPiBwaW4tcnRzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1
YXJ0NV9ncGlvMTIgLT4gdWFydDVfZ3BpbzEyDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwaW4tdHggLT4gcGluLXR4DQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBwaW4tcnggLT4gcGluLXJ4DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciB1YXJ0NV9jdHNydHNfZ3BpbzE0IC0+IHVhcnQ1X2N0c3J0c19ncGlvMTQN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpbi1jdHMgLT4gcGluLWN0cw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGluLXJ0cyAtPiBwaW4tcnRzDQoo
WEVOKSBmaXhlZCB1cCBuYW1lIGZvciBncGlvb3V0IC0+IGdwaW9vdXQNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGFsdDAgLT4gYWx0MA0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3IgZHBpXzE4Yml0X2dwaW8wIC0+IGRwaV8xOGJpdF9n
cGlvMA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpMF9waW5zIC0+IHNw
aTBfcGlucw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpMF9jc19waW5z
IC0+IHNwaTBfY3NfcGlucw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3Bp
M19waW5zIC0+IHNwaTNfcGlucw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
c3BpM19jc19waW5zIC0+IHNwaTNfY3NfcGlucw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3Igc3BpNF9waW5zIC0+IHNwaTRfcGlucw0KKFhFTikgZml4ZWQg
dXAgbmFtZSBmb3Igc3BpNF9jc19waW5zIC0+IHNwaTRfY3NfcGlucw0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpNV9waW5zIC0+IHNwaTVfcGlucw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpNV9jc19waW5zIC0+IHNwaTVf
Y3NfcGlucw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpNl9waW5zIC0+
IHNwaTZfcGlucw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpNl9jc19w
aW5zIC0+IHNwaTZfY3NfcGlucw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
aTJjMCAtPiBpMmMwDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmMxIC0+
IGkyYzENCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyYzMgLT4gaTJjMw0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjNCAtPiBpMmM0DQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBpMmM1IC0+IGkyYzUNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGkyYzYgLT4gaTJjNg0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgaTJzIC0+IGkycw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2Rpb19w
aW5zIC0+IHNkaW9fcGlucw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYnRf
cGlucyAtPiBidF9waW5zDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1YXJ0
MF9waW5zIC0+IHVhcnQwX3BpbnMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9y
IHVhcnQxX3BpbnMgLT4gdWFydDFfcGlucw0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgdWFydDJfcGlucyAtPiB1YXJ0Ml9waW5zDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciB1YXJ0M19waW5zIC0+IHVhcnQzX3BpbnMNCihYRU4pIGZp
eGVkIHVwIG5hbWUgZm9yIHVhcnQ0X3BpbnMgLT4gdWFydDRfcGlucw0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgdWFydDVfcGlucyAtPiB1YXJ0NV9waW5z
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhdWRpb19waW5zIC0+IGF1ZGlv
X3BpbnMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNlcmlhbEA3ZTIwMTAw
MCAtPiBzZXJpYWwNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1tY0A3ZTIw
MjAwMCAtPiBtbWMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyc0A3ZTIw
MzAwMCAtPiBpMnMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaUA3ZTIw
NDAwMCAtPiBzcGkNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNwaWRldkAw
IC0+IHNwaWRldg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpZGV2QDEg
LT4gc3BpZGV2DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmNAN2UyMDUw
MDAgLT4gaTJjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmMwbXV4IC0+
IGkyYzBtdXgNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGkyY0AwIC0+IGky
Yw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjQDEgLT4gaTJjDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBkcGlAN2UyMDgwMDAgLT4gZHBpDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBkc2lAN2UyMDkwMDAgLT4gZHNpDQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBhdXhAN2UyMTUwMDAgLT4gYXV4DQooWEVO
KSBmaXhlZCB1cCBuYW1lIGZvciBzZXJpYWxAN2UyMTUwNDAgLT4gc2VyaWFs
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGlAN2UyMTUwODAgLT4gc3Bp
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBzcGlAN2UyMTUwYzAgLT4gc3Bp
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBwd21AN2UyMGMwMDAgLT4gcHdt
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBodnNAN2U0MDAwMDAgLT4gaHZz
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBkc2lAN2U3MDAwMDAgLT4gZHNp
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBpMmNAN2U4MDQwMDAgLT4gaTJj
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB2ZWNAN2U4MDYwMDAgLT4gdmVj
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB1c2JAN2U5ODAwMDAgLT4gdXNi
DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBsb2NhbF9pbnRjQDQwMDAwMDAw
IC0+IGxvY2FsX2ludGMNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwIC0+IGludGVycnVwdC1jb250cm9s
bGVyDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBhdnMtbW9uaXRvckA3ZDVk
MjAwMCAtPiBhdnMtbW9uaXRvcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
dGhlcm1hbCAtPiB0aGVybWFsDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBk
bWFAN2UwMDcwMDAgLT4gZG1hDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB3
YXRjaGRvZ0A3ZTEwMDAwMCAtPiB3YXRjaGRvZw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3Igcm5nQDdlMTA0MDAwIC0+IHJuZw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3Igc2VyaWFsQDdlMjAxNDAwIC0+IHNlcmlhbA0KKFhFTikgZml4
ZWQgdXAgbmFtZSBmb3Igc2VyaWFsQDdlMjAxNjAwIC0+IHNlcmlhbA0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3Igc2VyaWFsQDdlMjAxODAwIC0+IHNlcmlh
bA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc2VyaWFsQDdlMjAxYTAwIC0+
IHNlcmlhbA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpQDdlMjA0NjAw
IC0+IHNwaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpQDdlMjA0ODAw
IC0+IHNwaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpQDdlMjA0YTAw
IC0+IHNwaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Igc3BpQDdlMjA0YzAw
IC0+IHNwaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjQDdlMjA1NjAw
IC0+IGkyYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjQDdlMjA1ODAw
IC0+IGkyYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjQDdlMjA1YTAw
IC0+IGkyYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJjQDdlMjA1YzAw
IC0+IGkyYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcHdtQDdlMjBjODAw
IC0+IHB3bQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZmlybXdhcmUgLT4g
ZmlybXdhcmUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGdwaW8gLT4gZ3Bp
bw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcG93ZXIgLT4gcG93ZXINCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpeGVsdmFsdmVAN2UyMDYwMDAgLT4g
cGl4ZWx2YWx2ZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGl4ZWx2YWx2
ZUA3ZTIwNzAwMCAtPiBwaXhlbHZhbHZlDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBtbWNAN2UzMDAwMDAgLT4gbW1jDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciBtbWNuckA3ZTMwMDAwMCAtPiBtbWNucg0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgZmlybXdhcmVrbXNAN2U2MDAwMDAgLT4gZmlybXdhcmVrbXMN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNtaUA3ZTYwMDAwMCAtPiBzbWkN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNzaUA3ZTgwMDAwMCAtPiBjc2kN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNzaUA3ZTgwMTAwMCAtPiBjc2kN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBvcnQgLT4gcG9ydA0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgZW5kcG9pbnQgLT4gZW5kcG9pbnQNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIGF4aXBlcmYgLT4gYXhpcGVyZg0KKFhFTikg
Zml4ZWQgdXAgbmFtZSBmb3IgZ3Bpb21lbSAtPiBncGlvbWVtDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBmYiAtPiBmYg0KKFhFTikgZml4ZWQgdXAgbmFt
ZSBmb3IgdmNzbSAtPiB2Y3NtDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBz
b3VuZCAtPiBzb3VuZA0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGl4ZWx2
YWx2ZUA3ZTIwYTAwMCAtPiBwaXhlbHZhbHZlDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBwaXhlbHZhbHZlQDdlMjE2MDAwIC0+IHBpeGVsdmFsdmUNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBpeGVsdmFsdmVAN2VjMTIwMDAgLT4g
cGl4ZWx2YWx2ZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY2xvY2tAN2Vm
MDAwMDAgLT4gY2xvY2sNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGhkbWlA
N2VmMDA3MDAgLT4gaGRtaQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaTJj
QDdlZjA0NTAwIC0+IGkyYw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgaGRt
aUA3ZWYwNTcwMCAtPiBoZG1pDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBp
MmNAN2VmMDk1MDAgLT4gaTJjDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBj
bG9ja3MgLT4gY2xvY2tzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjbGst
b3NjIC0+IGNsay1vc2MNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGNsay11
c2IgLT4gY2xrLXVzYg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcGh5IC0+
IHBoeQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgYXJtLXBtdSAtPiBhcm0t
cG11DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciB0aW1lciAtPiB0aW1lcg0K
KFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgY3B1cyAtPiBjcHVzDQooWEVOKSBm
aXhlZCB1cCBuYW1lIGZvciBjcHVAMCAtPiBjcHUNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGNwdUAxIC0+IGNwdQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBm
b3IgY3B1QDIgLT4gY3B1DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjcHVA
MyAtPiBjcHUNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNjYiAtPiBzY2IN
CihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHBjaWVAN2Q1MDAwMDAgLT4gcGNp
ZQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZXRoZXJuZXRAN2Q1ODAwMDAg
LT4gZXRoZXJuZXQNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIG1kaW9AZTE0
IC0+IG1kaW8NCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIGV0aGVybmV0LXBo
eUAxIC0+IGV0aGVybmV0LXBoeQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
ZG1hQDdlMDA3YjAwIC0+IGRtYQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3Ig
bWFpbGJveEA3ZTAwYjg0MCAtPiBtYWlsYm94DQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBiY20yODM1X2F1ZGlvIC0+IGJjbTI4MzVfYXVkaW8NCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIHhoY2lAN2U5YzAwMDAgLT4geGhjaQ0KKFhF
TikgZml4ZWQgdXAgbmFtZSBmb3IgaGV2Yy1kZWNvZGVyQDdlYjAwMDAwIC0+
IGhldmMtZGVjb2Rlcg0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgcnBpdmlk
LWxvY2FsLWludGNAN2ViMTAwMDAgLT4gcnBpdmlkLWxvY2FsLWludGMNCihY
RU4pIGZpeGVkIHVwIG5hbWUgZm9yIGgyNjQtZGVjb2RlckA3ZWIyMDAwMCAt
PiBoMjY0LWRlY29kZXINCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHZwOS1k
ZWNvZGVyQDdlYjMwMDAwIC0+IHZwOS1kZWNvZGVyDQooWEVOKSBmaXhlZCB1
cCBuYW1lIGZvciBsZWRzIC0+IGxlZHMNCihYRU4pIGZpeGVkIHVwIG5hbWUg
Zm9yIGFjdCAtPiBhY3QNCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHB3ciAt
PiBwd3INCihYRU4pIGZpeGVkIHVwIG5hbWUgZm9yIHNkX2lvXzF2OF9yZWcg
LT4gc2RfaW9fMXY4X3JlZw0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgX19v
dmVycmlkZXNfXyAtPiBfX292ZXJyaWRlc19fDQooWEVOKSBmaXhlZCB1cCBu
YW1lIGZvciBmaXhlZHJlZ3VsYXRvcl8zdjMgLT4gZml4ZWRyZWd1bGF0b3Jf
M3YzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBmaXhlZHJlZ3VsYXRvcl81
djAgLT4gZml4ZWRyZWd1bGF0b3JfNXYwDQooWEVOKSBmaXhlZCB1cCBuYW1l
IGZvciB2M2RidXMgLT4gdjNkYnVzDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciB2M2RAN2VjMDQwMDAgLT4gdjNkDQooWEVOKSBmaXhlZCB1cCBuYW1lIGZv
ciBncHUgLT4gZ3B1DQooWEVOKSBmaXhlZCB1cCBuYW1lIGZvciBjbGstMTA4
TSAtPiBjbGstMTA4TQ0KKFhFTikgZml4ZWQgdXAgbmFtZSBmb3IgZmlybXdh
cmUtY2xvY2tzIC0+IGZpcm13YXJlLWNsb2Nrcw0KKFhFTikgZml4ZWQgdXAg
bmFtZSBmb3IgZW1tYzJidXMgLT4gZW1tYzJidXMNCihYRU4pIGZpeGVkIHVw
IG5hbWUgZm9yIGVtbWMyQDdlMzQwMDAwIC0+IGVtbWMyDQooWEVOKSBmaXhl
ZCB1cCBuYW1lIGZvciBzZF92Y2NfcmVnIC0+IHNkX3ZjY19yZWcNCihYRU4p
IGZpeGVkIHVwIG5hbWUgZm9yIF9fc3ltYm9sc19fIC0+IF9fc3ltYm9sc19f
DQooWEVOKSAgPC0gdW5mbGF0dGVuX2RldmljZV90cmVlKCkNCihYRU4pIGFk
ZGluZyBEVCBhbGlhczpzZXJpYWwwOiBzdGVtPXNlcmlhbCBpZD0wIG5vZGU9
L3NvYy9zZXJpYWxAN2UyMTUwNDANCihYRU4pIGFkZGluZyBEVCBhbGlhczpz
ZXJpYWwxOiBzdGVtPXNlcmlhbCBpZD0xIG5vZGU9L3NvYy9zZXJpYWxAN2Uy
MDEwMDANCihYRU4pIGFkZGluZyBEVCBhbGlhczpldGhlcm5ldDA6IHN0ZW09
ZXRoZXJuZXQgaWQ9MCBub2RlPS9zY2IvZXRoZXJuZXRAN2Q1ODAwMDANCihY
RU4pIGFkZGluZyBEVCBhbGlhczpwY2llMDogc3RlbT1wY2llIGlkPTAgbm9k
ZT0vc2NiL3BjaWVAN2Q1MDAwMDANCihYRU4pIGFkZGluZyBEVCBhbGlhczph
dWRpbzogc3RlbT1hdWRpbyBpZD0wIG5vZGU9L3NjYi9tYWlsYm94QDdlMDBi
ODQwL2JjbTI4MzVfYXVkaW8NCihYRU4pIGFkZGluZyBEVCBhbGlhczphdXg6
IHN0ZW09YXV4IGlkPTAgbm9kZT0vc29jL2F1eEA3ZTIxNTAwMA0KKFhFTikg
YWRkaW5nIERUIGFsaWFzOnNvdW5kOiBzdGVtPXNvdW5kIGlkPTAgbm9kZT0v
c29jL3NvdW5kDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6c29jOiBzdGVtPXNv
YyBpZD0wIG5vZGU9L3NvYw0KKFhFTikgYWRkaW5nIERUIGFsaWFzOmRtYTog
c3RlbT1kbWEgaWQ9MCBub2RlPS9zb2MvZG1hQDdlMDA3MDAwDQooWEVOKSBh
ZGRpbmcgRFQgYWxpYXM6d2F0Y2hkb2c6IHN0ZW09d2F0Y2hkb2cgaWQ9MCBu
b2RlPS9zb2Mvd2F0Y2hkb2dAN2UxMDAwMDANCihYRU4pIGFkZGluZyBEVCBh
bGlhczpyYW5kb206IHN0ZW09cmFuZG9tIGlkPTAgbm9kZT0vc29jL3JuZ0A3
ZTEwNDAwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOm1haWxib3g6IHN0ZW09
bWFpbGJveCBpZD0wIG5vZGU9L3NvYy9tYWlsYm94QDdlMDBiODgwDQooWEVO
KSBhZGRpbmcgRFQgYWxpYXM6Z3Bpbzogc3RlbT1ncGlvIGlkPTAgbm9kZT0v
c29jL2dwaW9AN2UyMDAwMDANCihYRU4pIGFkZGluZyBEVCBhbGlhczp1YXJ0
MDogc3RlbT11YXJ0IGlkPTAgbm9kZT0vc29jL3NlcmlhbEA3ZTIwMTAwMA0K
KFhFTikgYWRkaW5nIERUIGFsaWFzOnVhcnQxOiBzdGVtPXVhcnQgaWQ9MSBu
b2RlPS9zb2Mvc2VyaWFsQDdlMjE1MDQwDQooWEVOKSBhZGRpbmcgRFQgYWxp
YXM6c2Rob3N0OiBzdGVtPXNkaG9zdCBpZD0wIG5vZGU9L3NvYy9tbWNAN2Uy
MDIwMDANCihYRU4pIGFkZGluZyBEVCBhbGlhczptbWM6IHN0ZW09bW1jIGlk
PTAgbm9kZT0vc29jL21tY0A3ZTMwMDAwMA0KKFhFTikgYWRkaW5nIERUIGFs
aWFzOm1tYzE6IHN0ZW09bW1jIGlkPTEgbm9kZT0vc29jL21tY25yQDdlMzAw
MDAwDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6bW1jMDogc3RlbT1tbWMgaWQ9
MCBub2RlPS9lbW1jMmJ1cy9lbW1jMkA3ZTM0MDAwMA0KKFhFTikgYWRkaW5n
IERUIGFsaWFzOmkyczogc3RlbT1pMnMgaWQ9MCBub2RlPS9zb2MvaTJzQDdl
MjAzMDAwDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6aTJjMDogc3RlbT1pMmMg
aWQ9MCBub2RlPS9zb2MvaTJjMG11eC9pMmNAMA0KKFhFTikgYWRkaW5nIERU
IGFsaWFzOmkyYzE6IHN0ZW09aTJjIGlkPTEgbm9kZT0vc29jL2kyY0A3ZTgw
NDAwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOmkyYzEwOiBzdGVtPWkyYyBp
ZD0xMCBub2RlPS9zb2MvaTJjMG11eC9pMmNAMQ0KKFhFTikgYWRkaW5nIERU
IGFsaWFzOnNwaTA6IHN0ZW09c3BpIGlkPTAgbm9kZT0vc29jL3NwaUA3ZTIw
NDAwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOnNwaTE6IHN0ZW09c3BpIGlk
PTEgbm9kZT0vc29jL3NwaUA3ZTIxNTA4MA0KKFhFTikgYWRkaW5nIERUIGFs
aWFzOnNwaTI6IHN0ZW09c3BpIGlkPTIgbm9kZT0vc29jL3NwaUA3ZTIxNTBj
MA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOnVzYjogc3RlbT11c2IgaWQ9MCBu
b2RlPS9zb2MvdXNiQDdlOTgwMDAwDQooWEVOKSBhZGRpbmcgRFQgYWxpYXM6
bGVkczogc3RlbT1sZWRzIGlkPTAgbm9kZT0vbGVkcw0KKFhFTikgYWRkaW5n
IERUIGFsaWFzOmZiOiBzdGVtPWZiIGlkPTAgbm9kZT0vc29jL2ZiDQooWEVO
KSBhZGRpbmcgRFQgYWxpYXM6dGhlcm1hbDogc3RlbT10aGVybWFsIGlkPTAg
bm9kZT0vc29jL2F2cy1tb25pdG9yQDdkNWQyMDAwL3RoZXJtYWwNCihYRU4p
IGFkZGluZyBEVCBhbGlhczpheGlwZXJmOiBzdGVtPWF4aXBlcmYgaWQ9MCBu
b2RlPS9zb2MvYXhpcGVyZg0KKFhFTikgYWRkaW5nIERUIGFsaWFzOm1tYzI6
IHN0ZW09bW1jIGlkPTIgbm9kZT0vc29jL21tY0A3ZTIwMjAwMA0KKFhFTikg
YWRkaW5nIERUIGFsaWFzOmkyYzM6IHN0ZW09aTJjIGlkPTMgbm9kZT0vc29j
L2kyY0A3ZTIwNTYwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOmkyYzQ6IHN0
ZW09aTJjIGlkPTQgbm9kZT0vc29jL2kyY0A3ZTIwNTgwMA0KKFhFTikgYWRk
aW5nIERUIGFsaWFzOmkyYzU6IHN0ZW09aTJjIGlkPTUgbm9kZT0vc29jL2ky
Y0A3ZTIwNWEwMA0KKFhFTikgYWRkaW5nIERUIGFsaWFzOmkyYzY6IHN0ZW09
aTJjIGlkPTYgbm9kZT0vc29jL2kyY0A3ZTIwNWMwMA0KKFhFTikgYWRkaW5n
IERUIGFsaWFzOmVtbWMyYnVzOiBzdGVtPWVtbWMyYnVzIGlkPTAgbm9kZT0v
ZW1tYzJidXMNCihYRU4pIFBsYXRmb3JtOiBSYXNwYmVycnkgUGkgNA0KKFhF
TikgTm8gZHR1YXJ0IHBhdGggY29uZmlndXJlZA0KKFhFTikgQmFkIGNvbnNv
bGU9IG9wdGlvbiAnZHR1YXJ0Jw0KIFhlbiA0LjE0LXVuc3RhYmxlDQooWEVO
KSBYZW4gdmVyc2lvbiA0LjE0LXVuc3RhYmxlIChAKSAoZ2NjIChBbHBpbmUg
Ni40LjApIDYuNC4wKSBkZWJ1Zz15ICBUdWUgTWF5IDEyIDAzOjU4OjA0IFVU
QyAyMDIwDQooWEVOKSBMYXRlc3QgQ2hhbmdlU2V0OiBUaHUgQXByIDMwIDEw
OjQ1OjA5IDIwMjAgKzAyMDAgZ2l0OjAxMzViZTgtZGlydHkNCihYRU4pIGJ1
aWxkLWlkOiA5Zjc4MGI4MjM5Y2FmMGM2MmIyOTg3OTk2MTc3ZDMyYzZlNmRj
OWRjDQooWEVOKSBQcm9jZXNzb3I6IDQxMGZkMDgzOiAiQVJNIExpbWl0ZWQi
LCB2YXJpYW50OiAweDAsIHBhcnQgMHhkMDgsIHJldiAweDMNCihYRU4pIDY0
LWJpdCBFeGVjdXRpb246DQooWEVOKSAgIFByb2Nlc3NvciBGZWF0dXJlczog
MDAwMDAwMDAwMDAwMjIyMiAwMDAwMDAwMDAwMDAwMDAwDQooWEVOKSAgICAg
RXhjZXB0aW9uIExldmVsczogRUwzOjY0KzMyIEVMMjo2NCszMiBFTDE6NjQr
MzIgRUwwOjY0KzMyDQooWEVOKSAgICAgRXh0ZW5zaW9uczogRmxvYXRpbmdQ
b2ludCBBZHZhbmNlZFNJTUQNCihYRU4pICAgRGVidWcgRmVhdHVyZXM6IDAw
MDAwMDAwMTAzMDUxMDYgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICBBdXhp
bGlhcnkgRmVhdHVyZXM6IDAwMDAwMDAwMDAwMDAwMDAgMDAwMDAwMDAwMDAw
MDAwMA0KKFhFTikgICBNZW1vcnkgTW9kZWwgRmVhdHVyZXM6IDAwMDAwMDAw
MDAwMDExMjQgMDAwMDAwMDAwMDAwMDAwMA0KKFhFTikgICBJU0EgRmVhdHVy
ZXM6ICAwMDAwMDAwMDAwMDEwMDAwIDAwMDAwMDAwMDAwMDAwMDANCihYRU4p
IDMyLWJpdCBFeGVjdXRpb246DQooWEVOKSAgIFByb2Nlc3NvciBGZWF0dXJl
czogMDAwMDAxMzE6MDAwMTEwMTENCihYRU4pICAgICBJbnN0cnVjdGlvbiBT
ZXRzOiBBQXJjaDMyIEEzMiBUaHVtYiBUaHVtYi0yIEphemVsbGUNCihYRU4p
ICAgICBFeHRlbnNpb25zOiBHZW5lcmljVGltZXIgU2VjdXJpdHkNCihYRU4p
ICAgRGVidWcgRmVhdHVyZXM6IDAzMDEwMDY2DQooWEVOKSAgIEF1eGlsaWFy
eSBGZWF0dXJlczogMDAwMDAwMDANCihYRU4pICAgTWVtb3J5IE1vZGVsIEZl
YXR1cmVzOiAxMDIwMTEwNSA0MDAwMDAwMCAwMTI2MDAwMCAwMjEwMjIxMQ0K
KFhFTikgIElTQSBGZWF0dXJlczogMDIxMDExMTAgMTMxMTIxMTEgMjEyMzIw
NDIgMDExMTIxMzEgMDAwMTExNDIgMDAwMTAwMDENCihYRU4pIFNNUDogQWxs
b3dpbmcgNCBDUFVzDQooWEVOKSBlbmFibGVkIHdvcmthcm91bmQgZm9yOiBB
Uk0gZXJyYXR1bSAxMzE5NTM3DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vdGltZXIsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTEgaW50bGVuPTEyDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDEgMHgwMDAwMDAwZC4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9
MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vdGltZXIsIGluZGV4PTENCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTEgaW50bGVuPTEy
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LGludHNwZWM9WzB4MDAwMDAwMDEgMHgwMDAwMDAwZS4uLl0sb2ludHNpemU9
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQt
Y29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNp
emU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vdGltZXIsIGluZGV4PTINCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTEgaW50bGVu
PTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLGludHNwZWM9WzB4MDAwMDAwMDEgMHgwMDAwMDAwYi4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2Vf
Z2V0X3Jhd19pcnE6IGRldj0vdGltZXIsIGluZGV4PTMNCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTEgaW50
bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDEgMHgwMDAwMDAwYS4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBHZW5lcmlj
IFRpbWVyIElSUTogcGh5cz0zMCBoeXA9MjYgdmlydD0yNyBGcmVxOiA1NDAw
MCBLSHoNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAgKioNCihYRU4pIERU
OiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikg
RFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDQwMDQxMDAwPDM+DQooWEVO
KSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAv
DQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZh
dWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9NDAwNDEwMDAN
CihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2MwMDAwMDAsIHM9MjAwMDAw
MCwgZGE9NDAwNDEwMDANCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9NDAw
MDAwMDAsIHM9ODAwMDAwLCBkYT00MDA0MTAwMA0KKFhFTikgRFQ6IHBhcmVu
dCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZmODAwMDAwPDM+
DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDQxMDAwDQooWEVOKSBEVDogb25l
IGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZjg0MTAwMDwz
Pg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5h
PTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiA0MDA0MjAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcg
cmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAw
LCBzPTE4MDAwMDAsIGRhPTQwMDQyMDAwDQooWEVOKSBEVDogZGVmYXVsdCBt
YXAsIGNwPTdjMDAwMDAwLCBzPTIwMDAwMDAsIGRhPTQwMDQyMDAwDQooWEVO
KSBEVDogZGVmYXVsdCBtYXAsIGNwPTQwMDAwMDAwLCBzPTgwMDAwMCwgZGE9
NDAwNDIwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwz
PiAwMDAwMDAwMDwzPiBmZjgwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zm
c2V0OiA0MjAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8
Mz4gMDAwMDAwMDA8Mz4gZmY4NDIwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVk
IHJvb3Qgbm9kZQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZp
Y2UgL3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQoo
WEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gNDAwNDQwMDA8Mz4N
CihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEp
IG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6
IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT00MDA0
NDAwMA0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03YzAwMDAwMCwgcz0y
MDAwMDAwLCBkYT00MDA0NDAwMA0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBj
cD00MDAwMDAwMCwgcz04MDAwMDAsIGRhPTQwMDQ0MDAwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmY4MDAw
MDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogNDQwMDANCihYRU4pIERU
OiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZmODQ0
MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pIERU
OiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvaW50ZXJydXB0LWNv
bnRyb2xsZXJANDAwNDEwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5n
IGFkZHJlc3M6PDM+IDQwMDQ2MDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fs
a2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2Uw
MDAwMDAsIHM9MTgwMDAwMCwgZGE9NDAwNDYwMDANCihYRU4pIERUOiBkZWZh
dWx0IG1hcCwgY3A9N2MwMDAwMDAsIHM9MjAwMDAwMCwgZGE9NDAwNDYwMDAN
CihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9NDAwMDAwMDAsIHM9ODAwMDAw
LCBkYT00MDA0NjAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBm
b3I6PDM+IDAwMDAwMDAwPDM+IGZmODAwMDAwPDM+DQooWEVOKSBEVDogd2l0
aCBvZmZzZXQ6IDQ2MDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0
aW9uOjwzPiAwMDAwMDAwMDwzPiBmZjg0NjAwMDwzPg0KKFhFTikgRFQ6IHJl
YWNoZWQgcm9vdCBub2RlDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBpbmRl
eD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0xIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMSAweDAw
MDAwMDA5Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIEdJQ3YyIGluaXRpYWxpemF0aW9uOg0KKFhFTikgICAgICAgICBn
aWNfZGlzdF9hZGRyPTAwMDAwMDAwZmY4NDEwMDANCihYRU4pICAgICAgICAg
Z2ljX2NwdV9hZGRyPTAwMDAwMDAwZmY4NDIwMDANCihYRU4pICAgICAgICAg
Z2ljX2h5cF9hZGRyPTAwMDAwMDAwZmY4NDQwMDANCihYRU4pICAgICAgICAg
Z2ljX3ZjcHVfYWRkcj0wMDAwMDAwMGZmODQ2MDAwDQooWEVOKSAgICAgICAg
IGdpY19tYWludGVuYW5jZV9pcnE9MjUNCihYRU4pIEdJQ3YyOiAyNTYgbGlu
ZXMsIDQgY3B1cywgc2VjdXJlIChJSUQgMDIwMDE0M2IpLg0KKFhFTikgWFNN
IEZyYW1ld29yayB2MS4wLjAgaW5pdGlhbGl6ZWQNCihYRU4pIEluaXRpYWxp
c2luZyBYU00gU0lMTyBtb2RlDQooWEVOKSBVc2luZyBzY2hlZHVsZXI6IFNN
UCBDcmVkaXQgU2NoZWR1bGVyIHJldjIgKGNyZWRpdDIpDQooWEVOKSBJbml0
aWFsaXppbmcgQ3JlZGl0MiBzY2hlZHVsZXINCihYRU4pICBsb2FkX3ByZWNp
c2lvbl9zaGlmdDogMTgNCihYRU4pICBsb2FkX3dpbmRvd19zaGlmdDogMzAN
CihYRU4pICB1bmRlcmxvYWRfYmFsYW5jZV90b2xlcmFuY2U6IDANCihYRU4p
ICBvdmVybG9hZF9iYWxhbmNlX3RvbGVyYW5jZTogLTMNCihYRU4pICBydW5x
dWV1ZXMgYXJyYW5nZW1lbnQ6IHNvY2tldA0KKFhFTikgIGNhcCBlbmZvcmNl
bWVudCBncmFudWxhcml0eTogMTBtcw0KKFhFTikgbG9hZCB0cmFja2luZyB3
aW5kb3cgbGVuZ3RoIDEwNzM3NDE4MjQgbnMNCihYRU4pIEFsbG9jYXRlZCBj
b25zb2xlIHJpbmcgb2YgMzIgS2lCLg0KKFhFTikgQ1BVMDogR3Vlc3QgYXRv
bWljcyB3aWxsIHRyeSAxNCB0aW1lcyBiZWZvcmUgcGF1c2luZyB0aGUgZG9t
YWluDQooWEVOKSBCcmluZ2luZyB1cCBDUFUxDQotIENQVSAwMDAwMDAwMSBi
b290aW5nIC0NCi0gQ3VycmVudCBFTCAwMDAwMDAwOCAtDQotIEluaXRpYWxp
emUgQ1BVIC0NCi0gVHVybmluZyBvbiBwYWdpbmcgLQ0KLSBSZWFkeSAtDQoo
WEVOKSBDUFUxOiBHdWVzdCBhdG9taWNzIHdpbGwgdHJ5IDEzIHRpbWVzIGJl
Zm9yZSBwYXVzaW5nIHRoZSBkb21haW4NCihYRU4pIENQVSAxIGJvb3RlZC4N
CihYRU4pIEJyaW5naW5nIHVwIENQVTINCi0gQ1BVIDAwMDAwMDAyIGJvb3Rp
bmcgLQ0KLSBDdXJyZW50IEVMIDAwMDAwMDA4IC0NCi0gSW5pdGlhbGl6ZSBD
UFUgLQ0KLSBUdXJuaW5nIG9uIHBhZ2luZyAtDQotIFJlYWR5IC0NCihYRU4p
IENQVTI6IEd1ZXN0IGF0b21pY3Mgd2lsbCB0cnkgMTMgdGltZXMgYmVmb3Jl
IHBhdXNpbmcgdGhlIGRvbWFpbg0KKFhFTikgQ1BVIDIgYm9vdGVkLg0KKFhF
TikgQnJpbmdpbmcgdXAgQ1BVMw0KLSBDUFUgMDAwMDAwMDMgYm9vdGluZyAt
DQotIEN1cnJlbnQgRUwgMDAwMDAwMDggLQ0KLSBJbml0aWFsaXplIENQVSAt
DQotIFR1cm5pbmcgb24gcGFnaW5nIC0NCi0gUmVhZHkgLQ0KKFhFTikgQ1BV
MzogR3Vlc3QgYXRvbWljcyB3aWxsIHRyeSAxNCB0aW1lcyBiZWZvcmUgcGF1
c2luZyB0aGUgZG9tYWluDQooWEVOKSBDUFUgMyBib290ZWQuDQooWEVOKSBC
cm91Z2h0IHVwIDQgQ1BVcw0KKFhFTikgSS9PIHZpcnR1YWxpc2F0aW9uIGRp
c2FibGVkDQooWEVOKSBQMk06IDQ0LWJpdCBJUEEgd2l0aCA0NC1iaXQgUEEg
YW5kIDgtYml0IFZNSUQNCihYRU4pIFAyTTogNCBsZXZlbHMgd2l0aCBvcmRl
ci0wIHJvb3QsIFZUQ1IgMHg4MDA0MzU5NA0KKFhFTikgQWRkaW5nIGNwdSAw
IHRvIHJ1bnF1ZXVlIDANCihYRU4pICBGaXJzdCBjcHUgb24gcnVucXVldWUs
IGFjdGl2YXRpbmcNCihYRU4pIEFkZGluZyBjcHUgMSB0byBydW5xdWV1ZSAw
DQooWEVOKSBBZGRpbmcgY3B1IDIgdG8gcnVucXVldWUgMA0KKFhFTikgQWRk
aW5nIGNwdSAzIHRvIHJ1bnF1ZXVlIDANCihYRU4pIGFsdGVybmF0aXZlczog
UGF0Y2hpbmcgd2l0aCBhbHQgdGFibGUgMDAwMDAwMDAwMDJkNDViOCAtPiAw
MDAwMDAwMDAwMmQ0Y2NjDQooWEVOKSAqKiogTE9BRElORyBET01BSU4gMCAq
KioNCihYRU4pIExvYWRpbmcgZDAga2VybmVsIGZyb20gYm9vdCBtb2R1bGUg
QCAwMDAwMDAwMDJiZDdjMDAwDQooWEVOKSBBbGxvY2F0aW5nIDE6MSBtYXBw
aW5ncyB0b3RhbGxpbmcgODIwTUIgZm9yIGRvbTA6DQooWEVOKSBCQU5LWzBd
IDB4MDAwMDAwMDgwMDAwMDAtMHgwMDAwMDAyODAwMDAwMCAoNTEyTUIpDQoo
WEVOKSBCQU5LWzFdIDB4MDAwMDAwMmEwMDAwMDAtMHgwMDAwMDAyYjAwMDAw
MCAoMTZNQikNCihYRU4pIEJBTktbMl0gMHgwMDAwMDAyZTAwMDAwMC0weDAw
MDAwMDM4MDAwMDAwICgxNjBNQikNCihYRU4pIEJBTktbM10gMHgwMDAwMDAz
Y2MwMDAwMC0weDAwMDAwMDNkMDAwMDAwICg0TUIpDQooWEVOKSBCQU5LWzRd
IDB4MDAwMDAwZTgwMDAwMDAtMHgwMDAwMDBmMDAwMDAwMCAoMTI4TUIpDQoo
WEVOKSBHcmFudCB0YWJsZSByYW5nZTogMHgwMDAwMDAyYmMyYTAwMC0weDAw
MDAwMDJiYzZhMDAwDQooWEVOKSBoYW5kbGUgLw0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS8NCihYRU4pIC8gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgLyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vDQooWEVOKSBoYW5k
bGUgL2FsaWFzZXMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vYWxpYXNl
cw0KKFhFTikgL2FsaWFzZXMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL2FsaWFzZXMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2FsaWFzZXMN
CihYRU4pIGhhbmRsZSAvY2hvc2VuDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2Nob3Nlbg0KKFhFTikgL2Nob3NlbiBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2hvc2VuIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9j
aG9zZW4NCihYRU4pIGhhbmRsZSAvY2hvc2VuL21vZHVsZUAyYmQ3YzAwMA0K
KFhFTikgICBTa2lwIGl0IChtYXRjaGVkKQ0KKFhFTikgaGFuZGxlIC9yZXNl
cnZlZC1tZW1vcnkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcmVzZXJ2
ZWQtbWVtb3J5DQooWEVOKSAvcmVzZXJ2ZWQtbWVtb3J5IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9yZXNlcnZlZC1tZW1v
cnkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3Jlc2VydmVkLW1lbW9yeQ0KKFhFTikgaGFuZGxl
IC9yZXNlcnZlZC1tZW1vcnkvbGludXgsY21hDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3Jlc2VydmVkLW1lbW9yeS9saW51eCxjbWENCihYRU4pIC9y
ZXNlcnZlZC1tZW1vcnkvbGludXgsY21hIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9yZXNlcnZlZC1tZW1vcnkvbGludXgs
Y21hIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9yZXNlcnZlZC1tZW1vcnkvbGludXgsY21hDQoo
WEVOKSBoYW5kbGUgL3RoZXJtYWwtem9uZXMNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vdGhlcm1hbC16b25lcw0KKFhFTikgL3RoZXJtYWwtem9uZXMg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Ro
ZXJtYWwtem9uZXMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3RoZXJtYWwtem9uZXMNCihYRU4p
IGhhbmRsZSAvdGhlcm1hbC16b25lcy9jcHUtdGhlcm1hbA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS90aGVybWFsLXpvbmVzL2NwdS10aGVybWFsDQoo
WEVOKSAvdGhlcm1hbC16b25lcy9jcHUtdGhlcm1hbCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvdGhlcm1hbC16b25lcy9j
cHUtdGhlcm1hbCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhlcm1hbC16b25lcy9jcHUtdGhl
cm1hbA0KKFhFTikgaGFuZGxlIC90aGVybWFsLXpvbmVzL2NwdS10aGVybWFs
L2Nvb2xpbmctbWFwcw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS90aGVy
bWFsLXpvbmVzL2NwdS10aGVybWFsL2Nvb2xpbmctbWFwcw0KKFhFTikgL3Ro
ZXJtYWwtem9uZXMvY3B1LXRoZXJtYWwvY29vbGluZy1tYXBzIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC90aGVybWFsLXpv
bmVzL2NwdS10aGVybWFsL2Nvb2xpbmctbWFwcyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vdGhl
cm1hbC16b25lcy9jcHUtdGhlcm1hbC9jb29saW5nLW1hcHMNCihYRU4pIGhh
bmRsZSAvc29jDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYw0KKFhF
TikgL3NvYyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvc29jIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MNCihYRU4pIGhhbmRsZSAvc29j
L3RpbWVyQDdlMDAzMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy90aW1lckA3ZTAwMzAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vc29jL3RpbWVyQDdlMDAzMDAwLCBpbmRleD0wDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MTINCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNDAuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy90aW1lckA3ZTAwMzAwMCwg
aW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDQxLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mv
dGltZXJAN2UwMDMwMDAsIGluZGV4PTINCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0Mi4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9
MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vc29jL3RpbWVyQDdlMDAzMDAwLCBpbmRleD0zDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MTINCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNDMu
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
L3NvYy90aW1lckA3ZTAwMzAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAx
DQooWEVOKSBDaGVjayBpZiAvc29jL3RpbWVyQDdlMDAzMDAwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvdGltZXJAN2UwMDMwMDANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3NvYy90aW1lckA3ZTAwMzAwMCwgaW5kZXg9MA0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEy
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNv
bnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MDQwLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBh
cj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihY
RU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvdGltZXJAN2Uw
MDMwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4
MDAwMDAwMDAgMHgwMDAwMDA0MC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiA5Ng0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy90aW1lckA3ZTAwMzAwMCwgaW5k
ZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDQxLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBz
aXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvdGlt
ZXJAN2UwMDMwMDAsIGluZGV4PTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0MS4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiA5Nw0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy90aW1lckA3ZTAwMzAw
MCwgaW5kZXg9Mg0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDQyLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9z
b2MvdGltZXJAN2UwMDMwMDAsIGluZGV4PTINCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEy
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0Mi4uLl0sb2ludHNpemU9
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQt
Y29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNp
emU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiA5OA0K
KFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy90aW1lckA3
ZTAwMzAwMCwgaW5kZXg9Mw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDQzLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVO
KSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9zb2MvdGltZXJAN2UwMDMwMDAsIGluZGV4PTMNCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0My4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJR
OiA5OQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3Nv
Yy90aW1lckA3ZTAwMzAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0
IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcg
YWRkcmVzczo8Mz4gN2UwMDMwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxr
aW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAw
MDAwMCwgcz0xODAwMDAwLCBkYT03ZTAwMzAwMA0KKFhFTikgRFQ6IHBhcmVu
dCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+
DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDMwMDANCihYRU4pIERUOiBvbmUg
bGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMDAzMDAwPDM+
DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlP
OiAwMGZlMDAzMDAwIC0gMDBmZTAwNDAwMCBQMk1UeXBlPTUNCihYRU4pIGhh
bmRsZSAvc29jL3R4cEA3ZTAwNDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvdHhwQDdlMDA0MDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vc29jL3R4cEA3ZTAwNDAwMCwgaW5kZXg9MA0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0Yi4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29j
L3R4cEA3ZTAwNDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVO
KSBDaGVjayBpZiAvc29jL3R4cEA3ZTAwNDAwMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L3R4cEA3ZTAwNDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L3NvYy90eHBAN2UwMDQwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVu
PTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNGIuLi5dLG9pbnRzaXpl
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJz
aXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3NvYy90eHBAN2UwMDQwMDAsIGluZGV4PTANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNGIu
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
ICAtIElSUTogMTA3DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRl
dmljZSAvc29jL3R4cEA3ZTAwNDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBk
ZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNs
YXRpbmcgYWRkcmVzczo8Mz4gN2UwMDQwMDA8Mz4NCihYRU4pIERUOiBwYXJl
bnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERU
OiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBj
cD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTAwNDAwMA0KKFhFTikgRFQ6
IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAw
MDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDQwMDANCihYRU4pIERU
OiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMDA0
MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAg
LSBNTUlPOiAwMGZlMDA0MDAwIC0gMDBmZTAwNDAyMCBQMk1UeXBlPTUNCihY
RU4pIGhhbmRsZSAvc29jL2Nwcm1hbkA3ZTEwMTAwMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvY3BybWFuQDdlMTAxMDAwDQooWEVOKSAvc29j
L2Nwcm1hbkA3ZTEwMTAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQoo
WEVOKSBDaGVjayBpZiAvc29jL2Nwcm1hbkA3ZTEwMTAwMCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2Nwcm1hbkA3ZTEwMTAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0
aW9uIGZvciBkZXZpY2UgL3NvYy9jcHJtYW5AN2UxMDEwMDAgKioNCihYRU4p
IERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlMTAxMDAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBv
biAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBk
ZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UxMDEw
MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAw
MDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAx
MDEwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAw
MDAwMDAwPDM+IGZlMTAxMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290
IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMTAxMDAwIC0gMDBmZTEwMzAw
MCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL21haWxib3hAN2UwMGI4
ODANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL21haWxib3hAN2Uw
MGI4ODANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mv
bWFpbGJveEA3ZTAwYjg4MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyMS4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9
MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL21haWxib3hAN2Uw
MGI4ODAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9tYWlsYm94QDdlMDBiODgwIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvbWFp
bGJveEA3ZTAwYjg4MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L3NvYy9tYWlsYm94QDdlMDBiODgwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDIxLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvbWFpbGJveEA3ZTAwYjg4MCwgaW5k
ZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDAyMS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSAgIC0gSVJROiA2NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL3NvYy9tYWlsYm94QDdlMDBiODgwICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4p
IERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTAwYjg4MDwzPg0KKFhF
TikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24g
Lw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVm
YXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlMDBiODgw
DQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAw
MDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogYjg4
MA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gZmUwMGI4ODA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwZmUwMGI4ODAgLSAwMGZlMDBiOGMwIFAy
TVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDcxLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9z
b2MvZ3Bpb0A3ZTIwMDAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3Mi4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9
MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL2dwaW9AN2UyMDAw
MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Ng0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9n
cGlvQDdlMjAwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
cGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDcxLi4uXSxvaW50c2l6ZT0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMCwgaW5kZXg9MA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3MS4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJR
OiAxNDUNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3Mi4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAsIGluZGV4PTENCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzIuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElS
UTogMTQ2DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAv
c29jL2dwaW9AN2UyMDAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5n
IGFkZHJlc3M6PDM+IDdlMjAwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fs
a2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2Uw
MDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UyMDAwMDANCihYRU4pIERUOiBwYXJl
bnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwz
Pg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAyMDAwMDANCihYRU4pIERUOiBv
bmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMjAwMDAw
PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBN
TUlPOiAwMGZlMjAwMDAwIC0gMDBmZTIwMDBiNCBQMk1UeXBlPTUNCihYRU4p
IGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvZHBpX2dwaW8wDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2RwaV9ncGlv
MA0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2RwaV9ncGlvMCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9A
N2UyMDAwMDAvZHBpX2dwaW8wIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMC9kcGlfZ3BpbzANCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAw
MDAvZW1tY19ncGlvMjINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvZW1tY19ncGlvMjINCihYRU4pIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9lbW1jX2dwaW8yMiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvZW1tY19ncGlv
MjIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2VtbWNfZ3BpbzIy
DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2VtbWNfZ3BpbzM0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L2VtbWNfZ3BpbzM0DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvZW1tY19n
cGlvMzQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9ncGlvQDdlMjAwMDAwL2VtbWNfZ3BpbzM0IGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9lbW1jX2dwaW8zNA0KKFhFTikgaGFuZGxl
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9lbW1jX2dwaW80OA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9lbW1jX2dwaW80OA0K
KFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2VtbWNfZ3BpbzQ4IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9lbW1jX2dwaW80OCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvZW1tY19ncGlvNDgNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2Uy
MDAwMDAvZ3BjbGswX2dwaW80DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xrMF9ncGlvNA0KKFhFTikgL3NvYy9n
cGlvQDdlMjAwMDAwL2dwY2xrMF9ncGlvNCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvZ3Bj
bGswX2dwaW80IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNs
azBfZ3BpbzQNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvZ3Bj
bGsxX2dwaW81DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL2dwY2xrMV9ncGlvNQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAw
MDAwL2dwY2xrMV9ncGlvNSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsxX2dwaW81
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazFfZ3BpbzUN
CihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsxX2dwaW80
Mg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAw
MC9ncGNsazFfZ3BpbzQyDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvZ3Bj
bGsxX2dwaW80MiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsxX2dwaW80MiBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsxX2dwaW80Mg0KKFhF
TikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazFfZ3BpbzQ0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2dw
Y2xrMV9ncGlvNDQNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazFf
Z3BpbzQ0IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazFfZ3BpbzQ0IGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazFfZ3BpbzQ0DQooWEVOKSBo
YW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xrMl9ncGlvNg0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazJf
Z3BpbzYNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazJfZ3BpbzYg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Nv
Yy9ncGlvQDdlMjAwMDAwL2dwY2xrMl9ncGlvNiBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvZ3BjbGsyX2dwaW82DQooWEVOKSBoYW5kbGUgL3Nv
Yy9ncGlvQDdlMjAwMDAwL2dwY2xrMl9ncGlvNDMNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsyX2dwaW80Mw0K
KFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xrMl9ncGlvNDMgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlv
QDdlMjAwMDAwL2dwY2xrMl9ncGlvNDMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL2dwY2xrMl9ncGlvNDMNCihYRU4pIGhhbmRsZSAvc29jL2dw
aW9AN2UyMDAwMDAvaTJjMF9ncGlvMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMwX2dwaW8wDQooWEVOKSAvc29j
L2dwaW9AN2UyMDAwMDAvaTJjMF9ncGlvMCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvaTJj
MF9ncGlvMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjMF9n
cGlvMA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMwX2dw
aW8yOA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMC9pMmMwX2dwaW8yOA0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzBfZ3BpbzI4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMwX2dwaW8yOCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjMF9ncGlvMjgNCihYRU4pIGhh
bmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjMF9ncGlvNDQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjMF9ncGlv
NDQNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMwX2dwaW80NCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dw
aW9AN2UyMDAwMDAvaTJjMF9ncGlvNDQgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzBfZ3BpbzQ0DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzFfZ3BpbzINCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlvMg0KKFhFTikgL3NvYy9n
cGlvQDdlMjAwMDAwL2kyYzFfZ3BpbzIgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzFf
Z3BpbzIgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzFfZ3Bp
bzINCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlv
NDQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAw
MDAvaTJjMV9ncGlvNDQNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMx
X2dwaW80NCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlvNDQgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzFfZ3BpbzQ0DQooWEVOKSBoYW5k
bGUgL3NvYy9ncGlvQDdlMjAwMDAwL2p0YWdfZ3BpbzIyDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2p0YWdfZ3BpbzIy
DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvanRhZ19ncGlvMjIgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlv
QDdlMjAwMDAwL2p0YWdfZ3BpbzIyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC9qdGFnX2dwaW8yMg0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9wY21fZ3BpbzE4DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL3BjbV9ncGlvMTgNCihYRU4pIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9wY21fZ3BpbzE4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wY21fZ3Bp
bzE4IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wY21fZ3BpbzE4
DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3BjbV9ncGlvMjgN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAv
cGNtX2dwaW8yOA0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3BjbV9ncGlv
MjggcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwL3BjbV9ncGlvMjggaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL3BjbV9ncGlvMjgNCihYRU4pIGhhbmRsZSAvc29j
L2dwaW9AN2UyMDAwMDAvc2Rob3N0X2dwaW80OA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9zZGhvc3RfZ3BpbzQ4DQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc2Rob3N0X2dwaW80OCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9A
N2UyMDAwMDAvc2Rob3N0X2dwaW80OCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvc2Rob3N0X2dwaW80OA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9zcGkwX2dwaW83DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTBfZ3BpbzcNCihYRU4pIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9zcGkwX2dwaW83IHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkw
X2dwaW83IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkwX2dw
aW83DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3NwaTBfZ3Bp
bzM1DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwL3NwaTBfZ3BpbzM1DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3Bp
MF9ncGlvMzUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hl
Y2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3NwaTBfZ3BpbzM1IGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkwX2dwaW8zNQ0KKFhFTikgaGFu
ZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkxX2dwaW8xNg0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkxX2dwaW8x
Ng0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3NwaTFfZ3BpbzE2IHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9zcGkxX2dwaW8xNiBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvc3BpMV9ncGlvMTYNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9A
N2UyMDAwMDAvc3BpMl9ncGlvNDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpMl9ncGlvNDANCihYRU4pIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9zcGkyX2dwaW80MCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvc3Bp
Ml9ncGlvNDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTJf
Z3BpbzQwDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQw
X2dwaW8xNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC91YXJ0MF9ncGlvMTQNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0MF9ncGlvMTQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQwX2dwaW8xNCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDBfZ3BpbzE0DQoo
WEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQwX2N0c3J0c19n
cGlvMTYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvdWFydDBfY3RzcnRzX2dwaW8xNg0KKFhFTikgL3NvYy9ncGlvQDdl
MjAwMDAwL3VhcnQwX2N0c3J0c19ncGlvMTYgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3Vh
cnQwX2N0c3J0c19ncGlvMTYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwL3VhcnQwX2N0c3J0c19ncGlvMTYNCihYRU4pIGhhbmRsZSAvc29jL2dw
aW9AN2UyMDAwMDAvdWFydDBfY3RzcnRzX2dwaW8zMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9jdHNydHNf
Z3BpbzMwDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDBfY3RzcnRz
X2dwaW8zMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDBfY3RzcnRzX2dwaW8zMCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDBfY3RzcnRzX2dw
aW8zMA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9n
cGlvMzINCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvdWFydDBfZ3BpbzMyDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAv
dWFydDBfZ3BpbzMyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9ncGlvMzIgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQwX2dwaW8zMg0KKFhF
TikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9ncGlvMzYNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFy
dDBfZ3BpbzM2DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDBfZ3Bp
bzM2IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9ncGlvMzYgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQwX2dwaW8zNg0KKFhFTikgaGFuZGxl
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9jdHNydHNfZ3BpbzM4DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQw
X2N0c3J0c19ncGlvMzgNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0
MF9jdHNydHNfZ3BpbzM4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9jdHNydHNf
Z3BpbzM4IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MF9j
dHNydHNfZ3BpbzM4DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAw
L3VhcnQxX2dwaW8xNA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMC91YXJ0MV9ncGlvMTQNCihYRU4pIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC91YXJ0MV9ncGlvMTQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQxX2dw
aW8xNCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDFfZ3Bp
bzE0DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQxX2N0
c3J0c19ncGlvMTYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dw
aW9AN2UyMDAwMDAvdWFydDFfY3RzcnRzX2dwaW8xNg0KKFhFTikgL3NvYy9n
cGlvQDdlMjAwMDAwL3VhcnQxX2N0c3J0c19ncGlvMTYgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAw
MDAwL3VhcnQxX2N0c3J0c19ncGlvMTYgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL3VhcnQxX2N0c3J0c19ncGlvMTYNCihYRU4pIGhhbmRsZSAv
c29jL2dwaW9AN2UyMDAwMDAvdWFydDFfZ3BpbzMyDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQxX2dwaW8zMg0K
KFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQxX2dwaW8zMiBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9A
N2UyMDAwMDAvdWFydDFfZ3BpbzMyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC91YXJ0MV9ncGlvMzINCihYRU4pIGhhbmRsZSAvc29jL2dwaW9A
N2UyMDAwMDAvdWFydDFfY3RzcnRzX2dwaW8zMA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MV9jdHNydHNfZ3Bp
bzMwDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDFfY3RzcnRzX2dw
aW8zMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDFfY3RzcnRzX2dwaW8zMCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDFfY3RzcnRzX2dwaW8z
MA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MV9ncGlv
NDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAw
MDAvdWFydDFfZ3BpbzQwDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFy
dDFfZ3BpbzQwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MV9ncGlvNDAgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQxX2dwaW80MA0KKFhFTikg
aGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MV9jdHNydHNfZ3BpbzQy
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L3VhcnQxX2N0c3J0c19ncGlvNDINCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0MV9jdHNydHNfZ3BpbzQyIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0MV9j
dHNydHNfZ3BpbzQyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91
YXJ0MV9jdHNydHNfZ3BpbzQyDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdl
MjAwMDAwL2dwY2xrMF9ncGlvNDkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGswX2dwaW80OQ0KKFhFTikgL3Nv
Yy9ncGlvQDdlMjAwMDAwL2dwY2xrMF9ncGlvNDkgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAw
L2dwY2xrMF9ncGlvNDkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L2dwY2xrMF9ncGlvNDkNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAw
MDAvZ3BjbGswX2dwaW80OS9waW4tZ3BjbGsNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGswX2dwaW80OS9waW4t
Z3BjbGsNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazBfZ3BpbzQ5
L3Bpbi1ncGNsayBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGswX2dwaW80OS9waW4t
Z3BjbGsgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xrMF9n
cGlvNDkvcGluLWdwY2xrDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAw
MDAwL2dwY2xrMV9ncGlvNTANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsxX2dwaW81MA0KKFhFTikgL3NvYy9n
cGlvQDdlMjAwMDAwL2dwY2xrMV9ncGlvNTAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2dw
Y2xrMV9ncGlvNTAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2dw
Y2xrMV9ncGlvNTANCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAv
Z3BjbGsxX2dwaW81MC9waW4tZ3BjbGsNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsxX2dwaW81MC9waW4tZ3Bj
bGsNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazFfZ3BpbzUwL3Bp
bi1ncGNsayBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsxX2dwaW81MC9waW4tZ3Bj
bGsgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xrMV9ncGlv
NTAvcGluLWdwY2xrDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAw
L2dwY2xrMl9ncGlvNTENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvZ3BjbGsyX2dwaW81MQ0KKFhFTikgL3NvYy9ncGlv
QDdlMjAwMDAwL2dwY2xrMl9ncGlvNTEgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xr
Ml9ncGlvNTEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xr
Ml9ncGlvNTENCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvZ3Bj
bGsyX2dwaW81MS9waW4tZ3BjbGsNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsyX2dwaW81MS9waW4tZ3BjbGsN
CihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9ncGNsazJfZ3BpbzUxL3Bpbi1n
cGNsayBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvc29jL2dwaW9AN2UyMDAwMDAvZ3BjbGsyX2dwaW81MS9waW4tZ3BjbGsg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2dwY2xrMl9ncGlvNTEv
cGluLWdwY2xrDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzBfZ3BpbzQ2DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzBfZ3BpbzQ2DQooWEVOKSAvc29jL2dwaW9AN2UyMDAw
MDAvaTJjMF9ncGlvNDYgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzBfZ3BpbzQ2IGlz
IGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMwX2dwaW80Ng0KKFhF
TikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMwX2dwaW80Ni9waW4t
c2RhDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwL2kyYzBfZ3BpbzQ2L3Bpbi1zZGENCihYRU4pIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC9pMmMwX2dwaW80Ni9waW4tc2RhIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMw
X2dwaW80Ni9waW4tc2RhIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAw
MC9pMmMwX2dwaW80Ni9waW4tc2RhDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzBfZ3BpbzQ2L3Bpbi1zY2wNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjMF9ncGlvNDYvcGlu
LXNjbA0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzBfZ3BpbzQ2L3Bp
bi1zY2wgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzBfZ3BpbzQ2L3Bpbi1zY2wgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzBfZ3BpbzQ2L3Bpbi1z
Y2wNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlv
NDYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAw
MDAvaTJjMV9ncGlvNDYNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMx
X2dwaW80NiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlvNDYgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzFfZ3BpbzQ2DQooWEVOKSBoYW5k
bGUgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzFfZ3BpbzQ2L3Bpbi1zZGENCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJj
MV9ncGlvNDYvcGluLXNkYQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzFfZ3BpbzQ2L3Bpbi1zZGEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzFfZ3BpbzQ2
L3Bpbi1zZGEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzFf
Z3BpbzQ2L3Bpbi1zZGENCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAw
MDAvaTJjMV9ncGlvNDYvcGluLXNjbA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMxX2dwaW80Ni9waW4tc2NsDQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlvNDYvcGluLXNjbCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29j
L2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlvNDYvcGluLXNjbCBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjMV9ncGlvNDYvcGluLXNjbA0KKFhF
TikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMzX2dwaW8yDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNf
Z3BpbzINCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMzX2dwaW8yIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9pMmMzX2dwaW8yIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bp
b0A3ZTIwMDAwMC9pMmMzX2dwaW8yDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzNfZ3BpbzIvcGluLXNkYQ0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMzX2dwaW8yL3Bpbi1z
ZGENCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMzX2dwaW8yL3Bpbi1z
ZGEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNfZ3BpbzIvcGluLXNkYSBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjM19ncGlvMi9waW4tc2RhDQoo
WEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNfZ3BpbzIvcGlu
LXNjbA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMC9pMmMzX2dwaW8yL3Bpbi1zY2wNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC9pMmMzX2dwaW8yL3Bpbi1zY2wgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNf
Z3BpbzIvcGluLXNjbCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAv
aTJjM19ncGlvMi9waW4tc2NsDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzNfZ3BpbzQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9AN2UyMDAwMDAvaTJjM19ncGlvNA0KKFhFTikgL3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzNfZ3BpbzQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNfZ3Bp
bzQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNfZ3BpbzQN
CihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjM19ncGlvNC9w
aW4tc2RhDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzNfZ3BpbzQvcGluLXNkYQ0KKFhFTikgL3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzNfZ3BpbzQvcGluLXNkYSBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvaTJj
M19ncGlvNC9waW4tc2RhIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAw
MC9pMmMzX2dwaW80L3Bpbi1zZGENCihYRU4pIGhhbmRsZSAvc29jL2dwaW9A
N2UyMDAwMDAvaTJjM19ncGlvNC9waW4tc2NsDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNfZ3BpbzQvcGluLXNj
bA0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzNfZ3BpbzQvcGluLXNj
bCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
c29jL2dwaW9AN2UyMDAwMDAvaTJjM19ncGlvNC9waW4tc2NsIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMzX2dwaW80L3Bpbi1zY2wNCihY
RU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNF9ncGlvNg0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0
X2dwaW82DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNF9ncGlvNiBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29j
L2dwaW9AN2UyMDAwMDAvaTJjNF9ncGlvNiBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dw
aW9AN2UyMDAwMDAvaTJjNF9ncGlvNg0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9pMmM0X2dwaW82L3Bpbi1zZGENCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjNF9ncGlvNi9waW4t
c2RhDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNF9ncGlvNi9waW4t
c2RhIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0X2dwaW82L3Bpbi1zZGEgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzRfZ3BpbzYvcGluLXNkYQ0K
KFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0X2dwaW82L3Bp
bi1zY2wNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvaTJjNF9ncGlvNi9waW4tc2NsDQooWEVOKSAvc29jL2dwaW9AN2Uy
MDAwMDAvaTJjNF9ncGlvNi9waW4tc2NsIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0
X2dwaW82L3Bpbi1zY2wgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L2kyYzRfZ3BpbzYvcGluLXNjbA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9pMmM0X2dwaW84DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzRfZ3BpbzgNCihYRU4pIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9pMmM0X2dwaW84IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0X2dw
aW84IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0X2dwaW84
DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzRfZ3Bpbzgv
cGluLXNkYQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC9pMmM0X2dwaW84L3Bpbi1zZGENCihYRU4pIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9pMmM0X2dwaW84L3Bpbi1zZGEgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzRfZ3BpbzgvcGluLXNkYSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAw
MDAvaTJjNF9ncGlvOC9waW4tc2RhDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzRfZ3BpbzgvcGluLXNjbA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0X2dwaW84L3Bpbi1z
Y2wNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0X2dwaW84L3Bpbi1z
Y2wgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzRfZ3BpbzgvcGluLXNjbCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjNF9ncGlvOC9waW4tc2NsDQoo
WEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3BpbzEwDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzVfZ3BpbzEwDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNV9ncGlv
MTAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3BpbzEwIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2MvZ3Bpb0A3ZTIwMDAwMC9pMmM1X2dwaW8xMA0KKFhFTikgaGFuZGxlIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC9pMmM1X2dwaW8xMC9waW4tc2RhDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3Bp
bzEwL3Bpbi1zZGENCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM1X2dw
aW8xMC9waW4tc2RhIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM1X2dwaW8xMC9waW4t
c2RhIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM1X2dwaW8x
MC9waW4tc2RhDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzVfZ3BpbzEwL3Bpbi1zY2wNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9AN2UyMDAwMDAvaTJjNV9ncGlvMTAvcGluLXNjbA0KKFhFTikg
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3BpbzEwL3Bpbi1zY2wgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzVfZ3BpbzEwL3Bpbi1zY2wgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3BpbzEwL3Bpbi1zY2wNCihYRU4pIGhh
bmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNV9ncGlvMTINCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjNV9ncGlv
MTINCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM1X2dwaW8xMiBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dw
aW9AN2UyMDAwMDAvaTJjNV9ncGlvMTIgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzVfZ3BpbzEyDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzVfZ3BpbzEyL3Bpbi1zZGENCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjNV9ncGlvMTIvcGlu
LXNkYQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3BpbzEyL3Bp
bi1zZGEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3BpbzEyL3Bpbi1zZGEgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzVfZ3BpbzEyL3Bpbi1z
ZGENCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNV9ncGlv
MTIvcGluLXNjbA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bp
b0A3ZTIwMDAwMC9pMmM1X2dwaW8xMi9waW4tc2NsDQooWEVOKSAvc29jL2dw
aW9AN2UyMDAwMDAvaTJjNV9ncGlvMTIvcGluLXNjbCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAw
MDAvaTJjNV9ncGlvMTIvcGluLXNjbCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvaTJjNV9ncGlvMTIvcGluLXNjbA0KKFhFTikgaGFuZGxlIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dwaW8wDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZfZ3BpbzANCihYRU4p
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dwaW8wIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9pMmM2X2dwaW8wIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9p
MmM2X2dwaW8wDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzZfZ3BpbzAvcGluLXNkYQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dwaW8wL3Bpbi1zZGENCihYRU4pIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dwaW8wL3Bpbi1zZGEgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzZfZ3BpbzAvcGluLXNkYSBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dw
aW9AN2UyMDAwMDAvaTJjNl9ncGlvMC9waW4tc2RhDQooWEVOKSBoYW5kbGUg
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZfZ3BpbzAvcGluLXNjbA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dw
aW8wL3Bpbi1zY2wNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dw
aW8wL3Bpbi1zY2wgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZfZ3BpbzAvcGluLXNj
bCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjNl9ncGlvMC9w
aW4tc2NsDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZf
Z3BpbzIyDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzZfZ3BpbzIyDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAv
aTJjNl9ncGlvMjIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZfZ3BpbzIyIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dwaW8yMg0KKFhFTikg
aGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dwaW8yMi9waW4tc2Rh
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L2kyYzZfZ3BpbzIyL3Bpbi1zZGENCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9pMmM2X2dwaW8yMi9waW4tc2RhIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2X2dw
aW8yMi9waW4tc2RhIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9p
MmM2X2dwaW8yMi9waW4tc2RhDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzZfZ3BpbzIyL3Bpbi1zY2wNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjNl9ncGlvMjIvcGluLXNj
bA0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZfZ3BpbzIyL3Bpbi1z
Y2wgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZfZ3BpbzIyL3Bpbi1zY2wgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzZfZ3BpbzIyL3Bpbi1zY2wN
CihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjX3NsYXZlX2dw
aW84DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwL2kyY19zbGF2ZV9ncGlvOA0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAw
L2kyY19zbGF2ZV9ncGlvOCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvaTJjX3NsYXZlX2dw
aW84IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmNfc2xhdmVf
Z3BpbzgNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjX3Ns
YXZlX2dwaW84L3BpbnMtaTJjLXNsYXZlDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kyY19zbGF2ZV9ncGlvOC9waW5z
LWkyYy1zbGF2ZQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2kyY19zbGF2
ZV9ncGlvOC9waW5zLWkyYy1zbGF2ZSBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvaTJjX3Ns
YXZlX2dwaW84L3BpbnMtaTJjLXNsYXZlIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bp
b0A3ZTIwMDAwMC9pMmNfc2xhdmVfZ3BpbzgvcGlucy1pMmMtc2xhdmUNCihY
RU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvanRhZ19ncGlvNDgNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvanRh
Z19ncGlvNDgNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9qdGFnX2dwaW80
OCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
c29jL2dwaW9AN2UyMDAwMDAvanRhZ19ncGlvNDggaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL2p0YWdfZ3BpbzQ4DQooWEVOKSBoYW5kbGUgL3Nv
Yy9ncGlvQDdlMjAwMDAwL2p0YWdfZ3BpbzQ4L3BpbnMtanRhZw0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9qdGFnX2dw
aW80OC9waW5zLWp0YWcNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9qdGFn
X2dwaW80OC9waW5zLWp0YWcgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL2p0YWdfZ3BpbzQ4
L3BpbnMtanRhZyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvanRh
Z19ncGlvNDgvcGlucy1qdGFnDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdl
MjAwMDAwL21paV9ncGlvMjgNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9AN2UyMDAwMDAvbWlpX2dwaW8yOA0KKFhFTikgL3NvYy9ncGlv
QDdlMjAwMDAwL21paV9ncGlvMjggcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL21paV9ncGlv
MjggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL21paV9ncGlvMjgN
CihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvbWlpX2dwaW8yOC9w
aW5zLW1paQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC9taWlfZ3BpbzI4L3BpbnMtbWlpDQooWEVOKSAvc29jL2dwaW9A
N2UyMDAwMDAvbWlpX2dwaW8yOC9waW5zLW1paSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAv
bWlpX2dwaW8yOC9waW5zLW1paSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvbWlpX2dwaW8yOC9waW5zLW1paQ0KKFhFTikgaGFuZGxlIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9taWlfZ3BpbzM2DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL21paV9ncGlvMzYNCihYRU4pIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC9taWlfZ3BpbzM2IHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9t
aWlfZ3BpbzM2IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9taWlf
Z3BpbzM2DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL21paV9n
cGlvMzYvcGlucy1taWkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvbWlpX2dwaW8zNi9waW5zLW1paQ0KKFhFTikgL3Nv
Yy9ncGlvQDdlMjAwMDAwL21paV9ncGlvMzYvcGlucy1taWkgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdl
MjAwMDAwL21paV9ncGlvMzYvcGlucy1taWkgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9n
cGlvQDdlMjAwMDAwL21paV9ncGlvMzYvcGlucy1taWkNCihYRU4pIGhhbmRs
ZSAvc29jL2dwaW9AN2UyMDAwMDAvcGNtX2dwaW81MA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wY21fZ3BpbzUwDQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvcGNtX2dwaW81MCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2Uy
MDAwMDAvcGNtX2dwaW81MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAw
MDAvcGNtX2dwaW81MA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9wY21fZ3BpbzUwL3BpbnMtcGNtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3BjbV9ncGlvNTAvcGlucy1wY20NCihY
RU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wY21fZ3BpbzUwL3BpbnMtcGNtIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9wY21fZ3BpbzUwL3BpbnMtcGNtIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wY21fZ3BpbzUwL3BpbnMtcGNtDQooWEVO
KSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMF9ncGlvMTINCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcHdt
MF8wX2dwaW8xMg0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMF9n
cGlvMTIgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMF9ncGlvMTIgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMF9ncGlvMTINCihYRU4pIGhh
bmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dwaW8xMi9waW4tcHdt
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L3B3bTBfMF9ncGlvMTIvcGluLXB3bQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAw
MDAwL3B3bTBfMF9ncGlvMTIvcGluLXB3bSBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvcHdt
MF8wX2dwaW8xMi9waW4tcHdtIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMC9wd20wXzBfZ3BpbzEyL3Bpbi1wd20NCihYRU4pIGhhbmRsZSAvc29j
L2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dwaW8xOA0KKFhFTikgZHRfaXJxX251
bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzBfZ3BpbzE4DQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dwaW8xOCBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9A
N2UyMDAwMDAvcHdtMF8wX2dwaW8xOCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvcHdtMF8wX2dwaW8xOA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9wd20wXzBfZ3BpbzE4L3Bpbi1wd20NCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dwaW8x
OC9waW4tcHdtDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dw
aW8xOC9waW4tcHdtIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzBfZ3BpbzE4L3Bp
bi1wd20gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMF9n
cGlvMTgvcGluLXB3bQ0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9wd20xXzBfZ3BpbzQwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL3B3bTFfMF9ncGlvNDANCihYRU4pIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9wd20xXzBfZ3BpbzQwIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20x
XzBfZ3BpbzQwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20x
XzBfZ3BpbzQwDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3B3
bTFfMF9ncGlvNDAvcGluLXB3bQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20xXzBfZ3BpbzQwL3Bpbi1wd20NCihY
RU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20xXzBfZ3BpbzQwL3Bpbi1wd20g
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Nv
Yy9ncGlvQDdlMjAwMDAwL3B3bTFfMF9ncGlvNDAvcGluLXB3bSBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcHdtMV8wX2dwaW80MC9waW4tcHdt
DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlv
MTMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAw
MDAvcHdtMF8xX2dwaW8xMw0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3B3
bTBfMV9ncGlvMTMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlvMTMgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlvMTMNCihY
RU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8xX2dwaW8xMy9w
aW4tcHdtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdl
MjAwMDAwL3B3bTBfMV9ncGlvMTMvcGluLXB3bQ0KKFhFTikgL3NvYy9ncGlv
QDdlMjAwMDAwL3B3bTBfMV9ncGlvMTMvcGluLXB3bSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAw
MDAvcHdtMF8xX2dwaW8xMy9waW4tcHdtIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bp
b0A3ZTIwMDAwMC9wd20wXzFfZ3BpbzEzL3Bpbi1wd20NCihYRU4pIGhhbmRs
ZSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8xX2dwaW8xOQ0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3Bp
bzE5DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8xX2dwaW8xOSBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29j
L2dwaW9AN2UyMDAwMDAvcHdtMF8xX2dwaW8xOSBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvcHdtMF8xX2dwaW8xOQ0KKFhFTikgaGFuZGxlIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3BpbzE5L3Bpbi1wd20NCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8x
X2dwaW8xOS9waW4tcHdtDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvcHdt
MF8xX2dwaW8xOS9waW4tcHdtIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3Bp
bzE5L3Bpbi1wd20gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3B3
bTBfMV9ncGlvMTkvcGluLXB3bQ0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9wd20xXzFfZ3BpbzQxDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3B3bTFfMV9ncGlvNDENCihYRU4pIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC9wd20xXzFfZ3BpbzQxIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9wd20xXzFfZ3BpbzQxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAw
MC9wd20xXzFfZ3BpbzQxDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAw
MDAwL3B3bTFfMV9ncGlvNDEvcGluLXB3bQ0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20xXzFfZ3BpbzQxL3Bpbi1w
d20NCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20xXzFfZ3BpbzQxL3Bp
bi1wd20gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTFfMV9ncGlvNDEvcGluLXB3bSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcHdtMV8xX2dwaW80MS9w
aW4tcHdtDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBf
MV9ncGlvNDUNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvcHdtMF8xX2dwaW80NQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAw
MDAwL3B3bTBfMV9ncGlvNDUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlv
NDUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlv
NDUNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8xX2dw
aW80NS9waW4tcHdtDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9n
cGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlvNDUvcGluLXB3bQ0KKFhFTikgL3Nv
Yy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlvNDUvcGluLXB3bSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9A
N2UyMDAwMDAvcHdtMF8xX2dwaW80NS9waW4tcHdtIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3BpbzQ1L3Bpbi1wd20NCihYRU4p
IGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dwaW81Mg0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20w
XzBfZ3BpbzUyDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dw
aW81MiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dwaW81MiBpcyBiZWhpbmQg
dGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8wX2dwaW81Mg0KKFhFTikgaGFu
ZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzBfZ3BpbzUyL3Bpbi1wd20N
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAv
cHdtMF8wX2dwaW81Mi9waW4tcHdtDQooWEVOKSAvc29jL2dwaW9AN2UyMDAw
MDAvcHdtMF8wX2dwaW81Mi9waW4tcHdtIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20w
XzBfZ3BpbzUyL3Bpbi1wd20gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwL3B3bTBfMF9ncGlvNTIvcGluLXB3bQ0KKFhFTikgaGFuZGxlIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3BpbzUzDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlvNTMNCihY
RU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3BpbzUzIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9wd20wXzFfZ3BpbzUzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC9wd20wXzFfZ3BpbzUzDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL3B3bTBfMV9ncGlvNTMvcGluLXB3bQ0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3BpbzUz
L3Bpbi1wd20NCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9wd20wXzFfZ3Bp
bzUzL3Bpbi1wd20gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3B3bTBfMV9ncGlvNTMvcGlu
LXB3bSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcHdtMF8xX2dw
aW81My9waW4tcHdtDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAw
L3JnbWlpX2dwaW8zNQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9yZ21paV9ncGlvMzUNCihYRU4pIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9yZ21paV9ncGlvMzUgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX2dw
aW8zNSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcmdtaWlfZ3Bp
bzM1DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX2dw
aW8zNS9waW4tc3RhcnQtc3RvcA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9ncGlvMzUvcGluLXN0YXJ0LXN0
b3ANCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9ncGlvMzUvcGlu
LXN0YXJ0LXN0b3AgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX2dwaW8zNS9waW4t
c3RhcnQtc3RvcCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcmdt
aWlfZ3BpbzM1L3Bpbi1zdGFydC1zdG9wDQooWEVOKSBoYW5kbGUgL3NvYy9n
cGlvQDdlMjAwMDAwL3JnbWlpX2dwaW8zNS9waW4tcngtb2sNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcmdtaWlfZ3Bp
bzM1L3Bpbi1yeC1vaw0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlp
X2dwaW8zNS9waW4tcngtb2sgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX2dwaW8z
NS9waW4tcngtb2sgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3Jn
bWlpX2dwaW8zNS9waW4tcngtb2sNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9A
N2UyMDAwMDAvcmdtaWlfaXJxX2dwaW8zNA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9pcnFfZ3BpbzM0DQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvcmdtaWlfaXJxX2dwaW8zNCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dw
aW9AN2UyMDAwMDAvcmdtaWlfaXJxX2dwaW8zNCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvcmdtaWlfaXJxX2dwaW8zNA0KKFhFTikgaGFuZGxl
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9pcnFfZ3BpbzM0L3Bpbi1pcnEN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAv
cmdtaWlfaXJxX2dwaW8zNC9waW4taXJxDQooWEVOKSAvc29jL2dwaW9AN2Uy
MDAwMDAvcmdtaWlfaXJxX2dwaW8zNC9waW4taXJxIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9yZ21paV9pcnFfZ3BpbzM0L3Bpbi1pcnEgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9n
cGlvQDdlMjAwMDAwL3JnbWlpX2lycV9ncGlvMzQvcGluLWlycQ0KKFhFTikg
aGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9pcnFfZ3BpbzM5DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3Jn
bWlpX2lycV9ncGlvMzkNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21p
aV9pcnFfZ3BpbzM5IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9pcnFfZ3BpbzM5
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9pcnFfZ3Bp
bzM5DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX2ly
cV9ncGlvMzkvcGluLWlycQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9pcnFfZ3BpbzM5L3Bpbi1pcnENCihY
RU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9pcnFfZ3BpbzM5L3Bpbi1p
cnEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX2lycV9ncGlvMzkvcGluLWlycSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcmdtaWlfaXJxX2dwaW8z
OS9waW4taXJxDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3Jn
bWlpX21kaW9fZ3BpbzI4DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL3JnbWlpX21kaW9fZ3BpbzI4DQooWEVOKSAvc29j
L2dwaW9AN2UyMDAwMDAvcmdtaWlfbWRpb19ncGlvMjggcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAw
MDAwL3JnbWlpX21kaW9fZ3BpbzI4IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC9yZ21paV9tZGlvX2dwaW8yOA0KKFhFTikgaGFuZGxlIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9yZ21paV9tZGlvX2dwaW8yOC9waW5zLW1kaW8NCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvcmdt
aWlfbWRpb19ncGlvMjgvcGlucy1tZGlvDQooWEVOKSAvc29jL2dwaW9AN2Uy
MDAwMDAvcmdtaWlfbWRpb19ncGlvMjgvcGlucy1tZGlvIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC9yZ21paV9tZGlvX2dwaW8yOC9waW5zLW1kaW8gaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX21kaW9fZ3BpbzI4L3BpbnMtbWRp
bw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21paV9tZGlv
X2dwaW8zNw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC9yZ21paV9tZGlvX2dwaW8zNw0KKFhFTikgL3NvYy9ncGlvQDdl
MjAwMDAwL3JnbWlpX21kaW9fZ3BpbzM3IHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9yZ21p
aV9tZGlvX2dwaW8zNyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAv
cmdtaWlfbWRpb19ncGlvMzcNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2Uy
MDAwMDAvcmdtaWlfbWRpb19ncGlvMzcvcGlucy1tZGlvDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3JnbWlpX21kaW9f
Z3BpbzM3L3BpbnMtbWRpbw0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3Jn
bWlpX21kaW9fZ3BpbzM3L3BpbnMtbWRpbyBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvcmdt
aWlfbWRpb19ncGlvMzcvcGlucy1tZGlvIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bp
b0A3ZTIwMDAwMC9yZ21paV9tZGlvX2dwaW8zNy9waW5zLW1kaW8NCihYRU4p
IGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpMF9ncGlvNDYNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpMF9n
cGlvNDYNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkwX2dwaW80NiBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29j
L2dwaW9AN2UyMDAwMDAvc3BpMF9ncGlvNDYgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9n
cGlvQDdlMjAwMDAwL3NwaTBfZ3BpbzQ2DQooWEVOKSBoYW5kbGUgL3NvYy9n
cGlvQDdlMjAwMDAwL3NwaTBfZ3BpbzQ2L3BpbnMtc3BpDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTBfZ3BpbzQ2
L3BpbnMtc3BpDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpMF9ncGlv
NDYvcGlucy1zcGkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3NwaTBfZ3BpbzQ2L3BpbnMt
c3BpIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkwX2dwaW80
Ni9waW5zLXNwaQ0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9z
cGkyX2dwaW80Ng0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bp
b0A3ZTIwMDAwMC9zcGkyX2dwaW80Ng0KKFhFTikgL3NvYy9ncGlvQDdlMjAw
MDAwL3NwaTJfZ3BpbzQ2IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkyX2dwaW80NiBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpMl9ncGlvNDYNCihY
RU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpMl9ncGlvNDYvcGlu
cy1zcGkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvc3BpMl9ncGlvNDYvcGlucy1zcGkNCihYRU4pIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9zcGkyX2dwaW80Ni9waW5zLXNwaSBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAv
c3BpMl9ncGlvNDYvcGlucy1zcGkgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdl
MjAwMDAwL3NwaTJfZ3BpbzQ2L3BpbnMtc3BpDQooWEVOKSBoYW5kbGUgL3Nv
Yy9ncGlvQDdlMjAwMDAwL3NwaTNfZ3BpbzANCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpM19ncGlvMA0KKFhFTikg
L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTNfZ3BpbzAgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAw
L3NwaTNfZ3BpbzAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3Nw
aTNfZ3BpbzANCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvc3Bp
M19ncGlvMC9waW5zLXNwaQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2MvZ3Bpb0A3ZTIwMDAwMC9zcGkzX2dwaW8wL3BpbnMtc3BpDQooWEVOKSAv
c29jL2dwaW9AN2UyMDAwMDAvc3BpM19ncGlvMC9waW5zLXNwaSBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9A
N2UyMDAwMDAvc3BpM19ncGlvMC9waW5zLXNwaSBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvc3BpM19ncGlvMC9waW5zLXNwaQ0KKFhFTikgaGFu
ZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk0X2dwaW80DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTRfZ3BpbzQN
CihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk0X2dwaW80IHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9zcGk0X2dwaW80IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMC9zcGk0X2dwaW80DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAw
MDAwL3NwaTRfZ3BpbzQvcGlucy1zcGkNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpNF9ncGlvNC9waW5zLXNwaQ0K
KFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL3NwaTRfZ3BpbzQvcGlucy1zcGkg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Nv
Yy9ncGlvQDdlMjAwMDAwL3NwaTRfZ3BpbzQvcGlucy1zcGkgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTRfZ3BpbzQvcGlucy1zcGkNCihY
RU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpNV9ncGlvMTINCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3Bp
NV9ncGlvMTINCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk1X2dwaW8x
MiBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
c29jL2dwaW9AN2UyMDAwMDAvc3BpNV9ncGlvMTIgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL3NwaTVfZ3BpbzEyDQooWEVOKSBoYW5kbGUgL3Nv
Yy9ncGlvQDdlMjAwMDAwL3NwaTVfZ3BpbzEyL3BpbnMtc3BpDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTVfZ3Bp
bzEyL3BpbnMtc3BpDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpNV9n
cGlvMTIvcGlucy1zcGkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3NwaTVfZ3BpbzEyL3Bp
bnMtc3BpIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk1X2dw
aW8xMi9waW5zLXNwaQ0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9zcGk2X2dwaW8xOA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9zcGk2X2dwaW8xOA0KKFhFTikgL3NvYy9ncGlvQDdl
MjAwMDAwL3NwaTZfZ3BpbzE4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk2X2dwaW8x
OCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpNl9ncGlvMTgN
CihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpNl9ncGlvMTgv
cGlucy1zcGkNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvc3BpNl9ncGlvMTgvcGlucy1zcGkNCihYRU4pIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9zcGk2X2dwaW8xOC9waW5zLXNwaSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAw
MDAvc3BpNl9ncGlvMTgvcGlucy1zcGkgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL3NwaTZfZ3BpbzE4L3BpbnMtc3BpDQooWEVOKSBoYW5kbGUg
L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQyX2dwaW8wDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQyX2dwaW8wDQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfZ3BpbzAgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdl
MjAwMDAwL3VhcnQyX2dwaW8wIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMC91YXJ0Ml9ncGlvMA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC91YXJ0Ml9ncGlvMC9waW4tdHgNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfZ3BpbzAvcGluLXR4DQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfZ3BpbzAvcGluLXR4IHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC91YXJ0Ml9ncGlvMC9waW4tdHggaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQyX2dwaW8wL3Bpbi10eA0KKFhFTikg
aGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0Ml9ncGlvMC9waW4tcngN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAv
dWFydDJfZ3BpbzAvcGluLXJ4DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAv
dWFydDJfZ3BpbzAvcGluLXJ4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0Ml9ncGlv
MC9waW4tcnggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQy
X2dwaW8wL3Bpbi1yeA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0Ml9jdHNydHNfZ3BpbzINCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfY3RzcnRzX2dwaW8yDQooWEVO
KSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfY3RzcnRzX2dwaW8yIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC91YXJ0Ml9jdHNydHNfZ3BpbzIgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL3VhcnQyX2N0c3J0c19ncGlvMg0KKFhFTikgaGFu
ZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0Ml9jdHNydHNfZ3BpbzIvcGlu
LWN0cw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIw
MDAwMC91YXJ0Ml9jdHNydHNfZ3BpbzIvcGluLWN0cw0KKFhFTikgL3NvYy9n
cGlvQDdlMjAwMDAwL3VhcnQyX2N0c3J0c19ncGlvMi9waW4tY3RzIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC91YXJ0Ml9jdHNydHNfZ3BpbzIvcGluLWN0cyBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfY3RzcnRzX2dwaW8yL3Bp
bi1jdHMNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDJf
Y3RzcnRzX2dwaW8yL3Bpbi1ydHMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfY3RzcnRzX2dwaW8yL3Bpbi1y
dHMNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0Ml9jdHNydHNfZ3Bp
bzIvcGluLXJ0cyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBD
aGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDJfY3RzcnRzX2dwaW8y
L3Bpbi1ydHMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQy
X2N0c3J0c19ncGlvMi9waW4tcnRzDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL3VhcnQzX2dwaW80DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQzX2dwaW80DQooWEVOKSAvc29j
L2dwaW9AN2UyMDAwMDAvdWFydDNfZ3BpbzQgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3Vh
cnQzX2dwaW80IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0
M19ncGlvNA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0
M19ncGlvNC9waW4tdHgNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvdWFydDNfZ3BpbzQvcGluLXR4DQooWEVOKSAvc29j
L2dwaW9AN2UyMDAwMDAvdWFydDNfZ3BpbzQvcGluLXR4IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC91YXJ0M19ncGlvNC9waW4tdHggaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL3VhcnQzX2dwaW80L3Bpbi10eA0KKFhFTikgaGFuZGxlIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0M19ncGlvNC9waW4tcngNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDNfZ3Bp
bzQvcGluLXJ4DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDNfZ3Bp
bzQvcGluLXJ4IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0M19ncGlvNC9waW4tcngg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQzX2dwaW80L3Bp
bi1yeA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0M19j
dHNydHNfZ3BpbzYNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dw
aW9AN2UyMDAwMDAvdWFydDNfY3RzcnRzX2dwaW82DQooWEVOKSAvc29jL2dw
aW9AN2UyMDAwMDAvdWFydDNfY3RzcnRzX2dwaW82IHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0M19jdHNydHNfZ3BpbzYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdl
MjAwMDAwL3VhcnQzX2N0c3J0c19ncGlvNg0KKFhFTikgaGFuZGxlIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC91YXJ0M19jdHNydHNfZ3BpbzYvcGluLWN0cw0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0
M19jdHNydHNfZ3BpbzYvcGluLWN0cw0KKFhFTikgL3NvYy9ncGlvQDdlMjAw
MDAwL3VhcnQzX2N0c3J0c19ncGlvNi9waW4tY3RzIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0M19jdHNydHNfZ3BpbzYvcGluLWN0cyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvdWFydDNfY3RzcnRzX2dwaW82L3Bpbi1jdHMNCihY
RU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDNfY3RzcnRzX2dw
aW82L3Bpbi1ydHMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dw
aW9AN2UyMDAwMDAvdWFydDNfY3RzcnRzX2dwaW82L3Bpbi1ydHMNCihYRU4p
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0M19jdHNydHNfZ3BpbzYvcGluLXJ0
cyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAv
c29jL2dwaW9AN2UyMDAwMDAvdWFydDNfY3RzcnRzX2dwaW82L3Bpbi1ydHMg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQzX2N0c3J0c19n
cGlvNi9waW4tcnRzDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAw
L3VhcnQ0X2dwaW84DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9n
cGlvQDdlMjAwMDAwL3VhcnQ0X2dwaW84DQooWEVOKSAvc29jL2dwaW9AN2Uy
MDAwMDAvdWFydDRfZ3BpbzggcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0K
KFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ0X2dwaW84
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NF9ncGlvOA0K
KFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NF9ncGlvOC9w
aW4tdHgNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvdWFydDRfZ3BpbzgvcGluLXR4DQooWEVOKSAvc29jL2dwaW9AN2Uy
MDAwMDAvdWFydDRfZ3BpbzgvcGluLXR4IHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0
NF9ncGlvOC9waW4tdHggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L3VhcnQ0X2dwaW84L3Bpbi10eA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC91YXJ0NF9ncGlvOC9waW4tcngNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDRfZ3BpbzgvcGluLXJ4
DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDRfZ3BpbzgvcGluLXJ4
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NF9ncGlvOC9waW4tcnggaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ0X2dwaW84L3Bpbi1yeA0KKFhF
TikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NF9jdHNydHNfZ3Bp
bzEwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwL3VhcnQ0X2N0c3J0c19ncGlvMTANCihYRU4pIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC91YXJ0NF9jdHNydHNfZ3BpbzEwIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0
NF9jdHNydHNfZ3BpbzEwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0NF9jdHNydHNfZ3BpbzEwDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL3VhcnQ0X2N0c3J0c19ncGlvMTAvcGluLWN0cw0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NF9j
dHNydHNfZ3BpbzEwL3Bpbi1jdHMNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0NF9jdHNydHNfZ3BpbzEwL3Bpbi1jdHMgcGFzc3Rocm91Z2ggPSAx
IG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAw
L3VhcnQ0X2N0c3J0c19ncGlvMTAvcGluLWN0cyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2dwaW9AN2UyMDAwMDAvdWFydDRfY3RzcnRzX2dwaW8xMC9waW4tY3RzDQoo
WEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ0X2N0c3J0c19n
cGlvMTAvcGluLXJ0cw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMC91YXJ0NF9jdHNydHNfZ3BpbzEwL3Bpbi1ydHMNCihY
RU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NF9jdHNydHNfZ3BpbzEwL3Bp
bi1ydHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ0X2N0c3J0c19ncGlvMTAvcGlu
LXJ0cyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDRfY3Rz
cnRzX2dwaW8xMC9waW4tcnRzDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdl
MjAwMDAwL3VhcnQ1X2dwaW8xMg0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9ncGlvMTINCihYRU4pIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC91YXJ0NV9ncGlvMTIgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3Vh
cnQ1X2dwaW8xMiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFy
dDVfZ3BpbzEyDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3Vh
cnQ1X2dwaW8xMi9waW4tdHgNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9AN2UyMDAwMDAvdWFydDVfZ3BpbzEyL3Bpbi10eA0KKFhFTikg
L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ1X2dwaW8xMi9waW4tdHggcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlv
QDdlMjAwMDAwL3VhcnQ1X2dwaW8xMi9waW4tdHggaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL3VhcnQ1X2dwaW8xMi9waW4tdHgNCihYRU4pIGhh
bmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDVfZ3BpbzEyL3Bpbi1yeA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91
YXJ0NV9ncGlvMTIvcGluLXJ4DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAv
dWFydDVfZ3BpbzEyL3Bpbi1yeCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDVfZ3Bp
bzEyL3Bpbi1yeCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFy
dDVfZ3BpbzEyL3Bpbi1yeA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC91YXJ0NV9jdHNydHNfZ3BpbzE0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ1X2N0c3J0c19ncGlvMTQN
CihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9jdHNydHNfZ3BpbzE0
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9jdHNydHNfZ3BpbzE0IGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9jdHNydHNfZ3BpbzE0DQoo
WEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ1X2N0c3J0c19n
cGlvMTQvcGluLWN0cw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMC91YXJ0NV9jdHNydHNfZ3BpbzE0L3Bpbi1jdHMNCihY
RU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9jdHNydHNfZ3BpbzE0L3Bp
bi1jdHMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQ1X2N0c3J0c19ncGlvMTQvcGlu
LWN0cyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDVfY3Rz
cnRzX2dwaW8xNC9waW4tY3RzDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdl
MjAwMDAwL3VhcnQ1X2N0c3J0c19ncGlvMTQvcGluLXJ0cw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9jdHNy
dHNfZ3BpbzE0L3Bpbi1ydHMNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91
YXJ0NV9jdHNydHNfZ3BpbzE0L3Bpbi1ydHMgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3Vh
cnQ1X2N0c3J0c19ncGlvMTQvcGluLXJ0cyBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dw
aW9AN2UyMDAwMDAvdWFydDVfY3RzcnRzX2dwaW8xNC9waW4tcnRzDQooWEVO
KSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2dwaW9vdXQNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvZ3Bpb291dA0K
KFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2dwaW9vdXQgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAw
MDAwL2dwaW9vdXQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2dw
aW9vdXQNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvYWx0MA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9h
bHQwDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvYWx0MCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2Uy
MDAwMDAvYWx0MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvYWx0
MA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9kcGlfMThiaXRf
Z3BpbzANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvZHBpXzE4Yml0X2dwaW8wDQooWEVOKSAvc29jL2dwaW9AN2UyMDAw
MDAvZHBpXzE4Yml0X2dwaW8wIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9kcGlfMThiaXRf
Z3BpbzAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2RwaV8xOGJp
dF9ncGlvMA0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkw
X3BpbnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvc3BpMF9waW5zDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3Bp
MF9waW5zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkwX3BpbnMgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTBfcGlucw0KKFhFTikgaGFuZGxlIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC9zcGkwX2NzX3BpbnMNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpMF9jc19waW5zDQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpMF9jc19waW5zIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9zcGkwX2NzX3BpbnMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdl
MjAwMDAwL3NwaTBfY3NfcGlucw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9zcGkzX3BpbnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9AN2UyMDAwMDAvc3BpM19waW5zDQooWEVOKSAvc29jL2dwaW9A
N2UyMDAwMDAvc3BpM19waW5zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkzX3BpbnMg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTNfcGlucw0KKFhF
TikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkzX2NzX3BpbnMNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3Bp
M19jc19waW5zDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpM19jc19w
aW5zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGkzX2NzX3BpbnMgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTNfY3NfcGlucw0KKFhFTikgaGFuZGxl
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk0X3BpbnMNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpNF9waW5zDQooWEVO
KSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpNF9waW5zIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9zcGk0X3BpbnMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3Nw
aTRfcGlucw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk0
X2NzX3BpbnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvc3BpNF9jc19waW5zDQooWEVOKSAvc29jL2dwaW9AN2UyMDAw
MDAvc3BpNF9jc19waW5zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihY
RU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk0X2NzX3BpbnMg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTRfY3NfcGlucw0K
KFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk1X3BpbnMNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3Bp
NV9waW5zDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpNV9waW5zIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9zcGk1X3BpbnMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL3NwaTVfcGlucw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC9zcGk1X2NzX3BpbnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpNV9jc19waW5zDQooWEVOKSAvc29j
L2dwaW9AN2UyMDAwMDAvc3BpNV9jc19waW5zIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9z
cGk1X2NzX3BpbnMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3Nw
aTVfY3NfcGlucw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9z
cGk2X3BpbnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvc3BpNl9waW5zDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAv
c3BpNl9waW5zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk2X3BpbnMgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3NwaTZfcGlucw0KKFhFTikgaGFuZGxl
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zcGk2X2NzX3BpbnMNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc3BpNl9jc19waW5z
DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc3BpNl9jc19waW5zIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9zcGk2X2NzX3BpbnMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL3NwaTZfY3NfcGlucw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9pMmMwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL2kyYzANCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC9pMmMwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmMwIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9pMmMwDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzENCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dw
aW9AN2UyMDAwMDAvaTJjMQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzEgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYg
L3NvYy9ncGlvQDdlMjAwMDAwL2kyYzEgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlv
QDdlMjAwMDAwL2kyYzENCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAw
MDAvaTJjMw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3
ZTIwMDAwMC9pMmMzDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjMyBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29j
L2dwaW9AN2UyMDAwMDAvaTJjMyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvaTJjMw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9p
MmM0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAw
MDAwL2kyYzQNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM0IHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bp
b0A3ZTIwMDAwMC9pMmM0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAw
MC9pMmM0DQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzUN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAv
aTJjNQ0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2kyYzUgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdl
MjAwMDAwL2kyYzUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2ky
YzUNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNg0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMmM2
DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvaTJjNiBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9AN2UyMDAw
MDAvaTJjNiBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJjNg0K
KFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMnMNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvaTJzDQooWEVO
KSAvc29jL2dwaW9AN2UyMDAwMDAvaTJzIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9pMnMg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL2kycw0KKFhFTikgaGFu
ZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9zZGlvX3BpbnMNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvc2Rpb19waW5zDQoo
WEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvc2Rpb19waW5zIHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC9zZGlvX3BpbnMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAw
L3NkaW9fcGlucw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9i
dF9waW5zDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdl
MjAwMDAwL2J0X3BpbnMNCihYRU4pIC9zb2MvZ3Bpb0A3ZTIwMDAwMC9idF9w
aW5zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlm
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC9idF9waW5zIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
Z3Bpb0A3ZTIwMDAwMC9idF9waW5zDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlv
QDdlMjAwMDAwL3VhcnQwX3BpbnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2dwaW9AN2UyMDAwMDAvdWFydDBfcGlucw0KKFhFTikgL3NvYy9n
cGlvQDdlMjAwMDAwL3VhcnQwX3BpbnMgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMA0KKFhFTikgQ2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQw
X3BpbnMgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQwX3Bp
bnMNCihYRU4pIGhhbmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDFfcGlu
cw0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0MV9waW5zDQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDFf
cGlucyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDFfcGlucyBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9AN2UyMDAwMDAvdWFydDFfcGlucw0KKFhFTikgaGFuZGxlIC9z
b2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0Ml9waW5zDQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQyX3BpbnMNCihYRU4p
IC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0Ml9waW5zIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAw
MC91YXJ0Ml9waW5zIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91
YXJ0Ml9waW5zDQooWEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL3Vh
cnQzX3BpbnMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9A
N2UyMDAwMDAvdWFydDNfcGlucw0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAw
L3VhcnQzX3BpbnMgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQzX3BpbnMgaXMgYmVo
aW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9ncGlvQDdlMjAwMDAwL3VhcnQzX3BpbnMNCihYRU4pIGhh
bmRsZSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDRfcGlucw0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NF9waW5z
DQooWEVOKSAvc29jL2dwaW9AN2UyMDAwMDAvdWFydDRfcGlucyBwYXNzdGhy
b3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9A
N2UyMDAwMDAvdWFydDRfcGlucyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2Uy
MDAwMDAvdWFydDRfcGlucw0KKFhFTikgaGFuZGxlIC9zb2MvZ3Bpb0A3ZTIw
MDAwMC91YXJ0NV9waW5zDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9ncGlvQDdlMjAwMDAwL3VhcnQ1X3BpbnMNCihYRU4pIC9zb2MvZ3Bpb0A3
ZTIwMDAwMC91YXJ0NV9waW5zIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDAN
CihYRU4pIENoZWNrIGlmIC9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9waW5z
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZ3Bpb0A3ZTIwMDAwMC91YXJ0NV9waW5zDQoo
WEVOKSBoYW5kbGUgL3NvYy9ncGlvQDdlMjAwMDAwL2F1ZGlvX3BpbnMNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2dwaW9AN2UyMDAwMDAvYXVk
aW9fcGlucw0KKFhFTikgL3NvYy9ncGlvQDdlMjAwMDAwL2F1ZGlvX3BpbnMg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL3Nv
Yy9ncGlvQDdlMjAwMDAwL2F1ZGlvX3BpbnMgaXMgYmVoaW5kIHRoZSBJT01N
VSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9n
cGlvQDdlMjAwMDAwL2F1ZGlvX3BpbnMNCihYRU4pIGhhbmRsZSAvc29jL3Nl
cmlhbEA3ZTIwMTAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
c2VyaWFsQDdlMjAxMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vc29jL3NlcmlhbEA3ZTIwMTAwMCwgaW5kZXg9MA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3OS4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL3Nl
cmlhbEA3ZTIwMTAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVO
KSBDaGVjayBpZiAvc29jL3NlcmlhbEA3ZTIwMTAwMCBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL3NlcmlhbEA3ZTIwMTAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L3NvYy9zZXJpYWxAN2UyMDEwMDAsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzkuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9zZXJpYWxAN2UyMDEwMDAs
IGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwNzkuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgICAtIElSUTogMTUzDQooWEVOKSBEVDogKiogdHJhbnNs
YXRpb24gZm9yIGRldmljZSAvc29jL3NlcmlhbEA3ZTIwMTAwMCAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQoo
WEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2UyMDEwMDA8Mz4N
CihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEp
IG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6
IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTIw
MTAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAw
MDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6
IDIwMTAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4g
MDAwMDAwMDA8Mz4gZmUyMDEwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJv
b3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmUyMDEwMDAgLSAwMGZlMjAx
MjAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2MvbW1jQDdlMjAyMDAw
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9tbWNAN2UyMDIwMDAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvbW1jQDdl
MjAyMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDc4Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pIC9zb2MvbW1jQDdlMjAyMDAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zb2MvbW1jQDdl
MjAyMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zb2MvbW1jQDdlMjAyMDAwDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL21tY0A3ZTIwMjAwMCwg
aW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA3OC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwg
c2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBp
dCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL21t
Y0A3ZTIwMjAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBh
cj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9
WzB4MDAwMDAwMDAgMHgwMDAwMDA3OC4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAxNTINCihYRU4pIERU
OiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvbW1jQDdlMjAyMDAw
ICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9u
IC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTIw
MjAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQoo
WEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAs
IGRhPTdlMjAyMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZv
cjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRo
IG9mZnNldDogMjAyMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0
aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTIwMjAwMDwzPg0KKFhFTikgRFQ6IHJl
YWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTIwMjAwMCAt
IDAwZmUyMDIxMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NvYy9pMnNA
N2UyMDMwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2kyc0A3
ZTIwMzAwMA0KKFhFTikgL3NvYy9pMnNAN2UyMDMwMDAgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9pMnNAN2UyMDMw
MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9pMnNAN2UyMDMwMDANCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvaTJzQDdlMjAzMDAwICoq
DQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9z
b2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTIwMzAw
MDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9Miwg
bnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVO
KSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRh
PTdlMjAzMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8
Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9m
ZnNldDogMjAzMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9u
OjwzPiAwMDAwMDAwMDwzPiBmZTIwMzAwMDwzPg0KKFhFTikgRFQ6IHJlYWNo
ZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTIwMzAwMCAtIDAw
ZmUyMDMwMjQgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NvYy9zcGlAN2Uy
MDQwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3NwaUA3ZTIw
NDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9z
cGlAN2UyMDQwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwNzYuLi5dLG9pbnRzaXplPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xs
ZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihY
RU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NvYy9zcGlAN2UyMDQwMDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9z
cGlAN2UyMDQwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9zcGlAN2UyMDQwMDANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihY
RU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mvc3BpQDdlMjA0
MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDc2Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9z
b2Mvc3BpQDdlMjA0MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc2Li4uXSxvaW50c2l6ZT0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0x
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6IDE1MA0KKFhF
TikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9zcGlAN2Uy
MDQwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9
MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+
IDdlMjA0MDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0
IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMu
Li4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgw
MDAwMCwgZGE9N2UyMDQwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6
IHdpdGggb2Zmc2V0OiAyMDQwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJh
bnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMjA0MDAwPDM+DQooWEVOKSBE
VDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMjA0
MDAwIC0gMDBmZTIwNDIwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29j
L3NwaUA3ZTIwNDAwMC9zcGlkZXZAMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2Mvc3BpQDdlMjA0MDAwL3NwaWRldkAwDQooWEVOKSAvc29jL3Nw
aUA3ZTIwNDAwMC9zcGlkZXZAMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvc29jL3NwaUA3ZTIwNDAwMC9zcGlkZXZAMCBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL3NwaUA3ZTIwNDAwMC9zcGlkZXZAMA0KKFhFTikg
aGFuZGxlIC9zb2Mvc3BpQDdlMjA0MDAwL3NwaWRldkAxDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9zcGlAN2UyMDQwMDAvc3BpZGV2QDENCihY
RU4pIC9zb2Mvc3BpQDdlMjA0MDAwL3NwaWRldkAxIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2Mvc3BpQDdlMjA0MDAw
L3NwaWRldkAxIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mvc3BpQDdlMjA0MDAwL3NwaWRl
dkAxDQooWEVOKSBoYW5kbGUgL3NvYy9pMmNAN2UyMDUwMDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2kyY0A3ZTIwNTAwMA0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9pMmNAN2UyMDUwMDAsIGlu
ZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4
MDAwMDAwNzUuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgL3NvYy9pMmNAN2UyMDUwMDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9pMmNAN2UyMDUwMDAgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9pMmNAN2UyMDUwMDANCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zb2MvaTJjQDdlMjA1MDAwLCBpbmRleD0wDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc1
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihY
RU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvaTJjQDdlMjA1MDAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDc1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pICAgLSBJUlE6IDE0OQ0KKFhFTikgRFQ6ICoqIHRyYW5z
bGF0aW9uIGZvciBkZXZpY2UgL3NvYy9pMmNAN2UyMDUwMDAgKioNCihYRU4p
IERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlMjA1MDAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBv
biAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBk
ZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UyMDUw
MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAw
MDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAy
MDUwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAw
MDAwMDAwPDM+IGZlMjA1MDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290
IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMjA1MDAwIC0gMDBmZTIwNTIw
MCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL2kyYzBtdXgNCihYRU4p
IGR0X2lycV9udW1iZXI6IGRldj0vc29jL2kyYzBtdXgNCihYRU4pIC9zb2Mv
aTJjMG11eCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVj
ayBpZiAvc29jL2kyYzBtdXggaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9pMmMwbXV4DQoo
WEVOKSBoYW5kbGUgL3NvYy9pMmMwbXV4L2kyY0AwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9pMmMwbXV4L2kyY0AwDQooWEVOKSAvc29jL2ky
YzBtdXgvaTJjQDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9pMmMwbXV4L2kyY0AwIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
aTJjMG11eC9pMmNAMA0KKFhFTikgaGFuZGxlIC9zb2MvaTJjMG11eC9pMmNA
MQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvaTJjMG11eC9pMmNA
MQ0KKFhFTikgL3NvYy9pMmMwbXV4L2kyY0AxIHBhc3N0aHJvdWdoID0gMSBu
YWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zb2MvaTJjMG11eC9pMmNAMSBp
cyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL2kyYzBtdXgvaTJjQDENCihYRU4pIGhhbmRsZSAv
c29jL2RwaUA3ZTIwODAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2MvZHBpQDdlMjA4MDAwDQooWEVOKSAvc29jL2RwaUA3ZTIwODAwMCBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvc29jL2Rw
aUA3ZTIwODAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2RwaUA3ZTIwODAwMA0KKFhF
TikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9kcGlAN2Uy
MDgwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9
MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+
IDdlMjA4MDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0
IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMu
Li4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgw
MDAwMCwgZGE9N2UyMDgwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6
IHdpdGggb2Zmc2V0OiAyMDgwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJh
bnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMjA4MDAwPDM+DQooWEVOKSBE
VDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMjA4
MDAwIC0gMDBmZTIwODA4YyBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29j
L2RzaUA3ZTIwOTAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
ZHNpQDdlMjA5MDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRl
dj0vc29jL2RzaUA3ZTIwOTAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2NC4uLl0sb2ludHNpemU9
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQt
Y29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNp
emU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL2RzaUA3ZTIw
OTAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBp
ZiAvc29jL2RzaUA3ZTIwOTAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2RzaUA3ZTIw
OTAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9k
c2lAN2UyMDkwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVj
PVsweDAwMDAwMDAwIDB4MDAwMDAwNjQuLi5dLG9pbnRzaXplPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xs
ZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihY
RU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3NvYy9kc2lAN2UyMDkwMDAsIGluZGV4PTANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjQuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElSUTog
MTMyDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29j
L2RzaUA3ZTIwOTAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChu
YT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gN2UyMDkwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlz
IGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5n
IHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAw
MCwgcz0xODAwMDAwLCBkYT03ZTIwOTAwMA0KKFhFTikgRFQ6IHBhcmVudCB0
cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQoo
WEVOKSBEVDogd2l0aCBvZmZzZXQ6IDIwOTAwMA0KKFhFTikgRFQ6IG9uZSBs
ZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmUyMDkwMDA8Mz4N
CihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86
IDAwZmUyMDkwMDAgLSAwMGZlMjA5MDc4IFAyTVR5cGU9NQ0KKFhFTikgaGFu
ZGxlIC9zb2MvYXV4QDdlMjE1MDAwDQooWEVOKSAgIFNraXAgaXQgKGJsYWNr
bGlzdGVkKQ0KKFhFTikgaGFuZGxlIC9zb2Mvc2VyaWFsQDdlMjE1MDQwDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9zZXJpYWxAN2UyMTUwNDAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mvc2VyaWFs
QDdlMjE1MDQwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMn
IHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAg
aW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDVkLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVO
KSAgLT4gZ290IGl0ICENCihYRU4pIC9zb2Mvc2VyaWFsQDdlMjE1MDQwIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zb2Mv
c2VyaWFsQDdlMjE1MDQwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mvc2VyaWFsQDdlMjE1
MDQwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3Nl
cmlhbEA3ZTIxNTA0MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1ZC4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vc29jL3NlcmlhbEA3ZTIxNTA0MCwgaW5kZXg9MA0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1ZC4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0g
SVJROiAxMjUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNl
IC9zb2Mvc2VyaWFsQDdlMjE1MDQwICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xh
dGluZyBhZGRyZXNzOjwzPiA3ZTIxNTA0MDwzPg0KKFhFTikgRFQ6IHBhcmVu
dCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6
IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNw
PTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlMjE1MDQwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAw
MDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMjE1MDQwDQooWEVOKSBE
VDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTIx
NTA0MDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDBmZTIxNTA0MCAtIDAwZmUyMTUwODAgUDJNVHlwZT01DQoo
WEVOKSBoYW5kbGUgL3NvYy9zcGlAN2UyMTUwODANCihYRU4pICAgU2tpcCBp
dCAoYmxhY2tsaXN0ZWQpDQooWEVOKSBoYW5kbGUgL3NvYy9zcGlAN2UyMTUw
YzANCihYRU4pICAgU2tpcCBpdCAoYmxhY2tsaXN0ZWQpDQooWEVOKSBoYW5k
bGUgL3NvYy9wd21AN2UyMGMwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL3B3bUA3ZTIwYzAwMA0KKFhFTikgL3NvYy9wd21AN2UyMGMwMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3Nv
Yy9wd21AN2UyMGMwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9wd21AN2UyMGMwMDAN
CihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvcHdt
QDdlMjBjMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEs
IG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNz
OjwzPiA3ZTIwYzAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVm
YXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFu
Z2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBz
PTE4MDAwMDAsIGRhPTdlMjBjMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5z
bGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4p
IERUOiB3aXRoIG9mZnNldDogMjBjMDAwDQooWEVOKSBEVDogb25lIGxldmVs
IHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTIwYzAwMDwzPg0KKFhF
TikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBm
ZTIwYzAwMCAtIDAwZmUyMGMwMjggUDJNVHlwZT01DQooWEVOKSBoYW5kbGUg
L3NvYy9odnNAN2U0MDAwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2h2c0A3ZTQwMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3NvYy9odnNAN2U0MDAwMDAsIGluZGV4PTANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjEuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NvYy9odnNA
N2U0MDAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hl
Y2sgaWYgL3NvYy9odnNAN2U0MDAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9odnNA
N2U0MDAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9z
b2MvaHZzQDdlNDAwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDYxLi4uXSxvaW50c2l6ZT0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0x
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9zb2MvaHZzQDdlNDAwMDAwLCBpbmRleD0wDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDYxLi4uXSxv
aW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2lu
dGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJ
UlE6IDEyOQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2Ug
L3NvYy9odnNAN2U0MDAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5n
IGFkZHJlc3M6PDM+IDdlNDAwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fs
a2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2Uw
MDAwMDAsIHM9MTgwMDAwMCwgZGE9N2U0MDAwMDANCihYRU4pIERUOiBwYXJl
bnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwz
Pg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA0MDAwMDANCihYRU4pIERUOiBv
bmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlNDAwMDAw
PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBN
TUlPOiAwMGZlNDAwMDAwIC0gMDBmZTQwNjAwMCBQMk1UeXBlPTUNCihYRU4p
IGhhbmRsZSAvc29jL2RzaUA3ZTcwMDAwMA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zb2MvZHNpQDdlNzAwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vc29jL2RzaUA3ZTcwMDAwMCwgaW5kZXg9MA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2Yy4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVO
KSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAv
c29jL2RzaUA3ZTcwMDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQoo
WEVOKSBDaGVjayBpZiAvc29jL2RzaUA3ZTcwMDAwMCBpcyBiZWhpbmQgdGhl
IElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2RzaUA3ZTcwMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3NvYy9kc2lAN2U3MDAwMDAsIGluZGV4PTANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNmMuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kc2lAN2U3MDAwMDAsIGluZGV4PTAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
NmMuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhF
TikgICAtIElSUTogMTQwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9y
IGRldmljZSAvc29jL2RzaUA3ZTcwMDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBp
cyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJh
bnNsYXRpbmcgYWRkcmVzczo8Mz4gN2U3MDAwMDA8Mz4NCihYRU4pIERUOiBw
YXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4p
IERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFw
LCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTcwMDAwMA0KKFhFTikg
RFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZl
MDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDcwMDAwMA0KKFhF
TikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4g
ZmU3MDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhF
TikgICAtIE1NSU86IDAwZmU3MDAwMDAgLSAwMGZlNzAwMDhjIFAyTVR5cGU9
NQ0KKFhFTikgaGFuZGxlIC9zb2MvaTJjQDdlODA0MDAwDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy9pMmNAN2U4MDQwMDANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvaTJjQDdlODA0MDAwLCBpbmRl
eD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDc1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIC9zb2MvaTJjQDdlODA0MDAwIHBhc3N0aHJvdWdoID0gMSBuYWRk
ciA9IDENCihYRU4pIENoZWNrIGlmIC9zb2MvaTJjQDdlODA0MDAwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zb2MvaTJjQDdlODA0MDAwDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vc29jL2kyY0A3ZTgwNDAwMCwgaW5kZXg9MA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3NS4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVO
KSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL2kyY0A3ZTgwNDAwMCwg
aW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA3NS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwg
c2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBp
dCAhDQooWEVOKSAgIC0gSVJROiAxNDkNCihYRU4pIERUOiAqKiB0cmFuc2xh
dGlvbiBmb3IgZGV2aWNlIC9zb2MvaTJjQDdlODA0MDAwICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4p
IERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTgwNDAwMDwzPg0KKFhF
TikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24g
Lw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVm
YXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlODA0MDAw
DQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAw
MDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogODA0
MDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAw
MDAwMDwzPiBmZTgwNDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBu
b2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTgwNDAwMCAtIDAwZmU4MDUwMDAg
UDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NvYy92ZWNAN2U4MDYwMDANCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3ZlY0A3ZTgwNjAwMA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy92ZWNAN2U4MDYw
MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwN2IuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBn
b3QgaXQgIQ0KKFhFTikgL3NvYy92ZWNAN2U4MDYwMDAgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy92ZWNAN2U4MDYw
MDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy92ZWNAN2U4MDYwMDANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvdmVjQDdlODA2MDAwLCBpbmRl
eD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDdiLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvdmVjQDdl
ODA2MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDdiLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6IDE1NQ0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy92ZWNAN2U4MDYwMDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3Nv
Yw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlODA2MDAw
PDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBu
cz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4p
IERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9
N2U4MDYwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwz
PiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zm
c2V0OiA4MDYwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246
PDM+IDAwMDAwMDAwPDM+IGZlODA2MDAwPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlODA2MDAwIC0gMDBm
ZTgwNzAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL3VzYkA3ZTk4
MDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvdXNiQDdlOTgw
MDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj02DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3Vz
YkA3ZTk4MDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBh
cj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9
WzB4MDAwMDAwMDAgMHgwMDAwMDA0OS4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vc29jL3VzYkA3ZTk4MDAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyOC4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL3VzYkA3
ZTk4MDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBDaGVj
ayBpZiAvc29jL3VzYkA3ZTk4MDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3VzYkA3
ZTk4MDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Ng0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3Nv
Yy91c2JAN2U5ODAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNDkuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTEN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L3NvYy91c2JAN2U5ODAwMDAsIGluZGV4PTANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNDkuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElS
UTogMTA1DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29j
L3VzYkA3ZTk4MDAwMCwgaW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyOC4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vc29jL3VzYkA3ZTk4MDAwMCwgaW5kZXg9MQ0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyOC4uLl0sb2lu
dHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4g
YWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJR
OiA3Mg0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3Nv
Yy91c2JAN2U5ODAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAo
bmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFk
ZHJlc3M6PDM+IDdlOTgwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBp
cyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2lu
ZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAw
MDAsIHM9MTgwMDAwMCwgZGE9N2U5ODAwMDANCihYRU4pIERUOiBwYXJlbnQg
dHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0K
KFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA5ODAwMDANCihYRU4pIERUOiBvbmUg
bGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlOTgwMDAwPDM+
DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlP
OiAwMGZlOTgwMDAwIC0gMDBmZTk5MDAwMCBQMk1UeXBlPTUNCihYRU4pIERU
OiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvdXNiQDdlOTgwMDAw
ICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9u
IC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTAw
YjIwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQoo
WEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAs
IGRhPTdlMDBiMjAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZv
cjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRo
IG9mZnNldDogYjIwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlv
bjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMGIyMDA8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmUwMGIyMDAgLSAw
MGZlMDBiNDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2MvbG9jYWxf
aW50Y0A0MDAwMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
bG9jYWxfaW50Y0A0MDAwMDAwMA0KKFhFTikgL3NvYy9sb2NhbF9pbnRjQDQw
MDAwMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNr
IGlmIC9zb2MvbG9jYWxfaW50Y0A0MDAwMDAwMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2xvY2FsX2ludGNANDAwMDAwMDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlv
biBmb3IgZGV2aWNlIC9zb2MvbG9jYWxfaW50Y0A0MDAwMDAwMCAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQoo
WEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gNDAwMDAwMDA8Mz4N
CihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEp
IG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6
IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT00MDAw
MDAwMA0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03YzAwMDAwMCwgcz0y
MDAwMDAwLCBkYT00MDAwMDAwMA0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBj
cD00MDAwMDAwMCwgcz04MDAwMDAsIGRhPTQwMDAwMDAwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmY4MDAw
MDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMA0KKFhFTikgRFQ6IG9u
ZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmY4MDAwMDA8
Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwZmY4MDAwMDAgLSAwMGZmODAwMTAwIFAyTVR5cGU9NQ0KKFhFTikg
aGFuZGxlIC9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDANCihY
RU4pIENyZWF0ZSBnaWMgbm9kZQ0KKFhFTikgICBTZXQgcGhhbmRsZSA9IDB4
MQ0KKFhFTikgaGFuZGxlIC9zb2MvYXZzLW1vbml0b3JAN2Q1ZDIwMDANCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2F2cy1tb25pdG9yQDdkNWQy
MDAwDQooWEVOKSAvc29jL2F2cy1tb25pdG9yQDdkNWQyMDAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zb2MvYXZzLW1v
bml0b3JAN2Q1ZDIwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9hdnMtbW9uaXRvckA3
ZDVkMjAwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2Ug
L3NvYy9hdnMtbW9uaXRvckA3ZDVkMjAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBp
cyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJh
bnNsYXRpbmcgYWRkcmVzczo8Mz4gN2Q1ZDIwMDA8Mz4NCihYRU4pIERUOiBw
YXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4p
IERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFw
LCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZDVkMjAwMA0KKFhFTikg
RFQ6IGRlZmF1bHQgbWFwLCBjcD03YzAwMDAwMCwgcz0yMDAwMDAwLCBkYT03
ZDVkMjAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+
IDAwMDAwMDAwPDM+IGZjMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZz
ZXQ6IDE1ZDIwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246
PDM+IDAwMDAwMDAwPDM+IGZkNWQyMDAwPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZkNWQyMDAwIC0gMDBm
ZDVkMmYwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL2F2cy1tb25p
dG9yQDdkNWQyMDAwL3RoZXJtYWwNCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL2F2cy1tb25pdG9yQDdkNWQyMDAwL3RoZXJtYWwNCihYRU4pIC9z
b2MvYXZzLW1vbml0b3JAN2Q1ZDIwMDAvdGhlcm1hbCBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2F2cy1tb25pdG9y
QDdkNWQyMDAwL3RoZXJtYWwgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9hdnMtbW9uaXRv
ckA3ZDVkMjAwMC90aGVybWFsDQooWEVOKSBoYW5kbGUgL3NvYy9kbWFAN2Uw
MDcwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2RtYUA3ZTAw
NzAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49MzMNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTMzDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29j
L2RtYUA3ZTAwNzAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMzDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDUwLi4uXSxvaW50c2l6ZT0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0x
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD0xDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTEuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGlu
ZGV4PTINCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGlu
dGxlbj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA1Mi4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwg
c2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBp
dCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL2Rt
YUA3ZTAwNzAwMCwgaW5kZXg9Mw0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMzDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
cGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDUzLi4uXSxvaW50c2l6ZT0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD00DQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTQuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGluZGV4
PTUNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDA1NS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL2RtYUA3
ZTAwNzAwMCwgaW5kZXg9Ng0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMzDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFy
PS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1b
MHgwMDAwMDAwMCAweDAwMDAwMDU2Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVO
KSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD03DQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTcuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGluZGV4PTgN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDA1Ny4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL2RtYUA3ZTAw
NzAwMCwgaW5kZXg9OQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzMNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTMzDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDU4Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD0xMA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
MzMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMzDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDU4Li4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9zb2MvZG1hQDdl
MDA3MDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNr
IGlmIC9zb2MvZG1hQDdlMDA3MDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5k
IGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvZG1hQDdl
MDA3MDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49MzMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9z
b2MvZG1hQDdlMDA3MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zMw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxp
bnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTAuLi5dLG9pbnRzaXplPTMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNv
bnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zMw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1MC4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVO
KSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAg
IC0gSVJROiAxMTINCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTEuLi5dLG9pbnRzaXpl
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJz
aXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGluZGV4PTENCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zMw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1
MS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQoo
WEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVO
KSAgIC0gSVJROiAxMTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD0yDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTIuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGluZGV4PTIN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDA1Mi4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSAgIC0gSVJROiAxMTQNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD0zDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTMuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2
aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGluZGV4
PTMNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDA1My4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSAgIC0gSVJROiAxMTUNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD00DQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTQuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAsIGlu
ZGV4PTQNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0zIGlu
dGxlbj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA1NC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwg
c2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBp
dCAhDQooWEVOKSAgIC0gSVJROiAxMTYNCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD01DQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNTUu
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcwMDAs
IGluZGV4PTUNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2lu
dGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDA1NS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAw
MCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdv
dCBpdCAhDQooWEVOKSAgIC0gSVJROiAxMTcNCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD02DQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49MzMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
NTYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2UwMDcw
MDAsIGluZGV4PTYNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29j
L2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA1Ni4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAxMTgNCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRleD03
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
MzMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQt
Y29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwNTcuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9
Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFAN2Uw
MDcwMDAsIGluZGV4PTcNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4
MDAwMDAwMDAgMHgwMDAwMDA1Ny4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAxMTkNCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBpbmRl
eD04DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49MzMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4
MDAwMDAwNTcuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9kbWFA
N2UwMDcwMDAsIGluZGV4PTgNCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVOKSAg
aW50c2l6ZT0zIGludGxlbj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBh
cj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9
WzB4MDAwMDAwMDAgMHgwMDAwMDA1Ny4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAxMTkNCihYRU4pIGR0
X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAwLCBp
bmRleD05DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0zMw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49MzMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwNTguLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9k
bWFAN2UwMDcwMDAsIGluZGV4PTkNCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMzDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zMw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1OC4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAxMjANCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvZG1hQDdlMDA3MDAw
LCBpbmRleD0xMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzMNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTMzDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDU4Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9z
b2MvZG1hQDdlMDA3MDAwLCBpbmRleD0xMA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MzMN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMzDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDU4Li4uXSxvaW50c2l6ZT0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6IDEyMA0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9kbWFA
N2UwMDcwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwg
bnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDdlMDA3MDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5n
ZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9
MTgwMDAwMCwgZGE9N2UwMDcwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNs
YXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikg
RFQ6IHdpdGggb2Zmc2V0OiA3MDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRy
YW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTAwNzAwMDwzPg0KKFhFTikg
RFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTAw
NzAwMCAtIDAwZmUwMDdiMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3Nv
Yy93YXRjaGRvZ0A3ZTEwMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zb2Mvd2F0Y2hkb2dAN2UxMDAwMDANCihYRU4pIC9zb2Mvd2F0Y2hkb2dA
N2UxMDAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMw0KKFhFTikgQ2hl
Y2sgaWYgL3NvYy93YXRjaGRvZ0A3ZTEwMDAwMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L3dhdGNoZG9nQDdlMTAwMDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24g
Zm9yIGRldmljZSAvc29jL3dhdGNoZG9nQDdlMTAwMDAwICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4p
IERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTEwMDAwMDwzPg0KKFhF
TikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24g
Lw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVm
YXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlMTAwMDAw
DQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAw
MDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMTAw
MDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAw
MDAwMDwzPiBmZTEwMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBu
b2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTEwMDAwMCAtIDAwZmUxMDAxMTQg
UDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmlj
ZSAvc29jL3dhdGNoZG9nQDdlMTAwMDAwICoqDQooWEVOKSBEVDogYnVzIGlz
IGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiA3ZTAwYTAwMDwzPg0KKFhFTikgRFQ6IHBh
cmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikg
RFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAs
IGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlMDBhMDAwDQooWEVOKSBE
VDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUw
MDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogYTAwMA0KKFhFTikg
RFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmUw
MGEwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikg
ICAtIE1NSU86IDAwZmUwMGEwMDAgLSAwMGZlMDBhMDI0IFAyTVR5cGU9NQ0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy93YXRj
aGRvZ0A3ZTEwMDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChu
YT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRk
cmVzczo8Mz4gN2VjMTEwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlz
IGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5n
IHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAw
MCwgcz0xODAwMDAwLCBkYT03ZWMxMTAwMA0KKFhFTikgRFQ6IHBhcmVudCB0
cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQoo
WEVOKSBEVDogd2l0aCBvZmZzZXQ6IGMxMTAwMA0KKFhFTikgRFQ6IG9uZSBs
ZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmVjMTEwMDA8Mz4N
CihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86
IDAwZmVjMTEwMDAgLSAwMGZlYzExMDIwIFAyTVR5cGU9NQ0KKFhFTikgaGFu
ZGxlIC9zb2Mvcm5nQDdlMTA0MDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9ybmdAN2UxMDQwMDANCihYRU4pIC9zb2Mvcm5nQDdlMTA0MDAw
IHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9z
b2Mvcm5nQDdlMTA0MDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mvcm5nQDdlMTA0MDAw
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL3Ju
Z0A3ZTEwNDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0x
LCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gN2UxMDQwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJh
bmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwg
cz0xODAwMDAwLCBkYT03ZTEwNDAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFu
c2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IDEwNDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmUxMDQwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAw
ZmUxMDQwMDAgLSAwMGZlMTA0MDI4IFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxl
IC9zb2Mvc2VyaWFsQDdlMjAxNDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9zZXJpYWxAN2UyMDE0MDANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRf
cmF3X2lycTogZGV2PS9zb2Mvc2VyaWFsQDdlMjAxNDAwLCBpbmRleD0wDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc5
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihY
RU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IC9zb2Mvc2VyaWFsQDdlMjAxNDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDENCihYRU4pIENoZWNrIGlmIC9zb2Mvc2VyaWFsQDdlMjAxNDAwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zb2Mvc2VyaWFsQDdlMjAxNDAwDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2Vf
Z2V0X3Jhd19pcnE6IGRldj0vc29jL3NlcmlhbEA3ZTIwMTQwMCwgaW5kZXg9
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDA3OS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3NlcmlhbEA3
ZTIwMTQwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4
MDAwMDAwMDAgMHgwMDAwMDA3OS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAxNTMNCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2Mvc2VyaWFsQDdlMjAxNDAw
ICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9u
IC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTIw
MTQwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQoo
WEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAs
IGRhPTdlMjAxNDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZv
cjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRo
IG9mZnNldDogMjAxNDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0
aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTIwMTQwMDwzPg0KKFhFTikgRFQ6IHJl
YWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTIwMTQwMCAt
IDAwZmUyMDE2MDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NvYy9zZXJp
YWxAN2UyMDE2MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3Nl
cmlhbEA3ZTIwMTYwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L3NvYy9zZXJpYWxAN2UyMDE2MDAsIGluZGV4PTANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzkuLi5dLG9pbnRz
aXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFk
ZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NvYy9zZXJp
YWxAN2UyMDE2MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9zZXJpYWxAN2UyMDE2MDAgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9zZXJpYWxAN2UyMDE2MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zb2Mvc2VyaWFsQDdlMjAxNjAwLCBpbmRleD0wDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc5Li4uXSxv
aW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2lu
dGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mvc2VyaWFsQDdlMjAxNjAwLCBp
bmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGlu
dGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDc5Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBz
aXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pICAgLSBJUlE6IDE1Mw0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0
aW9uIGZvciBkZXZpY2UgL3NvYy9zZXJpYWxAN2UyMDE2MDAgKioNCihYRU4p
IERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlMjAxNjAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBv
biAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBk
ZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UyMDE2
MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAw
MDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAy
MDE2MDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAw
MDAwMDAwPDM+IGZlMjAxNjAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290
IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMjAxNjAwIC0gMDBmZTIwMTgw
MCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL3NlcmlhbEA3ZTIwMTgw
MA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mvc2VyaWFsQDdlMjAx
ODAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3Nl
cmlhbEA3ZTIwMTgwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNw
ZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3OS4uLl0sb2ludHNpemU9Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0K
KFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL3NlcmlhbEA3ZTIwMTgw
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAv
c29jL3NlcmlhbEA3ZTIwMTgwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3NlcmlhbEA3
ZTIwMTgwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3Nv
Yy9zZXJpYWxAN2UyMDE4MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxp
bnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzkuLi5dLG9pbnRzaXplPTMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNv
bnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXpl
PTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L3NvYy9zZXJpYWxAN2UyMDE4MDAsIGluZGV4PTANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzku
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
ICAtIElSUTogMTUzDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRl
dmljZSAvc29jL3NlcmlhbEA3ZTIwMTgwMCAqKg0KKFhFTikgRFQ6IGJ1cyBp
cyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJh
bnNsYXRpbmcgYWRkcmVzczo8Mz4gN2UyMDE4MDA8Mz4NCihYRU4pIERUOiBw
YXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4p
IERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFw
LCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTIwMTgwMA0KKFhFTikg
RFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZl
MDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDIwMTgwMA0KKFhF
TikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4g
ZmUyMDE4MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhF
TikgICAtIE1NSU86IDAwZmUyMDE4MDAgLSAwMGZlMjAxYTAwIFAyTVR5cGU9
NQ0KKFhFTikgaGFuZGxlIC9zb2Mvc2VyaWFsQDdlMjAxYTAwDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9zZXJpYWxAN2UyMDFhMDANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mvc2VyaWFsQDdlMjAx
YTAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDc5Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIC9zb2Mvc2VyaWFsQDdlMjAxYTAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zb2Mvc2VyaWFs
QDdlMjAxYTAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mvc2VyaWFsQDdlMjAxYTAwDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3NlcmlhbEA3
ZTIwMWEwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4
MDAwMDAwMDAgMHgwMDAwMDA3OS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRf
aXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikg
IC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRl
dj0vc29jL3NlcmlhbEA3ZTIwMWEwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3OS4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAx
NTMNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2Mv
c2VyaWFsQDdlMjAxYTAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBh
ZGRyZXNzOjwzPiA3ZTIwMWEwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtp
bmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAw
MDAwLCBzPTE4MDAwMDAsIGRhPTdlMjAxYTAwDQooWEVOKSBEVDogcGFyZW50
IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4N
CihYRU4pIERUOiB3aXRoIG9mZnNldDogMjAxYTAwDQooWEVOKSBEVDogb25l
IGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTIwMWEwMDwz
Pg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1J
TzogMDBmZTIwMWEwMCAtIDAwZmUyMDFjMDAgUDJNVHlwZT01DQooWEVOKSBo
YW5kbGUgL3NvYy9zcGlAN2UyMDQ2MDANCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL3NwaUA3ZTIwNDYwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L3NvYy9zcGlAN2UyMDQ2MDAsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzYuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3Nv
Yy9zcGlAN2UyMDQ2MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhF
TikgQ2hlY2sgaWYgL3NvYy9zcGlAN2UyMDQ2MDAgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9zcGlAN2UyMDQ2MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTog
ZGV2PS9zb2Mvc3BpQDdlMjA0NjAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc2Li4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zb2Mvc3BpQDdlMjA0NjAwLCBpbmRleD0wDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc2
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihY
RU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
ICAgLSBJUlE6IDE1MA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBk
ZXZpY2UgL3NvYy9zcGlAN2UyMDQ2MDAgKioNCihYRU4pIERUOiBidXMgaXMg
ZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5z
bGF0aW5nIGFkZHJlc3M6PDM+IDdlMjA0NjAwPDM+DQooWEVOKSBEVDogcGFy
ZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBE
VDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwg
Y3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UyMDQ2MDANCihYRU4pIERU
OiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAw
MDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAyMDQ2MDANCihYRU4p
IERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZl
MjA0NjAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMGZlMjA0NjAwIC0gMDBmZTIwNDgwMCBQMk1UeXBlPTUN
CihYRU4pIGhhbmRsZSAvc29jL3NwaUA3ZTIwNDgwMA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2Mvc3BpQDdlMjA0ODAwDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZp
Y2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3NwaUA3ZTIwNDgwMCwgaW5kZXg9
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDA3Ni4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSAvc29jL3NwaUA3ZTIwNDgwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAxDQooWEVOKSBDaGVjayBpZiAvc29jL3NwaUA3ZTIwNDgwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc29jL3NwaUA3ZTIwNDgwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1
cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L3NvYy9zcGlAN2UyMDQ4MDAsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzYuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRf
ZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9zcGlAN2UyMDQ4MDAsIGlu
ZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4
MDAwMDAwNzYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgICAtIElSUTogMTUwDQooWEVOKSBEVDogKiogdHJhbnNsYXRp
b24gZm9yIGRldmljZSAvc29jL3NwaUA3ZTIwNDgwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2UyMDQ4MDA8Mz4NCihYRU4p
IERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8N
CihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1
bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTIwNDgwMA0K
KFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAw
PDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDIwNDgw
MA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gZmUyMDQ4MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwZmUyMDQ4MDAgLSAwMGZlMjA0YTAwIFAy
TVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2Mvc3BpQDdlMjA0YTAwDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9zcGlAN2UyMDRhMDANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mvc3BpQDdlMjA0YTAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDc2Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIC9zb2Mvc3BpQDdlMjA0YTAwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zb2Mvc3BpQDdlMjA0YTAw
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2Mvc3BpQDdlMjA0YTAwDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZp
Y2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3NwaUA3ZTIwNGEwMCwgaW5kZXg9
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDA3Ni4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3NwaUA3ZTIw
NGEwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29j
L2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA3Ni4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiAxNTANCihYRU4pIERUOiAqKiB0
cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2Mvc3BpQDdlMjA0YTAwICoqDQoo
WEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MN
CihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTIwNGEwMDwz
Pg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBE
VDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdl
MjA0YTAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4g
MDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNl
dDogMjA0YTAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwz
PiAwMDAwMDAwMDwzPiBmZTIwNGEwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQg
cm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTIwNGEwMCAtIDAwZmUy
MDRjMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NvYy9zcGlAN2UyMDRj
MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3NwaUA3ZTIwNGMw
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9zcGlA
N2UyMDRjMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwNzYuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NvYy9zcGlAN2UyMDRjMDAgcGFzc3Ro
cm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9zcGlA
N2UyMDRjMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9zcGlAN2UyMDRjMDANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mvc3BpQDdlMjA0YzAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDc2Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mv
c3BpQDdlMjA0YzAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
cGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDc2Li4uXSxvaW50c2l6ZT0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6IDE1MA0KKFhFTikg
RFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9zcGlAN2UyMDRj
MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkg
b24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdl
MjA0YzAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChu
YT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4N
CihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAw
MCwgZGE9N2UyMDRjMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24g
Zm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdp
dGggb2Zmc2V0OiAyMDRjMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNs
YXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMjA0YzAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMjA0YzAw
IC0gMDBmZTIwNGUwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL2ky
Y0A3ZTIwNTYwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvaTJj
QDdlMjA1NjAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
c29jL2kyY0A3ZTIwNTYwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3NS4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9
MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL2kyY0A3ZTIwNTYw
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAv
c29jL2kyY0A3ZTIwNTYwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2kyY0A3ZTIwNTYw
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9pMmNA
N2UyMDU2MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwNzUuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L3NvYy9pMmNAN2UyMDU2MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVu
PTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzUuLi5dLG9pbnRzaXpl
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJz
aXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTQ5
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2ky
Y0A3ZTIwNTYwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0x
LCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gN2UyMDU2MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJh
bmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwg
cz0xODAwMDAwLCBkYT03ZTIwNTYwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFu
c2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IDIwNTYwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmUyMDU2MDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAw
ZmUyMDU2MDAgLSAwMGZlMjA1ODAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxl
IC9zb2MvaTJjQDdlMjA1ODAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9pMmNAN2UyMDU4MDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zb2MvaTJjQDdlMjA1ODAwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc1Li4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9zb2MvaTJj
QDdlMjA1ODAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENo
ZWNrIGlmIC9zb2MvaTJjQDdlMjA1ODAwIGlzIGJlaGluZCB0aGUgSU9NTVUg
YW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvaTJj
QDdlMjA1ODAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
c29jL2kyY0A3ZTIwNTgwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3NS4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9
MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vc29jL2kyY0A3ZTIwNTgwMCwgaW5kZXg9MA0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3NS4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0g
SVJROiAxNDkNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNl
IC9zb2MvaTJjQDdlMjA1ODAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1
bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGlu
ZyBhZGRyZXNzOjwzPiA3ZTIwNTgwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBi
dXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdh
bGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdl
MDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlMjA1ODAwDQooWEVOKSBEVDogcGFy
ZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8
Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMjA1ODAwDQooWEVOKSBEVDog
b25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTIwNTgw
MDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0g
TU1JTzogMDBmZTIwNTgwMCAtIDAwZmUyMDVhMDAgUDJNVHlwZT01DQooWEVO
KSBoYW5kbGUgL3NvYy9pMmNAN2UyMDVhMDANCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2kyY0A3ZTIwNWEwMA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3NvYy9pMmNAN2UyMDVhMDAsIGluZGV4PTANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzUu
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
L3NvYy9pMmNAN2UyMDVhMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0K
KFhFTikgQ2hlY2sgaWYgL3NvYy9pMmNAN2UyMDVhMDAgaXMgYmVoaW5kIHRo
ZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9pMmNAN2UyMDVhMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zb2MvaTJjQDdlMjA1YTAwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDc1Li4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvaTJjQDdlMjA1YTAwLCBpbmRleD0w
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNv
bnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAw
MDc1Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBh
cj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMN
CihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihY
RU4pICAgLSBJUlE6IDE0OQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL3NvYy9pMmNAN2UyMDVhMDAgKioNCihYRU4pIERUOiBidXMg
aXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRy
YW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlMjA1YTAwPDM+DQooWEVOKSBEVDog
cGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVO
KSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1h
cCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UyMDVhMDANCihYRU4p
IERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBm
ZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAyMDVhMDANCihY
RU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+
IGZlMjA1YTAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihY
RU4pICAgLSBNTUlPOiAwMGZlMjA1YTAwIC0gMDBmZTIwNWMwMCBQMk1UeXBl
PTUNCihYRU4pIGhhbmRsZSAvc29jL2kyY0A3ZTIwNWMwMA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zb2MvaTJjQDdlMjA1YzAwDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9k
ZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL2kyY0A3ZTIwNWMwMCwgaW5k
ZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDA3NS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSAvc29jL2kyY0A3ZTIwNWMwMCBwYXNzdGhyb3VnaCA9IDEgbmFk
ZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvc29jL2kyY0A3ZTIwNWMwMCBpcyBi
ZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2kyY0A3ZTIwNWMwMA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3NvYy9pMmNAN2UyMDVjMDAsIGluZGV4PTANCihY
RU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRz
cGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJv
bGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzUu
Li5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhF
TikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikg
ZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9pMmNAN2UyMDVjMDAs
IGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwNzUuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgICAtIElSUTogMTQ5DQooWEVOKSBEVDogKiogdHJhbnNs
YXRpb24gZm9yIGRldmljZSAvc29jL2kyY0A3ZTIwNWMwMCAqKg0KKFhFTikg
RFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVO
KSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2UyMDVjMDA8Mz4NCihY
RU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9u
IC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRl
ZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTIwNWMw
MA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAw
MDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDIw
NWMwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAw
MDAwMDA8Mz4gZmUyMDVjMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmUyMDVjMDAgLSAwMGZlMjA1ZTAw
IFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2MvcHdtQDdlMjBjODAwDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9wd21AN2UyMGM4MDANCihY
RU4pIC9zb2MvcHdtQDdlMjBjODAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDENCihYRU4pIENoZWNrIGlmIC9zb2MvcHdtQDdlMjBjODAwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvcHdtQDdlMjBjODAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRp
b24gZm9yIGRldmljZSAvc29jL3B3bUA3ZTIwYzgwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2UyMGM4MDA8Mz4NCihYRU4p
IERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8N
CihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1
bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTIwYzgwMA0K
KFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAw
PDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDIwYzgw
MA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gZmUyMGM4MDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwZmUyMGM4MDAgLSAwMGZlMjBjODI4IFAy
TVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2MvZmlybXdhcmUNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2Zpcm13YXJlDQooWEVOKSAvc29jL2Zp
cm13YXJlIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNr
IGlmIC9zb2MvZmlybXdhcmUgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9maXJtd2FyZQ0K
KFhFTikgaGFuZGxlIC9zb2MvZmlybXdhcmUvZ3Bpbw0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvZmlybXdhcmUvZ3Bpbw0KKFhFTikgL3NvYy9m
aXJtd2FyZS9ncGlvIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zb2MvZmlybXdhcmUvZ3BpbyBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L2Zpcm13YXJlL2dwaW8NCihYRU4pIGhhbmRsZSAvc29jL3Bvd2VyDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9wb3dlcg0KKFhFTikgL3NvYy9w
b3dlciBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvc29jL3Bvd2VyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvcG93ZXINCihYRU4pIGhh
bmRsZSAvc29jL3BpeGVsdmFsdmVAN2UyMDYwMDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL3BpeGVsdmFsdmVAN2UyMDYwMDANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvcGl4ZWx2YWx2ZUA3ZTIw
NjAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29j
L2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA2ZC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSAvc29jL3BpeGVsdmFsdmVAN2UyMDYwMDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9w
aXhlbHZhbHZlQDdlMjA2MDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvcGl4ZWx2YWx2
ZUA3ZTIwNjAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9
MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L3NvYy9waXhlbHZhbHZlQDdlMjA2MDAwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZkLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvcGl4ZWx2YWx2ZUA3ZTIwNjAwMCwg
aW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAg
MHgwMDAwMDA2ZC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwg
c2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBp
dCAhDQooWEVOKSAgIC0gSVJROiAxNDENCihYRU4pIERUOiAqKiB0cmFuc2xh
dGlvbiBmb3IgZGV2aWNlIC9zb2MvcGl4ZWx2YWx2ZUA3ZTIwNjAwMCAqKg0K
KFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29j
DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2UyMDYwMDA8
Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikg
RFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03
ZTIwNjAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+
IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZz
ZXQ6IDIwNjAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8
Mz4gMDAwMDAwMDA8Mz4gZmUyMDYwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVk
IHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmUyMDYwMDAgLSAwMGZl
MjA2MTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2MvcGl4ZWx2YWx2
ZUA3ZTIwNzAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvcGl4
ZWx2YWx2ZUA3ZTIwNzAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3NvYy9waXhlbHZhbHZlQDdlMjA3MDAwLCBpbmRleD0wDQooWEVO
KSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3Bl
Yz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xs
ZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZlLi4u
XSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29j
L2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4p
ICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9z
b2MvcGl4ZWx2YWx2ZUA3ZTIwNzAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIg
PSAxDQooWEVOKSBDaGVjayBpZiAvc29jL3BpeGVsdmFsdmVAN2UyMDcwMDAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9waXhlbHZhbHZlQDdlMjA3MDAwDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3BpeGVsdmFsdmVAN2Uy
MDcwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwNmUuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAt
PiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9
L3NvYy9waXhlbHZhbHZlQDdlMjA3MDAwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZlLi4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6
IDE0Mg0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3Nv
Yy9waXhlbHZhbHZlQDdlMjA3MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xh
dGluZyBhZGRyZXNzOjwzPiA3ZTIwNzAwMDwzPg0KKFhFTikgRFQ6IHBhcmVu
dCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6
IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNw
PTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlMjA3MDAwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAw
MDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMjA3MDAwDQooWEVOKSBE
VDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTIw
NzAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDBmZTIwNzAwMCAtIDAwZmUyMDcxMDAgUDJNVHlwZT01DQoo
WEVOKSBoYW5kbGUgL3NvYy9tbWNAN2UzMDAwMDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc29jL21tY0A3ZTMwMDAwMA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L3NvYy9tbWNAN2UzMDAwMDAsIGluZGV4PTAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
N2UuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhF
TikgL3NvYy9tbWNAN2UzMDAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9tbWNAN2UzMDAwMDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9tbWNAN2UzMDAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9zb2MvbW1jQDdlMzAwMDAwLCBpbmRleD0wDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDdlLi4uXSxv
aW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2lu
dGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvbW1jQDdlMzAwMDAwLCBpbmRl
eD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDdlLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pICAgLSBJUlE6IDE1OA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL3NvYy9tbWNAN2UzMDAwMDAgKioNCihYRU4pIERUOiBi
dXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6
IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlMzAwMDAwPDM+DQooWEVOKSBE
VDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQoo
WEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0
IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UzMDAwMDANCihY
RU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwz
PiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAzMDAwMDAN
CihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAw
PDM+IGZlMzAwMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUN
CihYRU4pICAgLSBNTUlPOiAwMGZlMzAwMDAwIC0gMDBmZTMwMDEwMCBQMk1U
eXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL21tY25yQDdlMzAwMDAwDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9tbWNuckA3ZTMwMDAwMA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9tbWNuckA3ZTMw
MDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29j
L2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA3ZS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSAvc29jL21tY25yQDdlMzAwMDAwIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zb2MvbW1jbnJA
N2UzMDAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9tbWNuckA3ZTMwMDAwMA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9tbWNuckA3ZTMw
MDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNp
emU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29j
L2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAw
MDAwMDAgMHgwMDAwMDA3ZS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0
MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+
IGdvdCBpdCAhDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
c29jL21tY25yQDdlMzAwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDdlLi4uXSxvaW50c2l6ZT0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6IDE1OA0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9tbWNu
ckA3ZTMwMDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0x
LCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gN2UzMDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJh
bmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwg
cz0xODAwMDAwLCBkYT03ZTMwMDAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFu
c2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IDMwMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmUzMDAwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAw
ZmUzMDAwMDAgLSAwMGZlMzAwMTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxl
IC9zb2MvZmlybXdhcmVrbXNAN2U2MDAwMDANCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2Zpcm13YXJla21zQDdlNjAwMDAwDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9k
ZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL2Zpcm13YXJla21zQDdlNjAw
MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAw
MDAwMCAweDAwMDAwMDcwLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4g
Z290IGl0ICENCihYRU4pIC9zb2MvZmlybXdhcmVrbXNAN2U2MDAwMDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9m
aXJtd2FyZWttc0A3ZTYwMDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBh
ZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2Zpcm13YXJl
a21zQDdlNjAwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3Bl
cnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6
ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRl
dj0vc29jL2Zpcm13YXJla21zQDdlNjAwMDAwLCBpbmRleD0wDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDcwLi4uXSxv
aW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2lu
dGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2MvZmlybXdhcmVrbXNAN2U2MDAw
MDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwNzAuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBn
b3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTQ0DQooWEVOKSBEVDogKiogdHJh
bnNsYXRpb24gZm9yIGRldmljZSAvc29jL2Zpcm13YXJla21zQDdlNjAwMDAw
ICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9u
IC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZTYw
MDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQoo
WEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAs
IGRhPTdlNjAwMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZv
cjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRo
IG9mZnNldDogNjAwMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0
aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTYwMDAwMDwzPg0KKFhFTikgRFQ6IHJl
YWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTYwMDAwMCAt
IDAwZmU2MDAxMDAgUDJNVHlwZT01DQooWEVOKSBoYW5kbGUgL3NvYy9zbWlA
N2U2MDAwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3NtaUA3
ZTYwMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3Nv
Yy9zbWlAN2U2MDAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNzAuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTEN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NvYy9zbWlAN2U2MDAwMDAg
cGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3Nv
Yy9zbWlAN2U2MDAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9zbWlAN2U2MDAwMDAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zb2Mvc21pQDdl
NjAwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9z
b2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgw
MDAwMDAwMCAweDAwMDAwMDcwLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQw
MDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAg
LT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2
PS9zb2Mvc21pQDdlNjAwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
aW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDcwLi4uXSxvaW50c2l6ZT0z
DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6
ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6IDE0NA0K
KFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9zbWlA
N2U2MDAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwg
bnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDdlNjAwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5n
ZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9
MTgwMDAwMCwgZGE9N2U2MDAwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNs
YXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikg
RFQ6IHdpdGggb2Zmc2V0OiA2MDAwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwg
dHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlNjAwMDAwPDM+DQooWEVO
KSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZl
NjAwMDAwIC0gMDBmZTYwMDEwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAv
c29jL2NzaUA3ZTgwMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
b2MvY3NpQDdlODAwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHBy
b3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50
c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vc29jL2NzaUA3ZTgwMDAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJx
X21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQx
MDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2Ni4uLl0sb2ludHNp
emU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRk
cnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL2NzaUA3
ZTgwMDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBDaGVj
ayBpZiAvc29jL2NzaUA3ZTgwMDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2NzaUA3
ZTgwMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3Nv
Yy9jc2lAN2U4MDAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjYuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTEN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L3NvYy9jc2lAN2U4MDAwMDAsIGluZGV4PTANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjYuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElS
UTogMTM0DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAv
c29jL2NzaUA3ZTgwMDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0
IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcg
YWRkcmVzczo8Mz4gN2U4MDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxr
aW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAw
MDAwMCwgcz0xODAwMDAwLCBkYT03ZTgwMDAwMA0KKFhFTikgRFQ6IHBhcmVu
dCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+
DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDgwMDAwMA0KKFhFTikgRFQ6IG9u
ZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmU4MDAwMDA8
Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwZmU4MDAwMDAgLSAwMGZlODAwODAwIFAyTVR5cGU9NQ0KKFhFTikg
RFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9jc2lAN2U4MDAw
MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkg
b24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdl
ODAyMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChu
YT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4N
CihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAw
MCwgZGE9N2U4MDIwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24g
Zm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdp
dGggb2Zmc2V0OiA4MDIwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNs
YXRpb246PDM+IDAwMDAwMDAwPDM+IGZlODAyMDAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlODAyMDAw
IC0gMDBmZTgwMjAwNCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL2Nz
aUA3ZTgwMTAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvY3Np
QDdlODAxMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
c29jL2NzaUA3ZTgwMTAwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9y
YXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGlu
dHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2Ny4uLl0sb2ludHNpemU9Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9
MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc29jL2NzaUA3ZTgwMTAw
MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBDaGVjayBpZiAv
c29jL2NzaUA3ZTgwMTAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQg
aXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2NzaUA3ZTgwMTAw
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9jc2lA
N2U4MDEwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwNjcuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBk
ZXY9L3NvYy9jc2lAN2U4MDEwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAn
aW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVu
PTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAw
MCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjcuLi5dLG9pbnRzaXpl
PTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJz
aXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTM1
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2Nz
aUA3ZTgwMTAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0x
LCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gN2U4MDEwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJh
bmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwg
cz0xODAwMDAwLCBkYT03ZTgwMTAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFu
c2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IDgwMTAwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmU4MDEwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAw
ZmU4MDEwMDAgLSAwMGZlODAxODAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoq
IHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9jc2lAN2U4MDEwMDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3Nv
Yw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlODAyMDA0
PDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBu
cz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4p
IERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9
N2U4MDIwMDQNCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwz
PiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zm
c2V0OiA4MDIwMDQNCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246
PDM+IDAwMDAwMDAwPDM+IGZlODAyMDA0PDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlODAyMDA0IC0gMDBm
ZTgwMjAwOCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL2NzaUA3ZTgw
MTAwMC9wb3J0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9jc2lA
N2U4MDEwMDAvcG9ydA0KKFhFTikgL3NvYy9jc2lAN2U4MDEwMDAvcG9ydCBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29j
L2NzaUA3ZTgwMTAwMC9wb3J0IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFk
ZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvY3NpQDdlODAx
MDAwL3BvcnQNCihYRU4pIGhhbmRsZSAvc29jL2NzaUA3ZTgwMTAwMC9wb3J0
L2VuZHBvaW50DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9jc2lA
N2U4MDEwMDAvcG9ydC9lbmRwb2ludA0KKFhFTikgL3NvYy9jc2lAN2U4MDEw
MDAvcG9ydC9lbmRwb2ludCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvc29jL2NzaUA3ZTgwMTAwMC9wb3J0L2VuZHBvaW50
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zb2MvY3NpQDdlODAxMDAwL3BvcnQvZW5kcG9pbnQN
CihYRU4pIGhhbmRsZSAvc29jL2F4aXBlcmYNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vc29jL2F4aXBlcmYNCihYRU4pIC9zb2MvYXhpcGVyZiBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAyDQooWEVOKSBDaGVjayBpZiAvc29jL2F4
aXBlcmYgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9heGlwZXJmDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2F4aXBlcmYgKioNCihYRU4p
IERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhF
TikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlMDA5ODAwPDM+DQoo
WEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBv
biAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBk
ZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UwMDk4
MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAw
MDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiA5
ODAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAw
MDAwMDwzPiBmZTAwOTgwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBu
b2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZTAwOTgwMCAtIDAwZmUwMDk5MDAg
UDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmlj
ZSAvc29jL2F4aXBlcmYgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAo
bmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFk
ZHJlc3M6PDM+IDdlZTA4MDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBp
cyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2lu
ZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAw
MDAsIHM9MTgwMDAwMCwgZGE9N2VlMDgwMDANCihYRU4pIERUOiBwYXJlbnQg
dHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0K
KFhFTikgRFQ6IHdpdGggb2Zmc2V0OiBlMDgwMDANCihYRU4pIERUOiBvbmUg
bGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlZTA4MDAwPDM+
DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlP
OiAwMGZlZTA4MDAwIC0gMDBmZWUwODEwMCBQMk1UeXBlPTUNCihYRU4pIGhh
bmRsZSAvc29jL2dwaW9tZW0NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c29jL2dwaW9tZW0NCihYRU4pIC9zb2MvZ3Bpb21lbSBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvc29jL2dwaW9tZW0gaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L3NvYy9ncGlvbWVtDQooWEVOKSBEVDogKiogdHJhbnNsYXRp
b24gZm9yIGRldmljZSAvc29jL2dwaW9tZW0gKioNCihYRU4pIERUOiBidXMg
aXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRy
YW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlMjAwMDAwPDM+DQooWEVOKSBEVDog
cGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVO
KSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1h
cCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2UyMDAwMDANCihYRU4p
IERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBm
ZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAyMDAwMDANCihY
RU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+
IGZlMjAwMDAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihY
RU4pICAgLSBNTUlPOiAwMGZlMjAwMDAwIC0gMDBmZTIwMTAwMCBQMk1UeXBl
PTUNCihYRU4pIGhhbmRsZSAvc29jL2ZiDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NvYy9mYg0KKFhFTikgL3NvYy9mYiBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL2ZiIGlzIGJlaGluZCB0
aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS9zb2MvZmINCihYRU4pIGhhbmRsZSAvc29jL3Zjc20NCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc29jL3Zjc20NCihYRU4pIC9zb2MvdmNzbSBwYXNz
dGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvc29jL3Zj
c20gaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NvYy92Y3NtDQooWEVOKSBoYW5kbGUgL3NvYy9z
b3VuZA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mvc291bmQNCihY
RU4pIC9zb2Mvc291bmQgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhF
TikgQ2hlY2sgaWYgL3NvYy9zb3VuZCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL3NvdW5k
DQooWEVOKSBoYW5kbGUgL3NvYy9waXhlbHZhbHZlQDdlMjBhMDAwDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NvYy9waXhlbHZhbHZlQDdlMjBhMDAw
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3BpeGVs
dmFsdmVAN2UyMGEwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjUuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTEN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL3NvYy9waXhlbHZhbHZlQDdl
MjBhMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNr
IGlmIC9zb2MvcGl4ZWx2YWx2ZUA3ZTIwYTAwMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29j
L3BpeGVsdmFsdmVAN2UyMGEwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9zb2MvcGl4ZWx2YWx2ZUA3ZTIwYTAwMCwgaW5kZXg9MA0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2
NS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQoo
WEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc29jL3BpeGVsdmFsdmVA
N2UyMGEwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwNjUuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTMzDQooWEVOKSBEVDog
KiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL3BpeGVsdmFsdmVAN2Uy
MGEwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9
MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+
IDdlMjBhMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0
IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMu
Li4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgw
MDAwMCwgZGE9N2UyMGEwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6
IHdpdGggb2Zmc2V0OiAyMGEwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJh
bnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMjBhMDAwPDM+DQooWEVOKSBE
VDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMjBh
MDAwIC0gMDBmZTIwYTEwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29j
L3BpeGVsdmFsdmVAN2UyMTYwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRl
dj0vc29jL3BpeGVsdmFsdmVAN2UyMTYwMDANCihYRU4pICB1c2luZyAnaW50
ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMN
CihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zb2MvcGl4ZWx2YWx2ZUA3ZTIxNjAwMCwgaW5k
ZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDA2ZS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSAvc29jL3BpeGVsdmFsdmVAN2UyMTYwMDAgcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NvYy9waXhlbHZhbHZl
QDdlMjE2MDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvcGl4ZWx2YWx2ZUA3ZTIxNjAw
MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9waXhl
bHZhbHZlQDdlMjE2MDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZlLi4uXSxvaW50c2l6ZT0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0x
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9zb2MvcGl4ZWx2YWx2ZUA3ZTIxNjAwMCwgaW5kZXg9MA0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA2
ZS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQoo
WEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVO
KSAgIC0gSVJROiAxNDINCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3Ig
ZGV2aWNlIC9zb2MvcGl4ZWx2YWx2ZUA3ZTIxNjAwMCAqKg0KKFhFTikgRFQ6
IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBE
VDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2UyMTYwMDA8Mz4NCihYRU4p
IERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8N
CihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1
bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTIxNjAwMA0K
KFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAw
PDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDIxNjAw
MA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gZmUyMTYwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwZmUyMTYwMDAgLSAwMGZlMjE2MTAwIFAy
TVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zb2MvcGl4ZWx2YWx2ZUA3ZWMxMjAw
MA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvcGl4ZWx2YWx2ZUA3
ZWMxMjAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3Nv
Yy9waXhlbHZhbHZlQDdlYzEyMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZhLi4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIC9zb2MvcGl4ZWx2
YWx2ZUA3ZWMxMjAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVO
KSBDaGVjayBpZiAvc29jL3BpeGVsdmFsdmVAN2VjMTIwMDAgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NvYy9waXhlbHZhbHZlQDdlYzEyMDAwDQooWEVOKSAgdXNpbmcgJ2lu
dGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0z
DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2Vf
Z2V0X3Jhd19pcnE6IGRldj0vc29jL3BpeGVsdmFsdmVAN2VjMTIwMDAsIGlu
ZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihY
RU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50
bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4
MDAwMDAwNmEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3NvYy9waXhl
bHZhbHZlQDdlYzEyMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50
c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDZhLi4uXSxvaW50c2l6ZT0zDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0x
DQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pICAgLSBJUlE6IDEzOA0KKFhF
TikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9waXhlbHZh
bHZlQDdlYzEyMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5h
PTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRy
ZXNzOjwzPiA3ZWMxMjAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMg
ZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcg
cmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAw
LCBzPTE4MDAwMDAsIGRhPTdlYzEyMDAwDQooWEVOKSBEVDogcGFyZW50IHRy
YW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihY
RU4pIERUOiB3aXRoIG9mZnNldDogYzEyMDAwDQooWEVOKSBEVDogb25lIGxl
dmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZWMxMjAwMDwzPg0K
KFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzog
MDBmZWMxMjAwMCAtIDAwZmVjMTIxMDAgUDJNVHlwZT01DQooWEVOKSBoYW5k
bGUgL3NvYy9jbG9ja0A3ZWYwMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zb2MvY2xvY2tAN2VmMDAwMDANCihYRU4pIC9zb2MvY2xvY2tAN2Vm
MDAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sg
aWYgL3NvYy9jbG9ja0A3ZWYwMDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFu
ZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2Nsb2Nr
QDdlZjAwMDAwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmlj
ZSAvc29jL2Nsb2NrQDdlZjAwMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xh
dGluZyBhZGRyZXNzOjwzPiA3ZWYwMDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVu
dCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6
IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNw
PTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlZjAwMDAwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAw
MDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogZjAwMDAwDQooWEVOKSBE
VDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZWYw
MDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDBmZWYwMDAwMCAtIDAwZmVmMDAwMTAgUDJNVHlwZT01DQoo
WEVOKSBoYW5kbGUgL3NvYy9oZG1pQDdlZjAwNzAwDQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9oZG1pQDdlZjAwNzAwDQooWEVOKSAvc29jL2hk
bWlAN2VmMDA3MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gOQ0KKFhFTikg
Q2hlY2sgaWYgL3NvYy9oZG1pQDdlZjAwNzAwIGlzIGJlaGluZCB0aGUgSU9N
TVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2Mv
aGRtaUA3ZWYwMDcwMA0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBk
ZXZpY2UgL3NvYy9oZG1pQDdlZjAwNzAwICoqDQooWEVOKSBEVDogYnVzIGlz
IGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiA3ZWYwMDcwMDwzPg0KKFhFTikgRFQ6IHBh
cmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikg
RFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAs
IGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlZjAwNzAwDQooWEVOKSBE
VDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUw
MDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogZjAwNzAwDQooWEVO
KSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBm
ZWYwMDcwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVO
KSAgIC0gTU1JTzogMDBmZWYwMDcwMCAtIDAwZmVmMDBhMDAgUDJNVHlwZT01
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2hk
bWlAN2VmMDA3MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9
MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJl
c3M6PDM+IDdlZjAwMzAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBk
ZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyBy
YW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAs
IHM9MTgwMDAwMCwgZGE9N2VmMDAzMDANCihYRU4pIERUOiBwYXJlbnQgdHJh
bnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhF
TikgRFQ6IHdpdGggb2Zmc2V0OiBmMDAzMDANCihYRU4pIERUOiBvbmUgbGV2
ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlZjAwMzAwPDM+DQoo
WEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAw
MGZlZjAwMzAwIC0gMDBmZWYwMDUwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAq
KiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvaGRtaUA3ZWYwMDcwMCAq
Kg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAv
c29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2VmMDBm
MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIs
IG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhF
TikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBk
YT03ZWYwMGYwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6
PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBv
ZmZzZXQ6IGYwMGYwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlv
bjo8Mz4gMDAwMDAwMDA8Mz4gZmVmMDBmMDA8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmVmMDBmMDAgLSAw
MGZlZjAwZjgwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9u
IGZvciBkZXZpY2UgL3NvYy9oZG1pQDdlZjAwNzAwICoqDQooWEVOKSBEVDog
YnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERU
OiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZWYwMGY4MDwzPg0KKFhFTikg
RFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0K
KFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVs
dCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlZjAwZjgwDQoo
WEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8
Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogZjAwZjgw
DQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAw
MDwzPiBmZWYwMGY4MDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2Rl
DQooWEVOKSAgIC0gTU1JTzogMDBmZWYwMGY4MCAtIDAwZmVmMDEwMDAgUDJN
VHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAv
c29jL2hkbWlAN2VmMDA3MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5n
IGFkZHJlc3M6PDM+IDdlZjAxYjAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1
cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fs
a2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2Uw
MDAwMDAsIHM9MTgwMDAwMCwgZGE9N2VmMDFiMDANCihYRU4pIERUOiBwYXJl
bnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwz
Pg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiBmMDFiMDANCihYRU4pIERUOiBv
bmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlZjAxYjAw
PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBN
TUlPOiAwMGZlZjAxYjAwIC0gMDBmZWYwMWQwMCBQMk1UeXBlPTUNCihYRU4p
IERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvaGRtaUA3ZWYw
MDcwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0x
KSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4g
N2VmMDFmMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4u
Lg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAw
MDAwLCBkYT03ZWYwMWYwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlv
biBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDog
d2l0aCBvZmZzZXQ6IGYwMWYwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFu
c2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmVmMDFmMDA8Mz4NCihYRU4pIERU
OiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmVmMDFm
MDAgLSAwMGZlZjAyMzAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5z
bGF0aW9uIGZvciBkZXZpY2UgL3NvYy9oZG1pQDdlZjAwNzAwICoqDQooWEVO
KSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihY
RU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZWYwMDIwMDwzPg0K
KFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkg
b24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDog
ZGVmYXVsdCBtYXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlZjAw
MjAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAw
MDAwMDA8Mz4gZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDog
ZjAwMjAwDQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAw
MDAwMDAwMDwzPiBmZWYwMDIwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9v
dCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZWYwMDIwMCAtIDAwZmVmMDAy
ODAgUDJNVHlwZT01DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRl
dmljZSAvc29jL2hkbWlAN2VmMDA3MDAgKioNCihYRU4pIERUOiBidXMgaXMg
ZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5z
bGF0aW5nIGFkZHJlc3M6PDM+IDdlZjA0MzAwPDM+DQooWEVOKSBEVDogcGFy
ZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBE
VDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwg
Y3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2VmMDQzMDANCihYRU4pIERU
OiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAw
MDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiBmMDQzMDANCihYRU4p
IERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZl
ZjA0MzAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4p
ICAgLSBNTUlPOiAwMGZlZjA0MzAwIC0gMDBmZWYwNDQwMCBQMk1UeXBlPTUN
CihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvaGRt
aUA3ZWYwMDcwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0x
LCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gN2VmMjAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJh
bmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwg
cz0xODAwMDAwLCBkYT03ZWYyMDAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFu
c2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IGYyMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmVmMjAwMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAw
ZmVmMjAwMDAgLSAwMGZlZjIwMTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxl
IC9zb2MvaTJjQDdlZjA0NTAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NvYy9pMmNAN2VmMDQ1MDANCihYRU4pIC9zb2MvaTJjQDdlZjA0NTAwIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDINCihYRU4pIENoZWNrIGlmIC9zb2Mv
aTJjQDdlZjA0NTAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9zb2MvaTJjQDdlZjA0NTAwDQoo
WEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2kyY0A3
ZWYwNDUwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBu
cz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8
Mz4gN2VmMDQ1MDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdl
cy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0x
ODAwMDAwLCBkYT03ZWYwNDUwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xh
dGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBE
VDogd2l0aCBvZmZzZXQ6IGYwNDUwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0
cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmVmMDQ1MDA8Mz4NCihYRU4p
IERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmVm
MDQ1MDAgLSAwMGZlZjA0NjAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRy
YW5zbGF0aW9uIGZvciBkZXZpY2UgL3NvYy9pMmNAN2VmMDQ1MDAgKioNCihY
RU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3NvYw0K
KFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlZjAwYjAwPDM+
DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0x
KSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERU
OiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9N2Vm
MDBiMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAw
MDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0
OiBmMDBiMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+
IDAwMDAwMDAwPDM+IGZlZjAwYjAwPDM+DQooWEVOKSBEVDogcmVhY2hlZCBy
b290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlZjAwYjAwIC0gMDBmZWYw
MGUwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc29jL2hkbWlAN2VmMDU3
MDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc29jL2hkbWlAN2VmMDU3
MDANCihYRU4pIC9zb2MvaGRtaUA3ZWYwNTcwMCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSA5DQooWEVOKSBDaGVjayBpZiAvc29jL2hkbWlAN2VmMDU3MDAg
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L3NvYy9oZG1pQDdlZjA1NzAwDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2hkbWlAN2VmMDU3MDAgKioN
CihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkgb24gL3Nv
Yw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdlZjA1NzAw
PDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBu
cz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4p
IERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAwMCwgZGE9
N2VmMDU3MDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwz
PiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zm
c2V0OiBmMDU3MDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRpb246
PDM+IDAwMDAwMDAwPDM+IGZlZjA1NzAwPDM+DQooWEVOKSBEVDogcmVhY2hl
ZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlZjA1NzAwIC0gMDBm
ZWYwNWEwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBm
b3IgZGV2aWNlIC9zb2MvaGRtaUA3ZWYwNTcwMCAqKg0KKFhFTikgRFQ6IGJ1
cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDog
dHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2VmMDUzMDA8Mz4NCihYRU4pIERU
OiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihY
RU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQg
bWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZWYwNTMwMA0KKFhF
TikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+
IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IGYwNTMwMA0K
KFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8
Mz4gZmVmMDUzMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0K
KFhFTikgICAtIE1NSU86IDAwZmVmMDUzMDAgLSAwMGZlZjA1NTAwIFAyTVR5
cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3Nv
Yy9oZG1pQDdlZjA1NzAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQg
KG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xhdGluZyBh
ZGRyZXNzOjwzPiA3ZWYwNWYwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtp
bmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdlMDAw
MDAwLCBzPTE4MDAwMDAsIGRhPTdlZjA1ZjAwDQooWEVOKSBEVDogcGFyZW50
IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAwMDA8Mz4N
CihYRU4pIERUOiB3aXRoIG9mZnNldDogZjA1ZjAwDQooWEVOKSBEVDogb25l
IGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZWYwNWYwMDwz
Pg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1J
TzogMDBmZWYwNWYwMCAtIDAwZmVmMDVmODAgUDJNVHlwZT01DQooWEVOKSBE
VDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2hkbWlAN2VmMDU3
MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwgbnM9MSkg
b24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDdl
ZjA1ZjgwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChu
YT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4N
CihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9MTgwMDAw
MCwgZGE9N2VmMDVmODANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24g
Zm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikgRFQ6IHdp
dGggb2Zmc2V0OiBmMDVmODANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNs
YXRpb246PDM+IDAwMDAwMDAwPDM+IGZlZjA1ZjgwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlZjA1Zjgw
IC0gMDBmZWYwNjAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xh
dGlvbiBmb3IgZGV2aWNlIC9zb2MvaGRtaUA3ZWYwNTcwMCAqKg0KKFhFTikg
RFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29jDQooWEVO
KSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2VmMDZiMDA8Mz4NCihY
RU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9u
IC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRl
ZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZWYwNmIw
MA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAw
MDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IGYw
NmIwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAw
MDAwMDA8Mz4gZmVmMDZiMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qg
bm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmVmMDZiMDAgLSAwMGZlZjA2ZDAw
IFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZp
Y2UgL3NvYy9oZG1pQDdlZjA1NzAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRl
ZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFuc2xh
dGluZyBhZGRyZXNzOjwzPiA3ZWYwNmYwMDwzPg0KKFhFTikgRFQ6IHBhcmVu
dCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6
IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNw
PTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlZjA2ZjAwDQooWEVOKSBEVDog
cGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMDAw
MDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogZjA2ZjAwDQooWEVOKSBE
VDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZWYw
NmYwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAg
IC0gTU1JTzogMDBmZWYwNmYwMCAtIDAwZmVmMDczMDAgUDJNVHlwZT01DQoo
WEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2hkbWlA
N2VmMDU3MDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MSwg
bnM9MSkgb24gL3NvYw0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6
PDM+IDdlZjAwMjgwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZh
dWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5n
ZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2UwMDAwMDAsIHM9
MTgwMDAwMCwgZGE9N2VmMDAyODANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNs
YXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmZTAwMDAwMDwzPg0KKFhFTikg
RFQ6IHdpdGggb2Zmc2V0OiBmMDAyODANCihYRU4pIERUOiBvbmUgbGV2ZWwg
dHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlZjAwMjgwPDM+DQooWEVO
KSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZl
ZjAwMjgwIC0gMDBmZWYwMDMwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0
cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zb2MvaGRtaUA3ZWYwNTcwMCAqKg0K
KFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0xKSBvbiAvc29j
DQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2VmMDkzMDA8
Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5z
PTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikg
RFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03
ZWYwOTMwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+
IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZz
ZXQ6IGYwOTMwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8
Mz4gMDAwMDAwMDA8Mz4gZmVmMDkzMDA8Mz4NCihYRU4pIERUOiByZWFjaGVk
IHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmVmMDkzMDAgLSAwMGZl
ZjA5NDAwIFAyTVR5cGU9NQ0KKFhFTikgRFQ6ICoqIHRyYW5zbGF0aW9uIGZv
ciBkZXZpY2UgL3NvYy9oZG1pQDdlZjA1NzAwICoqDQooWEVOKSBEVDogYnVz
IGlzIGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0
cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZWYyMDAwMDwzPg0KKFhFTikgRFQ6
IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhF
TikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBt
YXAsIGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlZjIwMDAwDQooWEVO
KSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4g
ZmUwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogZjIwMDAwDQoo
WEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwz
PiBmZWYyMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQoo
WEVOKSAgIC0gTU1JTzogMDBmZWYyMDAwMCAtIDAwZmVmMjAxMDAgUDJNVHlw
ZT01DQooWEVOKSBoYW5kbGUgL3NvYy9pMmNAN2VmMDk1MDANCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vc29jL2kyY0A3ZWYwOTUwMA0KKFhFTikgL3Nv
Yy9pMmNAN2VmMDk1MDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMg0KKFhF
TikgQ2hlY2sgaWYgL3NvYy9pMmNAN2VmMDk1MDAgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nv
Yy9pMmNAN2VmMDk1MDANCihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3Ig
ZGV2aWNlIC9zb2MvaTJjQDdlZjA5NTAwICoqDQooWEVOKSBEVDogYnVzIGlz
IGRlZmF1bHQgKG5hPTEsIG5zPTEpIG9uIC9zb2MNCihYRU4pIERUOiB0cmFu
c2xhdGluZyBhZGRyZXNzOjwzPiA3ZWYwOTUwMDwzPg0KKFhFTikgRFQ6IHBh
cmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikg
RFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAs
IGNwPTdlMDAwMDAwLCBzPTE4MDAwMDAsIGRhPTdlZjA5NTAwDQooWEVOKSBE
VDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmUw
MDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogZjA5NTAwDQooWEVO
KSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBm
ZWYwOTUwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVO
KSAgIC0gTU1JTzogMDBmZWYwOTUwMCAtIDAwZmVmMDk2MDAgUDJNVHlwZT01
DQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc29jL2ky
Y0A3ZWYwOTUwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0x
LCBucz0xKSBvbiAvc29jDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gN2VmMDViMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRl
ZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJh
bmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03ZTAwMDAwMCwg
cz0xODAwMDAwLCBkYT03ZWYwNWIwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFu
c2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAwMDAwPDM+DQooWEVO
KSBEVDogd2l0aCBvZmZzZXQ6IGYwNWIwMA0KKFhFTikgRFQ6IG9uZSBsZXZl
bCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmVmMDViMDA8Mz4NCihY
RU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAw
ZmVmMDViMDAgLSAwMGZlZjA1ZTAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxl
IC9jbG9ja3MNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2xvY2tzDQoo
WEVOKSAvY2xvY2tzIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9jbG9ja3MgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRk
IGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nsb2Nrcw0KKFhFTikg
aGFuZGxlIC9jbG9ja3MvY2xrLW9zYw0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9jbG9ja3MvY2xrLW9zYw0KKFhFTikgL2Nsb2Nrcy9jbGstb3NjIHBh
c3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jbG9j
a3MvY2xrLW9zYyBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vY2xvY2tzL2Nsay1vc2MNCihYRU4p
IGhhbmRsZSAvY2xvY2tzL2Nsay11c2INCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vY2xvY2tzL2Nsay11c2INCihYRU4pIC9jbG9ja3MvY2xrLXVzYiBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvY2xv
Y2tzL2Nsay11c2IgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Nsb2Nrcy9jbGstdXNiDQooWEVO
KSBoYW5kbGUgL3BoeQ0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9waHkN
CihYRU4pIC9waHkgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikg
Q2hlY2sgaWYgL3BoeSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQN
CihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vcGh5DQooWEVOKSBoYW5kbGUg
L2FybS1wbXUNCihYRU4pICAgU2tpcCBpdCAobWF0Y2hlZCkNCihYRU4pIGhh
bmRsZSAvdGltZXINCihYRU4pIENyZWF0ZSB0aW1lciBub2RlDQooWEVOKSAg
IFNlY3VyZSBpbnRlcnJ1cHQgMjkNCihYRU4pICAgTm9uIHNlY3VyZSBpbnRl
cnJ1cHQgMzANCihYRU4pICAgVmlydCBpbnRlcnJ1cHQgMjcNCihYRU4pIGhh
bmRsZSAvY3B1cw0KKFhFTikgICBTa2lwIGl0IChtYXRjaGVkKQ0KKFhFTikg
aGFuZGxlIC9zY2INCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2NiDQoo
WEVOKSAvc2NiIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENo
ZWNrIGlmIC9zY2IgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NjYg0KKFhFTikgaGFuZGxlIC9z
Y2IvcGNpZUA3ZDUwMDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9z
Y2IvcGNpZUA3ZDUwMDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3NjYi9wY2llQDdkNTAwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNp
bmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGlu
dGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDk0Li4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9zY2IvcGNpZUA3ZDUwMDAwMCwgaW5kZXg9
MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikg
IGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1j
b250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAw
MDA5NC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlw
YXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0z
DQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQoo
WEVOKSAvc2NiL3BjaWVAN2Q1MDAwMDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRy
ID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NjYi9wY2llQDdkNTAwMDAwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zY2IvcGNpZUA3ZDUwMDAwMA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3NjYi9wY2llQDdkNTAwMDAwLCBpbmRleD0wDQoo
WEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50
c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDk0
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihY
RU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2IvcGNpZUA3ZDUwMDAw
MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0
eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9
MyBpbnRsZW49Ng0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2lu
dGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAw
MDAgMHgwMDAwMDA5NC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21h
cF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAw
MCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdv
dCBpdCAhDQooWEVOKSAgIC0gSVJROiAxODANCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zY2IvcGNpZUA3ZDUwMDAwMCwgaW5kZXg9MQ0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA5
NC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQoo
WEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2NiL3BjaWVAN2Q1MDAw
MDAsIGluZGV4PTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVy
dHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXpl
PTMgaW50bGVuPTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwOTQuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBn
b3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTgwDQooWEVOKSBEVDogKiogdHJh
bnNsYXRpb24gZm9yIGRldmljZSAvc2NiL3BjaWVAN2Q1MDAwMDAgKioNCihY
RU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24gL3NjYg0K
KFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+
IDdkNTAwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0
IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyByYW5nZXMu
Li4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2MwMDAwMDAsIHM9Mzgw
MDAwMCwgZGE9N2Q1MDAwMDANCihYRU4pIERUOiBwYXJlbnQgdHJhbnNsYXRp
b24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmYzAwMDAwMDwzPg0KKFhFTikgRFQ6
IHdpdGggb2Zmc2V0OiAxNTAwMDAwDQooWEVOKSBEVDogb25lIGxldmVsIHRy
YW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZDUwMDAwMDwzPg0KKFhFTikg
RFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzogMDBmZDUw
MDAwMCAtIDAwZmQ1MDkzMTAgUDJNVHlwZT01DQooWEVOKSBNYXBwaW5nIGNo
aWxkcmVuIG9mIC9zY2IvcGNpZUA3ZDUwMDAwMCB0byBndWVzdA0KKFhFTikg
ZHRfZm9yX2VhY2hfaXJxX21hcDogcGFyPS9zY2IvcGNpZUA3ZDUwMDAwMCBj
Yj0wMDAwMDAwMDAwMmM1MGQwIGRhdGE9MDAwMDgwMDBmN2Y5NzAwMA0KKFhF
TikgZHRfZm9yX2VhY2hfaXJxX21hcDogaXBhcj0vc2NiL3BjaWVAN2Q1MDAw
MDAsIHNpemU9MQ0KKFhFTikgIC0+IGFkZHJzaXplPTMNCihYRU4pICAtPiBp
cGFyIGludGVycnVwdC1jb250cm9sbGVyDQooWEVOKSAgLT4gcGludHNpemU9
MywgcGFkZHJzaXplPTANCihYRU4pICAgLSBJUlE6IDE3NQ0KKFhFTikgIC0+
IGltYXBsZW49MA0KKFhFTikgZHRfZm9yX2VhY2hfcmFuZ2U6IGRldj1wY2ll
LCBidXM9cGNpLCBwYXJlbnQ9c2NiLCBybGVuPTcsIHJvbmU9Nw0KKFhFTikg
RFQ6ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NjYi9wY2llQDdkNTAw
MDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIp
IG9uIC9zY2INCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAw
MDAwMDAwNjwzPiAwMDAwMDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtp
bmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdjMDAw
MDAwLCBzPTM4MDAwMDAsIGRhPTYwMDAwMDAwMA0KKFhFTikgRFQ6IGRlZmF1
bHQgbWFwLCBjcD00MDAwMDAwMCwgcz04MDAwMDAsIGRhPTYwMDAwMDAwMA0K
KFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD02MDAwMDAwMDAsIHM9NDAwMDAw
MDAsIGRhPTYwMDAwMDAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlv
biBmb3I6PDM+IDAwMDAwMDA2PDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDog
d2l0aCBvZmZzZXQ6IDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNsYXRp
b246PDM+IDAwMDAwMDA2PDM+IDAwMDAwMDAwPDM+DQooWEVOKSBEVDogcmVh
Y2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwNjAwMDAwMDAwIC0g
MDYwNDAwMDAwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc2NiL2V0aGVy
bmV0QDdkNTgwMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NjYi9l
dGhlcm5ldEA3ZDU4MDAwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBw
cm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Ng0KKFhFTikgIGlu
dHNpemU9MyBpbnRsZW49Ng0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdfaXJx
OiBkZXY9L3NjYi9ldGhlcm5ldEA3ZDU4MDAwMCwgaW5kZXg9MA0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49Ng0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Ng0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA5ZC4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBkdF9k
ZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2NiL2V0aGVybmV0QDdkNTgwMDAw
LCBpbmRleD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj02DQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDllLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIC9zY2IvZXRoZXJuZXRAN2Q1ODAwMDAgcGFzc3Rocm91
Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NjYi9ldGhlcm5l
dEA3ZDU4MDAwMCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihY
RU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2NiL2V0aGVybmV0QDdkNTgwMDAw
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2NiL2V0aGVy
bmV0QDdkNTgwMDAwLCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj02DQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
cGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3Bl
Yz1bMHgwMDAwMDAwMCAweDAwMDAwMDlkLi4uXSxvaW50c2l6ZT0zDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQoo
WEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2ly
cTogZGV2PS9zY2IvZXRoZXJuZXRAN2Q1ODAwMDAsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTYNCihYRU4p
IGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwOWQuLi5d
LG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2Mv
aW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikg
IC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAt
IElSUTogMTg5DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0v
c2NiL2V0aGVybmV0QDdkNTgwMDAwLCBpbmRleD0xDQooWEVOKSAgdXNpbmcg
J2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxl
bj02DQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj02DQooWEVOKSBkdF9pcnFf
bWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDllLi4uXSxvaW50c2l6
ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBhZGRy
c2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2RldmljZV9n
ZXRfcmF3X2lycTogZGV2PS9zY2IvZXRoZXJuZXRAN2Q1ODAwMDAsIGluZGV4
PTENCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4p
ICBpbnRzcGVjPTAgaW50bGVuPTYNCihYRU4pICBpbnRzaXplPTMgaW50bGVu
PTYNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQt
Y29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwOWUuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9
Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgICAtIElSUTogMTkwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24g
Zm9yIGRldmljZSAvc2NiL2V0aGVybmV0QDdkNTgwMDAwICoqDQooWEVOKSBE
VDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9zY2INCihYRU4p
IERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3ZDU4
MDAwMDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9
MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQoo
WEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdjMDAwMDAwLCBzPTM4MDAwMDAs
IGRhPTdkNTgwMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZv
cjo8Mz4gMDAwMDAwMDA8Mz4gZmMwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRo
IG9mZnNldDogMTU4MDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xh
dGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmQ1ODAwMDA8Mz4NCihYRU4pIERUOiBy
ZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmQ1ODAwMDAg
LSAwMGZkNTkwMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zY2IvZXRo
ZXJuZXRAN2Q1ODAwMDAvbWRpb0BlMTQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vc2NiL2V0aGVybmV0QDdkNTgwMDAwL21kaW9AZTE0DQooWEVOKSAv
c2NiL2V0aGVybmV0QDdkNTgwMDAwL21kaW9AZTE0IHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zY2IvZXRoZXJuZXRAN2Q1
ODAwMDAvbWRpb0BlMTQgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NjYi9ldGhlcm5ldEA3ZDU4
MDAwMC9tZGlvQGUxNA0KKFhFTikgaGFuZGxlIC9zY2IvZXRoZXJuZXRAN2Q1
ODAwMDAvbWRpb0BlMTQvZXRoZXJuZXQtcGh5QDENCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc2NiL2V0aGVybmV0QDdkNTgwMDAwL21kaW9AZTE0L2V0
aGVybmV0LXBoeUAxDQooWEVOKSAvc2NiL2V0aGVybmV0QDdkNTgwMDAwL21k
aW9AZTE0L2V0aGVybmV0LXBoeUAxIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9
IDANCihYRU4pIENoZWNrIGlmIC9zY2IvZXRoZXJuZXRAN2Q1ODAwMDAvbWRp
b0BlMTQvZXRoZXJuZXQtcGh5QDEgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQg
YWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NjYi9ldGhlcm5l
dEA3ZDU4MDAwMC9tZGlvQGUxNC9ldGhlcm5ldC1waHlAMQ0KKFhFTikgaGFu
ZGxlIC9zY2IvZG1hQDdlMDA3YjAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L3NjYi9kbWFAN2UwMDdiMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTEyDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikgZHRfZGV2aWNlX2dldF9y
YXdfaXJxOiBkZXY9L3NjYi9kbWFAN2UwMDdiMDAsIGluZGV4PTANCihYRU4p
ICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVj
PTAgaW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1OS4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVO
KSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2NiL2RtYUA3ZTAwN2IwMCwg
aW5kZXg9MQ0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0K
KFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTEyDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDVhLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2Iv
ZG1hQDdlMDA3YjAwLCBpbmRleD0yDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhF
TikgIGludHNpemU9MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNWIuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTEN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgZHRfZGV2aWNlX2dldF9yYXdf
aXJxOiBkZXY9L3NjYi9kbWFAN2UwMDdiMDAsIGluZGV4PTMNCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTEyDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0xMg0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVy
QDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA1Yy4uLl0s
b2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAg
LT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAvc2Ni
L2RtYUA3ZTAwN2IwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAxDQooWEVO
KSBDaGVjayBpZiAvc2NiL2RtYUA3ZTAwN2IwMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2Ni
L2RtYUA3ZTAwN2IwMA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9w
ZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6
IGRldj0vc2NiL2RtYUA3ZTAwN2IwMCwgaW5kZXg9MA0KKFhFTikgIHVzaW5n
ICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRs
ZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBkdF9p
cnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDU5Li4uXSxvaW50
c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2ludGVy
cnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAtPiBh
ZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rldmlj
ZV9nZXRfcmF3X2lycTogZGV2PS9zY2IvZG1hQDdlMDA3YjAwLCBpbmRleD0w
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRsZW49
MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQt
Y29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAw
MDAwNTkuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBp
cGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9
Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0K
KFhFTikgICAtIElSUTogMTIxDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19p
cnE6IGRldj0vc2NiL2RtYUA3ZTAwN2IwMCwgaW5kZXg9MQ0KKFhFTikgIHVz
aW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBp
bnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQooWEVOKSBk
dF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDVhLi4uXSxv
aW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29jL2lu
dGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4pICAt
PiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2IvZG1hQDdlMDA3YjAwLCBpbmRl
eD0xDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1
cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4
MDAwMDAwNWEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNp
emU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQg
IQ0KKFhFTikgICAtIElSUTogMTIyDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vc2NiL2RtYUA3ZTAwN2IwMCwgaW5kZXg9Mg0KKFhFTikg
IHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9
MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQooWEVO
KSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xs
ZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDViLi4u
XSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0vc29j
L2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihYRU4p
ICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4pIGR0
X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2IvZG1hQDdlMDA3YjAwLCBp
bmRleD0yDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9MyBp
bnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwNWIuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgICAtIElSUTogMTIzDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vc2NiL2RtYUA3ZTAwN2IwMCwgaW5kZXg9Mw0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49MTINCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTEyDQoo
WEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAwMDAwMDVj
Li4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogaXBhcj0v
c29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXplPTMNCihY
RU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICENCihYRU4p
IGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2IvZG1hQDdlMDA3YjAw
LCBpbmRleD0zDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0xMg0KKFhFTikgIGludHNpemU9
MyBpbnRsZW49MTINCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9p
bnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAw
MDAwIDB4MDAwMDAwNWMuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9t
YXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEw
MDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBn
b3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTI0DQooWEVOKSBEVDogKiogdHJh
bnNsYXRpb24gZm9yIGRldmljZSAvc2NiL2RtYUA3ZTAwN2IwMCAqKg0KKFhF
TikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0yKSBvbiAvc2NiDQoo
WEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gMDAwMDAwMDA8Mz4g
N2UwMDdiMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQg
KG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4u
Lg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03YzAwMDAwMCwgcz0zODAw
MDAwLCBkYT03ZTAwN2IwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlv
biBmb3I6PDM+IDAwMDAwMDAwPDM+IGZjMDAwMDAwPDM+DQooWEVOKSBEVDog
d2l0aCBvZmZzZXQ6IDIwMDdiMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJh
bnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlMDA3YjAwPDM+DQooWEVOKSBE
VDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlMDA3
YjAwIC0gMDBmZTAwN2YwMCBQMk1UeXBlPTUNCihYRU4pIGhhbmRsZSAvc2Ni
L21haWxib3hAN2UwMGI4NDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0v
c2NiL21haWxib3hAN2UwMGI4NDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9zY2IvbWFpbGJveEA3ZTAwYjg0MCwgaW5kZXg9MA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDAyMi4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVO
KSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAv
c2NiL21haWxib3hAN2UwMGI4NDAgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MQ0KKFhFTikgQ2hlY2sgaWYgL3NjYi9tYWlsYm94QDdlMDBiODQwIGlzIGJl
aGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJl
cjogZGV2PS9zY2IvbWFpbGJveEA3ZTAwYjg0MA0KKFhFTikgIHVzaW5nICdp
bnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49
Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNl
X2dldF9yYXdfaXJxOiBkZXY9L3NjYi9tYWlsYm94QDdlMDBiODQwLCBpbmRl
eD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVO
KSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxl
bj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJydXB0
LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAweDAw
MDAwMDIyLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jhdzog
aXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBzaXpl
PTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0ICEN
CihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2IvbWFpbGJv
eEA3ZTAwYjg0MCwgaW5kZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRz
JyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikg
IGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBh
cj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9
WzB4MDAwMDAwMDAgMHgwMDAwMDAyMi4uLl0sb2ludHNpemU9Mw0KKFhFTikg
ZHRfaXJxX21hcF9yYXc6IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxl
ckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhF
TikgIC0+IGdvdCBpdCAhDQooWEVOKSAgIC0gSVJROiA2Ng0KKFhFTikgRFQ6
ICoqIHRyYW5zbGF0aW9uIGZvciBkZXZpY2UgL3NjYi9tYWlsYm94QDdlMDBi
ODQwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIp
IG9uIC9zY2INCihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAw
MDAwMDAwMDwzPiA3ZTAwYjg0MDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMg
aXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtp
bmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVsdCBtYXAsIGNwPTdjMDAw
MDAwLCBzPTM4MDAwMDAsIGRhPTdlMDBiODQwDQooWEVOKSBEVDogcGFyZW50
IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8Mz4gZmMwMDAwMDA8Mz4N
CihYRU4pIERUOiB3aXRoIG9mZnNldDogMjAwYjg0MA0KKFhFTikgRFQ6IG9u
ZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmUwMGI4NDA8
Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1N
SU86IDAwZmUwMGI4NDAgLSAwMGZlMDBiODdjIFAyTVR5cGU9NQ0KKFhFTikg
aGFuZGxlIC9zY2IvbWFpbGJveEA3ZTAwYjg0MC9iY20yODM1X2F1ZGlvDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3NjYi9tYWlsYm94QDdlMDBiODQw
L2JjbTI4MzVfYXVkaW8NCihYRU4pIC9zY2IvbWFpbGJveEA3ZTAwYjg0MC9i
Y20yODM1X2F1ZGlvIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4p
IENoZWNrIGlmIC9zY2IvbWFpbGJveEA3ZTAwYjg0MC9iY20yODM1X2F1ZGlv
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zY2IvbWFpbGJveEA3ZTAwYjg0MC9iY20yODM1X2F1
ZGlvDQooWEVOKSBoYW5kbGUgL3NjYi94aGNpQDdlOWMwMDAwDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NjYi94aGNpQDdlOWMwMDAwDQooWEVOKSAg
dXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0w
IGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBk
dF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2NiL3hoY2lAN2U5YzAwMDAs
IGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkN
CihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMg
aW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRl
cnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAw
IDB4MDAwMDAwYjAuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBf
cmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAs
IHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3Qg
aXQgIQ0KKFhFTikgL3NjYi94aGNpQDdlOWMwMDAwIHBhc3N0aHJvdWdoID0g
MSBuYWRkciA9IDENCihYRU4pIENoZWNrIGlmIC9zY2IveGhjaUA3ZTljMDAw
MCBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vc2NiL3hoY2lAN2U5YzAwMDANCihYRU4pICB1c2lu
ZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50
bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2Rl
dmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2IveGhjaUA3ZTljMDAwMCwgaW5k
ZXg9MA0KKFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhF
TikgIGludHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRs
ZW49Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVw
dC1jb250cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgw
MDAwMDBiMC4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6
IGlwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6
ZT0zDQooWEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAh
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vc2NiL3hoY2lA
N2U5YzAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwYjAuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElSUTogMjA4DQooWEVOKSBEVDog
KiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvc2NiL3hoY2lAN2U5YzAwMDAg
KioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9Mikgb24g
L3NjYg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5nIGFkZHJlc3M6PDM+IDAwMDAw
MDAwPDM+IDdlOWMwMDAwPDM+DQooWEVOKSBEVDogcGFyZW50IGJ1cyBpcyBk
ZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQooWEVOKSBEVDogd2Fsa2luZyBy
YW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0IG1hcCwgY3A9N2MwMDAwMDAs
IHM9MzgwMDAwMCwgZGE9N2U5YzAwMDANCihYRU4pIERUOiBwYXJlbnQgdHJh
bnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwzPiBmYzAwMDAwMDwzPg0KKFhF
TikgRFQ6IHdpdGggb2Zmc2V0OiAyOWMwMDAwDQooWEVOKSBEVDogb25lIGxl
dmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAwMDwzPiBmZTljMDAwMDwzPg0K
KFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2RlDQooWEVOKSAgIC0gTU1JTzog
MDBmZTljMDAwMCAtIDAwZmVhYzAwMDAgUDJNVHlwZT01DQooWEVOKSBoYW5k
bGUgL3NjYi9oZXZjLWRlY29kZXJAN2ViMDAwMDANCihYRU4pIGR0X2lycV9u
dW1iZXI6IGRldj0vc2NiL2hldmMtZGVjb2RlckA3ZWIwMDAwMA0KKFhFTikg
L3NjYi9oZXZjLWRlY29kZXJAN2ViMDAwMDAgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NjYi9oZXZjLWRlY29kZXJAN2Vi
MDAwMDAgaXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L3NjYi9oZXZjLWRlY29kZXJAN2ViMDAwMDAN
CihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zY2IvaGV2
Yy1kZWNvZGVyQDdlYjAwMDAwICoqDQooWEVOKSBEVDogYnVzIGlzIGRlZmF1
bHQgKG5hPTIsIG5zPTIpIG9uIC9zY2INCihYRU4pIERUOiB0cmFuc2xhdGlu
ZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3ZWIwMDAwMDwzPg0KKFhFTikg
RFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9MSkgb24gLw0K
KFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBEVDogZGVmYXVs
dCBtYXAsIGNwPTdjMDAwMDAwLCBzPTM4MDAwMDAsIGRhPTdlYjAwMDAwDQoo
WEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4gMDAwMDAwMDA8
Mz4gZmMwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNldDogMmIwMDAw
MA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAw
MDA8Mz4gZmViMDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9k
ZQ0KKFhFTikgICAtIE1NSU86IDAwZmViMDAwMDAgLSAwMGZlYjEwMDAwIFAy
TVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zY2IvcnBpdmlkLWxvY2FsLWludGNA
N2ViMTAwMDANCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2NiL3JwaXZp
ZC1sb2NhbC1pbnRjQDdlYjEwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vc2NiL3JwaXZpZC1sb2NhbC1pbnRjQDdlYjEwMDAwLCBp
bmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQoo
WEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGlu
dGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50ZXJy
dXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAwMCAw
eDAwMDAwMDYyLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFwX3Jh
dzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAwLCBz
aXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290IGl0
ICENCihYRU4pIC9zY2IvcnBpdmlkLWxvY2FsLWludGNAN2ViMTAwMDAgcGFz
c3Rocm91Z2ggPSAxIG5hZGRyID0gMQ0KKFhFTikgQ2hlY2sgaWYgL3NjYi9y
cGl2aWQtbG9jYWwtaW50Y0A3ZWIxMDAwMCBpcyBiZWhpbmQgdGhlIElPTU1V
IGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vc2NiL3Jw
aXZpZC1sb2NhbC1pbnRjQDdlYjEwMDAwDQooWEVOKSAgdXNpbmcgJ2ludGVy
cnVwdHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQoo
WEVOKSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0
X3Jhd19pcnE6IGRldj0vc2NiL3JwaXZpZC1sb2NhbC1pbnRjQDdlYjEwMDAw
LCBpbmRleD0wDQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5
DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0z
IGludGxlbj0zDQooWEVOKSBkdF9pcnFfbWFwX3JhdzogcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsaW50c3BlYz1bMHgwMDAwMDAw
MCAweDAwMDAwMDYyLi4uXSxvaW50c2l6ZT0zDQooWEVOKSBkdF9pcnFfbWFw
X3JhdzogaXBhcj0vc29jL2ludGVycnVwdC1jb250cm9sbGVyQDQwMDQxMDAw
LCBzaXplPTMNCihYRU4pICAtPiBhZGRyc2l6ZT0xDQooWEVOKSAgLT4gZ290
IGl0ICENCihYRU4pIGR0X2RldmljZV9nZXRfcmF3X2lycTogZGV2PS9zY2Iv
cnBpdmlkLWxvY2FsLWludGNAN2ViMTAwMDAsIGluZGV4PTANCihYRU4pICB1
c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAg
aW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0
MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwNjIuLi5dLG9p
bnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50
ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+
IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElS
UTogMTMwDQooWEVOKSBEVDogKiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAv
c2NiL3JwaXZpZC1sb2NhbC1pbnRjQDdlYjEwMDAwICoqDQooWEVOKSBEVDog
YnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTIpIG9uIC9zY2INCihYRU4pIERU
OiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiAwMDAwMDAwMDwzPiA3ZWIxMDAw
MDwzPg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9Miwg
bnM9MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVO
KSBEVDogZGVmYXVsdCBtYXAsIGNwPTdjMDAwMDAwLCBzPTM4MDAwMDAsIGRh
PTdlYjEwMDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8
Mz4gMDAwMDAwMDA8Mz4gZmMwMDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9m
ZnNldDogMmIxMDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlv
bjo8Mz4gMDAwMDAwMDA8Mz4gZmViMTAwMDA8Mz4NCihYRU4pIERUOiByZWFj
aGVkIHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmViMTAwMDAgLSAw
MGZlYjExMDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9zY2IvaDI2NC1k
ZWNvZGVyQDdlYjIwMDAwDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3Nj
Yi9oMjY0LWRlY29kZXJAN2ViMjAwMDANCihYRU4pIC9zY2IvaDI2NC1kZWNv
ZGVyQDdlYjIwMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4p
IENoZWNrIGlmIC9zY2IvaDI2NC1kZWNvZGVyQDdlYjIwMDAwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9zY2IvaDI2NC1kZWNvZGVyQDdlYjIwMDAwDQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvc2NiL2gyNjQtZGVjb2RlckA3ZWIy
MDAwMCAqKg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0y
KSBvbiAvc2NiDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4g
MDAwMDAwMDA8Mz4gN2ViMjAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVz
IGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxr
aW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03YzAw
MDAwMCwgcz0zODAwMDAwLCBkYT03ZWIyMDAwMA0KKFhFTikgRFQ6IHBhcmVu
dCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZjMDAwMDAwPDM+
DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDJiMjAwMDANCihYRU4pIERUOiBv
bmUgbGV2ZWwgdHJhbnNsYXRpb246PDM+IDAwMDAwMDAwPDM+IGZlYjIwMDAw
PDM+DQooWEVOKSBEVDogcmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBN
TUlPOiAwMGZlYjIwMDAwIC0gMDBmZWIzMDAwMCBQMk1UeXBlPTUNCihYRU4p
IGhhbmRsZSAvc2NiL3ZwOS1kZWNvZGVyQDdlYjMwMDAwDQooWEVOKSBkdF9p
cnFfbnVtYmVyOiBkZXY9L3NjYi92cDktZGVjb2RlckA3ZWIzMDAwMA0KKFhF
TikgL3NjYi92cDktZGVjb2RlckA3ZWIzMDAwMCBwYXNzdGhyb3VnaCA9IDEg
bmFkZHIgPSAxDQooWEVOKSBDaGVjayBpZiAvc2NiL3ZwOS1kZWNvZGVyQDdl
YjMwMDAwIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikg
ZHRfaXJxX251bWJlcjogZGV2PS9zY2IvdnA5LWRlY29kZXJAN2ViMzAwMDAN
CihYRU4pIERUOiAqKiB0cmFuc2xhdGlvbiBmb3IgZGV2aWNlIC9zY2IvdnA5
LWRlY29kZXJAN2ViMzAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVs
dCAobmE9MiwgbnM9Mikgb24gL3NjYg0KKFhFTikgRFQ6IHRyYW5zbGF0aW5n
IGFkZHJlc3M6PDM+IDAwMDAwMDAwPDM+IDdlYjMwMDAwPDM+DQooWEVOKSBE
VDogcGFyZW50IGJ1cyBpcyBkZWZhdWx0IChuYT0yLCBucz0xKSBvbiAvDQoo
WEVOKSBEVDogd2Fsa2luZyByYW5nZXMuLi4NCihYRU4pIERUOiBkZWZhdWx0
IG1hcCwgY3A9N2MwMDAwMDAsIHM9MzgwMDAwMCwgZGE9N2ViMzAwMDANCihY
RU4pIERUOiBwYXJlbnQgdHJhbnNsYXRpb24gZm9yOjwzPiAwMDAwMDAwMDwz
PiBmYzAwMDAwMDwzPg0KKFhFTikgRFQ6IHdpdGggb2Zmc2V0OiAyYjMwMDAw
DQooWEVOKSBEVDogb25lIGxldmVsIHRyYW5zbGF0aW9uOjwzPiAwMDAwMDAw
MDwzPiBmZWIzMDAwMDwzPg0KKFhFTikgRFQ6IHJlYWNoZWQgcm9vdCBub2Rl
DQooWEVOKSAgIC0gTU1JTzogMDBmZWIzMDAwMCAtIDAwZmViNDAwMDAgUDJN
VHlwZT01DQooWEVOKSBoYW5kbGUgL2xlZHMNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vbGVkcw0KKFhFTikgL2xlZHMgcGFzc3Rocm91Z2ggPSAxIG5h
ZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL2xlZHMgaXMgYmVoaW5kIHRoZSBJ
T01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2xl
ZHMNCihYRU4pIGhhbmRsZSAvbGVkcy9hY3QNCihYRU4pIGR0X2lycV9udW1i
ZXI6IGRldj0vbGVkcy9hY3QNCihYRU4pIC9sZWRzL2FjdCBwYXNzdGhyb3Vn
aCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvbGVkcy9hY3QgaXMg
YmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVt
YmVyOiBkZXY9L2xlZHMvYWN0DQooWEVOKSBoYW5kbGUgL2xlZHMvcHdyDQoo
WEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2xlZHMvcHdyDQooWEVOKSAvbGVk
cy9wd3IgcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sg
aWYgL2xlZHMvcHdyIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0K
KFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9sZWRzL3B3cg0KKFhFTikgaGFu
ZGxlIC9zZF9pb18xdjhfcmVnDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9
L3NkX2lvXzF2OF9yZWcNCihYRU4pIC9zZF9pb18xdjhfcmVnIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zZF9pb18xdjhf
cmVnIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRf
aXJxX251bWJlcjogZGV2PS9zZF9pb18xdjhfcmVnDQooWEVOKSBoYW5kbGUg
L19fb3ZlcnJpZGVzX18NCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vX19v
dmVycmlkZXNfXw0KKFhFTikgL19fb3ZlcnJpZGVzX18gcGFzc3Rocm91Z2gg
PSAxIG5hZGRyID0gMA0KKFhFTikgQ2hlY2sgaWYgL19fb3ZlcnJpZGVzX18g
aXMgYmVoaW5kIHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFf
bnVtYmVyOiBkZXY9L19fb3ZlcnJpZGVzX18NCihYRU4pIGhhbmRsZSAvZml4
ZWRyZWd1bGF0b3JfM3YzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Zp
eGVkcmVndWxhdG9yXzN2Mw0KKFhFTikgL2ZpeGVkcmVndWxhdG9yXzN2MyBw
YXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZml4
ZWRyZWd1bGF0b3JfM3YzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBp
dA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9maXhlZHJlZ3VsYXRvcl8z
djMNCihYRU4pIGhhbmRsZSAvZml4ZWRyZWd1bGF0b3JfNXYwDQooWEVOKSBk
dF9pcnFfbnVtYmVyOiBkZXY9L2ZpeGVkcmVndWxhdG9yXzV2MA0KKFhFTikg
L2ZpeGVkcmVndWxhdG9yXzV2MCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAw
DQooWEVOKSBDaGVjayBpZiAvZml4ZWRyZWd1bGF0b3JfNXYwIGlzIGJlaGlu
ZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjog
ZGV2PS9maXhlZHJlZ3VsYXRvcl81djANCihYRU4pIGhhbmRsZSAvdjNkYnVz
DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L3YzZGJ1cw0KKFhFTikgL3Yz
ZGJ1cyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBp
ZiAvdjNkYnVzIGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhF
TikgZHRfaXJxX251bWJlcjogZGV2PS92M2RidXMNCihYRU4pIGhhbmRsZSAv
djNkYnVzL3YzZEA3ZWMwNDAwMA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2
PS92M2RidXMvdjNkQDdlYzA0MDAwDQooWEVOKSAgdXNpbmcgJ2ludGVycnVw
dHMnIHByb3BlcnR5DQooWEVOKSAgaW50c3BlYz0wIGludGxlbj0zDQooWEVO
KSAgaW50c2l6ZT0zIGludGxlbj0zDQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jh
d19pcnE6IGRldj0vdjNkYnVzL3YzZEA3ZWMwNDAwMCwgaW5kZXg9MA0KKFhF
TikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNw
ZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhF
TikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250cm9s
bGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA0YS4u
Ll0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQooWEVO
KSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVOKSAv
djNkYnVzL3YzZEA3ZWMwNDAwMCBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAy
DQooWEVOKSBDaGVjayBpZiAvdjNkYnVzL3YzZEA3ZWMwNDAwMCBpcyBiZWhp
bmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6
IGRldj0vdjNkYnVzL3YzZEA3ZWMwNDAwMA0KKFhFTikgIHVzaW5nICdpbnRl
cnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGludHNwZWM9MCBpbnRsZW49Mw0K
KFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0KKFhFTikgZHRfZGV2aWNlX2dl
dF9yYXdfaXJxOiBkZXY9L3YzZGJ1cy92M2RAN2VjMDQwMDAsIGluZGV4PTAN
CihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJvcGVydHkNCihYRU4pICBp
bnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRzaXplPTMgaW50bGVuPTMN
CihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29u
dHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAwMDAwMDAwIDB4MDAwMDAw
NGEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBpcGFy
PS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAwNDEwMDAsIHNpemU9Mw0K
KFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAtPiBnb3QgaXQgIQ0KKFhF
TikgZHRfZGV2aWNlX2dldF9yYXdfaXJxOiBkZXY9L3YzZGJ1cy92M2RAN2Vj
MDQwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycgcHJv
cGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBpbnRz
aXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9L3Nv
Yy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsweDAw
MDAwMDAwIDB4MDAwMDAwNGEuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0X2ly
cV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJANDAw
NDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4pICAt
PiBnb3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTA2DQooWEVOKSBEVDogKiog
dHJhbnNsYXRpb24gZm9yIGRldmljZSAvdjNkYnVzL3YzZEA3ZWMwNDAwMCAq
Kg0KKFhFTikgRFQ6IGJ1cyBpcyBkZWZhdWx0IChuYT0xLCBucz0yKSBvbiAv
djNkYnVzDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVzczo8Mz4gN2Vj
MDAwMDA8Mz4NCihYRU4pIERUOiBwYXJlbnQgYnVzIGlzIGRlZmF1bHQgKG5h
PTIsIG5zPTEpIG9uIC8NCihYRU4pIERUOiB3YWxraW5nIHJhbmdlcy4uLg0K
KFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBjcD03YzUwMDAwMCwgcz0zMzAwMDAw
LCBkYT03ZWMwMDAwMA0KKFhFTikgRFQ6IHBhcmVudCB0cmFuc2xhdGlvbiBm
b3I6PDM+IDAwMDAwMDAwPDM+IGZjNTAwMDAwPDM+DQooWEVOKSBEVDogd2l0
aCBvZmZzZXQ6IDI3MDAwMDANCihYRU4pIERUOiBvbmUgbGV2ZWwgdHJhbnNs
YXRpb246PDM+IDAwMDAwMDAwPDM+IGZlYzAwMDAwPDM+DQooWEVOKSBEVDog
cmVhY2hlZCByb290IG5vZGUNCihYRU4pICAgLSBNTUlPOiAwMGZlYzAwMDAw
IC0gMDBmZWMwNDAwMCBQMk1UeXBlPTUNCihYRU4pIERUOiAqKiB0cmFuc2xh
dGlvbiBmb3IgZGV2aWNlIC92M2RidXMvdjNkQDdlYzA0MDAwICoqDQooWEVO
KSBEVDogYnVzIGlzIGRlZmF1bHQgKG5hPTEsIG5zPTIpIG9uIC92M2RidXMN
CihYRU4pIERUOiB0cmFuc2xhdGluZyBhZGRyZXNzOjwzPiA3ZWMwNDAwMDwz
Pg0KKFhFTikgRFQ6IHBhcmVudCBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
MSkgb24gLw0KKFhFTikgRFQ6IHdhbGtpbmcgcmFuZ2VzLi4uDQooWEVOKSBE
VDogZGVmYXVsdCBtYXAsIGNwPTdjNTAwMDAwLCBzPTMzMDAwMDAsIGRhPTdl
YzA0MDAwDQooWEVOKSBEVDogcGFyZW50IHRyYW5zbGF0aW9uIGZvcjo8Mz4g
MDAwMDAwMDA8Mz4gZmM1MDAwMDA8Mz4NCihYRU4pIERUOiB3aXRoIG9mZnNl
dDogMjcwNDAwMA0KKFhFTikgRFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8
Mz4gMDAwMDAwMDA8Mz4gZmVjMDQwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVk
IHJvb3Qgbm9kZQ0KKFhFTikgICAtIE1NSU86IDAwZmVjMDQwMDAgLSAwMGZl
YzA4MDAwIFAyTVR5cGU9NQ0KKFhFTikgaGFuZGxlIC9ncHUNCihYRU4pIGR0
X2lycV9udW1iZXI6IGRldj0vZ3B1DQooWEVOKSAvZ3B1IHBhc3N0aHJvdWdo
ID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9ncHUgaXMgYmVoaW5k
IHRoZSBJT01NVSBhbmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBk
ZXY9L2dwdQ0KKFhFTikgaGFuZGxlIC9jbGstMTA4TQ0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9jbGstMTA4TQ0KKFhFTikgL2Nsay0xMDhNIHBhc3N0
aHJvdWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9jbGstMTA4
TSBpcyBiZWhpbmQgdGhlIElPTU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2ly
cV9udW1iZXI6IGRldj0vY2xrLTEwOE0NCihYRU4pIGhhbmRsZSAvZmlybXdh
cmUtY2xvY2tzDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2Zpcm13YXJl
LWNsb2Nrcw0KKFhFTikgL2Zpcm13YXJlLWNsb2NrcyBwYXNzdGhyb3VnaCA9
IDEgbmFkZHIgPSAwDQooWEVOKSBDaGVjayBpZiAvZmlybXdhcmUtY2xvY2tz
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9maXJtd2FyZS1jbG9ja3MNCihYRU4pIGhhbmRsZSAv
ZW1tYzJidXMNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZW1tYzJidXMN
CihYRU4pIC9lbW1jMmJ1cyBwYXNzdGhyb3VnaCA9IDEgbmFkZHIgPSAwDQoo
WEVOKSBDaGVjayBpZiAvZW1tYzJidXMgaXMgYmVoaW5kIHRoZSBJT01NVSBh
bmQgYWRkIGl0DQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L2VtbWMyYnVz
DQooWEVOKSBoYW5kbGUgL2VtbWMyYnVzL2VtbWMyQDdlMzQwMDAwDQooWEVO
KSBkdF9pcnFfbnVtYmVyOiBkZXY9L2VtbWMyYnVzL2VtbWMyQDdlMzQwMDAw
DQooWEVOKSAgdXNpbmcgJ2ludGVycnVwdHMnIHByb3BlcnR5DQooWEVOKSAg
aW50c3BlYz0wIGludGxlbj0zDQooWEVOKSAgaW50c2l6ZT0zIGludGxlbj0z
DQooWEVOKSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZW1tYzJidXMv
ZW1tYzJAN2UzNDAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJy
dXB0cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihY
RU4pICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3
OiBwYXI9L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRz
cGVjPVsweDAwMDAwMDAwIDB4MDAwMDAwN2UuLi5dLG9pbnRzaXplPTMNCihY
RU4pIGR0X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRy
b2xsZXJANDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTEN
CihYRU4pICAtPiBnb3QgaXQgIQ0KKFhFTikgL2VtbWMyYnVzL2VtbWMyQDdl
MzQwMDAwIHBhc3N0aHJvdWdoID0gMSBuYWRkciA9IDENCihYRU4pIENoZWNr
IGlmIC9lbW1jMmJ1cy9lbW1jMkA3ZTM0MDAwMCBpcyBiZWhpbmQgdGhlIElP
TU1VIGFuZCBhZGQgaXQNCihYRU4pIGR0X2lycV9udW1iZXI6IGRldj0vZW1t
YzJidXMvZW1tYzJAN2UzNDAwMDANCihYRU4pICB1c2luZyAnaW50ZXJydXB0
cycgcHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4p
ICBpbnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2RldmljZV9nZXRfcmF3
X2lycTogZGV2PS9lbW1jMmJ1cy9lbW1jMkA3ZTM0MDAwMCwgaW5kZXg9MA0K
KFhFTikgIHVzaW5nICdpbnRlcnJ1cHRzJyBwcm9wZXJ0eQ0KKFhFTikgIGlu
dHNwZWM9MCBpbnRsZW49Mw0KKFhFTikgIGludHNpemU9MyBpbnRsZW49Mw0K
KFhFTikgZHRfaXJxX21hcF9yYXc6IHBhcj0vc29jL2ludGVycnVwdC1jb250
cm9sbGVyQDQwMDQxMDAwLGludHNwZWM9WzB4MDAwMDAwMDAgMHgwMDAwMDA3
ZS4uLl0sb2ludHNpemU9Mw0KKFhFTikgZHRfaXJxX21hcF9yYXc6IGlwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCwgc2l6ZT0zDQoo
WEVOKSAgLT4gYWRkcnNpemU9MQ0KKFhFTikgIC0+IGdvdCBpdCAhDQooWEVO
KSBkdF9kZXZpY2VfZ2V0X3Jhd19pcnE6IGRldj0vZW1tYzJidXMvZW1tYzJA
N2UzNDAwMDAsIGluZGV4PTANCihYRU4pICB1c2luZyAnaW50ZXJydXB0cycg
cHJvcGVydHkNCihYRU4pICBpbnRzcGVjPTAgaW50bGVuPTMNCihYRU4pICBp
bnRzaXplPTMgaW50bGVuPTMNCihYRU4pIGR0X2lycV9tYXBfcmF3OiBwYXI9
L3NvYy9pbnRlcnJ1cHQtY29udHJvbGxlckA0MDA0MTAwMCxpbnRzcGVjPVsw
eDAwMDAwMDAwIDB4MDAwMDAwN2UuLi5dLG9pbnRzaXplPTMNCihYRU4pIGR0
X2lycV9tYXBfcmF3OiBpcGFyPS9zb2MvaW50ZXJydXB0LWNvbnRyb2xsZXJA
NDAwNDEwMDAsIHNpemU9Mw0KKFhFTikgIC0+IGFkZHJzaXplPTENCihYRU4p
ICAtPiBnb3QgaXQgIQ0KKFhFTikgICAtIElSUTogMTU4DQooWEVOKSBEVDog
KiogdHJhbnNsYXRpb24gZm9yIGRldmljZSAvZW1tYzJidXMvZW1tYzJAN2Uz
NDAwMDAgKioNCihYRU4pIERUOiBidXMgaXMgZGVmYXVsdCAobmE9MiwgbnM9
MSkgb24gL2VtbWMyYnVzDQooWEVOKSBEVDogdHJhbnNsYXRpbmcgYWRkcmVz
czo8Mz4gMDAwMDAwMDA8Mz4gN2UzNDAwMDA8Mz4NCihYRU4pIERUOiBwYXJl
bnQgYnVzIGlzIGRlZmF1bHQgKG5hPTIsIG5zPTEpIG9uIC8NCihYRU4pIERU
OiB3YWxraW5nIHJhbmdlcy4uLg0KKFhFTikgRFQ6IGRlZmF1bHQgbWFwLCBj
cD03ZTAwMDAwMCwgcz0xODAwMDAwLCBkYT03ZTM0MDAwMA0KKFhFTikgRFQ6
IHBhcmVudCB0cmFuc2xhdGlvbiBmb3I6PDM+IDAwMDAwMDAwPDM+IGZlMDAw
MDAwPDM+DQooWEVOKSBEVDogd2l0aCBvZmZzZXQ6IDM0MDAwMA0KKFhFTikg
RFQ6IG9uZSBsZXZlbCB0cmFuc2xhdGlvbjo8Mz4gMDAwMDAwMDA8Mz4gZmUz
NDAwMDA8Mz4NCihYRU4pIERUOiByZWFjaGVkIHJvb3Qgbm9kZQ0KKFhFTikg
ICAtIE1NSU86IDAwZmUzNDAwMDAgLSAwMGZlMzQwMTAwIFAyTVR5cGU9NQ0K
KFhFTikgaGFuZGxlIC9zZF92Y2NfcmVnDQooWEVOKSBkdF9pcnFfbnVtYmVy
OiBkZXY9L3NkX3ZjY19yZWcNCihYRU4pIC9zZF92Y2NfcmVnIHBhc3N0aHJv
dWdoID0gMSBuYWRkciA9IDANCihYRU4pIENoZWNrIGlmIC9zZF92Y2NfcmVn
IGlzIGJlaGluZCB0aGUgSU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJx
X251bWJlcjogZGV2PS9zZF92Y2NfcmVnDQooWEVOKSBoYW5kbGUgL19fc3lt
Ym9sc19fDQooWEVOKSBkdF9pcnFfbnVtYmVyOiBkZXY9L19fc3ltYm9sc19f
DQooWEVOKSAvX19zeW1ib2xzX18gcGFzc3Rocm91Z2ggPSAxIG5hZGRyID0g
MA0KKFhFTikgQ2hlY2sgaWYgL19fc3ltYm9sc19fIGlzIGJlaGluZCB0aGUg
SU9NTVUgYW5kIGFkZCBpdA0KKFhFTikgZHRfaXJxX251bWJlcjogZGV2PS9f
X3N5bWJvbHNfXw0KKFhFTikgQWxsb2NhdGluZyBQUEkgMTYgZm9yIGV2ZW50
IGNoYW5uZWwgaW50ZXJydXB0DQooWEVOKSBDcmVhdGUgaHlwZXJ2aXNvciBu
b2RlDQooWEVOKSBDcmVhdGUgUFNDSSBub2RlDQooWEVOKSBDcmVhdGUgY3B1
cyBub2RlDQooWEVOKSBDcmVhdGUgY3B1QDAgKGxvZ2ljYWwgQ1BVSUQ6IDAp
IG5vZGUNCihYRU4pIENyZWF0ZSBtZW1vcnkgbm9kZSAocmVnIHNpemUgMywg
bnIgY2VsbHMgMTUpDQooWEVOKSAgIEJhbmsgMDogMHg4MDAwMDAwLT4weDI4
MDAwMDAwDQooWEVOKSAgIEJhbmsgMTogMHgyYTAwMDAwMC0+MHgyYjAwMDAw
MA0KKFhFTikgICBCYW5rIDI6IDB4MmUwMDAwMDAtPjB4MzgwMDAwMDANCihY
RU4pICAgQmFuayAzOiAweDNjYzAwMDAwLT4weDNkMDAwMDAwDQooWEVOKSAg
IEJhbmsgNDogMHhlODAwMDAwMC0+MHhmMDAwMDAwMA0KKFhFTikgTG9hZGlu
ZyB6SW1hZ2UgZnJvbSAwMDAwMDAwMDJiZDdjMDAwIHRvIDAwMDAwMDAwMDgw
ODAwMDAtMDAwMDAwMDAwOTk4MGEwMA0KKFhFTikgTG9hZGluZyBkMCBEVEIg
dG8gMHgwMDAwMDAwMDEwMDAwMDAwLTB4MDAwMDAwMDAxMDAwYjMzYQ0KKFhF
TikgSW5pdGlhbCBsb3cgbWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBhdCAw
eDQwMDAgcGFnZXMuDQooWEVOKSBTY3J1YmJpbmcgRnJlZSBSQU0gaW4gYmFj
a2dyb3VuZA0KKFhFTikgU3RkLiBMb2dsZXZlbDogQWxsDQooWEVOKSBHdWVz
dCBMb2dsZXZlbDogQWxsDQooWEVOKSAqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioNCihYRU4pIE5vIHN1cHBv
cnQgZm9yIEFSTV9TTUNDQ19BUkNIX1dPUktBUk9VTkRfMS4NCihYRU4pIFBs
ZWFzZSB1cGRhdGUgeW91ciBmaXJtd2FyZS4NCihYRU4pICoqKioqKioqKioq
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKg0KKFhF
TikgTm8gc3VwcG9ydCBmb3IgQVJNX1NNQ0NDX0FSQ0hfV09SS0FST1VORF8x
Lg0KKFhFTikgUGxlYXNlIHVwZGF0ZSB5b3VyIGZpcm13YXJlLg0KKFhFTikg
KioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqDQooWEVOKSBObyBzdXBwb3J0IGZvciBBUk1fU01DQ0NfQVJDSF9X
T1JLQVJPVU5EXzEuDQooWEVOKSBQbGVhc2UgdXBkYXRlIHlvdXIgZmlybXdh
cmUuDQooWEVOKSAqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioqKioq
KioqKioqKioqKioqKioqKioNCihYRU4pIDMuLi4gMi4uLiAxLi4uDQooWEVO
KSAqKiogU2VyaWFsIGlucHV0IHRvIERPTTAgKHR5cGUgJ0NUUkwtYScgdGhy
ZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0KQ0KKFhFTikgRnJlZWQgMzQ0a0Ig
aW5pdCBtZW1vcnkuDQooWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdv
cmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVI0DQooWEVO
KSBkMHYwOiB2R0lDRDogdW5oYW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBm
ZmZmZmZmZiB0byBJQ0FDVElWRVI4DQooWEVOKSBkMHYwOiB2R0lDRDogdW5o
YW5kbGVkIHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElW
RVIxMg0KKFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRl
IDB4MDAwMDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMTYNCihYRU4pIGQwdjA6
IHZHSUNEOiB1bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZm
IHRvIElDQUNUSVZFUjIwDQooWEVOKSBkMHYwOiB2R0lDRDogdW5oYW5kbGVk
IHdvcmQgd3JpdGUgMHgwMDAwMDBmZmZmZmZmZiB0byBJQ0FDVElWRVIyNA0K
KFhFTikgZDB2MDogdkdJQ0Q6IHVuaGFuZGxlZCB3b3JkIHdyaXRlIDB4MDAw
MDAwZmZmZmZmZmYgdG8gSUNBQ1RJVkVSMjgNCihYRU4pIGQwdjA6IHZHSUNE
OiB1bmhhbmRsZWQgd29yZCB3cml0ZSAweDAwMDAwMGZmZmZmZmZmIHRvIElD
QUNUSVZFUjANClsgICAgMC4wMDAwMDBdIEJvb3RpbmcgTGludXggb24gcGh5
c2ljYWwgQ1BVIDB4MDAwMDAwMDAwMCBbMHg0MTBmZDA4M10NClsgICAgMC4w
MDAwMDBdIExpbnV4IHZlcnNpb24gNS42LjEwLWRlZmF1bHQrIChyb290QDEz
MDI0ZWIxZDJjZikgKGdjYyB2ZXJzaW9uIDguMy4wIChBbHBpbmUgOC4zLjAp
KSAjMTkgU01QIFR1ZSBNYXkgMTIgMDM6NTY6NDYgVVRDIDIwMjANClsgICAg
MC4wMDAwMDBdIE1hY2hpbmUgbW9kZWw6IFJhc3BiZXJyeSBQaSA0IE1vZGVs
IEINClsgICAgMC4wMDAwMDBdIFhlbiA0LjE0IHN1cHBvcnQgZm91bmQNClsg
ICAgMC4wMDAwMDBdIGVmaTogR2V0dGluZyBFRkkgcGFyYW1ldGVycyBmcm9t
IEZEVDoNClsgICAgMC4wMDAwMDBdIGVmaTogVUVGSSBub3QgZm91bmQuDQpb
ICAgIDAuMDAwMDAwXSBSZXNlcnZlZCBtZW1vcnk6IGNyZWF0ZWQgQ01BIG1l
bW9yeSBwb29sIGF0IDB4MDAwMDAwMDAyNDAwMDAwMCwgc2l6ZSA2NCBNaUIN
ClsgICAgMC4wMDAwMDBdIE9GOiByZXNlcnZlZCBtZW06IGluaXRpYWxpemVk
IG5vZGUgbGludXgsY21hLCBjb21wYXRpYmxlIGlkIHNoYXJlZC1kbWEtcG9v
bA0KWyAgICAwLjAwMDAwMF0gTlVNQTogTm8gTlVNQSBjb25maWd1cmF0aW9u
IGZvdW5kDQpbICAgIDAuMDAwMDAwXSBOVU1BOiBGYWtpbmcgYSBub2RlIGF0
IFttZW0gMHgwMDAwMDAwMDA4MDAwMDAwLTB4MDAwMDAwMDBlZmZmZmZmZl0N
ClsgICAgMC4wMDAwMDBdIE5VTUE6IE5PREVfREFUQSBbbWVtIDB4ZWZlMjgy
YzAtMHhlZmUyYmZmZl0NClsgICAgMC4wMDAwMDBdIFpvbmUgcmFuZ2VzOg0K
WyAgICAwLjAwMDAwMF0gICBETUEgICAgICBbbWVtIDB4MDAwMDAwMDAwODAw
MDAwMC0weDAwMDAwMDAwM2ZmZmZmZmZdDQpbICAgIDAuMDAwMDAwXSAgIERN
QTMyICAgIFttZW0gMHgwMDAwMDAwMDQwMDAwMDAwLTB4MDAwMDAwMDBlZmZm
ZmZmZl0NClsgICAgMC4wMDAwMDBdICAgTm9ybWFsICAgZW1wdHkNClsgICAg
MC4wMDAwMDBdIE1vdmFibGUgem9uZSBzdGFydCBmb3IgZWFjaCBub2RlDQpb
ICAgIDAuMDAwMDAwXSBFYXJseSBtZW1vcnkgbm9kZSByYW5nZXMNClsgICAg
MC4wMDAwMDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDAwMDAwMDA4MDAwMDAw
LTB4MDAwMDAwMDAyN2ZmZmZmZl0NClsgICAgMC4wMDAwMDBdICAgbm9kZSAg
IDA6IFttZW0gMHgwMDAwMDAwMDJhMDAwMDAwLTB4MDAwMDAwMDAyYWZmZmZm
Zl0NClsgICAgMC4wMDAwMDBdICAgbm9kZSAgIDA6IFttZW0gMHgwMDAwMDAw
MDJlMDAwMDAwLTB4MDAwMDAwMDAzN2ZmZmZmZl0NClsgICAgMC4wMDAwMDBd
ICAgbm9kZSAgIDA6IFttZW0gMHgwMDAwMDAwMDNjYzAwMDAwLTB4MDAwMDAw
MDAzY2ZmZmZmZl0NClsgICAgMC4wMDAwMDBdICAgbm9kZSAgIDA6IFttZW0g
MHgwMDAwMDAwMGU4MDAwMDAwLTB4MDAwMDAwMDBlZmZmZmZmZl0NClsgICAg
MC4wMDAwMDBdIEluaXRtZW0gc2V0dXAgbm9kZSAwIFttZW0gMHgwMDAwMDAw
MDA4MDAwMDAwLTB4MDAwMDAwMDBlZmZmZmZmZl0NClsgICAgMC4wMDAwMDBd
IHBzY2k6IHByb2JpbmcgZm9yIGNvbmR1aXQgbWV0aG9kIGZyb20gRFQuDQpb
ICAgIDAuMDAwMDAwXSBwc2NpOiBQU0NJdjEuMSBkZXRlY3RlZCBpbiBmaXJt
d2FyZS4NClsgICAgMC4wMDAwMDBdIHBzY2k6IFVzaW5nIHN0YW5kYXJkIFBT
Q0kgdjAuMiBmdW5jdGlvbiBJRHMNClsgICAgMC4wMDAwMDBdIHBzY2k6IFRy
dXN0ZWQgT1MgbWlncmF0aW9uIG5vdCByZXF1aXJlZA0KWyAgICAwLjAwMDAw
MF0gcHNjaTogU01DIENhbGxpbmcgQ29udmVudGlvbiB2MS4xDQpbICAgIDAu
MDAwMDAwXSBwZXJjcHU6IEVtYmVkZGVkIDIzIHBhZ2VzL2NwdSBzNTQyMzIg
cjgxOTIgZDMxNzg0IHU5NDIwOA0KWyAgICAwLjAwMDAwMF0gRGV0ZWN0ZWQg
UElQVCBJLWNhY2hlIG9uIENQVTANClsgICAgMC4wMDAwMDBdIENQVSBmZWF0
dXJlczogZGV0ZWN0ZWQ6IEVMMiB2ZWN0b3IgaGFyZGVuaW5nDQpbICAgIDAu
MDAwMDAwXSBDUFUgZmVhdHVyZXM6IGRldGVjdGVkOiBTcGVjdWxhdGl2ZSBT
dG9yZSBCeXBhc3MgRGlzYWJsZQ0KWyAgICAwLjAwMDAwMF0gQ1BVIGZlYXR1
cmVzOiBkZXRlY3RlZDogQVJNIGVycmF0dW0gMTMxOTM2Nw0KWyAgICAwLjAw
MDAwMF0gQnVpbHQgMSB6b25lbGlzdHMsIG1vYmlsaXR5IGdyb3VwaW5nIG9u
LiAgVG90YWwgcGFnZXM6IDIwNjY0MA0KWyAgICAwLjAwMDAwMF0gUG9saWN5
IHpvbmU6IERNQTMyDQpbICAgIDAuMDAwMDAwXSBLZXJuZWwgY29tbWFuZCBs
aW5lOiBjb25zb2xlPWh2YzAgZWFybHlwcmludGs9eGVuIG5vbW9kZXNldCBy
b290ZGVsYXk9MTANClsgICAgMC4wMDAwMDBdIERlbnRyeSBjYWNoZSBoYXNo
IHRhYmxlIGVudHJpZXM6IDEzMTA3MiAob3JkZXI6IDgsIDEwNDg1NzYgYnl0
ZXMsIGxpbmVhcikNClsgICAgMC4wMDAwMDBdIElub2RlLWNhY2hlIGhhc2gg
dGFibGUgZW50cmllczogNjU1MzYgKG9yZGVyOiA3LCA1MjQyODggYnl0ZXMs
IGxpbmVhcikNClsgICAgMC4wMDAwMDBdIG1lbSBhdXRvLWluaXQ6IHN0YWNr
Om9mZiwgaGVhcCBhbGxvYzpvZmYsIGhlYXAgZnJlZTpvZmYNClsgICAgMC4w
MDAwMDBdIHNvZnR3YXJlIElPIFRMQjogbWFwcGVkIFttZW0gMHgzNDAwMDAw
MC0weDM4MDAwMDAwXSAoNjRNQikNClsgICAgMC4wMDAwMDBdIE1lbW9yeTog
NjQ1NTA4Sy84Mzk2ODBLIGF2YWlsYWJsZSAoMTI3OTZLIGtlcm5lbCBjb2Rl
LCAxODU4SyByd2RhdGEsIDYyMTJLIHJvZGF0YSwgNDY3MksgaW5pdCwgNzU3
SyBic3MsIDEyODYzNksgcmVzZXJ2ZWQsIDY1NTM2SyBjbWEtcmVzZXJ2ZWQp
DQpbICAgIDAuMDAwMDAwXSByYW5kb206IGdldF9yYW5kb21fdTY0IGNhbGxl
ZCBmcm9tIF9fa21lbV9jYWNoZV9jcmVhdGUrMHg0MC8weDU4MCB3aXRoIGNy
bmdfaW5pdD0wDQpbICAgIDAuMDAwMDAwXSBTTFVCOiBIV2FsaWduPTY0LCBP
cmRlcj0wLTMsIE1pbk9iamVjdHM9MCwgQ1BVcz0xLCBOb2Rlcz0xDQpbICAg
IDAuMDAwMDAwXSByY3U6IEhpZXJhcmNoaWNhbCBSQ1UgaW1wbGVtZW50YXRp
b24uDQpbICAgIDAuMDAwMDAwXSByY3U6IAlSQ1UgcmVzdHJpY3RpbmcgQ1BV
cyBmcm9tIE5SX0NQVVM9NDgwIHRvIG5yX2NwdV9pZHM9MS4NClsgICAgMC4w
MDAwMDBdIHJjdTogUkNVIGNhbGN1bGF0ZWQgdmFsdWUgb2Ygc2NoZWR1bGVy
LWVubGlzdG1lbnQgZGVsYXkgaXMgMTAgamlmZmllcy4NClsgICAgMC4wMDAw
MDBdIHJjdTogQWRqdXN0aW5nIGdlb21ldHJ5IGZvciByY3VfZmFub3V0X2xl
YWY9MTYsIG5yX2NwdV9pZHM9MQ0KWyAgICAwLjAwMDAwMF0gTlJfSVJRUzog
NjQsIG5yX2lycXM6IDY0LCBwcmVhbGxvY2F0ZWQgaXJxczogMA0KWyAgICAw
LjAwMDAwMF0gYXJjaF90aW1lcjogY3AxNSB0aW1lcihzKSBydW5uaW5nIGF0
IDU0LjAwTUh6ICh2aXJ0KS4NClsgICAgMC4wMDAwMDBdIGNsb2Nrc291cmNl
OiBhcmNoX3N5c19jb3VudGVyOiBtYXNrOiAweGZmZmZmZmZmZmZmZmZmIG1h
eF9jeWNsZXM6IDB4Yzc0M2NlMzQ2LCBtYXhfaWRsZV9uczogNDQwNzk1MjAz
MTIzIG5zDQpbICAgIDAuMDAwMDA1XSBzY2hlZF9jbG9jazogNTYgYml0cyBh
dCA1NE1IeiwgcmVzb2x1dGlvbiAxOG5zLCB3cmFwcyBldmVyeSA0Mzk4MDQ2
NTExMTAybnMNClsgICAgMC4wMDA0MTVdIENvbnNvbGU6IGNvbG91ciBkdW1t
eSBkZXZpY2UgODB4MjUNClsgICAgMC4zMjQ4ODldIHByaW50azogY29uc29s
ZSBbaHZjMF0gZW5hYmxlZA0KWyAgICAwLjMyOTE2N10gQ2FsaWJyYXRpbmcg
ZGVsYXkgbG9vcCAoc2tpcHBlZCksIHZhbHVlIGNhbGN1bGF0ZWQgdXNpbmcg
dGltZXIgZnJlcXVlbmN5Li4gMTA4LjAwIEJvZ29NSVBTIChscGo9NTQwMDAw
KQ0KWyAgICAwLjMzOTczMl0gcGlkX21heDogZGVmYXVsdDogMzI3NjggbWlu
aW11bTogMzAxDQpbICAgIDAuMzQ0NTU3XSBMU006IFNlY3VyaXR5IEZyYW1l
d29yayBpbml0aWFsaXppbmcNClsgICAgMC4zNDkyNzBdIE1vdW50LWNhY2hl
IGhhc2ggdGFibGUgZW50cmllczogMjA0OCAob3JkZXI6IDIsIDE2Mzg0IGJ5
dGVzLCBsaW5lYXIpDQpbICAgIDAuMzU2NzgwXSBNb3VudHBvaW50LWNhY2hl
IGhhc2ggdGFibGUgZW50cmllczogMjA0OCAob3JkZXI6IDIsIDE2Mzg0IGJ5
dGVzLCBsaW5lYXIpDQpbICAgIDAuMzY1Njc5XSBEaXNhYmxpbmcgbWVtb3J5
IGNvbnRyb2wgZ3JvdXAgc3Vic3lzdGVtDQpbICAgIDAuMzcxODU0XSB4ZW46
Z3JhbnRfdGFibGU6IEdyYW50IHRhYmxlcyB1c2luZyB2ZXJzaW9uIDEgbGF5
b3V0DQpbICAgIDAuMzc3MzM4XSBHcmFudCB0YWJsZSBpbml0aWFsaXplZA0K
WyAgICAwLjM4MDk4Nl0geGVuOmV2ZW50czogVXNpbmcgRklGTy1iYXNlZCBB
QkkNClsgICAgMC4zODUzNTBdIFhlbjogaW5pdGlhbGl6aW5nIGNwdTANClsg
ICAgMC4zODg5ODBdIHJjdTogSGllcmFyY2hpY2FsIFNSQ1UgaW1wbGVtZW50
YXRpb24uDQpbICAgIDAuMzk4MTEwXSBFRkkgc2VydmljZXMgd2lsbCBub3Qg
YmUgYXZhaWxhYmxlLg0KWyAgICAwLjQwMjI4Ml0gc21wOiBCcmluZ2luZyB1
cCBzZWNvbmRhcnkgQ1BVcyAuLi4NClsgICAgMC40MDY3OTJdIHNtcDogQnJv
dWdodCB1cCAxIG5vZGUsIDEgQ1BVDQpbICAgIDAuNDEwODczXSBTTVA6IFRv
dGFsIG9mIDEgcHJvY2Vzc29ycyBhY3RpdmF0ZWQuDQpbICAgIDAuNDE1NzA3
XSBDUFUgZmVhdHVyZXM6IGRldGVjdGVkOiAzMi1iaXQgRUwwIFN1cHBvcnQN
ClsgICAgMC40MjA5ODldIENQVSBmZWF0dXJlczogZGV0ZWN0ZWQ6IENSQzMy
IGluc3RydWN0aW9ucw0KWyAgICAwLjQ1NTY4NF0gQ1BVOiBBbGwgQ1BVKHMp
IHN0YXJ0ZWQgYXQgRUwxDQpbICAgIDAuNDU5Mjc2XSBhbHRlcm5hdGl2ZXM6
IHBhdGNoaW5nIGtlcm5lbCBjb2RlDQpbICAgIDAuNDY1MjAzXSBkZXZ0bXBm
czogaW5pdGlhbGl6ZWQNClsgICAgMC40NzU0NDFdIEVuYWJsZWQgY3AxNV9i
YXJyaWVyIHN1cHBvcnQNClsgICAgMC40Nzg4MDddIEVuYWJsZWQgc2V0ZW5k
IHN1cHBvcnQNClsgICAgMC40ODIzNDBdIEtBU0xSIGRpc2FibGVkIGR1ZSB0
byBsYWNrIG9mIHNlZWQNClsgICAgMC40ODczMDVdIGNsb2Nrc291cmNlOiBq
aWZmaWVzOiBtYXNrOiAweGZmZmZmZmZmIG1heF9jeWNsZXM6IDB4ZmZmZmZm
ZmYsIG1heF9pZGxlX25zOiAxOTExMjYwNDQ2Mjc1MDAwMCBucw0KWyAgICAw
LjQ5Njk0OF0gZnV0ZXggaGFzaCB0YWJsZSBlbnRyaWVzOiAyNTYgKG9yZGVy
OiAyLCAxNjM4NCBieXRlcywgbGluZWFyKQ0KWyAgICAwLjUwNTYyNV0gcGlu
Y3RybCBjb3JlOiBpbml0aWFsaXplZCBwaW5jdHJsIHN1YnN5c3RlbQ0KWyAg
ICAwLjUxMTc5OF0gdGhlcm1hbF9zeXM6IFJlZ2lzdGVyZWQgdGhlcm1hbCBn
b3Zlcm5vciAnZmFpcl9zaGFyZScNClsgICAgMC41MTE4MDJdIHRoZXJtYWxf
c3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3IgJ2JhbmdfYmFuZycN
ClsgICAgMC41MTczNTJdIHRoZXJtYWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJt
YWwgZ292ZXJub3IgJ3N0ZXBfd2lzZScNClsgICAgMC41MjM1NDNdIHRoZXJt
YWxfc3lzOiBSZWdpc3RlcmVkIHRoZXJtYWwgZ292ZXJub3IgJ3VzZXJfc3Bh
Y2UnDQpbICAgIDAuNTI5ODA0XSBETUkgbm90IHByZXNlbnQgb3IgaW52YWxp
ZC4NClsgICAgMC41NDAzOThdIE5FVDogUmVnaXN0ZXJlZCBwcm90b2NvbCBm
YW1pbHkgMTYNClsgICAgMC41NDU2NjVdIERNQTogcHJlYWxsb2NhdGVkIDI1
NiBLaUIgcG9vbCBmb3IgYXRvbWljIGFsbG9jYXRpb25zDQpbICAgIDAuNTUx
MjM3XSBhdWRpdDogaW5pdGlhbGl6aW5nIG5ldGxpbmsgc3Vic3lzIChkaXNh
YmxlZCkNClsgICAgMC41NTgwNDJdIGF1ZGl0OiB0eXBlPTIwMDAgYXVkaXQo
MC40MjA6MSk6IHN0YXRlPWluaXRpYWxpemVkIGF1ZGl0X2VuYWJsZWQ9MCBy
ZXM9MQ0KWyAgICAwLjU2NTk1MV0gaHctYnJlYWtwb2ludDogZm91bmQgNiBi
cmVha3BvaW50IGFuZCA0IHdhdGNocG9pbnQgcmVnaXN0ZXJzLg0KWyAgICAw
LjU3MjMwNV0gQVNJRCBhbGxvY2F0b3IgaW5pdGlhbGlzZWQgd2l0aCA2NTUz
NiBlbnRyaWVzDQpbICAgIDAuNTc3NzYzXSBERUJVRyB4ZW5fc3dpb3RsYl9p
bml0IDI2MCBzdGFydD0zNDAwMDAwMCBlbmQ9MzgwMDAwMDAgcmM9LTEyDQpb
ICAgIDAuNTg1OTkyXSBTZXJpYWw6IEFNQkEgUEwwMTEgVUFSVCBkcml2ZXIN
ClsgICAgMC41OTEzODFdIGJjbTI4MzUtbWJveCBmZTAwYjg4MC5tYWlsYm94
OiBtYWlsYm94IGVuYWJsZWQNClsgICAgMC41OTk2MjRdID09PTEgYmVmb3Jl
DQpbICAgIDAuNjAxNDg1XSA9PT09IG1pZGRsZQ0KWyAgICAwLjYwNDAzMV0g
PT09PSBhZnRlcg0KWyAgICAwLjYwNjQ5MF0gPT09PSBkb25lDQpbICAgIDAu
NjA5NjU5XSA9PT0xIGJlZm9yZQ0KWyAgICAwLjYxMTUxN10gPT09PSBtaWRk
bGUNClsgICAgMC42MTQwNjNdID09PT0gYWZ0ZXINClsgICAgMC42MTY1MjJd
ID09PT0gZG9uZQ0KWyAgICAwLjYxODkwNl0gcmFzcGJlcnJ5cGktZmlybXdh
cmUgc29jOmZpcm13YXJlOiBBdHRhY2hlZCB0byBmaXJtd2FyZSBmcm9tIDIw
MjAtMDItMjAgMTY6NDEsIHZhcmlhbnQgc3RhcnQNClsgICAgMC42Mzg2MDZd
ID09PTEgYmVmb3JlDQpbICAgIDAuNjQwNDY0XSA9PT09IG1pZGRsZQ0KWyAg
ICAwLjY0MzAyMl0gPT09PSBhZnRlcg0KWyAgICAwLjY0NTQ2OV0gPT09PSBk
b25lDQpbICAgIDAuNjQ3ODQ5XSByYXNwYmVycnlwaS1maXJtd2FyZSBzb2M6
ZmlybXdhcmU6IEZpcm13YXJlIGhhc2ggaXMgMDBmMzU1OTk3Y2NkYWNmYWFj
NzBlNmUwNzgwM2VjY2MwYTZmMWQ2ZQ0KWyAgICAwLjY2NzU1Ml0gPT09MSBi
ZWZvcmUNClsgICAgMC42Njk0MTBdID09PT0gbWlkZGxlDQpbICAgIDAuNjcx
OTU2XSA9PT09IGFmdGVyDQpbICAgIDAuNjc0NDE1XSA9PT09IGRvbmUNClsg
ICAgMC42OTI5MDhdIEh1Z2VUTEIgcmVnaXN0ZXJlZCAxLjAwIEdpQiBwYWdl
IHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlcw0KWyAgICAwLjY5OTExMl0g
SHVnZVRMQiByZWdpc3RlcmVkIDMyLjAgTWlCIHBhZ2Ugc2l6ZSwgcHJlLWFs
bG9jYXRlZCAwIHBhZ2VzDQpbICAgIDAuNzA1OTMxXSBIdWdlVExCIHJlZ2lz
dGVyZWQgMi4wMCBNaUIgcGFnZSBzaXplLCBwcmUtYWxsb2NhdGVkIDAgcGFn
ZXMNClsgICAgMC43MTI4MDVdIEh1Z2VUTEIgcmVnaXN0ZXJlZCA2NC4wIEtp
QiBwYWdlIHNpemUsIHByZS1hbGxvY2F0ZWQgMCBwYWdlcw0KWyAgICAwLjcy
NDAwMF0gY3J5cHRkOiBtYXhfY3B1X3FsZW4gc2V0IHRvIDEwMDANClsgICAg
MC43MzQxMzFdIEFDUEk6IEludGVycHJldGVyIGRpc2FibGVkLg0KWyAgICAw
LjczNzY5Nl0gYmNtMjgzNS1kbWEgZmUwMDcwMDAuZG1hOiBETUEgbGVnYWN5
IEFQSSBtYW5hZ2VyLCBkbWFjaGFucz0weDENClsgICAgMC43NDYwMTZdIHhl
bjpiYWxsb29uOiBJbml0aWFsaXNpbmcgYmFsbG9vbiBkcml2ZXINClsgICAg
MC43NTE2ODNdIGlvbW11OiBEZWZhdWx0IGRvbWFpbiB0eXBlOiBUcmFuc2xh
dGVkDQpbICAgIDAuNzU2NDYzXSB2Z2FhcmI6IGxvYWRlZA0KWyAgICAwLjc1
OTIwM10gU0NTSSBzdWJzeXN0ZW0gaW5pdGlhbGl6ZWQNClsgICAgMC43NjMw
MTJdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBkcml2ZXIg
dXNiZnMNClsgICAgMC43NjgzOTddIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3
IGludGVyZmFjZSBkcml2ZXIgaHViDQpbICAgIDAuNzczNzkwXSB1c2Jjb3Jl
OiByZWdpc3RlcmVkIG5ldyBkZXZpY2UgZHJpdmVyIHVzYg0KWyAgICAwLjc3
OTEzOF0gdXNiX3BoeV9nZW5lcmljIHBoeTogcGh5IHN1cHBseSB2Y2Mgbm90
IGZvdW5kLCB1c2luZyBkdW1teSByZWd1bGF0b3INClsgICAgMC43ODY3NzVd
IHBwc19jb3JlOiBMaW51eFBQUyBBUEkgdmVyLiAxIHJlZ2lzdGVyZWQNClsg
ICAgMC43OTE1NzldIHBwc19jb3JlOiBTb2Z0d2FyZSB2ZXIuIDUuMy42IC0g
Q29weXJpZ2h0IDIwMDUtMjAwNyBSb2RvbGZvIEdpb21ldHRpIDxnaW9tZXR0
aUBsaW51eC5pdD4NClsgICAgMC44MDA4OTNdIFBUUCBjbG9jayBzdXBwb3J0
IHJlZ2lzdGVyZWQNClsgICAgMC44MDUwNTRdIEVEQUMgTUM6IFZlcjogMy4w
LjANClsgICAgMC44MDk0MjhdIE5ldExhYmVsOiBJbml0aWFsaXppbmcNClsg
ICAgMC44MTIyNThdIE5ldExhYmVsOiAgZG9tYWluIGhhc2ggc2l6ZSA9IDEy
OA0KWyAgICAwLjgxNjc1OF0gTmV0TGFiZWw6ICBwcm90b2NvbHMgPSBVTkxB
QkVMRUQgQ0lQU092NCBDQUxJUFNPDQpbICAgIDAuODIyNjEzXSBOZXRMYWJl
bDogIHVubGFiZWxlZCB0cmFmZmljIGFsbG93ZWQgYnkgZGVmYXVsdA0KWyAg
ICAwLjgyODgyOV0gY2xvY2tzb3VyY2U6IFN3aXRjaGVkIHRvIGNsb2Nrc291
cmNlIGFyY2hfc3lzX2NvdW50ZXINClsgICAgMC44MzQ2ODVdIFZGUzogRGlz
ayBxdW90YXMgZHF1b3RfNi42LjANClsgICAgMC44Mzg2NDFdIFZGUzogRHF1
b3QtY2FjaGUgaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyIDAsIDQw
OTYgYnl0ZXMpDQpbICAgIDAuODQ1NzUxXSBwbnA6IFBuUCBBQ1BJOiBkaXNh
YmxlZA0KWyAgICAwLjg1NjI1Ml0gTkVUOiBSZWdpc3RlcmVkIHByb3RvY29s
IGZhbWlseSAyDQpbICAgIDAuODYwNTYwXSB0Y3BfbGlzdGVuX3BvcnRhZGRy
X2hhc2ggaGFzaCB0YWJsZSBlbnRyaWVzOiA1MTIgKG9yZGVyOiAxLCA4MTky
IGJ5dGVzLCBsaW5lYXIpDQpbICAgIDAuODY4NTgzXSBUQ1AgZXN0YWJsaXNo
ZWQgaGFzaCB0YWJsZSBlbnRyaWVzOiA4MTkyIChvcmRlcjogNCwgNjU1MzYg
Ynl0ZXMsIGxpbmVhcikNClsgICAgMC44NzY1NDJdIFRDUCBiaW5kIGhhc2gg
dGFibGUgZW50cmllczogODE5MiAob3JkZXI6IDUsIDEzMTA3MiBieXRlcywg
bGluZWFyKQ0KWyAgICAwLjg4Mzk2NV0gVENQOiBIYXNoIHRhYmxlcyBjb25m
aWd1cmVkIChlc3RhYmxpc2hlZCA4MTkyIGJpbmQgODE5MikNClsgICAgMC44
OTA1MjhdIFVEUCBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IDIs
IDE2Mzg0IGJ5dGVzLCBsaW5lYXIpDQpbICAgIDAuODk3MTE3XSBVRFAtTGl0
ZSBoYXNoIHRhYmxlIGVudHJpZXM6IDUxMiAob3JkZXI6IDIsIDE2Mzg0IGJ5
dGVzLCBsaW5lYXIpDQpbICAgIDAuOTA0NTAxXSBORVQ6IFJlZ2lzdGVyZWQg
cHJvdG9jb2wgZmFtaWx5IDENClsgICAgMC45MDg4MDhdIFBDSTogQ0xTIDAg
Ynl0ZXMsIGRlZmF1bHQgNjQNClsgICAgMC45MTMzNjBdIGt2bSBbMV06IEhZ
UCBtb2RlIG5vdCBhdmFpbGFibGUNClsgICAgMC45MjQ2NDFdIEluaXRpYWxp
c2Ugc3lzdGVtIHRydXN0ZWQga2V5cmluZ3MNClsgICAgMC45Mjg2NjNdIHdv
cmtpbmdzZXQ6IHRpbWVzdGFtcF9iaXRzPTQwIG1heF9vcmRlcj0xOCBidWNr
ZXRfb3JkZXI9MA0KWyAgICAwLjk0MjAzN10gemJ1ZDogbG9hZGVkDQpbICAg
IDAuOTQ1NTkyXSBzcXVhc2hmczogdmVyc2lvbiA0LjAgKDIwMDkvMDEvMzEp
IFBoaWxsaXAgTG91Z2hlcg0KWyAgICAwLjk1MTQ5Nl0gOXA6IEluc3RhbGxp
bmcgdjlmcyA5cDIwMDAgZmlsZSBzeXN0ZW0gc3VwcG9ydA0KWyAgICAwLjk3
OTQ5Ml0gS2V5IHR5cGUgYXN5bW1ldHJpYyByZWdpc3RlcmVkDQpbICAgIDAu
OTgzMDM3XSBBc3ltbWV0cmljIGtleSBwYXJzZXIgJ3g1MDknIHJlZ2lzdGVy
ZWQNClsgICAgMC45ODgwNThdIEJsb2NrIGxheWVyIFNDU0kgZ2VuZXJpYyAo
YnNnKSBkcml2ZXIgdmVyc2lvbiAwLjQgbG9hZGVkIChtYWpvciAyNDYpDQpb
ICAgIDAuOTk1NzM3XSBpbyBzY2hlZHVsZXIgbXEtZGVhZGxpbmUgcmVnaXN0
ZXJlZA0KWyAgICAxLjAwMDI2MF0gaW8gc2NoZWR1bGVyIGt5YmVyIHJlZ2lz
dGVyZWQNClsgICAgMS4wMDQ1MzBdIGlvIHNjaGVkdWxlciBiZnEgcmVnaXN0
ZXJlZA0KWyAgICAxLjAxMzIyMl0gPT09MSBiZWZvcmUNClsgICAgMS4wMTUw
ODZdID09PT0gbWlkZGxlDQpbICAgIDEuMDE3NjMxXSA9PT09IGFmdGVyDQpb
ICAgIDEuMDIwMTIwXSA9PT09IGRvbmUNClsgICAgMS4wMjI1MjldID09PTEg
YmVmb3JlDQpbICAgIDEuMDI1MDA3XSA9PT09IG1pZGRsZQ0KWyAgICAxLjAy
NzU1Ml0gPT09PSBhZnRlcg0KWyAgICAxLjAzMDAzNV0gPT09PSBkb25lDQpb
ICAgIDEuMDMyNDQ0XSA9PT0xIGJlZm9yZQ0KWyAgICAxLjAzNDkyOF0gPT09
PSBtaWRkbGUNClsgICAgMS4wMzc0NzRdID09PT0gYWZ0ZXINClsgICAgMS4w
Mzk5NThdID09PT0gZG9uZQ0KWyAgICAxLjA0MjM1OF0gPT09MSBiZWZvcmUN
ClsgICAgMS4wNDQ4NTBdID09PT0gbWlkZGxlDQpbICAgIDEuMDQ3Mzk2XSA9
PT09IGFmdGVyDQpbICAgIDEuMDQ5ODY5XSA9PT09IGRvbmUNClsgICAgMS4w
NTIyNjBdID09PTEgYmVmb3JlDQpbICAgIDEuMDU0NzcyXSA9PT09IG1pZGRs
ZQ0KWyAgICAxLjA1NzMxOF0gPT09PSBhZnRlcg0KWyAgICAxLjA1OTc5M10g
PT09PSBkb25lDQpbICAgIDEuMDYyMTgxXSA9PT0xIGJlZm9yZQ0KWyAgICAx
LjA2NDY5NF0gPT09PSBtaWRkbGUNClsgICAgMS4wNjcyNDBdID09PT0gYWZ0
ZXINClsgICAgMS4wNjk3MTVdID09PT0gZG9uZQ0KWyAgICAxLjA3MjEwNF0g
PT09MSBiZWZvcmUNClsgICAgMS4wNzQ2MTZdID09PT0gbWlkZGxlDQpbICAg
IDEuMDc3MTYyXSA9PT09IGFmdGVyDQpbICAgIDEuMDc5NjM2XSA9PT09IGRv
bmUNClsgICAgMS4wODIwMjVdID09PTEgYmVmb3JlDQpbICAgIDEuMDg0NTM4
XSA9PT09IG1pZGRsZQ0KWyAgICAxLjA4NzA4M10gPT09PSBhZnRlcg0KWyAg
ICAxLjA4OTU1OF0gPT09PSBkb25lDQpbICAgIDEuMDkyNDk2XSBzaHBjaHA6
IFN0YW5kYXJkIEhvdCBQbHVnIFBDSSBDb250cm9sbGVyIERyaXZlciB2ZXJz
aW9uOiAwLjQNClsgICAgMS4xMDAwNDhdIGJyY20tcGNpZSBmZDUwMDAwMC5w
Y2llOiBob3N0IGJyaWRnZSAvc2NiL3BjaWVAN2Q1MDAwMDAgcmFuZ2VzOg0K
WyAgICAxLjEwNjQ4N10gYnJjbS1wY2llIGZkNTAwMDAwLnBjaWU6ICAgTm8g
YnVzIHJhbmdlIGZvdW5kIGZvciAvc2NiL3BjaWVAN2Q1MDAwMDAsIHVzaW5n
IFtidXMgMDAtZmZdDQpbICAgIDEuMTE1NzM1XSBicmNtLXBjaWUgZmQ1MDAw
MDAucGNpZTogICAgICBNRU0gMHgwNjAwMDAwMDAwLi4weDA2MDNmZmZmZmYg
LT4gMHgwMGY4MDAwMDAwDQpbICAgIDEuMTIzOTg2XSBicmNtLXBjaWUgZmQ1
MDAwMDAucGNpZTogICBJQiBNRU0gMHgwMDAwMDAwMDAwLi4weDAwYmZmZmZm
ZmYgLT4gMHgwMDAwMDAwMDAwDQpbICAgIDEuMTkwOTQxXSBicmNtLXBjaWUg
ZmQ1MDAwMDAucGNpZTogbGluayB1cCwgNSBHVC9zIHgxIChTU0MpDQpbICAg
IDEuMTk2MzU3XSBicmNtLXBjaWUgZmQ1MDAwMDAucGNpZTogUENJIGhvc3Qg
YnJpZGdlIHRvIGJ1cyAwMDAwOjAwDQpbICAgIDEuMjAyNTY5XSBwY2lfYnVz
IDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNlIFtidXMgMDAtZmZdDQpbICAg
IDEuMjA4MTcyXSBwY2lfYnVzIDAwMDA6MDA6IHJvb3QgYnVzIHJlc291cmNl
IFttZW0gMHg2MDAwMDAwMDAtMHg2MDNmZmZmZmZdIChidXMgYWRkcmVzcyBb
MHhmODAwMDAwMC0weGZiZmZmZmZmXSkNClsgICAgMS4yMTg3NDJdIHBjaSAw
MDAwOjAwOjAwLjA6IFsxNGU0OjI3MTFdIHR5cGUgMDEgY2xhc3MgMHgwNjA0
MDANClsgICAgMS4yMjQ5NDZdIHBjaSAwMDAwOjAwOjAwLjA6IFBNRSMgc3Vw
cG9ydGVkIGZyb20gRDAgRDNob3QNCihYRU4pIHBoeXNkZXYuYzoxNjpkMHYw
IFBIWVNERVZPUCBjbWQ9MjU6IG5vdCBpbXBsZW1lbnRlZA0KKFhFTikgcGh5
c2Rldi5jOjE2OmQwdjAgUEhZU0RFVk9QIGNtZD0xNTogbm90IGltcGxlbWVu
dGVkDQpbICAgIDEuMjQwODYyXSBwY2kgMDAwMDowMDowMC4wOiBGYWlsZWQg
dG8gYWRkIC0gcGFzc3Rocm91Z2ggb3IgTVNJL01TSS1YIG1pZ2h0IGZhaWwh
DQpbICAgIDEuMjUxNjczXSBwY2kgMDAwMDowMDowMC4wOiBicmlkZ2UgY29u
ZmlndXJhdGlvbiBpbnZhbGlkIChbYnVzIDAwLTAwXSksIHJlY29uZmlndXJp
bmcNClsgICAgMS4yNTkyOTNdIHBjaSAwMDAwOjAxOjAwLjA6IFsxMTA2OjM0
ODNdIHR5cGUgMDAgY2xhc3MgMHgwYzAzMzANClsgICAgMS4yNjUzMzBdIHBj
aSAwMDAwOjAxOjAwLjA6IHJlZyAweDEwOiBbbWVtIDB4MDAwMDAwMDAtMHgw
MDAwMGZmZiA2NGJpdF0NClsgICAgMS4yNzIzNTFdIHBjaSAwMDAwOjAxOjAw
LjA6IFBNRSMgc3VwcG9ydGVkIGZyb20gRDAgRDNjb2xkDQooWEVOKSBwaHlz
ZGV2LmM6MTY6ZDB2MCBQSFlTREVWT1AgY21kPTE1OiBub3QgaW1wbGVtZW50
ZWQNClsgICAgMS4yODMxNTddIHBjaSAwMDAwOjAxOjAwLjA6IEZhaWxlZCB0
byBhZGQgLSBwYXNzdGhyb3VnaCBvciBNU0kvTVNJLVggbWlnaHQgZmFpbCEN
ClsgICAgMS4yOTM4OTZdIHBjaV9idXMgMDAwMDowMTogYnVzbl9yZXM6IFti
dXMgMDEtZmZdIGVuZCBpcyB1cGRhdGVkIHRvIDAxDQpbICAgIDEuMzAwMDUw
XSBwY2kgMDAwMDowMDowMC4wOiBCQVIgMTQ6IGFzc2lnbmVkIFttZW0gMHg2
MDAwMDAwMDAtMHg2MDAwZmZmZmZdDQpbICAgIDEuMzA3MTgxXSBwY2kgMDAw
MDowMTowMC4wOiBCQVIgMDogYXNzaWduZWQgW21lbSAweDYwMDAwMDAwMC0w
eDYwMDAwMGZmZiA2NGJpdF0NClsgICAgMS4zMTQ4MzhdIHBjaSAwMDAwOjAw
OjAwLjA6IFBDSSBicmlkZ2UgdG8gW2J1cyAwMV0NClsgICAgMS4zMTk5MThd
IHBjaSAwMDAwOjAwOjAwLjA6ICAgYnJpZGdlIHdpbmRvdyBbbWVtIDB4NjAw
MDAwMDAwLTB4NjAwMGZmZmZmXQ0KWyAgICAxLjMyNzE2OV0gcGNpZXBvcnQg
MDAwMDowMDowMC4wOiBlbmFibGluZyBkZXZpY2UgKDAwMDAgLT4gMDAwMikN
ClsgICAgMS4zMzM0MTZdIHBjaWVwb3J0IDAwMDA6MDA6MDAuMDogUE1FOiBT
aWduYWxpbmcgd2l0aCBJUlEgMzMNClsgICAgMS4zMzk0MThdIHBjaWVwb3J0
IDAwMDA6MDA6MDAuMDogQUVSOiBlbmFibGVkIHdpdGggSVJRIDMzDQpbICAg
IDEuMzQ0OTk5XSBwY2kgMDAwMDowMTowMC4wOiBlbmFibGluZyBkZXZpY2Ug
KDAwMDAgLT4gMDAwMikNClsgICAgMS4zNTA3NDFdID09PTEgYmVmb3JlDQpb
ICAgIDEuMzUzMTc5XSA9PT09IG1pZGRsZQ0KWyAgICAxLjM1NTcyNF0gPT09
PSBhZnRlcg0KWyAgICAxLjM1ODE4M10gPT09PSBkb25lDQpbICAgIDEuMzYw
NjIwXSBwY2kgMDAwMDowMTowMC4wOiBxdWlya191c2JfZWFybHlfaGFuZG9m
ZisweDAvMHg4ZjAgdG9vayAxNTI0NSB1c2Vjcw0KWyAgICAxLjM3MTA2MF0g
PT09MSBiZWZvcmUNClsgICAgMS4zNzI5MjFdID09PT0gbWlkZGxlDQpbICAg
IDEuMzc1NDY3XSA9PT09IGFmdGVyDQpbICAgIDEuMzc3OTI2XSA9PT09IGRv
bmUNClsgICAgMS4zODA0MTldID09PTEgYmVmb3JlDQpbICAgIDEuMzgyODQz
XSA9PT09IG1pZGRsZQ0KWyAgICAxLjM4NTM4OV0gPT09PSBhZnRlcg0KWyAg
ICAxLjM4Nzg0N10gPT09PSBkb25lDQpbICAgIDEuMzkwMzA2XSA9PT0xIGJl
Zm9yZQ0KWyAgICAxLjM5Mjc2NV0gPT09PSBtaWRkbGUNClsgICAgMS4zOTUz
MTBdID09PT0gYWZ0ZXINClsgICAgMS4zOTc3NjldID09PT0gZG9uZQ0KWyAg
ICAxLjQwMDIxNl0gPT09MSBiZWZvcmUNClsgICAgMS40MDI2ODZdID09PT0g
bWlkZGxlDQpbICAgIDEuNDA1MjMyXSA9PT09IGFmdGVyDQpbICAgIDEuNDA3
NjkxXSA9PT09IGRvbmUNClsgICAgMS40MTAxNDBdID09PTEgYmVmb3JlDQpb
ICAgIDEuNDEyNjA4XSA9PT09IG1pZGRsZQ0KWyAgICAxLjQxNTE1NF0gPT09
PSBhZnRlcg0KWyAgICAxLjQxNzYxM10gPT09PSBkb25lDQpbICAgIDEuNDIw
MDE0XSByYXNwYmVycnlwaS1jbGsgZmlybXdhcmUtY2xvY2tzOiBDUFUgZnJl
cXVlbmN5IHJhbmdlOiBtaW4gNjAwMDAwMDAwLCBtYXggMTUwMDAwMDAwMA0K
WyAgICAxLjQyODkyNV0gPT09MSBiZWZvcmUNClsgICAgMS40MzE0MTBdID09
PT0gbWlkZGxlDQpbICAgIDEuNDMzOTQ0XSA9PT09IGFmdGVyDQpbICAgIDEu
NDM2NDAzXSA9PT09IGRvbmUNClsgICAgMS40Mzg4NTRdID09PTEgYmVmb3Jl
DQpbICAgIDEuNDQxMzMyXSA9PT09IG1pZGRsZQ0KWyAgICAxLjQ0Mzg2Nl0g
PT09PSBhZnRlcg0KWyAgICAxLjQ0NjMyNV0gPT09PSBkb25lDQpbICAgIDEu
NDQ4NzQ2XSA9PT0xIGJlZm9yZQ0KWyAgICAxLjQ1MTI2OF0gPT09PSBtaWRk
bGUNClsgICAgMS40NTM3ODhdID09PT0gYWZ0ZXINClsgICAgMS40NTYyNDZd
ID09PT0gZG9uZQ0KWyAgICAxLjQ1ODY2MV0gPT09MSBiZWZvcmUNClsgICAg
MS40NjExODhdID09PT0gbWlkZGxlDQpbICAgIDEuNDYzNzEwXSA9PT09IGFm
dGVyDQpbICAgIDEuNDY2MTY4XSA9PT09IGRvbmUNClsgICAgMS40Njg1ODJd
ID09PTEgYmVmb3JlDQpbICAgIDEuNDcxMTEwXSA9PT09IG1pZGRsZQ0KWyAg
ICAxLjQ3MzYzMV0gPT09PSBhZnRlcg0KWyAgICAxLjQ3NjA5MF0gPT09PSBk
b25lDQpbICAgIDEuNDc4NTAzXSA9PT0xIGJlZm9yZQ0KWyAgICAxLjQ4MTAz
Ml0gPT09PSBtaWRkbGUNClsgICAgMS40ODM1NTNdID09PT0gYWZ0ZXINClsg
ICAgMS40ODYwMTJdID09PT0gZG9uZQ0KWyAgICAxLjQ4ODQzMF0gPT09MSBi
ZWZvcmUNClsgICAgMS40OTA5NTRdID09PT0gbWlkZGxlDQpbICAgIDEuNDkz
NDc1XSA9PT09IGFmdGVyDQpbICAgIDEuNDk1OTM0XSA9PT09IGRvbmUNClsg
ICAgMS40OTgzNTVdID09PTEgYmVmb3JlDQpbICAgIDEuNTAwODc2XSA9PT09
IG1pZGRsZQ0KWyAgICAxLjUwMzM5N10gPT09PSBhZnRlcg0KWyAgICAxLjUw
NTg1Nl0gPT09PSBkb25lDQpbICAgIDEuNTA4MjcwXSA9PT0xIGJlZm9yZQ0K
WyAgICAxLjUxMDc5N10gPT09PSBtaWRkbGUNClsgICAgMS41MTMzMTldID09
PT0gYWZ0ZXINClsgICAgMS41MTU3NzddID09PT0gZG9uZQ0KWyAgICAxLjUx
ODE5Ml0gPT09MSBiZWZvcmUNClsgICAgMS41MjA3MDZdID09PT0gbWlkZGxl
DQpbICAgIDEuNTIzMjQxXSA9PT09IGFmdGVyDQpbICAgIDEuNTI1Njk5XSA9
PT09IGRvbmUNClsgICAgMS41MjgxMTldID09PTEgYmVmb3JlDQpbICAgIDEu
NTMwNjI5XSA9PT09IG1pZGRsZQ0KWyAgICAxLjUzMzE2M10gPT09PSBhZnRl
cg0KWyAgICAxLjUzNTYyMV0gPT09PSBkb25lDQpbICAgIDEuNTM4MDM1XSA9
PT0xIGJlZm9yZQ0KWyAgICAxLjU0MDU1MF0gPT09PSBtaWRkbGUNClsgICAg
MS41NDMwODVdID09PT0gYWZ0ZXINClsgICAgMS41NDU1NDNdID09PT0gZG9u
ZQ0KWyAgICAxLjU1MzczOF0gPT09MSBiZWZvcmUNClsgICAgMS41NTU1OTld
ID09PT0gbWlkZGxlDQpbICAgIDEuNTU4MTQ1XSA9PT09IGFmdGVyDQpbICAg
IDEuNTYwNjE3XSA9PT09IGRvbmUNClsgICAgMS41NjU3ODJdIHhlbjp4ZW5f
ZXZ0Y2huOiBFdmVudC1jaGFubmVsIGRldmljZSBpbnN0YWxsZWQNClsgICAg
MS41NzEwNTVdIEluaXRpYWxpc2luZyBYZW4gcHZjYWxscyBmcm9udGVuZCBk
cml2ZXINClsgICAgMS41Nzc1NDBdIFNlcmlhbDogODI1MC8xNjU1MCBkcml2
ZXIsIDMyIHBvcnRzLCBJUlEgc2hhcmluZyBlbmFibGVkDQpbICAgIDEuNTkx
MjQxXSBiY20yODM1LWF1eC11YXJ0IGZlMjE1MDQwLnNlcmlhbDogdGhlcmUg
aXMgbm90IHZhbGlkIG1hcHMgZm9yIHN0YXRlIGRlZmF1bHQNClsgICAgMS41
OTg4NTddIE9GOiAvc29jL3NlcmlhbEA3ZTIxNTA0MDogY291bGQgbm90IGZp
bmQgcGhhbmRsZQ0KWyAgICAxLjYwNDU1MF0gYmNtMjgzNS1hdXgtdWFydCBm
ZTIxNTA0MC5zZXJpYWw6IGNvdWxkIG5vdCBnZXQgY2xrOiAtMg0KWyAgICAx
LjYxMDk5NF0gYmNtMjgzNS1hdXgtdWFydDogcHJvYmUgb2YgZmUyMTUwNDAu
c2VyaWFsIGZhaWxlZCB3aXRoIGVycm9yIC0yDQpbICAgIDEuNjE4NTU5XSBT
ZXJpYWw6IEFNQkEgZHJpdmVyDQpbICAgIDEuNjIxODg1XSBtc21fc2VyaWFs
OiBkcml2ZXIgaW5pdGlhbGl6ZWQNClsgICAgMS42MjY3NDZdIGNhY2hlaW5m
bzogVW5hYmxlIHRvIGRldGVjdCBjYWNoZSBoaWVyYXJjaHkgZm9yIENQVSAw
DQpbICAgIDEuNjQxOTEyXSBicmQ6IG1vZHVsZSBsb2FkZWQNClsgICAgMS42
NTE5ODhdIGxvb3A6IG1vZHVsZSBsb2FkZWQNClsgICAgMS42NzA1MTldIElu
dmFsaWQgbWF4X3F1ZXVlcyAoNCksIHdpbGwgdXNlIGRlZmF1bHQgbWF4OiAx
Lg0KWyAgICAxLjY4NzcyMF0gZHJiZDogaW5pdGlhbGl6ZWQuIFZlcnNpb246
IDguNC4xMSAoYXBpOjEvcHJvdG86ODYtMTAxKQ0KWyAgICAxLjY5MzQ5NF0g
ZHJiZDogYnVpbHQtaW4NClsgICAgMS42OTYyNjVdIGRyYmQ6IHJlZ2lzdGVy
ZWQgYXMgYmxvY2sgZGV2aWNlIG1ham9yIDE0Nw0KWyAgICAxLjcwMzU3OV0g
d2lyZWd1YXJkOiBhbGxvd2VkaXBzIHNlbGYtdGVzdHM6IHBhc3MNClsgICAg
MS43MTAzNTBdIHdpcmVndWFyZDogbm9uY2UgY291bnRlciBzZWxmLXRlc3Rz
OiBwYXNzDQpbICAgIDIuMTMwNTkxXSB3aXJlZ3VhcmQ6IHJhdGVsaW1pdGVy
IHNlbGYtdGVzdHM6IHBhc3MNClsgICAgMi4xMzQ5OTBdIHdpcmVndWFyZDog
V2lyZUd1YXJkIDEuMC4wIGxvYWRlZC4gU2VlIHd3dy53aXJlZ3VhcmQuY29t
IGZvciBpbmZvcm1hdGlvbi4NClsgICAgMi4xNDI5NDRdIHdpcmVndWFyZDog
Q29weXJpZ2h0IChDKSAyMDE1LTIwMTkgSmFzb24gQS4gRG9uZW5mZWxkIDxK
YXNvbkB6eDJjNC5jb20+LiBBbGwgUmlnaHRzIFJlc2VydmVkLg0KWyAgICAy
LjE1NDM3Ml0gbGlicGh5OiBGaXhlZCBNRElPIEJ1czogcHJvYmVkDQpbICAg
IDIuMTU4NDM3XSBiY21nZW5ldCBmZDU4MDAwMC5ldGhlcm5ldDogdXNpbmcg
cmFuZG9tIEV0aGVybmV0IE1BQw0KWyAgICAyLjE2NDE4NV0gYmNtZ2VuZXQg
ZmQ1ODAwMDAuZXRoZXJuZXQ6IGZhaWxlZCB0byBnZXQgZW5ldCBjbG9jaw0K
WyAgICAyLjE3MDMwNF0gYmNtZ2VuZXQgZmQ1ODAwMDAuZXRoZXJuZXQ6IEdF
TkVUIDUuMCBFUEhZOiAweDAwMDANClsgICAgMi4xNzYyNjZdIGJjbWdlbmV0
IGZkNTgwMDAwLmV0aGVybmV0OiBmYWlsZWQgdG8gZ2V0IGVuZXQtd29sIGNs
b2NrDQpbICAgIDIuMTgyNzc0XSBiY21nZW5ldCBmZDU4MDAwMC5ldGhlcm5l
dDogZmFpbGVkIHRvIGdldCBlbmV0LWVlZSBjbG9jaw0KWyAgICAyLjIwODg3
OF0gbGlicGh5OiBiY21nZW5ldCBNSUkgYnVzOiBwcm9iZWQNClsgICAgMi4y
ODg5MTVdIHVuaW1hYy1tZGlvIHVuaW1hYy1tZGlvLi0xOTogQnJvYWRjb20g
VW5pTUFDIE1ESU8gYnVzDQpbICAgIDIuMjk1MjI1XSB4ZW5fbmV0ZnJvbnQ6
IEluaXRpYWxpc2luZyBYZW4gdmlydHVhbCBldGhlcm5ldCBkcml2ZXINClsg
ICAgMi4zMDExNzZdIHhoY2lfaGNkIDAwMDA6MDE6MDAuMDogeEhDSSBIb3N0
IENvbnRyb2xsZXINClsgICAgMi4zMDYyNTFdIHhoY2lfaGNkIDAwMDA6MDE6
MDAuMDogbmV3IFVTQiBidXMgcmVnaXN0ZXJlZCwgYXNzaWduZWQgYnVzIG51
bWJlciAxDQpbICAgIDIuMzE0NDg3XSB4aGNpX2hjZCAwMDAwOjAxOjAwLjA6
IGhjYyBwYXJhbXMgMHgwMDI4NDFlYiBoY2kgdmVyc2lvbiAweDEwMCBxdWly
a3MgMHgwMDAwMDAxMDAwMDAwODkwDQpbICAgIDIuMzIzNjI5XSB1c2IgdXNi
MTogTmV3IFVTQiBkZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJv
ZHVjdD0wMDAyLCBiY2REZXZpY2U9IDUuMDYNClsgICAgMi4zMzE1NTldIHVz
YiB1c2IxOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVj
dD0yLCBTZXJpYWxOdW1iZXI9MQ0KWyAgICAyLjMzODkzM10gdXNiIHVzYjE6
IFByb2R1Y3Q6IHhIQ0kgSG9zdCBDb250cm9sbGVyDQpbICAgIDIuMzQzOTIw
XSB1c2IgdXNiMTogTWFudWZhY3R1cmVyOiBMaW51eCA1LjYuMTAtZGVmYXVs
dCsgeGhjaS1oY2QNClsgICAgMi4zNTAyNTVdIHVzYiB1c2IxOiBTZXJpYWxO
dW1iZXI6IDAwMDA6MDE6MDAuMA0KWyAgICAyLjM1NTQ1Nl0gaHViIDEtMDox
LjA6IFVTQiBodWIgZm91bmQNClsgICAgMi4zNTg5MjFdIGh1YiAxLTA6MS4w
OiAxIHBvcnQgZGV0ZWN0ZWQNClsgICAgMi4zNjMyNzhdIHhoY2lfaGNkIDAw
MDA6MDE6MDAuMDogeEhDSSBIb3N0IENvbnRyb2xsZXINClsgICAgMi4zNjgy
NTFdIHhoY2lfaGNkIDAwMDA6MDE6MDAuMDogbmV3IFVTQiBidXMgcmVnaXN0
ZXJlZCwgYXNzaWduZWQgYnVzIG51bWJlciAyDQpbICAgIDIuMzc1ODUyXSB4
aGNpX2hjZCAwMDAwOjAxOjAwLjA6IEhvc3Qgc3VwcG9ydHMgVVNCIDMuMCBT
dXBlclNwZWVkDQpbICAgIDIuMzgyNDMxXSB1c2IgdXNiMjogTmV3IFVTQiBk
ZXZpY2UgZm91bmQsIGlkVmVuZG9yPTFkNmIsIGlkUHJvZHVjdD0wMDAzLCBi
Y2REZXZpY2U9IDUuMDYNClsgICAgMi4zOTA2NTBdIHVzYiB1c2IyOiBOZXcg
VVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MywgUHJvZHVjdD0yLCBTZXJpYWxO
dW1iZXI9MQ0KWyAgICAyLjM5ODAwOF0gdXNiIHVzYjI6IFByb2R1Y3Q6IHhI
Q0kgSG9zdCBDb250cm9sbGVyDQpbICAgIDIuNDAzMDI0XSB1c2IgdXNiMjog
TWFudWZhY3R1cmVyOiBMaW51eCA1LjYuMTAtZGVmYXVsdCsgeGhjaS1oY2QN
ClsgICAgMi40MDkzNTBdIHVzYiB1c2IyOiBTZXJpYWxOdW1iZXI6IDAwMDA6
MDE6MDAuMA0KWyAgICAyLjQxNDQ4OF0gaHViIDItMDoxLjA6IFVTQiBodWIg
Zm91bmQNClsgICAgMi40MTc5NzBdIGh1YiAyLTA6MS4wOiA0IHBvcnRzIGRl
dGVjdGVkDQpbICAgIDIuNDIyNzU2XSB1c2Jjb3JlOiByZWdpc3RlcmVkIG5l
dyBpbnRlcmZhY2UgZHJpdmVyIHVzYi1zdG9yYWdlDQpbICAgIDIuNDI4NDQ1
XSBtb3VzZWRldjogUFMvMiBtb3VzZSBkZXZpY2UgY29tbW9uIGZvciBhbGwg
bWljZQ0KWyAgICAyLjQzNjY0MV0gYmNtMjgzNS13ZHQgYmNtMjgzNS13ZHQ6
IEJyb2FkY29tIEJDTTI4MzUgd2F0Y2hkb2cgdGltZXINClsgICAgMi40NDMx
MjFdIHhlbl93ZHQgeGVuX3dkdDogaW5pdGlhbGl6ZWQgKHRpbWVvdXQ9NjBz
LCBub3dheW91dD0wKQ0KWyAgICAyLjQ0OTUwNl0gc2RoY2k6IFNlY3VyZSBE
aWdpdGFsIEhvc3QgQ29udHJvbGxlciBJbnRlcmZhY2UgZHJpdmVyDQpbICAg
IDIuNDU1MTQxXSBzZGhjaTogQ29weXJpZ2h0KGMpIFBpZXJyZSBPc3NtYW4N
ClsgICAgMi40NjAwMThdIG1tYy1iY20yODM1IGZlMzAwMDAwLm1tY25yOiBj
b3VsZCBub3QgZ2V0IGNsaywgZGVmZXJyaW5nIHByb2JlDQpbICAgIDIuNDY2
ODc3XSBFcnJvcjogRHJpdmVyICdzZGhvc3QtYmNtMjgzNScgaXMgYWxyZWFk
eSByZWdpc3RlcmVkLCBhYm9ydGluZy4uLg0KWyAgICAyLjQ3Mzk1N10gc2Ro
Y2ktcGx0Zm06IFNESENJIHBsYXRmb3JtIGFuZCBPRiBkcml2ZXIgaGVscGVy
DQpbICAgIDIuNDgwNzcwXSBsZWR0cmlnLWNwdTogcmVnaXN0ZXJlZCB0byBp
bmRpY2F0ZSBhY3Rpdml0eSBvbiBDUFVzDQpbICAgIDIuNDg2ODYyXSBoaWQ6
IHJhdyBISUQgZXZlbnRzIGRyaXZlciAoQykgSmlyaSBLb3NpbmENClsgICAg
Mi40OTE2MjVdIHVzYmNvcmU6IHJlZ2lzdGVyZWQgbmV3IGludGVyZmFjZSBk
cml2ZXIgdXNiaGlkDQpbICAgIDIuNDk3MjA2XSB1c2JoaWQ6IFVTQiBISUQg
Y29yZSBkcml2ZXINClsgICAgMi41MDQxMjZdIHh0X3RpbWU6IGtlcm5lbCB0
aW1lem9uZSBpcyAtMDAwMA0KWyAgICAyLjUwNzk0NV0gSVBWUzogUmVnaXN0
ZXJlZCBwcm90b2NvbHMgKCkNClsgICAgMi41MTIxMDZdIElQVlM6IENvbm5l
Y3Rpb24gaGFzaCB0YWJsZSBjb25maWd1cmVkIChzaXplPTQwOTYsIG1lbW9y
eT02NEtieXRlcykNClsgICAgMi41MTk3MjhdIElQVlM6IGlwdnMgbG9hZGVk
Lg0KWyAgICAyLjUyMjgxNV0gaXBpcDogSVB2NCBhbmQgTVBMUyBvdmVyIElQ
djQgdHVubmVsaW5nIGRyaXZlcg0KWyAgICAyLjUyODcyMV0gZ3JlOiBHUkUg
b3ZlciBJUHY0IGRlbXVsdGlwbGV4b3IgZHJpdmVyDQpbICAgIDIuNTMzNzg3
XSBpcHRfQ0xVU1RFUklQOiBDbHVzdGVySVAgVmVyc2lvbiAwLjggbG9hZGVk
IHN1Y2Nlc3NmdWxseQ0KWyAgICAyLjUzOTg4MV0gSW5pdGlhbGl6aW5nIFhG
Uk0gbmV0bGluayBzb2NrZXQNClsgICAgMi41NDQ3NjBdIE5FVDogUmVnaXN0
ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTANClsgICAgMi41NDk3NjFdIFNlZ21l
bnQgUm91dGluZyB3aXRoIElQdjYNClsgICAgMi41NTM1OTRdIE5FVDogUmVn
aXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTcNClsgICAgMi41NTc0OTBdIE5F
VDogUmVnaXN0ZXJlZCBwcm90b2NvbCBmYW1pbHkgMTUNClsgICAgMi41NjIy
MDVdIEJyaWRnZSBmaXJld2FsbGluZyByZWdpc3RlcmVkDQpbICAgIDIuNTY2
MjYxXSA4MDIxcTogODAyLjFRIFZMQU4gU3VwcG9ydCB2MS44DQpbICAgIDIu
NTcwNjg0XSA5cG5ldDogSW5zdGFsbGluZyA5UDIwMDAgc3VwcG9ydA0KWyAg
ICAyLjU3NDg2MF0gSW5pdGlhbGlzaW5nIFhlbiB0cmFuc3BvcnQgZm9yIDlw
ZnMNClsgICAgMi41Nzk2NDRdIEtleSB0eXBlIGRuc19yZXNvbHZlciByZWdp
c3RlcmVkDQpbICAgIDIuNTg0MjcwXSByZWdpc3RlcmVkIHRhc2tzdGF0cyB2
ZXJzaW9uIDENClsgICAgMi41ODgxMjFdIExvYWRpbmcgY29tcGlsZWQtaW4g
WC41MDkgY2VydGlmaWNhdGVzDQpbICAgIDIuNTk3MjQyXSBMb2FkZWQgWC41
MDkgY2VydCAnQnVpbGQgdGltZSBhdXRvZ2VuZXJhdGVkIGtlcm5lbCBrZXk6
IDk1NTdkN2Y5NDY2NTg4Y2IwNzIwNDE0N2Y0ODM4YjU0NzE0NDFhNDUnDQpb
ICAgIDIuNjA2ODk1XSB6c3dhcDogbG9hZGVkIHVzaW5nIHBvb2wgbHpvL3pi
dWQNClsgICAgMi42MTE0NzldIEtleSB0eXBlIC5fZnNjcnlwdCByZWdpc3Rl
cmVkDQpbICAgIDIuNjE1MjYxXSBLZXkgdHlwZSAuZnNjcnlwdCByZWdpc3Rl
cmVkDQpbICAgIDIuNjE5NDA5XSBLZXkgdHlwZSBmc2NyeXB0LXByb3Zpc2lv
bmluZyByZWdpc3RlcmVkDQpbICAgIDIuNjI3NjQ1XSA9PT0xIGJlZm9yZQ0K
WyAgICAyLjYyOTU1OF0gPT09PSBtaWRkbGUNClsgICAgMi42MzIwNThdID09
PT0gYWZ0ZXINClsgICAgMi42MzQ1MTddID09PT0gZG9uZQ0KWyAgICAyLjY0
MDM0MV0gdWFydC1wbDAxMSBmZTIwMTAwMC5zZXJpYWw6IHRoZXJlIGlzIG5v
dCB2YWxpZCBtYXBzIGZvciBzdGF0ZSBkZWZhdWx0DQpbICAgIDIuNjQ3NDA2
XSB1YXJ0LXBsMDExIGZlMjAxMDAwLnNlcmlhbDogY3RzX2V2ZW50X3dvcmth
cm91bmQgZW5hYmxlZA0KWyAgICAyLjY1MzkxMl0gZmUyMDEwMDAuc2VyaWFs
OiB0dHlBTUEwIGF0IE1NSU8gMHhmZTIwMTAwMCAoaXJxID0gMTIsIGJhc2Vf
YmF1ZCA9IDApIGlzIGEgUEwwMTEgcmV2Mg0KWyAgICAyLjY2MzU4Nl0gPT09
MSBiZWZvcmUNClsgICAgMi42NjU0NjFdID09PT0gbWlkZGxlDQpbICAgIDIu
NjY3OTkzXSA9PT09IGFmdGVyDQpbICAgIDIuNjcwNDc0XSA9PT09IGRvbmUN
ClsgICAgMi42NzI5MzddID09PTEgYmVmb3JlDQpbICAgIDIuNjc1MzgxXSA9
PT09IG1pZGRsZQ0KWyAgICAyLjY3NzkxNF0gPT09PSBhZnRlcg0KWyAgICAy
LjY4MDM4N10gPT09PSBkb25lDQpbICAgIDIuNjg0MDE5XSA9PT0xIGJlZm9y
ZQ0KWyAgICAyLjY4NTg3OF0gPT09PSBtaWRkbGUNClsgICAgMi42ODg0MjRd
ID09PT0gYWZ0ZXINClsgICAgMi42OTA4OTddID09PT0gZG9uZQ0KWyAgICAy
LjY5MzYzOV0gPT09MSBiZWZvcmUNClsgICAgMi42OTU4MDBdID09PT0gbWlk
ZGxlDQpbICAgIDIuNjk4MzQ2XSA9PT09IGFmdGVyDQpbICAgIDIuNzAwODE5
XSA9PT09IGRvbmUNClsgICAgMi43MDM2MTddID09PTEgYmVmb3JlDQpbICAg
IDIuNzA1NzIyXSA9PT09IG1pZGRsZQ0KWyAgICAyLjcwODI2OF0gPT09PSBh
ZnRlcg0KWyAgICAyLjcxMDc0MV0gPT09PSBkb25lDQpbICAgIDIuNzEzMTM1
XSA9PT0xIGJlZm9yZQ0KWyAgICAyLjcxNTY0M10gPT09PSBtaWRkbGUNClsg
ICAgMi43MTgxODldID09PT0gYWZ0ZXINClsgICAgMi43MjA2NjFdID09PT0g
ZG9uZQ0KWyAgICAyLjcyNDI3Nl0gPT09MSBiZWZvcmUNClsgICAgMi43MjYx
MzRdID09PT0gbWlkZGxlDQpbICAgIDIuNzI4NjkyXSA9PT09IGFmdGVyDQpb
ICAgIDIuNzMxMTU3XSA9PT09IGRvbmUNClsgICAgMi43MzQwNTNdIGJjbTI4
MzUtcG93ZXIgYmNtMjgzNS1wb3dlcjogQnJvYWRjb20gQkNNMjgzNSBwb3dl
ciBkb21haW5zIGRyaXZlcg0KWyAgICAyLjc0MDkzMF0gdXNiIDEtMTogbmV3
IGhpZ2gtc3BlZWQgVVNCIGRldmljZSBudW1iZXIgMiB1c2luZyB4aGNpX2hj
ZA0KWyAgICAyLjc0ODQ2Nl0gbW1jLWJjbTI4MzUgZmUzMDAwMDAubW1jbnI6
IG1tY19kZWJ1ZzowIG1tY19kZWJ1ZzI6MA0KWyAgICAyLjc1Mzk1NF0gbW1j
LWJjbTI4MzUgZmUzMDAwMDAubW1jbnI6IEZvcmNpbmcgUElPIG1vZGUNClsg
ICAgMi43ODg4OTddID09PTEgYmVmb3JlDQpbICAgIDIuNzkwNzc2XSA9PT09
IG1pZGRsZQ0KWyAgICAyLjc5MzMyMF0gPT09PSBhZnRlcg0KWyAgICAyLjc5
NTc2Nl0gPT09PSBkb25lDQpbICAgIDIuODE2NTA2XSBtbWMxOiBxdWV1aW5n
IHVua25vd24gQ0lTIHR1cGxlIDB4ODAgKDIgYnl0ZXMpDQpbICAgIDIuODIz
MTA4XSBtbWMxOiBxdWV1aW5nIHVua25vd24gQ0lTIHR1cGxlIDB4ODAgKDMg
Ynl0ZXMpDQpbICAgIDIuODI5NjgzXSBtbWMxOiBxdWV1aW5nIHVua25vd24g
Q0lTIHR1cGxlIDB4ODAgKDMgYnl0ZXMpDQpbICAgIDIuODM0NjU4XSBtbWMw
OiBTREhDSSBjb250cm9sbGVyIG9uIGZlMzQwMDAwLmVtbWMyIFtmZTM0MDAw
MC5lbW1jMl0gdXNpbmcgQURNQQ0KWyAgICAyLjg0MzYwNF0gaGN0b3N5czog
dW5hYmxlIHRvIG9wZW4gcnRjIGRldmljZSAocnRjMCkNClsgICAgMi44NDg2
MTldID09PTEgYmVmb3JlDQpbICAgIDIuODUwNzA0XSA9PT09IG1pZGRsZQ0K
WyAgICAyLjg1MzE5MV0gPT09PSBhZnRlcg0KWyAgICAyLjg1NTY0OV0gPT09
PSBkb25lDQpbICAgIDIuODU4MTg4XSA9PT0xIGJlZm9yZQ0KWyAgICAyLjg2
MDYxNF0gPT09PSBtaWRkbGUNClsgICAgMi44NjMxMTNdID09PT0gYWZ0ZXIN
ClsgICAgMi44NjU1NzFdID09PT0gZG9uZQ0KWyAgICAyLjg2ODE2OF0gPT09
MSBiZWZvcmUNClsgICAgMi44NzA1MjFdID09PT0gbWlkZGxlDQpbICAgIDIu
ODczMDM0XSA9PT09IGFmdGVyDQpbICAgIDIuODc1NDkzXSA9PT09IGRvbmUN
ClsgICAgMi44Nzc5NzJdID09PTEgYmVmb3JlDQpbICAgIDIuODgwNDY5XSA9
PT09IG1pZGRsZQ0KWyAgICAyLjg4Mjk1Nl0gPT09PSBhZnRlcg0KWyAgICAy
Ljg4NTQxNV0gPT09PSBkb25lDQpbICAgIDIuODg3ODUzXSA9PT0xIGJlZm9y
ZQ0KWyAgICAyLjg5MDM2M10gPT09PSBtaWRkbGUNClsgICAgMi44OTI4Nzhd
ID09PT0gYWZ0ZXINClsgICAgMi44OTUzMzddID09PT0gZG9uZQ0KWyAgICAy
Ljg5Nzc2MF0gPT09MSBiZWZvcmUNClsgICAgMi45MDAyNzJdID09PT0gbWlk
ZGxlDQpbICAgIDIuOTAyODAwXSA9PT09IGFmdGVyDQpbICAgIDIuOTA1MjU4
XSA9PT09IGRvbmUNClsgICAgMi45MDc3OTddID09PTEgYmVmb3JlDQpbICAg
IDIuOTEwMjA4XSA9PT09IG1pZGRsZQ0KWyAgICAyLjkxMjcyMl0gPT09PSBh
ZnRlcg0KWyAgICAyLjkxNTE4MF0gPT09PSBkb25lDQpbICAgIDIuOTE3NjQx
XSA9PT0xIGJlZm9yZQ0KWyAgICAyLjkyMDE0Nl0gPT09PSBtaWRkbGUNClsg
ICAgMi45MjI2NDNdID09PT0gYWZ0ZXINClsgICAgMi45MjUxMDJdID09PT0g
ZG9uZQ0KWyAgICAyLjkyNzU3OV0gPT09MSBiZWZvcmUNClsgICAgMi45MzAw
NzVdID09PT0gbWlkZGxlDQpbICAgIDIuOTMyNTY1XSA9PT09IGFmdGVyDQpb
ICAgIDIuOTM1MDI0XSA9PT09IGRvbmUNClsgICAgMi45Mzc0NTBdIG1tYzE6
IHF1ZXVpbmcgdW5rbm93biBDSVMgdHVwbGUgMHg4MCAoNyBieXRlcykNClsg
ICAgMi45NDMxMzddID09PTEgYmVmb3JlDQpbICAgIDIuOTQ1NTc0XSA9PT09
IG1pZGRsZQ0KWyAgICAyLjk0ODEwN10gPT09PSBhZnRlcg0KWyAgICAyLjk1
MDU4MF0gPT09PSBkb25lDQpbICAgIDIuOTUzMDQwXSA9PT0xIGJlZm9yZQ0K
WyAgICAyLjk1NTUxNl0gPT09PSBtaWRkbGUNClsgICAgMi45NTgwMjldID09
PT0gYWZ0ZXINClsgICAgMi45NjA1MDJdID09PT0gZG9uZQ0KWyAgICAyLjk2
Mjk4NF0gPT09MSBiZWZvcmUNClsgICAgMi45NjU0MzddID09PT0gbWlkZGxl
DQpbICAgIDIuOTY3OTUxXSA9PT09IGFmdGVyDQpbICAgIDIuOTcwNDIzXSA9
PT09IGRvbmUNClsgICAgMi45NzI5MDRdID09PTEgYmVmb3JlDQpbICAgIDIu
OTc1MzU4XSA9PT09IG1pZGRsZQ0KWyAgICAyLjk3Nzg3Ml0gPT09PSBhZnRl
cg0KWyAgICAyLjk4MDM1M10gPT09PSBkb25lDQpbICAgIDIuOTgzMDkzXSBt
bWMxOiBxdWV1aW5nIHVua25vd24gQ0lTIHR1cGxlIDB4ODAgKDMgYnl0ZXMp
DQpbICAgIDIuOTg4NTU5XSBXYWl0aW5nIDEwIHNlYyBiZWZvcmUgbW91bnRp
bmcgcm9vdCBkZXZpY2UuLi4NClsgICAgMi45OTU0NzZdIHVzYiAxLTE6IE5l
dyBVU0IgZGV2aWNlIGZvdW5kLCBpZFZlbmRvcj0yMTA5LCBpZFByb2R1Y3Q9
MzQzMSwgYmNkRGV2aWNlPSA0LjIwDQpbICAgIDMuMDAzMTk0XSB1c2IgMS0x
OiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MCwgUHJvZHVjdD0xLCBT
ZXJpYWxOdW1iZXI9MA0KWyAgICAzLjAxMDQ0MV0gdXNiIDEtMTogUHJvZHVj
dDogVVNCMi4wIEh1Yg0KWyAgICAzLjAxNjAwN10gaHViIDEtMToxLjA6IFVT
QiBodWIgZm91bmQNClsgICAgMy4wMTk1MjRdIGh1YiAxLTE6MS4wOiA0IHBv
cnRzIGRldGVjdGVkDQpbICAgIDMuMDUyNzQ1XSA9PT0xIGJlZm9yZQ0KWyAg
ICAzLjA1NDYyMl0gPT09PSBtaWRkbGUNClsgICAgMy4wNTcxNTBdID09PT0g
YWZ0ZXINClsgICAgMy4wNTk2NDFdID09PT0gZG9uZQ0KWyAgICAzLjA2ODI5
N10gcmFuZG9tOiBmYXN0IGluaXQgZG9uZQ0KWyAgICAzLjA4NDQ2NF0gbW1j
MDogdW5yZWNvZ25pc2VkIFNDUiBzdHJ1Y3R1cmUgdmVyc2lvbiAxNQ0KWyAg
ICAzLjA4OTE3Nl0gbW1jMDogZXJyb3IgLTIyIHdoaWxzdCBpbml0aWFsaXNp
bmcgU0QgY2FyZA0KWyAgICAzLjA5NTQ4N10gPT09MSBiZWZvcmUNClsgICAg
My4wOTczNDZdID09PT0gbWlkZGxlDQpbICAgIDMuMDk5OTEwXSA9PT09IGFm
dGVyDQpbICAgIDMuMTAyMzUxXSA9PT09IGRvbmUNClsgICAgMy4xMDY0OTFd
ID09PTEgYmVmb3JlDQpbICAgIDMuMTA4MzY3XSA9PT09IG1pZGRsZQ0KWyAg
ICAzLjExMDkwOV0gPT09PSBhZnRlcg0KWyAgICAzLjExMzM1M10gPT09PSBk
b25lDQpbICAgIDMuMTE2MjExXSA9PT0xIGJlZm9yZQ0KWyAgICAzLjExODI4
OF0gPT09PSBtaWRkbGUNClsgICAgMy4xMjA4MzBdID09PT0gYWZ0ZXINClsg
ICAgMy4xMjMyNzVdID09PT0gZG9uZQ0KWyAgICAzLjE0MDUxNV0gbW1jMTog
bmV3IGhpZ2ggc3BlZWQgU0RJTyBjYXJkIGF0IGFkZHJlc3MgMDAwMQ0KWyAg
ICAzLjIxNzEwOF0gbW1jMDogdW5yZWNvZ25pc2VkIFNDUiBzdHJ1Y3R1cmUg
dmVyc2lvbiAxNQ0KWyAgICAzLjIyMTc5N10gbW1jMDogZXJyb3IgLTIyIHdo
aWxzdCBpbml0aWFsaXNpbmcgU0QgY2FyZA0KWyAgICAzLjIyODI0M10gPT09
MSBiZWZvcmUNClsgICAgMy4yMzAxMTZdID09PT0gbWlkZGxlDQpbICAgIDMu
MjMyNjQ3XSA9PT09IGFmdGVyDQpbICAgIDMuMjM1MTA2XSA9PT09IGRvbmUN
ClsgICAgMy4yMzkyMTVdID09PTEgYmVmb3JlDQpbICAgIDMuMjQxMDczXSA9
PT09IG1pZGRsZQ0KWyAgICAzLjI0MzYxOV0gPT09PSBhZnRlcg0KWyAgICAz
LjI0NjA4OV0gPT09PSBkb25lDQpbICAgIDMuMjQ4ODk4XSA9PT0xIGJlZm9y
ZQ0KWyAgICAzLjI1MTAyMF0gPT09PSBtaWRkbGUNClsgICAgMy4yNTM1NDFd
ID09PT0gYWZ0ZXINClsgICAgMy4yNTYwMTJdID09PT0gZG9uZQ0KWyAgICAz
LjM0MDUwNF0gbW1jMDogdW5yZWNvZ25pc2VkIFNDUiBzdHJ1Y3R1cmUgdmVy
c2lvbiA3DQpbICAgIDMuMzQ1MDkxXSBtbWMwOiBlcnJvciAtMjIgd2hpbHN0
IGluaXRpYWxpc2luZyBTRCBjYXJkDQpbICAgIDMuMzUxODQwXSA9PT0xIGJl
Zm9yZQ0KWyAgICAzLjM1MzcwOV0gPT09PSBtaWRkbGUNClsgICAgMy4zNTYy
NDNdID09PT0gYWZ0ZXINClsgICAgMy4zNTg3MDJdID09PT0gZG9uZQ0KWyAg
ICAzLjM2MTExMl0gdXNiIDEtMS4zOiBuZXcgbG93LXNwZWVkIFVTQiBkZXZp
Y2UgbnVtYmVyIDMgdXNpbmcgeGhjaV9oY2QNClsgICAgMy4zNjk1OTRdID09
PTEgYmVmb3JlDQpbICAgIDMuMzcxNDc1XSA9PT09IG1pZGRsZQ0KWyAgICAz
LjM3NDAwOV0gPT09PSBhZnRlcg0KWyAgICAzLjM3NjQ1Nl0gPT09PSBkb25l
DQpbICAgIDMuMzc5MzYwXSA9PT0xIGJlZm9yZQ0KWyAgICAzLjM4MTM3M10g
PT09PSBtaWRkbGUNClsgICAgMy4zODM5MTldID09PT0gYWZ0ZXINClsgICAg
My4zODYzNzhdID09PT0gZG9uZQ0KWyAgICAzLjQ4ODYxM10gbW1jMDogdW5y
ZWNvZ25pc2VkIFNDUiBzdHJ1Y3R1cmUgdmVyc2lvbiA3DQpbICAgIDMuNDkz
MjE0XSBtbWMwOiBlcnJvciAtMjIgd2hpbHN0IGluaXRpYWxpc2luZyBTRCBj
YXJkDQpbICAgIDMuNTAwODYwXSA9PT0xIGJlZm9yZQ0KWyAgICAzLjUwMjcx
OV0gPT09PSBtaWRkbGUNClsgICAgMy41MDUyNjRdID09PT0gYWZ0ZXINClsg
ICAgMy41MDc3MjNdID09PT0gZG9uZQ0KWyAgICAzLjUyMjY5MF0gdXNiIDEt
MS4zOiBOZXcgVVNCIGRldmljZSBmb3VuZCwgaWRWZW5kb3I9MDNlZSwgaWRQ
cm9kdWN0PTU2MDEsIGJjZERldmljZT0gMS4wMA0KWyAgICAzLjUzMDU0MV0g
dXNiIDEtMS4zOiBOZXcgVVNCIGRldmljZSBzdHJpbmdzOiBNZnI9MSwgUHJv
ZHVjdD0yLCBTZXJpYWxOdW1iZXI9MA0KWyAgICAzLjUzNzk5OV0gdXNiIDEt
MS4zOiBQcm9kdWN0OiBNaXRzdW1pIFVTQiBLZXlib2FyZA0KWyAgICAzLjU0
MzA5M10gdXNiIDEtMS4zOiBNYW51ZmFjdHVyZXI6IE1pdHN1bWkgRWxlY3Ry
aWMNClsgICAgMy41NTk0NjZdIGlucHV0OiBNaXRzdW1pIEVsZWN0cmljIE1p
dHN1bWkgVVNCIEtleWJvYXJkIGFzIC9kZXZpY2VzL3BsYXRmb3JtL3NjYi9m
ZDUwMDAwMC5wY2llL3BjaTAwMDA6MDAvMDAwMDowMDowMC4wLzAwMDA6MDE6
MDAuMC91c2IxLzEtMS8xLTEuMy8xLTEuMzoxLjAvMDAwMzowM0VFOjU2MDEu
MDAwMS9pbnB1dC9pbnB1dDANClsgICAgMy42MzkzMzNdIGhpZC1nZW5lcmlj
IDAwMDM6MDNFRTo1NjAxLjAwMDE6IGlucHV0LGhpZHJhdzA6IFVTQiBISUQg
djEuMDAgS2V5Ym9hcmQgW01pdHN1bWkgRWxlY3RyaWMgTWl0c3VtaSBVU0Ig
S2V5Ym9hcmRdIG9uIHVzYi0wMDAwOjAxOjAwLjAtMS4zL2lucHV0MA0KWyAg
ICA0LjU3OTI1N10gPT09MSBiZWZvcmUNClsgICAgNC41ODExMTddID09PT0g
bWlkZGxlDQpbICAgIDQuNTgzNjYyXSA9PT09IGFmdGVyDQpbICAgIDQuNTg2
MTMzXSA9PT09IGRvbmUNClsgICAgNC41ODg5NjhdID09PTEgYmVmb3JlDQpb
ICAgIDQuNTkxMDM4XSA9PT09IG1pZGRsZQ0KWyAgICA0LjU5MzU4NF0gPT09
PSBhZnRlcg0KWyAgICA0LjU5NjA1NV0gPT09PSBkb25lDQpbICAgIDQuNjcy
MjgxXSBtbWMwOiB1bnJlY29nbmlzZWQgU0NSIHN0cnVjdHVyZSB2ZXJzaW9u
IDcNClsgICAgNC42NzY4NjhdIG1tYzA6IGVycm9yIC0yMiB3aGlsc3QgaW5p
dGlhbGlzaW5nIFNEIGNhcmQNClsgICAgNC42ODMxODBdID09PTEgYmVmb3Jl
DQpbICAgIDQuNjg1MDM4XSA9PT09IG1pZGRsZQ0KWyAgICA0LjY4NzU4NF0g
PT09PSBhZnRlcg0KWyAgICA0LjY5MDA1N10gPT09PSBkb25lDQpbICAgIDQu
Njk0MTU0XSA9PT0xIGJlZm9yZQ0KWyAgICA0LjY5NjAxMl0gPT09PSBtaWRk
bGUNClsgICAgNC42OTg1NjldID09PT0gYWZ0ZXINClsgICAgNC43MDEwMzBd
ID09PT0gZG9uZQ0KWyAgICA0LjcwMzg0OV0gPT09MSBiZWZvcmUNClsgICAg
NC43MDU5MzNdID09PT0gbWlkZGxlDQpbICAgIDQuNzA4NDkxXSA9PT09IGFm
dGVyDQpbICAgIDQuNzEwOTUzXSA9PT09IGRvbmUNClsgICAgNC43OTAxNTld
IG1tYzA6IHVucmVjb2duaXNlZCBTQ1Igc3RydWN0dXJlIHZlcnNpb24gNw0K
WyAgICA0Ljc5NDc0NF0gbW1jMDogZXJyb3IgLTIyIHdoaWxzdCBpbml0aWFs
aXNpbmcgU0QgY2FyZA0KWyAgICA0LjgwMTIyMF0gPT09MSBiZWZvcmUNClsg
ICAgNC44MDMwNzhdID09PT0gbWlkZGxlDQpbICAgIDQuODA1NjI0XSA9PT09
IGFmdGVyDQpbICAgIDQuODA4MDgzXSA9PT09IGRvbmUNClsgICAgNC44MTIy
MDRdID09PTEgYmVmb3JlDQpbICAgIDQuODE0MDczXSA9PT09IG1pZGRsZQ0K
WyAgICA0LjgxNjYwN10gPT09PSBhZnRlcg0KWyAgICA0LjgxOTA4NF0gPT09
PSBkb25lDQpbICAgIDQuODIxODk5XSA9PT0xIGJlZm9yZQ0KWyAgICA0Ljgy
Mzk5NV0gPT09PSBtaWRkbGUNClsgICAgNC44MjY1MjldID09PT0gYWZ0ZXIN
ClsgICAgNC44MjkwMDZdID09PT0gZG9uZQ0KWyAgICA0LjkxMzgyNF0gbW1j
MDogdW5yZWNvZ25pc2VkIFNDUiBzdHJ1Y3R1cmUgdmVyc2lvbiA3DQpbICAg
IDQuOTE4NDA3XSBtbWMwOiBlcnJvciAtMjIgd2hpbHN0IGluaXRpYWxpc2lu
ZyBTRCBjYXJkDQpbICAgIDQuOTI1MTYxXSA9PT0xIGJlZm9yZQ0KWyAgICA0
LjkyNzAyMF0gPT09PSBtaWRkbGUNClsgICAgNC45Mjk1ODNdID09PT0gYWZ0
ZXINClsgICAgNC45MzIwMjRdID09PT0gZG9uZQ0KWyAgICA0LjkzNjEzMV0g
PT09MSBiZWZvcmUNClsgICAgNC45Mzc5ODldID09PT0gbWlkZGxlDQpbICAg
IDQuOTQwNTQ4XSA9PT09IGFmdGVyDQpbICAgIDQuOTQzMDA1XSA9PT09IGRv
bmUNClsgICAgNC45NDU4MjZdID09PTEgYmVmb3JlDQpbICAgIDQuOTQ3OTEx
XSA9PT09IG1pZGRsZQ0KWyAgICA0Ljk1MDQ3MF0gPT09PSBhZnRlcg0KWyAg
ICA0Ljk1MjkyN10gPT09PSBkb25lDQpbICAgIDUuMDU1MTQzXSBtbWMwOiB1
bnJlY29nbmlzZWQgU0NSIHN0cnVjdHVyZSB2ZXJzaW9uIDcNClsgICAgNS4w
NTk3NDldIG1tYzA6IGVycm9yIC0yMiB3aGlsc3QgaW5pdGlhbGlzaW5nIFNE
IGNhcmQNClsgICAgNS4wNjczNzhdID09PTEgYmVmb3JlDQpbICAgIDUuMDY5
MjU0XSA9PT09IG1pZGRsZQ0KWyAgICA1LjA3MTc4Ml0gPT09PSBhZnRlcg0K
WyAgICA1LjA3NDI0MV0gPT09PSBkb25lDQpbICAgIDYuMTAwMzI2XSA9PT0x
IGJlZm9yZQ0KWyAgICA2LjEwMjE4M10gPT09PSBtaWRkbGUNClsgICAgNi4x
MDQ3MjldID09PT0gYWZ0ZXINClsgICAgNi4xMDcxODddID09PT0gZG9uZQ0K
WyAgICA2LjExMDAzOF0gPT09MSBiZWZvcmUNClsgICAgNi4xMTIxMDVdID09
PT0gbWlkZGxlDQpbICAgIDYuMTE0NjUxXSA9PT09IGFmdGVyDQpbICAgIDYu
MTE3MTA5XSA9PT09IGRvbmUNClsgICAgNi4xOTMzMzJdIG1tYzA6IHVucmVj
b2duaXNlZCBTQ1Igc3RydWN0dXJlIHZlcnNpb24gNw0KWyAgICA2LjE5Nzkx
Nl0gbW1jMDogZXJyb3IgLTIyIHdoaWxzdCBpbml0aWFsaXNpbmcgU0QgY2Fy
ZA0KWyAgICA2LjIwNDIyOF0gPT09MSBiZWZvcmUNClsgICAgNi4yMDYwODZd
ID09PT0gbWlkZGxlDQpbICAgIDYuMjA4NjMxXSA9PT09IGFmdGVyDQpbICAg
IDYuMjExMTA1XSA9PT09IGRvbmUNClsgICAgNi4yMTUxOThdID09PTEgYmVm
b3JlDQpbICAgIDYuMjE3MDU1XSA9PT09IG1pZGRsZQ0KWyAgICA2LjIxOTYx
OV0gPT09PSBhZnRlcg0KWyAgICA2LjIyMjA2MF0gPT09PSBkb25lDQpbICAg
IDYuMjI0ODkzXSA9PT0xIGJlZm9yZQ0KWyAgICA2LjIyNjk3N10gPT09PSBt
aWRkbGUNClsgICAgNi4yMjk1NDJdID09PT0gYWZ0ZXINClsgICAgNi4yMzE5
ODJdID09PT0gZG9uZQ0KWyAgICA2LjMxMTIxNV0gbW1jMDogdW5yZWNvZ25p
c2VkIFNDUiBzdHJ1Y3R1cmUgdmVyc2lvbiA3DQpbICAgIDYuMzE1Nzk5XSBt
bWMwOiBlcnJvciAtMjIgd2hpbHN0IGluaXRpYWxpc2luZyBTRCBjYXJkDQpb
ICAgIDYuMzIyMjc3XSA9PT0xIGJlZm9yZQ0KWyAgICA2LjMyNDE0N10gPT09
PSBtaWRkbGUNClsgICAgNi4zMjY2ODFdID09PT0gYWZ0ZXINClsgICAgNi4z
MjkxNThdID09PT0gZG9uZQ0KWyAgICA2LjMzMzI0N10gPT09MSBiZWZvcmUN
ClsgICAgNi4zMzUxMDVdID09PT0gbWlkZGxlDQpbICAgIDYuMzM3NjUxXSA9
PT09IGFmdGVyDQpbICAgIDYuMzQwMTI0XSA9PT09IGRvbmUNClsgICAgNi4z
NDI5NDNdID09PTEgYmVmb3JlDQpbICAgIDYuMzQ1MDI3XSA9PT09IG1pZGRs
ZQ0KWyAgICA2LjM0NzU3M10gPT09PSBhZnRlcg0KWyAgICA2LjM1MDA0Nl0g
PT09PSBkb25lDQpbICAgIDYuNDM0OTA1XSBtbWMwOiB1bnJlY29nbmlzZWQg
U0NSIHN0cnVjdHVyZSB2ZXJzaW9uIDcNClsgICAgNi40Mzk1MTFdIG1tYzA6
IGVycm9yIC0yMiB3aGlsc3QgaW5pdGlhbGlzaW5nIFNEIGNhcmQNClsgICAg
Ni40NDYyMjJdID09PTEgYmVmb3JlDQpbICAgIDYuNDQ4MDgwXSA9PT09IG1p
ZGRsZQ0KWyAgICA2LjQ1MDYzOV0gPT09PSBhZnRlcg0KWyAgICA2LjQ1MzA4
NF0gPT09PSBkb25lDQpbICAgIDYuNDU3MTkxXSA9PT0xIGJlZm9yZQ0KWyAg
ICA2LjQ1OTA2Nl0gPT09PSBtaWRkbGUNClsgICAgNi40NjE1OTVdID09PT0g
YWZ0ZXINClsgICAgNi40NjQwNTNdID09PT0gZG9uZQ0KWyAgICA2LjQ2Njg4
N10gPT09MSBiZWZvcmUNClsgICAgNi40Njg5ODhdID09PT0gbWlkZGxlDQpb
ICAgIDYuNDcxNTE2XSA9PT09IGFmdGVyDQpbICAgIDYuNDczOTc1XSA9PT09
IGRvbmUNClsgICAxMy40NTkwNDddIFZGUzogQ2Fubm90IG9wZW4gcm9vdCBk
ZXZpY2UgIihudWxsKSIgb3IgdW5rbm93bi1ibG9jaygwLDApOiBlcnJvciAt
Ng0KWyAgIDEzLjQ2NjAwMF0gUGxlYXNlIGFwcGVuZCBhIGNvcnJlY3QgInJv
b3Q9IiBib290IG9wdGlvbjsgaGVyZSBhcmUgdGhlIGF2YWlsYWJsZSBwYXJ0
aXRpb25zOg0KWyAgIDEzLjQ3NDU2NV0gMDEwMCAgICAgICAgICAxMzEwNzIg
cmFtMA0KWyAgIDEzLjQ3NDU2N10gIChkcml2ZXI/KQ0KWyAgIDEzLjQ4MDg1
OV0gMDEwMSAgICAgICAgICAxMzEwNzIgcmFtMQ0KWyAgIDEzLjQ4MDg2MV0g
IChkcml2ZXI/KQ0KWyAgIDEzLjQ4NzE2NV0gMDEwMiAgICAgICAgICAxMzEw
NzIgcmFtMg0KWyAgIDEzLjQ4NzE2Nl0gIChkcml2ZXI/KQ0KWyAgIDEzLjQ5
MzQ4Nl0gMDEwMyAgICAgICAgICAxMzEwNzIgcmFtMw0KWyAgIDEzLjQ5MzQ4
OF0gIChkcml2ZXI/KQ0KWyAgIDEzLjQ5OTgyNF0gMDEwNCAgICAgICAgICAx
MzEwNzIgcmFtNA0KWyAgIDEzLjQ5OTgyNl0gIChkcml2ZXI/KQ0KWyAgIDEz
LjUwNjEyOV0gMDEwNSAgICAgICAgICAxMzEwNzIgcmFtNQ0KWyAgIDEzLjUw
NjEzMV0gIChkcml2ZXI/KQ0KWyAgIDEzLjUxMjQ1Ml0gMDEwNiAgICAgICAg
ICAxMzEwNzIgcmFtNg0KWyAgIDEzLjUxMjQ1M10gIChkcml2ZXI/KQ0KWyAg
IDEzLjUxODc3M10gMDEwNyAgICAgICAgICAxMzEwNzIgcmFtNw0KWyAgIDEz
LjUxODc3NF0gIChkcml2ZXI/KQ0KWyAgIDEzLjUyNTA5Nl0gMDEwOCAgICAg
ICAgICAxMzEwNzIgcmFtOA0KWyAgIDEzLjUyNTA5OF0gIChkcml2ZXI/KQ0K
WyAgIDEzLjUzMTQxN10gMDEwOSAgICAgICAgICAxMzEwNzIgcmFtOQ0KWyAg
IDEzLjUzMTQxOV0gIChkcml2ZXI/KQ0KWyAgIDEzLjUzNzczOV0gMDEwYSAg
ICAgICAgICAxMzEwNzIgcmFtMTANClsgICAxMy41Mzc3NDBdICAoZHJpdmVy
PykNClsgICAxMy41NDQxNjBdIDAxMGIgICAgICAgICAgMTMxMDcyIHJhbTEx
DQpbICAgMTMuNTQ0MTYyXSAgKGRyaXZlcj8pDQpbICAgMTMuNTUwNTcxXSAw
MTBjICAgICAgICAgIDEzMTA3MiByYW0xMg0KWyAgIDEzLjU1MDU3M10gIChk
cml2ZXI/KQ0KWyAgIDEzLjU1Njk2OF0gMDEwZCAgICAgICAgICAxMzEwNzIg
cmFtMTMNClsgICAxMy41NTY5NzBdICAoZHJpdmVyPykNClsgICAxMy41NjMz
NzhdIDAxMGUgICAgICAgICAgMTMxMDcyIHJhbTE0DQpbICAgMTMuNTYzMzgw
XSAgKGRyaXZlcj8pDQpbICAgMTMuNTY5ODA0XSAwMTBmICAgICAgICAgIDEz
MTA3MiByYW0xNQ0KWyAgIDEzLjU2OTgwNl0gIChkcml2ZXI/KQ0KWyAgIDEz
LjU3NjIyM10gS2VybmVsIHBhbmljIC0gbm90IHN5bmNpbmc6IFZGUzogVW5h
YmxlIHRvIG1vdW50IHJvb3QgZnMgb24gdW5rbm93bi1ibG9jaygwLDApDQpb
ICAgMTMuNTg0NjM4XSBDUFU6IDAgUElEOiAxIENvbW06IHN3YXBwZXIvMCBO
b3QgdGFpbnRlZCA1LjYuMTAtZGVmYXVsdCsgIzE5DQpbICAgMTMuNTkxNTU5
XSBIYXJkd2FyZSBuYW1lOiBSYXNwYmVycnkgUGkgNCBNb2RlbCBCIChEVCkN
ClsgICAxMy41OTY4MjhdIENhbGwgdHJhY2U6DQpbICAgMTMuNTk5MzgyXSAg
ZHVtcF9iYWNrdHJhY2UrMHgwLzB4MWQwDQpbICAgMTMuNjAzMTUyXSAgc2hv
d19zdGFjaysweDE0LzB4MjANClsgICAxMy42MDY1NzhdICBkdW1wX3N0YWNr
KzB4YmMvMHgxMDANClsgICAxMy42MTAwOTJdICBwYW5pYysweDE1MC8weDMy
NA0KWyAgIDEzLjYxMzI1MV0gIG1vdW50X2Jsb2NrX3Jvb3QrMHgyOGMvMHgz
MmMNClsgICAxMy42MTczNzZdICBtb3VudF9yb290KzB4N2MvMHg4OA0KWyAg
IDEzLjYyMDgwMF0gIHByZXBhcmVfbmFtZXNwYWNlKzB4MTU4LzB4MTk4DQpb
ICAgMTMuNjI1MDI0XSAga2VybmVsX2luaXRfZnJlZWFibGUrMHgyNjQvMHgy
YjANClsgICAxMy42Mjk0OTZdICBrZXJuZWxfaW5pdCsweDEwLzB4MTAwDQpb
ICAgMTMuNjMzMDk0XSAgcmV0X2Zyb21fZm9yaysweDEwLzB4MTgNClsgICAx
My42MzY3ODZdIEtlcm5lbCBPZmZzZXQ6IGRpc2FibGVkDQpbICAgMTMuNjQw
MzgwXSBDUFUgZmVhdHVyZXM6IDB4MTAwMDIsNjEwMDYwMDANClsgICAxMy42
NDQ2MDJdIE1lbW9yeSBMaW1pdDogbm9uZQ0KWyAgIDEzLjY0Nzc2MV0gLS0t
WyBlbmQgS2VybmVsIHBhbmljIC0gbm90IHN5bmNpbmc6IFZGUzogVW5hYmxl
IHRvIG1vdW50IHJvb3QgZnMgb24gdW5rbm93bi1ibG9jaygwLDApIF0tLS0N
Cg==

--8323329-1114903866-1589329903=:26167
Content-Type: text/plain; NAME=diff.txt
Content-Transfer-Encoding: BASE64
Content-ID: <alpine.DEB.2.21.2005121731431.26167@sstabellini-ThinkPad-T480s>
Content-Description: 
Content-Disposition: ATTACHMENT; FILENAME=diff.txt

ZGlmZiAtLWdpdCBhL2RyaXZlcnMveGVuL3N3aW90bGIteGVuLmMgYi9kcml2
ZXJzL3hlbi9zd2lvdGxiLXhlbi5jDQppbmRleCBiNmQyNzc2MmMuLjQ4OTBj
ZWM5NiAxMDA2NDQNCi0tLSBhL2RyaXZlcnMveGVuL3N3aW90bGIteGVuLmMN
CisrKyBiL2RyaXZlcnMveGVuL3N3aW90bGIteGVuLmMNCkBAIC0yNTUsNiAr
MjU1LDcgQEAgaW50IF9fcmVmIHhlbl9zd2lvdGxiX2luaXQoaW50IHZlcmJv
c2UsIGJvb2wgZWFybHkpDQogCWlmICghcmMpDQogCQlzd2lvdGxiX3NldF9t
YXhfc2VnbWVudChQQUdFX1NJWkUpOw0KIA0KK3ByaW50aygiREVCVUcgJXMg
JWQgc3RhcnQ9JWxseCBlbmQ9JWxseFxuIixfX2Z1bmNfXyxfX0xJTkVfXyxz
dGFydF9kbWFfYWRkcixzdGFydF9kbWFfYWRkcitieXRlcyk7DQogCXJldHVy
biByYzsNCiBlcnJvcjoNCiAJaWYgKHJlcGVhdC0tKSB7DQpAQCAtMzI0LDYg
KzMyNSwxMCBAQCB4ZW5fc3dpb3RsYl9hbGxvY19jb2hlcmVudChzdHJ1Y3Qg
ZGV2aWNlICpod2Rldiwgc2l6ZV90IHNpemUsDQogCQl9DQogCQlTZXRQYWdl
WGVuUmVtYXBwZWQodmlydF90b19wYWdlKHJldCkpOw0KIAl9DQoraWYgKCFk
bWFfY2FwYWJsZShod2RldiwgKmRtYV9oYW5kbGUsIHNpemUsIHRydWUpKQ0K
KyAgICBwcmludGsoIkRFQlVHMSAlcyAlZCBwaHlzPSVsbHggZG1hPSVsbHgg
ZG1hX21hc2s9JWxseFxuIixfX2Z1bmNfXyxfX0xJTkVfXyxwaHlzLCpkbWFf
aGFuZGxlLGRtYV9tYXNrKTsNCitpZiAoZGV2X2FkZHIgKyBzaXplIC0gMSA+
IGRtYV9tYXNrKQ0KKyAgICBwcmludGsoIkRFQlVHMiAlcyAlZCBwaHlzPSVs
bHggZG1hPSVsbHggZG1hX21hc2s9JWxseFxuIixfX2Z1bmNfXyxfX0xJTkVf
XyxwaHlzLCpkbWFfaGFuZGxlLGRtYV9tYXNrKTsNCiAJbWVtc2V0KHJldCwg
MCwgc2l6ZSk7DQogCXJldHVybiByZXQ7DQogfQ0KQEAgLTM5OCw2ICs0MDgs
NyBAQCBzdGF0aWMgZG1hX2FkZHJfdCB4ZW5fc3dpb3RsYl9tYXBfcGFnZShz
dHJ1Y3QgZGV2aWNlICpkZXYsIHN0cnVjdCBwYWdlICpwYWdlLA0KIAkgKiBF
bnN1cmUgdGhhdCB0aGUgYWRkcmVzcyByZXR1cm5lZCBpcyBETUEnYmxlDQog
CSAqLw0KIAlpZiAodW5saWtlbHkoIWRtYV9jYXBhYmxlKGRldiwgZGV2X2Fk
ZHIsIHNpemUsIHRydWUpKSkgew0KK3ByaW50aygiREVCVUczICVzICVkIHBo
eXM9JWxseCBkbWE9JWxseFxuIixfX2Z1bmNfXyxfX0xJTkVfXyxwaHlzLGRl
dl9hZGRyKTsJCQ0KIAkJc3dpb3RsYl90YmxfdW5tYXBfc2luZ2xlKGRldiwg
bWFwLCBzaXplLCBzaXplLCBkaXIsDQogCQkJCWF0dHJzIHwgRE1BX0FUVFJf
U0tJUF9DUFVfU1lOQyk7DQogCQlyZXR1cm4gRE1BX01BUFBJTkdfRVJST1I7
DQo=

--8323329-1114903866-1589329903=:26167--


From xen-devel-bounces@lists.xenproject.org Wed May 13 00:56:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 00:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYfgj-00063y-7R; Wed, 13 May 2020 00:55:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYfgh-000637-ET
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 00:55:55 +0000
X-Inumbo-ID: 7fd151e6-94b4-11ea-ae69-bc764e2007e4
Received: from mail-qt1-x835.google.com (unknown [2607:f8b0:4864:20::835])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fd151e6-94b4-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 00:55:54 +0000 (UTC)
Received: by mail-qt1-x835.google.com with SMTP id p12so12825238qtn.13
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 17:55:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=EMqMHiDg6gKs8OQ+tw4WAiOg/f7/en1mPF/tDzR7hOE=;
 b=dJYOWESkROg0U5+eZsfHG8Ydy4j1f2bOnzQ0NIislyQIWJCCqMs8OZO185Q22TSWVZ
 mZiQBhe9SShl8/qBIcexNLhUO3b3Eqm5BNArlQ7EFYsXv+cVh/C57dbUMC8QspkTWnTx
 9h3avkzMwHHB+I3OyzzQ7tV5d11SCD0HhohHUw49bd0TXWgq4kD+6u83hWGJBI2OP3Ck
 rEAzNRxzShU1ukriU1Mu+fGCPPiHA8sf7bplW5D0AQmdNsainhTXYZ/zpzn3Kp7qjjQb
 9Nj97d3uLESPlLxEMVXM9pUXYpjKGnoJJTc6AAZwEomGC6lATm5A4YBM9ytIZYk3Av0d
 zGhg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=EMqMHiDg6gKs8OQ+tw4WAiOg/f7/en1mPF/tDzR7hOE=;
 b=OCPOeefxUqfZCSrMqTOHci45eIvMdGZdaBnkdiTVSwYg50WKJp2+9G0wGyRMXMogtC
 +hQKV2HuoB9y4XepJSo+HKOZFXM9y5WLYPAUrjC0fqRj2KLunpLqnyu5pQc/z5VLEthw
 pLCEEzss4CcA2x41uPiz6th2zuG0VSo3UWJL9k0HLkfzxfIXRv4THSoHJ9oWQhg9KNiu
 O9MVCoqaYcKAF6pEPf+wOrYWAezk+mgMaG1O+ktqtAtDHDKGGPV3ydBEq6qxINotzL15
 OWdkWAJfdPnktNR+Ma8cT6+6PnKgzUoX/sc5qltM6pqZGK9O2YaP+OPfEVyOjnnMsAYn
 8t/g==
X-Gm-Message-State: AGi0PuaSvCIIkWFFmKMHzi3AkCNeYbazNU4ot3LeFnL4PDWO5TOYQfuk
 lIOd7ebzEOJrDdbebK95bHPKH/eccFc=
X-Google-Smtp-Source: APiQypLZrSd1jBf6vjCZ/Iqv8pToJSWHu2/t9eUwUf47AIwgkw4AxwzKB+ETw0D4ubAoaW2Gmo8Rug==
X-Received: by 2002:aed:3949:: with SMTP id l67mr13125456qte.313.1589331353911; 
 Tue, 12 May 2020 17:55:53 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id y28sm8388602qtc.62.2020.05.12.17.55.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 17:55:53 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 0/1] More wrappers for xenlight Go package
Date: Tue, 12 May 2020 20:55:49 -0400
Message-Id: <cover.1589321804.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This series adds wrappers to the xenlight package for various libxl
functions, which are now trivial to add with the generated types and 
marshaling helpers. In particular, these are functions that would allow
redctl to begin making the transition to using the xenlight package. For 
reference, I have started an experimental branch where I am using these
functions in redctl [1].

[1] https://gitlab.com/enr0n/redctl/-/blob/1bdf7b515654cc030e095f3a630a24530f930c00/internal/server/xenlight_xen_driver.go

Changes in v2:
 - Define the constant DomidInvalid in the first patch for use in NameToDomid
 - Add more detail to function comments

Changes in v3:
 - Define INVALID_DOMID_TYPED in the C preamble so that the Go const
   DomidInvalid always stays in sync with the libxl definition.
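(A hedged aside, not part of the series: the typed macro is needed because libxl defines INVALID_DOMID as ~0, which C treats as a signed int, so cgo will not accept it directly as a uint32 constant. Casting once in the C preamble gives Go a value it can use verbatim. Assuming INVALID_DOMID really is ~0, the resulting 32-bit value looks like this:)

```go
package main

import "fmt"

func main() {
	// What ((uint32_t) INVALID_DOMID) evaluates to when INVALID_DOMID
	// is ~0: all 32 bits set.
	const domidInvalid = ^uint32(0)
	fmt.Println(domidInvalid)
}
```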

Nick Rosbrook (1):
  golang/xenlight: add NameToDomid and DomidToName util functions

 tools/golang/xenlight/xenlight.go | 40 ++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 00:56:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 00:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYfgn-00064F-Fp; Wed, 13 May 2020 00:56:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYfgm-000646-9y
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 00:56:00 +0000
X-Inumbo-ID: 8090bc20-94b4-11ea-ae69-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8090bc20-94b4-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 00:55:56 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id i68so12867109qtb.5
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 17:55:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :in-reply-to:references;
 bh=vucWzifnCHg1Y17H84jeYTA4ym+JO/tR6T+lTmzuxUM=;
 b=nFwHOxIyQmzykfR4t6Lqr1JLUBfjw6TpV3GM0+nB9D79zh+TotaGfgG11UemFt7Adj
 QpiNhIOWB1LW5E3UhqOSmz5I0YcY8ifsuCpN8upCe3YgVAqskiiF0psYKUttBMeZK3Rb
 rfMlbiYg7Y6obDCP1hruvkfFp7Ek2ZldAIyB2TL3NZg/UIy0h2lqOLk2knz/ILX/m1u1
 N2DiP/j2LrNnLncOe2BGliPHdlrVS59He/s64RMwA18xLXnKR/MrTR0Z75zutnHFKTcb
 7JhNisfQLDqmczUGs8Ph/yb6BrpNuSL5VSG//nDoAHKxY6ccJ+2M094snSOs/EwO3aiI
 sIew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:in-reply-to:references;
 bh=vucWzifnCHg1Y17H84jeYTA4ym+JO/tR6T+lTmzuxUM=;
 b=DNvfWDQUoJ09ucxLK8wSuuaIZ6e11/QGyFZFJoajDsauXvT7vyz36iT6mUd32mQaxo
 H5Ait62QBFK+ZdA8efr5N6WGwAL71b1soA8brj5lZ0L0UdvFzdUqhlN5jyJPL0HV/Pbj
 JcWJ8rbCRrwwNd7N8hEWF00HLY9+Ln5P+lpAfJXCoeI5lhljVEeQW4vzDaDSePoklsLZ
 wS4tl2v0scIaUOeNenBphFs8OQ1nH0FktHbHOOhognDLtxMj2NANGsqGCCvKvqrL/fUG
 i6TjxO5yZe4KKx5B17XkjLrOTqXOohZwYKpLWGjhNCy7t9nQLj0sET/RWUCTalXMxcP0
 3R0g==
X-Gm-Message-State: AGi0PubWI9EPYVg67scmqAfz6uMAfDhWwXexhLY5v1sU4+prKasvd/np
 tGdyOoRN5j2sKRdQZSS1LlgE/SjNS/4=
X-Google-Smtp-Source: APiQypL+6WJmum4S4Lp8c1UA4uB8kZY4IOWco9PVZueHJf77++cWuk8UFJRu+Z2ujqjmdlZiGd4JhQ==
X-Received: by 2002:aed:3788:: with SMTP id j8mr24430376qtb.113.1589331355391; 
 Tue, 12 May 2020 17:55:55 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id y28sm8388602qtc.62.2020.05.12.17.55.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 17:55:54 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 1/1] golang/xenlight: add NameToDomid and DomidToName util
 functions
Date: Tue, 12 May 2020 20:55:50 -0400
Message-Id: <a543eec1da35b619a06816f8aed29774daa38cb7.1589321804.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1589321804.git.rosbrookn@ainfosec.com>
References: <cover.1589321804.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1589321804.git.rosbrookn@ainfosec.com>
References: <cover.1589321804.git.rosbrookn@ainfosec.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Many exported functions in xenlight require a domid as an argument. Make
it easier for package users to call these functions by adding wrappers
for the libxl utility functions libxl_name_to_domid and
libxl_domid_to_name.
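For readers skimming the archive, here is a compilable sketch of how a caller might use the new wrappers. Context here is a stub standing in for the real cgo-backed xenlight.Context, the domain table is made up, and DomidInvalid's value assumes libxl's INVALID_DOMID is ~0; only the method signatures follow the patch below.

```go
package main

import "fmt"

// Domid mirrors the xenlight type (uint32) from the patch below.
type Domid uint32

// DomidInvalid mirrors the new constant; in the real package the value
// comes from libxl's INVALID_DOMID via the INVALID_DOMID_TYPED cast.
const DomidInvalid Domid = Domid(^uint32(0))

// Context stubs xenlight.Context so this sketch builds without libxl;
// the real type wraps a libxl_ctx through cgo.
type Context struct {
	domains map[string]Domid
}

// NameToDomid has the same signature as the wrapper in the patch.
func (ctx *Context) NameToDomid(name string) (Domid, error) {
	if d, ok := ctx.domains[name]; ok {
		return d, nil
	}
	return DomidInvalid, fmt.Errorf("no domain named %q", name)
}

// DomidToName has the same signature as the wrapper in the patch.
func (ctx *Context) DomidToName(domid Domid) string {
	for name, d := range ctx.domains {
		if d == domid {
			return name
		}
	}
	return ""
}

func main() {
	ctx := &Context{domains: map[string]Domid{"Domain-0": 0}}

	d, err := ctx.NameToDomid("Domain-0")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Printf("Domain-0 has domid %d\n", d)
	fmt.Printf("domid %d is named %q\n", d, ctx.DomidToName(d))
}
```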

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 40 ++++++++++++++++++++++++++++++-
 1 file changed, 39 insertions(+), 1 deletion(-)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 6b4f492550..742e5e11f1 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -21,13 +21,15 @@ package xenlight
 #cgo LDFLAGS: -lxenlight -lyajl -lxentoollog
 #include <stdlib.h>
 #include <libxl.h>
+#include <libxl_utils.h>
+
+#define INVALID_DOMID_TYPED ((uint32_t) INVALID_DOMID)
 
 static const libxl_childproc_hooks childproc_hooks = { .chldowner = libxl_sigchld_owner_mainloop };
 
 void xenlight_set_chldproc(libxl_ctx *ctx) {
 	libxl_childproc_setmode(ctx, &childproc_hooks, NULL);
 }
-
 */
 import "C"
 
@@ -75,6 +77,10 @@ var libxlErrors = map[Error]string{
 	ErrorFeatureRemoved:               "Feature removed",
 }
 
+const (
+	DomidInvalid Domid = Domid(C.INVALID_DOMID_TYPED)
+)
+
 func (e Error) Error() string {
 	if s, ok := libxlErrors[e]; ok {
 		return s
@@ -190,6 +196,38 @@ func (ctx *Context) Close() error {
 
 type Domid uint32
 
+// NameToDomid returns the Domid for a domain, given its name, if it exists.
+//
+// NameToDomid does not guarantee that the domid associated with name at
+// the time NameToDomid is called is the same as the domid associated with
+// name at the time NameToDomid returns.
+func (Ctx *Context) NameToDomid(name string) (Domid, error) {
+	var domid C.uint32_t
+
+	cname := C.CString(name)
+	defer C.free(unsafe.Pointer(cname))
+
+	if ret := C.libxl_name_to_domid(Ctx.ctx, cname, &domid); ret != 0 {
+		return DomidInvalid, Error(ret)
+	}
+
+	return Domid(domid), nil
+}
+
+// DomidToName returns the name for a domain, given its domid. If there
+// is no domain with the given domid, DomidToName will return the empty
+// string.
+//
+// DomidToName does not guarantee that the name (if any) associated with domid
+// at the time DomidToName is called is the same as the name (if any) associated
+// with domid at the time DomidToName returns.
+func (Ctx *Context) DomidToName(domid Domid) string {
+	cname := C.libxl_domid_to_name(Ctx.ctx, C.uint32_t(domid))
+	defer C.free(unsafe.Pointer(cname))
+
+	return C.GoString(cname)
+}
+
 // Devid is a device ID.
 type Devid int
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 00:58:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 00:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYfit-0006El-Tf; Wed, 13 May 2020 00:58:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYfit-0006Eg-Jh
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 00:58:11 +0000
X-Inumbo-ID: d10f3c08-94b4-11ea-b07b-bc764e2007e4
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d10f3c08-94b4-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 00:58:11 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id y22so3114439qki.3
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 17:58:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=Abb8SF9BVb/yvCQWRsCzpAAii8nwRgH/VtD1d77e7kM=;
 b=VVlIpEw6Q0U6XlZM1WlSNHztNl024vZOo6JuKhbgjVgFRZJh7Q3S36v9QUuivHTmqp
 A20SaZn1c0QitYH2WMN+M+dO/+E1K9hFl8DH6dd7Dl0jyvAHtL+5gLVLpB5PVAs3dJNp
 Uz15Rvdc8tgBSSc3JIyvSOAltHWWOhEjsqHMC+UVTdIFT6zRJrDioI3+ZNzNjiCCb2l3
 Qx/ziEEfEYxG1EczkHL4VOSZzxHSAejpSq6OCNIN/BKBrXJrZYw9MM/oO9kJLq02gGCz
 s216aFSdX3/FAUHJGAnBpxqPkzHCDWS2M0z79Q+IU0oftT3UhF7JlKbG8flg5Fw6/Un9
 NbPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=Abb8SF9BVb/yvCQWRsCzpAAii8nwRgH/VtD1d77e7kM=;
 b=SKXh4TdtbWp/uQBDVEGlK38o5MKteoTkSQlDGRSomFEhLYMTx6VIprq9tuEmvs1U6H
 4MqdTw9nXliD+2oSV+68C3ViIT8ZGlxiLasbEhCdEzi7gww/KIy+Y5uF2zBDJxc+P2pK
 gmGHg0FY/d0xs9/93s5287usaMkDiYdWYMxjWKv1dvhN2wFFGY3EbBRzgc6tWDpqpu+x
 r19A7qty5i+geKBEO8IPij8YdcbUji9kQ1AUKC5uJfPXJHxexf0Jqe6srptwZFFvQBJC
 nZz2Xe6hjoxnJaHnynmHoGCqita4N/39jidaPSSnQhrbpIHC4k69p7lCtQYxfDeocQ7m
 Gh2w==
X-Gm-Message-State: AGi0PuYgoCLTQe0caRzF2DrCfhr0dovAWmGZ22RRBXIezOv1/0RO8xF/
 wk8Jc4EgY6cIRbFCdK0gzqDloxDfgwY=
X-Google-Smtp-Source: APiQypIOaRVxfIQ+gNYGoQX9nWHQ8ZayFE/yEjmLUnRkfSfUwrq6b0YbC4QRP8JPsMRbJc0awmvOXA==
X-Received: by 2002:a37:97c1:: with SMTP id
 z184mr18043810qkd.249.1589331490249; 
 Tue, 12 May 2020 17:58:10 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id 62sm12400828qkh.113.2020.05.12.17.58.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 17:58:09 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 0/3] initialize xenlight go module
Date: Tue, 12 May 2020 20:58:04 -0400
Message-Id: <cover.1589330383.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These patches set up an initial Go module for the xenlight package. The
generated Go code is tracked in-tree again, since the module is defined in
xen.git as xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight.
The final patch
adds a README and LICENSE to tools/golang/xenlight so that the package
will show up properly on pkg.go.dev and be available via the module
proxy at proxy.golang.org.

Changes in v2:
 - Use xenproject.org instead of xen.org in module path.
 - Use `go build` instead of `go install` in xenlight Makefile.
 - Use LGPL instead of GPL for xenlight LICENSE.
 - Add entry for xenlight package in SUPPORT.md.
 - Change some wording in the README for clarity. 

Nick Rosbrook (3):
  golang/xenlight: re-track generated go code
  golang/xenlight: init xenlight go module
  golang/xenlight: add necessary module/package documentation

 .gitignore                           |    3 -
 .hgignore                            |    2 -
 SUPPORT.md                           |    6 +
 tools/golang/xenlight/LICENSE        |  198 ++
 tools/golang/xenlight/Makefile       |    3 +-
 tools/golang/xenlight/README.md      |   17 +
 tools/golang/xenlight/go.mod         |    1 +
 tools/golang/xenlight/helpers.gen.go | 4728 ++++++++++++++++++++++++++
 tools/golang/xenlight/types.gen.go   | 1226 +++++++
 tools/golang/xenlight/xenlight.go    |    2 +
 10 files changed, 6179 insertions(+), 7 deletions(-)
 create mode 100644 tools/golang/xenlight/LICENSE
 create mode 100644 tools/golang/xenlight/README.md
 create mode 100644 tools/golang/xenlight/go.mod
 create mode 100644 tools/golang/xenlight/helpers.gen.go
 create mode 100644 tools/golang/xenlight/types.gen.go

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 00:58:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 00:58:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYfiz-0006GG-5j; Wed, 13 May 2020 00:58:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYfiy-0006GA-K6
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 00:58:16 +0000
X-Inumbo-ID: d30b8bb0-94b4-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d30b8bb0-94b4-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 00:58:14 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id v4so11939803qte.3
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 17:58:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :in-reply-to:references;
 bh=9QmGMkOR+A+pcuReg/osmrOg0hnPPVtYzNJ/7f6YaNg=;
 b=vbptMo/eBHuIG3SRRrDxBqD1zDbRCFmHucEVmXcE07e1JIMsrNjlAvtlsdrBv+BIrs
 BY3ynre2rZ2nKgSLktlZQH4n7gNfmcn/uKwe86v4sXkacMQuVpVMPnua9f+60d7up7PW
 nUjIOFBc/ZHdAUfyyT68ReP8SYRDbQiPxa189wMYX5j+L8P6XzTD5e89BQRRmhdJ9ApW
 pgI1qefmWNHuETb5XDcqG4qWtJr95nHQmdH58qH5hP2Fm/HT1Nq2Y9qSzLW23ilCGssU
 v1/M1m8bpmEmE2yHtnz/mqEOsVGEvlxVd9MZxcJK0W8avzpYDhvLbjcto/JRTqPCWwd5
 wHgA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:in-reply-to:references;
 bh=9QmGMkOR+A+pcuReg/osmrOg0hnPPVtYzNJ/7f6YaNg=;
 b=Ti6eJxBH6UTo8xGDbUlXeMn2piPZqqeOFnC6I9RvCj3xzB7zKkPkxpFiF50y4Z581N
 GjbfwElve7fDjAdHA7+C8VBKUS7mVWwj/oGpn1mwqdfR+2vuaa758yn6LOCpvid6vKFL
 A7Y38ILSD4OKALilRns3F5g3DzHEjiRszfnKxgL6CoBQGHJHzWvZAEBemA7kkKb/+HjL
 mqT3pD5+z1w57Cqt045sZj/nIYJie5O0XVp2XLI1tnB8GNv9xrwbP1WMJbPcU6PwL0sX
 9eiFMK8UaD2D5yuKc0lNtT1fhTZpU/CfS+X8wGdJyYS2ljJsg+3ARNFLchZI0BTWJt/U
 FQwg==
X-Gm-Message-State: AOAM531tUsDeXCoEyWtknDIgN+VvAQSd5cj26lvNVnO4HnmlqdCmd44q
 j2DFWEOrPgSONatoCWwfBKjxjzZifsc=
X-Google-Smtp-Source: ABdhPJyRR0JQaQPyfA0v3+R8gVZAK0qYjKH35eNV0aKR0mDANKcXzAIRb4B32yUg//m2MRiLdpJI7w==
X-Received: by 2002:ac8:6651:: with SMTP id j17mr4960597qtp.35.1589331493712; 
 Tue, 12 May 2020 17:58:13 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id 62sm12400828qkh.113.2020.05.12.17.58.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 17:58:13 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 2/3] golang/xenlight: init xenlight go module
Date: Tue, 12 May 2020 20:58:06 -0400
Message-Id: <d3744468e8f6ce22756355a2e36b182cea7d5068.1589330383.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1589330383.git.rosbrookn@ainfosec.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1589330383.git.rosbrookn@ainfosec.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Initialize the xenlight Go module using the xenbits git-http URL,
xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight.

Also simplify the build Make target by using `go build` instead of `go
install`, and do not set GOPATH here because it is now unnecessary.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
Changes in v2:
 - Use xenproject.org instead of xen.org in module path.
 - Undo change to XEN_GOCODE_URL; just use `go build`.
---
 tools/golang/xenlight/Makefile | 2 +-
 tools/golang/xenlight/go.mod   | 1 +
 2 files changed, 2 insertions(+), 1 deletion(-)
 create mode 100644 tools/golang/xenlight/go.mod

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 753132306a..37ed1358c4 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -33,7 +33,7 @@ $(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go types.gen.go helpers.
 # so that it can find the actual library.
 .PHONY: build
 build: package
-	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_XENLIGHT) -L$(XEN_LIBXENTOOLLOG)" GOPATH=$(XEN_GOPATH) $(GO) install -x $(XEN_GOCODE_URL)/xenlight
+	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_XENLIGHT) -L$(XEN_LIBXENTOOLLOG)" $(GO) build -x
 
 .PHONY: install
 install: build
diff --git a/tools/golang/xenlight/go.mod b/tools/golang/xenlight/go.mod
new file mode 100644
index 0000000000..926474d929
--- /dev/null
+++ b/tools/golang/xenlight/go.mod
@@ -0,0 +1 @@
+module xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 00:58:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 00:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYfj9-0006IY-Es; Wed, 13 May 2020 00:58:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYfj7-0006ID-Os
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 00:58:25 +0000
X-Inumbo-ID: d401cb60-94b4-11ea-ae69-bc764e2007e4
Received: from mail-qt1-x832.google.com (unknown [2607:f8b0:4864:20::832])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d401cb60-94b4-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 00:58:16 +0000 (UTC)
Received: by mail-qt1-x832.google.com with SMTP id j2so12576205qtr.12
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 17:58:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :in-reply-to:references;
 bh=an676N/D+6nftjkOiiWzF86EZ8yFV1/mcajsNXVTAmI=;
 b=R2Jq28ufrtUpeo73Gfl85WUUCqcfn03eArSLipGmiED+RyNZfH9cGlkIKo9OFUP8kj
 wfVHIrJJDZrCahXMeylg8DUrXLd0GCe3DXmGyp6q+CFyk9AywRFZW7qLe5NL1IsFLvAy
 CXzbIx6826Ze12tmQlYc2mnSWmGPXL4/RKhz6mqhCnXR8WQWVgRMbw/IOpRQL7LMhAxI
 t3aHyVRg/h0oBErSc62/kO8V3QZ80w3cEsOVVLihMSLXpX4e9YWH60OXsO2dFAidg0HP
 A59wvLyaaTaXo46cxFxd86omzbiu3IIGz4DnCtnpHU4vx+m0FiYzI956WaTyBIYn61F0
 zFaQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:in-reply-to:references;
 bh=an676N/D+6nftjkOiiWzF86EZ8yFV1/mcajsNXVTAmI=;
 b=V8tqrPvkbV2GShutnb/OIGa926102GuRxpTurRY7rdtVntUKcuMNIlC24T1VaO5aMj
 GCqmPL5XreXbuP8PGzTSKTqX7XDQuE0Uyxnv9oUiPwEmY2H0kTYRqDcmbc+NHJPo6O0C
 K4BvMS0plMJjaBy9+HUV6CI9MYdumHZkUCAZfM9oqfuUPvQWr4MDqcbQkH8tD5inlrsA
 yfT0Nv3sXV8pxin9u9kxzIPsL70Xos61ED4wSQ2G2dDJPqUR1eFa6t7SCjxGR5ftEvug
 caTd7+Ftbh4hKafkWYrieWB3896Ov/qbRbed7TpVTWhDrEY/q+DmSlHwN/KXYblv3IHN
 VSig==
X-Gm-Message-State: AGi0Pub7i5DrnTlnzykSmjqpbyU/LfUBvWdAUJB5X1SkYOGBD00hoVM0
 erayYrxxNJAEkvUHHShcdfw9Ph+VlXI=
X-Google-Smtp-Source: APiQypJCdl2pWRGcso/VFc3FfD3oRe6SBMHeT/w/R4xwHJ+UrHmdIAZo1uCIyJ2NtVPK+f1f35mVpg==
X-Received: by 2002:aed:37ca:: with SMTP id j68mr25132598qtb.276.1589331492390; 
 Tue, 12 May 2020 17:58:12 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id 62sm12400828qkh.113.2020.05.12.17.58.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 17:58:11 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 1/3] golang/xenlight: re-track generated go code
Date: Tue, 12 May 2020 20:58:05 -0400
Message-Id: <0b40edc65dfd44a9cad8533163f571e169982d0f.1589330383.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1589330383.git.rosbrookn@ainfosec.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1589330383.git.rosbrookn@ainfosec.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Commit df669de074c395a3b2eeb975fddd3da4c148da13 un-tracked the generated
Go code, but it was since decided that the generated code should in fact
be kept in-tree.

Undo the changes to ignore the generated code, and re-generate it.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
Reviewed-by: George Dunlap <george.dunlap@citrix.com>
---
 .gitignore                           |    3 -
 .hgignore                            |    2 -
 tools/golang/xenlight/Makefile       |    1 -
 tools/golang/xenlight/helpers.gen.go | 4728 ++++++++++++++++++++++++++
 tools/golang/xenlight/types.gen.go   | 1226 +++++++
 5 files changed, 5954 insertions(+), 6 deletions(-)
 create mode 100644 tools/golang/xenlight/helpers.gen.go
 create mode 100644 tools/golang/xenlight/types.gen.go
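For reviewers less familiar with the generated bindings: keyed C unions such
as libxl_channelinfo's `u` are represented on the Go side as an interface
field plus the key enum, and accessing a variant requires a type assertion
against the key. The sketch below is a self-contained, cgo-free illustration
of that shape using the Channelinfo types from this patch; the `Pty` accessor
is hypothetical (the generated code performs the equivalent assertion inline
in `toC`), and the path value is made up.

```go
package main

import (
	"errors"
	"fmt"
)

// ChannelConnection mirrors the generated enum that keys the union.
type ChannelConnection int

const (
	ChannelConnectionUnknown ChannelConnection = iota
	ChannelConnectionPty
	ChannelConnectionSocket
)

// ChannelinfoConnectionUnionPty is one variant of the C union, as in the
// generated types.
type ChannelinfoConnectionUnionPty struct {
	Path string
}

// Channelinfo stores the active union variant as an interface value,
// alongside the key that says which variant it is.
type Channelinfo struct {
	Connection      ChannelConnection
	ConnectionUnion interface{}
}

// Pty is a hypothetical accessor showing the key check and type assertion
// that the generated toC performs before copying the variant into xc.u.
func (x *Channelinfo) Pty() (*ChannelinfoConnectionUnionPty, error) {
	if x.Connection != ChannelConnectionPty {
		return nil, errors.New("union key is not ChannelConnectionPty")
	}
	pty, ok := x.ConnectionUnion.(ChannelinfoConnectionUnionPty)
	if !ok {
		return nil, errors.New("wrong type for union key connection")
	}
	return &pty, nil
}

func main() {
	ci := Channelinfo{
		Connection:      ChannelConnectionPty,
		ConnectionUnion: ChannelinfoConnectionUnionPty{Path: "/dev/pts/3"},
	}
	pty, err := ci.Pty()
	if err != nil {
		panic(err)
	}
	fmt.Println(pty.Path)
}
```

In the real bindings the asserted variant is then marshalled into the C
union's byte array via the `typeof`-derived typedefs in the cgo preamble.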

diff --git a/.gitignore b/.gitignore
index 9c8a31f896..bfa53723b3 100644
--- a/.gitignore
+++ b/.gitignore
@@ -406,9 +406,6 @@ tools/xenstore/xenstore-watch
 tools/xl/_paths.h
 tools/xl/xl
 
-tools/golang/src
-tools/golang/*/*.gen.go
-
 docs/txt/misc/*.txt
 docs/txt/man/*.txt
 docs/figs/*.png
diff --git a/.hgignore b/.hgignore
index 2ec52982e1..2d41670632 100644
--- a/.hgignore
+++ b/.hgignore
@@ -282,8 +282,6 @@
 ^tools/ocaml/test/xtl$
 ^tools/ocaml/test/send_debug_keys$
 ^tools/ocaml/test/list_domains$
-^tools/golang/src$
-^tools/golang/.*/.*\.gen\.go$
 ^tools/autom4te\.cache$
 ^tools/config\.h$
 ^tools/config\.log$
diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 144c133ced..753132306a 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -49,7 +49,6 @@ install: build
 clean:
 	$(RM) -r $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(RM) $(XEN_GOPATH)/pkg/*/$(XEN_GOCODE_URL)/xenlight.a
-	$(RM) *.gen.go
 
 .PHONY: distclean
 distclean: clean
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
new file mode 100644
index 0000000000..109e9515a2
--- /dev/null
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -0,0 +1,4728 @@
+// DO NOT EDIT.
+//
+// This file is generated by:
+// gengotypes.py ../../libxl/libxl_types.idl
+//
+package xenlight
+
+import (
+	"errors"
+	"fmt"
+	"unsafe"
+)
+
+/*
+#cgo LDFLAGS: -lxenlight
+#include <stdlib.h>
+#include <libxl.h>
+
+typedef typeof(((struct libxl_channelinfo *)NULL)->u.pty)libxl_channelinfo_connection_union_pty;
+typedef typeof(((struct libxl_domain_build_info *)NULL)->u.hvm)libxl_domain_build_info_type_union_hvm;
+typedef typeof(((struct libxl_domain_build_info *)NULL)->u.pv)libxl_domain_build_info_type_union_pv;
+typedef typeof(((struct libxl_domain_build_info *)NULL)->u.pvh)libxl_domain_build_info_type_union_pvh;
+typedef typeof(((struct libxl_device_usbdev *)NULL)->u.hostdev)libxl_device_usbdev_type_union_hostdev;
+typedef typeof(((struct libxl_device_channel *)NULL)->u.socket)libxl_device_channel_connection_union_socket;
+typedef typeof(((struct libxl_event *)NULL)->u.domain_shutdown)libxl_event_type_union_domain_shutdown;
+typedef typeof(((struct libxl_event *)NULL)->u.disk_eject)libxl_event_type_union_disk_eject;
+typedef typeof(((struct libxl_event *)NULL)->u.operation_complete)libxl_event_type_union_operation_complete;
+typedef typeof(((struct libxl_psr_hw_info *)NULL)->u.cat)libxl_psr_hw_info_type_union_cat;
+typedef typeof(((struct libxl_psr_hw_info *)NULL)->u.mba)libxl_psr_hw_info_type_union_mba;
+*/
+import "C"
+
+// NewIoportRange returns an instance of IoportRange initialized with defaults.
+func NewIoportRange() (*IoportRange, error) {
+	var (
+		x  IoportRange
+		xc C.libxl_ioport_range
+	)
+
+	C.libxl_ioport_range_init(&xc)
+	defer C.libxl_ioport_range_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *IoportRange) fromC(xc *C.libxl_ioport_range) error {
+	x.First = uint32(xc.first)
+	x.Number = uint32(xc.number)
+
+	return nil
+}
+
+func (x *IoportRange) toC(xc *C.libxl_ioport_range) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_ioport_range_dispose(xc)
+		}
+	}()
+
+	xc.first = C.uint32_t(x.First)
+	xc.number = C.uint32_t(x.Number)
+
+	return nil
+}
+
+// NewIomemRange returns an instance of IomemRange initialized with defaults.
+func NewIomemRange() (*IomemRange, error) {
+	var (
+		x  IomemRange
+		xc C.libxl_iomem_range
+	)
+
+	C.libxl_iomem_range_init(&xc)
+	defer C.libxl_iomem_range_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *IomemRange) fromC(xc *C.libxl_iomem_range) error {
+	x.Start = uint64(xc.start)
+	x.Number = uint64(xc.number)
+	x.Gfn = uint64(xc.gfn)
+
+	return nil
+}
+
+func (x *IomemRange) toC(xc *C.libxl_iomem_range) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_iomem_range_dispose(xc)
+		}
+	}()
+
+	xc.start = C.uint64_t(x.Start)
+	xc.number = C.uint64_t(x.Number)
+	xc.gfn = C.uint64_t(x.Gfn)
+
+	return nil
+}
+
+// NewVgaInterfaceInfo returns an instance of VgaInterfaceInfo initialized with defaults.
+func NewVgaInterfaceInfo() (*VgaInterfaceInfo, error) {
+	var (
+		x  VgaInterfaceInfo
+		xc C.libxl_vga_interface_info
+	)
+
+	C.libxl_vga_interface_info_init(&xc)
+	defer C.libxl_vga_interface_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VgaInterfaceInfo) fromC(xc *C.libxl_vga_interface_info) error {
+	x.Kind = VgaInterfaceType(xc.kind)
+
+	return nil
+}
+
+func (x *VgaInterfaceInfo) toC(xc *C.libxl_vga_interface_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vga_interface_info_dispose(xc)
+		}
+	}()
+
+	xc.kind = C.libxl_vga_interface_type(x.Kind)
+
+	return nil
+}
+
+// NewVncInfo returns an instance of VncInfo initialized with defaults.
+func NewVncInfo() (*VncInfo, error) {
+	var (
+		x  VncInfo
+		xc C.libxl_vnc_info
+	)
+
+	C.libxl_vnc_info_init(&xc)
+	defer C.libxl_vnc_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VncInfo) fromC(xc *C.libxl_vnc_info) error {
+	if err := x.Enable.fromC(&xc.enable); err != nil {
+		return fmt.Errorf("converting field Enable: %v", err)
+	}
+	x.Listen = C.GoString(xc.listen)
+	x.Passwd = C.GoString(xc.passwd)
+	x.Display = int(xc.display)
+	if err := x.Findunused.fromC(&xc.findunused); err != nil {
+		return fmt.Errorf("converting field Findunused: %v", err)
+	}
+
+	return nil
+}
+
+func (x *VncInfo) toC(xc *C.libxl_vnc_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vnc_info_dispose(xc)
+		}
+	}()
+
+	if err := x.Enable.toC(&xc.enable); err != nil {
+		return fmt.Errorf("converting field Enable: %v", err)
+	}
+	if x.Listen != "" {
+		xc.listen = C.CString(x.Listen)
+	}
+	if x.Passwd != "" {
+		xc.passwd = C.CString(x.Passwd)
+	}
+	xc.display = C.int(x.Display)
+	if err := x.Findunused.toC(&xc.findunused); err != nil {
+		return fmt.Errorf("converting field Findunused: %v", err)
+	}
+
+	return nil
+}
+
+// NewSpiceInfo returns an instance of SpiceInfo initialized with defaults.
+func NewSpiceInfo() (*SpiceInfo, error) {
+	var (
+		x  SpiceInfo
+		xc C.libxl_spice_info
+	)
+
+	C.libxl_spice_info_init(&xc)
+	defer C.libxl_spice_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *SpiceInfo) fromC(xc *C.libxl_spice_info) error {
+	if err := x.Enable.fromC(&xc.enable); err != nil {
+		return fmt.Errorf("converting field Enable: %v", err)
+	}
+	x.Port = int(xc.port)
+	x.TlsPort = int(xc.tls_port)
+	x.Host = C.GoString(xc.host)
+	if err := x.DisableTicketing.fromC(&xc.disable_ticketing); err != nil {
+		return fmt.Errorf("converting field DisableTicketing: %v", err)
+	}
+	x.Passwd = C.GoString(xc.passwd)
+	if err := x.AgentMouse.fromC(&xc.agent_mouse); err != nil {
+		return fmt.Errorf("converting field AgentMouse: %v", err)
+	}
+	if err := x.Vdagent.fromC(&xc.vdagent); err != nil {
+		return fmt.Errorf("converting field Vdagent: %v", err)
+	}
+	if err := x.ClipboardSharing.fromC(&xc.clipboard_sharing); err != nil {
+		return fmt.Errorf("converting field ClipboardSharing: %v", err)
+	}
+	x.Usbredirection = int(xc.usbredirection)
+	x.ImageCompression = C.GoString(xc.image_compression)
+	x.StreamingVideo = C.GoString(xc.streaming_video)
+
+	return nil
+}
+
+func (x *SpiceInfo) toC(xc *C.libxl_spice_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_spice_info_dispose(xc)
+		}
+	}()
+
+	if err := x.Enable.toC(&xc.enable); err != nil {
+		return fmt.Errorf("converting field Enable: %v", err)
+	}
+	xc.port = C.int(x.Port)
+	xc.tls_port = C.int(x.TlsPort)
+	if x.Host != "" {
+		xc.host = C.CString(x.Host)
+	}
+	if err := x.DisableTicketing.toC(&xc.disable_ticketing); err != nil {
+		return fmt.Errorf("converting field DisableTicketing: %v", err)
+	}
+	if x.Passwd != "" {
+		xc.passwd = C.CString(x.Passwd)
+	}
+	if err := x.AgentMouse.toC(&xc.agent_mouse); err != nil {
+		return fmt.Errorf("converting field AgentMouse: %v", err)
+	}
+	if err := x.Vdagent.toC(&xc.vdagent); err != nil {
+		return fmt.Errorf("converting field Vdagent: %v", err)
+	}
+	if err := x.ClipboardSharing.toC(&xc.clipboard_sharing); err != nil {
+		return fmt.Errorf("converting field ClipboardSharing: %v", err)
+	}
+	xc.usbredirection = C.int(x.Usbredirection)
+	if x.ImageCompression != "" {
+		xc.image_compression = C.CString(x.ImageCompression)
+	}
+	if x.StreamingVideo != "" {
+		xc.streaming_video = C.CString(x.StreamingVideo)
+	}
+
+	return nil
+}
+
+// NewSdlInfo returns an instance of SdlInfo initialized with defaults.
+func NewSdlInfo() (*SdlInfo, error) {
+	var (
+		x  SdlInfo
+		xc C.libxl_sdl_info
+	)
+
+	C.libxl_sdl_info_init(&xc)
+	defer C.libxl_sdl_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *SdlInfo) fromC(xc *C.libxl_sdl_info) error {
+	if err := x.Enable.fromC(&xc.enable); err != nil {
+		return fmt.Errorf("converting field Enable: %v", err)
+	}
+	if err := x.Opengl.fromC(&xc.opengl); err != nil {
+		return fmt.Errorf("converting field Opengl: %v", err)
+	}
+	x.Display = C.GoString(xc.display)
+	x.Xauthority = C.GoString(xc.xauthority)
+
+	return nil
+}
+
+func (x *SdlInfo) toC(xc *C.libxl_sdl_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_sdl_info_dispose(xc)
+		}
+	}()
+
+	if err := x.Enable.toC(&xc.enable); err != nil {
+		return fmt.Errorf("converting field Enable: %v", err)
+	}
+	if err := x.Opengl.toC(&xc.opengl); err != nil {
+		return fmt.Errorf("converting field Opengl: %v", err)
+	}
+	if x.Display != "" {
+		xc.display = C.CString(x.Display)
+	}
+	if x.Xauthority != "" {
+		xc.xauthority = C.CString(x.Xauthority)
+	}
+
+	return nil
+}
+
+// NewDominfo returns an instance of Dominfo initialized with defaults.
+func NewDominfo() (*Dominfo, error) {
+	var (
+		x  Dominfo
+		xc C.libxl_dominfo
+	)
+
+	C.libxl_dominfo_init(&xc)
+	defer C.libxl_dominfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Dominfo) fromC(xc *C.libxl_dominfo) error {
+	if err := x.Uuid.fromC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+	x.Domid = Domid(xc.domid)
+	x.Ssidref = uint32(xc.ssidref)
+	x.SsidLabel = C.GoString(xc.ssid_label)
+	x.Running = bool(xc.running)
+	x.Blocked = bool(xc.blocked)
+	x.Paused = bool(xc.paused)
+	x.Shutdown = bool(xc.shutdown)
+	x.Dying = bool(xc.dying)
+	x.NeverStop = bool(xc.never_stop)
+	x.ShutdownReason = ShutdownReason(xc.shutdown_reason)
+	x.OutstandingMemkb = uint64(xc.outstanding_memkb)
+	x.CurrentMemkb = uint64(xc.current_memkb)
+	x.SharedMemkb = uint64(xc.shared_memkb)
+	x.PagedMemkb = uint64(xc.paged_memkb)
+	x.MaxMemkb = uint64(xc.max_memkb)
+	x.CpuTime = uint64(xc.cpu_time)
+	x.VcpuMaxId = uint32(xc.vcpu_max_id)
+	x.VcpuOnline = uint32(xc.vcpu_online)
+	x.Cpupool = uint32(xc.cpupool)
+	x.DomainType = DomainType(xc.domain_type)
+
+	return nil
+}
+
+func (x *Dominfo) toC(xc *C.libxl_dominfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_dominfo_dispose(xc)
+		}
+	}()
+
+	if err := x.Uuid.toC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+	xc.domid = C.libxl_domid(x.Domid)
+	xc.ssidref = C.uint32_t(x.Ssidref)
+	if x.SsidLabel != "" {
+		xc.ssid_label = C.CString(x.SsidLabel)
+	}
+	xc.running = C.bool(x.Running)
+	xc.blocked = C.bool(x.Blocked)
+	xc.paused = C.bool(x.Paused)
+	xc.shutdown = C.bool(x.Shutdown)
+	xc.dying = C.bool(x.Dying)
+	xc.never_stop = C.bool(x.NeverStop)
+	xc.shutdown_reason = C.libxl_shutdown_reason(x.ShutdownReason)
+	xc.outstanding_memkb = C.uint64_t(x.OutstandingMemkb)
+	xc.current_memkb = C.uint64_t(x.CurrentMemkb)
+	xc.shared_memkb = C.uint64_t(x.SharedMemkb)
+	xc.paged_memkb = C.uint64_t(x.PagedMemkb)
+	xc.max_memkb = C.uint64_t(x.MaxMemkb)
+	xc.cpu_time = C.uint64_t(x.CpuTime)
+	xc.vcpu_max_id = C.uint32_t(x.VcpuMaxId)
+	xc.vcpu_online = C.uint32_t(x.VcpuOnline)
+	xc.cpupool = C.uint32_t(x.Cpupool)
+	xc.domain_type = C.libxl_domain_type(x.DomainType)
+
+	return nil
+}
+
+// NewCpupoolinfo returns an instance of Cpupoolinfo initialized with defaults.
+func NewCpupoolinfo() (*Cpupoolinfo, error) {
+	var (
+		x  Cpupoolinfo
+		xc C.libxl_cpupoolinfo
+	)
+
+	C.libxl_cpupoolinfo_init(&xc)
+	defer C.libxl_cpupoolinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Cpupoolinfo) fromC(xc *C.libxl_cpupoolinfo) error {
+	x.Poolid = uint32(xc.poolid)
+	x.PoolName = C.GoString(xc.pool_name)
+	x.Sched = Scheduler(xc.sched)
+	x.NDom = uint32(xc.n_dom)
+	if err := x.Cpumap.fromC(&xc.cpumap); err != nil {
+		return fmt.Errorf("converting field Cpumap: %v", err)
+	}
+
+	return nil
+}
+
+func (x *Cpupoolinfo) toC(xc *C.libxl_cpupoolinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_cpupoolinfo_dispose(xc)
+		}
+	}()
+
+	xc.poolid = C.uint32_t(x.Poolid)
+	if x.PoolName != "" {
+		xc.pool_name = C.CString(x.PoolName)
+	}
+	xc.sched = C.libxl_scheduler(x.Sched)
+	xc.n_dom = C.uint32_t(x.NDom)
+	if err := x.Cpumap.toC(&xc.cpumap); err != nil {
+		return fmt.Errorf("converting field Cpumap: %v", err)
+	}
+
+	return nil
+}
+
+// NewChannelinfo returns an instance of Channelinfo initialized with defaults.
+func NewChannelinfo(connection ChannelConnection) (*Channelinfo, error) {
+	var (
+		x  Channelinfo
+		xc C.libxl_channelinfo
+	)
+
+	C.libxl_channelinfo_init(&xc)
+	C.libxl_channelinfo_init_connection(&xc, C.libxl_channel_connection(connection))
+	defer C.libxl_channelinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Channelinfo) fromC(xc *C.libxl_channelinfo) error {
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.Devid = Devid(xc.devid)
+	x.State = int(xc.state)
+	x.Evtch = int(xc.evtch)
+	x.Rref = int(xc.rref)
+	x.Connection = ChannelConnection(xc.connection)
+	switch x.Connection {
+	case ChannelConnectionUnknown:
+		x.ConnectionUnion = nil
+	case ChannelConnectionPty:
+		var connectionPty ChannelinfoConnectionUnionPty
+		if err := connectionPty.fromC(xc); err != nil {
+			return fmt.Errorf("converting field connectionPty: %v", err)
+		}
+		x.ConnectionUnion = connectionPty
+	case ChannelConnectionSocket:
+		x.ConnectionUnion = nil
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Connection)
+	}
+
+	return nil
+}
+
+func (x *ChannelinfoConnectionUnionPty) fromC(xc *C.libxl_channelinfo) error {
+	if ChannelConnection(xc.connection) != ChannelConnectionPty {
+		return errors.New("expected union key ChannelConnectionPty")
+	}
+
+	tmp := (*C.libxl_channelinfo_connection_union_pty)(unsafe.Pointer(&xc.u[0]))
+	x.Path = C.GoString(tmp.path)
+	return nil
+}
+
+func (x *Channelinfo) toC(xc *C.libxl_channelinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_channelinfo_dispose(xc)
+		}
+	}()
+
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.state = C.int(x.State)
+	xc.evtch = C.int(x.Evtch)
+	xc.rref = C.int(x.Rref)
+	xc.connection = C.libxl_channel_connection(x.Connection)
+	switch x.Connection {
+	case ChannelConnectionUnknown:
+		break
+	case ChannelConnectionPty:
+		tmp, ok := x.ConnectionUnion.(ChannelinfoConnectionUnionPty)
+		if !ok {
+			return errors.New("wrong type for union key connection")
+		}
+		var pty C.libxl_channelinfo_connection_union_pty
+		if tmp.Path != "" {
+			pty.path = C.CString(tmp.Path)
+		}
+		ptyBytes := C.GoBytes(unsafe.Pointer(&pty), C.sizeof_libxl_channelinfo_connection_union_pty)
+		copy(xc.u[:], ptyBytes)
+	case ChannelConnectionSocket:
+		break
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Connection)
+	}
+
+	return nil
+}
+
+// NewVminfo returns an instance of Vminfo initialized with defaults.
+func NewVminfo() (*Vminfo, error) {
+	var (
+		x  Vminfo
+		xc C.libxl_vminfo
+	)
+
+	C.libxl_vminfo_init(&xc)
+	defer C.libxl_vminfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Vminfo) fromC(xc *C.libxl_vminfo) error {
+	if err := x.Uuid.fromC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+	x.Domid = Domid(xc.domid)
+
+	return nil
+}
+
+func (x *Vminfo) toC(xc *C.libxl_vminfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vminfo_dispose(xc)
+		}
+	}()
+
+	if err := x.Uuid.toC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+	xc.domid = C.libxl_domid(x.Domid)
+
+	return nil
+}
+
+// NewVersionInfo returns an instance of VersionInfo initialized with defaults.
+func NewVersionInfo() (*VersionInfo, error) {
+	var (
+		x  VersionInfo
+		xc C.libxl_version_info
+	)
+
+	C.libxl_version_info_init(&xc)
+	defer C.libxl_version_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VersionInfo) fromC(xc *C.libxl_version_info) error {
+	x.XenVersionMajor = int(xc.xen_version_major)
+	x.XenVersionMinor = int(xc.xen_version_minor)
+	x.XenVersionExtra = C.GoString(xc.xen_version_extra)
+	x.Compiler = C.GoString(xc.compiler)
+	x.CompileBy = C.GoString(xc.compile_by)
+	x.CompileDomain = C.GoString(xc.compile_domain)
+	x.CompileDate = C.GoString(xc.compile_date)
+	x.Capabilities = C.GoString(xc.capabilities)
+	x.Changeset = C.GoString(xc.changeset)
+	x.VirtStart = uint64(xc.virt_start)
+	x.Pagesize = int(xc.pagesize)
+	x.Commandline = C.GoString(xc.commandline)
+	x.BuildId = C.GoString(xc.build_id)
+
+	return nil
+}
+
+func (x *VersionInfo) toC(xc *C.libxl_version_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_version_info_dispose(xc)
+		}
+	}()
+
+	xc.xen_version_major = C.int(x.XenVersionMajor)
+	xc.xen_version_minor = C.int(x.XenVersionMinor)
+	if x.XenVersionExtra != "" {
+		xc.xen_version_extra = C.CString(x.XenVersionExtra)
+	}
+	if x.Compiler != "" {
+		xc.compiler = C.CString(x.Compiler)
+	}
+	if x.CompileBy != "" {
+		xc.compile_by = C.CString(x.CompileBy)
+	}
+	if x.CompileDomain != "" {
+		xc.compile_domain = C.CString(x.CompileDomain)
+	}
+	if x.CompileDate != "" {
+		xc.compile_date = C.CString(x.CompileDate)
+	}
+	if x.Capabilities != "" {
+		xc.capabilities = C.CString(x.Capabilities)
+	}
+	if x.Changeset != "" {
+		xc.changeset = C.CString(x.Changeset)
+	}
+	xc.virt_start = C.uint64_t(x.VirtStart)
+	xc.pagesize = C.int(x.Pagesize)
+	if x.Commandline != "" {
+		xc.commandline = C.CString(x.Commandline)
+	}
+	if x.BuildId != "" {
+		xc.build_id = C.CString(x.BuildId)
+	}
+
+	return nil
+}
+
+// NewDomainCreateInfo returns an instance of DomainCreateInfo initialized with defaults.
+func NewDomainCreateInfo() (*DomainCreateInfo, error) {
+	var (
+		x  DomainCreateInfo
+		xc C.libxl_domain_create_info
+	)
+
+	C.libxl_domain_create_info_init(&xc)
+	defer C.libxl_domain_create_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DomainCreateInfo) fromC(xc *C.libxl_domain_create_info) error {
+	x.Type = DomainType(xc._type)
+	if err := x.Hap.fromC(&xc.hap); err != nil {
+		return fmt.Errorf("converting field Hap: %v", err)
+	}
+	if err := x.Oos.fromC(&xc.oos); err != nil {
+		return fmt.Errorf("converting field Oos: %v", err)
+	}
+	x.Ssidref = uint32(xc.ssidref)
+	x.SsidLabel = C.GoString(xc.ssid_label)
+	x.Name = C.GoString(xc.name)
+	x.Domid = Domid(xc.domid)
+	if err := x.Uuid.fromC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+	if err := x.Xsdata.fromC(&xc.xsdata); err != nil {
+		return fmt.Errorf("converting field Xsdata: %v", err)
+	}
+	if err := x.Platformdata.fromC(&xc.platformdata); err != nil {
+		return fmt.Errorf("converting field Platformdata: %v", err)
+	}
+	x.Poolid = uint32(xc.poolid)
+	x.PoolName = C.GoString(xc.pool_name)
+	if err := x.RunHotplugScripts.fromC(&xc.run_hotplug_scripts); err != nil {
+		return fmt.Errorf("converting field RunHotplugScripts: %v", err)
+	}
+	if err := x.DriverDomain.fromC(&xc.driver_domain); err != nil {
+		return fmt.Errorf("converting field DriverDomain: %v", err)
+	}
+	x.Passthrough = Passthrough(xc.passthrough)
+	if err := x.XendSuspendEvtchnCompat.fromC(&xc.xend_suspend_evtchn_compat); err != nil {
+		return fmt.Errorf("converting field XendSuspendEvtchnCompat: %v", err)
+	}
+
+	return nil
+}
+
+func (x *DomainCreateInfo) toC(xc *C.libxl_domain_create_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_domain_create_info_dispose(xc)
+		}
+	}()
+
+	xc._type = C.libxl_domain_type(x.Type)
+	if err := x.Hap.toC(&xc.hap); err != nil {
+		return fmt.Errorf("converting field Hap: %v", err)
+	}
+	if err := x.Oos.toC(&xc.oos); err != nil {
+		return fmt.Errorf("converting field Oos: %v", err)
+	}
+	xc.ssidref = C.uint32_t(x.Ssidref)
+	if x.SsidLabel != "" {
+		xc.ssid_label = C.CString(x.SsidLabel)
+	}
+	if x.Name != "" {
+		xc.name = C.CString(x.Name)
+	}
+	xc.domid = C.libxl_domid(x.Domid)
+	if err := x.Uuid.toC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+	if err := x.Xsdata.toC(&xc.xsdata); err != nil {
+		return fmt.Errorf("converting field Xsdata: %v", err)
+	}
+	if err := x.Platformdata.toC(&xc.platformdata); err != nil {
+		return fmt.Errorf("converting field Platformdata: %v", err)
+	}
+	xc.poolid = C.uint32_t(x.Poolid)
+	if x.PoolName != "" {
+		xc.pool_name = C.CString(x.PoolName)
+	}
+	if err := x.RunHotplugScripts.toC(&xc.run_hotplug_scripts); err != nil {
+		return fmt.Errorf("converting field RunHotplugScripts: %v", err)
+	}
+	if err := x.DriverDomain.toC(&xc.driver_domain); err != nil {
+		return fmt.Errorf("converting field DriverDomain: %v", err)
+	}
+	xc.passthrough = C.libxl_passthrough(x.Passthrough)
+	if err := x.XendSuspendEvtchnCompat.toC(&xc.xend_suspend_evtchn_compat); err != nil {
+		return fmt.Errorf("converting field XendSuspendEvtchnCompat: %v", err)
+	}
+
+	return nil
+}
+
+// NewDomainRestoreParams returns an instance of DomainRestoreParams initialized with defaults.
+func NewDomainRestoreParams() (*DomainRestoreParams, error) {
+	var (
+		x  DomainRestoreParams
+		xc C.libxl_domain_restore_params
+	)
+
+	C.libxl_domain_restore_params_init(&xc)
+	defer C.libxl_domain_restore_params_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DomainRestoreParams) fromC(xc *C.libxl_domain_restore_params) error {
+	x.CheckpointedStream = int(xc.checkpointed_stream)
+	x.StreamVersion = uint32(xc.stream_version)
+	x.ColoProxyScript = C.GoString(xc.colo_proxy_script)
+	if err := x.UserspaceColoProxy.fromC(&xc.userspace_colo_proxy); err != nil {
+		return fmt.Errorf("converting field UserspaceColoProxy: %v", err)
+	}
+
+	return nil
+}
+
+func (x *DomainRestoreParams) toC(xc *C.libxl_domain_restore_params) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_domain_restore_params_dispose(xc)
+		}
+	}()
+
+	xc.checkpointed_stream = C.int(x.CheckpointedStream)
+	xc.stream_version = C.uint32_t(x.StreamVersion)
+	if x.ColoProxyScript != "" {
+		xc.colo_proxy_script = C.CString(x.ColoProxyScript)
+	}
+	if err := x.UserspaceColoProxy.toC(&xc.userspace_colo_proxy); err != nil {
+		return fmt.Errorf("converting field UserspaceColoProxy: %v", err)
+	}
+
+	return nil
+}
+
+// NewSchedParams returns an instance of SchedParams initialized with defaults.
+func NewSchedParams() (*SchedParams, error) {
+	var (
+		x  SchedParams
+		xc C.libxl_sched_params
+	)
+
+	C.libxl_sched_params_init(&xc)
+	defer C.libxl_sched_params_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *SchedParams) fromC(xc *C.libxl_sched_params) error {
+	x.Vcpuid = int(xc.vcpuid)
+	x.Weight = int(xc.weight)
+	x.Cap = int(xc.cap)
+	x.Period = int(xc.period)
+	x.Extratime = int(xc.extratime)
+	x.Budget = int(xc.budget)
+
+	return nil
+}
+
+func (x *SchedParams) toC(xc *C.libxl_sched_params) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_sched_params_dispose(xc)
+		}
+	}()
+
+	xc.vcpuid = C.int(x.Vcpuid)
+	xc.weight = C.int(x.Weight)
+	xc.cap = C.int(x.Cap)
+	xc.period = C.int(x.Period)
+	xc.extratime = C.int(x.Extratime)
+	xc.budget = C.int(x.Budget)
+
+	return nil
+}
+
+// NewVcpuSchedParams returns an instance of VcpuSchedParams initialized with defaults.
+func NewVcpuSchedParams() (*VcpuSchedParams, error) {
+	var (
+		x  VcpuSchedParams
+		xc C.libxl_vcpu_sched_params
+	)
+
+	C.libxl_vcpu_sched_params_init(&xc)
+	defer C.libxl_vcpu_sched_params_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VcpuSchedParams) fromC(xc *C.libxl_vcpu_sched_params) error {
+	x.Sched = Scheduler(xc.sched)
+	x.Vcpus = nil
+	if n := int(xc.num_vcpus); n > 0 {
+		cVcpus := (*[1 << 28]C.libxl_sched_params)(unsafe.Pointer(xc.vcpus))[:n:n]
+		x.Vcpus = make([]SchedParams, n)
+		for i, v := range cVcpus {
+			if err := x.Vcpus[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Vcpus: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (x *VcpuSchedParams) toC(xc *C.libxl_vcpu_sched_params) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vcpu_sched_params_dispose(xc)
+		}
+	}()
+
+	xc.sched = C.libxl_scheduler(x.Sched)
+	if numVcpus := len(x.Vcpus); numVcpus > 0 {
+		xc.vcpus = (*C.libxl_sched_params)(C.malloc(C.ulong(numVcpus) * C.sizeof_libxl_sched_params))
+		xc.num_vcpus = C.int(numVcpus)
+		cVcpus := (*[1 << 28]C.libxl_sched_params)(unsafe.Pointer(xc.vcpus))[:numVcpus:numVcpus]
+		for i, v := range x.Vcpus {
+			if err := v.toC(&cVcpus[i]); err != nil {
+				return fmt.Errorf("converting field Vcpus: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+// NewDomainSchedParams returns an instance of DomainSchedParams initialized with defaults.
+func NewDomainSchedParams() (*DomainSchedParams, error) {
+	var (
+		x  DomainSchedParams
+		xc C.libxl_domain_sched_params
+	)
+
+	C.libxl_domain_sched_params_init(&xc)
+	defer C.libxl_domain_sched_params_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DomainSchedParams) fromC(xc *C.libxl_domain_sched_params) error {
+	x.Sched = Scheduler(xc.sched)
+	x.Weight = int(xc.weight)
+	x.Cap = int(xc.cap)
+	x.Period = int(xc.period)
+	x.Budget = int(xc.budget)
+	x.Extratime = int(xc.extratime)
+	x.Slice = int(xc.slice)
+	x.Latency = int(xc.latency)
+
+	return nil
+}
+
+func (x *DomainSchedParams) toC(xc *C.libxl_domain_sched_params) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_domain_sched_params_dispose(xc)
+		}
+	}()
+
+	xc.sched = C.libxl_scheduler(x.Sched)
+	xc.weight = C.int(x.Weight)
+	xc.cap = C.int(x.Cap)
+	xc.period = C.int(x.Period)
+	xc.budget = C.int(x.Budget)
+	xc.extratime = C.int(x.Extratime)
+	xc.slice = C.int(x.Slice)
+	xc.latency = C.int(x.Latency)
+
+	return nil
+}
+
+// NewVnodeInfo returns an instance of VnodeInfo initialized with defaults.
+func NewVnodeInfo() (*VnodeInfo, error) {
+	var (
+		x  VnodeInfo
+		xc C.libxl_vnode_info
+	)
+
+	C.libxl_vnode_info_init(&xc)
+	defer C.libxl_vnode_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VnodeInfo) fromC(xc *C.libxl_vnode_info) error {
+	x.Memkb = uint64(xc.memkb)
+	x.Distances = nil
+	if n := int(xc.num_distances); n > 0 {
+		cDistances := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.distances))[:n:n]
+		x.Distances = make([]uint32, n)
+		for i, v := range cDistances {
+			x.Distances[i] = uint32(v)
+		}
+	}
+	x.Pnode = uint32(xc.pnode)
+	if err := x.Vcpus.fromC(&xc.vcpus); err != nil {
+		return fmt.Errorf("converting field Vcpus: %v", err)
+	}
+
+	return nil
+}
+
+func (x *VnodeInfo) toC(xc *C.libxl_vnode_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vnode_info_dispose(xc)
+		}
+	}()
+
+	xc.memkb = C.uint64_t(x.Memkb)
+	if numDistances := len(x.Distances); numDistances > 0 {
+		xc.distances = (*C.uint32_t)(C.malloc(C.ulong(numDistances) * C.sizeof_uint32_t))
+		xc.num_distances = C.int(numDistances)
+		cDistances := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.distances))[:numDistances:numDistances]
+		for i, v := range x.Distances {
+			cDistances[i] = C.uint32_t(v)
+		}
+	}
+	xc.pnode = C.uint32_t(x.Pnode)
+	if err := x.Vcpus.toC(&xc.vcpus); err != nil {
+		return fmt.Errorf("converting field Vcpus: %v", err)
+	}
+
+	return nil
+}
+
+// NewRdmReserve returns an instance of RdmReserve initialized with defaults.
+func NewRdmReserve() (*RdmReserve, error) {
+	var (
+		x  RdmReserve
+		xc C.libxl_rdm_reserve
+	)
+
+	C.libxl_rdm_reserve_init(&xc)
+	defer C.libxl_rdm_reserve_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *RdmReserve) fromC(xc *C.libxl_rdm_reserve) error {
+	x.Strategy = RdmReserveStrategy(xc.strategy)
+	x.Policy = RdmReservePolicy(xc.policy)
+
+	return nil
+}
+
+func (x *RdmReserve) toC(xc *C.libxl_rdm_reserve) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_rdm_reserve_dispose(xc)
+		}
+	}()
+
+	xc.strategy = C.libxl_rdm_reserve_strategy(x.Strategy)
+	xc.policy = C.libxl_rdm_reserve_policy(x.Policy)
+
+	return nil
+}
+
+// NewDomainBuildInfo returns an instance of DomainBuildInfo initialized with defaults.
+func NewDomainBuildInfo(dtype DomainType) (*DomainBuildInfo, error) {
+	var (
+		x  DomainBuildInfo
+		xc C.libxl_domain_build_info
+	)
+
+	C.libxl_domain_build_info_init(&xc)
+	C.libxl_domain_build_info_init_type(&xc, C.libxl_domain_type(dtype))
+	defer C.libxl_domain_build_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DomainBuildInfo) fromC(xc *C.libxl_domain_build_info) error {
+	x.MaxVcpus = int(xc.max_vcpus)
+	if err := x.AvailVcpus.fromC(&xc.avail_vcpus); err != nil {
+		return fmt.Errorf("converting field AvailVcpus: %v", err)
+	}
+	if err := x.Cpumap.fromC(&xc.cpumap); err != nil {
+		return fmt.Errorf("converting field Cpumap: %v", err)
+	}
+	if err := x.Nodemap.fromC(&xc.nodemap); err != nil {
+		return fmt.Errorf("converting field Nodemap: %v", err)
+	}
+	x.VcpuHardAffinity = nil
+	if n := int(xc.num_vcpu_hard_affinity); n > 0 {
+		cVcpuHardAffinity := (*[1 << 28]C.libxl_bitmap)(unsafe.Pointer(xc.vcpu_hard_affinity))[:n:n]
+		x.VcpuHardAffinity = make([]Bitmap, n)
+		for i, v := range cVcpuHardAffinity {
+			if err := x.VcpuHardAffinity[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field VcpuHardAffinity: %v", err)
+			}
+		}
+	}
+	x.VcpuSoftAffinity = nil
+	if n := int(xc.num_vcpu_soft_affinity); n > 0 {
+		cVcpuSoftAffinity := (*[1 << 28]C.libxl_bitmap)(unsafe.Pointer(xc.vcpu_soft_affinity))[:n:n]
+		x.VcpuSoftAffinity = make([]Bitmap, n)
+		for i, v := range cVcpuSoftAffinity {
+			if err := x.VcpuSoftAffinity[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field VcpuSoftAffinity: %v", err)
+			}
+		}
+	}
+	if err := x.NumaPlacement.fromC(&xc.numa_placement); err != nil {
+		return fmt.Errorf("converting field NumaPlacement: %v", err)
+	}
+	x.TscMode = TscMode(xc.tsc_mode)
+	x.MaxMemkb = uint64(xc.max_memkb)
+	x.TargetMemkb = uint64(xc.target_memkb)
+	x.VideoMemkb = uint64(xc.video_memkb)
+	x.ShadowMemkb = uint64(xc.shadow_memkb)
+	x.IommuMemkb = uint64(xc.iommu_memkb)
+	x.RtcTimeoffset = uint32(xc.rtc_timeoffset)
+	x.ExecSsidref = uint32(xc.exec_ssidref)
+	x.ExecSsidLabel = C.GoString(xc.exec_ssid_label)
+	if err := x.Localtime.fromC(&xc.localtime); err != nil {
+		return fmt.Errorf("converting field Localtime: %v", err)
+	}
+	if err := x.DisableMigrate.fromC(&xc.disable_migrate); err != nil {
+		return fmt.Errorf("converting field DisableMigrate: %v", err)
+	}
+	if err := x.Cpuid.fromC(&xc.cpuid); err != nil {
+		return fmt.Errorf("converting field Cpuid: %v", err)
+	}
+	x.BlkdevStart = C.GoString(xc.blkdev_start)
+	x.VnumaNodes = nil
+	if n := int(xc.num_vnuma_nodes); n > 0 {
+		cVnumaNodes := (*[1 << 28]C.libxl_vnode_info)(unsafe.Pointer(xc.vnuma_nodes))[:n:n]
+		x.VnumaNodes = make([]VnodeInfo, n)
+		for i, v := range cVnumaNodes {
+			if err := x.VnumaNodes[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field VnumaNodes: %v", err)
+			}
+		}
+	}
+	x.MaxGrantFrames = uint32(xc.max_grant_frames)
+	x.MaxMaptrackFrames = uint32(xc.max_maptrack_frames)
+	x.DeviceModelVersion = DeviceModelVersion(xc.device_model_version)
+	if err := x.DeviceModelStubdomain.fromC(&xc.device_model_stubdomain); err != nil {
+		return fmt.Errorf("converting field DeviceModelStubdomain: %v", err)
+	}
+	x.DeviceModel = C.GoString(xc.device_model)
+	x.DeviceModelSsidref = uint32(xc.device_model_ssidref)
+	x.DeviceModelSsidLabel = C.GoString(xc.device_model_ssid_label)
+	x.DeviceModelUser = C.GoString(xc.device_model_user)
+	if err := x.Extra.fromC(&xc.extra); err != nil {
+		return fmt.Errorf("converting field Extra: %v", err)
+	}
+	if err := x.ExtraPv.fromC(&xc.extra_pv); err != nil {
+		return fmt.Errorf("converting field ExtraPv: %v", err)
+	}
+	if err := x.ExtraHvm.fromC(&xc.extra_hvm); err != nil {
+		return fmt.Errorf("converting field ExtraHvm: %v", err)
+	}
+	if err := x.SchedParams.fromC(&xc.sched_params); err != nil {
+		return fmt.Errorf("converting field SchedParams: %v", err)
+	}
+	x.Ioports = nil
+	if n := int(xc.num_ioports); n > 0 {
+		cIoports := (*[1 << 28]C.libxl_ioport_range)(unsafe.Pointer(xc.ioports))[:n:n]
+		x.Ioports = make([]IoportRange, n)
+		for i, v := range cIoports {
+			if err := x.Ioports[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Ioports: %v", err)
+			}
+		}
+	}
+	x.Irqs = nil
+	if n := int(xc.num_irqs); n > 0 {
+		cIrqs := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.irqs))[:n:n]
+		x.Irqs = make([]uint32, n)
+		for i, v := range cIrqs {
+			x.Irqs[i] = uint32(v)
+		}
+	}
+	x.Iomem = nil
+	if n := int(xc.num_iomem); n > 0 {
+		cIomem := (*[1 << 28]C.libxl_iomem_range)(unsafe.Pointer(xc.iomem))[:n:n]
+		x.Iomem = make([]IomemRange, n)
+		for i, v := range cIomem {
+			if err := x.Iomem[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Iomem: %v", err)
+			}
+		}
+	}
+	if err := x.ClaimMode.fromC(&xc.claim_mode); err != nil {
+		return fmt.Errorf("converting field ClaimMode: %v", err)
+	}
+	x.EventChannels = uint32(xc.event_channels)
+	x.Kernel = C.GoString(xc.kernel)
+	x.Cmdline = C.GoString(xc.cmdline)
+	x.Ramdisk = C.GoString(xc.ramdisk)
+	x.DeviceTree = C.GoString(xc.device_tree)
+	if err := x.Acpi.fromC(&xc.acpi); err != nil {
+		return fmt.Errorf("converting field Acpi: %v", err)
+	}
+	x.Bootloader = C.GoString(xc.bootloader)
+	if err := x.BootloaderArgs.fromC(&xc.bootloader_args); err != nil {
+		return fmt.Errorf("converting field BootloaderArgs: %v", err)
+	}
+	x.TimerMode = TimerMode(xc.timer_mode)
+	if err := x.NestedHvm.fromC(&xc.nested_hvm); err != nil {
+		return fmt.Errorf("converting field NestedHvm: %v", err)
+	}
+	if err := x.Apic.fromC(&xc.apic); err != nil {
+		return fmt.Errorf("converting field Apic: %v", err)
+	}
+	if err := x.DmRestrict.fromC(&xc.dm_restrict); err != nil {
+		return fmt.Errorf("converting field DmRestrict: %v", err)
+	}
+	x.Tee = TeeType(xc.tee)
+	x.Type = DomainType(xc._type)
+	switch x.Type {
+	case DomainTypeHvm:
+		var typeHvm DomainBuildInfoTypeUnionHvm
+		if err := typeHvm.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typeHvm: %v", err)
+		}
+		x.TypeUnion = typeHvm
+	case DomainTypePv:
+		var typePv DomainBuildInfoTypeUnionPv
+		if err := typePv.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typePv: %v", err)
+		}
+		x.TypeUnion = typePv
+	case DomainTypePvh:
+		var typePvh DomainBuildInfoTypeUnionPvh
+		if err := typePvh.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typePvh: %v", err)
+		}
+		x.TypeUnion = typePvh
+	case DomainTypeInvalid:
+		x.TypeUnion = nil
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+	x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
+	x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
+	x.Altp2M = Altp2MMode(xc.altp2m)
+
+	return nil
+}
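The `Type`/`TypeUnion` pair above models a C discriminated union as a Go interface holding one of several variant structs, checked against the key with a type assertion. A minimal standalone sketch of that pattern (types and names here are illustrative stand-ins, not the libxl definitions):

```go
package main

import "fmt"

// DomainType is the union discriminator, mirroring the generated bindings.
type DomainType int

const (
	DomainTypePv DomainType = iota
	DomainTypeHvm
)

// One struct per union variant; an interface{} field holds whichever applies.
type typePv struct{ Kernel string }
type typeHvm struct{ Firmware string }

// describe checks that the interface value matches the discriminator before
// using it, exactly as the generated toC converters do.
func describe(t DomainType, u interface{}) (string, error) {
	switch t {
	case DomainTypePv:
		pv, ok := u.(typePv)
		if !ok {
			return "", fmt.Errorf("wrong type for union key %v", t)
		}
		return "pv kernel " + pv.Kernel, nil
	case DomainTypeHvm:
		hvm, ok := u.(typeHvm)
		if !ok {
			return "", fmt.Errorf("wrong type for union key %v", t)
		}
		return "hvm firmware " + hvm.Firmware, nil
	}
	return "", fmt.Errorf("invalid union key '%v'", t)
}

func main() {
	s, err := describe(DomainTypePv, typePv{Kernel: "vmlinuz"})
	fmt.Println(s, err)
}
```

Keeping the discriminator separate from the interface value is what makes the "wrong type for union key" error path above reachable: a caller can set `Type` and `TypeUnion` inconsistently, and the assertion catches it before any C memory is written.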
+
+func (x *DomainBuildInfoTypeUnionHvm) fromC(xc *C.libxl_domain_build_info) error {
+	if DomainType(xc._type) != DomainTypeHvm {
+		return errors.New("expected union key DomainTypeHvm")
+	}
+
+	tmp := (*C.libxl_domain_build_info_type_union_hvm)(unsafe.Pointer(&xc.u[0]))
+	x.Firmware = C.GoString(tmp.firmware)
+	x.Bios = BiosType(tmp.bios)
+	if err := x.Pae.fromC(&tmp.pae); err != nil {
+		return fmt.Errorf("converting field Pae: %v", err)
+	}
+	if err := x.Apic.fromC(&tmp.apic); err != nil {
+		return fmt.Errorf("converting field Apic: %v", err)
+	}
+	if err := x.Acpi.fromC(&tmp.acpi); err != nil {
+		return fmt.Errorf("converting field Acpi: %v", err)
+	}
+	if err := x.AcpiS3.fromC(&tmp.acpi_s3); err != nil {
+		return fmt.Errorf("converting field AcpiS3: %v", err)
+	}
+	if err := x.AcpiS4.fromC(&tmp.acpi_s4); err != nil {
+		return fmt.Errorf("converting field AcpiS4: %v", err)
+	}
+	if err := x.AcpiLaptopSlate.fromC(&tmp.acpi_laptop_slate); err != nil {
+		return fmt.Errorf("converting field AcpiLaptopSlate: %v", err)
+	}
+	if err := x.Nx.fromC(&tmp.nx); err != nil {
+		return fmt.Errorf("converting field Nx: %v", err)
+	}
+	if err := x.Viridian.fromC(&tmp.viridian); err != nil {
+		return fmt.Errorf("converting field Viridian: %v", err)
+	}
+	if err := x.ViridianEnable.fromC(&tmp.viridian_enable); err != nil {
+		return fmt.Errorf("converting field ViridianEnable: %v", err)
+	}
+	if err := x.ViridianDisable.fromC(&tmp.viridian_disable); err != nil {
+		return fmt.Errorf("converting field ViridianDisable: %v", err)
+	}
+	x.Timeoffset = C.GoString(tmp.timeoffset)
+	if err := x.Hpet.fromC(&tmp.hpet); err != nil {
+		return fmt.Errorf("converting field Hpet: %v", err)
+	}
+	if err := x.VptAlign.fromC(&tmp.vpt_align); err != nil {
+		return fmt.Errorf("converting field VptAlign: %v", err)
+	}
+	x.MmioHoleMemkb = uint64(tmp.mmio_hole_memkb)
+	x.TimerMode = TimerMode(tmp.timer_mode)
+	if err := x.NestedHvm.fromC(&tmp.nested_hvm); err != nil {
+		return fmt.Errorf("converting field NestedHvm: %v", err)
+	}
+	if err := x.Altp2M.fromC(&tmp.altp2m); err != nil {
+		return fmt.Errorf("converting field Altp2M: %v", err)
+	}
+	x.SystemFirmware = C.GoString(tmp.system_firmware)
+	x.SmbiosFirmware = C.GoString(tmp.smbios_firmware)
+	x.AcpiFirmware = C.GoString(tmp.acpi_firmware)
+	x.Hdtype = Hdtype(tmp.hdtype)
+	if err := x.Nographic.fromC(&tmp.nographic); err != nil {
+		return fmt.Errorf("converting field Nographic: %v", err)
+	}
+	if err := x.Vga.fromC(&tmp.vga); err != nil {
+		return fmt.Errorf("converting field Vga: %v", err)
+	}
+	if err := x.Vnc.fromC(&tmp.vnc); err != nil {
+		return fmt.Errorf("converting field Vnc: %v", err)
+	}
+	x.Keymap = C.GoString(tmp.keymap)
+	if err := x.Sdl.fromC(&tmp.sdl); err != nil {
+		return fmt.Errorf("converting field Sdl: %v", err)
+	}
+	if err := x.Spice.fromC(&tmp.spice); err != nil {
+		return fmt.Errorf("converting field Spice: %v", err)
+	}
+	if err := x.GfxPassthru.fromC(&tmp.gfx_passthru); err != nil {
+		return fmt.Errorf("converting field GfxPassthru: %v", err)
+	}
+	x.GfxPassthruKind = GfxPassthruKind(tmp.gfx_passthru_kind)
+	x.Serial = C.GoString(tmp.serial)
+	x.Boot = C.GoString(tmp.boot)
+	if err := x.Usb.fromC(&tmp.usb); err != nil {
+		return fmt.Errorf("converting field Usb: %v", err)
+	}
+	x.Usbversion = int(tmp.usbversion)
+	x.Usbdevice = C.GoString(tmp.usbdevice)
+	if err := x.VkbDevice.fromC(&tmp.vkb_device); err != nil {
+		return fmt.Errorf("converting field VkbDevice: %v", err)
+	}
+	x.Soundhw = C.GoString(tmp.soundhw)
+	if err := x.XenPlatformPci.fromC(&tmp.xen_platform_pci); err != nil {
+		return fmt.Errorf("converting field XenPlatformPci: %v", err)
+	}
+	if err := x.UsbdeviceList.fromC(&tmp.usbdevice_list); err != nil {
+		return fmt.Errorf("converting field UsbdeviceList: %v", err)
+	}
+	x.VendorDevice = VendorDevice(tmp.vendor_device)
+	if err := x.MsVmGenid.fromC(&tmp.ms_vm_genid); err != nil {
+		return fmt.Errorf("converting field MsVmGenid: %v", err)
+	}
+	if err := x.SerialList.fromC(&tmp.serial_list); err != nil {
+		return fmt.Errorf("converting field SerialList: %v", err)
+	}
+	if err := x.Rdm.fromC(&tmp.rdm); err != nil {
+		return fmt.Errorf("converting field Rdm: %v", err)
+	}
+	x.RdmMemBoundaryMemkb = uint64(tmp.rdm_mem_boundary_memkb)
+	x.McaCaps = uint64(tmp.mca_caps)
+	return nil
+}
+
+func (x *DomainBuildInfoTypeUnionPv) fromC(xc *C.libxl_domain_build_info) error {
+	if DomainType(xc._type) != DomainTypePv {
+		return errors.New("expected union key DomainTypePv")
+	}
+
+	tmp := (*C.libxl_domain_build_info_type_union_pv)(unsafe.Pointer(&xc.u[0]))
+	x.Kernel = C.GoString(tmp.kernel)
+	x.SlackMemkb = uint64(tmp.slack_memkb)
+	x.Bootloader = C.GoString(tmp.bootloader)
+	if err := x.BootloaderArgs.fromC(&tmp.bootloader_args); err != nil {
+		return fmt.Errorf("converting field BootloaderArgs: %v", err)
+	}
+	x.Cmdline = C.GoString(tmp.cmdline)
+	x.Ramdisk = C.GoString(tmp.ramdisk)
+	x.Features = C.GoString(tmp.features)
+	if err := x.E820Host.fromC(&tmp.e820_host); err != nil {
+		return fmt.Errorf("converting field E820Host: %v", err)
+	}
+	return nil
+}
+
+func (x *DomainBuildInfoTypeUnionPvh) fromC(xc *C.libxl_domain_build_info) error {
+	if DomainType(xc._type) != DomainTypePvh {
+		return errors.New("expected union key DomainTypePvh")
+	}
+
+	tmp := (*C.libxl_domain_build_info_type_union_pvh)(unsafe.Pointer(&xc.u[0]))
+	if err := x.Pvshim.fromC(&tmp.pvshim); err != nil {
+		return fmt.Errorf("converting field Pvshim: %v", err)
+	}
+	x.PvshimPath = C.GoString(tmp.pvshim_path)
+	x.PvshimCmdline = C.GoString(tmp.pvshim_cmdline)
+	x.PvshimExtra = C.GoString(tmp.pvshim_extra)
+	return nil
+}
+
+func (x *DomainBuildInfo) toC(xc *C.libxl_domain_build_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_domain_build_info_dispose(xc)
+		}
+	}()
+
+	xc.max_vcpus = C.int(x.MaxVcpus)
+	if err := x.AvailVcpus.toC(&xc.avail_vcpus); err != nil {
+		return fmt.Errorf("converting field AvailVcpus: %v", err)
+	}
+	if err := x.Cpumap.toC(&xc.cpumap); err != nil {
+		return fmt.Errorf("converting field Cpumap: %v", err)
+	}
+	if err := x.Nodemap.toC(&xc.nodemap); err != nil {
+		return fmt.Errorf("converting field Nodemap: %v", err)
+	}
+	if numVcpuHardAffinity := len(x.VcpuHardAffinity); numVcpuHardAffinity > 0 {
+		xc.vcpu_hard_affinity = (*C.libxl_bitmap)(C.malloc(C.ulong(numVcpuHardAffinity) * C.sizeof_libxl_bitmap))
+		xc.num_vcpu_hard_affinity = C.int(numVcpuHardAffinity)
+		cVcpuHardAffinity := (*[1 << 28]C.libxl_bitmap)(unsafe.Pointer(xc.vcpu_hard_affinity))[:numVcpuHardAffinity:numVcpuHardAffinity]
+		for i, v := range x.VcpuHardAffinity {
+			if err := v.toC(&cVcpuHardAffinity[i]); err != nil {
+				return fmt.Errorf("converting field VcpuHardAffinity: %v", err)
+			}
+		}
+	}
+	if numVcpuSoftAffinity := len(x.VcpuSoftAffinity); numVcpuSoftAffinity > 0 {
+		xc.vcpu_soft_affinity = (*C.libxl_bitmap)(C.malloc(C.ulong(numVcpuSoftAffinity) * C.sizeof_libxl_bitmap))
+		xc.num_vcpu_soft_affinity = C.int(numVcpuSoftAffinity)
+		cVcpuSoftAffinity := (*[1 << 28]C.libxl_bitmap)(unsafe.Pointer(xc.vcpu_soft_affinity))[:numVcpuSoftAffinity:numVcpuSoftAffinity]
+		for i, v := range x.VcpuSoftAffinity {
+			if err := v.toC(&cVcpuSoftAffinity[i]); err != nil {
+				return fmt.Errorf("converting field VcpuSoftAffinity: %v", err)
+			}
+		}
+	}
+	if err := x.NumaPlacement.toC(&xc.numa_placement); err != nil {
+		return fmt.Errorf("converting field NumaPlacement: %v", err)
+	}
+	xc.tsc_mode = C.libxl_tsc_mode(x.TscMode)
+	xc.max_memkb = C.uint64_t(x.MaxMemkb)
+	xc.target_memkb = C.uint64_t(x.TargetMemkb)
+	xc.video_memkb = C.uint64_t(x.VideoMemkb)
+	xc.shadow_memkb = C.uint64_t(x.ShadowMemkb)
+	xc.iommu_memkb = C.uint64_t(x.IommuMemkb)
+	xc.rtc_timeoffset = C.uint32_t(x.RtcTimeoffset)
+	xc.exec_ssidref = C.uint32_t(x.ExecSsidref)
+	if x.ExecSsidLabel != "" {
+		xc.exec_ssid_label = C.CString(x.ExecSsidLabel)
+	}
+	if err := x.Localtime.toC(&xc.localtime); err != nil {
+		return fmt.Errorf("converting field Localtime: %v", err)
+	}
+	if err := x.DisableMigrate.toC(&xc.disable_migrate); err != nil {
+		return fmt.Errorf("converting field DisableMigrate: %v", err)
+	}
+	if err := x.Cpuid.toC(&xc.cpuid); err != nil {
+		return fmt.Errorf("converting field Cpuid: %v", err)
+	}
+	if x.BlkdevStart != "" {
+		xc.blkdev_start = C.CString(x.BlkdevStart)
+	}
+	if numVnumaNodes := len(x.VnumaNodes); numVnumaNodes > 0 {
+		xc.vnuma_nodes = (*C.libxl_vnode_info)(C.malloc(C.ulong(numVnumaNodes) * C.sizeof_libxl_vnode_info))
+		xc.num_vnuma_nodes = C.int(numVnumaNodes)
+		cVnumaNodes := (*[1 << 28]C.libxl_vnode_info)(unsafe.Pointer(xc.vnuma_nodes))[:numVnumaNodes:numVnumaNodes]
+		for i, v := range x.VnumaNodes {
+			if err := v.toC(&cVnumaNodes[i]); err != nil {
+				return fmt.Errorf("converting field VnumaNodes: %v", err)
+			}
+		}
+	}
+	xc.max_grant_frames = C.uint32_t(x.MaxGrantFrames)
+	xc.max_maptrack_frames = C.uint32_t(x.MaxMaptrackFrames)
+	xc.device_model_version = C.libxl_device_model_version(x.DeviceModelVersion)
+	if err := x.DeviceModelStubdomain.toC(&xc.device_model_stubdomain); err != nil {
+		return fmt.Errorf("converting field DeviceModelStubdomain: %v", err)
+	}
+	if x.DeviceModel != "" {
+		xc.device_model = C.CString(x.DeviceModel)
+	}
+	xc.device_model_ssidref = C.uint32_t(x.DeviceModelSsidref)
+	if x.DeviceModelSsidLabel != "" {
+		xc.device_model_ssid_label = C.CString(x.DeviceModelSsidLabel)
+	}
+	if x.DeviceModelUser != "" {
+		xc.device_model_user = C.CString(x.DeviceModelUser)
+	}
+	if err := x.Extra.toC(&xc.extra); err != nil {
+		return fmt.Errorf("converting field Extra: %v", err)
+	}
+	if err := x.ExtraPv.toC(&xc.extra_pv); err != nil {
+		return fmt.Errorf("converting field ExtraPv: %v", err)
+	}
+	if err := x.ExtraHvm.toC(&xc.extra_hvm); err != nil {
+		return fmt.Errorf("converting field ExtraHvm: %v", err)
+	}
+	if err := x.SchedParams.toC(&xc.sched_params); err != nil {
+		return fmt.Errorf("converting field SchedParams: %v", err)
+	}
+	if numIoports := len(x.Ioports); numIoports > 0 {
+		xc.ioports = (*C.libxl_ioport_range)(C.malloc(C.ulong(numIoports) * C.sizeof_libxl_ioport_range))
+		xc.num_ioports = C.int(numIoports)
+		cIoports := (*[1 << 28]C.libxl_ioport_range)(unsafe.Pointer(xc.ioports))[:numIoports:numIoports]
+		for i, v := range x.Ioports {
+			if err := v.toC(&cIoports[i]); err != nil {
+				return fmt.Errorf("converting field Ioports: %v", err)
+			}
+		}
+	}
+	if numIrqs := len(x.Irqs); numIrqs > 0 {
+		xc.irqs = (*C.uint32_t)(C.malloc(C.ulong(numIrqs) * C.sizeof_uint32_t))
+		xc.num_irqs = C.int(numIrqs)
+		cIrqs := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.irqs))[:numIrqs:numIrqs]
+		for i, v := range x.Irqs {
+			cIrqs[i] = C.uint32_t(v)
+		}
+	}
+	if numIomem := len(x.Iomem); numIomem > 0 {
+		xc.iomem = (*C.libxl_iomem_range)(C.malloc(C.ulong(numIomem) * C.sizeof_libxl_iomem_range))
+		xc.num_iomem = C.int(numIomem)
+		cIomem := (*[1 << 28]C.libxl_iomem_range)(unsafe.Pointer(xc.iomem))[:numIomem:numIomem]
+		for i, v := range x.Iomem {
+			if err := v.toC(&cIomem[i]); err != nil {
+				return fmt.Errorf("converting field Iomem: %v", err)
+			}
+		}
+	}
+	if err := x.ClaimMode.toC(&xc.claim_mode); err != nil {
+		return fmt.Errorf("converting field ClaimMode: %v", err)
+	}
+	xc.event_channels = C.uint32_t(x.EventChannels)
+	if x.Kernel != "" {
+		xc.kernel = C.CString(x.Kernel)
+	}
+	if x.Cmdline != "" {
+		xc.cmdline = C.CString(x.Cmdline)
+	}
+	if x.Ramdisk != "" {
+		xc.ramdisk = C.CString(x.Ramdisk)
+	}
+	if x.DeviceTree != "" {
+		xc.device_tree = C.CString(x.DeviceTree)
+	}
+	if err := x.Acpi.toC(&xc.acpi); err != nil {
+		return fmt.Errorf("converting field Acpi: %v", err)
+	}
+	if x.Bootloader != "" {
+		xc.bootloader = C.CString(x.Bootloader)
+	}
+	if err := x.BootloaderArgs.toC(&xc.bootloader_args); err != nil {
+		return fmt.Errorf("converting field BootloaderArgs: %v", err)
+	}
+	xc.timer_mode = C.libxl_timer_mode(x.TimerMode)
+	if err := x.NestedHvm.toC(&xc.nested_hvm); err != nil {
+		return fmt.Errorf("converting field NestedHvm: %v", err)
+	}
+	if err := x.Apic.toC(&xc.apic); err != nil {
+		return fmt.Errorf("converting field Apic: %v", err)
+	}
+	if err := x.DmRestrict.toC(&xc.dm_restrict); err != nil {
+		return fmt.Errorf("converting field DmRestrict: %v", err)
+	}
+	xc.tee = C.libxl_tee_type(x.Tee)
+	xc._type = C.libxl_domain_type(x.Type)
+	switch x.Type {
+	case DomainTypeHvm:
+		tmp, ok := x.TypeUnion.(DomainBuildInfoTypeUnionHvm)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var hvm C.libxl_domain_build_info_type_union_hvm
+		if tmp.Firmware != "" {
+			hvm.firmware = C.CString(tmp.Firmware)
+		}
+		hvm.bios = C.libxl_bios_type(tmp.Bios)
+		if err := tmp.Pae.toC(&hvm.pae); err != nil {
+			return fmt.Errorf("converting field Pae: %v", err)
+		}
+		if err := tmp.Apic.toC(&hvm.apic); err != nil {
+			return fmt.Errorf("converting field Apic: %v", err)
+		}
+		if err := tmp.Acpi.toC(&hvm.acpi); err != nil {
+			return fmt.Errorf("converting field Acpi: %v", err)
+		}
+		if err := tmp.AcpiS3.toC(&hvm.acpi_s3); err != nil {
+			return fmt.Errorf("converting field AcpiS3: %v", err)
+		}
+		if err := tmp.AcpiS4.toC(&hvm.acpi_s4); err != nil {
+			return fmt.Errorf("converting field AcpiS4: %v", err)
+		}
+		if err := tmp.AcpiLaptopSlate.toC(&hvm.acpi_laptop_slate); err != nil {
+			return fmt.Errorf("converting field AcpiLaptopSlate: %v", err)
+		}
+		if err := tmp.Nx.toC(&hvm.nx); err != nil {
+			return fmt.Errorf("converting field Nx: %v", err)
+		}
+		if err := tmp.Viridian.toC(&hvm.viridian); err != nil {
+			return fmt.Errorf("converting field Viridian: %v", err)
+		}
+		if err := tmp.ViridianEnable.toC(&hvm.viridian_enable); err != nil {
+			return fmt.Errorf("converting field ViridianEnable: %v", err)
+		}
+		if err := tmp.ViridianDisable.toC(&hvm.viridian_disable); err != nil {
+			return fmt.Errorf("converting field ViridianDisable: %v", err)
+		}
+		if tmp.Timeoffset != "" {
+			hvm.timeoffset = C.CString(tmp.Timeoffset)
+		}
+		if err := tmp.Hpet.toC(&hvm.hpet); err != nil {
+			return fmt.Errorf("converting field Hpet: %v", err)
+		}
+		if err := tmp.VptAlign.toC(&hvm.vpt_align); err != nil {
+			return fmt.Errorf("converting field VptAlign: %v", err)
+		}
+		hvm.mmio_hole_memkb = C.uint64_t(tmp.MmioHoleMemkb)
+		hvm.timer_mode = C.libxl_timer_mode(tmp.TimerMode)
+		if err := tmp.NestedHvm.toC(&hvm.nested_hvm); err != nil {
+			return fmt.Errorf("converting field NestedHvm: %v", err)
+		}
+		if err := tmp.Altp2M.toC(&hvm.altp2m); err != nil {
+			return fmt.Errorf("converting field Altp2M: %v", err)
+		}
+		if tmp.SystemFirmware != "" {
+			hvm.system_firmware = C.CString(tmp.SystemFirmware)
+		}
+		if tmp.SmbiosFirmware != "" {
+			hvm.smbios_firmware = C.CString(tmp.SmbiosFirmware)
+		}
+		if tmp.AcpiFirmware != "" {
+			hvm.acpi_firmware = C.CString(tmp.AcpiFirmware)
+		}
+		hvm.hdtype = C.libxl_hdtype(tmp.Hdtype)
+		if err := tmp.Nographic.toC(&hvm.nographic); err != nil {
+			return fmt.Errorf("converting field Nographic: %v", err)
+		}
+		if err := tmp.Vga.toC(&hvm.vga); err != nil {
+			return fmt.Errorf("converting field Vga: %v", err)
+		}
+		if err := tmp.Vnc.toC(&hvm.vnc); err != nil {
+			return fmt.Errorf("converting field Vnc: %v", err)
+		}
+		if tmp.Keymap != "" {
+			hvm.keymap = C.CString(tmp.Keymap)
+		}
+		if err := tmp.Sdl.toC(&hvm.sdl); err != nil {
+			return fmt.Errorf("converting field Sdl: %v", err)
+		}
+		if err := tmp.Spice.toC(&hvm.spice); err != nil {
+			return fmt.Errorf("converting field Spice: %v", err)
+		}
+		if err := tmp.GfxPassthru.toC(&hvm.gfx_passthru); err != nil {
+			return fmt.Errorf("converting field GfxPassthru: %v", err)
+		}
+		hvm.gfx_passthru_kind = C.libxl_gfx_passthru_kind(tmp.GfxPassthruKind)
+		if tmp.Serial != "" {
+			hvm.serial = C.CString(tmp.Serial)
+		}
+		if tmp.Boot != "" {
+			hvm.boot = C.CString(tmp.Boot)
+		}
+		if err := tmp.Usb.toC(&hvm.usb); err != nil {
+			return fmt.Errorf("converting field Usb: %v", err)
+		}
+		hvm.usbversion = C.int(tmp.Usbversion)
+		if tmp.Usbdevice != "" {
+			hvm.usbdevice = C.CString(tmp.Usbdevice)
+		}
+		if err := tmp.VkbDevice.toC(&hvm.vkb_device); err != nil {
+			return fmt.Errorf("converting field VkbDevice: %v", err)
+		}
+		if tmp.Soundhw != "" {
+			hvm.soundhw = C.CString(tmp.Soundhw)
+		}
+		if err := tmp.XenPlatformPci.toC(&hvm.xen_platform_pci); err != nil {
+			return fmt.Errorf("converting field XenPlatformPci: %v", err)
+		}
+		if err := tmp.UsbdeviceList.toC(&hvm.usbdevice_list); err != nil {
+			return fmt.Errorf("converting field UsbdeviceList: %v", err)
+		}
+		hvm.vendor_device = C.libxl_vendor_device(tmp.VendorDevice)
+		if err := tmp.MsVmGenid.toC(&hvm.ms_vm_genid); err != nil {
+			return fmt.Errorf("converting field MsVmGenid: %v", err)
+		}
+		if err := tmp.SerialList.toC(&hvm.serial_list); err != nil {
+			return fmt.Errorf("converting field SerialList: %v", err)
+		}
+		if err := tmp.Rdm.toC(&hvm.rdm); err != nil {
+			return fmt.Errorf("converting field Rdm: %v", err)
+		}
+		hvm.rdm_mem_boundary_memkb = C.uint64_t(tmp.RdmMemBoundaryMemkb)
+		hvm.mca_caps = C.uint64_t(tmp.McaCaps)
+		hvmBytes := C.GoBytes(unsafe.Pointer(&hvm), C.sizeof_libxl_domain_build_info_type_union_hvm)
+		copy(xc.u[:], hvmBytes)
+	case DomainTypePv:
+		tmp, ok := x.TypeUnion.(DomainBuildInfoTypeUnionPv)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var pv C.libxl_domain_build_info_type_union_pv
+		if tmp.Kernel != "" {
+			pv.kernel = C.CString(tmp.Kernel)
+		}
+		pv.slack_memkb = C.uint64_t(tmp.SlackMemkb)
+		if tmp.Bootloader != "" {
+			pv.bootloader = C.CString(tmp.Bootloader)
+		}
+		if err := tmp.BootloaderArgs.toC(&pv.bootloader_args); err != nil {
+			return fmt.Errorf("converting field BootloaderArgs: %v", err)
+		}
+		if tmp.Cmdline != "" {
+			pv.cmdline = C.CString(tmp.Cmdline)
+		}
+		if tmp.Ramdisk != "" {
+			pv.ramdisk = C.CString(tmp.Ramdisk)
+		}
+		if tmp.Features != "" {
+			pv.features = C.CString(tmp.Features)
+		}
+		if err := tmp.E820Host.toC(&pv.e820_host); err != nil {
+			return fmt.Errorf("converting field E820Host: %v", err)
+		}
+		pvBytes := C.GoBytes(unsafe.Pointer(&pv), C.sizeof_libxl_domain_build_info_type_union_pv)
+		copy(xc.u[:], pvBytes)
+	case DomainTypePvh:
+		tmp, ok := x.TypeUnion.(DomainBuildInfoTypeUnionPvh)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var pvh C.libxl_domain_build_info_type_union_pvh
+		if err := tmp.Pvshim.toC(&pvh.pvshim); err != nil {
+			return fmt.Errorf("converting field Pvshim: %v", err)
+		}
+		if tmp.PvshimPath != "" {
+			pvh.pvshim_path = C.CString(tmp.PvshimPath)
+		}
+		if tmp.PvshimCmdline != "" {
+			pvh.pvshim_cmdline = C.CString(tmp.PvshimCmdline)
+		}
+		if tmp.PvshimExtra != "" {
+			pvh.pvshim_extra = C.CString(tmp.PvshimExtra)
+		}
+		pvhBytes := C.GoBytes(unsafe.Pointer(&pvh), C.sizeof_libxl_domain_build_info_type_union_pvh)
+		copy(xc.u[:], pvhBytes)
+	case DomainTypeInvalid:
+		break
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+	xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
+	xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
+	xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
+
+	return nil
+}
+
+// NewDeviceVfb returns an instance of DeviceVfb initialized with defaults.
+func NewDeviceVfb() (*DeviceVfb, error) {
+	var (
+		x  DeviceVfb
+		xc C.libxl_device_vfb
+	)
+
+	C.libxl_device_vfb_init(&xc)
+	defer C.libxl_device_vfb_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceVfb) fromC(xc *C.libxl_device_vfb) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+	if err := x.Vnc.fromC(&xc.vnc); err != nil {
+		return fmt.Errorf("converting field Vnc: %v", err)
+	}
+	if err := x.Sdl.fromC(&xc.sdl); err != nil {
+		return fmt.Errorf("converting field Sdl: %v", err)
+	}
+	x.Keymap = C.GoString(xc.keymap)
+
+	return nil
+}
+
+func (x *DeviceVfb) toC(xc *C.libxl_device_vfb) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_vfb_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+	if err := x.Vnc.toC(&xc.vnc); err != nil {
+		return fmt.Errorf("converting field Vnc: %v", err)
+	}
+	if err := x.Sdl.toC(&xc.sdl); err != nil {
+		return fmt.Errorf("converting field Sdl: %v", err)
+	}
+	if x.Keymap != "" {
+		xc.keymap = C.CString(x.Keymap)
+	}
+
+	return nil
+}
+
+// NewDeviceVkb returns an instance of DeviceVkb initialized with defaults.
+func NewDeviceVkb() (*DeviceVkb, error) {
+	var (
+		x  DeviceVkb
+		xc C.libxl_device_vkb
+	)
+
+	C.libxl_device_vkb_init(&xc)
+	defer C.libxl_device_vkb_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceVkb) fromC(xc *C.libxl_device_vkb) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+	x.BackendType = VkbBackend(xc.backend_type)
+	x.UniqueId = C.GoString(xc.unique_id)
+	x.FeatureDisableKeyboard = bool(xc.feature_disable_keyboard)
+	x.FeatureDisablePointer = bool(xc.feature_disable_pointer)
+	x.FeatureAbsPointer = bool(xc.feature_abs_pointer)
+	x.FeatureRawPointer = bool(xc.feature_raw_pointer)
+	x.FeatureMultiTouch = bool(xc.feature_multi_touch)
+	x.Width = uint32(xc.width)
+	x.Height = uint32(xc.height)
+	x.MultiTouchWidth = uint32(xc.multi_touch_width)
+	x.MultiTouchHeight = uint32(xc.multi_touch_height)
+	x.MultiTouchNumContacts = uint32(xc.multi_touch_num_contacts)
+
+	return nil
+}
+
+func (x *DeviceVkb) toC(xc *C.libxl_device_vkb) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_vkb_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.backend_type = C.libxl_vkb_backend(x.BackendType)
+	if x.UniqueId != "" {
+		xc.unique_id = C.CString(x.UniqueId)
+	}
+	xc.feature_disable_keyboard = C.bool(x.FeatureDisableKeyboard)
+	xc.feature_disable_pointer = C.bool(x.FeatureDisablePointer)
+	xc.feature_abs_pointer = C.bool(x.FeatureAbsPointer)
+	xc.feature_raw_pointer = C.bool(x.FeatureRawPointer)
+	xc.feature_multi_touch = C.bool(x.FeatureMultiTouch)
+	xc.width = C.uint32_t(x.Width)
+	xc.height = C.uint32_t(x.Height)
+	xc.multi_touch_width = C.uint32_t(x.MultiTouchWidth)
+	xc.multi_touch_height = C.uint32_t(x.MultiTouchHeight)
+	xc.multi_touch_num_contacts = C.uint32_t(x.MultiTouchNumContacts)
+
+	return nil
+}
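Every `toC` above installs a deferred cleanup that runs the matching `libxl_*_dispose` only when the conversion fails, so a caller never receives a half-populated C struct it must free itself. The control flow can be sketched without cgo (the `convert` function and its log are hypothetical stand-ins for a `toC` and its dispose call):

```go
package main

import (
	"errors"
	"fmt"
)

// convert mimics the generated toC pattern: a named error return lets the
// deferred function observe failure and release partially built state.
func convert(fail bool, log *[]string) (err error) {
	defer func() {
		if err != nil {
			*log = append(*log, "dispose") // stands in for C.libxl_..._dispose(xc)
		}
	}()

	*log = append(*log, "alloc") // stands in for C.CString / C.malloc calls
	if fail {
		return errors.New("conversion failed")
	}
	return nil
}

func main() {
	var log []string
	err := convert(true, &log)
	fmt.Println(log, err)
}
```

The named return value `err` is essential: with an unnamed return, the deferred closure could not see whether the function is failing, and cleanup would either always run or never run.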
+
+// NewDeviceDisk returns an instance of DeviceDisk initialized with defaults.
+func NewDeviceDisk() (*DeviceDisk, error) {
+	var (
+		x  DeviceDisk
+		xc C.libxl_device_disk
+	)
+
+	C.libxl_device_disk_init(&xc)
+	defer C.libxl_device_disk_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceDisk) fromC(xc *C.libxl_device_disk) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.PdevPath = C.GoString(xc.pdev_path)
+	x.Vdev = C.GoString(xc.vdev)
+	x.Backend = DiskBackend(xc.backend)
+	x.Format = DiskFormat(xc.format)
+	x.Script = C.GoString(xc.script)
+	x.Removable = int(xc.removable)
+	x.Readwrite = int(xc.readwrite)
+	x.IsCdrom = int(xc.is_cdrom)
+	x.DirectIoSafe = bool(xc.direct_io_safe)
+	if err := x.DiscardEnable.fromC(&xc.discard_enable); err != nil {
+		return fmt.Errorf("converting field DiscardEnable: %v", err)
+	}
+	if err := x.ColoEnable.fromC(&xc.colo_enable); err != nil {
+		return fmt.Errorf("converting field ColoEnable: %v", err)
+	}
+	if err := x.ColoRestoreEnable.fromC(&xc.colo_restore_enable); err != nil {
+		return fmt.Errorf("converting field ColoRestoreEnable: %v", err)
+	}
+	x.ColoHost = C.GoString(xc.colo_host)
+	x.ColoPort = int(xc.colo_port)
+	x.ColoExport = C.GoString(xc.colo_export)
+	x.ActiveDisk = C.GoString(xc.active_disk)
+	x.HiddenDisk = C.GoString(xc.hidden_disk)
+
+	return nil
+}
+
+func (x *DeviceDisk) toC(xc *C.libxl_device_disk) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_disk_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	if x.PdevPath != "" {
+		xc.pdev_path = C.CString(x.PdevPath)
+	}
+	if x.Vdev != "" {
+		xc.vdev = C.CString(x.Vdev)
+	}
+	xc.backend = C.libxl_disk_backend(x.Backend)
+	xc.format = C.libxl_disk_format(x.Format)
+	if x.Script != "" {
+		xc.script = C.CString(x.Script)
+	}
+	xc.removable = C.int(x.Removable)
+	xc.readwrite = C.int(x.Readwrite)
+	xc.is_cdrom = C.int(x.IsCdrom)
+	xc.direct_io_safe = C.bool(x.DirectIoSafe)
+	if err := x.DiscardEnable.toC(&xc.discard_enable); err != nil {
+		return fmt.Errorf("converting field DiscardEnable: %v", err)
+	}
+	if err := x.ColoEnable.toC(&xc.colo_enable); err != nil {
+		return fmt.Errorf("converting field ColoEnable: %v", err)
+	}
+	if err := x.ColoRestoreEnable.toC(&xc.colo_restore_enable); err != nil {
+		return fmt.Errorf("converting field ColoRestoreEnable: %v", err)
+	}
+	if x.ColoHost != "" {
+		xc.colo_host = C.CString(x.ColoHost)
+	}
+	xc.colo_port = C.int(x.ColoPort)
+	if x.ColoExport != "" {
+		xc.colo_export = C.CString(x.ColoExport)
+	}
+	if x.ActiveDisk != "" {
+		xc.active_disk = C.CString(x.ActiveDisk)
+	}
+	if x.HiddenDisk != "" {
+		xc.hidden_disk = C.CString(x.HiddenDisk)
+	}
+
+	return nil
+}
+
+// NewDeviceNic returns an instance of DeviceNic initialized with defaults.
+func NewDeviceNic() (*DeviceNic, error) {
+	var (
+		x  DeviceNic
+		xc C.libxl_device_nic
+	)
+
+	C.libxl_device_nic_init(&xc)
+	defer C.libxl_device_nic_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceNic) fromC(xc *C.libxl_device_nic) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+	x.Mtu = int(xc.mtu)
+	x.Model = C.GoString(xc.model)
+	if err := x.Mac.fromC(&xc.mac); err != nil {
+		return fmt.Errorf("converting field Mac: %v", err)
+	}
+	x.Ip = C.GoString(xc.ip)
+	x.Bridge = C.GoString(xc.bridge)
+	x.Ifname = C.GoString(xc.ifname)
+	x.Script = C.GoString(xc.script)
+	x.Nictype = NicType(xc.nictype)
+	x.RateBytesPerInterval = uint64(xc.rate_bytes_per_interval)
+	x.RateIntervalUsecs = uint32(xc.rate_interval_usecs)
+	x.Gatewaydev = C.GoString(xc.gatewaydev)
+	x.ColoftForwarddev = C.GoString(xc.coloft_forwarddev)
+	x.ColoSockMirrorId = C.GoString(xc.colo_sock_mirror_id)
+	x.ColoSockMirrorIp = C.GoString(xc.colo_sock_mirror_ip)
+	x.ColoSockMirrorPort = C.GoString(xc.colo_sock_mirror_port)
+	x.ColoSockComparePriInId = C.GoString(xc.colo_sock_compare_pri_in_id)
+	x.ColoSockComparePriInIp = C.GoString(xc.colo_sock_compare_pri_in_ip)
+	x.ColoSockComparePriInPort = C.GoString(xc.colo_sock_compare_pri_in_port)
+	x.ColoSockCompareSecInId = C.GoString(xc.colo_sock_compare_sec_in_id)
+	x.ColoSockCompareSecInIp = C.GoString(xc.colo_sock_compare_sec_in_ip)
+	x.ColoSockCompareSecInPort = C.GoString(xc.colo_sock_compare_sec_in_port)
+	x.ColoSockCompareNotifyId = C.GoString(xc.colo_sock_compare_notify_id)
+	x.ColoSockCompareNotifyIp = C.GoString(xc.colo_sock_compare_notify_ip)
+	x.ColoSockCompareNotifyPort = C.GoString(xc.colo_sock_compare_notify_port)
+	x.ColoSockRedirector0Id = C.GoString(xc.colo_sock_redirector0_id)
+	x.ColoSockRedirector0Ip = C.GoString(xc.colo_sock_redirector0_ip)
+	x.ColoSockRedirector0Port = C.GoString(xc.colo_sock_redirector0_port)
+	x.ColoSockRedirector1Id = C.GoString(xc.colo_sock_redirector1_id)
+	x.ColoSockRedirector1Ip = C.GoString(xc.colo_sock_redirector1_ip)
+	x.ColoSockRedirector1Port = C.GoString(xc.colo_sock_redirector1_port)
+	x.ColoSockRedirector2Id = C.GoString(xc.colo_sock_redirector2_id)
+	x.ColoSockRedirector2Ip = C.GoString(xc.colo_sock_redirector2_ip)
+	x.ColoSockRedirector2Port = C.GoString(xc.colo_sock_redirector2_port)
+	x.ColoFilterMirrorQueue = C.GoString(xc.colo_filter_mirror_queue)
+	x.ColoFilterMirrorOutdev = C.GoString(xc.colo_filter_mirror_outdev)
+	x.ColoFilterRedirector0Queue = C.GoString(xc.colo_filter_redirector0_queue)
+	x.ColoFilterRedirector0Indev = C.GoString(xc.colo_filter_redirector0_indev)
+	x.ColoFilterRedirector0Outdev = C.GoString(xc.colo_filter_redirector0_outdev)
+	x.ColoFilterRedirector1Queue = C.GoString(xc.colo_filter_redirector1_queue)
+	x.ColoFilterRedirector1Indev = C.GoString(xc.colo_filter_redirector1_indev)
+	x.ColoFilterRedirector1Outdev = C.GoString(xc.colo_filter_redirector1_outdev)
+	x.ColoComparePriIn = C.GoString(xc.colo_compare_pri_in)
+	x.ColoCompareSecIn = C.GoString(xc.colo_compare_sec_in)
+	x.ColoCompareOut = C.GoString(xc.colo_compare_out)
+	x.ColoCompareNotifyDev = C.GoString(xc.colo_compare_notify_dev)
+	x.ColoSockSecRedirector0Id = C.GoString(xc.colo_sock_sec_redirector0_id)
+	x.ColoSockSecRedirector0Ip = C.GoString(xc.colo_sock_sec_redirector0_ip)
+	x.ColoSockSecRedirector0Port = C.GoString(xc.colo_sock_sec_redirector0_port)
+	x.ColoSockSecRedirector1Id = C.GoString(xc.colo_sock_sec_redirector1_id)
+	x.ColoSockSecRedirector1Ip = C.GoString(xc.colo_sock_sec_redirector1_ip)
+	x.ColoSockSecRedirector1Port = C.GoString(xc.colo_sock_sec_redirector1_port)
+	x.ColoFilterSecRedirector0Queue = C.GoString(xc.colo_filter_sec_redirector0_queue)
+	x.ColoFilterSecRedirector0Indev = C.GoString(xc.colo_filter_sec_redirector0_indev)
+	x.ColoFilterSecRedirector0Outdev = C.GoString(xc.colo_filter_sec_redirector0_outdev)
+	x.ColoFilterSecRedirector1Queue = C.GoString(xc.colo_filter_sec_redirector1_queue)
+	x.ColoFilterSecRedirector1Indev = C.GoString(xc.colo_filter_sec_redirector1_indev)
+	x.ColoFilterSecRedirector1Outdev = C.GoString(xc.colo_filter_sec_redirector1_outdev)
+	x.ColoFilterSecRewriter0Queue = C.GoString(xc.colo_filter_sec_rewriter0_queue)
+	x.ColoCheckpointHost = C.GoString(xc.colo_checkpoint_host)
+	x.ColoCheckpointPort = C.GoString(xc.colo_checkpoint_port)
+
+	return nil
+}
+
+func (x *DeviceNic) toC(xc *C.libxl_device_nic) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_nic_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.mtu = C.int(x.Mtu)
+	if x.Model != "" {
+		xc.model = C.CString(x.Model)
+	}
+	if err := x.Mac.toC(&xc.mac); err != nil {
+		return fmt.Errorf("converting field Mac: %v", err)
+	}
+	if x.Ip != "" {
+		xc.ip = C.CString(x.Ip)
+	}
+	if x.Bridge != "" {
+		xc.bridge = C.CString(x.Bridge)
+	}
+	if x.Ifname != "" {
+		xc.ifname = C.CString(x.Ifname)
+	}
+	if x.Script != "" {
+		xc.script = C.CString(x.Script)
+	}
+	xc.nictype = C.libxl_nic_type(x.Nictype)
+	xc.rate_bytes_per_interval = C.uint64_t(x.RateBytesPerInterval)
+	xc.rate_interval_usecs = C.uint32_t(x.RateIntervalUsecs)
+	if x.Gatewaydev != "" {
+		xc.gatewaydev = C.CString(x.Gatewaydev)
+	}
+	if x.ColoftForwarddev != "" {
+		xc.coloft_forwarddev = C.CString(x.ColoftForwarddev)
+	}
+	if x.ColoSockMirrorId != "" {
+		xc.colo_sock_mirror_id = C.CString(x.ColoSockMirrorId)
+	}
+	if x.ColoSockMirrorIp != "" {
+		xc.colo_sock_mirror_ip = C.CString(x.ColoSockMirrorIp)
+	}
+	if x.ColoSockMirrorPort != "" {
+		xc.colo_sock_mirror_port = C.CString(x.ColoSockMirrorPort)
+	}
+	if x.ColoSockComparePriInId != "" {
+		xc.colo_sock_compare_pri_in_id = C.CString(x.ColoSockComparePriInId)
+	}
+	if x.ColoSockComparePriInIp != "" {
+		xc.colo_sock_compare_pri_in_ip = C.CString(x.ColoSockComparePriInIp)
+	}
+	if x.ColoSockComparePriInPort != "" {
+		xc.colo_sock_compare_pri_in_port = C.CString(x.ColoSockComparePriInPort)
+	}
+	if x.ColoSockCompareSecInId != "" {
+		xc.colo_sock_compare_sec_in_id = C.CString(x.ColoSockCompareSecInId)
+	}
+	if x.ColoSockCompareSecInIp != "" {
+		xc.colo_sock_compare_sec_in_ip = C.CString(x.ColoSockCompareSecInIp)
+	}
+	if x.ColoSockCompareSecInPort != "" {
+		xc.colo_sock_compare_sec_in_port = C.CString(x.ColoSockCompareSecInPort)
+	}
+	if x.ColoSockCompareNotifyId != "" {
+		xc.colo_sock_compare_notify_id = C.CString(x.ColoSockCompareNotifyId)
+	}
+	if x.ColoSockCompareNotifyIp != "" {
+		xc.colo_sock_compare_notify_ip = C.CString(x.ColoSockCompareNotifyIp)
+	}
+	if x.ColoSockCompareNotifyPort != "" {
+		xc.colo_sock_compare_notify_port = C.CString(x.ColoSockCompareNotifyPort)
+	}
+	if x.ColoSockRedirector0Id != "" {
+		xc.colo_sock_redirector0_id = C.CString(x.ColoSockRedirector0Id)
+	}
+	if x.ColoSockRedirector0Ip != "" {
+		xc.colo_sock_redirector0_ip = C.CString(x.ColoSockRedirector0Ip)
+	}
+	if x.ColoSockRedirector0Port != "" {
+		xc.colo_sock_redirector0_port = C.CString(x.ColoSockRedirector0Port)
+	}
+	if x.ColoSockRedirector1Id != "" {
+		xc.colo_sock_redirector1_id = C.CString(x.ColoSockRedirector1Id)
+	}
+	if x.ColoSockRedirector1Ip != "" {
+		xc.colo_sock_redirector1_ip = C.CString(x.ColoSockRedirector1Ip)
+	}
+	if x.ColoSockRedirector1Port != "" {
+		xc.colo_sock_redirector1_port = C.CString(x.ColoSockRedirector1Port)
+	}
+	if x.ColoSockRedirector2Id != "" {
+		xc.colo_sock_redirector2_id = C.CString(x.ColoSockRedirector2Id)
+	}
+	if x.ColoSockRedirector2Ip != "" {
+		xc.colo_sock_redirector2_ip = C.CString(x.ColoSockRedirector2Ip)
+	}
+	if x.ColoSockRedirector2Port != "" {
+		xc.colo_sock_redirector2_port = C.CString(x.ColoSockRedirector2Port)
+	}
+	if x.ColoFilterMirrorQueue != "" {
+		xc.colo_filter_mirror_queue = C.CString(x.ColoFilterMirrorQueue)
+	}
+	if x.ColoFilterMirrorOutdev != "" {
+		xc.colo_filter_mirror_outdev = C.CString(x.ColoFilterMirrorOutdev)
+	}
+	if x.ColoFilterRedirector0Queue != "" {
+		xc.colo_filter_redirector0_queue = C.CString(x.ColoFilterRedirector0Queue)
+	}
+	if x.ColoFilterRedirector0Indev != "" {
+		xc.colo_filter_redirector0_indev = C.CString(x.ColoFilterRedirector0Indev)
+	}
+	if x.ColoFilterRedirector0Outdev != "" {
+		xc.colo_filter_redirector0_outdev = C.CString(x.ColoFilterRedirector0Outdev)
+	}
+	if x.ColoFilterRedirector1Queue != "" {
+		xc.colo_filter_redirector1_queue = C.CString(x.ColoFilterRedirector1Queue)
+	}
+	if x.ColoFilterRedirector1Indev != "" {
+		xc.colo_filter_redirector1_indev = C.CString(x.ColoFilterRedirector1Indev)
+	}
+	if x.ColoFilterRedirector1Outdev != "" {
+		xc.colo_filter_redirector1_outdev = C.CString(x.ColoFilterRedirector1Outdev)
+	}
+	if x.ColoComparePriIn != "" {
+		xc.colo_compare_pri_in = C.CString(x.ColoComparePriIn)
+	}
+	if x.ColoCompareSecIn != "" {
+		xc.colo_compare_sec_in = C.CString(x.ColoCompareSecIn)
+	}
+	if x.ColoCompareOut != "" {
+		xc.colo_compare_out = C.CString(x.ColoCompareOut)
+	}
+	if x.ColoCompareNotifyDev != "" {
+		xc.colo_compare_notify_dev = C.CString(x.ColoCompareNotifyDev)
+	}
+	if x.ColoSockSecRedirector0Id != "" {
+		xc.colo_sock_sec_redirector0_id = C.CString(x.ColoSockSecRedirector0Id)
+	}
+	if x.ColoSockSecRedirector0Ip != "" {
+		xc.colo_sock_sec_redirector0_ip = C.CString(x.ColoSockSecRedirector0Ip)
+	}
+	if x.ColoSockSecRedirector0Port != "" {
+		xc.colo_sock_sec_redirector0_port = C.CString(x.ColoSockSecRedirector0Port)
+	}
+	if x.ColoSockSecRedirector1Id != "" {
+		xc.colo_sock_sec_redirector1_id = C.CString(x.ColoSockSecRedirector1Id)
+	}
+	if x.ColoSockSecRedirector1Ip != "" {
+		xc.colo_sock_sec_redirector1_ip = C.CString(x.ColoSockSecRedirector1Ip)
+	}
+	if x.ColoSockSecRedirector1Port != "" {
+		xc.colo_sock_sec_redirector1_port = C.CString(x.ColoSockSecRedirector1Port)
+	}
+	if x.ColoFilterSecRedirector0Queue != "" {
+		xc.colo_filter_sec_redirector0_queue = C.CString(x.ColoFilterSecRedirector0Queue)
+	}
+	if x.ColoFilterSecRedirector0Indev != "" {
+		xc.colo_filter_sec_redirector0_indev = C.CString(x.ColoFilterSecRedirector0Indev)
+	}
+	if x.ColoFilterSecRedirector0Outdev != "" {
+		xc.colo_filter_sec_redirector0_outdev = C.CString(x.ColoFilterSecRedirector0Outdev)
+	}
+	if x.ColoFilterSecRedirector1Queue != "" {
+		xc.colo_filter_sec_redirector1_queue = C.CString(x.ColoFilterSecRedirector1Queue)
+	}
+	if x.ColoFilterSecRedirector1Indev != "" {
+		xc.colo_filter_sec_redirector1_indev = C.CString(x.ColoFilterSecRedirector1Indev)
+	}
+	if x.ColoFilterSecRedirector1Outdev != "" {
+		xc.colo_filter_sec_redirector1_outdev = C.CString(x.ColoFilterSecRedirector1Outdev)
+	}
+	if x.ColoFilterSecRewriter0Queue != "" {
+		xc.colo_filter_sec_rewriter0_queue = C.CString(x.ColoFilterSecRewriter0Queue)
+	}
+	if x.ColoCheckpointHost != "" {
+		xc.colo_checkpoint_host = C.CString(x.ColoCheckpointHost)
+	}
+	if x.ColoCheckpointPort != "" {
+		xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)
+	}
+
+	return nil
+}
+
+// NewDevicePci returns an instance of DevicePci initialized with defaults.
+func NewDevicePci() (*DevicePci, error) {
+	var (
+		x  DevicePci
+		xc C.libxl_device_pci
+	)
+
+	C.libxl_device_pci_init(&xc)
+	defer C.libxl_device_pci_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DevicePci) fromC(xc *C.libxl_device_pci) error {
+	x.Func = byte(xc._func)
+	x.Dev = byte(xc.dev)
+	x.Bus = byte(xc.bus)
+	x.Domain = int(xc.domain)
+	x.Vdevfn = uint32(xc.vdevfn)
+	x.VfuncMask = uint32(xc.vfunc_mask)
+	x.Msitranslate = bool(xc.msitranslate)
+	x.PowerMgmt = bool(xc.power_mgmt)
+	x.Permissive = bool(xc.permissive)
+	x.Seize = bool(xc.seize)
+	x.RdmPolicy = RdmReservePolicy(xc.rdm_policy)
+
+	return nil
+}
+
+func (x *DevicePci) toC(xc *C.libxl_device_pci) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_pci_dispose(xc)
+		}
+	}()
+
+	xc._func = C.uint8_t(x.Func)
+	xc.dev = C.uint8_t(x.Dev)
+	xc.bus = C.uint8_t(x.Bus)
+	xc.domain = C.int(x.Domain)
+	xc.vdevfn = C.uint32_t(x.Vdevfn)
+	xc.vfunc_mask = C.uint32_t(x.VfuncMask)
+	xc.msitranslate = C.bool(x.Msitranslate)
+	xc.power_mgmt = C.bool(x.PowerMgmt)
+	xc.permissive = C.bool(x.Permissive)
+	xc.seize = C.bool(x.Seize)
+	xc.rdm_policy = C.libxl_rdm_reserve_policy(x.RdmPolicy)
+
+	return nil
+}
+
+// NewDeviceRdm returns an instance of DeviceRdm initialized with defaults.
+func NewDeviceRdm() (*DeviceRdm, error) {
+	var (
+		x  DeviceRdm
+		xc C.libxl_device_rdm
+	)
+
+	C.libxl_device_rdm_init(&xc)
+	defer C.libxl_device_rdm_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceRdm) fromC(xc *C.libxl_device_rdm) error {
+	x.Start = uint64(xc.start)
+	x.Size = uint64(xc.size)
+	x.Policy = RdmReservePolicy(xc.policy)
+
+	return nil
+}
+
+func (x *DeviceRdm) toC(xc *C.libxl_device_rdm) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_rdm_dispose(xc)
+		}
+	}()
+
+	xc.start = C.uint64_t(x.Start)
+	xc.size = C.uint64_t(x.Size)
+	xc.policy = C.libxl_rdm_reserve_policy(x.Policy)
+
+	return nil
+}
+
+// NewDeviceUsbctrl returns an instance of DeviceUsbctrl initialized with defaults.
+func NewDeviceUsbctrl() (*DeviceUsbctrl, error) {
+	var (
+		x  DeviceUsbctrl
+		xc C.libxl_device_usbctrl
+	)
+
+	C.libxl_device_usbctrl_init(&xc)
+	defer C.libxl_device_usbctrl_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceUsbctrl) fromC(xc *C.libxl_device_usbctrl) error {
+	x.Type = UsbctrlType(xc._type)
+	x.Devid = Devid(xc.devid)
+	x.Version = int(xc.version)
+	x.Ports = int(xc.ports)
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+
+	return nil
+}
+
+func (x *DeviceUsbctrl) toC(xc *C.libxl_device_usbctrl) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_usbctrl_dispose(xc)
+		}
+	}()
+
+	xc._type = C.libxl_usbctrl_type(x.Type)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.version = C.int(x.Version)
+	xc.ports = C.int(x.Ports)
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+
+	return nil
+}
+
+// NewDeviceUsbdev returns an instance of DeviceUsbdev initialized with defaults.
+func NewDeviceUsbdev(utype UsbdevType) (*DeviceUsbdev, error) {
+	var (
+		x  DeviceUsbdev
+		xc C.libxl_device_usbdev
+	)
+
+	C.libxl_device_usbdev_init(&xc)
+	C.libxl_device_usbdev_init_type(&xc, C.libxl_usbdev_type(utype))
+	defer C.libxl_device_usbdev_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceUsbdev) fromC(xc *C.libxl_device_usbdev) error {
+	x.Ctrl = Devid(xc.ctrl)
+	x.Port = int(xc.port)
+	x.Type = UsbdevType(xc._type)
+	switch x.Type {
+	case UsbdevTypeHostdev:
+		var typeHostdev DeviceUsbdevTypeUnionHostdev
+		if err := typeHostdev.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typeHostdev: %v", err)
+		}
+		x.TypeUnion = typeHostdev
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+
+	return nil
+}
+
+func (x *DeviceUsbdevTypeUnionHostdev) fromC(xc *C.libxl_device_usbdev) error {
+	if UsbdevType(xc._type) != UsbdevTypeHostdev {
+		return errors.New("expected union key UsbdevTypeHostdev")
+	}
+
+	tmp := (*C.libxl_device_usbdev_type_union_hostdev)(unsafe.Pointer(&xc.u[0]))
+	x.Hostbus = byte(tmp.hostbus)
+	x.Hostaddr = byte(tmp.hostaddr)
+	return nil
+}
+
+func (x *DeviceUsbdev) toC(xc *C.libxl_device_usbdev) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_usbdev_dispose(xc)
+		}
+	}()
+
+	xc.ctrl = C.libxl_devid(x.Ctrl)
+	xc.port = C.int(x.Port)
+	xc._type = C.libxl_usbdev_type(x.Type)
+	switch x.Type {
+	case UsbdevTypeHostdev:
+		tmp, ok := x.TypeUnion.(DeviceUsbdevTypeUnionHostdev)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var hostdev C.libxl_device_usbdev_type_union_hostdev
+		hostdev.hostbus = C.uint8_t(tmp.Hostbus)
+		hostdev.hostaddr = C.uint8_t(tmp.Hostaddr)
+		hostdevBytes := C.GoBytes(unsafe.Pointer(&hostdev), C.sizeof_libxl_device_usbdev_type_union_hostdev)
+		copy(xc.u[:], hostdevBytes)
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+
+	return nil
+}
+
+// NewDeviceDtdev returns an instance of DeviceDtdev initialized with defaults.
+func NewDeviceDtdev() (*DeviceDtdev, error) {
+	var (
+		x  DeviceDtdev
+		xc C.libxl_device_dtdev
+	)
+
+	C.libxl_device_dtdev_init(&xc)
+	defer C.libxl_device_dtdev_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceDtdev) fromC(xc *C.libxl_device_dtdev) error {
+	x.Path = C.GoString(xc.path)
+
+	return nil
+}
+
+func (x *DeviceDtdev) toC(xc *C.libxl_device_dtdev) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_dtdev_dispose(xc)
+		}
+	}()
+
+	if x.Path != "" {
+		xc.path = C.CString(x.Path)
+	}
+
+	return nil
+}
+
+// NewDeviceVtpm returns an instance of DeviceVtpm initialized with defaults.
+func NewDeviceVtpm() (*DeviceVtpm, error) {
+	var (
+		x  DeviceVtpm
+		xc C.libxl_device_vtpm
+	)
+
+	C.libxl_device_vtpm_init(&xc)
+	defer C.libxl_device_vtpm_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceVtpm) fromC(xc *C.libxl_device_vtpm) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+	if err := x.Uuid.fromC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+
+	return nil
+}
+
+func (x *DeviceVtpm) toC(xc *C.libxl_device_vtpm) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_vtpm_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+	if err := x.Uuid.toC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+
+	return nil
+}
+
+// NewDeviceP9 returns an instance of DeviceP9 initialized with defaults.
+func NewDeviceP9() (*DeviceP9, error) {
+	var (
+		x  DeviceP9
+		xc C.libxl_device_p9
+	)
+
+	C.libxl_device_p9_init(&xc)
+	defer C.libxl_device_p9_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceP9) fromC(xc *C.libxl_device_p9) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Tag = C.GoString(xc.tag)
+	x.Path = C.GoString(xc.path)
+	x.SecurityModel = C.GoString(xc.security_model)
+	x.Devid = Devid(xc.devid)
+
+	return nil
+}
+
+func (x *DeviceP9) toC(xc *C.libxl_device_p9) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_p9_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	if x.Tag != "" {
+		xc.tag = C.CString(x.Tag)
+	}
+	if x.Path != "" {
+		xc.path = C.CString(x.Path)
+	}
+	if x.SecurityModel != "" {
+		xc.security_model = C.CString(x.SecurityModel)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+
+	return nil
+}
+
+// NewDevicePvcallsif returns an instance of DevicePvcallsif initialized with defaults.
+func NewDevicePvcallsif() (*DevicePvcallsif, error) {
+	var (
+		x  DevicePvcallsif
+		xc C.libxl_device_pvcallsif
+	)
+
+	C.libxl_device_pvcallsif_init(&xc)
+	defer C.libxl_device_pvcallsif_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DevicePvcallsif) fromC(xc *C.libxl_device_pvcallsif) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+
+	return nil
+}
+
+func (x *DevicePvcallsif) toC(xc *C.libxl_device_pvcallsif) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_pvcallsif_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+
+	return nil
+}
+
+// NewDeviceChannel returns an instance of DeviceChannel initialized with defaults.
+func NewDeviceChannel(connection ChannelConnection) (*DeviceChannel, error) {
+	var (
+		x  DeviceChannel
+		xc C.libxl_device_channel
+	)
+
+	C.libxl_device_channel_init(&xc)
+	C.libxl_device_channel_init_connection(&xc, C.libxl_channel_connection(connection))
+	defer C.libxl_device_channel_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceChannel) fromC(xc *C.libxl_device_channel) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+	x.Name = C.GoString(xc.name)
+	x.Connection = ChannelConnection(xc.connection)
+	switch x.Connection {
+	case ChannelConnectionUnknown:
+		x.ConnectionUnion = nil
+	case ChannelConnectionPty:
+		x.ConnectionUnion = nil
+	case ChannelConnectionSocket:
+		var connectionSocket DeviceChannelConnectionUnionSocket
+		if err := connectionSocket.fromC(xc); err != nil {
+			return fmt.Errorf("converting field connectionSocket: %v", err)
+		}
+		x.ConnectionUnion = connectionSocket
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Connection)
+	}
+
+	return nil
+}
+
+func (x *DeviceChannelConnectionUnionSocket) fromC(xc *C.libxl_device_channel) error {
+	if ChannelConnection(xc.connection) != ChannelConnectionSocket {
+		return errors.New("expected union key ChannelConnectionSocket")
+	}
+
+	tmp := (*C.libxl_device_channel_connection_union_socket)(unsafe.Pointer(&xc.u[0]))
+	x.Path = C.GoString(tmp.path)
+	return nil
+}
+
+func (x *DeviceChannel) toC(xc *C.libxl_device_channel) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_channel_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+	if x.Name != "" {
+		xc.name = C.CString(x.Name)
+	}
+	xc.connection = C.libxl_channel_connection(x.Connection)
+	switch x.Connection {
+	case ChannelConnectionUnknown:
+		break
+	case ChannelConnectionPty:
+		break
+	case ChannelConnectionSocket:
+		tmp, ok := x.ConnectionUnion.(DeviceChannelConnectionUnionSocket)
+		if !ok {
+			return errors.New("wrong type for union key connection")
+		}
+		var socket C.libxl_device_channel_connection_union_socket
+		if tmp.Path != "" {
+			socket.path = C.CString(tmp.Path)
+		}
+		socketBytes := C.GoBytes(unsafe.Pointer(&socket), C.sizeof_libxl_device_channel_connection_union_socket)
+		copy(xc.u[:], socketBytes)
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Connection)
+	}
+
+	return nil
+}
+
+// NewConnectorParam returns an instance of ConnectorParam initialized with defaults.
+func NewConnectorParam() (*ConnectorParam, error) {
+	var (
+		x  ConnectorParam
+		xc C.libxl_connector_param
+	)
+
+	C.libxl_connector_param_init(&xc)
+	defer C.libxl_connector_param_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *ConnectorParam) fromC(xc *C.libxl_connector_param) error {
+	x.UniqueId = C.GoString(xc.unique_id)
+	x.Width = uint32(xc.width)
+	x.Height = uint32(xc.height)
+
+	return nil
+}
+
+func (x *ConnectorParam) toC(xc *C.libxl_connector_param) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_connector_param_dispose(xc)
+		}
+	}()
+
+	if x.UniqueId != "" {
+		xc.unique_id = C.CString(x.UniqueId)
+	}
+	xc.width = C.uint32_t(x.Width)
+	xc.height = C.uint32_t(x.Height)
+
+	return nil
+}
+
+// NewDeviceVdispl returns an instance of DeviceVdispl initialized with defaults.
+func NewDeviceVdispl() (*DeviceVdispl, error) {
+	var (
+		x  DeviceVdispl
+		xc C.libxl_device_vdispl
+	)
+
+	C.libxl_device_vdispl_init(&xc)
+	defer C.libxl_device_vdispl_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceVdispl) fromC(xc *C.libxl_device_vdispl) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+	x.BeAlloc = bool(xc.be_alloc)
+	x.Connectors = nil
+	if n := int(xc.num_connectors); n > 0 {
+		cConnectors := (*[1 << 28]C.libxl_connector_param)(unsafe.Pointer(xc.connectors))[:n:n]
+		x.Connectors = make([]ConnectorParam, n)
+		for i, v := range cConnectors {
+			if err := x.Connectors[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Connectors: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (x *DeviceVdispl) toC(xc *C.libxl_device_vdispl) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_vdispl_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.be_alloc = C.bool(x.BeAlloc)
+	if numConnectors := len(x.Connectors); numConnectors > 0 {
+		xc.connectors = (*C.libxl_connector_param)(C.malloc(C.ulong(numConnectors) * C.sizeof_libxl_connector_param))
+		xc.num_connectors = C.int(numConnectors)
+		cConnectors := (*[1 << 28]C.libxl_connector_param)(unsafe.Pointer(xc.connectors))[:numConnectors:numConnectors]
+		for i, v := range x.Connectors {
+			if err := v.toC(&cConnectors[i]); err != nil {
+				return fmt.Errorf("converting field Connectors: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+// NewVsndParams returns an instance of VsndParams initialized with defaults.
+func NewVsndParams() (*VsndParams, error) {
+	var (
+		x  VsndParams
+		xc C.libxl_vsnd_params
+	)
+
+	C.libxl_vsnd_params_init(&xc)
+	defer C.libxl_vsnd_params_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VsndParams) fromC(xc *C.libxl_vsnd_params) error {
+	x.SampleRates = nil
+	if n := int(xc.num_sample_rates); n > 0 {
+		cSampleRates := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.sample_rates))[:n:n]
+		x.SampleRates = make([]uint32, n)
+		for i, v := range cSampleRates {
+			x.SampleRates[i] = uint32(v)
+		}
+	}
+	x.SampleFormats = nil
+	if n := int(xc.num_sample_formats); n > 0 {
+		cSampleFormats := (*[1 << 28]C.libxl_vsnd_pcm_format)(unsafe.Pointer(xc.sample_formats))[:n:n]
+		x.SampleFormats = make([]VsndPcmFormat, n)
+		for i, v := range cSampleFormats {
+			x.SampleFormats[i] = VsndPcmFormat(v)
+		}
+	}
+	x.ChannelsMin = uint32(xc.channels_min)
+	x.ChannelsMax = uint32(xc.channels_max)
+	x.BufferSize = uint32(xc.buffer_size)
+
+	return nil
+}
+
+func (x *VsndParams) toC(xc *C.libxl_vsnd_params) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vsnd_params_dispose(xc)
+		}
+	}()
+
+	if numSampleRates := len(x.SampleRates); numSampleRates > 0 {
+		xc.sample_rates = (*C.uint32_t)(C.malloc(C.ulong(numSampleRates) * C.sizeof_uint32_t))
+		xc.num_sample_rates = C.int(numSampleRates)
+		cSampleRates := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.sample_rates))[:numSampleRates:numSampleRates]
+		for i, v := range x.SampleRates {
+			cSampleRates[i] = C.uint32_t(v)
+		}
+	}
+	if numSampleFormats := len(x.SampleFormats); numSampleFormats > 0 {
+		xc.sample_formats = (*C.libxl_vsnd_pcm_format)(C.malloc(C.ulong(numSampleFormats) * C.sizeof_libxl_vsnd_pcm_format))
+		xc.num_sample_formats = C.int(numSampleFormats)
+		cSampleFormats := (*[1 << 28]C.libxl_vsnd_pcm_format)(unsafe.Pointer(xc.sample_formats))[:numSampleFormats:numSampleFormats]
+		for i, v := range x.SampleFormats {
+			cSampleFormats[i] = C.libxl_vsnd_pcm_format(v)
+		}
+	}
+	xc.channels_min = C.uint32_t(x.ChannelsMin)
+	xc.channels_max = C.uint32_t(x.ChannelsMax)
+	xc.buffer_size = C.uint32_t(x.BufferSize)
+
+	return nil
+}
+
+// NewVsndStream returns an instance of VsndStream initialized with defaults.
+func NewVsndStream() (*VsndStream, error) {
+	var (
+		x  VsndStream
+		xc C.libxl_vsnd_stream
+	)
+
+	C.libxl_vsnd_stream_init(&xc)
+	defer C.libxl_vsnd_stream_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VsndStream) fromC(xc *C.libxl_vsnd_stream) error {
+	x.UniqueId = C.GoString(xc.unique_id)
+	x.Type = VsndStreamType(xc._type)
+	if err := x.Params.fromC(&xc.params); err != nil {
+		return fmt.Errorf("converting field Params: %v", err)
+	}
+
+	return nil
+}
+
+func (x *VsndStream) toC(xc *C.libxl_vsnd_stream) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vsnd_stream_dispose(xc)
+		}
+	}()
+
+	if x.UniqueId != "" {
+		xc.unique_id = C.CString(x.UniqueId)
+	}
+	xc._type = C.libxl_vsnd_stream_type(x.Type)
+	if err := x.Params.toC(&xc.params); err != nil {
+		return fmt.Errorf("converting field Params: %v", err)
+	}
+
+	return nil
+}
+
+// NewVsndPcm returns an instance of VsndPcm initialized with defaults.
+func NewVsndPcm() (*VsndPcm, error) {
+	var (
+		x  VsndPcm
+		xc C.libxl_vsnd_pcm
+	)
+
+	C.libxl_vsnd_pcm_init(&xc)
+	defer C.libxl_vsnd_pcm_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *VsndPcm) fromC(xc *C.libxl_vsnd_pcm) error {
+	x.Name = C.GoString(xc.name)
+	if err := x.Params.fromC(&xc.params); err != nil {
+		return fmt.Errorf("converting field Params: %v", err)
+	}
+	x.Streams = nil
+	if n := int(xc.num_vsnd_streams); n > 0 {
+		cStreams := (*[1 << 28]C.libxl_vsnd_stream)(unsafe.Pointer(xc.streams))[:n:n]
+		x.Streams = make([]VsndStream, n)
+		for i, v := range cStreams {
+			if err := x.Streams[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Streams: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (x *VsndPcm) toC(xc *C.libxl_vsnd_pcm) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vsnd_pcm_dispose(xc)
+		}
+	}()
+
+	if x.Name != "" {
+		xc.name = C.CString(x.Name)
+	}
+	if err := x.Params.toC(&xc.params); err != nil {
+		return fmt.Errorf("converting field Params: %v", err)
+	}
+	if numVsndStreams := len(x.Streams); numVsndStreams > 0 {
+		xc.streams = (*C.libxl_vsnd_stream)(C.malloc(C.ulong(numVsndStreams) * C.sizeof_libxl_vsnd_stream))
+		xc.num_vsnd_streams = C.int(numVsndStreams)
+		cStreams := (*[1 << 28]C.libxl_vsnd_stream)(unsafe.Pointer(xc.streams))[:numVsndStreams:numVsndStreams]
+		for i, v := range x.Streams {
+			if err := v.toC(&cStreams[i]); err != nil {
+				return fmt.Errorf("converting field Streams: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+// NewDeviceVsnd returns an instance of DeviceVsnd initialized with defaults.
+func NewDeviceVsnd() (*DeviceVsnd, error) {
+	var (
+		x  DeviceVsnd
+		xc C.libxl_device_vsnd
+	)
+
+	C.libxl_device_vsnd_init(&xc)
+	defer C.libxl_device_vsnd_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DeviceVsnd) fromC(xc *C.libxl_device_vsnd) error {
+	x.BackendDomid = Domid(xc.backend_domid)
+	x.BackendDomname = C.GoString(xc.backend_domname)
+	x.Devid = Devid(xc.devid)
+	x.ShortName = C.GoString(xc.short_name)
+	x.LongName = C.GoString(xc.long_name)
+	if err := x.Params.fromC(&xc.params); err != nil {
+		return fmt.Errorf("converting field Params: %v", err)
+	}
+	x.Pcms = nil
+	if n := int(xc.num_vsnd_pcms); n > 0 {
+		cPcms := (*[1 << 28]C.libxl_vsnd_pcm)(unsafe.Pointer(xc.pcms))[:n:n]
+		x.Pcms = make([]VsndPcm, n)
+		for i, v := range cPcms {
+			if err := x.Pcms[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Pcms: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (x *DeviceVsnd) toC(xc *C.libxl_device_vsnd) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_device_vsnd_dispose(xc)
+		}
+	}()
+
+	xc.backend_domid = C.libxl_domid(x.BackendDomid)
+	if x.BackendDomname != "" {
+		xc.backend_domname = C.CString(x.BackendDomname)
+	}
+	xc.devid = C.libxl_devid(x.Devid)
+	if x.ShortName != "" {
+		xc.short_name = C.CString(x.ShortName)
+	}
+	if x.LongName != "" {
+		xc.long_name = C.CString(x.LongName)
+	}
+	if err := x.Params.toC(&xc.params); err != nil {
+		return fmt.Errorf("converting field Params: %v", err)
+	}
+	if numVsndPcms := len(x.Pcms); numVsndPcms > 0 {
+		xc.pcms = (*C.libxl_vsnd_pcm)(C.malloc(C.ulong(numVsndPcms) * C.sizeof_libxl_vsnd_pcm))
+		xc.num_vsnd_pcms = C.int(numVsndPcms)
+		cPcms := (*[1 << 28]C.libxl_vsnd_pcm)(unsafe.Pointer(xc.pcms))[:numVsndPcms:numVsndPcms]
+		for i, v := range x.Pcms {
+			if err := v.toC(&cPcms[i]); err != nil {
+				return fmt.Errorf("converting field Pcms: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+// NewDomainConfig returns an instance of DomainConfig initialized with defaults.
+func NewDomainConfig() (*DomainConfig, error) {
+	var (
+		x  DomainConfig
+		xc C.libxl_domain_config
+	)
+
+	C.libxl_domain_config_init(&xc)
+	defer C.libxl_domain_config_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DomainConfig) fromC(xc *C.libxl_domain_config) error {
+	if err := x.CInfo.fromC(&xc.c_info); err != nil {
+		return fmt.Errorf("converting field CInfo: %v", err)
+	}
+	if err := x.BInfo.fromC(&xc.b_info); err != nil {
+		return fmt.Errorf("converting field BInfo: %v", err)
+	}
+	x.Disks = nil
+	if n := int(xc.num_disks); n > 0 {
+		cDisks := (*[1 << 28]C.libxl_device_disk)(unsafe.Pointer(xc.disks))[:n:n]
+		x.Disks = make([]DeviceDisk, n)
+		for i, v := range cDisks {
+			if err := x.Disks[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Disks: %v", err)
+			}
+		}
+	}
+	x.Nics = nil
+	if n := int(xc.num_nics); n > 0 {
+		cNics := (*[1 << 28]C.libxl_device_nic)(unsafe.Pointer(xc.nics))[:n:n]
+		x.Nics = make([]DeviceNic, n)
+		for i, v := range cNics {
+			if err := x.Nics[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Nics: %v", err)
+			}
+		}
+	}
+	x.Pcidevs = nil
+	if n := int(xc.num_pcidevs); n > 0 {
+		cPcidevs := (*[1 << 28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:n:n]
+		x.Pcidevs = make([]DevicePci, n)
+		for i, v := range cPcidevs {
+			if err := x.Pcidevs[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Pcidevs: %v", err)
+			}
+		}
+	}
+	x.Rdms = nil
+	if n := int(xc.num_rdms); n > 0 {
+		cRdms := (*[1 << 28]C.libxl_device_rdm)(unsafe.Pointer(xc.rdms))[:n:n]
+		x.Rdms = make([]DeviceRdm, n)
+		for i, v := range cRdms {
+			if err := x.Rdms[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Rdms: %v", err)
+			}
+		}
+	}
+	x.Dtdevs = nil
+	if n := int(xc.num_dtdevs); n > 0 {
+		cDtdevs := (*[1 << 28]C.libxl_device_dtdev)(unsafe.Pointer(xc.dtdevs))[:n:n]
+		x.Dtdevs = make([]DeviceDtdev, n)
+		for i, v := range cDtdevs {
+			if err := x.Dtdevs[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Dtdevs: %v", err)
+			}
+		}
+	}
+	x.Vfbs = nil
+	if n := int(xc.num_vfbs); n > 0 {
+		cVfbs := (*[1 << 28]C.libxl_device_vfb)(unsafe.Pointer(xc.vfbs))[:n:n]
+		x.Vfbs = make([]DeviceVfb, n)
+		for i, v := range cVfbs {
+			if err := x.Vfbs[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Vfbs: %v", err)
+			}
+		}
+	}
+	x.Vkbs = nil
+	if n := int(xc.num_vkbs); n > 0 {
+		cVkbs := (*[1 << 28]C.libxl_device_vkb)(unsafe.Pointer(xc.vkbs))[:n:n]
+		x.Vkbs = make([]DeviceVkb, n)
+		for i, v := range cVkbs {
+			if err := x.Vkbs[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Vkbs: %v", err)
+			}
+		}
+	}
+	x.Vtpms = nil
+	if n := int(xc.num_vtpms); n > 0 {
+		cVtpms := (*[1 << 28]C.libxl_device_vtpm)(unsafe.Pointer(xc.vtpms))[:n:n]
+		x.Vtpms = make([]DeviceVtpm, n)
+		for i, v := range cVtpms {
+			if err := x.Vtpms[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Vtpms: %v", err)
+			}
+		}
+	}
+	x.P9S = nil
+	if n := int(xc.num_p9s); n > 0 {
+		cP9S := (*[1 << 28]C.libxl_device_p9)(unsafe.Pointer(xc.p9s))[:n:n]
+		x.P9S = make([]DeviceP9, n)
+		for i, v := range cP9S {
+			if err := x.P9S[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field P9S: %v", err)
+			}
+		}
+	}
+	x.Pvcallsifs = nil
+	if n := int(xc.num_pvcallsifs); n > 0 {
+		cPvcallsifs := (*[1 << 28]C.libxl_device_pvcallsif)(unsafe.Pointer(xc.pvcallsifs))[:n:n]
+		x.Pvcallsifs = make([]DevicePvcallsif, n)
+		for i, v := range cPvcallsifs {
+			if err := x.Pvcallsifs[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Pvcallsifs: %v", err)
+			}
+		}
+	}
+	x.Vdispls = nil
+	if n := int(xc.num_vdispls); n > 0 {
+		cVdispls := (*[1 << 28]C.libxl_device_vdispl)(unsafe.Pointer(xc.vdispls))[:n:n]
+		x.Vdispls = make([]DeviceVdispl, n)
+		for i, v := range cVdispls {
+			if err := x.Vdispls[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Vdispls: %v", err)
+			}
+		}
+	}
+	x.Vsnds = nil
+	if n := int(xc.num_vsnds); n > 0 {
+		cVsnds := (*[1 << 28]C.libxl_device_vsnd)(unsafe.Pointer(xc.vsnds))[:n:n]
+		x.Vsnds = make([]DeviceVsnd, n)
+		for i, v := range cVsnds {
+			if err := x.Vsnds[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Vsnds: %v", err)
+			}
+		}
+	}
+	x.Channels = nil
+	if n := int(xc.num_channels); n > 0 {
+		cChannels := (*[1 << 28]C.libxl_device_channel)(unsafe.Pointer(xc.channels))[:n:n]
+		x.Channels = make([]DeviceChannel, n)
+		for i, v := range cChannels {
+			if err := x.Channels[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Channels: %v", err)
+			}
+		}
+	}
+	x.Usbctrls = nil
+	if n := int(xc.num_usbctrls); n > 0 {
+		cUsbctrls := (*[1 << 28]C.libxl_device_usbctrl)(unsafe.Pointer(xc.usbctrls))[:n:n]
+		x.Usbctrls = make([]DeviceUsbctrl, n)
+		for i, v := range cUsbctrls {
+			if err := x.Usbctrls[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Usbctrls: %v", err)
+			}
+		}
+	}
+	x.Usbdevs = nil
+	if n := int(xc.num_usbdevs); n > 0 {
+		cUsbdevs := (*[1 << 28]C.libxl_device_usbdev)(unsafe.Pointer(xc.usbdevs))[:n:n]
+		x.Usbdevs = make([]DeviceUsbdev, n)
+		for i, v := range cUsbdevs {
+			if err := x.Usbdevs[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Usbdevs: %v", err)
+			}
+		}
+	}
+	x.OnPoweroff = ActionOnShutdown(xc.on_poweroff)
+	x.OnReboot = ActionOnShutdown(xc.on_reboot)
+	x.OnWatchdog = ActionOnShutdown(xc.on_watchdog)
+	x.OnCrash = ActionOnShutdown(xc.on_crash)
+	x.OnSoftReset = ActionOnShutdown(xc.on_soft_reset)
+
+	return nil
+}
+
+func (x *DomainConfig) toC(xc *C.libxl_domain_config) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_domain_config_dispose(xc)
+		}
+	}()
+
+	if err := x.CInfo.toC(&xc.c_info); err != nil {
+		return fmt.Errorf("converting field CInfo: %v", err)
+	}
+	if err := x.BInfo.toC(&xc.b_info); err != nil {
+		return fmt.Errorf("converting field BInfo: %v", err)
+	}
+	if numDisks := len(x.Disks); numDisks > 0 {
+		xc.disks = (*C.libxl_device_disk)(C.malloc(C.ulong(numDisks) * C.sizeof_libxl_device_disk))
+		xc.num_disks = C.int(numDisks)
+		cDisks := (*[1 << 28]C.libxl_device_disk)(unsafe.Pointer(xc.disks))[:numDisks:numDisks]
+		for i, v := range x.Disks {
+			if err := v.toC(&cDisks[i]); err != nil {
+				return fmt.Errorf("converting field Disks: %v", err)
+			}
+		}
+	}
+	if numNics := len(x.Nics); numNics > 0 {
+		xc.nics = (*C.libxl_device_nic)(C.malloc(C.ulong(numNics) * C.sizeof_libxl_device_nic))
+		xc.num_nics = C.int(numNics)
+		cNics := (*[1 << 28]C.libxl_device_nic)(unsafe.Pointer(xc.nics))[:numNics:numNics]
+		for i, v := range x.Nics {
+			if err := v.toC(&cNics[i]); err != nil {
+				return fmt.Errorf("converting field Nics: %v", err)
+			}
+		}
+	}
+	if numPcidevs := len(x.Pcidevs); numPcidevs > 0 {
+		xc.pcidevs = (*C.libxl_device_pci)(C.malloc(C.ulong(numPcidevs) * C.sizeof_libxl_device_pci))
+		xc.num_pcidevs = C.int(numPcidevs)
+		cPcidevs := (*[1 << 28]C.libxl_device_pci)(unsafe.Pointer(xc.pcidevs))[:numPcidevs:numPcidevs]
+		for i, v := range x.Pcidevs {
+			if err := v.toC(&cPcidevs[i]); err != nil {
+				return fmt.Errorf("converting field Pcidevs: %v", err)
+			}
+		}
+	}
+	if numRdms := len(x.Rdms); numRdms > 0 {
+		xc.rdms = (*C.libxl_device_rdm)(C.malloc(C.ulong(numRdms) * C.sizeof_libxl_device_rdm))
+		xc.num_rdms = C.int(numRdms)
+		cRdms := (*[1 << 28]C.libxl_device_rdm)(unsafe.Pointer(xc.rdms))[:numRdms:numRdms]
+		for i, v := range x.Rdms {
+			if err := v.toC(&cRdms[i]); err != nil {
+				return fmt.Errorf("converting field Rdms: %v", err)
+			}
+		}
+	}
+	if numDtdevs := len(x.Dtdevs); numDtdevs > 0 {
+		xc.dtdevs = (*C.libxl_device_dtdev)(C.malloc(C.ulong(numDtdevs) * C.sizeof_libxl_device_dtdev))
+		xc.num_dtdevs = C.int(numDtdevs)
+		cDtdevs := (*[1 << 28]C.libxl_device_dtdev)(unsafe.Pointer(xc.dtdevs))[:numDtdevs:numDtdevs]
+		for i, v := range x.Dtdevs {
+			if err := v.toC(&cDtdevs[i]); err != nil {
+				return fmt.Errorf("converting field Dtdevs: %v", err)
+			}
+		}
+	}
+	if numVfbs := len(x.Vfbs); numVfbs > 0 {
+		xc.vfbs = (*C.libxl_device_vfb)(C.malloc(C.ulong(numVfbs) * C.sizeof_libxl_device_vfb))
+		xc.num_vfbs = C.int(numVfbs)
+		cVfbs := (*[1 << 28]C.libxl_device_vfb)(unsafe.Pointer(xc.vfbs))[:numVfbs:numVfbs]
+		for i, v := range x.Vfbs {
+			if err := v.toC(&cVfbs[i]); err != nil {
+				return fmt.Errorf("converting field Vfbs: %v", err)
+			}
+		}
+	}
+	if numVkbs := len(x.Vkbs); numVkbs > 0 {
+		xc.vkbs = (*C.libxl_device_vkb)(C.malloc(C.ulong(numVkbs) * C.sizeof_libxl_device_vkb))
+		xc.num_vkbs = C.int(numVkbs)
+		cVkbs := (*[1 << 28]C.libxl_device_vkb)(unsafe.Pointer(xc.vkbs))[:numVkbs:numVkbs]
+		for i, v := range x.Vkbs {
+			if err := v.toC(&cVkbs[i]); err != nil {
+				return fmt.Errorf("converting field Vkbs: %v", err)
+			}
+		}
+	}
+	if numVtpms := len(x.Vtpms); numVtpms > 0 {
+		xc.vtpms = (*C.libxl_device_vtpm)(C.malloc(C.ulong(numVtpms) * C.sizeof_libxl_device_vtpm))
+		xc.num_vtpms = C.int(numVtpms)
+		cVtpms := (*[1 << 28]C.libxl_device_vtpm)(unsafe.Pointer(xc.vtpms))[:numVtpms:numVtpms]
+		for i, v := range x.Vtpms {
+			if err := v.toC(&cVtpms[i]); err != nil {
+				return fmt.Errorf("converting field Vtpms: %v", err)
+			}
+		}
+	}
+	if numP9S := len(x.P9S); numP9S > 0 {
+		xc.p9s = (*C.libxl_device_p9)(C.malloc(C.ulong(numP9S) * C.sizeof_libxl_device_p9))
+		xc.num_p9s = C.int(numP9S)
+		cP9S := (*[1 << 28]C.libxl_device_p9)(unsafe.Pointer(xc.p9s))[:numP9S:numP9S]
+		for i, v := range x.P9S {
+			if err := v.toC(&cP9S[i]); err != nil {
+				return fmt.Errorf("converting field P9S: %v", err)
+			}
+		}
+	}
+	if numPvcallsifs := len(x.Pvcallsifs); numPvcallsifs > 0 {
+		xc.pvcallsifs = (*C.libxl_device_pvcallsif)(C.malloc(C.ulong(numPvcallsifs) * C.sizeof_libxl_device_pvcallsif))
+		xc.num_pvcallsifs = C.int(numPvcallsifs)
+		cPvcallsifs := (*[1 << 28]C.libxl_device_pvcallsif)(unsafe.Pointer(xc.pvcallsifs))[:numPvcallsifs:numPvcallsifs]
+		for i, v := range x.Pvcallsifs {
+			if err := v.toC(&cPvcallsifs[i]); err != nil {
+				return fmt.Errorf("converting field Pvcallsifs: %v", err)
+			}
+		}
+	}
+	if numVdispls := len(x.Vdispls); numVdispls > 0 {
+		xc.vdispls = (*C.libxl_device_vdispl)(C.malloc(C.ulong(numVdispls) * C.sizeof_libxl_device_vdispl))
+		xc.num_vdispls = C.int(numVdispls)
+		cVdispls := (*[1 << 28]C.libxl_device_vdispl)(unsafe.Pointer(xc.vdispls))[:numVdispls:numVdispls]
+		for i, v := range x.Vdispls {
+			if err := v.toC(&cVdispls[i]); err != nil {
+				return fmt.Errorf("converting field Vdispls: %v", err)
+			}
+		}
+	}
+	if numVsnds := len(x.Vsnds); numVsnds > 0 {
+		xc.vsnds = (*C.libxl_device_vsnd)(C.malloc(C.ulong(numVsnds) * C.sizeof_libxl_device_vsnd))
+		xc.num_vsnds = C.int(numVsnds)
+		cVsnds := (*[1 << 28]C.libxl_device_vsnd)(unsafe.Pointer(xc.vsnds))[:numVsnds:numVsnds]
+		for i, v := range x.Vsnds {
+			if err := v.toC(&cVsnds[i]); err != nil {
+				return fmt.Errorf("converting field Vsnds: %v", err)
+			}
+		}
+	}
+	if numChannels := len(x.Channels); numChannels > 0 {
+		xc.channels = (*C.libxl_device_channel)(C.malloc(C.ulong(numChannels) * C.sizeof_libxl_device_channel))
+		xc.num_channels = C.int(numChannels)
+		cChannels := (*[1 << 28]C.libxl_device_channel)(unsafe.Pointer(xc.channels))[:numChannels:numChannels]
+		for i, v := range x.Channels {
+			if err := v.toC(&cChannels[i]); err != nil {
+				return fmt.Errorf("converting field Channels: %v", err)
+			}
+		}
+	}
+	if numUsbctrls := len(x.Usbctrls); numUsbctrls > 0 {
+		xc.usbctrls = (*C.libxl_device_usbctrl)(C.malloc(C.ulong(numUsbctrls) * C.sizeof_libxl_device_usbctrl))
+		xc.num_usbctrls = C.int(numUsbctrls)
+		cUsbctrls := (*[1 << 28]C.libxl_device_usbctrl)(unsafe.Pointer(xc.usbctrls))[:numUsbctrls:numUsbctrls]
+		for i, v := range x.Usbctrls {
+			if err := v.toC(&cUsbctrls[i]); err != nil {
+				return fmt.Errorf("converting field Usbctrls: %v", err)
+			}
+		}
+	}
+	if numUsbdevs := len(x.Usbdevs); numUsbdevs > 0 {
+		xc.usbdevs = (*C.libxl_device_usbdev)(C.malloc(C.ulong(numUsbdevs) * C.sizeof_libxl_device_usbdev))
+		xc.num_usbdevs = C.int(numUsbdevs)
+		cUsbdevs := (*[1 << 28]C.libxl_device_usbdev)(unsafe.Pointer(xc.usbdevs))[:numUsbdevs:numUsbdevs]
+		for i, v := range x.Usbdevs {
+			if err := v.toC(&cUsbdevs[i]); err != nil {
+				return fmt.Errorf("converting field Usbdevs: %v", err)
+			}
+		}
+	}
+	xc.on_poweroff = C.libxl_action_on_shutdown(x.OnPoweroff)
+	xc.on_reboot = C.libxl_action_on_shutdown(x.OnReboot)
+	xc.on_watchdog = C.libxl_action_on_shutdown(x.OnWatchdog)
+	xc.on_crash = C.libxl_action_on_shutdown(x.OnCrash)
+	xc.on_soft_reset = C.libxl_action_on_shutdown(x.OnSoftReset)
+
+	return nil
+}
+
+// NewDiskinfo returns an instance of Diskinfo initialized with defaults.
+func NewDiskinfo() (*Diskinfo, error) {
+	var (
+		x  Diskinfo
+		xc C.libxl_diskinfo
+	)
+
+	C.libxl_diskinfo_init(&xc)
+	defer C.libxl_diskinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Diskinfo) fromC(xc *C.libxl_diskinfo) error {
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.Devid = Devid(xc.devid)
+	x.State = int(xc.state)
+	x.Evtch = int(xc.evtch)
+	x.Rref = int(xc.rref)
+
+	return nil
+}
+
+func (x *Diskinfo) toC(xc *C.libxl_diskinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_diskinfo_dispose(xc)
+		}
+	}()
+
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.state = C.int(x.State)
+	xc.evtch = C.int(x.Evtch)
+	xc.rref = C.int(x.Rref)
+
+	return nil
+}
+
+// NewNicinfo returns an instance of Nicinfo initialized with defaults.
+func NewNicinfo() (*Nicinfo, error) {
+	var (
+		x  Nicinfo
+		xc C.libxl_nicinfo
+	)
+
+	C.libxl_nicinfo_init(&xc)
+	defer C.libxl_nicinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Nicinfo) fromC(xc *C.libxl_nicinfo) error {
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.Devid = Devid(xc.devid)
+	x.State = int(xc.state)
+	x.Evtch = int(xc.evtch)
+	x.RrefTx = int(xc.rref_tx)
+	x.RrefRx = int(xc.rref_rx)
+
+	return nil
+}
+
+func (x *Nicinfo) toC(xc *C.libxl_nicinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_nicinfo_dispose(xc)
+		}
+	}()
+
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.state = C.int(x.State)
+	xc.evtch = C.int(x.Evtch)
+	xc.rref_tx = C.int(x.RrefTx)
+	xc.rref_rx = C.int(x.RrefRx)
+
+	return nil
+}
+
+// NewVtpminfo returns an instance of Vtpminfo initialized with defaults.
+func NewVtpminfo() (*Vtpminfo, error) {
+	var (
+		x  Vtpminfo
+		xc C.libxl_vtpminfo
+	)
+
+	C.libxl_vtpminfo_init(&xc)
+	defer C.libxl_vtpminfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Vtpminfo) fromC(xc *C.libxl_vtpminfo) error {
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.Devid = Devid(xc.devid)
+	x.State = int(xc.state)
+	x.Evtch = int(xc.evtch)
+	x.Rref = int(xc.rref)
+	if err := x.Uuid.fromC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+
+	return nil
+}
+
+func (x *Vtpminfo) toC(xc *C.libxl_vtpminfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vtpminfo_dispose(xc)
+		}
+	}()
+
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.state = C.int(x.State)
+	xc.evtch = C.int(x.Evtch)
+	xc.rref = C.int(x.Rref)
+	if err := x.Uuid.toC(&xc.uuid); err != nil {
+		return fmt.Errorf("converting field Uuid: %v", err)
+	}
+
+	return nil
+}
+
+// NewUsbctrlinfo returns an instance of Usbctrlinfo initialized with defaults.
+func NewUsbctrlinfo() (*Usbctrlinfo, error) {
+	var (
+		x  Usbctrlinfo
+		xc C.libxl_usbctrlinfo
+	)
+
+	C.libxl_usbctrlinfo_init(&xc)
+	defer C.libxl_usbctrlinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Usbctrlinfo) fromC(xc *C.libxl_usbctrlinfo) error {
+	x.Type = UsbctrlType(xc._type)
+	x.Devid = Devid(xc.devid)
+	x.Version = int(xc.version)
+	x.Ports = int(xc.ports)
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.State = int(xc.state)
+	x.Evtch = int(xc.evtch)
+	x.RefUrb = int(xc.ref_urb)
+	x.RefConn = int(xc.ref_conn)
+
+	return nil
+}
+
+func (x *Usbctrlinfo) toC(xc *C.libxl_usbctrlinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_usbctrlinfo_dispose(xc)
+		}
+	}()
+
+	xc._type = C.libxl_usbctrl_type(x.Type)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.version = C.int(x.Version)
+	xc.ports = C.int(x.Ports)
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.state = C.int(x.State)
+	xc.evtch = C.int(x.Evtch)
+	xc.ref_urb = C.int(x.RefUrb)
+	xc.ref_conn = C.int(x.RefConn)
+
+	return nil
+}
+
+// NewVcpuinfo returns an instance of Vcpuinfo initialized with defaults.
+func NewVcpuinfo() (*Vcpuinfo, error) {
+	var (
+		x  Vcpuinfo
+		xc C.libxl_vcpuinfo
+	)
+
+	C.libxl_vcpuinfo_init(&xc)
+	defer C.libxl_vcpuinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Vcpuinfo) fromC(xc *C.libxl_vcpuinfo) error {
+	x.Vcpuid = uint32(xc.vcpuid)
+	x.Cpu = uint32(xc.cpu)
+	x.Online = bool(xc.online)
+	x.Blocked = bool(xc.blocked)
+	x.Running = bool(xc.running)
+	x.VcpuTime = uint64(xc.vcpu_time)
+	if err := x.Cpumap.fromC(&xc.cpumap); err != nil {
+		return fmt.Errorf("converting field Cpumap: %v", err)
+	}
+	if err := x.CpumapSoft.fromC(&xc.cpumap_soft); err != nil {
+		return fmt.Errorf("converting field CpumapSoft: %v", err)
+	}
+
+	return nil
+}
+
+func (x *Vcpuinfo) toC(xc *C.libxl_vcpuinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vcpuinfo_dispose(xc)
+		}
+	}()
+
+	xc.vcpuid = C.uint32_t(x.Vcpuid)
+	xc.cpu = C.uint32_t(x.Cpu)
+	xc.online = C.bool(x.Online)
+	xc.blocked = C.bool(x.Blocked)
+	xc.running = C.bool(x.Running)
+	xc.vcpu_time = C.uint64_t(x.VcpuTime)
+	if err := x.Cpumap.toC(&xc.cpumap); err != nil {
+		return fmt.Errorf("converting field Cpumap: %v", err)
+	}
+	if err := x.CpumapSoft.toC(&xc.cpumap_soft); err != nil {
+		return fmt.Errorf("converting field CpumapSoft: %v", err)
+	}
+
+	return nil
+}
+
+// NewPhysinfo returns an instance of Physinfo initialized with defaults.
+func NewPhysinfo() (*Physinfo, error) {
+	var (
+		x  Physinfo
+		xc C.libxl_physinfo
+	)
+
+	C.libxl_physinfo_init(&xc)
+	defer C.libxl_physinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Physinfo) fromC(xc *C.libxl_physinfo) error {
+	x.ThreadsPerCore = uint32(xc.threads_per_core)
+	x.CoresPerSocket = uint32(xc.cores_per_socket)
+	x.MaxCpuId = uint32(xc.max_cpu_id)
+	x.NrCpus = uint32(xc.nr_cpus)
+	x.CpuKhz = uint32(xc.cpu_khz)
+	x.TotalPages = uint64(xc.total_pages)
+	x.FreePages = uint64(xc.free_pages)
+	x.ScrubPages = uint64(xc.scrub_pages)
+	x.OutstandingPages = uint64(xc.outstanding_pages)
+	x.SharingFreedPages = uint64(xc.sharing_freed_pages)
+	x.SharingUsedFrames = uint64(xc.sharing_used_frames)
+	x.MaxPossibleMfn = uint64(xc.max_possible_mfn)
+	x.NrNodes = uint32(xc.nr_nodes)
+	if err := x.HwCap.fromC(&xc.hw_cap); err != nil {
+		return fmt.Errorf("converting field HwCap: %v", err)
+	}
+	x.CapHvm = bool(xc.cap_hvm)
+	x.CapPv = bool(xc.cap_pv)
+	x.CapHvmDirectio = bool(xc.cap_hvm_directio)
+	x.CapHap = bool(xc.cap_hap)
+	x.CapShadow = bool(xc.cap_shadow)
+	x.CapIommuHapPtShare = bool(xc.cap_iommu_hap_pt_share)
+
+	return nil
+}
+
+func (x *Physinfo) toC(xc *C.libxl_physinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_physinfo_dispose(xc)
+		}
+	}()
+
+	xc.threads_per_core = C.uint32_t(x.ThreadsPerCore)
+	xc.cores_per_socket = C.uint32_t(x.CoresPerSocket)
+	xc.max_cpu_id = C.uint32_t(x.MaxCpuId)
+	xc.nr_cpus = C.uint32_t(x.NrCpus)
+	xc.cpu_khz = C.uint32_t(x.CpuKhz)
+	xc.total_pages = C.uint64_t(x.TotalPages)
+	xc.free_pages = C.uint64_t(x.FreePages)
+	xc.scrub_pages = C.uint64_t(x.ScrubPages)
+	xc.outstanding_pages = C.uint64_t(x.OutstandingPages)
+	xc.sharing_freed_pages = C.uint64_t(x.SharingFreedPages)
+	xc.sharing_used_frames = C.uint64_t(x.SharingUsedFrames)
+	xc.max_possible_mfn = C.uint64_t(x.MaxPossibleMfn)
+	xc.nr_nodes = C.uint32_t(x.NrNodes)
+	if err := x.HwCap.toC(&xc.hw_cap); err != nil {
+		return fmt.Errorf("converting field HwCap: %v", err)
+	}
+	xc.cap_hvm = C.bool(x.CapHvm)
+	xc.cap_pv = C.bool(x.CapPv)
+	xc.cap_hvm_directio = C.bool(x.CapHvmDirectio)
+	xc.cap_hap = C.bool(x.CapHap)
+	xc.cap_shadow = C.bool(x.CapShadow)
+	xc.cap_iommu_hap_pt_share = C.bool(x.CapIommuHapPtShare)
+
+	return nil
+}
+
+// NewConnectorinfo returns an instance of Connectorinfo initialized with defaults.
+func NewConnectorinfo() (*Connectorinfo, error) {
+	var (
+		x  Connectorinfo
+		xc C.libxl_connectorinfo
+	)
+
+	C.libxl_connectorinfo_init(&xc)
+	defer C.libxl_connectorinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Connectorinfo) fromC(xc *C.libxl_connectorinfo) error {
+	x.UniqueId = C.GoString(xc.unique_id)
+	x.Width = uint32(xc.width)
+	x.Height = uint32(xc.height)
+	x.ReqEvtch = int(xc.req_evtch)
+	x.ReqRref = int(xc.req_rref)
+	x.EvtEvtch = int(xc.evt_evtch)
+	x.EvtRref = int(xc.evt_rref)
+
+	return nil
+}
+
+func (x *Connectorinfo) toC(xc *C.libxl_connectorinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_connectorinfo_dispose(xc)
+		}
+	}()
+
+	if x.UniqueId != "" {
+		xc.unique_id = C.CString(x.UniqueId)
+	}
+	xc.width = C.uint32_t(x.Width)
+	xc.height = C.uint32_t(x.Height)
+	xc.req_evtch = C.int(x.ReqEvtch)
+	xc.req_rref = C.int(x.ReqRref)
+	xc.evt_evtch = C.int(x.EvtEvtch)
+	xc.evt_rref = C.int(x.EvtRref)
+
+	return nil
+}
+
+// NewVdisplinfo returns an instance of Vdisplinfo initialized with defaults.
+func NewVdisplinfo() (*Vdisplinfo, error) {
+	var (
+		x  Vdisplinfo
+		xc C.libxl_vdisplinfo
+	)
+
+	C.libxl_vdisplinfo_init(&xc)
+	defer C.libxl_vdisplinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Vdisplinfo) fromC(xc *C.libxl_vdisplinfo) error {
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.Devid = Devid(xc.devid)
+	x.State = int(xc.state)
+	x.BeAlloc = bool(xc.be_alloc)
+	x.Connectors = nil
+	if n := int(xc.num_connectors); n > 0 {
+		cConnectors := (*[1 << 28]C.libxl_connectorinfo)(unsafe.Pointer(xc.connectors))[:n:n]
+		x.Connectors = make([]Connectorinfo, n)
+		for i, v := range cConnectors {
+			if err := x.Connectors[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Connectors: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (x *Vdisplinfo) toC(xc *C.libxl_vdisplinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vdisplinfo_dispose(xc)
+		}
+	}()
+
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.state = C.int(x.State)
+	xc.be_alloc = C.bool(x.BeAlloc)
+	if numConnectors := len(x.Connectors); numConnectors > 0 {
+		xc.connectors = (*C.libxl_connectorinfo)(C.malloc(C.ulong(numConnectors) * C.sizeof_libxl_connectorinfo))
+		xc.num_connectors = C.int(numConnectors)
+		cConnectors := (*[1 << 28]C.libxl_connectorinfo)(unsafe.Pointer(xc.connectors))[:numConnectors:numConnectors]
+		for i, v := range x.Connectors {
+			if err := v.toC(&cConnectors[i]); err != nil {
+				return fmt.Errorf("converting field Connectors: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+// NewStreaminfo returns an instance of Streaminfo initialized with defaults.
+func NewStreaminfo() (*Streaminfo, error) {
+	var (
+		x  Streaminfo
+		xc C.libxl_streaminfo
+	)
+
+	C.libxl_streaminfo_init(&xc)
+	defer C.libxl_streaminfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Streaminfo) fromC(xc *C.libxl_streaminfo) error {
+	x.ReqEvtch = int(xc.req_evtch)
+	x.ReqRref = int(xc.req_rref)
+
+	return nil
+}
+
+func (x *Streaminfo) toC(xc *C.libxl_streaminfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_streaminfo_dispose(xc)
+		}
+	}()
+
+	xc.req_evtch = C.int(x.ReqEvtch)
+	xc.req_rref = C.int(x.ReqRref)
+
+	return nil
+}
+
+// NewPcminfo returns an instance of Pcminfo initialized with defaults.
+func NewPcminfo() (*Pcminfo, error) {
+	var (
+		x  Pcminfo
+		xc C.libxl_pcminfo
+	)
+
+	C.libxl_pcminfo_init(&xc)
+	defer C.libxl_pcminfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Pcminfo) fromC(xc *C.libxl_pcminfo) error {
+	x.Streams = nil
+	if n := int(xc.num_vsnd_streams); n > 0 {
+		cStreams := (*[1 << 28]C.libxl_streaminfo)(unsafe.Pointer(xc.streams))[:n:n]
+		x.Streams = make([]Streaminfo, n)
+		for i, v := range cStreams {
+			if err := x.Streams[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Streams: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (x *Pcminfo) toC(xc *C.libxl_pcminfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_pcminfo_dispose(xc)
+		}
+	}()
+
+	if numVsndStreams := len(x.Streams); numVsndStreams > 0 {
+		xc.streams = (*C.libxl_streaminfo)(C.malloc(C.ulong(numVsndStreams) * C.sizeof_libxl_streaminfo))
+		xc.num_vsnd_streams = C.int(numVsndStreams)
+		cStreams := (*[1 << 28]C.libxl_streaminfo)(unsafe.Pointer(xc.streams))[:numVsndStreams:numVsndStreams]
+		for i, v := range x.Streams {
+			if err := v.toC(&cStreams[i]); err != nil {
+				return fmt.Errorf("converting field Streams: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+// NewVsndinfo returns an instance of Vsndinfo initialized with defaults.
+func NewVsndinfo() (*Vsndinfo, error) {
+	var (
+		x  Vsndinfo
+		xc C.libxl_vsndinfo
+	)
+
+	C.libxl_vsndinfo_init(&xc)
+	defer C.libxl_vsndinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Vsndinfo) fromC(xc *C.libxl_vsndinfo) error {
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.Devid = Devid(xc.devid)
+	x.State = int(xc.state)
+	x.Pcms = nil
+	if n := int(xc.num_vsnd_pcms); n > 0 {
+		cPcms := (*[1 << 28]C.libxl_pcminfo)(unsafe.Pointer(xc.pcms))[:n:n]
+		x.Pcms = make([]Pcminfo, n)
+		for i, v := range cPcms {
+			if err := x.Pcms[i].fromC(&v); err != nil {
+				return fmt.Errorf("converting field Pcms: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+func (x *Vsndinfo) toC(xc *C.libxl_vsndinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vsndinfo_dispose(xc)
+		}
+	}()
+
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.state = C.int(x.State)
+	if numVsndPcms := len(x.Pcms); numVsndPcms > 0 {
+		xc.pcms = (*C.libxl_pcminfo)(C.malloc(C.ulong(numVsndPcms) * C.sizeof_libxl_pcminfo))
+		xc.num_vsnd_pcms = C.int(numVsndPcms)
+		cPcms := (*[1 << 28]C.libxl_pcminfo)(unsafe.Pointer(xc.pcms))[:numVsndPcms:numVsndPcms]
+		for i, v := range x.Pcms {
+			if err := v.toC(&cPcms[i]); err != nil {
+				return fmt.Errorf("converting field Pcms: %v", err)
+			}
+		}
+	}
+
+	return nil
+}
+
+// NewVkbinfo returns an instance of Vkbinfo initialized with defaults.
+func NewVkbinfo() (*Vkbinfo, error) {
+	var (
+		x  Vkbinfo
+		xc C.libxl_vkbinfo
+	)
+
+	C.libxl_vkbinfo_init(&xc)
+	defer C.libxl_vkbinfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Vkbinfo) fromC(xc *C.libxl_vkbinfo) error {
+	x.Backend = C.GoString(xc.backend)
+	x.BackendId = uint32(xc.backend_id)
+	x.Frontend = C.GoString(xc.frontend)
+	x.FrontendId = uint32(xc.frontend_id)
+	x.Devid = Devid(xc.devid)
+	x.State = int(xc.state)
+	x.Evtch = int(xc.evtch)
+	x.Rref = int(xc.rref)
+
+	return nil
+}
+
+func (x *Vkbinfo) toC(xc *C.libxl_vkbinfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_vkbinfo_dispose(xc)
+		}
+	}()
+
+	if x.Backend != "" {
+		xc.backend = C.CString(x.Backend)
+	}
+	xc.backend_id = C.uint32_t(x.BackendId)
+	if x.Frontend != "" {
+		xc.frontend = C.CString(x.Frontend)
+	}
+	xc.frontend_id = C.uint32_t(x.FrontendId)
+	xc.devid = C.libxl_devid(x.Devid)
+	xc.state = C.int(x.State)
+	xc.evtch = C.int(x.Evtch)
+	xc.rref = C.int(x.Rref)
+
+	return nil
+}
+
+// NewNumainfo returns an instance of Numainfo initialized with defaults.
+func NewNumainfo() (*Numainfo, error) {
+	var (
+		x  Numainfo
+		xc C.libxl_numainfo
+	)
+
+	C.libxl_numainfo_init(&xc)
+	defer C.libxl_numainfo_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Numainfo) fromC(xc *C.libxl_numainfo) error {
+	x.Size = uint64(xc.size)
+	x.Free = uint64(xc.free)
+	x.Dists = nil
+	if n := int(xc.num_dists); n > 0 {
+		cDists := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.dists))[:n:n]
+		x.Dists = make([]uint32, n)
+		for i, v := range cDists {
+			x.Dists[i] = uint32(v)
+		}
+	}
+
+	return nil
+}
+
+func (x *Numainfo) toC(xc *C.libxl_numainfo) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_numainfo_dispose(xc)
+		}
+	}()
+
+	xc.size = C.uint64_t(x.Size)
+	xc.free = C.uint64_t(x.Free)
+	if numDists := len(x.Dists); numDists > 0 {
+		xc.dists = (*C.uint32_t)(C.malloc(C.ulong(numDists) * C.sizeof_uint32_t))
+		xc.num_dists = C.int(numDists)
+		cDists := (*[1 << 28]C.uint32_t)(unsafe.Pointer(xc.dists))[:numDists:numDists]
+		for i, v := range x.Dists {
+			cDists[i] = C.uint32_t(v)
+		}
+	}
+
+	return nil
+}
+
+// NewCputopology returns an instance of Cputopology initialized with defaults.
+func NewCputopology() (*Cputopology, error) {
+	var (
+		x  Cputopology
+		xc C.libxl_cputopology
+	)
+
+	C.libxl_cputopology_init(&xc)
+	defer C.libxl_cputopology_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Cputopology) fromC(xc *C.libxl_cputopology) error {
+	x.Core = uint32(xc.core)
+	x.Socket = uint32(xc.socket)
+	x.Node = uint32(xc.node)
+
+	return nil
+}
+
+func (x *Cputopology) toC(xc *C.libxl_cputopology) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_cputopology_dispose(xc)
+		}
+	}()
+
+	xc.core = C.uint32_t(x.Core)
+	xc.socket = C.uint32_t(x.Socket)
+	xc.node = C.uint32_t(x.Node)
+
+	return nil
+}
+
+// NewPcitopology returns an instance of Pcitopology initialized with defaults.
+func NewPcitopology() (*Pcitopology, error) {
+	var (
+		x  Pcitopology
+		xc C.libxl_pcitopology
+	)
+
+	C.libxl_pcitopology_init(&xc)
+	defer C.libxl_pcitopology_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Pcitopology) fromC(xc *C.libxl_pcitopology) error {
+	x.Seg = uint16(xc.seg)
+	x.Bus = byte(xc.bus)
+	x.Devfn = byte(xc.devfn)
+	x.Node = uint32(xc.node)
+
+	return nil
+}
+
+func (x *Pcitopology) toC(xc *C.libxl_pcitopology) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_pcitopology_dispose(xc)
+		}
+	}()
+
+	xc.seg = C.uint16_t(x.Seg)
+	xc.bus = C.uint8_t(x.Bus)
+	xc.devfn = C.uint8_t(x.Devfn)
+	xc.node = C.uint32_t(x.Node)
+
+	return nil
+}
+
+// NewSchedCreditParams returns an instance of SchedCreditParams initialized with defaults.
+func NewSchedCreditParams() (*SchedCreditParams, error) {
+	var (
+		x  SchedCreditParams
+		xc C.libxl_sched_credit_params
+	)
+
+	C.libxl_sched_credit_params_init(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *SchedCreditParams) fromC(xc *C.libxl_sched_credit_params) error {
+	x.TsliceMs = int(xc.tslice_ms)
+	x.RatelimitUs = int(xc.ratelimit_us)
+	x.VcpuMigrDelayUs = int(xc.vcpu_migr_delay_us)
+
+	return nil
+}
+
+func (x *SchedCreditParams) toC(xc *C.libxl_sched_credit_params) (err error) {
+	xc.tslice_ms = C.int(x.TsliceMs)
+	xc.ratelimit_us = C.int(x.RatelimitUs)
+	xc.vcpu_migr_delay_us = C.int(x.VcpuMigrDelayUs)
+
+	return nil
+}
+
+// NewSchedCredit2Params returns an instance of SchedCredit2Params initialized with defaults.
+func NewSchedCredit2Params() (*SchedCredit2Params, error) {
+	var (
+		x  SchedCredit2Params
+		xc C.libxl_sched_credit2_params
+	)
+
+	C.libxl_sched_credit2_params_init(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *SchedCredit2Params) fromC(xc *C.libxl_sched_credit2_params) error {
+	x.RatelimitUs = int(xc.ratelimit_us)
+
+	return nil
+}
+
+func (x *SchedCredit2Params) toC(xc *C.libxl_sched_credit2_params) (err error) {
+	xc.ratelimit_us = C.int(x.RatelimitUs)
+
+	return nil
+}
+
+// NewDomainRemusInfo returns an instance of DomainRemusInfo initialized with defaults.
+func NewDomainRemusInfo() (*DomainRemusInfo, error) {
+	var (
+		x  DomainRemusInfo
+		xc C.libxl_domain_remus_info
+	)
+
+	C.libxl_domain_remus_info_init(&xc)
+	defer C.libxl_domain_remus_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *DomainRemusInfo) fromC(xc *C.libxl_domain_remus_info) error {
+	x.Interval = int(xc.interval)
+	if err := x.AllowUnsafe.fromC(&xc.allow_unsafe); err != nil {
+		return fmt.Errorf("converting field AllowUnsafe: %v", err)
+	}
+	if err := x.Blackhole.fromC(&xc.blackhole); err != nil {
+		return fmt.Errorf("converting field Blackhole: %v", err)
+	}
+	if err := x.Compression.fromC(&xc.compression); err != nil {
+		return fmt.Errorf("converting field Compression: %v", err)
+	}
+	if err := x.Netbuf.fromC(&xc.netbuf); err != nil {
+		return fmt.Errorf("converting field Netbuf: %v", err)
+	}
+	x.Netbufscript = C.GoString(xc.netbufscript)
+	if err := x.Diskbuf.fromC(&xc.diskbuf); err != nil {
+		return fmt.Errorf("converting field Diskbuf: %v", err)
+	}
+	if err := x.Colo.fromC(&xc.colo); err != nil {
+		return fmt.Errorf("converting field Colo: %v", err)
+	}
+	if err := x.UserspaceColoProxy.fromC(&xc.userspace_colo_proxy); err != nil {
+		return fmt.Errorf("converting field UserspaceColoProxy: %v", err)
+	}
+
+	return nil
+}
+
+func (x *DomainRemusInfo) toC(xc *C.libxl_domain_remus_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_domain_remus_info_dispose(xc)
+		}
+	}()
+
+	xc.interval = C.int(x.Interval)
+	if err := x.AllowUnsafe.toC(&xc.allow_unsafe); err != nil {
+		return fmt.Errorf("converting field AllowUnsafe: %v", err)
+	}
+	if err := x.Blackhole.toC(&xc.blackhole); err != nil {
+		return fmt.Errorf("converting field Blackhole: %v", err)
+	}
+	if err := x.Compression.toC(&xc.compression); err != nil {
+		return fmt.Errorf("converting field Compression: %v", err)
+	}
+	if err := x.Netbuf.toC(&xc.netbuf); err != nil {
+		return fmt.Errorf("converting field Netbuf: %v", err)
+	}
+	if x.Netbufscript != "" {
+		xc.netbufscript = C.CString(x.Netbufscript)
+	}
+	if err := x.Diskbuf.toC(&xc.diskbuf); err != nil {
+		return fmt.Errorf("converting field Diskbuf: %v", err)
+	}
+	if err := x.Colo.toC(&xc.colo); err != nil {
+		return fmt.Errorf("converting field Colo: %v", err)
+	}
+	if err := x.UserspaceColoProxy.toC(&xc.userspace_colo_proxy); err != nil {
+		return fmt.Errorf("converting field UserspaceColoProxy: %v", err)
+	}
+
+	return nil
+}
+
+// NewEvent returns an instance of Event initialized with defaults.
+func NewEvent(etype EventType) (*Event, error) {
+	var (
+		x  Event
+		xc C.libxl_event
+	)
+
+	C.libxl_event_init(&xc)
+	C.libxl_event_init_type(&xc, C.libxl_event_type(etype))
+	defer C.libxl_event_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *Event) fromC(xc *C.libxl_event) error {
+	if err := x.Link.fromC(&xc.link); err != nil {
+		return fmt.Errorf("converting field Link: %v", err)
+	}
+	x.Domid = Domid(xc.domid)
+	if err := x.Domuuid.fromC(&xc.domuuid); err != nil {
+		return fmt.Errorf("converting field Domuuid: %v", err)
+	}
+	x.ForUser = uint64(xc.for_user)
+	x.Type = EventType(xc._type)
+	switch x.Type {
+	case EventTypeDomainShutdown:
+		var typeDomainShutdown EventTypeUnionDomainShutdown
+		if err := typeDomainShutdown.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typeDomainShutdown: %v", err)
+		}
+		x.TypeUnion = typeDomainShutdown
+	case EventTypeDomainDeath:
+		x.TypeUnion = nil
+	case EventTypeDiskEject:
+		var typeDiskEject EventTypeUnionDiskEject
+		if err := typeDiskEject.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typeDiskEject: %v", err)
+		}
+		x.TypeUnion = typeDiskEject
+	case EventTypeOperationComplete:
+		var typeOperationComplete EventTypeUnionOperationComplete
+		if err := typeOperationComplete.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typeOperationComplete: %v", err)
+		}
+		x.TypeUnion = typeOperationComplete
+	case EventTypeDomainCreateConsoleAvailable:
+		x.TypeUnion = nil
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+
+	return nil
+}
+
+func (x *EventTypeUnionDomainShutdown) fromC(xc *C.libxl_event) error {
+	if EventType(xc._type) != EventTypeDomainShutdown {
+		return errors.New("expected union key EventTypeDomainShutdown")
+	}
+
+	tmp := (*C.libxl_event_type_union_domain_shutdown)(unsafe.Pointer(&xc.u[0]))
+	x.ShutdownReason = byte(tmp.shutdown_reason)
+	return nil
+}
+
+func (x *EventTypeUnionDiskEject) fromC(xc *C.libxl_event) error {
+	if EventType(xc._type) != EventTypeDiskEject {
+		return errors.New("expected union key EventTypeDiskEject")
+	}
+
+	tmp := (*C.libxl_event_type_union_disk_eject)(unsafe.Pointer(&xc.u[0]))
+	x.Vdev = C.GoString(tmp.vdev)
+	if err := x.Disk.fromC(&tmp.disk); err != nil {
+		return fmt.Errorf("converting field Disk: %v", err)
+	}
+	return nil
+}
+
+func (x *EventTypeUnionOperationComplete) fromC(xc *C.libxl_event) error {
+	if EventType(xc._type) != EventTypeOperationComplete {
+		return errors.New("expected union key EventTypeOperationComplete")
+	}
+
+	tmp := (*C.libxl_event_type_union_operation_complete)(unsafe.Pointer(&xc.u[0]))
+	x.Rc = int(tmp.rc)
+	return nil
+}
+
+func (x *Event) toC(xc *C.libxl_event) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_event_dispose(xc)
+		}
+	}()
+
+	if err := x.Link.toC(&xc.link); err != nil {
+		return fmt.Errorf("converting field Link: %v", err)
+	}
+	xc.domid = C.libxl_domid(x.Domid)
+	if err := x.Domuuid.toC(&xc.domuuid); err != nil {
+		return fmt.Errorf("converting field Domuuid: %v", err)
+	}
+	xc.for_user = C.uint64_t(x.ForUser)
+	xc._type = C.libxl_event_type(x.Type)
+	switch x.Type {
+	case EventTypeDomainShutdown:
+		tmp, ok := x.TypeUnion.(EventTypeUnionDomainShutdown)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var domain_shutdown C.libxl_event_type_union_domain_shutdown
+		domain_shutdown.shutdown_reason = C.uint8_t(tmp.ShutdownReason)
+		domain_shutdownBytes := C.GoBytes(unsafe.Pointer(&domain_shutdown), C.sizeof_libxl_event_type_union_domain_shutdown)
+		copy(xc.u[:], domain_shutdownBytes)
+	case EventTypeDomainDeath:
+		break
+	case EventTypeDiskEject:
+		tmp, ok := x.TypeUnion.(EventTypeUnionDiskEject)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var disk_eject C.libxl_event_type_union_disk_eject
+		if tmp.Vdev != "" {
+			disk_eject.vdev = C.CString(tmp.Vdev)
+		}
+		if err := tmp.Disk.toC(&disk_eject.disk); err != nil {
+			return fmt.Errorf("converting field Disk: %v", err)
+		}
+		disk_ejectBytes := C.GoBytes(unsafe.Pointer(&disk_eject), C.sizeof_libxl_event_type_union_disk_eject)
+		copy(xc.u[:], disk_ejectBytes)
+	case EventTypeOperationComplete:
+		tmp, ok := x.TypeUnion.(EventTypeUnionOperationComplete)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var operation_complete C.libxl_event_type_union_operation_complete
+		operation_complete.rc = C.int(tmp.Rc)
+		operation_completeBytes := C.GoBytes(unsafe.Pointer(&operation_complete), C.sizeof_libxl_event_type_union_operation_complete)
+		copy(xc.u[:], operation_completeBytes)
+	case EventTypeDomainCreateConsoleAvailable:
+		break
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+
+	return nil
+}
+
+// NewPsrCatInfo returns an instance of PsrCatInfo initialized with defaults.
+func NewPsrCatInfo() (*PsrCatInfo, error) {
+	var (
+		x  PsrCatInfo
+		xc C.libxl_psr_cat_info
+	)
+
+	C.libxl_psr_cat_info_init(&xc)
+	defer C.libxl_psr_cat_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *PsrCatInfo) fromC(xc *C.libxl_psr_cat_info) error {
+	x.Id = uint32(xc.id)
+	x.CosMax = uint32(xc.cos_max)
+	x.CbmLen = uint32(xc.cbm_len)
+	x.CdpEnabled = bool(xc.cdp_enabled)
+
+	return nil
+}
+
+func (x *PsrCatInfo) toC(xc *C.libxl_psr_cat_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_psr_cat_info_dispose(xc)
+		}
+	}()
+
+	xc.id = C.uint32_t(x.Id)
+	xc.cos_max = C.uint32_t(x.CosMax)
+	xc.cbm_len = C.uint32_t(x.CbmLen)
+	xc.cdp_enabled = C.bool(x.CdpEnabled)
+
+	return nil
+}
+
+// NewPsrHwInfo returns an instance of PsrHwInfo initialized with defaults.
+func NewPsrHwInfo(ptype PsrFeatType) (*PsrHwInfo, error) {
+	var (
+		x  PsrHwInfo
+		xc C.libxl_psr_hw_info
+	)
+
+	C.libxl_psr_hw_info_init(&xc)
+	C.libxl_psr_hw_info_init_type(&xc, C.libxl_psr_feat_type(ptype))
+	defer C.libxl_psr_hw_info_dispose(&xc)
+
+	if err := x.fromC(&xc); err != nil {
+		return nil, err
+	}
+
+	return &x, nil
+}
+
+func (x *PsrHwInfo) fromC(xc *C.libxl_psr_hw_info) error {
+	x.Id = uint32(xc.id)
+	x.Type = PsrFeatType(xc._type)
+	switch x.Type {
+	case PsrFeatTypeCat:
+		var typeCat PsrHwInfoTypeUnionCat
+		if err := typeCat.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typeCat: %v", err)
+		}
+		x.TypeUnion = typeCat
+	case PsrFeatTypeMba:
+		var typeMba PsrHwInfoTypeUnionMba
+		if err := typeMba.fromC(xc); err != nil {
+			return fmt.Errorf("converting field typeMba: %v", err)
+		}
+		x.TypeUnion = typeMba
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+
+	return nil
+}
+
+func (x *PsrHwInfoTypeUnionCat) fromC(xc *C.libxl_psr_hw_info) error {
+	if PsrFeatType(xc._type) != PsrFeatTypeCat {
+		return errors.New("expected union key PsrFeatTypeCat")
+	}
+
+	tmp := (*C.libxl_psr_hw_info_type_union_cat)(unsafe.Pointer(&xc.u[0]))
+	x.CosMax = uint32(tmp.cos_max)
+	x.CbmLen = uint32(tmp.cbm_len)
+	x.CdpEnabled = bool(tmp.cdp_enabled)
+	return nil
+}
+
+func (x *PsrHwInfoTypeUnionMba) fromC(xc *C.libxl_psr_hw_info) error {
+	if PsrFeatType(xc._type) != PsrFeatTypeMba {
+		return errors.New("expected union key PsrFeatTypeMba")
+	}
+
+	tmp := (*C.libxl_psr_hw_info_type_union_mba)(unsafe.Pointer(&xc.u[0]))
+	x.CosMax = uint32(tmp.cos_max)
+	x.ThrtlMax = uint32(tmp.thrtl_max)
+	x.Linear = bool(tmp.linear)
+	return nil
+}
+
+func (x *PsrHwInfo) toC(xc *C.libxl_psr_hw_info) (err error) {
+	defer func() {
+		if err != nil {
+			C.libxl_psr_hw_info_dispose(xc)
+		}
+	}()
+
+	xc.id = C.uint32_t(x.Id)
+	xc._type = C.libxl_psr_feat_type(x.Type)
+	switch x.Type {
+	case PsrFeatTypeCat:
+		tmp, ok := x.TypeUnion.(PsrHwInfoTypeUnionCat)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var cat C.libxl_psr_hw_info_type_union_cat
+		cat.cos_max = C.uint32_t(tmp.CosMax)
+		cat.cbm_len = C.uint32_t(tmp.CbmLen)
+		cat.cdp_enabled = C.bool(tmp.CdpEnabled)
+		catBytes := C.GoBytes(unsafe.Pointer(&cat), C.sizeof_libxl_psr_hw_info_type_union_cat)
+		copy(xc.u[:], catBytes)
+	case PsrFeatTypeMba:
+		tmp, ok := x.TypeUnion.(PsrHwInfoTypeUnionMba)
+		if !ok {
+			return errors.New("wrong type for union key type")
+		}
+		var mba C.libxl_psr_hw_info_type_union_mba
+		mba.cos_max = C.uint32_t(tmp.CosMax)
+		mba.thrtl_max = C.uint32_t(tmp.ThrtlMax)
+		mba.linear = C.bool(tmp.Linear)
+		mbaBytes := C.GoBytes(unsafe.Pointer(&mba), C.sizeof_libxl_psr_hw_info_type_union_mba)
+		copy(xc.u[:], mbaBytes)
+	default:
+		return fmt.Errorf("invalid union key '%v'", x.Type)
+	}
+
+	return nil
+}
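In place of C's keyed unions (`Event.u`, `PsrHwInfo.u`), the generated Go types pair a tag field (`Type`) with a sealed interface (`TypeUnion`): each union member is a struct implementing an unexported marker method, and `toC`/`fromC` dispatch on the tag and type-assert the member. The following self-contained sketch (hypothetical names, not part of the patch) illustrates that pattern:

```go
package main

import "fmt"

// eventType mirrors the generated tag enums (e.g. EventType).
type eventType int

const (
	eventDomainShutdown eventType = iota
	eventOperationComplete
)

// eventTypeUnion is sealed: the marker method is unexported, so only
// types declared in this package can be union members.
type eventTypeUnion interface {
	isEventTypeUnion()
}

type domainShutdown struct{ ShutdownReason byte }
type operationComplete struct{ Rc int }

func (domainShutdown) isEventTypeUnion()    {}
func (operationComplete) isEventTypeUnion() {}

type event struct {
	Type      eventType
	TypeUnion eventTypeUnion
}

// describe dispatches on the tag and type-asserts the active member,
// returning an error on a tag/member mismatch, just as the generated
// toC functions report "wrong type for union key".
func describe(e event) (string, error) {
	switch e.Type {
	case eventDomainShutdown:
		u, ok := e.TypeUnion.(domainShutdown)
		if !ok {
			return "", fmt.Errorf("wrong type for union key %v", e.Type)
		}
		return fmt.Sprintf("shutdown, reason %d", u.ShutdownReason), nil
	case eventOperationComplete:
		u, ok := e.TypeUnion.(operationComplete)
		if !ok {
			return "", fmt.Errorf("wrong type for union key %v", e.Type)
		}
		return fmt.Sprintf("operation complete, rc %d", u.Rc), nil
	}
	return "", fmt.Errorf("invalid union key %v", e.Type)
}

func main() {
	s, err := describe(event{Type: eventDomainShutdown, TypeUnion: domainShutdown{ShutdownReason: 0}})
	fmt.Println(s, err)
}
```

Sealing the interface keeps the set of union members closed at compile time, so the exhaustive tag switch in `toC` cannot be handed a member type it does not know how to marshal.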
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
new file mode 100644
index 0000000000..df68fd0e88
--- /dev/null
+++ b/tools/golang/xenlight/types.gen.go
@@ -0,0 +1,1226 @@
+// DO NOT EDIT.
+//
+// This file is generated by:
+// gengotypes.py ../../libxl/libxl_types.idl
+//
+package xenlight
+
+type Error int
+
+const (
+	ErrorNonspecific                  Error = -1
+	ErrorVersion                      Error = -2
+	ErrorFail                         Error = -3
+	ErrorNi                           Error = -4
+	ErrorNomem                        Error = -5
+	ErrorInval                        Error = -6
+	ErrorBadfail                      Error = -7
+	ErrorGuestTimedout                Error = -8
+	ErrorTimedout                     Error = -9
+	ErrorNoparavirt                   Error = -10
+	ErrorNotReady                     Error = -11
+	ErrorOseventRegFail               Error = -12
+	ErrorBufferfull                   Error = -13
+	ErrorUnknownChild                 Error = -14
+	ErrorLockFail                     Error = -15
+	ErrorJsonConfigEmpty              Error = -16
+	ErrorDeviceExists                 Error = -17
+	ErrorCheckpointDevopsDoesNotMatch Error = -18
+	ErrorCheckpointDeviceNotSupported Error = -19
+	ErrorVnumaConfigInvalid           Error = -20
+	ErrorDomainNotfound               Error = -21
+	ErrorAborted                      Error = -22
+	ErrorNotfound                     Error = -23
+	ErrorDomainDestroyed              Error = -24
+	ErrorFeatureRemoved               Error = -25
+	ErrorProtocolErrorQmp             Error = -26
+	ErrorUnknownQmpError              Error = -27
+	ErrorQmpGenericError              Error = -28
+	ErrorQmpCommandNotFound           Error = -29
+	ErrorQmpDeviceNotActive           Error = -30
+	ErrorQmpDeviceNotFound            Error = -31
+	ErrorQemuApi                      Error = -32
+)
+
+type DomainType int
+
+const (
+	DomainTypeInvalid DomainType = -1
+	DomainTypeHvm     DomainType = 1
+	DomainTypePv      DomainType = 2
+	DomainTypePvh     DomainType = 3
+)
+
+type RdmReserveStrategy int
+
+const (
+	RdmReserveStrategyIgnore RdmReserveStrategy = 0
+	RdmReserveStrategyHost   RdmReserveStrategy = 1
+)
+
+type RdmReservePolicy int
+
+const (
+	RdmReservePolicyInvalid RdmReservePolicy = -1
+	RdmReservePolicyStrict  RdmReservePolicy = 0
+	RdmReservePolicyRelaxed RdmReservePolicy = 1
+)
+
+type ChannelConnection int
+
+const (
+	ChannelConnectionUnknown ChannelConnection = 0
+	ChannelConnectionPty     ChannelConnection = 1
+	ChannelConnectionSocket  ChannelConnection = 2
+)
+
+type DeviceModelVersion int
+
+const (
+	DeviceModelVersionUnknown            DeviceModelVersion = 0
+	DeviceModelVersionQemuXenTraditional DeviceModelVersion = 1
+	DeviceModelVersionQemuXen            DeviceModelVersion = 2
+)
+
+type ConsoleType int
+
+const (
+	ConsoleTypeUnknown ConsoleType = 0
+	ConsoleTypeSerial  ConsoleType = 1
+	ConsoleTypePv      ConsoleType = 2
+	ConsoleTypeVuart   ConsoleType = 3
+)
+
+type DiskFormat int
+
+const (
+	DiskFormatUnknown DiskFormat = 0
+	DiskFormatQcow    DiskFormat = 1
+	DiskFormatQcow2   DiskFormat = 2
+	DiskFormatVhd     DiskFormat = 3
+	DiskFormatRaw     DiskFormat = 4
+	DiskFormatEmpty   DiskFormat = 5
+	DiskFormatQed     DiskFormat = 6
+)
+
+type DiskBackend int
+
+const (
+	DiskBackendUnknown DiskBackend = 0
+	DiskBackendPhy     DiskBackend = 1
+	DiskBackendTap     DiskBackend = 2
+	DiskBackendQdisk   DiskBackend = 3
+)
+
+type NicType int
+
+const (
+	NicTypeUnknown  NicType = 0
+	NicTypeVifIoemu NicType = 1
+	NicTypeVif      NicType = 2
+)
+
+type ActionOnShutdown int
+
+const (
+	ActionOnShutdownDestroy         ActionOnShutdown = 1
+	ActionOnShutdownRestart         ActionOnShutdown = 2
+	ActionOnShutdownRestartRename   ActionOnShutdown = 3
+	ActionOnShutdownPreserve        ActionOnShutdown = 4
+	ActionOnShutdownCoredumpDestroy ActionOnShutdown = 5
+	ActionOnShutdownCoredumpRestart ActionOnShutdown = 6
+	ActionOnShutdownSoftReset       ActionOnShutdown = 7
+)
+
+type Trigger int
+
+const (
+	TriggerUnknown  Trigger = 0
+	TriggerPower    Trigger = 1
+	TriggerSleep    Trigger = 2
+	TriggerNmi      Trigger = 3
+	TriggerInit     Trigger = 4
+	TriggerReset    Trigger = 5
+	TriggerS3Resume Trigger = 6
+)
+
+type TscMode int
+
+const (
+	TscModeDefault        TscMode = 0
+	TscModeAlwaysEmulate  TscMode = 1
+	TscModeNative         TscMode = 2
+	TscModeNativeParavirt TscMode = 3
+)
+
+type GfxPassthruKind int
+
+const (
+	GfxPassthruKindDefault GfxPassthruKind = 0
+	GfxPassthruKindIgd     GfxPassthruKind = 1
+)
+
+type TimerMode int
+
+const (
+	TimerModeUnknown               TimerMode = -1
+	TimerModeDelayForMissedTicks   TimerMode = 0
+	TimerModeNoDelayForMissedTicks TimerMode = 1
+	TimerModeNoMissedTicksPending  TimerMode = 2
+	TimerModeOneMissedTickPending  TimerMode = 3
+)
+
+type BiosType int
+
+const (
+	BiosTypeUnknown BiosType = 0
+	BiosTypeRombios BiosType = 1
+	BiosTypeSeabios BiosType = 2
+	BiosTypeOvmf    BiosType = 3
+)
+
+type Scheduler int
+
+const (
+	SchedulerUnknown  Scheduler = 0
+	SchedulerSedf     Scheduler = 4
+	SchedulerCredit   Scheduler = 5
+	SchedulerCredit2  Scheduler = 6
+	SchedulerArinc653 Scheduler = 7
+	SchedulerRtds     Scheduler = 8
+	SchedulerNull     Scheduler = 9
+)
+
+type ShutdownReason int
+
+const (
+	ShutdownReasonUnknown   ShutdownReason = -1
+	ShutdownReasonPoweroff  ShutdownReason = 0
+	ShutdownReasonReboot    ShutdownReason = 1
+	ShutdownReasonSuspend   ShutdownReason = 2
+	ShutdownReasonCrash     ShutdownReason = 3
+	ShutdownReasonWatchdog  ShutdownReason = 4
+	ShutdownReasonSoftReset ShutdownReason = 5
+)
+
+type VgaInterfaceType int
+
+const (
+	VgaInterfaceTypeUnknown VgaInterfaceType = 0
+	VgaInterfaceTypeCirrus  VgaInterfaceType = 1
+	VgaInterfaceTypeStd     VgaInterfaceType = 2
+	VgaInterfaceTypeNone    VgaInterfaceType = 3
+	VgaInterfaceTypeQxl     VgaInterfaceType = 4
+)
+
+type VendorDevice int
+
+const (
+	VendorDeviceNone      VendorDevice = 0
+	VendorDeviceXenserver VendorDevice = 1
+)
+
+type ViridianEnlightenment int
+
+const (
+	ViridianEnlightenmentBase                ViridianEnlightenment = 0
+	ViridianEnlightenmentFreq                ViridianEnlightenment = 1
+	ViridianEnlightenmentTimeRefCount        ViridianEnlightenment = 2
+	ViridianEnlightenmentReferenceTsc        ViridianEnlightenment = 3
+	ViridianEnlightenmentHcallRemoteTlbFlush ViridianEnlightenment = 4
+	ViridianEnlightenmentApicAssist          ViridianEnlightenment = 5
+	ViridianEnlightenmentCrashCtl            ViridianEnlightenment = 6
+	ViridianEnlightenmentSynic               ViridianEnlightenment = 7
+	ViridianEnlightenmentStimer              ViridianEnlightenment = 8
+	ViridianEnlightenmentHcallIpi            ViridianEnlightenment = 9
+)
+
+type Hdtype int
+
+const (
+	HdtypeIde  Hdtype = 1
+	HdtypeAhci Hdtype = 2
+)
+
+type CheckpointedStream int
+
+const (
+	CheckpointedStreamNone  CheckpointedStream = 0
+	CheckpointedStreamRemus CheckpointedStream = 1
+	CheckpointedStreamColo  CheckpointedStream = 2
+)
+
+type VuartType int
+
+const (
+	VuartTypeUnknown  VuartType = 0
+	VuartTypeSbsaUart VuartType = 1
+)
+
+type VkbBackend int
+
+const (
+	VkbBackendUnknown VkbBackend = 0
+	VkbBackendQemu    VkbBackend = 1
+	VkbBackendLinux   VkbBackend = 2
+)
+
+type Passthrough int
+
+const (
+	PassthroughDefault  Passthrough = 0
+	PassthroughDisabled Passthrough = 1
+	PassthroughEnabled  Passthrough = 2
+	PassthroughSyncPt   Passthrough = 3
+	PassthroughSharePt  Passthrough = 4
+)
+
+type IoportRange struct {
+	First  uint32
+	Number uint32
+}
+
+type IomemRange struct {
+	Start  uint64
+	Number uint64
+	Gfn    uint64
+}
+
+type VgaInterfaceInfo struct {
+	Kind VgaInterfaceType
+}
+
+type VncInfo struct {
+	Enable     Defbool
+	Listen     string
+	Passwd     string
+	Display    int
+	Findunused Defbool
+}
+
+type SpiceInfo struct {
+	Enable           Defbool
+	Port             int
+	TlsPort          int
+	Host             string
+	DisableTicketing Defbool
+	Passwd           string
+	AgentMouse       Defbool
+	Vdagent          Defbool
+	ClipboardSharing Defbool
+	Usbredirection   int
+	ImageCompression string
+	StreamingVideo   string
+}
+
+type SdlInfo struct {
+	Enable     Defbool
+	Opengl     Defbool
+	Display    string
+	Xauthority string
+}
+
+type Dominfo struct {
+	Uuid             Uuid
+	Domid            Domid
+	Ssidref          uint32
+	SsidLabel        string
+	Running          bool
+	Blocked          bool
+	Paused           bool
+	Shutdown         bool
+	Dying            bool
+	NeverStop        bool
+	ShutdownReason   ShutdownReason
+	OutstandingMemkb uint64
+	CurrentMemkb     uint64
+	SharedMemkb      uint64
+	PagedMemkb       uint64
+	MaxMemkb         uint64
+	CpuTime          uint64
+	VcpuMaxId        uint32
+	VcpuOnline       uint32
+	Cpupool          uint32
+	DomainType       DomainType
+}
+
+type Cpupoolinfo struct {
+	Poolid   uint32
+	PoolName string
+	Sched    Scheduler
+	NDom     uint32
+	Cpumap   Bitmap
+}
+
+type Channelinfo struct {
+	Backend         string
+	BackendId       uint32
+	Frontend        string
+	FrontendId      uint32
+	Devid           Devid
+	State           int
+	Evtch           int
+	Rref            int
+	Connection      ChannelConnection
+	ConnectionUnion channelinfoConnectionUnion
+}
+
+type channelinfoConnectionUnion interface {
+	ischannelinfoConnectionUnion()
+}
+
+type ChannelinfoConnectionUnionPty struct {
+	Path string
+}
+
+func (x ChannelinfoConnectionUnionPty) ischannelinfoConnectionUnion() {}
+
+type Vminfo struct {
+	Uuid  Uuid
+	Domid Domid
+}
+
+type VersionInfo struct {
+	XenVersionMajor int
+	XenVersionMinor int
+	XenVersionExtra string
+	Compiler        string
+	CompileBy       string
+	CompileDomain   string
+	CompileDate     string
+	Capabilities    string
+	Changeset       string
+	VirtStart       uint64
+	Pagesize        int
+	Commandline     string
+	BuildId         string
+}
+
+type DomainCreateInfo struct {
+	Type                    DomainType
+	Hap                     Defbool
+	Oos                     Defbool
+	Ssidref                 uint32
+	SsidLabel               string
+	Name                    string
+	Domid                   Domid
+	Uuid                    Uuid
+	Xsdata                  KeyValueList
+	Platformdata            KeyValueList
+	Poolid                  uint32
+	PoolName                string
+	RunHotplugScripts       Defbool
+	DriverDomain            Defbool
+	Passthrough             Passthrough
+	XendSuspendEvtchnCompat Defbool
+}
+
+type DomainRestoreParams struct {
+	CheckpointedStream int
+	StreamVersion      uint32
+	ColoProxyScript    string
+	UserspaceColoProxy Defbool
+}
+
+type SchedParams struct {
+	Vcpuid    int
+	Weight    int
+	Cap       int
+	Period    int
+	Extratime int
+	Budget    int
+}
+
+type VcpuSchedParams struct {
+	Sched Scheduler
+	Vcpus []SchedParams
+}
+
+type DomainSchedParams struct {
+	Sched     Scheduler
+	Weight    int
+	Cap       int
+	Period    int
+	Budget    int
+	Extratime int
+	Slice     int
+	Latency   int
+}
+
+type VnodeInfo struct {
+	Memkb     uint64
+	Distances []uint32
+	Pnode     uint32
+	Vcpus     Bitmap
+}
+
+type GicVersion int
+
+const (
+	GicVersionDefault GicVersion = 0
+	GicVersionV2      GicVersion = 32
+	GicVersionV3      GicVersion = 48
+)
+
+type TeeType int
+
+const (
+	TeeTypeNone  TeeType = 0
+	TeeTypeOptee TeeType = 1
+)
+
+type RdmReserve struct {
+	Strategy RdmReserveStrategy
+	Policy   RdmReservePolicy
+}
+
+type Altp2MMode int
+
+const (
+	Altp2MModeDisabled Altp2MMode = 0
+	Altp2MModeMixed    Altp2MMode = 1
+	Altp2MModeExternal Altp2MMode = 2
+	Altp2MModeLimited  Altp2MMode = 3
+)
+
+type DomainBuildInfo struct {
+	MaxVcpus              int
+	AvailVcpus            Bitmap
+	Cpumap                Bitmap
+	Nodemap               Bitmap
+	VcpuHardAffinity      []Bitmap
+	VcpuSoftAffinity      []Bitmap
+	NumaPlacement         Defbool
+	TscMode               TscMode
+	MaxMemkb              uint64
+	TargetMemkb           uint64
+	VideoMemkb            uint64
+	ShadowMemkb           uint64
+	IommuMemkb            uint64
+	RtcTimeoffset         uint32
+	ExecSsidref           uint32
+	ExecSsidLabel         string
+	Localtime             Defbool
+	DisableMigrate        Defbool
+	Cpuid                 CpuidPolicyList
+	BlkdevStart           string
+	VnumaNodes            []VnodeInfo
+	MaxGrantFrames        uint32
+	MaxMaptrackFrames     uint32
+	DeviceModelVersion    DeviceModelVersion
+	DeviceModelStubdomain Defbool
+	DeviceModel           string
+	DeviceModelSsidref    uint32
+	DeviceModelSsidLabel  string
+	DeviceModelUser       string
+	Extra                 StringList
+	ExtraPv               StringList
+	ExtraHvm              StringList
+	SchedParams           DomainSchedParams
+	Ioports               []IoportRange
+	Irqs                  []uint32
+	Iomem                 []IomemRange
+	ClaimMode             Defbool
+	EventChannels         uint32
+	Kernel                string
+	Cmdline               string
+	Ramdisk               string
+	DeviceTree            string
+	Acpi                  Defbool
+	Bootloader            string
+	BootloaderArgs        StringList
+	TimerMode             TimerMode
+	NestedHvm             Defbool
+	Apic                  Defbool
+	DmRestrict            Defbool
+	Tee                   TeeType
+	Type                  DomainType
+	TypeUnion             domainBuildInfoTypeUnion
+	ArchArm               struct {
+		GicVersion GicVersion
+		Vuart      VuartType
+	}
+	Altp2M Altp2MMode
+}
+
+type domainBuildInfoTypeUnion interface {
+	isdomainBuildInfoTypeUnion()
+}
+
+type DomainBuildInfoTypeUnionHvm struct {
+	Firmware            string
+	Bios                BiosType
+	Pae                 Defbool
+	Apic                Defbool
+	Acpi                Defbool
+	AcpiS3              Defbool
+	AcpiS4              Defbool
+	AcpiLaptopSlate     Defbool
+	Nx                  Defbool
+	Viridian            Defbool
+	ViridianEnable      Bitmap
+	ViridianDisable     Bitmap
+	Timeoffset          string
+	Hpet                Defbool
+	VptAlign            Defbool
+	MmioHoleMemkb       uint64
+	TimerMode           TimerMode
+	NestedHvm           Defbool
+	Altp2M              Defbool
+	SystemFirmware      string
+	SmbiosFirmware      string
+	AcpiFirmware        string
+	Hdtype              Hdtype
+	Nographic           Defbool
+	Vga                 VgaInterfaceInfo
+	Vnc                 VncInfo
+	Keymap              string
+	Sdl                 SdlInfo
+	Spice               SpiceInfo
+	GfxPassthru         Defbool
+	GfxPassthruKind     GfxPassthruKind
+	Serial              string
+	Boot                string
+	Usb                 Defbool
+	Usbversion          int
+	Usbdevice           string
+	VkbDevice           Defbool
+	Soundhw             string
+	XenPlatformPci      Defbool
+	UsbdeviceList       StringList
+	VendorDevice        VendorDevice
+	MsVmGenid           MsVmGenid
+	SerialList          StringList
+	Rdm                 RdmReserve
+	RdmMemBoundaryMemkb uint64
+	McaCaps             uint64
+}
+
+func (x DomainBuildInfoTypeUnionHvm) isdomainBuildInfoTypeUnion() {}
+
+type DomainBuildInfoTypeUnionPv struct {
+	Kernel         string
+	SlackMemkb     uint64
+	Bootloader     string
+	BootloaderArgs StringList
+	Cmdline        string
+	Ramdisk        string
+	Features       string
+	E820Host       Defbool
+}
+
+func (x DomainBuildInfoTypeUnionPv) isdomainBuildInfoTypeUnion() {}
+
+type DomainBuildInfoTypeUnionPvh struct {
+	Pvshim        Defbool
+	PvshimPath    string
+	PvshimCmdline string
+	PvshimExtra   string
+}
+
+func (x DomainBuildInfoTypeUnionPvh) isdomainBuildInfoTypeUnion() {}
+
+type DeviceVfb struct {
+	BackendDomid   Domid
+	BackendDomname string
+	Devid          Devid
+	Vnc            VncInfo
+	Sdl            SdlInfo
+	Keymap         string
+}
+
+type DeviceVkb struct {
+	BackendDomid           Domid
+	BackendDomname         string
+	Devid                  Devid
+	BackendType            VkbBackend
+	UniqueId               string
+	FeatureDisableKeyboard bool
+	FeatureDisablePointer  bool
+	FeatureAbsPointer      bool
+	FeatureRawPointer      bool
+	FeatureMultiTouch      bool
+	Width                  uint32
+	Height                 uint32
+	MultiTouchWidth        uint32
+	MultiTouchHeight       uint32
+	MultiTouchNumContacts  uint32
+}
+
+type DeviceDisk struct {
+	BackendDomid      Domid
+	BackendDomname    string
+	PdevPath          string
+	Vdev              string
+	Backend           DiskBackend
+	Format            DiskFormat
+	Script            string
+	Removable         int
+	Readwrite         int
+	IsCdrom           int
+	DirectIoSafe      bool
+	DiscardEnable     Defbool
+	ColoEnable        Defbool
+	ColoRestoreEnable Defbool
+	ColoHost          string
+	ColoPort          int
+	ColoExport        string
+	ActiveDisk        string
+	HiddenDisk        string
+}
+
+type DeviceNic struct {
+	BackendDomid                   Domid
+	BackendDomname                 string
+	Devid                          Devid
+	Mtu                            int
+	Model                          string
+	Mac                            Mac
+	Ip                             string
+	Bridge                         string
+	Ifname                         string
+	Script                         string
+	Nictype                        NicType
+	RateBytesPerInterval           uint64
+	RateIntervalUsecs              uint32
+	Gatewaydev                     string
+	ColoftForwarddev               string
+	ColoSockMirrorId               string
+	ColoSockMirrorIp               string
+	ColoSockMirrorPort             string
+	ColoSockComparePriInId         string
+	ColoSockComparePriInIp         string
+	ColoSockComparePriInPort       string
+	ColoSockCompareSecInId         string
+	ColoSockCompareSecInIp         string
+	ColoSockCompareSecInPort       string
+	ColoSockCompareNotifyId        string
+	ColoSockCompareNotifyIp        string
+	ColoSockCompareNotifyPort      string
+	ColoSockRedirector0Id          string
+	ColoSockRedirector0Ip          string
+	ColoSockRedirector0Port        string
+	ColoSockRedirector1Id          string
+	ColoSockRedirector1Ip          string
+	ColoSockRedirector1Port        string
+	ColoSockRedirector2Id          string
+	ColoSockRedirector2Ip          string
+	ColoSockRedirector2Port        string
+	ColoFilterMirrorQueue          string
+	ColoFilterMirrorOutdev         string
+	ColoFilterRedirector0Queue     string
+	ColoFilterRedirector0Indev     string
+	ColoFilterRedirector0Outdev    string
+	ColoFilterRedirector1Queue     string
+	ColoFilterRedirector1Indev     string
+	ColoFilterRedirector1Outdev    string
+	ColoComparePriIn               string
+	ColoCompareSecIn               string
+	ColoCompareOut                 string
+	ColoCompareNotifyDev           string
+	ColoSockSecRedirector0Id       string
+	ColoSockSecRedirector0Ip       string
+	ColoSockSecRedirector0Port     string
+	ColoSockSecRedirector1Id       string
+	ColoSockSecRedirector1Ip       string
+	ColoSockSecRedirector1Port     string
+	ColoFilterSecRedirector0Queue  string
+	ColoFilterSecRedirector0Indev  string
+	ColoFilterSecRedirector0Outdev string
+	ColoFilterSecRedirector1Queue  string
+	ColoFilterSecRedirector1Indev  string
+	ColoFilterSecRedirector1Outdev string
+	ColoFilterSecRewriter0Queue    string
+	ColoCheckpointHost             string
+	ColoCheckpointPort             string
+}
+
+type DevicePci struct {
+	Func         byte
+	Dev          byte
+	Bus          byte
+	Domain       int
+	Vdevfn       uint32
+	VfuncMask    uint32
+	Msitranslate bool
+	PowerMgmt    bool
+	Permissive   bool
+	Seize        bool
+	RdmPolicy    RdmReservePolicy
+}
+
+type DeviceRdm struct {
+	Start  uint64
+	Size   uint64
+	Policy RdmReservePolicy
+}
+
+type UsbctrlType int
+
+const (
+	UsbctrlTypeAuto        UsbctrlType = 0
+	UsbctrlTypePv          UsbctrlType = 1
+	UsbctrlTypeDevicemodel UsbctrlType = 2
+	UsbctrlTypeQusb        UsbctrlType = 3
+)
+
+type UsbdevType int
+
+const (
+	UsbdevTypeHostdev UsbdevType = 1
+)
+
+type DeviceUsbctrl struct {
+	Type           UsbctrlType
+	Devid          Devid
+	Version        int
+	Ports          int
+	BackendDomid   Domid
+	BackendDomname string
+}
+
+type DeviceUsbdev struct {
+	Ctrl      Devid
+	Port      int
+	Type      UsbdevType
+	TypeUnion deviceUsbdevTypeUnion
+}
+
+type deviceUsbdevTypeUnion interface {
+	isdeviceUsbdevTypeUnion()
+}
+
+type DeviceUsbdevTypeUnionHostdev struct {
+	Hostbus  byte
+	Hostaddr byte
+}
+
+func (x DeviceUsbdevTypeUnionHostdev) isdeviceUsbdevTypeUnion() {}
+
+type DeviceDtdev struct {
+	Path string
+}
+
+type DeviceVtpm struct {
+	BackendDomid   Domid
+	BackendDomname string
+	Devid          Devid
+	Uuid           Uuid
+}
+
+type DeviceP9 struct {
+	BackendDomid   Domid
+	BackendDomname string
+	Tag            string
+	Path           string
+	SecurityModel  string
+	Devid          Devid
+}
+
+type DevicePvcallsif struct {
+	BackendDomid   Domid
+	BackendDomname string
+	Devid          Devid
+}
+
+type DeviceChannel struct {
+	BackendDomid    Domid
+	BackendDomname  string
+	Devid           Devid
+	Name            string
+	Connection      ChannelConnection
+	ConnectionUnion deviceChannelConnectionUnion
+}
+
+type deviceChannelConnectionUnion interface {
+	isdeviceChannelConnectionUnion()
+}
+
+type DeviceChannelConnectionUnionSocket struct {
+	Path string
+}
+
+func (x DeviceChannelConnectionUnionSocket) isdeviceChannelConnectionUnion() {}
+
+type ConnectorParam struct {
+	UniqueId string
+	Width    uint32
+	Height   uint32
+}
+
+type DeviceVdispl struct {
+	BackendDomid   Domid
+	BackendDomname string
+	Devid          Devid
+	BeAlloc        bool
+	Connectors     []ConnectorParam
+}
+
+type VsndPcmFormat int
+
+const (
+	VsndPcmFormatS8               VsndPcmFormat = 1
+	VsndPcmFormatU8               VsndPcmFormat = 2
+	VsndPcmFormatS16Le            VsndPcmFormat = 3
+	VsndPcmFormatS16Be            VsndPcmFormat = 4
+	VsndPcmFormatU16Le            VsndPcmFormat = 5
+	VsndPcmFormatU16Be            VsndPcmFormat = 6
+	VsndPcmFormatS24Le            VsndPcmFormat = 7
+	VsndPcmFormatS24Be            VsndPcmFormat = 8
+	VsndPcmFormatU24Le            VsndPcmFormat = 9
+	VsndPcmFormatU24Be            VsndPcmFormat = 10
+	VsndPcmFormatS32Le            VsndPcmFormat = 11
+	VsndPcmFormatS32Be            VsndPcmFormat = 12
+	VsndPcmFormatU32Le            VsndPcmFormat = 13
+	VsndPcmFormatU32Be            VsndPcmFormat = 14
+	VsndPcmFormatF32Le            VsndPcmFormat = 15
+	VsndPcmFormatF32Be            VsndPcmFormat = 16
+	VsndPcmFormatF64Le            VsndPcmFormat = 17
+	VsndPcmFormatF64Be            VsndPcmFormat = 18
+	VsndPcmFormatIec958SubframeLe VsndPcmFormat = 19
+	VsndPcmFormatIec958SubframeBe VsndPcmFormat = 20
+	VsndPcmFormatMuLaw            VsndPcmFormat = 21
+	VsndPcmFormatALaw             VsndPcmFormat = 22
+	VsndPcmFormatImaAdpcm         VsndPcmFormat = 23
+	VsndPcmFormatMpeg             VsndPcmFormat = 24
+	VsndPcmFormatGsm              VsndPcmFormat = 25
+)
+
+type VsndParams struct {
+	SampleRates   []uint32
+	SampleFormats []VsndPcmFormat
+	ChannelsMin   uint32
+	ChannelsMax   uint32
+	BufferSize    uint32
+}
+
+type VsndStreamType int
+
+const (
+	VsndStreamTypeP VsndStreamType = 1
+	VsndStreamTypeC VsndStreamType = 2
+)
+
+type VsndStream struct {
+	UniqueId string
+	Type     VsndStreamType
+	Params   VsndParams
+}
+
+type VsndPcm struct {
+	Name    string
+	Params  VsndParams
+	Streams []VsndStream
+}
+
+type DeviceVsnd struct {
+	BackendDomid   Domid
+	BackendDomname string
+	Devid          Devid
+	ShortName      string
+	LongName       string
+	Params         VsndParams
+	Pcms           []VsndPcm
+}
+
+type DomainConfig struct {
+	CInfo       DomainCreateInfo
+	BInfo       DomainBuildInfo
+	Disks       []DeviceDisk
+	Nics        []DeviceNic
+	Pcidevs     []DevicePci
+	Rdms        []DeviceRdm
+	Dtdevs      []DeviceDtdev
+	Vfbs        []DeviceVfb
+	Vkbs        []DeviceVkb
+	Vtpms       []DeviceVtpm
+	P9S         []DeviceP9
+	Pvcallsifs  []DevicePvcallsif
+	Vdispls     []DeviceVdispl
+	Vsnds       []DeviceVsnd
+	Channels    []DeviceChannel
+	Usbctrls    []DeviceUsbctrl
+	Usbdevs     []DeviceUsbdev
+	OnPoweroff  ActionOnShutdown
+	OnReboot    ActionOnShutdown
+	OnWatchdog  ActionOnShutdown
+	OnCrash     ActionOnShutdown
+	OnSoftReset ActionOnShutdown
+}
+
+type Diskinfo struct {
+	Backend    string
+	BackendId  uint32
+	Frontend   string
+	FrontendId uint32
+	Devid      Devid
+	State      int
+	Evtch      int
+	Rref       int
+}
+
+type Nicinfo struct {
+	Backend    string
+	BackendId  uint32
+	Frontend   string
+	FrontendId uint32
+	Devid      Devid
+	State      int
+	Evtch      int
+	RrefTx     int
+	RrefRx     int
+}
+
+type Vtpminfo struct {
+	Backend    string
+	BackendId  uint32
+	Frontend   string
+	FrontendId uint32
+	Devid      Devid
+	State      int
+	Evtch      int
+	Rref       int
+	Uuid       Uuid
+}
+
+type Usbctrlinfo struct {
+	Type       UsbctrlType
+	Devid      Devid
+	Version    int
+	Ports      int
+	Backend    string
+	BackendId  uint32
+	Frontend   string
+	FrontendId uint32
+	State      int
+	Evtch      int
+	RefUrb     int
+	RefConn    int
+}
+
+type Vcpuinfo struct {
+	Vcpuid     uint32
+	Cpu        uint32
+	Online     bool
+	Blocked    bool
+	Running    bool
+	VcpuTime   uint64
+	Cpumap     Bitmap
+	CpumapSoft Bitmap
+}
+
+type Physinfo struct {
+	ThreadsPerCore     uint32
+	CoresPerSocket     uint32
+	MaxCpuId           uint32
+	NrCpus             uint32
+	CpuKhz             uint32
+	TotalPages         uint64
+	FreePages          uint64
+	ScrubPages         uint64
+	OutstandingPages   uint64
+	SharingFreedPages  uint64
+	SharingUsedFrames  uint64
+	MaxPossibleMfn     uint64
+	NrNodes            uint32
+	HwCap              Hwcap
+	CapHvm             bool
+	CapPv              bool
+	CapHvmDirectio     bool
+	CapHap             bool
+	CapShadow          bool
+	CapIommuHapPtShare bool
+}
+
+type Connectorinfo struct {
+	UniqueId string
+	Width    uint32
+	Height   uint32
+	ReqEvtch int
+	ReqRref  int
+	EvtEvtch int
+	EvtRref  int
+}
+
+type Vdisplinfo struct {
+	Backend    string
+	BackendId  uint32
+	Frontend   string
+	FrontendId uint32
+	Devid      Devid
+	State      int
+	BeAlloc    bool
+	Connectors []Connectorinfo
+}
+
+type Streaminfo struct {
+	ReqEvtch int
+	ReqRref  int
+}
+
+type Pcminfo struct {
+	Streams []Streaminfo
+}
+
+type Vsndinfo struct {
+	Backend    string
+	BackendId  uint32
+	Frontend   string
+	FrontendId uint32
+	Devid      Devid
+	State      int
+	Pcms       []Pcminfo
+}
+
+type Vkbinfo struct {
+	Backend    string
+	BackendId  uint32
+	Frontend   string
+	FrontendId uint32
+	Devid      Devid
+	State      int
+	Evtch      int
+	Rref       int
+}
+
+type Numainfo struct {
+	Size  uint64
+	Free  uint64
+	Dists []uint32
+}
+
+type Cputopology struct {
+	Core   uint32
+	Socket uint32
+	Node   uint32
+}
+
+type Pcitopology struct {
+	Seg   uint16
+	Bus   byte
+	Devfn byte
+	Node  uint32
+}
+
+type SchedCreditParams struct {
+	TsliceMs        int
+	RatelimitUs     int
+	VcpuMigrDelayUs int
+}
+
+type SchedCredit2Params struct {
+	RatelimitUs int
+}
+
+type DomainRemusInfo struct {
+	Interval           int
+	AllowUnsafe        Defbool
+	Blackhole          Defbool
+	Compression        Defbool
+	Netbuf             Defbool
+	Netbufscript       string
+	Diskbuf            Defbool
+	Colo               Defbool
+	UserspaceColoProxy Defbool
+}
+
+type EventType int
+
+const (
+	EventTypeDomainShutdown               EventType = 1
+	EventTypeDomainDeath                  EventType = 2
+	EventTypeDiskEject                    EventType = 3
+	EventTypeOperationComplete            EventType = 4
+	EventTypeDomainCreateConsoleAvailable EventType = 5
+)
+
+type Event struct {
+	Link      EvLink
+	Domid     Domid
+	Domuuid   Uuid
+	ForUser   uint64
+	Type      EventType
+	TypeUnion eventTypeUnion
+}
+
+type eventTypeUnion interface {
+	iseventTypeUnion()
+}
+
+type EventTypeUnionDomainShutdown struct {
+	ShutdownReason byte
+}
+
+func (x EventTypeUnionDomainShutdown) iseventTypeUnion() {}
+
+type EventTypeUnionDiskEject struct {
+	Vdev string
+	Disk DeviceDisk
+}
+
+func (x EventTypeUnionDiskEject) iseventTypeUnion() {}
+
+type EventTypeUnionOperationComplete struct {
+	Rc int
+}
+
+func (x EventTypeUnionOperationComplete) iseventTypeUnion() {}
+
+type PsrCmtType int
+
+const (
+	PsrCmtTypeCacheOccupancy PsrCmtType = 1
+	PsrCmtTypeTotalMemCount  PsrCmtType = 2
+	PsrCmtTypeLocalMemCount  PsrCmtType = 3
+)
+
+type PsrCbmType int
+
+const (
+	PsrCbmTypeUnknown   PsrCbmType = 0
+	PsrCbmTypeL3Cbm     PsrCbmType = 1
+	PsrCbmTypeL3CbmCode PsrCbmType = 2
+	PsrCbmTypeL3CbmData PsrCbmType = 3
+	PsrCbmTypeL2Cbm     PsrCbmType = 4
+	PsrCbmTypeMbaThrtl  PsrCbmType = 5
+)
+
+type PsrCatInfo struct {
+	Id         uint32
+	CosMax     uint32
+	CbmLen     uint32
+	CdpEnabled bool
+}
+
+type PsrFeatType int
+
+const (
+	PsrFeatTypeCat PsrFeatType = 1
+	PsrFeatTypeMba PsrFeatType = 2
+)
+
+type PsrHwInfo struct {
+	Id        uint32
+	Type      PsrFeatType
+	TypeUnion psrHwInfoTypeUnion
+}
+
+type psrHwInfoTypeUnion interface {
+	ispsrHwInfoTypeUnion()
+}
+
+type PsrHwInfoTypeUnionCat struct {
+	CosMax     uint32
+	CbmLen     uint32
+	CdpEnabled bool
+}
+
+func (x PsrHwInfoTypeUnionCat) ispsrHwInfoTypeUnion() {}
+
+type PsrHwInfoTypeUnionMba struct {
+	CosMax   uint32
+	ThrtlMax uint32
+	Linear   bool
+}
+
+func (x PsrHwInfoTypeUnionMba) ispsrHwInfoTypeUnion() {}
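For reviewers unfamiliar with how the generated union types above are meant to be consumed, here is a minimal standalone sketch of the pattern. It uses simplified stand-in declarations mirroring `DeviceUsbdev` (not the generated file itself), and the `describe` helper is purely illustrative:

```go
package main

import "fmt"

// Simplified stand-ins mirroring the generated keyed-union pattern.
type UsbdevType int

const UsbdevTypeHostdev UsbdevType = 1

// The unexported interface with an unexported marker method means only
// types declared in this package can be stored in the union field.
type deviceUsbdevTypeUnion interface {
	isdeviceUsbdevTypeUnion()
}

type DeviceUsbdevTypeUnionHostdev struct {
	Hostbus  byte
	Hostaddr byte
}

func (x DeviceUsbdevTypeUnionHostdev) isdeviceUsbdevTypeUnion() {}

type DeviceUsbdev struct {
	Type      UsbdevType
	TypeUnion deviceUsbdevTypeUnion
}

// describe shows the intended consumption pattern: a type switch
// recovers the concrete variant stored in the union field.
func describe(d DeviceUsbdev) string {
	switch u := d.TypeUnion.(type) {
	case DeviceUsbdevTypeUnionHostdev:
		return fmt.Sprintf("hostdev bus %d addr %d", u.Hostbus, u.Hostaddr)
	default:
		return "unknown"
	}
}

func main() {
	d := DeviceUsbdev{
		Type:      UsbdevTypeHostdev,
		TypeUnion: DeviceUsbdevTypeUnionHostdev{Hostbus: 1, Hostaddr: 4},
	}
	fmt.Println(describe(d))
}
```

The unexported marker method gives the generated code a closed set of variants, analogous to libxl's keyed unions in C.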
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 00:58:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 00:58:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYfjB-0006JR-1i; Wed, 13 May 2020 00:58:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYfj9-0006Id-NO
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 00:58:27 +0000
X-Inumbo-ID: d4c1b0ba-94b4-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x832.google.com (unknown [2607:f8b0:4864:20::832])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d4c1b0ba-94b4-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 00:58:17 +0000 (UTC)
Received: by mail-qt1-x832.google.com with SMTP id x8so12777647qtr.2
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 17:58:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:in-reply-to:references:content-transfer-encoding;
 bh=1brDpjdUwGMbVSq6tuF5O7Jze20hQbuXMTaGnOz/sxI=;
 b=XD+Hf7/zflbEuxrNjcGjwwix11KTscuVE9nkRfu+8VCzg6CutgOmkc9Zhr+Z8erA70
 6yiizRJwQSdJ5wiziDMYz21cylPU1OUg9bmvdG573SqyfJLJlivULyUVF56CXbDwQvI/
 xh2bFZGlYf3lmqDVWwvjFJ++lSaIr3ocks2CG8DLtFfZ8USxh+Fqp+YU18qIqQettKP+
 n/vvHQPOdLCmswDwCBQ5SyVSuwunbdWlqG392bvWk1RYsc9rT6lTK7T7tosAgVy4qDs9
 wn6XyYzcTS6VnYYptEzaoF0vKV0nsjvyVgTZFOi6ex4XYls36ji/JDgmc8UGAsr60i2p
 /g/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:in-reply-to:references
 :content-transfer-encoding;
 bh=1brDpjdUwGMbVSq6tuF5O7Jze20hQbuXMTaGnOz/sxI=;
 b=idr/KFTybXg1tkFv6tYtqWuDK4jTEZI8LDjT3/KOSw6FTRdsQxHoa9OGbTCs/F+0PJ
 rfjiPM17Sz8ayMI1Z7UxjTJLPXQI9HkoUnf6xWYK0/tFuxRHC3suHhN1Z1DdIUEyV/oB
 HeKtpBNXaAC8O5sc4zK0mQpQ9M/nUi7GAHLuhgY/wMIHkjooDpvwlNVoJEycOAizn7oT
 3JVbK7wTZaEG12I1luyDdJmfGURmUpuRhlgrqGb/YYFQeG72yavd9YkuVFD8VX5eE98Y
 OSBM3ArbQ3Z6B881hsWcoB3b1o/TNu18jc90DFFmb3KF8KPB6FOnc24ZYu7Q+CEzTTkJ
 lryQ==
X-Gm-Message-State: AGi0PuYt29qA2ElsJNomh5DnSRCrInddGRmlzizu0gydcc1dgwl4Dn7l
 117aW5Nlc4mHWnteU7BEfguIv8aaczM=
X-Google-Smtp-Source: APiQypLmY2UikkTHR9+BQw2L5vEGBMeA6STde8Z5tyapQpQljpKFtb9u/T8A0nngKy1Ge2qV36mz2Q==
X-Received: by 2002:ac8:6b44:: with SMTP id x4mr14047305qts.353.1589331495390; 
 Tue, 12 May 2020 17:58:15 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id 62sm12400828qkh.113.2020.05.12.17.58.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 12 May 2020 17:58:14 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 3/3] golang/xenlight: add necessary module/package
 documentation
Date: Tue, 12 May 2020 20:58:07 -0400
Message-Id: <a42395202aef85d983dd9db361c366a6d03e313f.1589330383.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1589330383.git.rosbrookn@ainfosec.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
MIME-Version: 1.0
In-Reply-To: <cover.1589330383.git.rosbrookn@ainfosec.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a README and package comment giving a brief overview of the package.
These also help pkg.go.dev generate better documentation.
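For context, a Go package comment is simply a doc comment placed immediately before the `package` clause, which godoc and pkg.go.dev render as the package overview. A minimal sketch (illustrative wording only, not the exact text this patch adds to xenlight.go):

```go
// Package xenlight provides Go bindings to Xen's libxl C library.
//
// (Illustrative sketch only; see the diff below for the actual change.)
package xenlight
```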

Also, add a copy of the LGPL (the same license used by libxl) to
tools/golang/xenlight. This is required for the package to be shown
on pkg.go.dev and added to the default module proxy, proxy.golang.org.

Finally, add an entry for the xenlight package to SUPPORT.md.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
Changes in v2:
 - Use LGPL instead of GPL for license.
 - Change some wording in README for clarity.
 - Add entry to SUPPORT.md for xenlight package.
---
 SUPPORT.md                        |   6 +
 tools/golang/xenlight/LICENSE     | 198 ++++++++++++++++++++++++++++++
 tools/golang/xenlight/README.md   |  17 +++
 tools/golang/xenlight/xenlight.go |   2 +
 4 files changed, 223 insertions(+)
 create mode 100644 tools/golang/xenlight/LICENSE
 create mode 100644 tools/golang/xenlight/README.md

diff --git a/SUPPORT.md b/SUPPORT.md
index 7270c9b021..e3a366fd56 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -147,6 +147,12 @@ Output of information in machine-parseable JSON format
 
     Status: Supported
 
+### xenlight Go package
+
+Go (golang) bindings for libxl
+
+    Status: Experimental
+
 ## Toolstack/3rd party
 
 ### libvirt driver for xl
diff --git a/tools/golang/xenlight/LICENSE b/tools/golang/xenlight/LICENSE
new file mode 100644
index 0000000000..d4d1f17688
--- /dev/null
+++ b/tools/golang/xenlight/LICENSE
@@ -0,0 +1,198 @@
+This repository is distributed under the terms of the GNU Lesser General
+Public License version 2.1 (included below).
+
+As a special exception to the GNU Lesser General Public License, you
+may link, statically or dynamically, a "work that uses the Library"
+with a publicly distributed version of the Library to produce an
+executable file containing portions of the Library, and distribute
+that executable file under terms of your choice, without any of the
+additional requirements listed in clause 6 of the GNU Lesser General
+Public License.  By "a publicly distributed version of the Library",
+we mean either the unmodified Library as distributed, or a
+modified version of the Library that is distributed under the
+conditions defined in clause 3 of the GNU Library General Public
+License.  This exception does not however invalidate any other reasons
+why the executable file might be covered by the GNU Lesser General
+Public License.
+
+------------
+
+GNU LESSER GENERAL PUBLIC LICENSE
+Version 2.1, February 1999
+
+
+Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+Everyone is permitted to copy and distribute verbatim copies
+of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL.  It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+Preamble
+The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.
+
+This license, the Lesser General Public License, applies to some specially designated software packages--typically libraries--of the Free Software Foundation and other authors who decide to use it. You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below.
+
+When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things.
+
+To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it.
+
+For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights.
+
+We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library.
+
+To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others.
+
+Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder. Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license.
+
+Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs.
+
+When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library.
+
+We call this license the "Lesser" General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances.
+
+For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License.
+
+In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.
+
+Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library.
+
+The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, whereas the latter must be combined with the library in order to run.
+
+
+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+0. This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called "this License"). Each licensee is addressed as "you".
+
+A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables.
+
+The "Library", below, refers to any such software library or work which has been distributed under these terms. A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".)
+
+"Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library.
+
+Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does.
+
+1. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library.
+
+You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
+
+2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
+
+
+a) The modified work must itself be a software library.
+b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change.
+c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License.
+d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful.
+(For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
+
+3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices.
+
+Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy.
+
+This option is useful when you wish to copy part of the code of the Library into a program that is not a library.
+
+4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange.
+
+If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code.
+
+5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License.
+
+However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables.
+
+When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law.
+
+If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.)
+
+Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself.
+
+6. As an exception to the Sections above, you may also combine or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications.
+
+You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things:
+
+
+a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.)
+b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with.
+c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution.
+d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place.
+e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy.
+For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.
+
+It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute.
+
+7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things:
+
+
+a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above.
+b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work.
+8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
+
+9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it.
+
+10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties with this License.
+
+11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances.
+
+It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.
+
+This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
+
+12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
+
+13. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation.
+
+14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.
+
+NO WARRANTY
+
+15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+
+
+END OF TERMS AND CONDITIONS
+How to Apply These Terms to Your New Libraries
+If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License).
+
+To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
+
+
+one line to give the library's name and an idea of what it does.
+Copyright (C) year  name of author
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; If not, see <http://www.gnu.org/licenses/>.
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. Here is a sample; alter the names:
+
+
+Yoyodyne, Inc., hereby disclaims all copyright interest in
+the library `Frob' (a library for tweaking knobs) written
+by James Random Hacker.
+
+signature of Ty Coon, 1 April 1990
+Ty Coon, President of Vice
+
+That's all there is to it!
diff --git a/tools/golang/xenlight/README.md b/tools/golang/xenlight/README.md
new file mode 100644
index 0000000000..046f86ab3e
--- /dev/null
+++ b/tools/golang/xenlight/README.md
@@ -0,0 +1,17 @@
+# xenlight
+
+## About
+
+The xenlight package provides Go bindings to Xen's libxl C library via cgo. The package is currently in an unstable "experimental" state. This means the package is ready for initial use and evaluation, but is not yet fully functional. Namely, only a subset of libxl's API is implemented, and breaking changes may occur in future package versions.
+
+Much of the package is generated using the libxl IDL. Changes to the generated code can be made by modifying `tools/golang/xenlight/gengotypes.py` in the xen.git tree.
+
+## Getting Started
+
+```go
+import (
+        "xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight"
+)
+```
+
+The module is not yet tagged independently of xen.git; if you don’t specify the version, you’ll get the most recent development version, which is probably not what you want.  A better option would be to specify a Xen release tag; for instance: `go get xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight@RELEASE-4.14.0`.
diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 6b4f492550..3eaa5a3d63 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -14,6 +14,8 @@
  * You should have received a copy of the GNU Lesser General Public
  * License along with this library; If not, see <http://www.gnu.org/licenses/>.
  */
+
+// Package xenlight provides bindings to Xen's libxl C library.
 package xenlight
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 02:28:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 02:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYh7Z-0006rw-RY; Wed, 13 May 2020 02:27:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WltO=63=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYh7Y-0006rr-AP
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 02:27:44 +0000
X-Inumbo-ID: 52c1e816-94c1-11ea-9887-bc764e2007e4
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52c1e816-94c1-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 02:27:43 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id a4so12255244lfh.12
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 19:27:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=VbFr1/1CMDbdIRzckOMwfDFKihc0fhWTkKaWoFSjiV4=;
 b=PedfiEa1+y5MEmJ4mgTmTNy0cneAUsmmn4nAKzyhTNXxfRV/f7r8Mlx7k3IACyOYjr
 HVtjtwOieHlVgaIYZQ5Nnd4DG8JnMdbWv94zhO1SvDysBAv0Fe6/tbHQVKL8enuaLPkN
 vfyQdCAFAE3E5hwi27rqagYhYPkeCdOE3SXkSyXGEHhV+AqPMsEz2Vel4qMwN6GQDKgE
 YTXnFkgVQ27PGdAC6LU7z79UOZlg84N3aYAuSG2ytRFIUYfY3/yAM/9bKPQwxyQ7Nzv4
 XTonFk91qNZ1da2qhl1IWDl9CsMIO2QXH79r3rhdOtENxlzP2+02HLzyS/CvUVim1bHT
 HGhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=VbFr1/1CMDbdIRzckOMwfDFKihc0fhWTkKaWoFSjiV4=;
 b=iljccZmSlvIh/nXzEG2zJevbht50sFw8ZyjwVyZnzv1Z3vodVGkc+3s3n3ZHUY+B4b
 cgkRvphVzPbRq65iJnOaqZWVMwy7rVlKjNtv6pZpzigS0ZXqsR/HhvvMXXVD+SXJkl1H
 5rWyLCCCFmT2KbkpAmRnXTDtSo3VqOra803uvKx9cQvt+cMMadthvQwKZCdFYmjRzmDi
 E1u/xbBaJkFAhcMhYbksuLEN+dc8Mwf0hc65ZPXHcxvK+g0KzI4TZ8J9J+yyh70Y1PFH
 PiqApm0O0rimaM2QNYqtZ318ayjDOlqQFiW38tUTX1M1atDA4fiCCGMAlfPNuDPZ/f9f
 EP4A==
X-Gm-Message-State: AOAM531s2fFqWe9K0u7bZdqv/AIfN/RyX/wNwSIKuIlg+rcy912r2S2/
 6yL2UewzpBKfxXh8dku1DDNe0nqYXqTZ49f+ClM=
X-Google-Smtp-Source: ABdhPJy+lMlAjOrCt3+JbkqTi/DIl2lwNtkJtXvQ0LfhsJwEWn9RfvFqJjLRpXLho2ThXXeERZkQ+PBYVvwFSfPqVa0=
X-Received: by 2002:ac2:4c3b:: with SMTP id u27mr16187841lfq.212.1589336861936; 
 Tue, 12 May 2020 19:27:41 -0700 (PDT)
MIME-Version: 1.0
References: <20200512191108.6461-1-andrew.cooper3@citrix.com>
In-Reply-To: <20200512191108.6461-1-andrew.cooper3@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 12 May 2020 22:27:30 -0400
Message-ID: <CAKf6xpt3hOM+y_w1s99phu8exHE+xyAsYM=6qHFqpD9mY_U5AQ@mail.gmail.com>
Subject: Re: [PATCH] x86/build32: Discard all orphaned sections
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefan Bader <stefan.bader@canonical.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 3:11 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> Linkers may put orphaned sections ahead of .text, which breaks the calling
> requirements.  A concrete example is Ubuntu's GCC-9 default of enabling
> -fcf-protection which causes us to try and execute .note.gnu.properties during
> Xen's boot.
>
> Put .got.plt in its own section as it specifically needs preserving from the
> linker's point of view, and discard everything else.  This will hopefully be
> more robust to other unexpected toolchain properties.
>
> Fixes boot from an Ubuntu build of Xen.
>
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Tested-by: Jason Andryuk <jandryuk@gmail.com>

Thanks


From xen-devel-bounces@lists.xenproject.org Wed May 13 02:35:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 02:35:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYhFQ-0007ho-Lu; Wed, 13 May 2020 02:35:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WltO=63=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYhFO-0007hj-TV
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 02:35:50 +0000
X-Inumbo-ID: 75035d46-94c2-11ea-ae69-bc764e2007e4
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75035d46-94c2-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 02:35:50 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id a9so12267959lfb.8
 for <xen-devel@lists.xenproject.org>; Tue, 12 May 2020 19:35:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=E9N4W2DwvaFGv1eRMFSJ1p6aZv3XUhtfCnoLChztmVA=;
 b=Ifk0J4Xmf2rqOyNut+l9VVX80YJg2wQ5sphfC0LxSYuXdTLYlJcoxKdx3Y+hlm/Qoa
 xVrwuYwUwvG2BZGFyZpR8wSMGB/mHlsCOwBe3mNzYoPf0h/jssE9VLhKEaGhripYcdq6
 ZGOACzEDqsNverHqjLhPIsJbhiO0+GrB2mVqB7rtIkGGH4M+F9MOYoy6pTDIpNktjHKk
 1457IrfBxMYUtQu+drKkQEf9dWEETSEW+rBGkZdorb8Kjihzrkf8mjKa+vTdnGqJa27X
 k02lISbThujYMBuBHHsjn/5/aSIFePHF0cbMrCx8DODvw2hvINXs0wKv53uQtYXQUc4B
 nRMw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=E9N4W2DwvaFGv1eRMFSJ1p6aZv3XUhtfCnoLChztmVA=;
 b=B4zReaj4kSA9DJYPEyE36fx+dpsyBZuwFqB2dyDq0XLQoExm4Gesuhzr7Ffto/vgRO
 6ngxn8uVultBsOoOlGxz/ruK8+1Lw4KBbDYgd9E0TM9v3eKiDDPbi7KL0/pJbd2isDEO
 tgyuelNpdMU9KXB7aiuSdwS1tT17CU4nSBDCjPVpb8yw8YzWJSk7ZgbrkE7Nh+Uuhp/B
 NARyLB4rgoTdhJ0L8/KaTSdBBwDrluhXupk26wIjEBhlr1zMm8h6sCR63tilruRIiNZW
 +MHWq+taFuZp+ZxEDE18ktf/qkmvU7WIHwC0BFQFivb7XBa1EWDz+mXmRT5sV+9J7kOH
 Sppg==
X-Gm-Message-State: AOAM532Tx5XrMftimEvgu/MQh98RKBIrAd/QTUCt01NukyXkClMBhUDa
 Ptv+tNK3T2zVMg/mtBNahsPkQx7utNIM+JGC6o8=
X-Google-Smtp-Source: ABdhPJxs0TdijdGhSlbSI5fXuLNpbmbV92urJRIUVRWarQ8/4SvmZiU6YxqrO2jP1zqhMquADG0OG0qPwdqw7EmtpZ0=
X-Received: by 2002:ac2:4c3b:: with SMTP id u27mr16203886lfq.212.1589337349132; 
 Tue, 12 May 2020 19:35:49 -0700 (PDT)
MIME-Version: 1.0
References: <20200512191116.6851-1-andrew.cooper3@citrix.com>
In-Reply-To: <20200512191116.6851-1-andrew.cooper3@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 12 May 2020 22:35:38 -0400
Message-ID: <CAKf6xpt-WLVOTaca_FncB4XX0PQ2ZbP6GFWQjinAYex=6mptRA@mail.gmail.com>
Subject: Re: [PATCH] x86/build: Unilaterally disable -fcf-protection
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefan Bader <stefan.bader@canonical.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 3:11 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> See comment for details.  Works around a GCC-9 bug which breaks the build on
> Ubuntu.
>
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Tested-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

> diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
> index 2a51553edb..93e30e4bea 100644
> --- a/xen/arch/x86/arch.mk
> +++ b/xen/arch/x86/arch.mk
> @@ -67,6 +67,15 @@ CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch=thunk-extern
>  CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch-register
>  CFLAGS-$(CONFIG_INDIRECT_THUNK) += -fno-jump-tables
>
> +# Xen doesn't support CET-IBT yet.  At a minimum, logic is required to
> +# enable it for supervisor use, but the Livepatch functionality needs
> +# to learn not to overwrite ENDBR64 instructions.

Is the problem that existing functions start with ENDBR64, but the
livepatch overwrites it with a "real" instruction?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 13 03:26:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 03:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYi1x-0003O4-HD; Wed, 13 May 2020 03:26:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYi1w-0003Nz-KF
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 03:26:00 +0000
X-Inumbo-ID: 73ddd552-94c9-11ea-a32d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 73ddd552-94c9-11ea-a32d-12813bfff9fa;
 Wed, 13 May 2020 03:25:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9sNziLhxj8nr7Rcvw33DcqLhDXwvR5l/ByZj8olgza0=; b=OGRHRBMsAFBf68mevNtmPvE36
 DqJFoC/aUPPMo+eRt4nClQ30Ochv3UlfzeHowTeKzNgB5eWSEacM3tgIbH3OZJ8r6MzgI1orTWjNt
 OT4z0KKUW7H77bVO3v3gpxy6iWHhWzVkKbpzyxHvJ7R0N1s/4f8YdPuQlIs7Ccws8dox4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYi1p-0002DN-Vd; Wed, 13 May 2020 03:25:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYi1p-0005cl-LC; Wed, 13 May 2020 03:25:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYi1p-0006K3-JB; Wed, 13 May 2020 03:25:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150152-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150152: all pass - PUSHED
X-Osstest-Versions-This: ovmf=88899a372cfc44f8612315f4b43a084d1814fe69
X-Osstest-Versions-That: ovmf=9378310dd877b99be1da398f39e82e0501aca372
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 03:25:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150152 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150152/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 88899a372cfc44f8612315f4b43a084d1814fe69
baseline version:
 ovmf                 9378310dd877b99be1da398f39e82e0501aca372

Last test of basis   150141  2020-05-11 19:39:25 Z    1 days
Testing same since   150152  2020-05-12 19:39:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ard.biesheuvel@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9378310dd8..88899a372c  88899a372cfc44f8612315f4b43a084d1814fe69 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 13 04:07:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 04:07:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYig6-0006nJ-Uc; Wed, 13 May 2020 04:07:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYig5-0006nD-NS
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 04:07:29 +0000
X-Inumbo-ID: 42162f28-94cf-11ea-a32d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 42162f28-94cf-11ea-a32d-12813bfff9fa;
 Wed, 13 May 2020 04:07:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QEJZUmMqpmSwz3mbNeueU7jzjxth7jHbqdt1A41Dxw4=; b=CZTiGMFwZXg5PJDiEh9q8dgeB
 f2ezaa8YZVGhjsFeifRucpFU9Kh++6wsb/jFK3GKFV2tRihrjRaROudnZ9cdMdmcWeXbPlM0fgMvB
 6YhZlI5aEe3YdXkjk4oTsWo/LbRUhAnmBldYhinGbre98UDo/HLLzSp36p3MosMAF/Mj8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYig3-00039U-Ez; Wed, 13 May 2020 04:07:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYig3-0008EO-0V; Wed, 13 May 2020 04:07:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYig2-0007Ym-W8; Wed, 13 May 2020 04:07:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150150-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150150: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-saverestore.2:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
X-Osstest-Versions-That: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 04:07:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150150 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150150/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     17 guest-saverestore.2     fail blocked in 150142
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150142
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150142
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150142
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150142
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150142
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150142
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150142
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150142
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150142
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
baseline version:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54

Last test of basis   150150  2020-05-12 14:10:46 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed May 13 05:41:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 05:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYk8x-0006my-N5; Wed, 13 May 2020 05:41:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CP9S=63=canonical.com=stefan.bader@srs-us1.protection.inumbo.net>)
 id 1jYk8w-0006mt-LC
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 05:41:22 +0000
X-Inumbo-ID: 5f2224a4-94dc-11ea-a336-12813bfff9fa
Received: from youngberry.canonical.com (unknown [91.189.89.112])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f2224a4-94dc-11ea-a336-12813bfff9fa;
 Wed, 13 May 2020 05:41:21 +0000 (UTC)
Received: from 1.general.smb.uk.vpn ([10.172.193.28])
 by youngberry.canonical.com with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.86_2)
 (envelope-from <stefan.bader@canonical.com>)
 id 1jYk8r-00056w-A9; Wed, 13 May 2020 05:41:17 +0000
Subject: Re: [PATCH 0/2] Fixups for fcf-protection
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <3542ecb3-6f4e-2408-ea9f-9b03ac23688e@canonical.com>
 <2fbc4be8-c992-1703-168c-a4124a0fd02e@citrix.com>
From: Stefan Bader <stefan.bader@canonical.com>
Autocrypt: addr=stefan.bader@canonical.com; prefer-encrypt=mutual; keydata=
 xsFNBE5mmXEBEADoM0yd6ERIuH2sQjbCGtrt0SFCbpAuOgNy7LSDJw2vZHkZ1bLPtpojdQId
 258o/4V+qLWaWLjbQdadzodnVUsvb+LUKJhFRB1kmzVYNxiu7AtxOnNmUn9dl1oS90IACo1B
 BpaMIunnKu1pp7s3sfzWapsNMwHbYVHXyJeaPFtMqOxd1V7bNEAC9uNjqJ3IG15f5/50+N+w
 LGkd5QJmp6Hs9RgCXQMDn989+qFnJga390C9JPWYye0sLjQeZTuUgdhebP0nvciOlKwaOC8v
 K3UwEIbjt+eL18kBq4VBgrqQiMupmTP9oQNYEgk2FiW3iAQ9BXE8VGiglUOF8KIe/2okVjdO
 nl3VgOHumV+emrE8XFOB2pgVmoklYNvOjaIV7UBesO5/16jbhGVDXskpZkrP/Ip+n9XD/EJM
 ismF8UcvcL4aPwZf9J03fZT4HARXuig/GXdK7nMgCRChKwsAARjw5f8lUx5iR1wZwSa7HhHP
 rAclUzjFNK2819/Ke5kM1UuT1X9aqL+uLYQEDB3QfJmdzVv5vHON3O7GOfaxBICo4Z5OdXSQ
 SRetiJ8YeUhKpWSqP59PSsbJg+nCKvWfkl/XUu5cFO4V/+NfivTttnoFwNhi/4lrBKZDhGVm
 6Oo/VytPpGHXt29npHb8x0NsQOsfZeam9Z5ysmePwH/53Np8NQARAQABzTVTdGVmYW4gQmFk
 ZXIgKENhbm9uaWNhbCkgPHN0ZWZhbi5iYWRlckBjYW5vbmljYWwuY29tPsLBrgQTAQoAQQIb
 AwULCQgHAwUVCgkICwUWAgMBAAIeAQIXgAIZARYhBNtdfMrzmU4zldpNPuhnXe7L7s6jBQJd
 7mXwBQkRwqX/ACEJEOhnXe7L7s6jFiEE2118yvOZTjOV2k0+6Gdd7svuzqPCmBAAsTPnhe+A
 iFiLyoLCqSikRlerZ9i20wUwQyRbd0Dtj+bl+eY+z9Bix+mfsu1ByYMYHPhb1gMv8oP7VgXV
 bX6/Ojw1BN5HTYMmSxpPHauLLMj7NL1hj9zQS/Jq45Zryz1i8j2XM36BaA4rIQrjXmfJteNT
 kUQwAXqMCMnvRP4M95mMYGCSgM8oFEo7cMGA5XbeusCIzH1ReoBtxJRTiLWZ7o9NloBtJ4iI
 4850l8+Ak/ySLdC4YXdy3bd0suU9qZ5wIKAfhkEwZvxlAuFF8s1hqjR1sNdypD45IWXakZOi
 ILX0wmPWKbUJrwNz3slG7OTE4UpF9cD2tixXLsBX/+l9XLfHm1PR8lC3PhQVThDOGL/TxTbC
 CX22wnE/YsK1yhdrsP7d6F73ZxA2ytBejpco3O84WhfMMHOhVT1JhO/XZj3vMQIkbXUX5CYO
 KiC53L6Kir5H1oqAxQi6CcKHjku5m5HKP2q4BJifm9/9jLAwvm9JUo1DX7SNw7++TCNrhsxT
 SKe0DSx7y6ONUxX9dclvzQ+2CFlVUv/7GqcCkaKUh1rn5ARuN087xeM2UU7xwiF7PzX7ybrz
 Juermy995788k4RnqOXEblcjJvcH+TKBljqSKY8t4tyPErgVUfm16JIHQh+kQydA0uuMPNda
 CYD8GTtU3Jw9g4q9mdi4E2ADvATOwU0ETmaZcQEQAMKRF+5LVwDHTbJOyT2DIBqlxCelvoQF
 rLjQKH8g+swXaIbgXQJfqm4q5uONVuovqMQrKSyo9vntW71YC5/LhGW/c7DNrKaZaTTQJE4J
 VK4RX7duKQFOcX+X5VUK9xw2WkewAMwudxoBO9I6PWIJd6KTE0CTYsDeD0fy7PuVBSGOeLTm
 LEFkYMZtrEHo52aHnyryT+KihEQfKp+V5KDXOm4HFgYpW6DZ1pctK9AjvDn15g78vViku27W
 wzOpHJh1JTIKI1xcM78qjbbWjY192pD0oRPVrPxBOwGdl5OyOyThWdjCNz1kRg3ssBNauHPy
 +AjZ4/zSVfFeb2THzU25uc4/Gdrm+D0OHFkSOjwD7MThzltC5IIncZOc5qVewDPQvCTUfWcX
 CLNSq+Y8jx4CpkZ5mgnjT24Nw2LYGtH5bsyNfE8zmTgzbMyI18i80GNyUEsT+buetzE0s6TX
 P8pCIVVlCG0deg5NRaYg1n4TcYglPYNOgXFShoRbYZ1fSuOoR6ttRqijpIFfsfGaMDOx40P0
 hq0ZPGA34SElSIhYrhQ4ffjd6sHseBr3xZ4TNlOrtbY2/Ceo5UCrYSWc+EesP3ydYgFk84S7
 rGCLK9UV9ckaZEExEFH7yEGN9fTrjecurfBg6tls18/x0lVBngbEjo4tNzBg2CJ+qn9IgnMT
 W2CTABEBAAHCwZMEGAEKACYCGwwWIQTbXXzK85lOM5XaTT7oZ13uy+7OowUCXe5mRQUJEYdS
 1AAhCRDoZ13uy+7OoxYhBNtdfMrzmU4zldpNPuhnXe7L7s6jGfAP/jjsc4PD0+wfaP2L2wbi
 n53N1itsRaWD7IqpUZPuzZ1dQVzjKQnvY6yhstXqyYNFgQ+wV4O5m0I+ih+fKDLJQmQpG+Dd
 YoMA9iYiaPy3/fAxXcOoVEfCWvwzlYY6TY324ReRCCM5JFfCv6SK5ETzi+rpXYtiD6MLTJMt
 sqCCdXEHbURBFC/1nKUaC61umaiE8eEcS9p51EqdJKa97HbGJlKBilgHwUjv1kwrMqVuGJne
 LVkk+DVRWDltv6ZETl/LGkXc52gkRZ5/EHk0m9loA5lyy4ximp3GJmTzUXHa/TrBXFjdkd5Y
 6Ovn61ufBqEdU6OBOya9jLnAyvMJr5H9PDZZ4ajs32kb4HSyyuZebb+i2Thgh9e4pig7unB9
 Kn9BFQgndzqvEiKLCs3L2CUasHOgiRiUms/QjVBwpw1MzGatT4vguBbitoto81/sSUQLxF+s
 WdAYX7ip7puyrWZgWAAni+FduwXrOq9mBhH+GUKvZMjVWeq/qZnMkUuPeWPvK1YIsc29/cci
 wM8DhQgaQnLE+jLHbKiMfYq/g8d2laVPZMcxS15o9SZ5agrw8eIPKtZBFPX3w+m5qEWLhOOf
 33iBEBq9ULnimnNa6UR4X6IQk2TRticdXOlcGQmLwSpDiTFqUMEbchHEoXF9Y6rrl00IEoC1
 2Iat+yfjuNhlNAJs
Message-ID: <27da328a-189a-607c-0f97-705405380c1b@canonical.com>
Date: Wed, 13 May 2020 07:41:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2fbc4be8-c992-1703-168c-a4124a0fd02e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.20 20:47, Andrew Cooper wrote:
> On 12/05/2020 08:17, Stefan Bader wrote:
>> On 12.05.20 05:39, Jason Andryuk wrote:
>>> Two patches to fix building with a cf-protection toolchain.  The first
>>> handles the case where the compiler fails to run with "error:
>>> ‘-mindirect-branch’ and ‘-fcf-protection’ are not compatible".
>>>
>>> The second fixes a runtime error that prevented Xen booting in legacy
>>> mode.
>> That might be better than just disabling fcf-protection as well (which was done
>> in Ubuntu lacking a better solution).
> 
> It is a GCC bug
> 
> https://gcc.gnu.org/bugzilla/show_bug.cgi?id=93654
> 
> Fixed for 10, and 9.4

Ah, thanks. And yes, I should have reported this sooner, but as always one runs
into these things while in a hurry, with not enough time. We were glad to have
something that worked at all.

> 
>>
>> Not sure it was already hit but that additional .note section breaks the build
>> of the emulator as generated headers become gigantic:
>>
>> https://git.launchpad.net/ubuntu/+source/xen/tree/debian/patches/1001-strip-note-gnu-property.patch?h=ubuntu/focal
> 
> 4.6G of notes?!?  That is surely a toolchain bug.

No, sorry if that was unclear. The .note sections themselves are only a few KB
per object file. The problem is that each object file gets converted into a hex
array header file, and that conversion multiplies the resulting header sizes:
the generated .h files end up 4.6G in size. There were a couple of those, all
included into one .c file, which ended with the compiler running out of memory
on a 32GB system.
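The blow-up Stefan describes can be illustrated with a small sketch (this is a
hypothetical stand-in, not Xen's actual build tooling): embedding a binary blob
as a C hex array in the style of `xxd -i` turns every input byte into roughly
six characters of C source, so any extra section data in each object file is
amplified several-fold in the generated headers.

```python
def to_c_hex_array(name: str, data: bytes, cols: int = 12) -> str:
    """Render binary data as a C unsigned-char array (xxd -i style).

    Each byte becomes text like "0x41, ", so the generated header is
    roughly 6x the size of the input binary.
    """
    lines = []
    for i in range(0, len(data), cols):
        chunk = data[i:i + cols]
        lines.append("  " + ", ".join(f"0x{b:02x}" for b in chunk) + ",")
    body = "\n".join(lines)
    return (f"unsigned char {name}[] = {{\n{body}\n}};\n"
            f"unsigned int {name}_len = {len(data)};\n")


if __name__ == "__main__":
    # A 4 KiB stand-in for an object file; a real object carrying an
    # unwanted .note.gnu.property section would grow the same way.
    blob = bytes(range(256)) * 16
    header = to_c_hex_array("obj_blob", blob)
    ratio = len(header) / len(blob)
    print(f"{len(blob)} bytes of binary -> {len(header)} bytes of C source "
          f"(~{ratio:.1f}x)")
```

This is why stripping the section before the conversion (as the Ubuntu
1001-strip-note-gnu-property.patch does) is effective: removing a few KB from
each object avoids multiplying it into gigabytes of generated source.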

> 
> ~Andrew
> 



From xen-devel-bounces@lists.xenproject.org Wed May 13 05:54:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 05:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYkLI-0007iD-QG; Wed, 13 May 2020 05:54:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYkLH-0007i8-Et
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 05:54:07 +0000
X-Inumbo-ID: 277f4dde-94de-11ea-a336-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 277f4dde-94de-11ea-a336-12813bfff9fa;
 Wed, 13 May 2020 05:54:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vwDIY2wwjN2Qy1DWmUEp2M6s+9B5zFfe+2ABMfeqXpc=; b=I8W8z0N8+/AcKqdvgfppH+roU
 Z4or5v9GcJ1NS5Hbt90TDhgm0BClImaLcSu7hIE1c3hfDcXfxUEoKhQhVHOemsJJvjg2db8WnJpdG
 Bp068ThOCzH4rRUOg5ALUOZbiaxNyCtCQv/iPNyPGoXfgCA2d6JleqbxaLMSbknNSqKBk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYkLF-0005ii-8G; Wed, 13 May 2020 05:54:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYkLE-0005kf-VJ; Wed, 13 May 2020 05:54:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYkLE-0006el-UZ; Wed, 13 May 2020 05:54:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150151-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150151: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=d5c75ec500d96f1d93447f990cd5a4ef5ba27fae
X-Osstest-Versions-That: qemuu=de2f658b6bb422ec0e0fa94a49e476018602eeea
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 05:54:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150151 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150151/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150144

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150144
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150144
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150144
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150144
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150144
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d5c75ec500d96f1d93447f990cd5a4ef5ba27fae
baseline version:
 qemuu                de2f658b6bb422ec0e0fa94a49e476018602eeea

Last test of basis   150144  2020-05-12 03:58:58 Z    1 days
Testing same since   150151  2020-05-12 19:38:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jafar Abdi <cafer.abdi@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Stefan Berger <stefanb@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   de2f658b6b..d5c75ec500  d5c75ec500d96f1d93447f990cd5a4ef5ba27fae -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed May 13 07:45:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 07:45:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYm5B-0008Jd-2i; Wed, 13 May 2020 07:45:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYm59-0008JY-Gb
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 07:45:35 +0000
X-Inumbo-ID: b75333bc-94ed-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b75333bc-94ed-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 07:45:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eraSQW284jnBR7JZ0D2vrKPFR5ETfe3+DUZI+lDZY2M=; b=OW928E5dY4KNiXWTiW3WxV5n1
 VBB0dW1CSQQ4ItXFhgajTaglr9ytwZLf46c3y8+kOqYs94eTDrt3i2ugK6QB3XE3h3JWjpnkafXRk
 RB8cArT0guq6f7eahUYkf7kNtwjmeDIZYpWF66mPWRmULuYtHBSBB5sHHjwldkaGSgcb8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYm53-000829-07; Wed, 13 May 2020 07:45:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYm52-0001r8-KJ; Wed, 13 May 2020 07:45:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYm52-00042m-Jd; Wed, 13 May 2020 07:45:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150155-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150155: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=65a12c467cd683809b4d445b8cf1c3ae250209b2
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 07:45:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150155 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150155/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              65a12c467cd683809b4d445b8cf1c3ae250209b2
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  117 days
Failing since        146211  2020-01-18 04:18:52 Z  116 days  107 attempts
Testing same since   150155  2020-05-13 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18046 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 13 09:01:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 09:01:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYnGS-0006tM-AI; Wed, 13 May 2020 09:01:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYnGR-0006tH-9z
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 09:01:19 +0000
X-Inumbo-ID: 4ed43e0c-94f8-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ed43e0c-94f8-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 09:01:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3FC4AAC5F;
 Wed, 13 May 2020 09:01:20 +0000 (UTC)
Subject: Re: [PATCH 0/2] Fixups for fcf-protection
To: Stefan Bader <stefan.bader@canonical.com>
References: <20200512033948.3507-1-jandryuk@gmail.com>
 <3542ecb3-6f4e-2408-ea9f-9b03ac23688e@canonical.com>
 <2fbc4be8-c992-1703-168c-a4124a0fd02e@citrix.com>
 <27da328a-189a-607c-0f97-705405380c1b@canonical.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <89ad0ca5-c575-80da-bde3-e87d2df1c4ba@suse.com>
Date: Wed, 13 May 2020 11:01:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <27da328a-189a-607c-0f97-705405380c1b@canonical.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.05.2020 07:41, Stefan Bader wrote:
> On 12.05.20 20:47, Andrew Cooper wrote:
>> On 12/05/2020 08:17, Stefan Bader wrote:
>>> Not sure it was already hit but that additional .note section breaks the build
>>> of the emulator as generated headers become gigantic:
>>>
>>> https://git.launchpad.net/ubuntu/+source/xen/tree/debian/patches/1001-strip-note-gnu-property.patch?h=ubuntu/focal
>>
>> 4.6G of notes?!?  That is surely a toolchain bug.
> 
> No, sorry if that was unclear. The .notes themselves are just about some Kb or
> so per object file. Problem is that each object file gets converted into a hex
> array header file. And this does multiply the resulting header file sizes.
> So the .h files generated are 4.6G in size. And there were a couple of those,
> all included into one .c file. Which ended in the compiler running out of memory
> on a 32GB system.

But as per the link above it's still 3MB of notes per object,
which still seems insane.
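As a rough illustration of why converting objects into hex-array headers multiplies sizes (a hypothetical sketch, not the actual Xen build code or its exact formatting): each input byte becomes several characters of C source text, so even modest note sections blow up several-fold once rendered as headers.

```python
# Illustration only: encoding a binary blob as a C unsigned-char array
# header multiplies its size, since each byte becomes "0xNN," plus
# separators and line breaks in the generated source text.

def hex_array_header(blob: bytes, name: str = "payload", per_line: int = 12) -> str:
    """Render bytes as a C array initializer, one common hexdump style."""
    lines = []
    for i in range(0, len(blob), per_line):
        chunk = blob[i:i + per_line]
        lines.append("    " + " ".join(f"0x{b:02x}," for b in chunk))
    body = "\n".join(lines)
    return f"unsigned char {name}[] = {{\n{body}\n}};\n"

blob = bytes(range(256)) * 16          # 4 KiB of sample data
header = hex_array_header(blob)
print(len(header) / len(blob))         # roughly a 7x expansion
```

With an expansion factor like this, per-object note sections in the megabyte range turn into header files in the tens of megabytes, and including several such headers into one translation unit is what drives the compiler's memory use up.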

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 09:13:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 09:13:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYnSU-0007qF-FK; Wed, 13 May 2020 09:13:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYnSS-0007qA-Io
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 09:13:44 +0000
X-Inumbo-ID: 0adb0f94-94fa-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0adb0f94-94fa-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 09:13:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 1481AAB8F;
 Wed, 13 May 2020 09:13:45 +0000 (UTC)
Subject: Re: [PATCH] x86/build32: Discard all orphaned sections
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200512191108.6461-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a1d1135a-8f9c-81c3-5fc8-bbc3787ebd0f@suse.com>
Date: Wed, 13 May 2020 11:13:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200512191108.6461-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 21:11, Andrew Cooper wrote:
> @@ -47,6 +47,14 @@ SECTIONS
>           *
>           * Please check build32.mk for more details.
>           */
> -        /* *(.got.plt) */
> +        *(.got.plt)
> +  }
> +
> +  /DISCARD/ : {
> +        /*
> +         * Discard everything else, to prevent linkers from putting
> +         * orphaned sections ahead of .text, which needs to be first.
> +         */
> +        *(*)
>    }
>  }

To be honest I'm not sure if this isn't going too far. Much
depends on what would happen if a new real section appeared
that needs retaining. Granted the linker may then once again
put it at the beginning of the image, and we'll be screwed
again, just like we'll be screwed if a section gets discarded
by mistake. But it would be really nice if we had a way to
flag the need to play with the linker script. Hence perhaps
on new enough toolchains we may indeed want to make use of
--orphan-handling=? And then discard just .note and .note.*
here?
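For concreteness, the narrower variant floated here might look like the
following; this is a sketch only, assuming a binutils new enough to
accept --orphan-handling= on the link command line (the exact flag value
and section list are illustrative, not a tested Xen change):

```
/DISCARD/ : {
        /*
         * With --orphan-handling=warn (or =error) passed to the
         * linker, any input section not named in the script gets
         * flagged at build time, so only the known-droppable note
         * sections need discarding here rather than *(*).
         */
        *(.note)
        *(.note.*)
}
```

The trade-off is that a genuinely new section then causes a loud link
warning instead of being silently discarded or silently misplaced.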

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 09:15:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 09:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYnUC-0007wQ-Ry; Wed, 13 May 2020 09:15:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYnUB-0007wK-Kx
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 09:15:31 +0000
X-Inumbo-ID: 4ae3dda0-94fa-11ea-a345-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ae3dda0-94fa-11ea-a345-12813bfff9fa;
 Wed, 13 May 2020 09:15:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DD898AE28;
 Wed, 13 May 2020 09:15:32 +0000 (UTC)
Subject: Re: [PATCH] x86/build: Unilaterally disable -fcf-protection
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200512191116.6851-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1e02fd6a-255a-a588-bf19-963917438fe4@suse.com>
Date: Wed, 13 May 2020 11:15:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200512191116.6851-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 21:11, Andrew Cooper wrote:
> See comment for details.  Works around a GCC-9 bug which breaks the build on
> Ubuntu.
> 
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Wed May 13 09:22:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 09:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYnb6-0000NR-Kl; Wed, 13 May 2020 09:22:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYnb6-0000NM-0g
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 09:22:40 +0000
X-Inumbo-ID: 49f142b0-94fb-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49f142b0-94fb-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 09:22:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D3C01AE35;
 Wed, 13 May 2020 09:22:40 +0000 (UTC)
Subject: Re: [PATCH 12/16] x86/extable: Adjust extable handling to be shadow
 stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-13-andrew.cooper3@citrix.com>
 <1e80c672-9308-f7ad-67ea-69d83d69bc03@suse.com>
 <974f631e-3a82-3da4-124d-f4bf2bef89e2@citrix.com>
 <59fcdaf0-f877-7a90-9bf4-9e41b1bbcea7@suse.com>
 <876b7c21-8354-8461-12b2-baf19b0426de@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6c59aafd-77ab-5c89-9c09-1a657e16ed0b@suse.com>
Date: Wed, 13 May 2020 11:22:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <876b7c21-8354-8461-12b2-baf19b0426de@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 18:14, Andrew Cooper wrote:
> On 12/05/2020 15:31, Jan Beulich wrote:
>> On 11.05.2020 23:09, Andrew Cooper wrote:
>>> On 07/05/2020 14:35, Jan Beulich wrote:
>>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/traps.c
>>>>> +++ b/xen/arch/x86/traps.c
>>>>> @@ -778,6 +778,28 @@ static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>>>>>                 vec_name(regs->entry_vector), regs->error_code,
>>>>>                 _p(regs->rip), _p(regs->rip), _p(fixup));
>>>>>  
>>>>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>>>>> +    {
>>>>> +        unsigned long ssp;
>>>>> +
>>>>> +        asm ("rdsspq %0" : "=r" (ssp) : "0" (1) );
>>>>> +        if ( ssp != 1 )
>>>>> +        {
>>>>> +            unsigned long *ptr = _p(ssp);
>>>>> +
>>>>> +            /* Search for %rip in the shadow stack, ... */
>>>>> +            while ( *ptr != regs->rip )
>>>>> +                ptr++;
>>>> Wouldn't it be better to bound the loop, as it shouldn't search past
>>>> (strictly speaking not even to) the next page boundary? Also you
>>>> don't care about the top of the stack (being the to be restored SSP),
>>>> do you? I.e. maybe
>>>>
>>>>             while ( *++ptr != regs->rip )
>>>>                 ;
>>>>
>>>> ?
>>>>
>>>> And then - isn't searching for a specific RIP value alone prone to
>>>> error, in case it matches an ordinary return address? I.e.
>>>> wouldn't you better search for a matching RIP accompanied by a
>>>> suitable pointer into the shadow stack and a matching CS value?
>>>> Otherwise, ...
>>>>
>>>>> +            ASSERT(ptr[1] == __HYPERVISOR_CS);
>>>> ... also assert that ptr[-1] points into the shadow stack?
>>> So this is the problem I was talking about: the previous context's
>>> SSP isn't stored anywhere helpful.
>>>
>>> What we are in practice doing is looking 2 or 3 words up the shadow
>>> stack (depending on exactly how deep our call graph is), to the shadow
>>> IRET frame matching the real IRET frame which regs is pointing to.
>>>
>>> Both IRET frames were pushed in the process of generating the exception,
>>> and we've already matched regs->rip to the exception table record.  We
>>> need to fix up regs->rip and the shadow lip to the fixup address.
>>>
>>> As we are always fixing up an exception generated from Xen context, we
>>> know that ptr[1] == __HYPERVISOR_CS, and ptr[-1] == &ptr[2], as we
>>> haven't switched shadow stack as part of taking this exception. 
>>> However, this second point is fragile to exception handlers moving onto IST.
>>>
>>> We can't encounter regs->rip in the shadow stack between the current SSP
>>> and the IRET frame we're looking to fix up, or we would have taken a
>>> recursive fault and not reached exception_fixup() to begin with.
>> I'm afraid I don't follow here. Consider a function (also)
>> involved in exception handling having this code sequence:
>>
>>     call    func
>>     mov     (%rax), %eax
>>
>> If the fault we're handling occurred on the MOV and
>> exception_fixup() is a descendant of func(), then the first
>> instance of an address on the shadow stack pointing at this
>> MOV is going to be the one which did not fault.
> 
> No.  The moment `call func` returns, the address you're looking to match
> is rubble, no longer on the stack.  Numerically, it will be located at
> SSP-8 when the fault for MOV is generated.

I think I still didn't explain the scenario sufficiently well:

In some function, say test(), we have above code sequence and
the MOV faults. The exception handling then goes
- assembly entry
- C entry
- test()
- exception_fixup()
in order of (nesting) call sequence, i.e. with _all_ respective
return addresses still on the stack. Since your lookup starts
from SSP, there'll be two active frames on the stack which both
have the same return address.

We may not actively have such a code structure right now, but
it would seem shortsighted to me to not account for the
possibility.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 09:56:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 09:56:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYo7Y-0002ui-BM; Wed, 13 May 2020 09:56:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYo7W-0002ud-Jj
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 09:56:10 +0000
X-Inumbo-ID: f6597686-94ff-11ea-a348-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6597686-94ff-11ea-a348-12813bfff9fa;
 Wed, 13 May 2020 09:56:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=a62biwheoBI6ysfMBsrQn60BIx2zFc34Ox37F90oYY4=; b=YBu1Ua2EeRwE+X7mLT9+hdDOt
 HSeoKrVjkyLpzRHq/RHAJhsPCLDmfHj5p9Z84xjnryur8zYl8Bxw2YPEEokoeKSPDJw+cgKbRU8EI
 at6UNgAOTPSGwFduV0tnq05avk0F6pSL9jxt7BmPfEjHdLW8IBHfl65M+zCKnWzxpHj2I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYo7R-0002pB-Ly; Wed, 13 May 2020 09:56:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYo7R-0000am-FA; Wed, 13 May 2020 09:56:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYo7R-0008CI-ET; Wed, 13 May 2020 09:56:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150157-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150157: all pass - PUSHED
X-Osstest-Versions-This: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
X-Osstest-Versions-That: xen=e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 09:56:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150157 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150157/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
baseline version:
 xen                  e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9

Last test of basis   150123  2020-05-10 09:19:05 Z    3 days
Testing same since   150157  2020-05-13 09:19:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e0d92d9bd7..a82582b1af  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed May 13 10:11:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 10:11:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYoMF-0004ba-NI; Wed, 13 May 2020 10:11:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mb4t=63=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYoMF-0004bV-04
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 10:11:23 +0000
X-Inumbo-ID: 187db91e-9502-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 187db91e-9502-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 10:11:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=GgQjqsugvIMcbgU9FUSq5h0RPhgWmTgPyNTalMfquT0=; b=4fU6izyoqpcal9KFpbpxm3ThG/
 5wrmGD2AUgoJEmdKTiq3bHhuNaEzjTRy5iC3D21rz00ZT+x2SDUyUrwvErC66ngFZD6UvAAeAn40r
 B4nSexltuxaa41YuDnUre4TsadFcbhjN2rFzzKDHq2dvOPNnLLy2/2i1Yjwt9fBYNuPE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYoMC-0003ET-Jz; Wed, 13 May 2020 10:11:20 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYoMC-0006CB-B9; Wed, 13 May 2020 10:11:20 +0000
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>, boris.ostrovsky@oracle.com,
 jgross@suse.com
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200502173529.GH9902@minyard.net>
 <ed02b555-dbaa-2ebf-d09f-0474e1a7a745@xen.org>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
 <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <42253259-9663-67e8-117f-8ba631243585@xen.org>
Date: Wed, 13 May 2020 11:11:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peng Fan <peng.fan@nxp.com>, "minyard@acm.org" <minyard@acm.org>,
 roman@zededa.com,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 13/05/2020 01:33, Stefano Stabellini wrote:
> I worked with Roman to do several more tests and here is an update on
> the situation. We don't know why my patch didn't work when Boris' patch
> [1] worked.  Both of them should have worked the same way.
> 
> Anyway, we continued with Boris patch to debug the new mmc error which
> looks like this:
> 
> [    3.084464] mmc0: unrecognised SCR structure version 15
> [    3.089176] mmc0: error -22 whilst initialising SD card
> 
> I asked to add a lot of tracing, see attached diff.

Please avoid attachments on the mailing list; use pastebin for diffs.

> We discovered a bug
> in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
> xen_swiotlb_init, start_dma_addr is not set correctly. This oneline
> patch fixes it:
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index 0a40ac332a4c..b75c43356eba 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>           * IO TLB memory already allocated. Just use it.
>           */
>          if (io_tlb_start != 0) {
> +               start_dma_addr = io_tlb_start;
>                  xen_io_tlb_start = phys_to_virt(io_tlb_start);
>                  goto end;
>          }
> 
> Unfortunately it doesn't solve the mmc0 error.
> 
> 
> As you might notice from the logs, none of the other interesting printks
> printed anything, which seems to mean that the memory allocated by
> xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should be
> just fine.
> 
> I am starting to be out of ideas. Do you guys have any suggestions on
> what could be the issue or what could be done to discover it?

Sorry if my suggestions are going to be obvious, but I can't confirm 
whether this was already done:
     1) Does the kernel boot on baremetal (i.e. without Xen)? This should
help to confirm whether the bug is Xen related.
     2) Swiotlb should not be necessary for basic dom0 boot on Arm. Did 
you try to disable it? This should help to confirm whether swiotlb is 
the problem or not.
     3) Did you track down how the MMC read the SCR structure?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 13 10:15:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 10:15:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYoQK-0004lj-8F; Wed, 13 May 2020 10:15:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hhv+=63=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYoQJ-0004le-IR
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 10:15:35 +0000
X-Inumbo-ID: ae4024fa-9502-11ea-a34b-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae4024fa-9502-11ea-a34b-12813bfff9fa;
 Wed, 13 May 2020 10:15:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589364935;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=K1DDlQ+oTKpl/SufIVCEeC9exhF5mUl7ZcBdW6Fjs78=;
 b=erQbfCSKswPWDUr5e/i5c2Hn+n80zXSl1WEfve2uM9WjdjK+8S2O1YJJ
 dhMp6YRyBg3Yje0UkUwqnVColrznXDUshSfKwwHRxmgSdAJpECnmZqaTW
 Wkbj26dEI8hgpL4s/HnJBWLmEqreqCOF6i/7pogU6qLbhjaq1vrK829bF M=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: UlVtyVxFqucz2Ilf4xfP3dTj0dxIY93ZPPrPZ1dM3Re46079vOuj0+g+TTnTcGwVEsxGdJprpI
 ld6mvtfuIbkMg5lSHhPMvkDMQlh1nGqrctuOWHuKK7iStzCE1qciwGoL71StxVTZgfw9+nESiT
 gf/IhWt9oimsdDkjbfcY8EgPlydZlHhuQr8hIfB+HM7Q3e+++nOawvb+eC7ez+zuqy5ylK0G5B
 okOGXvJSx4hPeFPxXTbWLmbLOOuA7bii+wMe6LOMTUms8bSDZQkhUeJ7Lp86aWCOFAJrqhIsb+
 UiI=
X-SBRS: 2.7
X-MesageID: 17395518
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,387,1583211600"; d="scan'208";a="17395518"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH v3 1/1] golang/xenlight: add NameToDomid and DomidToName
 util functions
Thread-Topic: [PATCH v3 1/1] golang/xenlight: add NameToDomid and DomidToName
 util functions
Thread-Index: AQHWKMFEbi6kgYqzdUWf66u+AEotDKilq9GA
Date: Wed, 13 May 2020 10:15:29 +0000
Message-ID: <77D6F591-FF91-43A8-8278-1F7EF3D8B9D9@citrix.com>
References: <cover.1589321804.git.rosbrookn@ainfosec.com>
 <a543eec1da35b619a06816f8aed29774daa38cb7.1589321804.git.rosbrookn@ainfosec.com>
In-Reply-To: <a543eec1da35b619a06816f8aed29774daa38cb7.1589321804.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <DD73ACB3B0C3894EB2FFF89A63E40602@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On May 13, 2020, at 1:55 AM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> Many exported functions in xenlight require a domid as an argument. Make
> it easier for package users to use these functions by adding wrappers
> for the libxl utility functions libxl_name_to_domid and
> libxl_domid_to_name.
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Looks good, thanks!

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Wed May 13 10:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 10:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYomb-0006Xr-7P; Wed, 13 May 2020 10:38:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hhv+=63=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYoma-0006Xm-3M
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 10:38:36 +0000
X-Inumbo-ID: e3796b6a-9505-11ea-9887-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3796b6a-9505-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 10:38:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589366316;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=Px4UDWDSJ9KmQDWSbBEUARrbOqFDI2TFyAdHivPVcIs=;
 b=AtED9dlpgA65dHcAvlKWRDzXW1tWAcIPjg4w7Z3wklDH3XoP7KlRZ+Yc
 s3Adac6DR/9mRFFFnmUD80FhioOn+WZzs3mw5Wb1ribAn4cjfzFUViKra
 GvMIT5dLRt+GcwS1qesNAMhM5mMB3q20WjiSAYP0/r75EkF6qhzSojZzl o=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: GEmSY5n6YWq20P3vtg85G28MnbF68lxa2nQKduvWS5aq4bBUm/+G4nQyF8XIOMH+3mYNAcAmoK
 rWRbLNJlDQCQVqZD6H494cANNH7/bnT5fBFYl5A66eMWGtuzHGnMks/WHaZ9fIdeks8Aj8Fq1l
 UhX7IZAXZCgdTFNnAhki3DarWd7fN1CB2OX7ognZ+x4jqAul6Z2E35eW40ZOjGnVSh50bvH/fx
 lHLzUjf3d7dfxsX6lvwML9f57xs9j9ZUsFiU5SQvQnTphExy20uFYCBSmp/rPUW4XjxFsVXFzj
 SsE=
X-SBRS: 2.7
X-MesageID: 17425328
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,387,1583211600"; d="scan'208";a="17425328"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH v2 2/3] golang/xenlight: init xenlight go module
Thread-Topic: [PATCH v2 2/3] golang/xenlight: init xenlight go module
Thread-Index: AQHWKMGVg9jV6anrV0CFXVFjqUoLW6ilsi+A
Date: Wed, 13 May 2020 10:38:17 +0000
Message-ID: <52D26A49-D479-4F0B-91D0-553C737E2337@citrix.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
 <d3744468e8f6ce22756355a2e36b182cea7d5068.1589330383.git.rosbrookn@ainfosec.com>
In-Reply-To: <d3744468e8f6ce22756355a2e36b182cea7d5068.1589330383.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <A1AECD281A7A404B8F7C787EF5A684F0@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On May 13, 2020, at 1:58 AM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> Initialize the xenlight Go module using the xenbits git-http URL,
> xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight.
>
> Also simplify the build Make target by using `go build` instead of `go
> install`, and do not set GOPATH here because it is now unnecessary.

Glad you caught the GOPATH thing. :-)

>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Wed May 13 11:01:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 11:01:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYp8Z-0000Vg-4z; Wed, 13 May 2020 11:01:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYp8X-0000Vb-KR
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 11:01:17 +0000
X-Inumbo-ID: 10c6498c-9509-11ea-a353-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10c6498c-9509-11ea-a353-12813bfff9fa;
 Wed, 13 May 2020 11:01:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589367676;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=CzknWRp/eE8EiDmjOjPFiq/naD4x0PUIefks06Zyxs0=;
 b=iPs4NYKGA8DGs6fVU7JYHIMycdJKZ14rp/6SIveoC6+dshwpl3nGSZK6
 e3sAtUToqqzsuJ9jIZMp3gwUJ2kclCxJftuW6TC+b3mZ/PlI0GPxSKfJ8
 /t2cjhOLKEQjp0Ctr5ntismq+nePHZ7QwkMH68gAL1LmPCBynSFMv5cNI Q=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 9dkxv8eXgsEB8EJbELdPRjgMvApa7EX+GXEz+CbiXxb0AXVNJhxzFubgNC2cYwR1AeSKzjCzet
 lMLS4KYoYyGf037mGBQG+Xt+H1BXhCvRSH/yLRWowkdyifj+BQuFBZSDkhZeb5VscVslDyAZH0
 0RHC3FDzGZNh9YyroxjqWd33hQW0jlTKr0fEtjGEbY8OTFl1pjRGpBrxkiuBwDB65R9SFAXCdr
 lnLJRcBl/xYNtcQ9mjg0VryBQOHrOfhVrJcSuEGtp5BMCFm3D7bSsTTBwEv9wONdq0ZSh0k6K8
 VDM=
X-SBRS: 2.7
X-MesageID: 17671904
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,387,1583211600"; d="scan'208";a="17671904"
Subject: Re: [PATCH] x86/build: Unilaterally disable -fcf-protection
To: Jason Andryuk <jandryuk@gmail.com>
References: <20200512191116.6851-1-andrew.cooper3@citrix.com>
 <CAKf6xpt-WLVOTaca_FncB4XX0PQ2ZbP6GFWQjinAYex=6mptRA@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6b7871d8-903a-20df-4e19-9a6200094aa5@citrix.com>
Date: Wed, 13 May 2020 12:01:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <CAKf6xpt-WLVOTaca_FncB4XX0PQ2ZbP6GFWQjinAYex=6mptRA@mail.gmail.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefan Bader <stefan.bader@canonical.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13/05/2020 03:35, Jason Andryuk wrote:
>
> On Tue, May 12, 2020 at 3:11 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>> See comment for details.  Works around a GCC-9 bug which breaks the build on
>> Ubuntu.
>>
>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Tested-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
>
>> diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
>> index 2a51553edb..93e30e4bea 100644
>> --- a/xen/arch/x86/arch.mk
>> +++ b/xen/arch/x86/arch.mk
>> @@ -67,6 +67,15 @@ CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch=thunk-extern
>>  CFLAGS-$(CONFIG_INDIRECT_THUNK) += -mindirect-branch-register
>>  CFLAGS-$(CONFIG_INDIRECT_THUNK) += -fno-jump-tables
>>
>> +# Xen doesn't support CET-IBT yet.  At a minimum, logic is required to
>> +# enable it for supervisor use, but the Livepatch functionality needs
>> +# to learn not to overwrite ENDBR64 instructions.
> Is the problem that existing functions start with ENDBR64, but the
> livepatch overwrites with a "real" instruction?

We livepatch by creating a new complete copy of the function, and
putting `jmp new` at the head of the old one.

This means we don't need to patch every callsite and track every
function pointer to the old function, and we can fully revert by
replacing the 5 bytes which became `jmp new`.

With CET-IBT in the mix, livepatch will have to learn to spot an ENDBR64
instruction and leave it intact, patching instead the next 5 bytes, so
an old function pointer still lands on the ENDBR64 instruction.
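Andrew's description can be sketched as a small byte-level simulation. The x86-64 encodings used below (ENDBR64 = F3 0F 1E FA, `jmp rel32` = E9 + 32-bit displacement) are real, but the patcher itself is a hypothetical illustration, not Xen's actual Livepatch code:

```python
import struct

ENDBR64 = bytes([0xF3, 0x0F, 0x1E, 0xFA])  # CET-IBT landing-pad instruction

def make_jmp(from_addr: int, to_addr: int) -> bytes:
    """Encode `jmp rel32` (0xE9 + disp32), 5 bytes total.

    The displacement is relative to the address of the *next*
    instruction, i.e. from_addr + 5.
    """
    return b"\xE9" + struct.pack("<i", to_addr - (from_addr + 5))

def patch_function(code: bytearray, old_addr: int, new_addr: int) -> bytes:
    """Redirect a function to its replacement, preserving ENDBR64.

    If the function starts with ENDBR64, the jmp is written *after* it,
    so indirect calls through stale function pointers still land on a
    valid landing pad.  Returns the 5 overwritten bytes so the patch can
    be fully reverted by writing them back.
    """
    off = len(ENDBR64) if code[:4] == ENDBR64 else 0
    saved = bytes(code[off:off + 5])
    code[off:off + 5] = make_jmp(old_addr + off, new_addr)
    return saved
```

Reverting is then just `code[off:off + 5] = saved`, which restores the original function byte-for-byte.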

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 13 11:29:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 11:29:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYpZr-0002N1-Fx; Wed, 13 May 2020 11:29:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hhv+=63=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYpZq-0002Mw-9J
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 11:29:30 +0000
X-Inumbo-ID: 013dfa92-950d-11ea-a359-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 013dfa92-950d-11ea-a359-12813bfff9fa;
 Wed, 13 May 2020 11:29:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589369367;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=d0hlYLOfkzYbZiFKqnxe5ZmNxW60qKWHhm05xfKs3+c=;
 b=NQgdreTKc/7sbmBIf4CB0TFOFKy/EY1++h/kh7lvrUCFzKraSF98EnNS
 ZtKoQmEXSKEC1ApE3/4L80jsqZjeqwc0/l6oJlS3QD4vF/cpwv3GZ3qiQ
 BDOgl0gMnIo3sFEKyKpkqLVxZLo0qewy1METJUCJEa2WRov5LtV1IXtEz s=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 7NmjoA4DwNwtjqs0YOc/vZt1BPpyM3OubCTyZaQe6oPDxYZ2NknD24OP5jH93lTiQ/VEjO2qtJ
 I02IApKsNbsvZBWbg4ZDH2LxqGad/+F+/6EoH9hrrVqQDwbFxkeXk+8004fWK/1Hd/vu21blOx
 T4IKiANepLXZ5x1EDOfaVc+Zf8/KwO9kFB3t10c/sBLLuDHPEKcwl8TC3K42X4YXiMQ7AEsNWQ
 w6QG1osp3cNsW16gg9r2dV6kKZR86CpbHWVHKi1+b/EMPsXJQl0UBxuL4D7FQK8Ptx6pOd37Fm
 EKw=
X-SBRS: 2.7
X-MesageID: 18097819
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,387,1583211600"; d="scan'208";a="18097819"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH v2 3/3] golang/xenlight: add necessary module/package
 documentation
Thread-Topic: [PATCH v2 3/3] golang/xenlight: add necessary module/package
 documentation
Thread-Index: AQHWKMGZFErKT/Po40mgeLsLE6wA6KilwHcA
Date: Wed, 13 May 2020 11:29:24 +0000
Message-ID: <BAD53A57-6842-43E6-AA5B-6C42B7290D00@citrix.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
 <a42395202aef85d983dd9db361c366a6d03e313f.1589330383.git.rosbrookn@ainfosec.com>
In-Reply-To: <a42395202aef85d983dd9db361c366a6d03e313f.1589330383.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <39A659E889BE4346B66423F1C5304761@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>, Ian
 Jackson <Ian.Jackson@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 13, 2020, at 1:58 AM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> Add a README and package comment giving a brief overview of the package.
> These also help pkg.go.dev generate better documentation.

One thing I forgot to mention about the README is the long lines -- do you
mind if I wrap those before checking it in?

> Also, add a copy of the LGPL (the same license used by libxl) to
> tools/golang/xenlight. This is required for the package to be shown
> on pkg.go.dev and added to the default module proxy, proxy.golang.org.
>
> Finally, add an entry for the xenlight package to SUPPORT.md.
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed May 13 11:45:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 11:45:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYppa-00040R-UU; Wed, 13 May 2020 11:45:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYppZ-00040M-8o
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 11:45:45 +0000
X-Inumbo-ID: 476c26c2-950f-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 476c26c2-950f-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 11:45:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HVB4aG5SeqwAAVW50UExMlMg86lcC/eXjRNkFbR/aFg=; b=NiJOZyqI0GXqBJs1h3EgsfuaM
 bQ5WDtharWZGWDLEqwoy7UqnmfrbM9IwyYu+ZG5LR0RNtoBLouQWL0D6lL8Pz2PcJtYhDNyuZ/sRV
 vDoap6AzDUph7QjYEsRRQBEzk0SfXAx+VhqaRv96zjSnLqSLM00FhSpGyXSuSQh9rzfso=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYppY-0005Aq-5d; Wed, 13 May 2020 11:45:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYppX-00050P-NT; Wed, 13 May 2020 11:45:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYppX-0007Np-Mq; Wed, 13 May 2020 11:45:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150156-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150156: regressions - FAIL
X-Osstest-Failures: ovmf:build-i386-xsm:xen-build:fail:regression
X-Osstest-Versions-This: ovmf=242ab73d7f255d5d859eaf74a23b9d68c686d177
X-Osstest-Versions-That: ovmf=88899a372cfc44f8612315f4b43a084d1814fe69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 11:45:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150156 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150156/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-xsm                6 xen-build                fail REGR. vs. 150152

version targeted for testing:
 ovmf                 242ab73d7f255d5d859eaf74a23b9d68c686d177
baseline version:
 ovmf                 88899a372cfc44f8612315f4b43a084d1814fe69

Last test of basis   150152  2020-05-12 19:39:31 Z    0 days
Testing same since   150156  2020-05-13 06:10:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 242ab73d7f255d5d859eaf74a23b9d68c686d177
Author: Michael Kubacki <michael.kubacki@microsoft.com>
Date:   Sat May 9 10:22:42 2020 +0800

    BaseTools/Ecc: Replace deprecated function time.clock()
    
    REF:https://bugzilla.tianocore.org/show_bug.cgi?id=2707
    
    Ecc fails with Python 3.8 because it uses the deprecated time.clock()
    function - https://docs.python.org/3.7/library/time.html#time.clock
    
    This change updates EccMain.py to use time.perf_counter().
    
    Cc: Bob Feng <bob.c.feng@intel.com>
    Cc: Liming Gao <liming.gao@intel.com>
    Signed-off-by: Michael Kubacki <michael.kubacki@microsoft.com>
    Reviewed-by: Bob Feng <bob.c.feng@intel.com>
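    The substitution this commit describes is mechanical; a minimal sketch
    of the pattern (the `timed` helper is hypothetical, not the actual
    EccMain.py code):

    ```python
    import time

    def timed(fn, *args, **kwargs):
        """Run fn and report elapsed seconds using time.perf_counter().

        time.clock() was deprecated in Python 3.3 and removed in 3.8;
        time.perf_counter() is the recommended replacement for measuring
        elapsed intervals.
        """
        start = time.perf_counter()   # was: start = time.clock()
        result = fn(*args, **kwargs)
        return result, time.perf_counter() - start
    ```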


From xen-devel-bounces@lists.xenproject.org Wed May 13 11:56:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 11:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYq05-0004uy-1I; Wed, 13 May 2020 11:56:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYq03-0004ut-CW
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 11:56:35 +0000
X-Inumbo-ID: c6b76206-9510-11ea-a366-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6b76206-9510-11ea-a366-12813bfff9fa;
 Wed, 13 May 2020 11:56:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zyCtSPiSGDUENlGUGY0MZmr5luUkY4nsTR/t/HVCTkY=; b=3+t9JQn3zDz6wTWwvspOAu9KP
 u/uRs3zft7fdUFRN+RMSAAyIpybb0Fje1Efza0gK2J6eKyRwvBL0Q7EecLis0LEOjVKalo8mb2gsE
 bxr6eQXPsqbTVj9nVLsyC6X6jGIoJHDobRxR5wNU1y4KAj7mNTaqJYTcR/eveLwPLI2fI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYpzv-0005NZ-7s; Wed, 13 May 2020 11:56:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYpzu-0005dB-QN; Wed, 13 May 2020 11:56:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYpzu-0002YS-Pi; Wed, 13 May 2020 11:56:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150153-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150153: regressions - FAIL
X-Osstest-Failures: linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=24085f70a6e1b0cb647ec92623284641d8270637
X-Osstest-Versions-That: linux=152036d1379ffd6985262743dcf6b0f9c75f83a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 11:56:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150153 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150153/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 150148
 test-armhf-armhf-xl-vhd      11 guest-start              fail REGR. vs. 150148

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150148
 test-armhf-armhf-xl-rtds     17 guest-start.2            fail REGR. vs. 150148

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150148
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150148
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150148
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150148
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150148
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150148
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150148
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150148
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                24085f70a6e1b0cb647ec92623284641d8270637
baseline version:
 linux                152036d1379ffd6985262743dcf6b0f9c75f83a4

Last test of basis   150148  2020-05-12 07:51:46 Z    1 days
Testing same since   150153  2020-05-12 23:40:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Ford <aford173@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Bob Peterson <rpeterso@redhat.com>
  David Gow <davidgow@google.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 529 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 13 12:08:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 12:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYqAy-0005uR-9d; Wed, 13 May 2020 12:07:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYqAx-0005uM-4S
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 12:07:51 +0000
X-Inumbo-ID: 5d5c6700-9512-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d5c6700-9512-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 12:07:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B3C55AF41;
 Wed, 13 May 2020 12:07:51 +0000 (UTC)
Subject: Re: [PATCH v8 07/12] x86emul: support FNSTENV and FNSAVE
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <9a2afbb1-af92-2c7d-9fde-d8d8e4563a2a@suse.com>
 <0e222090-c989-45b5-65be-efb09e7b9bb9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4df19637-1a39-491d-83cf-a599c764ab1d@suse.com>
Date: Wed, 13 May 2020 14:07:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <0e222090-c989-45b5-65be-efb09e7b9bb9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 19:58, Andrew Cooper wrote:
> On 05/05/2020 09:15, Jan Beulich wrote:
>> To avoid introducing another boolean into emulator state, the
>> rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode
>> info (affecting structure layout, albeit not size) to x86_emul_blk().
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> TBD: The full 16-bit padding fields in the 32-bit structures get filled
>>      with all ones by modern CPUs (i.e. other than the comment says for
> 
> You really mean "unlike" here, rather than "other".  They do not have
> the same meaning in this context.
> 
> (I think you're also missing a "what", which I'm guessing is just an
> oversight.)

Well, it's really s/other than/unlike what/ then afaics.

>>      FIP and FDP). We may want to mirror this as well (for the real mode
>>      variant), even if those fields' contents are unspecified.
> 
> This is surprising behaviour, but I expect it dates back to external x87
> processors and default MMIO behaviour.
> 
> If it is entirely consistent, it might be nice to match.  OTOH, the
> manuals are very clear that it is reserved, which I think gives us the
> liberty to use the easier implementation.

I've meanwhile checked on an AMD system, and it's the same
there. The mentioned comment really refers to observations back
on a 386/387 pair. I think really old CPUs didn't write the
full 16-bit padding fields at all, and the 386/387 then started
writing full 32 bits of FIP and FDP alongside their "high 16
bits" secondary fields. I further assume that this "don't
write parts of the struct at all" behavior was considered
unsafe, or unhelpful when trying to write things out in bigger
chunks (ideally in full cachelines).

I'll leave this as is for now; we can still consider storing
all ones there later on.

>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -897,6 +900,50 @@ struct x86_emulate_state {
>>  #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
>>  #endif
>>  
>> +#ifndef X86EMUL_NO_FPU
>> +struct x87_env16 {
>> +    uint16_t fcw;
>> +    uint16_t fsw;
>> +    uint16_t ftw;
>> +    union {
>> +        struct {
>> +            uint16_t fip_lo;
>> +            uint16_t fop:11, :1, fip_hi:4;
>> +            uint16_t fdp_lo;
>> +            uint16_t :12, fdp_hi:4;
>> +        } real;
>> +        struct {
>> +            uint16_t fip;
>> +            uint16_t fcs;
>> +            uint16_t fdp;
>> +            uint16_t fds;
>> +        } prot;
>> +    } mode;
>> +};
>> +
>> +struct x87_env32 {
>> +    uint32_t fcw:16, :16;
>> +    uint32_t fsw:16, :16;
>> +    uint32_t ftw:16, :16;
> 
> uint16_t fcw, :16;
> uint16_t fsw, :16;
> uint16_t ftw, :16;
> 
> which reduces the number of 16 bit bitfields.

I'm unconvinced of this being helpful in any way. My goal here
was really to consistently use all uint16_t in the 16-bit
struct, and all uint32_t in the 32-bit one, not least
after ...

>> +    union {
>> +        struct {
>> +            /* some CPUs/FPUs also store the full FIP here */
>> +            uint32_t fip_lo:16, :16;
>> +            uint32_t fop:11, :1, fip_hi:16, :4;
>> +            /* some CPUs/FPUs also store the full FDP here */
>> +            uint32_t fdp_lo:16, :16;
>> +            uint32_t :12, fdp_hi:16, :4;
> 
> Annoyingly, two of these lines can't use uint16_t.  I'm torn as to
> whether to suggest converting the other two which can.

... observing this. (Really I had it the other way around
initially. I'd be okay switching back if there were a halfway
compelling argument - reducing the number of 16-bit
bitfields doesn't sound like one to me, though, unless
there are implications from this that I don't see.)
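
As a rough illustration of the real-mode layout quoted above (the helper
name here is invented, not part of the patch): the 20-bit FIP is split
across the 16-bit environment, with bits 0-15 in fip_lo and bits 16-19 in
the 4-bit fip_hi packed beside the 11-bit opcode field. Reassembly would
look like:

```c
#include <stdint.h>

/* Illustrative only: rebuild the 20-bit real-mode FIP from the split
 * fields of the 16-bit x87 environment (fip_lo = bits 0-15, the 4-bit
 * fip_hi = bits 16-19). */
static uint32_t fip_from_env16(uint16_t fip_lo, uint8_t fip_hi)
{
    return (uint32_t)fip_lo | ((uint32_t)(fip_hi & 0xf) << 16);
}
```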

>> @@ -11571,6 +11646,93 @@ int x86_emul_blk(
>>              *eflags |= X86_EFLAGS_ZF;
>>          break;
>>  
>> +#ifndef X86EMUL_NO_FPU
>> +
>> +    case blk_fst:
>> +        ASSERT(!data);
>> +
>> +        if ( bytes > sizeof(fpstate.env) )
>> +            asm ( "fnsave %0" : "=m" (fpstate) );
>> +        else
>> +            asm ( "fnstenv %0" : "=m" (fpstate.env) );
> 
> We have 4 possible sizes to deal with here - the 16 and 32bit formats of
> prot vs real/vm86 modes, and it is not clear (code wise) why
> sizeof(fpstate.env) is a suitable boundary.

See the definitions of struct x87_env16 and struct x87_env32:
they intentionally use a union in part, to make it more
obvious that in fact there are just two different sizes to
deal with.

> Given that these are legacy instructions and not a hotpath in the
> slightest, it is possibly faster (by removing the branch) and definitely
> more obvious to use fnsave unconditionally, and derive all of the
> smaller layouts that way.

I can accept the "not a hotpath" argument, but I'm against
using insns other than the intended one for no good reason.

> Critically however, it prevents us from needing a CVE/XSA if ... [bottom
> comment]

This is a legitimate concern, but imo not to be addressed by
using FNSAVE uniformly: There being fields which have
undefined contents even with FNSTENV (and which hence in
principle could not get written at all), I'm instead going
to memset() the entire structure upfront. I'll use 0 for
now, but using ~0 might be an option to fill the padding
fields (see above); the downside then would be that we'd
also fill the less-than-16-bit padding fields this way,
where hardware stores 0 (and where we are at risk of breaking
consumers which don't mask as necessary).
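
A minimal sketch of that upfront-memset() idea (the struct and values
here are made up for illustration, not the emulator's actual layout):

```c
#include <string.h>
#include <stdint.h>

/* Illustrative only: zeroing the whole save area before storing means
 * any field the instruction leaves unwritten reads back as a known 0
 * rather than leaking stale stack contents. */
struct env16 { uint16_t fcw, fsw, ftw, unwritten[5]; };

static void store_env(struct env16 *env)
{
    memset(env, 0, sizeof(*env));  /* defuse undefined fields up front */
    env->fcw = 0x037f;             /* then store only the defined ones */
    env->fsw = 0x0000;
    env->ftw = 0xffff;
}
```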

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 12:15:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 12:15:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYqIR-0006kS-3V; Wed, 13 May 2020 12:15:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WltO=63=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYqIQ-0006jw-0Q
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 12:15:34 +0000
X-Inumbo-ID: 7151536e-9513-11ea-ae69-bc764e2007e4
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7151536e-9513-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 12:15:33 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id d22so7392804lfm.11
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 05:15:33 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=L2W0uFkc6QuntFU2EHPcjPAwaJRKQKlwXVXdAgoHTP0=;
 b=fd1wzFqtuIvhYUrrKQwhIQ4+VgL0IRgAwDsUCefB8A2yZJVGSdB/gDpX0fC+Ea+uAk
 KXI+VJIyitEf6o3sxeVCTSgcx+p7ztx8Vkq7KJc4wfziu0r8tQgndRuddvR5CWfHv09k
 uJLsTxR6uR9l9YH7b9lGavpdaxr0ucccZrrNFc6QZg0JFAe2ngRbKpVuw0RkSJINQghu
 BK3LKCwJW5zb+6Xt2Ph064J5e+tXth4q7bJLgFrdlcljlV+qgdsY6HcM0BmztU9p9CRc
 dGtyfA1nSyP7yBYRBreYuD2oXLANtisiMkbd9kadEcJicXm5aLLelXDI2Kdd3rPWsc9o
 rcqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=L2W0uFkc6QuntFU2EHPcjPAwaJRKQKlwXVXdAgoHTP0=;
 b=j+a9G1nQ/FJNkakAieFfddfhHvhNZdHujoUR0cI3DLxaEG/Pn7Lz9rCjhbuEq48Bzt
 IdxebF4DHEQc7mvvGYgtBNQoDjNzp4ubdsZgSe93GefdPpKBBB0PSIn2YqhHy2436WEu
 pxLtWwU9H/ri5ilhWHWqGr3EdgQcUs3gn7f5k5Jwetx8cOUfqQBYGLre/auf+fji6ldd
 n2eJqQX2LWfMu18CGYdtlAvjYG3u/ii3N5PMqRK1H4kE3rk87nIRApi0ENfkJXr0hCaN
 OjVo2o6qn+EQqPLvmUQPVhAznlW7/QaEvowZh7cBEjD6s1dvzpg2kDOHmf87ohCF90WL
 qMZA==
X-Gm-Message-State: AOAM532aVpBdEsiNA5K8ondSun3zewqqmL18U7W7k7g95FJtzB5HlRmq
 ED1wfT9HwTbgvSulHjyGm6kbyi+9zpOsXba+mao=
X-Google-Smtp-Source: ABdhPJybBZnjHEZE4zXVStIICNBuWKF1Z3rZqHPHN2GJl9xNaPc9ph9fBC9UOXEcFIL/XZbvu3Mla1o/ZELtz9PdMys=
X-Received: by 2002:a19:7004:: with SMTP id h4mr16764878lfc.148.1589372132166; 
 Wed, 13 May 2020 05:15:32 -0700 (PDT)
MIME-Version: 1.0
References: <20200512191116.6851-1-andrew.cooper3@citrix.com>
 <CAKf6xpt-WLVOTaca_FncB4XX0PQ2ZbP6GFWQjinAYex=6mptRA@mail.gmail.com>
 <6b7871d8-903a-20df-4e19-9a6200094aa5@citrix.com>
In-Reply-To: <6b7871d8-903a-20df-4e19-9a6200094aa5@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 13 May 2020 08:15:19 -0400
Message-ID: <CAKf6xpv-5gy-eiMN49w99ZpeGzNH5R2x3YmwOR+ZZsnCzb_5Gg@mail.gmail.com>
Subject: Re: [PATCH] x86/build: Unilaterally disable -fcf-protection
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefan Bader <stefan.bader@canonical.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 13, 2020 at 7:01 AM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 13/05/2020 03:35, Jason Andryuk wrote:
> >
> > On Tue, May 12, 2020 at 3:11 PM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
> >> +# Xen doesn't support CET-IBT yet.  At a minimum, logic is required to
> >> +# enable it for supervisor use, but the Livepatch functionality needs
> >> +# to learn not to overwrite ENDBR64 instructions.
> > Is the problem that existing functions start with ENDBR64, but the
> > livepatch overwrites with a "real" instruction?
>
> We livepatch by creating a new complete copy of the function, and
> putting `jmp new` at the head of the old one.
>
> This means we don't need to patch every callsite and track every
> function pointer to the old function, and we can fully revert by
> replacing the 5 bytes which became `jmp new`.
>
> With CET-IBT in the mix, livepatch will have to learn to spot an ENDBR64
> instruction and leave it intact, patching instead the next 5 bytes, so
> an old function pointer still lands on the ENDBR64 instruction.
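
The placement rule described above can be sketched roughly like this
(illustrative only, not the actual livepatch code; the helper name is
invented): with CET-IBT the 5-byte `jmp new` must land after the 4-byte
ENDBR64 marker (f3 0f 1e fa) so old function pointers still hit a valid
landing pad; without one, the jump goes at the function head.

```c
#include <stddef.h>
#include <string.h>
#include <stdint.h>

/* ENDBR64 encoding: f3 0f 1e fa. */
static const uint8_t endbr64[4] = { 0xf3, 0x0f, 0x1e, 0xfa };

/* Offset at which the 5-byte `jmp new` would be written: past the
 * ENDBR64 landing pad if the function starts with one, else 0. */
static size_t jmp_patch_offset(const uint8_t *func, size_t len)
{
    if ( len >= sizeof(endbr64) && !memcmp(func, endbr64, sizeof(endbr64)) )
        return sizeof(endbr64);
    return 0;
}
```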

Ah, okay.  Thanks for the explanation.

-Jason


From xen-devel-bounces@lists.xenproject.org Wed May 13 12:16:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 12:16:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYqJ6-0006nI-D8; Wed, 13 May 2020 12:16:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYqJ4-0006nA-PE
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 12:16:14 +0000
X-Inumbo-ID: 899c59aa-9513-11ea-b9cf-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 899c59aa-9513-11ea-b9cf-bc764e2007e4;
 Wed, 13 May 2020 12:16:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589372174;
 h=from:to:cc:subject:date:message-id:mime-version;
 bh=BOnEEEyVJ3vq0TiTT8o5PI3JhI1X5+lNHauBuJWe2dY=;
 b=hAuSUAkGhBCVMas7fbhqlgWX8DfPeuIn48fWwdWaGOONLbvwiBuR42j8
 vQxoANPH7oK4hwT1C4g0xhttB8woD7UJlUVwPm0Ce+BHeQI7HnE0LljsT
 5Ch2irs9/1UFMjpihrQSs5tr0BOIR9EBCf/v+mszFc/gLx59A+8G2N7wI s=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: LL8J8BvYCVxKKQi6T+UAvew6GjLoIY5biqHFTcgf8U4hHdiddOx/EWQzueNSUKDYYheqwqj3gF
 csyiOGl/Eb4zg+87kRJ3Av6Uquj1sMgVFK/3sdZwId7nOlI1hOwhOZuDni7zUPr4EEEV+eY2w/
 kmHzqobMBvn7fFxZV5cm5+D3QM5qgBVva/lEXEM6fuFr3c8dgEaoaV6z3Ygnes7CldLah1KRRr
 +e5QMwJRGqB35dvim6PmQK9EBqdZKwvCREcz6UIkGrZ0gSNQGxF6UnW66AdMb+xVId4SA8YmI3
 ZQo=
X-SBRS: 2.7
X-MesageID: 17677973
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,387,1583211600"; d="scan'208";a="17677973"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] stubdom: Use matching quotes in error message
Date: Wed, 13 May 2020 13:15:54 +0100
Message-ID: <20200513121554.15239-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan
 Beulich <JBeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This prevents syntax highlighting from believing the rest of the file is a
string.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Jan Beulich <JBeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Wei Liu <wl@xen.org>
CC: Julien Grall <julien@xen.org>
---
 stubdom/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 8cf7131c6a..12aa211ac3 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -16,7 +16,7 @@ CFLAGS += -O1 -fno-omit-frame-pointer
 
 ifeq (,$(findstring clean,$(MAKECMDGOALS)))
   ifeq ($(wildcard $(MINI_OS)/Config.mk),)
-    $(error Please run `make mini-os-dir' in top-level directory)
+    $(error Please run 'make mini-os-dir' in top-level directory)
   endif
   include $(XEN_ROOT)/Config.mk
 endif
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 13 12:23:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 12:23:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYqQ7-0007i2-5P; Wed, 13 May 2020 12:23:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mb4t=63=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYqQ5-0007hx-Fj
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 12:23:29 +0000
X-Inumbo-ID: 8c71035a-9514-11ea-a36a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c71035a-9514-11ea-a36a-12813bfff9fa;
 Wed, 13 May 2020 12:23:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=tIgvbCx/xo8TJd4ZUL44gU21a0xS0sY7kmnccqVppuY=; b=GQ/icHFh1qzm0mYTC+n5iRu3c7
 /SZk+HDA++wcR898zL8pWWSE7MjUWdXs8ie0hJ09svUdhH5255QW7blxPbl0vcMWKc5VPMvwnuK9v
 ND6xXqV0iI/mIA01W8Mqxp+gkbw4c9MJ9wTCxvqWRCD9uTOEXqxWwD9qezrcLOJwKYtA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYqQ1-0005z1-KH; Wed, 13 May 2020 12:23:25 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYqQ1-0007pg-Cq; Wed, 13 May 2020 12:23:25 +0000
Subject: Re: [PATCH] stubdom: Use matching quotes in error message
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <20200513121554.15239-1-andrew.cooper3@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a34dea2a-75d9-5eb3-a668-b72306091c2f@xen.org>
Date: Wed, 13 May 2020 13:23:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200513121554.15239-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 13/05/2020 13:15, Andrew Cooper wrote:
> This prevents syntax highlighting from believing the rest of the file is a
> string.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> CC: George Dunlap <George.Dunlap@eu.citrix.com>
> CC: Ian Jackson <ian.jackson@citrix.com>
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> CC: Wei Liu <wl@xen.org>
> CC: Julien Grall <julien@xen.org>
> ---
>   stubdom/Makefile | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index 8cf7131c6a..12aa211ac3 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -16,7 +16,7 @@ CFLAGS += -O1 -fno-omit-frame-pointer
>   
>   ifeq (,$(findstring clean,$(MAKECMDGOALS)))
>     ifeq ($(wildcard $(MINI_OS)/Config.mk),)
> -    $(error Please run `make mini-os-dir' in top-level directory)
> +    $(error Please run 'make mini-os-dir' in top-level directory)
>     endif
>     include $(XEN_ROOT)/Config.mk
>   endif
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 13 13:04:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:04:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYr36-0002j9-Lx; Wed, 13 May 2020 13:03:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYr34-0002j3-Vs
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:03:47 +0000
X-Inumbo-ID: 2daf0154-951a-11ea-9887-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2daf0154-951a-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 13:03:46 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id f18so17671399lja.13
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 06:03:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=XOKqvYgS51zYyBhTlMbM5DoC7qOBC4b/bGpL7WiFAEQ=;
 b=mGJRf4YnFkQ1mpKs205pKlnb/dHaNcuIBLvuaDiH5qkMYXqY7BaKi8ye4tiu8f4i52
 Ff1T6L9TrECR3H+ssUt38sC92TEjbl+FG/E/LAC2bH//qkArV7Y+WOsh10VRrT7ZvnSQ
 fQxTUfkV3ZHIGpaUyWEtMTChpCx4dORmzM+ZTc5WrnAo37BfPQLF3O6d/+UpFLShFzh8
 jWW9qIsWIZpd7YlcLEtKuVAfWrfUwgjHoCA0muKlIWOY1gGoTnf1BSpr06oVm3UujAIh
 ACxN4llS6C9NXr99N1MZgVIcsGhY7ZltlBYvvQicHSAB7Uc6PSH/B8HpbFSrcmqXa0eL
 lFOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=XOKqvYgS51zYyBhTlMbM5DoC7qOBC4b/bGpL7WiFAEQ=;
 b=Qr0PxGF9hiFACMX08Zxx7mF1iUUFRqyKs6geG6Kkipqh9C9mPg+KibhX33UX5EY5Du
 Ei7wovcKTsrTMw9wljCTCwGp0Lz0N6NQb0c/P7lXycOQWgLutDozQXk+KqUDdgNWtMYa
 bqXbLh4/89KBaVlKs2OzA2EI/ilsckiPVTgXzOITglHAv8iq09oyh9Jj1RDsYwEb7jar
 96Tyr7oEIYsA01Cs/4/sTsYCzQticmJuAD13Q72N6faDitsHdjAGLnnA8IXf8QECl9ue
 P6bIDwGnRp6pI+Fmu8bGp7ezl4knO91yejCvHyn+a4RNfVE8/uBZU1fFcAMFNC3yg/20
 FyXg==
X-Gm-Message-State: AOAM532DA503lW2YkE44158QANjSadAjjiKzWyEytuptnqXsnRdqhhH7
 2Y06Yo+X68mKXqL7fYMf2Yei11ZptIbrkyHaDc0=
X-Google-Smtp-Source: ABdhPJzVAvcoO6m9+UZDWniMLTWgHfb34alX1dNieiUZY0nQN7N1WpDrFpmGtkYX3RYXBL3ljAA7pzKk9kUEexJ0ZLo=
X-Received: by 2002:a2e:958b:: with SMTP id w11mr2590090ljh.262.1589375025202; 
 Wed, 13 May 2020 06:03:45 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
 <a42395202aef85d983dd9db361c366a6d03e313f.1589330383.git.rosbrookn@ainfosec.com>
 <BAD53A57-6842-43E6-AA5B-6C42B7290D00@citrix.com>
In-Reply-To: <BAD53A57-6842-43E6-AA5B-6C42B7290D00@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Wed, 13 May 2020 09:03:32 -0400
Message-ID: <CAEBZRSfxAeZdbvkOQ+fkVso_6v1qVPih6AQm62NC=oyhkgsU0Q@mail.gmail.com>
Subject: Re: [PATCH v2 3/3] golang/xenlight: add necessary module/package
 documentation
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> One thing I forgot to mention about the README is the long lines — do you mind if I wrap those before checking it in?

I don't mind at all.

> > Also, add a copy of the LGPL (the same license used by libxl) to
> > tools/golang/xenlight. This is required for the package to be shown
> > on pkg.go.dev and added to the default module proxy, proxy.golang.org.
> >
> > Finally, add an entry for the xenlight package to SUPPORT.md.
> >
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Thanks!

-NR


From xen-devel-bounces@lists.xenproject.org Wed May 13 13:05:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYr4c-0002ni-0y; Wed, 13 May 2020 13:05:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hhv+=63=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYr4a-0002nY-VN
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:05:21 +0000
X-Inumbo-ID: 6597333e-951a-11ea-9887-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6597333e-951a-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 13:05:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589375120;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=GaXvdjYTq2Fy3KMfN2u5oM7RGylQLcWksh0x8ZmsTJs=;
 b=dHnIFxH29v+/bNZ6ixNthGAVRj/b/FhVgx6/hJjdudgfMXSYMbgG2wCV
 hXSkigP2betEk+bBynKTsdak9xQFmy2f8zche8b+p3c1hIoDtG+wF+E4x
 S5SYgMvt8p4Pn9pn25j3mUzkIai9xVEbW9nE4TNmtlfwtFeXYgqvuS0PU Y=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: fTjZsgcrsISY/Elv83KenEJ2SeJdj2822onl3n7GBdsF8bFIWAM/blyw6RVmSILCuumbOcZFve
 m1LhNPioCwwpNZZsbOrxhKIfTAy++Ktt1fDss6yKESJvTKLiaCmP5knQwZZFu+yZyeQ9JQArRg
 g5ei4O2Mw8f76vfJ+NsqLyrFOTQ31tkZ1JdMcVjODO4ZzyP1mXbOD7+Wd3JoKfRrmMo+v/q7Sc
 jZDLURmR9tFDXNtfBnUD3E818iCLHmMsmed+Vo8NS90tFds/zv04KXXG85p8nxu4rwx2vBcyVZ
 RkI=
X-SBRS: 2.7
X-MesageID: 17682590
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,387,1583211600"; d="scan'208";a="17682590"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH v2 3/3] golang/xenlight: add necessary module/package
 documentation
Thread-Topic: [PATCH v2 3/3] golang/xenlight: add necessary module/package
 documentation
Thread-Index: AQHWKMGZFErKT/Po40mgeLsLE6wA6Kil2zkA
Date: Wed, 13 May 2020 13:05:10 +0000
Message-ID: <AC49E6E7-BB64-4672-94DE-342E3E288C96@citrix.com>
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
 <a42395202aef85d983dd9db361c366a6d03e313f.1589330383.git.rosbrookn@ainfosec.com>
In-Reply-To: <a42395202aef85d983dd9db361c366a6d03e313f.1589330383.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <566FD78309915F418CD8E77EF69DD73E@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>, Ian
 Jackson <Ian.Jackson@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 13, 2020, at 1:58 AM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> Add a README and package comment giving a brief overview of the package.
> These also help pkg.go.dev generate better documentation.
> 
> Also, add a copy of the LGPL (the same license used by libxl) to
> tools/golang/xenlight. This is required for the package to be shown
> on pkg.go.dev and added to the default module proxy, proxy.golang.org.

OK, so didn’t notice this at first.  It looks like you read the comments at the top of libxl.c, noticed the comment about “...the special exception on linking described in file LICENSE”, looked around for such a file, and found it in tools/ocaml, and copied that one?

I had a chat with Ian Jackson on IRC (copied below for the record), and think that comment is simply in error.  We agreed that we should just copy https://www.gnu.org/licenses/old-licenses/lgpl-2.1.txt into the directory verbatim.

 -George

12:49:40           gwd | Hmm... libxl is supposedly under the LGPL, but I can't
                       | seem to find a copy of it in our source tree
12:50:01           gwd | Oh, there's one in tools/libxc
12:54:44           gwd | Diziet: The comment at the top of libxl.c refers to
                       | "...the special exception on linking described in file
                       | LICENSE."  But there's no such file in tools/libxl
12:55:22           gwd | The most plausible candidate for what it's talking about
                       | is tools/ocaml/LICENSE
12:59:43        Diziet | Weird
13:00:16        Diziet | That text is present in
                       | c5c301ac774d8cb36ccbc775bb7f1d7005d78a71 "libxenlight:
                       | initial libxenlight implementation under tools/libxl"
13:00:39        Diziet | There is no file LICENSE other than
                       | xen/tools/figlet/LICENSE
13:00:57        Diziet | Which is a copy of the Artistic License and not
                       | relevant.
13:01:18        Diziet | However,
13:01:43        Diziet | | Licensing Exceptions (the relaxed BSD-style license)
13:01:47        Diziet | ^ in the top-level COPYING
13:02:30        Diziet | I don't think this is an "exception on linking" and it
                       | doesn't state that it applies to libxl.
13:03:29        Diziet | The phrase "exception on linking" appears only in libxl
                       | in thta commit
13:04:05        Diziet | https://github.com/xapi-project/xen-api/blob/master/ocam
                       | l/idl/datamodel.ml
13:04:31        Diziet | https://github.com/xapi-project/xen-api/blob/master/LICE
                       | NSE
13:04:40        Diziet | I think that is probably what was intended to be
                       | referred to.
13:05:24        Diziet | And indeed that text appears in tools/ocaml
13:06:59        Diziet | This is an additional permission (borrowing GPLv3
                       | terminology) which a downstream can drop.
13:07:39        Diziet | Furthermore I don't think anyone's S-o-b for a libxl
                       | commit can be taken to have meant to include that
                       | exception.  So I conclude that no exception in fact
                       | applies and the text referring to it should be deleted.
13:07:53        Diziet | gwd: ^
13:13:29           --> | zhengc (~zhengc@180.110.50.11) has joined #xendevel
13:17:55           <-- | zhengc (~zhengc@180.110.50.11) has quit (Ping timeout:
                       | 258 seconds)
13:28:11           gwd | Diziet: OK -- this came up actually in the context of
                       | "[PATCH v2 3/3] golang/xenlight: add necessary
                       | module/package documentation"
13:28:48           gwd | golang has this automatic package proxy / cataloging
                       | system
13:29:24           gwd | To have it work the way he wants it, the golang/xenlight
                       | directory needs something that their system can
                       | recognize as a suitable license
13:30:20           gwd | v1 had the GPL, which was just wrong; it looks like he's
                       | then followed the same path I did, looking at libxl.c,
                       | trying to find this "exception" thing, and copying the
                       | file f/ tools/ocaml
13:30:58           gwd | I guess I should tell him to just copy
                       | https://www.gnu.org/licenses/old-licenses/lgpl-2.1.txt
13:31:37        Diziet | y


From xen-devel-bounces@lists.xenproject.org Wed May 13 13:22:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:22:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrLD-0004Vh-M0; Wed, 13 May 2020 13:22:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=X9El=63=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jYrLC-0004Vc-Ky
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:22:30 +0000
X-Inumbo-ID: cbeed798-951c-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbeed798-951c-11ea-b9cf-bc764e2007e4;
 Wed, 13 May 2020 13:22:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=T4x6RYWHbhsKq1zsNwHEDFVquU4sb5kWaB59Q30E8jQ=; b=uzlch76Eh+Vz6cJYK3xUGEtGnt
 AeBfJ6xUpfh5Y1qrni35T4gXkOsP403XENCLsJhEmt+YL6HnO6RFtYxsIoYJxISo2DUsBwPXpnvML
 zeKcJwIJ7Bk9TLRH4Hw8IoRa6dpHiKMz61Jo/TuSW+KAuF6Rx1KMMSNn5v3sqzGqb77Q=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jYrLB-0007EQ-TT; Wed, 13 May 2020 13:22:29 +0000
Received: from 44.142.6.51.dyn.plus.net ([51.6.142.44] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jYrLB-0004L8-J9; Wed, 13 May 2020 13:22:29 +0000
Date: Wed, 13 May 2020 14:22:26 +0100
From: Wei Liu <wl@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] tools/libxc: Reduce feature handling complexity in
 xc_cpuid_apply_policy()
Message-ID: <20200513132226.mwpna334sbunbqj4@debian>
References: <20200303182326.16739-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200303182326.16739-1-andrew.cooper3@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Mar 03, 2020 at 06:23:26PM +0000, Andrew Cooper wrote:
> xc_cpuid_apply_policy() is gaining extra parameters to untangle CPUID
> complexity in Xen.  While an improvement in general, it does have the
> unfortunate side effect of duplicating some settings across multiple
> parameters.
> 
> Rearrange the logic to only consider 'pae' if no explicit featureset is
> provided.  This reduces the complexity for callers who have already provided a
> pae setting in the featureset.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Wei Liu <wl@xen.org>
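
For readers following the thread: the rearrangement in the commit message is a precedence rule, where an explicit featureset wins and the standalone 'pae' flag is only consulted as a fallback. A minimal sketch of that shape in C (the type, function, and bit position below are hypothetical stand-ins, not libxc's real API):

```c
#include <stdbool.h>

/* Hypothetical stand-ins for the real libxc types and bit layout. */
#define FEAT_PAE_BIT 0x1u

struct policy { bool pae; };

/*
 * If the caller supplied an explicit featureset, 'pae' is already encoded
 * there, so the separate parameter is ignored rather than applied twice.
 */
void apply_policy(struct policy *p, const unsigned int *featureset, bool pae)
{
    if (featureset) {
        p->pae = *featureset & FEAT_PAE_BIT;
        return;
    }
    p->pae = pae; /* legacy path: no featureset provided */
}
```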


From xen-devel-bounces@lists.xenproject.org Wed May 13 13:24:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:24:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrMq-0004bh-13; Wed, 13 May 2020 13:24:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYrMp-0004ba-Dl
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:24:11 +0000
X-Inumbo-ID: 06fc3330-951d-11ea-a375-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06fc3330-951d-11ea-a375-12813bfff9fa;
 Wed, 13 May 2020 13:24:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 66C71ACA1;
 Wed, 13 May 2020 13:24:11 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] x86emul: support FXSAVE/FXRSTOR
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <ea1db2c5-3dd7-f1c8-c051-e39f0dffc94e@suse.com>
 <4f0da795-a148-e1f3-bd97-caeb84d702cb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <017ea483-cda8-7dec-883d-b877e3965b94@suse.com>
Date: Wed, 13 May 2020 15:24:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <4f0da795-a148-e1f3-bd97-caeb84d702cb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 21:31, Andrew Cooper wrote:
> On 05/05/2020 09:16, Jan Beulich wrote:
>> @@ -8125,6 +8154,47 @@ x86_emulate(
>>      case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
>>          switch ( modrm_reg & 7 )
>>          {
>> +#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
>> +    !defined(X86EMUL_NO_SIMD)
>> +        case 0: /* fxsave */
>> +        case 1: /* fxrstor */
>> +            generate_exception_if(vex.pfx, EXC_UD);
>> +            vcpu_must_have(fxsr);
>> +            generate_exception_if(ea.type != OP_MEM, EXC_UD);
>> +            generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16,
>> +                                              ctxt, ops),
>> +                                  EXC_GP, 0);
>> +            fail_if(!ops->blk);
>> +            op_bytes =
>> +#ifdef __x86_64__
>> +                !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) :
>> +#endif
>> +                sizeof(struct x86_fxsr);
>> +            if ( amd_like(ctxt) )
>> +            {
>> +                if ( !ops->read_cr ||
>> +                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
>> +                    cr4 = X86_CR4_OSFXSR;
> 
> Why do we want to assume OSFXSR in the case of a read_cr() failure,
> rather than bailing on the entire instruction?

I prefer to assume "normal" operation over failing in such
cases. We have a few similar examples elsewhere. I'll add
a comment to this effect.

>> @@ -11819,6 +11891,77 @@ int x86_emul_blk(
>>  
>>  #endif /* X86EMUL_NO_FPU */
>>  
>> +#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
>> +    !defined(X86EMUL_NO_SIMD)
>> +
>> +    case blk_fxrstor:
>> +    {
>> +        struct x86_fxsr *fxsr = FXSAVE_AREA;
>> +
>> +        ASSERT(!data);
>> +        ASSERT(bytes == sizeof(*fxsr));
>> +        ASSERT(state->op_bytes <= bytes);
>> +
>> +        if ( state->op_bytes < sizeof(*fxsr) )
>> +        {
>> +            if ( state->rex_prefix & REX_W )
>> +            {
>> +                /*
>> +                 * The only way to force fxsaveq on a wide range of gas
>> +                 * versions. On older versions the rex64 prefix works only if
>> +                 * we force an addressing mode that doesn't require extended
>> +                 * registers.
>> +                 */
>> +                asm volatile ( ".byte 0x48; fxsave (%1)"
>> +                               : "=m" (*fxsr) : "R" (fxsr) );
>> +            }
>> +            else
>> +                asm volatile ( "fxsave %0" : "=m" (*fxsr) );
>> +        }
>> +
>> +        memcpy(fxsr, ptr, min(state->op_bytes,
>> +                              (unsigned int)offsetof(struct x86_fxsr, _)));
>> +        memset(fxsr->_, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, _));
> 
> I'm completely lost trying to follow what's going on here.  Why are we
> constructing something different to what the guest gave us?

This part of the structure may not get written by FXSAVE. Hence
I'd prefer to assume it has unknown contents (which we shouldn't
leak) over assuming this would never get written (and hence
remain zeroed). Furthermore we mean to pass this to FXRSTOR,
which we know can raise #GP in principle. While this is a legacy
insns and hence unlikely to change in behavior, it seems more
safe to have well known values in at least the reserved range.

I'll add an abbreviated variant of this as a comment.
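
The sanitisation Jan describes amounts to: copy at most the guest-supplied architectural prefix, then overwrite the tail with fixed values so nothing stale is handed to FXRSTOR. A sketch of the pattern, with a simplified layout standing in for the real struct x86_fxsr (sizes and field names here are assumptions for illustration):

```c
#include <string.h>

/*
 * Simplified stand-in for struct x86_fxsr: an architecturally defined
 * region followed by bytes that FXSAVE may leave untouched.
 */
struct fxsr_image {
    unsigned char arch[416];
    unsigned char reserved[96];
};

/*
 * Copy at most the architectural prefix from the guest-provided data,
 * then zero the reserved tail so the buffer never carries stale contents.
 */
void sanitize_image(struct fxsr_image *img, const void *guest, size_t n)
{
    if (n > sizeof(img->arch))
        n = sizeof(img->arch);

    memcpy(img, guest, n);
    memset(img->reserved, 0, sizeof(img->reserved));
}
```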

>> +
>> +        generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, EXC_GP, 0);
>> +
>> +        if ( state->rex_prefix & REX_W )
>> +        {
>> +            /* See above for why operand/constraints are this way. */
>> +            asm volatile ( ".byte 0x48; fxrstor (%0)"
>> +                           :: "R" (fxsr), "m" (*fxsr) );
>> +        }
>> +        else
>> +            asm volatile ( "fxrstor %0" :: "m" (*fxsr) );
>> +        break;
>> +    }
>> +
>> +    case blk_fxsave:
>> +    {
>> +        struct x86_fxsr *fxsr = FXSAVE_AREA;
>> +
>> +        ASSERT(!data);
>> +        ASSERT(bytes == sizeof(*fxsr));
>> +        ASSERT(state->op_bytes <= bytes);
>> +
>> +        if ( state->rex_prefix & REX_W )
>> +        {
>> +            /* See above for why operand/constraint are this way. */
>> +            asm volatile ( ".byte 0x48; fxsave (%0)"
>> +                           :: "R" (state->op_bytes < sizeof(*fxsr) ? fxsr : ptr)
>> +                           : "memory" );
>> +        }
>> +        else
>> +            asm volatile ( "fxsave (%0)"
>> +                           :: "r" (state->op_bytes < sizeof(*fxsr) ? fxsr : ptr)
>> +                           : "memory" );
>> +        if ( state->op_bytes < sizeof(*fxsr) )
>> +            memcpy(ptr, fxsr, state->op_bytes);
> 
> I think this logic would be clearer to follow with:
> 
> void *buf = state->op_bytes < sizeof(*fxsr) ? fxsr : ptr;
> 
> ...
> 
> if ( buf != ptr )
>     memcpy(ptr, fxsr, state->op_bytes);
> 
> This more clearly highlights the "we either fxsave'd straight into the
> destination pointer, or into a local buffer if we only want a subset"
> property.

Ah, yes, and by having buf (or really repurposed fxsr) have
proper type I can then also avoid the memory clobbers, making
the asm()s more similar to the ones used for FXRSTOR emulation.
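
The shape Andrew suggests, saving either straight into the destination or through a scratch area when only a subset of the image is wanted, can be sketched like this (fake_fxsave stands in for the real instruction; all names and the 512-byte size are illustrative):

```c
#include <string.h>

#define FXSR_FULL 512 /* full FXSAVE image size */

static unsigned char scratch[FXSR_FULL];

/* Stand-in for the fxsave instruction: always writes the full image. */
static void fake_fxsave(void *dst)
{
    memset(dst, 0xab, FXSR_FULL);
}

/*
 * Save directly into 'ptr' when it can hold the full image; otherwise
 * bounce through 'scratch' and copy back only the bytes asked for.
 */
void save_image(void *ptr, size_t op_bytes)
{
    void *buf = op_bytes < FXSR_FULL ? (void *)scratch : ptr;

    fake_fxsave(buf);

    if (buf != ptr)
        memcpy(ptr, scratch, op_bytes);
}
```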

>> --- a/xen/arch/x86/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate.c
>> @@ -42,6 +42,8 @@
>>      }                                                      \
>>  })
>>  
>> +#define FXSAVE_AREA current->arch.fpu_ctxt
> 
> How safe is this?  Don't we already use this buffer to recover the old
> state in the case of an exception?

For a memory write fault after having updated register state
already, yes. But that can't be the case here. Nevertheless
forcing me to look at this again turned up a bug: We need to
set state->fpu_ctrl in order to keep {,hvmemul_}put_fpu()
from trying to replace FIP/FOP/FDP.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 13:26:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrOh-0004iU-DP; Wed, 13 May 2020 13:26:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYrOg-0004iN-Ei
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:26:06 +0000
X-Inumbo-ID: 4c2e3fac-951d-11ea-b07b-bc764e2007e4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c2e3fac-951d-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 13:26:05 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id g4so17791407ljl.2
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 06:26:05 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1589330383.git.rosbrookn@ainfosec.com>
 <a42395202aef85d983dd9db361c366a6d03e313f.1589330383.git.rosbrookn@ainfosec.com>
 <AC49E6E7-BB64-4672-94DE-342E3E288C96@citrix.com>
In-Reply-To: <AC49E6E7-BB64-4672-94DE-342E3E288C96@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Wed, 13 May 2020 09:25:52 -0400
Message-ID: <CAEBZRSd1z4RLRzv=drZ=WnR+ZsWTA1u+n4QmbdOZwzmb-+dSrg@mail.gmail.com>
Subject: Re: [PATCH v2 3/3] golang/xenlight: add necessary module/package
 documentation
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> OK, so I didn’t notice this at first.  It looks like you read the
> comments at the top of libxl.c, noticed the comment about “...the
> special exception on linking described in file LICENSE”, looked around
> for such a file, and found it in tools/ocaml, and copied that one?
>
> I had a chat with Ian Jackson on IRC (copied below for the record), and
> think that comment is simply in error.  We agreed that we should just
> copy https://www.gnu.org/licenses/old-licenses/lgpl-2.1.txt into the
> directory verbatim.

Yeah that's what I did. I'll send a corrected patch shortly.

Thanks,
-NR


From xen-devel-bounces@lists.xenproject.org Wed May 13 13:33:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:33:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrVj-0005aZ-6R; Wed, 13 May 2020 13:33:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hVCK=63=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1jYrVg-0005aU-RI
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:33:21 +0000
X-Inumbo-ID: 4db38aca-951e-11ea-b07b-bc764e2007e4
Received: from mailout2.w1.samsung.com (unknown [210.118.77.12])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4db38aca-951e-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 13:33:17 +0000 (UTC)
Received: from eucas1p2.samsung.com (unknown [182.198.249.207])
 by mailout2.w1.samsung.com (KnoxPortal) with ESMTP id
 20200513133317euoutp02bdb7dc50e4fba9744f16345f5d49b8e1~OmcaoleoP0033500335euoutp02a
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 13:33:17 +0000 (GMT)
Received: from eusmges3new.samsung.com (unknown [203.254.199.245]) by
 eucas1p1.samsung.com (KnoxPortal) with ESMTP id
 20200513133316eucas1p14f8338b43b0ca57538811a5d90a84071~OmcaVfN4L2633026330eucas1p1V;
 Wed, 13 May 2020 13:33:16 +0000 (GMT)
Received: from eucas1p1.samsung.com ( [182.198.249.206]) by
 eusmges3new.samsung.com (EUCPMTA) with SMTP id 2F.D8.60698.C17FBBE5; Wed, 13
 May 2020 14:33:16 +0100 (BST)
Received: from eusmtrp1.samsung.com (unknown [182.198.249.138]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTPA id
 20200513133316eucas1p2ad01d27ea4388cb50424bcf112d710ef~OmcaFLccM0359403594eucas1p2m;
 Wed, 13 May 2020 13:33:16 +0000 (GMT)
Received: from eusmgms2.samsung.com (unknown [182.198.249.180]) by
 eusmtrp1.samsung.com (KnoxPortal) with ESMTP id
 20200513133316eusmtrp1137b1bc87b13093a87f9bf46333dbe6c~OmcaEjEe81050610506eusmtrp1V;
 Wed, 13 May 2020 13:33:16 +0000 (GMT)
X-AuditID: cbfec7f5-a29ff7000001ed1a-ed-5ebbf71c62d2
Received: from eusmtip1.samsung.com ( [203.254.199.221]) by
 eusmgms2.samsung.com (EUCPMTA) with SMTP id 75.47.07950.C17FBBE5; Wed, 13
 May 2020 14:33:16 +0100 (BST)
Received: from AMDC2765.digital.local (unknown [106.120.51.73]) by
 eusmtip1.samsung.com (KnoxPortal) with ESMTPA id
 20200513133315eusmtip11bd7d51f6d64aee3d21ad3edfd6dd66a~OmcZYTFoT0693306933eusmtip1G;
 Wed, 13 May 2020 13:33:15 +0000 (GMT)
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 27/38] xen: gntdev: fix common struct sg_table related
 issues
Date: Wed, 13 May 2020 15:32:34 +0200
Message-Id: <20200513133245.6408-27-m.szyprowski@samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200513133245.6408-1-m.szyprowski@samsung.com>
X-CMS-MailID: 20200513133316eucas1p2ad01d27ea4388cb50424bcf112d710ef
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200513133316eucas1p2ad01d27ea4388cb50424bcf112d710ef
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200513133316eucas1p2ad01d27ea4388cb50424bcf112d710ef
References: <20200513132114.6046-1-m.szyprowski@samsung.com>
 <20200513133245.6408-1-m.szyprowski@samsung.com>
 <CGME20200513133316eucas1p2ad01d27ea4388cb50424bcf112d710ef@eucas1p2.samsung.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
 linux-arm-kernel@lists.infradead.org,
 Marek Szyprowski <m.szyprowski@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Documentation/DMA-API-HOWTO.txt states that dma_map_sg() returns the
number of entries created in the DMA address space. However, subsequent
calls to dma_sync_sg_for_{device,cpu}() and dma_unmap_sg() must be made
with the original number of entries passed to dma_map_sg().

struct sg_table is a common structure used for describing a non-contiguous
memory buffer, used commonly in the DRM and graphics subsystems. It
consists of a scatterlist with memory pages and DMA addresses (sgl entry),
as well as the number of scatterlist entries: CPU pages (orig_nents entry)
and DMA mapped pages (nents entry).

It turned out that misusing the nents and orig_nents entries was a common
mistake: DMA-mapping functions were called with a wrong number of entries,
or the number of mapped entries returned by dma_map_sg() was ignored.

To avoid such issues, let's use common dma-mapping wrappers operating
directly on struct sg_table objects, and use scatterlist page iterators
where possible. This almost always hides references to the nents and
orig_nents entries, making the code robust, easier to follow and
copy/paste safe.
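The invariant that the wrappers maintain can be sketched with a minimal,
self-contained model (struct sg_table_model, map_sg_model and the
coalescing behaviour are illustrative stand-ins, not the kernel API):

```c
/* Minimal model of the two entry counts carried by struct sg_table. */
struct sg_table_model {
    unsigned int nents;       /* entries after DMA mapping (may be fewer) */
    unsigned int orig_nents;  /* entries as created on the CPU side */
};

/* Stand-in for dma_map_sg(): an IOMMU may coalesce adjacent entries. */
static unsigned int map_sg_model(unsigned int cpu_entries)
{
    return cpu_entries > 1 ? cpu_entries - 1 : cpu_entries;
}

/* Wrapper in the spirit of dma_map_sgtable(): returns 0 on success. */
static int map_sgtable_model(struct sg_table_model *sgt)
{
    unsigned int n = map_sg_model(sgt->orig_nents);

    if (!n)
        return -1;
    sgt->nents = n;           /* mapped count, recorded separately */
    return 0;
}

/* Unmap/sync must use the original CPU-side count, not nents. */
static unsigned int unmap_count(const struct sg_table_model *sgt)
{
    return sgt->orig_nents;
}
```

Because the wrapper records the mapped count itself and callers never
pass an entry count explicitly, the nents/orig_nents mix-up cannot
happen at the call sites.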

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
---
For more information, see '[PATCH v5 00/38] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread:
https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprowski@samsung.com/T/
---
 drivers/xen/gntdev-dmabuf.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/gntdev-dmabuf.c b/drivers/xen/gntdev-dmabuf.c
index 75d3bb9..ba6cad8 100644
--- a/drivers/xen/gntdev-dmabuf.c
+++ b/drivers/xen/gntdev-dmabuf.c
@@ -247,10 +247,9 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 
 		if (sgt) {
 			if (gntdev_dmabuf_attach->dir != DMA_NONE)
-				dma_unmap_sg_attrs(attach->dev, sgt->sgl,
-						   sgt->nents,
-						   gntdev_dmabuf_attach->dir,
-						   DMA_ATTR_SKIP_CPU_SYNC);
+				dma_unmap_sgtable(attach->dev, sgt,
+						  gntdev_dmabuf_attach->dir,
+						  DMA_ATTR_SKIP_CPU_SYNC);
 			sg_free_table(sgt);
 		}
 
@@ -288,8 +287,8 @@ static void dmabuf_exp_ops_detach(struct dma_buf *dma_buf,
 	sgt = dmabuf_pages_to_sgt(gntdev_dmabuf->pages,
 				  gntdev_dmabuf->nr_pages);
 	if (!IS_ERR(sgt)) {
-		if (!dma_map_sg_attrs(attach->dev, sgt->sgl, sgt->nents, dir,
-				      DMA_ATTR_SKIP_CPU_SYNC)) {
+		if (dma_map_sgtable(attach->dev, sgt, dir,
+				    DMA_ATTR_SKIP_CPU_SYNC)) {
 			sg_free_table(sgt);
 			kfree(sgt);
 			sgt = ERR_PTR(-ENOMEM);
@@ -625,7 +624,7 @@ static struct gntdev_dmabuf *dmabuf_imp_alloc_storage(int count)
 
 	/* Now convert sgt to array of pages and check for page validity. */
 	i = 0;
-	for_each_sg_page(sgt->sgl, &sg_iter, sgt->nents, 0) {
+	for_each_sgtable_page(sgt, &sg_iter, 0) {
 		struct page *page = sg_page_iter_page(&sg_iter);
 		/*
 		 * Check if page is valid: this can happen if we are given
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 13:35:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:35:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrXW-0005gJ-JN; Wed, 13 May 2020 13:35:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYrXV-0005gE-Qp
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:35:13 +0000
X-Inumbo-ID: 915b0e24-951e-11ea-a377-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 915b0e24-951e-11ea-a377-12813bfff9fa;
 Wed, 13 May 2020 13:35:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 126CCB004;
 Wed, 13 May 2020 13:35:13 +0000 (UTC)
Subject: Re: [PATCH v8 12/12] x86/HVM: don't needlessly intercept
 APERF/MPERF/TSC MSR reads
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <e92b6c1a-b2c3-13e7-116c-4772c851dd0b@suse.com>
 <81cc74ce-0a53-d5cd-3513-af3af6382815@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <05203042-662c-3dc4-15e6-bc45587fbeec@suse.com>
Date: Wed, 13 May 2020 15:35:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <81cc74ce-0a53-d5cd-3513-af3af6382815@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 23:04, Andrew Cooper wrote:
> On 05/05/2020 09:20, Jan Beulich wrote:
>> If the hardware can handle accesses, we should allow it to do so. This
>> way we can expose EFRO to HVM guests,
> 
> I'm reminded now of the conversation I'm sure we've had before, although
> I have a feeling it was on IRC.
> 
> APERF/MPERF (including the EFRO interface on AMD) are free running
> counters but only in C0.  The raw values are not synchronised across
> threads.
> 
> A vCPU which gets rescheduled has a 50% chance of finding one or both
> values going backwards, and a 100% chance of a totally bogus calculation.
> 
> There is no point exposing APERF/MPERF to guests.  It can only be used
> safely in hypervisor context, on AMD hardware with a CLGI/STGI region,
> or on Intel hardware in an NMI handler if you trust that a machine check
> isn't going to ruin your day.
> 
> VMs have no way of achieving the sampling requirements, and have a fair
> chance of getting a plausible-but-wrong answer.
> 
> The only possibility to do this safely is on a VM which is pinned to a
> pCPU for its lifetime, but even then I'm unconvinced of the correctness.
> 
> I don't think we should be exposing this functionality to guests at all,
> although I might be persuaded if someone wanting to use it in a VM can
> provide a concrete justification of why the above problems won't get in
> their way.
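To illustrate the hazard being argued here: tools estimate effective
frequency as base_mhz * delta(APERF) / delta(MPERF) between two samples,
and mixing samples taken on different, unsynchronised pCPUs yields a
meaningless or even negative result. A minimal sketch with made-up
counter values (not Xen code):

```c
#include <stdint.h>

/*
 * Effective-frequency estimate from two APERF/MPERF samples:
 * base_mhz scaled by the ratio of the counter deltas.
 */
static int64_t effective_mhz(int64_t base_mhz,
                             uint64_t aperf0, uint64_t mperf0,
                             uint64_t aperf1, uint64_t mperf1)
{
    int64_t da = (int64_t)(aperf1 - aperf0);
    int64_t dm = (int64_t)(mperf1 - mperf0);

    return dm ? base_mhz * da / dm : 0;
}
```

With both samples taken on one pCPU the ratio is meaningful; if the vCPU
migrates between samples, the deltas relate counters of two different
cores and the "frequency" can come out negative.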

Am I getting it right then that here you're reverting what you've said
on patch 10: "I'm tempted to suggest that we offer EFRO on Intel ..."?
And hence your request is to drop both that and this patch?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 13:56:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrrt-0007QO-NU; Wed, 13 May 2020 13:56:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYrrs-0007Q3-Il
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:56:16 +0000
X-Inumbo-ID: 8230159a-9521-11ea-a380-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8230159a-9521-11ea-a380-12813bfff9fa;
 Wed, 13 May 2020 13:56:14 +0000 (UTC)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/build: Unilaterally disable -fcf-protection
Date: Wed, 13 May 2020 14:55:52 +0100
Message-ID: <20200513135552.24329-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200513135552.24329-1-andrew.cooper3@citrix.com>
References: <20200513135552.24329-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Xen doesn't support CET-IBT yet.  At a minimum, logic is required to enable it
for supervisor use, but the livepatch functionality needs to learn not to
overwrite ENDBR64 instructions.

Furthermore, Ubuntu enables -fcf-protection by default, and ships a buggy
version of GCC 9 which objects to it in combination with
-mindirect-branch=thunk-extern (fixed in GCC 10 and 9.4).

Various objects (the Xen boot path, the Rombios 32 stubs) require .text to
be at the beginning of the object.  These paths explode when
.note.gnu.property gets put ahead of .text and we end up executing the
notes data.

Disable -fcf-protection for all embedded objects.

Reported-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Jason Andryuk <jandryuk@gmail.com>
CC: Stefan Bader <stefan.bader@canonical.com>

v2:
 * Fix Rombios 32 stubs as well.
---
 Config.mk | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Config.mk b/Config.mk
index b0f16680f3..7d556aed30 100644
--- a/Config.mk
+++ b/Config.mk
@@ -205,6 +205,7 @@ APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
 
 EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
 EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
+EMBEDDED_EXTRA_CFLAGS += -fcf-protection=none
 
 XEN_EXTFILES_URL ?= http://xenbits.xen.org/xen-extfiles
 # All the files at that location were downloaded from elsewhere on
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 13 13:56:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:56:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrrr-0007Px-CV; Wed, 13 May 2020 13:56:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYrrq-0007Ps-CA
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:56:14 +0000
X-Inumbo-ID: 819c9a90-9521-11ea-9887-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 819c9a90-9521-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 13:56:13 +0000 (UTC)
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] build: Fix build with Ubuntu
Date: Wed, 13 May 2020 14:55:50 +0100
Message-ID: <20200513135552.24329-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason
 Andryuk <jandryuk@gmail.com>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefan Bader <stefan.bader@canonical.com>, Jan Beulich <JBeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This supersedes "x86/build: Unilaterally disable -fcf-protection"

Andrew Cooper (2):
  x86/build: move -fno-asynchronous-unwind-tables into EMBEDDED_EXTRA_CFLAGS
  x86/build: Unilaterally disable -fcf-protection

 Config.mk                            | 3 ++-
 tools/tests/x86_emulator/testcase.mk | 2 +-
 xen/arch/x86/arch.mk                 | 2 +-
 xen/arch/x86/boot/build32.mk         | 2 +-
 4 files changed, 5 insertions(+), 4 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 13 13:56:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 13:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYrry-0007R1-Vt; Wed, 13 May 2020 13:56:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYrrx-0007Qp-Iw
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 13:56:21 +0000
X-Inumbo-ID: 81f0bc89-9521-11ea-a380-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81f0bc89-9521-11ea-a380-12813bfff9fa;
 Wed, 13 May 2020 13:56:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589378175;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=yfsuuBspMmNC3kof03e5m6UQEyNDgBOVgEXM/dKmV24=;
 b=hJy2tvJvOMHdG0iiAU2z6tWCSXW0SIdk3cGuJzWU33eHZsNfwtdJbONd
 dwjoueCo/W7D6g4xpV79cBlHYYrgFFlyAICVNTWHE9GKvfvJ8e/QWOoAZ
 1KOJo6wN7n8rhZXHldDOysF9GNY6G27LkbSFI1g2E/2OTVPWnH6ZswrfT o=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: JRMroCWzhFP36YLcddszmc8gqNxC9Wj3k0V3CzzK2SCMCzz1nBMsO7hP+fg5uNVEBuKl4Alg2z
 i2RefznLFaoW6Xr4iVgjISbV5aH5V6oIJ4IrsmTZ8xicSiX9/b3rfVssdbwhrtoMGEDsvBqD/t
 davD7CETOgkRXPdG8LWQErDbxemRXJA/dpEEyMorf4FZhXn/5jWYVDarhLWqI9QaCUOO/iSG8r
 /NP4kM6jUNYcXmMBf21hOo/7ZoCoMimaNgQA0s+8f3Y4B6RJTsU+u+gPf2ThFzOFmgMvJL3nU2
 5lY=
X-SBRS: 2.7
X-MesageID: 17688176
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,387,1583211600"; d="scan'208";a="17688176"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/build: move -fno-asynchronous-unwind-tables into
 EMBEDDED_EXTRA_CFLAGS
Date: Wed, 13 May 2020 14:55:51 +0100
Message-ID: <20200513135552.24329-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200513135552.24329-1-andrew.cooper3@citrix.com>
References: <20200513135552.24329-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <JBeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Users of EMBEDDED_EXTRA_CFLAGS already use -fno-asynchronous-unwind-tables, or
ought to.  This shrinks the size of the rombios 32bit stubs in guest memory.
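The effect can be seen on any trivial object (a sketch, not part of the patch; assumes gcc and readelf are available on an x86-64 host, where asynchronous unwind tables are on by default — the file names below are made up):

```shell
# Build one object with default flags and one with
# -fno-asynchronous-unwind-tables, then compare the sections emitted.
cat > uwtab_demo.c <<'EOF'
int add(int a, int b) { return a + b; }
EOF

gcc -c -O2 uwtab_demo.c -o with_tables.o
gcc -c -O2 -fno-asynchronous-unwind-tables uwtab_demo.c -o without_tables.o

# .eh_frame (the unwind table) should appear only in the default build.
with_count=$(readelf -S with_tables.o | grep -c 'eh_frame')
without_count=$(readelf -S without_tables.o | grep -c 'eh_frame' || true)
echo "with=$with_count without=$without_count"
```

For plain C code that never unwinds (like the rombios stubs), dropping the table is pure size savings.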

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: George Dunlap <George.Dunlap@eu.citrix.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien@xen.org>
---
 Config.mk                            | 2 +-
 tools/tests/x86_emulator/testcase.mk | 2 +-
 xen/arch/x86/arch.mk                 | 2 +-
 xen/arch/x86/boot/build32.mk         | 2 +-
 4 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/Config.mk b/Config.mk
index 3621162ae4..b0f16680f3 100644
--- a/Config.mk
+++ b/Config.mk
@@ -204,7 +204,7 @@ APPEND_LDFLAGS += $(foreach i, $(APPEND_LIB), -L$(i))
 APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
 
 EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
-EMBEDDED_EXTRA_CFLAGS += -fno-exceptions
+EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
 
 XEN_EXTFILES_URL ?= http://xenbits.xen.org/xen-extfiles
 # All the files at that location were downloaded from elsewhere on
diff --git a/tools/tests/x86_emulator/testcase.mk b/tools/tests/x86_emulator/testcase.mk
index a565d15524..dafeb6caf7 100644
--- a/tools/tests/x86_emulator/testcase.mk
+++ b/tools/tests/x86_emulator/testcase.mk
@@ -4,7 +4,7 @@ include $(XEN_ROOT)/tools/Rules.mk
 
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
-CFLAGS += -fno-builtin -fno-asynchronous-unwind-tables -g0 $($(TESTCASE)-cflags)
+CFLAGS += -fno-builtin -g0 $($(TESTCASE)-cflags)
 
 .PHONY: all
 all: $(TESTCASE).bin
diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
index 2a51553edb..62b7c97007 100644
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -56,7 +56,7 @@ $(call as-option-add,CFLAGS,CC,\
 $(call as-option-add,CFLAGS,CC,\
     ".L1: .L2: .nops (.L2 - .L1)$$(comma)9",-DHAVE_AS_NOPS_DIRECTIVE)
 
-CFLAGS += -mno-red-zone -fpic -fno-asynchronous-unwind-tables
+CFLAGS += -mno-red-zone -fpic
 
 # Xen doesn't use SSE internally.  If the compiler supports it, also skip the
 # SSE setup for variadic function calls.
diff --git a/xen/arch/x86/boot/build32.mk b/xen/arch/x86/boot/build32.mk
index 48c7407c00..5851ebff5f 100644
--- a/xen/arch/x86/boot/build32.mk
+++ b/xen/arch/x86/boot/build32.mk
@@ -4,7 +4,7 @@ include $(XEN_ROOT)/Config.mk
 
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 
-CFLAGS += -Werror -fno-asynchronous-unwind-tables -fno-builtin -g0 -msoft-float
+CFLAGS += -Werror -fno-builtin -g0 -msoft-float
 CFLAGS += -I$(XEN_ROOT)/xen/include
 CFLAGS := $(filter-out -flto,$(CFLAGS)) 
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 13 14:11:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:11:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYs6K-0000sc-8H; Wed, 13 May 2020 14:11:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYs6J-0000sX-8Z
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:11:11 +0000
X-Inumbo-ID: 97a5c9cc-9523-11ea-a383-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97a5c9cc-9523-11ea-a383-12813bfff9fa;
 Wed, 13 May 2020 14:11:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 96891AD10;
 Wed, 13 May 2020 14:11:10 +0000 (UTC)
Subject: Re: [PATCH] x86/build: move -fno-asynchronous-unwind-tables into
 EMBEDDED_EXTRA_CFLAGS
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20200513135552.24329-1-andrew.cooper3@citrix.com>
 <20200513135552.24329-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88c734e9-fd99-c157-ae9c-0900157bf1b6@suse.com>
Date: Wed, 13 May 2020 16:11:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200513135552.24329-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.05.2020 15:55, Andrew Cooper wrote:
> Users of EMBEDDED_EXTRA_CFLAGS already use -fno-asynchronous-unwind-tables, or
> ought to.

It's not really well defined what these flags are supposed to be used
for (and where they're not supposed to be used). I notice in particular
a use in stubdom/Makefile, and I'm unsure whether that indeed wants
this adjustment. Therefore ...

>  This shrinks the size of the rombios 32bit stubs in guest memory.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with the request that Samuel also ack (or otherwise) the change
from a stubdom perspective.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 14:13:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:13:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYs8a-00010E-LM; Wed, 13 May 2020 14:13:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYs8a-000109-0p
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:13:32 +0000
X-Inumbo-ID: ebeec59c-9523-11ea-a384-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ebeec59c-9523-11ea-a384-12813bfff9fa;
 Wed, 13 May 2020 14:13:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4B8B4AC90;
 Wed, 13 May 2020 14:13:32 +0000 (UTC)
Subject: Re: [PATCH] x86/build: Unilaterally disable -fcf-protection
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200513135552.24329-1-andrew.cooper3@citrix.com>
 <20200513135552.24329-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a56bb718-47e8-ab45-d706-84f59d3131d8@suse.com>
Date: Wed, 13 May 2020 16:13:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200513135552.24329-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.05.2020 15:55, Andrew Cooper wrote:
> Xen doesn't support CET-IBT yet.  At a minimum, logic is required to enable it
> for supervisor use, but the livepatch functionality needs to learn not to
> overwrite ENDBR64 instructions.
> 
> Furthermore, Ubuntu enables -fcf-protection by default, along with a buggy
> version of GCC-9 which objects to it in combination with
> -mindirect-branch=thunk-extern (fixed in GCC 10 and 9.4).
> 
> Various objects (Xen boot path, Rombios 32 stubs) require .text to be at the
> beginning of the object.  These paths explode when .note.gnu.property gets
> put ahead of .text and we end up executing the note data.
> 
> Disable -fcf-protection for all embedded objects.
> 
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

For the immediate purpose
Reviewed-by: Jan Beulich <jbeulich@suse.com>

I wonder however ...

> --- a/Config.mk
> +++ b/Config.mk
> @@ -205,6 +205,7 @@ APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
>  
>  EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
>  EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
> +EMBEDDED_EXTRA_CFLAGS += -fcf-protection=none

... whether this isn't going to bite us once some of the consumers
of this variable want to enable some different mode.

Jan
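
For readers unfamiliar with the failure mode discussed above, the note section that -fcf-protection introduces is easy to observe (a sketch; assumes an x86-64 gcc — the fallback compile covers toolchains that don't know the flag):

```shell
cat > cet_demo.c <<'EOF'
void entry(void) {}
EOF

# With protection disabled, no .note.gnu.property section is emitted, so
# nothing can land ahead of .text in a naively laid-out binary blob.
gcc -c -fcf-protection=none cet_demo.c -o cet_off.o 2>/dev/null \
    || gcc -c cet_demo.c -o cet_off.o
off_count=$(readelf -S cet_off.o | grep -c 'note.gnu.prop' || true)

# With protection enabled (where the compiler supports it), the property
# note appears and functions gain ENDBR64 as their first instruction.
gcc -c -fcf-protection=full cet_demo.c -o cet_on.o 2>/dev/null \
    && readelf -S cet_on.o | grep 'note.gnu.prop' || true

echo "off=$off_count"
```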


From xen-devel-bounces@lists.xenproject.org Wed May 13 14:18:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsDK-0001C2-Ft; Wed, 13 May 2020 14:18:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xdko=63=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1jYsDJ-0001Bv-FY
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:18:25 +0000
X-Inumbo-ID: 9aa7de8e-9524-11ea-a38a-12813bfff9fa
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9aa7de8e-9524-11ea-a38a-12813bfff9fa;
 Wed, 13 May 2020 14:18:23 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id CFEECF3A9;
 Wed, 13 May 2020 16:18:22 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id Oy_Jz76kSO_h; Wed, 13 May 2020 16:18:21 +0200 (CEST)
Received: from function.home (unknown
 [IPv6:2a01:cb19:956:1b00:9eb6:d0ff:fe88:c3c7])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id F1CEFF04D;
 Wed, 13 May 2020 16:18:20 +0200 (CEST)
Received: from samy by function.home with local (Exim 4.93)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1jYsDD-005Ohe-I2; Wed, 13 May 2020 16:18:19 +0200
Date: Wed, 13 May 2020 16:18:19 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/build: move -fno-asynchronous-unwind-tables into
 EMBEDDED_EXTRA_CFLAGS
Message-ID: <20200513141819.nekdeotfuicputvo@function>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>
References: <20200513135552.24329-1-andrew.cooper3@citrix.com>
 <20200513135552.24329-2-andrew.cooper3@citrix.com>
 <88c734e9-fd99-c157-ae9c-0900157bf1b6@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <88c734e9-fd99-c157-ae9c-0900157bf1b6@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

Jan Beulich, on Wed, 13 May 2020 16:11:00 +0200, wrote:
> On 13.05.2020 15:55, Andrew Cooper wrote:
> > Users of EMBEDDED_EXTRA_CFLAGS already use -fno-asynchronous-unwind-tables, or
> > ought to.
> 
> It's not really well defined what they're supposed to be used for
> (and where it's not supposed to be used). I notice in particular
> a use in stubdom/Makefile which I'm unsure whether it indeed wants
> this adjustment. Therefore ...

I don't know why this is there in mini-os. It dates back to the 2005 commit
8afe079be ('Mini-OS cleanups. Bug fixes in x86_64 assembly code.').

It indeed looks to me like a general option to minimize binary size, so
I'd say it is fine to make its activation general.

> >  This shrinks the size of the rombios 32bit stubs in guest memory.
> > 
> > Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with the request that Samuel also ack (or otherwise) the change
> from a stubdom perspective.

Samuel


From xen-devel-bounces@lists.xenproject.org Wed May 13 14:18:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsDJ-0001Br-8N; Wed, 13 May 2020 14:18:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYsDH-0001Bm-JF
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:18:23 +0000
X-Inumbo-ID: 9a5d6714-9524-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a5d6714-9524-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 14:18:23 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id b1so13471360qtt.1
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 07:18:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=K9Wqtawrn3d5LUJiPdInY5dz3Pq2E+yl4crIH2wi+jw=;
 b=FIKEdliuagLFhAbjk2w/BLI4gzsD8hNAPYJsOdlzcorus+MJjz6LbYG26Lbw5iCd3d
 FNnVwv27tkNEC8q83WvdNKK0ds702HzLMmk5Rm13gh65QWZBRw4LG2R9JtjupkdrIjAR
 I5irJD7x/zZyoPWVvvp2/2w1bzNSi6fWWtto94qRANG6sJZiPuQ257B8Ys5ru1B+b02n
 lh/qv7fbjWQvIDDtJKJo3YpwAthXKWGbxLVSZqttoAoosHvj/c/iDCD57EoZnxpBk7V+
 +rNOn93JciBQW+TGd2Ys7zGFsXG6nl0KjAU/ub/J4tjaKg+lUIJOHO1m9TrMQYexQgLd
 3vzw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=K9Wqtawrn3d5LUJiPdInY5dz3Pq2E+yl4crIH2wi+jw=;
 b=ntrpPZHColDurZM5s+23Ql2p5vm5C96OxEL/11PnEcZ/fPrwmoN936ANekakbVaWmW
 ggElIRGBBN/5AUk1oH+zh9qGteLOILJFy0lfda8lAKbi3g99UyKPTh+n1jmV6EGCcne4
 xBGeWZEfEebFXSzYUd0oViz1PXW6/DuRaI5Es95aH+uhxganRNUBr9y5y8V0pGI6sYuV
 Mb1xMCa+3XD3BMzvUIhM5NfOFXUkAhqcFoZg8YPYEtMvImQYc+kKlYVsG/U/Oo68efUO
 3qLZM/YK7d/Y2v6tSSbdDuE3ThohQ6zi5uaMtfHZPkBwOCd11DRuwPnirZHlFnzgK8xX
 9y3A==
X-Gm-Message-State: AGi0PuZIhIdoZ9QLtjjE8/+VPVidfATGFCxP2hLsGA2Xp8KN3tkQx9Hx
 AwnwRSub61Cev+0pD/45bmdoL0WKQCU=
X-Google-Smtp-Source: APiQypJtXRvrt1SV2ezjZovYN4JKLah2zLAnzwbmfflsyhMneZ+P6mT0RGo8YchEn+5S61frzxuebA==
X-Received: by 2002:ac8:660d:: with SMTP id c13mr15089567qtp.267.1589379502140; 
 Wed, 13 May 2020 07:18:22 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id o3sm15045707qtt.56.2020.05.13.07.18.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 13 May 2020 07:18:21 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 0/1] initialize xenlight go module
Date: Wed, 13 May 2020 10:18:18 -0400
Message-Id: <cover.1589379056.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These patches set up an initial Go module for the xenlight package. The
Go code is tracked again, since the module is defined in xen.git as
xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight. The final patch
adds a README and LICENSE to tools/golang/xenlight so that the package
will show up properly on pkg.go.dev and be available via the module
proxy at proxy.golang.org.
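
For reference, the module definition implied by that path boils down to a short go.mod at tools/golang/xenlight (a sketch — the go directive version here is an assumption; the real file lives in xen.git):

```
module xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight

go 1.11
```

Consumers then import the package by that module path, and pkg.go.dev / proxy.golang.org resolve it via the xenbits git-http endpoint.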

Changes in v2:
 - Use xenproject.org instead of xen.org in module path.
 - Use `go build` instead of `go install` in xenlight Makefile.
 - Use LGPL instead of GPL for xenlight LICENSE.
 - Add entry for xenlight package in SUPPORT.md.
 - Change some wording in the README for clarity. 

Changes in v3:
 - Use LGPL as-is from [1].
 - Wrap lines in README.

[1] https://www.gnu.org/licenses/old-licenses/lgpl-2.1.txt

Nick Rosbrook (1):
  golang/xenlight: add necessary module/package documentation

 SUPPORT.md                        |   6 +
 tools/golang/xenlight/LICENSE     | 502 ++++++++++++++++++++++++++++++
 tools/golang/xenlight/README.md   |  28 ++
 tools/golang/xenlight/xenlight.go |   2 +
 4 files changed, 538 insertions(+)
 create mode 100644 tools/golang/xenlight/LICENSE
 create mode 100644 tools/golang/xenlight/README.md

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 14:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsDN-0001Cp-Nf; Wed, 13 May 2020 14:18:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FcTI=63=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jYsDM-0001CJ-C9
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:18:28 +0000
X-Inumbo-ID: 9c098bba-9524-11ea-9887-bc764e2007e4
Received: from mail-qt1-x829.google.com (unknown [2607:f8b0:4864:20::829])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c098bba-9524-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 14:18:25 +0000 (UTC)
Received: by mail-qt1-x829.google.com with SMTP id t25so910705qtc.0
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 07:18:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:in-reply-to:references:content-transfer-encoding;
 bh=UydLMLxX/OPOtirEgKx5Z28DCUjWwKpHi6YJJt1Dbck=;
 b=MsHIbpWu5rl+G0EJRRmyjtzZd/5L/J7eLTRxU+iS5hvWEOzrflEC4s7plTVe/cHILn
 sZF0boVyEW0SSM7ByrAB9k/HVy+vRhkDdyqnYk2E7AK9JZp/t6KlEh+1vsQd5Jj+UHpc
 cnz8miFdnWkpk6JrntLs3tF3BKBcUNwCiKQfgjUcjge+CSrWuW4P4RjC1iAGleqz7/mK
 g/amkz9UIoPE37FWaD6Wku5Yjs7oAjGUPfvjKOS/d0Y8tblRMcuOcGRFKXVf7r5xF9fa
 DiaPE3LEWKZFgShumFqqVxye3wk3KTAVT5erHrPMMDBptNyaw1MUqJHxZnYmnnYaSqxF
 x8tg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:in-reply-to:references
 :content-transfer-encoding;
 bh=UydLMLxX/OPOtirEgKx5Z28DCUjWwKpHi6YJJt1Dbck=;
 b=EiE6thYUTwKlsqduZVyX5zNXKWgIDbghdJyCR9koKh5zqICZP52st0//KKmAzGGseH
 FFrns9L3Z++QECFKyxfWBwwua1GFpAY5PsBTsYMRlaJlFeQoCpLX2dDQfEfEoBtf9wfS
 sV+73teQO6lyiGTOvOreKWAKvekHEBsv80P5AQ9rJHl000TAnAU6nt4hU17VjqyoicTI
 WMbHRDU5+ATBTTPXtC2uVNSZ4nZSOn8aucGDUzIdQJrp+90pLEtc4u/GIcul7rkoQ27Q
 JU6mIEHPlF9uqnfSVZX2hn4ZBoTfoJBs7IfriovWC75skqmE/bgOVLY+u0hNtByWNVXH
 //Cw==
X-Gm-Message-State: AGi0PubkCAjjXZDUwmkUuSjdXWBRHJOUYKiy3UwPSAZQkYwaKCz/yzBy
 U9bPD7kavVqg+CYX1Y1OFj17FonDcNY=
X-Google-Smtp-Source: APiQypJYil/MJqERnWyvdPiidkMCQzdJwblF24VgnxurmaPJM9Hq4fYDFhupEkvOzNNcdN+02LsTsw==
X-Received: by 2002:aed:2253:: with SMTP id o19mr16771508qtc.236.1589379503806; 
 Wed, 13 May 2020 07:18:23 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id o3sm15045707qtt.56.2020.05.13.07.18.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 13 May 2020 07:18:23 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 1/1] golang/xenlight: add necessary module/package
 documentation
Date: Wed, 13 May 2020 10:18:19 -0400
Message-Id: <fa80be6bf52005db0e54fd8dd74a9ff855a5316f.1589379056.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1589379056.git.rosbrookn@ainfosec.com>
References: <cover.1589379056.git.rosbrookn@ainfosec.com>
MIME-Version: 1.0
In-Reply-To: <cover.1589379056.git.rosbrookn@ainfosec.com>
References: <cover.1589379056.git.rosbrookn@ainfosec.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a README and package comment giving a brief overview of the package.
These also help pkg.go.dev generate better documentation.

Also, add a copy of the LGPL (the same license used by libxl) to
tools/golang/xenlight. This is required for the package to be shown
on pkg.go.dev and added to the default module proxy, proxy.golang.org.

Finally, add an entry for the xenlight package to SUPPORT.md.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
Changes in v2:
 - Use LGPL instead of GPL for license.
 - Change some wording in README for clarity.
 - Add entry to SUPPORT.md for xenlight package.

Changes in v3:
 - Use LGPL as-is, not from tools/ocaml.
 - Wrap lines in README.
---
 SUPPORT.md                        |   6 +
 tools/golang/xenlight/LICENSE     | 502 ++++++++++++++++++++++++++++++
 tools/golang/xenlight/README.md   |  28 ++
 tools/golang/xenlight/xenlight.go |   2 +
 4 files changed, 538 insertions(+)
 create mode 100644 tools/golang/xenlight/LICENSE
 create mode 100644 tools/golang/xenlight/README.md

diff --git a/SUPPORT.md b/SUPPORT.md
index 7270c9b021..e3a366fd56 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -147,6 +147,12 @@ Output of information in machine-parseable JSON format
 
     Status: Supported
 
+### xenlight Go package
+
+Go (golang) bindings for libxl
+
+    Status: Experimental
+
 ## Toolstack/3rd party
 
 ### libvirt driver for xl
diff --git a/tools/golang/xenlight/LICENSE b/tools/golang/xenlight/LICENSE
new file mode 100644
index 0000000000..4362b49151
--- /dev/null
+++ b/tools/golang/xenlight/LICENSE
@@ -0,0 +1,502 @@
+                  GNU LESSER GENERAL PUBLIC LICENSE
+                       Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+ 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL.  It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+                            Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change
+free software--to make sure the software is free for all its users.
+
+  This license, the Lesser General Public License, applies to some
+specially designated software packages--typically libraries--of the
+Free Software Foundation and other authors who decide to use it.  You
+can use it too, but we suggest you first think carefully about whether
+this license or the ordinary General Public License is the better
+strategy to use in any particular case, based on the explanations below.
+
+  When we speak of free software, we are referring to freedom of use,
+not price.  Our General Public Licenses are designed to make sure that
+you have the freedom to distribute copies of free software (and charge
+for this service if you wish); that you receive source code or can get
+it if you want it; that you can change the software and use pieces of
+it in new free programs; and that you are informed that you can do
+these things.
+
+  To protect your rights, we need to make restrictions that forbid
+distributors to deny you these rights or to ask you to surrender these
+rights.  These restrictions translate to certain responsibilities for
+you if you distribute copies of the library or if you modify it.
+
+  For example, if you distribute copies of the library, whether gratis
+or for a fee, you must give the recipients all the rights that we gave
+you.  You must make sure that they, too, receive or can get the source
+code.  If you link other code with the library, you must provide
+complete object files to the recipients, so that they can relink them
+with the library after making changes to the library and recompiling
+it.  And you must show them these terms so they know their rights.
+
+  We protect your rights with a two-step method: (1) we copyright the
+library, and (2) we offer you this license, which gives you legal
+permission to copy, distribute and/or modify the library.
+
+  To protect each distributor, we want to make it very clear that
+there is no warranty for the free library.  Also, if the library is
+modified by someone else and passed on, the recipients should know
+that what they have is not the original version, so that the original
+author's reputation will not be affected by problems that might be
+introduced by others.
+
+  Finally, software patents pose a constant threat to the existence of
+any free program.  We wish to make sure that a company cannot
+effectively restrict the users of a free program by obtaining a
+restrictive license from a patent holder.  Therefore, we insist that
+any patent license obtained for a version of the library must be
+consistent with the full freedom of use specified in this license.
+
+  Most GNU software, including some libraries, is covered by the
+ordinary GNU General Public License.  This license, the GNU Lesser
+General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License.  We use
+this license for certain libraries in order to permit linking those
+libraries into non-free programs.
+
+  When a program is linked with a library, whether statically or using
+a shared library, the combination of the two is legally speaking a
+combined work, a derivative of the original library.  The ordinary
+General Public License therefore permits such linking only if the
+entire combination fits its criteria of freedom.  The Lesser General
+Public License permits more lax criteria for linking other code with
+the library.
+
+  We call this license the "Lesser" General Public License because it
+does Less to protect the user's freedom than the ordinary General
+Public License.  It also provides other free software developers Less
+of an advantage over competing non-free programs.  These disadvantages
+are the reason we use the ordinary General Public License for many
+libraries.  However, the Lesser license provides advantages in certain
+special circumstances.
+
+  For example, on rare occasions, there may be a special need to
+encourage the widest possible use of a certain library, so that it becomes
+a de-facto standard.  To achieve this, non-free programs must be
+allowed to use the library.  A more frequent case is that a free
+library does the same job as widely used non-free libraries.  In this
+case, there is little to gain by limiting the free library to free
+software only, so we use the Lesser General Public License.
+
+  In other cases, permission to use a particular library in non-free
+programs enables a greater number of people to use a large body of
+free software.  For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU
+operating system, as well as its variant, the GNU/Linux operating
+system.
+
+  Although the Lesser General Public License is Less protective of the
+users' freedom, it does ensure that the user of a program that is
+linked with the Library has the freedom and the wherewithal to run
+that program using a modified version of the Library.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.  Pay close attention to the difference between a
+"work based on the library" and a "work that uses the library".  The
+former contains code derived from the library, whereas the latter must
+be combined with the library in order to run.
+
+                  GNU LESSER GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License Agreement applies to any software library or other
+program which contains a notice placed by the copyright holder or
+other authorized party saying it may be distributed under the terms of
+this Lesser General Public License (also called "this License").
+Each licensee is addressed as "you".
+
+  A "library" means a collection of software functions and/or data
+prepared so as to be conveniently linked with application programs
+(which use some of those functions and data) to form executables.
+
+  The "Library", below, refers to any such software library or work
+which has been distributed under these terms.  A "work based on the
+Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a
+portion of it, either verbatim or with modifications and/or translated
+straightforwardly into another language.  (Hereinafter, translation is
+included without limitation in the term "modification".)
+
+  "Source code" for a work means the preferred form of the work for
+making modifications to it.  For a library, complete source code means
+all the source code for all modules it contains, plus any associated
+interface definition files, plus the scripts used to control compilation
+and installation of the library.
+
+  Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running a program using the Library is not restricted, and output from
+such a program is covered only if its contents constitute a work based
+on the Library (independent of the use of the Library in a tool for
+writing it).  Whether that is true depends on what the Library does
+and what the program that uses the Library does.
+
+  1. You may copy and distribute verbatim copies of the Library's
+complete source code as you receive it, in any medium, provided that
+you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact
+all the notices that refer to this License and to the absence of any
+warranty; and distribute a copy of this License along with the
+Library.
+
+  You may charge a fee for the physical act of transferring a copy,
+and you may at your option offer warranty protection in exchange for a
+fee.
+
+  2. You may modify your copy or copies of the Library or any portion
+of it, thus forming a work based on the Library, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) The modified work must itself be a software library.
+
+    b) You must cause the files modified to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    c) You must cause the whole of the work to be licensed at no
+    charge to all third parties under the terms of this License.
+
+    d) If a facility in the modified Library refers to a function or a
+    table of data to be supplied by an application program that uses
+    the facility, other than as an argument passed when the facility
+    is invoked, then you must make a good faith effort to ensure that,
+    in the event an application does not supply such function or
+    table, the facility still operates, and performs whatever part of
+    its purpose remains meaningful.
+
+    (For example, a function in a library to compute square roots has
+    a purpose that is entirely well-defined independent of the
+    application.  Therefore, Subsection 2d requires that any
+    application-supplied function or table used by this function must
+    be optional: if the application does not supply it, the square
+    root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Library,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Library, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote
+it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library
+with the Library (or with a work based on the Library) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may opt to apply the terms of the ordinary GNU General Public
+License instead of this License to a given copy of the Library.  To do
+this, you must alter all the notices that refer to this License, so
+that they refer to the ordinary GNU General Public License, version 2,
+instead of to this License.  (If a newer version than version 2 of the
+ordinary GNU General Public License has appeared, then you can specify
+that version instead if you wish.)  Do not make any other change in
+these notices.
+
+  Once this change is made in a given copy, it is irreversible for
+that copy, so the ordinary GNU General Public License applies to all
+subsequent copies and derivative works made from that copy.
+
+  This option is useful when you wish to copy part of the code of
+the Library into a program that is not a library.
+
+  4. You may copy and distribute the Library (or a portion or
+derivative of it, under Section 2) in object code or executable form
+under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which
+must be distributed under the terms of Sections 1 and 2 above on a
+medium customarily used for software interchange.
+
+  If distribution of object code is made by offering access to copy
+from a designated place, then offering equivalent access to copy the
+source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  5. A program that contains no derivative of any portion of the
+Library, but is designed to work with the Library by being compiled or
+linked with it, is called a "work that uses the Library".  Such a
+work, in isolation, is not a derivative work of the Library, and
+therefore falls outside the scope of this License.
+
+  However, linking a "work that uses the Library" with the Library
+creates an executable that is a derivative of the Library (because it
+contains portions of the Library), rather than a "work that uses the
+library".  The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+
+  When a "work that uses the Library" uses material from a header file
+that is part of the Library, the object code for the work may be a
+derivative work of the Library even though the source code is not.
+Whether this is true is especially significant if the work can be
+linked without the Library, or if the work is itself a library.  The
+threshold for this to be true is not precisely defined by law.
+
+  If such an object file uses only numerical parameters, data
+structure layouts and accessors, and small macros and small inline
+functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative
+work.  (Executables containing this object code plus portions of the
+Library will still fall under Section 6.)
+
+  Otherwise, if the work is a derivative of the Library, you may
+distribute the object code for the work under the terms of Section 6.
+Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+
+  6. As an exception to the Sections above, you may also combine or
+link a "work that uses the Library" with the Library to produce a
+work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit
+modification of the work for the customer's own use and reverse
+engineering for debugging such modifications.
+
+  You must give prominent notice with each copy of the work that the
+Library is used in it and that the Library and its use are covered by
+this License.  You must supply a copy of this License.  If the work
+during execution displays copyright notices, you must include the
+copyright notice for the Library among them, as well as a reference
+directing the user to the copy of this License.  Also, you must do one
+of these things:
+
+    a) Accompany the work with the complete corresponding
+    machine-readable source code for the Library including whatever
+    changes were used in the work (which must be distributed under
+    Sections 1 and 2 above); and, if the work is an executable linked
+    with the Library, with the complete machine-readable "work that
+    uses the Library", as object code and/or source code, so that the
+    user can modify the Library and then relink to produce a modified
+    executable containing the modified Library.  (It is understood
+    that the user who changes the contents of definitions files in the
+    Library will not necessarily be able to recompile the application
+    to use the modified definitions.)
+
+    b) Use a suitable shared library mechanism for linking with the
+    Library.  A suitable mechanism is one that (1) uses at run time a
+    copy of the library already present on the user's computer system,
+    rather than copying library functions into the executable, and (2)
+    will operate properly with a modified version of the library, if
+    the user installs one, as long as the modified version is
+    interface-compatible with the version that the work was made with.
+
+    c) Accompany the work with a written offer, valid for at
+    least three years, to give the same user the materials
+    specified in Subsection 6a, above, for a charge no more
+    than the cost of performing this distribution.
+
+    d) If distribution of the work is made by offering access to copy
+    from a designated place, offer equivalent access to copy the above
+    specified materials from the same place.
+
+    e) Verify that the user has already received a copy of these
+    materials or that you have already sent this user a copy.
+
+  For an executable, the required form of the "work that uses the
+Library" must include any data and utility programs needed for
+reproducing the executable from it.  However, as a special exception,
+the materials to be distributed need not include anything that is
+normally distributed (in either source or binary form) with the major
+components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies
+the executable.
+
+  It may happen that this requirement contradicts the license
+restrictions of other proprietary libraries that do not normally
+accompany the operating system.  Such a contradiction means you cannot
+use both them and the Library together in an executable that you
+distribute.
+
+  7. You may place library facilities that are a work based on the
+Library side-by-side in a single library together with other library
+facilities not covered by this License, and distribute such a combined
+library, provided that the separate distribution of the work based on
+the Library and of the other library facilities is otherwise
+permitted, and provided that you do these two things:
+
+    a) Accompany the combined library with a copy of the same work
+    based on the Library, uncombined with any other library
+    facilities.  This must be distributed under the terms of the
+    Sections above.
+
+    b) Give prominent notice with the combined library of the fact
+    that part of it is a work based on the Library, and explaining
+    where to find the accompanying uncombined form of the same work.
+
+  8. You may not copy, modify, sublicense, link with, or distribute
+the Library except as expressly provided under this License.  Any
+attempt otherwise to copy, modify, sublicense, link with, or
+distribute the Library is void, and will automatically terminate your
+rights under this License.  However, parties who have received copies,
+or rights, from you under this License will not have their licenses
+terminated so long as such parties remain in full compliance.
+
+  9. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Library or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Library (or any work based on the
+Library), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Library or works based on it.
+
+  10. Each time you redistribute the Library (or any work based on the
+Library), the recipient automatically receives a license from the
+original licensor to copy, distribute, link with or modify the Library
+subject to these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with
+this License.
+
+  11. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Library at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Library by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under any
+particular circumstance, the balance of the section is intended to apply,
+and the section as a whole is intended to apply in other circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  12. If the distribution and/or use of the Library is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Library under this License may add
+an explicit geographical distribution limitation excluding those countries,
+so that distribution is permitted only in or among countries not thus
+excluded.  In such case, this License incorporates the limitation as if
+written in the body of this License.
+
+  13. The Free Software Foundation may publish revised and/or new
+versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version,
+but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Library
+specifies a version number of this License which applies to it and
+"any later version", you have the option of following the terms and
+conditions either of that version or of any later version published by
+the Free Software Foundation.  If the Library does not specify a
+license version number, you may choose any version ever published by
+the Free Software Foundation.
+
+  14. If you wish to incorporate parts of the Library into other free
+programs whose distribution conditions are incompatible with these,
+write to the author to ask for permission.  For software which is
+copyrighted by the Free Software Foundation, write to the Free
+Software Foundation; we sometimes make exceptions for this.  Our
+decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing
+and reuse of software generally.
+
+                            NO WARRANTY
+
+  15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
+WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
+OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
+KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
+LIBRARY IS WITH YOU.  SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
+THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
+AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
+FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
+CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
+LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
+FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
+SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGES.
+
+                     END OF TERMS AND CONDITIONS
+
+           How to Apply These Terms to Your New Libraries
+
+  If you develop a new library, and you want it to be of the greatest
+possible use to the public, we recommend making it free software that
+everyone can redistribute and change.  You can do so by permitting
+redistribution under these terms (or, alternatively, under the terms of the
+ordinary General Public License).
+
+  To apply these terms, attach the following notices to the library.  It is
+safest to attach them to the start of each source file to most effectively
+convey the exclusion of warranty; and each file should have at least the
+"copyright" line and a pointer to where the full notice is found.
+
+    <one line to give the library's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; if not, write to the Free Software
+    Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or your
+school, if any, to sign a "copyright disclaimer" for the library, if
+necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the
+  library `Frob' (a library for tweaking knobs) written by James Random Hacker.
+
+  <signature of Ty Coon>, 1 April 1990
+  Ty Coon, President of Vice
+
+That's all there is to it!
diff --git a/tools/golang/xenlight/README.md b/tools/golang/xenlight/README.md
new file mode 100644
index 0000000000..a423a5600a
--- /dev/null
+++ b/tools/golang/xenlight/README.md
@@ -0,0 +1,28 @@
+# xenlight
+
+## About
+
+The xenlight package provides Go bindings to Xen's libxl C library via cgo.
+The package is currently in an unstable "experimental" state. This means
+the package is ready for initial use and evaluation, but is not yet fully
+functional. Namely, only a subset of libxl's API is implemented, and
+breaking changes may occur in future package versions.
+
+Much of the package is generated using the libxl IDL. Changes to the
+generated code can be made by modifying `tools/golang/xenlight/gengotypes.py`
+in the xen.git tree.
+
+## Getting Started
+
+```go
+import (
+        "xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight"
+)
+```
+
+The module is not yet tagged independently of xen.git; if you don’t specify
+the version, you’ll get the most recent development version, which is
+probably not what you want. A better option would be to specify a Xen
+release tag; for instance:
+
+    go get xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight@RELEASE-4.14.0
diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 6b4f492550..3eaa5a3d63 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -14,6 +14,8 @@
  * You should have received a copy of the GNU Lesser General Public
  * License along with this library; If not, see <http://www.gnu.org/licenses/>.
  */
+
+// Package xenlight provides bindings to Xen's libxl C library.
 package xenlight
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 13 14:31:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsPq-0002xQ-2u; Wed, 13 May 2020 14:31:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYsPp-0002xL-1Q
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:31:21 +0000
X-Inumbo-ID: 687ea422-9526-11ea-a395-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 687ea422-9526-11ea-a395-12813bfff9fa;
 Wed, 13 May 2020 14:31:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 49C11ADF8;
 Wed, 13 May 2020 14:31:20 +0000 (UTC)
Subject: Re: [PATCH] x86/guest: Fix assembler warnings with newer binutils
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200512162830.5912-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0edc48e1-5f09-573b-8958-94c3a95f2c33@suse.com>
Date: Wed, 13 May 2020 16:31:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200512162830.5912-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12.05.2020 18:28, Andrew Cooper wrote:
> --- a/xen/arch/x86/guest/xen/hypercall_page.S
> +++ b/xen/arch/x86/guest/xen/hypercall_page.S
> @@ -8,7 +8,6 @@
>  GLOBAL(hypercall_page)
>           /* Poisoned with `ret` for safety before hypercalls are set up. */
>          .fill PAGE_SIZE, 1, 0xc3
> -        .type hypercall_page, STT_OBJECT
>          .size hypercall_page, PAGE_SIZE

Looks like we don't need to sacrifice the type setting here: simply
moving .type ahead of .set in DECLARE_HYPERCALL() seems to help as
well. To save a roundtrip, for that alternative change:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 14:41:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsZ8-0003qs-1Z; Wed, 13 May 2020 14:40:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYsZ6-0003qn-DP
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:40:56 +0000
X-Inumbo-ID: bffbcef4-9527-11ea-9887-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bffbcef4-9527-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 14:40:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589380856;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=aJPuLbXmdiUuvxJZB9lY/DMK61zjlx7iWP0InQTAu3o=;
 b=JvfDRRleSHx78ebnE64XH+i558wQfvLcQpwlrKNQ5PkOqrDzJMQoz3PL
 lKoJRMAW+Bs2H/N5Q8KJjrcqfCvZ8SCF78GrEmnWLpWtLIOvdj5knVUsD
 plF8apsO/MlxrK8/MFGH6oKyU6Cn2z4Dy7Q5kdwPU0kWtdBYasacBMkSy E=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: VvX3nAt0ezuSuxDOykhKCzheoUyLKUASYAy2MFYKayxIye40p7h7eRrn4Z/AwoilhqFz75PuU3
 bj+UtjDBbbfTT6WtE8QoP0l9A0zLr5Rm3fw2sij5c6+tSGH+ILmeWKpevGd5fYK4mTJxe95KFv
 Mblg+6MO3h0M7sY34eh8KHMO37bu/Ah6x3UhL0wTl96d4AlWD79Fxvzcu/73YSBh4NpwvUoiun
 /1tGF2VzhMNmLA0jOrEVBpROetD2Czf32CV5E+kLPtfytgATGgzK4qyOwZlrsYdyxda9GznP8s
 xBE=
X-SBRS: 2.7
X-MesageID: 17448237
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,388,1583211600"; d="scan'208";a="17448237"
Subject: Re: [PATCH] x86/build: Unilaterally disable -fcf-protection
To: Jan Beulich <jbeulich@suse.com>
References: <20200513135552.24329-1-andrew.cooper3@citrix.com>
 <20200513135552.24329-3-andrew.cooper3@citrix.com>
 <a56bb718-47e8-ab45-d706-84f59d3131d8@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d395aa13-ab66-490e-109c-94a050b1fc9a@citrix.com>
Date: Wed, 13 May 2020 15:40:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <a56bb718-47e8-ab45-d706-84f59d3131d8@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13/05/2020 15:13, Jan Beulich wrote:
>
> On 13.05.2020 15:55, Andrew Cooper wrote:
>> Xen doesn't support CET-IBT yet.  At a minimum, logic is required to enable it
>> for supervisor use, but the livepatch functionality needs to learn not to
>> overwrite ENDBR64 instructions.
>>
>> Furthermore, Ubuntu enables -fcf-protection by default, along with a buggy
>> version of GCC-9 which objects to it in combination with
>> -mindirect-branch=thunk-extern (Fixed in GCC 10, 9.4).
>>
>> Various objects (Xen boot path, Rombios 32 stubs) require .text to be at the
>> beginning of the object.  These paths explode when .note.gnu.property gets
>> put ahead of .text and we end up executing the notes data.
>>
>> Disable -fcf-protection for all embedded objects.
>>
>> Reported-by: Jason Andryuk <jandryuk@gmail.com>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> For the immediate purpose
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

>
> I wonder however ...
>
>> --- a/Config.mk
>> +++ b/Config.mk
>> @@ -205,6 +205,7 @@ APPEND_CFLAGS += $(foreach i, $(APPEND_INCLUDES), -I$(i))
>>  
>>  EMBEDDED_EXTRA_CFLAGS := -nopie -fno-stack-protector -fno-stack-protector-all
>>  EMBEDDED_EXTRA_CFLAGS += -fno-exceptions -fno-asynchronous-unwind-tables
>> +EMBEDDED_EXTRA_CFLAGS += -fcf-protection=none
> ... whether this isn't going to bite us once some of the consumers
> of this variable want to enable some different mode.

I'm not overly happy with EMBEDDED_EXTRA_CFLAGS as a concept, but these
build fixes do need backporting.

All embedded targets may in principle use some/all of these options at
some point in the future.
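
For illustration, options like this are normally added through a compiler capability probe so that pre-CET toolchains which don't recognise the flag are unaffected; a hypothetical sketch (the real Config.mk cc-option plumbing may differ):

```make
# Hypothetical sketch: probe whether $(CC) accepts the option before
# appending it, so older compilers don't fail on an unrecognised flag.
cc-option = $(shell if $(CC) $(1) -c -x c /dev/null -o /dev/null \
              >/dev/null 2>&1; then echo "$(1)"; fi)

EMBEDDED_EXTRA_CFLAGS += $(call cc-option,-fcf-protection=none)
```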

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 13 14:54:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 14:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsm7-0004no-7s; Wed, 13 May 2020 14:54:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WltO=63=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jYsm6-0004nj-0b
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 14:54:22 +0000
X-Inumbo-ID: a07f3316-9529-11ea-b9cf-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a07f3316-9529-11ea-b9cf-bc764e2007e4;
 Wed, 13 May 2020 14:54:21 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id u15so18148707ljd.3
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 07:54:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=MG4vWe3NIOB9tnvJ17Qc4t/qYEXvgRhyiC2qniHyU/k=;
 b=k2H12gZr1sQXZ9wpU5TmOsUIBCVUbUetwkMzzphq+P+yy1ZBfUPv8ArJSmt2a+Glpv
 C/CRUjn+hiAyt3AH2dvdzWgU6v2QWRezdrIw9iGutCTu20tZeHjMehxzsYS4q3omKv8L
 Y2mqifbJpX4Fus+LUruqv03Gcq//tSYBNMLG683Y02MCcVQgID+eVTHJjLnYf1UkE+LE
 mAAHe//+SO4C/cvIh2YEN/SUc6LYii0yJ5veIaCBOHSUi19/oyPU07+x1A/niJwv8Dss
 0y7yIK7WB8m6baZe/DZBX8vBBkUDBC8nTiPgsiEkKOh53nuGMBKYc62tmqmJCbdLZ6tf
 msJA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=MG4vWe3NIOB9tnvJ17Qc4t/qYEXvgRhyiC2qniHyU/k=;
 b=Oe+OKiEJ4dI52hDSWYqxjWQL4IVywFjJF/aBZG9fs6YkuWasmWS7c+GbPvVUlqnOQV
 KHBJjJL5ou4EDhkpK+rOexki0jldDD6Zx5nefhETiNnxu2IgjnY4gay8Jyqp6rtu1XE5
 lNUkv+3HHA48JbmmpcgAS+ZvCys3IFxBY/19zQ7/vUXqvi6l0hJpgZ7iFqhD/b9Pj0gw
 jlQwpvUHe39yQ/S299tl/3xWKIxhSjJqtwgKntL8zGDiB+dAcPDQkidAxse28ZuV2LWu
 dKf8hTDkAYenyC6eQQl4P1qD+aaMxJ6vXB6HBPIjSfdRN2SDPnUkPA3e1+fT5t1l8Voa
 d2hA==
X-Gm-Message-State: AOAM533pNMqZd64B1c32Di6g9JsaXHY62Iy+66/eOoipqgb+GDHvRact
 6CmTqeA7w5pcYnTDAcJRATpEPFF/3aALdiWgwqU=
X-Google-Smtp-Source: ABdhPJxTvniAg8zbY+XkcgO+vXyN8uFTid/vsVXDFzdHJ1s5bCuUGIwZAtPr6KyNlqSadG8DDhY/d2j9CSQRTeEWLQA=
X-Received: by 2002:a2e:7215:: with SMTP id n21mr16390295ljc.199.1589381660260; 
 Wed, 13 May 2020 07:54:20 -0700 (PDT)
MIME-Version: 1.0
References: <20200513135552.24329-1-andrew.cooper3@citrix.com>
 <20200513135552.24329-3-andrew.cooper3@citrix.com>
In-Reply-To: <20200513135552.24329-3-andrew.cooper3@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 13 May 2020 10:54:08 -0400
Message-ID: <CAKf6xpuD46CEwc3et33_1g4SXYOJzcYehjHPV97P1HdKwkASLg@mail.gmail.com>
Subject: Re: [PATCH] x86/build: Unilaterally disable -fcf-protection
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefan Bader <stefan.bader@canonical.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 13, 2020 at 9:56 AM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> Xen doesn't support CET-IBT yet.  At a minimum, logic is required to enable it
> for supervisor use, but the livepatch functionality needs to learn not to
> overwrite ENDBR64 instructions.
>
> Furthermore, Ubuntu enables -fcf-protection by default, along with a buggy
> version of GCC-9 which objects to it in combination with
> -mindirect-branch=thunk-extern (Fixed in GCC 10, 9.4).
>
> Various objects (Xen boot path, Rombios 32 stubs) require .text to be at the
> beginning of the object.  These paths explode when .note.gnu.property gets
> put ahead of .text and we end up executing the notes data.
>
> Disable -fcf-protection for all embedded objects.
>
> Reported-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

I have not re-tested this posting, but I tested an equivalent change
~2 weeks ago (in case that counts for Tested-by).

-Jason


From xen-devel-bounces@lists.xenproject.org Wed May 13 15:00:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsru-0005eQ-To; Wed, 13 May 2020 15:00:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYsru-0005eL-4q
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:00:22 +0000
X-Inumbo-ID: 77127b36-952a-11ea-ae69-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77127b36-952a-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 15:00:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589382021;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=SLf79gMT3AsWVAodTqnqXd4+ahLWfTEnk6lVRWxtUUM=;
 b=HgX7rcAKlfRa0bssAzuYN62BxBgNkz+iVOJX/g2IgEr6/6j0aGQfoL6t
 KSH1AyCHXuEgte19NVpK/pgD9q2OOR1oPqozBMbZuhLmoknrIxkUw3ttV
 YFragZDZtUMEKJXgYyhZsxzWeLZYBmQsEWYYAGC0iKWoYxzvVe19WtIIS o=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 5jaecxsnO2wF4TU0kso1MbXkmYFtrybExaye8QKXPT64LnPOyqXJJ4iv2mQErrNdxwei8WCla2
 GtJYD9ADWUnzQOiJpOGokThm3v1H3sQxNFu4u1L6S3YBPDjVHltML7cZKfAITCGVVCdlRucc2G
 MCWZo7Rsuljx97th4pbmG/AXUioBcIpYT02O6qd4z1LB9BzbxtXg/b1XXFy9g5akbmekWH4fqi
 5/FqVHgx3/6q/kF+gexF1otiUVFhiRO/xqttiaikNpou8bvEaKTdNCJqcyb0s+FS6kwXEro+GM
 EhA=
X-SBRS: 2.7
X-MesageID: 17421822
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,388,1583211600"; d="scan'208";a="17421822"
Subject: Re: [PATCH] x86/build32: Discard all orphaned sections
To: Jan Beulich <jbeulich@suse.com>
References: <20200512191108.6461-1-andrew.cooper3@citrix.com>
 <a1d1135a-8f9c-81c3-5fc8-bbc3787ebd0f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e11b2b5b-5504-f2a3-f1d8-986bc97a4b78@citrix.com>
Date: Wed, 13 May 2020 16:00:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <a1d1135a-8f9c-81c3-5fc8-bbc3787ebd0f@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13/05/2020 10:13, Jan Beulich wrote:
>
> On 12.05.2020 21:11, Andrew Cooper wrote:
>> @@ -47,6 +47,14 @@ SECTIONS
>>           *
>>           * Please check build32.mk for more details.
>>           */
>> -        /* *(.got.plt) */
>> +        *(.got.plt)
>> +  }
>> +
>> +  /DISCARD/ : {
>> +        /*
>> +         * Discard everything else, to prevent linkers from putting
>> +         * orphaned sections ahead of .text, which needs to be first.
>> +         */
>> +        *(*)
>>    }
>>  }
> To be honest I'm not sure if this isn't going too far. Much
> depends on what would happen if a new real section appeared
> that needs retaining.

Anything which is important enough will result in a linker error.

> Granted the linker may then once again
> put it at the beginning of the image, and we'll be screwed
> again, just like we'll be screwed if a section gets discarded
> by mistake.

This way is more likely to result in a build failure than an inability
to boot the resulting build of Xen.

> But it would be really nice if we had a way to
> flag the need to play with the linker script. Hence perhaps
> on new enough tool chains we indeed may want to make use of
> --orphan-handling= ? And then discard just .note and .note.*
> here?

The only valid option would be =error, but experimenting with that yields

ld: error: unplaced orphan section `.comment' from `cmdline.o'
ld: error: unplaced orphan section `.note.GNU-stack' from `cmdline.o'
ld: error: unplaced orphan section `.note.gnu.property' from `cmdline.o'
ld: error: unplaced orphan section `.rel.got' from `cmdline.o'
ld: error: unplaced orphan section `.got' from `cmdline.o'
ld: error: unplaced orphan section `.got.plt' from `cmdline.o'
ld: error: unplaced orphan section `.iplt' from `cmdline.o'
ld: error: unplaced orphan section `.rel.iplt' from `cmdline.o'
ld: error: unplaced orphan section `.igot.plt' from `cmdline.o'

which I think is going to get us massively bogged down in toolchain
specifics.  I'm not entirely convinced this would be a good move.
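
As a standalone illustration, the discard-everything approach under discussion looks roughly like this (simplified sketch, not the actual build32 linker script):

```ld
SECTIONS
{
  .text : {
        *(.text)
        *(.text.*)
  }

  /* Anything not explicitly placed above is dropped, so no orphan
   * section (e.g. .note.gnu.property) can be emitted ahead of .text. */
  /DISCARD/ : {
        *(*)
  }
}
```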

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 13 15:07:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:07:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYsyN-0005qG-Lb; Wed, 13 May 2020 15:07:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mpYu=63=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jYsyM-0005qA-KI
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:07:02 +0000
X-Inumbo-ID: 65cddacc-952b-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x32d.google.com (unknown [2a00:1450:4864:20::32d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65cddacc-952b-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 15:07:01 +0000 (UTC)
Received: by mail-wm1-x32d.google.com with SMTP id y24so28926062wma.4
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 08:07:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=0ppYH1uoJsmVYxgQK4IIad79h5mAJL8RIv2tg3nr+4A=;
 b=SKt6Leh7uQ+0vSpAkgVJ62kddnpzoJlRXwkIOXaCs97PV7OMFEYZBwY4OLkkjKmetc
 oX9Y28TKu25qSxsZMKMwGDTpoCDKvT+X4qs3SmkSpZiHvnoqrtZXiEu7CNYARJ38PpaH
 hUODt3Gmi5TEDf6R+3NnVMp1sv3ldEZoHuN/giEuZhPnE99Wb8WR9HYI4xo8eIYuh2W9
 TniMoVb5CLLfp9Fj77GWv91JLdgF6QovNaC3RWAxYtQJsuwCDcQZTm5ScXgl5yTCXPpO
 1qiU2+ltYVbPZmYccpi/TrUFw6UIZjDcCHWd6LLFskrCd0HKjPuD49hl5hnLXpjC0w96
 T0Vw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=0ppYH1uoJsmVYxgQK4IIad79h5mAJL8RIv2tg3nr+4A=;
 b=egZCpSnSU4U7sTiZ+Rx7eGgE3OQmHmybbCC665ckYcsK7f0dtGOCkcw0ZeZ3cJ6eej
 fR1gcqGGrMzaG42fANhhJhNehrAo3SvSUFWDZlJyhJV0ByJygi9sVj8AR+GkovdGv5BR
 JUUP1XuSns7i7wmKDH/GEZTTyLojmAbq6qf8L6cEAq5JUjg5g/jt+WS8RQF9gDQVC41o
 L5WGiuukps3c75eL4PYWti2jCdeJlZRbEKr39b5XJH3Mdz7JXx2Xtmz/Tr03BOHyTeWf
 cmMMGlTpK8Obs44cVayNxKx/SgqjKawIUmrTmCIahSFijR2GR92F2itQEFFVXQ/NApEF
 wQXw==
X-Gm-Message-State: AGi0PubV2qySIYbF4j2SGT2pGqT7753FNMMe8A/oBvbvyVg7YnCNLeM2
 vY3NRJaaZWlPYr8GdJncbzQ=
X-Google-Smtp-Source: APiQypLf4GYAApxzBmDRzlKwxgfnRYCix/En4wffCcYg/HtKlmC9eipF8nl44R9NTiF6uj8+uxo2DA==
X-Received: by 2002:a05:600c:2109:: with SMTP id
 u9mr18804456wml.75.1589382420825; 
 Wed, 13 May 2020 08:07:00 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-234.amazon.com. [54.240.197.234])
 by smtp.gmail.com with ESMTPSA id j206sm27149721wmj.20.2020.05.13.08.06.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 13 May 2020 08:07:00 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-3-paul@xen.org>
 <70d94284-264b-b03d-1577-fafcf125a9b1@suse.com>
In-Reply-To: <70d94284-264b-b03d-1577-fafcf125a9b1@suse.com>
Subject: RE: [PATCH v2 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
Date: Wed, 13 May 2020 16:06:59 +0100
Message-ID: <001001d62938$26edbbb0$74c93310$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIOC3/NwyZzJjhdRz0oBcI7Is/lxwJNGvp/Al3A28moEP8owA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 29 April 2020 15:51
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>; Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>;
> Stefano Stabellini <sstabellini@kernel.org>
> Subject: Re: [PATCH v2 2/5] xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
> 
> On 07.04.2020 19:38, Paul Durrant wrote:
> > @@ -358,6 +359,113 @@ static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
> >      return ERR_PTR(ret);
> >  }
> >
> > +struct domctl_context
> > +{
> > +    void *buffer;
> > +    size_t len;
> > +    size_t cur;
> > +};
> > +
> > +static int accumulate_size(void *priv, const void *data, size_t len)
> > +{
> > +    struct domctl_context *c = priv;
> > +
> > +    if ( c->len + len < c->len )
> > +        return -EOVERFLOW;
> > +
> > +    c->len += len;
> > +
> > +    return 0;
> > +}
> > +
> > +static int save_data(void *priv, const void *data, size_t len)
> > +{
> > +    struct domctl_context *c = priv;
> > +
> > +    if ( c->len - c->cur < len )
> > +        return -ENOSPC;
> > +
> > +    memcpy(c->buffer + c->cur, data, len);
> > +    c->cur += len;
> > +
> > +    return 0;
> > +}
> > +
> > +static int getdomaincontext(struct domain *d,
> > +                            struct xen_domctl_getdomaincontext *gdc)
> > +{
> > +    struct domctl_context c = { };
> 
> Please can you use ZERO_BLOCK_PTR or some such for the buffer
> field, such that erroneous use of the field would not end up
> as a (PV-controllable) NULL deref. (Yes, it's a domctl, but
> still.) This being common code you also want to get things
> right for Arm, irrespective of whether the code will be dead
> there for now.
> 

Ok.

> > +    int rc;
> > +
> > +    if ( d == current->domain )
> > +        return -EPERM;
> > +
> > +    if ( guest_handle_is_null(gdc->buffer) ) /* query for buffer size */
> > +    {
> > +        if ( gdc->size )
> > +            return -EINVAL;
> > +
> > +        /* dry run to acquire buffer size */
> > +        rc = domain_save(d, accumulate_size, &c, true);
> > +        if ( rc )
> > +            return rc;
> > +
> > +        gdc->size = c.len;
> > +        return 0;
> > +    }
> > +
> > +    c.len = gdc->size;
> > +    c.buffer = xmalloc_bytes(c.len);
> 
> What sizes are we looking at here? It may be better to use vmalloc()
> right from the start.

Could be quite big so that seems reasonable.

> If not, I'd like to advocate for using
> xmalloc_array() instead of xmalloc_bytes() - see the almost-XSA
> commit cf38b4926e2b.
> 
> > +    if ( !c.buffer )
> > +        return -ENOMEM;
> > +
> > +    rc = domain_save(d, save_data, &c, false);
> > +
> > +    gdc->size = c.cur;
> > +    if ( !rc && copy_to_guest(gdc->buffer, c.buffer, gdc->size) )
> 
> As to my remark in patch 1 on the size field, applying to this size
> field too - copy_to_user{,_hvm}() don't support a 64-bit value (on
> x86 at least).

Ok.

> 
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -38,7 +38,7 @@
> >  #include "hvm/save.h"
> >  #include "memory.h"
> >
> > -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000012
> > +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000013
> 
> I don't see you making any change making the interface backwards
> incompatible, hence no need for the bump.
> 

Ok.

> > @@ -1129,6 +1129,44 @@ struct xen_domctl_vuart_op {
> >                                   */
> >  };
> >
> > +/*
> > + * Get/Set domain PV context. The same struct xen_domctl_domaincontext
> > + * is used for both commands but with slightly different field semantics
> > + * as follows:
> > + *
> > + * XEN_DOMCTL_getdomaincontext
> > + * ---------------------------
> > + *
> > + * buffer (IN):   The buffer into which the context data should be
> > + *                copied, or NULL to query the buffer size that should
> > + *                be allocated.
> > + * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
> > + *                zero, and the value passed out will be the size of the
> > + *                buffer to allocate.
> > + *                If 'buffer' is non-NULL then the value passed in must
> > + *                be the size of the buffer into which data may be copied.
> 
> This leaves open whether the size also gets updated in this latter
> case.

I'll clarify.

> 
> > + */
> > +struct xen_domctl_getdomaincontext {
> > +    uint64_t size;
> 
> If this is to remain 64-bits (with too large values suitably taken
> care of for all cases - see above), uint64_aligned_t please for
> consistency, if nothing else.

I've changed to uint32_t.

  Paul

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Wed May 13 15:11:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:11:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYt2H-0006e7-77; Wed, 13 May 2020 15:11:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVKH=63=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYt2G-0006e2-5m
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:11:04 +0000
X-Inumbo-ID: f59916f8-952b-11ea-a3a3-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f59916f8-952b-11ea-a3a3-12813bfff9fa;
 Wed, 13 May 2020 15:11:02 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 007CE20708;
 Wed, 13 May 2020 15:11:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589382662;
 bh=cublrR0e2N/Q0X3vvE/ueAymcHFgeJ20H/1fElESiZs=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=HAOU73QpPam2s2+/biUZW3/u8Ma0SerNAZBNYLiEjMDKOsk6l0PWxiCS5ShuGYWmR
 wXaPe+Kag+fjRgoZnMGrWx0EqEpI95teZVxwoFLubfrTAIIT+u+1dytrrLV3bC+VzX
 eIswv9238ZW6XlPVDR4ayfIcGXjiVrUh7B5TKzLQ=
Date: Wed, 13 May 2020 08:11:00 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <42253259-9663-67e8-117f-8ba631243585@xen.org>
Message-ID: <alpine.DEB.2.21.2005130810270.26167@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <20200504124453.GI9902@minyard.net>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
 <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
 <42253259-9663-67e8-117f-8ba631243585@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "minyard@acm.org" <minyard@acm.org>, roman@zededa.com,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 13 May 2020, Julien Grall wrote:
> Hi,
> 
> On 13/05/2020 01:33, Stefano Stabellini wrote:
> > I worked with Roman to do several more tests and here is an update on
> > the situation. We don't know why my patch didn't work when Boris' patch
> > [1] worked.  Both of them should have worked the same way.
> > 
> > Anyway, we continued with Boris' patch to debug the new mmc error, which
> > looks like this:
> > 
> > [    3.084464] mmc0: unrecognised SCR structure version 15
> > [    3.089176] mmc0: error -22 whilst initialising SD card
> > 
> > I asked to add a lot of tracing; see the attached diff.
> 
> Please avoid attachment on mailing list and use pastebin for diff.
> 
> > We discovered a bug
> > in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
> > xen_swiotlb_init, start_dma_addr is not set correctly. This one-line
> > patch fixes it:
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index 0a40ac332a4c..b75c43356eba 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
> >           * IO TLB memory already allocated. Just use it.
> >           */
> >          if (io_tlb_start != 0) {
> > +               start_dma_addr = io_tlb_start;
> >                  xen_io_tlb_start = phys_to_virt(io_tlb_start);
> >                  goto end;
> >          }
> > 
> > Unfortunately it doesn't solve the mmc0 error.
> > 
> > 
> > As you might notice from the logs, none of the other interesting printks
> > printed anything, which seems to mean that the memory allocated by
> > xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should be
> > just fine.
> > 
> > I am starting to run out of ideas. Do you guys have any suggestions on
> > what could be the issue or what could be done to discover it?
> 
> Sorry if my suggestions are going to be obvious, but I can't confirm whether
> this was already done:
>     1) Does the kernel boot on baremetal (i.e. without Xen)? This should help
> to confirm whether the bug is Xen-related.

Yes, it boots.

>     2) Swiotlb should not be necessary for basic dom0 boot on Arm. Did you try
> to disable it? This should help to confirm whether swiotlb is the problem or
> not.

It boots with swiotlb-xen disabled.


>     3) Did you track down how the MMC read the SCR structure?
 
Not yet


From xen-devel-bounces@lists.xenproject.org Wed May 13 15:16:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYt6s-0006pO-Ur; Wed, 13 May 2020 15:15:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYt6r-0006pJ-0e
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:15:49 +0000
X-Inumbo-ID: 9fb49acc-952c-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fb49acc-952c-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 15:15:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D6241AC20;
 Wed, 13 May 2020 15:15:49 +0000 (UTC)
Subject: Re: [PATCH] x86/build32: Discard all orphaned sections
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200512191108.6461-1-andrew.cooper3@citrix.com>
 <a1d1135a-8f9c-81c3-5fc8-bbc3787ebd0f@suse.com>
 <e11b2b5b-5504-f2a3-f1d8-986bc97a4b78@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <875d5449-dde0-e675-fb1e-c2b3ed238674@suse.com>
Date: Wed, 13 May 2020 17:15:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e11b2b5b-5504-f2a3-f1d8-986bc97a4b78@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.05.2020 17:00, Andrew Cooper wrote:
> On 13/05/2020 10:13, Jan Beulich wrote:
>>
>> On 12.05.2020 21:11, Andrew Cooper wrote:
>>> @@ -47,6 +47,14 @@ SECTIONS
>>>           *
>>>           * Please check build32.mk for more details.
>>>           */
>>> -        /* *(.got.plt) */
>>> +        *(.got.plt)
>>> +  }
>>> +
>>> +  /DISCARD/ : {
>>> +        /*
>>> +         * Discard everything else, to prevent linkers from putting
>>> +         * orphaned sections ahead of .text, which needs to be first.
>>> +         */
>>> +        *(*)
>>>    }
>>>  }
>> To be honest I'm not sure if this isn't going too far. Much
>> depends on what would happen if a new real section appeared
>> that needs retaining.
> 
> Anything which is important enough will result in a linker error.
> 
>> Granted the linker may then once again
>> put it at the beginning of the image, and we'll be screwed
>> again, just like we'll be screwed if a section gets discarded
>> by mistake.
> 
> This way is more likely to result in a build failure than an inability
> to boot the resulting build of Xen.
> 
>> But it would be really nice if we had a way to
>> flag the need to play with the linker script. Hence perhaps
>> on new enough toolchains we indeed may want to make use of
>> --orphan-handling= ? And then discard just .note and .note.*
>> here?
> 
> The only valid option would be =error, but experimenting with that yields
> 
> ld: error: unplaced orphan section `.comment' from `cmdline.o'
> ld: error: unplaced orphan section `.note.GNU-stack' from `cmdline.o'
> ld: error: unplaced orphan section `.note.gnu.property' from `cmdline.o'
> ld: error: unplaced orphan section `.rel.got' from `cmdline.o'
> ld: error: unplaced orphan section `.got' from `cmdline.o'
> ld: error: unplaced orphan section `.got.plt' from `cmdline.o'
> ld: error: unplaced orphan section `.iplt' from `cmdline.o'
> ld: error: unplaced orphan section `.rel.iplt' from `cmdline.o'
> ld: error: unplaced orphan section `.igot.plt' from `cmdline.o'
> 
> which I think is going to get us massively bogged down in toolchain
> specifics.  I'm not entirely convinced this would be a good move.

That's ugly indeed; in particular it is worrying that the .rel.*
sections appear there. Hence, for the patch as is:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan
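
[Editorial sketch] The pattern acked above can be shown in a skeleton linker
script (illustrative only, not the actual Xen build32 script): an explicit
/DISCARD/ output section with a *(*) wildcard swallows every input section
not placed earlier, so the linker cannot emit an orphan ahead of .text:

```
SECTIONS
{
  .text : {
        *(.text)
        *(.text.*)
        *(.rodata)
        /* ... only the sections the image actually needs ... */
  }

  /DISCARD/ : {
        /*
         * Anything not matched above is dropped rather than placed as
         * an orphan, which some linkers would otherwise put first.
         */
        *(*)
  }
}
```

A section that genuinely needs retaining then has to be listed explicitly,
which tends to surface as a link failure rather than a silently unbootable
image.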


From xen-devel-bounces@lists.xenproject.org Wed May 13 15:21:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:21:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYtCR-0007eK-K8; Wed, 13 May 2020 15:21:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dqM3=63=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jYtCR-0007eF-1w
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:21:35 +0000
X-Inumbo-ID: 6e0756f8-952d-11ea-a3a9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e0756f8-952d-11ea-a3a9-12813bfff9fa;
 Wed, 13 May 2020 15:21:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 29CF4B0BE;
 Wed, 13 May 2020 15:21:36 +0000 (UTC)
Subject: Re: [PATCH] x86/mwait: remove unneeded local variables
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200511102128.36840-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2e97f534-5f34-5e18-4377-865bfb9bd228@suse.com>
Date: Wed, 13 May 2020 17:21:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200511102128.36840-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 12:21, Roger Pau Monne wrote:
> Remove the eax and cstate local variables; the same values can be
> fetched directly from acpi_processor_cx without any transformation.
> 
> No functional change.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

First patch for the 4.15 folder ...

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 13 15:27:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYtIC-0007rp-9P; Wed, 13 May 2020 15:27:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mpYu=63=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jYtIB-0007rk-3k
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:27:31 +0000
X-Inumbo-ID: 423dd6b8-952e-11ea-ae69-bc764e2007e4
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 423dd6b8-952e-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 15:27:30 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id y16so14241008wrs.3
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 08:27:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=9atujlcgusW5HYpnti+tAVTbDLW9h5r5ESSqAQKGoYI=;
 b=El8k9x9pPnRMmeA4DcwlA88LCHObx9eQg+njZXhqgT8e7w5M57l2ve8PfzaL8RTOHN
 TS8cm0m59jmLR9gk0Tm7Jw++bPV9DeJH0WEzE2CPdf4e5X7C7eR+k5P42yFmKkA6d4eK
 6u+4X13rWSxiZlXbe0BKT8Xh8+4Ee+zvg58MS1AeogRsgf3Y/75F7fxvdZ0tyjiRGR8j
 MezCj1ZOqurKxZjSs2f+skYNVKY35UTZhxfO3uvzwJFpcYNG2eEcAEBlPAtBnhKYrOaL
 mrXClGktVhaXxw/LUKiqufmhPNDe3L8ryu056j7Iu3cHms+pZjC1CR1Dfizxl7qUxOTU
 DJew==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=9atujlcgusW5HYpnti+tAVTbDLW9h5r5ESSqAQKGoYI=;
 b=smir2kiYR2JooEi1DlioUVJFjeT49xI7/PFCk8JEkR0u8ZaToDQpwmNMI4Y732M9lM
 hbymR9Nsy0dTygdVnxqcZV/j6iDB3bOCpDXKkXwQVkFN1EyjWLwGEZ/tFRJhFshpXNTM
 BZ9l75OChGyds1ik1SG/z+u+fQFV0BJtZLP/qJiLlmA0ZYcJLwEZO4HJ91TJM9R17CKr
 psT/Bezp2HBHaD7m/64Wp3C16f/hWtWWLMR2lJjedWw2pL62u1YdAoQJ45E1+eEqtC0k
 jWlmW/DtA70PAqRxJUhGunLTbfLMfn62cYwE/p9boEcyU7N+8P6vtzXEIdOwj+8U29fT
 ARDA==
X-Gm-Message-State: AGi0PuakN0u+4r+42p8HW4HfoowsucmP8omrCgKmLSjPWgitzgSoBOBk
 PxH8Z0Qm+O+EmRMeWvGRBUA=
X-Google-Smtp-Source: APiQypIFooOW2UeOBm1MA4feXuxZvob1heQ0TNb+VHV4u/m0Ozp8GHgvPm6t1GG09iuSH7Onygbe4g==
X-Received: by 2002:a5d:56c6:: with SMTP id m6mr31013234wrw.78.1589383649674; 
 Wed, 13 May 2020 08:27:29 -0700 (PDT)
Received: from CBGR90WXYV0 (54-240-197-226.amazon.com. [54.240.197.226])
 by smtp.gmail.com with ESMTPSA id 89sm27716332wrj.37.2020.05.13.08.27.28
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 13 May 2020 08:27:29 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200407173847.1595-1-paul@xen.org>
 <20200407173847.1595-4-paul@xen.org>
 <76e91373-1f7c-bd68-2800-83163fb22aa2@suse.com>
In-Reply-To: <76e91373-1f7c-bd68-2800-83163fb22aa2@suse.com>
Subject: RE: [PATCH v2 3/5] tools/misc: add xen-domctx to present domain
 context
Date: Wed, 13 May 2020 16:27:28 +0100
Message-ID: <001101d6293b$0374bdc0$0a5e3940$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIOC3/NwyZzJjhdRz0oBcI7Is/lxwFxlJcKATUa0IqoISjFcA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>, 'Wei Liu' <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 29 April 2020 16:04
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: [PATCH v2 3/5] tools/misc: add xen-domctx to present domain context
> 
> On 07.04.2020 19:38, Paul Durrant wrote:
> > +int main(int argc, char **argv)
> > +{
> > +    uint32_t domid;
> > +    unsigned int entry;
> > +    xc_interface *xch;
> > +    int rc;
> > +
> > +    if ( argc != 2 || !argv[1] || (rc = atoi(argv[1])) < 0 )
> > +    {
> > +        fprintf(stderr, "usage: %s <domid>\n", argv[0]);
> > +        exit(1);
> > +    }
> 
> Perhaps also allow dumping just a single (vCPU or other) ID?
> 

Yes, I can add optional type and instance arguments.

  Paul

> Jan
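
[Editorial sketch] The optional type/instance arguments agreed above might
parse along these lines. parse_args() and parse_num() are hypothetical
stand-ins, not the actual xen-domctx code; strtol() is used instead of
atoi() so that a malformed number is rejected rather than silently read
as 0:

```c
#include <stdlib.h>

/* Parse one non-negative decimal argument; returns -1 on malformed input. */
static long parse_num(const char *s)
{
    char *end;
    long v = strtol(s, &end, 10);

    return (end == s || *end || v < 0) ? -1 : v;
}

/*
 * Hypothetical "<domid> [<type> [<instance>]]" parser. A missing
 * optional argument comes back as -1, meaning "dump everything".
 */
static int parse_args(int argc, char **argv,
                      long *domid, long *type, long *instance)
{
    if ( argc < 2 || argc > 4 )
        return -1;

    *domid = parse_num(argv[1]);
    *type = (argc > 2) ? parse_num(argv[2]) : -1;
    *instance = (argc > 3) ? parse_num(argv[3]) : -1;

    /* domid is mandatory; explicitly given optional arguments must parse. */
    if ( *domid < 0 || (argc > 2 && *type < 0) || (argc > 3 && *instance < 0) )
        return -1;

    return 0;
}
```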



From xen-devel-bounces@lists.xenproject.org Wed May 13 15:43:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYtXn-00014B-Na; Wed, 13 May 2020 15:43:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hhv+=63=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYtXm-00013Z-NX
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:43:38 +0000
X-Inumbo-ID: 82b47a06-9530-11ea-b07b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82b47a06-9530-11ea-b07b-bc764e2007e4;
 Wed, 13 May 2020 15:43:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589384617;
 h=from:to:cc:subject:date:message-id:references:
 in-reply-to:content-id:content-transfer-encoding: mime-version;
 bh=Mkr6Uo3lyE7hiZ4l/x1maj4DgXhMgoRRrV0ngc5bcfc=;
 b=MyK8o+CZZIdMhAxEOlZ7MUvnxhiHWzq2N2vZ6TeR5FmRJZccChf27Pcb
 1eM0R9is1YfznW2yb/I+J+qCqCU2MLt6B5SsojIt8n7OSnyQKkCGrWBtf
 oh64ggZpUrPBSMp51LjoCocjW+4kyfiozAhEOhhiGXdhQyCUgDFEqjla6 g=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: gHBWn170w+QERJvitPyjhI6I5WutqgBYKlSU5TnG9J4meSkQvZryi2Aqd/PN8DktGUuMwLiN+6
 Uy35PthhOdXqRpl5dMpeGI0RYXQv+X/taQzVL0tvozE34Nfna0+gJZlkNRP2TDNMe1KggfRXMI
 +zDAFTllM4z10p9OvkWkXi2HbYdl3/4/rMYxMbRDjQGXKp1CwfbLAK+O2SLelXboStnm25YY0y
 dmeclml9xrWdYtOLg980eEihE1KQeG7xUYy7zTNXlYRuuUmijPVAIDSVaqlxj4EC42KqT28GvL
 x8A=
X-SBRS: 2.7
X-MesageID: 17704893
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,388,1583211600"; d="scan'208";a="17704893"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH v3 1/1] golang/xenlight: add necessary module/package
 documentation
Thread-Topic: [PATCH v3 1/1] golang/xenlight: add necessary module/package
 documentation
Thread-Index: AQHWKTFf897wv5R+lky5jSbM2UASCaimBpqA
Date: Wed, 13 May 2020 15:43:33 +0000
Message-ID: <CF9BC8CA-8A63-4569-AA49-A33A14D858CE@citrix.com>
References: <cover.1589379056.git.rosbrookn@ainfosec.com>
 <fa80be6bf52005db0e54fd8dd74a9ff855a5316f.1589379056.git.rosbrookn@ainfosec.com>
In-Reply-To: <fa80be6bf52005db0e54fd8dd74a9ff855a5316f.1589379056.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <536A57941FE4644C8297FCD5908EFF12@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Jan Beulich <jbeulich@suse.com>, Ian
 Jackson <Ian.Jackson@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On May 13, 2020, at 3:18 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>
> Add a README and package comment giving a brief overview of the package.
> These also help pkg.go.dev generate better documentation.
>
> Also, add a copy of the LGPL (the same license used by libxl) to
> tools/golang/xenlight. This is required for the package to be shown
> on pkg.go.dev and added to the default module proxy, proxy.golang.org.
>
> Finally, add an entry for the xenlight package to SUPPORT.md.
>
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>



From xen-devel-bounces@lists.xenproject.org Wed May 13 15:44:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:44:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYtYE-00015c-0C; Wed, 13 May 2020 15:44:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yQOZ=63=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jYtYC-00015T-7A
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:44:04 +0000
X-Inumbo-ID: 9211a456-9530-11ea-a3ac-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9211a456-9530-11ea-a3ac-12813bfff9fa;
 Wed, 13 May 2020 15:44:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/7oErC2oD1FcQ4fz06/kz/o4eYcq6WUr8ULBeUXJrcs=; b=Mq0bonYakbRr3UnPTZ9WkzH/8R
 G2Kf6Bpl0KPZK8TutwBptlA3bz5URgiBzHc7xn1BKoKmgEjwqtjGfahzxnU/AAKDJKOAqCTazFBRk
 AOZDHlk+S0EaSFyvlYTzUvMmYBASA/J37D+Y30EdN5Mcu7XJ1CZCtcpxmQBuSvjh5xns=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jYtYA-0001ra-Kp; Wed, 13 May 2020 15:44:02 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jYtYA-000716-8Y; Wed, 13 May 2020 15:44:02 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] domain_page: handle NULL within unmap_domain_page() itself
Date: Wed, 13 May 2020 16:43:33 +0100
Message-Id: <a3ddf0c755227a3c742f6b93783c576135a86874.1589384602.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

The macro version UNMAP_DOMAIN_PAGE() does both NULL checking and
variable clearing. Move the NULL check into the function itself so that
the semantics are consistent with other similar constructs like XFREE().
This also eases the use of unmap_domain_page() in error handling paths,
where we only care about the NULL check but not about variable clearing.

Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
---
 xen/arch/arm/mm.c             | 3 +++
 xen/arch/x86/domain_page.c    | 2 +-
 xen/include/xen/domain_page.h | 7 ++-----
 3 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 727107eefa..1b14f49345 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -498,6 +498,9 @@ void unmap_domain_page(const void *va)
     lpae_t *map = this_cpu(xen_dommap);
     int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
 
+    if ( !va )
+        return;
+
     local_irq_save(flags);
 
     ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index dd32712d2f..b03728e18e 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -181,7 +181,7 @@ void unmap_domain_page(const void *ptr)
     unsigned long va = (unsigned long)ptr, mfn, flags;
     struct vcpu_maphash_entry *hashent;
 
-    if ( va >= DIRECTMAP_VIRT_START )
+    if ( !va || va >= DIRECTMAP_VIRT_START )
         return;
 
     ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
index ab2be7b719..a182d33b67 100644
--- a/xen/include/xen/domain_page.h
+++ b/xen/include/xen/domain_page.h
@@ -73,11 +73,8 @@ static inline void unmap_domain_page_global(const void *va) {};
 #endif /* !CONFIG_DOMAIN_PAGE */
 
 #define UNMAP_DOMAIN_PAGE(p) do {   \
-    if ( p )                        \
-    {                               \
-        unmap_domain_page(p);       \
-        (p) = NULL;                 \
-    }                               \
+    unmap_domain_page(p);           \
+    (p) = NULL;                     \
 } while ( false )
 
 #endif /* __XEN_DOMAIN_PAGE_H__ */
-- 
2.17.1
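
[Editorial sketch] The split this patch introduces (a NULL-tolerant
function plus a macro that additionally clears the variable) can be
sketched in plain C. demo_unmap() and DEMO_UNMAP() below are illustrative
stand-ins, not the Xen code:

```c
#include <stdbool.h>
#include <stddef.h>

static int unmap_calls; /* counts real unmap work, for demonstration */

/* Stand-in for unmap_domain_page(): tolerates NULL, like free()/xfree(). */
static void demo_unmap(const void *va)
{
    if ( !va )      /* NULL handled inside the function, as in the patch */
        return;
    unmap_calls++;  /* real teardown would happen here */
}

/*
 * Stand-in for UNMAP_DOMAIN_PAGE(): no NULL check needed any more; it
 * just forwards and then clears the variable, mirroring XFREE().
 */
#define DEMO_UNMAP(p) do {   \
    demo_unmap(p);           \
    (p) = NULL;              \
} while ( false )
```

With the NULL check inside the function, an error path can call
demo_unmap() unconditionally on a maybe-NULL pointer, and the macro stays
a thin "unmap and poison" wrapper.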



From xen-devel-bounces@lists.xenproject.org Wed May 13 15:52:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 15:52:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYtfy-00021c-SW; Wed, 13 May 2020 15:52:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYtfx-00021X-M5
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 15:52:05 +0000
X-Inumbo-ID: ac4d6891-9531-11ea-a3ae-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac4d6891-9531-11ea-a3ae-12813bfff9fa;
 Wed, 13 May 2020 15:51:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KR//BHBpdOA1Y3U+XBxEqbkPzttr+bdCD3hhbPBDVWI=; b=MFP72jQGYKpNwdb/Q/jUAytru
 zfdDwPkOBrijiVclT/Z0deyIkak8Apwyhb7/YMQkpFJAdncjJeldPXdzWY5GNC/coduCuarem0fcN
 iEB5aqn9SVlt7kvPN135dFR2gmfpslZGZKoPYk1nwP/LSHDvaE3Dxo4v9e4Bg5WXva08c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYtfp-00022M-0y; Wed, 13 May 2020 15:51:57 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYtfo-0001hc-LD; Wed, 13 May 2020 15:51:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYtfo-0001CD-KU; Wed, 13 May 2020 15:51:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150162-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150162: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=9d83ad86834300927b636fa02b29d84854399ed8
X-Osstest-Versions-That: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 15:51:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150162 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150162/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9d83ad86834300927b636fa02b29d84854399ed8
baseline version:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54

Last test of basis   150134  2020-05-11 10:01:13 Z    2 days
Testing same since   150162  2020-05-13 13:00:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Paul Durrant <pdurrant@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a82582b1af..9d83ad8683  9d83ad86834300927b636fa02b29d84854399ed8 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 13 16:41:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 16:41:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYuRR-0006h9-Nt; Wed, 13 May 2020 16:41:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYuRQ-0006h4-R7
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 16:41:08 +0000
X-Inumbo-ID: 89761ebf-9538-11ea-a3c0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89761ebf-9538-11ea-a3c0-12813bfff9fa;
 Wed, 13 May 2020 16:41:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+DeNWY2o7OnmecMtfuyyCDIRH0saZGKSgIHGDxcTC2k=; b=3Msr9NuloYAiOE2RZj1rXjVVX
 WduaGhGcif4OFONCWNF/W6nUKbvVKiUGZULK+Gc38ich0i01ICklrgeAmNhqbsB6FggHd6OzSeJkp
 3fP6NNJQZ8/1IqeOb095rNg83VNr4EVs8cV4GZKsQYsEA0S2RQOfBad9qCbupaFQi4kYM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYuRN-0003XW-QL; Wed, 13 May 2020 16:41:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYuRN-0003NR-B4; Wed, 13 May 2020 16:41:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYuRN-0006HW-AC; Wed, 13 May 2020 16:41:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150160-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150160: all pass - PUSHED
X-Osstest-Versions-This: ovmf=ceacd9e992cd12f3c07ae1a28a75a6b8750718aa
X-Osstest-Versions-That: ovmf=88899a372cfc44f8612315f4b43a084d1814fe69
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 16:41:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150160 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150160/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ceacd9e992cd12f3c07ae1a28a75a6b8750718aa
baseline version:
 ovmf                 88899a372cfc44f8612315f4b43a084d1814fe69

Last test of basis   150152  2020-05-12 19:39:31 Z    0 days
Failing since        150156  2020-05-13 06:10:56 Z    0 days    2 attempts
Testing same since   150160  2020-05-13 12:09:21 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Feng, YunhuaX <YunhuaX.Feng@intel.com>
  Michael Kubacki <michael.kubacki@microsoft.com>
  Yunhua Feng <yunhuax.feng@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   88899a372c..ceacd9e992  ceacd9e992cd12f3c07ae1a28a75a6b8750718aa -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 13 16:47:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 16:47:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYuX1-0006tb-Hb; Wed, 13 May 2020 16:46:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hhv+=63=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jYuX0-0006tW-7S
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 16:46:54 +0000
X-Inumbo-ID: 589de8a2-9539-11ea-ae69-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 589de8a2-9539-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 16:46:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589388413;
 h=from:to:subject:date:message-id:references:in-reply-to:
 content-id:content-transfer-encoding:mime-version;
 bh=7KufxJ0vfEQcrsvvTS6YGOCA5DoDPlmZ4ExMt5aHByw=;
 b=MwDaTBZrVIAEntMe4faNkIOYrzPPAoArmKh6sWVFyGFCThfsoRPpuxTK
 IqZTXNWn8vbytLIEJ3fcIau6g1+ecjFbg80SuOLHBBTwH1h1iTNyjZh+L
 cfh0Sl7ts+cwoYV6JPHpu8BnmdxKqu6eUAyPesPdEcHrCe+gnojeCBREv A=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 George.Dunlap@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 George.Dunlap@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="George.Dunlap@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="George.Dunlap@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=George.Dunlap@citrix.com;
 spf=Pass smtp.mailfrom=George.Dunlap@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: PT6EPIlIEuJNmuB0v2HRQUQ2Zzw7wHca56dj7FUacNNcUl+z3PG5i/WGQ5LgygDMv1M2WvbGO4
 oxDATWRgIrm7nvwC55C3Og/JmQGh1VTi+xipI55iuQkX/LDMqVigc/mzYrEIlEzwtwuPKnmmfy
 1LH5eVgF+B4G+dprFi8ng8LpY6wxXCKMaXj4aabo75fWQrDNHSLAdmAEJz7QyflVBTfCb4M/1z
 vQAYyXS221pyOYMm59zpZYpuvZXGO4StSF91r2u/wIACtGIWJu5pecIsu640NpawGAiS4u4T9w
 CLI=
X-SBRS: 2.7
X-MesageID: 17463570
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,388,1583211600"; d="scan'208";a="17463570"
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: UPDATE: XenSummit 2020 will be held virtually in JULY
Thread-Topic: UPDATE: XenSummit 2020 will be held virtually in JULY
Thread-Index: AQHWKUYYH5YWzVTOTEWd9iPfxbVutA==
Date: Wed, 13 May 2020 16:46:48 +0000
Message-ID: <BD0FAF84-990F-4DC5-AA71-C7859F1609A2@citrix.com>
References: <562E87BB-FAEE-42B3-BEF4-6E7A4D65269D@citrix.com>
In-Reply-To: <562E87BB-FAEE-42B3-BEF4-6E7A4D65269D@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <025580CDF2E2274F9556B5EA3DE553CB@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Just wanted to give an update to this:

After considering recommendations at the previous community call, consulting
with Linux Foundation staff, and running it past the Advisory Board, it has
been decided to move the event to JULY 6-9.

Time zone still to be decided.

Thanks to everyone who gave input.

 -George


From xen-devel-bounces@lists.xenproject.org Wed May 13 17:26:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 17:26:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYv8f-0001ox-CM; Wed, 13 May 2020 17:25:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYv8e-0001os-0W
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 17:25:48 +0000
X-Inumbo-ID: c7d9c6b4-953e-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c7d9c6b4-953e-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 17:25:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=3axj4JfmnBMJ7qehsUC9uDgGFn6gpeB2O51QHE0dvsE=; b=h/XEPwMjFXmLiNKUcxaN0+O93
 d7yRfv8pNBXo1ggJ6UldGqEjXMzKyfK9w441+uurZDw1ex/T/ob8JccJspMF9XeooBq3o9yGq3hg1
 /Wdqyh1KKG+QxR7W+2fctvs9KPO/vshGIIt4XoOdCCK6mbFetvSb5tYycsafMN0SyzVpc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYv8b-0004TO-V6; Wed, 13 May 2020 17:25:46 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYv8b-0005F7-Ng; Wed, 13 May 2020 17:25:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYv8b-0008Vj-MZ; Wed, 13 May 2020 17:25:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150154-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150154: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-saverestore.2:fail:heisenbug
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
X-Osstest-Versions-That: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 17:25:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150154 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150154/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds  17 guest-saverestore.2 fail in 150150 pass in 150154
 test-armhf-armhf-xl-rtds     12 guest-start                fail pass in 150150

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150150
 test-armhf-armhf-xl-rtds    13 migrate-support-check fail in 150150 never pass
 test-armhf-armhf-xl-rtds 14 saverestore-support-check fail in 150150 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150150
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150150
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150150
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150150
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150150
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150150
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150150
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150150
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150150
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
baseline version:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54

Last test of basis   150154  2020-05-13 04:09:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed May 13 17:54:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 17:54:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYvZd-0004Iq-H9; Wed, 13 May 2020 17:53:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYvZc-0004Il-KL
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 17:53:40 +0000
X-Inumbo-ID: aa565c34-9542-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa565c34-9542-11ea-b9cf-bc764e2007e4;
 Wed, 13 May 2020 17:53:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jEXiLkiIB/bAqJMREV2PQvYroas6M2gpTCuG/Im1jgY=; b=tDmxk1z8H9F+GAjvmyVsINYxk
 c17DelR8KeAyRdBmwbuhqfY5iRzMDpBhatpsVUTWkshpMc+ddwt4O7f4A2chLa166B5hEx3kfFfUN
 1pL9Ueg8p/+3GozrSKroEIIFK44/VpCvR6nXL8wT7JCNYD239syiM4uHikLqlPVlIbFRs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYvZW-00052C-5Q; Wed, 13 May 2020 17:53:34 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYvZV-0005zE-R0; Wed, 13 May 2020 17:53:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYvZV-0005TT-QN; Wed, 13 May 2020 17:53:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150163-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150163: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=61be48dc029294275348443f78a5e600ef28274f
X-Osstest-Versions-That: xen=9d83ad86834300927b636fa02b29d84854399ed8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 17:53:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150163 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150163/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 150162

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  61be48dc029294275348443f78a5e600ef28274f
baseline version:
 xen                  9d83ad86834300927b636fa02b29d84854399ed8

Last test of basis   150162  2020-05-13 13:00:23 Z    0 days
Testing same since   150163  2020-05-13 16:01:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 61be48dc029294275348443f78a5e600ef28274f
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Wed May 13 10:18:19 2020 -0400

    golang/xenlight: add necessary module/package documentation
    
    Add a README and package comment giving a brief overview of the package.
    These also help pkg.go.dev generate better documentation.
    
    Also, add a copy of the LGPL (the same license used by libxl) to
    tools/golang/xenlight. This is required for the package to be shown
    on pkg.go.dev and added to the default module proxy, proxy.golang.org.
    
    Finally, add an entry for the xenlight package to SUPPORT.md.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 13 18:20:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 18:20:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYvz9-0006U1-Q6; Wed, 13 May 2020 18:20:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mb4t=63=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYvz7-0006D8-Qu
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 18:20:01 +0000
X-Inumbo-ID: 5bb291b7-9546-11ea-a3f1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5bb291b7-9546-11ea-a3f1-12813bfff9fa;
 Wed, 13 May 2020 18:20:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8M5LPu96qCWo9M3ouaNwpZgV7DJQdhFtxTpmVytz0AY=; b=JNcTXvRnMJgPF9Qb51FzU4/ZR2
 t5vPM0Rtf8wf4bGUlNmBZuMsjZc9bvTL5Fy21o0c1eBOBoCTopVpxbTo7D90t1e12G1yNd5+0Mb4m
 8du26szNZq0nl0D5eyXJx4SPz+6/S5LGa1BhDWJg3jfuDWA/cgYbSvG/NRaiz5KsEVDg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYvz4-0005e7-Vm; Wed, 13 May 2020 18:19:58 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYvz4-00027p-N4; Wed, 13 May 2020 18:19:58 +0000
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Stefano Stabellini <sstabellini@kernel.org>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
 <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
 <42253259-9663-67e8-117f-8ba631243585@xen.org>
 <alpine.DEB.2.21.2005130810270.26167@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <d940d405-5706-c749-f546-c0c60528394d@xen.org>
Date: Wed, 13 May 2020 19:19:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005130810270.26167@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 "minyard@acm.org" <minyard@acm.org>, roman@zededa.com,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 13/05/2020 16:11, Stefano Stabellini wrote:
> On Wed, 13 May 2020, Julien Grall wrote:
>> Hi,
>>
>> On 13/05/2020 01:33, Stefano Stabellini wrote:
>>> I worked with Roman to do several more tests and here is an update on
>>> the situation. We don't know why my patch didn't work when Boris' patch
>>> [1] worked.  Both of them should have worked the same way.
>>>
>>> Anyway, we continued with Boris' patch to debug the new mmc error which
>>> looks like this:
>>>
>>> [    3.084464] mmc0: unrecognised SCR structure version 15
>>> [    3.089176] mmc0: error -22 whilst initialising SD card
>>>
>>> I asked to add a lot of tracing, see attached diff.
>>
>> Please avoid attachment on mailing list and use pastebin for diff.
>>
>>> We discovered a bug
>>> in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
>>> xen_swiotlb_init, start_dma_addr is not set correctly. This one-line
>>> patch fixes it:
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index 0a40ac332a4c..b75c43356eba 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>>>            * IO TLB memory already allocated. Just use it.
>>>            */
>>>           if (io_tlb_start != 0) {
>>> +               start_dma_addr = io_tlb_start;
>>>                   xen_io_tlb_start = phys_to_virt(io_tlb_start);
>>>                   goto end;
>>>           }
>>>
>>> Unfortunately it doesn't solve the mmc0 error.
>>>
>>>
>>> As you might notice from the logs, none of the other interesting printks
>>> printed anything, which seems to mean that the memory allocated by
>>> xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should be
>>> just fine.
>>>
>>> I am starting to be out of ideas. Do you guys have any suggestions on
>>> what could be the issue or what could be done to discover it?
>>
>> Sorry if my suggestions are going to be obvious, but I can't confirm whether
>> this was already done:
>>      1) Does the kernel boot on baremetal (i.e without Xen)? This should help
>> to confirm whether the bug is Xen related.
> 
> Yes it boots
> 
>>      2) Swiotlb should not be necessary for basic dom0 boot on Arm. Did you try
>> to disable it? This should help to confirm whether swiotlb is the problem or
>> not.
> 
> It boots disabling swiotlb-xen

Thank you for the answer! swiotlb-xen should basically be a NOP for 
dom0. So this suggests swiotlb is doing some transformation on the DMA 
address.

I have an idea what may have gone wrong. AFAICT, xen-swiotlb seems to 
assume the DMA address space and CPU address space are the same. Is the 
RPi using the same address space?

As an aside, I can't find the 1=== and === from the log in your diff. So 
I am not entirely sure which path is used. Can you provide the associated 
log with your diff?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 13 18:26:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 18:26:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYw53-00071n-IT; Wed, 13 May 2020 18:26:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Mb4t=63=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jYw52-00071i-MF
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 18:26:08 +0000
X-Inumbo-ID: 369cede4-9547-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 369cede4-9547-11ea-9887-bc764e2007e4;
 Wed, 13 May 2020 18:26:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wSX6tJyNd3CxsKFGFlh1ikl3tEtS4VnP7ZhP/NqpZTI=; b=TfzHLoqDqTN4DXD/Sfs/y2IZn/
 0GeTJF6yM6+rf5kC8xUesoIB3OHzlMTaEIIn4qtSnByZE5xMTVyYHHF23MtsSn0ktj9qBShfqy2Xt
 GhQIIuQGVFQphmMrjusO+UEoIx/LvAmrzY3bAbbLmU4OV/J0pYv1y0tBE7/deuaI4r7A=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jYw50-0005lZ-LG; Wed, 13 May 2020 18:26:06 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jYw50-0002bj-Cm; Wed, 13 May 2020 18:26:06 +0000
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
From: Julien Grall <julien@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
 <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
 <42253259-9663-67e8-117f-8ba631243585@xen.org>
 <alpine.DEB.2.21.2005130810270.26167@sstabellini-ThinkPad-T480s>
 <d940d405-5706-c749-f546-c0c60528394d@xen.org>
Message-ID: <d19f82a9-160e-ccc5-ebf9-8eb397dbeb08@xen.org>
Date: Wed, 13 May 2020 19:26:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d940d405-5706-c749-f546-c0c60528394d@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 "minyard@acm.org" <minyard@acm.org>, roman@zededa.com,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13/05/2020 19:19, Julien Grall wrote:
> Hi,
> 
> On 13/05/2020 16:11, Stefano Stabellini wrote:
>> On Wed, 13 May 2020, Julien Grall wrote:
>>> Hi,
>>>
>>> On 13/05/2020 01:33, Stefano Stabellini wrote:
>>>> I worked with Roman to do several more tests and here is an update on
>>>> the situation. We don't know why my patch didn't work when Boris' patch
>>>> [1] worked.  Both of them should have worked the same way.
>>>>
>>>> Anyway, we continued with Boris' patch to debug the new mmc error which
>>>> looks like this:
>>>>
>>>> [    3.084464] mmc0: unrecognised SCR structure version 15
>>>> [    3.089176] mmc0: error -22 whilst initialising SD card
>>>>
>>>> I asked to add a lot of tracing, see attached diff.
>>>
>>> Please avoid attachment on mailing list and use pastebin for diff.
>>>
>>>> We discovered a bug
>>>> in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
>>>> xen_swiotlb_init, start_dma_addr is not set correctly. This one-line
>>>> patch fixes it:
>>>>
>>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>>> index 0a40ac332a4c..b75c43356eba 100644
>>>> --- a/drivers/xen/swiotlb-xen.c
>>>> +++ b/drivers/xen/swiotlb-xen.c
>>>> @@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>>>>            * IO TLB memory already allocated. Just use it.
>>>>            */
>>>>           if (io_tlb_start != 0) {
>>>> +               start_dma_addr = io_tlb_start;
>>>>                   xen_io_tlb_start = phys_to_virt(io_tlb_start);
>>>>                   goto end;
>>>>           }
>>>>
>>>> Unfortunately it doesn't solve the mmc0 error.
>>>>
>>>>
>>>> As you might notice from the logs, none of the other interesting 
>>>> printks
>>>> printed anything, which seems to mean that the memory allocated by
>>>> xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should be
>>>> just fine.
>>>>
>>>> I am starting to be out of ideas. Do you guys have any suggestions on
>>>> what could be the issue or what could be done to discover it?
>>>
>>> Sorry if my suggestions are going to be obvious, but I can't confirm 
>>> whether
>>> this was already done:
>>>      1) Does the kernel boot on baremetal (i.e without Xen)? This 
>>> should help
>>> to confirm whether the bug is Xen related.
>>
>> Yes it boots
>>
>>>      2) Swiotlb should not be necessary for basic dom0 boot on Arm. 
>>> Did you try
>>> to disable it? This should help to confirm whether swiotlb is the 
>>> problem or
>>> not.
>>
>> It boots disabling swiotlb-xen
> 
> Thank you for the answer! swiotlb-xen should basically be a NOP for 
> dom0. So this suggests swiotlb is doing some transformation on the DMA 
> address.
> 
> I have an idea what may have gone wrong. AFAICT, xen-swiotlb seems to 
> assume the DMA address space and CPU address space are the same. Is the 
> RPi using the same address space?

Another question: is the DMA request bounced? If so, are we sure the 
bounce buffer is in the first GB?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 13 19:14:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 19:14:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYwoz-0002jZ-8U; Wed, 13 May 2020 19:13:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=GJAX=63=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jYwoy-0002jU-AX
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 19:13:36 +0000
X-Inumbo-ID: d78abb4a-954d-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d78abb4a-954d-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 19:13:35 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id g20so458634qvb.9
 for <xen-devel@lists.xenproject.org>; Wed, 13 May 2020 12:13:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=U6iZF1caH6VOddxvTW47XGR3M7B27p0oqCJdQgf7rTY=;
 b=geID/LW/yYgYejhFNX9qNwxv58iVYiSm0XWfBcxlFDpWZToQwPCHdH3rO6LlLNjDnR
 JPjpyCNDaH0OsBeaFfJSd1G96JifU3x1BoBVRAEIGmNby39IFvkRPj7PbOnBd64BBBZZ
 ipqRTtNY5PORkT+tYsSKbc1MQeRfEcXu+LiJ2I0n4qOtRXcWVA9k1/UEJBMzDRMVJyva
 sv5jl/J6EDNAMhoSOT1UcIlrx+h8nYZZUSZVkD4LmvaxaTbo4g16vDiECzpvYIX68oFk
 O05Tc1hXnG7GlT+Ye/ca3jOH2tGJFLrY1p/+U5WaUmWLqfeBIjc/NNBfzYa+FLoGhoMf
 Amkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=U6iZF1caH6VOddxvTW47XGR3M7B27p0oqCJdQgf7rTY=;
 b=fh+cM4JDhNHIu8ENNS3krI1SupGaJGfzAVQbEhnDL2nLg7/r1QV88sfOLZ8Qn83G3C
 bBIwGAgB/gZz/Yf1WRBAUVNd9RRBDzOW/KI9E2eO6eVVMaLZKZkadQ5ZWZtlb/vFXbDQ
 Pmn5b/iHalMl6ge8kabDn0if5YV42T3k/+PNaOQBLMjyZc0ZP1EZgimNVasFwLIXBFXD
 eWvOgMsyqqfg11Bu2klK+OtYcuf5DMdEDq/3h7sVGI1jcVBTbBytp3ke1pvDQ8rexk5q
 4rCwHbVYzAH2tZdDrHQC3591gExpWmJkanq/HcgjDLRI8kcq18Y8qxTxbDndHRBtsIkX
 aAsw==
X-Gm-Message-State: AOAM530udSZdYwnwVGtzeEGc+pRN5oRguFLoVUiNZpFq/woEnVCgh8Xb
 lwAkKZrQ2LkEbVGzr9Oy7K2odC9Xs9brpxi7SSOLXA==
X-Google-Smtp-Source: ABdhPJzz3LQYQzR9E9bZ5ZDZm3xzxXEZz2mzhS6Pjz7x7ThKCMFXj1/z7HriqNzRllKWONBHtLR60yOMPWDDc7zUt/c=
X-Received: by 2002:ad4:452d:: with SMTP id l13mr1172279qvu.19.1589397214563; 
 Wed, 13 May 2020 12:13:34 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <CAMmSBy-w1HOyGGCjB_nJcn2xw+4sNdDrtJ8PC3foaxJOtdRmnQ@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
 <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
 <42253259-9663-67e8-117f-8ba631243585@xen.org>
 <alpine.DEB.2.21.2005130810270.26167@sstabellini-ThinkPad-T480s>
 <d940d405-5706-c749-f546-c0c60528394d@xen.org>
In-Reply-To: <d940d405-5706-c749-f546-c0c60528394d@xen.org>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 13 May 2020 12:13:23 -0700
Message-ID: <CAMmSBy9+LnHcMJfgWRsiToz+pG2X_tRtxLnDAT-W+_9f--_x6g@mail.gmail.com>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
To: Julien Grall <julien@xen.org>
Content-Type: multipart/mixed; boundary="00000000000062113c05a58c60fd"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Peng Fan <peng.fan@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "minyard@acm.org" <minyard@acm.org>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--00000000000062113c05a58c60fd
Content-Type: text/plain; charset="UTF-8"

On Wed, May 13, 2020 at 11:20 AM Julien Grall <julien@xen.org> wrote:
>
> Hi,
>
> On 13/05/2020 16:11, Stefano Stabellini wrote:
> > On Wed, 13 May 2020, Julien Grall wrote:
> >> Hi,
> >>
> >> On 13/05/2020 01:33, Stefano Stabellini wrote:
> >>> I worked with Roman to do several more tests and here is an update on
> >>> the situation. We don't know why my patch didn't work when Boris' patch
> >>> [1] worked.  Both of them should have worked the same way.
> >>>
> >>> Anyway, we continued with Boris' patch to debug the new mmc error which
> >>> looks like this:
> >>>
> >>> [    3.084464] mmc0: unrecognised SCR structure version 15
> >>> [    3.089176] mmc0: error -22 whilst initialising SD card
> >>>
> >>> I asked to add a lot of tracing, see attached diff.
> >>
> >> Please avoid attachment on mailing list and use pastebin for diff.
> >>
> >>> We discovered a bug
> >>> in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
> >>> xen_swiotlb_init, start_dma_addr is not set correctly. This one-line
> >>> patch fixes it:
> >>>
> >>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> >>> index 0a40ac332a4c..b75c43356eba 100644
> >>> --- a/drivers/xen/swiotlb-xen.c
> >>> +++ b/drivers/xen/swiotlb-xen.c
> >>> @@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
> >>>            * IO TLB memory already allocated. Just use it.
> >>>            */
> >>>           if (io_tlb_start != 0) {
> >>> +               start_dma_addr = io_tlb_start;
> >>>                   xen_io_tlb_start = phys_to_virt(io_tlb_start);
> >>>                   goto end;
> >>>           }
> >>>
> >>> Unfortunately it doesn't solve the mmc0 error.
> >>>
> >>>
> >>> As you might notice from the logs, none of the other interesting printks
> >>> printed anything, which seems to mean that the memory allocated by
> >>> xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should be
> >>> just fine.
> >>>
> >>> I am starting to run out of ideas. Do you guys have any suggestions on
> >>> what could be the issue or what could be done to discover it?
> >>
> >> Sorry if my suggestions are going to be obvious, but I can't confirm whether
> >> this was already done:
> >>      1) Does the kernel boot on baremetal (i.e. without Xen)? This should
> >> help to confirm whether the bug is Xen-related.
> >
> > Yes it boots
> >
> >>      2) Swiotlb should not be necessary for basic dom0 boot on Arm. Did you try
> >> to disable it? This should help to confirm whether swiotlb is the problem or
> >> not.
> >
> > It boots with swiotlb-xen disabled
>
> Thank you for the answer! swiotlb-xen should basically be a NOP for
> dom0. So this suggests swiotlb is doing some transformation on the DMA
> address.
>
> I have an idea what may have gone wrong. AFAICT, xen-swiotlb seems to
> assume the DMA address space and the CPU address space are the same. Is
> the RPi using the same address space?
>
> As an aside, I can't find the 1=== and === from the log in your diff. So
> I am not entirely sure which path is used. Can you provide the associated
> log with your diff?

These are just extra printks to understand the code path better. The
complete patch is attached so we're all on the same page.

Thanks,
Roman.
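
The start_dma_addr bug discussed above boils down to a missed assignment on
an early-exit path. Here is a standalone C sketch of that pattern (names
mirror the kernel code, but the addresses and control flow are illustrative
assumptions, not the kernel implementation):

```c
#include <assert.h>
#include <stdint.h>

/* Consumed later by the driver; stays 0 if any init path forgets it. */
static uint64_t start_dma_addr;

static uint64_t swiotlb_init_sketch(uint64_t io_tlb_start)
{
	if (io_tlb_start != 0) {
		/* The one-line fix: keep start_dma_addr in sync on the
		 * "buffer already allocated" reuse path, not only on the
		 * allocation path below. */
		start_dma_addr = io_tlb_start;
		goto end;
	}

	/* Allocation path: pretend we placed the buffer at 896 MB. */
	io_tlb_start = 0x38000000ULL;
	start_dma_addr = io_tlb_start;
end:
	return start_dma_addr;
}
```

Before the fix, the reuse branch returned with start_dma_addr still holding
its stale (zero) value, which is exactly what the one-line patch quoted in
the thread addresses.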

--00000000000062113c05a58c60fd
Content-Type: text/plain; charset="US-ASCII"; name="patch.txt"
Content-Disposition: attachment; filename="patch.txt"
Content-Transfer-Encoding: 7bit
Content-ID: <f_ka5q1n230>
X-Attachment-Id: f_ka5q1n230

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index d40e9e5fc..fb21b542c 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -29,13 +29,11 @@ unsigned long xen_get_swiotlb_free_pages(unsigned int order)
 
 	for_each_memblock(memory, reg) {
 		if (reg->base < (phys_addr_t)0xffffffff) {
-			if (IS_ENABLED(CONFIG_ZONE_DMA32))
-				flags |= __GFP_DMA32;
-			else
-				flags |= __GFP_DMA;
+			flags |= __GFP_DMA;
 			break;
 		}
 	}
+	printk("DEBUG %s %d flags=%x\n",__func__,__LINE__,flags);
 	return __get_free_pages(flags, order);
 }
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d27762c..ff9d07e28 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	 * IO TLB memory already allocated. Just use it.
 	 */
 	if (io_tlb_start != 0) {
+		start_dma_addr = io_tlb_start;
 		xen_io_tlb_start = phys_to_virt(io_tlb_start);
 		goto end;
 	}
@@ -224,6 +225,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 		m_ret = XEN_SWIOTLB_ENOMEM;
 		goto error;
 	}
+printk("DEBUG %s %d start_virt=%lx order=%lu phys=%llx vmalloc?=%d\n",__func__,__LINE__,(unsigned long)xen_io_tlb_start, order, virt_to_phys(xen_io_tlb_start), is_vmalloc_addr(xen_io_tlb_start));
 	/*
 	 * And replace that memory with pages under 4GB.
 	 */
@@ -255,6 +257,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 	if (!rc)
 		swiotlb_set_max_segment(PAGE_SIZE);
 
+printk("DEBUG %s %d start=%llx end=%llx rc=%d\n",__func__,__LINE__,start_dma_addr,start_dma_addr+bytes,rc);
 	return rc;
 error:
 	if (repeat--) {
@@ -324,6 +327,10 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 		}
 		SetPageXenRemapped(virt_to_page(ret));
 	}
+if (!dma_capable(hwdev, *dma_handle, size, true))
+    printk("DEBUG1 %s %d phys=%llx dma=%llx dma_mask=%llx\n",__func__,__LINE__,phys,*dma_handle,dma_mask);
+if (dev_addr + size - 1 > dma_mask)
+    printk("DEBUG2 %s %d phys=%llx dma=%llx dma_mask=%llx\n",__func__,__LINE__,phys,*dma_handle,dma_mask);
 	memset(ret, 0, size);
 	return ret;
 }
@@ -335,6 +342,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
+	struct page *pg;
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -345,12 +353,16 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
-
+printk(KERN_CRIT "===1 before\n");
+	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) : virt_to_page(vaddr);
+printk(KERN_CRIT "==== middle\n");
+	BUG_ON(!pg);
+printk(KERN_CRIT "==== after\n");
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
-	    TestClearPageXenRemapped(virt_to_page(vaddr)))
+	    TestClearPageXenRemapped(pg))
 		xen_destroy_contiguous_region(phys, order);
-
+printk(KERN_CRIT "==== done\n");
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
 }
 
@@ -398,6 +410,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 * Ensure that the address returned is DMA'ble
 	 */
 	if (unlikely(!dma_capable(dev, dev_addr, size, true))) {
+printk("DEBUG3 %s %d phys=%llx dma=%llx\n",__func__,__LINE__,phys,dev_addr);
 		swiotlb_tbl_unmap_single(dev, map, size, size, dir,
 				attrs | DMA_ATTR_SKIP_CPU_SYNC);
 		return DMA_MAPPING_ERROR;
diff --git a/include/xen/arm/page-coherent.h b/include/xen/arm/page-coherent.h
index b9cc11e88..50c7a2e96 100644
--- a/include/xen/arm/page-coherent.h
+++ b/include/xen/arm/page-coherent.h
@@ -8,12 +8,17 @@
 static inline void *xen_alloc_coherent_pages(struct device *hwdev, size_t size,
 		dma_addr_t *dma_handle, gfp_t flags, unsigned long attrs)
 {
+	void *cpu_addr;
+	if (dma_alloc_from_dev_coherent(hwdev, size, dma_handle, &cpu_addr))
+		return cpu_addr;
 	return dma_direct_alloc(hwdev, size, dma_handle, flags, attrs);
 }
 
 static inline void xen_free_coherent_pages(struct device *hwdev, size_t size,
 		void *cpu_addr, dma_addr_t dma_handle, unsigned long attrs)
 {
+	if (dma_release_from_dev_coherent(hwdev, get_order(size), cpu_addr))
+		return;
 	dma_direct_free(hwdev, size, cpu_addr, dma_handle, attrs);
 }
 
--00000000000062113c05a58c60fd--


From xen-devel-bounces@lists.xenproject.org Wed May 13 19:14:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 19:14:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYwpv-0002mg-Ig; Wed, 13 May 2020 19:14:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cxQB=63=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jYwpt-0002mX-UF
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 19:14:34 +0000
X-Inumbo-ID: f9e63e26-954d-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9e63e26-954d-11ea-ae69-bc764e2007e4;
 Wed, 13 May 2020 19:14:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589397272;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=sYInIXPt4WQOis6dwpeauulW9smubci4iZQh6ZDeLM0=;
 b=XvIpDbQpy57SIjOXmtWH79kVgjOlbxWH8PO/FGCfh3GWB4HDatCesKBb
 4HkQtf+z2WldrRS4LJ81Y390R+kst4ZgPqYOYLC4YFyOtZliSgPKKNCX/
 KDgmbVt6KAPkR4/8sLCrwBnhI+hIFNzvxBZ/VDjg9fqeVRdRFX83nnALE o=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: LnTBF+hnFXyzCwpCsPGiILXnbdCJ4E/eDLWoLSoTTTqhovoXQxZUIsyOXUNtktdEVLhDOUKGK1
 i3x2Qiq9mCAWnJZT3Hf2WWSu/24/QCqPRp9zH5E85h3aS/XfMemYaE0vT6C/v75HyvgW/C1FVg
 Jtb2KCbioNYyv8F6sBwO+bWb3olJones0YW7KPFD1kxVw0ihkMbWDL1ujMqUrwDK7hbCTw9rSA
 k9hQFCZVpP/N7kbDnoRnU5QD5kbMQfcFj309qN2Y20Rss5XTJmvdwNW5dd0zQzG7hPnNyx00WU
 eZ8=
X-SBRS: 2.7
X-MesageID: 18153276
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,388,1583211600"; d="scan'208";a="18153276"
Subject: Re: [PATCH] x86/build32: Discard all orphaned sections
To: Jan Beulich <jbeulich@suse.com>
References: <20200512191108.6461-1-andrew.cooper3@citrix.com>
 <a1d1135a-8f9c-81c3-5fc8-bbc3787ebd0f@suse.com>
 <e11b2b5b-5504-f2a3-f1d8-986bc97a4b78@citrix.com>
 <875d5449-dde0-e675-fb1e-c2b3ed238674@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e420b0ab-f0e8-4a31-ef7b-b538a58dd455@citrix.com>
Date: Wed, 13 May 2020 20:14:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <875d5449-dde0-e675-fb1e-c2b3ed238674@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Jason Andryuk <jandryuk@gmail.com>, Stefan Bader <stefan.bader@canonical.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13/05/2020 16:15, Jan Beulich wrote:
>>> But it would be really nice if we had a way to
>>> flag the need to play with the linker script. Hence perhaps
>>> on new enough tool chains we indeed may want to make use of
>>> --orphan-handling= ? And then discard just .note and .note.*
>>> here?
>> The only valid option would be =error, but experimenting with that yields
>>
>> ld: error: unplaced orphan section `.comment' from `cmdline.o'
>> ld: error: unplaced orphan section `.note.GNU-stack' from `cmdline.o'
>> ld: error: unplaced orphan section `.note.gnu.property' from `cmdline.o'
>> ld: error: unplaced orphan section `.rel.got' from `cmdline.o'
>> ld: error: unplaced orphan section `.got' from `cmdline.o'
>> ld: error: unplaced orphan section `.got.plt' from `cmdline.o'
>> ld: error: unplaced orphan section `.iplt' from `cmdline.o'
>> ld: error: unplaced orphan section `.rel.iplt' from `cmdline.o'
>> ld: error: unplaced orphan section `.igot.plt' from `cmdline.o'
>>
>> which I think is going to get us massively bogged down in toolchain
>> specifics.  I'm not entirely convinced this would be a good move.
> That's ugly indeed; especially the .rel.* sections are worrying to
> appear there.

What is even more curious, most of them don't exist in cmdline.o:

Section Headers:
  [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
  [ 0]                   NULL            00000000 000000 000000 00      0   0  0
  [ 1] .group            GROUP           00000000 000034 000008 04     14  38  4
  [ 2] .group            GROUP           00000000 00003c 000008 04     14  40  4
  [ 3] .text             PROGBITS        00000000 000044 00094a 00  AX  0   0  1
  [ 4] .rel.text         REL             00000000 000e88 0001e8 08   I 14   3  4
  [ 5] .data             PROGBITS        00000000 00098e 000000 00  WA  0   0  1
  [ 6] .bss              NOBITS          00000000 00098e 000000 00  WA  0   0  1
  [ 7] .rodata           PROGBITS        00000000 000990 0000f3 00   A  0   0  4
  [ 8] .rel.rodata       REL             00000000 001070 000120 08   I 14   7  4
  [ 9] .text.__x86.get_pc_thunk.ax PROGBITS 00000000 000a83 000004 00 AXG  0   0  1
  [10] .text.__x86.get_pc_thunk.bx PROGBITS 00000000 000a87 000004 00 AXG  0   0  1
  [11] .comment          PROGBITS        00000000 000a8b 00002d 01  MS  0   0  1
  [12] .note.GNU-stack   PROGBITS        00000000 000ab8 000000 00      0   0  1
  [13] .note.gnu.property NOTE           00000000 000ab8 00001c 00   A  0   0  4
  [14] .symtab           SYMTAB          00000000 000ad4 000290 10     15  36  4
  [15] .strtab           STRTAB          00000000 000d64 000124 00      0   0  1
  [16] .shstrtab         STRTAB          00000000 001190 0000a7 00      0   0  1

I suspect they are inserted by default as part of processing the
relocations, and end up empty.

With =warn rather than =error, we instead get

ld: warning: orphan section `.comment' from `cmdline.o' being placed in
section `.comment'
ld: warning: orphan section `.note.GNU-stack' from `cmdline.o' being
placed in section `.note.GNU-stack'
ld: warning: orphan section `.note.gnu.property' from `cmdline.o' being
placed in section `.note.gnu.property'
ld: warning: orphan section `.rel.got' from `cmdline.o' being placed in
section `.rel.dyn'
ld: warning: orphan section `.got' from `cmdline.o' being placed in
section `.got'
ld: warning: orphan section `.got.plt' from `cmdline.o' being placed in
section `.got.plt'
ld: warning: orphan section `.iplt' from `cmdline.o' being placed in
section `.iplt'
ld: warning: orphan section `.rel.iplt' from `cmdline.o' being placed in
section `.rel.dyn'
ld: warning: orphan section `.igot.plt' from `cmdline.o' being placed in
section `.igot.plt'

and

Section Headers:
  [Nr] Name              Type            Addr     Off    Size   ES Flg Lk Inf Al
  [ 0]                   NULL            00000000 000000 000000 00      0   0  0
  [ 1] .note.gnu.property NOTE           00000000 0000b4 00001c 00   A  0   0  4
  [ 2] .text             PROGBITS        0000001c 0000d0 000a47 00 WAX  0   0  4
  [ 3] .got.plt          PROGBITS        00000a64 000b18 00000c 04  WA  0   0  4
  [ 4] .comment          PROGBITS        00000000 000b24 00002c 01  MS  0   0  1
  [ 5] .symtab           SYMTAB          00000000 000b50 000230 10      6  31  4
  [ 6] .strtab           STRTAB          00000000 000d80 000124 00      0   0  1
  [ 7] .shstrtab         STRTAB          00000000 000ea4 000046 00      0   0  1

in cmdline.lnk, so the .rel.* sections have been dropped overall.  I
think the =error logic simply runs at an unhelpful point during processing.
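
For context, the approach the patch itself takes — discarding orphans
explicitly in the linker script rather than relying on --orphan-handling= —
can be sketched as a /DISCARD/ rule (an abridged illustration; the section
list is an assumption, not the exact rule from the build32 link):

```ld
SECTIONS
{
        /* ... the output sections the build actually keeps ... */

        /* Anything not placed explicitly above is collected here and
         * thrown away, so the linker never has an orphan to place. */
        /DISCARD/ : {
                *(.comment)
                *(.comment.*)
                *(.note)
                *(.note.*)
        }
}
```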

> Hence patch as is Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks,

As I say, I have a plan to replace all of this completely when a bit
more of kbuild is in place.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 13 19:50:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 19:50:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYxO7-0005ZP-Qg; Wed, 13 May 2020 19:49:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVKH=63=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYxO7-0005ZK-1z
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 19:49:55 +0000
X-Inumbo-ID: ea0569a0-9552-11ea-a401-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea0569a0-9552-11ea-a401-12813bfff9fa;
 Wed, 13 May 2020 19:49:53 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6A42720659;
 Wed, 13 May 2020 19:49:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589399393;
 bh=A6OIMY+x0cwZr6eTl0RCx+93TS1h+MYJs+vrefIb5dc=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=wWUYIolus5RDYmnCxmhhw7kPQGmhmpVQHdXkquXs7/iBMdeyGq7tU/wHM8hg4Z4Hi
 lJ6jJrAr/TtDDrjar4RcOrcC39O//2Wp9fIEouWsykBc5K177Ev2rgaPYoCEURl/VF
 gw5zut3YqgaGFnrGf5TCG9J9+YpM/T4/vCoa49AQ=
Date: Wed, 13 May 2020 12:49:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <d940d405-5706-c749-f546-c0c60528394d@xen.org>
Message-ID: <alpine.DEB.2.21.2005131126550.26167@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <alpine.DEB.2.21.2005042004500.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
 <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
 <42253259-9663-67e8-117f-8ba631243585@xen.org>
 <alpine.DEB.2.21.2005130810270.26167@sstabellini-ThinkPad-T480s>
 <d940d405-5706-c749-f546-c0c60528394d@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "minyard@acm.org" <minyard@acm.org>, roman@zededa.com,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 13 May 2020, Julien Grall wrote:
> On 13/05/2020 16:11, Stefano Stabellini wrote:
> > On Wed, 13 May 2020, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 13/05/2020 01:33, Stefano Stabellini wrote:
> > > > I worked with Roman to do several more tests and here is an update on
> > > > the situation. We don't know why my patch didn't work when Boris' patch
> > > > [1] worked.  Both of them should have worked the same way.
> > > > 
> > > > Anyway, we continued with Boris' patch to debug the new mmc error which
> > > > looks like this:
> > > > 
> > > > [    3.084464] mmc0: unrecognised SCR structure version 15
> > > > [    3.089176] mmc0: error -22 whilst initialising SD card
> > > > 
> > > > I asked to add a lot of tracing; see the attached diff.
> > > 
> > > Please avoid attachments on the mailing list; use pastebin for diffs.
> > > 
> > > > We discovered a bug
> > > > in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
> > > > xen_swiotlb_init, start_dma_addr is not set correctly. This one-line
> > > > patch fixes it:
> > > > 
> > > > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > > > index 0a40ac332a4c..b75c43356eba 100644
> > > > --- a/drivers/xen/swiotlb-xen.c
> > > > +++ b/drivers/xen/swiotlb-xen.c
> > > > @@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool early)
> > > >            * IO TLB memory already allocated. Just use it.
> > > >            */
> > > >           if (io_tlb_start != 0) {
> > > > +               start_dma_addr = io_tlb_start;
> > > >                   xen_io_tlb_start = phys_to_virt(io_tlb_start);
> > > >                   goto end;
> > > >           }
> > > > 
> > > > Unfortunately it doesn't solve the mmc0 error.
> > > > 
> > > > 
> > > > As you might notice from the logs, none of the other interesting printks
> > > > printed anything, which seems to mean that the memory allocated by
> > > > xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should be
> > > > just fine.
> > > > 
> > > > I am starting to run out of ideas. Do you guys have any suggestions on
> > > > what could be the issue or what could be done to discover it?
> > > 
> > > Sorry if my suggestions are going to be obvious, but I can't confirm
> > > whether
> > > this was already done:
> > >      1) Does the kernel boot on baremetal (i.e. without Xen)? This
> > > should help to confirm whether the bug is Xen-related.
> > 
> > Yes it boots
> > 
> > >      2) Swiotlb should not be necessary for basic dom0 boot on Arm. Did
> > > you try
> > > to disable it? This should help to confirm whether swiotlb is the problem
> > > or
> > > not.
> > 
> > It boots with swiotlb-xen disabled
> 
> Thank you for the answer! swiotlb-xen should basically be a NOP for dom0. So
> this suggests swiotlb is doing some transformation on the DMA address.

Yes, but I couldn't spot where yet.


> I have an idea what may have gone wrong. AFAICT, xen-swiotlb seems to assume
> the DMA address space and the CPU address space are the same. Is the RPi
> using the same address space?

I thought about it too, but we didn't add an explicit check for this yet.
My understanding is that it is not supposed to happen on arm64 boards,
but at this point it is worth checking for sure. Thanks for the
suggestion.
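
Julien's question matters because the capability test swiotlb-xen relies on
is essentially a range check against the device's DMA mask. A simplified
standalone sketch (the 1 GB mask is an assumption standing in for the RPi4
SD controller's limit; the real dma_capable() also accounts for bus offsets):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed limit: the device can only address the first GB. */
#define RPI4_SD_DMA_MASK 0x3FFFFFFFULL

/* A buffer is reachable only if its last byte fits under the mask. */
static bool dma_reachable(uint64_t dev_addr, size_t size, uint64_t dma_mask)
{
	return dev_addr + size - 1 <= dma_mask;
}
```

If the DMA and CPU address spaces differ, dev_addr no longer equals the
physical address, and a check performed on the wrong one can pass while the
device still cannot reach the buffer.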


From xen-devel-bounces@lists.xenproject.org Wed May 13 19:51:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 19:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYxPf-0006HT-62; Wed, 13 May 2020 19:51:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RVKH=63=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jYxPd-0006HO-QC
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 19:51:29 +0000
X-Inumbo-ID: 228f6ba4-9553-11ea-a401-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 228f6ba4-9553-11ea-a401-12813bfff9fa;
 Wed, 13 May 2020 19:51:28 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 4EA27205ED;
 Wed, 13 May 2020 19:51:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589399488;
 bh=q9pL9MyiDPkzpcxwhyoVVNHmRZJxL8DD6AP9lH+W12s=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=J7HusWoQ5Y+Ge5/d62uQu1IC/VMSsxA/jdUbWqcKoHAtUTTXmvlkay3w1N628GUFH
 jW5D8I/W+l47BKN32/ANeTJk1sCaj/JeLAxR8Tw72xsJjbCYwlDOXpYRbevZr9QvCF
 hTSITLg/iQHink6MqfgFvfx/NGc3X4l+ETO9m/Io=
Date: Wed, 13 May 2020 12:51:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: Troubles running Xen on Raspberry Pi 4 with 5.6.1 DomU
In-Reply-To: <d19f82a9-160e-ccc5-ebf9-8eb397dbeb08@xen.org>
Message-ID: <alpine.DEB.2.21.2005131249570.26167@sstabellini-ThinkPad-T480s>
References: <CAMmSBy_A4xJmCo-PiWukv0+vkEhXXDpYwZAKAiJ=mmpuY1CbMA@mail.gmail.com>
 <CAMmSBy-yymEGQcuUBHZi-tL9ra7x9nDv+ms8SLiZr1H=BpHUiQ@mail.gmail.com>
 <alpine.DEB.2.21.2005051508180.14706@sstabellini-ThinkPad-T480s>
 <9ee0fb6f-98df-d993-c42e-f47270bf2eaa@suse.com>
 <DB6PR0402MB2760AF88FE7E3DF47401BE5988A40@DB6PR0402MB2760.eurprd04.prod.outlook.com>
 <CADz_WD5Ln7Pe1WAFp73d2Mz9wxspzTE3WgAJusp5S8LX4=83Bw@mail.gmail.com>
 <2187cd7c-4d48-986b-77d6-4428e8178404@oracle.com>
 <CADz_WD68bYj-0CSm_zib+LRiMGd1+1eoNLgiJj=vHog685Khsw@mail.gmail.com>
 <alpine.DEB.2.21.2005060956120.14706@sstabellini-ThinkPad-T480s>
 <CAMmSBy_wcSD3BVcVFJVR1y1CtvxA9xMkobKwbsdf8dGxS5Hcbw@mail.gmail.com>
 <alpine.DEB.2.21.2005121723240.26167@sstabellini-ThinkPad-T480s>
 <42253259-9663-67e8-117f-8ba631243585@xen.org>
 <alpine.DEB.2.21.2005130810270.26167@sstabellini-ThinkPad-T480s>
 <d940d405-5706-c749-f546-c0c60528394d@xen.org>
 <d19f82a9-160e-ccc5-ebf9-8eb397dbeb08@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1236722027-1589399488=:26167"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Peng Fan <peng.fan@nxp.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "minyard@acm.org" <minyard@acm.org>, roman@zededa.com,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 Julien Grall <julien.grall@arm.com>,
 Nataliya Korovkina <malus.brandywine@gmail.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1236722027-1589399488=:26167
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 13 May 2020, Julien Grall wrote:
> On 13/05/2020 19:19, Julien Grall wrote:
> > Hi,
> > 
> > On 13/05/2020 16:11, Stefano Stabellini wrote:
> > > On Wed, 13 May 2020, Julien Grall wrote:
> > > > Hi,
> > > > 
> > > > On 13/05/2020 01:33, Stefano Stabellini wrote:
> > > > > I worked with Roman to do several more tests and here is an update on
> > > > > the situation. We don't know why my patch didn't work when Boris'
> > > > > patch
> > > > > [1] worked.  Both of them should have worked the same way.
> > > > > 
> > > > > Anyway, we continued with Boris' patch to debug the new mmc error which
> > > > > looks like this:
> > > > > 
> > > > > [    3.084464] mmc0: unrecognised SCR structure version 15
> > > > > [    3.089176] mmc0: error -22 whilst initialising SD card
> > > > > 
> > > > > I asked to add a lot of tracing; see the attached diff.
> > > > 
> > > > Please avoid attachments on the mailing list; use pastebin for diffs.
> > > > 
> > > > > We discovered a bug
> > > > > in xen_swiotlb_init: if io_tlb_start != 0 at the beginning of
> > > > > xen_swiotlb_init, start_dma_addr is not set correctly. This one-line
> > > > > patch fixes it:
> > > > > 
> > > > > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > > > > index 0a40ac332a4c..b75c43356eba 100644
> > > > > --- a/drivers/xen/swiotlb-xen.c
> > > > > +++ b/drivers/xen/swiotlb-xen.c
> > > > > @@ -191,6 +191,7 @@ int __ref xen_swiotlb_init(int verbose, bool
> > > > > early)
> > > > >            * IO TLB memory already allocated. Just use it.
> > > > >            */
> > > > >           if (io_tlb_start != 0) {
> > > > > +               start_dma_addr = io_tlb_start;
> > > > >                   xen_io_tlb_start = phys_to_virt(io_tlb_start);
> > > > >                   goto end;
> > > > >           }
> > > > > 
> > > > > Unfortunately it doesn't solve the mmc0 error.
> > > > > 
> > > > > 
> > > > > As you might notice from the logs, none of the other interesting
> > > > > printks
> > > > > printed anything, which seems to mean that the memory allocated by
> > > > > xen_swiotlb_alloc_coherent and mapped by xen_swiotlb_map_page should
> > > > > be
> > > > > just fine.
> > > > > 
> > > > > I am starting to run out of ideas. Do you guys have any suggestions on
> > > > > what could be the issue or what could be done to discover it?
> > > > 
> > > > Sorry if my suggestions are obvious, but I can't confirm whether
> > > > this was already done:
> > > >      1) Does the kernel boot on bare metal (i.e. without Xen)? This
> > > > should help to confirm whether the bug is Xen-related.
> > > 
> > > Yes, it boots.
> > > 
> > > >      2) Swiotlb should not be necessary for a basic dom0 boot on Arm.
> > > > Did you try to disable it? This should help to confirm whether swiotlb
> > > > is the problem or not.
> > > 
> > > It boots with swiotlb-xen disabled.
> > 
> > Thank you for the answer! swiotlb-xen should basically be a NOP for dom0. So
> > this suggests swiotlb is doing some transformation on the DMA address.
> > 
> > I have an idea of what may have gone wrong. AFAICT, xen-swiotlb seems to
> > assume that the DMA address space and the CPU address space are the same.
> > Is the RPi using the same address space?
> 
> Another question, is the DMA request bounced? If so, are we sure the bounce
> buffer is in the first GB?

Yes, it is. This is actually where we spent the last few days. I found
another small related bug in the initialization of swiotlb-xen, but now I
am sure the memory is under 1GB (0x34000000-0x38000000).
--8323329-1236722027-1589399488=:26167--


From xen-devel-bounces@lists.xenproject.org Wed May 13 20:14:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 20:14:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jYxln-00089U-04; Wed, 13 May 2020 20:14:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jYxlm-00089P-4n
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 20:14:22 +0000
X-Inumbo-ID: 54a29596-9556-11ea-a401-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 54a29596-9556-11ea-a401-12813bfff9fa;
 Wed, 13 May 2020 20:14:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pzYBdE75XflP+cbEINwPcyGFdmm1bd+9CGyttKRKkXg=; b=HoKA3Only7ibwfKquNgjxhhj0
 229yxpHUM2KFym294ghDjDq6axYEcjuOn0w8sz6EQi8Nsia7mDygbY9+GXv9hq8Vw8qGFlWzhE993
 qHq6vmcWuTGcjpI6AlOEAVwkkt2yXk36ngL12KP2XFFMa5B+q+YvYPbIa7T39bm3LmyK4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYxlk-000834-Lg; Wed, 13 May 2020 20:14:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jYxlk-0003pA-DT; Wed, 13 May 2020 20:14:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jYxlk-00089r-Cs; Wed, 13 May 2020 20:14:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150166-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150166: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=61be48dc029294275348443f78a5e600ef28274f
X-Osstest-Versions-That: xen=9d83ad86834300927b636fa02b29d84854399ed8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 20:14:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150166 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150166/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  61be48dc029294275348443f78a5e600ef28274f
baseline version:
 xen                  9d83ad86834300927b636fa02b29d84854399ed8

Last test of basis   150162  2020-05-13 13:00:23 Z    0 days
Testing same since   150163  2020-05-13 16:01:10 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9d83ad8683..61be48dc02  61be48dc029294275348443f78a5e600ef28274f -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 13 22:58:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 22:58:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ0KG-0004Wb-0u; Wed, 13 May 2020 22:58:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZ0KE-0004WT-EN
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 22:58:06 +0000
X-Inumbo-ID: 3166083a-956d-11ea-a426-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3166083a-956d-11ea-a426-12813bfff9fa;
 Wed, 13 May 2020 22:58:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Rr7b9KLSr2GAa6WnQ8Qbqgq1byhqp9rZV+yljoVuCJY=; b=wEZYC+XiGTVm66YZUaD875kL/
 ms7qxxKP2xzclHfbSL2QYZZJUYuyh/ZxPzRSGVKhjXsXOm5iGMCcz4TCHW8hfStNd12hR0RrRAqId
 3D6SIlfFBZrBNfGnb0iUg3jga5hiCe3oq6UkZqhZdfGynoZdrrZGJd3y2Iko2vOpcwuVo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ0K7-0002vv-Vv; Wed, 13 May 2020 22:58:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ0K7-0003XA-M5; Wed, 13 May 2020 22:57:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ0K7-0001wg-K7; Wed, 13 May 2020 22:57:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150159-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150159: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=24085f70a6e1b0cb647ec92623284641d8270637
X-Osstest-Versions-That: linux=152036d1379ffd6985262743dcf6b0f9c75f83a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 22:57:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150159 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150159/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150148

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150148
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150148
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150148
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150148
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150148
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150148
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150148
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150148
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                24085f70a6e1b0cb647ec92623284641d8270637
baseline version:
 linux                152036d1379ffd6985262743dcf6b0f9c75f83a4

Last test of basis   150148  2020-05-12 07:51:46 Z    1 days
Testing same since   150153  2020-05-12 23:40:04 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Ford <aford173@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Bob Peterson <rpeterso@redhat.com>
  David Gow <davidgow@google.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Stephen Warren <swarren@nvidia.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   152036d1379f..24085f70a6e1  24085f70a6e1b0cb647ec92623284641d8270637 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed May 13 23:11:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 23:11:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ0XG-00068v-D4; Wed, 13 May 2020 23:11:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=7Rkd=63=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZ0XF-00068q-5x
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 23:11:33 +0000
X-Inumbo-ID: 11ce68e4-956f-11ea-a429-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11ce68e4-956f-11ea-a429-12813bfff9fa;
 Wed, 13 May 2020 23:11:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FNCkokUQbf21KV+raI9GYxWHfLTQIban6MX3+vdhArA=; b=23eDjWM1HCYIqnCQMhdUjOh70
 E7F2dPzUbw9n0t7AEmsqpADrrtfRjN52rKGxLTfy6Mtt+OK+bbNPAfjiuq1NtNBW7JiCb8YZfDxgi
 QZnfM+DforUOoRc4M+ra5o0VKz3PcIE8RTYGMrvoFNVd7QwZ0xIqOEDOoc1KksvTdLmGE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ0X7-0003Dh-Uq; Wed, 13 May 2020 23:11:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ0X7-0004MO-L6; Wed, 13 May 2020 23:11:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ0X7-0003Xq-KW; Wed, 13 May 2020 23:11:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150168-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150168: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=3a218961b16f1f4feb1147f56338faf1ac8f5703
X-Osstest-Versions-That: xen=61be48dc029294275348443f78a5e600ef28274f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 13 May 2020 23:11:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150168 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150168/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3a218961b16f1f4feb1147f56338faf1ac8f5703
baseline version:
 xen                  61be48dc029294275348443f78a5e600ef28274f

Last test of basis   150166  2020-05-13 18:00:35 Z    0 days
Testing same since   150168  2020-05-13 21:00:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   61be48dc02..3a218961b1  3a218961b16f1f4feb1147f56338faf1ac8f5703 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 13 23:40:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 13 May 2020 23:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ0ya-00080m-84; Wed, 13 May 2020 23:39:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cgDE=63=oracle.com=dongli.zhang@srs-us1.protection.inumbo.net>)
 id 1jZ0yY-00080h-Kv
 for xen-devel@lists.xenproject.org; Wed, 13 May 2020 23:39:46 +0000
X-Inumbo-ID: 05f45408-9573-11ea-a42d-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05f45408-9573-11ea-a42d-12813bfff9fa;
 Wed, 13 May 2020 23:39:44 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04DNaPAg065703;
 Wed, 13 May 2020 23:39:42 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=from : to : cc :
 subject : date : message-id; s=corp-2020-01-29;
 bh=WUtDE1uvxFfqtxiGlboxCs/AC9GkRjxAc7zArX6xztU=;
 b=C5wiUZj9WYB1aqt+jnVi12+YGGDEW0nQ8LtsgPIZBkEAcyoj8ADZC37Pgb8kxnOh26fc
 V0Wu+Tja6vN9+8fzF34W6WSFaJS3MWmSIhSwRQ+vnzrXGROUojOCUVlARX/lOQqEVVD7
 k//V+rJYO0LxrgNN4BiAAEXcfnMofRusW62WfQTf32LIqb1LvXaCAnvddFb0CX/vEPEH
 qie2e4zIR7bYHTGuf5jjxqg3eXk0w/iK+w9w+TgtQyM8WtJWZnLsX5s8aApmUa+ZlGgS
 D+8jO2AAKohWG+jzWyQJEO5TVobD6JCTcyU2TMVMpq9Zi8e1UrPMM5Q7wQkbOseCgff5 qQ== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2130.oracle.com with ESMTP id 3100yfydwe-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 13 May 2020 23:39:42 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04DNc3Fk100680;
 Wed, 13 May 2020 23:39:42 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 3100yfk17f-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 13 May 2020 23:39:42 +0000
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04DNde5Y017022;
 Wed, 13 May 2020 23:39:40 GMT
Received: from localhost.localdomain (/10.211.9.80)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 13 May 2020 16:39:40 -0700
From: Dongli Zhang <dongli.zhang@oracle.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: [PATCH 1/1] xen/manage: enable C_A_D to force reboot
Date: Wed, 13 May 2020 16:34:10 -0700
Message-Id: <20200513233410.18120-1-dongli.zhang@oracle.com>
X-Mailer: git-send-email 2.17.1
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9620
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0
 malwarescore=0 bulkscore=0
 phishscore=0 suspectscore=0 adultscore=0 mlxscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005130203
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9620
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0
 lowpriorityscore=0 adultscore=0
 cotscore=-2147483648 mlxscore=0 suspectscore=0 spamscore=0 impostorscore=0
 mlxlogscore=999 malwarescore=0 clxscore=1011 phishscore=0 bulkscore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005130203
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, rose.wang@oracle.com, boris.ostrovsky@oracle.com,
 sstabellini@kernel.org, joe.jin@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

systemd may be configured to mask ctrl-alt-del via "systemctl mask
ctrl-alt-del.target". As a result, a PV reboot would not work, because the
signal is ignored.

This patch always enables C_A_D before calling ctrl_alt_del() in order
to force the reboot.

Reported-by: Rose Wang <rose.wang@oracle.com>
Cc: Joe Jin <joe.jin@oracle.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
---
 drivers/xen/manage.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index cd046684e0d1..3190d0ecb52e 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -204,6 +204,13 @@ static void do_poweroff(void)
 static void do_reboot(void)
 {
 	shutting_down = SHUTDOWN_POWEROFF; /* ? */
+	/*
+	 * The systemd may be configured to mask ctrl-alt-del via
+	 * "systemctl mask ctrl-alt-del.target". As a result, the pv reboot
+	 * would not work. To enable C_A_D would force the reboot.
+	 */
+	C_A_D = 1;
+
 	ctrl_alt_del();
 }
 
-- 
2.17.1
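
[Editorial note: the control flow this patch relies on can be sketched as a
small model. This is hypothetical Python for illustration only, not kernel
code; it mirrors the behaviour of ctrl_alt_del() in kernel/reboot.c, which
reboots directly when C_A_D is set and otherwise signals init, a signal that
systemd may be configured to ignore.]

```python
def make_kernel(init_ignores_cad: bool):
    """Toy model of the kernel's ctrl-alt-del handling (illustrative only)."""
    state = {"C_A_D": 0, "rebooted": False, "signalled_init": False}

    def ctrl_alt_del():
        if state["C_A_D"]:
            # Direct path: kernel performs the restart itself.
            state["rebooted"] = True
        else:
            # Indirect path: SIGINT is sent to init (cad_pid); if
            # ctrl-alt-del.target is masked, init simply ignores it.
            state["signalled_init"] = True
            if not init_ignores_cad:
                state["rebooted"] = True

    def do_reboot():
        # The patch: force C_A_D so the reboot cannot be masked by init.
        state["C_A_D"] = 1
        ctrl_alt_del()

    return state, ctrl_alt_del, do_reboot

# Without the patch, a masked ctrl-alt-del.target means no reboot happens:
state, cad, _ = make_kernel(init_ignores_cad=True)
cad()
assert state["signalled_init"] and not state["rebooted"]

# With the patch, do_reboot() takes the direct restart path regardless:
state, _, reboot = make_kernel(init_ignores_cad=True)
reboot()
assert state["rebooted"]
```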



From xen-devel-bounces@lists.xenproject.org Thu May 14 00:20:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 00:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ1bi-0004AD-Tw; Thu, 14 May 2020 00:20:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q+J7=64=glacier-peak.net=sam.felton@srs-us1.protection.inumbo.net>)
 id 1jZ1bi-0004A8-5i
 for xen-devel@lists.xen.org; Thu, 14 May 2020 00:20:14 +0000
X-Inumbo-ID: ac29210a-9578-11ea-a42f-12813bfff9fa
Received: from palantir.glacier-peak.net (unknown [50.106.60.213])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ac29210a-9578-11ea-a42f-12813bfff9fa;
 Thu, 14 May 2020 00:20:11 +0000 (UTC)
Received: from silmaril.corp.glacier-peak.net (192.168.42.226) by
 palantir.glacier-peak.net (192.168.42.223) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id
 15.1.1913.5; Wed, 13 May 2020 17:20:05 -0700
Received: from silmaril.corp.glacier-peak.net (192.168.42.226) by
 silmaril.corp.glacier-peak.net (192.168.42.226) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1913.5; Wed, 13 May 2020 17:20:04 -0700
Received: from silmaril.corp.glacier-peak.net ([fe80::3c33:3873:6e2:4337]) by
 silmaril.corp.glacier-peak.net ([fe80::3c33:3873:6e2:4337%3]) with
 mapi id 15.01.1913.010; Wed, 13 May 2020 17:20:04 -0700
From: "Samuel P. Felton - GPT LLC" <sam.felton@glacier-peak.net>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: RE: Question...
Thread-Topic: Question...
Thread-Index: AdYffIk+nIOwP7iTQLuOs3KZB2G7LwAIaoMAABRNg3A=
Date: Thu, 14 May 2020 00:20:04 +0000
Message-ID: <b2396085a65a407089d635e68beabb61@glacier-peak.net>
References: <f017c46e427a45ecab00c1c59413658c@glacier-peak.net>
 <907FA58B-240E-45E6-B452-9094531F715E@arm.com>
In-Reply-To: <907FA58B-240E-45E6-B452-9094531F715E@arm.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: yes
X-MS-TNEF-Correlator: 
x-originating-ip: [192.168.1.24]
Content-Type: multipart/mixed;
 boundary="_004_b2396085a65a407089d635e68beabb61glacierpeaknet_"
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_004_b2396085a65a407089d635e68beabb61glacierpeaknet_
Content-Type: multipart/alternative;
	boundary="_000_b2396085a65a407089d635e68beabb61glacierpeaknet_"

--_000_b2396085a65a407089d635e68beabb61glacierpeaknet_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

Qm9uam91ciwgQmVydHJhbmQsDQoNCg0KDQpUaGFuayB5b3UgZm9yIHlvdXIgcmVwbHkuIEkgYW0g
YSBuZXdiaWUgdG8gWGVuIGJ1dCBJ4oCZdmUgdXNlZCBCaHl2ZSwgVm13YXJlIGFuZCBNU0ZUIEh5
cGVyLVYgYmVmb3JlIHNvIEkgYW0gc29tZXdoYXQgZmFtaWxpYXIgd2l0aCBob3cgdHlwZS0xIGh5
cGVydmlzb3JzIHdvcmsuIFN0aWxsLiDwn6SUDQoNCg0KDQpJJ3ZlIGVuY2xvc2VkOg0KDQoNCg0K
ICAxLiAgVGhlIHNjcmlwdCBJIHVzZWQgdG8gdHJ5IHRvIHNldCB1cCBEb20wLiBJdOKAmXMgYmFz
aWNhbGx5IHRoZSBzY3JpcHQgSSBmb3VuZCBoZXJlOiBodHRwczovL2FuZ3J5ZWxlY3Ryb24uY29t
L2hvdy10by1pbnN0YWxsLXVidW50dS1hcy14ZW4tZG9tMC1hbmQtZG9tdS8sIGV4Y2VwdCwgc2xp
Z2h0bHkgbW9kaWZpZWQgZm9yIEFBcmNoNjQvQVJNdjggYXMgY2xvc2UgYXMgSSB3YXMgYWJsZSB0
byBkZXRlcm1pbmUgaXQuDQogIDIuICBUaGUgZ3J1Yi5jZmcgZmlsZS4NCg0KDQoNCkFzIEkgc2Fp
ZCwgdGhlIHNlcmlhbCBwb3J0IFNURE9VVCBnb2VzIGRlYWQgc28gbm8gbG9ncy4gUGxlYXNlIGhl
bHAgbWUgdW5kZXJzdGFuZCBob3cgdG8gdHVybiBvbiBsb2dnaW5nIHNvIEkgY2FuIGZpZ3VyZSBv
dXQgd2hhdOKAmXMgZ29pbmcgb24uDQoNCg0KDQoNCg0KDQoNClRoZSBtYWNoaW5lIEnigJltIHBv
cnRpbmcgdG8gaXMgdGhpcyBvbmU6IGh0dHBzOi8vd3d3LmdpZ2FieXRlLmNvbS90dy9BUk0tU2Vy
dmVyL1IyODEtVDk0LXJldi0xMDAjb3YNCg0KDQoNCkl0IGhhcyAxMjhHIFJBTSBhbmQgYSB0b3Rh
bCBvZiA1VEIgb2YgU0FTIGFuZCBTU0QgZGlzay4gSeKAmW0gYm9vdGluZyBvZmYgb25lIG9mIHRo
ZSBTU0RzLg0KDQoNCg0KSXQgcnVucyBVYnVudHUgMjAuMDQgTFRTIGp1c3QgZmluZTsgSSBldmVu
IGhhdmUgYSBVRUZJIEdPUCBkcml2ZXIgZm9yIHRoZSBBTUQgUmFkZW9uIFI5IEdQVSBJIHBsdWdn
ZWQgaW4uIEFsbCBvZiB0aGF0IHdvcmtzIGZpbmUgd2l0aCBVYnVudHUuDQoNCg0KDQpUaGFua3Mg
aW4gYWR2YW5jZSBmb3IgYW55IGlkZWFzDQoNClJnZHMuLA0KflNhbQ0KDQrigJxZb3VyIGJlc3Qg
YW5kIHdpc2VzdCByZWZ1Z2UgZnJvbSBhbGwgdHJvdWJsZXMgaXMgaW4geW91ciBzY2llbmNl4oCd
DQogICAgICAgICAgICAgICAgICAgICAgICAgIC0gQWRhIEtpbmcsIGNvdW50ZXNzIG9mIExvdmVs
YWNlDQoNCg0KDQotLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0KRnJvbTogQmVydHJhbmQgTWFy
cXVpcyA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPg0KU2VudDogRnJpZGF5LCBNYXkgMSwgMjAy
MCAyOjUyDQpUbzogU2FtdWVsIFAuIEZlbHRvbiAtIEdQVCBMTEMgPHNhbS5mZWx0b25AZ2xhY2ll
ci1wZWFrLm5ldD4NCkNjOiB4ZW4tZGV2ZWxAbGlzdHMueGVuLm9yZw0KU3ViamVjdDogUmU6IFF1
ZXN0aW9uLi4uDQoNCg0KDQpIaSBTYW11ZWwsDQoNCg0KDQo+IE9uIDEgTWF5IDIwMjAsIGF0IDA2
OjU5LCBTYW11ZWwgUC4gRmVsdG9uIC0gR1BUIExMQyA8c2FtLmZlbHRvbkBnbGFjaWVyLXBlYWsu
bmV0PG1haWx0bzpzYW0uZmVsdG9uQGdsYWNpZXItcGVhay5uZXQ+PiB3cm90ZToNCg0KPg0KDQo+
IFNvLCBJ4oCZbSB0cnlpbmcgdG8gZ2V0IGEgWGVuIERvbTAgYW5kIERvbVUsIGJvdGggcnVubmlu
ZyBVYnVudHUgMjAuMDQgTFRTIHVwIG9uIG91ciBicmFuZC1uZXcgR2lnYWJ5dGUgVGh1bmRlclgy
IEFSTTY0IGJveC4gSSBjYW4gZ2V0IFVidW50dSB1cCBhbmQgcnVubmluZywgYnV0IGFmdGVyIGlu
c3RhbGxpbmcgdGhlIFhlbiBiaXRzLCBpdCBkaWVzIGFmdGVyIHRoZSBVRUZJIGxheWVyIGxhdW5j
aGVzIEdSVUIuIEkgaGF2ZW7igJl0IGJlZW4gYWJsZSB0byBnZXQgYW55IGxvZ2ZpbGVzIGJlY2F1
c2UgaXQgZG9lc27igJl0IGdldCB0aGF0IGZhci4gTm90aGluZyBzaG93cyB1cCBvbiB0aGUgc2Vy
aWFsIHBvcnQgbG9nIGVpdGhlciDigJMgaXQganVzdCBoYW5ncy4NCg0KPg0KDQo+IEhhcyBhbnlv
bmUgb3ZlciB0aGVyZSBiZWVuIHRyeWluZyB0byBnZXQgYSBzaW1pbGFyIHNldHVwIHJ1bm5pbmc/
IEFtIEkgb3V0IHRvIGx1bmNoIGZvciB0cnlpbmcgdGhpcywgb3IgaXMgdGhlcmUgc29tZXRoaW5n
IEnigJltIG1pc3Npbmc/IEFueSBoZWxwIGF0IGFsbCB3b3VsZCBiZSBhcHByZWNpYXRlZC4NCg0K
DQoNCkkgYW0gY3VycmVudGx5IHBvcnRpbmcgWGVuIG9uIGFuIE4xU0RQIGFybSBib2FyZCB3aGlj
aCBpcyBhbHNvIHVzaW5nIGEgRURLMi9ncnViIHNldHVwIGFuZCBJIG1hbmFnZSB0byBzdGFydCB4
ZW4gZnJvbSBncnViIGFuZCB0aGVuIHN0YXJ0IGRvbTAgcHJvdmlkaW5nIGEgRFRCLg0KDQoNCg0K
TXkgZ3J1YiBjb25maWd1cmF0aW9uIGxvb2tzIGxpa2UgdGhpczoNCg0KbWVudWVudHJ5ICd4ZW4n
IHsNCg0KICAgIHhlbl9oeXBlcnZpc29yICRwcmVmaXgveGVuLmVmaSBsb2dsdmw9YWxsIGd1ZXN0
X2xvZ2x2bD1hbGwgY29uc29sZT1kdHVhcnQgZHR1YXJ0PXNlcmlhbDAgZG9tMF9tZW09MUcNCg0K
ICAgIHhlbl9tb2R1bGUgJHByZWZpeC9JbWFnZSBjb25zb2xlPWh2YzAgZWZpPW5vcnVudGltZSBy
dyByb290PS9kZXYvc2RhMSByb290d2FpdA0KDQogICAgZGV2aWNldHJlZSAkcHJlZml4L24xc2Rw
LmR0Yg0KDQp9DQoNCg0KDQpDb3VsZCB5b3Ugc2hhcmUgeW91ciBncnViIGNvbmZpZ3VyYXRpb24g
Pw0KDQoNCg0KQmVydHJhbmQNCg0KDQoNCj4NCg0KPiBJZiB0aGlzIGRvZXNu4oCZdCB3b3JrLCBJ
4oCZbSBnb2luZyB0byBoYXZlIHRvIGdvIHRvIEZyZWVCU0QgYW5kIEJoeXZlIGJlY2F1c2UgSSBr
bm93IHNvbWVvbmUgd2hvIGhhcyB0aGF0IHdvcmtpbmcuIEnigJlkIHJhdGhlciB1c2UgTGludXgg
dGhhbiBCU0QgZm9yIHRoaXMgYXBwbGljYXRpb24sIHRoZXJlIGFyZSBtb3JlIGRyaXZlcnMgc3Vw
cG9ydGluZyB0aGlzIGhhcmRhd2FyZS4NCg0KPg0KDQo+IFRoYW5rcywNCg0KPiB+U2FtDQoNCj4N
Cg0KPg0KDQo+DQoNCj4NCg0KPiA8aW1hZ2UwMDEucG5nPg0KDQo+IFBob25lOiArMSAyMDYgNzAx
LTczMjEgZXh0LiAxMDENCg0KPiBFbWFpbDogaW5mb0BnbGFjaWVyLXBlYWsubmV0PG1haWx0bzpp
bmZvQGdsYWNpZXItcGVhay5uZXQ+DQoNCg0KDQpJTVBPUlRBTlQgTk9USUNFOiBUaGUgY29udGVu
dHMgb2YgdGhpcyBlbWFpbCBhbmQgYW55IGF0dGFjaG1lbnRzIGFyZSBjb25maWRlbnRpYWwgYW5k
IG1heSBhbHNvIGJlIHByaXZpbGVnZWQuIElmIHlvdSBhcmUgbm90IHRoZSBpbnRlbmRlZCByZWNp
cGllbnQsIHBsZWFzZSBub3RpZnkgdGhlIHNlbmRlciBpbW1lZGlhdGVseSBhbmQgZG8gbm90IGRp
c2Nsb3NlIHRoZSBjb250ZW50cyB0byBhbnkgb3RoZXIgcGVyc29uLCB1c2UgaXQgZm9yIGFueSBw
dXJwb3NlLCBvciBzdG9yZSBvciBjb3B5IHRoZSBpbmZvcm1hdGlvbiBpbiBhbnkgbWVkaXVtLiBU
aGFuayB5b3UuDQo=

--_000_b2396085a65a407089d635e68beabb61glacierpeaknet_
Content-Type: text/html; charset="utf-8"
Content-Transfer-Encoding: base64

PGh0bWwgeG1sbnM6dj0idXJuOnNjaGVtYXMtbWljcm9zb2Z0LWNvbTp2bWwiIHhtbG5zOm89InVy
bjpzY2hlbWFzLW1pY3Jvc29mdC1jb206b2ZmaWNlOm9mZmljZSIgeG1sbnM6dz0idXJuOnNjaGVt
YXMtbWljcm9zb2Z0LWNvbTpvZmZpY2U6d29yZCIgeG1sbnM6bT0iaHR0cDovL3NjaGVtYXMubWlj
cm9zb2Z0LmNvbS9vZmZpY2UvMjAwNC8xMi9vbW1sIiB4bWxucz0iaHR0cDovL3d3dy53My5vcmcv
VFIvUkVDLWh0bWw0MCI+DQo8aGVhZD4NCjxtZXRhIGh0dHAtZXF1aXY9IkNvbnRlbnQtVHlwZSIg
Y29udGVudD0idGV4dC9odG1sOyBjaGFyc2V0PXV0Zi04Ij4NCjxtZXRhIG5hbWU9IkdlbmVyYXRv
ciIgY29udGVudD0iTWljcm9zb2Z0IFdvcmQgMTUgKGZpbHRlcmVkIG1lZGl1bSkiPg0KPHN0eWxl
PjwhLS0NCi8qIEZvbnQgRGVmaW5pdGlvbnMgKi8NCkBmb250LWZhY2UNCgl7Zm9udC1mYW1pbHk6
IkNhbWJyaWEgTWF0aCI7DQoJcGFub3NlLTE6MiA0IDUgMyA1IDQgNiAzIDIgNDt9DQpAZm9udC1m
YWNlDQoJe2ZvbnQtZmFtaWx5OkRlbmdYaWFuOw0KCXBhbm9zZS0xOjIgMSA2IDAgMyAxIDEgMSAx
IDE7fQ0KQGZvbnQtZmFjZQ0KCXtmb250LWZhbWlseTpDYWxpYnJpOw0KCXBhbm9zZS0xOjIgMTUg
NSAyIDIgMiA0IDMgMiA0O30NCkBmb250LWZhY2UNCgl7Zm9udC1mYW1pbHk6R2VvcmdpYTsNCglw
YW5vc2UtMToyIDQgNSAyIDUgNCA1IDIgMyAzO30NCkBmb250LWZhY2UNCgl7Zm9udC1mYW1pbHk6
Ik1vbm90eXBlIENvcnNpdmEiOw0KCXBhbm9zZS0xOjMgMSAxIDEgMSAyIDEgMSAxIDE7fQ0KQGZv
bnQtZmFjZQ0KCXtmb250LWZhbWlseToiXEBEZW5nWGlhbiI7DQoJcGFub3NlLTE6MiAxIDYgMCAz
IDEgMSAxIDEgMTt9DQpAZm9udC1mYWNlDQoJe2ZvbnQtZmFtaWx5OiJGcmVlc3R5bGUgU2NyaXB0
Ijt9DQovKiBTdHlsZSBEZWZpbml0aW9ucyAqLw0KcC5Nc29Ob3JtYWwsIGxpLk1zb05vcm1hbCwg
ZGl2Lk1zb05vcm1hbA0KCXttYXJnaW46MGluOw0KCW1hcmdpbi1ib3R0b206LjAwMDFwdDsNCglm
b250LXNpemU6MTEuMHB0Ow0KCWZvbnQtZmFtaWx5OiJDYWxpYnJpIixzYW5zLXNlcmlmO30NCmE6
bGluaywgc3Bhbi5Nc29IeXBlcmxpbmsNCgl7bXNvLXN0eWxlLXByaW9yaXR5Ojk5Ow0KCWNvbG9y
OiMwNTYzQzE7DQoJdGV4dC1kZWNvcmF0aW9uOnVuZGVybGluZTt9DQpwLk1zb1BsYWluVGV4dCwg
bGkuTXNvUGxhaW5UZXh0LCBkaXYuTXNvUGxhaW5UZXh0DQoJe21zby1zdHlsZS1wcmlvcml0eTo5
OTsNCgltc28tc3R5bGUtbGluazoiUGxhaW4gVGV4dCBDaGFyIjsNCgltYXJnaW46MGluOw0KCW1h
cmdpbi1ib3R0b206LjAwMDFwdDsNCglmb250LXNpemU6MTEuMHB0Ow0KCWZvbnQtZmFtaWx5OiJD
YWxpYnJpIixzYW5zLXNlcmlmO30NCnNwYW4uUGxhaW5UZXh0Q2hhcg0KCXttc28tc3R5bGUtbmFt
ZToiUGxhaW4gVGV4dCBDaGFyIjsNCgltc28tc3R5bGUtcHJpb3JpdHk6OTk7DQoJbXNvLXN0eWxl
LWxpbms6IlBsYWluIFRleHQiOw0KCWZvbnQtZmFtaWx5OiJDYWxpYnJpIixzYW5zLXNlcmlmO30N
CnNwYW4uRW1haWxTdHlsZTIwDQoJe21zby1zdHlsZS10eXBlOnBlcnNvbmFsLWNvbXBvc2U7fQ0K
Lk1zb0NocERlZmF1bHQNCgl7bXNvLXN0eWxlLXR5cGU6ZXhwb3J0LW9ubHk7DQoJZm9udC1zaXpl
OjEwLjBwdDsNCglmb250LWZhbWlseToiQ2FsaWJyaSIsc2Fucy1zZXJpZjt9DQpAcGFnZSBXb3Jk
U2VjdGlvbjENCgl7c2l6ZTo4LjVpbiAxMS4waW47DQoJbWFyZ2luOjEuMGluIDEuMGluIDEuMGlu
IDEuMGluO30NCmRpdi5Xb3JkU2VjdGlvbjENCgl7cGFnZTpXb3JkU2VjdGlvbjE7fQ0KLyogTGlz
dCBEZWZpbml0aW9ucyAqLw0KQGxpc3QgbDANCgl7bXNvLWxpc3QtaWQ6MTkzODU2OTk5Ow0KCW1z
by1saXN0LXR5cGU6aHlicmlkOw0KCW1zby1saXN0LXRlbXBsYXRlLWlkczotOTYzMDg5NjE2IDY3
Njk4NzAzIDY3Njk4NzEzIDY3Njk4NzE1IDY3Njk4NzAzIDY3Njk4NzEzIDY3Njk4NzE1IDY3Njk4
NzAzIDY3Njk4NzEzIDY3Njk4NzE1O30NCkBsaXN0IGwwOmxldmVsMQ0KCXttc28tbGV2ZWwtdGFi
LXN0b3A6bm9uZTsNCgltc28tbGV2ZWwtbnVtYmVyLXBvc2l0aW9uOmxlZnQ7DQoJdGV4dC1pbmRl
bnQ6LS4yNWluO30NCkBsaXN0IGwwOmxldmVsMg0KCXttc28tbGV2ZWwtbnVtYmVyLWZvcm1hdDph
bHBoYS1sb3dlcjsNCgltc28tbGV2ZWwtdGFiLXN0b3A6bm9uZTsNCgltc28tbGV2ZWwtbnVtYmVy
LXBvc2l0aW9uOmxlZnQ7DQoJdGV4dC1pbmRlbnQ6LS4yNWluO30NCkBsaXN0IGwwOmxldmVsMw0K
CXttc28tbGV2ZWwtbnVtYmVyLWZvcm1hdDpyb21hbi1sb3dlcjsNCgltc28tbGV2ZWwtdGFiLXN0
b3A6bm9uZTsNCgltc28tbGV2ZWwtbnVtYmVyLXBvc2l0aW9uOnJpZ2h0Ow0KCXRleHQtaW5kZW50
Oi05LjBwdDt9DQpAbGlzdCBsMDpsZXZlbDQNCgl7bXNvLWxldmVsLXRhYi1zdG9wOm5vbmU7DQoJ
bXNvLWxldmVsLW51bWJlci1wb3NpdGlvbjpsZWZ0Ow0KCXRleHQtaW5kZW50Oi0uMjVpbjt9DQpA
bGlzdCBsMDpsZXZlbDUNCgl7bXNvLWxldmVsLW51bWJlci1mb3JtYXQ6YWxwaGEtbG93ZXI7DQoJ
bXNvLWxldmVsLXRhYi1zdG9wOm5vbmU7DQoJbXNvLWxldmVsLW51bWJlci1wb3NpdGlvbjpsZWZ0
Ow0KCXRleHQtaW5kZW50Oi0uMjVpbjt9DQpAbGlzdCBsMDpsZXZlbDYNCgl7bXNvLWxldmVsLW51
bWJlci1mb3JtYXQ6cm9tYW4tbG93ZXI7DQoJbXNvLWxldmVsLXRhYi1zdG9wOm5vbmU7DQoJbXNv
LWxldmVsLW51bWJlci1wb3NpdGlvbjpyaWdodDsNCgl0ZXh0LWluZGVudDotOS4wcHQ7fQ0KQGxp
c3QgbDA6bGV2ZWw3DQoJe21zby1sZXZlbC10YWItc3RvcDpub25lOw0KCW1zby1sZXZlbC1udW1i
ZXItcG9zaXRpb246bGVmdDsNCgl0ZXh0LWluZGVudDotLjI1aW47fQ0KQGxpc3QgbDA6bGV2ZWw4
DQoJe21zby1sZXZlbC1udW1iZXItZm9ybWF0OmFscGhhLWxvd2VyOw0KCW1zby1sZXZlbC10YWIt
c3RvcDpub25lOw0KCW1zby1sZXZlbC1udW1iZXItcG9zaXRpb246bGVmdDsNCgl0ZXh0LWluZGVu
dDotLjI1aW47fQ0KQGxpc3QgbDA6bGV2ZWw5DQoJe21zby1sZXZlbC1udW1iZXItZm9ybWF0OnJv
bWFuLWxvd2VyOw0KCW1zby1sZXZlbC10YWItc3RvcDpub25lOw0KCW1zby1sZXZlbC1udW1iZXIt
cG9zaXRpb246cmlnaHQ7DQoJdGV4dC1pbmRlbnQ6LTkuMHB0O30NCkBsaXN0IGwxDQoJe21zby1s
aXN0LWlkOjE3MDEzOTM0MTQ7DQoJbXNvLWxpc3QtdGVtcGxhdGUtaWRzOjEzMzIyOTgyODt9DQpv
bA0KCXttYXJnaW4tYm90dG9tOjBpbjt9DQp1bA0KCXttYXJnaW4tYm90dG9tOjBpbjt9DQotLT48
L3N0eWxlPjwhLS1baWYgZ3RlIG1zbyA5XT48eG1sPg0KPG86c2hhcGVkZWZhdWx0cyB2OmV4dD0i
ZWRpdCIgc3BpZG1heD0iMTAyNiIgLz4NCjwveG1sPjwhW2VuZGlmXS0tPjwhLS1baWYgZ3RlIG1z
byA5XT48eG1sPg0KPG86c2hhcGVsYXlvdXQgdjpleHQ9ImVkaXQiPg0KPG86aWRtYXAgdjpleHQ9
ImVkaXQiIGRhdGE9IjEiIC8+DQo8L286c2hhcGVsYXlvdXQ+PC94bWw+PCFbZW5kaWZdLS0+DQo8
L2hlYWQ+DQo8Ym9keSBsYW5nPSJFTi1VUyIgbGluaz0iIzA1NjNDMSIgdmxpbms9IiM5NTRGNzIi
Pg0KPGRpdiBjbGFzcz0iV29yZFNlY3Rpb24xIj4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPkJv
bmpvdXIsIEJlcnRyYW5kLDxvOnA+PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+
PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij5UaGFuayB5b3Ug
Zm9yIHlvdXIgcmVwbHkuIEkgYW0gYSBuZXdiaWUgdG8gWGVuIGJ1dCBJ4oCZdmUgdXNlZCBCaHl2
ZSwgVm13YXJlIGFuZCBNU0ZUIEh5cGVyLVYgYmVmb3JlIHNvIEkgYW0gc29tZXdoYXQgZmFtaWxp
YXIgd2l0aCBob3cgdHlwZS0xIGh5cGVydmlzb3JzIHdvcmsuIFN0aWxsLg0KPHNwYW4gc3R5bGU9
ImZvbnQtZmFtaWx5OiZxdW90O1NlZ29lIFVJIEVtb2ppJnF1b3Q7LHNhbnMtc2VyaWYiPiYjMTI5
MzAwOzwvc3Bhbj48bzpwPjwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPjxvOnA+
Jm5ic3A7PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+SSd2ZSBlbmNsb3NlZDo8
bzpwPjwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPjxvOnA+Jm5ic3A7PC9vOnA+
PC9wPg0KPG9sIHN0eWxlPSJtYXJnaW4tdG9wOjBpbiIgc3RhcnQ9IjEiIHR5cGU9IjEiPg0KPGxp
IGNsYXNzPSJNc29QbGFpblRleHQiIHN0eWxlPSJtc28tbGlzdDpsMCBsZXZlbDEgbGZvMyI+VGhl
IHNjcmlwdCBJIHVzZWQgdG8gdHJ5IHRvIHNldCB1cCBEb20wLiBJdOKAmXMgYmFzaWNhbGx5IHRo
ZSBzY3JpcHQgSSBmb3VuZCBoZXJlOg0KPGEgaHJlZj0iaHR0cHM6Ly9hbmdyeWVsZWN0cm9uLmNv
bS9ob3ctdG8taW5zdGFsbC11YnVudHUtYXMteGVuLWRvbTAtYW5kLWRvbXUvIj5odHRwczovL2Fu
Z3J5ZWxlY3Ryb24uY29tL2hvdy10by1pbnN0YWxsLXVidW50dS1hcy14ZW4tZG9tMC1hbmQtZG9t
dS88L2E+LCBleGNlcHQsIHNsaWdodGx5IG1vZGlmaWVkIGZvciBBQXJjaDY0L0FSTXY4IGFzIGNs
b3NlIGFzIEkgd2FzIGFibGUgdG8gZGV0ZXJtaW5lIGl0LjxvOnA+PC9vOnA+PC9saT48bGkgY2xh
c3M9Ik1zb1BsYWluVGV4dCIgc3R5bGU9Im1zby1saXN0OmwwIGxldmVsMSBsZm8zIj5UaGUgZ3J1
Yi5jZmcgZmlsZS48bzpwPjwvbzpwPjwvbGk+PC9vbD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQi
PjxvOnA+Jm5ic3A7PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+QXMgSSBzYWlk
LCB0aGUgc2VyaWFsIHBvcnQgU1RET1VUIGdvZXMgZGVhZCBzbyBubyBsb2dzLiBQbGVhc2UgaGVs
cCBtZSB1bmRlcnN0YW5kIGhvdyB0byB0dXJuIG9uIGxvZ2dpbmcgc28gSSBjYW4gZmlndXJlIG91
dCB3aGF04oCZcyBnb2luZyBvbi48bzpwPjwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRl
eHQiPjxvOnA+Jm5ic3A7PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+PG86cD4m
bmJzcDs8L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij48bzpwPiZuYnNwOzwvbzpw
PjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPlRoZSBtYWNoaW5lIEnigJltIHBvcnRpbmcg
dG8gaXMgdGhpcyBvbmU6IDxhIGhyZWY9Imh0dHBzOi8vd3d3LmdpZ2FieXRlLmNvbS90dy9BUk0t
U2VydmVyL1IyODEtVDk0LXJldi0xMDAjb3YiPg0KaHR0cHM6Ly93d3cuZ2lnYWJ5dGUuY29tL3R3
L0FSTS1TZXJ2ZXIvUjI4MS1UOTQtcmV2LTEwMCNvdjwvYT48bzpwPjwvbzpwPjwvcD4NCjxwIGNs
YXNzPSJNc29QbGFpblRleHQiPjxvOnA+Jm5ic3A7PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1Bs
YWluVGV4dCI+SXQgaGFzIDEyOEcgUkFNIGFuZCBhIHRvdGFsIG9mIDVUQiBvZiBTQVMgYW5kIFNT
RCBkaXNrLiBJ4oCZbSBib290aW5nIG9mZiBvbmUgb2YgdGhlIFNTRHMuPG86cD48L286cD48L3A+
DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij48bzpwPiZuYnNwOzwvbzpwPjwvcD4NCjxwIGNsYXNz
PSJNc29QbGFpblRleHQiPkl0IHJ1bnMgVWJ1bnR1IDIwLjA0IExUUyBqdXN0IGZpbmU7IEkgZXZl
biBoYXZlIGEgVUVGSSBHT1AgZHJpdmVyIGZvciB0aGUgQU1EIFJhZGVvbiBSOSBHUFUgSSBwbHVn
Z2VkIGluLiBBbGwgb2YgdGhhdCB3b3JrcyBmaW5lIHdpdGggVWJ1bnR1LjxvOnA+PC9vOnA+PC9w
Pg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFz
cz0iTXNvUGxhaW5UZXh0Ij5UaGFua3MgaW4gYWR2YW5jZSBmb3IgYW55IGlkZWFzPG86cD48L286
cD48L3A+DQo8ZGl2Pg0KPHAgY2xhc3M9Ik1zb05vcm1hbCI+PHNwYW4gc3R5bGU9ImZvbnQtc2l6
ZToxMi4wcHQiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3Jt
YWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTIuMHB0Ij5SZ2RzLiw8bzpwPjwvbzpwPjwvc3Bh
bj48L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEyLjBw
dCI+flNhbTxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3JtYWwiPjxzcGFu
IHN0eWxlPSJmb250LXNpemU6MTQuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O0ZyZWVzdHlsZSBTY3Jp
cHQmcXVvdDsiPjxvOnA+Jm5ic3A7PC9vOnA+PC9zcGFuPjwvcD4NCjxwIGNsYXNzPSJNc29Ob3Jt
YWwiPjxzcGFuIHN0eWxlPSJmb250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O01vbm90
eXBlIENvcnNpdmEmcXVvdDs7Y29sb3I6YmxhY2siPuKAnFlvdXIgYmVzdCBhbmQgd2lzZXN0IHJl
ZnVnZSBmcm9tIGFsbCB0cm91YmxlcyBpcyBpbiB5b3VyIHNjaWVuY2XigJ08bzpwPjwvbzpwPjwv
c3Bhbj48L3A+DQo8cCBjbGFzcz0iTXNvTm9ybWFsIj48c3BhbiBzdHlsZT0iZm9udC1zaXplOjEw
LjBwdDtmb250LWZhbWlseTomcXVvdDtNb25vdHlwZSBDb3JzaXZhJnF1b3Q7O2NvbG9yOmJsYWNr
Ij4mbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsm
bmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJz
cDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsmbmJzcDsgLTwvc3Bhbj48Yj48c3BhbiBz
dHlsZT0iZm9udC1zaXplOjE1LjBwdDtmb250LWZhbWlseTomcXVvdDtHZW9yZ2lhJnF1b3Q7LHNl
cmlmO2NvbG9yOiMxQTFBMUE7Ym9yZGVyOm5vbmUgd2luZG93dGV4dCAxLjBwdDtwYWRkaW5nOjBp
bjtiYWNrZ3JvdW5kOndoaXRlIj4NCjwvc3Bhbj48L2I+PHNwYW4gc3R5bGU9ImZvbnQtc2l6ZTox
MC4wcHQ7Zm9udC1mYW1pbHk6JnF1b3Q7TW9ub3R5cGUgQ29yc2l2YSZxdW90Oztjb2xvcjojMUEx
QTFBO2JvcmRlcjpub25lIHdpbmRvd3RleHQgMS4wcHQ7cGFkZGluZzowaW47YmFja2dyb3VuZDp3
aGl0ZSI+QWRhIEtpbmcsIGNvdW50ZXNzIG9mIExvdmVsYWNlPC9zcGFuPjxzcGFuIHN0eWxlPSJm
b250LXNpemU6MTAuMHB0O2ZvbnQtZmFtaWx5OiZxdW90O01vbm90eXBlIENvcnNpdmEmcXVvdDsi
PjxvOnA+PC9vOnA+PC9zcGFuPjwvcD4NCjwvZGl2Pg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+
PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij4tLS0tLU9yaWdp
bmFsIE1lc3NhZ2UtLS0tLTxicj4NCkZyb206IEJlcnRyYW5kIE1hcnF1aXMgJmx0O0JlcnRyYW5k
Lk1hcnF1aXNAYXJtLmNvbSZndDsgPGJyPg0KU2VudDogRnJpZGF5LCBNYXkgMSwgMjAyMCAyOjUy
PGJyPg0KVG86IFNhbXVlbCBQLiBGZWx0b24gLSBHUFQgTExDICZsdDtzYW0uZmVsdG9uQGdsYWNp
ZXItcGVhay5uZXQmZ3Q7PGJyPg0KQ2M6IHhlbi1kZXZlbEBsaXN0cy54ZW4ub3JnPGJyPg0KU3Vi
amVjdDogUmU6IFF1ZXN0aW9uLi4uPG86cD48L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5U
ZXh0Ij48bzpwPiZuYnNwOzwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPkhpIFNh
bXVlbCw8bzpwPjwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPjxvOnA+Jm5ic3A7
PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+Jmd0OyBPbiAxIE1heSAyMDIwLCBh
dCAwNjo1OSwgU2FtdWVsIFAuIEZlbHRvbiAtIEdQVCBMTEMgJmx0OzxhIGhyZWY9Im1haWx0bzpz
YW0uZmVsdG9uQGdsYWNpZXItcGVhay5uZXQiPjxzcGFuIHN0eWxlPSJjb2xvcjp3aW5kb3d0ZXh0
O3RleHQtZGVjb3JhdGlvbjpub25lIj5zYW0uZmVsdG9uQGdsYWNpZXItcGVhay5uZXQ8L3NwYW4+
PC9hPiZndDsgd3JvdGU6PG86cD48L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij4m
Z3Q7PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij4mZ3Q7IFNv
LCBJ4oCZbSB0cnlpbmcgdG8gZ2V0IGEgWGVuIERvbTAgYW5kIERvbVUsIGJvdGggcnVubmluZyBV
YnVudHUgMjAuMDQgTFRTIHVwIG9uIG91ciBicmFuZC1uZXcgR2lnYWJ5dGUgVGh1bmRlclgyIEFS
TTY0IGJveC4gSSBjYW4gZ2V0IFVidW50dSB1cCBhbmQgcnVubmluZywgYnV0IGFmdGVyIGluc3Rh
bGxpbmcgdGhlIFhlbiBiaXRzLCBpdCBkaWVzIGFmdGVyIHRoZSBVRUZJIGxheWVyIGxhdW5jaGVz
IEdSVUIuDQogSSBoYXZlbuKAmXQgYmVlbiBhYmxlIHRvIGdldCBhbnkgbG9nZmlsZXMgYmVjYXVz
ZSBpdCBkb2VzbuKAmXQgZ2V0IHRoYXQgZmFyLiBOb3RoaW5nIHNob3dzIHVwIG9uIHRoZSBzZXJp
YWwgcG9ydCBsb2cgZWl0aGVyIOKAkyBpdCBqdXN0IGhhbmdzLjxvOnA+PC9vOnA+PC9wPg0KPHAg
Y2xhc3M9Ik1zb1BsYWluVGV4dCI+Jmd0OzxvOnA+Jm5ic3A7PC9vOnA+PC9wPg0KPHAgY2xhc3M9
Ik1zb1BsYWluVGV4dCI+Jmd0OyBIYXMgYW55b25lIG92ZXIgdGhlcmUgYmVlbiB0cnlpbmcgdG8g
Z2V0IGEgc2ltaWxhciBzZXR1cCBydW5uaW5nPyBBbSBJIG91dCB0byBsdW5jaCBmb3IgdHJ5aW5n
IHRoaXMsIG9yIGlzIHRoZXJlIHNvbWV0aGluZyBJ4oCZbSBtaXNzaW5nPyBBbnkgaGVscCBhdCBh
bGwgd291bGQgYmUgYXBwcmVjaWF0ZWQuPG86cD48L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxh
aW5UZXh0Ij48bzpwPiZuYnNwOzwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPkkg
YW0gY3VycmVudGx5IHBvcnRpbmcgWGVuIG9uIGFuIE4xU0RQIGFybSBib2FyZCB3aGljaCBpcyBh
bHNvIHVzaW5nIGEgRURLMi9ncnViIHNldHVwIGFuZCBJIG1hbmFnZSB0byBzdGFydCB4ZW4gZnJv
bSBncnViIGFuZCB0aGVuIHN0YXJ0IGRvbTAgcHJvdmlkaW5nIGEgRFRCLjxvOnA+PC9vOnA+PC9w
Pg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFz
cz0iTXNvUGxhaW5UZXh0Ij5NeSBncnViIGNvbmZpZ3VyYXRpb24gbG9va3MgbGlrZSB0aGlzOjxv
OnA+PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+bWVudWVudHJ5ICd4ZW4nIHs8
bzpwPjwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPiZuYnNwOyZuYnNwOyZuYnNw
OyB4ZW5faHlwZXJ2aXNvciAkcHJlZml4L3hlbi5lZmkgbG9nbHZsPWFsbCBndWVzdF9sb2dsdmw9
YWxsIGNvbnNvbGU9ZHR1YXJ0IGR0dWFydD1zZXJpYWwwIGRvbTBfbWVtPTFHPG86cD48L286cD48
L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij4mbmJzcDsmbmJzcDsmbmJzcDsgeGVuX21vZHVs
ZSAkcHJlZml4L0ltYWdlIGNvbnNvbGU9aHZjMCBlZmk9bm9ydW50aW1lIHJ3IHJvb3Q9L2Rldi9z
ZGExIHJvb3R3YWl0PG86cD48L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij4mbmJz
cDsmbmJzcDsmbmJzcDsgZGV2aWNldHJlZSAkcHJlZml4L24xc2RwLmR0YjxvOnA+PC9vOnA+PC9w
Pg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+fTxvOnA+PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1z
b1BsYWluVGV4dCI+PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0
Ij5Db3VsZCB5b3Ugc2hhcmUgeW91ciBncnViIGNvbmZpZ3VyYXRpb24gPzxvOnA+PC9vOnA+PC9w
Pg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFz
cz0iTXNvUGxhaW5UZXh0Ij5CZXJ0cmFuZDxvOnA+PC9vOnA+PC9wPg0KPHAgY2xhc3M9Ik1zb1Bs
YWluVGV4dCI+PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij4m
Z3Q7PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFzcz0iTXNvUGxhaW5UZXh0Ij4mZ3Q7IElm
IHRoaXMgZG9lc27igJl0IHdvcmssIEnigJltIGdvaW5nIHRvIGhhdmUgdG8gZ28gdG8gRnJlZUJT
RCBhbmQgQmh5dmUgYmVjYXVzZSBJIGtub3cgc29tZW9uZSB3aG8gaGFzIHRoYXQgd29ya2luZy4g
SeKAmWQgcmF0aGVyIHVzZSBMaW51eCB0aGFuIEJTRCBmb3IgdGhpcyBhcHBsaWNhdGlvbiwgdGhl
cmUgYXJlIG1vcmUgZHJpdmVycyBzdXBwb3J0aW5nIHRoaXMgaGFyZGF3YXJlLjxvOnA+PC9vOnA+
PC9wPg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+Jmd0OzxvOnA+Jm5ic3A7PC9vOnA+PC9wPg0K
PHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+Jmd0OyBUaGFua3MsPG86cD48L286cD48L3A+DQo8cCBj
bGFzcz0iTXNvUGxhaW5UZXh0Ij4mZ3Q7IH5TYW08bzpwPjwvbzpwPjwvcD4NCjxwIGNsYXNzPSJN
c29QbGFpblRleHQiPiZndDs8bzpwPiZuYnNwOzwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFp
blRleHQiPiZndDs8bzpwPiZuYnNwOzwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQi
PiZndDs8bzpwPiZuYnNwOzwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPiZndDs8
bzpwPiZuYnNwOzwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPiZndDsgJmx0O2lt
YWdlMDAxLnBuZyZndDs8bzpwPjwvbzpwPjwvcD4NCjxwIGNsYXNzPSJNc29QbGFpblRleHQiPiZn
dDsgUGhvbmU6ICYjNDM7MSAyMDYgNzAxLTczMjEgZXh0LiAxMDE8bzpwPjwvbzpwPjwvcD4NCjxw
IGNsYXNzPSJNc29QbGFpblRleHQiPiZndDsgRW1haWw6IDxhIGhyZWY9Im1haWx0bzppbmZvQGds
YWNpZXItcGVhay5uZXQiPjxzcGFuIHN0eWxlPSJjb2xvcjp3aW5kb3d0ZXh0O3RleHQtZGVjb3Jh
dGlvbjpub25lIj5pbmZvQGdsYWNpZXItcGVhay5uZXQ8L3NwYW4+PC9hPjxvOnA+PC9vOnA+PC9w
Pg0KPHAgY2xhc3M9Ik1zb1BsYWluVGV4dCI+PG86cD4mbmJzcDs8L286cD48L3A+DQo8cCBjbGFz
cz0iTXNvUGxhaW5UZXh0Ij5JTVBPUlRBTlQgTk9USUNFOiBUaGUgY29udGVudHMgb2YgdGhpcyBl
bWFpbCBhbmQgYW55IGF0dGFjaG1lbnRzIGFyZSBjb25maWRlbnRpYWwgYW5kIG1heSBhbHNvIGJl
IHByaXZpbGVnZWQuIElmIHlvdSBhcmUgbm90IHRoZSBpbnRlbmRlZCByZWNpcGllbnQsIHBsZWFz
ZSBub3RpZnkgdGhlIHNlbmRlciBpbW1lZGlhdGVseSBhbmQgZG8gbm90IGRpc2Nsb3NlIHRoZSBj
b250ZW50cyB0byBhbnkgb3RoZXIgcGVyc29uLA0KIHVzZSBpdCBmb3IgYW55IHB1cnBvc2UsIG9y
IHN0b3JlIG9yIGNvcHkgdGhlIGluZm9ybWF0aW9uIGluIGFueSBtZWRpdW0uIFRoYW5rIHlvdS48
bzpwPjwvbzpwPjwvcD4NCjwvZGl2Pg0KPC9ib2R5Pg0KPC9odG1sPg0K

--_000_b2396085a65a407089d635e68beabb61glacierpeaknet_--

--_004_b2396085a65a407089d635e68beabb61glacierpeaknet_
Content-Type: application/octet-stream; name="Xen_Ubuntu_Dom0.bsh"
Content-Description: Xen_Ubuntu_Dom0.bsh
Content-Disposition: attachment; filename="Xen_Ubuntu_Dom0.bsh"; size=812;
	creation-date="Fri, 01 May 2020 01:32:58 GMT";
	modification-date="Fri, 01 May 2020 03:00:43 GMT"
Content-Transfer-Encoding: base64

IyEvYmluL2Jhc2gKIyBTZXR1cCBhbmQgY29uZmlndXJlIFhlbiBEb21VCiMgUnVuIGFzIHJvb3Qv
c3VkbyBvbiBhIGZyZXNoIFVidW50dSBTZXJ2ZXIgMTIuMDQgTFRTIGluc3RhbGwuICBSZWJvb3Qg
d2hlbiBjb21wbGV0ZS4KCiMgVXBkYXRlCmFwdC1nZXQgdXBkYXRlCmFwdC1nZXQgdXBncmFkZQoK
IyBJbnN0YWxsIFhlbiBoeXBlcnZpc29yCmFwdC1nZXQgaW5zdGFsbCB4ZW4taHlwZXJ2aXNvci00
LjExLWFybTY0CnNlZCAtaSAncy9HUlVCX0RFRkFVTFQ9LipcKy9HUlVCX0RFRkFVTFQ9IlhlbiA0
LjExLWFybTY0Ii8nIC9ldGMvZGVmYXVsdC9ncnViCnNlZCAtaSAncy9UT09MU1RBQ0s9LipcKy9U
T09MU1RBQ0s9InhtIi8nIC9ldGMvZGVmYXVsdC94ZW4KCiMgU2V0dXAgYnJpZGdlIG5ldHdvcmsg
aW50ZXJmYWNlCmNhdCA+PiAvZXRjL25ldHdvcmsvaW50ZXJmYWNlcyA8PCBFT0wKICAKI1hlbiBC
cmlkZ2UKYXV0byB4ZW5icjAKaWZhY2UgeGVuYnIwIGluZXQgZGhjcAogIGJyaWRnZV9wb3J0cyBl
dGgwCkVPTAoKIyBMaW1pdCBEb20wIG1lbW9yeSB0byA1MTJNCmVjaG8gJ0dSVUJfQ01ETElORV9Y
RU5fREVGQVVMVD0iZG9tMF9tZW09NTEyTSInID4+IC9ldGMvZGVmYXVsdC9ncnViCnVwZGF0ZS1n
cnViCgojQWxsb3cgVk5DIGNvbm5lY3Rpb25zIGZyb20gYW55d2hlcmUKc2VkIC1pICJzLyModm5j
LWxpc3Rlbi4qLyh2bmMtbGlzdGVuIFwnMC4wLjAuMFwnKS8iIC9ldGMveGVuL3hlbmQtY29uZmln
LnN4cAoKI0ZpeCBrZXltYXAgaXNzdWUKbG4gLXMgL3Vzci9zaGFyZS9xZW11LWxpbmFyby8gL3Vz
ci9zaGFyZS9xZW11Lwo=

--_004_b2396085a65a407089d635e68beabb61glacierpeaknet_--
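[Archive note: the base64 attachment "Xen_Ubuntu_Dom0.bsh" above decodes to the following shell script, reproduced verbatim from the message. Note that despite the Dom0 filename its header comment says DomU, it targets Ubuntu Server 12.04 with the long-deprecated xm/xend toolstack, and `(vnc-listen '0.0.0.0')` exposes VNC on all interfaces.]

```shell
#!/bin/bash
# Setup and configure Xen DomU
# Run as root/sudo on a fresh Ubuntu Server 12.04 LTS install.  Reboot when complete.

# Update
apt-get update
apt-get upgrade

# Install Xen hypervisor
apt-get install xen-hypervisor-4.11-arm64
sed -i 's/GRUB_DEFAULT=.*\+/GRUB_DEFAULT="Xen 4.11-arm64"/' /etc/default/grub
sed -i 's/TOOLSTACK=.*\+/TOOLSTACK="xm"/' /etc/default/xen

# Setup bridge network interface
cat >> /etc/network/interfaces << EOL

#Xen Bridge
auto xenbr0
iface xenbr0 inet dhcp
  bridge_ports eth0
EOL

# Limit Dom0 memory to 512M
echo 'GRUB_CMDLINE_XEN_DEFAULT="dom0_mem=512M"' >> /etc/default/grub
update-grub

#Allow VNC connections from anywhere
sed -i "s/#(vnc-listen.*/(vnc-listen \'0.0.0.0\')/" /etc/xen/xend-config.sxp

#Fix keymap issue
ln -s /usr/share/qemu-linaro/ /usr/share/qemu/
```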


From xen-devel-bounces@lists.xenproject.org Thu May 14 03:53:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 03:53:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ4vZ-0005w2-Pz; Thu, 14 May 2020 03:52:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZ4vY-0005vx-A2
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 03:52:56 +0000
X-Inumbo-ID: 63eb52a0-9596-11ea-a444-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63eb52a0-9596-11ea-a444-12813bfff9fa;
 Thu, 14 May 2020 03:52:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=VjOrWUN3gOt/UtmajTC3ijjqN1djZO7IlD88IcuhEdI=; b=y2PGb68i2vMKjk1wCnysCln+/
 tRibFl7yQRdVAve1VYGf2OkKWx6nn4bFhFgtvuMqZcOD+yEabzARLStr10ZnZqq2CflqFFT2ltnBY
 tS8MpQtWZ9a4eiKmty/kwsReYGA0ROyksaLShl47nBWTW6CTCXEluZ8NAwF0/8dQOWf0Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ4vV-0002pQ-UL; Thu, 14 May 2020 03:52:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ4vV-000582-JS; Thu, 14 May 2020 03:52:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ4vV-0000PY-If; Thu, 14 May 2020 03:52:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150164-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150164: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=9d83ad86834300927b636fa02b29d84854399ed8
X-Osstest-Versions-That: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 03:52:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150164 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150164/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150150

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150142
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150154
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150154
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150154
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150154
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150154
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150154
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150154
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150154
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150154
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  9d83ad86834300927b636fa02b29d84854399ed8
baseline version:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54

Last test of basis   150154  2020-05-13 04:09:34 Z    0 days
Testing same since   150164  2020-05-13 17:36:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Paul Durrant <pdurrant@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a82582b1af..9d83ad8683  9d83ad86834300927b636fa02b29d84854399ed8 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 14 06:19:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 06:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ7Cp-0001qr-VO; Thu, 14 May 2020 06:18:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZ7Co-0001qk-WB
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 06:18:55 +0000
X-Inumbo-ID: c5c0fa52-95aa-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5c0fa52-95aa-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 06:18:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1o+fVnMfisOaNqAj8QUDryBI8tKSIxZREH8/TxbE254=; b=nc1duIzFF/Ro7p2eOE06HVjpv
 C1GycKXe1Kaolwe2rvdd3xAe3Bd18VIvLGurP2v0USMYfA7F6m1NJYQfx1GUvuUeaXT146Nd4lfOF
 QdXSF/ngV1Bnus1kL45/B7WwVyVDqr7Ibxzdr1QWFk9rIapBwS3jOddkRTe0z99TcvXGw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ7Ci-0006ov-1M; Thu, 14 May 2020 06:18:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ7Ch-0002AH-Kb; Thu, 14 May 2020 06:18:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ7Ch-0003bV-Jc; Thu, 14 May 2020 06:18:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150170-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150170: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=6b50f54ca7a021512332653f21d5050675f8127d
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 06:18:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150170 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150170/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6b50f54ca7a021512332653f21d5050675f8127d
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  118 days
Failing since        146211  2020-01-18 04:18:52 Z  117 days  108 attempts
Testing same since   150170  2020-05-14 04:19:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18201 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 14 07:10:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 07:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ80o-0006ja-PR; Thu, 14 May 2020 07:10:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ80n-0006jU-Ps
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 07:10:33 +0000
X-Inumbo-ID: ff548746-95b1-11ea-a458-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ff548746-95b1-11ea-a458-12813bfff9fa;
 Thu, 14 May 2020 07:10:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 138C7AAD0;
 Thu, 14 May 2020 07:10:33 +0000 (UTC)
Subject: Re: [PATCH v2 3/3] xen/sched: fix latent races accessing
 vcpu->dirty_cpu
To: Juergen Gross <jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eaa891af-697d-bb30-8e34-470102a98561@suse.com>
Date: Thu, 14 May 2020 09:10:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200511112829.5500-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 13:28, Juergen Gross wrote:
> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>  
>  void sync_vcpu_execstate(struct vcpu *v)
>  {
> -    if ( v->dirty_cpu == smp_processor_id() )
> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
> +
> +    if ( dirty_cpu == smp_processor_id() )
>          sync_local_execstate();
> -    else if ( vcpu_cpu_dirty(v) )
> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>      {
>          /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>      }
> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
> +           read_atomic(&v->dirty_cpu) != dirty_cpu);

Repeating my v1.1 comments:

"However, having stared at it for a while now - is this race
 free? I can see this being fine in the (initial) case of
 dirty_cpu == smp_processor_id(), but if this is for a foreign
 CPU, can't the vCPU have gone back to that same CPU again in
 the meantime?"

and later

"There is a time window from late in flush_mask() to the assertion
 you add. All sorts of things can happen during this window on
 other CPUs. IOW what guarantees the vCPU not getting unpaused or
 its affinity getting changed yet another time?"

You did reply that by what is now patch 2 this race can be
eliminated, but I have to admit I don't see why this would be.
Hence at the very least I'd expect justification in either the
description or a code comment as to why there's no race left
(and also no race to be expected to be re-introduced by code
changes elsewhere - very unlikely races are, by their nature,
unlikely to be hit during code development and the associated
testing, hence I'd like there to be sufficiently close to a
guarantee here).

My reservations here may in part be due to not following the
reasoning for patch 2, which therefore I'll have to rely on the
scheduler maintainers to judge on.

> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -158,7 +158,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>  
>      v->domain = d;
>      v->vcpu_id = vcpu_id;
> -    v->dirty_cpu = VCPU_CPU_CLEAN;
> +    write_atomic(&v->dirty_cpu, VCPU_CPU_CLEAN);

This is not strictly necessary (the vCPU won't be acted upon by
any other entity in the system just yet), and with this I'd like
to suggest to drop this change again.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 08:00:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 08:00:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ8mn-0003b5-7z; Thu, 14 May 2020 08:00:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ8mm-0003b0-J0
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 08:00:08 +0000
X-Inumbo-ID: ed1b6354-95b8-11ea-a45c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed1b6354-95b8-11ea-a45c-12813bfff9fa;
 Thu, 14 May 2020 08:00:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EADFBAEEA;
 Thu, 14 May 2020 08:00:08 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
To: Juergen Gross <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <db277779-5b1e-a2aa-3948-9e6dd8e8bef0@suse.com>
Date: Thu, 14 May 2020 09:59:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508153421.24525-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 17:34, Juergen Gross wrote:
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -10,6 +10,7 @@ obj-y += domain.o
>  obj-y += event_2l.o
>  obj-y += event_channel.o
>  obj-y += event_fifo.o
> +obj-$(CONFIG_HYPFS) += hypfs.o
>  obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
>  obj-$(CONFIG_GRANT_TABLE) += grant_table.o
>  obj-y += guestcopy.o

I guess I could/should have noticed this earlier - this isn't the
correct insertion point, considering that we try to keep these
alphabetically sorted.

> +static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
> +                                               const char *path)
> +{
> +    const char *end;
> +    struct hypfs_entry *entry;
> +    unsigned int name_len;
> +
> +    if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
> +        return NULL;
> +
> +    if ( !*path )
> +        return &dir->e;
> +
> +    end = strchr(path, '/');
> +    if ( !end )
> +        end = strchr(path, '\0');
> +    name_len = end - path;
> +
> +    list_for_each_entry ( entry, &dir->dirlist, list )
> +    {
> +        int cmp = strncmp(path, entry->name, name_len);
> +        struct hypfs_entry_dir *d = container_of(entry,
> +                                                 struct hypfs_entry_dir, e);
> +
> +        if ( cmp < 0 )
> +            return NULL;
> +        if ( !cmp && strlen(entry->name) == name_len )
> +            return *end ? hypfs_get_entry_rel(d, end + 1) : entry;

The compiler may translate this into a tail call, but shouldn't
we be worried about the nesting depth in case it doesn't?
Perhaps this doesn't need addressing here, but rather by limiting
the depth at which new entries can be created?

> +int hypfs_read_dir(const struct hypfs_entry *entry,
> +                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    const struct hypfs_entry_dir *d;
> +    const struct hypfs_entry *e;
> +    unsigned int size = entry->size;
> +
> +    d = container_of(entry, const struct hypfs_entry_dir, e);
> +
> +    list_for_each_entry ( e, &d->dirlist, list )

This function, in particular because of being non-static, makes
me wonder how, with add_entry() taking a lock, it can be safe
without any locking. Initially I thought the justification might
be because all adding of entries is an init-time-only thing, but
various involved functions aren't marked __init (and it is at
least not implausible that down the road we might see new
entries getting added during certain hotplug operations).

I do realize that do_hypfs_op() takes the necessary read lock,
but then you're still building on the assumption that the
function is reachable through only that code path, despite
being non-static. An ASSERT() here would be the minimum I guess,
but with read locks now being recursive I don't see why you
couldn't read-lock here again.

The same goes for other non-static functions, albeit things may
become more interesting for functions living on the
XEN_HYPFS_OP_write_contents path (because write locks aren't
recursive [yet]). I notice even hypfs_get_entry() is non-static,
albeit I can't see why that is (perhaps a later patch needs it).

> +int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
> +{
> +    char *buf;
> +    int ret;
> +
> +    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
> +         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
> +        return -EDOM;

hypfs_write() checks ulen against max_size, but the function
being non-static makes me once again wonder whether at the very
least an ASSERT() wouldn't be wanted here.

> +    buf = xmalloc_array(char, ulen);
> +    if ( !buf )
> +        return -ENOMEM;
> +
> +    ret = -EFAULT;
> +    if ( copy_from_guest(buf, uaddr, ulen) )
> +        goto out;
> +
> +    ret = -EINVAL;
> +    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
> +         memchr(buf, 0, ulen) != (buf + ulen - 1) )
> +        goto out;

How does this check play with gzip-ed strings?

> +long do_hypfs_op(unsigned int cmd,
> +                 XEN_GUEST_HANDLE_PARAM(const_char) arg1, unsigned long arg2,
> +                 XEN_GUEST_HANDLE_PARAM(void) arg3, unsigned long arg4)
> +{
> +    int ret;
> +    struct hypfs_entry *entry;
> +    static char path[XEN_HYPFS_MAX_PATHLEN];
> +
> +    if ( xsm_hypfs_op(XSM_PRIV) )
> +        return -EPERM;
> +
> +    if ( cmd == XEN_HYPFS_OP_get_version )
> +        return XEN_HYPFS_VERSION;

Check that all args are zero/null for this sub-op, to allow future
extension?

> +#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
> +    struct hypfs_entry_leaf __read_mostly var = {        \
> +        .e.type = typ,                                   \
> +        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
> +        .e.name = nam,                                   \
> +        .e.size = sizeof(contvar),                       \
> +        .e.max_size = wr ? sizeof(contvar) : 0,          \
> +        .e.read = hypfs_read_leaf,                       \
> +        .e.write = wr,                                   \
> +        .content = &contvar,                             \
> +    }

At the example of this, some of the macros look like they want
parentheses added around uses of some of their parameters.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 08:19:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 08:19:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ95d-0004oL-Sw; Thu, 14 May 2020 08:19:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZ95d-0004oG-2b
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 08:19:37 +0000
X-Inumbo-ID: a2cb05c2-95bb-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2cb05c2-95bb-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 08:19:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=nR5uX871E4gACpthw0UlCKCqf1LTQvebJ2b4Nq39xIM=; b=OM8mdduuW/bApH64igkdiYWih
 TSa4pwmYrpGUseEgpWgBucMO/BRhUmKbGYZi6a39aC26j7uQ9/EMOoA84qWkez9oaxDgPsPNdMYE1
 Y4le15Veuc0CJ0bcPemyK2vUAhIUhn97PEOs5SaFbbF0FMt9mSeDzTHlwaSdwP8Z5Wacw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ95W-0001QN-TP; Thu, 14 May 2020 08:19:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ95W-0007Hd-Jj; Thu, 14 May 2020 08:19:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZ95W-0002ca-J6; Thu, 14 May 2020 08:19:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150171: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=b539eeffc737d859dd1814c2e529e0ed0feba7a7
X-Osstest-Versions-That: xen=3a218961b16f1f4feb1147f56338faf1ac8f5703
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 08:19:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150171 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150171/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b539eeffc737d859dd1814c2e529e0ed0feba7a7
baseline version:
 xen                  3a218961b16f1f4feb1147f56338faf1ac8f5703

Last test of basis   150168  2020-05-13 21:00:43 Z    0 days
Testing same since   150171  2020-05-14 06:00:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3a218961b1..b539eeffc7  b539eeffc737d859dd1814c2e529e0ed0feba7a7 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 14 08:51:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 08:51:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ9Zh-0008EA-Q0; Thu, 14 May 2020 08:50:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZ9Zh-0008E5-E9
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 08:50:41 +0000
X-Inumbo-ID: fccde8ce-95bf-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fccde8ce-95bf-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 08:50:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F3CA8ACCC;
 Thu, 14 May 2020 08:50:41 +0000 (UTC)
Subject: Re: [PATCH v2 3/3] xen/sched: fix latent races accessing
 vcpu->dirty_cpu
To: Jan Beulich <jbeulich@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-4-jgross@suse.com>
 <eaa891af-697d-bb30-8e34-470102a98561@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <35440630-c065-8d3f-94d2-e01c6a5df2a2@suse.com>
Date: Thu, 14 May 2020 10:50:36 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <eaa891af-697d-bb30-8e34-470102a98561@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 09:10, Jan Beulich wrote:
> On 11.05.2020 13:28, Juergen Gross wrote:
>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>   
>>   void sync_vcpu_execstate(struct vcpu *v)
>>   {
>> -    if ( v->dirty_cpu == smp_processor_id() )
>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>> +
>> +    if ( dirty_cpu == smp_processor_id() )
>>           sync_local_execstate();
>> -    else if ( vcpu_cpu_dirty(v) )
>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>       {
>>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>       }
>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
> 
> Repeating my v1.1 comments:
> 
> "However, having stared at it for a while now - is this race
>   free? I can see this being fine in the (initial) case of
>   dirty_cpu == smp_processor_id(), but if this is for a foreign
>   CPU, can't the vCPU have gone back to that same CPU again in
>   the meantime?"
> 
> and later
> 
> "There is a time window from late in flush_mask() to the assertion
>   you add. All sorts of things can happen during this window on
>   other CPUs. IOW what guarantees the vCPU not getting unpaused or
>   its affinity getting changed yet another time?"
> 
> You did reply that by what is now patch 2 this race can be
> eliminated, but I have to admit I don't see why this would be.
> Hence at the very least I'd expect justification in either the
> description or a code comment as to why there's no race left
> (and also no race to be expected to be re-introduced by code
> changes elsewhere - very unlikely races are, by their nature,
> unlikely to be hit during code development and the associated
> testing, hence I'd like there to be sufficiently close to a
> guarantee here).
> 
> My reservations here may in part be due to not following the
> reasoning for patch 2, which therefore I'll have to rely on the
> scheduler maintainers to judge on.

sync_vcpu_execstate() isn't called for a running or runnable vcpu any
longer. I can add an ASSERT() and a comment explaining it if you like
that better.

> 
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -158,7 +158,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>>   
>>       v->domain = d;
>>       v->vcpu_id = vcpu_id;
>> -    v->dirty_cpu = VCPU_CPU_CLEAN;
>> +    write_atomic(&v->dirty_cpu, VCPU_CPU_CLEAN);
> 
> This is not strictly necessary (the vCPU won't be acted upon by
> any other entity in the system just yet), and with this I'd like
> to suggest to drop this change again.

Fine with me.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 08:52:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 08:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ9bt-0008K0-6p; Thu, 14 May 2020 08:52:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ9bs-0008Ju-IO
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 08:52:56 +0000
X-Inumbo-ID: 4d62ca8e-95c0-11ea-a462-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d62ca8e-95c0-11ea-a462-12813bfff9fa;
 Thu, 14 May 2020 08:52:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 67F55AA4F;
 Thu, 14 May 2020 08:52:57 +0000 (UTC)
Subject: Re: [PATCH v8 12/12] x86/HVM: don't needlessly intercept
 APERF/MPERF/TSC MSR reads
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <60cc730f-2a1c-d7a6-74fe-64f3c9308831@suse.com>
 <e92b6c1a-b2c3-13e7-116c-4772c851dd0b@suse.com>
 <81cc74ce-0a53-d5cd-3513-af3af6382815@citrix.com>
 <05203042-662c-3dc4-15e6-bc45587fbeec@suse.com>
Message-ID: <dd51c7b4-71cd-71b4-5213-39bd31ab40e9@suse.com>
Date: Thu, 14 May 2020 10:52:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <05203042-662c-3dc4-15e6-bc45587fbeec@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.05.2020 15:35, Jan Beulich wrote:
> On 08.05.2020 23:04, Andrew Cooper wrote:
>> On 05/05/2020 09:20, Jan Beulich wrote:
>>> If the hardware can handle accesses, we should allow it to do so. This
>>> way we can expose EFRO to HVM guests,
>>
>> I'm reminded now of the conversation I'm sure we've had before, although
>> I have a feeling it was on IRC.
>>
>> APERF/MPERF (including the EFRO interface on AMD) are free running
>> counters but only in C0.  The raw values are not synchronised across
>> threads.
>>
>> A vCPU which gets rescheduled has a 50% chance of finding the one or
>> both values going backwards, and a 100% chance of totally bogus calculation.
>>
>> There is no point exposing APERF/MPERF to guests.  It can only be used
>> safely in hypervisor context, on AMD hardware with a CLGI/STGI region,
>> or on Intel hardware in an NMI handler if you trust that a machine check
>> isn't going to ruin your day.
>>
>> VMs have no way of achieving the sampling requirements, and have a fair
>> chance of getting a plausible-but-wrong answer.
>>
>> The only possibility to do this safely is on a VM which is pinned to
>> pCPU for its lifetime, but even I'm unconvinced of the correctness.
>>
>> I don't think we should be exposing this functionality to guests at all,
>> although I might be persuaded if someone wanting to use it in a VM can
>> provide a concrete justification of why the above problems won't get in
>> their way.
> 
> Am I getting it right then that here you're reverting what you've said
> on patch 10: "I'm tempted to suggest that we offer EFRO on Intel ..."?
> And hence your request is to drop both that and this patch?

Which in turn would then mean also dropping patch 11. Not
supporting RDPRU because of the APERF/MPERF peculiarities
means we also won't be able to support it once other leaves
get defined, as its specification seems to only halfway
allow for supporting higher leaves but not lower ones (by
making the lower ones return zero, which for the two
initially defined leaves may not be expected by the caller).
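
To put the objection in concrete terms, the calculation a consumer of
these counters wants to do can be sketched as below (illustrative only,
with the MSR reads mocked as plain parameters; real code would read
APERF/MPERF - or EFRO's read-only aliases on AMD - on one fixed
hardware thread):

```c
#include <stdint.h>

/*
 * Hedged sketch of the usual APERF/MPERF effective-frequency
 * calculation.  Both deltas must come from the same hardware thread,
 * sampled back to back with interrupts (or, on AMD, GIF) masked -
 * otherwise, as noted above, the ratio is meaningless.
 */
static uint64_t effective_khz(uint64_t base_khz,
                              uint64_t aperf0, uint64_t aperf1,
                              uint64_t mperf0, uint64_t mperf1)
{
    uint64_t da = aperf1 - aperf0; /* actual cycles elapsed */
    uint64_t dm = mperf1 - mperf0; /* reference cycles elapsed */

    return dm ? base_khz * da / dm : 0;
}
```

A vCPU migrated between the two samples mixes counters from two
different threads, so da/dm no longer describe any single CPU, yet the
result still looks plausible.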

Since I don't see us settling on this very soon, I guess I'll
re-submit the series with the last three patches left out.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 09:03:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ9mJ-0000rj-9t; Thu, 14 May 2020 09:03:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ9mI-0000re-34
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:03:42 +0000
X-Inumbo-ID: ce5ee54a-95c1-11ea-a463-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce5ee54a-95c1-11ea-a463-12813bfff9fa;
 Thu, 14 May 2020 09:03:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 40867AF22;
 Thu, 14 May 2020 09:03:43 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v9 0/9] x86emul: further work
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Date: Thu, 14 May 2020 11:03:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

1: address x86_insn_is_mem_{access,write}() omissions 
2: disable FPU/MMX/SIMD insn emulation when !HVM
3: support MOVDIR{I,64B} insns
4: support ENQCMD insn
5: support SERIALIZE
6: support X{SUS,RES}LDTRK
7: support FNSTENV and FNSAVE
8: support FLDENV and FRSTOR
9: support FXSAVE/FXRSTOR

Main changes from v8 are the new patch 1 and the patches dropped from
the end of the series. For other changes see the individual patches.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 09:07:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:07:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ9pt-00011B-Qg; Thu, 14 May 2020 09:07:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ9pr-000115-TM
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:07:23 +0000
X-Inumbo-ID: 51a12e22-95c2-11ea-a463-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51a12e22-95c2-11ea-a463-12813bfff9fa;
 Thu, 14 May 2020 09:07:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7DFE6ABE6;
 Thu, 14 May 2020 09:07:23 +0000 (UTC)
Subject: [PATCH v9 1/9] x86emul: address x86_insn_is_mem_{access,write}()
 omissions
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <9a1fdfad-d7e5-df0c-0bb5-8b8c609312d3@suse.com>
Date: Thu, 14 May 2020 11:07:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

First of all explain in comments what the functions' purposes are. Then
make them actually match their comments.

Note that fc6fa977be54 ("x86emul: extend x86_insn_is_mem_write()
coverage") didn't actually fix the function's behavior for {,V}STMXCSR:
Both are covered by generic code higher up in the function, due to
x86_decode_twobyte() already doing suitable adjustments. And VSTMXCSR
wouldn't have been covered anyway without a further X86EMUL_OPC_VEX()
case label. Keep the inner case label in a comment for reference.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v9: New.
---
I'm intending to add testing of the functions to the harness, but this
will take some more time. Possibly such a test harness addition could
be acceptable even after the freeze point.
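
As a toy illustration (not the emulator's actual logic) of the
explicit-vs-implicit distinction the new comments draw - POP touches
the stack explicitly even without a memory operand, while LEA and
segment register loads do not count:

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Toy classifier mirroring the intent described above; the opcode
 * coverage and structure are illustrative only.
 */
static bool toy_is_mem_access(uint8_t opcode)
{
    switch ( opcode )
    {
    case 0x58: case 0x59: /* POP reg - the stack read is explicit */
        return true;
    case 0x8d:            /* LEA - computes an address, accesses no memory */
    case 0x8e:            /* MOV r/m,%sreg - descriptor read is implicit */
        return false;
    }
    return false;
}
```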

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -11438,25 +11438,62 @@ x86_insn_operand_ea(const struct x86_emu
     return state->ea.mem.off;
 }
 
+/*
+ * This function means to return 'true' for all supported insns with explicit
+ * accesses to memory.  This means also insns which don't have an explicit
+ * memory operand (like POP), but it does not mean e.g. segment selector
+ * loads, where the descriptor table access is considered an implicit one.
+ */
 bool
 x86_insn_is_mem_access(const struct x86_emulate_state *state,
                        const struct x86_emulate_ctxt *ctxt)
 {
+    if ( mode_64bit() && state->not_64bit )
+        return false;
+
     if ( state->ea.type == OP_MEM )
         return ctxt->opcode != 0x8d /* LEA */ &&
+               (ctxt->opcode & ~7) != X86EMUL_OPC(0x0f, 0x18) /* NOP space */ &&
                (ctxt->opcode != X86EMUL_OPC(0x0f, 0x01) ||
                 (state->modrm_reg & 7) != 7) /* INVLPG */;
 
     switch ( ctxt->opcode )
     {
+    case 0x06 ... 0x07: /* PUSH / POP %es */
+    case 0x0e:          /* PUSH %cs */
+    case 0x16 ... 0x17: /* PUSH / POP %ss */
+    case 0x1e ... 0x1f: /* PUSH / POP %ds */
+    case 0x50 ... 0x5f: /* PUSH / POP reg */
+    case 0x60 ... 0x61: /* PUSHA / POPA */
+    case 0x68: case 0x6a: /* PUSH imm */
     case 0x6c ... 0x6f: /* INS / OUTS */
+    case 0x8f:          /* POP r/m */
+    case 0x9a:          /* CALL (far, direct) */
+    case 0x9c ... 0x9d: /* PUSHF / POPF */
     case 0xa4 ... 0xa7: /* MOVS / CMPS */
     case 0xaa ... 0xaf: /* STOS / LODS / SCAS */
+    case 0xc2 ... 0xc3: /* RET (near) */
+    case 0xc8 ... 0xc9: /* ENTER / LEAVE */
+    case 0xca ... 0xcb: /* RET (far) */
     case 0xd7:          /* XLAT */
+    case 0xe8:          /* CALL (near, direct) */
+    case X86EMUL_OPC(0x0f, 0xa0):         /* PUSH %fs */
+    case X86EMUL_OPC(0x0f, 0xa1):         /* POP %fs */
+    case X86EMUL_OPC(0x0f, 0xa8):         /* PUSH %gs */
+    case X86EMUL_OPC(0x0f, 0xa9):         /* POP %gs */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* MASKMOV{Q,DQU} */
                                           /* VMASKMOVDQU */
         return true;
 
+    case 0xff:
+        switch ( state->modrm_reg & 7 )
+        {
+        case 2: /* CALL (near, indirect) */
+        case 6: /* PUSH r/m */
+            return true;
+        }
+        break;
+
     case X86EMUL_OPC(0x0f, 0x01):
         /* Cover CLZERO. */
         return (state->modrm_rm & 7) == 4 && (state->modrm_reg & 7) == 7;
@@ -11465,10 +11502,20 @@ x86_insn_is_mem_access(const struct x86_
     return false;
 }
 
+/*
+ * This function means to return 'true' for all supported insns with explicit
+ * writes to memory.  This means also insns which don't have an explicit
+ * memory operand (like PUSH), but it does not mean e.g. segment selector
+ * loads, where the (possible) descriptor table write is considered an
+ * implicit access.
+ */
 bool
 x86_insn_is_mem_write(const struct x86_emulate_state *state,
                       const struct x86_emulate_ctxt *ctxt)
 {
+    if ( mode_64bit() && state->not_64bit )
+        return false;
+
     switch ( state->desc & DstMask )
     {
     case DstMem:
@@ -11490,9 +11537,25 @@ x86_insn_is_mem_write(const struct x86_e
 
     switch ( ctxt->opcode )
     {
+    case 0x63:                           /* ARPL */
+        return !mode_64bit();
+
+    case 0x06:                           /* PUSH %es */
+    case 0x0e:                           /* PUSH %cs */
+    case 0x16:                           /* PUSH %ss */
+    case 0x1e:                           /* PUSH %ds */
+    case 0x50 ... 0x57:                  /* PUSH reg */
+    case 0x60:                           /* PUSHA */
+    case 0x68: case 0x6a:                /* PUSH imm */
     case 0x6c: case 0x6d:                /* INS */
+    case 0x9a:                           /* CALL (far, direct) */
+    case 0x9c:                           /* PUSHF */
     case 0xa4: case 0xa5:                /* MOVS */
     case 0xaa: case 0xab:                /* STOS */
+    case 0xc8:                           /* ENTER */
+    case 0xe8:                           /* CALL (near, direct) */
+    case X86EMUL_OPC(0x0f, 0xa0):        /* PUSH %fs */
+    case X86EMUL_OPC(0x0f, 0xa8):        /* PUSH %gs */
     case X86EMUL_OPC(0x0f, 0xab):        /* BTS */
     case X86EMUL_OPC(0x0f, 0xb3):        /* BTR */
     case X86EMUL_OPC(0x0f, 0xbb):        /* BTC */
@@ -11550,6 +11613,16 @@ x86_insn_is_mem_write(const struct x86_e
         }
         break;
 
+    case 0xff:
+        switch ( state->modrm_reg & 7 )
+        {
+        case 2: /* CALL (near, indirect) */
+        case 3: /* CALL (far, indirect) */
+        case 6: /* PUSH r/m */
+            return true;
+        }
+        break;
+
     case X86EMUL_OPC(0x0f, 0x01):
         switch ( state->modrm_reg & 7 )
         {
@@ -11564,7 +11637,7 @@ x86_insn_is_mem_write(const struct x86_e
         switch ( state->modrm_reg & 7 )
         {
         case 0: /* FXSAVE */
-        case 3: /* {,V}STMXCSR */
+        /* case 3: STMXCSR - handled above */
         case 4: /* XSAVE */
         case 6: /* XSAVEOPT */
             return true;



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:08:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ9qJ-000148-6g; Thu, 14 May 2020 09:07:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ9qI-000142-KJ
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:07:50 +0000
X-Inumbo-ID: 6193c9f2-95c2-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6193c9f2-95c2-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 09:07:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6AB74AF33;
 Thu, 14 May 2020 09:07:50 +0000 (UTC)
Subject: [PATCH v9 2/9] x86emul: disable FPU/MMX/SIMD insn emulation when !HVM
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <7349e0e5-5347-e0cc-f661-df8961b2a2aa@suse.com>
Date: Thu, 14 May 2020 11:07:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In a pure PV environment (the PV shim in particular) we don't really
need emulation of all these. To limit #ifdef-ary, utilize some of the
CASE_*() macros we have, by providing variants expanding to
(effectively) nothing (really a label, which in turn requires passing
-Wno-unused-label to the compiler when building such configurations).

Due to the mixture of macro and #ifdef use, the placement of some of
the #ifdef-s is a little arbitrary.

The resulting object file's .text is less than half its original size,
and it also appears to compile a little more quickly.

This is meant as a first step; more parts can likely be disabled down
the road.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v7: Integrate into this series. Re-base.
---
I'll be happy to take suggestions for avoiding -Wno-unused-label.
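
A minimal standalone illustration of the label trick described above
(assumed names, not the actual CASE_*() definitions): when a feature is
configured out, the macro expands to a plain label instead of case
labels, so the guarded code still compiles but can never be reached -
and that otherwise-unused label is what triggers -Wunused-label.

```c
/*
 * Sketch only: WANT_MMX and CASE_MMX() are made-up stand-ins for the
 * real X86EMUL_NO_* / CASE_*() machinery.
 */
#define WANT_MMX 0

#if WANT_MMX
# define CASE_MMX(opc) case opc
#else
# define CASE_MMX(opc) C_##opc   /* just a label; never jumped to */
#endif

static int classify(int opcode)
{
    switch ( opcode )
    {
    CASE_MMX(0x77):      /* emms - dead code when WANT_MMX == 0 */
        return 1;
    case 0x90:           /* nop */
        return 2;
    }
    return 0;
}
```

Built with -Wno-unused-label, classify(0x77) simply falls through to
the final return, since no case label matches it any more.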

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -73,6 +73,9 @@ obj-y += vm_event.o
 obj-y += xstate.o
 extra-y += asm-macros.i
 
+ifneq ($(CONFIG_HVM),y)
+x86_emulate.o: CFLAGS-y += -Wno-unused-label
+endif
 x86_emulate.o: x86_emulate/x86_emulate.c x86_emulate/x86_emulate.h
 
 efi-y := $(shell if [ ! -r $(BASEDIR)/include/xen/compile.h -o \
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -42,6 +42,12 @@
     }                                                      \
 })
 
+#ifndef CONFIG_HVM
+# define X86EMUL_NO_FPU
+# define X86EMUL_NO_MMX
+# define X86EMUL_NO_SIMD
+#endif
+
 #include "x86_emulate/x86_emulate.c"
 
 int x86emul_read_xcr(unsigned int reg, uint64_t *val,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3492,6 +3492,7 @@ x86_decode(
             op_bytes = 4;
         break;
 
+#ifndef X86EMUL_NO_SIMD
     case simd_packed_int:
         switch ( vex.pfx )
         {
@@ -3557,6 +3558,7 @@ x86_decode(
     case simd_256:
         op_bytes = 32;
         break;
+#endif /* !X86EMUL_NO_SIMD */
 
     default:
         op_bytes = 0;
@@ -3711,6 +3713,7 @@ x86_emulate(
         break;
     }
 
+#ifndef X86EMUL_NO_SIMD
     /* With a memory operand, fetch the mask register in use (if any). */
     if ( ea.type == OP_MEM && evex.opmsk &&
          _get_fpu(fpu_type = X86EMUL_FPU_opmask, ctxt, ops) == X86EMUL_OKAY )
@@ -3741,6 +3744,7 @@ x86_emulate(
         put_fpu(X86EMUL_FPU_opmask, false, state, ctxt, ops);
         fpu_type = X86EMUL_FPU_none;
     }
+#endif /* !X86EMUL_NO_SIMD */
 
     /* Decode (but don't fetch) the destination operand: register or memory. */
     switch ( d & DstMask )
@@ -4386,11 +4390,13 @@ x86_emulate(
         singlestep = _regs.eflags & X86_EFLAGS_TF;
         break;
 
+#ifndef X86EMUL_NO_FPU
     case 0x9b:  /* wait/fwait */
         host_and_vcpu_must_have(fpu);
         get_fpu(X86EMUL_FPU_wait);
         emulate_fpu_insn_stub(b);
         break;
+#endif
 
     case 0x9c: /* pushf */
         if ( (_regs.eflags & X86_EFLAGS_VM) &&
@@ -4800,6 +4806,7 @@ x86_emulate(
         break;
     }
 
+#ifndef X86EMUL_NO_FPU
     case 0xd8: /* FPU 0xd8 */
         host_and_vcpu_must_have(fpu);
         get_fpu(X86EMUL_FPU_fpu);
@@ -5134,6 +5141,7 @@ x86_emulate(
             }
         }
         break;
+#endif /* !X86EMUL_NO_FPU */
 
     case 0xe0 ... 0xe2: /* loop{,z,nz} */ {
         unsigned long count = get_loop_count(&_regs, ad_bytes);
@@ -6079,6 +6087,8 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0x19) ... X86EMUL_OPC(0x0f, 0x1f): /* nop */
         break;
 
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0x0e): /* femms */
         host_and_vcpu_must_have(3dnow);
         asm volatile ( "femms" );
@@ -6099,39 +6109,71 @@ x86_emulate(
         state->simd_size = simd_other;
         goto simd_0f_imm8;
 
-#define CASE_SIMD_PACKED_INT(pfx, opc)       \
+#endif /* !X86EMUL_NO_MMX */
+
+#if !defined(X86EMUL_NO_SIMD) && !defined(X86EMUL_NO_MMX)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
     case X86EMUL_OPC(pfx, opc):              \
     case X86EMUL_OPC_66(pfx, opc)
-#define CASE_SIMD_PACKED_INT_VEX(pfx, opc)   \
+#elif !defined(X86EMUL_NO_SIMD)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
+    case X86EMUL_OPC_66(pfx, opc)
+#elif !defined(X86EMUL_NO_MMX)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
+    case X86EMUL_OPC(pfx, opc)
+#else
+# define CASE_SIMD_PACKED_INT(pfx, opc) C##pfx##_##opc
+#endif
+
+#ifndef X86EMUL_NO_SIMD
+
+# define CASE_SIMD_PACKED_INT_VEX(pfx, opc)  \
     CASE_SIMD_PACKED_INT(pfx, opc):          \
     case X86EMUL_OPC_VEX_66(pfx, opc)
 
-#define CASE_SIMD_ALL_FP(kind, pfx, opc)     \
+# define CASE_SIMD_ALL_FP(kind, pfx, opc)    \
     CASE_SIMD_PACKED_FP(kind, pfx, opc):     \
     CASE_SIMD_SCALAR_FP(kind, pfx, opc)
-#define CASE_SIMD_PACKED_FP(kind, pfx, opc)  \
+# define CASE_SIMD_PACKED_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind(pfx, opc):        \
     case X86EMUL_OPC##kind##_66(pfx, opc)
-#define CASE_SIMD_SCALAR_FP(kind, pfx, opc)  \
+# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind##_F3(pfx, opc):   \
     case X86EMUL_OPC##kind##_F2(pfx, opc)
-#define CASE_SIMD_SINGLE_FP(kind, pfx, opc)  \
+# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind(pfx, opc):        \
     case X86EMUL_OPC##kind##_F3(pfx, opc)
 
-#define CASE_SIMD_ALL_FP_VEX(pfx, opc)       \
+# define CASE_SIMD_ALL_FP_VEX(pfx, opc)      \
     CASE_SIMD_ALL_FP(, pfx, opc):            \
     CASE_SIMD_ALL_FP(_VEX, pfx, opc)
-#define CASE_SIMD_PACKED_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_PACKED_FP_VEX(pfx, opc)   \
     CASE_SIMD_PACKED_FP(, pfx, opc):         \
     CASE_SIMD_PACKED_FP(_VEX, pfx, opc)
-#define CASE_SIMD_SCALAR_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc)   \
     CASE_SIMD_SCALAR_FP(, pfx, opc):         \
     CASE_SIMD_SCALAR_FP(_VEX, pfx, opc)
-#define CASE_SIMD_SINGLE_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc)   \
     CASE_SIMD_SINGLE_FP(, pfx, opc):         \
     CASE_SIMD_SINGLE_FP(_VEX, pfx, opc)
 
+#else
+
+# define CASE_SIMD_PACKED_INT_VEX(pfx, opc)  \
+    CASE_SIMD_PACKED_INT(pfx, opc)
+
+# define CASE_SIMD_ALL_FP(kind, pfx, opc)    C##kind##pfx##_##opc
+# define CASE_SIMD_PACKED_FP(kind, pfx, opc) Cp##kind##pfx##_##opc
+# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) Cs##kind##pfx##_##opc
+# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) C##kind##pfx##_##opc
+
+# define CASE_SIMD_ALL_FP_VEX(pfx, opc)    CASE_SIMD_ALL_FP(, pfx, opc)
+# define CASE_SIMD_PACKED_FP_VEX(pfx, opc) CASE_SIMD_PACKED_FP(, pfx, opc)
+# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc) CASE_SIMD_SCALAR_FP(, pfx, opc)
+# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc) CASE_SIMD_SINGLE_FP(, pfx, opc)
+
+#endif
+
     CASE_SIMD_SCALAR_FP(, 0x0f, 0x2b):     /* movnts{s,d} xmm,mem */
         host_and_vcpu_must_have(sse4a);
         /* fall through */
@@ -6269,6 +6311,8 @@ x86_emulate(
         insn_bytes = EVEX_PFX_BYTES + 2;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x12):       /* movlpd m64,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0x12):   /* vmovlpd m64,xmm,xmm */
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x13):   /* movlp{s,d} xmm,m64 */
@@ -6375,6 +6419,8 @@ x86_emulate(
         avx512_vlen_check(false);
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0x20): /* mov cr,reg */
     case X86EMUL_OPC(0x0f, 0x21): /* mov dr,reg */
     case X86EMUL_OPC(0x0f, 0x22): /* mov reg,cr */
@@ -6401,6 +6447,8 @@ x86_emulate(
             goto done;
         break;
 
+#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD)
+
     case X86EMUL_OPC_66(0x0f, 0x2a):       /* cvtpi2pd mm/m64,xmm */
         if ( ea.type == OP_REG )
         {
@@ -6412,6 +6460,8 @@ x86_emulate(
         op_bytes = (b & 4) && (vex.pfx & VEX_PREFIX_DOUBLE_MASK) ? 16 : 8;
         goto simd_0f_fp;
 
+#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */
+
     CASE_SIMD_SCALAR_FP_VEX(0x0f, 0x2a):   /* {,v}cvtsi2s{s,d} r/m,xmm */
         if ( vex.opcx == vex_none )
         {
@@ -6758,6 +6808,8 @@ x86_emulate(
             dst.val = src.val;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0x4a):    /* kadd{w,q} k,k,k */
         if ( !vex.w )
             host_and_vcpu_must_have(avx512dq);
@@ -6812,6 +6864,8 @@ x86_emulate(
         generate_exception_if(!vex.l || vex.w, EXC_UD);
         goto opmask_common;
 
+#endif /* X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x50):   /* movmskp{s,d} xmm,reg */
                                            /* vmovmskp{s,d} {x,y}mm,reg */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xd7):  /* pmovmskb {,x}mm,reg */
@@ -6895,6 +6949,8 @@ x86_emulate(
                          evex.w);
         goto avx512f_all_fp;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x5b):   /* cvt{ps,dq}2{dq,ps} xmm/mem,xmm */
                                            /* vcvt{ps,dq}2{dq,ps} {x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_F3(0x0f, 0x5b):       /* cvttps2dq xmm/mem,xmm */
@@ -6925,6 +6981,8 @@ x86_emulate(
         op_bytes = 16 << evex.lr;
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x60): /* punpcklbw {,x}mm/mem,{,x}mm */
                                           /* vpunpcklbw {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x61): /* punpcklwd {,x}mm/mem,{,x}mm */
@@ -6951,6 +7009,7 @@ x86_emulate(
                                           /* vpackusbw {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6b): /* packsswd {,x}mm/mem,{,x}mm */
                                           /* vpacksswd {x,y}mm/mem,{x,y}mm,{x,y}mm */
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_66(0x0f, 0x6c):     /* punpcklqdq xmm/m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0x6c): /* vpunpcklqdq {x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_66(0x0f, 0x6d):     /* punpckhqdq xmm/m128,xmm */
@@ -7035,6 +7094,7 @@ x86_emulate(
                                           /* vpsubd {x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_66(0x0f, 0xfb):     /* psubq xmm/m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0xfb): /* vpsubq {x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif /* !X86EMUL_NO_SIMD */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfc): /* paddb {,x}mm/mem,{,x}mm */
                                           /* vpaddb {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfd): /* paddw {,x}mm/mem,{,x}mm */
@@ -7042,6 +7102,7 @@ x86_emulate(
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfe): /* paddd {,x}mm/mem,{,x}mm */
                                           /* vpaddd {x,y}mm/mem,{x,y}mm,{x,y}mm */
     simd_0f_int:
+#ifndef X86EMUL_NO_SIMD
         if ( vex.opcx != vex_none )
         {
     case X86EMUL_OPC_VEX_66(0x0f38, 0x00): /* vpshufb {x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7083,11 +7144,14 @@ x86_emulate(
         }
         if ( vex.pfx )
             goto simd_0f_sse2;
+#endif /* !X86EMUL_NO_SIMD */
     simd_0f_mmx:
         host_and_vcpu_must_have(mmx);
         get_fpu(X86EMUL_FPU_mmx);
         goto simd_0f_common;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xf6): /* vpsadbw [xyz]mm/mem,[xyz]mm,[xyz]mm */
         generate_exception_if(evex.opmsk, EXC_UD);
         /* fall through */
@@ -7181,6 +7245,8 @@ x86_emulate(
         generate_exception_if(!evex.w, EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6e): /* mov{d,q} r/m,{,x}mm */
                                           /* vmov{d,q} r/m,xmm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x7e): /* mov{d,q} {,x}mm,r/m */
@@ -7222,6 +7288,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6e): /* vmov{d,q} r/m,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x7e): /* vmov{d,q} xmm,r/m */
         generate_exception_if((evex.lr || evex.opmsk || evex.brs ||
@@ -7294,11 +7362,15 @@ x86_emulate(
         d |= TwoOp;
         /* fall through */
     case X86EMUL_OPC_66(0x0f, 0xd6):     /* movq xmm,xmm/m64 */
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
     case X86EMUL_OPC(0x0f, 0x6f):        /* movq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0x7f):        /* movq mm,mm/m64 */
+#endif
         op_bytes = 8;
         goto simd_0f_int;
 
+#ifndef X86EMUL_NO_SIMD
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x70):/* pshuf{w,d} $imm8,{,x}mm/mem,{,x}mm */
                                          /* vpshufd $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_F3(0x0f, 0x70):     /* pshufhw $imm8,xmm/m128,xmm */
@@ -7307,12 +7379,15 @@ x86_emulate(
     case X86EMUL_OPC_VEX_F2(0x0f, 0x70): /* vpshuflw $imm8,{x,y}mm/mem,{x,y}mm */
         d = (d & ~SrcMask) | SrcMem | TwoOp;
         op_bytes = vex.pfx ? 16 << vex.l : 8;
+#endif
     simd_0f_int_imm8:
         if ( vex.opcx != vex_none )
         {
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0e): /* vpblendw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0f): /* vpalignr $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x42): /* vmpsadbw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif
             if ( vex.l )
             {
     simd_0f_imm8_avx2:
@@ -7320,6 +7395,7 @@ x86_emulate(
             }
             else
             {
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x08): /* vroundps $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x09): /* vroundpd $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0a): /* vroundss $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7327,6 +7403,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0c): /* vblendps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0d): /* vblendpd $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x40): /* vdpps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif
     simd_0f_imm8_avx:
                 host_and_vcpu_must_have(avx);
             }
@@ -7360,6 +7437,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x70): /* vpshufd $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x70): /* vpshufhw $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F2(0x0f, 0x70): /* vpshuflw $imm8,[xyz]mm/mem,[xyz]mm{k} */
@@ -7418,6 +7497,9 @@ x86_emulate(
         opc[1] = modrm;
         opc[2] = imm1;
         insn_bytes = PFX_BYTES + 3;
+
+#endif /* X86EMUL_NO_SIMD */
+
     simd_0f_reg_only:
         opc[insn_bytes - PFX_BYTES] = 0xc3;
 
@@ -7428,6 +7510,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x71): /* Grp12 */
         switch ( modrm_reg & 7 )
         {
@@ -7459,6 +7543,9 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0x73):        /* Grp14 */
         switch ( modrm_reg & 7 )
         {
@@ -7468,6 +7555,9 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_MMX */
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x73):
     case X86EMUL_OPC_VEX_66(0x0f, 0x73):
         switch ( modrm_reg & 7 )
@@ -7498,7 +7588,12 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+
+#ifndef X86EMUL_NO_MMX
     case X86EMUL_OPC(0x0f, 0x77):        /* emms */
+#endif
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX(0x0f, 0x77):    /* vzero{all,upper} */
         if ( vex.opcx != vex_none )
         {
@@ -7544,6 +7639,7 @@ x86_emulate(
 #endif
         }
         else
+#endif /* !X86EMUL_NO_SIMD */
         {
             host_and_vcpu_must_have(mmx);
             get_fpu(X86EMUL_FPU_mmx);
@@ -7557,6 +7653,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 1;
         goto simd_0f_reg_only;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x78):     /* Grp17 */
         switch ( modrm_reg & 7 )
         {
@@ -7654,6 +7752,8 @@ x86_emulate(
         op_bytes = 8;
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0x80) ... X86EMUL_OPC(0x0f, 0x8f): /* jcc (near) */
         if ( test_cc(b, _regs.eflags) )
             jmp_rel((int32_t)src.val);
@@ -7664,6 +7764,8 @@ x86_emulate(
         dst.val = test_cc(b, _regs.eflags);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0x91):    /* kmov{w,q} k,mem */
     case X86EMUL_OPC_VEX_66(0x0f, 0x91): /* kmov{b,d} k,mem */
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
@@ -7812,6 +7914,8 @@ x86_emulate(
         dst.type = OP_NONE;
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
         msr_val = 0;
         fail_if(ops->cpuid == NULL);
@@ -7908,6 +8012,7 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
+#ifndef X86EMUL_NO_SIMD
         case 2: /* ldmxcsr */
             generate_exception_if(vex.pfx, EXC_UD);
             vcpu_must_have(sse);
@@ -7926,6 +8031,7 @@ x86_emulate(
             get_fpu(vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm);
             asm volatile ( "stmxcsr %0" : "=m" (dst.val) );
             break;
+#endif /* !X86EMUL_NO_SIMD */
 
         case 5: /* lfence */
             fail_if(modrm_mod != 3);
@@ -7974,6 +8080,8 @@ x86_emulate(
         }
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
@@ -7988,6 +8096,8 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_F3(0x0f, 0xae): /* Grp15 */
         fail_if(modrm_mod != 3);
         generate_exception_if((modrm_reg & 4) || !mode_64bit(), EXC_UD);
@@ -8227,6 +8337,8 @@ x86_emulate(
         }
         goto simd_0f_imm8_avx;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_ALL_FP(_EVEX, 0x0f, 0xc2): /* vcmp{p,s}{s,d} $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
         generate_exception_if((evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK) ||
                                (ea.type != OP_REG && evex.brs &&
@@ -8253,6 +8365,8 @@ x86_emulate(
         insn_bytes = EVEX_PFX_BYTES + 3;
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xc3): /* movnti */
         /* Ignore the non-temporal hint for now. */
         vcpu_must_have(sse2);
@@ -8267,6 +8381,8 @@ x86_emulate(
         ea.type = OP_MEM;
         goto simd_0f_int_imm8;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xc4):   /* vpinsrw $imm8,r32/m16,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x20): /* vpinsrb $imm8,r32/m8,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x22): /* vpinsr{d,q} $imm8,r/m,xmm,xmm */
@@ -8284,6 +8400,8 @@ x86_emulate(
         state->simd_size = simd_other;
         goto avx512f_imm8_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xc5):  /* pextrw $imm8,{,x}mm,reg */
                                            /* vpextrw $imm8,xmm,reg */
         generate_exception_if(vex.l, EXC_UD);
@@ -8299,6 +8417,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         goto simd_0f_to_gpr;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0xc6): /* vshufp{s,d} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK),
                               EXC_UD);
@@ -8313,6 +8433,8 @@ x86_emulate(
         avx512_vlen_check(false);
         goto simd_imm8_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xc7): /* Grp9 */
     {
         union {
@@ -8503,6 +8625,8 @@ x86_emulate(
         }
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd2): /* vpsrld xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd3): /* vpsrlq xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe2): /* vpsra{d,q} xmm/m128,[xyz]mm,[xyz]mm{k} */
@@ -8524,12 +8648,18 @@ x86_emulate(
         generate_exception_if(evex.w != (b & 1), EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0xd4):        /* paddq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0xf4):        /* pmuludq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0xfb):        /* psubq mm/m64,mm */
         vcpu_must_have(sse2);
         goto simd_0f_mmx;
 
+#endif /* !X86EMUL_NO_MMX */
+#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD)
+
     case X86EMUL_OPC_F3(0x0f, 0xd6):     /* movq2dq mm,xmm */
     case X86EMUL_OPC_F2(0x0f, 0xd6):     /* movdq2q xmm,mm */
         generate_exception_if(ea.type != OP_REG, EXC_UD);
@@ -8537,6 +8667,9 @@ x86_emulate(
         host_and_vcpu_must_have(mmx);
         goto simd_0f_int;
 
+#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0xe7):        /* movntq mm,m64 */
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
         sfence = true;
@@ -8552,6 +8685,9 @@ x86_emulate(
         vcpu_must_have(mmxext);
         goto simd_0f_mmx;
 
+#endif /* !X86EMUL_NO_MMX */
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xda): /* vpminub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xde): /* vpmaxub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe4): /* vpmulhuw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -8572,6 +8708,8 @@ x86_emulate(
         op_bytes = 8 << (!!(vex.pfx & VEX_PREFIX_DOUBLE_MASK) + vex.l);
         goto simd_0f_cvt;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* {,v}maskmov{q,dqu} {,x}mm,{,x}mm */
         generate_exception_if(ea.type != OP_REG, EXC_UD);
         if ( vex.opcx != vex_none )
@@ -8675,6 +8813,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX_66(0x0f38, 0x19): /* vbroadcastsd xmm/m64,ymm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x1a): /* vbroadcastf128 m128,ymm */
         generate_exception_if(!vex.l, EXC_UD);
@@ -9257,6 +9397,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_66(0x0f38, 0x82): /* invpcid reg,m128 */
         vcpu_must_have(invpcid);
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
@@ -9299,6 +9441,8 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x83): /* vpmultishiftqb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(!evex.w, EXC_UD);
         host_and_vcpu_must_have(avx512_vbmi);
@@ -9862,6 +10006,8 @@ x86_emulate(
         generate_exception_if(evex.brs || evex.opmsk, EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f38, 0xf0): /* movbe m,r */
     case X86EMUL_OPC(0x0f38, 0xf1): /* movbe r,m */
         vcpu_must_have(movbe);
@@ -10027,6 +10173,8 @@ x86_emulate(
                             : "0" ((uint32_t)src.val), "rm" (_regs.edx) );
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x01): /* vpermpd $imm8,ymm/m256,ymm */
         generate_exception_if(!vex.l || !vex.w, EXC_UD);
@@ -10087,6 +10235,8 @@ x86_emulate(
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT(0x0f3a, 0x0f): /* palignr $imm8,{,x}mm/mem,{,x}mm */
         host_and_vcpu_must_have(ssse3);
         if ( vex.pfx )
@@ -10114,6 +10264,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 4;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x42): /* vdbpsadbw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w, EXC_UD);
         /* fall through */
@@ -10612,6 +10764,8 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         goto simd_0f_imm8_avx;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_VEX_F2(0x0f3a, 0xf0): /* rorx imm,r/m,r */
         vcpu_must_have(bmi2);
         generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
@@ -10626,6 +10780,8 @@ x86_emulate(
             asm ( "rorl %b1,%k0" : "=g" (dst.val) : "c" (imm1), "0" (src.val) );
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */
@@ -10661,6 +10817,8 @@ x86_emulate(
         host_and_vcpu_must_have(xop);
         goto simd_0f_imm8_ymm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_XOP(09, 0x01): /* XOP Grp1 */
         switch ( modrm_reg & 7 )
         {
@@ -10720,6 +10878,8 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_XOP(09, 0x82): /* vfrczss xmm/m128,xmm */
     case X86EMUL_OPC_XOP(09, 0x83): /* vfrczsd xmm/m128,xmm */
         generate_exception_if(vex.l, EXC_UD);
@@ -10775,6 +10935,8 @@ x86_emulate(
         host_and_vcpu_must_have(xop);
         goto simd_0f_ymm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_XOP(0a, 0x10): /* bextr imm,r/m,r */
     {
         uint8_t *buf = get_stub(stub);



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:08:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:08:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ9qi-00019F-Fk; Thu, 14 May 2020 09:08:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ9qh-000194-6j
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:08:15 +0000
X-Inumbo-ID: 70af663a-95c2-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70af663a-95c2-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 09:08:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C325CAF8D;
 Thu, 14 May 2020 09:08:15 +0000 (UTC)
Subject: [PATCH v9 3/9] x86emul: support MOVDIR{I,64B} insns
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <99a8266c-f4f2-76fd-0092-fe10f1148eb4@suse.com>
Date: Thu, 14 May 2020 11:08:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce a new blk() hook, paralleling the rmw() one in a certain way,
but intended for larger data sizes. Hence its HVM intermediate handling
function doesn't fall back to splitting the operation if the requested
virtual address can't be mapped.

Note that SDM revision 071 doesn't specify exception behavior for
ModRM.mod == 0b11; assuming #UD here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
v9: Fold in "x86/HVM: make hvmemul_blk() capable of handling r/o
    operations". Also adjust x86_insn_is_mem_write().
v7: Add blk_NONE. Move harness's setting of .blk. Correct indentation.
    Re-base.
v6: Fold MOVDIRI and MOVDIR64B changes again. Use blk() for both. All
    tags dropped.
v5: Introduce/use ->blk() hook. Correct asm() operands.
v4: Split MOVDIRI and MOVDIR64B and move this one ahead. Re-base.
v3: Update description.
---
(SDE: -tnt)

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -652,6 +652,18 @@ static int cmpxchg(
     return X86EMUL_OKAY;
 }
 
+static int blk(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    return x86_emul_blk((void *)offset, p_data, bytes, eflags, state, ctxt);
+}
+
 static int read_segment(
     enum x86_segment seg,
     struct segment_register *reg,
@@ -721,6 +733,7 @@ static struct x86_emulate_ops emulops =
     .insn_fetch = fetch,
     .write      = write,
     .cmpxchg    = cmpxchg,
+    .blk        = blk,
     .read_segment = read_segment,
     .cpuid      = emul_test_cpuid,
     .read_cr    = emul_test_read_cr,
@@ -2339,6 +2352,50 @@ int main(int argc, char **argv)
         goto fail;
     printf("okay\n");
 
+    printf("%-40s", "Testing movdiri %edx,(%ecx)...");
+    if ( stack_exec && cpu_has_movdiri )
+    {
+        instr[0] = 0x0f; instr[1] = 0x38; instr[2] = 0xf9; instr[3] = 0x11;
+
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)memset(res, -1, 16);
+        regs.edx = 0x44332211;
+
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[4]) ||
+             res[0] != 0x44332211 || ~res[1] )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing movdir64b 144(%edx),%ecx...");
+    if ( stack_exec && cpu_has_movdir64b )
+    {
+        instr[0] = 0x66; instr[1] = 0x0f; instr[2] = 0x38; instr[3] = 0xf8;
+        instr[4] = 0x8a; instr[5] = 0x90; instr[8] = instr[7] = instr[6] = 0;
+
+        regs.eip = (unsigned long)&instr[0];
+        for ( i = 0; i < 64; ++i )
+            res[i] = i - 20;
+        regs.edx = (unsigned long)res;
+        regs.ecx = (unsigned long)(res + 16);
+
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[9]) ||
+             res[15] != -5 || res[32] != 12 )
+            goto fail;
+        for ( i = 16; i < 32; ++i )
+            if ( res[i] != i )
+                goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
     if ( stack_exec && cpu_has_mmx )
     {
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -154,6 +154,8 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_avx512_vnni (cp.feat.avx512_vnni && xcr0_mask(0xe6))
 #define cpu_has_avx512_bitalg (cp.feat.avx512_bitalg && xcr0_mask(0xe6))
 #define cpu_has_avx512_vpopcntdq (cp.feat.avx512_vpopcntdq && xcr0_mask(0xe6))
+#define cpu_has_movdiri    cp.feat.movdiri
+#define cpu_has_movdir64b  cp.feat.movdir64b
 #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6))
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -47,6 +47,7 @@ $(call as-option-add,CFLAGS,CC,"rdseed %
 $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
+$(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR)
 
 # GAS's idea of true is -1.  Clang's idea is 1
 $(call as-option-add,CFLAGS,CC,\
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1441,6 +1441,47 @@ static int hvmemul_rmw(
     return rc;
 }
 
+static int hvmemul_blk(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    struct hvm_emulate_ctxt *hvmemul_ctxt =
+        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
+    unsigned long addr;
+    uint32_t pfec = PFEC_page_present;
+    int rc;
+    void *mapping = NULL;
+
+    rc = hvmemul_virtual_to_linear(
+        seg, offset, bytes, NULL, hvm_access_write, hvmemul_ctxt, &addr);
+    if ( rc != X86EMUL_OKAY || !bytes )
+        return rc;
+
+    if ( x86_insn_is_mem_write(state, ctxt) )
+        pfec |= PFEC_write_access;
+
+    if ( is_x86_system_segment(seg) )
+        pfec |= PFEC_implicit;
+    else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
+        pfec |= PFEC_user_mode;
+
+    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
+    if ( IS_ERR(mapping) )
+        return ~PTR_ERR(mapping);
+    if ( !mapping )
+        return X86EMUL_UNHANDLEABLE;
+
+    rc = x86_emul_blk(mapping, p_data, bytes, eflags, state, ctxt);
+    hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt);
+
+    return rc;
+}
+
 static int hvmemul_write_discard(
     enum x86_segment seg,
     unsigned long offset,
@@ -2512,6 +2553,7 @@ static const struct x86_emulate_ops hvm_
     .write         = hvmemul_write,
     .rmw           = hvmemul_rmw,
     .cmpxchg       = hvmemul_cmpxchg,
+    .blk           = hvmemul_blk,
     .validate      = hvmemul_validate,
     .rep_ins       = hvmemul_rep_ins,
     .rep_outs      = hvmemul_rep_outs,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -548,6 +548,8 @@ static const struct ext0f38_table {
     [0xf1] = { .to_mem = 1, .two_op = 1 },
     [0xf2 ... 0xf3] = {},
     [0xf5 ... 0xf7] = {},
+    [0xf8] = { .simd_size = simd_other },
+    [0xf9] = { .to_mem = 1, .two_op = 1 /* Mov */ },
 };
 
 /* Shift values between src and dst sizes of pmov{s,z}x{b,w,d}{w,d,q}. */
@@ -851,6 +853,10 @@ struct x86_emulate_state {
         rmw_xchg,
         rmw_xor,
     } rmw;
+    enum {
+        blk_NONE,
+        blk_movdir,
+    } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
     uint8_t sib_index, sib_scale;
     uint8_t rex_prefix;
@@ -1914,6 +1920,8 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg)
 #define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq)
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
+#define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
+#define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
@@ -2722,10 +2730,12 @@ x86_decode_0f38(
     {
     case 0x00 ... 0xef:
     case 0xf2 ... 0xf5:
-    case 0xf7 ... 0xff:
+    case 0xf7 ... 0xf8:
+    case 0xfa ... 0xff:
         op_bytes = 0;
         /* fall through */
     case 0xf6: /* adcx / adox */
+    case 0xf9: /* movdiri */
         ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
         break;
 
@@ -10173,6 +10183,34 @@ x86_emulate(
                             : "0" ((uint32_t)src.val), "rm" (_regs.edx) );
         break;
 
+    case X86EMUL_OPC_66(0x0f38, 0xf8): /* movdir64b r,m512 */
+        host_and_vcpu_must_have(movdir64b);
+        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        src.val = truncate_ea(*dst.reg);
+        generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
+                              EXC_GP, 0);
+        fail_if(!ops->blk);
+        state->blk = blk_movdir;
+        BUILD_BUG_ON(sizeof(*mmvalp) < 64);
+        if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64,
+                             ctxt)) != X86EMUL_OKAY ||
+             (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags,
+                            state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        state->simd_size = simd_none;
+        break;
+
+    case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */
+        host_and_vcpu_must_have(movdiri);
+        generate_exception_if(dst.type != OP_MEM, EXC_UD);
+        fail_if(!ops->blk);
+        state->blk = blk_movdir;
+        if ( (rc = ops->blk(dst.mem.seg, dst.mem.off, &src.val, op_bytes,
+                            &_regs.eflags, state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        dst.type = OP_NONE;
+        break;
+
 #ifndef X86EMUL_NO_SIMD
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */
@@ -11432,6 +11470,77 @@ int x86_emul_rmw(
     return X86EMUL_OKAY;
 }
 
+int x86_emul_blk(
+    void *ptr,
+    void *data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    switch ( state->blk )
+    {
+        /*
+         * Throughout this switch(), memory clobbers are used to compensate
+         * that other operands may not properly express the (full) memory
+         * ranges covered.
+         */
+    case blk_movdir:
+        switch ( bytes )
+        {
+#ifdef __x86_64__
+        case sizeof(uint32_t):
+# ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(uint32_t *)data), "r" (ptr) : "memory" );
+# else
+            /* movdiri %esi, (%rdi) */
+            asm ( ".byte 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(uint32_t *)data), "D" (ptr) : "memory" );
+# endif
+            break;
+#endif
+
+        case sizeof(unsigned long):
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(unsigned long *)data), "r" (ptr) : "memory" );
+#else
+            /* movdiri %rsi, (%rdi) */
+            asm ( ".byte 0x48, 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(unsigned long *)data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        case 64:
+            if ( ((unsigned long)ptr & 0x3f) )
+            {
+                ASSERT_UNREACHABLE();
+                return X86EMUL_UNHANDLEABLE;
+            }
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdir64b (%0), %1" :: "r" (data), "r" (ptr) : "memory" );
+#else
+            /* movdir64b (%rsi), %rdi */
+            asm ( ".byte 0x66, 0x0f, 0x38, 0xf8, 0x3e"
+                  :: "S" (data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    return X86EMUL_OKAY;
+}
+
 static void __init __maybe_unused build_assertions(void)
 {
     /* Check the values against SReg3 encoding in opcode/ModRM bytes. */
@@ -11689,6 +11798,11 @@ x86_insn_is_mem_write(const struct x86_e
         break;
 
     default:
+        switch ( ctxt->opcode )
+        {
+        case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */
+            return true;
+        }
         return false;
     }
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -310,6 +310,22 @@ struct x86_emulate_ops
         struct x86_emulate_ctxt *ctxt);
 
     /*
+     * blk: Emulate a large (block) memory access.
+     * @p_data: [IN/OUT] (optional) Pointer to source/destination buffer.
+     * @eflags: [IN/OUT] Pointer to EFLAGS to be updated according to
+     *                   instruction effects.
+     * @state:  [IN/OUT] Pointer to (opaque) emulator state.
+     */
+    int (*blk)(
+        enum x86_segment seg,
+        unsigned long offset,
+        void *p_data,
+        unsigned int bytes,
+        uint32_t *eflags,
+        struct x86_emulate_state *state,
+        struct x86_emulate_ctxt *ctxt);
+
+    /*
      * validate: Post-decode, pre-emulate hook to allow caller controlled
      * filtering.
      */
@@ -793,6 +809,14 @@ x86_emul_rmw(
     unsigned int bytes,
     uint32_t *eflags,
     struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt);
+int
+x86_emul_blk(
+    void *ptr,
+    void *data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt);
 
 static inline void x86_emul_hw_exception(
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -118,6 +118,8 @@
 #define cpu_has_avx512_bitalg   boot_cpu_has(X86_FEATURE_AVX512_BITALG)
 #define cpu_has_avx512_vpopcntdq boot_cpu_has(X86_FEATURE_AVX512_VPOPCNTDQ)
 #define cpu_has_rdpid           boot_cpu_has(X86_FEATURE_RDPID)
+#define cpu_has_movdiri         boot_cpu_has(X86_FEATURE_MOVDIRI)
+#define cpu_has_movdir64b       boot_cpu_has(X86_FEATURE_MOVDIR64B)
 
 /* CPUID level 0x80000007.edx */
 #define cpu_has_itsc            boot_cpu_has(X86_FEATURE_ITSC)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -240,6 +240,8 @@ XEN_CPUFEATURE(AVX512_BITALG, 6*32+12) /
 XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14) /*A  POPCNT for vectors of DW/QW */
 XEN_CPUFEATURE(RDPID,         6*32+22) /*A  RDPID instruction */
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
+XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
+XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(ITSC,          7*32+ 8) /*   Invariant TSC */



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:09:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:09:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZ9rF-0001HI-US; Thu, 14 May 2020 09:08:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZ9rE-0001H3-Ld
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:08:48 +0000
X-Inumbo-ID: 84d2137e-95c2-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84d2137e-95c2-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 09:08:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 598D5AF69;
 Thu, 14 May 2020 09:08:49 +0000 (UTC)
Subject: [PATCH v9 4/9] x86emul: support ENQCMD insns
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <f11e1d13-2d43-847f-0815-d780d66dddeb@suse.com>
Date: Thu, 14 May 2020 11:08:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Note that the ISA extensions document revision 038 doesn't specify
exception behavior for ModRM.mod == 0b11; assuming #UD here.

No tests are being added to the harness - this would be quite hard, as
we can't just issue the insns against RAM. Their similarity with
MOVDIR64B should make the test case there good enough to cover any
fundamental flaws.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
TBD: This doesn't (can't) consult PASID translation tables yet, as we
     have no VMX code for this so far. I guess for this we will want to
     replace the direct ->read_msr(MSR_PASID, ...) with a new
     ->read_pasid() hook.
---
v9: Consistently use named asm() operands. Also adjust
    x86_insn_is_mem_write(). A -> a in public header. Move
    asm-x86/msr-index.h addition and drop _IA32 from their names.
    Introduce _AC() into the emulator harness as a result.
v7: Re-base.
v6: Re-base.
v5: New.

--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -59,6 +59,9 @@
     (type *)((char *)mptr__ - offsetof(type, member)); \
 })
 
+#define AC_(n,t) (n##t)
+#define _AC(n,t) AC_(n,t)
+
 #define hweight32 __builtin_popcount
 #define hweight64 __builtin_popcountll
 
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -48,6 +48,7 @@ $(call as-option-add,CFLAGS,CC,"clwb (%r
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
 $(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR)
+$(call as-option-add,CFLAGS,CC,"enqcmd (%rax)$$(comma)%rax",-DHAVE_AS_ENQCMD)
 
 # GAS's idea of true is -1.  Clang's idea is 1
 $(call as-option-add,CFLAGS,CC,\
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -855,6 +855,7 @@ struct x86_emulate_state {
     } rmw;
     enum {
         blk_NONE,
+        blk_enqcmd,
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -901,6 +902,7 @@ typedef union {
     uint64_t __attribute__ ((aligned(16))) xmm[2];
     uint64_t __attribute__ ((aligned(32))) ymm[4];
     uint64_t __attribute__ ((aligned(64))) zmm[8];
+    uint32_t data32[16];
 } mmval_t;
 
 /*
@@ -1922,6 +1924,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
 #define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
 #define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
+#define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
@@ -10200,6 +10203,36 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+    case X86EMUL_OPC_F2(0x0f38, 0xf8): /* enqcmd r,m512 */
+    case X86EMUL_OPC_F3(0x0f38, 0xf8): /* enqcmds r,m512 */
+        host_and_vcpu_must_have(enqcmd);
+        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(vex.pfx != vex_f2 && !mode_ring0(), EXC_GP, 0);
+        src.val = truncate_ea(*dst.reg);
+        generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
+                              EXC_GP, 0);
+        fail_if(!ops->blk);
+        BUILD_BUG_ON(sizeof(*mmvalp) < 64);
+        if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64,
+                             ctxt)) != X86EMUL_OKAY )
+            goto done;
+        if ( vex.pfx == vex_f2 ) /* enqcmd */
+        {
+            fail_if(!ops->read_msr);
+            if ( (rc = ops->read_msr(MSR_PASID, &msr_val,
+                                     ctxt)) != X86EMUL_OKAY )
+                goto done;
+            generate_exception_if(!(msr_val & PASID_VALID), EXC_GP, 0);
+            mmvalp->data32[0] = MASK_EXTR(msr_val, PASID_PASID_MASK);
+        }
+        mmvalp->data32[0] &= ~0x7ff00000;
+        state->blk = blk_enqcmd;
+        if ( (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags,
+                            state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        state->simd_size = simd_none;
+        break;
+
     case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */
         host_and_vcpu_must_have(movdiri);
         generate_exception_if(dst.type != OP_MEM, EXC_UD);
@@ -11480,11 +11513,36 @@ int x86_emul_blk(
 {
     switch ( state->blk )
     {
+        bool zf;
+
         /*
          * Throughout this switch(), memory clobbers are used to compensate
          * that other operands may not properly express the (full) memory
          * ranges covered.
          */
+    case blk_enqcmd:
+        ASSERT(bytes == 64);
+        if ( ((unsigned long)ptr & 0x3f) )
+        {
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        *eflags &= ~EFLAGS_MASK;
+#ifdef HAVE_AS_ENQCMD
+        asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %[zf]")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : [src] "r" (data), [dst] "r" (ptr) : "memory" );
+#else
+        /* enqcmds (%rsi), %rdi */
+        asm ( ".byte 0xf3, 0x0f, 0x38, 0xf8, 0x3e"
+              ASM_FLAG_OUT(, "; setz %[zf]")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : "S" (data), "D" (ptr) : "memory" );
+#endif
+        if ( zf )
+            *eflags |= X86_EFLAGS_ZF;
+        break;
+
     case blk_movdir:
         switch ( bytes )
         {
@@ -11801,6 +11859,8 @@ x86_insn_is_mem_write(const struct x86_e
         switch ( ctxt->opcode )
         {
         case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */
+        case X86EMUL_OPC_F2(0x0f38, 0xf8): /* ENQCMD */
+        case X86EMUL_OPC_F3(0x0f38, 0xf8): /* ENQCMDS */
             return true;
         }
         return false;
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -120,6 +120,7 @@
 #define cpu_has_rdpid           boot_cpu_has(X86_FEATURE_RDPID)
 #define cpu_has_movdiri         boot_cpu_has(X86_FEATURE_MOVDIRI)
 #define cpu_has_movdir64b       boot_cpu_has(X86_FEATURE_MOVDIR64B)
+#define cpu_has_enqcmd          boot_cpu_has(X86_FEATURE_ENQCMD)
 
 /* CPUID level 0x80000007.edx */
 #define cpu_has_itsc            boot_cpu_has(X86_FEATURE_ITSC)
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -74,6 +74,10 @@
 #define MSR_PL3_SSP                         0x000006a7
 #define MSR_INTERRUPT_SSP_TABLE             0x000006a8
 
+#define MSR_PASID                           0x00000d93
+#define  PASID_PASID_MASK                   0x000fffff
+#define  PASID_VALID                        (_AC(1, ULL) << 31)
+
 /*
  * Legacy MSR constants in need of cleanup.  No new MSRs below this comment.
  */
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -242,6 +242,7 @@ XEN_CPUFEATURE(RDPID,         6*32+22) /
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
 XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
 XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */
+XEN_CPUFEATURE(ENQCMD,        6*32+29) /*   ENQCMD{,S} instructions */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(ITSC,          7*32+ 8) /*   Invariant TSC */
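
[Archive note, not part of the patch: the descriptor handling above overwrites
dword 0 with the PASID field of MSR_PASID for ENQCMD, takes it from memory for
ENQCMDS, and in both cases clears bits 20-30 via "&= ~0x7ff00000". A minimal
user-space sketch of that dword-0 construction; the helper names are
illustrative only, the constants are copied from the msr-index.h hunk:]

```c
#include <assert.h>
#include <stdint.h>

/* Constants as added to msr-index.h above. */
#define PASID_PASID_MASK 0x000fffffu
#define PASID_VALID      (1ull << 31)

/* ENQCMD: dword 0 comes entirely from MSR_PASID's PASID field (bits 0-19),
 * then bits 20-30 are cleared (a no-op here, kept for symmetry). */
static uint32_t enqcmd_dword0(uint64_t msr_pasid)
{
    return (uint32_t)(msr_pasid & PASID_PASID_MASK) & ~0x7ff00000u;
}

/* ENQCMDS: dword 0 is taken from the memory operand; only bits 20-30
 * are cleared. */
static uint32_t enqcmds_dword0(uint32_t mem_dword0)
{
    return mem_dword0 & ~0x7ff00000u;
}
```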



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:09:26 2020
Subject: [PATCH v9 5/9] x86emul: support SERIALIZE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <20f4c371-b60a-b86c-b0c0-b1ab64ca4349@suse.com>
Date: Thu, 14 May 2020 11:09:13 +0200
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

... enabling its use by all guest kinds at the same time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v9: A -> a in public header.
v7: Re-base.
v6: New.
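
[Archive note, not part of the patch: the feature bit being plumbed through
here is CPUID leaf 7, sub-leaf 0, EDX bit 14, matching the libxl/xen-cpuid
hunks below. A small x86-only user-space sketch of detecting it with the
GCC/Clang cpuid.h wrapper; the function name is illustrative only:]

```c
#include <cpuid.h>   /* GCC/Clang __get_cpuid_count() wrapper */

/* Return 1 if this CPU enumerates SERIALIZE (leaf 7:0, EDX bit 14),
 * 0 otherwise (including when leaf 7 is not available at all). */
static int cpu_has_serialize_bit(void)
{
    unsigned int eax, ebx, ecx, edx;

    if ( !__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx) )
        return 0;
    return (edx >> 14) & 1;
}
```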

--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -214,6 +214,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"avx512-4vnniw",0x00000007,  0, CPUID_REG_EDX,  2,  1},
         {"avx512-4fmaps",0x00000007,  0, CPUID_REG_EDX,  3,  1},
         {"md-clear",     0x00000007,  0, CPUID_REG_EDX, 10,  1},
+        {"serialize",    0x00000007,  0, CPUID_REG_EDX, 14,  1},
         {"cet-ibt",      0x00000007,  0, CPUID_REG_EDX, 20,  1},
         {"ibrsb",        0x00000007,  0, CPUID_REG_EDX, 26,  1},
         {"stibp",        0x00000007,  0, CPUID_REG_EDX, 27,  1},
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -161,6 +161,7 @@ static const char *const str_7d0[32] =
 
     [10] = "md-clear",
     /* 12 */                [13] = "tsx-force-abort",
+    [14] = "serialize",
 
     [18] = "pconfig",
     [20] = "cet-ibt",
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -161,6 +161,7 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_movdir64b  cp.feat.movdir64b
 #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6))
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
+#define cpu_has_serialize  cp.feat.serialize
 #define cpu_has_avx512_bf16 (cp.feat.avx512_bf16 && xcr0_mask(0xe6))
 
 #define cpu_has_xgetbv1   (cpu_has_xsave && cp.xstate.xgetbv1)
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1927,6 +1927,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
+#define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
 #define vcpu_has_avx512_bf16() (ctxt->cpuid->feat.avx512_bf16)
 
 #define vcpu_must_have(feat) \
@@ -5660,6 +5661,18 @@ x86_emulate(
                 goto done;
             break;
 
+        case 0xe8:
+            switch ( vex.pfx )
+            {
+            case vex_none: /* serialize */
+                host_and_vcpu_must_have(serialize);
+                asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
+                break;
+            default:
+                goto unimplemented_insn;
+            }
+            break;
+
         case 0xf8: /* swapgs */
             generate_exception_if(!mode_64bit(), EXC_UD);
             generate_exception_if(!mode_ring0(), EXC_GP, 0);
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -129,6 +129,7 @@
 #define cpu_has_avx512_4vnniw   boot_cpu_has(X86_FEATURE_AVX512_4VNNIW)
 #define cpu_has_avx512_4fmaps   boot_cpu_has(X86_FEATURE_AVX512_4FMAPS)
 #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)
+#define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
 
 /* CPUID level 0x00000007:1.eax */
 #define cpu_has_avx512_bf16     boot_cpu_has(X86_FEATURE_AVX512_BF16)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -260,6 +260,7 @@ XEN_CPUFEATURE(AVX512_4VNNIW, 9*32+ 2) /
 XEN_CPUFEATURE(AVX512_4FMAPS, 9*32+ 3) /*A  AVX512 Multiply Accumulation Single Precision */
 XEN_CPUFEATURE(MD_CLEAR,      9*32+10) /*A  VERW clears microarchitectural buffers */
 XEN_CPUFEATURE(TSX_FORCE_ABORT, 9*32+13) /* MSR_TSX_FORCE_ABORT.RTM_ABORT */
+XEN_CPUFEATURE(SERIALIZE,     9*32+14) /*a  SERIALIZE insn */
 XEN_CPUFEATURE(CET_IBT,       9*32+20) /*   CET - Indirect Branch Tracking */
 XEN_CPUFEATURE(IBRSB,         9*32+26) /*A  IBRS and IBPB support (used by Intel) */
 XEN_CPUFEATURE(STIBP,         9*32+27) /*A  STIBP */



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:09:43 2020
Subject: [PATCH v9 6/9] x86emul: support X{SUS,RES}LDTRK
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <564bb51e-aeaf-48de-6326-a7c03e1fe738@suse.com>
Date: Thu, 14 May 2020 11:09:39 +0200
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

There's nothing to be done by the emulator, as we unconditionally abort
any XBEGIN.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v9: A -> a in public header. Add comments.
v6: New.
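
[Archive note, not part of the patch: the gen-cpuid.py hunk below records
TSXLDTRK as a deep dependency of RTM, i.e. hiding RTM from a guest also hides
TSXLDTRK. A tiny C sketch of that masking rule; the bit positions and helper
name are illustrative only, not the real featureset encoding:]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative bit positions only. */
#define FEAT_RTM      (1u << 0)
#define FEAT_TSXLDTRK (1u << 1)

/* Mirror of the dependency rule "RTM: [TSXLDTRK]": a dependent feature
 * may only be exposed when its ancestor is. */
static uint32_t apply_deps(uint32_t feats)
{
    if ( !(feats & FEAT_RTM) )
        feats &= ~FEAT_TSXLDTRK;
    return feats;
}
```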

--- a/tools/libxl/libxl_cpuid.c
+++ b/tools/libxl/libxl_cpuid.c
@@ -208,6 +208,7 @@ int libxl_cpuid_parse_config(libxl_cpuid
         {"avx512-vnni",  0x00000007,  0, CPUID_REG_ECX, 11,  1},
         {"avx512-bitalg",0x00000007,  0, CPUID_REG_ECX, 12,  1},
         {"avx512-vpopcntdq",0x00000007,0,CPUID_REG_ECX, 14,  1},
+        {"tsxldtrk",     0x00000007,  0, CPUID_REG_ECX, 16,  1},
         {"rdpid",        0x00000007,  0, CPUID_REG_ECX, 22,  1},
         {"cldemote",     0x00000007,  0, CPUID_REG_ECX, 25,  1},
 
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -128,6 +128,7 @@ static const char *const str_7c0[32] =
     [10] = "vpclmulqdq",       [11] = "avx512_vnni",
     [12] = "avx512_bitalg",
     [14] = "avx512_vpopcntdq",
+    [16] = "tsxldtrk",
 
     [22] = "rdpid",
     /* 24 */                   [25] = "cldemote",
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -1921,6 +1921,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_vnni() (ctxt->cpuid->feat.avx512_vnni)
 #define vcpu_has_avx512_bitalg() (ctxt->cpuid->feat.avx512_bitalg)
 #define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq)
+#define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
 #define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
 #define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
@@ -5668,6 +5669,28 @@ x86_emulate(
                 host_and_vcpu_must_have(serialize);
                 asm volatile ( ".byte 0x0f, 0x01, 0xe8" );
                 break;
+            case vex_f2: /* xsusldtrk */
+                vcpu_must_have(tsxldtrk);
+                /*
+                 * We're never in a transactional region when coming here
+                 * - nothing else to do.
+                 */
+                break;
+            default:
+                goto unimplemented_insn;
+            }
+            break;
+
+        case 0xe9:
+            switch ( vex.pfx )
+            {
+            case vex_f2: /* xresldtrk */
+                vcpu_must_have(tsxldtrk);
+                /*
+                 * We're never in a transactional region when coming here
+                 * - nothing else to do.
+                 */
+                break;
             default:
                 goto unimplemented_insn;
             }
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -238,6 +238,7 @@ XEN_CPUFEATURE(VPCLMULQDQ,    6*32+10) /
 XEN_CPUFEATURE(AVX512_VNNI,   6*32+11) /*A  Vector Neural Network Instrs */
 XEN_CPUFEATURE(AVX512_BITALG, 6*32+12) /*A  Support for VPOPCNT[B,W] and VPSHUFBITQMB */
 XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14) /*A  POPCNT for vectors of DW/QW */
+XEN_CPUFEATURE(TSXLDTRK,      6*32+16) /*a  TSX load tracking suspend/resume insns */
 XEN_CPUFEATURE(RDPID,         6*32+22) /*A  RDPID instruction */
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
 XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -289,6 +289,9 @@ def crunch_numbers(state):
         # as dependent features simplifies Xen's logic, and prevents the guest
         # from seeing implausible configurations.
         IBRSB: [STIBP, SSBD],
+
+        # In principle the TSXLDTRK insns could also be considered independent.
+        RTM: [TSXLDTRK],
     }
 
     deep_features = tuple(sorted(deps.keys()))



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:10:23 2020
Subject: [PATCH v9 7/9] x86emul: support FNSTENV and FNSAVE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <29dd611f-cd5d-464b-2046-9c73160d4cea@suse.com>
Date: Thu, 14 May 2020 11:10:17 +0200
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

To avoid introducing another boolean into emulator state, the
rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode
info (affecting structure layout, albeit not size) to x86_emul_blk().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: The full 16-bit padding fields in the 32-bit structures get filled
     with all ones by modern CPUs (i.e. unlike what the comment says for
     FIP and FDP). We may want to mirror this as well (for the real mode
     variant), even if those fields' contents are unspecified.
---
v9: Fix !HVM build. Add /*state->*/ comments. Add memset(). Add/extend
    comments.
v7: New.
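
[Archive note, not part of the patch: in the 16-bit real/VM86 layout
(struct x87_env16's "real" variant below) the 20-bit linear FIP is split
into a low 16-bit word plus a 4-bit high nibble sharing a word with the
11-bit FOP. A sketch of that packing, matching the bitfield layout in the
patch; the helper names are illustrative only:]

```c
#include <assert.h>
#include <stdint.h>

/* Pack a 20-bit linear FIP and an 11-bit FOP into the two words of the
 * 16-bit real-mode environment: fip_lo, then fop in bits 0-10 and the
 * FIP high nibble in bits 12-15 (bit 11 reserved). */
static void pack_fip16(uint32_t fip, uint16_t fop,
                       uint16_t *fip_lo, uint16_t *word2)
{
    *fip_lo = fip & 0xffff;
    *word2 = (fop & 0x7ff) | (uint16_t)(((fip >> 16) & 0xf) << 12);
}

/* Recover the 20-bit linear FIP from the two stored words. */
static uint32_t unpack_fip16(uint16_t fip_lo, uint16_t word2)
{
    return fip_lo | ((uint32_t)(word2 >> 12) << 16);
}
```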

--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -123,6 +123,7 @@ static inline bool xcr0_mask(uint64_t ma
 }
 
 #define cache_line_size() (cp.basic.clflush_size * 8)
+#define cpu_has_fpu        cp.basic.fpu
 #define cpu_has_mmx        cp.basic.mmx
 #define cpu_has_fxsr       cp.basic.fxsr
 #define cpu_has_sse        cp.basic.sse
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -748,6 +748,25 @@ static struct x86_emulate_ops emulops =
 
 #define MMAP_ADDR 0x100000
 
+/*
+ * 64-bit OSes may not (be able to) properly restore the two selectors in
+ * the FPU environment. Zap them so that memcmp() on two saved images will
+ * work regardless of whether a context switch occurred in the middle.
+ */
+static void zap_fpsel(unsigned int *env, bool is_32bit)
+{
+    if ( is_32bit )
+    {
+        env[4] &= ~0xffff;
+        env[6] &= ~0xffff;
+    }
+    else
+    {
+        env[2] &= ~0xffff;
+        env[3] &= ~0xffff;
+    }
+}
+
 #ifdef __x86_64__
 # define STKVAL_DISP 64
 static const struct {
@@ -2394,6 +2413,62 @@ int main(int argc, char **argv)
         printf("okay\n");
     }
     else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing fnstenv 4(%ecx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t three = 3;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fstenv %0"
+                       : "=m" (res[9]) : "m" (three) : "memory" );
+        zap_fpsel(&res[9], true);
+        instr[0] = 0xd9; instr[1] = 0x71; instr[2] = 0x04;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)res;
+        res[8] = 0xaa55aa55;
+        rc = x86_emulate(&ctxt, &emulops);
+        zap_fpsel(&res[1], true);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 1, res + 9, 28) ||
+             res[8] != 0xaa55aa55 ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing 16-bit fnsave (%ecx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t five = 5;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fsaves %0"
+                       : "=m" (res[25]) : "m" (five) : "memory" );
+        zap_fpsel(&res[25], false);
+        asm volatile ( "frstors %0" :: "m" (res[25]) : "memory" );
+        instr[0] = 0x66; instr[1] = 0xdd; instr[2] = 0x31;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)res;
+        res[23] = 0xaa55aa55;
+        res[24] = 0xaa55aa55;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res, res + 25, 94) ||
+             (res[23] >> 16) != 0xaa55 ||
+             res[24] != 0xaa55aa55 ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
         printf("skipped\n");
 
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -856,6 +856,9 @@ struct x86_emulate_state {
     enum {
         blk_NONE,
         blk_enqcmd,
+#ifndef X86EMUL_NO_FPU
+        blk_fst, /* FNSTENV, FNSAVE */
+#endif
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -897,6 +900,50 @@ struct x86_emulate_state {
 #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
 #endif
 
+#ifndef X86EMUL_NO_FPU
+struct x87_env16 {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint16_t ftw;
+    union {
+        struct {
+            uint16_t fip_lo;
+            uint16_t fop:11, :1, fip_hi:4;
+            uint16_t fdp_lo;
+            uint16_t :12, fdp_hi:4;
+        } real;
+        struct {
+            uint16_t fip;
+            uint16_t fcs;
+            uint16_t fdp;
+            uint16_t fds;
+        } prot;
+    } mode;
+};
+
+struct x87_env32 {
+    uint32_t fcw:16, :16;
+    uint32_t fsw:16, :16;
+    uint32_t ftw:16, :16;
+    union {
+        struct {
+            /* some CPUs/FPUs also store the full FIP here */
+            uint32_t fip_lo:16, :16;
+            uint32_t fop:11, :1, fip_hi:16, :4;
+            /* some CPUs/FPUs also store the full FDP here */
+            uint32_t fdp_lo:16, :16;
+            uint32_t :12, fdp_hi:16, :4;
+        } real;
+        struct {
+            uint32_t fip;
+            uint32_t fcs:16, fop:11, :5;
+            uint32_t fdp;
+            uint32_t fds:16, :16;
+        } prot;
+    } mode;
+};
+#endif
+
 typedef union {
     uint64_t mmx;
     uint64_t __attribute__ ((aligned(16))) xmm[2];
@@ -4912,9 +4959,22 @@ x86_emulate(
                     goto done;
                 emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
                 break;
-            case 6: /* fnstenv - TODO */
+            case 6: /* fnstenv */
+                fail_if(!ops->blk);
+                state->blk = blk_fst;
+                /*
+                 * REX is meaningless for this insn by this point - (ab)use
+                 * the field to communicate real vs protected mode to ->blk().
+                 */
+                /*state->*/rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                    op_bytes > 2 ? sizeof(struct x87_env32)
+                                                 : sizeof(struct x87_env16),
+                                    &_regs.eflags,
+                                    state, ctxt)) != X86EMUL_OKAY )
+                    goto done;
                 state->fpu_ctrl = true;
-                goto unimplemented_insn;
+                break;
             case 7: /* fnstcw m2byte */
                 state->fpu_ctrl = true;
             fpu_memdst16:
@@ -5068,9 +5128,24 @@ x86_emulate(
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
             case 4: /* frstor - TODO */
-            case 6: /* fnsave - TODO */
                 state->fpu_ctrl = true;
                 goto unimplemented_insn;
+            case 6: /* fnsave */
+                fail_if(!ops->blk);
+                state->blk = blk_fst;
+                /*
+                 * REX is meaningless for this insn by this point - (ab)use
+                 * the field to communicate real vs protected mode to ->blk().
+                 */
+                /*state->*/rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                    op_bytes > 2 ? sizeof(struct x87_env32) + 80
+                                                 : sizeof(struct x87_env16) + 80,
+                                    &_regs.eflags,
+                                    state, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                state->fpu_ctrl = true;
+                break;
             case 7: /* fnstsw m2byte */
                 state->fpu_ctrl = true;
                 goto fpu_memdst16;
@@ -11550,6 +11625,14 @@ int x86_emul_blk(
     switch ( state->blk )
     {
         bool zf;
+#ifndef X86EMUL_NO_FPU
+        struct {
+            struct x87_env32 env;
+            struct {
+               uint8_t bytes[10];
+            } freg[8];
+        } fpstate;
+#endif
 
         /*
          * Throughout this switch(), memory clobbers are used to compensate
@@ -11579,6 +11662,98 @@ int x86_emul_blk(
             *eflags |= X86_EFLAGS_ZF;
         break;
 
+#ifndef X86EMUL_NO_FPU
+
+    case blk_fst:
+        ASSERT(!data);
+
+        /* Don't chance consuming uninitialized data. */
+        memset(&fpstate, 0, sizeof(fpstate));
+        if ( bytes > sizeof(fpstate.env) )
+            asm ( "fnsave %0" : "+m" (fpstate) );
+        else
+            asm ( "fnstenv %0" : "+m" (fpstate.env) );
+
+        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env): /* 32-bit FNSTENV */
+        case sizeof(fpstate):     /* 32-bit FNSAVE */
+            if ( !state->rex_prefix )
+            {
+                /* Convert 32-bit prot to 32-bit real/vm86 format. */
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                unsigned int fop = fpstate.env.mode.prot.fop;
+
+                memset(&fpstate.env.mode, 0, sizeof(fpstate.env.mode));
+                fpstate.env.mode.real.fip_lo = fip;
+                fpstate.env.mode.real.fip_hi = fip >> 16;
+                fpstate.env.mode.real.fop = fop;
+                fpstate.env.mode.real.fdp_lo = fdp;
+                fpstate.env.mode.real.fdp_hi = fdp >> 16;
+            }
+            memcpy(ptr, &fpstate.env, sizeof(fpstate.env));
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):                        /* 16-bit FNSTENV */
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FNSAVE */
+            if ( state->rex_prefix )
+            {
+                /* Convert 32-bit prot to 16-bit prot format. */
+                struct x87_env16 *env = ptr;
+
+                env->fcw = fpstate.env.fcw;
+                env->fsw = fpstate.env.fsw;
+                env->ftw = fpstate.env.ftw;
+                env->mode.prot.fip = fpstate.env.mode.prot.fip;
+                env->mode.prot.fcs = fpstate.env.mode.prot.fcs;
+                env->mode.prot.fdp = fpstate.env.mode.prot.fdp;
+                env->mode.prot.fds = fpstate.env.mode.prot.fds;
+            }
+            else
+            {
+                /* Convert 32-bit prot to 16-bit real/vm86 format. */
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                struct x87_env16 env = {
+                    .fcw = fpstate.env.fcw,
+                    .fsw = fpstate.env.fsw,
+                    .ftw = fpstate.env.ftw,
+                    .mode.real.fip_lo = fip,
+                    .mode.real.fip_hi = fip >> 16,
+                    .mode.real.fop = fpstate.env.mode.prot.fop,
+                    .mode.real.fdp_lo = fdp,
+                    .mode.real.fdp_hi = fdp >> 16
+                };
+
+                memcpy(ptr, &env, sizeof(env));
+            }
+            if ( bytes == sizeof(struct x87_env16) )
+                ptr = NULL;
+            else
+                ptr += sizeof(struct x87_env16);
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+            memcpy(ptr, fpstate.freg, sizeof(fpstate.freg));
+        break;
+
+#endif /* X86EMUL_NO_FPU */
+
     case blk_movdir:
         switch ( bytes )
         {



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:11:14 2020
Subject: [PATCH v9 8/9] x86emul: support FLDENV and FRSTOR
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <e0acff20-6cdf-b0d6-637f-90d3fa540936@suse.com>
Date: Thu, 14 May 2020 11:11:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

While the Intel SDM claims that FRSTOR itself may raise #MF upon
completion, Intel has confirmed this to be a documentation error which
will be corrected in due course; the behavior is like FLDENV's, and
like the old hard-copy manuals describe it.

Re-arrange a switch() statement's case label order to allow for
fall-through from FLDENV handling to FNSTENV's.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v9: Refine description. Re-base over changes to earlier patch. Add
    comments.
v7: New.

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -2442,6 +2442,27 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing fldenv 8(%edx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        asm volatile ( "fnstenv %0\n\t"
+                       "fninit"
+                       : "=m" (res[2]) :: "memory" );
+        zap_fpsel(&res[2], true);
+        instr[0] = 0xd9; instr[1] = 0x62; instr[2] = 0x08;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fnstenv %0" : "=m" (res[9]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 2, res + 9, 28) ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing 16-bit fnsave (%ecx)...");
     if ( stack_exec && cpu_has_fpu )
     {
@@ -2468,6 +2489,31 @@ int main(int argc, char **argv)
             goto fail;
         printf("okay\n");
     }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing frstor (%edx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t seven = 7;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fnsave %0\n\t"
+                       : "=&m" (res[0]) : "m" (seven) : "memory" );
+        zap_fpsel(&res[0], true);
+        instr[0] = 0xdd; instr[1] = 0x22;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fnsave %0" : "=m" (res[27]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res, res + 27, 108) ||
+             (regs.eip != (unsigned long)&instr[2]) )
+            goto fail;
+        printf("okay\n");
+    }
     else
         printf("skipped\n");
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -857,6 +857,7 @@ struct x86_emulate_state {
         blk_NONE,
         blk_enqcmd,
 #ifndef X86EMUL_NO_FPU
+        blk_fld, /* FLDENV, FRSTOR */
         blk_fst, /* FNSTENV, FNSAVE */
 #endif
         blk_movdir,
@@ -4948,22 +4949,15 @@ x86_emulate(
                 dst.bytes = 4;
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
-            case 4: /* fldenv - TODO */
-                state->fpu_ctrl = true;
-                goto unimplemented_insn;
-            case 5: /* fldcw m2byte */
-                state->fpu_ctrl = true;
-            fpu_memsrc16:
-                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
-                                     2, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
-                break;
+            case 4: /* fldenv */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
             case 6: /* fnstenv */
                 fail_if(!ops->blk);
-                state->blk = blk_fst;
+                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
                 /*
-                 * REX is meaningless for this insn by this point - (ab)use
+                 * REX is meaningless for these insns by this point - (ab)use
                  * the field to communicate real vs protected mode to ->blk().
                  */
                 /*state->*/rex_prefix = in_protmode(ctxt, ops);
@@ -4975,6 +4969,14 @@ x86_emulate(
                     goto done;
                 state->fpu_ctrl = true;
                 break;
+            case 5: /* fldcw m2byte */
+                state->fpu_ctrl = true;
+            fpu_memsrc16:
+                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
+                                     2, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
+                break;
             case 7: /* fnstcw m2byte */
                 state->fpu_ctrl = true;
             fpu_memdst16:
@@ -5127,14 +5129,15 @@ x86_emulate(
                 dst.bytes = 8;
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
-            case 4: /* frstor - TODO */
-                state->fpu_ctrl = true;
-                goto unimplemented_insn;
+            case 4: /* frstor */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
             case 6: /* fnsave */
                 fail_if(!ops->blk);
-                state->blk = blk_fst;
+                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
                 /*
-                 * REX is meaningless for this insn by this point - (ab)use
+                 * REX is meaningless for these insns by this point - (ab)use
                  * the field to communicate real vs protected mode to ->blk().
                  */
                 /*state->*/rex_prefix = in_protmode(ctxt, ops);
@@ -11664,6 +11667,92 @@ int x86_emul_blk(
 
 #ifndef X86EMUL_NO_FPU
 
+    case blk_fld:
+        ASSERT(!data);
+
+        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env): /* 32-bit FLDENV */
+        case sizeof(fpstate):     /* 32-bit FRSTOR */
+            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
+            if ( !state->rex_prefix )
+            {
+                /* Convert 32-bit real/vm86 to 32-bit prot format. */
+                unsigned int fip = fpstate.env.mode.real.fip_lo +
+                                   (fpstate.env.mode.real.fip_hi << 16);
+                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
+                                   (fpstate.env.mode.real.fdp_hi << 16);
+                unsigned int fop = fpstate.env.mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):                        /* 16-bit FLDENV */
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FRSTOR */
+        {
+            const struct x87_env16 *env = ptr;
+
+            fpstate.env.fcw = env->fcw;
+            fpstate.env.fsw = env->fsw;
+            fpstate.env.ftw = env->ftw;
+
+            if ( state->rex_prefix )
+            {
+                /* Convert 16-bit prot to 32-bit prot format. */
+                fpstate.env.mode.prot.fip = env->mode.prot.fip;
+                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
+                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
+                fpstate.env.mode.prot.fds = env->mode.prot.fds;
+                fpstate.env.mode.prot.fop = 0; /* unknown */
+            }
+            else
+            {
+                /* Convert 16-bit real/vm86 to 32-bit prot format. */
+                unsigned int fip = env->mode.real.fip_lo +
+                                   (env->mode.real.fip_hi << 16);
+                unsigned int fdp = env->mode.real.fdp_lo +
+                                   (env->mode.real.fdp_hi << 16);
+                unsigned int fop = env->mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(*env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(*env);
+            break;
+        }
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+        {
+            memcpy(fpstate.freg, ptr, sizeof(fpstate.freg));
+            asm volatile ( "frstor %0" :: "m" (fpstate) );
+        }
+        else
+            asm volatile ( "fldenv %0" :: "m" (fpstate.env) );
+        break;
+
     case blk_fst:
         ASSERT(!data);
 



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:11:39 2020
Subject: [PATCH v9 9/9] x86emul: support FXSAVE/FXRSTOR
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <6747df64-dce8-8a24-0986-81ddfe44b285@suse.com>
Date: Thu, 14 May 2020 11:11:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

Note that FPU selector handling as well as MXCSR mask saving do not,
for now, honor differences between the host and guest visible
featuresets.

While on Intel operation of the insns with CR4.OSFXSR=0 is
implementation dependent, use the easiest solution there: simply don't
look at the bit in the first place. For AMD and alike the behavior is
well defined, so it gets handled together with FFXSR.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v9: Change a few field types in struct x86_fxsr. Leave reserved fields
    either entirely unnamed, or named "rsvd". Set state->fpu_ctrl. Avoid
    memory clobbers. Add memset() to FXSAVE logic. Add comments.
v8: Respect EFER.FFXSE and CR4.OSFXSR. Correct wrong X86EMUL_NO_*
    dependencies. Reduce #ifdef-ary.
v7: New.

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -767,6 +767,12 @@ static void zap_fpsel(unsigned int *env,
     }
 }
 
+static void zap_xfpsel(unsigned int *env)
+{
+    env[3] &= ~0xffff;
+    env[5] &= ~0xffff;
+}
+
 #ifdef __x86_64__
 # define STKVAL_DISP 64
 static const struct {
@@ -2517,6 +2523,91 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing fxsave 4(%ecx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        const uint16_t nine = 9;
+
+        memset(res + 0x80, 0xcc, 0x400);
+        if ( cpu_has_sse2 )
+            asm volatile ( "pcmpeqd %xmm7, %xmm7\n\t"
+                           "pxor %xmm6, %xmm6\n\t"
+                           "psubw %xmm7, %xmm6" );
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fxsave %0"
+                       : "=m" (res[0x100]) : "m" (nine) : "memory" );
+        zap_xfpsel(&res[0x100]);
+        instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x41; instr[3] = 0x04;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)(res + 0x7f);
+        memset(res + 0x100 + 0x74, 0x33, 0x30);
+        memset(res + 0x80 + 0x74, 0x33, 0x30);
+        rc = x86_emulate(&ctxt, &emulops);
+        zap_xfpsel(&res[0x80]);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x80, res + 0x100, 0x200) ||
+             (regs.eip != (unsigned long)&instr[4]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing fxrstor -4(%ecx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        const uint16_t eleven = 11;
+
+        memset(res + 0x80, 0xcc, 0x400);
+        asm volatile ( "fxsave %0" : "=m" (res[0x80]) :: "memory" );
+        zap_xfpsel(&res[0x80]);
+        if ( cpu_has_sse2 )
+            asm volatile ( "pxor %xmm7, %xmm6\n\t"
+                           "pxor %xmm7, %xmm3\n\t"
+                           "pxor %xmm7, %xmm0\n\t"
+                           "pxor %xmm7, %xmm7" );
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %0\n\t"
+                       :: "m" (eleven) );
+        instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x49; instr[3] = 0xfc;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)(res + 0x81);
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fxsave %0" : "=m" (res[0x100]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x100, res + 0x80, 0x200) ||
+             (regs.eip != (unsigned long)&instr[4]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+#ifdef __x86_64__
+    printf("%-40s", "Testing fxsaveq 8(%edx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        memset(res + 0x80, 0xcc, 0x400);
+        asm volatile ( "fxsaveq %0" : "=m" (res[0x100]) :: "memory" );
+        instr[0] = 0x48; instr[1] = 0x0f; instr[2] = 0xae; instr[3] = 0x42; instr[4] = 0x08;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)(res + 0x7e);
+        memset(res + 0x100 + 0x74, 0x33, 0x30);
+        memset(res + 0x80 + 0x74, 0x33, 0x30);
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x80, res + 0x100, 0x200) ||
+             (regs.eip != (unsigned long)&instr[5]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+#endif
+
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
     if ( stack_exec && cpu_has_mmx )
     {
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -30,6 +30,13 @@ struct cpuid_policy cp;
 static char fpu_save_area[4096] __attribute__((__aligned__((64))));
 static bool use_xsave;
 
+/*
+ * Re-use the area above also as scratch space for the emulator itself.
+ * (When debugging the emulator, care needs to be taken when inserting
+ * printf() or alike function calls into regions using this.)
+ */
+#define FXSAVE_AREA ((struct x86_fxsr *)fpu_save_area)
+
 void emul_save_fpu_state(void)
 {
     if ( use_xsave )
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -860,6 +860,11 @@ struct x86_emulate_state {
         blk_fld, /* FLDENV, FRSTOR */
         blk_fst, /* FNSTENV, FNSAVE */
 #endif
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        blk_fxrstor,
+        blk_fxsave,
+#endif
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -953,6 +958,29 @@ typedef union {
     uint32_t data32[16];
 } mmval_t;
 
+struct x86_fxsr {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint8_t ftw, :8;
+    uint16_t fop;
+    union {
+        struct {
+            uint32_t offs;
+            uint16_t sel, :16;
+        };
+        uint64_t addr;
+    } fip, fdp;
+    uint32_t mxcsr;
+    uint32_t mxcsr_mask;
+    struct {
+        uint8_t data[10];
+        uint16_t :16, :16, :16;
+    } fpreg[8];
+    uint64_t __attribute__ ((aligned(16))) xmm[16][2];
+    uint64_t rsvd[6];
+    uint64_t avl[6];
+};
+
 /*
  * While proper alignment gets specified above, this doesn't get honored by
  * the compiler for automatic variables. Use this helper to instantiate a
@@ -1910,6 +1938,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_cmov()        (ctxt->cpuid->basic.cmov)
 #define vcpu_has_clflush()     (ctxt->cpuid->basic.clflush)
 #define vcpu_has_mmx()         (ctxt->cpuid->basic.mmx)
+#define vcpu_has_fxsr()        (ctxt->cpuid->basic.fxsr)
 #define vcpu_has_sse()         (ctxt->cpuid->basic.sse)
 #define vcpu_has_sse2()        (ctxt->cpuid->basic.sse2)
 #define vcpu_has_sse3()        (ctxt->cpuid->basic.sse3)
@@ -8139,6 +8168,49 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        case 0: /* fxsave */
+        case 1: /* fxrstor */
+            generate_exception_if(vex.pfx, EXC_UD);
+            vcpu_must_have(fxsr);
+            generate_exception_if(ea.type != OP_MEM, EXC_UD);
+            generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16,
+                                              ctxt, ops),
+                                  EXC_GP, 0);
+            fail_if(!ops->blk);
+            op_bytes =
+#ifdef __x86_64__
+                !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) :
+#endif
+                sizeof(struct x86_fxsr);
+            if ( amd_like(ctxt) )
+            {
+                /* Assume "normal" operation in case of missing hooks. */
+                if ( !ops->read_cr ||
+                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                    cr4 = X86_CR4_OSFXSR;
+                if ( !ops->read_msr ||
+                     ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
+                    msr_val = 0;
+                if ( !(cr4 & X86_CR4_OSFXSR) ||
+                     (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
+                    op_bytes = offsetof(struct x86_fxsr, xmm[0]);
+            }
+            /*
+             * This could also be X86EMUL_FPU_mmx, but it shouldn't be
+             * X86EMUL_FPU_xmm, as we don't want CR4.OSFXSR checked.
+             */
+            get_fpu(X86EMUL_FPU_fpu);
+            state->fpu_ctrl = true;
+            state->blk = modrm_reg & 1 ? blk_fxrstor : blk_fxsave;
+            if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                sizeof(struct x86_fxsr), &_regs.eflags,
+                                state, ctxt)) != X86EMUL_OKAY )
+                goto done;
+            break;
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
 #ifndef X86EMUL_NO_SIMD
         case 2: /* ldmxcsr */
             generate_exception_if(vex.pfx, EXC_UD);
@@ -11625,6 +11697,8 @@ int x86_emul_blk(
     struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt)
 {
+    int rc = X86EMUL_OKAY;
+
     switch ( state->blk )
     {
         bool zf;
@@ -11843,6 +11917,86 @@ int x86_emul_blk(
 
 #endif /* X86EMUL_NO_FPU */
 
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+
+    case blk_fxrstor:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(state->op_bytes <= bytes);
+
+        if ( state->op_bytes < sizeof(*fxsr) )
+        {
+            if ( state->rex_prefix & REX_W )
+            {
+                /*
+                 * The only way to force fxsaveq on a wide range of gas
+                 * versions. On older versions the rex64 prefix works only if
+                 * we force an addressing mode that doesn't require extended
+                 * registers.
+                 */
+                asm volatile ( ".byte 0x48; fxsave (%1)"
+                               : "=m" (*fxsr) : "R" (fxsr) );
+            }
+            else
+                asm volatile ( "fxsave %0" : "=m" (*fxsr) );
+        }
+
+        /*
+         * Don't chance the reserved or available ranges to contain any
+         * data FXRSTOR may actually consume in some way: Copy only the
+         * defined portion, and zero the rest.
+         */
+        memcpy(fxsr, ptr, min(state->op_bytes,
+                              (unsigned int)offsetof(struct x86_fxsr, rsvd)));
+        memset(fxsr->rsvd, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, rsvd));
+
+        generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, EXC_GP, 0);
+
+        if ( state->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraints are this way. */
+            asm volatile ( ".byte 0x48; fxrstor (%1)"
+                           :: "m" (*fxsr), "R" (fxsr) );
+        }
+        else
+            asm volatile ( "fxrstor %0" :: "m" (*fxsr) );
+        break;
+    }
+
+    case blk_fxsave:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(state->op_bytes <= bytes);
+
+        if ( state->op_bytes < sizeof(*fxsr) )
+            /* Don't chance consuming uninitialized data. */
+            memset(fxsr, 0, state->op_bytes);
+        else
+            fxsr = ptr;
+
+        if ( state->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraints are this way. */
+            asm volatile ( ".byte 0x48; fxsave (%1)"
+                           : "=m" (*fxsr) : "R" (fxsr) );
+        }
+        else
+            asm volatile ( "fxsave %0" : "=m" (*fxsr) );
+
+        if ( fxsr != ptr ) /* i.e. state->op_bytes < sizeof(*fxsr) */
+            memcpy(ptr, fxsr, state->op_bytes);
+        break;
+    }
+
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
     case blk_movdir:
         switch ( bytes )
         {
@@ -11896,7 +12050,8 @@ int x86_emul_blk(
         return X86EMUL_UNHANDLEABLE;
     }
 
-    return X86EMUL_OKAY;
+ done:
+    return rc;
 }
 
 static void __init __maybe_unused build_assertions(void)
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -42,6 +42,8 @@
     }                                                      \
 })
 
+#define FXSAVE_AREA current->arch.fpu_ctxt
+
 #ifndef CONFIG_HVM
 # define X86EMUL_NO_FPU
 # define X86EMUL_NO_MMX



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:12:56 2020
Subject: Re: [PATCH v9 0/9] x86emul: further work
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Message-ID: <75b9068b-4bbb-221d-6737-dc4a7867d580@suse.com>
Date: Thu, 14 May 2020 11:12:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e2c2bb6c-b089-de09-6388-50ec837310d7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

On 14.05.2020 11:03, Jan Beulich wrote:
> 1: address x86_insn_is_mem_{access,write}() omissions 
> 2: disable FPU/MMX/SIMD insn emulation when !HVM
> 3: support MOVDIR{I,64B} insns
> 4: support ENQCMD insn
> 5: support SERIALIZE
> 6: support X{SUS,RES}LDTRK
> 7: support FNSTENV and FNSAVE
> 8: support FLDENV and FRSTOR
> 9: support FXSAVE/FXRSTOR
> 
> Main changes from v8 are the new patch 1 and dropped patches
> from the end of the series. For other changes see the individual
> patches.

Oh, and I should have mentioned that parts here depend on
"x86/gen-cpuid: Distinguish default vs max in feature annotations",
which looks to be ready to go in, but hasn't gone in yet.

Jan



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:24:37 2020
Subject: Re: [PATCH v2 3/3] xen/sched: fix latent races accessing
 vcpu->dirty_cpu
To: Jürgen Groß <jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-4-jgross@suse.com>
 <eaa891af-697d-bb30-8e34-470102a98561@suse.com>
 <35440630-c065-8d3f-94d2-e01c6a5df2a2@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b5173d4a-437a-fe21-be4b-842dad960f81@suse.com>
Date: Thu, 14 May 2020 11:24:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <35440630-c065-8d3f-94d2-e01c6a5df2a2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Roger Pau Monné <roger.pau@citrix.com>

On 14.05.2020 10:50, Jürgen Groß wrote:
> On 14.05.20 09:10, Jan Beulich wrote:
>> On 11.05.2020 13:28, Juergen Gross wrote:
>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>     void sync_vcpu_execstate(struct vcpu *v)
>>>   {
>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>> +
>>> +    if ( dirty_cpu == smp_processor_id() )
>>>           sync_local_execstate();
>>> -    else if ( vcpu_cpu_dirty(v) )
>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>       {
>>>           /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>       }
>>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
>>
>> Repeating my v1.1 comments:
>>
>> "However, having stared at it for a while now - is this race
>>   free? I can see this being fine in the (initial) case of
>>   dirty_cpu == smp_processor_id(), but if this is for a foreign
>>   CPU, can't the vCPU have gone back to that same CPU again in
>>   the meantime?"
>>
>> and later
>>
>> "There is a time window from late in flush_mask() to the assertion
>>   you add. All sorts of things can happen during this window on
>>   other CPUs. IOW what guarantees the vCPU not getting unpaused or
>>   its affinity getting changed yet another time?"
>>
>> You did reply that this race can be eliminated by what is now
>> patch 2, but I have to admit I don't see why that would be.
>> Hence at the very least I'd expect justification in either the
>> description or a code comment as to why there's no race left
>> (and also no race to be expected to be re-introduced by code
>> changes elsewhere - very unlikely races are, by their nature,
>> unlikely to be hit during code development and the associated
>> testing, hence I'd like there to be sufficiently close to a
>> guarantee here).
>>
>> My reservations here may in part be due to not following the
>> reasoning for patch 2, which therefore I'll have to rely on the
>> scheduler maintainers to judge on.
> 
> sync_vcpu_execstate() isn't called for a running or runnable vcpu any
> longer. I can add an ASSERT() and a comment explaining it if you like
> that better.

This would help (hopefully people adding new uses of the function
would run into this assertion/comment), but for uses like the ones
in mapcache_current_vcpu() or do_tasklet_work() it looks pretty
hard to prove that they can't happen for a runnable vCPU.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 09:29:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:29:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZABZ-0004De-4H; Thu, 14 May 2020 09:29:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZABX-0004DT-B6
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:29:47 +0000
X-Inumbo-ID: 733206d0-95c5-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 733206d0-95c5-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 09:29:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 18F42AFA7;
 Thu, 14 May 2020 09:29:48 +0000 (UTC)
Subject: Re: [PATCH v2 3/3] xen/sched: fix latent races accessing
 vcpu->dirty_cpu
To: Jan Beulich <jbeulich@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-4-jgross@suse.com>
 <eaa891af-697d-bb30-8e34-470102a98561@suse.com>
 <35440630-c065-8d3f-94d2-e01c6a5df2a2@suse.com>
 <b5173d4a-437a-fe21-be4b-842dad960f81@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8c37fd91-2d97-e30a-700c-141c86c5745a@suse.com>
Date: Thu, 14 May 2020 11:29:44 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b5173d4a-437a-fe21-be4b-842dad960f81@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 11:24, Jan Beulich wrote:
> On 14.05.2020 10:50, Jürgen Groß wrote:
>> On 14.05.20 09:10, Jan Beulich wrote:
>>> On 11.05.2020 13:28, Juergen Gross wrote:
>>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>>      void sync_vcpu_execstate(struct vcpu *v)
>>>>    {
>>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>>> +
>>>> +    if ( dirty_cpu == smp_processor_id() )
>>>>            sync_local_execstate();
>>>> -    else if ( vcpu_cpu_dirty(v) )
>>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>>        {
>>>>            /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>>        }
>>>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>>>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
>>>
>>> Repeating my v1.1 comments:
>>>
>>> "However, having stared at it for a while now - is this race
>>>    free? I can see this being fine in the (initial) case of
>>>    dirty_cpu == smp_processor_id(), but if this is for a foreign
>>>    CPU, can't the vCPU have gone back to that same CPU again in
>>>    the meantime?"
>>>
>>> and later
>>>
>>> "There is a time window from late in flush_mask() to the assertion
>>>    you add. All sorts of things can happen during this window on
>>>    other CPUs. IOW what guarantees the vCPU not getting unpaused or
>>>    its affinity getting changed yet another time?"
>>>
>>> You did reply that this race can be eliminated by what is now
>>> patch 2, but I have to admit I don't see why that would be.
>>> Hence at the very least I'd expect justification in either the
>>> description or a code comment as to why there's no race left
>>> (and also no race to be expected to be re-introduced by code
>>> changes elsewhere - very unlikely races are, by their nature,
>>> unlikely to be hit during code development and the associated
>>> testing, hence I'd like there to be sufficiently close to a
>>> guarantee here).
>>>
>>> My reservations here may in part be due to not following the
>>> reasoning for patch 2, which therefore I'll have to rely on the
>>> scheduler maintainers to judge on.
>>
>> sync_vcpu_execstate() isn't called for a running or runnable vcpu any
>> longer. I can add an ASSERT() and a comment explaining it if you like
>> that better.
> 
> This would help (hopefully people adding new uses of the function
> would run into this assertion/comment), but for uses like the ones
> in mapcache_current_vcpu() or do_tasklet_work() it looks pretty
> hard to prove that they can't happen for a runnable vCPU.

Those call sync_local_execstate(), not sync_vcpu_execstate().


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 09:32:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZADu-00055o-I8; Thu, 14 May 2020 09:32:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZADt-00055h-ML
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:32:13 +0000
X-Inumbo-ID: ca8f3056-95c5-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca8f3056-95c5-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 09:32:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C88BBAFAC;
 Thu, 14 May 2020 09:32:14 +0000 (UTC)
Subject: Re: [PATCH v8 08/12] xen: add /buildinfo/config entry to hypervisor
 filesystem
To: Juergen Gross <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-9-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2e963c64-a2f0-dde7-ef3d-fd6e91c6e520@suse.com>
Date: Thu, 14 May 2020 11:32:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508153421.24525-9-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 17:34, Juergen Gross wrote:
> Add the /buildinfo/config entry to the hypervisor filesystem. This
> entry contains the .config file used to build the hypervisor.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with a remark and one further adjustment:

> @@ -73,3 +74,14 @@ obj-$(CONFIG_UBSAN) += ubsan/
>  
>  obj-$(CONFIG_NEEDS_LIBELF) += libelf/
>  obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
> +
> +config.gz: ../.config
> +	gzip -c $< >$@
> +
> +config_data.o: config.gz
> +
> +config_data.S: $(XEN_ROOT)/xen/tools/binfile
> +	$(XEN_ROOT)/xen/tools/binfile $@ config.gz xen_config_data

This will want adapting to the changed build infrastructure, such
that in default (non-verbose, non-silent) mode a line gets output
to stdout. But I'd be fine with this getting done subsequently.

> --- a/xen/include/xen/kernel.h
> +++ b/xen/include/xen/kernel.h
> @@ -100,5 +100,8 @@ extern enum system_state {
>  
>  bool_t is_active_kernel_text(unsigned long addr);
>  
> +extern const char xen_config_data;

Surely you meant to add [] here, and then possibly omit the & on
the line using the symbol?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 09:50:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:50:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZAVa-0007NS-9b; Thu, 14 May 2020 09:50:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZAVZ-0007NK-Cw
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:50:29 +0000
X-Inumbo-ID: 57515ada-95c8-11ea-a464-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57515ada-95c8-11ea-a464-12813bfff9fa;
 Thu, 14 May 2020 09:50:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6D533AF0D;
 Thu, 14 May 2020 09:50:29 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
 <db277779-5b1e-a2aa-3948-9e6dd8e8bef0@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <23938228-e947-fe36-8b19-0e89886db9ac@suse.com>
Date: Thu, 14 May 2020 11:50:25 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <db277779-5b1e-a2aa-3948-9e6dd8e8bef0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 09:59, Jan Beulich wrote:
> On 08.05.2020 17:34, Juergen Gross wrote:
>> --- a/xen/common/Makefile
>> +++ b/xen/common/Makefile
>> @@ -10,6 +10,7 @@ obj-y += domain.o
>>   obj-y += event_2l.o
>>   obj-y += event_channel.o
>>   obj-y += event_fifo.o
>> +obj-$(CONFIG_HYPFS) += hypfs.o
>>   obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
>>   obj-$(CONFIG_GRANT_TABLE) += grant_table.o
>>   obj-y += guestcopy.o
> 
> I guess I could/should have noticed this earlier - this isn't the
> correct insertion point, considering that we try to keep these
> alphabetically sorted.

Oh, the placement seems to be a leftover from an early version in
which the file was named fs.c or so.

> 
>> +static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
>> +                                               const char *path)
>> +{
>> +    const char *end;
>> +    struct hypfs_entry *entry;
>> +    unsigned int name_len;
>> +
>> +    if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
>> +        return NULL;
>> +
>> +    if ( !*path )
>> +        return &dir->e;
>> +
>> +    end = strchr(path, '/');
>> +    if ( !end )
>> +        end = strchr(path, '\0');
>> +    name_len = end - path;
>> +
>> +    list_for_each_entry ( entry, &dir->dirlist, list )
>> +    {
>> +        int cmp = strncmp(path, entry->name, name_len);
>> +        struct hypfs_entry_dir *d = container_of(entry,
>> +                                                 struct hypfs_entry_dir, e);
>> +
>> +        if ( cmp < 0 )
>> +            return NULL;
>> +        if ( !cmp && strlen(entry->name) == name_len )
>> +            return *end ? hypfs_get_entry_rel(d, end + 1) : entry;
> 
> The compiler may translate this into a tail call, but shouldn't
> we be worried about the nesting depth in case it doesn't?
> Perhaps this doesn't need addressing here, but rather by limiting
> the depth at which new entries can be created?

Hmm, it should be possible to solve this without recursion. I'll give
it a try.

> 
>> +int hypfs_read_dir(const struct hypfs_entry *entry,
>> +                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
>> +{
>> +    const struct hypfs_entry_dir *d;
>> +    const struct hypfs_entry *e;
>> +    unsigned int size = entry->size;
>> +
>> +    d = container_of(entry, const struct hypfs_entry_dir, e);
>> +
>> +    list_for_each_entry ( e, &d->dirlist, list )
> 
> This function, in particular because of being non-static, makes
> me wonder how, with add_entry() taking a lock, it can be safe
> without any locking. Initially I thought the justification might
> be because all adding of entries is an init-time-only thing, but
> various involved functions aren't marked __init (and it is at
> least not implausible that down the road we might see new
> entries getting added during certain hotplug operations).
> 
> I do realize that do_hypfs_op() takes the necessary read lock,
> but then you're still building on the assumption that the
> function is reachable through only that code path, despite
> being non-static. An ASSERT() here would be the minimum I guess,
> but with read locks now being recursive I don't see why you
> couldn't read-lock here again.

Right, will add the read-lock.

> 
> The same goes for other non-static functions, albeit things may
> become more interesting for functions living on the
> XEN_HYPFS_OP_write_contents path (because write locks aren't

Adding an ASSERT() in this regard should be rather easy.

> recursive [yet]). I notice even hypfs_get_entry() is non-static,
> albeit I can't see why that is (perhaps a later patch needs it).

No need for non-static, will change to static.

> 
>> +int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
>> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
>> +{
>> +    char *buf;
>> +    int ret;
>> +
>> +    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
>> +         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
>> +        return -EDOM;
> 
> hypfs_write() checks ulen against max_size, but the function
> being non-static makes me once again wonder whether at the very
> least an ASSERT() wouldn't be wanted here.

Okay.

> 
>> +    buf = xmalloc_array(char, ulen);
>> +    if ( !buf )
>> +        return -ENOMEM;
>> +
>> +    ret = -EFAULT;
>> +    if ( copy_from_guest(buf, uaddr, ulen) )
>> +        goto out;
>> +
>> +    ret = -EINVAL;
>> +    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
>> +         memchr(buf, 0, ulen) != (buf + ulen - 1) )
>> +        goto out;
> 
> How does this check play with gzip-ed strings?

Hmm, right, will add a check for XEN_HYPFS_ENC_PLAIN.

> 
>> +long do_hypfs_op(unsigned int cmd,
>> +                 XEN_GUEST_HANDLE_PARAM(const_char) arg1, unsigned long arg2,
>> +                 XEN_GUEST_HANDLE_PARAM(void) arg3, unsigned long arg4)
>> +{
>> +    int ret;
>> +    struct hypfs_entry *entry;
>> +    static char path[XEN_HYPFS_MAX_PATHLEN];
>> +
>> +    if ( xsm_hypfs_op(XSM_PRIV) )
>> +        return -EPERM;
>> +
>> +    if ( cmd == XEN_HYPFS_OP_get_version )
>> +        return XEN_HYPFS_VERSION;
> 
> Check that all args are zero/null for this sub-op, to allow future
> extension?

Yes, good idea.

> 
>> +#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
>> +    struct hypfs_entry_leaf __read_mostly var = {        \
>> +        .e.type = typ,                                   \
>> +        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
>> +        .e.name = nam,                                   \
>> +        .e.size = sizeof(contvar),                       \
>> +        .e.max_size = wr ? sizeof(contvar) : 0,          \
>> +        .e.read = hypfs_read_leaf,                       \
>> +        .e.write = wr,                                   \
>> +        .content = &contvar,                             \
>> +    }
> 
> At the example of this, some of the macros look like they want
> parentheses added around uses of some of their parameters.

Hmm, which ones? As I've understood from previous patch reviews by
you, you only want parameters put in parentheses where that is
really needed.

- var is a plain variable, so no parentheses
- typ _should_ be a XEN_HYPFS_TYPE_* define, so probably no parentheses
   (its usage in the macro doesn't call for parentheses anyway)
- nam might be a candidate, but I can't come up with a reason to put it
   in parentheses here
- contvar has to be a variable (otherwise sizeof(contvar) wouldn't
   work), so no parentheses
- wr is a function pointer or NULL, so no parentheses


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 09:52:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:52:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZAXk-0007a6-Ms; Thu, 14 May 2020 09:52:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZAXj-0007a0-95
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:52:43 +0000
X-Inumbo-ID: a72e4f5e-95c8-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a72e4f5e-95c8-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 09:52:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EF2B2AF10;
 Thu, 14 May 2020 09:52:43 +0000 (UTC)
Subject: Re: [PATCH v8 08/12] xen: add /buildinfo/config entry to hypervisor
 filesystem
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-9-jgross@suse.com>
 <2e963c64-a2f0-dde7-ef3d-fd6e91c6e520@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d77dc23c-3408-8956-af62-b30ed83a9dfd@suse.com>
Date: Thu, 14 May 2020 11:52:40 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2e963c64-a2f0-dde7-ef3d-fd6e91c6e520@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 11:32, Jan Beulich wrote:
> On 08.05.2020 17:34, Juergen Gross wrote:
>> Add the /buildinfo/config entry to the hypervisor filesystem. This
>> entry contains the .config file used to build the hypervisor.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with a remark and one further adjustment:
> 
>> @@ -73,3 +74,14 @@ obj-$(CONFIG_UBSAN) += ubsan/
>>   
>>   obj-$(CONFIG_NEEDS_LIBELF) += libelf/
>>   obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
>> +
>> +config.gz: ../.config
>> +	gzip -c $< >$@
>> +
>> +config_data.o: config.gz
>> +
>> +config_data.S: $(XEN_ROOT)/xen/tools/binfile
>> +	$(XEN_ROOT)/xen/tools/binfile $@ config.gz xen_config_data
> 
> This will want adapting to the changed build infrastructure, such
> that in default (non-verbose, non-silent) mode a line gets output
> to stdout. But I'd be fine with this getting done subsequently.
> 
>> --- a/xen/include/xen/kernel.h
>> +++ b/xen/include/xen/kernel.h
>> @@ -100,5 +100,8 @@ extern enum system_state {
>>   
>>   bool_t is_active_kernel_text(unsigned long addr);
>>   
>> +extern const char xen_config_data;
> 
> Surely you meant to add [] here, and then possibly omit the & on
> the line using the symbol?

Of course. ;-p


Juergen



From xen-devel-bounces@lists.xenproject.org Thu May 14 09:55:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZAae-0007oh-5P; Thu, 14 May 2020 09:55:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZAac-0007oV-Pv
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:55:42 +0000
X-Inumbo-ID: 125b135c-95c9-11ea-a464-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 125b135c-95c9-11ea-a464-12813bfff9fa;
 Thu, 14 May 2020 09:55:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 435C7AF8A;
 Thu, 14 May 2020 09:55:43 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
To: Juergen Gross <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <479b4738-2127-998a-2d8e-c7a9af8ff0a3@suse.com>
Date: Thu, 14 May 2020 11:55:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508153421.24525-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 17:34, Juergen Gross wrote:
> --- /dev/null
> +++ b/xen/include/xen/hypfs.h
> @@ -0,0 +1,122 @@
> +#ifndef __XEN_HYPFS_H__
> +#define __XEN_HYPFS_H__
> +
> +#ifdef CONFIG_HYPFS
> +#include <xen/list.h>
> +#include <xen/string.h>
> +#include <public/hypfs.h>
> +
> +struct hypfs_entry_leaf;
> +
> +struct hypfs_entry {
> +    unsigned short type;
> +    unsigned short encoding;
> +    unsigned int size;
> +    unsigned int max_size;

Btw with these, ...

> +    const char *name;
> +    struct list_head list;
> +    int (*read)(const struct hypfs_entry *entry,
> +                XEN_GUEST_HANDLE_PARAM(void) uaddr);
> +    int (*write)(struct hypfs_entry_leaf *leaf,
> +                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);

... why unsigned long here (noticed while looking at patch 9)?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 09:58:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 09:58:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZAdg-00080m-KZ; Thu, 14 May 2020 09:58:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZAdf-00080A-Iz
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 09:58:51 +0000
X-Inumbo-ID: 82e089e0-95c9-11ea-a464-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 82e089e0-95c9-11ea-a464-12813bfff9fa;
 Thu, 14 May 2020 09:58:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4C759AEB9;
 Thu, 14 May 2020 09:58:51 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
 <479b4738-2127-998a-2d8e-c7a9af8ff0a3@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <8d0b5a5c-efee-1189-40a2-e70ba60cedbb@suse.com>
Date: Thu, 14 May 2020 11:58:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <479b4738-2127-998a-2d8e-c7a9af8ff0a3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 11:55, Jan Beulich wrote:
> On 08.05.2020 17:34, Juergen Gross wrote:
>> --- /dev/null
>> +++ b/xen/include/xen/hypfs.h
>> @@ -0,0 +1,122 @@
>> +#ifndef __XEN_HYPFS_H__
>> +#define __XEN_HYPFS_H__
>> +
>> +#ifdef CONFIG_HYPFS
>> +#include <xen/list.h>
>> +#include <xen/string.h>
>> +#include <public/hypfs.h>
>> +
>> +struct hypfs_entry_leaf;
>> +
>> +struct hypfs_entry {
>> +    unsigned short type;
>> +    unsigned short encoding;
>> +    unsigned int size;
>> +    unsigned int max_size;
> 
> Btw with these, ...
> 
>> +    const char *name;
>> +    struct list_head list;
>> +    int (*read)(const struct hypfs_entry *entry,
>> +                XEN_GUEST_HANDLE_PARAM(void) uaddr);
>> +    int (*write)(struct hypfs_entry_leaf *leaf,
>> +                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen);
> 
> ... why unsigned long here (noticed while looking at patch 9)?

Indeed, will switch to unsigned int.


Juergen



From xen-devel-bounces@lists.xenproject.org Thu May 14 10:01:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:01:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZAgN-0000Pr-2R; Thu, 14 May 2020 10:01:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SZsc=64=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jZAgL-0000Pm-SZ
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:01:37 +0000
X-Inumbo-ID: e58d6090-95c9-11ea-a464-12813bfff9fa
Received: from mail-wm1-f68.google.com (unknown [209.85.128.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e58d6090-95c9-11ea-a464-12813bfff9fa;
 Thu, 14 May 2020 10:01:36 +0000 (UTC)
Received: by mail-wm1-f68.google.com with SMTP id f13so1162382wmc.5
 for <xen-devel@lists.xenproject.org>; Thu, 14 May 2020 03:01:36 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=hpaBJ8k/wS/3dOaZzQwwuKmfoM1Lz+L6HOA3llNZp6g=;
 b=jq0JENzvFyc29P8RiD08BGdKDX2LW5zOc0nSV/IbgfwvsT87+8DChxlZuY2K8KzH+0
 JN5RNQlO/z4Me9cew/CBYOVlLy8Ayl6NjJvMuHzoxsE93oYZGSPxIaAt52ZBZMvpubX/
 J0spnq75s3DT94OgbEaE4+B6iOPL5Pc51KyxjEM7cR2vapZaAMs9fzG1c0cbmgcI/bao
 eNtvoTQq5Xr//JlXhpfBWaM5GTurYoMY+pIqqxsmIfLUsd5rhFkRqVjlFPAS/aNa7e5z
 VTT/CgznryTU1Vgt0Z6Q489XHB+RFPcuhDz7RhzHEdSKdXKNjGNhjk1otwmPDbaDMEyx
 bFJw==
X-Gm-Message-State: AOAM531XuLyc9mUL8mVQwFIzxtSqNvKieTW+JEw5seVV0mQoCST4OAjA
 pEOjPB9Ng5WBjNZ5qQMHml0=
X-Google-Smtp-Source: ABdhPJzj6sKBOsEL8o563s+cwBODl1NfFJXXI3GJ7tYjFnrfXqfA/+yv6JK2vRWqP74c741h8PSBmA==
X-Received: by 2002:a1c:7d43:: with SMTP id y64mr9673774wmc.46.1589450495718; 
 Thu, 14 May 2020 03:01:35 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id w15sm14050208wmi.35.2020.05.14.03.01.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 14 May 2020 03:01:35 -0700 (PDT)
Date: Thu, 14 May 2020 10:01:33 +0000
From: Wei Liu <wl@xen.org>
To: Hongyan Xia <hx242@xen.org>
Subject: Re: [PATCH] domain_page: handle NULL within unmap_domain_page() itself
Message-ID: <20200514100133.ne3ed6laazrta3xa@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <a3ddf0c755227a3c742f6b93783c576135a86874.1589384602.git.hongyxia@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <a3ddf0c755227a3c742f6b93783c576135a86874.1589384602.git.hongyxia@amazon.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 13, 2020 at 04:43:33PM +0100, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> The macro version UNMAP_DOMAIN_PAGE() does both NULL checking and
> variable clearing. Move NULL checking into the function itself so that
> the semantics are consistent with other similar constructs like XFREE().
> This also eases the use of unmap_domain_page() in error handling paths,
> where we only care about NULL checking but not about variable clearing.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Reviewed-by: Wei Liu <wl@xen.org>
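The pattern under review — moving the NULL check into the function itself so the macro is left with only the variable clearing, mirroring the XFREE() idiom — can be sketched as follows. The bodies are illustrative stand-ins only; the real unmap_domain_page() tears down a transient mapping rather than doing nothing:

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch: NULL handling lives inside the function itself, so error
 * paths can call it unconditionally, just as with xfree().
 */
static void unmap_domain_page(const void *ptr)
{
    if ( !ptr )
        return;
    /* ... the real implementation would undo the mapping here ... */
}

/* The macro keeps only the variable-clearing part. */
#define UNMAP_DOMAIN_PAGE(p) do { \
    unmap_domain_page(p);         \
    (p) = NULL;                   \
} while ( 0 )
```

With this split, callers that only need the NULL-safe unmap can use the function directly, while callers that also want their pointer cleared use the macro.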


From xen-devel-bounces@lists.xenproject.org Thu May 14 10:02:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:02:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZAgg-0000Rl-Gv; Thu, 14 May 2020 10:01:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZAgf-0000Rc-8V
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:01:57 +0000
X-Inumbo-ID: f123cce6-95c9-11ea-a464-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f123cce6-95c9-11ea-a464-12813bfff9fa;
 Thu, 14 May 2020 10:01:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589450516;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=htGQpczr7kVz+Vy1EktgZWFDxdY3YxljwCelp6QqO0Q=;
 b=InO/hM6tapFN97H3RXJPZ03Q/G80yuD9/Pwpyx/ZHdI0hFxgxR+FydpQ
 C5kxXIDZqlFri9sDM6xftRj/p8pil59Fub9TcdK4F0Sw2ZU4ZpJjS/lhr
 RMo57gJ2MVfD6mlqxZ6ZKAtqycMOIm4N23w0dVZ3Z4IVlw0974wyVDc4u Q=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: qFo1ZqjK9iKSIq17MsRtARt+8wMsOg/uEGgrqSE9NvpFS6oaJMiQtxsMmPlXJM79XobrwJclDt
 Q/l06SJ/0flY+h0WWlzqPSgXadF8rnHj5LR+hCADMHmaZtAoaN+i4xwmLMJO429zx43VJqY7PO
 2c1vl5Lov3AF+qHAI44EIG/pbaMGvjhPAfOvJKvB0VWIk+FyWR0UqIJxjvqeImOy+utUj7rBPC
 9n6OC0a5GPsSTsOzbznHx84DB/qxqIXvHdM9CvJGRm2ZKw2IKm6jc9aXlap6iuyX/CJeTKNw4w
 dYM=
X-SBRS: 2.7
X-MesageID: 17529291
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,390,1583211600"; d="scan'208";a="17529291"
Date: Thu, 14 May 2020 12:01:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 4/4] x86/APIC: restrict certain messages to BSP
Message-ID: <20200514100145.GA54375@Air-de-Roger>
References: <60130f14-3fc5-e40d-fec6-2448fefa6fc4@suse.com>
 <513e4f93-a8a0-ae72-abcc-aa28531eca97@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <513e4f93-a8a0-ae72-abcc-aa28531eca97@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Mar 13, 2020 at 10:26:47AM +0100, Jan Beulich wrote:
> All CPUs get an equal setting of EOI broadcast suppression; no need to
> log one message per CPU, even if it's only in verbose APIC mode.
> 
> Only the BSP is eligible to possibly get ExtINT enabled; no need to log
> that it gets disabled on all APs, even if - again - it's only in verbose
> APIC mode.
> 
> Take the opportunity and introduce a "bsp" parameter to the function, to
> stop using smp_processor_id() to tell BSP from APs.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

LGTM:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

AFAICT this doesn't introduce any functional change in APIC setup or
behavior; the only functional change is the reduction in log messages.
It might be good to add a note to that effect, since the switch from
smp_processor_id() to bsp makes this non-obvious.

> 
> --- a/xen/arch/x86/apic.c
> +++ b/xen/arch/x86/apic.c
> @@ -499,7 +499,7 @@ static void resume_x2apic(void)
>      __enable_x2apic();
>  }
>  
> -void setup_local_APIC(void)
> +void setup_local_APIC(bool bsp)
>  {
>      unsigned long oldvalue, value, maxlvt;
>      int i, j;
> @@ -598,8 +598,8 @@ void setup_local_APIC(void)
>      if ( directed_eoi_enabled )
>      {
>          value |= APIC_SPIV_DIRECTED_EOI;
> -        apic_printk(APIC_VERBOSE, "Suppress EOI broadcast on CPU#%d\n",
> -                    smp_processor_id());
> +        if ( bsp )
> +            apic_printk(APIC_VERBOSE, "Suppressing EOI broadcast\n");
>      }
>  
>      apic_write(APIC_SPIV, value);
> @@ -615,21 +615,22 @@ void setup_local_APIC(void)
>       * TODO: set up through-local-APIC from through-I/O-APIC? --macro
>       */
>      value = apic_read(APIC_LVT0) & APIC_LVT_MASKED;
> -    if (!smp_processor_id() && (pic_mode || !value)) {
> +    if (bsp && (pic_mode || !value)) {
>          value = APIC_DM_EXTINT;
>          apic_printk(APIC_VERBOSE, "enabled ExtINT on CPU#%d\n",
>                      smp_processor_id());
>      } else {
>          value = APIC_DM_EXTINT | APIC_LVT_MASKED;
> -        apic_printk(APIC_VERBOSE, "masked ExtINT on CPU#%d\n",
> -                    smp_processor_id());
> +        if (bsp)
> +            apic_printk(APIC_VERBOSE, "masked ExtINT on CPU#%d\n",
> +                        smp_processor_id());

You might want to also drop the CPU#%d from the above messages, since
they would only be printed for the BSP.

>      }
>      apic_write(APIC_LVT0, value);
>  
>      /*
>       * only the BP should see the LINT1 NMI signal, obviously.
>       */
> -    if (!smp_processor_id())
> +    if (bsp)
>          value = APIC_DM_NMI;
>      else
>          value = APIC_DM_NMI | APIC_LVT_MASKED;

This would be shorter as:

value = APIC_DM_NMI | (bsp ? 0 : APIC_LVT_MASKED);

Not especially thrilled anyway.

Thanks, Roger.
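The suggested one-liner is equivalent to the quoted if/else; as a standalone sketch (the register constants follow the conventional local-APIC LVT bit layout, but are defined here purely so the fragment compiles on its own):

```c
#include <assert.h>

#define APIC_DM_NMI      0x00400u /* conventional LVT delivery-mode bits */
#define APIC_LVT_MASKED  0x10000u /* conventional LVT mask bit */

/* Only the BSP should see the LINT1 NMI unmasked. */
static unsigned int lint1_lvt(int bsp)
{
    return APIC_DM_NMI | (bsp ? 0 : APIC_LVT_MASKED);
}
```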


From xen-devel-bounces@lists.xenproject.org Thu May 14 10:20:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:20:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZAyc-0002JO-2D; Thu, 14 May 2020 10:20:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZAya-0002JJ-Ou
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:20:28 +0000
X-Inumbo-ID: 87ad0f7c-95cc-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87ad0f7c-95cc-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 10:20:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AAD52AEB9;
 Thu, 14 May 2020 10:20:28 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] xen: add runtime parameter access support to
 hypfs
To: Juergen Gross <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-10-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6c10680-d570-dabb-61ad-627591d08b0e@suse.com>
Date: Thu, 14 May 2020 12:20:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508153421.24525-10-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 17:34, Juergen Gross wrote:
> --- a/xen/arch/arm/xen.lds.S
> +++ b/xen/arch/arm/xen.lds.S
> @@ -89,6 +89,13 @@ SECTIONS
>         __start_schedulers_array = .;
>         *(.data.schedulers)
>         __end_schedulers_array = .;
> +
> +#ifdef CONFIG_HYPFS
> +       . = ALIGN(8);
> +       __paramhypfs_start = .;
> +       *(.data.paramhypfs)
> +       __paramhypfs_end = .;
> +#endif
>         *(.data.rel)
>         *(.data.rel.*)
>         CONSTRUCTORS

I'm not the maintainer of this code, but I think it would be better
if there were either no blank line inserted, or two (a 2nd one after
your insertion).

> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -52,9 +52,27 @@ static __read_mostly enum {
>      PCID_OFF,
>      PCID_ALL,
>      PCID_XPTI,
> -    PCID_NOXPTI
> +    PCID_NOXPTI,
> +    PCID_END
>  } opt_pcid = PCID_XPTI;

Is this change really needed? The only use looks to be ...

> +#ifdef CONFIG_HYPFS
> +static const char opt_pcid_2_string[PCID_END][7] = {

... here, yet the array would end up the same when using [][7].
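The point is that with designated initializers the compiler sizes the open first dimension from the highest index used, so no sentinel enumerator is required. A minimal illustration (the strings here are only illustrative):

```c
#include <assert.h>

enum { PCID_OFF, PCID_ALL, PCID_XPTI, PCID_NOXPTI };

/* First dimension left open: sized by the largest designated index. */
static const char pcid_names[][7] = {
    [PCID_OFF]    = "off",
    [PCID_ALL]    = "all",
    [PCID_XPTI]   = "xpti",
    [PCID_NOXPTI] = "noxpti",
};
```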

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -85,8 +85,43 @@ struct grant_table {
>      struct grant_table_arch arch;
>  };
>  
> -static int parse_gnttab_limit(const char *param, const char *arg,
> -                              unsigned int *valp)
> +unsigned int __read_mostly opt_max_grant_frames = 64;
> +static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
> +
> +#ifdef CONFIG_HYPFS
> +#define GRANT_CUSTOM_VAL_SZ  12
> +static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
> +static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
> +
> +static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
> +                              char *parval)
> +{
> +    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
> +    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
> +}
> +
> +static void __init gnttab_max_frames_init(struct param_hypfs *par)
> +{
> +    update_gnttab_par(par, opt_max_grant_frames, opt_max_grant_frames_val);
> +}
> +
> +static void __init max_maptrack_frames_init(struct param_hypfs *par)
> +{
> +    update_gnttab_par(par, opt_max_maptrack_frames,
> +                      opt_max_maptrack_frames_val);
> +}
> +#else
> +#define opt_max_grant_frames_val    NULL
> +#define opt_max_maptrack_frames_val NULL

This looks latently dangerous to me (in case new uses of these
two identifiers appeared), but I guess my alternative suggestion
will be at best controversial, too:

#define update_gnttab_par(par, val, unused) update_gnttab_par(par, val)
#define parse_gnttab_limit(par, arg, valp, unused) parse_gnttab_limit(par, arg, valp)

(placed right here)
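The trick suggested here — shadowing a function with a same-named, higher-arity macro that discards the extra argument — relies on function-like macros not being expanded recursively. A generic sketch with hypothetical names (not the actual grant-table code):

```c
#include <assert.h>

static int set_limit(unsigned int *slot, unsigned int val)
{
    *slot = val;
    return 0;
}

/*
 * When the extra parameter is compiled out, a same-named macro drops
 * it.  Inside the expansion, set_limit() refers to the real function,
 * because a function-like macro is not re-expanded within itself.
 */
#define set_limit(slot, val, unused) set_limit(slot, val)
```

Call sites can then pass the third argument unconditionally, and it simply vanishes in the configuration where it is unused.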

> @@ -281,6 +282,36 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
>      return 0;
>  }
>  
> +int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
> +                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
> +{
> +    struct param_hypfs *p;
> +    char *buf;
> +    int ret;
> +
> +    buf = xzalloc_array(char, ulen);
> +    if ( !buf )
> +        return -ENOMEM;
> +
> +    ret = -EFAULT;
> +    if ( copy_from_guest(buf, uaddr, ulen) )

As just indicated in an extra reply to patch 4, that ulen doesn't get
silently truncated here is well obscured (the max_size field type
and the check against it elsewhere look to guarantee this).

> +        goto out;
> +
> +    ret = -EDOM;
> +    if ( memchr(buf, 0, ulen) != (buf + ulen - 1) )
> +        goto out;
> +
> +    p = container_of(leaf, struct param_hypfs, hypfs);
> +    ret = p->param->par.func(buf);
> +
> +    if ( !ret )
> +        leaf->e.size = ulen;

Why? For "ept", "no-exec-sp" would yield "exec-sp=0", and hence
you'd wrongly extend the size from what parse_ept_param_runtime()
has already set through custom_runtime_set_var(). It looks to me
as if there's no reason to update e.size here at all; it's the
par.func() handlers which need to take care of this.
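For reference, the quoted memchr() test accepts the buffer only if it holds exactly one string whose NUL terminator is the final byte — neither missing nor embedded terminators pass. In isolation (helper name invented for illustration):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * True iff buf[0..len-1] is a single NUL-terminated string ending
 * exactly at the last byte.  memchr() finds the *first* NUL, so an
 * embedded NUL makes the comparison fail too.
 */
static bool is_single_string(const char *buf, size_t len)
{
    return len && memchr(buf, 0, len) == buf + len - 1;
}
```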

> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -75,12 +75,35 @@ enum con_timestamp_mode
>      TSM_DATE_MS,       /* [YYYY-MM-DD HH:MM:SS.mmm] */
>      TSM_BOOT,          /* [SSSSSS.uuuuuu] */
>      TSM_RAW,           /* [XXXXXXXXXXXXXXXX] */
> +    TSM_END
>  };

Just like for the PCID enumeration I don't think a sentinel is
needed here.

>  static enum con_timestamp_mode __read_mostly opt_con_timestamp_mode = TSM_NONE;
>  
> +#ifdef CONFIG_HYPFS
> +static const char con_timestamp_mode_2_string[TSM_END][7] = {
> +    [TSM_NONE] = "none",
> +    [TSM_DATE] = "date",
> +    [TSM_DATE_MS] = "datems",
> +    [TSM_BOOT] = "boot",
> +    [TSM_RAW] = "raw"

Add a trailing comma please (and, as I notice only now, also in the
similar PCID array).

To the subsequent code the gnttab comment applies as well.

> @@ -80,7 +81,120 @@ extern const struct kernel_param __param_start[], __param_end[];
>  
>  #define __rtparam         __param(__dataparam)
>  
> -#define custom_runtime_only_param(_name, _var) \
> +#ifdef CONFIG_HYPFS
> +
> +struct param_hypfs {
> +    const struct kernel_param *param;
> +    struct hypfs_entry_leaf hypfs;
> +    void (*init_leaf)(struct param_hypfs *par);
> +};
> +
> +extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
> +
> +#define __paramhypfs      __used_section(".data.paramhypfs")
> +
> +#define __paramfs         static __paramhypfs \
> +    __attribute__((__aligned__(sizeof(void *)))) struct param_hypfs

Why the attribute?

> +#define custom_runtime_set_var_sz(parfs, var, sz) \
> +    { \
> +        (parfs)->hypfs.content = var; \
> +        (parfs)->hypfs.e.max_size = sz; \

var and sz want parentheses around them.

> +        (parfs)->hypfs.e.size = strlen(var) + 1; \
> +    }
> +#define custom_runtime_set_var(parfs, var) \
> +    custom_runtime_set_var_sz(parfs, var, sizeof(var))
> +
> +#define param_2_parfs(par) &__parfs_##par
> +
> +/* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
> +#define custom_runtime_only_param(_name, _var, initfunc) \

I've started noticing it here, but the issue exists further up
(and down) as well - please can you avoid identifiers with
leading underscores, which are in violation of the C standard?
Even more so since here you're not even consistent across
macro parameter names.

> +    __rtparam __rtpar_##_var = \
> +      { .name = _name, \
> +          .type = OPT_CUSTOM, \
> +          .par.func = _var }; \
> +    __paramfs __parfs_##_var = \
> +        { .param = &__rtpar_##_var, \
> +          .init_leaf = initfunc, \
> +          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
> +          .hypfs.e.name = _name, \
> +          .hypfs.e.read = hypfs_read_leaf, \
> +          .hypfs.e.write = hypfs_write_custom }
> +#define boolean_runtime_only_param(_name, _var) \
> +    __rtparam __rtpar_##_var = \
> +        { .name = _name, \
> +          .type = OPT_BOOL, \
> +          .len = sizeof(_var) + \
> +                 BUILD_BUG_ON_ZERO(sizeof(_var) != sizeof(bool)), \
> +          .par.var = &_var }; \
> +    __paramfs __parfs_##_var = \
> +        { .param = &__rtpar_##_var, \
> +          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
> +          .hypfs.e.name = _name, \
> +          .hypfs.e.size = sizeof(_var), \
> +          .hypfs.e.max_size = sizeof(_var), \
> +          .hypfs.e.read = hypfs_read_leaf, \
> +          .hypfs.e.write = hypfs_write_bool, \
> +          .hypfs.content = &_var }
> +#define integer_runtime_only_param(_name, _var) \
> +    __rtparam __rtpar_##_var = \
> +        { .name = _name, \
> +          .type = OPT_UINT, \
> +          .len = sizeof(_var), \
> +          .par.var = &_var }; \
> +    __paramfs __parfs_##_var = \
> +        { .param = &__rtpar_##_var, \
> +          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
> +          .hypfs.e.name = _name, \
> +          .hypfs.e.size = sizeof(_var), \
> +          .hypfs.e.max_size = sizeof(_var), \
> +          .hypfs.e.read = hypfs_read_leaf, \
> +          .hypfs.e.write = hypfs_write_leaf, \
> +          .hypfs.content = &_var }
> +#define size_runtime_only_param(_name, _var) \
> +    __rtparam __rtpar_##_var = \
> +        { .name = _name, \
> +          .type = OPT_SIZE, \
> +          .len = sizeof(_var), \
> +          .par.var = &_var }; \
> +    __paramfs __parfs_##_var = \
> +        { .param = &__rtpar_##_var, \
> +          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
> +          .hypfs.e.name = _name, \
> +          .hypfs.e.size = sizeof(_var), \
> +          .hypfs.e.max_size = sizeof(_var), \
> +          .hypfs.e.read = hypfs_read_leaf, \
> +          .hypfs.e.write = hypfs_write_leaf, \
> +          .hypfs.content = &_var }
> +#define string_runtime_only_param(_name, _var) \
> +    __rtparam __rtpar_##_var = \
> +        { .name = _name, \
> +          .type = OPT_STR, \
> +          .len = sizeof(_var), \
> +          .par.var = &_var }; \
> +    __paramfs __parfs_##_var = \
> +        { .param = &__rtpar_##_var, \
> +          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
> +          .hypfs.e.name = _name, \
> +          .hypfs.e.size = sizeof(_var), \

Is this really correct here?

> +          .hypfs.e.max_size = sizeof(_var), \
> +          .hypfs.e.read = hypfs_read_leaf, \
> +          .hypfs.e.write = hypfs_write_leaf, \
> +          .hypfs.content = &_var }
> +
> +#else
> +
> +struct param_hypfs {
> +};
> +
> +#define param_2_parfs(par)  NULL

Along the lines of the earlier comment, this looks latently dangerous.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 10:44:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZBLj-00047v-36; Thu, 14 May 2020 10:44:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4jr8=64=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jZBLi-00047q-6h
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:44:22 +0000
X-Inumbo-ID: de399b1e-95cf-11ea-a468-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de399b1e-95cf-11ea-a468-12813bfff9fa;
 Thu, 14 May 2020 10:44:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uni5D110GJhVW8R1nmuUAgF8BlDiOJsh2zMtnMCsaA4=; b=JNZY3Me34rWb1aifmg8I/T6llk
 DDUpYdQkboeg3HAZDwKKcxNiDgnlapGUd0Q8se7lnhmgnH5MmfPKvP1awLkd3wYmUIVSOnzazSUHd
 hH0B3/V6I0b/eHPNORRTk3DKP0PJpqk/Dg+SKNFtpxIvAwD3xj8dcYnKtouYmCb5hOK0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jZBLg-0004a6-EH; Thu, 14 May 2020 10:44:20 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jZBLg-0000sH-4Q; Thu, 14 May 2020 10:44:20 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 0/5] domain context infrastructure
Date: Thu, 14 May 2020 11:44:11 +0100
Message-Id: <20200514104416.16657-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (5):
  xen/common: introduce a new framework for save/restore of 'domain'
    context
  xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
  tools/misc: add xen-domctx to present domain context
  common/domain: add a domain context record for shared_info...
  tools/libxc: make use of domain context SHARED_INFO record...

 .gitignore                             |   1 +
 tools/flask/policy/modules/xen.if      |   4 +-
 tools/libxc/include/xenctrl.h          |   5 +
 tools/libxc/xc_domain.c                |  54 +++++
 tools/libxc/xc_sr_common.c             |  58 +++++
 tools/libxc/xc_sr_common.h             |  11 +-
 tools/libxc/xc_sr_common_x86_pv.c      |  47 ++++
 tools/libxc/xc_sr_common_x86_pv.h      |   3 +
 tools/libxc/xc_sr_restore_x86_pv.c     |  40 ++--
 tools/libxc/xc_sr_save_x86_pv.c        |  26 +-
 tools/libxc/xg_save_restore.h          |   1 +
 tools/misc/Makefile                    |   4 +
 tools/misc/xen-domctx.c                | 273 +++++++++++++++++++++
 xen/common/Makefile                    |   1 +
 xen/common/domain.c                    |  60 +++++
 xen/common/domctl.c                    | 167 +++++++++++++
 xen/common/save.c                      | 313 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/domctl.h            |  41 ++++
 xen/include/public/save.h              |  89 +++++++
 xen/include/xen/save.h                 | 165 +++++++++++++
 xen/xsm/flask/hooks.c                  |   6 +
 xen/xsm/flask/policy/access_vectors    |   4 +
 24 files changed, 1332 insertions(+), 51 deletions(-)
 create mode 100644 tools/misc/xen-domctx.c
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 10:44:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZBLp-00048e-Nm; Thu, 14 May 2020 10:44:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4jr8=64=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jZBLo-00048G-NK
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:44:28 +0000
X-Inumbo-ID: e234ba64-95cf-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e234ba64-95cf-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 10:44:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=a+qWjzDbL/zh1WF3j/PxxvGeYzWgtIFSwjgyRIR7Ufg=; b=MVoUe+Hr1nExwkjY4rPWqkideA
 bOcHkbw6i/GTk056z4Y9RFOqTDo0t7F7/WMszHozLGKitJT3c0PCAvgx3kCBlprllkgO9NzTIwE3T
 zKm0rd4vZ1ALP1ZjjSKXtz5yadANXiqKLHtFAluulO7M5GQWl20aLIiiOsDAqlfkOnCI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jZBLk-0004aG-8U; Thu, 14 May 2020 10:44:24 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jZBLj-0000sH-V7; Thu, 14 May 2020 10:44:24 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
Date: Thu, 14 May 2020 11:44:13 +0100
Message-Id: <20200514104416.16657-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514104416.16657-1-paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These domctls provide a mechanism for the toolstack to get and set domain
context.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v3:
 - Addressed comments from Julien and Jan
 - Use vmalloc() rather than xmalloc_bytes()

v2:
 - drop mask parameter
 - const-ify some more buffers
---
 tools/flask/policy/modules/xen.if   |   4 +-
 tools/libxc/include/xenctrl.h       |   5 +
 tools/libxc/xc_domain.c             |  54 +++++++++
 xen/common/domctl.c                 | 167 ++++++++++++++++++++++++++++
 xen/include/public/domctl.h         |  41 +++++++
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   4 +
 7 files changed, 279 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 8eb2293a52..2bc9db4f64 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -53,7 +53,7 @@ define(`create_domain_common', `
 	allow $1 $2:domain2 { set_cpu_policy settsc setscheduler setclaim
 			set_vnumainfo get_vnumainfo cacheflush
 			psr_cmt_op psr_alloc soft_reset
-			resource_map get_cpu_policy };
+			resource_map get_cpu_policy setcontext };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -97,7 +97,7 @@ define(`migrate_domain_out', `
 	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
-	allow $1 $2:domain2 gettsc;
+	allow $1 $2:domain2 { gettsc getcontext };
 	allow $1 $2:shadow { enable disable logdirty };
 ')
 
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..0ce2372e2f 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -867,6 +867,11 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
                              uint8_t *hvm_ctxt,
                              uint32_t size);
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size);
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size);
+
 /**
  * This function will return guest IO ABI protocol
  *
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 71829c2bce..212d1489dd 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -537,6 +537,60 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
     return ret;
 }
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size)
+{
+    int ret;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, *size, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_getdomaincontext;
+    domctl.domain = domid;
+    domctl.u.getdomaincontext.size = *size;
+    set_xen_guest_handle(domctl.u.getdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    if ( ret )
+        return ret;
+
+    *size = domctl.u.getdomaincontext.size;
+    if ( *size != domctl.u.getdomaincontext.size )
+    {
+        errno = EOVERFLOW;
+        return -1;
+    }
+
+    return 0;
+}
+
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size)
+{
+    int ret;
+    DECLARE_DOMCTL;
+    DECLARE_HYPERCALL_BOUNCE_IN(ctxt_buf, size);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    domctl.cmd = XEN_DOMCTL_setdomaincontext;
+    domctl.domain = domid;
+    domctl.u.setdomaincontext.size = size;
+    set_xen_guest_handle(domctl.u.setdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    return ret;
+}
+
 int xc_vcpu_getcontext(xc_interface *xch,
                        uint32_t domid,
                        uint32_t vcpu,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a69b3b59a8..c37d2ad366 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -25,6 +25,8 @@
 #include <xen/hypercall.h>
 #include <xen/vm_event.h>
 #include <xen/monitor.h>
+#include <xen/save.h>
+#include <xen/vmap.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -358,6 +360,162 @@ static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
     return ERR_PTR(ret);
 }
 
+struct domctl_context
+{
+    void *buffer;
+    struct domain_save_descriptor *desc;
+    size_t len;
+    size_t cur;
+};
+
+static int dry_run_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len + len < c->len )
+        return -EOVERFLOW;
+
+    c->len += len;
+
+    return 0;
+}
+
+static int dry_run_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    return dry_run_append(priv, NULL, sizeof(*desc));
+}
+
+static int dry_run_end(void *priv, size_t len)
+{
+    return 0;
+}
+
+static struct domain_save_ops dry_run_ops = {
+    .begin = dry_run_begin,
+    .append = dry_run_append,
+    .end = dry_run_end,
+};
+
+static int save_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < sizeof(*desc) )
+        return -ENOSPC;
+
+    c->desc = c->buffer + c->cur; /* stash pointer to descriptor */
+    *c->desc = *desc;
+
+    c->cur += sizeof(*desc);
+
+    return 0;
+}
+
+static int save_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENOSPC;
+
+    memcpy(c->buffer + c->cur, data, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static int save_end(void *priv, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    c->desc->length = len;
+
+    return 0;
+}
+
+static struct domain_save_ops save_ops = {
+    .begin = save_begin,
+    .append = save_append,
+    .end = save_end,
+};
+
+static int getdomaincontext(struct domain *d,
+                            struct xen_domctl_getdomaincontext *gdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( guest_handle_is_null(gdc->buffer) ) /* query for buffer size */
+    {
+        if ( gdc->size )
+            return -EINVAL;
+
+        /* dry run to acquire buffer size */
+        rc = domain_save(d, &dry_run_ops, &c, true);
+        if ( rc )
+            return rc;
+
+        gdc->size = c.len;
+        return 0;
+    }
+
+    c.len = gdc->size;
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = domain_save(d, &save_ops, &c, false);
+
+    gdc->size = c.cur;
+    if ( !rc && copy_to_guest(gdc->buffer, c.buffer, gdc->size) )
+        rc = -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
+static int load_read(void *priv, void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENODATA;
+
+    memcpy(data, c->buffer + c->cur, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static struct domain_load_ops load_ops = {
+    .read = load_read,
+};
+
+static int setdomaincontext(struct domain *d,
+                            const struct xen_domctl_setdomaincontext *sdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR, .len = sdc->size };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = !copy_from_guest(c.buffer, sdc->buffer, c.len) ?
+        domain_load(d, &load_ops, &c) : -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
@@ -942,6 +1100,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
+    case XEN_DOMCTL_getdomaincontext:
+        ret = getdomaincontext(d, &op->u.getdomaincontext);
+        copyback = !ret;
+        break;
+
+    case XEN_DOMCTL_setdomaincontext:
+        ret = setdomaincontext(d, &op->u.setdomaincontext);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 1ad34c35eb..1b133bda59 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1129,6 +1129,43 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/*
+ * XEN_DOMCTL_getdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer into which the context data should be
+ *                copied, or NULL to query the buffer size that should
+ *                be allocated.
+ * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
+ *                zero, and the value passed out will be the size of the
+ *                buffer to allocate.
+ *                If 'buffer' is non-NULL then the value passed in must
+ *                be the size of the buffer into which data may be copied.
+ *                The value passed out will be the size of data written.
+ */
+struct xen_domctl_getdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(void) buffer;
+};
+
+/* XEN_DOMCTL_setdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer from which the context data should be
+ *                copied.
+ * size (IN):     The size of the buffer from which data may be copied.
+ *                This data must include DOMAIN_SAVE_CODE_HEADER at the
+ *                start and terminate with a DOMAIN_SAVE_CODE_END record.
+ *                Any data beyond the DOMAIN_SAVE_CODE_END record will be
+ *                ignored.
+ */
+struct xen_domctl_setdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(const_void) buffer;
+};
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1210,6 +1247,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_getdomaincontext              84
+#define XEN_DOMCTL_setdomaincontext              85
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1270,6 +1309,8 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_getdomaincontext  getdomaincontext;
+        struct xen_domctl_setdomaincontext  setdomaincontext;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4649e6fd95..6f3db276ef 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -745,6 +745,12 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_get_cpu_policy:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CPU_POLICY);
 
+    case XEN_DOMCTL_setdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SETCONTEXT);
+
+    case XEN_DOMCTL_getdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GETCONTEXT);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c055c14c26..fccfb9de82 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -245,6 +245,10 @@ class domain2
     resource_map
 # XEN_DOMCTL_get_cpu_policy
     get_cpu_policy
+# XEN_DOMCTL_setdomaincontext
+    setcontext
+# XEN_DOMCTL_getdomaincontext
+    getcontext
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 10:44:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:44:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZBLo-00048C-BC; Thu, 14 May 2020 10:44:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4jr8=64=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jZBLn-000484-4X
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:44:27 +0000
X-Inumbo-ID: df8f1d7d-95cf-11ea-a468-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df8f1d7d-95cf-11ea-a468-12813bfff9fa;
 Thu, 14 May 2020 10:44:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=t8E+21CLqXY/N2lBKPM4IcxRr4ZFSD8fm46FFAv0DXY=; b=mfWAM8oaFnucGvVT0ks19Ve5yu
 4kRVu97x2+AMR5nxHbpyAXlzZP0hxCyc3gHBSnoy5wT4kNB1TQzsRHLevv+2Lf7IZ7HHQ8bT56Ik2
 A0aL7sczOy2oyZqae+vx2heDK5JflIfdRYwXHUZ5KK1u1OIT1Y30dADKC4ceG0H2F1Ps=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jZBLi-0004aB-Ho; Thu, 14 May 2020 10:44:22 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jZBLh-0000sH-Vm; Thu, 14 May 2020 10:44:22 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 1/5] xen/common: introduce a new framework for save/restore
 of 'domain' context
Date: Thu, 14 May 2020 11:44:12 +0100
Message-Id: <20200514104416.16657-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514104416.16657-1-paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To allow enlightened HVM guests (i.e. those that have PV drivers) to be
migrated without their co-operation it will be necessary to transfer 'PV'
state such as event channel state, grant entry state, etc.

Currently there is a framework (entered via the hvm_save/load() functions)
that allows a domain's 'HVM' (architectural) state to be transferred, but
'PV' state is also common to pure PV guests and so this framework is not
really suitable.

This patch adds the new public header and low level implementation of a new
common framework, entered via the domain_save/load() functions. Subsequent
patches will introduce other parts of the framework, and code that will
make use of it within the current version of the libxc migration stream.

This patch also marks the HVM-only framework as deprecated in favour of the
new framework.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v3:
 - Addressed comments from Julien and Jan
 - Save handlers no longer need to state entry length up-front
 - Save handlers expected to deal with multiple instances internally
 - Entries are now auto-padded to 8 byte boundary

v2:
 - Allow multi-stage save/load to avoid the need to double-buffer
 - Get rid of the masks and add an 'ignore' flag instead
 - Create copy function union to preserve const save buffer
 - Deprecate HVM-only framework
---
 xen/common/Makefile                    |   1 +
 xen/common/save.c                      | 313 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/save.h              |  80 +++++++
 xen/include/xen/save.h                 | 165 +++++++++++++
 6 files changed, 569 insertions(+)
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8cde65370..90553ba5d7 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -37,6 +37,7 @@ obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
+obj-y += save.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
diff --git a/xen/common/save.c b/xen/common/save.c
new file mode 100644
index 0000000000..62a2b7c5f6
--- /dev/null
+++ b/xen/common/save.c
@@ -0,0 +1,313 @@
+/*
+ * save.c: Save and restore PV guest state common to all domain types.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/compile.h>
+#include <xen/save.h>
+
+struct domain_context {
+    struct domain *domain;
+    const char *name; /* for logging purposes */
+    struct domain_save_descriptor desc;
+    size_t len; /* for internal accounting */
+    union {
+        struct domain_save_ops *save;
+        struct domain_load_ops *load;
+    } ops;
+    void *priv;
+    bool log;
+};
+
+static struct {
+    const char *name;
+    domain_save_handler save;
+    domain_load_handler load;
+} handlers[DOMAIN_SAVE_CODE_MAX + 1];
+
+void __init domain_register_save_type(unsigned int typecode,
+                                      const char *name,
+                                      domain_save_handler save,
+                                      domain_load_handler load)
+{
+    BUG_ON(typecode >= ARRAY_SIZE(handlers));
+
+    ASSERT(!handlers[typecode].save);
+    ASSERT(!handlers[typecode].load);
+
+    handlers[typecode].name = name;
+    handlers[typecode].save = save;
+    handlers[typecode].load = load;
+}
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      const char *name, unsigned int instance)
+{
+    int rc;
+
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+    ASSERT(!c->desc.length); /* Should always be zero during domain_save() */
+    ASSERT(!c->len); /* Verify domain_save_end() was called */
+
+    c->desc.instance = instance;
+
+    rc = c->ops.save->begin(c->priv, &c->desc);
+    if ( rc )
+        return rc;
+
+    c->name = name;
+
+    return 0;
+}
+
+int domain_save_data(struct domain_context *c, const void *src, size_t len)
+{
+    int rc = c->ops.save->append(c->priv, src, len);
+
+    if ( !rc )
+        c->len += len;
+
+    return rc;
+}
+
+#define DOMAIN_SAVE_ALIGN 8
+
+int domain_save_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    uint8_t pad[DOMAIN_SAVE_ALIGN] = {};
+    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
+    int rc;
+
+    if ( len )
+    {
+        rc = domain_save_data(c, pad, len);
+
+        if ( rc )
+            return rc;
+    }
+    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));
+
+    if ( c->log )
+        gdprintk(XENLOG_INFO, "%pd save: %s[%u] +%zu (-%zu)\n", d, c->name,
+                 c->desc.instance, c->len, len);
+
+    rc = c->ops.save->end(c->priv, c->len);
+    c->len = 0;
+
+    return rc;
+}
+
+int domain_save(struct domain *d, struct domain_save_ops *ops, void *priv,
+                bool dry_run)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.save = ops,
+        .priv = priv,
+        .log = !dry_run,
+    };
+    static struct domain_save_header h = {
+        .magic = DOMAIN_SAVE_MAGIC,
+        .xen_major = XEN_VERSION,
+        .xen_minor = XEN_SUBVERSION,
+        .version = DOMAIN_SAVE_VERSION,
+    };
+    struct domain_save_end e = {};
+    unsigned int i;
+    int rc;
+
+    ASSERT(d != current->domain);
+    domain_pause(d);
+
+    c.desc.typecode = DOMAIN_SAVE_CODE(HEADER);
+
+    rc = DOMAIN_SAVE_ENTRY(HEADER, &c, 0, &h, sizeof(h));
+    if ( rc )
+        goto out;
+
+    for ( i = 0; i < ARRAY_SIZE(handlers); i++ )
+    {
+        domain_save_handler save = handlers[i].save;
+
+        if ( !save )
+            continue;
+
+        memset(&c.desc, 0, sizeof(c.desc));
+        c.desc.typecode = i;
+
+        rc = save(d, &c, dry_run);
+        if ( rc )
+            goto out;
+    }
+
+    c.desc.typecode = DOMAIN_SAVE_CODE(END);
+
+    rc = DOMAIN_SAVE_ENTRY(END, &c, 0, &e, sizeof(e));
+
+ out:
+    domain_unpause(d);
+
+    return rc;
+}
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      const char *name, unsigned int *instance)
+{
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+
+    ASSERT(!c->len); /* Verify domain_load_end() was called */
+
+    *instance = c->desc.instance;
+
+    c->name = name;
+
+    return 0;
+}
+
+int domain_load_data(struct domain_context *c, void *dst, size_t len)
+{
+    size_t copy_len = min_t(size_t, len, c->desc.length - c->len);
+    int rc;
+
+    c->len += copy_len;
+    ASSERT(c->len <= c->desc.length);
+
+    rc = copy_len ? c->ops.load->read(c->priv, dst, copy_len) : 0;
+    if ( rc )
+        return rc;
+
+    /* Zero extend if the entry is exhausted */
+    len -= copy_len;
+    if ( len )
+    {
+        dst += copy_len;
+        memset(dst, 0, len);
+    }
+
+    return 0;
+}
+
+int domain_load_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    size_t len = c->desc.length - c->len;
+
+    while ( c->len != c->desc.length ) /* unconsumed data or pad */
+    {
+        uint8_t pad;
+        int rc = domain_load_data(c, &pad, sizeof(pad));
+
+        if ( rc )
+            return rc;
+
+        if ( pad )
+            return -EINVAL;
+    }
+
+    if ( c->log )
+        gdprintk(XENLOG_INFO, "%pd load: %s[%u] +%zu (-%zu)\n", d, c->name,
+                 c->desc.instance, c->len, len);
+
+    c->len = 0;
+
+    return 0;
+}
+
+int domain_load(struct domain *d, struct domain_load_ops *ops, void *priv)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.load = ops,
+        .priv = priv,
+        .log = true,
+    };
+    unsigned int instance;
+    struct domain_save_header h;
+    int rc;
+
+    ASSERT(d != current->domain);
+
+    rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+    if ( rc )
+        return rc;
+
+    rc = DOMAIN_LOAD_ENTRY(HEADER, &c, &instance, &h, sizeof(h));
+    if ( rc )
+        return rc;
+
+    if ( instance || h.magic != DOMAIN_SAVE_MAGIC ||
+         h.version != DOMAIN_SAVE_VERSION )
+        return -EINVAL;
+
+    domain_pause(d);
+
+    for (;;)
+    {
+        unsigned int i;
+        domain_load_handler load;
+
+        rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+        if ( rc )
+            break; /* exit with rc set so the domain is unpaused */
+
+        rc = -EINVAL;
+
+        if ( c.desc.typecode == DOMAIN_SAVE_CODE(END) )
+        {
+            struct domain_save_end e;
+
+            rc = DOMAIN_LOAD_ENTRY(END, &c, &instance, NULL, sizeof(e));
+
+            if ( instance )
+                rc = -EINVAL; /* fall through to break; domain is unpaused */
+
+            break;
+        }
+
+        i = c.desc.typecode;
+        if ( i >= ARRAY_SIZE(handlers) )
+            break;
+
+        load = handlers[i].load;
+
+        rc = load ? load(d, &c) : -EOPNOTSUPP;
+        if ( rc )
+            break;
+    }
+
+    domain_unpause(d);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
index 75b8e65bcb..d5b0c15203 100644
--- a/xen/include/public/arch-arm/hvm/save.h
+++ b/xen/include/public/arch-arm/hvm/save.h
@@ -26,6 +26,11 @@
 #ifndef __XEN_PUBLIC_HVM_SAVE_ARM_H__
 #define __XEN_PUBLIC_HVM_SAVE_ARM_H__
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif
 
 /*
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 773a380bc2..e61e2dbcd7 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -648,6 +648,11 @@ struct hvm_msr {
  */
 #define HVM_SAVE_CODE_MAX 20
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
 
 /*
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
new file mode 100644
index 0000000000..834c031c51
--- /dev/null
+++ b/xen/include/public/save.h
@@ -0,0 +1,80 @@
+/*
+ * save.h
+ *
+ * Structure definitions for common PV/HVM domain state that is held by
+ * Xen and must be saved along with the domain's memory.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef XEN_PUBLIC_SAVE_H
+#define XEN_PUBLIC_SAVE_H
+
+#include "xen.h"
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+/* Entry data is preceded by a descriptor */
+struct domain_save_descriptor {
+    uint16_t typecode;
+
+    /*
+     * Instance number of the entry (since there may be multiple of some
+     * types of entry).
+     */
+    uint16_t instance;
+
+    /* Entry length not including this descriptor */
+    uint32_t length;
+};
+
+/*
+ * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
+ * binds these things together.
+ */
+#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
+    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
+
+#define DOMAIN_SAVE_CODE(_x) \
+    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)(0))->c))
+#define DOMAIN_SAVE_TYPE(_x) \
+    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)(0))->t)
+
+/* Terminating entry */
+struct domain_save_end {};
+DECLARE_DOMAIN_SAVE_TYPE(END, 0, struct domain_save_end);
+
+#define DOMAIN_SAVE_MAGIC   0x53415645
+#define DOMAIN_SAVE_VERSION 0x00000001
+
+/* Initial entry */
+struct domain_save_header {
+    uint32_t magic;                /* Must be DOMAIN_SAVE_MAGIC */
+    uint16_t xen_major, xen_minor; /* Xen version */
+    uint32_t version;              /* Save format version */
+};
+DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
+
+#define DOMAIN_SAVE_CODE_MAX 1
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
+#endif /* XEN_PUBLIC_SAVE_H */
diff --git a/xen/include/xen/save.h b/xen/include/xen/save.h
new file mode 100644
index 0000000000..c5386b780c
--- /dev/null
+++ b/xen/include/xen/save.h
@@ -0,0 +1,165 @@
+/*
+ * save.h: support routines for save/restore
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef XEN_SAVE_H
+#define XEN_SAVE_H
+
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+
+#include <public/save.h>
+
+struct domain_context;
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      const char *name, unsigned int instance);
+
+#define DOMAIN_SAVE_BEGIN(_x, _c, _instance) \
+    domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance))
+
+int domain_save_data(struct domain_context *c, const void *data, size_t len);
+int domain_save_end(struct domain_context *c);
+
+static inline int domain_save_entry(struct domain_context *c,
+                                    unsigned int typecode, const char *name,
+                                    unsigned int instance, const void *src,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_save_begin(c, typecode, name, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, src, len);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+#define DOMAIN_SAVE_ENTRY(_x, _c, _instance, _src, _len)            \
+    domain_save_entry((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance), \
+                      (_src), (_len))
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      const char *name, unsigned int *instance);
+
+#define DOMAIN_LOAD_BEGIN(_x, _c, _instance) \
+    domain_load_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance))
+
+int domain_load_data(struct domain_context *c, void *data, size_t len);
+int domain_load_end(struct domain_context *c);
+
+static inline int domain_load_entry(struct domain_context *c,
+                                    unsigned int typecode, const char *name,
+                                    unsigned int *instance, void *dst,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_load_begin(c, typecode, name, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_load_data(c, dst, len);
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+#define DOMAIN_LOAD_ENTRY(_x, _c, _instance, _dst, _len)            \
+    domain_load_entry((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance), \
+                      (_dst), (_len))
+
+/*
+ * The 'dry_run' flag indicates that the caller of domain_save() (see below)
+ * is not trying to actually acquire the data, only the size of the data.
+ * The save handler can therefore limit work to only that which is necessary
+ * to call domain_save_data() the correct number of times with accurate values
+ * for 'len'.
+ */
+typedef int (*domain_save_handler)(const struct domain *d,
+                                   struct domain_context *c,
+                                   bool dry_run);
+typedef int (*domain_load_handler)(struct domain *d,
+                                   struct domain_context *c);
+
+void domain_register_save_type(unsigned int typecode, const char *name,
+                               domain_save_handler save,
+                               domain_load_handler load);
+
+/*
+ * Register save and restore handlers. Save handlers will be invoked
+ * in order of DOMAIN_SAVE_CODE().
+ */
+#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
+    static int __init __domain_register_##_x##_save_restore(void) \
+    {                                                             \
+        domain_register_save_type(                                \
+            DOMAIN_SAVE_CODE(_x),                                 \
+            #_x,                                                  \
+            &(_save),                                             \
+            &(_load));                                            \
+                                                                  \
+        return 0;                                                 \
+    }                                                             \
+    __initcall(__domain_register_##_x##_save_restore);
+
+/* Callback functions */
+struct domain_save_ops {
+    /*
+     * Begin a new entry with the given descriptor (only the typecode and
+     * instance fields are valid).
+     */
+    int (*begin)(void *priv, const struct domain_save_descriptor *desc);
+    /* Append data/padding to the buffer */
+    int (*append)(void *priv, const void *data, size_t len);
+    /*
+     * Complete the entry by updating the descriptor with the total
+     * length of the appended data (not including padding).
+     */
+    int (*end)(void *priv, size_t len);
+};
+
+struct domain_load_ops {
+    /* Read data/padding from the buffer */
+    int (*read)(void *priv, void *data, size_t len);
+};
+
+/*
+ * Entry points:
+ *
+ * ops:     These are callback functions provided by the caller that will
+ *          be used to write to (in the save case) or read from (in the
+ *          load case) the context buffer. See above for more detail.
+ * priv:    This is a pointer that will be passed to the copy function to
+ *          allow it to identify the context buffer and the current state
+ *          of the save or load operation.
+ * dry_run: If this is set then the caller of domain_save() is only trying
+ *          to acquire the total size of the data, not the data itself.
+ *          In this case the caller may supply different ops to avoid doing
+ *          unnecessary work.
+ */
+int domain_save(struct domain *d, struct domain_save_ops *ops, void *priv,
+                bool dry_run);
+int domain_load(struct domain *d, struct domain_load_ops *ops, void *priv);
+
+#endif /* XEN_SAVE_H */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 10:44:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:44:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZBLu-00049Z-0U; Thu, 14 May 2020 10:44:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4jr8=64=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jZBLs-000497-4e
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:44:32 +0000
X-Inumbo-ID: e0b7b561-95cf-11ea-a468-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0b7b561-95cf-11ea-a468-12813bfff9fa;
 Thu, 14 May 2020 10:44:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=TSD5SFhdA2z4EdigzeomUdHIWLSMmGf2Su2R08PvL0w=; b=QYR7PWNRm0QjRP+/giIA6nUuuU
 EJl6qajtzCPRBgNOr4vtw9V2IIPPIbZl/hf4kQlBZIsy0694E9HQU2pts02rBpnV8q90PsLAiE12I
 cTa+EPWTd9Lx96ZVEXehR3PfGLWAvqc/OPn/k67S9tLbkCBBAalwiShsavgys6heQHd4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jZBLl-0004aK-9e; Thu, 14 May 2020 10:44:25 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jZBLl-0000sH-0N; Thu, 14 May 2020 10:44:25 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 3/5] tools/misc: add xen-domctx to present domain context
Date: Thu, 14 May 2020 11:44:14 +0100
Message-Id: <20200514104416.16657-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514104416.16657-1-paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This tool is analogous to 'xen-hvmctx', which presents HVM context in a
human-readable form. Subsequent patches will add 'dump' functions as new
record types are introduced.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>

v3:
 - Re-worked to avoid copying onto the stack
 - Added optional typecode and instance arguments

v2:
 - Change name from 'xen-ctx' to 'xen-domctx'
---
 .gitignore              |   1 +
 tools/misc/Makefile     |   4 +
 tools/misc/xen-domctx.c | 200 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 205 insertions(+)
 create mode 100644 tools/misc/xen-domctx.c

diff --git a/.gitignore b/.gitignore
index bfa53723b3..96fd7527bf 100644
--- a/.gitignore
+++ b/.gitignore
@@ -209,6 +209,7 @@ tools/misc/xen_cpuperf
 tools/misc/xen-cpuid
 tools/misc/xen-detect
 tools/misc/xen-diag
+tools/misc/xen-domctx
 tools/misc/xen-tmem-list-parse
 tools/misc/xen-livepatch
 tools/misc/xenperf
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 63947bfadc..ef25524354 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -30,6 +30,7 @@ INSTALL_SBIN                   += xenpm
 INSTALL_SBIN                   += xenwatchdogd
 INSTALL_SBIN                   += xen-livepatch
 INSTALL_SBIN                   += xen-diag
+INSTALL_SBIN                   += xen-domctx
 INSTALL_SBIN += $(INSTALL_SBIN-y)
 
 # Everything to be installed in a private bin/
@@ -108,6 +109,9 @@ xen-livepatch: xen-livepatch.o
 xen-diag: xen-diag.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xen-domctx: xen-domctx.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
 xen-lowmemd: xen-lowmemd.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
new file mode 100644
index 0000000000..243325dfce
--- /dev/null
+++ b/tools/misc/xen-domctx.c
@@ -0,0 +1,200 @@
+/*
+ * xen-domctx.c
+ *
+ * Print out domain save records in a human-readable way.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <xenctrl.h>
+#include <xen/xen.h>
+#include <xen/domctl.h>
+#include <xen/save.h>
+
+static void *buf = NULL;
+static size_t len, off;
+
+#define GET_PTR(_x)                                                        \
+    do {                                                                   \
+        if ( len - off < sizeof(*(_x)) )                                   \
+        {                                                                  \
+            fprintf(stderr,                                                \
+                    "error: need another %zu bytes, only %zu available\n", \
+                    sizeof(*(_x)), len - off);                             \
+            exit(1);                                                       \
+        }                                                                  \
+        (_x) = buf + off;                                                  \
+    } while (false)
+
+static void dump_header(void)
+{
+    DOMAIN_SAVE_TYPE(HEADER) *h;
+
+    GET_PTR(h);
+
+    printf("    HEADER: magic %#x, version %u, Xen %u.%u\n",
+           h->magic, h->version, h->xen_major, h->xen_minor);
+
+}
+
+static void dump_end(void)
+{
+    DOMAIN_SAVE_TYPE(END) *e;
+
+    GET_PTR(e);
+
+    printf("    END\n");
+}
+
+static void usage(const char *prog)
+{
+    fprintf(stderr, "usage: %s <domid> [ <typecode> [ <instance> ]]\n",
+            prog);
+    exit(1);
+}
+
+int main(int argc, char **argv)
+{
+    char *s, *e;
+    long domid;
+    long typecode = -1;
+    long instance = -1;
+    unsigned int entry;
+    xc_interface *xch;
+    int rc;
+
+    if ( argc < 2 || argc > 4 )
+        usage(argv[0]);
+
+    s = e = argv[1];
+    domid = strtol(s, &e, 0);
+
+    if ( *s == '\0' || *e != '\0' ||
+         domid < 0 || domid >= DOMID_FIRST_RESERVED )
+    {
+        fprintf(stderr, "invalid domid '%s'\n", s);
+        exit(1);
+    }
+
+    if ( argc >= 3 )
+    {
+        s = e = argv[2];
+        typecode = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid typecode '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    if ( argc == 4 )
+    {
+        s = e = argv[3];
+        instance = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid instance '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    xch = xc_interface_open(0, 0, 0);
+    if ( !xch )
+    {
+        fprintf(stderr, "error: can't open libxc handle\n");
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get record length for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+
+    buf = malloc(len);
+    if ( !buf )
+    {
+        fprintf(stderr, "error: can't allocate %zu bytes\n", len);
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, buf, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get domain record for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+    off = 0;
+
+    entry = 0;
+    for ( ; ; )
+    {
+        struct domain_save_descriptor *desc;
+
+        GET_PTR(desc);
+
+        off += sizeof(*desc);
+
+        if ( (typecode < 0 || typecode == desc->typecode) &&
+             (instance < 0 || instance == desc->instance) )
+        {
+            printf("[%u] type: %u instance: %u length: %u\n", entry++,
+                   desc->typecode, desc->instance, desc->length);
+
+            switch (desc->typecode)
+            {
+            case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(END): dump_end(); break;
+            default:
+                printf("Unknown type %u: skipping\n", desc->typecode);
+                break;
+            }
+        }
+
+        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
+            break;
+
+        off += desc->length;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 10:44:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:44:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZBLy-0004Av-8W; Thu, 14 May 2020 10:44:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4jr8=64=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jZBLx-0004Aa-4r
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:44:37 +0000
X-Inumbo-ID: e2303dea-95cf-11ea-a468-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2303dea-95cf-11ea-a468-12813bfff9fa;
 Thu, 14 May 2020 10:44:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=b9T3VeNPVxqhKc11ERGW3Ko1mTMz/vBpR5z7eveF6nE=; b=4gNl2eOwTytWx7+/tUDUjMUR0p
 uBRn4zyHzPlVf9qNouO/wSVuNg4dy+g1rsKv6ka2USlJYHX5kBcuRDgkX2a3GJYwen8YsQuIzf9WE
 BFz9V+zvVn8E25TsnBquXJoyWzSLAqljcD/eGwCNPIizWjsqihnHaIZG+HjAn4mvZJdM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jZBLm-0004aS-Po; Thu, 14 May 2020 10:44:26 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jZBLm-0000sH-Gh; Thu, 14 May 2020 10:44:26 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 4/5] common/domain: add a domain context record for
 shared_info...
Date: Thu, 14 May 2020 11:44:15 +0100
Message-Id: <20200514104416.16657-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514104416.16657-1-paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

... and update xen-domctx to dump some information describing the record.

NOTE: The domain may or may not be using the embedded vcpu_info array, so
      separate context records will ultimately be added for vcpu_info when
      this becomes necessary.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v3:
 - Actually dump some of the content of shared_info

v2:
 - Drop the header change to define a 'Xen' page size and instead use a
   variable-length struct now that the framework makes this feasible
 - Guard use of 'has_32bit_shinfo' in common code with CONFIG_COMPAT
---
 tools/misc/xen-domctx.c   | 73 +++++++++++++++++++++++++++++++++++++++
 xen/common/domain.c       | 60 ++++++++++++++++++++++++++++++++
 xen/include/public/save.h | 11 +++++-
 3 files changed, 143 insertions(+), 1 deletion(-)

diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
index 243325dfce..b2fed5eae7 100644
--- a/tools/misc/xen-domctx.c
+++ b/tools/misc/xen-domctx.c
@@ -31,6 +31,7 @@
 #include <errno.h>
 
 #include <xenctrl.h>
+#include <xen-tools/libs.h>
 #include <xen/xen.h>
 #include <xen/domctl.h>
 #include <xen/save.h>
@@ -61,6 +62,76 @@ static void dump_header(void)
 
 }
 
+static void print_binary(const char *prefix, const void *val, size_t size,
+                         const char *suffix)
+{
+    const uint8_t *p = val;
+    printf("%s", prefix);
+    while ( size-- )
+    {
+        uint8_t octet = *p++;
+        unsigned int i;
+
+        for ( i = 0; i < 8; i++ )
+        {
+            printf("%u", octet & 1);
+            octet >>= 1;
+        }
+    }
+
+    printf("%s", suffix);
+}
+
+static void dump_shared_info(void)
+{
+    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+    shared_info_any_t *info;
+    unsigned int i;
+
+    GET_PTR(s);
+
+    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
+           s->has_32bit_shinfo ? "true" : "false", s->buffer_size);
+
+    info = (shared_info_any_t *)s->buffer;
+
+#define GET_FIELD_PTR(_f) \
+    (s->has_32bit_shinfo ? (void *)&(info->x32._f) : (void *)&(info->x64._f))
+#define GET_FIELD_SIZE(_f) \
+    (s->has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
+#define GET_FIELD(_f) \
+    (s->has_32bit_shinfo ? info->x32._f : info->x64._f)
+
+    /* Array lengths are the same for 32-bit and 64-bit shared info */
+
+    for ( i = 0; i < ARRAY_SIZE(info->x64.evtchn_pending); i++ )
+    {
+        const char *prefix = !i ?
+            "                 evtchn_pending: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
+                     GET_FIELD_SIZE(evtchn_pending[0]), "\n");
+    }
+
+    for ( i = 0; i < ARRAY_SIZE(info->x64.evtchn_mask); i++ )
+    {
+        const char *prefix = !i ?
+            "                    evtchn_mask: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
+                     GET_FIELD_SIZE(evtchn_mask[0]), "\n");
+    }
+
+    printf("                 wc: version: %u sec: %u nsec: %u\n",
+           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));
+
+#undef GET_FIELD
+#undef GET_FIELD_SIZE
+#undef GET_FIELD_PTR
+}
+
 static void dump_end(void)
 {
     DOMAIN_SAVE_TYPE(END) *e;
@@ -167,12 +238,13 @@ int main(int argc, char **argv)
         if ( (typecode < 0 || typecode == desc->typecode) &&
              (instance < 0 || instance == desc->instance) )
         {
             printf("[%u] type: %u instance: %u length: %u\n", entry++,
                    desc->typecode, desc->instance, desc->length);
 
             switch (desc->typecode)
             {
             case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(SHARED_INFO): dump_shared_info(); break;
             case DOMAIN_SAVE_CODE(END): dump_end(); break;
             default:
                 printf("Unknown type %u: skipping\n", desc->typecode);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..e4518cd28d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -33,6 +33,7 @@
 #include <xen/xenoprof.h>
 #include <xen/irq.h>
 #include <xen/argo.h>
+#include <xen/save.h>
 #include <asm/debugger.h>
 #include <asm/p2m.h>
 #include <asm/processor.h>
@@ -1649,6 +1650,65 @@ int continue_hypercall_on_cpu(
     return 0;
 }
 
+static int save_shared_info(const struct domain *d, struct domain_context *c,
+                            bool dry_run)
+{
+    struct domain_shared_info_context ctxt = { .buffer_size = PAGE_SIZE };
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    int rc;
+
+    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
+    if ( rc )
+        return rc;
+
+#ifdef CONFIG_COMPAT
+    if ( !dry_run )
+        ctxt.has_32bit_shinfo = has_32bit_shinfo(d);
+#endif
+
+    rc = domain_save_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+static int load_shared_info(struct domain *d, struct domain_context *c)
+{
+    struct domain_shared_info_context ctxt;
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    unsigned int i;
+    int rc;
+
+    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
+    if ( rc || i ) /* expect only a single instance */
+        return rc;
+
+    rc = domain_load_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    if ( ctxt.pad[0] || ctxt.pad[1] || ctxt.pad[2] ||
+         ctxt.buffer_size != PAGE_SIZE )
+        return -EINVAL;
+
+#ifdef CONFIG_COMPAT
+    d->arch.has_32bit_shinfo = ctxt.has_32bit_shinfo;
+#endif
+
+    rc = domain_load_data(c, d->shared_info, ctxt.buffer_size);
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+DOMAIN_REGISTER_SAVE_RESTORE(SHARED_INFO, save_shared_info, load_shared_info);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
index 834c031c51..2b633cf03d 100644
--- a/xen/include/public/save.h
+++ b/xen/include/public/save.h
@@ -73,7 +73,16 @@ struct domain_save_header {
 };
 DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
 
-#define DOMAIN_SAVE_CODE_MAX 1
+struct domain_shared_info_context {
+    uint8_t has_32bit_shinfo;
+    uint8_t pad[3];
+    uint32_t buffer_size;
+    uint8_t buffer[XEN_FLEX_ARRAY_DIM]; /* Implementation specific size */
+};
+
+DECLARE_DOMAIN_SAVE_TYPE(SHARED_INFO, 2, struct domain_shared_info_context);
+
+#define DOMAIN_SAVE_CODE_MAX 2
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 10:44:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 10:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZBM3-0004DZ-Ks; Thu, 14 May 2020 10:44:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4jr8=64=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jZBM2-0004D4-54
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 10:44:42 +0000
X-Inumbo-ID: e28df61a-95cf-11ea-a468-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e28df61a-95cf-11ea-a468-12813bfff9fa;
 Thu, 14 May 2020 10:44:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=H+ue0sJzFwPzycz7nFrNvQLKDCIIGqxlVEm38yRe3CE=; b=5RIPItSwkvolR+YMiSyg/Gjkkc
 V+fMk6K66aop3yuZEDjPOr1zLQqCnxPLNKeCLuQ9yno1JuminFSDwyNxNRdlzP/hADiGYFjlaUBtt
 dc/3xsJPW8HQFFiX3WMk1iLB+m5IUTBTe6qhjJfZwXlfgVPxX6JjSV6Exg+/KaL8qAPc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jZBLn-0004aa-Qz; Thu, 14 May 2020 10:44:27 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jZBLn-0000sH-I0; Thu, 14 May 2020 10:44:27 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 5/5] tools/libxc: make use of domain context SHARED_INFO
 record...
Date: Thu, 14 May 2020 11:44:16 +0100
Message-Id: <20200514104416.16657-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514104416.16657-1-paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... in the save/restore code.

This patch replaces direct mapping of the shared_info_frame (retrieved
using XEN_DOMCTL_getdomaininfo) with save/load of the domain context
SHARED_INFO record.

No modifications are made to the definition of the migration stream at
this point. Subsequent patches will define a record in the libxc domain
image format for passing domain context and convert the save/restore code
to use that.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>

v3:
 - Moved basic get/set domain context functions to common code

v2:
 - Re-based (now making use of DOMAIN_SAVE_FLAG_IGNORE)
---
 tools/libxc/xc_sr_common.c         | 58 ++++++++++++++++++++++++++++++
 tools/libxc/xc_sr_common.h         | 11 +++++-
 tools/libxc/xc_sr_common_x86_pv.c  | 47 ++++++++++++++++++++++++
 tools/libxc/xc_sr_common_x86_pv.h  |  3 ++
 tools/libxc/xc_sr_restore_x86_pv.c | 40 ++++++++-------------
 tools/libxc/xc_sr_save_x86_pv.c    | 26 ++------------
 tools/libxc/xg_save_restore.h      |  1 +
 7 files changed, 137 insertions(+), 49 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index dd9a11b4b5..5b2b6944f8 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -138,6 +138,64 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
     return 0;
 };
 
+int get_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    size_t len = 0;
+    int rc;
+
+    if ( ctx->domain_context.buffer )
+    {
+        ERROR("Domain context already present");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get size of domain context");
+        return -1;
+    }
+
+    ctx->domain_context.buffer = malloc(len);
+    if ( !ctx->domain_context.buffer )
+    {
+        PERROR("Unable to allocate memory for domain context");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                              &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get domain context");
+        return -1;
+    }
+
+    ctx->domain_context.len = len;
+
+    return 0;
+}
+
+int set_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( !ctx->domain_context.buffer )
+    {
+        ERROR("Domain context not present");
+        return -1;
+    }
+
+    return xc_domain_setcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                                ctx->domain_context.len);
+}
+
+void common_cleanup(struct xc_sr_context *ctx)
+{
+    free(ctx->domain_context.buffer);
+}
+
 static void __attribute__((unused)) build_assertions(void)
 {
     BUILD_BUG_ON(sizeof(struct xc_sr_ihdr) != 24);
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 5dd51ccb15..0d61978b08 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -208,6 +208,11 @@ struct xc_sr_context
 
     xc_dominfo_t dominfo;
 
+    struct {
+        void *buffer;
+        unsigned int len;
+    } domain_context;
+
     union /* Common save or restore data. */
     {
         struct /* Save data. */
@@ -314,7 +319,7 @@ struct xc_sr_context
                 /* The guest pfns containing the p2m leaves */
                 xen_pfn_t *p2m_pfns;
 
-                /* Read-only mapping of guests shared info page */
+                /* Pointer to shared_info (located in context buffer) */
                 shared_info_any_t *shinfo;
 
                 /* p2m generation count for verifying validity of local p2m. */
@@ -425,6 +430,10 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
 int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
                   const xen_pfn_t *original_pfns, const uint32_t *types);
 
+int get_domain_context(struct xc_sr_context *ctx);
+int set_domain_context(struct xc_sr_context *ctx);
+void common_cleanup(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_common_x86_pv.c b/tools/libxc/xc_sr_common_x86_pv.c
index d3d425cb82..5d34144ce8 100644
--- a/tools/libxc/xc_sr_common_x86_pv.c
+++ b/tools/libxc/xc_sr_common_x86_pv.c
@@ -182,6 +182,53 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
     return rc;
 }
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx)
+{
+    unsigned int off = 0;
+    struct domain_save_descriptor *desc;
+    int rc;
+
+    rc = get_domain_context(ctx);
+    if ( rc )
+        return rc;
+
+    do {
+        if ( ctx->domain_context.len - off < sizeof(*desc) )
+            return -1;
+
+        desc = ctx->domain_context.buffer + off;
+        off += sizeof(*desc);
+
+        switch (desc->typecode)
+        {
+        case DOMAIN_SAVE_CODE(SHARED_INFO):
+        {
+            DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+
+            if ( ctx->domain_context.len - off < sizeof(*s) )
+                return -1;
+
+            s = ctx->domain_context.buffer + off;
+            ctx->x86.pv.shinfo = (shared_info_any_t *)s->buffer;
+            /* fall through */
+        }
+        default:
+            off += desc->length;
+            break;
+        }
+    } while ( desc->typecode != DOMAIN_SAVE_CODE(END) );
+
+    if ( !ctx->x86.pv.shinfo )
+        return -1;
+
+    return 0;
+}
+
+int x86_pv_set_shinfo(struct xc_sr_context *ctx)
+{
+    return ctx->x86.pv.shinfo ? set_domain_context(ctx) : -1;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_sr_common_x86_pv.h b/tools/libxc/xc_sr_common_x86_pv.h
index 2ed03309af..01442f48fb 100644
--- a/tools/libxc/xc_sr_common_x86_pv.h
+++ b/tools/libxc/xc_sr_common_x86_pv.h
@@ -97,6 +97,9 @@ int x86_pv_domain_info(struct xc_sr_context *ctx);
  */
 int x86_pv_map_m2p(struct xc_sr_context *ctx);
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx);
+int x86_pv_set_shinfo(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index 904ccc462a..4dbc7f0da5 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -864,8 +864,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
 {
     xc_interface *xch = ctx->xch;
     unsigned int i;
-    int rc = -1;
-    shared_info_any_t *guest_shinfo = NULL;
+    int rc;
     const shared_info_any_t *old_shinfo = rec->data;
 
     if ( !ctx->x86.pv.restore.seen_pv_info )
@@ -878,39 +877,30 @@ static int handle_shared_info(struct xc_sr_context *ctx,
     {
         ERROR("X86_PV_SHARED_INFO record wrong size: length %u"
               ", expected 4096", rec->length);
-        goto err;
+        return -1;
     }
 
-    guest_shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
-        ctx->dominfo.shared_info_frame);
-    if ( !guest_shinfo )
-    {
-        PERROR("Failed to map Shared Info at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        goto err;
-    }
+    rc = x86_pv_get_shinfo(ctx);
+    if ( rc )
+        return rc;
 
-    MEMCPY_FIELD(guest_shinfo, old_shinfo, vcpu_info, ctx->x86.pv.width);
-    MEMCPY_FIELD(guest_shinfo, old_shinfo, arch, ctx->x86.pv.width);
+    MEMCPY_FIELD(ctx->x86.pv.shinfo, old_shinfo, vcpu_info,
+                 ctx->x86.pv.width);
+    MEMCPY_FIELD(ctx->x86.pv.shinfo, old_shinfo, arch, ctx->x86.pv.width);
 
-    SET_FIELD(guest_shinfo, arch.pfn_to_mfn_frame_list_list,
+    SET_FIELD(ctx->x86.pv.shinfo, arch.pfn_to_mfn_frame_list_list,
               0, ctx->x86.pv.width);
 
-    MEMSET_ARRAY_FIELD(guest_shinfo, evtchn_pending, 0, ctx->x86.pv.width);
+    MEMSET_ARRAY_FIELD(ctx->x86.pv.shinfo, evtchn_pending, 0,
+                       ctx->x86.pv.width);
     for ( i = 0; i < XEN_LEGACY_MAX_VCPUS; i++ )
-        SET_FIELD(guest_shinfo, vcpu_info[i].evtchn_pending_sel,
+        SET_FIELD(ctx->x86.pv.shinfo, vcpu_info[i].evtchn_pending_sel,
                   0, ctx->x86.pv.width);
 
-    MEMSET_ARRAY_FIELD(guest_shinfo, evtchn_mask, 0xff, ctx->x86.pv.width);
-
-    rc = 0;
+    MEMSET_ARRAY_FIELD(ctx->x86.pv.shinfo, evtchn_mask, 0xff,
+                       ctx->x86.pv.width);
 
- err:
-    if ( guest_shinfo )
-        munmap(guest_shinfo, PAGE_SIZE);
-
-    return rc;
+    return x86_pv_set_shinfo(ctx);
 }
 
 /* restore_ops function. */
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index f3ccf5bb4b..0f73c30dbf 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -9,25 +9,6 @@ static inline bool is_canonical_address(xen_vaddr_t vaddr)
     return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
 }
 
-/*
- * Maps the guests shared info page.
- */
-static int map_shinfo(struct xc_sr_context *ctx)
-{
-    xc_interface *xch = ctx->xch;
-
-    ctx->x86.pv.shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
-    if ( !ctx->x86.pv.shinfo )
-    {
-        PERROR("Failed to map shared info frame at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        return -1;
-    }
-
-    return 0;
-}
-
 /*
  * Copy a list of mfns from a guest, accounting for differences between guest
  * and toolstack width.  Can fail if truncation would occur.
@@ -1041,7 +1022,7 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
     if ( rc )
         return rc;
 
-    rc = map_shinfo(ctx);
+    rc = x86_pv_get_shinfo(ctx);
     if ( rc )
         return rc;
 
@@ -1112,12 +1093,11 @@ static int x86_pv_cleanup(struct xc_sr_context *ctx)
     if ( ctx->x86.pv.p2m )
         munmap(ctx->x86.pv.p2m, ctx->x86.pv.p2m_frames * PAGE_SIZE);
 
-    if ( ctx->x86.pv.shinfo )
-        munmap(ctx->x86.pv.shinfo, PAGE_SIZE);
-
     if ( ctx->x86.pv.m2p )
         munmap(ctx->x86.pv.m2p, ctx->x86.pv.nr_m2p_frames * PAGE_SIZE);
 
+    common_cleanup(ctx);
+
     return 0;
 }
 
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index 303081df0d..296b523963 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -19,6 +19,7 @@
 
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
+#include <xen/save.h>
 
 /*
 ** We process save/restore/migrate in batches of pages; the below
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 11:03:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 11:03:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZBdP-0006Kd-9e; Thu, 14 May 2020 11:02:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZBdO-0006KY-8s
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 11:02:38 +0000
X-Inumbo-ID: 6b8f88aa-95d2-11ea-b9cf-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b8f88aa-95d2-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 11:02:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589454157;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=2lJRdDZAL09uSXwNG4aY919xa6S1TgFHXfBN9OF5Fes=;
 b=iWNpWPjdYOsrjpOE9GLVyPnfxopzYpzPB0ZX1a5VN1ae9wQ1b4y0T7LD
 AVjLrZMeZ83yncKF+ngPety2A3fKUG7jdvN3ALwgsl1qmHqdAIAr8r2X8
 YBuFCH4WNU2IOjV1eM4mTDPnRBzrK4lyOOi1g33cPZ3wtmKixTNYrXk0y s=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: ycrrxEam5/OvF+NUPGbXxJXAErZ/L/1PX0JzEgACgk87l092NO3DIfuvmW9B6v54Uih8O7WUu2
 eBeK5v1aQEE3RMyA3RUPEckdk5aQ4EnzqBtvckGGRiA52+xx0p/fqfVWkDLs7hkyxxi2sNv+2a
 UqnZ45fUUBFk5OsEmN0eWzsw8ghJkAnGPzJnk4FePve/srXdAmFWw47ZX/zIqg237ProuKzVLD
 QCwhr45zCTXoDfAyE88RZXTw2HhWNalafthnj+pE9k1D3X29LwQf+HuKHT8ZnzR6a/qPe/4WAK
 C/s=
X-SBRS: 2.7
X-MesageID: 17777103
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17777103"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.9543.974853.499775@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 12:02:31 +0100
To: Elliott Mitchell <ehem+xen@m5p.com>
Subject: Re: use of "stat -"
In-Reply-To: <20200512225458.GA1530@mattapan.m5p.com>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
 <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
 <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
 <20200512195005.GA96154@mattapan.m5p.com>
 <049e0022-f9c1-6dc9-3360-d25d88eeb97f@citrix.com>
 <20200512225458.GA1530@mattapan.m5p.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jason Andryuk <jandryuk@gmail.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

I've read this thread.  Jan, I'm sorry that this causes you
inconvenience.  I'm hoping it won't come down to a choice between
supporting people who want to ship a dom0 without perl, and people who
want a dom0 using more-than-a-decade-old coreutils.

Jan, can you tell me what the output is of this on your ancient
system:

  $ rm -f t
  $ >t
  $ exec 3<t
  $ stat -L -c '%F %i' /dev/stdin <&3
  regular empty file 393549
  $ rm t
  $ stat -L -c '%F %i' /dev/stdin <&3
  regular empty file 393549
  $ strace -ou stat -L -c '%F %i' /dev/stdin <&3
  $

Also, the contents of the file "u" afterwards, please.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 11:48:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 11:48:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZCLg-0001Rc-T2; Thu, 14 May 2020 11:48:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZCLg-0001RX-0y
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 11:48:24 +0000
X-Inumbo-ID: cffdd624-95d8-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cffdd624-95d8-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 11:48:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0C1D8AE0F;
 Thu, 14 May 2020 11:48:23 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] xen: add runtime parameter access support to
 hypfs
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-10-jgross@suse.com>
 <a6c10680-d570-dabb-61ad-627591d08b0e@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <76ed2db5-6091-959a-8224-0a77e9cc4c45@suse.com>
Date: Thu, 14 May 2020 13:48:19 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <a6c10680-d570-dabb-61ad-627591d08b0e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 12:20, Jan Beulich wrote:
> On 08.05.2020 17:34, Juergen Gross wrote:
>> --- a/xen/arch/arm/xen.lds.S
>> +++ b/xen/arch/arm/xen.lds.S
>> @@ -89,6 +89,13 @@ SECTIONS
>>          __start_schedulers_array = .;
>>          *(.data.schedulers)
>>          __end_schedulers_array = .;
>> +
>> +#ifdef CONFIG_HYPFS
>> +       . = ALIGN(8);
>> +       __paramhypfs_start = .;
>> +       *(.data.paramhypfs)
>> +       __paramhypfs_end = .;
>> +#endif
>>          *(.data.rel)
>>          *(.data.rel.*)
>>          CONSTRUCTORS
> 
> I'm not the maintainer of this code, but I think it would be better
> if there was either no blank line inserted, or two (a 2nd one after
> your insertion).

Okay (either one).

> 
>> --- a/xen/arch/x86/pv/domain.c
>> +++ b/xen/arch/x86/pv/domain.c
>> @@ -52,9 +52,27 @@ static __read_mostly enum {
>>       PCID_OFF,
>>       PCID_ALL,
>>       PCID_XPTI,
>> -    PCID_NOXPTI
>> +    PCID_NOXPTI,
>> +    PCID_END
>>   } opt_pcid = PCID_XPTI;
> 
> Is this change really needed? The only use looks to be ...
> 
>> +#ifdef CONFIG_HYPFS
>> +static const char opt_pcid_2_string[PCID_END][7] = {
> 
> ... here, yet the array would end up the same when using [][7].

Hmm, true.

> 
>> --- a/xen/common/grant_table.c
>> +++ b/xen/common/grant_table.c
>> @@ -85,8 +85,43 @@ struct grant_table {
>>       struct grant_table_arch arch;
>>   };
>>   
>> -static int parse_gnttab_limit(const char *param, const char *arg,
>> -                              unsigned int *valp)
>> +unsigned int __read_mostly opt_max_grant_frames = 64;
>> +static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
>> +
>> +#ifdef CONFIG_HYPFS
>> +#define GRANT_CUSTOM_VAL_SZ  12
>> +static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
>> +static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
>> +
>> +static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
>> +                              char *parval)
>> +{
>> +    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
>> +    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
>> +}
>> +
>> +static void __init gnttab_max_frames_init(struct param_hypfs *par)
>> +{
>> +    update_gnttab_par(par, opt_max_grant_frames, opt_max_grant_frames_val);
>> +}
>> +
>> +static void __init max_maptrack_frames_init(struct param_hypfs *par)
>> +{
>> +    update_gnttab_par(par, opt_max_maptrack_frames,
>> +                      opt_max_maptrack_frames_val);
>> +}
>> +#else
>> +#define opt_max_grant_frames_val    NULL
>> +#define opt_max_maptrack_frames_val NULL
> 
> This looks latently dangerous to me (in case new uses of these
> two identifiers appeared), but I guess my alternative suggestion
> will be at best controversial, too:
> 
> #define update_gnttab_par(par, val, unused) update_gnttab_par(par, val)
> #define parse_gnttab_limit(par, arg, valp, unused) parse_gnttab_limit(par, arg, valp)
> 
> (placed right here)

Who else would use those identifiers not related to hypfs?

> 
>> @@ -281,6 +282,36 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
>>       return 0;
>>   }
>>   
>> +int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
>> +                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
>> +{
>> +    struct param_hypfs *p;
>> +    char *buf;
>> +    int ret;
>> +
>> +    buf = xzalloc_array(char, ulen);
>> +    if ( !buf )
>> +        return -ENOMEM;
>> +
>> +    ret = -EFAULT;
>> +    if ( copy_from_guest(buf, uaddr, ulen) )
> 
> As just indicated in an extra reply to patch 4, ulen not getting
> truncated here silently is well obscured (the max_size field type
> and the check against it elsewhere looks to guarantee this).

Yes, will change ulen to unsigned int.

> 
>> +        goto out;
>> +
>> +    ret = -EDOM;
>> +    if ( memchr(buf, 0, ulen) != (buf + ulen - 1) )
>> +        goto out;
>> +
>> +    p = container_of(leaf, struct param_hypfs, hypfs);
>> +    ret = p->param->par.func(buf);
>> +
>> +    if ( !ret )
>> +        leaf->e.size = ulen;
> 
> Why? For "ept", "no-exec-sp" would yield "exec-sp=0", and hence
> you'd wrongly extend the size from what parse_ept_param_runtime()
> has already set through custom_runtime_set_var(). It looks to me
> as if there's no reason to update e.size here at all; it's the
> par.func() handlers which need to take care of this.

Oh, indeed. Thanks for catching.

> 
>> --- a/xen/drivers/char/console.c
>> +++ b/xen/drivers/char/console.c
>> @@ -75,12 +75,35 @@ enum con_timestamp_mode
>>       TSM_DATE_MS,       /* [YYYY-MM-DD HH:MM:SS.mmm] */
>>       TSM_BOOT,          /* [SSSSSS.uuuuuu] */
>>       TSM_RAW,           /* [XXXXXXXXXXXXXXXX] */
>> +    TSM_END
>>   };
> 
> Just like for the PCID enumeration I don't think a sentinel is
> needed here.

Yes.

> 
>>   static enum con_timestamp_mode __read_mostly opt_con_timestamp_mode = TSM_NONE;
>>   
>> +#ifdef CONFIG_HYPFS
>> +static const char con_timestamp_mode_2_string[TSM_END][7] = {
>> +    [TSM_NONE] = "none",
>> +    [TSM_DATE] = "date",
>> +    [TSM_DATE_MS] = "datems",
>> +    [TSM_BOOT] = "boot",
>> +    [TSM_RAW] = "raw"
> 
> Add a trailing comma please (and as I notice only now then also
> in the similar PCID array).
> 
> To the subsequent code the gnttab comment applies as well.
> 
>> @@ -80,7 +81,120 @@ extern const struct kernel_param __param_start[], __param_end[];
>>   
>>   #define __rtparam         __param(__dataparam)
>>   
>> -#define custom_runtime_only_param(_name, _var) \
>> +#ifdef CONFIG_HYPFS
>> +
>> +struct param_hypfs {
>> +    const struct kernel_param *param;
>> +    struct hypfs_entry_leaf hypfs;
>> +    void (*init_leaf)(struct param_hypfs *par);
>> +};
>> +
>> +extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
>> +
>> +#define __paramhypfs      __used_section(".data.paramhypfs")
>> +
>> +#define __paramfs         static __paramhypfs \
>> +    __attribute__((__aligned__(sizeof(void *)))) struct param_hypfs
> 
> Why the attribute?

Oh, copy and paste leftover.

> 
>> +#define custom_runtime_set_var_sz(parfs, var, sz) \
>> +    { \
>> +        (parfs)->hypfs.content = var; \
>> +        (parfs)->hypfs.e.max_size = sz; \
> 
> var and sz want parentheses around them.

Fine with me, but out of curiosity: what can go wrong without? Are
you thinking of multi-statement arguments?

> 
>> +        (parfs)->hypfs.e.size = strlen(var) + 1; \
>> +    }
>> +#define custom_runtime_set_var(parfs, var) \
>> +    custom_runtime_set_var_sz(parfs, var, sizeof(var))
>> +
>> +#define param_2_parfs(par) &__parfs_##par
>> +
>> +/* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
>> +#define custom_runtime_only_param(_name, _var, initfunc) \
> 
> I've started noticing it here, but the issue exists further up
> (and down) as well - please can you avoid identifiers with
> leading underscores that are in violation of the C standard?
> Even more so that here you're not even consistent across
> macro parameter names.

Basically I only extended the macro to take another parameter and I
omitted the underscore exactly for the reason you mentioned.

In case you like it better I can prepend a patch to the series dropping
all the leading single underscores from the macros in param.h.

> 
>> +    __rtparam __rtpar_##_var = \
>> +      { .name = _name, \
>> +          .type = OPT_CUSTOM, \
>> +          .par.func = _var }; \
>> +    __paramfs __parfs_##_var = \
>> +        { .param = &__rtpar_##_var, \
>> +          .init_leaf = initfunc, \
>> +          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
>> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
>> +          .hypfs.e.name = _name, \
>> +          .hypfs.e.read = hypfs_read_leaf, \
>> +          .hypfs.e.write = hypfs_write_custom }
>> +#define boolean_runtime_only_param(_name, _var) \
>> +    __rtparam __rtpar_##_var = \
>> +        { .name = _name, \
>> +          .type = OPT_BOOL, \
>> +          .len = sizeof(_var) + \
>> +                 BUILD_BUG_ON_ZERO(sizeof(_var) != sizeof(bool)), \
>> +          .par.var = &_var }; \
>> +    __paramfs __parfs_##_var = \
>> +        { .param = &__rtpar_##_var, \
>> +          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
>> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
>> +          .hypfs.e.name = _name, \
>> +          .hypfs.e.size = sizeof(_var), \
>> +          .hypfs.e.max_size = sizeof(_var), \
>> +          .hypfs.e.read = hypfs_read_leaf, \
>> +          .hypfs.e.write = hypfs_write_bool, \
>> +          .hypfs.content = &_var }
>> +#define integer_runtime_only_param(_name, _var) \
>> +    __rtparam __rtpar_##_var = \
>> +        { .name = _name, \
>> +          .type = OPT_UINT, \
>> +          .len = sizeof(_var), \
>> +          .par.var = &_var }; \
>> +    __paramfs __parfs_##_var = \
>> +        { .param = &__rtpar_##_var, \
>> +          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
>> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
>> +          .hypfs.e.name = _name, \
>> +          .hypfs.e.size = sizeof(_var), \
>> +          .hypfs.e.max_size = sizeof(_var), \
>> +          .hypfs.e.read = hypfs_read_leaf, \
>> +          .hypfs.e.write = hypfs_write_leaf, \
>> +          .hypfs.content = &_var }
>> +#define size_runtime_only_param(_name, _var) \
>> +    __rtparam __rtpar_##_var = \
>> +        { .name = _name, \
>> +          .type = OPT_SIZE, \
>> +          .len = sizeof(_var), \
>> +          .par.var = &_var }; \
>> +    __paramfs __parfs_##_var = \
>> +        { .param = &__rtpar_##_var, \
>> +          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
>> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
>> +          .hypfs.e.name = _name, \
>> +          .hypfs.e.size = sizeof(_var), \
>> +          .hypfs.e.max_size = sizeof(_var), \
>> +          .hypfs.e.read = hypfs_read_leaf, \
>> +          .hypfs.e.write = hypfs_write_leaf, \
>> +          .hypfs.content = &_var }
>> +#define string_runtime_only_param(_name, _var) \
>> +    __rtparam __rtpar_##_var = \
>> +        { .name = _name, \
>> +          .type = OPT_STR, \
>> +          .len = sizeof(_var), \
>> +          .par.var = &_var }; \
>> +    __paramfs __parfs_##_var = \
>> +        { .param = &__rtpar_##_var, \
>> +          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
>> +          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
>> +          .hypfs.e.name = _name, \
>> +          .hypfs.e.size = sizeof(_var), \
> 
> Is this really correct here?

Hmm, right, 0 might be the better choice here, even if it shouldn't
really matter.

> 
>> +          .hypfs.e.max_size = sizeof(_var), \
>> +          .hypfs.e.read = hypfs_read_leaf, \
>> +          .hypfs.e.write = hypfs_write_leaf, \
>> +          .hypfs.content = &_var }
>> +
>> +#else
>> +
>> +struct param_hypfs {
>> +};
>> +
>> +#define param_2_parfs(par)  NULL
> 
> Along the lines of the earlier comment, this looks latently dangerous.

In which way? How can the empty struct be dereferenced in a way not
resulting in build time errors, other than using a cast which would
be obviously wrong for the standard case when CONFIG_HYPFS is defined?


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 11:54:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 11:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZCR2-0002Hn-M7; Thu, 14 May 2020 11:53:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZCR1-0002Hi-7p
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 11:53:55 +0000
X-Inumbo-ID: 95e9eabc-95d9-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95e9eabc-95d9-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 11:53:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yHtm0Br3dVEN10L+rFVjdqTt7XacStVEYvXvjBRgJj8=; b=QZf5bGTzzbi3vtSZWpJpOzai8
 BnV0v/7ygKn9hCaBjYH8ol7Hu0jOJsWmwCNhqoP2q/58BIj/QMuTd7IpmsU5N65eXouEcaJ04JTU9
 jxk/9PTFBlG2qgCEkv+nEMFx7U+2UQZTT1Pr0Al1g+yZbwl5LPwrEdn1ZazaMa2ZY9/+E=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZCR0-00063O-80; Thu, 14 May 2020 11:53:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZCR0-0004sw-0p; Thu, 14 May 2020 11:53:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZCR0-0002EI-09; Thu, 14 May 2020 11:53:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150173-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150173: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f8644fe441abfd8de8b1f237229cfbe600a58701
X-Osstest-Versions-That: xen=b539eeffc737d859dd1814c2e529e0ed0feba7a7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 11:53:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150173 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150173/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f8644fe441abfd8de8b1f237229cfbe600a58701
baseline version:
 xen                  b539eeffc737d859dd1814c2e529e0ed0feba7a7

Last test of basis   150171  2020-05-14 06:00:58 Z    0 days
Testing same since   150173  2020-05-14 09:01:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b539eeffc7..f8644fe441  f8644fe441abfd8de8b1f237229cfbe600a58701 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 14 11:58:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 11:58:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZCVc-0002UT-92; Thu, 14 May 2020 11:58:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZCVb-0002UO-9R
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 11:58:39 +0000
X-Inumbo-ID: 3f102bc4-95da-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f102bc4-95da-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 11:58:38 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8C5E5ABC7;
 Thu, 14 May 2020 11:58:39 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
 <db277779-5b1e-a2aa-3948-9e6dd8e8bef0@suse.com>
 <23938228-e947-fe36-8b19-0e89886db9ac@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ef7d7ea1-e2ba-5f5f-5817-b7c29bc33f11@suse.com>
Date: Thu, 14 May 2020 13:58:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <23938228-e947-fe36-8b19-0e89886db9ac@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 11:50, Jürgen Groß wrote:
> On 14.05.20 09:59, Jan Beulich wrote:
>> On 08.05.2020 17:34, Juergen Gross wrote:
>>> +#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
>>> +    struct hypfs_entry_leaf __read_mostly var = {        \
>>> +        .e.type = typ,                                   \
>>> +        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
>>> +        .e.name = nam,                                   \
>>> +        .e.size = sizeof(contvar),                       \
>>> +        .e.max_size = wr ? sizeof(contvar) : 0,          \
>>> +        .e.read = hypfs_read_leaf,                       \
>>> +        .e.write = wr,                                   \
>>> +        .content = &contvar,                             \
>>> +    }
>>
>> At the example of this, some of the macros look like they want
>> parentheses added around uses of some of their parameters.
> 
> Hmm, which ones? As I've understood from previous patch reviews by you,
> you only want those parameters with parentheses where they are really
> needed.
> 
> - var is a plain variable, so no parentheses
> - typ _should_ be a XEN_HYPFS_TYPE_* define, so probably no parentheses
>   (its usage in the macro doesn't call for using parentheses anyway)
> - nam might be a candidate, but I can't come up with a reason to put it
>   in parentheses here
> - contvar has to be a variable (otherwise sizeof(contvar) wouldn't
>   work), so no parentheses
> - wr is a function pointer or NULL, so no parentheses

You have a point for uses as initializers, as there's no lower
precedence operator than the assignment ones except comma,
which would need parenthesizing in the macro invocation already.
However, I disagree on what you say about contvar and wr -
contvar is expected, but not required to be just an identifier.
And wr in turn is expected but not required to be an identifier
or NULL. I.e. the respective two lines where I think parentheses
can't be avoided are

        .e.max_size = (wr) ? sizeof(contvar) : 0,

and

        .content = &(contvar),

.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 12:10:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 12:10:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZCgr-0004Bz-R8; Thu, 14 May 2020 12:10:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZCgq-0004Bu-Nk
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 12:10:16 +0000
X-Inumbo-ID: de9aa5b0-95db-11ea-a480-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de9aa5b0-95db-11ea-a480-12813bfff9fa;
 Thu, 14 May 2020 12:10:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DFB22B145;
 Thu, 14 May 2020 12:10:16 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] xen: add runtime parameter access support to
 hypfs
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-10-jgross@suse.com>
 <a6c10680-d570-dabb-61ad-627591d08b0e@suse.com>
 <76ed2db5-6091-959a-8224-0a77e9cc4c45@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <76cf4476-f8b8-dc44-9e68-bfa92a3fcd2a@suse.com>
Date: Thu, 14 May 2020 14:10:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <76ed2db5-6091-959a-8224-0a77e9cc4c45@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 13:48, Jürgen Groß wrote:
> On 14.05.20 12:20, Jan Beulich wrote:
>> On 08.05.2020 17:34, Juergen Gross wrote:
>>> --- a/xen/common/grant_table.c
>>> +++ b/xen/common/grant_table.c
>>> @@ -85,8 +85,43 @@ struct grant_table {
>>>       struct grant_table_arch arch;
>>>   };
>>>   -static int parse_gnttab_limit(const char *param, const char *arg,
>>> -                              unsigned int *valp)
>>> +unsigned int __read_mostly opt_max_grant_frames = 64;
>>> +static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
>>> +
>>> +#ifdef CONFIG_HYPFS
>>> +#define GRANT_CUSTOM_VAL_SZ  12
>>> +static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
>>> +static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
>>> +
>>> +static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
>>> +                              char *parval)
>>> +{
>>> +    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
>>> +    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
>>> +}
>>> +
>>> +static void __init gnttab_max_frames_init(struct param_hypfs *par)
>>> +{
>>> +    update_gnttab_par(par, opt_max_grant_frames, opt_max_grant_frames_val);
>>> +}
>>> +
>>> +static void __init max_maptrack_frames_init(struct param_hypfs *par)
>>> +{
>>> +    update_gnttab_par(par, opt_max_maptrack_frames,
>>> +                      opt_max_maptrack_frames_val);
>>> +}
>>> +#else
>>> +#define opt_max_grant_frames_val    NULL
>>> +#define opt_max_maptrack_frames_val NULL
>>
>> This looks latently dangerous to me (in case new uses of these
>> two identifiers appeared), but I guess my alternative suggestion
>> will be at best controversial, too:
>>
>> #define update_gnttab_par(par, val, unused) update_gnttab_par(par, val)
>> #define parse_gnttab_limit(par, arg, valp, unused) parse_gnttab_limit(par, arg, valp)
>>
>> (placed right here)
> 
> Who else would use those identifiers not related to hypfs?

I can't see an obvious possible use, but people get creative, i.e.
you never know. Passing NULL into a function without it being
blindingly obvious that it won't ever get (de)referenced is an at
least theoretical risk imo.

>>> +#define custom_runtime_set_var_sz(parfs, var, sz) \
>>> +    { \
>>> +        (parfs)->hypfs.content = var; \
>>> +        (parfs)->hypfs.e.max_size = sz; \
>>
>> var and sz want parentheses around them.
> 
> Fine with me, but out of curiosity: what can go wrong without? Are
> you thinking of multi-statement arguments?

Well, as just said in the reply on the patch 4 thread, you have
a point about this being the right side of an assignment.
Nevertheless such uses would look more consistent if parenthesized.
The only cases where I see it reasonable to omit parentheses are
- parameters made the operands of # or ##,
- parameters handed on to further macros / functions unaltered,
- parameters representing struct/union field names,
- perhaps other special cases along the lines of the above that
  I can't think of right now.

>>> +/* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
>>> +#define custom_runtime_only_param(_name, _var, initfunc) \
>>
>> I've started noticing it here, but the issue exists further up
>> (and down) as well - please can you avoid identifiers with
>> leading underscores that are in violation of the C standard?
>> Even more so that here you're not even consistent across
>> macro parameter names.
> 
> Basically I only extended the macro to take another parameter and I
> omitted the underscore exactly for the reason you mentioned.
> 
> In case you like it better I can prepend a patch to the series dropping
> all the leading single underscores from the macros in param.h.

That's code churn I don't view as strictly necessary - adjusting
these can be done when they get touched anyway. But here we have
a whole new body of code.

>>> +#define param_2_parfs(par)  NULL
>>
>> Along the lines of the earlier comment, this looks latently dangerous.
> 
> In which way? How can the empty struct be dereferenced in a way not
> resulting in build time errors, other than using a cast which would
> be obviously wrong for the standard case when CONFIG_HYPFS is defined?

Have the result of the macro passed to a function taking void *, e.g.
memcpy().

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 12:12:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 12:12:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZCig-0004H7-71; Thu, 14 May 2020 12:12:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZCie-0004H2-OY
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 12:12:08 +0000
X-Inumbo-ID: 2169f896-95dc-11ea-a480-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2169f896-95dc-11ea-a480-12813bfff9fa;
 Thu, 14 May 2020 12:12:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 06317B143;
 Thu, 14 May 2020 12:12:08 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
 <db277779-5b1e-a2aa-3948-9e6dd8e8bef0@suse.com>
 <23938228-e947-fe36-8b19-0e89886db9ac@suse.com>
 <ef7d7ea1-e2ba-5f5f-5817-b7c29bc33f11@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <3ff40c34-e2ad-a821-a3ee-2222224cfece@suse.com>
Date: Thu, 14 May 2020 14:12:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <ef7d7ea1-e2ba-5f5f-5817-b7c29bc33f11@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 13:58, Jan Beulich wrote:
> On 14.05.2020 11:50, Jürgen Groß wrote:
>> On 14.05.20 09:59, Jan Beulich wrote:
>>> On 08.05.2020 17:34, Juergen Gross wrote:
>>>> +#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
>>>> +    struct hypfs_entry_leaf __read_mostly var = {        \
>>>> +        .e.type = typ,                                   \
>>>> +        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
>>>> +        .e.name = nam,                                   \
>>>> +        .e.size = sizeof(contvar),                       \
>>>> +        .e.max_size = wr ? sizeof(contvar) : 0,          \
>>>> +        .e.read = hypfs_read_leaf,                       \
>>>> +        .e.write = wr,                                   \
>>>> +        .content = &contvar,                             \
>>>> +    }
>>>
>>> At the example of this, some of the macros look like they want
>>> parentheses added around uses of some of their parameters.
>>
>> Hmm, which ones? As I've understood from previous patch reviews by you,
>> you only want those parameters with parentheses where they are really
>> needed.
>>
>> - var is a plain variable, so no parentheses
>> - typ _should_ be a XEN_HYPFS_TYPE_* define, so probably no parentheses
>>    (its usage in the macro doesn't call for using parentheses anyway)
>> - nam might be a candidate, but I can't come up with a reason to put it
>>    in parentheses here
>> - contvar has to be a variable (otherwise sizeof(contvar) wouldn't
>>    work), so no parentheses
>> - wr is a function pointer or NULL, so no parentheses
> 
> You have a point for uses as initializers, as there's no lower
> precedence operator than the assignment ones except comma,
> which would need parenthesizing in the macro invocation already.
> However, I disagree on what you say about contvar and wr -
> contvar is expected, but not required to be just an identifier.
> And wr in turn is expected but not required to be an identifier
> or NULL. I.e. the respective two lines where I think parentheses
> can't be avoided are
> 
>          .e.max_size = (wr) ? sizeof(contvar) : 0,
> 
> and
> 
>          .content = &(contvar),

Okay.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 12:31:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 12:31:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZD1V-00063z-TR; Thu, 14 May 2020 12:31:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZD1U-00063u-KD
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 12:31:36 +0000
X-Inumbo-ID: d986612e-95de-11ea-a482-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d986612e-95de-11ea-a482-12813bfff9fa;
 Thu, 14 May 2020 12:31:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 553DCAC22;
 Thu, 14 May 2020 12:31:37 +0000 (UTC)
Subject: Re: [PATCH 4/4] x86/APIC: restrict certain messages to BSP
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <60130f14-3fc5-e40d-fec6-2448fefa6fc4@suse.com>
 <513e4f93-a8a0-ae72-abcc-aa28531eca97@suse.com>
 <20200514100145.GA54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c4ac126d-6f37-4c93-7189-35128bcd3e04@suse.com>
Date: Thu, 14 May 2020 14:31:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514100145.GA54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 12:01, Roger Pau Monné wrote:
> On Fri, Mar 13, 2020 at 10:26:47AM +0100, Jan Beulich wrote:
>> All CPUs get an equal setting of EOI broadcast suppression; no need to
>> log one message per CPU, even if it's only in verbose APIC mode.
>>
>> Only the BSP is eligible to possibly get ExtINT enabled; no need to log
>> that it gets disabled on all APs, even if - again - it's only in verbose
>> APIC mode.
>>
>> Take the opportunity and introduce a "bsp" parameter to the function, to
>> stop using smp_processor_id() to tell BSP from APs.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> LGTM:
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> AFAICT this doesn't introduce any functional change in APIC setup or
> behavior, the only functional change is the log message reduction.
> Might be good to add a note to that effect to make this clear, since
> the change from smp_processor_id() -> bsp might make this not obvious.

I've added "No functional change from this" to the last paragraph.

>> --- a/xen/arch/x86/apic.c
>> +++ b/xen/arch/x86/apic.c
>> @@ -499,7 +499,7 @@ static void resume_x2apic(void)
>>      __enable_x2apic();
>>  }
>>  
>> -void setup_local_APIC(void)
>> +void setup_local_APIC(bool bsp)
>>  {
>>      unsigned long oldvalue, value, maxlvt;
>>      int i, j;
>> @@ -598,8 +598,8 @@ void setup_local_APIC(void)
>>      if ( directed_eoi_enabled )
>>      {
>>          value |= APIC_SPIV_DIRECTED_EOI;
>> -        apic_printk(APIC_VERBOSE, "Suppress EOI broadcast on CPU#%d\n",
>> -                    smp_processor_id());
>> +        if ( bsp )
>> +            apic_printk(APIC_VERBOSE, "Suppressing EOI broadcast\n");
>>      }
>>  
>>      apic_write(APIC_SPIV, value);
>> @@ -615,21 +615,22 @@ void setup_local_APIC(void)
>>       * TODO: set up through-local-APIC from through-I/O-APIC? --macro
>>       */
>>      value = apic_read(APIC_LVT0) & APIC_LVT_MASKED;
>> -    if (!smp_processor_id() && (pic_mode || !value)) {
>> +    if (bsp && (pic_mode || !value)) {
>>          value = APIC_DM_EXTINT;
>>          apic_printk(APIC_VERBOSE, "enabled ExtINT on CPU#%d\n",
>>                      smp_processor_id());
>>      } else {
>>          value = APIC_DM_EXTINT | APIC_LVT_MASKED;
>> -        apic_printk(APIC_VERBOSE, "masked ExtINT on CPU#%d\n",
>> -                    smp_processor_id());
>> +        if (bsp)
>> +            apic_printk(APIC_VERBOSE, "masked ExtINT on CPU#%d\n",
>> +                        smp_processor_id());
> 
> You might want to also drop the CPU#%d from the above messages, since
> they would only be printed for the BSP.

I want to specifically keep them, so that once (if ever) we introduce
hot-unplug support for the BSP, the same or similar messages can be
used and matched against earlier ones in the log.

>>      }
>>      apic_write(APIC_LVT0, value);
>>  
>>      /*
>>       * only the BP should see the LINT1 NMI signal, obviously.
>>       */
>> -    if (!smp_processor_id())
>> +    if (bsp)
>>          value = APIC_DM_NMI;
>>      else
>>          value = APIC_DM_NMI | APIC_LVT_MASKED;
> 
> This would be shorter as:
> 
> value = APIC_DM_NMI | (bsp ? 0 : APIC_LVT_MASKED);

Indeed, at the expense of larger code churn. Seems like an at least
partially unrelated change to me (at risk of obscuring the actual
purpose of the change here).

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 12:39:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 12:39:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZD8p-0006Ky-OU; Thu, 14 May 2020 12:39:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZD8p-0006Ks-7u
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 12:39:11 +0000
X-Inumbo-ID: e8ba5014-95df-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8ba5014-95df-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 12:39:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 52B5FAC2C;
 Thu, 14 May 2020 12:39:12 +0000 (UTC)
Subject: Re: use of "stat -"
To: Ian Jackson <ian.jackson@citrix.com>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
 <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
 <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
 <20200512195005.GA96154@mattapan.m5p.com>
 <049e0022-f9c1-6dc9-3360-d25d88eeb97f@citrix.com>
 <20200512225458.GA1530@mattapan.m5p.com>
 <24253.9543.974853.499775@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <28dd2890-cfd5-ec99-af47-7bdd3cbc75e4@suse.com>
Date: Thu, 14 May 2020 14:39:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24253.9543.974853.499775@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jason Andryuk <jandryuk@gmail.com>,
 Elliott Mitchell <ehem+xen@m5p.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 13:02, Ian Jackson wrote:
> I've read this thread.  Jan, I'm sorry that this causes you
> inconvenience.  I'm hoping it won't come down to a choice between
> supporting people who want to ship a dom0 without perl, and people who
> want a dom0 using more-than-a-decade-old coreutils.

Well, there are options, like producing the actual script from
a locking.sh.in template, with a configure control over whether
the Perl fallback is needed / wanted.

> Jan, can you tell me what the output is of this on your ancient
> system:
> 
>   $ rm -f t
>   $ >t
>   $ exec 3<t
>   $ stat -L -c '%F %i' /dev/stdin <&3
>   regular empty file 393549
>   $ rm t
>   $ stat -L -c '%F %i' /dev/stdin <&3
>   regular empty file 393549
>   $ strace -ou stat -L -c '%F %i' /dev/stdin <&3
>   $
> 
> Also, the contents of the file "u" afterwards, please.

Will do early next week, as the system is in the office (and off).

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 13:03:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZDW5-0000Nd-RM; Thu, 14 May 2020 13:03:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZDW4-0000Mu-QI
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:03:12 +0000
X-Inumbo-ID: 435a2f82-95e3-11ea-a487-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 435a2f82-95e3-11ea-a487-12813bfff9fa;
 Thu, 14 May 2020 13:03:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7D71EAC6D;
 Thu, 14 May 2020 13:03:12 +0000 (UTC)
Subject: Re: [PATCH v8 12/12] xen: remove XEN_SYSCTL_set_parameter support
To: Juergen Gross <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-13-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2c962dd4-5af6-e09e-d712-18f8a92b4a92@suse.com>
Date: Thu, 14 May 2020 15:03:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200508153421.24525-13-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08.05.2020 17:34, Juergen Gross wrote:

Besides the changes made, wouldn't it be worthwhile to change the
HYPFS Kconfig help wording from "result in some features not
being available" to something mentioning runtime parameter
setting in particular, and perhaps also from "might" to "will"?

> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -71,27 +71,6 @@ static bool __read_mostly opt_ept_pml = true;
>  static s8 __read_mostly opt_ept_ad = -1;
>  int8_t __read_mostly opt_ept_exec_sp = -1;
>  
> -#ifdef CONFIG_HYPFS
> -static char opt_ept_setting[10];
> -
> -static void update_ept_param(void)
> -{
> -    if ( opt_ept_exec_sp >= 0 )
> -        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
> -                 opt_ept_exec_sp);
> -}
> -
> -static void __init init_ept_param(struct param_hypfs *par)
> -{
> -    update_ept_param();
> -    custom_runtime_set_var(par, opt_ept_setting);
> -}
> -#else
> -static void update_ept_param(void)
> -{
> -}
> -#endif
> -
>  static int __init parse_ept_param(const char *s)
>  {
>      const char *ss;
> @@ -118,6 +97,22 @@ static int __init parse_ept_param(const char *s)
>  }
>  custom_param("ept", parse_ept_param);
>  
> +#ifdef CONFIG_HYPFS
> +static char opt_ept_setting[10];
> +
> +static void update_ept_param(void)
> +{
> +    if ( opt_ept_exec_sp >= 0 )
> +        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
> +                 opt_ept_exec_sp);
> +}
> +
> +static void __init init_ept_param(struct param_hypfs *par)
> +{
> +    update_ept_param();
> +    custom_runtime_set_var(par, opt_ept_setting);
> +}

If I'm not mistaken this is pure code movement, and solely to be
able to have ...

> +
>  static int parse_ept_param_runtime(const char *s);
>  custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
>  
> @@ -172,6 +167,7 @@ static int parse_ept_param_runtime(const char *s)
>  
>      return 0;
>  }
> +#endif

... a single #ifdef region now including a few more lines. No
strict need to change it, but couldn't the earlier patch have
inserted the code in its final place right away?

> @@ -1106,7 +1090,7 @@ struct xen_sysctl {
>  #define XEN_SYSCTL_get_cpu_levelling_caps        25
>  #define XEN_SYSCTL_get_cpu_featureset            26
>  #define XEN_SYSCTL_livepatch_op                  27
> -#define XEN_SYSCTL_set_parameter                 28
> +/* #define XEN_SYSCTL_set_parameter                 28 */

Nit: There are now 3 too many padding spaces. Granted that's
how it was done for XEN_SYSCTL_tmem_op ...

In any event, as before,
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 13:11:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZDdx-0001Jh-4I; Thu, 14 May 2020 13:11:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZDdu-0001J8-OT
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:11:18 +0000
X-Inumbo-ID: 658974ae-95e4-11ea-a488-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 658974ae-95e4-11ea-a488-12813bfff9fa;
 Thu, 14 May 2020 13:11:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 39B46AC5B;
 Thu, 14 May 2020 13:11:19 +0000 (UTC)
Subject: Re: [PATCH] domain_page: handle NULL within unmap_domain_page() itself
To: Hongyan Xia <hx242@xen.org>
References: <a3ddf0c755227a3c742f6b93783c576135a86874.1589384602.git.hongyxia@amazon.com>
 <20200514100133.ne3ed6laazrta3xa@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <94343df1-64bc-2b07-acbc-4f4b6c54fb55@suse.com>
Date: Thu, 14 May 2020 15:11:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514100133.ne3ed6laazrta3xa@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 12:01, Wei Liu wrote:
> On Wed, May 13, 2020 at 04:43:33PM +0100, Hongyan Xia wrote:
>> From: Hongyan Xia <hongyxia@amazon.com>
>>
>> The macro version UNMAP_DOMAIN_PAGE() does both NULL checking and
>> variable clearing. Move NULL checking into the function itself so that
>> the semantics are consistent with other similar constructs like XFREE().
>> This also eases the use of unmap_domain_page() in error handling paths,
>> where we only care about NULL checking but not about variable clearing.
>>
>> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> 
> Reviewed-by: Wei Liu <wl@xen.org>

Acked-by: Jan Beulich <jbeulich@suse.com>
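
The pattern the commit message describes can be sketched in miniature like
this (all names are illustrative stand-ins, not the actual Xen functions):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Sketch of the described pattern: the NULL check lives in the function
 * itself, so callers on error paths may pass NULL unconditionally (as
 * with XFREE()), while the upper-case macro additionally clears the
 * caller's variable.
 */
static int unmap_count;

static void unmap_page(const void *ptr)
{
    if ( !ptr )          /* tolerate NULL inside the function */
        return;
    unmap_count++;       /* stand-in for the real unmapping work */
}

#define UNMAP_PAGE(p) do { unmap_page(p); (p) = NULL; } while ( 0 )
```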



From xen-devel-bounces@lists.xenproject.org Thu May 14 13:11:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:11:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZDdu-0001J9-Oi; Thu, 14 May 2020 13:11:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZDds-0001J0-VM
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:11:17 +0000
X-Inumbo-ID: 640fe202-95e4-11ea-ae69-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 640fe202-95e4-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 13:11:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589461875;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=sdqEqzAUQyRIahzgAP6WBKjocHkR4BpmKUfYcI0d+So=;
 b=Cp1UrEhHuEGqD4O+ZQn+49yQ81xw/lopvS81rUC/XbLB9y3LP/UMi235
 vbcalR74k2yN0OAeJEQhV9IH1tNmY5F30Blyk6/pQFeQLQ23Aev4tyiFL
 6rDgbDUXcmpBHf6o1co7tcaZxZITFr/QWKDJQOF9EQGHqgTluauVhiwmC M=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: BDavbFFcUIq33r8dWqq31/q9Y/izuR6GJ/BlZ/wxyIpe/Cvr1H8/rWTItd6HJm/oz7bAHEe7zU
 kSqe9KUc7Ivh5FNRvu31piWUNUZHEuDiKH4INJ+p7PmcIOJhKrTa7CFTPEo2wlbPU8l8oUyI2y
 2PEGabnkvhUr07iQdyrZUN/GDQBjRSWQS0nRp30HQDf+w0bd6BfvNxu6aQvRIAbmq9kYKelHJr
 +3BeQg7R7iuYY5MaoJP9ZinFVZ+nC8Hw1ThkhQEnXnGUP+eziVp6tiCYsO9IdN80mwGbk+b4qb
 qJA=
X-SBRS: 2.7
X-MesageID: 17790673
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17790673"
Date: Thu, 14 May 2020 15:10:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
Message-ID: <20200514131021.GB54375@Air-de-Roger>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
> While from just a single Skylake system it is already clear that we
> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
> documented to be used for display purposes only anyway), logging this
> information may still give us some reference in case of problems as well
> as for future work. Additionally on the AMD side it is unclear whether
> the deviation between reported and measured frequencies is because of us
> not doing well, or because of nominal and actual frequencies being quite
> far apart.

Can you add some reference to the AMD implementation? I've looked at
the PMs and haven't been able to find a description of some of the
MSRs, like 0xC0010064.

> The chosen variable naming in amd_log_freq() has pointed out a naming
> problem in rdmsr_safe(), which is being taken care of at the same time.
> Symmetrically wrmsr_safe(), being an inline function, also gets an
> unnecessary underscore dropped from one of its local variables.
> 
> [1] With a core crystal clock of 24MHz and a ratio of 216/2, the
>     reported frequency nevertheless is 2600MHz, rather than the to be
>     expected (and calibrated by both us and Linux) 2592MHz.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> TBD: The node ID retrieval using extended leaf 1E implies it won't work
>      on older hardware (pre-Fam15 I think). Besides the Node ID MSR,
>      which doesn't get advertised on my Fam10 box (and it's zero on all
>      processors despite there being two nodes as per the PCI device
>      map), and which isn't even documented for Fam11, Fam12, and Fam14,
>      I didn't find any other means to retrieve the node ID a CPU is
>      associated with - the NodeId register in PCI config space depends
>      on one already knowing the node ID for doing the access, as the
>      device to be used is a function of the node ID.
> 
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -532,6 +532,102 @@ static void amd_get_topology(struct cpui
>                                                            : c->cpu_core_id);
>  }
>  
> +void amd_log_freq(const struct cpuinfo_x86 *c)
> +{
> +	unsigned int idx = 0, h;
> +	uint64_t hi, lo, val;
> +
> +	if (c->x86 < 0x10 || c->x86 > 0x19 ||
> +	    (c != &boot_cpu_data &&
> +	     (!opt_cpu_info || (c->apicid & (c->x86_num_siblings - 1)))))
> +		return;
> +
> +	if (c->x86 < 0x17) {
> +		unsigned int node = 0;
> +		uint64_t nbcfg;
> +
> +		/*
> +		 * Make an attempt at determining the node ID, but assume
> +		 * symmetric setup (using node 0) if this fails.
> +		 */
> +		if (c->extended_cpuid_level >= 0x8000001e &&
> +		    cpu_has(c, X86_FEATURE_TOPOEXT)) {
> +			node = cpuid_ecx(0x8000001e) & 0xff;
> +			if (node > 7)
> +				node = 0;
> +		} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
> +			rdmsrl(0xC001100C, val);
> +			node = val & 7;
> +		}
> +
> +		/*
> +		 * Enable (and use) Extended Config Space accesses, as we
> +		 * can't be certain that MCFG is available here during boot.
> +		 */
> +		rdmsrl(MSR_AMD64_NB_CFG, nbcfg);
> +		wrmsrl(MSR_AMD64_NB_CFG,
> +		       nbcfg | (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT));
> +#define PCI_ECS_ADDRESS(sbdf, reg) \
> +    (0x80000000 | ((sbdf).bdf << 8) | ((reg) & 0xfc) | (((reg) & 0xf00) << 16))
> +
> +		for ( ; ; ) {
> +			pci_sbdf_t sbdf = PCI_SBDF(0, 0, 0x18 | node, 4);
> +
> +			switch (pci_conf_read32(sbdf, PCI_VENDOR_ID)) {
> +			case 0x00000000:
> +			case 0xffffffff:
> +				/* No device at this SBDF. */
> +				if (!node)
> +					break;
> +				node = 0;
> +				continue;
> +
> +			default:
> +				/*
> +				 * Core Performance Boost Control, family
> +				 * dependent up to 3 bits starting at bit 2.
> +				 */
> +				switch (c->x86) {
> +				case 0x10: idx = 1; break;
> +				case 0x12: idx = 7; break;
> +				case 0x14: idx = 7; break;
> +				case 0x15: idx = 7; break;
> +				case 0x16: idx = 7; break;
> +				}
> +				idx &= pci_conf_read(PCI_ECS_ADDRESS(sbdf,
> +				                                     0x15c),
> +				                     0, 4) >> 2;
> +				break;
> +			}
> +			break;
> +		}
> +
> +#undef PCI_ECS_ADDRESS
> +		wrmsrl(MSR_AMD64_NB_CFG, nbcfg);
> +	}
> +
> +	lo = 0; /* gcc may not recognize the loop having at least 5 iterations */
> +	for (h = c->x86 == 0x10 ? 5 : 8; h--; )
> +		if (!rdmsr_safe(0xC0010064 + h, lo) && (lo >> 63))
> +			break;
> +	if (!(lo >> 63))
> +		return;
> +
> +#define FREQ(v) (c->x86 < 0x17 ? ((((v) & 0x3f) + 0x10) * 100) >> (((v) >> 6) & 7) \
> +		                     : (((v) & 0xff) * 25 * 8) / (((v) >> 8) & 0x3f))
> +	if (idx && idx < h &&
> +	    !rdmsr_safe(0xC0010064 + idx, val) && (val >> 63) &&
> +	    !rdmsr_safe(0xC0010064, hi) && (hi >> 63))
> +		printk("CPU%u: %lu (%lu..%lu) MHz\n",
> +		       smp_processor_id(), FREQ(val), FREQ(lo), FREQ(hi));
> +	else if (h && !rdmsr_safe(0xC0010064, hi) && (hi >> 63))
> +		printk("CPU%u: %lu..%lu MHz\n",
> +		       smp_processor_id(), FREQ(lo), FREQ(hi));
> +	else
> +		printk("CPU%u: %lu MHz\n", smp_processor_id(), FREQ(lo));
> +#undef FREQ
> +}
> +
>  void early_init_amd(struct cpuinfo_x86 *c)
>  {
>  	if (c == &boot_cpu_data)
> @@ -803,6 +899,8 @@ static void init_amd(struct cpuinfo_x86
>  		disable_c1_ramping();
>  
>  	check_syscfg_dram_mod_en();
> +
> +	amd_log_freq(c);
>  }
>  
>  const struct cpu_dev amd_cpu_dev = {
> --- a/xen/arch/x86/cpu/cpu.h
> +++ b/xen/arch/x86/cpu/cpu.h
> @@ -19,3 +19,4 @@ extern void detect_ht(struct cpuinfo_x86
>  extern bool detect_extended_topology(struct cpuinfo_x86 *c);
>  
>  void early_init_amd(struct cpuinfo_x86 *c);
> +void amd_log_freq(const struct cpuinfo_x86 *c);
> --- a/xen/arch/x86/cpu/hygon.c
> +++ b/xen/arch/x86/cpu/hygon.c
> @@ -99,6 +99,8 @@ static void init_hygon(struct cpuinfo_x8
>  		value |= (1 << 27); /* Enable read-only APERF/MPERF bit */
>  		wrmsrl(MSR_K7_HWCR, value);
>  	}
> +
> +	amd_log_freq(c);
>  }
>  
>  const struct cpu_dev hygon_cpu_dev = {
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -378,6 +378,72 @@ static void init_intel(struct cpuinfo_x8
>  	     ( c->cpuid_level >= 0x00000006 ) &&
>  	     ( cpuid_eax(0x00000006) & (1u<<2) ) )
>  		__set_bit(X86_FEATURE_ARAT, c->x86_capability);
> +

I would split this into a separate helper, i.e. intel_log_freq. That
would allow you to exit early and reduce some of the indentation IMO.

> +    if ( (opt_cpu_info && !(c->apicid & (c->x86_num_siblings - 1))) ||
> +         c == &boot_cpu_data )
> +    {
> +        unsigned int eax, ebx, ecx, edx;
> +        uint64_t msrval;
> +
> +        if ( c->cpuid_level >= 0x15 )
> +        {
> +            cpuid(0x15, &eax, &ebx, &ecx, &edx);
> +            if ( ecx && ebx && eax )
> +            {
> +                unsigned long long val = ecx;
> +
> +                val *= ebx;
> +                do_div(val, eax);
> +                printk("CPU%u: TSC: %uMHz * %u / %u = %LuMHz\n",
> +                       smp_processor_id(), ecx, ebx, eax, val);
> +            }
> +            else if ( ecx | eax | ebx )
> +            {
> +                printk("CPU%u: TSC:", smp_processor_id());
> +                if ( ecx )
> +                    printk(" core: %uMHz", ecx);
> +                if ( ebx && eax )
> +                    printk(" ratio: %u / %u", ebx, eax);
> +                printk("\n");
> +            }
> +        }
> +
> +        if ( c->cpuid_level >= 0x16 )
> +        {
> +            cpuid(0x16, &eax, &ebx, &ecx, &edx);
> +            if ( ecx | eax | ebx )
> +            {
> +                printk("CPU%u:", smp_processor_id());
> +                if ( ecx )
> +                    printk(" bus: %uMHz", ecx);
> +                if ( eax )
> +                    printk(" base: %uMHz", eax);
> +                if ( ebx )
> +                    printk(" max: %uMHz", ebx);
> +                printk("\n");
> +            }
> +        }
> +
> +        if ( !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, msrval) &&
> +             (uint8_t)(msrval >> 8) )

I would introduce a mask for it, which would be cleaner since you use
it here and below (and would avoid the casting to uint8_t).
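
One possible shape for such a mask (the names below are made up for
illustration and are not existing Xen identifiers):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Sketch: name the 8-bit ratio fields of MSR_INTEL_PLATFORM_INFO via a
 * mask instead of casting to uint8_t. Bits 15:8 hold the maximum
 * non-turbo ratio, bits 47:40 the maximum efficiency (minimum) ratio.
 */
#define PLATFORM_INFO_RATIO_MASK   0xffull
#define PLATFORM_INFO_MAX_RATIO(v) \
    ((unsigned int)(((v) >>  8) & PLATFORM_INFO_RATIO_MASK))
#define PLATFORM_INFO_MIN_RATIO(v) \
    ((unsigned int)(((v) >> 40) & PLATFORM_INFO_RATIO_MASK))
```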

> +        {
> +            unsigned int factor = 10000;
> +
> +            if ( c->x86 == 6 )
> +                switch ( c->x86_model )
> +                {
> +                case 0x1a: case 0x1e: case 0x1f: case 0x2e: /* Nehalem */
> +                case 0x25: case 0x2c: case 0x2f: /* Westmere */
> +                    factor = 13333;

The SDM lists ratio * 100MHz without any notes. Why are those models
different? Is this some errata?

> +                    break;
> +                }
> +
> +            printk("CPU%u: ", smp_processor_id());
> +            if ( (uint8_t)(msrval >> 40) )
> +                printk("%u..", (factor * (uint8_t)(msrval >> 40) + 50) / 100);
> +            printk("%u MHz\n", (factor * (uint8_t)(msrval >> 8) + 50) / 100);

Since you are calculating using Hz, should you use an unsigned long
factor to prevent capping at 4GHz?
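
For reference, the computation under discussion, with a wide factor as
suggested (the helper name is hypothetical; factor is in units of 10 kHz,
so 100MHz is 10000 and 133.33MHz is 13333):

```c
#include <assert.h>

/*
 * Sketch: convert an 8-bit ratio and a bus-clock factor (in 10 kHz
 * units) into MHz, rounding to nearest. Using unsigned long keeps the
 * intermediate product wide on 64-bit builds.
 */
static unsigned long ratio_to_mhz(unsigned long factor, unsigned int ratio)
{
    return (factor * ratio + 50) / 100;   /* round to nearest MHz */
}
```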

> +        }
> +    }
>  }
>  
>  const struct cpu_dev intel_cpu_dev = {
> --- a/xen/include/asm-x86/msr.h
> +++ b/xen/include/asm-x86/msr.h
> @@ -40,8 +40,8 @@ static inline void wrmsrl(unsigned int m
>  
>  /* rdmsr with exception handling */
>  #define rdmsr_safe(msr,val) ({\
> -    int _rc; \
> -    uint32_t lo, hi; \
> +    int rc_; \
> +    uint32_t lo_, hi_; \
>      __asm__ __volatile__( \
>          "1: rdmsr\n2:\n" \
>          ".section .fixup,\"ax\"\n" \
> @@ -49,15 +49,15 @@ static inline void wrmsrl(unsigned int m
>          "   movl %5,%2\n; jmp 2b\n" \
>          ".previous\n" \
>          _ASM_EXTABLE(1b, 3b) \
> -        : "=a" (lo), "=d" (hi), "=&r" (_rc) \
> +        : "=a" (lo_), "=d" (hi_), "=&r" (rc_) \
>          : "c" (msr), "2" (0), "i" (-EFAULT)); \
> -    val = lo | ((uint64_t)hi << 32); \
> -    _rc; })
> +    val = lo_ | ((uint64_t)hi_ << 32); \
> +    rc_; })

Since you are changing the local variable names, I would just switch
rdmsr_safe to a static inline and drop the trailing underscores. I
don't see a reason this has to stay as a macro.
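
The suggested shape might look roughly like this; the privileged rdmsr is
stubbed out so the sketch is self-contained in user space, and only the
macro-to-inline interface change is the point (names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

/* Stub standing in for the privileged rdmsr with exception fixup. */
static int stub_rdmsr(unsigned int msr, uint32_t *lo, uint32_t *hi)
{
    if ( msr != 0xC0010064 )   /* pretend only this one MSR exists */
        return -1;
    *lo = 0x12345678;
    *hi = 0x80000000;          /* "valid" bit 63 set in the high half */
    return 0;
}

/*
 * Sketch: rdmsr_safe as a static inline taking a uint64_t pointer,
 * rather than a macro writing through its second argument, removing
 * the need for underscore-decorated local names.
 */
static inline int rdmsr_safe_sketch(unsigned int msr, uint64_t *val)
{
    uint32_t lo = 0, hi = 0;
    int rc = stub_rdmsr(msr, &lo, &hi);

    *val = lo | ((uint64_t)hi << 32);
    return rc;
}
```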

>  
>  /* wrmsr with exception handling */
>  static inline int wrmsr_safe(unsigned int msr, uint64_t val)
>  {
> -    int _rc;
> +    int rc;
>      uint32_t lo, hi;
>      lo = (uint32_t)val;
>      hi = (uint32_t)(val >> 32);

Since you are already playing with this, could you initialize lo and
hi at definition time?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 13:13:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:13:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZDfe-0001TE-Fk; Thu, 14 May 2020 13:13:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZDfc-0001T6-U5
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:13:04 +0000
X-Inumbo-ID: a45ea942-95e4-11ea-a488-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a45ea942-95e4-11ea-a488-12813bfff9fa;
 Thu, 14 May 2020 13:13:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589461984;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=I8mMmrF+qm0DEtB94xHFQ4tMx6KQHhhEyszNspZA9N0=;
 b=PXzq6/gkGe7OP3+E/10MX5CLXDuO3PlcnQD1yRuOcbY0xv+ejQgeCJfO
 hIob3p1jGDSrbrKbslAUUDGYeWdsc61JZjfMKkNbj0YazTWducZlEEmXm
 LSso3emX8yYs2+3Xopzj44njHAGbIMIv5qkI8aFAfcDNgIL0QsQhJp4uc s=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: AzzTz93Kmq181XKT+Su2ppDGyCP4BXnbJqVuyBja+Jyr7IMp0kYVYanQu7bSfkTqjHszx/x27z
 r518t8q97/2q2v7rQ2yJ5DuwMUa6YWfi7HBB8CpfiIFVI9qHiTPw01ex7iffA5UHw+o8LGKBle
 TVtsfEv3FUSHyDXwjCwK4jnCrd+fRYLendnF9PxeKL8X4lYuEgZskbHh2D730qbBlBUNzzjJze
 //BR5qhLRp1nyKF0AhbNTFRLKm64tGWhVXG6xFQqQ7Jy88naJVitK5tl7ybUS7LhKRtawV0/qK
 9mo=
X-SBRS: 2.7
X-MesageID: 17517364
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17517364"
Date: Thu, 14 May 2020 15:12:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 4/4] x86/APIC: restrict certain messages to BSP
Message-ID: <20200514131255.GC54375@Air-de-Roger>
References: <60130f14-3fc5-e40d-fec6-2448fefa6fc4@suse.com>
 <513e4f93-a8a0-ae72-abcc-aa28531eca97@suse.com>
 <20200514100145.GA54375@Air-de-Roger>
 <c4ac126d-6f37-4c93-7189-35128bcd3e04@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c4ac126d-6f37-4c93-7189-35128bcd3e04@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 14, 2020 at 02:31:33PM +0200, Jan Beulich wrote:
> On 14.05.2020 12:01, Roger Pau Monné wrote:
> > On Fri, Mar 13, 2020 at 10:26:47AM +0100, Jan Beulich wrote:
> >> --- a/xen/arch/x86/apic.c
> >> +++ b/xen/arch/x86/apic.c
> >> @@ -499,7 +499,7 @@ static void resume_x2apic(void)
> >>      __enable_x2apic();
> >>  }
> >>  
> >> -void setup_local_APIC(void)
> >> +void setup_local_APIC(bool bsp)
> >>  {
> >>      unsigned long oldvalue, value, maxlvt;
> >>      int i, j;
> >> @@ -598,8 +598,8 @@ void setup_local_APIC(void)
> >>      if ( directed_eoi_enabled )
> >>      {
> >>          value |= APIC_SPIV_DIRECTED_EOI;
> >> -        apic_printk(APIC_VERBOSE, "Suppress EOI broadcast on CPU#%d\n",
> >> -                    smp_processor_id());
> >> +        if ( bsp )
> >> +            apic_printk(APIC_VERBOSE, "Suppressing EOI broadcast\n");
> >>      }
> >>  
> >>      apic_write(APIC_SPIV, value);
> >> @@ -615,21 +615,22 @@ void setup_local_APIC(void)
> >>       * TODO: set up through-local-APIC from through-I/O-APIC? --macro
> >>       */
> >>      value = apic_read(APIC_LVT0) & APIC_LVT_MASKED;
> >> -    if (!smp_processor_id() && (pic_mode || !value)) {
> >> +    if (bsp && (pic_mode || !value)) {
> >>          value = APIC_DM_EXTINT;
> >>          apic_printk(APIC_VERBOSE, "enabled ExtINT on CPU#%d\n",
> >>                      smp_processor_id());
> >>      } else {
> >>          value = APIC_DM_EXTINT | APIC_LVT_MASKED;
> >> -        apic_printk(APIC_VERBOSE, "masked ExtINT on CPU#%d\n",
> >> -                    smp_processor_id());
> >> +        if (bsp)
> >> +            apic_printk(APIC_VERBOSE, "masked ExtINT on CPU#%d\n",
> >> +                        smp_processor_id());
> > 
> > You might want to also drop the CPU#%d from the above messages, since
> > they would only be printed for the BSP.
> 
> I want to specifically keep them, so that once (if ever) we introduce
> hot-unplug support for the BSP, the same or similar messages can be
> used and matched against earlier ones in the log.
> 
> >>      }
> >>      apic_write(APIC_LVT0, value);
> >>  
> >>      /*
> >>       * only the BP should see the LINT1 NMI signal, obviously.
> >>       */
> >> -    if (!smp_processor_id())
> >> +    if (bsp)
> >>          value = APIC_DM_NMI;
> >>      else
> >>          value = APIC_DM_NMI | APIC_LVT_MASKED;
> > 
> > This would be shorter as:
> > 
> > value = APIC_DM_NMI | (bsp ? 0 : APIC_LVT_MASKED);
> 
> Indeed, at the expense of larger code churn. Seems like an at least
> partially unrelated change to me (at risk of obscuring the actual
> purpose of the change here).

FTR, I'm happy with both of the above and my RB stands.

Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 13:31:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZDxM-0003HH-04; Thu, 14 May 2020 13:31:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZDxK-0003HC-5x
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:31:22 +0000
X-Inumbo-ID: 2cec38af-95e7-11ea-a48b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2cec38af-95e7-11ea-a48b-12813bfff9fa;
 Thu, 14 May 2020 13:31:12 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZDx9-00084a-TH; Thu, 14 May 2020 13:31:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZDx9-0000fq-La; Thu, 14 May 2020 13:31:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZDx9-0008Gx-K7; Thu, 14 May 2020 13:31:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150169-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150169: regressions - FAIL
X-Osstest-Versions-This: xen=3a218961b16f1f4feb1147f56338faf1ac8f5703
X-Osstest-Versions-That: xen=9d83ad86834300927b636fa02b29d84854399ed8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 13:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150169 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150169/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt-vhd 10 debian-di-install        fail REGR. vs. 150164

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150164
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150164
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150164
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150164
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150164
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150164
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150164
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  3a218961b16f1f4feb1147f56338faf1ac8f5703
baseline version:
 xen                  9d83ad86834300927b636fa02b29d84854399ed8

Last test of basis   150164  2020-05-13 17:36:57 Z    0 days
Testing same since   150169  2020-05-14 03:54:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3a218961b16f1f4feb1147f56338faf1ac8f5703
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 12 19:18:43 2020 +0100

    x86/build: Unilaterally disable -fcf-protection
    
    Xen doesn't support CET-IBT yet.  At a minimum, logic is required to enable it
    for supervisor use, but the livepatch functionality needs to learn not to
    overwrite ENDBR64 instructions.
    
    Furthermore, Ubuntu enables -fcf-protection by default, along with a buggy
    version of GCC-9 which objects to it in combination with
    -mindirect-branch=thunk-extern (Fixed in GCC 10, 9.4).
    
    Various objects (Xen boot path, Rombios 32 stubs) require .text to be at the
    beginning of the object.  These paths explode when .note.gnu.properties gets
    put ahead of .text and we end up executing the notes data.
    
    Disable -fcf-protection for all embedded objects.
    
    Reported-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 1a47731115c2c8eb510e135fa48ed51ad2e94a26
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 13 13:06:28 2020 +0100

    x86/build: move -fno-asynchronous-unwind-tables into EMBEDDED_EXTRA_CFLAGS
    
    Users of EMBEDDED_EXTRA_CFLAGS already use -fno-asynchronous-unwind-tables, or
    ought to.  This shrinks the size of the rombios 32bit stubs in guest memory.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 24f94fca23ad7c45806a1428331e1d602dfd8604
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 12 19:18:37 2020 +0100

    x86/build32: Discard all orphaned sections
    
    Linkers may put orphaned sections ahead of .text, which breaks the calling
    requirements.  A concrete example is Ubuntu's GCC-9 default of enabling
    -fcf-protection which causes us to try and execute .note.gnu.properties during
    Xen's boot.
    
    Put .got.plt in its own section as it specifically needs preserving from the
    linker's point of view, and discard everything else.  This will hopefully be
    more robust to other unexpected toolchain properties.
    
    Fixes boot from an Ubuntu build of Xen.
    
    Reported-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 9f74a7b66b0b03fe563779bb2c133051f1595ece
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 12 17:21:33 2020 +0100

    x86/guest: Fix assembler warnings with newer binutils
    
    GAS of at least version 2.34 complains:
    
      hypercall_page.S: Assembler messages:
      hypercall_page.S:24: Warning: symbol 'HYPERCALL_set_trap_table' already has its type set
      ...
      hypercall_page.S:71: Warning: symbol 'HYPERCALL_arch_7' already has its type set
    
    which is because the whole page is declared as STT_OBJECT already.  Rearrange
    .set with respect to .type in DECLARE_HYPERCALL() so STT_FUNC is already in
    place.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit f3b0d25e343562dee29729cfaf32f8c79f8b6502
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 13 13:07:53 2020 +0100

    stubdom: Use matching quotes in error message
    
    This prevents syntax highlighting from believing the rest of the file is a
    string.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit d8a6a8b36d864e1e56d3c63b30892cbb4e55d65c
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Mar 2 14:36:03 2020 +0000

    tools/libxc: Reduce feature handling complexity in xc_cpuid_apply_policy()
    
    xc_cpuid_apply_policy() is gaining extra parameters to untangle CPUID
    complexity in Xen.  While an improvement in general, it does have the
    unfortunate side effect of duplicating some settings across multiple
    parameters.
    
    Rearrange the logic to only consider 'pae' if no explicit featureset is
    provided.  This reduces the complexity for callers who have already provided a
    pae setting in the featureset.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Paul Durrant <pdurrant@amzn.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 61be48dc029294275348443f78a5e600ef28274f
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Wed May 13 10:18:19 2020 -0400

    golang/xenlight: add necessary module/package documentation
    
    Add a README and package comment giving a brief overview of the package.
    These also help pkg.go.dev generate better documentation.
    
    Also, add a copy of the LGPL (the same license used by libxl) to
    tools/golang/xenlight. This is required for the package to be shown
    on pkg.go.dev and added to the default module proxy, proxy.golang.org.
    
    Finally, add an entry for the xenlight package to SUPPORT.md.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu May 14 13:38:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZE4B-0003WB-Ry; Thu, 14 May 2020 13:38:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZE4A-0003W6-J2
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:38:26 +0000
X-Inumbo-ID: 2f32398c-95e8-11ea-a48d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f32398c-95e8-11ea-a48d-12813bfff9fa;
 Thu, 14 May 2020 13:38:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B07A0ABCE;
 Thu, 14 May 2020 13:38:26 +0000 (UTC)
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
 <20200514131021.GB54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2e9c7c05-e42c-52d4-f48c-9ecc8b14a1a7@suse.com>
Date: Thu, 14 May 2020 15:38:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514131021.GB54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 15:10, Roger Pau Monné wrote:
> On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
>> While from just a single Skylake system it is already clear that we
>> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
>> documented to be used for display purposes only anyway), logging this
>> information may still give us some reference in case of problems as well
>> as for future work. Additionally on the AMD side it is unclear whether
>> the deviation between reported and measured frequencies is because of us
>> not doing well, or because of nominal and actual frequencies being quite
>> far apart.
> 
> Can you add some reference to the AMD implementation? I've looked at
> the PMs and haven't been able to find a description of some of the
> MSRs, like 0xC0010064.

Take a look at

https://developer.amd.com/resources/developer-guides-manuals/

I'm unconvinced a reference needs adding here.

>> --- a/xen/arch/x86/cpu/intel.c
>> +++ b/xen/arch/x86/cpu/intel.c
>> @@ -378,6 +378,72 @@ static void init_intel(struct cpuinfo_x8
>>  	     ( c->cpuid_level >= 0x00000006 ) &&
>>  	     ( cpuid_eax(0x00000006) & (1u<<2) ) )
>>  		__set_bit(X86_FEATURE_ARAT, c->x86_capability);
>> +
> 
> I would split this into a separate helper, i.e. intel_log_freq. That
> will allow you to exit early and reduce some of the indentation IMO.

Can do; splitting this out for AMD/Hygon, however, was merely to
facilitate using it for both vendors.

>> +    if ( (opt_cpu_info && !(c->apicid & (c->x86_num_siblings - 1))) ||
>> +         c == &boot_cpu_data )
>> +    {
>> +        unsigned int eax, ebx, ecx, edx;
>> +        uint64_t msrval;
>> +
>> +        if ( c->cpuid_level >= 0x15 )
>> +        {
>> +            cpuid(0x15, &eax, &ebx, &ecx, &edx);
>> +            if ( ecx && ebx && eax )
>> +            {
>> +                unsigned long long val = ecx;
>> +
>> +                val *= ebx;
>> +                do_div(val, eax);
>> +                printk("CPU%u: TSC: %uMHz * %u / %u = %LuMHz\n",
>> +                       smp_processor_id(), ecx, ebx, eax, val);
>> +            }
>> +            else if ( ecx | eax | ebx )
>> +            {
>> +                printk("CPU%u: TSC:", smp_processor_id());
>> +                if ( ecx )
>> +                    printk(" core: %uMHz", ecx);
>> +                if ( ebx && eax )
>> +                    printk(" ratio: %u / %u", ebx, eax);
>> +                printk("\n");
>> +            }
>> +        }
>> +
>> +        if ( c->cpuid_level >= 0x16 )
>> +        {
>> +            cpuid(0x16, &eax, &ebx, &ecx, &edx);
>> +            if ( ecx | eax | ebx )
>> +            {
>> +                printk("CPU%u:", smp_processor_id());
>> +                if ( ecx )
>> +                    printk(" bus: %uMHz", ecx);
>> +                if ( eax )
>> +                    printk(" base: %uMHz", eax);
>> +                if ( ebx )
>> +                    printk(" max: %uMHz", ebx);
>> +                printk("\n");
>> +            }
>> +        }
>> +
>> +        if ( !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, msrval) &&
>> +             (uint8_t)(msrval >> 8) )
> 
> I would introduce a mask for it, which would be cleaner, since you use it
> here and below (and would avoid the casting to uint8_t).

To avoid the casts (also below) I could introduce local variables.
I specifically wanted to avoid MASK_EXTR() such that the rest of the
calculations in

            if ( (uint8_t)(msrval >> 40) )
                printk("%u..", (factor * (uint8_t)(msrval >> 40) + 50) / 100);
            printk("%u MHz\n", (factor * (uint8_t)(msrval >> 8) + 50) / 100);

can be done as 32-bit arithmetic.

>> +        {
>> +            unsigned int factor = 10000;
>> +
>> +            if ( c->x86 == 6 )
>> +                switch ( c->x86_model )
>> +                {
>> +                case 0x1a: case 0x1e: case 0x1f: case 0x2e: /* Nehalem */
>> +                case 0x25: case 0x2c: case 0x2f: /* Westmere */
>> +                    factor = 13333;
> 
> The SDM lists ratio * 100MHz without any notes, why are those models
> different, is this some errata?

Did you go through the MSR lists for the various models? That's
where I found this anomaly, not in any spec updates.

>> +                    break;
>> +                }
>> +
>> +            printk("CPU%u: ", smp_processor_id());
>> +            if ( (uint8_t)(msrval >> 40) )
>> +                printk("%u..", (factor * (uint8_t)(msrval >> 40) + 50) / 100);
>> +            printk("%u MHz\n", (factor * (uint8_t)(msrval >> 8) + 50) / 100);
> 
> Since you are calculating using Hz, should you use an unsigned long
> factor to prevent capping at 4GHz?

Hmm, the calculation looks to be in units of 10kHz, until the division
by 100. I don't think we'd cap at 4GHz this way.

>> --- a/xen/include/asm-x86/msr.h
>> +++ b/xen/include/asm-x86/msr.h
>> @@ -40,8 +40,8 @@ static inline void wrmsrl(unsigned int m
>>  
>>  /* rdmsr with exception handling */
>>  #define rdmsr_safe(msr,val) ({\
>> -    int _rc; \
>> -    uint32_t lo, hi; \
>> +    int rc_; \
>> +    uint32_t lo_, hi_; \
>>      __asm__ __volatile__( \
>>          "1: rdmsr\n2:\n" \
>>          ".section .fixup,\"ax\"\n" \
>> @@ -49,15 +49,15 @@ static inline void wrmsrl(unsigned int m
>>          "   movl %5,%2\n; jmp 2b\n" \
>>          ".previous\n" \
>>          _ASM_EXTABLE(1b, 3b) \
>> -        : "=a" (lo), "=d" (hi), "=&r" (_rc) \
>> +        : "=a" (lo_), "=d" (hi_), "=&r" (rc_) \
>>          : "c" (msr), "2" (0), "i" (-EFAULT)); \
>> -    val = lo | ((uint64_t)hi << 32); \
>> -    _rc; })
>> +    val = lo_ | ((uint64_t)hi_ << 32); \
>> +    rc_; })
> 
> Since you are changing the local variable names, I would just switch
> rdmsr_safe to a static inline, and drop the underscores. I don't see a
> reason this has to stay as a macro.

Well, all callers would need to be changed to pass the address of
the variable to store the value read into. That's quite a bit of
code churn, and hence nothing I'd want to do in this patch.

>>  /* wrmsr with exception handling */
>>  static inline int wrmsr_safe(unsigned int msr, uint64_t val)
>>  {
>> -    int _rc;
>> +    int rc;
>>      uint32_t lo, hi;
>>      lo = (uint32_t)val;
>>      hi = (uint32_t)(val >> 32);
> 
> Since you are already playing with this, could you initialize lo and
> hi at definition time?

If I were touching any of the three lines anyway - yes. But as it
stands it would look like an unrelated change to me.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 13:57:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:57:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEMd-0005GU-Ek; Thu, 14 May 2020 13:57:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZEMc-0005GP-MY
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:57:30 +0000
X-Inumbo-ID: d93d8916-95ea-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d93d8916-95ea-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 13:57:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F3FACAAD0;
 Thu, 14 May 2020 13:57:30 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
To: Juergen Gross <jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f01cca9f-ba49-75bd-72c6-d0c638ed5e15@suse.com>
Date: Thu, 14 May 2020 15:57:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200511112829.5500-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 13:28, Juergen Gross wrote:
> With support of core scheduling sched_unit_migrate_finish() gained a
> call of sync_vcpu_execstate() as it was believed to be called as a
> result of vcpu migration in any case.
> 
> In case of migrating a vcpu away from a physical cpu for a short period
> of time only this might not be true, so drop the call and let the lazy
> state syncing do its job.

Replying here instead of on the patch 3 thread (and I'm sorry
for mixing up function names there): By saying "for a short
period of time only", do you imply without ever getting scheduled
on the new (temporary) CPU? If so, I think I understand this
change now, but then this could do with being said here. If not, I'm
afraid I'm still lost.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 13:58:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 13:58:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZENp-0005Kn-Pw; Thu, 14 May 2020 13:58:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZENo-0005Kf-Kq
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 13:58:44 +0000
X-Inumbo-ID: 05adc89e-95eb-11ea-a490-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05adc89e-95eb-11ea-a490-12813bfff9fa;
 Thu, 14 May 2020 13:58:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2AFDDAAD0;
 Thu, 14 May 2020 13:58:45 +0000 (UTC)
Subject: Re: [PATCH v2 3/3] xen/sched: fix latent races accessing
 vcpu->dirty_cpu
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-4-jgross@suse.com>
 <eaa891af-697d-bb30-8e34-470102a98561@suse.com>
 <35440630-c065-8d3f-94d2-e01c6a5df2a2@suse.com>
 <b5173d4a-437a-fe21-be4b-842dad960f81@suse.com>
 <8c37fd91-2d97-e30a-700c-141c86c5745a@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <41215616-ca7e-7448-3d5c-908d06943d91@suse.com>
Date: Thu, 14 May 2020 15:58:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8c37fd91-2d97-e30a-700c-141c86c5745a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 11:29, Jürgen Groß wrote:
> On 14.05.20 11:24, Jan Beulich wrote:
>> On 14.05.2020 10:50, Jürgen Groß wrote:
>>> On 14.05.20 09:10, Jan Beulich wrote:
>>>> On 11.05.2020 13:28, Juergen Gross wrote:
>>>>> @@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
>>>>>      void sync_vcpu_execstate(struct vcpu *v)
>>>>>    {
>>>>> -    if ( v->dirty_cpu == smp_processor_id() )
>>>>> +    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
>>>>> +
>>>>> +    if ( dirty_cpu == smp_processor_id() )
>>>>>            sync_local_execstate();
>>>>> -    else if ( vcpu_cpu_dirty(v) )
>>>>> +    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
>>>>>        {
>>>>>            /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
>>>>> -        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
>>>>> +        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
>>>>>        }
>>>>> +    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
>>>>> +           read_atomic(&v->dirty_cpu) != dirty_cpu);
>>>>
>>>> Repeating my v1.1 comments:
>>>>
>>>> "However, having stared at it for a while now - is this race
>>>>    free? I can see this being fine in the (initial) case of
>>>>    dirty_cpu == smp_processor_id(), but if this is for a foreign
>>>>    CPU, can't the vCPU have gone back to that same CPU again in
>>>>    the meantime?"
>>>>
>>>> and later
>>>>
>>>> "There is a time window from late in flush_mask() to the assertion
>>>>    you add. All sorts of things can happen during this window on
>>>>    other CPUs. IOW what guarantees the vCPU not getting unpaused or
>>>>    its affinity getting changed yet another time?"
>>>>
>>>> You did reply that by what is now patch 2 this race can be
>>>> eliminated, but I have to admit I don't see why this would be.
>>>> Hence at the very least I'd expect justification in either the
>>>> description or a code comment as to why there's no race left
>>>> (and also no race to be expected to be re-introduced by code
>>>> changes elsewhere - very unlikely races are, by their nature,
>>>> unlikely to be hit during code development and the associated
>>>> testing, hence I'd like there to be sufficiently close to a
>>>> guarantee here).
>>>>
>>>> My reservations here may in part be due to not following the
>>>> reasoning for patch 2, which therefore I'll have to rely on the
>>>> scheduler maintainers to judge on.
>>>
>>> sync_vcpu_execstate() isn't called for a running or runnable vcpu any
>>> longer. I can add an ASSERT() and a comment explaining it if you like
>>> that better.
>>
>> This would help (hopefully people adding new uses of the function
>> would run into this assertion/comment), but for example the uses
>> in mapcache_current_vcpu() or do_tasklet_work() look to be pretty
>> hard to prove they can't happen for a runnable vCPU.
> 
> Those call sync_local_execstate(), not sync_vcpu_execstate().

Ouch, as said on the other sub-thread - I'm sorry for mixing those up.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 14:06:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:06:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEUp-0006Ic-Kl; Thu, 14 May 2020 14:05:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZEUo-0006IX-CC
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:05:58 +0000
X-Inumbo-ID: 07f6fd0e-95ec-11ea-b07b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07f6fd0e-95ec-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 14:05:57 +0000 (UTC)
Date: Thu, 14 May 2020 16:05:22 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <JBeulich@suse.com>
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
Message-ID: <20200514140522.GD54375@Air-de-Roger>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
> This is faster than using the software implementation, and the insn is
> available on all half-way recent hardware. Therefore convert
> generic_hweight<N>() to out-of-line functions (without affecting Arm)
> and use alternatives patching to replace the function calls.
> 
> Note that the approach doesn't work for clang, due to it not recognizing
> -ffixed-*.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Also suppress UB sanitizer instrumentation. Reduce macroization in
>      hweight.c. Exclude clang builds.
> ---
> Note: Using "g" instead of "X" as the dummy constraint in hweight64()
>        and hweight32(), contrary to what one might expect, produces
>        slightly better code with gcc 8.
> 
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -31,6 +31,10 @@ obj-y += emul-i8254.o
>   obj-y += extable.o
>   obj-y += flushtlb.o
>   obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
> +# clang doesn't appear to know of -ffixed-*
> +hweight-$(gcc) := hweight.o
> +hweight-$(clang) :=
> +obj-y += $(hweight-y)
>   obj-y += hypercall.o
>   obj-y += i387.o
>   obj-y += i8259.o
> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
>   efi/mkreloc: efi/mkreloc.c
>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>   
> +nocov-y += hweight.o
> +noubsan-y += hweight.o
> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))

Why not use clobbers in the asm to list the scratch registers? Is it
that much more expensive?

> +
>   .PHONY: clean
>   clean::
>   	rm -f asm-offsets.s *.lds boot/*.o boot/*~ boot/core boot/mkelf32
> --- /dev/null
> +++ b/xen/arch/x86/hweight.c
> @@ -0,0 +1,21 @@
> +#define generic_hweight64 _hweight64
> +#define generic_hweight32 _hweight32
> +#define generic_hweight16 _hweight16
> +#define generic_hweight8  _hweight8
> +
> +#include <xen/compiler.h>
> +
> +#undef inline
> +#define inline always_inline
> +
> +#include <xen/bitops.h>
> +
> +#undef generic_hweight8
> +#undef generic_hweight16
> +#undef generic_hweight32
> +#undef generic_hweight64
> +
> +unsigned int generic_hweight8 (unsigned int x) { return _hweight8 (x); }
> +unsigned int generic_hweight16(unsigned int x) { return _hweight16(x); }
> +unsigned int generic_hweight32(unsigned int x) { return _hweight32(x); }
> +unsigned int generic_hweight64(uint64_t x)     { return _hweight64(x); }
> --- a/xen/include/asm-x86/bitops.h
> +++ b/xen/include/asm-x86/bitops.h
> @@ -475,9 +475,36 @@ static inline int fls(unsigned int x)
>    *
>    * The Hamming Weight of a number is the total number of bits set in it.
>    */
> +#ifndef __clang__
> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
> +
> +#define hweight_(n, x, insn, setup, cout, cin) ({                         \
> +    unsigned int res_;                                                    \
> +    /*                                                                    \
> +     * For the function call the POPCNT input register needs to be marked \
> +     * modified as well. Set up a local variable of appropriate type      \
> +     * for this purpose.                                                  \
> +     */                                                                   \
> +    typeof((uint##n##_t)(x) + 0U) val_ = (x);                             \
> +    alternative_io(setup "; call generic_hweight" #n,                     \
> +                   insn, X86_FEATURE_POPCNT,                              \
> +                   ASM_OUTPUT2([res] "=a" (res_), [val] cout (val_)),     \
> +                   [src] cin (val_));                                     \
> +    res_;                                                                 \
> +})
> +#define hweight64(x) hweight_(64, x, POPCNT_64, "", "+D", "g")
> +#define hweight32(x) hweight_(32, x, POPCNT_32, "", "+D", "g")
> +#define hweight16(x) hweight_(16, x, "movzwl %w[src], %[val]; " POPCNT_32, \
> +                              "mov %[src], %[val]", "=&D", "rm")
> +#define hweight8(x)  hweight_( 8, x, "movzbl %b[src], %[val]; " POPCNT_32, \
> +                              "mov %[src], %[val]", "=&D", "rm")

Why not just convert types < 32 bits into uint32_t and avoid the asm
prefix? You are already converting them in hweight_ to an uintX_t.

Wouldn't that make the asm simpler, as you would then only need to make
sure the input is in %rdi and the output is fetched from %rax?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 14:16:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEf1-0007EO-Js; Thu, 14 May 2020 14:16:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f2eD=64=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jZEf0-0007EJ-ND
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:16:30 +0000
X-Inumbo-ID: 813b5452-95ed-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 813b5452-95ed-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 14:16:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jZEez-0003Uc-6M; Thu, 14 May 2020 15:16:29 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH] cri-args-hostlists: Do not include transcript in
 reports
Date: Thu, 14 May 2020 15:16:26 +0100
Message-Id: <20200514141626.29137-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These transcripts are huge and not useful.

Still include them if sg-execute-flight failed.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 cri-args-hostlists | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/cri-args-hostlists b/cri-args-hostlists
index 3578fe1c..28d576db 100644
--- a/cri-args-hostlists
+++ b/cri-args-hostlists
@@ -86,8 +86,11 @@ execute_flight () {
         ./sg-execute-flight $1 $2 >tmp/$1.transcript 2>&1
 	local rc=$?
 	set -e
-        cat tmp/$1.transcript
-	test $rc = 0
+        if [ "$rc" != 0 ]; then
+		cat tmp/$1.transcript
+		echo "rc=$rc"
+		exit 1
+	fi
 }
 
 start_email () {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 14:19:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEiL-0007Pn-3a; Thu, 14 May 2020 14:19:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f2eD=64=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jZEiJ-0007Pi-NX
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:19:55 +0000
X-Inumbo-ID: fb89b7e4-95ed-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb89b7e4-95ed-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 14:19:55 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jZEiI-0004pf-ES; Thu, 14 May 2020 15:19:54 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 0/4] Four misc small fixes
Date: Thu, 14 May 2020 15:19:47 +0100
Message-Id: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These four small fixes/improvements have been generated as the result
of my buster work.  They are not really related to buster.

Many more changes will follow...




From xen-devel-bounces@lists.xenproject.org Thu May 14 14:20:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEiP-0007QT-Cs; Thu, 14 May 2020 14:20:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f2eD=64=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jZEiO-0007QM-Jy
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:20:00 +0000
X-Inumbo-ID: fbb2d6ce-95ed-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fbb2d6ce-95ed-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 14:19:55 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jZEiI-0004pf-Ma; Thu, 14 May 2020 15:19:54 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 1/4] Executive: Do not print "shared ... marked ready"
 when not shared
Date: Thu, 14 May 2020 15:19:48 +0100
Message-Id: <20200514141951.29371-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
References: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Executive.pm | 9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)

diff --git a/Osstest/Executive.pm b/Osstest/Executive.pm
index e741f529..c3dc1261 100644
--- a/Osstest/Executive.pm
+++ b/Osstest/Executive.pm
@@ -1017,11 +1017,12 @@ sub executive_resource_shared_mark_ready ($$$) {
     my ($restype, $resname, $sharetype) = @_;
     # must run outside transaction
 
+    my $oldshr;
     my $what= "resource $restype $resname";
     $sharetype .= ' '.get_harness_rev();
 
     db_retry($dbh_tests, [qw(resources)], sub {
-        my $oldshr= resource_check_allocated_core($restype, $resname);
+        $oldshr= resource_check_allocated_core($restype, $resname);
         if (defined $oldshr) {
             die "$what shared $oldshr->{Type} not $sharetype"
                 unless $oldshr->{Type} eq $sharetype;
@@ -1053,7 +1054,11 @@ END
     }) {
        logm("post-mark-ready queue daemon prod failed: $@");
     }
-    logm("$restype $resname shared $sharetype marked ready");
+    if ($oldshr) {
+	logm("$restype $resname shared $sharetype marked ready");
+    } else {
+	logm("$restype $resname (not shared, $sharetype) is ready");
+    }
 }
 
 # hostalloc_maxwait_starvation
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 14:20:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEiU-00081v-PW; Thu, 14 May 2020 14:20:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f2eD=64=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jZEiT-0007x7-JE
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:20:05 +0000
X-Inumbo-ID: fbe02a0c-95ed-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fbe02a0c-95ed-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 14:19:55 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jZEiI-0004pf-Tl; Thu, 14 May 2020 15:19:55 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 2/4] ts-repeat-test: Honour repeat_mult
Date: Thu, 14 May 2020 15:19:49 +0100
Message-Id: <20200514141951.29371-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
References: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Nothing automatic uses repeat_mult, but this script should definitely
honour it.
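
The repeat-count arithmetic in the diff below is small but worth pinning
down: the repeat_mult runvar, when set, multiplies the configured number
of repetitions, and an unset runvar falls back to 1 via Perl's `||`. A
minimal Python transliteration of that logic (variable names mirror the
Perl; this is an illustrative sketch, not osstest code):

```python
def total_reps(reps, repeat_mult=None):
    """Mirror of `my $times = $r{repeat_mult} || 1; $reps *= $times;`.

    repeat_mult=None (or 0, matching Perl falsiness) leaves reps unchanged.
    """
    times = repeat_mult or 1
    return reps * times

print(total_reps(10))     # base repetitions only -> 10
print(total_reps(10, 3))  # repeat_mult=3 -> 30
```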

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-repeat-test | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/ts-repeat-test b/ts-repeat-test
index 4625add8..e6b52465 100755
--- a/ts-repeat-test
+++ b/ts-repeat-test
@@ -51,6 +51,10 @@ my $dumper = new Data::Dumper [\@cmdis], [qw(*cmdis)];
 $dumper->Indent(0);
 print $dumper->Dump,"\n";
 
+my $times = $r{repeat_mult} || 1;
+my $orgreps = $reps;
+$reps *= $times;
+
 foreach my $rep (1..$reps) {
     logm("========== rep $rep ==========");
     foreach my $cmdi (@cmdis) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 14:20:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:20:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEia-00087L-3R; Thu, 14 May 2020 14:20:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f2eD=64=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jZEiY-00086y-Jq
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:20:10 +0000
X-Inumbo-ID: fc0457ec-95ed-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc0457ec-95ed-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 14:19:56 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jZEiJ-0004pf-85; Thu, 14 May 2020 15:19:55 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 3/4] ts-repeat-test: Show total planned repeat count
 info
Date: Thu, 14 May 2020 15:19:50 +0100
Message-Id: <20200514141951.29371-4-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
References: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Message change only

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-repeat-test | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-repeat-test b/ts-repeat-test
index e6b52465..5e17c335 100755
--- a/ts-repeat-test
+++ b/ts-repeat-test
@@ -56,7 +56,7 @@ my $orgreps = $reps;
 $reps *= $times;
 
 foreach my $rep (1..$reps) {
-    logm("========== rep $rep ==========");
+    logm("========== rep $rep / $reps ($orgreps x $times) ==========");
     foreach my $cmdi (@cmdis) {
 	my $l = $cmdi->{L};
 	logm("---------- rep $rep @$l ----------");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 14:20:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:20:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEif-0008A0-Bg; Thu, 14 May 2020 14:20:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=f2eD=64=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jZEid-00089E-Kg
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:20:15 +0000
X-Inumbo-ID: fc289710-95ed-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc289710-95ed-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 14:19:56 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jZEiJ-0004pf-Fi; Thu, 14 May 2020 15:19:55 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 4/4] TestSupport: honour OSSTEST_PDU_MANUAL override
 env var
Date: Thu, 14 May 2020 15:19:51 +0100
Message-Id: <20200514141951.29371-5-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
References: <20200514141951.29371-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This takes effect on everything that uses selecthost().  The result is
that PDU operations are made manual.  This can be useful for testing
etc.
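
The override in the diff below is a wholesale replacement: when the
environment variable is set, the list of power-cycle approaches built so
far is discarded in favour of a single "manual" method. A Python sketch
of that control flow (an illustration only; the dict shape and function
name here are invented stand-ins for the Perl structures, and note that
Perl treats the string "0" as false where Python would not):

```python
import os

def power_cycle_approaches(configured):
    """Return the approach list, honouring the OSSTEST_PDU_MANUAL override.

    Mirrors the patch: if the env var is set (non-empty), the configured
    approaches are replaced by one manual-override entry.
    """
    approaches = list(configured)
    if os.environ.get("OSSTEST_PDU_MANUAL"):
        approaches = [{"Name": "manual-override", "Method": "manual"}]
    return approaches

os.environ["OSSTEST_PDU_MANUAL"] = "1"
print(power_cycle_approaches([{"Name": "pdu", "Method": "msw"}]))
# [{'Name': 'manual-override', 'Method': 'manual'}]
```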

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 3700a8fe..1e7da676 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1059,6 +1059,12 @@ sub power_cycle_host_setup ($) {
 	    MethObjs => power_cycle_parse_method($ho, $spec),
         };
     }
+    if ($ENV{OSSTEST_PDU_MANUAL}) {
+	@approaches = ({
+            Name => 'manual-override',
+            MethObjs => power_cycle_parse_method($ho, 'manual'),
+        });
+    }
     $ho->{PowerApproaches} = \@approaches;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 14:21:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:21:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEjQ-0008NB-MH; Thu, 14 May 2020 14:21:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZEjP-0008Mw-Mr
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:21:03 +0000
X-Inumbo-ID: 23f6d11c-95ee-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23f6d11c-95ee-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 14:21:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id CECB3AE52;
 Thu, 14 May 2020 14:21:04 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
To: Jan Beulich <jbeulich@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-3-jgross@suse.com>
 <f01cca9f-ba49-75bd-72c6-d0c638ed5e15@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <6b6912d3-1bb1-ac3d-7fc7-a8d2a2f2db9b@suse.com>
Date: Thu, 14 May 2020 16:21:00 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <f01cca9f-ba49-75bd-72c6-d0c638ed5e15@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 15:57, Jan Beulich wrote:
> On 11.05.2020 13:28, Juergen Gross wrote:
>> With support of core scheduling sched_unit_migrate_finish() gained a
>> call of sync_vcpu_execstate() as it was believed to be called as a
>> result of vcpu migration in any case.
>>
>> In case of migrating a vcpu away from a physical cpu for a short period
>> of time only this might not be true, so drop the call and let the lazy
>> state syncing do its job.
> 
> Replying here instead of on the patch 3 thread (and I'm sorry
> for mixing up function names there): By saying "for a short
> period of time only", do you imply without ever getting scheduled
> on the new (temporary) CPU? If so, I think I understand this
> change now, but then this could do with saying here. If not, I'm
> afraid I'm still lost.

I'll change the commit message to:

... for a short period of time only without ever being scheduled on the
selected new cpu ...


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 14:29:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:29:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZErV-0000NH-IQ; Thu, 14 May 2020 14:29:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a33M=64=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jZErU-0000NC-M6
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:29:24 +0000
X-Inumbo-ID: 4df6cd0e-95ef-11ea-ae69-bc764e2007e4
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.75]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4df6cd0e-95ef-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 14:29:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2BZVsjf+c438uDvCdiV4xwfZF+wKN8GYiL936qBJPPI=;
 b=2IOTN84UexVXYzOBcb01rEEOuifu+SKto8y1p5NCBK6o4XPvdXKvbfbgb6bMzmhmH1DmiOpdx16qOGztjv3EqODO5yTtNzPakApShgy/joYYNpzRgXtX6d34Ms8fVKknl5SsSykwSe84hu7cVkGFBz3yuS6NRoMmjR1GQcO1+hM=
Received: from DB6PR0801CA0051.eurprd08.prod.outlook.com (2603:10a6:4:2b::19)
 by DB6PR0801MB1703.eurprd08.prod.outlook.com (2603:10a6:4:2e::8) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.34; Thu, 14 May
 2020 14:29:21 +0000
Received: from DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::46) by DB6PR0801CA0051.outlook.office365.com
 (2603:10a6:4:2b::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.25 via Frontend
 Transport; Thu, 14 May 2020 14:29:21 +0000
Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT028.mail.protection.outlook.com (10.152.20.99) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3000.19 via Frontend Transport; Thu, 14 May 2020 14:29:21 +0000
Received: ("Tessian outbound fb9de21a7e90:v54");
 Thu, 14 May 2020 14:29:21 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6f67ce0d6b494af1
X-CR-MTA-TID: 64aa7808
Received: from 4ef0435808d9.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A3512FBA-7077-4E9E-B016-38DFBE18D576.1; 
 Thu, 14 May 2020 14:28:15 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4ef0435808d9.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 14 May 2020 14:28:15 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j+l8XD0dKrkplkqaxyZYOY1onvoVcj0IwCPbQFyEKVdyjUAmMcoAerqoUn5M31Q5RFnO10OZC400MzIhQ0F6zCJ8arVfFBCxk9889mqOR7tcjpCeWTwKwyfAkiiMF8DMdBdqpAIrE0l8NLYpE8Cj1c0AmwneM42lEmN3bu+IGqGmMNqJbq/RVDymSoGkB5ywqVpWCPj6HFymm6gxYvMA1UFb1ITaCiIYxFubSw295mz7ENf+EQYA+6Fh8yL86cL3q6fwXNN6Wbz3YhGjY3C3LI3x6UFkBPQgAQ/t0CrJNv9tnrbvwjXw2z25oc03/QNCGDT3q+whYCZ7Cvp9lZOsCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2BZVsjf+c438uDvCdiV4xwfZF+wKN8GYiL936qBJPPI=;
 b=mZkg2iRdHQJG1rcQLcz5HYKkCPELi/Gf9dETmDqkIgE+aiXRcv0IqmABB6FrHKnrruBbSNdG3T+3QeebuV+G7f7kohghJwtofBKgKzNtWHJTuhtvMHgPY9QqvvmCgi31RTvP3yk0r0HdDJ3yihJYvaanJnV4sa+u61kD7R7C+LfPqBEJyRzX22lXPf8w15Zy2YPYf+m+cWkaT48C4umDS4h2GfO33vf5PsHm3/L3lJQKLBixjNHjmbSXK1KUIDVCoN5zzjIWZvqeeI2zIQ6z3C1mbZVhEizy7+YjFfk9UB7UvKH3IHLxgmahRPu6xhHVspYGk3bkLT09Xoyw676AYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2BZVsjf+c438uDvCdiV4xwfZF+wKN8GYiL936qBJPPI=;
 b=2IOTN84UexVXYzOBcb01rEEOuifu+SKto8y1p5NCBK6o4XPvdXKvbfbgb6bMzmhmH1DmiOpdx16qOGztjv3EqODO5yTtNzPakApShgy/joYYNpzRgXtX6d34Ms8fVKknl5SsSykwSe84hu7cVkGFBz3yuS6NRoMmjR1GQcO1+hM=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3884.eurprd08.prod.outlook.com (2603:10a6:10:78::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.33; Thu, 14 May
 2020 14:28:13 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3000.022; Thu, 14 May 2020
 14:28:13 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Error during update_runstate_area with KPTI activated
Thread-Topic: Error during update_runstate_area with KPTI activated
Thread-Index: AQHWKfvm/FV2s2q8bUucgMUz0AtbkA==
Date: Thu, 14 May 2020 14:28:12 +0000
Message-ID: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: lists.xenproject.org; dkim=none (message not
 signed) header.d=none; lists.xenproject.org;
 dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 7310c15c-7a40-4031-a4f0-08d7f813317c
x-ms-traffictypediagnostic: DB7PR08MB3884:|DB6PR0801MB1703:
X-Microsoft-Antispam-PRVS: <DB6PR0801MB1703EAFBA446A7D218E028F09DBC0@DB6PR0801MB1703.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
x-forefront-prvs: 040359335D
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: xKO6i00nogS3mvdqGV9zfVkvo27AlemkE6za7E80g0IF1WQPJir105Kf4DXYppmozCPg3VmIkJKHf6a25FCAqXPNscemJyhHYKqoMF7wdvwr0iezkdVQi523qAQNacsPNYKkTARrQciSHvQPl6neuXMOfzh/r0tploKaW9H/IwoazNrz1SHax2W6kiwPqn4Ww9XCBWlN1shUMw7heTrf8HLxCt7fZQF9cgYRZyhqom6gQ6tBZIG/8ZHioefcHf0OWidQfilb6eDuj8KM4uENgh0GLPC3eig6Dmch2Apg/nWpTLyiRSfr1LaiYPpT09GNHx6yVAUs7cUP+Ezk1uCgJY4v/m6pxdlnxXma9zEWYpBlWpNw9FyeAj9ETSow67NpBAv8JPwQOZsuPp5h39NqPTEF8838pCKEoweUUKgqtAVZxYKxuTfBEh/TTVseaeXzS8OKQpx8ZcDqPwPt1Zg98GT9LOhekAqIJ9BM8Ws5HaGFpB2kERmYP29YkySSK1Vh+9ujFrhT6DZ9C8NHvsF6Pg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(376002)(346002)(136003)(366004)(39860400002)(6486002)(6506007)(6512007)(6916009)(36756003)(66476007)(66556008)(64756008)(76116006)(33656002)(5660300002)(15650500001)(8936002)(66946007)(186003)(91956017)(8676002)(316002)(26005)(66446008)(71200400001)(54906003)(2906002)(4326008)(86362001)(966005)(2616005)(478600001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: mrOvKUmHaKWSYijGVtUdWcbKNLGh2szfEUj/mtAfearIFOjk6jiKtT1Db/8saL2/P14rPQg47wHl2ATgfXEXydQdwf4KGbHW9i4SWPFFh9Jb6wg2VZ5QxuMYSXjiCydoY2SW2zFPFNWva4W/OC14nDSdb7uE6aGlT9+Pst+tY36o1zipe4dpvU80ZRM5yQ+TLZ7R/xqIYsXxRRoCge3UmaFZ6UxQDTNpEfL+UIYnzeVNFej0jXmvX7wx1jGcODjwd8hVrpNR5ewZpwjcsXTSd3u5wTntprewFzAm3Hw8yzFyRIaG7h2Muh4LbTjw7E04eglclfKv4LxvqzVIgDBSdWxanFhYAyjPrkYMdaHZEuH2kK5V/9qOSNIRvI431vZy0ghTWfA3xqe56W8bjyywzf7qh62uqt63kQ+g3OejhoDKSqbeO8TxMcwIvA3+NjShCfsAtlWJICdlZwVRfN29l36zN49TKy9O+mSul+EkFemwMz0OjaMLX/E0IcFTbM5U
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <4FE509AC1606E748A9D44279AAD27872@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3884
Original-Authentication-Results: lists.xenproject.org;
 dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT028.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(376002)(346002)(136003)(39860400002)(46966005)(82310400002)(15650500001)(33656002)(356005)(2906002)(47076004)(4326008)(336012)(966005)(70206006)(82740400003)(81166007)(6506007)(70586007)(6512007)(6486002)(478600001)(54906003)(316002)(8676002)(2616005)(5660300002)(6916009)(36756003)(26005)(107886003)(186003)(8936002)(86362001);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 3c16ffa8-07d8-47d2-c231-08d7f81308b9
X-Forefront-PRVS: 040359335D
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 0jWVszHPg3qWYBEB3lZDcoYSm5aegcCK/GGHwB6HtEkVQZAAm0ut2GgntTBtEvKZXqWKIHCptTmFpWTUFJ8t+25cglV1ExzOK0YsOjFMXZUlkIq2dKzzfiwBH42BdhQCq0j4sHw/Fc0sTbywxUCzHFlFCRyNYgawLh5p6AWlz9POXY7TZRd18uaGGYRXh9beQuMHkn2j864y4N5BASmIMO/NyrKRa+Ap2NIOj58UWyj0go4ZjjnU7Pq3/UyvmEG8lco4Wcqbw2y5lEanf++Hn/x4xP/sFlg+0cAHIl1SZZ9FGA55oSEg1s1lnqY6cvEd+2QPBQFzoVIeEcYQ4NbA8yKeqB9YZgDVvf9tbg7c3jz1mlyiSU67UIUUN7EW+dIAYDkvfR2uogdELVnvOL2qo7N7ardzbn8XtXsyQmSJ6WYyulQnzIFIsFPLO256SK4B0gjaHAAJYHmJSSR2Mir1zEFKGWpehh8xTP2eI0WX9McdkZ8ySitRMbH2fxGZHN5hkcMEIs4Lvlqj/hOsXR6I4VtpSY0cdH6A5cuJcv3v+Tn7hnUdIO0mXp03H1N6B0zRkw4SsPKNb+0QDiHwL0cW+GsRjmqM8cnqHdXNi4FBOrA=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 May 2020 14:29:21.4463 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7310c15c-7a40-4031-a4f0-08d7f813317c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1703
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

When executing linux on arm64 with KPTI activated (in Dom0 or in a DomU), I have a lot of walk page table errors like this:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0

After implementing a call trace, I found that the problem was coming from the update_runstate_area when linux has KPTI activated.

I have the following call trace:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
(XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
(XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
(XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
(XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
(XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
(XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
(XEN)    [<0000000000269524>] context_switch+0x58/0x70
(XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
(XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
(XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
(XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c

Discussing this subject with Stefano, he pointed me to a discussion started a year ago on this subject here:
https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html

And a patch was submitted:
https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html

I rebased this patch on current master and it is solving the problem I have seen.

It sounds to me like a good solution to introduce a VCPUOP_register_runstate_phys_memory_area to not depend on the area actually being mapped in the guest when a context switch is being done (which is actually the problem happening when a context switch is triggered while a guest is running in EL0).

Is there any reason why this was not merged in the end?

Thanks
Bertrand



From xen-devel-bounces@lists.xenproject.org Thu May 14 14:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZEwl-0001Ay-8Z; Thu, 14 May 2020 14:34:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZEwj-0001At-TN
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:34:49 +0000
X-Inumbo-ID: 0fcd3fd1-95f0-11ea-a49b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0fcd3fd1-95f0-11ea-a49b-12813bfff9fa;
 Thu, 14 May 2020 14:34:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=X0wdj3Z43K5vkGN5GCOA5tvsBi4iDCB2ihRNKiVbpqo=; b=Skr2CiwaV6CXzRawZrgeCSjDQ
 jOXYWBc9VM2IfqoniqBV1sqe8wv71Bf/ViuinNcef/i5BHDJWm5mRLPpr3TtGkG0fTaQ496tMyoYa
 kXTyHbsHrtazrHjvpvInHpCNsheJnROEzXHz/QoTq6rWykOsDe9wWiffbXVsmtNRXZl58=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZEwi-00014g-7I; Thu, 14 May 2020 14:34:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZEwh-0003c1-Ly; Thu, 14 May 2020 14:34:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZEwh-0008Qt-LO; Thu, 14 May 2020 14:34:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150175-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150175: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=d155e4aef35cd9c03f6db7030a956f83f33a1e99
X-Osstest-Versions-That: xen=f8644fe441abfd8de8b1f237229cfbe600a58701
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 14:34:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150175 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150175/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d155e4aef35cd9c03f6db7030a956f83f33a1e99
baseline version:
 xen                  f8644fe441abfd8de8b1f237229cfbe600a58701

Last test of basis   150173  2020-05-14 09:01:09 Z    0 days
Testing same since   150175  2020-05-14 12:00:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f8644fe441..d155e4aef3  d155e4aef35cd9c03f6db7030a956f83f33a1e99 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 14 14:50:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZFBs-0002uG-MN; Thu, 14 May 2020 14:50:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZFBq-0002uB-JF
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:50:26 +0000
X-Inumbo-ID: 3e7a9c40-95f2-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e7a9c40-95f2-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 14:50:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589467826;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=gVykKTj/P+WuV0tzCF97b3U8cBLCrTF9Uur5DWTWSQw=;
 b=ey4ka19cpFa/mDfxRubHg7grx+RhRky/ctPcJTqmtkSLJW5v9C40iDAb
 Zxwy1uaIPLxOSTt1EXHwvT1Vxk9SX0zdaDws/E3rEK0/1nvfuyZ3117FL
 7L3wHW1VMf7osGKU5A0rEnkEaf9f0XBfgCjquMDN/PVISI57RtZGuHL6v Q=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: LPAMW0Qs7hpqecft9acBMHafdNoBoaMDlN2MszvvxgoqtQzmS/nUlajxeKTdYmj8klg0qOBluQ
 pVdF39T5v/7/3aUGeafkwXXGGvc47Mtevj1Kb823Q9iWqiV0z8RLo3roGCdi8TOpkl89M2KgKi
 FG2QYufsFR3WvcitkM2bQaMHEUBpKiVnDofVUdoSAsA3U1GX+zX0T/BHvuDmwRP8lgPIgPY02P
 S7f2lOpmexqoRjNGZq7bTUcxWzsEXf8u8qjPjhDq2YG5cs9E222r2tBfvx7do+SDTsvZuWVbEH
 KFU=
X-SBRS: 2.7
X-MesageID: 17530258
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17530258"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.23212.178710.524294@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 15:50:20 +0100
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Subject: Re: [PATCH v18 2/2] tools/libxl: VM forking toolstack side
In-Reply-To: <b91c338ab8165b6e228b46bbd1853eb140ab69c7.1588772376.git.tamas.lengyel@intel.com>
References: <a59dabe3a40d4f3709d3ad6ca605523f180c2dc5.1588772376.git.tamas.lengyel@intel.com>
 <b91c338ab8165b6e228b46bbd1853eb140ab69c7.1588772376.git.tamas.lengyel@intel.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Tamas K Lengyel writes ("[PATCH v18 2/2] tools/libxl: VM forking toolstack side"):
> Add necessary bits to implement "xl fork-vm" commands. The command allows the
> user to specify how to launch the device model allowing for a late-launch model
> in which the user can execute the fork without the device model and decide to
> only later launch it.

Hi.

Sorry to be so late in reviewing this.  I will divert my main
attention to the API elements...

> +=item B<fork-vm> [I<OPTIONS>] I<domain-id>
> +
> +Create a fork of a running VM.  The domain will be paused after the operation
> +and remains paused while forks of it exist.

Do you mean "must remain paused" ?  And "The original domain" rather
than "The domain" ?

> +B<OPTIONS>
> +
> +=over 4
> +
> +=item B<-p>
> +
> +Leave the fork paused after creating it.

By default the fork runs right away, then, I take it.

> +=item B<--launch-dm>
> +
> +Specify whether the device model (QEMU) should be launched for the fork. Late
> +launch allows to start the device model for an already running fork.

It's not clear to me whether this launches the DM for an existing
fork, or specifies when forking that the DM should be run ?

Do you really mean that you can run a fork for a while with no DM ?
How does that work ?

Also you seem to have not documented the launch-dm operation ?

> +=item B<-C>
> +
> +The config file to use when launching the device model.  Currently required when
> +launching the device model.  Most config settings MUST match the parent domain
> +exactly, only change VM name, disk path and network configurations.

This is a libxl config file, right ?

> +=item B<-Q>
> +
> +The path to the qemu save file to use when launching the device model.  Currently
> +required when launching the device model.

Where would the user get one of these ?

I think this question has no good answer and this reveals a problem
with the API...

> +=item B<--fork-reset>
> +
> +Perform a reset operation of an already running fork.  Note that resetting may
> +be less performant then creating a new fork depending on how much memory the
> +fork has deduplicated during its runtime.

What is the semantic effect of a reset ?

> +=item B<--max-vcpus>
> +
> +Specify the max-vcpus matching the parent domain when not launching the dm.

What ?  This makes little sense to me.  You specify vm-fork
--max-vcpus and it changes the parent's max-vcpus ??

> +=item B<--allow-iommu>
> +
> +Specify to allow forking a domain that has IOMMU enabled. Only compatible with
> +forks using --launch-dm no.

Are there not some complex implications here ?  Maybe this doc needs a
caveat.

> +int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
> +                                uint32_t domid,
> +                                const libxl_asyncprogress_how *aop_console_how)
> +                                LIBXL_EXTERNAL_CALLERS_ONLY;
> +
> +int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid)
> +                            LIBXL_EXTERNAL_CALLERS_ONLY;
>  #endif

I'm afraid I found the code very hard to review.  In particular:

> -    if (!soft_reset) {
> -        struct xen_domctl_createdomain create = {
> -            .ssidref = info->ssidref,
> -            .max_vcpus = b_info->max_vcpus,
> -            .max_evtchn_port = b_info->event_channels,
> -            .max_grant_frames = b_info->max_grant_frames,
> -            .max_maptrack_frames = b_info->max_maptrack_frames,

I think this contains a lot of code motion.  There is probably some
other refactoring.


Can you please split this up into several patches ?  Code motion
should occur in patches that do nothing else.  Refactoring should be
broken down into pieces as small as possible, and separated from the
addition of new functionality.  So most of the patches should be
annotated "no functional change".

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 14:55:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:55:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZFGX-00036w-Dn; Thu, 14 May 2020 14:55:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZFGW-00036r-EX
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:55:16 +0000
X-Inumbo-ID: e7cd92f2-95f2-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7cd92f2-95f2-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 14:55:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0RnqNXznGXqN1PxA5WyujLnNeyGvsAj3nVlg00FAymc=; b=XzBBXpe9R1ZU4SsSM9CyuEvag
 UnKm7unrjka6MLn1ytYQQIXv7xPvcXJ8BfV1E6FX61A4ntx/B1HDIm0PXBwv18OFlZFTxRQ/oDKqu
 AGtr1dhqjTOv/jZgEY/NFzN7DSI2vYmomu3l/SIWv9hX/LXHzpQmFSVpj8irL1huEw6uc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZFGP-0001UI-1e; Thu, 14 May 2020 14:55:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZFGO-0004J2-Qz; Thu, 14 May 2020 14:55:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZFGO-0007OQ-Pd; Thu, 14 May 2020 14:55:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150172-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150172: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=cbaf2369956178e68fb714a30dc86cf768dd596a
X-Osstest-Versions-That: linux=f015b86259a520ad886523d9ec6fdb0ed80edc38
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 14:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150172 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150172/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150122
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150143
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                cbaf2369956178e68fb714a30dc86cf768dd596a
baseline version:
 linux                f015b86259a520ad886523d9ec6fdb0ed80edc38

Last test of basis   150143  2020-05-12 00:39:21 Z    2 days
Testing same since   150172  2020-05-14 06:09:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Allen Pais <allen.pais@oracle.com>
  Amir Goldstein <amir73il@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anthony Felice <tony.felice@timesys.com>
  Anton Eidelman <anton@lightbitslabs.com>
  Arnd Bergmann <arnd@arndb.de>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Bjørn Mork <bjorn@mork.no>
  Bryan O'Donoghue <bryan.odonoghue@linaro.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christoph Hellwig <hch@lst.de>
  Dan Carpenter <dan.carpenter@oracle.com>
  David Ahern <dsahern@kernel.org>
  David Hildenbrand <david@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dejin Zheng <zhengdejin5@gmail.com>
  Eran Ben Elisha <eranbe@mellanox.com>
  Erez Shitrit <erezsh@mellanox.com>
  Eric Dumazet <edumazet@google.com>
  Evan Quan <evan.quan@amd.com>
  Florian Fainelli <f.fainelli@gmail.com>
  George Spelvin <lkml@sdf.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guillaume Nault <gnault@redhat.com>
  H. Nikolaus Schaller <hns@goldelico.com>
  Henry Willard <henry.willard@oracle.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Ido Schimmel <idosch@mellanox.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  Ivan Delalande <colona@arista.com>
  Jakub Kicinski <kuba@kernel.org>
  Jan Kara <jack@suse.cz>
  Janakarajan Natarajan <Janakarajan.Natarajan@amd.com>
  Jann Horn <jannh@google.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jason Gerecke <jason.gerecke@wacom.com>
  Jason Gerecke <killertofu@gmail.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Jere Leppänen <jere.leppanen@nokia.com>
  Jiri Kosina <jkosina@suse.cz>
  Jiri Pirko <jiri@mellanox.com>
  Joerg Roedel <jroedel@suse.de>
  Johan Hovold <johan@kernel.org>
  Johannes Weiner <hannes@cmpxchg.org>
  Jon Maloy <jmaloy@redhat.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Julia Lawall <Julia.Lawall@inria.fr>
  Keith Busch <kbusch@kernel.org>
  Khazhismel Kumykov <khazhy@google.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luis Chamberlain <mcgrof@kernel.org>
  Luis Henriques <lhenriques@suse.com>
  Manfred Spraul <manfred@colorfullife.com>
  Marc Zyngier <maz@kernel.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Mark Rutland <mark.rutland@arm.com>
  Masami Hiramatsu <mhiramat@kernel.org>
  Matt Jolly <Kangie@footclan.ninja>
  Maxim Levitsky <mlevitsk@redhat.com>
  Mel Gorman <mgorman@techsingularity.net>
  Michael Chan <michael.chan@broadcom.com>
  Michal Hocko <mhocko@kernel.org>
  Michal Hocko <mhocko@suse.com>
  Michal Simek <michal.simek@xilinx.com>
  Miroslav Benes <mbenes@suse.cz>
  Moshe Shemesh <moshe@mellanox.com>
  Nicolas Ferre <nicolas.ferre@microchip.com>
  Nicolas Pitre <nico@fluxnic.net>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Oleg Nesterov <oleg@redhat.com>
  Oliver Neukum <oneukum@suse.com>
  Oscar Carter <oscar.carter@gmx.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Cercueil <paul@crapouillou.net>
  Peter Chen <peter.chen@nxp.com>
  Qiushi Wu <wu000273@umn.edu>
  Roman Mashak <mrv@mojatatu.com>
  Roman Penyaev <rpenyaev@suse.de>
  Saeed Mahameed <saeedm@mellanox.com>
  Sagi Grimberg <sagi@grimberg.me>
  Sasha Levin <sashal@kernel.org>
  Scott Dial <scott@scottdial.com>
  Sean Christopherson <sean.j.christopherson@intel.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Shubhrajyoti Datta <shubhrajyoti.datta@xilinx.com>
  Simon Wunderlich <sw@simonwunderlich.de>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Sven Eckelmann <sven@narfation.org>
  Tariq Toukan <tariqt@mellanox.com>
  Tejun Heo <tj@kernel.org>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tuong Lien <tuong.t.lien@dektech.com.au>
  Vasundhara Volam <vasundhara-v.volam@broadcom.com>
  Vincent Chen <vincent.chen@sifive.com>
  Will Deacon <will@kernel.org>
  Willem de Bruijn <willemb@google.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yafang Shao <laoar.shao@gmail.com>
  Yash Shah <yash.shah@sifive.com>
  Ying Xue <ying.xue@windriver.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   f015b86259a5..cbaf23699561  cbaf2369956178e68fb714a30dc86cf768dd596a -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu May 14 14:56:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:56:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZFHq-0003Bm-PK; Thu, 14 May 2020 14:56:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZFHp-0003Bd-SJ
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:56:37 +0000
X-Inumbo-ID: 1bbab860-95f3-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bbab860-95f3-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 14:56:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 96733AC6E;
 Thu, 14 May 2020 14:56:37 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] xen: add runtime parameter access support to
 hypfs
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-10-jgross@suse.com>
 <a6c10680-d570-dabb-61ad-627591d08b0e@suse.com>
 <76ed2db5-6091-959a-8224-0a77e9cc4c45@suse.com>
 <76cf4476-f8b8-dc44-9e68-bfa92a3fcd2a@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <33daeea9-a038-b153-44b5-d9a8a11ae21f@suse.com>
Date: Thu, 14 May 2020 16:56:33 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <76cf4476-f8b8-dc44-9e68-bfa92a3fcd2a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 14:10, Jan Beulich wrote:
> On 14.05.2020 13:48, Jürgen Groß wrote:
>> On 14.05.20 12:20, Jan Beulich wrote:
>>> On 08.05.2020 17:34, Juergen Gross wrote:
>>>> --- a/xen/common/grant_table.c
>>>> +++ b/xen/common/grant_table.c
>>>> @@ -85,8 +85,43 @@ struct grant_table {
>>>>        struct grant_table_arch arch;
>>>>    };
>>>>    -static int parse_gnttab_limit(const char *param, const char *arg,
>>>> -                              unsigned int *valp)
>>>> +unsigned int __read_mostly opt_max_grant_frames = 64;
>>>> +static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
>>>> +
>>>> +#ifdef CONFIG_HYPFS
>>>> +#define GRANT_CUSTOM_VAL_SZ  12
>>>> +static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
>>>> +static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
>>>> +
>>>> +static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
>>>> +                              char *parval)
>>>> +{
>>>> +    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
>>>> +    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
>>>> +}
>>>> +
>>>> +static void __init gnttab_max_frames_init(struct param_hypfs *par)
>>>> +{
>>>> +    update_gnttab_par(par, opt_max_grant_frames, opt_max_grant_frames_val);
>>>> +}
>>>> +
>>>> +static void __init max_maptrack_frames_init(struct param_hypfs *par)
>>>> +{
>>>> +    update_gnttab_par(par, opt_max_maptrack_frames,
>>>> +                      opt_max_maptrack_frames_val);
>>>> +}
>>>> +#else
>>>> +#define opt_max_grant_frames_val    NULL
>>>> +#define opt_max_maptrack_frames_val NULL
>>>
>>> This looks latently dangerous to me (in case new uses of these
>>> two identifiers appeared), but I guess my alternative suggestion
>>> will be at best controversial, too:
>>>
>>> #define update_gnttab_par(par, val, unused) update_gnttab_par(par, val)
>>> #define parse_gnttab_limit(par, arg, valp, unused) parse_gnttab_limit(par, arg, valp)
>>>
>>> (placed right here)
>>
>> Who else would use those identifiers not related to hypfs?
> 
> I can't see an obvious possible use, but people get creative, i.e.
> you never know. Passing NULL into a function without it being
> blindingly obvious that it won't ever get (de)referenced is an at
> least theoretical risk imo.

Hmm, what about using a special type for those content variables?
Something like:

#ifdef CONFIG_HYPFS
#define hypfs_string_var            char *
#else
#define hypfs_string_var            char
#define opt_max_grant_frames_val    0
#define opt_max_maptrack_frames_val 0
#endif

And then use that as type for function parameters? This should make
dereferencing pretty hard.

Other than that I have no really good idea how to avoid this problem.

> 
>>>> +#define custom_runtime_set_var_sz(parfs, var, sz) \
>>>> +    { \
>>>> +        (parfs)->hypfs.content = var; \
>>>> +        (parfs)->hypfs.e.max_size = sz; \
>>>
>>> var and sz want parentheses around them.
>>
>> Fine with me, but out of curiosity: what can go wrong without? Are
>> you thinking of multi-statement arguments?
> 
> Well, as just said in the reply on the patch 4 thread, you have
> a point about this being the right side of an assignment.
> Nevertheless such uses would look more consistent if parenthesized.
> The only cases where I see it reasonable to omit parentheses are
> - parameters made the operands of # or ##,
> - parameters handed on to further macros / functions unaltered,
> - parameters representing struct/union field names,
> - perhaps other special cases along the lines of the above that
>    I can't seem to be able to think of right now.

Okay.

> 
>>>> +/* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
>>>> +#define custom_runtime_only_param(_name, _var, initfunc) \
>>>
>>> I've started noticing it here, but the issue exists further up
>>> (and down) as well - please can you avoid identifiers with
>>> leading underscores that are in violation of the C standard?
>>> Even more so that here you're not even consistent across
>>> macro parameter names.
>>
>> Basically I only extended the macro to take another parameter and I
>> omitted the underscore exactly for the reason you mentioned.
>>
>> In case you like it better I can prepend a patch to the series dropping
>> all the leading single underscores from the macros in param.h.
> 
> That's code churn I don't view as strictly necessary - adjusting
> these can be done when they get touched anyway. But here we have
> a whole new body of code.

Okay.

> 
>>>> +#define param_2_parfs(par)  NULL
>>>
>>> Along the lines of the earlier comment, this looks latently dangerous.
>>
>> In which way? How can the empty struct be dereferenced in a way not
>> resulting in build time errors, other than using a cast which would
>> be obviously wrong for the standard case when CONFIG_HYPFS is defined?
> 
> Have the result of the macro passed to a function taking void *, e.g.
> memcpy().

Any function like this needs either a guaranteed minimum size or a
specific size as a parameter. I can't imagine a valid use case leading
to problems, TBH.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 14:58:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZFJi-0003Md-9A; Thu, 14 May 2020 14:58:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZFJh-0003MX-0m
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:58:33 +0000
X-Inumbo-ID: 604e5f0e-95f3-11ea-a49f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 604e5f0e-95f3-11ea-a49f-12813bfff9fa;
 Thu, 14 May 2020 14:58:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F0B11ACA1;
 Thu, 14 May 2020 14:58:32 +0000 (UTC)
Subject: Re: [PATCH v8 12/12] xen: remove XEN_SYSCTL_set_parameter support
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-13-jgross@suse.com>
 <2c962dd4-5af6-e09e-d712-18f8a92b4a92@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <92b11704-7e60-e38a-4232-f4ab6e0258e0@suse.com>
Date: Thu, 14 May 2020 16:58:29 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2c962dd4-5af6-e09e-d712-18f8a92b4a92@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 15:03, Jan Beulich wrote:
> On 08.05.2020 17:34, Juergen Gross wrote:
> 
> Besides the changes made, wouldn't it be worthwhile to change
> HYPFS Kconfig help wording from "result in some features not
> being available" to something mentioning runtime parameter
> setting in particular, and perhaps also from "might" to "will"?

Yes, good idea.

> 
>> --- a/xen/arch/x86/hvm/vmx/vmcs.c
>> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
>> @@ -71,27 +71,6 @@ static bool __read_mostly opt_ept_pml = true;
>>   static s8 __read_mostly opt_ept_ad = -1;
>>   int8_t __read_mostly opt_ept_exec_sp = -1;
>>   
>> -#ifdef CONFIG_HYPFS
>> -static char opt_ept_setting[10];
>> -
>> -static void update_ept_param(void)
>> -{
>> -    if ( opt_ept_exec_sp >= 0 )
>> -        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
>> -                 opt_ept_exec_sp);
>> -}
>> -
>> -static void __init init_ept_param(struct param_hypfs *par)
>> -{
>> -    update_ept_param();
>> -    custom_runtime_set_var(par, opt_ept_setting);
>> -}
>> -#else
>> -static void update_ept_param(void)
>> -{
>> -}
>> -#endif
>> -
>>   static int __init parse_ept_param(const char *s)
>>   {
>>       const char *ss;
>> @@ -118,6 +97,22 @@ static int __init parse_ept_param(const char *s)
>>   }
>>   custom_param("ept", parse_ept_param);
>>   
>> +#ifdef CONFIG_HYPFS
>> +static char opt_ept_setting[10];
>> +
>> +static void update_ept_param(void)
>> +{
>> +    if ( opt_ept_exec_sp >= 0 )
>> +        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
>> +                 opt_ept_exec_sp);
>> +}
>> +
>> +static void __init init_ept_param(struct param_hypfs *par)
>> +{
>> +    update_ept_param();
>> +    custom_runtime_set_var(par, opt_ept_setting);
>> +}
> 
> If I'm not mistaken this is pure code movement, and solely to be
> able to have ...
> 
>> +
>>   static int parse_ept_param_runtime(const char *s);
>>   custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
>>   
>> @@ -172,6 +167,7 @@ static int parse_ept_param_runtime(const char *s)
>>   
>>       return 0;
>>   }
>> +#endif
> 
> ... a single #ifdef region now including a few more lines. No
> strict need to change it, but couldn't the earlier patch have
> inserted the code in its final place right away?

I can do that.

> 
>> @@ -1106,7 +1090,7 @@ struct xen_sysctl {
>>   #define XEN_SYSCTL_get_cpu_levelling_caps        25
>>   #define XEN_SYSCTL_get_cpu_featureset            26
>>   #define XEN_SYSCTL_livepatch_op                  27
>> -#define XEN_SYSCTL_set_parameter                 28
>> +/* #define XEN_SYSCTL_set_parameter                 28 */
> 
> Nit: There are now 3 too many padding spaces. Granted that's
> how it was done for XEN_SYSCTL_tmem_op ...

You said: like it was done with tmem. ;-)

> 
> In any event, as before,
> Acked-by: Jan Beulich <jbeulich@suse.com>

Thanks,


Juergen



From xen-devel-bounces@lists.xenproject.org Thu May 14 14:59:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 14:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZFKs-0003U4-KJ; Thu, 14 May 2020 14:59:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZFKs-0003Tx-55
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 14:59:46 +0000
X-Inumbo-ID: 8c476876-95f3-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c476876-95f3-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 14:59:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 69708AD41;
 Thu, 14 May 2020 14:59:47 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200511112829.5500-1-jgross@suse.com>
 <20200511112829.5500-3-jgross@suse.com>
 <f01cca9f-ba49-75bd-72c6-d0c638ed5e15@suse.com>
 <6b6912d3-1bb1-ac3d-7fc7-a8d2a2f2db9b@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d2614d09-b9c2-6ba4-2a78-1d6f483f9af6@suse.com>
Date: Thu, 14 May 2020 16:59:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <6b6912d3-1bb1-ac3d-7fc7-a8d2a2f2db9b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 16:21, Jürgen Groß wrote:
> On 14.05.20 15:57, Jan Beulich wrote:
>> On 11.05.2020 13:28, Juergen Gross wrote:
>>> With support of core scheduling sched_unit_migrate_finish() gained a
>>> call of sync_vcpu_execstate() as it was believed to be called as a
>>> result of vcpu migration in any case.
>>>
>>> In case of migrating a vcpu away from a physical cpu for a short period
>>> of time only this might not be true, so drop the call and let the lazy
>>> state syncing do its job.
>>
>> Replying here instead of on the patch 3 thread (and I'm sorry
>> for mixing up function names there): By saying "for a short
>> period of time only", do you imply without ever getting scheduled
>> on the new (temporary) CPU? If so, I think I understand this
>> change now, but then this could do with saying here. If not, I'm
>> afraid I'm still lost.
> 
> I'll change the commit message to:
> 
> ... for a short period of time only without ever being scheduled on the
> selected new cpu ...

And then
Reviewed-by: Jan Beulich <jbeulich@suse.com>
for both this one and patch 3 (ideally with the one unnecessary hunk
dropped).

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:02:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:02:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZFNL-0004HR-1f; Thu, 14 May 2020 15:02:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZFNK-0004HM-7b
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:02:18 +0000
X-Inumbo-ID: e6cfaf38-95f3-11ea-a4a3-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6cfaf38-95f3-11ea-a4a3-12813bfff9fa;
 Thu, 14 May 2020 15:02:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AE37EAC7B;
 Thu, 14 May 2020 15:02:18 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] xen: add runtime parameter access support to
 hypfs
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-10-jgross@suse.com>
 <a6c10680-d570-dabb-61ad-627591d08b0e@suse.com>
 <76ed2db5-6091-959a-8224-0a77e9cc4c45@suse.com>
 <76cf4476-f8b8-dc44-9e68-bfa92a3fcd2a@suse.com>
 <33daeea9-a038-b153-44b5-d9a8a11ae21f@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <95a5644c-e208-57bc-2f47-13581a16b568@suse.com>
Date: Thu, 14 May 2020 17:02:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <33daeea9-a038-b153-44b5-d9a8a11ae21f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 16:56, Jürgen Groß wrote:
> On 14.05.20 14:10, Jan Beulich wrote:
>> On 14.05.2020 13:48, Jürgen Groß wrote:
>>> On 14.05.20 12:20, Jan Beulich wrote:
>>>> On 08.05.2020 17:34, Juergen Gross wrote:
>>>>> --- a/xen/common/grant_table.c
>>>>> +++ b/xen/common/grant_table.c
>>>>> @@ -85,8 +85,43 @@ struct grant_table {
>>>>>        struct grant_table_arch arch;
>>>>>    };
>>>>>    -static int parse_gnttab_limit(const char *param, const char *arg,
>>>>> -                              unsigned int *valp)
>>>>> +unsigned int __read_mostly opt_max_grant_frames = 64;
>>>>> +static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
>>>>> +
>>>>> +#ifdef CONFIG_HYPFS
>>>>> +#define GRANT_CUSTOM_VAL_SZ  12
>>>>> +static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
>>>>> +static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
>>>>> +
>>>>> +static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
>>>>> +                              char *parval)
>>>>> +{
>>>>> +    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
>>>>> +    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
>>>>> +}
>>>>> +
>>>>> +static void __init gnttab_max_frames_init(struct param_hypfs *par)
>>>>> +{
>>>>> +    update_gnttab_par(par, opt_max_grant_frames, opt_max_grant_frames_val);
>>>>> +}
>>>>> +
>>>>> +static void __init max_maptrack_frames_init(struct param_hypfs *par)
>>>>> +{
>>>>> +    update_gnttab_par(par, opt_max_maptrack_frames,
>>>>> +                      opt_max_maptrack_frames_val);
>>>>> +}
>>>>> +#else
>>>>> +#define opt_max_grant_frames_val    NULL
>>>>> +#define opt_max_maptrack_frames_val NULL
>>>>
>>>> This looks latently dangerous to me (in case new uses of these
>>>> two identifiers appeared), but I guess my alternative suggestion
>>>> will be at best controversial, too:
>>>>
>>>> #define update_gnttab_par(par, val, unused) update_gnttab_par(par, val)
>>>> #define parse_gnttab_limit(par, arg, valp, unused) parse_gnttab_limit(par, arg, valp)
>>>>
>>>> (placed right here)
>>>
>>> Who else would use those identifiers not related to hypfs?
>>
>> I can't see an obvious possible use, but people get creative, i.e.
>> you never know. Passing NULL into a function without it being
>> blindingly obvious that it won't ever get (de)referenced is an at
>> least theoretical risk imo.
> 
> Hmm, what about using a special type for those content variables?
> Something like:
> 
> #ifdef CONFIG_HYPFS
> #define hypfs_string_var            char *
> #else
> #define hypfs_string_var            char
> #define opt_max_grant_frames_val    0
> #define opt_max_maptrack_frames_val 0
> #endif
> 
> And then use that as type for function parameters? This should make
> dereferencing pretty hard.
> 
> Other than that I have no really good idea how to avoid this problem.

IOW (as suspected) you don't like my suggestion? Personally I think
yours, having more #define-s, is at least not better than mine.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:13:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:13:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZFY3-0005EL-3J; Thu, 14 May 2020 15:13:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lICV=64=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jZFY1-0005EG-QL
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:13:21 +0000
X-Inumbo-ID: 7200d838-95f5-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7200d838-95f5-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 15:13:20 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id l11so4686858wru.0
 for <xen-devel@lists.xenproject.org>; Thu, 14 May 2020 08:13:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=jenYmo923njm2GR8Qf+VdNJ4UtHrNZ8SRfJaSixVhk0=;
 b=MYgw5BUwHF1fWCyks6ncvEWUE8VlxsQFhQxYLj3NnoXHp0BfnV4H5r6i/UxafvoxlK
 hVLx+HyOmpy28qxeuwNWgvqK4GUdr3TFrFVU9q+17TsshYVTE2HAd6pZcWSy0uTIVDDF
 gTUYRrLG7Q0e+dnCXUJ6yBeI4mYxWYsZWFH+aD3DvkujkFArbc6q0BYKfvAuMdyd800i
 KW3QRtd4s1foOLgVhMVThtqpo3f1dTBfW3NuSkcLSM1EfrLIDvUjp4Ti9VoNkgW+b9OF
 ++xEdPkTf7IxjmdS79SwMFPm0uGRCjnOe67ABkCQR9CiSbuFFAY5X6SvWL2J+PlxX9/1
 tmSg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=jenYmo923njm2GR8Qf+VdNJ4UtHrNZ8SRfJaSixVhk0=;
 b=hQ4Pp5VSAg4kZGgTqhu6jUyofP1JeV4oLJjpL3OWeXnz8sXf9LrsDTtI8vk+FXAXuM
 PPhCmoWKzaTHg95YMNE9N3zHD/m3Wg1OTlMRToJZviYfU8okq5kMZJImUfjNiilDHX9s
 q7IC9ADDjugWnYRQ/F7sZFTbjWVCKumB2YVNGebgJH0M3PtpnzbcduSBjLOcPGz+vBq7
 o99Vp+vKH00IiIcrPHNANHcg/qW4j5MWO5WLxMZOQiaRV0HfXrwF3I2K5k4U/+wm2foZ
 hWxyeqcpvcZYwBcRt71qrfU31srwFu1mj25j11lCd4wJVtimiotqLLnL1aoZdNhA9MZR
 v7HQ==
X-Gm-Message-State: AOAM532h2rbyUYv3oAmP0KNFL9OMOGFDGwp7WJumKXwyAAUBLOFc7yT0
 V9wcEXMG4cjUMAS1+E1WGxtoCfoJWJSe7mNR370=
X-Google-Smtp-Source: ABdhPJxC9nH3Y5w2qHEWJl4W+ZNmvsIPznD6lG8O79Mwfajn0EzFXlRYTnQVl4L9rwt7syiCyVZAMq4imhlSIH6MOjs=
X-Received: by 2002:adf:fe07:: with SMTP id n7mr5870911wrr.259.1589469199479; 
 Thu, 14 May 2020 08:13:19 -0700 (PDT)
MIME-Version: 1.0
References: <a59dabe3a40d4f3709d3ad6ca605523f180c2dc5.1588772376.git.tamas.lengyel@intel.com>
 <b91c338ab8165b6e228b46bbd1853eb140ab69c7.1588772376.git.tamas.lengyel@intel.com>
 <24253.23212.178710.524294@mariner.uk.xensource.com>
In-Reply-To: <24253.23212.178710.524294@mariner.uk.xensource.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Thu, 14 May 2020 09:12:44 -0600
Message-ID: <CABfawhkQtxd1NX7Gx9e2Eefc=tHwgDzPp8TEYZBc8B2xtE4rmQ@mail.gmail.com>
Subject: Re: [PATCH v18 2/2] tools/libxl: VM forking toolstack side
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 14, 2020 at 8:52 AM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Tamas K Lengyel writes ("[PATCH v18 2/2] tools/libxl: VM forking toolstack side"):
> > Add necessary bits to implement "xl fork-vm" commands. The command allows the
> > user to specify how to launch the device model allowing for a late-launch model
> > in which the user can execute the fork without the device model and decide to
> > only later launch it.
>
> Hi.
>
> Sorry to be so late in reviewing this.  I will divert my main
> attention to the API elements...
>
> > +=item B<fork-vm> [I<OPTIONS>] I<domain-id>
> > +
> > +Create a fork of a running VM.  The domain will be paused after the operation
> > +and remains paused while forks of it exist.
>
> Do you mean "must remain paused" ?  And "The original domain" rather
> than "The domain" ?

Yes, I mean the original domain.

>
> > +B<OPTIONS>
> > +
> > +=over 4
> > +
> > +=item B<-p>
> > +
> > +Leave the fork paused after creating it.
>
> By default the fork runs right away, then, I take it.

The same route is taken as when you run "xl restore", so yes. This
applies if you are launching the device model. Without launching the
device model we don't go down the "xl restore" path, so the fork stays
paused in that case.

>
> > +=item B<--launch-dm>
> > +
> > +Specify whether the device model (QEMU) should be launched for the fork. Late
> > +launch allows to start the device model for an already running fork.
>
> It's not clear to me whether this launches the DM for an existing
> fork, or specify when forking that the DM should be run ?

It's possible to do both. You can create a fork and launch the device
model for it right away, or you can create a fork, unpause it, and
only launch the device model when it's actually necessary.

>
> Do you really mean that you can run a fork for a while with no DM ?
> How does that work ?
>
> Also you seem to have not documented the launch-dm operation ?

It's possible I missed it.

>
> > +=item B<-C>
> > +
> > +The config file to use when launching the device model.  Currently required when
> > +launching the device model.  Most config settings MUST match the parent domain
> > +exactly, only change VM name, disk path and network configurations.
>
> This is a libxl config file, right ?

Yes.

>
> > +=item B<-Q>
> > +
> > +The path to the qemu save file to use when launching the device model.  Currently
> > +required when launching the device model.
>
> Where would the user get one of these ?

Generate it by connecting to the QMP socket of the parent domain and
issuing the command that saves the device model state. See the cover
letter to the series; I stopped sending the cover letter since there is
only this one outstanding patch now.

>
> I think this question has no good answer and this reveals a problem
> with the API...

I don't know what "problem" you are referring to. We deliberately
chose not to include saving the qemu save file every time a fork is
made because for our use-case you only need to generate the qemu save
file once. Doing it for every fork would be a huge waste of time since we
are spinning off forks from the same state hundreds of thousands of
times. There is no need to regenerate the same save file for each.

>
> > +=item B<--fork-reset>
> > +
> > +Perform a reset operation of an already running fork.  Note that resetting may
> > +be less performant than creating a new fork depending on how much memory the
> > +fork has deduplicated during its runtime.
>
> What is the semantic effect of a reset ?

I don't understand the question.

>
> > +=item B<--max-vcpus>
> > +
> > +Specify the max-vcpus matching the parent domain when not launching the dm.
>
> What ?  This makes little sense to me.  You specify vm-fork
> --max-vcpus and it changes the parent's max-vcpus ??

No. You need the max-vcpus value when you create a fork. The domain
create hypercall needs it to set the domain up. I originally wanted to
extend the domain create hypercall so this could be copied by the
hypervisor but the hypervisor maintainers were against changing that
hypercall. So we are left with having to pass it manually.

>
> > +=item B<--allow-iommu>
> > +
> > +Specify to allow forking a domain that has IOMMU enabled. Only compatible with
> > +forks using --launch-dm no.
>
> Are there not some complex implications here ?  Maybe this doc needs a
> caveat.

The only caveat is that this option is available only for forks that
have no device model launched for them. We use it for fuzzing the device
driver of the IOMMU device in the forks.

>
> > +int libxl_domain_fork_launch_dm(libxl_ctx *ctx, libxl_domain_config *d_config,
> > +                                uint32_t domid,
> > +                                const libxl_asyncprogress_how *aop_console_how)
> > +                                LIBXL_EXTERNAL_CALLERS_ONLY;
> > +
> > +int libxl_domain_fork_reset(libxl_ctx *ctx, uint32_t domid)
> > +                            LIBXL_EXTERNAL_CALLERS_ONLY;
> >  #endif
>
> I'm afraid I found the code very hard to review.  In particular:
>
> > -    if (!soft_reset) {
> > -        struct xen_domctl_createdomain create = {
> > -            .ssidref = info->ssidref,
> > -            .max_vcpus = b_info->max_vcpus,
> > -            .max_evtchn_port = b_info->event_channels,
> > -            .max_grant_frames = b_info->max_grant_frames,
> > -            .max_maptrack_frames = b_info->max_maptrack_frames,
>
> I think this contains a lot of code motion.  There is probably some
> other refactoring.
>
>
> Can you please split this up into several patches ?  Code motion
> should occur in patches that do nothing else.  Refactoring should be
> broken down into pieces as small as possible, and separated from the
> addition of new functionality.  So most of the patches should be
> annotated "no functional change".

I understand that this patch mixes code movement and new functionality,
which makes it harder to review. The code movement involves no
functional changes to existing code, only splitting some existing
functions so that parts of them are reusable for the fork case while
skipping the parts that are not relevant.

Unfortunately I won't be able to split this up into multiple patches, as
I found this codebase impossibly hard to work with to begin with. It's
callback hell with like 15 different structures being created and freed
left and right. Add to that that it constantly changes, even just in the
last couple of months, so just rebasing it has been a massive pain.
Forget about backporting this patch too: everything else I was able to
easily apply to Xen 4.13, but libxl is an absolute no-go since it has so
much code churn. Since this feature is not required for any of our
use-cases - it is only to make it possible for other potential users to
use forks where they need them to be more fully featured - I won't be
able to assign the time it would require to split this up further. It
would probably take me weeks, which I can't justify at the moment.

Tamas


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:33:28 2020
Date: Thu, 14 May 2020 17:32:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
Message-ID: <20200514153252.GE54375@Air-de-Roger>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
 <20200514131021.GB54375@Air-de-Roger>
 <2e9c7c05-e42c-52d4-f48c-9ecc8b14a1a7@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2e9c7c05-e42c-52d4-f48c-9ecc8b14a1a7@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>

On Thu, May 14, 2020 at 03:38:18PM +0200, Jan Beulich wrote:
> On 14.05.2020 15:10, Roger Pau Monné wrote:
> > On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
> >> While from just a single Skylake system it is already clear that we
> >> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
> >> documented to be used for display purposes only anyway), logging this
> >> information may still give us some reference in case of problems as well
> >> as for future work. Additionally on the AMD side it is unclear whether
> >> the deviation between reported and measured frequencies is because of us
> >> not doing well, or because of nominal and actual frequencies being quite
> >> far apart.
> > 
> > Can you add some reference to the AMD implementation? I've looked at
> > the PMs and haven't been able to find a description of some of the
> > MSRs, like 0xC0010064.
> 
> Take a look at
> 
> https://developer.amd.com/resources/developer-guides-manuals/
> 
> I'm unconvinced a reference needs adding here.

Do you think it would be sensible to introduce some defines for at
least 0xC0010064? (ie: MSR_AMD_PSTATE_DEF_BASE)

I think it would make it easier to find in the manuals.

> 
> >> --- a/xen/arch/x86/cpu/intel.c
> >> +++ b/xen/arch/x86/cpu/intel.c
> >> @@ -378,6 +378,72 @@ static void init_intel(struct cpuinfo_x8
> >>  	     ( c->cpuid_level >= 0x00000006 ) &&
> >>  	     ( cpuid_eax(0x00000006) & (1u<<2) ) )
> >>  		__set_bit(X86_FEATURE_ARAT, c->x86_capability);
> >> +
> > 
> > I would split this into a separate helper, ie: intel_log_freq. That
> > will allow you to exit early and reduce some of the indentation IMO.
> 
> Can do; splitting this for AMD/Hygon however was merely to
> facilitate using it for both vendors, though.
> 
> >> +    if ( (opt_cpu_info && !(c->apicid & (c->x86_num_siblings - 1))) ||
> >> +         c == &boot_cpu_data )
> >> +    {
> >> +        unsigned int eax, ebx, ecx, edx;
> >> +        uint64_t msrval;
> >> +
> >> +        if ( c->cpuid_level >= 0x15 )
> >> +        {
> >> +            cpuid(0x15, &eax, &ebx, &ecx, &edx);
> >> +            if ( ecx && ebx && eax )
> >> +            {
> >> +                unsigned long long val = ecx;
> >> +
> >> +                val *= ebx;
> >> +                do_div(val, eax);
> >> +                printk("CPU%u: TSC: %uMHz * %u / %u = %LuMHz\n",
> >> +                       smp_processor_id(), ecx, ebx, eax, val);
> >> +            }
> >> +            else if ( ecx | eax | ebx )
> >> +            {
> >> +                printk("CPU%u: TSC:", smp_processor_id());
> >> +                if ( ecx )
> >> +                    printk(" core: %uMHz", ecx);
> >> +                if ( ebx && eax )
> >> +                    printk(" ratio: %u / %u", ebx, eax);
> >> +                printk("\n");
> >> +            }
> >> +        }
> >> +
> >> +        if ( c->cpuid_level >= 0x16 )
> >> +        {
> >> +            cpuid(0x16, &eax, &ebx, &ecx, &edx);
> >> +            if ( ecx | eax | ebx )
> >> +            {
> >> +                printk("CPU%u:", smp_processor_id());
> >> +                if ( ecx )
> >> +                    printk(" bus: %uMHz", ecx);
> >> +                if ( eax )
> >> +                    printk(" base: %uMHz", eax);
> >> +                if ( ebx )
> >> +                    printk(" max: %uMHz", ebx);
> >> +                printk("\n");
> >> +            }
> >> +        }
> >> +
> >> +        if ( !rdmsr_safe(MSR_INTEL_PLATFORM_INFO, msrval) &&
> >> +             (uint8_t)(msrval >> 8) )
> > 
> > I would introduce a mask for it, which would be cleaner, since you use it
> > here and below (and would avoid the casting to uint8_t).
> 
> To avoid the casts (also below) I could introduce local variables.
> I specifically wanted to avoid MASK_EXTR() such that the rest of the
> calculations in
> 
>             if ( (uint8_t)(msrval >> 40) )
>                 printk("%u..", (factor * (uint8_t)(msrval >> 40) + 50) / 100);
>             printk("%u MHz\n", (factor * (uint8_t)(msrval >> 8) + 50) / 100);
> 
> can be done as 32-bit arithmetic.

Might be cleaner with the local variables.

> >> +        {
> >> +            unsigned int factor = 10000;
> >> +
> >> +            if ( c->x86 == 6 )
> >> +                switch ( c->x86_model )
> >> +                {
> >> +                case 0x1a: case 0x1e: case 0x1f: case 0x2e: /* Nehalem */
> >> +                case 0x25: case 0x2c: case 0x2f: /* Westmere */
> >> +                    factor = 13333;
> > 
> > The SDM lists ratio * 100MHz without any notes, why are those models
> > different, is this some errata?
> 
> Did you go through the MSR lists for the various models? It's there
> where I found this anomaly, not in any spec updates.

My bad, I was looking at the Atom table I think, and didn't realize
there were multiple tables instead of a single table with different
notes for models.

> 
> >> +                    break;
> >> +                }
> >> +
> >> +            printk("CPU%u: ", smp_processor_id());
> >> +            if ( (uint8_t)(msrval >> 40) )
> >> +                printk("%u..", (factor * (uint8_t)(msrval >> 40) + 50) / 100);
> >> +            printk("%u MHz\n", (factor * (uint8_t)(msrval >> 8) + 50) / 100);
> > 
> > Since you are calculating using Hz, should you use an unsigned long
> > factor to prevent capping at 4GHz?
> 
> Hmm, the calculation looks to be in units of 10kHz, until the division
> by 100. I don't think we'd cap at 4GHz this way.

Oh yes, sorry, it's kHz, not Hz.

> 
> >> --- a/xen/include/asm-x86/msr.h
> >> +++ b/xen/include/asm-x86/msr.h
> >> @@ -40,8 +40,8 @@ static inline void wrmsrl(unsigned int m
> >>  
> >>  /* rdmsr with exception handling */
> >>  #define rdmsr_safe(msr,val) ({\
> >> -    int _rc; \
> >> -    uint32_t lo, hi; \
> >> +    int rc_; \
> >> +    uint32_t lo_, hi_; \
> >>      __asm__ __volatile__( \
> >>          "1: rdmsr\n2:\n" \
> >>          ".section .fixup,\"ax\"\n" \
> >> @@ -49,15 +49,15 @@ static inline void wrmsrl(unsigned int m
> >>          "   movl %5,%2\n; jmp 2b\n" \
> >>          ".previous\n" \
> >>          _ASM_EXTABLE(1b, 3b) \
> >> -        : "=a" (lo), "=d" (hi), "=&r" (_rc) \
> >> +        : "=a" (lo_), "=d" (hi_), "=&r" (rc_) \
> >>          : "c" (msr), "2" (0), "i" (-EFAULT)); \
> >> -    val = lo | ((uint64_t)hi << 32); \
> >> -    _rc; })
> >> +    val = lo_ | ((uint64_t)hi_ << 32); \
> >> +    rc_; })
> > 
> > Since you are changing the local variable names, I would just switch
> > rdmsr_safe to a static inline and drop the underscores. I don't see a
> > reason this has to stay as a macro.
> 
> Well, all callers would need to be changed to pass the address of
> the variable to store the value read into. That's quite a bit of
> code churn, and hence nothing I'd want to do in this patch.

Oh, right, didn't realize it's a macro for that reason.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:36:21 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
Date: Thu, 14 May 2020 17:36:13 +0200
Message-Id: <20200514153614.2240-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200514153614.2240-1-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Dario Faggioli <dfaggioli@suse.com>

With the addition of core scheduling support, sched_unit_migrate_finish()
gained a call of sync_vcpu_execstate(), as it was believed to be called
as a result of vcpu migration in any case.

In case of migrating a vcpu away from a physical cpu only for a short
period of time, without the vcpu ever being scheduled on the selected new
cpu, this might not be true, so drop the call and let the lazy state
syncing do its job.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- new patch
---
 xen/common/sched/core.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 5df66cbf9b..cb49a8bc02 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1078,12 +1078,7 @@ static void sched_unit_migrate_finish(struct sched_unit *unit)
     sched_spin_unlock_double(old_lock, new_lock, flags);
 
     if ( old_cpu != new_cpu )
-    {
-        /* Vcpus are moved to other pcpus, commit their states to memory. */
-        for_each_sched_unit_vcpu ( unit, v )
-            sync_vcpu_execstate(v);
         sched_move_irqs(unit);
-    }
 
     /* Wake on new CPU. */
     for_each_sched_unit_vcpu ( unit, v )
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 15:36:25 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 0/3] xen: Fix some bugs in scheduling
Date: Thu, 14 May 2020 17:36:11 +0200
Message-Id: <20200514153614.2240-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

Some problems I found when trying to find a problem with
cpu-on/offlining in core scheduling mode.

Juergen Gross (3):
  xen/sched: allow rcu work to happen when syncing cpus in core
    scheduling
  xen/sched: don't call sync_vcpu_execstate() in
    sched_unit_migrate_finish()
  xen/sched: fix latent races accessing vcpu->dirty_cpu

 xen/arch/x86/domain.c     | 16 +++++++++++-----
 xen/common/keyhandler.c   |  2 +-
 xen/common/sched/core.c   | 18 ++++++++++--------
 xen/include/xen/sched.h   |  2 +-
 xen/include/xen/softirq.h |  2 +-
 5 files changed, 24 insertions(+), 16 deletions(-)

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 15:36:30 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 1/3] xen/sched: allow rcu work to happen when syncing cpus
 in core scheduling
Date: Thu, 14 May 2020 17:36:12 +0200
Message-Id: <20200514153614.2240-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200514153614.2240-1-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Juergen Gross <jgross@suse.com>, Sergey Dyasli <sergey.dyasli@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>,
 Jan Beulich <jbeulich@suse.com>

With RCU barriers moved from tasklets to normal RCU processing, cpu
offlining in core scheduling mode might deadlock, due to the cpu
synchronization required concurrently by RCU processing and core
scheduling.

Fix that by bailing out from the core scheduling synchronization in case
of pending RCU work. Additionally the RCU softirq now needs to be of
higher priority than the scheduling softirqs, in order to do RCU
processing before entering the scheduler again: bailing out from the core
scheduling synchronization requires raising another softirq, SCHED_SLAVE,
which would otherwise bypass RCU processing again.

Reported-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Tested-by: Sergey Dyasli <sergey.dyasli@citrix.com>
Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Dario Faggioli <dfaggioli@suse.com>
---
V2:
- add BUILD_BUG_ON() and comment (Dario Faggioli)
---
 xen/common/sched/core.c   | 13 ++++++++++---
 xen/include/xen/softirq.h |  2 +-
 2 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index d94b95285f..5df66cbf9b 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -2457,13 +2457,20 @@ static struct sched_unit *sched_wait_rendezvous_in(struct sched_unit *prev,
             v = unit2vcpu_cpu(prev, cpu);
         }
         /*
-         * Coming from idle might need to do tasklet work.
+         * Check for any work to be done which might need cpu synchronization.
+         * This is either pending RCU work, or tasklet work when coming from
+         * idle. It is mandatory that RCU softirqs are of higher priority
+         * than scheduling ones as otherwise a deadlock might occur.
          * In order to avoid deadlocks we can't do that here, but have to
-         * continue the idle loop.
+         * schedule the previous vcpu again, which will lead to the desired
+         * processing to be done.
          * Undo the rendezvous_in_cnt decrement and schedule another call of
          * sched_slave().
          */
-        if ( is_idle_unit(prev) && sched_tasklet_check_cpu(cpu) )
+        BUILD_BUG_ON(RCU_SOFTIRQ > SCHED_SLAVE_SOFTIRQ ||
+                     RCU_SOFTIRQ > SCHEDULE_SOFTIRQ);
+        if ( rcu_pending(cpu) ||
+             (is_idle_unit(prev) && sched_tasklet_check_cpu(cpu)) )
         {
             struct vcpu *vprev = current;
 
diff --git a/xen/include/xen/softirq.h b/xen/include/xen/softirq.h
index b4724f5c8b..1f6c4783da 100644
--- a/xen/include/xen/softirq.h
+++ b/xen/include/xen/softirq.h
@@ -4,10 +4,10 @@
 /* Low-latency softirqs come first in the following list. */
 enum {
     TIMER_SOFTIRQ = 0,
+    RCU_SOFTIRQ,
     SCHED_SLAVE_SOFTIRQ,
     SCHEDULE_SOFTIRQ,
     NEW_TLBFLUSH_CLOCK_PERIOD_SOFTIRQ,
-    RCU_SOFTIRQ,
     TASKLET_SOFTIRQ,
     NR_COMMON_SOFTIRQS
 };
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 15:36:36 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 3/3] xen/sched: fix latent races accessing vcpu->dirty_cpu
Date: Thu, 14 May 2020 17:36:14 +0200
Message-Id: <20200514153614.2240-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200514153614.2240-1-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

The dirty_cpu field of struct vcpu denotes which cpu still holds data of
a vcpu. All accesses to this field should be atomic in case the vcpu
could be running, as the field is accessed without any lock held in most
cases. Especially sync_local_execstate() and context_switch() running
concurrently for the same vcpu risk failing.

There are some instances where accesses are not done atomically, and
even worse where multiple accesses are done when a single one would be
mandated.

Correct that in order to avoid potential problems.

Add some assertions to verify dirty_cpu is handled properly.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V2:
- convert all accesses to v->dirty_cpu to atomic ones (Jan Beulich)
- drop cast (Julien Grall)

V3:
- drop atomic access in vcpu_create() (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/arch/x86/domain.c   | 16 +++++++++++-----
 xen/common/keyhandler.c |  2 +-
 xen/include/xen/sched.h |  2 +-
 3 files changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index a4428190d5..2e5717b983 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -183,7 +183,7 @@ void startup_cpu_idle_loop(void)
 
     ASSERT(is_idle_vcpu(v));
     cpumask_set_cpu(v->processor, v->domain->dirty_cpumask);
-    v->dirty_cpu = v->processor;
+    write_atomic(&v->dirty_cpu, v->processor);
 
     reset_stack_and_jump(idle_loop);
 }
@@ -1769,6 +1769,7 @@ static void __context_switch(void)
 
     if ( !is_idle_domain(pd) )
     {
+        ASSERT(read_atomic(&p->dirty_cpu) == cpu);
         memcpy(&p->arch.user_regs, stack_regs, CTXT_SWITCH_STACK_BYTES);
         vcpu_save_fpu(p);
         pd->arch.ctxt_switch->from(p);
@@ -1832,7 +1833,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
 {
     unsigned int cpu = smp_processor_id();
     const struct domain *prevd = prev->domain, *nextd = next->domain;
-    unsigned int dirty_cpu = next->dirty_cpu;
+    unsigned int dirty_cpu = read_atomic(&next->dirty_cpu);
 
     ASSERT(prev != next);
     ASSERT(local_irq_is_enabled());
@@ -1844,6 +1845,7 @@ void context_switch(struct vcpu *prev, struct vcpu *next)
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
         flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
+        ASSERT(!vcpu_cpu_dirty(next));
     }
 
     _update_runstate_area(prev);
@@ -1956,13 +1958,17 @@ void sync_local_execstate(void)
 
 void sync_vcpu_execstate(struct vcpu *v)
 {
-    if ( v->dirty_cpu == smp_processor_id() )
+    unsigned int dirty_cpu = read_atomic(&v->dirty_cpu);
+
+    if ( dirty_cpu == smp_processor_id() )
         sync_local_execstate();
-    else if ( vcpu_cpu_dirty(v) )
+    else if ( is_vcpu_dirty_cpu(dirty_cpu) )
     {
         /* Remote CPU calls __sync_local_execstate() from flush IPI handler. */
-        flush_mask(cpumask_of(v->dirty_cpu), FLUSH_VCPU_STATE);
+        flush_mask(cpumask_of(dirty_cpu), FLUSH_VCPU_STATE);
     }
+    ASSERT(!is_vcpu_dirty_cpu(dirty_cpu) ||
+           read_atomic(&v->dirty_cpu) != dirty_cpu);
 }
 
 static int relinquish_memory(
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 87bd145374..68364e987d 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -316,7 +316,7 @@ static void dump_domains(unsigned char key)
                        vcpu_info(v, evtchn_upcall_pending),
                        !vcpu_event_delivery_is_enabled(v));
                 if ( vcpu_cpu_dirty(v) )
-                    printk("dirty_cpu=%u", v->dirty_cpu);
+                    printk("dirty_cpu=%u", read_atomic(&v->dirty_cpu));
                 printk("\n");
                 printk("    pause_count=%d pause_flags=%lx\n",
                        atomic_read(&v->pause_count), v->pause_flags);
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 6101761d25..ac53519d7f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -844,7 +844,7 @@ static inline bool is_vcpu_dirty_cpu(unsigned int cpu)
 
 static inline bool vcpu_cpu_dirty(const struct vcpu *v)
 {
-    return is_vcpu_dirty_cpu(v->dirty_cpu);
+    return is_vcpu_dirty_cpu(read_atomic(&v->dirty_cpu));
 }
 
 void vcpu_block(void);
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 14 15:44:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZG1h-0008J4-Rf; Thu, 14 May 2020 15:44:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=WGWk=64=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZG1g-0008Iz-0O
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:44:00 +0000
X-Inumbo-ID: b9ee870e-95f9-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9ee870e-95f9-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 15:43:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6C892ACC4;
 Thu, 14 May 2020 15:44:00 +0000 (UTC)
Subject: Re: [PATCH v8 09/12] xen: add runtime parameter access support to
 hypfs
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-10-jgross@suse.com>
 <a6c10680-d570-dabb-61ad-627591d08b0e@suse.com>
 <76ed2db5-6091-959a-8224-0a77e9cc4c45@suse.com>
 <76cf4476-f8b8-dc44-9e68-bfa92a3fcd2a@suse.com>
 <33daeea9-a038-b153-44b5-d9a8a11ae21f@suse.com>
 <95a5644c-e208-57bc-2f47-13581a16b568@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <777b4289-6d1f-3bd4-36ad-d0925298cd5e@suse.com>
Date: Thu, 14 May 2020 17:43:56 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <95a5644c-e208-57bc-2f47-13581a16b568@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 17:02, Jan Beulich wrote:
> On 14.05.2020 16:56, Jürgen Groß wrote:
>> On 14.05.20 14:10, Jan Beulich wrote:
>>> On 14.05.2020 13:48, Jürgen Groß wrote:
>>>> On 14.05.20 12:20, Jan Beulich wrote:
>>>>> On 08.05.2020 17:34, Juergen Gross wrote:
>>>>>> --- a/xen/common/grant_table.c
>>>>>> +++ b/xen/common/grant_table.c
>>>>>> @@ -85,8 +85,43 @@ struct grant_table {
>>>>>>         struct grant_table_arch arch;
>>>>>>     };
>>>>>>     -static int parse_gnttab_limit(const char *param, const char *arg,
>>>>>> -                              unsigned int *valp)
>>>>>> +unsigned int __read_mostly opt_max_grant_frames = 64;
>>>>>> +static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
>>>>>> +
>>>>>> +#ifdef CONFIG_HYPFS
>>>>>> +#define GRANT_CUSTOM_VAL_SZ  12
>>>>>> +static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
>>>>>> +static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
>>>>>> +
>>>>>> +static void update_gnttab_par(struct param_hypfs *par, unsigned int val,
>>>>>> +                              char *parval)
>>>>>> +{
>>>>>> +    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
>>>>>> +    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
>>>>>> +}
>>>>>> +
>>>>>> +static void __init gnttab_max_frames_init(struct param_hypfs *par)
>>>>>> +{
>>>>>> +    update_gnttab_par(par, opt_max_grant_frames, opt_max_grant_frames_val);
>>>>>> +}
>>>>>> +
>>>>>> +static void __init max_maptrack_frames_init(struct param_hypfs *par)
>>>>>> +{
>>>>>> +    update_gnttab_par(par, opt_max_maptrack_frames,
>>>>>> +                      opt_max_maptrack_frames_val);
>>>>>> +}
>>>>>> +#else
>>>>>> +#define opt_max_grant_frames_val    NULL
>>>>>> +#define opt_max_maptrack_frames_val NULL
>>>>>
>>>>> This looks latently dangerous to me (in case new uses of these
>>>>> two identifiers appeared), but I guess my alternative suggestion
>>>>> will be at best controversial, too:
>>>>>
>>>>> #define update_gnttab_par(par, val, unused) update_gnttab_par(par, val)
>>>>> #define parse_gnttab_limit(par, arg, valp, unused) parse_gnttab_limit(par, arg, valp)
>>>>>
>>>>> (placed right here)
>>>>
>>>> Who else would use those identifiers not related to hypfs?
>>>
>>> I can't see an obvious possible use, but people get creative, i.e.
>>> you never know. Passing NULL into a function without it being
>>> blindingly obvious that it won't ever get (de)referenced is an at
>>> least theoretical risk imo.
>>
>> Hmm, what about using a special type for those content variables?
>> Something like:
>>
>> #ifdef CONFIG_HYPFS
>> #define hypfs_string_var            char *
>> #else
>> #define hypfs_string_var            char
>> #define opt_max_grant_frames_val    0
>> #define opt_max_maptrack_frames_val 0
>> #endif
>>
>> And then use that as type for function parameters? This should make
>> dereferencing pretty hard.
>>
>> Other than that I have no really good idea how to avoid this problem.
> 
> IOW (as suspected) you don't like my suggestion? Personally I think
> yours, having more #define-s, is at least not better than mine.

Oh, maybe I misunderstood you here. I thought you weren't happy with
your solution either and suspected others (not me) wouldn't like it.

In case others don't reject it I'd be happy to use your variant.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:45:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZG30-0008Q0-6l; Thu, 14 May 2020 15:45:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZG2z-0008Pu-5S
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:45:21 +0000
X-Inumbo-ID: e9b4f195-95f9-11ea-a4af-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9b4f195-95f9-11ea-a4af-12813bfff9fa;
 Thu, 14 May 2020 15:45:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589471120;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=atK0j8stW0hy8yFn0JJVw+dsWCt4Ao8Knt7TgByirK4=;
 b=dfUwAed2/fWL/2fmV5ZKQJr8Omz/cr1kxB+UD4HNe/NBAMbveJVkcXZ7
 OOwPfPeNe6KzdjpHgv5MIewBtTn+syqb7JylNQCtTpdDtnPjZLNqr5BwV
 wZCylw8PaVevmEQfHNvPBWwZr71goL5B4ZI3dL5XOXpglX5wVF7xpz7Lo g=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: G0Gf3wODEYDo3iUuf5y1YgV8Gh0FCLChAhSV4EuCqx6Ub9kGYVImC+8UKVypvvwVjjue678NOk
 vR5zBN57gysKdjf4dqHMAw5AhB8YhYo8y0fM5jnEw15zm1HSPgUef9YrUQsYZvYw7M9cnO7cou
 nCviCewpCgFCWaDHMbPe/OXn/R1kFrVrx2gfgxkdaA5tm74H21fBilDbG0QoFIcZmhL7OOUU83
 uTHLCPy4nbhy7FfU3MTEE75Wiwrp/pz5rBOvdNLee3+5TcuoRwTHfofl3fWrm+yeLenEJCgIUY
 KUs=
X-SBRS: 2.7
X-MesageID: 17811274
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17811274"
Date: Thu, 14 May 2020 17:45:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 1/6] x86/mem-paging: fold p2m_mem_paging_prep()'s main
 if()-s
Message-ID: <20200514154506.GF54375@Air-de-Roger>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <cea2307f-1aae-51cb-20ac-fbaf4b945771@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cea2307f-1aae-51cb-20ac-fbaf4b945771@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 10:37:06AM +0200, Jan Beulich wrote:
> The condition of the second can be true only if the condition of the
> first was met; the second half of the condition of the second then also
> is redundant with an earlier check. Combine them, drop a pointless
> local variable, and re-flow the affected gdprintk().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1808,6 +1808,8 @@ int p2m_mem_paging_prep(struct domain *d
>      /* Allocate a page if the gfn does not have one yet */
>      if ( !mfn_valid(mfn) )
>      {
> +        void *guest_map;
> +
>          /* If the user did not provide a buffer, we disallow */
>          ret = -EINVAL;
>          if ( unlikely(user_ptr == NULL) )
> @@ -1819,22 +1821,16 @@ int p2m_mem_paging_prep(struct domain *d
>              goto out;
>          mfn = page_to_mfn(page);
>          page_extant = 0;
> -    }
> -
> -    /* If we were given a buffer, now is the time to use it */
> -    if ( !page_extant && user_ptr )
> -    {
> -        void *guest_map;
> -        int rc;
>  
>          ASSERT( mfn_valid(mfn) );

I would be tempted to remove this assert also, since you just
successfully allocated the page at this point.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:50:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:50:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZG80-0000rZ-PW; Thu, 14 May 2020 15:50:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZG7z-0000rU-OT
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:50:31 +0000
X-Inumbo-ID: a3966b92-95fa-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3966b92-95fa-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 15:50:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E1703AEF1;
 Thu, 14 May 2020 15:50:32 +0000 (UTC)
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
 <20200514131021.GB54375@Air-de-Roger>
 <2e9c7c05-e42c-52d4-f48c-9ecc8b14a1a7@suse.com>
 <20200514153252.GE54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d166968f-23da-6065-3a90-e0fb0c4f6dcf@suse.com>
Date: Thu, 14 May 2020 17:50:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514153252.GE54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 17:32, Roger Pau Monné wrote:
> On Thu, May 14, 2020 at 03:38:18PM +0200, Jan Beulich wrote:
>> On 14.05.2020 15:10, Roger Pau Monné wrote:
>>> On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
>>>> While from just a single Skylake system it is already clear that we
>>>> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
>>>> documented to be used for display purposes only anyway), logging this
>>>> information may still give us some reference in case of problems as well
>>>> as for future work. Additionally on the AMD side it is unclear whether
>>>> the deviation between reported and measured frequencies is because of us
>>>> not doing well, or because of nominal and actual frequencies being quite
>>>> far apart.
>>>
>>> Can you add some reference to the AMD implementation? I've looked at
>>> the PMs and haven't been able to find a description of some of the
>>> MSRs, like 0xC0010064.
>>
>> Take a look at
>>
>> https://developer.amd.com/resources/developer-guides-manuals/
>>
>> I'm unconvinced a reference needs adding here.
> 
> Do you think it would be sensible to introduce some defines for at
> least 0xC0010064? (ie: MSR_AMD_PSTATE_DEF_BASE)
> 
> I think it would make it easier to find on the manuals.

I did consider doing so at the time, but dropped the idea, as we have
a few more examples where we use bare MSR numbers when they're used
just once or very rarely. What I'm not sure about is whether the name
would help in finding the doc - the doc is organized by MSR number
after all.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:52:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:52:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZG9T-0000xn-8U; Thu, 14 May 2020 15:52:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ezST=64=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZG9S-0000xg-6x
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:52:02 +0000
X-Inumbo-ID: d886ecdd-95fa-11ea-a4af-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d886ecdd-95fa-11ea-a4af-12813bfff9fa;
 Thu, 14 May 2020 15:52:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DE07CAEE6;
 Thu, 14 May 2020 15:52:02 +0000 (UTC)
Subject: Re: [PATCH v2 1/6] x86/mem-paging: fold p2m_mem_paging_prep()'s main
 if()-s
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <cea2307f-1aae-51cb-20ac-fbaf4b945771@suse.com>
 <20200514154506.GF54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4ef5582f-23c6-40d3-b694-eac9cc3b0edf@suse.com>
Date: Thu, 14 May 2020 17:51:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514154506.GF54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 17:45, Roger Pau Monné wrote:
> On Thu, Apr 23, 2020 at 10:37:06AM +0200, Jan Beulich wrote:
>> The condition of the second can be true only if the condition of the
>> first was met; the second half of the condition of the second then also
>> is redundant with an earlier check. Combine them, drop a pointless
>> local variable, and re-flow the affected gdprintk().
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> --- a/xen/arch/x86/mm/p2m.c
>> +++ b/xen/arch/x86/mm/p2m.c
>> @@ -1808,6 +1808,8 @@ int p2m_mem_paging_prep(struct domain *d
>>      /* Allocate a page if the gfn does not have one yet */
>>      if ( !mfn_valid(mfn) )
>>      {
>> +        void *guest_map;
>> +
>>          /* If the user did not provide a buffer, we disallow */
>>          ret = -EINVAL;
>>          if ( unlikely(user_ptr == NULL) )
>> @@ -1819,22 +1821,16 @@ int p2m_mem_paging_prep(struct domain *d
>>              goto out;
>>          mfn = page_to_mfn(page);
>>          page_extant = 0;
>> -    }
>> -
>> -    /* If we were given a buffer, now is the time to use it */
>> -    if ( !page_extant && user_ptr )
>> -    {
>> -        void *guest_map;
>> -        int rc;
>>  
>>          ASSERT( mfn_valid(mfn) );
> 
> I would be tempted to remove this assert also, since you just
> successfully allocated the page at this point.

Oh, indeed, good point.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:57:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGEV-000193-Sf; Thu, 14 May 2020 15:57:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k5kQ=64=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZGEU-00018y-Aw
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:57:14 +0000
X-Inumbo-ID: 93c9408a-95fb-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93c9408a-95fb-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 15:57:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OB8EQH1kjQWMC46INt+AuR5nFnjyJsUf4LPFKk3OaiI=; b=iFU3EZOay3yylMAfOjH6smBfBG
 jOMZSrcz0P5HNxBFL6bbROdNPC8BGBF5URo3kOmoigZcJpGtihorxuDy4+4CazkcFvjQVJxoQ/T4/
 locifI76x0PWZgVAQGqY6FxIdgbjMeKbJv7sRtrn0Ttp3cpENz9hy7G7njvc62WWEOY0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZGET-0002qK-G7; Thu, 14 May 2020 15:57:13 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZGET-0004UU-9L; Thu, 14 May 2020 15:57:13 +0000
Subject: Re: Error during update_runstate_area with KPTI activated
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
Date: Thu, 14 May 2020 16:57:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: nd <nd@arm.com>, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 14/05/2020 15:28, Bertrand Marquis wrote:
> Hi,

Hi,

> 
> When executing Linux on arm64 with KPTI activated (in Dom0 or in a DomU), I see many page-table walk errors like this:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> 
> After adding a call trace, I found that the problem comes from update_runstate_area() when Linux has KPTI activated.
> 
> I have the following call trace:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
> 
> Discussing this subject with Stefano, he pointed me to a discussion started a year ago on this subject here:
> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
> 
> And a patch was submitted:
> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
> 
> I rebased this patch on current master and it is solving the problem I have seen.
> 
> It sounds to me like a good solution to introduce a VCPUOP_register_runstate_phys_memory_area so as not to depend on the area actually being mapped in the guest when a context switch is done (which is exactly the problem that occurs when a context switch is triggered while the guest is running in EL0).
> 
> Is there any reason why this was not merged in the end?

I just skimmed through the thread to remind myself of the state. AFAICT,
this is blocked on the contributor clarifying the intended interaction
and providing a new version.

I am still in favor of the new hypercall (and still in my todo list) but 
I haven't yet found time to revive the series.

Would you be willing to take over the series? I would be happy to bring 
you up to speed and provide review.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 14 15:57:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 15:57:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGF3-0001Dy-5B; Thu, 14 May 2020 15:57:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZGF1-0001Dp-SY
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 15:57:47 +0000
X-Inumbo-ID: a720af10-95fb-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a720af10-95fb-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 15:57:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589471866;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=rM6h2i5DD9mIsJ6mQQcZx4xRPynu9dyvimr2JtiPofs=;
 b=IAHVjekh7fIZtnFfaDsbOcWQW+cNVuSJnuuKLxwCoCqM+2HaAQZ6NW+m
 orYYyDTwd1rmNZ8+5dQVVjtgF67Fxus53QysXCwl1S0MvXAPS40N9ePMl
 L6JGl84lkmmVDHMABV4UYTULZnbi1g9oxMRwWUIYxlyyk7EF1aJu6H2oO U=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: YtTTa4/qfyQbJOJP/ZSPJ5uQb+0NSsgqQH4T35kug6kkmNX8vzOXJNl1eUrvMFLD6PzI/wcUVe
 R5ufMdZp9MGBMtMj+Yj7Z0whY+mjPscopaMoch1IfVcTDo3L+4YpT88dkEmhDlaquIML5OjYin
 N0KDsGK2HR6d6iXi//e16S6aEgQGlgBSPcfn7X5Y0cxzu+GrSGWXnSgm6hGHBZcU3jQyGsM7FA
 UiJeSNp5mJ8j68T6e/X1nPfizYncGivJ7Zjp/BJfhGYR2qtO48eNweVjgVOBMHxXNGdqtChTSk
 884=
X-SBRS: 2.7
X-MesageID: 18240066
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="18240066"
Date: Thu, 14 May 2020 17:57:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 2/6] x86/mem-paging: correct p2m_mem_paging_prep()'s
 error handling
Message-ID: <20200514155739.GG54375@Air-de-Roger>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <bf9dd27b-a7db-de0e-a804-d687e66ecf1e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <bf9dd27b-a7db-de0e-a804-d687e66ecf1e@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 10:37:46AM +0200, Jan Beulich wrote:
> Communicating errors from p2m_set_entry() to the caller is not enough:
> Neither the M2P nor the stats updates should occur in such a case.
> Instead the allocated page needs to be freed again; for cleanliness
> reasons also properly take into account _PGC_allocated there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> 
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1781,7 +1781,7 @@ void p2m_mem_paging_populate(struct doma
>   */
>  int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer)
>  {
> -    struct page_info *page;
> +    struct page_info *page = NULL;
>      p2m_type_t p2mt;
>      p2m_access_t a;
>      gfn_t gfn = _gfn(gfn_l);
> @@ -1816,9 +1816,19 @@ int p2m_mem_paging_prep(struct domain *d
>              goto out;
>          /* Get a free page */
>          ret = -ENOMEM;
> -        page = alloc_domheap_page(p2m->domain, 0);
> +        page = alloc_domheap_page(d, 0);
>          if ( unlikely(page == NULL) )
>              goto out;
> +        if ( unlikely(!get_page(page, d)) )
> +        {
> +            /*
> +             * The domain can't possibly know about this page yet, so failure
> +             * here is a clear indication of something fishy going on.
> +             */
> +            domain_crash(d);
> +            page = NULL;
> +            goto out;
> +        }
>          mfn = page_to_mfn(page);
>          page_extant = 0;
>  
> @@ -1832,7 +1842,6 @@ int p2m_mem_paging_prep(struct domain *d
>                       "Failed to load paging-in gfn %lx Dom%d bytes left %d\n",
>                       gfn_l, d->domain_id, ret);
>              ret = -EFAULT;
> -            put_page(page); /* Don't leak pages */
>              goto out;            
>          }
>      }
> @@ -1843,13 +1852,24 @@ int p2m_mem_paging_prep(struct domain *d
>      ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>                          paging_mode_log_dirty(d) ? p2m_ram_logdirty
>                                                   : p2m_ram_rw, a);
> -    set_gpfn_from_mfn(mfn_x(mfn), gfn_l);

I would instead do `if ( ret ) goto out;`

And wouldn't touch the code afterwards.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:07:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGOI-0002mU-2s; Thu, 14 May 2020 16:07:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGOG-0002mP-Sl
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:07:20 +0000
X-Inumbo-ID: fcbb4e52-95fc-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fcbb4e52-95fc-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 16:07:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589472439;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=putPDERoXCy+RWMiO+hN6Z1MFytmR+R439LvDv8Q3WQ=;
 b=EWexutKehR2EXLkS/Qcc8u31OE7zJmBmZsVZUQ6cVSySO3E/0nERBVTY
 08bVoU3Cxe3K958kOTtU/EduyrvgPxs99T7ZVHhJc46unxQysq6+MPX8y
 smPmOwIEX3t7nmfm6x1DhdvnlGe3Z3yZvamtbTKCHnt6QMnggAw2LSLRD k=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: q7IabMW/+O4bmabmf8L7K23WpB2sSO1mQil8dqaBBf02/NUm0Ludp47JZjjogP3N8alXa38uRt
 m1IqpkuN1lIMz6H06TlLjnqH3Jl/PI8KjnJ1ln3x9bxJ5jW6NH6KuCnE0a0AOaLlFEemXqt7HE
 rR8dzESgMubJXMbncuqLwINmhkKJMpsSG8nHVXMw3X/EuWAaxgKQBEmodTiur78Ua+zG2Ur0dW
 lWSCD7Ettz+Kb22K2+V+TKOVbX1CdLOInk6atuvR/cLeDTRZJ95FkoApk2dVdUfZHX37qa4Ld/
 kuw=
X-SBRS: 2.7
X-MesageID: 17907121
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17907121"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.27824.82649.907746@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:07:12 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 00/21] Add support for qemu-xen runnning in a
 Linux-based stubdomain
In-Reply-To: <20200428040433.23504-1-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Jan Beulich <jbeulich@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Eric Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 00/21] Add support for qemu-xen runnning in a Linux-based stubdomain"):
> In coordination with Marek, I'm making a submission of his patches for Linux
> stubdomain device-model support.  I made a few of my own additions, but Marek
> did the heavy lifting.  Thank you, Marek.

Hi.  Thanks very much for this contribution.  Sorry it has taken me so
long to get to review it.

> Later patches add QMP over libvchan connection support. The actual connection
> is made in a separate process. As discussed on Xen Summit 2019, this allows to
> apply some basic checks and/or filtering (not part of this series), to limit
> libxl exposure for potentially malicious stubdomain.

OK.

> Few comments/questions about the stubdomain code:
> 
> 1. There are extra patches for qemu that are necessary to run it in stubdomain.
> While it is desirable to upstream them, I think it can be done after merging
> libxl part. Stubdomain's qemu build will in most cases be separate anyway, to
> limit qemu's dependencies (so the stubdomain size).

Yes.

> 2. By default Linux hvc-xen console frontend is unreliable for data transfer
> (qemu state save/restore) - it drops data sent faster than client is reading
> it. To fix it, console device needs to be switched into raw mode (`stty raw
> /dev/hvc1`). Especially for restoring qemu state it is tricky, as it would need
> to be done before opening the device, but stty (obviously) needs to open the
> device first. To solve this problem, for now the repository contains kernel
> patch which changes the default for all hvc consoles. Again, this isn't
> practical problem, as the kernel for stubdomain is built separately. But it
> would be nice to have something working with vanilla kernel. I see those
> options:
>   - convert it to kernel cmdline parameter (hvc_console_raw=1 ?)
>   - use channels instead of consoles (and on the kernel side change the default
>     to "raw" only for channels); while in theory better design, libxl part will
>     be more complex, as channels can be connected to sockets but not files, so
>     libxl would need to read/write to it exactly when qemu write/read the data,
>     not before/after as it is done now

What a mess.  Thanks for trying to tackle it!

Would it be possible to add a rendezvous to the console?  Eg, the
guest could write a "ready" byte after it has set the mode.

I'm not sure I understand the problem with libxl and channels.  Maybe
a helper process (perhaps existing only during migration) could help?

Or, libxl has the "datacopier" async thing in it which you can spawn
one of and hopefully forget about.  You could teach it channels, or
make a thing like it that uses channels, or something.

> 3. Mini-OS stubdoms use dmargs xenstore key as a string.  Linux stubdoms use
> dmargs as a directory for numbered entries.  Should they be different names?

Yes, I think so.  That way if there's a version mismatch you get
ENOENT rather than an empty argument list...


I'll go and look at the patches now.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:08:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:08:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGPC-0002pn-D0; Thu, 14 May 2020 16:08:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGPA-0002pC-Vc
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:08:17 +0000
X-Inumbo-ID: 1e1f15cf-95fd-11ea-a4b0-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1e1f15cf-95fd-11ea-a4b0-12813bfff9fa;
 Thu, 14 May 2020 16:08:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589472496;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=7Cz4xNGLTLSK2Hzii2YeOv1h69cqCsh47KQcOWUtZ88=;
 b=U/TCTBBh+PdwIRbJdYid9s3V6Bc8ONIoXXeB7smFcAZR2jFoUv8eJ+LK
 5x5jS31HUw4ehRJUlQbvKBMmVXsoVQQh1gJr9u0caZEKWXpdS2sD0idnW
 c+t6xqwQ3Yv6Xg+k1tRpdq3bKsWvqWhz4pL+gc5KJcalK4+EMPFPt9Q9K k=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: eZ6QYcvK0aZKfzBwZQy12xNHwcTC9TF7YN08SisyNTqzuX0t8fuzYJsJhenc2N/zQY8gmDAY5n
 WTPsoFzRoPmn4M59YBy947Y9N2i7xvTlP6NHm27Ph2BKIq1fqczHL96WcZPjdRASqygYSmhV+d
 TjN33lxc8BMewEkSz4v4ekIiFA7/SR4oSvY05KfjfaLcV+2KC5aFJQADSuoJS9T6E39L+1j/Ig
 58WrMUwVuteH2IsENrosgSh78vlXfufespKpssqP4qBtFvkb8pLXc/wtNBxhoMnOErQ+TAZWX1
 mWg=
X-SBRS: 2.7
X-MesageID: 17814097
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17814097"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.27882.665779.471665@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:08:10 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 01/21] Document ioemu MiniOS stubdomain protocol
In-Reply-To: <20200428040433.23504-2-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-2-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 01/21] Document ioemu MiniOS stubdomain protocol"):
> From: Marek Marczykowski-Grecki <marmarek@invisiblethingslab.com>
> 
> Add documentation based on reverse-engineered toolstack-ioemu stubdomain
> protocol.
> 
> Signed-off-by: Marek Marczykowski-Grecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

NB I have not reviewed this for correctness.  I don't think I know the
protocol well enough ...

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:08:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGPf-0002t2-M5; Thu, 14 May 2020 16:08:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGPe-0002sj-6z
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:08:46 +0000
X-Inumbo-ID: 2fded8d0-95fd-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fded8d0-95fd-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 16:08:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589472526;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=Y/T8Grcn34HMj3CGoXXwBGV24cvuScdXdDdEnlZ/FJY=;
 b=UnmTCGJmcXIWgknyulNCbULjlpBiMkVjPqz3RYQAOBRNytGiGkE06YKA
 IfV4um0E9wVObnxI2dK61gXJ+lH/72XbZ/LDXOOhDAw3eiemPXE5WwrXK
 VOaPewn74GhWkOALq+zVtKCXBGWp5SA0+xv2kTI+irt+OTZ/zr00t7buU s=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: j4NUtXO7gk5nDydIprIn/k7pD/k9R5rFwVhwJxYzS2c9apFb/VkPjnSd2BKKQ09+B0bbsZ8Xlk
 ur3b+xNZ9X29F+YQFvH5ParntF7ravEE517k7mVvnyMpMCrSDmx0uBHa1LCrRLATWAXb4yy6KJ
 TSjMZSOhXE2z7Ioed18LOSl+qWtCTZ/vhIer3avP6+UNfWnfS/+M3GenVeFk54uS6Uoddf1A7R
 YvlNp/DV13/tHWAVUT6wyyxc46tTCkVUsAhOFYnYPpBFErd2EFRRdxGFWe+4g1CaWquVhFHdFl
 KvI=
X-SBRS: 2.7
X-MesageID: 17539758
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,391,1583211600"; d="scan'208";a="17539758"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.27911.881494.657040@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:08:39 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 02/21] Document ioemu Linux stubdomain protocol
In-Reply-To: <20200428040433.23504-3-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-3-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 02/21] Document ioemu Linux stubdomain protocol"):
> From: Marek Marczykowski-Grecki <marmarek@invisiblethingslab.com>
> 
> Add documentation for upcoming Linux stubdomain for qemu-upstream.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

This probably shouldn't be committed except with the implementation
:-).

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:10:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:10:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGRS-0003jJ-6e; Thu, 14 May 2020 16:10:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGRR-0003jE-QW
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:10:37 +0000
X-Inumbo-ID: 720e2a59-95fd-11ea-a4b0-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 720e2a59-95fd-11ea-a4b0-12813bfff9fa;
 Thu, 14 May 2020 16:10:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589472636;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=BJAywIiEegLx6jsR7Rtjl8UG8QzQHLIMfNnFGm9NpD0=;
 b=ADckRYwV6ism7kn9OCkV0U3top9E7DJiWjPfgNXcTrQMecrVVVvELBGP
 Znvk23eo9PjvakGJN6bIYKW0XTt+Ls0kN9c6dfixP7oZ/hcIPMx3J05cV
 V7Rmfh8nkqAC6IxG963lSqh/W+8uXGSJqFTW+lucGpe3F55rrihb4ydOB E=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: Hq6JI01u0muvXElBBXv5O0LgzKVBjolsunBIyrhsh+vdMUI+tLscZ+Xy7T7Ut6jTwv1amsskwc
 caCSgf99Vb4SbVGPmvTCXxag2Kti36eiKsxaJ3jn6W13jwK5NbPTY4iOol0RGcnYLVSe4DKngO
 26CNfPpY05Obk3V1DPaZHrWMnOL7EKaI/5FbWSKCIE++DBaktC1U1n4KTkaGNc/GLy7Nc792OX
 sWtaVN4SXf1BvJTF1SlyH5oPG49Tm9Lx+UprWtR0v98j651bu9iKQxqqF24cVpnpdtgDNuD9pu
 PQE=
X-SBRS: 2.7
X-MesageID: 17814247
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17814247"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.28008.395070.927378@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:10:16 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 04/21] libxl: Allow running qemu-xen in stubdomain
In-Reply-To: <20200428040433.23504-5-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-5-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 04/21] libxl: Allow running qemu-xen in stubdomain"):
> From: Marek Marczykowski-Grecki <marmarek@invisiblethingslab.com>
> 
> Do not prohibit anymore using stubdomain with qemu-xen.
> To help distingushing MiniOS and Linux stubdomain, add helper inline
> functions libxl__stubdomain_is_linux() and
> libxl__stubdomain_is_linux_running(). Those should be used where really
> the difference is about MiniOS/Linux, not qemu-xen/qemu-xen-traditional.
> 
> Signed-off-by: Marek Marczykowski-Grecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:13:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGUT-0003sA-MC; Thu, 14 May 2020 16:13:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZGUS-0003s4-Cs
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:13:44 +0000
X-Inumbo-ID: e1763f70-95fd-11ea-9887-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1763f70-95fd-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 16:13:43 +0000 (UTC)
Date: Thu, 14 May 2020 18:13:36 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 3/6] x86/mem-paging: use guest handle for
 XENMEM_paging_op_prep
Message-ID: <20200514161336.GH54375@Air-de-Roger>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 10:38:18AM +0200, Jan Beulich wrote:
> While it should have been this way from the beginning, not doing so will
> become an actual problem with PVH Dom0. The interface change is binary
> compatible, but requires tools side producers to be re-built.
> 
> Drop the bogus/unnecessary page alignment restriction on the input
> buffer at the same time.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> v2: Use HANDLE_64() instead of HANDLE_PARAM() for function parameter.
> ---
> Is there really no way to avoid the buffer copying in libxc?
> 
> --- a/tools/libxc/xc_mem_paging.c
> +++ b/tools/libxc/xc_mem_paging.c
> @@ -26,15 +26,33 @@ static int xc_mem_paging_memop(xc_interf
>                                 unsigned int op, uint64_t gfn, void *buffer)
>  {
>      xen_mem_paging_op_t mpo;
> +    DECLARE_HYPERCALL_BOUNCE(buffer, XC_PAGE_SIZE,
> +                             XC_HYPERCALL_BUFFER_BOUNCE_IN);
> +    int rc;
>  
>      memset(&mpo, 0, sizeof(mpo));
>  
>      mpo.op      = op;
>      mpo.domain  = domain_id;
>      mpo.gfn     = gfn;
> -    mpo.buffer  = (unsigned long) buffer;
>  
> -    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
> +    if ( buffer )
> +    {
> +        if ( xc_hypercall_bounce_pre(xch, buffer) )
> +        {
> +            PERROR("Could not bounce memory for XENMEM_paging_op %u", op);
> +            return -1;
> +        }
> +
> +        set_xen_guest_handle(mpo.buffer, buffer);
> +    }
> +
> +    rc = do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
> +
> +    if ( buffer )
> +        xc_hypercall_bounce_post(xch, buffer);
> +
> +    return rc;
>  }
>  
>  int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id,
> @@ -92,28 +110,13 @@ int xc_mem_paging_prep(xc_interface *xch
>  int xc_mem_paging_load(xc_interface *xch, uint32_t domain_id,
>                         uint64_t gfn, void *buffer)
>  {
> -    int rc, old_errno;
> -
>      errno = EINVAL;
>  
>      if ( !buffer )
>          return -1;
>  
> -    if ( ((unsigned long) buffer) & (XC_PAGE_SIZE - 1) )
> -        return -1;
> -
> -    if ( mlock(buffer, XC_PAGE_SIZE) )
> -        return -1;
> -
> -    rc = xc_mem_paging_memop(xch, domain_id,
> -                             XENMEM_paging_op_prep,
> -                             gfn, buffer);
> -
> -    old_errno = errno;
> -    munlock(buffer, XC_PAGE_SIZE);
> -    errno = old_errno;
> -
> -    return rc;
> +    return xc_mem_paging_memop(xch, domain_id, XENMEM_paging_op_prep,
> +                               gfn, buffer);

Sadly this function seems to still return -1/-errnoval, which is
weird; the same applies to xc_mem_paging_memop. Not that you should fix
it here, just noticed.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:18:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:18:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGZE-00045f-C1; Thu, 14 May 2020 16:18:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a33M=64=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jZGZC-00045U-FC
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:18:38 +0000
X-Inumbo-ID: 90cf4a02-95fe-11ea-a4b1-12813bfff9fa
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.72]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90cf4a02-95fe-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:18:37 +0000 (UTC)
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Error during update_runstate_area with KPTI activated
Thread-Topic: Error during update_runstate_area with KPTI activated
Thread-Index: AQHWKfvm5I6PbMAmoUakTwbdgqMBQqinvLCAgAAF7oA=
Date: Thu, 14 May 2020 16:18:26 +0000
Message-ID: <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
In-Reply-To: <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="us-ascii"
Content-ID: <23F2A5A326571B499E346972F2169E5F@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 14/05/2020 15:28, Bertrand Marquis wrote:
>> Hi,
>
> Hi,
>
>> When executing Linux on arm64 with KPTI activated (in Dom0 or in a DomU), I have a lot of page-table walk errors like this:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> After implementing a call trace, I found that the problem was coming from update_runstate_area when Linux has KPTI activated.
>> I have the following call trace:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
>> Discussing this subject with Stefano, he pointed me to a discussion started a year ago on this subject here:
>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
>> And a patch was submitted:
>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
>> I rebased this patch on current master and it solves the problem I have seen.
>> It sounds to me like a good solution to introduce a VCPUOP_register_runstate_phys_memory_area so as not to depend on the area actually being mapped in the guest when a context switch is being done (which is the problem that happens when a context switch is triggered while a guest is running in EL0).
>> Is there any reason why this was not merged in the end?
>
> I just skimmed through the thread to remind myself of the state. AFAICT, this is blocked on the contributor to clarify the intended interaction and provide a new version.

What do you mean here by intended interaction? How the new hypercall should be used by the guest OS?

>
> I am still in favor of the new hypercall (and it is still on my todo list) but I haven't yet found time to revive the series.
>
> Would you be willing to take over the series? I would be happy to bring you up to speed and provide review.

Sure, I can take it over.

I ported it to the master version of Xen and I tested it on a board.
I still need to do a deep review of the code myself, but I have an understanding of the problem and of the idea.

Any help getting up to speed would be more than welcome :-)

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Thu May 14 16:19:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGaS-0004Bt-N9; Thu, 14 May 2020 16:19:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGaQ-0004Bg-Nh
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:19:54 +0000
X-Inumbo-ID: bdd2111a-95fe-11ea-a4b1-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bdd2111a-95fe-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:19:53 +0000 (UTC)
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.28579.577001.476506@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:19:47 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 05/21] libxl: Handle Linux stubdomain specific QEMU
 options.
In-Reply-To: <20200428040433.23504-6-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-6-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Eric
 Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 05/21] libxl: Handle Linux stubdomain specific QEMU options."):
> From: Eric Shelton <eshelton@pobox.com>
> 
> This patch creates an appropriate command line for the QEMU instance
> running in a Linux-based stubdomain.
> 
> NOTE: a number of items are not currently implemented for Linux-based
> stubdomains, such as:
> - save/restore
> - QMP socket
> - graphics output (e.g., VNC)
> 
> Signed-off-by: Eric Shelton <eshelton@pobox.com>
> 
> Simon:
>  * fix disk path
>  * fix cdrom path and "format"
> 
> Signed-off-by: Simon Gaiser <simon@invisiblethingslab.com>
> [drop Qubes-specific parts]
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Nice work all.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

This is despite me spotting three tiny style nits:

> @@ -1312,7 +1316,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
>          }
>  
>          flexarray_append(dm_args, vncarg);
> -    } else
> +    } else if (!is_stubdom)
>          /*
>           * Ensure that by default no vnc server is created.
>           */

While you are here it would be nice to regularise the { }.  (libxl
CODING_STYLE says that all branches of an if should have { }, if any
of them do.)

> @@ -1974,8 +2006,10 @@ static int libxl__build_device_model_args(libxl__gc *gc,
>                                                    args, envs,
>                                                    state);
>      case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
> -        assert(dm_state_fd != NULL);
> -        assert(*dm_state_fd < 0);
> +        if (!libxl_defbool_val(guest_config->b_info.device_model_stubdomain)) {
> +            assert(dm_state_fd != NULL);
> +            assert(*dm_state_fd < 0);
> +	}

This } seems to be misindented ?

>      if (guest_config->b_info.u.hvm.serial)
>          num_console++;
> +    else if (guest_config->b_info.u.hvm.serial_list) {
> +        char **serial = guest_config->b_info.u.hvm.serial_list;
> +        while (*(serial++))
> +            num_console++;
> +    }

You should add the { } around the if block too.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:25:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:25:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGfq-00052i-FI; Thu, 14 May 2020 16:25:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGfp-00052b-K0
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:25:29 +0000
X-Inumbo-ID: 855c8ac6-95ff-11ea-a4b1-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 855c8ac6-95ff-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:25:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589473528;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=xR71YgTN54w6dV6dVcN3iIaQwTlM5J5CVvcqLfSMfVY=;
 b=bQ5KMH9NI5Z0ludVI6ShXt+3DShrFgK+fwmMH5CsxAnkW7gnpe+m9jxh
 aQ5dPMWxhC3+xmtYK+aV0qPc80DRA7MVqkAzoEvUbYvR6ALgd9CRLPYq+
 YaCuNIq3pbG9PgDCQYCzpwnt+GlD9Gg3DJnBdTUKEEgkTljW65weEmisf o=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 3Mg8vH9EciP3uNYc/MCbuG2Z6vGhpARIcRzq0qeWZwBBKPSJRoiVDK7H+dsoZUc/hPkgnwNiE5
 owFaOb9CC+zISaKJvjidIY47EHaeHxLQDUdL6q9ZejWi4ZIc7No3r0AQaHB8I6F57Hi8MzJyf6
 eHLc+AT+sAwWqBnbBiO1oPCH99ssYaVV6FiSJLUMISqLBg2kV+YOSLvvG0Ep8wyC7kxiLVMMyi
 Yo1p7Iq7b0GKl8hozecsLwirzvbNmUyfGN8pkSuQ5f+gNJ7knEvI/AKR2tRrK54LBF/QxKjl8T
 zZg=
X-SBRS: 2.7
X-MesageID: 17570245
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17570245"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.28914.656817.996478@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:25:22 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 06/21] libxl: write qemu arguments into separate
 xenstore keys
In-Reply-To: <20200428040433.23504-7-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-7-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 06/21] libxl: write qemu arguments into separate xenstore keys"):
> +static int libxl__write_stub_linux_dmargs(libxl__gc *gc,
> +                                    int dm_domid, int guest_domid,
> +                                    char **args)
> +{
...
> +    vm_path = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("/local/domain/%d/vm", guest_domid));
> +    path = GCSPRINTF("%s/image/dmargs", vm_path);
> +
> +retry_transaction:
> +    t = xs_transaction_start(ctx->xsh);
> +    xs_write(ctx->xsh, t, path, "", 0);
> +    xs_set_permissions(ctx->xsh, t, path, roperm, ARRAY_SIZE(roperm));

This wants to be libxl__xs_mknod I think ?

> +    i = 1;
> +    for (i=1; args[i] != NULL; i++)
> +        xs_write(ctx->xsh, t, GCSPRINTF("%s/%03d", path, i), args[i], strlen(args[i]));

Can you do this with libxl__xs_transaction_* please, and a loop rather
than a goto ?

Why not use libxl__xs_write_checked ?

> +    xs_set_permissions(ctx->xsh, t, GCSPRINTF("%s/rtc/timeoffset", vm_path), roperm, ARRAY_SIZE(roperm));

This line seems out of place.  At least, it is not mentioned in the
commit message.  If it's needed, can you please split it out - and, of
course, then, provide an explanation :-).

> +    if (!xs_transaction_end(ctx->xsh, t, 0))
> +        if (errno == EAGAIN)
> +            goto retry_transaction;
> +    return 0;

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:26:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:26:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGgE-00054K-OD; Thu, 14 May 2020 16:25:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZGgD-00054A-Ip
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:25:53 +0000
X-Inumbo-ID: 94419cac-95ff-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 94419cac-95ff-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 16:25:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589473552;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=GN4lWMY13FvzACO9lWX0bYCvDy9k+rCP+X515Is5yuI=;
 b=LndPkm+KDiNGOTKp29QLfRPOhOQHoe/8jpSPOlc5VB2bxlH+y7k+va3e
 PXHTPxjfPiIpBhvpODbuEP/g77jdomjdGq44/PO+loigyqX2NiCarND6N
 tciUgrFtmpNw30xNFlKLSrujchd0opzvM3IlGGA6WxyFig2R2aAMXpyDn A=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: +thdOU5cCjKvYfEL6b95SNZaWPP2F83xCOjbhQEksgT//E9HlcekbPntUisPJNXJsiMAQ1jjCq
 7zCpPfA0NrppecU22kngUbNqAjTfrW0FFV/FhuwsKktgYHR9Ha0lzXSN8W6UloJK25wEfDYWoZ
 BmveTCd3DOU5BP6PMYVjhuY75WdDhec6TTG6EdpW+sVYtWki9JucBsJ1ZCZiBSKEb4T8zXLl7J
 mwJtz3BCJgEqGlbpDpMDQMuvnTsZVwJS6e/ic6xVGgi2UongvoYM+L7v49tkxjHWgqd29Bx/kp
 aBg=
X-SBRS: 2.7
X-MesageID: 17908928
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17908928"
Date: Thu, 14 May 2020 18:25:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 4/6] x86/mem-paging: add minimal lock order
 enforcement to p2m_mem_paging_prep()
Message-ID: <20200514162545.GI54375@Air-de-Roger>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <4af1f459-fe7a-cd61-43cb-fb3fa4f15c00@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <4af1f459-fe7a-cd61-43cb-fb3fa4f15c00@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 10:38:44AM +0200, Jan Beulich wrote:
> While full checking is impossible (as the lock is being acquired/
> released down the call tree), perform at least a lock level check.

I'm slightly confused; doesn't alloc_domheap_page already have its
own lock order checking?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:26:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGga-00057F-0x; Thu, 14 May 2020 16:26:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGgY-00056x-JC
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:26:14 +0000
X-Inumbo-ID: a0c68582-95ff-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0c68582-95ff-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 16:26:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589473574;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=vf0BnNSZVE/00QmV1a5fPfWjL0yQ4186LfICng2mcg8=;
 b=YnE6MWeHsLj7LdaA/7J3zRvMG94EOGEvZa53d6krf7IrqZjQHvbxSBn3
 F4ZSfvEn5xoVRWBAhqB2nlpkl+kbUMaNZPtB6Wg18D05vchVrnb+BlAWQ
 tKAYPqJ6Lx/XBazJDXyLTGMSSxEG8wMpkNSSXLX44AUNozVepxnhvMP6S k=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: Z1+LnZYGMoURu8/FKSDWHbmqE8uFaa9i75EtrMfG1pqjoyRqP9kHIyoi8DFrJFRlXcML2aecar
 wZ06iWm2V+g2OZA8Ch/zdy7TzcOAcRHoPgXifoBqRirmoQ8HAwIGkANLEbNZamWzZh/cJ94VGE
 eesFQfl5QW0n0qgysmizdqk+lHd4TOZ5acVXSEB8k6FPtyHflqtnOwEsMKA5WI0PIwu1oK4XpR
 hIkhVovIdO3uz/G+fDmaC5h+ckwog/7iLrP8K/zE/Qf9fkCTJCnePGhTWeTuR0AlM4/tw3qScg
 pCo=
X-SBRS: 2.7
X-MesageID: 17815947
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17815947"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.28960.758984.914907@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:26:08 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 07/21] xl: add stubdomain related options to xl config
 parser
In-Reply-To: <20200428040433.23504-8-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-8-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 07/21] xl: add stubdomain related options to xl config parser"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:28:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:28:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGiR-0005Mf-DG; Thu, 14 May 2020 16:28:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGiQ-0005Ma-8S
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:28:10 +0000
X-Inumbo-ID: e594690e-95ff-11ea-a4b1-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e594690e-95ff-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:28:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589473689;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=pOok6b23FY270oMqz6LZdIAUiaV37a5LjYdYuXO6wKM=;
 b=fnAS2omlg9YkemPrvvshl2wMG8OBoKO8w0II2MbpGIUjXkO5rD87nElx
 RvQyxovZsex6Md2bmawHiIAtf43f90b/ioFHVZClxFXPg0cpt0bZqctnR
 Eao7yfklzY7fJLhaeftboxssi40MClEmKEgQA+1/OhPsSU7s+sbgm0INh 4=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: DKj042E5dnZrFhVSFrubRPzWB64nxT4IhLzYy5G3jJSzis0I01N0KFwVmOyxRbH29ylav1vLC5
 eSFy6yluU9BYkpxUThaVav5ABv4cYe+1qDF/Pm3rzmcHCkx1KjatbKbiFHwWfz3m/7zlfaw8p1
 5h9qRzhRUjUV3+3bM2rVSh0I3V/Vi67ESY0Vpr8GOpbzzcg99wEHiZ40MbL4NvikGn6K6NM8ae
 f1UWXnm81ygiUHPVaEQiK5G1e6n/KGmzgy6h3fXjPyjNx3/FXgPiTuHRLNWJU59jtcYp9RilFk
 91E=
X-SBRS: 2.7
X-MesageID: 17909104
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17909104"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29066.32494.314618@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:27:54 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 08/21] tools/libvchan: notify server when client is
 connected
In-Reply-To: <20200428040433.23504-9-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-9-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 08/21] tools/libvchan: notify server when client is connected"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Let the server know when the client is connected. Otherwise the server
> will notice only when the client sends some data.
> This change does not break existing clients, as libvchan users should
> handle spurious notifications anyway (for example an acknowledgement of
> the remote side reading the data).
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Replace spaces with tabs to match the file's whitespace.
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> Marek: I had this patch in Qubes for a long time and totally forgot it
> wasn't an upstream thing...

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

CCing Daniel De Graaf in case he feels like having an opinion....

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:30:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:30:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGkH-00065R-Pf; Thu, 14 May 2020 16:30:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZGkF-0005nF-MN
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:30:03 +0000
X-Inumbo-ID: 2920d00e-9600-11ea-a4b1-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2920d00e-9600-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:30:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589473803;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=/9nygPkkp5Q+lpg2QhK9iP++CHjileKsIkdOdkAm6YE=;
 b=O9ZwrLxHaB5AtRq5wyzVBbja8Pg+9vgqNJI+dfjDnZ/fvcp3SygtMfcU
 YJrcCiSRCxGisG57KW2plTLSzDCEPy7bHkteW79ModPtDXCWU3xrj+bJO
 1I7pGfYB34T3pKK9iQHBWzsxPtFULzzpTCwSKClwM/7XZwp+G9K/tNFxq A=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: uiOcC6PytHKgVWXDkSN386MNBT6O4rE2AsztKQw0bcwYqVetUPKuSxyG5FnwlMY1zdpmU8Kh0t
 TklHLq14m3Zqmqd4dj9yDgyO6AbDpvdZ704OlI7fdZEU0pSIQQBLfrxrYkYuXbbzE5U4E+aL09
 nueJ9N2HUbEizLw1W7JplXGS31YzYBrfOzJMqg/wUQQNQKinGY3w7oZILxp4M871FxytfW7JRj
 k7JYMYCt33+CIHyOZYbi+KTuwreE/UkujJhgbHvsN1oWQ5l4qy+v0ANEWkTkdYadZFeEoLQ6Ny
 tJ0=
X-SBRS: 2.7
X-MesageID: 17541766
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17541766"
Date: Thu, 14 May 2020 18:29:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 5/6] x86/mem-paging: move code to its dedicated source
 file
Message-ID: <20200514162955.GJ54375@Air-de-Roger>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <b9b189b1-9484-4501-6085-adf86e73f262@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b9b189b1-9484-4501-6085-adf86e73f262@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 10:39:17AM +0200, Jan Beulich wrote:
> Do a little bit of style adjustment along the way, and drop the
> "p2m_mem_paging_" prefixes from the now static functions.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:35:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGpf-0006Kc-EX; Thu, 14 May 2020 16:35:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGpe-0006KX-UK
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:35:38 +0000
X-Inumbo-ID: f10024da-9600-11ea-a4b1-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f10024da-9600-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:35:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474138;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=BUwW6yMydQVB8wBk6Rv7WxqzYJXcPZ4UF+HHSswujn4=;
 b=IA9qIytJOjNHsOXqZd7pm3qQpVGP58VTNq1A4ZW8nZ6MVE3L7/NVh/i/
 FC/Im7KdazMqnKAAHVZeOq97RyPR5oQ6qMfZC9Nh/FLimq7s9hg3VTOQw
 qMfYZQsUdUzd0Aa5lHjm1MYkwy9SXzRvJ/a1zdxOSDM3Ldrs7vNAu+xzv c=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: uE/nex3gRMr/YDqD7eUMlz2qjMwwjXp5M6GxbvcKZlfPW2sLFve+Y3A0XhsUXLMxYYvA4X1hys
 wO+2hVOOOdauSUEBY1O3ZNNJ09i93iaBlnFwwOQ3xlOGUe7uSe2t3szpVHii1+VISJi0YjPLMu
 O/iDT8jwp/Y7h47gQW61559vab2C70axo7kHogZmQcZYjf2BH3J9KnLgeevs6JiXeTDQ3imKN0
 Ptmff6i6LZLAY5z/GQdAdX+pWWML27xJlSZ+0tKQIy7wQFfezylbR6uY1DWCwX0Z6Zsd8HvAAu
 T8A=
X-SBRS: 2.7
X-MesageID: 17571195
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17571195"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29524.798802.978257@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:35:32 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain
In-Reply-To: <20200428040433.23504-10-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-10-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Rely on a wrapper script in stubdomain to attach FD 3/4 of qemu to
> relevant consoles.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Address TODO in dm_state_save_to_fdset: Only remove savefile for
> non-stubdom.
...
> +        if (is_stubdom) {
> +            /* Linux stubdomain connects specific FD to STUBDOM_CONSOLE_RESTORE
> +             */
> +            flexarray_append(dm_args, "-incoming");
> +            flexarray_append(dm_args, "fd:3");

Would it be possible to use a different fixed fd number ?  Low numbers
are particularly vulnerable to clashes with autoallocated numbers.

I suggest randomly allocating one in the range [64,192).  My random
number generator picked 119.  So 118 and 119 ?

Also, why couldn't your wrapper script add this argument ?  If you do
that in the script, then there is one place that knows the fd number,
and a slightly less tortuous linkage between libxl and the script...

It's not stated anywhere that I can see, but I think what is happening
is that your wrapper script knows the qemu savefile pathname and reads
it directly.  Maybe a comment would be worthwhile ?

The code looks like a good implementation of what it does, though.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:35:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:35:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGpv-0006MD-QZ; Thu, 14 May 2020 16:35:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGpu-0006M1-JX
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:35:54 +0000
X-Inumbo-ID: fa70211e-9600-11ea-9887-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa70211e-9600-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 16:35:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474154;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=qwtVVp+ccBGno3qwbkxZ3SGpiyVoSG6FcIf+fQ/YyHk=;
 b=TJ1w2WXWZKRFGyy5S7Ks5PZpxh4eCVUK/ifC2A0SRbQVcjMUgQtGXSG1
 2gaxYb0pBVEmH3n9iDz6uxZzah0V4n+Qe6j7sgI0NDo/n2K8FTar4J4g6
 QuyB/fVTGi8GVnKfObzKrYmVxEmUF0LJW9nnl1Bm6t1sdJeorr+ohHs3p Q=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: zR9S7KlqnsrROAKQGc3slbURnWZASnugNjse+qecOgBEeTlVZwStzRSMNFps9hNoGQoaSjFbl6
 gM+KWR0mDq70ZlzrDdSdRysjyjeNru3E98d0AElo7txde/VJNIu7VDPjYzfJhqWyTV2If/lMUc
 M3OxzrILj2geDS8K4xXTRxkPJ5J2ZNXtj0kZazR6J8u/bGQr1qeiA4JuRFx4q5Jy5uo4j5Z+G2
 WH3PFasXwWJQq6lC9FzUyw0eYH/rzrSIg1etm60soexK2ln+dNQo9ly7dH53+HPw4EdaP60JZO
 N4o=
X-SBRS: 2.7
X-MesageID: 17817132
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17817132"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29542.373728.684202@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:35:50 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 10/21] tools: add missing libxenvchan cflags
In-Reply-To: <20200428040433.23504-11-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-11-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 10/21] tools: add missing libxenvchan cflags"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> libxenvchan.h includes xenevtchn.h and xengnttab.h, so applications built
> with it need the applicable -I in CFLAGS too.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:36:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGqH-0006PH-2h; Thu, 14 May 2020 16:36:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZGqF-0006Ox-94
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:36:15 +0000
X-Inumbo-ID: 06d6660c-9601-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06d6660c-9601-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 16:36:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474175;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=1WBB4KiWEnDGidm0Nd5NBCCB52dlvaGttGdwJaUd1lM=;
 b=c1NylXEz5fQaqOg28s2GWXSsfUJqJ9YafLBYQUIsvFXHLOrgAzkIaTIo
 yT0vEfTYD791Rgjc43veqYOBncSrlDrrTf8LQhYXRhRDH4WkfaayViKv1
 UbF4Up8OF7vMgaBke3behTZVheowm/dcB6UaO76P7daU5Q+jrebV4eVYm M=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: y04uiOZ7JZ8NjJ2N9R837hwoMXSIcr7zd5uqu3DsiCz7CsArDfcFtCEBMBNUYbVsBhKcCeM/kY
 uPpYrfng43YxlN01btvBPccuxASoK2zTaxO7Ue1yNG+prd5qmaMZnHOUioiyUJLXBrihIZaAvA
 a7j6f/xYGMh0rvss/Rf54fIfzJQfR+pT2Y6mcAfbpdlRK1MCaKJLqv5IIXsk5bReRmjUV7VSwu
 HdZWe+cY09dCx6cGpRkiKVrrGMHyX7ff6g62PeJLpZP1S9uLagbXa6+8FTFklbqw1NV1V+CJsb
 tm4=
X-SBRS: 2.7
X-MesageID: 17571250
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17571250"
Date: Thu, 14 May 2020 18:36:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 6/6] x86/mem-paging: consistently use gfn_t
Message-ID: <20200514163605.GK54375@Air-de-Roger>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <b50f9677-3b62-b071-decc-007e6a92701d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b50f9677-3b62-b071-decc-007e6a92701d@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 10:39:39AM +0200, Jan Beulich wrote:
> Where gprintk()s get touched anyway to switch to PRI_gfn, also switch to
> %pd for the domain logged.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:37:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGrm-0006eZ-EM; Thu, 14 May 2020 16:37:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGrk-0006eR-Pk
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:37:48 +0000
X-Inumbo-ID: 3e7a0f32-9601-11ea-9887-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e7a0f32-9601-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 16:37:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474267;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=W4//fzUFWhbl5FZ6lmcht3748uvOPcgf8xtPcwPVNxY=;
 b=gCfjQ8zR1w2GvB3gFrHCWifCUSlglGnl2jti0bm9h4zac7wuVocuMiSp
 qFqWYyePVyNfqjZGEm+afqqtOcT2RziTXo+qjmGPLFy5ViU6kD0FxfoUN
 lCvrythOHRuJed3dOTnnFLL+u8LwKtUbPyzI0khCyPKPeyxSo46TGHzQ+ Y=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: KV/creLlsjvBVcTHvfj4rMq7xQQKE77or6YUq6cWkwPc5BevId5WYDFABMk4WdbKbc1p4tQ+ac
 p91+juzmAWpJKlux4xpfv/IkrLsHvqftZwtcgw4mHllB5WU0CsxLWmPlzun4ZGRxJhnnYejK+G
 z/0stFI2tfQM1X+qrPDjl0PsJJvOz+HDyp64L6teXTo6a/qFqMm/AkiqkGV67Sj6MwQcgIM9ZX
 UunKaEFnjxRWXFuQg05pv7f5sc3XiuQASLYXQPWqHY9nZ0spIXa33hh8lDYuTgStC+IjNpwHff
 i6c=
X-SBRS: 2.7
X-MesageID: 18244890
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="18244890"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29628.780563.640276@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:37:16 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 11/21] tools: add simple vchan-socket-proxy
In-Reply-To: <20200428040433.23504-12-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-12-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 11/21] tools: add simple vchan-socket-proxy"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Add a simple proxy for tunneling socket connection over vchan. This is
> based on existing vchan-node* applications, but extended with socket
> support. vchan-socket-proxy serves both as a client and as a server,
> depending on parameters. It can be used to transparently communicate
> with an application in another domain that normally exposes a UNIX
> socket interface. Specifically, it's written to communicate with qemu
> running within a stubdom.
> 
> Server mode listens for vchan connections and, when one is opened,
> connects to the specified UNIX socket.  Client mode listens on a UNIX
> socket and, when someone connects, opens a vchan connection.  Only
> a single connection at a time is supported.
> 
> Additionally, the socket can be provided as a number, in which case
> it's interpreted as an already-open FD (for a UNIX listening socket,
> listen() needs to have been called already), or as "-" meaning
> stdin/stdout, in which case it reduces to vchan-node2 functionality.
> 
> Example usage:
> 
> 1. (in dom0) vchan-socket-proxy --mode=client <DOMID>
>     /local/domain/<DOMID>/data/vchan/1234 /run/qemu.(DOMID)
> 
> 2. (in DOMID) vchan-socket-proxy --mode=server 0
>    /local/domain/<DOMID>/data/vchan/1234 /run/qemu.(DOMID)
> 
> This will listen on /run/qemu.(DOMID) in dom0 and, whenever a
> connection is made, connect to DOMID, where the server process will in
> turn connect to /run/qemu.(DOMID). When the client disconnects, the
> vchan connection is terminated and the server vchan-socket-proxy
> process also disconnects from qemu.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

I have not reviewed this.  It sounds very useful.  Thank you!
(I did skim over it.)
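
As a rough illustration of the forwarding loop such a proxy performs, here is a Python sketch (not the actual C implementation; the real tool drives libxenvchan events with poll() rather than plain sockets):

```python
import select
import socket

def forward_once(src, dst, bufsize=4096):
    """Copy one chunk from src to dst; return False on EOF."""
    data = src.recv(bufsize)
    if not data:
        return False
    dst.sendall(data)
    return True

def proxy(left, right):
    """Shuttle bytes between two connected endpoints until both
    directions hit EOF -- a single-connection proxy, as the commit
    message above describes."""
    peer = {left: right, right: left}
    open_ends = {left, right}
    while open_ends:
        readable, _, _ = select.select(list(open_ends), [], [])
        for s in readable:
            if not forward_once(s, peer[s]):
                # One direction closed: stop reading it and
                # propagate EOF to the other side.
                open_ends.discard(s)
                try:
                    peer[s].shutdown(socket.SHUT_WR)
                except OSError:
                    pass
```
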

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:39:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:39:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGti-0006oV-R4; Thu, 14 May 2020 16:39:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGth-0006oP-NE
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:39:49 +0000
X-Inumbo-ID: 86b1c5c4-9601-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86b1c5c4-9601-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 16:39:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474389;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=AqKWdpfdTTJgLBP6wY9qmTHo4s2pjhm8KYmzS1XJjpQ=;
 b=faIjd4Qmre6+cfoOt6RYWQ4+nXXq1YlFttDlIJLwM+ab+U6kqug3FKI6
 Hpzy3p/ugGjEBUCWkzJyTRnV84EkldX7EeCSQShfJJuyTEF9ljN3VLGjA
 SSM6D6buaDem89JYS6wMXzNJxbNIA5uxQiUH28EjErqGLnsPj1/Yx0uqm o=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: jv4vrkbNMnRObN497itbi0poQfmoPvTIHdb4n5BkSqqvLVc7tLvZYK76sE0bKRu+PgqjKFwv4Z
 N4VAVLzYYNnGFrCEKlViPTv32k4xLZPSPL80FQDPA4MN7J5tvrk6QGAidPfgbPo5S1iyT7BxAl
 HbjWz6e2u4Hhx+bSdh7LjOpv66Hq8JOrKddt7AEKI+fXT6NPIpOQo7o/RfNG0ITvWy1UbZJeeE
 ipMcwWJMuB12/MZUCcLMem+BFCXXKCcZnX3P+9Ro092YeH0jvGd8hjfAIwqQIFLSvQh1mhYbE6
 YiU=
X-SBRS: 2.7
X-MesageID: 17910432
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17910432"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29774.668013.710363@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:39:42 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 12/21] libxl: use vchan for QMP access with Linux
 stubdomain
In-Reply-To: <20200428040433.23504-13-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-13-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 12/21] libxl: use vchan for QMP access with Linux stubdomain"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Access to QMP of QEMU in a Linux stubdomain is possible over a vchan
> connection. Handle the actual vchan connection in a separate process
> (vchan-socket-proxy). This simplifies integration with QMP (already
> quite complex), and also allows preliminary filtering of (potentially
> malicious) QMP input.
> Since only one client can be connected to the vchan server at a time,
> and this is not enforced by libxenvchan itself, additional client-side
> locking is needed. It is implicitly implemented by vchan-socket-proxy,
> as it handles only one connection at a time. Note that qemu supports
> only one simultaneous client on a control socket anyway (though in the
> UNIX socket case it enforces this server-side), so this doesn't add
> any extra limitation.

You might mention in your commit message that the libxl qmp client code
already has locking to avoid attempts to talk concurrently to the same
qemu.
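
Client-side serialisation of this kind is often done with an advisory file lock; a minimal sketch (the lock path is hypothetical, and this is not how libxl itself implements its locking):

```python
import fcntl
import os

def acquire_client_lock(path):
    """Take an exclusive, non-blocking advisory lock representing
    the single vchan client slot.  Returns the open fd; closing it
    (or process exit) releases the lock.  Raises BlockingIOError
    if another client already holds it."""
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
    except OSError:
        os.close(fd)
        raise
    return fd
```
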

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:41:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:41:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGv7-0007XM-6B; Thu, 14 May 2020 16:41:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGv6-0007XG-H1
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:41:16 +0000
X-Inumbo-ID: b6e62187-9601-11ea-a4b1-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6e62187-9601-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:41:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474472;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=cyiVLU6EQWf62E9vmHbjg+E3NRWfTbuH4+5ne1kWS3k=;
 b=cYSzzyFMOe57SvrYvc3gcpZdih5kbT/PmxEj1M6xqoc5X1TaQS99cJqj
 l1Cw9gD67uOn2q2ecZUkX7vu0cg5/dSWF9jjcNspLcgr9S+dt45OeRSmC
 qkPaL++ss7I7rKbsLHRvOyYB1vd4IEfBcJRnZbcoRAmbJmCP/f5vIofgn 8=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: rxKE1WN8e0GswRpCSgyg6qN/MHgR3WbCM7xZDBlqgF6yEYWm4/U8ejsEgPDpxd/miISz85OD8/
 xj5370nJIoHeeFMdfacVqqUvB04WZCIG5MG0TBFTE/JDbxwvIptG5IrArpLQV/8KXTEmkXwTjn
 kZ3pwxQAW4rMB/R3hME3Hy86mRjYNS4YTw/8hIhKYStcOlp+rPpaXrYf2gIrmFdcaBBRJkPDld
 Fj7V/NMcIGItgFcMVT2q4pK+FiZjeesiy1WF9yyjAXfEL7NmJ9jVlojb1eZ1B1j+pZ0XPnf8gL
 QJw=
X-SBRS: 2.7
X-MesageID: 17571637
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17571637"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29859.291364.371640@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:41:07 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 13/21] Regenerate autotools files
In-Reply-To: <20200428040433.23504-14-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-14-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 13/21] Regenerate autotools files"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Since we have those generated files committed to the repo (why?!),
> update them after changing configure.ac.

This should be folded into the patch that changes the input.  Can you
please add a note to the commit message instructing the committer to
rerun autotools?

We have these files in-tree because not everyone has a recent
autotools.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:42:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:42:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGwV-0007cv-IZ; Thu, 14 May 2020 16:42:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGwU-0007cp-J2
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:42:42 +0000
X-Inumbo-ID: ed9ba0b6-9601-11ea-a4b1-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed9ba0b6-9601-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:42:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474561;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=rKKVKX2U6395qlHtdbnVfdQhVMeO/gVk0U1PwfYAm1Q=;
 b=cyjUHJo1Gxd8bEUMGSyySxLrLcSiZtRZFiJQypqmtGtQqHw7yQfQb8tY
 G0wRePSWKt33TEy/DUf/vZavXukianF3TWjxsSSNj9ddrdrEK6UUbY36m
 Lzgf/0ENNVH16/P9e4/mGgZzBWBgLO3Sh4jRI36Wb8e7KVbCbH7OB5deH w=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: rmDlEYYNyGXw8T7g3EVCrfPUWar3mXtynqJvVSoNVTjxbCqUJBd59rQFF4AGojYDe+nlYDon2I
 sgWVlI3O+NzB3YrCHRxThpdNGIDExAcv0Q/rGPPqs97dHGix32yJ8ALW31jgY8Y5GBGmabNpz0
 trwN6PJ6bCq4nMmldz0FfDFzqeQ9L6ro/meFnt9OJsmnB4KE3GhOGLyM/7MkPcgou5GSOsTPfK
 /k1lu1z8jiI9ZxxyPChd1k4b75EJunb+0MTE1ZL0wRuJJ9/b3RYMcWDX1F3wQia6112Z8cMa6/
 RIY=
X-SBRS: 2.7
X-MesageID: 17910753
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17910753"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29948.624988.194564@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:42:36 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 14/21] libxl: require qemu in dom0 even if stubdomain
 is in use
In-Reply-To: <20200428040433.23504-15-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-15-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 14/21] libxl: require qemu in dom0 even if stubdomain is in use"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Until xenconsoled learns how to handle multiple consoles, this is needed
> for save/restore support (qemu state is transferred over secondary
> consoles).
> Additionally, a Linux-based stubdomain waits for all the backends to
> initialize during boot, so missing console backends result in a
> stubdomain startup timeout.
> 
> This is a temporary patch until xenconsoled is improved.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  tools/libxl/libxl_dm.c | 12 ++++++++++--
>  1 file changed, 10 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> index e420c3fc7b..5e5e7a27b3 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -2484,7 +2484,11 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
>          }
>      }
>  
> -    need_qemu = libxl__need_xenpv_qemu(gc, dm_config);
> +    /*
> +     * Until xenconsoled learns how to handle multiple consoles, require qemu
> +     * in dom0 to serve consoles for a stubdomain - it requires at least 3 of them.
> +     */
> +    need_qemu = 1 || libxl__need_xenpv_qemu(gc, &sdss->dm_config);

But I don't think this is true for a trad non-linux stubdm ?
So I think this ought to be conditional.

What am I missing ?

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:43:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:43:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGx6-0007in-09; Thu, 14 May 2020 16:43:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGx4-0007hn-0B
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:43:18 +0000
X-Inumbo-ID: 02af982c-9602-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02af982c-9602-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 16:43:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474598;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=zk407Zx2NhZqy1uEhJfHNMgPucOjR8qsFj7cF6VTpiA=;
 b=gejuf4/Cbt/qrLr+wVuwjWj4Zq1PWTXCAKEb5sg9ZH3i0MsIeRFxWizs
 f9q6tIihOafkUTuWIoNcZhzkfWGHMMxr4shn4xMbrTxxbOnyWQnxN4LEC
 MOFaM+5nN6uDey+t7gHd+AEObqvN6F6x+77nn++0Wc4BNFy7/pcN/an2+ 0=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: fphBZdBlwuEEDWVe/fPZLWXBvpDaGAmMdl5nBEYpjV1JXpGDGjZHWPHbXooyxXnMY3T9Jfmjd1
 lx+S07vX7Ovqk+oIMUfBuy2dZcRGQUMV4q2+wht6Dt8c5OVmFgKP8mdWva1nL2GCeZIxSrvlaI
 ZUZhi8r+ZY90Bx9lj2ygVglvuPqlEKzy4zzfs3zo1a6OcLfSE5oAiOc99UO3mL5EQgG3YQzOCZ
 5IRT9st2MPxFSXrR/WSX4RVKPhJs4DvE/eq/fEG/bT/CR0yX1ohtcrqOq99hHbTJ5aiw8wdExD
 /LU=
X-SBRS: 2.7
X-MesageID: 17571860
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17571860"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.29984.17178.839943@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:43:12 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 15/21] libxl: ignore emulated IDE disks beyond the
 first 4
In-Reply-To: <20200428040433.23504-16-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-16-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 15/21] libxl: ignore emulated IDE disks beyond the first 4"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Qemu supports only 4 emulated IDE disks; when given more (or disks with
> higher indexes), it fails to start.  Since the disks can still be
> accessed via the PV interface, just ignore the emulated path and log a
> warning instead of rejecting the configuration altogether.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:44:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:44:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGxk-0007oD-AR; Thu, 14 May 2020 16:44:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGxj-0007o4-8M
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:43:59 +0000
X-Inumbo-ID: 1b691d5c-9602-11ea-b07b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b691d5c-9602-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 16:43:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474638;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=Pja4qSknMHJIdob7GFnuNB5kdgoBZ8chGBn+Uy+iYyo=;
 b=gw5lb8wNOZScMh0ZYnl89krvxZArW3eA5hQ82DU74Lv8v5ddkHI9q8tC
 b1h/3LCelVebLOlEjP97WTckcciKdRQ6F+t5jLBkKvljy4emJq+XlcUVc
 YBuo+5lppate2HEaOCkmnf2pUwHkZUjUn2Vvzc1LTk7atX6Bwxl1j9xk9 Q=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 2ZQSUVwwDodzmcNkcQuJ3ajQywzliYgteVVnfc1Vo2hUz7TlGCTAInk1IuAhHh2hZmf+/m3f7p
 jMg59cKXzj2kWWAxoCYBnLiZpEjxQr/KJHnvyGyxSyERbnpNNljWEEpyr/XcbfV0hQQg0tL2je
 4rTcGZczfQt7hHmuxqn/878LpWZFuvN70OIzm9nr59a6SvN1wqACozKjN0O7bGbadeAF32oT7t
 okfhSMCh1DnLcPbohkM1hDtox1BMJ47FjW7aEbfOGs0Jv8uO/WYraE1wfeXoKt5QF2HEokTGII
 +/M=
X-SBRS: 2.7
X-MesageID: 17815815
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17815815"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24253.30019.941784.599298@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:43:47 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 16/21] libxl: consider also qemu in stubdomain in
 libxl__dm_active check
In-Reply-To: <20200428040433.23504-17-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-17-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 16/21] libxl: consider also qemu in stubdomain in libxl__dm_active check"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Since qemu-xen can now run in a stubdomain too, handle this case when
> checking its state.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:45:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:45:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGyk-0007wU-Kx; Thu, 14 May 2020 16:45:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGyj-0007wM-Jq
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:45:01 +0000
X-Inumbo-ID: 4051c575-9602-11ea-a4b1-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4051c575-9602-11ea-a4b1-12813bfff9fa;
 Thu, 14 May 2020 16:45:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474701;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=EobmdpSmoUNR44WE9j3Dk3qZK//9w89cvSj+HAZSL6I=;
 b=MI/QDT/IkgggMDVKzjGuzqL99mHDbYIMuPjXYu/WEmMeLn1C7AkacXgZ
 U/eTaFA+8ny0lHUCiqKEDiYZhvnAXEBmQtAqcpwAWBxhMG9dcHnSeFGXZ
 6phvmPEV8WxmMurwf0FwydWfShQUz+FmIynoB2/HZludGgsj92l+XFQWY g=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: O4snGjlQuoA9GBZSYtcXQ42dBBD/AXvO9CQTMDJCdMJ49rO17RF5Us4oXUr+wAuTUrniSzU/Ta
 gQ8PuMRjnPjDDBvnjkPvdBtJesalFh1VFa0YNljfvUdL+dc6As0CAPS6k+SzjrtBfK/kNdQ9dL
 167rSIvIxPnnB9XDI+cze/f0R/FExQkRrAFwp7ggr4qRrJGEL9qGiaXcJyGOfpQYq5Tenez4bI
 5WPYoVLxg3p1DQ5BoHzsyItnNfMsQqbLS4owvhZ7Aan4bp/3qOEaD0XYMD8mFxfJL6vxUH24Qj
 Bt8=
X-SBRS: 2.7
X-MesageID: 17572092
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17572092"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.30078.812654.307279@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:44:46 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 17/21] docs: Add device-model-domid to xenstore-paths
In-Reply-To: <20200428040433.23504-18-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-18-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 17/21] docs: Add device-model-domid to xenstore-paths"):
> Document device-model-domid for when using a device model stubdomain.
> 

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:45:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:45:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGz5-0007zz-Uy; Thu, 14 May 2020 16:45:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGz4-0007zd-C5
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:45:22 +0000
X-Inumbo-ID: 4cef1994-9602-11ea-a4b3-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4cef1994-9602-11ea-a4b3-12813bfff9fa;
 Thu, 14 May 2020 16:45:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474721;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=czhQvsIWRDbqz1RpqtIinIp60DIS8tczDgv9fc4yOZM=;
 b=YWIUN4+NlKlO2D0CeSVFxQKN329mKgnw7XeDn9VGDRyZL6St5tV1iPpN
 LU2cxRqHYxyqbaC5Gu14JtaA4WdfQEzI2qO7vIMxUGMJOJXvP91yub/CS
 L3MDFxcq77JvxLTVcGkcrk3cmyJQAOgzrCvvtrQ3+fqeMcw/DFkHa5HNp 4=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: hufTwrDfaVviPbptfpajk9bonJMrP7EoCHNI3XhmGhGaBoVpIirjljT0bmx06kwxCpZlgmy/cf
 E1IowfnMRyqEg+RdWJMkgKO7m3XVCKSe0KGsBvhu1bVkeV6ukYvhIcd4r6hPKDZpdan2JNmPGH
 odpLFiHkb1ctmlpvZwZzhr2ufKkEamTDVqvGsHj3W/vtWxxpT7HbOIZ5+eioPydY3JQPB85veK
 xecSzFNfbvpvLRWAp3NBFhwRKFEGZAraxVHkynjMGmSfBl1Xhx86NwGGHB9O6gRMgFzN9ZBEeG
 UWs=
X-SBRS: 2.7
X-MesageID: 18245754
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="18245754"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.30108.586111.505987@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:45:16 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 18/21] libxl: Check stubdomain kernel & ramdisk presence
In-Reply-To: <20200428040433.23504-19-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-19-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 18/21] libxl: Check stubdomain kernel & ramdisk presence"):
> Just outside the context of this patch is the following comment for
> libxl__domain_make:
> /* fixme: this function can leak the stubdom if it fails */
> 
> When the stubdomain kernel or ramdisk is not present, the domid and
> stubdomain name will indeed be leaked.  Avoid the leak by checking for
> the files' presence and erroring out when absent.  This doesn't fix all
> cases, but it avoids a big one when using a Linux device model
> stubdomain.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:45:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:45:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZGza-00085M-7s; Thu, 14 May 2020 16:45:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZGzY-000852-ML
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:45:52 +0000
X-Inumbo-ID: 5ef48f16-9602-11ea-b07b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ef48f16-9602-11ea-b07b-bc764e2007e4;
 Thu, 14 May 2020 16:45:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474752;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=g30fltV70jAfpv2TCl8PWDqrZ24Z2bUhc7hgAkgNdiM=;
 b=c3chgtyja0/D+KGIKR0tEx99m7WOLbnTuOCd892OlBhSITEWsTnjX6S4
 OlGJ/BTR9inKorkQHu6X8PkapL7Ra/MZoOI7vc7TFCo9bw8K5EKee1/2R
 KPn8eGFrkZi3i1GkZ1YO3Vxatq/4b3aqo55erzV8E27KYacWLwldWQkuY 0=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: n8/8p2G5Dryi0gZb58ogBsI4R6vTaGdIlD50BDtSaWKwE3p5tOg7yCFnye+O0MrEjLjyaJIxYQ
 UzmJ4f38XZtjv+BpdcXXNuK+pl2KoUE3zzZIKGLCaJhBCjulPx/FTPFL+Un2PRhQ2vjk4Pkt2s
 Eps/sQrMCF9+rPV7vWEEStixpcr1OyoBsN8vvN53FfePyx1cDjWGwpmeGs43MGbIALVL0QN1R4
 2j/WpYn2Olx6Y+lHApNlp7/KegUldD3wfMvKoTV5DzVpXyK7AYx/Q0d5nCjW1rEGE2N00bucoT
 ARI=
X-SBRS: 2.7
X-MesageID: 17543549
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17543549"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.30138.601001.944534@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:45:46 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 19/21] libxl: Refactor kill_device_model to
 libxl__kill_xs_path
In-Reply-To: <20200428040433.23504-20-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-20-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 19/21] libxl: Refactor kill_device_model to libxl__kill_xs_path"):
> Move kill_device_model to libxl__kill_xs_path so we have a helper to
> kill a process from a pid stored in xenstore.  We'll be using it to kill
> vchan-qmp-proxy.
> 
> libxl__kill_xs_path takes a "what" string for use in printing error
> messages.  kill_device_model is retained in libxl_dm.c to provide the
> string.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:48:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZH1Z-0008Iz-Kv; Thu, 14 May 2020 16:47:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZH1Z-0008It-8V
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:47:57 +0000
X-Inumbo-ID: a85ef8c6-9602-11ea-a4b3-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a85ef8c6-9602-11ea-a4b3-12813bfff9fa;
 Thu, 14 May 2020 16:47:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474876;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=Hwiy3ed45RImGOwT2s2uoZACuwopz6Cd4PBM7vCb5AY=;
 b=I07rGclWTt4XQZA0Z+KUkkUCa6DTtNaEvLq4fw44cLNP+yR8HN3hi9qD
 u/85h5UtAFkNLAhEJoDAY9ZcMP6K+FPS5tLthc8Dv5UOxcTRzTjDwT/3a
 W8nYAkVTvG4x8lWrF56Nkamdi/UvZIz0zjIchorL7hQgcoR+ZF7FJPMBt A=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: tl9wFKBrf1Q34ZijM7/EpjMQBtVfrEgxxiR/AlTRU6QlFRcwWfKRd7HJKSm8eFKMstIGUjDI1o
 q3nPzp5CozavOQeL4JQ/m3yqnQzWQJyLv8DwNTHES5GDCMGw9vPRNEhgLDdiw/hCfi4c5fiZBS
 C9zDb5PyIkxN2urC2Yn92JpleQKoLQTr4aLRVVe0TQ7DEtPy6vXclKbTN4buVef3lL27xyTthR
 vBx/sgN/XADHW4q+UylU8U45MPrO2YmIwDNt7unOhRJRW8FuDJNwAkk2Ad/Y+58Rm3TTPXc3Kd
 NMc=
X-SBRS: 2.7
X-MesageID: 18246029
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="18246029"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.30259.427251.49728@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:47:47 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 20/21] libxl: Kill vchan-socket-proxy when cleaning up
 qmp
In-Reply-To: <20200428040433.23504-21-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-21-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 20/21] libxl: Kill vchan-socket-proxy when cleaning up qmp"):
> We need to kill the vchan-socket-proxy so we don't leak the daemonized
> processes.  libxl__stubdomain_is_linux_running works against the
> guest_domid, but the xenstore path is beneath the stubdomain.  This
> leads to the use of libxl_is_stubdom in addition to
> libxl__stubdomain_is_linux_running so that the stubdomain calls kill for
> the qmp-proxy

In theory, maybe this patch should be folded into the one that
introduces the vchan proxy.  But since this whole mode of operation is
new, having a point in the history where it leaks these processes is
OK, I think.

Do others agree ?

For my part,

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

However, if possible, maybe this patch could be moved to right after
the one which spawns the proxy?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:48:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZH2G-0008O5-V2; Thu, 14 May 2020 16:48:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZH2F-0008Nw-VK
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:48:39 +0000
X-Inumbo-ID: c29a366a-9602-11ea-a4b3-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c29a366a-9602-11ea-a4b3-12813bfff9fa;
 Thu, 14 May 2020 16:48:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589474918;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=aOSwwwpAP8vxN4YJ8ydW7xghPeM4DBW79TLTtqIM1LU=;
 b=AeFN5DEPe8dtUEyUzRALxXRohdNQqsGhrIanWLmaJWV/UizvVOvgSun0
 WDE7tyFwm8bCMJdm3tE7+PTQcoRc1HUw07Q0EA5VJqBcvdDoN0gC5r9AU
 lqa1O4taC9LMMb+Q392ruz76xsLlEa120zTJufTiua5QETuQ/RWCOp7N6 g=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: m70DffVekrkTeA0ohNqCuFaij02MzVE3SpMwAz5VRQMCmQVnrHJ9oWto/mWzKuVK/UXQClknE1
 HdyByvzBW5R3tuqjd386197E9kUOMnPzIi1DDx/7Lbx/AJ6mul6TdKmCl8V/vKs1cYtTTE111c
 UNbfY7VTyANIEr12Hvg8sMFP1CavG/zKgiCknaVaaR+8ITLKtOKGLfLOia0gWQsE7sU5sdNMdM
 fzgnZseznRSBLm1CKpwvOHAqfWSbheSKeG/cb4t0FJIM3HuQmcDmHEMehdWd52mfk3iX6HL+vY
 bOg=
X-SBRS: 2.7
X-MesageID: 17816542
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17816542"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.30307.935419.913847@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:48:35 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 21/21] tools: Clean up vchan-socket-proxy socket
In-Reply-To: <20200428040433.23504-22-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-22-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 21/21] tools: Clean up vchan-socket-proxy socket"):
> To avoid socket files lingering in /run/xen, have vchan-socket-proxy
> clean up the sockets it creates.  Use a signal handler as well as atexit
> to handle both means of termination.

This should be done in [lib]xl destroy, not here.  That way, if the
proxy crashes or otherwise dies uncleanly, the socket will still get
cleaned up.
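For reference, the atexit-plus-signal-handler pattern the quoted commit
message describes can be sketched generically like this (an illustrative
sketch only, not the actual vchan-socket-proxy code; the path handling
is made up):

```c
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Path of the socket file to remove on termination (illustrative). */
static const char *cleanup_path;

/* Runs via atexit() on normal termination. */
static void remove_socket(void)
{
    if (cleanup_path)
        unlink(cleanup_path);
}

/* exit() runs the atexit handlers, so SIGTERM/SIGINT also end up
 * removing the socket file. */
static void on_signal(int sig)
{
    (void)sig;
    exit(1);
}

static void register_socket_cleanup(const char *path)
{
    cleanup_path = path;
    atexit(remove_socket);
    signal(SIGTERM, on_signal);
    signal(SIGINT, on_signal);
}
```

As Ian notes, this only covers the proxy's own exit paths; a crash
(e.g. SIGKILL or SIGSEGV) still leaves the file behind, which is why
cleanup from the destroy path is more robust.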

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:51:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:51:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZH4t-0000nr-GT; Thu, 14 May 2020 16:51:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=otfA=64=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZH4s-0000nm-JA
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:51:22 +0000
X-Inumbo-ID: 23861656-9603-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23861656-9603-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 16:51:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589475082;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=Pp8JbQ9QRaQeKFHCajjGtu3O0dsndqmSPyyTnO6uZcM=;
 b=Sm8Nuuy8tsaZjklU/2xL1TWnbIJVQ+BM6rAevG3p4OkJi1+2bL1u7crn
 /W05vWopFGlIPoBu+gMjhLhhbmMn4h2mmZcTB3oN2gRMDIBsotqpC65SE
 2iyzlUK3aVwIbwdddWnld8aWZl7NkiS468953g7m07XTcBs3acY3ftKLz o=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: FxyjoXYK6n85q4eEjwyYYn3JqDvM/CftwFVJoxjN67Wex22QrFtaduzZOaRxi91vu2/29fQLKX
 QAvlxjSB/fM+B+zElyLuzixJVjZt7qeJ+IH0+UhkTQdR2AAyGxkyJVmYsKvoah99pc7e0t0yB2
 vY05jaDjOFkFACsA2R7dTsE+gxSOidJ7/y7aA8u6afRcvew8VyV2WoU6YIhlYneqMGj19X+HWh
 IqpHYTR2ZLOllCh2WeorrreQngHio+zDA2QX/cmupdkErBOt+UPjD5MbIkHyQqfVyiOfl8RQAs
 4fU=
X-SBRS: 2.7
X-MesageID: 17818899
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17818899"
Date: Thu, 14 May 2020 18:51:12 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
Message-ID: <20200514165112.GL54375@Air-de-Roger>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
 <20200514131021.GB54375@Air-de-Roger>
 <2e9c7c05-e42c-52d4-f48c-9ecc8b14a1a7@suse.com>
 <20200514153252.GE54375@Air-de-Roger>
 <d166968f-23da-6065-3a90-e0fb0c4f6dcf@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d166968f-23da-6065-3a90-e0fb0c4f6dcf@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 14, 2020 at 05:50:29PM +0200, Jan Beulich wrote:
> On 14.05.2020 17:32, Roger Pau Monné wrote:
> > On Thu, May 14, 2020 at 03:38:18PM +0200, Jan Beulich wrote:
> >> On 14.05.2020 15:10, Roger Pau Monné wrote:
> >>> On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
> >>>> While from just a single Skylake system it is already clear that we
> >>>> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
> >>>> documented to be used for display purposes only anyway), logging this
> >>>> information may still give us some reference in case of problems as well
> >>>> as for future work. Additionally on the AMD side it is unclear whether
> >>>> the deviation between reported and measured frequencies is because of us
> >>>> not doing well, or because of nominal and actual frequencies being quite
> >>>> far apart.
> >>>
> >>> Can you add some reference to the AMD implementation? I've looked at
> >>> the PMs and haven't been able to find a description of some of the
> >>> MSRs, like 0xC0010064.
> >>
> >> Take a look at
> >>
> >> https://developer.amd.com/resources/developer-guides-manuals/
> >>
> >> I'm unconvinced a reference needs adding here.
> > 
> > Do you think it would be sensible to introduce some defines for at
> > least 0xC0010064? (ie: MSR_AMD_PSTATE_DEF_BASE)
> > 
> > I think it would make it easier to find on the manuals.
> 
> I did consider doing so at the time, but dropped the idea as we have
> a few more examples where we use bare MSR numbers when they're used
> just once or very rarely. What I'm not sure about is whether the
> name would help finding the doc - the doc is organized by MSR number
> after all.

I would prefer that we add names wherever possible, as I think it
makes the code easier to understand, but I can also see the point that
it is more churn, since you end up having to look at the manual to see
exactly what each MSR contains anyway.

FTR, I wasn't finding the MSR in the AMD docs because I was searching
for C0010064, when I should instead have been searching for C001_0064.
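For illustration, the sort of define being discussed might look like
the sketch below.  Note that MSR_AMD_PSTATE_DEF_BASE is only the name
floated above, not an identifier that exists in Xen, and the helper is
hypothetical:

```c
#include <stdint.h>

/* Sketch of the suggested naming; this define does not (currently)
 * exist in Xen.  AMD's manuals write this register as C001_0064. */
#define MSR_AMD_PSTATE_DEF_BASE 0xC0010064u

/* The P-state definition MSRs form a contiguous block, so P-state n
 * is described by C001_0064 + n. */
static inline uint32_t amd_pstate_msr(unsigned int pstate)
{
    return MSR_AMD_PSTATE_DEF_BASE + pstate;
}
```

A named constant like this would at least make the MSR greppable in
the source, even if the manual itself is organised by number.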

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 14 16:56:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 16:56:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZH9J-0000yL-3S; Thu, 14 May 2020 16:55:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NlH/=64=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jZH9H-0000yG-NA
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 16:55:55 +0000
X-Inumbo-ID: c61d9c55-9603-11ea-a4b5-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c61d9c55-9603-11ea-a4b5-12813bfff9fa;
 Thu, 14 May 2020 16:55:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589475354;
 h=from:mime-version:content-transfer-encoding:message-id:
 date:to:cc:subject:in-reply-to:references;
 bh=D5qXEfAW5rG98iWDbNNeq73weeSNhCXVmeNdOZN5+qE=;
 b=bmTJCn2Fkoc3VHKb2yxB1ssl1GAUQ0d31X7v5EL63COdSEX8YfIW4SSF
 POZt7p0qMiWkaHXZvoaTiUfjVnMFODFQz1Z6e6fgq7vjIK9dAKdWBmHgB
 JK74bmuAS9Pt2xcN8hDQN/z6dZoP7uKtYC5t6OggJY7miswmIqu0Exnse c=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 ian.jackson@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="ian.jackson@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 Ian.Jackson@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="Ian.Jackson@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="Ian.Jackson@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=ian.jackson@citrix.com;
 spf=Pass smtp.mailfrom=Ian.Jackson@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: M+cvbzxc6j3YYi2jQEl7IzuB0ywxBnWxuKMIvZsU+5xEn2NOANz4J3qiq4WJ9L39M8hAW7ju07
 yW+IpIsiRdsM3jat/wYoWOcbK3nK4B7WgUtN+51f/8rI+92EdNv2jo8YL6JMn56OmzT6nF2FYD
 ArZYLTaMiDboeYJv9UA40aRN4J4HClt6BzGhCGZkUzDBMwY3A21fR9wGV9MTKo3HIHwdFKt+4L
 pWD4eURhQerGApx0IKEq/QTbr0bDrSG+v7J5/oyqgq74bJoG7QLYWwPLyh034JVt19mE2c/EgS
 P/E=
X-SBRS: 2.7
X-MesageID: 17817623
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="17817623"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24253.30741.77867.105081@mariner.uk.xensource.com>
Date: Thu, 14 May 2020 17:55:49 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 00/21] Add support for qemu-xen runnning in a
 Linux-based stubdomain
In-Reply-To: <20200428040433.23504-1-jandryuk@gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Jan Beulich <jbeulich@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Eric Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v5 00/21] Add support for qemu-xen runnning in a Linux-based stubdomain"):
> In coordination with Marek, I'm making a submission of his patches for Linux
> stubdomain device-model support.  I made a few of my own additions, but Marek
> did the heavy lifting.  Thank you, Marek.

Hi.  I finished reading these patches.  Thank you very much.  They
were nicely structured.  I found them clear and easy to read.  As
you'll have seen I have requested only a few changes.

I am very hopeful that this series will make 4.14.  Code freeze is
Friday the 22nd of May.  Please let us know whether you think we'll
all be able to make that...

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 14 17:37:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 17:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZHmf-0004YF-Aa; Thu, 14 May 2020 17:36:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Dixi=64=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1jZHme-0004Y9-9p
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 17:36:36 +0000
X-Inumbo-ID: 74ec59c8-9609-11ea-9887-bc764e2007e4
Received: from wout5-smtp.messagingengine.com (unknown [64.147.123.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74ec59c8-9609-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 17:36:35 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 2601A9FD;
 Thu, 14 May 2020 13:36:34 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Thu, 14 May 2020 13:36:34 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
 messagingengine.com; h=cc:content-type:date:from:in-reply-to
 :message-id:mime-version:references:subject:to:x-me-proxy
 :x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=h/L3cY
 +LR9flu4KOtrM7e80JgWA33EMrYSaK3tVD9cc=; b=TSgIeyVblBjdQYva1DBg34
 xmJOfElJ8xWF/X9syYymuqEOPDEBuZz06K2wRj4ETa31nCYB5eIDnDYdtQ5LNQ9y
 f9DQ6csWS6apjM6D6YkaiZZ9LbcfFtL4Lf4SQkjgLVeI7plYB5nLvYPywYSaW27B
 rK3g3NuO7I1jbgUDEKIqQC2lxYLjxcgh+c2zoxteul/8jS0IyrnpphPjDuVMSZm3
 B8aDB5vx/kCjnLCjZRNFhtT8W4GT8X3jNQ/x7jC1p7q1nwPVGdw7/ZGcCroOazBq
 aOFQZgXXqggEBCskjIuQzrl8C7ZL7KY7v06KfHlqNbZ44fcpOSt9VIPAt7Z5w6og
 ==
X-ME-Sender: <xms:oYG9Xj6BCz2O2BcgnSgWfG4UC0zJztYMDairCrVDYUSsGB1nT0hUJg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduhedrleeigddutdejucetufdoteggodetrfdotf
 fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
 uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
 cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
 ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
 hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
 iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
 durdeihedrfeegrdeffeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
 ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
 gtohhm
X-ME-Proxy: <xmx:oYG9Xo7XIYWsnh5k14JI8XfQUDikXOm23LELBHNZQO8i05ZT5EI9PQ>
 <xmx:oYG9XqezWukGFQCSZ8fIIAY2beceMLmTi5Ybki8OYM1hHJazVxjQ9g>
 <xmx:oYG9XkL03Y3466O7_UTnWvMv37NhTwfS2JzDt5jlnG5rk8BPdOj6uw>
 <xmx:oYG9Xpz-tUI1UiI1Hk4Fgj6_Q7RJar6D77egH9ixvF2hi3kMQAiVvQ>
Received: from mail-itl (ip5b412221.dynamic.kabel-deutschland.de [91.65.34.33])
 by mail.messagingengine.com (Postfix) with ESMTPA id 7DDD43060EE4;
 Thu, 14 May 2020 13:36:32 -0400 (EDT)
Date: Thu, 14 May 2020 19:36:28 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [PATCH v5 14/21] libxl: require qemu in dom0 even if stubdomain
 is in use
Message-ID: <20200514173628.GO1178@mail-itl>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-15-jandryuk@gmail.com>
 <24253.29948.624988.194564@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature"; boundary="hkJ/XfKuQcNFcQvU"
Content-Disposition: inline
In-Reply-To: <24253.29948.624988.194564@mariner.uk.xensource.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--hkJ/XfKuQcNFcQvU
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH v5 14/21] libxl: require qemu in dom0 even if stubdomain
 is in use

On Thu, May 14, 2020 at 05:42:36PM +0100, Ian Jackson wrote:
> Jason Andryuk writes ("[PATCH v5 14/21] libxl: require qemu in dom0 even if stubdomain is in use"):
> > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> >
> > Until xenconsoled learns how to handle multiple consoles, this is needed
> > for save/restore support (qemu state is transferred over secondary
> > consoles).
> > Additionally, Linux-based stubdomain waits for all the backends to
> > initialize during boot. Lack of some console backends results in
> > stubdomain startup timeout.
> >
> > This is a temporary patch until xenconsoled will be improved.
> >
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> >  tools/libxl/libxl_dm.c | 12 ++++++++++--
> >  1 file changed, 10 insertions(+), 2 deletions(-)
> >
> > diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
> > index e420c3fc7b..5e5e7a27b3 100644
> > --- a/tools/libxl/libxl_dm.c
> > +++ b/tools/libxl/libxl_dm.c
> > @@ -2484,7 +2484,11 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
> >          }
> >      }
> >
> > -    need_qemu = libxl__need_xenpv_qemu(gc, dm_config);
> > +    /*
> > +     * Until xenconsoled learns how to handle multiple consoles, require qemu
> > +     * in dom0 to serve consoles for a stubdomain - it require at least 3 of them.
> > +     */
> > +    need_qemu = 1 || libxl__need_xenpv_qemu(gc, &sdss->dm_config);
>
> But I don't think this is true for a trad non-linux stubdm ?
> So I think this ought to be conditional.

For qemu-trad it is true too. A stubdomain (mini-os + qemu-trad, or
Linux + qemu-xen) is always started with at least 3 consoles: log,
save, restore. That currently requires qemu in dom0. So, yes,
technically it is a bug in the current libxl for qemu-trad as well. In
practice it works in most cases because something else usually
triggers qemu in dom0 too: a vfb/vkb is added if vnc/sdl/spice is
enabled.
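The rule described above can be reduced to a tiny sketch (the function
name and parameters are hypothetical; libxl computes this inline
rather than through such a helper):

```c
#include <stdbool.h>

/* Illustrative reduction of the rule above: a stubdomain always needs
 * a qemu in dom0 to serve its (at least 3) consoles -- log, save,
 * restore -- regardless of what the PV backends would otherwise
 * require. */
static bool need_dom0_qemu(bool is_stubdom, bool needs_xenpv_qemu)
{
    if (is_stubdom)
        return true;    /* consoles must be served from dom0 */
    return needs_xenpv_qemu;
}
```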

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--hkJ/XfKuQcNFcQvU
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl69gZsACgkQ24/THMrX
1ywCaAf+IYGXp1Q2zR1Y1HNp+uagnpMDI+i1nSEf25JeokogLCacax1GmBYSvYBd
KXX4gCSQjxk4dW47h0LaRvBUffOlWfSFgj5bRhYGedto7Gdl98+q//SamCVarcVQ
bOrDSB95PZGJQOvL2WRjr3HC4YTaNmZZSomSCDU8S/Si9zDDbo6do7SG5nyqQpjE
fMMfQcP5ZTckUIJ0PoTjx6a3lMgM3X60Kwgtt4FUtV5unGnjTFQLYlhdZyDaRXBj
68SE/AX4WbcJMgVA9qwDHToPT4slZYe951PhvWBYLFmhU7T3C9qAfzNysuJqdFP4
G1D2NQ2/NR8L1VR6JEAuj1o+ZQWM2w==
=/BCD
-----END PGP SIGNATURE-----

--hkJ/XfKuQcNFcQvU--


From xen-devel-bounces@lists.xenproject.org Thu May 14 17:39:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 17:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZHp0-0004ia-O9; Thu, 14 May 2020 17:39:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k5kQ=64=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZHoz-0004iV-Nz
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 17:39:01 +0000
X-Inumbo-ID: cc18715a-9609-11ea-a4b9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc18715a-9609-11ea-a4b9-12813bfff9fa;
 Thu, 14 May 2020 17:39:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LAq8M18uQJa/MqnV6zG743UR+FEIh0XqbRkVbtwdhyU=; b=wJgdfYf8dmiuq8ASn/YVWCj9id
 2UcixizJ82wXexg0sBQFqGqJtznCAkis69Hr8idJ9PYcH9aLIMszs11g3yolh2jBrccIX6AonYBQb
 r9I5Bdi1tl0dOzz5gPoJBVnRpcKImJDv4UyM+yDId0yI5TuTe+ZG5JoKsdVqUAhGCfaQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZHoy-0005bX-Sn; Thu, 14 May 2020 17:39:00 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZHoy-0001nY-M3; Thu, 14 May 2020 17:39:00 +0000
Subject: Re: Error during update_runstate_area with KPTI activated
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
Date: Thu, 14 May 2020 18:38:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 14/05/2020 17:18, Bertrand Marquis wrote:
> 
> 
>> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org> wrote:
>>
>>
>>
>> On 14/05/2020 15:28, Bertrand Marquis wrote:
>>> Hi,
>>
>> Hi,
>>
>>> When executing linux on arm64 with KPTI activated (in Dom0 or in a DomU), I have a lot of walk page table errors like this:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>> After implementing a call trace, I found that the problem was coming from the update_runstate_area when linux has KPTI activated.
>>> I have the following call trace:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
>>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
>>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
>>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
>>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
>>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
>>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
>>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
>>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
>>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
>>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
>>> Discussing this subject with Stefano, he pointed me to a discussion started a year ago on this subject here:
>>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
>>> And a patch was submitted:
>>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
>>> I rebased this patch on current master and it is solving the problem I have seen.
>>> It sounds to me like a good solution to introduce a VCPUOP_register_runstate_phys_memory_area to not depend on the area actually being mapped in the guest when a context switch is being done (which is actually the problem happening when a context switch is triggered while a guest is running in EL0).
>>> Is there any reason why this was not merged in the end?
>>
>> I just skimmed through the thread to remind myself of the state. AFAICT, this is blocked on the contributor clarifying the intended interaction and providing a new version.
> 
> What do you mean here by intended interaction? How should the new hypercall be used by the guest OS?

 From what I remember, Jan was seeking clarification on whether the two 
hypercalls (existing and new) can be called together by the same OS (and 
make sense).

There was also the question of the handover between two pieces of 
software. For instance, what if the firmware is using the existing 
interface but the OS the new one? A similar question applies to kexecing 
a different kernel.

This part is mostly documentation, so we can discuss the approach 
and review the implementation.

> 
>>
>> I am still in favor of the new hypercall (and still in my todo list) but I haven't yet found time to revive the series.
>>
>> Would you be willing to take over the series? I would be happy to bring you up to speed and provide review.
> 
> Sure I can take it over.
> 
> I ported it to the master version of Xen and tested it on a board.
> I still need to do a deep review of the code myself, but I understand the problem and the idea behind it.
> 
> Any help to get up to speed would be more than welcome :-)
I would recommend going through the latest version (v3) and the previous 
(v2). I also suggest v2 because I think the split was easier to 
review/understand.

The x86 code is probably what is going to give you the most trouble as 
there are two ABIs to support (compat and non-compat). If you don't have 
an x86 setup, I should be able to test it/help write it.
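
As a quick illustration of why the compat and non-compat ABIs need separate handling, here is a small, self-contained C sketch (not Xen code; the field layout follows the public vcpu_runstate_info interface, and the compat variant is simulated with the 4-byte alignment a 32-bit x86 guest would use — the attribute trick is purely illustrative):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Native (64-bit) layout: uint64_t fields are 8-byte aligned, so 4
 * bytes of padding follow the 32-bit state field. On x86-64, the
 * state_entry_time field lands at offset 8 and the struct is 48 bytes. */
struct vcpu_runstate_info {
    int      state;
    uint64_t state_entry_time;
    uint64_t time[4];
};

/* Simulated compat (32-bit x86 guest) layout: the i386 ABI aligns
 * uint64_t to 4 bytes, so there is no padding after state; the field
 * lands at offset 4 and the struct is 44 bytes. */
struct compat_vcpu_runstate_info {
    int      state;
    uint64_t state_entry_time;
    uint64_t time[4];
} __attribute__((packed, aligned(4)));
```

Same logical fields, different offsets and sizes — which is why the hypervisor cannot simply memcpy one layout into a guest expecting the other, and must translate when updating a compat guest's runstate area.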

Feel free to ask any questions and I will try my best to remember the 
discussion from last year :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 14 17:57:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 17:57:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZI6G-0006S7-BI; Thu, 14 May 2020 17:56:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZI6E-0006S2-W5
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 17:56:51 +0000
X-Inumbo-ID: 466b303a-960c-11ea-a4c3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 466b303a-960c-11ea-a4c3-12813bfff9fa;
 Thu, 14 May 2020 17:56:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iMdm8U8pIkVdcgmhJTTFQfKZxcZM5B1927POgH8i9VQ=; b=SACMjCQQZ+1djXrYxUB0V66fz
 Z8EIOILbs9IPkkta7+cTAOgWxhDFceFEbJbemvavGt+l5BeJFN49O+C+1mUj3nbqPgSHDiDZ1yYwL
 lw0N4xDquo88y6FV8NrxvCbejeFgrP5uW4DTRsTrmOcEG/RabkIuNhhmE0ItlM2Jirx1s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZI69-0005yR-5c; Thu, 14 May 2020 17:56:45 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZI68-0003kX-S3; Thu, 14 May 2020 17:56:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZI68-0005Dr-RA; Thu, 14 May 2020 17:56:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150182-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150182: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=5115b437eef595ce77f05bfc02626e31e263e965
X-Osstest-Versions-That: xen=d155e4aef35cd9c03f6db7030a956f83f33a1e99
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 17:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150182 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150182/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5115b437eef595ce77f05bfc02626e31e263e965
baseline version:
 xen                  d155e4aef35cd9c03f6db7030a956f83f33a1e99

Last test of basis   150175  2020-05-14 12:00:33 Z    0 days
Testing same since   150182  2020-05-14 15:01:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d155e4aef3..5115b437ee  5115b437eef595ce77f05bfc02626e31e263e965 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 14 18:13:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 18:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZILZ-0008Gz-VA; Thu, 14 May 2020 18:12:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EaOj=64=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZILZ-0008Gu-Eo
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 18:12:41 +0000
X-Inumbo-ID: 7ed33c5e-960e-11ea-a4c6-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ed33c5e-960e-11ea-a4c6-12813bfff9fa;
 Thu, 14 May 2020 18:12:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589479959;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=8kiKORUliEJku6HCbrVqvII9eSEw4dm3sqPW/9+DRIE=;
 b=PfvjLbR0cU4B8OWE6m9/YAJVvdt5xnqpDfZsci8BPTWIzZl41/2n7+Of
 IQuiAW1OVjIpndHrcKxltXYo7QrhJmV0gn802wYOWnX9PGJWwg0KAGn/o
 p6ucz8i7JY/lfqPulmt5uqkt4gAzqK6TrJBa921FNmSCZjg2NLWmF29pr o=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: OzDCrIAqu8njRCpyCcPoN5oI9Ycl8X1VUBhSYKt346IPk7Me5S5oUgiPXrLj1/EzuRH+KcMehL
 zTfxopVOWx8of32F+m/jBluLLNAVeOMtNVSXqsF18QVBTFw4gAoy9w0CzLSPzxJR+gynjvB55j
 we2k68LnirZOhGYmdTYEWSz3tKejTBKV9qHPhFOOS3nXEJjBCo5SpdWCXMtb368wJLVThShn5L
 16HQoBgGUX+5cK6exY297BQQEglOPjwQwqO50VESBVTNHLq1QCso2rRZ1PCzQbtj5CEgfNxfLe
 vC4=
X-SBRS: 2.7
X-MesageID: 18256411
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,392,1583211600"; d="scan'208";a="18256411"
Subject: Re: Error during update_runstate_area with KPTI activated
To: Julien Grall <julien@xen.org>, Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
Date: Thu, 14 May 2020 19:12:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>, Stefano
 Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14/05/2020 18:38, Julien Grall wrote:
> Hi,
>
> On 14/05/2020 17:18, Bertrand Marquis wrote:
>>
>>
>>> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org> wrote:
>>>
>>>
>>>
>>> On 14/05/2020 15:28, Bertrand Marquis wrote:
>>>> Hi,
>>>
>>> Hi,
>>>
>>>> When executing linux on arm64 with KPTI activated (in Dom0 or in a
>>>> DomU), I have a lot of walk page table errors like this:
>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>> 0xffffff837ebe0cd0
>>>> After implementing a call trace, I found that the problem was
>>>> coming from the update_runstate_area when linux has KPTI activated.
>>>> I have the following call trace:
>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>> 0xffffff837ebe0cd0
>>>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
>>>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
>>>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
>>>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
>>>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
>>>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
>>>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
>>>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
>>>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
>>>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
>>>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
>>>> Discussing this subject with Stefano, he pointed me to a discussion
>>>> started a year ago on this subject here:
>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
>>>>
>>>> And a patch was submitted:
>>>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
>>>>
>>>> I rebased this patch on current master and it is solving the
>>>> problem I have seen.
>>>> It sounds to me like a good solution to introduce a
>>>> VCPUOP_register_runstate_phys_memory_area to not depend on the area
>>>> actually being mapped in the guest when a context switch is being
>>>> done (which is actually the problem happening when a context switch
>>>> is triggered while a guest is running in EL0).
>>>> Is there any reason why this was not merged in the end?
>>>
>>> I just skimmed through the thread to remind myself of the state.
>>> AFAICT, this is blocked on the contributor clarifying the intended
>>> interaction and providing a new version.
>>
>> What do you mean here by intended interaction? How should the new
>> hypercall be used by the guest OS?
>
> From what I remember, Jan was seeking clarification on whether the two
> hypercalls (existing and new) can be called together by the same OS
> (and make sense).
>
> There was also the question of the handover between two pieces of
> software. For instance, what if the firmware is using the existing
> interface but the OS the new one? A similar question applies to
> kexecing a different kernel.
>
> This part is mostly documentation, so we can discuss the approach
> and review the implementation.
>
>>
>>>
>>> I am still in favor of the new hypercall (and still in my todo list)
>>> but I haven't yet found time to revive the series.
>>>
>>> Would you be willing to take over the series? I would be happy to
>>> bring you up to speed and provide review.
>>
>> Sure I can take it over.
>>
>> I ported it to the master version of Xen and tested it on a board.
>> I still need to do a deep review of the code myself, but I understand
>> the problem and the idea behind it.
>>
>> Any help to get up to speed would be more than welcome :-)
> I would recommend going through the latest version (v3) and the
> previous (v2). I also suggest v2 because I think the split was
> easier to review/understand.
>
> The x86 code is probably what is going to give you the most trouble as
> there are two ABIs to support (compat and non-compat). If you don't
> have an x86 setup, I should be able to test it/help write it.
>
> Feel free to ask any questions and I will try my best to remember the
> discussion from last year :).

At the risk of being shouted down again: a new hypercall isn't strictly
necessary, and there are probably better ways of fixing this.

The underlying ABI problem is that the area is registered by virtual
address.  The only correct way this should have been done is to register
by guest physical address, so Xen's updating of the data doesn't
interact with the guest pagetable settings/restrictions.  x86 suffers
the same kind of problems as ARM, except we silently squash the fallout.

The logic in Xen is horrible, and I would really rather it was deleted
completely than kept around for compatibility.

The runstate area is always fixed kernel memory and doesn't move.  I
believe it is already restricted from crossing a page boundary, and we
can calculate the va=>pa translation when the hypercall is made.
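
As a rough sketch of that suggestion (not Xen code: all names are hypothetical, and the translate callback stands in for a real guest page-table walk), the registration path could enforce the no-page-crossing restriction and snapshot the va=>pa translation once, so later context switches never depend on the guest's current mappings:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define PAGE_SHIFT 12
#define PAGE_SIZE  (1UL << PAGE_SHIFT)
#define PAGE_MASK  (~(PAGE_SIZE - 1))

/* Hypothetical per-vCPU snapshot kept once registration succeeds. */
struct runstate_slot {
    uint64_t gfn;     /* guest frame holding the runstate area */
    uint64_t offset;  /* offset of the area within that frame */
};

/* Hypothetical translator: resolve a guest VA to a guest PA at
 * registration time, while the guest's mappings are guaranteed live. */
typedef bool (*translate_fn)(uint64_t va, uint64_t *pa);

/* Stand-in identity translator for this sketch; a real hypervisor
 * would walk the guest's page tables here. */
static bool ident_translate(uint64_t va, uint64_t *pa)
{
    *pa = va;
    return true;
}

static bool register_runstate(uint64_t va, uint64_t size,
                              translate_fn translate,
                              struct runstate_slot *slot)
{
    uint64_t pa;

    /* Reject areas crossing a page boundary: one frame must cover it. */
    if ((va & ~PAGE_MASK) + size > PAGE_SIZE)
        return false;

    /* Translate once, up front; later updates use gfn+offset only. */
    if (!translate(va, &pa))
        return false;

    slot->gfn = pa >> PAGE_SHIFT;
    slot->offset = pa & ~PAGE_MASK;
    return true;
}
```

With something of this shape, update_runstate_area would map the stored gfn and write at the stored offset, sidestepping the KPTI problem entirely because no guest VA is ever walked at context-switch time.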

Yes, this is technically an ABI change, but nothing is going to break
(AFAICT), and the cleanup win is large enough to make this a *very*
attractive option.

I would prefer to fix it like this (perhaps adding a new hypercall
which explicitly takes a guest physical address) rather than keep any
of this mess around forevermore to cope with legacy guests.

It is definitely an option which should be considered.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 14 18:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 18:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZINy-0008PT-E8; Thu, 14 May 2020 18:15:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=v3kr=64=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZINx-0008PO-2P
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 18:15:09 +0000
X-Inumbo-ID: d3d61c30-960e-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3d61c30-960e-11ea-9887-bc764e2007e4;
 Thu, 14 May 2020 18:15:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PIKMOuxL9SSLD6R0u4KMQWcfjFCF/dLKVS9lJpD+inw=; b=apnyCciaBWASn4dNPUmMBioye
 GZQ/9Vj5+n3fKcujaaNJQSi/4iulmmesqX93sc7IK9BCwjIcWS+/wTK2W5sbX+09Qyl/80GviJD3n
 sSQB3a1qio5CadI/+HTVO64n2PUQ27w8pYhlMuf+jJj3+uCfIrD/0GpmeWfc7I0pX4jl8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZINp-0006Q4-En; Thu, 14 May 2020 18:15:01 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZINp-0004SS-3m; Thu, 14 May 2020 18:15:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZINp-0002Og-39; Thu, 14 May 2020 18:15:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150174-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150174: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=d8f9d57dbd0caf225c47f12e9faea9180e79fe2a
X-Osstest-Versions-That: qemuu=d5c75ec500d96f1d93447f990cd5a4ef5ba27fae
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 14 May 2020 18:15:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150174 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150174/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd       7 xen-boot                 fail REGR. vs. 150151

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150151
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150151
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150151
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150151
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150151
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150151
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                d8f9d57dbd0caf225c47f12e9faea9180e79fe2a
baseline version:
 qemuu                d5c75ec500d96f1d93447f990cd5a4ef5ba27fae

Last test of basis   150151  2020-05-12 19:38:05 Z    1 days
Testing same since   150174  2020-05-14 10:06:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Denis Plotnikov <dplotnikov@virtuozzo.com>
  Markus Armbruster <armbru@redhat.com>
  Max Reitz <mreitz@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d8f9d57dbd0caf225c47f12e9faea9180e79fe2a
Merge: d5c75ec500 fc9aefc8c0
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Wed May 13 15:35:32 2020 +0100

    Merge remote-tracking branch 'remotes/maxreitz/tags/pull-block-2020-05-13' into staging
    
    Block patches:
    - zstd compression for qcow2
    - Fix use-after-free
    
    # gpg: Signature made Wed 13 May 2020 15:14:06 BST
    # gpg:                using RSA key 91BEB60A30DB3E8857D11829F407DB0061D5CF40
    # gpg:                issuer "mreitz@redhat.com"
    # gpg: Good signature from "Max Reitz <mreitz@redhat.com>" [full]
    # Primary key fingerprint: 91BE B60A 30DB 3E88 57D1  1829 F407 DB00 61D5 CF40
    
    * remotes/maxreitz/tags/pull-block-2020-05-13:
      block/block-copy: fix use-after-free of task pointer
      iotests: 287: add qcow2 compression type test
      qcow2: add zstd cluster compression
      qcow2: rework the cluster compression routine
      qcow2: introduce compression type feature
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

commit fc9aefc8c0d3c6392656ea661ce72c1583b70bbd
Author: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
Date:   Thu May 7 21:38:00 2020 +0300

    block/block-copy: fix use-after-free of task pointer
    
    Obviously, we should g_free() the task after the trace point and
    offset update.
    
    Reported-by: Coverity (CID 1428756)
    Fixes: 4ce5dd3e9b5ee0fac18625860eb3727399ee965e
    Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Message-Id: <20200507183800.22626-1-vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>

commit dd488fc1c000700741355426198d240c6f25ccb7
Author: Denis Plotnikov <dplotnikov@virtuozzo.com>
Date:   Thu May 7 11:25:21 2020 +0300

    iotests: 287: add qcow2 compression type test
    
    The test checks that the qcow2 requirements for the compression
    type feature are fulfilled and that the zstd compression type works.
    
    Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Tested-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Message-Id: <20200507082521.29210-5-dplotnikov@virtuozzo.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>

commit d298ac10add95e2b7f8850332f3b755c8a23d925
Author: Denis Plotnikov <dplotnikov@virtuozzo.com>
Date:   Thu May 7 11:25:20 2020 +0300

    qcow2: add zstd cluster compression
    
    zstd significantly reduces cluster compression time.
    It provides better compression performance while maintaining
    the same compression ratio as zlib, which is currently the
    only available compression method.
    
    Performance test results:
    The test compresses and decompresses a qemu qcow2 image with a
    freshly installed rhel-7.6 guest.
    Image cluster size: 64K. Image size on disk: 2.2G
    
    The test was conducted on a brd disk to reduce the influence
    of the disk subsystem on the results.
    Times are given in seconds.
    
    compress cmd:
      time ./qemu-img convert -O qcow2 -c -o compression_type=[zlib|zstd]
                      src.img [zlib|zstd]_compressed.img
    decompress cmd:
      time ./qemu-img convert -O qcow2
                      [zlib|zstd]_compressed.img uncompressed.img
    
               compression               decompression
             zlib       zstd           zlib         zstd
    ------------------------------------------------------------
    real     65.5       16.3 (-75 %)    1.9          1.6 (-16 %)
    user     65.0       15.8            5.3          2.5
    sys       3.3        0.2            2.0          2.0
    
    Both ZLIB and ZSTD gave the same compression ratio (1.57);
    the compressed image size was 1.4G in both cases.
    
    Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
    QAPI part:
    Acked-by: Markus Armbruster <armbru@redhat.com>
    Message-Id: <20200507082521.29210-4-dplotnikov@virtuozzo.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
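
The percentage figures in the timing table above can be sanity-checked with a short Python sketch; the inputs are the `real` (wall-clock) times quoted in the commit message:

```python
# Recompute the relative savings quoted in the zlib-vs-zstd timing table.
def pct_change(old: float, new: float) -> int:
    """Signed relative change of `new` versus `old`, in whole percent."""
    return round((new - old) / old * 100)

# real (wall-clock) times in seconds, zlib vs zstd
print(pct_change(65.5, 16.3))  # compression:   -75
print(pct_change(1.9, 1.6))    # decompression: -16
```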

commit 25dd077d1d0aaef23f8608468cbef9a396583b1b
Author: Denis Plotnikov <dplotnikov@virtuozzo.com>
Date:   Thu May 7 11:25:19 2020 +0300

    qcow2: rework the cluster compression routine
    
    The patch enables processing of the compression type defined for
    the image and chooses an appropriate method for (de)compressing
    image clusters.
    
    Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Reviewed-by: Alberto Garcia <berto@igalia.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    Message-Id: <20200507082521.29210-3-dplotnikov@virtuozzo.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>

commit 572ad9783f3e728a70617ce6bf144c09de3add50
Author: Denis Plotnikov <dplotnikov@virtuozzo.com>
Date:   Thu May 7 11:25:18 2020 +0300

    qcow2: introduce compression type feature
    
    The patch adds preparatory parts for the incompatible compression type
    feature to qcow2, allowing the use of different compression methods for
    (de)compressing image clusters.
    
    The compression type is set at image creation and can later be changed
    only by image conversion; thus the compression type defines the single
    compression algorithm used for the image and, therefore, for all of
    its clusters.
    
    The goal of the feature is to add support for other compression methods
    to qcow2, for example ZSTD, which compresses more efficiently than ZLIB.
    
    The default compression is ZLIB. Images created with ZLIB compression type
    are backward compatible with older qemu versions.
    
    Adding the compression type breaks a number of tests because the
    compression type is now reported on image creation, and the size and
    offsets in the qcow2 header change.
    
    The tests are fixed in the following ways:
        * filter out compression_type for many tests
        * fix header size, feature table size and backing file offset
          affected tests: 031, 036, 061, 080
          header_size +=8: 1 byte compression type
                           7 bytes padding
          feature_table += 48: incompatible feature compression type
          backing_file_offset += 56 (8 + 48 -> header_change + feature_table_change)
        * add "compression type" for test output matching when it isn't filtered
          affected tests: 049, 060, 061, 065, 082, 085, 144, 182, 185, 198, 206,
                          242, 255, 274, 280
    
    Signed-off-by: Denis Plotnikov <dplotnikov@virtuozzo.com>
    Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
    Reviewed-by: Eric Blake <eblake@redhat.com>
    Reviewed-by: Max Reitz <mreitz@redhat.com>
    QAPI part:
    Acked-by: Markus Armbruster <armbru@redhat.com>
    Message-Id: <20200507082521.29210-2-dplotnikov@virtuozzo.com>
    Signed-off-by: Max Reitz <mreitz@redhat.com>
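
The header-offset arithmetic listed in the test fixes above can be checked mechanically. A minimal Python sketch; the deltas are taken from the commit message text, not from parsing a real qcow2 header:

```python
# Deltas introduced by the compression type feature, per the commit message.
header_size_delta = 1 + 7            # 1-byte compression type + 7 bytes padding
feature_table_delta = 48             # new incompatible-feature table entry
backing_file_offset_delta = header_size_delta + feature_table_delta

print(header_size_delta, feature_table_delta, backing_file_offset_delta)  # 8 48 56
```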


From xen-devel-bounces@lists.xenproject.org Thu May 14 19:11:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 19:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZJG1-0005Cu-Vi; Thu, 14 May 2020 19:11:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E/fC=64=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jZJG0-0005Cp-Ll
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 19:11:00 +0000
X-Inumbo-ID: a533391e-9616-11ea-b9cf-bc764e2007e4
Received: from mail-lf1-x144.google.com (unknown [2a00:1450:4864:20::144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a533391e-9616-11ea-b9cf-bc764e2007e4;
 Thu, 14 May 2020 19:11:00 +0000 (UTC)
Received: by mail-lf1-x144.google.com with SMTP id c21so3603665lfb.3
 for <xen-devel@lists.xenproject.org>; Thu, 14 May 2020 12:10:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=dypvJ1+qLI5BUmBlyc5qDGFYdZQzWYY+2X/7jccyaeI=;
 b=bY98bxlB75sG9EVavmm3OxWUt56pnbzmVg2k/I62sVIEFErJbowv5reyzuDkqu5Jxq
 fWeHt8P/L6KPcRX2x+9D2DsxYMw375GH/kVdcXBTJn5vxqFMA4YoWQOrVQZiBKNYsnwy
 A8TMBQJJ/5N1SYQlDvTvvnal+Wrjvg+gaOX8K1VGaAC37tP86Zbt9prQJDX9GEhui6Nv
 +vnjsyNduYAcDaUTE9CCkMCFuhS03LDYdHpS5IMdpgJMVQ7AeAmwUrCaEG2yEiHYcxmr
 URB8BA1uLbXzBq+f4rsUf8r8XF2mMH6fx23CUlEfWn3oZholLoEveCRTbUyTi7T7N0CR
 CkcA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=dypvJ1+qLI5BUmBlyc5qDGFYdZQzWYY+2X/7jccyaeI=;
 b=SYIGEAPZ2wXdMoaSvCXrrqpjRgbKeLZmBBp1q+07PEpCXid59YexM8y/YqZsdsg4aN
 YWkH0Hbtjj+KE15Dd+FEVX/+RW/pvbRsxDYlphXOckfs+T747sGXVBEgBMKv2ekl/8mY
 TwO89zZRC5mfD6LvHUw6EJlDKYoQlAZVc7OkkBIbGpcX3afRQkEb6ePHflsBo6zua1di
 la4KUt9Oxt0tiH2fhyKKuIOob1yzxnT3RO/BUKhI+1OZ83GKG7Ig/3bpkCn4WhwurHCC
 t91p23AiVe8NXeTM/6Q3wpWMKxMsCocs7nxXUcsBKJEXQn7GlqutSRqqLgfFBDMZzkwI
 nGmQ==
X-Gm-Message-State: AOAM532NaJ8YDzCzF4TOWOryWSkgdVN+0bnCGEBRfap0WtsSHH8zpSQb
 /lHK9FxdvOkDTOiYZeAgKBoIua+LkXmlDqZstiQ=
X-Google-Smtp-Source: ABdhPJz+n4gh/+N5xD99HKj1UyCu5cbuNQisG6jqGrxNTOVKvoLHeMcJGf3Z1ddvcd3OVllUkxTleEQEVcbTZcLK7fw=
X-Received: by 2002:ac2:4c3b:: with SMTP id u27mr4235755lfq.212.1589483458831; 
 Thu, 14 May 2020 12:10:58 -0700 (PDT)
MIME-Version: 1.0
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <24253.30741.77867.105081@mariner.uk.xensource.com>
In-Reply-To: <24253.30741.77867.105081@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 14 May 2020 15:10:46 -0400
Message-ID: <CAKf6xptQu7Yb1RGZn+mgp_ikSe2AXm1nun6_0f_oL_r88Hvwpg@mail.gmail.com>
Subject: Re: [PATCH v5 00/21] Add support for qemu-xen running in a
 Linux-based stubdomain
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Jan Beulich <jbeulich@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Eric Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 14, 2020 at 12:55 PM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Jason Andryuk writes ("[PATCH v5 00/21] Add support for qemu-xen running in a Linux-based stubdomain"):
> > In coordination with Marek, I'm making a submission of his patches for Linux
> > stubdomain device-model support.  I made a few of my own additions, but Marek
> > did the heavy lifting.  Thank you, Marek.
>
> Hi.  I finished reading these patches.  Thank you very much.  They
> were nicely structured.  I found them clear and easy to read.  As
> you'll have seen I have requested only a few changes.

I haven't taken a look yet, but thanks for the review.

Marek deserves credit for the structuring and work.

> I am very hopeful that this series will make 4.14.  Codefreeze is
> Friday the 22nd of May.  Please let us know whether you think we'll
> all be able to make that...

Yes, I'm aiming for 4.14.  I plan to re-spin and re-post over the next
couple of days.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 14 19:13:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 14 May 2020 19:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZJIg-0005JU-ED; Thu, 14 May 2020 19:13:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qJsg=64=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jZJIf-0005JP-M4
 for xen-devel@lists.xenproject.org; Thu, 14 May 2020 19:13:45 +0000
X-Inumbo-ID: 0790bc62-9617-11ea-ae69-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0790bc62-9617-11ea-ae69-bc764e2007e4;
 Thu, 14 May 2020 19:13:44 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id y3so65032wrt.1
 for <xen-devel@lists.xenproject.org>; Thu, 14 May 2020 12:13:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=z4XZZiSfWkihRTv0z6DMjCYuLu6A8CT9yvCzKZwkeGI=;
 b=bX1rOC9u1TxHiQdjYwuUkoSg4/AVt8NU2oAZHS3AcWRcDVox81imOO6bdL2oKBZR8C
 kln3tUddjNF0f2zvNGr8108+dR8s6lc2vi4UUBhZDbiZ9v6c4bQAQnvUUjd1yzHQWFgC
 NW/5iokVf0YWp06miHz8/ZmhgqG6ic8UrxfaIg5wCnEE4tiTcDCxTJOHAHdSkyf0w8HC
 68+PGP4i3hxAuiruw7SgfmBCrZbJajmGTZAhTGKTH70JXGQUmbr8D2Q75df86ARsyIMM
 CFXGA1MaCdi/o09da62+ZdhuJZLG4/VCYQj5zlY/PsPzoRJIPNezmtgfBbkiuF/6xePj
 i7cA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=z4XZZiSfWkihRTv0z6DMjCYuLu6A8CT9yvCzKZwkeGI=;
 b=JEb477h4qBzZDWS/nG6Uxrqds+/GpipDbVGjpImgB7l8Qw7jZLW8HPkLaiUlY8bagO
 mLn+rdbQjJXhLbLRLGWGaCDaSns3rnFwWarEyfxG2KTJcUlyA+R7ng1RSWWv+OtI8YHJ
 IU4GO69vD+JoRY1E/IJR+oHwlc/B9QZWyd7q7nyZtWLjUvPC441xcjUw+UAfP6V4rbxD
 bgsn+RblaSQ6HU+BHpgCKCSW1clP6tsECt0danpTxj4EOrUeRjSIoAUNo3MOCxsO4JqQ
 j7qsXQd60fVEGgWzZUEMKLWzktjzKKObiyLaIjb5kd2wDiIoaC902+LFXPZ9kdUDsHJj
 D9Ig==
X-Gm-Message-State: AOAM5309lcxzxmIglWBmwZfbcVYfVI8YU1wMBoov7M+ptkLVCWFBY6c3
 JeGbglVvTuzZ2EL5csS2bp3JvMmwZbjp4l9CbCmwP6E3nIE=
X-Google-Smtp-Source: ABdhPJzb47H4g1vuzB6fsN96p0zIkvOYiiyDjE2xgsSAgY0t+mpPZezGKOnIZCniSUyarSJWtGeOqXdIIFNpl2G9+YM=
X-Received: by 2002:a5d:510f:: with SMTP id s15mr7558281wrt.103.1589483623835; 
 Thu, 14 May 2020 12:13:43 -0700 (PDT)
MIME-Version: 1.0
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
In-Reply-To: <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 14 May 2020 20:13:32 +0100
Message-ID: <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
Subject: Re: Error during update_runstate_area with KPTI activated
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 14/05/2020 18:38, Julien Grall wrote:
> > Hi,
> >
> > On 14/05/2020 17:18, Bertrand Marquis wrote:
> >>
> >>
> >>> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org> wrote:
> >>>
> >>>
> >>>
> >>> On 14/05/2020 15:28, Bertrand Marquis wrote:
> >>>> Hi,
> >>>
> >>> Hi,
> >>>
> >>>> When executing linux on arm64 with KPTI activated (in Dom0 or in a
> >>>> DomU), I have a lot of page-table walk errors like this:
> >>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> >>>> 0xffffff837ebe0cd0
> >>>> After implementing a call trace, I found that the problem was
> >>>> coming from the update_runstate_area when linux has KPTI activated.
> >>>> I have the following call trace:
> >>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> >>>> 0xffffff837ebe0cd0
> >>>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
> >>>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
> >>>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
> >>>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
> >>>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
> >>>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
> >>>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
> >>>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
> >>>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
> >>>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
> >>>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
> >>>> Discussing this subject with Stefano, he pointed me to a discussion
> >>>> started a year ago on this subject here:
> >>>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
> >>>>
> >>>> And a patch was submitted:
> >>>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
> >>>>
> >>>> I rebased this patch on current master and it is solving the
> >>>> problem I have seen.
> >>>> It sounds to me like a good solution to introduce a
> >>>> VCPUOP_register_runstate_phys_memory_area to not depend on the area
> >>>> actually being mapped in the guest when a context switch is being
> >>>> done (which is actually the problem happening when a context switch
> >>>> is triggered while a guest is running in EL0).
> >>>> Is there any reason why this was not merged at the end ?
> >>>
> >>> I just skimmed through the thread to remind myself the state.
> >>> AFAICT, this is blocked on the contributor to clarify the intended
> >>> interaction and provide a new version.
> >>
> >> What do you mean here by intended interaction ? How the new hyper
> >> call should be used by the guest OS ?
> >
> > From what I remember, Jan was seeking clarification on whether the two
> > hypercalls (existing and new) can be called together by the same OS
> > (and make sense).
> >
> > There was also the question of the handover between two pieces of
> > software. For instance, what if the firmware is using the existing
> > interface but the OS the new one? Similar question about Kexecing a
> > different kernel.
> >
> > This part is mostly documentation so we can discuss about the approach
> > and review the implementation.
> >
> >>
> >>>
> >>> I am still in favor of the new hypercall (and still in my todo list)
> >>> but I haven't yet found time to revive the series.
> >>>
> >>> Would you be willing to take over the series? I would be happy to
> >>> bring you up to speed and provide review.
> >>
> >> Sure I can take it over.
> >>
> >> I ported it to master version of xen and I tested it on a board.
> >> I still need to do a deep review of the code myself but I have an
> >> understanding of the problem and what is the idea.
> >>
> >> Any help to get up to speed would be more than welcome :-)
> > I would recommend to go through the latest version (v3) and the
> > previous (v2). I am also suggesting v2 because I think the split was
> > easier to review/understand.
> >
> > The x86 code is probably what is going to give you the most trouble as
> > there are two ABIs to support (compat and non-compat). If you don't
> > have an x86 setup, I should be able to test it/help write it.
> >
> > Feel free to ask any questions and I will try my best to remember the
> > discussion from last year :).
>
> At risk of being shouted down again, a new hypercall isn't necessarily
> necessary, and there are probably better ways of fixing it.
>
> The underlying ABI problem is that the area is registered by virtual
> address.  The only correct way this should have been done is to register
> by guest physical address, so Xen's updating of the data doesn't
> interact with the guest pagetable settings/restrictions.  x86 suffers
> the same kind of problems as ARM, except we silently squash the fallout.
>
> The logic in Xen is horrible, and I would really rather it was deleted
> completely, rather than to be kept for compatibility.
>
> The runstate area is always fixed kernel memory and doesn't move.  I
> believe it is already restricted from crossing a page boundary, and we
> can calculate the va=>pa translation when the hypercall is made.
>
> Yes - this is technically an ABI change, but nothing is going to break
> (AFAICT) and the cleanup win is large enough to make this a *very*
> attractive option.

I suggested this approach two years ago [1] but you were the one
saying that buffer could cross page-boundary on older Linux [2]:

"I'd love to do this, but we cant.  Older Linux used to have a virtual
buffer spanning a page boundary.  Changing the behaviour under that will
cause older setups to explode."

So can you explain your change of heart here?

>
> I would prefer to fix it like this, (perhaps adding a new hypercall
> which explicitly takes a guest physical address), than to keep any of
> this mess around forever more to cope with legacy guests.

What does "legacy guests" mean here? Is it PV 32-bit only, or does it also include some HVM?

Cheers,

[1] <3a77a293-1a29-42ed-8fc0-a74bda213b92@arm.com>
[2] <dc80422f-80bb-bd37-ed41-bb6559f4d7d8@citrix.com>
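
As a side note, the page-boundary restriction debated above (a runstate area registered by physical address must not span two pages) can be sketched as a minimal check. This is illustrative only, assuming 4 KiB pages; `PAGE_SIZE` and the helper are not Xen code:

```python
PAGE_SIZE = 4096  # assumed 4 KiB pages; illustrative, not Xen's definition

def crosses_page_boundary(gpa: int, size: int) -> bool:
    """True if the buffer [gpa, gpa + size) spans more than one page."""
    return (gpa % PAGE_SIZE) + size > PAGE_SIZE

print(crosses_page_boundary(0x1000, 64))   # False: fits within one page
print(crosses_page_boundary(0x1FC0, 128))  # True: 0x1FC0 + 128 crosses 0x2000
```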


From xen-devel-bounces@lists.xenproject.org Fri May 15 00:03:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 00:03:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZNoF-0005lJ-OM; Fri, 15 May 2020 00:02:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZNoE-0005lE-PS
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 00:02:38 +0000
X-Inumbo-ID: 5f78c5d2-963f-11ea-a4f7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f78c5d2-963f-11ea-a4f7-12813bfff9fa;
 Fri, 15 May 2020 00:02:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7BDr4phLh7JiGxUxLpLdd9PMrzNSUCRRlYWszapbPIY=; b=YJOI9fhqymDhKP0sFgZDGfWS3
 0dp1C4f7PwMERq0rOf8RpnBS2zaSrZhgJjc9Cy8IC/Wk2czuuHa5BA129xMNu8mjexCXLuac3WDjh
 dyz3Zpcx7XIHCj4RA1QG4mcdjZR552FsVYjTq1glqAQsqHOOw1rKT+SHKiWDJZcktW5L4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZNo7-0005jm-Gp; Fri, 15 May 2020 00:02:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZNo7-0000DU-75; Fri, 15 May 2020 00:02:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZNo7-0000Uz-6R; Fri, 15 May 2020 00:02:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150177-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 150177: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.13-testing:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=6278553325a9f76d37811923221b21db3882e017
X-Osstest-Versions-That: xen=9649b83b2ab4707de79da42307f8757e317bf217
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 00:02:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150177 xen-4.13-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150177/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150073
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass

version targeted for testing:
 xen                  6278553325a9f76d37811923221b21db3882e017
baseline version:
 xen                  9649b83b2ab4707de79da42307f8757e317bf217

Last test of basis   150073  2020-05-07 13:06:17 Z    7 days
Testing same since   150177  2020-05-14 12:36:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9649b83b2a..6278553325  6278553325a9f76d37811923221b21db3882e017 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Fri May 15 00:57:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 00:57:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZOeo-0001kN-O2; Fri, 15 May 2020 00:56:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZOen-0001kI-Mb
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 00:56:57 +0000
X-Inumbo-ID: f5f19d20-9646-11ea-a504-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5f19d20-9646-11ea-a504-12813bfff9fa;
 Fri, 15 May 2020 00:56:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dWCkKoMV72pQaoHg4eoLv8zshDjaE2KQ8xF76BEnwcg=; b=UqvNBiFoDqtHgXKpr+80Tusf7
 kDB5/56lkm5vro0DdfLB/MXHBhXphUcd0MTAoKR8WUodAhFoalSt4swNn6Gmrh+Hor62mJ/LSEH0F
 H+U1iD09El+OEenwRr9R9Cd+asAe8tPnSK730z+xd5f1/cpU/kpFulxwQT7pSD03D4q6M=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZOeg-0006oq-D5; Fri, 15 May 2020 00:56:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZOef-0003Yq-Qk; Fri, 15 May 2020 00:56:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZOef-0003ci-Q7; Fri, 15 May 2020 00:56:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150178-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150178: all pass - PUSHED
X-Osstest-Versions-This: ovmf=f2cdb268ef04eeec51948b5d81eeca5cab5ed9af
X-Osstest-Versions-That: ovmf=ceacd9e992cd12f3c07ae1a28a75a6b8750718aa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 00:56:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150178 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150178/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f2cdb268ef04eeec51948b5d81eeca5cab5ed9af
baseline version:
 ovmf                 ceacd9e992cd12f3c07ae1a28a75a6b8750718aa

Last test of basis   150160  2020-05-13 12:09:21 Z    1 days
Testing same since   150178  2020-05-14 12:39:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chasel Chiu <chasel.chiu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ceacd9e992..f2cdb268ef  f2cdb268ef04eeec51948b5d81eeca5cab5ed9af -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 15 02:14:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 02:14:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZPqw-0001dk-Dv; Fri, 15 May 2020 02:13:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZPqv-0001df-HM
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 02:13:33 +0000
X-Inumbo-ID: ac636f20-9651-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac636f20-9651-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 02:13:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xlYc48gIdU+poCwWfKHHfgZV6lHQIJ3NFdY/T1dRBrg=; b=t0bUZWDqNnQiN26/gIsiYYlLE
 kGbDTB1vfSKytay4b/DDONvHYHW0iHtEteq+LD1FwJm5QAUOS5KrWYP4iZVWpvDcmffpTvhDL0jH7
 H5VmHaVOw03WSdD632Yc+HW3UTZaayaYEC/RBbT3sCxm87+u1Vyh5467itAkXPQo68e54=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZPqt-0001jo-G3; Fri, 15 May 2020 02:13:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZPqt-0008Tm-5x; Fri, 15 May 2020 02:13:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZPqt-00056B-5B; Fri, 15 May 2020 02:13:31 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150176-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 150176: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: xen=09b61126b4d1e9d372fd2e24b702be10a358da9d
X-Osstest-Versions-That: xen=c26841f0aad0af887a6b3658d71959c4946e480d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 02:13:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150176 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150176/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qcow2    17 guest-localmigrate/x10       fail  like 150072
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 xen                  09b61126b4d1e9d372fd2e24b702be10a358da9d
baseline version:
 xen                  c26841f0aad0af887a6b3658d71959c4946e480d

Last test of basis   150072  2020-05-07 13:06:22 Z    7 days
Testing same since   150176  2020-05-14 12:36:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   c26841f0aa..09b61126b4  09b61126b4d1e9d372fd2e24b702be10a358da9d -> stable-4.12


From xen-devel-bounces@lists.xenproject.org Fri May 15 04:39:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 04:39:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZS80-0005dk-Dg; Fri, 15 May 2020 04:39:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wPnU=65=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jZS7y-0005dc-OG
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 04:39:19 +0000
X-Inumbo-ID: 06587af3-9666-11ea-a51d-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 06587af3-9666-11ea-a51d-12813bfff9fa;
 Fri, 15 May 2020 04:39:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1589517552;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=OKJR3XWl1FD5yBYTcoQ2dBjNcY31o5w0McSpi+jkOug=;
 b=ND9nhKXJFC38bpyHUK9S8P0I90pXZtkZotYskW/ckiDyVb16CyBsWvyqGhdmu/kHxFh6B8
 UNXBbuuOrCw0Qqu7hA35bHtZLMTwTdNp/wD1/kF5FXy2TkLpuNORiCX5WKY1xvTPrTd4Oi
 dEaoBUqpTQ14O14JXWmXFhDyCyOaU+I=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-107-4tGOUR7dN4mlE7aK9dqnnQ-1; Fri, 15 May 2020 00:39:10 -0400
X-MC-Unique: 4tGOUR7dN4mlE7aK9dqnnQ-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id C3776460;
 Fri, 15 May 2020 04:39:06 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-113-6.ams2.redhat.com [10.36.113.6])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 7AD5360BE2;
 Fri, 15 May 2020 04:38:59 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 082E611358BC; Fri, 15 May 2020 06:38:58 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH v3 2/3] various: Remove unnecessary OBJECT() cast
References: <20200512070020.22782-1-f4bug@amsat.org>
 <20200512070020.22782-3-f4bug@amsat.org>
Date: Fri, 15 May 2020 06:38:57 +0200
In-Reply-To: <20200512070020.22782-3-f4bug@amsat.org> ("Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Tue, 12 May 2020 09:00:19
 +0200")
Message-ID: <87r1vlstge.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 Richard Henderson <richard.henderson@linaro.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>, qemu-ppc@nongnu.org,
 John Snow <jsnow@redhat.com>, David Gibson <david@gibson.dropbear.id.au>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, Corey Minyard <cminyard@mvista.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daudé <f4bug@amsat.org> writes:

> The OBJECT() macro is defined as:
>
>   #define OBJECT(obj) ((Object *)(obj))
>
> Remove the unnecessary OBJECT() casts when we already know the
> pointer is of Object type.
>
> Patch created mechanically using spatch with this script:
>
>   @@
>   typedef Object;
>   Object *o;
>   @@
>   -   OBJECT(o)
>   +   o
>
> Acked-by: Cornelia Huck <cohuck@redhat.com>
> Acked-by: Corey Minyard <cminyard@mvista.com>
> Acked-by: John Snow <jsnow@redhat.com>
> Reviewed-by: Richard Henderson <richard.henderson@linaro.org>
> Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

Reviewed-by: Markus Armbruster <armbru@redhat.com>



From xen-devel-bounces@lists.xenproject.org Fri May 15 05:33:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 05:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZSy5-0002ZM-PG; Fri, 15 May 2020 05:33:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZSy4-0002ZH-M8
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 05:33:08 +0000
X-Inumbo-ID: 8e6dfba4-966d-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e6dfba4-966d-11ea-9887-bc764e2007e4;
 Fri, 15 May 2020 05:33:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 287D2AEED;
 Fri, 15 May 2020 05:33:09 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
 <db277779-5b1e-a2aa-3948-9e6dd8e8bef0@suse.com>
 <23938228-e947-fe36-8b19-0e89886db9ac@suse.com>
Message-ID: <28dd8109-1815-70cd-834c-53330d5c824d@suse.com>
Date: Fri, 15 May 2020 07:33:04 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <23938228-e947-fe36-8b19-0e89886db9ac@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 11:50, Jürgen Groß wrote:
> On 14.05.20 09:59, Jan Beulich wrote:
>> On 08.05.2020 17:34, Juergen Gross wrote:
>>> +int hypfs_read_dir(const struct hypfs_entry *entry,
>>> +                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>> +{
>>> +    const struct hypfs_entry_dir *d;
>>> +    const struct hypfs_entry *e;
>>> +    unsigned int size = entry->size;
>>> +
>>> +    d = container_of(entry, const struct hypfs_entry_dir, e);
>>> +
>>> +    list_for_each_entry ( e, &d->dirlist, list )
>>
>> This function, in particular because of being non-static, makes
>> me wonder how, with add_entry() taking a lock, it can be safe
>> without any locking. Initially I thought the justification might
>> be because all adding of entries is an init-time-only thing, but
>> various involved functions aren't marked __init (and it is at
>> least not implausible that down the road we might see new
>> entries getting added during certain hotplug operations).
>>
>> I do realize that do_hypfs_op() takes the necessary read lock,
>> but then you're still building on the assumption that the
>> function is reachable through only that code path, despite
>> being non-static. An ASSERT() here would be the minimum I guess,
>> but with read locks now being recursive I don't see why you
>> couldn't read-lock here again.
> 
> Right, will add the read-lock.
> 
>>
>> The same goes for other non-static functions, albeit things may
>> become more interesting for functions living on the
>> XEN_HYPFS_OP_write_contents path (because write locks aren't
> 
> Adding an ASSERT() in this regard should be rather easy.

As the type specific read- and write-functions should only be called
through the generic read/write functions I think it is better to have
a percpu variable holding the current locking state and ASSERT() that
to match. This will be cheaper than nesting locks and I don't have to
worry about either write_lock nesting or making _is_write_locked_by_me()
an official interface.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 15 05:59:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 05:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZTMh-0004Vc-QU; Fri, 15 May 2020 05:58:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wPnU=65=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jZTMh-0004VX-0D
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 05:58:35 +0000
X-Inumbo-ID: 1ba1104e-9671-11ea-a523-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1ba1104e-9671-11ea-a523-12813bfff9fa;
 Fri, 15 May 2020 05:58:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1589522312;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=pWfAQ0tUqEVcxnRaG3DXAf86VAbLFMa86O/kZ9GP1v0=;
 b=IaAlzbR9DUN4a0fDNNHb0TFtWsNNssS71Rw7QfcbaA40fqhzMKkXU3AGrd7WbovpPtTkhI
 ifq4WTuAm2TrdM4A++kBQedDkb+pYO0DYdP7ai+rvNg4ePp0kv7KgNPvShb+tdyCTwg7/7
 cmQd41J9Jfonv2iNTn/o92bNNrJdaWA=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-166-PR4YSkK4P7mzfPK4-G2OSA-1; Fri, 15 May 2020 01:58:28 -0400
X-MC-Unique: PR4YSkK4P7mzfPK4-G2OSA-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 32191107ACF3;
 Fri, 15 May 2020 05:58:25 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-113-6.ams2.redhat.com [10.36.113.6])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 96F0A5D9D7;
 Fri, 15 May 2020 05:58:18 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 44AA411358BC; Fri, 15 May 2020 07:58:17 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
Subject: Re: [PATCH v3 0/3] various: Remove unnecessary casts
References: <20200512070020.22782-1-f4bug@amsat.org>
Date: Fri, 15 May 2020 07:58:17 +0200
In-Reply-To: <20200512070020.22782-1-f4bug@amsat.org> ("Philippe
 =?utf-8?Q?Mathieu-Daud=C3=A9=22's?= message of "Tue, 12 May 2020 09:00:17
 +0200")
Message-ID: <871rnlsps6.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>, John Snow <jsnow@redhat.com>,
 David Gibson <david@gibson.dropbear.id.au>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Philippe Mathieu-Daudé <f4bug@amsat.org> writes:

> Remove unnecessary casts using coccinelle scripts.
>
> The CPU()/OBJECT() patches don't introduce logical change,
> The DEVICE() one removes various OBJECT_CHECK() calls.

Queued, thanks!

Managing expectations: I'm not a QOM maintainer, I don't want to become
one, and I don't normally queue QOM patches :)



From xen-devel-bounces@lists.xenproject.org Fri May 15 07:01:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 07:01:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZULN-00028L-NN; Fri, 15 May 2020 07:01:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZULM-00028G-Jz
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 07:01:16 +0000
X-Inumbo-ID: ddb2ffa0-9679-11ea-a529-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ddb2ffa0-9679-11ea-a529-12813bfff9fa;
 Fri, 15 May 2020 07:01:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 33FC0ACFE;
 Fri, 15 May 2020 07:01:16 +0000 (UTC)
Subject: Re: [PATCH 1/1] xen/manage: enable C_A_D to force reboot
To: Dongli Zhang <dongli.zhang@oracle.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20200513233410.18120-1-dongli.zhang@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e604a96d-087e-573b-c3bf-fc53005f8994@suse.com>
Date: Fri, 15 May 2020 09:01:12 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200513233410.18120-1-dongli.zhang@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: rose.wang@oracle.com, boris.ostrovsky@oracle.com, sstabellini@kernel.org,
 joe.jin@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.20 01:34, Dongli Zhang wrote:
> systemd may be configured to mask ctrl-alt-del via "systemctl mask
> ctrl-alt-del.target". As a result, the PV reboot does not work, as the
> signal is ignored.
> 
> This patch always enables C_A_D before calling ctrl_alt_del() in order
> to force the reboot.

Hmm, I'm not sure this is a good idea.

Suppose a guest admin is doing a critical update and wants to avoid a
sudden reboot in between. Masking ctrl-alt-del makes that possible;
with your patch it no longer is.

In case a reboot is really mandatory it would still be possible to just
kill the guest.

I'm not completely opposed to the patch, but I think this is a change
which should not be made lightly.


Juergen

> 
> Reported-by: Rose Wang <rose.wang@oracle.com>
> Cc: Joe Jin <joe.jin@oracle.com>
> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
> ---
>   drivers/xen/manage.c | 7 +++++++
>   1 file changed, 7 insertions(+)
> 
> diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
> index cd046684e0d1..3190d0ecb52e 100644
> --- a/drivers/xen/manage.c
> +++ b/drivers/xen/manage.c
> @@ -204,6 +204,13 @@ static void do_poweroff(void)
>   static void do_reboot(void)
>   {
>   	shutting_down = SHUTDOWN_POWEROFF; /* ? */
> +	/*
> +	 * systemd may be configured to mask ctrl-alt-del via
> +	 * "systemctl mask ctrl-alt-del.target". As a result, the PV reboot
> +	 * would not work. Enabling C_A_D forces the reboot.
> +	 */
> +	C_A_D = 1;
> +
>   	ctrl_alt_del();
>   }
>   
> 
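For context, the kernel-side behaviour the patch relies on can be sketched as follows. This is a simplified model of the ctrl_alt_del() logic in kernel/reboot.c, with hypothetical helper names, not the actual kernel source:

```c
#include <stddef.h>

/* Simplified model of the kernel's ctrl-alt-del handling: with C_A_D == 1
 * the kernel reboots directly; with C_A_D == 0 it only signals init (PID 1),
 * which systemd can be configured to ignore via
 * "systemctl mask ctrl-alt-del.target". Names are illustrative. */

static int C_A_D;                /* models sysctl kernel.ctrl-alt-del */
static int init_ignores_cad = 1; /* models the masked ctrl-alt-del.target */
static int rebooted;

static void ctrl_alt_del(void)
{
    if (C_A_D)
        rebooted = 1;            /* immediate kernel restart path */
    else if (!init_ignores_cad)
        rebooted = 1;            /* init handled the signal and rebooted */
    /* otherwise the reboot request is silently dropped */
}

/* What the patched do_reboot() does: force C_A_D before the call,
 * so the request cannot be swallowed by a masked init target. */
static void xen_do_reboot(void)
{
    C_A_D = 1;
    ctrl_alt_del();
}
```

This also makes Juergen's objection concrete: forcing C_A_D bypasses whatever policy the guest admin expressed by masking the target.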



From xen-devel-bounces@lists.xenproject.org Fri May 15 07:32:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 07:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZUp4-0004nN-6t; Fri, 15 May 2020 07:31:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OuqU=65=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jZUp2-0004nG-Tz
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 07:31:57 +0000
X-Inumbo-ID: 275c6c50-967e-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 275c6c50-967e-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 07:31:56 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id l11so2346518wru.0
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 00:31:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:subject:date:message-id:mime-version
 :content-transfer-encoding:content-language:thread-index;
 bh=/ocs+7WMPCtyzKVNdpdsTSb8gG4Snmg5kN30V0JBJbQ=;
 b=V6qEgp90ePw1B50KWWo6COzF6yzRerkU92PgWtNWF3kg/gJfhxbEBJnYyoUfrjB0AQ
 37aHM7pPaOuI1LTQdFZ+UhI9Fej8+c7GtMcgtDBKDffOe0XfvvBxPVLyuriMvm8xwvM6
 K/dgisBU8EWDYabUYnxPbIS6Ol9xsGGOe27YnkrP/T16pC1dQvDkOC0I89oLBSZnPnbm
 ylnOvm8YSavEF8zKDE3wjLN1iwEvUdScf4kZlM5IQxzhUE+rVkbINtaVt5xu9gWrBAl0
 jbbNOkVWv1Ztv+WfXTNKi+8DPkF2+NlxI5vOv6ZoNxRAKkgUBa3TOPaQoGy1Wmvbz7hZ
 iPUQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index;
 bh=/ocs+7WMPCtyzKVNdpdsTSb8gG4Snmg5kN30V0JBJbQ=;
 b=Hc4ZTaytj2XlQI6u32ZHxu3puwY3HzrBpImmquHHSG0xGl4U1H1df9W0jatumchtab
 9bsFWE5ZtV5JU5k5nVlH6tXT7AFNWBVPeHWQl/cTU+TolcKNyOpJsJ2SRRL8YpaMJG4X
 JQa8XJ9K5UudvLdHZ3bolTvtR1T9pZj0iXyAnpCMDJRIyDBUbpzE2lWOOR3yxtHSqlh7
 2UWxnME2F/gVIkI+AzmO2KWTD672ykyyk/g1gtIL4v4LyVubzDVkXFVZn8iaeIZDsFaI
 8OpQ6bGB50luSzx2kt/TKrFfsmyTOsqjmVQFp9w16/kZ3s/UM236Z/W4hlKFNN+ClO9b
 xgqQ==
X-Gm-Message-State: AOAM531hsvCoeSlMxBDJSYTr9Ell/vyPTijonN7LdW0LsRDg9cG41DhT
 bkCb400mJEe9mMz5UIUZ735tq3eIz+E=
X-Google-Smtp-Source: ABdhPJyVjx/izG32NLb+bmufgR4UDTfLXLTml4kWosa9iU6s15Uhqm94PyoGScTGS1tlFMMp04VjVA==
X-Received: by 2002:adf:e703:: with SMTP id c3mr2800851wrm.252.1589527915294; 
 Fri, 15 May 2020 00:31:55 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id x17sm2177434wrp.71.2020.05.15.00.31.54
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 15 May 2020 00:31:54 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: <xen-devel@lists.xenproject.org>
Subject: Cut-off date for Xen 4.14 is May 22nd
Date: Fri, 15 May 2020 08:31:53 +0100
Message-ID: <002701d62a8a$e86c8d90$b945a8b0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AdYqirHDK/Nw4wzWS4qg4/nhX6yO9w==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

*** ONE WEEK TO GO ***

Hi all,

The cut-off date for Xen 4.14 is Friday May 22nd. If you want your features to be
included in the release, please make sure they are committed by that date.

  Paul Durrant



From xen-devel-bounces@lists.xenproject.org Fri May 15 07:39:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 07:39:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZUwT-00053f-19; Fri, 15 May 2020 07:39:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lh6L=65=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jZUwR-00053a-4W
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 07:39:35 +0000
X-Inumbo-ID: 382d8b6c-967f-11ea-a52c-12813bfff9fa
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.53]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 382d8b6c-967f-11ea-a52c-12813bfff9fa;
 Fri, 15 May 2020 07:39:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pakA4T75Axh8hjqouFdaJiSQLAm3fu+OKSvdPPLKor0=;
 b=kI+R+D5xLG6rg1gYSHZyeuTA+NrYHOTXVmdNx3D8snNoMSfitHjjDib9ZwL5/ZhIk2I0H4D16F/PUN6ArZgz9OiqfG6uoWHH0geDMY2Vgr+fvkM5IHEypC2yGwVB8Xim8Ji2IJPO+DoN7kxt5XFNCRrbAms5P4Z691zbzbcgyCQ=
Received: from AM5PR1001CA0017.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:206:2::30)
 by DB6PR08MB2837.eurprd08.prod.outlook.com (2603:10a6:6:19::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.27; Fri, 15 May
 2020 07:39:25 +0000
Received: from AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:2:cafe::53) by AM5PR1001CA0017.outlook.office365.com
 (2603:10a6:206:2::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20 via Frontend
 Transport; Fri, 15 May 2020 07:39:24 +0000
Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT012.mail.protection.outlook.com (10.152.16.161) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3000.19 via Frontend Transport; Fri, 15 May 2020 07:39:24 +0000
Received: ("Tessian outbound ff098c684b24:v54");
 Fri, 15 May 2020 07:39:24 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 67ce12c4f3bc0b86
X-CR-MTA-TID: 64aa7808
Received: from 19d34f196933.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1BB05BA9-3665-4B89-B615-EDB8FB28FB17.1; 
 Fri, 15 May 2020 07:39:18 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 19d34f196933.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 15 May 2020 07:39:18 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jiVId1Wq9ftD0Q4l6epFMW9HSUYSVKSnIiCHam2dV4/0+wURACZ35cdmDYeGx1yFyuHKv+CHbW/8UGsASIVr5WA7tl4NH9MTY7HbwTgfTp0eBXbheF9oBjcyYtHzYk8zJqS38hInFTkNHyqKNkQ/bCsTbSLlqcniFsczHEMYOAxpulzeszHflg8VR3GyRylPWM2uW8A+0wrY5eVWDpBeqbcG8StHUsolOaOvndbiLhgNaKG7WWfMZYrq2zlpFLDLbLPVVKdC19TSABMGnSwj2OHmzk12qmiTYQLv/NQzb6ccrkGbRAUQvgk+GQFuonyOO7Udt9w2/BisAbhmc3awMA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pakA4T75Axh8hjqouFdaJiSQLAm3fu+OKSvdPPLKor0=;
 b=Wr4hbg3Nj6HcL/nuHQdTJtng8KV7Q4pjPpfNHIZ7VR7WGUlwiv1F8hZeVAyy/npA2J60CaV+HYwTWqfy9bebDD6m3NVyyamMEnJW3eD+QWHVpbcrQRe2rS6CaJ2nVu2rM6O5iKymZHQBdnC9SxfVPjXjUK9yzR42W/YAPI57fuw0GlyYH6vJS7GcHbu6Oq298ztpMsX7osYr2/PqxH5qreiWIV3SCyDSMb5NRwBtzXnwsFbDEYYdpycd31mUkFE0//T/dx61NtuBoJBr0fiLUSPWMX+6JjHviL53Yvw1Hy8wEGwAMAod3e/6llaC0200mS0OJwz2DCFrIkJWPwTZ6A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pakA4T75Axh8hjqouFdaJiSQLAm3fu+OKSvdPPLKor0=;
 b=kI+R+D5xLG6rg1gYSHZyeuTA+NrYHOTXVmdNx3D8snNoMSfitHjjDib9ZwL5/ZhIk2I0H4D16F/PUN6ArZgz9OiqfG6uoWHH0geDMY2Vgr+fvkM5IHEypC2yGwVB8Xim8Ji2IJPO+DoN7kxt5XFNCRrbAms5P4Z691zbzbcgyCQ=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB2970.eurprd08.prod.outlook.com (2603:10a6:5:17::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20; Fri, 15 May
 2020 07:39:16 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3000.022; Fri, 15 May 2020
 07:39:16 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien.grall.oss@gmail.com>
Subject: Re: Error during update_runstate_area with KPTI activated
Thread-Topic: Error during update_runstate_area with KPTI activated
Thread-Index: AQHWKfvm5I6PbMAmoUakTwbdgqMBQqinvLCAgAAF7oCAABaDgIAACWGAgAARCgCAANBZgA==
Date: Fri, 15 May 2020 07:39:16 +0000
Message-ID: <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
In-Reply-To: <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 62716e4c-0fc2-45ec-4ade-08d7f8a316f2
x-ms-traffictypediagnostic: DB7PR08MB2970:|DB6PR08MB2837:
X-Microsoft-Antispam-PRVS: <DB6PR08MB28374B786A986E45956F80B29DBD0@DB6PR08MB2837.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04041A2886
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: CqBUInAG6bTg+U1pFZ99w0FQdeQ2Rjhrbxx4Pkyo5fnXgcDcC8T+w7nv9UqEknL6cAy6RR9Sp1I/fWehN06LT98Thxbx5IQKfTqn9W93vbqvyjFhYFQaSDKVDPYbPAlTYZkq0ZPHZ1XI1g9rsl8QWg2iMvoMk5If1YtpOz269dpV5E3p0vivi6ZBYt3Pzs1a1maeW5+I5WH2K7ECJel8DFeXY+CeLYgqp3jV6FJw39oWAWziBo3bnErTbobVdQ4BgxXm6eLcQcQfGOFBNCU3RbaRQMvrWE1llo+fP54tcdgMqscMPh8crH4I2E/c6Nf8vcdRVDjkWw3TrvbaGMOdttkOZIXRKGu3GYDP8FWIL2LGxI2ZSyeeBIBaGqOGcxf6ITlR76bFV4nog4KL6c5MsU2U1q3qY7m3V36gBZNkM+DT6eGzApBGGrmlCcu0M8X4kRHnnDMaxb+XSenR143kOk8MN9/LGbRNUl4TFdyR+PhHmkUYrCFkIikZevV+o0shcdzfN2dqHOZvYvXZSr2mrg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(396003)(39860400002)(136003)(346002)(366004)(21615005)(66946007)(26005)(76116006)(6506007)(8936002)(8676002)(4326008)(316002)(186003)(54906003)(33656002)(86362001)(6486002)(2616005)(53546011)(64756008)(2906002)(66476007)(36756003)(6512007)(6916009)(478600001)(91956017)(5660300002)(66556008)(66446008)(966005)(166002)(15650500001)(71200400001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: msz+GeqhIJ9t+J9DV1L9At3e+my8fLZSsMC5rHqoiSHSZgP3Uu3Yj/HdsXzhmqzOj/7ZDjktzBrD3gI5/PVfq7bFTU7frVZC4NUMAO2d11qpEuH1RLg3S+NA/pfJim0jitOO/PQPFdt1WLeA9fnpmoKtIEZDzSm0dxP1ort33dEOD2lt3Mdvk/Uexy7hf/Chf48z1orcPEFrDQQUAWCCVV/tCYmLXVvTR+k94gV9T4LxDeodRILFO2fW/HL4nC8DETpWljcGeWXBXn9QfNTN5ftoz/4Knvrs2b/WCxMJa0e1rxCSblNBQRoVHDUyfV0xFo9MF60rm4LjZ1T9K9tPfQ9lvjA1dCjAcfKCniQA7eDnvPbL+0TKt5UDG152E22YQyZx+ITbCCpbWwjOtvkC2v8Q24ANKcxu7d12lbDkEIYlRbYRSWrMTpOnDzciKDlV3v5WKY1c6N+3KGTpstTKIxh4xN7ctIgiYbmEp/yUR5ZN6cVMbctZgciXP24rByOY
x-ms-exchange-transport-forked: True
Content-Type: multipart/alternative;
 boundary="_000_478C4829CCAF495B860E6BA3D86AA47Darmcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB2970
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(346002)(136003)(39860400002)(376002)(46966005)(53546011)(186003)(47076004)(36756003)(478600001)(356005)(6486002)(21615005)(336012)(70206006)(26005)(6512007)(966005)(4326008)(2616005)(6862004)(107886003)(45080400002)(6506007)(33656002)(70586007)(30864003)(86362001)(8676002)(5660300002)(82310400002)(166002)(8936002)(81166007)(2906002)(36906005)(82740400003)(316002)(15650500001)(54906003);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: fd4d273e-d2db-4e55-4aed-08d7f8a3124f
X-Forefront-PRVS: 04041A2886
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 9yqDkwFz0/mJIk3WHoUiNl+wAGuz7vYU/NnyxH8Xhj01jzi9KB3cJHivkzfU3UqtLQOpfXQytRLIysmEjkXM/cfh+hclqYWHDKN8DS5H8OaK5YCPCaS4R9wrHg0Rp9vGSfG6EjgHXT/LJKfdXx2cAEZRNTeqy1R9cJXIspYQj7L90ltkxbRMUBINgRBVucLTYgjLJPWxTQgjJiHUHgL9LDxNJTk0c5YiblTmnfO7NX10u4JbHxrlHoZElZ1AVKYwA3AaeASObY/TOZNoh1UCogA45B4lrkyNbPUm1z726E+OodkrpPYaPM3mKON7Jhw3ZbODjTspEMYigv8yG0VAy0hQIf0A4CQNafzUiURU+Hw8+Hm8TmRH+5/FK6bsCPif5RbVB1MEd/3ehbKqjKmrfVVKnlBTCB8sSglfDBxZUKGROyBorLVq0oUXzHNyqwOCiWqp09miJhuVdCfhqw7rk/pLXwaFLHE5ODDp+6qimy/wV7Ff+4jJaHkwLM6y8zt80XApf1zfWUQamvwwig89U2Vwz/GnixLZi4eWotvF0fUcUbzRm62Pr4ASl9sRVpXm8+T3S/yGRtvjEo5I1Le3qxPjLbzgRzRizvEqj7kludo=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2020 07:39:24.3210 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 62716e4c-0fc2-45ec-4ade-08d7f8a316f2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR08MB2837
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, nd <nd@arm.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_000_478C4829CCAF495B860E6BA3D86AA47Darmcom_
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable



On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com> wrote:

On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com> wrote:

On 14/05/2020 18:38, Julien Grall wrote:
Hi,

On 14/05/2020 17:18, Bertrand Marquis wrote:


On 14 May 2020, at 16:57, Julien Grall <julien@xen.org> wrote:



On 14/05/2020 15:28, Bertrand Marquis wrote:
Hi,

Hi,

When executing Linux on arm64 with KPTI activated (in Dom0 or in a
DomU), I get a lot of page-table walk errors like this:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
After implementing a call trace, I found that the problem was
coming from update_runstate_area when Linux has KPTI activated.
I have the following call trace:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
(XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
(XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
(XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
(XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
(XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
(XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
(XEN)    [<0000000000269524>] context_switch+0x58/0x70
(XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
(XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
(XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
(XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
When I discussed this subject with Stefano, he pointed me to a
discussion started a year ago on this topic here:
https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html

And a patch was submitted:
https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html

I rebased this patch on the current master and it solves the problem
I have seen.
It sounds to me like a good solution to introduce a
VCPUOP_register_runstate_phys_memory_area so as not to depend on the
area actually being mapped in the guest when a context switch is done
(which is exactly the problem that occurs when a context switch is
triggered while a guest is running in EL0).
Is there any reason why this was not merged in the end?

I just skimmed through the thread to remind myself of the state.
AFAICT, this is blocked on the contributor clarifying the intended
interaction and providing a new version.

What do you mean here by intended interaction? How the new hypercall
should be used by the guest OS?

From what I remember, Jan was seeking clarification on whether the two
hypercalls (existing and new) can be called together by the same OS
(and make sense).

There was also the question of the handover between two pieces of
software. For instance, what if the firmware is using the existing
interface but the OS the new one? Similar question about Kexecing a
different kernel.

This part is mostly documentation, so we can discuss the approach
and review the implementation.



I am still in favor of the new hypercall (and still in my todo list)
but I haven't yet found time to revive the series.

Would you be willing to take over the series? I would be happy to
bring you up to speed and provide review.

Sure I can take it over.

I ported it to the master version of Xen and tested it on a board.
I still need to do a deep review of the code myself, but I have an
understanding of the problem and of what the idea is.

Any help to get up to speed would be more than welcome :-)
I would recommend going through the latest version (v3) and the
previous one (v2). I am also suggesting v2 because I think its split
was easier to review/understand.

The x86 code is probably what is going to give you the most trouble as
there are two ABIs to support (compat and non-compat). If you don't
have an x86 setup, I should be able to test it/help write it.

Feel free to ask any questions and I will try my best to remember the
discussion from last year :).

At the risk of being shouted down again: a new hypercall isn't
strictly necessary, and there are probably better ways of fixing this.

The underlying ABI problem is that the area is registered by virtual
address.  The only correct way this should have been done is to register
by guest physical address, so Xen's updating of the data doesn't
interact with the guest pagetable settings/restrictions.  x86 suffers
the same kind of problems as ARM, except we silently squash the fallout.

The logic in Xen is horrible, and I would really rather it were deleted
completely than kept around for compatibility.

The runstate area is always fixed kernel memory and doesn't move.  I
believe it is already restricted from crossing a page boundary, and we
can calculate the va=>pa translation when the hypercall is made.

Yes - this is technically an ABI change, but nothing is going to break
(AFAICT) and the cleanup win is large enough to make this a *very*
attractive option.
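As a rough illustration of the register-by-physical-address idea, here is a toy model (hypothetical names and types, not Xen's actual internals): validate and translate once at registration time, then update the area without ever touching the guest's virtual mappings again, so KPTI's unmapping of kernel VAs during EL0 execution becomes irrelevant:

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u

/* Toy model: one guest "physical" page and a flag modelling whether the
 * guest currently has the kernel va mapped (it doesn't, under KPTI, while
 * running in EL0). Registration by physical address sidesteps the flag. */

static uint8_t guest_page[PAGE_SIZE]; /* the guest's physical page */
static int va_mapped = 1;             /* cleared while the guest runs in EL0 */

struct runstate_area {
    unsigned int offset;              /* offset of the area within guest_page */
    unsigned int size;
};

/* Hypercall-time registration: validate once, including the requirement
 * that the area not cross a page boundary. Returns 0 on success. */
static int register_runstate_phys(struct runstate_area *a,
                                  unsigned int offset, unsigned int size)
{
    if (offset + size > PAGE_SIZE)
        return -1;                    /* would cross a page boundary */
    a->offset = offset;
    a->size = size;
    return 0;
}

/* Context-switch-time update: writes through the physical page directly,
 * so whether the guest va is currently mapped does not matter. */
static int update_runstate(const struct runstate_area *a,
                           const void *data, unsigned int len)
{
    if (len > a->size)
        return -1;
    memcpy(guest_page + a->offset, data, len);
    return 0;
}
```

The failing path in the backtrace above (get_page_from_gva walking the guest page tables at context-switch time) simply has no counterpart in this scheme.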

I suggested this approach two years ago [1] but you were the one
saying that buffer could cross page-boundary on older Linux [2]:

"I'd love to do this, but we cant.  Older Linux used to have a virtual
buffer spanning a page boundary.  Changing the behaviour under that will
cause older setups to explode."

So can you explain your change of heart here?


I would prefer to fix it like this (perhaps adding a new hypercall
which explicitly takes a guest physical address), than to keep any of
this mess around forever more to cope with legacy guests.

What does legacy guests mean? Is it PV 32-bit, or does it also include
some HVM guests?

Reading all this and digging into the code, the meaningful
implementation would definitely be to validate and translate the
address during the hypercall handling and then to just reuse this
address along the way.
Whether the guest passes a virtual address (versus an intermediate
physical one), and whether a new hypercall is created for this, might
be a different question that we could handle separately.
Does anyone see something wrong with such an approach?

Answering myself:
There might be the corner case where the physical area can actually be
removed from the guest (i.e. a guest using some memory coming from a
temporarily mapped area).
Would there be a way to check for that during the hypercall?
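One plausible answer, sketched below with hypothetical names, is to take a reference on the backing page at registration time so it cannot disappear while registered; Xen's real implementation would presumably use its existing page reference counting (get_page/put_page) rather than this toy counter:

```c
#include <stddef.h>

/* Toy sketch of the "pin at registration" idea. The hypercall handler
 * takes a reference on the page backing the runstate area; the reference
 * is only dropped on re-registration or domain teardown, so the page
 * cannot be freed (e.g. ballooned out) behind Xen's back. */

struct page {
    unsigned int refcount;    /* owner's reference counts as 1 */
};

static int pin_page(struct page *pg)
{
    if (pg->refcount == 0)
        return -1;            /* page is already gone: fail the hypercall */
    pg->refcount++;           /* hold it while registered */
    return 0;
}

static void unpin_page(struct page *pg)
{
    pg->refcount--;           /* dropped on re-registration/teardown */
}

/* The page may only actually be released once no registration holds it. */
static int try_release(struct page *pg)
{
    return pg->refcount == 1 ? 0 : -1;
}
```

With this, the temporary-mapping corner case turns into a well-defined failure: either the registration fails up front, or the guest cannot release the memory until it deregisters the area.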

Cheers
Bertrand



Cheers,

[1] <3a77a293-1a29-42ed-8fc0-a74bda213b92@arm.com>
[2] <dc80422f-80bb-bd37-ed41-bb6559f4d7d8@citrix.com>


--_000_478C4829CCAF495B860E6BA3D86AA47Darmcom_
Content-Type: text/html; charset="us-ascii"
Content-ID: <398C66BC003CCA4E8C49E38C8BC3253B@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable

<html>
<head>
<meta http-equiv=3D"Content-Type" content=3D"text/html; charset=3Dus-ascii"=
>
</head>
<body style=3D"word-wrap: break-word; -webkit-nbsp-mode: space; line-break:=
 after-white-space;" class=3D"">
<br class=3D"">
<div><br class=3D"">
<blockquote type=3D"cite" class=3D"">
<div class=3D"">On 14 May 2020, at 20:13, Julien Grall &lt;<a href=3D"mailt=
o:julien.grall.oss@gmail.com" class=3D"">julien.grall.oss@gmail.com</a>&gt;=
 wrote:</div>
<br class=3D"Apple-interchange-newline">
<div class=3D""><span style=3D"caret-color: rgb(0, 0, 0); font-family: Helv=
etica; font-size: 12px; font-style: normal; font-variant-caps: normal; font=
-weight: normal; letter-spacing: normal; text-align: start; text-indent: 0p=
x; text-transform: none; white-space: normal; word-spacing: 0px; -webkit-te=
xt-stroke-width: 0px; text-decoration: none; float: none; display: inline !=
important;" class=3D"">On
 Thu, 14 May 2020 at 19:12, Andrew Cooper &lt;</span><a href=3D"mailto:andr=
ew.cooper3@citrix.com" style=3D"font-family: Helvetica; font-size: 12px; fo=
nt-style: normal; font-variant-caps: normal; font-weight: normal; letter-sp=
acing: normal; orphans: auto; text-align: start; text-indent: 0px; text-tra=
nsform: none; white-space: normal; widows: auto; word-spacing: 0px; -webkit=
-text-size-adjust: auto; -webkit-text-stroke-width: 0px;" class=3D"">andrew=
.cooper3@citrix.com</a><span style=3D"caret-color: rgb(0, 0, 0); font-famil=
y: Helvetica; font-size: 12px; font-style: normal; font-variant-caps: norma=
l; font-weight: normal; letter-spacing: normal; text-align: start; text-ind=
ent: 0px; text-transform: none; white-space: normal; word-spacing: 0px; -we=
bkit-text-stroke-width: 0px; text-decoration: none; float: none; display: i=
nline !important;" class=3D"">&gt;
 wrote:</span><br style=3D"caret-color: rgb(0, 0, 0); font-family: Helvetic=
a; font-size: 12px; font-style: normal; font-variant-caps: normal; font-wei=
ght: normal; letter-spacing: normal; text-align: start; text-indent: 0px; t=
ext-transform: none; white-space: normal; word-spacing: 0px; -webkit-text-s=
troke-width: 0px; text-decoration: none;" class=3D"">
<blockquote type=3D"cite" style=3D"font-family: Helvetica; font-size: 12px;=
 font-style: normal; font-variant-caps: normal; font-weight: normal; letter=
-spacing: normal; orphans: auto; text-align: start; text-indent: 0px; text-=
transform: none; white-space: normal; widows: auto; word-spacing: 0px; -web=
kit-text-size-adjust: auto; -webkit-text-stroke-width: 0px; text-decoration=
: none;" class=3D"">
<br class=3D"">
On 14/05/2020 18:38, Julien Grall wrote:<br class=3D"">
<blockquote type=3D"cite" class=3D"">Hi,<br class=3D"">
<br class=3D"">
On 14/05/2020 17:18, Bertrand Marquis wrote:<br class=3D"">
<blockquote type=3D"cite" class=3D""><br class=3D"">
<br class=3D"">
<blockquote type=3D"cite" class=3D"">On 14 May 2020, at 16:57, Julien Grall=
 &lt;<a href=3D"mailto:julien@xen.org" class=3D"">julien@xen.org</a>&gt; wr=
ote:<br class=3D"">
<br class=3D"">
<br class=3D"">
<br class=3D"">
On 14/05/2020 15:28, Bertrand Marquis wrote:<br class=3D"">
<blockquote type=3D"cite" class=3D"">Hi,<br class=3D"">
</blockquote>
<br class=3D"">
Hi,<br class=3D"">
<br class=3D"">
<blockquote type=3D"cite" class=3D"">When executing linux on arm64 with KPT=
I activated (in Dom0 or in a<br class=3D"">
DomU), I have a lot of walk page table errors like this:<br class=3D"">
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va<br class=3D"">
0xffffff837ebe0cd0<br class=3D"">
After implementing a call trace, I found that the problem was<br class=3D""=
>
coming from the update_runstate_area when linux has KPTI activated.<br clas=
s=3D"">
I have the following call trace:<br class=3D"">
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va<br class=3D"">
0xffffff837ebe0cd0<br class=3D"">
(XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10<br class=
=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;000000000027780c&gt;] get_page_from_gva&#43;0x=
180/0x35c<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;00000000002700c8&gt;] guestcopy.c#copy_guest&#=
43;0x1b0/0x2e4<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;0000000000270228&gt;] raw_copy_to_guest&#43;0x=
2c/0x34<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;0000000000268dd0&gt;] domain.c#update_runstate=
_area&#43;0x90/0xc8<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;000000000026909c&gt;] domain.c#schedule_tail&#=
43;0x294/0x2d8<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;0000000000269524&gt;] context_switch&#43;0x58/=
0x70<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;00000000002479c4&gt;] core.c#sched_context_swi=
tch&#43;0x88/0x1e4<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;000000000024845c&gt;] core.c#schedule&#43;0x22=
4/0x2ec<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;0000000000224018&gt;] softirq.c#__do_softirq&#=
43;0xe4/0x128<br class=3D"">
(XEN) &nbsp;&nbsp;&nbsp;[&lt;00000000002240d4&gt;] do_softirq&#43;0x14/0x1c=
<br class=3D"">
Discussing this subject with Stefano, he pointed me to a discussion<br clas=
s=3D"">
started a year ago on this subject here:<br class=3D"">
<a href=3D"https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg=
03053.html" class=3D"">https://lists.xenproject.org/archives/html/xen-devel=
/2018-11/msg03053.html</a><br class=3D"">
<br class=3D"">
And a patch was submitted:<br class=3D"">
https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html<=
br class=3D"">
<br class=3D"">
I rebased this patch on current master and it is solving the<br class=3D"">
problem I have seen.<br class=3D"">
It sounds to me like a good solution to introduce a<br class=3D"">
VCPUOP_register_runstate_phys_memory_area to not depend on the area<br clas=
s=3D"">
actually being mapped in the guest when a context switch is being<br class=
=3D"">
done (which is actually the problem happening when a context switch<br clas=
s=3D"">
is trigger while a guest is running in EL0).<br class=3D"">
Is there any reason why this was not merged at the end ?<br class=3D"">
</blockquote>
<br class=3D"">
I just skimmed through the thread to remind myself the state.<br class=3D""=
>
AFAICT, this is blocked on the contributor to clarify the intended<br class=
=3D"">
interaction and provide a new version.<br class=3D"">
</blockquote>
<br class=3D"">
What do you mean here by intended interaction ? How the new hyper<br class=
=3D"">
call should be used by the guest OS ?<br class=3D"">
</blockquote>
<br class=3D"">
>From what I remember, Jan was seeking clarification on whether the two<br c=
lass=3D"">
hypercalls (existing and new) can be called together by the same OS<br clas=
s=3D"">
(and make sense).<br class=3D"">
<br class=3D"">
There was also the question of the handover between two pieces of<br class=
=3D"">
sotfware. For instance, what if the firmware is using the existing<br class=
=3D"">
interface but the OS the new one? Similar question about Kexecing a<br clas=
s=3D"">
different kernel.<br class=3D"">
<br class=3D"">
This part is mostly documentation so we can discuss the approach<br class=3D"">
and review the implementation.<br class=3D"">
<br class=3D"">
<blockquote type=3D"cite" class=3D""><br class=3D"">
<blockquote type=3D"cite" class=3D""><br class=3D"">
I am still in favor of the new hypercall (and it is still on my todo<br class=3D"">
list) but I haven't yet found time to revive the series.<br class=3D"">
<br class=3D"">
Would you be willing to take over the series? I would be happy to<br class=
=3D"">
bring you up to speed and provide review.<br class=3D"">
</blockquote>
<br class=3D"">
Sure I can take it over.<br class=3D"">
<br class=3D"">
I ported it to the master version of Xen and tested it on a board.<br class=3D"">
I still need to do a deep review of the code myself, but I have an<br class=3D"">
understanding of the problem and of the idea behind the fix.<br class=3D"">
<br class=3D"">
Any help to get up to speed would be more than welcome :-)<br class=3D"">
</blockquote>
I would recommend going through the latest version (v3) and the<br class=3D"">
previous (v2). I am also suggesting v2 because I think the split was<br cla=
ss=3D"">
easier to review/understand.<br class=3D"">
<br class=3D"">
The x86 code is probably what is going to give you the most trouble as<br c=
lass=3D"">
there are two ABIs to support (compat and non-compat). If you don't<br clas=
s=3D"">
have an x86 setup, I should be able to test it/help write it.<br class=3D""=
>
<br class=3D"">
Feel free to ask any questions and I will try my best to remember the<br cl=
ass=3D"">
discussion from last year :).<br class=3D"">
</blockquote>
<br class=3D"">
At the risk of being shouted down again, a new hypercall isn't strictly<br class=3D"">
necessary, and there are probably better ways of fixing it.<br class=3D"">
<br class=3D"">
The underlying ABI problem is that the area is registered by virtual<br cla=
ss=3D"">
address. &nbsp;The only correct way this should have been done is to regist=
er<br class=3D"">
by guest physical address, so Xen's updating of the data doesn't<br class=
=3D"">
interact with the guest pagetable settings/restrictions. &nbsp;x86 suffers<=
br class=3D"">
the same kind of problems as ARM, except we silently squash the fallout.<br=
 class=3D"">
<br class=3D"">
The logic in Xen is horrible, and I would really rather see it deleted<br class=3D"">
completely than kept around for compatibility.<br class=3D"">
<br class=3D"">
The runstate area is always fixed kernel memory and doesn't move. &nbsp;I<b=
r class=3D"">
believe it is already restricted from crossing a page boundary, and we<br c=
lass=3D"">
can calculate the va=3D&gt;pa translation when the hypercall is made.<br cl=
ass=3D"">
<br class=3D"">
Yes - this is technically an ABI change, but nothing is going to break<br class=3D"">
(AFAICT) and the cleanup win is large enough to make this a *very*<br class=
=3D"">
attractive option.<br class=3D"">
</blockquote>
<br class=3D"">
I suggested this approach two years ago [1] but you were the one saying<br class=3D"">
that the buffer could cross a page boundary on older Linux [2]:<br class=3D"">
<br class=3D"">
&quot;I'd love to do this, but we can't. &nbsp;Older Linux used to have a virtual<br class=3D"">
buffer spanning a page boundary. &nbsp;Changing the behaviour under that will<br class=3D"">
cause older setups to explode.&quot;<br class=3D"">
<br class=3D"">
So can you explain your change of heart here?<br class=3D"">
<br class=3D"">
<blockquote type=3D"cite" class=3D"">
<br class=3D"">
I would prefer to fix it like this (perhaps adding a new hypercall<br class=3D"">
which explicitly takes a guest physical address), than to keep any of<br class=3D"">
this mess around forever more to cope with legacy guests.<br class=3D"">
</blockquote>
<br class=3D"">
What does &quot;legacy guests&quot; mean? Is it PV 32-bit only, or does it also<br class=3D"">
include some HVM guests?<br class=3D"">
</div>
</blockquote>
<div><br class=3D"">
</div>
<div>Reading all this and digging into the code, the meaningful implementation would definitely be to validate and translate the address during the hypercall handling, and then to just reuse this address along the way.</div>
<div>Whether the guest passes a virtual address (versus an intermediate physical one), and whether to create a new hypercall for this, might be a different question that we could handle separately.</div>
<div>Does anyone see anything wrong with such an approach?</div>
<div><br class=3D"">
</div>
<div>Answering myself:</div>
<div>There might be a corner case where the physical area can actually be removed from the guest (i.e. a guest using some memory coming from a temporarily mapped area).</div>
<div>Would there be a way to check for that during the hypercall?</div>
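To make the proposal concrete, here is an illustrative pseudocode sketch (in C style) of registering the runstate area by guest physical address and pinning its page at hypercall time. All names here are invented for illustration; this is not Xen's actual ABI or the posted series, just the shape of the idea: translate and take a reference once, so the context-switch path no longer walks guest page tables.

```c
/* Hypothetical interface - illustrative only, not Xen's real ABI. */
struct vcpu_register_runstate_phys {
    uint64_t gpa;   /* guest physical address of the runstate area */
};

/* At hypercall time: validate, translate once, map permanently. */
int register_runstate_phys(struct vcpu *v, uint64_t gpa)
{
    struct page_info *pg;

    /* The area must not cross a page boundary. */
    if ( (gpa & ~PAGE_MASK) + sizeof(struct vcpu_runstate_info) > PAGE_SIZE )
        return -EINVAL;

    /* Take a reference so the page cannot vanish under us. */
    pg = get_page_from_gfn(v->domain, gpa >> PAGE_SHIFT, NULL, P2M_ALLOC);
    if ( !pg )
        return -EINVAL;   /* page absent: reject at registration time */

    v->runstate_map = __map_domain_page_global(pg) + (gpa & ~PAGE_MASK);

    /*
     * Context switch now writes through runstate_map directly; no guest
     * VA translation is involved, so KPTI unmapping the kernel range
     * while running in EL0 can no longer make the copy fail.
     */
    return 0;
}
```

The reference taken at registration is also what would need auditing for the corner case above: if the guest can later remove the underlying memory, the registration either blocks that or must be invalidated.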
<div><br class=3D"">
</div>
<div>Cheers</div>
<div>Bertrand</div>
<div><br class=3D"">
</div>
<br class=3D"">
<blockquote type=3D"cite" class=3D"">
<div class=3D""><br class=3D"">
Cheers,<br class=3D"">
<br class=3D"">
[1] &lt;<a href=3D"mailto:3a77a293-1a29-42ed-8fc0-a74bda213b92@arm.com" class=3D"">3a77a293-1a29-42ed-8fc0-a74bda213b92@arm.com</a>&gt;<br class=3D"">
[2] &lt;<a href=3D"mailto:dc80422f-80bb-bd37-ed41-bb6559f4d7d8@citrix.com" class=3D"">dc80422f-80bb-bd37-ed41-bb6559f4d7d8@citrix.com</a>&gt;</div>
</blockquote>
</div>
<br class=3D"">
</body>
</html>

--_000_478C4829CCAF495B860E6BA3D86AA47Darmcom_--


From xen-devel-bounces@lists.xenproject.org Fri May 15 08:32:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 08:32:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZVlR-0002GW-Gj; Fri, 15 May 2020 08:32:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZVlQ-0002GR-8P
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 08:32:16 +0000
X-Inumbo-ID: 94217238-9686-11ea-a530-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94217238-9686-11ea-a530-12813bfff9fa;
 Fri, 15 May 2020 08:32:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589531536;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=1FcibkN+M9mk9H4hXwPg7huENntptjJlMYcxLwCtNx0=;
 b=SLrojQb9Uztf5sLxYVOI6mhPi8OqC6h3v6R2UeSS08AUsGUew6TQMEOT
 U1/y8GAtNI3w5Yr083q8srw5ReiOjN9I98C7r18k329/olRIGAiKGldkN
 8PtdQ/Lelx0l16ctTizxjYCU9WftA+dWOwxw+HAxklY6mhLuO97UtB3Qv Y=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: RP73nD2P38KqDXAbnDCinD/6RlKzcuqt8jzFRCz1/iRWV0ps+AjZJ9KNcXouWUbgMORajhSMHu
 XoukeKNXLr+1so2yYf1TEgr8ZqLyWzOfcdWDyoDclOtqKQj+rd1yl19hOxQ1k+Jg+MaOZ425II
 ILfpa4bZ57vFJxYDGuzSYPf/4iiBOnVrx0AEiPR+Pnc5woFeNMpwaCwxyBcnHRQTN87WSn1kzk
 9uII24o8/sAbgxzG3H4NdyXBE2FKx6kTz4/oISBoow592VujmnYHnlnm1WItoGjG2Jn5LYROQx
 dqo=
X-SBRS: 2.7
X-MesageID: 17600240
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,394,1583211600"; d="scan'208";a="17600240"
Date: Fri, 15 May 2020 10:32:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
Message-ID: <20200515083204.GM54375@Air-de-Roger>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
> While from just a single Skylake system it is already clear that we
> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
> documented to be used for display purposes only anyway), logging this
> information may still give us some reference in case of problems as well
> as for future work. Additionally on the AMD side it is unclear whether
> the deviation between reported and measured frequencies is because of us
> not doing well, or because of nominal and actual frequencies being quite
> far apart.
> 
> The chosen variable naming in amd_log_freq() has pointed out a naming
> problem in rdmsr_safe(), which is being taken care of at the same time.
> Symmetrically wrmsr_safe(), being an inline function, also gets an
> unnecessary underscore dropped from one of its local variables.
> 
> [1] With a core crystal clock of 24MHz and a ratio of 216/2, the
>     reported frequency nevertheless is 2600MHz, rather than the to be
>     expected (and calibrated by both us and Linux) 2592MHz.
> 
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I have one question below about P-state limits.

> ---
> TBD: The node ID retrieval using extended leaf 1E implies it won't work
>      on older hardware (pre-Fam15 I think). Besides the Node ID MSR,
>      which doesn't get advertised on my Fam10 box (and it's zero on all
>      processors despite there being two nodes as per the PCI device
>      map), and which isn't even documented for Fam11, Fam12, and Fam14,
>      I didn't find any other means to retrieve the node ID a CPU is
>      associated with - the NodeId register in PCI config space depends
>      on one already knowing the node ID for doing the access, as the
>      device to be used is a function of the node ID.

Is there a real chance of the boost states being different between
nodes? Won't Xen explode elsewhere due to possibly diverging features
between nodes?

> 
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -532,6 +532,102 @@ static void amd_get_topology(struct cpui
>                                                            : c->cpu_core_id);
>  }
>  
> +void amd_log_freq(const struct cpuinfo_x86 *c)
> +{
> +	unsigned int idx = 0, h;
> +	uint64_t hi, lo, val;
> +
> +	if (c->x86 < 0x10 || c->x86 > 0x19 ||
> +	    (c != &boot_cpu_data &&
> +	     (!opt_cpu_info || (c->apicid & (c->x86_num_siblings - 1)))))
> +		return;
> +
> +	if (c->x86 < 0x17) {
> +		unsigned int node = 0;
> +		uint64_t nbcfg;
> +
> +		/*
> +		 * Make an attempt at determining the node ID, but assume
> +		 * symmetric setup (using node 0) if this fails.
> +		 */
> +		if (c->extended_cpuid_level >= 0x8000001e &&
> +		    cpu_has(c, X86_FEATURE_TOPOEXT)) {
> +			node = cpuid_ecx(0x8000001e) & 0xff;
> +			if (node > 7)
> +				node = 0;
> +		} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
> +			rdmsrl(0xC001100C, val);
> +			node = val & 7;
> +		}
> +
> +		/*
> +		 * Enable (and use) Extended Config Space accesses, as we
> +		 * can't be certain that MCFG is available here during boot.
> +		 */
> +		rdmsrl(MSR_AMD64_NB_CFG, nbcfg);
> +		wrmsrl(MSR_AMD64_NB_CFG,
> +		       nbcfg | (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT));
> +#define PCI_ECS_ADDRESS(sbdf, reg) \
> +    (0x80000000 | ((sbdf).bdf << 8) | ((reg) & 0xfc) | (((reg) & 0xf00) << 16))
> +
> +		for ( ; ; ) {
> +			pci_sbdf_t sbdf = PCI_SBDF(0, 0, 0x18 | node, 4);
> +
> +			switch (pci_conf_read32(sbdf, PCI_VENDOR_ID)) {
> +			case 0x00000000:
> +			case 0xffffffff:
> +				/* No device at this SBDF. */
> +				if (!node)
> +					break;
> +				node = 0;
> +				continue;
> +
> +			default:
> +				/*
> +				 * Core Performance Boost Control, family
> +				 * dependent up to 3 bits starting at bit 2.


I would add:

"Note that boost states operate at a frequency above the base one, and
thus need to be accounted for in order to correctly fetch the nominal
frequency of the processor."

> +				 */
> +				switch (c->x86) {
> +				case 0x10: idx = 1; break;
> +				case 0x12: idx = 7; break;
> +				case 0x14: idx = 7; break;
> +				case 0x15: idx = 7; break;
> +				case 0x16: idx = 7; break;
> +				}
> +				idx &= pci_conf_read(PCI_ECS_ADDRESS(sbdf,
> +				                                     0x15c),
> +				                     0, 4) >> 2;
> +				break;
> +			}
> +			break;
> +		}
> +
> +#undef PCI_ECS_ADDRESS
> +		wrmsrl(MSR_AMD64_NB_CFG, nbcfg);
> +	}
> +
> +	lo = 0; /* gcc may not recognize the loop having at least 5 iterations */
> +	for (h = c->x86 == 0x10 ? 5 : 8; h--; )
> +		if (!rdmsr_safe(0xC0010064 + h, lo) && (lo >> 63))
> +			break;
> +	if (!(lo >> 63))
> +		return;

Should you also take the P-state limit here into account (from MSR
0xC0010061)?

I assume the firmware could set a minimum P-state higher than the
possible ones present in the list of P-states, effectively preventing
switching to lowest-performance P-states?

The rest LGTM, thanks.


From xen-devel-bounces@lists.xenproject.org Fri May 15 08:38:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 08:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZVrk-0002Vr-6Q; Fri, 15 May 2020 08:38:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZVrj-0002Vm-0e
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 08:38:47 +0000
X-Inumbo-ID: 7d11e7e8-9687-11ea-ae69-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d11e7e8-9687-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 08:38:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589531926;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=oTPG2wBRijrd8LHXdr4va9+Pc6c1+NanC48YuXpyyLU=;
 b=ZhClPJg61OTdEZXSTPtBdHuIzj8r1kdeh5GWBGh0jI+0sCV+UNcEvWHO
 GK6sYQlesLJS5T3JTb9hmVDpaAAjwRJDZytmm+bfowfAVq7y8EqffiugJ
 jwTtmqgbBxdL3yD3k2vb+CwBXMIj8Jc4JkEsTb1w9yVxvUc8gheuHLewI A=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: UWmkBaTnuikMLwSTjlvyKRhPwGjpbrMSNe1hB/FoXu5l2PrSTsT1RNUqj5HztCYJsgqdKvL/nD
 zuJKTFF3JK5ku80oj5XS0mPckdyBcG/oLVnFg5zMww7H0wbKgT2k7iW2B9HJtKS3z+ZKziOGeb
 g1ZIjegeOUToaJ1brY8cGEjKrtGWb1r97DFTMCfq5mtfMsPYlGZeJ7zIuKgnu2wtQxo2gl4zj3
 36giR3mApPxpaKejuq3u5ONiOq6KmHqhN4bQH0NR4yWKSNZCdPhgw5x4MdYW4L5RbDaw1m1npO
 ANg=
X-SBRS: 2.7
X-MesageID: 17631882
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,394,1583211600"; d="scan'208";a="17631882"
Date: Fri, 15 May 2020 10:38:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Error during update_runstate_area with KPTI activated
Message-ID: <20200515083838.GN54375@Air-de-Roger>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, nd <nd@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
> 
> 
> On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com<mailto:julien.grall.oss@gmail.com>> wrote:
> 
> On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com<mailto:andrew.cooper3@citrix.com>> wrote:
> 
> On 14/05/2020 18:38, Julien Grall wrote:
> Hi,
> 
> On 14/05/2020 17:18, Bertrand Marquis wrote:
> 
> 
> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org<mailto:julien@xen.org>> wrote:
> 
> 
> 
> On 14/05/2020 15:28, Bertrand Marquis wrote:
> Hi,
> 
> Hi,
> 
> When executing linux on arm64 with KPTI activated (in Dom0 or in a
> DomU), I have a lot of walk page table errors like this:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> 0xffffff837ebe0cd0
> After implementing a call trace, I found that the problem was
> coming from the update_runstate_area when linux has KPTI activated.
> I have the following call trace:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> 0xffffff837ebe0cd0
> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
> Discussing this subject with Stefano, he pointed me to a discussion
> started a year ago on this subject here:
> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
> 
> And a patch was submitted:
> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
> 
> I rebased this patch on current master and it solves the
> problem I have seen.
> It sounds to me like a good solution to introduce a
> VCPUOP_register_runstate_phys_memory_area to not depend on the area
> actually being mapped in the guest when a context switch is being
> done (which is actually the problem happening when a context switch
> is triggered while a guest is running in EL0).
> Is there any reason why this was not merged in the end?
> 
> I just skimmed through the thread to remind myself of the state.
> AFAICT, this is blocked on the contributor clarifying the intended
> interaction and providing a new version.
> 
> What do you mean here by intended interaction? How should the new
> hypercall be used by the guest OS?
> 
> From what I remember, Jan was seeking clarification on whether the two
> hypercalls (existing and new) can be called together by the same OS
> (and make sense).
> 
> There was also the question of the handover between two pieces of
> software. For instance, what if the firmware is using the existing
> interface but the OS the new one? Similar question about Kexecing a
> different kernel.
> 
> This part is mostly documentation so we can discuss the approach
> and review the implementation.
> 
> 
> 
> I am still in favor of the new hypercall (and still in my todo list)
> but I haven't yet found time to revive the series.
> 
> Would you be willing to take over the series? I would be happy to
> bring you up to speed and provide review.
> 
> Sure I can take it over.
> 
> I ported it to the master version of Xen and I tested it on a board.
> I still need to do a deep review of the code myself but I have an
> understanding of the problem and of the idea behind the fix.
> 
> Any help to get up to speed would be more than welcome :-)
> I would recommend to go through the latest version (v3) and the
> previous (v2). I am also suggesting v2 because I think the split was
> easier to review/understand.
> 
> The x86 code is probably what is going to give you the most trouble as
> there are two ABIs to support (compat and non-compat). If you don't
> have an x86 setup, I should be able to test it/help write it.
> 
> Feel free to ask any questions and I will try my best to remember the
> discussion from last year :).
> 
> At risk of being shouted down again, a new hypercall isn't necessarily
> necessary, and there are probably better ways of fixing it.
> 
> The underlying ABI problem is that the area is registered by virtual
> address.  The only correct way this should have been done is to register
> by guest physical address, so Xen's updating of the data doesn't
> interact with the guest pagetable settings/restrictions.  x86 suffers
> the same kind of problems as ARM, except we silently squash the fallout.
> 
> The logic in Xen is horrible, and I would really rather it was deleted
> completely, rather than to be kept for compatibility.
> 
> The runstate area is always fixed kernel memory and doesn't move.  I
> believe it is already restricted from crossing a page boundary, and we
> can calculate the va=>pa translation when the hypercall is made.
> 
> Yes - this is technically an ABI change, but nothing is going to break
> (AFAICT) and the cleanup win is large enough to make this a *very*
> attractive option.
> 
> I suggested this approach two years ago [1] but you were the one
> saying that buffer could cross page-boundary on older Linux [2]:
> 
> "I'd love to do this, but we can't.  Older Linux used to have a virtual
> buffer spanning a page boundary.  Changing the behaviour under that will
> cause older setups to explode."

Sorry, this was a long time ago, and the details have faded. IIRC there was
even a proposal (or patch set) that took that into account and allowed
buffers to span across a page boundary by taking a reference to two
different pages in that case.

Another option would be to just return -EINVAL or -EOPNOTSUPP in that
case and get on with it. Runstate info shouldn't be mandatory for
guests to function properly; I would say it's just extra info that's
provided in good faith by the hypervisor in order to help the guest
make better scheduling decisions.
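A sketch of what such a rejection could look like (standalone C for illustration only; the helper names are made up and real Xen code would use its own page macros):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT 12 /* assuming 4KB pages */

/* True if [gva, gva + len) lies entirely within one guest page. */
static bool buffer_in_one_page(uintptr_t gva, size_t len)
{
    if (len == 0)
        return true;
    return (gva >> PAGE_SHIFT) == ((gva + len - 1) >> PAGE_SHIFT);
}

/* The registration path could then bail out early for crossing buffers. */
static int register_runstate_check(uintptr_t gva, size_t len)
{
    return buffer_in_one_page(gva, len) ? 0 : -22 /* -EINVAL */;
}
```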

> So can you explain your change of heart here?
> 
> 
> I would prefer to fix it like this, (perhaps adding a new hypercall
> which explicitly takes a guest physical address), than to keep any of
> this mess around forever more to cope with legacy guests.
> 
> What does "legacy guests" mean? Is it PV 32-bit or does it also include some HVM?
> 
> Reading all this and digging into the code, the meaningful implementation would definitely be to validate and translate the address during the hypercall handling and then to just reuse this address along the way.
> Whether or not the guest is passing a virtual address (versus an intermediate physical one), and creating a new hypercall for this, might be a different question that we could handle separately.
> Does anyone see something wrong with such an approach?
> 
> Answering myself:
> There might be a corner case where the physical area can actually be removed from the guest (i.e. a guest using some memory coming from a temporarily mapped area).
> Would there be a way to check that during the hypercall?

You have to take a reference to the page in order to prevent it from
being freed under your feet. That way, if the guest decides to balloon
out the page, the reference you hold keeps it from actually being
freed.
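As a toy model of that refcounting (illustrative only; Xen's real struct page_info and get_page()/put_page() are more involved):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of a guest page with a reference count. */
struct page {
    unsigned int refcount;
    bool freed;
};

/* Take a reference; fails if the page is already gone. */
static bool get_page(struct page *pg)
{
    if (pg->freed)
        return false;
    pg->refcount++;
    return true;
}

/* Drop a reference; the page is only freed when the last one goes. */
static void put_page(struct page *pg)
{
    if (--pg->refcount == 0)
        pg->freed = true;
}
```

Ballooning out the page only drops the guest's own reference, so the page survives for as long as the hypervisor still holds the runstate reference.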

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 15 08:54:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 08:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZW6Q-0004AT-Ga; Fri, 15 May 2020 08:53:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7E4v=65=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZW6Q-0004AO-5f
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 08:53:58 +0000
X-Inumbo-ID: 9ca9fea4-9689-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ca9fea4-9689-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 08:53:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dCL3VPVGAwr9hKH073torXgmQtJkFHHbqu4henwG2fY=; b=KcUZTjoLF7zm8Gpod5XXKlcvEj
 Hy4X94VA2bh0mZfe1Rl2JaU+/oXxdRtxGPXp2wQe5XlkhEI389kAUpOrMRipAsQ7GgWz4WRZIV+OR
 Syt82vgN4L4UrcKz39XwNQqOmRJfKlLfYd5j4Lr7hWGkvDmXnWGOQWh0tHNULALkqtAc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZW6O-0002U7-JA; Fri, 15 May 2020 08:53:56 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZW6O-0006Km-78; Fri, 15 May 2020 08:53:56 +0000
Subject: Re: Error during update_runstate_area with KPTI activated
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
Date: Fri, 15 May 2020 09:53:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515083838.GN54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, nd <nd@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 15/05/2020 09:38, Roger Pau Monné wrote:
> On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
>>
>>
>> On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com<mailto:julien.grall.oss@gmail.com>> wrote:
>>
>> On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com<mailto:andrew.cooper3@citrix.com>> wrote:
>>
>> On 14/05/2020 18:38, Julien Grall wrote:
>> Hi,
>>
>> On 14/05/2020 17:18, Bertrand Marquis wrote:
>>
>>
>> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org<mailto:julien@xen.org>> wrote:
>>
>>
>>
>> On 14/05/2020 15:28, Bertrand Marquis wrote:
>> Hi,
>>
>> Hi,
>>
>> When executing linux on arm64 with KPTI activated (in Dom0 or in a
>> DomU), I have a lot of walk page table errors like this:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>> 0xffffff837ebe0cd0
>> After implementing a call trace, I found that the problem was
>> coming from the update_runstate_area when linux has KPTI activated.
>> I have the following call trace:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>> 0xffffff837ebe0cd0
>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
>> Discussing this subject with Stefano, he pointed me to a discussion
>> started a year ago on this subject here:
>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
>>
>> And a patch was submitted:
>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
>>
>> I rebased this patch on current master and it solves the
>> problem I have seen.
>> It sounds to me like a good solution to introduce a
>> VCPUOP_register_runstate_phys_memory_area to not depend on the area
>> actually being mapped in the guest when a context switch is being
>> done (which is actually the problem happening when a context switch
>> is triggered while a guest is running in EL0).
>> Is there any reason why this was not merged in the end?
>>
>> I just skimmed through the thread to remind myself of the state.
>> AFAICT, this is blocked on the contributor clarifying the intended
>> interaction and providing a new version.
>>
>> What do you mean here by intended interaction? How should the new
>> hypercall be used by the guest OS?
>>
>>  From what I remember, Jan was seeking clarification on whether the two
>> hypercalls (existing and new) can be called together by the same OS
>> (and make sense).
>>
>> There was also the question of the handover between two pieces of
>> software. For instance, what if the firmware is using the existing
>> interface but the OS the new one? Similar question about Kexecing a
>> different kernel.
>>
>> This part is mostly documentation so we can discuss the approach
>> and review the implementation.
>>
>>
>>
>> I am still in favor of the new hypercall (and still in my todo list)
>> but I haven't yet found time to revive the series.
>>
>> Would you be willing to take over the series? I would be happy to
>> bring you up to speed and provide review.
>>
>> Sure I can take it over.
>>
>> I ported it to the master version of Xen and I tested it on a board.
>> I still need to do a deep review of the code myself but I have an
>> understanding of the problem and of the idea behind the fix.
>>
>> Any help to get up to speed would be more than welcome :-)
>> I would recommend to go through the latest version (v3) and the
>> previous (v2). I am also suggesting v2 because I think the split was
>> easier to review/understand.
>>
>> The x86 code is probably what is going to give you the most trouble as
>> there are two ABIs to support (compat and non-compat). If you don't
>> have an x86 setup, I should be able to test it/help write it.
>>
>> Feel free to ask any questions and I will try my best to remember the
>> discussion from last year :).
>>
>> At risk of being shouted down again, a new hypercall isn't necessarily
>> necessary, and there are probably better ways of fixing it.
>>
>> The underlying ABI problem is that the area is registered by virtual
>> address.  The only correct way this should have been done is to register
>> by guest physical address, so Xen's updating of the data doesn't
>> interact with the guest pagetable settings/restrictions.  x86 suffers
>> the same kind of problems as ARM, except we silently squash the fallout.
>>
>> The logic in Xen is horrible, and I would really rather it was deleted
>> completely, rather than to be kept for compatibility.
>>
>> The runstate area is always fixed kernel memory and doesn't move.  I
>> believe it is already restricted from crossing a page boundary, and we
>> can calculate the va=>pa translation when the hypercall is made.
>>
>> Yes - this is technically an ABI change, but nothing is going to break
>> (AFAICT) and the cleanup win is large enough to make this a *very*
>> attractive option.
>>
>> I suggested this approach two years ago [1] but you were the one
>> saying that buffer could cross page-boundary on older Linux [2]:
>>
>> "I'd love to do this, but we can't.  Older Linux used to have a virtual
>> buffer spanning a page boundary.  Changing the behaviour under that will
>> cause older setups to explode."
> 
> Sorry, this was a long time ago, and the details have faded. IIRC there was
> even a proposal (or patch set) that took that into account and allowed
> buffers to span across a page boundary by taking a reference to two
> different pages in that case.

I am not aware of a patch set. Juergen suggested a per-domain mapping 
but there were no details on how this could be done (my e-mail was left 
unanswered [1]).

If we were using vmap() then we would need up to 1MB per domain 
(assuming 128 vCPUs). This sounds like quite a bit, and I think we 
need to agree on whether it would be an acceptable solution (this was 
also left unanswered [1]).
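The 1MB figure checks out under the assumption that each vCPU's runstate buffer may straddle a page boundary and hence needs two 4KB pages mapped per vCPU:

```c
#include <stdint.h>

/* Back-of-envelope sizing of a per-domain vmap() for runstate areas,
 * assuming the worst case of each vCPU's buffer crossing a page
 * boundary (two 4KB pages per vCPU). Illustrative arithmetic only. */
static uint64_t runstate_vmap_bytes(unsigned int vcpus)
{
    const uint64_t pages_per_vcpu = 2; /* boundary-crossing worst case */
    const uint64_t page_size = 4096;

    return (uint64_t)vcpus * pages_per_vcpu * page_size;
}
```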

> 
> Another option would be to just return -EINVAL or -EOPNOTSUPP in that
> case and get on with it. Runstate info shouldn't be mandatory for
> guests to function properly; I would say it's just extra info that's
> provided in good faith by the hypervisor in order to help the guest
> make better scheduling decisions.

Linux will panic if the VCPUOP_register_runstate_memory_area hypercall 
returns an error (see xen_setup_runstate_info()).

> 
>> So can you explain your change of heart here?
>>
>>
>> I would prefer to fix it like this, (perhaps adding a new hypercall
>> which explicitly takes a guest physical address), than to keep any of
>> this mess around forever more to cope with legacy guests.
>>
>> What does "legacy guests" mean? Is it PV 32-bit or does it also include some HVM?
>>
>> Reading all this and digging into the code, the meaningful implementation would definitely be to validate and translate the address during the hypercall handling and then to just reuse this address along the way.
>> Whether or not the guest is passing a virtual address (versus an intermediate physical one), and creating a new hypercall for this, might be a different question that we could handle separately.
>> Does anyone see something wrong with such an approach?
>>
>> Answering myself:
>> There might be a corner case where the physical area can actually be removed from the guest (i.e. a guest using some memory coming from a temporarily mapped area).
>> Would there be a way to check that during the hypercall?
> 
> You have to take a reference to the page in order to prevent it from
> being freed under your feet. That way, if the guest decides to balloon
> out the page, the reference you hold keeps it from actually being
> freed.
> 
> Roger.
> 

Cheers,

[1] <fb92072f-2709-fa5a-0284-08a66c401049@arm.com>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 15 09:08:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:08:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZWKF-0005IH-Vu; Fri, 15 May 2020 09:08:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7uuJ=65=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZWKE-0005IC-Oz
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:08:14 +0000
X-Inumbo-ID: 99bd3dd0-968b-11ea-a533-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99bd3dd0-968b-11ea-a533-12813bfff9fa;
 Fri, 15 May 2020 09:08:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6927EB195;
 Fri, 15 May 2020 09:08:13 +0000 (UTC)
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
 <20200515083204.GM54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b672f753-cffd-def9-35bb-0d1314b682ba@suse.com>
Date: Fri, 15 May 2020 11:08:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515083204.GM54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 10:32, Roger Pau Monné wrote:
> On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
>> While from just a single Skylake system it is already clear that we
>> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
>> documented to be used for display purposes only anyway), logging this
>> information may still give us some reference in case of problems as well
>> as for future work. Additionally on the AMD side it is unclear whether
>> the deviation between reported and measured frequencies is because of us
>> not doing well, or because of nominal and actual frequencies being quite
>> far apart.
>>
>> The chosen variable naming in amd_log_freq() has pointed out a naming
>> problem in rdmsr_safe(), which is being taken care of at the same time.
>> Symmetrically wrmsr_safe(), being an inline function, also gets an
>> unnecessary underscore dropped from one of its local variables.
>>
>> [1] With a core crystal clock of 24MHz and a ratio of 216/2, the
>>     reported frequency nevertheless is 2600MHz, rather than the to be
>>     expected (and calibrated by both us and Linux) 2592MHz.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, but please clarify whether this is with or without the
two suggested changes (breaking out intel_log_freq() and introducing
local variables for (uint8_t)(msrval >> NN)), or whether you
mean to leave it to me whether to make them. And if I'm to make the
change, whether you'd trust me to not screw up things, i.e. whether
I can keep your R-b in that case.
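For reference, the local-variable suggestion amounts to something like the following (the field positions here are invented for the example; the real MSR layout differs):

```c
#include <stdint.h>

/* Illustrative only: extract byte-wide fields of a 64-bit MSR value
 * into named locals instead of repeating (uint8_t)(msrval >> NN)
 * casts inline at each use site. */
static void decode_example(uint64_t msrval,
                           uint8_t *ratio, uint8_t *divisor)
{
    *ratio   = (uint8_t)(msrval >> 0); /* hypothetical bits 7:0 */
    *divisor = (uint8_t)(msrval >> 8); /* hypothetical bits 15:8 */
}
```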

> I have one question below about P-state limits.
> 
>> ---
>> TBD: The node ID retrieval using extended leaf 1E implies it won't work
>>      on older hardware (pre-Fam15 I think). Besides the Node ID MSR,
>>      which doesn't get advertised on my Fam10 box (and it's zero on all
>>      processors despite there being two nodes as per the PCI device
>>      map), and which isn't even documented for Fam11, Fam12, and Fam14,
>>      I didn't find any other means to retrieve the node ID a CPU is
>>      associated with - the NodeId register in PCI config space depends
>>      on one already knowing the node ID for doing the access, as the
>>      device to be used is a function of the node ID.
> 
> Is there a real chance of the boost states being different between
> nodes?

Probably not, but doing things properly would still have been
nice.

> Won't Xen explode elsewhere due to possibly diverging features
> between nodes?

For many features - yes, but for boost states being different I
don't think it would.

>> --- a/xen/arch/x86/cpu/amd.c
>> +++ b/xen/arch/x86/cpu/amd.c
>> @@ -532,6 +532,102 @@ static void amd_get_topology(struct cpui
>>                                                            : c->cpu_core_id);
>>  }
>>  
>> +void amd_log_freq(const struct cpuinfo_x86 *c)
>> +{
>> +	unsigned int idx = 0, h;
>> +	uint64_t hi, lo, val;
>> +
>> +	if (c->x86 < 0x10 || c->x86 > 0x19 ||
>> +	    (c != &boot_cpu_data &&
>> +	     (!opt_cpu_info || (c->apicid & (c->x86_num_siblings - 1)))))
>> +		return;
>> +
>> +	if (c->x86 < 0x17) {
>> +		unsigned int node = 0;
>> +		uint64_t nbcfg;
>> +
>> +		/*
>> +		 * Make an attempt at determining the node ID, but assume
>> +		 * symmetric setup (using node 0) if this fails.
>> +		 */
>> +		if (c->extended_cpuid_level >= 0x8000001e &&
>> +		    cpu_has(c, X86_FEATURE_TOPOEXT)) {
>> +			node = cpuid_ecx(0x8000001e) & 0xff;
>> +			if (node > 7)
>> +				node = 0;
>> +		} else if (cpu_has(c, X86_FEATURE_NODEID_MSR)) {
>> +			rdmsrl(0xC001100C, val);
>> +			node = val & 7;
>> +		}
>> +
>> +		/*
>> +		 * Enable (and use) Extended Config Space accesses, as we
>> +		 * can't be certain that MCFG is available here during boot.
>> +		 */
>> +		rdmsrl(MSR_AMD64_NB_CFG, nbcfg);
>> +		wrmsrl(MSR_AMD64_NB_CFG,
>> +		       nbcfg | (1ULL << AMD64_NB_CFG_CF8_EXT_ENABLE_BIT));
>> +#define PCI_ECS_ADDRESS(sbdf, reg) \
>> +    (0x80000000 | ((sbdf).bdf << 8) | ((reg) & 0xfc) | (((reg) & 0xf00) << 16))
>> +
>> +		for ( ; ; ) {
>> +			pci_sbdf_t sbdf = PCI_SBDF(0, 0, 0x18 | node, 4);
>> +
>> +			switch (pci_conf_read32(sbdf, PCI_VENDOR_ID)) {
>> +			case 0x00000000:
>> +			case 0xffffffff:
>> +				/* No device at this SBDF. */
>> +				if (!node)
>> +					break;
>> +				node = 0;
>> +				continue;
>> +
>> +			default:
>> +				/*
>> +				 * Core Performance Boost Control, family
>> +				 * dependent up to 3 bits starting at bit 2.
> 
> 
> I would add:
> 
> "Note that boost states operate at a frequency above the base one, and
> thus need to be accounted for in order to correctly fetch the nominal
> frequency of the processor."

Done.

>> +				 */
>> +				switch (c->x86) {
>> +				case 0x10: idx = 1; break;
>> +				case 0x12: idx = 7; break;
>> +				case 0x14: idx = 7; break;
>> +				case 0x15: idx = 7; break;
>> +				case 0x16: idx = 7; break;
>> +				}
>> +				idx &= pci_conf_read(PCI_ECS_ADDRESS(sbdf,
>> +				                                     0x15c),
>> +				                     0, 4) >> 2;
>> +				break;
>> +			}
>> +			break;
>> +		}
>> +
>> +#undef PCI_ECS_ADDRESS
>> +		wrmsrl(MSR_AMD64_NB_CFG, nbcfg);
>> +	}
>> +
>> +	lo = 0; /* gcc may not recognize the loop having at least 5 iterations */
>> +	for (h = c->x86 == 0x10 ? 5 : 8; h--; )
>> +		if (!rdmsr_safe(0xC0010064 + h, lo) && (lo >> 63))
>> +			break;
>> +	if (!(lo >> 63))
>> +		return;
> 
> Should you also take the P-state limit here into account (from MSR
> 0xC0010061)?
> 
> I assume the firmware could set a minimum P-state higher than the
> possible ones present in the list of P-states, effectively preventing
> switching to lowest-performance P-states?

We're not after permitted P-states here - these would matter only if
we were meaning to alter the current P-state by direct MSR accesses.
Here we're only after logging capabilities (and the P-state limits
can, aiui, in principle also change at runtime).
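For context, the logging loop being discussed simply walks the P-state definition registers downwards until it finds one with the valid bit (bit 63) set; in isolation it can be sketched as (simulated values rather than real rdmsr_safe() accesses):

```c
#include <stdint.h>

/* Sketch of the P-state scan in amd_log_freq(): walk the (here
 * simulated) P-state definition registers from the highest index
 * down and stop at the first one whose valid bit (bit 63) is set.
 * Returns that index, or -1 if no P-state is marked valid. */
static int highest_valid_pstate(const uint64_t *pstate_msrs, unsigned int n)
{
    for (unsigned int h = n; h-- > 0; )
        if (pstate_msrs[h] >> 63)
            return (int)h;
    return -1;
}
```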

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 15 09:10:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZWMP-00061k-D1; Fri, 15 May 2020 09:10:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZWMO-00061e-55
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:10:28 +0000
X-Inumbo-ID: e9ec7a64-968b-11ea-b07b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9ec7a64-968b-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 09:10:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589533826;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=xdjmvN1D6FBQUS/kuHUoMyL1ugNzbdErb0hAIANClc4=;
 b=Cx3lg4gyA26CeQuUUt5I0JtgAekLKs8vOCRI2HsQphIPtDucTkXVW1ct
 EJelueAKu5EtrGR/ARikYBR+ttiUrwJ7ZqWe33sle4pihDZcfrun/bnAp
 v3NCT5dr/2b+sjAYiL7tXka7b6hvSK09e9naVtUyDlzq6ycGLiStpn790 U=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: +6oWXEnq6SYeWqe2XD4AQsFlUL2k3pmYzkhab/zL3v8THIosWCBVH2q84Bkr5VsT3w2WTTZ/53
 8Hx06LQiAs+DTAmsUHISOCPInMr1aR8Z4A/Rg/dA5143syntf+q1tlz5cP9OHB0pd1J4lzduk/
 t9yZ5aLWfjokfhHpgE/ueZf9tBxcOH+ota9arepS6hQMTyccR+qeGhIvyatvUsdHN7H3ILrSkw
 jqcvjQ458cF2U4/fyV0IA31+IEV6UkwqhFGxHNovKeYynyqQIEnDY3WuHh6JfFQQ3NnqnSOUKU
 XyE=
X-SBRS: 2.7
X-MesageID: 17873503
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,394,1583211600"; d="scan'208";a="17873503"
Date: Fri, 15 May 2020 11:10:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Error during update_runstate_area with KPTI activated
Message-ID: <20200515091018.GO54375@Air-de-Roger>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
> Hi,
> 
> On 15/05/2020 09:38, Roger Pau Monné wrote:
> > On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
> > > 
> > > 
> > > On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com<mailto:julien.grall.oss@gmail.com>> wrote:
> > > 
> > > On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com<mailto:andrew.cooper3@citrix.com>> wrote:
> > > 
> > > On 14/05/2020 18:38, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 14/05/2020 17:18, Bertrand Marquis wrote:
> > > 
> > > 
> > > On 14 May 2020, at 16:57, Julien Grall <julien@xen.org<mailto:julien@xen.org>> wrote:
> > > 
> > > 
> > > 
> > > On 14/05/2020 15:28, Bertrand Marquis wrote:
> > > Hi,
> > > 
> > > Hi,
> > > 
> > > When executing linux on arm64 with KPTI activated (in Dom0 or in a
> > > DomU), I have a lot of walk page table errors like this:
> > > (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> > > 0xffffff837ebe0cd0
> > > After implementing a call trace, I found that the problem was
> > > coming from the update_runstate_area when linux has KPTI activated.
> > > I have the following call trace:
> > > (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> > > 0xffffff837ebe0cd0
> > > (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
> > > (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
> > > (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
> > > (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
> > > (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
> > > (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
> > > (XEN)    [<0000000000269524>] context_switch+0x58/0x70
> > > (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
> > > (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
> > > (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
> > > (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
> > > Discussing this subject with Stefano, he pointed me to a discussion
> > > started a year ago on this subject here:
> > > https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
> > > 
> > > And a patch was submitted:
> > > https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
> > > 
> > > I rebased this patch on current master and it is solving the
> > > problem I have seen.
> > > It sounds to me like a good solution to introduce a
> > > VCPUOP_register_runstate_phys_memory_area to not depend on the area
> > > actually being mapped in the guest when a context switch is being
> > > done (which is actually the problem happening when a context switch
> > > is triggered while a guest is running in EL0).
> > > Is there any reason why this was not merged at the end ?
> > > 
> > > I just skimmed through the thread to remind myself the state.
> > > AFAICT, this is blocked on the contributor to clarify the intended
> > > interaction and provide a new version.
> > > 
> > > What do you mean here by intended interaction ? How the new hyper
> > > call should be used by the guest OS ?
> > > 
> > >  From what I remember, Jan was seeking clarification on whether the two
> > > hypercalls (existing and new) can be called together by the same OS
> > > (and make sense).
> > > 
> > > There was also the question of the handover between two pieces of
> > > software. For instance, what if the firmware is using the existing
> > > interface but the OS the new one? Similar question about Kexecing a
> > > different kernel.
> > > 
> > > This part is mostly documentation so we can discuss about the approach
> > > and review the implementation.
> > > 
> > > 
> > > 
> > > I am still in favor of the new hypercall (and still in my todo list)
> > > but I haven't yet found time to revive the series.
> > > 
> > > Would you be willing to take over the series? I would be happy to
> > > bring you up to speed and provide review.
> > > 
> > > Sure I can take it over.
> > > 
> > > I ported it to master version of xen and I tested it on a board.
> > > I still need to do a deep review of the code myself but I have an
> > > understanding of the problem and what is the idea.
> > > 
> > > Any help to get up to speed would be more than welcome :-)
> > > I would recommend to go through the latest version (v3) and the
> > > previous (v2). I am also suggesting v2 because I think the split was
> > > easier to review/understand.
> > > 
> > > The x86 code is probably what is going to give you the most trouble as
> > > there are two ABIs to support (compat and non-compat). If you don't
> > > have an x86 setup, I should be able to test it/help write it.
> > > 
> > > Feel free to ask any questions and I will try my best to remember the
> > > discussion from last year :).
> > > 
> > > At the risk of being shouted down again, a new hypercall isn't strictly
> > > necessary, and there are probably better ways of fixing it.
> > > 
> > > The underlying ABI problem is that the area is registered by virtual
> > > address.  The only correct way this should have been done is to register
> > > by guest physical address, so Xen's updating of the data doesn't
> > > interact with the guest pagetable settings/restrictions.  x86 suffers
> > > the same kind of problems as ARM, except we silently squash the fallout.
> > > 
> > > The logic in Xen is horrible, and I would really rather it was deleted
> > > completely, rather than kept around for compatibility.
> > > 
> > > The runstate area is always fixed kernel memory and doesn't move.  I
> > > believe it is already restricted from crossing a page boundary, and we
> > > can calculate the va=>pa translation when the hypercall is made.
> > > 
> > > Yes - this is technically an ABI change, but nothing is going to break
> > > (AFAICT) and the cleanup win is large enough to make this a *very*
> > > attractive option.
> > > 
> > > I suggested this approach two years ago [1] but you were the one
> > > saying that buffer could cross page-boundary on older Linux [2]:
> > > 
> > > "I'd love to do this, but we can't.  Older Linux used to have a virtual
> > > buffer spanning a page boundary.  Changing the behaviour under that will
> > > cause older setups to explode."
> > 
> > Sorry, this was a long time ago, and the details have faded. IIRC there was
> > even a proposal (or patch set) that took that into account and allowed
> > buffers to span across a page boundary by taking a reference to two
> > different pages in that case.
> 
> I am not aware of a patch set. Juergen suggested a per-domain mapping but
> there were no details on how this could be done (my e-mail was left unanswered
> [1]).
> 
> If we were using vmap() then we would need up to 1MB per domain (assuming
> 128 vCPUs). This sounds like quite a bit, and I think we need to agree whether it
> would be an acceptable solution (this was also left unanswered [1]).

Could we map/unmap the runstate area on domain switch into a per-CPU
linear address space area? There's no reason to have all the runstate
areas mapped all the time; you only care about the one belonging to
the running vCPU.

Maybe the overhead of that mapping and unmapping would be
too high? But seeing that we are aiming at a secret-free Xen we would
have to eventually go that route anyway.

> > 
> > Another option would be to just return -EINVAL or -EOPNOTSUPP in that
> > case and just get on with it. runstate info shouldn't be mandatory for
> > guests to function properly, I would say it's just extra info that's
> > provided in good faith from the hypervisor in order to help the guest
> > make better scheduling decisions.
> 
> Linux will panic if the VCPUOP_register_runstate_memory_area returns an
> error (see xen_setup_runstate_info()).

Oh, that's dull. That hypercall was never noted to be optional, so it
failing would mean Linux has somehow screwed up the call, which is not
expected.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 15 09:18:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:18:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZWTi-0006KI-6h; Fri, 15 May 2020 09:18:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZWTg-0006KD-VN
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:18:00 +0000
X-Inumbo-ID: f8a68aa8-968c-11ea-a536-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8a68aa8-968c-11ea-a536-12813bfff9fa;
 Fri, 15 May 2020 09:18:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589534280;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=QGQk0kPjoJHxXoRzVfmJN/xTWWRnGgjce91XlkHH8wQ=;
 b=ZYA9rCdwhuXi2SvnCk/2oQKIX+96LgQW1EZ1609gWuV2dW8rmfwjGgUK
 eIt3jW9YrLppiGVyrsLXMXcqFiO6mIkdrBdxrrzcvCxopMzoz23gV/m26
 x3eNq0DPSmV/eZWP0JKR88JEl83T4G/Xd5oC1dysrePOxPVo8T+brhi8C U=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: c2l1EbN9LXFJ1tUwxQXfHTZt+iF0Oszqvi8gGc0SSyCOlYTZhR0XKY+ceqO8hDmGukQ1CoDI6T
 lg79OOfZ4SkFaZ+yPH7p+4yJx5xYxtmmLCJlyuSObNjOJTqtz+TKYel6he0apQnnYWj7t6CAl3
 0D7VHunt2KnQPLpONc/kj0GaqYiPXA4wJCOTZOovLfkD78tq+rJkwUBA9Bk5ZUrcvd4497O3Mj
 n2lzCu4drNpZGN+AF9DYP3S1HA7zP4EAwKNuzkH3GSmo6R3Q5H1koYjGlG9V7qrlHHvp74bGqC
 +qY=
X-SBRS: 2.7
X-MesageID: 17634068
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,394,1583211600"; d="scan'208";a="17634068"
Date: Fri, 15 May 2020 11:17:50 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: retrieve and log CPU frequency information
Message-ID: <20200515091750.GP54375@Air-de-Roger>
References: <1fd091d2-30e2-0691-0485-3f5142bd457f@suse.com>
 <20200515083204.GM54375@Air-de-Roger>
 <b672f753-cffd-def9-35bb-0d1314b682ba@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b672f753-cffd-def9-35bb-0d1314b682ba@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 11:08:04AM +0200, Jan Beulich wrote:
> On 15.05.2020 10:32, Roger Pau Monné wrote:
> > On Wed, Apr 15, 2020 at 01:55:24PM +0200, Jan Beulich wrote:
> >> While from just a single Skylake system it is already clear that we
> >> can't base any of our logic on CPUID leaf 15 [1] (leaf 16 is
> >> documented to be used for display purposes only anyway), logging this
> >> information may still give us some reference in case of problems as well
> >> as for future work. Additionally on the AMD side it is unclear whether
> >> the deviation between reported and measured frequencies is because of us
> >> not doing well, or because of nominal and actual frequencies being quite
> >> far apart.
> >>
> >> The chosen variable naming in amd_log_freq() has pointed out a naming
> >> problem in rdmsr_safe(), which is being taken care of at the same time.
> >> Symmetrically wrmsr_safe(), being an inline function, also gets an
> >> unnecessary underscore dropped from one of its local variables.
> >>
> >> [1] With a core crystal clock of 24MHz and a ratio of 216/2, the
> >>     reported frequency nevertheless is 2600MHz, rather than the to be
> >>     expected (and calibrated by both us and Linux) 2592MHz.
> >>
> >> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks, but please clarify whether this is with or without the
> two suggested changes (breaking out intel_log_freq() and introducing
> local variables for (uint8_t)(msrval >> NN)), or whether you
> mean to leave it to me whether to make them. And if I'm to make the
> change, whether you'd trust me to not screw up things, i.e. whether
> I can keep your R-b in that case.

None of those are mandatory in order to keep the RB, just suggestions.

> >> +#undef PCI_ECS_ADDRESS
> >> +		wrmsrl(MSR_AMD64_NB_CFG, nbcfg);
> >> +	}
> >> +
> >> +	lo = 0; /* gcc may not recognize the loop having at least 5 iterations */
> >> +	for (h = c->x86 == 0x10 ? 5 : 8; h--; )
> >> +		if (!rdmsr_safe(0xC0010064 + h, lo) && (lo >> 63))
> >> +			break;
> >> +	if (!(lo >> 63))
> >> +		return;
> > 
> > Should you also take the P-state limit here into account (from MSR
> > 0xC0010061)?
> > 
> > I assume the firmware could set a minimum P-state higher than the
> > possible ones present in the list of P-states, effectively preventing
> > switching to lowest-performance P-states?
> 
> We're not after permitted P-states here - these would matter only if
> we were meaning to alter the current P-state by direct MSR accesses.
> Here we're only after logging capabilities (and the P-state limits
> can, aiui, in principle also change at runtime).

OK, I have to admit I'm not aware of how this is supposed to work.

One scenario I had in mind was that the firmware would set the P-state
control to a value higher than the minimum one in order to prevent the
OS from switching to a lower P-state. In that case, reporting the
frequency of the minimum P-state would be somewhat incorrect, since
switching to it wouldn't be possible anyway. But if this can change at
runtime, then reporting the lowest possible one makes sense.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 15 09:21:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZWXK-00077K-Nu; Fri, 15 May 2020 09:21:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lh6L=65=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jZWXJ-00077E-H3
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:21:45 +0000
X-Inumbo-ID: 7e0a338e-968d-11ea-ae69-bc764e2007e4
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::615])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e0a338e-968d-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 09:21:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UAYazsqCgqjYgzglOsgoORYCWm410rnQUlE9IsY0EWU=;
 b=GOxFVLlkWhRZw6jy3Ksow1re6fS+Ojpwonvi1Pkom7FAPhyUpmkrJfveqzluTB6Sa2Wcys7U2irOzg+bP7xeWXvirL7E8wEAEX56plN86tt06eQRn+XomnA0b8QOAiqvSQzoE3SATEJt55EIYQ6hHsw9vVLizxwmCrRb8jpizEs=
Received: from AM5PR0201CA0017.eurprd02.prod.outlook.com
 (2603:10a6:203:3d::27) by VI1PR08MB3231.eurprd08.prod.outlook.com
 (2603:10a6:803:4a::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.25; Fri, 15 May
 2020 09:21:42 +0000
Received: from VE1EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:3d:cafe::40) by AM5PR0201CA0017.outlook.office365.com
 (2603:10a6:203:3d::27) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20 via Frontend
 Transport; Fri, 15 May 2020 09:21:41 +0000
Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT005.mail.protection.outlook.com (10.152.18.172) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3000.19 via Frontend Transport; Fri, 15 May 2020 09:21:41 +0000
Received: ("Tessian outbound fb9de21a7e90:v54");
 Fri, 15 May 2020 09:21:40 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 8ef66cda67505fe0
X-CR-MTA-TID: 64aa7808
Received: from c3adb754710d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 36C1C1F1-8F9C-4138-BEEE-8593003035EE.1; 
 Fri, 15 May 2020 09:21:35 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c3adb754710d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 15 May 2020 09:21:35 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WaUqRrImr0go73SnegUW61dcyZICGqAksoBsZuStvuXJRmotE3aLcfPhNGWuXA84F49mea3VisXMpBcymO1+4fXa8ROmUX+R/uy1kpoTpXCGMIN6b1Pg+ifLeZtKH18/ag8bk7BUjUVIChL2/14J34+09gxQ2JIu1nWoAIT5EU27XhVFpeAvMqGdhmrseNRp5LK9Hk/ja5TNQ2hzH/0ZJ2AeH1+rBaHNMtobR6JwD7cVQjbWmtuSApWkKQnNJO5MvS3dX0/21fO91gzKDHCSW8jbzpZ9QIIa3yEnKzpZkphy6foPgOlvyqZfVzzXEkXtw0hm9qFj6dgNDGb+EZU8rA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UAYazsqCgqjYgzglOsgoORYCWm410rnQUlE9IsY0EWU=;
 b=ae5w7X8bhArR2iQP5Orf7v+Z1fBj89ednpNHVaPUAALcqXJReZQLRCpjW8GTd+81IYmU2GR1z7+ErK8DuGkshoQxfAl59DaYhj7RQ0Fi+5WpdEx2/HDNoXC1SI+rsleO0nc8rr2PLa3QAeQtmGiwMxSahfAZ9EnHDFA/J5Y6S5nDXvtel378acuUStWD2KnKqHD1kwiDy0oG+jANNcpLjUtTPGNBUV1KITJDwTy+NWwJH1FZEPZ6gF7HS5kRn4E5DCm7e+DMngL53pjJdPiyqylZDu/hydTWkiLIx7QqD4WT8FXLB/fvHp4fdJ11yJ8xpGA6xsaWfeLIOlS1vdahYQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UAYazsqCgqjYgzglOsgoORYCWm410rnQUlE9IsY0EWU=;
 b=GOxFVLlkWhRZw6jy3Ksow1re6fS+Ojpwonvi1Pkom7FAPhyUpmkrJfveqzluTB6Sa2Wcys7U2irOzg+bP7xeWXvirL7E8wEAEX56plN86tt06eQRn+XomnA0b8QOAiqvSQzoE3SATEJt55EIYQ6hHsw9vVLizxwmCrRb8jpizEs=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3883.eurprd08.prod.outlook.com (2603:10a6:10:76::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.26; Fri, 15 May
 2020 09:21:34 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3000.022; Fri, 15 May 2020
 09:21:34 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: Error during update_runstate_area with KPTI activated
Thread-Topic: Error during update_runstate_area with KPTI activated
Thread-Index: AQHWKfvm5I6PbMAmoUakTwbdgqMBQqinvLCAgAAF7oCAABaDgIAACWGAgAARCgCAANBZgIAAEJgAgAAERACAAASVAIAAAySA
Date: Fri, 15 May 2020 09:21:34 +0000
Message-ID: <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
In-Reply-To: <20200515091018.GO54375@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 85b02ef3-185c-44bb-d05e-08d7f8b160d0
x-ms-traffictypediagnostic: DB7PR08MB3883:|VI1PR08MB3231:
X-Microsoft-Antispam-PRVS: <VI1PR08MB32317D201CE9BE7B77CC3C659DBD0@VI1PR08MB3231.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04041A2886
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: LC3aodrNodt7ywYH/0R+pbaip7hrm5L3ZHMw9hxX01rApTBtov0C5is/kMfwPJROcl6lI2D2ob0aPUaf/ZNdwJdvmuCnGcIaVDVG2FMR6D9krVXEqjQy3lp2V857V5+LHp9xilL9erkMzezMewBaKzgHnnGh/4mw166aGlwzAiSSKvD47CBg5TxzFhugbW3erwOUKDIBfN1p9qkgvhzNWAKg9Ng3tvV+mh8+JhKy22zfSQldIovBbgd+qvCsFdHnjB+VzsqbbBUAd320jLsoAQ+fD9NWE8Bm803sGSy38x0UL6hfGkAlVMdOy+RcT4ZO+UZ7Wn1KYvjhqC7FCIPUME6W4UnW3bKbW00WIsI5KQtWlpnj9iN3W850dib/ABsB5KTsujL3P2k4YjyfoXjrp+gQabYLWEsq3CtdrL4eCXg9xWgR9t1spPjaVR1ajnC0fD74clj3A1NtAXhwsUVyTNvMMi/d81dB5mVg5UitVMH7u328JNHe/Ezdt/LV4LFJqAgYuFVwwJQqvUkyrGQSOw==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(396003)(346002)(136003)(376002)(366004)(53546011)(6506007)(186003)(21615005)(316002)(5660300002)(26005)(91956017)(76116006)(15650500001)(66446008)(64756008)(2906002)(66556008)(2616005)(66946007)(66476007)(8676002)(8936002)(6486002)(166002)(36756003)(71200400001)(966005)(86362001)(33656002)(4326008)(6916009)(6512007)(54906003)(478600001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: K2llP3rneszduzW2UQRaEp4qAULwXueh8w2/tbeu99FL5392KZ6ImGpIPuue981zjO2TCPzvzP5lyyqVwanWYOrfPIK1KJtk07pywGXufIZTEUkRP8BO08cuTStA/MPkN/NLAleiCR/LiSuwQGkRrFSgrncMP66PdsAd0ypN82RIekJtD1LMxKXQbOxu3z7Rg/fxmLPHGjnsl4P+3XtpgrsJ7g3kzANjfnU5tqyZn920wILdEt0tGa2Zyfb6RaP59yRCVozisA5Co/e1N1TRUz6FNQxkWq6MmUAByf8PC4sEVQ8gkOBZNz+Z/A7wpH69DIqf+NFwhPPcjmVZwg5f66FzaKiOuc+C3hZPtpAQsuXyVZmpeuzO/AXydq1W6NFoA8/xWfWCY9RHa1Sa/Zk3eOCqREc8OZ0AA3wFHIRxjDZqbmyjEhySKZhraZWFcM4iUn/M69BRDcNjKunXydUbUwSbZJk2g5iRSI2gr6iLeAL163j3rB6uR6iIrz0FdYif
x-ms-exchange-transport-forked: True
Content-Type: multipart/alternative;
 boundary="_000_93D7EBEFE3E04DBBA5BC7D326B7AE0DBarmcom_"
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3883
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT005.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(376002)(346002)(396003)(136003)(46966005)(47076004)(82740400003)(81166007)(33656002)(6512007)(4326008)(15650500001)(166002)(6486002)(82310400002)(6862004)(2906002)(86362001)(45080400002)(356005)(30864003)(8936002)(966005)(8676002)(5660300002)(478600001)(6506007)(2616005)(54906003)(53546011)(26005)(21615005)(36906005)(70586007)(336012)(33964004)(316002)(36756003)(70206006)(186003);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 51e737f4-9747-42d2-0ac6-08d7f8b15c8f
X-Forefront-PRVS: 04041A2886
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 3K4PKdrgY+ddSq9OALrRW00eEd1InRTFnAzFgT8QjHyfGws43vMlI7+FTS4tEkZnCQ/5kyafb7XXHKLxKlsJ5yY3maczCteoPaJy7ZhGWLVFwzZQfv+bXSh32At/3ty+NjF/VGDsclf7Y94DM+CNUCdDRJiNj/tJ70PlJrBc9vM4N1e2M4pvTpNof57DxXaCkgDL14tK77kqzfe7J8ocjW5Gu62ckttM+1yPYak9xn426bvJw+OA9JVD3eHG3uqNbtW7dd7w1LcH46qyg1RUX9H3XuMvdbv/LMn0J8SCujrAC3QXComPCLSB2xA8dmkGZS+z39aa8LcmTlBXHT4FUrReOKD1bOTAheAYt4Di5To2xmeyNbbtN72m6LS1jJUlrw6GKA2C2wCgQM/yzv8rWslTJFHCzDN2gY/7igk/KIaY7TV0SLG2jvlkE7eyfsd4d+gNgc8A71oLTlDiN6BD1bhag1oEbV89d7LQ3DrByhPtBN+qP7MIa7S9vsVnVpLjFSYH5jvPbC2pYh6PMKn/4BPyBxajL8cQLIYwPYcImXbAGhN4OEgGeMSpQMYqPgPWegEsAiyZIe74xiNTeW/c1j42o10h9Otf/kJGJHb/eTw=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2020 09:21:41.2012 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 85b02ef3-185c-44bb-d05e-08d7f8b160d0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3231
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

cyBibG9ja2VkIG9uIHRoZSBjb250cmlidXRvciB0byBjbGFyaWZ5IHRoZSBpbnRlbmRlZA0KaW50
ZXJhY3Rpb24gYW5kIHByb3ZpZGUgYSBuZXcgdmVyc2lvbi4NCg0KV2hhdCBkbyB5b3UgbWVhbiBo
ZXJlIGJ5IGludGVuZGVkIGludGVyYWN0aW9uID8gSG93IHRoZSBuZXcgaHlwZXINCmNhbGwgc2hv
dWxkIGJlIHVzZWQgYnkgdGhlIGd1ZXN0IE9TID8NCg0KRnJvbSB3aGF0IEkgcmVtZW1iZXIsIEph
biB3YXMgc2Vla2luZyBjbGFyaWZpY2F0aW9uIG9uIHdoZXRoZXIgdGhlIHR3bw0KaHlwZXJjYWxs
cyAoZXhpc3RpbmcgYW5kIG5ldykgY2FuIGJlIGNhbGxlZCB0b2dldGhlciBieSB0aGUgc2FtZSBP
Uw0KKGFuZCBtYWtlIHNlbnNlKS4NCg0KVGhlcmUgd2FzIGFsc28gdGhlIHF1ZXN0aW9uIG9mIHRo
ZSBoYW5kb3ZlciBiZXR3ZWVuIHR3byBwaWVjZXMgb2YNCnNvdGZ3YXJlLiBGb3IgaW5zdGFuY2Us
IHdoYXQgaWYgdGhlIGZpcm13YXJlIGlzIHVzaW5nIHRoZSBleGlzdGluZw0KaW50ZXJmYWNlIGJ1
dCB0aGUgT1MgdGhlIG5ldyBvbmU/IFNpbWlsYXIgcXVlc3Rpb24gYWJvdXQgS2V4ZWNpbmcgYQ0K
ZGlmZmVyZW50IGtlcm5lbC4NCg0KVGhpcyBwYXJ0IGlzIG1vc3RseSBkb2N1bWVudGF0aW9uIHNv
IHdlIGNhbiBkaXNjdXNzIGFib3V0IHRoZSBhcHByb2FjaA0KYW5kIHJldmlldyB0aGUgaW1wbGVt
ZW50YXRpb24uDQoNCg0KDQpJIGFtIHN0aWxsIGluIGZhdm9yIG9mIHRoZSBuZXcgaHlwZXJjYWxs
IChhbmQgc3RpbGwgaW4gbXkgdG9kbyBsaXN0KQ0KYnV0IEkgaGF2ZW4ndCB5ZXQgZm91bmQgdGlt
ZSB0byByZXZpdmUgdGhlIHNlcmllcy4NCg0KV291bGQgeW91IGJlIHdpbGxpbmcgdG8gdGFrZSBv
dmVyIHRoZSBzZXJpZXM/IEkgd291bGQgYmUgaGFwcHkgdG8NCmJyaW5nIHlvdSB1cCB0byBzcGVl
ZCBhbmQgcHJvdmlkZSByZXZpZXcuDQoNClN1cmUgSSBjYW4gdGFrZSBpdCBvdmVyLg0KDQpJIHBv
cnRlZCBpdCB0byBtYXN0ZXIgdmVyc2lvbiBvZiB4ZW4gYW5kIEkgdGVzdGVkIGl0IG9uIGEgYm9h
cmQuDQpJIHN0aWxsIG5lZWQgdG8gZG8gYSBkZWVwIHJldmlldyBvZiB0aGUgY29kZSBteXNlbGYg
YnV0IEkgaGF2ZSBhbg0KdW5kZXJzdGFuZGluZyBvZiB0aGUgcHJvYmxlbSBhbmQgd2hhdCBpcyB0
aGUgaWRlYS4NCg0KQW55IGhlbHAgdG8gZ2V0IG9uIHNwZWVkIHdvdWxkIGJlIG1vcmUgdGhlbiB3
ZWxjb21lIDotKQ0KSSB3b3VsZCByZWNvbW1lbmQgdG8gZ28gdGhyb3VnaCB0aGUgbGF0ZXN0IHZl
cnNpb24gKHYzKSBhbmQgdGhlDQpwcmV2aW91cyAodjIpLiBJIGFtIGFsc28gc3VnZ2VzdGluZyB2
MiBiZWNhdXNlIEkgdGhpbmsgdGhlIHNwbGl0IHdhcw0KZWFzaWVyIHRvIHJldmlldy91bmRlcnN0
YW5kLg0KDQpUaGUgeDg2IGNvZGUgaXMgcHJvYmFibHkgd2hhdCBpcyBnb2luZyB0byBnaXZlIHlv
dSB0aGUgbW9zdCB0cm91YmxlIGFzDQp0aGVyZSBhcmUgdHdvIEFCSXMgdG8gc3VwcG9ydCAoY29t
cGF0IGFuZCBub24tY29tcGF0KS4gSWYgeW91IGRvbid0DQpoYXZlIGFuIHg4NiBzZXR1cCwgSSBz
aG91bGQgYmUgYWJsZSB0byB0ZXN0IGl0L2hlbHAgd3JpdGUgaXQuDQoNCkZlZWwgZnJlZSB0byBh
c2sgYW55IHF1ZXN0aW9ucyBhbmQgSSB3aWxsIHRyeSBteSBiZXN0IHRvIHJlbWVtYmVyIHRoZQ0K
ZGlzY3Vzc2lvbiBmcm9tIGxhc3QgeWVhciA6KS4NCg0KQXQgcmlzayBvZiBiZWluZyBzaG91dGVk
IGRvd24gYWdhaW4sIGEgbmV3IGh5cGVyY2FsbCBpc24ndCBuZWNlc3NhcmlseQ0KbmVjZXNzYXJ5
LCBhbmQgdGhlcmUgYXJlIHByb2JhYmx5IGJldHRlciB3YXlzIG9mIGZpeGluZyBpdC4NCg0KVGhl
IHVuZGVybHlpbmcgQUJJIHByb2JsZW0gaXMgdGhhdCB0aGUgYXJlYSBpcyByZWdpc3RlcmVkIGJ5
IHZpcnR1YWwNCmFkZHJlc3MuICBUaGUgb25seSBjb3JyZWN0IHdheSB0aGlzIHNob3VsZCBoYXZl
IGJlZW4gZG9uZSBpcyB0byByZWdpc3Rlcg0KYnkgZ3Vlc3QgcGh5c2ljYWwgYWRkcmVzcywgc28g
WGVuJ3MgdXBkYXRpbmcgb2YgdGhlIGRhdGEgZG9lc24ndA0KaW50ZXJhY3Qgd2l0aCB0aGUgZ3Vl
c3QgcGFnZXRhYmxlIHNldHRpbmdzL3Jlc3RyaWN0aW9ucy4gIHg4NiBzdWZmZXJzDQp0aGUgc2Ft
ZSBraW5kIG9mIHByb2JsZW1zIGFzIEFSTSwgZXhjZXB0IHdlIHNpbGVudGx5IHNxdWFzaCB0aGUg
ZmFsbG91dC4NCg0KVGhlIGxvZ2ljIGluIFhlbiBpcyBob3JyaWJsZSwgYW5kIEkgd291bGQgcmVh
bGx5IHJhdGhlciBpdCB3YXMgZGVsZXRlZA0KY29tcGxldGVseSwgcmF0aGVyIHRoYW4gdG8gYmUg
a2VwdCBmb3IgY29tcGF0aWJpbGl0eS4NCg0KVGhlIHJ1bnN0YXRlIGFyZWEgaXMgYWx3YXlzIGZp
eGVkIGtlcm5lbCBtZW1vcnkgYW5kIGRvZXNuJ3QgbW92ZS4gIEkNCmJlbGlldmUgaXQgaXMgYWxy
ZWFkeSByZXN0cmljdGVkIGZyb20gY3Jvc3NpbmcgYSBwYWdlIGJvdW5kYXJ5LCBhbmQgd2UNCmNh
biBjYWxjdWxhdGUgdGhlIHZhPT5wYSB0cmFuc2xhdGlvbiB3aGVuIHRoZSBoeXBlcmNhbGwgaXMg
bWFkZS4NCg0KWWVzIC0gdGhpcyBpcyBhIHRlY2huaWNhbGx5IEFCSSBjaGFuZ2UsIGJ1dCBub3Ro
aW5nIGlzIGdvaW5nIHRvIGJyZWFrDQooQUZBSUNUKSBhbmQgdGhlIGNsZWFudXAgd2luIGlzIGxh
cmdlIGVub3VnaCB0byBtYWtlIHRoaXMgYSAqdmVyeSoNCmF0dHJhY3RpdmUgb3B0aW9uLg0KDQpJ
IHN1Z2dlc3RlZCB0aGlzIGFwcHJvYWNoIHR3byB5ZWFycyBhZ28gWzFdIGJ1dCB5b3Ugd2VyZSB0
aGUgb25lDQpzYXlpbmcgdGhhdCBidWZmZXIgY291bGQgY3Jvc3MgcGFnZS1ib3VuZGFyeSBvbiBv
bGRlciBMaW51eCBbMl06DQoNCiJJJ2QgbG92ZSB0byBkbyB0aGlzLCBidXQgd2UgY2FudC4gIE9s
ZGVyIExpbnV4IHVzZWQgdG8gaGF2ZSBhIHZpcnR1YWwNCmJ1ZmZlciBzcGFubmluZyBhIHBhZ2Ug
Ym91bmRhcnkuICBDaGFuZ2luZyB0aGUgYmVoYXZpb3VyIHVuZGVyIHRoYXQgd2lsbA0KY2F1c2Ug
b2xkZXIgc2V0dXBzIHRvIGV4cGxvZGUuIg0KDQpTb3JyeSB0aGlzIHdhcyBsb25nIHRpbWUgYWdv
LCBhbmQgZGV0YWlscyBoYXZlIGZhZGVkLiBJSVJDIHRoZXJlIHdhcw0KZXZlbiBhIHByb3Bvc2Fs
IChvciBwYXRjaCBzZXQpIHRoYXQgdG9vayB0aGF0IGludG8gYWNjb3VudCBhbmQgYWxsb3dlZA0K
YnVmZmVycyB0byBzcGFuIGFjcm9zcyBhIHBhZ2UgYm91bmRhcnkgYnkgdGFraW5nIGEgcmVmZXJl
bmNlIHRvIHR3bw0KZGlmZmVyZW50IHBhZ2VzIGluIHRoYXQgY2FzZS4NCg0KSSBhbSBub3QgYXdh
cmUgb2YgYSBwYXRjaCBzZXQuIEp1ZXJnZW4gc3VnZ2VzdGVkIGEgcGVyLWRvbWFpbiBtYXBwaW5n
IGJ1dA0KdGhlcmUgd2FzIG5vIGRldGFpbHMgaG93IHRoaXMgY291bGQgYmUgZG9uZSAobXkgZS1t
YWlsIHdhcyBsZWZ0IHVuYW5zd2VyZWQNClsxXSkuDQoNCklmIHdlIHdlcmUgdXNpbmcgdGhlIHZt
YXAoKSB0aGVuIHdlIHdvdWxkIG5lZWQgdXAgMU1CIHBlciBkb21haW4gKGFzc3VtaW5nDQoxMjgg
dkNQVXMpLiBUaGlzIHNvdW5kcyBxdWl0ZSBhIGJpdCBhbmQgSSB0aGluayB3ZSBuZWVkIHRvIGFn
cmVlIHdoZXRoZXIgaXQNCndvdWxkIGJlIGFuIGFjY2VwdGFibGUgc29sdXRpb24gKHRoaXMgd2Fz
IGFsc28gbGVmdCB1bmFuc3dlcmVkIFsxXSkuDQoNCkNvdWxkIHdlIG1hcC91bm1hcCB0aGUgcnVu
dGltZSBhcmVhIG9uIGRvbWFpbiBzd2l0Y2ggYXQgYSBwZXItY3B1DQpiYXNlZCBsaW5lYXIgc3Bh
Y2UgYXJlYT8gVGhlcmUncyBubyByZWFzb24gdG8gaGF2ZSBhbGwgdGhlIHJ1bnRpbWUNCmFyZWFz
IG1hcHBlZCBhbGwgdGhlIHRpbWUsIHlvdSBqdXN0IGNhcmUgYWJvdXQgdGhlIG9uZSBmcm9tIHRo
ZQ0KcnVubmluZyB2Y3B1Lg0KDQpNYXliZSB0aGUgb3ZlcmhlYWQgb2YgdGhhdCBtYXBwaW5nIGFu
ZCB1bm1hcHBpbmcgd291bGQgYmUNCnRvbyBoaWdoPyBCdXQgc2VlaW5nIHRoYXQgd2UgYXJlIGFp
bWluZyBhdCBhIHNlY3JldC1mcmVlIFhlbiB3ZSB3b3VsZA0KaGF2ZSB0byBldmVudHVhbGx5IGdv
IHRoYXQgcm91dGUgYW55d2F5Lg0KDQpNYXliZSB0aGUgbmV3IGh5cGVyY2FsbCBzaG91bGQgYmUg
YSBiaXQgZGlmZmVyZW50Og0KLSB3ZSBoYXZlIHRoaXMgYXJlYSBhbGxvY2F0ZWQgYWxyZWFkeSBp
bnNpZGUgWGVuIGFuZCB3ZSBkbyBhIGNvcHkgb2YgaXQgb24gYW55IGNvbnRleHQgc3dpdGNoDQot
IHRoZSBndWVzdCBpcyBub3Qgc3VwcG9zZWQgdG8gbW9kaWZ5IGFueSBkYXRhIGluIHRoaXMgYXJl
YQ0KDQpXZSBjb3VsZCBpbnRyb2R1Y2UgYSBuZXcgaHlwZXJjYWxsOg0KLSBYZW4gYWxsb2NhdGUg
dGhlIHJ1bnN0YXRlIGFyZWEgdXNpbmcgYSBwYWdlIGFsaWduZWQgYWRkcmVzcyBhbmQgc2l6ZQ0K
LSB0aGUgZ3Vlc3QgcHJvdmlkZSBhIGZyZWUgZ3Vlc3QgcGh5c2ljYWwgc3BhY2UgdG8gdGhlIGh5
cGVyY2FsbA0KLSBYZW4gbWFwcyByZWFkLW9ubHkgaXRzIG93biBhcmVhIHRvIHRoZSBndWVzdCBh
dCB0aGUgcHJvdmlkZWQgYWRkcmVzcw0KLSBYZW4gc2hhbGwgbm90IG1vZGlmeSBhbnkgZGF0YSBp
biB0aGUgcnVuc3RhdGUgYXJlYSBvZiBvdGhlciBjb3Jlcy9ndWVzdHMgKHNob3VsZCBhbHJlYWR5
IGJlIHRoZSBjYXNlKQ0KLSBXZSBrZWVwIHRoZSBjdXJyZW50IGh5cGVyY2FsbCBmb3IgYmFja3dh
cmQgY29tcGF0aWJpbGl0eSBhbmQgbWFwIHRoZSBhcmVhbCBkdXJpbmcgdGhlIGh5cGVyY2FsbCBh
bmQga2VlcCB0aGUgYXJlYSBtYXBwZWQgYXQgYWxsIHRpbWUsIHdlIGtlZXAgZG9pbmcgdGhlIGNv
cHkgZHVyaW5nIGNvbnRleHQgc3dpdGNoZXMNCg0KVGhpcyB3b3VsZCBoaWdobHkgcmVkdWNlIHRo
ZSBvdmVyaGVhZCBieSByZW1vdmluZyB0aGUgbWFwcGluZy91bm1hcHBpbmcuDQpSZWdhcmRpbmcg
dGhlIHNlY3JldCBmcmVlIEkgZG8gbm90IHJlYWxseSB0aGluayB0aGlzIGlzIHNvbWV0aGluZyBw
cm9ibGVtYXRpYyBoZXJlIGFzIHdlIGFscmVhZHkgaGF2ZSBhIGNvcHkgb2YgdGhpcyBpbnRlcm5h
bGx5IGFueXdheQ0KRGlkIEkgbWlzcyBzb21ldGhpbmcgPw0KDQpCZXJ0cmFuZA0KDQoNCg0KQW5v
dGhlciBvcHRpb24gd291bGQgYmUgdG8ganVzdCByZXR1cm4gLUVJTlZBTCBvciAtRU9QTk9UU1VQ
UCBpbiB0aGF0DQpjYXNlIGFuZCBqdXN0IGdldCBvbiB3aXRoIGl0LiBydW5zdGF0ZSBpbmZvIHNo
b3VsZG4ndCBiZSBtYW5kYXRvcnkgZm9yDQpndWVzdHMgdG8gZnVuY3Rpb24gcHJvcGVybHksIEkg
d291bGQgc2F5IGl0J3MganVzdCBleHRyYSBpbmZvIHRoYXQncw0KcHJvdmlkZWQgaW4gZ29vZCBm
YWl0aCBmcm9tIHRoZSBoeXBlcnZpc29yIGluIG9yZGVyIHRvIGhlbHAgdGhlIGd1ZXN0DQptYWtl
IGJldHRlciBzY2hlZHVsaW5nIGRlY2lzaW9ucy4NCg0KTGludXggd2lsbCBwYW5pYyBpZiB0aGUg
VkNQVU9QX3JlZ2lzdGVyX3J1bnN0YXRlX21lbW9yeV9hcmVhIHJldHVybnMgYW4NCmVycm9yIChz
ZWUgeGVuX3NldHVwX3J1bnN0YXRlX2luZm8oKSkuDQoNCk9oLCB0aGF0J3MgZHVsbC4gVGhhdCBo
eXBlcmNhbGwgd2FzIG5ldmVyIG5vdGVkIHRvIGJlIG9wdGlvbmFsLCBzbyBpdA0KZmFpbGluZyB3
b3VsZCBtZWFuIExpbnV4IGhhcyBzb21laG93IHNjcmV3ZWQgdGhlIGNhbGwgd2hpY2ggaXMgbm90
DQpleHBlY3RlZC4NCg0KUm9nZXIuDQoNCg==

aW5nIHByb2JsZW1hdGljIGhlcmUgYXMgd2UgYWxyZWFkeSBoYXZlIGEgY29weSBvZiB0aGlzIGlu
dGVybmFsbHkgYW55d2F5PC9kaXY+DQo8ZGl2PkRpZCBJIG1pc3Mgc29tZXRoaW5nID88L2Rpdj4N
CjxkaXY+PGJyIGNsYXNzPSIiPg0KPC9kaXY+DQo8ZGl2PkJlcnRyYW5kPC9kaXY+DQo8YnIgY2xh
c3M9IiI+DQo8YmxvY2txdW90ZSB0eXBlPSJjaXRlIiBjbGFzcz0iIj4NCjxkaXYgY2xhc3M9IiI+
PGJyIHN0eWxlPSJjYXJldC1jb2xvcjogcmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0
aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNh
cHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsg
dGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25l
OyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0
cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxibG9j
a3F1b3RlIHR5cGU9ImNpdGUiIHN0eWxlPSJmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNp
emU6IDEycHg7IGZvbnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsg
Zm9udC13ZWlnaHQ6IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgb3JwaGFuczogYXV0
bzsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBu
b25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3aWRvd3M6IGF1dG87IHdvcmQtc3BhY2luZzogMHB4
OyAtd2Via2l0LXRleHQtc2l6ZS1hZGp1c3Q6IGF1dG87IC13ZWJraXQtdGV4dC1zdHJva2Utd2lk
dGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8YmxvY2txdW90ZSB0
eXBlPSJjaXRlIiBjbGFzcz0iIj48YnIgY2xhc3M9IiI+DQpBbm90aGVyIG9wdGlvbiB3b3VsZCBi
ZSB0byBqdXN0IHJldHVybiAtRUlOVkFMIG9yIC1FT1BOT1RTVVBQIGluIHRoYXQ8YnIgY2xhc3M9
IiI+DQpjYXNlIGFuZCBqdXN0IGdldCBvbiB3aXRoIGl0LiBydW5zdGF0ZSBpbmZvIHNob3VsZG4n
dCBiZSBtYW5kYXRvcnkgZm9yPGJyIGNsYXNzPSIiPg0KZ3Vlc3RzIHRvIGZ1bmN0aW9uIHByb3Bl
cmx5LCBJIHdvdWxkIHNheSBpdCdzIGp1c3QgZXh0cmEgaW5mbyB0aGF0J3M8YnIgY2xhc3M9IiI+
DQpwcm92aWRlZCBpbiBnb29kIGZhaXRoIGZyb20gdGhlIGh5cGVydmlzb3IgaW4gb3JkZXIgdG8g
aGVscCB0aGUgZ3Vlc3Q8YnIgY2xhc3M9IiI+DQptYWtlIGJldHRlciBzY2hlZHVsaW5nIGRlY2lz
aW9ucy48YnIgY2xhc3M9IiI+DQo8L2Jsb2NrcXVvdGU+DQo8YnIgY2xhc3M9IiI+DQpMaW51eCB3
aWxsIHBhbmljIGlmIHRoZSBWQ1BVT1BfcmVnaXN0ZXJfcnVuc3RhdGVfbWVtb3J5X2FyZWEgcmV0
dXJucyBhbjxiciBjbGFzcz0iIj4NCmVycm9yIChzZWUgeGVuX3NldHVwX3J1bnN0YXRlX2luZm8o
KSkuPGJyIGNsYXNzPSIiPg0KPC9ibG9ja3F1b3RlPg0KPGJyIHN0eWxlPSJjYXJldC1jb2xvcjog
cmdiKDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZv
bnQtc3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6
IG5vcm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQt
aW5kZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3
b3JkLXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRl
Y29yYXRpb246IG5vbmU7IiBjbGFzcz0iIj4NCjxzcGFuIHN0eWxlPSJjYXJldC1jb2xvcjogcmdi
KDAsIDAsIDApOyBmb250LWZhbWlseTogSGVsdmV0aWNhOyBmb250LXNpemU6IDEycHg7IGZvbnQt
c3R5bGU6IG5vcm1hbDsgZm9udC12YXJpYW50LWNhcHM6IG5vcm1hbDsgZm9udC13ZWlnaHQ6IG5v
cm1hbDsgbGV0dGVyLXNwYWNpbmc6IG5vcm1hbDsgdGV4dC1hbGlnbjogc3RhcnQ7IHRleHQtaW5k
ZW50OiAwcHg7IHRleHQtdHJhbnNmb3JtOiBub25lOyB3aGl0ZS1zcGFjZTogbm9ybWFsOyB3b3Jk
LXNwYWNpbmc6IDBweDsgLXdlYmtpdC10ZXh0LXN0cm9rZS13aWR0aDogMHB4OyB0ZXh0LWRlY29y
YXRpb246IG5vbmU7IGZsb2F0OiBub25lOyBkaXNwbGF5OiBpbmxpbmUgIWltcG9ydGFudDsiIGNs
YXNzPSIiPk9oLA0KIHRoYXQncyBkdWxsLiBUaGF0IGh5cGVyY2FsbCB3YXMgbmV2ZXIgbm90ZWQg
dG8gYmUgb3B0aW9uYWwsIHNvIGl0PC9zcGFuPjxiciBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigw
LCAwLCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0
eWxlOiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3Jt
YWw7IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVu
dDogMHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1z
cGFjaW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0
aW9uOiBub25lOyIgY2xhc3M9IiI+DQo8c3BhbiBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAw
LCAwKTsgZm9udC1mYW1pbHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxl
OiBub3JtYWw7IGZvbnQtdmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7
IGxldHRlci1zcGFjaW5nOiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDog
MHB4OyB0ZXh0LXRyYW5zZm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFj
aW5nOiAwcHg7IC13ZWJraXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9u
OiBub25lOyBmbG9hdDogbm9uZTsgZGlzcGxheTogaW5saW5lICFpbXBvcnRhbnQ7IiBjbGFzcz0i
Ij5mYWlsaW5nDQogd291bGQgbWVhbiBMaW51eCBoYXMgc29tZWhvdyBzY3Jld2VkIHRoZSBjYWxs
IHdoaWNoIGlzIG5vdDwvc3Bhbj48YnIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7
IGZvbnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9y
bWFsOyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0
ZXItc3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsg
dGV4dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzog
MHB4OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9u
ZTsiIGNsYXNzPSIiPg0KPHNwYW4gc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZv
bnQtZmFtaWx5OiBIZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFs
OyBmb250LXZhcmlhbnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXIt
c3BhY2luZzogbm9ybWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4
dC10cmFuc2Zvcm06IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4
OyAtd2Via2l0LXRleHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsg
ZmxvYXQ6IG5vbmU7IGRpc3BsYXk6IGlubGluZSAhaW1wb3J0YW50OyIgY2xhc3M9IiI+ZXhwZWN0
ZWQuPC9zcGFuPjxiciBzdHlsZT0iY2FyZXQtY29sb3I6IHJnYigwLCAwLCAwKTsgZm9udC1mYW1p
bHk6IEhlbHZldGljYTsgZm9udC1zaXplOiAxMnB4OyBmb250LXN0eWxlOiBub3JtYWw7IGZvbnQt
dmFyaWFudC1jYXBzOiBub3JtYWw7IGZvbnQtd2VpZ2h0OiBub3JtYWw7IGxldHRlci1zcGFjaW5n
OiBub3JtYWw7IHRleHQtYWxpZ246IHN0YXJ0OyB0ZXh0LWluZGVudDogMHB4OyB0ZXh0LXRyYW5z
Zm9ybTogbm9uZTsgd2hpdGUtc3BhY2U6IG5vcm1hbDsgd29yZC1zcGFjaW5nOiAwcHg7IC13ZWJr
aXQtdGV4dC1zdHJva2Utd2lkdGg6IDBweDsgdGV4dC1kZWNvcmF0aW9uOiBub25lOyIgY2xhc3M9
IiI+DQo8YnIgc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBI
ZWx2ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlh
bnQtY2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9y
bWFsOyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06
IG5vbmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRl
eHQtc3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsiIGNsYXNzPSIiPg0K
PHNwYW4gc3R5bGU9ImNhcmV0LWNvbG9yOiByZ2IoMCwgMCwgMCk7IGZvbnQtZmFtaWx5OiBIZWx2
ZXRpY2E7IGZvbnQtc2l6ZTogMTJweDsgZm9udC1zdHlsZTogbm9ybWFsOyBmb250LXZhcmlhbnQt
Y2Fwczogbm9ybWFsOyBmb250LXdlaWdodDogbm9ybWFsOyBsZXR0ZXItc3BhY2luZzogbm9ybWFs
OyB0ZXh0LWFsaWduOiBzdGFydDsgdGV4dC1pbmRlbnQ6IDBweDsgdGV4dC10cmFuc2Zvcm06IG5v
bmU7IHdoaXRlLXNwYWNlOiBub3JtYWw7IHdvcmQtc3BhY2luZzogMHB4OyAtd2Via2l0LXRleHQt
c3Ryb2tlLXdpZHRoOiAwcHg7IHRleHQtZGVjb3JhdGlvbjogbm9uZTsgZmxvYXQ6IG5vbmU7IGRp
c3BsYXk6IGlubGluZSAhaW1wb3J0YW50OyIgY2xhc3M9IiI+Um9nZXIuPC9zcGFuPjwvZGl2Pg0K
PC9ibG9ja3F1b3RlPg0KPC9kaXY+DQo8YnIgY2xhc3M9IiI+DQo8L2JvZHk+DQo8L2h0bWw+DQo=



From xen-devel-bounces@lists.xenproject.org Fri May 15 09:23:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:23:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZWYs-0007DE-7g; Fri, 15 May 2020 09:23:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7E4v=65=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZWYr-0007D5-14
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:23:21 +0000
X-Inumbo-ID: b75eb8e4-968d-11ea-a536-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b75eb8e4-968d-11ea-a536-12813bfff9fa;
 Fri, 15 May 2020 09:23:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WHPeUnBeBhc625rc952H/9DDw4Z8mMm3okmV/UjUryk=; b=e7eLkTmgHNySNHMgbk6Juu+2Nm
 8MiO+GjdqZAN2ibZwPGCONyZMES2ZkfSvA0VvQWHz61IY0uLvAW0AZa07YOYr4gvE6Q8jXIoJA/Cj
 9QJqZcOUPp7K+8shcyGi6onr35jyvkzVDBqPN4yyr7X4g/Qi1mJYuKw8XvTuiucnyYgg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZWYo-000375-Pl; Fri, 15 May 2020 09:23:18 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZWYo-0008DW-Dv; Fri, 15 May 2020 09:23:18 +0000
Subject: Re: Error during update_runstate_area with KPTI activated
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <3813cfa2-c881-3fa5-bdf8-a2e874584a9f@xen.org>
Date: Fri, 15 May 2020 10:23:16 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515091018.GO54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 15/05/2020 10:10, Roger Pau Monné wrote:
> On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
>>
>> Hi,
>>
>> On 15/05/2020 09:38, Roger Pau Monné wrote:
>>> On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
>>>>
>>>>
>>>> On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com<mailto:julien.grall.oss@gmail.com>> wrote:
>>>>
>>>> On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com<mailto:andrew.cooper3@citrix.com>> wrote:
>>>>
>>>> On 14/05/2020 18:38, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 14/05/2020 17:18, Bertrand Marquis wrote:
>>>>
>>>>
>>>> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org<mailto:julien@xen.org>> wrote:
>>>>
>>>>
>>>>
>>>> On 14/05/2020 15:28, Bertrand Marquis wrote:
>>>> Hi,
>>>>
>>>> Hi,
>>>>
>>>> When executing Linux on arm64 with KPTI activated (in Dom0 or in a
>>>> DomU), I have a lot of walk page table errors like this:
>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>> 0xffffff837ebe0cd0
>>>> After implementing a call trace, I found that the problem was
>>>> coming from update_runstate_area when Linux has KPTI activated.
>>>> I have the following call trace:
>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>> 0xffffff837ebe0cd0
>>>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
>>>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
>>>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
>>>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
>>>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
>>>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
>>>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
>>>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
>>>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
>>>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
>>>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
>>>> Discussing this subject with Stefano, he pointed me to a discussion
>>>> started a year ago on this subject here:
>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
>>>>
>>>> And a patch was submitted:
>>>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
>>>>
>>>> I rebased this patch on current master and it is solving the
>>>> problem I have seen.
>>>> It sounds to me like a good solution to introduce a
>>>> VCPUOP_register_runstate_phys_memory_area to not depend on the area
>>>> actually being mapped in the guest when a context switch is being
>>>> done (which is actually the problem happening when a context switch
>>>> is trigger while a guest is running in EL0).
>>>> Is there any reason why this was not merged in the end?
>>>>
>>>> I just skimmed through the thread to remind myself the state.
>>>> AFAICT, this is blocked on the contributor to clarify the intended
>>>> interaction and provide a new version.
>>>>
>>>> What do you mean here by intended interaction ? How the new hyper
>>>> call should be used by the guest OS ?
>>>>
>>>>   From what I remember, Jan was seeking clarification on whether the two
>>>> hypercalls (existing and new) can be called together by the same OS
>>>> (and make sense).
>>>>
>>>> There was also the question of the handover between two pieces of
>>>> software. For instance, what if the firmware is using the existing
>>>> interface but the OS the new one? Similar question about Kexecing a
>>>> different kernel.
>>>>
>>>> This part is mostly documentation so we can discuss about the approach
>>>> and review the implementation.
>>>>
>>>>
>>>>
>>>> I am still in favor of the new hypercall (and still in my todo list)
>>>> but I haven't yet found time to revive the series.
>>>>
>>>> Would you be willing to take over the series? I would be happy to
>>>> bring you up to speed and provide review.
>>>>
>>>> Sure I can take it over.
>>>>
>>>> I ported it to master version of xen and I tested it on a board.
>>>> I still need to do a deep review of the code myself but I have an
>>>> understanding of the problem and what is the idea.
>>>>
>>>> Any help to get up to speed would be more than welcome :-)
>>>> I would recommend to go through the latest version (v3) and the
>>>> previous (v2). I am also suggesting v2 because I think the split was
>>>> easier to review/understand.
>>>>
>>>> The x86 code is probably what is going to give you the most trouble as
>>>> there are two ABIs to support (compat and non-compat). If you don't
>>>> have an x86 setup, I should be able to test it/help write it.
>>>>
>>>> Feel free to ask any questions and I will try my best to remember the
>>>> discussion from last year :).
>>>>
>>>> At the risk of being shouted down again, a new hypercall isn't strictly
>>>> necessary, and there are probably better ways of fixing it.
>>>>
>>>> The underlying ABI problem is that the area is registered by virtual
>>>> address.  The only correct way this should have been done is to register
>>>> by guest physical address, so Xen's updating of the data doesn't
>>>> interact with the guest pagetable settings/restrictions.  x86 suffers
>>>> the same kind of problems as ARM, except we silently squash the fallout.
>>>>
>>>> The logic in Xen is horrible, and I would really rather it was deleted
>>>> completely, rather than to be kept for compatibility.
>>>>
>>>> The runstate area is always fixed kernel memory and doesn't move.  I
>>>> believe it is already restricted from crossing a page boundary, and we
>>>> can calculate the va=>pa translation when the hypercall is made.
>>>>
>>>> Yes - this is a technically ABI change, but nothing is going to break
>>>> (AFAICT) and the cleanup win is large enough to make this a *very*
>>>> attractive option.
>>>>
>>>> I suggested this approach two years ago [1] but you were the one
>>>> saying that buffer could cross page-boundary on older Linux [2]:
>>>>
>>>> "I'd love to do this, but we can't.  Older Linux used to have a virtual
>>>> buffer spanning a page boundary.  Changing the behaviour under that will
>>>> cause older setups to explode."
>>>
>>> Sorry, this was a long time ago, and the details have faded. IIRC there was
>>> even a proposal (or patch set) that took that into account and allowed
>>> buffers to span across a page boundary by taking a reference to two
>>> different pages in that case.
>>
>> I am not aware of a patch set. Juergen suggested a per-domain mapping but
>> there was no details how this could be done (my e-mail was left unanswered
>> [1]).
>>
>> If we were using vmap() then we would need up to 1MB per domain (assuming
>> 128 vCPUs). This sounds like quite a bit, and I think we need to agree
>> whether it would be an acceptable solution (this was also left unanswered [1]).
> 
> Could we map/unmap the runstate area on domain switch at a per-cpu
> based linear space area? There's no reason to have all the runstate
> areas mapped all the time, you just care about the one from the
> running vcpu.

AFAICT, this is only used during context switching. This is a bit 
surprising because I would expect it to be updated when the vCPU is running.

So maybe we could just use map_domain_page() and take care manually 
about cross-page boundary.

> 
> Maybe the overhead of that mapping and unmapping would be
> too high? But seeing that we are aiming at a secret-free Xen we would
> have to eventually go that route anyway.

The overhead is likely to be higher with the existing code, as you have 
to walk the guest page-tables and the p2m every time in order to 
translate the guest virtual address to a host physical address.

So mapping/unmapping a physical address is not going to be too bad :).

>>>
>>> Another option would be to just return -EINVAL or -EOPNOTSUPP in that
>>> case and just get on with it. runstate info shouldn't be mandatory for
>>> guests to function properly, I would say it's just extra info that's
>>> provided in good faith from the hypervisor in order to help the guest
>>> make better scheduling decisions.
>>
>> Linux will panic if the VCPUOP_register_runstate_memory_area returns an
>> error (see xen_setup_runstate_info()).
> 
> Oh, that's dull. That hypercall was never noted to be optional, so it
> failing would mean Linux has somehow screwed the call which is not
> expected.
> 
> Roger.
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 15 09:28:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:28:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZWdM-0007T6-Ru; Fri, 15 May 2020 09:28:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7uuJ=65=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZWdL-0007T1-7F
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:27:59 +0000
X-Inumbo-ID: 5d0c03a0-968e-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d0c03a0-968e-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 09:27:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 54F69B02E;
 Fri, 15 May 2020 09:27:59 +0000 (UTC)
Subject: Re: [PATCH v8 04/12] xen: add basic hypervisor filesystem support
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200508153421.24525-1-jgross@suse.com>
 <20200508153421.24525-5-jgross@suse.com>
 <db277779-5b1e-a2aa-3948-9e6dd8e8bef0@suse.com>
 <23938228-e947-fe36-8b19-0e89886db9ac@suse.com>
 <28dd8109-1815-70cd-834c-53330d5c824d@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ae9db4d5-993a-5bf6-c520-3aec428cdc5d@suse.com>
Date: Fri, 15 May 2020 11:27:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <28dd8109-1815-70cd-834c-53330d5c824d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 07:33, Jürgen Groß wrote:
> On 14.05.20 11:50, Jürgen Groß wrote:
>> On 14.05.20 09:59, Jan Beulich wrote:
>>> On 08.05.2020 17:34, Juergen Gross wrote:
>>>> +int hypfs_read_dir(const struct hypfs_entry *entry,
>>>> +                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
>>>> +{
>>>> +    const struct hypfs_entry_dir *d;
>>>> +    const struct hypfs_entry *e;
>>>> +    unsigned int size = entry->size;
>>>> +
>>>> +    d = container_of(entry, const struct hypfs_entry_dir, e);
>>>> +
>>>> +    list_for_each_entry ( e, &d->dirlist, list )
>>>
>>> This function, in particular because of being non-static, makes
>>> me wonder how, with add_entry() taking a lock, it can be safe
>>> without any locking. Initially I thought the justification might
>>> be because all adding of entries is an init-time-only thing, but
>>> various involved functions aren't marked __init (and it is at
>>> least not implausible that down the road we might see new
>>> entries getting added during certain hotplug operations).
>>>
>>> I do realize that do_hypfs_op() takes the necessary read lock,
>>> but then you're still building on the assumption that the
>>> function is reachable through only that code path, despite
>>> being non-static. An ASSERT() here would be the minimum I guess,
>>> but with read locks now being recursive I don't see why you
>>> couldn't read-lock here again.
>>
>> Right, will add the read-lock.
>>
>>>
>>> The same goes for other non-static functions, albeit things may
>>> become more interesting for functions living on the
>>> XEN_HYPFS_OP_write_contents path (because write locks aren't
>>
>> Adding an ASSERT() in this regard should be rather easy.
> 
> As the type specific read- and write-functions should only be called
> through the generic read/write functions I think it is better to have
> a percpu variable holding the current locking state and ASSERT() that
> to match. This will be cheaper than nesting locks and I don't have to
> worry about either write_lock nesting or making _is_write_locked_by_me()
> an official interface.

Ah yes, this should do.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 15 09:47:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:47:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZWvE-0000lS-Dw; Fri, 15 May 2020 09:46:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7uuJ=65=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZWvC-0000lN-MB
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:46:26 +0000
X-Inumbo-ID: f1343dc0-9690-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1343dc0-9690-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 09:46:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 26800AD31;
 Fri, 15 May 2020 09:46:27 +0000 (UTC)
Subject: Re: [PATCH v2 4/6] x86/mem-paging: add minimal lock order enforcement
 to p2m_mem_paging_prep()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <4af1f459-fe7a-cd61-43cb-fb3fa4f15c00@suse.com>
 <20200514162545.GI54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8b9fd4ce-177f-6f57-8d24-8468fea0c299@suse.com>
Date: Fri, 15 May 2020 11:46:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514162545.GI54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 18:25, Roger Pau Monné wrote:
> On Thu, Apr 23, 2020 at 10:38:44AM +0200, Jan Beulich wrote:
>> While full checking is impossible (as the lock is being acquired/
>> released down the call tree), perform at least a lock level check.
> 
> I'm slightly confused, doesn't alloc_domheap_page already have its
> own lock order checking?

I don't see how it would, as it doesn't (and can't legitimately)
include arch/x86/mm/mm-locks.h. Also maybe this comment in the
header clarifies it:

/* Page alloc lock (per-domain)
 *
 * This is an external lock, not represented by an mm_lock_t. However,
 * pod code uses it in conjunction with the p2m lock, and expecting
 * the ordering which we enforce here.
 * The lock is not recursive. */

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 15 09:58:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 09:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZX6P-0001n9-Ez; Fri, 15 May 2020 09:58:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZX6O-0001n4-6O
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 09:58:00 +0000
X-Inumbo-ID: 8e85f446-9692-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e85f446-9692-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 09:57:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589536679;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=6Gp7i5DjV3oxSWhQxLmNtP5FHypocaX1vX1FkyxMkVw=;
 b=SeAbJZ6LAvL6SgVgm0nsiL1BwQHFhMwjBoppX7E87b0vhbJln8m6xfuE
 12xtr1f6yOnq9sDDRASiIY53MMY8g9GEgdBDe7UPcY1omq3wQDKFxonhr
 o8gvCpvz6RXu2RdDTCf+YQXO3kFqhfbY8IROHpXxtgjn3HwYkdZWhlBic E=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: ExsvqBbEXZcuBESytKX4nm5MgKwDl4An5ML9+wAyzhTOSc0UeedbpqK/XqWk/0P36YZIi/vvF5
 pAppqro+Lr8Ca185uz/cXzEdHYT1QmOufZvcRp9wHdi4b+TMB2rKdI80W8rkAqgICJ8BAByfMM
 DRGhmx65C5buG/hcmkWwZKW9rJXEiJKSfCrQNWPQ5cKX/H4ZRRGpD7kTr5fRdTwHLC3f+XQ76h
 RNm4gCtx8Zci4ts+l76a6yEWgaEOzQQsLSKrHb6Npe3aTummXlw6n6QId762L1Q/zVtJa/OWpO
 et4=
X-SBRS: 2.7
X-MesageID: 17970664
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,394,1583211600"; d="scan'208";a="17970664"
Date: Fri, 15 May 2020 11:57:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Error during update_runstate_area with KPTI activated
Message-ID: <20200515095751.GQ54375@Air-de-Roger>
References: <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <3813cfa2-c881-3fa5-bdf8-a2e874584a9f@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <3813cfa2-c881-3fa5-bdf8-a2e874584a9f@xen.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 10:23:16AM +0100, Julien Grall wrote:
> 
> 
> On 15/05/2020 10:10, Roger Pau Monné wrote:
> > On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
> > > 
> > > Hi,
> > > 
> > > On 15/05/2020 09:38, Roger Pau Monné wrote:
> > > > On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
> > > > > 
> > > > > 
> > > > > On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com<mailto:julien.grall.oss@gmail.com>> wrote:
> > > > > "I'd love to do this, but we cant.  Older Linux used to have a virtual
> > > > > buffer spanning a page boundary.  Changing the behaviour under that will
> > > > > cause older setups to explode."
> > > > 
> > > > Sorry, this was a long time ago, and the details have faded. IIRC there was
> > > > even a proposal (or patch set) that took that into account and allowed
> > > > buffers to span across a page boundary by taking a reference to two
> > > > different pages in that case.
> > > 
> > > I am not aware of a patch set. Juergen suggested a per-domain mapping, but
> > > there were no details on how this could be done (my e-mail was left
> > > unanswered [1]).
> > > 
> > > If we were using vmap() then we would need up to 1MB per domain (assuming
> > > 128 vCPUs). This sounds like quite a bit, and I think we need to agree
> > > whether it would be an acceptable solution (this was also left unanswered
> > > [1]).
> > 
> > Could we map/unmap the runtime area on domain switch at a per-cpu
> > based linear space area? There's no reason to have all the runtime
> > areas mapped all the time, you just care about the one from the
> > running vcpu.
> 
> AFAICT, this is only used during context switching. This is a bit surprising
> because I would expect it to be updated when the vCPU is running.
> 
> So maybe we could just use map_domain_page() and take care manually about
> cross-page boundary.
> 
> > 
> > Maybe the overhead of that mapping and unmapping would be
> > too high? But seeing that we are aiming at a secret-free Xen we would
> > have to eventually go that route anyway.
> 
> The overhead is likely to be higher with the existing code, as you have to
> walk the guest page-tables and the p2m every time in order to translate the
> guest virtual address to a host physical address.

Maybe I'm getting confused, but you actually want to avoid the guest
page table walk, as the guest might be running with user-space page
tables that don't have the linear address of the runtime area mapped,
and hence you would like to do the walk only once (at hypercall
registration time) and keep a reference to the page(s)?

I assumed the whole point of this was to avoid doing the page table
walk when you need to update the runstate info area.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 15 10:00:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 10:00:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZX8X-0002bj-St; Fri, 15 May 2020 10:00:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7E4v=65=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZX8W-0002be-LA
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 10:00:12 +0000
X-Inumbo-ID: dd8967bc-9692-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd8967bc-9692-11ea-9887-bc764e2007e4;
 Fri, 15 May 2020 10:00:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pZ4ycuEbFXZr/VhW8FA0XMYHsTnPO3E5s5G5EE0P0iI=; b=zNNrbFxvI1NMNyPc03kcEE+rAE
 CDM3/ECRr8b6IA3XpIca4o7KRCJpCHMVNjYutbryMqlP53+BsOQRsQDpKJEyxJrArF+NEorHYsEuR
 a5XeOQ28JQUlWsiFRbfCO18thAY+xbxRU9vOtfDdy7+rGssYx410xteln3Tu5rOcXZwQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZX8V-0003x0-0J; Fri, 15 May 2020 10:00:11 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZX8U-0001mK-Ke; Fri, 15 May 2020 10:00:10 +0000
Subject: Re: Error during update_runstate_area with KPTI activated
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <355d0b59-29d4-a483-73a3-3cdd468c0b77@xen.org>
Date: Fri, 15 May 2020 11:00:08 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hongyan Xia <hx242@xen.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Bertrand,

On 15/05/2020 10:21, Bertrand Marquis wrote:
> 
> 
>> On 15 May 2020, at 10:10, Roger Pau Monné <roger.pau@citrix.com 
>> <mailto:roger.pau@citrix.com>> wrote:
>>
>> On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
>>>
>>> Hi,
>>>
>>> On 15/05/2020 09:38, Roger Pau Monné wrote:
>>>> On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
>>>>>
>>>>>
>>>>> On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com 
>>>>> <mailto:julien.grall.oss@gmail.com><mailto:julien.grall.oss@gmail.com>> 
>>>>> wrote:
>>>>>
>>>>> On Thu, 14 May 2020 at 19:12, Andrew Cooper 
>>>>> <andrew.cooper3@citrix.com 
>>>>> <mailto:andrew.cooper3@citrix.com><mailto:andrew.cooper3@citrix.com>> 
>>>>> wrote:
>>>>>
>>>>> On 14/05/2020 18:38, Julien Grall wrote:
>>>>> Hi,
>>>>>
>>>>> On 14/05/2020 17:18, Bertrand Marquis wrote:
>>>>>
>>>>>
>>>>> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org 
>>>>> <mailto:julien@xen.org><mailto:julien@xen.org>> wrote:
>>>>>
>>>>>
>>>>>
>>>>> On 14/05/2020 15:28, Bertrand Marquis wrote:
>>>>> Hi,
>>>>>
>>>>> Hi,
>>>>>
>>>>> When executing linux on arm64 with KPTI activated (in Dom0 or in a
>>>>> DomU), I have a lot of walk page table errors like this:
>>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>>> 0xffffff837ebe0cd0
>>>>> After implementing a call trace, I found that the problem was
>>>>> coming from the update_runstate_area when linux has KPTI activated.
>>>>> I have the following call trace:
>>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>>> 0xffffff837ebe0cd0
>>>>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
>>>>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
>>>>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
>>>>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
>>>>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
>>>>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
>>>>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
>>>>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
>>>>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
>>>>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
>>>>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
>>>>> Discussing this subject with Stefano, he pointed me to a discussion
>>>>> started a year ago on this subject here:
>>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
>>>>>
>>>>> And a patch was submitted:
>>>>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
>>>>>
>>>>> I rebased this patch on current master and it is solving the
>>>>> problem I have seen.
>>>>> It sounds to me like a good solution to introduce a
>>>>> VCPUOP_register_runstate_phys_memory_area to not depend on the area
>>>>> actually being mapped in the guest when a context switch is being
>>>>> done (which is actually the problem happening when a context switch
>>>>> is triggered while a guest is running in EL0).
>>>>> Is there any reason why this was not merged at the end ?
>>>>>
>>>>> I just skimmed through the thread to remind myself the state.
>>>>> AFAICT, this is blocked on the contributor to clarify the intended
>>>>> interaction and provide a new version.
>>>>>
>>>>> What do you mean here by intended interaction ? How the new hyper
>>>>> call should be used by the guest OS ?
>>>>>
>>>>> From what I remember, Jan was seeking clarification on whether the two
>>>>> hypercalls (existing and new) can be called together by the same OS
>>>>> (and make sense).
>>>>>
>>>>> There was also the question of the handover between two pieces of
>>>>> software. For instance, what if the firmware is using the existing
>>>>> interface but the OS the new one? Similar question about Kexecing a
>>>>> different kernel.
>>>>>
>>>>> This part is mostly documentation so we can discuss about the approach
>>>>> and review the implementation.
>>>>>
>>>>>
>>>>>
>>>>> I am still in favor of the new hypercall (and still in my todo list)
>>>>> but I haven't yet found time to revive the series.
>>>>>
>>>>> Would you be willing to take over the series? I would be happy to
>>>>> bring you up to speed and provide review.
>>>>>
>>>>> Sure I can take it over.
>>>>>
>>>>> I ported it to master version of xen and I tested it on a board.
>>>>> I still need to do a deep review of the code myself but I have an
>>>>> understanding of the problem and what is the idea.
>>>>>
>>>>> Any help to get up to speed would be more than welcome :-)
>>>>> I would recommend to go through the latest version (v3) and the
>>>>> previous (v2). I am also suggesting v2 because I think the split was
>>>>> easier to review/understand.
>>>>>
>>>>> The x86 code is probably what is going to give you the most trouble as
>>>>> there are two ABIs to support (compat and non-compat). If you don't
>>>>> have an x86 setup, I should be able to test it/help write it.
>>>>>
>>>>> Feel free to ask any questions and I will try my best to remember the
>>>>> discussion from last year :).
>>>>>
>>>>> At risk of being shouted down again, a new hypercall isn't necessarily
>>>>> necessary, and there are probably better ways of fixing it.
>>>>>
>>>>> The underlying ABI problem is that the area is registered by virtual
>>>>> address.  The only correct way this should have been done is to 
>>>>> register
>>>>> by guest physical address, so Xen's updating of the data doesn't
>>>>> interact with the guest pagetable settings/restrictions.  x86 suffers
>>>>> the same kind of problems as ARM, except we silently squash the 
>>>>> fallout.
>>>>>
>>>>> The logic in Xen is horrible, and I would really rather it was deleted
>>>>> completely, rather than to be kept for compatibility.
>>>>>
>>>>> The runstate area is always fixed kernel memory and doesn't move.  I
>>>>> believe it is already restricted from crossing a page boundary, and we
>>>>> can calculate the va=>pa translation when the hypercall is made.
>>>>>
>>>>> Yes - this is a technically ABI change, but nothing is going to break
>>>>> (AFAICT) and the cleanup win is large enough to make this a *very*
>>>>> attractive option.
>>>>>
>>>>> I suggested this approach two years ago [1] but you were the one
>>>>> saying that buffer could cross page-boundary on older Linux [2]:
>>>>>
>>>>> "I'd love to do this, but we cant.  Older Linux used to have a virtual
>>>>> buffer spanning a page boundary.  Changing the behaviour under that 
>>>>> will
>>>>> cause older setups to explode."
>>>>
>>>> Sorry, this was a long time ago, and the details have faded. IIRC there was
>>>> even a proposal (or patch set) that took that into account and allowed
>>>> buffers to span across a page boundary by taking a reference to two
>>>> different pages in that case.
>>>
>>> I am not aware of a patch set. Juergen suggested a per-domain mapping, but
>>> there were no details on how this could be done (my e-mail was left
>>> unanswered [1]).
>>>
>>> If we were using vmap() then we would need up to 1MB per domain (assuming
>>> 128 vCPUs). This sounds like quite a bit, and I think we need to agree
>>> whether it would be an acceptable solution (this was also left unanswered
>>> [1]).
>>
>> Could we map/unmap the runtime area on domain switch at a per-cpu
>> based linear space area? There's no reason to have all the runtime
>> areas mapped all the time, you just care about the one from the
>> running vcpu.
>>
>> Maybe the overhead of that mapping and unmapping would be
>> too high? But seeing that we are aiming at a secret-free Xen we would
>> have to eventually go that route anyway.
> 
> Maybe the new hypercall should be a bit different:
> - we have this area allocated already inside Xen and we do a copy of it 
> on any context switch
> - the guest is not supposed to modify any data in this area
> 
> We could introduce a new hypercall:
> - Xen allocate the runstate area using a page aligned address and size

At the moment the runstate is 40 bytes. If we were going to follow this 
proposal, I would recommend trying to pack as many runstates as possible 
into each page.

Otherwise, you would waste 4056 bytes per vCPU in both Xen and the 
guest OS. This would be even worse for a kernel using 64KB pages.


> - the guest provide a free guest physical space to the hypercall

This part is the trickiest one. How does the guest know what is free 
in its physical address space?

I am not aware of any way to do this in Linux. So the best you could do 
would be to allocate a page from RAM and tell Xen to replace it with 
the runstate mapping.

However, this also means you may shatter a superpage in the P2M. This 
may affect performance in the long run.

> - Xen maps read-only its own area to the guest at the provided address
> - Xen shall not modify any data in the runstate area of other 
> cores/guests (should already be the case)
> - We keep the current hypercall for backward compatibility and map the 
> areal during the hypercall and keep the area mapped at all time, we keep 
> doing the copy during context switches
> 
> This would highly reduce the overhead by removing the mapping/unmapping.

I don't think the overhead is going to be significant with 
map_domain_page()/unmap_domain_page().

On Arm64, the memory is always mapped so map/unmap is a NOP. On Arm32, 
we have a fast map/unmap implementation.

On x86, without SH, most of the memory is also always mapped, so this 
operation is mostly a NOP. In the SH case, the map/unmap will be used 
for any access to guest memory (such as hypercall accesses), but it is 
quite optimized.

Note that the current overhead is much higher today, as you need to walk 
the guest PT and the P2M (we are talking about multiple map/unmaps). So 
moving to one map/unmap is already going to be a major improvement.

> Regarding the secret free I do not really think this is something 
> problematic here as we already have a copy of this internally anyway

The secret-free work is still under review, so what is done in Xen today 
shouldn't dictate the future.

The question to answer is whether we believe leaking the content may be 
a problem. If the answer is yes, then most likely we will want the 
internal representation to be mapped on demand, or mapped only in the 
Xen page tables associated with that domain.

My gut feeling is that the runstate content is not critical, but I 
haven't fully thought it through yet.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 15 10:08:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 10:08:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZXFx-0002v2-Tg; Fri, 15 May 2020 10:07:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZXFw-0002ux-K3
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 10:07:52 +0000
X-Inumbo-ID: ee72a30a-9693-11ea-a545-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee72a30a-9693-11ea-a545-12813bfff9fa;
 Fri, 15 May 2020 10:07:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589537270;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=ve+XuuDfXh9Md/H4t5Pjko2ru3NNvfZtHyKdl89tmFk=;
 b=XFfi0WrzYAD2QEBSTujJDJFDX/iaSwjQUS0/5MyjhO36D3DHAj4Oq8z3
 EXcK6juQ7cTyXWrObLZqRrjlYIXgTTPEVTXSz9+WCNj5EZJDZHPVKF5ZO
 LV1RtB0AcTVZHOCGmDEtPSLc6/4B2AXFBSZ3yXns87zynTSvNi43GzFRr g=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: JTsJ4/YCKDvMoXHQJuOhjO3npv41XJmknYZeV2rKyV7j9iEhFXQuzf13v6vovf7XM2JGC7AskP
 neacPEuh2I2xNtGj6yh0d4xkcjxkqkxND4durRz2wMlTgwmCPvxQwacAKR9SjffZLDfpoa7nXT
 OwGEsvuiZ4pIKTxbAQID15noTPeM/50apKYdJlpp5b5fOc+OpbLLZLG7dy7nDmUlrqOO0Q2kMK
 VSbWYzo57aNzHnrCjTb6hP3qAyLGUuhJJSlFnuh5/e7kAbuTDtz+EK37h0f1j3fJoqJN4gnhLu
 tug=
X-SBRS: 2.7
X-MesageID: 17971316
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,394,1583211600"; d="scan'208";a="17971316"
Date: Fri, 15 May 2020 12:07:42 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Error during update_runstate_area with KPTI activated
Message-ID: <20200515100742.GR54375@Air-de-Roger>
References: <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano
 Stabellini <stefano.stabellini@xilinx.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Can you please fix your email client to properly indent replies? It's
impossible to distinguish the text you add in a reply from the original
email, as it's not indented in any way.

On Fri, May 15, 2020 at 09:21:34AM +0000, Bertrand Marquis wrote:
> On 15 May 2020, at 10:10, Roger Pau Monné <roger.pau@citrix.com<mailto:roger.pau@citrix.com>> wrote:
> 
> On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
> 
> Hi,
> 
> On 15/05/2020 09:38, Roger Pau Monné wrote:
> On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
> 
> 
> On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com<mailto:julien.grall.oss@gmail.com><mailto:julien.grall.oss@gmail.com>> wrote:
> 
> On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com<mailto:andrew.cooper3@citrix.com><mailto:andrew.cooper3@citrix.com>> wrote:
> 
> On 14/05/2020 18:38, Julien Grall wrote:
> Hi,
> 
> On 14/05/2020 17:18, Bertrand Marquis wrote:
> 
> 
> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org<mailto:julien@xen.org><mailto:julien@xen.org>> wrote:
> 
> 
> 
> On 14/05/2020 15:28, Bertrand Marquis wrote:
> Hi,
> 
> Hi,
> 
> When executing linux on arm64 with KPTI activated (in Dom0 or in a
> DomU), I have a lot of walk page table errors like this:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> 0xffffff837ebe0cd0
> After implementing a call trace, I found that the problem was
> coming from the update_runstate_area when linux has KPTI activated.
> I have the following call trace:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> 0xffffff837ebe0cd0
> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
> Discussing this subject with Stefano, he pointed me to a discussion
> started a year ago on this subject here:
> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
> 
> And a patch was submitted:
> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
> 
> I rebased this patch on current master and it is solving the
> problem I have seen.
> It sounds to me like a good solution to introduce a
> VCPUOP_register_runstate_phys_memory_area to not depend on the area
> actually being mapped in the guest when a context switch is being
> done (which is actually the problem happening when a context switch
> is triggered while a guest is running in EL0).
> Is there any reason why this was not merged at the end ?
> 
> I just skimmed through the thread to remind myself the state.
> AFAICT, this is blocked on the contributor to clarify the intended
> interaction and provide a new version.
> 
> What do you mean here by intended interaction ? How the new hyper
> call should be used by the guest OS ?
> 
> From what I remember, Jan was seeking clarification on whether the two
> hypercalls (existing and new) can be called together by the same OS
> (and make sense).
> 
> There was also the question of the handover between two pieces of
> software. For instance, what if the firmware is using the existing
> interface but the OS the new one? Similar question about Kexecing a
> different kernel.
> 
> This part is mostly documentation so we can discuss about the approach
> and review the implementation.
> 
> 
> 
> I am still in favor of the new hypercall (and still in my todo list)
> but I haven't yet found time to revive the series.
> 
> Would you be willing to take over the series? I would be happy to
> bring you up to speed and provide review.
> 
> Sure I can take it over.
> 
> I ported it to master version of xen and I tested it on a board.
> I still need to do a deep review of the code myself but I have an
> understanding of the problem and what is the idea.
> 
> Any help to get up to speed would be more than welcome :-)
> I would recommend to go through the latest version (v3) and the
> previous (v2). I am also suggesting v2 because I think the split was
> easier to review/understand.
> 
> The x86 code is probably what is going to give you the most trouble as
> there are two ABIs to support (compat and non-compat). If you don't
> have an x86 setup, I should be able to test it/help write it.
> 
> Feel free to ask any questions and I will try my best to remember the
> discussion from last year :).
> 
> At risk of being shouted down again, a new hypercall isn't necessarily
> necessary, and there are probably better ways of fixing it.
> 
> The underlying ABI problem is that the area is registered by virtual
> address.  The only correct way this should have been done is to register
> by guest physical address, so Xen's updating of the data doesn't
> interact with the guest pagetable settings/restrictions.  x86 suffers
> the same kind of problems as ARM, except we silently squash the fallout.
> 
> The logic in Xen is horrible, and I would really rather it was deleted
> completely, rather than to be kept for compatibility.
> 
> The runstate area is always fixed kernel memory and doesn't move.  I
> believe it is already restricted from crossing a page boundary, and we
> can calculate the va=>pa translation when the hypercall is made.
> 
> Yes - this is technically an ABI change, but nothing is going to break
> (AFAICT) and the cleanup win is large enough to make this a *very*
> attractive option.
> 
> I suggested this approach two years ago [1] but you were the one
> saying that buffer could cross page-boundary on older Linux [2]:
> 
> "I'd love to do this, but we can't.  Older Linux used to have a virtual
> buffer spanning a page boundary.  Changing the behaviour under that will
> cause older setups to explode."
> 
> Sorry, this was a long time ago, and the details have faded. IIRC there was
> even a proposal (or patch set) that took that into account and allowed
> buffers to span across a page boundary by taking a reference to two
> different pages in that case.
> 
> I am not aware of a patch set. Juergen suggested a per-domain mapping, but
> there were no details on how this could be done (my e-mail was left
> unanswered [1]).
> 
> If we were using vmap() then we would need up to 1MB per domain (assuming
> 128 vCPUs). This sounds like quite a bit, and I think we need to agree on
> whether it would be an acceptable solution (this was also left unanswered [1]).
> 
> Could we map/unmap the runtime area on domain switch at a per-cpu
> based linear space area? There's no reason to have all the runtime
> areas mapped all the time, you just care about the one from the
> running vcpu.
> 
> Maybe the overhead of that mapping and unmapping would be
> too high? But seeing that we are aiming for a secret-free Xen, we would
> have to go that route eventually anyway.
> 
> Maybe the new hypercall should be a bit different:
> - we already have this area allocated inside Xen and we make a copy of it on every context switch
> - the guest is not supposed to modify any data in this area
> 
> We could introduce a new hypercall:
> - Xen allocates the runstate area using a page-aligned address and size

It's generally best if we can use a guest provided memory area that's
already populated, because...

> - the guest provides a free guest-physical range to the hypercall

... it's hard for the guest to figure out which non-populated areas
are safe for mapping arbitrary things. I.e. you might attempt to map
the runstate area on top of some MMIO region the guest is not aware of,
for example if it has passthrough enabled.

> - Xen maps its own area read-only into the guest at the provided address
> - Xen shall not modify any data in the runstate area of other cores/guests (this should already be the case)

I'm not sure those two restrictions matter: it's of no concern to
Xen whether the guest decides to overwrite the runstate area. Xen will
just write to it when doing a context switch in order to update it,
but it should never read from it.

Or are you meaning to map vcpu->runstate directly into the guest
physmap?

I think that's a bad idea, as we would then have to force each vCPU
runstate to take up a whole page, wasting lots of memory.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 15 10:08:30 2020
Subject: Re: Error during update_runstate_area with KPTI activated
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <108a179b-d8ea-01b9-6c6b-9f5cc57f6dc0@xen.org>
Date: Fri, 15 May 2020 11:08:15 +0100
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>

Hi,

On 15/05/2020 10:57, Roger Pau Monné wrote:
> On Fri, May 15, 2020 at 10:23:16AM +0100, Julien Grall wrote:
>>
>> On 15/05/2020 10:10, Roger Pau Monné wrote:
>>> On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 15/05/2020 09:38, Roger Pau Monné wrote:
>>>>> On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
>>>>>>
>>>>>>
>>>>>> On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com<mailto:julien.grall.oss@gmail.com>> wrote:
>>>>>> "I'd love to do this, but we can't.  Older Linux used to have a virtual
>>>>>> buffer spanning a page boundary.  Changing the behaviour under that will
>>>>>> cause older setups to explode."
>>>>>
>>>>> Sorry this was long time ago, and details have faded. IIRC there was
>>>>> even a proposal (or patch set) that took that into account and allowed
>>>>> buffers to span across a page boundary by taking a reference to two
>>>>> different pages in that case.
>>>>
>>>> I am not aware of a patch set. Juergen suggested a per-domain mapping but
>>>> there was no details how this could be done (my e-mail was left unanswered
>>>> [1]).
>>>>
>>>> If we were using the vmap() then we would need up 1MB per domain (assuming
>>>> 128 vCPUs). This sounds quite a bit and I think we need to agree whether it
>>>> would be an acceptable solution (this was also left unanswered [1]).
>>>
>>> Could we map/unmap the runtime area on domain switch at a per-cpu
>>> based linear space area? There's no reason to have all the runtime
>>> areas mapped all the time, you just care about the one from the
>>> running vcpu.
>>
>> AFAICT, this is only used during context switching. This is a bit surprising
>> because I would expect it to be updated when the vCPU is running.
>>
>> So maybe we could just use map_domain_page() and handle crossing a page
>> boundary manually.
>>
>>>
>>> Maybe the overhead of that mapping and unmapping would be
>>> too high? But seeing that we are aiming at a secret-free Xen we would
>>> have to eventually go that route anyway.
>>
>> The overhead is likely to be higher with the existing code, as you have to
>> walk the guest page-tables and the p2m every time in order to translate the
>> guest virtual address to a host physical address.
> 
> Maybe I'm getting confused, but you actually want to avoid the guest
> page table walk, as the guest might be running with user-space page
> tables that don't have the linear address of the runtime area mapped,
> and hence you would like to do the walk only once (at hypercall
> registration time) and keep a reference to the page(s)?

That's right.

> 
> I assumed the whole point of this was to avoid doing the page table
> walk when you need to update the runstate info area.

Sorry, I wasn't clear. I was trying to answer your question about the 
overhead.

The overhead with SH and the existing runstate implementation is going 
to be quite high, because you would need to map/unmap each page table 
during the walk. By removing the walk, you would have only one map/unmap 
for the update, which I think is acceptable.

So the change discussed in this thread is also going to be beneficial 
for SH even if we keep a map/unmap in the process.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 15 10:11:00 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Error during update_runstate_area with KPTI activated
Date: Fri, 15 May 2020 10:10:39 +0000
Message-ID: <0E6DD742-3C79-4CD2-93A1-6D054377A919@arm.com>
Cc: Hongyan Xia <hx242@xen.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>

IGNhcmUgYWJvdXQgdGhlIG9uZSBmcm9tIHRoZQ0KPj4+IHJ1bm5pbmcgdmNwdS4NCj4+PiANCj4+
PiBNYXliZSB0aGUgb3ZlcmhlYWQgb2YgdGhhdCBtYXBwaW5nIGFuZCB1bm1hcHBpbmcgd291bGQg
YmUNCj4+PiB0b28gaGlnaD8gQnV0IHNlZWluZyB0aGF0IHdlIGFyZSBhaW1pbmcgYXQgYSBzZWNy
ZXQtZnJlZSBYZW4gd2Ugd291bGQNCj4+PiBoYXZlIHRvIGV2ZW50dWFsbHkgZ28gdGhhdCByb3V0
ZSBhbnl3YXkuDQo+PiBNYXliZSB0aGUgbmV3IGh5cGVyY2FsbCBzaG91bGQgYmUgYSBiaXQgZGlm
ZmVyZW50Og0KPj4gLSB3ZSBoYXZlIHRoaXMgYXJlYSBhbGxvY2F0ZWQgYWxyZWFkeSBpbnNpZGUg
WGVuIGFuZCB3ZSBkbyBhIGNvcHkgb2YgaXQgb24gYW55IGNvbnRleHQgc3dpdGNoDQo+PiAtIHRo
ZSBndWVzdCBpcyBub3Qgc3VwcG9zZWQgdG8gbW9kaWZ5IGFueSBkYXRhIGluIHRoaXMgYXJlYQ0K
Pj4gV2UgY291bGQgaW50cm9kdWNlIGEgbmV3IGh5cGVyY2FsbDoNCj4+IC0gWGVuIGFsbG9jYXRl
IHRoZSBydW5zdGF0ZSBhcmVhIHVzaW5nIGEgcGFnZSBhbGlnbmVkIGFkZHJlc3MgYW5kIHNpemUN
Cj4gDQo+IEF0IHRoZSBtb21lbnQgdGhlIHJ1bnN0YXRlIGlzIDQwIGJ5dGVzLiBJZiB3ZSB3ZXJl
IGdvaW5nIHRvIGZvbGxvdyB0aGlzIHByb3Bvc2FsLCBJIHdvdWxkIHJlY29tbWVuZCB0byB0cnkg
dG8gaGF2ZSBhcyBtYW55IHJ1bnN0YXRlIGFzIHBvc3NpYmxlIGluIHlvdXIgcGFnZS4NCj4gDQo+
IE90aGVyZXdpc2UsIHlvdSB3b3VsZCB3YXN0ZSA0MDU2IGJ5dGVzIHBlciB2Q1BVIGluIGJvdGgg
WGVuIGFuZCB0aGUgZ3Vlc3QgT1MuIFRoaXMgd291bGQgZXZlbiBiZSB3b3JzZSBmb3IgNjRLQiBr
ZXJuZWwuDQoNCkFncmVlLCBzbyBpdCBzaG91bGQgYmUgb25lIGNhbGwgdG8gaGF2ZSBhbiBhcmVh
IHdpdGggdGhlIHJ1bnN0YXRlIGZvciBhbGwgdkNQVXMsIGVuc3VyZSBhIHZDUFUgcnVuc3RhdGUg
aGFzIGEgc2l6ZSBhbmQgYW4gYWRkcmVzcyB3aGljaCBhcmUgY2FjaGUgbGluZSBzaXplIGFsaWdu
ZWQgdG8gcHJldmVudCBjb2hlcmVuY3kgc3RyZXNzLiANCg0KPiANCj4gDQo+PiAtIHRoZSBndWVz
dCBwcm92aWRlIGEgZnJlZSBndWVzdCBwaHlzaWNhbCBzcGFjZSB0byB0aGUgaHlwZXJjYWxsDQo+
IA0KPiBUaGlzIHBhcnQgaXMgdGhlIG1vc3QgdHJpY2t5IHBhcnQuIEhvdyBkb2VzIHRoZSBndWVz
dCBrbm93IHdoYXQgaXMgZnJlZSBpbiBpdHMgcGh5c2ljYWwgYWRkcmVzcyBzcGFjZT8NCj4gDQo+
IEkgYW0gbm90IGF3YXJlIG9mIGFueSB3YXkgdG8gZG8gdGhpcyBpbiBMaW51eC4gU28gdGhlIGJl
c3QgeW91IGNvdWxkIGRvIHdvdWxkIGJlIHRvIGFsbG9jYXRlIGEgcGFnZSBmcm9tIHRoZSBSQU0g
YW5kIHRlbGwgWGVuIHRvIHJlcGxhY2UgaXQgd2l0aCB0aGUgcnVuc3RhdGUgbWFwcGluZy4NCj4g
DQo+IEhvd2V2ZXIsIHRoaXMgYWxzbyBtZWFucyB5b3UgYXJlIGdvaW5nIHRvIHBvc3NpYmx5IHNo
YXR0ZXIgYSBzdXBlcnBhZ2UgaW4gdGhlIFAyTS4gVGhpcyBtYXkgYWZmZWN0IHRoZSBwZXJmb3Jt
YW5jZSBpbiBsb25nLXJ1bi4NCg0KVmVyeSB0cnVlLCBMaW51eCBkb2VzIG5vdCBoYXZlIGEgd2F5
IHRvIGRvIHRoYXQuDQpXaGF0IGFib3V0IGdvaW5nIHRoZSBvdGhlciB3YXkgYXJvdW5kOiBYZW4g
Y2FuIHByb3ZpZGUgdGhlIHBoeXNpY2FsIGFkZHJlc3MgdG8gdGhlIGd1ZXN0Lg0KDQo+IA0KPj4g
LSBYZW4gbWFwcyByZWFkLW9ubHkgaXRzIG93biBhcmVhIHRvIHRoZSBndWVzdCBhdCB0aGUgcHJv
dmlkZWQgYWRkcmVzcw0KPj4gLSBYZW4gc2hhbGwgbm90IG1vZGlmeSBhbnkgZGF0YSBpbiB0aGUg
cnVuc3RhdGUgYXJlYSBvZiBvdGhlciBjb3Jlcy9ndWVzdHMgKHNob3VsZCBhbHJlYWR5IGJlIHRo
ZSBjYXNlKQ0KPj4gLSBXZSBrZWVwIHRoZSBjdXJyZW50IGh5cGVyY2FsbCBmb3IgYmFja3dhcmQg
Y29tcGF0aWJpbGl0eSBhbmQgbWFwIHRoZSBhcmVhbCBkdXJpbmcgdGhlIGh5cGVyY2FsbCBhbmQg
a2VlcCB0aGUgYXJlYSBtYXBwZWQgYXQgYWxsIHRpbWUsIHdlIGtlZXAgZG9pbmcgdGhlIGNvcHkg
ZHVyaW5nIGNvbnRleHQgc3dpdGNoZXMNCj4+IFRoaXMgd291bGQgaGlnaGx5IHJlZHVjZSB0aGUg
b3ZlcmhlYWQgYnkgcmVtb3ZpbmcgdGhlIG1hcHBpbmcvdW5tYXBwaW5nLg0KPiANCj4gSSBkb24n
dCB0aGluayB0aGUgb3ZlcmhlYWQgaXMgZ29pbmcgdG8gYmUgc2lnbmlmaWNhbnQgd2l0aCBkb21h
aW5fbWFwX3BhZ2UoKS9kb21haW5fdW5tYXBfcGFnZSgpLg0KPiANCj4gT24gQXJtNjQsIHRoZSBt
ZW1vcnkgaXMgYWx3YXlzIG1hcHBlZCBzbyBtYXAvdW5tYXAgaXMgYSBOT1AuIE9uIEFybTMyLCB3
ZSBoYXZlIGEgZmFzdCBtYXAvdW5tYXAgaW1wbGVtZW50YXRpb24uDQo+IA0KPiBPbiB4ODYsIHdp
dGhvdXQgU0gsIG1vc3Qgb2YgdGhlIG1lbW9yeSBpcyBhbHNvIGFsd2F5cyBtYXBwZWQuIFNvIHRo
aXMgb3BlcmF0aW9uIGlzIG1vc3RseSBhIE5PUC4gRm9yIHRoZSBTSCBjYXNlLCB0aGUgbWFwL3Vu
bWFwIHdpbGwgYmUgdXNlZCBpbiBhbnkgYWNjZXNzIHRvIHRoZSBndWVzdCBtZW1vcnkgKHN1Y2gg
YXMgaHlwZXJjYWxscyBhY2Nlc3MpIGJ1dCBpdCBpcyBxdWl0ZSBvcHRpbWl6ZWQuDQo+IA0KPiBO
b3RlIHRoYXQgdGhlIGN1cnJlbnQgb3ZlcmhlYWQgaXMgbXVjaCBtb3JlIGltcG9ydGFudCB0b2Rh
eSBhcyB5b3UgbmVlZCB0byB3YWxrIHRoZSBndWVzdCBQVCBhbmQgUDJNICh3ZSBhcmUgdGFsa2lu
ZyBhdCBtdWx0aXBsZSBtYXAvdW5tYXApLiBTbyBtb3ZpbmcgdG8gb25lIG1hcC91bm1hcCBpcyBh
bHJlYWR5IGdvaW5nIHRvIGJlIGEgbWFqb3IgaW1wcm92ZW1lbnQuDQoNCkFncmVlDQoNCj4gDQo+
PiBSZWdhcmRpbmcgdGhlIHNlY3JldCBmcmVlIEkgZG8gbm90IHJlYWxseSB0aGluayB0aGlzIGlz
IHNvbWV0aGluZyBwcm9ibGVtYXRpYyBoZXJlIGFzIHdlIGFscmVhZHkgaGF2ZSBhIGNvcHkgb2Yg
dGhpcyBpbnRlcm5hbGx5IGFueXdheQ0KPiANCj4gVGhlIHNlY3JldCBmcmVlIHdvcmsgaXMgc3Rp
bGwgdW5kZXIgcmV2aWV3LCBzbyB3aGF0IGlzIGRvbmUgaW4gWGVuIHRvZGF5IHNob3VsZG4ndCBk
aWN0YXRlIHRoZSBmdXR1cmUuDQo+IA0KPiBUaGUgcXVlc3Rpb24gdG8gYW5zd2VyIGlzIHdoZXRo
ZXIgd2UgYmVsaWV2ZSBsZWFraW5nIHRoZSBjb250ZW50IG1heSBiZSBhIHByb2JsZW0uIElmIHRo
ZSBhbnN3ZXIgaXMgeWVzLCB0aGVuIG1vc3QgbGlrZWx5IHdlIHdpbGwgd2FudCB0aGUgaW50ZXJu
YWwgcmVwcmVzZW50YXRpb24gdG8gYmUgbWFwcGVkIG9uIGRlbWFuZCBvciBqdXN0IG1hcHBlZCBm
b3IgWGVuIFBUIGFzc29jaWF0ZWQgZm9yIHRoYXQgZG9tYWluLg0KPiANCj4gTXkgZ3V0IGZlZWxp
bmcgaXMgdGhlIHJ1bnN0YXRlIGNvbnRlbnQgaXMgbm90IGNyaXRpY2FsLiBCdXQgSSBoYXZlbid0
IGZ1bGx5IHRob3VnaHQgdGhyb3VnaCB5ZXQuDQoNClRoZSBydW5zdGF0ZSBpbmZvcm1hdGlvbiBp
cyBzdG9yZWQgaW5zaWRlIHhlbiBhbmQgdGhlbiBjb3BpZWQgdG8gdGhlIGd1ZXN0IG1lbW9yeSBk
dXJpbmcgY29udGV4dCBzd2l0Y2guIFNvIGV2ZW4gaWYgdGhlIGd1ZXN0IGFyZWEgaXMgbm90IG1h
cHBlZCwgdGhpcyBpbmZvcm1hdGlvbiBpcyBzdGlsbCBhdmFpbGFibGUgaW5zaWRlIHRoZSB4ZW4g
aW50ZXJuYWwgY29weS4NCg0KQ2hlZXJzDQpCZXJ0cmFuZA0KDQo+IA0KPiBDaGVlcnMsDQo+IA0K
PiAtLSANCj4gSnVsaWVuIEdyYWxsDQoNCg==


From xen-devel-bounces@lists.xenproject.org Fri May 15 10:22:09 2020
Date: Fri, 15 May 2020 12:21:49 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 4/6] x86/mem-paging: add minimal lock order
 enforcement to p2m_mem_paging_prep()
Message-ID: <20200515102149.GS54375@Air-de-Roger>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <4af1f459-fe7a-cd61-43cb-fb3fa4f15c00@suse.com>
 <20200514162545.GI54375@Air-de-Roger>
 <8b9fd4ce-177f-6f57-8d24-8468fea0c299@suse.com>
In-Reply-To: <8b9fd4ce-177f-6f57-8d24-8468fea0c299@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>

On Fri, May 15, 2020 at 11:46:23AM +0200, Jan Beulich wrote:
> 
> On 14.05.2020 18:25, Roger Pau Monné wrote:
> > On Thu, Apr 23, 2020 at 10:38:44AM +0200, Jan Beulich wrote:
> >> While full checking is impossible (as the lock is being acquired/
> >> released down the call tree), perform at least a lock level check.
> > 
> > I'm slightly confused, doesn't alloc_domheap_page already have its
> > own lock order checking?
> 
> I don't see how it would, as it doesn't (and can't legitimately)
> include arch/x86/mm/mm-locks.h. Also maybe this comment in the
> header clarifies it:
> 
> /* Page alloc lock (per-domain)
>  *
>  * This is an external lock, not represented by an mm_lock_t. However,
>  * pod code uses it in conjunction with the p2m lock, and expecting
>  * the ordering which we enforce here.
>  * The lock is not recursive. */


Thanks.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 15 11:21:21 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: <xen-devel@lists.xenproject.org>
Subject: Items for CHANGELOG.md in 4.14
Date: Fri, 15 May 2020 12:20:44 +0100
Message-ID: <000401d62aaa$e0eccab0$a2c66010$@xen.org>
Reply-To: paul@xen.org
Cc: 'Roger Pau Monne' <roger.pau@citrix.com>

All,

  In the last community call I took an ACTION to send a reminder email to submit patches to get CHANGELOG.md up to date. Several
items were mentioned during the call (see minutes at https://cryptpad.fr/pad/#/2/pad/edit/qPQUEQEv3nJJ97clS8b2KdtP/):

- Ability to conditionally compile-out 32-bit PV guests (security attack surface reduction)
- Basic support for AMD Milan CPUs
- Golang binding advances
- x2apic, improvements on tlbflush hypercall
- General improvements in pvshim
- Xen on Hyper-V
- VM-fork

  Roger has already sent a patch for the x2apic/tlbflush improvements (which has not yet been committed). Can those with the
relevant information regarding the other items please send patches a.s.a.p.?

  Thanks,

    Paul Durrant



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:22:17 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150179-mainreport@xen.org>
Subject: [xen-unstable test] 150179: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 11:21:56 +0000

flight 150179 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150179/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      10 debian-di-install        fail REGR. vs. 150164

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150164
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150164
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150164
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150164
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150164
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150164
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150164
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  f8644fe441abfd8de8b1f237229cfbe600a58701
baseline version:
 xen                  9d83ad86834300927b636fa02b29d84854399ed8

Last test of basis   150164  2020-05-13 17:36:57 Z    1 days
Failing since        150169  2020-05-14 03:54:42 Z    1 days    2 attempts
Testing same since   150179  2020-05-14 13:37:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f8644fe441abfd8de8b1f237229cfbe600a58701
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu Apr 30 15:25:47 2020 +0100

    xen/Kconfig: define EXPERT as a bool rather than a string
    
    Since commit f80fe2b34f08 "xen: Update Kconfig to Linux v5.4" EXPERT
    can only have two values (enabled or disabled). So switch from a string
    to a bool.
    
    Take the opportunity to replace all instances of "EXPERT = y" with
    "EXPERT", and squash the bool and prompt lines together in the
    modified places.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Dario Faggioli <dfaggioli@suse.com>
    Acked-by: Stefano Stabellini <sstabellini@kernel.org>
    Acked-by: Jan Beulich <jbeulich@suse.com>
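
    The shape of the change described above can be sketched as a
    hypothetical Kconfig fragment (option text illustrative, not the
    actual Xen source):

```kconfig
# Before: EXPERT was a string, so dependent options tested EXPERT = "y"
config EXPERT
	string "Configure standard Xen features (expert users)"

# After: EXPERT is a bool, tested simply as EXPERT, with the
# bool and prompt lines squashed together
config EXPERT
	bool "Configure standard Xen features (expert users)"

config EXAMPLE_OPTION
	bool "An example option only prompted for expert users" if EXPERT
```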

commit 122b52230aa5b79d65e18b8b77094027faa2f8e2
Author: Juergen Gross <jgross@suse.com>
Date:   Thu Apr 30 07:38:42 2020 +0200

    tools/xenstore: don't store domU's mfn of ring page in xenstored
    
    The XS_INTRODUCE command has two parameters: the mfn (or better: gfn)
    of the domain's xenstore ring page and the event channel of the
    domain for communicating with Xenstore.
    
    The gfn is not really needed. It is stored in the per-domain struct
    in xenstored, and in case of another XS_INTRODUCE for the domain it
    is tested to match the original value. If it doesn't match, the
    command is aborted with EINVAL; otherwise the event channel to the
    domain is recreated.
    
    As XS_INTRODUCE is limited to dom0 and there is no real downside to
    recreating the event channel, just omit the test for a matching gfn
    and don't return EINVAL for multiple XS_INTRODUCE calls.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
    Acked-by: Wei Liu <wl@xen.org>

commit b539eeffc737d859dd1814c2e529e0ed0feba7a7
Author: Jan Beulich <jbeulich@suse.com>
Date:   Thu May 14 07:53:55 2020 +0200

    x86/PVH: PHYSDEVOP_pci_mmcfg_reserved should not blindly register a region
    
    The op has an "is reserved" flag, and hence registration shouldn't
    happen unilaterally.
    
    Fixes: eb3dd90e4089 ("x86/physdev: enable PHYSDEVOP_pci_mmcfg_reserved for PVH Dom0")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 3a218961b16f1f4feb1147f56338faf1ac8f5703
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 12 19:18:43 2020 +0100

    x86/build: Unilaterally disable -fcf-protection
    
    Xen doesn't support CET-IBT yet.  At a minimum, logic is required to enable it
    for supervisor use, but the livepatch functionality needs to learn not to
    overwrite ENDBR64 instructions.
    
    Furthermore, Ubuntu enables -fcf-protection by default, along with a buggy
    version of GCC-9 which objects to it in combination with
    -mindirect-branch=thunk-extern (fixed in GCC 10 and 9.4).
    
    Various objects (Xen boot path, Rombios 32 stubs) require .text to be at the
    beginning of the object.  These paths explode when .note.gnu.properties gets
    put ahead of .text and we end up executing the notes data.
    
    Disable -fcf-protection for all embedded objects.
    
    Reported-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
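
    The kind of change described might look like the following
    hypothetical Makefile fragment (variable and helper names
    illustrative, not the actual Xen build system):

```make
# Hypothetical sketch: append -fcf-protection=none to the flags used
# for embedded objects, guarded by a compiler-support check, so that
# no .note section for CET is emitted ahead of .text.
EMBEDDED_EXTRA_CFLAGS += $(call cc-option,$(CC),-fcf-protection=none,)
```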

commit 1a47731115c2c8eb510e135fa48ed51ad2e94a26
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 13 13:06:28 2020 +0100

    x86/build: move -fno-asynchronous-unwind-tables into EMBEDDED_EXTRA_CFLAGS
    
    Users of EMBEDDED_EXTRA_CFLAGS already use -fno-asynchronous-unwind-tables, or
    ought to.  This shrinks the size of the rombios 32bit stubs in guest memory.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 24f94fca23ad7c45806a1428331e1d602dfd8604
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 12 19:18:37 2020 +0100

    x86/build32: Discard all orphaned sections
    
    Linkers may put orphaned sections ahead of .text, which breaks the calling
    requirements.  A concrete example is Ubuntu's GCC-9 default of enabling
    -fcf-protection which causes us to try and execute .note.gnu.properties during
    Xen's boot.
    
    Put .got.plt in its own section as it specifically needs preserving from the
    linker's point of view, and discard everything else.  This will hopefully be
    more robust to other unexpected toolchain properties.
    
    Fixes boot from an Ubuntu build of Xen.
    
    Reported-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Tested-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>
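
    The linker-script arrangement described can be sketched roughly as
    follows (section list illustrative, not the actual Xen build32
    script):

```ld
SECTIONS
{
        .text : { *(.text) *(.text.*) }

        /* .got.plt specifically needs preserving from the
           linker's point of view, so keep it in its own section. */
        .got.plt : { *(.got.plt) }

        /* Discard all other input sections so no orphaned section
           (e.g. .note.gnu.properties from -fcf-protection) can be
           placed ahead of .text and get executed during boot. */
        /DISCARD/ : { *(*) }
}
```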

commit 9f74a7b66b0b03fe563779bb2c133051f1595ece
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Tue May 12 17:21:33 2020 +0100

    x86/guest: Fix assembler warnings with newer binutils
    
    GAS of at least version 2.34 complains:
    
      hypercall_page.S: Assembler messages:
      hypercall_page.S:24: Warning: symbol 'HYPERCALL_set_trap_table' already has its type set
      ...
      hypercall_page.S:71: Warning: symbol 'HYPERCALL_arch_7' already has its type set
    
    which is because the whole page is declared as STT_OBJECT already.  Rearrange
    .set with respect to .type in DECLARE_HYPERCALL() so STT_FUNC is already in
    place.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
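
    The reordering can be sketched for one hypothetical entry as follows
    (symbol name and offset illustrative):

```asm
        /* Emit .type before .set so the symbol is created with
           STT_FUNC already in place, rather than first inheriting
           STT_OBJECT from the page-wide declaration and then having
           .type warn that the type is already set. */
        .type HYPERCALL_set_trap_table, STT_FUNC
        .set  HYPERCALL_set_trap_table, hypercall_page + 0 * 32
```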

commit f3b0d25e343562dee29729cfaf32f8c79f8b6502
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Wed May 13 13:07:53 2020 +0100

    stubdom: Use matching quotes in error message
    
    This prevents syntax highlighting from believing the rest of the file is a
    string.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit d8a6a8b36d864e1e56d3c63b30892cbb4e55d65c
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Mon Mar 2 14:36:03 2020 +0000

    tools/libxc: Reduce feature handling complexity in xc_cpuid_apply_policy()
    
    xc_cpuid_apply_policy() is gaining extra parameters to untangle CPUID
    complexity in Xen.  While an improvement in general, it does have the
    unfortunate side effect of duplicating some settings across multiple
    parameters.
    
    Rearrange the logic to only consider 'pae' if no explicit featureset is
    provided.  This reduces the complexity for callers who have already provided a
    pae setting in the featureset.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Paul Durrant <pdurrant@amzn.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 61be48dc029294275348443f78a5e600ef28274f
Author: Nick Rosbrook <rosbrookn@gmail.com>
Date:   Wed May 13 10:18:19 2020 -0400

    golang/xenlight: add necessary module/package documentation
    
    Add a README and package comment giving a brief overview of the package.
    These also help pkg.go.dev generate better documentation.
    
    Also, add a copy of the LGPL (the same license used by libxl) to
    tools/golang/xenlight. This is required for the package to be shown
    on pkg.go.dev and added to the default module proxy, proxy.golang.org.
    
    Finally, add an entry for the xenlight package to SUPPORT.md.
    
    Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
    Reviewed-by: George Dunlap <george.dunlap@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 15 11:30:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:30:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYXw-0002cS-W0; Fri, 15 May 2020 11:30:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZYXw-0002cN-4L
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:30:32 +0000
X-Inumbo-ID: 78621886-969f-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78621886-969f-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 11:30:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hJH5/efeAXhzumNImE0ENbTPkCcgH+zmbbKcwaGztv0=; b=DjMXTgSCHiLAgQjrWVWeBCNr1
 mOAIMtL1uaPqxGXVA8K7MjWC2tWolhXQsGTVRDfAC2K7wkxzeE43WWoO8cftCmZEf7c89Tn++OEOP
 aRfsTLJhdjoi0j4fHlLIdzUWrDX91LdeltY7feSpX0tinC1dHQuUg/dlnOY3uzZdNiYRA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZYXn-0005nO-AE; Fri, 15 May 2020 11:30:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZYXm-0005Dm-U2; Fri, 15 May 2020 11:30:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZYXm-00033r-TQ; Fri, 15 May 2020 11:30:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150185-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150185: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
 qemu-mainline:build-arm64:xen-build:fail:regression
 qemu-mainline:build-amd64:xen-build:fail:regression
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
 qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=013a18edbbc59cdad019100c7d03c0494642b74c
X-Osstest-Versions-That: qemuu=d5c75ec500d96f1d93447f990cd5a4ef5ba27fae
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 11:30:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150185 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150185/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-xsm       7 xen-boot                 fail REGR. vs. 150151
 build-arm64                   6 xen-build                fail REGR. vs. 150151
 build-amd64                   6 xen-build                fail REGR. vs. 150151

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150151

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150151
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150151
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass

version targeted for testing:
 qemuu                013a18edbbc59cdad019100c7d03c0494642b74c
baseline version:
 qemuu                d5c75ec500d96f1d93447f990cd5a4ef5ba27fae

Last test of basis   150151  2020-05-12 19:38:05 Z    2 days
Failing since        150174  2020-05-14 10:06:36 Z    1 days    2 attempts
Testing same since   150185  2020-05-14 18:39:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cornelia Huck <cohuck@redhat.com>
  Denis Plotnikov <dplotnikov@virtuozzo.com>
  Dongjiu Geng <gengdongjiu@huawei.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Greg Kurz <groug@kaod.org>
  Joe Komlodi <joe.komlodi@xilinx.com>
  Joe Komlodi <komlodi@xilinx.com>
  Markus Armbruster <armbru@redhat.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Omar Sandoval <osandov@fb.com>
  Patrick Williams <patrick@stwcx.xyz>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Stefano Stabellini <sstabellini@kernel.org>
  Tong Ho <tong.ho@xilinx.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Xiang Zheng <zhengxiang9@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1339 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzZ-0004kQ-Ts; Fri, 15 May 2020 11:59:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzZ-0004kL-Bl
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:05 +0000
X-Inumbo-ID: 78a97984-96a3-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78a97984-96a3-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 11:59:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 09BAAAEC1;
 Fri, 15 May 2020 11:59:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 07/12] xen: provide version information in hypfs
Date: Fri, 15 May 2020 13:58:51 +0200
Message-Id: <20200515115856.11965-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Provide version and compile information in the /buildinfo/ node of the
Xen hypervisor file system. As this information is accessible by dom0
only, no additional security problem arises.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V3:
- new patch

V4:
- add __read_mostly annotations (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc | 45 ++++++++++++++++++++++++++++++++++
 xen/common/kernel.c          | 47 ++++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 39539fa1b5..d730caf394 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -105,3 +105,48 @@ A populated Xen hypervisor file system might look like the following example:
 #### /
 
 The root of the hypervisor file system.
+
+#### /buildinfo/
+
+A directory containing static information generated while building the
+hypervisor.
+
+#### /buildinfo/changeset = STRING
+
+Git commit of the hypervisor.
+
+#### /buildinfo/compileinfo/
+
+A directory containing information about the compilation of Xen.
+
+#### /buildinfo/compileinfo/compile_by = STRING
+
+Information about who compiled the hypervisor.
+
+#### /buildinfo/compileinfo/compile_date = STRING
+
+Date of the hypervisor compilation.
+
+#### /buildinfo/compileinfo/compile_domain = STRING
+
+Information about the compile domain.
+
+#### /buildinfo/compileinfo/compiler = STRING
+
+The compiler used to build Xen.
+
+#### /buildinfo/version/
+
+A directory containing version information of the hypervisor.
+
+#### /buildinfo/version/extra = STRING
+
+Extra version information.
+
+#### /buildinfo/version/major = INTEGER
+
+The major version of Xen.
+
+#### /buildinfo/version/minor = INTEGER
+
+The minor version of Xen.
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 572e3fc07d..db7bd23fcb 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -13,6 +13,7 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
+#include <xen/hypfs.h>
 #include <xsm/xsm.h>
 #include <asm/current.h>
 #include <public/version.h>
@@ -373,6 +374,52 @@ void __init do_initcalls(void)
         (*call)();
 }
 
+#ifdef CONFIG_HYPFS
+static unsigned int __read_mostly major_version;
+static unsigned int __read_mostly minor_version;
+
+static HYPFS_DIR_INIT(buildinfo, "buildinfo");
+static HYPFS_DIR_INIT(compileinfo, "compileinfo");
+static HYPFS_DIR_INIT(version, "version");
+static HYPFS_UINT_INIT(major, "major", major_version);
+static HYPFS_UINT_INIT(minor, "minor", minor_version);
+static HYPFS_STRING_INIT(changeset, "changeset");
+static HYPFS_STRING_INIT(compiler, "compiler");
+static HYPFS_STRING_INIT(compile_by, "compile_by");
+static HYPFS_STRING_INIT(compile_date, "compile_date");
+static HYPFS_STRING_INIT(compile_domain, "compile_domain");
+static HYPFS_STRING_INIT(extra, "extra");
+
+static int __init buildinfo_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &buildinfo, true);
+
+    hypfs_string_set_reference(&changeset, xen_changeset());
+    hypfs_add_leaf(&buildinfo, &changeset, true);
+
+    hypfs_add_dir(&buildinfo, &compileinfo, true);
+    hypfs_string_set_reference(&compiler, xen_compiler());
+    hypfs_string_set_reference(&compile_by, xen_compile_by());
+    hypfs_string_set_reference(&compile_date, xen_compile_date());
+    hypfs_string_set_reference(&compile_domain, xen_compile_domain());
+    hypfs_add_leaf(&compileinfo, &compiler, true);
+    hypfs_add_leaf(&compileinfo, &compile_by, true);
+    hypfs_add_leaf(&compileinfo, &compile_date, true);
+    hypfs_add_leaf(&compileinfo, &compile_domain, true);
+
+    major_version = xen_major_version();
+    minor_version = xen_minor_version();
+    hypfs_add_dir(&buildinfo, &version, true);
+    hypfs_string_set_reference(&extra, xen_extra_version());
+    hypfs_add_leaf(&version, &extra, true);
+    hypfs_add_leaf(&version, &major, true);
+    hypfs_add_leaf(&version, &minor, true);
+
+    return 0;
+}
+__initcall(buildinfo_init);
+#endif
+
 # define DO(fn) long do_##fn
 
 #endif
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzX-0004kF-MG; Fri, 15 May 2020 11:59:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzX-0004kA-8X
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:03 +0000
X-Inumbo-ID: 76ee0704-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76ee0704-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 700D2AE52;
 Fri, 15 May 2020 11:59:02 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 01/12] xen/vmx: let opt_ept_ad always reflect the current
 setting
Date: Fri, 15 May 2020 13:58:45 +0200
Message-Id: <20200515115856.11965-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In case opt_ept_ad has not been set explicitly by the user via the
command line or a runtime parameter, it is treated as "no" on Avoton CPUs.

Change that handling by setting opt_ept_ad to 0 explicitly for this
CPU type if no user value has been set.

By putting this into the (renamed) boot-time initialization of vmcs.c,
_vmx_cpu_up() can be made static.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 22 +++++++++++++++-------
 xen/arch/x86/hvm/vmx/vmx.c         |  4 +---
 xen/include/asm-x86/hvm/vmx/vmcs.h |  3 +--
 3 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 4c23645454..221af9737a 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -315,10 +315,6 @@ static int vmx_init_vmcs_config(void)
 
         if ( !opt_ept_ad )
             _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
-        else if ( /* Work around Erratum AVR41 on Avoton processors. */
-                  boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4d &&
-                  opt_ept_ad < 0 )
-            _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
 
         /*
          * Additional sanity checking before using EPT:
@@ -652,7 +648,7 @@ void vmx_cpu_dead(unsigned int cpu)
     vmx_pi_desc_fixup(cpu);
 }
 
-int _vmx_cpu_up(bool bsp)
+static int _vmx_cpu_up(bool bsp)
 {
     u32 eax, edx;
     int rc, bios_locked, cpu = smp_processor_id();
@@ -2108,9 +2104,21 @@ static void vmcs_dump(unsigned char ch)
     printk("**************************************\n");
 }
 
-void __init setup_vmcs_dump(void)
+int __init vmx_vmcs_init(void)
 {
-    register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
+    int ret;
+
+    if ( opt_ept_ad < 0 )
+        /* Work around Erratum AVR41 on Avoton processors. */
+        opt_ept_ad = !(boot_cpu_data.x86 == 6 &&
+                       boot_cpu_data.x86_model == 0x4d);
+
+    ret = _vmx_cpu_up(true);
+
+    if ( !ret )
+        register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
+
+    return ret;
 }
 
 static void __init __maybe_unused build_assertions(void)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 6efa80e422..11a4dd94cf 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2482,7 +2482,7 @@ const struct hvm_function_table * __init start_vmx(void)
 {
     set_in_cr4(X86_CR4_VMXE);
 
-    if ( _vmx_cpu_up(true) )
+    if ( vmx_vmcs_init() )
     {
         printk("VMX: failed to initialise.\n");
         return NULL;
@@ -2553,8 +2553,6 @@ const struct hvm_function_table * __init start_vmx(void)
         vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
     }
 
-    setup_vmcs_dump();
-
     lbr_tsx_fixup_check();
     bdf93_fixup_check();
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 95c1dea7b8..906810592f 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -21,11 +21,10 @@
 #include <xen/mm.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
-extern void setup_vmcs_dump(void);
+extern int vmx_vmcs_init(void);
 extern int  vmx_cpu_up_prepare(unsigned int cpu);
 extern void vmx_cpu_dead(unsigned int cpu);
 extern int  vmx_cpu_up(void);
-extern int  _vmx_cpu_up(bool bsp);
 extern void vmx_cpu_down(void);
 
 struct vmcs_struct {
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzd-0004l8-8T; Fri, 15 May 2020 11:59:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzc-0004ky-4v
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:08 +0000
X-Inumbo-ID: 7726e7a5-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7726e7a5-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 232DAAE96;
 Fri, 15 May 2020 11:59:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 03/12] docs: add feature document for Xen hypervisor
 sysfs-like support
Date: Fri, 15 May 2020 13:58:47 +0200
Message-Id: <20200515115856.11965-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.

In the beginning there will only be basic support: entries can be
added from the hypervisor itself only, and there is a simple hypercall
interface to read the data.

Add a feature document to serve as the basis for a discussion of
the desired functionality and the entries to add.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V1:
- remove the "--" prefixes of the sub-commands of the user tool
  (Jan Beulich)
- rename xenfs to xenhypfs (Jan Beulich)
- add "tree" and "write" options to user tool

V2:
- move example tree to the paths description (Ian Jackson)
- specify allowed characters for keys and values (Ian Jackson)

V3:
- correct introduction (writable entries)

V4:
- add list specification
- add entry example (Julien Grall)
- correct date and Xen version (Julien Grall)
- add ARM64 as possible architecture (Julien Grall)
- add version description to the feature doc (Jan Beulich)

V8:
- clarify syntax used in hypfs-paths.pandoc (George Dunlap)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/features/hypervisorfs.pandoc |  92 +++++++++++++++++++++++++
 docs/misc/hypfs-paths.pandoc      | 107 ++++++++++++++++++++++++++++++
 2 files changed, 199 insertions(+)
 create mode 100644 docs/features/hypervisorfs.pandoc
 create mode 100644 docs/misc/hypfs-paths.pandoc

diff --git a/docs/features/hypervisorfs.pandoc b/docs/features/hypervisorfs.pandoc
new file mode 100644
index 0000000000..a0a0ead057
--- /dev/null
+++ b/docs/features/hypervisorfs.pandoc
@@ -0,0 +1,92 @@
+% Hypervisor FS
+% Revision 1
+
+\clearpage
+
+# Basics
+---------------- ---------------------
+         Status: **Supported**
+
+  Architectures: all
+
+     Components: Hypervisor, toolstack
+---------------- ---------------------
+
+# Overview
+
+The Hypervisor FS is a hierarchical name-value store for reporting
+information to guests, especially dom0. It is similar to the Linux
+kernel's sysfs. Entries and directories are created by the hypervisor,
+while the toolstack is able to use a hypercall to query the entry
+values or (if allowed by the hypervisor) to modify them.
+
+# User details
+
+With:
+
+    xenhypfs ls <path>
+
+the user can list the entries of a specific path of the FS. Using:
+
+    xenhypfs cat <path>
+
+the content of an entry can be retrieved. Using:
+
+    xenhypfs write <path> <string>
+
+a writable entry can be modified. With:
+
+    xenhypfs tree
+
+the complete Hypervisor FS entry tree can be printed.
+
+The FS paths are documented in `docs/misc/hypfs-paths.pandoc`.
+
+# Technical details
+
+Access to the hypervisor filesystem is done via the new stable hypercall
+__HYPERVISOR_filesystem_op. This hypercall supports a sub-command
+XEN_HYPFS_OP_get_version which will return the highest version of the
+interface supported by the hypervisor. Additions to the interface need
+to bump the interface version. The hypervisor is required to support the
+previous interface versions, too (this implies that additions will always
+require new sub-commands in order to allow the hypervisor to decide which
+version of the interface to use).
+
+* hypercall interface specification
+    * `xen/include/public/hypfs.h`
+* hypervisor internal files
+    * `xen/include/xen/hypfs.h`
+    * `xen/common/hypfs.c`
+* `libxenhypfs`
+    * `tools/libs/libxenhypfs/*`
+* `xenhypfs`
+    * `tools/misc/xenhypfs.c`
+* path documentation
+    * `docs/misc/hypfs-paths.pandoc`
+
+# Testing
+
+Any new parameters or hardware mitigations should be verified to show up
+correctly in the filesystem.
+
+# Areas for improvement
+
+* More detailed access rights
+* Entries per domain and/or per cpupool
+
+# Known issues
+
+* None
+
+# References
+
+* None
+
+# History
+
+------------------------------------------------------------------------
+Date       Revision Version  Notes
+---------- -------- -------- -------------------------------------------
+2020-01-23 1        Xen 4.14 Document written
+---------- -------- -------- -------------------------------------------
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
new file mode 100644
index 0000000000..39539fa1b5
--- /dev/null
+++ b/docs/misc/hypfs-paths.pandoc
@@ -0,0 +1,107 @@
+# Xenhypfs Paths
+
+This document attempts to define all the paths which are available
+in the Xen hypervisor file system (hypfs).
+
+The hypervisor file system can be accessed via the xenhypfs tool.
+
+## Notation
+
+The hypervisor file system is similar to the Linux kernel's sysfs.
+In this document directories are always specified with a trailing "/".
+
+The following notation conventions apply:
+
+        DIRECTORY/
+
+        PATH = VALUES [TAGS]
+
+The first syntax defines a directory. It normally contains related
+entries and the general scope of the directory is described.
+
+The second syntax defines a file entry containing values which are
+either set by the hypervisor or, if the file is writable, can be set
+by the user.
+
+PATH can contain simple regex constructs following the Perl compatible
+regexp syntax described in pcre(3) or perlre(1).
+
+A hypervisor file system entry name can be any 0-delimited byte string
+not containing any '/' character. The names "." and ".." are reserved
+for file system internal use.
+
+VALUES are strings and can take the following forms (note that this represents
+only the syntax used in this document):
+
+* STRING -- an arbitrary 0-delimited byte string.
+* INTEGER -- An integer, in decimal representation unless otherwise
+  noted.
+* "a literal string" -- literal strings are contained within quotes.
+* (VALUE | VALUE | ... ) -- a set of alternatives. Alternatives are
+  separated by a "|" and all the alternatives are enclosed in "(" and
+  ")".
+* {VALUE, VALUE, ... } -- a list of possible values separated by "," and
+  enclosed in "{" and "}".
+
+Additional TAGS may follow as a comma separated set of the following
+tags enclosed in square brackets.
+
+* w -- Path is writable by the user. This capability is usually
+  limited to the control domain (e.g. dom0).
+* ARM | ARM32 | ARM64 | X86: the path is available for the respective
+  architecture only.
+* PV --  Path is valid for PV capable hypervisors only.
+* HVM -- Path is valid for HVM capable hypervisors only.
+* CONFIG_* -- Path is valid only in case the hypervisor was built with
+  the respective config option.
+
+So an entry could look like this:
+
+    /cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]
+
+Possible values would be "No" or a list of "dom0", "domU", and "PCID-on" with
+the list elements separated by spaces, e.g. "dom0 PCID-on".
+The entry would be writable and it would exist on X86 only and only if the
+hypervisor is configured to support PV guests.
+
+## Example
+
+A populated Xen hypervisor file system might look like the following example:
+
+    /
+        buildinfo/           directory containing build-time data
+            config           contents of .config file used to build Xen
+        cpu-bugs/            x86: directory of cpu bug information
+            l1tf             "Vulnerable" or "Not vulnerable"
+            mds              "Vulnerable" or "Not vulnerable"
+            meltdown         "Vulnerable" or "Not vulnerable"
+            spec-store-bypass "Vulnerable" or "Not vulnerable"
+            spectre-v1       "Vulnerable" or "Not vulnerable"
+            spectre-v2       "Vulnerable" or "Not vulnerable"
+            mitigations/     directory of mitigation settings
+                bti-thunk    "N/A", "RETPOLINE", "LFENCE" or "JMP"
+                spec-ctrl    "No", "IBRS+" or "IBRS-"
+                ibpb         "No" or "Yes"
+                l1d-flush    "No" or "Yes"
+                md-clear     "No" or "VERW"
+                l1tf-barrier "No" or "Yes"
+            active-hvm/      directory for mitigations active in hvm domains
+                msr-spec-ctrl "No" or "Yes"
+                rsb          "No" or "Yes"
+                eager-fpu    "No" or "Yes"
+                md-clear     "No" or "Yes"
+            active-pv/       directory for mitigations active in pv domains
+                msr-spec-ctrl "No" or "Yes"
+                rsb          "No" or "Yes"
+                eager-fpu    "No" or "Yes"
+                md-clear     "No" or "Yes"
+                xpti         "No" or list of "dom0", "domU", "PCID-on"
+                l1tf-shadow  "No" or list of "dom0", "domU"
+        params/              directory with hypervisor parameter values
+                             (boot/runtime parameters)
+
+## General Paths
+
+#### /
+
+The root of the hypervisor file system.
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzf-0004lf-H1; Fri, 15 May 2020 11:59:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYze-0004lJ-96
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:10 +0000
X-Inumbo-ID: 78768182-96a3-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78768182-96a3-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 11:59:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 85714AEC2;
 Fri, 15 May 2020 11:59:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 05/12] libs: add libxenhypfs
Date: Fri, 15 May 2020 13:58:49 +0200
Message-Id: <20200515115856.11965-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the new library libxenhypfs for access to the hypervisor filesystem.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
V1:
- rename to libxenhypfs
- add xenhypfs_write()

V3:
- major rework due to new hypervisor interface
- add decompression capability

V4:
- add dependency on libz in pkgconfig file (Wei Liu)

V7:
- don't assume hypervisor's sizeof(bool) is the same as in user land

V8:
- add some comments regarding the semantics of the lib functions
  (George Dunlap)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                          |   2 +
 tools/Rules.mk                      |   6 +
 tools/libs/Makefile                 |   1 +
 tools/libs/hypfs/Makefile           |  16 +
 tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
 tools/libs/hypfs/include/xenhypfs.h |  90 +++++
 tools/libs/hypfs/libxenhypfs.map    |  10 +
 tools/libs/hypfs/xenhypfs.pc.in     |  10 +
 8 files changed, 671 insertions(+)
 create mode 100644 tools/libs/hypfs/Makefile
 create mode 100644 tools/libs/hypfs/core.c
 create mode 100644 tools/libs/hypfs/include/xenhypfs.h
 create mode 100644 tools/libs/hypfs/libxenhypfs.map
 create mode 100644 tools/libs/hypfs/xenhypfs.pc.in

diff --git a/.gitignore b/.gitignore
index 034f44b21b..7bd292f64d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -110,6 +110,8 @@ tools/libs/evtchn/headers.chk
 tools/libs/evtchn/xenevtchn.pc
 tools/libs/gnttab/headers.chk
 tools/libs/gnttab/xengnttab.pc
+tools/libs/hypfs/headers.chk
+tools/libs/hypfs/xenhypfs.pc
 tools/libs/call/headers.chk
 tools/libs/call/xencall.pc
 tools/libs/foreignmemory/headers.chk
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5b8cf748ad..ad6073fcad 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -19,6 +19,7 @@ XEN_LIBXENGNTTAB   = $(XEN_ROOT)/tools/libs/gnttab
 XEN_LIBXENCALL     = $(XEN_ROOT)/tools/libs/call
 XEN_LIBXENFOREIGNMEMORY = $(XEN_ROOT)/tools/libs/foreignmemory
 XEN_LIBXENDEVICEMODEL = $(XEN_ROOT)/tools/libs/devicemodel
+XEN_LIBXENHYPFS    = $(XEN_ROOT)/tools/libs/hypfs
 XEN_LIBXC          = $(XEN_ROOT)/tools/libxc
 XEN_XENLIGHT       = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
@@ -132,6 +133,11 @@ SHDEPS_libxendevicemodel = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLI
 LDLIBS_libxendevicemodel = $(SHDEPS_libxendevicemodel) $(XEN_LIBXENDEVICEMODEL)/libxendevicemodel$(libextension)
 SHLIB_libxendevicemodel  = $(SHDEPS_libxendevicemodel) -Wl,-rpath-link=$(XEN_LIBXENDEVICEMODEL)
 
+CFLAGS_libxenhypfs = -I$(XEN_LIBXENHYPFS)/include $(CFLAGS_xeninclude)
+SHDEPS_libxenhypfs = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_xencall)
+LDLIBS_libxenhypfs = $(SHDEPS_libxenhypfs) $(XEN_LIBXENHYPFS)/libxenhypfs$(libextension)
+SHLIB_libxenhypfs  = $(SHDEPS_libxenhypfs) -Wl,-rpath-link=$(XEN_LIBXENHYPFS)
+
 # code which compiles against libxenctrl get __XEN_TOOLS__ and
 # therefore sees the unstable hypercall interfaces.
 CFLAGS_libxenctrl = -I$(XEN_LIBXC)/include $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) $(CFLAGS_xeninclude) -D__XEN_TOOLS__
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 88901e7341..69cdfb5975 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -9,6 +9,7 @@ SUBDIRS-y += gnttab
 SUBDIRS-y += call
 SUBDIRS-y += foreignmemory
 SUBDIRS-y += devicemodel
+SUBDIRS-y += hypfs
 
 ifeq ($(CONFIG_RUMP),y)
 SUBDIRS-y := toolcore
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
new file mode 100644
index 0000000000..06dd449929
--- /dev/null
+++ b/tools/libs/hypfs/Makefile
@@ -0,0 +1,16 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+MAJOR    = 1
+MINOR    = 0
+LIBNAME  := hypfs
+USELIBS  := toollog toolcore call
+
+APPEND_LDFLAGS += -lz
+
+SRCS-y                 += core.c
+
+include ../libs.mk
+
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENHYPFS)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/hypfs/core.c b/tools/libs/hypfs/core.c
new file mode 100644
index 0000000000..d4309b5ae2
--- /dev/null
+++ b/tools/libs/hypfs/core.c
@@ -0,0 +1,536 @@
+/*
+ * Copyright (c) 2019 SUSE Software Solutions Germany GmbH
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define __XEN_TOOLS__ 1
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include <string.h>
+#include <zlib.h>
+
+#include <xentoollog.h>
+#include <xenhypfs.h>
+#include <xencall.h>
+#include <xentoolcore_internal.h>
+
+#include <xen/xen.h>
+#include <xen/hypfs.h>
+
+#define BUF_SIZE 4096
+
+struct xenhypfs_handle {
+    xentoollog_logger *logger, *logger_tofree;
+    unsigned int flags;
+    xencall_handle *xcall;
+};
+
+xenhypfs_handle *xenhypfs_open(xentoollog_logger *logger,
+                               unsigned open_flags)
+{
+    xenhypfs_handle *fshdl = calloc(1, sizeof(*fshdl));
+
+    if (!fshdl)
+        return NULL;
+
+    fshdl->flags = open_flags;
+    fshdl->logger = logger;
+    fshdl->logger_tofree = NULL;
+
+    if (!fshdl->logger) {
+        fshdl->logger = fshdl->logger_tofree =
+            (xentoollog_logger*)
+            xtl_createlogger_stdiostream(stderr, XTL_PROGRESS, 0);
+        if (!fshdl->logger)
+            goto err;
+    }
+
+    fshdl->xcall = xencall_open(fshdl->logger, 0);
+    if (!fshdl->xcall)
+        goto err;
+
+    /* No need to remember supported version, we only support V1. */
+    if (xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op,
+                 XEN_HYPFS_OP_get_version, 0, 0, 0, 0) < 0)
+        goto err;
+
+    return fshdl;
+
+err:
+    xtl_logger_destroy(fshdl->logger_tofree);
+    xencall_close(fshdl->xcall);
+    free(fshdl);
+    return NULL;
+}
+
+int xenhypfs_close(xenhypfs_handle *fshdl)
+{
+    if (!fshdl)
+        return 0;
+
+    xencall_close(fshdl->xcall);
+    xtl_logger_destroy(fshdl->logger_tofree);
+    free(fshdl);
+    return 0;
+}
+
+static int xenhypfs_get_pathbuf(xenhypfs_handle *fshdl, const char *path,
+                                char **path_buf)
+{
+    int ret = -1;
+    int path_sz;
+
+    if (!fshdl) {
+        errno = EBADF;
+        goto out;
+    }
+
+    path_sz = strlen(path) + 1;
+    if (path_sz > XEN_HYPFS_MAX_PATHLEN)
+    {
+        errno = ENAMETOOLONG;
+        goto out;
+    }
+
+    *path_buf = xencall_alloc_buffer(fshdl->xcall, path_sz);
+    if (!*path_buf) {
+        errno = ENOMEM;
+        goto out;
+    }
+    strcpy(*path_buf, path);
+
+    ret = path_sz;
+
+ out:
+    return ret;
+}
+
+static void *xenhypfs_inflate(void *in_data, size_t *sz)
+{
+    unsigned char *workbuf;
+    void *content = NULL;
+    unsigned int out_sz;
+    z_stream z = { .opaque = NULL };
+    int ret;
+
+    workbuf = malloc(BUF_SIZE);
+    if (!workbuf)
+        return NULL;
+
+    z.next_in = in_data;
+    z.avail_in = *sz;
+    ret = inflateInit2(&z, MAX_WBITS + 32); /* 32 == gzip */
+
+    for (*sz = 0; ret == Z_OK; *sz += out_sz) {
+        z.next_out = workbuf;
+        z.avail_out = BUF_SIZE;
+        ret = inflate(&z, Z_SYNC_FLUSH);
+        if (ret != Z_OK && ret != Z_STREAM_END)
+            break;
+
+        out_sz = z.next_out - workbuf;
+        content = realloc(content, *sz + out_sz);
+        if (!content) {
+            ret = Z_MEM_ERROR;
+            break;
+        }
+        memcpy(content + *sz, workbuf, out_sz);
+    }
+
+    inflateEnd(&z);
+    if (ret != Z_STREAM_END) {
+        free(content);
+        content = NULL;
+        errno = EIO;
+    }
+    free(workbuf);
+    return content;
+}
+
+static void xenhypfs_set_attrs(struct xen_hypfs_direntry *entry,
+                               struct xenhypfs_dirent *dirent)
+{
+    dirent->size = entry->content_len;
+
+    switch(entry->type) {
+    case XEN_HYPFS_TYPE_DIR:
+        dirent->type = xenhypfs_type_dir;
+        break;
+    case XEN_HYPFS_TYPE_BLOB:
+        dirent->type = xenhypfs_type_blob;
+        break;
+    case XEN_HYPFS_TYPE_STRING:
+        dirent->type = xenhypfs_type_string;
+        break;
+    case XEN_HYPFS_TYPE_UINT:
+        dirent->type = xenhypfs_type_uint;
+        break;
+    case XEN_HYPFS_TYPE_INT:
+        dirent->type = xenhypfs_type_int;
+        break;
+    case XEN_HYPFS_TYPE_BOOL:
+        dirent->type = xenhypfs_type_bool;
+        break;
+    default:
+        dirent->type = xenhypfs_type_blob;
+    }
+
+    switch (entry->encoding) {
+    case XEN_HYPFS_ENC_PLAIN:
+        dirent->encoding = xenhypfs_enc_plain;
+        break;
+    case XEN_HYPFS_ENC_GZIP:
+        dirent->encoding = xenhypfs_enc_gzip;
+        break;
+    default:
+        dirent->encoding = xenhypfs_enc_plain;
+        dirent->type = xenhypfs_type_blob;
+    }
+
+    dirent->is_writable = entry->max_write_len;
+}
+
+void *xenhypfs_read_raw(xenhypfs_handle *fshdl, const char *path,
+                        struct xenhypfs_dirent **dirent)
+{
+    void *retbuf = NULL, *content = NULL;
+    char *path_buf = NULL;
+    const char *name;
+    struct xen_hypfs_direntry *entry;
+    int ret;
+    int sz, path_sz;
+
+    *dirent = NULL;
+    ret = xenhypfs_get_pathbuf(fshdl, path, &path_buf);
+    if (ret < 0)
+        goto out;
+
+    path_sz = ret;
+
+    for (sz = BUF_SIZE;; sz = sizeof(*entry) + entry->content_len) {
+        if (retbuf)
+            xencall_free_buffer(fshdl->xcall, retbuf);
+
+        retbuf = xencall_alloc_buffer(fshdl->xcall, sz);
+        if (!retbuf) {
+            errno = ENOMEM;
+            goto out;
+        }
+        entry = retbuf;
+
+        ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op, XEN_HYPFS_OP_read,
+                       (unsigned long)path_buf, path_sz,
+                       (unsigned long)retbuf, sz);
+        if (!ret)
+            break;
+
+        if (ret != ENOBUFS) {
+            errno = -ret;
+            goto out;
+        }
+    }
+
+    content = malloc(entry->content_len);
+    if (!content)
+        goto out;
+    memcpy(content, entry + 1, entry->content_len);
+
+    name = strrchr(path, '/');
+    if (!name)
+        name = path;
+    else {
+        name++;
+        if (!*name)
+            name--;
+    }
+    *dirent = calloc(1, sizeof(struct xenhypfs_dirent) + strlen(name) + 1);
+    if (!*dirent) {
+        free(content);
+        content = NULL;
+        errno = ENOMEM;
+        goto out;
+    }
+    (*dirent)->name = (char *)(*dirent + 1);
+    strcpy((*dirent)->name, name);
+    xenhypfs_set_attrs(entry, *dirent);
+
+ out:
+    ret = errno;
+    xencall_free_buffer(fshdl->xcall, path_buf);
+    xencall_free_buffer(fshdl->xcall, retbuf);
+    errno = ret;
+
+    return content;
+}
+
+char *xenhypfs_read(xenhypfs_handle *fshdl, const char *path)
+{
+    char *buf, *ret_buf = NULL;
+    struct xenhypfs_dirent *dirent;
+    int ret;
+
+    buf = xenhypfs_read_raw(fshdl, path, &dirent);
+    if (!buf)
+        goto out;
+
+    switch (dirent->encoding) {
+    case xenhypfs_enc_plain:
+        break;
+    case xenhypfs_enc_gzip:
+        ret_buf = xenhypfs_inflate(buf, &dirent->size);
+        if (!ret_buf)
+            goto out;
+        free(buf);
+        buf = ret_buf;
+        ret_buf = NULL;
+        break;
+    }
+
+    switch (dirent->type) {
+    case xenhypfs_type_dir:
+        errno = EISDIR;
+        break;
+    case xenhypfs_type_blob:
+        errno = EDOM;
+        break;
+    case xenhypfs_type_string:
+        ret_buf = buf;
+        buf = NULL;
+        break;
+    case xenhypfs_type_uint:
+    case xenhypfs_type_bool:
+        switch (dirent->size) {
+        case 1:
+            ret = asprintf(&ret_buf, "%"PRIu8, *(uint8_t *)buf);
+            break;
+        case 2:
+            ret = asprintf(&ret_buf, "%"PRIu16, *(uint16_t *)buf);
+            break;
+        case 4:
+            ret = asprintf(&ret_buf, "%"PRIu32, *(uint32_t *)buf);
+            break;
+        case 8:
+            ret = asprintf(&ret_buf, "%"PRIu64, *(uint64_t *)buf);
+            break;
+        default:
+            ret = -1;
+            errno = EDOM;
+        }
+        if (ret < 0)
+            ret_buf = NULL;
+        break;
+    case xenhypfs_type_int:
+        switch (dirent->size) {
+        case 1:
+            ret = asprintf(&ret_buf, "%"PRId8, *(int8_t *)buf);
+            break;
+        case 2:
+            ret = asprintf(&ret_buf, "%"PRId16, *(int16_t *)buf);
+            break;
+        case 4:
+            ret = asprintf(&ret_buf, "%"PRId32, *(int32_t *)buf);
+            break;
+        case 8:
+            ret = asprintf(&ret_buf, "%"PRId64, *(int64_t *)buf);
+            break;
+        default:
+            ret = -1;
+            errno = EDOM;
+        }
+        if (ret < 0)
+            ret_buf = NULL;
+        break;
+    }
+
+ out:
+    ret = errno;
+    free(buf);
+    free(dirent);
+    errno = ret;
+
+    return ret_buf;
+}
+
+struct xenhypfs_dirent *xenhypfs_readdir(xenhypfs_handle *fshdl,
+                                         const char *path,
+                                         unsigned int *num_entries)
+{
+    void *buf, *curr;
+    int ret;
+    char *names;
+    struct xenhypfs_dirent *ret_buf = NULL, *dirent;
+    unsigned int n = 0, name_sz = 0;
+    struct xen_hypfs_dirlistentry *entry;
+
+    buf = xenhypfs_read_raw(fshdl, path, &dirent);
+    if (!buf)
+        goto out;
+
+    if (dirent->type != xenhypfs_type_dir ||
+        dirent->encoding != xenhypfs_enc_plain) {
+        errno = ENOTDIR;
+        goto out;
+    }
+
+    if (dirent->size) {
+        curr = buf;
+        for (n = 1;; n++) {
+            entry = curr;
+            name_sz += strlen(entry->name) + 1;
+            if (!entry->off_next)
+                break;
+
+            curr += entry->off_next;
+        }
+    }
+
+    ret_buf = malloc(n * sizeof(*ret_buf) + name_sz);
+    if (!ret_buf)
+        goto out;
+
+    *num_entries = n;
+    names = (char *)(ret_buf + n);
+    curr = buf;
+    for (n = 0; n < *num_entries; n++) {
+        entry = curr;
+        xenhypfs_set_attrs(&entry->e, ret_buf + n);
+        ret_buf[n].name = names;
+        strcpy(names, entry->name);
+        names += strlen(entry->name) + 1;
+        curr += entry->off_next;
+    }
+
+ out:
+    ret = errno;
+    free(buf);
+    free(dirent);
+    errno = ret;
+
+    return ret_buf;
+}
+
+int xenhypfs_write(xenhypfs_handle *fshdl, const char *path, const char *val)
+{
+    void *buf = NULL;
+    char *path_buf = NULL, *val_end;
+    int ret, saved_errno;
+    int sz, path_sz;
+    struct xen_hypfs_direntry *entry;
+    uint64_t mask;
+
+    ret = xenhypfs_get_pathbuf(fshdl, path, &path_buf);
+    if (ret < 0)
+        goto out;
+
+    path_sz = ret;
+    ret = -1;
+
+    sz = BUF_SIZE;
+    buf = xencall_alloc_buffer(fshdl->xcall, sz);
+    if (!buf) {
+        errno = ENOMEM;
+        goto out;
+    }
+
+    ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op, XEN_HYPFS_OP_read,
+                   (unsigned long)path_buf, path_sz,
+                   (unsigned long)buf, sizeof(*entry));
+    if (ret && errno != ENOBUFS)
+        goto out;
+    ret = -1;
+    entry = buf;
+    if (!entry->max_write_len) {
+        errno = EACCES;
+        goto out;
+    }
+    if (entry->encoding != XEN_HYPFS_ENC_PLAIN) {
+        /* Writing compressed data currently not supported. */
+        errno = EDOM;
+        goto out;
+    }
+
+    switch (entry->type) {
+    case XEN_HYPFS_TYPE_STRING:
+        if (sz < strlen(val) + 1) {
+            sz = strlen(val) + 1;
+            xencall_free_buffer(fshdl->xcall, buf);
+            buf = xencall_alloc_buffer(fshdl->xcall, sz);
+            if (!buf) {
+                errno = ENOMEM;
+                goto out;
+            }
+        }
+        sz = strlen(val) + 1;
+        strcpy(buf, val);
+        break;
+    case XEN_HYPFS_TYPE_UINT:
+        sz = entry->content_len;
+        errno = 0;
+        *(unsigned long long *)buf = strtoull(val, &val_end, 0);
+        if (errno || !*val || *val_end)
+            goto out;
+        mask = ~0ULL << (8 * sz);
+        if ((*(uint64_t *)buf & mask) && ((*(uint64_t *)buf & mask) != mask)) {
+            errno = ERANGE;
+            goto out;
+        }
+        break;
+    case XEN_HYPFS_TYPE_INT:
+        sz = entry->content_len;
+        errno = 0;
+        *(unsigned long long *)buf = strtoll(val, &val_end, 0);
+        if (errno || !*val || *val_end)
+            goto out;
+        mask = (sz == 8) ? 0 : ~0ULL << (8 * sz);
+        if ((*(uint64_t *)buf & mask) && ((*(uint64_t *)buf & mask) != mask)) {
+            errno = ERANGE;
+            goto out;
+        }
+        break;
+    case XEN_HYPFS_TYPE_BOOL:
+        sz = entry->content_len;
+        *(unsigned long long *)buf = 0;
+        if (!strcmp(val, "1") || !strcmp(val, "on") || !strcmp(val, "yes") ||
+            !strcmp(val, "true") || !strcmp(val, "enable"))
+            *(unsigned long long *)buf = 1;
+        else if (strcmp(val, "0") && strcmp(val, "no") && strcmp(val, "off") &&
+                 strcmp(val, "false") && strcmp(val, "disable")) {
+            errno = EDOM;
+            goto out;
+        }
+        break;
+    default:
+        /* No support for other types (yet). */
+        errno = EDOM;
+        goto out;
+    }
+
+    ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op,
+                   XEN_HYPFS_OP_write_contents,
+                   (unsigned long)path_buf, path_sz,
+                   (unsigned long)buf, sz);
+
+ out:
+    saved_errno = errno;
+    xencall_free_buffer(fshdl->xcall, path_buf);
+    xencall_free_buffer(fshdl->xcall, buf);
+    errno = saved_errno;
+    return ret;
+}
diff --git a/tools/libs/hypfs/include/xenhypfs.h b/tools/libs/hypfs/include/xenhypfs.h
new file mode 100644
index 0000000000..ab157edceb
--- /dev/null
+++ b/tools/libs/hypfs/include/xenhypfs.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) 2019 SUSE Software Solutions Germany GmbH
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef XENHYPFS_H
+#define XENHYPFS_H
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <sys/types.h>
+
+/* Callers who don't care don't need to #include <xentoollog.h> */
+struct xentoollog_logger;
+
+typedef struct xenhypfs_handle xenhypfs_handle;
+
+struct xenhypfs_dirent {
+    char *name;
+    size_t size;
+    enum {
+        xenhypfs_type_dir,
+        xenhypfs_type_blob,
+        xenhypfs_type_string,
+        xenhypfs_type_uint,
+        xenhypfs_type_int,
+        xenhypfs_type_bool
+    } type;
+    enum {
+        xenhypfs_enc_plain,
+        xenhypfs_enc_gzip
+    } encoding;
+    bool is_writable;
+};
+
+xenhypfs_handle *xenhypfs_open(struct xentoollog_logger *logger,
+                               unsigned int open_flags);
+int xenhypfs_close(xenhypfs_handle *fshdl);
+
+/*
+ * Return the raw contents of a Xen hypfs entry and its dirent containing
+ * the size, type and encoding.
+ * Returned buffer and dirent should be freed via free().
+ */
+void *xenhypfs_read_raw(xenhypfs_handle *fshdl, const char *path,
+                        struct xenhypfs_dirent **dirent);
+
+/*
+ * Return the contents of a Xen hypfs entry as a string.
+ * Returned buffer should be freed via free().
+ */
+char *xenhypfs_read(xenhypfs_handle *fshdl, const char *path);
+
+/*
+ * Return the contents of a Xen hypfs directory in form of an array of
+ * dirents.
+ * Returned buffer should be freed via free().
+ */
+struct xenhypfs_dirent *xenhypfs_readdir(xenhypfs_handle *fshdl,
+                                         const char *path,
+                                         unsigned int *num_entries);
+
+/*
+ * Write a Xen hypfs entry with a value. The value is converted from a string
+ * to the appropriate type.
+ */
+int xenhypfs_write(xenhypfs_handle *fshdl, const char *path, const char *val);
+
+#endif /* XENHYPFS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libs/hypfs/libxenhypfs.map b/tools/libs/hypfs/libxenhypfs.map
new file mode 100644
index 0000000000..47f1edda3e
--- /dev/null
+++ b/tools/libs/hypfs/libxenhypfs.map
@@ -0,0 +1,10 @@
+VERS_1.0 {
+	global:
+		xenhypfs_open;
+		xenhypfs_close;
+		xenhypfs_read_raw;
+		xenhypfs_read;
+		xenhypfs_readdir;
+		xenhypfs_write;
+	local: *; /* Do not expose anything by default */
+};
diff --git a/tools/libs/hypfs/xenhypfs.pc.in b/tools/libs/hypfs/xenhypfs.pc.in
new file mode 100644
index 0000000000..92a262c7a2
--- /dev/null
+++ b/tools/libs/hypfs/xenhypfs.pc.in
@@ -0,0 +1,10 @@
+prefix=@@prefix@@
+includedir=@@incdir@@
+libdir=@@libdir@@
+
+Name: Xenhypfs
+Description: The Xenhypfs library for Xen hypervisor
+Version: @@version@@
+Cflags: -I${includedir} @@cflagslocal@@
+Libs: @@libsflag@@${libdir} -lxenhypfs
+Requires.private: xentoolcore,xentoollog,xencall,z
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzh-0004n3-Uy; Fri, 15 May 2020 11:59:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzh-0004mh-5A
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:13 +0000
X-Inumbo-ID: 7708c986-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7708c986-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A4C78AE8C;
 Fri, 15 May 2020 11:59:02 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 02/12] xen: add a generic way to include binary files as
 variables
Date: Fri, 15 May 2020 13:58:46 +0200
Message-Id: <20200515115856.11965-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a new script, xen/tools/binfile, for including a binary file at build
time, making it usable in the hypervisor via a pointer and a size variable.

Make use of that generic tool in xsm.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wl@xen.org>
---
V3:
- new patch

V4:
- add alignment parameter (Jan Beulich)
- use .Lend instead of . (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                   |  1 +
 xen/tools/binfile            | 41 ++++++++++++++++++++++++++++++++++++
 xen/xsm/flask/Makefile       |  5 ++++-
 xen/xsm/flask/flask-policy.S | 16 --------------
 4 files changed, 46 insertions(+), 17 deletions(-)
 create mode 100755 xen/tools/binfile
 delete mode 100644 xen/xsm/flask/flask-policy.S

diff --git a/.gitignore b/.gitignore
index bfa53723b3..034f44b21b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -314,6 +314,7 @@ xen/test/livepatch/*.livepatch
 xen/tools/kconfig/.tmp_gtkcheck
 xen/tools/kconfig/.tmp_qtcheck
 xen/tools/symbols
+xen/xsm/flask/flask-policy.S
 xen/xsm/flask/include/av_perm_to_string.h
 xen/xsm/flask/include/av_permissions.h
 xen/xsm/flask/include/class_to_string.h
diff --git a/xen/tools/binfile b/xen/tools/binfile
new file mode 100755
index 0000000000..7bb35a5178
--- /dev/null
+++ b/xen/tools/binfile
@@ -0,0 +1,41 @@
+#!/bin/sh
+# usage: binfile [-i] [-a <align>] <target-src.S> <binary-file> <varname>
+# -a <align>  align data at 2^<align> boundary (default: byte alignment)
+# -i          add to .init.rodata (default: .rodata) section
+
+section=""
+align=0
+
+OPTIND=1
+while getopts "ia:" opt; do
+    case "$opt" in
+    i)
+        section=".init"
+        ;;
+    a)
+        align=$OPTARG
+        ;;
+    esac
+done
+
+target=$1
+binsource=$2
+varname=$3
+
+cat <<EOF >$target
+#include <asm/asm_defns.h>
+
+        .section $section.rodata, "a", %progbits
+
+        .p2align $align
+        .global $varname
+$varname:
+        .incbin "$binsource"
+.Lend:
+
+        .type $varname, %object
+        .size $varname, .Lend - $varname
+
+        .global ${varname}_size
+        ASM_INT(${varname}_size, .Lend - $varname)
+EOF
diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index eebfceecc5..d8486fc7e4 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -39,6 +39,9 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
 obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
 flask-policy.o: policy.bin
 
+flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
+	$(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
+
 FLASK_BUILD_DIR := $(CURDIR)
 POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
 
@@ -48,4 +51,4 @@ policy.bin: FORCE
 
 .PHONY: clean
 clean::
-	rm -f $(ALL_H_FILES) *.o $(DEPS_RM) policy.* $(POLICY_SRC)
+	rm -f $(ALL_H_FILES) *.o $(DEPS_RM) policy.* $(POLICY_SRC) flask-policy.S
diff --git a/xen/xsm/flask/flask-policy.S b/xen/xsm/flask/flask-policy.S
deleted file mode 100644
index d38aa39964..0000000000
--- a/xen/xsm/flask/flask-policy.S
+++ /dev/null
@@ -1,16 +0,0 @@
-#include <asm/asm_defns.h>
-
-        .section .init.rodata, "a", %progbits
-
-/* const unsigned char xsm_flask_init_policy[] __initconst */
-        .global xsm_flask_init_policy
-xsm_flask_init_policy:
-        .incbin "policy.bin"
-.Lend:
-
-        .type xsm_flask_init_policy, %object
-        .size xsm_flask_init_policy, . - xsm_flask_init_policy
-
-/* const unsigned int __initconst xsm_flask_init_policy_size */
-        .global xsm_flask_init_policy_size
-        ASM_INT(xsm_flask_init_policy_size, .Lend - xsm_flask_init_policy)
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzk-0004o6-6c; Fri, 15 May 2020 11:59:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzj-0004ni-93
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:15 +0000
X-Inumbo-ID: 78ab8c2e-96a3-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78ab8c2e-96a3-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 11:59:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3EF92AEC4;
 Fri, 15 May 2020 11:59:04 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 08/12] xen: add /buildinfo/config entry to hypervisor
 filesystem
Date: Fri, 15 May 2020 13:58:52 +0200
Message-Id: <20200515115856.11965-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the /buildinfo/config entry to the hypervisor filesystem. This
entry contains the .config file used to build the hypervisor.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V3:
- store data in gzip format
- use binfile mechanism to create data file
- move code to kernel.c

V6:
- add config item for the /buildinfo/config (Jan Beulich)
- make config related variables const in kernel.h (Jan Beulich)

V7:
- update doc (Jan Beulich)
- use "rm -f" in Makefile (Jan Beulich)

V8:
- add dependency on CONFIG_HYPFS
- use macro for definition of leaf (Jan Beulich)

V9:
- adjust type of xen_config_data (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                   |  2 ++
 docs/misc/hypfs-paths.pandoc |  4 ++++
 xen/common/Kconfig           | 11 +++++++++++
 xen/common/Makefile          | 12 ++++++++++++
 xen/common/kernel.c          | 11 +++++++++++
 xen/include/xen/kernel.h     |  3 +++
 6 files changed, 43 insertions(+)

diff --git a/.gitignore b/.gitignore
index 6171b3b43f..b8bdb25040 100644
--- a/.gitignore
+++ b/.gitignore
@@ -298,6 +298,8 @@ xen/arch/*/efi/boot.c
 xen/arch/*/efi/compat.c
 xen/arch/*/efi/efi.h
 xen/arch/*/efi/runtime.c
+xen/common/config_data.S
+xen/common/config.gz
 xen/include/headers*.chk
 xen/include/asm
 xen/include/asm-*/asm-offsets.h
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index d730caf394..9a76bc383b 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -135,6 +135,10 @@ Information about the compile domain.
 
 The compiler used to build Xen.
 
+#### /buildinfo/config = STRING [CONFIG_HYPFS_CONFIG]
+
+The contents of the `xen/.config` file at the time of the hypervisor build.
+
 #### /buildinfo/version/
 
 A directory containing version information of the hypervisor.
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index e768ea36b2..065f2ee454 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -127,6 +127,17 @@ config HYPFS
 
 	  If unsure, say Y.
 
+config HYPFS_CONFIG
+	bool "Provide hypervisor .config via hypfs entry"
+	default y
+	depends on HYPFS
+	---help---
+	  When enabled, the contents of the .config file used to build the
+	  hypervisor are provided via the hypfs entry /buildinfo/config.
+
+	  Disable this option in case you want to spare some memory or you
+	  want to hide the .config contents from dom0.
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index bf7d0e25a3..3d61239fbf 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -1,6 +1,7 @@
 obj-$(CONFIG_ARGO) += argo.o
 obj-y += bitmap.o
 obj-y += bsearch.o
+obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
@@ -73,3 +74,14 @@ obj-$(CONFIG_UBSAN) += ubsan/
 
 obj-$(CONFIG_NEEDS_LIBELF) += libelf/
 obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
+
+config.gz: ../.config
+	gzip -c $< >$@
+
+config_data.o: config.gz
+
+config_data.S: $(XEN_ROOT)/xen/tools/binfile
+	$(XEN_ROOT)/xen/tools/binfile $@ config.gz xen_config_data
+
+clean::
+	rm -f config_data.S config.gz 2>/dev/null
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index db7bd23fcb..f464fe02ed 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -390,6 +390,10 @@ static HYPFS_STRING_INIT(compile_date, "compile_date");
 static HYPFS_STRING_INIT(compile_domain, "compile_domain");
 static HYPFS_STRING_INIT(extra, "extra");
 
+#ifdef CONFIG_HYPFS_CONFIG
+static HYPFS_STRING_INIT(config, "config");
+#endif
+
 static int __init buildinfo_init(void)
 {
     hypfs_add_dir(&hypfs_root, &buildinfo, true);
@@ -415,6 +419,13 @@ static int __init buildinfo_init(void)
     hypfs_add_leaf(&version, &major, true);
     hypfs_add_leaf(&version, &minor, true);
 
+#ifdef CONFIG_HYPFS_CONFIG
+    config.e.encoding = XEN_HYPFS_ENC_GZIP;
+    config.e.size = xen_config_data_size;
+    config.content = xen_config_data;
+    hypfs_add_leaf(&buildinfo, &config, true);
+#endif
+
     return 0;
 }
 __initcall(buildinfo_init);
diff --git a/xen/include/xen/kernel.h b/xen/include/xen/kernel.h
index 548b64da9f..8cd142032d 100644
--- a/xen/include/xen/kernel.h
+++ b/xen/include/xen/kernel.h
@@ -100,5 +100,8 @@ extern enum system_state {
 
 bool_t is_active_kernel_text(unsigned long addr);
 
+extern const char xen_config_data[];
+extern const unsigned int xen_config_data_size;
+
 #endif /* _LINUX_KERNEL_H */
 
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzn-0004q9-Ei; Fri, 15 May 2020 11:59:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzm-0004p5-5F
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:18 +0000
X-Inumbo-ID: 7714b584-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7714b584-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 71777AE70;
 Fri, 15 May 2020 11:59:02 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 00/12] Add hypervisor sysfs-like support
Date: Fri, 15 May 2020 13:58:44 +0200
Message-Id: <20200515115856.11965-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.

This is a first implementation of that idea, adding the basic
functionality on the hypervisor and tools side. User programs access
this "xen-hypfs" through a new library, "libxenhypfs", which provides
a stable interface.

The series adds read-only nodes with buildinfo data and writable
nodes with runtime parameters. xl is switched to use the new file
system for modifying runtime parameters, and the old sysctl interface
for that purpose is dropped.

Changes in V9:
- addressed review comments

Changes in V8:
- addressed review comments
- added CONFIG_HYPFS config option

Changes in V7:
- old patch 1 already applied
- add new patch 1 (carved out and modified from patch 9)
- addressed review comments
- modified public interface to have a max write size instead of a
  writable flag only

Changes in V6:
- added new patches 1, 10, 11, 12
- addressed review comments
- modified interface for creating nodes for runtime parameters

Changes in V5:
- switched to xsm for privilege check

Changes in V4:
- former patch 2 removed as already committed
- addressed review comments

Changes in V3:
- major rework, especially by supporting binary contents of entries
- added several new patches (1, 2, 7)
- full support of all runtime parameters
- support of writing entries (especially runtime parameters)

Changes in V2:
- all comments to V1 addressed
- added man-page for xenhypfs tool
- added runtime parameter read access for string parameters

Changes in V1:
- renamed xenfs -> xenhypfs
- added writable entries support at the interface level and in the
  xenhypfs tool
- added runtime parameter read access (integer type only for now)
- added docs/misc/hypfs-paths.pandoc for path descriptions

Juergen Gross (12):
  xen/vmx: let opt_ept_ad always reflect the current setting
  xen: add a generic way to include binary files as variables
  docs: add feature document for Xen hypervisor sysfs-like support
  xen: add basic hypervisor filesystem support
  libs: add libxenhypfs
  tools: add xenfs tool
  xen: provide version information in hypfs
  xen: add /buildinfo/config entry to hypervisor filesystem
  xen: add runtime parameter access support to hypfs
  tools/libxl: use libxenhypfs for setting xen runtime parameters
  tools/libxc: remove xc_set_parameters()
  xen: remove XEN_SYSCTL_set_parameter support

 .gitignore                          |   6 +
 docs/features/hypervisorfs.pandoc   |  92 +++++
 docs/man/xenhypfs.1.pod             |  61 ++++
 docs/misc/hypfs-paths.pandoc        | 165 +++++++++
 tools/Rules.mk                      |   8 +-
 tools/flask/policy/modules/dom0.te  |   4 +-
 tools/libs/Makefile                 |   1 +
 tools/libs/hypfs/Makefile           |  16 +
 tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
 tools/libs/hypfs/include/xenhypfs.h |  90 +++++
 tools/libs/hypfs/libxenhypfs.map    |  10 +
 tools/libs/hypfs/xenhypfs.pc.in     |  10 +
 tools/libxc/include/xenctrl.h       |   1 -
 tools/libxc/xc_misc.c               |  21 --
 tools/libxl/Makefile                |   3 +-
 tools/libxl/libxl.c                 |  53 ++-
 tools/libxl/libxl_internal.h        |   1 +
 tools/libxl/xenlight.pc.in          |   2 +-
 tools/misc/Makefile                 |   6 +
 tools/misc/xenhypfs.c               | 192 ++++++++++
 tools/xl/xl_misc.c                  |   1 -
 xen/arch/arm/traps.c                |   3 +
 xen/arch/arm/xen.lds.S              |  13 +-
 xen/arch/x86/hvm/hypercall.c        |   3 +
 xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
 xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
 xen/arch/x86/hypercall.c            |   3 +
 xen/arch/x86/pv/domain.c            |  21 +-
 xen/arch/x86/pv/hypercall.c         |   3 +
 xen/arch/x86/xen.lds.S              |  12 +-
 xen/common/Kconfig                  |  23 ++
 xen/common/Makefile                 |  13 +
 xen/common/grant_table.c            |  62 +++-
 xen/common/hypfs.c                  | 448 +++++++++++++++++++++++
 xen/common/kernel.c                 |  84 ++++-
 xen/common/sysctl.c                 |  36 --
 xen/drivers/char/console.c          |  72 +++-
 xen/include/Makefile                |   1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
 xen/include/public/hypfs.h          | 129 +++++++
 xen/include/public/sysctl.h         |  19 +-
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hypercall.h         |  10 +
 xen/include/xen/hypfs.h             | 123 +++++++
 xen/include/xen/kernel.h            |   3 +
 xen/include/xen/lib.h               |   1 -
 xen/include/xen/param.h             | 126 +++++--
 xen/include/xlat.lst                |   2 +
 xen/include/xsm/dummy.h             |   6 +
 xen/include/xsm/xsm.h               |   6 +
 xen/tools/binfile                   |  41 +++
 xen/xsm/dummy.c                     |   1 +
 xen/xsm/flask/Makefile              |   5 +-
 xen/xsm/flask/flask-policy.S        |  16 -
 xen/xsm/flask/hooks.c               |   9 +-
 xen/xsm/flask/policy/access_vectors |   4 +-
 56 files changed, 2439 insertions(+), 193 deletions(-)
 create mode 100644 docs/features/hypervisorfs.pandoc
 create mode 100644 docs/man/xenhypfs.1.pod
 create mode 100644 docs/misc/hypfs-paths.pandoc
 create mode 100644 tools/libs/hypfs/Makefile
 create mode 100644 tools/libs/hypfs/core.c
 create mode 100644 tools/libs/hypfs/include/xenhypfs.h
 create mode 100644 tools/libs/hypfs/libxenhypfs.map
 create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
 create mode 100644 tools/misc/xenhypfs.c
 create mode 100644 xen/common/hypfs.c
 create mode 100644 xen/include/public/hypfs.h
 create mode 100644 xen/include/xen/hypfs.h
 create mode 100755 xen/tools/binfile
 delete mode 100644 xen/xsm/flask/flask-policy.S

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzs-0004sv-PG; Fri, 15 May 2020 11:59:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzr-0004rs-5Z
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:23 +0000
X-Inumbo-ID: 787698b6-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 787698b6-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C5734AECD;
 Fri, 15 May 2020 11:59:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 06/12] tools: add xenfs tool
Date: Fri, 15 May 2020 13:58:50 +0200
Message-Id: <20200515115856.11965-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the xenfs tool for accessing the hypervisor filesystem.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
V1:
- rename to xenhypfs
- don't use "--" for subcommands
- add write support

V2:
- escape non-printable characters by default with the cat subcommand
  (Ian Jackson)
- add -b option to cat subcommand (Ian Jackson)
- add man page

V3:
- adapt to new hypfs interface

V7:
- added missing bool for ls output

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore              |   1 +
 docs/man/xenhypfs.1.pod |  61 +++++++++++++
 tools/misc/Makefile     |   6 ++
 tools/misc/xenhypfs.c   | 192 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 260 insertions(+)
 create mode 100644 docs/man/xenhypfs.1.pod
 create mode 100644 tools/misc/xenhypfs.c

diff --git a/.gitignore b/.gitignore
index 7bd292f64d..6171b3b43f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -368,6 +368,7 @@ tools/libxl/test_timedereg
 tools/libxl/test_fdderegrace
 tools/firmware/etherboot/eb-roms.h
 tools/firmware/etherboot/gpxe-git-snapshot.tar.gz
+tools/misc/xenhypfs
 tools/misc/xenwatchdogd
 tools/misc/xen-hvmcrash
 tools/misc/xen-lowmemd
diff --git a/docs/man/xenhypfs.1.pod b/docs/man/xenhypfs.1.pod
new file mode 100644
index 0000000000..37aa488fcc
--- /dev/null
+++ b/docs/man/xenhypfs.1.pod
@@ -0,0 +1,61 @@
+=head1 NAME
+
+xenhypfs - Xen tool to access Xen hypervisor file system
+
+=head1 SYNOPSIS
+
+B<xenhypfs> I<subcommand> [I<options>] [I<args>]
+
+=head1 DESCRIPTION
+
+The B<xenhypfs> program is used to access the Xen hypervisor file system.
+It can be used to show the available entries, to show their contents and
+(if allowed) to modify their contents.
+
+=head1 SUBCOMMANDS
+
+=over 4
+
+=item B<ls> I<path>
+
+List the available entries below I<path>.
+
+=item B<cat> [I<-b>] I<path>
+
+Show the contents of the entry specified by I<path>. Non-printable characters
+other than white space characters (like tab, new line) will be shown as
+B<\xnn> (B<nn> being a two digit hex number) unless the option B<-b> is
+specified.
+
+=item B<write> I<path> I<value>
+
+Set the contents of the entry specified by I<path> to I<value>.
+
+=item B<tree>
+
+Show all the entries of the file system as a tree.
+
+=back
+
+=head1 RETURN CODES
+
+=over 4
+
+=item B<0>
+
+Success
+
+=item B<1>
+
+Invalid usage (e.g. unknown subcommand, unknown option, missing parameter).
+
+=item B<2>
+
+Entry not found while traversing the tree.
+
+=item B<3>
+
+Access right violation.
+
+=back
+
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 63947bfadc..9fdb13597f 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -24,6 +24,7 @@ INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
 INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
 INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
 INSTALL_SBIN                   += xencov
+INSTALL_SBIN                   += xenhypfs
 INSTALL_SBIN                   += xenlockprof
 INSTALL_SBIN                   += xenperf
 INSTALL_SBIN                   += xenpm
@@ -86,6 +87,9 @@ xenperf: xenperf.o
 xenpm: xenpm.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xenhypfs: xenhypfs.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenhypfs) $(APPEND_LDFLAGS)
+
 xenlockprof: xenlockprof.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
@@ -94,6 +98,8 @@ xen-hptool.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-hptool: xen-hptool.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
+xenhypfs.o: CFLAGS += $(CFLAGS_libxenhypfs)
+
 # xen-mfndump incorrectly uses libxc internals
 xen-mfndump.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-mfndump: xen-mfndump.o
diff --git a/tools/misc/xenhypfs.c b/tools/misc/xenhypfs.c
new file mode 100644
index 0000000000..158b901f42
--- /dev/null
+++ b/tools/misc/xenhypfs.c
@@ -0,0 +1,192 @@
+#define _GNU_SOURCE
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <xenhypfs.h>
+
+static struct xenhypfs_handle *hdl;
+
+static int usage(void)
+{
+    fprintf(stderr, "usage: xenhypfs ls <path>\n");
+    fprintf(stderr, "       xenhypfs cat [-b] <path>\n");
+    fprintf(stderr, "       xenhypfs write <path> <val>\n");
+    fprintf(stderr, "       xenhypfs tree\n");
+
+    return 1;
+}
+
+static void xenhypfs_print_escaped(char *string)
+{
+    unsigned char *c;
+
+    for (c = (unsigned char *)string; *c; c++) {
+        if (isgraph(*c) || isspace(*c))
+            printf("%c", *c);
+        else
+            printf("\\x%02x", *c);
+    }
+    printf("\n");
+}
+
+static int xenhypfs_cat(int argc, char *argv[])
+{
+    int ret = 0;
+    char *result;
+    char *path;
+    bool bin = false;
+
+    switch (argc) {
+    case 1:
+        path = argv[0];
+        break;
+
+    case 2:
+        if (strcmp(argv[0], "-b"))
+            return usage();
+        bin = true;
+        path = argv[1];
+        break;
+
+    default:
+        return usage();
+    }
+
+    result = xenhypfs_read(hdl, path);
+    if (!result) {
+        perror("could not read");
+        ret = 3;
+    } else {
+        if (!bin)
+            printf("%s\n", result);
+        else
+            xenhypfs_print_escaped(result);
+        free(result);
+    }
+
+    return ret;
+}
+
+static int xenhypfs_wr(char *path, char *val)
+{
+    int ret;
+
+    ret = xenhypfs_write(hdl, path, val);
+    if (ret) {
+        perror("could not write");
+        ret = 3;
+    }
+
+    return ret;
+}
+
+static char *xenhypfs_type(struct xenhypfs_dirent *ent)
+{
+    char *res;
+
+    switch (ent->type) {
+    case xenhypfs_type_dir:
+        res = "<dir>   ";
+        break;
+    case xenhypfs_type_blob:
+        res = "<blob>  ";
+        break;
+    case xenhypfs_type_string:
+        res = "<string>";
+        break;
+    case xenhypfs_type_uint:
+        res = "<uint>  ";
+        break;
+    case xenhypfs_type_int:
+        res = "<int>   ";
+        break;
+    case xenhypfs_type_bool:
+        res = "<bool>  ";
+        break;
+    default:
+        res = "<\?\?\?>   ";
+        break;
+    }
+
+    return res;
+}
+
+static int xenhypfs_ls(char *path)
+{
+    struct xenhypfs_dirent *ent;
+    unsigned int n, i;
+    int ret = 0;
+
+    ent = xenhypfs_readdir(hdl, path, &n);
+    if (!ent) {
+        perror("could not read dir");
+        ret = 3;
+    } else {
+        for (i = 0; i < n; i++)
+            printf("%s r%c %s\n", xenhypfs_type(ent + i),
+                   ent[i].is_writable ? 'w' : '-', ent[i].name);
+
+        free(ent);
+    }
+
+    return ret;
+}
+
+static int xenhypfs_tree_sub(char *path, unsigned int depth)
+{
+    struct xenhypfs_dirent *ent;
+    unsigned int n, i;
+    int ret = 0;
+    char *p;
+
+    ent = xenhypfs_readdir(hdl, path, &n);
+    if (!ent)
+        return 2;
+
+    for (i = 0; i < n; i++) {
+        printf("%*s%s%s\n", depth * 2, "", ent[i].name,
+               ent[i].type == xenhypfs_type_dir ? "/" : "");
+        if (ent[i].type == xenhypfs_type_dir) {
+            asprintf(&p, "%s%s%s", path, (depth == 1) ? "" : "/", ent[i].name);
+            if (xenhypfs_tree_sub(p, depth + 1))
+                ret = 2;
+        }
+    }
+
+    free(ent);
+
+    return ret;
+}
+
+static int xenhypfs_tree(void)
+{
+    printf("/\n");
+
+    return xenhypfs_tree_sub("/", 1);
+}
+
+int main(int argc, char *argv[])
+{
+    int ret;
+
+    hdl = xenhypfs_open(NULL, 0);
+
+    if (!hdl) {
+        fprintf(stderr, "Could not open libxenhypfs\n");
+        ret = 2;
+    } else if (argc >= 3 && !strcmp(argv[1], "cat"))
+        ret = xenhypfs_cat(argc - 2, argv + 2);
+    else if (argc == 3 && !strcmp(argv[1], "ls"))
+        ret = xenhypfs_ls(argv[2]);
+    else if (argc == 4 && !strcmp(argv[1], "write"))
+        ret = xenhypfs_wr(argv[2], argv[3]);
+    else if (argc == 2 && !strcmp(argv[1], "tree"))
+        ret = xenhypfs_tree();
+    else
+        ret = usage();
+
+    xenhypfs_close(hdl);
+
+    return ret;
+}
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZYzx-0004vt-6j; Fri, 15 May 2020 11:59:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZYzw-0004vB-5e
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:28 +0000
X-Inumbo-ID: 787698b7-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 787698b7-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 115E0AE2A;
 Fri, 15 May 2020 11:59:04 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 11/12] tools/libxc: remove xc_set_parameters()
Date: Fri, 15 May 2020 13:58:55 +0200
Message-Id: <20200515115856.11965-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There is no user of xc_set_parameters() left, so remove it.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V6:
- new patch
---
 tools/libxc/include/xenctrl.h |  1 -
 tools/libxc/xc_misc.c         | 21 ---------------------
 2 files changed, 22 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..f9e17ae424 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1226,7 +1226,6 @@ int xc_readconsolering(xc_interface *xch,
                        int clear, int incremental, uint32_t *pindex);
 
 int xc_send_debug_keys(xc_interface *xch, const char *keys);
-int xc_set_parameters(xc_interface *xch, const char *params);
 
 typedef struct xen_sysctl_physinfo xc_physinfo_t;
 typedef struct xen_sysctl_cputopo xc_cputopo_t;
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index fe477bf344..3820394413 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -187,27 +187,6 @@ int xc_send_debug_keys(xc_interface *xch, const char *keys)
     return ret;
 }
 
-int xc_set_parameters(xc_interface *xch, const char *params)
-{
-    int ret, len = strlen(params);
-    DECLARE_SYSCTL;
-    DECLARE_HYPERCALL_BOUNCE_IN(params, len);
-
-    if ( xc_hypercall_bounce_pre(xch, params) )
-        return -1;
-
-    sysctl.cmd = XEN_SYSCTL_set_parameter;
-    set_xen_guest_handle(sysctl.u.set_parameter.params, params);
-    sysctl.u.set_parameter.size = len;
-    memset(sysctl.u.set_parameter.pad, 0, sizeof(sysctl.u.set_parameter.pad));
-
-    ret = do_sysctl(xch, &sysctl);
-
-    xc_hypercall_bounce_post(xch, params);
-
-    return ret;
-}
-
 int xc_physinfo(xc_interface *xch,
                 xc_physinfo_t *put_info)
 {
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZZ02-000514-GU; Fri, 15 May 2020 11:59:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZZ01-0004zs-5x
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:33 +0000
X-Inumbo-ID: 76ee0707-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76ee0707-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 47277AEBA;
 Fri, 15 May 2020 11:59:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 04/12] xen: add basic hypervisor filesystem support
Date: Fri, 15 May 2020 13:58:48 +0200
Message-Id: <20200515115856.11965-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the infrastructure for the hypervisor filesystem.

This includes the hypercall interface and the base functions for
entry creation, deletion and modification.

To avoid repeating the same pattern wherever a failure to add a new
node should BUG_ON(), the helpers for adding a node (hypfs_add_dir()
and hypfs_add_leaf()) take a nofault parameter which triggers BUG()
in case of failure.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V1:
- rename files from filesystem.* to hypfs.*
- add dummy write entry support
- rename hypercall filesystem_op to hypfs_op
- add support for unsigned integer entries

V2:
- test new entry name to be valid

V3:
- major rework, especially by supporting binary contents of entries
- addressed all comments

V4:
- sort #includes alphabetically (Wei Liu)
- add public interface structures to xlat.lst (Jan Beulich)
- let DIRENTRY_SIZE() add 1 for trailing nul byte (Jan Beulich)
- remove hypfs_add_entry() (Jan Beulich)
- len -> ulen (Jan Beulich)
- switch sequence of tests in hypfs_get_entry_rel() (Jan Beulich)
- add const qualifier (Jan Beulich)
- return -ENOBUFS if only direntry but no entry contents are returned
  (Jan Beulich)
- use xmalloc() instead of xzalloc() (Jan Beulich)
- better error handling in hypfs_write_leaf() (Jan Beulich)
- return -EOPNOTSUPP for unknown sub-command (Jan Beulich)
- use plain integers for enum-like constants in public interface
  (Jan Beulich)
- rename XEN_HYPFS_OP_read_contents to XEN_HYPFS_OP_read (Jan Beulich)
- add some comments in include/public/hypfs.h (Jan Beulich)
- use const_char for user parameter path (Jan Beulich)
- add helpers for XEN_HYPFS_TYPE_BOOL and XEN_HYPFS_TYPE_INT entry
  definitions (Jan Beulich)
- make statically defined entries __read_mostly (Jan Beulich)

V5:
- switch to xsm for privilege check

V6:
- use memchr() for testing correct string length (Jan Beulich)
- reject writing to non-string leafs with wrong length (Jan Beulich)
- only support bools of natural size (Julien Grall)
- adjust blank padding in header (Jan Beulich)
- adjust comments in public header (Jan Beulich)
- rename hypfs_string_set() and add comment (Jan Beulich)
- add common HYPFS_INIT helper macro (Jan Beulich)
- really check structures added to xlat.lst (Jan Beulich)
- add missing xsm parts (Jan Beulich)

V7:
- simplify compat check (Jan Beulich)
- add max write size (Jan Beulich)
- better length testing of written string (Jan Beulich)

V8:
- add Kconfig item CONFIG_HYPFS (Jan Beulich)
- init write pointer in HYPFS_*_INIT_WRITABLE() (Jan Beulich)
- expand write ASSERT()s (Jan Beulich)

V9:
- move hypfs to correct position in Makefile (Jan Beulich)
- avoid recursion in hypfs_get_entry_rel() (Jan Beulich)
- make hypfs_get_entry() static (Jan Beulich)
- assert locking in read/write functions (Jan Beulich)
- add ASSERT() in hypfs_write_leaf() (Jan Beulich)
- add encoding test in hypfs_write_leaf() (Jan Beulich)
- test parameters of XEN_HYPFS_OP_get_version to be zero (Jan Beulich)
- add parentheses in macro (Jan Beulich)
- make ulen type unsigned int in functions (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/flask/policy/modules/dom0.te  |   2 +-
 xen/arch/arm/traps.c                |   3 +
 xen/arch/x86/hvm/hypercall.c        |   3 +
 xen/arch/x86/hypercall.c            |   3 +
 xen/arch/x86/pv/hypercall.c         |   3 +
 xen/common/Kconfig                  |  11 +
 xen/common/Makefile                 |   1 +
 xen/common/hypfs.c                  | 418 ++++++++++++++++++++++++++++
 xen/include/Makefile                |   1 +
 xen/include/public/hypfs.h          | 129 +++++++++
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hypercall.h         |  10 +
 xen/include/xen/hypfs.h             | 121 ++++++++
 xen/include/xlat.lst                |   2 +
 xen/include/xsm/dummy.h             |   6 +
 xen/include/xsm/xsm.h               |   6 +
 xen/xsm/dummy.c                     |   1 +
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   2 +
 19 files changed, 728 insertions(+), 1 deletion(-)
 create mode 100644 xen/common/hypfs.c
 create mode 100644 xen/include/public/hypfs.h
 create mode 100644 xen/include/xen/hypfs.h

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 272f6a4f75..20925e38a2 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -11,7 +11,7 @@ allow dom0_t xen_t:xen {
 	mtrr_del mtrr_read microcode physinfo quirk writeconsole readapic
 	writeapic privprofile nonprivprofile kexec firmware sleep frequency
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op
-	getscheduler setscheduler
+	getscheduler setscheduler hypfs_op
 };
 allow dom0_t xen_t:xen2 {
 	resource_op psr_cmt_op psr_alloc pmu_ctrl get_symbol
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 30c4c1830b..8f40d0e0b6 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1381,6 +1381,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
 #ifdef CONFIG_ARGO
     HYPERCALL(argo_op, 5),
 #endif
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op, 5),
+#endif
 };
 
 #ifndef NDEBUG
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index c41c2179c9..b6ccaf4457 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -150,6 +150,9 @@ static const hypercall_table_t hvm_hypercall_table[] = {
 #endif
     HYPERCALL(xenpmu_op),
     COMPAT_CALL(dm_op),
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op),
+#endif
     HYPERCALL(arch_1)
 };
 
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 7f299d45c6..dd00983005 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -72,6 +72,9 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #ifdef CONFIG_HVM
     ARGS(hvm_op, 2),
     ARGS(dm_op, 3),
+#endif
+#ifdef CONFIG_HYPFS
+    ARGS(hypfs_op, 5),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index b0d1d0ed77..53a52360fa 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -84,6 +84,9 @@ const hypercall_table_t pv_hypercall_table[] = {
 #ifdef CONFIG_HVM
     HYPERCALL(hvm_op),
     COMPAT_CALL(dm_op),
+#endif
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index fe9b41f721..e768ea36b2 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -116,6 +116,17 @@ config SPECULATIVE_HARDEN_BRANCH
 
 endmenu
 
+config HYPFS
+	bool "Hypervisor file system support"
+	default y
+	---help---
+	  Support Xen hypervisor file system. This file system is used to
+	  present various hypervisor internal data to dom0 and in some
+	  cases to allow modifying settings. Disabling the support might
+	  result in some features not being available.
+
+	  If unsure, say Y.
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8cde65370..bf7d0e25a3 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -14,6 +14,7 @@ obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
 obj-$(CONFIG_GRANT_TABLE) += grant_table.o
 obj-y += guestcopy.o
 obj-bin-y += gunzip.init.o
+obj-$(CONFIG_HYPFS) += hypfs.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += keyhandler.o
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
new file mode 100644
index 0000000000..13764fa02e
--- /dev/null
+++ b/xen/common/hypfs.c
@@ -0,0 +1,418 @@
+/******************************************************************************
+ *
+ * hypfs.c
+ *
+ * Simple sysfs-like file system for the hypervisor.
+ */
+
+#include <xen/err.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/hypfs.h>
+#include <xen/lib.h>
+#include <xen/rwlock.h>
+#include <public/hypfs.h>
+
+#ifdef CONFIG_COMPAT
+#include <compat/hypfs.h>
+CHECK_hypfs_dirlistentry;
+#endif
+
+#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
+#define DIRENTRY_SIZE(name_len) \
+    (DIRENTRY_NAME_OFF +        \
+     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
+
+static DEFINE_RWLOCK(hypfs_lock);
+enum hypfs_lock_state {
+    hypfs_unlocked,
+    hypfs_read_locked,
+    hypfs_write_locked
+};
+static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+
+HYPFS_DIR_INIT(hypfs_root, "");
+
+static void hypfs_read_lock(void)
+{
+    read_lock(&hypfs_lock);
+    this_cpu(hypfs_locked) = hypfs_read_locked;
+}
+
+static void hypfs_write_lock(void)
+{
+    write_lock(&hypfs_lock);
+    this_cpu(hypfs_locked) = hypfs_write_locked;
+}
+
+static void hypfs_unlock(void)
+{
+    enum hypfs_lock_state locked = this_cpu(hypfs_locked);
+
+    this_cpu(hypfs_locked) = hypfs_unlocked;
+
+    switch ( locked )
+    {
+    case hypfs_read_locked:
+        read_unlock(&hypfs_lock);
+        break;
+    case hypfs_write_locked:
+        write_unlock(&hypfs_lock);
+        break;
+    default:
+        BUG();
+    }
+}
+
+static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
+{
+    int ret = -ENOENT;
+    struct hypfs_entry *e;
+
+    hypfs_write_lock();
+
+    list_for_each_entry ( e, &parent->dirlist, list )
+    {
+        int cmp = strcmp(e->name, new->name);
+
+        if ( cmp > 0 )
+        {
+            ret = 0;
+            list_add_tail(&new->list, &e->list);
+            break;
+        }
+        if ( cmp == 0 )
+        {
+            ret = -EEXIST;
+            break;
+        }
+    }
+
+    if ( ret == -ENOENT )
+    {
+        ret = 0;
+        list_add_tail(&new->list, &parent->dirlist);
+    }
+
+    if ( !ret )
+    {
+        unsigned int sz = strlen(new->name);
+
+        parent->e.size += DIRENTRY_SIZE(sz);
+    }
+
+    hypfs_unlock();
+
+    return ret;
+}
+
+int hypfs_add_dir(struct hypfs_entry_dir *parent,
+                  struct hypfs_entry_dir *dir, bool nofault)
+{
+    int ret;
+
+    ret = add_entry(parent, &dir->e);
+    BUG_ON(nofault && ret);
+
+    return ret;
+}
+
+int hypfs_add_leaf(struct hypfs_entry_dir *parent,
+                   struct hypfs_entry_leaf *leaf, bool nofault)
+{
+    int ret;
+
+    if ( !leaf->content )
+        ret = -EINVAL;
+    else
+        ret = add_entry(parent, &leaf->e);
+    BUG_ON(nofault && ret);
+
+    return ret;
+}
+
+static int hypfs_get_path_user(char *buf,
+                               XEN_GUEST_HANDLE_PARAM(const_char) uaddr,
+                               unsigned long ulen)
+{
+    if ( ulen > XEN_HYPFS_MAX_PATHLEN )
+        return -EINVAL;
+
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(buf, 0, ulen) != buf + ulen - 1 )
+        return -EINVAL;
+
+    return 0;
+}
+
+static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
+                                               const char *path)
+{
+    const char *end;
+    struct hypfs_entry *entry;
+    unsigned int name_len;
+    bool again = true;
+
+    while ( again )
+    {
+        if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
+            return NULL;
+
+        if ( !*path )
+            return &dir->e;
+
+        end = strchr(path, '/');
+        if ( !end )
+            end = strchr(path, '\0');
+        name_len = end - path;
+
+        again = false;
+
+        list_for_each_entry ( entry, &dir->dirlist, list )
+        {
+            int cmp = strncmp(path, entry->name, name_len);
+            struct hypfs_entry_dir *d = container_of(entry,
+                                                     struct hypfs_entry_dir, e);
+
+            if ( cmp < 0 )
+                return NULL;
+            if ( !cmp && strlen(entry->name) == name_len )
+            {
+                if ( !*end )
+                    return entry;
+
+                again = true;
+                dir = d;
+                path = end + 1;
+
+                break;
+            }
+        }
+    }
+
+    return NULL;
+}
+
+static struct hypfs_entry *hypfs_get_entry(const char *path)
+{
+    if ( path[0] != '/' )
+        return NULL;
+
+    return hypfs_get_entry_rel(&hypfs_root, path + 1);
+}
+
+int hypfs_read_dir(const struct hypfs_entry *entry,
+                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_entry_dir *d;
+    const struct hypfs_entry *e;
+    unsigned int size = entry->size;
+
+    ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
+
+    d = container_of(entry, const struct hypfs_entry_dir, e);
+
+    list_for_each_entry ( e, &d->dirlist, list )
+    {
+        struct xen_hypfs_dirlistentry direntry;
+        unsigned int e_namelen = strlen(e->name);
+        unsigned int e_len = DIRENTRY_SIZE(e_namelen);
+
+        direntry.e.pad = 0;
+        direntry.e.type = e->type;
+        direntry.e.encoding = e->encoding;
+        direntry.e.content_len = e->size;
+        direntry.e.max_write_len = e->max_size;
+        direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
+        if ( copy_to_guest(uaddr, &direntry, 1) )
+            return -EFAULT;
+
+        if ( copy_to_guest_offset(uaddr, DIRENTRY_NAME_OFF,
+                                  e->name, e_namelen + 1) )
+            return -EFAULT;
+
+        guest_handle_add_offset(uaddr, e_len);
+
+        ASSERT(e_len <= size);
+        size -= e_len;
+    }
+
+    return 0;
+}
+
+int hypfs_read_leaf(const struct hypfs_entry *entry,
+                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_entry_leaf *l;
+
+    ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
+
+    l = container_of(entry, const struct hypfs_entry_leaf, e);
+
+    return copy_to_guest(uaddr, l->content, entry->size) ? -EFAULT : 0;
+}
+
+static int hypfs_read(const struct hypfs_entry *entry,
+                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    struct xen_hypfs_direntry e;
+    long ret = -EINVAL;
+
+    if ( ulen < sizeof(e) )
+        goto out;
+
+    e.pad = 0;
+    e.type = entry->type;
+    e.encoding = entry->encoding;
+    e.content_len = entry->size;
+    e.max_write_len = entry->max_size;
+
+    ret = -EFAULT;
+    if ( copy_to_guest(uaddr, &e, 1) )
+        goto out;
+
+    ret = -ENOBUFS;
+    if ( ulen < entry->size + sizeof(e) )
+        goto out;
+
+    guest_handle_add_offset(uaddr, sizeof(e));
+
+    ret = entry->read(entry, uaddr);
+
+ out:
+    return ret;
+}
+
+int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+{
+    char *buf;
+    int ret;
+
+    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+    ASSERT(ulen <= leaf->e.max_size);
+
+    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
+         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
+        return -EDOM;
+
+    buf = xmalloc_array(char, ulen);
+    if ( !buf )
+        return -ENOMEM;
+
+    ret = -EFAULT;
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        goto out;
+
+    ret = -EINVAL;
+    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
+         leaf->e.encoding == XEN_HYPFS_ENC_PLAIN &&
+         memchr(buf, 0, ulen) != (buf + ulen - 1) )
+        goto out;
+
+    ret = 0;
+    memcpy(leaf->write_ptr, buf, ulen);
+    leaf->e.size = ulen;
+
+ out:
+    xfree(buf);
+    return ret;
+}
+
+int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+{
+    bool buf;
+
+    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+    ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL &&
+           leaf->e.size == sizeof(bool) &&
+           leaf->e.max_size == sizeof(bool));
+
+    if ( ulen != leaf->e.max_size )
+        return -EDOM;
+
+    if ( copy_from_guest(&buf, uaddr, ulen) )
+        return -EFAULT;
+
+    *(bool *)leaf->write_ptr = buf;
+
+    return 0;
+}
+
+static int hypfs_write(struct hypfs_entry *entry,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    struct hypfs_entry_leaf *l;
+
+    if ( !entry->write )
+        return -EACCES;
+
+    ASSERT(entry->max_size);
+
+    if ( ulen > entry->max_size )
+        return -ENOSPC;
+
+    l = container_of(entry, struct hypfs_entry_leaf, e);
+
+    return entry->write(l, uaddr, ulen);
+}
+
+long do_hypfs_op(unsigned int cmd,
+                 XEN_GUEST_HANDLE_PARAM(const_char) arg1, unsigned long arg2,
+                 XEN_GUEST_HANDLE_PARAM(void) arg3, unsigned long arg4)
+{
+    int ret;
+    struct hypfs_entry *entry;
+    static char path[XEN_HYPFS_MAX_PATHLEN];
+
+    if ( xsm_hypfs_op(XSM_PRIV) )
+        return -EPERM;
+
+    if ( cmd == XEN_HYPFS_OP_get_version )
+    {
+        if ( !guest_handle_is_null(arg1) || arg2 ||
+             !guest_handle_is_null(arg3) || arg4 )
+            return -EINVAL;
+
+        return XEN_HYPFS_VERSION;
+    }
+
+    if ( cmd == XEN_HYPFS_OP_write_contents )
+        hypfs_write_lock();
+    else
+        hypfs_read_lock();
+
+    ret = hypfs_get_path_user(path, arg1, arg2);
+    if ( ret )
+        goto out;
+
+    entry = hypfs_get_entry(path);
+    if ( !entry )
+    {
+        ret = -ENOENT;
+        goto out;
+    }
+
+    switch ( cmd )
+    {
+    case XEN_HYPFS_OP_read:
+        ret = hypfs_read(entry, arg3, arg4);
+        break;
+
+    case XEN_HYPFS_OP_write_contents:
+        ret = hypfs_write(entry, arg3, arg4);
+        break;
+
+    default:
+        ret = -EOPNOTSUPP;
+        break;
+    }
+
+ out:
+    hypfs_unlock();
+
+    return ret;
+}
diff --git a/xen/include/Makefile b/xen/include/Makefile
index 2a10725d68..089314dc72 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -9,6 +9,7 @@ headers-y := \
     compat/event_channel.h \
     compat/features.h \
     compat/grant_table.h \
+    compat/hypfs.h \
     compat/kexec.h \
     compat/memory.h \
     compat/nmi.h \
diff --git a/xen/include/public/hypfs.h b/xen/include/public/hypfs.h
new file mode 100644
index 0000000000..63a5df1629
--- /dev/null
+++ b/xen/include/public/hypfs.h
@@ -0,0 +1,129 @@
+/******************************************************************************
+ * Xen Hypervisor Filesystem
+ *
+ * Copyright (c) 2019, SUSE Software Solutions Germany GmbH
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __XEN_PUBLIC_HYPFS_H__
+#define __XEN_PUBLIC_HYPFS_H__
+
+#include "xen.h"
+
+/*
+ * Definitions for the __HYPERVISOR_hypfs_op hypercall.
+ */
+
+/* Highest version number of the hypfs interface currently defined. */
+#define XEN_HYPFS_VERSION      1
+
+/* Maximum length of a path in the filesystem. */
+#define XEN_HYPFS_MAX_PATHLEN  1024
+
+struct xen_hypfs_direntry {
+    uint8_t type;
+#define XEN_HYPFS_TYPE_DIR     0
+#define XEN_HYPFS_TYPE_BLOB    1
+#define XEN_HYPFS_TYPE_STRING  2
+#define XEN_HYPFS_TYPE_UINT    3
+#define XEN_HYPFS_TYPE_INT     4
+#define XEN_HYPFS_TYPE_BOOL    5
+    uint8_t encoding;
+#define XEN_HYPFS_ENC_PLAIN    0
+#define XEN_HYPFS_ENC_GZIP     1
+    uint16_t pad;              /* Returned as 0. */
+    uint32_t content_len;      /* Current length of data. */
+    uint32_t max_write_len;    /* Max. length for writes (0 if read-only). */
+};
+
+struct xen_hypfs_dirlistentry {
+    struct xen_hypfs_direntry e;
+    /* Offset in bytes to next entry (0 == this is the last entry). */
+    uint16_t off_next;
+    /* Zero terminated entry name, possibly with some padding for alignment. */
+    char name[XEN_FLEX_ARRAY_DIM];
+};
+
+/*
+ * Hypercall operations.
+ */
+
+/*
+ * XEN_HYPFS_OP_get_version
+ *
+ * Read highest interface version supported by the hypervisor.
+ *
+ * arg1 - arg4: all 0/NULL
+ *
+ * Possible return values:
+ * >0: highest supported interface version
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_get_version     0
+
+/*
+ * XEN_HYPFS_OP_read
+ *
+ * Read a filesystem entry.
+ *
+ * Returns the direntry and contents of an entry in the buffer supplied by the
+ * caller (struct xen_hypfs_direntry with the contents following directly
+ * after it).
+ * The data buffer must be at least the size of the direntry returned. If the
+ * data buffer is not large enough for all the data, -ENOBUFS is returned and
+ * no entry data is copied, but the direntry will contain the size needed for
+ * the entry data.
+ * The format of the contents is according to its entry type and encoding.
+ * The contents of a directory are multiple struct xen_hypfs_dirlistentry
+ * items.
+ *
+ * arg1: XEN_GUEST_HANDLE(path name)
+ * arg2: length of path name (including trailing zero byte)
+ * arg3: XEN_GUEST_HANDLE(data buffer written by hypervisor)
+ * arg4: data buffer size
+ *
+ * Possible return values:
+ * 0: success
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_read              1
+
+/*
+ * XEN_HYPFS_OP_write_contents
+ *
+ * Write contents of a filesystem entry.
+ *
+ * Writes an entry with the contents of a buffer supplied by the caller.
+ * The data type and encoding can't be changed. The size can be changed only
+ * for blobs and strings.
+ *
+ * arg1: XEN_GUEST_HANDLE(path name)
+ * arg2: length of path name (including trailing zero byte)
+ * arg3: XEN_GUEST_HANDLE(content buffer read by hypervisor)
+ * arg4: content buffer size
+ *
+ * Possible return values:
+ * 0: success
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_write_contents    2
+
+#endif /* __XEN_PUBLIC_HYPFS_H__ */
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 75b1619d0d..945ef30273 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -130,6 +130,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_argo_op              39
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
+#define __HYPERVISOR_hypfs_op             42
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index d82a293377..655acc7f47 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -150,6 +150,16 @@ do_dm_op(
     unsigned int nr_bufs,
     XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
 
+#ifdef CONFIG_HYPFS
+extern long
+do_hypfs_op(
+    unsigned int cmd,
+    XEN_GUEST_HANDLE_PARAM(const_char) arg1,
+    unsigned long arg2,
+    XEN_GUEST_HANDLE_PARAM(void) arg3,
+    unsigned long arg4);
+#endif
+
 #ifdef CONFIG_COMPAT
 
 extern int
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
new file mode 100644
index 0000000000..5c6a0ccece
--- /dev/null
+++ b/xen/include/xen/hypfs.h
@@ -0,0 +1,121 @@
+#ifndef __XEN_HYPFS_H__
+#define __XEN_HYPFS_H__
+
+#ifdef CONFIG_HYPFS
+#include <xen/list.h>
+#include <xen/string.h>
+#include <public/hypfs.h>
+
+struct hypfs_entry_leaf;
+
+struct hypfs_entry {
+    unsigned short type;
+    unsigned short encoding;
+    unsigned int size;
+    unsigned int max_size;
+    const char *name;
+    struct list_head list;
+    int (*read)(const struct hypfs_entry *entry,
+                XEN_GUEST_HANDLE_PARAM(void) uaddr);
+    int (*write)(struct hypfs_entry_leaf *leaf,
+                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+};
+
+struct hypfs_entry_leaf {
+    struct hypfs_entry e;
+    union {
+        const void *content;
+        void *write_ptr;
+    };
+};
+
+struct hypfs_entry_dir {
+    struct hypfs_entry e;
+    struct list_head dirlist;
+};
+
+#define HYPFS_DIR_INIT(var, nam)                  \
+    struct hypfs_entry_dir __read_mostly var = {  \
+        .e.type = XEN_HYPFS_TYPE_DIR,             \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
+        .e.name = (nam),                          \
+        .e.size = 0,                              \
+        .e.max_size = 0,                          \
+        .e.list = LIST_HEAD_INIT(var.e.list),     \
+        .e.read = hypfs_read_dir,                 \
+        .dirlist = LIST_HEAD_INIT(var.dirlist),   \
+    }
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
+    struct hypfs_entry_leaf __read_mostly var = { \
+        .e.type = (typ),                          \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
+        .e.name = (nam),                          \
+        .e.max_size = (msz),                      \
+        .e.read = hypfs_read_leaf,                \
+    }
+
+/* Content and size need to be set via hypfs_string_set_reference(). */
+#define HYPFS_STRING_INIT(var, nam)               \
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+
+/*
+ * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
+ * to str, so any later modification of *str should be followed by a call
+ * to hypfs_string_set_reference() in order to update the size of the node
+ * data.
+ */
+static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
+                                              const char *str)
+{
+    leaf->content = str;
+    leaf->e.size = strlen(str) + 1;
+}
+
+#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
+    struct hypfs_entry_leaf __read_mostly var = {        \
+        .e.type = (typ),                                 \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
+        .e.name = (nam),                                 \
+        .e.size = sizeof(contvar),                       \
+        .e.max_size = (wr) ? sizeof(contvar) : 0,        \
+        .e.read = hypfs_read_leaf,                       \
+        .e.write = (wr),                                 \
+        .content = &(contvar),                           \
+    }
+
+#define HYPFS_UINT_INIT(var, nam, contvar)                       \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, NULL)
+#define HYPFS_UINT_INIT_WRITABLE(var, nam, contvar)              \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
+                         hypfs_write_leaf)
+
+#define HYPFS_INT_INIT(var, nam, contvar)                        \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, NULL)
+#define HYPFS_INT_INIT_WRITABLE(var, nam, contvar)               \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \
+                         hypfs_write_leaf)
+
+#define HYPFS_BOOL_INIT(var, nam, contvar)                       \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, NULL)
+#define HYPFS_BOOL_INIT_WRITABLE(var, nam, contvar)              \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
+                         hypfs_write_bool)
+
+extern struct hypfs_entry_dir hypfs_root;
+
+int hypfs_add_dir(struct hypfs_entry_dir *parent,
+                  struct hypfs_entry_dir *dir, bool nofault);
+int hypfs_add_leaf(struct hypfs_entry_dir *parent,
+                   struct hypfs_entry_leaf *leaf, bool nofault);
+int hypfs_read_dir(const struct hypfs_entry *entry,
+                   XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_read_leaf(const struct hypfs_entry *entry,
+                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+#endif
+
+#endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 95f5e5592b..0921d4a8d0 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -86,6 +86,8 @@
 ?	vcpu_hvm_context		hvm/hvm_vcpu.h
 ?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
 ?	vcpu_hvm_x86_64			hvm/hvm_vcpu.h
+?	hypfs_direntry			hypfs.h
+?	hypfs_dirlistentry		hypfs.h
 ?	kexec_exec			kexec.h
 !	kexec_image			kexec.h
 !	kexec_range			kexec.h
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 295dd67c48..2368acebed 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -434,6 +434,12 @@ static XSM_INLINE int xsm_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
     return xsm_default_action(action, current->domain, NULL);
 }
 
+static XSM_INLINE int xsm_hypfs_op(XSM_DEFAULT_VOID)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, NULL);
+}
+
 static XSM_INLINE long xsm_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index e22d6160b5..a80bcf3e42 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -127,6 +127,7 @@ struct xsm_operations {
     int (*resource_setup_misc) (void);
 
     int (*page_offline)(uint32_t cmd);
+    int (*hypfs_op)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 #ifdef CONFIG_COMPAT
@@ -536,6 +537,11 @@ static inline int xsm_page_offline(xsm_default_t def, uint32_t cmd)
     return xsm_ops->page_offline(cmd);
 }
 
+static inline int xsm_hypfs_op(xsm_default_t def)
+{
+    return xsm_ops->hypfs_op();
+}
+
 static inline long xsm_do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return xsm_ops->do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5705e52791..d4cce68089 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -103,6 +103,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, resource_setup_misc);
 
     set_to_dummy_if_null(ops, page_offline);
+    set_to_dummy_if_null(ops, hypfs_op);
     set_to_dummy_if_null(ops, hvm_param);
     set_to_dummy_if_null(ops, hvm_control);
     set_to_dummy_if_null(ops, hvm_param_nested);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4649e6fd95..a2c78e445c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1173,6 +1173,11 @@ static inline int flask_page_offline(uint32_t cmd)
     }
 }
 
+static inline int flask_hypfs_op(void)
+{
+    return domain_has_xen(current->domain, XEN__HYPFS_OP);
+}
+
 static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
 {
     return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PHYSMAP);
@@ -1812,6 +1817,7 @@ static struct xsm_operations flask_ops = {
     .resource_setup_misc = flask_resource_setup_misc,
 
     .page_offline = flask_page_offline,
+    .hypfs_op = flask_hypfs_op,
     .hvm_param = flask_hvm_param,
     .hvm_control = flask_hvm_param,
     .hvm_param_nested = flask_hvm_param_nested,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c055c14c26..c9e385fb9b 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -67,6 +67,8 @@ class xen
     lockprof
 # XEN_SYSCTL_cpupool_op
     cpupool_op
+# hypfs hypercall
+    hypfs_op
 # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_getinfo, XEN_SYSCTL_sched_id, XEN_DOMCTL_SCHEDOP_getvcpuinfo
     getscheduler
 # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_putinfo, XEN_DOMCTL_SCHEDOP_putvcpuinfo
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:50 2020
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 10/12] tools/libxl: use libxenhypfs for setting xen runtime
 parameters
Date: Fri, 15 May 2020 13:58:54 +0200
Message-Id: <20200515115856.11965-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Use xenhypfs_write() instead of xc_set_parameters() for setting
hypervisor parameters.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V6:
- new patch
---
 tools/Rules.mk               |  2 +-
 tools/libxl/Makefile         |  3 +-
 tools/libxl/libxl.c          | 53 ++++++++++++++++++++++++++++++++----
 tools/libxl/libxl_internal.h |  1 +
 tools/libxl/xenlight.pc.in   |  2 +-
 tools/xl/xl_misc.c           |  1 -
 6 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index ad6073fcad..883a193f9e 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -178,7 +178,7 @@ CFLAGS += -O2 -fomit-frame-pointer
 endif
 
 CFLAGS_libxenlight = -I$(XEN_XENLIGHT) $(CFLAGS_libxenctrl) $(CFLAGS_xeninclude)
-SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
+SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore) $(SHLIB_libxenhypfs)
 LDLIBS_libxenlight = $(SHDEPS_libxenlight) $(XEN_XENLIGHT)/libxenlight$(libextension)
 SHLIB_libxenlight  = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_XENLIGHT)
 
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 69fcf21577..a89ebab0b4 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -20,7 +20,7 @@ LIBUUID_LIBS += -luuid
 endif
 
 LIBXL_LIBS =
-LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
+LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenhypfs) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
 ifeq ($(CONFIG_LIBNL),y)
 LIBXL_LIBS += $(LIBNL3_LIBS)
 endif
@@ -33,6 +33,7 @@ CFLAGS_LIBXL += $(CFLAGS_libxentoolcore)
 CFLAGS_LIBXL += $(CFLAGS_libxenevtchn)
 CFLAGS_LIBXL += $(CFLAGS_libxenctrl)
 CFLAGS_LIBXL += $(CFLAGS_libxenguest)
+CFLAGS_LIBXL += $(CFLAGS_libxenhypfs)
 CFLAGS_LIBXL += $(CFLAGS_libxenstore)
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index f60fd3e4fd..621acc88f3 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -663,15 +663,56 @@ int libxl_set_parameters(libxl_ctx *ctx, char *params)
 {
     int ret;
     GC_INIT(ctx);
+    char *par, *val, *end, *path;
+    xenhypfs_handle *hypfs;
 
-    ret = xc_set_parameters(ctx->xch, params);
-    if (ret < 0) {
-        LOGEV(ERROR, ret, "setting parameters");
-        GC_FREE;
-        return ERROR_FAIL;
+    hypfs = xenhypfs_open(ctx->lg, 0);
+    if (!hypfs) {
+        LOGE(ERROR, "opening Xen hypfs");
+        ret = ERROR_FAIL;
+        goto out;
     }
+
+    while (isblank(*params))
+        params++;
+
+    for (par = params; *par; par = end) {
+        end = strchr(par, ' ');
+        if (!end)
+            end = par + strlen(par);
+
+        val = strchr(par, '=');
+        if (val > end)
+            val = NULL;
+        if (!val && !strncmp(par, "no", 2)) {
+            path = libxl__sprintf(gc, "/params/%s", par + 2);
+            path[end - par - 2 + 8] = 0;
+            val = "no";
+            par += 2;
+        } else {
+            path = libxl__sprintf(gc, "/params/%s", par);
+            path[val - par + 8] = 0;
+            val = libxl__strndup(gc, val + 1, end - val - 1);
+        }
+
+        LOG(DEBUG, "setting node \"%s\" to value \"%s\"", path, val);
+        ret = xenhypfs_write(hypfs, path, val);
+        if (ret < 0) {
+            LOGE(ERROR, "setting parameters");
+            ret = ERROR_FAIL;
+            goto out;
+        }
+
+        while (isblank(*end))
+            end++;
+    }
+
+    ret = 0;
+
+out:
+    xenhypfs_close(hypfs);
     GC_FREE;
-    return 0;
+    return ret;
 }
 
 static int fd_set_flags(libxl_ctx *ctx, int fd,
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index e5effd2ad1..b85b771659 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -56,6 +56,7 @@
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
 #include <xenguest.h>
+#include <xenhypfs.h>
 #include <xc_dom.h>
 
 #include <xen-tools/libs.h>
diff --git a/tools/libxl/xenlight.pc.in b/tools/libxl/xenlight.pc.in
index c0f769fd20..6b351ba096 100644
--- a/tools/libxl/xenlight.pc.in
+++ b/tools/libxl/xenlight.pc.in
@@ -9,4 +9,4 @@ Description: The Xenlight library for Xen hypervisor
 Version: @@version@@
 Cflags: -I${includedir}
 Libs: @@libsflag@@${libdir} -lxenlight
-Requires.private: xentoollog,xenevtchn,xencontrol,xenguest,xenstore
+Requires.private: xentoollog,xenevtchn,xencontrol,xenguest,xenstore,xenhypfs
diff --git a/tools/xl/xl_misc.c b/tools/xl/xl_misc.c
index 20ed605f4f..08f0fb6dc9 100644
--- a/tools/xl/xl_misc.c
+++ b/tools/xl/xl_misc.c
@@ -168,7 +168,6 @@ int main_set_parameters(int argc, char **argv)
 
     if (libxl_set_parameters(ctx, params)) {
         fprintf(stderr, "cannot set parameters: %s\n", params);
-        fprintf(stderr, "Use \"xl dmesg\" to look for possible reason.\n");
         return EXIT_FAILURE;
     }
 
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZZ0C-0005At-Ar; Fri, 15 May 2020 11:59:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZZ0B-00059q-64
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:43 +0000
X-Inumbo-ID: 787698b9-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 787698b9-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 46E94AEDA;
 Fri, 15 May 2020 11:59:05 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 12/12] xen: remove XEN_SYSCTL_set_parameter support
Date: Fri, 15 May 2020 13:58:56 +0200
Message-Id: <20200515115856.11965-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The functionality of XEN_SYSCTL_set_parameter is now available via
hypfs, so the sysctl can be removed.

This in turn allows removing the kernel_param structure for runtime
parameters by moving the only remaining structure element into the
hypfs node structure of the runtime parameters.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
V6:
- new patch

V7:
- only comment out definition of XEN_SYSCTL_set_parameter (Jan Beulich)

V8:
- rebase to use CONFIG_HYPFS

V9:
- adjust CONFIG_HYPFS Kconfig text (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/flask/policy/modules/dom0.te  |  2 +-
 xen/arch/arm/xen.lds.S              |  5 --
 xen/arch/x86/hvm/vmx/vmcs.c         |  6 +-
 xen/arch/x86/xen.lds.S              |  5 --
 xen/common/Kconfig                  |  5 +-
 xen/common/hypfs.c                  |  6 +-
 xen/common/kernel.c                 | 11 ----
 xen/common/sysctl.c                 | 36 ------------
 xen/include/public/sysctl.h         | 19 +------
 xen/include/xen/hypfs.h             |  5 --
 xen/include/xen/lib.h               |  1 -
 xen/include/xen/param.h             | 87 +++++------------------------
 xen/xsm/flask/hooks.c               |  3 -
 xen/xsm/flask/policy/access_vectors |  2 -
 14 files changed, 23 insertions(+), 170 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 20925e38a2..0a63ce15b6 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -16,7 +16,7 @@ allow dom0_t xen_t:xen {
 allow dom0_t xen_t:xen2 {
 	resource_op psr_cmt_op psr_alloc pmu_ctrl get_symbol
 	get_cpu_levelling_caps get_cpu_featureset livepatch_op
-	coverage_op set_parameter
+	coverage_op
 };
 
 # Allow dom0 to use all XENVER_ subops that have checks.
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 549ceb9749..6342ac4ead 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -54,11 +54,6 @@ SECTIONS
        *(.data.rel.ro)
        *(.data.rel.ro.*)
 
-       . = ALIGN(POINTER_ALIGN);
-       __param_start = .;
-       *(.data.param)
-       __param_end = .;
-
        __proc_info_start = .;
        *(.proc.info)
        __proc_info_end = .;
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 3410bc5f6d..ca94c2bedc 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -112,11 +112,6 @@ static void __init init_ept_param(struct param_hypfs *par)
     update_ept_param();
     custom_runtime_set_var(par, opt_ept_setting);
 }
-#else
-static void update_ept_param(void)
-{
-}
-#endif
 
 static int parse_ept_param_runtime(const char *s);
 custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
@@ -172,6 +167,7 @@ static int parse_ept_param_runtime(const char *s)
 
     return 0;
 }
+#endif
 
 /* Dynamic (run-time adjusted) execution control flags. */
 u32 vmx_pin_based_exec_control __read_mostly;
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 3ed020e26b..0273f79152 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -128,11 +128,6 @@ SECTIONS
        *(.ex_table.pre)
        __stop___pre_ex_table = .;
 
-       . = ALIGN(POINTER_ALIGN);
-       __param_start = .;
-       *(.data.param)
-       __param_end = .;
-
 #if defined(CONFIG_HAS_VPCI) && defined(CONFIG_LATE_HWDOM)
        . = ALIGN(POINTER_ALIGN);
        __start_vpci_array = .;
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 065f2ee454..15e3b79ff5 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -122,8 +122,9 @@ config HYPFS
 	---help---
 	  Support Xen hypervisor file system. This file system is used to
 	  present various hypervisor internal data to dom0 and in some
-	  cases to allow modifying settings. Disabling the support might
-	  result in some features not being available.
+	  cases to allow modifying settings. Disabling the support will
+	  result in some features not being available, e.g. runtime parameter
+	  setting.
 
 	  If unsure, say Y.
 
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 34fb6d64c8..0164fcbb85 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -40,13 +40,13 @@ static void hypfs_read_lock(void)
     this_cpu(hypfs_locked) = hypfs_read_locked;
 }
 
-void hypfs_write_lock(void)
+static void hypfs_write_lock(void)
 {
     write_lock(&hypfs_lock);
     this_cpu(hypfs_locked) = hypfs_write_locked;
 }
 
-void hypfs_unlock(void)
+static void hypfs_unlock(void)
 {
     enum hypfs_lock_state locked = this_cpu(hypfs_locked);
 
@@ -365,7 +365,7 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
         goto out;
 
     p = container_of(leaf, struct param_hypfs, hypfs);
-    ret = p->param->par.func(buf);
+    ret = p->func(buf);
 
  out:
     xfree(buf);
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index d1381d6900..c4caeaec71 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -196,17 +196,6 @@ static void __init _cmdline_parse(const char *cmdline)
     parse_params(cmdline, __setup_start, __setup_end);
 }
 
-int runtime_parse(const char *line)
-{
-    int ret;
-
-    hypfs_write_lock();
-    ret = parse_params(line, __param_start, __param_end);
-    hypfs_unlock();
-
-    return ret;
-}
-
 /**
  *    cmdline_parse -- parses the xen command line.
  * If CONFIG_CMDLINE is set, it would be parsed prior to @cmdline.
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 1c6a817476..ec916424e5 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -471,42 +471,6 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
             copyback = 1;
         break;
 
-    case XEN_SYSCTL_set_parameter:
-    {
-#define XEN_SET_PARAMETER_MAX_SIZE 1023
-        char *params;
-
-        if ( op->u.set_parameter.pad[0] || op->u.set_parameter.pad[1] ||
-             op->u.set_parameter.pad[2] )
-        {
-            ret = -EINVAL;
-            break;
-        }
-        if ( op->u.set_parameter.size > XEN_SET_PARAMETER_MAX_SIZE )
-        {
-            ret = -E2BIG;
-            break;
-        }
-        params = xmalloc_bytes(op->u.set_parameter.size + 1);
-        if ( !params )
-        {
-            ret = -ENOMEM;
-            break;
-        }
-        if ( copy_from_guest(params, op->u.set_parameter.params,
-                             op->u.set_parameter.size) )
-            ret = -EFAULT;
-        else
-        {
-            params[op->u.set_parameter.size] = 0;
-            ret = runtime_parse(params);
-        }
-
-        xfree(params);
-
-        break;
-    }
-
     default:
         ret = arch_do_sysctl(op, u_sysctl);
         copyback = 0;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 3a08c512e8..a073647117 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1026,22 +1026,6 @@ struct xen_sysctl_livepatch_op {
     } u;
 };
 
-/*
- * XEN_SYSCTL_set_parameter
- *
- * Change hypervisor parameters at runtime.
- * The input string is parsed similar to the boot parameters.
- * Parameters are a single string terminated by a NUL byte of max. size
- * characters. Multiple settings can be specified by separating them
- * with blanks.
- */
-
-struct xen_sysctl_set_parameter {
-    XEN_GUEST_HANDLE_64(const_char) params; /* IN: pointer to parameters. */
-    uint16_t size;                          /* IN: size of parameters. */
-    uint16_t pad[3];                        /* IN: MUST be zero. */
-};
-
 #if defined(__i386__) || defined(__x86_64__)
 /*
  * XEN_SYSCTL_get_cpu_policy (x86 specific)
@@ -1106,7 +1090,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_get_cpu_levelling_caps        25
 #define XEN_SYSCTL_get_cpu_featureset            26
 #define XEN_SYSCTL_livepatch_op                  27
-#define XEN_SYSCTL_set_parameter                 28
+/* #define XEN_SYSCTL_set_parameter              28 */
 #define XEN_SYSCTL_get_cpu_policy                29
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
@@ -1135,7 +1119,6 @@ struct xen_sysctl {
         struct xen_sysctl_cpu_levelling_caps cpu_levelling_caps;
         struct xen_sysctl_cpu_featureset    cpu_featureset;
         struct xen_sysctl_livepatch_op      livepatch;
-        struct xen_sysctl_set_parameter     set_parameter;
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_sysctl_cpu_policy        cpu_policy;
 #endif
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 507ed3ae0b..4c9016f119 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -118,11 +118,6 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
-void hypfs_write_lock(void);
-void hypfs_unlock(void);
-#else
-static inline void hypfs_write_lock(void) {}
-static inline void hypfs_unlock(void) {}
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 2d7a054931..e5b0a007b8 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -75,7 +75,6 @@
 struct domain;
 
 void cmdline_parse(const char *cmdline);
-int runtime_parse(const char *line);
 int parse_bool(const char *s, const char *e);
 
 /**
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index 3fe1a06a41..064ba8da6e 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -27,9 +27,6 @@ struct kernel_param {
 };
 
 extern const struct kernel_param __setup_start[], __setup_end[];
-extern const struct kernel_param __param_start[], __param_end[];
-
-#define __dataparam       __used_section(".data.param")
 
 #define __param(att)      static const att \
     __attribute__((__aligned__(sizeof(void *)))) struct kernel_param
@@ -79,14 +76,12 @@ extern const struct kernel_param __param_start[], __param_end[];
         { .name = setup_str_ign,            \
           .type = OPT_IGNORE }
 
-#define __rtparam         __param(__dataparam)
-
 #ifdef CONFIG_HYPFS
 
 struct param_hypfs {
-    const struct kernel_param *param;
     struct hypfs_entry_leaf hypfs;
     void (*init_leaf)(struct param_hypfs *par);
+    int (*func)(const char *);
 };
 
 extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
@@ -109,28 +104,17 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
 
 /* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
 #define custom_runtime_only_param(nam, variable, initfunc) \
-    __rtparam __rtpar_##variable = \
-      { .name = (nam), \
-          .type = OPT_CUSTOM, \
-          .par.func = (variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .init_leaf = (initfunc), \
-          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_custom }
+          .hypfs.e.write = hypfs_write_custom, \
+          .init_leaf = (initfunc), \
+          .func = (variable) }
 #define boolean_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_BOOL, \
-          .len = sizeof(variable) + \
-                 BUILD_BUG_ON_ZERO(sizeof(variable) != sizeof(bool)), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
@@ -139,14 +123,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_bool, \
           .hypfs.content = &(variable) }
 #define integer_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_UINT, \
-          .len = sizeof(variable), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
@@ -155,14 +133,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_leaf, \
           .hypfs.content = &(variable) }
 #define size_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_SIZE, \
-          .len = sizeof(variable), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
@@ -171,14 +143,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_leaf, \
           .hypfs.content = &(variable) }
 #define string_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_STR, \
-          .len = sizeof(variable), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = 0, \
@@ -189,36 +155,11 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
 
 #else
 
-#define custom_runtime_only_param(_name, _var, unused) \
-    __rtparam __rtpar_##_var = \
-      { .name = _name, \
-          .type = OPT_CUSTOM, \
-          .par.func = _var }
-#define boolean_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_BOOL, \
-          .len = sizeof(_var) + \
-                 BUILD_BUG_ON_ZERO(sizeof(_var) != sizeof(bool)), \
-          .par.var = &_var }
-#define integer_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_UINT, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
-#define size_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_SIZE, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
-#define string_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_STR, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
+#define custom_runtime_only_param(nam, var, initfunc)
+#define boolean_runtime_only_param(nam, var)
+#define integer_runtime_only_param(nam, var)
+#define size_runtime_only_param(nam, var)
+#define string_runtime_only_param(nam, var)
 
 #define custom_runtime_set_var(parfs, var)
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a2c78e445c..a314bf85ce 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -822,9 +822,6 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_coverage_op:
         return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
                                     XEN2__COVERAGE_OP, NULL);
-    case XEN_SYSCTL_set_parameter:
-        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
-                                    XEN2__SET_PARAMETER, NULL);
 
     default:
         return avc_unknown_permission("sysctl", cmd);
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c9e385fb9b..b87c99ea98 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -99,8 +99,6 @@ class xen2
     livepatch_op
 # XEN_SYSCTL_coverage_op
     coverage_op
-# XEN_SYSCTL_set_parameter
-    set_parameter
 }
 
 # Classes domain and domain2 consist of operations that a domain performs on
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 11:59:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 11:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZZ0H-0005Fd-KM; Fri, 15 May 2020 11:59:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yAEg=65=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jZZ0G-0005Dq-66
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 11:59:48 +0000
X-Inumbo-ID: 792e57f8-96a3-11ea-a554-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 792e57f8-96a3-11ea-a554-12813bfff9fa;
 Fri, 15 May 2020 11:59:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 973C6AECE;
 Fri, 15 May 2020 11:59:04 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v9 09/12] xen: add runtime parameter access support to hypfs
Date: Fri, 15 May 2020 13:58:53 +0200
Message-Id: <20200515115856.11965-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200515115856.11965-1-jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add support for reading and modifying hypervisor runtime parameter
values via the hypervisor file system.

As runtime parameters can be modified via a sysctl as well, that path
has to take the hypfs rw_lock as writer.

For custom runtime parameters the connection between the parameter
value and the file system is established via an init function, which
sets the initial value (if needed) and the leaf properties.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V3:
- complete rework
- support custom parameters, too
- support parameter writing

V6:
- rewording in docs/misc/hypfs-paths.pandoc (Jan Beulich)
- use memchr() (Jan Beulich)
- use strlcat() (Jan Beulich)
- rework to use a custom parameter init function instead of a reference
  to a content variable, allowing the default strings to be dropped
- style correction (Jan Beulich)
- dropping param_append_str() in favor of a custom function at its only
  use site

V7:
- fine tune some parameter initializations (Jan Beulich)
- call custom_runtime_set_var() after updating the value
- modify alignment in Arm linker script to 4 (Jan Beulich)

V8:
- modify alignment in Arm linker script to 8 (Julien Grall)
- fix ept runtime parameter reporting (Jan Beulich)
- rebase to support CONFIG_HYPFS

V9:
- add empty line in arm linker script (Jan Beulich)
- drop array size enum members (Jan Beulich)
- hide struct param_hypfs completely without CONFIG_HYPFS (Jan Beulich)
- don't write size from hypfs_write_custom() (Jan Beulich)
- drop underscores from macro parameters (Jan Beulich)
- add parentheses around macro parameters (Jan Beulich)
- don't set initial string param size (Jan Beulich)
- move code in vmcs.c in preparation of patch 12 (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/arch/arm/xen.lds.S       |   8 +++
 xen/arch/x86/hvm/vmx/vmcs.c  |  29 ++++++++-
 xen/arch/x86/pv/domain.c     |  21 ++++++-
 xen/arch/x86/xen.lds.S       |   7 +++
 xen/common/grant_table.c     |  62 ++++++++++++++----
 xen/common/hypfs.c           |  34 +++++++++-
 xen/common/kernel.c          |  29 ++++++++-
 xen/drivers/char/console.c   |  72 +++++++++++++++++++--
 xen/include/xen/hypfs.h      |   7 +++
 xen/include/xen/param.h      | 119 ++++++++++++++++++++++++++++++++++-
 11 files changed, 372 insertions(+), 25 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 9a76bc383b..a111c6f25c 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -154,3 +154,12 @@ The major version of Xen.
 #### /buildinfo/version/minor = INTEGER
 
 The minor version of Xen.
+
+#### /params/
+
+A directory of runtime parameters.
+
+#### /params/*
+
+The individual parameters. The description of the different parameters can be
+found in `docs/misc/xen-command-line.pandoc`.
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index a497f6a48d..549ceb9749 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -89,6 +89,14 @@ SECTIONS
        __start_schedulers_array = .;
        *(.data.schedulers)
        __end_schedulers_array = .;
+
+#ifdef CONFIG_HYPFS
+       . = ALIGN(8);
+       __paramhypfs_start = .;
+       *(.data.paramhypfs)
+       __paramhypfs_end = .;
+#endif
+
        *(.data.rel)
        *(.data.rel.*)
        CONSTRUCTORS
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 221af9737a..3410bc5f6d 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -97,6 +97,30 @@ static int __init parse_ept_param(const char *s)
 }
 custom_param("ept", parse_ept_param);
 
+#ifdef CONFIG_HYPFS
+static char opt_ept_setting[10];
+
+static void update_ept_param(void)
+{
+    if ( opt_ept_exec_sp >= 0 )
+        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
+                 opt_ept_exec_sp);
+}
+
+static void __init init_ept_param(struct param_hypfs *par)
+{
+    update_ept_param();
+    custom_runtime_set_var(par, opt_ept_setting);
+}
+#else
+static void update_ept_param(void)
+{
+}
+#endif
+
+static int parse_ept_param_runtime(const char *s);
+custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
+
 static int parse_ept_param_runtime(const char *s)
 {
     struct domain *d;
@@ -115,6 +139,10 @@ static int parse_ept_param_runtime(const char *s)
 
     opt_ept_exec_sp = val;
 
+    update_ept_param();
+    custom_runtime_set_var(param_2_parfs(parse_ept_param_runtime),
+                           opt_ept_setting);
+
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
     {
@@ -144,7 +172,6 @@ static int parse_ept_param_runtime(const char *s)
 
     return 0;
 }
-custom_runtime_only_param("ept", parse_ept_param_runtime);
 
 /* Dynamic (run-time adjusted) execution control flags. */
 u32 vmx_pin_based_exec_control __read_mostly;
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 0a4a5bd001..f4e863a410 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -55,6 +55,23 @@ static __read_mostly enum {
     PCID_NOXPTI
 } opt_pcid = PCID_XPTI;
 
+#ifdef CONFIG_HYPFS
+static const char opt_pcid_2_string[][7] = {
+    [PCID_OFF] = "off",
+    [PCID_ALL] = "on",
+    [PCID_XPTI] = "xpti",
+    [PCID_NOXPTI] = "noxpti",
+};
+
+static void __init opt_pcid_init(struct param_hypfs *par)
+{
+    custom_runtime_set_var(par, opt_pcid_2_string[opt_pcid]);
+}
+#endif
+
+static int parse_pcid(const char *s);
+custom_runtime_param("pcid", parse_pcid, opt_pcid_init);
+
 static int parse_pcid(const char *s)
 {
     int rc = 0;
@@ -87,9 +104,11 @@ static int parse_pcid(const char *s)
         break;
     }
 
+    custom_runtime_set_var(param_2_parfs(parse_pcid),
+                           opt_pcid_2_string[opt_pcid]);
+
     return rc;
 }
-custom_runtime_param("pcid", parse_pcid);
 
 static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 0e3a733cab..3ed020e26b 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -279,6 +279,13 @@ SECTIONS
        __start_schedulers_array = .;
        *(.data.schedulers)
        __end_schedulers_array = .;
+
+#ifdef CONFIG_HYPFS
+       . = ALIGN(8);
+       __paramhypfs_start = .;
+       *(.data.paramhypfs)
+       __paramhypfs_end = .;
+#endif
   } :text
 
   DECL_SECTION(.data) {
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5ef7ff940d..ece670e484 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -85,8 +85,43 @@ struct grant_table {
     struct grant_table_arch arch;
 };
 
-static int parse_gnttab_limit(const char *param, const char *arg,
-                              unsigned int *valp)
+unsigned int __read_mostly opt_max_grant_frames = 64;
+static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
+
+#ifdef CONFIG_HYPFS
+#define GRANT_CUSTOM_VAL_SZ  12
+static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
+static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
+
+static void update_gnttab_par(unsigned int val, struct param_hypfs *par,
+                              char *parval)
+{
+    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
+    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
+}
+
+static void __init gnttab_max_frames_init(struct param_hypfs *par)
+{
+    update_gnttab_par(opt_max_grant_frames, par, opt_max_grant_frames_val);
+}
+
+static void __init max_maptrack_frames_init(struct param_hypfs *par)
+{
+    update_gnttab_par(opt_max_maptrack_frames, par,
+                      opt_max_maptrack_frames_val);
+}
+#else
+#define update_gnttab_par(v, unused1, unused2)     update_gnttab_par(v)
+#define parse_gnttab_limit(a, v, unused1, unused2) parse_gnttab_limit(a, v)
+
+static void update_gnttab_par(unsigned int val, struct param_hypfs *par,
+                              char *parval)
+{
+}
+#endif
+
+static int parse_gnttab_limit(const char *arg, unsigned int *valp,
+                              struct param_hypfs *par, char *parval)
 {
     const char *e;
     unsigned long val;
@@ -99,28 +134,33 @@ static int parse_gnttab_limit(const char *param, const char *arg,
         return -ERANGE;
 
     *valp = val;
+    update_gnttab_par(val, par, parval);
 
     return 0;
 }
 
-unsigned int __read_mostly opt_max_grant_frames = 64;
+static int parse_gnttab_max_frames(const char *arg);
+custom_runtime_param("gnttab_max_frames", parse_gnttab_max_frames,
+                     gnttab_max_frames_init);
 
 static int parse_gnttab_max_frames(const char *arg)
 {
-    return parse_gnttab_limit("gnttab_max_frames", arg,
-                              &opt_max_grant_frames);
+    return parse_gnttab_limit(arg, &opt_max_grant_frames,
+                              param_2_parfs(parse_gnttab_max_frames),
+                              opt_max_grant_frames_val);
 }
-custom_runtime_param("gnttab_max_frames", parse_gnttab_max_frames);
 
-static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
+static int parse_gnttab_max_maptrack_frames(const char *arg);
+custom_runtime_param("gnttab_max_maptrack_frames",
+                     parse_gnttab_max_maptrack_frames,
+                     max_maptrack_frames_init);
 
 static int parse_gnttab_max_maptrack_frames(const char *arg)
 {
-    return parse_gnttab_limit("gnttab_max_maptrack_frames", arg,
-                              &opt_max_maptrack_frames);
+    return parse_gnttab_limit(arg, &opt_max_maptrack_frames,
+                              param_2_parfs(parse_gnttab_max_maptrack_frames),
+                              opt_max_maptrack_frames_val);
 }
-custom_runtime_param("gnttab_max_maptrack_frames",
-                     parse_gnttab_max_maptrack_frames);
 
 #ifndef GNTTAB_MAX_VERSION
 #define GNTTAB_MAX_VERSION 2
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 13764fa02e..34fb6d64c8 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -10,6 +10,7 @@
 #include <xen/hypercall.h>
 #include <xen/hypfs.h>
 #include <xen/lib.h>
+#include <xen/param.h>
 #include <xen/rwlock.h>
 #include <public/hypfs.h>
 
@@ -39,13 +40,13 @@ static void hypfs_read_lock(void)
     this_cpu(hypfs_locked) = hypfs_read_locked;
 }
 
-static void hypfs_write_lock(void)
+void hypfs_write_lock(void)
 {
     write_lock(&hypfs_lock);
     this_cpu(hypfs_locked) = hypfs_write_locked;
 }
 
-static void hypfs_unlock(void)
+void hypfs_unlock(void)
 {
     enum hypfs_lock_state locked = this_cpu(hypfs_locked);
 
@@ -342,6 +343,35 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
     return 0;
 }
 
+int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+{
+    struct param_hypfs *p;
+    char *buf;
+    int ret;
+
+    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+
+    buf = xzalloc_array(char, ulen);
+    if ( !buf )
+        return -ENOMEM;
+
+    ret = -EFAULT;
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        goto out;
+
+    ret = -EDOM;
+    if ( memchr(buf, 0, ulen) != (buf + ulen - 1) )
+        goto out;
+
+    p = container_of(leaf, struct param_hypfs, hypfs);
+    ret = p->param->par.func(buf);
+
+ out:
+    xfree(buf);
+    return ret;
+}
+
 static int hypfs_write(struct hypfs_entry *entry,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
 {
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f464fe02ed..d1381d6900 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -198,7 +198,13 @@ static void __init _cmdline_parse(const char *cmdline)
 
 int runtime_parse(const char *line)
 {
-    return parse_params(line, __param_start, __param_end);
+    int ret;
+
+    hypfs_write_lock();
+    ret = parse_params(line, __param_start, __param_end);
+    hypfs_unlock();
+
+    return ret;
 }
 
 /**
@@ -429,6 +435,27 @@ static int __init buildinfo_init(void)
     return 0;
 }
 __initcall(buildinfo_init);
+
+static HYPFS_DIR_INIT(params, "params");
+
+static int __init param_init(void)
+{
+    struct param_hypfs *param;
+
+    hypfs_add_dir(&hypfs_root, &params, true);
+
+    for ( param = __paramhypfs_start; param < __paramhypfs_end; param++ )
+    {
+        if ( param->init_leaf )
+            param->init_leaf(param);
+        else if ( param->hypfs.e.type == XEN_HYPFS_TYPE_STRING )
+            param->hypfs.e.size = strlen(param->hypfs.content) + 1;
+        hypfs_add_leaf(&params, &param->hypfs, true);
+    }
+
+    return 0;
+}
+__initcall(param_init);
 #endif
 
 # define DO(fn) long do_##fn
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 913ae1b66a..56e24821b2 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -79,8 +79,28 @@ enum con_timestamp_mode
 
 static enum con_timestamp_mode __read_mostly opt_con_timestamp_mode = TSM_NONE;
 
+#ifdef CONFIG_HYPFS
+static const char con_timestamp_mode_2_string[][7] = {
+    [TSM_NONE] = "none",
+    [TSM_DATE] = "date",
+    [TSM_DATE_MS] = "datems",
+    [TSM_BOOT] = "boot",
+    [TSM_RAW] = "raw",
+};
+
+static void con_timestamp_mode_upd(struct param_hypfs *par)
+{
+    const char *val = con_timestamp_mode_2_string[opt_con_timestamp_mode];
+
+    custom_runtime_set_var_sz(par, val, 7);
+}
+#else
+#define con_timestamp_mode_upd(par)
+#endif
+
 static int parse_console_timestamps(const char *s);
-custom_runtime_param("console_timestamps", parse_console_timestamps);
+custom_runtime_param("console_timestamps", parse_console_timestamps,
+                     con_timestamp_mode_upd);
 
 /* conring_size: allows a large console ring than default (16kB). */
 static uint32_t __initdata opt_conring_size;
@@ -143,6 +163,39 @@ static int __read_mostly xenlog_guest_lower_thresh =
 static int parse_loglvl(const char *s);
 static int parse_guest_loglvl(const char *s);
 
+#ifdef CONFIG_HYPFS
+#define LOGLVL_VAL_SZ 16
+static char xenlog_val[LOGLVL_VAL_SZ];
+static char xenlog_guest_val[LOGLVL_VAL_SZ];
+
+static char *lvl2opt[] = { "none", "error", "warning", "info", "all" };
+
+static void xenlog_update_val(int lower, int upper, char *val)
+{
+    snprintf(val, LOGLVL_VAL_SZ, "%s/%s", lvl2opt[lower], lvl2opt[upper]);
+}
+
+static void __init xenlog_init(struct param_hypfs *par)
+{
+    xenlog_update_val(xenlog_lower_thresh, xenlog_upper_thresh, xenlog_val);
+    custom_runtime_set_var(par, xenlog_val);
+}
+
+static void __init xenlog_guest_init(struct param_hypfs *par)
+{
+    xenlog_update_val(xenlog_guest_lower_thresh, xenlog_guest_upper_thresh,
+                      xenlog_guest_val);
+    custom_runtime_set_var(par, xenlog_guest_val);
+}
+#else
+#define xenlog_val       NULL
+#define xenlog_guest_val NULL
+
+static void xenlog_update_val(int lower, int upper, char *val)
+{
+}
+#endif
+
 /*
  * <lvl> := none|error|warning|info|debug|all
  * loglvl=<lvl_print_always>[/<lvl_print_ratelimit>]
@@ -151,8 +204,8 @@ static int parse_guest_loglvl(const char *s);
  * Similar definitions for guest_loglvl, but applies to guest tracing.
  * Defaults: loglvl=warning ; guest_loglvl=none/warning
  */
-custom_runtime_param("loglvl", parse_loglvl);
-custom_runtime_param("guest_loglvl", parse_guest_loglvl);
+custom_runtime_param("loglvl", parse_loglvl, xenlog_init);
+custom_runtime_param("guest_loglvl", parse_guest_loglvl, xenlog_guest_init);
 
 static atomic_t print_everything = ATOMIC_INIT(0);
 
@@ -173,7 +226,7 @@ static int __parse_loglvl(const char *s, const char **ps)
     return 2; /* sane fallback */
 }
 
-static int _parse_loglvl(const char *s, int *lower, int *upper)
+static int _parse_loglvl(const char *s, int *lower, int *upper, char *val)
 {
     *lower = *upper = __parse_loglvl(s, &s);
     if ( *s == '/' )
@@ -181,18 +234,21 @@ static int _parse_loglvl(const char *s, int *lower, int *upper)
     if ( *upper < *lower )
         *upper = *lower;
 
+    xenlog_update_val(*lower, *upper, val);
+
     return *s ? -EINVAL : 0;
 }
 
 static int parse_loglvl(const char *s)
 {
-    return _parse_loglvl(s, &xenlog_lower_thresh, &xenlog_upper_thresh);
+    return _parse_loglvl(s, &xenlog_lower_thresh, &xenlog_upper_thresh,
+                         xenlog_val);
 }
 
 static int parse_guest_loglvl(const char *s)
 {
     return _parse_loglvl(s, &xenlog_guest_lower_thresh,
-                         &xenlog_guest_upper_thresh);
+                         &xenlog_guest_upper_thresh, xenlog_guest_val);
 }
 
 static char *loglvl_str(int lvl)
@@ -731,9 +787,11 @@ static int parse_console_timestamps(const char *s)
     {
     case 0:
         opt_con_timestamp_mode = TSM_NONE;
+        con_timestamp_mode_upd(param_2_parfs(parse_console_timestamps));
         return 0;
     case 1:
         opt_con_timestamp_mode = TSM_DATE;
+        con_timestamp_mode_upd(param_2_parfs(parse_console_timestamps));
         return 0;
     }
     if ( *s == '\0' || /* Compat for old booleanparam() */
@@ -750,6 +808,8 @@ static int parse_console_timestamps(const char *s)
     else
         return -EINVAL;
 
+    con_timestamp_mode_upd(param_2_parfs(parse_console_timestamps));
+
     return 0;
 }
 
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 5c6a0ccece..507ed3ae0b 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -116,6 +116,13 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+void hypfs_write_lock(void);
+void hypfs_unlock(void);
+#else
+static inline void hypfs_write_lock(void) {}
+static inline void hypfs_unlock(void) {}
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index a1dc3ba8f0..3fe1a06a41 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -1,6 +1,7 @@
 #ifndef _XEN_PARAM_H
 #define _XEN_PARAM_H
 
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/stdbool.h>
@@ -80,7 +81,115 @@ extern const struct kernel_param __param_start[], __param_end[];
 
 #define __rtparam         __param(__dataparam)
 
-#define custom_runtime_only_param(_name, _var) \
+#ifdef CONFIG_HYPFS
+
+struct param_hypfs {
+    const struct kernel_param *param;
+    struct hypfs_entry_leaf hypfs;
+    void (*init_leaf)(struct param_hypfs *par);
+};
+
+extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
+
+#define __paramhypfs      __used_section(".data.paramhypfs")
+
+#define __paramfs         static __paramhypfs  \
+    __attribute__((__aligned__(sizeof(void *)))) struct param_hypfs
+
+#define custom_runtime_set_var_sz(parfs, var, sz) \
+    { \
+        (parfs)->hypfs.content = var; \
+        (parfs)->hypfs.e.max_size = sz; \
+        (parfs)->hypfs.e.size = strlen(var) + 1; \
+    }
+#define custom_runtime_set_var(parfs, var) \
+    custom_runtime_set_var_sz(parfs, var, sizeof(var))
+
+#define param_2_parfs(par) &__parfs_##par
+
+/* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
+#define custom_runtime_only_param(nam, variable, initfunc) \
+    __rtparam __rtpar_##variable = \
+      { .name = (nam), \
+          .type = OPT_CUSTOM, \
+          .par.func = (variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .init_leaf = (initfunc), \
+          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_custom }
+#define boolean_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_BOOL, \
+          .len = sizeof(variable) + \
+                 BUILD_BUG_ON_ZERO(sizeof(variable) != sizeof(bool)), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = sizeof(variable), \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_bool, \
+          .hypfs.content = &(variable) }
+#define integer_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_UINT, \
+          .len = sizeof(variable), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = sizeof(variable), \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &(variable) }
+#define size_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_SIZE, \
+          .len = sizeof(variable), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = sizeof(variable), \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &(variable) }
+#define string_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_STR, \
+          .len = sizeof(variable), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = 0, \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &(variable) }
+
+#else
+
+#define custom_runtime_only_param(_name, _var, unused) \
     __rtparam __rtpar_##_var = \
       { .name = _name, \
           .type = OPT_CUSTOM, \
@@ -111,9 +220,13 @@ extern const struct kernel_param __param_start[], __param_end[];
           .len = sizeof(_var), \
           .par.var = &_var }
 
-#define custom_runtime_param(_name, _var) \
+#define custom_runtime_set_var(parfs, var)
+
+#endif
+
+#define custom_runtime_param(_name, _var, initfunc) \
     custom_param(_name, _var); \
-    custom_runtime_only_param(_name, _var)
+    custom_runtime_only_param(_name, _var, initfunc)
 #define boolean_runtime_param(_name, _var) \
     boolean_param(_name, _var); \
     boolean_runtime_only_param(_name, _var)
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 12:02:18 2020
Subject: Re: [PATCH v2 1/6] x86/mem-paging: fold p2m_mem_paging_prep()'s main
 if()-s
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <cea2307f-1aae-51cb-20ac-fbaf4b945771@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e5d2a06e-99bd-a529-1621-4583bb41c78d@citrix.com>
Date: Fri, 15 May 2020 13:02:00 +0100
In-Reply-To: <cea2307f-1aae-51cb-20ac-fbaf4b945771@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>

On 23/04/2020 09:37, Jan Beulich wrote:
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1819,22 +1821,16 @@ int p2m_mem_paging_prep(struct domain *d
>              goto out;
>          mfn = page_to_mfn(page);
>          page_extant = 0;
> -    }
> -
> -    /* If we were given a buffer, now is the time to use it */
> -    if ( !page_extant && user_ptr )
> -    {
> -        void *guest_map;
> -        int rc;
>  
>          ASSERT( mfn_valid(mfn) );
>          guest_map = map_domain_page(mfn);
> -        rc = copy_from_user(guest_map, user_ptr, PAGE_SIZE);
> +        ret = copy_from_user(guest_map, user_ptr, PAGE_SIZE);
>          unmap_domain_page(guest_map);
> -        if ( rc )
> +        if ( ret )
>          {
> -            gdprintk(XENLOG_ERR, "Failed to load paging-in gfn %lx domain %u "
> -                                 "bytes left %d\n", gfn_l, d->domain_id, rc);
> +            gdprintk(XENLOG_ERR,
> +                     "Failed to load paging-in gfn %lx Dom%d bytes left %d\n",
> +                     gfn_l, d->domain_id, ret);

%pd, and "%pd gfn %lx" would be a more natural way to phrase it.

That said - I'm not sure how useful the information is.  We don't
normally print any diagnostics on -EFAULT and I don't see why this case
is special.

With at least %pd fixed, but preferably with the printk() dropped,
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 15 12:07:19 2020
Subject: Re: Error during update_runstate_area with KPTI activated
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
 <355d0b59-29d4-a483-73a3-3cdd468c0b77@xen.org>
 <0E6DD742-3C79-4CD2-93A1-6D054377A919@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <19ea976d-5d62-db57-28b2-116b0c4da03f@xen.org>
Date: Fri, 15 May 2020 13:07:04 +0100
In-Reply-To: <0E6DD742-3C79-4CD2-93A1-6D054377A919@arm.com>
Cc: Hongyan Xia <hx242@xen.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>

Hi,

On 15/05/2020 11:10, Bertrand Marquis wrote:
> 
> 
>> On 15 May 2020, at 11:00, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Bertrand,
>>
>> On 15/05/2020 10:21, Bertrand Marquis wrote:
>>>> On 15 May 2020, at 10:10, Roger Pau Monné <roger.pau@citrix.com> wrote:
>>>>
>>>> On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> On 15/05/2020 09:38, Roger Pau Monné wrote:
>>>>>> On Fri, May 15, 2020 at 07:39:16AM +0000, Bertrand Marquis wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 14 May 2020, at 20:13, Julien Grall <julien.grall.oss@gmail.com> wrote:
>>>>>>>
>>>>>>> On Thu, 14 May 2020 at 19:12, Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>>>>>>>
>>>>>>> On 14/05/2020 18:38, Julien Grall wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> On 14/05/2020 17:18, Bertrand Marquis wrote:
>>>>>>>
>>>>>>>
>>>>>>> On 14 May 2020, at 16:57, Julien Grall <julien@xen.org> wrote:
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> On 14/05/2020 15:28, Bertrand Marquis wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> Hi,
>>>>>>>
>>>>>>> When executing linux on arm64 with KPTI activated (in Dom0 or in a
>>>>>>> DomU), I have a lot of walk page table errors like this:
>>>>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>>>>> 0xffffff837ebe0cd0
>>>>>>> After implementing a call trace, I found that the problem was
>>>>>>> coming from the update_runstate_area when linux has KPTI activated.
>>>>>>> I have the following call trace:
>>>>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
>>>>>>> 0xffffff837ebe0cd0
>>>>>>> (XEN) backtrace.c:29: Stacktrace start at 0x8007638efbb0 depth 10
>>>>>>> (XEN)    [<000000000027780c>] get_page_from_gva+0x180/0x35c
>>>>>>> (XEN)    [<00000000002700c8>] guestcopy.c#copy_guest+0x1b0/0x2e4
>>>>>>> (XEN)    [<0000000000270228>] raw_copy_to_guest+0x2c/0x34
>>>>>>> (XEN)    [<0000000000268dd0>] domain.c#update_runstate_area+0x90/0xc8
>>>>>>> (XEN)    [<000000000026909c>] domain.c#schedule_tail+0x294/0x2d8
>>>>>>> (XEN)    [<0000000000269524>] context_switch+0x58/0x70
>>>>>>> (XEN)    [<00000000002479c4>] core.c#sched_context_switch+0x88/0x1e4
>>>>>>> (XEN)    [<000000000024845c>] core.c#schedule+0x224/0x2ec
>>>>>>> (XEN)    [<0000000000224018>] softirq.c#__do_softirq+0xe4/0x128
>>>>>>> (XEN)    [<00000000002240d4>] do_softirq+0x14/0x1c
>>>>>>> Discussing this subject with Stefano, he pointed me to a discussion
>>>>>>> started a year ago on this subject here:
>>>>>>> https://lists.xenproject.org/archives/html/xen-devel/2018-11/msg03053.html
>>>>>>>
>>>>>>> And a patch was submitted:
>>>>>>> https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html
>>>>>>>
>>>>>>> I rebased this patch on current master and it is solving the
>>>>>>> problem I have seen.
>>>>>>> It sounds to me like a good solution to introduce a
>>>>>>> VCPUOP_register_runstate_phys_memory_area to not depend on the area
>>>>>>> actually being mapped in the guest when a context switch is being
>>>>>>> done (which is actually the problem happening when a context switch
>>>>>>> is triggered while a guest is running in EL0).
>>>>>>> Is there any reason why this was not merged in the end?
>>>>>>>
>>>>>>> I just skimmed through the thread to remind myself of the state.
>>>>>>> AFAICT, this is blocked on the contributor to clarify the intended
>>>>>>> interaction and provide a new version.
>>>>>>>
>>>>>>> What do you mean here by intended interaction? How should the new
>>>>>>> hypercall be used by the guest OS?
>>>>>>>
>>>>>>> From what I remember, Jan was seeking clarification on whether the two
>>>>>>> hypercalls (existing and new) can be called together by the same OS
>>>>>>> (and make sense).
>>>>>>>
>>>>>>> There was also the question of the handover between two pieces of
>>>>>>> software. For instance, what if the firmware is using the existing
>>>>>>> interface but the OS the new one? Similar question about Kexecing a
>>>>>>> different kernel.
>>>>>>>
>>>>>>> This part is mostly documentation, so we can discuss the approach
>>>>>>> and review the implementation.
>>>>>>>
>>>>>>>
>>>>>>>
>>>>>>> I am still in favor of the new hypercall (and still in my todo list)
>>>>>>> but I haven't yet found time to revive the series.
>>>>>>>
>>>>>>> Would you be willing to take over the series? I would be happy to
>>>>>>> bring you up to speed and provide review.
>>>>>>>
>>>>>>> Sure I can take it over.
>>>>>>>
>>>>>>> I ported it to the master version of Xen and I tested it on a board.
>>>>>>> I still need to do a deep review of the code myself, but I have an
>>>>>>> understanding of the problem and of what the idea is.
>>>>>>>
>>>>>>> Any help to get up to speed would be more than welcome :-)
>>>>>>> I would recommend going through the latest version (v3) and the
>>>>>>> previous one (v2). I am also suggesting v2 because I think the split was
>>>>>>> easier to review/understand.
>>>>>>>
>>>>>>> The x86 code is probably what is going to give you the most trouble as
>>>>>>> there are two ABIs to support (compat and non-compat). If you don't
>>>>>>> have an x86 setup, I should be able to test it/help write it.
>>>>>>>
>>>>>>> Feel free to ask any questions and I will try my best to remember the
>>>>>>> discussion from last year :).
>>>>>>>
>>>>>>> At risk of being shouted down again, a new hypercall isn't necessarily
>>>>>>> necessary, and there are probably better ways of fixing it.
>>>>>>>
>>>>>>> The underlying ABI problem is that the area is registered by virtual
>>>>>>> address.  The only correct way this should have been done is to register
>>>>>>> by guest physical address, so Xen's updating of the data doesn't
>>>>>>> interact with the guest pagetable settings/restrictions.  x86 suffers
>>>>>>> the same kind of problems as ARM, except we silently squash the fallout.
>>>>>>>
>>>>>>> The logic in Xen is horrible, and I would really rather it was deleted
>>>>>>> completely rather than kept for compatibility.
>>>>>>>
>>>>>>> The runstate area is always fixed kernel memory and doesn't move.  I
>>>>>>> believe it is already restricted from crossing a page boundary, and we
>>>>>>> can calculate the va=>pa translation when the hypercall is made.
>>>>>>>
>>>>>>> Yes - this is technically an ABI change, but nothing is going to break
>>>>>>> (AFAICT) and the cleanup win is large enough to make this a *very*
>>>>>>> attractive option.
>>>>>>>
>>>>>>> I suggested this approach two years ago [1] but you were the one
>>>>>>> saying that the buffer could cross a page boundary on older Linux [2]:
>>>>>>>
>>>>>>> "I'd love to do this, but we cant.  Older Linux used to have a virtual
>>>>>>> buffer spanning a page boundary.  Changing the behaviour under that will
>>>>>>> cause older setups to explode."
>>>>>>
>>>>>> Sorry, this was a long time ago, and the details have faded. IIRC there was
>>>>>> even a proposal (or patch set) that took that into account and allowed
>>>>>> buffers to span across a page boundary by taking a reference to two
>>>>>> different pages in that case.
>>>>>
>>>>> I am not aware of a patch set. Juergen suggested a per-domain mapping but
>>>>> there were no details on how this could be done (my e-mail was left unanswered
>>>>> [1]).
>>>>>
>>>>> If we were using vmap() then we would need up to 1MB per domain (assuming
>>>>> 128 vCPUs). This sounds like quite a bit, and I think we need to agree whether it
>>>>> would be an acceptable solution (this was also left unanswered [1]).
>>>>
>>>> Could we map/unmap the runstate area on domain switch into a per-CPU
>>>> linear address space area? There's no reason to have all the runstate
>>>> areas mapped all the time; you just care about the one from the
>>>> running vCPU.
>>>>
>>>> Maybe the overhead of that mapping and unmapping would be
>>>> too high? But seeing that we are aiming at a secret-free Xen we would
>>>> have to eventually go that route anyway.
>>> Maybe the new hypercall should be a bit different:
>>> - we have this area allocated already inside Xen and we do a copy of it on every context switch
>>> - the guest is not supposed to modify any data in this area
>>> We could introduce a new hypercall:
>>> - Xen allocates the runstate area using a page-aligned address and size
>>
>> At the moment the runstate is 40 bytes. If we were going to follow this proposal, I would recommend trying to fit as many runstates as possible in each page.
>>
>> Otherwise, you would waste 4056 bytes per vCPU in both Xen and the guest OS. This would be even worse for a kernel using 64KB pages.
> 
> Agree, so it should be one call to have an area with the runstate for all vCPUs, and ensure each vCPU runstate has a size and an address which are cache-line aligned to prevent coherency stress.

One 4KB region is only going to cover 64 vCPUs. So you would need multiple
pages. This raises more questions about how this would work for vCPU
online/offline or even hotplug.

The code required to track these pages is going to be more complex,
either in Xen or in the guest.

> 
>>
>>
>>> - the guest provide a free guest physical space to the hypercall
>>
>> This is the tricky part. How does the guest know what is free in its physical address space?
>>
>> I am not aware of any way to do this in Linux. So the best you could do would be to allocate a page from the RAM and tell Xen to replace it with the runstate mapping.
>>
>> However, this also means you are going to possibly shatter a superpage in the P2M. This may affect the performance in long-run.
> 
> Very true, Linux does not have a way to do that.
> What about going the other way around: Xen can provide the physical address to the guest.

Right now, the hypervisor doesn't have an easy way to know the layout. Only
the toolstack has. So you would need a way to tell Xen where the region
has been reserved.

This region would have to be allocated below 4GB to cater for all types
of guest. A guest may not use it at all (time accounting is not
mandatory).

Even if this is only a few pages, it is not ideal. It would be better to
let the guest allocate some RAM and then pass the address to Xen.

> 
>>
>>> - Xen maps read-only its own area to the guest at the provided address
>>> - Xen shall not modify any data in the runstate area of other cores/guests (should already be the case)
>>> - We keep the current hypercall for backward compatibility, map the area during the hypercall and keep it mapped at all times, and keep doing the copy during context switches
>>> This would highly reduce the overhead by removing the mapping/unmapping.
>>
>> I don't think the overhead is going to be significant with map_domain_page()/unmap_domain_page().
>>
>> On Arm64, the memory is always mapped so map/unmap is a NOP. On Arm32, we have a fast map/unmap implementation.
>>
>> On x86, without Secret Hiding (SH), most of the memory is also always mapped, so this operation is mostly a NOP. For the SH case, the map/unmap will be used on any access to guest memory (such as hypercall accesses), but it is quite optimized.
>>
>> Note that the current overhead is much higher, as you need to walk the guest PT and P2M (we are talking about multiple map/unmaps). So moving to one map/unmap is already going to be a major improvement.
> 
> Agree
> 
>>
>>> Regarding secret-free Xen, I do not really think this is problematic here, as we already have a copy of this internally anyway
>>
>> The secret free work is still under review, so what is done in Xen today shouldn't dictate the future.
>>
>> The question to answer is whether we believe leaking the content may be a problem. If the answer is yes, then most likely we will want the internal representation to be mapped on demand, or only mapped in the Xen page tables associated with that domain.
>>
>> My gut feeling is the runstate content is not critical. But I haven't fully thought it through yet.
> 
> The runstate information is stored inside Xen and then copied to the guest memory during context switch. So even if the guest area is not mapped, this information is still available inside the Xen internal copy.

Again, Xen is not secret free today. So the fact Xen has an internal
copy doesn't mean it is fine to leak the content. For instance, the
guests' memory regions are always mapped; does that mean the content is
not sensitive? Definitely not. Hence the effort behind Secret Hiding.

As I wrote in my previous e-mail, *if* we consider the leaking a
problem, then we would want the *internal* representation to be mapped
on demand, or only mapped in the Xen page tables associated with that
domain.

But... the runstate area doesn't use up a full page. Today the page may
also contain secrets from a domain. If you always map it, then there is
a risk of leaking that content. This would have to be taken into
consideration if we follow Roger's approach.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 15 12:34:51 2020
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Subject: Re: Error during update_runstate_area with KPTI activated
Date: Fri, 15 May 2020 12:34:08 +0000
Message-ID: <B0FB86DC-1D4F-438E-B78B-0D9845D13E8A@arm.com>
References: <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
 <20200515100742.GR54375@Air-de-Roger>
In-Reply-To: <20200515100742.GR54375@Air-de-Roger>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>

Hi Roger

> On 15 May 2020, at 11:07, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> Can you please fix your email client to properly indent replies? It's
> impossible to distinguish your added text when you reply from the
> original email, as it's not indented in any way.

Sorry for that; it seems my email client was detecting the mail as being in rich text and was answering keeping that format.
Please tell me if this was not fixed in this email.
IGFjcm9zcyBhIHBhZ2UgYm91bmRhcnkgYnkgdGFraW5nIGEgcmVmZXJlbmNlIHRvIHR3bw0KPj4g
ZGlmZmVyZW50IHBhZ2VzIGluIHRoYXQgY2FzZS4NCj4+IA0KPj4gSSBhbSBub3QgYXdhcmUgb2Yg
YSBwYXRjaCBzZXQuIEp1ZXJnZW4gc3VnZ2VzdGVkIGEgcGVyLWRvbWFpbiBtYXBwaW5nIGJ1dA0K
Pj4gdGhlcmUgd2FzIG5vIGRldGFpbHMgaG93IHRoaXMgY291bGQgYmUgZG9uZSAobXkgZS1tYWls
IHdhcyBsZWZ0IHVuYW5zd2VyZWQNCj4+IFsxXSkuDQo+PiANCj4+IElmIHdlIHdlcmUgdXNpbmcg
dGhlIHZtYXAoKSB0aGVuIHdlIHdvdWxkIG5lZWQgdXAgMU1CIHBlciBkb21haW4gKGFzc3VtaW5n
DQo+PiAxMjggdkNQVXMpLiBUaGlzIHNvdW5kcyBxdWl0ZSBhIGJpdCBhbmQgSSB0aGluayB3ZSBu
ZWVkIHRvIGFncmVlIHdoZXRoZXIgaXQNCj4+IHdvdWxkIGJlIGFuIGFjY2VwdGFibGUgc29sdXRp
b24gKHRoaXMgd2FzIGFsc28gbGVmdCB1bmFuc3dlcmVkIFsxXSkuDQo+PiANCj4+IENvdWxkIHdl
IG1hcC91bm1hcCB0aGUgcnVudGltZSBhcmVhIG9uIGRvbWFpbiBzd2l0Y2ggYXQgYSBwZXItY3B1
DQo+PiBiYXNlZCBsaW5lYXIgc3BhY2UgYXJlYT8gVGhlcmUncyBubyByZWFzb24gdG8gaGF2ZSBh
bGwgdGhlIHJ1bnRpbWUNCj4+IGFyZWFzIG1hcHBlZCBhbGwgdGhlIHRpbWUsIHlvdSBqdXN0IGNh
cmUgYWJvdXQgdGhlIG9uZSBmcm9tIHRoZQ0KPj4gcnVubmluZyB2Y3B1Lg0KPj4gDQo+PiBNYXli
ZSB0aGUgb3ZlcmhlYWQgb2YgdGhhdCBtYXBwaW5nIGFuZCB1bm1hcHBpbmcgd291bGQgYmUNCj4+
IHRvbyBoaWdoPyBCdXQgc2VlaW5nIHRoYXQgd2UgYXJlIGFpbWluZyBhdCBhIHNlY3JldC1mcmVl
IFhlbiB3ZSB3b3VsZA0KPj4gaGF2ZSB0byBldmVudHVhbGx5IGdvIHRoYXQgcm91dGUgYW55d2F5
Lg0KPj4gDQo+PiBNYXliZSB0aGUgbmV3IGh5cGVyY2FsbCBzaG91bGQgYmUgYSBiaXQgZGlmZmVy
ZW50Og0KPj4gLSB3ZSBoYXZlIHRoaXMgYXJlYSBhbGxvY2F0ZWQgYWxyZWFkeSBpbnNpZGUgWGVu
IGFuZCB3ZSBkbyBhIGNvcHkgb2YgaXQgb24gYW55IGNvbnRleHQgc3dpdGNoDQo+PiAtIHRoZSBn
dWVzdCBpcyBub3Qgc3VwcG9zZWQgdG8gbW9kaWZ5IGFueSBkYXRhIGluIHRoaXMgYXJlYQ0KPj4g
DQo+PiBXZSBjb3VsZCBpbnRyb2R1Y2UgYSBuZXcgaHlwZXJjYWxsOg0KPj4gLSBYZW4gYWxsb2Nh
dGUgdGhlIHJ1bnN0YXRlIGFyZWEgdXNpbmcgYSBwYWdlIGFsaWduZWQgYWRkcmVzcyBhbmQgc2l6
ZQ0KPiANCj4gSXQncyBnZW5lcmFsbHkgYmVzdCBpZiB3ZSBjYW4gdXNlIGEgZ3Vlc3QgcHJvdmlk
ZWQgbWVtb3J5IGFyZWEgdGhhdCdzDQo+IGFscmVhZHkgcG9wdWxhdGVkLCBiZWNhdXNlLi4uDQo+
IA0KPj4gLSB0aGUgZ3Vlc3QgcHJvdmlkZSBhIGZyZWUgZ3Vlc3QgcGh5c2ljYWwgc3BhY2UgdG8g
dGhlIGh5cGVyY2FsbA0KPiANCj4gLi4uIGl0J3MgaGFyZCBmb3IgdGhlIGd1ZXN0IHRvIGZpZ3Vy
ZSBvdXQgd2hpY2ggbm9uLXBvcHVsYXRlZCBhcmVhcw0KPiBhcmUgc2FmZSBmb3IgbWFwcGluZyBh
cmJpdHJhcnkgdGhpbmdzLiBJZTogeW91IG1pZ2h0IGF0dGVtcHQgdG8gbWFwDQo+IHRoZSBydW5z
dGF0ZSBhcmVhIG9uIHRvcCBvZiBzb21lIE1NSU8gYXJlYSB0aGUgZ3Vlc3QgaXMgbm90IGF3YXJl
IG9mDQo+IGZvciBleGFtcGxlIGlmIGl0IGhhcyBwYXNzdGhyb3VnaCBlbmFibGVkLg0KDQpXaXRo
IHlvdSBhbnN3ZXIgYW5kIEp1bGlhbiBvbmVzIGl0IGlzIG5vdyBjbGVhciB0aGF0IHRoZSBvbmx5
IHNvbHV0aW9uIGlzIHRvIGhhdmUgdGhlIGFyZWEgcHJvdmlkZWQgYnkgdGhlIGd1ZXN0Lg0KDQo+
IA0KPj4gLSBYZW4gbWFwcyByZWFkLW9ubHkgaXRzIG93biBhcmVhIHRvIHRoZSBndWVzdCBhdCB0
aGUgcHJvdmlkZWQgYWRkcmVzcw0KPj4gLSBYZW4gc2hhbGwgbm90IG1vZGlmeSBhbnkgZGF0YSBp
biB0aGUgcnVuc3RhdGUgYXJlYSBvZiBvdGhlciBjb3Jlcy9ndWVzdHMgKHNob3VsZCBhbHJlYWR5
IGJlIHRoZSBjYXNlKQ0KPiANCj4gSSdtIG5vdCBzdXJlIHRob3NlIHR3byByZXN0cmljdGlvbnMg
YXJlIHJlbGV2YW50LCBpdCdzIG5vdCByZWxldmFudCB0bw0KPiBYZW4gd2hldGhlciB0aGUgZ3Vl
c3QgZGVjaWRlZCB0byBvdmVyd3JpdGUgdGhlIHJ1bnN0YXRlIGFyZWEuIFhlbiB3aWxsDQo+IGp1
c3Qgd3JpdGUgdG8gaXQgd2hlbiBkb2luZyBhIGNvbnRleHQgc3dpdGNoIGluIG9yZGVyIHRvIHVw
ZGF0ZSBpdCwNCj4gYnV0IGl0IHNob3VsZCBuZXZlciByZWFkIGZyb20gaXQuDQo+IA0KPiBPciBh
cmUgeW91IG1lYW5pbmcgdG8gbWFwIHZjcHUtPnJ1bnN0YXRlIGRpcmVjdGx5IGludG8gdGhlIGd1
ZXN0DQo+IHBoeXNtYXA/DQo+IA0KPiBJIHRoaW5rIHRoYXQncyBhIGJhZCBpZGVhIGFzIHdlIHdv
dWxkIHRoZW4gaGF2ZSB0byBmb3JjZSBlYWNoIHZDUFUNCj4gcnVuc3RhdGUgdG8gdGFrZSB1cCB0
byBhIHdob2xlIHBhZ2UsIHdhc3RpbmcgbG90cyBvZiBtZW1vcnkuDQoNCkkgd2FzIG1vcmUgdGhp
bmtpbmcgaW4gcHV0dGluZyBhbGwgdGhlIHJ1bnN0YXRlIG9mIGFsbCB2Q1BVcyBpbiB0aGUgc2Ft
ZSBwYWdlIChvciBpbiBzZXZlcmFsIGlmIHRoaXMgd2FzIG5vdCBlbm91Z2gpDQoNCk15IG1haW4g
cG9pbnQgd2FzIHRvIGhhdmUgWGVuIGRpcmVjdGx5IG1vZGlmeWluZyB0aGlzIG9uZSBpbnN0ZWFk
IG9mIGRvaW5nIGNvcGllcyBhcyBYZW4gaXMganVzdCB3cml0aW5nIHRvIGl0IGFuZCBuZXZlciBy
ZWFkcyBmcm9tIGl0IGFuZCB0aGUgZ3Vlc3QgaXMgbm90IHN1cHBvc2UgdG8gd3JpdGUgdG8gaXQg
KGJ1dCBpZiBpdCBkb2VzIHRoYXTigJlzIG1vcmUgb3IgbGVzcyBhbiBlcnJvciBvbiBpdHMgc2lk
ZSkuDQoNCkJlcnRyYW5kDQoNCg0KPiANCj4gVGhhbmtzLCBSb2dlci4NCg0K


From xen-devel-bounces@lists.xenproject.org Fri May 15 12:39:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 12:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZZcy-0001PL-25; Fri, 15 May 2020 12:39:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=lh6L=65=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jZZcw-0001PG-Ih
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 12:39:46 +0000
X-Inumbo-ID: 27bfb122-96a9-11ea-ae69-bc764e2007e4
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0e::60e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27bfb122-96a9-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 12:39:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Cw4bNmTVnyczlmiJYHCEsAFL4zrUO8yHCFANvKZuxI=;
 b=vP7XflMNPWKu316hX6PBHyUmyNTM5qHHobHEubiK4dFImsEIanGSUzvK+C4SVkZNNOXaQPuY363Xdns4MLWgCetLO93noJQeQ2LO2AFZfeflDjAihVWq2C+28k7eDxpzjTJqLkSspX3U45f6jyS+cO1x56+vvEj3+om2OMXeoAs=
Received: from DB6P18901CA0006.EURP189.PROD.OUTLOOK.COM (2603:10a6:4:16::16)
 by VI1PR08MB5421.eurprd08.prod.outlook.com (2603:10a6:803:132::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2979.27; Fri, 15 May
 2020 12:39:43 +0000
Received: from DB5EUR03FT047.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:16:cafe::77) by DB6P18901CA0006.outlook.office365.com
 (2603:10a6:4:16::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20 via Frontend
 Transport; Fri, 15 May 2020 12:39:43 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT047.mail.protection.outlook.com (10.152.21.232) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3000.19 via Frontend Transport; Fri, 15 May 2020 12:39:42 +0000
Received: ("Tessian outbound e88319d7ccd0:v54");
 Fri, 15 May 2020 12:39:42 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 306f35cad789f5d9
X-CR-MTA-TID: 64aa7808
Received: from 1f4d2f31696e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 7D83808C-4AEF-454D-B61B-419BBEE1D925.1; 
 Fri, 15 May 2020 12:39:37 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1f4d2f31696e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 15 May 2020 12:39:37 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=oIsn+in9M84C4pJUJ332oMkCjtdq7k4sTgUsapgnI/0zalxaig38oYjq8XNFN6GgWEU4O9jai7amqrn1lZJ6s9yZ/KnGMY+O1FTK/PBpBOO6pWm0Ae/ECATUB2IbhRG6DY4txfkkGpS3yzW0ICbyMhEiPcE7RCU6uJ0SIfUe/HguvQeO+qmvNsDcsc2F5wNI+V9uasCgagW/P0WZxRWW2BiAtCyl4whAjAF0qUfIxD8kcns1QjRpvVdvBvFHLVAfT0iNjZf8lOPxxrWC4jIDq2RF3anPPIFEDaPuN/OsNcsz35wGzgK4LtvucWOV44MuXJa7WGsV77AfNyZbBXaQqw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Cw4bNmTVnyczlmiJYHCEsAFL4zrUO8yHCFANvKZuxI=;
 b=MY8mzdv+C9m86Yy9iXtF65Jnj1+ZrMwWcybSN2pXNOHpBWLgny9ZZUI/ch1n3DscOz2BMcbB7fa0tnMPO2pfD0El2Z2RS/ODRhTGZsP7uDs3z7uO3Lmi4D8X8IUL8TSbqHIInOXN1U/ZD1SZDqmRK5RJYDSa4vWWv4qxszOnIMmbWlS1D1sGSFIcX3JtrNAyJNGd49FdnanixxoIESPLjP8bjIycHZ0+mcENe7KIB2wLfRAI8lAjLqFZwLJ6pI0SL0XoyW6S3GiAbzswiZs739ISUlLNHVNfCxkdVy/B1nHhAdQ0FZ+WpJouJMRQ+gM9ZFguLlPl9Xso9gzJvoJTpg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Cw4bNmTVnyczlmiJYHCEsAFL4zrUO8yHCFANvKZuxI=;
 b=vP7XflMNPWKu316hX6PBHyUmyNTM5qHHobHEubiK4dFImsEIanGSUzvK+C4SVkZNNOXaQPuY363Xdns4MLWgCetLO93noJQeQ2LO2AFZfeflDjAihVWq2C+28k7eDxpzjTJqLkSspX3U45f6jyS+cO1x56+vvEj3+om2OMXeoAs=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3035.eurprd08.prod.outlook.com (2603:10a6:5:1d::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20; Fri, 15 May
 2020 12:39:34 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3000.022; Fri, 15 May 2020
 12:39:34 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: Error during update_runstate_area with KPTI activated
Thread-Topic: Error during update_runstate_area with KPTI activated
Thread-Index: AQHWKfvm5I6PbMAmoUakTwbdgqMBQqinvLCAgAAF7oCAABaDgIAACWGAgAARCgCAANBZgIAAEJgAgAAERACAAASVAIAAAySAgAAKyACAAALvAIAAIIgAgAAJFQA=
Date: Fri, 15 May 2020 12:39:34 +0000
Message-ID: <5217F8B3-5260-402D-8D53-A8DD732C44CC@arm.com>
References: <C6B0E24F-60E6-4621-8448-C8DBAE3277A9@arm.com>
 <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
 <355d0b59-29d4-a483-73a3-3cdd468c0b77@xen.org>
 <0E6DD742-3C79-4CD2-93A1-6D054377A919@arm.com>
 <19ea976d-5d62-db57-28b2-116b0c4da03f@xen.org>
In-Reply-To: <19ea976d-5d62-db57-28b2-116b0c4da03f@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: e26269fb-256a-430f-ec7e-08d7f8cd0ac6
x-ms-traffictypediagnostic: DB7PR08MB3035:|VI1PR08MB5421:
X-Microsoft-Antispam-PRVS: <VI1PR08MB5421F2153DD945FCB951C0A89DBD0@VI1PR08MB5421.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04041A2886
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 8maqSVoQyMu9uKKr/n6nLyZXMKCabVH7HeTnkubE7eDWBzxgGfZ0WKBtNwokpCj1Kpsv8GnQDEnHjOBtf40WeRQ+7cXJKC8YA9E48SC8PShYY/X2ourwlO8oSSVzynIA+T/68YkDkMHr42PzGwPolSSOvnBUad6h+FY2LdgsO+uZp7iR0JO3nGHimMMAfCIgCIcc8GQSTnNhp7NrbT0SRETys6hf47oVz9VFqA6LzPeTeRbL+pEKzh70j8Rv+CYOSxeVR7wd0OA+Eeaau2Ww22BwRQycbc4qXjmeX3xXRpGR3nws+0zvPNSKxO5Unug7/Vb1piFHj4m7Qtw7MRQEnm0B2onleo7T8reGw9HDb6XdVItZWxOtmiJHvNtaaDPYtn0dpyWZywaIVmOH+WEk8qcEZeGGE3C9ft4QU6SfmL7m9+waWdzJOoIbk6B9590nUX4yi0VZcedmlNbRnH/X88RdA6sO8+4oNdv9Z/DY/+xhoGAOzyOlm/uI1MbiUYcGk24tigIJ66+olYrNhjp+ag==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(366004)(376002)(39860400002)(136003)(346002)(396003)(8676002)(6512007)(6506007)(54906003)(15650500001)(6486002)(316002)(5660300002)(2906002)(186003)(86362001)(53546011)(2616005)(26005)(33656002)(91956017)(4326008)(966005)(478600001)(64756008)(71200400001)(36756003)(30864003)(66476007)(76116006)(6916009)(66446008)(66946007)(8936002)(66556008);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: j7fkZTrn1lztoD+NJRsR/8qOfipEyMmKw70NAtGMoxvp62MCujLdssWPRlShW9RFR83FQwVP/lqZ1xJL28uDAjOIr4VHOhIQgg0EuNOr92BtuZ1BNS00N0RWbGFZOspIHMlrga5DtL3AymR5ViCBezYM4Y343uWP0ocJUsD6WinVfj3Cec6l5LIFJvQFpcU1XhRxyXarILixQ9q/QDF7NExoLicnIESWqf0XYIVKgRITJpA4cyBZUarw3RAy6HO7wZNpH6nQwC4+zzj4XbVCr/UiTkPVKv7Nfs3Wz/MQsWp8LhKTSdbl/PY5HRMDtiOyk1KhT93PJCWfkEiFKzVXmkbY/4+M6kT8VaGCQkWA800CmiGlNxI6X3IXO4b5HKAeGiQvW8m2/yHRvZwpWTsbtXuXvmxdnTw3HALQcf1TGEODEURdy18WBNUaaxOH4u0wOwdiVIXPGxPtAVWXlTXSWS6YhD4rFYzEI8eG/9JTh6x/PMGImtXcbFe2kJC8xlSj
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <260945C106BB9B49A18025CF822BDDA1@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3035
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: DB5EUR03FT047.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(376002)(346002)(396003)(136003)(39860400002)(46966005)(33656002)(36756003)(8676002)(30864003)(82740400003)(5660300002)(6506007)(6512007)(6486002)(81166007)(478600001)(8936002)(26005)(86362001)(186003)(2906002)(47076004)(54906003)(4326008)(336012)(15650500001)(53546011)(966005)(2616005)(82310400002)(356005)(70206006)(70586007)(316002)(6862004);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 318ed55e-f4f0-4712-2d64-08d7f8cd05c4
X-Forefront-PRVS: 04041A2886
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: H52/OjfqVFHqM/iQOOsFQ/uxKDmd85pTKMabE+VnueVUWfLCZJEXDXpbtC2Ha7QEBC2g1SgD/zwoWW1mLInE95IB8UURqshaIhC6+K3WrTATijBprgIB3YBUz7mX7MCGinmQeuHTBDy/1ltT5+we9UGQuPpiIegE70YBlff2bG4X8WPGLYVssAm2gZl403g4YqnelTdndPFrKeHtWd2wSFA6AYZ+hwZ6n1+cQ77nrg28EAhp/EpYnNLQwNUOEZS/M1hJUxi2Ybylx+2+8KlCgMLSirzp8x7Yf0WXRtK5btwG53sK6qRicprXnPln6RRx8oyVNtgaVwyPPhjMIHigsOtVZADoVgBJve1b4O7efHyWmEyBSVjXuCgC1SQW8bdCD2dNwalSJylcOfpZT/tBJNBgGzOGptGi6uMTRkCZ/qvqzYJX6O/BpmzXeFeNoWpW0EHPA+JH3LoBz5ZYq7JE2DYJG/iyr1kuLYNRYC6beHEQ6rG9DaNi3slavWcD/Nx1LgUejbxboB1bBShGS7ZEEmv/UsGr73Q2dJOMVYNuxT7bCEqJUFyBLy1o+g9Cq5bUuMfYvR7TKWPByVhtIti8Fq5vVCcAyJu+9B9ebCp7RDM=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2020 12:39:42.8932 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e26269fb-256a-430f-ec7e-08d7f8cd0ac6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5421
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Hongyan Xia <hx242@xen.org>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMTUgTWF5IDIwMjAsIGF0IDEzOjA3LCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPiB3cm90ZToNCj4gDQo+IEhpLA0KPiANCj4gT24gMTUvMDUvMjAyMCAxMToxMCwgQmVydHJh
bmQgTWFycXVpcyB3cm90ZToNCj4+PiBPbiAxNSBNYXkgMjAyMCwgYXQgMTE6MDAsIEp1bGllbiBH
cmFsbCA8anVsaWVuQHhlbi5vcmc+IHdyb3RlOg0KPj4+IA0KPj4+IEhpIEJlcnRyYW5kLA0KPj4+
IA0KPj4+IE9uIDE1LzA1LzIwMjAgMTA6MjEsIEJlcnRyYW5kIE1hcnF1aXMgd3JvdGU6DQo+Pj4+
PiBPbiAxNSBNYXkgMjAyMCwgYXQgMTA6MTAsIFJvZ2VyIFBhdSBNb25uw6kgPHJvZ2VyLnBhdUBj
aXRyaXguY29tIDxtYWlsdG86cm9nZXIucGF1QGNpdHJpeC5jb20+PiB3cm90ZToNCj4+Pj4+IA0K
Pj4+Pj4gT24gRnJpLCBNYXkgMTUsIDIwMjAgYXQgMDk6NTM6NTRBTSArMDEwMCwgSnVsaWVuIEdy
YWxsIHdyb3RlOg0KPj4+Pj4+IFtDQVVUSU9OIC0gRVhURVJOQUwgRU1BSUxdIERPIE5PVCByZXBs
eSwgY2xpY2sgbGlua3MsIG9yIG9wZW4gYXR0YWNobWVudHMgdW5sZXNzIHlvdSBoYXZlIHZlcmlm
aWVkIHRoZSBzZW5kZXIgYW5kIGtub3cgdGhlIGNvbnRlbnQgaXMgc2FmZS4NCj4+Pj4+PiANCj4+
Pj4+PiBIaSwNCj4+Pj4+PiANCj4+Pj4+PiBPbiAxNS8wNS8yMDIwIDA5OjM4LCBSb2dlciBQYXUg
TW9ubsOpIHdyb3RlOg0KPj4+Pj4+PiBPbiBGcmksIE1heSAxNSwgMjAyMCBhdCAwNzozOToxNkFN
ICswMDAwLCBCZXJ0cmFuZCBNYXJxdWlzIHdyb3RlOg0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiANCj4+
Pj4+Pj4+IE9uIDE0IE1heSAyMDIwLCBhdCAyMDoxMywgSnVsaWVuIEdyYWxsIDxqdWxpZW4uZ3Jh
bGwub3NzQGdtYWlsLmNvbSA8bWFpbHRvOmp1bGllbi5ncmFsbC5vc3NAZ21haWwuY29tPjxtYWls
dG86anVsaWVuLmdyYWxsLm9zc0BnbWFpbC5jb20+PiB3cm90ZToNCj4+Pj4+Pj4+IA0KPj4+Pj4+
Pj4gT24gVGh1LCAxNCBNYXkgMjAyMCBhdCAxOToxMiwgQW5kcmV3IENvb3BlciA8YW5kcmV3LmNv
b3BlcjNAY2l0cml4LmNvbSA8bWFpbHRvOmFuZHJldy5jb29wZXIzQGNpdHJpeC5jb20+PG1haWx0
bzphbmRyZXcuY29vcGVyM0BjaXRyaXguY29tPj4gd3JvdGU6DQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+
IE9uIDE0LzA1LzIwMjAgMTg6MzgsIEp1bGllbiBHcmFsbCB3cm90ZToNCj4+Pj4+Pj4+IEhpLA0K
Pj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBPbiAxNC8wNS8yMDIwIDE3OjE4LCBCZXJ0cmFuZCBNYXJxdWlz
IHdyb3RlOg0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IE9uIDE0IE1heSAyMDIwLCBh
dCAxNjo1NywgSnVsaWVuIEdyYWxsIDxqdWxpZW5AeGVuLm9yZyA8bWFpbHRvOmp1bGllbkB4ZW4u
b3JnPjxtYWlsdG86anVsaWVuQHhlbi5vcmc+PiB3cm90ZToNCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4g
DQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IE9uIDE0LzA1LzIwMjAgMTU6MjgsIEJlcnRyYW5kIE1hcnF1
aXMgd3JvdGU6DQo+Pj4+Pj4+PiBIaSwNCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gSGksDQo+Pj4+Pj4+
PiANCj4+Pj4+Pj4+IFdoZW4gZXhlY3V0aW5nIGxpbnV4IG9uIGFybTY0IHdpdGggS1BUSSBhY3Rp
dmF0ZWQgKGluIERvbTAgb3IgaW4gYQ0KPj4+Pj4+Pj4gRG9tVSksIEkgaGF2ZSBhIGxvdCBvZiB3
YWxrIHBhZ2UgdGFibGUgZXJyb3JzIGxpa2UgdGhpczoNCj4+Pj4+Pj4+IChYRU4pIHAybS5jOjE4
OTA6IGQxdjA6IEZhaWxlZCB0byB3YWxrIHBhZ2UtdGFibGUgdmENCj4+Pj4+Pj4+IDB4ZmZmZmZm
ODM3ZWJlMGNkMA0KPj4+Pj4+Pj4gQWZ0ZXIgaW1wbGVtZW50aW5nIGEgY2FsbCB0cmFjZSwgSSBm
b3VuZCB0aGF0IHRoZSBwcm9ibGVtIHdhcw0KPj4+Pj4+Pj4gY29taW5nIGZyb20gdGhlIHVwZGF0
ZV9ydW5zdGF0ZV9hcmVhIHdoZW4gbGludXggaGFzIEtQVEkgYWN0aXZhdGVkLg0KPj4+Pj4+Pj4g
SSBoYXZlIHRoZSBmb2xsb3dpbmcgY2FsbCB0cmFjZToNCj4+Pj4+Pj4+IChYRU4pIHAybS5jOjE4
OTA6IGQxdjA6IEZhaWxlZCB0byB3YWxrIHBhZ2UtdGFibGUgdmENCj4+Pj4+Pj4+IDB4ZmZmZmZm
ODM3ZWJlMGNkMA0KPj4+Pj4+Pj4gKFhFTikgYmFja3RyYWNlLmM6Mjk6IFN0YWNrdHJhY2Ugc3Rh
cnQgYXQgMHg4MDA3NjM4ZWZiYjAgZGVwdGggMTANCj4+Pj4+Pj4+IChYRU4pICAgIFs8MDAwMDAw
MDAwMDI3NzgwYz5dIGdldF9wYWdlX2Zyb21fZ3ZhKzB4MTgwLzB4MzVjDQo+Pj4+Pj4+PiAoWEVO
KSAgICBbPDAwMDAwMDAwMDAyNzAwYzg+XSBndWVzdGNvcHkuYyNjb3B5X2d1ZXN0KzB4MWIwLzB4
MmU0DQo+Pj4+Pj4+PiAoWEVOKSAgICBbPDAwMDAwMDAwMDAyNzAyMjg+XSByYXdfY29weV90b19n
dWVzdCsweDJjLzB4MzQNCj4+Pj4+Pj4+IChYRU4pICAgIFs8MDAwMDAwMDAwMDI2OGRkMD5dIGRv
bWFpbi5jI3VwZGF0ZV9ydW5zdGF0ZV9hcmVhKzB4OTAvMHhjOA0KPj4+Pj4+Pj4gKFhFTikgICAg
WzwwMDAwMDAwMDAwMjY5MDljPl0gZG9tYWluLmMjc2NoZWR1bGVfdGFpbCsweDI5NC8weDJkOA0K
Pj4+Pj4+Pj4gKFhFTikgICAgWzwwMDAwMDAwMDAwMjY5NTI0Pl0gY29udGV4dF9zd2l0Y2grMHg1
OC8weDcwDQo+Pj4+Pj4+PiAoWEVOKSAgICBbPDAwMDAwMDAwMDAyNDc5YzQ+XSBjb3JlLmMjc2No
ZWRfY29udGV4dF9zd2l0Y2grMHg4OC8weDFlNA0KPj4+Pj4+Pj4gKFhFTikgICAgWzwwMDAwMDAw
MDAwMjQ4NDVjPl0gY29yZS5jI3NjaGVkdWxlKzB4MjI0LzB4MmVjDQo+Pj4+Pj4+PiAoWEVOKSAg
ICBbPDAwMDAwMDAwMDAyMjQwMTg+XSBzb2Z0aXJxLmMjX19kb19zb2Z0aXJxKzB4ZTQvMHgxMjgN
Cj4+Pj4+Pj4+IChYRU4pICAgIFs8MDAwMDAwMDAwMDIyNDBkND5dIGRvX3NvZnRpcnErMHgxNC8w
eDFjDQo+Pj4+Pj4+PiBEaXNjdXNzaW5nIHRoaXMgc3ViamVjdCB3aXRoIFN0ZWZhbm8sIGhlIHBv
aW50ZWQgbWUgdG8gYSBkaXNjdXNzaW9uDQo+Pj4+Pj4+PiBzdGFydGVkIGEgeWVhciBhZ28gb24g
dGhpcyBzdWJqZWN0IGhlcmU6DQo+Pj4+Pj4+PiBodHRwczovL2xpc3RzLnhlbnByb2plY3Qub3Jn
L2FyY2hpdmVzL2h0bWwveGVuLWRldmVsLzIwMTgtMTEvbXNnMDMwNTMuaHRtbA0KPj4+Pj4+Pj4g
DQo+Pj4+Pj4+PiBBbmQgYSBwYXRjaCB3YXMgc3VibWl0dGVkOg0KPj4+Pj4+Pj4gaHR0cHM6Ly9s
aXN0cy54ZW5wcm9qZWN0Lm9yZy9hcmNoaXZlcy9odG1sL3hlbi1kZXZlbC8yMDE5LTA1L21zZzAy
MzIwLmh0bWwNCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gSSByZWJhc2VkIHRoaXMgcGF0Y2ggb24gY3Vy
cmVudCBtYXN0ZXIgYW5kIGl0IGlzIHNvbHZpbmcgdGhlDQo+Pj4+Pj4+PiBwcm9ibGVtIEkgaGF2
ZSBzZWVuLg0KPj4+Pj4+Pj4gSXQgc291bmRzIHRvIG1lIGxpa2UgYSBnb29kIHNvbHV0aW9uIHRv
IGludHJvZHVjZSBhDQo+Pj4+Pj4+PiBWQ1BVT1BfcmVnaXN0ZXJfcnVuc3RhdGVfcGh5c19tZW1v
cnlfYXJlYSB0byBub3QgZGVwZW5kIG9uIHRoZSBhcmVhDQo+Pj4+Pj4+PiBhY3R1YWxseSBiZWlu
ZyBtYXBwZWQgaW4gdGhlIGd1ZXN0IHdoZW4gYSBjb250ZXh0IHN3aXRjaCBpcyBiZWluZw0KPj4+
Pj4+Pj4gZG9uZSAod2hpY2ggaXMgYWN0dWFsbHkgdGhlIHByb2JsZW0gaGFwcGVuaW5nIHdoZW4g
YSBjb250ZXh0IHN3aXRjaA0KPj4+Pj4+Pj4gaXMgdHJpZ2dlciB3aGlsZSBhIGd1ZXN0IGlzIHJ1
bm5pbmcgaW4gRUwwKS4NCj4+Pj4+Pj4+IElzIHRoZXJlIGFueSByZWFzb24gd2h5IHRoaXMgd2Fz
IG5vdCBtZXJnZWQgYXQgdGhlIGVuZCA/DQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IEkganVzdCBza2lt
bWVkIHRocm91Z2ggdGhlIHRocmVhZCB0byByZW1pbmQgbXlzZWxmIHRoZSBzdGF0ZS4NCj4+Pj4+
Pj4+IEFGQUlDVCwgdGhpcyBpcyBibG9ja2VkIG9uIHRoZSBjb250cmlidXRvciB0byBjbGFyaWZ5
IHRoZSBpbnRlbmRlZA0KPj4+Pj4+Pj4gaW50ZXJhY3Rpb24gYW5kIHByb3ZpZGUgYSBuZXcgdmVy
c2lvbi4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gV2hhdCBkbyB5b3UgbWVhbiBoZXJlIGJ5IGludGVu
ZGVkIGludGVyYWN0aW9uID8gSG93IHRoZSBuZXcgaHlwZXINCj4+Pj4+Pj4+IGNhbGwgc2hvdWxk
IGJlIHVzZWQgYnkgdGhlIGd1ZXN0IE9TID8NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gRnJvbSB3aGF0
IEkgcmVtZW1iZXIsIEphbiB3YXMgc2Vla2luZyBjbGFyaWZpY2F0aW9uIG9uIHdoZXRoZXIgdGhl
IHR3bw0KPj4+Pj4+Pj4gaHlwZXJjYWxscyAoZXhpc3RpbmcgYW5kIG5ldykgY2FuIGJlIGNhbGxl
ZCB0b2dldGhlciBieSB0aGUgc2FtZSBPUw0KPj4+Pj4+Pj4gKGFuZCBtYWtlIHNlbnNlKS4NCj4+
Pj4+Pj4+IA0KPj4+Pj4+Pj4gVGhlcmUgd2FzIGFsc28gdGhlIHF1ZXN0aW9uIG9mIHRoZSBoYW5k
b3ZlciBiZXR3ZWVuIHR3byBwaWVjZXMgb2YNCj4+Pj4+Pj4+IHNvdGZ3YXJlLiBGb3IgaW5zdGFu
Y2UsIHdoYXQgaWYgdGhlIGZpcm13YXJlIGlzIHVzaW5nIHRoZSBleGlzdGluZw0KPj4+Pj4+Pj4g
aW50ZXJmYWNlIGJ1dCB0aGUgT1MgdGhlIG5ldyBvbmU/IFNpbWlsYXIgcXVlc3Rpb24gYWJvdXQg
S2V4ZWNpbmcgYQ0KPj4+Pj4+Pj4gZGlmZmVyZW50IGtlcm5lbC4NCj4+Pj4+Pj4+IA0KPj4+Pj4+
Pj4gVGhpcyBwYXJ0IGlzIG1vc3RseSBkb2N1bWVudGF0aW9uIHNvIHdlIGNhbiBkaXNjdXNzIGFi
b3V0IHRoZSBhcHByb2FjaA0KPj4+Pj4+Pj4gYW5kIHJldmlldyB0aGUgaW1wbGVtZW50YXRpb24u
DQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBJIGFtIHN0aWxsIGlu
IGZhdm9yIG9mIHRoZSBuZXcgaHlwZXJjYWxsIChhbmQgc3RpbGwgaW4gbXkgdG9kbyBsaXN0KQ0K
Pj4+Pj4+Pj4gYnV0IEkgaGF2ZW4ndCB5ZXQgZm91bmQgdGltZSB0byByZXZpdmUgdGhlIHNlcmll
cy4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gV291bGQgeW91IGJlIHdpbGxpbmcgdG8gdGFrZSBvdmVy
IHRoZSBzZXJpZXM/IEkgd291bGQgYmUgaGFwcHkgdG8NCj4+Pj4+Pj4+IGJyaW5nIHlvdSB1cCB0
byBzcGVlZCBhbmQgcHJvdmlkZSByZXZpZXcuDQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IFN1cmUgSSBj
YW4gdGFrZSBpdCBvdmVyLg0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBJIHBvcnRlZCBpdCB0byBtYXN0
ZXIgdmVyc2lvbiBvZiB4ZW4gYW5kIEkgdGVzdGVkIGl0IG9uIGEgYm9hcmQuDQo+Pj4+Pj4+PiBJ
IHN0aWxsIG5lZWQgdG8gZG8gYSBkZWVwIHJldmlldyBvZiB0aGUgY29kZSBteXNlbGYgYnV0IEkg
aGF2ZSBhbg0KPj4+Pj4+Pj4gdW5kZXJzdGFuZGluZyBvZiB0aGUgcHJvYmxlbSBhbmQgd2hhdCBp
cyB0aGUgaWRlYS4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gQW55IGhlbHAgdG8gZ2V0IG9uIHNwZWVk
IHdvdWxkIGJlIG1vcmUgdGhlbiB3ZWxjb21lIDotKQ0KPj4+Pj4+Pj4gSSB3b3VsZCByZWNvbW1l
bmQgdG8gZ28gdGhyb3VnaCB0aGUgbGF0ZXN0IHZlcnNpb24gKHYzKSBhbmQgdGhlDQo+Pj4+Pj4+
PiBwcmV2aW91cyAodjIpLiBJIGFtIGFsc28gc3VnZ2VzdGluZyB2MiBiZWNhdXNlIEkgdGhpbmsg
dGhlIHNwbGl0IHdhcw0KPj4+Pj4+Pj4gZWFzaWVyIHRvIHJldmlldy91bmRlcnN0YW5kLg0KPj4+
Pj4+Pj4gDQo+Pj4+Pj4+PiBUaGUgeDg2IGNvZGUgaXMgcHJvYmFibHkgd2hhdCBpcyBnb2luZyB0
byBnaXZlIHlvdSB0aGUgbW9zdCB0cm91YmxlIGFzDQo+Pj4+Pj4+PiB0aGVyZSBhcmUgdHdvIEFC
SXMgdG8gc3VwcG9ydCAoY29tcGF0IGFuZCBub24tY29tcGF0KS4gSWYgeW91IGRvbid0DQo+Pj4+
Pj4+PiBoYXZlIGFuIHg4NiBzZXR1cCwgSSBzaG91bGQgYmUgYWJsZSB0byB0ZXN0IGl0L2hlbHAg
d3JpdGUgaXQuDQo+Pj4+Pj4+PiANCj4+Pj4+Pj4+IEZlZWwgZnJlZSB0byBhc2sgYW55IHF1ZXN0
aW9ucyBhbmQgSSB3aWxsIHRyeSBteSBiZXN0IHRvIHJlbWVtYmVyIHRoZQ0KPj4+Pj4+Pj4gZGlz
Y3Vzc2lvbiBmcm9tIGxhc3QgeWVhciA6KS4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gQXQgcmlzayBv
ZiBiZWluZyBzaG91dGVkIGRvd24gYWdhaW4sIGEgbmV3IGh5cGVyY2FsbCBpc24ndCBuZWNlc3Nh
cmlseQ0KPj4+Pj4+Pj4gbmVjZXNzYXJ5LCBhbmQgdGhlcmUgYXJlIHByb2JhYmx5IGJldHRlciB3
YXlzIG9mIGZpeGluZyBpdC4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gVGhlIHVuZGVybHlpbmcgQUJJ
IHByb2JsZW0gaXMgdGhhdCB0aGUgYXJlYSBpcyByZWdpc3RlcmVkIGJ5IHZpcnR1YWwNCj4+Pj4+
Pj4+IGFkZHJlc3MuICBUaGUgb25seSBjb3JyZWN0IHdheSB0aGlzIHNob3VsZCBoYXZlIGJlZW4g
ZG9uZSBpcyB0byByZWdpc3Rlcg0KPj4+Pj4+Pj4gYnkgZ3Vlc3QgcGh5c2ljYWwgYWRkcmVzcywg
c28gWGVuJ3MgdXBkYXRpbmcgb2YgdGhlIGRhdGEgZG9lc24ndA0KPj4+Pj4+Pj4gaW50ZXJhY3Qg
d2l0aCB0aGUgZ3Vlc3QgcGFnZXRhYmxlIHNldHRpbmdzL3Jlc3RyaWN0aW9ucy4gIHg4NiBzdWZm
ZXJzDQo+Pj4+Pj4+PiB0aGUgc2FtZSBraW5kIG9mIHByb2JsZW1zIGFzIEFSTSwgZXhjZXB0IHdl
IHNpbGVudGx5IHNxdWFzaCB0aGUgZmFsbG91dC4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gVGhlIGxv
Z2ljIGluIFhlbiBpcyBob3JyaWJsZSwgYW5kIEkgd291bGQgcmVhbGx5IHJhdGhlciBpdCB3YXMg
ZGVsZXRlZA0KPj4+Pj4+Pj4gY29tcGxldGVseSwgcmF0aGVyIHRoYW4gdG8gYmUga2VwdCBmb3Ig
Y29tcGF0aWJpbGl0eS4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gVGhlIHJ1bnN0YXRlIGFyZWEgaXMg
YWx3YXlzIGZpeGVkIGtlcm5lbCBtZW1vcnkgYW5kIGRvZXNuJ3QgbW92ZS4gIEkNCj4+Pj4+Pj4+
IGJlbGlldmUgaXQgaXMgYWxyZWFkeSByZXN0cmljdGVkIGZyb20gY3Jvc3NpbmcgYSBwYWdlIGJv
dW5kYXJ5LCBhbmQgd2UNCj4+Pj4+Pj4+IGNhbiBjYWxjdWxhdGUgdGhlIHZhPT5wYSB0cmFuc2xh
dGlvbiB3aGVuIHRoZSBoeXBlcmNhbGwgaXMgbWFkZS4NCj4+Pj4+Pj4+IA0KPj4+Pj4+Pj4gWWVz
IC0gdGhpcyBpcyBhIHRlY2huaWNhbGx5IEFCSSBjaGFuZ2UsIGJ1dCBub3RoaW5nIGlzIGdvaW5n
IHRvIGJyZWFrDQo+Pj4+Pj4+PiAoQUZBSUNUKSBhbmQgdGhlIGNsZWFudXAgd2luIGlzIGxhcmdl
IGVub3VnaCB0byBtYWtlIHRoaXMgYSAqdmVyeSoNCj4+Pj4+Pj4+IGF0dHJhY3RpdmUgb3B0aW9u
Lg0KPj4+Pj4+Pj4gDQo+Pj4+Pj4+PiBJIHN1Z2dlc3RlZCB0aGlzIGFwcHJvYWNoIHR3byB5ZWFy
cyBhZ28gWzFdIGJ1dCB5b3Ugd2VyZSB0aGUgb25lDQo+Pj4+Pj4+PiBzYXlpbmcgdGhhdCBidWZm
ZXIgY291bGQgY3Jvc3MgcGFnZS1ib3VuZGFyeSBvbiBvbGRlciBMaW51eCBbMl06DQo+Pj4+Pj4+
PiANCj4+Pj4+Pj4+ICJJJ2QgbG92ZSB0byBkbyB0aGlzLCBidXQgd2UgY2FudC4gIE9sZGVyIExp
bnV4IHVzZWQgdG8gaGF2ZSBhIHZpcnR1YWwNCj4+Pj4+Pj4+IGJ1ZmZlciBzcGFubmluZyBhIHBh
Z2UgYm91bmRhcnkuICBDaGFuZ2luZyB0aGUgYmVoYXZpb3VyIHVuZGVyIHRoYXQgd2lsbA0KPj4+
Pj4+Pj4gY2F1c2Ugb2xkZXIgc2V0dXBzIHRvIGV4cGxvZGUuIg0KPj4+Pj4+PiANCj4+Pj4+Pj4g
U29ycnkgdGhpcyB3YXMgbG9uZyB0aW1lIGFnbywgYW5kIGRldGFpbHMgaGF2ZSBmYWRlZC4gSUlS
QyB0aGVyZSB3YXMNCj4+Pj4+Pj4gZXZlbiBhIHByb3Bvc2FsIChvciBwYXRjaCBzZXQpIHRoYXQg
dG9vayB0aGF0IGludG8gYWNjb3VudCBhbmQgYWxsb3dlZA0KPj4+Pj4+PiBidWZmZXJzIHRvIHNw
YW4gYWNyb3NzIGEgcGFnZSBib3VuZGFyeSBieSB0YWtpbmcgYSByZWZlcmVuY2UgdG8gdHdvDQo+
Pj4+Pj4+IGRpZmZlcmVudCBwYWdlcyBpbiB0aGF0IGNhc2UuDQo+Pj4+Pj4gDQo+Pj4+Pj4gSSBh
bSBub3QgYXdhcmUgb2YgYSBwYXRjaCBzZXQuIEp1ZXJnZW4gc3VnZ2VzdGVkIGEgcGVyLWRvbWFp
biBtYXBwaW5nIGJ1dA0KPj4+Pj4+IHRoZXJlIHdhcyBubyBkZXRhaWxzIGhvdyB0aGlzIGNvdWxk
IGJlIGRvbmUgKG15IGUtbWFpbCB3YXMgbGVmdCB1bmFuc3dlcmVkDQo+Pj4+Pj4gWzFdKS4NCj4+
Pj4+PiANCj4+Pj4+PiBJZiB3ZSB3ZXJlIHVzaW5nIHRoZSB2bWFwKCkgdGhlbiB3ZSB3b3VsZCBu
ZWVkIHVwIDFNQiBwZXIgZG9tYWluIChhc3N1bWluZw0KPj4+Pj4+IDEyOCB2Q1BVcykuIFRoaXMg
c291bmRzIHF1aXRlIGEgYml0IGFuZCBJIHRoaW5rIHdlIG5lZWQgdG8gYWdyZWUgd2hldGhlciBp
dA0KPj4+Pj4+IHdvdWxkIGJlIGFuIGFjY2VwdGFibGUgc29sdXRpb24gKHRoaXMgd2FzIGFsc28g
bGVmdCB1bmFuc3dlcmVkIFsxXSkuDQo+Pj4+PiANCj4+Pj4+IENvdWxkIHdlIG1hcC91bm1hcCB0
aGUgcnVudGltZSBhcmVhIG9uIGRvbWFpbiBzd2l0Y2ggYXQgYSBwZXItY3B1DQo+Pj4+PiBiYXNl
ZCBsaW5lYXIgc3BhY2UgYXJlYT8gVGhlcmUncyBubyByZWFzb24gdG8gaGF2ZSBhbGwgdGhlIHJ1
bnRpbWUNCj4+Pj4+IGFyZWFzIG1hcHBlZCBhbGwgdGhlIHRpbWUsIHlvdSBqdXN0IGNhcmUgYWJv
dXQgdGhlIG9uZSBmcm9tIHRoZQ0KPj4+Pj4gcnVubmluZyB2Y3B1Lg0KPj4+Pj4gDQo+Pj4+PiBN
YXliZSB0aGUgb3ZlcmhlYWQgb2YgdGhhdCBtYXBwaW5nIGFuZCB1bm1hcHBpbmcgd291bGQgYmUN
>>>>> too high? But seeing that we are aiming at a secret-free Xen we would
>>>>> have to eventually go that route anyway.
>>>> Maybe the new hypercall should be a bit different:
>>>> - we have this area allocated already inside Xen and we do a copy of it on any context switch
>>>> - the guest is not supposed to modify any data in this area
>>>> We could introduce a new hypercall:
>>>> - Xen allocates the runstate area using a page-aligned address and size
>>> 
>>> At the moment the runstate is 40 bytes. If we were going to follow this proposal, I would recommend trying to fit as many runstates as possible in your page.
>>> 
>>> Otherwise, you would waste 4056 bytes per vCPU in both Xen and the guest OS. This would be even worse for a 64KB kernel.
>> Agree, so it should be one call to have an area with the runstate for all vCPUs, ensuring a vCPU runstate has a size and an address which are cache-line-size aligned to prevent coherency stress.
> 
> One 4KB region is only going to cover 64 vCPUs. So you would need multiple pages. This brings more questions on how this would work for vCPU online/offline or even hotplug.
> 
> The code required to track them is going to be more complex, either in Xen or the guest.

Ok, so the guest will have to allocate the runstate area and give this area to Xen (using a virtual or guest physical address).

> 
>>> 
>>> 
>>>> - the guest provides a free guest physical space to the hypercall
>>> 
>>> This part is the most tricky part. How does the guest know what is free in its physical address space?
>>> 
>>> I am not aware of any way to do this in Linux. So the best you could do would be to allocate a page from the RAM and tell Xen to replace it with the runstate mapping.
>>> 
>>> However, this also means you are possibly going to shatter a superpage in the P2M. This may affect the performance in the long run.
>> Very true, Linux does not have a way to do that.
>> What about going the other way around: Xen could provide the physical address to the guest.
> 
> Right now, the hypervisor doesn't have an easy way to know the layout. Only the toolstack has. So you would need a way to tell Xen where the region has been reserved.
> 
> This region would have to be allocated below 4GB to cater for all the types of guest. A guest may not use it at all (time accounting is not mandatory).
> 
> Even if this is a few pages, this is not very ideal. It would be best if you let the guest allocate some RAM and then pass the address to Xen.

Ok

> 
>>> 
>>>> - Xen maps read-only its own area to the guest at the provided address
>>>> - Xen shall not modify any data in the runstate area of other cores/guests (should already be the case)
>>>> - We keep the current hypercall for backward compatibility: map the area during the hypercall, keep the area mapped at all times, and keep doing the copy during context switches
>>>> This would highly reduce the overhead by removing the mapping/unmapping.
>>> 
>>> I don't think the overhead is going to be significant with domain_map_page()/domain_unmap_page().
>>> 
>>> On Arm64, the memory is always mapped, so map/unmap is a NOP. On Arm32, we have a fast map/unmap implementation.
>>> 
>>> On x86, without SH, most of the memory is also always mapped, so this operation is mostly a NOP. For the SH case, the map/unmap will be used in any access to the guest memory (such as hypercall accesses) but it is quite optimized.
>>> 
>>> Note that the current overhead is much more important today, as you need to walk the guest PT and P2M (we are talking about multiple map/unmaps). So moving to one map/unmap is already going to be a major improvement.
>> Agree
>>> 
>>>> Regarding the secret-free work, I do not really think this is something problematic here, as we already have a copy of this internally anyway.
>>> 
>>> The secret-free work is still under review, so what is done in Xen today shouldn't dictate the future.
>>> 
>>> The question to answer is whether we believe leaking the content may be a problem. If the answer is yes, then most likely we will want the internal representation to be mapped on demand or just mapped in the Xen PT associated with that domain.
>>> 
>>> My gut feeling is that the runstate content is not critical. But I haven't fully thought it through yet.
>> The runstate information is stored inside Xen and then copied to the guest memory during context switch. So even if the guest area is not mapped, this information is still available inside the Xen internal copy.
> Again, Xen is not secret free today. So the fact Xen has an internal copy doesn't mean it is fine to leak the content. For instance, the guests' memory regions are always mapped; does it mean the content is not sensitive? Definitely not. Hence the effort behind Secret Hiding.
> 
> As I wrote in my previous e-mail, *if* we consider the leaking a problem, then we would want the *internal* representation to be mapped on demand or just mapped in the Xen PT associated with that domain.
> 
> But... the runstate area doesn't use up a full page. Today the page may also contain secrets from a domain. If you always map it, then there is a risk of leaking that content. This would have to be taken into consideration if we follow Roger's approach.

So the end point is: the area is allocated by the guest and we need to do a copy from our internal version to the guest version each time we restore a vCPU context.

Cheers
Bertrand

> Cheers,
> 
> -- 
> Julien Grall
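The per-page packing arithmetic in the thread above (a 40-byte runstate, cache-line-aligned slots, 4KB pages) can be checked with a short sketch. The constants are the figures quoted by the participants; a 64-byte cache line is assumed, since that is what makes "64 vCPUs per 4KB page" come out.

```python
PAGE_SIZE = 4096
CACHE_LINE = 64      # assumed cache-line size (consistent with the thread)
RUNSTATE_SIZE = 40   # runstate size quoted in the thread

def align_up(n, a):
    """Round n up to the next multiple of a (a must be a power of two)."""
    return (n + a - 1) & ~(a - 1)

# One whole page per vCPU wastes almost the entire page.
waste_per_vcpu = PAGE_SIZE - RUNSTATE_SIZE            # 4056 bytes

# Packing cache-line-aligned runstate slots into a shared page instead.
slot = align_up(RUNSTATE_SIZE, CACHE_LINE)            # 64 bytes per slot
vcpus_per_page = PAGE_SIZE // slot                    # 64 vCPUs per 4KB page

print(waste_per_vcpu, slot, vcpus_per_page)           # 4056 64 64
```

This matches both numbers in the discussion: 4056 wasted bytes per vCPU for the page-per-vCPU scheme, and 64 vCPUs covered by a single shared 4KB region.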


From xen-devel-bounces@lists.xenproject.org Fri May 15 12:54:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 12:54:09 +0000
Date: Fri, 15 May 2020 14:53:41 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: Error during update_runstate_area with KPTI activated
Message-ID: <20200515125341.GT54375@Air-de-Roger>
References: <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <93D7EBEF-E3E0-4DBB-A5BC-7D326B7AE0DB@arm.com>
 <20200515100742.GR54375@Air-de-Roger>
 <B0FB86DC-1D4F-438E-B78B-0D9845D13E8A@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <B0FB86DC-1D4F-438E-B78B-0D9845D13E8A@arm.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano
 Stabellini <stefano.stabellini@xilinx.com>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 12:34:08PM +0000, Bertrand Marquis wrote:
> 
> Hi Roger
> 
> > On 15 May 2020, at 11:07, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > Can you please fix your email client to properly indent replies? It's
> > impossible to distinguish your added text when you reply from the
> > original email, as it's not indented in any way.
> 
> Sorry about that; it seems my email client was detecting the mail as rich text and replied keeping that format.
> Please tell me if this is not fixed in this email.

Yes, this looks much better, thanks!

> > 
> > On Fri, May 15, 2020 at 09:21:34AM +0000, Bertrand Marquis wrote:
> >> On 15 May 2020, at 10:10, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >> 
> >> On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
> >> 
> >> Hi,
> >> 
> >> On 15/05/2020 09:38, Roger Pau Monné wrote:
> > ... it's hard for the guest to figure out which non-populated areas
> > are safe for mapping arbitrary things. Ie: you might attempt to map
> > the runstate area on top of some MMIO area the guest is not aware of
> > for example if it has passthrough enabled.
> 
> With your answer and Julien's, it is now clear that the only solution is to have the area provided by the guest.
> 
> > 
> >> - Xen maps read-only its own area to the guest at the provided address
> >> - Xen shall not modify any data in the runstate area of other cores/guests (should already be the case)
> > 
> > I'm not sure those two restrictions are relevant, it's not relevant to
> > Xen whether the guest decided to overwrite the runstate area. Xen will
> > just write to it when doing a context switch in order to update it,
> > but it should never read from it.
> > 
> > Or are you meaning to map vcpu->runstate directly into the guest
> > physmap?
> > 
> > I think that's a bad idea as we would then have to force each vCPU
> > runstate to take up to a whole page, wasting lots of memory.
> 
> I was thinking more of putting the runstates of all vCPUs in the same page (or in several if one is not enough).
> 
> My main point was to have Xen modify this area directly instead of doing copies, as Xen only writes to it and never reads from it, and the guest is not supposed to write to it (but if it does, that is more or less an error on its side).

I'm not saying it's not possible, but IMO having Xen allocate such
memory will be much more complicated than just using a guest provided
memory area and doing a copy.

Roger.
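The design the thread converges on — the guest registers a buffer, and Xen copies its internal runstate into it whenever it restores the vCPU context — can be sketched as a toy model. All names here are illustrative, not Xen's actual internals; the point is only the one-way copy direction (the hypervisor writes, never reads, the guest copy).

```python
class VCpu:
    """Toy model of a vCPU with a guest-registered runstate area."""

    def __init__(self):
        # Authoritative copy, kept inside the hypervisor.
        self.internal_runstate = {"state": 0, "time": [0, 0, 0, 0]}
        # Guest-provided buffer, registered via a hypercall.
        self.guest_area = None

    def register_runstate_area(self, buf):
        # Stands in for the guest passing an address to Xen.
        self.guest_area = buf

    def context_restore(self):
        # On every vCPU context restore, copy internal -> guest.
        # The hypervisor never reads the guest copy back.
        if self.guest_area is not None:
            self.guest_area.update(self.internal_runstate)

vcpu = VCpu()
area = {}                       # buffer "allocated" by the guest
vcpu.register_runstate_area(area)
vcpu.internal_runstate["state"] = 1
vcpu.context_restore()
print(area["state"])            # 1: guest sees the updated runstate
```

Because the guest allocates the buffer, none of the problems raised earlier (finding free guest-physical space, shattering P2M superpages, reserving pages below 4GB) arise.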


From xen-devel-bounces@lists.xenproject.org Fri May 15 13:15:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 13:15:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150187-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150187: all pass - PUSHED
X-Osstest-Versions-This: ovmf=bcf181a33b2ea46c36c3be701a5b2e232deaece7
X-Osstest-Versions-That: ovmf=f2cdb268ef04eeec51948b5d81eeca5cab5ed9af
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 13:14:48 +0000

flight 150187 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150187/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 bcf181a33b2ea46c36c3be701a5b2e232deaece7
baseline version:
 ovmf                 f2cdb268ef04eeec51948b5d81eeca5cab5ed9af

Last test of basis   150178  2020-05-14 12:39:29 Z    1 days
Testing same since   150187  2020-05-15 01:10:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roman Bolshakov <r.bolshakov@yadro.com>
  Shenglei Zhang <shenglei.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f2cdb268ef..bcf181a33b  bcf181a33b2ea46c36c3be701a5b2e232deaece7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 15 13:19:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 13:19:24 +0000
Subject: Re: [PATCH] x86/gen-cpuid: Distinguish default vs max in feature
 annotations
To: Jan Beulich <jbeulich@suse.com>
References: <20200508152729.14295-1-andrew.cooper3@citrix.com>
 <e6380a05-d67a-b3a8-a624-ba5c161a8c53@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <142f8f43-98fb-506c-0812-e431f2666e5e@citrix.com>
Date: Fri, 15 May 2020 14:19:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <e6380a05-d67a-b3a8-a624-ba5c161a8c53@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>

On 11/05/2020 08:43, Jan Beulich wrote:
>
> On 08.05.2020 17:27, Andrew Cooper wrote:
>> @@ -133,9 +134,13 @@ def crunch_numbers(state):
>>      state.hvm_shadow_def = state.pv_def | state.raw['S']
>>      state.hvm_hap_def = state.hvm_shadow_def | state.raw['H']
>>  
>> +    # TODO: Ignore def/max split until the toolstack migration logic is fixed
>>      state.pv_max = state.pv_def
>>      state.hvm_shadow_max = state.hvm_shadow_def
>>      state.hvm_hap_max = state.hvm_hap_def
>> +    # state.pv_max = state.raw['A'] | state.raw['a']
>> +    # state.hvm_shadow_max = state.pv_max | state.raw['S'] | state.raw['s']
>> +    # state.hvm_hap_max = state.hvm_shadow_max | state.raw['H'] | state.raw['h']
> While in comment form it doesn't matter yet, for actually enabling
> this it would seem to me to be more expressive as
>
>     state.pv_max = state.pv_def | state.raw['a']
>     state.hvm_shadow_max = state.hvm_shadow_def | state.pv_max | state.raw['s']
>     state.hvm_hap_max = state.hvm_hap_def | state.hvm_shadow_max | state.raw['h']
>
> Thoughts?

The format/layout of the metadata prohibits that from making any kind of
difference.

Furthermore, expressing it this way doesn't require anyone to look at
the *_def derivation to figure out what is going on.

I'm going to commit it in this form, but will tidy it up to be properly
tabulated in a future patch, to make it easier to read.

~Andrew
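The equivalence Andrew is pointing at can be shown with a toy model of the feature sets. Here `raw['A']` and `raw['a']` stand for features annotated default-visible and max-only for PV (the sets and feature names are illustrative, not the real gen-cpuid data): whenever `pv_def` is a superset of `raw['A']`, the committed derivation and Jan's suggested one yield the same `pv_max`.

```python
# Toy feature sets standing in for gen-cpuid's annotation data.
raw = {'A': {"feat1", "feat2"},   # available by default for PV
       'a': {"feat3"}}            # available at max only

pv_def = raw['A']                 # default set derives from the 'A' features

pv_max_committed = raw['A'] | raw['a']   # form in the patch (commented out)
pv_max_suggested = pv_def | raw['a']     # Jan's proposed form

# Identical whenever pv_def is a superset of raw['A'],
# which the annotation format guarantees.
print(pv_max_committed == pv_max_suggested)   # True
```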


From xen-devel-bounces@lists.xenproject.org Fri May 15 13:43:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 13:43:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150190-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150190: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64:xen-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:build-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=62c27cfc4f693e7ffd1e8f31942bc9c51dfaf815
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 13:42:24 +0000

flight 150190 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150190/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64                   6 xen-build                fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              62c27cfc4f693e7ffd1e8f31942bc9c51dfaf815
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  119 days
Failing since        146211  2020-01-18 04:18:52 Z  118 days  109 attempts
Testing same since   150190  2020-05-15 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18247 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 15 13:58:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 13:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZarK-0000R3-Hz; Fri, 15 May 2020 13:58:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZarJ-0000Qs-Iv
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 13:58:41 +0000
X-Inumbo-ID: 2b451bd8-96b4-11ea-b9cf-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b451bd8-96b4-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 13:58:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589551115;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=9gp9wrRn7KriUqfiJcrm7JJ3iaK6QWAFr6UqKaTmPqA=;
 b=Nyu1ydGQhnnkTs3Z37gQQESMBInF4iQ6+07kuJZPkThc/rH5H0T4sXXQ
 8PlD1T8ZUIFIWrpAxRZOoCvks2o/nY97HRR6InIg5b9Pu6+UfZNpx2vdM
 O4DBSPgFrqyMO38xzJkgUajjsUj2NbeVvsMOltHEmdcbMYMBz0t9ZQzpZ g=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: wsU8T5RVIy4TwDZfNfGWqJH7KIhF++S5PgXkWVfYhWdlTE+WpOdLbtXF7aDDEHOnWURRrqNNfS
 wWXc5u/7PhSEO30wecvWa46Z13aWzgWuUbNdPdljwPlqP9vyUw6tBZVvFE9c+rchMnMuZf554P
 rlSy7nE6uGSA2dUigapgt4koBJKFMO5PwS3UKL6HoOuCJxlgEviR0BWBzq8NHDy7PCOLLsncG0
 TXvdt6lfEurwQGrzLI9K18c9sQJINEHnmDysrgx6YYbGRL1M+ofC/fB6zItluw/2rIOeZYixNl
 6J8=
X-SBRS: 2.7
X-MesageID: 17988403
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17988403"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v3 1/2] x86/idle: rework C6 EOI workaround
Date: Fri, 15 May 2020 15:58:01 +0200
Message-ID: <20200515135802.63853-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200515135802.63853-1-roger.pau@citrix.com>
References: <20200515135802.63853-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Change the C6 EOI workaround (errata AAJ72) to use x86_match_cpu. Also
call the workaround from mwait_idle; previously it was only used by the
ACPI idle driver. Finally, make sure the routine is called for all
states greater than or equal to ACPI_STATE_C3. Note that the ACPI
driver doesn't currently handle such states, but the errata condition
shouldn't be limited by that.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - New in this version.
---
 xen/arch/x86/acpi/cpu_idle.c  | 43 +++++++++++++++++++++--------------
 xen/arch/x86/cpu/mwait-idle.c |  3 +++
 xen/include/asm-x86/cpuidle.h |  1 +
 3 files changed, 30 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index b83446e77d..0efdaff21b 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -548,26 +548,35 @@ void trace_exit_reason(u32 *irq_traced)
     }
 }
 
-/*
- * "AAJ72. EOI Transaction May Not be Sent if Software Enters Core C6 During 
- * an Interrupt Service Routine"
- * 
- * There was an errata with some Core i7 processors that an EOI transaction 
- * may not be sent if software enters core C6 during an interrupt service 
- * routine. So we don't enter deep Cx state if there is an EOI pending.
- */
-static bool errata_c6_eoi_workaround(void)
+bool errata_c6_eoi_workaround(void)
 {
-    static int8_t fix_needed = -1;
+    static int8_t __read_mostly fix_needed = -1;
 
     if ( unlikely(fix_needed == -1) )
     {
-        int model = boot_cpu_data.x86_model;
-        fix_needed = (cpu_has_apic && !directed_eoi_enabled &&
-                      (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
-                      (boot_cpu_data.x86 == 6) &&
-                      ((model == 0x1a) || (model == 0x1e) || (model == 0x1f) ||
-                       (model == 0x25) || (model == 0x2c) || (model == 0x2f)));
+#define INTEL_FAM6_MODEL(m) { X86_VENDOR_INTEL, 6, m, X86_FEATURE_ALWAYS }
+        /*
+         * Errata AAJ72: EOI Transaction May Not be Sent if Software Enters
+         * Core C6 During an Interrupt Service Routine.
+         *
+         * There was an errata with some Core i7 processors that an EOI
+         * transaction may not be sent if software enters core C6 during an
+         * interrupt service routine. So we don't enter deep Cx state if
+         * there is an EOI pending.
+         */
+            static const struct x86_cpu_id eoi_errata[] = {
+            INTEL_FAM6_MODEL(0x1a),
+            INTEL_FAM6_MODEL(0x1e),
+            INTEL_FAM6_MODEL(0x1f),
+            INTEL_FAM6_MODEL(0x25),
+            INTEL_FAM6_MODEL(0x2c),
+            INTEL_FAM6_MODEL(0x2f),
+            { }
+        };
+#undef INTEL_FAM6_MODEL
+
+        fix_needed = cpu_has_apic && !directed_eoi_enabled &&
+                     x86_match_cpu(eoi_errata);
     }
 
     return (fix_needed && cpu_has_pending_apic_eoi());
@@ -676,7 +685,7 @@ static void acpi_processor_idle(void)
         return;
     }
 
-    if ( (cx->type == ACPI_STATE_C3) && errata_c6_eoi_workaround() )
+    if ( (cx->type >= ACPI_STATE_C3) && errata_c6_eoi_workaround() )
         cx = power->safe_state;
 
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index b81937966e..88a3e160c5 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -770,6 +770,9 @@ static void mwait_idle(void)
 		return;
 	}
 
+	if ((cx->type >= 3) && errata_c6_eoi_workaround())
+		cx = power->safe_state;
+
 	eax = cx->address;
 	cstate = ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
 
diff --git a/xen/include/asm-x86/cpuidle.h b/xen/include/asm-x86/cpuidle.h
index 5d7dffd228..13879f58a1 100644
--- a/xen/include/asm-x86/cpuidle.h
+++ b/xen/include/asm-x86/cpuidle.h
@@ -26,4 +26,5 @@ void update_idle_stats(struct acpi_processor_power *,
 void update_last_cx_stat(struct acpi_processor_power *,
                          struct acpi_processor_cx *, uint64_t);
 
+bool errata_c6_eoi_workaround(void);
 #endif /* __X86_ASM_CPUIDLE_H__ */
-- 
2.26.2
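For readers unfamiliar with x86_match_cpu, the table-driven matching the patch switches to can be sketched as follows. This is a minimal stand-alone illustration under stated assumptions, not Xen's implementation: the struct layouts, the `X86_VENDOR_INTEL` value, the global `boot_cpu_data` initializer and the sentinel test are all simplified stand-ins.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for Xen's boot_cpu_data and struct x86_cpu_id;
 * the field layout and vendor value here are hypothetical. */
struct cpuinfo    { unsigned int vendor, family, model; };
struct x86_cpu_id { unsigned int vendor, family, model; };

#define X86_VENDOR_INTEL 0
#define INTEL_FAM6_MODEL(m) { X86_VENDOR_INTEL, 6, (m) }

/* Pretend we booted on a model 0x1e (Nehalem-class) part. */
struct cpuinfo boot_cpu_data = { X86_VENDOR_INTEL, 6, 0x1e };

/* The AAJ72 model list from the patch, terminated by an empty sentinel. */
static const struct x86_cpu_id eoi_errata[] = {
    INTEL_FAM6_MODEL(0x1a), INTEL_FAM6_MODEL(0x1e), INTEL_FAM6_MODEL(0x1f),
    INTEL_FAM6_MODEL(0x25), INTEL_FAM6_MODEL(0x2c), INTEL_FAM6_MODEL(0x2f),
    { 0, 0, 0 } /* sentinel */
};

/* Walk the table until the empty sentinel; return true on the first
 * vendor/family/model match against the booted CPU. */
bool x86_match_cpu(const struct x86_cpu_id table[])
{
    for (size_t i = 0; table[i].family || table[i].model; i++)
        if (table[i].vendor == boot_cpu_data.vendor &&
            table[i].family == boot_cpu_data.family &&
            table[i].model == boot_cpu_data.model)
            return true;
    return false;
}
```

The appeal of the table form over the open-coded `(model == 0x1a) || ...` chain it replaces is that adding a model is a one-line table edit, and the same helper serves every errata list.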



From xen-devel-bounces@lists.xenproject.org Fri May 15 13:58:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 13:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZarF-0000Qe-A4; Fri, 15 May 2020 13:58:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZarE-0000QZ-PO
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 13:58:36 +0000
X-Inumbo-ID: 2b0dafea-96b4-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b0dafea-96b4-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 13:58:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589551115;
 h=from:to:cc:subject:date:message-id:mime-version:
 content-transfer-encoding;
 bh=eUopZb5WLRi6cSi1Y9lp6AxsrSmMmFJQjkxh0NZ5ixU=;
 b=KmfptNgdLYpCE2CpVPM+Xq+nhhBF70/5298GP2YO4B6GnOIHXwen3glp
 3YQ8/AZDXrYsDi9gN3nEhp/PzPYAxYZ1mPc/k6adCEYIX7FwAeep4Fkht
 Z4EOQHZaZtKTPP4ZzuoFBTS++hKikJbLc95a6jRIUsONR70THc2/do46a 0=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 0y69BHivvJ4Dhuyga0SkjvoUvWm53LyqI+Pmwb0l6a6GQgqL6XcQW2GFjFq5atPz2IbSphc57z
 401edO4EVbo/x3+0PsU9fE/OM5myZNRgML38zuNaPqxphh+8m3s0qPX6tOkvM8I62Rqj4O1FW1
 bzBpRvIUFou/dJ1vYiWjdgMTRfYyGDtK3B6W215j96fW+sMyDItapQ875Y8uGxeTYlnklUjPCV
 L0eZ0zVCJa3iGqBO6Y6SE+EHzq939+5t5HKLNkPFGny1dZM52th9B+z5XxaSyAZUYtycJaeZaD
 aiM=
X-SBRS: 2.7
X-MesageID: 17621872
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17621872"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v3 0/2] x86/idle: fix for Intel ISR errata
Date: Fri, 15 May 2020 15:58:00 +0200
Message-ID: <20200515135802.63853-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Apply a workaround to cope with the BDX99, CLX30, SKX100, CFW125,
BDF104, BDH85, BDM135 and KWB131 errata.

Roger Pau Monne (2):
  x86/idle: rework C6 EOI workaround
  x86/idle: prevent entering C6 with in service interrupts on Intel

 docs/misc/xen-command-line.pandoc |  9 ++++
 xen/arch/x86/acpi/cpu_idle.c      | 74 ++++++++++++++++++++++++-------
 xen/arch/x86/cpu/mwait-idle.c     |  3 ++
 xen/include/asm-x86/cpuidle.h     |  1 +
 4 files changed, 70 insertions(+), 17 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 15 13:58:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 13:58:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZarP-0000RQ-PY; Fri, 15 May 2020 13:58:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZarO-0000RH-Ji
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 13:58:46 +0000
X-Inumbo-ID: 2ed6e5d8-96b4-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ed6e5d8-96b4-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 13:58:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589551123;
 h=from:to:cc:subject:date:message-id:in-reply-to:
 references:mime-version:content-transfer-encoding;
 bh=hGisH2+dxVByj66jlzhR1rIynA6hX56thdi9LQ2Txn8=;
 b=Lrt42LXdfj0SVEbkGqqXF/iVAdQ/lzk3L2NbHwMb/JnDLQijkYEt7fxJ
 Gf7nD7lHryCIbG9N4sDBKXlK0u7Zy+BrFL5Gv/MBLjfA6MewemeAOQl+V
 ySUBsEWjUiv8xkpdekg3LQBGEIepaDwqlKyroGfiB5dayPBuNeJCWUv9D o=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: IgMUU1DY5bJBi9zQvCIIKwh/7tktGi0K/5rxWMAYlvRkp5YlNyK37r+u/129x4e8ZFUTvS1pqF
 U1o4jbYJhBRPm7aLS0JG8dRr8Bq4pIR+1DeawfZDZGYqTce93pvYcIKWVJcEZU9UF1EGcFiIKM
 yFiISZWzJ5QdFwqyzxktWzH+uVtR0SWhWLV+5y7cc89gG2GGQ5w5uARvlDw15HQhLp4cQhYAtN
 TbFqNYt+IyzXg3ptbLAnsLVKkH73bgMdy8gO/G8j8vA9CgvsR9xITHyZmZynnQNZTjnHfVJLdU
 b3I=
X-SBRS: 2.7
X-MesageID: 17900393
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17900393"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
Date: Fri, 15 May 2020 15:58:02 +0200
Message-ID: <20200515135802.63853-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200515135802.63853-1-roger.pau@citrix.com>
References: <20200515135802.63853-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
Dispatched Before an Interrupt of The Same Priority Completes".

The workaround covers all server and client models (big cores) from
Broadwell to Cascade Lake. It is grouped together with the existing fix
for errata AAJ72, and "eoi" is dropped from the function name.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Use x86_match_cpu and apply the workaround to all models from
   Broadwell to Cascade Lake.
 - Rename command line option to disable-c6-errata.

Changes since v1:
 - Unify workaround with errata_c6_eoi_workaround.
 - Properly check state in both acpi and mwait drivers.
---
 docs/misc/xen-command-line.pandoc |  9 +++++++
 xen/arch/x86/acpi/cpu_idle.c      | 39 +++++++++++++++++++++++++++----
 xen/arch/x86/cpu/mwait-idle.c     |  2 +-
 xen/include/asm-x86/cpuidle.h     |  2 +-
 4 files changed, 46 insertions(+), 6 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index ee12b0f53f..8dd944b357 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -652,6 +652,15 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
 additionally a trace buffer of the specified size is allocated per cpu.
 The debug trace feature is only enabled in debugging builds of Xen.
 
+### disable-c6-errata
+> `= <boolean>`
+
+> Default: `true for affected Intel CPUs`
+
+Workaround for Intel errata AAJ72 and BDX99, CLX30, SKX100, CFW125, BDF104,
+BDH85, BDM135, KWB131. Prevent entering C6 idle states when certain conditions
+are met, in order to avoid triggering the listed errata.
+
 ### dma_bits
 > `= <integer>`
 
diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 0efdaff21b..2fa1ccc031 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -548,9 +548,10 @@ void trace_exit_reason(u32 *irq_traced)
     }
 }
 
-bool errata_c6_eoi_workaround(void)
+bool errata_c6_workaround(void)
 {
     static int8_t __read_mostly fix_needed = -1;
+    boolean_param("disable-c6-errata", fix_needed);
 
     if ( unlikely(fix_needed == -1) )
     {
@@ -573,10 +574,40 @@ bool errata_c6_eoi_workaround(void)
             INTEL_FAM6_MODEL(0x2f),
             { }
         };
+        /*
+         * Errata BDX99, CLX30, SKX100, CFW125, BDF104, BDH85, BDM135, KWB131:
+         * A Pending Fixed Interrupt May Be Dispatched Before an Interrupt of
+         * The Same Priority Completes.
+         *
+         * When resuming from the C6 sleep state with fixed interrupts of the
+         * same priority queued (in the corresponding bits of the APIC IRR and
+         * ISR registers), the processor may dispatch the second interrupt
+         * (from the IRR bit) before the first has completed and written the
+         * EOI register, causing the first interrupt to never complete.
+         */
+        static const struct x86_cpu_id isr_errata[] = {
+            /* Broadwell */
+            INTEL_FAM6_MODEL(0x47),
+            INTEL_FAM6_MODEL(0x3d),
+            INTEL_FAM6_MODEL(0x4f),
+            INTEL_FAM6_MODEL(0x56),
+            /* Skylake (client) */
+            INTEL_FAM6_MODEL(0x5e),
+            INTEL_FAM6_MODEL(0x4e),
+            /* {Sky/Cascade}lake (server) */
+            INTEL_FAM6_MODEL(0x55),
+            /* {Kaby/Coffee/Whiskey/Amber} Lake */
+            INTEL_FAM6_MODEL(0x9e),
+            INTEL_FAM6_MODEL(0x8e),
+            /* Cannon Lake */
+            INTEL_FAM6_MODEL(0x66),
+            { }
+        };
 #undef INTEL_FAM6_MODEL
 
-        fix_needed = cpu_has_apic && !directed_eoi_enabled &&
-                     x86_match_cpu(eoi_errata);
+        fix_needed = cpu_has_apic &&
+                     ((!directed_eoi_enabled && x86_match_cpu(eoi_errata)) ||
+                      x86_match_cpu(isr_errata));
     }
 
     return (fix_needed && cpu_has_pending_apic_eoi());
@@ -685,7 +716,7 @@ static void acpi_processor_idle(void)
         return;
     }
 
-    if ( (cx->type >= ACPI_STATE_C3) && errata_c6_eoi_workaround() )
+    if ( (cx->type >= ACPI_STATE_C3) && errata_c6_workaround() )
         cx = power->safe_state;
 
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 88a3e160c5..52eab81bf8 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -770,7 +770,7 @@ static void mwait_idle(void)
 		return;
 	}
 
-	if ((cx->type >= 3) && errata_c6_eoi_workaround())
+	if ((cx->type >= 3) && errata_c6_workaround())
 		cx = power->safe_state;
 
 	eax = cx->address;
diff --git a/xen/include/asm-x86/cpuidle.h b/xen/include/asm-x86/cpuidle.h
index 13879f58a1..dc7298a538 100644
--- a/xen/include/asm-x86/cpuidle.h
+++ b/xen/include/asm-x86/cpuidle.h
@@ -26,5 +26,5 @@ void update_idle_stats(struct acpi_processor_power *,
 void update_last_cx_stat(struct acpi_processor_power *,
                          struct acpi_processor_cx *, uint64_t);
 
-bool errata_c6_eoi_workaround(void);
+bool errata_c6_workaround(void);
 #endif /* __X86_ASM_CPUIDLE_H__ */
-- 
2.26.2
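The gating that both idle drivers end up performing after this series can be sketched as below. This is a simplified model, not Xen's code: `fix_needed` and `pending_apic_eoi` are hypothetical flags standing in for the cached x86_match_cpu() result and for cpu_has_pending_apic_eoi(), and `pick_cstate` is an invented helper mirroring the inline `cx = power->safe_state` demotion in acpi_processor_idle() and mwait_idle().

```c
#include <stdbool.h>

#define ACPI_STATE_C3 3

struct acpi_processor_cx { int type; };

/* Hypothetical flags standing in for the real checks. */
bool fix_needed;        /* CPU matched one of the errata model tables */
bool pending_apic_eoi;  /* an in-service interrupt still awaits its EOI */

/* Fire only when both conditions hold: an affected CPU *and* an
 * outstanding EOI at the moment we are about to idle. */
bool errata_c6_workaround(void)
{
    return fix_needed && pending_apic_eoi;
}

/* Demote a C3-or-deeper request to the safe shallower state when the
 * workaround fires; otherwise enter the requested state unchanged. */
const struct acpi_processor_cx *
pick_cstate(const struct acpi_processor_cx *wanted,
            const struct acpi_processor_cx *safe)
{
    if (wanted->type >= ACPI_STATE_C3 && errata_c6_workaround())
        return safe;
    return wanted;
}
```

The point of checking `pending_apic_eoi` at each idle entry, rather than only the model match, is that the demotion costs power, so it is applied only in the window where the errata can actually trigger.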



From xen-devel-bounces@lists.xenproject.org Fri May 15 13:59:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 13:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZarY-0000TJ-1O; Fri, 15 May 2020 13:58:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=j7Zx=65=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jZarW-0000T6-Sv
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 13:58:54 +0000
X-Inumbo-ID: 36980248-96b4-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36980248-96b4-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 13:58:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PWQWXrIuGs7RAXhKWVwKGHVI3lRjEpMtYlHNc3tRSjM=; b=sq5WMAsJn1nO680KN5L6/h5C0d
 hEzUTFsGq65nwZENqishP+cB0P7xiqWCtad+DAyt0HrSjtxDQVm/U8bXy4N7CoPf2BgU56WVPBr1L
 jGw4esiQUnsb17gQt01iI7NQv7/fzY/xUSuzdJdH2km0+BtdI6dg+CbEiBo25KhzCH0g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jZarT-0000U5-VT; Fri, 15 May 2020 13:58:51 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=freeip.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jZarT-0008HY-KL; Fri, 15 May 2020 13:58:51 +0000
Message-ID: <09b58be54cc40812631653f149e017933a3cfdf8.camel@xen.org>
Subject: Re: Error during update_runstate_area with KPTI activated
From: Hongyan Xia <hx242@xen.org>
To: Julien Grall <julien@xen.org>, Roger Pau =?ISO-8859-1?Q?Monn=E9?=
 <roger.pau@citrix.com>
Date: Fri, 15 May 2020 14:58:48 +0100
In-Reply-To: <108a179b-d8ea-01b9-6c6b-9f5cc57f6dc0@xen.org>
References: <2c4437e9-d513-3e3c-7fec-13ffadc17df2@xen.org>
 <2E95C767-FFE1-4A48-B56D-F858A8CEE5D7@arm.com>
 <ab4f3c2a-95aa-1256-f6f4-0c3057f5600c@xen.org>
 <b6511a29-35a4-a1d0-dd29-7de4103ec98e@citrix.com>
 <CAJ=z9a1H2C6sWiScYw9XXLRcezBfUxYz2semj33D5GpB5=EE_w@mail.gmail.com>
 <478C4829-CCAF-495B-860E-6BA3D86AA47D@arm.com>
 <20200515083838.GN54375@Air-de-Roger>
 <d2033adc-3f98-2d14-ae6d-f8dcd8b90002@xen.org>
 <20200515091018.GO54375@Air-de-Roger>
 <3813cfa2-c881-3fa5-bdf8-a2e874584a9f@xen.org>
 <20200515095751.GQ54375@Air-de-Roger>
 <108a179b-d8ea-01b9-6c6b-9f5cc57f6dc0@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <stefano.stabellini@xilinx.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Julien Grall <julien.grall.oss@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 2020-05-15 at 11:08 +0100, Julien Grall wrote:
> Hi,
> 
> On 15/05/2020 10:57, Roger Pau Monné wrote:
> > On Fri, May 15, 2020 at 10:23:16AM +0100, Julien Grall wrote:
> > > 
> > > On 15/05/2020 10:10, Roger Pau Monné wrote:
> > > > On Fri, May 15, 2020 at 09:53:54AM +0100, Julien Grall wrote:
...

> > > > 
> > > > Could we map/unmap the runtime area on domain switch at a
> > > > per-cpu based linear space area? There's no reason to have all
> > > > the runtime areas mapped all the time, you just care about the
> > > > one from the running vcpu.
> > > 
> > > AFAICT, this is only used during context switching. This is a bit
> > > surprising because I would expect it to be updated when the vCPU is
> > > running.
> > > 
> > > So maybe we could just use map_domain_page() and take care of the
> > > cross-page boundary manually.
> > > 
> > > > 
> > > > Maybe the overhead of that mapping and unmapping would be too
> > > > high? But seeing that we are aiming at a secret-free Xen we would
> > > > have to eventually go that route anyway.
> > > 
> > > The overhead is likely to be higher with the existing code, as you
> > > have to walk the guest page-tables and the p2m every time in order
> > > to translate the guest virtual address to a host physical address.
> > 
> > Maybe I'm getting confused, but you actually want to avoid the guest
> > page table walk, as the guest might be running with user-space page
> > tables that don't have the linear address of the runtime area mapped,
> > and hence you would like to do the walk only once (at hypercall
> > registration time) and keep a reference to the page(s)?
> 
> That's right.
> 
> > 
> > I assumed the whole point of this was to avoid doing the page table
> > walk when you need to update the runstate info area.
> 
> Sorry I wasn't clear. I was trying to answer your question about the
> overhead.
> 
> The overhead with SH and the existing runstate implementation is going
> to be quite high because you would need to map/unmap each table during
> your walk. By removing the walk, you would now have only one map/unmap
> for the update, which I think is acceptable.
> 
> So the change discussed in this thread is also going to be beneficial
> for SH even if we keep a map/unmap in the process.

For every hypercall, trap, context switch... one or two maps and unmaps
are definitely fine, showing almost no impact on real-world performance.
The biggest impact I found in the direct map removal work is the
GVA->GFN->MFN walk for traps and hypercalls. An HVM + EPT walk can take
up to 20 maps and unmaps, which degrades hypercall and emulated MMIO
performance by up to 60%. It would be really nice if some paths could
just take a GFN or just register the MFN. So I would definitely welcome
a change to use the physical address.

Hongyan
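[Editorial sketch: Julien's suggestion above (use map_domain_page() and
handle the cross-page boundary manually) can be illustrated in plain C.
Everything here is hypothetical stand-in code: map_page()/unmap_page()
play the role of Xen's map_domain_page()/unmap_domain_page(), and the
flat backing array plays the role of real machine pages.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (PAGE_SIZE - 1)

/* Hypothetical stand-ins for map_domain_page()/unmap_domain_page():
 * here every "machine page" is just a slice of one flat buffer. */
static uint8_t backing[3 * PAGE_SIZE];

static void *map_page(unsigned long pfn)
{
    return backing + pfn * PAGE_SIZE;
}

static void unmap_page(void *va)
{
    (void)va; /* nothing to tear down in this simulation */
}

/*
 * Copy len bytes to a guest area starting at byte offset addr,
 * mapping one page at a time and splitting the copy at each page
 * boundary -- the "take care of the cross-page boundary manually"
 * part of the suggestion.
 */
static void copy_to_area(unsigned long addr, const void *src, size_t len)
{
    const uint8_t *s = src;

    while ( len )
    {
        unsigned long pfn = addr / PAGE_SIZE;
        size_t off = addr & PAGE_MASK;
        size_t chunk = PAGE_SIZE - off; /* room left in this page */

        if ( chunk > len )
            chunk = len;

        uint8_t *va = map_page(pfn);
        memcpy(va + off, s, chunk);
        unmap_page(va);

        addr += chunk;
        s += chunk;
        len -= chunk;
    }
}
```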



From xen-devel-bounces@lists.xenproject.org Fri May 15 14:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 14:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZb5e-0002RQ-AS; Fri, 15 May 2020 14:13:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CqcK=65=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jZb5c-0002RL-Dz
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 14:13:28 +0000
X-Inumbo-ID: 3ecda0c4-96b6-11ea-a570-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ecda0c4-96b6-11ea-a570-12813bfff9fa;
 Fri, 15 May 2020 14:13:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589552007;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=cC1STYRLTf6YUZdMJFzv2MxI8WzpEFWRewwQ81O2R2g=;
 b=CJ/RTGh5szvJwNw/9rC4Mx6/Csl5t4VxFmteIHJoV9R2nzwJPNzcOaHo
 h4RNXRBWnm36jTKJWkGrc0sSWGQ5tgOlDqOcph7AE+eCKes3O/NaieZfK
 SUK2Ce2zrTC3WL82YPrIEqijRW882bkoQQ0IEtC8vjmkm8qVhY37tkJ9k A=;
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 anthony.perard@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa1.hc3370-68.iphmx.com: domain of
 anthony.perard@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa1.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa1.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=anthony.perard@citrix.com;
 spf=Pass smtp.mailfrom=anthony.perard@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: qk1LzcWYbAtizaKmEL0G23TGS0brcBnTveqpZcsf3P6RTWTF2keiUFQXbg2RNKW5w+Q/2MnY13
 vD91ewXcMzSOSgDe4ZnovpZW6dSDX4FTf+K4zDPBbpHUAVgNRJCL8p2iudyfXcO0aivCsGT+Ov
 baiLzE7k2V8lFX3wVNtTYRmteZAyO9SJatA9xpWaYpvImv6mefl50XoWQX4ariCkU8lFEFOQsn
 OwYXtlVjYUcTWC1B9PSMdniV/eElOEyo5mCWdJh7fqjoy/jZSiWzIYfRs15Vt4NNYj2gdV52Br
 8VI=
X-SBRS: 2.7
X-MesageID: 17902781
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17902781"
Date: Fri, 15 May 2020 15:13:05 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Subject: Re: [XEN PATCH 1/2] xen/build: fixup path to merge_config.sh
Message-ID: <20200515141305.GJ2116@perard.uk.xensource.com>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
 <20200512175206.20314-2-stewart.hildebrand@dornerworks.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200512175206.20314-2-stewart.hildebrand@dornerworks.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 01:52:05PM -0400, Stewart Hildebrand wrote:
> This resolves the following observed error:
> 
> /bin/sh: /path/to/xen/xen/../xen/scripts/kconfig/merge_config.sh: No such file or directory
> 
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 15 14:15:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 14:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZb7A-0002XM-Ps; Fri, 15 May 2020 14:15:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7uuJ=65=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZb79-0002XF-KY
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 14:15:03 +0000
X-Inumbo-ID: 777abe66-96b6-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 777abe66-96b6-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 14:15:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 07DECAAD0;
 Fri, 15 May 2020 14:15:03 +0000 (UTC)
Subject: Ping: [PATCH v2 3/6] x86/mem-paging: use guest handle for
 XENMEM_paging_op_prep
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
Message-ID: <25fdba8a-9dbf-50b4-d7d5-098602d40fb3@suse.com>
Date: Fri, 15 May 2020 16:14:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23.04.2020 10:38, Jan Beulich wrote:
> While it should have been this way from the beginning, not doing so will
> become an actual problem with PVH Dom0. The interface change is binary
> compatible, but requires tools side producers to be re-built.
> 
> Drop the bogus/unnecessary page alignment restriction on the input
> buffer at the same time.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

May I ask for a libxc side ack or otherwise, please?

Thanks, Jan

> --- a/tools/libxc/xc_mem_paging.c
> +++ b/tools/libxc/xc_mem_paging.c
> @@ -26,15 +26,33 @@ static int xc_mem_paging_memop(xc_interf
>                                 unsigned int op, uint64_t gfn, void *buffer)
>  {
>      xen_mem_paging_op_t mpo;
> +    DECLARE_HYPERCALL_BOUNCE(buffer, XC_PAGE_SIZE,
> +                             XC_HYPERCALL_BUFFER_BOUNCE_IN);
> +    int rc;
>  
>      memset(&mpo, 0, sizeof(mpo));
>  
>      mpo.op      = op;
>      mpo.domain  = domain_id;
>      mpo.gfn     = gfn;
> -    mpo.buffer  = (unsigned long) buffer;
>  
> -    return do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
> +    if ( buffer )
> +    {
> +        if ( xc_hypercall_bounce_pre(xch, buffer) )
> +        {
> +            PERROR("Could not bounce memory for XENMEM_paging_op %u", op);
> +            return -1;
> +        }
> +
> +        set_xen_guest_handle(mpo.buffer, buffer);
> +    }
> +
> +    rc = do_memory_op(xch, XENMEM_paging_op, &mpo, sizeof(mpo));
> +
> +    if ( buffer )
> +        xc_hypercall_bounce_post(xch, buffer);
> +
> +    return rc;
>  }
>  
>  int xc_mem_paging_enable(xc_interface *xch, uint32_t domain_id,
> @@ -92,28 +110,13 @@ int xc_mem_paging_prep(xc_interface *xch
>  int xc_mem_paging_load(xc_interface *xch, uint32_t domain_id,
>                         uint64_t gfn, void *buffer)
>  {
> -    int rc, old_errno;
> -
>      errno = EINVAL;
>  
>      if ( !buffer )
>          return -1;
>  
> -    if ( ((unsigned long) buffer) & (XC_PAGE_SIZE - 1) )
> -        return -1;
> -
> -    if ( mlock(buffer, XC_PAGE_SIZE) )
> -        return -1;
> -
> -    rc = xc_mem_paging_memop(xch, domain_id,
> -                             XENMEM_paging_op_prep,
> -                             gfn, buffer);
> -
> -    old_errno = errno;
> -    munlock(buffer, XC_PAGE_SIZE);
> -    errno = old_errno;
> -
> -    return rc;
> +    return xc_mem_paging_memop(xch, domain_id, XENMEM_paging_op_prep,
> +                               gfn, buffer);
>  }
>  
>  
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1779,7 +1779,8 @@ void p2m_mem_paging_populate(struct doma
>   * mfn if populate was called for  gfn which was nominated but not evicted. In
>   * this case only the p2mt needs to be forwarded.
>   */
> -int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer)
> +int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l,
> +                        XEN_GUEST_HANDLE_64(const_uint8) buffer)
>  {
>      struct page_info *page = NULL;
>      p2m_type_t p2mt;
> @@ -1788,13 +1789,9 @@ int p2m_mem_paging_prep(struct domain *d
>      mfn_t mfn;
>      struct p2m_domain *p2m = p2m_get_hostp2m(d);
>      int ret, page_extant = 1;
> -    const void *user_ptr = (const void *) buffer;
>  
> -    if ( user_ptr )
> -        /* Sanity check the buffer and bail out early if trouble */
> -        if ( (buffer & (PAGE_SIZE - 1)) || 
> -             (!access_ok(user_ptr, PAGE_SIZE)) )
> -            return -EINVAL;
> +    if ( !guest_handle_okay(buffer, PAGE_SIZE) )
> +        return -EINVAL;
>  
>      gfn_lock(p2m, gfn, 0);
>  
> @@ -1812,7 +1809,7 @@ int p2m_mem_paging_prep(struct domain *d
>  
>          /* If the user did not provide a buffer, we disallow */
>          ret = -EINVAL;
> -        if ( unlikely(user_ptr == NULL) )
> +        if ( unlikely(guest_handle_is_null(buffer)) )
>              goto out;
>          /* Get a free page */
>          ret = -ENOMEM;
> @@ -1834,7 +1831,7 @@ int p2m_mem_paging_prep(struct domain *d
>  
>          ASSERT( mfn_valid(mfn) );
>          guest_map = map_domain_page(mfn);
> -        ret = copy_from_user(guest_map, user_ptr, PAGE_SIZE);
> +        ret = copy_from_guest(guest_map, buffer, PAGE_SIZE);
>          unmap_domain_page(guest_map);
>          if ( ret )
>          {
> --- a/xen/include/asm-x86/p2m.h
> +++ b/xen/include/asm-x86/p2m.h
> @@ -741,7 +741,8 @@ void p2m_mem_paging_drop_page(struct dom
>  /* Start populating a paged out frame */
>  void p2m_mem_paging_populate(struct domain *d, unsigned long gfn);
>  /* Prepare the p2m for paging a frame in */
> -int p2m_mem_paging_prep(struct domain *d, unsigned long gfn, uint64_t buffer);
> +int p2m_mem_paging_prep(struct domain *d, unsigned long gfn,
> +                        XEN_GUEST_HANDLE_64(const_uint8) buffer);
>  /* Resume normal operation (in case a domain was paused) */
>  struct vm_event_st;
>  void p2m_mem_paging_resume(struct domain *d, struct vm_event_st *rsp);
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -396,10 +396,10 @@ struct xen_mem_paging_op {
>      uint8_t     op;         /* XENMEM_paging_op_* */
>      domid_t     domain;
>  
> -    /* PAGING_PREP IN: buffer to immediately fill page in */
> -    uint64_aligned_t    buffer;
> -    /* Other OPs */
> -    uint64_aligned_t    gfn;           /* IN:  gfn of page being operated on */
> +    /* IN: (XENMEM_paging_op_prep) buffer to immediately fill page from */
> +    XEN_GUEST_HANDLE_64(const_uint8) buffer;
> +    /* IN:  gfn of page being operated on */
> +    uint64_aligned_t    gfn;
>  };
>  typedef struct xen_mem_paging_op xen_mem_paging_op_t;
>  DEFINE_XEN_GUEST_HANDLE(xen_mem_paging_op_t);
> 
> 
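[Editorial sketch: stripped of Xen specifics, the bounce pattern in the
libxc hunk above is: allocate and fill a hypervisor-accessible buffer
before the call, then release it afterwards regardless of the result.
A minimal simulation in plain C; all names here are hypothetical, and
malloc/memcpy stand in for xc_hypercall_bounce_pre()/post().]

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define FAKE_PAGE_SIZE 4096u

/* Stand-in for the hypervisor side: it only ever reads from the bounce
 * buffer, never from the caller's original pointer. */
static unsigned char hv_copy[FAKE_PAGE_SIZE];

static int fake_memory_op(const unsigned char *bounce)
{
    if ( !bounce )
        return -1; /* like the null-handle check in p2m_mem_paging_prep() */
    memcpy(hv_copy, bounce, FAKE_PAGE_SIZE);
    return 0;
}

/* Mirrors the shape of the patched xc_mem_paging_memop(): bounce the
 * caller's buffer in (the IN direction), make the call, then
 * unconditionally release the bounce copy and return rc. */
static int paging_memop(const void *buffer)
{
    unsigned char *bounce = NULL;
    int rc;

    if ( buffer )
    {
        bounce = malloc(FAKE_PAGE_SIZE);   /* xc_hypercall_bounce_pre() */
        if ( !bounce )
            return -1;
        memcpy(bounce, buffer, FAKE_PAGE_SIZE);
    }

    rc = fake_memory_op(bounce);           /* do_memory_op() */

    free(bounce);                          /* xc_hypercall_bounce_post() */
    return rc;
}
```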



From xen-devel-bounces@lists.xenproject.org Fri May 15 14:20:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 14:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZbBz-00034M-Ds; Fri, 15 May 2020 14:20:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CqcK=65=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jZbBx-0002sR-T7
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 14:20:01 +0000
X-Inumbo-ID: 2956f118-96b7-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2956f118-96b7-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 14:20:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589552400;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=XGeb1kddEGhTLReRlY0YSXI1kO0qFDqExmEgxUFSibw=;
 b=DD/7NzmdHkrjWxYYwRP6zQk6HlXy7n8SXm8W0ScigLeomDEtBUorlSB6
 sLzRMMBX8Xyvwe/lTxOW2Zv4deVnHQ7idrQWH3bfUzmXSDNlZnHdkIlx/
 cuzkz3s+dLChrVEsM1V93KAcjqynp/ZrOzYPktCAmQESlzvCkiVEs/PTh w=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 anthony.perard@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 anthony.perard@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=anthony.perard@citrix.com;
 spf=Pass smtp.mailfrom=anthony.perard@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: n8VlNHnaIBotNw/RGV1qIF5f7sliBuIx+HPVkAkNfXr29C7Aew35OeT4xh56DUIV9s+EbuTGud
 9XtMmG6wbZxfcpfjJRu1w4RIuUxiyI9ojlWD1TNRF4n1J7W7UDVnaek9b6gUhK6kCncwFR6gtA
 1pf9uq61vfOs+HFiXq1ei/y1MzYclbJN42SFA0RGlV2DykQ8aKhjbV46RMQZbAFQEl4jyN2Ffb
 uayjheupNXh60OAGl1TcMCFkvyC/Q//X3YObURaaWLt1bJ4HZSHM0CSm1BvkKom4Rh+DBdAWZB
 9FE=
X-SBRS: 2.7
X-MesageID: 18327950
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="18327950"
Date: Fri, 15 May 2020 15:19:57 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Subject: Re: [XEN PATCH 0/2] xen/build: fix kconfig errors during config merge
Message-ID: <20200515141957.GK2116@perard.uk.xensource.com>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 01:52:04PM -0400, Stewart Hildebrand wrote:
> This series fixes a couple of kconfig errors that I observed while
> invoking a build with a defconfig and config fragment.
> 
> I invoked the build as follows:
> 
> cat > xen/arch/arm/configs/custom.config <<EOF
> CONFIG_DEBUG=y
> CONFIG_SCHED_ARINC653=y
> CONFIG_EARLY_PRINTK_ZYNQMP=y
> EOF
> make -C xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig custom.config

Thanks for the patches.

FYI, `make defconfig custom.config` doesn't work as expected with the Xen
build system; it doesn't handle this use case the way Linux's does.
There is no guarantee that "defconfig" will be made before "custom.config".
It would be better to run `make defconfig && make custom.config`, or
maybe use -j1, until this is properly handled. That's what kbuild does.

Cheers,

-- 
Anthony PERARD
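
[Editorial sketch: Anthony's point about target ordering is not
Xen-specific. Under parallel make, two goals on one command line can
finish in either order unless a dependency links them. A small
self-contained demo; the makefile, target names, and temp files are all
hypothetical.]

```shell
# Two independent targets: 'slow' takes ~0.2s, 'fast' is instant.
# With -j2, make runs them concurrently, so 'fast' usually completes
# first even though 'slow' was listed first -- the same hazard as
# `make defconfig custom.config` when defconfig must run first.
demo_mk=$(mktemp)
log=$(mktemp)
cat > "$demo_mk" <<EOF
slow: ; @sleep 0.2; echo slow-done >> $log
fast: ; @echo fast-done >> $log
EOF
make -f "$demo_mk" -j2 slow fast
cat "$log"
# Serializing the invocations restores the intended ordering:
#   make -f "$demo_mk" slow && make -f "$demo_mk" fast
```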


From xen-devel-bounces@lists.xenproject.org Fri May 15 14:40:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 14:40:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZbVl-0005PM-KR; Fri, 15 May 2020 14:40:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZbVl-0005PH-7b
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 14:40:29 +0000
X-Inumbo-ID: 050bbbec-96ba-11ea-ae69-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 050bbbec-96ba-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 14:40:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589553628;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=8Q8tQl8vjsX5cS0QouMdFSLFPF18ydCuLFkK+pEKQlQ=;
 b=GgVsuo5GbHfAHwMHkidqpZ2ZqPjUgFxZ7jWB7ql/8X+lm8iK7d8NM/yY
 aJDafpH5snnjW5Z6lXX+vlIIAweFWILvzhccXw2B+GddHTD2PQX6ZfpYc
 paX1PmCbDkEl4TRC5wZydkXZsYFLFAcrbWZbKsyAEAhUbV8QciXl4Mk4B 4=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 8jnWG1/uxZLjiG6AJmZB7EEE6+XYIEJgiNuiUD7Y4Gjiz112IOVD6a8Vl7Y6Y8JUwUkb7ZGbZv
 PJgZZPWQyq2N/1BLXFkcU9Alwa4CcizyQux2+5FYVg3y4AqjANPlDmiwrxqkHIVEWaUevE/77q
 00sTujf4+kdgqFq5Ajb4oo2nxoKDzRof2JHjJQhgkleewfiRIwk+zFKR8nAQWQ1GP+sYqDrhXi
 wIZ20pIkKHcVdCebqICXWYYTmE1iTh35YtEqqxEfyvZ5nS+b54czsUdQENr9m1gs5eUtQWc0cG
 Qu4=
X-SBRS: 2.7
X-MesageID: 17627724
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17627724"
Subject: Re: [PATCH v2 2/6] x86/mem-paging: correct p2m_mem_paging_prep()'s
 error handling
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <bf9dd27b-a7db-de0e-a804-d687e66ecf1e@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2cccf9bb-3930-436d-65de-f0eb7dd0c498@citrix.com>
Date: Fri, 15 May 2020 15:40:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <bf9dd27b-a7db-de0e-a804-d687e66ecf1e@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/04/2020 09:37, Jan Beulich wrote:
> Communicating errors from p2m_set_entry() to the caller is not enough:
> Neither the M2P nor the stats updates should occur in such a case.

"neither".  What follows a colon/semicolon isn't the start of a new sentence.

> Instead the allocated page needs to be freed again; for cleanliness
> reasons also properly take into account _PGC_allocated there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/mm/p2m.c
> +++ b/xen/arch/x86/mm/p2m.c
> @@ -1781,7 +1781,7 @@ void p2m_mem_paging_populate(struct doma
>   */
>  int p2m_mem_paging_prep(struct domain *d, unsigned long gfn_l, uint64_t buffer)
>  {
> -    struct page_info *page;
> +    struct page_info *page = NULL;
>      p2m_type_t p2mt;
>      p2m_access_t a;
>      gfn_t gfn = _gfn(gfn_l);
> @@ -1816,9 +1816,19 @@ int p2m_mem_paging_prep(struct domain *d
>              goto out;
>          /* Get a free page */
>          ret = -ENOMEM;
> -        page = alloc_domheap_page(p2m->domain, 0);
> +        page = alloc_domheap_page(d, 0);
>          if ( unlikely(page == NULL) )
>              goto out;
> +        if ( unlikely(!get_page(page, d)) )
> +        {
> +            /*
> +             * The domain can't possibly know about this page yet, so failure
> +             * here is a clear indication of something fishy going on.
> +             */

This needs a gprintk(XENLOG_ERR, ...) of some form.  (which also reminds
me that I *still* need to finish my patch forcing everyone to provide
something to qualify the domain crash, so release builds of Xen get some
hint as to what went on or why.)

> +            domain_crash(d);
> +            page = NULL;
> +            goto out;
> +        }
>          mfn = page_to_mfn(page);
>          page_extant = 0;
>  
> @@ -1832,7 +1842,6 @@ int p2m_mem_paging_prep(struct domain *d
>                       "Failed to load paging-in gfn %lx Dom%d bytes left %d\n",
>                       gfn_l, d->domain_id, ret);
>              ret = -EFAULT;
> -            put_page(page); /* Don't leak pages */
>              goto out;            
>          }
>      }
> @@ -1843,13 +1852,24 @@ int p2m_mem_paging_prep(struct domain *d
>      ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>                          paging_mode_log_dirty(d) ? p2m_ram_logdirty
>                                                   : p2m_ram_rw, a);
> -    set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
> +    if ( !ret )
> +    {
> +        set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
>  
> -    if ( !page_extant )
> -        atomic_dec(&d->paged_pages);
> +        if ( !page_extant )
> +            atomic_dec(&d->paged_pages);
> +    }
>  
>   out:
>      gfn_unlock(p2m, gfn, 0);
> +
> +    if ( page )
> +    {
> +        if ( ret )
> +            put_page_alloc_ref(page);
> +        put_page(page);

This is a very long way from clear enough to follow, and buggy if anyone
inserts a new goto out path.

~Andrew

> +    }
> +
>      return ret;
>  }
>  
>



From xen-devel-bounces@lists.xenproject.org Fri May 15 14:47:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 14:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZbbv-0005cL-Cn; Fri, 15 May 2020 14:46:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZbbu-0005cG-Ei
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 14:46:50 +0000
X-Inumbo-ID: e802582a-96ba-11ea-a580-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e802582a-96ba-11ea-a580-12813bfff9fa;
 Fri, 15 May 2020 14:46:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589554010;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=m+SBMttchB81UO7EH4e8Axt0cFZzlkx2BBcCefdUTpI=;
 b=R8tEMSaZqQBEd+6IsJizfHRdeE4a/YHiZVXxcWHs4GzFRRuYxFIP9B7U
 xtdFBq9Ien/CZfdKMwgCkT1DqKBTtanj/R4NUxDKrPPKU1bMyoT7kYvjO
 7NIuKMvWe0zpa6tCJR2KiH0IyvtA0Tutg2+1BY169QFw/CzKWWI5yh4if Q=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: /3DzRUHt8YFVmZw1JiDWUVItk4n9hSUX5gV7t6KxfapSMhQSBtjQURbki07kVy0+FVnReqs8TE
 qZNUTH5q0AIoOvk5Ko8Xzk0hOoHpNvfseCoqOdxyJ4aB3G7Jrg58whJXb9i9tY+5Zv/1jUZzdA
 Z+KLnq9MaCrMLipVYx9AGAXgyO7gLOMIGucCJrt07TjIA+ZrZSpdiIVIU7bNgyp8L0ehJe4SyV
 0wdBXbT/WvKRLKGPoLbSDku3RYpJKHecMrKA16lMDgll0ztsURbcg8FYLd1H1byYt0SH9+rQLR
 +L0=
X-SBRS: 2.7
X-MesageID: 17660730
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17660730"
Subject: Re: [PATCH v2 3/6] x86/mem-paging: use guest handle for
 XENMEM_paging_op_prep
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <28cba02b-b1fd-8cbc-5e12-9ccbf25b305a@citrix.com>
Date: Fri, 15 May 2020 15:46:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/04/2020 09:38, Jan Beulich wrote:
> While it should have been this way from the beginning, not doing so will
> become an actual problem with PVH Dom0. The interface change is binary
> compatible, but requires tools side producers to be re-built.
>
> Drop the bogus/unnecessary page alignment restriction on the input
> buffer at the same time.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Use HANDLE_64() instead of HANDLE_PARAM() for function parameter.
> ---
> Is there really no way to avoid the buffer copying in libxc?

Not currently, no.

Since we now have access to regular kernel memory via mmap() on
/dev/xen/hypercall, we are in a position where someone could implement a
small memory allocator backed exclusively by mmap()'d pages, which would
remove all bounce buffering in userspace.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 15 14:52:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 14:52:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZbgt-0006VI-0J; Fri, 15 May 2020 14:51:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CqcK=65=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jZbgr-0006VB-DT
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 14:51:57 +0000
X-Inumbo-ID: 9ef4a632-96bb-11ea-a581-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9ef4a632-96bb-11ea-a581-12813bfff9fa;
 Fri, 15 May 2020 14:51:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589554316;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=tXozS/7HhGffTeLEnjbw45EYd6Z1LV9w9ZXlgf+C5lw=;
 b=Vx6ae0CKqh0F1L8iAzQABGTPGyq4oUreD9XBf64i21JRKu/Xlesl9FRT
 VfaYX1cX1D34YkbcmAZGRLqh1a1nzE6b7zWETVFE5ws+d+eyNcQPAkqhJ
 HI9nIeeNi/TeUuj4U2vwF8vP7I0sjR51Ds+UrF4Up+wlG/Lh6d5YUJG/a Y=;
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 anthony.perard@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa3.hc3370-68.iphmx.com: domain of
 anthony.perard@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa3.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa3.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=anthony.perard@citrix.com;
 spf=Pass smtp.mailfrom=anthony.perard@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: bZ1NkMZLB4UHhB5ve1Ln8dfMSqKtKNoWZ4U5AoJCA5HOqjdqZYjKaBY+cEpMdCgj/sHfA2zTpS
 e2W82+/7vM7yQVvjgL/gJafeezdiWJmRvtp7cO1eEEUc3V/aDo7zXaGHvO3Ww1Noz+psNiMy4K
 bQ3A6BuWSW9m+FRIB4a9t8Gk5C3uN8b4vv3kDI6rwtXJAVmxtRdSglvQmJlWTPi4uGUjWnTWfk
 LNeUYn1QJqhVFtjg5DRO/d3ggjcuzH3Pemz3LXufyfZ9ilhMqV/1VCix8L3OIFauCNwTm4WW6R
 IKM=
X-SBRS: 2.7
X-MesageID: 17629650
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17629650"
Date: Fri, 15 May 2020 15:51:50 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Subject: Re: [XEN PATCH 2/2] xen/build: use the correct kconfig makefile
Message-ID: <20200515145150.GL2116@perard.uk.xensource.com>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
 <20200512175206.20314-3-stewart.hildebrand@dornerworks.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200512175206.20314-3-stewart.hildebrand@dornerworks.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 12, 2020 at 01:52:06PM -0400, Stewart Hildebrand wrote:
> This resolves the following observed error:
> 
> #
> # merged configuration written to .config (needs make)
> #
> make -f /path/to/xen/xen/../xen/Makefile olddefconfig
> make[2]: Entering directory '/path/to/xen/xen'
> make[2]: *** No rule to make target 'olddefconfig'.  Stop.
> make[2]: Leaving directory '/path/to/xen/xen'
> tools/kconfig/Makefile:95: recipe for target 'custom.config' failed

That commit message is somewhat misleading, as the command that led to
the error isn't shown. Could you expand the commit message to include
the problematic command? Something like:
    Running `make custom.config` fails with:
    ...

> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> 
> ---
> It's possible there are other places where the Makefile path will need
> to be changed. This just happened to be the one that failed for me.

The two other locations that call back into the main Makefile aren't
usable by Xen; they would lead to another error anyway. So fixing only
the %.config rule is good enough.

> ---
>  xen/tools/kconfig/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/tools/kconfig/Makefile b/xen/tools/kconfig/Makefile
> index fd37f4386a..f39521a0ed 100644
> --- a/xen/tools/kconfig/Makefile
> +++ b/xen/tools/kconfig/Makefile
> @@ -94,7 +94,7 @@ configfiles=$(wildcard $(srctree)/kernel/configs/$@ $(srctree)/arch/$(SRCARCH)/c
>  %.config: $(obj)/conf
>  	$(if $(call configfiles),, $(error No configuration exists for this target on this architecture))
>  	$(Q)$(CONFIG_SHELL) $(srctree)/tools/kconfig/merge_config.sh -m .config $(configfiles)
> -	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
> +	$(Q)$(MAKE) -f $(srctree)/tools/kconfig/Makefile.kconfig olddefconfig

Well, the issue would be with $(srctree)/Makefile, but I don't think
that can be fixed right now. So that change is good for now.


With the commit message adjusted:
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD
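
The dispatch the patch fixes can be reproduced with a toy setup — file
names below mirror the Xen tree but their contents are made up: the
target exists only in the standalone kconfig makefile, so recursing into
the top-level Makefile fails exactly as in the report, while `make -f
.../Makefile.kconfig` succeeds:

```shell
set -e
dir=$(mktemp -d)
mkdir -p "$dir/tools/kconfig"

printf 'all:\n\t@echo top-level\n' > "$dir/Makefile"
printf 'olddefconfig:\n\t@echo olddefconfig ran\n' \
    > "$dir/tools/kconfig/Makefile.kconfig"

# Pre-patch: the top-level makefile has no olddefconfig rule.
if make -f "$dir/Makefile" olddefconfig 2>/dev/null; then
    echo "unexpected: top-level makefile accepted the target"
else
    echo "fails as reported"
fi

# Post-patch: dispatch to the kconfig makefile itself.
result=$(make -f "$dir/tools/kconfig/Makefile.kconfig" olddefconfig)
echo "$result"
rm -rf "$dir"
```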


From xen-devel-bounces@lists.xenproject.org Fri May 15 14:56:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 14:56:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZblI-0006gP-L2; Fri, 15 May 2020 14:56:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZblG-0006gK-RN
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 14:56:30 +0000
X-Inumbo-ID: 3f5f27be-96bc-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f5f27be-96bc-11ea-9887-bc764e2007e4;
 Fri, 15 May 2020 14:56:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jJqN7NAk96ys78gBhjmnHyWsAwghSSrpl94QDU/Ba08=; b=s3qeRPWH/4hvl1TRLJyy1BRfb
 SztOgkzPwjuM9NQGDOrYuHcDtXgh/i+zoN49kpPBOk9TAsiPsvddo+Qk4S9IIxa8WuyBwk/ag2rCt
 OOy3tbkrRPueoMuUkQyHYkCKa6Li+K0voSC1DGMYB68GN7NQQ3yMXLbMwOx3KfN3/IFSk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZblA-0001nc-Rk; Fri, 15 May 2020 14:56:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZblA-0007yZ-HE; Fri, 15 May 2020 14:56:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZblA-0005VY-Gc; Fri, 15 May 2020 14:56:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150188-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150188: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=8d25ca41c5a146ebd81f442d4e241171db8fe0ae
X-Osstest-Versions-That: seabios=eaaf726038cb9b2d01312d6430b4e93842aa96eb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 14:56:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150188 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150188/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 149712
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 149712
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 149712
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 149712
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              8d25ca41c5a146ebd81f442d4e241171db8fe0ae
baseline version:
 seabios              eaaf726038cb9b2d01312d6430b4e93842aa96eb

Last test of basis   149712  2020-04-21 10:44:40 Z   24 days
Testing same since   150188  2020-05-15 01:54:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Ehrhardt <christian.ehrhardt@canonical.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   eaaf726..8d25ca4  8d25ca41c5a146ebd81f442d4e241171db8fe0ae -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 15 15:15:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 15:15:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZc3K-0008WV-7a; Fri, 15 May 2020 15:15:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7uuJ=65=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jZc3I-0008WQ-Ht
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 15:15:08 +0000
X-Inumbo-ID: dc357d7a-96be-11ea-a585-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc357d7a-96be-11ea-a585-12813bfff9fa;
 Fri, 15 May 2020 15:15:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E91E1AC6E;
 Fri, 15 May 2020 15:15:08 +0000 (UTC)
Subject: Re: [PATCH v2 2/6] x86/mem-paging: correct p2m_mem_paging_prep()'s
 error handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <bf9dd27b-a7db-de0e-a804-d687e66ecf1e@suse.com>
 <2cccf9bb-3930-436d-65de-f0eb7dd0c498@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <83f6463c-6a61-e79b-cf1b-77589ef287c1@suse.com>
Date: Fri, 15 May 2020 17:15:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <2cccf9bb-3930-436d-65de-f0eb7dd0c498@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 16:40, Andrew Cooper wrote:
> On 23/04/2020 09:37, Jan Beulich wrote:
>> @@ -1816,9 +1816,19 @@ int p2m_mem_paging_prep(struct domain *d
>>              goto out;
>>          /* Get a free page */
>>          ret = -ENOMEM;
>> -        page = alloc_domheap_page(p2m->domain, 0);
>> +        page = alloc_domheap_page(d, 0);
>>          if ( unlikely(page == NULL) )
>>              goto out;
>> +        if ( unlikely(!get_page(page, d)) )
>> +        {
>> +            /*
>> +             * The domain can't possibly know about this page yet, so failure
>> +             * here is a clear indication of something fishy going on.
>> +             */
> 
> This needs a gprintk(XENLOG_ERR, ...) of some form.  (which also reminds
> me that I *still* need to finish my patch forcing everyone to provide
> something to qualify the domain crash, so release builds of Xen get some
> hint as to what went on or why.)

First of all - I committed the patch earlier today, with Roger's R-b,
considering that it had been pending for quite some time.

As to the specific aspect:

>> +            domain_crash(d);

This already leaves a file/line combination as a (minimal) hint. I can
make a patch to add a gprintk() as you ask, but I'm not sure it's worth
it for this almost-dead code.

>> @@ -1843,13 +1852,24 @@ int p2m_mem_paging_prep(struct domain *d
>>      ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>>                          paging_mode_log_dirty(d) ? p2m_ram_logdirty
>>                                                   : p2m_ram_rw, a);
>> -    set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
>> +    if ( !ret )
>> +    {
>> +        set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
>>  
>> -    if ( !page_extant )
>> -        atomic_dec(&d->paged_pages);
>> +        if ( !page_extant )
>> +            atomic_dec(&d->paged_pages);
>> +    }
>>  
>>   out:
>>      gfn_unlock(p2m, gfn, 0);
>> +
>> +    if ( page )
>> +    {
>> +        if ( ret )
>> +            put_page_alloc_ref(page);
>> +        put_page(page);
> 
> This is a very long way from clear enough to follow, and buggy if anyone
> inserts a new goto out path.

What alternatives do you see?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 15 15:18:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 15:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZc6q-0000JK-OL; Fri, 15 May 2020 15:18:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=I0w9=65=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jZc6p-0000JE-Hd
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 15:18:47 +0000
X-Inumbo-ID: 5ecfecac-96bf-11ea-a585-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ecfecac-96bf-11ea-a585-12813bfff9fa;
 Fri, 15 May 2020 15:18:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589555926;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:content-transfer-encoding:in-reply-to;
 bh=gmhiXTSPLW12MRbW/gilR9oElhljD8TCyuf173u7HbM=;
 b=fD/opMdMryppcs0pJov0FFZhRfrR4DLxx9urFoPdChWAtrxaZIjhlU4F
 flGqKPce8Ck9yvVymBB2m5cg2vjcLIuT51irwZGSzPIPMUfNhXlGrRnsH
 5y87c7ZiI5OI1V/2eVjlzAyPa5EGkuC/HDMqjP7036d+ef4yV4d6LONuw k=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 roger.pau@citrix.com) identity=pra; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com"; x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 roger.pau@citrix.com designates 162.221.158.21 as permitted
 sender) identity=mailfrom; client-ip=162.221.158.21;
 receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="roger.pau@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="roger.pau@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=roger.pau@citrix.com;
 spf=Pass smtp.mailfrom=roger.pau@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: 2Ygj9Ix/7obFuEFqhUqBg5tdIYd+kNhECKf79Zm0zTmQza7eD2JLf2DhKYDdhPMcmAg4plmKP8
 gyyMVnSuUJvSJ2X0SjDCqcykZ3FQ0PKqiUOfwjjIY6iUB756cBHGy2NEuvZrRsgnqZBKwJ9zHi
 /QagGRpWZmYqIqPX+L6My/zIziu44v6DPePLZC8CzqTSIpUSFSD1jeNLhXLp1x/ME0hzhl94tK
 u/2+Gun15LEyQeZe3ob7mylo6RgnoIgpBhcneXXG+JyPa8Fc3KPFwLcuwjvd793gqo7Sf6hw7f
 BaQ=
X-SBRS: 2.7
X-MesageID: 18001253
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="18001253"
Date: Fri, 15 May 2020 17:18:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v4] x86: clear RDRAND CPUID bit on AMD family 15h/16h
Message-ID: <20200515151838.GU54375@Air-de-Roger>
References: <69382ba7-b562-2c8c-1843-b17ce6c512f1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <69382ba7-b562-2c8c-1843-b17ce6c512f1@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, Mar 09, 2020 at 10:08:50AM +0100, Jan Beulich wrote:
> Inspired by Linux commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24:
> 
>     There have been reports of RDRAND issues after resuming from suspend on
>     some AMD family 15h and family 16h systems. This issue stems from a BIOS
>     not performing the proper steps during resume to ensure RDRAND continues
>     to function properly.
> 
>     Update the CPU initialization to clear the RDRAND CPUID bit for any family
>     15h and 16h processor that supports RDRAND. If it is known that the family
>     15h or family 16h system does not have an RDRAND resume issue or that the
>     system will not be placed in suspend, the "cpuid=rdrand" kernel parameter
>     can be used to stop the clearing of the RDRAND CPUID bit.
> 
>     Note, that clearing the RDRAND CPUID bit does not prevent a processor
>     that normally supports the RDRAND instruction from executing it. So any
>     code that determined the support based on family and model won't #UD.
> 
> Warn if no explicit choice was given on affected hardware.
> 
> Check RDRAND functions at boot as well as after S3 resume (the retry
> limit chosen is entirely arbitrary).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> Still slightly RFC, and still in particular because of the change to
> parse_xen_cpuid(): Alternative approach suggestions are welcome. But now
> also because with many CPUs there may now be a lot of warnings in case
> of issues.
> ---
> v4: Check always, including during boot. Slightly better sanity check,
>     inspired by Linux commit 7879fc4bdc7.
> v3: Add call to warning_add(). If force-enabled, check RDRAND still
>     functioning after S3 resume.
> v2: Re-base.
> 
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -488,6 +488,10 @@ The Speculation Control hardware feature
>  be ignored, e.g. `no-ibrsb`, at which point Xen won't use them itself, and
>  won't offer them to guests.
>  
> +`rdrand` can be used to override the default disabling of the feature on certain
> +AMD systems.  Its negative form can of course also be used to suppress use and
> +exposure of the feature.
> +
>  ### cpuid_mask_cpu
>  > `= fam_0f_rev_[cdefg] | fam_10_rev_[bc] | fam_11_rev_b`
>  
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -4,6 +4,7 @@
>  #include <xen/param.h>
>  #include <xen/smp.h>
>  #include <xen/pci.h>
> +#include <xen/warning.h>
>  #include <asm/io.h>
>  #include <asm/msr.h>
>  #include <asm/processor.h>
> @@ -646,6 +647,25 @@ static void init_amd(struct cpuinfo_x86
>  		if (acpi_smi_cmd && (acpi_enable_value | acpi_disable_value))
>  			amd_acpi_c1e_quirk = true;
>  		break;
> +
> +	case 0x15: case 0x16:
> +		/*
> +		 * There are too many Fam15/Fam16 systems where upon resume
> +		 * from S3 firmware fails to re-setup properly functioning
> +		 * RDRAND.  Clear the feature unless force-enabled on the
> +		 * command line.
> +		 */
> +		if (c == &boot_cpu_data &&
> +		    cpu_has(c, X86_FEATURE_RDRAND) &&
> +		    !is_forced_cpu_cap(X86_FEATURE_RDRAND)) {

Given this is the only user of is_forced_cpu_cap...

> +			static const char __initconst text[] =
> +				"RDRAND may cease to work on this hardware upon resume from S3.\n"
> +				"Please choose an explicit cpuid={no-}rdrand setting.\n";
> +
> +			setup_clear_cpu_cap(X86_FEATURE_RDRAND);
> +			warning_add(text);
> +		}
> +		break;
>  	}
>  
>  	display_cacheinfo(c);
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -11,6 +11,7 @@
>  #include <asm/io.h>
>  #include <asm/mpspec.h>
>  #include <asm/apic.h>
> +#include <asm/random.h>
>  #include <asm/setup.h>
>  #include <mach_apic.h>
>  #include <public/sysctl.h> /* for XEN_INVALID_{SOCKET,CORE}_ID */
> @@ -98,6 +99,11 @@ void __init setup_force_cpu_cap(unsigned
>  	__set_bit(cap, boot_cpu_data.x86_capability);
>  }
>  
> +bool is_forced_cpu_cap(unsigned int cap)

... I think this could be made __init?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 15 15:23:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 15:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZcB2-00017G-9b; Fri, 15 May 2020 15:23:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZcB1-000178-KS
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 15:23:07 +0000
X-Inumbo-ID: f90f1a2c-96bf-11ea-a585-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f90f1a2c-96bf-11ea-a585-12813bfff9fa;
 Fri, 15 May 2020 15:23:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QfhQNrgk8L6oEOUv0i7LtXScsiE/+Da7z8C+I52E4M0=; b=hAwXv+wCZB6CJTeDafnV266SP
 AoTQqndgOib29efBgUVT27RXmBDC+kU2xFaHw3rtXG0Lj7FOaApHYpM194tPbJeUB9PNHBWV4Gve9
 TPoPGxmFokntg5ReGyrIV83e6r0/v0+JtJFJz5XB6+ooTmkRzPJsOaKUDoY12mzUWgoTM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZcAx-0002OI-Dh; Fri, 15 May 2020 15:23:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZcAx-0000Oz-7D; Fri, 15 May 2020 15:23:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZcAx-0005Kb-5Y; Fri, 15 May 2020 15:23:03 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150186-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150186: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=8c1684bb81f173543599f1848c29a2a3b1ee5907
X-Osstest-Versions-That: linux=24085f70a6e1b0cb647ec92623284641d8270637
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 15:23:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150186 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150186/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150159

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150159
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150159
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150159
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150159
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150159
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150159
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150159
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150159
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150159
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8c1684bb81f173543599f1848c29a2a3b1ee5907
baseline version:
 linux                24085f70a6e1b0cb647ec92623284641d8270637

Last test of basis   150159  2020-05-13 11:59:54 Z    2 days
Testing same since   150186  2020-05-14 19:08:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chris Chiu <chiu@endlessm.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Peter Zijlstra <peterz@infradead.org>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sven Schnelle <svens@linux.ibm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   24085f70a6e1..8c1684bb81f1  8c1684bb81f173543599f1848c29a2a3b1ee5907 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri May 15 15:46:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 15:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZcWy-0002zY-7J; Fri, 15 May 2020 15:45:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZcWw-0002zT-UK
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 15:45:46 +0000
X-Inumbo-ID: 23de8154-96c3-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23de8154-96c3-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 15:45:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589557545;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=UtDV0Wc1d/DrVU2id3rwOqamk7RLP5bareP4MzNV1nc=;
 b=R35Z+D6iQZnaKEzrQwAAXeYYJxzCLPCtZVjjjh/7ji6KBoizg48Ox07n
 g0BfbZ/735RYLzPMzq1qOnfLSk3025gvlVUsLxlN0G4O9GwC4EyQikq7a
 H47xdgOn7Kpv8dDA9kSjuI/+RdciW4cp9t1hQto/xwuzWsCfKAx0pofBB 0=;
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa6.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa6.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa6.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: gCPOMydmHXfCjDbQ7f1je/Yx+NmydMK+205b1jM2YfbC70J/9Mf8Dn3QgPXJigWiC7ZmRyHfyv
 EiLL6e/HESuzk9RSHDRC7ssgrMd8Mka81q4BbIzfr7zrr2vBcPzpREL4GBgtyOuxo0UJdWcKpL
 Yelq5yDCEy/rHBi9rdwTfm/9AmnSjoySKQF5ITmnQp5lvmC98xPRxsHrasOVaTbQXXBSd+W5X4
 hssCxhitAruOSV7cvwie5zN1oYyDSbHZhfa17UBDDsF4aYp7SRy+zffVYTGu/BWxLO4shLquxc
 HZE=
X-SBRS: 2.7
X-MesageID: 18006277
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="18006277"
Subject: Re: [PATCH v2 5/6] x86/mem-paging: move code to its dedicated source
 file
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <b9b189b1-9484-4501-6085-adf86e73f262@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5ffd9c4e-fbb2-239d-1cdf-c72fcb0ef8ac@citrix.com>
Date: Fri, 15 May 2020 16:45:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b9b189b1-9484-4501-6085-adf86e73f262@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/04/2020 09:39, Jan Beulich wrote:
> Do a little bit of style adjustment along the way, and drop the
> "p2m_mem_paging_" prefixes from the now static functions.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 15 15:49:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 15:49:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZcaP-0003DT-NR; Fri, 15 May 2020 15:49:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZcaO-0003DO-7r
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 15:49:20 +0000
X-Inumbo-ID: a362c6b0-96c3-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a362c6b0-96c3-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 15:49:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589557759;
 h=subject:to:cc:references:from:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=AC3U4/QPe+c8htFYdAMgNCnvefMoc9sYFxJWm49ju0Y=;
 b=DIsPiCDZGE4t1anD6M0Q6eiocathdJ+dTwupeUy4rS5zb7fzmI98CLEQ
 QRENmsEz8QEJ9jQCbmZ4zYoj398gL9OEBHh/L/Ao3R8TbGd0+V4ujUcwf
 6D/X/JE68JRdwPO5IcV6T5O4kYYt/gbNjgZnAqkmqx9biVPK5hF9mGARv E=;
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa4.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa4.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa4.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: qOGkNWPVO24HXQZMG8Wg3OJqdpcopAbjjmOxVC6xHj7c1WxIHRZh5v1KZbu6feispD/OzCWJHc
 7xNnCKqmXmvGZUrcVtB4tpw+ivhne3VYhUcflQkIdtr6KQljvSKLVPxFGApTZttg7TNZTFPVii
 ndh9jP51q/WGCfVefsE82KeEEEhK3dYuY17nXsEJYj24/GeFQuM02a9Ueyi1JZVv1tPxmXXo/U
 Iypyrl4CFeZsPmQkvGBV23Tuvhq74tB8i+jAYGiIZ0TkwgewVntTTme4+ApQvCz+O22nU45ZAr
 oy8=
X-SBRS: 2.7
X-MesageID: 18343889
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="18343889"
Subject: Re: [PATCH v2 6/6] x86/mem-paging: consistently use gfn_t
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <b50f9677-3b62-b071-decc-007e6a92701d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d1a87f81-4373-8174-b54a-c98e25a12a99@citrix.com>
Date: Fri, 15 May 2020 16:49:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b50f9677-3b62-b071-decc-007e6a92701d@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 23/04/2020 09:39, Jan Beulich wrote:
> --- a/xen/arch/x86/mm/hap/guest_walk.c
> +++ b/xen/arch/x86/mm/hap/guest_walk.c
> @@ -68,7 +68,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PA
>          *pfec = PFEC_page_paged;
>          if ( top_page )
>              put_page(top_page);
> -        p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
> +        p2m_mem_paging_populate(p2m->domain, _gfn(PFN_DOWN(cr3)));

addr_to_gfn()

Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 15 15:52:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 15:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZcdT-0003zp-6V; Fri, 15 May 2020 15:52:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZcdS-0003zk-2a
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 15:52:30 +0000
X-Inumbo-ID: 14506526-96c4-11ea-ae69-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14506526-96c4-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 15:52:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589557949;
 h=subject:from:to:cc:references:message-id:date:
 mime-version:in-reply-to:content-transfer-encoding;
 bh=LqOMgwBCraexW+x7DyWMRZYdT+fJD87gTxi9XiSBgi4=;
 b=DsoVeDZTDoNNW9T9qhZpPgCdJstIVgALrYZiak0GImRfyT+dSd5DO3mX
 GB11Jv3sP/5g41SbDTNafcTCZI5RdKofYM4g13Ktz0H6412ffMCu2vgWm
 aPJFjvrIDhlx3hQJZtvI7TY+k79KxMTOiMtdKyYJPVGAZ0Gxgwple9Wx4 o=;
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 andrew.cooper3@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="andrew.cooper3@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa2.hc3370-68.iphmx.com: domain of
 Andrew.Cooper3@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="Andrew.Cooper3@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa2.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa2.hc3370-68.iphmx.com;
 envelope-from="Andrew.Cooper3@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=andrew.cooper3@citrix.com;
 spf=Pass smtp.mailfrom=Andrew.Cooper3@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: TIBIMGfxpy5XlkH/PD9FtukFDItMTCgr0iPsjmHbKlkHhK/SGR3c28LsiAT60nOYjq0vt0Mm7Y
 sHaZBoBW8M5oPzP26eZVDzneEuZz/JW2znJx6PRgZoRux2jTczclDeimrr1s5vIV6YrWFVh+21
 sTQ77LK7jiSomjUO23ebRLnH54JloAp8geGjb/1cG7pn4wziw0sYsF18toLZ0Ct1H+vb62FZfv
 +SWNdv+jqEUlHsEUSJlhjvzZ6yWj/K0Zr8pATE9bqezyham03yvIUntCHIA7Kn1/ueLtkCAJZL
 /XM=
X-SBRS: 2.7
X-MesageID: 17672619
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17672619"
Subject: Re: [PATCH v2 6/6] x86/mem-paging: consistently use gfn_t
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <b50f9677-3b62-b071-decc-007e6a92701d@suse.com>
 <d1a87f81-4373-8174-b54a-c98e25a12a99@citrix.com>
Message-ID: <fe2f6c75-9cdf-a359-8be9-71066ac13ae4@citrix.com>
Date: Fri, 15 May 2020 16:52:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <d1a87f81-4373-8174-b54a-c98e25a12a99@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/05/2020 16:49, Andrew Cooper wrote:
> On 23/04/2020 09:39, Jan Beulich wrote:
>> --- a/xen/arch/x86/mm/hap/guest_walk.c
>> +++ b/xen/arch/x86/mm/hap/guest_walk.c
>> @@ -68,7 +68,7 @@ unsigned long hap_p2m_ga_to_gfn(GUEST_PA
>>          *pfec = PFEC_page_paged;
>>          if ( top_page )
>>              put_page(top_page);
>> -        p2m_mem_paging_populate(p2m->domain, cr3 >> PAGE_SHIFT);
>> +        p2m_mem_paging_populate(p2m->domain, _gfn(PFN_DOWN(cr3)));
> addr_to_gfn()

Sorry - gaddr_to_gfn()

~Andrew

>
> Otherwise, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
>



From xen-devel-bounces@lists.xenproject.org Fri May 15 15:56:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 15:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZcgr-00049Z-Ne; Fri, 15 May 2020 15:56:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eCno=65=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jZcgq-00049U-D6
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 15:56:00 +0000
X-Inumbo-ID: 91e325b4-96c4-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91e325b4-96c4-11ea-9887-bc764e2007e4;
 Fri, 15 May 2020 15:55:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 61E23ABC2;
 Fri, 15 May 2020 15:56:01 +0000 (UTC)
Message-ID: <aa35e39df4e8b10650678f8fa385f80364208270.camel@suse.com>
Subject: Re: [PATCH v3 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Date: Fri, 15 May 2020 17:55:57 +0200
In-Reply-To: <20200514153614.2240-3-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>
 <20200514153614.2240-3-jgross@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-UqaGQ5PM4VTxPNQ6DIZQ"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-UqaGQ5PM4VTxPNQ6DIZQ
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-05-14 at 17:36 +0200, Juergen Gross wrote:
> With support of core scheduling sched_unit_migrate_finish() gained a
> call of sync_vcpu_execstate() as it was believed to be called as a
> result of vcpu migration in any case.
>=20
> In case of migrating a vcpu away from a physical cpu for a short
> period of time only, without ever being scheduled on the selected new
> cpu, this might not be true, so drop the call and let the lazy state
> syncing do its job.
>=20
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
--=20
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-UqaGQ5PM4VTxPNQ6DIZQ
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl6+u40ACgkQFkJ4iaW4
c+7BwRAA2rZs7YlhgJhoP8L0xeiN/Wsq4EkIhQJbZHRVOE/fgs4lgFCQSmOQF0EO
aoMOPp3sM2GCchEWfaEHF7T8Rl5d51J29iJVZFhCQlqPHuAody+NviqQhaXrnKyN
a3Fr9silEajjkqqdHyXM2acZn2Gh6MnQzIEfcEem7i6CWylzr0UmFWranvc1he7s
TJQ8lgv04mWvHZjwKuDaFXGhcPJ5IY4xqVqQbrwvrgkYdihxAAf9rASNpps+QcA/
7m1wNm5AuNukQ2TLir28MvmTJN/TcuudoCHJnZPVl2hX8H2FE5GumyCzdOzBN+7d
gfDwcDNcwau2hOvPpspmlBYdimb1XwhLa2yEl0T6jr66MXxYRaSItV7Or5hcww8y
WOXMr2CZHke3R64h9u0GExEeKe1bP3yQmQkAI1q+6oDSeSTh7KCPF86rBXSjfEqy
aZHqoKgWrWcc7wBkwi2hc/ABHLfP4Pb9WE7L5im7ave9KyIiVzDqJp9rxTfndeQi
fAVdFd5vVVl28MPCTQNh8eyugZugCFt+lK6D0+NYkg3uAg0paHTyUH2M0s09i1ij
Brdt95jfuWtShZRc/Z5qznTLskcpnmXM8LcvTJFXGinn4eSdjcF1FIPOKtg4fb1o
M+7Qwt/fpvu2fGJDVQq2uSa+9TygmXdO8Ad3iHqKEewHfHHVmcQ=
=RjuR
-----END PGP SIGNATURE-----

--=-UqaGQ5PM4VTxPNQ6DIZQ--



From xen-devel-bounces@lists.xenproject.org Fri May 15 16:21:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:21:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZd5I-0007J0-Rc; Fri, 15 May 2020 16:21:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CqcK=65=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jZd5H-0007Iv-UC
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:21:15 +0000
X-Inumbo-ID: 18876e7f-96c8-11ea-a594-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18876e7f-96c8-11ea-a594-12813bfff9fa;
 Fri, 15 May 2020 16:21:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
 d=citrix.com; s=securemail; t=1589559674;
 h=date:from:to:cc:subject:message-id:references:
 mime-version:in-reply-to;
 bh=gTDncA46BJ3IXjS8vTOnondOAUPsB/a2ijpHo0ua58s=;
 b=SV9H0NtLJ+XNO93eXEW2dBw6izrFRIKyZQCm7zLYf3wcrHLOwt7ZR8zb
 7yBFwSigwZqY0WwZzA9nODe5LzAnTLelQ4w38iA1PedJnXOwAnHl+eB91
 TmU8H+rQ/sLt2tNJfTctNafI/3BwWqVjrST6ce7C+WxHcXHdWAr+e12jm M=;
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 anthony.perard@citrix.com) identity=pra;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible
Received-SPF: Pass (esa5.hc3370-68.iphmx.com: domain of
 anthony.perard@citrix.com designates 162.221.158.21 as
 permitted sender) identity=mailfrom;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="anthony.perard@citrix.com";
 x-conformance=sidf_compatible; x-record-type="v=spf1";
 x-record-text="v=spf1 ip4:209.167.231.154 ip4:178.63.86.133
 ip4:195.66.111.40/30 ip4:85.115.9.32/28 ip4:199.102.83.4
 ip4:192.28.146.160 ip4:192.28.146.107 ip4:216.52.6.88
 ip4:216.52.6.188 ip4:162.221.158.21 ip4:162.221.156.83
 ip4:168.245.78.127 ~all"
Received-SPF: None (esa5.hc3370-68.iphmx.com: no sender
 authenticity information available from domain of
 postmaster@mail.citrix.com) identity=helo;
 client-ip=162.221.158.21; receiver=esa5.hc3370-68.iphmx.com;
 envelope-from="anthony.perard@citrix.com";
 x-sender="postmaster@mail.citrix.com";
 x-conformance=sidf_compatible
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none;
 spf=None smtp.pra=anthony.perard@citrix.com;
 spf=Pass smtp.mailfrom=anthony.perard@citrix.com;
 spf=None smtp.helo=postmaster@mail.citrix.com;
 dmarc=pass (p=none dis=none) d=citrix.com
IronPort-SDR: got8mvFOKaDV5eQwxeRoQ+nLzxGoxx5NCZZVwfEnoPBCAlnfuvf6FdnK9H6xy5x5NHCH6hstV9
 g/wlhiY/HKRQe5hYUOC+7bNSdkTKQ1PvEubs5yhnZydD2+AfuoSv0l9DQk2LIGxM5Kl8FEENBe
 azR73aRASpO8eIorwtChbREzrWv18AIqbFitbuxTGxt5m+BUNKDCZ4/kLPNaxy+9rM0WZc1rD6
 7xucs80M64rCV7VqQ4LO4fG+PuPDr42E0AG7hSmLhWmaUStQDH+PQF1TJyCGez6qi86ZRHEEVa
 u74=
X-SBRS: 2.7
X-MesageID: 17915855
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,395,1583211600"; d="scan'208";a="17915855"
Date: Fri, 15 May 2020 17:21:08 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH 05/16] x86/shstk: Introduce Supervisor Shadow Stack support
Message-ID: <20200515162108.GM2116@perard.uk.xensource.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-6-andrew.cooper3@citrix.com>
 <d0347fec-3ccb-daa7-5c4d-f0e74b5fb247@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <d0347fec-3ccb-daa7-5c4d-f0e74b5fb247@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 04, 2020 at 03:52:58PM +0200, Jan Beulich wrote:
> On 02.05.2020 00:58, Andrew Cooper wrote:
> > --- a/xen/scripts/Kconfig.include
> > +++ b/xen/scripts/Kconfig.include
> > @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
> >  # Return y if the linker supports <flag>, n otherwise
> >  ld-option = $(success,$(LD) -v $(1))
> >  
> > +# $(as-instr,<instr>)
> > +# Return y if the assembler supports <instr>, n otherwise
> > +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
> 
> CLANG_FLAGS caught my eye here, then noticing that cc-option
> also uses it. Anthony - what's the deal with this? It doesn't
> look to get defined anywhere, and I also don't see what's
> clang-specific about these constructs.

It's because these constructs are gcc-specific :-). Indeed, CLANG_FLAGS
probably needs to be defined; providing the full AFLAGS/CFLAGS doesn't
seem like a good idea, as it may change the result of the commands.

Linux has a few clang-specific flags in CLANG_FLAGS; I have found these:
    The ones for cross compilation: --prefix, --target, --gcc-toolchain
    -no-integrated-as
    -Werror=unknown-warning-option
And that's it.

So, I think we could keep using CLANG_FLAGS in Kconfig.include and
define it in the Makefile with a comment saying that it's only used by
Kconfig. It would always have -Werror=unknown-warning-option, plus
-no-integrated-as when needed; -Wunknown-warning-option has been present
since clang 3.0.0, according to Linux's commit 589834b3a009 ("kbuild:
Add -Werror=unknown-warning-option to CLANG_FLAGS").

The option -Werror=unknown-warning-option is there to make sure the
warning is enabled; it is on by default, but could be disabled in a
particular build of clang. See e8de12fb7cde ("kbuild: Check for unknown
options with cc-option usage in Kconfig and clang").

I'll write a patch with this new CLANG_FLAGS.
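For context, the as-instr probe quoted above boils down to piping the
candidate instruction into the compiler driver in assembler mode and
testing the exit status. A minimal stand-alone sketch, outside of the
Kconfig $(success,...) machinery, using plain cc and a made-up invalid
instruction for illustration:

```shell
#!/bin/sh
# Sketch of the as-instr probe: print y if the assembler accepts the
# instruction, n otherwise.
as_instr() {
    if printf '%b\n' "$1" | cc -c -x assembler -o /dev/null - 2>/dev/null; then
        echo y
    else
        echo n
    fi
}

as_instr 'nop'                     # accepted by any assembler -> y
as_instr 'definitely-not-an-op'    # rejected -> n
```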

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 15 16:40:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:40:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZdNn-0000hJ-Gg; Fri, 15 May 2020 16:40:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iw4X=65=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jZdNl-0000hE-MM
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:40:21 +0000
X-Inumbo-ID: c337ac1a-96ca-11ea-a599-12813bfff9fa
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c337ac1a-96ca-11ea-a599-12813bfff9fa;
 Fri, 15 May 2020 16:40:19 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id w64so3344242wmg.4
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 09:40:19 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=wXiT8A/j8/brRNzY3NCxwEiCp+yIyHWy013b1kekcVE=;
 b=UBhG1TVi52CZx79fK1f7HHujMKtqrhG5qVPWuHYU8+FjuXY5zdkAjmki1pm3x3/CqD
 KfUw8jTczVeJc4OuiU9Y9YAptbYN81nARm25S+zm1tRl9X+3GOOFAjeR/PUmd6U35oQY
 sqIh2fbmhAynR+TFt17139L411wNQKvLWQ5+RV8baosjFjCd40msi/MVQdNP2qctTB2q
 haRZ7uZBJAmChSTb3ceOqQ32YCCvCdgoOohYOMwq2/z6DR/PK1n7t7JrSVb24DEb8/NJ
 wuq6sgJNUYA95BsHkIaVdCNsYjY+F10l4zqPGpYidgs7SgB+bcWyVL2UTZES2Q7ldwAD
 vl7Q==
X-Gm-Message-State: AOAM533XbG1s2AciEc7fteiMdD2UGh/9oZSeGunmpwwW8q1gBlFg0kKA
 zseU8YB1qeOfOhN2nTO6fVQ=
X-Google-Smtp-Source: ABdhPJzMfM+xGPeotI/kLGAKRdGSeS0WJG6IdZjVyZPhMNCboy92/SIIs+jwFireDIQ+DETcaI+1cg==
X-Received: by 2002:a1c:990d:: with SMTP id b13mr4772822wme.179.1589560818737; 
 Fri, 15 May 2020 09:40:18 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id u127sm4440006wme.8.2020.05.15.09.40.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 15 May 2020 09:40:18 -0700 (PDT)
Date: Fri, 15 May 2020 16:40:16 +0000
From: Wei Liu <wl@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 3/6] x86/mem-paging: use guest handle for
 XENMEM_paging_op_prep
Message-ID: <20200515164016.ipftrwdykjd4j2yf@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <43811c95-aa41-a34a-06ce-7d344cb1411d@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, Apr 23, 2020 at 10:38:18AM +0200, Jan Beulich wrote:
> While it should have been this way from the beginning, not doing so will
> become an actual problem with PVH Dom0. The interface change is binary
> compatible, but requires tools side producers to be re-built.
> 
> Drop the bogus/unnecessary page alignment restriction on the input
> buffer at the same time.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Wei Liu <wl@xen.org>

> ---
> v2: Use HANDLE_64() instead of HANDLE_PARAM() for function parameter.
> ---
> Is there really no way to avoid the buffer copying in libxc?

The buffer is managed by the caller. There is no good way for libxc to
know its properties as far as I can tell.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri May 15 16:44:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZdRJ-0000r0-0i; Fri, 15 May 2020 16:44:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iw4X=65=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jZdRH-0000qt-6d
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:43:59 +0000
X-Inumbo-ID: 45dfe34e-96cb-11ea-b07b-bc764e2007e4
Received: from mail-wr1-f45.google.com (unknown [209.85.221.45])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45dfe34e-96cb-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 16:43:58 +0000 (UTC)
Received: by mail-wr1-f45.google.com with SMTP id v12so4248047wrp.12
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 09:43:58 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=WI1ahOcWbzT79YScFm2k3Snud6/mjAqZd8zfDh+GsFc=;
 b=PPeKFuOtC8BXDFQJ8JgqrfKK0ANzefXC2Xw8oglbY6mLrDSdqZrlbqbC5osBd4tqf0
 cCHUVDY5KMD3fs9UVwlL/KhWBoO6U0fMhyUP00UpOGgZpx3J8wMQ8ftET/nTrzp2rQzN
 Ix4xz+cQ+qAjBI5UpxfqOB87GUG95AxAZmz8jYguZxUIUFgBKkmn2H3p6AZTUv+QyxsV
 1QPoWqagz6by37oOxyvvqzZe8RCf/dru7y/UNdCYonzVr655koPdvhdojYs75ix/P3Wu
 EI8zp2OGEuvJkL2Ggg7k8+nZn8rYFzHlLf6qErjJbCQ4uQXfOX6fWKJ6ZIpq/9iZMQef
 bCww==
X-Gm-Message-State: AOAM532xy6ehGA2IxhnyFmQbjbq/eTjyYwOKlpsRwIjdFs0LEFpOb58h
 ay8SY6LEYgJu06zAYTzvP9w=
X-Google-Smtp-Source: ABdhPJxxuUMcUrB7+efaSelt+CKLf0mZxIhPLOg5WPmCbE2PJ/9EkzKSRyvKTpzDzinBxr5H9kJNfw==
X-Received: by 2002:adf:82c3:: with SMTP id 61mr5441863wrc.326.1589561038048; 
 Fri, 15 May 2020 09:43:58 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id q17sm4602659wmk.36.2020.05.15.09.43.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 15 May 2020 09:43:57 -0700 (PDT)
Date: Fri, 15 May 2020 16:43:55 +0000
From: Wei Liu <wl@xen.org>
To: paul@xen.org
Subject: Re: Items for CHANGELOG.md in 4.14
Message-ID: <20200515164355.oujf2mvbep6yhlua@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <000401d62aaa$e0eccab0$a2c66010$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <000401d62aaa$e0eccab0$a2c66010$@xen.org>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 'Roger Pau Monne' <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 12:20:44PM +0100, Paul Durrant wrote:
> All,
> 
>   In the last community call I took an ACTION to send a reminder email to submit patches to get CHANGELOG.md up to date. Several
> items were mentioned during the call (see minutes at https://cryptpad.fr/pad/#/2/pad/edit/qPQUEQEv3nJJ97clS8b2KdtP/):
> 
> - Ability to conditionally compile-out 32-bit PV guests (security attack surface reduction)
> - Basic support for AMD Milan CPUs
> - Golang binding advances
> - x2apic, improvements on tlbflush hypercall
> - General improvements in pvshim
> - Xen on Hyper-V

As part of this work, there is now a "framework" to make it easier to
port Xen to run on top of other hypervisors. Does this need calling out?

Wei.


From xen-devel-bounces@lists.xenproject.org Fri May 15 16:53:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZdaE-0001pa-R8; Fri, 15 May 2020 16:53:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gjd2=65=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jZdaD-0001pF-9i
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:53:13 +0000
X-Inumbo-ID: 8e6510f2-96cc-11ea-a5a3-12813bfff9fa
Received: from mail-il1-f195.google.com (unknown [209.85.166.195])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e6510f2-96cc-11ea-a5a3-12813bfff9fa;
 Fri, 15 May 2020 16:53:09 +0000 (UTC)
Received: by mail-il1-f195.google.com with SMTP id q10so3264451ile.0
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 09:53:09 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=jt1CDv42jbXBvQwTwsCwnePoxzMUg9bAOQny9395qek=;
 b=LEVXgLxXJBHHw0dcVcpUNYBjENXF7i3ETBSZI/+Z0EoHXTlWBJkHloWVv2WGMuj9L4
 DoQRdfLqxYxFASwj4VeP+76GutLSCkox1Yv6YiD3iIZ8Ab47uMgvG4L+ADxZj2NXMupZ
 4DrKHyNk+5FVPELF+3RwEzSqxH845cv6y8HkCTYHMrxR6eNHwUXW67h/j70tDdn+3l50
 QFKSkm/Qegq5vmSZ7W1DSYSDpytcXZ4FU8o4vTwc9jZJQr8Uqq9EiNzjOCPVUK9/FLkJ
 uLVqkrYpCds9P1a/yAPdFsNgeyt0jWCbPBQSrfs61SeqUnEmDIt3vPq93jRNxgrNuVKu
 Jikw==
X-Gm-Message-State: AOAM531HNHa7mGSrrAALdYmyI2PHUyG6A6gZsbsilKTWvGNjfrRfTs9I
 9zAs38BMaJU+M2NB5Uu2iRvxG6n+
X-Google-Smtp-Source: ABdhPJwOm/mJrPEyejLSbaZbRPEYr43yHgbWuONTakvkvCRA7qod90BxOnWL1cGETaMDIyqLHWvdMQ==
X-Received: by 2002:a92:3607:: with SMTP id d7mr2608056ila.222.1589561588859; 
 Fri, 15 May 2020 09:53:08 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id f17sm932136iol.26.2020.05.15.09.53.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 15 May 2020 09:53:08 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 2/3] xen/vm_event: add vm_event_check_pending_op
Date: Fri, 15 May 2020 10:53:01 -0600
Message-Id: <0b282dc0a59459da4db0c53b13508e1ff39d70b9.1589561218.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <cover.1589561218.git.tamas@tklengyel.com>
References: <cover.1589561218.git.tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Perform sanity checking when shutting vm_event down to determine whether
it is safe to do so. Error out with -EAGAIN if pending operations are
found for the domain.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/vm_event.c        | 23 +++++++++++++++++++++++
 xen/common/vm_event.c          | 17 ++++++++++++++---
 xen/include/asm-arm/vm_event.h |  7 +++++++
 xen/include/asm-x86/vm_event.h |  2 ++
 4 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 848d69c1b0..558b7da4b1 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -297,6 +297,29 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
     };
 }
 
+bool vm_event_check_pending_op(struct vcpu *v)
+{
+    struct monitor_write_data *w = &v->arch.vm_event->write_data;
+
+    if ( !v->arch.vm_event->sync_event )
+        return false;
+
+    if ( w->do_write.cr0 )
+        return true;
+    if ( w->do_write.cr3 )
+        return true;
+    if ( w->do_write.cr4 )
+        return true;
+    if ( w->do_write.msr )
+        return true;
+    if ( v->arch.vm_event->set_gprs )
+        return true;
+    if ( v->arch.vm_event->emulate_flags )
+        return true;
+
+    return false;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 127f2d58f1..2df327a42c 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -183,6 +183,7 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
     if ( vm_event_check_ring(ved) )
     {
         struct vcpu *v;
+        bool pending_op = false;
 
         spin_lock(&ved->lock);
 
@@ -192,9 +193,6 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
             return -EBUSY;
         }
 
-        /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d, ved->xen_port);
-
         /* Unblock all vCPUs */
         for_each_vcpu ( d, v )
         {
@@ -203,8 +201,21 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
                 vcpu_unpause(v);
                 ved->blocked--;
             }
+
+            if ( vm_event_check_pending_op(v) )
+                pending_op = true;
         }
 
+        /* vm_event ops are still pending until vCPUs get scheduled */
+        if ( pending_op )
+        {
+            spin_unlock(&ved->lock);
+            return -EAGAIN;
+        }
+
+        /* Free domU's event channel and leave the other one unbound */
+        free_xen_event_channel(d, ved->xen_port);
+
         destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct);
 
         vm_event_cleanup_domain(d);
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 14d1d341cc..5cbc9c6dc2 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -58,4 +58,11 @@ void vm_event_sync_event(struct vcpu *v, bool value)
     /* Not supported on ARM. */
 }
 
+static inline
+bool vm_event_check_pending_op(struct vcpu *v)
+{
+    /* Not supported on ARM. */
+    return false;
+}
+
 #endif /* __ASM_ARM_VM_EVENT_H__ */
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 785e741fba..9c5ce3129c 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -54,4 +54,6 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
 
 void vm_event_sync_event(struct vcpu *v, bool value);
 
+bool vm_event_check_pending_op(struct vcpu *v);
+
 #endif /* __ASM_X86_VM_EVENT_H__ */
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 16:53:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZdaA-0001ow-Ad; Fri, 15 May 2020 16:53:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gjd2=65=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jZda8-0001ol-Sw
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:53:08 +0000
X-Inumbo-ID: 8d9cea8c-96cc-11ea-9887-bc764e2007e4
Received: from mail-io1-f66.google.com (unknown [209.85.166.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d9cea8c-96cc-11ea-9887-bc764e2007e4;
 Fri, 15 May 2020 16:53:08 +0000 (UTC)
Received: by mail-io1-f66.google.com with SMTP id j8so3389551iog.13
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 09:53:08 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=LCoqaKlszXhgdfaEb+32A9dBWLg/txZ1ajVdd1EaKUI=;
 b=Opj7CDveF5EdT5c/QAKT5cReFiT0u19RWTQ6vbaztpwmFhfdGvOfTTlXYhP5g8qyyW
 8aIpVyDfbc+vc1WyZScBfqQYIvQKNeuchsY1U1+ikI0Wdf/Of8SFDNk+Xe+clxOBwMX0
 zrQfj69S8wcUwy2SHKzMX4oSEat1J3zUgzeulzwJwjRVt/VmYe4sSHZDUy8y/TVZoRYK
 nceDgLTOek2ActZnKHbalWAGMef13Yie9WFo/0FtRWbmrpvkp/P1/DX4GOvLpdkG1x7a
 BtE99RMmpAYPCrVuMnnmxJUnkw/SmTwj3QTgfELNvpOeEqJ6gSOqvWD4oF5pxVAKJmRZ
 Pr+Q==
X-Gm-Message-State: AOAM532f8TikI3fpBQ82A3JLyo0uWA/tOOBV21kM+psOOywy3v1Or2ff
 li9Vd1DgabiRbO0RuWIKmy/EAYIZ
X-Google-Smtp-Source: ABdhPJxSkpecThAe6sZz9e5Ngtdnf6wU8HA7RM/rYMWhKrBrKtW25xrNSVuPAx+qunX4javiQ0Bnhg==
X-Received: by 2002:a5e:d506:: with SMTP id e6mr3878910iom.184.1589561587502; 
 Fri, 15 May 2020 09:53:07 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id f17sm932136iol.26.2020.05.15.09.53.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 15 May 2020 09:53:06 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/3] xen/monitor: Control register values
Date: Fri, 15 May 2020 10:53:00 -0600
Message-Id: <72d4d282dd20b79ebdbaf1f70865ea38b075c5c0.1589561218.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <cover.1589561218.git.tamas@tklengyel.com>
References: <cover.1589561218.git.tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Extend the monitor_op domctl with an option that allows a monitor
subscriber to control what values certain registers are permitted
to hold.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/hvm/hvm.c       | 31 +++++++++++++++++++------------
 xen/arch/x86/monitor.c       | 10 +++++++++-
 xen/include/asm-x86/domain.h |  1 +
 xen/include/public/domctl.h  |  1 +
 4 files changed, 30 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 814b7020d8..063f8ddc18 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2263,9 +2263,10 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
     {
         ASSERT(v->arch.vm_event);
 
-        if ( hvm_monitor_crX(CR0, value, old_value) )
+        if ( hvm_monitor_crX(CR0, value, old_value) &&
+             v->domain->arch.monitor.control_register_values )
         {
-            /* The actual write will occur in hvm_do_resume(), if permitted. */
+            /* The actual write will occur in hvm_do_resume, if permitted. */
             v->arch.vm_event->write_data.do_write.cr0 = 1;
             v->arch.vm_event->write_data.cr0 = value;
 
@@ -2362,9 +2363,10 @@ int hvm_set_cr3(unsigned long value, bool may_defer)
     {
         ASSERT(v->arch.vm_event);
 
-        if ( hvm_monitor_crX(CR3, value, old) )
+        if ( hvm_monitor_crX(CR3, value, old) &&
+             v->domain->arch.monitor.control_register_values )
         {
-            /* The actual write will occur in hvm_do_resume(), if permitted. */
+            /* The actual write will occur in hvm_do_resume, if permitted. */
             v->arch.vm_event->write_data.do_write.cr3 = 1;
             v->arch.vm_event->write_data.cr3 = value;
 
@@ -2443,9 +2445,10 @@ int hvm_set_cr4(unsigned long value, bool may_defer)
     {
         ASSERT(v->arch.vm_event);
 
-        if ( hvm_monitor_crX(CR4, value, old_cr) )
+        if ( hvm_monitor_crX(CR4, value, old_cr) &&
+             v->domain->arch.monitor.control_register_values )
         {
-            /* The actual write will occur in hvm_do_resume(), if permitted. */
+            /* The actual write will occur in hvm_do_resume, if permitted. */
             v->arch.vm_event->write_data.do_write.cr4 = 1;
             v->arch.vm_event->write_data.cr4 = value;
 
@@ -3587,13 +3590,17 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
 
         ASSERT(v->arch.vm_event);
 
-        /* The actual write will occur in hvm_do_resume() (if permitted). */
-        v->arch.vm_event->write_data.do_write.msr = 1;
-        v->arch.vm_event->write_data.msr = msr;
-        v->arch.vm_event->write_data.value = msr_content;
-
         hvm_monitor_msr(msr, msr_content, msr_old_content);
-        return X86EMUL_OKAY;
+
+        if ( v->domain->arch.monitor.control_register_values )
+        {
+            /* The actual write will occur in hvm_do_resume, if permitted. */
+            v->arch.vm_event->write_data.do_write.msr = 1;
+            v->arch.vm_event->write_data.msr = msr;
+            v->arch.vm_event->write_data.value = msr_content;
+
+            return X86EMUL_OKAY;
+        }
     }
 
     if ( (ret = guest_wrmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
index bbcb7536c7..1517a97f50 100644
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -144,7 +144,15 @@ int arch_monitor_domctl_event(struct domain *d,
                               struct xen_domctl_monitor_op *mop)
 {
     struct arch_domain *ad = &d->arch;
-    bool requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
+    bool requested_status;
+
+    if ( XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS == mop->op )
+    {
+        ad->monitor.control_register_values = true;
+        return 0;
+    }
+
+    requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
 
     switch ( mop->event )
     {
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 5b6d909266..d890ab7a22 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -416,6 +416,7 @@ struct arch_domain
          * This is used to filter out pagefaults.
          */
         unsigned int inguest_pagefault_disabled                            : 1;
+        unsigned int control_register_values                               : 1;
         struct monitor_msr_bitmap *msr_bitmap;
         uint64_t write_ctrlreg_mask[4];
     } monitor;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 1ad34c35eb..cbcd25f12c 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1025,6 +1025,7 @@ struct xen_domctl_psr_cmt_op {
 #define XEN_DOMCTL_MONITOR_OP_DISABLE           1
 #define XEN_DOMCTL_MONITOR_OP_GET_CAPABILITIES  2
 #define XEN_DOMCTL_MONITOR_OP_EMULATE_EACH_REP  3
+#define XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS 4
 
 #define XEN_DOMCTL_MONITOR_EVENT_WRITE_CTRLREG         0
 #define XEN_DOMCTL_MONITOR_EVENT_MOV_TO_MSR            1
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 16:53:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZda9-0001oq-2U; Fri, 15 May 2020 16:53:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gjd2=65=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jZda8-0001og-9U
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:53:08 +0000
X-Inumbo-ID: 8cc78f7c-96cc-11ea-a5a3-12813bfff9fa
Received: from mail-io1-f66.google.com (unknown [209.85.166.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8cc78f7c-96cc-11ea-a5a3-12813bfff9fa;
 Fri, 15 May 2020 16:53:07 +0000 (UTC)
Received: by mail-io1-f66.google.com with SMTP id x5so3459508ioh.6
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 09:53:06 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=CRPzrYLWQvzMenpCCXIyMtcBORlWmDbOjSdaQ3OXrrc=;
 b=XZFlOk1EY3r20Ag9bWB7PnP6oBInA8pkHQWi/wUswgaB6Qb0og0SiFiS+yEeyoaEeN
 ErEH3BZ/Jf84oY4dkgKnkKQ+c0/cPrF9IhJfdF5qX9ov7cqMUvfWaLpEWj9S0ybFNK0l
 VCZHyDcbWrw6N6NULnMMeKu2CjIRUYtLWYkC/1qUaG89vCdrTiOSVbS0oGW29wUqUjvP
 A8yO9flRLwLcXWvt7Bm2A804pJHEFxw9CHJCQ9vt4P8lX3To6Gw4OSudOpXs6nW9C22g
 uFjGQtnR96vGBZTdi1jXBh1mMRS8J560KpEviYko893J9JTWRC4AT251zR6DixhhM+/h
 TYzQ==
X-Gm-Message-State: AOAM5315RbS3xvTQPgpGqKayBaKL2eLacZnj1T/xF8bLVMqj0se7ZP1l
 DaxCRKVEVwdF5X9ZeuhewtLtg+Ng
X-Google-Smtp-Source: ABdhPJxzEU1foLBwsnvKSYPTWtjsFABSua9Zs52rNrnUqTDJiwComzN5U74Ub1XyBQ8unvv4gzliJQ==
X-Received: by 2002:a5d:9ecb:: with SMTP id a11mr3815297ioe.115.1589561586090; 
 Fri, 15 May 2020 09:53:06 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id f17sm932136iol.26.2020.05.15.09.53.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 15 May 2020 09:53:05 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 0/3] vm_event: fix race-condition when disabling monitor events
Date: Fri, 15 May 2020 10:52:59 -0600
Message-Id: <cover.1589561218.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For the last couple of years we have received numerous reports from users of
monitor vm_events about spurious guest crashes when using events. In particular,
it has been observed that the problem occurs when vm_events are being disabled.
The nature of the guest crash varied widely and occurred only occasionally,
which made debugging the issue particularly hard. We had discussions about this
issue even here on the xen-devel mailing list, with no luck figuring it out.

The bug has now been identified as a race-condition between register event
handling and disabling the vm_event interface.

Commit 96760e2fba100d694300a81baddb5740e0f8c0ee, "vm_event: deny register writes
if refused by vm_event reply", is the patch that introduced the error. With that
patch, emulation of register write events can be postponed until the
corresponding vm_event handler decides whether to allow such a write to take
place. Unfortunately, this can only be implemented by performing the deny/allow
step when the vCPU gets scheduled. Because of that postponed emulation, if the
user decides to pause the VM in the vm_event handler and then disable events,
the entire emulation step is skipped the next time the vCPU is resumed. Even if
the user doesn't pause during the vm_event handling but exits immediately and
disables vm_event, the situation becomes racy, as disabling vm_event may succeed
before the guest's vCPUs get scheduled with the pending emulation task. This has
been particularly the case with VMs that have several vCPUs, as after the VM is
unpaused it may take a long time before all vCPUs get scheduled.

The only solution currently is to poll each vCPU before vm_events are disabled
to verify that they have been scheduled. The following patches resolve this
issue in a much nicer way.

Patch 1 adds an option to the monitor_op domctl that needs to be specified if
    the user wants to actually use the postponed register-write handling
    mechanism. If that option is not specified, handling is performed the
    same way as before commit 96760e2fba100d694300a81baddb5740e0f8c0ee.
    
Patch 2 performs sanity checking when disabling vm_events to determine whether
    it's safe to free all vm_event structures. The vCPUs still get unpaused to
    allow them to get scheduled and perform any of their pending operations,
    but otherwise an -EAGAIN error is returned, signaling to the user that they
    need to wait and retry disabling the interface.
    
Patch 3 adds a vm_event specifically to signal to the user when it is safe to
    continue disabling the interface.
    
Shout out to our friends at CERT.pl for stumbling upon a crucial piece of
information that led to finally squashing this nasty bug.

Tamas K Lengyel (3):
  xen/monitor: Control register values
  xen/vm_event: add vm_event_check_pending_op
  xen/vm_event: Add safe to disable vm_event

 xen/arch/x86/hvm/hvm.c            | 69 +++++++++++++++++++++++--------
 xen/arch/x86/hvm/monitor.c        | 14 +++++++
 xen/arch/x86/monitor.c            | 23 ++++++++++-
 xen/arch/x86/vm_event.c           | 23 +++++++++++
 xen/common/vm_event.c             | 17 ++++++--
 xen/include/asm-arm/vm_event.h    |  7 ++++
 xen/include/asm-x86/domain.h      |  2 +
 xen/include/asm-x86/hvm/monitor.h |  1 +
 xen/include/asm-x86/vm_event.h    |  2 +
 xen/include/public/domctl.h       |  3 ++
 xen/include/public/vm_event.h     |  8 ++++
 11 files changed, 147 insertions(+), 22 deletions(-)

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 16:53:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:53:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZdaD-0001pM-Ie; Fri, 15 May 2020 16:53:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Gjd2=65=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jZdaB-0001p2-N7
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:53:11 +0000
X-Inumbo-ID: 8f3ce28e-96cc-11ea-ae69-bc764e2007e4
Received: from mail-il1-f182.google.com (unknown [209.85.166.182])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f3ce28e-96cc-11ea-ae69-bc764e2007e4;
 Fri, 15 May 2020 16:53:11 +0000 (UTC)
Received: by mail-il1-f182.google.com with SMTP id b15so3189423ilq.12
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 09:53:11 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=EVHM5utimK/juoVOGf87kC9v393UUF6EscVsahd5AAI=;
 b=jsgolpbrSYsh05kCr67iZz3aPrdbzMu/v4Ye8VS0AVrz76Ch9LxgVGOlBx829V1RlX
 3FwJ+L7/eabS2G4spR4yoKhwQSfir+6PF/V3G+NGOJGlCDUfHuHsV4CQ2DIfT/FxuW4c
 nwxbagYYlFV0iDsQjln8Az2jc+su9/RHjVRZcqtmz8duWp9FyvLHGN+I1Xy3CLWlyAI8
 y+cxmFUs6pkuehkagmhsMxgBEu244vzDX5/c9sptje3PZSIGQHwQshiGEH5g4AuOUU6J
 oCtU1QWFhGtITjbkYNPnv1MHYjQjWZcEXzmEVLwMuNz53ob/EyyHAqYuoaVo1G7uqE/T
 ru7g==
X-Gm-Message-State: AOAM531XfG7vftDlrcAIXRKFA+1sBN0ipKjA5FjxwIU0i5zq1rKolCvv
 ByFZA04REBwwHB5DPoE2UhxvSu5J
X-Google-Smtp-Source: ABdhPJxnYPY7tDAaD9tQ8BK9qzAgu5onEgM2zJjYOSVSILd7u8NqWQxDge/1JaA1cp+uIFoZ4XZ5qg==
X-Received: by 2002:a92:3954:: with SMTP id g81mr4477788ila.105.1589561590277; 
 Fri, 15 May 2020 09:53:10 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id f17sm932136iol.26.2020.05.15.09.53.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 15 May 2020 09:53:09 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 3/3] xen/vm_event: Add safe to disable vm_event
Date: Fri, 15 May 2020 10:53:02 -0600
Message-Id: <1168bacc61f655f559c236cdf63a6b2beccd4d6b.1589561218.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <cover.1589561218.git.tamas@tklengyel.com>
References: <cover.1589561218.git.tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Instead of having to repeatedly try to disable vm_events, request a specific
vm_event to be sent when it is safe to continue shutting down the domain's
vm_event interface.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/hvm/hvm.c            | 38 ++++++++++++++++++++++++++-----
 xen/arch/x86/hvm/monitor.c        | 14 ++++++++++++
 xen/arch/x86/monitor.c            | 13 +++++++++++
 xen/include/asm-x86/domain.h      |  1 +
 xen/include/asm-x86/hvm/monitor.h |  1 +
 xen/include/public/domctl.h       |  2 ++
 xen/include/public/vm_event.h     |  8 +++++++
 7 files changed, 71 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 063f8ddc18..50c67e7b8e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -563,15 +563,41 @@ void hvm_do_resume(struct vcpu *v)
         v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
     }
 
-    if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
+    if ( unlikely(v->arch.vm_event) )
     {
-        struct x86_event info;
+        struct domain *d = v->domain;
+
+        if ( v->arch.monitor.next_interrupt_enabled )
+        {
+            struct x86_event info;
+
+            if ( hvm_get_pending_event(v, &info) )
+            {
+                hvm_monitor_interrupt(info.vector, info.type, info.error_code,
+                                      info.cr2);
+                v->arch.monitor.next_interrupt_enabled = false;
+            }
+        }
 
-        if ( hvm_get_pending_event(v, &info) )
+        if ( d->arch.monitor.safe_to_disable )
         {
-            hvm_monitor_interrupt(info.vector, info.type, info.error_code,
-                                  info.cr2);
-            v->arch.monitor.next_interrupt_enabled = false;
+            struct vcpu *check_vcpu;
+            bool pending_op = false;
+
+            for_each_vcpu ( d, check_vcpu )
+            {
+                if ( vm_event_check_pending_op(check_vcpu) )
+                {
+                    pending_op = true;
+                    break;
+                }
+            }
+
+            if ( !pending_op )
+            {
+                hvm_monitor_safe_to_disable();
+                d->arch.monitor.safe_to_disable = false;
+            }
         }
     }
 }
diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index f5d89e71d1..8e67dd1a0b 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -300,6 +300,20 @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
     return monitor_traps(curr, true, &req) >= 0;
 }
 
+bool hvm_monitor_safe_to_disable(void)
+{
+    struct vcpu *curr = current;
+    struct arch_domain *ad = &curr->domain->arch;
+    vm_event_request_t req = {};
+
+    if ( !ad->monitor.safe_to_disable )
+        return false;
+
+    req.reason = VM_EVENT_REASON_SAFE_TO_DISABLE;
+
+    return monitor_traps(curr, false, &req) >= 0;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
index 1517a97f50..86e0ba2fbc 100644
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -339,6 +339,19 @@ int arch_monitor_domctl_event(struct domain *d,
         break;
     }
 
+    case XEN_DOMCTL_MONITOR_EVENT_SAFE_TO_DISABLE:
+    {
+        bool old_status = ad->monitor.safe_to_disable;
+
+        if ( unlikely(old_status == requested_status) )
+            return -EEXIST;
+
+        domain_pause(d);
+        ad->monitor.safe_to_disable = requested_status;
+        domain_unpause(d);
+        break;
+    }
+
     default:
         /*
          * Should not be reached unless arch_monitor_get_capabilities() is
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index d890ab7a22..948b750c71 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -417,6 +417,7 @@ struct arch_domain
          */
         unsigned int inguest_pagefault_disabled                            : 1;
         unsigned int control_register_values                               : 1;
+        unsigned int safe_to_disable                                       : 1;
         struct monitor_msr_bitmap *msr_bitmap;
         uint64_t write_ctrlreg_mask[4];
     } monitor;
diff --git a/xen/include/asm-x86/hvm/monitor.h b/xen/include/asm-x86/hvm/monitor.h
index 66de24cb75..194e2f857e 100644
--- a/xen/include/asm-x86/hvm/monitor.h
+++ b/xen/include/asm-x86/hvm/monitor.h
@@ -52,6 +52,7 @@ bool hvm_monitor_emul_unimplemented(void);
 
 bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
                            uint16_t kind);
+bool hvm_monitor_safe_to_disable(void);
 
 #endif /* __ASM_X86_HVM_MONITOR_H__ */
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index cbcd25f12c..247e809a6c 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1040,6 +1040,8 @@ struct xen_domctl_psr_cmt_op {
 #define XEN_DOMCTL_MONITOR_EVENT_EMUL_UNIMPLEMENTED    10
 /* Enabled by default */
 #define XEN_DOMCTL_MONITOR_EVENT_INGUEST_PAGEFAULT     11
+/* Always async, disables automatically on first event */
+#define XEN_DOMCTL_MONITOR_EVENT_SAFE_TO_DISABLE       12
 
 struct xen_domctl_monitor_op {
     uint32_t op; /* XEN_DOMCTL_MONITOR_OP_* */
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index fdd3ad8a30..b66d2a8634 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -159,6 +159,14 @@
 #define VM_EVENT_REASON_DESCRIPTOR_ACCESS       13
 /* Current instruction is not implemented by the emulator */
 #define VM_EVENT_REASON_EMUL_UNIMPLEMENTED      14
+/*
+ * When shutting down vm_event it may not be immediately safe to complete the
+ * process as some vCPUs may be pending synchronization. This async event
+ * type can be used to receive a notification when it's safe to finish disabling
+ * the vm_event interface. All other event types need to be disabled before
+ * registering for this one.
+ */
+#define VM_EVENT_REASON_SAFE_TO_DISABLE         15
 
 /* Supported values for the vm_event_write_ctrlreg index. */
 #define VM_EVENT_X86_CR0    0
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Fri May 15 16:59:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 16:59:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZdfe-0002M9-NG; Fri, 15 May 2020 16:58:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZdfd-0002M4-To
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 16:58:49 +0000
X-Inumbo-ID: 55778c24-96cd-11ea-a5a3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 55778c24-96cd-11ea-a5a3-12813bfff9fa;
 Fri, 15 May 2020 16:58:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vFlslN5mq2//G4nQuKGTwosmQgwZ03SB0oJ1bAeZyl8=; b=XSwsJ4S8jMzT+eZnaZ1CXSf+O
 0w+D1dMbWSNP+0acAGJ6f3rbsH7CjVdxvczRi21MfSeIZyw4FWQjbqIOJDqDMs+ygP//nfEvq1D/W
 hyOm2Dr13qRhOFOV6pN3gFEUfp+ilyUkT7wtl+5skxLFP8Z2p8cBFE0LTUbd3+b1cg9dw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZdfX-0004tC-5u; Fri, 15 May 2020 16:58:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZdfW-0004Rw-SZ; Fri, 15 May 2020 16:58:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZdfW-0007yx-Rx; Fri, 15 May 2020 16:58:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150198-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150198: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=58202ebc5a7fbf8f03875c2b5218d3deed6debe8
X-Osstest-Versions-That: xen=5115b437eef595ce77f05bfc02626e31e263e965
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 16:58:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150198 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150198/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  58202ebc5a7fbf8f03875c2b5218d3deed6debe8
baseline version:
 xen                  5115b437eef595ce77f05bfc02626e31e263e965

Last test of basis   150182  2020-05-14 15:01:11 Z    1 days
Testing same since   150198  2020-05-15 14:01:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5115b437ee..58202ebc5a  58202ebc5a7fbf8f03875c2b5218d3deed6debe8 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 15 17:16:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 17:16:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZdw1-00045f-1q; Fri, 15 May 2020 17:15:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OuqU=65=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jZdvz-00045V-Fx
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 17:15:43 +0000
X-Inumbo-ID: b4c77fe8-96cf-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x335.google.com (unknown [2a00:1450:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4c77fe8-96cf-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 17:15:42 +0000 (UTC)
Received: by mail-wm1-x335.google.com with SMTP id f134so3070471wmf.1
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 10:15:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=n1ffL/SUcQ313IGVkpwazg6BWeAGgp8cidcIXU2wv9k=;
 b=W8PThp2Tb7VaHKHbgXiVAk9omIy9fNxDfZrSc/b4mH4QgHvAP2SaZ/XPdqmw0SGM27
 GUOnFZiad77Cy9VEwpES3Dw+ZV/SZ2txApQGdSV0MTqcNJfhE5Ks2Ngge7Gyxsru7O7v
 LcSA0fywIxhqecxU42G9CGEYF3+zDVHwLTgf/b/ONe6CF+EAZtP4mdRhnbtjzGUsnhaV
 iw5RpaoAMd6AVIUj6UlKiZw/KMgRZ9JkiWHengea2yzglrZcn525A6y5Fd1LBvMVEadT
 tc3rkZcHGzMRwhQhmboSrT4r5Y2049K18/1z7m+LxmP7Mdu4j+KZXiN3Fgv3rYORO9GG
 3Njw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=n1ffL/SUcQ313IGVkpwazg6BWeAGgp8cidcIXU2wv9k=;
 b=I3OtpFXJIM6z8xdpZxqLHXFN8sbbnxBMlnYGbfHYdaU6zG4NqpnnGaxvhEGidzOz5I
 3IqZLB6Aug0ZmUBRdwlwvpBlvH2biu0HauT7Jcs+sEfESOucXF/qDoSKVNQ+MfnfCL73
 83N85Uti4t1JeEdtogwsiYIL4bVf8VE3WsDHWKyEHQ12RQl6XFW/E5SFCFRXLgl3+xxy
 tQLIIEUV/3aqkUbAPvWOIAoB/7AMfNR4fUbKREoGnyiWOpZVYs+aKk1qBbB7mZjDULuH
 zeaPZ6CsxD1TnLakABiQwVj1jhIR11jDVdpqwhhRt4Egtz7jMO1cTSaNzTNGFPgC2h8q
 /Z/w==
X-Gm-Message-State: AOAM530O+AoN+vazQsThWojPPlp07gZgG+dzyuCRpN9ONBUBObMGIdcN
 uYq6HIeTzFMevq60Jcr2JA4=
X-Google-Smtp-Source: ABdhPJxsq67gpQwJRa6Y1YKe8flogSsIVPE9ecTjaFM4pbFndMdwIXIrnOmUPngL8hQtvLysxYJhPQ==
X-Received: by 2002:a7b:c959:: with SMTP id i25mr4969069wml.84.1589562941065; 
 Fri, 15 May 2020 10:15:41 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id r11sm4386655wrv.14.2020.05.15.10.15.39
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 15 May 2020 10:15:40 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>
References: <000401d62aaa$e0eccab0$a2c66010$@xen.org>
 <20200515164355.oujf2mvbep6yhlua@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
In-Reply-To: <20200515164355.oujf2mvbep6yhlua@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
Subject: RE: Items for CHANGELOG.md in 4.14
Date: Fri, 15 May 2020 18:15:39 +0100
Message-ID: <001601d62adc$75558940$60009bc0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="US-ASCII"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKPZmgsyOvWLQ9lSdYfSjCDezh0+QEfP5V4py3x0oA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org, 'Roger Pau Monne' <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 15 May 2020 17:44
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Roger Pau Monne' <roger.pau@citrix.com>; Wei Liu <wl@xen.org>
> Subject: Re: Items for CHANGELOG.md in 4.14
> 
> On Fri, May 15, 2020 at 12:20:44PM +0100, Paul Durrant wrote:
> > All,
> >
> >   In the last community call I took an ACTION to send a reminder email to submit patches to get
> CHANGELOG.md up to date. Several
> > items were mentioned during the call (see minutes at
> https://cryptpad.fr/pad/#/2/pad/edit/qPQUEQEv3nJJ97clS8b2KdtP/):
> >
> > - Ability to conditionally compile-out 32-bit PV guests (security attack surface reduction)
> > - Basic support for AMD Milan CPUS
> > - Golang binding advances
> > - x2apic, improvements on tlbflush hypercall
> > - General improvements in pvshim
> > - Xen on Hyper-V
> 
> As part of this work, there is now a "framework" to make it easy to port
> Xen to run on top of hypervisors. Does this need calling out?

IMO it is worth calling this out; it encourages further development not strictly related to Hyper-V.

  Paul

> 
> Wei.



From xen-devel-bounces@lists.xenproject.org Fri May 15 17:26:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 17:26:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZe6T-00054N-4u; Fri, 15 May 2020 17:26:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MKSV=65=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jZe6R-00054I-A6
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 17:26:31 +0000
X-Inumbo-ID: 36da5702-96d1-11ea-b9cf-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36da5702-96d1-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 17:26:30 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04FHHHTE153046;
 Fri, 15 May 2020 17:26:27 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=qzyYV2vqwkE0ZFmp8Rb/VITsaERuvm6iR01gtqhZ0zY=;
 b=wuxVNr6oD7U1b9BTbSQFkNxICJhpiebFYJtt0KAfVYo+RxlD9PVDv/maiyVWjBHuvH2V
 7OWbAePL0v+f/6Vo9pG+l6uvKJJIl1OZYMAvPMXrShn4gUIQeWgYAgml/6zGP+GhMuyI
 EoG8jSstt/+g9OqtaASMLyall+ntpMFl+5CwehEO9dfYfECLycGQWcYKVQq/mPdYv56S
 ECjt/ZF6b2IK1samFc7R+fjlaHF1y90fUBn+9ddhUED6ypUqX+BdC+ihd/K8UJqhcxBD
 R8f95N2OU7Zn2PrHEImpZmmqestX9wnPqQFdYscIDPvhDvjFTPDOJnpSWNtBiGSNA94/ gQ== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 311nu5mu4t-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 15 May 2020 17:26:27 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04FHMmJF051074;
 Fri, 15 May 2020 17:26:26 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 310vjwx0sb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 15 May 2020 17:26:26 +0000
Received: from abhmp0010.oracle.com (abhmp0010.oracle.com [141.146.116.16])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04FHQO6L010340;
 Fri, 15 May 2020 17:26:25 GMT
Received: from [10.39.196.206] (/10.39.196.206)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 15 May 2020 10:26:23 -0700
Subject: Re: [PATCH 1/1] xen/manage: enable C_A_D to force reboot
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Dongli Zhang <dongli.zhang@oracle.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20200513233410.18120-1-dongli.zhang@oracle.com>
 <e604a96d-087e-573b-c3bf-fc53005f8994@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <8b237bf7-3862-b3c1-e2ec-07a174c1e315@oracle.com>
Date: Fri, 15 May 2020 13:26:16 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <e604a96d-087e-573b-c3bf-fc53005f8994@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9622
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 bulkscore=0 phishscore=0
 spamscore=0 mlxlogscore=999 malwarescore=0 suspectscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005150148
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9622
 signatures=668687
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 phishscore=0 mlxscore=0
 adultscore=0 priorityscore=1501 mlxlogscore=999 impostorscore=0
 suspectscore=0 spamscore=0 lowpriorityscore=0 cotscore=-2147483648
 bulkscore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005150147
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: rose.wang@oracle.com, sstabellini@kernel.org, joe.jin@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/15/20 3:01 AM, Jürgen Groß wrote:
> On 14.05.20 01:34, Dongli Zhang wrote:
>> systemd may be configured to mask ctrl-alt-del via "systemctl mask
>> ctrl-alt-del.target". As a result, the pv reboot would not work, as the
>> signal is ignored.
>>
>> This patch always enables C_A_D before the call to ctrl_alt_del() in
>> order to force the reboot.
>
> Hmm, I'm not sure this is a good idea.
>
> Suppose a guest admin is doing a critical update and wants to avoid a
> sudden reboot in between. By masking the reboot this would be possible,
> with your patch it isn't.
>
> In case a reboot is really mandatory it would still be possible to just
> kill the guest.
>
> I'm not completely opposed to the patch, but I think this is a change
> which should not be done easily.


I think 'xl reboot -F' should be handling this scenario, but (1) it is
currently not quite set up for this and (2) I can't see how it works at
all given that no one handles LIBXL_TRIGGER_RESET in
arch_do_domctl(XEN_DOMCTL_sendtrigger).


-boris


>
>
> Juergen
>
>>
>> Reported-by: Rose Wang <rose.wang@oracle.com>
>> Cc: Joe Jin <joe.jin@oracle.com>
>> Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>> Signed-off-by: Dongli Zhang <dongli.zhang@oracle.com>
>> ---
>>   drivers/xen/manage.c | 7 +++++++
>>   1 file changed, 7 insertions(+)
>>
>> diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
>> index cd046684e0d1..3190d0ecb52e 100644
>> --- a/drivers/xen/manage.c
>> +++ b/drivers/xen/manage.c
>> @@ -204,6 +204,13 @@ static void do_poweroff(void)
>>   static void do_reboot(void)
>>   {
>>       shutting_down = SHUTDOWN_POWEROFF; /* ? */
>> +    /*
>> +     * systemd may be configured to mask ctrl-alt-del via
>> +     * "systemctl mask ctrl-alt-del.target". As a result, the pv reboot
>> +     * would not work. Enabling C_A_D forces the reboot.
>> +     */
>> +    C_A_D = 1;
>> +
>>       ctrl_alt_del();
>>   }
>>  
>



From xen-devel-bounces@lists.xenproject.org Fri May 15 18:06:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 18:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZeiq-0000K7-6H; Fri, 15 May 2020 18:06:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7E4v=65=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZeio-0000K2-Kb
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 18:06:10 +0000
X-Inumbo-ID: c167024e-96d6-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c167024e-96d6-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 18:06:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UUrgM6G2SGsjkSvajE0euP5vq0t5CmvDR7n90AeN8SI=; b=GEUmW8BJHwHmIb2W7T3B2pZXDa
 nC41VKatLM6EF8+re7UlLHDf0ypf+Es7VXSPmc6xefbsfIU3sq8aH6tpr/Hx9A0RzkxlknrcP0gVU
 Y0MU1Ir5lRFmrjbe7noFDFDNmOddPkVM4s0oLoAchN7XE5pBsU3gLj9CYG8ViLsoWr10=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZeik-0006MO-AZ; Fri, 15 May 2020 18:06:06 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZeik-00015Q-2x; Fri, 15 May 2020 18:06:06 +0000
Subject: Re: [PATCH] domain_page: handle NULL within unmap_domain_page() itself
To: Hongyan Xia <hx242@xen.org>, xen-devel@lists.xenproject.org
References: <a3ddf0c755227a3c742f6b93783c576135a86874.1589384602.git.hongyxia@amazon.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0d630ac6-00fc-df47-2de5-bbeba1d1e432@xen.org>
Date: Fri, 15 May 2020 19:06:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a3ddf0c755227a3c742f6b93783c576135a86874.1589384602.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 13/05/2020 16:43, Hongyan Xia wrote:
> From: Hongyan Xia <hongyxia@amazon.com>
> 
> The macro version UNMAP_DOMAIN_PAGE() does both NULL checking and
> variable clearing. Move NULL checking into the function itself so that
> the semantics are consistent with other similar constructs like XFREE().
> This also eases the use of unmap_domain_page() in error handling paths,
> where we only care about NULL checking but not about variable clearing.
> 
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   xen/arch/arm/mm.c             | 3 +++
>   xen/arch/x86/domain_page.c    | 2 +-
>   xen/include/xen/domain_page.h | 7 ++-----
>   3 files changed, 6 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 727107eefa..1b14f49345 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -498,6 +498,9 @@ void unmap_domain_page(const void *va)
>       lpae_t *map = this_cpu(xen_dommap);
>       int slot = ((unsigned long) va - DOMHEAP_VIRT_START) >> SECOND_SHIFT;
>   
> +    if ( !va )
> +        return;
> +
>       local_irq_save(flags);
>   
>       ASSERT(slot >= 0 && slot < DOMHEAP_ENTRIES);
> diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
> index dd32712d2f..b03728e18e 100644
> --- a/xen/arch/x86/domain_page.c
> +++ b/xen/arch/x86/domain_page.c
> @@ -181,7 +181,7 @@ void unmap_domain_page(const void *ptr)
>       unsigned long va = (unsigned long)ptr, mfn, flags;
>       struct vcpu_maphash_entry *hashent;
>   
> -    if ( va >= DIRECTMAP_VIRT_START )
> +    if ( !va || va >= DIRECTMAP_VIRT_START )
>           return;
>   
>       ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
> diff --git a/xen/include/xen/domain_page.h b/xen/include/xen/domain_page.h
> index ab2be7b719..a182d33b67 100644
> --- a/xen/include/xen/domain_page.h
> +++ b/xen/include/xen/domain_page.h
> @@ -73,11 +73,8 @@ static inline void unmap_domain_page_global(const void *va) {};
>   #endif /* !CONFIG_DOMAIN_PAGE */
>   
>   #define UNMAP_DOMAIN_PAGE(p) do {   \
> -    if ( p )                        \
> -    {                               \
> -        unmap_domain_page(p);       \
> -        (p) = NULL;                 \
> -    }                               \
> +    unmap_domain_page(p);           \
> +    (p) = NULL;                     \
>   } while ( false )
>   
>   #endif /* __XEN_DOMAIN_PAGE_H__ */
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 15 18:25:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 18:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZf1S-00028v-TH; Fri, 15 May 2020 18:25:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sXO4=65=dornerworks.com=stewart.hildebrand@srs-us1.protection.inumbo.net>)
 id 1jZf1R-00028q-Kj
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 18:25:25 +0000
X-Inumbo-ID: 7019f6e6-96d9-11ea-a5b2-12813bfff9fa
Received: from USG02-CY1-obe.outbound.protection.office365.us (unknown
 [23.103.209.84]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7019f6e6-96d9-11ea-a5b2-12813bfff9fa;
 Fri, 15 May 2020 18:25:22 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector5401; d=microsoft.com; cv=none;
 b=Uf1EUMhCapRRYsBH3QY98wpUSN7jJYCCkQ92Ad5mGQpkzdtwraVJih5PeZ8erfPNymdD0f3v0Yxn6OHpnY70irNBFenJwZswStPDEr53v8SXFKbmC+qJ8utuLE84MlCpGjPP6H/Dgu35GwIPpgv4g65y5o+9BVrk2PbLVHrn5JMVTRLkwTscJlCIRBuQs2grrURQGE7to6LG85Wv6693Yy1VuYAXdsgOvZhSdAZo370Ha2o7OCn7iBKUu9vWWX7Ock8lPaUv6OCwG9GpeDB1d9cwwnwWEX8kA0fl2Y4bu/Y7wWFUQL2Mg1uvQLsevd+v37akZKB+MfLRIASidoGViQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector5401;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2bBfou/chdJqZdN6bXhQ23PEqm32owHKlLolqa3BTx0=;
 b=LxKyxbWFOargpVgbQ4+AK00k70lpG1OMFPxz9px7SEDKRJYuUpS0qbn3eCgyc6+wfl2YSL/HUm3dOTCvKyCqDyRXsG5asEZrfXLjdrzaQ9RlsfM6P66yJaB4TjCgHG0TSLIeHCnZbSWAP+qVtAn2IoENLChqtWQsty2/K4UH45AfFGTmQhcQk3w1Qs0uov4dscPQgFo9mOxpeBHoSyZbrBWRITL1S8apkMB6OiKygQLEcqhPbCU+bhJyBB1ZEXYYFPXeJ0J2Uw9YdSWg34+Uvo6yDRo9t58PevMFh0+RyaLNriTgWVM5RgVvIGI3MJit4n1mdvk7aBNdqMHLyhJJ7w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=fail (sender ip is
 207.242.234.14) smtp.rcpttodomain=lists.xenproject.org
 smtp.mailfrom=dornerworks.com; dmarc=none action=none
 header.from=dornerworks.com; dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=dornerworkssystem.onmicrosoft.us;
 s=selector1-dornerworkssystem-onmicrosoft-us;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2bBfou/chdJqZdN6bXhQ23PEqm32owHKlLolqa3BTx0=;
 b=3Z6txRrfHqCPM3T/pV4YKVkr78QjLUShpN5iM5B/TR/MHQw6+bfyHPfDVeIimhcjwTGTcy781wFNPiMiKIn8r+uoSHUTytPCXChZ9kzLJoJBB9mZKZj/cQB1GQpJxZ5SAkhfVBdPCSetm1EQ+/v7Be6bES6gevwBkB3NAQfn8oM=
Received: from SN1P110CA0028.NAMP110.PROD.OUTLOOK.COM (2001:489a:200:61::18)
 by CY1P110MB0358.NAMP110.PROD.OUTLOOK.COM (2001:489a:200:401::16) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.35; Fri, 15 May
 2020 18:25:19 +0000
Received: from CY1USG02FT007.eop-usg02.itar.protection.office365.us
 (2001:489a:2202:d::631) by SN1P110CA0028.office365.us (2001:489a:200:61::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.35 via Frontend
 Transport; Fri, 15 May 2020 18:25:18 +0000
Authentication-Results: spf=fail (sender IP is 207.242.234.14)
 smtp.mailfrom=dornerworks.com; lists.xenproject.org; dkim=none (message not
 signed) header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=dornerworks.com;
Received-SPF: Fail (protection.outlook.com: domain of dornerworks.com does not
 designate 207.242.234.14 as permitted sender)
 receiver=protection.outlook.com; client-ip=207.242.234.14;
 helo=ubuntu.localdomain;
Received: from ubuntu.localdomain (207.242.234.14) by
 CY1USG02FT007.mail.protection.office365.us (10.97.26.110) with Microsoft SMTP
 Server id 15.20.2979.30 via Frontend Transport; Fri, 15 May 2020 18:25:12
 +0000
From: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
To: xen-devel@lists.xenproject.org
Subject: [XEN PATCH v2] xen/build: use the correct kconfig makefile
Date: Fri, 15 May 2020 14:25:09 -0400
Message-Id: <20200515182509.5476-1-stewart.hildebrand@dornerworks.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-spam-status: No, score=-2.9 required=3.5 tests=ALL_TRUSTED, BAYES_00,
 MAILSHELL_SCORE_0_4
X-Spam-Flag: NO
X-EOPAttributedMessage: 0
X-Forefront-Antispam-Report: CIP:207.242.234.14; CTRY:US; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:ubuntu.localdomain; PTR:InfoDomainNonexistent; CAT:NONE;
 SFTY:;
 SFS:(346002)(46966005)(508600001)(6916009)(2906002)(6666004)(186003)(956004)(70206006)(54906003)(2616005)(44832011)(336012)(70586007)(26005)(4326008)(36756003)(33310700002)(8676002)(82310400002)(5660300002)(81166007)(86362001)(8936002)(47076004)(1076003);
 DIR:OUT; SFP:1101; 
Content-Type: text/plain
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 71c76360-2106-447a-897c-08d7f8fd5226
X-MS-TrafficTypeDiagnostic: CY1P110MB0358:|CY1P110MB0358:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <CY1P110MB0358F36E9C31893D670955658CBD0@CY1P110MB0358.NAMP110.PROD.OUTLOOK.COM>
X-MS-Oob-TLC-OOBClassifiers: OLM:466;
X-Forefront-PRVS: 04041A2886
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: dornerworks.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 15 May 2020 18:25:12.4962 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 71c76360-2106-447a-897c-08d7f8fd5226
X-MS-Exchange-CrossTenant-Id: 097cf9aa-db69-4b12-aeab-ab5f513dbff9
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=097cf9aa-db69-4b12-aeab-ab5f513dbff9; Ip=[207.242.234.14];
 Helo=[ubuntu.localdomain]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY1P110MB0358
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>,
 Anthony PERARD <anthony.perard@citrix.com>, Doug Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This resolves the following observed error during config merge:

/bin/sh /path/to/xen/xen/../xen/tools/kconfig/merge_config.sh -m .config /path/to/xen/xen/../xen/arch/arm/configs/custom.config
Using .config as base
Merging /path/to/xen/xen/../xen/arch/arm/configs/custom.config
#
# merged configuration written to .config (needs make)
#
make -f /path/to/xen/xen/../xen/Makefile olddefconfig
make[2]: Entering directory '/path/to/xen/xen'
make[2]: *** No rule to make target 'olddefconfig'.  Stop.
make[2]: Leaving directory '/path/to/xen/xen'
tools/kconfig/Makefile:95: recipe for target 'custom.config' failed

The build was invoked by first doing a defconfig (which succeeded):

$ make -C xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig

Followed by the config fragment merge command (which failed before this patch):

$ cat > xen/arch/arm/configs/custom.config <<EOF
CONFIG_DEBUG=y
CONFIG_EARLY_PRINTK_ZYNQMP=y
EOF
$ make -C xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- custom.config

Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

---
v2: updated commit message
---
 xen/tools/kconfig/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/tools/kconfig/Makefile b/xen/tools/kconfig/Makefile
index fd37f4386a..f39521a0ed 100644
--- a/xen/tools/kconfig/Makefile
+++ b/xen/tools/kconfig/Makefile
@@ -94,7 +94,7 @@ configfiles=$(wildcard $(srctree)/kernel/configs/$@ $(srctree)/arch/$(SRCARCH)/c
 %.config: $(obj)/conf
 	$(if $(call configfiles),, $(error No configuration exists for this target on this architecture))
 	$(Q)$(CONFIG_SHELL) $(srctree)/tools/kconfig/merge_config.sh -m .config $(configfiles)
-	$(Q)$(MAKE) -f $(srctree)/Makefile olddefconfig
+	$(Q)$(MAKE) -f $(srctree)/tools/kconfig/Makefile.kconfig olddefconfig
 
 PHONY += kvmconfig
 kvmconfig: kvm_guest.config
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 15 18:30:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 18:30:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZf6e-00032I-Lk; Fri, 15 May 2020 18:30:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sXO4=65=dornerworks.com=stewart.hildebrand@srs-us1.protection.inumbo.net>)
 id 1jZf6d-00032D-5n
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 18:30:47 +0000
X-Inumbo-ID: 30f8e41c-96da-11ea-b9cf-bc764e2007e4
Received: from USG02-CY1-obe.outbound.protection.office365.us (unknown
 [2001:489a:2202:d::60f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30f8e41c-96da-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 18:30:46 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector5401; d=microsoft.com; cv=none;
 b=t4k6B29PLTME7UUNC38EtMDhFZwfS2stwxmvq1oPoww9XEf4NcqtNp+o+lD0eU5qayDkjxjJetTL6KrbVi0NM6V2iYFZe2F/ZzMkKILZyibJ2fhSXQO6QODQBfgxzGQijoW1Cs+6FCXUnWt4qdF1OrrxsLyTBCYcxH+39cskOdvF+gdavQN0f2iENZs9RbjbJ/QO9oN5IFQ4+Xm94/EShjht7Zcf7TW6z3+RpFfExmX0ph+fTi6Ko7pdolIb0PCV8SQ1co5jwez2pDpIejD+rByleWqsgoCIA+AnlvX+cNZlCtH2ws+r8csw+tXlwRjsoB+cql79oBIbkpxs2KRvHQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector5401;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7hYA9aoOhl84nJcESWARiJ0BxzRpOnYbkQaV/ykIkuc=;
 b=x/BbIN9GcHbdt+SKiPUgg+27UXDE/pSo5fiEvx0KrziSXBGgHE5adduKtlne1JSxCaromLDpP6Efwz58UGICi4BkiteR+hwnjdKnf7MgTyedcmf112CkWiWz6p3WJH4Bub9rKAqhqBubD8NhZo5zS+Z6ik7IKaEuzJ3+pluQi87Y2x96uySrQG55R6lq17AkwVSUkt75Y7UtFFeLAv5WI2pAyyklb7XlMPKUHYhW4Uh89LUXC9hMR8m+m07V85mMMCWPfrLFqmsr/sm0ar+KLaJBj82sFp6RTKVlNaY3sed9MW4plLHgru4MWL4y7SocMHbUrH3VJha/I7qCle4z2w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=dornerworks.com; dmarc=pass action=none
 header.from=dornerworks.com; dkim=pass header.d=dornerworks.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=dornerworkssystem.onmicrosoft.us;
 s=selector1-dornerworkssystem-onmicrosoft-us;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7hYA9aoOhl84nJcESWARiJ0BxzRpOnYbkQaV/ykIkuc=;
 b=rHILfo+wEBKRhbLwYs5wpDR4v+mJsQgHbDl6X0dzjXgqPh94VWdamH1HWTR4Cbs7JAVKbtiLsyCWxbuYyAetBrLf5KqDHaLul9RASgNw+geWQHkJkGM3OwZ0alReD9wGuVL7QerW80XTbPozx1084792hPJmeuVTf97N/s8nYsE=
Received: from CY1P110MB0551.NAMP110.PROD.OUTLOOK.COM (2001:489a:200:404::14)
 by CY1P110MB0549.NAMP110.PROD.OUTLOOK.COM (2001:489a:200:404::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.2958.35; Fri, 15 May
 2020 18:30:42 +0000
Received: from CY1P110MB0551.NAMP110.PROD.OUTLOOK.COM
 ([fe80::8888:67a8:9eca:3edf]) by CY1P110MB0551.NAMP110.PROD.OUTLOOK.COM
 ([fe80::8888:67a8:9eca:3edf%6]) with mapi id 15.20.2958.035; Fri, 15 May 2020
 18:30:42 +0000
From: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>
To: Stewart Hildebrand <Stewart.Hildebrand@dornerworks.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [XEN PATCH v2] xen/build: use the correct kconfig makefile
Thread-Topic: [XEN PATCH v2] xen/build: use the correct kconfig makefile
Thread-Index: AQHWKuYywWMBJNbmN0GLTPve7p+kuaipd9QA
Date: Fri, 15 May 2020 18:30:42 +0000
Message-ID: <CY1P110MB0551ACF3B18E4969A8D15C698CBD0@CY1P110MB0551.NAMP110.PROD.OUTLOOK.COM>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
 <20200515182509.5476-1-stewart.hildebrand@dornerworks.com>
In-Reply-To: <20200515182509.5476-1-stewart.hildebrand@dornerworks.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: dornerworks.com; dkim=none (message not signed)
 header.d=none;dornerworks.com; dmarc=none action=none
 header.from=dornerworks.com;
x-originating-ip: [2607:fb90:a22b:677:9065:c12d:e772:d615]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ce20010b-97bf-4d7b-998d-08d7f8fe1345
x-ms-traffictypediagnostic: CY1P110MB0549:|CY1P110MB0549:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <CY1P110MB0549566A2AACB0F7A95EF3278CBD0@CY1P110MB0549.NAMP110.PROD.OUTLOOK.COM>
x-ms-oob-tlc-oobclassifiers: OLM:612;
x-forefront-prvs: 04041A2886
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:CY1P110MB0551.NAMP110.PROD.OUTLOOK.COM; PTR:; CAT:NONE;
 SFTY:;
 SFS:(366004)(966005)(186003)(52536014)(6506007)(54906003)(55016002)(7696005)(71200400001)(9686003)(508600001)(558084003)(76116006)(8936002)(110136005)(66476007)(5660300002)(64756008)(8676002)(2906002)(86362001)(66946007)(66556008)(66446008)(33656002)(4326008);
 DIR:OUT; SFP:1101; 
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: dornerworks.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ce20010b-97bf-4d7b-998d-08d7f8fe1345
X-MS-Exchange-CrossTenant-originalarrivaltime: 15 May 2020 18:30:42.2765 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 097cf9aa-db69-4b12-aeab-ab5f513dbff9
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY1P110MB0549
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Friday, May 15, 2020 2:25 PM, Stewart Hildebrand wrote:
>Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

I forgot to include Anthony's Reviewed-by from v1: https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg00848.html


From xen-devel-bounces@lists.xenproject.org Fri May 15 19:20:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 19:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZfsP-0007dH-HY; Fri, 15 May 2020 19:20:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZfsO-0007dC-2l
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 19:20:08 +0000
X-Inumbo-ID: 161eb75a-96e1-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 161eb75a-96e1-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 19:20:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KWawvrIFczTBeyvSnqG50ey12AM0/wFPfdCCSa+q2yI=; b=aBZ/IkNZJKyn38ZmhqPozaKMJ
 ZNMBRvnT4h3DyJ6RpESjXOWxT/gKffHUjhsO26k1n+YIoS7EKQ7lXAiH2s19tjEGOUFBXM4BRZNoP
 ekdfAXaQ6Qz524oZUfc8Dcfj8IuJ3kf/LjmB0B4t95RUrA9868nD2SMxCo4+hE0P+Zrr4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZfsM-0007rv-Nv; Fri, 15 May 2020 19:20:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZfsM-00054R-DK; Fri, 15 May 2020 19:20:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZfsM-0006s4-Cd; Fri, 15 May 2020 19:20:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150205-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150205: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=90b1701905dd0cbb7efe525b1bf92007fe818b60
X-Osstest-Versions-That: xen=58202ebc5a7fbf8f03875c2b5218d3deed6debe8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 19:20:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150205 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150205/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  90b1701905dd0cbb7efe525b1bf92007fe818b60
baseline version:
 xen                  58202ebc5a7fbf8f03875c2b5218d3deed6debe8

Last test of basis   150198  2020-05-15 14:01:35 Z    0 days
Testing same since   150205  2020-05-15 17:01:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   58202ebc5a..90b1701905  90b1701905dd0cbb7efe525b1bf92007fe818b60 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 15 20:03:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 20:03:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZgXs-0002wd-53; Fri, 15 May 2020 20:03:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZgXr-0002wY-5a
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 20:02:59 +0000
X-Inumbo-ID: 12698558-96e7-11ea-a5bc-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12698558-96e7-11ea-a5bc-12813bfff9fa;
 Fri, 15 May 2020 20:02:58 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: yGC/Kqg1r1VUA3XNT+zpL7F45PdFGy6WSsP13wEDiKhAO5cDDN/GhYcyrx5YpSQjQVct9+lba3
 vgMWQrirJWtbh9rZNt00z5CR93QNUW5OE9vRw1vThk1/hnsG3iwxEAS+0WojPbXZVdwCdXShYg
 hDYRHTacWginBiIieQwPkZS/s8Yq9dhE3UwMG87N+iSUo7zC0yxDBfFQ4rJtvrcFmYmC0ffIhI
 1BKImwvOIW882/Bqd585tjSZcjbmfbVaRGk+sV2EtlNLOk4g3KH+gSBzOlQvcoV3++KL6kff+b
 7Aw=
X-SBRS: 2.7
X-MesageID: 17934500
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,396,1583211600"; d="scan'208";a="17934500"
Subject: Re: [PATCH v2 2/6] x86/mem-paging: correct p2m_mem_paging_prep()'s
 error handling
To: Jan Beulich <jbeulich@suse.com>
References: <b8437b1f-af58-70df-91d2-bd875912e57b@suse.com>
 <bf9dd27b-a7db-de0e-a804-d687e66ecf1e@suse.com>
 <2cccf9bb-3930-436d-65de-f0eb7dd0c498@citrix.com>
 <83f6463c-6a61-e79b-cf1b-77589ef287c1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1e390685-dbca-afee-0cda-692b1134183e@citrix.com>
Date: Fri, 15 May 2020 21:02:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <83f6463c-6a61-e79b-cf1b-77589ef287c1@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, George
 Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/05/2020 16:15, Jan Beulich wrote:
>>> +            domain_crash(d);
> This already leaves a file/line combination as a (minimal hint).

First, that is still tantamount to useless in logs from a user.

Second, the use of __LINE__ is why it breaks livepatching, and people
using livepatching are still carrying an out-of-tree patch to unbreak it.

> I can make a patch to add a gprintk() as you ask for, but I'm not
> sure it's worth it for this almost dead code.

"page in unexpected state" would be better than nothing, but given the
comment, it might also be better as ASSERT_UNREACHABLE(), and we now
have a lot of cases where we declare unreachable, and kill the domain in
release builds.

>
>>> @@ -1843,13 +1852,24 @@ int p2m_mem_paging_prep(struct domain *d
>>>      ret = p2m_set_entry(p2m, gfn, mfn, PAGE_ORDER_4K,
>>>                          paging_mode_log_dirty(d) ? p2m_ram_logdirty
>>>                                                   : p2m_ram_rw, a);
>>> -    set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
>>> +    if ( !ret )
>>> +    {
>>> +        set_gpfn_from_mfn(mfn_x(mfn), gfn_l);
>>>  
>>> -    if ( !page_extant )
>>> -        atomic_dec(&d->paged_pages);
>>> +        if ( !page_extant )
>>> +            atomic_dec(&d->paged_pages);
>>> +    }
>>>  
>>>   out:
>>>      gfn_unlock(p2m, gfn, 0);
>>> +
>>> +    if ( page )
>>> +    {
>>> +        if ( ret )
>>> +            put_page_alloc_ref(page);
>>> +        put_page(page);
>> This is a very long way from clear enough to follow, and buggy if anyone
>> inserts a new goto out path.
> What alternatives do you see?

/* Fully free the page on error.  Drop our temporary reference in all
 * cases. */

would at least help someone trying to figure out what is going on here,
especially as put_page_alloc_ref() is not the obvious freeing function
for alloc_domheap_page().

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 15 20:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 20:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZgxJ-0004wg-7Z; Fri, 15 May 2020 20:29:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2khj=65=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jZgxI-0004wb-2I
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 20:29:16 +0000
X-Inumbo-ID: bd4474a8-96ea-11ea-b9cf-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd4474a8-96ea-11ea-b9cf-bc764e2007e4;
 Fri, 15 May 2020 20:29:13 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04FKTCmI028947
 for <xen-devel@lists.xenproject.org>; Fri, 15 May 2020 22:29:12 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 6D2D62810; Fri, 15 May 2020 22:29:12 +0200 (CEST)
Date: Fri, 15 May 2020 22:29:12 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200515202912.GA11714@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1101:0:0:0:1]);
 Fri, 15 May 2020 22:29:12 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,
NetBSD works as dom0 up to Xen 4.11. I'm trying to get it working
on 4.13.0. I added support for gntdev operations, but I'm stuck with
privcmd IOCTL_PRIVCMD_MMAPBATCH. It seems to work fine for PV and PVH domUs,
but with HVM domUs, MMU_NORMAL_PT_UPDATE returns -22 (EINVAL) and
qemu-dm dumps core (as expected, since the page is not mapped).
Of course, this works fine in 4.11.

In the Xen kernel, I tracked it down to arch/x86/mm.c near line 2229,
in mod_l1_entry():
        /* Translate foreign guest address. */
        if ( cmd != MMU_PT_UPDATE_NO_TRANSLATE &&
             paging_mode_translate(pg_dom) )
        {
            p2m_type_t p2mt;
            p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
                            P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;

            page = get_page_from_gfn(pg_dom, l1e_get_pfn(nl1e), &p2mt, q);

            if ( p2m_is_paged(p2mt) )
            {
                if ( page )
                    put_page(page);
                p2m_mem_paging_populate(pg_dom, l1e_get_pfn(nl1e));
                return -ENOENT;
            }

            if ( p2mt == p2m_ram_paging_in && !page )
                return -ENOENT;

            /* Did our attempt to unshare fail? */
            if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
            {
                /* We could not have obtained a page ref. */
                ASSERT(!page);
                /* And mem_sharing_notify has already been called. */
                return -ENOMEM;
            }

            if ( !page ) {
                gdprintk(XENLOG_WARNING, "translate but no page\n");
                return -EINVAL;
            }                        
            nl1e = l1e_from_page(page, l1e_get_flags(nl1e));
        }

The gdprintk() I added in the ( !page ) case fires, so this is the
cause of the EINVAL.
Is this expected for an HVM domU? If so, how should the dom0 code be
changed to get it working? I failed to see where our code differs
from Linux...

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Fri May 15 21:00:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 21:00:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZhRG-0008LC-O3; Fri, 15 May 2020 21:00:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZhRF-0008L7-TR
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 21:00:13 +0000
X-Inumbo-ID: 11705778-96ef-11ea-a5c9-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11705778-96ef-11ea-a5c9-12813bfff9fa;
 Fri, 15 May 2020 21:00:12 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: RYV7/2NCiqT8wgvNSKOcdAlaEfVAvkoeTCimpzCFkJVX/t9DnY0umUjsdcecsfoERxpLU+1kX9
 Z9c2D8cCfrI2bpOvFC5qfssJKQevPoMzzO9M+9RSvj9peSWggf2GzaCS04ekRTVrFXQj08eHmV
 nz19AMedne5JRZQnpHbHuJEhc1LKKGjnMK/9Xb+ieDuy9i2LhJRm0C9coo3mLOzo6S0hJzwVTa
 2FF0hSznfu8GMbdAvFS1YA8DJktXQ9+jU8nF+Tg4uzRvagXl9EwSTHM4i1IfTbeC9/Yb131opH
 1f0=
X-SBRS: 2.7
X-MesageID: 17939101
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,396,1583211600"; d="scan'208";a="17939101"
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
To: Manuel Bouyer <bouyer@antioche.eu.org>, <xen-devel@lists.xenproject.org>
References: <20200515202912.GA11714@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
Date: Fri, 15 May 2020 22:00:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200515202912.GA11714@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/05/2020 21:29, Manuel Bouyer wrote:
> Hello,
> NetBSD works as dom0 up to Xen 4.11. I'm trying to get it working
> on 4.13.0. I added the support for gntdev operations,  but I'm stuck with
> privcmd IOCTL_PRIVCMD_MMAPBATCH. It seems to work fine for PV and PVH domUs,
> but with HVM domUs, MMU_NORMAL_PT_UPDATE returns -22 (EINVAL) and
> qemu-dm dumps core (as expected; the page is not mapped).
> Of course this works fine in 4.11
>
> In the Xen kernel, I tracked it down to arch/x86/mm.c near line 2229,
> in mod_l1_entry():
>         /* Translate foreign guest address. */
>         if ( cmd != MMU_PT_UPDATE_NO_TRANSLATE &&
>              paging_mode_translate(pg_dom) )
>         {
>             p2m_type_t p2mt;
>             p2m_query_t q = l1e_get_flags(nl1e) & _PAGE_RW ?
>                             P2M_ALLOC | P2M_UNSHARE : P2M_ALLOC;
>
>             page = get_page_from_gfn(pg_dom, l1e_get_pfn(nl1e), &p2mt, q);
>
>             if ( p2m_is_paged(p2mt) )
>             {
>                 if ( page )
>                     put_page(page);
>                 p2m_mem_paging_populate(pg_dom, l1e_get_pfn(nl1e));
>                 return -ENOENT;
>             }
>
>             if ( p2mt == p2m_ram_paging_in && !page )
>                 return -ENOENT;
>
>             /* Did our attempt to unshare fail? */
>             if ( (q & P2M_UNSHARE) && p2m_is_shared(p2mt) )
>             {
>                 /* We could not have obtained a page ref. */
>                 ASSERT(!page);
>                 /* And mem_sharing_notify has already been called. */
>                 return -ENOMEM;
>             }
>
>             if ( !page ) {
>                 gdprintk(XENLOG_WARNING, "translate but no page\n");
>                 return -EINVAL;
>             }                        
>             nl1e = l1e_from_page(page, l1e_get_flags(nl1e));
>         }
>
> the gdprintk() I added in the ( !page) case fires, so this is the
> cause of the EINVAL.
> Is it expected for a HVM domU ? If so, how should the dom0 code be
> changed to get it working ? I failed to see where our code is different
> from linux ...

What is qemu doing at the time?  Is it by any chance trying to map the
IOREQ server frame?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 15 21:06:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 21:06:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZhXU-00006g-Et; Fri, 15 May 2020 21:06:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2khj=65=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jZhXT-00006b-2C
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 21:06:39 +0000
X-Inumbo-ID: f6e41254-96ef-11ea-b07b-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6e41254-96ef-11ea-b07b-bc764e2007e4;
 Fri, 15 May 2020 21:06:37 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04FL6T1U017877;
 Fri, 15 May 2020 23:06:29 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id D6F142810; Fri, 15 May 2020 23:06:29 +0200 (CEST)
Date: Fri, 15 May 2020 23:06:29 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200515210629.GA10976@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1101:0:0:0:1]);
 Fri, 15 May 2020 23:06:30 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 10:00:07PM +0100, Andrew Cooper wrote:
> What is qemu doing at the time? Is it by any chance trying to map the
> IOREQ server frame?

Here's what gdb says about it:
Core was generated by `qemu-dm'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0  0x000000000046997d in cpu_x86_init (
    cpu_model=cpu_model@entry=0x4d622d "qemu32")
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
156                 rc = xenevtchn_bind_interdomain(
--Type <RET> for more, q to quit, c to continue without paging--
[Current thread is 1 (process 1480)]
(gdb) where
#0  0x000000000046997d in cpu_x86_init (
    cpu_model=cpu_model@entry=0x4d622d "qemu32")
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
#1  0x000000000043628d in pc_init1 (ram_size=<optimized out>, 
    vga_ram_size=4194304, boot_device=0x7f7fff460397 "cda", pci_enabled=1, 
    cpu_model=0x4d622d "qemu32", initrd_filename=<optimized out>, 
    kernel_cmdline=<optimized out>, kernel_filename=<optimized out>)
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/pc.c:829
#2  0x00000000004636e7 in xen_init_fv (ram_size=0, vga_ram_size=4194304, 
    boot_device=0x7f7fff460397 "cda", kernel_filename=0x0, 
    kernel_cmdline=0x4abff6 "", initrd_filename=0x0, cpu_model=0x0, 
    direct_pci=0x0)
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/xen_machine_fv.c:405
#3  0x00000000004a975b in main (argc=23, argv=0x7f7fff45fc78, 
    envp=<optimized out>)
    at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/vl.c:6014

Does it help?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--


From xen-devel-bounces@lists.xenproject.org Fri May 15 21:38:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 21:38:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZi2A-0002wX-0b; Fri, 15 May 2020 21:38:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kC4v=65=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZi28-0002wS-O7
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 21:38:20 +0000
X-Inumbo-ID: 645ec3b6-96f4-11ea-9887-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 645ec3b6-96f4-11ea-9887-bc764e2007e4;
 Fri, 15 May 2020 21:38:19 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6sBbulZbKKT8BljoNnbmu89Zx+L6x6D1rpSEj8KKNMCiv8B/ReEmTvE0AoFmzas/hxnzmGZFfO
 WlZLQqiBd+t7j6O9wvDTJ1xQaR/7lOgUt36re8TLsAwkXUTfXpMFIaJbUbC20S/CbGjNBND/Hf
 GequjRboVj2z3YriWvksJqJ2jYZ6xbbwMLVkgQO1VTTp8HkpMS0F3YfLN/Hr6gIPwlQewKR2i+
 h4XtNPWx8dpQ+9xKN+ztcP8jHX1huEfIGy9Po2YOFJDq3ahLF8hIput+sCHLXNSU8FNx+sO5it
 aYA=
X-SBRS: 2.7
X-MesageID: 17670414
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,396,1583211600"; d="scan'208";a="17670414"
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
Date: Fri, 15 May 2020 22:38:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200515210629.GA10976@antioche.eu.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/05/2020 22:06, Manuel Bouyer wrote:
> On Fri, May 15, 2020 at 10:00:07PM +0100, Andrew Cooper wrote:
>> What is qemu doing at the time?  Is it by any chance trying to map the
>> IOREQ server frame?
> Here's what gdb says about it:
> Core was generated by `qemu-dm'.
> Program terminated with signal SIGSEGV, Segmentation fault.
> #0  0x000000000046997d in cpu_x86_init (
>     cpu_model=cpu_model@entry=0x4d622d "qemu32")
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
> 156                 rc = xenevtchn_bind_interdomain(
> --Type <RET> for more, q to quit, c to continue without paging--
> [Current thread is 1 (process 1480)]
> (gdb) where
> #0  0x000000000046997d in cpu_x86_init (
>     cpu_model=cpu_model@entry=0x4d622d "qemu32")
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/i386-dm/helper2.c:156
> #1  0x000000000043628d in pc_init1 (ram_size=<optimized out>, 
>     vga_ram_size=4194304, boot_device=0x7f7fff460397 "cda", pci_enabled=1, 
>     cpu_model=0x4d622d "qemu32", initrd_filename=<optimized out>, 
>     kernel_cmdline=<optimized out>, kernel_filename=<optimized out>)
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/pc.c:829
> #2  0x00000000004636e7 in xen_init_fv (ram_size=0, vga_ram_size=4194304, 
>     boot_device=0x7f7fff460397 "cda", kernel_filename=0x0, 
>     kernel_cmdline=0x4abff6 "", initrd_filename=0x0, cpu_model=0x0, 
>     direct_pci=0x0)
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/hw/xen_machine_fv.c:405
> #3  0x00000000004a975b in main (argc=23, argv=0x7f7fff45fc78, 
>     envp=<optimized out>)
>     at /home/bouyer/pkgbuild/current/sysutils/xentools413/work/xen-4.13.0/tools/qemu-xen-traditional/vl.c:6014
>
> Does it help ?

Yes and no.  This is collateral damage from an earlier bug.

What failed was xen_init_fv()'s

    shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
                                       PROT_READ|PROT_WRITE, ioreq_pfn);
    if (shared_page == NULL) {
        fprintf(logfile, "map shared IO page returned error %d\n", errno);
        exit(-1);
    }

because we've ended up with a non-NULL pointer with no mapping behind
it, hence the SIGSEGV the first time we try to use the pointer.

Whatever logic is behind xc_map_foreign_range() should have returned
NULL or a real mapping.

ioreq_pfn ought to be something just below the 4G boundary, and the
toolstack ought to put memory there in the first place.  Can you
identify what value ioreq_pfn has, and whether it matches up with the
magic gfn range as reported by `xl create -vvv` for the guest?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 15 21:54:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 21:54:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZiH2-0004dF-I0; Fri, 15 May 2020 21:53:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2khj=65=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jZiH0-0004dA-Oa
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 21:53:42 +0000
X-Inumbo-ID: 894d2af8-96f6-11ea-9887-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 894d2af8-96f6-11ea-9887-bc764e2007e4;
 Fri, 15 May 2020 21:53:40 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04FLrZwT017507;
 Fri, 15 May 2020 23:53:35 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 6C18C2810; Fri, 15 May 2020 23:53:35 +0200 (CEST)
Date: Fri, 15 May 2020 23:53:35 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200515215335.GA9991@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="ew6BAiZeqk4r7MaW"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1101:0:0:0:1]);
 Fri, 15 May 2020 23:53:35 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--ew6BAiZeqk4r7MaW
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit

On Fri, May 15, 2020 at 10:38:13PM +0100, Andrew Cooper wrote:
> > [...]
> > Does it help ?
> 
> Yes and no. This is collateral damage from an earlier bug.
> 
> What failed was xen_init_fv()'s
> 
>  shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
>   PROT_READ|PROT_WRITE, ioreq_pfn);
>  if (shared_page == NULL) {
>   fprintf(logfile, "map shared IO page returned error %d\n", errno);
>   exit(-1);
>  }
> 
> because we've ended up with a non-NULL pointer with no mapping behind
> it, hence the SIGSEGV for the first time we try to use the pointer.
> 
> Whatever logic is behind xc_map_foreign_range() should have returned
> NULL or a real mapping.

What's strange is that the mapping is validated by mapping it into
dom0 kernel space. But when we try to remap it into the process's
address space, it fails.

> 
> ioreq_pfn ought to be something just below the 4G boundary, and the
> toolstack ought to put memory there in the first place. Can you
> identify what value ioreq_pfn has,

You mean, something like:
(gdb) print/x ioreq_pfn
$2 = 0xfeff0

> and whether it matches up with the
> magic gfn range as reported by `xl create -vvv` for the guest?

I guess you mean
xl -vvv create
The output is attached.

The kernel says it tries to map 0xfeff0000 to virtual address 0x79656f951000.
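
[Editorial aside, not from the thread: with 4 KiB pages, the gdb value and
the kernel message are consistent, and the frame does sit just below the 4G
boundary as expected. A minimal check of that arithmetic, with a hypothetical
pfn_to_paddr() helper:]

```c
#include <stdint.h>

/* pfn -> physical address, assuming 4 KiB pages. */
static uint64_t pfn_to_paddr(uint64_t pfn)
{
    return pfn << 12;
}
```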


-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference
--

--ew6BAiZeqk4r7MaW
Content-Type: text/plain; charset=us-ascii
Content-Disposition: attachment; filename=typescript
Content-Transfer-Encoding: 8bit

Script started on Fri May 15 23:47:25 2020
# xl -vvv create nb1-hvm
Parsing config from nb1-hvm
libxl: debug: libxl_create.c:1819:do_domain_create: Domain 0:ao 0x75de79e9b000: create: how=0x0 callback=0x0 poller=0x75de79ec60a0
libxl: detail: libxl_create.c:584:libxl__domain_make: passthrough: disabled
libxl: debug: libxl_device.c:380:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=unknown
libxl: debug: libxl_device.c:415:libxl__device_disk_set_backend: Disk vdev=hda, using backend phy
libxl: debug: libxl_create.c:1148:initiate_domain_create: Domain 5:running bootloader
libxl: debug: libxl_bootloader.c:328:libxl__bootloader_run: Domain 5:not a PV/PVH domain, skipping bootloader
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79e64cd0: deregister unregistered
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="/usr/pkg/libexec/xen/boot/hvmloader"
domainbuilder: detail: xc_dom_malloc_filemap    : 337 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.13, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
domainbuilder: detail: xc_dom_probe_bzimage_kernel: kernel is not a bzImage
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x5c844
xc: detail: ELF: memory: 0x100000 -> 0x15c844
domainbuilder: detail: xc_dom_mem_init: mem 1020 MB, pages 0x3fc00 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x3fc00 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: range: start=0x0 end=0x3fc00000
domainbuilder: detail: xc_dom_malloc            : 2040 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000200
xc: detail:   2MB PAGES: 0x00000000000001fd
xc: detail:   1GB PAGES: 0x0000000000000000
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x5d at 0x75de79b31000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x15d000  (pfn 0x100 + 0x5d pages)
xc: detail: ELF: phdr 0 at 0x75de79ad4000 -> 0x75de79b26ca0
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x15d+0x1 at 0x75de79e1d000
domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x15d000 -> 0x15e000  (pfn 0x15d + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x15e000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 2045 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 337 kB
domainbuilder: detail:       domU mmap          : 376 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: compat_gnttab_hvm_seed: d5: pfn=0xff000
domainbuilder: detail: xc_dom_set_gnttab_entry: d5 gnt[0] -> d0 0xfefff
domainbuilder: detail: xc_dom_set_gnttab_entry: d5 gnt[1] -> d0 0xfeffc
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_device.c:380:libxl__device_disk_set_backend: Disk vdev=hda spec.backend=phy
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: register slotnum=3
libxl: debug: libxl_create.c:1856:do_domain_create: Domain 0:ao 0x75de79e9b000: inprogress: poller=0x75de79ec60a0, flags=i
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:881:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 2 still waiting state 1
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:877:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 2 ok
libxl: debug: libxl_event.c:676:libxl__ev_xswatch_deregister: watch w=0x75de79bfd4d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/0: deregister slotnum=3
libxl: debug: libxl_device.c:1090:device_backend_callback: Domain 5:calling device_backend_cleanup
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfd4d0: deregister unregistered
libxl: debug: libxl_device.c:1191:device_hotplug: Domain 5:calling hotplug script: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768
libxl: debug: libxl_device.c:1192:device_hotplug: Domain 5:extra args:
libxl: debug: libxl_device.c:1198:device_hotplug: Domain 5:     2
libxl: debug: libxl_device.c:1200:device_hotplug: Domain 5:env:
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfd5d0: deregister unregistered
libxl: debug: libxl_netbsd.c:74:libxl__get_hotplug_script_info: Domain 5:num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1176:device_hotplug: Domain 5:No hotplug script to execute
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfd5d0: deregister unregistered
libxl: debug: libxl_dm.c:2626:libxl__spawn_local_dm: Domain 5:Spawning device-model /usr/pkg/libexec/xen/bin/qemu-dm with arguments:
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  /usr/pkg/libexec/xen/bin/qemu-dm
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -d
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  5
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -domain-name
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  nb1
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -vnc
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  132.227.103.47:1
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -videoram
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  4
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -boot
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  cda
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -usb
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -usbdevice
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  tablet
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -acpi
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -vcpu_avail
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  0x01
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -net
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  nic,vlan=0,macaddr=00:16:3e:00:10:54,model=e1000
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -net
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  tap,vlan=0,ifname=xvif5i0-emu,bridge=bridge0,script=/usr/pkg/etc/xen/scripts/qemu-ifup,downscript=/usr/pkg/etc/xen/scripts/qemu-ifup
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  -M
libxl: debug: libxl_dm.c:2628:libxl__spawn_local_dm: Domain 5:  xenfv
libxl: debug: libxl_dm.c:2630:libxl__spawn_local_dm: Domain 5:Spawning device-model /usr/pkg/libexec/xen/bin/qemu-dm with additional environment:
libxl: debug: libxl_dm.c:2632:libxl__spawn_local_dm: Domain 5:  XEN_QEMU_CONSOLE_LIMIT=1048576
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x75de79e64fc8 wpath=/local/domain/0/device-model/5/state token=3/1: register slotnum=3
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79e64fc8 wpath=/local/domain/0/device-model/5/state token=3/1: event epath=/local/domain/0/device-model/5/state
libxl: debug: libxl_exec.c:407:spawn_watch_event: domain 5 device model: spawn watch p=(null)
libxl: debug: libxl_event.c:676:libxl__ev_xswatch_deregister: watch w=0x75de79e64fc8 wpath=/local/domain/0/device-model/5/state token=3/1: deregister slotnum=3
libxl: error: libxl_dm.c:2783:device_model_spawn_outcome: Domain 5:domain 5 device model: spawn failed (rc=-3)
libxl: error: libxl_dm.c:2999:device_model_postconfig_done: Domain 5:Post DM startup configs failed, rc=-3
libxl: debug: libxl_qmp.c:1896:libxl__ev_qmp_dispose: Domain 0: ev 0x75de79e64fe0
libxl: error: libxl_create.c:1676:domcreate_devmodel_started: Domain 5:device model did not start: -3
libxl: debug: libxl_dm.c:3237:libxl__destroy_device_model: Domain 5:Didn't find dm UID; destroying by pid
libxl: error: libxl_dm.c:3103:kill_device_model: Device Model already exited
libxl: debug: libxl_event.c:639:libxl__ev_xswatch_register: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: register slotnum=3
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:881:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 6 still waiting state 5
libxl: debug: libxl_event.c:576:watchfd_callback: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: event epath=/local/domain/0/backend/vbd/5/768/state
libxl: debug: libxl_event.c:877:devstate_callback: backend /local/domain/0/backend/vbd/5/768/state wanted state 6 ok
libxl: debug: libxl_event.c:676:libxl__ev_xswatch_deregister: watch w=0x75de79bfe0d0 wpath=/local/domain/0/backend/vbd/5/768/state token=3/2: deregister slotnum=3
libxl: debug: libxl_device.c:1090:device_backend_callback: Domain 5:calling device_backend_cleanup
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe0d0: deregister unregistered
libxl: debug: libxl_device.c:1191:device_hotplug: Domain 5:calling hotplug script: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768
libxl: debug: libxl_device.c:1192:device_hotplug: Domain 5:extra args:
libxl: debug: libxl_device.c:1198:device_hotplug: Domain 5:     6
libxl: debug: libxl_device.c:1200:device_hotplug: Domain 5:env:
libxl: debug: libxl_aoutils.c:593:libxl__async_exec_start: forking to execute: /usr/pkg/etc/xen/scripts/block /local/domain/0/backend/vbd/5/768
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe1d0: deregister unregistered
libxl: debug: libxl_netbsd.c:74:libxl__get_hotplug_script_info: Domain 5:num_exec 1, not running hotplug scripts
libxl: debug: libxl_device.c:1176:device_hotplug: Domain 5:No hotplug script to execute
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe1d0: deregister unregistered
libxl: debug: libxl_device.c:1176:device_hotplug: Domain 5:No hotplug script to execute
libxl: debug: libxl_event.c:689:libxl__ev_xswatch_deregister: watch w=0x75de79bfe5d0: deregister unregistered
libxl: debug: libxl_domain.c:1355:devices_destroy_cb: Domain 5:Forked pid 2095 for destroy of domain
libxl: debug: libxl_event.c:1893:libxl__ao_complete: ao 0x75de79e9b000: complete, rc=-3
libxl: debug: libxl_event.c:1862:libxl__ao__destroy: ao 0x75de79e9b000: destroy
libxl: debug: libxl_domain.c:1040:libxl_domain_destroy: Domain 5:ao 0x75de79e9b000: create: how=0x0 callback=0x0 poller=0x75de79ec60a0
libxl: error: libxl_domain.c:1177:libxl__destroy_domid: Domain 5:Non-existant domain
libxl: error: libxl_domain.c:1131:domain_destroy_callback: Domain 5:Unable to destroy guest
libxl: error: libxl_domain.c:1058:domain_destroy_cb: Domain 5:Destruction of domain failed
libxl: debug: libxl_event.c:1893:libxl__ao_complete: ao 0x75de79e9b000: complete, rc=-21
libxl: debug: libxl_domain.c:1049:libxl_domain_destroy: Domain 5:ao 0x75de79e9b000: inprogress: poller=0x75de79ec60a0, flags=ic
libxl: debug: libxl_event.c:1862:libxl__ao__destroy: ao 0x75de79e9b000: destroy
xencall:buffer: debug: total allocations:400 total releases:400
xencall:buffer: debug: current allocations:0 maximum allocations:3
xencall:buffer: debug: cache current size:3
xencall:buffer: debug: cache hits:383 misses:3 toobig:14
xencall:buffer: debug: total allocations:0 total releases:0
xencall:buffer: debug: current allocations:0 maximum allocations:0
xencall:buffer: debug: cache current size:0
xencall:buffer: debug: cache hits:0 misses:0 toobig:0
#
Script done on Fri May 15 23:47:46 2020

--ew6BAiZeqk4r7MaW--


From xen-devel-bounces@lists.xenproject.org Fri May 15 22:37:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 15 May 2020 22:37:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZixT-0008Od-Tb; Fri, 15 May 2020 22:37:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ibeu=65=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZixS-0008OX-HI
 for xen-devel@lists.xenproject.org; Fri, 15 May 2020 22:37:34 +0000
X-Inumbo-ID: a792841c-96fc-11ea-a5e2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a792841c-96fc-11ea-a5e2-12813bfff9fa;
 Fri, 15 May 2020 22:37:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+1WLuhGmFl6RAGLQj2pEDVl/IsIoeF3d6+Io7tYPiZU=; b=VEa5hars4vhjeSCeGm4UdojKq
 LWoJB/LC/ytywFR1Z4v7l138if+eXp5+4KXkFN+qoYdeoYOoA8L3hLf0V1/Upt6SgObQtEMIXGcKO
 UuiBTjCDYiXvXhFllA3qZmOb95h+A2dxWNE6yAp2o9kpXzDOSgFknpZkGPXHR0VdBjt8I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZixL-0003YD-CK; Fri, 15 May 2020 22:37:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZixL-0006hV-5J; Fri, 15 May 2020 22:37:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZixL-0001Px-4a; Fri, 15 May 2020 22:37:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150209-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150209: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=57880053dd24012e9f59c23b630fefe07e15dc49
X-Osstest-Versions-That: xen=90b1701905dd0cbb7efe525b1bf92007fe818b60
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 15 May 2020 22:37:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150209 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150209/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  57880053dd24012e9f59c23b630fefe07e15dc49
baseline version:
 xen                  90b1701905dd0cbb7efe525b1bf92007fe818b60

Last test of basis   150205  2020-05-15 17:01:32 Z    0 days
Testing same since   150209  2020-05-15 20:01:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   90b1701905..57880053dd  57880053dd24012e9f59c23b630fefe07e15dc49 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat May 16 00:54:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 00:54:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZl5h-0004Lh-07; Sat, 16 May 2020 00:54:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4r6I=66=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jZl5f-0004Lc-PG
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 00:54:11 +0000
X-Inumbo-ID: c033694c-970f-11ea-9887-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c033694c-970f-11ea-9887-bc764e2007e4;
 Sat, 16 May 2020 00:54:10 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id E2953A22BA
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 02:54:08 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id DD70FA221F
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 02:54:07 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id wBJ7ySV-v-f5 for <xen-devel@lists.xenproject.org>;
 Sat, 16 May 2020 02:54:07 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 5CDA0A1FAE
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 02:54:07 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id gLT2_Iv1aunW for <xen-devel@lists.xenproject.org>;
 Sat, 16 May 2020 02:54:07 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 3EAF3A1F9B
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 02:54:07 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 25510219DD
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 02:53:37 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id XEWAgGxgG1Km for <xen-devel@lists.xenproject.org>;
 Sat, 16 May 2020 02:53:31 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 7783F21A09
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 02:53:31 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id Qat2-CW364Mr for <xen-devel@lists.xenproject.org>;
 Sat, 16 May 2020 02:53:31 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 500A2219DD
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 02:53:31 +0200 (CEST)
Date: Sat, 16 May 2020 02:53:31 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <1740303418.53653891.1589590411232.JavaMail.zimbra@cert.pl>
Subject: Re: [PATCH 0/3] vm_event: fix race-condition when disabling monitor
 events
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC81 (Win)/8.6.0_GA_1194)
Thread-Topic: vm_event: fix race-condition when disabling monitor events
Thread-Index: GQOYuTEZ4+ZEM83zLw5zzDot3iub4A==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> For the last couple of years we have received numerous reports from users
> of monitor vm_events of spurious guest crashes when using events. In
> particular, it has been observed that the problem occurs when vm_events
> are being disabled. The nature of the guest crash varied widely and has
> only occurred occasionally. This made debugging the issue particularly
> hard. We had discussions about this issue even here on the xen-devel
> mailing list with no luck figuring it out.
>
> The bug has now been identified as a race condition between register
> event handling and disabling the vm_event interface.
>
> Patch 96760e2fba100d694300a81baddb5740e0f8c0ee, "vm_event: deny register
> writes if refused by vm_event reply", is the patch that introduced the
> error. In that patch, emulation of register write events can be postponed
> until the corresponding vm_event handler decides whether to allow such a
> write to take place. Unfortunately, this can only be implemented by
> performing the deny/allow step when the vCPU gets scheduled. Because the
> emulation of the event is postponed, if the user decides to pause the VM
> in the vm_event handler and then disable events, the entire emulation
> step is skipped the next time the vCPU is resumed. Even if the user
> doesn't pause during the vm_event handling but exits immediately and
> disables vm_event, the situation becomes racy, as disabling vm_event may
> succeed before the guest's vCPUs get scheduled with the pending emulation
> task. This has been particularly the case with VMs that have several
> vCPUs, as after the VM is unpaused it may actually take a long time
> before all vCPUs get scheduled.
>
> The only solution currently is to poll each vCPU before vm_events are
> disabled to verify they have been scheduled. The following patches
> resolve this issue in a much nicer way.
>
> Patch 1 adds an option to the monitor_op domctl that needs to be
>     specified if the user wants to actually use the postponed
>     register-write handling mechanism. If that option is not specified,
>     then handling is performed the same way as before patch
>     96760e2fba100d694300a81baddb5740e0f8c0ee.
>
> Patch 2 performs sanity checking when disabling vm_events to determine
>     whether it is safe to free all vm_event structures. The vCPUs still
>     get unpaused to allow them to get scheduled and perform any of their
>     pending operations, but otherwise an -EAGAIN error is returned,
>     signaling to the user that they need to wait and try disabling the
>     interface again.
>
> Patch 3 adds a vm_event specifically to signal to the user when it is
>     safe to continue disabling the interface.
>
> Shout-out to our friends at CERT.pl for stumbling upon a crucial piece
> of information that led to finally squashing this nasty bug.
>
> Tamas K Lengyel (3):
>   xen/monitor: Control register values
>   xen/vm_event: add vm_event_check_pending_op
>   xen/vm_event: Add safe to disable vm_event
>
>  xen/arch/x86/hvm/hvm.c            | 69 +++++++++++++++++++++++--------
>  xen/arch/x86/hvm/monitor.c        | 14 +++++++
>  xen/arch/x86/monitor.c            | 23 ++++++++++-
>  xen/arch/x86/vm_event.c           | 23 +++++++++++
>  xen/common/vm_event.c             | 17 ++++++--
>  xen/include/asm-arm/vm_event.h    |  7 ++++
>  xen/include/asm-x86/domain.h      |  2 +
>  xen/include/asm-x86/hvm/monitor.h |  1 +
>  xen/include/asm-x86/vm_event.h    |  2 +
>  xen/include/public/domctl.h       |  3 ++
>  xen/include/public/vm_event.h     |  8 ++++
>  11 files changed, 147 insertions(+), 22 deletions(-)
>
> --
> 2.26.1

Hi,

I have reproduced the mentioned race condition between register event
handling and disabling the vm_event interface. With Xen 4.13.0 and a
Windows 7 x64 DomU (2 VCPUs), my test program causes random BSODs on the
DomU once the vm_event interface is disabled. I can confirm that after
applying Patch 1 to Xen 4.13.0 the test program doesn't crash the DomU
anymore, so it would actually resolve the mentioned bug.

Best regards,
Michał Leszczyński
CERT Poland


From xen-devel-bounces@lists.xenproject.org Sat May 16 02:45:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 02:45:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZmok-00076U-Pp; Sat, 16 May 2020 02:44:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZmoj-00076P-TY
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 02:44:49 +0000
X-Inumbo-ID: 32a68978-971f-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32a68978-971f-11ea-b9cf-bc764e2007e4;
 Sat, 16 May 2020 02:44:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=x3mPqApx7MYBnMMJz/G6dGxkJPlQZvtgp3o/cTAZWW0=; b=ainkTF2a+tdPvUe03bglHLBdq
 fA8q1fQY9hmtW0pTNNW/OWjwgbKHt8dFvnJUT9hNauLgP8WwcW0VVJYvfv8/Xu0+hqceMooQNEXbO
 mXNLuYjuDqo9uFAYUe6vpDsnAqOFSPmKCQippJ49HtkB/KwqFwce0tkV7gB3QfElfeMQc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZmoc-0002Ud-Uh; Sat, 16 May 2020 02:44:42 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZmoc-00013w-K9; Sat, 16 May 2020 02:44:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZmoc-0003ol-Hq; Sat, 16 May 2020 02:44:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150196-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150196: all pass - PUSHED
X-Osstest-Versions-This: ovmf=9099dcbd61c8d22b5eedda783d143c222d2705a3
X-Osstest-Versions-That: ovmf=bcf181a33b2ea46c36c3be701a5b2e232deaece7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 02:44:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150196 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150196/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 9099dcbd61c8d22b5eedda783d143c222d2705a3
baseline version:
 ovmf                 bcf181a33b2ea46c36c3be701a5b2e232deaece7

Last test of basis   150187  2020-05-15 01:10:49 Z    1 days
Testing same since   150196  2020-05-15 13:16:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Oleksiy Yakovlev <oleksiyy@ami.com>
  Ray Ni <ray.ni@intel.com>
  Robert Phelps <robert@ami.com>
  Wei6 Xu <wei6.xu@intel.com>
  Zhichao Gao <zhichao.gao@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bcf181a33b..9099dcbd61  9099dcbd61c8d22b5eedda783d143c222d2705a3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 16 04:55:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 04:55:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZoqi-0001zp-Ay; Sat, 16 May 2020 04:55:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZoqg-0001zk-Iz
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 04:54:58 +0000
X-Inumbo-ID: 610cc252-9731-11ea-a611-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 610cc252-9731-11ea-a611-12813bfff9fa;
 Sat, 16 May 2020 04:54:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9WeIW1nMXt3K0ZvqV7I+EpiByon/gd+0xf02iu59VXY=; b=NOwVzZVp2BQeoGQqoVioFUx+/
 rZdiBXkOzzGh29lK0scON7heDCk/TkgI/mJTqx4uwBL8y3wEQosr53F4RwiT/OoA+p1tqBavsgE/r
 53Bg0Ll/CIQioq5RkhqWpOcRqgaXHUOyi0UtddX8KhTnSV+sEBNt95RP3FA8Aj8+Mo+TY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZoqa-0005IP-19; Sat, 16 May 2020 04:54:52 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZoqZ-00023M-Pm; Sat, 16 May 2020 04:54:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZoqZ-0000ar-P6; Sat, 16 May 2020 04:54:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150194-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150194: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=0db949f1810f4d497762d57d8db6f219c0607529
X-Osstest-Versions-That: qemuu=d5c75ec500d96f1d93447f990cd5a4ef5ba27fae
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 04:54:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150194 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150194/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150151
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150151
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150151
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150151
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150151
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150151
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                0db949f1810f4d497762d57d8db6f219c0607529
baseline version:
 qemuu                d5c75ec500d96f1d93447f990cd5a4ef5ba27fae

Last test of basis   150151  2020-05-12 19:38:05 Z    3 days
Failing since        150174  2020-05-14 10:06:36 Z    1 days    3 attempts
Testing same since   150194  2020-05-15 11:32:28 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Corey Minyard <cminyard@mvista.com>
  Cornelia Huck <cohuck@redhat.com>
  David Gibson <david@gibson.dropbear.id.au>
  Denis Plotnikov <dplotnikov@virtuozzo.com>
  Dongjiu Geng <gengdongjiu@huawei.com>
  Edgar E. Iglesias <edgar.iglesias@xilinx.com>
  Greg Kurz <groug@kaod.org>
  Jason Wang <jasowang@redhat.com>
  Joe Komlodi <joe.komlodi@xilinx.com>
  Joe Komlodi <komlodi@xilinx.com>
  John Snow <jsnow@redhat.com>
  Markus Armbruster <armbru@redhat.com>
  Max Reitz <mreitz@redhat.com>
  Michael S. Tsirkin <mst@redhat.com>
  Omar Sandoval <osandov@fb.com>
  Patrick Williams <patrick@stwcx.xyz>
  Paul Durrant <paul@xen.org>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Stefano Stabellini <sstabellini@kernel.org>
  Tong Ho <tong.ho@xilinx.com>
  Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
  Xiang Zheng <zhengxiang9@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   d5c75ec500..0db949f181  0db949f1810f4d497762d57d8db6f219c0607529 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat May 16 05:21:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 05:21:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZpFo-0004wW-Eu; Sat, 16 May 2020 05:20:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZpFm-0004wR-CU
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 05:20:54 +0000
X-Inumbo-ID: 03203486-9735-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03203486-9735-11ea-ae69-bc764e2007e4;
 Sat, 16 May 2020 05:20:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ro66PKaoZwDzkWVBN6gCjOpMh84pD7jAJ2QpuT+ypBo=; b=KMXRl0eqsSeYrKD3fSn9osZFe
 dtNdfHK9lr+m5OdNdE786pFecJwmLwiszs0iAE5JnD1rfYddMXz4vISyFk/Cdsn4L3DbCohOncyq6
 ImZsGsz6us2xZTrpxEAhkl0RjXTwtFZ8UWu188JQo1FgsN1kpSsD0/J0pf6aI8npb5Dmo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZpFk-00068K-8M; Sat, 16 May 2020 05:20:52 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZpFj-0003Is-TE; Sat, 16 May 2020 05:20:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZpFj-0008PL-Sc; Sat, 16 May 2020 05:20:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150201-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150201: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=665dce17c04b574bb0ebcde4cac129c3dd9e681c
X-Osstest-Versions-That: seabios=8d25ca41c5a146ebd81f442d4e241171db8fe0ae
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 05:20:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150201 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150201/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150188
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150188
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150188
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150188
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              665dce17c04b574bb0ebcde4cac129c3dd9e681c
baseline version:
 seabios              8d25ca41c5a146ebd81f442d4e241171db8fe0ae

Last test of basis   150188  2020-05-15 01:54:24 Z    1 days
Testing same since   150201  2020-05-15 15:10:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   8d25ca4..665dce1  665dce17c04b574bb0ebcde4cac129c3dd9e681c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 16 06:35:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 06:35:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZqPX-00037Q-AU; Sat, 16 May 2020 06:35:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZqPV-00037H-Oj
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 06:35:01 +0000
X-Inumbo-ID: 5ac21eac-973f-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ac21eac-973f-11ea-b9cf-bc764e2007e4;
 Sat, 16 May 2020 06:34:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=D4VEUt0ZS0oz7QG46LtDsCIPMnZEKJzRHk9teA+WIHE=; b=tCB/41jRb43RMF+OGggRMzAz2
 NASKQi4KMsm77vJKpaNS/S3Bh1TZCMlo+i+7/j9cfSfZE39m/6474SkKoSAHykOINrL3DqAFUFQ1O
 QBLvVG6tzmm9j4v1O7dBYoguFVQaG8ojWWT7o8WIfbdsCP98gg3W3Q4BdRQjUuYDiPcyQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZqPO-0007hZ-4f; Sat, 16 May 2020 06:34:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZqPN-0007Mq-RR; Sat, 16 May 2020 06:34:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZqPN-0007oj-Qi; Sat, 16 May 2020 06:34:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150192-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150192: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=5115b437eef595ce77f05bfc02626e31e263e965
X-Osstest-Versions-That: xen=9d83ad86834300927b636fa02b29d84854399ed8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 06:34:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150192 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150192/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150164
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150164
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150164
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150164
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150164
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150164
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150164
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150164
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  5115b437eef595ce77f05bfc02626e31e263e965
baseline version:
 xen                  9d83ad86834300927b636fa02b29d84854399ed8

Last test of basis   150164  2020-05-13 17:36:57 Z    2 days
Failing since        150169  2020-05-14 03:54:42 Z    2 days    3 attempts
Testing same since   150192  2020-05-15 11:25:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   9d83ad8683..5115b437ee  5115b437eef595ce77f05bfc02626e31e263e965 -> master


From xen-devel-bounces@lists.xenproject.org Sat May 16 06:57:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 06:57:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZql1-0004zs-6v; Sat, 16 May 2020 06:57:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZqkz-0004zn-GT
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 06:57:13 +0000
X-Inumbo-ID: 77be4b54-9742-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77be4b54-9742-11ea-9887-bc764e2007e4;
 Sat, 16 May 2020 06:57:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=MSbRsMc9nMkzcS/b7u6OK61c79/lrcaAxAB/g3BYRmY=; b=OBhuaxAWMxQda+bB+36o7m0tC
 cKpOIJ7TRKVplUwf2/QbwREuCft4djcMEYrIfIPh7L1bXKulzK4OYX5/P/JRbz1R3Mw/WwSQXE2e/
 aVW1SJcVv8BiOh9QYSvqO4v7EmmQGrs7PERUrC49bcKfdIGtDDLuN6Ianm5t2GRXeNdnw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZqkx-0008AD-AQ; Sat, 16 May 2020 06:57:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZqkx-00009N-1n; Sat, 16 May 2020 06:57:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZqkx-0006pF-1A; Sat, 16 May 2020 06:57:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150210-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150210: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=144dfe4215902b40a9d17fdb326054bbd8e07563
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 06:57:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150210 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150210/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              144dfe4215902b40a9d17fdb326054bbd8e07563
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  120 days
Failing since        146211  2020-01-18 04:18:52 Z  119 days  110 attempts
Testing same since   150210  2020-05-16 04:20:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18353 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 16 10:20:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 10:20:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZtvF-0006ce-Is; Sat, 16 May 2020 10:20:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2yaN=66=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZtvE-0006bA-36
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 10:20:00 +0000
X-Inumbo-ID: caa26082-975e-11ea-a637-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id caa26082-975e-11ea-a637-12813bfff9fa;
 Sat, 16 May 2020 10:19:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=M7gjsTUXtdXblxF+pv6oq7jdPrbDZAREuVCENqwOAtU=; b=R//ORzhthvvPSFGwENQGztxSgx
 6TW6cgns925P0UiN0ojpQDXQMztPvblURL2H6r3+QStOSkdRLaf5h1wvC97gJ3gikLa45RE4c51m7
 I1BwxhHOKDEViT08vJAltvpO9O934dmj1vEbFItzxyiEG9sBi904V510sfPkA/1Wo2Vc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZtvA-0004Nv-NJ; Sat, 16 May 2020 10:19:56 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZtvA-0000c2-DP; Sat, 16 May 2020 10:19:56 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] pvcalls: Document correctly and explicitly the padding for
 all arches
Date: Sat, 16 May 2020 11:19:53 +0100
Message-Id: <20200516101953.1235-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <jgrall@amazon.com>, julien@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The documentation of pvcalls suggests there is padding for 32-bit x86
at the end of most of the structures. However, the padding is not
described in the public header.

Because of that, on 32-bit x86 the structures would be 32-bit aligned
rather than 64-bit aligned.

For all the other supported architectures (Arm and 64-bit x86), the
structures are aligned to 64 bits because they contain a uint64_t
field. Therefore all the structures contain implicit padding.

The padding is now corrected for 32-bit x86 and written explicitly for
all the architectures.

While the structure sizes differ between 32-bit and 64-bit x86, this
shouldn't cause any incompatibility between a 32-bit and a 64-bit
frontend/backend, because the commands are always 56 bytes and the
padding is at the end of each structure.

As an aside, the padding sadly cannot be mandated to be 0, as the
structures are already in use without that requirement. So it will not
be possible to use the padding to extend a command in the future.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - It is not possible to use the same padding for 32-bit x86 and
        all the other supported architectures.
---
 docs/misc/pvcalls.pandoc        | 18 ++++++++++--------
 xen/include/public/io/pvcalls.h | 14 ++++++++++++++
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/docs/misc/pvcalls.pandoc b/docs/misc/pvcalls.pandoc
index 665dad556c39..c25412868f5d 100644
--- a/docs/misc/pvcalls.pandoc
+++ b/docs/misc/pvcalls.pandoc
@@ -246,9 +246,9 @@ The format is defined as follows:
     			uint32_t domain;
     			uint32_t type;
     			uint32_t protocol;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[4];
-    			#endif
+			#endif
     		} socket;
     		struct xen_pvcalls_connect {
     			uint64_t id;
@@ -257,16 +257,18 @@ The format is defined as follows:
     			uint32_t flags;
     			grant_ref_t ref;
     			uint32_t evtchn;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[4];
-    			#endif
+			#endif
     		} connect;
     		struct xen_pvcalls_release {
     			uint64_t id;
     			uint8_t reuse;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[7];
-    			#endif
+			#else
+			uint8_t pad[3];
+			#endif
     		} release;
     		struct xen_pvcalls_bind {
     			uint64_t id;
@@ -276,9 +278,9 @@ The format is defined as follows:
     		struct xen_pvcalls_listen {
     			uint64_t id;
     			uint32_t backlog;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[4];
-    			#endif
+			#endif
     		} listen;
     		struct xen_pvcalls_accept {
     			uint64_t id;
diff --git a/xen/include/public/io/pvcalls.h b/xen/include/public/io/pvcalls.h
index cb8171275c13..590c5e9e41aa 100644
--- a/xen/include/public/io/pvcalls.h
+++ b/xen/include/public/io/pvcalls.h
@@ -65,6 +65,9 @@ struct xen_pvcalls_request {
             uint32_t domain;
             uint32_t type;
             uint32_t protocol;
+#ifndef CONFIG_X86_32
+            uint8_t pad[4];
+#endif
         } socket;
         struct xen_pvcalls_connect {
             uint64_t id;
@@ -73,10 +76,18 @@ struct xen_pvcalls_request {
             uint32_t flags;
             grant_ref_t ref;
             uint32_t evtchn;
+#ifndef CONFIG_X86_32
+            uint8_t pad[4];
+#endif
         } connect;
         struct xen_pvcalls_release {
             uint64_t id;
             uint8_t reuse;
+#ifndef CONFIG_X86_32
+            uint8_t pad[7];
+#else
+            uint8_t pad[3];
+#endif
         } release;
         struct xen_pvcalls_bind {
             uint64_t id;
@@ -86,6 +97,9 @@ struct xen_pvcalls_request {
         struct xen_pvcalls_listen {
             uint64_t id;
             uint32_t backlog;
+#ifndef CONFIG_X86_32
+            uint8_t pad[4];
+#endif
         } listen;
         struct xen_pvcalls_accept {
             uint64_t id;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat May 16 10:21:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 10:21:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZtwQ-0007Hd-1d; Sat, 16 May 2020 10:21:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2yaN=66=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZtwO-0007HU-VD
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 10:21:12 +0000
X-Inumbo-ID: f6eb066c-975e-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6eb066c-975e-11ea-b07b-bc764e2007e4;
 Sat, 16 May 2020 10:21:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OMC3SvG/IDTkGrQ9g09O6JMRuWTkg4XZkRiTl8ST/bc=; b=egWKVjdgs4T+KgXmGLuft1tQqB
 1QNNmtjLUvy1GThhhUWyTFJeF1P0Fk+eqsZbNK7GMxxIRmOznbQZ8muiQ9Z51/IDH82Zs6M/fkDD4
 Z+micIldzFlcClcbOBadYDV0ykRXB1R18MD0fKBOeEqGLKDN4VO+EUWyhA8c8SM4KboY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZtwN-0004P9-1r; Sat, 16 May 2020 10:21:11 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZtwM-0000gh-Rj; Sat, 16 May 2020 10:21:10 +0000
Subject: Re: [PATCH] pvcalls: Document correctly and explicitly the padding
 for all arches
To: xen-devel@lists.xenproject.org
References: <20200516101953.1235-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <3b3a9f13-67d6-1fb2-5735-1b9974a1999c@xen.org>
Date: Sat, 16 May 2020 11:21:09 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200516101953.1235-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <jgrall@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Sorry I forgot to CC the maintainers on the e-mail.

I will resend it.

On 16/05/2020 11:19, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The documentation of pvcalls suggests there is padding for 32-bit x86
> at the end of most of the structures. However, the padding is not
> described in the public header.
> 
> Because of that, on 32-bit x86 the structures would be 32-bit aligned
> rather than 64-bit aligned.
> 
> For all the other supported architectures (Arm and 64-bit x86), the
> structures are aligned to 64 bits because they contain a uint64_t
> field. Therefore all the structures contain implicit padding.
> 
> The padding is now corrected for 32-bit x86 and written explicitly for
> all the architectures.
> 
> While the structure sizes differ between 32-bit and 64-bit x86, this
> shouldn't cause any incompatibility between a 32-bit and a 64-bit
> frontend/backend, because the commands are always 56 bytes and the
> padding is at the end of each structure.
> 
> As an aside, the padding sadly cannot be mandated to be 0, as the
> structures are already in use without that requirement. So it will not
> be possible to use the padding to extend a command in the future.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>      Changes in v2:
>          - It is not possible to use the same padding for 32-bit x86 and
>          all the other supported architectures.
> ---
>   docs/misc/pvcalls.pandoc        | 18 ++++++++++--------
>   xen/include/public/io/pvcalls.h | 14 ++++++++++++++
>   2 files changed, 24 insertions(+), 8 deletions(-)
> 
> diff --git a/docs/misc/pvcalls.pandoc b/docs/misc/pvcalls.pandoc
> index 665dad556c39..c25412868f5d 100644
> --- a/docs/misc/pvcalls.pandoc
> +++ b/docs/misc/pvcalls.pandoc
> @@ -246,9 +246,9 @@ The format is defined as follows:
>       			uint32_t domain;
>       			uint32_t type;
>       			uint32_t protocol;
> -    			#ifdef CONFIG_X86_32
> +			#ifndef CONFIG_X86_32
>       			uint8_t pad[4];
> -    			#endif
> +			#endif
>       		} socket;
>       		struct xen_pvcalls_connect {
>       			uint64_t id;
> @@ -257,16 +257,18 @@ The format is defined as follows:
>       			uint32_t flags;
>       			grant_ref_t ref;
>       			uint32_t evtchn;
> -    			#ifdef CONFIG_X86_32
> +			#ifndef CONFIG_X86_32
>       			uint8_t pad[4];
> -    			#endif
> +			#endif
>       		} connect;
>       		struct xen_pvcalls_release {
>       			uint64_t id;
>       			uint8_t reuse;
> -    			#ifdef CONFIG_X86_32
> +			#ifndef CONFIG_X86_32
>       			uint8_t pad[7];
> -    			#endif
> +			#else
> +			uint8_t pad[3];
> +			#endif
>       		} release;
>       		struct xen_pvcalls_bind {
>       			uint64_t id;
> @@ -276,9 +278,9 @@ The format is defined as follows:
>       		struct xen_pvcalls_listen {
>       			uint64_t id;
>       			uint32_t backlog;
> -    			#ifdef CONFIG_X86_32
> +			#ifndef CONFIG_X86_32
>       			uint8_t pad[4];
> -    			#endif
> +			#endif
>       		} listen;
>       		struct xen_pvcalls_accept {
>       			uint64_t id;
> diff --git a/xen/include/public/io/pvcalls.h b/xen/include/public/io/pvcalls.h
> index cb8171275c13..590c5e9e41aa 100644
> --- a/xen/include/public/io/pvcalls.h
> +++ b/xen/include/public/io/pvcalls.h
> @@ -65,6 +65,9 @@ struct xen_pvcalls_request {
>               uint32_t domain;
>               uint32_t type;
>               uint32_t protocol;
> +#ifndef CONFIG_X86_32
> +            uint8_t pad[4];
> +#endif
>           } socket;
>           struct xen_pvcalls_connect {
>               uint64_t id;
> @@ -73,10 +76,18 @@ struct xen_pvcalls_request {
>               uint32_t flags;
>               grant_ref_t ref;
>               uint32_t evtchn;
> +#ifndef CONFIG_X86_32
> +            uint8_t pad[4];
> +#endif
>           } connect;
>           struct xen_pvcalls_release {
>               uint64_t id;
>               uint8_t reuse;
> +#ifndef CONFIG_X86_32
> +            uint8_t pad[7];
> +#else
> +            uint8_t pad[3];
> +#endif
>           } release;
>           struct xen_pvcalls_bind {
>               uint64_t id;
> @@ -86,6 +97,9 @@ struct xen_pvcalls_request {
>           struct xen_pvcalls_listen {
>               uint64_t id;
>               uint32_t backlog;
> +#ifndef CONFIG_X86_32
> +            uint8_t pad[4];
> +#endif
>           } listen;
>           struct xen_pvcalls_accept {
>               uint64_t id;
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 16 10:22:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 10:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZtxI-0007NL-C2; Sat, 16 May 2020 10:22:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2yaN=66=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZtxH-0007NA-8p
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 10:22:07 +0000
X-Inumbo-ID: 17d78b2a-975f-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17d78b2a-975f-11ea-b9cf-bc764e2007e4;
 Sat, 16 May 2020 10:22:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=M7gjsTUXtdXblxF+pv6oq7jdPrbDZAREuVCENqwOAtU=; b=Afw1PXlXP5VmQpYedSpUQ/wnl0
 1KE7Fd2RvfKSUjT6Y0aXl/QSdsqHl7/yUvV8fTeeYSAzbF8wuklJc49enSJcEbQFybo+tHzebX5tQ
 3z5p3i/U+gVZKN7KyE4vZuGmKxuUXFjYSXnMBGzL00wQkqNEMbVy6IkUuLUUYoKipN5w=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZtxF-0004Rb-KA; Sat, 16 May 2020 10:22:05 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZtxF-0000pA-A7; Sat, 16 May 2020 10:22:05 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [RESEND PATCH v2 for-4.14] pvcalls: Document correctly and
 explicitly the padding for all arches
Date: Sat, 16 May 2020 11:21:57 +0100
Message-Id: <20200516102157.1928-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The documentation of pvcalls suggests there is padding for 32-bit x86
at the end of most of the structures. However, it is not described in
the public header.

Because of that, all the structures are 32-bit aligned rather than
64-bit aligned on 32-bit x86.

On all the other supported architectures (Arm and 64-bit x86), the
structures are 64-bit aligned because they contain a uint64_t field.
Therefore all the structures contain implicit padding.

The padding is now corrected for 32-bit x86 and written explicitly for
all the architectures.

While the structure sizes differ between 32-bit and 64-bit x86, this
shouldn't cause any incompatibility between a 32-bit and a 64-bit
frontend/backend, because the commands are always 56 bytes and the
padding is at the end of the structure.

As an aside, the padding sadly cannot be mandated to be 0 because the
structures are already in use. So it will not be possible to use the
padding to extend a command in the future.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - It is not possible to use the same padding for 32-bit x86 and
        all the other supported architectures.
---
 docs/misc/pvcalls.pandoc        | 18 ++++++++++--------
 xen/include/public/io/pvcalls.h | 14 ++++++++++++++
 2 files changed, 24 insertions(+), 8 deletions(-)

diff --git a/docs/misc/pvcalls.pandoc b/docs/misc/pvcalls.pandoc
index 665dad556c39..c25412868f5d 100644
--- a/docs/misc/pvcalls.pandoc
+++ b/docs/misc/pvcalls.pandoc
@@ -246,9 +246,9 @@ The format is defined as follows:
     			uint32_t domain;
     			uint32_t type;
     			uint32_t protocol;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[4];
-    			#endif
+			#endif
     		} socket;
     		struct xen_pvcalls_connect {
     			uint64_t id;
@@ -257,16 +257,18 @@ The format is defined as follows:
     			uint32_t flags;
     			grant_ref_t ref;
     			uint32_t evtchn;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[4];
-    			#endif
+			#endif
     		} connect;
     		struct xen_pvcalls_release {
     			uint64_t id;
     			uint8_t reuse;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[7];
-    			#endif
+			#else
+			uint8_t pad[3];
+			#endif
     		} release;
     		struct xen_pvcalls_bind {
     			uint64_t id;
@@ -276,9 +278,9 @@ The format is defined as follows:
     		struct xen_pvcalls_listen {
     			uint64_t id;
     			uint32_t backlog;
-    			#ifdef CONFIG_X86_32
+			#ifndef CONFIG_X86_32
     			uint8_t pad[4];
-    			#endif
+			#endif
     		} listen;
     		struct xen_pvcalls_accept {
     			uint64_t id;
diff --git a/xen/include/public/io/pvcalls.h b/xen/include/public/io/pvcalls.h
index cb8171275c13..590c5e9e41aa 100644
--- a/xen/include/public/io/pvcalls.h
+++ b/xen/include/public/io/pvcalls.h
@@ -65,6 +65,9 @@ struct xen_pvcalls_request {
             uint32_t domain;
             uint32_t type;
             uint32_t protocol;
+#ifndef CONFIG_X86_32
+            uint8_t pad[4];
+#endif
         } socket;
         struct xen_pvcalls_connect {
             uint64_t id;
@@ -73,10 +76,18 @@ struct xen_pvcalls_request {
             uint32_t flags;
             grant_ref_t ref;
             uint32_t evtchn;
+#ifndef CONFIG_X86_32
+            uint8_t pad[4];
+#endif
         } connect;
         struct xen_pvcalls_release {
             uint64_t id;
             uint8_t reuse;
+#ifndef CONFIG_X86_32
+            uint8_t pad[7];
+#else
+            uint8_t pad[3];
+#endif
         } release;
         struct xen_pvcalls_bind {
             uint64_t id;
@@ -86,6 +97,9 @@ struct xen_pvcalls_request {
         struct xen_pvcalls_listen {
             uint64_t id;
             uint32_t backlog;
+#ifndef CONFIG_X86_32
+            uint8_t pad[4];
+#endif
         } listen;
         struct xen_pvcalls_accept {
             uint64_t id;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sat May 16 10:25:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 10:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZu0v-0007bg-V7; Sat, 16 May 2020 10:25:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=2yaN=66=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jZu0v-0007ba-ID
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 10:25:53 +0000
X-Inumbo-ID: 9eb5c9c2-975f-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9eb5c9c2-975f-11ea-b07b-bc764e2007e4;
 Sat, 16 May 2020 10:25:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ichr6SHMXmKGfKP3My2ejoF99cAitD44TOiqqpYyWsU=; b=XVPyebpkTKH3W9XvLGQEHWKUNS
 RrEwcw9xTwnWj/uSjy7M9H31cV5lcdIBkB4dHCz7DePknYoriyxj2R2ABUjdT8O0gqOJEVerFThVz
 B2bRAARYKqHQ0FUMJQYyORTsmcFAr437O4ecFgrs00iwJErePmZBLZexe2TgdsPP7BH0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jZu0s-0004Vn-Hy; Sat, 16 May 2020 10:25:50 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jZu0s-0000xe-BO; Sat, 16 May 2020 10:25:50 +0000
Subject: Re: [PATCH 6/7] xen/guest_access: Consolidate guest access helpers in
 xen/guest_access.h
To: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200404131017.27330-1-julien@xen.org>
 <20200404131017.27330-7-julien@xen.org>
 <e2588f6e-1f13-b66f-8e3d-b8568f67b62a@suse.com>
 <041a9f9f-cc9e-eac5-cdd2-555fb1c88e6f@xen.org>
 <cf6c0e0b-ade0-587f-ea0e-80b02b21b1a9@suse.com>
 <c8e66108-7ac1-fb51-841f-21886b731f04@xen.org>
 <f02f09ec-b643-8321-e235-ce0ee5526ab3@suse.com>
 <69deb8f4-bafe-734c-f6fa-de41ecf539d2@xen.org>
 <c38f4581-42a6-bb4a-1f84-066528edd3ee@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <aa209d94-2b39-7932-919b-9842f376e0dc@xen.org>
Date: Sat, 16 May 2020 11:25:48 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <c38f4581-42a6-bb4a-1f84-066528edd3ee@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ping 2.

On 29/04/2020 15:04, Julien Grall wrote:
> Hi,
> 
> Gentle ping. Any comments on the direction to take?
> 
> On 09/04/2020 10:28, Julien Grall wrote:
>>
>>
>> On 09/04/2020 09:06, Jan Beulich wrote:
>>> On 09.04.2020 10:01, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 09/04/2020 07:30, Jan Beulich wrote:
>>>>> On 09.04.2020 00:05, Julien Grall wrote:
>>>>> I don't see why a new port may not also want
>>>>> to go that route instead of the x86/Arm one.
>>>> I could accept that someone would want to reinvent a new ABI
>>>> from scratch for a hypothetical new arch. However, it would
>>>> be quite an effort to reinvent XEN_GUEST_HANDLE(). Chances are
>>>> RISC-V will simply re-use what Arm did, as Arm did with x86.
>>>>
>>>> I would like to avoid introducing a new asm-generic directory
>>>> with just one header in it. Maybe you have some other headers in
>>>> mind?
>>>
>>> I recall having wondered a few times whether we shouldn't use this
>>> concept elsewhere. One case iirc was bitops stuff. Looking over
>>> the Linux ones, some atomic and barrier fallback implementations
>>> may also sensibly live there, and there are likely more.
>>
>> In theory it makes sense, but in the current state of Xen, 
>> 'asm-generic' would mean headers common to Arm and x86. This is 
>> basically the same definition as for the 'xen' directory. So how do 
>> you draw the line of which files go where?
>>
>> To be honest, I don't think we can draw a line without a 3rd 
>> architecture. So I would recommend waiting until then to create an 
>> asm-generic directory.
>>
>> Meanwhile, I still think the consolidation in xen/ is useful as it 
>> makes the code easier to maintain. It is also going to make things 
>> easier for RISC-V (or any new arch) as they won't have to worry 
>> about duplication (if any).
>>
>> In the event they decide to pursue their own route, it is not going 
>> to be a massive pain to move part of xen/guest_access.h into 
>> asm-generic/guest_access.h and include the latter from 
>> {xen,asm}/guest_access.h.
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 16 10:40:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 10:40:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZuEN-0000IA-8f; Sat, 16 May 2020 10:39:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZuEM-0000I5-DK
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 10:39:46 +0000
X-Inumbo-ID: 8a52e01c-9761-11ea-a63a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a52e01c-9761-11ea-a63a-12813bfff9fa;
 Sat, 16 May 2020 10:39:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0j3Ct7m2qan3nuRhRGPTl1LX2gs8H/Z9plZVpQA77Eg=; b=wPl+7XJB2tAx9uJBf8RKZWrCr
 kCj/mz92L/6Ff2sz15DKigBLjTVRI8kNYf44/3aLLmElkCLLkpLICzvhX03trXDwyBlx7utkcdASP
 UDcikhMSzw9rxX9s18LSU52mk6Xwp2lPs1XD4MOgv4QwGd7rOUL/UeCbd+OFEl6iQ9hUE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZuED-0004nW-8n; Sat, 16 May 2020 10:39:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZuEC-00037v-Sq; Sat, 16 May 2020 10:39:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZuEC-0002bq-SH; Sat, 16 May 2020 10:39:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150202-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150202: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=1ae7efb388540adc1653a51a3bc3b2c9cef5ec1a
X-Osstest-Versions-That: linux=8c1684bb81f173543599f1848c29a2a3b1ee5907
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 10:39:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150202 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150202/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150186
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150186
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150186
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150186
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150186
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150186
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150186
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150186
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150186
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150186
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                1ae7efb388540adc1653a51a3bc3b2c9cef5ec1a
baseline version:
 linux                8c1684bb81f173543599f1848c29a2a3b1ee5907

Last test of basis   150186  2020-05-14 19:08:49 Z    1 days
Testing same since   150202  2020-05-15 15:26:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrey Konovalov <andreyknvl@google.com>
  Ben Chuang <ben.chuang@genesyslogic.com.tw>
  Brian Geffon <bgeffon@google.com>
  Chris Down <chris@chrisdown.name>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Dave Flogeras <dflogeras2@gmail.com>
  Grzegorz Kowal <custos.mentis@gmail.com>
  Johannes Weiner <hannes@cmpxchg.org>
  Leon Romanovsky <leon@kernel.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michal Hocko <mhocko@suse.com>
  Peter Xu <peterx@redhat.com>
  Raul E Rangel <rrangel@chromium.org>
  Renius Chen <renius.chen@genesyslogic.com.tw>
  Roman Penyaev <rpenyaev@suse.de>
  Samuel Zou <zou_wei@huawei.com>
  Sarthak Garg <sartgarg@codeaurora.org>
  Ulf Hansson <ulf.hansson@linaro.org>
  Vasily Averin <vvs@virtuozzo.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Vineeth Pillai <vineethrp@gmail.com>
  Waiman Long <longman@redhat.com>
  Yafang Shao <laoar.shao@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   8c1684bb81f1..1ae7efb38854  1ae7efb388540adc1653a51a3bc3b2c9cef5ec1a -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat May 16 11:55:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 11:55:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZvOv-0007GH-Vc; Sat, 16 May 2020 11:54:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=puAY=66=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jZvOu-0007GC-I0
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 11:54:44 +0000
X-Inumbo-ID: 0751fdb4-976c-11ea-a648-12813bfff9fa
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0751fdb4-976c-11ea-a648-12813bfff9fa;
 Sat, 16 May 2020 11:54:42 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id f13so4712350wmc.5
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 04:54:42 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=fF7M1/yKhrVf0CfvlpiRjCy6G705JL2S0jhVuRGaFL0=;
 b=ubp8aiKl2TPCWvq0Cw7r2br6q6Ah6kowzcZGy2/7BPXaXaxtkVX9cjB0NfRjsPHtwS
 c9IBPNX+HXVReJ3dNqDuI6+/jJ1urr+6CNj1uJPdA4QmLukN9xVNc28I+6HcdhWAjbKQ
 TJH6R8D3WG77+Yr/YCHZsmQRCW4NBw/EL7LP6U/U+5hjPLe/jn8bDOJxWK7VmdAAyQ3p
 yeO8sYjxa0sVKwu6xv3OKc1G4iGkiXp+RUtucREmDxIDyQLIvthi6MzVIR1zbYtoKIHX
 IL5RR5q1AbmknmgwZQ43RUsTOOMAYxIAa1P2mGZ3DtNol27MuKiKLjWEgEciT9NcZNSv
 S1jA==
X-Gm-Message-State: AOAM532sh3Z7rlOXYztSa4YLj8iFfMmRS2BB5ElBQpuUqkMlrNSSOMpM
 PZsxyAP6cw67sIFYSgp9eG/mVBcGh7w=
X-Google-Smtp-Source: ABdhPJzDlwTQoynpjiGk336W+qNXwlLBEAc6U/nouNw6HUvSRHwMJ9+QNwXvTMJOKejEW5c2toyeUQ==
X-Received: by 2002:a1c:1c6:: with SMTP id 189mr8730207wmb.47.1589630081811;
 Sat, 16 May 2020 04:54:41 -0700 (PDT)
Received: from localhost.localdomain (96.142.6.51.dyn.plus.net. [51.6.142.96])
 by smtp.gmail.com with ESMTPSA id
 c16sm7538150wrv.62.2020.05.16.04.54.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sat, 16 May 2020 04:54:41 -0700 (PDT)
From: Wei Liu <wl@xen.org>
To: Xen Development List <xen-devel@lists.xenproject.org>
Subject: [PATCH] CHANGELOG: add hypervisor framework and Hyper-V support
Date: Sat, 16 May 2020 12:54:38 +0100
Message-Id: <20200516115438.1740-1-wl@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Community Manager <community.manager@xenproject.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Wei Liu <wl@xen.org>
---
 CHANGELOG.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 554eeb6a1216..ccb5055c87b7 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -14,6 +14,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    the Xen hypercall interface or the viridian one.
  - Assorted pvshim performance and scalability improvements plus some bug
    fixes.
+ - Hypervisor framework to ease porting Xen to run on hypervisors.
+ - Initial support to run on Hyper-V.
 
 ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Sat May 16 12:23:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 12:23:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZvpz-0001a2-EM; Sat, 16 May 2020 12:22:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H74w=66=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZvpy-0001Zx-NK
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 12:22:42 +0000
X-Inumbo-ID: eff7eeea-976f-11ea-b07b-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eff7eeea-976f-11ea-b07b-bc764e2007e4;
 Sat, 16 May 2020 12:22:41 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 7gniSrxD044gQrv0A2IdFq/CQ6dICUMU3MO34jajP54ad8PV7+kNE95XKhEWNHCc776SeKjHOh
 PZhSj8hJvacu3IvAtcxIoAaZ0AACf7cv+c3HSNZEvApQURFV/Lc7sxQaYDwR3PAVlwlKmNFm0J
 lDEcqygUH4v7bXvrVGDjdQL6edovO0dcoxMAsbIGEjO88wfKeRaE/N6ey9af1VoBgae++QBfnh
 g4mZtvvCgSMM0tRsfUJYlySbIiRp1jWuFu9XyOrNZj58N44dQCx+JCf/qCFXtPOIPODLV94UIq
 QxE=
X-SBRS: 2.7
X-MesageID: 17694874
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,398,1583211600"; d="scan'208";a="17694874"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/hvm: Fix memory leaks in hvm_copy_context_and_params()
Date: Sat, 16 May 2020 13:22:21 +0100
Message-ID: <20200516122221.5434-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Any error from hvm_save() or hvm_set_param() leaks the c.data allocation.

Spotted by Coverity.

Fixes: 353744830 "x86/hvm: introduce hvm_copy_context_and_params"
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Tamas K Lengyel <tamas@tklengyel.com>

This was the XenServer internal Coverity.  The public one doesn't appear to
have spotted the issue, so no Coverity-ID tag for the fix.
---
 xen/arch/x86/hvm/hvm.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 814b7020d8..0a3797ef6e 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -5318,7 +5318,7 @@ int hvm_copy_context_and_params(struct domain *dst, struct domain *src)
         return -ENOMEM;
 
     if ( (rc = hvm_save(src, &c)) )
-        return rc;
+        goto out;
 
     for ( i = 0; i < HVM_NR_PARAMS; i++ )
     {
@@ -5328,11 +5328,13 @@ int hvm_copy_context_and_params(struct domain *dst, struct domain *src)
             continue;
 
         if ( (rc = hvm_set_param(dst, i, value)) )
-            return rc;
+            goto out;
     }
 
     c.cur = 0;
     rc = hvm_load(dst, &c);
+
+ out:
     vfree(c.data);
 
     return rc;
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Sat May 16 12:56:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 12:56:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZwMc-0004Lz-0r; Sat, 16 May 2020 12:56:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZwMa-0004Lu-VK
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 12:56:25 +0000
X-Inumbo-ID: a1ae2af6-9774-11ea-a652-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a1ae2af6-9774-11ea-a652-12813bfff9fa;
 Sat, 16 May 2020 12:56:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=055vEI8l+IgYTt2fTsRhav3R2ncg4zLTKFAcT3sN9ok=; b=KLsaLU+F0S2kOmqANyCDiqaxt
 /UxE11Qm4vzYF0XJcop2F8WJRtftvRCzRPUDKxdI8kK/r7HjtrWWlhPH9hPv3KSHY9SKK+EDMt3oq
 PTSO/4jIyPDHomRxSpoteg1iSpnZkLty9cl1UGT6ifnQDoZxVx8+VBaZini2yBWfftB10=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZwMS-0007Xr-QS; Sat, 16 May 2020 12:56:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZwMS-0008UA-JR; Sat, 16 May 2020 12:56:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZwMS-0005Xc-It; Sat, 16 May 2020 12:56:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150211-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150211: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start/debian.repeat:fail:regression
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=debe78ce14bf8f8940c2bdf3ef387505e9e035a9
X-Osstest-Versions-That: qemuu=0db949f1810f4d497762d57d8db6f219c0607529
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 12:56:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150211 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150211/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat fail REGR. vs. 150194

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150194

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150194
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150194
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150194
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150194
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150194
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                debe78ce14bf8f8940c2bdf3ef387505e9e035a9
baseline version:
 qemuu                0db949f1810f4d497762d57d8db6f219c0607529

Last test of basis   150194  2020-05-15 11:32:28 Z    1 days
Testing same since   150211  2020-05-16 05:06:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Changbin Du <changbin.du@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Joseph Myers <joseph@codesourcery.com>
  Laurent Vivier <laurent@vivier.eu>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 392 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 16 14:43:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 14:43:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZy1b-0005t1-FQ; Sat, 16 May 2020 14:42:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3AUN=66=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jZy1a-0005sw-7q
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 14:42:50 +0000
X-Inumbo-ID: 82f2d2c4-9783-11ea-9887-bc764e2007e4
Received: from mail-ej1-x62d.google.com (unknown [2a00:1450:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82f2d2c4-9783-11ea-9887-bc764e2007e4;
 Sat, 16 May 2020 14:42:48 +0000 (UTC)
Received: by mail-ej1-x62d.google.com with SMTP id se13so4799191ejb.9
 for <xen-devel@lists.xenproject.org>; Sat, 16 May 2020 07:42:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=74KTbtZzpez+dpiBE5+rnwJMvKyB20ZRlwXvxpPfhCg=;
 b=TG3R+CBMNWw3ZQEGKXfClHSJEgAnPzJJPGShhpnXOtjI6G2+KG1kf1/WUy1gEikvgH
 2w27UJ4oRsT8qxNlTFi3kXS/5SW/YIesq+/dNuBzWYNtEJCDHI6DGcXEJYVvBFdX5Lzi
 kq0XQxq+OP+mz3o/ijOO58FdGh7dBbZUr8cTZuP5iPXY7WnVELJVbvh2bu3FWNNIwBvv
 3tHl/myeHlC9QguqjpFbyQHUbLdbXX3vRMac/rODF9IFIA1bDFXgNJX21QC7O1DPyNAv
 v0AlH+1DavDBx+U/hGA5x115Jdt/0NqJyjI6LqfUvR8iDDqoqk4/bxenB3BTIbAO9h98
 95sA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=74KTbtZzpez+dpiBE5+rnwJMvKyB20ZRlwXvxpPfhCg=;
 b=HeH47AHhRmT86D8EmyBBJcnUz4FC4EXRqXaphoX7Airx2XECVMtGbuug/4jfvFM5fl
 fqkXUkcBmz6ZGlNqYXfL7vWSMH5QyIDy3pVgXjP+Ltmaa8T7314EOu1gLa0iqWjEo+aZ
 +lp3mAs9cnZ07aQRrrRefyIL5wd1eOjksnOfYSpna1A0FQxaGtxrtMzAhVVpPlL4AuzB
 4PIRpZIQFc4UKrxM7d6SGLBVvQWzGENWsvcj32JcCrtqaGw6StBZoBM54sXKXX1yvw/+
 77QZafHBd9yDBwe0NrPUJZc7zE9Eqj/LgKIDOEmb/D4haVHP+LtgRUKtTypkReUTcrBU
 AQcA==
X-Gm-Message-State: AOAM5309B6i8Y9lbXt8nOO9kBa7HhnS03/7FM931Zd9vefWQ+msikoXv
 Ui6SHlxhF6srz4S0uNid5jPrgoDWnsY=
X-Google-Smtp-Source: ABdhPJx73Nm5GC76zpyBqI5DfdWrbwiIE0EuwNzEj7GMceadc+cfSQj1wvp2SSdhWEIHagXt0SUSPg==
X-Received: by 2002:a17:906:f98f:: with SMTP id
 li15mr7435186ejb.259.1589640167786; 
 Sat, 16 May 2020 07:42:47 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id c15sm707212ejx.62.2020.05.16.07.42.46
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Sat, 16 May 2020 07:42:47 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Wei Liu'" <wl@xen.org>,
 "'Xen Development List'" <xen-devel@lists.xenproject.org>
References: <20200516115438.1740-1-wl@xen.org>
In-Reply-To: <20200516115438.1740-1-wl@xen.org>
Subject: RE: [PATCH] CHANGELOG: add hypervisor framework and Hyper-V support
Date: Sat, 16 May 2020 15:42:44 +0100
Message-ID: <000001d62b90$4416fad0$cc44f070$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQIQi6qGIn4T8KibJ+ZyHrDtic922Kg2CP9g
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Community Manager' <community.manager@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Wei Liu <wl@xen.org>
> Sent: 16 May 2020 12:55
> To: Xen Development List <xen-devel@lists.xenproject.org>
> Cc: Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>; Community Manager
> <community.manager@xenproject.org>
> Subject: [PATCH] CHANGELOG: add hypervisor framework and Hyper-V support
> 
> Signed-off-by: Wei Liu <wl@xen.org>

Acked-by: Paul Durrant <paul@xen.org>

> ---
>  CHANGELOG.md | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 554eeb6a1216..ccb5055c87b7 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -14,6 +14,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>     the Xen hypercall interface or the viridian one.
>   - Assorted pvshim performance and scalability improvements plus some bug
>     fixes.
> + - Hypervisor framework to ease porting Xen to run on hypervisors.
> + - Initial support to run on Hyper-V.
> 
>  ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
> 
> --
> 2.20.1




From xen-devel-bounces@lists.xenproject.org Sat May 16 14:56:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 14:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZyEG-0006uf-L2; Sat, 16 May 2020 14:55:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jZyEF-0006ua-3W
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 14:55:55 +0000
X-Inumbo-ID: 5482b088-9785-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5482b088-9785-11ea-9887-bc764e2007e4;
 Sat, 16 May 2020 14:55:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ijJysNuDnjHy0gIP4HjtCdrEq4VsWZU258gEbH9uv5c=; b=ZUZKTROHtoeAFHqPNifhnwHNz
 gl4GrZeSVfvIGbBhPspdZXhiYpj/VOb/A4YP+lkGfPHq9mXb7oPAo8pBHgwE/Jl4WCNlaYdAZQIE1
 1s0YUdiO/cLPqZbhhAXcvp0vJAM2pffZtP8WM3xA6UAK70lM3uuZ7P/AcX53fGn650mm0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZyE8-0001ba-Q6; Sat, 16 May 2020 14:55:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jZyE8-0006F3-HW; Sat, 16 May 2020 14:55:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jZyE8-0007dl-Go; Sat, 16 May 2020 14:55:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150214-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150214: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=22970c0e0c9e4ffd51816c1cc7e4aa19800d3d09
X-Osstest-Versions-That: xen=57880053dd24012e9f59c23b630fefe07e15dc49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 14:55:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150214 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150214/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  22970c0e0c9e4ffd51816c1cc7e4aa19800d3d09
baseline version:
 xen                  57880053dd24012e9f59c23b630fefe07e15dc49

Last test of basis   150209  2020-05-15 20:01:39 Z    0 days
Testing same since   150214  2020-05-16 12:00:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Paul Durrant <paul@xen.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   57880053dd..22970c0e0c  22970c0e0c9e4ffd51816c1cc7e4aa19800d3d09 -> smoke


From xen-devel-bounces@lists.xenproject.org Sat May 16 16:19:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 16:19:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jZzWW-0006jr-Ls; Sat, 16 May 2020 16:18:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H74w=66=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jZzWV-0006jm-5W
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 16:18:51 +0000
X-Inumbo-ID: ed292e88-9790-11ea-9887-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed292e88-9790-11ea-9887-bc764e2007e4;
 Sat, 16 May 2020 16:18:50 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: j41tsxKUU1prgfgu90Dch55mEWG+4gXxGu3+eN8ZjGrlYLLlNJVhY+L9f3YuXp1OK61ylP2Nno
 hOeXbSNBPyysUoArbyZZGGKIXQIZMe0m0fClvFlmUczWiaLi/aI6Th6Y4JVSM14aaRwselZMJo
 sEp5HSJPhqblEj4BmLAdKyWqaq0phzrhfBvcctV7cp4O2T7khJ8hWawnKDF+oJsNfNCpG4gQ5a
 FYrCB2s1iM1ZuFo66GfKrj571l+ZTDu6tVN3MwuU25+MOtpvw7dZw8AF2JeAU93RRg2VXUx6xM
 Rhs=
X-SBRS: None
X-MesageID: 17967882
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,398,1583211600"; d="scan'208";a="17967882"
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
Date: Sat, 16 May 2020 17:18:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200515215335.GA9991@antioche.eu.org>
Content-Type: text/plain; charset="windows-1252"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/05/2020 22:53, Manuel Bouyer wrote:
> On Fri, May 15, 2020 at 10:38:13PM +0100, Andrew Cooper wrote:
>>> [...]
>>> Does it help ?
>> Yes and no. This is collateral damage of an earlier bug.
>>
>> What failed was xen_init_fv()'s
>>
>>  shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
>>   PROT_READ|PROT_WRITE, ioreq_pfn);
>>  if (shared_page == NULL) {
>>   fprintf(logfile, "map shared IO page returned error %d\n", errno);
>>   exit(-1);
>>  }
>>
>> because we've ended up with a non-NULL pointer with no mapping behind
>> it, hence the SIGSEGV for the first time we try to use the pointer.
>>
>> Whatever logic is behind xc_map_foreign_range() should have returned
>> NULL or a real mapping.
> What's strange is that the mapping is validated, by mapping it in
> the dom0 kernel space. But when we try to remap it in the process's
> space, it fails.

Hmm - this sounds like a kernel bug, I'm afraid.

>> ioreq_pfn ought to be something just below the 4G boundary, and the
>> toolstack ought to put memory there in the first place. Can you
>> identify what value ioreq_pfn has,
> You mean, something like:
> (gdb) print/x ioreq_pfn
> $2 = 0xfeff0
>
>> and whether it matches up with the
>> magic gfn range as reported by `xl create -vvv` for the guest?
> I guess you mean
> xl -vvv create
> the output is attached
>
> The kernel says it tries to map 0xfeff0000 to virtual address 0x79656f951000.

The value looks right, and the logs look normal.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat May 16 17:13:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 17:13:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ja0NE-0003WC-0g; Sat, 16 May 2020 17:13:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ja0NC-0003W7-Ev
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 17:13:18 +0000
X-Inumbo-ID: 88541862-9798-11ea-a6c3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88541862-9798-11ea-a6c3-12813bfff9fa;
 Sat, 16 May 2020 17:13:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=G6MmSYwq9BIjZTW5QTHlpS7Exg0nkHkRro9tkSi5RJM=; b=Sg30w9Re5/us4CWNRDBfvbE89
 u+iy3IFDUTq3q0VlscGnsFNYZoApYI536IT3GGIaOhRumaoDho6cxrTAmQv3i2efjO8EfsCuBeLCh
 QfIKXuSfA7XogSspbsHZ2HPE0JyKL7NI/gvz/EebALBvuwhT5Ih6jZxUS/Yi2okOmBGgA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja0N9-0004v9-VL; Sat, 16 May 2020 17:13:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja0N9-00064O-MJ; Sat, 16 May 2020 17:13:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ja0N9-0006wL-Lf; Sat, 16 May 2020 17:13:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150217-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150217: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=99266e31832fb4a4da5c9b8163328be350d1261d
X-Osstest-Versions-That: xen=22970c0e0c9e4ffd51816c1cc7e4aa19800d3d09
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 17:13:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150217 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150217/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  99266e31832fb4a4da5c9b8163328be350d1261d
baseline version:
 xen                  22970c0e0c9e4ffd51816c1cc7e4aa19800d3d09

Last test of basis   150214  2020-05-16 12:00:45 Z    0 days
Testing same since   150217  2020-05-16 15:01:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   22970c0e0c..99266e3183  99266e31832fb4a4da5c9b8163328be350d1261d -> smoke


From xen-devel-bounces@lists.xenproject.org Sat May 16 19:02:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 19:02:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ja24q-0004zT-3S; Sat, 16 May 2020 19:02:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=H74w=66=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ja24p-0004zO-JX
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 19:02:27 +0000
X-Inumbo-ID: c83f51b2-97a7-11ea-a6f8-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c83f51b2-97a7-11ea-a6f8-12813bfff9fa;
 Sat, 16 May 2020 19:02:27 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GVkynst+cvXN7JmYxKUw0wzgjon4EQJhl7SNg2qSyhvX70uvy0IAA4Kk3pM3YGTjihGQMCxt4H
 eHBh9s3M+R+23UgGJ+1DMiRlhzCS1sIqeZWGrl5T9WU9EEjzGRbbbPB6aYFjv0dcJwtmr/lShz
 aBi+n9AMma00cEQQkisy+MnfjifUmaIQ3yQSZW2HwsT/W9lqCFKw2FvpPNehW5P9P4gB1/AZwP
 n5pQAfQcvwM8sEWXk0b7pKJFeUGHAwFYfpuWF3SskIX5XXl/26ZjZZCGs4o06Kwbv4horo+KUn
 HUc=
X-SBRS: None
X-MesageID: 17735596
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,400,1583211600"; d="scan'208";a="17735596"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/hvm: Fix shifting in stdvga_mem_read()
Date: Sat, 16 May 2020 20:02:11 +0100
Message-ID: <20200516190211.4120-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

stdvga_mem_read() has a return type of uint8_t, which promotes to int rather
than unsigned int.  Shifting by 24 may hit the sign bit.

Spotted by Coverity.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/stdvga.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/stdvga.c b/xen/arch/x86/hvm/stdvga.c
index bd398dbb1b..e2675139e7 100644
--- a/xen/arch/x86/hvm/stdvga.c
+++ b/xen/arch/x86/hvm/stdvga.c
@@ -322,7 +322,7 @@ static int stdvga_mem_read(const struct hvm_io_handler *handler,
         data = stdvga_mem_readb(addr);
         data |= stdvga_mem_readb(addr + 1) << 8;
         data |= stdvga_mem_readb(addr + 2) << 16;
-        data |= stdvga_mem_readb(addr + 3) << 24;
+        data |= (uint32_t)stdvga_mem_readb(addr + 3) << 24;
         break;
 
     case 8:
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Sat May 16 20:05:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 20:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ja33I-00026o-0p; Sat, 16 May 2020 20:04:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ja33G-00026j-E6
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 20:04:54 +0000
X-Inumbo-ID: 7df6968e-97b0-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7df6968e-97b0-11ea-9887-bc764e2007e4;
 Sat, 16 May 2020 20:04:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HwcuU26831rYNSwOuJGFgQ5x7Nw5DevnQYtnQeA/3P0=; b=vO1qs/5QPNxhbIWrWUVWhEUTY
 k18P3lfLbTzh4yXIs7Y2w2avDO0drVPxjIyHkyYoFAhVLhyVTVwuMgPm0pkoNrHB185a6R5kwoEU0
 xAAGzM+Ka0WZR1VvdK192ZPYcNJgHXtTgpDZkzJDebMiBy133snUprXxobykNOpSgoLJo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja338-0008VB-C0; Sat, 16 May 2020 20:04:46 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja338-0007D3-3D; Sat, 16 May 2020 20:04:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ja338-0008RP-1N; Sat, 16 May 2020 20:04:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150212-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150212: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=57880053dd24012e9f59c23b630fefe07e15dc49
X-Osstest-Versions-That: xen=5115b437eef595ce77f05bfc02626e31e263e965
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 20:04:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150212 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150212/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150192

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150192
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150192
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150192
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150192
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150192
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150192
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150192
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150192
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150192
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150192
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  57880053dd24012e9f59c23b630fefe07e15dc49
baseline version:
 xen                  5115b437eef595ce77f05bfc02626e31e263e965

Last test of basis   150192  2020-05-15 11:25:47 Z    1 days
Testing same since   150212  2020-05-16 06:37:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Hongyan Xia <hongyxia@amazon.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>
  Stewart Hildebrand <stewart.hildebrand@dornerworks.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5115b437ee..57880053dd  57880053dd24012e9f59c23b630fefe07e15dc49 -> master


From xen-devel-bounces@lists.xenproject.org Sat May 16 20:57:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 16 May 2020 20:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ja3sJ-0006tu-OO; Sat, 16 May 2020 20:57:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zWuL=66=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ja3sI-0006tp-QO
 for xen-devel@lists.xenproject.org; Sat, 16 May 2020 20:57:38 +0000
X-Inumbo-ID: dfd7baa2-97b7-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfd7baa2-97b7-11ea-9887-bc764e2007e4;
 Sat, 16 May 2020 20:57:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=554EpISedtJjL7i9cYypIBVqqMWdJQZlg6r7uzRqVp8=; b=ij+CmKo3NztVamvxz6Mv+XSGV
 h7nfi2eckvFJ2p/m06YIPtutVtHHGD09Q11LvhAHq51F1kjSHmlGje0NUFHB3rrV1c0ltYtdrxJs4
 ZBFZE91QsROyttRCdDx5CUkbMYG4FulqeO8RHPYYSvf5zDhUhM4BMXAZsRW164jBbLAVw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja3sH-0001As-Ak; Sat, 16 May 2020 20:57:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja3sG-0002TN-US; Sat, 16 May 2020 20:57:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ja3sG-0000RH-Tn; Sat, 16 May 2020 20:57:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150218-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150218: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
X-Osstest-Versions-That: xen=99266e31832fb4a4da5c9b8163328be350d1261d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 16 May 2020 20:57:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150218 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150218/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f
baseline version:
 xen                  99266e31832fb4a4da5c9b8163328be350d1261d

Last test of basis   150217  2020-05-16 15:01:49 Z    0 days
Testing same since   150218  2020-05-16 18:01:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Paul Durrant <paul@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   99266e3183..664e1bc12f  664e1bc12f8658da124a4eff7a8f16da073bd47f -> smoke


From xen-devel-bounces@lists.xenproject.org Sun May 17 00:00:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 00:00:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ja6j8-0007au-Fq; Sun, 17 May 2020 00:00:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ja6j6-0007ap-Bg
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 00:00:20 +0000
X-Inumbo-ID: 5e835640-97d1-11ea-a730-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e835640-97d1-11ea-a730-12813bfff9fa;
 Sun, 17 May 2020 00:00:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qRLLLhkUd5hgX4tzRlDmqnRBAr8a9lLt48UzieE9/ZI=; b=Fc4LL90mQcllpjC0tm0wPTvoF
 GnmKw7y6mknGQABzO0gALKt6JqjDBV5fh25uAu89ufQWOTATPYh/mddNKXQEa0SFZgkfYZkNikSLo
 O8flQtP1eN4xJYGP+o+pZ/ex0jcFEVb5JTrxC1hS0EnJHHrgLYGyNF2hzGlI3J6XqUK8c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja6it-0005J5-25; Sun, 17 May 2020 00:00:07 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja6is-0003R9-Et; Sun, 17 May 2020 00:00:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ja6is-0007CZ-EI; Sun, 17 May 2020 00:00:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150213-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150213: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=12bf0b632ed090358cbf03e323e5342212d0b2e4
X-Osstest-Versions-That: linux=1ae7efb388540adc1653a51a3bc3b2c9cef5ec1a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 00:00:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150213 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150213/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 150202

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150202
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150202
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150202
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150202
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150202
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150202
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150202
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150202
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150202
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                12bf0b632ed090358cbf03e323e5342212d0b2e4
baseline version:
 linux                1ae7efb388540adc1653a51a3bc3b2c9cef5ec1a

Last test of basis   150202  2020-05-15 15:26:20 Z    1 days
Testing same since   150213  2020-05-16 10:42:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Akeem G Abodunrin <akeem.g.abodunrin@intel.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Alex Elder <elder@linaro.org>
  Alex Sierra <alex.sierra@amd.com>
  Alexei Starovoitov <ast@kernel.org>
  Amol Grover <frextrite@gmail.com>
  Amy Shih <amy.shih@advantech.com.tw>
  Anders Roxell <anders.roxell@linaro.org>
  Andrew Oakley <andrew@adoakley.name>
  Andrii Nakryiko <andriin@fb.com>
  Arnd Bergmann <arnd@arndb.de>
  Bernard Zhao <bernard@vivo.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chris Chiu <chiu@endlessm.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christoph Paasch <cpaasch@apple.com>
  Chuck Lever <chuck.lever@oracle.com>
  Chuhong Yuan <hslester96@gmail.com>
  Clay McClure <clay@daemons.net>
  Colin Xu <colin.xu@intel.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Drake <drake@endlessm.com>
  Dave Airlie <airlied@redhat.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  David Wysochanski <dwysocha@redhat.com>
  Denis V. Lunev <den@openvz.org>
  Eric Dumazet <edumazet@google.com>
  Evan Quan <evan.quan@amd.com>
  Felix Kuehling <Felix.Kuehling@amd.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Florian Westphal <fw@strlen.de>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Heiner Kallweit <hkallweit1@gmail.com>
  Imre Deak <imre.deak@intel.com>
  Ioana Ciornei <ioana.ciornei@nxp.com>
  Iris Liu <iris@onechronos.com>
  J. Bruce Fields <bfields@redhat.com>
  Jack Morgenstein <jackm@dev.mellanox.co.il>
  Jacob Keller <jacob.e.keller@intel.com>
  Jakub Kicinski <kuba@kernel.org>
  James Morris <jamorris@linux.microsoft.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jesus Ramos <jesus-ramos@live.com>
  Jian-Hong Pan <jian-hong@endlessm.com>
  John Fastabend <john.fastabend@gmail.com>
  John Stultz <john.stultz@linaro.org>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Julian Wiedmann <jwi@linux.ibm.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kees Cook <keescook@chromium.org>
  Kefeng Wang <wangkefeng.wang@huawei.com>
  Kelly Littlepage <kelly@onechronos.com>
  Kevin Lo <kevlo@kevlo.org>
  Lei Xue <carmark.dlut@gmail.com>
  Leo (Hanghong) Ma <hanghong.ma@amd.com>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luo bin <luobin9@huawei.com>
  Maciej Żenczykowski <maze@google.com>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Maor Gottlieb <maorg@mellanox.com>
  Martin KaFai Lau <kafai@fb.com>
  Masami Hiramatsu <mhiramat@kernel.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matteo Croce <mcroce@redhat.com>
  Matthieu Baerts <matthieu.baerts@tessares.net>
  Maxime Ripard <maxime@cerno.tech>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Mike Pozulp <pozulp.kernel@gmail.com>
  Neil Armstrong <narmstrong@baylibre.com>
  NeilBrown <neilb@suse.de>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Oliver Neukum <oneukum@suse.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Abeni <pabeni@redhat.com>
  Paul Blakey <paulb@mellanox.com>
  Paul Moore <paul@paul-moore.com>
  Peter Jones <pjones@redhat.com>
  Phil Sutter <phil@nwl.cc>
  Po-Hsu Lin <po-hsu.lin@canonical.com>
  Potnuri Bharat Teja <bharat@chelsio.com>
  Richard Cochran <richardcochran@gmail.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Roi Dayan <roid@mellanox.com>
  Samu Nuutamo <samu.nuutamo@vincit.fi>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shannon Nelson <snelson@pensando.io>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Ser <contact@emersion.fr>
  Soheil Hassas Yeganeh <soheil@google.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Sumanth Korikkar <sumanthk@linux.ibm.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tejun Heo <tj@kernel.org>
  Thierry Reding <thierry.reding@gmail.com>
  Thierry Reding <treding@nvidia.com>
  Tom St Denis <tom.stdenis@amd.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Tuong Lien <tuong.t.lien@dektech.com.au>
  Ursula Braun <ubraun@linux.ibm.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Minet <v.minet@criteo.com>
  Vinod Koul <vkoul@kernel.org>
  Wang Wenhu <wenhu.wang@vivo.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Willem de Bruijn <willemb@google.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yang Yingliang <yangyingliang@huawei.com>
  Ying Xue <ying.xue@windriver.com>
  Yishai Hadas <yishaih@mellanox.com>
  Yonghong Song <yhs@fb.com>
  Zefan Li <lizefan@huawei.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   1ae7efb38854..12bf0b632ed0  12bf0b632ed090358cbf03e323e5342212d0b2e4 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun May 17 00:29:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 00:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ja7BI-0001H7-27; Sun, 17 May 2020 00:29:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1ja7BG-0001H2-Uv
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 00:29:27 +0000
X-Inumbo-ID: 70be211a-97d5-11ea-a732-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70be211a-97d5-11ea-a732-12813bfff9fa;
 Sun, 17 May 2020 00:29:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=f4cyaJxX1gHvuTVMrqFpOCNXAFcyv17wvJgBEOPZGAA=; b=Y2ryInq1ep+k1mXjNQehNUzkq
 JqJuiRTRzTWAa0Fu+Mz0LVOgmgbTGB5BMY+yedGPY55mUXZW3s/esJRZ/WPCEPA0xp7UDQx7UkNYu
 nvswMIGWbVh7PiCmnB2OeVU2MwzdENMqmCzQULM5K8gbekDZhnKMOFSSNq335NhLJHlRU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja7B5-00061x-Uv; Sun, 17 May 2020 00:29:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ja7B5-0005GF-Np; Sun, 17 May 2020 00:29:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1ja7B5-0005Zf-My; Sun, 17 May 2020 00:29:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150215-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150215: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
 qemu-mainline:test-armhf-armhf-libvirt-raw:guest-start/debian.repeat:fail:heisenbug
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=debe78ce14bf8f8940c2bdf3ef387505e9e035a9
X-Osstest-Versions-That: qemuu=0db949f1810f4d497762d57d8db6f219c0607529
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 00:29:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150215 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150215/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds   16 guest-localmigrate fail in 150211 pass in 150215
 test-armhf-armhf-libvirt-raw 15 guest-start/debian.repeat fail in 150211 pass in 150215
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 150211

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150194
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150194
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150194
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150194
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150194
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150194
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                debe78ce14bf8f8940c2bdf3ef387505e9e035a9
baseline version:
 qemuu                0db949f1810f4d497762d57d8db6f219c0607529

Last test of basis   150194  2020-05-15 11:32:28 Z    1 days
Testing same since   150211  2020-05-16 05:06:20 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Changbin Du <changbin.du@gmail.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Joseph Myers <joseph@codesourcery.com>
  Laurent Vivier <laurent@vivier.eu>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   0db949f181..debe78ce14  debe78ce14bf8f8940c2bdf3ef387505e9e035a9 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun May 17 07:01:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 07:01:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaDHy-0005Am-MY; Sun, 17 May 2020 07:00:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaDHx-0005Ah-Uw
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 07:00:45 +0000
X-Inumbo-ID: 210f125a-980c-11ea-a75b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 210f125a-980c-11ea-a75b-12813bfff9fa;
 Sun, 17 May 2020 07:00:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=mqnGMcknJgT6jS30x5IuoHYqcyNUkd4Vi0C+kuQPTZU=; b=w/S+v1HXGFzQJg9bGeTnqxUXw
 lOPOfg+pm4f9zcNS60cz453LQwF/JpyjGtV7yuoS2qRuVcce+I0vIK7p7pYi49hUIGpamq3nVHCUY
 xlq6k9ASRjCLQThryBAST3DuFupovZ2MHkIytJ9O+PmHBjPCyV4DVxBTJ5NPhrWoZaqgk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaDHw-0008Dn-Fu; Sun, 17 May 2020 07:00:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaDHw-0008LM-3l; Sun, 17 May 2020 07:00:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaDHw-00042T-3E; Sun, 17 May 2020 07:00:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150222-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150222: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=144dfe4215902b40a9d17fdb326054bbd8e07563
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 07:00:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150222 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150222/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              144dfe4215902b40a9d17fdb326054bbd8e07563
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  121 days
Failing since        146211  2020-01-18 04:18:52 Z  120 days  111 attempts
Testing same since   150210  2020-05-16 04:20:17 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18353 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 17 08:28:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 08:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaEe2-0004b9-3v; Sun, 17 May 2020 08:27:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaEe0-0004b4-K0
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 08:27:36 +0000
X-Inumbo-ID: 3dbb0b83-9818-11ea-a766-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3dbb0b83-9818-11ea-a766-12813bfff9fa;
 Sun, 17 May 2020 08:27:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1EuUL6wGrNsGZJABEEyTmqGTBMEesJ2V7boO0cZ0j0A=; b=Sb9Aq2IdMQDwXphp5O+4TnKoa
 Kbj7ygoqnP8IOzOgT28l6i2R1UYKyquFnlZg7eAsQzpdX6hzz+s5L56kOMQklv8Uj3vWOGAAKwSqG
 gBm7FSrZ1XyVngjD3tEmvGVT+a7SQwNwxaPza6MJZN8Je53TyCFjxUeznb+flgFEMvD0Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaEds-00025Q-Bd; Sun, 17 May 2020 08:27:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaEds-0002sb-0K; Sun, 17 May 2020 08:27:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaEdr-000078-Vi; Sun, 17 May 2020 08:27:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150220-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150220: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=99266e31832fb4a4da5c9b8163328be350d1261d
X-Osstest-Versions-That: xen=57880053dd24012e9f59c23b630fefe07e15dc49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 08:27:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150220 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150220/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150212
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150212
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150212
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150212
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150212
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150212
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150212
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150212
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150212
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150212
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  99266e31832fb4a4da5c9b8163328be350d1261d
baseline version:
 xen                  57880053dd24012e9f59c23b630fefe07e15dc49

Last test of basis   150212  2020-05-16 06:37:26 Z    1 days
Testing same since   150220  2020-05-16 20:07:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Paul Durrant <paul@xen.org>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   57880053dd..99266e3183  99266e31832fb4a4da5c9b8163328be350d1261d -> master


From xen-devel-bounces@lists.xenproject.org Sun May 17 09:31:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 09:31:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaFdU-0002PU-PD; Sun, 17 May 2020 09:31:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rqg2=67=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jaFdT-0002PP-En
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 09:31:07 +0000
X-Inumbo-ID: 21754fe2-9821-11ea-b9cf-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21754fe2-9821-11ea-b9cf-bc764e2007e4;
 Sun, 17 May 2020 09:31:05 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04H9UxNV023817;
 Sun, 17 May 2020 11:30:59 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 28AED2810; Sun, 17 May 2020 11:30:59 +0200 (CEST)
Date: Sun, 17 May 2020 11:30:59 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200517093059.GD1820@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [151.127.5.145]);
 Sun, 17 May 2020 11:30:59 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, May 16, 2020 at 05:18:45PM +0100, Andrew Cooper wrote:
> On 15/05/2020 22:53, Manuel Bouyer wrote:
> > On Fri, May 15, 2020 at 10:38:13PM +0100, Andrew Cooper wrote:
> >>> [...]
> >>> Does it help ?
> >> Yes and no. This is collateral damage of an earlier bug.
> >>
> >> What failed was xen_init_fv()'s
> >>
> >>  shared_page = xc_map_foreign_range(xc_handle, domid, XC_PAGE_SIZE,
> >>   PROT_READ|PROT_WRITE, ioreq_pfn);
> >>  if (shared_page == NULL) {
> >>   fprintf(logfile, "map shared IO page returned error %d\n", errno);
> >>   exit(-1);
> >>  }
> >>
> >> because we've ended up with a non-NULL pointer with no mapping behind
> >> it, hence the SIGSEGV for the first time we try to use the pointer.
> >>
> >> Whatever logic is behind xc_map_foreign_range() should have returned
> >> NULL or a real mapping.
> > What's strange is that the mapping is validated, by mapping it in
> > the dom0 kernel space. But when we try to remap in in the process's
> > space, it fails.
> 
> Hmm - this sounds like a kernel bug I'm afraid.

No, I don't think it is. It works with Xen 4.11, and it works with 4.13 for
PV/PVH domUs. Maybe some flag in the userland PTE is mandatory for 4.13 but
not for 4.11, but nothing obvious stands out.

The difference could be that the kernel page tables are active when
mapping the foreign page into dom0's kernel space, but the
user process's page tables are not (obviously, as we're in the kernel
when doing the mapping).

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sun May 17 09:55:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 09:55:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaG0W-0004JK-Gq; Sun, 17 May 2020 09:54:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaG0V-0004JF-2f
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 09:54:55 +0000
X-Inumbo-ID: 7542219c-9824-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7542219c-9824-11ea-b07b-bc764e2007e4;
 Sun, 17 May 2020 09:54:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=0UR3E0E7iZ7SebAjvZysDHJJ0E2PklQuTN55uHk1mhs=; b=hA2CT2qzgvvQdf9u0Lh1teyzj
 xiGF8cyKhYx8hgMKMiFtqB1mxubZkwpingRsnVF8nigqRksUf7w742ODxp6YmVoPUqc/ZvLxRAuXQ
 eYTW4gqCphukyDanBm3LlPEPIaSpEfd+PKW+uQmT7DGhNDP0KnZ+HzgIU9p/WZX0tLX4M=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaG0T-0003tS-ED; Sun, 17 May 2020 09:54:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaG0Q-0005ei-Tp; Sun, 17 May 2020 09:54:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaG0Q-0004Rb-T7; Sun, 17 May 2020 09:54:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150224-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150224: all pass - PUSHED
X-Osstest-Versions-This: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
X-Osstest-Versions-That: xen=a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 09:54:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150224 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150224/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f
baseline version:
 xen                  a82582b1af6a4a57ca53bcfad9f71428cb5f9a54

Last test of basis   150157  2020-05-13 09:19:04 Z    4 days
Testing same since   150224  2020-05-17 09:19:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  George Dunlap <george.dunlap@citrix.com>
  Hongyan Xia <hongyxia@amazon.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Paul Durrant <paul@xen.org>
  Paul Durrant <pdurrant@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Sergey Dyasli <sergey.dyasli@citrix.com>
  Stefano Stabellini <sstabellini@kernel.org>
  Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a82582b1af..664e1bc12f  664e1bc12f8658da124a4eff7a8f16da073bd47f -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 17 11:35:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 11:35:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaHZH-0004tx-2A; Sun, 17 May 2020 11:34:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaHZF-0004ts-UC
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 11:34:54 +0000
X-Inumbo-ID: 6bf0e138-9832-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6bf0e138-9832-11ea-b9cf-bc764e2007e4;
 Sun, 17 May 2020 11:34:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=OxTwAVQ/nZLkA4w48ORd5ZE3D/wMvPllwovRVLmdoXE=; b=fo0tWobblJjK+TkIbjUjftEzI
 1mbb1UIC+7A4dLSPmzylckngNrVp0hlf4Twtv05CgMcrpfzALknJ2MJSj9ld0WhkL4QcaI/ASgGCG
 ARpbFqYZX4x3VwPpZDUiq+HdV7zyyLSMPE6gWcr1dMf5nUozE02b6I3SxLNRdSk0CYVKw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaHZC-0005zg-Ts; Sun, 17 May 2020 11:34:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaHZC-0001vN-Gt; Sun, 17 May 2020 11:34:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaHZC-0000kE-Fo; Sun, 17 May 2020 11:34:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150221-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150221: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=3d1c1e5931ce45b3a3f309385bbc00c78e9951c6
X-Osstest-Versions-That: linux=12bf0b632ed090358cbf03e323e5342212d0b2e4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 11:34:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150221 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150221/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 150213

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds   16 guest-start/debian.repeat fail blocked in 150213
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150213
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150213
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150213
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150213
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150213
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150213
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150213
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150213
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                3d1c1e5931ce45b3a3f309385bbc00c78e9951c6
baseline version:
 linux                12bf0b632ed090358cbf03e323e5342212d0b2e4

Last test of basis   150213  2020-05-16 10:42:39 Z    1 days
Testing same since   150221  2020-05-17 00:10:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Amit Singh Tomar <amittomer25@gmail.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ansuel Smith <ansuelsmth@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Douglas Anderson <dianders@chromium.org>
  Grace Kao <grace.kao@intel.com>
  Jens Axboe <axboe@kernel.dk>
  John Garry <john.garry@huawei.com>
  Keith Busch <kbusch@kernel.org>
  Light Hsieh <light.hsieh@mediatek.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Maulik Shah <mkshah@codeaurora.org>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Venkata Narendra Kumar Gutta <vnkgutta@codeaurora.org>
  Will Deacon <will@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   12bf0b632ed0..3d1c1e5931ce  3d1c1e5931ce45b3a3f309385bbc00c78e9951c6 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun May 17 13:16:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 13:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaJ8f-0005Yc-QR; Sun, 17 May 2020 13:15:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VT2+=67=yahoo.com=hack3rcon@srs-us1.protection.inumbo.net>)
 id 1jaJ8e-0005X0-7y
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 13:15:32 +0000
X-Inumbo-ID: 7bd56d2c-9840-11ea-9887-bc764e2007e4
Received: from sonic303-2.consmr.mail.bf2.yahoo.com (unknown [74.6.131.41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7bd56d2c-9840-11ea-9887-bc764e2007e4;
 Sun, 17 May 2020 13:15:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048;
 t=1589721330; bh=Ge/gHFIJDjB0LKhj6EBAtyXhKyAm2rHwT8NJxRHHmgM=;
 h=Date:From:Reply-To:To:Subject:References:From:Subject;
 b=rmIV3wkJl5uoQjG+H/453VPRHzxcswOy1FYBOjFupsbKEe8tvL4SDJT90LucGwpxQGoD0o9x/Tw963FawkQuS1PJiPyCMyy8tbQKYfGGvCZflAzc5+DO3Kg9yPNdHmp2zWWI6oQeF66rDgCYtjTUlbbJJJ0D8lTD9ENFN1iBg2rFynOcPhSFvZJFmCtJobRqBj9aX2dw2mB2PewV6nfSovuUGNAokWCIRtPhQNtTumiLYA3GAMXylZbLtvOd67Ow5xTH5oh5l9tf4XE+rG7LnDgFqK9oFnZJB0L49e/8qojQrXTiGdqnsE7gWBNgyTGgJ9wu6ZevE11Q3V5eKhmn7g==
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic303.consmr.mail.bf2.yahoo.com with HTTP; Sun, 17 May 2020 13:15:30 +0000
Date: Sun, 17 May 2020 13:15:29 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <1478987168.271548.1589721329818@mail.yahoo.com>
Subject: RoCE adapters and Xen.
MIME-Version: 1.0
Content-Type: multipart/alternative; 
 boundary="----=_Part_271547_1740513832.1589721329817"
References: <1478987168.271548.1589721329818.ref@mail.yahoo.com>
X-Mailer: WebService/1.1.15960 YahooMailAndroidMobile YMobile/1.0
 (com.yahoo.mobile.client.android.mail/6.7.1; Android/7.1.1; NMF26F; bbc100;
 BlackBerry; BBC100-1; 5.16; 1184x720; )
Content-Length: 792
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: "hack3rcon@yahoo.com" <hack3rcon@yahoo.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

------=_Part_271547_1740513832.1589721329817
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit

Hello,
The Xen hypervisor doesn't support the RoCE adapters? The Oracle
migrates to the KVM because of it.
Why not add this feature?
Cheers.
------=_Part_271547_1740513832.1589721329817--


From xen-devel-bounces@lists.xenproject.org Sun May 17 13:56:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 13:56:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaJlX-0000tg-1a; Sun, 17 May 2020 13:55:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mSDI=67=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaJlW-0000tb-0Z
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 13:55:42 +0000
X-Inumbo-ID: 181dce68-9846-11ea-b07b-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 181dce68-9846-11ea-b07b-bc764e2007e4;
 Sun, 17 May 2020 13:55:41 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id f18so6985243lja.13
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 06:55:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=7VpQcp51ccV4kzfWXX52bu/wmIwxl6pAmFUXhjFthJo=;
 b=GBMBz+qghMJt2JE6VWFyi3RCVsISeYD7MBtyRqCOI90dMPUaRnavEpoqzY16vZO5P4
 zJlaImHF9DDe7TySdj1M+qmd/YlMp5aveG/JnQvAXcjc+cnTvXWq+pdCg2xvm7qaa/xy
 zV19oJDyxkFStsxPtBLfSRy5zHOVnm84eKqXxOhWYjJT8l17jBxYJByJfmdclTznPe7I
 FC18uVb+82aLz2xwqeK6DfHko4u7l2dfY45Yk9oebkO/f3CYKCUBy1+eDfsZ/m7qXckB
 lSgsM/qH4v1EqXM0MQya/Fb5sKefKJFsGV63ySRD1HhmVrb1suXuSUFweULL2/beCj7g
 0iWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=7VpQcp51ccV4kzfWXX52bu/wmIwxl6pAmFUXhjFthJo=;
 b=NzU4UkXpySZrvZrowZLihh/ns1sIKj5c44Zmu/YdMauDyXvBoI7ysd3MIry1NRTmH/
 Lm5RL2/w/G5HoTesWuyIMpIcefMfFF2CSjZhVRPb6/A71dG5W3e/jF6LXoeudHVGT8Kb
 Z1PdTgdYYFXKOmaiW1c8rycGjl/fhNlSeG9x1IxVQYTpH+PfPlYKEaNUVbpmUgqGzYum
 Osu+IcB7BcDn65ktzNYLesMXb6KKyPqiaHK7uCBFrMAkKxque1o7BN0PWzcz524T5VAF
 pR3xH/RmqQPHBA7VYKfZSvyVW4Ojlz3M/Ya422RRHD1qidxDJuvpQ3x2VZ8frZMqGSHX
 pcsw==
X-Gm-Message-State: AOAM533gx+U2ijfh9g6IuhLDfIMhDZ2ix4TqLetmfIkozb02MgM3wmpY
 91JubAmlItMh/ZT9BEK25GYWCemwG/YzB4I/HME=
X-Google-Smtp-Source: ABdhPJzjKvvwdSJ1HbC3RIbbC9P2N5OTEjSc5nafwr9n0yji50HHmF0tOEaXnmmkX8iXq/vxHS4ELgXeZgnxmpEVa+s=
X-Received: by 2002:a2e:3519:: with SMTP id z25mr7191720ljz.253.1589723740363; 
 Sun, 17 May 2020 06:55:40 -0700 (PDT)
MIME-Version: 1.0
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-6-jandryuk@gmail.com>
 <24253.28579.577001.476506@mariner.uk.xensource.com>
In-Reply-To: <24253.28579.577001.476506@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Sun, 17 May 2020 09:55:28 -0400
Message-ID: <CAKf6xpuYXFCZ+cfEWTQ4jZKt2fb+_gw+bcBq6DSHVQwSCLF81A@mail.gmail.com>
Subject: Re: [PATCH v5 05/21] libxl: Handle Linux stubdomain specific QEMU
 options.
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Eric Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 14, 2020 at 12:19 PM Ian Jackson <ian.jackson@citrix.com> wrote:
> Jason Andryuk writes ("[PATCH v5 05/21] libxl: Handle Linux stubdomain specific QEMU options."):
> > @@ -1974,8 +2006,10 @@ static int libxl__build_device_model_args(libxl__gc *gc,
> >                                                    args, envs,
> >                                                    state);
> >      case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
> > -        assert(dm_state_fd != NULL);
> > -        assert(*dm_state_fd < 0);
> > +        if (!libxl_defbool_val(guest_config->b_info.device_model_stubdomain)) {
> > +            assert(dm_state_fd != NULL);
> > +            assert(*dm_state_fd < 0);
> > +     }
>
> This } seems to be misindented ?

This was a stray tab.  Fixed along with the { } changes.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Sun May 17 13:56:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 13:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaJly-0000vX-An; Sun, 17 May 2020 13:56:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mSDI=67=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaJlw-0000vP-VE
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 13:56:08 +0000
X-Inumbo-ID: 281b21e4-9846-11ea-b07b-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 281b21e4-9846-11ea-b07b-bc764e2007e4;
 Sun, 17 May 2020 13:56:08 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id o14so7024483ljp.4
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 06:56:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=RaH2NXVxSetmRQwxowvpRUDcblAi66spguzDnjTLRSo=;
 b=G4G9Ue0ZF+jd2fEINAh1dk7pZmjYb5ZgAaWDB/ByBTqG7UIeg/LDLrW08Zj1ExUMmI
 tI1SKjxwKbE9hbNj9UoF3hBDTuPS59sDjd1TgyQcf3dbIX7VhCA/boznEATi+Wn5JkvR
 QcmHJ7yi71m3fRxxXUCBQiUPMI/ciZMKw2rYfB2GOW285UQQl+n/q9kGWRke+9I1Jd2t
 HRYlIQOfeg5yAZg1APP/08VgnyEmHJODQhsiMK4A/NlvQVRUdwdvlzf6d47sDgqIC6oa
 dVL9isWRJT3YhccSz2ehs+NFUUvIlmfOzDa78i7spzFkLSjKnhKjP5UCl9+c4CXqvX1d
 V8hg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=RaH2NXVxSetmRQwxowvpRUDcblAi66spguzDnjTLRSo=;
 b=VfKYY403d6N4iBGJCpRC9G4VPLLz/CZGOsYMaOU49gjsBEeTnwbV5Ul/mMVMVjhUv0
 iw7aHQ6Yk0T1DmZuSt5H/hypKrvwkuQ3W/BL+LKGfyMPCpBkMEcjdYFWhMHwd41iZMX1
 HausA7lWQAPS8NoJ2P1cKYGNjRTzjnQ786GQWOibn9En//KIoEhlSGAnj/wUYRhB7yiZ
 8hvoqJ/HXvKykqCFs6aQ4aeVCA0Rb4Igkcd7nMCej610F+4dZB6QV+DZ0FW40ilSeqae
 usfiy/xv5+QrfqamxvtGxjyaU67c/XlMVP6S3PJY89k0KJjSY4kOZ1dKdXpWJMOoGqry
 eTGA==
X-Gm-Message-State: AOAM532HQ+YO7VeVR5W3KJLhzgxR7ojYyTY6iWAy3syOGK9SP4BSYRTW
 KPMjVU6aSkZduReAYCiY3iHc70UgrXwUMUhlSZE=
X-Google-Smtp-Source: ABdhPJw5xw7Qa6MR/gRnhQHpeNQBv2rxuIRdshlWdyC1UW0AL2GO5CFYyFV3dSY8RKkBlyuDqLelW60WFtGyOZ57o34=
X-Received: by 2002:a2e:9005:: with SMTP id h5mr1031947ljg.246.1589723767212; 
 Sun, 17 May 2020 06:56:07 -0700 (PDT)
MIME-Version: 1.0
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-10-jandryuk@gmail.com>
 <24253.29524.798802.978257@mariner.uk.xensource.com>
In-Reply-To: <24253.29524.798802.978257@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Sun, 17 May 2020 09:55:55 -0400
Message-ID: <CAKf6xpvJMovKMTWipC4gZuBD8FgmBEWbDbkm=ryRWSxNifQcJw@mail.gmail.com>
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 14, 2020 at 12:35 PM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Jason Andryuk writes ("[PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> >
> > Rely on a wrapper script in stubdomain to attach FD 3/4 of qemu to
> > relevant consoles.
> >
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > Address TODO in dm_state_save_to_fdset: Only remove savefile for
> > non-stubdom.
> ...
> > +        if (is_stubdom) {
> > +            /* Linux stubdomain connects specific FD to STUBDOM_CONSOLE_RESTORE
> > +             */
> > +            flexarray_append(dm_args, "-incoming");
> > +            flexarray_append(dm_args, "fd:3");
>
> Would it be possible to use a different fixed fd number ?  Low numbers
> are particularly vulnerable to clashes with autoallocated numbers.
>
> I suggest randomly allocating one in the range [64,192>.  My random
> number generator picked 119.  So 118 and 119 ?

This makes sense and would be the easiest change.

> Also, why couldn't your wrapper script add this argument ?  If you do
> that there then there is one place that knows the fd number and a
> slightly less tortuous linkage between libxl and the script...

I like this idea, but there is a complication.  "-incoming" is only
added when performing a restore, so it cannot just be blindly added to
all qemu command lines in the stubdom.  Two options I see are to
either communicate a restore some other way (so the stubdom scripts
can add the appropriate option), or pass something through dm_args, but
let the script convert it into something usable.

There is "-incoming defer", where we can later issue
"migrate_incoming fd:119".  Another option is to `sed
s/defer/fd:119/`, but that is a little tricky since we need to look at
the preceding argument to know whether the "defer" token should be
rewritten.  We could pass only "-incoming" and require the stubdom
script to fill in that option's value.

I haven't tested any of this.
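
For illustration, the preceding-argument check could look like this in the
stubdom wrapper.  This is an untested sketch; the function name and the
fixed fd number 119 are only placeholders taken from the discussion above:

```shell
#!/bin/sh
# Sketch: rewrite "-incoming defer" into "-incoming fd:119" by tracking
# the previous argument, so a bare "defer" elsewhere is left untouched.
rewrite_incoming() {
    prev=""
    out=""
    for arg in "$@"; do
        # Only rewrite "defer" when it is the value of "-incoming".
        if [ "$prev" = "-incoming" ] && [ "$arg" = "defer" ]; then
            arg="fd:119"   # hypothetical fixed fd from the range [64,192>
        fi
        out="$out $arg"
        prev="$arg"
    done
    printf '%s\n' "${out# }"
}

rewrite_incoming qemu-system-i386 -incoming defer -m 512
# prints: qemu-system-i386 -incoming fd:119 -m 512
```

The same loop could instead drop the pair entirely and let the script add
its own "-incoming" later, if signalling a restore some other way turns
out to be cleaner.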

> It's not stated anywhere here that I can see but I think what is
> happening here is that your wrapper script knows the qemu savefile
> pathname and reads it directly.  Maybe a comment would be
> worthwhile ?

The existing comment "Linux stubdomain connects specific FD to
STUBDOM_CONSOLE_RESTORE" is trying to state that.
STUBDOM_CONSOLE_RESTORE is defined as 2 for console 2 (/dev/hvc2), but
it is only a libxl_internal.h define.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Sun May 17 14:30:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 14:30:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaKIG-0003bi-2l; Sun, 17 May 2020 14:29:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mSDI=67=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaKIE-0003bd-H4
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 14:29:30 +0000
X-Inumbo-ID: d0fe3d2e-984a-11ea-b9cf-bc764e2007e4
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0fe3d2e-984a-11ea-b9cf-bc764e2007e4;
 Sun, 17 May 2020 14:29:29 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id a4so5710653lfh.12
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 07:29:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=sLe9QjRHmo2wHbxFgAhempcBpvA9GW7nyuL0G9e0F84=;
 b=jRONCA69hkrBtMCzGcdaUC4+SSR7YNf7YbVy8UdxEEpzrqwDQDS2LQopjKIXpl8RLX
 BCo05Ev/rZWIsWCbKuJcjyIZoRSXlR4FqKV3LFaX/FWAProreyMNNaRaNHJDPELiWVtC
 yegfdiD853phYRyK9n0Bx3blFT30OfDw829qKxZMowKGKppoU41NJpwEQlbfAkznXxnX
 jSb0REU1qreo5U8C5+Sjp4KHp++QgytwdQz8JZaNA9CL9Y7zecje3BT6AhetxDyEk0u/
 D+3S5lNg/IX1k6648aUdrXJfRrqUDg/mhtw+B08AFzxxViPHJEbRR7QLhDUfKZt54E95
 QYTg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=sLe9QjRHmo2wHbxFgAhempcBpvA9GW7nyuL0G9e0F84=;
 b=LHuUf8TOFhxKG4EISNiJZkz2WA9HqRH1Cz3V/B5gRwqC/h29OWWY9sSPLuoui/bUyH
 X8dfVxdiO4wt8jIWHTnijvmPeXJdocl3dDhicvFzwBDG7sfxGj16C6khWVkmCoBIro17
 3krltJPTl05mrQFb36d6y8X8Kd1Ocx/zGtv1mSFo4DbObyoU1PFHmkbgT/5+87vX+P7D
 2kdC+BEKsx1Mv1K/ijcI1UO+h6MxtaR2oMdRTyay0HPCbOaz+zKYwT7KMF6QZy/QVtSf
 XF7weW/qUhfAx6l0UGGsJsxbVe+lAst/ThT3nUm0uEQXiMfXRgaIlJxjRPcOK3uDfiAQ
 0npg==
X-Gm-Message-State: AOAM530NmJLb2eDa+2+1DVcsr2pza/f8NnCZIYVoHg0ZbxJ5I4X//jTg
 Jibx7TwdJXZ7WOIE3QCB/PF+L0WlEIfk2iZOeeo=
X-Google-Smtp-Source: ABdhPJwwJFlHwqPZEA9SglFzpte6WGuUMUZ9KCxv61ADcl/NrjP3dE3ipCsvkBDHAwpFETYNX7RE2wZHgETNlyRrIwM=
X-Received: by 2002:ac2:5a11:: with SMTP id q17mr8325254lfn.44.1589725768470; 
 Sun, 17 May 2020 07:29:28 -0700 (PDT)
MIME-Version: 1.0
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-7-jandryuk@gmail.com>
 <24253.28914.656817.996478@mariner.uk.xensource.com>
In-Reply-To: <24253.28914.656817.996478@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Sun, 17 May 2020 10:29:17 -0400
Message-ID: <CAKf6xptx3jk6VT3L6VS4e8pZhURFiLE+P8AzwP0TrHdX2q7EYA@mail.gmail.com>
Subject: Re: [PATCH v5 06/21] libxl: write qemu arguments into separate
 xenstore keys
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 14, 2020 at 12:25 PM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Jason Andryuk writes ("[PATCH v5 06/21] libxl: write qemu arguments into separate xenstore keys"):

> > +    xs_set_permissions(ctx->xsh, t, GCSPRINTF("%s/rtc/timeoffset", vm_path), roperm, ARRAY_SIZE(roperm));
>
> This line seems out of place.  At least, it is not mentioned in the
> commit message.  If it's needed, can you please split it out - and, of
> course, then, provide an explanation :-).

This was copied over from the mini-os version.  It looks like it is
unused by qemu-xen, so it can be dropped.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Sun May 17 16:06:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 16:06:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaLnb-0003pK-3X; Sun, 17 May 2020 16:05:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LWv/=67=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jaLnZ-0003pF-36
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 16:05:57 +0000
X-Inumbo-ID: 4a711a2a-9858-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a711a2a-9858-11ea-9887-bc764e2007e4;
 Sun, 17 May 2020 16:05:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:To:From:Subject:Message-ID:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aracTdkUOGAum0JzZngocn0rsggXfItohclySS5ODlk=; b=K9sw9zGDYs8Vwl2ATe6njP+Ju2
 UwdA7/Phv9KI74jSRvIOwyJfFAGhL5ypQjXtEV+JqKgjvLTMOtg7vaGTekrJSF5Rj4eFfn0HOF7Sq
 miF1SqUhTXVwzTkWMl/XzfhvSrrboBCx6DTwW95ImmVkm42aqJZspYIvDXbuqCm9UPCY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jaLnY-0003aH-0q; Sun, 17 May 2020 16:05:56 +0000
Received: from 54-240-197-234.amazon.com ([54.240.197.234]
 helo=freeip.amazon.com) by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <hx242@xen.org>)
 id 1jaLnX-00063A-KD; Sun, 17 May 2020 16:05:55 +0000
Message-ID: <b73dbed41e83ee08b9ac6694a8fba76c512d2c93.camel@xen.org>
Subject: Re: RoCE adapters and Xen.
From: Hongyan Xia <hx242@xen.org>
To: "hack3rcon@yahoo.com" <hack3rcon@yahoo.com>, Xen-devel
 <xen-devel@lists.xenproject.org>
Date: Sun, 17 May 2020 17:05:48 +0100
In-Reply-To: <1478987168.271548.1589721329818@mail.yahoo.com>
References: <1478987168.271548.1589721329818.ref@mail.yahoo.com>
 <1478987168.271548.1589721329818@mail.yahoo.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, 2020-05-17 at 13:15 +0000, Jason Long wrote:
> Hello,
> The Xen hypervisor doesn't support the RoCE adapters? The Oracle
> migrates to the KVM because of it.
> Why not add this feature?

I am curious. Aren't RoCE adapters just PCIe devices? If things are set
up correctly and drivers are present, I don't think Xen has any problem
supporting them.

Do you have any reference that Oracle migrates to KVM *because* Xen
cannot support RoCE? Or is it simply that Oracle is migrating to KVM in
general?

I haven't worked on RoCE before so I could be wrong.

Hongyan



From xen-devel-bounces@lists.xenproject.org Sun May 17 16:36:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 16:36:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaMGw-0006KW-HG; Sun, 17 May 2020 16:36:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VT2+=67=yahoo.com=hack3rcon@srs-us1.protection.inumbo.net>)
 id 1jaMGv-0006KR-7X
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 16:36:17 +0000
X-Inumbo-ID: 86df0108-985c-11ea-a7cc-12813bfff9fa
Received: from sonic308-2.consmr.mail.bf2.yahoo.com (unknown [74.6.130.41])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86df0108-985c-11ea-a7cc-12813bfff9fa;
 Sun, 17 May 2020 16:36:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048;
 t=1589733375; bh=hZXtgOGZQSNtu+kLdJO7ROCPHeDnussSdNhlaJdx97E=;
 h=Date:From:Reply-To:To:In-Reply-To:References:Subject:From:Subject;
 b=nkKl+w+XOZmC9ZcA6RvykvEgPrpI2janMuQr8XNHvQimaZ7X0YqELK7Nl5DCxSiXc63QjsjnVWvB05w/om/Onn1V71mebUPoqVV9l5mDau40JokH5HqBXnTlJbCI9yHDGScq0dYcKrr2DoMVyxKfBA5+6Qv5czdI/TY//4ylOEfqLlG12Z/OsOah2NcEG9PUmSPqgPMp4InVAUrRF/UO7KuAByGXfpPVb/XQ6uDdKfINAAQAtshcAQ3HB4dbU028IHf+MWFzp/pvv8tQCIN6FffTPCfQqJwREeTK7BdF7/9PUzHP9j5LQMwP0tAnmkIW5p0x81+vuAi2VK936R9ISA==
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic308.consmr.mail.bf2.yahoo.com with HTTP; Sun, 17 May 2020 16:36:15 +0000
Date: Sun, 17 May 2020 16:36:10 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
To: "hx242@xen.org" <hx242@xen.org>, 
 Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <205965483.308970.1589733370664@mail.yahoo.com>
In-Reply-To: <b73dbed41e83ee08b9ac6694a8fba76c512d2c93.camel@xen.org>
References: <1478987168.271548.1589721329818.ref@mail.yahoo.com>
 <1478987168.271548.1589721329818@mail.yahoo.com>
 <b73dbed41e83ee08b9ac6694a8fba76c512d2c93.camel@xen.org>
Subject: Re: RoCE adapters and Xen.
MIME-Version: 1.0
Content-Type: multipart/alternative; 
 boundary="----=_Part_308969_429439335.1589733370662"
X-Mailer: WebService/1.1.15960 YahooMailAndroidMobile YMobile/1.0
 (com.yahoo.mobile.client.android.mail/6.7.1; Android/7.1.1; NMF26F; bbc100;
 BlackBerry; BBC100-1; 5.16; 1184x720; )
Content-Length: 3128
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: "hack3rcon@yahoo.com" <hack3rcon@yahoo.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

------=_Part_308969_429439335.1589733370662
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Please see https://community.oracle.com/thread/4326908
Migrating to a new hypervisor is easier than adding hardware support!

Sent from Yahoo Mail on Android
On Sun, May 17, 2020 at 8:35 PM, Hongyan Xia <hx242@xen.org> wrote:
On Sun, 2020-05-17 at 13:15 +0000, Jason Long wrote:
> Hello,
> The Xen hypervisor doesn't support the RoCE adapters? The Oracle
> migrates to the KVM because of it.
> Why not add this feature?

I am curious. Aren't RoCE adapters just PCIe devices? If things are set
up correctly and drivers are present, I don't think Xen has any problem
supporting them.

Do you have any reference that Oracle migrates to KVM *because* Xen
cannot support RoCE? Or is it simply that Oracle is migrating to KVM in
general?

I haven't worked on RoCE before so I could be wrong.

Hongyan
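
Concretely, exposing such an adapter to a guest would be ordinary PCIe
passthrough via the guest's xl configuration; a minimal sketch, assuming the
device's BDF (here 0000:3b:00.0, a placeholder) has first been made
assignable in dom0 with `xl pci-assignable-add`:

```
# Guest xl config fragment (sketch; the name and BDF are placeholders)
name = "roce-guest"
pci  = [ '0000:3b:00.0' ]
```

With the device assigned, the guest sees a regular PCIe function and loads
its usual RDMA driver; nothing RoCE-specific is needed on the Xen side.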


------=_Part_308969_429439335.1589733370662--


From xen-devel-bounces@lists.xenproject.org Sun May 17 17:33:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 17:33:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaN9v-0002l5-P8; Sun, 17 May 2020 17:33:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rqg2=67=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jaN9u-0002l0-7p
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 17:33:06 +0000
X-Inumbo-ID: 76683936-9864-11ea-9887-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76683936-9864-11ea-9887-bc764e2007e4;
 Sun, 17 May 2020 17:33:04 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04HHWxhv029020;
 Sun, 17 May 2020 19:32:59 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 06CE52810; Sun, 17 May 2020 19:32:59 +0200 (CEST)
Date: Sun, 17 May 2020 19:32:59 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200517173259.GA7285@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [151.127.5.145]);
 Sun, 17 May 2020 19:32:59 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

I've been looking a bit deeper into the Xen kernel.
The mapping fails in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn():

    /* Error path: not a suitable GFN at all */
    if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) )
    {
        gdprintk(XENLOG_ERR,
                 "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n",
                 *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t));
        return NULL;
    }

*t is 4, which translates to p2m_mmio_dm.

It looks like p2m_get_page_from_gfn() is not ready to handle this case
for dom0.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sun May 17 17:41:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 17:41:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaNHa-0003eR-Jv; Sun, 17 May 2020 17:41:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaNHY-0003eM-Ub
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 17:41:00 +0000
X-Inumbo-ID: 8ea0f6ae-9865-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ea0f6ae-9865-11ea-b9cf-bc764e2007e4;
 Sun, 17 May 2020 17:40:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AiIoGfBXM90pcBOxBft8iPff0OfrbRuhJO1cr3qA2I0=; b=mcZ6sfMmeKfI0G3063VOGBQpl
 EaVctdHm5iOPPlnaIHwFZlAHjgCvqB1PigfSTHt9qE12k22rh6h2qlk4xaABR0qf2Pp8YGsKQsS3q
 n3j2f5lW4hOHPBg3GpxWLRAiJLZoQ2FRvZIf4JuzvF4YSvNb1F+NZxz42ekZGYKdf5nb0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaNHR-0005UJ-Kl; Sun, 17 May 2020 17:40:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaNHR-00058P-BT; Sun, 17 May 2020 17:40:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaNHR-0006cf-9r; Sun, 17 May 2020 17:40:53 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150223-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150223: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
X-Osstest-Versions-That: xen=99266e31832fb4a4da5c9b8163328be350d1261d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 17:40:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150223 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150223/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150220

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150220
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150220
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150220
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150220
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150220
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150220
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150220
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150220
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150220
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150220
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f
baseline version:
 xen                  99266e31832fb4a4da5c9b8163328be350d1261d

Last test of basis   150220  2020-05-16 20:07:06 Z    0 days
Testing same since   150223  2020-05-17 08:29:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Paul Durrant <paul@xen.org>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   99266e3183..664e1bc12f  664e1bc12f8658da124a4eff7a8f16da073bd47f -> master


From xen-devel-bounces@lists.xenproject.org Sun May 17 17:56:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 17:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaNWH-0004eF-0l; Sun, 17 May 2020 17:56:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rqg2=67=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jaNWF-0004dj-M7
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 17:56:11 +0000
X-Inumbo-ID: b0ad470a-9867-11ea-b07b-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0ad470a-9867-11ea-b07b-bc764e2007e4;
 Sun, 17 May 2020 17:56:10 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04HHu79t023460;
 Sun, 17 May 2020 19:56:07 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 23E682810; Sun, 17 May 2020 19:56:07 +0200 (CEST)
Date: Sun, 17 May 2020 19:56:07 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200517175607.GA8793@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200517173259.GA7285@antioche.eu.org>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1101:0:0:0:1]);
 Sun, 17 May 2020 19:56:07 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> I've been looking a bit deeper into the Xen kernel.
> The mapping fails in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn():
> 
>     /* Error path: not a suitable GFN at all */
>     if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) )
>     {
>         gdprintk(XENLOG_ERR,
>                  "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n",
>                  *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t));
>         return NULL;
>     }
> 
> *t is 4, which translates to p2m_mmio_dm.
> 
> It looks like p2m_get_page_from_gfn() is not ready to handle this case
> for dom0.

And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
for NetBSD.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Sun May 17 22:23:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 17 May 2020 22:23:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaRgX-0001Mw-Nd; Sun, 17 May 2020 22:23:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1FZQ=67=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaRgW-0001Mr-Gz
 for xen-devel@lists.xenproject.org; Sun, 17 May 2020 22:23:04 +0000
X-Inumbo-ID: f73f1c0a-988c-11ea-a80e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f73f1c0a-988c-11ea-a80e-12813bfff9fa;
 Sun, 17 May 2020 22:23:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1SxmbGHKAE8FrP6MxsNFCU3H0qlHXf75Oia8O1E4Yyc=; b=MSZjtWra/V+TWfelVsM6BDW2r
 8p7gqsaycO7VM4uU8p6SDdz967OfNO59y2Fc80xfZGIsdewCzj7cJaAwkViJMrb+3D70THS4uTG3O
 F5lIesvXVn4Sv+KRFBb4FeBl1sn0WgPhFmxfmw7+1OQYPNOrC0c2UWA4J0s6usUKsMOlI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaRgR-00033p-1O; Sun, 17 May 2020 22:22:59 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaRgQ-0005J3-Q5; Sun, 17 May 2020 22:22:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaRgQ-0004mf-PK; Sun, 17 May 2020 22:22:58 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150225-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150225: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=5a9ffb954a3933d7867f4341684a23e008d6839b
X-Osstest-Versions-That: linux=3d1c1e5931ce45b3a3f309385bbc00c78e9951c6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 17 May 2020 22:22:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150225 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150225/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150221
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150221
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150221
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150221
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150221
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150221
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150221
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150221
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150221
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150221
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                5a9ffb954a3933d7867f4341684a23e008d6839b
baseline version:
 linux                3d1c1e5931ce45b3a3f309385bbc00c78e9951c6

Last test of basis   150221  2020-05-17 00:10:52 Z    0 days
Testing same since   150225  2020-05-17 11:37:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adam Ford <aford173@gmail.com>
  Adam McCoy <adam@forsedomani.com>
  Ahmad Fatoum <a.fatoum@pengutronix.de>
  Al Viro <viro@zeniv.linux.org.uk>
  Arnd Bergmann <arnd@arndb.de>
  Aurelien Jarno <aurelien@aurel32.net>
  Babu Moger <babu.moger@amd.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Chen-Yu Tsai <wens@csie.org>
  Chris Brandt <Chris.Brandt@renesas.com>
  Christophe Leroy <christophe.leroy@c-s.fr>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Clément Péron <peron.clem@gmail.com>
  Fabio Estevam <festevam@gmail.com>
  Faiz Abbas <faiz_abbas@ti.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Guo Ren <guoren@linux.alibaba.com>
  Heiko Stuebner <heiko@sntech.de>
  Jerome Brunet <jbrunet@baylibre.com>
  Jim Mattson <jmattson@google.com>
  Johan Jonker <jbx6244@gmail.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jue Wang <juew@google.com>
  Kevin Hilman <khilman@baylibre.com>
  Kishon Vijay Abraham I <kishon@ti.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Liu Yibin <jiulong@linux.alibaba.com>
  Loic Poulain <loic.poulain@linaro.org>
  Ma Feng <mafeng.ma@huawei.com>
  Mao Han <han_mao@linux.alibaba.com>
  Max Krummenacher <max.krummenacher@toradex.com>
  Max Krummenacher <max.oss.09@gmail.com>
  Maxime Ripard <maxime@cerno.tech>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Walle <michael@walle.cc>
  Michal Vokáč <michal.vokac@ysoft.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Nayna Jain <nayna@linux.ibm.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Nicholas Piggin <npiggin@gmail.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Xu <peterx@redhat.com>
  Ricardo Cañuelo <ricardo.canuelo@collabora.com>
  Robin Murphy <robin.murphy@arm.com>
  Samuel Holland <samuel@sholland.org>
  Shawn Guo <shawnguo@kernel.org>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Steve French <stfrench@microsoft.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Thierry Reding <treding@nvidia.com>
  Tobias Schramm <t.schramm@manjaro.org>
  Tony Lindgren <tony@atomide.com>
  Vinod Koul <vkoul@kernel.org>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   3d1c1e5931ce..5a9ffb954a39  5a9ffb954a3933d7867f4341684a23e008d6839b -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUMm-0008Sk-Km; Mon, 18 May 2020 01:14:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUMl-0008Se-RE
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:14:51 +0000
X-Inumbo-ID: f5e58d7c-98a4-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5e58d7c-98a4-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:14:45 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id c24so6841869qtw.7
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=RY6wcLegoM0K/cf+d7rR6BE972Bl6q9Nr9kajAEhtQU=;
 b=Y02XyqZCQf5DLoJ5ElPAZ2MbegdUvKWWpK9lCJ29Jt4Dla/rcJbRwaS9Gh84iCq2dA
 HvpkN8a/fYMOj/GGBEqWI9VUXBur/WfE7ErzGalgmEx3wIP3Fm4s9mZrBNGXm/QtBoFl
 89JudLtlhhnFHAV3VGRPcUSeX0OPMwCRQwX/6hhELdrfoDfHfUlEJIEky3SPHrkSJIkE
 fcFbrd+A2rnQwQl0W8c1A/E+YhSuRZztA/+A2/1GeNtaPs4BqEKmPgOFFxdb2wkHHPoa
 tTbF4+S4zq+k751AE6Zh8f8GAKTRxUbVEzb5oH3OR8Zjy25T7GqdXomwQuSxEJJ3/GnP
 2nyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=RY6wcLegoM0K/cf+d7rR6BE972Bl6q9Nr9kajAEhtQU=;
 b=lwpX/qaiod5D2b/cBFKSOZl36sVooKYBPs2nM2VH7LnTOUwEZeyJAFpYqFNDlh7MnX
 ccpekIbMi04IAft6pNOI/ycYntTmIY/qrtvrlPnpvPoIzumJJL95Hz041jEkDsr5F+Z8
 lATEJwbd/L7bEi9kyyMiVE3Zm5y13GebZ9kQT3VjeIg3IA16EMoFHFFZbMzx9d7/7BX3
 ZOu34AwPivFZpVTPBWpvmjCu3ooqIvqLgt7KxR0YCPfM1A89ItQuWUf9m+70vro2ki0/
 rU9awtVVKd5f4eha27GImSaV+0+SPZ7yxKCmxhiOmdauoQVnFpmOpJYcsC1W6f4svFrd
 GMgg==
X-Gm-Message-State: AOAM531w3J3FkdpS+o0D0KYkotz97Ig7rUCDYx0NSgZf/ajuTV7zxMCy
 LU7EbdyNpbduAZtnV7dk2nmNw2IZ
X-Google-Smtp-Source: ABdhPJw1F6kcO9qa+k5fZ7Ybw7yoA8SWCDZD/hUIbl2y1ONpTodcDDReNplNurbDBVW7EbBXUW6a7w==
X-Received: by 2002:ac8:4f4e:: with SMTP id i14mr14313161qtw.167.1589764484950; 
 Sun, 17 May 2020 18:14:44 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:44 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 02/18] Document ioemu Linux stubdomain protocol
Date: Sun, 17 May 2020 21:13:37 -0400
Message-Id: <20200518011353.326287-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Add documentation for upcoming Linux stubdomain for qemu-upstream.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
 - Replace dmargs with dm-argv for xenstore directory
 - Explain $STUBDOM_RESTORE_INCOMING_ARG for -incoming restore argument
---
 docs/misc/stubdom.txt | 52 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/docs/misc/stubdom.txt b/docs/misc/stubdom.txt
index 64c77d9b64..c717a95d17 100644
--- a/docs/misc/stubdom.txt
+++ b/docs/misc/stubdom.txt
@@ -75,6 +75,58 @@ Defined commands:
    - "running" - success
 
 
+Toolstack to Linux ioemu stubdomain protocol
+--------------------------------------------
+
+This section describes the communication protocol between the toolstack and
+qemu-upstream running in a Linux stubdomain. The protocol includes
+expectations of both the stubdomain and qemu.
+
+Setup (done by toolstack, expected by stubdomain):
+ - Block devices for target domain are connected as PV disks to stubdomain,
+   according to configuration order, starting with xvda
+ - Network devices for target domain are connected as PV nics to stubdomain,
+   according to configuration order, starting with 0
+ - [not implemented] if graphics output is expected, VFB and VKB devices are
+   set up for the stubdomain (their backend is responsible for exposing them
+   using an appropriate protocol like VNC or Spice)
+ - other devices of the target domain are not connected to the stubdomain at
+   this point (they may be hot-plugged later)
+ - the QEMU command line is stored in the /vm/<target-uuid>/image/dm-argv
+   xenstore directory, with each argument as a separate key of the form
+   /vm/<target-uuid>/image/dm-argv/NNN, where NNN is the 0-padded argument
+   number
+ - target domain id is stored in /local/domain/<stubdom-id>/target xenstore path
+ - bios type is stored in /local/domain/<target-id>/hvmloader/bios
+ - stubdomain's console 0 is connected to qemu log file
+ - stubdomain's console 1 is connected to qemu save file (for saving state)
+ - stubdomain's console 2 is connected to qemu save file (for restoring state)
+ - next consoles are connected according to target guest's serial console configuration
+
+Environment exposed by the stubdomain to qemu (needed to construct an appropriate qemu command line and later interact with QMP):
+ - target domain's disks are available as /dev/xvd[a-z]
+ - console 2 (incoming domain state) must be connected to an FD and the command
+   line argument $STUBDOM_RESTORE_INCOMING_ARG must be replaced with fd:$FD to
+   form "-incoming fd:$FD"
+ - console 1 (saving domain state) is added over QMP to qemu as "fdset-id 1" (done by stubdomain, toolstack doesn't need to care about it)
+ - nics are connected to relevant stubdomain PV vifs when available (qemu -netdev should specify ifname= explicitly)
+
+Startup:
+1. the toolstack starts a PV stubdomain with the stubdom-linux-kernel kernel and stubdom-linux-initrd initrd
+2. the stubdomain initializes the relevant devices
+3. the stubdomain starts qemu with the requested command line, plus a few stubdomain-specific options - including local qmp access options
+4. the stubdomain starts a vchan server on /local/domain/<stubdom-id>/device-model/<target-id>/qmp-vchan, exposing a qmp socket to the toolstack
+5. qemu signals readiness by writing "running" to the /local/domain/<stubdom-id>/device-model/<target-id>/state xenstore path
+6. the device model is now considered running
+
+QEMU can be controlled using QMP over vchan at /local/domain/<stubdom-id>/device-model/<target-id>/qmp-vchan. Only one simultaneous connection is supported, and the toolstack needs to ensure that.
+
+Limitations:
+ - PCI passthrough requires permissive mode
+ - only one nic is supported
+ - at most 26 emulated disks are supported (more are still available as PV disks)
+ - graphics output (VNC/SDL/Spice) not supported
+
 
                                    PV-GRUB
                                    =======
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUMr-0008T3-TJ; Mon, 18 May 2020 01:14:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUMq-0008Sx-Rc
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:14:56 +0000
X-Inumbo-ID: f7337eb4-98a4-11ea-ae69-bc764e2007e4
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7337eb4-98a4-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 01:14:47 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id z80so8666542qka.0
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:47 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=3tvoe8SF1AjGsABKutuEKPrEzuPGqgtVPQhRYGoZ/qw=;
 b=EqaonDD1s63rJJcyTTB0dwNKRO7dRlGbjoVwcid1g/eOXpEooPxdi4HHl8eWvdpkSt
 kWpiCfceq4V8/WcdVlWw7gFIyYgS5diJXVyzA5aOXWMPsoTg9OjfC4uG85JwyKrCIAik
 CC9wzu5mNTzBxjcItSRQVmx9b+fdCxNtpI89u0OcBXQfVFTuXyMtMnl0J+xURksB/ocF
 lt/KmyD0nlof1MY8YwtMRuNGhNZZXwjrl5Gd0/cbhNJwQxl42FYbiVmKF7AaV9WhHgfh
 tfg++Qx2Q8R9b9QLq74Aj04CNVKKYrSXwE3vUiK8g4yEHRsF3dx4c6HtsK+WF3BeCO8v
 PBiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=3tvoe8SF1AjGsABKutuEKPrEzuPGqgtVPQhRYGoZ/qw=;
 b=l6JKWb81luA3nVGjUrG1+TTRX9Jgdr7u2JPHUYYI0sVzCX2Nd8Bud7lbJ7TN+0MNH9
 +1f226caSZ7UEAdJkgFGCIGtWRgiYhjGy3Ss7DOaBpyof/mfklvBfNAAtLvr67gHR+t3
 2CXh3GxpUgOPL/u72ZRfsEP1Mt1F/AVRcAiwlEBhgGRByrCla4ifAlHW62S4c2XghWSv
 JOAHj4sj+zThd6ANZ0JQIaTUMRMZUDAfpTvzgbbYIQRNR1moPx94FmkzD/QYenqvrTTF
 t+OkBbYjJWdp5poerbCg5r3/8gE1vf5PSfaYnDsrDOUE/zQm9Va9la7cPrIYo8/ppV33
 Cy3A==
X-Gm-Message-State: AOAM532j7vbtLFRxywkw33kCKgZCtcx4LmK0V2K224ZQZxivWFCYL9up
 e16PJOznUAQFV+F/mkQ3l6HRZ/FA
X-Google-Smtp-Source: ABdhPJx+0nrOUsdpBC+yOU0jzbNkIHJpkML3dNlwR+94zjFMhrwW4iamZSUQB1230VZmiawf0rfVPw==
X-Received: by 2002:a37:8844:: with SMTP id k65mr13969716qkd.309.1589764487202; 
 Sun, 17 May 2020 18:14:47 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:46 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 03/18] libxl: fix qemu-trad cmdline for no sdl/vnc case
Date: Sun, 17 May 2020 21:13:38 -0400
Message-Id: <20200518011353.326287-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wei.liu2@citrix.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

When qemu runs in a stubdomain, any attempt to initialize vnc/sdl
there will crash it (on a failed attempt to load a keymap from a file).
If a vfb is present, all those cases are skipped. But since
b053f0c4c9e533f3d97837cf897eb920b8355ed3 "libxl: do not start dom0 qemu
for stubdomain when not needed" it is possible to create a stubdomain
without a vfb, and contrary to the comment, -vnc none does trigger the
VNC initialization code (it merely skips exposing VNC externally).
Avoid implicitly enabling SDL by passing the -nographic option instead,
used when neither SDL nor VNC is enabled.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Changes in v2:
 - typo in qemu option
Changes in v3:
 - add missing { }
---
 tools/libxl/libxl_dm.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f4007bbe50..b91e63db6f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -734,14 +734,15 @@ static int libxl__build_device_model_args_old(libxl__gc *gc,
         if (libxl_defbool_val(vnc->findunused)) {
             flexarray_append(dm_args, "-vncunused");
         }
-    } else
+    } else if (!sdl) {
         /*
          * VNC is not enabled by default by qemu-xen-traditional,
-         * however passing -vnc none causes SDL to not be
-         * (unexpectedly) enabled by default. This is overridden by
-         * explicitly passing -sdl below as required.
+         * however skipping -vnc causes SDL to be
+         * (unexpectedly) enabled by default. If that is undesired,
+         * disable graphics entirely.
          */
-        flexarray_append_pair(dm_args, "-vnc", "none");
+        flexarray_append(dm_args, "-nographic");
+    }
 
     if (sdl) {
         flexarray_append(dm_args, "-sdl");
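The resulting display-argument selection can be sketched in Python (an illustration of the logic in libxl__build_device_model_args_old() after this patch, not the actual C; the helper name and boolean parameters are hypothetical):

```python
def display_args(vnc, sdl, vnc_findunused=False):
    """Sketch of qemu-traditional display-argument selection: VNC options
    win when VNC is enabled; otherwise, when SDL is also disabled,
    -nographic prevents qemu-traditional from implicitly enabling SDL
    (which "-vnc none" failed to do)."""
    args = []
    if vnc:
        args.append("-vnc")
        args.append("<display>")  # placeholder; real code formats host:display
        if vnc_findunused:
            args.append("-vncunused")
    elif not sdl:
        # Skipping -vnc would implicitly enable SDL, so disable graphics.
        args.append("-nographic")
    if sdl:
        args.append("-sdl")
    return args
```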
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUMh-0008SL-Cl; Mon, 18 May 2020 01:14:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUMg-0008SG-QU
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:14:46 +0000
X-Inumbo-ID: f47536d6-98a4-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f47536d6-98a4-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 01:14:43 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id p12so6807444qtn.13
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Y63NCLIHxjj8BP4Xs3EPNGxMJKsqwIK/ULA13gyh9ao=;
 b=eSkNWqdbswfyDAji7MFKscHPl69O8wGmYPQmhFSpff0pcoiWJUbVrg3VRBpFW2kfno
 8fZ3kAE2AWhyj5m0TZtbB0rKrLY1fgkIYUTdAhbYERSdsuuTYDCCdYNiUTl/F/2RlEL9
 nYyXOAgv3bXQRJPCfjLMejeteMUjOa8l/BDh5IT/16ory2Gw7B5Mhd6uyy9R0gSTqrC/
 ZZz6L8Honi6qGiqsvGJSLRLnT8ssqbBU853PdYXFMbTcEMQlZ4ED4Sh5QQI1NzCXRgQv
 PLH1mEAN5TSG6VAVZDHjJpaSZfu1WP7QMObtk3hEJXwV3dfQbjSfkBLuvQ6BFeEOT7i1
 N7Ig==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=Y63NCLIHxjj8BP4Xs3EPNGxMJKsqwIK/ULA13gyh9ao=;
 b=TFTKogHbyLbDdvhcTEpxTzGsSxeQMNd/X+oM9oYOPF+zSI6yHepB8AYE8rqEvz42J0
 2EDv1S1K4z4LxuQA4tewOYQIcMX1tBdYR8hep32a1IHDxzqS7usjNHyYNrQdubq7g49j
 0ms2k/Mqr4/7N0NTOwIaLkury+w1NnfMCAcS27AKfGXu38CnVLpYlPxaxhZAr6i3ZzOs
 1s2IxoJZmZ+MMUuLeEkMKpXcChC4qQv8yajDTDLrL34RoEVuUSxa8TUN3EY3PtW8kK46
 QTL1dnAPnEeGrDZjUajfWWvhaqj06PmPUSpjJkZxNGOLpKEqpTnyf+jTzJ3Xd8lbf7fu
 +xQw==
X-Gm-Message-State: AOAM530dPUCuT+rUPgA0icMg6+q4wEJ1UTXGBgG8w6XdOOn/KDe6wdjd
 s5ITgKxUJIMAcM3XRl7omLrAdsQu
X-Google-Smtp-Source: ABdhPJzgBKtNE4UuZ+JccVPVxVfMOGB9LaK0mJCPZqbJO7XGSDTxnkecYW5ina97e/LH7ZQzpeVZoQ==
X-Received: by 2002:ac8:4741:: with SMTP id k1mr14158535qtp.250.1589764482565; 
 Sun, 17 May 2020 18:14:42 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.41
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:41 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 01/18] Document ioemu MiniOS stubdomain protocol
Date: Sun, 17 May 2020 21:13:36 -0400
Message-Id: <20200518011353.326287-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Add documentation based on the reverse-engineered toolstack-ioemu
stubdomain protocol.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 docs/misc/stubdom.txt | 53 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/docs/misc/stubdom.txt b/docs/misc/stubdom.txt
index 882a18cab4..64c77d9b64 100644
--- a/docs/misc/stubdom.txt
+++ b/docs/misc/stubdom.txt
@@ -23,6 +23,59 @@ and https://wiki.xen.org/wiki/Device_Model_Stub_Domains for more
 information on device model stub domains
 
 
+Toolstack to MiniOS ioemu stubdomain protocol
+---------------------------------------------
+
+This section describes the communication protocol between the toolstack and
+qemu-traditional running in a MiniOS stubdomain. The protocol includes the
+expectations of both qemu and the stubdomain itself.
+
+Setup (done by the toolstack, expected by the stubdomain):
+ - Block devices for the target domain are connected as PV disks to the
+   stubdomain, in configuration order, starting with xvda
+ - Network devices for the target domain are connected as PV nics to the
+   stubdomain, in configuration order, starting with 0
+ - If graphics output is expected, VFB and VKB devices are set up for the
+   stubdomain (their backend is responsible for exposing them using an
+   appropriate protocol such as VNC or Spice)
+ - The target domain's other devices are not connected to the stubdomain at
+   this point (they may be hot-plugged later)
+ - The QEMU command line (space-separated arguments) is stored in the
+   /vm/<target-uuid>/image/dmargs xenstore path
+ - The target domain id is stored in the /local/domain/<stubdom-id>/target xenstore path
+?? - bios type is stored in /local/domain/<target-id>/hvmloader/bios
+ - The stubdomain's console 0 is connected to the qemu log file
+ - The stubdomain's console 1 is connected to the qemu save file (for saving state)
+ - The stubdomain's console 2 is connected to the qemu save file (for restoring state)
+ - Subsequent consoles are connected according to the target guest's serial console configuration
+
+Startup:
+1. The PV stubdomain is started with the ioemu-stubdom.gz kernel and no initrd
+2. The stubdomain initializes the relevant devices
+3. The stubdomain signals readiness by writing "running" to the /local/domain/<stubdom-id>/device-model/<target-id>/state xenstore path
+4. The stubdomain is now considered running
+
+Runtime control (hotplug etc.):
+The toolstack can issue commands through xenstore. The sequence is (from the toolstack's POV):
+1. Write the parameter to /local/domain/<stubdom-id>/device-model/<target-id>/parameter.
+2. Write the command to /local/domain/<stubdom-id>/device-model/<target-id>/command.
+3. Wait for the command result in /local/domain/<stubdom-id>/device-model/<target-id>/state (a command-specific value).
+4. Write "running" back to /local/domain/<stubdom-id>/device-model/<target-id>/state.
+
+Defined commands:
+ - "pci-ins" - PCI hot plug, results:
+   - "pci-inserted" - success
+   - "pci-insert-failed" - failure
+ - "pci-rem" - PCI hot remove, results:
+   - "pci-removed" - success
+   - ??
+ - "save" - save domain state to console 1, results:
+   - "paused" - success
+ - "continue" - resume domain execution after loading state from console 2 (requires the -loadvm command-line argument), results:
+   - "running" - success
+
+
+
                                    PV-GRUB
                                    =======
 
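The runtime-control sequence documented above can be walked through in Python (an illustrative sketch only: a plain dict stands in for xenstore, the device model's reply is faked inline, and the helper names are hypothetical; the real toolstack uses xenstore watches):

```python
def dm_path(stubdom_id, target_id, leaf):
    # xenstore path used by the device-model control protocol
    return "/local/domain/%d/device-model/%d/%s" % (stubdom_id, target_id, leaf)

def issue_command(xs, stubdom_id, target_id, command, parameter, expected):
    """Toolstack side of the sequence: write the parameter, write the
    command, wait for the command-specific result in 'state', then write
    "running" back."""
    xs[dm_path(stubdom_id, target_id, "parameter")] = parameter
    xs[dm_path(stubdom_id, target_id, "command")] = command
    # A real device model updates 'state' asynchronously; simulate it here.
    xs[dm_path(stubdom_id, target_id, "state")] = expected
    result = xs[dm_path(stubdom_id, target_id, "state")]
    if result != expected:
        raise RuntimeError("%s failed: %s" % (command, result))
    xs[dm_path(stubdom_id, target_id, "state")] = "running"
    return result
```

For example, a "pci-ins" hotplug would pass the PCI BDF as the parameter and expect "pci-inserted" as the result.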
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUMe-0008SA-4y; Mon, 18 May 2020 01:14:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUMc-0008S5-1F
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:14:42 +0000
X-Inumbo-ID: f325192c-98a4-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f325192c-98a4-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:14:41 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id p12so6807385qtn.13
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:41 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=AjO9kkt5bMdK0S8Ne/6CQZwANRVtIV2wBZ1lc1Nj27g=;
 b=pFmqH/y+TAs8gwaQTKUwL2fnTsjIrU/KF/XOv2XPu2xj6DK3venX8LTJEhc7Bbk/li
 OQwsI7c8ecgBAaImOBCQbfheHk0ZXBREa9RX781f8ThYsphxRTRYMYzia8B8j5kCJb+r
 wLYuE11rRf+XL74zxJP/PItqAkck40Ia4bxLiu3QwFzslUTSxGyN4hQuk6QNx/ZjMLK7
 pmYBATA1lDVjG2WZNlUH7KxrBsckSCf6JluS9m7nfiYfB5tqyAiSYNgprytJS/ZehSMI
 32IzO2mI9m292zhxvu9OUrw0+CO4zszGcu+pAcggr5oVK/70u1lTfoPovpT/WUe/hqvU
 tl3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=AjO9kkt5bMdK0S8Ne/6CQZwANRVtIV2wBZ1lc1Nj27g=;
 b=bHetmOSsDZ4FirkMD3TLuL0lpWuYTnqUmGPn9q5DPr+5OTRRIQhg1HSOOIPyk0cEF2
 vYIN9SUrVBX5htCpWXAhWoy12I1DmkbnTBkbFaaltbcO+alq/fxohVcWfKElnRpIfc7q
 K7uiB/uHlgJz8ednTM60WPLQiPZgRyCRn8fWIbbtGOyfABHwFjwxyACR8Ut4fGITkR79
 N99JYW1TEOz3ofA1tFfiLVD+MMine86sDhQ3jvsmM1UmIs8NCYnUX6Wv6GHmxmcLC278
 qs2JHmvT5hM5Egm2mrjNhqmBCFwdh8NPLzjiHLsWtacj0gXjet26EH3ZZr1l8IAB3ym1
 EiJw==
X-Gm-Message-State: AOAM5316IaP9KVtjjZjVxG5Ewe3aROWpgOsAcTi9/9Q/wMMllFmxMnnA
 EpOLsC78xqzxMHMnjYKXYCrr22wdayI=
X-Google-Smtp-Source: ABdhPJzk3UMrvPbTVGipeZvHLW8jkkL2P8wcQKp+/bfuYa5dOeomzXK3NnhG5KezYUpIF/9C0S38wg==
X-Received: by 2002:ac8:326d:: with SMTP id y42mr13923879qta.243.1589764480140; 
 Sun, 17 May 2020 18:14:40 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:39 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 00/18] Add support for qemu-xen running in a Linux-based
 stubdomain
Date: Sun, 17 May 2020 21:13:35 -0400
Message-Id: <20200518011353.326287-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

In coordination with Marek, I'm submitting his patches for Linux
stubdomain device-model support.  I made a few additions of my own, but
Marek did the heavy lifting.  Thank you, Marek.

Below is mostly the v5 cover letter with a few additions.

The general idea is to allow freely setting device_model_version and
device_model_stubdomain_override, and to choose the right options based on
this choice.  Also, allow specifying the path to the stubdomain
kernel/ramdisk, for greater flexibility.

The first two patches add documentation about the expected
toolstack-stubdomain-qemu interface, for both MiniOS and Linux stubdomains.

The initial version has no QMP support - in the initial patches it is
completely disabled, which means no suspend/restore and no PCI passthrough.

Later patches add support for QMP over a libvchan connection. The actual
connection is made in a separate process. As discussed at Xen Summit 2019,
this allows applying some basic checks and/or filtering (not part of this
series) to limit libxl's exposure to a potentially malicious stubdomain.
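At its core the separate proxy process is a byte relay between two endpoints; a minimal sketch of that copy loop follows (plain sockets stand in for both the QMP socket and the vchan end, since libxenvchan is only usable on a Xen host, and `proxy_once` is a hypothetical name):

```python
import select
import socket

def proxy_once(a, b, bufsize=4096):
    """One iteration of a socket<->vchan-style relay in the spirit of
    vchan-socket-proxy: copy data from whichever side is readable to the
    other. This is also the natural place to hook in checks/filtering.
    Returns False once a peer has closed."""
    readable, _, _ = select.select([a, b], [], [])
    for src in readable:
        dst = b if src is a else a
        data = src.recv(bufsize)
        if not data:
            return False
        dst.sendall(data)
    return True

# Demonstration: two socketpairs in place of the QMP socket and the vchan.
left_app, left_proxy = socket.socketpair()
right_proxy, right_app = socket.socketpair()
left_app.sendall(b'{"execute": "qmp_capabilities"}')
proxy_once(left_proxy, right_proxy)
msg = right_app.recv(4096)  # the QMP message arrives on the far side
```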

Jason's additions ensure the qmp-proxy (vchan-socket-proxy) processes and
sockets are cleaned up and add some documentation.

The actual stubdomain implementation is here:

   https://github.com/jandryuk/qubes-vmm-xen-stubdom-linux
   (branch initramfs-tools, tag for-upstream-v6)

See the README there for build instructions.  Marek's version requires
dracut.  I have hacked up a version usable with initramfs-tools.

The v6 version is needed for compatibility with these changes in the v6 posting:

 - Mini-OS stubdoms use the dmargs xenstore key as a string.  Linux stubdoms
   use dm-argv as a directory of numbered entries.

 - The hardcoded "fd:3" for the restore image is replaced with the
   placeholder string $STUBDOM_RESTORE_INCOMING_ARG.  The stubdom
   initscript needs to replace that with an "fd:$FD" string referring to
   the hooked-up console 2.
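The two argument-passing schemes can be sketched as follows (illustrative Python; the exact dm-argv key naming and performing the placeholder substitution here, rather than in the stubdom initscript, are assumptions based on the description above):

```python
RESTORE_PLACEHOLDER = "$STUBDOM_RESTORE_INCOMING_ARG"

def minios_dmargs(args):
    """Mini-OS style: one space-separated string under .../image/dmargs."""
    return {"dmargs": " ".join(args)}

def linux_dm_argv(args, restore_fd=None):
    """Linux stubdom style: a dm-argv directory with one numbered entry
    per argument, so arguments containing spaces survive intact. The
    restore placeholder is swapped for "fd:$FD" when a restore fd is
    known; restore_fd mimics the initscript's substitution."""
    entries = {}
    for i, arg in enumerate(args):
        if arg == RESTORE_PLACEHOLDER and restore_fd is not None:
            arg = "fd:%d" % restore_fd
        entries["dm-argv/%d" % i] = arg
    return entries
```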

A few comments/questions about the stubdomain code:

1. There are extra patches for qemu that are necessary to run it in a
stubdomain. While it is desirable to upstream them, I think that can be done
after merging the libxl part. The stubdomain's qemu build will in most cases
be separate anyway, to limit qemu's dependencies (and thus the stubdomain size).

2. By default, the Linux hvc-xen console frontend is unreliable for data
transfer (qemu state save/restore) - it drops data sent faster than the
client reads it. To fix this, the console device needs to be switched into
raw mode (`stty raw -F /dev/hvc1`). Restoring qemu state is especially
tricky, as this would need to happen before opening the device, but stty
(obviously) needs to open the device first. To solve this problem, for now
the repository contains a kernel patch which changes the default for all hvc
consoles. Again, this isn't a practical problem, as the kernel for the
stubdomain is built separately. But it would be nice to have something
working with a vanilla kernel. I see these options:
  - convert it to a kernel cmdline parameter (hvc_console_raw=1 ?)
  - use channels instead of consoles (and, on the kernel side, change the
    default to "raw" only for channels); while in theory a better design,
    the libxl part would be more complex, as channels can be connected to
    sockets but not files, so libxl would need to read/write them exactly
    when qemu writes/reads the data, not before/after as is done now
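What raw mode actually changes can be demonstrated with Python's termios on a pseudo-terminal (a pty stands in for /dev/hvc1 here, since a real hvc console requires a Xen guest):

```python
import os
import pty
import termios
import tty

# A pty stands in for /dev/hvc1; ttys start in canonical (line-buffered) mode.
master, slave = pty.openpty()
lflags = termios.tcgetattr(slave)[3]
assert lflags & termios.ICANON  # canonical mode is the default

# `stty raw` (or the kernel patch mentioned above) clears ICANON and related
# flags, so bulk save/restore data passes through unbuffered and unmodified.
tty.setraw(slave)
lflags = termios.tcgetattr(slave)[3]
assert not (lflags & termios.ICANON)

os.close(master)
os.close(slave)
```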

Remaining parts for eliminating dom0's instance of qemu:
 - do not force QDISK backend for CDROM
 - multiple consoles support in xenconsoled

Changes in v2:
 - apply review comments by Jason Andryuk
Changes in v3:
 - rework qemu arguments handling (separate xenstore keys, instead of \x1b separator)
 - add QMP over libvchan, instead of console
 - add protocol documentation
 - a lot of minor changes, see individual patches for full changes list
 - split xenconsoled patches into separate series
Changes in v4:
 - extract vchan connection into a separate process
 - rebase on master
 - various fixes
Changes in v5:
 - Marek: apply review comments from Jason Andryuk
 - Jason: Clean up qmp-proxy processes and sockets
Changes in v6:
 - Squash vchan-proxy kill and socket cleanup into "libxl: use vchan for
   QMP access with Linux stubdomain".
 - Use dm-argv as the xenstore directory for the QEMU arguments.
 - Use $STUBDOM_RESTORE_INCOMING_ARG as a placeholder instead of
   hardcoding "fd:3".
 - Comment to re-run autotools.
 - Add Acked-by from Ian Jackson where appropriate.

Eric Shelton (1):
  libxl: Handle Linux stubdomain specific QEMU options.

Jason Andryuk (3):
  libxl: Refactor kill_device_model to libxl__kill_xs_path
  docs: Add device-model-domid to xenstore-paths
  libxl: Check stubdomain kernel & ramdisk presence

Marek Marczykowski-Górecki (14):
  Document ioemu MiniOS stubdomain protocol
  Document ioemu Linux stubdomain protocol
  libxl: fix qemu-trad cmdline for no sdl/vnc case
  libxl: Allow running qemu-xen in stubdomain
  libxl: write qemu arguments into separate xenstore keys
  xl: add stubdomain related options to xl config parser
  tools/libvchan: notify server when client is connected
  libxl: add save/restore support for qemu-xen in stubdomain
  tools: add missing libxenvchan cflags
  tools: add simple vchan-socket-proxy
  libxl: use vchan for QMP access with Linux stubdomain
  libxl: require qemu in dom0 for multiple stubdomain consoles
  libxl: ignore emulated IDE disks beyond the first 4
  libxl: consider also qemu in stubdomain in libxl__dm_active check

 .gitignore                          |   1 +
 configure                           |  14 +-
 docs/configure                      |  14 +-
 docs/man/xl.cfg.5.pod.in            |  27 +-
 docs/misc/stubdom.txt               | 105 ++++++
 docs/misc/xenstore-paths.pandoc     |   5 +
 stubdom/configure                   |  14 +-
 tools/Rules.mk                      |   2 +-
 tools/config.h.in                   |   3 +
 tools/configure                     |  46 ++-
 tools/configure.ac                  |   9 +
 tools/libvchan/Makefile             |   8 +-
 tools/libvchan/init.c               |   3 +
 tools/libvchan/vchan-socket-proxy.c | 478 ++++++++++++++++++++++++++
 tools/libxl/libxl_aoutils.c         |  32 ++
 tools/libxl/libxl_create.c          |  46 ++-
 tools/libxl/libxl_dm.c              | 506 ++++++++++++++++++++++------
 tools/libxl/libxl_domain.c          |  10 +
 tools/libxl/libxl_internal.h        |  22 ++
 tools/libxl/libxl_mem.c             |   6 +-
 tools/libxl/libxl_qmp.c             |  27 +-
 tools/libxl/libxl_types.idl         |   3 +
 tools/xl/xl_parse.c                 |   7 +
 23 files changed, 1209 insertions(+), 179 deletions(-)
 create mode 100644 tools/libvchan/vchan-socket-proxy.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUMx-0008UK-8Z; Mon, 18 May 2020 01:15:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUMv-0008U5-SI
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:01 +0000
X-Inumbo-ID: f92d3b24-98a4-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f92d3b24-98a4-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 01:14:51 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id a23so1134322qto.1
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=BWSyY9N8FyMXoF1yPMPDtUrpbXjsGUwF5REIL/gyKE4=;
 b=dmYzjqhn8djThiD2veq57JdGz25GPorcVaEokl2lHY+o69SGbl/uoxQ1RZCUEe4XZ+
 CdP6adOaOBbmJcRmbZbYX3YgVEz55ewTwrPuHXvm95IyZbWqH3scoAjIhi6QDHl+sb6e
 tL30WijZOeD8Y+jmufz+6CCWH7l0aGfp748EWFOWb9AQW4T0yRvnRlKf6KJvUy9gCFFh
 dGrmCC9DsDnFF5YqMUDumDQxT0v1nzx9Ci4MbTaMGb/bui5n0x6X0zpJH/4h4OXIQpaw
 XH6C6bVbmFxkvsQqofYtzg7xf3YsTjycd9iyVCsLTpeXclxxFijq9DRE5YYUzB06mIfY
 i1sQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=BWSyY9N8FyMXoF1yPMPDtUrpbXjsGUwF5REIL/gyKE4=;
 b=N0poky+kT4/hjGdYMsCjpQiAHAA9+qernI8bNm7l2Tp6IEB8b9wrsbis2pZHkmAMU6
 C6JJAovHP488nOyY4j+q8H0UvmiDf/aZE7TRsES7HBV9oi+6MnOspAL5RTC6UvG2RvuH
 +1E83rszwtnCaSaC+hXc4RV/GN54XdqTHSayB+cAKi+zLRjdzoJBm1bDHt3plxHhTDUh
 OtNCc7l1wtKe1wtJWyL3eufMc91CBujlaY4B7tvOUehkpuse3Xju6GnE0eXteB2qCqpZ
 k9MTVftTcA/OaQm4Q0lh/ftiilnvEMEMI3QC0ZnG2n8BnGNw7o2nuqor5ZSi2gtwBpel
 +C/w==
X-Gm-Message-State: AOAM532fdggrclJN7raGn1+VSaB8ec3VFC8P3fUrqPb7YnusBO0k7h2G
 kZCIdLBycSp9PZUEnVREwsYfOxAk
X-Google-Smtp-Source: ABdhPJyQ8nGpESmVdZae8TILsY1KpZ+Wbr4ArZ7xFlRRhBuWFQQSSMn+p/7NqkeImxBSOkezy/nJ/Q==
X-Received: by 2002:aed:3b75:: with SMTP id q50mr14557795qte.23.1589764490424; 
 Sun, 17 May 2020 18:14:50 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:49 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 04/18] libxl: Allow running qemu-xen in stubdomain
Date: Sun, 17 May 2020 21:13:39 -0400
Message-Id: <20200518011353.326287-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

No longer prohibit using a stubdomain with qemu-xen.
To help distinguish MiniOS and Linux stubdomains, add the inline helper
functions libxl__stubdomain_is_linux() and
libxl__stubdomain_is_linux_running(). Those should be used where the
difference really is MiniOS vs. Linux, not qemu-xen vs. qemu-xen-traditional.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v3:
 - new patch, instead of "libxl: Add "stubdomain_version" to
 domain_build_info"
 - helper functions as suggested by Ian Jackson
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_create.c   |  9 ---------
 tools/libxl/libxl_internal.h | 17 +++++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5a043df15f..433947abab 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -171,15 +171,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         }
     }
 
-    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
-        b_info->device_model_version !=
-            LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL &&
-        libxl_defbool_val(b_info->device_model_stubdomain)) {
-        LOG(ERROR,
-            "device model stubdomains require \"qemu-xen-traditional\"");
-        return ERROR_INVAL;
-    }
-
     if (!b_info->max_vcpus)
         b_info->max_vcpus = 1;
     if (!b_info->avail_vcpus.size) {
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index e5effd2ad1..d1ebdec8d2 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2324,6 +2324,23 @@ _hidden int libxl__device_model_version_running(libxl__gc *gc, uint32_t domid);
   /* Return the system-wide default device model */
 _hidden libxl_device_model_version libxl__default_device_model(libxl__gc *gc);
 
+static inline
+bool libxl__stubdomain_is_linux_running(libxl__gc *gc, uint32_t domid)
+{
+    /* same logic as in libxl__stubdomain_is_linux */
+    return libxl__device_model_version_running(gc, domid)
+        == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
+}
+
+static inline
+bool libxl__stubdomain_is_linux(libxl_domain_build_info *b_info)
+{
+    /* right now qemu-traditional implies a MiniOS stubdomain and qemu-xen
+     * implies a Linux stubdomain */
+    return libxl_defbool_val(b_info->device_model_stubdomain) &&
+        b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
+}
+
 #define DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, fmt, _a...)              \
     libxl__sprintf(gc, "/local/domain/%u/device-model/%u" fmt, dm_domid,   \
                    domid, ##_a)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUN1-0008Vi-IY; Mon, 18 May 2020 01:15:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUN0-0008VP-Sv
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:06 +0000
X-Inumbo-ID: fb2675e4-98a4-11ea-b9cf-bc764e2007e4
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb2675e4-98a4-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 01:14:54 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id f13so8661976qkh.2
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=vewEl/AeTk1WX8L90ml6cglMpSrPMmirDuDTRVf5RxA=;
 b=mYwa8kdUkz7FDLPebBZ7t0utzaVWREG6e9gFZwYCXeXvg6oGfWnfKLUQdvnyK8LfS+
 ueGVjDFZ2e4xk0uKFFlSPGbaV8tRnkRNpxkjZc+dzyXWiGFeaM0dvBbjRshF1+2Uii0t
 1E0CYQJS8RVytCrsCkbhDcsht5DZkCctCZQGmhgUDe50CK3Vo1rVbeKQoycs/ZEpUGUu
 w4QNirltasDW53wU7Q2LtXolkcnTQEn1d+0LeR0XkJOvGT8MUiGpVzlbDj6wNAX3dBnn
 7be/NN5b2SwS81srHb40jIXCfUg7BjXUjkRMoBOm3V5/NHIcCtxk9kDapnDij+IBMNp3
 4Qkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=vewEl/AeTk1WX8L90ml6cglMpSrPMmirDuDTRVf5RxA=;
 b=BquRqxJ8yzbVh1kybWBrw5ql5qgeTbMZ3o9RLyCbbEm55SFhYE7Qm+NwSwjn5eGRtc
 NVk6LcuSfUohGR1UNt4tmCtv68d4B8iMEpan1xPR2rDxd/DFljQ0aWs/2q0Mex7CFNIs
 97PAijRQAGzzlZfWoncf0ckK/FmIEi4BCSjoNVbTljEnj9XCnRCS8ee9XHiN0w6x033G
 N7cu1jGKzFhwqh6CEpSrxgYjDLiEaSKHfnTSczZkfL2vts/sDT1bg/pPfJUcFQkMcSJl
 fkr74Ce+MLRYlBdxTHCs7HXzeAn8qYl/9x9AL/hiB96d/9YuGywTRyU6r+DV+BO4CDwT
 Pq1g==
X-Gm-Message-State: AOAM532RtbQWTbOZsnL28+epnUPyUO5jEBc8o4HMsz8GUNit0EwjqfTb
 Yx0OrCz+Ba1zb+aLqi6iQU9JKD/6
X-Google-Smtp-Source: ABdhPJwnlMQ1VTKY2FzZBq/K3936uiwYigyztBHHlpu1eAkPn9PAUed7ZzavgvhvhWBT1t2T5Pe0MA==
X-Received: by 2002:a37:7904:: with SMTP id u4mr13676395qkc.297.1589764493547; 
 Sun, 17 May 2020 18:14:53 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:52 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 05/18] libxl: Handle Linux stubdomain specific QEMU options.
Date: Sun, 17 May 2020 21:13:40 -0400
Message-Id: <20200518011353.326287-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Eric Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Eric Shelton <eshelton@pobox.com>

This patch creates an appropriate command line for the QEMU instance
running in a Linux-based stubdomain.

NOTE: a number of items are not currently implemented for Linux-based
stubdomains, such as:
- save/restore
- QMP socket
- graphics output (e.g., VNC)

Signed-off-by: Eric Shelton <eshelton@pobox.com>

Simon:
 * fix disk path
 * fix cdrom path and "format"

Signed-off-by: Simon Gaiser <simon@invisiblethingslab.com>
[drop Qubes-specific parts]
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Allow setting stubdomain_ramdisk independently from stubdomain_kernel
Add a qemu- prefix to qemu-stubdom-linux-{kernel,rootfs}, since "stubdom"
alone doesn't convey the device model and this code is QEMU-specific.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Changes in v2:
 - fix serial specified with serial=[ ... ] syntax
 - error out on multiple consoles (incompatible with stubdom)
 - drop erroneous chunk about cdrom
Changes in v3:
 - change to use libxl__stubdomain_is_linux instead of
   b_info->stubdomain_version
 - drop libxl__stubdomain_version_running, prefer
   libxl__stubdomain_is_linux_running introduced by previous patch
 - drop ifup/ifdown script - stubdomain will handle that with qemu
   events itself
 - slightly simplify -serial argument
 - add support for multiple serial consoles, do not ignore
   b_info.u.serial(_list)
 - add error checking for more than 26 emulated disks ("/dev/xvd%c"
   format string)
Changes in v5:
 - commit message fixup to match patch contents - Marek
 - file names are now qemu-stubdom-linux-{kernel,rootfs} - Jason
 - allow setting ramdisk independently of kernel - Jason
Changes in v6:
 - Add Acked-by: Ian Jackson
 - Fixes for style nits
---
 tools/libxl/libxl_create.c   |  45 ++++++++
 tools/libxl/libxl_dm.c       | 193 ++++++++++++++++++++++++-----------
 tools/libxl/libxl_internal.h |   1 +
 tools/libxl/libxl_mem.c      |   6 +-
 tools/libxl/libxl_types.idl  |   3 +
 5 files changed, 186 insertions(+), 62 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 433947abab..8614a2c241 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -171,6 +171,40 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         }
     }
 
+    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        libxl_defbool_val(b_info->device_model_stubdomain)) {
+        if (!b_info->stubdomain_kernel) {
+            switch (b_info->device_model_version) {
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+                    b_info->stubdomain_kernel =
+                        libxl__abs_path(NOGC, "ioemu-stubdom.gz", libxl__xenfirmwaredir_path());
+                    break;
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+                    b_info->stubdomain_kernel =
+                        libxl__abs_path(NOGC,
+                                "qemu-stubdom-linux-kernel",
+                                libxl__xenfirmwaredir_path());
+                    break;
+                default:
+                    abort();
+            }
+        }
+        if (!b_info->stubdomain_ramdisk) {
+            switch (b_info->device_model_version) {
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+                    break;
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+                    b_info->stubdomain_ramdisk =
+                        libxl__abs_path(NOGC,
+                                "qemu-stubdom-linux-rootfs",
+                                libxl__xenfirmwaredir_path());
+                    break;
+                default:
+                    abort();
+            }
+        }
+    }
+
     if (!b_info->max_vcpus)
         b_info->max_vcpus = 1;
     if (!b_info->avail_vcpus.size) {
@@ -206,6 +240,17 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     if (b_info->target_memkb == LIBXL_MEMKB_DEFAULT)
         b_info->target_memkb = b_info->max_memkb;
 
+    if (b_info->stubdomain_memkb == LIBXL_MEMKB_DEFAULT) {
+        if (libxl_defbool_val(b_info->device_model_stubdomain)) {
+            if (libxl__stubdomain_is_linux(b_info))
+                b_info->stubdomain_memkb = LIBXL_LINUX_STUBDOM_MEM * 1024;
+            else
+                b_info->stubdomain_memkb = 28 * 1024; // MiniOS
+        } else {
+            b_info->stubdomain_memkb = 0; // no stubdomain
+        }
+    }
+
     libxl_defbool_setdefault(&b_info->claim_mode, false);
 
     libxl_defbool_setdefault(&b_info->localtime, false);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index b91e63db6f..dc1717bc12 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1188,6 +1188,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     int i, connection, devid;
     uint64_t ram_size;
     const char *path, *chardev;
+    bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1197,39 +1198,42 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     flexarray_vappend(dm_args, dm,
                       "-xen-domid",
                       GCSPRINTF("%d", guest_domid), NULL);
+    flexarray_append(dm_args, "-no-shutdown");
 
-    flexarray_append(dm_args, "-chardev");
-    if (state->dm_monitor_fd >= 0) {
-        flexarray_append(dm_args,
-            GCSPRINTF("socket,id=libxl-cmd,fd=%d,server,nowait",
-                      state->dm_monitor_fd));
+    /* There is currently no way to access the QMP socket in the stubdom */
+    if (!is_stubdom) {
+        flexarray_append(dm_args, "-chardev");
+        if (state->dm_monitor_fd >= 0) {
+            flexarray_append(dm_args,
+                GCSPRINTF("socket,id=libxl-cmd,fd=%d,server,nowait",
+                          state->dm_monitor_fd));
 
-        /*
-         * Start QEMU with its "CPU" paused, it will not start any emulation
-         * until the QMP command "cont" is used. This also prevent QEMU from
-         * writing "running" to the "state" xenstore node so we only use this
-         * flag when we have the QMP based startup notification.
-         * */
-        flexarray_append(dm_args, "-S");
-    } else {
-        flexarray_append(dm_args,
-                         GCSPRINTF("socket,id=libxl-cmd,"
-                                   "path=%s,server,nowait",
-                                   libxl__qemu_qmp_path(gc, guest_domid)));
-    }
+            /*
+             * Start QEMU with its "CPU" paused; it will not start any emulation
+             * until the QMP command "cont" is used. This also prevents QEMU
+             * from writing "running" to the "state" xenstore node, so we only
+             * use this flag when we have the QMP-based startup notification.
+             */
+            flexarray_append(dm_args, "-S");
+        } else {
+            flexarray_append(dm_args,
+                             GCSPRINTF("socket,id=libxl-cmd,"
+                                       "path=%s,server,nowait",
+                                       libxl__qemu_qmp_path(gc, guest_domid)));
+        }
 
-    flexarray_append(dm_args, "-no-shutdown");
-    flexarray_append(dm_args, "-mon");
-    flexarray_append(dm_args, "chardev=libxl-cmd,mode=control");
+        flexarray_append(dm_args, "-mon");
+        flexarray_append(dm_args, "chardev=libxl-cmd,mode=control");
 
-    flexarray_append(dm_args, "-chardev");
-    flexarray_append(dm_args,
-                     GCSPRINTF("socket,id=libxenstat-cmd,"
-                                    "path=%s/qmp-libxenstat-%d,server,nowait",
-                                    libxl__run_dir_path(), guest_domid));
+        flexarray_append(dm_args, "-chardev");
+        flexarray_append(dm_args,
+                         GCSPRINTF("socket,id=libxenstat-cmd,"
+                                        "path=%s/qmp-libxenstat-%d,server,nowait",
+                                        libxl__run_dir_path(), guest_domid));
 
-    flexarray_append(dm_args, "-mon");
-    flexarray_append(dm_args, "chardev=libxenstat-cmd,mode=control");
+        flexarray_append(dm_args, "-mon");
+        flexarray_append(dm_args, "chardev=libxenstat-cmd,mode=control");
+    }
 
     for (i = 0; i < guest_config->num_channels; i++) {
         connection = guest_config->channels[i].connection;
@@ -1273,7 +1277,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         flexarray_vappend(dm_args, "-name", c_info->name, NULL);
     }
 
-    if (vnc) {
+    if (vnc && !is_stubdom) {
         char *vncarg = NULL;
 
         flexarray_append(dm_args, "-vnc");
@@ -1312,11 +1316,12 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         }
 
         flexarray_append(dm_args, vncarg);
-    } else
+    } else if (!is_stubdom) {
         /*
          * Ensure that by default no vnc server is created.
          */
         flexarray_append_pair(dm_args, "-vnc", "none");
+    }
 
     /*
      * Ensure that by default no display backend is created. Further
@@ -1324,7 +1329,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
      */
     flexarray_append_pair(dm_args, "-display", "none");
 
-    if (sdl) {
+    if (sdl && !is_stubdom) {
         flexarray_append(dm_args, "-sdl");
         if (sdl->display)
             flexarray_append_pair(dm_envs, "DISPLAY", sdl->display);
@@ -1366,18 +1371,34 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             {
                 LOGD(ERROR, guest_domid, "Both serial and serial_list set");
                 return ERROR_INVAL;
-            }
-            if (b_info->u.hvm.serial) {
-                flexarray_vappend(dm_args,
-                                  "-serial", b_info->u.hvm.serial, NULL);
-            } else if (b_info->u.hvm.serial_list) {
-                char **p;
-                for (p = b_info->u.hvm.serial_list;
-                     *p;
-                     p++) {
-                    flexarray_vappend(dm_args,
-                                      "-serial",
-                                      *p, NULL);
+            } else {
+                if (b_info->u.hvm.serial) {
+                    if (is_stubdom) {
+                        /* see spawn_stub_launch_dm() for connecting STUBDOM_CONSOLE_SERIAL */
+                        flexarray_vappend(dm_args,
+                                          "-serial",
+                                          GCSPRINTF("/dev/hvc%d", STUBDOM_CONSOLE_SERIAL),
+                                          NULL);
+                    } else {
+                        flexarray_vappend(dm_args,
+                                          "-serial", b_info->u.hvm.serial, NULL);
+                    }
+                } else if (b_info->u.hvm.serial_list) {
+                    char **p;
+                    /* see spawn_stub_launch_dm() for connecting STUBDOM_CONSOLE_SERIAL */
+                    for (p = b_info->u.hvm.serial_list, i = 0;
+                         *p;
+                         p++, i++) {
+                        if (is_stubdom)
+                            flexarray_vappend(dm_args,
+                                              "-serial",
+                                              GCSPRINTF("/dev/hvc%d", STUBDOM_CONSOLE_SERIAL + i),
+                                              NULL);
+                        else
+                            flexarray_vappend(dm_args,
+                                              "-serial",
+                                              *p, NULL);
+                    }
                 }
             }
         }
@@ -1386,7 +1407,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-nographic");
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.spice.enable)) {
+        if (libxl_defbool_val(b_info->u.hvm.spice.enable) && !is_stubdom) {
             const libxl_spice_info *spice = &b_info->u.hvm.spice;
             char *spiceoptions = dm_spice_options(gc, spice);
             if (!spiceoptions)
@@ -1813,7 +1834,9 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
              * If qemu isn't doing the interpreting, the parameter is
              * always raw
              */
-            if (disks[i].backend == LIBXL_DISK_BACKEND_QDISK)
+            if (libxl_defbool_val(b_info->device_model_stubdomain))
+                format = "host_device";
+            else if (disks[i].backend == LIBXL_DISK_BACKEND_QDISK)
                 format = libxl__qemu_disk_format_string(disks[i].format);
             else
                 format = libxl__qemu_disk_format_string(LIBXL_DISK_FORMAT_RAW);
@@ -1824,6 +1847,16 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                          disks[i].vdev);
                     continue;
                 }
+            } else if (libxl_defbool_val(b_info->device_model_stubdomain)) {
+                if (disk > 'z' - 'a') {
+                    LOGD(WARN, guest_domid,
+                            "Emulation of only the first %d disks is supported with qemu-xen in stubdomain.\n"
+                            "Disk %d will be available via PV drivers but not as an emulated disk.",
+                            'z' - 'a' + 1,
+                            disk);
+                    continue;
+                }
+                target_path = GCSPRINTF("/dev/xvd%c", 'a' + disk);
             } else {
                 if (format == NULL) {
                     LOGD(WARN, guest_domid,
@@ -1964,7 +1997,7 @@ static int libxl__build_device_model_args(libxl__gc *gc,
                                         char ***args, char ***envs,
                                         const libxl__domain_build_state *state,
                                         int *dm_state_fd)
-/* dm_state_fd may be NULL iff caller knows we are using old stubdom
+/* dm_state_fd may be NULL iff caller knows we are using stubdom
  * and therefore will be passing a filename rather than a fd. */
 {
     switch (guest_config->b_info.device_model_version) {
@@ -1974,8 +2007,10 @@ static int libxl__build_device_model_args(libxl__gc *gc,
                                                   args, envs,
                                                   state);
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-        assert(dm_state_fd != NULL);
-        assert(*dm_state_fd < 0);
+        if (!libxl_defbool_val(guest_config->b_info.device_model_stubdomain)) {
+            assert(dm_state_fd != NULL);
+            assert(*dm_state_fd < 0);
+        }
         return libxl__build_device_model_args_new(gc, dm,
                                                   guest_domid, guest_config,
                                                   args, envs,
@@ -2080,6 +2115,16 @@ retry_transaction:
     return 0;
 }
 
+static int libxl__store_libxl_entry(libxl__gc *gc, uint32_t domid,
+                                    const char *name, const char *value)
+{
+    char *path = NULL;
+
+    path = libxl__xs_libxl_path(gc, domid);
+    path = libxl__sprintf(gc, "%s/%s", path, name);
+    return libxl__xs_printf(gc, XBT_NULL, path, "%s", value);
+}
+
 static void dmss_init(libxl__dm_spawn_state *dmss)
 {
     libxl__ev_qmp_init(&dmss->qmp);
@@ -2138,10 +2183,14 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
     dmss_init(&sdss->pvqemu);
     libxl__xswait_init(&sdss->xswait);
 
-    if (guest_config->b_info.device_model_version !=
-        LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL) {
-        ret = ERROR_INVAL;
-        goto out;
+    assert(libxl_defbool_val(guest_config->b_info.device_model_stubdomain));
+
+    if (libxl__stubdomain_is_linux(&guest_config->b_info)) {
+        if (d_state->saved_state) {
+            LOG(ERROR, "Save/Restore not supported yet with Linux Stubdom.");
+            ret = -1;
+            goto out;
+        }
     }
 
     sdss->pvqemu.guest_domid = INVALID_DOMID;
@@ -2163,8 +2212,8 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
 
     dm_config->b_info.shadow_memkb = 0;
     dm_config->b_info.max_vcpus = 1;
-    dm_config->b_info.max_memkb = 28 * 1024 +
-        guest_config->b_info.video_memkb;
+    dm_config->b_info.max_memkb = guest_config->b_info.stubdomain_memkb;
+    dm_config->b_info.max_memkb += guest_config->b_info.video_memkb;
     dm_config->b_info.target_memkb = dm_config->b_info.max_memkb;
 
     dm_config->b_info.max_grant_frames = guest_config->b_info.max_grant_frames;
@@ -2203,10 +2252,8 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
         dm_config->num_vkbs = 1;
     }
 
-    stubdom_state->pv_kernel.path
-        = libxl__abs_path(gc, "ioemu-stubdom.gz", libxl__xenfirmwaredir_path());
-    stubdom_state->pv_cmdline = GCSPRINTF(" -d %d", guest_domid);
-    stubdom_state->pv_ramdisk.path = "";
+    stubdom_state->pv_kernel.path = guest_config->b_info.stubdomain_kernel;
+    stubdom_state->pv_ramdisk.path = guest_config->b_info.stubdomain_ramdisk;
 
     /* fixme: this function can leak the stubdom if it fails */
     ret = libxl__domain_make(gc, dm_config, stubdom_state,
@@ -2226,6 +2273,8 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
         goto out;
     }
 
+    libxl__store_libxl_entry(gc, guest_domid, "dm-version",
+        libxl_device_model_version_to_string(dm_config->b_info.device_model_version));
     libxl__write_stub_dmargs(gc, dm_domid, guest_domid, args);
     libxl__xs_printf(gc, XBT_NULL,
                      GCSPRINTF("%s/image/device-model-domid",
@@ -2235,6 +2284,15 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
                      GCSPRINTF("%s/target",
                                libxl__xs_get_dompath(gc, dm_domid)),
                      "%d", guest_domid);
+    if (guest_config->b_info.device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
+        /* qemu-xen is used as a dm in the stubdomain, so we set the bios
+         * according to this */
+        libxl__xs_printf(gc, XBT_NULL,
+                        libxl__sprintf(gc, "%s/hvmloader/bios",
+                                       libxl__xs_get_dompath(gc, guest_domid)),
+                        "%s",
+                        libxl_bios_type_to_string(guest_config->b_info.u.hvm.bios));
+    }
     ret = xc_domain_set_target(ctx->xch, dm_domid, guest_domid);
     if (ret<0) {
         LOGED(ERROR, guest_domid, "setting target domain %d -> %d",
@@ -2314,8 +2372,13 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
         if (ret) goto out;
     }
 
-    if (guest_config->b_info.u.hvm.serial)
+    if (guest_config->b_info.u.hvm.serial) {
         num_console++;
+    } else if (guest_config->b_info.u.hvm.serial_list) {
+        char **serial = guest_config->b_info.u.hvm.serial_list;
+        while (*(serial++))
+            num_console++;
+    }
 
     console = libxl__calloc(gc, num_console, sizeof(libxl__device_console));
 
@@ -2349,8 +2412,18 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
                     console[i].output =
                         GCSPRINTF("pipe:%s", d_state->saved_state);
                 break;
+            case STUBDOM_CONSOLE_SERIAL:
+                if (guest_config->b_info.u.hvm.serial) {
+                    console[i].output = guest_config->b_info.u.hvm.serial;
+                    break;
+                }
+                /* fall-through */
             default:
-                console[i].output = "pty";
+                /* serial_list is set, as otherwise num_console would be
+                 * smaller and consoles 0-2 are handled above. */
+                assert(guest_config->b_info.u.hvm.serial_list);
+                console[i].output = guest_config->b_info.u.hvm.serial_list[
+                    i-STUBDOM_CONSOLE_SERIAL];
                 break;
         }
     }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index d1ebdec8d2..f2f76439ec 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -119,6 +119,7 @@
 #define STUBDOM_CONSOLE_RESTORE 2
 #define STUBDOM_CONSOLE_SERIAL 3
 #define STUBDOM_SPECIAL_CONSOLES 3
+#define LIBXL_LINUX_STUBDOM_MEM 128
 #define TAP_DEVICE_SUFFIX "-emu"
 #define DOMID_XS_PATH "domid"
 #define PVSHIM_BASENAME "xen-shim"
diff --git a/tools/libxl/libxl_mem.c b/tools/libxl/libxl_mem.c
index bc7b95aa74..e52a9624ea 100644
--- a/tools/libxl/libxl_mem.c
+++ b/tools/libxl/libxl_mem.c
@@ -459,8 +459,10 @@ int libxl__domain_need_memory_calculate(libxl__gc *gc,
     case LIBXL_DOMAIN_TYPE_PVH:
     case LIBXL_DOMAIN_TYPE_HVM:
         *need_memkb += LIBXL_HVM_EXTRA_MEMORY;
-        if (libxl_defbool_val(b_info->device_model_stubdomain))
-            *need_memkb += 32 * 1024;
+        if (libxl_defbool_val(b_info->device_model_stubdomain)) {
+            *need_memkb += b_info->stubdomain_memkb;
+            *need_memkb += b_info->video_memkb;
+        }
         break;
     case LIBXL_DOMAIN_TYPE_PV:
         *need_memkb += LIBXL_PV_EXTRA_MEMORY;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index f7c473be74..9d3f05f399 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -518,6 +518,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
     
     ("device_model_version", libxl_device_model_version),
     ("device_model_stubdomain", libxl_defbool),
+    ("stubdomain_memkb",   MemKB),
+    ("stubdomain_kernel",  string),
+    ("stubdomain_ramdisk", string),
     # if you set device_model you must set device_model_version too
     ("device_model",     string),
     ("device_model_ssidref", uint32),
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUN7-00007A-1V; Mon, 18 May 2020 01:15:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUN5-00006h-So
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:11 +0000
X-Inumbo-ID: fc669eac-98a4-11ea-b9cf-bc764e2007e4
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fc669eac-98a4-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 01:14:56 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id n14so8611652qke.8
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=iuOS5lKMe+jAFhTt05o45iDFjEi/wVIZWcBxoRF26Ho=;
 b=khyu6WsPvZWt0u9iQ2IrrvLYA9TD208VQjIiV6prcTnUX9q82CDQl3Bce5fdz4ShYr
 oEHmCVvlzBQVi7x2BasBL1ZMP9ExDSot3EEMeb968Gcg/Mpr8vjvP6lT+PbKv80Ad6B9
 fEpsLP0qfq8RVFUndXU294GOfuR+/ltCB3wulhKsTvL/KU/qdaTBg6pZulMAk6XUdoF7
 jXJ+PGISMdXbLiwF3DmUn4SpMBiKrCmf6F4kpJhr4bkoyE4Lewbq4lKbeJkYP04IXvcA
 eZLFgBP2fqjUYF2tUymdk4LQL6GfJGaDyfC/EDnF3I8H2EHQSIjWVlmcYvuI0RMG2go7
 fNBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=iuOS5lKMe+jAFhTt05o45iDFjEi/wVIZWcBxoRF26Ho=;
 b=D6fcNTW6tj/Sq0jQYh91W454vVlMaGS6JXewwWsD4upzFjpQpCl9KhzrkeYfOI5CE+
 fac65kvcnQFlgaW5pHSC/VdcCwewfllzD2a17/kNuMMetcYYz7Sp1wczOK9ug2+l3xTQ
 pByd+bhV7zULIZPB9swWcw4cxSfyBHIEDRQ6TJNrlB6HZ6znEAhcX9+ejS7U5iCSH+Ys
 DRfwfJ/03I8ZuKrdJUlNd6ALiGgfPf22HXpc5utIaTI0mzyuBMmDBr95WAfg4yiku7Tc
 ADuDCHwzLABIuQD5OViNWIbAMSmBT7JMgRs82pcu0gGRm2FM/TVv7aJbBi3Kj5BViqQo
 6Nig==
X-Gm-Message-State: AOAM530vGUr5j56QGEGM4KS8pDk30pgMBOqzzZz22m51OGa86Dqcyzmn
 s6Zxl4jASqbRPhP8z0jAOxBK7w1+
X-Google-Smtp-Source: ABdhPJySweQ8Z/MdXq2k2CojCoWGTbRxcoGz07qlXfFHNKgg6QHEkJNs87rXCVxXQD+S6SIkbtBZvw==
X-Received: by 2002:a05:620a:6bc:: with SMTP id
 i28mr13905580qkh.330.1589764495938; 
 Sun, 17 May 2020 18:14:55 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:55 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 06/18] libxl: write qemu arguments into separate xenstore
 keys
Date: Sun, 17 May 2020 21:13:41 -0400
Message-Id: <20200518011353.326287-7-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

This allows using arguments with spaces, like -append, without
nominating any special "separator" character.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Re-work to use libxl_xs_* functions in a loop.  Also write arguments in
dm-argv directory instead of overloading mini-os's dmargs string.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Changes in v3:
 - a previous version of this patch, "libxl: use \x1b to separate qemu
   arguments for linux stubdomain", used a specific non-printable
   separator, but it was rejected because xenstore doesn't cope well
   with non-printable characters
Changes in v6:
 - Re-work to use libxl__xs_ functions in a loop.
 - Drop rtc/timeoffset
---
 tools/libxl/libxl_dm.c | 56 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 55 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index dc1717bc12..eaed6e8ee7 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2066,6 +2066,57 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
     return 0;
 }
 
+static int libxl__write_stub_linux_dm_argv(libxl__gc *gc,
+                                           int dm_domid, int guest_domid,
+                                           char **args)
+{
+    const char *vm_path;
+    char *path;
+    struct xs_permissions roperm[2];
+    xs_transaction_t t = XBT_NULL;
+    int rc;
+
+    roperm[0].id = 0;
+    roperm[0].perms = XS_PERM_NONE;
+    roperm[1].id = dm_domid;
+    roperm[1].perms = XS_PERM_READ;
+
+    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
+                                  GCSPRINTF("/local/domain/%d/vm", guest_domid),
+                                  &vm_path);
+    if (rc)
+        return rc;
+
+    path = GCSPRINTF("%s/image/dm-argv", vm_path);
+
+    for (;;) {
+        int i;
+
+        rc = libxl__xs_transaction_start(gc, &t);
+        if (rc) goto out;
+
+        rc = libxl__xs_mknod(gc, t, path, roperm, ARRAY_SIZE(roperm));
+        if (rc) goto out;
+
+        for (i=1; args[i] != NULL; i++) {
+            rc = libxl__xs_write_checked(gc, t, GCSPRINTF("%s/%03d", path, i),
+                                         args[i]);
+            if (rc) goto out;
+        }
+
+        rc = libxl__xs_transaction_commit(gc, &t);
+        if (!rc) break;
+        if (rc<0) goto out;
+    }
+
+    return 0;
+
+ out:
+    libxl__xs_transaction_abort(gc, &t);
+
+    return rc;
+}
+
 static int libxl__write_stub_dmargs(libxl__gc *gc,
                                     int dm_domid, int guest_domid,
                                     char **args)
@@ -2275,7 +2326,10 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
 
     libxl__store_libxl_entry(gc, guest_domid, "dm-version",
         libxl_device_model_version_to_string(dm_config->b_info.device_model_version));
-    libxl__write_stub_dmargs(gc, dm_domid, guest_domid, args);
+    if (libxl__stubdomain_is_linux(&guest_config->b_info))
+        libxl__write_stub_linux_dm_argv(gc, dm_domid, guest_domid, args);
+    else
+        libxl__write_stub_dmargs(gc, dm_domid, guest_domid, args);
     libxl__xs_printf(gc, XBT_NULL,
                      GCSPRINTF("%s/image/device-model-domid",
                                libxl__xs_get_dompath(gc, guest_domid)),
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNC-0000AK-BW; Mon, 18 May 2020 01:15:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNA-00009C-UA
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:16 +0000
X-Inumbo-ID: fdb22538-98a4-11ea-9887-bc764e2007e4
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fdb22538-98a4-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 01:14:58 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id g20so3966786qvb.9
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:14:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=ZT6l/MiPRcZkFyWjFyjLysM4bTyEFRemlpKKfgP53K8=;
 b=ZWs4G3cvljs854So2eVVa51MJT2R9HZcmfGA0jSnGPe07lXItHSgOW1qZQSuYbNzGV
 R4bZ4lyqcab2++5vnVaMlyJ8zhkFwgcbHHuPwue2Qb15ltHeoSEiaVrxx2xR9hm93kZ4
 4x2k53D0z5xPoboQ6DQL+pWcf0DLJtHHv9/7cNOJTbs/q9FzBlWwR0fy+V9diAWAuEAA
 seKIINGSVWe0Ai3YYbayONgS4MNo8zt9pIKYiLBYyfb/R8Q66McEPB6mxYQfHbLEiFsS
 OS/snIZXrsCpV0JwM/XyDYpswMyXecljlE+Eng6fV6Ofxd49PRJ120E24WdVDN2swUnH
 dUqg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=ZT6l/MiPRcZkFyWjFyjLysM4bTyEFRemlpKKfgP53K8=;
 b=nDdh96OgKK0ADj3oJWRvFu6OvskDqsIKyUnRYcmxHgZmFcQM2tkE5+TBRUXwJ6ESfP
 dK6LwvtjGNt7OuzUtOhuPaqdFGpLad63VQrqeeqUcdNRiYx5q+ZeXAzLzbNXsurY/Xze
 gjpy2e97sn8Up7lpS+cuWs8sgEtrfHQKiK2vYDJ+jqcfjUcN0Q/70UtBeN6tFrKpLmdU
 tSIV9F71in0Uh/n+gs1T6rvDNnFjUHFcmqsjk+opzCkMZslR361DtT2le5TkxqHP84Is
 S86lIFaO5hcGf3lzKiFBwtVJ6Eu4+n/GD7P3PXj/CGwzFc8QAVxjUGh+W9SivbV6MwG9
 FZIw==
X-Gm-Message-State: AOAM530Eo1wcx/8ksmNBLB422dM9DdRbEc8hJJsgtCWVdHGE7gNv1QPS
 ygCMB6ZIDMjIYuoYOv59e8GbMHDR
X-Google-Smtp-Source: ABdhPJxyBw/nM/Fgze8bx5QDsvLGwWtMqUCcThi3xkJ73BfrJmBnOwb54O24QFL5KfLmoZ65kRz+gg==
X-Received: by 2002:ad4:4b72:: with SMTP id m18mr13842435qvx.62.1589764498064; 
 Sun, 17 May 2020 18:14:58 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:14:57 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 07/18] xl: add stubdomain related options to xl config
 parser
Date: Sun, 17 May 2020 21:13:42 -0400
Message-Id: <20200518011353.326287-8-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
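
For illustration, a guest config exercising the new options might look like
the following (the paths are hypothetical examples, not shipped defaults):

```
device_model_version = "qemu-xen"
device_model_stubdomain_override = 1
stubdomain_kernel = "/usr/lib/xen/boot/stubdom-linux-kernel"
stubdomain_ramdisk = "/usr/lib/xen/boot/stubdom-linux-rootfs"
stubdomain_memory = 256
```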
---
 docs/man/xl.cfg.5.pod.in | 27 +++++++++++++++++++++++----
 tools/xl/xl_parse.c      |  7 +++++++
 2 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0e9e58a41a..c9bc181a95 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2733,10 +2733,29 @@ model which they were installed with.
 
 =item B<device_model_override="PATH">
 
-Override the path to the binary to be used as the device-model. The
-binary provided here MUST be consistent with the
-B<device_model_version> which you have specified. You should not
-normally need to specify this option.
+Override the path to the binary to be used as the device-model running in
+the toolstack domain. The binary provided here MUST be consistent with the
+B<device_model_version> which you have specified. You should not normally need
+to specify this option.
+
+=item B<stubdomain_kernel="PATH">
+
+Override the path to the kernel image used as the device-model stubdomain.
+The binary provided here MUST be consistent with the
+B<device_model_version> which you have specified.
+In the case of B<qemu-xen-traditional> it is expected to be a MiniOS-based
+stubdomain image; in the case of B<qemu-xen> it is expected to be a
+Linux-based stubdomain kernel.
+
+=item B<stubdomain_ramdisk="PATH">
+
+Override the path to the ramdisk image used by the device-model stubdomain.
+The image provided here is used by the kernel pointed to by B<stubdomain_kernel>.
+It is currently used only by Linux-based stubdomain kernels.
+
+=item B<stubdomain_memory=MBYTES>
+
+Start the stubdomain with MBYTES megabytes of RAM. Default is 128.
 
 =item B<device_model_stubdomain_override=BOOLEAN>
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 4450d59f16..61b4ef7b7e 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2525,6 +2525,13 @@ skip_usbdev:
     xlu_cfg_replace_string(config, "device_model_user",
                            &b_info->device_model_user, 0);
 
+    xlu_cfg_replace_string (config, "stubdomain_kernel",
+                            &b_info->stubdomain_kernel, 0);
+    xlu_cfg_replace_string (config, "stubdomain_ramdisk",
+                            &b_info->stubdomain_ramdisk, 0);
+    if (!xlu_cfg_get_long (config, "stubdomain_memory", &l, 0))
+        b_info->stubdomain_memkb = l * 1024;
+
 #define parse_extra_args(type)                                            \
     e = xlu_cfg_get_list_as_string_list(config, "device_model_args"#type, \
                                     &b_info->extra##type, 0);            \
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNG-0000DE-L9; Mon, 18 May 2020 01:15:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNF-0000Ch-SK
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:21 +0000
X-Inumbo-ID: ff2e7cfe-98a4-11ea-b9cf-bc764e2007e4
Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ff2e7cfe-98a4-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 01:15:01 +0000 (UTC)
Received: by mail-qv1-xf43.google.com with SMTP id fb16so3938787qvb.5
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=2CXxR3/fu54dMwhCVd1DE62O7N45m5o1SIg5x3dwKzg=;
 b=mCL9Oj5OXtEdRROI5LWby0nOvw900mzZIO5quk28U2jZm7/NqQivU/+iz8T33e8u7n
 0Rdw/N3idGWCnRxdlKRP6uBL757h6LtBgRE8M+JB7jfXk8LADSvphGsrJB3s2o8/qY+o
 K0NSDTLYpotXWpS4QQ3Y2lq7Rk1yvJ8gNLzNoh951rp2ZET3V74ubyPS2Fw2zenZN9y3
 krvC9GyvDK/4v/RFW3exy7p6rnFYRy1NAdUzBvGglAVgxEUgwQjdfk6FHGozg+bLlY31
 Io/uKp/qNsde8PkJ1e+/FgTGLjLZUcFYZwbuG9ULUAnmAElEa4aIxFEpZ/dx2FSkcGDi
 7awQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=2CXxR3/fu54dMwhCVd1DE62O7N45m5o1SIg5x3dwKzg=;
 b=V+GgUHS1dAJAKf0fMREca28VNsF3dAxI1K/bqFLeG3qVN87wWOYu5quo5IclOmwO6X
 VempTt90uk/CoBA8cQdmlqE1X55aLlYoMgEjSX5ePK2aQ9oa56EhdSR373dilX6FiDVE
 y26TKdYgQ/5uj68pAtlC5h79spwDW0OmWwfIH0DJqFBdVjggupzEx6iqzUq6y/myoiO8
 h6lHrE0xloVBHNOJR623wBI5cHsVa3s5UOdQCX25+ZmKQqT6zdpVemgeKT5VjaYSITTZ
 ObbX9matIhjOfZLl2/falr6Ce21vbfMLOVq1n4f1uKedm+kqOLcn34XYEh+ruEBSHIiS
 xSHQ==
X-Gm-Message-State: AOAM530w+mC7DzCtIPtnFPLl+jKlmSLwN2WNEr7x4dVgOdiV4Nr1LMy5
 uadhDWHnt2qCwjasLZNjEviSgjjC
X-Google-Smtp-Source: ABdhPJzsmUptP5Ebhlwr3EDVbW73cYfMgRTMTBrbkkwiyvDk64mLowA/Oynu9LaJ72tUt7A9HNHUUg==
X-Received: by 2002:a0c:99d3:: with SMTP id y19mr13638364qve.72.1589764500653; 
 Sun, 17 May 2020 18:15:00 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.14.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:00 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 08/18] tools/libvchan: notify server when client is
 connected
Date: Sun, 17 May 2020 21:13:43 -0400
Message-Id: <20200518011353.326287-9-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ian Jackson <ian.jackson@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Let the server know when the client is connected. Otherwise, the server
will notice only when the client sends some data.
This change does not break existing clients, as libvchan users should
handle spurious notifications anyway (for example, an acknowledgement of
the remote side reading the data).

Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Replace spaces with tabs to match the file's whitespace.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Marek: I had this patch in Qubes for a long time and totally forgot it
wasn't an upstream thing...

Changes in v6:
 - Add Acked-by: Ian Jackson
 - CC Daniel De Graaf
---
 tools/libvchan/init.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/libvchan/init.c b/tools/libvchan/init.c
index 180833dc2f..ad4b64fbe3 100644
--- a/tools/libvchan/init.c
+++ b/tools/libvchan/init.c
@@ -447,6 +447,9 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
 	ctrl->ring->cli_live = 1;
 	ctrl->ring->srv_notify = VCHAN_NOTIFY_WRITE;
 
+	/* wake up the server */
+	xenevtchn_notify(ctrl->event, ctrl->event_port);
+
  out:
 	if (xs)
 		xs_daemon_close(xs);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNL-0000HB-VQ; Mon, 18 May 2020 01:15:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNK-0000Ga-Sc
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:26 +0000
X-Inumbo-ID: 00682200-98a5-11ea-b07b-bc764e2007e4
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00682200-98a5-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:15:03 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id dh1so98257qvb.13
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=yf2qnzYuPc74VoW0YKV2ithZ/xphv7gmcP5v/+l3iQ4=;
 b=KOHoVa50/UtC/CAtbj0849ENHoVAFfMHEV1l42yEKy1v3NaFnlVMQq+lm29to+PbfH
 v9icKYK9XomLzwQS3JcPFh3cIzt179r3fAWsmq9IJZKbte7S9LDDnD2CYiHj+yQcvG6m
 BgDDNhy1fsPIDTZY2qE6GQURttaagMhR4reexRtYNP9kVq2cPWXUbo+uXa47JwtYIe16
 5uZrFlD2woXK9PfOA/+YtrQDGxdNHOTDY0zfwq3mpMuAbn6+IcO0onEW8rS7tNdRZm6t
 vpDl4WqDun8gKi5spPTtLrVI3efY5pKhpK8P/1mA8cgI9o7laVcNP596RlPGbxjG8yca
 452w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=yf2qnzYuPc74VoW0YKV2ithZ/xphv7gmcP5v/+l3iQ4=;
 b=c71lneqMGHnoSi0k9OsNyTV0MPbajpvfg0Ix8iYkMs51R1ubtt0NIgCQcyXQe/b0QP
 WhpprUVDQwhjFu4Mz8G/jUb8bTBUi5In1bQ2AM4F4jLVy1LPcfU/KnC1T1SZNsoipRJ/
 zOkwuz3qrJ2rwutsChQmp1QmkNHUW9OkE9YPsA3J4h6jVUjYcmwVYtb2YKc10BV3LW7l
 6ZnNjvbNvpL4o3jNToUBTLLA+U0Xc0KOLDRQ2nitAwu2dM+EtRvT6wgBiFqDlB8Z65kR
 8Glqt/xtuBUmZOUx2gALIxklYa88BSIAMgLzH39qK9ffqtEXIZogAUktBw54O9vcQVkw
 cQOg==
X-Gm-Message-State: AOAM531SWBbSTQuhlSsg7EwLtRIrOIgLXp3w59V21fNOP24ZEl2NVxEU
 AGbhTVP8YriTWvIi2OWgBNscQ3TL
X-Google-Smtp-Source: ABdhPJxH9LGGKMjjgcrRHtlij2/ia7N6HrfDzZLI0FwdCEyudApg4MP186774dBIFdggW47RIC5pVg==
X-Received: by 2002:ad4:4b01:: with SMTP id r1mr13499333qvw.38.1589764502578; 
 Sun, 17 May 2020 18:15:02 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:01 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 09/18] libxl: add save/restore support for qemu-xen in
 stubdomain
Date: Sun, 17 May 2020 21:13:44 -0400
Message-Id: <20200518011353.326287-10-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Rely on a wrapper script in the stubdomain to attach the relevant consoles
to qemu.  The save console (1) must be attached to fdset/1.  When
performing a restore, $STUBDOM_RESTORE_INCOMING_ARG must be replaced on
the qemu command line by "fd:$FD", where $FD is an open file descriptor
number to the restore console (2).

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Address TODO in dm_state_save_to_fdset: Only remove savefile for
non-stubdom.
Use $STUBDOM_RESTORE_INCOMING_ARG instead of fd:3 and update commit
message.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Changes in v3:
 - adjust for qmp_ev*
 - assume specific fdset id in qemu set in stubdomain
Changes in v5:
 - Only remove savefile for non-stubdom
Changes in v6:
 - Replace hardcoded fd:3 with placeholder $STUBDOM_RESTORE_INCOMING_ARG
---
 tools/libxl/libxl_dm.c  | 25 +++++++++++++------------
 tools/libxl/libxl_qmp.c | 27 +++++++++++++++++++++++++--
 2 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index eaed6e8ee7..a4f8866d33 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1745,10 +1745,19 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     }
 
     if (state->saved_state) {
-        /* This file descriptor is meant to be used by QEMU */
-        *dm_state_fd = open(state->saved_state, O_RDONLY);
-        flexarray_append(dm_args, "-incoming");
-        flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
+        if (is_stubdom) {
+            /* Linux stubdomain must replace $STUBDOM_RESTORE_INCOMING_ARG
+             * with the appropriate fd:$num argument for the
+             * STUBDOM_CONSOLE_RESTORE console 2.
+             */
+            flexarray_append(dm_args, "-incoming");
+            flexarray_append(dm_args, "$STUBDOM_RESTORE_INCOMING_ARG");
+        } else {
+            /* This file descriptor is meant to be used by QEMU */
+            *dm_state_fd = open(state->saved_state, O_RDONLY);
+            flexarray_append(dm_args, "-incoming");
+            flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
+        }
     }
     for (i = 0; b_info->extra && b_info->extra[i] != NULL; i++)
         flexarray_append(dm_args, b_info->extra[i]);
@@ -2236,14 +2245,6 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
 
     assert(libxl_defbool_val(guest_config->b_info.device_model_stubdomain));
 
-    if (libxl__stubdomain_is_linux(&guest_config->b_info)) {
-        if (d_state->saved_state) {
-            LOG(ERROR, "Save/Restore not supported yet with Linux Stubdom.");
-            ret = -1;
-            goto out;
-        }
-    }
-
     sdss->pvqemu.guest_domid = INVALID_DOMID;
 
     libxl_domain_create_info_init(&dm_config->c_info);
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index efaba91086..c394000ea9 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -962,6 +962,7 @@ static void dm_stopped(libxl__egc *egc, libxl__ev_qmp *ev,
                        const libxl__json_object *response, int rc);
 static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
                               const libxl__json_object *response, int rc);
+static void dm_state_save_to_fdset(libxl__egc *egc, libxl__ev_qmp *ev, int fdset);
 static void dm_state_saved(libxl__egc *egc, libxl__ev_qmp *ev,
                            const libxl__json_object *response, int rc);
 
@@ -994,10 +995,17 @@ static void dm_stopped(libxl__egc *egc, libxl__ev_qmp *ev,
     EGC_GC;
     libxl__domain_suspend_state *dsps = CONTAINER_OF(ev, *dsps, qmp);
     const char *const filename = dsps->dm_savefile;
+    uint32_t dm_domid = libxl_get_stubdom_id(CTX, dsps->domid);
 
     if (rc)
         goto error;
 
+    if (dm_domid) {
+        /* see Linux stubdom interface in docs/stubdom.txt */
+        dm_state_save_to_fdset(egc, ev, 1);
+        return;
+    }
+
     ev->payload_fd = open(filename, O_WRONLY | O_CREAT, 0600);
     if (ev->payload_fd < 0) {
         LOGED(ERROR, ev->domid,
@@ -1028,7 +1036,6 @@ static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
     EGC_GC;
     int fdset;
     const libxl__json_object *o;
-    libxl__json_object *args = NULL;
     libxl__domain_suspend_state *dsps = CONTAINER_OF(ev, *dsps, qmp);
 
     close(ev->payload_fd);
@@ -1043,6 +1050,21 @@ static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
         goto error;
     }
     fdset = libxl__json_object_get_integer(o);
+    dm_state_save_to_fdset(egc, ev, fdset);
+    return;
+
+error:
+    assert(rc);
+    libxl__remove_file(gc, dsps->dm_savefile);
+    dsps->callback_device_model_done(egc, dsps, rc);
+}
+
+static void dm_state_save_to_fdset(libxl__egc *egc, libxl__ev_qmp *ev, int fdset)
+{
+    EGC_GC;
+    int rc;
+    libxl__json_object *args = NULL;
+    libxl__domain_suspend_state *dsps = CONTAINER_OF(ev, *dsps, qmp);
 
     ev->callback = dm_state_saved;
 
@@ -1060,7 +1082,8 @@ static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
 
 error:
     assert(rc);
-    libxl__remove_file(gc, dsps->dm_savefile);
+    if (!libxl_get_stubdom_id(CTX, dsps->domid))
+        libxl__remove_file(gc, dsps->dm_savefile);
     dsps->callback_device_model_done(egc, dsps, rc);
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNR-0000M2-83; Mon, 18 May 2020 01:15:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNP-0000Ko-TU
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:31 +0000
X-Inumbo-ID: 016d70ec-98a5-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 016d70ec-98a5-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:15:04 +0000 (UTC)
Received: by mail-qt1-x844.google.com with SMTP id z18so6870011qto.2
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=67muXVTuajJEPjuiVYNz+RE396Rty6mUCKX18cQDly8=;
 b=D4T5H0vbM6abA4IPQnX96kkmnRgQT1VWkXvM5OKBk1oIKycc7dDzOFxd8SbWWve1AU
 HTtYsY5V6j/jarOmwrKf5FVXK/IQPUX/JYod5mnmMnQIrOyr6Icixd5KKi0hyELi3Pnj
 b96SVQ3gEvZyVaGU6DSe5Qd8c9c43K2ktSCpiTvMRQRVv/XurUDFIrzrNef9mZrbkI+j
 5TYudBhi4LF5agAUHdpuHRQbrw++PhdOxME4+7TwD4beWhLuRjIErTSnARsfmghGJtP9
 8NdG/FUm8PwVNwqRLhpjJR51H+RQTDBGj1o+sfX+TiBJV9ZuS69NRwJecGZvOVXw2ex9
 nGGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=67muXVTuajJEPjuiVYNz+RE396Rty6mUCKX18cQDly8=;
 b=H91FW7Ydlt++MwFxXZg3eY5pY1/qW2C/3MXRREwRWozrVXtKqINCi579OKQLjUjCoS
 S0tXJRY/KWrNTwuv7LPzdrgPoGf+PmCvwd/VL1cPQVdlxEO8oWuHkukuztk1IxBU7a0F
 fsDz/OQZnG1OPWatXCXZXzUTNEMlnUYNybJyjgy+/KJM4/w3JDLFWcf5AKg6sym33qMl
 qf2dJyTXBMzGuSQC/bH3iEYEKWiq91e86p2+a8SNE9D9V/C/3AYpQbIJquxwWRgt1Vh3
 tfoYxN8SwHbovXzu8Lvxjetzz5kzwdloCAIkSNB+HNaikmTm2kJM1E1sOrBX2499way9
 nsXQ==
X-Gm-Message-State: AOAM533GECx7PB36b1y/iVEvJ9hFEHQukZD14FyD6hapWUhN37PH847R
 z3H+4MMXT/VGrohX6bRfsLwCWgX0
X-Google-Smtp-Source: ABdhPJxtLH9rX6Ck17bZx1k313hMMaEQ8RldeRFajSKvDhUMnZIgE9mHv3ehOVt3GiNAChejL2Z6WA==
X-Received: by 2002:aed:20d1:: with SMTP id 75mr7791829qtb.1.1589764504323;
 Sun, 17 May 2020 18:15:04 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:03 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 10/18] tools: add missing libxenvchan cflags
Date: Sun, 17 May 2020 21:13:45 -0400
Message-Id: <20200518011353.326287-11-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

libxenvchan.h includes xenevtchn.h and xengnttab.h, so applications built
with it need the applicable -I flags in CFLAGS too.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6
 - Add Acked-by: Ian Jackson
---
 tools/Rules.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5b8cf748ad..59c72e7a88 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -157,7 +157,7 @@ SHDEPS_libxenstat  = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
 LDLIBS_libxenstat  = $(SHDEPS_libxenstat) $(XEN_LIBXENSTAT)/libxenstat$(libextension)
 SHLIB_libxenstat   = $(SHDEPS_libxenstat) -Wl,-rpath-link=$(XEN_LIBXENSTAT)
 
-CFLAGS_libxenvchan = -I$(XEN_LIBVCHAN)
+CFLAGS_libxenvchan = -I$(XEN_LIBVCHAN) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 SHDEPS_libxenvchan = $(SHLIB_libxentoollog) $(SHLIB_libxenstore) $(SHLIB_libxenevtchn) $(SHLIB_libxengnttab)
 LDLIBS_libxenvchan = $(SHDEPS_libxenvchan) $(XEN_LIBVCHAN)/libxenvchan$(libextension)
 SHLIB_libxenvchan  = $(SHDEPS_libxenvchan) -Wl,-rpath-link=$(XEN_LIBVCHAN)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNV-0000QO-PH; Mon, 18 May 2020 01:15:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNU-0000PL-Sx
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:36 +0000
X-Inumbo-ID: 02fc0018-98a5-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02fc0018-98a5-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 01:15:07 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id m44so6831521qtm.8
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=kEXkWybJoTWgEbRxoPRBV+0UyHS+McZIDFYoiINJxvs=;
 b=JZ1knBpIl1uMj6fw7I7Sg2bGPMN034eG56IZG3yTLqLIx37AS7dvd2EV2NZ3FUzocO
 MKfHMTxpC2UpA/k+5ZYJX9U4ettdhzjR/eOcaGOoCAWqkZJ1qy+3kbIlmAaYO1wjohVM
 RYHZN1g76meSesBGDkREn5np5hF0+wslBnAQUy3Mp8U52nGcLdX1M/4sLWs0iBkOfGOs
 3VzGsZtZCKZpeJa+P3nTF8wClYP7FuY8cQ0dA9XXwzGcIkeshlRVcYchEfhfYN1v1HWG
 5cOnRkimBobvpxevHCo++CthhRqDemyFx5COghrJprTxPXZNWp2V/TPIABxvDDqUzZya
 BpqQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=kEXkWybJoTWgEbRxoPRBV+0UyHS+McZIDFYoiINJxvs=;
 b=LTO95Y8R9ydOEHR2rDzNzfV0cAvwcw7TMyyheg1ZbwTmJEDmZW2WGW1+LqFYXkX4Dm
 F1Td1fBU1Gh7S5fsSCb/lINgKOh1tnr7tHxqiRh9wIf7G1JCRzB6Y9YBr0XmiXAuqjIN
 g4LRwTNIn/rQXYqZGVhp5SbYC4nB6IPbKH6Dfo+6bjVm1z4MH0BiFbJF3yDNShBqe9qI
 p2lBsW++s8TSU1Zu+fu4lXF+WYtBoybTKwehUeliBxIEy2ERbXXZgZfBR+uxx0p9/T3X
 qVAd6uhpEVIimnrw/rAtQ3mUKPJSeepMpFHlyNMJ9qBZkm/l3sC5dtk1aRZeIMdvkoj9
 Achw==
X-Gm-Message-State: AOAM533Sn1xNMHfOUbRc5DH6GUVr4cnkarRa795VjGWxoIEiH4FwOMV4
 49NijUsBt1qfmJcB/w0THbcn+P0H
X-Google-Smtp-Source: ABdhPJzXyZUrcLLfTU8PgMhqeqRrKcW1PRBOtnFO9VTqBoEsJbsWe8ytBZ85/BviYqmNNAIlEm/7Sg==
X-Received: by 2002:ac8:3f5d:: with SMTP id w29mr13968856qtk.192.1589764506581; 
 Sun, 17 May 2020 18:15:06 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:05 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 11/18] tools: add simple vchan-socket-proxy
Date: Sun, 17 May 2020 21:13:46 -0400
Message-Id: <20200518011353.326287-12-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Add a simple proxy for tunneling a socket connection over vchan. This is
based on the existing vchan-node* applications, but extended with socket
support. vchan-socket-proxy serves as either a client or a server,
depending on parameters. It can be used to communicate transparently
with an application in another domain that normally exposes a UNIX
socket interface. Specifically, it's written to communicate with qemu
running within a stubdom.

Server mode listens for vchan connections and, when one is opened,
connects to the specified UNIX socket.  Client mode listens on a UNIX
socket and, when someone connects, opens a vchan connection.  Only
a single connection at a time is supported.

Additionally, the socket can be provided as a number, in which case it
is interpreted as an already-open FD (for a UNIX listening socket,
listen() must already have been called), or as "-", meaning
stdin/stdout, in which case it reduces to vchan-node2 functionality.

Example usage:

1. (in dom0) vchan-socket-proxy --mode=client <DOMID>
    /local/domain/<DOMID>/data/vchan/1234 /run/qemu.(DOMID)

2. (in DOMID) vchan-socket-proxy --mode=server 0
   /local/domain/<DOMID>/data/vchan/1234 /run/qemu.(DOMID)

This will listen on /run/qemu.(DOMID) in dom0 and, whenever a connection
is made, connect to DOMID, where the server process will connect to
/run/qemu.(DOMID) there. When the client disconnects, the vchan
connection is terminated and the server's vchan-socket-proxy process
also disconnects from qemu.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v5:
 - Ensure bindir directory is present
 - String and comment fixes
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 .gitignore                          |   1 +
 tools/libvchan/Makefile             |   8 +-
 tools/libvchan/vchan-socket-proxy.c | 478 ++++++++++++++++++++++++++++
 3 files changed, 486 insertions(+), 1 deletion(-)
 create mode 100644 tools/libvchan/vchan-socket-proxy.c

diff --git a/.gitignore b/.gitignore
index bfa53723b3..7418ce9829 100644
--- a/.gitignore
+++ b/.gitignore
@@ -369,6 +369,7 @@ tools/misc/xenwatchdogd
 tools/misc/xen-hvmcrash
 tools/misc/xen-lowmemd
 tools/libvchan/vchan-node[12]
+tools/libvchan/vchan-socket-proxy
 tools/ocaml/*/.ocamldep.make
 tools/ocaml/*/*.cm[ixao]
 tools/ocaml/*/*.cmxa
diff --git a/tools/libvchan/Makefile b/tools/libvchan/Makefile
index 7892750c3e..913bcc8884 100644
--- a/tools/libvchan/Makefile
+++ b/tools/libvchan/Makefile
@@ -13,6 +13,7 @@ LIBVCHAN_PIC_OBJS = $(patsubst %.o,%.opic,$(LIBVCHAN_OBJS))
 LIBVCHAN_LIBS = $(LDLIBS_libxenstore) $(LDLIBS_libxengnttab) $(LDLIBS_libxenevtchn)
 $(LIBVCHAN_OBJS) $(LIBVCHAN_PIC_OBJS): CFLAGS += $(CFLAGS_libxenstore) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 $(NODE_OBJS) $(NODE2_OBJS): CFLAGS += $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
+vchan-socket-proxy.o: CFLAGS += $(CFLAGS_libxenstore) $(CFLAGS_libxenctrl) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 
 MAJOR = 4.14
 MINOR = 0
@@ -39,7 +40,7 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
 
 .PHONY: all
-all: libxenvchan.so vchan-node1 vchan-node2 libxenvchan.a $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
+all: libxenvchan.so vchan-node1 vchan-node2 vchan-socket-proxy libxenvchan.a $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
 
 libxenvchan.so: libxenvchan.so.$(MAJOR)
 	ln -sf $< $@
@@ -59,13 +60,18 @@ vchan-node1: $(NODE_OBJS) libxenvchan.so
 vchan-node2: $(NODE2_OBJS) libxenvchan.so
 	$(CC) $(LDFLAGS) -o $@ $(NODE2_OBJS) $(LDLIBS_libxenvchan) $(APPEND_LDFLAGS)
 
+vchan-socket-proxy: vchan-socket-proxy.o libxenvchan.so
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenvchan) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
 .PHONY: install
 install: all
 	$(INSTALL_DIR) $(DESTDIR)$(libdir)
 	$(INSTALL_DIR) $(DESTDIR)$(includedir)
+	$(INSTALL_DIR) $(DESTDIR)$(bindir)
 	$(INSTALL_PROG) libxenvchan.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
 	ln -sf libxenvchan.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenvchan.so.$(MAJOR)
 	ln -sf libxenvchan.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenvchan.so
+	$(INSTALL_PROG) vchan-socket-proxy $(DESTDIR)$(bindir)
 	$(INSTALL_DATA) libxenvchan.h $(DESTDIR)$(includedir)
 	$(INSTALL_DATA) libxenvchan.a $(DESTDIR)$(libdir)
 	$(INSTALL_DATA) xenvchan.pc $(DESTDIR)$(PKG_INSTALLDIR)
diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
new file mode 100644
index 0000000000..13700c5d67
--- /dev/null
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -0,0 +1,478 @@
+/**
+ * @file
+ * @section AUTHORS
+ *
+ * Copyright (C) 2010  Rafal Wojtczuk  <rafal@invisiblethingslab.com>
+ *
+ *  Authors:
+ *       Rafal Wojtczuk  <rafal@invisiblethingslab.com>
+ *       Daniel De Graaf <dgdegra@tycho.nsa.gov>
+ *       Marek Marczykowski-Górecki  <marmarek@invisiblethingslab.com>
+ *
+ * @section LICENSE
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU Lesser General Public
+ *  License as published by the Free Software Foundation; either
+ *  version 2.1 of the License, or (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  Lesser General Public License for more details.
+ *
+ *  You should have received a copy of the GNU Lesser General Public
+ *  License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * @section DESCRIPTION
+ *
+ * This is a vchan-to-UNIX-socket proxy. The vchan server is set up, and on client
+ * connection, a local socket connection is established. Communication is bidirectional.
+ * One client is served at a time; clients need to coordinate this themselves.
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <getopt.h>
+
+#include <xenstore.h>
+#include <xenctrl.h>
+#include <libxenvchan.h>
+
+static void usage(char** argv)
+{
+    fprintf(stderr, "usage:\n"
+        "\t%s [options] domainid nodepath [socket-path|file-no|-]\n"
+        "\n"
+        "options:\n"
+        "\t-m, --mode=client|server - vchan connection mode (client by default)\n"
+        "\t-s, --state-path=path - xenstore path to write \"running\" to\n"
+        "\t                        at startup\n"
+        "\t-v, --verbose - verbose logging\n"
+        "\n"
+        "client: client of a vchan connection, fourth parameter can be:\n"
+        "\tsocket-path: listen on a UNIX socket at this path and connect to vchan\n"
+        "\t             whenever a new connection is accepted;\n"
+        "\t             handles multiple _subsequent_ connections until terminated\n"
+        "\n"
+        "\tfile-no:     expects an open FD of a socket in listen mode;\n"
+        "\t             otherwise similar to socket-path\n"
+        "\n"
+        "\t-:           open vchan connection immediately and pass the data\n"
+        "\t             from stdin/stdout; terminate when vchan connection\n"
+        "\t             is closed\n"
+        "\n"
+        "server: server of a vchan connection, fourth parameter can be:\n"
+        "\tsocket-path: connect to this UNIX socket when a new vchan\n"
+        "\t             connection is accepted;\n"
+        "\t             handles multiple _subsequent_ connections until terminated\n"
+        "\n"
+        "\tfile-no:     pass data to/from this FD; terminate when vchan connection\n"
+        "\t             is closed\n"
+        "\n"
+        "\t-:           pass data to/from stdin/stdout; terminate when vchan\n"
+        "\t             connection is closed\n",
+        argv[0]);
+    exit(1);
+}
+
+#define BUFSIZE 8192
+char inbuf[BUFSIZE];
+char outbuf[BUFSIZE];
+int insiz = 0;
+int outsiz = 0;
+int verbose = 0;
+
+static void vchan_wr(struct libxenvchan *ctrl) {
+    int ret;
+
+    if (!insiz)
+        return;
+    ret = libxenvchan_write(ctrl, inbuf, insiz);
+    if (ret < 0) {
+        fprintf(stderr, "vchan write failed\n");
+        exit(1);
+    }
+    if (verbose)
+        fprintf(stderr, "wrote %d bytes to vchan\n", ret);
+    if (ret > 0) {
+        insiz -= ret;
+        memmove(inbuf, inbuf + ret, insiz);
+    }
+}
+
+static void socket_wr(int output_fd) {
+    int ret;
+
+    if (!outsiz)
+        return;
+    ret = write(output_fd, outbuf, outsiz);
+    if (ret < 0 && errno != EAGAIN)
+        exit(1);
+    if (ret > 0) {
+        outsiz -= ret;
+        memmove(outbuf, outbuf + ret, outsiz);
+    }
+}
+
+static int set_nonblocking(int fd, int nonblocking) {
+    int flags = fcntl(fd, F_GETFL);
+    if (flags == -1)
+        return -1;
+
+    if (nonblocking)
+        flags |= O_NONBLOCK;
+    else
+        flags &= ~O_NONBLOCK;
+
+    if (fcntl(fd, F_SETFL, flags) == -1)
+        return -1;
+
+    return 0;
+}
+
+static int connect_socket(const char *path_or_fd) {
+    int fd;
+    char *endptr;
+    struct sockaddr_un addr;
+
+    fd = strtoll(path_or_fd, &endptr, 0);
+    if (*endptr == '\0') {
+        set_nonblocking(fd, 1);
+        return fd;
+    }
+
+    fd = socket(AF_UNIX, SOCK_STREAM, 0);
+    if (fd == -1)
+        return -1;
+
+    addr.sun_family = AF_UNIX;
+    strncpy(addr.sun_path, path_or_fd, sizeof(addr.sun_path));
+    if (connect(fd, (const struct sockaddr *)&addr, sizeof(addr)) == -1) {
+        close(fd);
+        return -1;
+    }
+
+    set_nonblocking(fd, 1);
+
+    return fd;
+}
+
+static int listen_socket(const char *path_or_fd) {
+    int fd;
+    char *endptr;
+    struct sockaddr_un addr;
+
+    fd = strtoll(path_or_fd, &endptr, 0);
+    if (*endptr == '\0') {
+        return fd;
+    }
+
+    /* if not a number, assume a socket path */
+    fd = socket(AF_UNIX, SOCK_STREAM, 0);
+    if (fd == -1)
+        return -1;
+
+    addr.sun_family = AF_UNIX;
+    strncpy(addr.sun_path, path_or_fd, sizeof(addr.sun_path));
+    if (bind(fd, (const struct sockaddr *)&addr, sizeof(addr)) == -1) {
+        close(fd);
+        return -1;
+    }
+    if (listen(fd, 5) != 0) {
+        close(fd);
+        return -1;
+    }
+
+    return fd;
+}
+
+static struct libxenvchan *connect_vchan(int domid, const char *path) {
+    struct libxenvchan *ctrl = NULL;
+    struct xs_handle *xs = NULL;
+    xc_interface *xc = NULL;
+    xc_dominfo_t dominfo;
+    char **watch_ret;
+    unsigned int watch_num;
+    int ret;
+
+    xs = xs_open(XS_OPEN_READONLY);
+    if (!xs) {
+        perror("xs_open");
+        goto out;
+    }
+    xc = xc_interface_open(NULL, NULL, XC_OPENFLAG_NON_REENTRANT);
+    if (!xc) {
+        perror("xc_interface_open");
+        goto out;
+    }
+    /* wait for vchan server to create *path* */
+    xs_watch(xs, path, "path");
+    xs_watch(xs, "@releaseDomain", "release");
+    while ((watch_ret = xs_read_watch(xs, &watch_num))) {
+        /* don't care exactly which watch fired */
+        free(watch_ret);
+        ctrl = libxenvchan_client_init(NULL, domid, path);
+        if (ctrl)
+            break;
+
+        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
+        /* break the loop if domain is definitely not there anymore, but
+         * continue if it is or the call failed (like EPERM) */
+        if (ret == -1 && errno == ESRCH)
+            break;
+        if (ret == 1 && (dominfo.domid != (uint32_t)domid || dominfo.dying))
+            break;
+    }
+
+out:
+    if (xc)
+        xc_interface_close(xc);
+    if (xs)
+        xs_close(xs);
+    return ctrl;
+}
+
+
+static void discard_buffers(struct libxenvchan *ctrl) {
+    /* discard local buffers */
+    insiz = 0;
+    outsiz = 0;
+
+    /* discard remaining incoming data */
+    while (libxenvchan_data_ready(ctrl)) {
+        if (libxenvchan_read(ctrl, inbuf, BUFSIZE) == -1) {
+            perror("vchan read");
+            exit(1);
+        }
+    }
+}
+
+int data_loop(struct libxenvchan *ctrl, int input_fd, int output_fd)
+{
+    int ret;
+    int libxenvchan_fd;
+    int max_fd;
+
+    libxenvchan_fd = libxenvchan_fd_for_select(ctrl);
+    for (;;) {
+        fd_set rfds;
+        fd_set wfds;
+        FD_ZERO(&rfds);
+        FD_ZERO(&wfds);
+
+        max_fd = -1;
+        if (input_fd != -1 && insiz != BUFSIZE) {
+            FD_SET(input_fd, &rfds);
+            if (input_fd > max_fd)
+                max_fd = input_fd;
+        }
+        if (output_fd != -1 && outsiz) {
+            FD_SET(output_fd, &wfds);
+            if (output_fd > max_fd)
+                max_fd = output_fd;
+        }
+        FD_SET(libxenvchan_fd, &rfds);
+        if (libxenvchan_fd > max_fd)
+            max_fd = libxenvchan_fd;
+        ret = select(max_fd + 1, &rfds, &wfds, NULL, NULL);
+        if (ret < 0) {
+            perror("select");
+            exit(1);
+        }
+        if (FD_ISSET(libxenvchan_fd, &rfds)) {
+            libxenvchan_wait(ctrl);
+            if (!libxenvchan_is_open(ctrl)) {
+                if (verbose)
+                    fprintf(stderr, "vchan client disconnected\n");
+                while (outsiz)
+                    socket_wr(output_fd);
+                close(output_fd);
+                close(input_fd);
+                discard_buffers(ctrl);
+                break;
+            }
+            vchan_wr(ctrl);
+        }
+
+        if (FD_ISSET(input_fd, &rfds)) {
+            ret = read(input_fd, inbuf + insiz, BUFSIZE - insiz);
+            if (ret < 0 && errno != EAGAIN)
+                exit(1);
+            if (verbose)
+                fprintf(stderr, "from-unix: %.*s\n", ret, inbuf + insiz);
+            if (ret == 0) {
+                /* EOF on socket, write everything in the buffer and close the
+                 * input_fd socket */
+                while (insiz) {
+                    vchan_wr(ctrl);
+                    libxenvchan_wait(ctrl);
+                }
+                close(input_fd);
+                input_fd = -1;
+                /* TODO: maybe signal the vchan client somehow? */
+                break;
+            }
+            if (ret)
+                insiz += ret;
+            vchan_wr(ctrl);
+        }
+        if (FD_ISSET(output_fd, &wfds))
+            socket_wr(output_fd);
+        while (libxenvchan_data_ready(ctrl) && outsiz < BUFSIZE) {
+            ret = libxenvchan_read(ctrl, outbuf + outsiz, BUFSIZE - outsiz);
+            if (ret < 0)
+                exit(1);
+            if (verbose)
+                fprintf(stderr, "from-vchan: %.*s\n", ret, outbuf + outsiz);
+            outsiz += ret;
+            socket_wr(output_fd);
+        }
+    }
+    return 0;
+}
+
+/**
+    Simple libxenvchan application, both client and server.
+    Both sides may write and read, both from the libxenvchan and from
+    stdin/stdout (just like netcat).
+*/
+
+static struct option options[] = {
+    { "mode",       required_argument, NULL, 'm' },
+    { "verbose",          no_argument, NULL, 'v' },
+    { "state-path", required_argument, NULL, 's' },
+    { }
+};
+
+int main(int argc, char **argv)
+{
+    int is_server = 0;
+    int socket_fd = -1;
+    int input_fd, output_fd;
+    struct libxenvchan *ctrl = NULL;
+    const char *socket_path;
+    int domid;
+    const char *vchan_path;
+    const char *state_path = NULL;
+    int opt;
+
+    while ((opt = getopt_long(argc, argv, "m:vs:", options, NULL)) != -1) {
+        switch (opt) {
+            case 'm':
+                if (strcmp(optarg, "server") == 0)
+                    is_server = 1;
+                else if (strcmp(optarg, "client") == 0)
+                    is_server = 0;
+                else {
+                    fprintf(stderr, "invalid argument for --mode: %s\n", optarg);
+                    usage(argv);
+                    return 1;
+                }
+                break;
+            case 'v':
+                verbose = 1;
+                break;
+            case 's':
+                state_path = optarg;
+                break;
+            case '?':
+                usage(argv);
+        }
+    }
+
+    if (argc-optind != 3)
+        usage(argv);
+
+    domid = atoi(argv[optind]);
+    vchan_path = argv[optind+1];
+    socket_path = argv[optind+2];
+
+    if (is_server) {
+        ctrl = libxenvchan_server_init(NULL, domid, vchan_path, 0, 0);
+        if (!ctrl) {
+            perror("libxenvchan_server_init");
+            exit(1);
+        }
+    } else {
+        if (strcmp(socket_path, "-") == 0) {
+            input_fd = 0;
+            output_fd = 1;
+        } else {
+            socket_fd = listen_socket(socket_path);
+            if (socket_fd == -1) {
+                perror("listen socket");
+                return 1;
+            }
+        }
+    }
+
+    if (state_path) {
+        struct xs_handle *xs;
+
+        xs = xs_open(0);
+        if (!xs) {
+            perror("xs_open");
+            return 1;
+        }
+        if (!xs_write(xs, XBT_NULL, state_path, "running", strlen("running"))) {
+            perror("xs_write");
+            return 1;
+        }
+        xs_close(xs);
+    }
+
+    for (;;) {
+        if (is_server) {
+            /* wait for vchan connection */
+            while (libxenvchan_is_open(ctrl) != 1)
+                libxenvchan_wait(ctrl);
+            /* vchan client connected, set up local FD if needed */
+            if (strcmp(socket_path, "-") == 0) {
+                input_fd = 0;
+                output_fd = 1;
+            } else {
+                input_fd = output_fd = connect_socket(socket_path);
+            }
+            if (input_fd == -1) {
+                perror("connect socket");
+                return 1;
+            }
+            if (data_loop(ctrl, input_fd, output_fd) != 0)
+                break;
+            /* keep running only when given a UNIX socket path */
+            if (socket_path[0] != '/')
+                break;
+        } else {
+            /* wait for local socket connection */
+            if (strcmp(socket_path, "-") != 0)
+                input_fd = output_fd = accept(socket_fd, NULL, NULL);
+            if (input_fd == -1) {
+                perror("accept");
+                return 1;
+            }
+            set_nonblocking(input_fd, 1);
+            set_nonblocking(output_fd, 1);
+            ctrl = connect_vchan(domid, vchan_path);
+            if (!ctrl) {
+                perror("vchan client init");
+                return 1;
+            }
+            if (data_loop(ctrl, input_fd, output_fd) != 0)
+                break;
+            /* don't reconnect if output was stdout */
+            if (strcmp(socket_path, "-") == 0)
+                break;
+
+            libxenvchan_close(ctrl);
+            ctrl = NULL;
+        }
+    }
+    return 0;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNb-0000Ub-2h; Mon, 18 May 2020 01:15:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNZ-0000Tc-T9
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:41 +0000
X-Inumbo-ID: 04190766-98a5-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04190766-98a5-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:15:09 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id d7so6813755qtn.11
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=JJdKgFtG+bONqBhd+RWucBrJtCEa7OPLIkXsUE/WDhQ=;
 b=DDBsN0ImDbg3aGNryJgVfeLZMpkcHZ+zwVHOUZyi+n2hIFyRIu+PorKZrKoLFMPqcJ
 zTuktIU74Od2MUUWjjT/v9QU1DfndYDHwjoOglPBuex8VreAvu4YDmywQ4pKT1F+YaFF
 aYPHn7cY7A6nnkaVNMHU9cg8fkf8eYiBral4bYA4yRNvnRoNOiVgoc2FM49NBOLIjStP
 at9Bd8BrBjR4HyQj4Jdr0AWohFpgbk1AKfFYIDPm5QCaEmw42Itn0Hh8Pj+3bnC6Tkeb
 X1/htWOFt/9I/yx1FZU6ZnofyhBDQT7aqI5rzZ39KZQigLp0TjmGVgwyGeJt1qbtPRkv
 LX1A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=JJdKgFtG+bONqBhd+RWucBrJtCEa7OPLIkXsUE/WDhQ=;
 b=QI4tE3edhlQaOeRBil7lNfWWyJulKJ9sEc0k/BU/7rSgYBOGZWw5zm7qKvAh6rVTAs
 +aeswPfhF8Cm3Vk5rJjU9tXGAbtKkIZJrJFBfBBZOAXw3pe0Oih9ec4zGoYNEhy1alQG
 QGAr3FCyjIcghOAZnRXA4kAcCZbd6lBozklRTpL9Dho+4nI/6SJHQCyAngXcPpRO1fmv
 Z8bhIolF9jRNusJvcDYcHihySTdiv//f11qSJdlafd9ew+TVGKODWEfoiOkNL84G1Y6w
 zqZJaSEIPdW5nQ0kr+TopHcdhrH/MQKlMdhSDdn718taZgKsx3Ss8d0omMFSHomSc7Hb
 jx5g==
X-Gm-Message-State: AOAM531PSiABilB2aBWlRgmN2BsQh+yoDeHUYVOxzhgW4q5ZD5il4Ycs
 e28S0UODKdumZAB37f6P/amw2f2A
X-Google-Smtp-Source: ABdhPJxIf0bJvjq7YXR6BoChI1jyG+3VBrrw0r5jpB2Qewv8LrG29Lf/VzYbpSPfnZ9wkBWqtWHMWg==
X-Received: by 2002:ac8:543:: with SMTP id c3mr13667629qth.8.1589764508809;
 Sun, 17 May 2020 18:15:08 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:08 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 12/18] libxl: Refactor kill_device_model to
 libxl__kill_xs_path
Date: Sun, 17 May 2020 21:13:47 -0400
Message-Id: <20200518011353.326287-13-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move kill_device_model to libxl__kill_xs_path so we have a helper to
kill a process from a pid stored in xenstore.  We'll be using it to kill
vchan-qmp-proxy.

libxl__kill_xs_path takes a "what" string for use in printing error
messages.  kill_device_model is retained in libxl_dm.c to provide the
string.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_aoutils.c  | 32 ++++++++++++++++++++++++++++++++
 tools/libxl/libxl_dm.c       | 27 +--------------------------
 tools/libxl/libxl_internal.h |  3 +++
 3 files changed, 36 insertions(+), 26 deletions(-)

diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 1be858c93c..c4c095a5ba 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -626,6 +626,38 @@ void libxl__kill(libxl__gc *gc, pid_t pid, int sig, const char *what)
                 what, (unsigned long)pid, sig);
 }
 
+/* Generic function to signal (HUP) a pid stored in xenstore */
+int libxl__kill_xs_path(libxl__gc *gc, const char *xs_path_pid,
+                        const char *what)
+{
+    const char *xs_pid;
+    int ret, pid;
+
+    ret = libxl__xs_read_checked(gc, XBT_NULL, xs_path_pid, &xs_pid);
+    if (ret || !xs_pid) {
+        LOG(ERROR, "unable to find %s pid in %s", what, xs_path_pid);
+        ret = ret ? : ERROR_FAIL;
+        goto out;
+    }
+    pid = atoi(xs_pid);
+
+    ret = kill(pid, SIGHUP);
+    if (ret < 0 && errno == ESRCH) {
+        LOG(ERROR, "%s already exited", what);
+        ret = 0;
+    } else if (ret == 0) {
+        LOG(DEBUG, "%s signaled", what);
+        ret = 0;
+    } else {
+        LOGE(ERROR, "failed to kill %s [%d]", what, pid);
+        ret = ERROR_FAIL;
+        goto out;
+    }
+
+out:
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index a4f8866d33..478e6540df 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -3235,32 +3235,7 @@ out:
 /* Generic function to signal a Qemu instance to exit */
 static int kill_device_model(libxl__gc *gc, const char *xs_path_pid)
 {
-    const char *xs_pid;
-    int ret, pid;
-
-    ret = libxl__xs_read_checked(gc, XBT_NULL, xs_path_pid, &xs_pid);
-    if (ret || !xs_pid) {
-        LOG(ERROR, "unable to find device model pid in %s", xs_path_pid);
-        ret = ret ? : ERROR_FAIL;
-        goto out;
-    }
-    pid = atoi(xs_pid);
-
-    ret = kill(pid, SIGHUP);
-    if (ret < 0 && errno == ESRCH) {
-        LOG(ERROR, "Device Model already exited");
-        ret = 0;
-    } else if (ret == 0) {
-        LOG(DEBUG, "Device Model signaled");
-        ret = 0;
-    } else {
-        LOGE(ERROR, "failed to kill Device Model [%d]", pid);
-        ret = ERROR_FAIL;
-        goto out;
-    }
-
-out:
-    return ret;
+    return libxl__kill_xs_path(gc, xs_path_pid, "Device Model");
 }
 
 /* Helper to destroy a Qdisk backend */
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index f2f76439ec..c939557b2e 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2711,6 +2711,9 @@ int libxl__async_exec_start(libxl__async_exec_state *aes);
 bool libxl__async_exec_inuse(const libxl__async_exec_state *aes);
 
 _hidden void libxl__kill(libxl__gc *gc, pid_t pid, int sig, const char *what);
+/* send SIGHUP to a pid stored in xenstore */
+_hidden int libxl__kill_xs_path(libxl__gc *gc, const char *xs_path_pid,
+                                const char *what);
 
 /*----- device addition/removal -----*/
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:15:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNg-0000ZU-CZ; Mon, 18 May 2020 01:15:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNe-0000Xu-Sk
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:46 +0000
X-Inumbo-ID: 059b902c-98a5-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 059b902c-98a5-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:15:11 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id m44so6831608qtm.8
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=MBOf9NHcia/tE2IHCHNUayCitn6l1FbdXL7jbkbo+Ic=;
 b=LGYLXn1/pd2A7HfYRuM0TLEpoOg7u3p+3squ/pqfhivnS4JNk9fPSjEiBu8jNObGNr
 gBlrV+H+QK37ykz/XsMa93YTatEpxOnVynF4+GUyDp2qM/694i/9R9sbuzgUCm/lZ8BP
 Gq4lZhUr3ldI3fYFGfQsOlBYnbPAGzkmIAexA64OZpDuGs6HCp4kyrFTMK7QKdp/n6IW
 gYuZ+Ki74eZsKkWFWZk1oTaEsk28VbQe1wDbtpJmEbqmDiCkYsaOjIECsE/uqwIPSQr/
 isyXXNrNzT17Qbs5NCzbLP0bFIXZsnXnGzE1cUg9qc7BP4YLl1jkazm43UtP7csrLCZ/
 zkWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=MBOf9NHcia/tE2IHCHNUayCitn6l1FbdXL7jbkbo+Ic=;
 b=NCikQVK31otK3MiyhqqNR/h21W4b4Vtru1LY5AAck9nRPhlAnXnYXPlgToMuhYvpSF
 88nD3sTI4oDVpID9y3scF1F5bmOGLLEwcZntYpFjAo9mZRRr5AyeTcc1q5abdf8gxYf+
 dQ5CwntVWnlfgINreICLBQotaFRL0Kcs85FtSUjY884acS4/tTSfYpofhysZhQ4YXL3J
 QSXQ3N6o1o2s34KOym45pjOEbR732bRU/3xVfNun/+8Tu9jox+TUzkAHb3vgDqguxd5G
 rzqVrSqAKFumuwC9zWQ9FEAV7DSegXLUc0zmyNMd0uTQq8Af52/xtwYQ8xkii06LiE/n
 14ng==
X-Gm-Message-State: AOAM533VUdq3BuMDbu8+1sydqr5pYlsR3U8DqYN308avd+PNGUXDyJx3
 EYui+fDypBD8PZqec/kp7+m5ywEk
X-Google-Smtp-Source: ABdhPJyoh/TJMzmePCy9RabmS0xrHX8fpaU/qg3+BOF29MV204y8amrIeiG1utQveZN3K+EA0s4p7Q==
X-Received: by 2002:ac8:458f:: with SMTP id l15mr14791009qtn.221.1589764510939; 
 Sun, 17 May 2020 18:15:10 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:10 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 13/18] libxl: use vchan for QMP access with Linux stubdomain
Date: Sun, 17 May 2020 21:13:48 -0400
Message-Id: <20200518011353.326287-14-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ian Jackson <ian.jackson@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Access to the QMP interface of QEMU running in a Linux stubdomain is
possible over a vchan connection. Handle the actual vchan connection in
a separate process (vchan-socket-proxy). This simplifies the
integration with QMP (already quite complex), and also allows
preliminary filtering of (potentially malicious) QMP input.

Since only one client can be connected to the vchan server at a time,
and this is not enforced by libxenvchan itself, additional client-side
locking is needed. It is implicitly provided by vchan-socket-proxy,
which handles only one connection at a time. Note that qemu supports
only one simultaneous client on a control socket anyway (although in
the UNIX socket case it enforces this server-side), so this doesn't add
any extra limitation.

The libxl QMP client code already has locking to handle concurrent
access attempts to the same qemu QMP interface.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Squash in changes of regenerated autotools files.

Kill the vchan-socket-proxy so we don't leak the daemonized processes.
libxl__stubdomain_is_linux_running() works against the guest_domid, but
the xenstore path is beneath the stubdomain.  This leads to the use of
libxl_is_stubdom in addition to libxl__stubdomain_is_linux_running() so
that kill is called for the stubdomain's qmp-proxy.

Also call libxl__qmp_cleanup() to remove the unix sockets used by
vchan-socket-proxy.  vchan-socket-proxy only creates qmp-libxl-$domid,
and libxl__qmp_cleanup removes that as well as qmp-libxenstat-$domid.
However, it tolerates ENOENT, and a stray qmp-libxenstat-$domid should
not exist.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---

Re-run autotools after applying.

Changes in v4:
 - new patch, in place of both "libxl: use vchan for QMP access ..."
Changes in v5:
 - Use device-model/%u/qmp-proxy-state xenstore path
 - Rephrase comment
Changes in v6:
 - Commit message mention libxl locking
 - Mention re-run autotools
 - Squashed in re-generated autotools files
 - Call libxl__qmp_cleanup() to remove unix socket.
 - Cleanup in vchan-socket-proxy is dropped.

Ian, you acked the original and the squashed-in "libxl: Kill
vchan-socket-proxy when cleaning up qmp".  However, I also added the
libxl__qmp_cleanup() call, so I did not retain your Ack.  That change is
at the end in dm_destroy_cb().

---
 configure                    |  14 +--
 docs/configure               |  14 +--
 stubdom/configure            |  14 +--
 tools/config.h.in            |   3 +
 tools/configure              |  46 ++++++----
 tools/configure.ac           |   9 ++
 tools/libxl/libxl_dm.c       | 163 +++++++++++++++++++++++++++++++++--
 tools/libxl/libxl_domain.c   |  10 +++
 tools/libxl/libxl_internal.h |   1 +
 9 files changed, 209 insertions(+), 65 deletions(-)

diff --git a/configure b/configure
index 9da3970cef..8af54e8a5a 100755
--- a/configure
+++ b/configure
@@ -644,7 +644,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -723,7 +722,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -976,15 +974,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1122,7 +1111,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1275,7 +1264,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
diff --git a/docs/configure b/docs/configure
index 9e3ed60462..93e9dcf404 100755
--- a/docs/configure
+++ b/docs/configure
@@ -634,7 +634,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -711,7 +710,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -964,15 +962,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1110,7 +1099,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1263,7 +1252,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
diff --git a/stubdom/configure b/stubdom/configure
index da03da535a..f7604a37f7 100755
--- a/stubdom/configure
+++ b/stubdom/configure
@@ -661,7 +661,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -751,7 +750,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -1004,15 +1002,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1150,7 +1139,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1303,7 +1292,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
diff --git a/tools/config.h.in b/tools/config.h.in
index 5a5944ebe1..5abf6092de 100644
--- a/tools/config.h.in
+++ b/tools/config.h.in
@@ -123,6 +123,9 @@
 /* Define to 1 if you have the ANSI C header files. */
 #undef STDC_HEADERS
 
+/* QMP proxy path */
+#undef STUBDOM_QMP_PROXY_PATH
+
 /* Enable large inode numbers on Mac OS X 10.5.  */
 #ifndef _DARWIN_USE_64_BIT_INODE
 # define _DARWIN_USE_64_BIT_INODE 1
diff --git a/tools/configure b/tools/configure
index 36596389b8..35036dc1db 100755
--- a/tools/configure
+++ b/tools/configure
@@ -772,7 +772,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -814,6 +813,7 @@ with_linux_backend_modules
 enable_qemu_traditional
 enable_rombios
 with_system_qemu
+with_stubdom_qmp_proxy
 with_system_seabios
 with_system_ovmf
 enable_ipxe
@@ -899,7 +899,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -1152,15 +1151,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1298,7 +1288,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1451,7 +1441,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
@@ -1535,6 +1524,9 @@ Optional Packages:
                           Use system supplied qemu PATH or qemu (taken from
                           $PATH) as qemu-xen device model instead of building
                           and installing our own version
+  --stubdom-qmp-proxy[=PATH]
+                          Use supplied binary PATH as a QMP proxy into
+                          stubdomain
   --with-system-seabios[=PATH]
                           Use system supplied seabios PATH instead of building
                           and installing our own version
@@ -3382,7 +3374,7 @@ else
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3428,7 +3420,7 @@ else
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3452,7 +3444,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3497,7 +3489,7 @@ else
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3521,7 +3513,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -4548,6 +4540,24 @@ _ACEOF
 
 
 
+# Check whether --with-stubdom-qmp-proxy was given.
+if test "${with_stubdom_qmp_proxy+set}" = set; then :
+  withval=$with_stubdom_qmp_proxy;
+    stubdom_qmp_proxy="$withval"
+
+else
+
+    stubdom_qmp_proxy="$bindir/vchan-socket-proxy"
+
+fi
+
+
+cat >>confdefs.h <<_ACEOF
+#define STUBDOM_QMP_PROXY_PATH "$stubdom_qmp_proxy"
+_ACEOF
+
+
+
 # Check whether --with-system-seabios was given.
 if test "${with_system_seabios+set}" = set; then :
   withval=$with_system_seabios;
diff --git a/tools/configure.ac b/tools/configure.ac
index b6f8882be4..a9af0a21c6 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -194,6 +194,15 @@ AC_SUBST(qemu_xen)
 AC_SUBST(qemu_xen_path)
 AC_SUBST(qemu_xen_systemd)
 
+AC_ARG_WITH([stubdom-qmp-proxy],
+    AC_HELP_STRING([--stubdom-qmp-proxy@<:@=PATH@:>@],
+        [Use supplied binary PATH as a QMP proxy into stubdomain]),[
+    stubdom_qmp_proxy="$withval"
+],[
+    stubdom_qmp_proxy="$bindir/vchan-socket-proxy"
+])
+AC_DEFINE_UNQUOTED([STUBDOM_QMP_PROXY_PATH], ["$stubdom_qmp_proxy"], [QMP proxy path])
+
 AC_ARG_WITH([system-seabios],
     AS_HELP_STRING([--with-system-seabios@<:@=PATH@:>@],
        [Use system supplied seabios PATH instead of building and installing
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 478e6540df..c66e8ebc24 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1200,7 +1200,11 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                       GCSPRINTF("%d", guest_domid), NULL);
     flexarray_append(dm_args, "-no-shutdown");
 
-    /* There is currently no way to access the QMP socket in the stubdom */
+    /*
+     * QMP access to qemu running in stubdomain is done over vchan. The
+     * stubdomain init script adds the appropriate monitor options for
+     * vchan-socket-proxy.
+     */
     if (!is_stubdom) {
         flexarray_append(dm_args, "-chardev");
         if (state->dm_monitor_fd >= 0) {
@@ -2214,6 +2218,23 @@ static void stubdom_pvqemu_unpaused(libxl__egc *egc,
 static void stubdom_xswait_cb(libxl__egc *egc, libxl__xswait_state *xswait,
                               int rc, const char *p);
 
+static void spawn_qmp_proxy(libxl__egc *egc,
+                            libxl__stub_dm_spawn_state *sdss);
+
+static void qmp_proxy_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
+                              const char *xsdata);
+
+static void qmp_proxy_startup_failed(libxl__egc *egc,
+                                     libxl__spawn_state *spawn,
+                                     int rc);
+
+static void qmp_proxy_detached(libxl__egc *egc,
+                               libxl__spawn_state *spawn);
+
+static void qmp_proxy_spawn_outcome(libxl__egc *egc,
+                                    libxl__stub_dm_spawn_state *sdss,
+                                    int rc);
+
 char *libxl__stub_dm_name(libxl__gc *gc, const char *guest_name)
 {
     return GCSPRINTF("%s-dm", guest_name);
@@ -2496,24 +2517,150 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
             goto out;
     }
 
+    sdss->qmp_proxy_spawn.ao = ao;
+    if (libxl__stubdomain_is_linux(&guest_config->b_info)) {
+        spawn_qmp_proxy(egc, sdss);
+    } else {
+        qmp_proxy_spawn_outcome(egc, sdss, 0);
+    }
+
+    return;
+
+out:
+    assert(ret);
+    qmp_proxy_spawn_outcome(egc, sdss, ret);
+}
+
+static void spawn_qmp_proxy(libxl__egc *egc,
+                            libxl__stub_dm_spawn_state *sdss)
+{
+    STATE_AO_GC(sdss->qmp_proxy_spawn.ao);
+    const uint32_t guest_domid = sdss->dm.guest_domid;
+    const uint32_t dm_domid = sdss->pvqemu.guest_domid;
+    const char *dom_path = libxl__xs_get_dompath(gc, dm_domid);
+    char **args;
+    int nr = 0;
+    int rc, logfile_w, null;
+
+    if (access(STUBDOM_QMP_PROXY_PATH, X_OK) < 0) {
+        LOGED(ERROR, guest_domid, "qmp proxy %s is not executable", STUBDOM_QMP_PROXY_PATH);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    sdss->qmp_proxy_spawn.what = GCSPRINTF("domain %d device model qmp proxy", guest_domid);
+    sdss->qmp_proxy_spawn.pidpath = GCSPRINTF("%s/image/qmp-proxy-pid", dom_path);
+    sdss->qmp_proxy_spawn.xspath = DEVICE_MODEL_XS_PATH(gc, LIBXL_TOOLSTACK_DOMID,
+                                                        dm_domid, "/qmp-proxy-state");
+    sdss->qmp_proxy_spawn.timeout_ms = LIBXL_DEVICE_MODEL_START_TIMEOUT * 1000;
+    sdss->qmp_proxy_spawn.midproc_cb = libxl__spawn_record_pid;
+    sdss->qmp_proxy_spawn.confirm_cb = qmp_proxy_confirm;
+    sdss->qmp_proxy_spawn.failure_cb = qmp_proxy_startup_failed;
+    sdss->qmp_proxy_spawn.detached_cb = qmp_proxy_detached;
+
+    const int arraysize = 6;
+    GCNEW_ARRAY(args, arraysize);
+    args[nr++] = STUBDOM_QMP_PROXY_PATH;
+    args[nr++] = GCSPRINTF("--state-path=%s", sdss->qmp_proxy_spawn.xspath);
+    args[nr++] = GCSPRINTF("%u", dm_domid);
+    args[nr++] = GCSPRINTF("%s/device-model/%u/qmp-vchan", dom_path, guest_domid);
+    args[nr++] = (char*)libxl__qemu_qmp_path(gc, guest_domid);
+    args[nr++] = NULL;
+    assert(nr == arraysize);
+
+    logfile_w = libxl__create_qemu_logfile(gc, GCSPRINTF("qmp-proxy-%s",
+                                                         sdss->dm_config.c_info.name));
+    if (logfile_w < 0) {
+        rc = logfile_w;
+        goto out;
+    }
+    null = open("/dev/null", O_RDWR);
+    if (null < 0) {
+        LOGED(ERROR, guest_domid, "unable to open /dev/null");
+        rc = ERROR_FAIL;
+        goto out_close;
+    }
+
+    rc = libxl__spawn_spawn(egc, &sdss->qmp_proxy_spawn);
+    if (rc < 0)
+        goto out_close;
+    if (!rc) { /* inner child */
+        setsid();
+        libxl__exec(gc, null, null, logfile_w, STUBDOM_QMP_PROXY_PATH, args, NULL);
+        /* unreachable */
+    }
+
+    rc = 0;
+
+out_close:
+    if (logfile_w >= 0)
+        close(logfile_w);
+    if (null >= 0)
+        close(null);
+out:
+    if (rc)
+        qmp_proxy_spawn_outcome(egc, sdss, rc);
+}
+
+static void qmp_proxy_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
+                              const char *xsdata)
+{
+    STATE_AO_GC(spawn->ao);
+
+    if (!xsdata)
+        return;
+
+    if (strcmp(xsdata, "running"))
+        return;
+
+    libxl__spawn_initiate_detach(gc, spawn);
+}
+
+static void qmp_proxy_startup_failed(libxl__egc *egc,
+                                     libxl__spawn_state *spawn,
+                                     int rc)
+{
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(spawn, *sdss, qmp_proxy_spawn);
+    qmp_proxy_spawn_outcome(egc, sdss, rc);
+}
+
+static void qmp_proxy_detached(libxl__egc *egc,
+                               libxl__spawn_state *spawn)
+{
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(spawn, *sdss, qmp_proxy_spawn);
+    qmp_proxy_spawn_outcome(egc, sdss, 0);
+}
+
+static void qmp_proxy_spawn_outcome(libxl__egc *egc,
+                                    libxl__stub_dm_spawn_state *sdss,
+                                    int rc)
+{
+    STATE_AO_GC(sdss->qmp_proxy_spawn.ao);
+    int need_pvqemu = libxl__need_xenpv_qemu(gc, &sdss->dm_config);
+
+    if (rc) goto out;
+
+    if (need_pvqemu < 0) {
+        rc = need_pvqemu;
+        goto out;
+    }
+
     sdss->pvqemu.spawn.ao = ao;
-    sdss->pvqemu.guest_domid = dm_domid;
     sdss->pvqemu.guest_config = &sdss->dm_config;
     sdss->pvqemu.build_state = &sdss->dm_state;
     sdss->pvqemu.callback = spawn_stubdom_pvqemu_cb;
-
-    if (!need_qemu) {
+    if (need_pvqemu) {
+        libxl__spawn_local_dm(egc, &sdss->pvqemu);
+    } else {
         /* If dom0 qemu not needed, do not launch it */
         spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, 0);
-    } else {
-        libxl__spawn_local_dm(egc, &sdss->pvqemu);
     }
 
     return;
 
 out:
-    assert(ret);
-    spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, ret);
+    assert(rc);
+    spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, rc);
 }
 
 static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index fef2cd4e13..c08af308fa 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1260,10 +1260,20 @@ static void dm_destroy_cb(libxl__egc *egc,
     libxl__destroy_domid_state *dis = CONTAINER_OF(ddms, *dis, ddms);
     STATE_AO_GC(dis->ao);
     uint32_t domid = dis->domid;
+    uint32_t target_domid;
 
     if (rc < 0)
         LOGD(ERROR, domid, "libxl__destroy_device_model failed");
 
+    if (libxl_is_stubdom(CTX, domid, &target_domid) &&
+        libxl__stubdomain_is_linux_running(gc, target_domid)) {
+        char *path = GCSPRINTF("/local/domain/%d/image/qmp-proxy-pid", domid);
+
+        libxl__kill_xs_path(gc, path, "QMP Proxy");
+        /* qmp-proxy for stubdom registers target_domid's QMP sockets. */
+        libxl__qmp_cleanup(gc, target_domid);
+    }
+
     dis->drs.ao = ao;
     dis->drs.domid = domid;
     dis->drs.callback = devices_destroy_cb;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c939557b2e..41b51b07cd 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4166,6 +4166,7 @@ typedef struct {
     libxl__destroy_domid_state dis;
     libxl__multidev multidev;
     libxl__xswait_state xswait;
+    libxl__spawn_state qmp_proxy_spawn;
 } libxl__stub_dm_spawn_state;
 
 _hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:16:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:16:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNl-0000eX-RA; Mon, 18 May 2020 01:15:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNj-0000co-Sr
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:51 +0000
X-Inumbo-ID: 0678a098-98a5-11ea-b9cf-bc764e2007e4
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0678a098-98a5-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 01:15:13 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id f189so8626084qkd.5
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=SwjQUhGQXAcvvbP1vAzAMtzC5lAyshhwUcCpJNOCDmM=;
 b=o5aaEYqBg7HMOcTAiPitRuIFI6dTFtg98l+mR6von+MOecayYEw1H3kZSlm605N0y5
 CHJAGgkbwOLaRDdWNtmReADPaS04L1IN9QjHIP3FCQ2Zp/mrm47Wf0vLKfvV9GN2vm/m
 1Va3Jikp+ur0cRtPbfFKXZ1jSN8uQ70WRM0llMOrgSnOc4v9ND2AMqrGYq6z2Lb/jtbq
 kVuyJZNrmqI/0kVBTbxyCxTEDPoJwjofK3sEoBSrKvMs2vnSZegdPSRkOteINxtJpwf/
 TONU4ISCzhNhbmAsC6EgkNFNk6tIJzwBlHGtyzf1y6VWLHTfn1eX7et62AFga6s/KrjJ
 loFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=SwjQUhGQXAcvvbP1vAzAMtzC5lAyshhwUcCpJNOCDmM=;
 b=NLFBwevIP2u3bwTTVClA6DgaWgS2O9bvhMX++TaVH3WRsvL/ayDdT5n5CQLSqdF/A0
 bz1Leg3Aalm+x+WcAftSPJRvpXo2kB2410omOejN/R/UH7J5xNGdnA26dgHW1pfWlHHc
 e+dFgGLQXN9xVm4rUHqcpirEQDGXFF+ivlb9nIdFatZ0Fe4JyS6W1zTn/JdNnKHA/X4Y
 oCK8iQdZDsNsRmuHoLuMM/xMdqtfVRndP3KrZxSyfN6TicfTxlRu99stc6/q6LKRDDwA
 rJj9cIkHr7Np/OA2xpGMB21buu83TW1+EQiory6TMz7tRe/5whMzkVfTKbLzV++9TgSE
 SXBw==
X-Gm-Message-State: AOAM532lyXi/MaF1LjW5Zz7HVAlmmzjKOTQZOmpXMcItV5lf3JXu3Xuh
 YMZDKodOTd+dU8zV/9iGpvKTZbtF
X-Google-Smtp-Source: ABdhPJyAD7qRMvp98E2GNAyHX3Lom3vbs++4CVF2A14dVLkhCewPMFqee098J/n0qJxQBWTglzgh7g==
X-Received: by 2002:a37:f517:: with SMTP id l23mr13946427qkk.475.1589764512837; 
 Sun, 17 May 2020 18:15:12 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:12 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 14/18] libxl: require qemu in dom0 for multiple stubdomain
 consoles
Date: Sun, 17 May 2020 21:13:49 -0400
Message-Id: <20200518011353.326287-15-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Device model stubdomains (both Mini-OS + qemu-trad and linux + qemu-xen)
are always started with at least 3 consoles: log, save, and restore.
Until xenconsoled learns how to handle multiple consoles, this is needed
for save/restore support.

For Mini-OS stubdoms, this is a bug.  In practice, it works in most
cases because something else triggers qemu in dom0 too: a vfb/vkb is
added if vnc/sdl/spice is enabled.

Additionally, a Linux-based stubdomain waits for all of its backends to
initialize during boot. The lack of some console backends results in a
stubdomain startup timeout.

This is a temporary patch until xenconsoled is improved.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
[Updated commit message with Marek's explanation from mailing list.]
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Changes in v6:
 - Update commit message
---
 tools/libxl/libxl_dm.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index c66e8ebc24..fb87deae91 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2504,7 +2504,11 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
         }
     }
 
-    need_qemu = libxl__need_xenpv_qemu(gc, dm_config);
+    /*
+     * Until xenconsoled learns how to handle multiple consoles, require qemu
+     * in dom0 to serve consoles for a stubdomain - it requires at least 3 of them.
+     */
+    need_qemu = 1 || libxl__need_xenpv_qemu(gc, &sdss->dm_config);
 
     for (i = 0; i < num_console; i++) {
         libxl__device device;
@@ -2636,7 +2640,11 @@ static void qmp_proxy_spawn_outcome(libxl__egc *egc,
                                     int rc)
 {
     STATE_AO_GC(sdss->qmp_proxy_spawn.ao);
-    int need_pvqemu = libxl__need_xenpv_qemu(gc, &sdss->dm_config);
+    /*
+     * Until xenconsoled learns how to handle multiple consoles, require qemu
+     * in dom0 to serve consoles for a stubdomain - it requires at least 3 of them.
+     */
+    int need_pvqemu = 1 || libxl__need_xenpv_qemu(gc, &sdss->dm_config);
 
     if (rc) goto out;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:16:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:16:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNq-0000is-4z; Mon, 18 May 2020 01:15:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNo-0000hV-Tm
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:15:56 +0000
X-Inumbo-ID: 07e1f24a-98a5-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07e1f24a-98a5-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:15:15 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id t25so6880035qtc.0
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=rCwSVJujwRWOjutOj30xLbizDFYlX19nLFDyw22sUns=;
 b=clGXf81c93iFGqR1y/RtvvItvqvPOhbA+FmgiQxUhdZlhEkkYn2syrKv5ZPQJEk+IN
 O2gjhOznfXPsj+2wUWRXNwDt+JztGTH+eWiedGrSUN2FUhxn59JdGBWg/eX61cZZvC+r
 TYBH1Cc5EBZSmHEBXlbDY5nlWlqn/eidiqL9PCms2agWxRHjMwDRS3EzklTuFBxAlAxe
 CJxjbaKZy1+oM9ef4bG5u4V/tgeyhmsPb0NXElg3zqi8v/PyC7jipiF7IpKQ8rn2b4hh
 7k6L52TCzhwSoHHvE9s0C0bJdvnAZrWzAFs2LLylamZRc7v9ZdgilLSURtKsZh/kaXKr
 3yHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=rCwSVJujwRWOjutOj30xLbizDFYlX19nLFDyw22sUns=;
 b=Di65ci2a00IsEo7PGG9lMjjrtw9iaaNHAWH+QnYAze2QZ+B8FjBabxQvedVubaUlkl
 7ucka6ddNoui699Y4/6B1dR/6U8nIas3UP7ezYyg8F31dJdrtQZQQi6Kpd7EtYSM5Ghw
 l4Y+S0BpRSRLUbIEQnekvzXHS0xV5eIu9c0xH2FIFpbvU1TrVz67bVAlSOBGUxE9EM+T
 qVfsS/txXcmgxbVTfrFR04BruCPIRdH76ysm03sFlPAR/G6BS04bfMClwmf0jh10WHro
 kiK9/LFrn8WqamMh83rdCvao8ToFITLK17Ga5z+kGKZwb9/+DKNFmrQZonTDgXOB0gvf
 2WVA==
X-Gm-Message-State: AOAM533aV4ElP+iGSVDlu47b9vMctjaR8puCR6xuRIqx1Wsz5YTPqk44
 No4C3pa/G8aLyZvjWUCnoYr+yxY9
X-Google-Smtp-Source: ABdhPJwRgYFeQW6kaOEMq6bJv7CT3RQV1W1zjZVD+QBsX7wPgZz8r/hS+DwbKYVpJD6MGi1xcACtXQ==
X-Received: by 2002:ac8:4e53:: with SMTP id e19mr13905188qtw.277.1589764514556; 
 Sun, 17 May 2020 18:15:14 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:13 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 15/18] libxl: ignore emulated IDE disks beyond the first 4
Date: Sun, 17 May 2020 21:13:50 -0400
Message-Id: <20200518011353.326287-16-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

QEMU supports only 4 emulated IDE disks; when given more (or disks with
higher indexes), it will fail to start. Since the disks can still be
accessed via the PV interface, just ignore the emulated path and log a
warning instead of rejecting the configuration altogether.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index fb87deae91..3356880346 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1894,6 +1894,13 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
 
             if (disks[i].is_cdrom) {
+                if (disk > 4) {
+                    LOGD(WARN, guest_domid, "Emulated CDROM can be only one of the first 4 disks.\n"
+                         "Disk %s will be available via PV drivers but not as an "
+                         "emulated disk.",
+                         disks[i].vdev);
+                    continue;
+                }
                 drive = libxl__sprintf(gc,
                          "if=ide,index=%d,readonly=on,media=cdrom,id=ide-%i",
                          disk, dev_number);
@@ -1971,6 +1978,10 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                                                        &disks[i],
                                                        colo_mode);
                 } else {
+                    LOGD(WARN, guest_domid, "Only 4 emulated IDE disks are supported.\n"
+                         "Disk %s will be available via PV drivers but not as an "
+                         "emulated disk.",
+                         disks[i].vdev);
                     continue; /* Do not emulate this disk */
                 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:16:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:16:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUNv-0000np-EK; Mon, 18 May 2020 01:16:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNt-0000me-UV
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:16:01 +0000
X-Inumbo-ID: 08cc9dea-98a5-11ea-9887-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08cc9dea-98a5-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 01:15:17 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id z18so6870288qto.2
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=wEGJ4owD3/xLHnFR9WnbJ1JPiPZww0tJ9aDlYTBZYPs=;
 b=rWL31mC1GXoXLOKl9gwngfk3xr4fV4TgBAq3sZiJBn8jmkY0d6pZBW+xxqoDgMtkwv
 AePmL/5yNvxC2hf1VixmAsKlxvWw65iX1F57KiPbdfRvYosnibEbEzLjBlHjtFVCE9X+
 OswxRUMKnVkneFHf7hAd7JZe1MwSxAyRDOiV9no/iQCncSkornYjNfjrVFuUsmx6HbcG
 XSogiTKH4IxkupLQuEu4UkRbd5l5fcAYQXZKDjXtHvXR6OLaeoYXjC5bPdA8qUmkJWK9
 SWttXWxsXfXisqGitRUeZfu0YBq9xrJwdO5QBXmioc4XDPRXXALxsfZ+7ZoM/7/toeIF
 U49Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=wEGJ4owD3/xLHnFR9WnbJ1JPiPZww0tJ9aDlYTBZYPs=;
 b=QYA1lv9IFDOtDg0xZFIY4sG9oEBsmYDPQrHzE+fdJT5l+FMEzZKJLc1OCavw5cJMBg
 Q1mKbqYRcH91TuT3nj4maZE1pVTeVyf+xR02hbNl6R7S7WaAdJ/9CcnP6NX9hJLbK804
 TQhHuNsClMIrHUNLFpi7vf0ZxGDbLO5RmYySqS1cb90nI3pnTuXUTeJ1X/l0Hl6Q8Rhx
 qGLd9q+Ht4neRi3k9jDeomDJ3J7nEZ9VMQUVc7TMAst3ymx1ujtJ86vNF3SG/1uKsAlS
 3m5e1Ouas+4KBLc/FMzbV2msQHjvc6wBFnNJxAiGGvBGcyqa89Bk8cGFmZtrMrCGYxDL
 shmQ==
X-Gm-Message-State: AOAM533sJ97K3kNP0/ajGFa5kRtmlfllJUhSablEC1bAtc1Agfw+l1vE
 HYfvXZY14ELksET3Mv/M7Q9ATA/K
X-Google-Smtp-Source: ABdhPJyO3CeZMrMr8cQbmPSNX1RUwkhHMSgA4CYIQ61kZol9uMP2rdF/WZ8bL0pd0FTCfKuHfEw6QQ==
X-Received: by 2002:ac8:7153:: with SMTP id h19mr14150357qtp.5.1589764516615; 
 Sun, 17 May 2020 18:15:16 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:15 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 16/18] libxl: consider also qemu in stubdomain in
 libxl__dm_active check
Date: Sun, 17 May 2020 21:13:51 -0400
Message-Id: <20200518011353.326287-17-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Since qemu-xen can now run in a stubdomain too, handle that case when
checking its state.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 3356880346..098dc49ecb 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -3744,12 +3744,18 @@ out:
 
 int libxl__dm_active(libxl__gc *gc, uint32_t domid)
 {
-    char *pid, *path;
+    char *pid, *dm_domid, *path;
 
     path = GCSPRINTF("/local/domain/%d/image/device-model-pid", domid);
     pid = libxl__xs_read(gc, XBT_NULL, path);
 
-    return pid != NULL;
+    if (pid)
+        return true;
+
+    path = GCSPRINTF("/local/domain/%d/image/device-model-domid", domid);
+    dm_domid = libxl__xs_read(gc, XBT_NULL, path);
+
+    return dm_domid != NULL;
 }
 
 int libxl__dm_check_start(libxl__gc *gc, libxl_domain_config *d_config,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:16:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:16:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUO0-0000sv-NA; Mon, 18 May 2020 01:16:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUNy-0000r9-Td
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:16:06 +0000
X-Inumbo-ID: 0a171162-98a5-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a171162-98a5-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 01:15:19 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id ee19so3960856qvb.11
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:19 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=aXBQgi2sAye4H55aK3umMyFvWsC+VqqA4xSnBHzfcsA=;
 b=Y4CygHC367Xbf5A5hj37oQO+gjLXJALnjUlf1hUo1/p0jZGOddoNrQdqhOv0g1ZRn/
 60HOuxmnDFCU5aoQqaCzo9567/GJoG1e+zjIT40yI2/u2leFRyenK5OA1MKLXVnDVLo6
 GzGVgrKL2WB50cwdsMjkI7FsqEcW6G8RuJpItEzJHiyNfjwF3Wb/7nnbedNNFCrZSUDN
 kbiOBqw2uz0cW6WAwexPdAR8nR0O6cbo74TPxyGD4fQ0x9zxilfa7eirSTBVlE84+k5h
 PFIW4l4glYiBJx7Gcvp9Dqru4Rrb/1IFByUFWFaJzt3opu4EjTZuLxKcgJmbBB4+H1Wg
 /DXA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=aXBQgi2sAye4H55aK3umMyFvWsC+VqqA4xSnBHzfcsA=;
 b=epC3mg/fXb9bOaFxGb3qZW0oxhEV/mcy3E7/VBUq0SETKCo8RppZ/NKfWLm3eG4jZH
 pPydwJNy8+GMI/taiE3iidywkhZi3mo9RXo9b5K4s915OY1XypGkSiZG7uHNNKXWE043
 WU0mCwAp6bb1TcKbFcdLcy30vE+2sKMMBPyjeccavCIG7Y+kSrw41LtsUvFJO9aLu/TV
 Fp6911c9FEPhAxyuAgBxoOuNyk5QI4njlc86DyA5H56BWj2mFgBoTjJBCtp2i3/Ei8SI
 IbgpGBzMCEUkLvDMksttBDAdydc2HFXGm8N19fkek3LSvJFWUtSknFysDkCOXO+ajgtj
 DI5A==
X-Gm-Message-State: AOAM532rRcWdS/cAJSfvBHVV0yZ/HbL4WtwIyt9kkniUw8toWdmwpyDn
 UQeoqw0tigoZjJhKme6va4/Ec3VG
X-Google-Smtp-Source: ABdhPJxQGoZ1QAmGwaUzaXiU2hFYeH9LnL6fFqe6WVMNfa04YE+a3N6KoBM2gCiM0CmwlYxZYL1R2A==
X-Received: by 2002:a0c:e488:: with SMTP id n8mr3033916qvl.172.1589764518955; 
 Sun, 17 May 2020 18:15:18 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:18 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 17/18] docs: Add device-model-domid to xenstore-paths
Date: Sun, 17 May 2020 21:13:52 -0400
Message-Id: <20200518011353.326287-18-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Document device-model-domid for when using a device model stubdomain.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 docs/misc/xenstore-paths.pandoc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index a152f5ea68..766e8008dc 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -148,6 +148,11 @@ The domain's own ID.
 The process ID of the device model associated with this domain, if it
 has one.
 
+#### ~/image/device-model-domid = INTEGER   [INTERNAL]
+
+The domain ID of the device model stubdomain associated with this domain,
+if it has one.
+
 #### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
 
 One node for each virtual CPU up to the guest's configured
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:16:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:16:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUO6-0000ys-12; Mon, 18 May 2020 01:16:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jaUO3-0000wK-UF
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:16:11 +0000
X-Inumbo-ID: 0b78a354-98a5-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b78a354-98a5-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 01:15:21 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id 4so6859085qtb.4
 for <xen-devel@lists.xenproject.org>; Sun, 17 May 2020 18:15:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=ecdUSLx0EkiAamNbQS5sqG5ss0VDCxifhfvPg5x2Guk=;
 b=VrTYroypDCzEoAzWrvBenNTRDz191WXxnoEdPPieqTWm22xBGc37Y6RiZYRQVm9Yus
 kUJSCEh90WgNcgAl2FhQE3TVNiCXBE/giYQpALiaVMgUQ7YvWQrNPPYHCOpz4PFeJ3X+
 M8+a1z0R/r6zPttm3Xz3+VUTv8LifqbGrNnVkJpAjGvoAV8LL+/qfR46ULbW7INe5VFL
 RARdPwfgNMlf0Tm7/WvrAKVcs8AJpLt+hvi0XjVWfPq1i4pwiUReYG6ZIWwfqV7Px/Lt
 zGdYiOn+7SKxCKJAHBy7VN9CO/6JsQa7AETneMYISRBBm2IoCoP0nYR0Xq0u6eXwcoI4
 p9iQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=ecdUSLx0EkiAamNbQS5sqG5ss0VDCxifhfvPg5x2Guk=;
 b=q6BgcUtGKkvFqlg2FWMNEvhHSdlatZhPS+h+u8s9f6tZKtI/20TllsgGlHvAhtD0FQ
 ADoT7c8NM2QKKAcWz0jl+dkDBRvlPKhNLn6JVaim5Rner9Pvf91kXOYrc9eR8DHOOLDr
 pNDynlKJ8542J0k/NI6TJh5TN7uGiplQAqd04b914jaIUrtwPOqWblkgPkyBa69SAQw9
 yAcGoDnlXG2eQvR0F2rOWSFY5BL5JmaAJcb7Gam94/WArWCIijrUqh5oYldnn3JV/nu9
 Ig6eE53MchguepKM/vPxPIozoR7EQkj5wDyZMQKc/GDY+Z4sI8lnUYUgzjhNQ9bPljeL
 +YvQ==
X-Gm-Message-State: AOAM532qAjW1CtEw9HZQ0zZFBCfSPbJnD9Gll4u49d5C7aU+p/ZqqsDf
 u9jplSCgZvcsoVqR8YBN/JX7jeaK
X-Google-Smtp-Source: ABdhPJyF9a8HZCeyrTGOYZghXrrZsAytIyAZoVEMeeHS+bILTEshZd06m0cilmYOf2s0lNKs0XhpLA==
X-Received: by 2002:ac8:554c:: with SMTP id o12mr13445021qtr.89.1589764521229; 
 Sun, 17 May 2020 18:15:21 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:ec68:c92e:af5a:2d3a])
 by smtp.gmail.com with ESMTPSA id l2sm7072864qkd.57.2020.05.17.18.15.20
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 17 May 2020 18:15:20 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 18/18] libxl: Check stubdomain kernel & ramdisk presence
Date: Sun, 17 May 2020 21:13:53 -0400
Message-Id: <20200518011353.326287-19-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Sitting just outside this patch's diff context is the following comment on libxl__domain_make:
/* fixme: this function can leak the stubdom if it fails */

When the stubdomain kernel or ramdisk is not present, the domid and
stubdomain name will indeed be leaked.  Avoid the leak by checking for
the files' presence and erroring out when they are absent.  This doesn't
fix all cases, but it avoids a big one when using a Linux device model
stubdomain.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 098dc49ecb..997c4815e0 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2336,6 +2336,22 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
         dm_config->num_vkbs = 1;
     }
 
+    if (guest_config->b_info.stubdomain_kernel &&
+        access(guest_config->b_info.stubdomain_kernel, R_OK) != 0) {
+        LOGED(ERROR, guest_domid, "could not access stubdomain kernel %s",
+              guest_config->b_info.stubdomain_kernel);
+        ret = ERROR_INVAL;
+        goto out;
+    }
+
+    if (guest_config->b_info.stubdomain_ramdisk &&
+        access(guest_config->b_info.stubdomain_ramdisk, R_OK) != 0) {
+        LOGED(ERROR, guest_domid, "could not access stubdomain ramdisk %s",
+              guest_config->b_info.stubdomain_ramdisk);
+        ret = ERROR_INVAL;
+        goto out;
+    }
+
     stubdom_state->pv_kernel.path = guest_config->b_info.stubdomain_kernel;
     stubdom_state->pv_ramdisk.path = guest_config->b_info.stubdomain_ramdisk;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 01:53:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 01:53:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaUxu-00054w-Ta; Mon, 18 May 2020 01:53:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDU1=7A=epam.com=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1jaUxt-00054r-Kd
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 01:53:13 +0000
X-Inumbo-ID: 53a3a200-98aa-11ea-a82d-12813bfff9fa
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.53]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 53a3a200-98aa-11ea-a82d-12813bfff9fa;
 Mon, 18 May 2020 01:53:11 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fZXnPSIulmQAA0NeXcpgP3yo/bQ9MYx+SRpwBAk/YYjNW4up4aJflsPPV3hSJSP0LeexEWjuDpnDN3uDHpX5pm3bv1wIcPninXn6YDWImTr3XF3T9KE7s6slFeeldcTf6fE7Lf573pI+iMlM/6R94nohcV6YZx9TOprE7M6vki2bIVIp6JsOn1At+R5LnWVXbq5DiO/dim2fOsN6bIIlv7BcnpyZcijcFaYtXvpLl7Q9iJmztBOU2pvTgPijEyMChe2CIaG/IS8qNk+YJLSxPn1YqVvv2lgmthaGORIaDgc2VvHbEOmHaUYGsA9Xto/P9jmNLxorluwVYfLXs4JzAQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K8ZTmz7ZkuJd8she9nbYHz8c8nga6+mSjhcSH14/Cso=;
 b=hRRqk4aosQjGq11MWAjEUwwkd3shu2nMhf284TYrop4ADstuFu7Tq+078L3PRYN8poNDmoUZsYaCLA3Ng8mVeScD/V8ou5o2eIL4zqc6StQ1fXkaaWwz9PSwFRTiRSi9aYiglh9F2yKE3uhEpXoJVGs46UrYbHuYDCUIVGpQmyJaee+8jhwsbJIIJED83GOZi3zGiipJNLoz/5EJtw2l9BxYjBhAiFbuMCTYzmhVWdTAG64ItfEGAD4MQY+UIq9Vcewo8oSmsqE4n1+L35Zv0h6+/kXYHFFbbQOgEZ9KPamAWCtcFQueeee/S1tjbFETNUleDScinkdNpoFY6FGC2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=K8ZTmz7ZkuJd8she9nbYHz8c8nga6+mSjhcSH14/Cso=;
 b=LUuQkGNfVhj50WuCZF5h4pG9oHufSTIfp1q772aOvjwL/5tOVEBZZEG7U0NGgCqo1d1O1/OnkF5rK+cCqsk+yT88BmkyueTuHiJMzex0vtZCcDvgfWjzv7ssFTfLhCD8gxWlt0QQq++925MJ8YhGT0pMUwjOMTtD5LJF/cF43+M=
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 (2603:10a6:803:23::18) by VI1PR0302MB2640.eurprd03.prod.outlook.com
 (2603:10a6:800:e1::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.33; Mon, 18 May
 2020 01:53:06 +0000
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75]) by VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75%5]) with mapi id 15.20.3000.022; Mon, 18 May 2020
 01:53:06 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "sstabellini@kernel.org" <sstabellini@kernel.org>, "julien@xen.org"
 <julien@xen.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: [PATCH] xen/arm: optee: allow plain TMEM buffers with NULL address
Thread-Topic: [PATCH] xen/arm: optee: allow plain TMEM buffers with NULL
 address
Thread-Index: AQHWLLcTxhZtogukPEOG78aedSZTLQ==
Date: Mon, 18 May 2020 01:53:06 +0000
Message-ID: <2a32c7c2048333169c9378194d6a435e2e7ed2d7.camel@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Evolution 3.36.2 
authentication-results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f03da269-fdc3-4f1c-6c22-08d7face35bd
x-ms-traffictypediagnostic: VI1PR0302MB2640:
x-microsoft-antispam-prvs: <VI1PR0302MB26409110A42B7A0BD4C40943E6B80@VI1PR0302MB2640.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2331;
x-forefront-prvs: 04073E895A
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: MSUU8g8CFFGS49noMtYaZTrPrjA9IBiKtbE/JGFyIIefUH3w2SwuI6ynmj6H/KKfIv3C5q2wCPwPlx2cH6U3GRsMZvSyGGfkz8rUJchCnszJGjlMGq2U4sl/W9xlEww9ErBTrLiddELc2bqfXvpy8gLXQvsA/7Lw1GZKIVXFdhS2dExgr+5U2w72qfhJyJ3xDq65cHRoumjOXWyiqazO67HfPRHjVEt77EUpa3ozJycx4h50CqaPgXA6na+2SBEh3p5DqN9YfvE6RiuJN5HJRPhEaNnWcWC68uiLPTUO2ealJg73gYfzrq1U804cd1qpuEq7RBaaAgKnW4TpF666rzCZ0LCKVXiipXki8nbwzjaw71uVKBHtVBVCitPTx2GLT0majsw7aXwt+Fv2YI5TnNtG2oCmvlxwNrJFC4Y+8jMGlpt8MlL1bp9/vIsDpQ8T
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR0302MB3407.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(39860400002)(346002)(396003)(376002)(136003)(366004)(36756003)(478600001)(2906002)(86362001)(6512007)(6486002)(76116006)(91956017)(66446008)(64756008)(66556008)(66476007)(66946007)(71200400001)(8676002)(55236004)(186003)(26005)(2616005)(5660300002)(316002)(8936002)(6506007)(110136005);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: DImgU2KnJH7Uho1iQB9Q0w6r4Wecfz+85ZBGpDI36C3CTwQMH3xUdooZMZNUEnksOuWswXOzcswhsiTUX7bBy64j0+5E5FAXDDmDt3Qz6ydCrmLM5rALdXIt3485kw20ReZYzv/3DrtimEWuSS31pLVS9BfnBzttcId9oATFHIaxhnUlku62ovXbMMB8DZ2xvGLFxrmfWtUIwJXzRJrNFTUe9CLBxeLBYHjYQjP7yAfeVod0NKIYWfw6PLKtvZB3QfgPGA46EbIFzvvY7d42LjfIpxnKdK0A4GGHysf332jyBNElKgE95OgJDObYO9/O2SYTIuCIThcxncSw1b0gADvguVG5AkUznwAPoWohNfU3DxXmGinK4Y3snJfgxeuKBaCNZBUDMAzPa/N/wRcKxaIuP+TN7XP9oXc8VrIqv/sNB0VR+AMSEyj6+mJ42wPAxXS0BWyd3OpcMyHS1xZFqzcBlmMwNCoJ5UId/lAWPc8=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <24933F503D2E4A419B0B0F6CA6A2F68A@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f03da269-fdc3-4f1c-6c22-08d7face35bd
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 May 2020 01:53:06.6428 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: lGrOR4dWWfmp9CEK7KoTb2h9bJ2SH5GijUIiDfCptqpOr4NJL9XiXGDmK+enFnhQ4fVsYDiDl+QgMQnt4U7VrzW4VwLDV+7skohkT0mEdrc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB2640
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Trusted Applications use a popular approach to determine the required
size of a buffer: the client provides a memory reference with a NULL
pointer to a buffer. This is a so-called "Null memory reference". The
TA updates the reference with the required size and returns it back to
the client. Then the client allocates a buffer of the needed size and
repeats the operation.

This behavior is described in the TEE Client API Specification,
paragraph 3.2.5, Memory References.

OP-TEE represents this null memory reference as a TMEM parameter with
buf_ptr == NULL. This is the only case when we should allow a TMEM
buffer without the OPTEE_MSG_ATTR_NONCONTIG flag.

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/tee/optee.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index af19fc31f8..fb7d491b25 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -865,9 +865,12 @@ static int translate_params(struct optee_domain *ctx,
             }
             else
             {
-                gdprintk(XENLOG_WARNING, "Guest tries to use old tmem arg\n");
-                ret = -EINVAL;
-                goto out;
+                if ( call->xen_arg->params[i].u.tmem.buf_ptr )
+                {
+                    gdprintk(XENLOG_WARNING, "Guest tries to use old tmem arg\n");
+                    ret = -EINVAL;
+                    goto out;
+                }
             }
             break;
         case OPTEE_MSG_ATTR_TYPE_NONE:


From xen-devel-bounces@lists.xenproject.org Mon May 18 02:05:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 02:05:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaV9F-0006OH-0n; Mon, 18 May 2020 02:04:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDU1=7A=epam.com=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1jaV9D-0006OC-MC
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 02:04:55 +0000
X-Inumbo-ID: f61990ac-98ab-11ea-9887-bc764e2007e4
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.78]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f61990ac-98ab-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 02:04:54 +0000 (UTC)
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 (2603:10a6:803:23::18) by VI1PR0302MB2813.eurprd03.prod.outlook.com
 (2603:10a6:800:e2::12) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.26; Mon, 18 May
 2020 02:04:50 +0000
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75]) by VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75%5]) with mapi id 15.20.3000.022; Mon, 18 May 2020
 02:04:50 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "julien@xen.org" <julien@xen.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
Thread-Topic: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
Thread-Index: AQHWI0fTRy+rlPvmsUKZMLlR49t2iaiiqEuAgAqCkIA=
Date: Mon, 18 May 2020 02:04:49 +0000
Message-ID: <878c09ec58b9c9bef81497fa7e7a0ac47ddd8f21.camel@epam.com>
References: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
 <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
In-Reply-To: <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Evolution 3.36.2 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
Content-Type: text/plain; charset="utf-8"
Content-ID: <F5161EBA8E59D14A97581D21098D352E@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8e5f16a0-90fb-45a5-49aa-08d7facfd8f7
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 May 2020 02:04:49.9634 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB2813
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Julien,

On Mon, 2020-05-11 at 10:34 +0100, Julien Grall wrote:
> Hi Volodymyr,
> 
> On 06/05/2020 02:44, Volodymyr Babchuk wrote:
> > Normal World can share a buffer with OP-TEE for two reasons:
> > 1. Some client application wants to exchange data with a TA
> > 2. OP-TEE asks for a shared buffer for internal needs
> > 
> > The second case was handled more strictly than necessary:
> > 
> > 1. In an RPC request OP-TEE asks for a buffer
> > 2. NW allocates the buffer and provides it via the RPC response
> > 3. Xen pins the pages and translates the data
> > 4. Xen provides the buffer to OP-TEE
> > 5. OP-TEE uses it
> > 6. OP-TEE sends a request to free the buffer
> > 7. NW frees the buffer and sends the RPC response
> > 8. Xen unpins the pages and forgets about the buffer
> > 
> > The problem is that Xen should forget about the buffer between stages 6
> > and 7. I.e. the right flow should look like this:
> > 
> > 6. OP-TEE sends a request to free the buffer
> > 7. Xen unpins the pages and forgets about the buffer
> > 8. NW frees the buffer and sends the RPC response
> > 
> > This is because OP-TEE internally frees the buffer before sending the
> > "free SHM buffer" request, so we have no reason to hold a reference to
> > this buffer anymore. Moreover, on multiprocessor systems NW has time
> > to reuse the buffer cookie for another buffer. Xen complained about this
> > and denied the new buffer registration. I have seen this issue while
> > running tests on an i.MX SoC.
> > 
> > So, this patch corrects that behavior by freeing the buffer
> > earlier, when handling the RPC return from OP-TEE.
> > 
> > Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
> > ---
> >   xen/arch/arm/tee/optee.c | 24 ++++++++++++++++++++----
> >   1 file changed, 20 insertions(+), 4 deletions(-)
> > 
> > diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
> > index 6a035355db..af19fc31f8 100644
> > --- a/xen/arch/arm/tee/optee.c
> > +++ b/xen/arch/arm/tee/optee.c
> > @@ -1099,6 +1099,26 @@ static int handle_rpc_return(struct optee_domain *ctx,
> >           if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
> >               call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a;
> >   
> > +        /*
> > +         * OP-TEE signals that it frees the buffer that it requested
> > +         * before. This is the right for us to do the same.
> > +         */
> > +        if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
> > +        {
> > +            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
> > +
> > +            free_optee_shm_buf(ctx, cookie);
> > +
> > +            /*
> > +             * This should never happen. We have a bug either in the
> > +             * OP-TEE or in the mediator.
> > +             */
> > +            if ( call->rpc_data_cookie && call->rpc_data_cookie != cookie )
> > +                gprintk(XENLOG_ERR,
> > +                        "Saved RPC cookie does not corresponds to OP-TEE's (%"PRIx64" != %"PRIx64")\n",
> 
> s/corresponds/correspond/
Will fix in the next version.

> > +                        call->rpc_data_cookie, cookie);
> 
> IIUC, if you free the wrong SHM buffer then your guest is likely to be 
> running incorrectly afterwards. So shouldn't we crash the guest to avoid 
> further issue?
> 

Well, we freed the exact buffer that OP-TEE asked us to free, so the
guest didn't do anything bad. Moreover, the OP-TEE driver in the Linux
kernel does not have a similar check, so it will free this buffer
without any complaints. I'm just being overcautious here. Thus, I see
no reason to crash the guest.


From xen-devel-bounces@lists.xenproject.org Mon May 18 05:17:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 05:17:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaY9N-000631-Mt; Mon, 18 May 2020 05:17:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k4Zq=7A=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jaY9M-00062w-Nz
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 05:17:16 +0000
X-Inumbo-ID: d5514a34-98c6-11ea-a83d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5514a34-98c6-11ea-a83d-12813bfff9fa;
 Mon, 18 May 2020 05:17:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EF326ABE4;
 Mon, 18 May 2020 05:17:14 +0000 (UTC)
Subject: Re: [PATCH v5 27/38] xen: gntdev: fix common struct sg_table related
 issues
To: Marek Szyprowski <m.szyprowski@samsung.com>,
 dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
References: <20200513132114.6046-1-m.szyprowski@samsung.com>
 <20200513133245.6408-1-m.szyprowski@samsung.com>
 <CGME20200513133316eucas1p2ad01d27ea4388cb50424bcf112d710ef@eucas1p2.samsung.com>
 <20200513133245.6408-27-m.szyprowski@samsung.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <e2f6451b-d9fa-ff22-83c9-d22636113dc8@suse.com>
Date: Mon, 18 May 2020 07:17:09 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200513133245.6408-27-m.szyprowski@samsung.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 David Airlie <airlied@linux.ie>, Daniel Vetter <daniel@ffwll.ch>,
 xen-devel@lists.xenproject.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
 linux-arm-kernel@lists.infradead.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 13.05.20 15:32, Marek Szyprowski wrote:
> The Documentation/DMA-API-HOWTO.txt states that the dma_map_sg() function
> returns the number of the created entries in the DMA address space.
> However the subsequent calls to the dma_sync_sg_for_{device,cpu}() and
> dma_unmap_sg must be called with the original number of the entries
> passed to the dma_map_sg().
> 
> struct sg_table is a common structure used for describing a non-contiguous
> memory buffer, used commonly in the DRM and graphics subsystems. It
> consists of a scatterlist with memory pages and DMA addresses (sgl entry),
> as well as the number of scatterlist entries: CPU pages (orig_nents entry)
> and DMA mapped pages (nents entry).
> 
> It turned out that it was a common mistake to misuse nents and orig_nents
> entries, calling DMA-mapping functions with a wrong number of entries or
> ignoring the number of mapped entries returned by the dma_map_sg()
> function.
> 
> To avoid such issues, lets use a common dma-mapping wrappers operating
> directly on the struct sg_table objects and use scatterlist page
> iterators where possible. This, almost always, hides references to the
> nents and orig_nents entries, making the code robust, easier to follow
> and copy/paste safe.
> 
> Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>

Acked-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 18 07:09:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 07:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaZtq-0006lH-FB; Mon, 18 May 2020 07:09:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaZtp-0006lC-D4
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 07:09:21 +0000
X-Inumbo-ID: 7e726e5e-98d6-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e726e5e-98d6-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 07:09:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 47320AC52;
 Mon, 18 May 2020 07:09:22 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH mini-os] console: add newline at EOF
Message-ID: <9d2e445b-0b0f-4e4d-08a8-0f22013f111b@suse.com>
Date: Mon, 18 May 2020 09:09:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Some gcc versions get pretty unhappy without it.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/console/console.c
+++ b/console/console.c
@@ -174,4 +174,4 @@ void resume_console(void)
 {
     xencons_ring_resume(xen_console);
     console_initialised = 1;
-}
\ No newline at end of file
+}


From xen-devel-bounces@lists.xenproject.org Mon May 18 07:36:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 07:36:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaaK4-0000mh-NM; Mon, 18 May 2020 07:36:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VSjt=7A=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jaaK3-0000mc-Qc
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 07:36:27 +0000
X-Inumbo-ID: 48134f14-98da-11ea-ae69-bc764e2007e4
Received: from mail-wm1-x32d.google.com (unknown [2a00:1450:4864:20::32d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48134f14-98da-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 07:36:27 +0000 (UTC)
Received: by mail-wm1-x32d.google.com with SMTP id w64so9053593wmg.4
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 00:36:27 -0700 (PDT)
X-Received: by 2002:a1c:c38b:: with SMTP id t133mr17496082wmf.31.1589787386275; 
 Mon, 18 May 2020 00:36:26 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id r11sm16070057wro.15.2020.05.18.00.36.25
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 00:36:25 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Manuel Bouyer'" <bouyer@antioche.eu.org>,
 "'Andrew Cooper'" <andrew.cooper3@citrix.com>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
 <20200517175607.GA8793@antioche.eu.org>
In-Reply-To: <20200517175607.GA8793@antioche.eu.org>
Subject: RE: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Date: Mon, 18 May 2020 08:36:24 +0100
Message-ID: <000a01d62ce7$093b7f50$1bb27df0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQGWX+AZzwLaymDWSfMLMVZAbnMTPQIii/0tAVvecg8AeYK6jgI07lBRAkhrFd0CPbsL8AESTYVIqM7iRZA=
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Manuel Bouyer
> Sent: 17 May 2020 18:56
> To: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: xen-devel@lists.xenproject.org
> Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
> 
> On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> > I've been looking a bit deeper into the Xen kernel.
> > The mapping fails in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn(),
> >         /* Error path: not a suitable GFN at all */
> >         if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) )
> >         {
> >             gdprintk(XENLOG_ERR,
> >                      "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n",
> >                      *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t));
> >             return NULL;
> >         }
> >
> > *t is 4, which translates to p2m_mmio_dm
> >
> > it looks like p2m_get_page_from_gfn() is not ready to handle this case
> > for dom0.
> 
> And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
> for NetBSD
> 

It would be a good idea but you shouldn't have to. Also, qemu-trad won't use it even if it is there.

  Paul



From xen-devel-bounces@lists.xenproject.org Mon May 18 07:43:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 07:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaaQS-0001ki-Qg; Mon, 18 May 2020 07:43:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaaQQ-0001kd-Nx
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 07:43:02 +0000
X-Inumbo-ID: 304aa70a-98db-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 304aa70a-98db-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 07:42:56 +0000 (UTC)
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaaQJ-00014R-U4; Mon, 18 May 2020 07:42:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaaQJ-0005AU-M6; Mon, 18 May 2020 07:42:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaaQJ-0004gp-Lf; Mon, 18 May 2020 07:42:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150228-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150228: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=144dfe4215902b40a9d17fdb326054bbd8e07563
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 07:42:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150228 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150228/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              144dfe4215902b40a9d17fdb326054bbd8e07563
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  122 days
Failing since        146211  2020-01-18 04:18:52 Z  121 days  112 attempts
Testing same since   150210  2020-05-16 04:20:17 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18353 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 18 07:49:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 07:49:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaaWm-00022L-MC; Mon, 18 May 2020 07:49:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2hxz=7A=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1jaaWl-00022G-GN
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 07:49:35 +0000
X-Inumbo-ID: 1da76114-98dc-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1da76114-98dc-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 07:49:35 +0000 (UTC)
Received: from localhost (83-86-89-107.cable.dynamic.v4.ziggo.nl
 [83.86.89.107])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id BA1B120825;
 Mon, 18 May 2020 07:49:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589788174;
 bh=llFhUjdStD3n4Gh/ZphczSomZFgVEAqpTzetCDaM+zA=;
 h=Subject:To:Cc:From:Date:From;
 b=wU54fkNWp8LHLgEzn+kFamrMP/Lgiy5nlD8Xm7u8UC5eW9cgVWFdiFC70UHoA33wW
 r2IZfxHbupNZcSMMwMs8fBSxGqI257JGly6cAocZb+Kp5goFNkMXOGX8YH/oeo5XFP
 p+CxJTo0n4LTVaS80fYE5CkqyeAup/x/6Z30UNjs=
Subject: Patch "x86/paravirt: Remove the unused irq_enable_sysexit pv op" has
 been added to the 4.4-stable tree
To: akpm@linux-foundation.org, boris.ostrovsky@oracle.com, bp@alien8.de,
 bp@suse.de, brgerst@gmail.com, david.vrabel@citrix.com, dvlasenk@redhat.com,
 gregkh@linuxfoundation.org, hpa@zytor.com, konrad.wilk@oracle.com,
 luto@amacapital.net, luto@kernel.org, mingo@kernel.org, peterz@infradead.org,
 tglx@linutronix.de, torvalds@linux-foundation.org,
 virtualization@lists.linux-foundation.org, xen-devel@lists.xenproject.org
From: <gregkh@linuxfoundation.org>
Date: Mon, 18 May 2020 09:49:24 +0200
Message-ID: <1589788164224171@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=ANSI_X3.4-1968
Content-Transfer-Encoding: 8bit
X-stable: commit
X-Patchwork-Hint: ignore 
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: stable-commits@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


This is a note to let you know that I've just added the patch titled

    x86/paravirt: Remove the unused irq_enable_sysexit pv op

to the 4.4-stable tree which can be found at:
    http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     x86-paravirt-remove-the-unused-irq_enable_sysexit-pv-op.patch
and it can be found in the queue-4.4 subdirectory.

If you, or anyone else, feel it should not be added to the stable tree,
please let <stable@vger.kernel.org> know about it.


>From 88c15ec90ff16880efab92b519436ee17b198477 Mon Sep 17 00:00:00 2001
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Date: Thu, 19 Nov 2015 16:55:46 -0500
Subject: x86/paravirt: Remove the unused irq_enable_sysexit pv op

From: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 88c15ec90ff16880efab92b519436ee17b198477 upstream.

As a result of commit "x86/xen: Avoid fast syscall path for Xen PV
guests", the irq_enable_sysexit pv op is no longer called by Xen PV
guests, and since they were its only user we can safely remove it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: david.vrabel@citrix.com
Cc: konrad.wilk@oracle.com
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1447970147-1733-3-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/entry/entry_32.S             |    8 ++------
 arch/x86/include/asm/paravirt.h       |    7 -------
 arch/x86/include/asm/paravirt_types.h |    9 ---------
 arch/x86/kernel/asm-offsets.c         |    3 ---
 arch/x86/kernel/paravirt.c            |    7 -------
 arch/x86/kernel/paravirt_patch_32.c   |    2 --
 arch/x86/kernel/paravirt_patch_64.c   |    1 -
 arch/x86/xen/enlighten.c              |    3 ---
 arch/x86/xen/xen-asm_32.S             |   14 --------------
 arch/x86/xen/xen-ops.h                |    3 ---
 10 files changed, 2 insertions(+), 55 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -331,7 +331,8 @@ sysenter_past_esp:
 	 * Return back to the vDSO, which will pop ecx and edx.
 	 * Don't bother with DS and ES (they already contain __USER_DS).
 	 */
-	ENABLE_INTERRUPTS_SYSEXIT
+	sti
+	sysexit
 
 .pushsection .fixup, "ax"
 2:	movl	$0, PT_FS(%esp)
@@ -554,11 +555,6 @@ ENTRY(native_iret)
 	iret
 	_ASM_EXTABLE(native_iret, iret_exc)
 END(native_iret)
-
-ENTRY(native_irq_enable_sysexit)
-	sti
-	sysexit
-END(native_irq_enable_sysexit)
 #endif
 
 ENTRY(overflow)
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -938,13 +938,6 @@ extern void default_banner(void);
 	push %ecx; push %edx;				\
 	call PARA_INDIRECT(pv_cpu_ops+PV_CPU_read_cr0);	\
 	pop %edx; pop %ecx
-
-#define ENABLE_INTERRUPTS_SYSEXIT					\
-	PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_irq_enable_sysexit),	\
-		  CLBR_NONE,						\
-		  jmp PARA_INDIRECT(pv_cpu_ops+PV_CPU_irq_enable_sysexit))
-
-
 #else	/* !CONFIG_X86_32 */
 
 /*
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -162,15 +162,6 @@ struct pv_cpu_ops {
 
 	u64 (*read_pmc)(int counter);
 
-#ifdef CONFIG_X86_32
-	/*
-	 * Atomically enable interrupts and return to userspace.  This
-	 * is only used in 32-bit kernels.  64-bit kernels use
-	 * usergs_sysret32 instead.
-	 */
-	void (*irq_enable_sysexit)(void);
-#endif
-
 	/*
 	 * Switch to usermode gs and return to 64-bit usermode using
 	 * sysret.  Only used in 64-bit kernels to return to 64-bit
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -65,9 +65,6 @@ void common(void) {
 	OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
 	OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
 	OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
-#ifdef CONFIG_X86_32
-	OFFSET(PV_CPU_irq_enable_sysexit, pv_cpu_ops, irq_enable_sysexit);
-#endif
 	OFFSET(PV_CPU_read_cr0, pv_cpu_ops, read_cr0);
 	OFFSET(PV_MMU_read_cr2, pv_mmu_ops, read_cr2);
 #endif
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -168,9 +168,6 @@ unsigned paravirt_patch_default(u8 type,
 		ret = paravirt_patch_ident_64(insnbuf, len);
 
 	else if (type == PARAVIRT_PATCH(pv_cpu_ops.iret) ||
-#ifdef CONFIG_X86_32
-		 type == PARAVIRT_PATCH(pv_cpu_ops.irq_enable_sysexit) ||
-#endif
 		 type == PARAVIRT_PATCH(pv_cpu_ops.usergs_sysret32) ||
 		 type == PARAVIRT_PATCH(pv_cpu_ops.usergs_sysret64))
 		/* If operation requires a jmp, then jmp */
@@ -226,7 +223,6 @@ static u64 native_steal_clock(int cpu)
 
 /* These are in entry.S */
 extern void native_iret(void);
-extern void native_irq_enable_sysexit(void);
 extern void native_usergs_sysret32(void);
 extern void native_usergs_sysret64(void);
 
@@ -385,9 +381,6 @@ __visible struct pv_cpu_ops pv_cpu_ops =
 
 	.load_sp0 = native_load_sp0,
 
-#if defined(CONFIG_X86_32)
-	.irq_enable_sysexit = native_irq_enable_sysexit,
-#endif
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_IA32_EMULATION
 	.usergs_sysret32 = native_usergs_sysret32,
--- a/arch/x86/kernel/paravirt_patch_32.c
+++ b/arch/x86/kernel/paravirt_patch_32.c
@@ -5,7 +5,6 @@ DEF_NATIVE(pv_irq_ops, irq_enable, "sti"
 DEF_NATIVE(pv_irq_ops, restore_fl, "push %eax; popf");
 DEF_NATIVE(pv_irq_ops, save_fl, "pushf; pop %eax");
 DEF_NATIVE(pv_cpu_ops, iret, "iret");
-DEF_NATIVE(pv_cpu_ops, irq_enable_sysexit, "sti; sysexit");
 DEF_NATIVE(pv_mmu_ops, read_cr2, "mov %cr2, %eax");
 DEF_NATIVE(pv_mmu_ops, write_cr3, "mov %eax, %cr3");
 DEF_NATIVE(pv_mmu_ops, read_cr3, "mov %cr3, %eax");
@@ -46,7 +45,6 @@ unsigned native_patch(u8 type, u16 clobb
 		PATCH_SITE(pv_irq_ops, restore_fl);
 		PATCH_SITE(pv_irq_ops, save_fl);
 		PATCH_SITE(pv_cpu_ops, iret);
-		PATCH_SITE(pv_cpu_ops, irq_enable_sysexit);
 		PATCH_SITE(pv_mmu_ops, read_cr2);
 		PATCH_SITE(pv_mmu_ops, read_cr3);
 		PATCH_SITE(pv_mmu_ops, write_cr3);
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -12,7 +12,6 @@ DEF_NATIVE(pv_mmu_ops, write_cr3, "movq
 DEF_NATIVE(pv_cpu_ops, clts, "clts");
 DEF_NATIVE(pv_cpu_ops, wbinvd, "wbinvd");
 
-DEF_NATIVE(pv_cpu_ops, irq_enable_sysexit, "swapgs; sti; sysexit");
 DEF_NATIVE(pv_cpu_ops, usergs_sysret64, "swapgs; sysretq");
 DEF_NATIVE(pv_cpu_ops, usergs_sysret32, "swapgs; sysretl");
 DEF_NATIVE(pv_cpu_ops, swapgs, "swapgs");
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1240,10 +1240,7 @@ static const struct pv_cpu_ops xen_cpu_o
 
 	.iret = xen_iret,
 #ifdef CONFIG_X86_64
-	.usergs_sysret32 = xen_sysret32,
 	.usergs_sysret64 = xen_sysret64,
-#else
-	.irq_enable_sysexit = xen_sysexit,
 #endif
 
 	.load_tr_desc = paravirt_nop,
--- a/arch/x86/xen/xen-asm_32.S
+++ b/arch/x86/xen/xen-asm_32.S
@@ -35,20 +35,6 @@ check_events:
 	ret
 
 /*
- * We can't use sysexit directly, because we're not running in ring0.
- * But we can easily fake it up using iret.  Assuming xen_sysexit is
- * jumped to with a standard stack frame, we can just strip it back to
- * a standard iret frame and use iret.
- */
-ENTRY(xen_sysexit)
-	movl PT_EAX(%esp), %eax			/* Shouldn't be necessary? */
-	orl $X86_EFLAGS_IF, PT_EFLAGS(%esp)
-	lea PT_EIP(%esp), %esp
-
-	jmp xen_iret
-ENDPROC(xen_sysexit)
-
-/*
  * This is run where a normal iret would be run, with the same stack setup:
  *	8: eflags
  *	4: cs
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -139,9 +139,6 @@ DECL_ASM(void, xen_restore_fl_direct, un
 
 /* These are not functions, and cannot be called normally */
 __visible void xen_iret(void);
-#ifdef CONFIG_X86_32
-__visible void xen_sysexit(void);
-#endif
 __visible void xen_sysret32(void);
 __visible void xen_sysret64(void);
 __visible void xen_adjust_exception_frame(void);


Patches currently in stable-queue which might be from boris.ostrovsky@oracle.com are

queue-4.4/x86-paravirt-remove-the-unused-irq_enable_sysexit-pv-op.patch


From xen-devel-bounces@lists.xenproject.org Mon May 18 07:53:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 07:53:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaaaC-0002qh-4s; Mon, 18 May 2020 07:53:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=a79S=7A=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1jaaaA-0002qY-Jb
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 07:53:06 +0000
X-Inumbo-ID: 9af47f9e-98dc-11ea-ae69-bc764e2007e4
Received: from hera.aquilenet.fr (unknown [2a0c:e300::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9af47f9e-98dc-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 07:53:05 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 853FD76B;
 Mon, 18 May 2020 09:53:03 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id KDUjvMKp-PnR; Mon, 18 May 2020 09:53:02 +0200 (CEST)
Received: from function (lfbn-bor-1-797-11.w86-234.abo.wanadoo.fr
 [86.234.239.11])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id E466D70A;
 Mon, 18 May 2020 09:53:01 +0200 (CEST)
Received: from samy by function with local (Exim 4.93)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1jaaa4-0054WS-Lo; Mon, 18 May 2020 09:53:00 +0200
Date: Mon, 18 May 2020 09:53:00 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH mini-os] console: add newline at EOF
Message-ID: <20200518075300.t3tvfo7ucbwujmif@function>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
References: <9d2e445b-0b0f-4e4d-08a8-0f22013f111b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <9d2e445b-0b0f-4e4d-08a8-0f22013f111b@suse.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jan Beulich, on Mon, 18 May 2020 09:09:14 +0200, wrote:
> Some gcc versions get pretty unhappy without.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

That was an easy one :)

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> 
> --- a/console/console.c
> +++ b/console/console.c
> @@ -174,4 +174,4 @@ void resume_console(void)
>  {
>      xencons_ring_resume(xen_console);
>      console_initialised = 1;
> -}
> \ No newline at end of file
> +}
> 


From xen-devel-bounces@lists.xenproject.org Mon May 18 08:28:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 08:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jab7V-000610-1m; Mon, 18 May 2020 08:27:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPYZ=7A=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jab5A-0005xo-Sh
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 08:25:09 +0000
X-Inumbo-ID: 14a6caa0-98e1-11ea-ae69-bc764e2007e4
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14a6caa0-98e1-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 08:25:07 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id c21so7256235lfb.3
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 01:25:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=from:to:cc:subject:date:message-id;
 bh=62dBjwAqFx42J0m7dvvwDYjAwTZ2hCNIWZwjeYro/gg=;
 b=f4AneL6nEMxZSczyq6KZSI1VTA5SndsiS8Iq6rYHRldr4TMOgI9EilEs/JbiTo0liB
 O44NFkD7fInDweSj0w89baGKMusGXCN+pilefCaYustNxyjrG6HYPdKZ+CVroH9Ux2Il
 Zd0ZCgMovIePLK2i+yLoT8e6UNFOFXp7TEPz1t6ppOo94HNFbnVwVAIWeMOu7y26wKf+
 NyVxL9208wqBYtEQIS1KBWb3ihlss1kgA29uyrCydbKHUydlPgnMDnOKUvBxCyDZaL39
 5Xr36kr0PXQeCfPS8RluyeKxk+PWf492N3ADht1x63ze3hjpfpArh6IeIZ362EIeYPjA
 vSdQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=62dBjwAqFx42J0m7dvvwDYjAwTZ2hCNIWZwjeYro/gg=;
 b=KWgviQgY/0AEBYYyKyR2FLUTIx3ZAJaKG4AGidP0KZkOaK3i7SWz+Ijq0EfpV3Fgxr
 /Sv0rWCgQ18241RkS++CgjHeZVmZznJfilVy7DoPsV3eMRWF7uuwduSdbAJV1zZvr3dz
 rficw7CeNmRoIfcIA5kms4kdGGGFvBSLv2OBmtBZzXUx3QjzaCDSiRffJa6fQRQYUMzF
 rIY/zprDjS4Kka1FCAVTFrbA5D6/YUV3YkXf4igC7ihj+S/ZP9B0MaC6rT+tX+/bIlI3
 YvbyPCj7wNLsPxncWZRTrgT4MQ1KDTm+JdCQEh+/cfYl4yCU5hVQpRYZTj2SG/Qx6pbT
 xgnA==
X-Gm-Message-State: AOAM5330AmrsKVMd0b2OJ8VyMhx4C7wDx2rJTWe7idNBTd2L+O6dvUy9
 Uc7DTIujI7mUQ+3j7rNskP2QuCkvymVZ5g==
X-Google-Smtp-Source: ABdhPJzA9c6bRJ1VHIs6AJSFgy2kPYrjodljBmKY4nDsIXV4imqGSahSJk1tj7h1h2RImhCxDQmXFQ==
X-Received: by 2002:a19:4895:: with SMTP id v143mr6566918lfa.193.1589790306189; 
 Mon, 18 May 2020 01:25:06 -0700 (PDT)
Received: from centos7-pv-guest.localdomain ([5.35.46.227])
 by smtp.gmail.com with ESMTPSA id h84sm6485290lfd.88.2020.05.18.01.25.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 01:25:05 -0700 (PDT)
From: Denis Kirjanov <kda@linux-powerpc.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] public/io/netif.h: add a new extra type for XDP
Date: Mon, 18 May 2020 11:24:45 +0300
Message-Id: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
X-Mailer: git-send-email 1.8.3.1
X-Mailman-Approved-At: Mon, 18 May 2020 08:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The patch adds a new extra type to make it possible to differentiate
between RX responses on the xen-netfront side with the adjusted offset
required for XDP processing.

For Linux the offset value is going to be passed via xenstore.

Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
---
 xen/include/public/io/netif.h | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index 9fcf91a..759c88a 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -985,7 +985,8 @@ typedef struct netif_tx_request netif_tx_request_t;
 #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
-#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
+#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
+#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
 
 /* netif_extra_info_t flags. */
 #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
@@ -1018,6 +1019,10 @@ struct netif_extra_info {
             uint8_t algorithm;
             uint8_t value[4];
         } hash;
+        struct {
+            uint16_t headroom;
+            uint32_t pad;
+        } xdp;
         uint16_t pad[3];
     } u;
 };
-- 
1.8.3.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 08:33:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 08:33:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jabDR-0006qA-NE; Mon, 18 May 2020 08:33:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/+tu=7A=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jabDQ-0006q5-5J
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 08:33:40 +0000
X-Inumbo-ID: 4565dfc2-98e2-11ea-a841-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4565dfc2-98e2-11ea-a841-12813bfff9fa;
 Mon, 18 May 2020 08:33:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RhCPo2G5xnrshzZHN/F6+mQVwMUA8xrhcFyLYhc6USE=; b=kwsfwk3doSQJTs0PxwlFK7nWxx
 FvYXJwR5YiWIfOQH1L24HYZRiHaVNdrfBE7YaD7ASoyiGXQzthciJwz2x5lCxjZis7Uo5S/8xMoHN
 jp4KpHKi4PBRjti+w4w3d2Get7LWM1gYjkLxRDTUYGbOM4OLVYdKEACMeE08unOCcztU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jabDN-0002fJ-BL; Mon, 18 May 2020 08:33:37 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jabDN-0007em-49; Mon, 18 May 2020 08:33:37 +0000
Subject: Re: [PATCH] optee: immediately free buffers that are released by
 OP-TEE
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20200506014246.3397490-1-volodymyr_babchuk@epam.com>
 <51b8c855-5e94-2829-a703-d43c84948120@xen.org>
 <878c09ec58b9c9bef81497fa7e7a0ac47ddd8f21.camel@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <15c29139-e790-e085-9d86-3f806cd19f3a@xen.org>
Date: Mon, 18 May 2020 09:33:35 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <878c09ec58b9c9bef81497fa7e7a0ac47ddd8f21.camel@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 18/05/2020 03:04, Volodymyr Babchuk wrote:
> Hi Julien,

Hi,

> On Mon, 2020-05-11 at 10:34 +0100, Julien Grall wrote:
>> Hi Volodymyr,
>>
>> On 06/05/2020 02:44, Volodymyr Babchuk wrote:
>>> Normal World can share buffer with OP-TEE for two reasons:
>>> 1. Some client application wants to exchange data with TA
>>> 2. OP-TEE asks for shared buffer for internal needs
>>>
>>> The second case was handled more strictly than necessary:
>>>
>>> 1. In RPC request OP-TEE asks for buffer
>>> 2. NW allocates buffer and provides it via RPC response
>>> 3. Xen pins pages and translates data
>>> 4. Xen provides buffer to OP-TEE
>>> 5. OP-TEE uses it
>>> 6. OP-TEE sends request to free the buffer
>>> 7. NW frees the buffer and sends the RPC response
>>> 8. Xen unpins pages and forgets about the buffer
>>>
>>> The problem is that Xen should forget about buffer in between stages 6
>>> and 7. I.e. the right flow should be like this:
>>>
>>> 6. OP-TEE sends request to free the buffer
>>> 7. Xen unpins pages and forgets about the buffer
>>> 8. NW frees the buffer and sends the RPC response
>>>
>>> This is because OP-TEE internally frees the buffer before sending the
>>> "free SHM buffer" request. So we have no reason to hold reference for
>>> this buffer anymore. Moreover, in multiprocessor systems NW has time
>>> to reuse the buffer cookie for another buffer. Xen complained about
>>> this and denied the new buffer registration. I have seen this issue
>>> while running tests on an iMX SoC.
>>> running tests on iMX SoC.
>>>
>>> So, this patch basically corrects that behavior by freeing the buffer
>>> earlier, when handling RPC return from OP-TEE.
>>>
>>> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
>>> ---
>>>    xen/arch/arm/tee/optee.c | 24 ++++++++++++++++++++----
>>>    1 file changed, 20 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
>>> index 6a035355db..af19fc31f8 100644
>>> --- a/xen/arch/arm/tee/optee.c
>>> +++ b/xen/arch/arm/tee/optee.c
>>> @@ -1099,6 +1099,26 @@ static int handle_rpc_return(struct optee_domain *ctx,
>>>            if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_ALLOC )
>>>                call->rpc_buffer_type = shm_rpc->xen_arg->params[0].u.value.a;
>>>    
>>> +        /*
>>> +         * OP-TEE signals that it frees the buffer that it requested
>>> +         * before. This is the right for us to do the same.
>>> +         */
>>> +        if ( shm_rpc->xen_arg->cmd == OPTEE_RPC_CMD_SHM_FREE )
>>> +        {
>>> +            uint64_t cookie = shm_rpc->xen_arg->params[0].u.value.b;
>>> +
>>> +            free_optee_shm_buf(ctx, cookie);
>>> +
>>> +            /*
>>> +             * This should never happen. We have a bug either in the
>>> +             * OP-TEE or in the mediator.
>>> +             */
>>> +            if ( call->rpc_data_cookie && call->rpc_data_cookie != cookie )
>>> +                gprintk(XENLOG_ERR,
>>> +                        "Saved RPC cookie does not corresponds to OP-TEE's (%"PRIx64" != %"PRIx64")\n",
>>
>> s/corresponds/correspond/
> Will fix in the next version.
> 
>>> +                        call->rpc_data_cookie, cookie);
>>
>> IIUC, if you free the wrong SHM buffer then your guest is likely to be
>> running incorrectly afterwards. So shouldn't we crash the guest to avoid
>> further issue?
>>
> 
> Well, we freed the exact buffer that OP-TEE asked us to free. So the
> guest didn't do anything bad. Moreover, the OP-TEE driver in the Linux
> kernel does not have a similar check, so it will free this buffer without
> any complaints. I'm just being overcautious here. Thus, I see no reason
> to crash the guest.

My point is not whether the guest did anything bad but whether 
acknowledging a bug and continuing like nothing happened is the right 
thing to do.

I can't judge whether the bug is critical enough. However, I don't 
consider a single message on the console to be sufficient in the case of 
a bug. It is likely to be missed, and it may cause side effects which are 
only noticed long afterwards. The amount of debugging required to trace 
them back to the original problem may be considerable.

The first suggestion would be to expand your comment and explain why it 
is fine to continue.

Secondly, if it is considered safe to continue but still needs attention, 
then I would suggest adding a WARN() to make it easier to spot in the log.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 18 08:34:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 08:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jabEG-0006tU-0n; Mon, 18 May 2020 08:34:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k4Zq=7A=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jabEF-0006tL-Eu
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 08:34:31 +0000
X-Inumbo-ID: 643dc3ba-98e2-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 643dc3ba-98e2-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 08:34:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 48E7AAB99;
 Mon, 18 May 2020 08:34:32 +0000 (UTC)
Subject: Re: [PATCH] public/io/netif.h: add a new extra type for XDP
To: Denis Kirjanov <kda@linux-powerpc.org>, xen-devel@lists.xenproject.org
References: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <bbd8a83a-a676-9fa1-b03b-526e5122966f@suse.com>
Date: Mon, 18 May 2020 10:34:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.20 10:24, Denis Kirjanov wrote:
> The patch adds a new extra type to be able to differentiate
> between RX responses on the xen-netfront side, with the adjusted offset
> required for XDP processing.
> 
> For Linux the offset value is going to be passed via xenstore.

Why? I can only see disadvantages in using a different communication
mechanism.

> 
> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
> ---
>   xen/include/public/io/netif.h | 7 ++++++-
>   1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> index 9fcf91a..759c88a 100644
> --- a/xen/include/public/io/netif.h
> +++ b/xen/include/public/io/netif.h
> @@ -985,7 +985,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>   #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>   #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>   #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>   
>   /* netif_extra_info_t flags. */
>   #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
> @@ -1018,6 +1019,10 @@ struct netif_extra_info {
>               uint8_t algorithm;
>               uint8_t value[4];
>           } hash;
> +        struct {
> +            uint16_t headroom;
> +            uint32_t pad;

Please use uint16_t pad[2] in order to avoid padding holes.

Additionally, you are missing the addition of the related feature's
Xenstore nodes in the comment area further up in the same file.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 18 09:14:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 09:14:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jabqu-0001qo-HX; Mon, 18 May 2020 09:14:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jabqt-0001qj-4K
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 09:14:27 +0000
X-Inumbo-ID: f7cea4fa-98e7-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7cea4fa-98e7-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 09:14:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=SIan2AEefCFmYE97Q4hMv4MpFW6fzRbpawNo936FxQ4=; b=WsFXLzm+3xlJ2xZclAUikKwiw
 YC46KBovkv/PRkn+TJpRxYbJrTn1ww6K69QBm75zFnieyJMhRW7ntjKljH1vHfyiIhXNPAxaFyu7G
 50vboBi/XllitEx44qTQjAtl68sxqqF7DoERp9cFcTEI347+JZMzwYtgJPJpbEa4HKEMc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jabqq-0003VX-KH; Mon, 18 May 2020 09:14:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jabqq-0000lX-BV; Mon, 18 May 2020 09:14:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jabqq-000461-Ai; Mon, 18 May 2020 09:14:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150226-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150226: regressions - FAIL
X-Osstest-Failures: linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:regression
 linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=9b1f2cbdb6d3062c468d3f9b579501f0f7ce330b
X-Osstest-Versions-That: linux=5a9ffb954a3933d7867f4341684a23e008d6839b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 09:14:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150226 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150226/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt      7 xen-boot                 fail REGR. vs. 150225

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 150225

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150225
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150225
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150225
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150225
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150225
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150225
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150225
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150225
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                9b1f2cbdb6d3062c468d3f9b579501f0f7ce330b
baseline version:
 linux                5a9ffb954a3933d7867f4341684a23e008d6839b

Last test of basis   150225  2020-05-17 11:37:25 Z    0 days
Testing same since   150226  2020-05-17 22:39:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Andrey Konovalov <andreyknvl@google.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Borislav Petkov <bp@suse.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Eric W. Biederman <ebiederm@xmission.com>
  Eugeniu Rosca <erosca@de.adit-jv.com>
  Felipe Balbi <balbi@kernel.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Heiko Stuebner <heiko@sntech.de>
  Jason Yan <yanaijie@huawei.com>
  John Stultz <john.stultz@linaro.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Justin Swartz <justin.swartz@risingedge.co.za>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kyungtae Kim <kt0755@gmail.com>
  Li Jun <jun.li@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Marc Zyngier <maz@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Prashant Malani <pmalani@chromium.org>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Samuel Zou <zou_wei@huawei.com>
  Sriharsha Allenki <sallenki@codeaurora.org>
  Stephen Boyd <sboyd@kernel.org>
  Tero Kristo <t-kristo@ti.com>
  Thierry Reding <treding@nvidia.com>
  Tony Lindgren <tony@atomide.com>
  Wei Yongjun <weiyongjun1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1240 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 18 09:45:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 09:45:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jacK0-0004L0-4u; Mon, 18 May 2020 09:44:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjRX=7A=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jacJy-0004Kv-QT
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 09:44:30 +0000
X-Inumbo-ID: 2b4065cc-98ec-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b4065cc-98ec-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 09:44:29 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: lXYajm9kXJcAINKOW5xAwAaU9x4ej7e+NuolY1beqoZYRnqe/mZXg7dE1Im2MNKdAK7AfxZZhU
 jQojjL5CSOFyn54+uVJaac72yXvHcPqt/VK4TZyB+TWl88v5UA15cFG2jUveR8EYkMnl0i6qYo
 ZxAXte3B1nTgmb8A4ZU4iTxU8FIEOrKpz84l2obmRuODKSj/CsW/MHdbTAFQ1/bxJIfHtvKMW9
 +HlusUmOmy+8mS5poPzQ6pCkGTpc8K8zAb8pj5PQOD22zBPH3HshlIACVmFl7+PvzTHnPyYPTF
 cik=
X-SBRS: 2.7
X-MesageID: 18463980
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,406,1583211600"; d="scan'208";a="18463980"
Date: Mon, 18 May 2020 10:44:26 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
Subject: Re: [XEN PATCH v2] xen/build: use the correct kconfig makefile
Message-ID: <20200518094426.GN2116@perard.uk.xensource.com>
References: <20200512175206.20314-1-stewart.hildebrand@dornerworks.com>
 <20200515182509.5476-1-stewart.hildebrand@dornerworks.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200515182509.5476-1-stewart.hildebrand@dornerworks.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 15, 2020 at 02:25:09PM -0400, Stewart Hildebrand wrote:
> This resolves the following observed error during config merge:
> 
> /bin/sh /path/to/xen/xen/../xen/tools/kconfig/merge_config.sh -m .config /path/to/xen/xen/../xen/arch/arm/configs/custom.config
> Using .config as base
> Merging /path/to/xen/xen/../xen/arch/arm/configs/custom.config
> #
> # merged configuration written to .config (needs make)
> #
> make -f /path/to/xen/xen/../xen/Makefile olddefconfig
> make[2]: Entering directory '/path/to/xen/xen'
> make[2]: *** No rule to make target 'olddefconfig'.  Stop.
> make[2]: Leaving directory '/path/to/xen/xen'
> tools/kconfig/Makefile:95: recipe for target 'custom.config' failed
> 
> The build was invoked by first doing a defconfig (which succeeded):
> 
> $ make -C xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- defconfig
> 
> Followed by the config fragment merge command (which failed before this patch):
> 
> $ cat > xen/arch/arm/configs/custom.config <<EOF
> CONFIG_DEBUG=y
> CONFIG_EARLY_PRINTK_ZYNQMP=y
> EOF
> $ make -C xen XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- custom.config
> 
> Signed-off-by: Stewart Hildebrand <stewart.hildebrand@dornerworks.com>
> 
> ---
> v2: updated commit message

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon May 18 09:53:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 09:53:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jacSC-0005DF-0h; Mon, 18 May 2020 09:53:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPYZ=7A=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jacSA-0005DA-L3
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 09:52:58 +0000
X-Inumbo-ID: 59ef185e-98ed-11ea-ae69-bc764e2007e4
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59ef185e-98ed-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 09:52:57 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id x1so8305170ejd.8
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 02:52:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=mime-version:in-reply-to:references:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=wlvXmRK547HKma4/tSG5+s6wQquNKkAgiXenRCOwyvU=;
 b=ATAngUCpF0JT8O+foKhMGeC513W5YVxEkqRyzeLl1b4i2cqRtnYdA/MxUOipnDGfa9
 98p8e8YuJgCXrxzwP3Pt2I+uWDPiiFqEFCglZn7enQ2j3bcpilbc7EwmFGQDaxl3tIHq
 +ZSoAN21T1xfAt+ZESdOIQB0aCIkOg5HDN1DR3GathzJN2X1rD7bzrnDQO8YYt+BhM/0
 jKlkkiNVzxREdsx2Om/lhrddk3V8MPTkjeQp4IjJYHd7RV0gbCtvq9JAg26q3OdiGL3/
 IXWXriaZxhQS152DQ7LWi5bmbCZFw/twYI/W9KfH5HB7vGJC1q6GPETIKgn+qn4jQE1s
 xIkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:in-reply-to:references:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=wlvXmRK547HKma4/tSG5+s6wQquNKkAgiXenRCOwyvU=;
 b=cB5JOCKnXZQMPv7cXtpR7QROWShvQTaReCMrBrZwATaPDdjm/gmNSpz4ZmvaC0uXjC
 FeKvd+jT3eenvH6DTkTOyHRemRpdCplSq4pftVB7Oilfn9otaXentLBSmy8YYp+3Gmg5
 OD5lMMriZ+hs16dnY2PbwqoUwwdHamekUBQysEt96SHwaXTJ+s1a5dcejg6+MJDyjk6z
 whQ9cYa3N33v7xwLRRvgL/kf+8KbwVwlHun5Ez91M5tG//o+mY+5v9A11ffu7qnpqW+b
 1ghvHaPogpO5nZwS6AI7dSxvcwWLsb3JfGs/HmPGwUYJXGUQC6TK96C5firgu39EyOuu
 GY3g==
X-Gm-Message-State: AOAM532W/W5TKS543BH+Bo+kvg5xEWlf5etAV9KCPbCSIkHVLhN8C49q
 NFDj6vAc2YUhQzs3/gj+hDH36cT/44uw9wBBhDUv0A==
X-Google-Smtp-Source: ABdhPJzotjNJfhl9+jFaNCvkqmqJ+9X05lWSIqXzTZTpY1eYYfRCuna7yD9Y8IePMfLZbeOnDIIfmXKT7AIX4Kw2s8M=
X-Received: by 2002:a17:906:bce6:: with SMTP id
 op6mr14274713ejb.337.1589795576762; 
 Mon, 18 May 2020 02:52:56 -0700 (PDT)
MIME-Version: 1.0
Received: by 2002:a50:34e:0:0:0:0:0 with HTTP;
 Mon, 18 May 2020 02:52:56 -0700 (PDT)
X-Originating-IP: [5.35.46.227]
In-Reply-To: <bbd8a83a-a676-9fa1-b03b-526e5122966f@suse.com>
References: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
 <bbd8a83a-a676-9fa1-b03b-526e5122966f@suse.com>
From: Denis Kirjanov <kda@linux-powerpc.org>
Date: Mon, 18 May 2020 12:52:56 +0300
Message-ID: <CAOJe8K2pUv4Upvxya3LFPj0CxZ1-_hDZcrv-r6Q2EaxC8Ym6Ow@mail.gmail.com>
Subject: Re: [PATCH] public/io/netif.h: add a new extra type for XDP
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/18/20, Jürgen Groß <jgross@suse.com> wrote:
> On 18.05.20 10:24, Denis Kirjanov wrote:
>> The patch adds a new extra type to be able to differentiate
>> between RX responses on xen-netfront side with the adjusted offset
>> required for XDP processing.
>>
>> For Linux the offset value is going to be passed via xenstore.
>
> Why? I can only see disadvantages in using a different communication
> mechanism.
I see it like other features passed through xenstore, and it requires
fewer changes to other structures with the same final result.

>
>>
>> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
>> ---
>>   xen/include/public/io/netif.h | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/include/public/io/netif.h
>> b/xen/include/public/io/netif.h
>> index 9fcf91a..759c88a 100644
>> --- a/xen/include/public/io/netif.h
>> +++ b/xen/include/public/io/netif.h
>> @@ -985,7 +985,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>>   #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>>   #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>>   #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
>> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
>> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
>> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>>
>>   /* netif_extra_info_t flags. */
>>   #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
>> @@ -1018,6 +1019,10 @@ struct netif_extra_info {
>>               uint8_t algorithm;
>>               uint8_t value[4];
>>           } hash;
>> +        struct {
>> +            uint16_t headroom;
>> +            uint32_t pad;
>
> Please use uint16_t pad[2] in order to avoid padding holes.
>
> Additionally you are missing the addition of the related feature
> Xenstore nodes in the comment area further up in the same file.

Done.

>
>
> Juergen
>


From xen-devel-bounces@lists.xenproject.org Mon May 18 09:53:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 09:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jacSZ-0005FP-9P; Mon, 18 May 2020 09:53:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPYZ=7A=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jacSY-0005FC-28
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 09:53:22 +0000
X-Inumbo-ID: 67fdf1cc-98ed-11ea-9887-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67fdf1cc-98ed-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 09:53:21 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id l19so9134437lje.10
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 02:53:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=from:to:cc:subject:date:message-id;
 bh=mgqN/92BYx5OUANjC5x6lp8VHURe4GtJqolrmxqbSTo=;
 b=L2/RqyeZqbYfh4n0vK+d8LjDXWRDN3mJ7TKrKBfO2X8olzl+rGsXSQ4KhLpb5RVJsR
 URGBpOIuoehslCloSrsne3Fs83ZmJu7bhE1Axt1lG+gXz1NXKHrtgFJRhyr6XIUobp1A
 Pa/ymv0M3iFTD5s2F6cAEJYidF8uGh36TA95mWBbUp6L7yIOnose4Drftp+HSYNJwKt4
 yAuAXqUex3YduP3VJ4kwKvBH6gqUN8baNUJ1sJuCCeTxLZoP8APQPwh9Tn0SPh9/vao3
 YZykrykNdQhwohuKuCSwcfuKMP8BsuVmnGXyZMoDyvPpihho0heFAPuO/eoiOVvq7fm9
 A0Vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=mgqN/92BYx5OUANjC5x6lp8VHURe4GtJqolrmxqbSTo=;
 b=RuEUSYbGWSrpiDtRl7DZJUA4jGvRqno7JZIvmHR5g54QQ/UYP3cVRKN1275NRHg880
 2chz6i4ZjExJEqhRAR5QAD624LdneKOsWZkpA3ovK+zh4+xzUKwhJASh5pTa1pg+gr79
 5IxQxY2WGDlImCxM4XTtu3cfhvoPAx/o4+67nz+diMKxOzYTDrguFDJAkONB77UjSP1/
 FJif/l9X6e15meH35T/0n3SbDEUf8AKUXxsFT80JeWBDn16sx50WxA5/3VO6GaQczX2Q
 cP+CPxvPD76u41JCx8uQzLfp3MUR5DjPxE9LI/2jhFgqJ5Jz9+o71hQl678/IkwZqC8T
 bVJA==
X-Gm-Message-State: AOAM530/Ta9v4/oY7IiJS1T9TIGkUiPT2ziBeq+gGa6Y8NA8w9hYJsNU
 8FVm6dizyYsMjAOkJeXwUAty1pOhdC61JA==
X-Google-Smtp-Source: ABdhPJzhi7UgbaxRXZMUkMLtSv1/OAjHY+8i0X8mIM1HrDzfsaMEecUagR5WJE/BeYoieiL6nYhPuw==
X-Received: by 2002:a2e:978d:: with SMTP id y13mr9891908lji.80.1589795600060; 
 Mon, 18 May 2020 02:53:20 -0700 (PDT)
Received: from centos7-pv-guest.localdomain ([5.35.46.227])
 by smtp.gmail.com with ESMTPSA id c11sm2166231lji.17.2020.05.18.02.53.19
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 02:53:19 -0700 (PDT)
From: Denis Kirjanov <kda@linux-powerpc.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2] public/io/netif.h: add a new extra type for XDP
Date: Mon, 18 May 2020 12:53:13 +0300
Message-Id: <1589795593-1544-1-git-send-email-kda@linux-powerpc.org>
X-Mailer: git-send-email 1.8.3.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The patch adds a new extra type to be able to differentiate
between RX responses on the xen-netfront side with the adjusted offset
required for XDP processing.

For Linux the offset value is going to be passed via xenstore.

Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>

v2:
- added documentation
- fixed padding for netif_extra_info
---
 xen/include/public/io/netif.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index 9fcf91a..ec56a15 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -161,6 +161,13 @@
  */
 
 /*
+ * "netfront-xdp-headroom" is used to add an extra space before packet data
+ * for XDP processing. The value is passed by the frontend to be consistent
+ * between both ends. If the value is greater than zero that means that
+ * an RX response is going to be passed to an XDP program for processing.
+ */
+
+/*
  * Control ring
  * ============
  *
@@ -985,7 +992,8 @@ typedef struct netif_tx_request netif_tx_request_t;
 #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
-#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
+#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
+#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
 
 /* netif_extra_info_t flags. */
 #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
@@ -1018,6 +1026,10 @@ struct netif_extra_info {
             uint8_t algorithm;
             uint8_t value[4];
         } hash;
+        struct {
+            uint16_t headroom;
+            uint16_t pad[2];
+        } xdp;
         uint16_t pad[3];
     } u;
 };
-- 
1.8.3.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 10:28:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 10:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jad05-0007wH-0Y; Mon, 18 May 2020 10:28:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k4Zq=7A=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jad03-0007wC-K6
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 10:27:59 +0000
X-Inumbo-ID: 3e46f0fe-98f2-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e46f0fe-98f2-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 10:27:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6EA6BAA4F;
 Mon, 18 May 2020 10:28:00 +0000 (UTC)
Subject: Re: [PATCH] public/io/netif.h: add a new extra type for XDP
To: Denis Kirjanov <kda@linux-powerpc.org>
References: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
 <bbd8a83a-a676-9fa1-b03b-526e5122966f@suse.com>
 <CAOJe8K2pUv4Upvxya3LFPj0CxZ1-_hDZcrv-r6Q2EaxC8Ym6Ow@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2615d770-c17b-26e0-4686-852ca122eeb4@suse.com>
Date: Mon, 18 May 2020 12:27:56 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CAOJe8K2pUv4Upvxya3LFPj0CxZ1-_hDZcrv-r6Q2EaxC8Ym6Ow@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.20 11:52, Denis Kirjanov wrote:
> On 5/18/20, Jürgen Groß <jgross@suse.com> wrote:
>> On 18.05.20 10:24, Denis Kirjanov wrote:
>>> The patch adds a new extra type to be able to differentiate
>>> between RX responses on xen-netfront side with the adjusted offset
>>> required for XDP processing.
>>>
>>> For Linux the offset value is going to be passed via xenstore.
>>
>> Why? I can only see disadvantages in using a different communication
>> mechanism.
> I see it like other features passed through xenstore, and it requires
> fewer changes to other structures with the same final result.

This is okay as long as there is no Xenstore interaction required once
the interface has been set up completely (i.e. only defining the needed
offset for XDP is fine; enabling/disabling XDP at runtime should not be
done via Xenstore IMO).

And please, no guest type special casing. Please replace "Linux" by e.g.
"The guest" (with additional tweaking of the following sentence).


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 18 10:31:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 10:31:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jad3F-0000G6-Fw; Mon, 18 May 2020 10:31:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFi1=7A=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jad3E-0000G1-9z
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 10:31:16 +0000
X-Inumbo-ID: b3c9b686-98f2-11ea-ae69-bc764e2007e4
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3c9b686-98f2-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 10:31:15 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id s8so11189309wrt.9
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 03:31:15 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=qypNNsoA5wBKPhj7Sf33bS3ovebO9BMY32ZKbRxVRBY=;
 b=Xr6oFU7PwOBkYl1VPkWniVywwc8MFtuzJOabpO3EzcsvCnU1WjzjKR5QMAyJJJVbEQ
 QZFLsM+RxNjxiaV4isF8aTP2GME9zMCGvn/NeLZrH72FaQpmkDmSN4dZLfuYuyZeeZRA
 yvvzw8N/shkS6k8fRRx1sshuzqABrdkpEsLmpP49FI+/f6xgAWutGG+P+AqK3bjod174
 38mWVn89bwAJ8BzCoBg4MxWjQE7qslaGWOuycr9jCeOnzDeWuUzHfjvdUyuJTipVGNLK
 ts56Q2nO38X5CWliCvxkqsCzeUWrTaZ2ZVRO07MPsEoSlPIQ/W1y8+vNypKGUgICggex
 tP8A==
X-Gm-Message-State: AOAM531gnCPhKJKZJfgPLX0Bq/0rqvzwNv7z1TPt5xfVZNby2Iebfzat
 8k3x9P8Jrhg5ordRxeB22ZM=
X-Google-Smtp-Source: ABdhPJxcbk7mn21pDORZuLxyNWw2IKhPO6lSmS9k23Qcp+QsHkMLHjf2zKm7zOLTu0psCCUUN5/t7w==
X-Received: by 2002:a5d:4cc1:: with SMTP id c1mr18409241wrt.32.1589797875067; 
 Mon, 18 May 2020 03:31:15 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id p10sm17171731wra.78.2020.05.18.03.31.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 03:31:14 -0700 (PDT)
Date: Mon, 18 May 2020 10:31:13 +0000
From: Wei Liu <wl@xen.org>
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Subject: Re: [PATCH mini-os] console: add newline at EOF
Message-ID: <20200518103113.v4vik5mcwzuaofxn@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <9d2e445b-0b0f-4e4d-08a8-0f22013f111b@suse.com>
 <20200518075300.t3tvfo7ucbwujmif@function>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200518075300.t3tvfo7ucbwujmif@function>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 09:53:00AM +0200, Samuel Thibault wrote:
> Jan Beulich, on Mon, 18 May 2020 09:09:14 +0200, wrote:
> > Some gcc versions get pretty unhappy without.
> > 
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> That was an easy one :)
> 
> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Applied.


From xen-devel-bounces@lists.xenproject.org Mon May 18 10:34:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 10:34:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jad6P-0000RJ-0O; Mon, 18 May 2020 10:34:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jad6O-0000RD-Dz
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 10:34:32 +0000
X-Inumbo-ID: 282eedc0-98f3-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 282eedc0-98f3-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 10:34:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B67BEAC46;
 Mon, 18 May 2020 10:34:32 +0000 (UTC)
Subject: Re: use of "stat -"
To: Ian Jackson <ian.jackson@citrix.com>, Elliott Mitchell <ehem+xen@m5p.com>
References: <3bfd6384-fcaf-c74a-e560-a35aafa06a43@suse.com>
 <20200512141947.yqx4gmbvqs4grx5g@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <fa507eab-547a-c0fb-9620-825aba5f55b2@suse.com>
 <4b90b635-84bb-e827-d52e-dfe1ebdb4e4d@citrix.com>
 <814db557-4f6a-020d-9f71-4ee3724981e3@suse.com>
 <CAKf6xps0XDRTUJsbE1zHpn=h98yTN+Y1DzaNpVGzhhJGVccRRw@mail.gmail.com>
 <20200512195005.GA96154@mattapan.m5p.com>
 <049e0022-f9c1-6dc9-3360-d25d88eeb97f@citrix.com>
 <20200512225458.GA1530@mattapan.m5p.com>
 <24253.9543.974853.499775@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0b449d5a-9629-8e41-5354-b985a063eba4@suse.com>
Date: Mon, 18 May 2020 12:34:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24253.9543.974853.499775@mariner.uk.xensource.com>
Content-Type: multipart/mixed; boundary="------------5FFB4B7ACC60D3ABD6F0D5D7"
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.
--------------5FFB4B7ACC60D3ABD6F0D5D7
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On 14.05.2020 13:02, Ian Jackson wrote:
> I've read this thread.  Jan, I'm sorry that this causes you
> inconvenience.  I'm hoping it won't come down to a choice between
> supporting people who want to ship a dom0 without perl, and people who
> want a dom0 using more-than-a-decade-old coreutils.
> 
> Jan, can you tell me what the output is of this on your ancient
> system:
> 
>   $ rm -f t
>   $ >t
>   $ exec 3<t
>   $ stat -L -c '%F %i' /dev/stdin <&3
>   regular empty file 393549
>   $ rm t
>   $ stat -L -c '%F %i' /dev/stdin <&3
>   regular empty file 393549
>   $ strace -ou stat -L -c '%F %i' /dev/stdin <&3
>   $

$ rm -f t
$ >t
$ exec 3<t
$ stat -L -c '%F %i' /dev/stdin <&3
regular empty file 3380369
$ rm t
$ stat -L -c '%F %i' /dev/stdin <&3
regular empty file 3380369
$ strace -ou stat -L -c '%F %i' /dev/stdin <&3
regular empty file 3380369

> Also, the contents of the file "u" afterwards, please.

Attached.

Thanks for looking into this, Jan

--------------5FFB4B7ACC60D3ABD6F0D5D7
Content-Type: text/plain; charset=UTF-8;
 name="u"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="u"

ZXhlY3ZlKCIvdXNyL2Jpbi9zdGF0IiwgWyJzdGF0IiwgIi1MIiwgIi1jIiwgIiVGICVpIiwg
Ii9kZXYvc3RkaW4iXSwgWy8qIDg5IHZhcnMgKi9dKSA9IDAKYnJrKDApICAgICAgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgID0gMHg4YmQwMDAKbW1hcChOVUxMLCA0MDk2LCBQUk9U
X1JFQUR8UFJPVF9XUklURSwgTUFQX1BSSVZBVEV8TUFQX0FOT05ZTU9VUywgLTEsIDApID0g
MHg3ZmEzMzJjODkwMDAKYWNjZXNzKCIvZXRjL2xkLnNvLnByZWxvYWQiLCBSX09LKSAgICAg
ID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL2xv
Y2FsL2xpYjY0L3Rscy94ODZfNjQvbGlic2VsaW51eC5zby4xIiwgT19SRE9OTFkpID0gLTEg
RU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpzdGF0KCIvdXNyL2xvY2FsL2xp
YjY0L3Rscy94ODZfNjQiLCAweDdmZmZlMmI2YWQ3MCkgPSAtMSBFTk9FTlQgKE5vIHN1Y2gg
ZmlsZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3IvbG9jYWwvbGliNjQvdGxzL2xpYnNlbGlu
dXguc28uMSIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVj
dG9yeSkKc3RhdCgiL3Vzci9sb2NhbC9saWI2NC90bHMiLCAweDdmZmZlMmI2YWQ3MCkgPSAt
MSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3IvbG9jYWwv
bGliNjQveDg2XzY0L2xpYnNlbGludXguc28uMSIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAo
Tm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKc3RhdCgiL3Vzci9sb2NhbC9saWI2NC94ODZf
NjQiLCAweDdmZmZlMmI2YWQ3MCkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJl
Y3RvcnkpCm9wZW4oIi91c3IvbG9jYWwvbGliNjQvbGlic2VsaW51eC5zby4xIiwgT19SRE9O
TFkpID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpzdGF0KCIvdXNy
L2xvY2FsL2xpYjY0Iiwge3N0X21vZGU9U19JRkRJUnwwNzU1LCBzdF9zaXplPTQwOTYsIC4u
Ln0pID0gMApvcGVuKCIvdXNyL2xvY2FsL2xpYi90bHMveDg2XzY0L2xpYnNlbGludXguc28u
MSIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkK
c3RhdCgiL3Vzci9sb2NhbC9saWIvdGxzL3g4Nl82NCIsIDB4N2ZmZmUyYjZhZDcwKSA9IC0x
IEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKb3BlbigiL3Vzci9sb2NhbC9s
aWIvdGxzL2xpYnNlbGludXguc28uMSIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3Vj
aCBmaWxlIG9yIGRpcmVjdG9yeSkKc3RhdCgiL3Vzci9sb2NhbC9saWIvdGxzIiwgMHg3ZmZm
ZTJiNmFkNzApID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVu
KCIvdXNyL2xvY2FsL2xpYi94ODZfNjQvbGlic2VsaW51eC5zby4xIiwgT19SRE9OTFkpID0g
LTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpzdGF0KCIvdXNyL2xvY2Fs
L2xpYi94ODZfNjQiLCAweDdmZmZlMmI2YWQ3MCkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmls
ZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3IvbG9jYWwvbGliL2xpYnNlbGludXguc28uMSIs
IE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKc3Rh
dCgiL3Vzci9sb2NhbC9saWIiLCB7c3RfbW9kZT1TX0lGRElSfDA3NTUsIHN0X3NpemU9NDA5
NiwgLi4ufSkgPSAwCm9wZW4oIi9vcHQvaW50ZWwvY29tcGlsZXItOC4xLTAyNi9saWIvdGxz
L3g4Nl82NC9saWJzZWxpbnV4LnNvLjEiLCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1
Y2ggZmlsZSBvciBkaXJlY3RvcnkpCnN0YXQoIi9vcHQvaW50ZWwvY29tcGlsZXItOC4xLTAy
Ni9saWIvdGxzL3g4Nl82NCIsIDB4N2ZmZmUyYjZhZDcwKSA9IC0xIEVOT0VOVCAoTm8gc3Vj
aCBmaWxlIG9yIGRpcmVjdG9yeSkKb3BlbigiL29wdC9pbnRlbC9jb21waWxlci04LjEtMDI2
L2xpYi90bHMvbGlic2VsaW51eC5zby4xIiwgT19SRE9OTFkpID0gLTEgRU5PRU5UIChObyBz
dWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpzdGF0KCIvb3B0L2ludGVsL2NvbXBpbGVyLTguMS0w
MjYvbGliL3RscyIsIDB4N2ZmZmUyYjZhZDcwKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxl
IG9yIGRpcmVjdG9yeSkKb3BlbigiL29wdC9pbnRlbC9jb21waWxlci04LjEtMDI2L2xpYi94
ODZfNjQvbGlic2VsaW51eC5zby4xIiwgT19SRE9OTFkpID0gLTEgRU5PRU5UIChObyBzdWNo
IGZpbGUgb3IgZGlyZWN0b3J5KQpzdGF0KCIvb3B0L2ludGVsL2NvbXBpbGVyLTguMS0wMjYv
bGliL3g4Nl82NCIsIDB4N2ZmZmUyYjZhZDcwKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxl
IG9yIGRpcmVjdG9yeSkKb3BlbigiL29wdC9pbnRlbC9jb21waWxlci04LjEtMDI2L2xpYi9s
aWJzZWxpbnV4LnNvLjEiLCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBv
ciBkaXJlY3RvcnkpCnN0YXQoIi9vcHQvaW50ZWwvY29tcGlsZXItOC4xLTAyNi9saWIiLCAw
eDdmZmZlMmI2YWQ3MCkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3Rvcnkp
Cm9wZW4oIi9ldGMvbGQuc28uY2FjaGUiLCBPX1JET05MWSkgICAgICA9IDQKZnN0YXQoNCwg
e3N0X21vZGU9U19JRlJFR3wwNjQ0LCBzdF9zaXplPTkzNTk4LCAuLi59KSA9IDAKbW1hcChO
VUxMLCA5MzU5OCwgUFJPVF9SRUFELCBNQVBfUFJJVkFURSwgNCwgMCkgPSAweDdmYTMzMmM3
MjAwMApjbG9zZSg0KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAwCm9wZW4o
Ii9saWI2NC9saWJzZWxpbnV4LnNvLjEiLCBPX1JET05MWSkgPSA0CnJlYWQoNCwgIlwxNzdF
TEZcMlwxXDFcMFwwXDBcMFwwXDBcMFwwXDBcM1wwPlwwXDFcMFwwXDBcMF9cMFwwXDBcMFww
XDAiLi4uLCA4MzIpID0gODMyCmZzdGF0KDQsIHtzdF9tb2RlPVNfSUZSRUd8MDc1NSwgc3Rf
c2l6ZT0xMTgwODAsIC4uLn0pID0gMAptbWFwKE5VTEwsIDIyMTc3NjgsIFBST1RfUkVBRHxQ
Uk9UX0VYRUMsIE1BUF9QUklWQVRFfE1BUF9ERU5ZV1JJVEUsIDQsIDApID0gMHg3ZmEzMzI4
NGUwMDAKZmFkdmlzZTY0KDQsIDAsIDIyMTc3NjgsIFBPU0lYX0ZBRFZfV0lMTE5FRUQpID0g
MAptcHJvdGVjdCgweDdmYTMzMjg2YTAwMCwgMjA5MzA1NiwgUFJPVF9OT05FKSA9IDAKbW1h
cCgweDdmYTMzMmE2OTAwMCwgODE5MiwgUFJPVF9SRUFEfFBST1RfV1JJVEUsIE1BUF9QUklW
QVRFfE1BUF9GSVhFRHxNQVBfREVOWVdSSVRFLCA0LCAweDFiMDAwKSA9IDB4N2ZhMzMyYTY5
MDAwCm1tYXAoMHg3ZmEzMzJhNmIwMDAsIDE4MzIsIFBST1RfUkVBRHxQUk9UX1dSSVRFLCBN
QVBfUFJJVkFURXxNQVBfRklYRUR8TUFQX0FOT05ZTU9VUywgLTEsIDApID0gMHg3ZmEzMzJh
NmIwMDAKY2xvc2UoNCkgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID0gMApvcGVu
KCIvdXNyL2xvY2FsL2xpYjY0L2xpYmMuc28uNiIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAo
Tm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKb3BlbigiL3Vzci9sb2NhbC9saWIvbGliYy5z
by42IiwgT19SRE9OTFkpID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5
KQpvcGVuKCIvbGliNjQvbGliYy5zby42IiwgT19SRE9OTFkpICAgICAgPSA0CnJlYWQoNCwg
IlwxNzdFTEZcMlwxXDFcMFwwXDBcMFwwXDBcMFwwXDBcM1wwPlwwXDFcMFwwXDBAXDM1NVwx
XDBcMFwwXDBcMCIuLi4sIDgzMikgPSA4MzIKZnN0YXQoNCwge3N0X21vZGU9U19JRlJFR3ww
NzU1LCBzdF9zaXplPTE2OTA5OTMsIC4uLn0pID0gMAptbWFwKE5VTEwsIDQwOTYsIFBST1Rf
UkVBRHxQUk9UX1dSSVRFLCBNQVBfUFJJVkFURXxNQVBfQU5PTllNT1VTLCAtMSwgMCkgPSAw
eDdmYTMzMmM3MTAwMAptbWFwKE5VTEwsIDM1NTc0NDgsIFBST1RfUkVBRHxQUk9UX0VYRUMs
IE1BUF9QUklWQVRFfE1BUF9ERU5ZV1JJVEUsIDQsIDApID0gMHg3ZmEzMzI0ZTkwMDAKZmFk
dmlzZTY0KDQsIDAsIDM1NTc0NDgsIFBPU0lYX0ZBRFZfV0lMTE5FRUQpID0gMAptcHJvdGVj
dCgweDdmYTMzMjY0NTAwMCwgMjA5MzA1NiwgUFJPVF9OT05FKSA9IDAKbW1hcCgweDdmYTMz
Mjg0NDAwMCwgMjA0ODAsIFBST1RfUkVBRHxQUk9UX1dSSVRFLCBNQVBfUFJJVkFURXxNQVBf
RklYRUR8TUFQX0RFTllXUklURSwgNCwgMHgxNWIwMDApID0gMHg3ZmEzMzI4NDQwMDAKbW1h
cCgweDdmYTMzMjg0OTAwMCwgMTg1MDQsIFBST1RfUkVBRHxQUk9UX1dSSVRFLCBNQVBfUFJJ
VkFURXxNQVBfRklYRUR8TUFQX0FOT05ZTU9VUywgLTEsIDApID0gMHg3ZmEzMzI4NDkwMDAK
Y2xvc2UoNCkgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID0gMApvcGVuKCIvdXNy
L2xvY2FsL2xpYjY0L2xpYmRsLnNvLjIiLCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1
Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3IvbG9jYWwvbGliL2xpYmRsLnNvLjIi
LCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9w
ZW4oIi9saWI2NC9saWJkbC5zby4yIiwgT19SRE9OTFkpICAgICA9IDQKcmVhZCg0LCAiXDE3
N0VMRlwyXDFcMVwwXDBcMFwwXDBcMFwwXDBcMFwzXDA+XDBcMVwwXDBcMFwzNjBcclwwXDBc
MFwwXDBcMCIuLi4sIDgzMikgPSA4MzIKZnN0YXQoNCwge3N0X21vZGU9U19JRlJFR3wwNzU1
LCBzdF9zaXplPTE5MTQ5LCAuLi59KSA9IDAKbW1hcChOVUxMLCAyMTA5Njk2LCBQUk9UX1JF
QUR8UFJPVF9FWEVDLCBNQVBfUFJJVkFURXxNQVBfREVOWVdSSVRFLCA0LCAwKSA9IDB4N2Zh
MzMyMmU1MDAwCmZhZHZpc2U2NCg0LCAwLCAyMTA5Njk2LCBQT1NJWF9GQURWX1dJTExORUVE
KSA9IDAKbXByb3RlY3QoMHg3ZmEzMzIyZTcwMDAsIDIwOTcxNTIsIFBST1RfTk9ORSkgPSAw
Cm1tYXAoMHg3ZmEzMzI0ZTcwMDAsIDgxOTIsIFBST1RfUkVBRHxQUk9UX1dSSVRFLCBNQVBf
UFJJVkFURXxNQVBfRklYRUR8TUFQX0RFTllXUklURSwgNCwgMHgyMDAwKSA9IDB4N2ZhMzMy
NGU3MDAwCmNsb3NlKDQpICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IDAKbW1h
cChOVUxMLCA0MDk2LCBQUk9UX1JFQUR8UFJPVF9XUklURSwgTUFQX1BSSVZBVEV8TUFQX0FO
T05ZTU9VUywgLTEsIDApID0gMHg3ZmEzMzJjNzAwMDAKbW1hcChOVUxMLCA4MTkyLCBQUk9U
X1JFQUR8UFJPVF9XUklURSwgTUFQX1BSSVZBVEV8TUFQX0FOT05ZTU9VUywgLTEsIDApID0g
MHg3ZmEzMzJjNmUwMDAKYXJjaF9wcmN0bChBUkNIX1NFVF9GUywgMHg3ZmEzMzJjNmU3YTAp
ID0gMAptcHJvdGVjdCgweDdmYTMzMjRlNzAwMCwgNDA5NiwgUFJPVF9SRUFEKSA9IDAKbXBy
b3RlY3QoMHg3ZmEzMzI4NDQwMDAsIDE2Mzg0LCBQUk9UX1JFQUQpID0gMAptcHJvdGVjdCgw
eDdmYTMzMmE2OTAwMCwgNDA5NiwgUFJPVF9SRUFEKSA9IDAKbXByb3RlY3QoMHg2MDkwMDAs
IDQwOTYsIFBST1RfUkVBRCkgICAgID0gMAptcHJvdGVjdCgweDdmYTMzMmM4YTAwMCwgNDA5
NiwgUFJPVF9SRUFEKSA9IDAKbXVubWFwKDB4N2ZhMzMyYzcyMDAwLCA5MzU5OCkgICAgICAg
ICAgID0gMApzdGF0ZnMoIi9zZWxpbnV4Iiwge2ZfdHlwZT0iRVhUMl9TVVBFUl9NQUdJQyIs
IGZfYnNpemU9NDA5NiwgZl9ibG9ja3M9NTEyNjE2NSwgZl9iZnJlZT00NjQxNTIxLCBmX2Jh
dmFpbD00Mzc5NDYxLCBmX2ZpbGVzPTEzMTA3MjAsIGZfZmZyZWU9MTI0MTk3NywgZl9mc2lk
PXstMzA0NzM1NzIwLCAxNDA2OTE3NDk0fSwgZl9uYW1lbGVuPTI1NSwgZl9mcnNpemU9NDA5
Nn0pID0gMApicmsoMCkgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAweDhi
ZDAwMApicmsoMHg4ZGUwMDApICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAweDhkZTAw
MApvcGVuKCIvcHJvYy9maWxlc3lzdGVtcyIsIE9fUkRPTkxZKSAgICAgPSA0CmZzdGF0KDQs
IHtzdF9tb2RlPVNfSUZSRUd8MDQ0NCwgc3Rfc2l6ZT0wLCAuLi59KSA9IDAKbW1hcChOVUxM
LCA0MDk2LCBQUk9UX1JFQUR8UFJPVF9XUklURSwgTUFQX1BSSVZBVEV8TUFQX0FOT05ZTU9V
UywgLTEsIDApID0gMHg3ZmEzMzJjODgwMDAKcmVhZCg0LCAibm9kZXZcdHN5c2ZzXG5ub2Rl
dlx0cm9vdGZzXG5ub2Rldlx0ciIuLi4sIDEwMjQpID0gMjY1CnJlYWQoNCwgIiIsIDEwMjQp
ICAgICAgICAgICAgICAgICAgICAgICA9IDAKY2xvc2UoNCkgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgID0gMAptdW5tYXAoMHg3ZmEzMzJjODgwMDAsIDQwOTYpICAgICAgICAg
ICAgPSAwCm9wZW4oIi91c3IvbGliL2xvY2FsZS9sb2NhbGUtYXJjaGl2ZSIsIE9fUkRPTkxZ
KSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKb3BlbigiL3Vzci9z
aGFyZS9sb2NhbGUvbG9jYWxlLmFsaWFzIiwgT19SRE9OTFkpID0gNApmc3RhdCg0LCB7c3Rf
bW9kZT1TX0lGUkVHfDA2NDQsIHN0X3NpemU9MjUxMiwgLi4ufSkgPSAwCm1tYXAoTlVMTCwg
NDA5NiwgUFJPVF9SRUFEfFBST1RfV1JJVEUsIE1BUF9QUklWQVRFfE1BUF9BTk9OWU1PVVMs
IC0xLCAwKSA9IDB4N2ZhMzMyYzg4MDAwCnJlYWQoNCwgIiMgTG9jYWxlIG5hbWUgYWxpYXMg
ZGF0YSBiYXNlLlxuIyIuLi4sIDQwOTYpID0gMjUxMgpyZWFkKDQsICIiLCA0MDk2KSAgICAg
ICAgICAgICAgICAgICAgICAgPSAwCmNsb3NlKDQpICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICA9IDAKbXVubWFwKDB4N2ZhMzMyYzg4MDAwLCA0MDk2KSAgICAgICAgICAgID0g
MApvcGVuKCIvdXNyL2xpYi9sb2NhbGUvZW5fVVMuVVRGLTgvTENfSURFTlRJRklDQVRJT04i
LCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9w
ZW4oIi91c3IvbGliL2xvY2FsZS9lbl9VUy51dGY4L0xDX0lERU5USUZJQ0FUSU9OIiwgT19S
RE9OTFkpID0gNApmc3RhdCg0LCB7c3RfbW9kZT1TX0lGUkVHfDA2NDQsIHN0X3NpemU9Mzcz
LCAuLi59KSA9IDAKbW1hcChOVUxMLCAzNzMsIFBST1RfUkVBRCwgTUFQX1BSSVZBVEUsIDQs
IDApID0gMHg3ZmEzMzJjODgwMDAKY2xvc2UoNCkgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgID0gMApvcGVuKCIvdXNyL2xpYjY0L2djb252L2djb252LW1vZHVsZXMuY2FjaGUi
LCBPX1JET05MWSkgPSA0CmZzdGF0KDQsIHtzdF9tb2RlPVNfSUZSRUd8MDY0NCwgc3Rfc2l6
ZT0yNjA1MCwgLi4ufSkgPSAwCm1tYXAoTlVMTCwgMjYwNTAsIFBST1RfUkVBRCwgTUFQX1NI
QVJFRCwgNCwgMCkgPSAweDdmYTMzMmM4MTAwMApjbG9zZSg0KSAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgPSAwCm9wZW4oIi91c3IvbGliL2xvY2FsZS9lbl9VUy5VVEYtOC9M
Q19NRUFTVVJFTUVOVCIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9y
IGRpcmVjdG9yeSkKb3BlbigiL3Vzci9saWIvbG9jYWxlL2VuX1VTLnV0ZjgvTENfTUVBU1VS
RU1FTlQiLCBPX1JET05MWSkgPSA0CmZzdGF0KDQsIHtzdF9tb2RlPVNfSUZSRUd8MDY0NCwg
c3Rfc2l6ZT0yMywgLi4ufSkgPSAwCm1tYXAoTlVMTCwgMjMsIFBST1RfUkVBRCwgTUFQX1BS
SVZBVEUsIDQsIDApID0gMHg3ZmEzMzJjODAwMDAKY2xvc2UoNCkgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICAgID0gMApvcGVuKCIvdXNyL2xpYi9sb2NhbGUvZW5fVVMuVVRGLTgv
TENfVEVMRVBIT05FIiwgT19SRE9OTFkpID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3Ig
ZGlyZWN0b3J5KQpvcGVuKCIvdXNyL2xpYi9sb2NhbGUvZW5fVVMudXRmOC9MQ19URUxFUEhP
TkUiLCBPX1JET05MWSkgPSA0CmZzdGF0KDQsIHtzdF9tb2RlPVNfSUZSRUd8MDY0NCwgc3Rf
c2l6ZT01OSwgLi4ufSkgPSAwCm1tYXAoTlVMTCwgNTksIFBST1RfUkVBRCwgTUFQX1BSSVZB
VEUsIDQsIDApID0gMHg3ZmEzMzJjN2YwMDAKY2xvc2UoNCkgICAgICAgICAgICAgICAgICAg
ICAgICAgICAgICAgID0gMApvcGVuKCIvdXNyL2xpYi9sb2NhbGUvZW5fVVMuVVRGLTgvTENf
QUREUkVTUyIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVj
dG9yeSkKb3BlbigiL3Vzci9saWIvbG9jYWxlL2VuX1VTLnV0ZjgvTENfQUREUkVTUyIsIE9f
UkRPTkxZKSA9IDQKZnN0YXQoNCwge3N0X21vZGU9U19JRlJFR3wwNjQ0LCBzdF9zaXplPTE1
NSwgLi4ufSkgPSAwCm1tYXAoTlVMTCwgMTU1LCBQUk9UX1JFQUQsIE1BUF9QUklWQVRFLCA0
LCAwKSA9IDB4N2ZhMzMyYzdlMDAwCmNsb3NlKDQpICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICA9IDAKb3BlbigiL3Vzci9saWIvbG9jYWxlL2VuX1VTLlVURi04L0xDX05BTUUi
LCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9w
ZW4oIi91c3IvbGliL2xvY2FsZS9lbl9VUy51dGY4L0xDX05BTUUiLCBPX1JET05MWSkgPSA0
CmZzdGF0KDQsIHtzdF9tb2RlPVNfSUZSRUd8MDY0NCwgc3Rfc2l6ZT03NywgLi4ufSkgPSAw
Cm1tYXAoTlVMTCwgNzcsIFBST1RfUkVBRCwgTUFQX1BSSVZBVEUsIDQsIDApID0gMHg3ZmEz
MzJjN2QwMDAKY2xvc2UoNCkgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgID0gMApv
cGVuKCIvdXNyL2xpYi9sb2NhbGUvZW5fVVMuVVRGLTgvTENfUEFQRVIiLCBPX1JET05MWSkg
PSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3IvbGli
L2xvY2FsZS9lbl9VUy51dGY4L0xDX1BBUEVSIiwgT19SRE9OTFkpID0gNApmc3RhdCg0LCB7
c3RfbW9kZT1TX0lGUkVHfDA2NDQsIHN0X3NpemU9MzQsIC4uLn0pID0gMAptbWFwKE5VTEws
IDM0LCBQUk9UX1JFQUQsIE1BUF9QUklWQVRFLCA0LCAwKSA9IDB4N2ZhMzMyYzdjMDAwCmNs
b3NlKDQpICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IDAKb3BlbigiL3Vzci9s
aWIvbG9jYWxlL2VuX1VTLlVURi04L0xDX01FU1NBR0VTIiwgT19SRE9OTFkpID0gLTEgRU5P
RU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL2xpYi9sb2NhbGUv
ZW5fVVMudXRmOC9MQ19NRVNTQUdFUyIsIE9fUkRPTkxZKSA9IDQKZnN0YXQoNCwge3N0X21v
ZGU9U19JRkRJUnwwNzU1LCBzdF9zaXplPTQwOTYsIC4uLn0pID0gMApjbG9zZSg0KSAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAwCm9wZW4oIi91c3IvbGliL2xvY2FsZS9l
bl9VUy51dGY4L0xDX01FU1NBR0VTL1NZU19MQ19NRVNTQUdFUyIsIE9fUkRPTkxZKSA9IDQK
ZnN0YXQoNCwge3N0X21vZGU9U19JRlJFR3wwNjQ0LCBzdF9zaXplPTU3LCAuLi59KSA9IDAK
bW1hcChOVUxMLCA1NywgUFJPVF9SRUFELCBNQVBfUFJJVkFURSwgNCwgMCkgPSAweDdmYTMz
MmM3YjAwMApjbG9zZSg0KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAwCm9w
ZW4oIi91c3IvbGliL2xvY2FsZS9lbl9VUy5VVEYtOC9MQ19NT05FVEFSWSIsIE9fUkRPTkxZ
KSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKb3BlbigiL3Vzci9s
aWIvbG9jYWxlL2VuX1VTLnV0ZjgvTENfTU9ORVRBUlkiLCBPX1JET05MWSkgPSA0CmZzdGF0
KDQsIHtzdF9tb2RlPVNfSUZSRUd8MDY0NCwgc3Rfc2l6ZT0yODYsIC4uLn0pID0gMAptbWFw
KE5VTEwsIDI4NiwgUFJPVF9SRUFELCBNQVBfUFJJVkFURSwgNCwgMCkgPSAweDdmYTMzMmM3
YTAwMApjbG9zZSg0KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAwCm9wZW4o
Ii91c3IvbGliL2xvY2FsZS9lbl9VUy5VVEYtOC9MQ19DT0xMQVRFIiwgT19SRE9OTFkpID0g
LTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL2xpYi9s
b2NhbGUvZW5fVVMudXRmOC9MQ19DT0xMQVRFIiwgT19SRE9OTFkpID0gNApmc3RhdCg0LCB7
c3RfbW9kZT1TX0lGUkVHfDA2NDQsIHN0X3NpemU9MTE2MzY4MiwgLi4ufSkgPSAwCm1tYXAo
TlVMTCwgMTE2MzY4MiwgUFJPVF9SRUFELCBNQVBfUFJJVkFURSwgNCwgMCkgPSAweDdmYTMz
MmI1MTAwMApjbG9zZSg0KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAwCm9w
ZW4oIi91c3IvbGliL2xvY2FsZS9lbl9VUy5VVEYtOC9MQ19USU1FIiwgT19SRE9OTFkpID0g
LTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL2xpYi9s
b2NhbGUvZW5fVVMudXRmOC9MQ19USU1FIiwgT19SRE9OTFkpID0gNApmc3RhdCg0LCB7c3Rf
bW9kZT1TX0lGUkVHfDA2NDQsIHN0X3NpemU9MjQ1NCwgLi4ufSkgPSAwCm1tYXAoTlVMTCwg
MjQ1NCwgUFJPVF9SRUFELCBNQVBfUFJJVkFURSwgNCwgMCkgPSAweDdmYTMzMmM3OTAwMApj
bG9zZSg0KSAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgPSAwCm9wZW4oIi91c3Iv
bGliL2xvY2FsZS9lbl9VUy5VVEYtOC9MQ19OVU1FUklDIiwgT19SRE9OTFkpID0gLTEgRU5P
RU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL2xpYi9sb2NhbGUv
ZW5fVVMudXRmOC9MQ19OVU1FUklDIiwgT19SRE9OTFkpID0gNApmc3RhdCg0LCB7c3RfbW9k
ZT1TX0lGUkVHfDA2NDQsIHN0X3NpemU9NTQsIC4uLn0pID0gMAptbWFwKE5VTEwsIDU0LCBQ
Uk9UX1JFQUQsIE1BUF9QUklWQVRFLCA0LCAwKSA9IDB4N2ZhMzMyYzc4MDAwCmNsb3NlKDQp
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IDAKb3BlbigiL3Vzci9saWIvbG9j
YWxlL2VuX1VTLlVURi04L0xDX0NUWVBFIiwgT19SRE9OTFkpID0gLTEgRU5PRU5UIChObyBz
dWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL2xpYi9sb2NhbGUvZW5fVVMudXRm
OC9MQ19DVFlQRSIsIE9fUkRPTkxZKSA9IDQKZnN0YXQoNCwge3N0X21vZGU9U19JRlJFR3ww
NjQ0LCBzdF9zaXplPTI1NjMyNCwgLi4ufSkgPSAwCm1tYXAoTlVMTCwgMjU2MzI0LCBQUk9U
X1JFQUQsIE1BUF9QUklWQVRFLCA0LCAwKSA9IDB4N2ZhMzMyYjEyMDAwCmNsb3NlKDQpICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICA9IDAKc3RhdCgiL2Rldi9zdGRpbiIsIHtz
dF9tb2RlPVNfSUZSRUd8MDY0NCwgc3Rfc2l6ZT0wLCAuLi59KSA9IDAKb3BlbigiL3Vzci9z
aGFyZS9sb2NhbGUtbGFuZ3BhY2svZW5fVVMuVVRGLTgvTENfTUVTU0FHRVMvY29yZXV0aWxz
Lm1vIiwgT19SRE9OTFkpID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5
KQpvcGVuKCIvdXNyL3NoYXJlL2xvY2FsZS9lbl9VUy5VVEYtOC9MQ19NRVNTQUdFUy9jb3Jl
dXRpbHMubW8iLCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJl
Y3RvcnkpCm9wZW4oIi91c3Ivc2hhcmUvbG9jYWxlLWJ1bmRsZS9lbl9VUy5VVEYtOC9MQ19N
RVNTQUdFUy9jb3JldXRpbHMubW8iLCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2gg
ZmlsZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3Ivc2hhcmUvbG9jYWxlLWxhbmdwYWNrL2Vu
X1VTLnV0ZjgvTENfTUVTU0FHRVMvY29yZXV0aWxzLm1vIiwgT19SRE9OTFkpID0gLTEgRU5P
RU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL3NoYXJlL2xvY2Fs
ZS9lbl9VUy51dGY4L0xDX01FU1NBR0VTL2NvcmV1dGlscy5tbyIsIE9fUkRPTkxZKSA9IC0x
IEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKb3BlbigiL3Vzci9zaGFyZS9s
b2NhbGUtYnVuZGxlL2VuX1VTLnV0ZjgvTENfTUVTU0FHRVMvY29yZXV0aWxzLm1vIiwgT19S
RE9OTFkpID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIv
dXNyL3NoYXJlL2xvY2FsZS1sYW5ncGFjay9lbl9VUy9MQ19NRVNTQUdFUy9jb3JldXRpbHMu
bW8iLCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3Rvcnkp
Cm9wZW4oIi91c3Ivc2hhcmUvbG9jYWxlL2VuX1VTL0xDX01FU1NBR0VTL2NvcmV1dGlscy5t
byIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkK
b3BlbigiL3Vzci9zaGFyZS9sb2NhbGUtYnVuZGxlL2VuX1VTL0xDX01FU1NBR0VTL2NvcmV1
dGlscy5tbyIsIE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVj
dG9yeSkKb3BlbigiL3Vzci9zaGFyZS9sb2NhbGUtbGFuZ3BhY2svZW4uVVRGLTgvTENfTUVT
U0FHRVMvY29yZXV0aWxzLm1vIiwgT19SRE9OTFkpID0gLTEgRU5PRU5UIChObyBzdWNoIGZp
bGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL3NoYXJlL2xvY2FsZS9lbi5VVEYtOC9MQ19N
RVNTQUdFUy9jb3JldXRpbHMubW8iLCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2gg
ZmlsZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3Ivc2hhcmUvbG9jYWxlLWJ1bmRsZS9lbi5V
VEYtOC9MQ19NRVNTQUdFUy9jb3JldXRpbHMubW8iLCBPX1JET05MWSkgPSAtMSBFTk9FTlQg
KE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9wZW4oIi91c3Ivc2hhcmUvbG9jYWxlLWxh
bmdwYWNrL2VuLnV0ZjgvTENfTUVTU0FHRVMvY29yZXV0aWxzLm1vIiwgT19SRE9OTFkpID0g
LTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIvdXNyL3NoYXJl
L2xvY2FsZS9lbi51dGY4L0xDX01FU1NBR0VTL2NvcmV1dGlscy5tbyIsIE9fUkRPTkxZKSA9
IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKb3BlbigiL3Vzci9zaGFy
ZS9sb2NhbGUtYnVuZGxlL2VuLnV0ZjgvTENfTUVTU0FHRVMvY29yZXV0aWxzLm1vIiwgT19S
RE9OTFkpID0gLTEgRU5PRU5UIChObyBzdWNoIGZpbGUgb3IgZGlyZWN0b3J5KQpvcGVuKCIv
dXNyL3NoYXJlL2xvY2FsZS1sYW5ncGFjay9lbi9MQ19NRVNTQUdFUy9jb3JldXRpbHMubW8i
LCBPX1JET05MWSkgPSAtMSBFTk9FTlQgKE5vIHN1Y2ggZmlsZSBvciBkaXJlY3RvcnkpCm9w
ZW4oIi91c3Ivc2hhcmUvbG9jYWxlL2VuL0xDX01FU1NBR0VTL2NvcmV1dGlscy5tbyIsIE9f
UkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKb3Blbigi
L3Vzci9zaGFyZS9sb2NhbGUtYnVuZGxlL2VuL0xDX01FU1NBR0VTL2NvcmV1dGlscy5tbyIs
IE9fUkRPTkxZKSA9IC0xIEVOT0VOVCAoTm8gc3VjaCBmaWxlIG9yIGRpcmVjdG9yeSkKZnN0
YXQoMSwge3N0X21vZGU9U19JRkNIUnwwNjAwLCBzdF9yZGV2PW1ha2VkZXYoMTM2LCA0KSwg
Li4ufSkgPSAwCm1tYXAoTlVMTCwgNDA5NiwgUFJPVF9SRUFEfFBST1RfV1JJVEUsIE1BUF9Q
UklWQVRFfE1BUF9BTk9OWU1PVVMsIC0xLCAwKSA9IDB4N2ZhMzMyYzc3MDAwCndyaXRlKDEs
ICJyZWd1bGFyIGVtcHR5IGZpbGUgMzM4MDM2OVxuIiwgMjcpID0gMjcKY2xvc2UoMSkgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgID0gMAptdW5tYXAoMHg3ZmEzMzJjNzcwMDAs
IDQwOTYpICAgICAgICAgICAgPSAwCmNsb3NlKDIpICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICA9IDAKZXhpdF9ncm91cCgwKSAgICAgICAgICAgICAgICAgICAgICAgICAgID0g
Pwo=
--------------5FFB4B7ACC60D3ABD6F0D5D7--


From xen-devel-bounces@lists.xenproject.org Mon May 18 10:37:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 10:37:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jad9a-0000ba-Id; Mon, 18 May 2020 10:37:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPYZ=7A=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jad9Z-0000bV-Lu
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 10:37:49 +0000
X-Inumbo-ID: 9e1e67cc-98f3-11ea-9887-bc764e2007e4
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e1e67cc-98f3-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 10:37:49 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id g9so7997016edw.10
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 03:37:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=mime-version:in-reply-to:references:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=5Bl/WG6QnU3SEi8B+fZ0xquucXbSbIUYtTuEV8XsL2w=;
 b=KJeAVM5yedKkJIgWwNbyfnVys3l7bclbOzdmns1BXhHb20udhMxCoPtOxsMu/kTO6H
 yD6p3+LVK9QcUyKf4UQz/CNc/0bbfOtpeb+3fNpywCaPSYZMYvzRL3jAwyokUkb7AcXL
 aSogFIoeAxb9PxpEaYaToavYxuFvqcPDTHvqKMkFHlDCDLCiKBURqIUamxEWSAT1kvRO
 IcKoH0/IKhrElGGtHCCLs2xXYCLRKYEtP3PSFfOMUEHcWpKqBDemLVOFzTKnWc9b8cj0
 yMtXhj52ADtbC62jCwAHdbpKnb8/80XdIuo24BefbRLPWDUdGcryTXS9cieQf5zEjdDp
 Et9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:in-reply-to:references:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=5Bl/WG6QnU3SEi8B+fZ0xquucXbSbIUYtTuEV8XsL2w=;
 b=qlls+oJSP0Rl7VXovEll5Hunxw1zrj6Fh3E6sPiEG6XJbr5HIIRfBhN0eeew4x+gVN
 uZkgO5SetTXNbyHCOjOpN7M0swM74w82/QfCKL79b/gQu9sh0IYd7r7VUx16upty2+ez
 YKaEyohp6GCf7+/tmz+/xCOVQj5BdzPipOvXedQcinlQNySYELzymZnolelceyZoUzdi
 b/yHS9H1YtiANM+3VkyBqzXywdVTDEwzH5l+KSXlltwK6xaT4OvgKCONVha/HPRFL5dE
 Trdv+G/Qk+S2uEH9Zb1CwlKaUpCJK9WNePtpRLX3Y3bHdWuixMrng43otVcyMCMW1P69
 PwQA==
X-Gm-Message-State: AOAM530TV5gkOoMgDMNbMvU/jm6NNg6HLs0TEd4RdTlWvt0xpFrudLfJ
 loLY5JFojO9BXJORuIiaoc18T+b1r0NVywZpBM7eZL2NExg=
X-Google-Smtp-Source: ABdhPJxdot3/awlmb1zQUzX00JePE7Qs3APx29J6/8Y5vscdjZoogGVM6aRzly+26egsaUsEHkx/ayArSupHaJ9qhs8=
X-Received: by 2002:aa7:da04:: with SMTP id r4mr12705670eds.346.1589798268112; 
 Mon, 18 May 2020 03:37:48 -0700 (PDT)
MIME-Version: 1.0
Received: by 2002:a50:34e:0:0:0:0:0 with HTTP;
 Mon, 18 May 2020 03:37:47 -0700 (PDT)
X-Originating-IP: [5.35.46.227]
In-Reply-To: <2615d770-c17b-26e0-4686-852ca122eeb4@suse.com>
References: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
 <bbd8a83a-a676-9fa1-b03b-526e5122966f@suse.com>
 <CAOJe8K2pUv4Upvxya3LFPj0CxZ1-_hDZcrv-r6Q2EaxC8Ym6Ow@mail.gmail.com>
 <2615d770-c17b-26e0-4686-852ca122eeb4@suse.com>
From: Denis Kirjanov <kda@linux-powerpc.org>
Date: Mon, 18 May 2020 13:37:47 +0300
Message-ID: <CAOJe8K08BJv5UefSYUQRWuFtsR=aMxJEV=JJvkMsrGk6q0fQRQ@mail.gmail.com>
Subject: Re: [PATCH] public/io/netif.h: add a new extra type for XDP
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/18/20, Jürgen Groß <jgross@suse.com> wrote:
> On 18.05.20 11:52, Denis Kirjanov wrote:
>> On 5/18/20, Jürgen Groß <jgross@suse.com> wrote:
>>> On 18.05.20 10:24, Denis Kirjanov wrote:
>>>> The patch adds a new extra type to be able to differentiate
>>>> between RX responses on xen-netfront side with the adjusted offset
>>>> required for XDP processing.
>>>>
>>>> For Linux the offset value is going to be passed via xenstore.
>>>
>>> Why? I can only see disadvantages by using a different communication
>>> mechanism.
>> I see it like other features passed through xenstore, and it requires
>> fewer changes to other structures with the same final result.
>
> This is okay as long as there is no Xenstore interaction required when the
> interface has been set up completely (i.e. only defining the needed
> offset for XDP is fine, enabling/disabling XDP at runtime should not be
> done via Xenstore IMO).
I've checked the netfront <---> netback interaction and found no problems with it.
Paul found an issue where the value of the netfront state wasn't being
re-read (during an unbind-bind sequence in dom0), and I've fixed that in the
patch for the guest.

>
> And please, no guest type special casing. Please replace "Linux" by e.g.
> "The guest" (with additional tweaking of the following sentence).

Oh, just sent v2. I'll fix a comment.

>
>
> Juergen
>


From xen-devel-bounces@lists.xenproject.org Mon May 18 10:45:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 10:45:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadGb-0001S1-76; Mon, 18 May 2020 10:45:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VSjt=7A=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jadGa-0001Rw-Gf
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 10:45:04 +0000
X-Inumbo-ID: a128c8c6-98f4-11ea-b9cf-bc764e2007e4
Received: from mail-ed1-x52d.google.com (unknown [2a00:1450:4864:20::52d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a128c8c6-98f4-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 10:45:03 +0000 (UTC)
Received: by mail-ed1-x52d.google.com with SMTP id k19so8026493edv.9
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 03:45:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=pZnkQa7IsGVBBiwRyiPvUnDrBchyPX6Z8G16FQ+le9c=;
 b=p7u1cTs+UdUa/9G/pRZSw6S4WzbZ/0zGeEjiUm3AQTheHHwMeFBuE6jWhUazWkPVi5
 81mvIVdZZFT+MJrxBpX3nXd+DusfQTDhVB2NZwDyF5UuUD6iQG79EM+4fLNVTarrNyAY
 ZBK67/Ci89l6/LXpda3tn670e3SSwJBGOi5wRPFluazTKUHrLKRO4rx+UWioKx0+eGaE
 v0OIyoxg36sBOQypoalhXBdNgi/1SAZWy0ihZUP3JLblQb6rlqxaP+oh1H2cjPritqBb
 bjrRI2siJPevd5F4swQJV+gVu2bTUtw441ue3AEArhg8+ldSq27YH7Crb+tCkd5FOeeV
 5gRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=pZnkQa7IsGVBBiwRyiPvUnDrBchyPX6Z8G16FQ+le9c=;
 b=bWhCsYoeeHG9CNjmLpW9GKXYPrwZwjP9hqGhgNpIf0BC8D9QwRYpBvxLYwrkgch3Yp
 IeRaR6HzlubKlmWr7xCac6nKsmMDmV3hQIZVvdCzGpr0zQwbkhWhv8JF2ok1wz8ELZLT
 gBHSuQMtlgzle6aQ9Js0wLK4TSnLmDeKMeYXtCZRARJdejfAF1z14nbovpvg7afvxRxR
 atlyrGrGUuyo+1ajICBMMFz1Cm7PgeE3g2xzxAzqeMzYDSKdpIEVDCAwNYDdDtZjA7tF
 bOT0WtIQGbhEkHdPMvHvkt4wjb6MkvblPAjVDOUXpfgCK1YnHirczYXTZB0eCufo1xO0
 /qXA==
X-Gm-Message-State: AOAM532q/eOmFPJMkXqiZ0XyKtTExVqQPhd63Oiob0tOW0OstGdzFzFH
 9ZQrq7Mjp7TYslyQOBq/g3g=
X-Google-Smtp-Source: ABdhPJzHl9CpZn7kB4k3fa7tgocXn1NyxoJcQu6vyujToBsINSK+f3TrvmZ92V0o3h7WWbcJmvxkiQ==
X-Received: by 2002:a50:ec8e:: with SMTP id e14mr2482850edr.105.1589798702648; 
 Mon, 18 May 2020 03:45:02 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id c9sm840993edl.21.2020.05.18.03.45.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 03:45:02 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'J=C3=BCrgen_Gro=C3=9F'?= <jgross@suse.com>,
 "'Denis Kirjanov'" <kda@linux-powerpc.org>
References: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
 <bbd8a83a-a676-9fa1-b03b-526e5122966f@suse.com>
 <CAOJe8K2pUv4Upvxya3LFPj0CxZ1-_hDZcrv-r6Q2EaxC8Ym6Ow@mail.gmail.com>
 <2615d770-c17b-26e0-4686-852ca122eeb4@suse.com>
In-Reply-To: <2615d770-c17b-26e0-4686-852ca122eeb4@suse.com>
Subject: RE: [PATCH] public/io/netif.h: add a new extra type for XDP
Date: Mon, 18 May 2020 11:45:00 +0100
Message-ID: <001101d62d01$6241b960$26c52c20$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQDg4gzfI0WgzPcwHONQbDlUkwh/TAGQBCHrAx2kla0CPnXkHqpg3NKQ
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jürgen Groß <jgross@suse.com>
> Sent: 18 May 2020 11:28
> To: Denis Kirjanov <kda@linux-powerpc.org>
> Cc: xen-devel@lists.xenproject.org; paul@xen.org
> Subject: Re: [PATCH] public/io/netif.h: add a new extra type for XDP
>
> On 18.05.20 11:52, Denis Kirjanov wrote:
> >> On 5/18/20, Jürgen Groß <jgross@suse.com> wrote:
> >> On 18.05.20 10:24, Denis Kirjanov wrote:
> >>> The patch adds a new extra type to be able to differentiate
> >>> between RX responses on xen-netfront side with the adjusted offset
> >>> required for XDP processing.
> >>>
> >>> For Linux the offset value is going to be passed via xenstore.
> >>
> >> Why? I can only see disadvantages by using a different communication
> >> mechanism.
> > I see it like other features passed through xenstore, and it requires
> > fewer changes to other structures with the same final result.
>
> This is okay as long as there is no Xenstore interaction required when the
> interface has been set up completely (i.e. only defining the needed
> offset for XDP is fine, enabling/disabling XDP at runtime should not be
> done via Xenstore IMO).

FWIW it is for this kind of thing that I introduced the control ring, but that may be overkill for this.

  Paul

>
> And please, no guest type special casing. Please replace "Linux" by e.g.
> "The guest" (with additional tweaking of the following sentence).
>
>
> Juergen



From xen-devel-bounces@lists.xenproject.org Mon May 18 10:45:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 10:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadH2-0001T7-GH; Mon, 18 May 2020 10:45:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k4Zq=7A=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jadH1-0001Sr-2J
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 10:45:31 +0000
X-Inumbo-ID: b0b61e06-98f4-11ea-a84b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0b61e06-98f4-11ea-a84b-12813bfff9fa;
 Mon, 18 May 2020 10:45:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7F9C6ABD1;
 Mon, 18 May 2020 10:45:31 +0000 (UTC)
Subject: Re: [PATCH] public/io/netif.h: add a new extra type for XDP
To: Denis Kirjanov <kda@linux-powerpc.org>
References: <1589790285-1250-1-git-send-email-kda@linux-powerpc.org>
 <bbd8a83a-a676-9fa1-b03b-526e5122966f@suse.com>
 <CAOJe8K2pUv4Upvxya3LFPj0CxZ1-_hDZcrv-r6Q2EaxC8Ym6Ow@mail.gmail.com>
 <2615d770-c17b-26e0-4686-852ca122eeb4@suse.com>
 <CAOJe8K08BJv5UefSYUQRWuFtsR=aMxJEV=JJvkMsrGk6q0fQRQ@mail.gmail.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b60ff33c-eb3d-41bf-2541-e516bbf6967e@suse.com>
Date: Mon, 18 May 2020 12:45:28 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CAOJe8K08BJv5UefSYUQRWuFtsR=aMxJEV=JJvkMsrGk6q0fQRQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.20 12:37, Denis Kirjanov wrote:
> On 5/18/20, Jürgen Groß <jgross@suse.com> wrote:
>> On 18.05.20 11:52, Denis Kirjanov wrote:
>>> On 5/18/20, Jürgen Groß <jgross@suse.com> wrote:
>>>> On 18.05.20 10:24, Denis Kirjanov wrote:
>>>>> The patch adds a new extra type to be able to differentiate
>>>>> between RX responses on xen-netfront side with the adjusted offset
>>>>> required for XDP processing.
>>>>>
>>>>> For Linux the offset value is going to be passed via xenstore.
>>>>
>>>> Why? I can only see disadvantages by using a different communication
>>>> mechanism.
>>> I see it like other features passed through xenstore, and it requires
>>> fewer changes to other structures with the same final result.
>>
>> This is okay as long as there is no Xenstore interaction required when the
>> interface has been set up completely (i.e. only defining the needed
>> offset for XDP is fine, enabling/disabling XDP at runtime should not be
>> done via Xenstore IMO).
> I've checked the netfront <---> netback interaction and found no problems with it.
> Paul found an issue where the value of the netfront state wasn't being
> re-read (during an unbind-bind sequence in dom0), and I've fixed that in the
> patch for the guest.

I'm not saying your variant isn't working, but a feature that is switchable
dynamically AND needs a ring page synchronization anyway should IMO
be switched via a specific ring page request.

I may not have the complete picture, so if you have a good reason
to do it via Xenstore, fine, but "I'm seeing no problem with it" is not a
good reason for a specific design decision.

> 
>>
>> And please, no guest type special casing. Please replace "Linux" by e.g.
>> "The guest" (with additional tweaking of the following sentence).
> 
> Oh, just sent v2. I'll fix a comment.

Thanks.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 18 11:11:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 11:11:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadfO-0003xy-KO; Mon, 18 May 2020 11:10:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jadfN-0003xt-AR
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 11:10:41 +0000
X-Inumbo-ID: 353d425a-98f8-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 353d425a-98f8-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 11:10:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 05189ADAB;
 Mon, 18 May 2020 11:10:41 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: Fix memory leaks in hvm_copy_context_and_params()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200516122221.5434-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3fecba0e-6f3d-c1ec-fc77-0bdd6f1e0906@suse.com>
Date: Mon, 18 May 2020 13:10:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200516122221.5434-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas@tklengyel.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.05.2020 14:22, Andrew Cooper wrote:
> Any error from hvm_save() or hvm_set_param() leaks the c.data allocation.
> 
> Spotted by Coverity.
> 
> Fixes: 353744830 "x86/hvm: introduce hvm_copy_context_and_params"
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon May 18 11:12:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 11:12:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadgX-00042C-VD; Mon, 18 May 2020 11:11:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jadgW-000426-VL
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 11:11:52 +0000
X-Inumbo-ID: 601bfb92-98f8-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 601bfb92-98f8-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 11:11:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4C397AEEF;
 Mon, 18 May 2020 11:11:54 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: Fix shifting in stdvga_mem_read()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200516190211.4120-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dc45bf66-c0e7-2e22-8abf-bab4adb975e4@suse.com>
Date: Mon, 18 May 2020 13:11:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200516190211.4120-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.05.2020 21:02, Andrew Cooper wrote:
> stdvga_mem_read() has a return type of uint8_t, which promotes to int rather
> than unsigned int.  Shifting by 24 may hit the sign bit.
> 
> Spotted by Coverity.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon May 18 11:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 11:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadyY-0005jw-Qp; Mon, 18 May 2020 11:30:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/+tu=7A=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jadyX-0005jk-7n
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 11:30:29 +0000
X-Inumbo-ID: f8d52a8d-98fa-11ea-a850-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8d52a8d-98fa-11ea-a850-12813bfff9fa;
 Mon, 18 May 2020 11:30:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lD1rKTHPpy3ZHsAcorCh4F2qKZRDeWwZ6XqxYnQhffE=; b=YAkxoSZtaUEfszOUP6wQmzgkw2
 fb2xZjDNbO2cCQE9Eg/VY/xs7x7D4ZCzU/P3nNC2a2mPX18s/7O4hWryyFz69hh1nRqPRCEnFx1uo
 Yu7yYNi33gxHa781bH5ZOOf7vc4wTyy/3wXi8jTOkvmI8brsMxOv5SBh21u5bOfeGdpQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jadyV-0006PN-1F; Mon, 18 May 2020 11:30:27 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jadyU-0000cn-Ns; Mon, 18 May 2020 11:30:26 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 2/3] xen/arm: Take into account the DMA width when
 allocating Dom0 memory banks
Date: Mon, 18 May 2020 12:30:07 +0100
Message-Id: <20200518113008.15422-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200518113008.15422-1-julien@xen.org>
References: <20200518113008.15422-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 jeff.kubascik@dornerworks.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, Xen is assuming that all the devices are at least 32-bit
DMA capable. However, some SoCs have devices that may only be able to
access a more restricted range. For instance, the Raspberry Pi 4 has
devices that can only access the first GB of RAM.

The function arch_get_dma_bitsize() will return the lowest DMA width on
the platform. Use it to decide the limit for the low memory.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/domain_build.c | 32 +++++++++++++++++++-------------
 1 file changed, 19 insertions(+), 13 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 430708753642..abc4e463d27c 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -211,10 +211,13 @@ fail:
  *    the ramdisk and DTB must be placed within a certain proximity of
  *    the kernel within RAM.
  * 3. For dom0 we want to place as much of the RAM as we reasonably can
- *    below 4GB, so that it can be used by non-LPAE enabled kernels (32-bit)
+ *    below 4GB, so that it can be used by non-LPAE enabled kernels (32-bit),
  *    or when a device assigned to dom0 can only do 32-bit DMA access.
- * 4. For 32-bit dom0 the kernel must be located below 4GB.
- * 5. We want to have a few largers banks rather than many smaller ones.
+ * 4. Some devices assigned to dom0 can only do 32-bit DMA access, or
+ *    may be even more restricted. We want to allocate as much of the
+ *    RAM as we reasonably can that can be accessed from all the devices.
+ * 5. For 32-bit dom0 the kernel must be located below 4GB.
+ * 6. We want to have a few larger banks rather than many smaller ones.
  *
  * For the first two requirements we need to make sure that the lowest
  * bank is sufficiently large.
@@ -245,9 +248,9 @@ fail:
  * we give up.
  *
  * For 32-bit domain we require that the initial allocation for the
- * first bank is under 4G. For 64-bit domain, the first bank is preferred
- * to be allocated under 4G. Then for the subsequent allocations we
- * initially allocate memory only from below 4GB. Once that runs out
+ * first bank is part of the low mem. For 64-bit, the first bank is preferred
+ * to be allocated in the low mem. Then for subsequent allocations, we
+ * initially allocate memory only from the low mem. Once that runs out
  * (as described above) we allow higher allocations and continue until
  * that runs out (or we have allocated sufficient dom0 memory).
  */
@@ -262,6 +265,7 @@ static void __init allocate_memory_11(struct domain *d,
     int i;
 
     bool lowmem = true;
+    unsigned int lowmem_bitsize = min(32U, arch_get_dma_bitsize());
     unsigned int bits;
 
     /*
@@ -282,7 +286,7 @@ static void __init allocate_memory_11(struct domain *d,
      */
     while ( order >= min_low_order )
     {
-        for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
+        for ( bits = order ; bits <= lowmem_bitsize; bits++ )
         {
             pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
             if ( pg != NULL )
@@ -296,24 +300,26 @@ static void __init allocate_memory_11(struct domain *d,
         order--;
     }
 
-    /* Failed to allocate bank0 under 4GB */
+    /* Failed to allocate bank0 in the lowmem region. */
     if ( is_32bit_domain(d) )
         panic("Unable to allocate first memory bank\n");
 
-    /* Try to allocate memory from above 4GB */
-    printk(XENLOG_INFO "No bank has been allocated below 4GB.\n");
+    /* Try to allocate memory from above the lowmem region */
+    printk(XENLOG_INFO "No bank has been allocated below %u-bit.\n",
+           lowmem_bitsize);
     lowmem = false;
 
  got_bank0:
 
     /*
-     * If we failed to allocate bank0 under 4GB, continue allocating
-     * memory from above 4GB and fill in banks.
+     * If we failed to allocate bank0 in the lowmem region,
+     * continue allocating from above the lowmem and fill in banks.
      */
     order = get_allocation_size(kinfo->unassigned_mem);
     while ( kinfo->unassigned_mem && kinfo->mem.nr_banks < NR_MEM_BANKS )
     {
-        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(32) : 0);
+        pg = alloc_domheap_pages(d, order,
+                                 lowmem ? MEMF_bits(lowmem_bitsize) : 0);
         if ( !pg )
         {
             order --;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 11:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 11:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadya-0005kQ-6k; Mon, 18 May 2020 11:30:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/+tu=7A=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jadyY-0005jp-Jv
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 11:30:30 +0000
X-Inumbo-ID: f8a96db6-98fa-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8a96db6-98fa-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 11:30:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=F11MEj5YMT2oxGVhUvZNpiEihGr2ehtJLVZ6B95SzeU=; b=N06dxr5K661DEuk0r/cmdE/9Ok
 KLvxdkXY3T0U6y2bKecDnpty0rVqYwYMSsLmzQjiqXfas/+5loRAA2gmJjuToVHo5Hwge+Kbmqvmd
 kKp5VzC4xueXrkitSBvWQsn+lnJz7QPCeEeyzE6+03llrMPDdAx8/02IxPxrktZfaEQI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jadyT-0006PI-Je; Mon, 18 May 2020 11:30:25 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jadyT-0000cn-9u; Mon, 18 May 2020 11:30:25 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 1/3] xen/arm: Allow a platform to override the DMA
 width
Date: Mon, 18 May 2020 12:30:06 +0100
Message-Id: <20200518113008.15422-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200518113008.15422-1-julien@xen.org>
References: <20200518113008.15422-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 George Dunlap <george.dunlap@citrix.com>, jeff.kubascik@dornerworks.com,
 Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, Xen is assuming that all the devices are at least 32-bit
DMA capable. However, some SoCs have devices that may only be able to
access a more restricted range. For instance, the Raspberry Pi 4 has
devices that can only access the first 1GB of RAM.

The structure platform_desc is now extended to allow a platform to
override the DMA width. The new field is used to implement
arch_get_dma_bitsize().

The prototype is now moved to asm-arm/mm.h as the function is not NUMA
specific. The implementation is done in platform.c so we don't have to
include platform.h everywhere. This should be fine as the function is
not expected to be called on a hot path.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>

I noticed that arch_get_dma_bitsize() is only called when there is more
than one NUMA node. I am a bit unsure of the reason behind it.

The goal for Arm is to use arch_get_dma_bit_size() when deciding how low
the first Dom0 bank should be allocated.
---
 xen/arch/arm/platform.c        | 5 +++++
 xen/include/asm-arm/mm.h       | 2 ++
 xen/include/asm-arm/numa.h     | 5 -----
 xen/include/asm-arm/platform.h | 2 ++
 4 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/platform.c b/xen/arch/arm/platform.c
index 8eb0b6e57a5a..4db5bbb4c51d 100644
--- a/xen/arch/arm/platform.c
+++ b/xen/arch/arm/platform.c
@@ -155,6 +155,11 @@ bool platform_device_is_blacklisted(const struct dt_device_node *node)
     return (dt_match_node(blacklist, node) != NULL);
 }
 
+unsigned int arch_get_dma_bitsize(void)
+{
+    return ( platform && platform->dma_bitsize ) ? platform->dma_bitsize : 32;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 7df91280bc77..f8ba49b1188f 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -366,6 +366,8 @@ int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
     return -EOPNOTSUPP;
 }
 
+unsigned int arch_get_dma_bitsize(void);
+
 #endif /*  __ARCH_ARM_MM__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h
index 490d1f31aa14..31a6de4e2346 100644
--- a/xen/include/asm-arm/numa.h
+++ b/xen/include/asm-arm/numa.h
@@ -25,11 +25,6 @@ extern mfn_t first_valid_mfn;
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
 
-static inline unsigned int arch_get_dma_bitsize(void)
-{
-    return 32;
-}
-
 #endif /* __ARCH_ARM_NUMA_H */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/platform.h b/xen/include/asm-arm/platform.h
index ed4d30a1be7c..997eb2521631 100644
--- a/xen/include/asm-arm/platform.h
+++ b/xen/include/asm-arm/platform.h
@@ -38,6 +38,8 @@ struct platform_desc {
      * List of devices which must not pass-through to a guest
      */
     const struct dt_device_match *blacklist_dev;
+    /* Override the DMA width (32-bit by default). */
+    unsigned int dma_bitsize;
 };
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 11:30:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 11:30:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadyU-0005je-Hd; Mon, 18 May 2020 11:30:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/+tu=7A=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jadyT-0005jZ-Lp
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 11:30:25 +0000
X-Inumbo-ID: f7857c36-98fa-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7857c36-98fa-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 11:30:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=k4PBRVru4LKBF7jxp/hVw/A4ZA5rY75r31sh7ceXQAo=; b=g0WVCuOPYWdI4k/NVaFMrS67wL
 hz7hnYpBnp4By8v3EfYqCfF004/3q/gQ6OOOm0sOUy/I6x9+GFPsGvkb7Z12T7CDCppqtccDKUiWk
 n4mCW6Y3hDqiFRrxUS7YX0vz+ZtnSgqHJLH+KzBGpdB3ThyMQOz1B0tpd0a9aec5FIRQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jadyR-0006PC-Ss; Mon, 18 May 2020 11:30:23 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jadyR-0000cn-Go; Mon, 18 May 2020 11:30:23 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
Date: Mon, 18 May 2020 12:30:05 +0100
Message-Id: <20200518113008.15422-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 George Dunlap <george.dunlap@citrix.com>, jeff.kubascik@dornerworks.com,
 Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Hi all,

At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
only use the first GB of memory.

This is because several devices cannot DMA above 1GB but Xen doesn't
necessarily allocate memory for Dom0 below 1GB.

This small series is trying to address the problem by allowing a
platform to restrict where Dom0 banks are allocated.

This is also a candidate for Xen 4.14. Without it, a user will not be
able to use all the RAM on the Raspberry Pi 4.

This series has only been lightly tested. I would appreciate more testing
on the Raspberry Pi 4 to confirm it removes the restriction.

Cheers,

Cc: paul@xen.org

Julien Grall (3):
  xen/arm: Allow a platform to override the DMA width
  xen/arm: Take into account the DMA width when allocating Dom0 memory
    banks
  xen/arm: plat: Allocate as much memory as possible below 1GB for dom0
    on the RPi

 xen/arch/arm/domain_build.c                | 32 +++++++++++++---------
 xen/arch/arm/platform.c                    |  5 ++++
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  1 +
 xen/include/asm-arm/mm.h                   |  2 ++
 xen/include/asm-arm/numa.h                 |  5 ----
 xen/include/asm-arm/platform.h             |  2 ++
 6 files changed, 29 insertions(+), 18 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 11:30:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 11:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jadye-0005lA-GB; Mon, 18 May 2020 11:30:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/+tu=7A=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jadyd-0005l0-Je
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 11:30:35 +0000
X-Inumbo-ID: fa06629a-98fa-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa06629a-98fa-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 11:30:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Q0wZuBDGyM9VQL31+v5+pRe+uO7y+dOvvr/QwhFBcOc=; b=4n/eyLU4N9H7b2z9cd/z9w6PIa
 oF60eXSivItYOzwZeeR6uK9uEWtnYbr7eWPpxQS2ikP12q+a6yFyJ7v7CftwXl3cQ5V6Y4D8tmZpy
 UoRUFBB7pze4byUP9c3KSkgphIty5xqauxFQZCeKhwUaB/nVXNwHLfxWPg4mXIzlvH4E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jadyW-0006PW-FL; Mon, 18 May 2020 11:30:28 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jadyW-0000cn-5S; Mon, 18 May 2020 11:30:28 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 3/3] xen/arm: plat: Allocate as much memory as
 possible below 1GB for dom0 on the RPi
Date: Mon, 18 May 2020 12:30:08 +0100
Message-Id: <20200518113008.15422-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200518113008.15422-1-julien@xen.org>
References: <20200518113008.15422-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 jeff.kubascik@dornerworks.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The Raspberry Pi 4 has devices that can only DMA into the first GB of
RAM. Therefore we want to allocate as much memory as possible below 1GB
for dom0.

Use the recently introduced dma_bitsize field to specify the DMA width
supported.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reported-by: Corey Minyard <minyard@acm.org>
---
 xen/arch/arm/platforms/brcm-raspberry-pi.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
index b697fa2c6c0e..ad5483437b31 100644
--- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
+++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
@@ -43,6 +43,7 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
 PLATFORM_START(rpi4, "Raspberry Pi 4")
     .compatible     = rpi4_dt_compat,
     .blacklist_dev  = rpi4_blacklist_dev,
+    .dma_bitsize    = 30,
 PLATFORM_END
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 11:51:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 11:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaeIa-0007yl-Hh; Mon, 18 May 2020 11:51:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaeIY-0007yg-Vn
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 11:51:11 +0000
X-Inumbo-ID: dd60b4d0-98fd-11ea-a854-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd60b4d0-98fd-11ea-a854-12813bfff9fa;
 Mon, 18 May 2020 11:51:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 55EF4B01C;
 Mon, 18 May 2020 11:51:11 +0000 (UTC)
Subject: Re: [RESEND PATCH v2 for-4.14] pvcalls: Document correctly and
 explicitly the padding for all arches
To: Julien Grall <julien@xen.org>
References: <20200516102157.1928-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <31a7d5b0-4e4f-960c-d4e0-8e87bf489db2@suse.com>
Date: Mon, 18 May 2020 13:51:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200516102157.1928-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 16.05.2020 12:21, Julien Grall wrote:
> --- a/xen/include/public/io/pvcalls.h
> +++ b/xen/include/public/io/pvcalls.h
> @@ -65,6 +65,9 @@ struct xen_pvcalls_request {
>              uint32_t domain;
>              uint32_t type;
>              uint32_t protocol;
> +#ifndef CONFIG_X86_32
> +            uint8_t pad[4];
> +#endif

There's no concept of CONFIG_* in the public headers, the dependency
(as you'll find elsewhere) is on __i386__ / __x86_64__. Also whether
there's any padding really doesn't depend directly on the architecture,
but instead on __alignof__(uint64_t) (i.e. a future port to a 32-bit
arch, even if - like on x86 - just a guest bitness, may similarly
want / need / have no padding here).

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 18 12:24:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 12:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaeoR-0002He-LF; Mon, 18 May 2020 12:24:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaeoQ-0002HX-7b
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 12:24:06 +0000
X-Inumbo-ID: 76f3034c-9902-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76f3034c-9902-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 12:24:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=A8w16CXT+V6e40RwR+8f4pkyWuSpPo/LV7TTWzryb40=; b=fTNy0TchWrWyRa4mLISeuclJx
 XD1/2BC+4y14YkQxm/0D+WKvs74x5zHzNnaGYqKg1EtDpt+uHlEdJlMIrNKUkSp8240WHtTxod/Mw
 goxQUMERhWU+nDS17fpe591H1U+lQdA6htS0vNJ33hEyjNBPT7ZhdKCGdwCZ5HrWkf4AE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaeoP-0007X1-3l; Mon, 18 May 2020 12:24:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaeoO-0002TG-OV; Mon, 18 May 2020 12:24:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaeoO-0001t8-Np; Mon, 18 May 2020 12:24:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150229-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150229: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=b8eda131954452bb5a236100a6572fe8f27d8021
X-Osstest-Versions-That: seabios=665dce17c04b574bb0ebcde4cac129c3dd9e681c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 12:24:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150229 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150229/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150201
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150201
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150201
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150201
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              b8eda131954452bb5a236100a6572fe8f27d8021
baseline version:
 seabios              665dce17c04b574bb0ebcde4cac129c3dd9e681c

Last test of basis   150201  2020-05-15 15:10:33 Z    2 days
Testing same since   150229  2020-05-18 08:09:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   665dce1..b8eda13  b8eda131954452bb5a236100a6572fe8f27d8021 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 18 12:40:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 12:40:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaf4G-0003tr-3K; Mon, 18 May 2020 12:40:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Jhae=7A=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1jaf4D-0003tm-Vx
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 12:40:26 +0000
X-Inumbo-ID: bec9eeae-9904-11ea-9887-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id bec9eeae-9904-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 12:40:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1589805624;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=yJ4IgRgsiTBA/N9X0FKjjCpdw+wTyU5KVvVwmM9WQmc=;
 b=JIzX4Y+JHwnWAANpaRL40qOMaQIwDS9SlSeYUJGM2t3zZom0XZ1WZbmtim5aa0ixvB1uyX
 XY1r5EAdkm76hieKhtEhL8/CSqvLnIe1Lf33Lnkcx9ook0Vmaa5r4i3gHPsB9VXgHbjOBD
 EZ/kphXoFNcBxvt2xlA/FbJqG0XnrEs=
Received: from mail-wr1-f71.google.com (mail-wr1-f71.google.com
 [209.85.221.71]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-470-BWVRANwpMj-VI7nOZRJV2g-1; Mon, 18 May 2020 08:40:22 -0400
X-MC-Unique: BWVRANwpMj-VI7nOZRJV2g-1
Received: by mail-wr1-f71.google.com with SMTP id e14so5595770wrv.11
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 05:40:22 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=yJ4IgRgsiTBA/N9X0FKjjCpdw+wTyU5KVvVwmM9WQmc=;
 b=ZKteEQ2kxz/MVr6JyQbPi+7cPjcditk0JtWoz2EhN0qhhDRcWwY1D7RvdS6dM0kf2H
 j8ZgjJBmhrpGWQJCgwsH+DTIi7cQzMiWMuW2khGLqv61rN7T0DL6u0TADWNLHfqIuMdd
 aDq7XUMjlcgMbHLdGDjgcOSDolyC4m/7bdmVFmX+IPXVEocGnl8x3dnggxOFWCwkgUHO
 ghTJM3u1qRqgI/H7WPUZSzcR9z7oRaSUB6VXE3/U5yczQRx6dw0PE3TZJwh5xJ9qwCYl
 yk5aJwYmVp29ah0GOgQZN6EOMhrf+UdbuxqhfeNE+20yiJZ4e2xcMcLFMqXLFuioOhrp
 41xA==
X-Gm-Message-State: AOAM532tH4OuLWBwZ8PrpF5m7ocpWJTulHfU8dMD4yDpdzFjPq4KXJwI
 25oG8hLJxB0OREO1MtzeNSeOYUTLicU9KyLs9qohRJSW3cgyzVTP+ImUR3TprLuGlFyLy7G81Vo
 2lxLmjqeCbndl6KaFdqySvYNNlJA=
X-Received: by 2002:adf:f8c1:: with SMTP id f1mr19378250wrq.171.1589805621609; 
 Mon, 18 May 2020 05:40:21 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJwCAR79q1r/u+ElQ0DSvEZICk9eI3J0VHulHSHUS5E92yAv0i/D3GvHsxNo7k5eyzb/9Bk8AA==
X-Received: by 2002:adf:f8c1:: with SMTP id f1mr19378219wrq.171.1589805621356; 
 Mon, 18 May 2020 05:40:21 -0700 (PDT)
Received: from [192.168.178.58] ([151.30.90.67])
 by smtp.gmail.com with ESMTPSA id q74sm17336275wme.14.2020.05.18.05.40.19
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 18 May 2020 05:40:20 -0700 (PDT)
Subject: Re: [PATCH v3 0/3] various: Remove unnecessary casts
To: Markus Armbruster <armbru@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>
References: <20200512070020.22782-1-f4bug@amsat.org>
 <871rnlsps6.fsf@dusky.pond.sub.org>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <8791b385-8493-f81d-5ee3-cca5b8559c27@redhat.com>
Date: Mon, 18 May 2020 14:40:19 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <871rnlsps6.fsf@dusky.pond.sub.org>
Content-Language: en-US
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 =?UTF-8?Q?C=c3=a9dric_Le_Goater?= <clg@kaod.org>, John Snow <jsnow@redhat.com>,
 David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/05/20 07:58, Markus Armbruster wrote:
> Philippe Mathieu-Daudé <f4bug@amsat.org> writes:
> 
>> Remove unnecessary casts using coccinelle scripts.
>>
>> The CPU()/OBJECT() patches don't introduce logical change,
>> The DEVICE() one removes various OBJECT_CHECK() calls.
> Queued, thanks!
> 
> Managing expectations: I'm not a QOM maintainer, I don't want to become
> one, and I don't normally queue QOM patches :)
> 

I want to be a QOM maintainer again, but it's not the best time for me
to be one.  So thanks for picking up my slack.

Paolo



From xen-devel-bounces@lists.xenproject.org Mon May 18 13:13:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:13:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafaC-0006Ww-RY; Mon, 18 May 2020 13:13:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jafaB-0006Wr-TV
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:13:27 +0000
X-Inumbo-ID: 5bafc226-9909-11ea-a860-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5bafc226-9909-11ea-a860-12813bfff9fa;
 Mon, 18 May 2020 13:13:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0491BB1F7;
 Mon, 18 May 2020 13:13:27 +0000 (UTC)
Subject: Re: [PATCH v4] x86: clear RDRAND CPUID bit on AMD family 15h/16h
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <69382ba7-b562-2c8c-1843-b17ce6c512f1@suse.com>
 <20200515151838.GU54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e4c23fcb-0a09-4e8c-eabe-ff7040427376@suse.com>
Date: Mon, 18 May 2020 15:13:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515151838.GU54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 17:18, Roger Pau Monné wrote:
> On Mon, Mar 09, 2020 at 10:08:50AM +0100, Jan Beulich wrote:
>> Inspired by Linux commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24:
>>
>>     There have been reports of RDRAND issues after resuming from suspend on
>>     some AMD family 15h and family 16h systems. This issue stems from a BIOS
>>     not performing the proper steps during resume to ensure RDRAND continues
>>     to function properly.
>>
>>     Update the CPU initialization to clear the RDRAND CPUID bit for any family
>>     15h and 16h processor that supports RDRAND. If it is known that the family
>>     15h or family 16h system does not have an RDRAND resume issue or that the
>>     system will not be placed in suspend, the "cpuid=rdrand" kernel parameter
>>     can be used to stop the clearing of the RDRAND CPUID bit.
>>
>>     Note, that clearing the RDRAND CPUID bit does not prevent a processor
>>     that normally supports the RDRAND instruction from executing it. So any
>>     code that determined the support based on family and model won't #UD.
>>
>> Warn if no explicit choice was given on affected hardware.
>>
>> Check RDRAND functions at boot as well as after S3 resume (the retry
>> limit chosen is entirely arbitrary).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks much.

>> @@ -646,6 +647,25 @@ static void init_amd(struct cpuinfo_x86
>>  		if (acpi_smi_cmd && (acpi_enable_value | acpi_disable_value))
>>  			amd_acpi_c1e_quirk = true;
>>  		break;
>> +
>> +	case 0x15: case 0x16:
>> +		/*
>> +		 * There are too many Fam15/Fam16 systems where upon resume
>> +		 * from S3 firmware fails to re-setup properly functioning
>> +		 * RDRAND.  Clear the feature unless force-enabled on the
>> +		 * command line.
>> +		 */
>> +		if (c == &boot_cpu_data &&
>> +		    cpu_has(c, X86_FEATURE_RDRAND) &&
>> +		    !is_forced_cpu_cap(X86_FEATURE_RDRAND)) {
> 
> Given this is the only user of is_forced_cpu_cap...
> 
>> +			static const char __initconst text[] =
>> +				"RDRAND may cease to work on this hardware upon resume from S3.\n"
>> +				"Please choose an explicit cpuid={no-}rdrand setting.\n";
>> +
>> +			setup_clear_cpu_cap(X86_FEATURE_RDRAND);
>> +			warning_add(text);
>> +		}
>> +		break;
>>  	}
>>  
>>  	display_cacheinfo(c);
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -11,6 +11,7 @@
>>  #include <asm/io.h>
>>  #include <asm/mpspec.h>
>>  #include <asm/apic.h>
>> +#include <asm/random.h>
>>  #include <asm/setup.h>
>>  #include <mach_apic.h>
>>  #include <public/sysctl.h> /* for XEN_INVALID_{SOCKET,CORE}_ID */
>> @@ -98,6 +99,11 @@ void __init setup_force_cpu_cap(unsigned
>>  	__set_bit(cap, boot_cpu_data.x86_capability);
>>  }
>>  
>> +bool is_forced_cpu_cap(unsigned int cap)
> 
> ... I think this could be made __init?

Ah, now it can be again, yes. It was an endless back and forth between
the various versions (some not even posted).

Jan
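
The boot/resume RDRAND sanity check described in the patch (try the instruction a bounded number of times, trusting it only if it succeeds within the limit) can be sketched portably. The rdrand instruction itself is stubbed out here so the sketch compiles anywhere, and the helper names are illustrative, not Xen's:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the RDRAND instruction: a real build would
 * execute rdrand and test the carry flag; here the hardware is stubbed
 * so the sketch stays portable.  '*failures_left' models transient
 * entropy underflow after S3 resume. */
static bool rdrand_step(uint64_t *out, int *failures_left)
{
    if (*failures_left > 0) {
        (*failures_left)--;
        return false;              /* CF=0: no random value available */
    }
    *out = 0x0123456789abcdefULL;  /* placeholder "random" value */
    return true;                   /* CF=1: value stored in *out */
}

/* Boot/resume sanity check with a bounded retry loop; the limit is
 * arbitrary, as the patch description itself notes. */
static bool rdrand_works(int retries, int *failures_left)
{
    uint64_t val;

    for (int i = 0; i < retries; i++)
        if (rdrand_step(&val, failures_left))
            return true;
    return false;
}
```

With a retry limit of 8, a source that recovers after a few failed pulls passes the check, while one that never recovers within the limit is treated as broken.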


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:15:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:15:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafcR-0006dv-8D; Mon, 18 May 2020 13:15:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/+tu=7A=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jafcP-0006do-PN
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:15:45 +0000
X-Inumbo-ID: aeaf1e40-9909-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aeaf1e40-9909-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 13:15:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NhDnZvpcDeGZ65uLgD9pNSdYv9QgJZf6F3tFtD4GQ+w=; b=xjnq5gPi9bggAZ7cXwfE857poU
 WJT1eGlht7AwSom1wM6HCZRFyslEEtH6K1INQD9Uvc87C9iElMpsh6w7jYGjIg4+SeTJaFVap9iFM
 nMi5ltXlpJmL3d4TPg8IAXdDXEIkC605I8tQvo5yyW5vqc9ejDl3gDmgTBMdd5ArdH9E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jafcM-00008D-NE; Mon, 18 May 2020 13:15:42 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jafcM-0006d4-FZ; Mon, 18 May 2020 13:15:42 +0000
Subject: Re: [RESEND PATCH v2 for-4.14] pvcalls: Document correctly and
 explicitely the padding for all arches
To: Jan Beulich <jbeulich@suse.com>
References: <20200516102157.1928-1-julien@xen.org>
 <31a7d5b0-4e4f-960c-d4e0-8e87bf489db2@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8b0aa4b3-9220-ab13-aa8f-2b7907a3efdf@xen.org>
Date: Mon, 18 May 2020 14:15:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <31a7d5b0-4e4f-960c-d4e0-8e87bf489db2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 18/05/2020 12:51, Jan Beulich wrote:
> On 16.05.2020 12:21, Julien Grall wrote:
>> --- a/xen/include/public/io/pvcalls.h
>> +++ b/xen/include/public/io/pvcalls.h
>> @@ -65,6 +65,9 @@ struct xen_pvcalls_request {
>>               uint32_t domain;
>>               uint32_t type;
>>               uint32_t protocol;
>> +#ifndef CONFIG_X86_32
>> +            uint8_t pad[4];
>> +#endif
> 
> There's no concept of CONFIG_* in the public headers, the dependency
> (as you'll find elsewhere) is on __i386__ / __x86_64__.

Doh, I forgot it. I will fix it.

> Also whether
> there's any padding really doesn't depend directly on the architecture,
> but instead on __alignof__(uint64_t) (i.e. a future port to a 32-bit
> arch, even if - like on x86 - just a guest bitness, may similarly
> want / need / have no padding here).

Let's imagine someone decides to introduce 32-bit support first and
64-bit support only later on. Both have different padding requirements,
so this would result in the same mess as on x86.

So I think we shouldn't depend on __alignof__(uint64_t), to avoid any
more screw-ups. Obviously extra care would need to be taken if the
padding were larger, but that is also true in many other places in the
Xen headers.
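
The explicit-padding approach under discussion can be illustrated with a hypothetical struct (the field names are illustrative, not the actual pvcalls layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical wire struct.  Without the explicit pad, an ABI that
 * aligns uint64_t to 8 bytes (x86-64, arm64) inserts 4 hidden bytes
 * before 'req_id', while 32-bit x86 (4-byte uint64_t alignment) does
 * not.  Spelling the padding out makes every ABI agree on the layout,
 * regardless of which bitness is ported first. */
struct example_req {
    uint32_t domain;
    uint32_t type;
    uint32_t protocol;
    uint8_t  pad[4];     /* explicit on all arches */
    uint64_t req_id;
};
```

With the pad spelled out, `offsetof(struct example_req, req_id)` is 16 and the struct size is 24 on both 32-bit and 64-bit ABIs, so no compat translation layer is needed.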

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:16:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafd0-0006ha-H7; Mon, 18 May 2020 13:16:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fjRX=7A=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jafcz-0006hP-22
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:16:21 +0000
X-Inumbo-ID: c2a0310b-9909-11ea-a863-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c2a0310b-9909-11ea-a863-12813bfff9fa;
 Mon, 18 May 2020 13:16:19 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3xwQ56JtVTHqRISHqKHqFtXE+sfOHS0hEaWadxKIo2qskijMzu1B8jiI1zlBVBDIe+DSSH6stX
 bGtqTJmO/QQlZ6aI0yOxJ++v8y7o0Pa5fup7Bo+hW00sAQfraEpCF3wVE3Ae98wJWJGOw2DEwG
 0WMVoW0bmDR5X4+qsUduYgy8KgevvR2xdbdQP4PhTynLRhVgwXzb3fRdmCfzvcwmqrQy1fMg3S
 A9f+en9XHjoUrCSbpK0e5aesqmi6EfWK7uVojZg9B5041hSIolkam+3PNi9ygvi1mkrr/uuf06
 /vo=
X-SBRS: 2.7
X-MesageID: 18042234
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18042234"
Date: Mon, 18 May 2020 14:15:06 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: Re: [XEN PATCH 0/2] Fix installation of python scripts
Message-ID: <20200518131506.GA2105@perard.uk.xensource.com>
References: <20200311175933.1362235-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200311175933.1362235-1-anthony.perard@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ping?

Cheers.

On Wed, Mar 11, 2020 at 05:59:31PM +0000, Anthony PERARD wrote:
> Patch series available in this git branch:
> https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.fix-python-install-v1
> 
> Hi,
> 
> A patch series to make packaging of Xen on CentOS 8 easier: rpmbuild
> prevents unversioned python shebangs from being packaged.
> The first patch fixes a bug discovered while working on the second.
> 
> Cheers,
> 
> Anthony PERARD (2):
>   tools/python: Fix install-wrap
>   tools: Use INSTALL_PYTHON_PROG
> 
>  tools/misc/xencov_split   | 2 +-
>  tools/python/Makefile     | 4 ++--
>  tools/python/install-wrap | 2 +-
>  tools/xenmon/Makefile     | 2 +-
>  4 files changed, 5 insertions(+), 5 deletions(-)

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:16:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafdZ-0006nB-Qa; Mon, 18 May 2020 13:16:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jafdY-0006my-PT
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:16:56 +0000
X-Inumbo-ID: d735db2e-9909-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d735db2e-9909-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 13:16:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8708DAE72;
 Mon, 18 May 2020 13:16:55 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: determine MXCSR mask in all cases
Message-ID: <687f8a71-5c5c-c95e-146d-8f38211e5e00@suse.com>
Date: Mon, 18 May 2020 15:16:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For its use(s) by the emulator to be correct in all cases, the filling
of the variable needs to be independent of XSAVE availability. As
there's no suitable function in i387.c to put the logic in, keep it in
xstate_init(), arrange for the function to be called unconditionally,
and pull the logic ahead of all return paths there.

Fixes: 9a4496a35b20 ("x86emul: support {,V}{LD,ST}MXCSR")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -488,8 +488,7 @@ void identify_cpu(struct cpuinfo_x86 *c)
 
 	/* Now the feature flags better reflect actual CPU features! */
 
-	if ( cpu_has_xsave )
-		xstate_init(c);
+	xstate_init(c);
 
 #ifdef NOISY_CAPS
 	printk(KERN_DEBUG "CPU: After all inits, caps:");
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -588,6 +588,18 @@ void xstate_init(struct cpuinfo_x86 *c)
     u32 eax, ebx, ecx, edx;
     u64 feature_mask;
 
+    if ( bsp )
+    {
+        static typeof(current->arch.xsave_area->fpu_sse) __initdata ctxt;
+
+        asm ( "fxsave %0" : "=m" (ctxt) );
+        if ( ctxt.mxcsr_mask )
+            mxcsr_mask = ctxt.mxcsr_mask;
+    }
+
+    if ( !cpu_has_xsave )
+        return;
+
     if ( (bsp && !use_xsave) ||
          boot_cpu_data.cpuid_level < XSTATE_CPUID )
     {
@@ -611,8 +623,6 @@ void xstate_init(struct cpuinfo_x86 *c)
 
     if ( bsp )
     {
-        static typeof(current->arch.xsave_area->fpu_sse) __initdata ctxt;
-
         xfeature_mask = feature_mask;
         /*
          * xsave_cntxt_size is the max size required by enabled features.
@@ -621,10 +631,6 @@ void xstate_init(struct cpuinfo_x86 *c)
         xsave_cntxt_size = _xstate_ctxt_size(feature_mask);
         printk("xstate: size: %#x and states: %#"PRIx64"\n",
                xsave_cntxt_size, xfeature_mask);
-
-        asm ( "fxsave %0" : "=m" (ctxt) );
-        if ( ctxt.mxcsr_mask )
-            mxcsr_mask = ctxt.mxcsr_mask;
     }
     else
     {


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:17:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafe4-0006rU-36; Mon, 18 May 2020 13:17:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=pKIP=7A=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jafe2-0006rF-J7
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:17:26 +0000
X-Inumbo-ID: ea7ec84e-9909-11ea-a863-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id ea7ec84e-9909-11ea-a863-12813bfff9fa;
 Mon, 18 May 2020 13:17:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1589807845;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=hO5s4pwgSKIz4wc7eUJFUZw07FYxYL3A4Zvi1+EiJ8I=;
 b=hDlzNGJeUPsBv0j196s3475GFJ8RvOGfURpG9GZUWm0alYDGQeK86g+TCXqFDJ17egHjwS
 OsJzWpH/GfXOtxgA+ig5EX3WUDNehAJK7AeUqa1LOW7HoX4E7DfHMJcnf2l8tn+WvEwin1
 XEVPzSTOahkY/JEFD1a91mEOVbKmJ4U=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-485-HRl-J9wqPuicx608h3qMiw-1; Mon, 18 May 2020 09:17:23 -0400
X-MC-Unique: HRl-J9wqPuicx608h3qMiw-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7F518872FE0;
 Mon, 18 May 2020 13:17:20 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-32.ams2.redhat.com
 [10.36.112.32])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 37AA22E161;
 Mon, 18 May 2020 13:17:12 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id BC6FF11358BC; Mon, 18 May 2020 15:17:10 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [PATCH v3 0/3] various: Remove unnecessary casts
References: <20200512070020.22782-1-f4bug@amsat.org>
 <871rnlsps6.fsf@dusky.pond.sub.org>
 <8791b385-8493-f81d-5ee3-cca5b8559c27@redhat.com>
Date: Mon, 18 May 2020 15:17:10 +0200
In-Reply-To: <8791b385-8493-f81d-5ee3-cca5b8559c27@redhat.com> (Paolo
 Bonzini's message of "Mon, 18 May 2020 14:40:19 +0200")
Message-ID: <87imgt9ycp.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Laurent Vivier <laurent@vivier.eu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan
 Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org, qemu-arm@nongnu.org,
 Peter Chubb <peter.chubb@nicta.com.au>,
 =?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>, John Snow <jsnow@redhat.com>,
 Richard Henderson <rth@twiddle.net>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>, qemu-ppc@nongnu.org,
 Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paolo Bonzini <pbonzini@redhat.com> writes:

> On 15/05/20 07:58, Markus Armbruster wrote:
>> Philippe Mathieu-Daudé <f4bug@amsat.org> writes:
>>
>>> Remove unnecessary casts using coccinelle scripts.
>>>
>>> The CPU()/OBJECT() patches don't introduce logical change,
>>> The DEVICE() one removes various OBJECT_CHECK() calls.
>> Queued, thanks!
>>
>> Managing expectations: I'm not a QOM maintainer, I don't want to become
>> one, and I don't normally queue QOM patches :)
>>
>
> I want to be again a QOM maintainer, but it's not the best time for me
> to be one.  So thanks for picking up my slack.

You're welcome :)



From xen-devel-bounces@lists.xenproject.org Mon May 18 13:19:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafgQ-00077G-L6; Mon, 18 May 2020 13:19:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jafgP-00077A-Hj
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:19:53 +0000
X-Inumbo-ID: 41d15d00-990a-11ea-a863-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 41d15d00-990a-11ea-a863-12813bfff9fa;
 Mon, 18 May 2020 13:19:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 25FF8B209;
 Mon, 18 May 2020 13:19:54 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v5] x86: clear RDRAND CPUID bit on AMD family 15h/16h
Message-ID: <4f76749b-54bd-7c39-6c90-279ce25cb57c@suse.com>
Date: Mon, 18 May 2020 15:19:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Inspired by Linux commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24:

    There have been reports of RDRAND issues after resuming from suspend on
    some AMD family 15h and family 16h systems. This issue stems from a BIOS
    not performing the proper steps during resume to ensure RDRAND continues
    to function properly.

    Update the CPU initialization to clear the RDRAND CPUID bit for any family
    15h and 16h processor that supports RDRAND. If it is known that the family
    15h or family 16h system does not have an RDRAND resume issue or that the
    system will not be placed in suspend, the "cpuid=rdrand" kernel parameter
    can be used to stop the clearing of the RDRAND CPUID bit.

    Note, that clearing the RDRAND CPUID bit does not prevent a processor
    that normally supports the RDRAND instruction from executing it. So any
    code that determined the support based on family and model won't #UD.

Warn if no explicit choice was given on affected hardware.

Check that RDRAND functions at boot as well as after S3 resume (the
retry limit chosen is entirely arbitrary).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
Still slightly RFC, and still in particular because of the change to
parse_xen_cpuid(): alternative approach suggestions are welcome. But now
also because, with many CPUs, there may be a lot of warnings in case
of issues.
---
v5: Extend a comment. Drop cpu_relax(). Mark is_forced_cpu_cap() __init.
v4: Check always, including during boot. Slightly better sanity check,
    inspired by Linux commit 7879fc4bdc7.
v3: Add call to warning_add(). If force-enabled, check RDRAND still
    functioning after S3 resume.
v2: Re-base.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -488,6 +488,10 @@ The Speculation Control hardware feature
 be ignored, e.g. `no-ibrsb`, at which point Xen won't use them itself, and
 won't offer them to guests.
 
+`rdrand` can be used to override the default disabling of the feature on certain
+AMD systems.  Its negative form can of course also be used to suppress use and
+exposure of the feature.
+
 ### cpuid_mask_cpu
 > `= fam_0f_rev_[cdefg] | fam_10_rev_[bc] | fam_11_rev_b`
 
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -4,6 +4,7 @@
 #include <xen/param.h>
 #include <xen/smp.h>
 #include <xen/pci.h>
+#include <xen/warning.h>
 #include <asm/io.h>
 #include <asm/msr.h>
 #include <asm/processor.h>
@@ -747,6 +748,26 @@ static void init_amd(struct cpuinfo_x86
 		if (acpi_smi_cmd && (acpi_enable_value | acpi_disable_value))
 			amd_acpi_c1e_quirk = true;
 		break;
+
+	case 0x15: case 0x16:
+		/*
+		 * There are some Fam15/Fam16 systems where upon resume from S3
+		 * firmware fails to re-setup properly functioning RDRAND.
+		 * By the time we can spot the problem, it is too late to take
+		 * action, and there is nothing Xen can do to repair the problem.
+		 * Clear the feature unless force-enabled on the command line.
+		 */
+		if (c == &boot_cpu_data &&
+		    cpu_has(c, X86_FEATURE_RDRAND) &&
+		    !is_forced_cpu_cap(X86_FEATURE_RDRAND)) {
+			static const char __initconst text[] =
+				"RDRAND may cease to work on this hardware upon resume from S3.\n"
+				"Please choose an explicit cpuid={no-}rdrand setting.\n";
+
+			setup_clear_cpu_cap(X86_FEATURE_RDRAND);
+			warning_add(text);
+		}
+		break;
 	}
 
 	display_cacheinfo(c);
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -11,6 +11,7 @@
 #include <asm/io.h>
 #include <asm/mpspec.h>
 #include <asm/apic.h>
+#include <asm/random.h>
 #include <asm/setup.h>
 #include <mach_apic.h>
 #include <public/sysctl.h> /* for XEN_INVALID_{SOCKET,CORE}_ID */
@@ -98,6 +99,11 @@ void __init setup_force_cpu_cap(unsigned
 	__set_bit(cap, boot_cpu_data.x86_capability);
 }
 
+bool __init is_forced_cpu_cap(unsigned int cap)
+{
+	return test_bit(cap, forced_caps);
+}
+
 static void default_init(struct cpuinfo_x86 * c)
 {
 	/* Not much we can do here... */
@@ -498,6 +504,27 @@ void identify_cpu(struct cpuinfo_x86 *c)
 	printk("\n");
 #endif
 
+	/*
+	 * If RDRAND is available, make an attempt to check that it actually
+	 * (still) works.
+	 */
+	if (cpu_has(c, X86_FEATURE_RDRAND)) {
+		unsigned int prev = 0;
+
+		for (i = 0; i < 5; ++i)
+		{
+			unsigned int cur = arch_get_random();
+
+			if (prev && cur != prev)
+				break;
+			prev = cur;
+		}
+
+		if (i >= 5)
+			printk(XENLOG_WARNING "CPU%u: RDRAND appears to not work\n",
+			       smp_processor_id());
+	}
+
 	if (system_state == SYS_STATE_resume)
 		return;
 
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -72,6 +72,9 @@ static int __init parse_xen_cpuid(const
             {
                 if ( !val )
                     setup_clear_cpu_cap(mid->bit);
+                else if ( mid->bit == X86_FEATURE_RDRAND &&
+                          (cpuid_ecx(1) & cpufeat_mask(X86_FEATURE_RDRAND)) )
+                    setup_force_cpu_cap(X86_FEATURE_RDRAND);
                 mid = NULL;
             }
 
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -165,6 +165,7 @@ extern const struct x86_cpu_id *x86_matc
 extern void identify_cpu(struct cpuinfo_x86 *);
 extern void setup_clear_cpu_cap(unsigned int);
 extern void setup_force_cpu_cap(unsigned int);
+extern bool is_forced_cpu_cap(unsigned int);
 extern void print_cpu_info(unsigned int cpu);
 extern void init_intel_cacheinfo(struct cpuinfo_x86 *c);
 


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:22:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:22:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafj3-0007rv-2r; Mon, 18 May 2020 13:22:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZwhG=7A=kaod.org=clg@srs-us1.protection.inumbo.net>)
 id 1jafj2-0007rp-4v
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:22:36 +0000
X-Inumbo-ID: a2964eca-990a-11ea-9887-bc764e2007e4
Received: from 19.mo5.mail-out.ovh.net (unknown [46.105.35.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2964eca-990a-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 13:22:34 +0000 (UTC)
Received: from player732.ha.ovh.net (unknown [10.110.103.118])
 by mo5.mail-out.ovh.net (Postfix) with ESMTP id 21FA827C242
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 15:22:33 +0200 (CEST)
Received: from kaod.org (82-64-250-170.subs.proxad.net [82.64.250.170])
 (Authenticated sender: clg@kaod.org)
 by player732.ha.ovh.net (Postfix) with ESMTPSA id D7420125CCEDC;
 Mon, 18 May 2020 13:21:52 +0000 (UTC)
Authentication-Results: garm.ovh; auth=pass
 (GARM-99G003b3c596e5-fc65-4a7f-a892-87f5b49a92ea,2E2A9519E3FAC9D985F861812C9F86F7BE89492F)
 smtp.auth=clg@kaod.org
Subject: Re: [PATCH v3 0/3] various: Remove unnecessary casts
To: Markus Armbruster <armbru@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>
References: <20200512070020.22782-1-f4bug@amsat.org>
 <871rnlsps6.fsf@dusky.pond.sub.org>
 <8791b385-8493-f81d-5ee3-cca5b8559c27@redhat.com>
 <87imgt9ycp.fsf@dusky.pond.sub.org>
From: =?UTF-8?Q?C=c3=a9dric_Le_Goater?= <clg@kaod.org>
Message-ID: <2f4607cf-90a9-ca9a-4ef6-a8358631cdf0@kaod.org>
Date: Mon, 18 May 2020 15:21:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <87imgt9ycp.fsf@dusky.pond.sub.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Ovh-Tracer-Id: 10095100040702757704
X-VR-SPAMSTATE: OK
X-VR-SPAMSCORE: -100
X-VR-SPAMCAUSE: gggruggvucftvghtrhhoucdtuddrgeduhedruddthedgieefucetufdoteggodetrfdotffvucfrrhhofhhilhgvmecuqfggjfdpvefjgfevmfevgfenuceurghilhhouhhtmecuhedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmnecujfgurhepuffvfhfhkffffgggjggtgfesthekredttdefjeenucfhrhhomhepveorughrihgtpgfnvggpifhorghtvghruceotghlgheskhgrohgurdhorhhgqeenucggtffrrghtthgvrhhnpeffhfffudegjeeggedugeefgeeifffhueethefhfeekkedvkefggfelteefuddvteenucffohhmrghinhepohiilhgrsghsrdhorhhgnecukfhppedtrddtrddtrddtpdekvddrieegrddvhedtrddujedtnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmohguvgepshhmthhpqdhouhhtpdhhvghlohepphhlrgihvghrjeefvddrhhgrrdhovhhhrdhnvghtpdhinhgvtheptddrtddrtddrtddpmhgrihhlfhhrohhmpegtlhhgsehkrghougdrohhrghdprhgtphhtthhopeigvghnqdguvghvvghlsehlihhsthhsrdigvghnphhrohhjvggtthdrohhrgh
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Laurent Vivier <laurent@vivier.eu>, Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 John Snow <jsnow@redhat.com>, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>, qemu-ppc@nongnu.org,
 Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/18/20 3:17 PM, Markus Armbruster wrote:
> Paolo Bonzini <pbonzini@redhat.com> writes:
> 
>> On 15/05/20 07:58, Markus Armbruster wrote:
>>> Philippe Mathieu-Daudé <f4bug@amsat.org> writes:
>>>
>>>> Remove unnecessary casts using coccinelle scripts.
>>>>
>>>> The CPU()/OBJECT() patches don't introduce logical change,
>>>> The DEVICE() one removes various OBJECT_CHECK() calls.
>>> Queued, thanks!
>>>
>>> Managing expectations: I'm not a QOM maintainer, I don't want to become
>>> one, and I don't normally queue QOM patches :)
>>>

>>
>> I want to be again a QOM maintainer, but it's not the best time for me
>> to be one.  So thanks for picking up my slack.
> 
> You're welcome :)

Could you help me getting this patch merged ? :)

http://patchwork.ozlabs.org/project/qemu-devel/patch/20200404153340.164861-1-clg@kaod.org/

Thanks,

C. 


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:26:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:26:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafmH-00082x-K1; Mon, 18 May 2020 13:25:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPYZ=7A=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jafmG-00082s-CL
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:25:56 +0000
X-Inumbo-ID: 1a06211a-990b-11ea-b9cf-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a06211a-990b-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 13:25:55 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id g1so9826500ljk.7
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 06:25:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=from:to:cc:subject:date:message-id;
 bh=oD7uBg2txWw1buup5dqXIpdnCiWGkgpK8zE5/uRvEwE=;
 b=ezGbIJQAfRx8928jC+cxdXYjth9HudpYFjWKUt4Z+S1tlIHOEW6b0n/X3VyKttQaxi
 kxRt208N22OpoWYK1EPvdWvLRIRwLwIPk8A0s48nMBDMxaD+J4/d//LaNusGpggXeIs5
 OTjw7rJ5Cvfvd06DxQDidXFKgDyoOeIEfhggbWkGv4jDDIqIh2W7vwMSk8GFhZAalv9j
 SW/7US57rbnFNfe9ah3U1G1vosR5iqJVxbWYvanceeDFgZ5qxhFwj9b9Y0PEG+4F2bY4
 0LGzLRBFogXBpr/UAV/lKxREd1ghKZqWitegkILOiLDjvKj8EwAIfyVlldUgSuhXky1o
 aFAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=oD7uBg2txWw1buup5dqXIpdnCiWGkgpK8zE5/uRvEwE=;
 b=TQAYFwaua/WR6uZuRI+LcIDN5DGNJuOrn0bJoo+cI5/Wo00lnNg4mzP9KYcXdQUkZw
 snxeYJs1269LTgPvP/lMnXiVRp+b+svsA0AHqqcXOBdyNC1/PeG4VEqA1ueroC8RpcSo
 v5stxGUqtYyUFuj/DOHuTNfjcQ/i/Rdi8m7oJDWAFDBkPdZyhWD9dNo9VkZCcyo8JwhH
 Lmw3NVkoreJ6XzjIZ/lAMRBMJTcCSJ/Uk/pUxLl9s6YUz35L6N3oR/sX7kdOkQecd5YA
 wK/XJJkTuFCmPITdpoTeOdX7KlCazes1zqQqkG9xoGPwiRxTDNCmnFuTzEqy2goo/wJh
 K9NA==
X-Gm-Message-State: AOAM53299HdoyijC8RY86iqwGnD/d46hsesgkh0tVu2ZwW/UyZ8w+OeO
 gAp/O250cTyi0JNpqLD3K/MUmK9G1Y/dpQ==
X-Google-Smtp-Source: ABdhPJwgmPvVIaKaHrdzHdyIkysHoQ6rTNQ5uTcrxb9pduG2EXVnpyx/jV2XpvFdWzs8R4imez8c1w==
X-Received: by 2002:a2e:b177:: with SMTP id a23mr10430252ljm.140.1589808354123; 
 Mon, 18 May 2020 06:25:54 -0700 (PDT)
Received: from centos7-pv-guest.localdomain ([5.35.46.227])
 by smtp.gmail.com with ESMTPSA id u8sm7078694lff.38.2020.05.18.06.25.52
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 06:25:53 -0700 (PDT)
From: Denis Kirjanov <kda@linux-powerpc.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3] public/io/netif.h: add a new extra type for XDP
Date: Mon, 18 May 2020 16:25:44 +0300
Message-Id: <1589808344-1687-1-git-send-email-kda@linux-powerpc.org>
X-Mailer: git-send-email 1.8.3.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The patch adds a new extra type to be able to differentiate
between RX responses on the xen-netfront side with the adjusted offset
required for XDP processing.

The offset value from a guest is passed via xenstore.

Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
---
v3:
- updated the commit message

v2:
- added documentation
- fixed padding for netif_extra_info
---
 xen/include/public/io/netif.h | 14 +++++++++++++-
 1 file changed, 13 insertions(+), 1 deletion(-)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index 9fcf91a..ec56a15 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -161,6 +161,13 @@
  */
 
 /*
+ * "netfront-xdp-headroom" is used to add an extra space before packet data
+ * for XDP processing. The value is passed by the frontend to be consistent
+ * between both ends. If the value is greater than zero that means that
+ * an RX response is going to be passed to an XDP program for processing.
+ */
+
+/*
  * Control ring
  * ============
  *
@@ -985,7 +992,8 @@ typedef struct netif_tx_request netif_tx_request_t;
 #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
-#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
+#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
+#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
 
 /* netif_extra_info_t flags. */
 #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
@@ -1018,6 +1026,10 @@ struct netif_extra_info {
             uint8_t algorithm;
             uint8_t value[4];
         } hash;
+        struct {
+            uint16_t headroom;
+            uint16_t pad[2];
+        } xdp;
         uint16_t pad[3];
     } u;
 };
-- 
1.8.3.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 13:39:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:39:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jafz4-0000ZH-Q3; Mon, 18 May 2020 13:39:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VSjt=7A=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jafz2-0000ZA-KA
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:39:08 +0000
X-Inumbo-ID: f25ed04c-990c-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f25ed04c-990c-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 13:39:07 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id v12so11863829wrp.12
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 06:39:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=L9cL93kWhx2dXPlqi8FddZceJcA0i/90HizEBlALI1M=;
 b=N7Na9MfIYpylEkeu1Hd/8vTXagEicCuBJYnpWIUom9mplR1VAv4aCPFJQ3AaeyMq56
 RX6RuTYHVGqGUczYNzaQNp2yklux33GOHTVcV5tkqubpkBhsh7VG0KDz0ADnPAgSSdBQ
 oW4o0EWNvamTbskPYMC+Qv4kOqGzFp8Dm3z8/h9oYIqaFibAubvx3ItlAK6mGRLESpeE
 8hdJRExJRqTa5VDPcjtYPGRmNBVwkn1vLjoWSZiyijVLO28YN1isuD7zUAbRCohuLmQF
 H8F5Inc2Hbm8Y5JZgkg9xO8w0oz0D8UX7zC4rC9X0HfPZ5vOROnZdAyXfSYk2X9sKOAo
 /dKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=L9cL93kWhx2dXPlqi8FddZceJcA0i/90HizEBlALI1M=;
 b=DNa8WpEvQDtLPYB1tKnlh/LY2MiopFpfWLLNIuL8hkL1cAC9Vq3EM3XbuzSxzg429w
 pAQInN4BWcV7YWBsNjb6ZSXc59gyhbhRcUUnbUHQhY3MMqYlZCkKTILALAvl86U/epXP
 LltaoJK1zK+eMr/tns8PgWoKbWfN9qrwtU5h55PqTP4iYelo8NskJ4Pjp03TjZQMmidn
 1Ku9y7Oz0/BKulnO3nEPDNxcR0tpWRypGafmFIuThrcTI98rED6LSk01XFs9cxAHa4Vo
 RdmJ9SllXH3fFoA+KsMwP3FE+0a+WYusK5tn/D4vM1gOeTt3UDI6ixQnRdTCdUnzLDbX
 rFhg==
X-Gm-Message-State: AOAM5338ZZv7NLbMOf9Bst2jVJ5p/s5Vf4Rn3GzEUjMkXIrxkUHn9KhE
 Z+b5yGp5cpu6tne52VyrWIA=
X-Google-Smtp-Source: ABdhPJxojX7SQA/wC2mYaCfEOdG3klbyhQgEwh+47mEdcKVfzg4jTFokDYYCAD2ynV9FwhlrLqDIzQ==
X-Received: by 2002:adf:fac4:: with SMTP id a4mr19058998wrs.134.1589809147010; 
 Mon, 18 May 2020 06:39:07 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.188])
 by smtp.gmail.com with ESMTPSA id 205sm14298083wmc.12.2020.05.18.06.39.05
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 06:39:06 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Denis Kirjanov'" <kda@linux-powerpc.org>,
 <xen-devel@lists.xenproject.org>
References: <1589808344-1687-1-git-send-email-kda@linux-powerpc.org>
In-Reply-To: <1589808344-1687-1-git-send-email-kda@linux-powerpc.org>
Subject: RE: [PATCH v3] public/io/netif.h: add a new extra type for XDP
Date: Mon, 18 May 2020 14:39:05 +0100
Message-ID: <000001d62d19$b3866510$1a932f30$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFrOVL0IY3PjX3dJoEC4QuxiCQecqmDvxBg
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: jgross@suse.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Denis Kirjanov <kda@linux-powerpc.org>
> Sent: 18 May 2020 14:26
> To: xen-devel@lists.xenproject.org
> Cc: paul@xen.org; jgross@suse.com
> Subject: [PATCH v3] public/io/netif.h: add a new extra type for XDP
> 
> The patch adds a new extra type to be able to differentiate
> between RX responses on the xen-netfront side with the adjusted offset
> required for XDP processing.
> 
> The offset value from a guest is passed via xenstore.
> 
> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
> ---
> v3:
> - updated the commit message
> 
> v2:
> - added documentation
> - fixed padding for netif_extra_info
> ---
>  xen/include/public/io/netif.h | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> index 9fcf91a..ec56a15 100644
> --- a/xen/include/public/io/netif.h
> +++ b/xen/include/public/io/netif.h
> @@ -161,6 +161,13 @@
>   */
> 
>  /*
> + * "netfront-xdp-headroom" is used to add an extra space before packet data
> + * for XDP processing. The value is passed by the frontend to be consistent
> + * between both ends. If the value is greater than zero that means that
> + * an RX response is going to be passed to an XDP program for processing.
> + */

I think 'used to add extra space' is probably the wrong phrase. How about 'is used to request that extra space is added'?
It also does not state what unit the value is specified in, so you need something to clarify that. I also don't understand what "The
value is passed by the frontend to be consistent between both ends" means. What happens if the backend is older and does not know
what this key means?

  Paul

> +
> +/*
>   * Control ring
>   * ============
>   *
> @@ -985,7 +992,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>  #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>  #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>  #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
> 
>  /* netif_extra_info_t flags. */
>  #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
> @@ -1018,6 +1026,10 @@ struct netif_extra_info {
>              uint8_t algorithm;
>              uint8_t value[4];
>          } hash;
> +        struct {
> +            uint16_t headroom;
> +            uint16_t pad[2];
> +        } xdp;
>          uint16_t pad[3];
>      } u;
>  };
> --
> 1.8.3.1




From xen-devel-bounces@lists.xenproject.org Mon May 18 13:43:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:43:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jag38-0001NR-Ba; Mon, 18 May 2020 13:43:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ws3m=7A=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jag37-0001NM-5k
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:43:21 +0000
X-Inumbo-ID: 88145346-990d-11ea-a867-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 88145346-990d-11ea-a867-12813bfff9fa;
 Mon, 18 May 2020 13:43:19 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: iPDJMvHDM8739jgWth+7bNACdGvlAPah29U/UnnktOeRjWJh3q0uiNGfw68V3DbPh54w8CVTeb
 SJ2bfr0+P2frQ2rNyRC+4d3Vy+yMe9+VmQAmGR2mQvQ5Unc2er8d3G/xob20P+0A41THAc6qSL
 Qs4dE2m+57XjpZMHLSBRZ/FzwZlew+OLXjw3Jh5FqvFZci64dpPY347WW5Jzv7oWXxrXe1y9Jk
 KZdqFgtvg/3p3oRgd/N7TPVO0Z/wGY8LTMYdzElq4cz+PEX1LoVe6tGpplr9Cx6UkAYO5WeOr9
 WcQ=
X-SBRS: 2.7
X-MesageID: 18486562
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18486562"
Subject: Re: [PATCH] x86: determine MXCSR mask in all cases
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <687f8a71-5c5c-c95e-146d-8f38211e5e00@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f49bc57d-69dc-daba-07a3-016c4786c919@citrix.com>
Date: Mon, 18 May 2020 14:43:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <687f8a71-5c5c-c95e-146d-8f38211e5e00@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18/05/2020 14:16, Jan Beulich wrote:
> For its use(s) by the emulator to be correct in all cases, the filling
> of the variable needs to be independent of XSAVE availability. As
> there's no suitable function in i387.c to put the logic in, keep it in
> xstate_init(), arrange for the function to be called unconditionally,
> and pull the logic ahead of all return paths there.
>
> Fixes: 9a4496a35b20 ("x86emul: support {,V}{LD,ST}MXCSR")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:48:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:48:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jag8F-0001ct-C8; Mon, 18 May 2020 13:48:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPYZ=7A=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jag8D-0001cf-Ow
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:48:37 +0000
X-Inumbo-ID: 45b9a482-990e-11ea-b07b-bc764e2007e4
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45b9a482-990e-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 13:48:37 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id n24so885197ejd.0
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 06:48:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=mime-version:in-reply-to:references:from:date:message-id:subject:to
 :cc; bh=R49DYIk8gyxYrhl4FSK3W75EuNxo2wOsvTGnhEq4JfU=;
 b=b+VZR+XtXiIYNliszTZlNWqkslMz6n81+V5cVXg9putYBJpmO1P6rNvsQQP5Dw/Fmd
 M5KyAAr6tFIPhaN5l3kLmPKq1QhNfPW+DWKTSsabVsrXbL5iDuw6C+fPRHWU8JNcmRgD
 zHLqejt7wMjzaCWNHhQ9hK5bTq0ESqy4AdaZijJnZytBU0lJHtafvh82scuz739gZmqn
 KWqLHhrHqBVfaA5O/0t+FJIusFkZokgZ/K6w14Iqcf3sOgYdJ1x7ea3xaBRpcYOIYV1g
 lvqZE0oNfv/FZoAOR8QhOEA9Z0gHw5lFoSUAmYfEPf6sp/oGp/fRbIC5fB178ZpuwC/9
 UA5A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:in-reply-to:references:from:date
 :message-id:subject:to:cc;
 bh=R49DYIk8gyxYrhl4FSK3W75EuNxo2wOsvTGnhEq4JfU=;
 b=m7PAWzzF8xRSUUu97d9AvBxGCv/VzQmrwleznYi7ebT/TUhKFz2F20orj6FKfazKtK
 UDz72MGyd1KxeyhQ5M+8OPQa6y31Ni9/Nvb/jtScvHZKq1bR0sQzH6g/KBuSySSGfQFN
 8vYCEzE28xpK2dMv/pQgroXsVTshro0vrqnJBu2VTvnEY+2AdUoI4JyjqWVQPjcLQCA4
 ZxlbHlBrSFA981uXh78Xy/S5TUfarhW2fAcZlxGnX4KghlTxTtGv4kVe2DAJBrOGbZ9C
 AzRWX+Hro19NKegD7fkRbMw8pWi2IZfk4AaLaVZyFTYEx/zIPRHHONFwVveehVVoT40y
 QSNQ==
X-Gm-Message-State: AOAM531lhc9aIVAoF7YMvN6RkHnDwN4v1J/RR81nDFovczxgMvEhbQQh
 AziwtEOjpfavWMs0ZPzGaBRfA2ypR17rViTpzyqKuQ==
X-Google-Smtp-Source: ABdhPJxpLscWtuMpNcYVCo8WVvccspGP2jZgzCvt+WybYfC+eyFdfWnKmHS8cnDhfwf98RL0sHcDjDnGvuuBSHclpTs=
X-Received: by 2002:a17:906:f53:: with SMTP id
 h19mr2041779ejj.343.1589809716216; 
 Mon, 18 May 2020 06:48:36 -0700 (PDT)
MIME-Version: 1.0
Received: by 2002:a50:34e:0:0:0:0:0 with HTTP;
 Mon, 18 May 2020 06:48:35 -0700 (PDT)
X-Originating-IP: [5.35.46.227]
In-Reply-To: <000001d62d19$b3866510$1a932f30$@xen.org>
References: <1589808344-1687-1-git-send-email-kda@linux-powerpc.org>
 <000001d62d19$b3866510$1a932f30$@xen.org>
From: Denis Kirjanov <kda@linux-powerpc.org>
Date: Mon, 18 May 2020 16:48:35 +0300
Message-ID: <CAOJe8K3RTjNZyNBZqs56GKk-yaKwB4svOmgw9xt3vjzH6r5fBg@mail.gmail.com>
Subject: Re: [PATCH v3] public/io/netif.h: add a new extra type for XDP
To: paul@xen.org
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/18/20, Paul Durrant <xadimgnik@gmail.com> wrote:
>> -----Original Message-----
>> From: Denis Kirjanov <kda@linux-powerpc.org>
>> Sent: 18 May 2020 14:26
>> To: xen-devel@lists.xenproject.org
>> Cc: paul@xen.org; jgross@suse.com
>> Subject: [PATCH v3] public/io/netif.h: add a new extra type for XDP
>>
>> The patch adds a new extra type to be able to differentiate
>> between RX responses on xen-netfront side with the adjusted offset
>> required for XDP processing.
>>
>> The offset value from a guest is passed via xenstore.
>>
>> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
>> ---
>> v3:
>> - updated the commit message
>>
>> v2:
>> - added documentation
>> - fixed padding for netif_extra_info
>> ---
>>  xen/include/public/io/netif.h | 14 +++++++++++++-
>>  1 file changed, 13 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/include/public/io/netif.h
>> b/xen/include/public/io/netif.h
>> index 9fcf91a..ec56a15 100644
>> --- a/xen/include/public/io/netif.h
>> +++ b/xen/include/public/io/netif.h
>> @@ -161,6 +161,13 @@
>>   */
>>
>>  /*
>> + * "netfront-xdp-headroom" is used to add an extra space before packet
>> data
>> + * for XDP processing. The value is passed by the frontend to be
>> consistent
>> + * between both ends. If the value is greater than zero that means that
>> + * an RX response is going to be passed to an XDP program for
>> processing.
>> + */
>
> I think 'used to add extra space' is probably the wrong phrase. How about
> 'is used to request that extra space is added'
> It also does not state what unit the value is specified in so you need
> something to clarify that.
Ok.

> I also don't understand what "The
> value is passed by the frontend to be consistent between both ends" means.
> What happens if the backend is older and does not know
> what this key means?

Looks like it also has to be stated here, since I've added another value,
"feature-xdp-headroom", which is set by the netback side.

>
>   Paul
>
>> +
>> +/*
>>   * Control ring
>>   * ============
>>   *
>> @@ -985,7 +992,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>>  #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>>  #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>>  #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
>> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
>> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
>> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>>
>>  /* netif_extra_info_t flags. */
>>  #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
>> @@ -1018,6 +1026,10 @@ struct netif_extra_info {
>>              uint8_t algorithm;
>>              uint8_t value[4];
>>          } hash;
>> +        struct {
>> +            uint16_t headroom;
>> +            uint16_t pad[2];
>> +        } xdp;
>>          uint16_t pad[3];
>>      } u;
>>  };
>> --
>> 1.8.3.1
>
>
>


From xen-devel-bounces@lists.xenproject.org Mon May 18 13:57:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 13:57:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagGO-0002Zv-9q; Mon, 18 May 2020 13:57:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VSjt=7A=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jagGN-0002Zq-RF
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 13:57:04 +0000
X-Inumbo-ID: 72fb36bc-990f-11ea-ae69-bc764e2007e4
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72fb36bc-990f-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 13:57:02 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id s19so8487704edt.12
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 06:57:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=3SAte+cpgCwI3kheSBfJNfdqojatNOS0CydK0J3mIYE=;
 b=tqRSh/V3JWbgJEOCSomL5uJnkcsXvciTKLoh+/c4bxOhJyHiVISl2GX5Q8fiUMbcqV
 whRw0ONCNhelpfRw02JjDCBKujPnugbhEVL9Ib7g2g+TKWF2tGi8K/l5GaA10W3elVWl
 TkrwebocbUfbPRsZ3DJ9KseJTWyXZRIfRglesVDDv9hSSQxwGLqgNEEKY21pTxHIAUv5
 8N6yWPDFye6DDbSchWX0Y+CZrHmD3iHB3q3MientWoJANU0gvLJMj2aAXY9833jpdu5z
 2VnGmK8mKsatepABJ+3N/E4zG+X1RYm/oCbh7gzLJaaBxkdZ17Z/ue4znvtjkLSY4hgW
 SbgQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=3SAte+cpgCwI3kheSBfJNfdqojatNOS0CydK0J3mIYE=;
 b=umn/xhXeBcuIHB4YVKVRmoHrbLJvHQdQHYAQLi/1AA7aDdrfITN8BUjYj3T4QlBQlt
 49iG7YQC0f4y5FYnhzoIng/tpQtTTXeAW5nQEFfKvpsPIlVZjt6mu8ZQqt1Pwbau+OM7
 iIQDw5lfKTUYHmSGn/fmBs4cuIQMNQwDExl6EY2X+V8v0fhiJSBXlBx5nrkUm0YovNqA
 InaHXz1Qkwf9ZICryv4SRzM18Nl9ToX6NM4SA5mtR8CgHoQHpw0zzjn//0qxMD8D4xSA
 f8pzGJSR+oBHdt4oz/Ww5dYmEwNYlNosxCLjoGc9DXPuSPS8ZQ3PjWs0zsw3o4VAqh+9
 iP4g==
X-Gm-Message-State: AOAM530xPwGMINXcH1jjenUl/KJaBL3JaWPnooNw6XXhBYMZdNCnvdOC
 pTIG4mtvlJfFmc3m4USFjrE=
X-Google-Smtp-Source: ABdhPJwpPri8GDUkkvkg00119KieLRvTPzuNGQ1kXd0eWrQ375lzIAKWrCVEXws+VvfppvO9eTEgfw==
X-Received: by 2002:a05:6402:30b2:: with SMTP id
 df18mr13247439edb.323.1589810221681; 
 Mon, 18 May 2020 06:57:01 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id c16sm1063898edv.88.2020.05.18.06.57.00
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 06:57:00 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Denis Kirjanov'" <kda@linux-powerpc.org>
References: <1589808344-1687-1-git-send-email-kda@linux-powerpc.org>
 <000001d62d19$b3866510$1a932f30$@xen.org>
 <CAOJe8K3RTjNZyNBZqs56GKk-yaKwB4svOmgw9xt3vjzH6r5fBg@mail.gmail.com>
In-Reply-To: <CAOJe8K3RTjNZyNBZqs56GKk-yaKwB4svOmgw9xt3vjzH6r5fBg@mail.gmail.com>
Subject: RE: [PATCH v3] public/io/netif.h: add a new extra type for XDP
Date: Mon, 18 May 2020 14:56:59 +0100
Message-ID: <000101d62d1c$33f5a880$9be0f980$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFrOVL0IY3PjX3dJoEC4QuxiCQecgFN2F+HAduvI1epaniuwA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: jgross@suse.com, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Denis Kirjanov <kda@linux-powerpc.org>
> Sent: 18 May 2020 14:49
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; jgross@suse.com
> Subject: Re: [PATCH v3] public/io/netif.h: add a new extra type for XDP
>
> On 5/18/20, Paul Durrant <xadimgnik@gmail.com> wrote:
> >> -----Original Message-----
> >> From: Denis Kirjanov <kda@linux-powerpc.org>
> >> Sent: 18 May 2020 14:26
> >> To: xen-devel@lists.xenproject.org
> >> Cc: paul@xen.org; jgross@suse.com
> >> Subject: [PATCH v3] public/io/netif.h: add a new extra type for XDP
> >>
> >> The patch adds a new extra type to be able to differentiate
> >> between RX responses on xen-netfront side with the adjusted offset
> >> required for XDP processing.
> >>
> >> The offset value from a guest is passed via xenstore.
> >>
> >> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
> >> ---
> >> v3:
> >> - updated the commit message
> >>
> >> v2:
> >> - added documentation
> >> - fixed padding for netif_extra_info
> >> ---
> >>  xen/include/public/io/netif.h | 14 +++++++++++++-
> >>  1 file changed, 13 insertions(+), 1 deletion(-)
> >>
> >> diff --git a/xen/include/public/io/netif.h
> >> b/xen/include/public/io/netif.h
> >> index 9fcf91a..ec56a15 100644
> >> --- a/xen/include/public/io/netif.h
> >> +++ b/xen/include/public/io/netif.h
> >> @@ -161,6 +161,13 @@
> >>   */
> >>
> >>  /*
> >> + * "netfront-xdp-headroom" is used to add an extra space before packet
> >> data
> >> + * for XDP processing. The value is passed by the frontend to be
> >> consistent
> >> + * between both ends. If the value is greater than zero that means that
> >> + * an RX response is going to be passed to an XDP program for
> >> processing.
> >> + */
> >
> > I think 'used to add extra space' is probably the wrong phrase. How about
> > 'is used to request that extra space is added'
> > It also does not state what unit the value is specified in so you need
> > something to clarify that.
> Ok.
>
> > I also don't understand what "The
> > value is passed by the frontend to be consistent between both ends" means.
> > What happens if the backend is older and does not know
> > what this key means?
>
> Looks like it also has to be stated here, since I've added another value,
> "feature-xdp-headroom", which is set by the netback side.
>

Yeah, that needs to be mentioned. I also think you should drop the
'netfront' out of the name. I think simply 'xdp-headroom' would be fine.

  Paul

> >
> >   Paul
> >
> >> +
> >> +/*
> >>   * Control ring
> >>   * ============
> >>   *
> >> @@ -985,7 +992,8 @@ typedef struct netif_tx_request netif_tx_request_t;
> >>  #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
> >>  #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
> >>  #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
> >> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
> >> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
> >> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
> >>
> >>  /* netif_extra_info_t flags. */
> >>  #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
> >> @@ -1018,6 +1026,10 @@ struct netif_extra_info {
> >>              uint8_t algorithm;
> >>              uint8_t value[4];
> >>          } hash;
> >> +        struct {
> >> +            uint16_t headroom;
> >> +            uint16_t pad[2];
> >> +        } xdp;
> >>          uint16_t pad[3];
> >>      } u;
> >>  };
> >> --
> >> 1.8.3.1
> >
> >
> >



From xen-devel-bounces@lists.xenproject.org Mon May 18 14:15:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:15:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagYE-0004Pk-R9; Mon, 18 May 2020 14:15:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jagYD-0004Pf-Hw
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:15:29 +0000
X-Inumbo-ID: 05fb1a52-9912-11ea-a86a-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05fb1a52-9912-11ea-a86a-12813bfff9fa;
 Mon, 18 May 2020 14:15:28 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: BY2xgvnlCalyodz+iXE45gv4FW9rpW8N0PDth5UsoT4uc30prNkruBZA3qODL4gOUoSOIxLr7p
 fPgAAF2jB/gA82R0n06my6dq7XVTaoifauTrrHHzBmA66m/Hd5/j8xjFGCMST2kBJDMroWlzWH
 eQPn/nCkyOWqUGylu6XlmHI952EiWzXrcZN57wqTrewWqWdaxYtjRVnSZkX88nklouo/+brlpZ
 VPEBZpeKQAC6jEyKIpWdfet1wcKDFnot9Use0tq32PR+ndLY1/nfuGP6Qq9SMfs/Y7eXCwmXBG
 YbE=
X-SBRS: 2.7
X-MesageID: 18154567
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18154567"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24258.39029.788968.419649@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 15:15:17 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain
In-Reply-To: <CAKf6xpvJMovKMTWipC4gZuBD8FgmBEWbDbkm=ryRWSxNifQcJw@mail.gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-10-jandryuk@gmail.com>
 <24253.29524.798802.978257@mariner.uk.xensource.com>
 <CAKf6xpvJMovKMTWipC4gZuBD8FgmBEWbDbkm=ryRWSxNifQcJw@mail.gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> On Thu, May 14, 2020 at 12:35 PM Ian Jackson <ian.jackson@citrix.com> wrote:
> > I suggest randomly allocating one in the range [64,192).  My random
> > number generator picked 119.  So 118 and 119 ?
> 
> This makes sense and would be the easiest change.

Cool.

> > Also, why couldn't your wrapper script add this argument ?  If you do
> > that there then there is one place that knows the fd number and a
> > slightly less tortuous linkage between libxl and the script...
> 
> I like this idea, but there is a complication.  "-incoming" is only
> added when performing a restore, so it cannot just be blindly added to
> all qemu command lines in the stubdom.  Two options I see are to
> either communicate a restore some other way (so the stubdom scripts
> can add the appropriate option), or pass something through dm_args, but
> let the script convert it into something usable.
> 
> There is "-incoming defer" where we can later specify
> "migrate_incoming fd:119".  Another option is to `sed
> s/defer/fd:119/`, but that is a little tricky since we need to look at
> the preceding key to know if we should sed the second.  We could pass
> only "-incoming" and require the stubdom script to modify that option.
> 
> I haven't tested any of this.

Erk.  I see now why you did it the way you did !

> > It's not stated anywhere here that I can see but I think what is
> > happening here is that your wrapper script knows the qemu savefile
> > pathname and reads it directly.  Maybe a comment would be
> > worthwhile ?
> 
> The existing comment "Linux stubdomain connects specific FD to
> STUBDOM_CONSOLE_RESTORE" is trying to state that.
> STUBDOM_CONSOLE_RESTORE is defined as 2 for console 2 (/dev/hvc2), but
> it is only a libxl_internal.h define.

Err, by "the qemu savefile pathname" I meant the pathname in dom0.
I assume your wrapper script opens that and feeds it to the console.
Is that right ?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:20:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:20:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagck-0005E6-Di; Mon, 18 May 2020 14:20:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jagcj-0005E1-DC
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:20:09 +0000
X-Inumbo-ID: ad27ad72-9912-11ea-a86a-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad27ad72-9912-11ea-a86a-12813bfff9fa;
 Mon, 18 May 2020 14:20:08 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 5eQFsejmTheHw2nmIstzN+eVgeD8oLJB4A1YgKCxeb9/EOb7rzWB1NPVo1KCOhZj6wU0VObIbs
 x/pMBJF3seHorm5w1pSw4/HXuIXdCijeUdpdRsLl9UiiyiKq19pSKa0FXQ9GUDa6p23GWqp9Wy
 HysyP11ElT43tN0eOBFYdq5LkX9LX5Gq9jPWhLSlnqYxYE53naUUGgnMi+pFj6mrEYawwinjn4
 +2kv5cOOtzOYPxXoxkKD8iZul/p/x17Af91A82HwRHlS9WXPIC/BUi1ktul5rF/vohrf/youfd
 gfA=
X-SBRS: 2.7
X-MesageID: 18054298
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18054298"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24258.39310.574582.176081@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 15:19:58 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v6 06/18] libxl: write qemu arguments into separate
 xenstore keys
In-Reply-To: <20200518011353.326287-7-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-7-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v6 06/18] libxl: write qemu arguments into separate xenstore keys"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
...
> +static int libxl__write_stub_linux_dm_argv(libxl__gc *gc,
> +                                           int dm_domid, int guest_domid,
> +                                           char **args)
> +{

Thanks for the changes.

> +    xs_transaction_t t = XBT_NULL;
...
> +    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
> +                                  GCSPRINTF("/local/domain/%d/vm", guest_domid),
> +                                  &vm_path);
> +    if (rc)
> +        return rc;

I think this should be "goto out".  That conforms to standard libxl
error handling discipline and avoids future leak bugs etc.
libxl__xs_transaction_abort is a no-op with t==NULL.

Also, it is not clear to me why you chose to put this outside the
transaction loop.  Can you either put it inside the transaction loop,
or produce an explanation as to why this approach is race-free...

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:24:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:24:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaghE-0005Pu-1T; Mon, 18 May 2020 14:24:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jaghC-0005Pp-Am
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:24:46 +0000
X-Inumbo-ID: 4f4d1c87-9913-11ea-a86a-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f4d1c87-9913-11ea-a86a-12813bfff9fa;
 Mon, 18 May 2020 14:24:40 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: wzFvJ0cZeqwqiNab3YVcZR2JPAzp5A/NTFcMfCFqHZ5V5eqO/7msNgRTOgTlzJE3DPhC4tGaeM
 kue2Bmr84iE5xPmhSAnGlwiLRmeaSd4cIE14TD8UuPNbPclEEKdWve5b2eu/+QLyijmM2w8fFD
 pteljN0YRQj0n/b6gM2OmdptFf8tqt4MpvOavbBv1GcssoiUL/qrEa2mz7bBxRh38UalE6ZABJ
 70WOlynCZ4T9WWZ+kcN5ug55GGozFxvp0YAwqoH/gnMelZgujSVJx90IXaMp9joGCOmf5qHATg
 jPA=
X-SBRS: 2.7
X-MesageID: 18068792
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18068792"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24258.39586.245004.804616@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 15:24:34 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v6 09/18] libxl: add save/restore support for qemu-xen in
 stubdomain
In-Reply-To: <20200518011353.326287-10-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-10-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v6 09/18] libxl: add save/restore support for qemu-xen in stubdomain"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
...
>      if (state->saved_state) {
> -        /* This file descriptor is meant to be used by QEMU */
> -        *dm_state_fd = open(state->saved_state, O_RDONLY);
> -        flexarray_append(dm_args, "-incoming");
> -        flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
> +        if (is_stubdom) {
> +            /* Linux stubdomain must replace $STUBDOM_RESTORE_INCOMING_ARG
> +             * with the approriate fd:$num argument for the
> +             * STUBDOM_CONSOLE_RESTORE console 2.
> +             */
> +            flexarray_append(dm_args, "-incoming");
> +            flexarray_append(dm_args, "$STUBDOM_RESTORE_INCOMING_ARG");
> +        } else {
> +            /* This file descriptor is meant to be used by QEMU */
> +            *dm_state_fd = open(state->saved_state, O_RDONLY);
> +            flexarray_append(dm_args, "-incoming");
> +            flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));

Hrk.  The stubdom script is expected to spot this particular value in
the dm_args array and seddery it.  OK.  This is, at least, sound.
I'm happy with the code and the protocol.

I think this needs a change to this doc:

  Subject: [PATCH v6 01/18] Document ioemu MiniOS stubdomain protocol

  +Toolstack to MiniOS ioemu stubdomain protocol
  +---------------------------------------------

Provided that you update the docs commit and take my ack off that,
please add my ack to this code :-).

Or you can fold the docs change into this commit, if you prefer, in
which case I'll review this patch again.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:27:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagjZ-0005WK-Eh; Mon, 18 May 2020 14:27:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jagjY-0005WF-6o
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:27:12 +0000
X-Inumbo-ID: a8bd485e-9913-11ea-a86b-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8bd485e-9913-11ea-a86b-12813bfff9fa;
 Mon, 18 May 2020 14:27:10 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6LDBNUj70xxnAHHhAw4gIVpftCOn5PyO2HdQBKIHNo4Jt2tKu8IqNqYDKmBiGnY4LjWRZqkyix
 JH4WukB2JzNLvLgCOqOy1GtVEeiAKsLSxfdOoxZUCAstF6Rxyd7Syo+xN3WmigJvv0MH9dcgNF
 C0jb3fFEE1mWm/rRJTlWYzeRGM944orvulkFrSKmfbrQ+Jsw1MR1AerULghozXpgazWfe7+o+2
 DWIgoh+a0TZLNrMcHAc22m5z1aXviUDMErrtwfIjREU3ZXxWXLT4n626HGajCO69QhEoNMXif7
 gAw=
X-SBRS: 2.7
X-MesageID: 18156220
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18156220"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24258.39737.658839.279956@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 15:27:05 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v6 13/18] libxl: use vchan for QMP access with Linux
 stubdomain
In-Reply-To: <20200518011353.326287-14-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-14-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v6 13/18] libxl: use vchan for QMP access with Linux stubdomain"):
> Ian, you acked the original and the squashed in "libxl: Kill
> vchan-socket-proxy when cleaning up qmp".  However, I also added the
> libxl__qmp_cleanup() call, so I did not retain your Ack.  That change is
> at the end in dm_destroy_cb().

Thanks.  That's appropriate.  Thanks for drawing my attention to it.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:27:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagjo-0005XY-Nk; Mon, 18 May 2020 14:27:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jagjn-0005XR-Rw
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:27:27 +0000
X-Inumbo-ID: b291ff14-9913-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b291ff14-9913-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 14:27:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8F06FAFBF;
 Mon, 18 May 2020 14:27:28 +0000 (UTC)
Subject: Re: [PATCH v9 04/12] xen: add basic hypervisor filesystem support
To: Juergen Gross <jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
 <20200515115856.11965-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d991fb07-5e86-f5b3-b8df-6726f5b0030d@suse.com>
Date: Mon, 18 May 2020 16:27:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515115856.11965-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 13:58, Juergen Gross wrote:
> --- /dev/null
> +++ b/xen/common/hypfs.c
> @@ -0,0 +1,418 @@
> +/******************************************************************************
> + *
> + * hypfs.c
> + *
> + * Simple sysfs-like file system for the hypervisor.
> + */
> +
> +#include <xen/err.h>
> +#include <xen/guest_access.h>
> +#include <xen/hypercall.h>
> +#include <xen/hypfs.h>
> +#include <xen/lib.h>
> +#include <xen/rwlock.h>
> +#include <public/hypfs.h>
> +
> +#ifdef CONFIG_COMPAT
> +#include <compat/hypfs.h>
> +CHECK_hypfs_dirlistentry;
> +#endif
> +
> +#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
> +#define DIRENTRY_SIZE(name_len) \
> +    (DIRENTRY_NAME_OFF +        \
> +     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
> +
> +static DEFINE_RWLOCK(hypfs_lock);
> +enum hypfs_lock_state {
> +    hypfs_unlocked,
> +    hypfs_read_locked,
> +    hypfs_write_locked
> +};
> +static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
> +
> +HYPFS_DIR_INIT(hypfs_root, "");
> +
> +static void hypfs_read_lock(void)
> +{
> +    read_lock(&hypfs_lock);
> +    this_cpu(hypfs_locked) = hypfs_read_locked;
> +}

Perhaps at least

    ASSERT(this_cpu(hypfs_locked) != hypfs_write_locked);

first thing in the function?

> +static void hypfs_write_lock(void)
> +{
> +    write_lock(&hypfs_lock);
> +    this_cpu(hypfs_locked) = hypfs_write_locked;
> +}

If so,

    ASSERT(this_cpu(hypfs_locked) == hypfs_unlocked);

here then.

> +static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
> +                                               const char *path)
> +{
> +    const char *end;
> +    struct hypfs_entry *entry;
> +    unsigned int name_len;
> +    bool again = true;
> +
> +    while ( again )
> +    {
> +        if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
> +            return NULL;
> +
> +        if ( !*path )
> +            return &dir->e;
> +
> +        end = strchr(path, '/');
> +        if ( !end )
> +            end = strchr(path, '\0');
> +        name_len = end - path;
> +
> +	again = false;

Hard tab slipped in.

With at least the latter taken care of, non-XSM pieces
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:27:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:27:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagkI-0005dr-1T; Mon, 18 May 2020 14:27:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jagkH-0005df-1w
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:27:57 +0000
X-Inumbo-ID: c3d3a23c-9913-11ea-a86b-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3d3a23c-9913-11ea-a86b-12813bfff9fa;
 Mon, 18 May 2020 14:27:56 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: fYSolxNRxRa2055MhTyahWx4GARPya8AIgN0+NMWPwVOK1TGuSjZMmlJOKVJ6xArKDGAJTDjrs
 m/wWrlW643N5/ZRkiRIQHsWBUfbKEmZWkVoxrH98GR2GF31tdBcb859MPR8qhnmrXLge8T4cA/
 xMhMy/kA7AlVUyLpMIOmjA9P2oek9AcKmwFIMGDdNNWPPWnImC0yhsNxVTTjgvHcpzFugu3Ge+
 FUI5Ja4Lr+M/dzLShLopnK2X7BBRq9SKw5bxPqkOuTxyXl1dy1qyN0z+1PLiqNqmHVsvFTlCLg
 RVM=
X-SBRS: 2.7
X-MesageID: 17821438
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="17821438"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24258.39782.953526.84349@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 15:27:50 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v6 14/18] libxl: require qemu in dom0 for multiple
 stubdomain consoles
In-Reply-To: <20200518011353.326287-15-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-15-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v6 14/18] libxl: require qemu in dom0 for multiple stubdomain consoles"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Device model stubdomains (both Mini-OS + qemu-trad and linux + qemu-xen)
> are always started with at least 3 consoles: log, save, and restore.
> Until xenconsoled learns how to handle multiple consoles, this is needed
> for save/restore support.
> 
> For Mini-OS stubdoms, this is a bug.  In practice it works in most
> cases because something else also triggers qemu in dom0: a vfb/vkb
> device is added if vnc/sdl/spice is enabled.
> 
> Additionally, a Linux-based stubdomain waits for all of its backends
> to initialize during boot.  Lack of some console backends results in a
> stubdomain startup timeout.
> 
> This is a temporary patch until xenconsoled is improved.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:28:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagl8-0005mi-N5; Mon, 18 May 2020 14:28:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jagl7-0005mJ-Gj
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:28:49 +0000
X-Inumbo-ID: e337d0c6-9913-11ea-b9cf-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e337d0c6-9913-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 14:28:48 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EgIldiut3SCl7ZE3hTfDxMhA9CBwczqu8OsybRG7zMOaqOjZCqeCwlOGijW3vlYDHOYI8PhFKy
 b3Q1tcUtzfgHL0S83zchAlW7VldSd1OHbZ1GH9NZ5AC7r/Q2M5RC2liLFFwD8xYX6YT+ozQXd2
 R1rCx0M5lOR0JtxQDqOLJk87kXqBJJzb0gyBrCYyYyTG70+3ZkgG42YbGWJ58gi6B+6mpf9Zxv
 4BAp5jcNukTD3GKWOTGQYnxTbOsUXv3CTOr3HhNIs+j6q0lcAm+Ts7GKSj80bWcNuPiGhz8yPN
 NWE=
X-SBRS: 2.7
X-MesageID: 18156373
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18156373"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24258.39835.645941.176515@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 15:28:43 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v6 00/18] Add support for qemu-xen running in a
 Linux-based stubdomain
In-Reply-To: <20200518011353.326287-1-jandryuk@gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v6 00/18] Add support for qemu-xen running in a Linux-based stubdomain"):
> In coordination with Marek, I'm making a submission of his patches for Linux
> stubdomain device-model support.  I made a few of my own additions, but Marek
> did the heavy lifting.  Thank you, Marek.

Hi.  I've gone through this version and there is little left to do.  I
look forward to committing it this week...

Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:28:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:28:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagl8-0005mY-Ex; Mon, 18 May 2020 14:28:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jagl6-0005m6-Ob
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:28:48 +0000
X-Inumbo-ID: df2b7b68-9913-11ea-a86b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df2b7b68-9913-11ea-a86b-12813bfff9fa;
 Mon, 18 May 2020 14:28:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iLGi4EQGBzizuEtQRHPgO98DktWwrZYeJ9Ris3rOA34=; b=ynZIAakHv66cIC0TkcMm7w8PP
 2ZQu1N2cLd9JC3WjpeODWJYxl009+ct7DWWws9HwH31qvP5WCMeLDI29w1aHBLiB9VXooqguS6mCZ
 16F5cPYs9nHo5kikoC9byWB8K/6QzrCJydSbDjQ5fwEO2tImYbjnp28W+4szVSIiFmboY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jagkz-0001nt-Ep; Mon, 18 May 2020 14:28:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jagkz-00017T-6G; Mon, 18 May 2020 14:28:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jagkz-00081s-5h; Mon, 18 May 2020 14:28:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150227-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150227: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
X-Osstest-Versions-That: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 14:28:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150227 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150227/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150223
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150223
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150223
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150223
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150223
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150223
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150223
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150223
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150223
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150223
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150223
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f
baseline version:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f

Last test of basis   150227  2020-05-18 01:51:25 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 18 14:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagxO-0007Yw-1l; Mon, 18 May 2020 14:41:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jagxN-0007Yr-11
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:41:29 +0000
X-Inumbo-ID: a7fb78ee-9915-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7fb78ee-9915-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 14:41:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 97AAFAF3F;
 Mon, 18 May 2020 14:41:29 +0000 (UTC)
Subject: Re: [PATCH v9 09/12] xen: add runtime parameter access support to
 hypfs
To: Juergen Gross <jgross@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
 <20200515115856.11965-10-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <719c42fb-a3ca-113d-687a-11b1119571b2@suse.com>
Date: Mon, 18 May 2020 16:41:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515115856.11965-10-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 13:58, Juergen Gross wrote:
> Add support to read and modify values of hypervisor runtime parameters
> via the hypervisor file system.
> 
> As runtime parameters can be modified via a sysctl, too, this path has
> to take the hypfs rw_lock as writer.
> 
> For custom runtime parameters the connection between the parameter
> value and the file system is done via an init function which will set
> the initial value (if needed) and the leaf properties.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Mon May 18 14:42:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:42:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagxk-0007bU-At; Mon, 18 May 2020 14:41:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=k4Zq=7A=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jagxi-0007bG-TR
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:41:50 +0000
X-Inumbo-ID: b45304af-9915-11ea-a86b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b45304af-9915-11ea-a86b-12813bfff9fa;
 Mon, 18 May 2020 14:41:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 786B2ACCE;
 Mon, 18 May 2020 14:41:51 +0000 (UTC)
Subject: Re: [PATCH v9 04/12] xen: add basic hypervisor filesystem support
To: Jan Beulich <jbeulich@suse.com>
References: <20200515115856.11965-1-jgross@suse.com>
 <20200515115856.11965-5-jgross@suse.com>
 <d991fb07-5e86-f5b3-b8df-6726f5b0030d@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d9f97a63-f133-d730-2a11-3e5e390752f5@suse.com>
Date: Mon, 18 May 2020 16:41:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d991fb07-5e86-f5b3-b8df-6726f5b0030d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.20 16:27, Jan Beulich wrote:
> On 15.05.2020 13:58, Juergen Gross wrote:
>> --- /dev/null
>> +++ b/xen/common/hypfs.c
>> @@ -0,0 +1,418 @@
>> +/******************************************************************************
>> + *
>> + * hypfs.c
>> + *
>> + * Simple sysfs-like file system for the hypervisor.
>> + */
>> +
>> +#include <xen/err.h>
>> +#include <xen/guest_access.h>
>> +#include <xen/hypercall.h>
>> +#include <xen/hypfs.h>
>> +#include <xen/lib.h>
>> +#include <xen/rwlock.h>
>> +#include <public/hypfs.h>
>> +
>> +#ifdef CONFIG_COMPAT
>> +#include <compat/hypfs.h>
>> +CHECK_hypfs_dirlistentry;
>> +#endif
>> +
>> +#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
>> +#define DIRENTRY_SIZE(name_len) \
>> +    (DIRENTRY_NAME_OFF +        \
>> +     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
>> +
>> +static DEFINE_RWLOCK(hypfs_lock);
>> +enum hypfs_lock_state {
>> +    hypfs_unlocked,
>> +    hypfs_read_locked,
>> +    hypfs_write_locked
>> +};
>> +static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
>> +
>> +HYPFS_DIR_INIT(hypfs_root, "");
>> +
>> +static void hypfs_read_lock(void)
>> +{
>> +    read_lock(&hypfs_lock);
>> +    this_cpu(hypfs_locked) = hypfs_read_locked;
>> +}
> 
> Perhaps at least
> 
>      ASSERT(this_cpu(hypfs_locked) != hypfs_write_locked);
> 
> first thing in the function?

Yes, good idea.

> 
>> +static void hypfs_write_lock(void)
>> +{
>> +    write_lock(&hypfs_lock);
>> +    this_cpu(hypfs_locked) = hypfs_write_locked;
>> +}
> 
> If so,
> 
>      ASSERT(this_cpu(hypfs_locked) == hypfs_unlocked);
> 
> here then.

Okay.

> 
>> +static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
>> +                                               const char *path)
>> +{
>> +    const char *end;
>> +    struct hypfs_entry *entry;
>> +    unsigned int name_len;
>> +    bool again = true;
>> +
>> +    while ( again )
>> +    {
>> +        if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
>> +            return NULL;
>> +
>> +        if ( !*path )
>> +            return &dir->e;
>> +
>> +        end = strchr(path, '/');
>> +        if ( !end )
>> +            end = strchr(path, '\0');
>> +        name_len = end - path;
>> +
>> +	again = false;
> 
> Hard tab slipped in.

Oh, sorry.

> 
> With at least the latter taken care of, non-XSM pieces
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:44:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:44:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jagzz-0007ok-Ro; Mon, 18 May 2020 14:44:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c05r=7A=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1jagzw-0007ob-J4
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:44:10 +0000
X-Inumbo-ID: 065d6992-9916-11ea-b9cf-bc764e2007e4
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:20a:202:5300::7])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 065d6992-9916-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 14:44:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1589813046;
 s=strato-dkim-0002; d=aepfle.de;
 h=Message-Id:Date:Subject:Cc:To:From:X-RZG-CLASS-ID:X-RZG-AUTH:From:
 Subject:Sender;
 bh=x+0kfUR9YmVlwWVg05/3BODhZYwM+tYi5GOOeNzNaLs=;
 b=T6TAkPFHOZ/DXffT+L/3rRJvzhzxyowWG4dFgoB0P8GHNivoJymq2PUBXSY4lB8j8w
 df41vjpMU4578Qn7tU9wKYWWxzfoyLlIaWqJgNcrJZbZtAljjm+fL4dFp0Sx5UgbWuC+
 DNAmIOOo3nxdG2iIe0YtbvOXXCoqt5MiHRdbY1mWStCruaNJMuf7wvKUKT2VLprJJQQE
 1Lfm3B3XXwF1y6qmoJsTTuGq8N7Z4wV7eXkk9wWRvVmY8mFH7VHvdYc6KUNPsU2/Wot1
 tSnUNEViyUsZBvuiYbOEJZRCRY9cU2J9GocLkIKG4tcgQglwR8SlaP5eO6HGniKDdxYI
 xSag==
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzBW/OdlBZQ4AHSS224g"
X-RZG-CLASS-ID: mo00
Received: from sender by smtp.strato.de (RZmta 46.6.2 DYNA|AUTH)
 with ESMTPSA id c03f94w4IEi5UFx
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 18 May 2020 16:44:05 +0200 (CEST)
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xenproject.org
Subject: [PATCH v1] tools: use HOSTCC/CPP to compile rombios code and helper
Date: Mon, 18 May 2020 16:44:00 +0200
Message-Id: <20200518144400.16708-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Olaf Hering <olaf@aepfle.de>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While touching the code, also use HOSTCFLAGS for biossums.

Spotted by inspecting the build logfile.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/firmware/rombios/Makefile | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/firmware/rombios/Makefile b/tools/firmware/rombios/Makefile
index 78237fd736..02abdb3038 100644
--- a/tools/firmware/rombios/Makefile
+++ b/tools/firmware/rombios/Makefile
@@ -19,7 +19,7 @@ clean: subdirs-clean
 distclean: clean
 
 BIOS-bochs-latest: rombios.c biossums 32bitgateway.c tcgbios.c
-	gcc -DBX_SMP_PROCESSORS=1 -E -P $< > _rombios_.c
+	$(CPP) -DBX_SMP_PROCESSORS=1 -P $< > _rombios_.c
 	bcc -o rombios.s -C-c -D__i86__ -0 -S _rombios_.c
 	sed -e 's/^\.text//' -e 's/^\.data//' rombios.s > _rombios_.s
 	as86 _rombios_.s -b tmp.bin -u- -w- -g -0 -j -O -l rombios.txt
@@ -29,6 +29,6 @@ BIOS-bochs-latest: rombios.c biossums 32bitgateway.c tcgbios.c
 	rm -f _rombios_.s
 
 biossums: biossums.c
-	gcc -o biossums biossums.c
+	$(HOSTCC) $(HOSTCFLAGS) -o biossums biossums.c
 
 -include $(DEPS_INCLUDE)
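The rule of thumb the patch applies can be shown with a minimal, hypothetical Makefile fragment (not the actual rombios Makefile): anything that runs on the build host during the build is compiled with HOSTCC/HOSTCFLAGS, while target sources may still be preprocessed with $(CPP) before being handed to a special-purpose compiler such as bcc:

```make
# Hypothetical illustration, not the actual rombios Makefile.
HOSTCC      ?= cc          # compiler for build-host helper tools
HOSTCFLAGS  ?= -O2 -Wall
CPP         ?= $(CC) -E    # preprocessor used on target sources

# A helper that executes during the build runs on the host,
# so it must be built with the host toolchain:
checksum-tool: checksum-tool.c
	$(HOSTCC) $(HOSTCFLAGS) -o $@ $<

# Target sources are only preprocessed here; a 16-bit compiler
# (bcc in the rombios case) consumes the result:
_rombios_.c: rombios.c
	$(CPP) -DBX_SMP_PROCESSORS=1 -P $< > $@
```

Hard-coding `gcc`, as the old rules did, silently breaks cross builds where the host and target toolchains differ.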


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:45:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jah1S-0007w0-Io; Mon, 18 May 2020 14:45:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ws3m=7A=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jah1R-0007vp-JP
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:45:41 +0000
X-Inumbo-ID: 3e23bd90-9916-11ea-a86e-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e23bd90-9916-11ea-a86e-12813bfff9fa;
 Mon, 18 May 2020 14:45:40 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: iV8ummVhtJDE6cAe+rgazhfnKbLaZxUtidOEVvJA49rq0PW++xB8Wz/WkZh5itb0ymArqkHsWY
 QBdOE82nNjF+9nJZXLlhdAkirm7UkHl5ErV0x4lzcex69T0WvwcXOpAGa9R+KoAXZ+KWW8NiGp
 NDVdru7+4XfYNP2rptulUiZNB3G45mt48rMM6QRchx9alUDuLe5tL3Y0Xin74oq5ZXoPmKohyP
 /i1/30g+aamPjtZZvDDR3S5SUbaLI662DhaGxVj8eeax1BMhecLbMfCTnZfeseTbU50wVRRpmq
 Qr8=
X-SBRS: 2.7
X-MesageID: 18072338
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18072338"
Subject: Re: [PATCH v1] tools: use HOSTCC/CPP to compile rombios code and
 helper
To: Olaf Hering <olaf@aepfle.de>, Jan Beulich <jbeulich@suse.com>, Wei Liu
 <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, "Ian
 Jackson" <ian.jackson@eu.citrix.com>, <xen-devel@lists.xenproject.org>
References: <20200518144400.16708-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <899f9a7d-51a0-ec42-d20c-50a8fda629c8@citrix.com>
Date: Mon, 18 May 2020 15:45:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200518144400.16708-1-olaf@aepfle.de>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18/05/2020 15:44, Olaf Hering wrote:
> While touching the code, also use HOSTCFLAGS for biossums.
>
> Spotted by inspecting the build logfile.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:48:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:48:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jah4Q-0008BY-26; Mon, 18 May 2020 14:48:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jah4O-0008Az-NL
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:48:44 +0000
X-Inumbo-ID: ab8a99b2-9916-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab8a99b2-9916-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 14:48:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A74BAAD5C;
 Mon, 18 May 2020 14:48:45 +0000 (UTC)
Subject: Re: [PATCH v3 1/2] x86/idle: rework C6 EOI workaround
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <58a4bae6-ad36-06da-6f47-825d4b60bbaa@suse.com>
Date: Mon, 18 May 2020 16:48:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515135802.63853-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 15:58, Roger Pau Monne wrote:
> Change the C6 EOI workaround (errata AAJ72) to use x86_match_cpu. Also
> call the workaround from mwait_idle; previously it was only used by
> the ACPI idle driver. Finally, make sure the routine is called for all
> states equal to or greater than ACPI_STATE_C3; note that the ACPI
> driver doesn't currently handle them, but the errata condition
> shouldn't be limited by that.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with two nits:

> --- a/xen/arch/x86/acpi/cpu_idle.c
> +++ b/xen/arch/x86/acpi/cpu_idle.c
> @@ -548,26 +548,35 @@ void trace_exit_reason(u32 *irq_traced)
>      }
>  }
>  
> -/*
> - * "AAJ72. EOI Transaction May Not be Sent if Software Enters Core C6 During 
> - * an Interrupt Service Routine"
> - * 
> - * There was an errata with some Core i7 processors that an EOI transaction 
> - * may not be sent if software enters core C6 during an interrupt service 
> - * routine. So we don't enter deep Cx state if there is an EOI pending.
> - */
> -static bool errata_c6_eoi_workaround(void)
> +bool errata_c6_eoi_workaround(void)
>  {
> -    static int8_t fix_needed = -1;
> +    static int8_t __read_mostly fix_needed = -1;
>  
>      if ( unlikely(fix_needed == -1) )
>      {
> -        int model = boot_cpu_data.x86_model;
> -        fix_needed = (cpu_has_apic && !directed_eoi_enabled &&
> -                      (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL) &&
> -                      (boot_cpu_data.x86 == 6) &&
> -                      ((model == 0x1a) || (model == 0x1e) || (model == 0x1f) ||
> -                       (model == 0x25) || (model == 0x2c) || (model == 0x2f)));
> +#define INTEL_FAM6_MODEL(m) { X86_VENDOR_INTEL, 6, m, X86_FEATURE_ALWAYS }
> +        /*
> +         * Errata AAJ72: EOI Transaction May Not be Sent if Software Enters
> +         * Core C6 During an Interrupt Service Routine
> +         *
> +         * There was an errata with some Core i7 processors that an EOI
> +         * transaction may not be sent if software enters core C6 during an
> +         * interrupt service routine. So we don't enter deep Cx state if
> +         * there is an EOI pending.
> +         */
> +        const static struct x86_cpu_id eoi_errata[] = {

Commonly we use "static const".

> --- a/xen/arch/x86/cpu/mwait-idle.c
> +++ b/xen/arch/x86/cpu/mwait-idle.c
> @@ -770,6 +770,9 @@ static void mwait_idle(void)
>  		return;
>  	}
>  
> +	if ((cx->type >= 3) && errata_c6_eoi_workaround())
> +		cx = power->safe_state;
> +
>  	eax = cx->address;
>  	cstate = ((eax >> MWAIT_SUBSTATE_SIZE) & MWAIT_CSTATE_MASK) + 1;
>  
> diff --git a/xen/include/asm-x86/cpuidle.h b/xen/include/asm-x86/cpuidle.h
> index 5d7dffd228..13879f58a1 100644
> --- a/xen/include/asm-x86/cpuidle.h
> +++ b/xen/include/asm-x86/cpuidle.h
> @@ -26,4 +26,5 @@ void update_idle_stats(struct acpi_processor_power *,
>  void update_last_cx_stat(struct acpi_processor_power *,
>                           struct acpi_processor_cx *, uint64_t);
>  
> +bool errata_c6_eoi_workaround(void);
>  #endif /* __X86_ASM_CPUIDLE_H__ */

I'd prefer if a blank line was left ahead of #endif-s of this kind.
Both are easy enough to do while committing, I guess.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:50:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:50:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jah6C-0000Wu-EL; Mon, 18 May 2020 14:50:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T93z=7A=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1jah6A-0000Wh-Pk
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:50:35 +0000
X-Inumbo-ID: ed1ea68e-9916-11ea-a86f-12813bfff9fa
Received: from wout5-smtp.messagingengine.com (unknown [64.147.123.21])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed1ea68e-9916-11ea-a86f-12813bfff9fa;
 Mon, 18 May 2020 14:50:33 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.west.internal (Postfix) with ESMTP id 936CE95A;
 Mon, 18 May 2020 10:50:32 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Mon, 18 May 2020 10:50:32 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
 messagingengine.com; h=cc:content-type:date:from:in-reply-to
 :message-id:mime-version:references:subject:to:x-me-proxy
 :x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=jPAjWK
 c5sSWWWN18xk/Zk61bphWCWHeGH9TSk3WjyZI=; b=vWW4pQUFfO+y+lmz6hAOpT
 KnxFB2g3AlR/UbWqVdXULUV2ByAul76h+wWlaPhS1fJQ+/T2F3XEMO1E4Ze4LCpu
 lFzbPckX+vdRRbUXIEZqtYzoBavF0uISZET4wFywvYPKAGGD7wtxfCy69D5K0A6X
 bQejPinyWO/AjbQHzA04x9q3bPQFjRrVrkC9RIo7weDF1jAHkTUvh+H+MifzX+3m
 J6MQ2XpquRR3P/KuRXNBSduA406QSsr5tH6+6TDynmhQi/lbAuqKqpl7mQe9VxXW
 9EJVkRkKtjW5m9wII0OtnpzQOuT82n6jOip5qQCoKhJHBzozhO1hsVZ+rf57nUKg
 ==
X-ME-Sender: <xms:uKDCXq7scS0mINLatJuoZw_DXM9_dY1bQRuaqLFqVHp8Wza3335ftA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduhedruddthedgkeduucetufdoteggodetrfdotf
 fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
 uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
 cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
 ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
 hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
 iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
 durdeihedrfeegrdeffeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
 ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
 gtohhm
X-ME-Proxy: <xmx:uKDCXj42hBAYfH2UeLyJloUpjEV8C6nPEA8HavwEVu45om3qU4Rb5g>
 <xmx:uKDCXpckj43rkgJ7mNqBy8p2Lmvg1cJJDsdmOw70fwJM5yA3qZI41Q>
 <xmx:uKDCXnKq6dJc_R-JrSQTMHFkstXYNPRlzl6sFTo9yyT3o-kbkKKrcw>
 <xmx:uKDCXoydTWIhOsiy47_8FLaTEFiF-EuxpmWF5LgApVgygNXQG7RsNw>
Received: from mail-itl (ip5b412221.dynamic.kabel-deutschland.de [91.65.34.33])
 by mail.messagingengine.com (Postfix) with ESMTPA id 4E56930663ED;
 Mon, 18 May 2020 10:50:31 -0400 (EDT)
Date: Mon, 18 May 2020 16:50:28 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>
To: Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain
Message-ID: <20200518145028.GD98582@mail-itl>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-10-jandryuk@gmail.com>
 <24253.29524.798802.978257@mariner.uk.xensource.com>
 <CAKf6xpvJMovKMTWipC4gZuBD8FgmBEWbDbkm=ryRWSxNifQcJw@mail.gmail.com>
 <24258.39029.788968.419649@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature"; boundary="qDbXVdCdHGoSgWSk"
Content-Disposition: inline
In-Reply-To: <24258.39029.788968.419649@mariner.uk.xensource.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--qDbXVdCdHGoSgWSk
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain

On Mon, May 18, 2020 at 03:15:17PM +0100, Ian Jackson wrote:
> Jason Andryuk writes ("Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> > On Thu, May 14, 2020 at 12:35 PM Ian Jackson <ian.jackson@citrix.com> wrote:
> > > It's not stated anywhere here that I can see but I think what is
> > > happening here is that your wrapper script knows the qemu savefile
> > > pathname and reads it directly.  Maybe a comment would be
> > > worthwhile ?
> >
> > The existing comment "Linux stubdomain connects specific FD to
> > STUBDOM_CONSOLE_RESTORE" is trying to state that.
> > STUBDOM_CONSOLE_RESTORE is defined as 2 for console 2 (/dev/hvc2), but
> > it is only a libxl_internal.h define.
>
> Err, by "the qemu savefile pathname" I meant the pathname in dom0.
> I assume your wrapper script opens that and feeds it to the console.
> Is that right ?

Not really a wrapper script. On the dom0 side, it is the console backend
(qemu) that is instructed to connect the console to a file instead of a
pty. I have implemented a similar feature in my xenconsoled patch series
sent a while ago (along with v2 of this series), but that series needs
some more love.

On the stubdomain side, it is a script that launches qemu: it opens
/dev/hvc2 and then passes the FD to qemu via the -incoming option (which
is really constructed by libxl).

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--qDbXVdCdHGoSgWSk
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl7CoLMACgkQ24/THMrX
1ywd0gf/YbknnStZlnqO5hAce/czX5KBww8Jk4yDPQzLnxxMfzudLDUz//MPW38x
pXxG53WO2Iho42ebTWk5Rn5Q76ManDgUK47HzRDAW9ZSDgw2NKgB1w7p4Jm/XRVK
KoXRUta27/pVZu3nN6s9GEcfzJqzk9tKbJ2zXeBNuLjZ4PQPPV0t/1lHeiOTT3pa
nX7KlloJOl6GfhPgx9bSnLFnL25UA2lf3XXXc/ql27ob3Bar89kf/iUN3F2VvIp6
TP+oI81PwJG042EVVSmCye/N0KsgAnDzpJUFl0fDKP/KQFuS5qDuhPkcQr//Q9dH
7bJg6JX++PEoMvZEtfYW48BIVF3PKQ==
=oGCI
-----END PGP SIGNATURE-----

--qDbXVdCdHGoSgWSk--


From xen-devel-bounces@lists.xenproject.org Mon May 18 14:51:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 14:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jah6n-0000aV-NU; Mon, 18 May 2020 14:51:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dGN6=7A=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jah6l-0000aK-N1
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 14:51:11 +0000
X-Inumbo-ID: 030b9038-9917-11ea-a86f-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 030b9038-9917-11ea-a86f-12813bfff9fa;
 Mon, 18 May 2020 14:51:10 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ns5aG58Oely+m1QTj69pw08TM3CA8wn9T4grMaF2iJhAEmE1FuPsBSCfFtmz9HzkceZQRbGp4M
 UzUA0dI6zulKhym6m0OJblWGCHSavuTg5OV8E19HuGAYRMUh0JGCeu3FT+/BuqJPYffdyFxXtA
 L/bhlhpWQR30PRz7yEPucpxjlkDXGHMVRZ/t15SsTjPU+XAAgUHutzNxLs4N6+9M1m8Sf+KjI9
 UK10c1o0sesBbZBMlUPYIvhudNCU2j83vjKa7R5xXCjpHavDQMFIU/nN9KfzAaULlOwwc+8Hi7
 W18=
X-SBRS: 2.7
X-MesageID: 18073231
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18073231"
Date: Mon, 18 May 2020 16:51:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: refine guest_mode()
Message-ID: <20200518145101.GV54375@Air-de-Roger>
References: <7b62d06c-1369-2857-81c0-45e2434357f4@suse.com>
 <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Apr 28, 2020 at 08:30:12AM +0200, Jan Beulich wrote:
> On 27.04.2020 22:11, Andrew Cooper wrote:
> > On 27/04/2020 16:15, Jan Beulich wrote:
> >> On 27.04.2020 16:35, Andrew Cooper wrote:
> >>> On 27/04/2020 09:03, Jan Beulich wrote:
> >>>> The 2nd of the assertions as well as the macro's return value have been
> >>>> assuming we're on the primary stack. While for most IST exceptions we
> >>>> eventually switch back to the main one,
> >>> "we switch to the main one when interrupting user mode".
> >>>
> >>> "eventually" isn't accurate as it is before we enter C.
> >> Right, will change.
> >>
> >>>> --- a/xen/include/asm-x86/regs.h
> >>>> +++ b/xen/include/asm-x86/regs.h
> >>>> @@ -10,9 +10,10 @@
> >>>>      /* Frame pointer must point into current CPU stack. */                    \
> >>>>      ASSERT(diff < STACK_SIZE);                                                \
> >>>>      /* If not a guest frame, it must be a hypervisor frame. */                \
> >>>> -    ASSERT((diff == 0) || (r->cs == __HYPERVISOR_CS));                        \
> >>>> +    if ( diff < PRIMARY_STACK_SIZE )                                          \
> >>>> +        ASSERT(!diff || ((r)->cs == __HYPERVISOR_CS));                        \
> >>>>      /* Return TRUE if it's a guest frame. */                                  \
> >>>> -    (diff == 0);                                                              \
> >>>> +    !diff || ((r)->cs != __HYPERVISOR_CS);                                    \
> >>> The (diff == 0) already worried me before because it doesn't fail safe,
> >>> but this makes things more problematic.  Consider the case back when we
> >>> had __HYPERVISOR_CS32.
> >> Yes - if __HYPERVISOR_CS32 would ever have been to be used for
> >> anything, it would have needed checking for here.
> >>
> >>> Guest mode is strictly "(r)->cs & 3".
> >> As long as CS (a) gets properly saved (it's a "manual" step for
> >> SYSCALL/SYSRET as well as #VMEXIT) and (b) didn't get clobbered. I
> >> didn't write this code, I don't think, so I can only guess that
> >> there were intentions behind this along these lines.
> > 
> > Hmm - the VMExit case might be problematic here, due to the variability
> > in the poison used.
> 
> "Variability" is an understatement - there's no poisoning at all
> in release builds afaics (and to be honest it seems somewhat
> pointless to write the same values over and over again in debug
> mode). With this, ...
> 
> >>> Everything else is expectations about how things ought to be laid out,
> >>> but for safety in release builds, the final judgement should not depend
> >>> on the expectations evaluating true.
> >> Well, I can switch to a purely CS.RPL based approach, as long as
> >> we're happy to live with the possible downside mentioned above.
> >> Of course this would then end up being a more intrusive change
> >> than originally intended ...
> > 
> > I'd certainly prefer to go for something which is more robust, even if
> > it is a larger change.
> 
> ... what's your suggestion? Basing on _just_ CS.RPL obviously won't
> work. Not even if we put in place the guest's CS (albeit that
> somewhat depends on the meaning we assign to the macro's returned
> value).

Just to check I'm following this correctly: using CS.RPL won't work
for HVM guests, as HVM can legitimately use an RPL of 0 (which is not
the case for PV guests). Doesn't the same apply to the usage of
__HYPERVISOR_CS? (An HVM guest could also use the same code segment
value as Xen.)

> Using current inside the macro to determine whether the
> guest is HVM would also seem fragile to me - there are quite a few
> uses of guest_mode(). Which would leave passing in a const struct
> vcpu * (or domain *), requiring to touch all call sites, including
> Arm's.

Fragile or slow? Are there corner cases where guest_mode() is used and
current is not reliable?

> Compared to this it would seem to me that the change as presented
> is a clear improvement without becoming overly large of a change.

Using the cs register is already part of the guest_mode code, even if
just in debug mode, hence I don't see it as a regression from the
existing code. It does however feel weird to me that the reporter of the
issue doesn't agree with the fix, and hence I would like to know if
there's a way we could achieve consensus on this.

Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 18 15:05:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:05:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahKQ-0001j1-1c; Mon, 18 May 2020 15:05:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jahKP-0001iw-HK
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:05:17 +0000
X-Inumbo-ID: fab79d6d-9918-11ea-a870-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fab79d6d-9918-11ea-a870-12813bfff9fa;
 Mon, 18 May 2020 15:05:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BA5BAACB1;
 Mon, 18 May 2020 15:05:17 +0000 (UTC)
Subject: Re: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e9e337ae-295e-5577-3c6d-a42721190b07@suse.com>
Date: Mon, 18 May 2020 17:05:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200515135802.63853-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 15:58, Roger Pau Monne wrote:
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -652,6 +652,15 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>  additionally a trace buffer of the specified size is allocated per cpu.
>  The debug trace feature is only enabled in debugging builds of Xen.
>  
> +### disable-c6-errata

Hmm, yes please - a disable for errata! ;-)

How about "avoid-c6-errata", and then perhaps as a sub-option to
"cpuidle="? (If we really want a control for this in the first
place.)

> @@ -573,10 +574,40 @@ bool errata_c6_eoi_workaround(void)
>              INTEL_FAM6_MODEL(0x2f),
>              { }
>          };
> +        /*
> +         * Errata BDX99, CLX30, SKX100, CFW125, BDF104, BDH85, BDM135, KWB131:
> +         * A Pending Fixed Interrupt May Be Dispatched Before an Interrupt of
> +         * The Same Priority Completes.
> +         *
> +         * Resuming from C6 Sleep-State, with Fixed Interrupts of the same
> +         * priority queued (in the corresponding bits of the IRR and ISR APIC
> +         * registers), the processor may dispatch the second interrupt (from
> +         * the IRR bit) before the first interrupt has completed and written to
> +         * the EOI register, causing the first interrupt to never complete.
> +         */
> +        const static struct x86_cpu_id isr_errata[] = {

Same nit as for patch 1 here.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 18 15:05:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:05:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahKk-0001jk-AI; Mon, 18 May 2020 15:05:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hPYZ=7A=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jahKj-0001ja-4u
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:05:37 +0000
X-Inumbo-ID: 06e66b40-9919-11ea-ae69-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 06e66b40-9919-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 15:05:36 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id u15so10246248ljd.3
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 08:05:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=from:to:cc:subject:date:message-id;
 bh=5ZQonOkXHgWeke396MyrCrje/s40Q6/pN3g2SkH/8Xw=;
 b=tX18kMH5bs5YknKTJ3HuUOoTtLZo58+zr1cAemX1Tmh6VYaMCONXwi2FkFWtm0DlgN
 dni5gZar6g/Do1i+EoBfEdsg8IDw3AmYsM86Xj4swEcazOD95NBOPFpt4ZAyJTZgLUYc
 o4NH4JFcPIWUOhpE2hgKG3zSRjAYTJyjB8qSadvrQROr2vOTP7bLZFoo2h2oFwdPGUf6
 b//lGKi5TKTNO95ZGEczYABPNDFwgaVrlC3pUKkLDKfJKwtuf8m8lkMepr3bMhhwmfcs
 tmc0zsd24wUuMscSYNu+HK3r6UAy0ljyBvptY35zrdm1S2NvXdQFeVlPoP7zf5vS+LVj
 3HKA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=5ZQonOkXHgWeke396MyrCrje/s40Q6/pN3g2SkH/8Xw=;
 b=ulwOldVH/PbdrBdy8g0jlmW5EwbxKrCrwMSMHfFNEcXOC/uS+yzd9ARpeCvTeZNO4L
 a7BGdOQVL+YWScPalKXcQNyTeWnvOG/0O1ctwDNpOW7jUrdwwJPIZCAGgnbUAuyof/3O
 vnNeGo+COeoofExMzn5u6I/MQsxlVio4c1hXvwCM3vI0cCgdNEZJPQ8I3rnZuv7mqtLa
 lPcr1XyNdKQBPLpvZbqGRHgMihyUmCMxQJAjdDqeMXtSm5c+opatpBBLRW/2K5IFHm6x
 +UgV+ccJAkszvwepLwtAVElgtJBMvnbcmm3hVQWiGOWCCFDBPJsDJXnePTRZCiyXOKrE
 bYjg==
X-Gm-Message-State: AOAM532E4t+qgMBx7dTM07kS57B1lrdmcs8/PZaW5QstiQyqnix01KTm
 B9YIS3L3A/FaCuWHGCdT8zG8lwsoEBzOmQ==
X-Google-Smtp-Source: ABdhPJyL9pGeWSaL7kl5ehK87kkKc+Del+rsWDEA2yvMomHnsOqt2/WC4Ll84GtxbzpIwGfLtHTnQg==
X-Received: by 2002:a2e:8703:: with SMTP id m3mr10548371lji.286.1589814334809; 
 Mon, 18 May 2020 08:05:34 -0700 (PDT)
Received: from centos7-pv-guest.localdomain ([5.35.46.227])
 by smtp.gmail.com with ESMTPSA id 130sm7296306lfl.37.2020.05.18.08.05.33
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Mon, 18 May 2020 08:05:34 -0700 (PDT)
From: Denis Kirjanov <kda@linux-powerpc.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4] public/io/netif.h: add a new extra type for XDP
Date: Mon, 18 May 2020 18:04:52 +0300
Message-Id: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
X-Mailer: git-send-email 1.8.3.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The patch adds a new extra type to make it possible to differentiate,
on the xen-netfront side, RX responses carrying the adjusted offset
required for XDP processing.

The offset value from a guest is passed via xenstore.

Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
---
v4:
- updated the commit message and documentation

v3:
- updated the commit message

v2:
- added documentation
- fixed padding for netif_extra_info
---
 xen/include/public/io/netif.h | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
index 9fcf91a..a92bf04 100644
--- a/xen/include/public/io/netif.h
+++ b/xen/include/public/io/netif.h
@@ -161,6 +161,17 @@
  */
 
 /*
+ * "xdp-headroom" is used to request that extra space is added
+ * for XDP processing.  The value is measured in bytes and passed by
+ * the frontend to be consistent between both ends.
+ * If the value is greater than zero that means that
+ * an RX response is going to be passed to an XDP program for processing.
+ *
+ * "feature-xdp-headroom" is set to "1" by the netback side like other features
+ * so a guest can check whether XDP headroom is supported.
+ */
+
+/*
  * Control ring
  * ============
  *
@@ -985,7 +996,8 @@ typedef struct netif_tx_request netif_tx_request_t;
 #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
 #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
-#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
+#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
+#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
 
 /* netif_extra_info_t flags. */
 #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
@@ -1018,6 +1030,10 @@ struct netif_extra_info {
             uint8_t algorithm;
             uint8_t value[4];
         } hash;
+        struct {
+            uint16_t headroom;
+            uint16_t pad[2];
+        } xdp;
         uint16_t pad[3];
     } u;
 };
-- 
1.8.3.1



From xen-devel-bounces@lists.xenproject.org Mon May 18 15:18:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:18:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahWs-0002nd-Ov; Mon, 18 May 2020 15:18:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jahWq-0002nX-R7
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:18:08 +0000
X-Inumbo-ID: c6d396c0-991a-11ea-a872-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6d396c0-991a-11ea-a872-12813bfff9fa;
 Mon, 18 May 2020 15:18:07 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: JkbVuUvGzKnGQQCS/YF3/lx3SQ+gsGc95icrSqD7k1DiojJHjk73/SbTddemRrHOiF6NDI/OTq
 CfJp5ZIkP1FEVbRT78aVgciZNntvv4wycxF+34qjdp2i65LwMUUla4FayYsV+uNZjoIW4uBs3s
 n0BYbdyK/kRdS+4pJlh6wSi57aX/l99Z9vg7B/l/DAxHZqn2DSbhQ1FKgcZpRB9bbvNb7uTwLG
 EzOMMRGxMYTWuA8/Gf3zjHYZfdSpQDJli75cUa1tuCFmTWpdKa3pDnVBCteGeCM3GE1geH/nWj
 Vrk=
X-SBRS: 2.7
X-MesageID: 18502088
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18502088"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24258.42794.136081.367565@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 16:18:02 +0100
To: Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain [and 1 more messages]
In-Reply-To: <20200428040433.23504-10-jandryuk@gmail.com>,
 <20200518145028.GD98582@mail-itl>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <20200428040433.23504-10-jandryuk@gmail.com>
 <24253.29524.798802.978257@mariner.uk.xensource.com>
 <CAKf6xpvJMovKMTWipC4gZuBD8FgmBEWbDbkm=ryRWSxNifQcJw@mail.gmail.com>
 <24258.39029.788968.419649@mariner.uk.xensource.com>
 <20200518145028.GD98582@mail-itl>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei
 Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Marek Marczykowski-Górecki writes ("Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> On Mon, May 18, 2020 at 03:15:17PM +0100, Ian Jackson wrote:
> > Err, by "the qemu savefile pathname" I meant the pathname in dom0.
> > I assume your wrapper script opens that and feeds it to the console.
> > Is that right ?
> 
> Not really a wrapper script. On the dom0 side it is the console backend (qemu)
> instructed to connect the console to a file instead of a pty. I have
> implemented a similar feature in my xenconsoled patch series sent a while
> ago (sent along with v2 of this series), but that series needs some more
> love.
> 
> On the stubdomain side, it is a script that launches qemu: it opens
> /dev/hvc2, then passes the FD to qemu via the -incoming option (which is
> really constructed by libxl).

Hi.  Thanks for trying to help me understand.  I was still confused
though.  I tried to explain another way and that helped me see what's
going on.

I think I understand now.

For reference, my confusion was this:

Jason Andryuk writes ("[PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> index bdc23554eb..45d0dd56f5 100644
> --- a/tools/libxl/libxl_dm.c
> +++ b/tools/libxl/libxl_dm.c
> @@ -1744,10 +1744,17 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
>      }
>  
>      if (state->saved_state) {
> -        /* This file descriptor is meant to be used by QEMU */
> -        *dm_state_fd = open(state->saved_state, O_RDONLY);
> -        flexarray_append(dm_args, "-incoming");
> -        flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
> +        if (is_stubdom) {
> +            /* Linux stubdomain connects specific FD to STUBDOM_CONSOLE_RESTORE
> +             */
> +            flexarray_append(dm_args, "-incoming");
> +            flexarray_append(dm_args, "fd:3");
> +        } else {
> +            /* This file descriptor is meant to be used by QEMU */
> +            *dm_state_fd = open(state->saved_state, O_RDONLY);
> +            flexarray_append(dm_args, "-incoming");
> +            flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
> +        }

In this hunk, the call
           *dm_state_fd = open(state->saved_state, O_RDONLY);
becomes conditional.  It is no longer executed in the stubdomain
case.

So then, who opens state->saved_state ?  And how do they get told
where it is ?  If it's somewhere else in libxl, why doesn't it show up
in this patch ?

Posing the question like that allowed me to see that the answer is

           console[i].output = GCSPRINTF("file:%s",
                           libxl__device_model_savefile(gc, guest_domid));

in spawn_stub_launch_dm.  And it doesn't appear in the patch because
it's already used for the minios stubdom with trad qemu, and the same
code is now going to be executed for the linux stubdom with modern qemu.

Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 15:20:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:20:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahZP-0003YB-6a; Mon, 18 May 2020 15:20:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jahZO-0003Y6-MZ
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:20:46 +0000
X-Inumbo-ID: 25118742-991b-11ea-ae69-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25118742-991b-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 15:20:46 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id f18so10275508lja.13
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 08:20:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=wWiHQi0e8L74KHAj4JXZmCf/pFA8Cj68yED+D3NeSlU=;
 b=uD6ltweur65jVLAF9t2IdoxK5L9t65GlPCc3yYB+ynPcU380gq3ZeGLw6T+BW8qMy2
 erJhrOK5g5Cy25ft4Sq7Wb/J7U6ClU6bD4v2mUgEfM3TAURf9SYHQhGcTMw00CpVu+/T
 puHDcUEyoNYgRn28B8bQ5bvwOjlCea85ZuBQhw03tHhmnTNWg/v/LN1GdjsoMYRNLmHA
 6+pNrsUxEc+iobBpvS2B7n/SB/XkVhvbWyWMF+jKKg3cId+CNp/fp5p/jQd9dsOGdE3l
 4He3iZ8xuDBlTHUEB/UAThNRey/SNolQDrYCTVePvd5udjW6SGQocRwRcF9apojNKcii
 dH3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=wWiHQi0e8L74KHAj4JXZmCf/pFA8Cj68yED+D3NeSlU=;
 b=MljNvMPNzoKIs/FuNv4CC1rPoJXGoR4rh3wlEMISkjWXTvR96xniQ9ksxc0jFdzSTg
 B8jtXqCyWbPGA08QPqIfhQFM5xpe+9LId3K3KjVffuS10VEcXUsb+KNovLXRbCEmOn3T
 3xB4P+gdu7avjPL8ThfPmkt7PAARbYhWreP8TPVrLwO37vYf2vVH12W6dXycwi7Ig/lp
 mAXfT8UPx1a5AdXBprmHZIR39/pfpoU/abD8Ll+JTXCHwB419Lf1dhwulA3NLlQcINa5
 3YaAfzQ/giTAGTYEtQFneF/C1DuGJfH1afQPO70AIPdLB/h8/F5OHWYELbe1HGJgUVG2
 PnWA==
X-Gm-Message-State: AOAM532dQ3JJNfrFxECh/QVLt5B2v11z02GGC/wO81ajhp030YuH4Wys
 /T3/tQ9Wq9ZawUs0oPUgXnP17FQFkRbsHwGGw9E=
X-Google-Smtp-Source: ABdhPJzQdtnUfUFxI4mg6gUm5vDxHpfy43carulanlXtFGZd1FwsbkOhOXW6OrCq3sn7ucCZqYRWDnuxiFdxJYGTR50=
X-Received: by 2002:a2e:9b10:: with SMTP id u16mr1805008lji.210.1589815244483; 
 Mon, 18 May 2020 08:20:44 -0700 (PDT)
MIME-Version: 1.0
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-7-jandryuk@gmail.com>
 <24258.39310.574582.176081@mariner.uk.xensource.com>
In-Reply-To: <24258.39310.574582.176081@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 18 May 2020 11:20:33 -0400
Message-ID: <CAKf6xpueM5BXd0ivDHHpq2oRo_T1Uh+zMF0TrrV5u5dVR8DiLQ@mail.gmail.com>
Subject: Re: [PATCH v6 06/18] libxl: write qemu arguments into separate
 xenstore keys
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 10:20 AM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Jason Andryuk writes ("[PATCH v6 06/18] libxl: write qemu arguments into separate xenstore keys"):
> > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ...
> > +static int libxl__write_stub_linux_dm_argv(libxl__gc *gc,
> > +                                           int dm_domid, int guest_domid,
> > +                                           char **args)
> > +{
>
> Thanks for the changes.
>
> > +    xs_transaction_t t = XBT_NULL;
> ...
> > +    rc = libxl__xs_read_mandatory(gc, XBT_NULL,
> > +                                  GCSPRINTF("/local/domain/%d/vm", guest_domid),
> > +                                  &vm_path);
> > +    if (rc)
> > +        return rc;
>
> I think this should be "goto out".  That conforms to standard libxl
> error handling discipline and avoids future leak bugs etc.
> libxl__xs_transaction_abort is a no-op with t==NULL.
>
> Also, it is not clear to me why you chose to put this outside the
> transaction loop.  Can you either put it inside the transaction loop,
> or produce an explanation as to why this approach is race-free...

I just matched the old code, which used a transaction only around the
write.  "vm" shouldn't change during runtime, but I can make the changes
as you suggest.

-Jason


From xen-devel-bounces@lists.xenproject.org Mon May 18 15:30:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahii-0004WJ-Do; Mon, 18 May 2020 15:30:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jahih-0004WE-HZ
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:30:23 +0000
X-Inumbo-ID: 7ce1bcfc-991c-11ea-b07b-bc764e2007e4
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ce1bcfc-991c-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 15:30:22 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id h188so8449435lfd.7
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 08:30:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=RNVXcfewXqmsp+ezBCQMBAjKuoMluyNqSpecwaZ3Lv4=;
 b=Gn/vED0Gf9+nQjOCDkaMwgyUeH3+ftV7JzbGYrpkT1kGB2w5I2zG8UoF4sWKy2wJdM
 48MMyY76gUyjJSuCfnR+aAvp24Hfyg4uaTD6//tr5JXejea4ikNRpeWfuudpnQJBPpcL
 izDc6aARp/Dr9D0GNYWe/sEiQwAYuwllHwiv0hVUzbIqkAlva4PGUw3gnpIkP+kqj6hI
 I7bdy15mnpNfKJwqKF/X7BNmMKBAMucjgY5tX2arUVVwSnSS+YyQ8Kc40rYOj8dSyjdC
 4G1Ed7hmZpoCsG3zMU38bfcQTQpgB8DzssTawsqvFmOTpLKmaWHs1oBYdGNjErxjAezO
 gzpQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=RNVXcfewXqmsp+ezBCQMBAjKuoMluyNqSpecwaZ3Lv4=;
 b=Ia46fBZnpOVrBQvqUItDmSait06D22ZocMhljan9D55e9qEgnBAP7TeR7h77g6TnfG
 xbplbxlmEVtNdk8AYwmND2wTD7GAho2Z1LiUBL8b1TlNtz4CMrIuy6z2JujR3f7KNDwH
 m8pm/Q+UW/4vFA9cvAAseIfGhm/ulavUgyPMGYHoKutUlkavMm6ovFFKpHLN3J+i7TVW
 /9whhk5XlDpkEW7E7SIdsN3OjT/BfKLYqTjkNS4A4ohlqbuxpuTdbJ28GeMqG3ZpeWjZ
 3ZEc/R3DVvcMshA1WF75BiN9gUMNlU3cIstCJHPjEZLgPAchkeJsYt0if8t+vkJwsARm
 MIzw==
X-Gm-Message-State: AOAM532o3hoLKJxQRoimpq/At/30aPlseq/3caWLuMt2qbOJxJyODfVy
 jRfM7ZdR+2AZZmdTq2FClCXFUoZ8ZfRbRLHASgQ=
X-Google-Smtp-Source: ABdhPJwkj6Nkoq5LPDS2Iwv0eBH8eyAMlCib8NYu+rS5P+pGBS+uGljxz+DfUjDo2qG7EBWwP5wDShzlUDvNdiA2grk=
X-Received: by 2002:a05:6512:31d1:: with SMTP id
 j17mr4845055lfe.148.1589815821575; 
 Mon, 18 May 2020 08:30:21 -0700 (PDT)
MIME-Version: 1.0
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-10-jandryuk@gmail.com>
 <24258.39586.245004.804616@mariner.uk.xensource.com>
In-Reply-To: <24258.39586.245004.804616@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 18 May 2020 11:30:10 -0400
Message-ID: <CAKf6xpt-wBML1kFPOddaM8J8KbqSveN=Z0esvRN-O4UzidrTQg@mail.gmail.com>
Subject: Re: [PATCH v6 09/18] libxl: add save/restore support for qemu-xen in
 stubdomain
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 10:24 AM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Jason Andryuk writes ("[PATCH v6 09/18] libxl: add save/restore support for qemu-xen in stubdomain"):
> > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> ...
> >      if (state->saved_state) {
> > -        /* This file descriptor is meant to be used by QEMU */
> > -        *dm_state_fd = open(state->saved_state, O_RDONLY);
> > -        flexarray_append(dm_args, "-incoming");
> > -        flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
> > +        if (is_stubdom) {
> > +            /* Linux stubdomain must replace $STUBDOM_RESTORE_INCOMING_ARG
> > +             * with the appropriate fd:$num argument for the
> > +             * STUBDOM_CONSOLE_RESTORE console 2.
> > +             */
> > +            flexarray_append(dm_args, "-incoming");
> > +            flexarray_append(dm_args, "$STUBDOM_RESTORE_INCOMING_ARG");
> > +        } else {
> > +            /* This file descriptor is meant to be used by QEMU */
> > +            *dm_state_fd = open(state->saved_state, O_RDONLY);
> > +            flexarray_append(dm_args, "-incoming");
> > +            flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
>
> Hrk.  The stubdom script is expected to spot this particular value in
> the dm_args array and seddery it.  OK.  This is, at least, sound.
> I'm happy with the code and the protocol.
>
> I think this needs a change to this doc:
>
>   Subject: [PATCH v6 01/18] Document ioemu MiniOS stubdomain protocol
>
>   +Toolstack to MiniOS ioemu stubdomain protocol
>   +---------------------------------------------
>
> Provided that you update the docs commit and take my ack off that,
> please add my ack to this code :-).

I updated "[PATCH v6 02/18] Document ioemu Linux stubdomain protocol"
to mention $STUBDOM_RESTORE_INCOMING_ARG as well as the xenstore
directory change to "dm-argv" in this v6, but I left your Ack on it.
Sorry about that.  I'll remove your Ack from 02/18 when I post v7,
but I'll add the Ack to this 09/18.

-Jason
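[Editorial note: the "seddery" discussed above — the stubdomain script spotting the $STUBDOM_RESTORE_INCOMING_ARG placeholder in the device-model arguments and rewriting it — can be sketched roughly as follows. This is a hypothetical illustration only; the function name, fd number, and argument handling are assumptions, not the actual stubdomain script.]

```shell
# Hypothetical sketch: replace the $STUBDOM_RESTORE_INCOMING_ARG
# placeholder with fd:$N, where $N is a descriptor already open on
# the restore console (console 2, /dev/hvc2 in the stubdomain).
substitute_restore_arg() {
    fd=$1; shift
    for arg in "$@"; do
        if [ "$arg" = '$STUBDOM_RESTORE_INCOMING_ARG' ]; then
            printf '%s\n' "fd:$fd"
        else
            printf '%s\n' "$arg"
        fi
    done
}

# e.g. with the restore console open on fd 3:
substitute_restore_arg 3 -incoming '$STUBDOM_RESTORE_INCOMING_ARG'
# prints "-incoming" then "fd:3"
```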


From xen-devel-bounces@lists.xenproject.org Mon May 18 15:39:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:39:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahqs-0004l9-AK; Mon, 18 May 2020 15:38:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ws3m=7A=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jahqq-0004l4-RS
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:38:48 +0000
X-Inumbo-ID: aa1a0e30-991d-11ea-b07b-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa1a0e30-991d-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 15:38:48 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8QUu+RqeCZTfLX9+9j1ojai1ovw9ILlgQLJwDsrEgt6CH5GczC6YWJZFgbsgqGiCZp5+0V/pix
 8tuXJ520mu9zerRSOtQeTnTAeLRRYkPVBi6ozObfHjm3gk+P4WfeuuM92ImZTE53nohKCNwLGP
 bgY5Ecxy1sLb92snbPP1lKQoWkLxpmSaKWqu87LuhGZhBRC39l2ds6FvggVS8UDnncnldY3C2L
 X1QH7KEft+vvhkftRVSalTwu2TiFUsEFAm1FHne+LHLlvDIRbcnCj1TrcNl4/SjecEXtOe6uiW
 x28=
X-SBRS: 2.7
X-MesageID: 18079791
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18079791"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
Date: Mon, 18 May 2020 16:38:20 +0100
Message-ID: <20200518153820.18170-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The reserved_bit_page_fault() paths effectively turn reserved bit faults into
a warning, but in the light of L1TF, the real impact is far more serious.

Xen does not have any reserved bits set in its pagetables, nor do we permit PV
guests to write any.  An HVM shadow guest may have reserved bits via the MMIO
fastpath, but those faults are handled in the VMExit #PF intercept, rather
than Xen's #PF handler.

There is no need to disable interrupts (in spurious_page_fault()) for
__page_fault_type() to look at the rsvd bit, nor should extable fixup be
tolerated.

Make #PF[Rsvd] a hard error, irrespective of mode.  Any new panic() caused by
this constitutes an L1TF gadget needing fixing.

Additionally, drop the comment for do_page_fault().  It is inaccurate (bit 0
being set isn't always a protection violation) and stale (missing bits
5,6,15,31).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/traps.c | 39 +++++++++++++--------------------------
 1 file changed, 13 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 73c6218660..4f8e3c7a32 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1137,15 +1137,6 @@ void do_int3(struct cpu_user_regs *regs)
     pv_inject_hw_exception(TRAP_int3, X86_EVENT_NO_EC);
 }
 
-static void reserved_bit_page_fault(unsigned long addr,
-                                    struct cpu_user_regs *regs)
-{
-    printk("%pv: reserved bit in page table (ec=%04X)\n",
-           current, regs->error_code);
-    show_page_walk(addr);
-    show_execution_state(regs);
-}
-
 #ifdef CONFIG_PV
 static int handle_ldt_mapping_fault(unsigned int offset,
                                     struct cpu_user_regs *regs)
@@ -1248,10 +1239,6 @@ static enum pf_type __page_fault_type(unsigned long addr,
     if ( in_irq() )
         return real_fault;
 
-    /* Reserved bit violations are never spurious faults. */
-    if ( error_code & PFEC_reserved_bit )
-        return real_fault;
-
     required_flags  = _PAGE_PRESENT;
     if ( error_code & PFEC_write_access )
         required_flags |= _PAGE_RW;
@@ -1413,14 +1400,6 @@ static int fixup_page_fault(unsigned long addr, struct cpu_user_regs *regs)
     return 0;
 }
 
-/*
- * #PF error code:
- *  Bit 0: Protection violation (=1) ; Page not present (=0)
- *  Bit 1: Write access
- *  Bit 2: User mode (=1) ; Supervisor mode (=0)
- *  Bit 3: Reserved bit violation
- *  Bit 4: Instruction fetch
- */
 void do_page_fault(struct cpu_user_regs *regs)
 {
     unsigned long addr, fixup;
@@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
     if ( unlikely(fixup_page_fault(addr, regs) != 0) )
         return;
 
+    /*
+     * Xen have reserved bits in its pagetables, nor do we permit PV guests to
+     * write any.  Such entries would be vulnerable to the L1TF sidechannel.
+     *
+     * The only logic which intentionally sets reserved bits is the shadow
+     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
+     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
+     * than here.
+     */
+    if ( error_code & PFEC_reserved_bit )
+        goto fatal;
+
     if ( unlikely(!guest_mode(regs)) )
     {
         enum pf_type pf_type = spurious_page_fault(addr, regs);
@@ -1457,13 +1448,12 @@ void do_page_fault(struct cpu_user_regs *regs)
         if ( likely((fixup = search_exception_table(regs)) != 0) )
         {
             perfc_incr(copy_user_faults);
-            if ( unlikely(regs->error_code & PFEC_reserved_bit) )
-                reserved_bit_page_fault(addr, regs);
             this_cpu(last_extable_addr) = regs->rip;
             regs->rip = fixup;
             return;
         }
 
+    fatal:
         if ( debugger_trap_fatal(TRAP_page_fault, regs) )
             return;
 
@@ -1475,9 +1465,6 @@ void do_page_fault(struct cpu_user_regs *regs)
               error_code, _p(addr));
     }
 
-    if ( unlikely(regs->error_code & PFEC_reserved_bit) )
-        reserved_bit_page_fault(addr, regs);
-
     pv_inject_page_fault(regs->error_code, addr);
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon May 18 15:40:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:40:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahsp-0005VZ-MQ; Mon, 18 May 2020 15:40:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ws3m=7A=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jahsn-0005VT-Nu
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:40:49 +0000
X-Inumbo-ID: f1e07380-991d-11ea-a872-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1e07380-991d-11ea-a872-12813bfff9fa;
 Mon, 18 May 2020 15:40:48 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: MOb2nie1M6IwKbYYf4DS0pS2LH1JcVC+SZf9s+wWZPLPmRyoIgZcHT+heFRF+Ozs+YIEvY0SC7
 orpLa/eJBMaTCcMVEBMSJSBw2Qu5XhwUavdXZE3VxX3Ge61JPbFvapU1XKCbqwQDh1XNhrLJGK
 Ib86cpyl0kxaGgqHWmpVcYZ41RhF5HahwIKpeMZ2FHmDreNExY6wc54jNo9Ln9fuNge55ATs7L
 Af78ABlGPJEdc9+V7LEQMtDaiVmSw9q2GMgbkZOuroorxcAaCmW3E9vdu6+LexS34d7/CKcdHH
 kJU=
X-SBRS: 2.7
X-MesageID: 17831024
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="17831024"
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Xen-devel <xen-devel@lists.xenproject.org>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <baf42a0a-0da5-74d6-aaf6-0377af7a8888@citrix.com>
Date: Mon, 18 May 2020 16:40:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200518153820.18170-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18/05/2020 16:38, Andrew Cooper wrote:
> The reserved_bit_page_fault() paths effectively turn reserved bit faults into
> a warning, but in the light of L1TF, the real impact is far more serious.
>
> Xen does not have any reserved bits set in its pagetables, nor do we permit PV
> guests to write any.  An HVM shadow guest may have reserved bits via the MMIO
> fastpath, but those faults are handled in the VMExit #PF intercept, rather
> than Xen's #PF handler.
>
> There is no need to disable interrupts (in spurious_page_fault()) for
> __page_fault_type() to look at the rsvd bit, nor should extable fixup be
> tolerated.
>
> Make #PF[Rsvd] a hard error, irrespective of mode.  Any new panic() caused by
> this constitutes an L1TF gadget needing fixing.
>
> Additionally, drop the comment for do_page_fault().  It is inaccurate (bit 0
> being set isn't always a protection violation) and stale (missing bits
> 5,6,15,31).
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> ---
>  xen/arch/x86/traps.c | 39 +++++++++++++--------------------------
>  1 file changed, 13 insertions(+), 26 deletions(-)
>
> diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
> index 73c6218660..4f8e3c7a32 100644
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -1137,15 +1137,6 @@ void do_int3(struct cpu_user_regs *regs)
>      pv_inject_hw_exception(TRAP_int3, X86_EVENT_NO_EC);
>  }
>  
> -static void reserved_bit_page_fault(unsigned long addr,
> -                                    struct cpu_user_regs *regs)
> -{
> -    printk("%pv: reserved bit in page table (ec=%04X)\n",
> -           current, regs->error_code);
> -    show_page_walk(addr);
> -    show_execution_state(regs);
> -}
> -
>  #ifdef CONFIG_PV
>  static int handle_ldt_mapping_fault(unsigned int offset,
>                                      struct cpu_user_regs *regs)
> @@ -1248,10 +1239,6 @@ static enum pf_type __page_fault_type(unsigned long addr,
>      if ( in_irq() )
>          return real_fault;
>  
> -    /* Reserved bit violations are never spurious faults. */
> -    if ( error_code & PFEC_reserved_bit )
> -        return real_fault;
> -
>      required_flags  = _PAGE_PRESENT;
>      if ( error_code & PFEC_write_access )
>          required_flags |= _PAGE_RW;
> @@ -1413,14 +1400,6 @@ static int fixup_page_fault(unsigned long addr, struct cpu_user_regs *regs)
>      return 0;
>  }
>  
> -/*
> - * #PF error code:
> - *  Bit 0: Protection violation (=1) ; Page not present (=0)
> - *  Bit 1: Write access
> - *  Bit 2: User mode (=1) ; Supervisor mode (=0)
> - *  Bit 3: Reserved bit violation
> - *  Bit 4: Instruction fetch
> - */
>  void do_page_fault(struct cpu_user_regs *regs)
>  {
>      unsigned long addr, fixup;
> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>          return;
>  
> +    /*
> +     * Xen have reserved bits in its pagetables, nor do we permit PV guests to

This should read "Xen doesn't have"

~Andrew

> +     * write any.  Such entries would be vulnerable to the L1TF sidechannel.
> +     *
> +     * The only logic which intentionally sets reserved bits is the shadow
> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
> +     * than here.
> +     */
> +    if ( error_code & PFEC_reserved_bit )
> +        goto fatal;
> +
>      if ( unlikely(!guest_mode(regs)) )
>      {
>          enum pf_type pf_type = spurious_page_fault(addr, regs);
> @@ -1457,13 +1448,12 @@ void do_page_fault(struct cpu_user_regs *regs)
>          if ( likely((fixup = search_exception_table(regs)) != 0) )
>          {
>              perfc_incr(copy_user_faults);
> -            if ( unlikely(regs->error_code & PFEC_reserved_bit) )
> -                reserved_bit_page_fault(addr, regs);
>              this_cpu(last_extable_addr) = regs->rip;
>              regs->rip = fixup;
>              return;
>          }
>  
> +    fatal:
>          if ( debugger_trap_fatal(TRAP_page_fault, regs) )
>              return;
>  
> @@ -1475,9 +1465,6 @@ void do_page_fault(struct cpu_user_regs *regs)
>                error_code, _p(addr));
>      }
>  
> -    if ( unlikely(regs->error_code & PFEC_reserved_bit) )
> -        reserved_bit_page_fault(addr, regs);
> -
>      pv_inject_page_fault(regs->error_code, addr);
>  }
>  



From xen-devel-bounces@lists.xenproject.org Mon May 18 15:45:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:45:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahxR-0005fz-9e; Mon, 18 May 2020 15:45:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dGN6=7A=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jahxQ-0005fu-EH
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:45:36 +0000
X-Inumbo-ID: 9cdf7ec0-991e-11ea-a876-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9cdf7ec0-991e-11ea-a876-12813bfff9fa;
 Mon, 18 May 2020 15:45:35 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: BZjX5jd/1oAlfihWH2kFAnCKLcmxMsfDuQ/Ug/4xxaT+ZR72b5g3Kbi6IW6RvNv/BrvFW+Di8Y
 2ebxCkEg93EX/saXzY2cJUas5US0GmJTD4M9/8HE1TeiMbnDR8WjScR+f1I9Jp7/tWjKIp4ivr
 CSb0rmg7fc4qn7vurx/sv6falyz8PaXgkylpFOkAvjuB/VUwIRi/xDYcGwbPXXimtMJBo6B419
 JP+kkJ4MzvC9ljJ/V9RsJl6/pNiU11WJgd/1TksWwcjGe3dEsKZ5i80xENI+IH0EP1ps78MiO+
 RGY=
X-SBRS: 2.7
X-MesageID: 18080382
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18080382"
Date: Mon, 18 May 2020 17:45:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
Message-ID: <20200518154527.GW54375@Air-de-Roger>
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-3-roger.pau@citrix.com>
 <e9e337ae-295e-5577-3c6d-a42721190b07@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <e9e337ae-295e-5577-3c6d-a42721190b07@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 05:05:12PM +0200, Jan Beulich wrote:
> 
> On 15.05.2020 15:58, Roger Pau Monne wrote:
> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -652,6 +652,15 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
> >  additionally a trace buffer of the specified size is allocated per cpu.
> >  The debug trace feature is only enabled in debugging builds of Xen.
> >  
> > +### disable-c6-errata
> 
> Hmm, yes please - a disable for errata! ;-)
> 
> How about "avoid-c6-errata", and then perhaps as a sub-option to
> "cpuidle="? (If we really want a control for this in the first
> place.)

Right, I see I'm very bad at naming. Not sure it's even worth it
maybe?

I can remove it completely from the patch if that is OK.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 18 15:47:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jahzc-0005pd-QK; Mon, 18 May 2020 15:47:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hVld=7A=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jahzb-0005ol-FH
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:47:51 +0000
X-Inumbo-ID: edbf9e88-991e-11ea-a877-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id edbf9e88-991e-11ea-a877-12813bfff9fa;
 Mon, 18 May 2020 15:47:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5ACCCAEDD;
 Mon, 18 May 2020 15:47:52 +0000 (UTC)
Subject: Re: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-3-roger.pau@citrix.com>
 <e9e337ae-295e-5577-3c6d-a42721190b07@suse.com>
 <20200518154527.GW54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6ce247e4-ef85-6793-68a6-0d1cde7f886d@suse.com>
Date: Mon, 18 May 2020 17:47:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200518154527.GW54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.2020 17:45, Roger Pau Monné wrote:
> On Mon, May 18, 2020 at 05:05:12PM +0200, Jan Beulich wrote:
>>
>> On 15.05.2020 15:58, Roger Pau Monne wrote:
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -652,6 +652,15 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>>>  additionally a trace buffer of the specified size is allocated per cpu.
>>>  The debug trace feature is only enabled in debugging builds of Xen.
>>>  
>>> +### disable-c6-errata
>>
>> Hmm, yes please - a disable for errata! ;-)
>>
>> How about "avoid-c6-errata", and then perhaps as a sub-option to
>> "cpuidle="? (If we really want a control for this in the first
>> place.)
> 
> Right, I see I'm very bad at naming. Not sure it's even worth it
> maybe?
> 
> I can remove it completely from the patch if that is OK.

I'd be fine without. Andrew?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 18 15:48:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 15:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jai0h-0005ur-3v; Mon, 18 May 2020 15:48:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tJLm=7A=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jai0f-0005uh-Hi
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 15:48:57 +0000
X-Inumbo-ID: 14cc2744-991f-11ea-ae69-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14cc2744-991f-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 15:48:56 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id o14so10415921ljp.4
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 08:48:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=T/0JXutgb5uv3kHR9Zj9p+h6EdPHwSs1XSdCNi7BT1Y=;
 b=LhCuswHDjSeumcXuDr1jBauIopcfv+qPOx5ieZtIc4Fh9X06nq0a5WfdG7dBj+lbwc
 Y1iYSHrRZczphyKTA7RlxTAACp2C8eOEk4nJfcAjIY0YHL5PBkOnB74SUuouYOy98tRr
 3qf8MrZpXdqLEplwxT/DdTSTOb0f1XQYg02HBrMBDvfk+sKNHHOxSUMmykcVVK7Ml1uv
 hdiGzqCHBAtWOPHx9lGRnAT5PDlRAD6YDc8HtnYhElbZjTRps9njUp+9ixB8RTUiIXPk
 /6g6BxV3th74Rq4ubqO+dParMIey9OFiW7jfWivhH7cXF329nNein09H205AJCkcsRzV
 WJ+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=T/0JXutgb5uv3kHR9Zj9p+h6EdPHwSs1XSdCNi7BT1Y=;
 b=SrW+wtFA1AKP9y68xNQhBuKbMP0ydvhkxQixeGoC2W8QaAK00/rQbJneKz6ufm8Ehv
 rbGk+joUSvEcTWXKDaPtISuVuBFC1th/ub/GcXxVRzq2LW/qJsv43yhfEU9H9C6upvtn
 R1vPvFYANOPtoC2Bwuafm3gprROt4qSNTr8WQg51/D50u/rab6xfWgsnFk2sBRAgxwIt
 Kgb7k3D4zqloTs6/Fv+ZzaIwRgkeGVHW1uQRe9RFTWHcwwML02NOS6nEx0abqD0jzG4B
 +cKrUxYptCKE8plUrpFZrqSVSQsKCqKMbrolx5/PalAJq+748gcJxxTmnXWD1DkOj6/I
 ZKvA==
X-Gm-Message-State: AOAM530UiCJ0OIS0Oy62+BWkA+gMA9QWJ8dQnVaQxuE5TjeU5kYU8fZd
 BsHNYvDFvsCqQDgzb9yFBxzM+SYpzNEbDpL2tG4=
X-Google-Smtp-Source: ABdhPJz5ECz3tbhHkf6OeL44VHAJG8Zj3gNR6q05FbODkB+mGj0IhrClYSgYstwWupjkDeJAjRXa+uyIzBq4eE6vhTQ=
X-Received: by 2002:a2e:9b10:: with SMTP id u16mr1872153lji.210.1589816935489; 
 Mon, 18 May 2020 08:48:55 -0700 (PDT)
MIME-Version: 1.0
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <24253.29524.798802.978257@mariner.uk.xensource.com>
 <CAKf6xpvJMovKMTWipC4gZuBD8FgmBEWbDbkm=ryRWSxNifQcJw@mail.gmail.com>
 <24258.39029.788968.419649@mariner.uk.xensource.com>
 <20200428040433.23504-10-jandryuk@gmail.com>
 <20200518145028.GD98582@mail-itl>
 <24258.42794.136081.367565@mariner.uk.xensource.com>
In-Reply-To: <24258.42794.136081.367565@mariner.uk.xensource.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 18 May 2020 11:48:44 -0400
Message-ID: <CAKf6xpvdSb=fSebzpHaLb1F9zNqsUn3dA03wYoXaZtxSLn0K+w@mail.gmail.com>
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain [and 1 more messages]
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 11:18 AM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> >
> Marek Marczykowski-Górecki writes ("Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> > On Mon, May 18, 2020 at 03:15:17PM +0100, Ian Jackson wrote:
> > > Err, by "the qemu savefile pathname" I meant the pathname in dom0.
> > > I assume your wrapper script opens that and feeds it to the console.
> > > Is that right ?
> >
> > Not really a wrapper script. On dom0 side it is console backend (qemu)
> > instructed to connect the console to a file, instead of pty. I have
> > implemented similar feature in my xenconsoled patch series sent a while
> > ago (sent along with v2 of this series), but that series needs some more
> > love.
> >
> > On the stubdomain side, it is a script that launches qemu - opens a
> > /dev/hvc2, then pass the FD to qemu via -incoming option (which is
> > really constructed by libxl).
>
> Hi.  Thanks for trying to help me understand.  I was still confused
> though.  I tried to explain another way and that helped me see what's
> going on.
>
> I think I understand now.
>
> For reference, my confusion was this:
>
> Jason Andryuk writes ("[PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain"):
> > index bdc23554eb..45d0dd56f5 100644
> > --- a/tools/libxl/libxl_dm.c
> > +++ b/tools/libxl/libxl_dm.c
> > @@ -1744,10 +1744,17 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
> >      }
> >
> >      if (state->saved_state) {
> > -        /* This file descriptor is meant to be used by QEMU */
> > -        *dm_state_fd = open(state->saved_state, O_RDONLY);
> > -        flexarray_append(dm_args, "-incoming");
> > -        flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
> > +        if (is_stubdom) {
> > +            /* Linux stubdomain connects specific FD to STUBDOM_CONSOLE_RESTORE
> > +             */
> > +            flexarray_append(dm_args, "-incoming");
> > +            flexarray_append(dm_args, "fd:3");
> > +        } else {
> > +            /* This file descriptor is meant to be used by QEMU */
> > +            *dm_state_fd = open(state->saved_state, O_RDONLY);
> > +            flexarray_append(dm_args, "-incoming");
> > +            flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
> > +        }
>
> In this hunk, the call
>            *dm_state_fd = open(state->saved_state, O_RDONLY);
> becomes conditional.  It is no longer executed in the stubdomain
> case.
>
> So then, who opens state->saved_state ?  And how do they get told
> where it is ?  If it's somewhere else in libxl, why doesn't it show up
> in this patch ?
>
> Posing the question like that allowed me to see that the answer is
>
>            console[i].output = GCSPRINTF("file:%s",
>                            libxl__device_model_savefile(gc, guest_domid));
>
> in spawn_stub_launch_dm.  And it doesn't appear in the patch because
> it's already used for the minios stub with trad qemu, and the same code
> is now going to be executed for the linux stub with modern qemu.

Do you want the commit message to add a blurb about this?  So the
message becomes:
"""
Rely on a wrapper script in stubdomain to attach relevant consoles to
qemu.  The save console (1) must be attached to fdset/1.  When
performing a restore, $STUBDOM_RESTORE_INCOMING_ARG must be replaced on
the qemu command line by "fd:$FD", where $FD is an open file descriptor
number to the restore console (2).

Existing libxl code (for dom0) already connects the stubdom's save &
restore console outputs to the save & restore files.
"""

-Jason


From xen-devel-bounces@lists.xenproject.org Mon May 18 16:29:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 16:29:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaidh-0001ND-Ae; Mon, 18 May 2020 16:29:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jaidg-0001N6-K0
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 16:29:16 +0000
X-Inumbo-ID: b68ee12b-9924-11ea-a87c-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b68ee12b-9924-11ea-a87c-12813bfff9fa;
 Mon, 18 May 2020 16:29:15 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LuhgWmARxEc/NkntMIXnQ/hdaom8NfEq/HXWacJe7S/Qa7mNfb5tP42aQyruA8/YBmhH9BfnYx
 h3qq805/aXyO6gXhM5VXR5QF132n+2NN6TQCLwonWAhaFaDigUUM1ffIYbROqF0cSbCQFDhaqo
 TLEB746apsYBdcAUKt6Q9aPsriTJ1gJdgeigtDMxg5czVpSZpG3CVJ00FR4SVnaB9YfrdWLr/E
 JxtLhUejOLCfShKT8DHUC977bIs+lfRTFVqIRDCy2inJNy94s48Zuek9lWczXQtJWSYE9EXmmv
 G0Q=
X-SBRS: 2.7
X-MesageID: 18510447
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18510447"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24258.47057.611121.946520@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 17:29:05 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v6 09/18] libxl: add save/restore support for qemu-xen in
 stubdomain
In-Reply-To: <CAKf6xpt-wBML1kFPOddaM8J8KbqSveN=Z0esvRN-O4UzidrTQg@mail.gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-10-jandryuk@gmail.com>
 <24258.39586.245004.804616@mariner.uk.xensource.com>
 <CAKf6xpt-wBML1kFPOddaM8J8KbqSveN=Z0esvRN-O4UzidrTQg@mail.gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("Re: [PATCH v6 09/18] libxl: add save/restore support for qemu-xen in stubdomain"):
> On Mon, May 18, 2020 at 10:24 AM Ian Jackson <ian.jackson@citrix.com> wrote:
> > Provided that you update the docs commit and take my ack off that,
> > please add my ack to this code :-).
> 
> I updated "[PATCH v6 02/18] Document ioemu Linux stubdomain protocol"
> to mention $STUBDOM_RESTORE_INCOMING_ARG as well as the xenstore
> directory change to "dm-argv" in this v6, but I left your Ack on it.
> Sorry about that.  I'll remove your Ack from 02/18 when I post v7,
> but I'll add the Ack to this 09/18.

Oh, that's why I didn't see that docs change.  I went to look at what
you wrote in v6 02/18.  It LGTM.  Please put my ack back :-).

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 16:34:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 16:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaij3-0002Cd-01; Mon, 18 May 2020 16:34:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jaij1-0002C7-K6
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 16:34:47 +0000
X-Inumbo-ID: 7c237eb4-9925-11ea-9887-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c237eb4-9925-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 16:34:46 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Q3jbo6O4LYW6cbIn/QuEaplO4s0geX/mDEzvnfGuEkLEkqNQGsYNrWufn/wWHO63ocU7LJtjE6
 bj7SYEOnEzEnQ1Fio4i4GEoD1sxa77eJpnHzA+ts2SIhr1WME1I93Szf3WnCqEaXpYuWjlUSu/
 f8xmtHaGSq6Owi7eiq/zOubIeaaDlL6UGQ6RNMytn/0H83/Ek1vQSXQlC7iO7VoaIlly53HUzD
 yODUDz2AWZBjTqKdO86rYrhrnporMzDxI8qryRZETviUl0QO27r1nC+VzYDzHlGvCZ81tDOIiC
 qMk=
X-SBRS: 2.7
X-MesageID: 18511119
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18511119"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24258.47393.798081.764926@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 17:34:41 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v6 06/18] libxl: write qemu arguments into separate
 xenstore keys
In-Reply-To: <CAKf6xpueM5BXd0ivDHHpq2oRo_T1Uh+zMF0TrrV5u5dVR8DiLQ@mail.gmail.com>
References: <20200518011353.326287-1-jandryuk@gmail.com>
 <20200518011353.326287-7-jandryuk@gmail.com>
 <24258.39310.574582.176081@mariner.uk.xensource.com>
 <CAKf6xpueM5BXd0ivDHHpq2oRo_T1Uh+zMF0TrrV5u5dVR8DiLQ@mail.gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("Re: [PATCH v6 06/18] libxl: write qemu arguments into separate xenstore keys"):
> On Mon, May 18, 2020 at 10:20 AM Ian Jackson <ian.jackson@citrix.com> wrote:
> > I think this should be "goto out".  That conforms to standard libxl
> > error handling discipline and avoids future leak bugs etc.
> > libxl__xs_transaction_abort is a no-op with t==NULL.
> >
> > Also, it is not clear to me why you chose to put this outside the
> > transaction loop.  Can you either put it inside the transaction loop,
> > or produce an explanation as to why this approach is race-free...
> 
> I just matched the old code, which put the transaction only around the
> write.  "vm" shouldn't change during runtime, but I can make the
> changes as you suggest.

Ah I see.  I hadn't spotted this duplication.

As there is only one caller of libxl__write_stub_dmargs the messing
about with %d/vm and the transaction and so on could be factored out
and only the actual arg processing made conditional.

I would prefer that, to be honest.  But I don't want to derail this
series at this point by asking you to take on refactorings that I
ought to have asked for sooner.

So I'll take it if you make the new code the way I like it, as I
suggest above.  Maybe it will be refactored later (perhaps even by
me...)

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 16:37:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 16:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jailW-0002JF-De; Mon, 18 May 2020 16:37:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TuVG=7A=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jailV-0002JA-2G
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 16:37:21 +0000
X-Inumbo-ID: d7b9d80e-9925-11ea-9887-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7b9d80e-9925-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 16:37:20 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EazD7dgrhyln6e9FNS/gLcSpODu/YgR8dXgEJwxvEotcQqESdQu6TuPP9pcquSLEXhD5MuoTIk
 yneyBGzzmhtOB/Zd5wrVTP22UJxpPYuU5iPGS6KwtESQc0ChHYnypj52hTLFkmrOT6EF1toPFW
 o3V8S+d93f976m0Ys41cypvBlN0UO7iPiaj9XNtaI3+Dc6JHia1LdZK1PVWfDNWGvaT08KuKIq
 Dydd0lcQwbUBgu2LgqQ2zJu8AH3I9ZPm3Cgbfrp1PMh+pPA5gtKXFjZhCpB+0iAyI6Sl0sCBZN
 Jbg=
X-SBRS: 2.7
X-MesageID: 18071372
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18071372"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24258.47547.105405.754194@mariner.uk.xensource.com>
Date: Mon, 18 May 2020 17:37:15 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in
 stubdomain [and 1 more messages]
In-Reply-To: <CAKf6xpvdSb=fSebzpHaLb1F9zNqsUn3dA03wYoXaZtxSLn0K+w@mail.gmail.com>
References: <20200428040433.23504-1-jandryuk@gmail.com>
 <24253.29524.798802.978257@mariner.uk.xensource.com>
 <CAKf6xpvJMovKMTWipC4gZuBD8FgmBEWbDbkm=ryRWSxNifQcJw@mail.gmail.com>
 <24258.39029.788968.419649@mariner.uk.xensource.com>
 <20200428040433.23504-10-jandryuk@gmail.com>
 <20200518145028.GD98582@mail-itl>
 <24258.42794.136081.367565@mariner.uk.xensource.com>
 <CAKf6xpvdSb=fSebzpHaLb1F9zNqsUn3dA03wYoXaZtxSLn0K+w@mail.gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("Re: [PATCH v5 09/21] libxl: add save/restore support for qemu-xen in stubdomain [and 1 more messages]"):
> On Mon, May 18, 2020 at 11:18 AM Ian Jackson <ian.jackson@citrix.com> wrote:
> > [explanation of confusion]
> 
> Do you want the commit message to add a blurb about this?  So the
> message becomes:
> """
> Rely on a wrapper script in stubdomain to attach relevant consoles to
> qemu.  The save console (1) must be attached to fdset/1.  When
> performing a restore, $STUBDOM_RESTORE_INCOMING_ARG must be replaced on
> the qemu command line by "fd:$FD", where $FD is an open file descriptor
> number to the restore console (2).
> 
> Existing libxl code (for dom0) already connects the stubdom's save &
> restore console outputs to the save & restore files.
> """

I think that would be good, thanks, yes but I won't insist on it.

I think I already gave my ack for v6 of this.  If you add the commit
message text above you should obviously keep that ack.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Mon May 18 16:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 16:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaj03-0003vH-PY; Mon, 18 May 2020 16:52:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dGN6=7A=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jaj01-0003vC-Ms
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 16:52:21 +0000
X-Inumbo-ID: f03f7bde-9927-11ea-ae69-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f03f7bde-9927-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 16:52:20 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: bT9m3LRueHZIYCT7JnvSTiQSjpwj/iDHe8KfUtKnoEobqEMCVfREJ/AQEThI8uLgyJVjsCJMdi
 J23myrWLQu/4lpSON5aDnNmnrumR2II2xXwVPLUSg2Kj8qG1MiMpw73Ciz2UJBradh/qLwSbzw
 78WSFFiUOLr3VevXbIGfUlFwTHMExDdntisdXYZiVzZNySPiDj5jLGKrOd/Z4zBg0UGbLV5RLT
 uJ4WapX9dPVzBds1Z+pFHcPTtdhNPwn5IfRPn5VPHSY1KvHrK2KZSD90fvZhLh8CiAELw3tgp1
 CfE=
X-SBRS: 2.7
X-MesageID: 17804328
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="17804328"
Date: Mon, 18 May 2020 18:52:13 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3] x86/PV: remove unnecessary toggle_guest_pt() overhead
Message-ID: <20200518165213.GX54375@Air-de-Roger>
References: <24d8b606-f74b-9367-d67e-e952838c7048@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <24d8b606-f74b-9367-d67e-e952838c7048@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 05, 2020 at 08:16:03AM +0200, Jan Beulich wrote:
> While the mere updating of ->pv_cr3 and ->root_pgt_changed aren't overly
> expensive (but still needed only for the toggle_guest_mode() path), the
> effect of the latter on the exit-to-guest path is not insignificant.
> Move the logic into toggle_guest_mode(), on the basis that
> toggle_guest_pt() will always be invoked in pairs, yet we can't safely
> undo the setting of root_pgt_changed during the second of these
> invocations.

I'm not sure whether it would be worth adding a comment to note that
the intended usage of toggle_guest_pt is to fetch data from the kernel
page tables when running in user mode. The one about using it in pairs
is certainly fine.

> While at it, add a comment ahead of toggle_guest_pt() to clarify its
> intended usage.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 18 16:54:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 16:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaj1q-00042X-5k; Mon, 18 May 2020 16:54:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ws3m=7A=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jaj1p-00042Q-5z
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 16:54:13 +0000
X-Inumbo-ID: 3238e0df-9928-11ea-a87d-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3238e0df-9928-11ea-a87d-12813bfff9fa;
 Mon, 18 May 2020 16:54:12 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +ccds8SB45TqruE4apIM2XLthQlYshqcpezscngwotaVhd3bONz/ib38mK7DM7qyfHhYl2wJyR
 3jGuQtqA2AflF4DVLDuUrSTQiy2hwon4UCqkBaBJAozfbXPyYqszf5DBQ47/qq/CGLj0C7NRHv
 7PF9lxBWvwSb2R+rYEyFYThB06UV8A02MsQvkYVM0Paa897p+n9EpevAqRkapafDWGZPRs7vzL
 2OJ3WCymLKC/Ce33j0Q71ufmNjV8vrz4BOs02uSA1z7K0dTCij58LmjK+tR0gG9UNHB8Lz3ycV
 V0Q=
X-SBRS: 2.7
X-MesageID: 17839065
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="17839065"
Subject: Re: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-3-andrew.cooper3@citrix.com>
 <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
 <8f1d68b1-895a-d2a6-4dcb-55b688b03336@citrix.com>
 <b1ef905c-dab6-d1c3-4673-4c06c7e94a0a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <560c3bce-211a-52ab-c919-9ca1ab9beab3@citrix.com>
Date: Mon, 18 May 2020 17:54:06 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b1ef905c-dab6-d1c3-4673-4c06c7e94a0a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11/05/2020 16:09, Jan Beulich wrote:
>
> On 11.05.2020 17:01, Andrew Cooper wrote:
>> On 04/05/2020 14:08, Jan Beulich wrote:
>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>> For one, they render the vector in a different base.
>>>>
>>>> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
>>>> mnemonic, which starts bringing the code/diagnostics in line with the Intel
>>>> and AMD manuals.
>>> For this "bringing in line" purpose I'd like to see whether you could
>>> live with some adjustments to how you're currently doing things:
>>> - NMI is nowhere prefixed by #, hence I think we'd better not do so
>>>   either; may require embedding the #-es in the names[] table, or not
>>>   using N() for NMI
>> No-one is going to get confused by seeing #NMI in an error message.  I
>> don't mind juggling the existing names table, but anything more
>> complicated is overkill.
>>
>>> - neither Coprocessor Segment Overrun nor vector 0x0f have a mnemonic
>>>   and hence I think we shouldn't invent one; just treat them like
>>>   other reserved vectors (of which at least vector 0x09 indeed is one
>>>   on x86-64)?
>> This I disagree with.  Coprocessor Segment Overrun *is* its name in both
>> manuals, and the avoidance of vector 0xf is clearly documented as well,
>> due to it being the default PIC Spurious Interrupt Vector.
>>
>> Neither CSO nor SPV are expected to be encountered in practice, but if
>> they are, highlighting them is a damn sight more helpful than pretending
>> they don't exist.
> How is their occurring (and getting logged with their vector numbers)
> any different from other reserved, acronym-less vectors? I particularly
> didn't suggest to pretend they don't exist; instead I did suggest that
> they are as reserved as, say, vector 0x18. By inventing an acronym and
> logging this instead of the vector number you'll make people other than
> you have to look up what the odd acronym means iff such an exception
> ever got raised.

You snipped the bits in the patch where both the vector number and
acronym are printed together.

Anyone who doesn't know the vector has to look it up anyway, at which
point they'll find that what Xen prints out matches what both manuals
say.  OTOH, people who know what a coprocessor segment overrun or PIC
spurious vector is won't need to look it up.

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 18 17:03:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 17:03:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jajAt-0004zL-6j; Mon, 18 May 2020 17:03:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eFi1=7A=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jajAr-0004zG-E4
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 17:03:33 +0000
X-Inumbo-ID: 80ab74c4-9929-11ea-a87f-12813bfff9fa
Received: from mail-wm1-f66.google.com (unknown [209.85.128.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 80ab74c4-9929-11ea-a87f-12813bfff9fa;
 Mon, 18 May 2020 17:03:32 +0000 (UTC)
Received: by mail-wm1-f66.google.com with SMTP id n5so330889wmd.0
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 10:03:32 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=nGkI8tP239tDUEkOH0wudLcKJV8oZ9pOIkB3tDZoxrg=;
 b=YREmVS/JqP7lqzxp39gP3lgCD2262D0QlMLXv/JPajIBs2LN+4rtDnuu91iwIaDwNI
 KTjd6KKkZwmA10xogjcy3Z4ZZZctTAcCDiBApVb0j+miwwUm6zjszCNK6+AHGxu/0Bh4
 nhDrblD35j2HzLN2MU6Bt8csaf3XA5CbC15mkcqQrHrkbH4pxLi2+rGaMz765v7lwVNP
 TAJhvpmwdVAzZ1YvEcPzYriui17ntuLg0llrMjDU+0grmWL/5Im9LvcbU7KfiW//9rAs
 pTrgYd7L758afGoRMDs/1eWSI+IL1WfSRj8xFR8HwFgXfFbK7S9s2bgFP04VfUi7Cfei
 icfQ==
X-Gm-Message-State: AOAM5321oLbbPpFq9tJS5LZeaURMeNBr6mPkblZnEe6jBqTii1ZcCU+h
 3E5hcqblFnwy+ipJVR5BOjU=
X-Google-Smtp-Source: ABdhPJywQb6/ikR7MO4Iw1Y4FKE6Mj4/g+sg5oH2NUQrfDPj2FUFFI/J5INqaz3stG76S71YWUUnMg==
X-Received: by 2002:a1c:acc8:: with SMTP id v191mr350905wme.154.1589821411650; 
 Mon, 18 May 2020 10:03:31 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id s67sm275684wmf.3.2020.05.18.10.03.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 10:03:30 -0700 (PDT)
Date: Mon, 18 May 2020 17:03:29 +0000
From: Wei Liu <wl@xen.org>
To: Olaf Hering <olaf@aepfle.de>
Subject: Re: [PATCH v1] tools: use HOSTCC/CPP to compile rombios code and
 helper
Message-ID: <20200518170329.vis2yzz5qcacqt64@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200518144400.16708-1-olaf@aepfle.de>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200518144400.16708-1-olaf@aepfle.de>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 04:44:00PM +0200, Olaf Hering wrote:
> Also use HOSTCFLAGS for biossums while touching the code.
> 
> Spotted by inspecting the build logfile.
> 
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Wei Liu <wl@xen.org>

> ---
>  tools/firmware/rombios/Makefile | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/firmware/rombios/Makefile b/tools/firmware/rombios/Makefile
> index 78237fd736..02abdb3038 100644
> --- a/tools/firmware/rombios/Makefile
> +++ b/tools/firmware/rombios/Makefile
> @@ -19,7 +19,7 @@ clean: subdirs-clean
>  distclean: clean
>  
>  BIOS-bochs-latest: rombios.c biossums 32bitgateway.c tcgbios.c
> -	gcc -DBX_SMP_PROCESSORS=1 -E -P $< > _rombios_.c
> +	$(CPP) -DBX_SMP_PROCESSORS=1 -P $< > _rombios_.c
>  	bcc -o rombios.s -C-c -D__i86__ -0 -S _rombios_.c
>  	sed -e 's/^\.text//' -e 's/^\.data//' rombios.s > _rombios_.s
>  	as86 _rombios_.s -b tmp.bin -u- -w- -g -0 -j -O -l rombios.txt
> @@ -29,6 +29,6 @@ BIOS-bochs-latest: rombios.c biossums 32bitgateway.c tcgbios.c
>  	rm -f _rombios_.s
>  
>  biossums: biossums.c
> -	gcc -o biossums biossums.c
> +	$(HOSTCC) $(HOSTCFLAGS) -o biossums biossums.c
>  
>  -include $(DEPS_INCLUDE)


From xen-devel-bounces@lists.xenproject.org Mon May 18 17:09:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 17:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jajGL-0005Az-S3; Mon, 18 May 2020 17:09:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dGN6=7A=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jajGK-0005Au-QZ
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 17:09:12 +0000
X-Inumbo-ID: 4b1f4064-992a-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b1f4064-992a-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 17:09:12 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: H6g83SbejvVrH02O1Q9+qv7KYhczYNrhMvY/kE0e8jTXHP6oEKbpD0Dai7myDthVZv7bNSc+Sc
 lC+ie3J2O9ywlXqRof5Xdx1RWCvuAcwMOYvI9UQBUl7/wYfQUa3bU+mGhwiGRt81GT6iSKYUAC
 FdScaDROyf4ZRxO22RPmqOAEUs8rfOpdrX8tCL/9B2Fx9/XY/wqqcPVVjE9BN9x4xTF7tRy0HL
 rvxbvzCYdHYWmfnMLHJAPENgJG4LbKF7C9N9+j8DwgTAtJa1zopEOi0paFzGKPZPI5zTYOCs0k
 Pxg=
X-SBRS: 2.7
X-MesageID: 18176202
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,407,1583211600"; d="scan'208";a="18176202"
Date: Mon, 18 May 2020 19:09:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
Message-ID: <20200518170904.GY54375@Air-de-Roger>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Sep 25, 2019 at 05:23:11PM +0200, Jan Beulich wrote:
> When there's no XPTI-enabled PV domain at all, there's no need to issue
> respective TLB flushes. Hardwire opt_xpti_* to false when !PV, and
> record the creation of PV domains by bumping opt_xpti_* accordingly.
> 
> As to the sticky opt_xpti_domu vs increment/decrement of opt_xpti_hwdom,
> this is done this way to avoid
> (a) widening the former variable,
> (b) any risk of a missed flush, which would result in an XSA if a DomU
>     was able to exercise it, and
> (c) any races updating the variable.
> Fundamentally the TLB flush done when context switching out the domain's
> vCPU-s the last time before destroying the domain ought to be
> sufficient, so in principle DomU handling could be made to match hwdom's.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v3: Re-base.
> v2: Add comment to spec_ctrl.h. Explain difference in accounting of DomU
>     and hwdom.
> ---
> TBD: The hardwiring to false could be extended to opt_pv_l1tf_* and (for
>      !HVM) opt_l1d_flush as well.
> 
> ---
>  xen/arch/x86/flushtlb.c         |    2 +-
>  xen/arch/x86/pv/domain.c        |   14 +++++++++++++-
>  xen/arch/x86/spec_ctrl.c        |    6 ++++++
>  xen/include/asm-x86/spec_ctrl.h |   11 +++++++++++
>  4 files changed, 31 insertions(+), 2 deletions(-)
> 
> --- a/xen/arch/x86/flushtlb.c
> +++ b/xen/arch/x86/flushtlb.c
> @@ -207,7 +207,7 @@ unsigned int flush_area_local(const void
>                   */
>                  invpcid_flush_one(PCID_PV_PRIV, addr);
>                  invpcid_flush_one(PCID_PV_USER, addr);
> -                if ( opt_xpti_hwdom || opt_xpti_domu )
> +                if ( opt_xpti_hwdom > 1 || opt_xpti_domu > 1 )
>                  {
>                      invpcid_flush_one(PCID_PV_PRIV | PCID_PV_XPTI, addr);
>                      invpcid_flush_one(PCID_PV_USER | PCID_PV_XPTI, addr);
> --- a/xen/arch/x86/pv/domain.c
> +++ b/xen/arch/x86/pv/domain.c
> @@ -272,6 +272,9 @@ void pv_domain_destroy(struct domain *d)
>      destroy_perdomain_mapping(d, GDT_LDT_VIRT_START,
>                                GDT_LDT_MBYTES << (20 - PAGE_SHIFT));
>  
> +    opt_xpti_hwdom -= IS_ENABLED(CONFIG_LATE_HWDOM) &&
> +                      !d->domain_id && opt_xpti_hwdom;
> +
>      XFREE(d->arch.pv.cpuidmasks);
>  
>      FREE_XENHEAP_PAGE(d->arch.pv.gdt_ldt_l1tab);
> @@ -310,7 +313,16 @@ int pv_domain_initialise(struct domain *
>      /* 64-bit PV guest by default. */
>      d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
>  
> -    d->arch.pv.xpti = is_hardware_domain(d) ? opt_xpti_hwdom : opt_xpti_domu;
> +    if ( is_hardware_domain(d) && opt_xpti_hwdom )
> +    {
> +        d->arch.pv.xpti = true;
> +        ++opt_xpti_hwdom;
> +    }
> +    if ( !is_hardware_domain(d) && opt_xpti_domu )
> +    {
> +        d->arch.pv.xpti = true;
> +        opt_xpti_domu = 2;

I wonder whether a store fence is needed here in order to guarantee
that opt_xpti_domu is visible to flush_area_local before proceeding
any further with domain creation.
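
To make the concern concrete, what I have in mind is the usual publish
pattern. Here is a standalone sketch using C11 atomics rather than Xen's
smp_wmb()/smp_rmb() (the two-thread split and all names here are purely
illustrative, not the actual hypervisor code):

```c
#include <assert.h>
#include <stdatomic.h>

/* Illustrative stand-ins for the real variables. */
static atomic_int opt_xpti_domu;   /* consulted by flush_area_local() */
static atomic_int domain_runnable; /* later step of domain creation */

/* Writer side: what pv_domain_initialise() would do. */
static void create_domain(void)
{
    atomic_store_explicit(&opt_xpti_domu, 2, memory_order_relaxed);
    /* Store fence: the opt_xpti_domu update must be visible before the
     * domain can run and trigger flushes (smp_wmb() in Xen terms). */
    atomic_thread_fence(memory_order_release);
    atomic_store_explicit(&domain_runnable, 1, memory_order_relaxed);
}

/* Reader side: what flush_area_local() would rely on. */
static int needs_xpti_flush(void)
{
    if ( atomic_load_explicit(&domain_runnable, memory_order_relaxed) )
    {
        /* Pairs with the release fence above. */
        atomic_thread_fence(memory_order_acquire);
        return atomic_load_explicit(&opt_xpti_domu,
                                    memory_order_relaxed) > 1;
    }
    return 0;
}
```

With the fence pair in place, a reader that observes the domain as
runnable is guaranteed to also observe the bumped opt_xpti_domu.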

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 18 17:23:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 17:23:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jajUL-0006lW-5n; Mon, 18 May 2020 17:23:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yZ0o=7A=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jajUK-0006lR-BJ
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 17:23:40 +0000
X-Inumbo-ID: 5046b21e-992c-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5046b21e-992c-11ea-b07b-bc764e2007e4;
 Mon, 18 May 2020 17:23:39 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id i16so4689976edv.1
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 10:23:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=1PfetM1HV7IUId1vPTGru1KVmn3gL8fLaU41twFYtLA=;
 b=gwQkfFUf4EgM13hwWdAQrDQ5+BzrE3p6NRcBxRu0knzdQX5qg1W3Gzrlfzl7f7Rms2
 x2MTp9nn8672BecnUt90ywzRbQomEIegSwn2k1J+ykR4oRwNf9nvcNW1sik/2IZjByna
 CMbB9C6SANDa3K2zWDaTj79t33DSoZEYHM8Yd41hiiWPDcCDP73e8dSVBUa7oSlFQxW2
 m2PbomZJdFdlhyR+p1a0c9yCXqrV47m2JDYlEEfcTq6BJEviOLAYnkGhu93TzDfjiVyk
 k044PSBAFlt8rEQ+8fuIiSlJ2BCAlSV5qC0CSTkieJWa9tqu459tfIvpCovlAnJz7AWs
 qwfw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=1PfetM1HV7IUId1vPTGru1KVmn3gL8fLaU41twFYtLA=;
 b=P70FViZIh1VwEvXCBb5p0vuLM6pyhb4YRCvc6aZa4OiJ8Pt/cR07xTczmlSvncrKZB
 f/pyj1gxeJDHgYCvBaHuenl5Awh9khn+sm4u9z8aHWw6VT8kpVXTwLCCSwpc1VdtAKTF
 VbP9n9y87fZqNAcPH3lVsFOIYYK7fBO24a8SZowsSWovS7rkUd5AV//fah61VMZ0YBT9
 WjF4cIXvDqm1FmIGpsuCLPlKQXOPy7rfe2C5TqvdVd09RCRmkc4z119g85ystSSfzUAg
 iU9WHjPilIHv/Js/s6Q9sefVkzCz42UAIQ2MXoQT7NZe/6LEZMkKMsPf/NDsd7GrRaoh
 xxqg==
X-Gm-Message-State: AOAM5310ulniFKI3GH0whIxrhCz3b86fDRgsbRC6Kd6rjiyOtD9Mwret
 vkeE9R4rmSFdR+n08kFT89lup223cKc=
X-Google-Smtp-Source: ABdhPJyQ7RQ6jfOJL1jFmB2KHYLqUV7yrhvk+C22kIBbp9PsWL7xN1yHZtR00dRpwrwmBw1968cofA==
X-Received: by 2002:a50:a7e3:: with SMTP id i90mr15098667edc.6.1589822618283; 
 Mon, 18 May 2020 10:23:38 -0700 (PDT)
Received: from mail-wr1-f53.google.com (mail-wr1-f53.google.com.
 [209.85.221.53])
 by smtp.gmail.com with ESMTPSA id x7sm1403212ejc.58.2020.05.18.10.23.37
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 18 May 2020 10:23:37 -0700 (PDT)
Received: by mail-wr1-f53.google.com with SMTP id e16so12795943wra.7
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 10:23:37 -0700 (PDT)
X-Received: by 2002:adf:fccd:: with SMTP id f13mr21598322wrs.386.1589822616817; 
 Mon, 18 May 2020 10:23:36 -0700 (PDT)
MIME-Version: 1.0
References: <20200516122221.5434-1-andrew.cooper3@citrix.com>
In-Reply-To: <20200516122221.5434-1-andrew.cooper3@citrix.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 18 May 2020 11:23:00 -0600
X-Gmail-Original-Message-ID: <CABfawhkYRjCegBwMxqhzLNnTneV9WWb=WeYFep=OK2f0qv5TyQ@mail.gmail.com>
Message-ID: <CABfawhkYRjCegBwMxqhzLNnTneV9WWb=WeYFep=OK2f0qv5TyQ@mail.gmail.com>
Subject: Re: [PATCH] x86/hvm: Fix memory leaks in hvm_copy_context_and_params()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sat, May 16, 2020 at 6:22 AM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> Any error from hvm_save() or hvm_set_param() leaks the c.data allocation.
>
> Spotted by Coverity.
>
> Fixes: 353744830 "x86/hvm: introduce hvm_copy_context_and_params"
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
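
The bug being fixed is the classic leak-on-error-path pattern, where an
early return on failure skips freeing an earlier allocation. A minimal
standalone sketch (fake_save() and both function names are hypothetical
stand-ins, not the actual Xen code):

```c
#include <stdlib.h>

/* Hypothetical stand-in for hvm_save()/hvm_set_param() failing. */
static int fake_save(void *buf)
{
    (void)buf;
    return -1; /* simulate an error */
}

/* Leaky version: the early return on error skips free(). */
static int copy_context_leaky(void)
{
    void *data = malloc(64);

    if ( !data )
        return -1;
    if ( fake_save(data) )
        return -1; /* BUG: 'data' leaks here */
    free(data);
    return 0;
}

/* Fixed version: a single exit path frees the allocation. */
static int copy_context_fixed(void)
{
    int rc;
    void *data = malloc(64);

    if ( !data )
        return -1;
    rc = fake_save(data);
    free(data); /* reached on both success and failure */
    return rc;
}
```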

Thanks for the fix, my bad!

Tamas


From xen-devel-bounces@lists.xenproject.org Mon May 18 17:31:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 17:31:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jajbi-0007bg-V1; Mon, 18 May 2020 17:31:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NiGn=7A=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jajbi-0007bb-8G
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 17:31:18 +0000
X-Inumbo-ID: 6062bf0c-992d-11ea-ae69-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6062bf0c-992d-11ea-ae69-bc764e2007e4;
 Mon, 18 May 2020 17:31:16 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04IHVBY2003187;
 Mon, 18 May 2020 19:31:11 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 2702D2810; Mon, 18 May 2020 19:31:11 +0200 (CEST)
Date: Mon, 18 May 2020 19:31:11 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: paul@xen.org
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200518173111.GA13512@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
 <20200517175607.GA8793@antioche.eu.org>
 <000a01d62ce7$093b7f50$1bb27df0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <000a01d62ce7$093b7f50$1bb27df0$@xen.org>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [151.127.5.145]);
 Mon, 18 May 2020 19:31:11 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 08:36:24AM +0100, Paul Durrant wrote:
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Manuel Bouyer
> > Sent: 17 May 2020 18:56
> > To: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: xen-devel@lists.xenproject.org
> > Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
> > 
> > On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> > > I've been looking a bit deeper in the Xen kernel.
> > > The mapping is failed in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn(),
> > >         /* Error path: not a suitable GFN at all */
> > > 	if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) ) {
> > > 	    gdprintk(XENLOG_ERR,
> > > 	             "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n",
> > > 	             *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t));
> > > 	    return NULL;
> > > 	}
> > >
> > > *t is 4, which translates to p2m_mmio_dm
> > >
> > > it looks like p2m_get_page_from_gfn() is not ready to handle this case
> > > for dom0.
> > 
> > And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
> > for NetBSD
> > 
> 
> It would be a good idea but you shouldn't have to.

That's how I read the code too: it should fall back to mmapbatch.
But mmapbatch doesn't work in all cases on 4.13.0.

> Also, qemu-trad won't use it even if it is there.

Indeed, it looks like it still uses mmapbatch for some mappings.
Now it goes a bit further, but it still ends up failing to map the guest's
memory.

Also, with some fixes I got qemu-xen building on NetBSD; it starts
but fails to load the BIOS ROM. Once again p2m_get_page_from_gfn() fails
to find a page and thinks the type is p2m_mmio_dm.

From what I found, it seems that all unallocated memory is tagged p2m_mmio_dm;
is that right? That would point to some issue allocating RAM for
the domU in qemu; I would need to find where that happens in qemu.

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Mon May 18 17:40:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 17:40:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jajkx-0008Ux-Th; Mon, 18 May 2020 17:40:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2hxz=7A=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1jajkw-0008Us-2H
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 17:40:50 +0000
X-Inumbo-ID: b5733872-992e-11ea-a882-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5733872-992e-11ea-a882-12813bfff9fa;
 Mon, 18 May 2020 17:40:48 +0000 (UTC)
Received: from localhost (83-86-89-107.cable.dynamic.v4.ziggo.nl
 [83.86.89.107])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 339EC20849;
 Mon, 18 May 2020 17:40:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589823647;
 bh=IczQH/TtXU98bDp5k7ZWYNUFzCIb147wj2CPbCzb4QQ=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=Z385pTEQQSacawaL/uSVRTzNfOc0G2+xynwZg7FsekDAvr2Sc+YZRguAfVL4hqpjr
 UUoTS0oTi+UoQBF6kYFSLIO3dkr+/f9jGwISAGrzWD4xBTInvNGwV/McbM0nG/SUZ5
 AKoizO8pHE0Ow8OcLYkPw9u0Szx/CIyF989NwhGY=
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: linux-kernel@vger.kernel.org
Subject: [PATCH 4.4 67/86] x86/paravirt: Remove the unused irq_enable_sysexit
 pv op
Date: Mon, 18 May 2020 19:36:38 +0200
Message-Id: <20200518173503.889676448@linuxfoundation.org>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200518173450.254571947@linuxfoundation.org>
References: <20200518173450.254571947@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ingo Molnar <mingo@kernel.org>, Denys Vlasenko <dvlasenk@redhat.com>,
 Thomas Gleixner <tglx@linutronix.de>, konrad.wilk@oracle.com,
 Peter Zijlstra <peterz@infradead.org>,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 "H. Peter Anvin" <hpa@zytor.com>, virtualization@lists.linux-foundation.org,
 stable@vger.kernel.org, Andy Lutomirski <luto@amacapital.net>,
 Borislav Petkov <bp@alien8.de>, david.vrabel@citrix.com,
 Andy Lutomirski <luto@kernel.org>, Brian Gerst <brgerst@gmail.com>,
 xen-devel@lists.xenproject.org, Andrew Morton <akpm@linux-foundation.org>,
 Borislav Petkov <bp@suse.de>, Linus Torvalds <torvalds@linux-foundation.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Boris Ostrovsky <boris.ostrovsky@oracle.com>

commit 88c15ec90ff16880efab92b519436ee17b198477 upstream.

As a result of commit "x86/xen: Avoid fast syscall path for Xen PV
guests", the irq_enable_sysexit pv op is no longer called by Xen PV
guests, and since they were its only users we can safely remove it.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Reviewed-by: Borislav Petkov <bp@suse.de>
Acked-by: Andy Lutomirski <luto@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Andy Lutomirski <luto@amacapital.net>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Denys Vlasenko <dvlasenk@redhat.com>
Cc: H. Peter Anvin <hpa@zytor.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: david.vrabel@citrix.com
Cc: konrad.wilk@oracle.com
Cc: virtualization@lists.linux-foundation.org
Cc: xen-devel@lists.xenproject.org
Link: http://lkml.kernel.org/r/1447970147-1733-3-git-send-email-boris.ostrovsky@oracle.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Greg Kroah-Hartman <gregkh@linuxfoundation.org>

---
 arch/x86/entry/entry_32.S             |    8 ++------
 arch/x86/include/asm/paravirt.h       |    7 -------
 arch/x86/include/asm/paravirt_types.h |    9 ---------
 arch/x86/kernel/asm-offsets.c         |    3 ---
 arch/x86/kernel/paravirt.c            |    7 -------
 arch/x86/kernel/paravirt_patch_32.c   |    2 --
 arch/x86/kernel/paravirt_patch_64.c   |    1 -
 arch/x86/xen/enlighten.c              |    3 ---
 arch/x86/xen/xen-asm_32.S             |   14 --------------
 arch/x86/xen/xen-ops.h                |    3 ---
 10 files changed, 2 insertions(+), 55 deletions(-)

--- a/arch/x86/entry/entry_32.S
+++ b/arch/x86/entry/entry_32.S
@@ -331,7 +331,8 @@ sysenter_past_esp:
 	 * Return back to the vDSO, which will pop ecx and edx.
 	 * Don't bother with DS and ES (they already contain __USER_DS).
 	 */
-	ENABLE_INTERRUPTS_SYSEXIT
+	sti
+	sysexit
 
 .pushsection .fixup, "ax"
 2:	movl	$0, PT_FS(%esp)
@@ -554,11 +555,6 @@ ENTRY(native_iret)
 	iret
 	_ASM_EXTABLE(native_iret, iret_exc)
 END(native_iret)
-
-ENTRY(native_irq_enable_sysexit)
-	sti
-	sysexit
-END(native_irq_enable_sysexit)
 #endif
 
 ENTRY(overflow)
--- a/arch/x86/include/asm/paravirt.h
+++ b/arch/x86/include/asm/paravirt.h
@@ -938,13 +938,6 @@ extern void default_banner(void);
 	push %ecx; push %edx;				\
 	call PARA_INDIRECT(pv_cpu_ops+PV_CPU_read_cr0);	\
 	pop %edx; pop %ecx
-
-#define ENABLE_INTERRUPTS_SYSEXIT					\
-	PARA_SITE(PARA_PATCH(pv_cpu_ops, PV_CPU_irq_enable_sysexit),	\
-		  CLBR_NONE,						\
-		  jmp PARA_INDIRECT(pv_cpu_ops+PV_CPU_irq_enable_sysexit))
-
-
 #else	/* !CONFIG_X86_32 */
 
 /*
--- a/arch/x86/include/asm/paravirt_types.h
+++ b/arch/x86/include/asm/paravirt_types.h
@@ -162,15 +162,6 @@ struct pv_cpu_ops {
 
 	u64 (*read_pmc)(int counter);
 
-#ifdef CONFIG_X86_32
-	/*
-	 * Atomically enable interrupts and return to userspace.  This
-	 * is only used in 32-bit kernels.  64-bit kernels use
-	 * usergs_sysret32 instead.
-	 */
-	void (*irq_enable_sysexit)(void);
-#endif
-
 	/*
 	 * Switch to usermode gs and return to 64-bit usermode using
 	 * sysret.  Only used in 64-bit kernels to return to 64-bit
--- a/arch/x86/kernel/asm-offsets.c
+++ b/arch/x86/kernel/asm-offsets.c
@@ -65,9 +65,6 @@ void common(void) {
 	OFFSET(PV_IRQ_irq_disable, pv_irq_ops, irq_disable);
 	OFFSET(PV_IRQ_irq_enable, pv_irq_ops, irq_enable);
 	OFFSET(PV_CPU_iret, pv_cpu_ops, iret);
-#ifdef CONFIG_X86_32
-	OFFSET(PV_CPU_irq_enable_sysexit, pv_cpu_ops, irq_enable_sysexit);
-#endif
 	OFFSET(PV_CPU_read_cr0, pv_cpu_ops, read_cr0);
 	OFFSET(PV_MMU_read_cr2, pv_mmu_ops, read_cr2);
 #endif
--- a/arch/x86/kernel/paravirt.c
+++ b/arch/x86/kernel/paravirt.c
@@ -168,9 +168,6 @@ unsigned paravirt_patch_default(u8 type,
 		ret = paravirt_patch_ident_64(insnbuf, len);
 
 	else if (type == PARAVIRT_PATCH(pv_cpu_ops.iret) ||
-#ifdef CONFIG_X86_32
-		 type == PARAVIRT_PATCH(pv_cpu_ops.irq_enable_sysexit) ||
-#endif
 		 type == PARAVIRT_PATCH(pv_cpu_ops.usergs_sysret32) ||
 		 type == PARAVIRT_PATCH(pv_cpu_ops.usergs_sysret64))
 		/* If operation requires a jmp, then jmp */
@@ -226,7 +223,6 @@ static u64 native_steal_clock(int cpu)
 
 /* These are in entry.S */
 extern void native_iret(void);
-extern void native_irq_enable_sysexit(void);
 extern void native_usergs_sysret32(void);
 extern void native_usergs_sysret64(void);
 
@@ -385,9 +381,6 @@ __visible struct pv_cpu_ops pv_cpu_ops =
 
 	.load_sp0 = native_load_sp0,
 
-#if defined(CONFIG_X86_32)
-	.irq_enable_sysexit = native_irq_enable_sysexit,
-#endif
 #ifdef CONFIG_X86_64
 #ifdef CONFIG_IA32_EMULATION
 	.usergs_sysret32 = native_usergs_sysret32,
--- a/arch/x86/kernel/paravirt_patch_32.c
+++ b/arch/x86/kernel/paravirt_patch_32.c
@@ -5,7 +5,6 @@ DEF_NATIVE(pv_irq_ops, irq_enable, "sti"
 DEF_NATIVE(pv_irq_ops, restore_fl, "push %eax; popf");
 DEF_NATIVE(pv_irq_ops, save_fl, "pushf; pop %eax");
 DEF_NATIVE(pv_cpu_ops, iret, "iret");
-DEF_NATIVE(pv_cpu_ops, irq_enable_sysexit, "sti; sysexit");
 DEF_NATIVE(pv_mmu_ops, read_cr2, "mov %cr2, %eax");
 DEF_NATIVE(pv_mmu_ops, write_cr3, "mov %eax, %cr3");
 DEF_NATIVE(pv_mmu_ops, read_cr3, "mov %cr3, %eax");
@@ -46,7 +45,6 @@ unsigned native_patch(u8 type, u16 clobb
 		PATCH_SITE(pv_irq_ops, restore_fl);
 		PATCH_SITE(pv_irq_ops, save_fl);
 		PATCH_SITE(pv_cpu_ops, iret);
-		PATCH_SITE(pv_cpu_ops, irq_enable_sysexit);
 		PATCH_SITE(pv_mmu_ops, read_cr2);
 		PATCH_SITE(pv_mmu_ops, read_cr3);
 		PATCH_SITE(pv_mmu_ops, write_cr3);
--- a/arch/x86/kernel/paravirt_patch_64.c
+++ b/arch/x86/kernel/paravirt_patch_64.c
@@ -12,7 +12,6 @@ DEF_NATIVE(pv_mmu_ops, write_cr3, "movq
 DEF_NATIVE(pv_cpu_ops, clts, "clts");
 DEF_NATIVE(pv_cpu_ops, wbinvd, "wbinvd");
 
-DEF_NATIVE(pv_cpu_ops, irq_enable_sysexit, "swapgs; sti; sysexit");
 DEF_NATIVE(pv_cpu_ops, usergs_sysret64, "swapgs; sysretq");
 DEF_NATIVE(pv_cpu_ops, usergs_sysret32, "swapgs; sysretl");
 DEF_NATIVE(pv_cpu_ops, swapgs, "swapgs");
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -1240,10 +1240,7 @@ static const struct pv_cpu_ops xen_cpu_o
 
 	.iret = xen_iret,
 #ifdef CONFIG_X86_64
-	.usergs_sysret32 = xen_sysret32,
 	.usergs_sysret64 = xen_sysret64,
-#else
-	.irq_enable_sysexit = xen_sysexit,
 #endif
 
 	.load_tr_desc = paravirt_nop,
--- a/arch/x86/xen/xen-asm_32.S
+++ b/arch/x86/xen/xen-asm_32.S
@@ -35,20 +35,6 @@ check_events:
 	ret
 
 /*
- * We can't use sysexit directly, because we're not running in ring0.
- * But we can easily fake it up using iret.  Assuming xen_sysexit is
- * jumped to with a standard stack frame, we can just strip it back to
- * a standard iret frame and use iret.
- */
-ENTRY(xen_sysexit)
-	movl PT_EAX(%esp), %eax			/* Shouldn't be necessary? */
-	orl $X86_EFLAGS_IF, PT_EFLAGS(%esp)
-	lea PT_EIP(%esp), %esp
-
-	jmp xen_iret
-ENDPROC(xen_sysexit)
-
-/*
  * This is run where a normal iret would be run, with the same stack setup:
  *	8: eflags
  *	4: cs
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -139,9 +139,6 @@ DECL_ASM(void, xen_restore_fl_direct, un
 
 /* These are not functions, and cannot be called normally */
 __visible void xen_iret(void);
-#ifdef CONFIG_X86_32
-__visible void xen_sysexit(void);
-#endif
 __visible void xen_sysret32(void);
 __visible void xen_sysret64(void);
 __visible void xen_adjust_exception_frame(void);




From xen-devel-bounces@lists.xenproject.org Mon May 18 17:53:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 17:53:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jajws-00010Z-2T; Mon, 18 May 2020 17:53:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jajwp-00010R-V4
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 17:53:07 +0000
X-Inumbo-ID: 6de1b496-9930-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6de1b496-9930-11ea-b9cf-bc764e2007e4;
 Mon, 18 May 2020 17:53:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1QAO8ofd8gj66nH43MRAEV/cO6+6RtpLXn/R1/UXWnQ=; b=sprqYnY4i6KaudNo1TKb9pco6
 cCzCV5b3xcezb9szCDitvau8Vff4LE7h7zpfIJNKmfEEkwVFanLODJLZXHZn+rUOjSzRpEjO+s+sC
 wVE+p0QuRalJTZ69hgoekLgzk4/gTThQKsnjYtRno0uKPzvmAeaaDcyGrNoVISYuDk01Y=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jajwo-0006hu-4G; Mon, 18 May 2020 17:53:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jajwn-0007Bz-OP; Mon, 18 May 2020 17:53:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jajwn-00074S-Nl; Mon, 18 May 2020 17:53:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150231-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150231: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=97fb0253e6c2f2221bfd0895b7ffe3a99330d847
X-Osstest-Versions-That: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 17:53:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150231 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150231/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  97fb0253e6c2f2221bfd0895b7ffe3a99330d847
baseline version:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f

Last test of basis   150218  2020-05-16 18:01:31 Z    1 days
Testing same since   150231  2020-05-18 15:01:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   664e1bc12f..97fb0253e6  97fb0253e6c2f2221bfd0895b7ffe3a99330d847 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 18 17:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 17:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jak0k-0001BS-PV; Mon, 18 May 2020 17:57:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jak0j-0001BN-On
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 17:57:09 +0000
X-Inumbo-ID: faddfd3c-9930-11ea-a886-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id faddfd3c-9930-11ea-a886-12813bfff9fa;
 Mon, 18 May 2020 17:57:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lVhzAuOOAejmlAEdSCLDnGimczaBMxQFwqYtATTSYKs=; b=dKDQt3zXblE/EV6o7OZQccKgA
 mM5qoJjJp91mauG1+zapde6SI3WNXfPXq624yNIMeAzObngkDbSWSaR8lkeJ5Dpqmi0peE9l+JxwM
 +h6Q7c77jgCN11DCaDck2NDdZX89+eOoh+IuX5BvF/KuYEZvQHPBfHlZ4slZycdWEI9a0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jak0c-0006mu-Vf; Mon, 18 May 2020 17:57:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jak0c-0007Gq-Lu; Mon, 18 May 2020 17:57:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jak0c-0000oH-LD; Mon, 18 May 2020 17:57:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150232-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150232: all pass - PUSHED
X-Osstest-Versions-This: ovmf=7b6327ff03bb4436261ffad246ba3a14de10391f
X-Osstest-Versions-That: ovmf=9099dcbd61c8d22b5eedda783d143c222d2705a3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 17:57:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150232 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150232/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 7b6327ff03bb4436261ffad246ba3a14de10391f
baseline version:
 ovmf                 9099dcbd61c8d22b5eedda783d143c222d2705a3

Last test of basis   150196  2020-05-15 13:16:10 Z    3 days
Testing same since   150232  2020-05-18 16:09:20 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   9099dcbd61..7b6327ff03  7b6327ff03bb4436261ffad246ba3a14de10391f -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 18 18:24:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 18:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jakR7-0003iY-1j; Mon, 18 May 2020 18:24:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDU1=7A=epam.com=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1jakR5-0003iT-Gn
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 18:24:23 +0000
X-Inumbo-ID: cab41962-9934-11ea-a88a-12813bfff9fa
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.52]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cab41962-9934-11ea-a88a-12813bfff9fa;
 Mon, 18 May 2020 18:24:21 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fkobl5YTEqhyqglGJRWOVM9735b9ubRfaIM3K1qxiqLWyB/3uJ+QZ1rCdoQ2loC+VTwEwlQcffQN6ATxN2DePTBcbEsDfX5gyLJwg9xv8C15+P1QufAEC7uOgbZyHnB6O/lW0fwewfocVALZBfgRMoMJ0VIRQTPJeWKKByFutI01tHsGEZxkTJjIuQsmwhcRyWidKjMpTNYC8ucm+FevTJnAYT6tubODmMdtzOmncUZq2n0d722Jm06a7nvlZgiKnyFNsHu41FjbRAF/iUOcbYViaSRytFDXKKLuG8b8Y3BY/TPGLEbVKH0TsP0L5li+DstWCbbji/N+U3mZjia5GQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RgPW3PW0EWT7kCFuNFjnoCJ+6osYm/etA00TItnAUSY=;
 b=CiKBENJ88fMYdrz7sUoJiWcbynWQRKinA8te8GXeih0dwXDtoPbIhC25KZCvJ4DxMLaJJeHqE2BKqlBzvQS5bq+cE28JsPTqGw3uydUoIlppuNhZqxBJ5yUm0Nu1HzyHPigBu0cu+c/IQUdEFe44Wt3uueXj0V20GZ75F68lf20b4JmklmUi8DgkPYd85FxYFU/XkdtKE7Pxc00nT/RYkU8nn9lBIkhgZBogI09pEW9Qiv8vplz/waVffMs+gf4a8lA75vwlCiWl8lXqAnE9aAVQyLoG6mesxJNG4GXxkVz9TFzYnzLceIxD1CKx1MLcj5E4xBtx6sAyTfLJzVTrAg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RgPW3PW0EWT7kCFuNFjnoCJ+6osYm/etA00TItnAUSY=;
 b=Mq9+FgJJu9IZWAfAIVqbcXHQ+K5GX++kjneZBmdtRuPjqTVjcVEZeHPj+NRhKmGFSyYxcxJo99iRhquE7gD29zw1XO6v46kNR3tYVgM3yL/p53Dt8uBpGQXuV60LoNGbtH/CttlNJ5l7re7UJ7mMxF4yb++6ZNgMiLZZmoH5M4o=
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 (2603:10a6:803:23::18) by VI1PR0302MB2814.eurprd03.prod.outlook.com
 (2603:10a6:800:e3::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.25; Mon, 18 May
 2020 18:24:19 +0000
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75]) by VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75%5]) with mapi id 15.20.3000.034; Mon, 18 May 2020
 18:24:19 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "julien@xen.org" <julien@xen.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.14 1/3] xen/arm: Allow a platform to override the
 DMA width
Thread-Topic: [PATCH for-4.14 1/3] xen/arm: Allow a platform to override the
 DMA width
Thread-Index: AQHWLQe/10+te1SvfkynseIg0ziWG6iuKQYA
Date: Mon, 18 May 2020 18:24:19 +0000
Message-ID: <8e8afff48273589fd06192203c0452fb1e69cd1f.camel@epam.com>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-2-julien@xen.org>
In-Reply-To: <20200518113008.15422-2-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Evolution 3.36.2 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 726e0bb0-0c34-4b29-b0f0-08d7fb58ae0b
x-ms-traffictypediagnostic: VI1PR0302MB2814:
x-microsoft-antispam-prvs: <VI1PR0302MB2814F467A286866178D18D0BE6B80@VI1PR0302MB2814.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-forefront-prvs: 04073E895A
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: MXLGsyYBnsoLQb+64DVRC3fHpMKSwOdFA+K/rw3x6NkVG6+HiwPia0E59jv7lghnxG0kjkfU6hlJKyw+hjr6i6WEOSDqlTfJ/43zrzDpOdnVPU8zvNfraPIcRO1CzW2kp0Fab8Gz9wU54BIpjS4GLG2vZo4DTDobXYreAejlPeIbPBbJsUcfx6C93HMGIsOQE08Tg8PBJX+F4hZ7s4VMJwKNF1n0XdhqLgvEfFDJUiH/m8iWiGgn5yDaYqEu4um6tBfZj5WsIFbQ/Z9yshxWyjB/9SSD+nPNCwD9XwpjXLqzRndkLtgAHlKj8jqne6CzaMXjYbOQ8A/hQcVOkFJu5Oz9uT8pzTY6Cs8V2CGPxZ5mbjEzjOuFjttnh9tBUje2PxtA4Pl9kVTPbXDDzN7uIe8oV25fWykndu4WuX5GJB5lZ2ZXrvIjZ/iel5UDjbDL
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR0302MB3407.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(136003)(39860400002)(366004)(396003)(346002)(376002)(71200400001)(55236004)(478600001)(110136005)(6506007)(316002)(4326008)(26005)(36756003)(54906003)(7416002)(2906002)(186003)(8936002)(8676002)(5660300002)(6486002)(2616005)(66446008)(66946007)(86362001)(91956017)(76116006)(66476007)(66556008)(64756008)(6512007);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: 4QnbFypUdsYjpSfheUhrrN/GP2Ro4yruht7Od4gHBKRAbdXkdoEV4GyQ5nYXbHzjyPQ6A7LBl3Z3cfdwBC/KhBzA2mmSbEU3sj4AD8iQQ2jlTcJ4oGlwzbZxUNFiSaf/SV4FVkASFwLo4o/0jHxR1oTsEMqSBh9LLscCq3dAqbUbnsXK7SLKOb/+OZO9SHY822BXZgevgPjht0BbHGLTKEqdRwXfXTWvChRxhZzl7jdjttTi1BtJsMScAjp5uA+3E0NiCEqokVVCXfskozjVZeaPXIuUYNxZzntJNVSgKveSsjy6IZp5d3zIwQcm3PYJ1gBkolfWFjuLAKJBIiye8mkGAaMNzA7v0zZuA3CLqj2b7tgoB6HnzIVSTqRoMUWejD5Wmx7S5u1h3rInzYjYCnn9gbSfvdLJeAWCk63xPfbCLOniJuCvb4C3HK+sYjIoFN3rlYmhRHgX9YP9xtr/4aG+Pd2Drh3kTP4PYTREHZk=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <7590272FCF9D6142A147185D06DD12B2@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 726e0bb0-0c34-4b29-b0f0-08d7fb58ae0b
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 May 2020 18:24:19.1110 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: td5TtZkPeYRBRwZDYaBJfHv+GGZpjmTN78dnBbgixJc4XYNaolbEsHv0V0qbFVhg8//T+6jaWZMgPLP+s/FkyhEG+kjgeeQaJ/TzLpcFI9k=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB2814
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "minyard@acm.org" <minyard@acm.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "jgrall@amazon.com" <jgrall@amazon.com>, "roman@zededa.com" <roman@zededa.com>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Julien,

On Mon, 2020-05-18 at 12:30 +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> At the moment, Xen is assuming that all the devices are at least 32-bit
> DMA capable. However, some SoC have devices that may be able to access
> a much restricted range. For instance, the RPI has devices that can
> only access the first 1GB of RAM.
> 
> The structure platform_desc is now extended to allow a platform to
> override the DMA width. The new is used to implement
> arch_get_dma_bit_size().
> 
> The prototype is now moved in asm-arm/mm.h as the function is not NUMA
> specific. The implementation is done in platform.c so we don't have to
> include platform.h everywhere. This should be fine as the function is
> not expected to be called in hotpath.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

> ---
> 
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> 
> I noticed that arch_get_dma_bit_size() is only called when there is more
> than one NUMA node. I am a bit unsure what is the reason behind it.
> 
> The goal for Arm is to use arch_get_dma_bit_size() when deciding how low
> the first Dom0 bank should be allocated.
> ---
>  xen/arch/arm/platform.c        | 5 +++++
>  xen/include/asm-arm/mm.h       | 2 ++
>  xen/include/asm-arm/numa.h     | 5 -----
>  xen/include/asm-arm/platform.h | 2 ++
>  4 files changed, 9 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/platform.c b/xen/arch/arm/platform.c
> index 8eb0b6e57a5a..4db5bbb4c51d 100644
> --- a/xen/arch/arm/platform.c
> +++ b/xen/arch/arm/platform.c
> @@ -155,6 +155,11 @@ bool platform_device_is_blacklisted(const struct dt_device_node *node)
>      return (dt_match_node(blacklist, node) != NULL);
>  }
>  
> +unsigned int arch_get_dma_bitsize(void)
> +{
> +    return ( platform && platform->dma_bitsize ) ? platform->dma_bitsize : 32;
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 7df91280bc77..f8ba49b1188f 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -366,6 +366,8 @@ int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
>      return -EOPNOTSUPP;
>  }
>  
> +unsigned int arch_get_dma_bitsize(void);
> +
>  #endif /*  __ARCH_ARM_MM__ */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h
> index 490d1f31aa14..31a6de4e2346 100644
> --- a/xen/include/asm-arm/numa.h
> +++ b/xen/include/asm-arm/numa.h
> @@ -25,11 +25,6 @@ extern mfn_t first_valid_mfn;
>  #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
>  #define __node_distance(a, b) (20)
>  
> -static inline unsigned int arch_get_dma_bitsize(void)
> -{
> -    return 32;
> -}
> -
>  #endif /* __ARCH_ARM_NUMA_H */
>  /*
>   * Local variables:
> diff --git a/xen/include/asm-arm/platform.h b/xen/include/asm-arm/platform.h
> index ed4d30a1be7c..997eb2521631 100644
> --- a/xen/include/asm-arm/platform.h
> +++ b/xen/include/asm-arm/platform.h
> @@ -38,6 +38,8 @@ struct platform_desc {
>       * List of devices which must not pass-through to a guest
>       */
>      const struct dt_device_match *blacklist_dev;
> +    /* Override the DMA width (32-bit by default). */
> +    unsigned int dma_bitsize;
>  };
>  
>  /*


From xen-devel-bounces@lists.xenproject.org Mon May 18 20:11:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 20:11:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jam5w-0004Du-Rc; Mon, 18 May 2020 20:10:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jam5v-0004Dp-JH
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 20:10:39 +0000
X-Inumbo-ID: a39f199e-9943-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a39f199e-9943-11ea-9887-bc764e2007e4;
 Mon, 18 May 2020 20:10:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UCKcSpquOpYqa0FlczQcR0jQkplo3iP1Dk1aM+Q+6Po=; b=IJHl/vMthMP/qNVJuCjUamZfY
 mVXoKdUg/rFy5YXFh0NIG6EAdor2Rl0iWNNKoRodXyIFJ86dUaMcK84j0KI91/o2XEl1kbt+tVqiG
 0hKrcsL3U1s9VyJk31i1otltztps+38O8qmaSRT0yQ3c0ywG0G9yFSeN133yi5W94jeUA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jam5t-0001Cc-BI; Mon, 18 May 2020 20:10:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jam5p-0003oE-2h; Mon, 18 May 2020 20:10:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jam5p-0004fD-10; Mon, 18 May 2020 20:10:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150230-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150230: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce
X-Osstest-Versions-That: linux=5a9ffb954a3933d7867f4341684a23e008d6839b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 20:10:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150230 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150230/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150225
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150225
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150225
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150225
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150225
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150225
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150225
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150225
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150225
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce
baseline version:
 linux                5a9ffb954a3933d7867f4341684a23e008d6839b

Last test of basis   150225  2020-05-17 11:37:25 Z    1 days
Failing since        150226  2020-05-17 22:39:35 Z    0 days    2 attempts
Testing same since   150230  2020-05-18 09:16:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alan Stern <stern@rowland.harvard.edu>
  Andrey Konovalov <andreyknvl@google.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Arnd Bergmann <arnd@arndb.de>
  Borislav Petkov <bp@suse.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Corey Minyard <cminyard@mvista.com>
  Eric W. Biederman <ebiederm@xmission.com>
  Eugeniu Rosca <erosca@de.adit-jv.com>
  Felipe Balbi <balbi@kernel.org>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Heiko Stuebner <heiko@sntech.de>
  Jason Yan <yanaijie@huawei.com>
  John Stultz <john.stultz@linaro.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Justin Swartz <justin.swartz@risingedge.co.za>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kyungtae Kim <kt0755@gmail.com>
  Li Jun <jun.li@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Marc Zyngier <maz@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Prashant Malani <pmalani@chromium.org>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Samuel Zou <zou_wei@huawei.com>
  Sriharsha Allenki <sallenki@codeaurora.org>
  Stephen Boyd <sboyd@kernel.org>
  Tero Kristo <t-kristo@ti.com>
  Thierry Reding <treding@nvidia.com>
  Tony Lindgren <tony@atomide.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Wolfram Sang <wsa+renesas@sang-engineering.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   5a9ffb954a39..b9bbe6ed63b2  b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon May 18 20:34:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 20:34:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jamSy-00060F-UI; Mon, 18 May 2020 20:34:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDU1=7A=epam.com=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1jamSw-00060A-U2
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 20:34:27 +0000
X-Inumbo-ID: f684cb9c-9946-11ea-a89a-12813bfff9fa
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.60]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f684cb9c-9946-11ea-a89a-12813bfff9fa;
 Mon, 18 May 2020 20:34:25 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DkhxZv6tHjTqIVMoJQlO+A21klSBq2eslo1RmA8kRTKeDJUYRkDA+drM45JbPLVTadeKtAYbPhLEsDg9GYcLw18G6Y3Sm16E5pteGOcUbqbRY08tZBmw6tp+tZZWTa4NgD6NOIM9/xr9UZ1hjPkhq7WPhLpu7fzsM+snV9Yg91p4dIqNsSkuWV0PXtenPC5Q7cD00aBgXh7EmWNDwNpHuyxEkBCXm/KIa91+IqOJHhJjsayVSiW/lIzGMzeUeD2MlMo1Sgt5hWoRux1IQKw33ZUR6O/H+SS+Vf0oGI70zaIfi4spNmkwGcY3tTaSTqnMi8d+rgUGcD8izchE5JRqMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NNc84L0gIWpd4BowBxoWBAW970PLb7AWSyx/lVpLhBw=;
 b=M2vGpoza23cid5qfz2pdo2qlSI6Q3RrEdDI6gMTrj9YgdaEIfAw2UcacRRW35f5tRP0ueqiJ+ATIiTV3hkWq4bHdDzSz2es0TrBkceajbYPiheYrLpR7HOzhoVvdrnN2cPJNTWZ/04Lb3ICskxAbVpJG++Lc8hVejW/cjKH7hr2WmEJIAmdbLJxcBEWqTM//aPpyoErHpMwIXFoo7jm2Wz7I3YnY5nkrJt9fH3H8stWzOT+QfgvKCMeVXqe/SdnDIHVjrtM3Uu3XDYHoGq194tHV86kzt0GPLVTzuuGpRYMBzgaKMnqWElgGhI0gfRUxUrlrWGg5D1N9jdIxinw0xA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NNc84L0gIWpd4BowBxoWBAW970PLb7AWSyx/lVpLhBw=;
 b=MS8atY4t000BLBwlhev/VRFUCO0wAKCAv38ZT1AZDyMW335wQ7mpwp6lFLn1ORNdMLCdxlVV8QsLtLm9aPlmTdqXSYDmBf6QtM+lg4OqqrUZwjI7tzN4trEnmOo3EV9fRpElK2I7p+lZHDgdYrYDFC26vP3irY9F3gTHe+WKT10=
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 (2603:10a6:803:23::18) by VI1PR0302MB3280.eurprd03.prod.outlook.com
 (2603:10a6:803:21::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20; Mon, 18 May
 2020 20:34:23 +0000
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75]) by VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75%5]) with mapi id 15.20.3000.034; Mon, 18 May 2020
 20:34:23 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "julien@xen.org" <julien@xen.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.14 2/3] xen/arm: Take into account the DMA width
 when allocating Dom0 memory banks
Thread-Topic: [PATCH for-4.14 2/3] xen/arm: Take into account the DMA width
 when allocating Dom0 memory banks
Thread-Index: AQHWLQfAliF6MoKavkWMBTAyxOognaiuTV0A
Date: Mon, 18 May 2020 20:34:22 +0000
Message-ID: <aa95369bf22df89404243dd4e7374f8015ccc9ad.camel@epam.com>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-3-julien@xen.org>
In-Reply-To: <20200518113008.15422-3-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Evolution 3.36.2 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: aa7be173-5fd4-4761-a867-08d7fb6ad98c
x-ms-traffictypediagnostic: VI1PR0302MB3280:
x-microsoft-antispam-prvs: <VI1PR0302MB3280682F3A3460D150C6CDB0E6B80@VI1PR0302MB3280.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-forefront-prvs: 04073E895A
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: QFazzduIufV7tsGobL+RFowHffUnzE7UfxI5WWVCilwQ6erwSGHhe0AjlqwmwTnVBADhyn9HmIUhQd/UxKbBb6M6K7KwNMwyVWU639A0OJGevCCInY8xOkzIMrFttF3AZpjRmerr/9Niqpg56rbbPvc1wV+UAljf49iq0kyHebUqqXBXDAaEkLeLCGHqdMEMkRay402MujzjMDDO8tjgxT7PP2xStoqa/ycv7cygN/mr4TtbFKJ1RHCto4qRZ9VzIv9/UvOqkLOglYRKLDGgdM+nVeK1qhjiwj/9+uchXLYz9N2PUT+n1/lc2BDP8VY5AuwaAf73Z8w21n5/HCT6X627ckGyeJKZHDy85DPbAzLnHkzyIznuUEiHPYTmtxB7kMgB6rXOufEg9ELJWbw4NnOdCyEBTxOJt5JpLu47erx0QXWENhlPMmx5q6h1eN+P
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR0302MB3407.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(39860400002)(346002)(136003)(396003)(366004)(376002)(5660300002)(55236004)(316002)(6512007)(36756003)(478600001)(8936002)(54906003)(110136005)(8676002)(15650500001)(6506007)(4326008)(26005)(6486002)(186003)(66446008)(64756008)(66556008)(76116006)(91956017)(2906002)(86362001)(66946007)(2616005)(66476007)(71200400001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: z0m/W3GbX+kBHXxQWuLbExhbMaU6fmExvkXQvENwKTMm71Y56nXA0cCrKDcxBElthjarU17b8/E4Dj3UwOvk7cPvMOMQWbukCm7ddxggNKu31VG4XqsyFE1fVUnU5Ta//Ld9EGY0EVICfDzEimhelV37ysicwNYvdRB4codLyjjZxtuYXjl/J9nZmD/fuZFGPcZtMJ0dMpYttrVNyyayPMdP+AcX2YXpy9djFlvKjMg5VwvG/29rMAhbqpB+Agr+wl0/fPgNXvkZNE9jq2/j/OOEL+FV8rlKRpZdmcVVInVGI5exaPpakoS4Ex/ZZ11j2Khs9plYgPXI6EgpyuPG0x/3P4OPG9g/8N75eJN5egZDqvJ/F9m8LIaQ/fmYA3+IsyxO6vzaB07sXKqIzlCNv/0099xHPduBzFRxf/nre46KO0zeDo2IYsrkfnEfJjH46noeePTl+iPEs9irO2bHabDTOm4nrRJM9mWgcYbpv20=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <9C87E15FCE3EAA4BB76ACFE7812BFC16@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: aa7be173-5fd4-4761-a867-08d7fb6ad98c
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 May 2020 20:34:22.9774 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 3kQ9mKCPX1ZsbEfQ+f4FhYFeLM9rwBvJ09+PrEww+q+R/Zm5jF2sOxCRHLOQaoQWncjlfiBF3/l/Y8OvHvXqvmA2aRsrIm5z9vGP/A6bW7Q=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB3280
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 "jgrall@amazon.com" <jgrall@amazon.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "roman@zededa.com" <roman@zededa.com>, "minyard@acm.org" <minyard@acm.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

SGkgSnVsaWVuLA0KDQpPbiBNb24sIDIwMjAtMDUtMTggYXQgMTI6MzAgKzAxMDAsIEp1bGllbiBH
cmFsbCB3cm90ZToNCj4gRnJvbTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCj4g
DQo+IEF0IHRoZSBtb21lbnQsIFhlbiBpcyBhc3N1bWluZyB0aGF0IGFsbCB0aGUgZGV2aWNlcyBh
cmUgYXQgbGVhc3QgMzItYml0DQo+IERNQSBjYXBhYmxlLiBIb3dldmVyLCBzb21lIFNvQ3MgaGF2
ZSBkZXZpY2VzIHRoYXQgbWF5IGJlIGFibGUgdG8gYWNjZXNzDQo+IGEgbXVjaCByZXN0cmljdGVk
IHJhbmdlLiBGb3IgaW5zdGFuY2UsIHRoZSBSYXNwYmVycnkgUEkgNCBoYXMgZGV2aWNlcw0KPiB0
aGF0IGNhbiBvbmx5IGFjY2VzcyB0aGUgZmlyc3QgR0Igb2YgUkFNLg0KPiANCj4gVGhlIGZ1bmN0
aW9uIGFyY2hfZ2V0X2RtYV9iaXRfc2l6ZSgpIHdpbGwgcmV0dXJuIHRoZSBsb3dlc3QgRE1BIHdp
ZHRoIG9uDQo+IHRoZSBwbGF0Zm9ybS4gVXNlIGl0IHRvIGRlY2lkZSB3aGF0IGlzIHRoZSBsaW1p
dCBmb3IgdGhlIGxvdyBtZW1vcnkuDQo+IA0KPiBTaWduZWQtb2ZmLWJ5OiBKdWxpZW4gR3JhbEwg
PGpncmFsbEBhbWF6b24uY29tPg0KPiAtLS0NCj4gIHhlbi9hcmNoL2FybS9kb21haW5fYnVpbGQu
YyB8IDMyICsrKysrKysrKysrKysrKysrKystLS0tLS0tLS0tLS0tDQo+ICAxIGZpbGUgY2hhbmdl
ZCwgMTkgaW5zZXJ0aW9ucygrKSwgMTMgZGVsZXRpb25zKC0pDQo+IA0KPiBkaWZmIC0tZ2l0IGEv
eGVuL2FyY2gvYXJtL2RvbWFpbl9idWlsZC5jIGIveGVuL2FyY2gvYXJtL2RvbWFpbl9idWlsZC5j
DQo+IGluZGV4IDQzMDcwODc1MzY0Mi4uYWJjNGU0NjNkMjdjIDEwMDY0NA0KPiAtLS0gYS94ZW4v
YXJjaC9hcm0vZG9tYWluX2J1aWxkLmMNCj4gKysrIGIveGVuL2FyY2gvYXJtL2RvbWFpbl9idWls
ZC5jDQo+IEBAIC0yMTEsMTAgKzIxMSwxMyBAQCBmYWlsOg0KPiAgICogICAgdGhlIHJhbWRpc2sg
YW5kIERUQiBtdXN0IGJlIHBsYWNlZCB3aXRoaW4gYSBjZXJ0YWluIHByb3hpbWl0eSBvZg0KPiAg
ICogICAgdGhlIGtlcm5lbCB3aXRoaW4gUkFNLg0KPiAgICogMy4gRm9yIGRvbTAgd2Ugd2FudCB0
byBwbGFjZSBhcyBtdWNoIG9mIHRoZSBSQU0gYXMgd2UgcmVhc29uYWJseSBjYW4NCj4gLSAqICAg
IGJlbG93IDRHQiwgc28gdGhhdCBpdCBjYW4gYmUgdXNlZCBieSBub24tTFBBRSBlbmFibGVkIGtl
cm5lbHMgKDMyLWJpdCkNCj4gKyAqICAgIGJlbG93IDRHQiwgc28gdGhhdCBpdCBjYW4gYmUgdXNl
ZCBieSBub24tTFBBRSBlbmFibGVkIGtlcm5lbHMgKDMyLWJpdCkuDQpJcyBmdWxsIHN0b3AgcmVh
bGx5IG5lZWRlZCB0aGVyZT8NCg0KPiAgICogICAgb3Igd2hlbiBhIGRldmljZSBhc3NpZ25lZCB0
byBkb20wIGNhbiBvbmx5IGRvIDMyLWJpdCBETUEgYWNjZXNzLg0KPiAtICogNC4gRm9yIDMyLWJp
dCBkb20wIHRoZSBrZXJuZWwgbXVzdCBiZSBsb2NhdGVkIGJlbG93IDRHQi4NCj4gLSAqIDUuIFdl
IHdhbnQgdG8gaGF2ZSBhIGZldyBsYXJnZXJzIGJhbmtzIHJhdGhlciB0aGFuIG1hbnkgc21hbGxl
ciBvbmVzLg0KPiArICogNC4gU29tZSBkZXZpY2VzIGFzc2lnbmVkIHRvIGRvbTAgY2FuIG9ubHkg
ZG8gMzItYml0IERNQSBhY2Nlc3Mgb3INCj4gKyAqICAgIGV2ZW4gYmUgbW9yZSByZXN0cmljdGVk
LiBXZSB3YW50IHRvIGFsbG9jYXRlIGFzIG11Y2ggb2YgdGhlIFJBTQ0KPiArICogICAgYXMgd2Ug
cmVhc29uYWJseSBjYW4gdGhhdCBjYW4gYmUgYWNjZXNzZWQgZnJvbSBhbGwgdGhlIGRldmljZXMu
Lg0KPiArICogNS4gRm9yIDMyLWJpdCBkb20wIHRoZSBrZXJuZWwgbXVzdCBiZSBsb2NhdGVkIGJl
bG93IDRHQi4NCj4gKyAqIDYuIFdlIHdhbnQgdG8gaGF2ZSBhIGZldyBsYXJnZXJzIGJhbmtzIHJh
dGhlciB0aGFuIG1hbnkgc21hbGxlciBvbmVzLg0KPiAgICoNCj4gICAqIEZvciB0aGUgZmlyc3Qg
dHdvIHJlcXVpcmVtZW50cyB3ZSBuZWVkIHRvIG1ha2Ugc3VyZSB0aGF0IHRoZSBsb3dlc3QNCj4g
ICAqIGJhbmsgaXMgc3VmZmljaWVudGx5IGxhcmdlLg0KPiBAQCAtMjQ1LDkgKzI0OCw5IEBAIGZh
aWw6DQo+ICAgKiB3ZSBnaXZlIHVwLg0KPiAgICoNCj4gICAqIEZvciAzMi1iaXQgZG9tYWluIHdl
IHJlcXVpcmUgdGhhdCB0aGUgaW5pdGlhbCBhbGxvY2F0aW9uIGZvciB0aGUNCj4gLSAqIGZpcnN0
IGJhbmsgaXMgdW5kZXIgNEcuIEZvciA2NC1iaXQgZG9tYWluLCB0aGUgZmlyc3QgYmFuayBpcyBw
cmVmZXJyZWQNCj4gLSAqIHRvIGJlIGFsbG9jYXRlZCB1bmRlciA0Ry4gVGhlbiBmb3IgdGhlIHN1
YnNlcXVlbnQgYWxsb2NhdGlvbnMgd2UNCj4gLSAqIGluaXRpYWxseSBhbGxvY2F0ZSBtZW1vcnkg
b25seSBmcm9tIGJlbG93IDRHQi4gT25jZSB0aGF0IHJ1bnMgb3V0DQo+ICsgKiBmaXJzdCBiYW5r
IGlzIHBhcnQgb2YgdGhlIGxvdyBtZW0uIEZvciA2NC1iaXQsIHRoZSBmaXJzdCBiYW5rIGlzIHBy
ZWZlcnJlZA0KPiArICogdG8gYmUgYWxsb2NhdGVkIGluIHRoZSBsb3cgbWVtLiBUaGVuIGZvciBz
dWJzZXF1ZW50IGFsbG9jYXRpb24sIHdlDQo+ICsgKiBpbml0aWFsbHkgYWxsb2NhdGUgbWVtb3J5
IG9ubHkgZnJvbSBsb3cgbWVtLiBPbmNlIHRoYXQgcnVucyBvdXQgb3V0DQo+ICAgKiAoYXMgZGVz
Y3JpYmVkIGFib3ZlKSB3ZSBhbGxvdyBoaWdoZXIgYWxsb2NhdGlvbnMgYW5kIGNvbnRpbnVlIHVu
dGlsDQo+ICAgKiB0aGF0IHJ1bnMgb3V0IChvciB3ZSBoYXZlIGFsbG9jYXRlZCBzdWZmaWNpZW50
IGRvbTAgbWVtb3J5KS4NCj4gICAqLw0KPiBAQCAtMjYyLDYgKzI2NSw3IEBAIHN0YXRpYyB2b2lk
IF9faW5pdCBhbGxvY2F0ZV9tZW1vcnlfMTEoc3RydWN0IGRvbWFpbiAqZCwNCj4gICAgICBpbnQg
aTsNCj4gIA0KPiAgICAgIGJvb2wgbG93bWVtID0gdHJ1ZTsNCj4gKyAgICB1bnNpZ25lZCBpbnQg
bG93bWVtX2JpdHNpemUgPSBtaW4oMzJVLCBhcmNoX2dldF9kbWFfYml0c2l6ZSgpKTsNCj4gICAg
ICB1bnNpZ25lZCBpbnQgYml0czsNCj4gIA0KPiAgICAgIC8qDQo+IEBAIC0yODIsNyArMjg2LDcg
QEAgc3RhdGljIHZvaWQgX19pbml0IGFsbG9jYXRlX21lbW9yeV8xMShzdHJ1Y3QgZG9tYWluICpk
LA0KPiAgICAgICAqLw0KPiAgICAgIHdoaWxlICggb3JkZXIgPj0gbWluX2xvd19vcmRlciApDQo+
ICAgICAgew0KPiAtICAgICAgICBmb3IgKCBiaXRzID0gb3JkZXIgOyBiaXRzIDw9IChsb3dtZW0g
PyAzMiA6IFBBRERSX0JJVFMpOyBiaXRzKysgKQ0KPiArICAgICAgICBmb3IgKCBiaXRzID0gb3Jk
ZXIgOyBiaXRzIDw9IGxvd21lbV9iaXRzaXplOyBiaXRzKysgKQ0KPiAgICAgICAgICB7DQo+ICAg
ICAgICAgICAgICBwZyA9IGFsbG9jX2RvbWhlYXBfcGFnZXMoZCwgb3JkZXIsIE1FTUZfYml0cyhi
aXRzKSk7DQo+ICAgICAgICAgICAgICBpZiAoIHBnICE9IE5VTEwgKQ0KPiBAQCAtMjk2LDI0ICsz
MDAsMjYgQEAgc3RhdGljIHZvaWQgX19pbml0IGFsbG9jYXRlX21lbW9yeV8xMShzdHJ1Y3QgZG9t
YWluICpkLA0KPiAgICAgICAgICBvcmRlci0tOw0KPiAgICAgIH0NCj4gIA0KPiAtICAgIC8qIEZh
aWxlZCB0byBhbGxvY2F0ZSBiYW5rMCB1bmRlciA0R0IgKi8NCj4gKyAgICAvKiBGYWlsZWQgdG8g
YWxsb2NhdGUgYmFuazAgaW4gdGhlIGxvd21lbSByZWdpb24uICovDQo+ICAgICAgaWYgKCBpc18z
MmJpdF9kb21haW4oZCkgKQ0KPiAgICAgICAgICBwYW5pYygiVW5hYmxlIHRvIGFsbG9jYXRlIGZp
cnN0IG1lbW9yeSBiYW5rXG4iKTsNCj4gIA0KPiAtICAgIC8qIFRyeSB0byBhbGxvY2F0ZSBtZW1v
cnkgZnJvbSBhYm92ZSA0R0IgKi8NCj4gLSAgICBwcmludGsoWEVOTE9HX0lORk8gIk5vIGJhbmsg
aGFzIGJlZW4gYWxsb2NhdGVkIGJlbG93IDRHQi5cbiIpOw0KPiArICAgIC8qIFRyeSB0byBhbGxv
Y2F0ZSBtZW1vcnkgZnJvbSBhYm92ZSB0aGUgbG93bWVtIHJlZ2lvbiAqLw0KPiArICAgIHByaW50
ayhYRU5MT0dfSU5GTyAiTm8gYmFuayBoYXMgYmVlbiBhbGxvY2F0ZWQgYmVsb3cgJXUtYml0Llxu
IiwNCj4gKyAgICAgICAgICAgbG93bWVtX2JpdHNpemUpOw0KPiAgICAgIGxvd21lbSA9IGZhbHNl
Ow0KPiAgDQo+ICAgZ290X2JhbmswOg0KPiAgDQo+ICAgICAgLyoNCj4gLSAgICAgKiBJZiB3ZSBm
YWlsZWQgdG8gYWxsb2NhdGUgYmFuazAgdW5kZXIgNEdCLCBjb250aW51ZSBhbGxvY2F0aW5nDQo+
IC0gICAgICogbWVtb3J5IGZyb20gYWJvdmUgNEdCIGFuZCBmaWxsIGluIGJhbmtzLg0KPiArICAg
ICAqIElmIHdlIGZhaWxlZCB0byBhbGxvY2F0ZSBiYW5rMCBpbiB0aGUgbG93bWVtIHJlZ2lvbiwN
Cj4gKyAgICAgKiBjb250aW51ZSBhbGxvY2F0aW5nIGZyb20gYWJvdmUgdGhlIGxvd21lbSBhbmQg
ZmlsbCBpbiBiYW5rcy4NCj4gICAgICAgKi8NCj4gICAgICBvcmRlciA9IGdldF9hbGxvY2F0aW9u
X3NpemUoa2luZm8tPnVuYXNzaWduZWRfbWVtKTsNCj4gICAgICB3aGlsZSAoIGtpbmZvLT51bmFz
c2lnbmVkX21lbSAmJiBraW5mby0+bWVtLm5yX2JhbmtzIDwgTlJfTUVNX0JBTktTICkNCj4gICAg
ICB7DQo+IC0gICAgICAgIHBnID0gYWxsb2NfZG9taGVhcF9wYWdlcyhkLCBvcmRlciwgbG93bWVt
ID8gTUVNRl9iaXRzKDMyKSA6IDApOw0KPiArICAgICAgICBwZyA9IGFsbG9jX2RvbWhlYXBfcGFn
ZXMoZCwgb3JkZXIsDQo+ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBsb3dtZW0g
PyBNRU1GX2JpdHMobG93bWVtX2JpdHNpemUpIDogMCk7DQo+ICAgICAgICAgIGlmICggIXBnICkN
Cj4gICAgICAgICAgew0KPiAgICAgICAgICAgICAgb3JkZXIgLS07DQo=


From xen-devel-bounces@lists.xenproject.org Mon May 18 20:36:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 20:36:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jamUf-00065j-AK; Mon, 18 May 2020 20:36:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CDU1=7A=epam.com=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1jamUe-00065c-MH
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 20:36:12 +0000
X-Inumbo-ID: 3563b08a-9947-11ea-a89a-12813bfff9fa
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.84]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3563b08a-9947-11ea-a89a-12813bfff9fa;
 Mon, 18 May 2020 20:36:11 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BdYx+gfyM/FEpXd3AKqZafCUUn+j1r6IUAgSXeg0oMhfc959lJAjTD/RmqM77WoiDUIylxN7KkuAVpwdZs4cAlDI5N37EXRz0r6tvBnMUL4v+wN1NeN6yY5A1qUh6iLTTRbUjlJ/ELSkqE0LGZcQfeoNLLettQwwwm7adLBcSmaY9bRr8YS8oFW7MWlOaWioOd9IRvZ/gzI46Mf8S9iFQL7imdmELv8d2A+rL7HJ41ybP06BJEWQvQhoEdayA2TOIEUpk6Q9QJFLBVqC/q/pItYTX5ZMymqIQ/CBAz7KTBXZvGIWEJ3I2BLRrVFsUMW/54zgV729sxnEl9DulXC/KQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NJnt6gnt51F/31z+7R1URZkHQnPNV1KHLm19piMf+c0=;
 b=Cvec6Zu8PLZ7JrLIKCTT6JOcyCEr1egTWhaPSOh8y508WybppQS3vMeEbeo85qSFFQVNjUKi6A8ktZuZW5sw/4Y8XHQEF+HXFHODntUzg63r9EC7NJX0r8lW0z1fRYP4ye3Cqi3b99vjLZM48a/Ix/QERoQai+Vws6k4XshuzXv5D2VahWFoeTPpbqdOrFDfUm+24Cc5+kz47DW0U04P26v6sSmJozmPkCg8nanRuVbdBhkNppjqiSq160EQgYT8spZC7ud1qHJbgUDzmrrAO9OnpdSWHy4aEqh33Dgz7v6imt07TG6NSkyLse2+kN9S75u9Ts1ViCamHqYoMJvQtg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NJnt6gnt51F/31z+7R1URZkHQnPNV1KHLm19piMf+c0=;
 b=qW9fMKki+C2q/S/X1stHHD4Owr94h5frK75AvVJmr4nsgerqDXI+mQjkjXtX0wD8SPcl/iv5kAkMnZYwpYpOqZas+CpIOFpbsmKjWKguDYHlCS5sg+QITqNDlzJvLRc44bCqiVbSQuCknwsQspLWFm9f+gr9uyQSCnpHiLSnhLU=
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 (2603:10a6:803:23::18) by VI1PR0302MB3280.eurprd03.prod.outlook.com
 (2603:10a6:803:21::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20; Mon, 18 May
 2020 20:36:08 +0000
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75]) by VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75%5]) with mapi id 15.20.3000.034; Mon, 18 May 2020
 20:36:08 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "julien@xen.org" <julien@xen.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH for-4.14 3/3] xen/arm: plat: Allocate as much as possible
 memory below 1GB for dom0 for RPI
Thread-Topic: [PATCH for-4.14 3/3] xen/arm: plat: Allocate as much as possible
 memory below 1GB for dom0 for RPI
Thread-Index: AQHWLQfDNIfgFSjSGUyub+p/1yhn8KiuTdqA
Date: Mon, 18 May 2020 20:36:08 +0000
Message-ID: <bc9a1121a7484ef484c30869793698f912987d23.camel@epam.com>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-4-julien@xen.org>
In-Reply-To: <20200518113008.15422-4-julien@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Evolution 3.36.2 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 897fff7b-34a7-4823-22e1-08d7fb6b1863
x-ms-traffictypediagnostic: VI1PR0302MB3280:
x-microsoft-antispam-prvs: <VI1PR0302MB328039A09F93A5C9004EA84EE6B80@VI1PR0302MB3280.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-forefront-prvs: 04073E895A
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: yKp16Pr/QKernNFeHu3uf+Yi+iS7KbUX2l/VxVElnf9L7bUJwgwYyfgf80WYj/DlhktHL0MlgyKhJ3TJ2L6g6EzHcfVOVJ+BWTgvAv2zMJoyFCrIs/r0Qp8/Gf2dmIibePG96qbs2OjdtNoJmDh+AbqgLTlzE0dPtgBlIDRoGznREuK6sqE95udl06Xo6oCwO1GT+pbOcC7XuFwUTLniTBu4ddhI9p1p1L7Pz0tycJl91oL5CGijVF+jqFINvBf646/gZZmCDtpgrSh1gz+AR90TpaQzLjQjKsyUdh0eM2TuZ9R3JBg5pgXm7XPHaOBzZnm+tUTwo4J9GByARPt1VYhGl39CFh167q68ubZAbhvwhYgi3PCZkgXaUMNVjIh+BQKCIfPgddlmTiwEKs3h0Rb0ei0wKTuhXTuGJZNe+64LcmLBGwN7ix8Jyx8QV+6i
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR0302MB3407.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(396003)(376002)(366004)(346002)(136003)(39860400002)(2906002)(86362001)(6486002)(186003)(26005)(64756008)(66556008)(76116006)(91956017)(66446008)(66946007)(2616005)(66476007)(71200400001)(316002)(55236004)(5660300002)(6506007)(4326008)(54906003)(110136005)(8676002)(36756003)(6512007)(8936002)(478600001);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: 86Q/ey93VcyDN/YU6B9R0AUC5t9LY73gsjd7dfNTJPUWtpoItnT3sww1W2AHR83MWGtL7fL7M0iM2RvQ+5q9VQsPZqtsw3RvztNJu2PYFKOWaFrugXsv0TpoagYXtXvJyp0I/SoBiRAF7HkkOBLQWvGPg0qPapFOktqt/rEqHpVRRHO586+OvR29ODdN6zkleoFv22d/dQwl08PW/86YOCcku/rFsIHk0+vCmXHA2fxfqSsOH9GaQShjgJSVn3Amv/zOjlYctJGxBKhmk7Eppu3bDzsiURpeS3yjUjXYT0FyTb4EIaq29x/NiZapWS8xty32kPIJXNLiEQvHb85FUmJARDjDNjN1elHH8wFC6ISsVnI+u1D4DIG7jCb2Soog/P7kyzdnY1lc1kiw61XeXaeZ8XTRfRs1Y9ZCOycjpk9WQT2/7H7B3lQ/E1IzLhTt7yTyuH0DIX0bbjNBlVBLdT5DW5xah65TCWCORvZtxd8=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <88EC96D3B51CF44BB642111FF3E9F8D1@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 897fff7b-34a7-4823-22e1-08d7fb6b1863
X-MS-Exchange-CrossTenant-originalarrivaltime: 18 May 2020 20:36:08.4157 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: N3BcWM4g9M53PIlDQjvxVTozi69RzMzgjnw+wquklLk6CTws8VdC/dkpXyU2HLhrcQlCOkfojch3Fy9c2zpZYDe17K8kQqOokHp5BUv4SUQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB3280
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 "jgrall@amazon.com" <jgrall@amazon.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "roman@zededa.com" <roman@zededa.com>, "minyard@acm.org" <minyard@acm.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

SGkgSnVsaWVuLA0KDQpPbiBNb24sIDIwMjAtMDUtMTggYXQgMTI6MzAgKzAxMDAsIEp1bGllbiBH
cmFsbCB3cm90ZToNCj4gRnJvbTogSnVsaWVuIEdyYWxsIDxqZ3JhbGxAYW1hem9uLmNvbT4NCj4g
DQo+IFRoZSByYXNwYmVycnkgUEkgNCBoYXMgZGV2aWNlcyB0aGF0IGNhbiBvbmx5IERNQSBpbnRv
IHRoZSBmaXJzdCBHQiBvZg0KPiB0aGUgUkFNLiBUaGVyZWZvcmUgd2Ugd2FudCBhbGxvY2F0ZSBh
cyBtdWNoIGFzIHBvc3NpYmxlIG1lbW9yeSBiZWxvdyAxR0INCj4gZm9yIGRvbTAuDQo+IA0KPiBV
c2UgdGhlIHJlY2VudGx5IGludHJvZHVjZWQgZG1hX2JpdHNpemUgZmllbGQgdG8gc3BlY2lmeSB0
aGUgRE1BIHdpZHRoDQo+IHN1cHBvcnRlZC4NCj4gDQo+IFNpZ25lZC1vZmYtYnk6IEp1bGllbiBH
cmFsbCA8amdyYWxsQGFtYXpvbi5jb20+DQo+IFJlcG9ydGVkLWJ5OiBDb3JleSBNaW55YXJkIDxt
aW55YXJkQGFjbS5vcmc+DQo+IC0tLQ0KPiAgeGVuL2FyY2gvYXJtL3BsYXRmb3Jtcy9icmNtLXJh
c3BiZXJyeS1waS5jIHwgMSArDQo+ICAxIGZpbGUgY2hhbmdlZCwgMSBpbnNlcnRpb24oKykNCj4g
DQo+IGRpZmYgLS1naXQgYS94ZW4vYXJjaC9hcm0vcGxhdGZvcm1zL2JyY20tcmFzcGJlcnJ5LXBp
LmMgYi94ZW4vYXJjaC9hcm0vcGxhdGZvcm1zL2JyY20tcmFzcGJlcnJ5LXBpLmMNCj4gaW5kZXgg
YjY5N2ZhMmM2YzBlLi5hZDU0ODM0MzdiMzEgMTAwNjQ0DQo+IC0tLSBhL3hlbi9hcmNoL2FybS9w
bGF0Zm9ybXMvYnJjbS1yYXNwYmVycnktcGkuYw0KPiArKysgYi94ZW4vYXJjaC9hcm0vcGxhdGZv
cm1zL2JyY20tcmFzcGJlcnJ5LXBpLmMNCj4gQEAgLTQzLDYgKzQzLDcgQEAgc3RhdGljIGNvbnN0
IHN0cnVjdCBkdF9kZXZpY2VfbWF0Y2ggcnBpNF9ibGFja2xpc3RfZGV2W10gX19pbml0Y29uc3Qg
PQ0KPiAgUExBVEZPUk1fU1RBUlQocnBpNCwgIlJhc3BiZXJyeSBQaSA0IikNCj4gICAgICAuY29t
cGF0aWJsZSAgICAgPSBycGk0X2R0X2NvbXBhdCwNCj4gICAgICAuYmxhY2tsaXN0X2RldiAgPSBy
cGk0X2JsYWNrbGlzdF9kZXYsDQo+ICsgICAgLmRtYV9iaXRzaXplICAgID0gMTAsDQoNCkknbSBj
b25mdXNlZC4gU2hvdWxkIGl0IGJlIDMwPw0KDQo+ICBQTEFURk9STV9FTkQNCj4gIA0KPiAgLyoN
Cg==


From xen-devel-bounces@lists.xenproject.org Mon May 18 20:54:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 18 May 2020 20:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jammS-0007mu-PS; Mon, 18 May 2020 20:54:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=d2DE=7A=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jammR-0007mp-Hv
 for xen-devel@lists.xenproject.org; Mon, 18 May 2020 20:54:35 +0000
X-Inumbo-ID: c320c744-9949-11ea-a89b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c320c744-9949-11ea-a89b-12813bfff9fa;
 Mon, 18 May 2020 20:54:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=XPsFNaQ/ZuI6o/nfxUgw7XAqCokgpQU8XwB8weDRmHI=; b=uprU6unXXN5dOVYRuWgQxrAP5
 yfZ4+c13YmdxB2l4IIi8jOw42mSGIuCd+gm2qcBPzVG/w0n0GGZ8lWnuzz+kIfQV/bA2ZS3T4TRRI
 HJfpauNPNrQhqampTVFi42TGqyD2FFyk0sJcT6+ZgFL3zsoQ+I2kO4KNSKwESPZvsyhm8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jammJ-00026q-73; Mon, 18 May 2020 20:54:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jammI-0005gC-Sw; Mon, 18 May 2020 20:54:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jammI-0005q2-SL; Mon, 18 May 2020 20:54:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150233-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150233: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=475ffdbbf5778329319ef6f7bd6315c163163440
X-Osstest-Versions-That: xen=97fb0253e6c2f2221bfd0895b7ffe3a99330d847
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 18 May 2020 20:54:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150233 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150233/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  475ffdbbf5778329319ef6f7bd6315c163163440
baseline version:
 xen                  97fb0253e6c2f2221bfd0895b7ffe3a99330d847

Last test of basis   150231  2020-05-18 15:01:39 Z    0 days
Testing same since   150233  2020-05-18 18:00:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   97fb0253e6..475ffdbbf5  475ffdbbf5778329319ef6f7bd6315c163163440 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 19 00:03:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 00:03:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1japib-0006v3-24; Tue, 19 May 2020 00:02:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Je38=7B=gmail.com=tcminyard@srs-us1.protection.inumbo.net>)
 id 1japiZ-0006uy-4K
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 00:02:47 +0000
X-Inumbo-ID: 11daf2d2-9964-11ea-b9cf-bc764e2007e4
Received: from mail-ot1-x342.google.com (unknown [2607:f8b0:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11daf2d2-9964-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 00:02:46 +0000 (UTC)
Received: by mail-ot1-x342.google.com with SMTP id c3so9700423otr.12
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 17:02:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:date:from:to:cc:subject:message-id:reply-to:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=iYjOqrFGOmpbtD9yt96lQwWe0nWJADjGyH8dgI2UBJs=;
 b=vXG4Ea4qYNxaFsDjBV0PwUYxg/DGm6x2IWVvNPEMF9gSc9b36/aRdgohW8Pe1eq/M4
 N645SKh7DUAwIragn3CGxNF7txZZ8D3Mam4XhwDYkZ0NkqicUgpA7TzIRB7Cm65e3/G8
 xvEST1V/i/9C1Y2UOoPfveaguN1K6nL5C61iQlxJq/Bmz95VPphOWqnuqb4VmT2sRn/k
 bFBE0eRq9VGPjorXtiRHErquV5Zn874cLSkzDN+xICXLewjfKbe68/8W8JCZzvlTbNdG
 T5tC/lvGfkRb4HCBGtNDJpwL06Sba+qneRNPy4oWq39j7BBMMndd3hsahurPEBbuBg/6
 Nlpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:date:from:to:cc:subject:message-id
 :reply-to:references:mime-version:content-disposition:in-reply-to
 :user-agent;
 bh=iYjOqrFGOmpbtD9yt96lQwWe0nWJADjGyH8dgI2UBJs=;
 b=npi8czN11/EClf+08/G2KFCEDkEbyfkrC49sSBT9/AdF9FuU85tD0GK3qfbD48kcla
 rK4K9ZKMydGcxO6jvS4wWbpQtuMDvGgrlrjbI+0L1OdOYEtvTKrklQO40Y9PozLtGOWm
 I8MokzKE1uuImRfooMfPKcvty66eoMc9Evmb4FdybLIHP1hROA4GydG4ZKbpzpo40A9b
 R6vQVDxMOQM26Nl6vSW8f4ETb/rqKnsT7qrE3hQuG4ZIQDj0ouJHczzSzv1NkZaovY3G
 AVYbkQ74V7WQPExSv2ZH1rsfdOAf0tVoh2wqA8TMn+2TXrY+kXumzC+5/pnC7NqBra2l
 UF9A==
X-Gm-Message-State: AOAM530t+8tEZYmvRG+TihMXKTvCfyiuOmqb+bhirBAD9aDWqvuuHWJ+
 ZCo5pYN5WEWsggrHENezvA==
X-Google-Smtp-Source: ABdhPJxj1gl61nfGlDjfEwABJRLHTww6pg7TG615C4W2TfrFeaxVYUBIwfoo+vmi4hqoBWECYKUp+Q==
X-Received: by 2002:a9d:1a7:: with SMTP id e36mr13383074ote.215.1589846565860; 
 Mon, 18 May 2020 17:02:45 -0700 (PDT)
Received: from serve.minyard.net ([47.184.146.204])
 by smtp.gmail.com with ESMTPSA id y89sm712062ota.16.2020.05.18.17.02.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 17:02:44 -0700 (PDT)
Received: from minyard.net (unknown [192.168.27.193])
 by serve.minyard.net (Postfix) with ESMTPSA id B2B83180042;
 Tue, 19 May 2020 00:02:43 +0000 (UTC)
Date: Mon, 18 May 2020 19:02:42 -0500
From: Corey Minyard <minyard@acm.org>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-4.14 3/3] xen/arm: plat: Allocate as much as possible
 memory below 1GB for dom0 for RPI
Message-ID: <20200519000242.GA3948@minyard.net>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-4-julien@xen.org>
 <bc9a1121a7484ef484c30869793698f912987d23.camel@epam.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <bc9a1121a7484ef484c30869793698f912987d23.camel@epam.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: minyard@acm.org
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "julien@xen.org" <julien@xen.org>, "jgrall@amazon.com" <jgrall@amazon.com>,
 "roman@zededa.com" <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 08:36:08PM +0000, Volodymyr Babchuk wrote:
> Hi Julien,
> 
> On Mon, 2020-05-18 at 12:30 +0100, Julien Grall wrote:
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > The raspberry PI 4 has devices that can only DMA into the first GB of
> > the RAM. Therefore we want allocate as much as possible memory below 1GB
> > for dom0.
> > 
> > Use the recently introduced dma_bitsize field to specify the DMA width
> > supported.
> > 
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > Reported-by: Corey Minyard <minyard@acm.org>
> > ---
> >  xen/arch/arm/platforms/brcm-raspberry-pi.c | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
> > index b697fa2c6c0e..ad5483437b31 100644
> > --- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
> > +++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
> > @@ -43,6 +43,7 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
> >  PLATFORM_START(rpi4, "Raspberry Pi 4")
> >      .compatible     = rpi4_dt_compat,
> >      .blacklist_dev  = rpi4_blacklist_dev,
> > +    .dma_bitsize    = 10,
> 
> I'm confused. Should it be 30?

Indeed it should.  I just tested this series, and Linux fails to boot
with this set to 10.  With it set to 30 it works.
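
For illustration (a sketch, not Xen code; the helper name is made up): dma_bitsize is an address-bit width, so the DMA-reachable memory is 2**dma_bitsize bytes, which is why 10 and 30 behave so differently:

```python
# Sketch (not Xen code): dma_bitsize is the number of address bits a
# device can drive for DMA, so the reachable memory is 2**dma_bitsize bytes.
def dma_limit_bytes(dma_bitsize: int) -> int:
    """Highest DMA-reachable address (exclusive) for a given bit width."""
    return 1 << dma_bitsize

# The typo'd value 10 would limit DMA allocations to the first 1 KiB of RAM;
# the intended value 30 covers the first 1 GiB, matching the RPi 4 devices.
assert dma_limit_bytes(10) == 1024
assert dma_limit_bytes(30) == 1 << 30  # 1 GiB
```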

With this set to 30, you can have a:

Tested-by: Corey Minyard <cminyard@mvista.com>

for all three patches.

-corey

> 
> >  PLATFORM_END
> >  
> >  /*


From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarU7-00009H-8o; Tue, 19 May 2020 01:55:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarU5-00009A-LC
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:55:57 +0000
X-Inumbo-ID: deb56f9e-9973-11ea-ae69-bc764e2007e4
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id deb56f9e-9973-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 01:55:52 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id x12so9905710qts.9
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:55:52 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=RY6wcLegoM0K/cf+d7rR6BE972Bl6q9Nr9kajAEhtQU=;
 b=SsM8Xfz9X8ftXAmMO3K2D+HZByZc98cziQKaRFGUs1Gu/T9Yph4QEuD9FtYMPzP6IW
 CVpPOBI3oVZpocFP4HSHXUjDV+lJQNVOVneFVYzhpEE0gGdm6ihTFxmLOoe7ObFKvwVU
 2ovyorxvRYxOICjJOgIm+11kP77OwYHLvOt9JIrWs08Er1/yQgBYXvzttP+bCv41R+UB
 5p7RibXTXL24ezP1VRbODzk7dKy0g9HjkbFuvNh1iG7/2SAOFOlOjOWIxsp1LZRDWkz3
 mLXAUzAWAn+HYbPe3muVnK+/MR7Xp+yhG106aU5cCkumMRVGlGOnXXpcmwyRhSDqDgkT
 34dg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=RY6wcLegoM0K/cf+d7rR6BE972Bl6q9Nr9kajAEhtQU=;
 b=HhbKefTGFbselbj6QdKWhO9YrWpEdTpv0puqErlV39ABv+Dh26fWI4Srm7jrv70wz3
 AD6oQ9fHmoGGNHIsN701v7sZm0sXTRF0xsHBSSARAvgWVGBNUjai5/VkCa1DJIAQe0f2
 VuZKA9XJ54UB9LzJsdSwpuV85qrCmEKgMKGF8EKY98BBCo2lpyRIZiIIt0tO/NeWEDa5
 cWCmShimiz234IOPUqBMWXnj765IgV+jemdKDwwyoSIam2qjjXmgJmlXwrOYkJ63PmWh
 didNQ44w/MLXlZK1vyfBSTGVmi+Hlehytkk4a5xLXNHhRQKbJrGrfQ/6yaTprDIcDQ8u
 83Tg==
X-Gm-Message-State: AOAM533sV5JUX+mHwPUGUp4Wcw3XHgvWX0wsoFfqBCJeFCh5ka9DvHUh
 m70Jf/3koun7BqcTO5sHXqDxHeRX
X-Google-Smtp-Source: ABdhPJz9SwvFInDwhZCq7+sLwCbi3noQSsm8AYUsROaamQ7COBgQpKKaYKmnKgTpCzzQWAnO08ED/w==
X-Received: by 2002:ac8:554c:: with SMTP id o12mr18747723qtr.89.1589853351842; 
 Mon, 18 May 2020 18:55:51 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.55.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:55:51 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 02/19] Document ioemu Linux stubdomain protocol
Date: Mon, 18 May 2020 21:54:46 -0400
Message-Id: <20200519015503.115236-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Add documentation for upcoming Linux stubdomain for qemu-upstream.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
 - Replace dmargs with dm-argv for xenstore directory
 - Explain $STUBDOM_RESTORE_INCOMING_ARG for -incoming restore argument
---
 docs/misc/stubdom.txt | 52 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 52 insertions(+)

diff --git a/docs/misc/stubdom.txt b/docs/misc/stubdom.txt
index 64c77d9b64..c717a95d17 100644
--- a/docs/misc/stubdom.txt
+++ b/docs/misc/stubdom.txt
@@ -75,6 +75,58 @@ Defined commands:
    - "running" - success
 
 
+Toolstack to Linux ioemu stubdomain protocol
+--------------------------------------------
+
+This section describes the communication protocol between the toolstack and
+qemu-upstream running in a Linux stubdomain. The protocol includes
+expectations of both the stubdomain and qemu.
+
+Setup (done by toolstack, expected by stubdomain):
+ - Block devices for target domain are connected as PV disks to stubdomain,
+   according to configuration order, starting with xvda
+ - Network devices for target domain are connected as PV nics to stubdomain,
+   according to configuration order, starting with 0
+ - [not implemented] if graphics output is expected, VFB and VKB devices are set for stubdomain
+   (its backend is responsible for exposing them using appropriate protocol
+   like VNC or Spice)
+ - other target domain's devices are not connected at this point to stubdomain
+   (may be hot-plugged later)
+ - QEMU command line is stored in the
+   /vm/<target-uuid>/image/dm-argv xenstore dir, each argument as a separate
+   key in the form /vm/<target-uuid>/image/dm-argv/NNN, where NNN is the
+   0-padded argument number
+ - target domain id is stored in /local/domain/<stubdom-id>/target xenstore path
+?? - bios type is stored in /local/domain/<target-id>/hvmloader/bios
+ - stubdomain's console 0 is connected to qemu log file
+ - stubdomain's console 1 is connected to qemu save file (for saving state)
+ - stubdomain's console 2 is connected to qemu save file (for restoring state)
+ - next consoles are connected according to target guest's serial console configuration
+
+Environment exposed by stubdomain to qemu (needed to construct appropriate qemu command line and later interact with qmp):
+ - target domain's disks are available as /dev/xvd[a-z]
+ - console 2 (incoming domain state) must be connected to an FD and the command
+   line argument $STUBDOM_RESTORE_INCOMING_ARG must be replaced with fd:$FD to
+   form "-incoming fd:$FD"
+ - console 1 (saving domain state) is added over QMP to qemu as "fdset-id 1" (done by stubdomain, toolstack doesn't need to care about it)
+ - nics are connected to relevant stubdomain PV vifs when available (qemu -netdev should specify ifname= explicitly)
+
+Startup:
+1. the toolstack starts a PV stubdomain with the stubdom-linux-kernel kernel and stubdom-linux-initrd initrd
+2. the stubdomain initializes the relevant devices
+3. the stubdomain starts qemu with the requested command line, plus a few stubdomain-specific ones - including local qmp access options
+4. the stubdomain starts a vchan server on /local/domain/<stubdom-id>/device-model/<target-id>/qmp-vchan, exposing a qmp socket to the toolstack
+5. qemu signals readiness by writing "running" to the /local/domain/<stubdom-id>/device-model/<target-id>/state xenstore path
+6. the device model is now considered running
+
+QEMU can be controlled using QMP over a vchan at /local/domain/<stubdom-id>/device-model/<target-id>/qmp-vchan. Only one simultaneous connection is supported, and the toolstack needs to ensure that.
+
+Limitations:
+ - PCI passthrough requires permissive mode
+ - only one nic is supported
+ - at most 26 emulated disks are supported (more are still available as PV disks)
+ - graphics output (VNC/SDL/Spice) is not supported
+
 
                                    PV-GRUB
                                    =======
-- 
2.25.1
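
The per-argument dm-argv layout described in the patch above can be sketched as a toy Python model (not toolstack code; the 3-digit zero-padding width is an assumption, since the doc only says the argument number is 0-padded):

```python
# Toy model (not toolstack code) of the dm-argv xenstore layout: one key
# per QEMU argument, keyed by a 0-padded argument number.  The 3-digit
# padding width is an assumption; the doc only says "0-padded".
def dm_argv_keys(target_uuid, args):
    base = f"/vm/{target_uuid}/image/dm-argv"
    return {f"{base}/{n:03d}": arg for n, arg in enumerate(args)}

keys = dm_argv_keys("some-uuid", ["-machine", "xenfv", "-m", "1024"])
assert keys["/vm/some-uuid/image/dm-argv/000"] == "-machine"
assert keys["/vm/some-uuid/image/dm-argv/003"] == "1024"
```

Storing each argument under its own key sidesteps the quoting ambiguity of the older space-separated dmargs string when arguments themselves contain spaces.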



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarU2-00008q-0q; Tue, 19 May 2020 01:55:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarU0-00008l-Kv
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:55:52 +0000
X-Inumbo-ID: dd8b8de2-9973-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd8b8de2-9973-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 01:55:50 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id z18so9945223qto.2
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:55:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Y63NCLIHxjj8BP4Xs3EPNGxMJKsqwIK/ULA13gyh9ao=;
 b=qhM4TcnwPR1z6ZqEH726/8IqkiIt05I778AInSF2R9Li+XAu2lZQ6aQc6e7qRus2Dg
 hVWPSFIOvL8OidZwX9cF8XaLrh0QqMYNtbiQ500czIMn/E51Yxn+YpKAPv7htylJ6FTw
 sB6nVaxnEt73V/5np44lGRQe/omkFbB0XHLkEDDJbGW01A6Dp98326iIWRjO1lXGff5j
 1IiM3+IZiqKuHRt3absnwz6rboTVXCj3I8yZkEFFhtB6N4Bn35HBpYDhbhJjhJlx74eF
 Kfyb9a50ZdNICE4r2gB/GndIjtizCqQQ10ovk/uFtNz7FFtoGkCK2BR9DNQEuyT1XBsG
 Rvgg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=Y63NCLIHxjj8BP4Xs3EPNGxMJKsqwIK/ULA13gyh9ao=;
 b=tFWBTw1teYDUOl9Ulu3cmMk+Qt7/NDpsKifaZmLeSEUvulsM4W6hW5n97ytQjT9WKM
 QcNnMtnkWN7zMqvHMRb4AG9d7oaEh0pdufCRROk/Ohb5+fXLzaRQAIPOvj5U3G+ZxkeN
 9XceI8by/62SXd2aOcNGXtJoju9Q+P34paFJPsfLka8k6wLczhEx1HEQWs9uO0l5w1Fv
 17gXoNw44BoFxH6lstHEaU1AuHj0FI/jFivnLi9iCg/vxJKdEiCwV5RF1sZZEFLG+EhC
 BXzq94w+qF1kcgmbK4koARSzYEXEJuc0Cn7r/71bf89SSLGJjXL+Fe7YIQuJO/z8TUmh
 twkw==
X-Gm-Message-State: AOAM531fLmNsn9/VT5Q1fCH1q742GU7lXfbj5fYnFVm73/z4dmgmYtip
 t2hBmE2cmo8f3ESHbjO5cc8IhW2d
X-Google-Smtp-Source: ABdhPJwbA5DIpZgKjkbW5v4VN0dDPLh8ElqMeZzUOatVchhLG9bG4zbM0npXKWlBud0eQGmNYyEONA==
X-Received: by 2002:ac8:36c8:: with SMTP id b8mr18131242qtc.212.1589853349593; 
 Mon, 18 May 2020 18:55:49 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.55.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:55:48 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 01/19] Document ioemu MiniOS stubdomain protocol
Date: Mon, 18 May 2020 21:54:45 -0400
Message-Id: <20200519015503.115236-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Add documentation based on reverse-engineered toolstack-ioemu stubdomain
protocol.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 docs/misc/stubdom.txt | 53 +++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 53 insertions(+)

diff --git a/docs/misc/stubdom.txt b/docs/misc/stubdom.txt
index 882a18cab4..64c77d9b64 100644
--- a/docs/misc/stubdom.txt
+++ b/docs/misc/stubdom.txt
@@ -23,6 +23,59 @@ and https://wiki.xen.org/wiki/Device_Model_Stub_Domains for more
 information on device model stub domains
 
 
+Toolstack to MiniOS ioemu stubdomain protocol
+---------------------------------------------
+
+This section describes the communication protocol between the toolstack and
+qemu-traditional running in a MiniOS stubdomain. The protocol includes
+expectations of both qemu and the stubdomain itself.
+
+Setup (done by toolstack, expected by stubdomain):
+ - Block devices for target domain are connected as PV disks to stubdomain,
+   according to configuration order, starting with xvda
+ - Network devices for target domain are connected as PV nics to stubdomain,
+   according to configuration order, starting with 0
+ - if graphics output is expected, VFB and VKB devices are set for stubdomain
+   (its backend is responsible for exposing them using appropriate protocol
+   like VNC or Spice)
+ - other target domain's devices are not connected at this point to stubdomain
+   (may be hot-plugged later)
+ - QEMU command line (space-separated arguments) is stored in the
+   /vm/<target-uuid>/image/dmargs xenstore path
+ - target domain id is stored in /local/domain/<stubdom-id>/target xenstore path
+?? - bios type is stored in /local/domain/<target-id>/hvmloader/bios
+ - stubdomain's console 0 is connected to qemu log file
+ - stubdomain's console 1 is connected to qemu save file (for saving state)
+ - stubdomain's console 2 is connected to qemu save file (for restoring state)
+ - next consoles are connected according to target guest's serial console configuration
+
+Startup:
+1. a PV stubdomain is started with the ioemu-stubdom.gz kernel and no initrd
+2. the stubdomain initializes the relevant devices
+3. the stubdomain signals readiness by writing "running" to the /local/domain/<stubdom-id>/device-model/<target-id>/state xenstore path
+4. the stubdomain is now considered running
+
+Runtime control (hotplug etc):
+The toolstack can issue commands through xenstore. The sequence is (from the toolstack's POV):
+1. Write parameter to /local/domain/<stubdom-id>/device-model/<target-id>/parameter.
+2. Write command to /local/domain/<stubdom-id>/device-model/<target-id>/command.
+3. Wait for command result in /local/domain/<stubdom-id>/device-model/<target-id>/state (command specific value).
+4. Write "running" back to /local/domain/<stubdom-id>/device-model/<target-id>/state.
+
+Defined commands:
+ - "pci-ins" - PCI hot plug, results:
+   - "pci-inserted" - success
+   - "pci-insert-failed" - failure
+ - "pci-rem" - PCI hot remove, results:
+   - "pci-removed" - success
+   - ??
+ - "save" - save domain state to console 1, results:
+   - "paused" - success
+ - "continue" - resume domain execution, after loading state from console 2 (requires the -loadvm command argument), results:
+   - "running" - success
+
+
+
                                    PV-GRUB
                                    =======
 
-- 
2.25.1
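
The runtime-control sequence documented in the patch above (write parameter, write command, wait for the command-specific result in state, write "running" back) can be walked through with a toy in-memory model (not real xenstore; the class and helper names here are invented):

```python
# Toy in-memory stand-in for xenstore, just to walk the command sequence;
# none of these names exist in the real toolstack.
class FakeXenstore:
    def __init__(self):
        self.store = {}
    def write(self, path, value):
        self.store[path] = value
    def read(self, path):
        return self.store.get(path)

def fake_device_model(xs, base):
    # Device-model side: react to "pci-ins" with the documented result value.
    if xs.read(f"{base}/command") == "pci-ins":
        xs.write(f"{base}/state", "pci-inserted")

xs = FakeXenstore()
base = "/local/domain/5/device-model/7"
xs.write(f"{base}/parameter", "0000:00:04.0")      # 1. write parameter
xs.write(f"{base}/command", "pci-ins")             # 2. write command
fake_device_model(xs, base)                        # (device model acts)
assert xs.read(f"{base}/state") == "pci-inserted"  # 3. observe result
xs.write(f"{base}/state", "running")               # 4. acknowledge
```

A real toolstack would set a xenstore watch on the state node rather than polling; the fake device model here just runs synchronously to keep the sequence visible.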



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarTw-00008f-Op; Tue, 19 May 2020 01:55:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarTv-00008a-Sv
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:55:48 +0000
X-Inumbo-ID: db523602-9973-11ea-b07b-bc764e2007e4
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db523602-9973-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 01:55:46 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id dh1so1897285qvb.13
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:55:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=dndjkB/v6249FZfKgtSAdCSkLVLnrD4llGZxAa+VIaY=;
 b=p8hAfAqVZ2oIYQWHF5VrpKZN6Ck+5KROmUJRR1VoEcqhb75D1dYZMOGnkqeoxV3azE
 AaSq4HeZLfRGrY+xevt59l8DCxvvvx94rF2rmeZszPrFlpcYIIXVwUL8w2608dUeX9JS
 26QZWk4st/XUq20sIG9f8AKtrCqPeSWEzGPgk5+0dY3HDKBcF+Uq8e5NAh0IpqkyFb62
 pbneRKvln9DyUZJL2DxPVvywBW7qRsR8uP1jf/4qT7XUTd5tEP9P8XZz6ZTfJslZ1NuX
 z6C2CJCxy72JYrVRUiyFnGxjzohXKvrkWYVVAQOuqFPbij+HC8goo9btkV8HK72ovflZ
 tplQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=dndjkB/v6249FZfKgtSAdCSkLVLnrD4llGZxAa+VIaY=;
 b=UNIOc9a9ZL/tCvraPRqsWcKkjk6Znq7mc5PEsZMipMtLLrPJbdazCUAcL7Sr6sfjKu
 vAAUSS2/mUZZH/uS6Xgl+/lsy0LPAaDmgEM2u2VbpY+gClFtOzkohOi0nbzVIKWs7rvp
 /u+njrlRsUdy7vSjXYiJ6qLNdi+dCpFqZaB7bBjdL2s4QtUrRO8xs3oM6IrLEtj9imbs
 AwH+eUet/5iy2iVm946IYlJCKtYefysW+0wbsxvGG7wGUxX38bKF9JvxS+GO3RVrIs05
 yiRjv3eixAd/qd+nfX/mlL1+iYnYgHeDqSMk89ig1azJXdQWxyw8/do0bzgj/kyxpkKK
 OBDg==
X-Gm-Message-State: AOAM532PwcjQHbtG5KWTqRhqbcUlakQUFr0TwqeJxLsha27YnXfiPC5h
 HEJOGWYOdguF+NU8Ppl22hf9CGAP
X-Google-Smtp-Source: ABdhPJxQUs855sOfPnW+0bb5CEns5OdP3zHwRuQDR62iy8/QfYO4IjHibmSz++CKvNo704zyM1GwRQ==
X-Received: by 2002:a0c:e488:: with SMTP id n8mr8020985qvl.172.1589853345978; 
 Mon, 18 May 2020 18:55:45 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.55.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:55:45 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 00/19] Add support for qemu-xen runnning in a Linux-based
 stubdomain
Date: Mon, 18 May 2020 21:54:44 -0400
Message-Id: <20200519015503.115236-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

Below is mostly the v6 cover letter with a few additions.  All patches
are Acked except for
  libxl: Use libxl__xs_* in libxl__write_stub_dmargs
  libxl: write qemu arguments into separate xenstore keys

These are refactored into Ian's preferred form (I think).  A single loop
for the xenstore transaction with behaviour differing for MiniOS and
Linux stubdoms.

In coordination with Marek, I'm making a submission of his patches for Linux
stubdomain device-model support.  I made a few of my own additions, but Marek
did the heavy lifting.  Thank you, Marek.

The general idea is to allow freely setting device_model_version and
device_model_stubdomain_override and choosing the right options based on this
choice.  Also, allow specifying the path to the stubdomain kernel/ramdisk, for
greater flexibility.

The first two patches add documentation about the expected
toolstack-stubdomain-qemu interface, for both MiniOS and Linux stubdomains.

The initial version has no QMP support - in the initial patches it is
completely disabled, which means no suspend/restore and no PCI passthrough.

Later patches add support for QMP over a libvchan connection. The actual
connection is made in a separate process. As discussed at Xen Summit 2019, this
allows applying some basic checks and/or filtering (not part of this series),
to limit libxl's exposure to a potentially malicious stubdomain.
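The separate-process idea above - a relay that forwards QMP bytes and can
apply a filtering hook in the middle - can be sketched in a few lines. This is
a hypothetical illustration only, not the actual vchan-socket-proxy code; the
function names and the sample filter are invented for the example:

```python
import socket

def relay(src: socket.socket, dst: socket.socket, check=lambda data: data):
    """Forward bytes from src to dst, passing each chunk through a
    filtering hook before it reaches the other side."""
    while True:
        data = src.recv(4096)
        if not data:            # peer closed the connection
            return
        filtered = check(data)
        if filtered:
            dst.sendall(filtered)

# Example filter: drop any chunk that is not valid UTF-8 text
# (QMP traffic is JSON, so binary garbage is suspicious).
def drop_non_text(data: bytes) -> bytes:
    try:
        data.decode("utf-8")
        return data
    except UnicodeDecodeError:
        return b""
```

In the real proxy one side would be the vchan ring and the other the UNIX
socket libxl connects to; the point is only that the checking code runs in a
process separate from libxl.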

Jason's additions ensure the qmp-proxy (vchan-socket-proxy) processes and
sockets are cleaned up and add some documentation.

The actual stubdomain implementation is here:

   https://github.com/jandryuk/qubes-vmm-xen-stubdom-linux
   (branch initramfs-tools, tag for-upstream-v6)

See the README there for build instructions.  Marek's version requires dracut.
I have hacked up a version usable with initramfs-tools.

The v6 stubdomain version is needed for compatibility with these changes in
the v6 posting:

 - Mini-OS stubdoms use the dmargs xenstore key as a string.  Linux stubdoms
   use dm-argv as a directory of numbered entries.

 - The hardcoded "fd:3" for the restore image is replaced with the
   placeholder string $STUBDOM_RESTORE_INCOMING_ARG.  The stubdom
   initscript needs to replace that with an "fd:$FD" string referring to
   the hooked-up console 2.
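For illustration, the substitution the stubdom initscript has to perform can
be modelled in a few lines. The placeholder name is from this cover letter;
the helper function itself is hypothetical:

```python
def fill_restore_arg(dm_args, fd):
    """Replace the $STUBDOM_RESTORE_INCOMING_ARG placeholder written by
    libxl with the real 'fd:N' string once the console fd is known."""
    return [arg.replace("$STUBDOM_RESTORE_INCOMING_ARG", "fd:%d" % fd)
            for arg in dm_args]

args = ["-incoming", "$STUBDOM_RESTORE_INCOMING_ARG"]
print(fill_restore_arg(args, 3))   # → ['-incoming', 'fd:3']
```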

A few comments/questions about the stubdomain code:

1. There are extra patches for qemu that are necessary to run it in a
stubdomain. While it is desirable to upstream them, I think that can be done
after merging the libxl part. The stubdomain's qemu build will in most cases be
separate anyway, to limit qemu's dependencies (and so the stubdomain size).

2. By default the Linux hvc-xen console frontend is unreliable for data
transfer (qemu state save/restore) - it drops data sent faster than the client
reads it. To fix this, the console device needs to be switched into raw mode
(`stty raw /dev/hvc1`). This is especially tricky for restoring qemu state, as
it would need to be done before opening the device, but stty (obviously) needs
to open the device first. To solve this problem, for now the repository
contains a kernel patch which changes the default for all hvc consoles. Again,
this isn't a practical problem, as the kernel for the stubdomain is built
separately. But it would be nice to have something working with a vanilla
kernel. I see these options:
  - convert it to a kernel cmdline parameter (hvc_console_raw=1 ?)
  - use channels instead of consoles (and on the kernel side change the default
    to "raw" only for channels); while in theory a better design, the libxl
    part would be more complex, as channels can be connected to sockets but not
    files, so libxl would need to read/write exactly when qemu writes/reads the
    data, not before/after as is done now
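What raw mode actually changes can be demonstrated from user space without a
Xen console, using a pseudo-terminal as a stand-in for /dev/hvcX. This is only
an illustrative sketch of what `stty raw` does at the termios level, not code
from the series:

```python
import os
import pty
import termios
import tty

master, slave = pty.openpty()     # stand-in for /dev/hvcX
tty.setraw(slave)                 # the equivalent of `stty raw`

lflag = termios.tcgetattr(slave)[3]
# In raw mode the line discipline no longer line-buffers, echoes, or
# translates characters, so bulk binary data (like a qemu state stream)
# passes through unmangled.
print(bool(lflag & termios.ICANON), bool(lflag & termios.ECHO))  # → False False

os.close(master)
os.close(slave)
```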

Remaining parts for eliminating dom0's instance of qemu:
 - do not force QDISK backend for CDROM
 - multiple consoles support in xenconsoled

Changes in v2:
 - apply review comments by Jason Andryuk
Changes in v3:
 - rework qemu arguments handling (separate xenstore keys, instead of \x1b separator)
 - add QMP over libvchan, instead of console
 - add protocol documentation
 - a lot of minor changes, see individual patches for full changes list
 - split xenconsoled patches into separate series
Changes in v4:
 - extract vchan connection into a separate process
 - rebase on master
 - various fixes
Changes in v5:
 - Marek: apply review comments from Jason Andryuk
 - Jason: Clean up qmp-proxy processes and sockets
Changes in v6:
 - Squash vchan-proxy kill and socket cleanup into "libxl: use vchan for
   QMP access with Linux stubdomain".
 - Use dm-argv as the xenstore directory for the QEMU arguments.
 - Use $STUBDOM_RESTORE_INCOMING_ARG as a placeholder instead of
   hardcoding "fd:3".
 - Comment to re-run autotools.
 - Add Acked-by from Ian Jackson where appropriate.
Changes in v7:
 - Have libxl__write_stub_dmargs handle both minios and linux qemu
   arguments.
 - Add Acked-by from Ian Jackson where appropriate.

Eric Shelton (1):
  libxl: Handle Linux stubdomain specific QEMU options.

Jason Andryuk (4):
  libxl: Use libxl__xs_* in libxl__write_stub_dmargs
  libxl: Refactor kill_device_model to libxl__kill_xs_path
  docs: Add device-model-domid to xenstore-paths
  libxl: Check stubdomain kernel & ramdisk presence

Marek Marczykowski-Górecki (14):
  Document ioemu MiniOS stubdomain protocol
  Document ioemu Linux stubdomain protocol
  libxl: fix qemu-trad cmdline for no sdl/vnc case
  libxl: Allow running qemu-xen in stubdomain
  libxl: write qemu arguments into separate xenstore keys
  xl: add stubdomain related options to xl config parser
  tools/libvchan: notify server when client is connected
  libxl: add save/restore support for qemu-xen in stubdomain
  tools: add missing libxenvchan cflags
  tools: add simple vchan-socket-proxy
  libxl: use vchan for QMP access with Linux stubdomain
  libxl: require qemu in dom0 for multiple stubdomain consoles
  libxl: ignore emulated IDE disks beyond the first 4
  libxl: consider also qemu in stubdomain in libxl__dm_active check

 .gitignore                          |   1 +
 configure                           |  14 +-
 docs/configure                      |  14 +-
 docs/man/xl.cfg.5.pod.in            |  27 +-
 docs/misc/stubdom.txt               | 105 ++++++
 docs/misc/xenstore-paths.pandoc     |   5 +
 stubdom/configure                   |  14 +-
 tools/Rules.mk                      |   2 +-
 tools/config.h.in                   |   3 +
 tools/configure                     |  46 ++-
 tools/configure.ac                  |   9 +
 tools/libvchan/Makefile             |   8 +-
 tools/libvchan/init.c               |   3 +
 tools/libvchan/vchan-socket-proxy.c | 478 ++++++++++++++++++++++++
 tools/libxl/libxl_aoutils.c         |  32 ++
 tools/libxl/libxl_create.c          |  46 ++-
 tools/libxl/libxl_dm.c              | 558 +++++++++++++++++++++-------
 tools/libxl/libxl_domain.c          |  10 +
 tools/libxl/libxl_internal.h        |  22 ++
 tools/libxl/libxl_mem.c             |   6 +-
 tools/libxl/libxl_qmp.c             |  27 +-
 tools/libxl/libxl_types.idl         |   3 +
 tools/xl/xl_parse.c                 |   7 +
 23 files changed, 1230 insertions(+), 210 deletions(-)
 create mode 100644 tools/libvchan/vchan-socket-proxy.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUC-0000AL-H9; Tue, 19 May 2020 01:56:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUA-0000A7-Lo
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:02 +0000
X-Inumbo-ID: dfcb737e-9973-11ea-b07b-bc764e2007e4
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dfcb737e-9973-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 01:55:54 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id n14so12854525qke.8
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:55:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=3tvoe8SF1AjGsABKutuEKPrEzuPGqgtVPQhRYGoZ/qw=;
 b=mtmHxT/nE6SREvNjSTa70xknAuOQYCx3mhJxYqKW85A3On172I5aHclVH4KtBHHOoT
 wOnMHUayFj8Z4zzIVlul9ezbk7gVx1yjmoE5CGuHqgw88MMd1JxZDtMb45Zu97BX/Yo9
 LF1NF7C2VLdrRkFAhqNfujQ+aL/Ynn7HMS4FRNmnvzlxvlBWBX++kmklq6lZ9Ll5EVvI
 bAuqUU8QCnaBqJqH5U0ZynbreaZF+NOG4Z4hg1RNhCBTxau6UubxlS7t7vmv7EIKPXln
 /qOyDtZZ3vt9eCQBnhsrqcDM1bXgGmedKH7dPeplz2jXSevSfL8gwLjGo/ykyUReeYjY
 nW2Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=3tvoe8SF1AjGsABKutuEKPrEzuPGqgtVPQhRYGoZ/qw=;
 b=MbYjBBQlcM6H6vdxpRtth6Aa/+5uy8FxaJCgNiACT2Du6pnuL+HGB7JpDBh5NyWFl/
 RqpYC6pVX9mLTFCqLkfAz+cuXduku+QyycA4PCg4ujI7YYT9vFCj69wN2E6q5g3UsC2H
 y2GklUngD0eh0d1sVoqsHnu2PPy9wHidr+OAT+ezVK6RiR328cC8+wxlSXTnDZJjKwrd
 DG7lOdMQsD+yK/trGDIblhYOuyttUaYUYnHc/QhhQ/lPJ1hPED3W4jm7Dm9Li9wLbrne
 ANrE8Bff8DDBSPhRVQpHkGYKGQIv5bCoZEbKkqB9muBd5O4aEq8cM1FPrPwbpfD3gevv
 YHLw==
X-Gm-Message-State: AOAM530HiauYalJPlcROLdqw7tnE8/nU1farCmo8TZPO1LaXx7AnLJNq
 mGC1pVTpf8XKOMUqIiYzVq9Hdf26
X-Google-Smtp-Source: ABdhPJyYIiU6vwj09ZG7ffxrFoNLNx6KlximjmsF7KaLQBQ/lZA6U6mHSo6feSFCergEFepY74bWYg==
X-Received: by 2002:a37:5146:: with SMTP id f67mr4070554qkb.308.1589853353718; 
 Mon, 18 May 2020 18:55:53 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.55.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:55:53 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 03/19] libxl: fix qemu-trad cmdline for no sdl/vnc case
Date: Mon, 18 May 2020 21:54:47 -0400
Message-Id: <20200519015503.115236-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wei.liu2@citrix.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

When qemu is running in a stubdomain, any attempt to initialize vnc/sdl
there will crash it (on a failed attempt to load a keymap from a file).
If vfb is present, all those cases are skipped. But since
b053f0c4c9e533f3d97837cf897eb920b8355ed3 "libxl: do not start dom0 qemu
for stubdomain when not needed" it is possible to create a stubdomain
without vfb, and contrary to the comment, -vnc none does trigger the VNC
initialization code (it just skips exposing it externally).
Change the implicit SDL-avoidance method to the -nographic option, used
when neither SDL nor VNC is enabled.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
Acked-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Changes in v2:
 - typo in qemu option
Changes in v3:
 - add missing { }
---
 tools/libxl/libxl_dm.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index f4007bbe50..b91e63db6f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -734,14 +734,15 @@ static int libxl__build_device_model_args_old(libxl__gc *gc,
         if (libxl_defbool_val(vnc->findunused)) {
             flexarray_append(dm_args, "-vncunused");
         }
-    } else
+    } else if (!sdl) {
         /*
          * VNC is not enabled by default by qemu-xen-traditional,
-         * however passing -vnc none causes SDL to not be
-         * (unexpectedly) enabled by default. This is overridden by
-         * explicitly passing -sdl below as required.
+         * however skipping -vnc causes SDL to be
+         * (unexpectedly) enabled by default. If that is undesired,
+         * disable graphics entirely.
          */
-        flexarray_append_pair(dm_args, "-vnc", "none");
+        flexarray_append(dm_args, "-nographic");
+    }
 
     if (sdl) {
         flexarray_append(dm_args, "-sdl");
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUF-0000BZ-UM; Tue, 19 May 2020 01:56:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUF-0000BR-Le
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:07 +0000
X-Inumbo-ID: e0e34fc0-9973-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0e34fc0-9973-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 01:55:56 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id a23so4202362qto.1
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:55:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=BWSyY9N8FyMXoF1yPMPDtUrpbXjsGUwF5REIL/gyKE4=;
 b=Q0G9EomtPMD43HAyYImzQGewd2pJhclm7ewCrDYgUizNdkAWPHdd/QDAqRrtwxE1Ud
 5C75344oe2RDuNnL/Rz8h3RzCAR3I7u3RGuVSrI8wgHSfnDiccr9Ygjig8Ckk/uMT2bM
 mayjHbihTwnBIeuVgu0XGn+9PakxiyYJRKKlA4oMrQ8Eyfe6RNyfHgP74dha0tlwkR9W
 N/21firtYqCM0w/UvCi+MLl5T55+shmdMlnc/qx21hQkOAE0ku1UjhHKgfYaENQ0UaY3
 8pwu0hIwdnQx9JTzNVxgi6zy63DrbIkL1IeW2MuPCzqeHmX2AJooCWIlMego33/mktcd
 aZlQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=BWSyY9N8FyMXoF1yPMPDtUrpbXjsGUwF5REIL/gyKE4=;
 b=QpjqAIL8BXrDKVm3boz7DF/oLZS/VCLQN1AbntS7IK7TFBePZFYPdSEFo3JZO0mJkh
 cj+F/IAAiCNbXxl5Un1p9VXiH86l45vmqILR+P4Pz+wAGn6hqTBU8fdip5kZFyywDNKU
 uodiVLfunFiikC1fgpkNKAqO97D824a4j7V9fzIBBLhQYWpKVsuTH1u57VKKFOAXIU+u
 snTHTqnpwQkZQWlLvAfxo/5k3nHlIHeoMUyS3kvlCXH8KtBUE3/zlNqYzhNTs9m+A3AG
 bdJ0sMj0MgAk9K9lOCcMfK1MtUCTh/51y+ieom/ygSWxR45nmeCh+43jVuaz626SP3dT
 b08A==
X-Gm-Message-State: AOAM531aMxG3KIDjjpeScgUIHv+41h+DFh/F3uemrSi2Nk8DabxHK8aI
 36kpkqRBDZPML73t7xdVhiaH+3JR
X-Google-Smtp-Source: ABdhPJzIp16aUQq7pmPkVxIO1Nsj4UJOrCx67rkQXO1uauxrQu18+nrF0NgGcZXQSrV72ttKJ2sGHg==
X-Received: by 2002:aed:2b67:: with SMTP id p94mr19592117qtd.255.1589853355425; 
 Mon, 18 May 2020 18:55:55 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.55.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:55:54 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 04/19] libxl: Allow running qemu-xen in stubdomain
Date: Mon, 18 May 2020 21:54:48 -0400
Message-Id: <20200519015503.115236-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

No longer prohibit using a stubdomain with qemu-xen.
To help distinguish MiniOS and Linux stubdomains, add the inline helper
functions libxl__stubdomain_is_linux() and
libxl__stubdomain_is_linux_running(). Those should be used where the
difference really is about MiniOS/Linux, not qemu-xen/qemu-xen-traditional.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v3:
 - new patch, instead of "libxl: Add "stubdomain_version" to
 domain_build_info"
 - helper functions as suggested by Ian Jackson
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_create.c   |  9 ---------
 tools/libxl/libxl_internal.h | 17 +++++++++++++++++
 2 files changed, 17 insertions(+), 9 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 5a043df15f..433947abab 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -171,15 +171,6 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         }
     }
 
-    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
-        b_info->device_model_version !=
-            LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL &&
-        libxl_defbool_val(b_info->device_model_stubdomain)) {
-        LOG(ERROR,
-            "device model stubdomains require \"qemu-xen-traditional\"");
-        return ERROR_INVAL;
-    }
-
     if (!b_info->max_vcpus)
         b_info->max_vcpus = 1;
     if (!b_info->avail_vcpus.size) {
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index e5effd2ad1..d1ebdec8d2 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2324,6 +2324,23 @@ _hidden int libxl__device_model_version_running(libxl__gc *gc, uint32_t domid);
   /* Return the system-wide default device model */
 _hidden libxl_device_model_version libxl__default_device_model(libxl__gc *gc);
 
+static inline
+bool libxl__stubdomain_is_linux_running(libxl__gc *gc, uint32_t domid)
+{
+    /* same logic as in libxl__stubdomain_is_linux */
+    return libxl__device_model_version_running(gc, domid)
+        == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
+}
+
+static inline
+bool libxl__stubdomain_is_linux(libxl_domain_build_info *b_info)
+{
+    /* right now qemu-traditional implies MiniOS stubdomain and qemu-xen
+     * implies Linux stubdomain */
+    return libxl_defbool_val(b_info->device_model_stubdomain) &&
+        b_info->device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN;
+}
+
 #define DEVICE_MODEL_XS_PATH(gc, dm_domid, domid, fmt, _a...)              \
     libxl__sprintf(gc, "/local/domain/%u/device-model/%u" fmt, dm_domid,   \
                    domid, ##_a)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUL-0000Dl-7T; Tue, 19 May 2020 01:56:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUK-0000DQ-M7
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:12 +0000
X-Inumbo-ID: e253143a-9973-11ea-ae69-bc764e2007e4
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e253143a-9973-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 01:55:58 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id f13so12916804qkh.2
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:55:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=vewEl/AeTk1WX8L90ml6cglMpSrPMmirDuDTRVf5RxA=;
 b=rv2n920WU6liSgItKpw3Vw3M/4Bk/r0seoZPRI76yV/tUaUbGm84WQ+QPvwSwdwqZB
 ujgECBop2AYnbyiyYUwTkinpOPeYTE6ZuMdkryGsju8Ol0wNGUAI9ldlZFftkE4EOuBY
 VZj7uJjdT5XRBeWydO/LAgexJj/NsPw7WofcB9p1p1Xxyz2iEYvpakGn7jpkTgwXGdvc
 1OGamQFERfc3OHVIdoUcx8uRFVpI3tAwnCeYSOovIogkzFeniBpZ2YeC54nhZHojRAvs
 okC8WhT24+xvqaX5XCyovt0Cagmt56/KmG7eDVfJubEI0sVNbUEfiVgKMPvOs7BXhhTt
 jgTg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=vewEl/AeTk1WX8L90ml6cglMpSrPMmirDuDTRVf5RxA=;
 b=F4VBWUvLniKs1GGpad2Pvpw7ipKWJK7f+wpOhcxNX/lOe82Q3OEkjl/I21oTylUGB8
 U6eqSUY8a4IHfsX1vn714UgZ5dBNo2+VAWhsSD1mDNEVTEytea9BkKIPB7girPbtPPN8
 tca5OkQOvuZkHl+Yj0XLsvoj00HoQujzYJgWNz0DRYUaURsWB40Xboxki523lFgRkW/Z
 W1CrSO3vXo1jrnDEAmrhTn+fGULOqNhvFDAgMQtNkjaPdlH5RNBJY1rFGgM51x8Qb1KO
 34lOAxIZtbmD+QuCdozp+dz/ohtwM8CZHgKMEHUbO/BvBNPDqRe5+JAYsytZfFXCWz28
 3hqQ==
X-Gm-Message-State: AOAM531uyK0mgodxUjxUs5blv4NLHtAtOfAjoCAFDCu/1jaXhFK0ZDFW
 hTrjvnkK3W7S9Uepj9MpUFM3u6aD
X-Google-Smtp-Source: ABdhPJzUF4VUF27OWobT5fWJy3ECSiDxUcWxrjg8qtMaFpOPURt85LGpUDFgsFEhCxKIY8xYNvH+kQ==
X-Received: by 2002:a37:2f86:: with SMTP id v128mr8873101qkh.413.1589853357504; 
 Mon, 18 May 2020 18:55:57 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.55.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:55:56 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 05/19] libxl: Handle Linux stubdomain specific QEMU options.
Date: Mon, 18 May 2020 21:54:49 -0400
Message-Id: <20200519015503.115236-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Simon Gaiser <simon@invisiblethingslab.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Eric Shelton <eshelton@pobox.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Eric Shelton <eshelton@pobox.com>

This patch creates an appropriate command line for the QEMU instance
running in a Linux-based stubdomain.

NOTE: a number of items are not currently implemented for Linux-based
stubdomains, such as:
- save/restore
- QMP socket
- graphics output (e.g., VNC)

Signed-off-by: Eric Shelton <eshelton@pobox.com>

Simon:
 * fix disk path
 * fix cdrom path and "format"

Signed-off-by: Simon Gaiser <simon@invisiblethingslab.com>
[drop Qubes-specific parts]
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Allow setting stubdomain_ramdisk independently of stubdomain_kernel.
Add a qemu- prefix for qemu-stubdom-linux-{kernel,rootfs}, since "stubdom"
alone doesn't convey the device model.  Use qemu- since this code is
qemu-specific.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Changes in v2:
 - fix serial specified with serial=[ ... ] syntax
 - error out on multiple consoles (incompatible with stubdom)
 - drop erroneous chunk about cdrom
Changes in v3:
 - change to use libxl__stubdomain_is_linux instead of
   b_info->stubdomain_version
 - drop libxl__stubdomain_version_running, prefer
   libxl__stubdomain_is_linux_running introduced by previous patch
 - drop ifup/ifdown script - stubdomain will handle that with qemu
   events itself
 - slightly simplify -serial argument
 - add support for multiple serial consoles, do not ignore
   b_info.u.serial(_list)
 - add error checking for more than 26 emulated disks ("/dev/xvd%c"
   format string)
Changes in v5:
 - commit message fixup to match patch contents - Marek
 - file names are now qemu-stubdom-linux-{kernel,rootfs} - Jason
 - allow setting ramdisk independently of kernel - Jason
Changes in v6:
 - Add Acked-by: Ian Jackson
 - Fixes for style nits
---
 tools/libxl/libxl_create.c   |  45 ++++++++
 tools/libxl/libxl_dm.c       | 193 ++++++++++++++++++++++++-----------
 tools/libxl/libxl_internal.h |   1 +
 tools/libxl/libxl_mem.c      |   6 +-
 tools/libxl/libxl_types.idl  |   3 +
 5 files changed, 186 insertions(+), 62 deletions(-)

diff --git a/tools/libxl/libxl_create.c b/tools/libxl/libxl_create.c
index 433947abab..8614a2c241 100644
--- a/tools/libxl/libxl_create.c
+++ b/tools/libxl/libxl_create.c
@@ -171,6 +171,40 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
         }
     }
 
+    if (b_info->type == LIBXL_DOMAIN_TYPE_HVM &&
+        libxl_defbool_val(b_info->device_model_stubdomain)) {
+        if (!b_info->stubdomain_kernel) {
+            switch (b_info->device_model_version) {
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+                    b_info->stubdomain_kernel =
+                        libxl__abs_path(NOGC, "ioemu-stubdom.gz", libxl__xenfirmwaredir_path());
+                    break;
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+                    b_info->stubdomain_kernel =
+                        libxl__abs_path(NOGC,
+                                "qemu-stubdom-linux-kernel",
+                                libxl__xenfirmwaredir_path());
+                    break;
+                default:
+                    abort();
+            }
+        }
+        if (!b_info->stubdomain_ramdisk) {
+            switch (b_info->device_model_version) {
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL:
+                    break;
+                case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
+                    b_info->stubdomain_ramdisk =
+                        libxl__abs_path(NOGC,
+                                "qemu-stubdom-linux-rootfs",
+                                libxl__xenfirmwaredir_path());
+                    break;
+                default:
+                    abort();
+            }
+        }
+    }
+
     if (!b_info->max_vcpus)
         b_info->max_vcpus = 1;
     if (!b_info->avail_vcpus.size) {
@@ -206,6 +240,17 @@ int libxl__domain_build_info_setdefault(libxl__gc *gc,
     if (b_info->target_memkb == LIBXL_MEMKB_DEFAULT)
         b_info->target_memkb = b_info->max_memkb;
 
+    if (b_info->stubdomain_memkb == LIBXL_MEMKB_DEFAULT) {
+        if (libxl_defbool_val(b_info->device_model_stubdomain)) {
+            if (libxl__stubdomain_is_linux(b_info))
+                b_info->stubdomain_memkb = LIBXL_LINUX_STUBDOM_MEM * 1024;
+            else
+                b_info->stubdomain_memkb = 28 * 1024; // MiniOS
+        } else {
+            b_info->stubdomain_memkb = 0; // no stubdomain
+        }
+    }
+
     libxl_defbool_setdefault(&b_info->claim_mode, false);
 
     libxl_defbool_setdefault(&b_info->localtime, false);
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index b91e63db6f..dc1717bc12 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1188,6 +1188,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     int i, connection, devid;
     uint64_t ram_size;
     const char *path, *chardev;
+    bool is_stubdom = libxl_defbool_val(b_info->device_model_stubdomain);
 
     dm_args = flexarray_make(gc, 16, 1);
     dm_envs = flexarray_make(gc, 16, 1);
@@ -1197,39 +1198,42 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     flexarray_vappend(dm_args, dm,
                       "-xen-domid",
                       GCSPRINTF("%d", guest_domid), NULL);
+    flexarray_append(dm_args, "-no-shutdown");
 
-    flexarray_append(dm_args, "-chardev");
-    if (state->dm_monitor_fd >= 0) {
-        flexarray_append(dm_args,
-            GCSPRINTF("socket,id=libxl-cmd,fd=%d,server,nowait",
-                      state->dm_monitor_fd));
+    /* There is currently no way to access the QMP socket in the stubdom */
+    if (!is_stubdom) {
+        flexarray_append(dm_args, "-chardev");
+        if (state->dm_monitor_fd >= 0) {
+            flexarray_append(dm_args,
+                GCSPRINTF("socket,id=libxl-cmd,fd=%d,server,nowait",
+                          state->dm_monitor_fd));
 
-        /*
-         * Start QEMU with its "CPU" paused, it will not start any emulation
-         * until the QMP command "cont" is used. This also prevent QEMU from
-         * writing "running" to the "state" xenstore node so we only use this
-         * flag when we have the QMP based startup notification.
-         * */
-        flexarray_append(dm_args, "-S");
-    } else {
-        flexarray_append(dm_args,
-                         GCSPRINTF("socket,id=libxl-cmd,"
-                                   "path=%s,server,nowait",
-                                   libxl__qemu_qmp_path(gc, guest_domid)));
-    }
+            /*
+             * Start QEMU with its "CPU" paused, it will not start any emulation
+             * until the QMP command "cont" is used. This also prevents QEMU from
+             * writing "running" to the "state" xenstore node so we only use this
+             * flag when we have the QMP based startup notification.
+             * */
+            flexarray_append(dm_args, "-S");
+        } else {
+            flexarray_append(dm_args,
+                             GCSPRINTF("socket,id=libxl-cmd,"
+                                       "path=%s,server,nowait",
+                                       libxl__qemu_qmp_path(gc, guest_domid)));
+        }
 
-    flexarray_append(dm_args, "-no-shutdown");
-    flexarray_append(dm_args, "-mon");
-    flexarray_append(dm_args, "chardev=libxl-cmd,mode=control");
+        flexarray_append(dm_args, "-mon");
+        flexarray_append(dm_args, "chardev=libxl-cmd,mode=control");
 
-    flexarray_append(dm_args, "-chardev");
-    flexarray_append(dm_args,
-                     GCSPRINTF("socket,id=libxenstat-cmd,"
-                                    "path=%s/qmp-libxenstat-%d,server,nowait",
-                                    libxl__run_dir_path(), guest_domid));
+        flexarray_append(dm_args, "-chardev");
+        flexarray_append(dm_args,
+                         GCSPRINTF("socket,id=libxenstat-cmd,"
+                                        "path=%s/qmp-libxenstat-%d,server,nowait",
+                                        libxl__run_dir_path(), guest_domid));
 
-    flexarray_append(dm_args, "-mon");
-    flexarray_append(dm_args, "chardev=libxenstat-cmd,mode=control");
+        flexarray_append(dm_args, "-mon");
+        flexarray_append(dm_args, "chardev=libxenstat-cmd,mode=control");
+    }
 
     for (i = 0; i < guest_config->num_channels; i++) {
         connection = guest_config->channels[i].connection;
@@ -1273,7 +1277,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         flexarray_vappend(dm_args, "-name", c_info->name, NULL);
     }
 
-    if (vnc) {
+    if (vnc && !is_stubdom) {
         char *vncarg = NULL;
 
         flexarray_append(dm_args, "-vnc");
@@ -1312,11 +1316,12 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         }
 
         flexarray_append(dm_args, vncarg);
-    } else
+    } else if (!is_stubdom) {
         /*
          * Ensure that by default no vnc server is created.
          */
         flexarray_append_pair(dm_args, "-vnc", "none");
+    }
 
     /*
      * Ensure that by default no display backend is created. Further
@@ -1324,7 +1329,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
      */
     flexarray_append_pair(dm_args, "-display", "none");
 
-    if (sdl) {
+    if (sdl && !is_stubdom) {
         flexarray_append(dm_args, "-sdl");
         if (sdl->display)
             flexarray_append_pair(dm_envs, "DISPLAY", sdl->display);
@@ -1366,18 +1371,34 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             {
                 LOGD(ERROR, guest_domid, "Both serial and serial_list set");
                 return ERROR_INVAL;
-            }
-            if (b_info->u.hvm.serial) {
-                flexarray_vappend(dm_args,
-                                  "-serial", b_info->u.hvm.serial, NULL);
-            } else if (b_info->u.hvm.serial_list) {
-                char **p;
-                for (p = b_info->u.hvm.serial_list;
-                     *p;
-                     p++) {
-                    flexarray_vappend(dm_args,
-                                      "-serial",
-                                      *p, NULL);
+            } else {
+                if (b_info->u.hvm.serial) {
+                    if (is_stubdom) {
+                        /* see spawn_stub_launch_dm() for connecting STUBDOM_CONSOLE_SERIAL */
+                        flexarray_vappend(dm_args,
+                                          "-serial",
+                                          GCSPRINTF("/dev/hvc%d", STUBDOM_CONSOLE_SERIAL),
+                                          NULL);
+                    } else {
+                        flexarray_vappend(dm_args,
+                                          "-serial", b_info->u.hvm.serial, NULL);
+                    }
+                } else if (b_info->u.hvm.serial_list) {
+                    char **p;
+                    /* see spawn_stub_launch_dm() for connecting STUBDOM_CONSOLE_SERIAL */
+                    for (p = b_info->u.hvm.serial_list, i = 0;
+                         *p;
+                         p++, i++) {
+                        if (is_stubdom)
+                            flexarray_vappend(dm_args,
+                                              "-serial",
+                                              GCSPRINTF("/dev/hvc%d", STUBDOM_CONSOLE_SERIAL + i),
+                                              NULL);
+                        else
+                            flexarray_vappend(dm_args,
+                                              "-serial",
+                                              *p, NULL);
+                    }
                 }
             }
         }
@@ -1386,7 +1407,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             flexarray_append(dm_args, "-nographic");
         }
 
-        if (libxl_defbool_val(b_info->u.hvm.spice.enable)) {
+        if (libxl_defbool_val(b_info->u.hvm.spice.enable) && !is_stubdom) {
             const libxl_spice_info *spice = &b_info->u.hvm.spice;
             char *spiceoptions = dm_spice_options(gc, spice);
             if (!spiceoptions)
@@ -1813,7 +1834,9 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
              * If qemu isn't doing the interpreting, the parameter is
              * always raw
              */
-            if (disks[i].backend == LIBXL_DISK_BACKEND_QDISK)
+            if (libxl_defbool_val(b_info->device_model_stubdomain))
+                format = "host_device";
+            else if (disks[i].backend == LIBXL_DISK_BACKEND_QDISK)
                 format = libxl__qemu_disk_format_string(disks[i].format);
             else
                 format = libxl__qemu_disk_format_string(LIBXL_DISK_FORMAT_RAW);
@@ -1824,6 +1847,16 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                          disks[i].vdev);
                     continue;
                 }
+            } else if (libxl_defbool_val(b_info->device_model_stubdomain)) {
+                if (disk > 'z' - 'a') {
+                    LOGD(WARN, guest_domid,
+                            "Emulation of only the first %d disks is supported with qemu-xen in stubdomain.\n"
+                            "Disk %d will be available via PV drivers but not as an emulated disk.",
+                            'z' - 'a',
+                            disk);
+                    continue;
+                }
+                target_path = GCSPRINTF("/dev/xvd%c", 'a' + disk);
             } else {
                 if (format == NULL) {
                     LOGD(WARN, guest_domid,
@@ -1964,7 +1997,7 @@ static int libxl__build_device_model_args(libxl__gc *gc,
                                         char ***args, char ***envs,
                                         const libxl__domain_build_state *state,
                                         int *dm_state_fd)
-/* dm_state_fd may be NULL iff caller knows we are using old stubdom
+/* dm_state_fd may be NULL iff caller knows we are using stubdom
  * and therefore will be passing a filename rather than a fd. */
 {
     switch (guest_config->b_info.device_model_version) {
@@ -1974,8 +2007,10 @@ static int libxl__build_device_model_args(libxl__gc *gc,
                                                   args, envs,
                                                   state);
     case LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN:
-        assert(dm_state_fd != NULL);
-        assert(*dm_state_fd < 0);
+        if (!libxl_defbool_val(guest_config->b_info.device_model_stubdomain)) {
+            assert(dm_state_fd != NULL);
+            assert(*dm_state_fd < 0);
+        }
         return libxl__build_device_model_args_new(gc, dm,
                                                   guest_domid, guest_config,
                                                   args, envs,
@@ -2080,6 +2115,16 @@ retry_transaction:
     return 0;
 }
 
+static int libxl__store_libxl_entry(libxl__gc *gc, uint32_t domid,
+                                    const char *name, const char *value)
+{
+    char *path = NULL;
+
+    path = libxl__xs_libxl_path(gc, domid);
+    path = libxl__sprintf(gc, "%s/%s", path, name);
+    return libxl__xs_printf(gc, XBT_NULL, path, "%s", value);
+}
+
 static void dmss_init(libxl__dm_spawn_state *dmss)
 {
     libxl__ev_qmp_init(&dmss->qmp);
@@ -2138,10 +2183,14 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
     dmss_init(&sdss->pvqemu);
     libxl__xswait_init(&sdss->xswait);
 
-    if (guest_config->b_info.device_model_version !=
-        LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL) {
-        ret = ERROR_INVAL;
-        goto out;
+    assert(libxl_defbool_val(guest_config->b_info.device_model_stubdomain));
+
+    if (libxl__stubdomain_is_linux(&guest_config->b_info)) {
+        if (d_state->saved_state) {
+            LOG(ERROR, "Save/Restore not supported yet with Linux Stubdom.");
+            ret = -1;
+            goto out;
+        }
     }
 
     sdss->pvqemu.guest_domid = INVALID_DOMID;
@@ -2163,8 +2212,8 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
 
     dm_config->b_info.shadow_memkb = 0;
     dm_config->b_info.max_vcpus = 1;
-    dm_config->b_info.max_memkb = 28 * 1024 +
-        guest_config->b_info.video_memkb;
+    dm_config->b_info.max_memkb = guest_config->b_info.stubdomain_memkb;
+    dm_config->b_info.max_memkb += guest_config->b_info.video_memkb;
     dm_config->b_info.target_memkb = dm_config->b_info.max_memkb;
 
     dm_config->b_info.max_grant_frames = guest_config->b_info.max_grant_frames;
@@ -2203,10 +2252,8 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
         dm_config->num_vkbs = 1;
     }
 
-    stubdom_state->pv_kernel.path
-        = libxl__abs_path(gc, "ioemu-stubdom.gz", libxl__xenfirmwaredir_path());
-    stubdom_state->pv_cmdline = GCSPRINTF(" -d %d", guest_domid);
-    stubdom_state->pv_ramdisk.path = "";
+    stubdom_state->pv_kernel.path = guest_config->b_info.stubdomain_kernel;
+    stubdom_state->pv_ramdisk.path = guest_config->b_info.stubdomain_ramdisk;
 
     /* fixme: this function can leak the stubdom if it fails */
     ret = libxl__domain_make(gc, dm_config, stubdom_state,
@@ -2226,6 +2273,8 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
         goto out;
     }
 
+    libxl__store_libxl_entry(gc, guest_domid, "dm-version",
+        libxl_device_model_version_to_string(dm_config->b_info.device_model_version));
     libxl__write_stub_dmargs(gc, dm_domid, guest_domid, args);
     libxl__xs_printf(gc, XBT_NULL,
                      GCSPRINTF("%s/image/device-model-domid",
@@ -2235,6 +2284,15 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
                      GCSPRINTF("%s/target",
                                libxl__xs_get_dompath(gc, dm_domid)),
                      "%d", guest_domid);
+    if (guest_config->b_info.device_model_version == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
+        /* qemu-xen is used as the dm in the stubdomain, so we set the bios
+         * according to this */
+        libxl__xs_printf(gc, XBT_NULL,
+                        libxl__sprintf(gc, "%s/hvmloader/bios",
+                                       libxl__xs_get_dompath(gc, guest_domid)),
+                        "%s",
+                        libxl_bios_type_to_string(guest_config->b_info.u.hvm.bios));
+    }
     ret = xc_domain_set_target(ctx->xch, dm_domid, guest_domid);
     if (ret<0) {
         LOGED(ERROR, guest_domid, "setting target domain %d -> %d",
@@ -2314,8 +2372,13 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
         if (ret) goto out;
     }
 
-    if (guest_config->b_info.u.hvm.serial)
+    if (guest_config->b_info.u.hvm.serial) {
         num_console++;
+    } else if (guest_config->b_info.u.hvm.serial_list) {
+        char **serial = guest_config->b_info.u.hvm.serial_list;
+        while (*(serial++))
+            num_console++;
+    }
 
     console = libxl__calloc(gc, num_console, sizeof(libxl__device_console));
 
@@ -2349,8 +2412,18 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
                     console[i].output =
                         GCSPRINTF("pipe:%s", d_state->saved_state);
                 break;
+            case STUBDOM_CONSOLE_SERIAL:
+                if (guest_config->b_info.u.hvm.serial) {
+                    console[i].output = guest_config->b_info.u.hvm.serial;
+                    break;
+                }
+                /* fall-through */
             default:
-                console[i].output = "pty";
+                /* serial_list must be set, as otherwise num_console would be
+                 * smaller and consoles 0-2 are handled above. */
+                assert(guest_config->b_info.u.hvm.serial_list);
+                console[i].output = guest_config->b_info.u.hvm.serial_list[
+                    i-STUBDOM_CONSOLE_SERIAL];
                 break;
         }
     }
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index d1ebdec8d2..f2f76439ec 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -119,6 +119,7 @@
 #define STUBDOM_CONSOLE_RESTORE 2
 #define STUBDOM_CONSOLE_SERIAL 3
 #define STUBDOM_SPECIAL_CONSOLES 3
+#define LIBXL_LINUX_STUBDOM_MEM 128
 #define TAP_DEVICE_SUFFIX "-emu"
 #define DOMID_XS_PATH "domid"
 #define PVSHIM_BASENAME "xen-shim"
diff --git a/tools/libxl/libxl_mem.c b/tools/libxl/libxl_mem.c
index bc7b95aa74..e52a9624ea 100644
--- a/tools/libxl/libxl_mem.c
+++ b/tools/libxl/libxl_mem.c
@@ -459,8 +459,10 @@ int libxl__domain_need_memory_calculate(libxl__gc *gc,
     case LIBXL_DOMAIN_TYPE_PVH:
     case LIBXL_DOMAIN_TYPE_HVM:
         *need_memkb += LIBXL_HVM_EXTRA_MEMORY;
-        if (libxl_defbool_val(b_info->device_model_stubdomain))
-            *need_memkb += 32 * 1024;
+        if (libxl_defbool_val(b_info->device_model_stubdomain)) {
+            *need_memkb += b_info->stubdomain_memkb;
+            *need_memkb += b_info->video_memkb;
+        }
         break;
     case LIBXL_DOMAIN_TYPE_PV:
         *need_memkb += LIBXL_PV_EXTRA_MEMORY;
diff --git a/tools/libxl/libxl_types.idl b/tools/libxl/libxl_types.idl
index f7c473be74..9d3f05f399 100644
--- a/tools/libxl/libxl_types.idl
+++ b/tools/libxl/libxl_types.idl
@@ -518,6 +518,9 @@ libxl_domain_build_info = Struct("domain_build_info",[
     
     ("device_model_version", libxl_device_model_version),
     ("device_model_stubdomain", libxl_defbool),
+    ("stubdomain_memkb",   MemKB),
+    ("stubdomain_kernel",  string),
+    ("stubdomain_ramdisk", string),
     # if you set device_model you must set device_model_version too
     ("device_model",     string),
     ("device_model_ssidref", uint32),
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUQ-0000Gn-Lb; Tue, 19 May 2020 01:56:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUP-0000GG-MQ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:17 +0000
X-Inumbo-ID: e3264774-9973-11ea-ae69-bc764e2007e4
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3264774-9973-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 01:56:00 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id i14so12816912qka.10
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:55:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=EhWGeBO5fU7aDrFb+y6kocNrTThgXEXh85XiUI3Yqtk=;
 b=Q35bbi2GYRG7GMFM+chG/uml8bUSKpczo3cdkK6Ozevr/n36ollyZpZ9ItTpBOvtyO
 6lcquzv2Axb0fOjT3wDgSrOarDUl0JJTZqk2XaPobUy6AYMxiuuHBqM1SVQaslg/O7Hv
 HZsyLP3BvB6WXfxXYPlb+FCDx5FZA6gn4mAgHMn+X4/zQdHsVPbTn8JfhfSd44XWSWuD
 1HE1BnBjVu5shGWrPbbFOyPpBs7EBwm1FImruCoomq4wNgCQyehNjmec8dOvZlJM4ur/
 6MbLPPTYkxMvR/lNrXdaWkEAd1MbgLMXBIqHeTxRPgYsZaikdf+c7xgsCfB2qBnl3ulm
 id0Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=EhWGeBO5fU7aDrFb+y6kocNrTThgXEXh85XiUI3Yqtk=;
 b=YaQFlG6HeNF/Kv3JGG4Hx+PnIj7n+8R716PXalSEapQkZCLFRk2ror3ZnOTbkQWIMc
 BgMwoIPaRK+ugpyo2SmeNgI4tno6ficbEitQRrYVcjMGwQhWtkJblOOE637Be1KerpXH
 3hSpMuud2WgbkQKY7cNIrPFCFQ/d/pt/S8QkeyD72JFDZtIf2IJl+SzENScxDLpxoTFz
 jkcvSS7JtmXONudUDcN1MpnxlntJZinQFSpfAuFnKdwSnAOBPkT5IwuOdQS7IRc4JOX5
 pl1biYpizvhF9mveQ+JY4BjW2/b6ONUUr0JjLO3NHC3+mDJcH4VNk6b5sJoibekiuVdj
 N5GA==
X-Gm-Message-State: AOAM531K/YDYKB9Og31qefdgWVPjue3+TOsKdNQ+nFmJw3dQhy0l9QVN
 8CiXXK+fXZBpB4HPSQV8c1J584P8
X-Google-Smtp-Source: ABdhPJwUZdL8WFdvXxXr0MTV4ZtRNTzB4gw/6FXjEUyXqs+QDg7lH5C8GZD7oO58aw6rQCbbxFFMjw==
X-Received: by 2002:a37:8d07:: with SMTP id p7mr19191071qkd.500.1589853359370; 
 Mon, 18 May 2020 18:55:59 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.55.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:55:58 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 06/19] libxl: Use libxl__xs_* in libxl__write_stub_dmargs
Date: Mon, 18 May 2020 21:54:50 -0400
Message-Id: <20200519015503.115236-7-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Re-work libxl__write_stub_dmargs to use libxl__xs_* functions in a loop.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---
New in v7
---
 tools/libxl/libxl_dm.c | 53 ++++++++++++++++++++++++++++++------------
 1 file changed, 38 insertions(+), 15 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index dc1717bc12..8e57cd8c1f 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2070,21 +2070,18 @@ static int libxl__write_stub_dmargs(libxl__gc *gc,
                                     int dm_domid, int guest_domid,
                                     char **args)
 {
-    libxl_ctx *ctx = libxl__gc_owner(gc);
     int i;
-    char *vm_path;
-    char *dmargs, *path;
+    char *dmargs;
     int dmargs_size;
     struct xs_permissions roperm[2];
-    xs_transaction_t t;
+    xs_transaction_t t = XBT_NULL;
+    int rc;
 
     roperm[0].id = 0;
     roperm[0].perms = XS_PERM_NONE;
     roperm[1].id = dm_domid;
     roperm[1].perms = XS_PERM_READ;
 
-    vm_path = libxl__xs_read(gc, XBT_NULL, GCSPRINTF("/local/domain/%d/vm", guest_domid));
-
     i = 0;
     dmargs_size = 0;
     while (args[i] != NULL) {
@@ -2102,17 +2099,43 @@ static int libxl__write_stub_dmargs(libxl__gc *gc,
         }
         i++;
     }
-    path = GCSPRINTF("%s/image/dmargs", vm_path);
 
-retry_transaction:
-    t = xs_transaction_start(ctx->xsh);
-    xs_write(ctx->xsh, t, path, dmargs, strlen(dmargs));
-    xs_set_permissions(ctx->xsh, t, path, roperm, ARRAY_SIZE(roperm));
-    xs_set_permissions(ctx->xsh, t, GCSPRINTF("%s/rtc/timeoffset", vm_path), roperm, ARRAY_SIZE(roperm));
-    if (!xs_transaction_end(ctx->xsh, t, 0))
-        if (errno == EAGAIN)
-            goto retry_transaction;
+    for (;;) {
+        const char *vm_path;
+        char *path;
+
+        rc = libxl__xs_transaction_start(gc, &t);
+        if (rc) goto out;
+
+        rc = libxl__xs_read_mandatory(gc, t,
+                                      GCSPRINTF("/local/domain/%d/vm",
+                                                guest_domid),
+                                      &vm_path);
+        if (rc) goto out;
+
+        path = GCSPRINTF("%s/image/dmargs", vm_path);
+
+        rc = libxl__xs_mknod(gc, t, path, roperm, ARRAY_SIZE(roperm));
+        if (rc) goto out;
+
+        rc = libxl__xs_write_checked(gc, t, path, dmargs);
+        if (rc) goto out;
+
+        rc = libxl__xs_mknod(gc, t, GCSPRINTF("%s/rtc/timeoffset", vm_path),
+                             roperm, ARRAY_SIZE(roperm));
+        if (rc) goto out;
+
+        rc = libxl__xs_transaction_commit(gc, &t);
+        if (!rc) break;
+        if (rc<0) goto out;
+    }
+
     return 0;
+
+ out:
+    libxl__xs_transaction_abort(gc, &t);
+
+    return rc;
 }
 
 static int libxl__store_libxl_entry(libxl__gc *gc, uint32_t domid,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUV-0000Jr-VI; Tue, 19 May 2020 01:56:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUU-0000J6-M4
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:22 +0000
X-Inumbo-ID: e5779618-9973-11ea-b9cf-bc764e2007e4
Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5779618-9973-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 01:56:03 +0000 (UTC)
Received: by mail-qv1-xf43.google.com with SMTP id p4so5758081qvr.10
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=e8TO7h61vMYwatja+1Xk6Uhtwz9m20nO7NIF4kE+skE=;
 b=BbRKeWSCviRUxAOA+eUdGqw6Y1cC5Wj/I2lE3fY6GiPOHoRcmI9EK6t1z21Rrn1nJu
 LHmg+mAbJ4GIVD/vBzQeXvHxi5lt6lUEcyj/deZ+h9nL2VeUZrXC+15t18IoPSjmgtcG
 NSVmzIR/ONrICz7UVWB7iodUK/M4XToSxwUX8GTiLtjOKRGvy5PUke3nNB4Lx2zWco6h
 +8KithsI9L4Kof0lW5iOrE0Hlu1vtnKyyn94fHZ9GMRcJHdqsMhXcliHgj0CUVT1IpHc
 ++bop7pplnhGfZ01OKW1nqHbhj8UKYIABX6ZHagwTPTHkSB7BdwT+OqEbIr4TGYd7Ooz
 pq0w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=e8TO7h61vMYwatja+1Xk6Uhtwz9m20nO7NIF4kE+skE=;
 b=K6M2YjjVeeZfqVVrwbXmnyk7nZL1PvedCbLr3mZ/437/JnbqH0yV11cV+FOc6QhwX7
 oHMwgCNtwl6Xl5T4/13h1tOZErcQJqxuulTkuT7s+5SfIF1daqY3eYDlZuSzkKsEGyR4
 iGLbUg93m2HsV8eWte/jgnJzKNmSbhz6vQpZ1TCNH7NWnUjxXyc9mNjp/2TB7Nm6rjkB
 QTlmxmc2HiHy4tX79iK0H/4ALoApQdkoOHKfaY1psCUz8numG3PyuxYMDSIW2y/3A7bL
 pdFX1iNuWiuodp9TrXtfl8Zn2vpB5OZkZa8t1DBItkC4HrdhnpfRSBEFM18WAcTbIQHr
 XJnA==
X-Gm-Message-State: AOAM530qgC4LVt0c1nE6wI9mbHzdiGqOp2VyAusuS1OI1NlwBOaafVhW
 hSFr7OfPYNtaGvpXDfl1CfPY9Aj3
X-Google-Smtp-Source: ABdhPJzbpjWOrcqmflQcR6E22gVQ4L4bxyt56htHAEFEX8VD8vIJ5q+gINX5KbokM7bGs0s+AcO3yA==
X-Received: by 2002:ad4:4b61:: with SMTP id m1mr19597825qvx.235.1589853363185; 
 Mon, 18 May 2020 18:56:03 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:02 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 07/19] libxl: write qemu arguments into separate xenstore
 keys
Date: Mon, 18 May 2020 21:54:51 -0400
Message-Id: <20200519015503.115236-8-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

This allows using arguments with spaces, like -append, without
nominating any special "separator" character.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Write arguments into a dm-argv directory instead of overloading mini-os's
dmargs string.

Make libxl__write_stub_dmargs vary behaviour based on the
is_linux_stubdom flag.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
Changes in v3:
 - previous version of this patch "libxl: use \x1b to separate qemu
   arguments for linux stubdomain" used a specific non-printable
   separator, but it was rejected as xenstore doesn't cope well with
   non-printable chars
Changes in v6:
 - Re-work to use libxl__xs_ functions in a loop.
 - Drop rtc/timeoffset
Changes in v7:
 - Use a single function with an is_linux_stubdom flag.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libxl/libxl_dm.c | 77 +++++++++++++++++++++++++++---------------
 1 file changed, 49 insertions(+), 28 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 8e57cd8c1f..23b13f84d2 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2068,13 +2068,11 @@ static int libxl__vfb_and_vkb_from_hvm_guest_config(libxl__gc *gc,
 
 static int libxl__write_stub_dmargs(libxl__gc *gc,
                                     int dm_domid, int guest_domid,
-                                    char **args)
+                                    char **args, bool is_linux_stubdom)
 {
-    int i;
-    char *dmargs;
-    int dmargs_size;
     struct xs_permissions roperm[2];
     xs_transaction_t t = XBT_NULL;
+    char *dmargs;
     int rc;
 
     roperm[0].id = 0;
@@ -2082,22 +2080,27 @@ static int libxl__write_stub_dmargs(libxl__gc *gc,
     roperm[1].id = dm_domid;
     roperm[1].perms = XS_PERM_READ;
 
-    i = 0;
-    dmargs_size = 0;
-    while (args[i] != NULL) {
-        dmargs_size = dmargs_size + strlen(args[i]) + 1;
-        i++;
-    }
-    dmargs_size++;
-    dmargs = (char *) libxl__malloc(gc, dmargs_size);
-    i = 1;
-    dmargs[0] = '\0';
-    while (args[i] != NULL) {
-        if (strcmp(args[i], "-sdl") && strcmp(args[i], "-M") && strcmp(args[i], "xenfv")) {
-            strcat(dmargs, " ");
-            strcat(dmargs, args[i]);
+    if (!is_linux_stubdom) {
+        int dmargs_size = 0;
+        int i = 0;
+
+        while (args[i] != NULL) {
+            dmargs_size = dmargs_size + strlen(args[i]) + 1;
+            i++;
+        }
+
+        dmargs_size++;
+        dmargs = (char *) libxl__malloc(gc, dmargs_size);
+
+        i = 1;
+        dmargs[0] = '\0';
+        while (args[i] != NULL) {
+            if (strcmp(args[i], "-sdl") && strcmp(args[i], "-M") && strcmp(args[i], "xenfv")) {
+                strcat(dmargs, " ");
+                strcat(dmargs, args[i]);
+            }
+            i++;
         }
-        i++;
     }
 
     for (;;) {
@@ -2113,17 +2116,33 @@ static int libxl__write_stub_dmargs(libxl__gc *gc,
                                       &vm_path);
         if (rc) goto out;
 
-        path = GCSPRINTF("%s/image/dmargs", vm_path);
+        if (is_linux_stubdom) {
+            int i;
 
-        rc = libxl__xs_mknod(gc, t, path, roperm, ARRAY_SIZE(roperm));
-        if (rc) goto out;
+            path = GCSPRINTF("%s/image/dm-argv", vm_path);
 
-        rc = libxl__xs_write_checked(gc, t, path, dmargs);
-        if (rc) goto out;
+            rc = libxl__xs_mknod(gc, t, path, roperm, ARRAY_SIZE(roperm));
+            if (rc) goto out;
 
-        rc = libxl__xs_mknod(gc, t, GCSPRINTF("%s/rtc/timeoffset", vm_path),
-                             roperm, ARRAY_SIZE(roperm));
-        if (rc) goto out;
+            for (i=1; args[i] != NULL; i++) {
+                rc = libxl__xs_write_checked(gc, t,
+                                             GCSPRINTF("%s/%03d", path, i),
+                                             args[i]);
+                if (rc) goto out;
+            }
+        } else {
+            path = GCSPRINTF("%s/image/dmargs", vm_path);
+
+            rc = libxl__xs_mknod(gc, t, path, roperm, ARRAY_SIZE(roperm));
+            if (rc) goto out;
+
+            rc = libxl__xs_write_checked(gc, t, path, dmargs);
+            if (rc) goto out;
+
+            rc = libxl__xs_mknod(gc, t, GCSPRINTF("%s/rtc/timeoffset", vm_path),
+                                 roperm, ARRAY_SIZE(roperm));
+            if (rc) goto out;
+        }
 
         rc = libxl__xs_transaction_commit(gc, &t);
         if (!rc) break;
@@ -2298,7 +2317,9 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
 
     libxl__store_libxl_entry(gc, guest_domid, "dm-version",
         libxl_device_model_version_to_string(dm_config->b_info.device_model_version));
-    libxl__write_stub_dmargs(gc, dm_domid, guest_domid, args);
+
+    libxl__write_stub_dmargs(gc, dm_domid, guest_domid, args,
+                             libxl__stubdomain_is_linux(&guest_config->b_info));
     libxl__xs_printf(gc, XBT_NULL,
                      GCSPRINTF("%s/image/device-model-domid",
                                libxl__xs_get_dompath(gc, guest_domid)),
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUa-0000MB-8p; Tue, 19 May 2020 01:56:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUZ-0000Lm-LB
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:27 +0000
X-Inumbo-ID: e6b5a40c-9973-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6b5a40c-9973-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 01:56:05 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id i68so9932464qtb.5
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=ZT6l/MiPRcZkFyWjFyjLysM4bTyEFRemlpKKfgP53K8=;
 b=elFKmU6j1ZIgLTmzjkhpJNbOoioOo6j4hHBYLYVKWNLIKoUfKkWwNOyzBwTNZzMsE3
 0uQ2ZTwcBrb/5S6D6DriuLMgGxkw1Lg16ggD5ixEVJbM8EST+KqV1t5SKs/kn6ZW+6yk
 HK0OjfDxefp9oQjhqEq96e7SfklGRsuk2fGHjiQW4+gphweYkIWtXYuQYyUb7hR/sDZf
 EfRWr/gRoAe7ivWx00ulFrs011sgQcPNRPmESUeMp39/avS65UUGKLXCt5RfXQhY7FSv
 EcMW68Q0PeDvXsXqKBRkLO/8QOkqSliQo8RKLIyAt04Bmjg1VJotdjzbX4R8pQyCHpHN
 gQwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=ZT6l/MiPRcZkFyWjFyjLysM4bTyEFRemlpKKfgP53K8=;
 b=K436gOrqTF2qbHgFB7igmSrM2arDYpSkcwbMNotVPP9Sq5N8gRD3uX2/2/V+03DXEn
 oPeZjsIOYGoWfvkyq7/DWwvPP7qQPbI4AWQXqQoc+YI3Lf2QwGuC+Slr93Mv69mPQwnc
 GBvL2F6p6o/eDBXyicoJDUgEfNeC/zS/SjUn0NRIx664aGGE0P2LEtTHBRTvYm4ESZeU
 9fWVEynSRdE770MnAqa14+f5W3IQb4bAee14nu5EvCHO47hybA+7Lu1COXdjUysBtmCA
 w+lf9tkm8lJWV90oxWQ9mF4enWq6QWWVpHlUxXGsvrl8l7cH7wbq9LiLM6jM3XVMM0y/
 4mjQ==
X-Gm-Message-State: AOAM532/q/TUiUJPT9GGUDEFBy6Tqw0GUcXbJCMgkvEfhi50V9EIiCvH
 TsqpYwNZ0imWZTtwyU8pDq3MvP0j
X-Google-Smtp-Source: ABdhPJw7/9pM01IxlvHjRpZwT4cZjgA1vimz24Bp/lBjrL4XAKBw5ShvSMBRufnCcwYHiw1phAQqsw==
X-Received: by 2002:ac8:1aab:: with SMTP id x40mr19875411qtj.358.1589853365283; 
 Mon, 18 May 2020 18:56:05 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:04 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 08/19] xl: add stubdomain related options to xl config
 parser
Date: Mon, 18 May 2020 21:54:52 -0400
Message-Id: <20200519015503.115236-9-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
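
As an illustration, the options added by this patch could be combined in a
guest config like the following sketch (the paths are made-up examples, not
shipped defaults):

```
# Illustrative xl guest config fragment (paths are hypothetical)
device_model_version = "qemu-xen"
device_model_stubdomain_override = 1
stubdomain_kernel = "/usr/lib/xen/boot/stubdom-linux-kernel"
stubdomain_ramdisk = "/usr/lib/xen/boot/stubdom-linux-rootfs"
stubdomain_memory = 128  # in MB; matches the documented default
```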

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 docs/man/xl.cfg.5.pod.in | 27 +++++++++++++++++++++++----
 tools/xl/xl_parse.c      |  7 +++++++
 2 files changed, 30 insertions(+), 4 deletions(-)

diff --git a/docs/man/xl.cfg.5.pod.in b/docs/man/xl.cfg.5.pod.in
index 0e9e58a41a..c9bc181a95 100644
--- a/docs/man/xl.cfg.5.pod.in
+++ b/docs/man/xl.cfg.5.pod.in
@@ -2733,10 +2733,29 @@ model which they were installed with.
 
 =item B<device_model_override="PATH">
 
-Override the path to the binary to be used as the device-model. The
-binary provided here MUST be consistent with the
-B<device_model_version> which you have specified. You should not
-normally need to specify this option.
+Override the path to the binary to be used as the device-model running in
+toolstack domain. The binary provided here MUST be consistent with the
+B<device_model_version> which you have specified. You should not normally need
+to specify this option.
+
+=item B<stubdomain_kernel="PATH">
+
+Override the path to the kernel image used as device-model stubdomain.
+The binary provided here MUST be consistent with the
+B<device_model_version> which you have specified.
+In case of B<qemu-xen-traditional> it is expected to be MiniOS-based stubdomain
+image, in case of B<qemu-xen> it is expected to be Linux-based stubdomain
+kernel.
+
+=item B<stubdomain_ramdisk="PATH">
+
+Override the path to the ramdisk image used as device-model stubdomain.
+The binary provided here is to be used by a kernel pointed by B<stubdomain_kernel>.
+It is known to be used only by Linux-based stubdomain kernel.
+
+=item B<stubdomain_memory=MBYTES>
+
+Start the stubdomain with MBYTES megabytes of RAM. Default is 128.
 
 =item B<device_model_stubdomain_override=BOOLEAN>
 
diff --git a/tools/xl/xl_parse.c b/tools/xl/xl_parse.c
index 4450d59f16..61b4ef7b7e 100644
--- a/tools/xl/xl_parse.c
+++ b/tools/xl/xl_parse.c
@@ -2525,6 +2525,13 @@ skip_usbdev:
     xlu_cfg_replace_string(config, "device_model_user",
                            &b_info->device_model_user, 0);
 
+    xlu_cfg_replace_string (config, "stubdomain_kernel",
+                            &b_info->stubdomain_kernel, 0);
+    xlu_cfg_replace_string (config, "stubdomain_ramdisk",
+                            &b_info->stubdomain_ramdisk, 0);
+    if (!xlu_cfg_get_long (config, "stubdomain_memory", &l, 0))
+        b_info->stubdomain_memkb = l * 1024;
+
 #define parse_extra_args(type)                                            \
     e = xlu_cfg_get_list_as_string_list(config, "device_model_args"#type, \
                                     &b_info->extra##type, 0);            \
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUf-0000Qo-JO; Tue, 19 May 2020 01:56:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUe-0000Q1-M7
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:32 +0000
X-Inumbo-ID: e79c0834-9973-11ea-9887-bc764e2007e4
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e79c0834-9973-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 01:56:07 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id d1so5777994qvl.6
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=2CXxR3/fu54dMwhCVd1DE62O7N45m5o1SIg5x3dwKzg=;
 b=AKhBNPAcoTZToHpSRZE5KrTgrDLx7XfXdXFxBJNg7lYrrzSOzsoWnKqqFozWnmA9Cl
 n2peygGD3HfXEuDgP9E/gTEK23cgcDdGCtmZD8H7nypC8v571/5mZ1vlVpsiJsocOx9U
 w+Gx58vsJOSJS3foDSvkIg8MV1zlNjYoXbsmJvVBS5Hx39kEbaI9A/pD8bjuMFxrbjXv
 FIimCXiDuUZAsTPMFNLXQiW1z+HkqHKYZWzQxF1cza4zQ6qem47RYV37cGLYTFeHcejJ
 RjwhHpwmqUVlwIbD6hBx0cEEfO55YJAjrbIS8OgISFfGcPIhySXFb8ttHD/1MWOKwJCl
 Gc6g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=2CXxR3/fu54dMwhCVd1DE62O7N45m5o1SIg5x3dwKzg=;
 b=QuByHL+9D6da2fLBtQy6KVBJbcaoPmIYuntABZftdpevoI4xzkLU0qaAmJY4bOYBpR
 hWlewhLZs4UtUcyF7haxjZHTLAHnbrTEzSxzCQIiaZjAC/RAVCOwzWh51OIjJ73/SXYU
 MhD0Du5Hpy0IRwwr50UCWwaBzVKsJ3TiAHWzrYGsm97jZoqyJIbTwpU0qjleSg/oBmYt
 RJjOFJGIeq6x3Tdd7CFe8XPDxBs4ktp2BK6vJeuQpALnKtQgnYLWste/fE0wtnZYBkWb
 IGh4F8FR4lgHYbDm6xiGb9qXB2cjohkPUc2i2FJgPYssw6GhSUBbtsHsTLBMRuUIIrbu
 hdrA==
X-Gm-Message-State: AOAM531HYCwyDq57XPemDZqnBuVYU1k+tBAlbGdabeuUA5NnTreUHsdN
 aAQZ71JvgK6bAu1co7oEiokMMcVg
X-Google-Smtp-Source: ABdhPJwTXzjvx4qmTPZqyVC7gX6iM1nyvCSo9BTAgYUCX/xOWafExMXAS2HimiKJ/82sLt8AVE7eIw==
X-Received: by 2002:a0c:b5c4:: with SMTP id o4mr18315483qvf.229.1589853366849; 
 Mon, 18 May 2020 18:56:06 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:06 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 09/19] tools/libvchan: notify server when client is
 connected
Date: Mon, 18 May 2020 21:54:53 -0400
Message-Id: <20200519015503.115236-10-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Ian Jackson <ian.jackson@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Let the server know when the client is connected. Otherwise, the server
will notice the client only once it sends some data.
This change does not break existing clients, as libvchan users should
handle spurious notifications anyway (for example, an acknowledgement of
the remote side having read the data).
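
For context, a minimal server-side sketch against the public libxenvchan
API (the domid and xenstore path are illustrative values, not real ones);
with the notification added here, the libxenvchan_wait() call returns as
soon as the client connects, instead of only when it first writes:

```c
#include <libxenvchan.h>

/* Sketch only: domid 7 and the xenstore path are made-up values. */
static int wait_for_client(struct xentoollog_logger *logger)
{
    struct libxenvchan *ctrl =
        libxenvchan_server_init(logger, 7, "data/vchan/1234", 0, 0);
    if (!ctrl)
        return -1;

    /* Blocks on the event channel; this patch makes the client's
     * connect itself deliver a notification. */
    if (libxenvchan_wait(ctrl) < 0)
        return -1;

    return libxenvchan_is_open(ctrl); /* >0 once the far end is connected */
}
```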

Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Replace spaces with tabs to match the file's whitespace.
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Marek: I had this patch in Qubes for a long time and totally forgot it
wasn't an upstream thing...

Changes in v6:
 - Add Acked-by: Ian Jackson
 - CC Daniel De Graaf
---
 tools/libvchan/init.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/tools/libvchan/init.c b/tools/libvchan/init.c
index 180833dc2f..ad4b64fbe3 100644
--- a/tools/libvchan/init.c
+++ b/tools/libvchan/init.c
@@ -447,6 +447,9 @@ struct libxenvchan *libxenvchan_client_init(struct xentoollog_logger *logger,
 	ctrl->ring->cli_live = 1;
 	ctrl->ring->srv_notify = VCHAN_NOTIFY_WRITE;
 
+	/* wake up the server */
+	xenevtchn_notify(ctrl->event, ctrl->event_port);
+
  out:
 	if (xs)
 		xs_daemon_close(xs);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUk-0000Ut-TA; Tue, 19 May 2020 01:56:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUj-0000U6-Mf
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:37 +0000
X-Inumbo-ID: e8bc7762-9973-11ea-b9cf-bc764e2007e4
Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e8bc7762-9973-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 01:56:09 +0000 (UTC)
Received: by mail-qv1-xf43.google.com with SMTP id ee19so5764152qvb.11
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=DwiumbG1d5HPUMbypCrL3Y7aDy+PWG/BBSK4eFZuL7c=;
 b=HP50HwP5r9rPIbd3fqfL3tTThoVI/3hE8NBtNP1oUlu9aflUHXGpge2DVqiibaFKWY
 PAKCUtOmAiOx7Oyr0ni4tcYHlhmGUrfZ3h3JDCMH7/pRVT/1bBHQlmc2q+NtPQw/vWZw
 W+TjANEOrODFwC8Ty8aKslnz3Yt/2zDl+Wp4mUhje0M+zxpRMD27PodPG8S4ewgKYnu3
 vp5B2t2pBpVM88iC37oLAtOUGZ0PsU9i+xsqnbEs9X6ewakdGa8rj8/SjA3y+itZBSO6
 pxQzDFdd7LoZpwuybt3xNfaQK0LsR2Ff3+rtCU1PjRSfMFAOIWxPDr4UrkVD/vtE5mUH
 CM6Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=DwiumbG1d5HPUMbypCrL3Y7aDy+PWG/BBSK4eFZuL7c=;
 b=li/wTvxNTaQwC/09qaykXKVTOn1ZAjJEc0pw5v9VNy/uti0qz/xujE0SgPRm3tuhmd
 e1uFI98XAPtVgelRQ0W/iuAVWR6P0vRxcRQ8CQGc8VR3iLtBIT4M5eusW5Wl9HbSmiJF
 Ec//14oOgrM/yFL9RPWnxj1fhWeXVMqeB4xmuSZwgm94iiBpK4OLVvlIgHM4lViqFbOT
 CENjJLDNUari/kqL+X6kv4py60ROCwAh4mqt92druz8a83X84AjbaB2W2V52Ag8sc42R
 J0YPKIdHsx+tgAnsmwSQX3IgwvY6Qi7gUB6IjtPvcXWq38skLtSwTgleYAxwfmjo8v39
 Vmpw==
X-Gm-Message-State: AOAM530Wz7jX191vnzORfu5/xZwL+deJ4oqBBKOcjQW6268JSbjNRCLI
 L0y160BR0RlvWhD7lFvHQ8SDj1DQ
X-Google-Smtp-Source: ABdhPJyGiSspDHrCA+s+ebMsuUXIQo8PhwHiSDhkYeC6N9s5pn5WX8CTsITCm8K3oroBjBUfaU3LsA==
X-Received: by 2002:a0c:e403:: with SMTP id o3mr17895876qvl.24.1589853368702; 
 Mon, 18 May 2020 18:56:08 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:07 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 10/19] libxl: add save/restore support for qemu-xen in
 stubdomain
Date: Mon, 18 May 2020 21:54:54 -0400
Message-Id: <20200519015503.115236-11-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Rely on a wrapper script in the stubdomain to attach the relevant
consoles to qemu.  The save console (1) must be attached to fdset/1.
When performing a restore, $STUBDOM_RESTORE_INCOMING_ARG must be
replaced on the qemu command line with "fd:$FD", where $FD is an open
file descriptor for the restore console (2).
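
The substitution itself can be sketched in shell; the fd number 9 and the
use of sed are illustrative, not the actual stubdomain wrapper script:

```shell
# The qemu command line as written by libxl contains the placeholder:
args='-incoming $STUBDOM_RESTORE_INCOMING_ARG'
FD=9  # hypothetical fd already open on the restore console (2)
# Replace the placeholder with the fd:$FD argument qemu expects:
echo "$args" | sed "s|\$STUBDOM_RESTORE_INCOMING_ARG|fd:$FD|"
# prints: -incoming fd:9
```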

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Address TODO in dm_state_save_to_fdset: Only remove savefile for
non-stubdom.
Use $STUBDOM_RESTORE_INCOMING_ARG instead of fd:3 and update commit
message.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Changes in v3:
 - adjust for qmp_ev*
 - assume specific fdset id in qemu set in stubdomain
Changes in v5:
 - Only remove savefile for non-stubdom
Changes in v6:
 - Replace hardcoded fd:3 with placeholder $STUBDOM_RESTORE_INCOMING_ARG
Changes in v7
 - Added Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c  | 25 +++++++++++++------------
 tools/libxl/libxl_qmp.c | 27 +++++++++++++++++++++++++--
 2 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 23b13f84d2..62d0d46c98 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1745,10 +1745,19 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
     }
 
     if (state->saved_state) {
-        /* This file descriptor is meant to be used by QEMU */
-        *dm_state_fd = open(state->saved_state, O_RDONLY);
-        flexarray_append(dm_args, "-incoming");
-        flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
+        if (is_stubdom) {
+            /* Linux stubdomain must replace $STUBDOM_RESTORE_INCOMING_ARG
+             * with the appropriate fd:$num argument for the
+             * STUBDOM_CONSOLE_RESTORE console 2.
+             */
+            flexarray_append(dm_args, "-incoming");
+            flexarray_append(dm_args, "$STUBDOM_RESTORE_INCOMING_ARG");
+        } else {
+            /* This file descriptor is meant to be used by QEMU */
+            *dm_state_fd = open(state->saved_state, O_RDONLY);
+            flexarray_append(dm_args, "-incoming");
+            flexarray_append(dm_args, GCSPRINTF("fd:%d",*dm_state_fd));
+        }
     }
     for (i = 0; b_info->extra && b_info->extra[i] != NULL; i++)
         flexarray_append(dm_args, b_info->extra[i]);
@@ -2227,14 +2236,6 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
 
     assert(libxl_defbool_val(guest_config->b_info.device_model_stubdomain));
 
-    if (libxl__stubdomain_is_linux(&guest_config->b_info)) {
-        if (d_state->saved_state) {
-            LOG(ERROR, "Save/Restore not supported yet with Linux Stubdom.");
-            ret = -1;
-            goto out;
-        }
-    }
-
     sdss->pvqemu.guest_domid = INVALID_DOMID;
 
     libxl_domain_create_info_init(&dm_config->c_info);
diff --git a/tools/libxl/libxl_qmp.c b/tools/libxl/libxl_qmp.c
index efaba91086..c394000ea9 100644
--- a/tools/libxl/libxl_qmp.c
+++ b/tools/libxl/libxl_qmp.c
@@ -962,6 +962,7 @@ static void dm_stopped(libxl__egc *egc, libxl__ev_qmp *ev,
                        const libxl__json_object *response, int rc);
 static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
                               const libxl__json_object *response, int rc);
+static void dm_state_save_to_fdset(libxl__egc *egc, libxl__ev_qmp *ev, int fdset);
 static void dm_state_saved(libxl__egc *egc, libxl__ev_qmp *ev,
                            const libxl__json_object *response, int rc);
 
@@ -994,10 +995,17 @@ static void dm_stopped(libxl__egc *egc, libxl__ev_qmp *ev,
     EGC_GC;
     libxl__domain_suspend_state *dsps = CONTAINER_OF(ev, *dsps, qmp);
     const char *const filename = dsps->dm_savefile;
+    uint32_t dm_domid = libxl_get_stubdom_id(CTX, dsps->domid);
 
     if (rc)
         goto error;
 
+    if (dm_domid) {
+        /* see Linux stubdom interface in docs/stubdom.txt */
+        dm_state_save_to_fdset(egc, ev, 1);
+        return;
+    }
+
     ev->payload_fd = open(filename, O_WRONLY | O_CREAT, 0600);
     if (ev->payload_fd < 0) {
         LOGED(ERROR, ev->domid,
@@ -1028,7 +1036,6 @@ static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
     EGC_GC;
     int fdset;
     const libxl__json_object *o;
-    libxl__json_object *args = NULL;
     libxl__domain_suspend_state *dsps = CONTAINER_OF(ev, *dsps, qmp);
 
     close(ev->payload_fd);
@@ -1043,6 +1050,21 @@ static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
         goto error;
     }
     fdset = libxl__json_object_get_integer(o);
+    dm_state_save_to_fdset(egc, ev, fdset);
+    return;
+
+error:
+    assert(rc);
+    libxl__remove_file(gc, dsps->dm_savefile);
+    dsps->callback_device_model_done(egc, dsps, rc);
+}
+
+static void dm_state_save_to_fdset(libxl__egc *egc, libxl__ev_qmp *ev, int fdset)
+{
+    EGC_GC;
+    int rc;
+    libxl__json_object *args = NULL;
+    libxl__domain_suspend_state *dsps = CONTAINER_OF(ev, *dsps, qmp);
 
     ev->callback = dm_state_saved;
 
@@ -1060,7 +1082,8 @@ static void dm_state_fd_ready(libxl__egc *egc, libxl__ev_qmp *ev,
 
 error:
     assert(rc);
-    libxl__remove_file(gc, dsps->dm_savefile);
+    if (!libxl_get_stubdom_id(CTX, dsps->domid))
+        libxl__remove_file(gc, dsps->dm_savefile);
     dsps->callback_device_model_done(egc, dsps, rc);
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUp-0000Y8-9X; Tue, 19 May 2020 01:56:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUo-0000Xe-MK
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:42 +0000
X-Inumbo-ID: e98f2748-9973-11ea-b07b-bc764e2007e4
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e98f2748-9973-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 01:56:10 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id fb16so5737183qvb.5
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=67muXVTuajJEPjuiVYNz+RE396Rty6mUCKX18cQDly8=;
 b=fBhPiLx6vGxfX8RquZzoRxZDDliU6MKbftuDKBQ/hqIWz/3AJ+r8jOdcrEgX6Qlul9
 2URZxy/+snCKgAtOfzAbp9+PmijswCxfGVWdZhXBMAjfTowVb2aK/ZnZVZzqrN//E7+5
 qAHg4b8GhduyH4eyQ+tQWv9uSEdUgjKT/6s9hJ0/zXcqJU15gTy6wJFfB0DJV3HojOqy
 01RRG1x3y3iuOD4r+yEDNyrYOB4MPxsfZUuZ2pYSAgoc4MNuUlQl8poyupqNwP4BEElw
 yyDTC/eaib/pmo7bivFMu32MyyjqKWqw8aziMiTNd69vIKwLSuwDZ95N+M4t0GAukk+x
 Il/w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=67muXVTuajJEPjuiVYNz+RE396Rty6mUCKX18cQDly8=;
 b=Ox9ApQvLabky7vgL6y4k+hYtlnuniTYfZuqtVIwneWstGqqXH+LgMlXG6RL5AspPx2
 YRH1j7hPwkkRxauUwkGxQkiBdeOpgn6G3TaHRAXWH77tcs08er2hAFQTpg8/yo8AJacK
 KrYSUCQakoXl00zyJsZWnGcirEtzGyafg+J+rn6xx2sukC6E7oSWPgLYpb9peG3P2eHP
 QWXZrpShvm36EMxurh3MyuyFnW1GWcHkoNOnG5U37SNKBiWIiAz3lkUVOHm82nph1ot3
 3V5lleDN6ioAt5+oD28IB5/RvH78Xoks+iXSLIH7aUtF+8HOMNAN5vGHLlUz2AIkOFUR
 Yqtw==
X-Gm-Message-State: AOAM532nonNWXicP+UWt/MZRZL2Sk2eqiREiEBkrODnV/X9nSHGRqgdE
 xRfu/grYSnaODU0Pc8keSzJlM3ox
X-Google-Smtp-Source: ABdhPJxvCLHrLRHDGiZ2w3ADAvYvBlO5SG181DgeU83f4KwNeDp8MsclnwQltBEYw30Yz14dq2Jzgg==
X-Received: by 2002:ad4:5506:: with SMTP id az6mr17650121qvb.136.1589853370138; 
 Mon, 18 May 2020 18:56:10 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:09 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 11/19] tools: add missing libxenvchan cflags
Date: Mon, 18 May 2020 21:54:55 -0400
Message-Id: <20200519015503.115236-12-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

libxenvchan.h includes xenevtchn.h and xengnttab.h, so applications built
against it need the applicable -I flags in their CFLAGS too.
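
For instance, a hypothetical out-of-tree tool built via tools/Rules.mk
would now pick up the headers transitively (the target name is made up):

```make
# Hypothetical consumer: CFLAGS_libxenvchan now also carries the
# xenevtchn/xengnttab -I paths, so this compiles without extra flags.
vchan-app.o: CFLAGS += $(CFLAGS_libxenvchan)
vchan-app: LDLIBS += $(LDLIBS_libxenvchan)
```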

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6
 - Add Acked-by: Ian Jackson
---
 tools/Rules.mk | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5b8cf748ad..59c72e7a88 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -157,7 +157,7 @@ SHDEPS_libxenstat  = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
 LDLIBS_libxenstat  = $(SHDEPS_libxenstat) $(XEN_LIBXENSTAT)/libxenstat$(libextension)
 SHLIB_libxenstat   = $(SHDEPS_libxenstat) -Wl,-rpath-link=$(XEN_LIBXENSTAT)
 
-CFLAGS_libxenvchan = -I$(XEN_LIBVCHAN)
+CFLAGS_libxenvchan = -I$(XEN_LIBVCHAN) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 SHDEPS_libxenvchan = $(SHLIB_libxentoollog) $(SHLIB_libxenstore) $(SHLIB_libxenevtchn) $(SHLIB_libxengnttab)
 LDLIBS_libxenvchan = $(SHDEPS_libxenvchan) $(XEN_LIBVCHAN)/libxenvchan$(libextension)
 SHLIB_libxenvchan  = $(SHDEPS_libxenvchan) -Wl,-rpath-link=$(XEN_LIBVCHAN)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:56:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:56:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUu-0000cP-L3; Tue, 19 May 2020 01:56:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUt-0000bb-Ma
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:47 +0000
X-Inumbo-ID: ead05370-9973-11ea-b07b-bc764e2007e4
Received: from mail-qv1-xf31.google.com (unknown [2607:f8b0:4864:20::f31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ead05370-9973-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 01:56:12 +0000 (UTC)
Received: by mail-qv1-xf31.google.com with SMTP id z5so5774793qvw.4
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=kEXkWybJoTWgEbRxoPRBV+0UyHS+McZIDFYoiINJxvs=;
 b=qcVzo3a+KB3ETvgsQidsSXACydEgJBtJ4WRf01+VVSjp9T9U5ONXTcDLnH7KNYaw3U
 u/d2DHNW5pths5bKQKRRcJDRAtDQR9fbm3y7ZJRScmJmZ6erZZonn8VVpQbpkTaF0PeX
 bIw8Z9Nk5GjqR9TfA7M2074DUcH5UaMbJp1/1qYpnpVf0BfOmhoOS0/A+7tmQKYC1vjN
 SvlXClEzwm2NSOqXkKVtde4GfacnkI3i5hD8B/pLb7M9NXOxtu/Q6vogspPNfEEg9kHB
 lx57+Zoih+Zro+0UqpJu9xZT99FnQSrx+dvfZsf96q9IBoPqMLYYrtZxgJtTFZkaK+3f
 SDag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=kEXkWybJoTWgEbRxoPRBV+0UyHS+McZIDFYoiINJxvs=;
 b=YQ5Rrf+3HSYiaWG6FsYZUyQlaGBnX/AlD8DGiFVrYNTQm13gVJzSKJae2nvdLF+RBq
 Xy1VuLrQvg32TIV5rZgDTYy5NBZA0rOC/InK9ih/7WgGGgAA/NowqSeNNcYytQWbepra
 nPPdIbkAghiyMFu0UVhzbBnNtUzQrBWzNvvUeCpwFVlTGKyQJeoy4dBmFIvGw1xEACoW
 ITKTOKHsXl0j95BBrD5mIPGrd0Pu51/9Vketon7fPcVfWIoSk4yxojJ9JzuvxECIROR9
 BhJurQ3K0tKObOFp4yLOUv9tSmzRRlMcdwa9AHpUiIjhgg6I6CHZH+rW+jhtSezlpuhO
 2yTQ==
X-Gm-Message-State: AOAM532nYYZ9CKZMlSeVrgAueYXtNjbsQzhwnq4VHjLwFGlfDEW/k31X
 Xd08mbF/fjqPnsShaNNipbZV1VqU
X-Google-Smtp-Source: ABdhPJxRpf7O/7FOq43998l0WVEB5d3uHUdleuhp18e5E/w4QE1ZImpyheDSJV4p1ZZX9OpP3LMWhw==
X-Received: by 2002:a0c:ee64:: with SMTP id n4mr18445066qvs.246.1589853371812; 
 Mon, 18 May 2020 18:56:11 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:11 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 12/19] tools: add simple vchan-socket-proxy
Date: Mon, 18 May 2020 21:54:56 -0400
Message-Id: <20200519015503.115236-13-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Add a simple proxy for tunneling socket connection over vchan. This is
based on existing vchan-node* applications, but extended with socket
support. vchan-socket-proxy serves both as a client and as a server,
depending on parameters. It can be used to transparently communicate
with an application in another domian that normally expose UNIX socket
interface. Specifically, it's written to communicate with qemu running
within stubdom.

In server mode, it listens for vchan connections and, when one is
opened, connects to the specified UNIX socket.  In client mode, it
listens on a UNIX socket and, when someone connects, opens a vchan
connection.  Only a single connection at a time is supported.

Additionally, the socket can be provided as a number, in which case it
is interpreted as an already-open FD (for a listening UNIX socket,
listen() needs to have been called already), or as "-", meaning
stdin/stdout, in which case the tool is reduced to vchan-node2
functionality.

Example usage:

1. (in dom0) vchan-socket-proxy --mode=client <DOMID>
    /local/domain/<DOMID>/data/vchan/1234 /run/qemu.<DOMID>

2. (in DOMID) vchan-socket-proxy --mode=server 0
   /local/domain/<DOMID>/data/vchan/1234 /run/qemu.<DOMID>

This will listen on /run/qemu.<DOMID> in dom0 and, whenever a connection
is made, connect to DOMID, where the server process will connect to
/run/qemu.<DOMID> there. When the client disconnects, the vchan
connection is terminated and the server vchan-socket-proxy process also
disconnects from qemu.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v5:
 - Ensure bindir directory is present
 - String and comment fixes
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 .gitignore                          |   1 +
 tools/libvchan/Makefile             |   8 +-
 tools/libvchan/vchan-socket-proxy.c | 478 ++++++++++++++++++++++++++++
 3 files changed, 486 insertions(+), 1 deletion(-)
 create mode 100644 tools/libvchan/vchan-socket-proxy.c

diff --git a/.gitignore b/.gitignore
index bfa53723b3..7418ce9829 100644
--- a/.gitignore
+++ b/.gitignore
@@ -369,6 +369,7 @@ tools/misc/xenwatchdogd
 tools/misc/xen-hvmcrash
 tools/misc/xen-lowmemd
 tools/libvchan/vchan-node[12]
+tools/libvchan/vchan-socket-proxy
 tools/ocaml/*/.ocamldep.make
 tools/ocaml/*/*.cm[ixao]
 tools/ocaml/*/*.cmxa
diff --git a/tools/libvchan/Makefile b/tools/libvchan/Makefile
index 7892750c3e..913bcc8884 100644
--- a/tools/libvchan/Makefile
+++ b/tools/libvchan/Makefile
@@ -13,6 +13,7 @@ LIBVCHAN_PIC_OBJS = $(patsubst %.o,%.opic,$(LIBVCHAN_OBJS))
 LIBVCHAN_LIBS = $(LDLIBS_libxenstore) $(LDLIBS_libxengnttab) $(LDLIBS_libxenevtchn)
 $(LIBVCHAN_OBJS) $(LIBVCHAN_PIC_OBJS): CFLAGS += $(CFLAGS_libxenstore) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 $(NODE_OBJS) $(NODE2_OBJS): CFLAGS += $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
+vchan-socket-proxy.o: CFLAGS += $(CFLAGS_libxenstore) $(CFLAGS_libxenctrl) $(CFLAGS_libxengnttab) $(CFLAGS_libxenevtchn)
 
 MAJOR = 4.14
 MINOR = 0
@@ -39,7 +40,7 @@ $(PKG_CONFIG_LOCAL): PKG_CONFIG_LIBDIR = $(CURDIR)
 $(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
 
 .PHONY: all
-all: libxenvchan.so vchan-node1 vchan-node2 libxenvchan.a $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
+all: libxenvchan.so vchan-node1 vchan-node2 vchan-socket-proxy libxenvchan.a $(PKG_CONFIG_INST) $(PKG_CONFIG_LOCAL)
 
 libxenvchan.so: libxenvchan.so.$(MAJOR)
 	ln -sf $< $@
@@ -59,13 +60,18 @@ vchan-node1: $(NODE_OBJS) libxenvchan.so
 vchan-node2: $(NODE2_OBJS) libxenvchan.so
 	$(CC) $(LDFLAGS) -o $@ $(NODE2_OBJS) $(LDLIBS_libxenvchan) $(APPEND_LDFLAGS)
 
+vchan-socket-proxy: vchan-socket-proxy.o libxenvchan.so
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenvchan) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
 .PHONY: install
 install: all
 	$(INSTALL_DIR) $(DESTDIR)$(libdir)
 	$(INSTALL_DIR) $(DESTDIR)$(includedir)
+	$(INSTALL_DIR) $(DESTDIR)$(bindir)
 	$(INSTALL_PROG) libxenvchan.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)
 	ln -sf libxenvchan.so.$(MAJOR).$(MINOR) $(DESTDIR)$(libdir)/libxenvchan.so.$(MAJOR)
 	ln -sf libxenvchan.so.$(MAJOR) $(DESTDIR)$(libdir)/libxenvchan.so
+	$(INSTALL_PROG) vchan-socket-proxy $(DESTDIR)$(bindir)
 	$(INSTALL_DATA) libxenvchan.h $(DESTDIR)$(includedir)
 	$(INSTALL_DATA) libxenvchan.a $(DESTDIR)$(libdir)
 	$(INSTALL_DATA) xenvchan.pc $(DESTDIR)$(PKG_INSTALLDIR)
diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
new file mode 100644
index 0000000000..13700c5d67
--- /dev/null
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -0,0 +1,478 @@
+/**
+ * @file
+ * @section AUTHORS
+ *
+ * Copyright (C) 2010  Rafal Wojtczuk  <rafal@invisiblethingslab.com>
+ *
+ *  Authors:
+ *       Rafal Wojtczuk  <rafal@invisiblethingslab.com>
+ *       Daniel De Graaf <dgdegra@tycho.nsa.gov>
+ *       Marek Marczykowski-Górecki  <marmarek@invisiblethingslab.com>
+ *
+ * @section LICENSE
+ *
+ *  This program is free software; you can redistribute it and/or
+ *  modify it under the terms of the GNU Lesser General Public
+ *  License as published by the Free Software Foundation; either
+ *  version 2.1 of the License, or (at your option) any later version.
+ *
+ *  This program is distributed in the hope that it will be useful,
+ *  but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ *  Lesser General Public License for more details.
+ *
+ *  You should have received a copy of the GNU Lesser General Public
+ *  License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ *
+ * @section DESCRIPTION
+ *
+ * This is a vchan to UNIX socket proxy. A vchan server is set up and, on client
+ * connection, a local socket connection is established. Communication is bidirectional.
+ * One client is served at a time; clients need to coordinate this themselves.
+ */
+
+#include <stdlib.h>
+#include <stdio.h>
+#include <string.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <errno.h>
+#include <sys/socket.h>
+#include <sys/un.h>
+#include <getopt.h>
+
+#include <xenstore.h>
+#include <xenctrl.h>
+#include <libxenvchan.h>
+
+static void usage(char** argv)
+{
+    fprintf(stderr, "usage:\n"
+        "\t%s [options] domainid nodepath [socket-path|file-no|-]\n"
+        "\n"
+        "options:\n"
+        "\t-m, --mode=client|server - vchan connection mode (client by default)\n"
+        "\t-s, --state-path=path - xenstore path to write \"running\" to\n"
+        "\t                        at startup\n"
+        "\t-v, --verbose - verbose logging\n"
+        "\n"
+        "client: client of a vchan connection, fourth parameter can be:\n"
+        "\tsocket-path: listen on a UNIX socket at this path and connect to vchan\n"
+        "\t             whenever new connection is accepted;\n"
+        "\t             handle multiple _subsequent_ connections, until terminated\n"
+        "\n"
+        "\tfile-no:     expect an open FD of a socket in listen mode;\n"
+        "\t             otherwise similar to socket-path\n"
+        "\n"
+        "\t-:           open vchan connection immediately and pass the data\n"
+        "\t             from stdin/stdout; terminate when vchan connection\n"
+        "\t             is closed\n"
+        "\n"
+        "server: server of a vchan connection, fourth parameter can be:\n"
+        "\tsocket-path: connect to this UNIX socket when new vchan connection\n"
+        "\t             is accepted;\n"
+        "\t             handle multiple _subsequent_ connections, until terminated\n"
+        "\n"
+        "\tfile-no:     pass data to/from this FD; terminate when vchan connection\n"
+        "\t             is closed\n"
+        "\n"
+        "\t-:           pass data to/from stdin/stdout; terminate when vchan\n"
+        "\t             connection is closed\n",
+        argv[0]);
+    exit(1);
+}
+
+#define BUFSIZE 8192
+char inbuf[BUFSIZE];
+char outbuf[BUFSIZE];
+int insiz = 0;
+int outsiz = 0;
+int verbose = 0;
+
+static void vchan_wr(struct libxenvchan *ctrl) {
+    int ret;
+
+    if (!insiz)
+        return;
+    ret = libxenvchan_write(ctrl, inbuf, insiz);
+    if (ret < 0) {
+        fprintf(stderr, "vchan write failed\n");
+        exit(1);
+    }
+    if (verbose)
+        fprintf(stderr, "wrote %d bytes to vchan\n", ret);
+    if (ret > 0) {
+        insiz -= ret;
+        memmove(inbuf, inbuf + ret, insiz);
+    }
+}
+
+static void socket_wr(int output_fd) {
+    int ret;
+
+    if (!outsiz)
+        return;
+    ret = write(output_fd, outbuf, outsiz);
+    if (ret < 0 && errno != EAGAIN)
+        exit(1);
+    if (ret > 0) {
+        outsiz -= ret;
+        memmove(outbuf, outbuf + ret, outsiz);
+    }
+}
+
+static int set_nonblocking(int fd, int nonblocking) {
+    int flags = fcntl(fd, F_GETFL);
+    if (flags == -1)
+        return -1;
+
+    if (nonblocking)
+        flags |= O_NONBLOCK;
+    else
+        flags &= ~O_NONBLOCK;
+
+    if (fcntl(fd, F_SETFL, flags) == -1)
+        return -1;
+
+    return 0;
+}
+
+static int connect_socket(const char *path_or_fd) {
+    int fd;
+    char *endptr;
+    struct sockaddr_un addr;
+
+    fd = strtoll(path_or_fd, &endptr, 0);
+    if (*endptr == '\0') {
+        set_nonblocking(fd, 1);
+        return fd;
+    }
+
+    fd = socket(AF_UNIX, SOCK_STREAM, 0);
+    if (fd == -1)
+        return -1;
+
+    addr.sun_family = AF_UNIX;
+    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path_or_fd);
+    if (connect(fd, (const struct sockaddr *)&addr, sizeof(addr)) == -1) {
+        close(fd);
+        return -1;
+    }
+
+    set_nonblocking(fd, 1);
+
+    return fd;
+}
+
+static int listen_socket(const char *path_or_fd) {
+    int fd;
+    char *endptr;
+    struct sockaddr_un addr;
+
+    fd = strtoll(path_or_fd, &endptr, 0);
+    if (*endptr == '\0') {
+        return fd;
+    }
+
+    /* if not a number, assume a socket path */
+    fd = socket(AF_UNIX, SOCK_STREAM, 0);
+    if (fd == -1)
+        return -1;
+
+    addr.sun_family = AF_UNIX;
+    snprintf(addr.sun_path, sizeof(addr.sun_path), "%s", path_or_fd);
+    if (bind(fd, (const struct sockaddr *)&addr, sizeof(addr)) == -1) {
+        close(fd);
+        return -1;
+    }
+    if (listen(fd, 5) != 0) {
+        close(fd);
+        return -1;
+    }
+
+    return fd;
+}
+
+static struct libxenvchan *connect_vchan(int domid, const char *path) {
+    struct libxenvchan *ctrl = NULL;
+    struct xs_handle *xs = NULL;
+    xc_interface *xc = NULL;
+    xc_dominfo_t dominfo;
+    char **watch_ret;
+    unsigned int watch_num;
+    int ret;
+
+    xs = xs_open(XS_OPEN_READONLY);
+    if (!xs) {
+        perror("xs_open");
+        goto out;
+    }
+    xc = xc_interface_open(NULL, NULL, XC_OPENFLAG_NON_REENTRANT);
+    if (!xc) {
+        perror("xc_interface_open");
+        goto out;
+    }
+    /* wait for vchan server to create *path* */
+    xs_watch(xs, path, "path");
+    xs_watch(xs, "@releaseDomain", "release");
+    while ((watch_ret = xs_read_watch(xs, &watch_num))) {
+        /* don't care exactly which watch fired */
+        free(watch_ret);
+        ctrl = libxenvchan_client_init(NULL, domid, path);
+        if (ctrl)
+            break;
+
+        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
+        /* break the loop if domain is definitely not there anymore, but
+         * continue if it is or the call failed (like EPERM) */
+        if (ret == -1 && errno == ESRCH)
+            break;
+        if (ret == 1 && (dominfo.domid != (uint32_t)domid || dominfo.dying))
+            break;
+    }
+
+out:
+    if (xc)
+        xc_interface_close(xc);
+    if (xs)
+        xs_close(xs);
+    return ctrl;
+}
+
+
+static void discard_buffers(struct libxenvchan *ctrl) {
+    /* discard local buffers */
+    insiz = 0;
+    outsiz = 0;
+
+    /* discard remaining incoming data */
+    while (libxenvchan_data_ready(ctrl)) {
+        if (libxenvchan_read(ctrl, inbuf, BUFSIZE) == -1) {
+            perror("vchan read");
+            exit(1);
+        }
+    }
+}
+
+int data_loop(struct libxenvchan *ctrl, int input_fd, int output_fd)
+{
+    int ret;
+    int libxenvchan_fd;
+    int max_fd;
+
+    libxenvchan_fd = libxenvchan_fd_for_select(ctrl);
+    for (;;) {
+        fd_set rfds;
+        fd_set wfds;
+        FD_ZERO(&rfds);
+        FD_ZERO(&wfds);
+
+        max_fd = -1;
+        if (input_fd != -1 && insiz != BUFSIZE) {
+            FD_SET(input_fd, &rfds);
+            if (input_fd > max_fd)
+                max_fd = input_fd;
+        }
+        if (output_fd != -1 && outsiz) {
+            FD_SET(output_fd, &wfds);
+            if (output_fd > max_fd)
+                max_fd = output_fd;
+        }
+        FD_SET(libxenvchan_fd, &rfds);
+        if (libxenvchan_fd > max_fd)
+            max_fd = libxenvchan_fd;
+        ret = select(max_fd + 1, &rfds, &wfds, NULL, NULL);
+        if (ret < 0) {
+            perror("select");
+            exit(1);
+        }
+        if (FD_ISSET(libxenvchan_fd, &rfds)) {
+            libxenvchan_wait(ctrl);
+            if (!libxenvchan_is_open(ctrl)) {
+                if (verbose)
+                    fprintf(stderr, "vchan client disconnected\n");
+                while (outsiz)
+                    socket_wr(output_fd);
+                close(output_fd);
+                close(input_fd);
+                discard_buffers(ctrl);
+                break;
+            }
+            vchan_wr(ctrl);
+        }
+
+        if (FD_ISSET(input_fd, &rfds)) {
+            ret = read(input_fd, inbuf + insiz, BUFSIZE - insiz);
+            if (ret < 0 && errno != EAGAIN)
+                exit(1);
+            if (verbose)
+                fprintf(stderr, "from-unix: %.*s\n", ret, inbuf + insiz);
+            if (ret == 0) {
+                /* EOF on socket, write everything in the buffer and close the
+                 * input_fd socket */
+                while (insiz) {
+                    vchan_wr(ctrl);
+                    libxenvchan_wait(ctrl);
+                }
+                close(input_fd);
+                input_fd = -1;
+                /* TODO: maybe signal the vchan client somehow? */
+                break;
+            }
+            if (ret)
+                insiz += ret;
+            vchan_wr(ctrl);
+        }
+        if (FD_ISSET(output_fd, &wfds))
+            socket_wr(output_fd);
+        while (libxenvchan_data_ready(ctrl) && outsiz < BUFSIZE) {
+            ret = libxenvchan_read(ctrl, outbuf + outsiz, BUFSIZE - outsiz);
+            if (ret < 0)
+                exit(1);
+            if (verbose)
+                fprintf(stderr, "from-vchan: %.*s\n", ret, outbuf + outsiz);
+            outsiz += ret;
+            socket_wr(output_fd);
+        }
+    }
+    return 0;
+}
+
+/**
+    Simple libxenvchan application, both client and server.
+    Both sides may write and read, both from the libxenvchan and from
+    stdin/stdout (just like netcat).
+*/
+
+static struct option options[] = {
+    { "mode",       required_argument, NULL, 'm' },
+    { "verbose",          no_argument, NULL, 'v' },
+    { "state-path", required_argument, NULL, 's' },
+    { }
+};
+
+int main(int argc, char **argv)
+{
+    int is_server = 0;
+    int socket_fd = -1;
+    int input_fd, output_fd;
+    struct libxenvchan *ctrl = NULL;
+    const char *socket_path;
+    int domid;
+    const char *vchan_path;
+    const char *state_path = NULL;
+    int opt;
+
+    while ((opt = getopt_long(argc, argv, "m:vs:", options, NULL)) != -1) {
+        switch (opt) {
+            case 'm':
+                if (strcmp(optarg, "server") == 0)
+                    is_server = 1;
+                else if (strcmp(optarg, "client") == 0)
+                    is_server = 0;
+                else {
+                    fprintf(stderr, "invalid argument for --mode: %s\n", optarg);
+                    usage(argv);
+                    return 1;
+                }
+                break;
+            case 'v':
+                verbose = 1;
+                break;
+            case 's':
+                state_path = optarg;
+                break;
+            case '?':
+                usage(argv);
+        }
+    }
+
+    if (argc-optind != 3)
+        usage(argv);
+
+    domid = atoi(argv[optind]);
+    vchan_path = argv[optind+1];
+    socket_path = argv[optind+2];
+
+    if (is_server) {
+        ctrl = libxenvchan_server_init(NULL, domid, vchan_path, 0, 0);
+        if (!ctrl) {
+            perror("libxenvchan_server_init");
+            exit(1);
+        }
+    } else {
+        if (strcmp(socket_path, "-") == 0) {
+            input_fd = 0;
+            output_fd = 1;
+        } else {
+            socket_fd = listen_socket(socket_path);
+            if (socket_fd == -1) {
+                perror("listen socket");
+                return 1;
+            }
+        }
+    }
+
+    if (state_path) {
+        struct xs_handle *xs;
+
+        xs = xs_open(0);
+        if (!xs) {
+            perror("xs_open");
+            return 1;
+        }
+        if (!xs_write(xs, XBT_NULL, state_path, "running", strlen("running"))) {
+            perror("xs_write");
+            return 1;
+        }
+        xs_close(xs);
+    }
+
+    for (;;) {
+        if (is_server) {
+            /* wait for vchan connection */
+            while (libxenvchan_is_open(ctrl) != 1)
+                libxenvchan_wait(ctrl);
+            /* vchan client connected, setup local FD if needed */
+            if (strcmp(socket_path, "-") == 0) {
+                input_fd = 0;
+                output_fd = 1;
+            } else {
+                input_fd = output_fd = connect_socket(socket_path);
+            }
+            if (input_fd == -1) {
+                perror("connect socket");
+                return 1;
+            }
+            if (data_loop(ctrl, input_fd, output_fd) != 0)
+                break;
+            /* keep it running only when given a UNIX socket path */
+            if (socket_path[0] != '/')
+                break;
+        } else {
+            /* wait for local socket connection */
+            if (strcmp(socket_path, "-") != 0)
+                input_fd = output_fd = accept(socket_fd, NULL, NULL);
+            if (input_fd == -1) {
+                perror("accept");
+                return 1;
+            }
+            set_nonblocking(input_fd, 1);
+            set_nonblocking(output_fd, 1);
+            ctrl = connect_vchan(domid, vchan_path);
+            if (!ctrl) {
+                perror("vchan client init");
+                return 1;
+            }
+            if (data_loop(ctrl, input_fd, output_fd) != 0)
+                break;
+            /* don't reconnect if output was stdout */
+            if (strcmp(socket_path, "-") == 0)
+                break;
+
+            libxenvchan_close(ctrl);
+            ctrl = NULL;
+        }
+    }
+    return 0;
+}
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:57:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:57:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarUz-0000gy-UW; Tue, 19 May 2020 01:56:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarUy-0000fm-MX
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:52 +0000
X-Inumbo-ID: eb926a78-9973-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf43.google.com (unknown [2607:f8b0:4864:20::f43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb926a78-9973-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 01:56:14 +0000 (UTC)
Received: by mail-qv1-xf43.google.com with SMTP id z9so5758822qvi.12
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=SW7ozKClvM3WIhe8ybZkG6u5ih1lmHf7uAbI1XBrLpc=;
 b=CnKwwadtEAFYg1zazlGc4X/IU4eFWCwKZvHufvQ1e2crzFfRNLG2+PnUTwx8Rfz/TN
 s4/o/YpwsIHG54JsFcb2ekxpk31n/3WxquR9sQf+0OJkXy4l39YyW1pVqRqBPxCUGg8V
 ps08GLBOpQfc4ztTbITjxjiMPnsSvOiNPdwKH99MSgd0Ma1EHuzAV9+UlZKKeYWfZzBU
 yz6aTYI8a4gXCa9IQ0v1Ci3n46ZT/cg46lqD5kq9nzp65raXWuLf7i0Kj/z0Ay6f8pEs
 ByVay6MQx/r4W36o+J3yD5ieVOGzWLDVs3Nnl9msX79DzHiU7X6x/nu2yuT66wvvMy26
 Y80g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=SW7ozKClvM3WIhe8ybZkG6u5ih1lmHf7uAbI1XBrLpc=;
 b=P6SRfCpPvo6ClTWrockIOjcK2zpoxpWXHl38bdDsoz044v8rooQXr/LKw5IQWMK0Zb
 QkdQdIrkNZVnzZhhcYp2vpc4CZzogG/Cx6POp8EWSK0h8FQd2D5AbnuJrpw7xmvBTLGa
 27okBpwj2lf19GM4dTI0DBLDYDn2FhJrSuA8Om4q/HN3XJteCIWcfdi7ZDvKgr4rTsSb
 ApvoAQTKG9AhkuVnF5IMZXI9g1BUBbo1C42eNoMzbR1XO8XfDe/lvFIDD9lyefvj1hJb
 v4V7vR6a7sc/eUVCmIaxNxnOixTZp6rG5Wx26vhGyfT20lNXWhjDT6/T7A/rKntN1dcN
 MNag==
X-Gm-Message-State: AOAM533ZOKjjmML/RqCjf1idBnMVzG8Ea2iMVxIK6X7AJVy7MPOvVets
 NBa0XQC6rOY3hgqbhPdQHNqJiwbn
X-Google-Smtp-Source: ABdhPJxYFWLxUw/CqXdV+vgU4A067EXnFnHEE1WVte0qJKFDwTeC4vjmLzBBFixMy64+ZUjHFScS2Q==
X-Received: by 2002:a0c:e48f:: with SMTP id n15mr17472904qvl.73.1589853373326; 
 Mon, 18 May 2020 18:56:13 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:12 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 13/19] libxl: Refactor kill_device_model to
 libxl__kill_xs_path
Date: Mon, 18 May 2020 21:54:57 -0400
Message-Id: <20200519015503.115236-14-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move kill_device_model to libxl__kill_xs_path so we have a helper to
kill a process from a pid stored in xenstore.  We'll be using it to kill
vchan-qmp-proxy.

libxl__kill_xs_path takes a "what" string for use in printing error
messages.  kill_device_model is retained in libxl_dm.c to provide the
string.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_aoutils.c  | 32 ++++++++++++++++++++++++++++++++
 tools/libxl/libxl_dm.c       | 27 +--------------------------
 tools/libxl/libxl_internal.h |  3 +++
 3 files changed, 36 insertions(+), 26 deletions(-)

diff --git a/tools/libxl/libxl_aoutils.c b/tools/libxl/libxl_aoutils.c
index 1be858c93c..c4c095a5ba 100644
--- a/tools/libxl/libxl_aoutils.c
+++ b/tools/libxl/libxl_aoutils.c
@@ -626,6 +626,38 @@ void libxl__kill(libxl__gc *gc, pid_t pid, int sig, const char *what)
                 what, (unsigned long)pid, sig);
 }
 
+/* Generic function to signal (HUP) a pid stored in xenstore */
+int libxl__kill_xs_path(libxl__gc *gc, const char *xs_path_pid,
+                        const char *what)
+{
+    const char *xs_pid;
+    int ret, pid;
+
+    ret = libxl__xs_read_checked(gc, XBT_NULL, xs_path_pid, &xs_pid);
+    if (ret || !xs_pid) {
+        LOG(ERROR, "unable to find %s pid in %s", what, xs_path_pid);
+        ret = ret ? : ERROR_FAIL;
+        goto out;
+    }
+    pid = atoi(xs_pid);
+
+    ret = kill(pid, SIGHUP);
+    if (ret < 0 && errno == ESRCH) {
+        LOG(ERROR, "%s already exited", what);
+        ret = 0;
+    } else if (ret == 0) {
+        LOG(DEBUG, "%s signaled", what);
+        ret = 0;
+    } else {
+        LOGE(ERROR, "failed to kill %s [%d]", what, pid);
+        ret = ERROR_FAIL;
+        goto out;
+    }
+
+out:
+    return ret;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 62d0d46c98..6829b4bdb5 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -3225,32 +3225,7 @@ out:
 /* Generic function to signal a Qemu instance to exit */
 static int kill_device_model(libxl__gc *gc, const char *xs_path_pid)
 {
-    const char *xs_pid;
-    int ret, pid;
-
-    ret = libxl__xs_read_checked(gc, XBT_NULL, xs_path_pid, &xs_pid);
-    if (ret || !xs_pid) {
-        LOG(ERROR, "unable to find device model pid in %s", xs_path_pid);
-        ret = ret ? : ERROR_FAIL;
-        goto out;
-    }
-    pid = atoi(xs_pid);
-
-    ret = kill(pid, SIGHUP);
-    if (ret < 0 && errno == ESRCH) {
-        LOG(ERROR, "Device Model already exited");
-        ret = 0;
-    } else if (ret == 0) {
-        LOG(DEBUG, "Device Model signaled");
-        ret = 0;
-    } else {
-        LOGE(ERROR, "failed to kill Device Model [%d]", pid);
-        ret = ERROR_FAIL;
-        goto out;
-    }
-
-out:
-    return ret;
+    return libxl__kill_xs_path(gc, xs_path_pid, "Device Model");
 }
 
 /* Helper to destroy a Qdisk backend */
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index f2f76439ec..c939557b2e 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -2711,6 +2711,9 @@ int libxl__async_exec_start(libxl__async_exec_state *aes);
 bool libxl__async_exec_inuse(const libxl__async_exec_state *aes);
 
 _hidden void libxl__kill(libxl__gc *gc, pid_t pid, int sig, const char *what);
+/* send SIGHUP to a pid stored in xenstore */
+_hidden int libxl__kill_xs_path(libxl__gc *gc, const char *xs_path_pid,
+                                const char *what);
 
 /*----- device addition/removal -----*/
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:57:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarV4-0000l6-Cc; Tue, 19 May 2020 01:56:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarV3-0000kM-MW
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:56:57 +0000
X-Inumbo-ID: ed25f18e-9973-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed25f18e-9973-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 01:56:16 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id g20so5763931qvb.9
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=aLKJExg1fkxsNY3bWeg2LgROjhysCmtqf2q3X+TB4c4=;
 b=eLtIVsO04HtNXgOxkh0x8e8blnPL2m5LxoRvrhQmeQ4l4zlhMdOU/Sq3pTKG5KCqDD
 WLFtMT+mAe4YrlyVY2ZTY8ESru/7Fc2dhAqMRG0yccRskZmz69urCLeUO/4iS36tILWK
 bx8AjOC5xO7V9BtBGx5UNiaKuDMsXCsDrUz8DiwB/QBgEqSABJdv4N61xxYy9Pi49+uG
 uKVnpG+QqoZJr6mlyAx/ZupphChI+8gupyW0X4XqI9Y0Mxoga20xcddXtHLklw2ko/5B
 s35tUtr57eYM4lI8/NhhD4JG9hnaXZD+hoZ+8qlc0hCVexihuRRCthmS2SkR4TtTi+ZK
 fzrw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=aLKJExg1fkxsNY3bWeg2LgROjhysCmtqf2q3X+TB4c4=;
 b=TRoN450miqFoWhUHd/FqJAvlP05I4GB/rmOnWWAQMFSRboNnL/6kLN/wiNReUdr9Ko
 FtcPmFbNVSzEkt0hnV/EJjjKIPYbNJmrve+pLNAXp5gFejR4jrcNqMH4Q+vJyQEQ22jJ
 negdKMbJDb8znKmOrmjvY4JYt5gP159i7RLuEivQZr24TlxnAcJRkUbjtvMHmkmjIyqg
 LW4Oho5JYcRUL1MbjijFqPpOORdytLyVscfTPDlQQcESPxYrLf9ITolGKn8QUMIRZM72
 oNPB0iJe5VZMUOOZC2mWRjbYAVKLQ7xKVhTlDnoOvpz4NZVD3Volgh9LiK27BOjXicGe
 3nTg==
X-Gm-Message-State: AOAM531v9t+7+OaPS8t815YDOxjyc2xC/bBPfjW6U0iN6CrD0CpWP9zI
 tzCB4OFSra4uHnUtTuxiJjKXeKSc
X-Google-Smtp-Source: ABdhPJwwsglMbjyIGgB1+ORmsHFCrULrBg2QyEZbB5t50JWgaHkG3mRuih3C4aVAzvnBPIqroiOCdQ==
X-Received: by 2002:a0c:f590:: with SMTP id k16mr16653479qvm.81.1589853375426; 
 Mon, 18 May 2020 18:56:15 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:14 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 14/19] libxl: use vchan for QMP access with Linux stubdomain
Date: Mon, 18 May 2020 21:54:58 -0400
Message-Id: <20200519015503.115236-15-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Access to QMP of QEMU in a Linux stubdomain is possible over a vchan
connection. Handle the actual vchan connection in a separate process
(vchan-socket-proxy). This simplifies integration with QMP (already
quite complex), but also allows preliminary filtering of (potentially
malicious) QMP input.
Since only one client can be connected to the vchan server at a time,
and this is not enforced by libxenvchan itself, additional client-side
locking is needed. It is implicitly implemented by vchan-socket-proxy,
as it handles only one connection at a time. Note that qemu supports
only one simultaneous client on a control socket anyway (though in the
UNIX socket case it enforces this server-side), so this doesn't add any
extra limitation.

libxl qmp client code already has locking to handle concurrent access
attempts to the same qemu qmp interface.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Squash in changes of regenerated autotools files.

Kill the vchan-socket-proxy so we don't leak the daemonized processes.
libxl__stubdomain_is_linux_running() works against the guest_domid, but
the xenstore path is beneath the stubdomain.  This leads to the use of
libxl_is_stubdom() in addition to libxl__stubdomain_is_linux_running(),
so that stubdomain teardown kills the qmp-proxy.

Also call libxl__qmp_cleanup() to remove the unix sockets used by
vchan-socket-proxy.  vchan-socket-proxy only creates qmp-libxl-$domid,
and libxl__qmp_cleanup removes that as well as qmp-libxenstat-$domid.
However, it tolerates ENOENT, and a stray qmp-libxenstat-$domid should
not exist.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---

Re-run autotools after applying.

Changes in v4:
 - new patch, in place of both "libxl: use vchan for QMP access ..."
Changes in v5:
 - Use device-model/%u/qmp-proxy-state xenstore path
 - Rephrase comment
Changes in v6:
 - Commit message mention libxl locking
 - Mention re-run autotools
 - Squashed in re-generated autotools files
 - Call libxl__qmp_cleanup() to remove unix socket.
 - Cleanup in vchan-socket-proxy is dropped.
Changes in v7:
 - Add Acked-by: Ian Jackson
---
 configure                    |  14 +--
 docs/configure               |  14 +--
 stubdom/configure            |  14 +--
 tools/config.h.in            |   3 +
 tools/configure              |  46 ++++++----
 tools/configure.ac           |   9 ++
 tools/libxl/libxl_dm.c       | 163 +++++++++++++++++++++++++++++++++--
 tools/libxl/libxl_domain.c   |  10 +++
 tools/libxl/libxl_internal.h |   1 +
 9 files changed, 209 insertions(+), 65 deletions(-)

diff --git a/configure b/configure
index 9da3970cef..8af54e8a5a 100755
--- a/configure
+++ b/configure
@@ -644,7 +644,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -723,7 +722,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -976,15 +974,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1122,7 +1111,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1275,7 +1264,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
diff --git a/docs/configure b/docs/configure
index 9e3ed60462..93e9dcf404 100755
--- a/docs/configure
+++ b/docs/configure
@@ -634,7 +634,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -711,7 +710,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -964,15 +962,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1110,7 +1099,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1263,7 +1252,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
diff --git a/stubdom/configure b/stubdom/configure
index da03da535a..f7604a37f7 100755
--- a/stubdom/configure
+++ b/stubdom/configure
@@ -661,7 +661,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -751,7 +750,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -1004,15 +1002,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1150,7 +1139,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1303,7 +1292,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
diff --git a/tools/config.h.in b/tools/config.h.in
index 5a5944ebe1..5abf6092de 100644
--- a/tools/config.h.in
+++ b/tools/config.h.in
@@ -123,6 +123,9 @@
 /* Define to 1 if you have the ANSI C header files. */
 #undef STDC_HEADERS
 
+/* QMP proxy path */
+#undef STUBDOM_QMP_PROXY_PATH
+
 /* Enable large inode numbers on Mac OS X 10.5.  */
 #ifndef _DARWIN_USE_64_BIT_INODE
 # define _DARWIN_USE_64_BIT_INODE 1
diff --git a/tools/configure b/tools/configure
index 36596389b8..35036dc1db 100755
--- a/tools/configure
+++ b/tools/configure
@@ -772,7 +772,6 @@ infodir
 docdir
 oldincludedir
 includedir
-runstatedir
 localstatedir
 sharedstatedir
 sysconfdir
@@ -814,6 +813,7 @@ with_linux_backend_modules
 enable_qemu_traditional
 enable_rombios
 with_system_qemu
+with_stubdom_qmp_proxy
 with_system_seabios
 with_system_ovmf
 enable_ipxe
@@ -899,7 +899,6 @@ datadir='${datarootdir}'
 sysconfdir='${prefix}/etc'
 sharedstatedir='${prefix}/com'
 localstatedir='${prefix}/var'
-runstatedir='${localstatedir}/run'
 includedir='${prefix}/include'
 oldincludedir='/usr/include'
 docdir='${datarootdir}/doc/${PACKAGE_TARNAME}'
@@ -1152,15 +1151,6 @@ do
   | -silent | --silent | --silen | --sile | --sil)
     silent=yes ;;
 
-  -runstatedir | --runstatedir | --runstatedi | --runstated \
-  | --runstate | --runstat | --runsta | --runst | --runs \
-  | --run | --ru | --r)
-    ac_prev=runstatedir ;;
-  -runstatedir=* | --runstatedir=* | --runstatedi=* | --runstated=* \
-  | --runstate=* | --runstat=* | --runsta=* | --runst=* | --runs=* \
-  | --run=* | --ru=* | --r=*)
-    runstatedir=$ac_optarg ;;
-
   -sbindir | --sbindir | --sbindi | --sbind | --sbin | --sbi | --sb)
     ac_prev=sbindir ;;
   -sbindir=* | --sbindir=* | --sbindi=* | --sbind=* | --sbin=* \
@@ -1298,7 +1288,7 @@ fi
 for ac_var in	exec_prefix prefix bindir sbindir libexecdir datarootdir \
 		datadir sysconfdir sharedstatedir localstatedir includedir \
 		oldincludedir docdir infodir htmldir dvidir pdfdir psdir \
-		libdir localedir mandir runstatedir
+		libdir localedir mandir
 do
   eval ac_val=\$$ac_var
   # Remove trailing slashes.
@@ -1451,7 +1441,6 @@ Fine tuning of the installation directories:
   --sysconfdir=DIR        read-only single-machine data [PREFIX/etc]
   --sharedstatedir=DIR    modifiable architecture-independent data [PREFIX/com]
   --localstatedir=DIR     modifiable single-machine data [PREFIX/var]
-  --runstatedir=DIR       modifiable per-process data [LOCALSTATEDIR/run]
   --libdir=DIR            object code libraries [EPREFIX/lib]
   --includedir=DIR        C header files [PREFIX/include]
   --oldincludedir=DIR     C header files for non-gcc [/usr/include]
@@ -1535,6 +1524,9 @@ Optional Packages:
                           Use system supplied qemu PATH or qemu (taken from
                           $PATH) as qemu-xen device model instead of building
                           and installing our own version
+  --with-stubdom-qmp-proxy[=PATH]
+                          Use supplied binary PATH as a QMP proxy into
+                          stubdomain
   --with-system-seabios[=PATH]
                           Use system supplied seabios PATH instead of building
                           and installing our own version
@@ -3382,7 +3374,7 @@ else
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3428,7 +3420,7 @@ else
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3452,7 +3444,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3497,7 +3489,7 @@ else
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -3521,7 +3513,7 @@ rm -f core conftest.err conftest.$ac_objext conftest.$ac_ext
     We can't simply define LARGE_OFF_T to be 9223372036854775807,
     since some C++ compilers masquerading as C compilers
     incorrectly reject 9223372036854775807.  */
-#define LARGE_OFF_T ((((off_t) 1 << 31) << 31) - 1 + (((off_t) 1 << 31) << 31))
+#define LARGE_OFF_T (((off_t) 1 << 62) - 1 + ((off_t) 1 << 62))
   int off_t_is_large[(LARGE_OFF_T % 2147483629 == 721
 		       && LARGE_OFF_T % 2147483647 == 1)
 		      ? 1 : -1];
@@ -4548,6 +4540,24 @@ _ACEOF
 
 
 
+# Check whether --with-stubdom-qmp-proxy was given.
+if test "${with_stubdom_qmp_proxy+set}" = set; then :
+  withval=$with_stubdom_qmp_proxy;
+    stubdom_qmp_proxy="$withval"
+
+else
+
+    stubdom_qmp_proxy="$bindir/vchan-socket-proxy"
+
+fi
+
+
+cat >>confdefs.h <<_ACEOF
+#define STUBDOM_QMP_PROXY_PATH "$stubdom_qmp_proxy"
+_ACEOF
+
+
+
 # Check whether --with-system-seabios was given.
 if test "${with_system_seabios+set}" = set; then :
   withval=$with_system_seabios;
diff --git a/tools/configure.ac b/tools/configure.ac
index b6f8882be4..a9af0a21c6 100644
--- a/tools/configure.ac
+++ b/tools/configure.ac
@@ -194,6 +194,15 @@ AC_SUBST(qemu_xen)
 AC_SUBST(qemu_xen_path)
 AC_SUBST(qemu_xen_systemd)
 
+AC_ARG_WITH([stubdom-qmp-proxy],
+    AS_HELP_STRING([--with-stubdom-qmp-proxy@<:@=PATH@:>@],
+        [Use supplied binary PATH as a QMP proxy into stubdomain]),[
+    stubdom_qmp_proxy="$withval"
+],[
+    stubdom_qmp_proxy="$bindir/vchan-socket-proxy"
+])
+AC_DEFINE_UNQUOTED([STUBDOM_QMP_PROXY_PATH], ["$stubdom_qmp_proxy"], [QMP proxy path])
+
 AC_ARG_WITH([system-seabios],
     AS_HELP_STRING([--with-system-seabios@<:@=PATH@:>@],
        [Use system supplied seabios PATH instead of building and installing
diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 6829b4bdb5..6a26634ef9 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1200,7 +1200,11 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                       GCSPRINTF("%d", guest_domid), NULL);
     flexarray_append(dm_args, "-no-shutdown");
 
-    /* There is currently no way to access the QMP socket in the stubdom */
+    /*
+     * QMP access to qemu running in stubdomain is done over vchan. The
+     * stubdomain init script adds the appropriate monitor options for
+     * vchan-socket-proxy.
+     */
     if (!is_stubdom) {
         flexarray_append(dm_args, "-chardev");
         if (state->dm_monitor_fd >= 0) {
@@ -2205,6 +2209,23 @@ static void stubdom_pvqemu_unpaused(libxl__egc *egc,
 static void stubdom_xswait_cb(libxl__egc *egc, libxl__xswait_state *xswait,
                               int rc, const char *p);
 
+static void spawn_qmp_proxy(libxl__egc *egc,
+                            libxl__stub_dm_spawn_state *sdss);
+
+static void qmp_proxy_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
+                              const char *xsdata);
+
+static void qmp_proxy_startup_failed(libxl__egc *egc,
+                                     libxl__spawn_state *spawn,
+                                     int rc);
+
+static void qmp_proxy_detached(libxl__egc *egc,
+                               libxl__spawn_state *spawn);
+
+static void qmp_proxy_spawn_outcome(libxl__egc *egc,
+                                    libxl__stub_dm_spawn_state *sdss,
+                                    int rc);
+
 char *libxl__stub_dm_name(libxl__gc *gc, const char *guest_name)
 {
     return GCSPRINTF("%s-dm", guest_name);
@@ -2486,24 +2507,150 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
             goto out;
     }
 
+    sdss->qmp_proxy_spawn.ao = ao;
+    if (libxl__stubdomain_is_linux(&guest_config->b_info)) {
+        spawn_qmp_proxy(egc, sdss);
+    } else {
+        qmp_proxy_spawn_outcome(egc, sdss, 0);
+    }
+
+    return;
+
+out:
+    assert(ret);
+    qmp_proxy_spawn_outcome(egc, sdss, ret);
+}
+
+static void spawn_qmp_proxy(libxl__egc *egc,
+                            libxl__stub_dm_spawn_state *sdss)
+{
+    STATE_AO_GC(sdss->qmp_proxy_spawn.ao);
+    const uint32_t guest_domid = sdss->dm.guest_domid;
+    const uint32_t dm_domid = sdss->pvqemu.guest_domid;
+    const char *dom_path = libxl__xs_get_dompath(gc, dm_domid);
+    char **args;
+    int nr = 0;
+    int rc, logfile_w, null;
+
+    if (access(STUBDOM_QMP_PROXY_PATH, X_OK) < 0) {
+        LOGED(ERROR, guest_domid, "qmp proxy %s is not executable", STUBDOM_QMP_PROXY_PATH);
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    sdss->qmp_proxy_spawn.what = GCSPRINTF("domain %d device model qmp proxy", guest_domid);
+    sdss->qmp_proxy_spawn.pidpath = GCSPRINTF("%s/image/qmp-proxy-pid", dom_path);
+    sdss->qmp_proxy_spawn.xspath = DEVICE_MODEL_XS_PATH(gc, LIBXL_TOOLSTACK_DOMID,
+                                                        dm_domid, "/qmp-proxy-state");
+    sdss->qmp_proxy_spawn.timeout_ms = LIBXL_DEVICE_MODEL_START_TIMEOUT * 1000;
+    sdss->qmp_proxy_spawn.midproc_cb = libxl__spawn_record_pid;
+    sdss->qmp_proxy_spawn.confirm_cb = qmp_proxy_confirm;
+    sdss->qmp_proxy_spawn.failure_cb = qmp_proxy_startup_failed;
+    sdss->qmp_proxy_spawn.detached_cb = qmp_proxy_detached;
+
+    const int arraysize = 6;
+    GCNEW_ARRAY(args, arraysize);
+    args[nr++] = STUBDOM_QMP_PROXY_PATH;
+    args[nr++] = GCSPRINTF("--state-path=%s", sdss->qmp_proxy_spawn.xspath);
+    args[nr++] = GCSPRINTF("%u", dm_domid);
+    args[nr++] = GCSPRINTF("%s/device-model/%u/qmp-vchan", dom_path, guest_domid);
+    args[nr++] = (char*)libxl__qemu_qmp_path(gc, guest_domid);
+    args[nr++] = NULL;
+    assert(nr == arraysize);
+
+    logfile_w = libxl__create_qemu_logfile(gc, GCSPRINTF("qmp-proxy-%s",
+                                                         sdss->dm_config.c_info.name));
+    if (logfile_w < 0) {
+        rc = logfile_w;
+        goto out;
+    }
+    null = open("/dev/null", O_RDWR);
+    if (null < 0) {
+        LOGED(ERROR, guest_domid, "unable to open /dev/null");
+        rc = ERROR_FAIL;
+        goto out_close;
+    }
+
+    rc = libxl__spawn_spawn(egc, &sdss->qmp_proxy_spawn);
+    if (rc < 0)
+        goto out_close;
+    if (!rc) { /* inner child */
+        setsid();
+        libxl__exec(gc, null, null, logfile_w, STUBDOM_QMP_PROXY_PATH, args, NULL);
+        /* unreachable */
+    }
+
+    rc = 0;
+
+out_close:
+    if (logfile_w >= 0)
+        close(logfile_w);
+    if (null >= 0)
+        close(null);
+out:
+    if (rc)
+        qmp_proxy_spawn_outcome(egc, sdss, rc);
+}
+
+static void qmp_proxy_confirm(libxl__egc *egc, libxl__spawn_state *spawn,
+                              const char *xsdata)
+{
+    STATE_AO_GC(spawn->ao);
+
+    if (!xsdata)
+        return;
+
+    if (strcmp(xsdata, "running"))
+        return;
+
+    libxl__spawn_initiate_detach(gc, spawn);
+}
+
+static void qmp_proxy_startup_failed(libxl__egc *egc,
+                                     libxl__spawn_state *spawn,
+                                     int rc)
+{
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(spawn, *sdss, qmp_proxy_spawn);
+    qmp_proxy_spawn_outcome(egc, sdss, rc);
+}
+
+static void qmp_proxy_detached(libxl__egc *egc,
+                               libxl__spawn_state *spawn)
+{
+    libxl__stub_dm_spawn_state *sdss = CONTAINER_OF(spawn, *sdss, qmp_proxy_spawn);
+    qmp_proxy_spawn_outcome(egc, sdss, 0);
+}
+
+static void qmp_proxy_spawn_outcome(libxl__egc *egc,
+                                    libxl__stub_dm_spawn_state *sdss,
+                                    int rc)
+{
+    STATE_AO_GC(sdss->qmp_proxy_spawn.ao);
+    int need_pvqemu = libxl__need_xenpv_qemu(gc, &sdss->dm_config);
+
+    if (rc) goto out;
+
+    if (need_pvqemu < 0) {
+        rc = need_pvqemu;
+        goto out;
+    }
+
     sdss->pvqemu.spawn.ao = ao;
-    sdss->pvqemu.guest_domid = dm_domid;
     sdss->pvqemu.guest_config = &sdss->dm_config;
     sdss->pvqemu.build_state = &sdss->dm_state;
     sdss->pvqemu.callback = spawn_stubdom_pvqemu_cb;
-
-    if (!need_qemu) {
+    if (need_pvqemu) {
+        libxl__spawn_local_dm(egc, &sdss->pvqemu);
+    } else {
         /* If dom0 qemu not needed, do not launch it */
         spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, 0);
-    } else {
-        libxl__spawn_local_dm(egc, &sdss->pvqemu);
     }
 
     return;
 
 out:
-    assert(ret);
-    spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, ret);
+    assert(rc);
+    spawn_stubdom_pvqemu_cb(egc, &sdss->pvqemu, rc);
 }
 
 static void spawn_stubdom_pvqemu_cb(libxl__egc *egc,
diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index fef2cd4e13..c08af308fa 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -1260,10 +1260,20 @@ static void dm_destroy_cb(libxl__egc *egc,
     libxl__destroy_domid_state *dis = CONTAINER_OF(ddms, *dis, ddms);
     STATE_AO_GC(dis->ao);
     uint32_t domid = dis->domid;
+    uint32_t target_domid;
 
     if (rc < 0)
         LOGD(ERROR, domid, "libxl__destroy_device_model failed");
 
+    if (libxl_is_stubdom(CTX, domid, &target_domid) &&
+        libxl__stubdomain_is_linux_running(gc, target_domid)) {
+        char *path = GCSPRINTF("/local/domain/%d/image/qmp-proxy-pid", domid);
+
+        libxl__kill_xs_path(gc, path, "QMP Proxy");
+        /* qmp-proxy for stubdom registers target_domid's QMP sockets. */
+        libxl__qmp_cleanup(gc, target_domid);
+    }
+
     dis->drs.ao = ao;
     dis->drs.domid = domid;
     dis->drs.callback = devices_destroy_cb;
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index c939557b2e..41b51b07cd 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -4166,6 +4166,7 @@ typedef struct {
     libxl__destroy_domid_state dis;
     libxl__multidev multidev;
     libxl__xswait_state xswait;
+    libxl__spawn_state qmp_proxy_spawn;
 } libxl__stub_dm_spawn_state;
 
 _hidden void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state*);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:57:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarV9-0000qS-MI; Tue, 19 May 2020 01:57:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarV8-0000pU-NJ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:57:02 +0000
X-Inumbo-ID: edfca300-9973-11ea-b9cf-bc764e2007e4
Received: from mail-qk1-x742.google.com (unknown [2607:f8b0:4864:20::742])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edfca300-9973-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 01:56:18 +0000 (UTC)
Received: by mail-qk1-x742.google.com with SMTP id f189so12871719qkd.5
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=zYxVyaXUbBwkO6YpZAMxyMF5n/sB6urbfp9DtPJAZLw=;
 b=IEQEPkp6jaiAgX30wKH3RXdIuyHNp1qM8Fw1z/Z62607XhkY4lacJtOHt+hqwYcIWf
 aGrHGEik84VfUwSLpYvKSyVxG5+BgRGuXr6v3TxXyLG9ZHNJ1xEEi7INszUrd9l7G1ey
 Cs8p2M+35n0Bz1b1NldY6x3FKEwMy12GghFmAvPC84/Q3kFGhzMXiqb01UG75B5iqUt/
 p0mtcugDY/gGwmtXsz8tjq63gLqfg8hJFiRD5zlLSeBAkL4BwOGnf95oiOcQraz3GdRb
 7AU3g7qmzk6TywvzyhjJesjVIqp380CH/npcDCgj+2uV/xkCyznrpxWr5l9mnEnr1VMn
 igwA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=zYxVyaXUbBwkO6YpZAMxyMF5n/sB6urbfp9DtPJAZLw=;
 b=LwzaPSGcBjp9PChfNPfYhmRzuv7gSUT+e2uF2LBMupjbFp16pG4Jze29FWgdabRyPw
 T0Yrmg1BqIltqFTHcGBhnUZmF+c7CIGuK6/QgHi3aDQmygZaz1SHeLpZ/sLj2n53R2Yb
 7c2sFapL9qX0Wq6RlOEcpIij/HCLD0H7zCAyrRUuGxF9coTSjIgHZKQL5WEFq29cGOks
 Su/BbpNA/c/opyzuRafqY9pXmS9OXfqS4G1xaTr7fkXNHTrWh5PbT7wB5/eFJkAU/UHI
 tqrHywlKQ8oA5AAhWQtF9fMPv1aRBuViT3i3F/5E3O7tToYvk/hYI9bKiFTVoC7526S4
 pQ4Q==
X-Gm-Message-State: AOAM530AmB2Gjva3aMMkf5b1HoshhIMy0nUdWjU73Lk3bgV3Bsosfh46
 s//cwqOKqJRPYDzoHCxA1JNeXEHA
X-Google-Smtp-Source: ABdhPJwzwY9WPZYTCdyanB+pJDmCSgNFO0oQxnmRTaaX0i7nC3iWT7eVbJSXqPrdH6UsWw3+ZdfUEA==
X-Received: by 2002:a37:59c7:: with SMTP id n190mr8225930qkb.471.1589853377492; 
 Mon, 18 May 2020 18:56:17 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:16 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 15/19] libxl: require qemu in dom0 for multiple stubdomain
 consoles
Date: Mon, 18 May 2020 21:54:59 -0400
Message-Id: <20200519015503.115236-16-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Device model stubdomains (both Mini-OS + qemu-trad and linux + qemu-xen)
are always started with at least 3 consoles: log, save, and restore.
Until xenconsoled learns how to handle multiple consoles, this is needed
for save/restore support.

For Mini-OS stubdoms, this is a bug.  In practice, it works in most
cases because something else also triggers qemu in dom0: a vfb/vkb is
added if vnc/sdl/spice is enabled.

Additionally, a Linux-based stubdomain waits for all of its backends to
initialize during boot. The lack of some console backends results in a
stubdomain startup timeout.

This is a temporary patch until xenconsoled is improved.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
[Updated commit message with Marek's explanation from mailing list.]
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Changes in v6
 - Update commit message
Changes in v7
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 6a26634ef9..8801e9364e 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2494,7 +2494,11 @@ static void spawn_stub_launch_dm(libxl__egc *egc,
         }
     }
 
-    need_qemu = libxl__need_xenpv_qemu(gc, dm_config);
+    /*
+     * Until xenconsoled learns how to handle multiple consoles, require qemu
+     * in dom0 to serve consoles for a stubdomain - it requires at least 3 of them.
+     */
+    need_qemu = 1 || libxl__need_xenpv_qemu(gc, &sdss->dm_config);
 
     for (i = 0; i < num_console; i++) {
         libxl__device device;
@@ -2626,7 +2630,11 @@ static void qmp_proxy_spawn_outcome(libxl__egc *egc,
                                     int rc)
 {
     STATE_AO_GC(sdss->qmp_proxy_spawn.ao);
-    int need_pvqemu = libxl__need_xenpv_qemu(gc, &sdss->dm_config);
+    /*
+     * Until xenconsoled learns how to handle multiple consoles, require qemu
+     * in dom0 to serve consoles for a stubdomain - it requires at least 3 of them.
+     */
+    int need_pvqemu = 1 || libxl__need_xenpv_qemu(gc, &sdss->dm_config);
 
     if (rc) goto out;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:57:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarVF-0000vx-3z; Tue, 19 May 2020 01:57:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarVD-0000uM-Nv
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:57:07 +0000
X-Inumbo-ID: f1360638-9973-11ea-b9cf-bc764e2007e4
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1360638-9973-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 01:56:23 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id dh1so1897811qvb.13
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=h7aIlXvOP1lBOHwN5e2pe14rJQeMR42hM5D+KnY+oRg=;
 b=EMoFpo3VZpwrzTvutQCIzi+e5oMDXAf/1WVqqKsGMdaN+B6PPKLXvSJQ30kPoOncup
 8IVklOgU6XOROC1iMAvMXRX2ajS52STcAAiEzjM1BzVtaIUeH4Erfni7CtAaReIdtraP
 Ri1JABHCyznsDLRuMxJEhWtgWfX1PI2ICSpFxMqJ3y+t15SWRZYt77vM4lXN8ohwgpCE
 zhDaMH2yzTkRCSOCJcsF7chfFhpHDx5mCin4dz6xIoR9P95AJiQNGR2LFu0hstby1V17
 dvuQ7fO8+R9cWhsNHBQ4yDM/MNla0HpVj06PUlCGkDW6VfZ/vksDaZMGAYBcqsZaPJRJ
 amDA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=h7aIlXvOP1lBOHwN5e2pe14rJQeMR42hM5D+KnY+oRg=;
 b=AneQ7W4QTsIUeRODp00Wu/MsyXVr7MXue82ud9jB73UYAFe/44Wp+qhBPFZ/tI78vn
 tA4udW+xceRbrbxwDDeXv2RGinm5/Rq/+CgUFTzheU2/jNJai+d+GWH0CiJNghPCmOoD
 BPK10hIT0f8lLD49FeJsdxy516fWYUR2jTCoWMIPVd8Q1G0ZdIf9nKOmUYcTA6JPb8Nt
 PQQsF4vrzMGZY8smFIpCCig8RL9kIjLXisEZc5ZgcBWCuEcsUD11PFXmqEoIqTGQP6/6
 kK0SE98WvNkq3qNqns3lNvSx/HOck2wZI9lqvlxg+ol8SZv/ueZTYzx+lHbzFF8wByFV
 Taww==
X-Gm-Message-State: AOAM530/Zmh1P//++w2EP3OXGrQpLa5KLY2dwfkQcawvea+2MBUVyiiH
 GprfDnYOSke6A7R4nJl5EkKYMSVO
X-Google-Smtp-Source: ABdhPJy3v6xUElp1+dZ+UyZkFnEv8EEcgqtN7mGKsvOX6ZOSF0gdR6ZsfCJ4wXY6Uqs9zNd5eeYzQg==
X-Received: by 2002:a05:6214:42f:: with SMTP id
 a15mr18469050qvy.170.1589853383034; 
 Mon, 18 May 2020 18:56:23 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:22 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 16/19] libxl: ignore emulated IDE disks beyond the first 4
Date: Mon, 18 May 2020 21:55:00 -0400
Message-Id: <20200519015503.115236-17-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

QEMU supports only 4 emulated IDE disks; when given more (or disks with
higher indexes), it will fail to start. Since the disks are still
accessible via the PV interface, just ignore the emulated path and log
a warning instead of rejecting the configuration altogether.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 8801e9364e..86694f669d 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -1894,6 +1894,13 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
 
             if (disks[i].is_cdrom) {
+                if (disk > 4) {
+                    LOGD(WARN, guest_domid, "Emulated CDROM can be only one of the first 4 disks.\n"
+                         "Disk %s will be available via PV drivers but not as an "
+                         "emulated disk.",
+                         disks[i].vdev);
+                    continue;
+                }
                 drive = libxl__sprintf(gc,
                          "if=ide,index=%d,readonly=on,media=cdrom,id=ide-%i",
                          disk, dev_number);
@@ -1971,6 +1978,10 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                                                        &disks[i],
                                                        colo_mode);
                 } else {
+                    LOGD(WARN, guest_domid, "Only 4 emulated IDE disks are supported.\n"
+                         "Disk %s will be available via PV drivers but not as an "
+                         "emulated disk.",
+                         disks[i].vdev);
                     continue; /* Do not emulate this disk */
                 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:57:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarVK-00011a-FJ; Tue, 19 May 2020 01:57:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarVI-0000zO-Ng
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:57:12 +0000
X-Inumbo-ID: f23c3746-9973-11ea-b9cf-bc764e2007e4
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f23c3746-9973-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 01:56:25 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id g20so5764037qvb.9
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:25 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=IYMgysvcDxJ2qC+BMVljgkXJOptYz7V/ebvMGJu6l2I=;
 b=iSZ+2fPcl6pHnJEzYSwp6UecgywSI0Do0bUYZEoGlESIz/C2NFxfo+kao2piWSIJ1R
 2ss3oW3uGXdFjEgWoqiyeFRCxfe0TCV3jEtnTR4MnW5ZNw6REOc/hUprd3UmDu7svhov
 xcpESH85t4mv/RN9Db8AhFZMUmQzZbw6gNPAL6lakkYV8iiSN18ykFuCGo35tg0zjgM9
 mScKM/4KLC7jS3mJugI7VN3k5+fNGiV6NpM6Mq6uUuZEPwYxsqY1N79n2Kt852amqMFY
 bUSaaJobFK0VObOJg+EL3mSMBxsYYCLS6bOJzTXqWkY4BLnRWJEa9cjAPT5v++HSAGkA
 ufAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=IYMgysvcDxJ2qC+BMVljgkXJOptYz7V/ebvMGJu6l2I=;
 b=AoN7yDf64PgWvuwONrfeRosCDcKdlo8/8nYCB54Auo1bTJKBo3ieJKmGKAEnMzSM6R
 ViBMR2yvkCu9L65P5vFXHDwttGtT7BXr1bk7/oPdV54w14IyMzp1WqSTNUuYCzMQ0or8
 9sf8WEU9eKzkMFDZd6nuT3OulqDTTsQrY6O7tuu8IEdbieLyhiFHt5iNm3qRR45RZP85
 lVB73jLCD6y1AqiCiz32RLu40TkGDvkViq0J3tXgnvYnNwz3Prkz9C4H8ufFXV8YUys0
 DeEVp6A9acH1eU8rDDc2wt3S4w71TwOTMgbcHQ/Gap3v3QUtL4tAK6alAV8D4a/v1iys
 rBfg==
X-Gm-Message-State: AOAM532WcEF7AxVTSNIRGhqP8rHhwezFqcBOy4ze5XgOor1rwZCfAFRo
 GNjSTzWrV9qnybCHeR1YwG++u8rD
X-Google-Smtp-Source: ABdhPJx+JJQZcXHO93LyrGJBq5x/P47XOfWXMJ7IPmronv27b21xMNl7q4ygzJNCos1CMGew2EZ6pg==
X-Received: by 2002:ad4:40ca:: with SMTP id x10mr19055315qvp.220.1589853384762; 
 Mon, 18 May 2020 18:56:24 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:24 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 17/19] libxl: consider also qemu in stubdomain in
 libxl__dm_active check
Date: Mon, 18 May 2020 21:55:01 -0400
Message-Id: <20200519015503.115236-18-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Since qemu-xen can now run in a stubdomain too, also handle that case
when checking its state.

Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c | 10 ++++++++--
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 86694f669d..454a2815ed 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -3734,12 +3734,18 @@ out:
 
 int libxl__dm_active(libxl__gc *gc, uint32_t domid)
 {
-    char *pid, *path;
+    char *pid, *dm_domid, *path;
 
     path = GCSPRINTF("/local/domain/%d/image/device-model-pid", domid);
     pid = libxl__xs_read(gc, XBT_NULL, path);
 
-    return pid != NULL;
+    if (pid)
+        return true;
+
+    path = GCSPRINTF("/local/domain/%d/image/device-model-domid", domid);
+    dm_domid = libxl__xs_read(gc, XBT_NULL, path);
+
+    return dm_domid != NULL;
 }
 
 int libxl__dm_check_start(libxl__gc *gc, libxl_domain_config *d_config,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:57:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarVO-00015l-RS; Tue, 19 May 2020 01:57:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarVN-00014l-NM
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:57:17 +0000
X-Inumbo-ID: f36e41b8-9973-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f36e41b8-9973-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 01:56:27 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id o19so9895427qtr.10
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=aXBQgi2sAye4H55aK3umMyFvWsC+VqqA4xSnBHzfcsA=;
 b=mU+6BsnXFny2vtjKJ7i2MqzmjsByZT7LuGqKAmoJxuA8CfzxfN68x19jSieBJ9mFvy
 sH7P/lMzGGLi7uGGAx+jhC0nEko/+P2bZWYr7+SxqPqEi6XSQ2CgzAaIhOJYOcgHpvKG
 Prfgt9bJ6k1IHH8YuyPXhKNvYX+GdoNAZvn3lkPm/qjZ8uHGFi0Us8OXTMLJcXUSsSTg
 owlWvUDk/l2/FHzsjFVj4UkeZqINJ5mge2RFk6PV304ssF6OyVmKqMIDRsDGuQPo5BXv
 CVA1oXOqw8ZmaXYXIPnoak4/to9zpCRJS6R+OOyfIxYkNLfdCoIi2XWTwZKkWCMwCEaP
 9Fcg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=aXBQgi2sAye4H55aK3umMyFvWsC+VqqA4xSnBHzfcsA=;
 b=qHYMRcivoubHaHddF5HOAs+FFjb0hOLNwknqFP+VnMO/18181IUN28bKZHHm0CmVPj
 +zMKNb32oh91eEw+3BImrpCFhtFV0mUvKQm2w+hZRiXg96SLgjOY1qY7VZthrCP9XFLT
 z3a8hXNgOoy3pH0BCdrDFy5Ayh6gi2zcs+45hzIvLJwMABVzpHPh264lBBUHohQLtwa3
 nR7eMAd1hOFbPZK4+IRiLNg9yFYv6poAyhkhNur0rkLE7kHL6XltsxBrPpbNKL0EW/jy
 vtxYj4pj+m34k6HTcZ7Z39rtPVMA1qXUhUmRkzmppmFaI7PYkUBLzGJRD2Tsa4FauZem
 I2Eg==
X-Gm-Message-State: AOAM533Uq1ciD8pMPsQHOMd3dS44KIpnkmfUCyZmRiPe8M9eWzKP7jy1
 rlDGLmR20mn/k2TAgLr66EgBB0cK
X-Google-Smtp-Source: ABdhPJyOiC2OceB9Rb7cENPNDP6xt/sf5xjwv66y7Sna7Gqonzk3hgNejlfWlv7lb7Of+q2Ktn2LQw==
X-Received: by 2002:ac8:3693:: with SMTP id a19mr12844368qtc.226.1589853386733; 
 Mon, 18 May 2020 18:56:26 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:25 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 18/19] docs: Add device-model-domid to xenstore-paths
Date: Mon, 18 May 2020 21:55:02 -0400
Message-Id: <20200519015503.115236-19-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Document the device-model-domid node used with a device model stubdomain.
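
For reference, the node documented below can be inspected from dom0
with the xenstore CLI; a minimal sketch, where the domain ID is a
made-up example and not taken from this patch:

```shell
domid=5  # example guest domain ID (assumption for illustration)
path="/local/domain/${domid}/image/device-model-domid"
echo "$path"
# On a live Xen host one would then read the node with:
#   xenstore-read "$path"
```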

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 docs/misc/xenstore-paths.pandoc | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/docs/misc/xenstore-paths.pandoc b/docs/misc/xenstore-paths.pandoc
index a152f5ea68..766e8008dc 100644
--- a/docs/misc/xenstore-paths.pandoc
+++ b/docs/misc/xenstore-paths.pandoc
@@ -148,6 +148,11 @@ The domain's own ID.
 The process ID of the device model associated with this domain, if it
 has one.
 
+#### ~/image/device-model-domid = INTEGER   [INTERNAL]
+
+The domain ID of the device model stubdomain associated with this domain,
+if it has one.
+
 #### ~/cpu/[0-9]+/availability = ("online"|"offline") [PV]
 
 One node for each virtual CPU up to the guest's configured
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 01:57:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 01:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jarVU-0001Ak-40; Tue, 19 May 2020 01:57:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jarVS-00019O-NQ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 01:57:22 +0000
X-Inumbo-ID: f47caa90-9973-11ea-9887-bc764e2007e4
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f47caa90-9973-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 01:56:29 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id x12so9906678qts.9
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 18:56:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=9lx2s1mvPB2z7P0TCofZ4ZdaYHQdnvDFnijuTkR+OyI=;
 b=DQA/lDYikmIzpyMpAWL2Ah5GnWBzNAHqJQmCFJLRMgO19L1i6DmfI8Sfarert7aqF+
 YJp+hF5fIZ6nCcfx9JwziCSXHP837ryCwPcafLU9RNqq5Cu0S64EOPWg6qd3EhV//SoD
 qxvB3d6fF/rb81+GioysD7qWJ3/nHLu5lO1WIkx+Ijyqpb0Bwr+T5541GMwMsdO7StDr
 dwbKcwkKg7nkoSNXRM6EkQ8Qt8CUU1aNjiXhaPAPZcBsRRxnLvLDR51h7TpvZtBfYVSd
 GyDCBb11ezV4FBFARzm2aM4gvyk+6XTqx1LqxUBO1ri9b6FB1ediTL0cemyaxDy9EX+/
 q/ag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=9lx2s1mvPB2z7P0TCofZ4ZdaYHQdnvDFnijuTkR+OyI=;
 b=OEit5RLGrWOiD74QYWZuObZb0rHnaXsASa9sRK4kgOauaAUv/rKZ5kLX1RkEDVx2j0
 AIDz0zQ6wYk6u5hPoRgNNyP5ndDjiNDYVsyOT/0qDsNqWFb1TSCS1b1IDN+fyYoVQ9xv
 lLquNrBfYzzxjh24YL0IAX94flbH7qA9ttG/iiZ1+vLrNzt/RBv0No9QHjgy1F6SK7x8
 XPwLvepEbdPq3uKpYvqR39NR0EuOLd1MNyve7BEi2YKupUi+bcj3iFt6s/0UY2g6pMgZ
 hhO4BBg1fUyhaM7AbFeJy/9xMIUrbBna6zrJ+sUy3pnTto4NVRrkBIHDog4A9JV72WkL
 LDzw==
X-Gm-Message-State: AOAM532x/B5+h3C9kpxFlJvVjwpRey+ZJOumgZRKJM4RI0tMTq55zMeS
 jbkNRV2Fx+3/aeC3+3wUEmpgqY9U
X-Google-Smtp-Source: ABdhPJw5yZkFl9IpXV431nEJj85p8N0Z6g2agmpa0BQy0bXY2sJpEBqPeZOKzx+Y+vo9oz0spg//vw==
X-Received: by 2002:ac8:6d3a:: with SMTP id r26mr19493770qtu.65.1589853388518; 
 Mon, 18 May 2020 18:56:28 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:e463:db9c:c6eb:4544])
 by smtp.gmail.com with ESMTPSA id q2sm9731898qkn.116.2020.05.18.18.56.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 18 May 2020 18:56:27 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 19/19] libxl: Check stubdomain kernel & ramdisk presence
Date: Mon, 18 May 2020 21:55:03 -0400
Message-Id: <20200519015503.115236-20-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Just outside the patch context is the following comment for
libxl__domain_make:
/* fixme: this function can leak the stubdom if it fails */

When the stubdomain kernel or ramdisk is not present, the domid and
stubdomain name will indeed be leaked.  Avoid the leak by checking for
the files' presence and erroring out when they are absent.  This
doesn't fix all cases, but it avoids a big one when using a Linux
device model stubdomain.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

---
Changes in v6:
 - Add Acked-by: Ian Jackson
---
 tools/libxl/libxl_dm.c | 16 ++++++++++++++++
 1 file changed, 16 insertions(+)

diff --git a/tools/libxl/libxl_dm.c b/tools/libxl/libxl_dm.c
index 454a2815ed..f2dc5696b9 100644
--- a/tools/libxl/libxl_dm.c
+++ b/tools/libxl/libxl_dm.c
@@ -2327,6 +2327,22 @@ void libxl__spawn_stub_dm(libxl__egc *egc, libxl__stub_dm_spawn_state *sdss)
         dm_config->num_vkbs = 1;
     }
 
+    if (guest_config->b_info.stubdomain_kernel &&
+        access(guest_config->b_info.stubdomain_kernel, R_OK) != 0) {
+        LOGED(ERROR, guest_domid, "could not access stubdomain kernel %s",
+              guest_config->b_info.stubdomain_kernel);
+        ret = ERROR_INVAL;
+        goto out;
+    }
+
+    if (guest_config->b_info.stubdomain_ramdisk &&
+        access(guest_config->b_info.stubdomain_ramdisk, R_OK) != 0) {
+        LOGED(ERROR, guest_domid, "could not access stubdomain ramdisk %s",
+              guest_config->b_info.stubdomain_ramdisk);
+        ret = ERROR_INVAL;
+        goto out;
+    }
+
     stubdom_state->pv_kernel.path = guest_config->b_info.stubdomain_kernel;
     stubdom_state->pv_ramdisk.path = guest_config->b_info.stubdomain_ramdisk;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 03:09:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 03:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jascz-0008BR-JL; Tue, 19 May 2020 03:09:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TpIb=7B=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jascx-0008BI-CZ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 03:09:11 +0000
X-Inumbo-ID: 1bf0357e-997e-11ea-ae69-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bf0357e-997e-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 03:09:10 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id y3so14150987wrt.1
 for <xen-devel@lists.xenproject.org>; Mon, 18 May 2020 20:09:10 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=u2EBXx7DlSGXsMywMu3hJISzU5hgPlWEH9dqjMZDjGU=;
 b=W9n9BXw3t6AQK2PYEVetOerPW9kf0cmuLW8j9DKk4oGJAHvN7iMb7rUAbcy2jmCGDX
 7IlhvpE3DZuAsnstOgOtK41jWwaPqvM21JPx8LYbuRCnhhVJRsiLmp9JjefiUaxOEraz
 HLAcDi0YtBiCnmzV5uzqbpP+gaVkRgny6B3RRDWMzpmYKTYLjl1xFDAbSlu67crOGFb8
 cfPB9w9rdTGFvmfHVqa+ckueNin6cqGOQv2RHfkPPTsFnl/EIt8WFuGavT5760HfB8hB
 Kr4oJ0u08VyvG4zNzYENeROLFsrHhggc7oU6vqASU/R67IhxBrGdA9GEzPl53eXYNaPv
 Jfbg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=u2EBXx7DlSGXsMywMu3hJISzU5hgPlWEH9dqjMZDjGU=;
 b=F0QPucgp2RUtXGmssqSEBYO/R52aKqFZgLOY4y4umuAY4Kzw2ruH23aNdpo7LZMEH5
 N4EV2qn9IPsGkK5mCyapNAJQxdmq9RgI6ih3INcPhDbLh0jr2wI3O3pd6Fu8LwT7LXcJ
 vvzo+ZH7UqQJo1fhyta5iWhkhwVzNGWD6vWYSTpViJZTpc20CZtY7f+CNk6N2a1IaZmj
 /EvnJPNQSu6PcOCIppC+h5UiRR0GXZJRCvkFHe9u2D2hp56Rh8uoO0rjzdQhlNX9E6+Y
 OFRMYjMSewxVjq6PzgsPZQ2SmZ+rWkS/JYbUpNBtYrK46q+/6k3Gvme2L6nA4KzOyYsF
 gvqA==
X-Gm-Message-State: AOAM531vuUp7DH3zxUShCVWrOtlNkowBaOvlMSifpfKMJ2n0/B6uGduU
 p0TMCpDKYQjencPDRqCKJV2hCRRs5JN4ZKcbV2Q=
X-Google-Smtp-Source: ABdhPJzIuz5yE2AFGwwLNyK1jTBd+NcZ4ELuTTRJ1BeOMDyfiiM5/j9a523fdCKWBQWQDBdXEDJA9HPmbbTX+5AW4gs=
X-Received: by 2002:adf:a3c5:: with SMTP id m5mr24539624wrb.390.1589857749732; 
 Mon, 18 May 2020 20:09:09 -0700 (PDT)
MIME-Version: 1.0
References: <20200518113008.15422-1-julien@xen.org>
In-Reply-To: <20200518113008.15422-1-julien@xen.org>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Mon, 18 May 2020 21:08:33 -0600
Message-ID: <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
To: Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, minyard@acm.org,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> Hi all,
>
> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> only use the first GB of memory.
>
> This is because several devices cannot DMA above 1GB but Xen doesn't
> necessarily allocate memory for Dom0 below 1GB.
>
> This small series is trying to address the problem by allowing a
> platform to restrict where Dom0 banks are allocated.
>
> This is also a candidate for Xen 4.14. Without it, a user will not be
> able to use all the RAM on the Raspberry Pi 4.
>
> This series has only been slightly tested. I would appreciate more
> testing on the Raspberry Pi 4 to confirm this removes the restriction.

Hi Julien,
Could you post a git branch somewhere? I can try this on my rpi4 that
already runs 4.13.

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Tue May 19 04:35:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 04:35:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jatxk-0007Ev-Uz; Tue, 19 May 2020 04:34:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=BLjJ=7B=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jatxk-0007Eq-F3
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 04:34:44 +0000
X-Inumbo-ID: 0f89a566-998a-11ea-9887-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 0f89a566-998a-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 04:34:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1589862883;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=DfnN/w0SbYD3vSZtzIfW4lXKbb9M26zh5YzR9hzaFVI=;
 b=Dm6LnZnKoT4it72L2nKFzS9246GrP1xPPv6gQkBCTZmNoPQ+qTIxbOw2uXybnYpztaY+qR
 2pJMwnjSRJg4aqBgvxbU0uKTjyQl5jJ97oqJFo6YehDMpSH3F70aAh8TxkQCsVZkl0GbyV
 e1witRVGlsCGFxhoUFUY7EMstvMP9JE=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-365-m6FuIhfxMOmghIZJQdusCw-1; Tue, 19 May 2020 00:34:31 -0400
X-MC-Unique: m6FuIhfxMOmghIZJQdusCw-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B63A6A0BD7;
 Tue, 19 May 2020 04:34:27 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-32.ams2.redhat.com
 [10.36.112.32])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id A0F5E19C4F;
 Tue, 19 May 2020 04:34:20 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 2FD6311358BC; Tue, 19 May 2020 06:34:19 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: =?utf-8?Q?C=C3=A9dric?= Le Goater <clg@kaod.org>
Subject: Re: [PATCH v3 0/3] various: Remove unnecessary casts
References: <20200512070020.22782-1-f4bug@amsat.org>
 <871rnlsps6.fsf@dusky.pond.sub.org>
 <8791b385-8493-f81d-5ee3-cca5b8559c27@redhat.com>
 <87imgt9ycp.fsf@dusky.pond.sub.org>
 <2f4607cf-90a9-ca9a-4ef6-a8358631cdf0@kaod.org>
Date: Tue, 19 May 2020 06:34:19 +0200
In-Reply-To: <2f4607cf-90a9-ca9a-4ef6-a8358631cdf0@kaod.org>
 (=?utf-8?Q?=22C=C3=A9dric?= Le
 Goater"'s message of "Mon, 18 May 2020 15:21:52 +0200")
Message-ID: <87k1187dbo.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 John Snow <jsnow@redhat.com>, David Gibson <david@gibson.dropbear.id.au>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Paolo Bonzini <pbonzini@redhat.com>, Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Cédric Le Goater <clg@kaod.org> writes:

> On 5/18/20 3:17 PM, Markus Armbruster wrote:
>> Paolo Bonzini <pbonzini@redhat.com> writes:
>>
>>> On 15/05/20 07:58, Markus Armbruster wrote:
>>>> Philippe Mathieu-Daudé <f4bug@amsat.org> writes:
>>>>
>>>>> Remove unnecessary casts using coccinelle scripts.
>>>>>
>>>>> The CPU()/OBJECT() patches don't introduce logical change,
>>>>> The DEVICE() one removes various OBJECT_CHECK() calls.
>>>> Queued, thanks!
>>>>
>>>> Managing expectations: I'm not a QOM maintainer, I don't want to become
>>>> one, and I don't normally queue QOM patches :)
>>>>
>>>
>>> I want to be again a QOM maintainer, but it's not the best time for me
>>> to be one.  So thanks for picking up my slack.
>>
>> You're welcome :)
>
> Could you help me getting this patch merged ? :)
>
> http://patchwork.ozlabs.org/project/qemu-devel/patch/20200404153340.164861-1-clg@kaod.org/

I have more QOM patches in the pipe, and I may well post another QOM
pull request while Paolo is busy with other stuff.  I'll consider
including other QOM patches then.  Non-trivial ones need an R-by from
Paolo, Daniel or Eduardo.



From xen-devel-bounces@lists.xenproject.org Tue May 19 05:17:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 05:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaud6-0002Sa-GG; Tue, 19 May 2020 05:17:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jaud5-0002SV-4k
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 05:17:27 +0000
X-Inumbo-ID: 03b768e4-9990-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03b768e4-9990-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 05:17:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EFFBW+K4DyY5EB9PycAHP++ZRgGzhz86mcQ1VzNkSKQ=; b=I+jtIOcH5hWaPiqpz1ISmuU0a
 FcUbCVnsItxG80cmOqbolIiy2EkrvCXsGMAu1awvh43e3xO3V+HGJMDxhtBiz3Fv+7wOE0R7Ut7MI
 jgMXPxFVT9JIuDx4hSDmZeVatA+yn4lvLX4X995f5HQRNy2/jw/fwqsFYMMIAJn0MxtKY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaucx-00070I-MF; Tue, 19 May 2020 05:17:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jaucx-0007PM-AV; Tue, 19 May 2020 05:17:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jaucx-0000Jc-9x; Tue, 19 May 2020 05:17:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150235-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150235: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=a28c9c8c9fc42484efe1bf5a77affe842e54e38b
X-Osstest-Versions-That: qemuu=debe78ce14bf8f8940c2bdf3ef387505e9e035a9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 05:17:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150235 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150235/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150215
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150215
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150215
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150215
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150215
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150215
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150215
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a28c9c8c9fc42484efe1bf5a77affe842e54e38b
baseline version:
 qemuu                debe78ce14bf8f8940c2bdf3ef387505e9e035a9

Last test of basis   150215  2020-05-16 13:07:42 Z    2 days
Testing same since   150235  2020-05-18 19:06:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Bulekov <alxndr@bu.edu>
  BALATON Zoltan <balaton@eik.bme.hu>
  Gerd Hoffmann <kraxel@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   debe78ce14..a28c9c8c9f  a28c9c8c9fc42484efe1bf5a77affe842e54e38b -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue May 19 06:06:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 06:06:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1javOY-0006cX-5C; Tue, 19 May 2020 06:06:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1javOX-0006cS-2x
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 06:06:29 +0000
X-Inumbo-ID: dd250da6-9996-11ea-a8d9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd250da6-9996-11ea-a8d9-12813bfff9fa;
 Tue, 19 May 2020 06:06:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=DbhFhSdDjfsVzwpqrsYNAz+tyueCQUBCca0kGMhGIQM=; b=3iSg1U57nkbQh4c/fnGjEdDwo
 q5iMfmtU03cKf7NAkExf7/5sLWjMg2J+DDro8cYQmFnyznIkU9zpo9x1ocoYi2N5Fl8SyKnptk1mB
 UFY7ovagDBXxKtn7jj8bA7CZmqs4sSdV1IESNvgclqAbjjEc6dxxoe2S6oHzz12C8k0IE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1javOP-00083T-KX; Tue, 19 May 2020 06:06:21 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1javOP-0001Yz-Bt; Tue, 19 May 2020 06:06:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1javOP-0005mZ-Aw; Tue, 19 May 2020 06:06:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150234-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150234: regressions - FAIL
X-Osstest-Failures: xen-unstable:build-arm64-xsm:xen-build:fail:regression
 xen-unstable:test-xtf-amd64-amd64-5:xtf/test-hvm64-lbr-tsx-vmentry:fail:regression
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=97fb0253e6c2f2221bfd0895b7ffe3a99330d847
X-Osstest-Versions-That: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 06:06:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150234 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150234/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150227
 test-xtf-amd64-amd64-5 51 xtf/test-hvm64-lbr-tsx-vmentry fail REGR. vs. 150227

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150227
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 150227

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150227
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150227
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150227
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150227
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150227
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  97fb0253e6c2f2221bfd0895b7ffe3a99330d847
baseline version:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f

Last test of basis   150227  2020-05-18 01:51:25 Z    1 days
Testing same since   150234  2020-05-18 18:06:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 97fb0253e6c2f2221bfd0895b7ffe3a99330d847
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sat May 16 19:50:45 2020 +0100

    x86/hvm: Fix shifting in stdvga_mem_read()
    
    stdvga_mem_read() has a return type of uint8_t, which promotes to int rather
    than unsigned int.  Shifting by 24 may hit the sign bit.
    
    Spotted by Coverity.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 3d6e92e309987c9e33177c9ccd155e58dbd5d0db
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sat May 16 13:10:07 2020 +0100

    x86/hvm: Fix memory leaks in hvm_copy_context_and_params()
    
    Any error from hvm_save() or hvm_set_param() leaks the c.data allocation.
    
    Spotted by Coverity.
    
    Fixes: 353744830 "x86/hvm: introduce hvm_copy_context_and_params"
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 19 07:01:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawFq-00035C-Kx; Tue, 19 May 2020 07:01:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jawFp-000357-Mm
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:01:33 +0000
X-Inumbo-ID: 8e46c9a6-999e-11ea-a8df-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e46c9a6-999e-11ea-a8df-12813bfff9fa;
 Tue, 19 May 2020 07:01:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=r3LviPeUvYOhXQbOefjH9ItfI+/lmsl/dmB5pN5tsEE=; b=F/XSkXpLT0FKJnnj/ocammxG9
 BjryR8UBRiwaXgdns2M6CVPX8XmAyBb8DZjS+/BuMnAOY5IJuSsWjEk6PRQHK0710FZNkfx4QmFOG
 pMYK+vtHKuBdNX1FfF2GGR0ekbdthjGR+BUR/+JunoUMhNcPJe64NV0VLsI9Zby8dhlR8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jawFh-0000lF-Gd; Tue, 19 May 2020 07:01:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jawFh-0002tI-4g; Tue, 19 May 2020 07:01:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jawFh-0007in-41; Tue, 19 May 2020 07:01:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150237-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150237: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=c0e04c2e62957fe872b5bc3d89d5b1d95f10450c
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 07:01:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150237 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150237/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              c0e04c2e62957fe872b5bc3d89d5b1d95f10450c
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  123 days
Failing since        146211  2020-01-18 04:18:52 Z  122 days  113 attempts
Testing same since   150237  2020-05-19 04:20:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18502 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZ0-0004nf-I5; Tue, 19 May 2020 07:21:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawYz-0004nX-EI
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:21 +0000
X-Inumbo-ID: 506b6198-99a1-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 506b6198-99a1-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 07:21:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5447CB210;
 Tue, 19 May 2020 07:21:12 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 03/12] docs: add feature document for Xen hypervisor
 sysfs-like support
Date: Tue, 19 May 2020 09:20:57 +0200
Message-Id: <20200519072106.26894-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.

In the beginning there should be only basic support: entries can be
added by the hypervisor itself only, and a simple hypercall interface
allows reading the data.

Add a feature document to serve as the basis for a discussion of the
desired functionality and the entries to add.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
V1:
- remove the "--" prefixes of the sub-commands of the user tool
  (Jan Beulich)
- rename xenfs to xenhypfs (Jan Beulich)
- add "tree" and "write" options to user tool

V2:
- move example tree to the paths description (Ian Jackson)
- specify allowed characters for keys and values (Ian Jackson)

V3:
- correct introduction (writable entries)

V4:
- add list specification
- add entry example (Julien Grall)
- correct date and Xen version (Julien Grall)
- add ARM64 as possible architecture (Julien Grall)
- add version description to the feature doc (Jan Beulich)

V8:
- clarify syntax used in hypfs-paths.pandoc (George Dunlap)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/features/hypervisorfs.pandoc |  92 +++++++++++++++++++++++++
 docs/misc/hypfs-paths.pandoc      | 107 ++++++++++++++++++++++++++++++
 2 files changed, 199 insertions(+)
 create mode 100644 docs/features/hypervisorfs.pandoc
 create mode 100644 docs/misc/hypfs-paths.pandoc

diff --git a/docs/features/hypervisorfs.pandoc b/docs/features/hypervisorfs.pandoc
new file mode 100644
index 0000000000..a0a0ead057
--- /dev/null
+++ b/docs/features/hypervisorfs.pandoc
@@ -0,0 +1,92 @@
+% Hypervisor FS
+% Revision 1
+
+\clearpage
+
+# Basics
+---------------- ---------------------
+         Status: **Supported**
+
+  Architectures: all
+
+     Components: Hypervisor, toolstack
+---------------- ---------------------
+
+# Overview
+
+The Hypervisor FS is a hierarchical name-value store for reporting
+information to guests, especially dom0. It is similar to the Linux
+kernel's sysfs. Entries and directories are created by the hypervisor,
+while the toolstack is able to use a hypercall to query the entry
+values or (if allowed by the hypervisor) to modify them.
+
+# User details
+
+With:
+
+    xenhypfs ls <path>
+
+the user can list the entries of a specific path of the FS. Using:
+
+    xenhypfs cat <path>
+
+the content of an entry can be retrieved. Using:
+
+    xenhypfs write <path> <string>
+
+a writable entry can be modified. With:
+
+    xenhypfs tree
+
+the complete Hypervisor FS entry tree can be printed.
+
+The FS paths are documented in `docs/misc/hypfs-paths.pandoc`.
+
+# Technical details
+
+Access to the hypervisor filesystem is done via the new stable hypercall
+__HYPERVISOR_filesystem_op. This hypercall supports a sub-command
+XEN_HYPFS_OP_get_version which will return the highest version of the
+interface supported by the hypervisor. Additions to the interface need
+to bump the interface version. The hypervisor is required to support the
+previous interface versions, too (this implies that additions will always
+require new sub-commands in order to allow the hypervisor to decide which
+version of the interface to use).
+
+* hypercall interface specification
+    * `xen/include/public/hypfs.h`
+* hypervisor internal files
+    * `xen/include/xen/hypfs.h`
+    * `xen/common/hypfs.c`
+* `libxenhypfs`
+    * `tools/libs/libxenhypfs/*`
+* `xenhypfs`
+    * `tools/misc/xenhypfs.c`
+* path documentation
+    * `docs/misc/hypfs-paths.pandoc`
+
+# Testing
+
+Any new parameters or hardware mitigations should be verified to show up
+correctly in the filesystem.
+
+# Areas for improvement
+
+* More detailed access rights
+* Entries per domain and/or per cpupool
+
+# Known issues
+
+* None
+
+# References
+
+* None
+
+# History
+
+------------------------------------------------------------------------
+Date       Revision Version  Notes
+---------- -------- -------- -------------------------------------------
+2020-01-23 1        Xen 4.14 Document written
+---------- -------- -------- -------------------------------------------
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
new file mode 100644
index 0000000000..39539fa1b5
--- /dev/null
+++ b/docs/misc/hypfs-paths.pandoc
@@ -0,0 +1,107 @@
+# Xenhypfs Paths
+
+This document attempts to define all the paths which are available
+in the Xen hypervisor file system (hypfs).
+
+The hypervisor file system can be accessed via the xenhypfs tool.
+
+## Notation
+
+The hypervisor file system is similar to the Linux kernel's sysfs.
+In this document directories are always specified with a trailing "/".
+
+The following notation conventions apply:
+
+        DIRECTORY/
+
+        PATH = VALUES [TAGS]
+
+The first syntax defines a directory. It normally contains related
+entries and the general scope of the directory is described.
+
+The second syntax defines a file entry containing values which are
+either set by the hypervisor or, if the file is writable, can be set
+by the user.
+
+PATH can contain simple regex constructs following the Perl-compatible
+regexp syntax described in pcre(3) or perlre(1).
+
+A hypervisor file system entry name can be any 0-delimited byte string
+not containing any '/' character. The names "." and ".." are reserved
+for file system internal use.
+
+VALUES are strings and can take the following forms (note that this represents
+only the syntax used in this document):
+
+* STRING -- an arbitrary 0-delimited byte string.
+* INTEGER -- An integer, in decimal representation unless otherwise
+  noted.
+* "a literal string" -- literal strings are contained within quotes.
+* (VALUE | VALUE | ... ) -- a set of alternatives. Alternatives are
+  separated by a "|" and all the alternatives are enclosed in "(" and
+  ")".
+* {VALUE, VALUE, ... } -- a list of possible values separated by "," and
+  enclosed in "{" and "}".
+
+Additional TAGS may follow as a comma separated set of the following
+tags enclosed in square brackets.
+
+* w -- Path is writable by the user. This capability is usually
+  limited to the control domain (e.g. dom0).
+* ARM | ARM32 | ARM64 | X86 -- Path is available for the respective
+  architecture only.
+* PV -- Path is valid for PV-capable hypervisors only.
+* HVM -- Path is valid for HVM-capable hypervisors only.
+* CONFIG_* -- Path is valid only in case the hypervisor was built with
+  the respective config option.
+
+So an entry could look like this:
+
+    /cpu-bugs/active-pv/xpti = ("No"|{"dom0", "domU", "PCID-on"}) [w,X86,PV]
+
+Possible values would be "No" or a list of "dom0", "domU", and "PCID-on" with
+the list elements separated by spaces, e.g. "dom0 PCID-on".
+The entry would be writable and it would exist on X86 only and only if the
+hypervisor is configured to support PV guests.
+
+## Example
+
+A populated Xen hypervisor file system might look like the following example:
+
+    /
+        buildinfo/           directory containing build-time data
+            config           contents of .config file used to build Xen
+        cpu-bugs/            x86: directory of cpu bug information
+            l1tf             "Vulnerable" or "Not vulnerable"
+            mds              "Vulnerable" or "Not vulnerable"
+            meltdown         "Vulnerable" or "Not vulnerable"
+            spec-store-bypass "Vulnerable" or "Not vulnerable"
+            spectre-v1       "Vulnerable" or "Not vulnerable"
+            spectre-v2       "Vulnerable" or "Not vulnerable"
+            mitigations/     directory of mitigation settings
+                bti-thunk    "N/A", "RETPOLINE", "LFENCE" or "JMP"
+                spec-ctrl    "No", "IBRS+" or "IBRS-"
+                ibpb         "No" or "Yes"
+                l1d-flush    "No" or "Yes"
+                md-clear     "No" or "VERW"
+                l1tf-barrier "No" or "Yes"
+            active-hvm/      directory for mitigations active in hvm domains
+                msr-spec-ctrl "No" or "Yes"
+                rsb          "No" or "Yes"
+                eager-fpu    "No" or "Yes"
+                md-clear     "No" or "Yes"
+            active-pv/       directory for mitigations active in pv domains
+                msr-spec-ctrl "No" or "Yes"
+                rsb          "No" or "Yes"
+                eager-fpu    "No" or "Yes"
+                md-clear     "No" or "Yes"
+                xpti         "No" or list of "dom0", "domU", "PCID-on"
+                l1tf-shadow  "No" or list of "dom0", "domU"
+        params/              directory with hypervisor parameter values
+                             (boot/runtime parameters)
+
+## General Paths
+
+#### /
+
+The root of the hypervisor file system.
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawYx-0004nG-9z; Tue, 19 May 2020 07:21:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawYv-0004mp-NJ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:17 +0000
X-Inumbo-ID: 51106f6c-99a1-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51106f6c-99a1-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:21:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 60E3EB21B;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 07/12] xen: provide version information in hypfs
Date: Tue, 19 May 2020 09:21:01 +0200
Message-Id: <20200519072106.26894-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Provide version and compile information in the /buildinfo/ node of the
Xen hypervisor file system. As this information is accessible by dom0
only, no additional security problem arises.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V3:
- new patch

V4:
- add __read_mostly annotations (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc | 45 ++++++++++++++++++++++++++++++++++
 xen/common/kernel.c          | 47 ++++++++++++++++++++++++++++++++++++
 2 files changed, 92 insertions(+)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 39539fa1b5..d730caf394 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -105,3 +105,48 @@ A populated Xen hypervisor file system might look like the following example:
 #### /
 
 The root of the hypervisor file system.
+
+#### /buildinfo/
+
+A directory containing static information generated while building the
+hypervisor.
+
+#### /buildinfo/changeset = STRING
+
+Git commit of the hypervisor.
+
+#### /buildinfo/compileinfo/
+
+A directory containing information about compilation of Xen.
+
+#### /buildinfo/compileinfo/compile_by = STRING
+
+Information about who compiled the hypervisor.
+
+#### /buildinfo/compileinfo/compile_date = STRING
+
+Date of the hypervisor compilation.
+
+#### /buildinfo/compileinfo/compile_domain = STRING
+
+Information about the compile domain.
+
+#### /buildinfo/compileinfo/compiler = STRING
+
+The compiler used to build Xen.
+
+#### /buildinfo/version/
+
+A directory containing version information of the hypervisor.
+
+#### /buildinfo/version/extra = STRING
+
+Extra version information.
+
+#### /buildinfo/version/major = INTEGER
+
+The major version of Xen.
+
+#### /buildinfo/version/minor = INTEGER
+
+The minor version of Xen.
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index 572e3fc07d..db7bd23fcb 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -13,6 +13,7 @@
 #include <xen/paging.h>
 #include <xen/guest_access.h>
 #include <xen/hypercall.h>
+#include <xen/hypfs.h>
 #include <xsm/xsm.h>
 #include <asm/current.h>
 #include <public/version.h>
@@ -373,6 +374,52 @@ void __init do_initcalls(void)
         (*call)();
 }
 
+#ifdef CONFIG_HYPFS
+static unsigned int __read_mostly major_version;
+static unsigned int __read_mostly minor_version;
+
+static HYPFS_DIR_INIT(buildinfo, "buildinfo");
+static HYPFS_DIR_INIT(compileinfo, "compileinfo");
+static HYPFS_DIR_INIT(version, "version");
+static HYPFS_UINT_INIT(major, "major", major_version);
+static HYPFS_UINT_INIT(minor, "minor", minor_version);
+static HYPFS_STRING_INIT(changeset, "changeset");
+static HYPFS_STRING_INIT(compiler, "compiler");
+static HYPFS_STRING_INIT(compile_by, "compile_by");
+static HYPFS_STRING_INIT(compile_date, "compile_date");
+static HYPFS_STRING_INIT(compile_domain, "compile_domain");
+static HYPFS_STRING_INIT(extra, "extra");
+
+static int __init buildinfo_init(void)
+{
+    hypfs_add_dir(&hypfs_root, &buildinfo, true);
+
+    hypfs_string_set_reference(&changeset, xen_changeset());
+    hypfs_add_leaf(&buildinfo, &changeset, true);
+
+    hypfs_add_dir(&buildinfo, &compileinfo, true);
+    hypfs_string_set_reference(&compiler, xen_compiler());
+    hypfs_string_set_reference(&compile_by, xen_compile_by());
+    hypfs_string_set_reference(&compile_date, xen_compile_date());
+    hypfs_string_set_reference(&compile_domain, xen_compile_domain());
+    hypfs_add_leaf(&compileinfo, &compiler, true);
+    hypfs_add_leaf(&compileinfo, &compile_by, true);
+    hypfs_add_leaf(&compileinfo, &compile_date, true);
+    hypfs_add_leaf(&compileinfo, &compile_domain, true);
+
+    major_version = xen_major_version();
+    minor_version = xen_minor_version();
+    hypfs_add_dir(&buildinfo, &version, true);
+    hypfs_string_set_reference(&extra, xen_extra_version());
+    hypfs_add_leaf(&version, &extra, true);
+    hypfs_add_leaf(&version, &major, true);
+    hypfs_add_leaf(&version, &minor, true);
+
+    return 0;
+}
+__initcall(buildinfo_init);
+#endif
+
 # define DO(fn) long do_##fn
 
 #endif
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawYq-0004mA-Ea; Tue, 19 May 2020 07:21:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawYp-0004m5-Hj
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:11 +0000
X-Inumbo-ID: 50122fc4-99a1-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50122fc4-99a1-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 07:21:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D84F5B1FC;
 Tue, 19 May 2020 07:21:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 01/12] xen/vmx: let opt_ept_ad always reflect the current
 setting
Date: Tue, 19 May 2020 09:20:55 +0200
Message-Id: <20200519072106.26894-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If opt_ept_ad has not been set explicitly by the user via command line
or runtime parameter, it is treated as "no" on Avoton CPUs.

Change that handling by setting opt_ept_ad to 0 for this CPU type
explicitly if no user value has been set.

By putting this into the (renamed) boot time initialization of vmcs.c
_vmx_cpu_up() can be made static.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/hvm/vmx/vmcs.c        | 22 +++++++++++++++-------
 xen/arch/x86/hvm/vmx/vmx.c         |  4 +---
 xen/include/asm-x86/hvm/vmx/vmcs.h |  3 +--
 3 files changed, 17 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 4c23645454..221af9737a 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -315,10 +315,6 @@ static int vmx_init_vmcs_config(void)
 
         if ( !opt_ept_ad )
             _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
-        else if ( /* Work around Erratum AVR41 on Avoton processors. */
-                  boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4d &&
-                  opt_ept_ad < 0 )
-            _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
 
         /*
          * Additional sanity checking before using EPT:
@@ -652,7 +648,7 @@ void vmx_cpu_dead(unsigned int cpu)
     vmx_pi_desc_fixup(cpu);
 }
 
-int _vmx_cpu_up(bool bsp)
+static int _vmx_cpu_up(bool bsp)
 {
     u32 eax, edx;
     int rc, bios_locked, cpu = smp_processor_id();
@@ -2108,9 +2104,21 @@ static void vmcs_dump(unsigned char ch)
     printk("**************************************\n");
 }
 
-void __init setup_vmcs_dump(void)
+int __init vmx_vmcs_init(void)
 {
-    register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
+    int ret;
+
+    if ( opt_ept_ad < 0 )
+        /* Work around Erratum AVR41 on Avoton processors. */
+        opt_ept_ad = !(boot_cpu_data.x86 == 6 &&
+                       boot_cpu_data.x86_model == 0x4d);
+
+    ret = _vmx_cpu_up(true);
+
+    if ( !ret )
+        register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
+
+    return ret;
 }
 
 static void __init __maybe_unused build_assertions(void)
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 6efa80e422..11a4dd94cf 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2482,7 +2482,7 @@ const struct hvm_function_table * __init start_vmx(void)
 {
     set_in_cr4(X86_CR4_VMXE);
 
-    if ( _vmx_cpu_up(true) )
+    if ( vmx_vmcs_init() )
     {
         printk("VMX: failed to initialise.\n");
         return NULL;
@@ -2553,8 +2553,6 @@ const struct hvm_function_table * __init start_vmx(void)
         vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
     }
 
-    setup_vmcs_dump();
-
     lbr_tsx_fixup_check();
     bdf93_fixup_check();
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
index 95c1dea7b8..906810592f 100644
--- a/xen/include/asm-x86/hvm/vmx/vmcs.h
+++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
@@ -21,11 +21,10 @@
 #include <xen/mm.h>
 
 extern void vmcs_dump_vcpu(struct vcpu *v);
-extern void setup_vmcs_dump(void);
+extern int vmx_vmcs_init(void);
 extern int  vmx_cpu_up_prepare(unsigned int cpu);
 extern void vmx_cpu_dead(unsigned int cpu);
 extern int  vmx_cpu_up(void);
-extern int  _vmx_cpu_up(bool bsp);
 extern void vmx_cpu_down(void);
 
 struct vmcs_struct {
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawYu-0004ma-U1; Tue, 19 May 2020 07:21:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawYu-0004mV-Dx
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:16 +0000
X-Inumbo-ID: 502dd274-99a1-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 502dd274-99a1-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 07:21:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D8BA0B207;
 Tue, 19 May 2020 07:21:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 00/12] Add hypervisor sysfs-like support
Date: Tue, 19 May 2020 09:20:54 +0200
Message-Id: <20200519072106.26894-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On the 2019 Xen developer summit there was agreement that the Xen
hypervisor should gain support for a hierarchical name-value store
similar to the Linux kernel's sysfs.

This is a first implementation of that idea, adding the basic
functionality on the hypervisor and tools side. User programs access
this "xen-hypfs" through a new library, "libxenhypfs", which provides
a stable interface.

The series adds read-only nodes with buildinfo data and writable
nodes with runtime parameters. xl is switched to use the new file
system for modifying the runtime parameters and the old sysctl
interface for that purpose is dropped.

Changes in V10:
- addressed review comments

Changes in V9:
- addressed review comments

Changes in V8:
- addressed review comments
- added CONFIG_HYPFS config option

Changes in V7:
- old patch 1 already applied
- add new patch 1 (carved out and modified from patch 9)
- addressed review comments
- modified public interface to have a max write size instead of a
  writable flag only

Changes in V6:
- added new patches 1, 10, 11, 12
- addressed review comments
- modified interface for creating nodes for runtime parameters

Changes in V5:
- switched to xsm for privilege check

Changes in V4:
- former patch 2 removed as already committed
- addressed review comments

Changes in V3:
- major rework, especially by supporting binary contents of entries
- added several new patches (1, 2, 7)
- full support of all runtime parameters
- support of writing entries (especially runtime parameters)

Changes in V2:
- all comments to V1 addressed
- added man-page for xenhypfs tool
- added runtime parameter read access for string parameters

Changes in V1:
- renamed xenfs -> xenhypfs
- added writable entries support at the interface level and in the
  xenhypfs tool
- added runtime parameter read access (integer type only for now)
- added docs/misc/hypfs-paths.pandoc for path descriptions

Juergen Gross (12):
  xen/vmx: let opt_ept_ad always reflect the current setting
  xen: add a generic way to include binary files as variables
  docs: add feature document for Xen hypervisor sysfs-like support
  xen: add basic hypervisor filesystem support
  libs: add libxenhypfs
  tools: add xenfs tool
  xen: provide version information in hypfs
  xen: add /buildinfo/config entry to hypervisor filesystem
  xen: add runtime parameter access support to hypfs
  tools/libxl: use libxenhypfs for setting xen runtime parameters
  tools/libxc: remove xc_set_parameters()
  xen: remove XEN_SYSCTL_set_parameter support

 .gitignore                          |   6 +
 docs/features/hypervisorfs.pandoc   |  92 +++++
 docs/man/xenhypfs.1.pod             |  61 ++++
 docs/misc/hypfs-paths.pandoc        | 165 +++++++++
 tools/Rules.mk                      |   8 +-
 tools/flask/policy/modules/dom0.te  |   4 +-
 tools/libs/Makefile                 |   1 +
 tools/libs/hypfs/Makefile           |  16 +
 tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
 tools/libs/hypfs/include/xenhypfs.h |  90 +++++
 tools/libs/hypfs/libxenhypfs.map    |  10 +
 tools/libs/hypfs/xenhypfs.pc.in     |  10 +
 tools/libxc/include/xenctrl.h       |   1 -
 tools/libxc/xc_misc.c               |  21 --
 tools/libxl/Makefile                |   3 +-
 tools/libxl/libxl.c                 |  53 ++-
 tools/libxl/libxl_internal.h        |   1 +
 tools/libxl/xenlight.pc.in          |   2 +-
 tools/misc/Makefile                 |   6 +
 tools/misc/xenhypfs.c               | 192 ++++++++++
 tools/xl/xl_misc.c                  |   1 -
 xen/arch/arm/traps.c                |   3 +
 xen/arch/arm/xen.lds.S              |  13 +-
 xen/arch/x86/hvm/hypercall.c        |   3 +
 xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
 xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
 xen/arch/x86/hypercall.c            |   3 +
 xen/arch/x86/pv/domain.c            |  21 +-
 xen/arch/x86/pv/hypercall.c         |   3 +
 xen/arch/x86/xen.lds.S              |  12 +-
 xen/common/Kconfig                  |  23 ++
 xen/common/Makefile                 |  13 +
 xen/common/grant_table.c            |  62 +++-
 xen/common/hypfs.c                  | 452 +++++++++++++++++++++++
 xen/common/kernel.c                 |  84 ++++-
 xen/common/sysctl.c                 |  36 --
 xen/drivers/char/console.c          |  72 +++-
 xen/include/Makefile                |   1 +
 xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
 xen/include/public/hypfs.h          | 129 +++++++
 xen/include/public/sysctl.h         |  19 +-
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hypercall.h         |  10 +
 xen/include/xen/hypfs.h             | 123 +++++++
 xen/include/xen/kernel.h            |   3 +
 xen/include/xen/lib.h               |   1 -
 xen/include/xen/param.h             | 126 +++++--
 xen/include/xlat.lst                |   2 +
 xen/include/xsm/dummy.h             |   6 +
 xen/include/xsm/xsm.h               |   6 +
 xen/tools/binfile                   |  43 +++
 xen/xsm/dummy.c                     |   1 +
 xen/xsm/flask/Makefile              |   5 +-
 xen/xsm/flask/flask-policy.S        |  16 -
 xen/xsm/flask/hooks.c               |   9 +-
 xen/xsm/flask/policy/access_vectors |   4 +-
 56 files changed, 2445 insertions(+), 193 deletions(-)
 create mode 100644 docs/features/hypervisorfs.pandoc
 create mode 100644 docs/man/xenhypfs.1.pod
 create mode 100644 docs/misc/hypfs-paths.pandoc
 create mode 100644 tools/libs/hypfs/Makefile
 create mode 100644 tools/libs/hypfs/core.c
 create mode 100644 tools/libs/hypfs/include/xenhypfs.h
 create mode 100644 tools/libs/hypfs/libxenhypfs.map
 create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
 create mode 100644 tools/misc/xenhypfs.c
 create mode 100644 xen/common/hypfs.c
 create mode 100644 xen/include/public/hypfs.h
 create mode 100644 xen/include/xen/hypfs.h
 create mode 100755 xen/tools/binfile
 delete mode 100644 xen/xsm/flask/flask-policy.S

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawYs-0004mL-M8; Tue, 19 May 2020 07:21:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawYq-0004mF-NG
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:12 +0000
X-Inumbo-ID: 501f5d5c-99a1-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 501f5d5c-99a1-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:21:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 09CCBB209;
 Tue, 19 May 2020 07:21:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 02/12] xen: add a generic way to include binary files as
 variables
Date: Tue, 19 May 2020 09:20:56 +0200
Message-Id: <20200519072106.26894-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a new script, xen/tools/binfile, for including a binary file at
build time, making it usable in the hypervisor via a pointer and a size
variable.

Make use of that generic tool in xsm.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Wei Liu <wl@xen.org>
---
V3:
- new patch

V4:
- add alignment parameter (Jan Beulich)
- use .Lend instead of . (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore                   |  1 +
 xen/tools/binfile            | 43 ++++++++++++++++++++++++++++++++++++
 xen/xsm/flask/Makefile       |  5 ++++-
 xen/xsm/flask/flask-policy.S | 16 --------------
 4 files changed, 48 insertions(+), 17 deletions(-)
 create mode 100755 xen/tools/binfile
 delete mode 100644 xen/xsm/flask/flask-policy.S

diff --git a/.gitignore b/.gitignore
index bfa53723b3..034f44b21b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -314,6 +314,7 @@ xen/test/livepatch/*.livepatch
 xen/tools/kconfig/.tmp_gtkcheck
 xen/tools/kconfig/.tmp_qtcheck
 xen/tools/symbols
+xen/xsm/flask/flask-policy.S
 xen/xsm/flask/include/av_perm_to_string.h
 xen/xsm/flask/include/av_permissions.h
 xen/xsm/flask/include/class_to_string.h
diff --git a/xen/tools/binfile b/xen/tools/binfile
new file mode 100755
index 0000000000..df0301183f
--- /dev/null
+++ b/xen/tools/binfile
@@ -0,0 +1,43 @@
+#!/bin/sh
+# usage: binfile [-i] [-a <align>] <target-src.S> <binary-file> <varname>
+# -a <align>  align data at 2^<align> boundary (default: byte alignment)
+# -i          add to .init.rodata (default: .rodata) section
+
+section=""
+align=0
+
+OPTIND=1
+while getopts "ia:" opt; do
+    case "$opt" in
+    i)
+        section=".init"
+        ;;
+    a)
+        align=$OPTARG
+        ;;
+    esac
+done
+let "SHIFT=$OPTIND-1"
+shift $SHIFT
+
+target=$1
+binsource=$2
+varname=$3
+
+cat <<EOF >$target
+#include <asm/asm_defns.h>
+
+        .section $section.rodata, "a", %progbits
+
+        .p2align $align
+        .global $varname
+$varname:
+        .incbin "$binsource"
+.Lend:
+
+        .type $varname, %object
+        .size $varname, .Lend - $varname
+
+        .global ${varname}_size
+        ASM_INT(${varname}_size, .Lend - $varname)
+EOF
diff --git a/xen/xsm/flask/Makefile b/xen/xsm/flask/Makefile
index eebfceecc5..d8486fc7e4 100644
--- a/xen/xsm/flask/Makefile
+++ b/xen/xsm/flask/Makefile
@@ -39,6 +39,9 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
 obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
 flask-policy.o: policy.bin
 
+flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
+	$(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
+
 FLASK_BUILD_DIR := $(CURDIR)
 POLICY_SRC := $(FLASK_BUILD_DIR)/xenpolicy-$(XEN_FULLVERSION)
 
@@ -48,4 +51,4 @@ policy.bin: FORCE
 
 .PHONY: clean
 clean::
-	rm -f $(ALL_H_FILES) *.o $(DEPS_RM) policy.* $(POLICY_SRC)
+	rm -f $(ALL_H_FILES) *.o $(DEPS_RM) policy.* $(POLICY_SRC) flask-policy.S
diff --git a/xen/xsm/flask/flask-policy.S b/xen/xsm/flask/flask-policy.S
deleted file mode 100644
index d38aa39964..0000000000
--- a/xen/xsm/flask/flask-policy.S
+++ /dev/null
@@ -1,16 +0,0 @@
-#include <asm/asm_defns.h>
-
-        .section .init.rodata, "a", %progbits
-
-/* const unsigned char xsm_flask_init_policy[] __initconst */
-        .global xsm_flask_init_policy
-xsm_flask_init_policy:
-        .incbin "policy.bin"
-.Lend:
-
-        .type xsm_flask_init_policy, %object
-        .size xsm_flask_init_policy, . - xsm_flask_init_policy
-
-/* const unsigned int __initconst xsm_flask_init_policy_size */
-        .global xsm_flask_init_policy_size
-        ASM_INT(xsm_flask_init_policy_size, .Lend - xsm_flask_init_policy)
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZ1-0004o3-Pd; Tue, 19 May 2020 07:21:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZ0-0004nj-Nf
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:22 +0000
X-Inumbo-ID: 51917508-99a1-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51917508-99a1-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 44C1FB221;
 Tue, 19 May 2020 07:21:14 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 11/12] tools/libxc: remove xc_set_parameters()
Date: Tue, 19 May 2020 09:21:05 +0200
Message-Id: <20200519072106.26894-12-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There is no user of xc_set_parameters() left, so remove it.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V6:
- new patch
---
 tools/libxc/include/xenctrl.h |  1 -
 tools/libxc/xc_misc.c         | 21 ---------------------
 2 files changed, 22 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..f9e17ae424 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -1226,7 +1226,6 @@ int xc_readconsolering(xc_interface *xch,
                        int clear, int incremental, uint32_t *pindex);
 
 int xc_send_debug_keys(xc_interface *xch, const char *keys);
-int xc_set_parameters(xc_interface *xch, const char *params);
 
 typedef struct xen_sysctl_physinfo xc_physinfo_t;
 typedef struct xen_sysctl_cputopo xc_cputopo_t;
diff --git a/tools/libxc/xc_misc.c b/tools/libxc/xc_misc.c
index fe477bf344..3820394413 100644
--- a/tools/libxc/xc_misc.c
+++ b/tools/libxc/xc_misc.c
@@ -187,27 +187,6 @@ int xc_send_debug_keys(xc_interface *xch, const char *keys)
     return ret;
 }
 
-int xc_set_parameters(xc_interface *xch, const char *params)
-{
-    int ret, len = strlen(params);
-    DECLARE_SYSCTL;
-    DECLARE_HYPERCALL_BOUNCE_IN(params, len);
-
-    if ( xc_hypercall_bounce_pre(xch, params) )
-        return -1;
-
-    sysctl.cmd = XEN_SYSCTL_set_parameter;
-    set_xen_guest_handle(sysctl.u.set_parameter.params, params);
-    sysctl.u.set_parameter.size = len;
-    memset(sysctl.u.set_parameter.pad, 0, sizeof(sysctl.u.set_parameter.pad));
-
-    ret = do_sysctl(xch, &sysctl);
-
-    xc_hypercall_bounce_post(xch, params);
-
-    return ret;
-}
-
 int xc_physinfo(xc_interface *xch,
                 xc_physinfo_t *put_info)
 {
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZ6-0004pB-1m; Tue, 19 May 2020 07:21:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZ4-0004od-Dw
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:26 +0000
X-Inumbo-ID: 5101dcd6-99a1-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5101dcd6-99a1-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 07:21:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4061DB211;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 06/12] tools: add xenhypfs tool
Date: Tue, 19 May 2020 09:21:00 +0200
Message-Id: <20200519072106.26894-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the xenhypfs tool for accessing the hypervisor filesystem.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
V1:
- rename to xenhypfs
- don't use "--" for subcommands
- add write support

V2:
- escape non-printable characters by default with the cat subcommand
  (Ian Jackson)
- add -b option to cat subcommand (Ian Jackson)
- add man page

V3:
- adapt to new hypfs interface

V7:
- added missing bool for ls output

---
 .gitignore              |   1 +
 docs/man/xenhypfs.1.pod |  61 +++++++++++++
 tools/misc/Makefile     |   6 ++
 tools/misc/xenhypfs.c   | 192 ++++++++++++++++++++++++++++++++++++++++
 4 files changed, 260 insertions(+)
 create mode 100644 docs/man/xenhypfs.1.pod
 create mode 100644 tools/misc/xenhypfs.c

diff --git a/.gitignore b/.gitignore
index 7bd292f64d..6171b3b43f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -368,6 +368,7 @@ tools/libxl/test_timedereg
 tools/libxl/test_fdderegrace
 tools/firmware/etherboot/eb-roms.h
 tools/firmware/etherboot/gpxe-git-snapshot.tar.gz
+tools/misc/xenhypfs
 tools/misc/xenwatchdogd
 tools/misc/xen-hvmcrash
 tools/misc/xen-lowmemd
diff --git a/docs/man/xenhypfs.1.pod b/docs/man/xenhypfs.1.pod
new file mode 100644
index 0000000000..37aa488fcc
--- /dev/null
+++ b/docs/man/xenhypfs.1.pod
@@ -0,0 +1,61 @@
+=head1 NAME
+
+xenhypfs - Xen tool to access the Xen hypervisor file system
+
+=head1 SYNOPSIS
+
+B<xenhypfs> I<subcommand> [I<options>] [I<args>]
+
+=head1 DESCRIPTION
+
+The B<xenhypfs> program is used to access the Xen hypervisor file system.
+It can list the available entries, show their contents and, where
+allowed, modify their contents.
+
+=head1 SUBCOMMANDS
+
+=over 4
+
+=item B<ls> I<path>
+
+List the available entries below I<path>.
+
+=item B<cat> [I<-b>] I<path>
+
+Show the contents of the entry specified by I<path>. Non-printable
+characters other than whitespace (e.g. tab, newline) are shown as
+B<\xnn> (with B<nn> a two-digit hex number) unless the B<-b> option
+is specified.
+
+=item B<write> I<path> I<value>
+
+Set the contents of the entry specified by I<path> to I<value>.
+
+=item B<tree>
+
+Show all the entries of the file system as a tree.
+
+=back
+
+=head1 RETURN CODES
+
+=over 4
+
+=item B<0>
+
+Success
+
+=item B<1>
+
+Invalid usage (e.g. unknown subcommand, unknown option, missing parameter).
+
+=item B<2>
+
+Entry not found while traversing the tree.
+
+=item B<3>
+
+Access right violation.
+
+=back
+
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 63947bfadc..9fdb13597f 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -24,6 +24,7 @@ INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
 INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
 INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
 INSTALL_SBIN                   += xencov
+INSTALL_SBIN                   += xenhypfs
 INSTALL_SBIN                   += xenlockprof
 INSTALL_SBIN                   += xenperf
 INSTALL_SBIN                   += xenpm
@@ -86,6 +87,9 @@ xenperf: xenperf.o
 xenpm: xenpm.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xenhypfs: xenhypfs.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenhypfs) $(APPEND_LDFLAGS)
+
 xenlockprof: xenlockprof.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
@@ -94,6 +98,8 @@ xen-hptool.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-hptool: xen-hptool.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
+xenhypfs.o: CFLAGS += $(CFLAGS_libxenhypfs)
+
 # xen-mfndump incorrectly uses libxc internals
 xen-mfndump.o: CFLAGS += -I$(XEN_ROOT)/tools/libxc $(CFLAGS_libxencall)
 xen-mfndump: xen-mfndump.o
diff --git a/tools/misc/xenhypfs.c b/tools/misc/xenhypfs.c
new file mode 100644
index 0000000000..158b901f42
--- /dev/null
+++ b/tools/misc/xenhypfs.c
@@ -0,0 +1,192 @@
+#define _GNU_SOURCE
+#include <ctype.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <xenhypfs.h>
+
+static struct xenhypfs_handle *hdl;
+
+static int usage(void)
+{
+    fprintf(stderr, "usage: xenhypfs ls <path>\n");
+    fprintf(stderr, "       xenhypfs cat [-b] <path>\n");
+    fprintf(stderr, "       xenhypfs write <path> <val>\n");
+    fprintf(stderr, "       xenhypfs tree\n");
+
+    return 1;
+}
+
+static void xenhypfs_print_escaped(char *string)
+{
+    char *c;
+
+    for (c = string; *c; c++) {
+        if (isgraph((unsigned char)*c) || isspace((unsigned char)*c))
+            printf("%c", *c);
+        else
+            printf("\\x%02x", (unsigned char)*c);
+    }
+    printf("\n");
+}
+
+static int xenhypfs_cat(int argc, char *argv[])
+{
+    int ret = 0;
+    char *result;
+    char *path;
+    bool bin = false;
+
+    switch (argc) {
+    case 1:
+        path = argv[0];
+        break;
+
+    case 2:
+        if (strcmp(argv[0], "-b"))
+            return usage();
+        bin = true;
+        path = argv[1];
+        break;
+
+    default:
+        return usage();
+    }
+
+    result = xenhypfs_read(hdl, path);
+    if (!result) {
+        perror("could not read");
+        ret = 3;
+    } else {
+        if (!bin)
+            printf("%s\n", result);
+        else
+            xenhypfs_print_escaped(result);
+        free(result);
+    }
+
+    return ret;
+}
+
+static int xenhypfs_wr(char *path, char *val)
+{
+    int ret;
+
+    ret = xenhypfs_write(hdl, path, val);
+    if (ret) {
+        perror("could not write");
+        ret = 3;
+    }
+
+    return ret;
+}
+
+static char *xenhypfs_type(struct xenhypfs_dirent *ent)
+{
+    char *res;
+
+    switch (ent->type) {
+    case xenhypfs_type_dir:
+        res = "<dir>   ";
+        break;
+    case xenhypfs_type_blob:
+        res = "<blob>  ";
+        break;
+    case xenhypfs_type_string:
+        res = "<string>";
+        break;
+    case xenhypfs_type_uint:
+        res = "<uint>  ";
+        break;
+    case xenhypfs_type_int:
+        res = "<int>   ";
+        break;
+    case xenhypfs_type_bool:
+        res = "<bool>  ";
+        break;
+    default:
+        res = "<\?\?\?>   ";
+        break;
+    }
+
+    return res;
+}
+
+static int xenhypfs_ls(char *path)
+{
+    struct xenhypfs_dirent *ent;
+    unsigned int n, i;
+    int ret = 0;
+
+    ent = xenhypfs_readdir(hdl, path, &n);
+    if (!ent) {
+        perror("could not read dir");
+        ret = 3;
+    } else {
+        for (i = 0; i < n; i++)
+            printf("%s r%c %s\n", xenhypfs_type(ent + i),
+                   ent[i].is_writable ? 'w' : '-', ent[i].name);
+
+        free(ent);
+    }
+
+    return ret;
+}
+
+static int xenhypfs_tree_sub(char *path, unsigned int depth)
+{
+    struct xenhypfs_dirent *ent;
+    unsigned int n, i;
+    int ret = 0;
+    char *p;
+
+    ent = xenhypfs_readdir(hdl, path, &n);
+    if (!ent)
+        return 2;
+
+    for (i = 0; i < n; i++) {
+        printf("%*s%s%s\n", depth * 2, "", ent[i].name,
+               ent[i].type == xenhypfs_type_dir ? "/" : "");
+        if (ent[i].type == xenhypfs_type_dir) {
+            if (asprintf(&p, "%s%s%s", path, (depth == 1) ? "" : "/",
+                         ent[i].name) < 0 || xenhypfs_tree_sub(p, depth + 1))
+                ret = 2;
+        }
+    }
+
+    free(ent);
+
+    return ret;
+}
+
+static int xenhypfs_tree(void)
+{
+    printf("/\n");
+
+    return xenhypfs_tree_sub("/", 1);
+}
+
+int main(int argc, char *argv[])
+{
+    int ret;
+
+    hdl = xenhypfs_open(NULL, 0);
+
+    if (!hdl) {
+        fprintf(stderr, "Could not open libxenhypfs\n");
+        ret = 2;
+    } else if (argc >= 3 && !strcmp(argv[1], "cat"))
+        ret = xenhypfs_cat(argc - 2, argv + 2);
+    else if (argc == 3 && !strcmp(argv[1], "ls"))
+        ret = xenhypfs_ls(argv[2]);
+    else if (argc == 4 && !strcmp(argv[1], "write"))
+        ret = xenhypfs_wr(argv[2], argv[3]);
+    else if (argc == 2 && !strcmp(argv[1], "tree"))
+        ret = xenhypfs_tree();
+    else
+        ret = usage();
+
+    xenhypfs_close(hdl);
+
+    return ret;
+}
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZ6-0004pb-ET; Tue, 19 May 2020 07:21:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZ5-0004p2-Nm
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:27 +0000
X-Inumbo-ID: 501f5d5d-99a1-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 501f5d5d-99a1-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:21:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DA765B21A;
 Tue, 19 May 2020 07:21:12 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 05/12] libs: add libxenhypfs
Date: Tue, 19 May 2020 09:20:59 +0200
Message-Id: <20200519072106.26894-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the new library libxenhypfs for access to the hypervisor filesystem.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Wei Liu <wl@xen.org>
---
V1:
- rename to libxenhypfs
- add xenhypfs_write()

V3:
- major rework due to new hypervisor interface
- add decompression capability

V4:
- add dependency to libz in pkgconfig file (Wei Liu)

V7:
- don't assume the hypervisor's sizeof(bool) is the same as in user land

V8:
- add some comments regarding the semantics of the lib functions
  (George Dunlap)

---
 .gitignore                          |   2 +
 tools/Rules.mk                      |   6 +
 tools/libs/Makefile                 |   1 +
 tools/libs/hypfs/Makefile           |  16 +
 tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
 tools/libs/hypfs/include/xenhypfs.h |  90 +++++
 tools/libs/hypfs/libxenhypfs.map    |  10 +
 tools/libs/hypfs/xenhypfs.pc.in     |  10 +
 8 files changed, 671 insertions(+)
 create mode 100644 tools/libs/hypfs/Makefile
 create mode 100644 tools/libs/hypfs/core.c
 create mode 100644 tools/libs/hypfs/include/xenhypfs.h
 create mode 100644 tools/libs/hypfs/libxenhypfs.map
 create mode 100644 tools/libs/hypfs/xenhypfs.pc.in

diff --git a/.gitignore b/.gitignore
index 034f44b21b..7bd292f64d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -110,6 +110,8 @@ tools/libs/evtchn/headers.chk
 tools/libs/evtchn/xenevtchn.pc
 tools/libs/gnttab/headers.chk
 tools/libs/gnttab/xengnttab.pc
+tools/libs/hypfs/headers.chk
+tools/libs/hypfs/xenhypfs.pc
 tools/libs/call/headers.chk
 tools/libs/call/xencall.pc
 tools/libs/foreignmemory/headers.chk
diff --git a/tools/Rules.mk b/tools/Rules.mk
index 5b8cf748ad..ad6073fcad 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -19,6 +19,7 @@ XEN_LIBXENGNTTAB   = $(XEN_ROOT)/tools/libs/gnttab
 XEN_LIBXENCALL     = $(XEN_ROOT)/tools/libs/call
 XEN_LIBXENFOREIGNMEMORY = $(XEN_ROOT)/tools/libs/foreignmemory
 XEN_LIBXENDEVICEMODEL = $(XEN_ROOT)/tools/libs/devicemodel
+XEN_LIBXENHYPFS    = $(XEN_ROOT)/tools/libs/hypfs
 XEN_LIBXC          = $(XEN_ROOT)/tools/libxc
 XEN_XENLIGHT       = $(XEN_ROOT)/tools/libxl
 # Currently libxlutil lives in the same directory as libxenlight
@@ -132,6 +133,11 @@ SHDEPS_libxendevicemodel = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLI
 LDLIBS_libxendevicemodel = $(SHDEPS_libxendevicemodel) $(XEN_LIBXENDEVICEMODEL)/libxendevicemodel$(libextension)
 SHLIB_libxendevicemodel  = $(SHDEPS_libxendevicemodel) -Wl,-rpath-link=$(XEN_LIBXENDEVICEMODEL)
 
+CFLAGS_libxenhypfs = -I$(XEN_LIBXENHYPFS)/include $(CFLAGS_xeninclude)
+SHDEPS_libxenhypfs = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_xencall)
+LDLIBS_libxenhypfs = $(SHDEPS_libxenhypfs) $(XEN_LIBXENHYPFS)/libxenhypfs$(libextension)
+SHLIB_libxenhypfs  = $(SHDEPS_libxenhypfs) -Wl,-rpath-link=$(XEN_LIBXENHYPFS)
+
 # code which compiles against libxenctrl get __XEN_TOOLS__ and
 # therefore sees the unstable hypercall interfaces.
 CFLAGS_libxenctrl = -I$(XEN_LIBXC)/include $(CFLAGS_libxentoollog) $(CFLAGS_libxenforeignmemory) $(CFLAGS_libxendevicemodel) $(CFLAGS_xeninclude) -D__XEN_TOOLS__
diff --git a/tools/libs/Makefile b/tools/libs/Makefile
index 88901e7341..69cdfb5975 100644
--- a/tools/libs/Makefile
+++ b/tools/libs/Makefile
@@ -9,6 +9,7 @@ SUBDIRS-y += gnttab
 SUBDIRS-y += call
 SUBDIRS-y += foreignmemory
 SUBDIRS-y += devicemodel
+SUBDIRS-y += hypfs
 
 ifeq ($(CONFIG_RUMP),y)
 SUBDIRS-y := toolcore
diff --git a/tools/libs/hypfs/Makefile b/tools/libs/hypfs/Makefile
new file mode 100644
index 0000000000..06dd449929
--- /dev/null
+++ b/tools/libs/hypfs/Makefile
@@ -0,0 +1,16 @@
+XEN_ROOT = $(CURDIR)/../../..
+include $(XEN_ROOT)/tools/Rules.mk
+
+MAJOR    = 1
+MINOR    = 0
+LIBNAME  := hypfs
+USELIBS  := toollog toolcore call
+
+APPEND_LDFLAGS += -lz
+
+SRCS-y                 += core.c
+
+include ../libs.mk
+
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_INCDIR = $(XEN_LIBXENHYPFS)/include
+$(PKG_CONFIG_LOCAL): PKG_CONFIG_CFLAGS_LOCAL = $(CFLAGS_xeninclude)
diff --git a/tools/libs/hypfs/core.c b/tools/libs/hypfs/core.c
new file mode 100644
index 0000000000..d4309b5ae2
--- /dev/null
+++ b/tools/libs/hypfs/core.c
@@ -0,0 +1,536 @@
+/*
+ * Copyright (c) 2019 SUSE Software Solutions Germany GmbH
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#define __XEN_TOOLS__ 1
+
+#define _GNU_SOURCE
+
+#include <errno.h>
+#include <inttypes.h>
+#include <stdbool.h>
+#include <stdlib.h>
+#include <string.h>
+#include <zlib.h>
+
+#include <xentoollog.h>
+#include <xenhypfs.h>
+#include <xencall.h>
+#include <xentoolcore_internal.h>
+
+#include <xen/xen.h>
+#include <xen/hypfs.h>
+
+#define BUF_SIZE 4096
+
+struct xenhypfs_handle {
+    xentoollog_logger *logger, *logger_tofree;
+    unsigned int flags;
+    xencall_handle *xcall;
+};
+
+xenhypfs_handle *xenhypfs_open(xentoollog_logger *logger,
+                               unsigned open_flags)
+{
+    xenhypfs_handle *fshdl = calloc(1, sizeof(*fshdl));
+
+    if (!fshdl)
+        return NULL;
+
+    fshdl->flags = open_flags;
+    fshdl->logger = logger;
+    fshdl->logger_tofree = NULL;
+
+    if (!fshdl->logger) {
+        fshdl->logger = fshdl->logger_tofree =
+            (xentoollog_logger*)
+            xtl_createlogger_stdiostream(stderr, XTL_PROGRESS, 0);
+        if (!fshdl->logger)
+            goto err;
+    }
+
+    fshdl->xcall = xencall_open(fshdl->logger, 0);
+    if (!fshdl->xcall)
+        goto err;
+
+    /* No need to remember supported version, we only support V1. */
+    if (xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op,
+                 XEN_HYPFS_OP_get_version, 0, 0, 0, 0) < 0)
+        goto err;
+
+    return fshdl;
+
+err:
+    xtl_logger_destroy(fshdl->logger_tofree);
+    xencall_close(fshdl->xcall);
+    free(fshdl);
+    return NULL;
+}
+
+int xenhypfs_close(xenhypfs_handle *fshdl)
+{
+    if (!fshdl)
+        return 0;
+
+    xencall_close(fshdl->xcall);
+    xtl_logger_destroy(fshdl->logger_tofree);
+    free(fshdl);
+    return 0;
+}
+
+static int xenhypfs_get_pathbuf(xenhypfs_handle *fshdl, const char *path,
+                                char **path_buf)
+{
+    int ret = -1;
+    int path_sz;
+
+    if (!fshdl) {
+        errno = EBADF;
+        goto out;
+    }
+
+    path_sz = strlen(path) + 1;
+    if (path_sz > XEN_HYPFS_MAX_PATHLEN)
+    {
+        errno = ENAMETOOLONG;
+        goto out;
+    }
+
+    *path_buf = xencall_alloc_buffer(fshdl->xcall, path_sz);
+    if (!*path_buf) {
+        errno = ENOMEM;
+        goto out;
+    }
+    strcpy(*path_buf, path);
+
+    ret = path_sz;
+
+ out:
+    return ret;
+}
+
+static void *xenhypfs_inflate(void *in_data, size_t *sz)
+{
+    unsigned char *workbuf;
+    void *content = NULL;
+    unsigned int out_sz;
+    z_stream z = { .opaque = NULL };
+    int ret;
+
+    workbuf = malloc(BUF_SIZE);
+    if (!workbuf)
+        return NULL;
+
+    z.next_in = in_data;
+    z.avail_in = *sz;
+    ret = inflateInit2(&z, MAX_WBITS + 32); /* 32 == gzip */
+
+    for (*sz = 0; ret == Z_OK; *sz += out_sz) {
+        z.next_out = workbuf;
+        z.avail_out = BUF_SIZE;
+        ret = inflate(&z, Z_SYNC_FLUSH);
+        if (ret != Z_OK && ret != Z_STREAM_END)
+            break;
+
+        out_sz = z.next_out - workbuf;
+        content = realloc(content, *sz + out_sz);
+        if (!content) {
+            ret = Z_MEM_ERROR;
+            break;
+        }
+        memcpy(content + *sz, workbuf, out_sz);
+    }
+
+    inflateEnd(&z);
+    if (ret != Z_STREAM_END) {
+        free(content);
+        content = NULL;
+        errno = EIO;
+    }
+    free(workbuf);
+    return content;
+}
+
+static void xenhypfs_set_attrs(struct xen_hypfs_direntry *entry,
+                               struct xenhypfs_dirent *dirent)
+{
+    dirent->size = entry->content_len;
+
+    switch (entry->type) {
+    case XEN_HYPFS_TYPE_DIR:
+        dirent->type = xenhypfs_type_dir;
+        break;
+    case XEN_HYPFS_TYPE_BLOB:
+        dirent->type = xenhypfs_type_blob;
+        break;
+    case XEN_HYPFS_TYPE_STRING:
+        dirent->type = xenhypfs_type_string;
+        break;
+    case XEN_HYPFS_TYPE_UINT:
+        dirent->type = xenhypfs_type_uint;
+        break;
+    case XEN_HYPFS_TYPE_INT:
+        dirent->type = xenhypfs_type_int;
+        break;
+    case XEN_HYPFS_TYPE_BOOL:
+        dirent->type = xenhypfs_type_bool;
+        break;
+    default:
+        dirent->type = xenhypfs_type_blob;
+    }
+
+    switch (entry->encoding) {
+    case XEN_HYPFS_ENC_PLAIN:
+        dirent->encoding = xenhypfs_enc_plain;
+        break;
+    case XEN_HYPFS_ENC_GZIP:
+        dirent->encoding = xenhypfs_enc_gzip;
+        break;
+    default:
+        dirent->encoding = xenhypfs_enc_plain;
+        dirent->type = xenhypfs_type_blob;
+    }
+
+    dirent->is_writable = entry->max_write_len;
+}
+
+void *xenhypfs_read_raw(xenhypfs_handle *fshdl, const char *path,
+                        struct xenhypfs_dirent **dirent)
+{
+    void *retbuf = NULL, *content = NULL;
+    char *path_buf = NULL;
+    const char *name;
+    struct xen_hypfs_direntry *entry;
+    int ret;
+    int sz, path_sz;
+
+    *dirent = NULL;
+    ret = xenhypfs_get_pathbuf(fshdl, path, &path_buf);
+    if (ret < 0)
+        goto out;
+
+    path_sz = ret;
+
+    for (sz = BUF_SIZE;; sz = sizeof(*entry) + entry->content_len) {
+        if (retbuf)
+            xencall_free_buffer(fshdl->xcall, retbuf);
+
+        retbuf = xencall_alloc_buffer(fshdl->xcall, sz);
+        if (!retbuf) {
+            errno = ENOMEM;
+            goto out;
+        }
+        entry = retbuf;
+
+        ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op, XEN_HYPFS_OP_read,
+                       (unsigned long)path_buf, path_sz,
+                       (unsigned long)retbuf, sz);
+        if (!ret)
+            break;
+
+        if (ret != ENOBUFS) {
+            errno = -ret;
+            goto out;
+        }
+    }
+
+    content = malloc(entry->content_len);
+    if (!content)
+        goto out;
+    memcpy(content, entry + 1, entry->content_len);
+
+    name = strrchr(path, '/');
+    if (!name)
+        name = path;
+    else {
+        name++;
+        if (!*name)
+            name--;
+    }
+    *dirent = calloc(1, sizeof(struct xenhypfs_dirent) + strlen(name) + 1);
+    if (!*dirent) {
+        free(content);
+        content = NULL;
+        errno = ENOMEM;
+        goto out;
+    }
+    (*dirent)->name = (char *)(*dirent + 1);
+    strcpy((*dirent)->name, name);
+    xenhypfs_set_attrs(entry, *dirent);
+
+ out:
+    ret = errno;
+    xencall_free_buffer(fshdl->xcall, path_buf);
+    xencall_free_buffer(fshdl->xcall, retbuf);
+    errno = ret;
+
+    return content;
+}
+
+char *xenhypfs_read(xenhypfs_handle *fshdl, const char *path)
+{
+    char *buf, *ret_buf = NULL;
+    struct xenhypfs_dirent *dirent;
+    int ret;
+
+    buf = xenhypfs_read_raw(fshdl, path, &dirent);
+    if (!buf)
+        goto out;
+
+    switch (dirent->encoding) {
+    case xenhypfs_enc_plain:
+        break;
+    case xenhypfs_enc_gzip:
+        ret_buf = xenhypfs_inflate(buf, &dirent->size);
+        if (!ret_buf)
+            goto out;
+        free(buf);
+        buf = ret_buf;
+        ret_buf = NULL;
+        break;
+    }
+
+    switch (dirent->type) {
+    case xenhypfs_type_dir:
+        errno = EISDIR;
+        break;
+    case xenhypfs_type_blob:
+        errno = EDOM;
+        break;
+    case xenhypfs_type_string:
+        ret_buf = buf;
+        buf = NULL;
+        break;
+    case xenhypfs_type_uint:
+    case xenhypfs_type_bool:
+        switch (dirent->size) {
+        case 1:
+            ret = asprintf(&ret_buf, "%"PRIu8, *(uint8_t *)buf);
+            break;
+        case 2:
+            ret = asprintf(&ret_buf, "%"PRIu16, *(uint16_t *)buf);
+            break;
+        case 4:
+            ret = asprintf(&ret_buf, "%"PRIu32, *(uint32_t *)buf);
+            break;
+        case 8:
+            ret = asprintf(&ret_buf, "%"PRIu64, *(uint64_t *)buf);
+            break;
+        default:
+            ret = -1;
+            errno = EDOM;
+        }
+        if (ret < 0)
+            ret_buf = NULL;
+        break;
+    case xenhypfs_type_int:
+        switch (dirent->size) {
+        case 1:
+            ret = asprintf(&ret_buf, "%"PRId8, *(int8_t *)buf);
+            break;
+        case 2:
+            ret = asprintf(&ret_buf, "%"PRId16, *(int16_t *)buf);
+            break;
+        case 4:
+            ret = asprintf(&ret_buf, "%"PRId32, *(int32_t *)buf);
+            break;
+        case 8:
+            ret = asprintf(&ret_buf, "%"PRId64, *(int64_t *)buf);
+            break;
+        default:
+            ret = -1;
+            errno = EDOM;
+        }
+        if (ret < 0)
+            ret_buf = NULL;
+        break;
+    }
+
+ out:
+    ret = errno;
+    free(buf);
+    free(dirent);
+    errno = ret;
+
+    return ret_buf;
+}
+
+struct xenhypfs_dirent *xenhypfs_readdir(xenhypfs_handle *fshdl,
+                                         const char *path,
+                                         unsigned int *num_entries)
+{
+    void *buf, *curr;
+    int ret;
+    char *names;
+    struct xenhypfs_dirent *ret_buf = NULL, *dirent;
+    unsigned int n = 0, name_sz = 0;
+    struct xen_hypfs_dirlistentry *entry;
+
+    buf = xenhypfs_read_raw(fshdl, path, &dirent);
+    if (!buf)
+        goto out;
+
+    if (dirent->type != xenhypfs_type_dir ||
+        dirent->encoding != xenhypfs_enc_plain) {
+        errno = ENOTDIR;
+        goto out;
+    }
+
+    if (dirent->size) {
+        curr = buf;
+        for (n = 1;; n++) {
+            entry = curr;
+            name_sz += strlen(entry->name) + 1;
+            if (!entry->off_next)
+                break;
+
+            curr += entry->off_next;
+        }
+    }
+
+    ret_buf = malloc(n * sizeof(*ret_buf) + name_sz);
+    if (!ret_buf)
+        goto out;
+
+    *num_entries = n;
+    names = (char *)(ret_buf + n);
+    curr = buf;
+    for (n = 0; n < *num_entries; n++) {
+        entry = curr;
+        xenhypfs_set_attrs(&entry->e, ret_buf + n);
+        ret_buf[n].name = names;
+        strcpy(names, entry->name);
+        names += strlen(entry->name) + 1;
+        curr += entry->off_next;
+    }
+
+ out:
+    ret = errno;
+    free(buf);
+    free(dirent);
+    errno = ret;
+
+    return ret_buf;
+}
+
+int xenhypfs_write(xenhypfs_handle *fshdl, const char *path, const char *val)
+{
+    void *buf = NULL;
+    char *path_buf = NULL, *val_end;
+    int ret, saved_errno;
+    int sz, path_sz;
+    struct xen_hypfs_direntry *entry;
+    uint64_t mask;
+
+    ret = xenhypfs_get_pathbuf(fshdl, path, &path_buf);
+    if (ret < 0)
+        goto out;
+
+    path_sz = ret;
+    ret = -1;
+
+    sz = BUF_SIZE;
+    buf = xencall_alloc_buffer(fshdl->xcall, sz);
+    if (!buf) {
+        errno = ENOMEM;
+        goto out;
+    }
+
+    ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op, XEN_HYPFS_OP_read,
+                   (unsigned long)path_buf, path_sz,
+                   (unsigned long)buf, sizeof(*entry));
+    if (ret && errno != ENOBUFS)
+        goto out;
+    ret = -1;
+    entry = buf;
+    if (!entry->max_write_len) {
+        errno = EACCES;
+        goto out;
+    }
+    if (entry->encoding != XEN_HYPFS_ENC_PLAIN) {
+        /* Writing compressed data currently not supported. */
+        errno = EDOM;
+        goto out;
+    }
+
+    switch (entry->type) {
+    case XEN_HYPFS_TYPE_STRING:
+        if (sz < strlen(val) + 1) {
+            sz = strlen(val) + 1;
+            xencall_free_buffer(fshdl->xcall, buf);
+            buf = xencall_alloc_buffer(fshdl->xcall, sz);
+            if (!buf) {
+                errno = ENOMEM;
+                goto out;
+            }
+        }
+        sz = strlen(val) + 1;
+        strcpy(buf, val);
+        break;
+    case XEN_HYPFS_TYPE_UINT:
+        sz = entry->content_len;
+        errno = 0;
+        *(unsigned long long *)buf = strtoull(val, &val_end, 0);
+        if (errno || !*val || *val_end)
+            goto out;
+        mask = ~0ULL << (8 * sz);
+        if ((*(uint64_t *)buf & mask) && ((*(uint64_t *)buf & mask) != mask)) {
+            errno = ERANGE;
+            goto out;
+        }
+        break;
+    case XEN_HYPFS_TYPE_INT:
+        sz = entry->content_len;
+        errno = 0;
+        *(unsigned long long *)buf = strtoll(val, &val_end, 0);
+        if (errno || !*val || *val_end)
+            goto out;
+        mask = (sz == 8) ? 0 : ~0ULL << (8 * sz);
+        if ((*(uint64_t *)buf & mask) && ((*(uint64_t *)buf & mask) != mask)) {
+            errno = ERANGE;
+            goto out;
+        }
+        break;
+    case XEN_HYPFS_TYPE_BOOL:
+        sz = entry->content_len;
+        *(unsigned long long *)buf = 0;
+        if (!strcmp(val, "1") || !strcmp(val, "on") || !strcmp(val, "yes") ||
+            !strcmp(val, "true") || !strcmp(val, "enable"))
+            *(unsigned long long *)buf = 1;
+        else if (strcmp(val, "0") && strcmp(val, "no") && strcmp(val, "off") &&
+                 strcmp(val, "false") && strcmp(val, "disable")) {
+            errno = EDOM;
+            goto out;
+        }
+        break;
+    default:
+        /* No support for other types (yet). */
+        errno = EDOM;
+        goto out;
+    }
+
+    ret = xencall5(fshdl->xcall, __HYPERVISOR_hypfs_op,
+                   XEN_HYPFS_OP_write_contents,
+                   (unsigned long)path_buf, path_sz,
+                   (unsigned long)buf, sz);
+
+ out:
+    saved_errno = errno;
+    xencall_free_buffer(fshdl->xcall, path_buf);
+    xencall_free_buffer(fshdl->xcall, buf);
+    errno = saved_errno;
+    return ret;
+}
diff --git a/tools/libs/hypfs/include/xenhypfs.h b/tools/libs/hypfs/include/xenhypfs.h
new file mode 100644
index 0000000000..ab157edceb
--- /dev/null
+++ b/tools/libs/hypfs/include/xenhypfs.h
@@ -0,0 +1,90 @@
+/*
+ * Copyright (c) 2019 SUSE Software Solutions Germany GmbH
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation;
+ * version 2.1 of the License.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; If not, see <http://www.gnu.org/licenses/>.
+ */
+#ifndef XENHYPFS_H
+#define XENHYPFS_H
+
+#include <stdbool.h>
+#include <stdint.h>
+#include <sys/types.h>
+
+/* Callers who don't care don't need to #include <xentoollog.h> */
+struct xentoollog_logger;
+
+typedef struct xenhypfs_handle xenhypfs_handle;
+
+struct xenhypfs_dirent {
+    char *name;
+    size_t size;
+    enum {
+        xenhypfs_type_dir,
+        xenhypfs_type_blob,
+        xenhypfs_type_string,
+        xenhypfs_type_uint,
+        xenhypfs_type_int,
+        xenhypfs_type_bool
+    } type;
+    enum {
+        xenhypfs_enc_plain,
+        xenhypfs_enc_gzip
+    } encoding;
+    bool is_writable;
+};
+
+xenhypfs_handle *xenhypfs_open(struct xentoollog_logger *logger,
+                               unsigned int open_flags);
+int xenhypfs_close(xenhypfs_handle *fshdl);
+
+/*
+ * Return the raw contents of a Xen hypfs entry and its dirent containing
+ * the size, type and encoding.
+ * Returned buffer and dirent should be freed via free().
+ */
+void *xenhypfs_read_raw(xenhypfs_handle *fshdl, const char *path,
+                        struct xenhypfs_dirent **dirent);
+
+/*
+ * Return the contents of a Xen hypfs entry as a string.
+ * Returned buffer should be freed via free().
+ */
+char *xenhypfs_read(xenhypfs_handle *fshdl, const char *path);
+
+/*
+ * Return the contents of a Xen hypfs directory in form of an array of
+ * dirents.
+ * Returned buffer should be freed via free().
+ */
+struct xenhypfs_dirent *xenhypfs_readdir(xenhypfs_handle *fshdl,
+                                         const char *path,
+                                         unsigned int *num_entries);
+
+/*
+ * Write a Xen hypfs entry with a value. The value is converted from a string
+ * to the appropriate type.
+ */
+int xenhypfs_write(xenhypfs_handle *fshdl, const char *path, const char *val);
+
+#endif /* XENHYPFS_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/tools/libs/hypfs/libxenhypfs.map b/tools/libs/hypfs/libxenhypfs.map
new file mode 100644
index 0000000000..47f1edda3e
--- /dev/null
+++ b/tools/libs/hypfs/libxenhypfs.map
@@ -0,0 +1,10 @@
+VERS_1.0 {
+	global:
+		xenhypfs_open;
+		xenhypfs_close;
+		xenhypfs_read_raw;
+		xenhypfs_read;
+		xenhypfs_readdir;
+		xenhypfs_write;
+	local: *; /* Do not expose anything by default */
+};
diff --git a/tools/libs/hypfs/xenhypfs.pc.in b/tools/libs/hypfs/xenhypfs.pc.in
new file mode 100644
index 0000000000..92a262c7a2
--- /dev/null
+++ b/tools/libs/hypfs/xenhypfs.pc.in
@@ -0,0 +1,10 @@
+prefix=@@prefix@@
+includedir=@@incdir@@
+libdir=@@libdir@@
+
+Name: Xenhypfs
+Description: The Xenhypfs library for Xen hypervisor
+Version: @@version@@
+Cflags: -I${includedir} @@cflagslocal@@
+Libs: @@libsflag@@${libdir} -lxenhypfs
+Requires.private: xentoolcore,xentoollog,xencall,z
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZA-0004rY-OB; Tue, 19 May 2020 07:21:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZ9-0004r0-FO
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:31 +0000
X-Inumbo-ID: 5110d970-99a1-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5110d970-99a1-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 07:21:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A6607B227;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 08/12] xen: add /buildinfo/config entry to hypervisor
 filesystem
Date: Tue, 19 May 2020 09:21:02 +0200
Message-Id: <20200519072106.26894-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the /buildinfo/config entry to the hypervisor filesystem. This
entry contains the .config file used to build the hypervisor.

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V3:
- store data in gzip format
- use binfile mechanism to create data file
- move code to kernel.c

V6:
- add config item for the /buildinfo/config (Jan Beulich)
- make config related variables const in kernel.h (Jan Beulich)

V7:
- update doc (Jan Beulich)
- use "rm -f" in Makefile (Jan Beulich)

V8:
- add dependency on CONFIG_HYPFS
- use macro for definition of leaf (Jan Beulich)

V9:
- adjust type of xen_config_data (Jan Beulich)

---
 .gitignore                   |  2 ++
 docs/misc/hypfs-paths.pandoc |  4 ++++
 xen/common/Kconfig           | 11 +++++++++++
 xen/common/Makefile          | 12 ++++++++++++
 xen/common/kernel.c          | 11 +++++++++++
 xen/include/xen/kernel.h     |  3 +++
 6 files changed, 43 insertions(+)

diff --git a/.gitignore b/.gitignore
index 6171b3b43f..b8bdb25040 100644
--- a/.gitignore
+++ b/.gitignore
@@ -298,6 +298,8 @@ xen/arch/*/efi/boot.c
 xen/arch/*/efi/compat.c
 xen/arch/*/efi/efi.h
 xen/arch/*/efi/runtime.c
+xen/common/config_data.S
+xen/common/config.gz
 xen/include/headers*.chk
 xen/include/asm
 xen/include/asm-*/asm-offsets.h
diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index d730caf394..9a76bc383b 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -135,6 +135,10 @@ Information about the compile domain.
 
 The compiler used to build Xen.
 
+#### /buildinfo/config = STRING [CONFIG_HYPFS_CONFIG]
+
+The contents of the `xen/.config` file at the time of the hypervisor build.
+
 #### /buildinfo/version/
 
 A directory containing version information of the hypervisor.
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index e768ea36b2..065f2ee454 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -127,6 +127,17 @@ config HYPFS
 
 	  If unsure, say Y.
 
+config HYPFS_CONFIG
+	bool "Provide hypervisor .config via hypfs entry"
+	default y
+	depends on HYPFS
+	---help---
+	  When enabled the contents of the .config file used to build the
+	  hypervisor are provided via the hypfs entry /buildinfo/config.
+
+	  Disable this option in case you want to spare some memory or you
+	  want to hide the .config contents from dom0.
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index bf7d0e25a3..3d61239fbf 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -1,6 +1,7 @@
 obj-$(CONFIG_ARGO) += argo.o
 obj-y += bitmap.o
 obj-y += bsearch.o
+obj-$(CONFIG_HYPFS_CONFIG) += config_data.o
 obj-$(CONFIG_CORE_PARKING) += core_parking.o
 obj-y += cpu.o
 obj-$(CONFIG_DEBUG_TRACE) += debugtrace.o
@@ -73,3 +74,14 @@ obj-$(CONFIG_UBSAN) += ubsan/
 
 obj-$(CONFIG_NEEDS_LIBELF) += libelf/
 obj-$(CONFIG_HAS_DEVICE_TREE) += libfdt/
+
+config.gz: ../.config
+	gzip -c $< >$@
+
+config_data.o: config.gz
+
+config_data.S: $(XEN_ROOT)/xen/tools/binfile
+	$(XEN_ROOT)/xen/tools/binfile $@ config.gz xen_config_data
+
+clean::
+	rm -f config_data.S config.gz 2>/dev/null
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index db7bd23fcb..f464fe02ed 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -390,6 +390,10 @@ static HYPFS_STRING_INIT(compile_date, "compile_date");
 static HYPFS_STRING_INIT(compile_domain, "compile_domain");
 static HYPFS_STRING_INIT(extra, "extra");
 
+#ifdef CONFIG_HYPFS_CONFIG
+static HYPFS_STRING_INIT(config, "config");
+#endif
+
 static int __init buildinfo_init(void)
 {
     hypfs_add_dir(&hypfs_root, &buildinfo, true);
@@ -415,6 +419,13 @@ static int __init buildinfo_init(void)
     hypfs_add_leaf(&version, &major, true);
     hypfs_add_leaf(&version, &minor, true);
 
+#ifdef CONFIG_HYPFS_CONFIG
+    config.e.encoding = XEN_HYPFS_ENC_GZIP;
+    config.e.size = xen_config_data_size;
+    config.content = xen_config_data;
+    hypfs_add_leaf(&buildinfo, &config, true);
+#endif
+
     return 0;
 }
 __initcall(buildinfo_init);
diff --git a/xen/include/xen/kernel.h b/xen/include/xen/kernel.h
index 548b64da9f..8cd142032d 100644
--- a/xen/include/xen/kernel.h
+++ b/xen/include/xen/kernel.h
@@ -100,5 +100,8 @@ extern enum system_state {
 
 bool_t is_active_kernel_text(unsigned long addr);
 
+extern const char xen_config_data[];
+extern const unsigned int xen_config_data_size;
+
 #endif /* _LINUX_KERNEL_H */
 
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZC-0004tn-4J; Tue, 19 May 2020 07:21:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZA-0004rW-Nq
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:32 +0000
X-Inumbo-ID: 51f517a2-99a1-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51f517a2-99a1-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A532AB1F2;
 Tue, 19 May 2020 07:21:14 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 12/12] xen: remove XEN_SYSCTL_set_parameter support
Date: Tue, 19 May 2020 09:21:06 +0200
Message-Id: <20200519072106.26894-13-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The functionality of XEN_SYSCTL_set_parameter is now available via
hypfs, so the sysctl can be removed.

This also allows removing the kernel_param structure for runtime
parameters, by moving the single structure member still in use into the
hypfs node structure of the runtime parameters.

Signed-off-by: Juergen Gross <jgross@suse.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
V6:
- new patch

V7:
- only comment out definition of XEN_SYSCTL_set_parameter (Jan Beulich)

V8:
- rebase to use CONFIG_HYPFS

V9:
- adjust CONFIG_HYPFS Kconfig text (Jan Beulich)

---
 tools/flask/policy/modules/dom0.te  |  2 +-
 xen/arch/arm/xen.lds.S              |  5 --
 xen/arch/x86/hvm/vmx/vmcs.c         |  6 +-
 xen/arch/x86/xen.lds.S              |  5 --
 xen/common/Kconfig                  |  5 +-
 xen/common/hypfs.c                  |  6 +-
 xen/common/kernel.c                 | 11 ----
 xen/common/sysctl.c                 | 36 ------------
 xen/include/public/sysctl.h         | 19 +------
 xen/include/xen/hypfs.h             |  5 --
 xen/include/xen/lib.h               |  1 -
 xen/include/xen/param.h             | 87 +++++------------------------
 xen/xsm/flask/hooks.c               |  3 -
 xen/xsm/flask/policy/access_vectors |  2 -
 14 files changed, 23 insertions(+), 170 deletions(-)

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 20925e38a2..0a63ce15b6 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -16,7 +16,7 @@ allow dom0_t xen_t:xen {
 allow dom0_t xen_t:xen2 {
 	resource_op psr_cmt_op psr_alloc pmu_ctrl get_symbol
 	get_cpu_levelling_caps get_cpu_featureset livepatch_op
-	coverage_op set_parameter
+	coverage_op
 };
 
 # Allow dom0 to use all XENVER_ subops that have checks.
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index 549ceb9749..6342ac4ead 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -54,11 +54,6 @@ SECTIONS
        *(.data.rel.ro)
        *(.data.rel.ro.*)
 
-       . = ALIGN(POINTER_ALIGN);
-       __param_start = .;
-       *(.data.param)
-       __param_end = .;
-
        __proc_info_start = .;
        *(.proc.info)
        __proc_info_end = .;
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 3410bc5f6d..ca94c2bedc 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -112,11 +112,6 @@ static void __init init_ept_param(struct param_hypfs *par)
     update_ept_param();
     custom_runtime_set_var(par, opt_ept_setting);
 }
-#else
-static void update_ept_param(void)
-{
-}
-#endif
 
 static int parse_ept_param_runtime(const char *s);
 custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
@@ -172,6 +167,7 @@ static int parse_ept_param_runtime(const char *s)
 
     return 0;
 }
+#endif
 
 /* Dynamic (run-time adjusted) execution control flags. */
 u32 vmx_pin_based_exec_control __read_mostly;
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 3ed020e26b..0273f79152 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -128,11 +128,6 @@ SECTIONS
        *(.ex_table.pre)
        __stop___pre_ex_table = .;
 
-       . = ALIGN(POINTER_ALIGN);
-       __param_start = .;
-       *(.data.param)
-       __param_end = .;
-
 #if defined(CONFIG_HAS_VPCI) && defined(CONFIG_LATE_HWDOM)
        . = ALIGN(POINTER_ALIGN);
        __start_vpci_array = .;
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 065f2ee454..15e3b79ff5 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -122,8 +122,9 @@ config HYPFS
 	---help---
 	  Support Xen hypervisor file system. This file system is used to
 	  present various hypervisor internal data to dom0 and in some
-	  cases to allow modifying settings. Disabling the support might
-	  result in some features not being available.
+	  cases to allow modifying settings. Disabling the support will
+	  result in some features not being available, e.g. runtime parameter
+	  setting.
 
 	  If unsure, say Y.
 
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 4aed6ae182..1c69f9065a 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -42,7 +42,7 @@ static void hypfs_read_lock(void)
     this_cpu(hypfs_locked) = hypfs_read_locked;
 }
 
-void hypfs_write_lock(void)
+static void hypfs_write_lock(void)
 {
     ASSERT(this_cpu(hypfs_locked) == hypfs_unlocked);
 
@@ -50,7 +50,7 @@ void hypfs_write_lock(void)
     this_cpu(hypfs_locked) = hypfs_write_locked;
 }
 
-void hypfs_unlock(void)
+static void hypfs_unlock(void)
 {
     enum hypfs_lock_state locked = this_cpu(hypfs_locked);
 
@@ -369,7 +369,7 @@ int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
         goto out;
 
     p = container_of(leaf, struct param_hypfs, hypfs);
-    ret = p->param->par.func(buf);
+    ret = p->func(buf);
 
  out:
     xfree(buf);
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index d1381d6900..c4caeaec71 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -196,17 +196,6 @@ static void __init _cmdline_parse(const char *cmdline)
     parse_params(cmdline, __setup_start, __setup_end);
 }
 
-int runtime_parse(const char *line)
-{
-    int ret;
-
-    hypfs_write_lock();
-    ret = parse_params(line, __param_start, __param_end);
-    hypfs_unlock();
-
-    return ret;
-}
-
 /**
  *    cmdline_parse -- parses the xen command line.
  * If CONFIG_CMDLINE is set, it would be parsed prior to @cmdline.
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 1c6a817476..ec916424e5 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -471,42 +471,6 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
             copyback = 1;
         break;
 
-    case XEN_SYSCTL_set_parameter:
-    {
-#define XEN_SET_PARAMETER_MAX_SIZE 1023
-        char *params;
-
-        if ( op->u.set_parameter.pad[0] || op->u.set_parameter.pad[1] ||
-             op->u.set_parameter.pad[2] )
-        {
-            ret = -EINVAL;
-            break;
-        }
-        if ( op->u.set_parameter.size > XEN_SET_PARAMETER_MAX_SIZE )
-        {
-            ret = -E2BIG;
-            break;
-        }
-        params = xmalloc_bytes(op->u.set_parameter.size + 1);
-        if ( !params )
-        {
-            ret = -ENOMEM;
-            break;
-        }
-        if ( copy_from_guest(params, op->u.set_parameter.params,
-                             op->u.set_parameter.size) )
-            ret = -EFAULT;
-        else
-        {
-            params[op->u.set_parameter.size] = 0;
-            ret = runtime_parse(params);
-        }
-
-        xfree(params);
-
-        break;
-    }
-
     default:
         ret = arch_do_sysctl(op, u_sysctl);
         copyback = 0;
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 3a08c512e8..a073647117 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -1026,22 +1026,6 @@ struct xen_sysctl_livepatch_op {
     } u;
 };
 
-/*
- * XEN_SYSCTL_set_parameter
- *
- * Change hypervisor parameters at runtime.
- * The input string is parsed similar to the boot parameters.
- * Parameters are a single string terminated by a NUL byte of max. size
- * characters. Multiple settings can be specified by separating them
- * with blanks.
- */
-
-struct xen_sysctl_set_parameter {
-    XEN_GUEST_HANDLE_64(const_char) params; /* IN: pointer to parameters. */
-    uint16_t size;                          /* IN: size of parameters. */
-    uint16_t pad[3];                        /* IN: MUST be zero. */
-};
-
 #if defined(__i386__) || defined(__x86_64__)
 /*
  * XEN_SYSCTL_get_cpu_policy (x86 specific)
@@ -1106,7 +1090,7 @@ struct xen_sysctl {
 #define XEN_SYSCTL_get_cpu_levelling_caps        25
 #define XEN_SYSCTL_get_cpu_featureset            26
 #define XEN_SYSCTL_livepatch_op                  27
-#define XEN_SYSCTL_set_parameter                 28
+/* #define XEN_SYSCTL_set_parameter              28 */
 #define XEN_SYSCTL_get_cpu_policy                29
     uint32_t interface_version; /* XEN_SYSCTL_INTERFACE_VERSION */
     union {
@@ -1135,7 +1119,6 @@ struct xen_sysctl {
         struct xen_sysctl_cpu_levelling_caps cpu_levelling_caps;
         struct xen_sysctl_cpu_featureset    cpu_featureset;
         struct xen_sysctl_livepatch_op      livepatch;
-        struct xen_sysctl_set_parameter     set_parameter;
 #if defined(__i386__) || defined(__x86_64__)
         struct xen_sysctl_cpu_policy        cpu_policy;
 #endif
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 507ed3ae0b..4c9016f119 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -118,11 +118,6 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
-void hypfs_write_lock(void);
-void hypfs_unlock(void);
-#else
-static inline void hypfs_write_lock(void) {}
-static inline void hypfs_unlock(void) {}
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xen/lib.h b/xen/include/xen/lib.h
index 2d7a054931..e5b0a007b8 100644
--- a/xen/include/xen/lib.h
+++ b/xen/include/xen/lib.h
@@ -75,7 +75,6 @@
 struct domain;
 
 void cmdline_parse(const char *cmdline);
-int runtime_parse(const char *line);
 int parse_bool(const char *s, const char *e);
 
 /**
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index 3fe1a06a41..064ba8da6e 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -27,9 +27,6 @@ struct kernel_param {
 };
 
 extern const struct kernel_param __setup_start[], __setup_end[];
-extern const struct kernel_param __param_start[], __param_end[];
-
-#define __dataparam       __used_section(".data.param")
 
 #define __param(att)      static const att \
     __attribute__((__aligned__(sizeof(void *)))) struct kernel_param
@@ -79,14 +76,12 @@ extern const struct kernel_param __param_start[], __param_end[];
         { .name = setup_str_ign,            \
           .type = OPT_IGNORE }
 
-#define __rtparam         __param(__dataparam)
-
 #ifdef CONFIG_HYPFS
 
 struct param_hypfs {
-    const struct kernel_param *param;
     struct hypfs_entry_leaf hypfs;
     void (*init_leaf)(struct param_hypfs *par);
+    int (*func)(const char *);
 };
 
 extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
@@ -109,28 +104,17 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
 
 /* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
 #define custom_runtime_only_param(nam, variable, initfunc) \
-    __rtparam __rtpar_##variable = \
-      { .name = (nam), \
-          .type = OPT_CUSTOM, \
-          .par.func = (variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .init_leaf = (initfunc), \
-          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.read = hypfs_read_leaf, \
-          .hypfs.e.write = hypfs_write_custom }
+          .hypfs.e.write = hypfs_write_custom, \
+          .init_leaf = (initfunc), \
+          .func = (variable) }
 #define boolean_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_BOOL, \
-          .len = sizeof(variable) + \
-                 BUILD_BUG_ON_ZERO(sizeof(variable) != sizeof(bool)), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
@@ -139,14 +123,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_bool, \
           .hypfs.content = &(variable) }
 #define integer_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_UINT, \
-          .len = sizeof(variable), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
@@ -155,14 +133,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_leaf, \
           .hypfs.content = &(variable) }
 #define size_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_SIZE, \
-          .len = sizeof(variable), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = sizeof(variable), \
@@ -171,14 +143,8 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
           .hypfs.e.write = hypfs_write_leaf, \
           .hypfs.content = &(variable) }
 #define string_runtime_only_param(nam, variable) \
-    __rtparam __rtpar_##variable = \
-        { .name = (nam), \
-          .type = OPT_STR, \
-          .len = sizeof(variable), \
-          .par.var = &(variable) }; \
     __paramfs __parfs_##variable = \
-        { .param = &__rtpar_##variable, \
-          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+        { .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
           .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
           .hypfs.e.name = (nam), \
           .hypfs.e.size = 0, \
@@ -189,36 +155,11 @@ extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
 
 #else
 
-#define custom_runtime_only_param(_name, _var, unused) \
-    __rtparam __rtpar_##_var = \
-      { .name = _name, \
-          .type = OPT_CUSTOM, \
-          .par.func = _var }
-#define boolean_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_BOOL, \
-          .len = sizeof(_var) + \
-                 BUILD_BUG_ON_ZERO(sizeof(_var) != sizeof(bool)), \
-          .par.var = &_var }
-#define integer_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_UINT, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
-#define size_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_SIZE, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
-#define string_runtime_only_param(_name, _var) \
-    __rtparam __rtpar_##_var = \
-        { .name = _name, \
-          .type = OPT_STR, \
-          .len = sizeof(_var), \
-          .par.var = &_var }
+#define custom_runtime_only_param(nam, var, initfunc)
+#define boolean_runtime_only_param(nam, var)
+#define integer_runtime_only_param(nam, var)
+#define size_runtime_only_param(nam, var)
+#define string_runtime_only_param(nam, var)
 
 #define custom_runtime_set_var(parfs, var)
 
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index a2c78e445c..a314bf85ce 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -822,9 +822,6 @@ static int flask_sysctl(int cmd)
     case XEN_SYSCTL_coverage_op:
         return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
                                     XEN2__COVERAGE_OP, NULL);
-    case XEN_SYSCTL_set_parameter:
-        return avc_current_has_perm(SECINITSID_XEN, SECCLASS_XEN2,
-                                    XEN2__SET_PARAMETER, NULL);
 
     default:
         return avc_unknown_permission("sysctl", cmd);
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c9e385fb9b..b87c99ea98 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -99,8 +99,6 @@ class xen2
     livepatch_op
 # XEN_SYSCTL_coverage_op
     coverage_op
-# XEN_SYSCTL_set_parameter
-    set_parameter
 }
 
 # Classes domain and domain2 consist of operations that a domain performs on
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZF-0004wI-E3; Tue, 19 May 2020 07:21:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZE-0004vS-EO
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:36 +0000
X-Inumbo-ID: 5191f7ee-99a1-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5191f7ee-99a1-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2137FB21D;
 Tue, 19 May 2020 07:21:14 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 10/12] tools/libxl: use libxenhypfs for setting xen
 runtime parameters
Date: Tue, 19 May 2020 09:21:04 +0200
Message-Id: <20200519072106.26894-11-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Instead of xc_set_parameters(), use xenhypfs_write() for setting
hypervisor parameters.
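
For readers following the new path handling: each blank-separated token in
the parameter string is mapped to a hypfs node under /params/, with "foo=bar"
writing "bar" to /params/foo and a bare "nofoo" writing "no" to /params/foo.
The sketch below mirrors that loop in self-contained form; set_fn is a
hypothetical stand-in for xenhypfs_write(), and the buffer sizes are
arbitrary, so this is illustrative only, not the libxl implementation.

```c
#include <assert.h>
#include <ctype.h>
#include <stdio.h>
#include <string.h>

/* Record the last (path, value) pair, standing in for xenhypfs_write(). */
static char last_path[64], last_val[64];
static void record(const char *path, const char *val)
{
    snprintf(last_path, sizeof(last_path), "%s", path);
    snprintf(last_val, sizeof(last_val), "%s", val);
}

/* Split "par1=val1 par2 nopar3 ..." into per-parameter hypfs writes. */
static void parse_params(const char *params,
                         void (*set_fn)(const char *path, const char *val))
{
    char path[64], val[64];
    const char *par, *end, *eq;

    while (isblank((unsigned char)*params))
        params++;

    for (par = params; *par; par = end) {
        end = strchr(par, ' ');
        if (!end)
            end = par + strlen(par);

        eq = memchr(par, '=', (size_t)(end - par));
        if (!eq && !strncmp(par, "no", 2)) {
            /* "nofoo" -> write "no" to /params/foo */
            snprintf(path, sizeof(path), "/params/%.*s",
                     (int)(end - par - 2), par + 2);
            snprintf(val, sizeof(val), "no");
        } else if (eq) {
            /* "foo=bar" -> write "bar" to /params/foo */
            snprintf(path, sizeof(path), "/params/%.*s",
                     (int)(eq - par), par);
            snprintf(val, sizeof(val), "%.*s", (int)(end - eq - 1), eq + 1);
        } else {
            /* neither form; ignored in this sketch */
            while (isblank((unsigned char)*end))
                end++;
            continue;
        }

        set_fn(path, val);

        while (isblank((unsigned char)*end))
            end++;
    }
}
```

For example, "ept=exec-sp=1" only splits at the first '=', so the value
keeps its embedded '=' sign, matching how the ept sub-options are passed.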

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V6:
- new patch
---
 tools/Rules.mk               |  2 +-
 tools/libxl/Makefile         |  3 +-
 tools/libxl/libxl.c          | 53 ++++++++++++++++++++++++++++++++----
 tools/libxl/libxl_internal.h |  1 +
 tools/libxl/xenlight.pc.in   |  2 +-
 tools/xl/xl_misc.c           |  1 -
 6 files changed, 52 insertions(+), 10 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index ad6073fcad..883a193f9e 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -178,7 +178,7 @@ CFLAGS += -O2 -fomit-frame-pointer
 endif
 
 CFLAGS_libxenlight = -I$(XEN_XENLIGHT) $(CFLAGS_libxenctrl) $(CFLAGS_xeninclude)
-SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
+SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore) $(SHLIB_libxenhypfs)
 LDLIBS_libxenlight = $(SHDEPS_libxenlight) $(XEN_XENLIGHT)/libxenlight$(libextension)
 SHLIB_libxenlight  = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_XENLIGHT)
 
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 69fcf21577..a89ebab0b4 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -20,7 +20,7 @@ LIBUUID_LIBS += -luuid
 endif
 
 LIBXL_LIBS =
-LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
+LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenhypfs) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
 ifeq ($(CONFIG_LIBNL),y)
 LIBXL_LIBS += $(LIBNL3_LIBS)
 endif
@@ -33,6 +33,7 @@ CFLAGS_LIBXL += $(CFLAGS_libxentoolcore)
 CFLAGS_LIBXL += $(CFLAGS_libxenevtchn)
 CFLAGS_LIBXL += $(CFLAGS_libxenctrl)
 CFLAGS_LIBXL += $(CFLAGS_libxenguest)
+CFLAGS_LIBXL += $(CFLAGS_libxenhypfs)
 CFLAGS_LIBXL += $(CFLAGS_libxenstore)
 ifeq ($(CONFIG_LIBNL),y)
 CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
index f60fd3e4fd..621acc88f3 100644
--- a/tools/libxl/libxl.c
+++ b/tools/libxl/libxl.c
@@ -663,15 +663,56 @@ int libxl_set_parameters(libxl_ctx *ctx, char *params)
 {
     int ret;
     GC_INIT(ctx);
+    char *par, *val, *end, *path;
+    xenhypfs_handle *hypfs;
 
-    ret = xc_set_parameters(ctx->xch, params);
-    if (ret < 0) {
-        LOGEV(ERROR, ret, "setting parameters");
-        GC_FREE;
-        return ERROR_FAIL;
+    hypfs = xenhypfs_open(ctx->lg, 0);
+    if (!hypfs) {
+        LOGE(ERROR, "opening Xen hypfs");
+        ret = ERROR_FAIL;
+        goto out;
     }
+
+    while (isblank(*params))
+        params++;
+
+    for (par = params; *par; par = end) {
+        end = strchr(par, ' ');
+        if (!end)
+            end = par + strlen(par);
+
+        val = strchr(par, '=');
+        if (val > end)
+            val = NULL;
+        if (!val && !strncmp(par, "no", 2)) {
+            path = libxl__sprintf(gc, "/params/%s", par + 2);
+            path[end - par - 2 + 8] = 0;
+            val = "no";
+            par += 2;
+        } else {
+            path = libxl__sprintf(gc, "/params/%s", par);
+            path[val - par + 8] = 0;
+            val = libxl__strndup(gc, val + 1, end - val - 1);
+        }
+
+	LOG(DEBUG, "setting node \"%s\" to value \"%s\"", path, val);
+        ret = xenhypfs_write(hypfs, path, val);
+        if (ret < 0) {
+            LOGE(ERROR, "setting parameters");
+            ret = ERROR_FAIL;
+            goto out;
+        }
+
+        while (isblank(*end))
+            end++;
+    }
+
+    ret = 0;
+
+out:
+    xenhypfs_close(hypfs);
     GC_FREE;
-    return 0;
+    return ret;
 }
 
 static int fd_set_flags(libxl_ctx *ctx, int fd,
diff --git a/tools/libxl/libxl_internal.h b/tools/libxl/libxl_internal.h
index e5effd2ad1..b85b771659 100644
--- a/tools/libxl/libxl_internal.h
+++ b/tools/libxl/libxl_internal.h
@@ -56,6 +56,7 @@
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
 #include <xenguest.h>
+#include <xenhypfs.h>
 #include <xc_dom.h>
 
 #include <xen-tools/libs.h>
diff --git a/tools/libxl/xenlight.pc.in b/tools/libxl/xenlight.pc.in
index c0f769fd20..6b351ba096 100644
--- a/tools/libxl/xenlight.pc.in
+++ b/tools/libxl/xenlight.pc.in
@@ -9,4 +9,4 @@ Description: The Xenlight library for Xen hypervisor
 Version: @@version@@
 Cflags: -I${includedir}
 Libs: @@libsflag@@${libdir} -lxenlight
-Requires.private: xentoollog,xenevtchn,xencontrol,xenguest,xenstore
+Requires.private: xentoollog,xenevtchn,xencontrol,xenguest,xenstore,xenhypfs
diff --git a/tools/xl/xl_misc.c b/tools/xl/xl_misc.c
index 20ed605f4f..08f0fb6dc9 100644
--- a/tools/xl/xl_misc.c
+++ b/tools/xl/xl_misc.c
@@ -168,7 +168,6 @@ int main_set_parameters(int argc, char **argv)
 
     if (libxl_set_parameters(ctx, params)) {
         fprintf(stderr, "cannot set parameters: %s\n", params);
-        fprintf(stderr, "Use \"xl dmesg\" to look for possible reason.\n");
         return EXIT_FAILURE;
     }
 
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZG-0004xj-NX; Tue, 19 May 2020 07:21:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZF-0004wV-OB
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:37 +0000
X-Inumbo-ID: 51106f6f-99a1-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51106f6f-99a1-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EE082B21E;
 Tue, 19 May 2020 07:21:13 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 09/12] xen: add runtime parameter access support to hypfs
Date: Tue, 19 May 2020 09:21:03 +0200
Message-Id: <20200519072106.26894-10-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add support for reading and modifying hypervisor runtime parameter
values via the hypervisor file system.

As runtime parameters can also be modified via a sysctl, this path has
to take the hypfs rw_lock as writer.
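
The locking discipline this adds to runtime_parse() can be sketched as
follows; the rw_lock is reduced to a simple flag here and the names are
simplified stand-ins for the Xen symbols (the real code uses a percpu
rwlock plus a per-CPU lock-state variable), so treat this as a shape
illustration only.

```c
#include <assert.h>
#include <string.h>

/* 0 = unlocked, 1 = write-locked; stands in for the hypfs rw_lock. */
static int hypfs_locked;

static void hypfs_write_lock(void)
{
    assert(!hypfs_locked);  /* no recursive locking, as in the real code */
    hypfs_locked = 1;
}

static void hypfs_unlock(void)
{
    assert(hypfs_locked);
    hypfs_locked = 0;
}

/* Stand-in for parse_params(); must only run under the writer lock. */
static const char *last_line;
static int parse_params_stub(const char *line)
{
    assert(hypfs_locked);
    last_line = line;
    return 0;
}

/* The whole parse runs write-locked, so hypfs readers never see a
 * half-updated parameter value. */
static int runtime_parse(const char *line)
{
    int ret;

    hypfs_write_lock();
    ret = parse_params_stub(line);
    hypfs_unlock();

    return ret;
}
```
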

For custom runtime parameters, the parameter value is connected to the
file system via an init function, which sets the initial value (if
needed) and the leaf properties.
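
The init-function scheme can be illustrated with a minimal mock: the leaf
starts empty, and the per-parameter init function fills in the current value
string plus the leaf's size bookkeeping before the node is exposed. struct
leaf and set_var() below are simplified stand-ins for struct param_hypfs and
custom_runtime_set_var(), not the actual Xen types; the pcid default is taken
from the patch (PCID_XPTI maps to "xpti").

```c
#include <assert.h>
#include <string.h>

/* Simplified mock of a hypfs string leaf. */
struct leaf {
    const char *content;
    unsigned int size;      /* strlen(content) + 1 */
    unsigned int max_size;  /* capacity of the backing buffer */
};

/* Backing buffer for the parameter's current value; "xpti" mirrors the
 * PCID default in the patch. */
static char opt_pcid_val[8] = "xpti";

/* Approximates custom_runtime_set_var(): point the leaf at the value
 * buffer and update its sizes. */
static void set_var(struct leaf *l, const char *var, unsigned int max)
{
    l->content = var;
    l->max_size = max;
    l->size = strlen(var) + 1;
}

/* The init function run once at boot for this parameter's leaf. */
static void opt_pcid_init(struct leaf *l)
{
    set_var(l, opt_pcid_val, sizeof(opt_pcid_val));
}
```

The runtime parse function then calls set_var() again after each accepted
write, so the leaf always reports the value currently in effect.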

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V3:
- complete rework
- support custom parameters, too
- support parameter writing

V6:
- rewording in docs/misc/hypfs-paths.pandoc (Jan Beulich)
- use memchr() (Jan Beulich)
- use strlcat() (Jan Beulich)
- rework to use a custom parameter init function instead of a reference
  to a content variable, allowing the default strings to be dropped
- style correction (Jan Beulich)
- dropping param_append_str() in favor of a custom function at its only
  use site

V7:
- fine tune some parameter initializations (Jan Beulich)
- call custom_runtime_set_var() after updating the value
- modify alignment in Arm linker script to 4 (Jan Beulich)

V8:
- modify alignment in Arm linker script to 8 (Julien Grall)
- fix ept runtime parameter reporting (Jan Beulich)
- rebase to support CONFIG_HYPFS

V9:
- add empty line in arm linker script (Jan Beulich)
- drop array size enum members (Jan Beulich)
- hide struct param_hypfs completely without CONFIG_HYPFS (Jan Beulich)
- don't write size from hypfs_write_custom() (Jan Beulich)
- drop underscores from macro parameters (Jan Beulich)
- add parentheses around macro parameters (Jan Beulich)
- don't set initial string param size (Jan Beulich)
- move code in vmcs.c in preparation of patch 12 (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/misc/hypfs-paths.pandoc |   9 +++
 xen/arch/arm/xen.lds.S       |   8 +++
 xen/arch/x86/hvm/vmx/vmcs.c  |  29 ++++++++-
 xen/arch/x86/pv/domain.c     |  21 ++++++-
 xen/arch/x86/xen.lds.S       |   7 +++
 xen/common/grant_table.c     |  62 ++++++++++++++----
 xen/common/hypfs.c           |  34 +++++++++-
 xen/common/kernel.c          |  29 ++++++++-
 xen/drivers/char/console.c   |  72 +++++++++++++++++++--
 xen/include/xen/hypfs.h      |   7 +++
 xen/include/xen/param.h      | 119 ++++++++++++++++++++++++++++++++++-
 11 files changed, 372 insertions(+), 25 deletions(-)

diff --git a/docs/misc/hypfs-paths.pandoc b/docs/misc/hypfs-paths.pandoc
index 9a76bc383b..a111c6f25c 100644
--- a/docs/misc/hypfs-paths.pandoc
+++ b/docs/misc/hypfs-paths.pandoc
@@ -154,3 +154,12 @@ The major version of Xen.
 #### /buildinfo/version/minor = INTEGER
 
 The minor version of Xen.
+
+#### /params/
+
+A directory of runtime parameters.
+
+#### /params/*
+
+The individual parameters. The description of the different parameters can be
+found in `docs/misc/xen-command-line.pandoc`.
diff --git a/xen/arch/arm/xen.lds.S b/xen/arch/arm/xen.lds.S
index a497f6a48d..549ceb9749 100644
--- a/xen/arch/arm/xen.lds.S
+++ b/xen/arch/arm/xen.lds.S
@@ -89,6 +89,14 @@ SECTIONS
        __start_schedulers_array = .;
        *(.data.schedulers)
        __end_schedulers_array = .;
+
+#ifdef CONFIG_HYPFS
+       . = ALIGN(8);
+       __paramhypfs_start = .;
+       *(.data.paramhypfs)
+       __paramhypfs_end = .;
+#endif
+
        *(.data.rel)
        *(.data.rel.*)
        CONSTRUCTORS
diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
index 221af9737a..3410bc5f6d 100644
--- a/xen/arch/x86/hvm/vmx/vmcs.c
+++ b/xen/arch/x86/hvm/vmx/vmcs.c
@@ -97,6 +97,30 @@ static int __init parse_ept_param(const char *s)
 }
 custom_param("ept", parse_ept_param);
 
+#ifdef CONFIG_HYPFS
+static char opt_ept_setting[10];
+
+static void update_ept_param(void)
+{
+    if ( opt_ept_exec_sp >= 0 )
+        snprintf(opt_ept_setting, sizeof(opt_ept_setting), "exec-sp=%d",
+                 opt_ept_exec_sp);
+}
+
+static void __init init_ept_param(struct param_hypfs *par)
+{
+    update_ept_param();
+    custom_runtime_set_var(par, opt_ept_setting);
+}
+#else
+static void update_ept_param(void)
+{
+}
+#endif
+
+static int parse_ept_param_runtime(const char *s);
+custom_runtime_only_param("ept", parse_ept_param_runtime, init_ept_param);
+
 static int parse_ept_param_runtime(const char *s)
 {
     struct domain *d;
@@ -115,6 +139,10 @@ static int parse_ept_param_runtime(const char *s)
 
     opt_ept_exec_sp = val;
 
+    update_ept_param();
+    custom_runtime_set_var(param_2_parfs(parse_ept_param_runtime),
+                           opt_ept_setting);
+
     rcu_read_lock(&domlist_read_lock);
     for_each_domain ( d )
     {
@@ -144,7 +172,6 @@ static int parse_ept_param_runtime(const char *s)
 
     return 0;
 }
-custom_runtime_only_param("ept", parse_ept_param_runtime);
 
 /* Dynamic (run-time adjusted) execution control flags. */
 u32 vmx_pin_based_exec_control __read_mostly;
diff --git a/xen/arch/x86/pv/domain.c b/xen/arch/x86/pv/domain.c
index 0a4a5bd001..f4e863a410 100644
--- a/xen/arch/x86/pv/domain.c
+++ b/xen/arch/x86/pv/domain.c
@@ -55,6 +55,23 @@ static __read_mostly enum {
     PCID_NOXPTI
 } opt_pcid = PCID_XPTI;
 
+#ifdef CONFIG_HYPFS
+static const char opt_pcid_2_string[][7] = {
+    [PCID_OFF] = "off",
+    [PCID_ALL] = "on",
+    [PCID_XPTI] = "xpti",
+    [PCID_NOXPTI] = "noxpti",
+};
+
+static void __init opt_pcid_init(struct param_hypfs *par)
+{
+    custom_runtime_set_var(par, opt_pcid_2_string[opt_pcid]);
+}
+#endif
+
+static int parse_pcid(const char *s);
+custom_runtime_param("pcid", parse_pcid, opt_pcid_init);
+
 static int parse_pcid(const char *s)
 {
     int rc = 0;
@@ -87,9 +104,11 @@ static int parse_pcid(const char *s)
         break;
     }
 
+    custom_runtime_set_var(param_2_parfs(parse_pcid),
+                           opt_pcid_2_string[opt_pcid]);
+
     return rc;
 }
-custom_runtime_param("pcid", parse_pcid);
 
 static void noreturn continue_nonidle_domain(struct vcpu *v)
 {
diff --git a/xen/arch/x86/xen.lds.S b/xen/arch/x86/xen.lds.S
index 0e3a733cab..3ed020e26b 100644
--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -279,6 +279,13 @@ SECTIONS
        __start_schedulers_array = .;
        *(.data.schedulers)
        __end_schedulers_array = .;
+
+#ifdef CONFIG_HYPFS
+       . = ALIGN(8);
+       __paramhypfs_start = .;
+       *(.data.paramhypfs)
+       __paramhypfs_end = .;
+#endif
   } :text
 
   DECL_SECTION(.data) {
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index 5ef7ff940d..ece670e484 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -85,8 +85,43 @@ struct grant_table {
     struct grant_table_arch arch;
 };
 
-static int parse_gnttab_limit(const char *param, const char *arg,
-                              unsigned int *valp)
+unsigned int __read_mostly opt_max_grant_frames = 64;
+static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
+
+#ifdef CONFIG_HYPFS
+#define GRANT_CUSTOM_VAL_SZ  12
+static char __read_mostly opt_max_grant_frames_val[GRANT_CUSTOM_VAL_SZ];
+static char __read_mostly opt_max_maptrack_frames_val[GRANT_CUSTOM_VAL_SZ];
+
+static void update_gnttab_par(unsigned int val, struct param_hypfs *par,
+                              char *parval)
+{
+    snprintf(parval, GRANT_CUSTOM_VAL_SZ, "%u", val);
+    custom_runtime_set_var_sz(par, parval, GRANT_CUSTOM_VAL_SZ);
+}
+
+static void __init gnttab_max_frames_init(struct param_hypfs *par)
+{
+    update_gnttab_par(opt_max_grant_frames, par, opt_max_grant_frames_val);
+}
+
+static void __init max_maptrack_frames_init(struct param_hypfs *par)
+{
+    update_gnttab_par(opt_max_maptrack_frames, par,
+                      opt_max_maptrack_frames_val);
+}
+#else
+#define update_gnttab_par(v, unused1, unused2)     update_gnttab_par(v)
+#define parse_gnttab_limit(a, v, unused1, unused2) parse_gnttab_limit(a, v)
+
+static void update_gnttab_par(unsigned int val, struct param_hypfs *par,
+                              char *parval)
+{
+}
+#endif
+
+static int parse_gnttab_limit(const char *arg, unsigned int *valp,
+                              struct param_hypfs *par, char *parval)
 {
     const char *e;
     unsigned long val;
@@ -99,28 +134,33 @@ static int parse_gnttab_limit(const char *param, const char *arg,
         return -ERANGE;
 
     *valp = val;
+    update_gnttab_par(val, par, parval);
 
     return 0;
 }
 
-unsigned int __read_mostly opt_max_grant_frames = 64;
+static int parse_gnttab_max_frames(const char *arg);
+custom_runtime_param("gnttab_max_frames", parse_gnttab_max_frames,
+                     gnttab_max_frames_init);
 
 static int parse_gnttab_max_frames(const char *arg)
 {
-    return parse_gnttab_limit("gnttab_max_frames", arg,
-                              &opt_max_grant_frames);
+    return parse_gnttab_limit(arg, &opt_max_grant_frames,
+                              param_2_parfs(parse_gnttab_max_frames),
+                              opt_max_grant_frames_val);
 }
-custom_runtime_param("gnttab_max_frames", parse_gnttab_max_frames);
 
-static unsigned int __read_mostly opt_max_maptrack_frames = 1024;
+static int parse_gnttab_max_maptrack_frames(const char *arg);
+custom_runtime_param("gnttab_max_maptrack_frames",
+                     parse_gnttab_max_maptrack_frames,
+                     max_maptrack_frames_init);
 
 static int parse_gnttab_max_maptrack_frames(const char *arg)
 {
-    return parse_gnttab_limit("gnttab_max_maptrack_frames", arg,
-                              &opt_max_maptrack_frames);
+    return parse_gnttab_limit(arg, &opt_max_maptrack_frames,
+                              param_2_parfs(parse_gnttab_max_maptrack_frames),
+                              opt_max_maptrack_frames_val);
 }
-custom_runtime_param("gnttab_max_maptrack_frames",
-                     parse_gnttab_max_maptrack_frames);
 
 #ifndef GNTTAB_MAX_VERSION
 #define GNTTAB_MAX_VERSION 2
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 9c2213a068..4aed6ae182 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -10,6 +10,7 @@
 #include <xen/hypercall.h>
 #include <xen/hypfs.h>
 #include <xen/lib.h>
+#include <xen/param.h>
 #include <xen/rwlock.h>
 #include <public/hypfs.h>
 
@@ -41,7 +42,7 @@ static void hypfs_read_lock(void)
     this_cpu(hypfs_locked) = hypfs_read_locked;
 }
 
-static void hypfs_write_lock(void)
+void hypfs_write_lock(void)
 {
     ASSERT(this_cpu(hypfs_locked) == hypfs_unlocked);
 
@@ -49,7 +50,7 @@ static void hypfs_write_lock(void)
     this_cpu(hypfs_locked) = hypfs_write_locked;
 }
 
-static void hypfs_unlock(void)
+void hypfs_unlock(void)
 {
     enum hypfs_lock_state locked = this_cpu(hypfs_locked);
 
@@ -346,6 +347,35 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
     return 0;
 }
 
+int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+{
+    struct param_hypfs *p;
+    char *buf;
+    int ret;
+
+    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+
+    buf = xzalloc_array(char, ulen);
+    if ( !buf )
+        return -ENOMEM;
+
+    ret = -EFAULT;
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        goto out;
+
+    ret = -EDOM;
+    if ( memchr(buf, 0, ulen) != (buf + ulen - 1) )
+        goto out;
+
+    p = container_of(leaf, struct param_hypfs, hypfs);
+    ret = p->param->par.func(buf);
+
+ out:
+    xfree(buf);
+    return ret;
+}
+
 static int hypfs_write(struct hypfs_entry *entry,
                        XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
 {
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index f464fe02ed..d1381d6900 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -198,7 +198,13 @@ static void __init _cmdline_parse(const char *cmdline)
 
 int runtime_parse(const char *line)
 {
-    return parse_params(line, __param_start, __param_end);
+    int ret;
+
+    hypfs_write_lock();
+    ret = parse_params(line, __param_start, __param_end);
+    hypfs_unlock();
+
+    return ret;
 }
 
 /**
@@ -429,6 +435,27 @@ static int __init buildinfo_init(void)
     return 0;
 }
 __initcall(buildinfo_init);
+
+static HYPFS_DIR_INIT(params, "params");
+
+static int __init param_init(void)
+{
+    struct param_hypfs *param;
+
+    hypfs_add_dir(&hypfs_root, &params, true);
+
+    for ( param = __paramhypfs_start; param < __paramhypfs_end; param++ )
+    {
+        if ( param->init_leaf )
+            param->init_leaf(param);
+        else if ( param->hypfs.e.type == XEN_HYPFS_TYPE_STRING )
+            param->hypfs.e.size = strlen(param->hypfs.content) + 1;
+        hypfs_add_leaf(&params, &param->hypfs, true);
+    }
+
+    return 0;
+}
+__initcall(param_init);
 #endif
 
 # define DO(fn) long do_##fn
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 913ae1b66a..56e24821b2 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -79,8 +79,28 @@ enum con_timestamp_mode
 
 static enum con_timestamp_mode __read_mostly opt_con_timestamp_mode = TSM_NONE;
 
+#ifdef CONFIG_HYPFS
+static const char con_timestamp_mode_2_string[][7] = {
+    [TSM_NONE] = "none",
+    [TSM_DATE] = "date",
+    [TSM_DATE_MS] = "datems",
+    [TSM_BOOT] = "boot",
+    [TSM_RAW] = "raw",
+};
+
+static void con_timestamp_mode_upd(struct param_hypfs *par)
+{
+    const char *val = con_timestamp_mode_2_string[opt_con_timestamp_mode];
+
+    custom_runtime_set_var_sz(par, val, 7);
+}
+#else
+#define con_timestamp_mode_upd(par)
+#endif
+
 static int parse_console_timestamps(const char *s);
-custom_runtime_param("console_timestamps", parse_console_timestamps);
+custom_runtime_param("console_timestamps", parse_console_timestamps,
+                     con_timestamp_mode_upd);
 
 /* conring_size: allows a large console ring than default (16kB). */
 static uint32_t __initdata opt_conring_size;
@@ -143,6 +163,39 @@ static int __read_mostly xenlog_guest_lower_thresh =
 static int parse_loglvl(const char *s);
 static int parse_guest_loglvl(const char *s);
 
+#ifdef CONFIG_HYPFS
+#define LOGLVL_VAL_SZ 16
+static char xenlog_val[LOGLVL_VAL_SZ];
+static char xenlog_guest_val[LOGLVL_VAL_SZ];
+
+static char *lvl2opt[] = { "none", "error", "warning", "info", "all" };
+
+static void xenlog_update_val(int lower, int upper, char *val)
+{
+    snprintf(val, LOGLVL_VAL_SZ, "%s/%s", lvl2opt[lower], lvl2opt[upper]);
+}
+
+static void __init xenlog_init(struct param_hypfs *par)
+{
+    xenlog_update_val(xenlog_lower_thresh, xenlog_upper_thresh, xenlog_val);
+    custom_runtime_set_var(par, xenlog_val);
+}
+
+static void __init xenlog_guest_init(struct param_hypfs *par)
+{
+    xenlog_update_val(xenlog_guest_lower_thresh, xenlog_guest_upper_thresh,
+                      xenlog_guest_val);
+    custom_runtime_set_var(par, xenlog_guest_val);
+}
+#else
+#define xenlog_val       NULL
+#define xenlog_guest_val NULL
+
+static void xenlog_update_val(int lower, int upper, char *val)
+{
+}
+#endif
+
 /*
  * <lvl> := none|error|warning|info|debug|all
  * loglvl=<lvl_print_always>[/<lvl_print_ratelimit>]
@@ -151,8 +204,8 @@ static int parse_guest_loglvl(const char *s);
  * Similar definitions for guest_loglvl, but applies to guest tracing.
  * Defaults: loglvl=warning ; guest_loglvl=none/warning
  */
-custom_runtime_param("loglvl", parse_loglvl);
-custom_runtime_param("guest_loglvl", parse_guest_loglvl);
+custom_runtime_param("loglvl", parse_loglvl, xenlog_init);
+custom_runtime_param("guest_loglvl", parse_guest_loglvl, xenlog_guest_init);
 
 static atomic_t print_everything = ATOMIC_INIT(0);
 
@@ -173,7 +226,7 @@ static int __parse_loglvl(const char *s, const char **ps)
     return 2; /* sane fallback */
 }
 
-static int _parse_loglvl(const char *s, int *lower, int *upper)
+static int _parse_loglvl(const char *s, int *lower, int *upper, char *val)
 {
     *lower = *upper = __parse_loglvl(s, &s);
     if ( *s == '/' )
@@ -181,18 +234,21 @@ static int _parse_loglvl(const char *s, int *lower, int *upper)
     if ( *upper < *lower )
         *upper = *lower;
 
+    xenlog_update_val(*lower, *upper, val);
+
     return *s ? -EINVAL : 0;
 }
 
 static int parse_loglvl(const char *s)
 {
-    return _parse_loglvl(s, &xenlog_lower_thresh, &xenlog_upper_thresh);
+    return _parse_loglvl(s, &xenlog_lower_thresh, &xenlog_upper_thresh,
+                         xenlog_val);
 }
 
 static int parse_guest_loglvl(const char *s)
 {
     return _parse_loglvl(s, &xenlog_guest_lower_thresh,
-                         &xenlog_guest_upper_thresh);
+                         &xenlog_guest_upper_thresh, xenlog_guest_val);
 }
 
 static char *loglvl_str(int lvl)
@@ -731,9 +787,11 @@ static int parse_console_timestamps(const char *s)
     {
     case 0:
         opt_con_timestamp_mode = TSM_NONE;
+        con_timestamp_mode_upd(param_2_parfs(parse_console_timestamps));
         return 0;
     case 1:
         opt_con_timestamp_mode = TSM_DATE;
+        con_timestamp_mode_upd(param_2_parfs(parse_console_timestamps));
         return 0;
     }
     if ( *s == '\0' || /* Compat for old booleanparam() */
@@ -750,6 +808,8 @@ static int parse_console_timestamps(const char *s)
     else
         return -EINVAL;
 
+    con_timestamp_mode_upd(param_2_parfs(parse_console_timestamps));
+
     return 0;
 }
 
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 5c6a0ccece..507ed3ae0b 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -116,6 +116,13 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
 int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+int hypfs_write_custom(struct hypfs_entry_leaf *leaf,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+void hypfs_write_lock(void);
+void hypfs_unlock(void);
+#else
+static inline void hypfs_write_lock(void) {}
+static inline void hypfs_unlock(void) {}
 #endif
 
 #endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xen/param.h b/xen/include/xen/param.h
index a1dc3ba8f0..3fe1a06a41 100644
--- a/xen/include/xen/param.h
+++ b/xen/include/xen/param.h
@@ -1,6 +1,7 @@
 #ifndef _XEN_PARAM_H
 #define _XEN_PARAM_H
 
+#include <xen/hypfs.h>
 #include <xen/init.h>
 #include <xen/lib.h>
 #include <xen/stdbool.h>
@@ -80,7 +81,115 @@ extern const struct kernel_param __param_start[], __param_end[];
 
 #define __rtparam         __param(__dataparam)
 
-#define custom_runtime_only_param(_name, _var) \
+#ifdef CONFIG_HYPFS
+
+struct param_hypfs {
+    const struct kernel_param *param;
+    struct hypfs_entry_leaf hypfs;
+    void (*init_leaf)(struct param_hypfs *par);
+};
+
+extern struct param_hypfs __paramhypfs_start[], __paramhypfs_end[];
+
+#define __paramhypfs      __used_section(".data.paramhypfs")
+
+#define __paramfs         static __paramhypfs  \
+    __attribute__((__aligned__(sizeof(void *)))) struct param_hypfs
+
+#define custom_runtime_set_var_sz(parfs, var, sz) \
+    { \
+        (parfs)->hypfs.content = var; \
+        (parfs)->hypfs.e.max_size = sz; \
+        (parfs)->hypfs.e.size = strlen(var) + 1; \
+    }
+#define custom_runtime_set_var(parfs, var) \
+    custom_runtime_set_var_sz(parfs, var, sizeof(var))
+
+#define param_2_parfs(par) &__parfs_##par
+
+/* initfunc needs to set size and content, e.g. via custom_runtime_set_var(). */
+#define custom_runtime_only_param(nam, variable, initfunc) \
+    __rtparam __rtpar_##variable = \
+      { .name = (nam), \
+          .type = OPT_CUSTOM, \
+          .par.func = (variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .init_leaf = (initfunc), \
+          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_custom }
+#define boolean_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_BOOL, \
+          .len = sizeof(variable) + \
+                 BUILD_BUG_ON_ZERO(sizeof(variable) != sizeof(bool)), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_BOOL, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = sizeof(variable), \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_bool, \
+          .hypfs.content = &(variable) }
+#define integer_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_UINT, \
+          .len = sizeof(variable), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = sizeof(variable), \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &(variable) }
+#define size_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_SIZE, \
+          .len = sizeof(variable), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_UINT, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = sizeof(variable), \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &(variable) }
+#define string_runtime_only_param(nam, variable) \
+    __rtparam __rtpar_##variable = \
+        { .name = (nam), \
+          .type = OPT_STR, \
+          .len = sizeof(variable), \
+          .par.var = &(variable) }; \
+    __paramfs __parfs_##variable = \
+        { .param = &__rtpar_##variable, \
+          .hypfs.e.type = XEN_HYPFS_TYPE_STRING, \
+          .hypfs.e.encoding = XEN_HYPFS_ENC_PLAIN, \
+          .hypfs.e.name = (nam), \
+          .hypfs.e.size = 0, \
+          .hypfs.e.max_size = sizeof(variable), \
+          .hypfs.e.read = hypfs_read_leaf, \
+          .hypfs.e.write = hypfs_write_leaf, \
+          .hypfs.content = &(variable) }
+
+#else
+
+#define custom_runtime_only_param(_name, _var, unused) \
     __rtparam __rtpar_##_var = \
       { .name = _name, \
           .type = OPT_CUSTOM, \
@@ -111,9 +220,13 @@ extern const struct kernel_param __param_start[], __param_end[];
           .len = sizeof(_var), \
           .par.var = &_var }
 
-#define custom_runtime_param(_name, _var) \
+#define custom_runtime_set_var(parfs, var)
+
+#endif
+
+#define custom_runtime_param(_name, _var, initfunc) \
     custom_param(_name, _var); \
-    custom_runtime_only_param(_name, _var)
+    custom_runtime_only_param(_name, _var, initfunc)
 #define boolean_runtime_param(_name, _var) \
     boolean_param(_name, _var); \
     boolean_runtime_only_param(_name, _var)
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:21:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:21:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawZL-000521-72; Tue, 19 May 2020 07:21:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawZJ-00050P-Ed
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:21:41 +0000
X-Inumbo-ID: 50b83612-99a1-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50b83612-99a1-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 07:21:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A3B24B215;
 Tue, 19 May 2020 07:21:12 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v10 04/12] xen: add basic hypervisor filesystem support
Date: Tue, 19 May 2020 09:20:58 +0200
Message-Id: <20200519072106.26894-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add the infrastructure for the hypervisor filesystem.

This includes the hypercall interface and the base functions for
entry creation, deletion and modification.

In order not to have to repeat the same pattern multiple times where
failure to add a new node should BUG_ON(), the helpers for adding a
node (hypfs_add_dir() and hypfs_add_leaf()) take a nofault parameter
which triggers BUG() in case of a failure.
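The nofault convention can be sketched in stand-alone C (toy types and
error values only, not the real hypfs structures; assert() stands in for
Xen's BUG_ON()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-ins for the real hypfs types; names are illustrative only. */
struct toy_entry { const char *name; };

/* Pretend insertion that fails when no name is given (-22 == -EINVAL). */
static int toy_add_entry(struct toy_entry *e)
{
    return (e->name && *e->name) ? 0 : -22;
}

/*
 * The nofault pattern: callers that must not fail pass nofault=true and
 * the helper turns any error into a crash (BUG() in Xen, assert() here),
 * so those callers need no error handling of their own.
 */
static int toy_add_dir(struct toy_entry *dir, bool nofault)
{
    int ret = toy_add_entry(dir);

    assert(!(nofault && ret));   /* BUG_ON(nofault && ret) in the patch */

    return ret;
}
```

Callers adding static entries at boot pass nofault=true; callers adding
entries dynamically pass false and check the return value.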

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
V1:
- rename files from filesystem.* to hypfs.*
- add dummy write entry support
- rename hypercall filesystem_op to hypfs_op
- add support for unsigned integer entries

V2:
- test new entry name to be valid

V3:
- major rework, especially by supporting binary contents of entries
- addressed all comments

V4:
- sort #includes alphabetically (Wei Liu)
- add public interface structures to xlat.lst (Jan Beulich)
- let DIRENTRY_SIZE() add 1 for trailing nul byte (Jan Beulich)
- remove hypfs_add_entry() (Jan Beulich)
- len -> ulen (Jan Beulich)
- switch sequence of tests in hypfs_get_entry_rel() (Jan Beulich)
- add const qualifier (Jan Beulich)
- return -ENOBUFS if only direntry but no entry contents are returned
  (Jan Beulich)
- use xmalloc() instead of xzalloc() (Jan Beulich)
- better error handling in hypfs_write_leaf() (Jan Beulich)
- return -EOPNOTSUPP for unknown sub-command (Jan Beulich)
- use plain integers for enum-like constants in public interface
  (Jan Beulich)
- rename XEN_HYPFS_OP_read_contents to XEN_HYPFS_OP_read (Jan Beulich)
- add some comments in include/public/hypfs.h (Jan Beulich)
- use const_char for user parameter path (Jan Beulich)
- add helpers for XEN_HYPFS_TYPE_BOOL and XEN_HYPFS_TYPE_INT entry
  definitions (Jan Beulich)
- make statically defined entries __read_mostly (Jan Beulich)

V5:
- switch to xsm for privilege check

V6:
- use memchr() for testing correct string length (Jan Beulich)
- reject writing to non-string leafs with wrong length (Jan Beulich)
- only support bools of natural size (Julien Grall)
- adjust blank padding in header (Jan Beulich)
- adjust comments in public header (Jan Beulich)
- rename hypfs_string_set() and add comment (Jan Beulich)
- add common HYPFS_INIT helper macro (Jan Beulich)
- really check structures added to xlat.lst (Jan Beulich)
- add missing xsm parts (Jan Beulich)

V7:
- simplify compat check (Jan Beulich)
- add max write size (Jan Beulich)
- better length testing of written string (Jan Beulich)

V8:
- add Kconfig item CONFIG_HYPFS (Jan Beulich)
- init write pointer in HYPFS_*_INIT_WRITABLE() (Jan Beulich)
- expand write ASSERT()s (Jan Beulich)

V9:
- move hypfs to correct position in Makefile (Jan Beulich)
- avoid recursion in hypfs_get_entry_rel() (Jan Beulich)
- make hypfs_get_entry() static (Jan Beulich)
- assert locking in read/write functions (Jan Beulich)
- add ASSERT() in hypfs_write_leaf() (Jan Beulich)
- add encoding test in hypfs_write_leaf() (Jan Beulich)
- test parameters of XEN_HYPFS_OP_get_version to be zero (Jan Beulich)
- add parentheses in macro (Jan Beulich)
- make ulen type unsigned int in functions (Jan Beulich)

V10:
- add locking ASSERT()s (Jan Beulich)
- correct indentation (Jan Beulich)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/flask/policy/modules/dom0.te  |   2 +-
 xen/arch/arm/traps.c                |   3 +
 xen/arch/x86/hvm/hypercall.c        |   3 +
 xen/arch/x86/hypercall.c            |   3 +
 xen/arch/x86/pv/hypercall.c         |   3 +
 xen/common/Kconfig                  |  11 +
 xen/common/Makefile                 |   1 +
 xen/common/hypfs.c                  | 422 ++++++++++++++++++++++++++++
 xen/include/Makefile                |   1 +
 xen/include/public/hypfs.h          | 129 +++++++++
 xen/include/public/xen.h            |   1 +
 xen/include/xen/hypercall.h         |  10 +
 xen/include/xen/hypfs.h             | 121 ++++++++
 xen/include/xlat.lst                |   2 +
 xen/include/xsm/dummy.h             |   6 +
 xen/include/xsm/xsm.h               |   6 +
 xen/xsm/dummy.c                     |   1 +
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   2 +
 19 files changed, 732 insertions(+), 1 deletion(-)
 create mode 100644 xen/common/hypfs.c
 create mode 100644 xen/include/public/hypfs.h
 create mode 100644 xen/include/xen/hypfs.h

diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
index 272f6a4f75..20925e38a2 100644
--- a/tools/flask/policy/modules/dom0.te
+++ b/tools/flask/policy/modules/dom0.te
@@ -11,7 +11,7 @@ allow dom0_t xen_t:xen {
 	mtrr_del mtrr_read microcode physinfo quirk writeconsole readapic
 	writeapic privprofile nonprivprofile kexec firmware sleep frequency
 	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op
-	getscheduler setscheduler
+	getscheduler setscheduler hypfs_op
 };
 allow dom0_t xen_t:xen2 {
 	resource_op psr_cmt_op psr_alloc pmu_ctrl get_symbol
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index 30c4c1830b..8f40d0e0b6 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -1381,6 +1381,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
 #ifdef CONFIG_ARGO
     HYPERCALL(argo_op, 5),
 #endif
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op, 5),
+#endif
 };
 
 #ifndef NDEBUG
diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
index c41c2179c9..b6ccaf4457 100644
--- a/xen/arch/x86/hvm/hypercall.c
+++ b/xen/arch/x86/hvm/hypercall.c
@@ -150,6 +150,9 @@ static const hypercall_table_t hvm_hypercall_table[] = {
 #endif
     HYPERCALL(xenpmu_op),
     COMPAT_CALL(dm_op),
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op),
+#endif
     HYPERCALL(arch_1)
 };
 
diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
index 7f299d45c6..dd00983005 100644
--- a/xen/arch/x86/hypercall.c
+++ b/xen/arch/x86/hypercall.c
@@ -72,6 +72,9 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
 #ifdef CONFIG_HVM
     ARGS(hvm_op, 2),
     ARGS(dm_op, 3),
+#endif
+#ifdef CONFIG_HYPFS
+    ARGS(hypfs_op, 5),
 #endif
     ARGS(mca, 1),
     ARGS(arch_1, 1),
diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
index b0d1d0ed77..53a52360fa 100644
--- a/xen/arch/x86/pv/hypercall.c
+++ b/xen/arch/x86/pv/hypercall.c
@@ -84,6 +84,9 @@ const hypercall_table_t pv_hypercall_table[] = {
 #ifdef CONFIG_HVM
     HYPERCALL(hvm_op),
     COMPAT_CALL(dm_op),
+#endif
+#ifdef CONFIG_HYPFS
+    HYPERCALL(hypfs_op),
 #endif
     HYPERCALL(mca),
     HYPERCALL(arch_1),
diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index fe9b41f721..e768ea36b2 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -116,6 +116,17 @@ config SPECULATIVE_HARDEN_BRANCH
 
 endmenu
 
+config HYPFS
+	bool "Hypervisor file system support"
+	default y
+	---help---
+	  Support Xen hypervisor file system. This file system is used to
+	  present various hypervisor internal data to dom0 and in some
+	  cases to allow modifying settings. Disabling the support might
+	  result in some features not being available.
+
+	  If unsure, say Y.
+
 config KEXEC
 	bool "kexec support"
 	default y
diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8cde65370..bf7d0e25a3 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -14,6 +14,7 @@ obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
 obj-$(CONFIG_GRANT_TABLE) += grant_table.o
 obj-y += guestcopy.o
 obj-bin-y += gunzip.init.o
+obj-$(CONFIG_HYPFS) += hypfs.o
 obj-y += irq.o
 obj-y += kernel.o
 obj-y += keyhandler.o
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
new file mode 100644
index 0000000000..9c2213a068
--- /dev/null
+++ b/xen/common/hypfs.c
@@ -0,0 +1,422 @@
+/******************************************************************************
+ *
+ * hypfs.c
+ *
+ * Simple sysfs-like file system for the hypervisor.
+ */
+
+#include <xen/err.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/hypfs.h>
+#include <xen/lib.h>
+#include <xen/rwlock.h>
+#include <public/hypfs.h>
+
+#ifdef CONFIG_COMPAT
+#include <compat/hypfs.h>
+CHECK_hypfs_dirlistentry;
+#endif
+
+#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
+#define DIRENTRY_SIZE(name_len) \
+    (DIRENTRY_NAME_OFF +        \
+     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
+
+static DEFINE_RWLOCK(hypfs_lock);
+enum hypfs_lock_state {
+    hypfs_unlocked,
+    hypfs_read_locked,
+    hypfs_write_locked
+};
+static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
+
+HYPFS_DIR_INIT(hypfs_root, "");
+
+static void hypfs_read_lock(void)
+{
+    ASSERT(this_cpu(hypfs_locked) != hypfs_write_locked);
+
+    read_lock(&hypfs_lock);
+    this_cpu(hypfs_locked) = hypfs_read_locked;
+}
+
+static void hypfs_write_lock(void)
+{
+    ASSERT(this_cpu(hypfs_locked) == hypfs_unlocked);
+
+    write_lock(&hypfs_lock);
+    this_cpu(hypfs_locked) = hypfs_write_locked;
+}
+
+static void hypfs_unlock(void)
+{
+    enum hypfs_lock_state locked = this_cpu(hypfs_locked);
+
+    this_cpu(hypfs_locked) = hypfs_unlocked;
+
+    switch ( locked )
+    {
+    case hypfs_read_locked:
+        read_unlock(&hypfs_lock);
+        break;
+    case hypfs_write_locked:
+        write_unlock(&hypfs_lock);
+        break;
+    default:
+        BUG();
+    }
+}
+
+static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
+{
+    int ret = -ENOENT;
+    struct hypfs_entry *e;
+
+    hypfs_write_lock();
+
+    list_for_each_entry ( e, &parent->dirlist, list )
+    {
+        int cmp = strcmp(e->name, new->name);
+
+        if ( cmp > 0 )
+        {
+            ret = 0;
+            list_add_tail(&new->list, &e->list);
+            break;
+        }
+        if ( cmp == 0 )
+        {
+            ret = -EEXIST;
+            break;
+        }
+    }
+
+    if ( ret == -ENOENT )
+    {
+        ret = 0;
+        list_add_tail(&new->list, &parent->dirlist);
+    }
+
+    if ( !ret )
+    {
+        unsigned int sz = strlen(new->name);
+
+        parent->e.size += DIRENTRY_SIZE(sz);
+    }
+
+    hypfs_unlock();
+
+    return ret;
+}
+
+int hypfs_add_dir(struct hypfs_entry_dir *parent,
+                  struct hypfs_entry_dir *dir, bool nofault)
+{
+    int ret;
+
+    ret = add_entry(parent, &dir->e);
+    BUG_ON(nofault && ret);
+
+    return ret;
+}
+
+int hypfs_add_leaf(struct hypfs_entry_dir *parent,
+                   struct hypfs_entry_leaf *leaf, bool nofault)
+{
+    int ret;
+
+    if ( !leaf->content )
+        ret = -EINVAL;
+    else
+        ret = add_entry(parent, &leaf->e);
+    BUG_ON(nofault && ret);
+
+    return ret;
+}
+
+static int hypfs_get_path_user(char *buf,
+                               XEN_GUEST_HANDLE_PARAM(const_char) uaddr,
+                               unsigned long ulen)
+{
+    if ( ulen > XEN_HYPFS_MAX_PATHLEN )
+        return -EINVAL;
+
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        return -EFAULT;
+
+    if ( memchr(buf, 0, ulen) != buf + ulen - 1 )
+        return -EINVAL;
+
+    return 0;
+}
+
+static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
+                                               const char *path)
+{
+    const char *end;
+    struct hypfs_entry *entry;
+    unsigned int name_len;
+    bool again = true;
+
+    while ( again )
+    {
+        if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
+            return NULL;
+
+        if ( !*path )
+            return &dir->e;
+
+        end = strchr(path, '/');
+        if ( !end )
+            end = strchr(path, '\0');
+        name_len = end - path;
+
+        again = false;
+
+        list_for_each_entry ( entry, &dir->dirlist, list )
+        {
+            int cmp = strncmp(path, entry->name, name_len);
+            struct hypfs_entry_dir *d = container_of(entry,
+                                                     struct hypfs_entry_dir, e);
+
+            if ( cmp < 0 )
+                return NULL;
+            if ( !cmp && strlen(entry->name) == name_len )
+            {
+                if ( !*end )
+                    return entry;
+
+                again = true;
+                dir = d;
+                path = end + 1;
+
+                break;
+            }
+        }
+    }
+
+    return NULL;
+}
+
+static struct hypfs_entry *hypfs_get_entry(const char *path)
+{
+    if ( path[0] != '/' )
+        return NULL;
+
+    return hypfs_get_entry_rel(&hypfs_root, path + 1);
+}
+
+int hypfs_read_dir(const struct hypfs_entry *entry,
+                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_entry_dir *d;
+    const struct hypfs_entry *e;
+    unsigned int size = entry->size;
+
+    ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
+
+    d = container_of(entry, const struct hypfs_entry_dir, e);
+
+    list_for_each_entry ( e, &d->dirlist, list )
+    {
+        struct xen_hypfs_dirlistentry direntry;
+        unsigned int e_namelen = strlen(e->name);
+        unsigned int e_len = DIRENTRY_SIZE(e_namelen);
+
+        direntry.e.pad = 0;
+        direntry.e.type = e->type;
+        direntry.e.encoding = e->encoding;
+        direntry.e.content_len = e->size;
+        direntry.e.max_write_len = e->max_size;
+        direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
+        if ( copy_to_guest(uaddr, &direntry, 1) )
+            return -EFAULT;
+
+        if ( copy_to_guest_offset(uaddr, DIRENTRY_NAME_OFF,
+                                  e->name, e_namelen + 1) )
+            return -EFAULT;
+
+        guest_handle_add_offset(uaddr, e_len);
+
+        ASSERT(e_len <= size);
+        size -= e_len;
+    }
+
+    return 0;
+}
+
+int hypfs_read_leaf(const struct hypfs_entry *entry,
+                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
+{
+    const struct hypfs_entry_leaf *l;
+
+    ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
+
+    l = container_of(entry, const struct hypfs_entry_leaf, e);
+
+    return copy_to_guest(uaddr, l->content, entry->size) ? -EFAULT : 0;
+}
+
+static int hypfs_read(const struct hypfs_entry *entry,
+                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    struct xen_hypfs_direntry e;
+    long ret = -EINVAL;
+
+    if ( ulen < sizeof(e) )
+        goto out;
+
+    e.pad = 0;
+    e.type = entry->type;
+    e.encoding = entry->encoding;
+    e.content_len = entry->size;
+    e.max_write_len = entry->max_size;
+
+    ret = -EFAULT;
+    if ( copy_to_guest(uaddr, &e, 1) )
+        goto out;
+
+    ret = -ENOBUFS;
+    if ( ulen < entry->size + sizeof(e) )
+        goto out;
+
+    guest_handle_add_offset(uaddr, sizeof(e));
+
+    ret = entry->read(entry, uaddr);
+
+ out:
+    return ret;
+}
+
+int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+{
+    char *buf;
+    int ret;
+
+    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+    ASSERT(ulen <= leaf->e.max_size);
+
+    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
+         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
+        return -EDOM;
+
+    buf = xmalloc_array(char, ulen);
+    if ( !buf )
+        return -ENOMEM;
+
+    ret = -EFAULT;
+    if ( copy_from_guest(buf, uaddr, ulen) )
+        goto out;
+
+    ret = -EINVAL;
+    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
+         leaf->e.encoding == XEN_HYPFS_ENC_PLAIN &&
+         memchr(buf, 0, ulen) != (buf + ulen - 1) )
+        goto out;
+
+    ret = 0;
+    memcpy(leaf->write_ptr, buf, ulen);
+    leaf->e.size = ulen;
+
+ out:
+    xfree(buf);
+    return ret;
+}
+
+int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
+{
+    bool buf;
+
+    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
+    ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL &&
+           leaf->e.size == sizeof(bool) &&
+           leaf->e.max_size == sizeof(bool));
+
+    if ( ulen != leaf->e.max_size )
+        return -EDOM;
+
+    if ( copy_from_guest(&buf, uaddr, ulen) )
+        return -EFAULT;
+
+    *(bool *)leaf->write_ptr = buf;
+
+    return 0;
+}
+
+static int hypfs_write(struct hypfs_entry *entry,
+                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
+{
+    struct hypfs_entry_leaf *l;
+
+    if ( !entry->write )
+        return -EACCES;
+
+    ASSERT(entry->max_size);
+
+    if ( ulen > entry->max_size )
+        return -ENOSPC;
+
+    l = container_of(entry, struct hypfs_entry_leaf, e);
+
+    return entry->write(l, uaddr, ulen);
+}
+
+long do_hypfs_op(unsigned int cmd,
+                 XEN_GUEST_HANDLE_PARAM(const_char) arg1, unsigned long arg2,
+                 XEN_GUEST_HANDLE_PARAM(void) arg3, unsigned long arg4)
+{
+    int ret;
+    struct hypfs_entry *entry;
+    static char path[XEN_HYPFS_MAX_PATHLEN];
+
+    if ( xsm_hypfs_op(XSM_PRIV) )
+        return -EPERM;
+
+    if ( cmd == XEN_HYPFS_OP_get_version )
+    {
+        if ( !guest_handle_is_null(arg1) || arg2 ||
+             !guest_handle_is_null(arg3) || arg4 )
+            return -EINVAL;
+
+        return XEN_HYPFS_VERSION;
+    }
+
+    if ( cmd == XEN_HYPFS_OP_write_contents )
+        hypfs_write_lock();
+    else
+        hypfs_read_lock();
+
+    ret = hypfs_get_path_user(path, arg1, arg2);
+    if ( ret )
+        goto out;
+
+    entry = hypfs_get_entry(path);
+    if ( !entry )
+    {
+        ret = -ENOENT;
+        goto out;
+    }
+
+    switch ( cmd )
+    {
+    case XEN_HYPFS_OP_read:
+        ret = hypfs_read(entry, arg3, arg4);
+        break;
+
+    case XEN_HYPFS_OP_write_contents:
+        ret = hypfs_write(entry, arg3, arg4);
+        break;
+
+    default:
+        ret = -EOPNOTSUPP;
+        break;
+    }
+
+ out:
+    hypfs_unlock();
+
+    return ret;
+}
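The recursion-free, component-wise lookup done by hypfs_get_entry_rel()
above can be modelled in user-space C (toy node type and tree with
illustrative names; the real code walks struct hypfs_entry lists under
the hypfs lock):

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

struct toy_node {
    const char *name;
    const struct toy_node *children; /* array terminated by NULL name */
};

static const struct toy_node toy_leaf[] = { { "weight", NULL }, { NULL, NULL } };
static const struct toy_node toy_top[]  = { { "params", toy_leaf }, { NULL, NULL } };
static const struct toy_node toy_root   = { "", toy_top };

/* Walk path one '/'-separated component at a time, no recursion. */
static const struct toy_node *toy_lookup(const struct toy_node *dir,
                                         const char *path)
{
    while ( *path )
    {
        const char *end = strchr(path, '/');
        size_t name_len = end ? (size_t)(end - path) : strlen(path);
        const struct toy_node *child;
        const struct toy_node *found = NULL;

        if ( !dir->children )       /* component left, but not a directory */
            return NULL;

        for ( child = dir->children; child->name; child++ )
            if ( strlen(child->name) == name_len &&
                 !strncmp(path, child->name, name_len) )
            {
                found = child;
                break;
            }

        if ( !found )
            return NULL;

        dir = found;
        path = end ? end + 1 : path + name_len;
    }

    return dir;                     /* empty remainder: dir itself */
}
```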
diff --git a/xen/include/Makefile b/xen/include/Makefile
index 2a10725d68..089314dc72 100644
--- a/xen/include/Makefile
+++ b/xen/include/Makefile
@@ -9,6 +9,7 @@ headers-y := \
     compat/event_channel.h \
     compat/features.h \
     compat/grant_table.h \
+    compat/hypfs.h \
     compat/kexec.h \
     compat/memory.h \
     compat/nmi.h \
diff --git a/xen/include/public/hypfs.h b/xen/include/public/hypfs.h
new file mode 100644
index 0000000000..63a5df1629
--- /dev/null
+++ b/xen/include/public/hypfs.h
@@ -0,0 +1,129 @@
+/******************************************************************************
+ * Xen Hypervisor Filesystem
+ *
+ * Copyright (c) 2019, SUSE Software Solutions Germany GmbH
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __XEN_PUBLIC_HYPFS_H__
+#define __XEN_PUBLIC_HYPFS_H__
+
+#include "xen.h"
+
+/*
+ * Definitions for the __HYPERVISOR_hypfs_op hypercall.
+ */
+
+/* Highest version number of the hypfs interface currently defined. */
+#define XEN_HYPFS_VERSION      1
+
+/* Maximum length of a path in the filesystem. */
+#define XEN_HYPFS_MAX_PATHLEN  1024
+
+struct xen_hypfs_direntry {
+    uint8_t type;
+#define XEN_HYPFS_TYPE_DIR     0
+#define XEN_HYPFS_TYPE_BLOB    1
+#define XEN_HYPFS_TYPE_STRING  2
+#define XEN_HYPFS_TYPE_UINT    3
+#define XEN_HYPFS_TYPE_INT     4
+#define XEN_HYPFS_TYPE_BOOL    5
+    uint8_t encoding;
+#define XEN_HYPFS_ENC_PLAIN    0
+#define XEN_HYPFS_ENC_GZIP     1
+    uint16_t pad;              /* Returned as 0. */
+    uint32_t content_len;      /* Current length of data. */
+    uint32_t max_write_len;    /* Max. length for writes (0 if read-only). */
+};
+
+struct xen_hypfs_dirlistentry {
+    struct xen_hypfs_direntry e;
+    /* Offset in bytes to next entry (0 == this is the last entry). */
+    uint16_t off_next;
+    /* Zero terminated entry name, possibly with some padding for alignment. */
+    char name[XEN_FLEX_ARRAY_DIM];
+};
+
+/*
+ * Hypercall operations.
+ */
+
+/*
+ * XEN_HYPFS_OP_get_version
+ *
+ * Read highest interface version supported by the hypervisor.
+ *
+ * arg1 - arg4: all 0/NULL
+ *
+ * Possible return values:
+ * >0: highest supported interface version
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_get_version     0
+
+/*
+ * XEN_HYPFS_OP_read
+ *
+ * Read a filesystem entry.
+ *
+ * Returns the direntry and contents of an entry in the buffer supplied by the
+ * caller (struct xen_hypfs_direntry with the contents following directly
+ * after it).
+ * The data buffer must be at least the size of the returned direntry. If
+ * the data buffer is not large enough for all the data, -ENOBUFS is
+ * returned and no entry data is copied, but the direntry will contain the
+ * size needed for the returned data.
+ * The format of the contents depends on the entry type and encoding.
+ * The contents of a directory are multiple struct xen_hypfs_dirlistentry
+ * items.
+ *
+ * arg1: XEN_GUEST_HANDLE(path name)
+ * arg2: length of path name (including trailing zero byte)
+ * arg3: XEN_GUEST_HANDLE(data buffer written by hypervisor)
+ * arg4: data buffer size
+ *
+ * Possible return values:
+ * 0: success
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_read              1
+
+/*
+ * XEN_HYPFS_OP_write_contents
+ *
+ * Write contents of a filesystem entry.
+ *
+ * Writes an entry with the contents of a buffer supplied by the caller.
+ * The data type and encoding can't be changed. The size can be changed only
+ * for blobs and strings.
+ *
+ * arg1: XEN_GUEST_HANDLE(path name)
+ * arg2: length of path name (including trailing zero byte)
+ * arg3: XEN_GUEST_HANDLE(content buffer read by hypervisor)
+ * arg4: content buffer size
+ *
+ * Possible return values:
+ * 0: success
+ * <0: negative Xen errno value
+ */
+#define XEN_HYPFS_OP_write_contents    2
+
+#endif /* __XEN_PUBLIC_HYPFS_H__ */
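The padding rule the header describes for directory listings (each
dirlistentry padded so the next one is aligned for struct
xen_hypfs_direntry) can be checked with mirrored toy structures —
ROUNDUP mimics Xen's macro, and the expected offsets assume a typical
ABI where uint32_t is 4-byte aligned:

```c
#include <assert.h>
#include <stdalign.h>
#include <stddef.h>
#include <stdint.h>

#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Mirrors struct xen_hypfs_direntry from the public header. */
struct toy_direntry {
    uint8_t type;
    uint8_t encoding;
    uint16_t pad;
    uint32_t content_len;
    uint32_t max_write_len;
};

/* Mirrors struct xen_hypfs_dirlistentry. */
struct toy_dirlistentry {
    struct toy_direntry e;
    uint16_t off_next;
    char name[];   /* zero-terminated, padded for alignment */
};

#define DIRENTRY_NAME_OFF offsetof(struct toy_dirlistentry, name)
/* Name length + 1 for the trailing zero byte, rounded to alignment. */
#define DIRENTRY_SIZE(name_len) \
    (DIRENTRY_NAME_OFF + \
     ROUNDUP((name_len) + 1, alignof(struct toy_direntry)))
```

DIRENTRY_SIZE() then gives the stride a consumer uses (or reads back
via off_next) to step from one listing entry to the next.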
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index 75b1619d0d..945ef30273 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -130,6 +130,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
 #define __HYPERVISOR_argo_op              39
 #define __HYPERVISOR_xenpmu_op            40
 #define __HYPERVISOR_dm_op                41
+#define __HYPERVISOR_hypfs_op             42
 
 /* Architecture-specific hypercall definitions. */
 #define __HYPERVISOR_arch_0               48
diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
index d82a293377..655acc7f47 100644
--- a/xen/include/xen/hypercall.h
+++ b/xen/include/xen/hypercall.h
@@ -150,6 +150,16 @@ do_dm_op(
     unsigned int nr_bufs,
     XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
 
+#ifdef CONFIG_HYPFS
+extern long
+do_hypfs_op(
+    unsigned int cmd,
+    XEN_GUEST_HANDLE_PARAM(const_char) arg1,
+    unsigned long arg2,
+    XEN_GUEST_HANDLE_PARAM(void) arg3,
+    unsigned long arg4);
+#endif
+
 #ifdef CONFIG_COMPAT
 
 extern int
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
new file mode 100644
index 0000000000..5c6a0ccece
--- /dev/null
+++ b/xen/include/xen/hypfs.h
@@ -0,0 +1,121 @@
+#ifndef __XEN_HYPFS_H__
+#define __XEN_HYPFS_H__
+
+#ifdef CONFIG_HYPFS
+#include <xen/list.h>
+#include <xen/string.h>
+#include <public/hypfs.h>
+
+struct hypfs_entry_leaf;
+
+struct hypfs_entry {
+    unsigned short type;
+    unsigned short encoding;
+    unsigned int size;
+    unsigned int max_size;
+    const char *name;
+    struct list_head list;
+    int (*read)(const struct hypfs_entry *entry,
+                XEN_GUEST_HANDLE_PARAM(void) uaddr);
+    int (*write)(struct hypfs_entry_leaf *leaf,
+                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+};
+
+struct hypfs_entry_leaf {
+    struct hypfs_entry e;
+    union {
+        const void *content;
+        void *write_ptr;
+    };
+};
+
+struct hypfs_entry_dir {
+    struct hypfs_entry e;
+    struct list_head dirlist;
+};
+
+#define HYPFS_DIR_INIT(var, nam)                  \
+    struct hypfs_entry_dir __read_mostly var = {  \
+        .e.type = XEN_HYPFS_TYPE_DIR,             \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
+        .e.name = (nam),                          \
+        .e.size = 0,                              \
+        .e.max_size = 0,                          \
+        .e.list = LIST_HEAD_INIT(var.e.list),     \
+        .e.read = hypfs_read_dir,                 \
+        .dirlist = LIST_HEAD_INIT(var.dirlist),   \
+    }
+
+#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
+    struct hypfs_entry_leaf __read_mostly var = { \
+        .e.type = (typ),                          \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
+        .e.name = (nam),                          \
+        .e.max_size = (msz),                      \
+        .e.read = hypfs_read_leaf,                \
+    }
+
+/* Content and size need to be set via hypfs_string_set_reference(). */
+#define HYPFS_STRING_INIT(var, nam)               \
+    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
+
+/*
+ * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
+ * to str, so any later modification of *str should be followed by a call
+ * to hypfs_string_set_reference() in order to update the size of the node
+ * data.
+ */
+static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
+                                              const char *str)
+{
+    leaf->content = str;
+    leaf->e.size = strlen(str) + 1;
+}
+
+#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
+    struct hypfs_entry_leaf __read_mostly var = {        \
+        .e.type = (typ),                                 \
+        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
+        .e.name = (nam),                                 \
+        .e.size = sizeof(contvar),                       \
+        .e.max_size = (wr) ? sizeof(contvar) : 0,        \
+        .e.read = hypfs_read_leaf,                       \
+        .e.write = (wr),                                 \
+        .content = &(contvar),                           \
+    }
+
+#define HYPFS_UINT_INIT(var, nam, contvar)                       \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, NULL)
+#define HYPFS_UINT_INIT_WRITABLE(var, nam, contvar)              \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
+                         hypfs_write_leaf)
+
+#define HYPFS_INT_INIT(var, nam, contvar)                        \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, NULL)
+#define HYPFS_INT_INIT_WRITABLE(var, nam, contvar)               \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \
+                         hypfs_write_leaf)
+
+#define HYPFS_BOOL_INIT(var, nam, contvar)                       \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, NULL)
+#define HYPFS_BOOL_INIT_WRITABLE(var, nam, contvar)              \
+    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
+                         hypfs_write_bool)
+
+extern struct hypfs_entry_dir hypfs_root;
+
+int hypfs_add_dir(struct hypfs_entry_dir *parent,
+                  struct hypfs_entry_dir *dir, bool nofault);
+int hypfs_add_leaf(struct hypfs_entry_dir *parent,
+                   struct hypfs_entry_leaf *leaf, bool nofault);
+int hypfs_read_dir(const struct hypfs_entry *entry,
+                   XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_read_leaf(const struct hypfs_entry *entry,
+                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
+int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
+                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
+#endif
+
+#endif /* __XEN_HYPFS_H__ */
diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
index 95f5e5592b..0921d4a8d0 100644
--- a/xen/include/xlat.lst
+++ b/xen/include/xlat.lst
@@ -86,6 +86,8 @@
 ?	vcpu_hvm_context		hvm/hvm_vcpu.h
 ?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
 ?	vcpu_hvm_x86_64			hvm/hvm_vcpu.h
+?	hypfs_direntry			hypfs.h
+?	hypfs_dirlistentry		hypfs.h
 ?	kexec_exec			kexec.h
 !	kexec_image			kexec.h
 !	kexec_range			kexec.h
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index 295dd67c48..2368acebed 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -434,6 +434,12 @@ static XSM_INLINE int xsm_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
     return xsm_default_action(action, current->domain, NULL);
 }
 
+static XSM_INLINE int xsm_hypfs_op(XSM_DEFAULT_VOID)
+{
+    XSM_ASSERT_ACTION(XSM_PRIV);
+    return xsm_default_action(action, current->domain, NULL);
+}
+
 static XSM_INLINE long xsm_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return -ENOSYS;
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index e22d6160b5..a80bcf3e42 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -127,6 +127,7 @@ struct xsm_operations {
     int (*resource_setup_misc) (void);
 
     int (*page_offline)(uint32_t cmd);
+    int (*hypfs_op)(void);
 
     long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
 #ifdef CONFIG_COMPAT
@@ -536,6 +537,11 @@ static inline int xsm_page_offline(xsm_default_t def, uint32_t cmd)
     return xsm_ops->page_offline(cmd);
 }
 
+static inline int xsm_hypfs_op(xsm_default_t def)
+{
+    return xsm_ops->hypfs_op();
+}
+
 static inline long xsm_do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return xsm_ops->do_xsm_op(op);
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
index 5705e52791..d4cce68089 100644
--- a/xen/xsm/dummy.c
+++ b/xen/xsm/dummy.c
@@ -103,6 +103,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
     set_to_dummy_if_null(ops, resource_setup_misc);
 
     set_to_dummy_if_null(ops, page_offline);
+    set_to_dummy_if_null(ops, hypfs_op);
     set_to_dummy_if_null(ops, hvm_param);
     set_to_dummy_if_null(ops, hvm_control);
     set_to_dummy_if_null(ops, hvm_param_nested);
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4649e6fd95..a2c78e445c 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -1173,6 +1173,11 @@ static inline int flask_page_offline(uint32_t cmd)
     }
 }
 
+static inline int flask_hypfs_op(void)
+{
+    return domain_has_xen(current->domain, XEN__HYPFS_OP);
+}
+
 static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
 {
     return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PHYSMAP);
@@ -1812,6 +1817,7 @@ static struct xsm_operations flask_ops = {
     .resource_setup_misc = flask_resource_setup_misc,
 
     .page_offline = flask_page_offline,
+    .hypfs_op = flask_hypfs_op,
     .hvm_param = flask_hvm_param,
     .hvm_control = flask_hvm_param,
     .hvm_param_nested = flask_hvm_param_nested,
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c055c14c26..c9e385fb9b 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -67,6 +67,8 @@ class xen
     lockprof
 # XEN_SYSCTL_cpupool_op
     cpupool_op
+# hypfs hypercall
+    hypfs_op
 # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_getinfo, XEN_SYSCTL_sched_id, XEN_DOMCTL_SCHEDOP_getvcpuinfo
     getscheduler
 # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_putinfo, XEN_DOMCTL_SCHEDOP_putvcpuinfo
-- 
2.26.1
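
The hunks above wire a new hypfs_op hook through struct xsm_operations, give it a dummy default, and register a FLASK implementation. A toy model of that ops-table fallback pattern (names and return values are illustrative stand-ins, not the real Xen code):

```c
#include <assert.h>
#include <stddef.h>

/* Simplified stand-in for Xen's struct xsm_operations: only the new hook. */
struct xsm_operations {
    int (*hypfs_op)(void);
};

/* Default (dummy) implementation; stands in for the XSM_PRIV check. */
static int dummy_hypfs_op(void)
{
    return 0;   /* allow */
}

/* Stand-in for flask_hypfs_op() denying via domain_has_xen(). */
static int flask_hypfs_op(void)
{
    return -1;  /* deny */
}

/* Mirrors set_to_dummy_if_null() in xen/xsm/dummy.c: any hook a policy
 * module leaves NULL falls back to the dummy implementation. */
static void xsm_fixup_ops(struct xsm_operations *ops)
{
    if ( !ops->hypfs_op )
        ops->hypfs_op = dummy_hypfs_op;
}
```

The point of the pattern is that callers such as xsm_hypfs_op() can invoke ops->hypfs_op() unconditionally; no NULL check is needed at the call site.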



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:31:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:31:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawiH-0006s1-Bt; Tue, 19 May 2020 07:30:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jawiF-0006ru-VJ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:30:55 +0000
X-Inumbo-ID: ac7641dc-99a2-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac7641dc-99a2-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 07:30:55 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E9971ACB8;
 Tue, 19 May 2020 07:30:55 +0000 (UTC)
Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
To: xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>,
 Julien Grall <julien@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20200519072106.26894-1-jgross@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
Date: Tue, 19 May 2020 09:30:52 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519072106.26894-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.20 09:20, Juergen Gross wrote:
> On the 2019 Xen developer summit there was agreement that the Xen
> hypervisor should gain support for a hierarchical name-value store
> similar to the Linux kernel's sysfs.
> 
> This is a first implementation of that idea adding the basic
> functionality to hypervisor and tools side. The interface to any
> user program making use of that "xen-hypfs" is a new library
> "libxenhypfs" with a stable interface.
> 
> The series adds read-only nodes with buildinfo data and writable
> nodes with runtime parameters. xl is switched to use the new file
> system for modifying the runtime parameters and the old sysctl
> interface for that purpose is dropped.
> 
> Changes in V10:
> - addressed review comments
> 
> Changes in V9:
> - addressed review comments
> 
> Changes in V8:
> - addressed review comments
> - added CONFIG_HYPFS config option
> 
> Changes in V7:
> - old patch 1 already applied
> - add new patch 1 (carved out and modified from patch 9)
> - addressed review comments
> - modified public interface to have a max write size instead of a
>    writable flag only
> 
> Changes in V6:
> - added new patches 1, 10, 11, 12
> - addressed review comments
> - modified interface for creating nodes for runtime parameters
> 
> Changes in V5:
> - switched to xsm for privilege check
> 
> Changes in V4:
> - former patch 2 removed as already committed
> - addressed review comments
> 
> Changes in V3:
> - major rework, especially by supporting binary contents of entries
> - added several new patches (1, 2, 7)
> - full support of all runtime parameters
> - support of writing entries (especially runtime parameters)
> 
> Changes in V2:
> - all comments to V1 addressed
> - added man-page for xenhypfs tool
> - added runtime parameter read access for string parameters
> 
> Changes in V1:
> - renamed xenfs -> xenhypfs
> - added writable entries support at the interface level and in the
>    xenhypfs tool
> - added runtime parameter read access (integer type only for now)
> - added docs/misc/hypfs-paths.pandoc for path descriptions
> 
> Juergen Gross (12):
>    xen/vmx: let opt_ept_ad always reflect the current setting
>    xen: add a generic way to include binary files as variables
>    docs: add feature document for Xen hypervisor sysfs-like support
>    xen: add basic hypervisor filesystem support
>    libs: add libxenhypfs
>    tools: add xenfs tool
>    xen: provide version information in hypfs
>    xen: add /buildinfo/config entry to hypervisor filesystem
>    xen: add runtime parameter access support to hypfs
>    tools/libxl: use libxenhypfs for setting xen runtime parameters
>    tools/libxc: remove xc_set_parameters()
>    xen: remove XEN_SYSCTL_set_parameter support
> 
>   .gitignore                          |   6 +
>   docs/features/hypervisorfs.pandoc   |  92 +++++
>   docs/man/xenhypfs.1.pod             |  61 ++++
>   docs/misc/hypfs-paths.pandoc        | 165 +++++++++
>   tools/Rules.mk                      |   8 +-
>   tools/flask/policy/modules/dom0.te  |   4 +-
>   tools/libs/Makefile                 |   1 +
>   tools/libs/hypfs/Makefile           |  16 +
>   tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
>   tools/libs/hypfs/include/xenhypfs.h |  90 +++++
>   tools/libs/hypfs/libxenhypfs.map    |  10 +
>   tools/libs/hypfs/xenhypfs.pc.in     |  10 +
>   tools/libxc/include/xenctrl.h       |   1 -
>   tools/libxc/xc_misc.c               |  21 --
>   tools/libxl/Makefile                |   3 +-
>   tools/libxl/libxl.c                 |  53 ++-
>   tools/libxl/libxl_internal.h        |   1 +
>   tools/libxl/xenlight.pc.in          |   2 +-
>   tools/misc/Makefile                 |   6 +
>   tools/misc/xenhypfs.c               | 192 ++++++++++
>   tools/xl/xl_misc.c                  |   1 -
>   xen/arch/arm/traps.c                |   3 +
>   xen/arch/arm/xen.lds.S              |  13 +-
>   xen/arch/x86/hvm/hypercall.c        |   3 +
>   xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
>   xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
>   xen/arch/x86/hypercall.c            |   3 +
>   xen/arch/x86/pv/domain.c            |  21 +-
>   xen/arch/x86/pv/hypercall.c         |   3 +
>   xen/arch/x86/xen.lds.S              |  12 +-
>   xen/common/Kconfig                  |  23 ++
>   xen/common/Makefile                 |  13 +
>   xen/common/grant_table.c            |  62 +++-
>   xen/common/hypfs.c                  | 452 +++++++++++++++++++++++
>   xen/common/kernel.c                 |  84 ++++-
>   xen/common/sysctl.c                 |  36 --
>   xen/drivers/char/console.c          |  72 +++-
>   xen/include/Makefile                |   1 +
>   xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
>   xen/include/public/hypfs.h          | 129 +++++++
>   xen/include/public/sysctl.h         |  19 +-
>   xen/include/public/xen.h            |   1 +
>   xen/include/xen/hypercall.h         |  10 +
>   xen/include/xen/hypfs.h             | 123 +++++++
>   xen/include/xen/kernel.h            |   3 +
>   xen/include/xen/lib.h               |   1 -
>   xen/include/xen/param.h             | 126 +++++--
>   xen/include/xlat.lst                |   2 +
>   xen/include/xsm/dummy.h             |   6 +
>   xen/include/xsm/xsm.h               |   6 +
>   xen/tools/binfile                   |  43 +++
>   xen/xsm/dummy.c                     |   1 +
>   xen/xsm/flask/Makefile              |   5 +-
>   xen/xsm/flask/flask-policy.S        |  16 -
>   xen/xsm/flask/hooks.c               |   9 +-
>   xen/xsm/flask/policy/access_vectors |   4 +-
>   56 files changed, 2445 insertions(+), 193 deletions(-)
>   create mode 100644 docs/features/hypervisorfs.pandoc
>   create mode 100644 docs/man/xenhypfs.1.pod
>   create mode 100644 docs/misc/hypfs-paths.pandoc
>   create mode 100644 tools/libs/hypfs/Makefile
>   create mode 100644 tools/libs/hypfs/core.c
>   create mode 100644 tools/libs/hypfs/include/xenhypfs.h
>   create mode 100644 tools/libs/hypfs/libxenhypfs.map
>   create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
>   create mode 100644 tools/misc/xenhypfs.c
>   create mode 100644 xen/common/hypfs.c
>   create mode 100644 xen/include/public/hypfs.h
>   create mode 100644 xen/include/xen/hypfs.h
>   create mode 100755 xen/tools/binfile
>   delete mode 100644 xen/xsm/flask/flask-policy.S
> 

There are some Acks missing on this series, so please have a look at the
patches!

In particular, still missing are:

- Patch 1: VMX maintainers
- Patch 2 + 4: XSM maintainer
- Patch 4 + 9: Arm maintainer
- Patch 10 + 11: tools maintainers

I'd really like the series to go into 4.14 (deadline this Friday).


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 19 07:34:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawll-000729-Rn; Tue, 19 May 2020 07:34:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jawlk-000724-Ju
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:34:32 +0000
X-Inumbo-ID: 2d8aa628-99a3-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2d8aa628-99a3-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:34:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 68BF7AB64;
 Tue, 19 May 2020 07:34:33 +0000 (UTC)
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
 <20200517175607.GA8793@antioche.eu.org>
 <000a01d62ce7$093b7f50$1bb27df0$@xen.org>
 <20200518173111.GA13512@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cd60e91a-a4a4-d14e-eead-c93f7682413d@suse.com>
Date: Tue, 19 May 2020 09:34:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200518173111.GA13512@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org,
 paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.2020 19:31, Manuel Bouyer wrote:
> From what I found, it seems that all unallocated memory is tagged
> p2m_mmio_dm. Is that right?

Yes. For many years there has been a plan to better separate this from
p2m_invalid ...

Jan
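
A minimal sketch of the behaviour confirmed above, using a toy lookup table rather than Xen's real p2m code: any gfn without an entry reads back as p2m_mmio_dm, which is what makes such holes indistinguishable from genuine emulated-MMIO ranges (hence the long-standing wish to separate them from p2m_invalid). Type names mirror p2m_type_t; everything else is hypothetical.

```c
#include <assert.h>

typedef enum { p2m_ram_rw, p2m_invalid, p2m_mmio_dm } p2m_type_t;

#define MAX_GFN 8
static int populated[MAX_GFN];  /* 1 = RAM page present at this gfn */

/* Toy p2m type lookup: populated gfns are RAM; everything else,
 * including gfns that were never allocated at all, reports
 * p2m_mmio_dm rather than a distinct "invalid" type. */
static p2m_type_t p2m_lookup_type(unsigned long gfn)
{
    if ( gfn < MAX_GFN && populated[gfn] )
        return p2m_ram_rw;
    return p2m_mmio_dm;
}
```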


From xen-devel-bounces@lists.xenproject.org Tue May 19 07:45:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:45:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaww6-0007wN-Td; Tue, 19 May 2020 07:45:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaww5-0007wI-3H
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:45:13 +0000
X-Inumbo-ID: ab596232-99a4-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ab596232-99a4-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 07:45:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 10119ADC1;
 Tue, 19 May 2020 07:45:12 +0000 (UTC)
Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Kevin Tian <kevin.tian@intel.com>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Paul Durrant <paul@xen.org>
References: <20200519072106.26894-1-jgross@suse.com>
 <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <305d829f-24a9-1a6d-2131-fed92c22c305@suse.com>
Date: Tue, 19 May 2020 09:45:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 09:30, Jürgen Groß wrote:
> On 19.05.20 09:20, Juergen Gross wrote:
>>
>> Juergen Gross (12):
>>    xen/vmx: let opt_ept_ad always reflect the current setting
>>    xen: add a generic way to include binary files as variables
>>    docs: add feature document for Xen hypervisor sysfs-like support
>>    xen: add basic hypervisor filesystem support
>>    libs: add libxenhypfs
>>    tools: add xenfs tool
>>    xen: provide version information in hypfs
>>    xen: add /buildinfo/config entry to hypervisor filesystem
>>    xen: add runtime parameter access support to hypfs
>>    tools/libxl: use libxenhypfs for setting xen runtime parameters
>>    tools/libxc: remove xc_set_parameters()
>>    xen: remove XEN_SYSCTL_set_parameter support
>>
>>   .gitignore                          |   6 +
>>   docs/features/hypervisorfs.pandoc   |  92 +++++
>>   docs/man/xenhypfs.1.pod             |  61 ++++
>>   docs/misc/hypfs-paths.pandoc        | 165 +++++++++
>>   tools/Rules.mk                      |   8 +-
>>   tools/flask/policy/modules/dom0.te  |   4 +-
>>   tools/libs/Makefile                 |   1 +
>>   tools/libs/hypfs/Makefile           |  16 +
>>   tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
>>   tools/libs/hypfs/include/xenhypfs.h |  90 +++++
>>   tools/libs/hypfs/libxenhypfs.map    |  10 +
>>   tools/libs/hypfs/xenhypfs.pc.in     |  10 +
>>   tools/libxc/include/xenctrl.h       |   1 -
>>   tools/libxc/xc_misc.c               |  21 --
>>   tools/libxl/Makefile                |   3 +-
>>   tools/libxl/libxl.c                 |  53 ++-
>>   tools/libxl/libxl_internal.h        |   1 +
>>   tools/libxl/xenlight.pc.in          |   2 +-
>>   tools/misc/Makefile                 |   6 +
>>   tools/misc/xenhypfs.c               | 192 ++++++++++
>>   tools/xl/xl_misc.c                  |   1 -
>>   xen/arch/arm/traps.c                |   3 +
>>   xen/arch/arm/xen.lds.S              |  13 +-
>>   xen/arch/x86/hvm/hypercall.c        |   3 +
>>   xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
>>   xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
>>   xen/arch/x86/hypercall.c            |   3 +
>>   xen/arch/x86/pv/domain.c            |  21 +-
>>   xen/arch/x86/pv/hypercall.c         |   3 +
>>   xen/arch/x86/xen.lds.S              |  12 +-
>>   xen/common/Kconfig                  |  23 ++
>>   xen/common/Makefile                 |  13 +
>>   xen/common/grant_table.c            |  62 +++-
>>   xen/common/hypfs.c                  | 452 +++++++++++++++++++++++
>>   xen/common/kernel.c                 |  84 ++++-
>>   xen/common/sysctl.c                 |  36 --
>>   xen/drivers/char/console.c          |  72 +++-
>>   xen/include/Makefile                |   1 +
>>   xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
>>   xen/include/public/hypfs.h          | 129 +++++++
>>   xen/include/public/sysctl.h         |  19 +-
>>   xen/include/public/xen.h            |   1 +
>>   xen/include/xen/hypercall.h         |  10 +
>>   xen/include/xen/hypfs.h             | 123 +++++++
>>   xen/include/xen/kernel.h            |   3 +
>>   xen/include/xen/lib.h               |   1 -
>>   xen/include/xen/param.h             | 126 +++++--
>>   xen/include/xlat.lst                |   2 +
>>   xen/include/xsm/dummy.h             |   6 +
>>   xen/include/xsm/xsm.h               |   6 +
>>   xen/tools/binfile                   |  43 +++
>>   xen/xsm/dummy.c                     |   1 +
>>   xen/xsm/flask/Makefile              |   5 +-
>>   xen/xsm/flask/flask-policy.S        |  16 -
>>   xen/xsm/flask/hooks.c               |   9 +-
>>   xen/xsm/flask/policy/access_vectors |   4 +-
>>   56 files changed, 2445 insertions(+), 193 deletions(-)
>>   create mode 100644 docs/features/hypervisorfs.pandoc
>>   create mode 100644 docs/man/xenhypfs.1.pod
>>   create mode 100644 docs/misc/hypfs-paths.pandoc
>>   create mode 100644 tools/libs/hypfs/Makefile
>>   create mode 100644 tools/libs/hypfs/core.c
>>   create mode 100644 tools/libs/hypfs/include/xenhypfs.h
>>   create mode 100644 tools/libs/hypfs/libxenhypfs.map
>>   create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
>>   create mode 100644 tools/misc/xenhypfs.c
>>   create mode 100644 xen/common/hypfs.c
>>   create mode 100644 xen/include/public/hypfs.h
>>   create mode 100644 xen/include/xen/hypfs.h
>>   create mode 100755 xen/tools/binfile
>>   delete mode 100644 xen/xsm/flask/flask-policy.S
>>
> 
> There are some Acks missing on this series, so please have a look at the
> patches!
> 
> In particular, still missing are:
> 
> - Patch 1: VMX maintainers
> - Patch 2 + 4: XSM maintainer
> - Patch 4 + 9: Arm maintainer
> - Patch 10 + 11: tools maintainers
> 
> I'd really like the series to go into 4.14 (deadline this Friday).

FTR I'm intending to waive the need for the first three of the named
sets if they don't arrive by Friday (and by that I don't mean last
minute on Friday) - they're not overly intrusive (maybe with the
exception of the XSM parts in #4) and the series has been pending
for long enough. I don't feel comfortable doing so for patch 10,
though; patch 11 looks to be simple enough again.

Paul, as the release manager, please let me know if you disagree.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 07:47:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:47:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jawy7-00082k-AZ; Tue, 19 May 2020 07:47:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jawy5-00082X-JY
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:47:17 +0000
X-Inumbo-ID: f55a4fe0-99a4-11ea-a8e2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f55a4fe0-99a4-11ea-a8e2-12813bfff9fa;
 Tue, 19 May 2020 07:47:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D2193AB64;
 Tue, 19 May 2020 07:47:17 +0000 (UTC)
Subject: Re: [PATCH v10 02/12] xen: add a generic way to include binary files
 as variables
To: Juergen Gross <jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <24d44577-2e59-1428-2b63-08c89a0046d8@suse.com>
Date: Tue, 19 May 2020 09:47:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519072106.26894-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 09:20, Juergen Gross wrote:
> --- a/xen/xsm/flask/Makefile
> +++ b/xen/xsm/flask/Makefile
> @@ -39,6 +39,9 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
>  obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
>  flask-policy.o: policy.bin
>  
> +flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
> +	$(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy

I realize the script gets installed as executable, but such
permissions can get lost. Typically I think we invoke the shell
instead, with the script as first argument. Thoughts? Would
affect patch 8 then as well. Sorry for noticing this only now.

Jan
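
The variant Jan suggests would look roughly like the following (a sketch, not the committed rule): invoking the interpreter explicitly means the build no longer depends on the script's execute permission bit, which tools such as patch(1) may not preserve.

```make
# Hypothetical reworking of the rule quoted above: run the script via
# the shell instead of relying on its executable bit.
flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
	$(SHELL) $(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
```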


From xen-devel-bounces@lists.xenproject.org Tue May 19 07:52:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:52:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jax2o-0000Qd-UU; Tue, 19 May 2020 07:52:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jax2n-0000QY-Te
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:52:09 +0000
X-Inumbo-ID: a409276e-99a5-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a409276e-99a5-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 07:52:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A962BAC6D;
 Tue, 19 May 2020 07:52:10 +0000 (UTC)
Subject: Re: [PATCH v10 02/12] xen: add a generic way to include binary files
 as variables
To: Jan Beulich <jbeulich@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-3-jgross@suse.com>
 <24d44577-2e59-1428-2b63-08c89a0046d8@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0a46fa63-5914-8ced-cd38-fa2c938d167b@suse.com>
Date: Tue, 19 May 2020 09:52:07 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24d44577-2e59-1428-2b63-08c89a0046d8@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.20 09:47, Jan Beulich wrote:
> On 19.05.2020 09:20, Juergen Gross wrote:
>> --- a/xen/xsm/flask/Makefile
>> +++ b/xen/xsm/flask/Makefile
>> @@ -39,6 +39,9 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
>>   obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
>>   flask-policy.o: policy.bin
>>   
>> +flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
>> +	$(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
> 
> I realize the script gets installed as executable, but such
> permissions can get lost. Typically I think we invoke the shell
> instead, with the script as first argument. Thoughts? Would
> affect patch 8 then as well. Sorry for noticing this only now.

Shall I resend or would you do that while committing?


Juergen



From xen-devel-bounces@lists.xenproject.org Tue May 19 07:55:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jax6E-0000aX-En; Tue, 19 May 2020 07:55:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jax6D-0000aS-BV
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:55:41 +0000
X-Inumbo-ID: 21460d00-99a6-11ea-a8e3-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21460d00-99a6-11ea-a8e3-12813bfff9fa;
 Tue, 19 May 2020 07:55:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4309DAECE;
 Tue, 19 May 2020 07:55:41 +0000 (UTC)
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
 <20200518170904.GY54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <748e3d53-779b-1529-73e8-37f3c2da6e57@suse.com>
Date: Tue, 19 May 2020 09:55:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200518170904.GY54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.2020 19:09, Roger Pau Monné wrote:
> On Wed, Sep 25, 2019 at 05:23:11PM +0200, Jan Beulich wrote:
>> @@ -310,7 +313,16 @@ int pv_domain_initialise(struct domain *
>>      /* 64-bit PV guest by default. */
>>      d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
>>  
>> -    d->arch.pv.xpti = is_hardware_domain(d) ? opt_xpti_hwdom : opt_xpti_domu;
>> +    if ( is_hardware_domain(d) && opt_xpti_hwdom )
>> +    {
>> +        d->arch.pv.xpti = true;
>> +        ++opt_xpti_hwdom;
>> +    }
>> +    if ( !is_hardware_domain(d) && opt_xpti_domu )
>> +    {
>> +        d->arch.pv.xpti = true;
>> +        opt_xpti_domu = 2;
> 
> I wonder whether a store fence is needed here in order to guarantee
> that opt_xpti_domu is visible to flush_area_local before proceeding
> any further with domain creation.

The changed behavior of flush_area_local() becomes relevant only
once the new domain runs. This being x86 code, the write can't
remain invisible past the point where the function returns, as the
store can't be deferred beyond that (in reality it can't be
deferred even until after the next [real] function call or the
next barrier()). And due to x86's cache-coherent nature (for WB
memory), the moment the store insn completes the new value is
visible to all other CPUs.
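The pattern at issue can be sketched in plain C (all names and the initial value are simplified stand-ins for the Xen code, not the real declarations): on x86's TSO model with coherent WB memory, the plain store below is globally visible well before the new domain can ever be scheduled, so no explicit store fence is needed.

```c
#include <stdbool.h>

static int opt_xpti_domu = 1;   /* boot-time default (assumed value) */
static bool d_xpti;             /* stand-in for d->arch.pv.xpti */

/* Sketch of the hunk above: raise the per-domain flag and bump the
 * global indicator with a plain store.  On x86 the store can't stay
 * in the store buffer past the next (real) function call or
 * barrier(), and cache coherence makes it visible to all CPUs once
 * the store instruction completes. */
static void pv_domain_initialise_sketch(bool is_hwdom)
{
    if ( !is_hwdom && opt_xpti_domu )
    {
        d_xpti = true;
        opt_xpti_domu = 2;      /* no smp_wmb() needed before this */
    }
}

/* flush_area_local() only ever consults the indicator once the new
 * domain runs, long after the store above has become visible. */
static bool xpti_flush_needed_sketch(void)
{
    return opt_xpti_domu > 1;
}
```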

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 07:58:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 07:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jax93-0000i0-Tt; Tue, 19 May 2020 07:58:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jax92-0000hv-FH
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 07:58:36 +0000
X-Inumbo-ID: 8a625370-99a6-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a625370-99a6-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 07:58:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4AD3FACAE;
 Tue, 19 May 2020 07:58:37 +0000 (UTC)
Subject: Re: [PATCH v10 02/12] xen: add a generic way to include binary files
 as variables
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-3-jgross@suse.com>
 <24d44577-2e59-1428-2b63-08c89a0046d8@suse.com>
 <0a46fa63-5914-8ced-cd38-fa2c938d167b@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b6b3721c-4317-a3af-3a77-a3a882cc9530@suse.com>
Date: Tue, 19 May 2020 09:58:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <0a46fa63-5914-8ced-cd38-fa2c938d167b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 09:52, Jürgen Groß wrote:
> On 19.05.20 09:47, Jan Beulich wrote:
>> On 19.05.2020 09:20, Juergen Gross wrote:
>>> --- a/xen/xsm/flask/Makefile
>>> +++ b/xen/xsm/flask/Makefile
>>> @@ -39,6 +39,9 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
>>>   obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
>>>   flask-policy.o: policy.bin
>>>   
>>> +flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
>>> +	$(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
>>
>> I realize the script gets installed as executable, but such
>> permissions can get lost. Typically I think we invoke the shell
>> instead, with the script as first argument. Thoughts? Would
>> affect patch 8 then as well. Sorry for noticing this only now.
> 
> Shall I resend or would you do that while committing?

In patch 8 I'd be fine adding $(SHELL). Here, though, the question is
whether it should be $(SHELL) or $(CONFIG_SHELL) - I don't have any
idea why the latter exists in the first place. Daniel?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:06:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:06:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxGJ-00028e-3W; Tue, 19 May 2020 08:06:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jaxGI-00028Z-2a
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:06:06 +0000
X-Inumbo-ID: 95f9d806-99a7-11ea-9887-bc764e2007e4
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95f9d806-99a7-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 08:06:04 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id d7so10449094eja.7
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 01:06:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:thread-index
 :content-language;
 bh=RVJqIxPZoERjogup+I552owU+rzaIvrLTsANtoO6Mds=;
 b=siZm1C2KJf3kpy7DPeUxJVcKxFVKcxenePrSg2QkzsFfTiWvKknUV6yeouvvLhxBX+
 Fv/+o295EdnmvWYmpBjAtsxlqSXN2lP1md3UujXLLyB8/GBG/GBxm4Yqjc1u/lwjGG8I
 U9fMfgBaYGQI99uAVJhnRHNJFoV3ArTg6sep5G+gVwe2OlDdPGLmxPwnun1M4YxcFmwL
 4UmXjOa+gC+pcIw5aZbT0ItVUwpCqEPysKQVOo4wWKqg2H+4CvCO2sLVlMn/wUb3KAD4
 +AcTwa43kuexI/Jms2zEs+oxNIsl/0dAIlLQ9WNC1Lsw+ibn2j60CjbOW41xI7h0bkDH
 gedA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :thread-index:content-language;
 bh=RVJqIxPZoERjogup+I552owU+rzaIvrLTsANtoO6Mds=;
 b=HePFo4COEk7atwU61qs9sNif5+27hLtrMCITUjdAKGLRY9I7kWShDqNH2jtVH6l77D
 3IN8blS/YwsBrPCNLBdKejTuJHg9rAXIXiHmjTadT4HwhEAx71VhhQXTUqpsYvnr3wjY
 U0WQ3TfFLuLxCbGByvd7Zf/EJbn9dxgWWxGD9Q67XyZicSKQLmDVx0CyiFRBCI/4VgmP
 CPBtdnaJ/cAT/Q3iFQ/lQvCrR5ZmNbSSm1V+fy0Q/S0uSuNZ5Ihj53VJ4wCiXp5VgVTT
 UEdD+BcIn5F04RAhzX6knDSb8p6sK2M0593J9+RhUn8accWKuYqrE+ZJwE2bPuFYlBvh
 rcqw==
X-Gm-Message-State: AOAM530Nzq4gulI5Dfmufjnpd8oW/gYOKmds7rPQNfAyZa4IO+gGxkJo
 Tf3+6X0OHMf/ZqD5c+L5SoU=
X-Google-Smtp-Source: ABdhPJy5f3skt8sw4Nq05t4Kk1RXbU0cgID1IwPkxc5t88cL7AIQ1vtSSQHHla0YVDIfnclfCG3f8w==
X-Received: by 2002:a17:906:560b:: with SMTP id
 f11mr16879898ejq.264.1589875563816; 
 Tue, 19 May 2020 01:06:03 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.177])
 by smtp.gmail.com with ESMTPSA id jp17sm1689947ejb.23.2020.05.19.01.06.01
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 01:06:03 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>,
 =?UTF-8?Q?'J=C3=BCrgen_Gro=C3=9F'?= <jgross@suse.com>,
 "'Kevin Tian'" <kevin.tian@intel.com>, "'Julien Grall'" <julien@xen.org>,
 "'Jun Nakajima'" <jun.nakajima@intel.com>, "'Wei Liu'" <wl@xen.org>,
 "'Ian Jackson'" <ian.jackson@eu.citrix.com>,
 "'Daniel De Graaf'" <dgdegra@tycho.nsa.gov>
References: <20200519072106.26894-1-jgross@suse.com>
 <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
 <305d829f-24a9-1a6d-2131-fed92c22c305@suse.com>
In-Reply-To: <305d829f-24a9-1a6d-2131-fed92c22c305@suse.com>
Subject: RE: [PATCH v10 00/12] Add hypervisor sysfs-like support
Date: Tue, 19 May 2020 09:06:01 +0100
Message-ID: <000f01d62db4$57181e90$05485bb0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Thread-Index: AQF0GzHQelLbY29QZvJSPKpxy6orIQGqCQh0Aj1ZGvypU/G18A==
Content-Language: en-gb
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 08:45
> To: Jürgen Groß <jgross@suse.com>; Kevin Tian <kevin.tian@intel.com>; Julien Grall <julien@xen.org>;
> Jun Nakajima <jun.nakajima@intel.com>; Wei Liu <wl@xen.org>; Ian Jackson <ian.jackson@eu.citrix.com>;
> Daniel De Graaf <dgdegra@tycho.nsa.gov>; Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini <sstabellini@kernel.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Anthony PERARD
> <anthony.perard@citrix.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné
> <roger.pau@citrix.com>
> Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
>
> On 19.05.2020 09:30, Jürgen Groß wrote:
> > On 19.05.20 09:20, Juergen Gross wrote:
> >>
> >> Juergen Gross (12):
> >>    xen/vmx: let opt_ept_ad always reflect the current setting
> >>    xen: add a generic way to include binary files as variables
> >>    docs: add feature document for Xen hypervisor sysfs-like support
> >>    xen: add basic hypervisor filesystem support
> >>    libs: add libxenhypfs
> >>    tools: add xenfs tool
> >>    xen: provide version information in hypfs
> >>    xen: add /buildinfo/config entry to hypervisor filesystem
> >>    xen: add runtime parameter access support to hypfs
> >>    tools/libxl: use libxenhypfs for setting xen runtime parameters
> >>    tools/libxc: remove xc_set_parameters()
> >>    xen: remove XEN_SYSCTL_set_parameter support
> >>
> >>   .gitignore                          |   6 +
> >>   docs/features/hypervisorfs.pandoc   |  92 +++++
> >>   docs/man/xenhypfs.1.pod             |  61 ++++
> >>   docs/misc/hypfs-paths.pandoc        | 165 +++++++++
> >>   tools/Rules.mk                      |   8 +-
> >>   tools/flask/policy/modules/dom0.te  |   4 +-
> >>   tools/libs/Makefile                 |   1 +
> >>   tools/libs/hypfs/Makefile           |  16 +
> >>   tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
> >>   tools/libs/hypfs/include/xenhypfs.h |  90 +++++
> >>   tools/libs/hypfs/libxenhypfs.map    |  10 +
> >>   tools/libs/hypfs/xenhypfs.pc.in     |  10 +
> >>   tools/libxc/include/xenctrl.h       |   1 -
> >>   tools/libxc/xc_misc.c               |  21 --
> >>   tools/libxl/Makefile                |   3 +-
> >>   tools/libxl/libxl.c                 |  53 ++-
> >>   tools/libxl/libxl_internal.h        |   1 +
> >>   tools/libxl/xenlight.pc.in          |   2 +-
> >>   tools/misc/Makefile                 |   6 +
> >>   tools/misc/xenhypfs.c               | 192 ++++++++++
> >>   tools/xl/xl_misc.c                  |   1 -
> >>   xen/arch/arm/traps.c                |   3 +
> >>   xen/arch/arm/xen.lds.S              |  13 +-
> >>   xen/arch/x86/hvm/hypercall.c        |   3 +
> >>   xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
> >>   xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
> >>   xen/arch/x86/hypercall.c            |   3 +
> >>   xen/arch/x86/pv/domain.c            |  21 +-
> >>   xen/arch/x86/pv/hypercall.c         |   3 +
> >>   xen/arch/x86/xen.lds.S              |  12 +-
> >>   xen/common/Kconfig                  |  23 ++
> >>   xen/common/Makefile                 |  13 +
> >>   xen/common/grant_table.c            |  62 +++-
> >>   xen/common/hypfs.c                  | 452 +++++++++++++++++++++++
> >>   xen/common/kernel.c                 |  84 ++++-
> >>   xen/common/sysctl.c                 |  36 --
> >>   xen/drivers/char/console.c          |  72 +++-
> >>   xen/include/Makefile                |   1 +
> >>   xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
> >>   xen/include/public/hypfs.h          | 129 +++++++
> >>   xen/include/public/sysctl.h         |  19 +-
> >>   xen/include/public/xen.h            |   1 +
> >>   xen/include/xen/hypercall.h         |  10 +
> >>   xen/include/xen/hypfs.h             | 123 +++++++
> >>   xen/include/xen/kernel.h            |   3 +
> >>   xen/include/xen/lib.h               |   1 -
> >>   xen/include/xen/param.h             | 126 +++++--
> >>   xen/include/xlat.lst                |   2 +
> >>   xen/include/xsm/dummy.h             |   6 +
> >>   xen/include/xsm/xsm.h               |   6 +
> >>   xen/tools/binfile                   |  43 +++
> >>   xen/xsm/dummy.c                     |   1 +
> >>   xen/xsm/flask/Makefile              |   5 +-
> >>   xen/xsm/flask/flask-policy.S        |  16 -
> >>   xen/xsm/flask/hooks.c               |   9 +-
> >>   xen/xsm/flask/policy/access_vectors |   4 +-
> >>   56 files changed, 2445 insertions(+), 193 deletions(-)
> >>   create mode 100644 docs/features/hypervisorfs.pandoc
> >>   create mode 100644 docs/man/xenhypfs.1.pod
> >>   create mode 100644 docs/misc/hypfs-paths.pandoc
> >>   create mode 100644 tools/libs/hypfs/Makefile
> >>   create mode 100644 tools/libs/hypfs/core.c
> >>   create mode 100644 tools/libs/hypfs/include/xenhypfs.h
> >>   create mode 100644 tools/libs/hypfs/libxenhypfs.map
> >>   create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
> >>   create mode 100644 tools/misc/xenhypfs.c
> >>   create mode 100644 xen/common/hypfs.c
> >>   create mode 100644 xen/include/public/hypfs.h
> >>   create mode 100644 xen/include/xen/hypfs.h
> >>   create mode 100755 xen/tools/binfile
> >>   delete mode 100644 xen/xsm/flask/flask-policy.S
> >>
> >
> > There are some Acks missing on this series, so please have a look at the
> > patches!
> >
> > Especially missing are:
> >
> > - Patch 1: VMX maintainers
> > - Patch 2 + 4: XSM maintainer
> > - Patch 4 + 9: Arm maintainer
> > - Patch 10 + 11: tools maintainers
> >
> > I'd really like the series to go into 4.14 (deadline this Friday).
>

I would also like to see this in 4.14.

> FTR I'm intending to waive the need for the first three of the named
> sets if they don't arrive by Friday (and there I don't mean last
> minute on Friday) - they're not overly intrusive (maybe with the
> exception of the XSM parts in #4) and the series has been pending
> for long enough. I don't feel comfortable doing so for patch 10,
> though; patch 11 looks to be simple enough again.
>
> Paul, as the release manager, please let me know if you disagree.
>

Looking at patch #4, I'm not confident that the XSM parts are complete
(e.g. does xen.if need updating?). Also I'd put the new access vector in
xen2, since that's where set_parameter currently is (and will be removed
from in a later patch), but the xen class does appear to have space so
that's really just my taste.

I agree that patch #10 really needs a tools maintainer ack but
patch #11 looks straightforward too so I'd be happy without one for
that.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 19 08:09:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxJi-0002IA-Jj; Tue, 19 May 2020 08:09:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IbQz=7B=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jaxJh-0002I5-4G
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:09:37 +0000
X-Inumbo-ID: 143c1545-99a8-11ea-a8e3-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 143c1545-99a8-11ea-a8e3-12813bfff9fa;
 Tue, 19 May 2020 08:09:36 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id l18so14776950wrn.6
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 01:09:36 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=wr+G6ba0u5WKbNxoCAyuyvl5jZ+YmSujiSW5/xkVRKA=;
 b=E9A/bK7mCxp21Rd2LZPSbXb4AGn8NOu5suZ/tjCrLjHQrtksrR06UFhKHxd2vYUsAx
 ymzAGkOM4CaoHGHsCucynBdszjMkYi6OilVpBrAVdBPhm36v0RnrK9RDgyoo+AYdKVns
 yWQAKoCjvqxLkym3FyM7nxLx03C2VgTf25ZSRDEPuFOabeJK79Wr98pj11ix+OFWQyM4
 BIfvhNMgp7pR9D7l1cI+gJp+3KLcDM7dE686W3lS5scQ1vuFsjW9Xi7hUocfqZrKv0DA
 6AaJ4fcFG03zR/xhezzoKfzj53dpxZ9aVQVJJFaQqIDOkuW0VuUcJKMxSHJHHNXaRd9H
 So4w==
X-Gm-Message-State: AOAM530Hj4+ZZsQOYN/WoTxZfqhk/JOcM2RJ975Y0zI3QjJ2Gh9Bz7+R
 CDzOTrd1wiJ56E5fRZwr3rc=
X-Google-Smtp-Source: ABdhPJz2FdxvQ8kADRoefVL37oAQb+8/uKaivVJ5hPH2Ni487nk8ppaQaG+3IoyG8TLOGkGQPXEXNA==
X-Received: by 2002:a5d:534e:: with SMTP id t14mr23961671wrv.15.1589875775806; 
 Tue, 19 May 2020 01:09:35 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id z18sm2656307wmk.46.2020.05.19.01.09.34
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 19 May 2020 01:09:35 -0700 (PDT)
Date: Tue, 19 May 2020 08:09:33 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v10 11/12] tools/libxc: remove xc_set_parameters()
Message-ID: <20200519080933.3pjozwgltpbgl2jp@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-12-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200519072106.26894-12-jgross@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 09:21:05AM +0200, Juergen Gross wrote:
> There is no user of xc_set_parameters() left, so remove it.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:14:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:14:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxO7-00034h-6H; Tue, 19 May 2020 08:14:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaxO6-00034Z-Bp
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:14:10 +0000
X-Inumbo-ID: b696bb6e-99a8-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b696bb6e-99a8-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 08:14:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8EDECAF17;
 Tue, 19 May 2020 08:14:10 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bf0d9e00-cb42-34b1-26ee-93628eea094c@suse.com>
Date: Tue, 19 May 2020 10:14:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200518153820.18170-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.2020 17:38, Andrew Cooper wrote:
> The reserved_bit_page_fault() paths effectively turn reserved bit faults into
> a warning, but in the light of L1TF, the real impact is far more serious.
> 
> Xen does not have any reserved bits set in its pagetables, nor do we permit PV
> guests to write any.  An HVM shadow guest may have reserved bits via the MMIO
> fastpath, but those faults are handled in the VMExit #PF intercept, rather
> than Xen's #PF handler.
> 
> There is no need to disable interrupts (in spurious_page_fault()) for
> __page_fault_type() to look at the rsvd bit, nor should extable fixup be
> tolerated.

I'm afraid I don't understand the connection of the first half of this
to the patch - you don't alter spurious_page_fault() in this regard (at
all, actually).

As to extable fixup, I'm not sure: If a reserved bit ends up slipping
into the non-Xen parts of the page tables, and if guest accessors then
become able to trip a corresponding #PF, the bug will need an XSA with
the proposed change, while - afaict - it won't if the exception gets
recovered from. (There may then still be a log spam issue, I admit.)

> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>          return;
>  
> +    /*
> +     * Xen doesn't have reserved bits in its pagetables, nor do we permit PV guests to
> +     * write any.  Such entries would be vulnerable to the L1TF sidechannel.
> +     *
> +     * The only logic which intentionally sets reserved bits is the shadow
> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
> +     * than here.
> +     */
> +    if ( error_code & PFEC_reserved_bit )
> +        goto fatal;

Judging from the description, wouldn't this then better go even further
up, ahead of the fixup_page_fault() invocation? In fact the function
has two PFEC_reserved_bit checks to _avoid_ taking action, which look
like they could then be dropped.
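A minimal sketch of the reordering suggested above (everything except the PFEC_reserved_bit layout is a hypothetical stand-in for the real handler): the reserved-bit check moves ahead of the fixup call, after which fixup_page_fault()'s own two PFEC_reserved_bit checks become unreachable and could be dropped.

```c
#include <stdbool.h>

#define PFEC_reserved_bit (1u << 3)   /* RSVD bit of the x86 #PF error code */

static int fatal_faults;              /* stand-in for the fatal path */

/* Stand-in for the real fixup logic; with the reordering below its
 * internal reserved-bit checks would no longer be reachable. */
static bool fixup_page_fault(unsigned long addr)
{
    (void)addr;
    return false;
}

static void do_page_fault_sketch(unsigned long addr, unsigned int error_code)
{
    /* Moved ahead of fixup_page_fault(), per the review comment. */
    if ( error_code & PFEC_reserved_bit )
    {
        fatal_faults++;               /* "goto fatal" in the real handler */
        return;
    }

    if ( fixup_page_fault(addr) )
        return;

    /* ... remainder of the handler ... */
}
```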

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:14:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:14:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxOI-00037c-EZ; Tue, 19 May 2020 08:14:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=aMO8=7B=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jaxOG-00036j-Vy
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:14:21 +0000
X-Inumbo-ID: bd615ed6-99a8-11ea-a8e4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bd615ed6-99a8-11ea-a8e4-12813bfff9fa;
 Tue, 19 May 2020 08:14:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E41A4AB76;
 Tue, 19 May 2020 08:14:21 +0000 (UTC)
Subject: Re: [PATCH v10 02/12] xen: add a generic way to include binary files
 as variables
To: Jan Beulich <jbeulich@suse.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-3-jgross@suse.com>
 <24d44577-2e59-1428-2b63-08c89a0046d8@suse.com>
 <0a46fa63-5914-8ced-cd38-fa2c938d167b@suse.com>
 <b6b3721c-4317-a3af-3a77-a3a882cc9530@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <0c87b35b-801d-1ccc-696c-5d417e564868@suse.com>
Date: Tue, 19 May 2020 10:14:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <b6b3721c-4317-a3af-3a77-a3a882cc9530@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.20 09:58, Jan Beulich wrote:
> On 19.05.2020 09:52, Jürgen Groß wrote:
>> On 19.05.20 09:47, Jan Beulich wrote:
>>> On 19.05.2020 09:20, Juergen Gross wrote:
>>>> --- a/xen/xsm/flask/Makefile
>>>> +++ b/xen/xsm/flask/Makefile
>>>> @@ -39,6 +39,9 @@ $(subst include/,%/,$(AV_H_FILES)): $(AV_H_DEPEND) $(mkaccess) FORCE
>>>>    obj-bin-$(CONFIG_XSM_FLASK_POLICY) += flask-policy.o
>>>>    flask-policy.o: policy.bin
>>>>    
>>>> +flask-policy.S: $(XEN_ROOT)/xen/tools/binfile
>>>> +	$(XEN_ROOT)/xen/tools/binfile -i $@ policy.bin xsm_flask_init_policy
>>>
>>> I realize the script gets installed as executable, but such
>>> permissions can get lost. Typically I think we invoke the shell
>>> instead, with the script as first argument. Thoughts? Would
>>> affect patch 8 then as well. Sorry for noticing this only now.
>>
>> Shall I resend or would you do that while committing?
> 
> In patch 8 I'd be fine adding $(SHELL). Here, though, the question is
> whether it should be $(SHELL) or $(CONFIG_SHELL) - I don't have any
> idea why the latter exists in the first place. Daniel?

Why would different shells be needed in the two patches?

The binfile script is rather simple without any bash-isms in it (AFAICT
CONFIG_SHELL seems to prefer bash). So $(SHELL) should be fine IMO.


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:17:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:17:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxRD-0003Jm-SK; Tue, 19 May 2020 08:17:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IbQz=7B=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jaxRB-0003Je-Rv
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:17:21 +0000
X-Inumbo-ID: 2924948a-99a9-11ea-b07b-bc764e2007e4
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2924948a-99a9-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 08:17:21 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id i15so14746800wrx.10
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 01:17:21 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=rPvsYD8u9NYxPAtjlCpCo+A9TeCrz0m4y0nIF0xzGoY=;
 b=f90jlETaMAK37uZJSvfVQh5BjieUZ2saCm7ExFdhoiZrUj104p4V8QLlW/2l0fcd53
 +li26pWKTB0RCTFjTfKOG0NZOKkkMdwupigYcgmyN/sWdvn4QfrjUmtq4ZCp1Gu2X2eK
 D4AJcvV7u/CSGxVozA+zo20dpDvGWzXmbHwnIJQ+YegIJNj6vymyfKiwSkhCZBwjmH4J
 OmHEWIZQo19YhgxwBw1v3nZ/uCrCOfSVOyMDRM7O1XGSBOGnAqbq2NxXHk+reXlHNvhn
 YyNKDBnjZdsxwiNz+w8coz9Zzb4JY83v7YAgPvWd45YhiUUFKetKFdrBwhSf6R4NgHDd
 6ThA==
X-Gm-Message-State: AOAM532eNXdKnqR076bFiDD35DYaBzttQ00K3A3iDPP2iX1mQTNuJGDk
 C/fMT32obM0jfbrvXv9kwmY=
X-Google-Smtp-Source: ABdhPJxzNTIywS28ZPLHd20SnUo68SfhSE+H31hG8Z1rBke1hX1NpKT0ILUz8d4ye6S1VjnedsbKCQ==
X-Received: by 2002:adf:e783:: with SMTP id n3mr23864842wrm.157.1589876240380; 
 Tue, 19 May 2020 01:17:20 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id b7sm2784796wmj.29.2020.05.19.01.17.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 19 May 2020 01:17:19 -0700 (PDT)
Date: Tue, 19 May 2020 08:17:18 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH v10 10/12] tools/libxl: use libxenhypfs for setting xen
 runtime parameters
Message-ID: <20200519081718.tkp7rsd3fseqmyzv@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-11-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200519072106.26894-11-jgross@suse.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 09:21:04AM +0200, Juergen Gross wrote:
> Instead of xc_set_parameters() use xenhypfs_write() for setting
> parameters of the hypervisor.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V6:
> - new patch
> ---
>  tools/Rules.mk               |  2 +-
>  tools/libxl/Makefile         |  3 +-
>  tools/libxl/libxl.c          | 53 ++++++++++++++++++++++++++++++++----
>  tools/libxl/libxl_internal.h |  1 +
>  tools/libxl/xenlight.pc.in   |  2 +-
>  tools/xl/xl_misc.c           |  1 -
>  6 files changed, 52 insertions(+), 10 deletions(-)
> 
> diff --git a/tools/Rules.mk b/tools/Rules.mk
> index ad6073fcad..883a193f9e 100644
> --- a/tools/Rules.mk
> +++ b/tools/Rules.mk
> @@ -178,7 +178,7 @@ CFLAGS += -O2 -fomit-frame-pointer
>  endif
>  
>  CFLAGS_libxenlight = -I$(XEN_XENLIGHT) $(CFLAGS_libxenctrl) $(CFLAGS_xeninclude)
> -SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore)
> +SHDEPS_libxenlight = $(SHLIB_libxenctrl) $(SHLIB_libxenstore) $(SHLIB_libxenhypfs)
>  LDLIBS_libxenlight = $(SHDEPS_libxenlight) $(XEN_XENLIGHT)/libxenlight$(libextension)
>  SHLIB_libxenlight  = $(SHDEPS_libxenlight) -Wl,-rpath-link=$(XEN_XENLIGHT)
>  
> diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
> index 69fcf21577..a89ebab0b4 100644
> --- a/tools/libxl/Makefile
> +++ b/tools/libxl/Makefile
> @@ -20,7 +20,7 @@ LIBUUID_LIBS += -luuid
>  endif
>  
>  LIBXL_LIBS =
> -LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
> +LIBXL_LIBS = $(LDLIBS_libxentoollog) $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxenhypfs) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(PTYFUNCS_LIBS) $(LIBUUID_LIBS)
>  ifeq ($(CONFIG_LIBNL),y)
>  LIBXL_LIBS += $(LIBNL3_LIBS)
>  endif
> @@ -33,6 +33,7 @@ CFLAGS_LIBXL += $(CFLAGS_libxentoolcore)
>  CFLAGS_LIBXL += $(CFLAGS_libxenevtchn)
>  CFLAGS_LIBXL += $(CFLAGS_libxenctrl)
>  CFLAGS_LIBXL += $(CFLAGS_libxenguest)
> +CFLAGS_LIBXL += $(CFLAGS_libxenhypfs)
>  CFLAGS_LIBXL += $(CFLAGS_libxenstore)
>  ifeq ($(CONFIG_LIBNL),y)
>  CFLAGS_LIBXL += $(LIBNL3_CFLAGS)
> diff --git a/tools/libxl/libxl.c b/tools/libxl/libxl.c
> index f60fd3e4fd..621acc88f3 100644
> --- a/tools/libxl/libxl.c
> +++ b/tools/libxl/libxl.c
> @@ -663,15 +663,56 @@ int libxl_set_parameters(libxl_ctx *ctx, char *params)
>  {
>      int ret;
>      GC_INIT(ctx);
> +    char *par, *val, *end, *path;
> +    xenhypfs_handle *hypfs;
>  
> -    ret = xc_set_parameters(ctx->xch, params);
> -    if (ret < 0) {
> -        LOGEV(ERROR, ret, "setting parameters");
> -        GC_FREE;
> -        return ERROR_FAIL;
> +    hypfs = xenhypfs_open(ctx->lg, 0);
> +    if (!hypfs) {
> +        LOGE(ERROR, "opening Xen hypfs");
> +        ret = ERROR_FAIL;
> +        goto out;
>      }
> +
> +    while (isblank(*params))
> +        params++;
> +
> +    for (par = params; *par; par = end) {
> +        end = strchr(par, ' ');
> +        if (!end)
> +            end = par + strlen(par);
> +
> +        val = strchr(par, '=');
> +        if (val > end)
> +            val = NULL;
> +        if (!val && !strncmp(par, "no", 2)) {
> +            path = libxl__sprintf(gc, "/params/%s", par + 2);
> +            path[end - par - 2 + 8] = 0;
> +            val = "no";
> +            par += 2;
> +        } else {
> +            path = libxl__sprintf(gc, "/params/%s", par);
> +            path[val - par + 8] = 0;
> +            val = libxl__strndup(gc, val + 1, end - val - 1);
> +        }
> +
> +	LOG(DEBUG, "setting node \"%s\" to value \"%s\"", path, val);

Indentation is wrong, but this can be fixed upon committing.

I would very much like the parsing be moved to libxlu. That can wait
till another day.

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:22:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:22:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxW9-00048n-Fc; Tue, 19 May 2020 08:22:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mazm=7B=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jaxW8-00048i-Lh
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:22:28 +0000
X-Inumbo-ID: e0320fe0-99a9-11ea-a8e4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0320fe0-99a9-11ea-a8e4-12813bfff9fa;
 Tue, 19 May 2020 08:22:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=Rc+qzUpe0T069lskf8TXNNcQCBC0VCFC0iUiW3PNpCA=; b=utrlcu0AgRkjLf/Rsyhse2WhB7
 kT1FLIx13BMfa4ikBcMl8hbyAQsf+qc8Oh1UqY51YSLy+8W6k8LPeBnWmXV8O+MsmslsO59RW8ZQv
 cJURxTrQWgTQEvKqlZ/IP9SmN/3VQDZ7QdooKQMZcggIKy8KZgdobJS1sE5m9PW1AJDM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jaxW5-000370-QD; Tue, 19 May 2020 08:22:25 +0000
Received: from 96.142.6.51.dyn.plus.net ([51.6.142.96] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jaxW5-00037u-G6; Tue, 19 May 2020 08:22:25 +0000
Date: Mon, 18 May 2020 23:24:38 +0100
From: Wei Liu <wl@xen.org>
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v7 00/19] Add support for qemu-xen running in a
 Linux-based stubdomain
Message-ID: <20200518222438.ar6e7fzkrjr5voau@debian>
References: <20200519015503.115236-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

I have queued up the first 5 patches for committing today.

Wei.


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:34:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxhd-000555-MW; Tue, 19 May 2020 08:34:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaxhc-000550-Q9
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:34:20 +0000
X-Inumbo-ID: 8868d7c4-99ab-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8868d7c4-99ab-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 08:34:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A4F1EABE6;
 Tue, 19 May 2020 08:34:21 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
Date: Tue, 19 May 2020 10:34:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200518153820.18170-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.2020 17:38, Andrew Cooper wrote:
> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>          return;
>  
> +    /*
> +     * Xen doesn't have reserved bits in its pagetables, nor do we permit PV guests to
> +     * write any.  Such entries would be vulnerable to the L1TF sidechannel.
> +     *
> +     * The only logic which intentionally sets reserved bits is the shadow
> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
> +     * than here.

What about SH_L1E_MAGIC and sh_l1e_gnp()? The latter gets used by
_sh_propagate() without visible restriction to HVM.

And of course every time I look at this code I wonder how we can
get away with (quoting a comment) "We store 28 bits of GFN in
bits 4:32 of the entry." Do we have a hidden restriction
somewhere guaranteeing that guests won't have (emulated MMIO)
GFNs above 1Tb when run in shadow mode?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:43:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:43:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxpu-0005xC-HB; Tue, 19 May 2020 08:42:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jaxpt-0005x7-HG
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:42:53 +0000
X-Inumbo-ID: b9fe6226-99ac-11ea-a8e6-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9fe6226-99ac-11ea-a8e6-12813bfff9fa;
 Tue, 19 May 2020 08:42:52 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3mLq1r6D77oju2SHbs9YkPn3FbFuz7bJ9mfmUKCQ0Vu8auB+BGUkl32gGXyIzkLsyorpKDqWNa
 b2e6xAoD8pW49shMm32KbXJwWLAjjkEM+qWkEGNfsA1t/Mw2X3gujQvFcVf2UViwSvXk4b6lal
 qx98L5uUI50CxUq/yxAqfwdrp1om5M6iYLR8jKOqxd3ZpMEXlbcoGv+elPaPUKrcLD9M9DrDhs
 KRr0xZ5jXim1c7BDJETH9I7Qm8+yv1dUMzDCZ/IHV1Gs2UxQ++etZ0aYxoTijlwncSQ83yLz9B
 JVE=
X-SBRS: 2.7
X-MesageID: 17890476
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,409,1583211600"; d="scan'208";a="17890476"
Date: Tue, 19 May 2020 10:42:42 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 1/3] x86: relax GDT check in arch_set_info_guest()
Message-ID: <20200519084242.GZ54375@Air-de-Roger>
References: <b7a1a7fe-0bc5-1654-ff1c-e5eb787c579e@suse.com>
 <3f78d1dc-720d-6bf3-0911-c19da1a2ddbb@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <3f78d1dc-720d-6bf3-0911-c19da1a2ddbb@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Dec 20, 2019 at 02:49:48PM +0100, Jan Beulich wrote:
> It is wrong for us to check frames beyond the guest specified limit
> (in the native case, other than in the compat one).

Wouldn't this result in arch_set_info_guest failing if gdt_ents was
smaller than the maximum? Or do all callers always pass gdt_ents set to
the maximum?

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -840,6 +840,7 @@ int arch_set_info_guest(
>  #ifdef CONFIG_PV
>      mfn_t cr3_mfn;
>      struct page_info *cr3_page = NULL;
> +    unsigned int nr_gdt_frames;
>      int rc = 0;
>  #endif
>  
> @@ -951,6 +952,8 @@ int arch_set_info_guest(
>      /* Ensure real hardware interrupts are enabled. */
>      v->arch.user_regs.eflags |= X86_EFLAGS_IF;
>  
> +    nr_gdt_frames = DIV_ROUND_UP(c(gdt_ents), 512);
> +
>      if ( !v->is_initialised )
>      {
>          if ( !compat && !(flags & VGCF_in_kernel) && !c.nat->ctrlreg[1] )
> @@ -982,9 +985,9 @@ int arch_set_info_guest(
>              fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3];
>          }
>  
> -        for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i )
> -            fail |= v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);
>          fail |= v->arch.pv.gdt_ents != c(gdt_ents);
> +        for ( i = 0; !fail && i < nr_gdt_frames; ++i )
> +            fail |= v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);

fail doesn't need to be OR'ed anymore here, since you check for it in
the loop condition.

>  
>          fail |= v->arch.pv.ldt_base != c(ldt_base);
>          fail |= v->arch.pv.ldt_ents != c(ldt_ents);
> @@ -1089,12 +1092,11 @@ int arch_set_info_guest(
>      else
>      {
>          unsigned long gdt_frames[ARRAY_SIZE(v->arch.pv.gdt_frames)];
> -        unsigned int nr_frames = DIV_ROUND_UP(c.cmp->gdt_ents, 512);
>  
> -        if ( nr_frames > ARRAY_SIZE(v->arch.pv.gdt_frames) )
> +        if ( nr_gdt_frames > ARRAY_SIZE(v->arch.pv.gdt_frames) )
>              return -EINVAL;

Shouldn't this check be performed when nr_gdt_frames is initialized
instead of here? (as nr_gdt_frames is already used as a limit to
iterate over gdt_frames).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:46:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxtK-00067K-13; Tue, 19 May 2020 08:46:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o3Eh=7B=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jaxtI-00067E-NM
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:46:24 +0000
X-Inumbo-ID: 37532dd8-99ad-11ea-b9cf-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37532dd8-99ad-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 08:46:23 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04J8kFgN015594;
 Tue, 19 May 2020 10:46:15 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 2B1F62810; Tue, 19 May 2020 10:46:15 +0200 (CEST)
Date: Tue, 19 May 2020 10:46:15 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200519084615.GB1782@antioche.eu.org>
References: <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
 <20200517175607.GA8793@antioche.eu.org>
 <000a01d62ce7$093b7f50$1bb27df0$@xen.org>
 <20200518173111.GA13512@antioche.eu.org>
 <cd60e91a-a4a4-d14e-eead-c93f7682413d@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cd60e91a-a4a4-d14e-eead-c93f7682413d@suse.com>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [151.127.5.145]);
 Tue, 19 May 2020 10:46:15 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org,
 paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 09:34:30AM +0200, Jan Beulich wrote:
> On 18.05.2020 19:31, Manuel Bouyer wrote:
> > From what I found it seems that all unallocated memory is tagged p2m_mmio_dm,
> > is it right ?
> 
> Yes. For many years there has been a plan to better separate this from
> p2m_invalid ...

thanks.

So for some reason, MMU_NORMAL_PT_UPDATE thinks that the memory is not
allocated for this domain. This is true both for the ioreq page and
when trying to load the BIOS ROM into guest memory.
I traced the hypercall in the tools and the memory is allocated with
XENMEM_populate_physmap (and the gfn returned by XENMEM_populate_physmap
and passed to MMU_NORMAL_PT_UPDATE do match).

Still looking ...

Note that I'm using the 4.13.0 release sources, not the tip of the branch.
Is this something that could have been fixed after the release?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:50:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:50:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxx3-0006vF-Hz; Tue, 19 May 2020 08:50:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaxx3-0006vA-5d
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:50:17 +0000
X-Inumbo-ID: c26391a6-99ad-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c26391a6-99ad-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 08:50:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 05AF4AC22;
 Tue, 19 May 2020 08:50:17 +0000 (UTC)
Subject: Re: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-3-andrew.cooper3@citrix.com>
 <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
 <8f1d68b1-895a-d2a6-4dcb-55b688b03336@citrix.com>
 <b1ef905c-dab6-d1c3-4673-4c06c7e94a0a@suse.com>
 <560c3bce-211a-52ab-c919-9ca1ab9beab3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <45545b0c-2f0d-1de8-88ec-472d0a110eaa@suse.com>
Date: Tue, 19 May 2020 10:50:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <560c3bce-211a-52ab-c919-9ca1ab9beab3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.2020 18:54, Andrew Cooper wrote:
> On 11/05/2020 16:09, Jan Beulich wrote:
>> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>>
>> On 11.05.2020 17:01, Andrew Cooper wrote:
>>> On 04/05/2020 14:08, Jan Beulich wrote:
>>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>>> For one, they render the vector in a different base.
>>>>>
>>>>> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
>>>>> mnemonic, which starts bringing the code/diagnostics in line with the Intel
>>>>> and AMD manuals.
>>>> For this "bringing in line" purpose I'd like to see whether you could
>>>> live with some adjustments to how you're currently doing things:
>>>> - NMI is nowhere prefixed by #, hence I think we'd better not do so
>>>>   either; may require embedding the #-es in the names[] table, or not
>>>>   using N() for NMI
>>> No-one is going to get confused at seeing #NMI in an error message.  I
>>> don't mind jugging the existing names table, but anything more
>>> complicated is overkill.
>>>
>>>> - neither Coprocessor Segment Overrun nor vector 0x0f have a mnemonic
>>>>   and hence I think we shouldn't invent one; just treat them like
>>>>   other reserved vectors (of which at least vector 0x09 indeed is one
>>>>   on x86-64)?
>>> This I disagree with.  Coprocessor Segment Overrun *is* its name in both
>>> manuals, and the avoidance of vector 0xf is clearly documented as well,
>>> due to it being the default PIC Spurious Interrupt Vector.
>>>
>>> Neither CSO nor SPV is expected to be encountered in practice, but if
>>> they are, highlighting them is a damn-sight more helpful than pretending
>>> they don't exist.
>> How is them occurring (and getting logged with their vector numbers)
>> any different from other reserved, acronym-less vectors? I particularly
>> didn't suggest to pretend they don't exist; instead I did suggest that
>> they are as reserved as, say, vector 0x18. By inventing an acronym and
>> logging this instead of the vector number you'll make people other than
>> you have to look up what the odd acronym means iff such an exception
>> ever got raised.
> 
> You snipped the bits in the patch where both the vector number and
> acronym are printed together.
> 
> Anyone who doesn't know the vector has to look it up anyway, at which
> point they'll find that what Xen prints out matches what both manuals
> say.  OTOH, people who know what a coprocessor segment overrun or PIC
> spurious vector is won't need to look it up.

And who would know how to decipher the non-standard CSO and SPV (which
are what triggered my comments in the first place)? What I continue to fail to
see is why these reserved vectors need treatment different from all
others. In addition I'm having trouble seeing how the default spurious
PIC vector matters for us - we program the PIC to vectors 0x20-0x2f,
i.e. a spurious PIC0 IRQ would show up at vector 0x27. (I notice we
still blindly assume there's a pair of PICs in the first place.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:51:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaxyg-00071V-Tp; Tue, 19 May 2020 08:51:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaxyg-00071Q-2V
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:51:58 +0000
X-Inumbo-ID: fec1eecc-99ad-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fec1eecc-99ad-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 08:51:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 63266AC22;
 Tue, 19 May 2020 08:51:59 +0000 (UTC)
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
 <20200517175607.GA8793@antioche.eu.org>
 <000a01d62ce7$093b7f50$1bb27df0$@xen.org>
 <20200518173111.GA13512@antioche.eu.org>
 <cd60e91a-a4a4-d14e-eead-c93f7682413d@suse.com>
 <20200519084615.GB1782@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b3bcaf85-a37b-9fb8-a472-c8899ca800c8@suse.com>
Date: Tue, 19 May 2020 10:51:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519084615.GB1782@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Andrew Cooper' <andrew.cooper3@citrix.com>, paul@xen.org,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 10:46, Manuel Bouyer wrote:
> Note that I'm using the 4.13.0 release sources, not the top of branch.
> Is it something that could have been fixed after the release ?

I don't recall anything, but switching to 4.13.1 would still seem like
a helpful thing for you to do.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:58:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:58:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jay4q-0007Ek-MI; Tue, 19 May 2020 08:58:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mazm=7B=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jay4p-0007Ee-92
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:58:19 +0000
X-Inumbo-ID: e2392558-99ae-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2392558-99ae-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 08:58:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=7+z3ZDCgN11RO4REgESRXkZ917HXpAqJ50OIrMd4lfM=; b=EmBoUSjMIrfbsRkobjDHQgoLRD
 YfT5K/X1JL45cFeAC6mm2Ux9KVNTBBQvit8XsdBoW5D2pF8Fc0AV6eMRmJ9ia86i8skN8NKRwyPFn
 5ACE1Ag9U5gcVjx5IzjqXJVfPZusa1Di2QvtWbFk8BsMZjj571bp5lkB42RR26uSg4UI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jay4o-0003qL-1A; Tue, 19 May 2020 08:58:18 +0000
Received: from 96.142.6.51.dyn.plus.net ([51.6.142.96] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jay4n-0004tu-Nc; Tue, 19 May 2020 08:58:17 +0000
Date: Tue, 19 May 2020 09:58:15 +0100
From: Wei Liu <wl@xen.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH 1/2] tools/python: Fix install-wrap
Message-ID: <20200519085815.nswits7owiutn3nc@debian>
References: <20200311175933.1362235-1-anthony.perard@citrix.com>
 <20200311175933.1362235-2-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200311175933.1362235-2-anthony.perard@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Mar 11, 2020 at 05:59:32PM +0000, Anthony PERARD wrote:
> This allows using install-wrap when the source script is in a
> subdirectory.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  tools/python/install-wrap | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/tools/python/install-wrap b/tools/python/install-wrap
> index 00e2014016f9..fef24e01708d 100755
> --- a/tools/python/install-wrap
> +++ b/tools/python/install-wrap
> @@ -44,7 +44,7 @@ shift
>  destf="$dest"
>  for srcf in ${srcs}; do
>  	if test -d "$dest"; then
> -		destf="$dest/${srcf%%*/}"
> +		destf="$dest/${srcf##*/}"

This seems to have changed the pattern from "Remove Largest Suffix" to
"Remove Largest Prefix".

What does it do in practice?

For POSIX sh

x=posix/src/std
echo ${x%%*/} -> posix/src/std
echo ${x##*/} -> std

I would think the former is what you want. But I could be missing
something obvious.
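The two expansions can be checked mechanically in any POSIX shell (a
throwaway sketch, not part of the patch):

```shell
# POSIX parameter expansion: %% removes the largest matching suffix,
# ## removes the largest matching prefix.
x=posix/src/std

# "*/" can never match a suffix of "posix/src/std" (the string does
# not end in a slash), so %%*/ leaves the value unchanged.
echo "${x%%*/}"    # posix/src/std

# ##*/ strips everything up to and including the last slash,
# i.e. it yields the basename.
echo "${x##*/}"    # std
```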

Wei.

>  	fi
>  	org="$(sed -n '2q; /^#! *\/usr\/bin\/env python *$/p' $srcf)"
>  	if test "x$org" = x; then
> -- 
> Anthony PERARD
> 


From xen-devel-bounces@lists.xenproject.org Tue May 19 08:58:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 08:58:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jay54-0007FB-UN; Tue, 19 May 2020 08:58:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mazm=7B=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jay53-0007F2-Nl
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 08:58:33 +0000
X-Inumbo-ID: ea5affb8-99ae-11ea-a8e6-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea5affb8-99ae-11ea-a8e6-12813bfff9fa;
 Tue, 19 May 2020 08:58:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=xoNOQdQYDnjoM3Si6FGbJPFomYZnBiPwJw5n76fyXgg=; b=FswXLYZFh4b+rmktrMJr1BIz5f
 FlKRWyh14K/yaL5utTJtD7VV5WIq3SYL9EkMEViXXneiyjvcm4/HgJ8CzeDjtEJgx95gucKE1PvIO
 elKCd8qBkTrvANunQM4+IhELNV7qloa/DPshX1qDpM43o4THqn5tYwaoL9FAPybWMH48=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jay51-0003r6-Mo; Tue, 19 May 2020 08:58:31 +0000
Received: from 96.142.6.51.dyn.plus.net ([51.6.142.96] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jay51-0004u8-Dm; Tue, 19 May 2020 08:58:31 +0000
Date: Tue, 19 May 2020 09:58:29 +0100
From: Wei Liu <wl@xen.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH 2/2] tools: Use INSTALL_PYTHON_PROG
Message-ID: <20200519085829.5nvgbmzcirtovqh5@debian>
References: <20200311175933.1362235-1-anthony.perard@citrix.com>
 <20200311175933.1362235-3-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200311175933.1362235-3-anthony.perard@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Mar 11, 2020 at 05:59:33PM +0000, Anthony PERARD wrote:
> Whenever python scripts are installed, have the shebang modified to use
> whatever PYTHON_PATH is. This is useful for systems where python isn't
> available, or where the package build tools prevent unversioned shebangs.
> 
> INSTALL_PYTHON_PROG only looks for "#!/usr/bin/env python".
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:02:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:02:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jay8s-00089y-EV; Tue, 19 May 2020 09:02:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jay8r-00089t-4h
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:02:29 +0000
X-Inumbo-ID: 76e13772-99af-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76e13772-99af-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 09:02:28 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: IUQcCtOhP4PrdXx2PlZTuKCKevbwnBlfPV5ArJdEVGrZJd/vFqqiZkL2JhnSLazzG3KyPo0VTr
 hxud0AzWeWjxBLJe8LtTSVvFrZyKku2ARMPU+U4I0C7/M4wvFFrlgW+qWZlkaXES3s9ljOUzX2
 8bWtYFBSDoqmPotu6/rU91AtsEs6qy4IcfvUxuwfBr2ab+csVCCUaV1N9ZDZwJyoojRRaxAa/9
 D5aDYuVmteaKaiZXLZSyH5PU2/x7ik/Gb7eZRG79d4njnsBN+ABzsR9qdvcw9Nl/P9USUO+E1W
 Ua4=
X-SBRS: 2.7
X-MesageID: 18568271
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,409,1583211600"; d="scan'208";a="18568271"
Date: Tue, 19 May 2020 11:02:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 2/3] x86: relax LDT check in arch_set_info_guest()
Message-ID: <20200519090220.GA54375@Air-de-Roger>
References: <b7a1a7fe-0bc5-1654-ff1c-e5eb787c579e@suse.com>
 <c36cac91-49ae-6bb2-b057-195031979d21@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c36cac91-49ae-6bb2-b057-195031979d21@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, Dec 20, 2019 at 02:50:06PM +0100, Jan Beulich wrote:
> It is wrong for us to check the base address when there's no LDT in the
> first place. Once we don't do this check anymore we can also set the
> base address to a non-canonical value when the LDT is empty.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I wonder if a ldt_ents check should also be added to
pv_map_ldt_shadow_page in order to avoid trying to get the mapping of
the LDT.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:12:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jayIv-0000d6-Eb; Tue, 19 May 2020 09:12:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jayIu-0000d1-Vu
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:12:53 +0000
X-Inumbo-ID: e9989b42-99b0-11ea-a8e9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e9989b42-99b0-11ea-a8e9-12813bfff9fa;
 Tue, 19 May 2020 09:12:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 47018AFA1;
 Tue, 19 May 2020 09:12:52 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] x86: relax LDT check in arch_set_info_guest()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <b7a1a7fe-0bc5-1654-ff1c-e5eb787c579e@suse.com>
 <c36cac91-49ae-6bb2-b057-195031979d21@suse.com>
 <20200519090220.GA54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <50050054-b081-5d6e-13e8-c50e52afe42b@suse.com>
Date: Tue, 19 May 2020 11:12:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519090220.GA54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 11:02, Roger Pau Monné wrote:
> On Fri, Dec 20, 2019 at 02:50:06PM +0100, Jan Beulich wrote:
>> It is wrong for us to check the base address when there's no LDT in the
>> first place. Once we don't do this check anymore we can also set the
>> base address to a non-canonical value when the LDT is empty.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I wonder if a ldt_ents check should also be added to
> pv_map_ldt_shadow_page in order to avoid trying to get the mapping of
> the LDT.

We already have

    if ( unlikely((offset >> 3) >= curr->arch.pv.ldt_ents) )
    {
        ASSERT_UNREACHABLE();
        return false;
    }

What else do you mean?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:15:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:15:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jayLf-0000mu-12; Tue, 19 May 2020 09:15:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jayLd-0000mp-9M
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:15:41 +0000
X-Inumbo-ID: 4e94195e-99b1-11ea-ae69-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e94195e-99b1-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 09:15:40 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 1pHIIbnBroH8nZ2gt+9TtsMV9EOC+f3vEDrBZtmNBwMGudcB61qlUdykhrhUF/Hyjs3uHVPhP0
 PX0w7F5JGS4Cf4NKcciJcQUxsWeD64nJKc3VqxaGgueRej84m2FgMExDLGNfp7KD6uEkjCkeeN
 HJUh5NBi1P0Vj7Jd+L/deMcHLlyakpy9TnTSc5oiJsPiTPKTIKT1BXxfGLEHf1uM1hCCPGSl+i
 9YFWO8YL6FjxM8mRQVlkxppDjON7dfY+V6S47xIp4yTnlUZiOZOiaU1B7tncrkPumy25uGmXl7
 VQs=
X-SBRS: 2.7
X-MesageID: 17892331
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,409,1583211600"; d="scan'208";a="17892331"
Date: Tue, 19 May 2020 11:15:32 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
Message-ID: <20200519091532.GB54375@Air-de-Roger>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
 <20200518170904.GY54375@Air-de-Roger>
 <748e3d53-779b-1529-73e8-37f3c2da6e57@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <748e3d53-779b-1529-73e8-37f3c2da6e57@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 09:55:38AM +0200, Jan Beulich wrote:
> On 18.05.2020 19:09, Roger Pau Monné wrote:
> > On Wed, Sep 25, 2019 at 05:23:11PM +0200, Jan Beulich wrote:
> >> @@ -310,7 +313,16 @@ int pv_domain_initialise(struct domain *
> >>      /* 64-bit PV guest by default. */
> >>      d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
> >>  
> >> -    d->arch.pv.xpti = is_hardware_domain(d) ? opt_xpti_hwdom : opt_xpti_domu;
> >> +    if ( is_hardware_domain(d) && opt_xpti_hwdom )
> >> +    {
> >> +        d->arch.pv.xpti = true;
> >> +        ++opt_xpti_hwdom;
> >> +    }
> >> +    if ( !is_hardware_domain(d) && opt_xpti_domu )
> >> +    {
> >> +        d->arch.pv.xpti = true;
> >> +        opt_xpti_domu = 2;
> > 
> > I wonder whether a store fence is needed here in order to guarantee
> > that opt_xpti_domu is visible to flush_area_local before proceeding
> > any further with domain creation.
> 
> The changed behavior of flush_area_local() becomes relevant only
> once the new domain runs. This being x86 code, the write can't
> remain invisible for longer than the very latest when the function
> returns, as the store can't be deferred past that (in reality it
> can't be deferred even until after the next [real] function call
> or the next barrier()). And due to x86'es cache coherent nature
> (for WB memory) the moment the store insn completes the new value
> is visible to all other CPUs.

Yes, I think it's fine because this is x86 specific code. A comment
in that regard might be nice, but I'm not going to make this a strong
request.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I also think that turning opt_xpti_domu into a proper atomic and
increasing/decreasing it (maybe a cmpxchg would be needed) upon PV domain
creation/destruction should be able to accurately keep track of PV
domUs, and hence could be used to further reduce the flushes when no PV
domains are running?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:18:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:18:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jayOC-0000uS-Eh; Tue, 19 May 2020 09:18:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jayOA-0000uM-UE
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:18:18 +0000
X-Inumbo-ID: ace82950-99b1-11ea-ae69-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ace82950-99b1-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 09:18:18 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: XYMfoYkHSsyFUFz//mHjRBhbbNozQp3u4Beu/gYByizNcYler69FW1HgM44fTVpAnsWN/jCHJ0
 akr1A4oQVAl8PCuuLtwm1iKnc1ATt3IwlYdo5yD3xILdSmcN6QT9St7VKHqKbq67T+2hoJV3sA
 DoDwvh+mADCX/h9ISHSjHqHYbogB/42nTQFSa1y3pSK+bJMnd+DMtM3PiGB2B/HeScwYSirhLc
 LyqxBRiXi3QsFvfQH76PQIomyGndYX/JeZELDuTN8GEHBBLwPy0urEDOKTV1Cnnm37gUOHWsE0
 eX8=
X-SBRS: 2.7
X-MesageID: 18126586
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,409,1583211600"; d="scan'208";a="18126586"
Date: Tue, 19 May 2020 11:18:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 2/3] x86: relax LDT check in arch_set_info_guest()
Message-ID: <20200519091810.GC54375@Air-de-Roger>
References: <b7a1a7fe-0bc5-1654-ff1c-e5eb787c579e@suse.com>
 <c36cac91-49ae-6bb2-b057-195031979d21@suse.com>
 <20200519090220.GA54375@Air-de-Roger>
 <50050054-b081-5d6e-13e8-c50e52afe42b@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <50050054-b081-5d6e-13e8-c50e52afe42b@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 11:12:49AM +0200, Jan Beulich wrote:
> On 19.05.2020 11:02, Roger Pau Monné wrote:
> > On Fri, Dec 20, 2019 at 02:50:06PM +0100, Jan Beulich wrote:
> >> It is wrong for us to check the base address when there's no LDT in the
> >> first place. Once we don't do this check anymore we can also set the
> >> base address to a non-canonical value when the LDT is empty.
> >>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks.
> 
> > I wonder if a ldt_ents check should also be added to
> > pv_map_ldt_shadow_page in order to avoid trying to get the mapping of
> > the LDT.
> 
> We already have
> 
>     if ( unlikely((offset >> 3) >= curr->arch.pv.ldt_ents) )
>     {
>         ASSERT_UNREACHABLE();
>         return false;
>     }
> 
> What else do you mean?

Oh, I've missed that. I was searching for a check before accessing
ldt_base, which is done at initialization time. That's indeed fine.

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:37:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:37:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jayg2-0002bE-0U; Tue, 19 May 2020 09:36:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jayfz-0002b9-TH
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:36:43 +0000
X-Inumbo-ID: 3f940236-99b4-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f940236-99b4-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 09:36:43 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DFC8EAC5F;
 Tue, 19 May 2020 09:36:44 +0000 (UTC)
Subject: Re: [PATCH v2 1/3] x86: relax GDT check in arch_set_info_guest()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <b7a1a7fe-0bc5-1654-ff1c-e5eb787c579e@suse.com>
 <3f78d1dc-720d-6bf3-0911-c19da1a2ddbb@suse.com>
 <20200519084242.GZ54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3347c044-b4d2-cfeb-2bc7-1eccb956b47f@suse.com>
Date: Tue, 19 May 2020 11:36:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519084242.GZ54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 10:42, Roger Pau Monné wrote:
> On Fri, Dec 20, 2019 at 02:49:48PM +0100, Jan Beulich wrote:
>> It is wrong for us to check frames beyond the guest specified limit
>> (in the native case, other than in the compat one).
> 
> Wouldn't this result in arch_set_info_guest failing if gdt_ents was
> smaller than the maximum? Or all callers always pass gdt_ents set to
> the maximum?

Since the array is embedded in the struct, I suppose callers simply
start out from a zero-initialized array, in which case the actual
count given doesn't matter. Additionally I think it is uncommon for
the function to get called on a vCPU with v->is_initialised already
set.

>> @@ -982,9 +985,9 @@ int arch_set_info_guest(
>>              fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3];
>>          }
>>  
>> -        for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i )
>> -            fail |= v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);
>>          fail |= v->arch.pv.gdt_ents != c(gdt_ents);
>> +        for ( i = 0; !fail && i < nr_gdt_frames; ++i )
>> +            fail |= v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);
> 
> fail doesn't need to be OR'ed anymore here, since you check for it in
> the loop condition.

Ah yes, changed.

>> @@ -1089,12 +1092,11 @@ int arch_set_info_guest(
>>      else
>>      {
>>          unsigned long gdt_frames[ARRAY_SIZE(v->arch.pv.gdt_frames)];
>> -        unsigned int nr_frames = DIV_ROUND_UP(c.cmp->gdt_ents, 512);
>>  
>> -        if ( nr_frames > ARRAY_SIZE(v->arch.pv.gdt_frames) )
>> +        if ( nr_gdt_frames > ARRAY_SIZE(v->arch.pv.gdt_frames) )
>>              return -EINVAL;
> 
> Shouldn't this check be performed when nr_gdt_frames is initialized
> instead of here? (as nr_gdt_frames is already used as a limit to
> iterate over gdt_frames).

Oh, yes, of course. Thanks for spotting.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:42:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jayla-0003Qc-MN; Tue, 19 May 2020 09:42:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fY7P=7B=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jaylZ-0003QW-9q
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:42:29 +0000
X-Inumbo-ID: 0d1ebd40-99b5-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d1ebd40-99b5-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 09:42:28 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +/BYSCIfvq6nw29ir08rVcq6LLiA3pX7CRye5gBnQTvShJ6RsbCE7b+w4fliC8pmGKv1y5T3F7
 fmDRt4ITy2qzx8VaQQ8t3WXmrfN5nNsO91aTsiwkBgvXhGS8b34h/fpU3eafCnOoiMXZuUSvm+
 DS/TbzfH1WgSHjzUNn4BEr/n/gs+HIToPLN0WuOZ1egtlevIQztconL5IcncaDzDk91UGcnDNi
 2y8XkDxEqVXzi0QvMdDwuE8VWTpkV2xYQYhP2N55sFpZflbqnj9fx/B4ACO+5BowON9Njka2Us
 WC8=
X-SBRS: 2.7
X-MesageID: 18228401
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,409,1583211600"; d="scan'208";a="18228401"
Date: Tue, 19 May 2020 10:42:22 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 1/2] tools/python: Fix install-wrap
Message-ID: <20200519094222.GB2105@perard.uk.xensource.com>
References: <20200311175933.1362235-1-anthony.perard@citrix.com>
 <20200311175933.1362235-2-anthony.perard@citrix.com>
 <20200519085815.nswits7owiutn3nc@debian>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200519085815.nswits7owiutn3nc@debian>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 09:58:15AM +0100, Wei Liu wrote:
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> 
> On Wed, Mar 11, 2020 at 05:59:32PM +0000, Anthony PERARD wrote:
> > This allows install-wrap to be used when the source script is in a
> > subdirectory.
> > 
> > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > ---
> >  tools/python/install-wrap | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/tools/python/install-wrap b/tools/python/install-wrap
> > index 00e2014016f9..fef24e01708d 100755
> > --- a/tools/python/install-wrap
> > +++ b/tools/python/install-wrap
> > @@ -44,7 +44,7 @@ shift
> >  destf="$dest"
> >  for srcf in ${srcs}; do
> >  	if test -d "$dest"; then
> > -		destf="$dest/${srcf%%*/}"
> > +		destf="$dest/${srcf##*/}"
> 
> This seems to have changed the pattern from "Remove Largest Suffix" to
> "Remove Largest Prefix".
> 
> What does it do in practice?
> 
> For POSIX sh
> 
> x=posix/src/std
> echo ${x%%*/} -> posix/src/std
> echo ${x##*/} -> std
> 
> I would think the former is what you want. But I could be missing
> something obvious.

The former is a no-op; it's the same as not doing anything.

Unless x="dir/dir/", in which case the %% would remove everything,
resulting in an empty string.

$srcf contains the path to the script we want to install, as a
relative path from where ./install-wrap is executed. $destf is the
final destination of the script, but if $dest is a directory, then
./install-wrap wants to install the script directly in $dest, not in
some sub-directory of it. ./install-wrap doesn't handle this
sub-directory; it fails to execute when there is one. (It's probably
$install that failed to copy $srcf into a non-existent directory.)

This line from the next patch is probably where things fail:
    $(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)
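For reference, the two expansions discussed above behave like this in
POSIX sh (the paths below are only illustrative; just the basename
extraction matches what the fixed install-wrap line does):

```shell
# Illustrative POSIX sh session; variable values are made up, only the
# expansion behaviour matches the install-wrap change.
srcf="scripts/convert-legacy-stream"

# "%%*/" strips the longest *suffix* matching "*/". Since $srcf has no
# trailing slash, no suffix can match, so this is a no-op:
noop="${srcf%%*/}"       # scripts/convert-legacy-stream

# ...except when the value ends in "/": then the whole string matches
# "*/" and everything is removed:
trailing="dir/dir/"
empty="${trailing%%*/}"  # empty string

# "##*/" strips the longest *prefix* matching "*/", i.e. everything up
# to and including the last slash, leaving the basename -- which is the
# file name install-wrap wants to append to $dest:
base="${srcf##*/}"       # convert-legacy-stream

echo "$noop | [$empty] | $base"
```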

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:44:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaynA-0003WW-2I; Tue, 19 May 2020 09:44:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jayn8-0003WK-KB
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:44:06 +0000
X-Inumbo-ID: 46acefc8-99b5-11ea-a8eb-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 46acefc8-99b5-11ea-a8eb-12813bfff9fa;
 Tue, 19 May 2020 09:44:04 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: nbfM0cE4uTvOGlVC+iVD6Ua5gLTXHWs2Plv3qJMlRGDH8uygZtKGap/f/0I8DsIedHP/f95zFn
 8HUyZz1f0BpMKuU7UeRPOtKmM06hAO4TLXCKiboIqmOQWUEuESjjIfpFTpNxidb0VLpRAqJpn2
 KDtkY0WMxN2WGzFaXtHlhSYbIrKjyPeg2AZW4iqa7FuuhB7fxhIFRODJyukV+pkYkNyVwR6qM4
 6JUedKS37SdJ62xO5fe36eqtyjj2qjHRyZoXvcl4JeiTujJOvg4xnxtVs8RuuTy8kDFTi+NanH
 hgg=
X-SBRS: 2.7
X-MesageID: 18141824
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,409,1583211600"; d="scan'208";a="18141824"
Date: Tue, 19 May 2020 11:43:57 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] changelog: add relevant changes during 4.14 development
 window
Message-ID: <20200519094357.GD54375@Air-de-Roger>
References: <20200511103145.37098-1-roger.pau@citrix.com>
 <9f783539-6a36-08eb-c141-bd0f76e5acfd@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9f783539-6a36-08eb-c141-bd0f76e5acfd@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Community Manager <community.manager@xenproject.org>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 03:10:07PM +0200, Jan Beulich wrote:
> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> 
> On 11.05.2020 12:31, Roger Pau Monne wrote:
> > Add entries for the relevant changes I've been working on during the
> > 4.14 development time frame. Mostly performance improvements related
> > to pvshim scalability issues when running with a high number of vCPUs.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> >  CHANGELOG.md | 6 ++++++
> >  1 file changed, 6 insertions(+)
> > 
> > diff --git a/CHANGELOG.md b/CHANGELOG.md
> > index b11e9bc4e3..554eeb6a12 100644
> > --- a/CHANGELOG.md
> > +++ b/CHANGELOG.md
> > @@ -8,6 +8,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
> >  
> >  ### Added
> >   - This file and MAINTAINERS entry.
> > + - Use x2APIC mode whenever available, regardless of interrupt remapping
> > +   support.
> > + - Performance improvements to guest assisted TLB flushes, either when using
> > +   the Xen hypercall interface or the viridian one.
> > + - Assorted pvshim performance and scalability improvements plus some bug
> > +   fixes.
> 
> Wouldn't most/all of these better go under a "### Changed" heading?

Sorry I didn't get to this in time; I see the patch has been
committed. Would you like me to move them?

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:45:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:45:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jayo4-0003iW-Kt; Tue, 19 May 2020 09:45:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jayo3-0003iK-Bq
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:45:03 +0000
X-Inumbo-ID: 69274883-99b5-11ea-a8eb-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 69274883-99b5-11ea-a8eb-12813bfff9fa;
 Tue, 19 May 2020 09:45:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3B0DFAD9F;
 Tue, 19 May 2020 09:45:04 +0000 (UTC)
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
 <20200518170904.GY54375@Air-de-Roger>
 <748e3d53-779b-1529-73e8-37f3c2da6e57@suse.com>
 <20200519091532.GB54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <25c2a650-8745-19d7-34fc-28570d7dfd65@suse.com>
Date: Tue, 19 May 2020 11:45:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519091532.GB54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 11:15, Roger Pau Monné wrote:
> On Tue, May 19, 2020 at 09:55:38AM +0200, Jan Beulich wrote:
>> On 18.05.2020 19:09, Roger Pau Monné wrote:
>>> On Wed, Sep 25, 2019 at 05:23:11PM +0200, Jan Beulich wrote:
>>>> @@ -310,7 +313,16 @@ int pv_domain_initialise(struct domain *
>>>>      /* 64-bit PV guest by default. */
>>>>      d->arch.is_32bit_pv = d->arch.has_32bit_shinfo = 0;
>>>>  
>>>> -    d->arch.pv.xpti = is_hardware_domain(d) ? opt_xpti_hwdom : opt_xpti_domu;
>>>> +    if ( is_hardware_domain(d) && opt_xpti_hwdom )
>>>> +    {
>>>> +        d->arch.pv.xpti = true;
>>>> +        ++opt_xpti_hwdom;
>>>> +    }
>>>> +    if ( !is_hardware_domain(d) && opt_xpti_domu )
>>>> +    {
>>>> +        d->arch.pv.xpti = true;
>>>> +        opt_xpti_domu = 2;
>>>
>>> I wonder whether a store fence is needed here in order to guarantee
>>> that opt_xpti_domu is visible to flush_area_local before proceeding
>>> any further with domain creation.
>>
>> The changed behavior of flush_area_local() becomes relevant only
>> once the new domain runs. This being x86 code, the write can't
>> remain invisible beyond, at the very latest, the point where the
>> function returns, as the store can't be deferred past that (in
>> reality it can't be deferred even until after the next [real]
>> function call or the next barrier()). And due to x86's
>> cache-coherent nature (for WB memory), the moment the store insn
>> completes, the new value is visible to all other CPUs.
> 
> Yes, I think it's fine because this is x86 specific code. A comment
> in that regard might be nice, but I'm not going to make this a strong
> request.
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> I also think that turning opt_xpti_domu into a proper atomic and
> increasing/decreasing it (maybe a cmpxchg would be needed) upon PV
> domain creation/destruction should be able to accurately keep track
> of PV domUs, and hence could be used to further reduce the flushes
> when no PV domains are running?

Possibly, which would take care of (c) in the reasons the commit
message gives. (a) and in particular (b) would then still need
addressing.
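Roger's counting idea above could be sketched, very roughly, as follows
(plain C11 atomics with illustrative names; Xen has its own atomics and
domain creation/destruction hooks, so this is only a shape, not the
actual implementation):

```c
/* Hypothetical sketch of tracking live PV domUs with an atomic
 * counter, so XPTI-related flushes can be skipped when none run.
 * Names are illustrative and not taken from the Xen source tree. */
#include <stdatomic.h>
#include <stdbool.h>

static atomic_uint nr_pv_domus;

/* Called from the (hypothetical) PV domU creation path. */
static void pv_domu_created(void)
{
    atomic_fetch_add(&nr_pv_domus, 1);
}

/* Called from the (hypothetical) PV domU destruction path. */
static void pv_domu_destroyed(void)
{
    atomic_fetch_sub(&nr_pv_domus, 1);
}

/* flush_area_local()-style check: no live PV domU means the
 * XPTI-related flush could be skipped. */
static bool xpti_flush_needed(void)
{
    return atomic_load(&nr_pv_domus) != 0;
}
```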

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:46:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:46:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaypW-0003qO-08; Tue, 19 May 2020 09:46:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jaypV-0003qJ-98
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:46:33 +0000
X-Inumbo-ID: 9ef0e3d8-99b5-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ef0e3d8-99b5-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 09:46:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 93DC2AD9F;
 Tue, 19 May 2020 09:46:34 +0000 (UTC)
Subject: Re: [PATCH] changelog: add relevant changes during 4.14 development
 window
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200511103145.37098-1-roger.pau@citrix.com>
 <9f783539-6a36-08eb-c141-bd0f76e5acfd@suse.com>
 <20200519094357.GD54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <57969e66-69a0-7e67-81be-46ae686afc8d@suse.com>
Date: Tue, 19 May 2020 11:46:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519094357.GD54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Community Manager <community.manager@xenproject.org>,
 xen-devel@lists.xenproject.org, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 11:43, Roger Pau Monné wrote:
> On Mon, May 11, 2020 at 03:10:07PM +0200, Jan Beulich wrote:
>> [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
>>
>> On 11.05.2020 12:31, Roger Pau Monne wrote:
>>> Add entries for the relevant changes I've been working on during the
>>> 4.14 development time frame. Mostly performance improvements related
>>> to pvshim scalability issues when running with a high number of vCPUs.
>>>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>>  CHANGELOG.md | 6 ++++++
>>>  1 file changed, 6 insertions(+)
>>>
>>> diff --git a/CHANGELOG.md b/CHANGELOG.md
>>> index b11e9bc4e3..554eeb6a12 100644
>>> --- a/CHANGELOG.md
>>> +++ b/CHANGELOG.md
>>> @@ -8,6 +8,12 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>>>  
>>>  ### Added
>>>   - This file and MAINTAINERS entry.
>>> + - Use x2APIC mode whenever available, regardless of interrupt remapping
>>> +   support.
>>> + - Performance improvements to guest assisted TLB flushes, either when using
>>> +   the Xen hypercall interface or the viridian one.
>>> + - Assorted pvshim performance and scalability improvements plus some bug
>>> +   fixes.
>>
>> Wouldn't most/all of these better go under a "### Changed" heading?
> 
> Sorry I didn't get to this in time; I see the patch has been
> committed. Would you like me to move them?

Well, not sure. Whoever committed the patch must have done so for
a reason. Personally I continue to think as expressed ...

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 09:54:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 09:54:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jayx4-0004ie-Qm; Tue, 19 May 2020 09:54:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jayx3-0004iZ-Ga
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 09:54:21 +0000
X-Inumbo-ID: b3c9da2a-99b6-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b3c9da2a-99b6-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 09:54:17 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: acHlKOzi/GU3AFK/PYajOqurCDz9Ig7JzGYMxy7rbLqbk+GrUT3jOq3vvki/7qQoLmgjBRtc9+
 tsRZ+g6Pr/tzPcoqg5zNWv+ujs4KkQl/9HFNgYl2VqMAGfAYJIaqpYUYDkBTXdJq+cm4BkmBYo
 R36hlH83AdSFmwbrI4VV6oMjBHPQvQsEADDYLLRL/5+YP8Sc9LHgyx72ZWtEPq6KhqEe+r6uoW
 kld8cmjTgwy2bMnGJohhn/Q4dW6pSqnk9N0mm8F9Y/ryonevu07ORtXsf/lZyOSN3no6IFPnym
 hGo=
X-SBRS: 2.7
X-MesageID: 18142236
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,409,1583211600"; d="scan'208";a="18142236"
Date: Tue, 19 May 2020 11:54:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Manuel Bouyer <bouyer@antioche.eu.org>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200519095407.GE54375@Air-de-Roger>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
 <20200517175607.GA8793@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <20200517175607.GA8793@antioche.eu.org>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, May 17, 2020 at 07:56:07PM +0200, Manuel Bouyer wrote:
> On Sun, May 17, 2020 at 07:32:59PM +0200, Manuel Bouyer wrote:
> > I've been looking a bit deeper in the Xen kernel.
> > The mapping fails in ./arch/x86/mm/p2m.c:p2m_get_page_from_gfn(),
> >         /* Error path: not a suitable GFN at all */
> > 	if ( !p2m_is_ram(*t) && !p2m_is_paging(*t) && !p2m_is_pod(*t) ) {
> > 	    gdprintk(XENLOG_ERR, "p2m_get_page_from_gfn2: %d is_ram %ld is_paging %ld is_pod %ld\n", *t, p2m_is_ram(*t), p2m_is_paging(*t), p2m_is_pod(*t) );
> > 	    return NULL;
> > 	}
> > 
> > *t is 4, which translates to p2m_mmio_dm
> > 
> > it looks like p2m_get_page_from_gfn() is not ready to handle this case
> > for dom0.
> 
> And so it looks like I need to implement osdep_xenforeignmemory_map_resource()
> for NetBSD

FWIW, FreeBSD doesn't have osdep_xenforeignmemory_map_resource
implemented and still works fine with 4.13.0 (is able to create HVM
guests), but that's a PVH dom0, not a PV one.

Regards, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 10:06:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 10:06:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jaz8u-0005km-2c; Tue, 19 May 2020 10:06:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mazm=7B=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jaz8s-0005kh-T8
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 10:06:34 +0000
X-Inumbo-ID: 6ac97018-99b8-11ea-a8eb-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ac97018-99b8-11ea-a8eb-12813bfff9fa;
 Tue, 19 May 2020 10:06:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=MFa7Bhv3NLCsM2M0nsBmdqB1BOouQkXDgO8AR5eNfW8=; b=qA/0Cxx+XE4L+I84iKZuGtIi2K
 Z1ADjqWXkHyh4bXaSjzsH5bm/ugaWaorY0u0FiBCCfv9bbqKjzRXpkvEAJ3uWGzduOZYOsBzUFU3X
 1Gpyqx1Sm4UUBT+Whjs5xUSA84b+F3De5NnCuF2/dZV+0Rzqm2a5dqqGLW/g+JElAGTM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jaz8q-0005OI-Ch; Tue, 19 May 2020 10:06:32 +0000
Received: from 96.142.6.51.dyn.plus.net ([51.6.142.96] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jaz8q-0000lc-39; Tue, 19 May 2020 10:06:32 +0000
Date: Tue, 19 May 2020 11:06:29 +0100
From: Wei Liu <wl@xen.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [XEN PATCH 1/2] tools/python: Fix install-wrap
Message-ID: <20200519100629.pnozlsd5ozc6gfvl@debian>
References: <20200311175933.1362235-1-anthony.perard@citrix.com>
 <20200311175933.1362235-2-anthony.perard@citrix.com>
 <20200519085815.nswits7owiutn3nc@debian>
 <20200519094222.GB2105@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200519094222.GB2105@perard.uk.xensource.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 10:42:22AM +0100, Anthony PERARD wrote:
> On Tue, May 19, 2020 at 09:58:15AM +0100, Wei Liu wrote:
> > [CAUTION - EXTERNAL EMAIL] DO NOT reply, click links, or open attachments unless you have verified the sender and know the content is safe.
> > 

Haha :-)

> > On Wed, Mar 11, 2020 at 05:59:32PM +0000, Anthony PERARD wrote:
> > > This allows install-wrap to be used when the source script is in a
> > > subdirectory.
> > > 
> > > Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> > > ---
> > >  tools/python/install-wrap | 2 +-
> > >  1 file changed, 1 insertion(+), 1 deletion(-)
> > > 
> > > diff --git a/tools/python/install-wrap b/tools/python/install-wrap
> > > index 00e2014016f9..fef24e01708d 100755
> > > --- a/tools/python/install-wrap
> > > +++ b/tools/python/install-wrap
> > > @@ -44,7 +44,7 @@ shift
> > >  destf="$dest"
> > >  for srcf in ${srcs}; do
> > >  	if test -d "$dest"; then
> > > -		destf="$dest/${srcf%%*/}"
> > > +		destf="$dest/${srcf##*/}"
> > 
> > This seems to have changed the pattern from "Remove Largest Suffix" to
> > "Remove Largest Prefix".
> > 
> > What does it do in practice?
> > 
> > For POSIX sh
> > 
> > x=posix/src/std
> > echo ${x%%*/} -> posix/src/std
> > echo ${x##*/} -> std
> > 
> > I would think the former is what you want. But I could be missing
> > something obvious.
> 
> The former is a no-op; it's the same as not doing anything.
> 
> Unless x="dir/dir/", in which case the %% would remove everything,
> resulting in an empty string.
> 
> $srcf contains the path to the script we want to install, as a
> relative path from where ./install-wrap is executed. $destf is the
> final destination of the script, but if $dest is a directory, then
> ./install-wrap wants to install the script directly in $dest, not in
> some sub-directory of it. ./install-wrap doesn't handle this
> sub-directory; it fails to execute when there is one. (It's probably
> $install that failed to copy $srcf into a non-existent directory.)
> 
> This line from the next patch is probably where things fail:
>     $(INSTALL_PYTHON_PROG) scripts/convert-legacy-stream $(DESTDIR)$(LIBEXEC_BIN)

I see. Thanks for explaining.

Acked-by: Wei Liu <wl@xen.org>

> 
> -- 
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 19 10:28:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 10:28:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jazTn-0007Th-Ua; Tue, 19 May 2020 10:28:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o3Eh=7B=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jazTm-0007Tc-3n
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 10:28:10 +0000
X-Inumbo-ID: 6e4ee620-99bb-11ea-9887-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e4ee620-99bb-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 10:28:08 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne [10.0.0.1])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04JAS2pb024884;
 Tue, 19 May 2020 12:28:02 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 0210D2810; Tue, 19 May 2020 12:28:01 +0200 (CEST)
Date: Tue, 19 May 2020 12:28:01 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: IOCTL_PRIVCMD_MMAPBATCH on Xen 4.13.0
Message-ID: <20200519102801.GE2459@antioche.eu.org>
References: <20200515202912.GA11714@antioche.eu.org>
 <d623cd12-4024-82ba-7388-21f606e1a0bd@citrix.com>
 <20200515210629.GA10976@antioche.eu.org>
 <b1dfc07d-bf0f-da26-79f0-8cf93952689e@citrix.com>
 <20200515215335.GA9991@antioche.eu.org>
 <d22b6b7c-9d1c-4cfb-427a-ca6f440a9b08@citrix.com>
 <20200517173259.GA7285@antioche.eu.org>
 <20200517175607.GA8793@antioche.eu.org>
 <20200519095407.GE54375@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200519095407.GE54375@Air-de-Roger>
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [151.127.5.145]);
 Tue, 19 May 2020 12:28:03 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 11:54:07AM +0200, Roger Pau Monné wrote:
> FWIW, FreeBSD doesn't have osdep_xenforeignmemory_map_resource
> implemented and still works fine with 4.13.0 (is able to create HVM
> guests), but that's a PVH dom0, not a PV one.

Yes, FreeBSD is PVH-only. This implies different code paths (the dom0
kernel has to map the foreign pages into its physical address space,
which PV doesn't have to do, and can't do).

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 ans d'experience feront toujours la difference


From xen-devel-bounces@lists.xenproject.org Tue May 19 11:28:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 11:28:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb0Pv-00042B-J5; Tue, 19 May 2020 11:28:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jb0Pu-000426-Ng
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 11:28:14 +0000
X-Inumbo-ID: d33523f9-99c3-11ea-a8f6-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d33523f9-99c3-11ea-a8f6-12813bfff9fa;
 Tue, 19 May 2020 11:28:13 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: k6mOXDwddi60MIyYZcBbG8tvY6FOBiR4VhvoqthRBXYw89Fhix1g0DEwfOdhPT7wYNOYJOvhtN
 Ykg5I65aTj3h8ZPg7nQ7mVVIKUxXyyNElYI39ObkSEtXvRx3/GC6+KwhA+oqilF96belJVDR73
 kA3irFZVw5jVu5VQC2ZwouRLrEEAz+VZuSb6WkrHMT42YDPaapJNVSurGxKiBeE/cR6SOwL7RT
 dEPOmnIvl+MnKjyre7BRycvqDawR+ImknLHOOqOJY3iX1WN8uEaQzJEkYMD/miAHBgV1HWYVju
 8rA=
X-SBRS: 2.7
X-MesageID: 17899145
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="17899145"
Date: Tue, 19 May 2020 13:28:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen: fix build without pci passthrough
Message-ID: <20200519112806.GF54375@Air-de-Roger>
References: <20200504101443.3165-1-roger.pau@citrix.com>
 <20200511134043.GH2116@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <20200511134043.GH2116@perard.uk.xensource.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 11, 2020 at 02:40:43PM +0100, Anthony PERARD wrote:
> On Mon, May 04, 2020 at 12:14:43PM +0200, Roger Pau Monne wrote:
> > diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> > index 179775db7b..660dd8a008 100644
> > --- a/hw/xen/xen_pt.h
> > +++ b/hw/xen/xen_pt.h
> > @@ -1,6 +1,7 @@
> >  #ifndef XEN_PT_H
> >  #define XEN_PT_H
> >  
> > +#include "qemu/osdep.h"
> 
> Why do you need osdep?

For CONFIG_XEN_PCI_PASSTHROUGH IIRC.

> 
> >  #include "hw/xen/xen_common.h"
> >  #include "hw/pci/pci.h"
> >  #include "xen-host-pci-device.h"
> > @@ -322,7 +323,13 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
> >                                              unsigned int domain,
> >                                              unsigned int bus, unsigned int slot,
> >                                              unsigned int function);
> > +
> > +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
> >  extern bool has_igd_gfx_passthru;
> > +#else
> > +# define has_igd_gfx_passthru false
> > +#endif
> 
> I don't quite like the use of define here. Could you introduce a
> function that return a bool instead? And defining that function in
> hw/xen/xen.h like xen_enabled() would be fine I think.

But has_igd_gfx_passthru is defined in xen_pt.c, which is only compiled
when CONFIG_XEN_PCI_PASSTHROUGH is defined, yet the variable is set from
xen-common.c. I think the #define approach is fine, and any attempt to set
has_igd_gfx_passthru without CONFIG_XEN_PCI_PASSTHROUGH will result in
a compile error, which is easier to catch?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 12:10:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 12:10:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb13x-0007Tn-9t; Tue, 19 May 2020 12:09:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jb13w-0007Ti-HS
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 12:09:36 +0000
X-Inumbo-ID: 99a8224c-99c9-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99a8224c-99c9-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 12:09:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=L1W5syRR6b60PCqhz8A2PE2HG3YmQQ8f/SfXEAD3YCU=; b=Nkz4p/DNjy9LiQh9eAxQTJyoo
 0fNdiWGeHtQVwYJ6ceZDs3QSbiOOqfIlj3WCdxZ6WpzlBg71wNeSof62hGGWbKAFjt0lRR+8fcsVB
 UUPnVS+7ADlwL9Xmie6pe+2XO1SLqiS8rpjSDNWnaXiP7v/rl15zRN6UR1hewnSkRq5ao=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb13s-0007u7-Vm; Tue, 19 May 2020 12:09:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb13s-0002db-KP; Tue, 19 May 2020 12:09:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jb13s-00043G-Jr; Tue, 19 May 2020 12:09:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150236-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150236: regressions - FAIL
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=642b151f45dd54809ea00ecd3976a56c1ec9b53d
X-Osstest-Versions-That: linux=b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 12:09:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150236 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150236/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      11 guest-start              fail REGR. vs. 150230

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150230
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150230
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150230
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150230
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150230
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150230
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150230
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150230
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150230
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                642b151f45dd54809ea00ecd3976a56c1ec9b53d
baseline version:
 linux                b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce

Last test of basis   150230  2020-05-18 09:16:04 Z    1 days
Testing same since   150236  2020-05-18 20:40:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dan Carpenter <dan.carpenter@oracle.com>
  David Howells <dhowells@redhat.com>
  Eric Sandeen <sandeen@sandeen.net>
  Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Namjae Jeon <namjae.jeon@samsung.com>
  Paul E. McKenney <paulmck@kernel.org> (RCU viewpoint)
  Roberto Sassu <roberto.sassu@huawei.com>
  Wei Yongjun <weiyongjun1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 642b151f45dd54809ea00ecd3976a56c1ec9b53d
Merge: 45088963ca9c 843385694721
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon May 18 11:29:21 2020 -0700

    Merge branch 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity
    
    Pull integrity fixes from Mimi Zohar:
     "A couple of miscellaneous bug fixes for the integrity subsystem:
    
      IMA:
    
       - Properly modify the open flags in order to calculate the file hash.
    
       - On systems requiring the IMA policy to be signed, the policy is
         loaded differently. Don't differentiate between "enforce" and
         either "log" or "fix" modes in how the policy is loaded.
    
      EVM:
    
       - Two patches to fix an EVM race condition, normally the result of
         attempting to load an unsupported hash algorithm.
    
       - Use the lockless RCU version for walking an append only list"
    
    * 'fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/zohar/linux-integrity:
      evm: Fix a small race in init_desc()
      evm: Fix RCU list related warnings
      ima: Fix return value of ima_write_policy()
      evm: Check also if *tfm is an error pointer in init_desc()
      ima: Set file->f_mode instead of file->f_flags in ima_calc_file_hash()

commit 45088963ca9cdc3df50dd7b1b63e1dc9dcc6375e
Merge: 9d1be4f4dc5f 94182167ec73
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Mon May 18 10:33:13 2020 -0700

    Merge tag 'for-5.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat
    
    Pull exfat fixes from Namjae Jeon:
    
     - Fix potential memory leak in exfat_find
    
     - Set exfat's splice_write to iter_file_splice_write to fix a splice
       failure on direct-opened files
    
    * tag 'for-5.7-rc7' of git://git.kernel.org/pub/scm/linux/kernel/git/linkinjeon/exfat:
      exfat: fix possible memory leak in exfat_find()
      exfat: use iter_file_splice_write

commit 9d1be4f4dc5ff1c66c86acfd2c35765d9e3776b3
Author: David Howells <dhowells@redhat.com>
Date:   Sun May 17 21:21:05 2020 +0100

    afs: Don't unlock fetched data pages until the op completes successfully
    
    Don't call req->page_done() on each page as we finish filling it with
    the data coming from the network.  Whilst this might speed up the
    application a bit, it's a problem if there's a network failure and the
    operation has to be reissued.
    
    If this happens, an oops occurs because afs_readpages_page_done() clears
    the pointer to each page it unlocks and when a retry happens, the
    pointers to the pages it wants to fill are now NULL (and the pages have
    been unlocked anyway).
    
    Instead, wait till the operation completes successfully and only then
    release all the pages after clearing any terminal gap (the server can
    give us less data than we requested as we're allowed to ask for more
    than is available).
    
    KASAN produces a bug like the following, and even without KASAN, it can
    oops and panic.
    
        BUG: KASAN: wild-memory-access in _copy_to_iter+0x323/0x5f4
        Write of size 1404 at addr 0005088000000000 by task md5sum/5235
    
        CPU: 0 PID: 5235 Comm: md5sum Not tainted 5.7.0-rc3-fscache+ #250
        Hardware name: ASUS All Series/H97-PLUS, BIOS 2306 10/09/2014
        Call Trace:
         memcpy+0x39/0x58
         _copy_to_iter+0x323/0x5f4
         __skb_datagram_iter+0x89/0x2a6
         skb_copy_datagram_iter+0x129/0x135
         rxrpc_recvmsg_data.isra.0+0x615/0xd42
         rxrpc_kernel_recv_data+0x1e9/0x3ae
         afs_extract_data+0x139/0x33a
         yfs_deliver_fs_fetch_data64+0x47a/0x91b
         afs_deliver_to_call+0x304/0x709
         afs_wait_for_call_to_complete+0x1cc/0x4ad
         yfs_fs_fetch_data+0x279/0x288
         afs_fetch_data+0x1e1/0x38d
         afs_readpages+0x593/0x72e
         read_pages+0xf5/0x21e
         __do_page_cache_readahead+0x128/0x23f
         ondemand_readahead+0x36e/0x37f
         generic_file_buffered_read+0x234/0x680
         new_sync_read+0x109/0x17e
         vfs_read+0xe6/0x138
         ksys_read+0xd8/0x14d
         do_syscall_64+0x6e/0x8a
         entry_SYSCALL_64_after_hwframe+0x49/0xb3
    
    Fixes: 196ee9cd2d04 ("afs: Make afs_fs_fetch_data() take a list of pages")
    Fixes: 30062bd13e36 ("afs: Implement YFS support in the fs client")
    Signed-off-by: David Howells <dhowells@redhat.com>
    Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
    Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>

commit 94182167ec730dadcaea5fbc6bb8f1136966ef66
Author: Wei Yongjun <weiyongjun1@huawei.com>
Date:   Wed May 6 14:25:54 2020 +0000

    exfat: fix possible memory leak in exfat_find()
    
    'es' is malloced from exfat_get_dentry_set() in exfat_find() and should
    be freed before leaving the error handling cases, otherwise it will
    cause a memory leak.
    
    Fixes: 5f2aa075070c ("exfat: add inode operations")
    Signed-off-by: Wei Yongjun <weiyongjun1@huawei.com>
    Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>

commit 035779483072ff7854943dc0cbae82c4e0070d15
Author: Eric Sandeen <sandeen@sandeen.net>
Date:   Fri May 1 20:34:25 2020 -0500

    exfat: use iter_file_splice_write
    
    Doing copy_file_range() on exfat with a file opened for direct IO leads
    to an -EFAULT:
    
    # xfs_io -f -d -c "truncate 32768" \$
           -c "copy_range -d 16384 -l 16384 -f 0" /mnt/test/junk
    copy_range: Bad address
    
    and the reason seems to be that we go through:
    
    default_file_splice_write
     splice_from_pipe
      __splice_from_pipe
       write_pipe_buf
        __kernel_write
         new_sync_write
          generic_file_write_iter
           generic_file_direct_write
            exfat_direct_IO
             do_blockdev_direct_IO
              iov_iter_get_pages
    
    and land in iterate_all_kinds(), which does "return -EFAULT" for our kvec
    iter.
    
    Setting exfat's splice_write to iter_file_splice_write fixes this and lets
    fsx (which originally detected the problem) run to success from
    the xfstests harness.
    
    Signed-off-by: Eric Sandeen <sandeen@sandeen.net>
    Signed-off-by: Namjae Jeon <namjae.jeon@samsung.com>

commit 8433856947217ebb5697a8ff9c4c9cad4639a2cf
Author: Dan Carpenter <dan.carpenter@oracle.com>
Date:   Tue May 12 16:19:17 2020 +0300

    evm: Fix a small race in init_desc()
    
    The IS_ERR_OR_NULL() function has two conditions and if we got really
    unlucky we could hit a race where "ptr" started as an error pointer and
    then was set to NULL.  Both conditions would be false even though the
    pointer at the end was NULL.
    
    This patch fixes the problem by ensuring that "*tfm" can only be NULL
    or valid.  I have introduced a "tmp_tfm" variable to make that work.  I
    also reversed a condition and pulled the code in one tab.
    
    Reported-by: Roberto Sassu <roberto.sassu@huawei.com>
    Fixes: 53de3b080d5e ("evm: Check also if *tfm is an error pointer in init_desc()")
    Signed-off-by: Dan Carpenter <dan.carpenter@oracle.com>
    Acked-by: Roberto Sassu <roberto.sassu@huawei.com>
    Acked-by: Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
    Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>

commit 770f60586d2af0590be263f55fd079226313922c
Author: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
Date:   Thu Apr 30 21:32:05 2020 +0530

    evm: Fix RCU list related warnings
    
    This patch fixes the following warning and few other instances of
    traversal of evm_config_xattrnames list:
    
    [   32.848432] =============================
    [   32.848707] WARNING: suspicious RCU usage
    [   32.848966] 5.7.0-rc1-00006-ga8d5875ce5f0b #1 Not tainted
    [   32.849308] -----------------------------
    [   32.849567] security/integrity/evm/evm_main.c:231 RCU-list traversed in non-reader section!!
    
    Since entries are only added to the list and never deleted, use
    list_for_each_entry_lockless() instead of list_for_each_entry_rcu for
    traversing the list.  Also, add a relevant comment in evm_secfs.c to
    indicate this fact.
    
    Reported-by: kernel test robot <lkp@intel.com>
    Suggested-by: Paul E. McKenney <paulmck@kernel.org>
    Signed-off-by: Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
    Acked-by: Paul E. McKenney <paulmck@kernel.org> (RCU viewpoint)
    Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>

commit 2e3a34e9f409ebe83d1af7cd2f49fca7af97dfac
Author: Roberto Sassu <roberto.sassu@huawei.com>
Date:   Mon Apr 27 12:31:28 2020 +0200

    ima: Fix return value of ima_write_policy()
    
    This patch fixes the return value of ima_write_policy() when a new policy
    is directly passed to IMA and the current policy requires appraisal of the
    file containing the policy. Currently, if appraisal is not in ENFORCE mode,
    ima_write_policy() returns 0 and leads user space applications to an
    endless loop. Fix this issue by denying the operation regardless of the
    appraisal mode.
    
    Cc: stable@vger.kernel.org # 4.10.x
    Fixes: 19f8a84713edc ("ima: measure and appraise the IMA policy itself")
    Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
    Reviewed-by: Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
    Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>

commit 53de3b080d5eae31d0de219617155dcc34e7d698
Author: Roberto Sassu <roberto.sassu@huawei.com>
Date:   Mon Apr 27 12:28:56 2020 +0200

    evm: Check also if *tfm is an error pointer in init_desc()
    
    This patch avoids a kernel panic due to accessing an error pointer set by
    crypto_alloc_shash(). It occurs especially when many files require an
    unsupported algorithm, as this increases the likelihood of the following
    race condition:
    
    Task A: *tfm = crypto_alloc_shash() <= error pointer
    Task B: if (*tfm == NULL) <= *tfm is not NULL, use it
    Task B: rc = crypto_shash_init(desc) <= panic
    Task A: *tfm = NULL
    
    This patch uses the IS_ERR_OR_NULL macro to determine whether or not a new
    crypto context must be created.
    
    Cc: stable@vger.kernel.org
    Fixes: d46eb3699502b ("evm: crypto hash replaced by shash")
    Co-developed-by: Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
    Signed-off-by: Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
    Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
    Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>
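
[Editorial note: the check added by the commit above can be sketched in plain C. ERR_PTR/IS_ERR below are simplified stand-ins for the kernel's <linux/err.h> helpers, shown only to illustrate why testing *tfm against NULL alone misses an error pointer left behind by a racing task.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the kernel's <linux/err.h> helpers: a pointer
 * in the top MAX_ERRNO bytes of the address space encodes -errno. */
#define MAX_ERRNO 4095
#define ERR_PTR(err)        ((void *)(uintptr_t)(long)(err))
#define IS_ERR(ptr)         ((uintptr_t)(ptr) >= (uintptr_t)-MAX_ERRNO)
#define IS_ERR_OR_NULL(ptr) ((ptr) == NULL || IS_ERR(ptr))

/* Before the fix: only a NULL *tfm triggered (re)allocation, so an
 * error pointer stored by a racing task was taken as valid and used. */
static int needs_alloc_buggy(void *tfm)
{
    return tfm == NULL;
}

/* After the fix: an error pointer also triggers a fresh allocation. */
static int needs_alloc_fixed(void *tfm)
{
    return IS_ERR_OR_NULL(tfm);
}
```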

commit 0014cc04e8ec077dc482f00c87dfd949cfe2b98f
Author: Roberto Sassu <roberto.sassu@huawei.com>
Date:   Mon Apr 27 12:28:55 2020 +0200

    ima: Set file->f_mode instead of file->f_flags in ima_calc_file_hash()
    
    Commit a408e4a86b36 ("ima: open a new file instance if no read
    permissions") tries to create a new file descriptor to calculate a file
    digest if the file has not been opened with the O_RDONLY flag. However, if
    a new file descriptor cannot be obtained, it sets the FMODE_READ flag in
    file->f_flags instead of file->f_mode.
    
    This patch fixes this issue by replacing f_flags with f_mode as it was
    before that commit.
    
    Cc: stable@vger.kernel.org # 4.20.x
    Fixes: a408e4a86b36 ("ima: open a new file instance if no read permissions")
    Signed-off-by: Roberto Sassu <roberto.sassu@huawei.com>
    Reviewed-by: Goldwyn Rodrigues <rgoldwyn@suse.com>
    Signed-off-by: Mimi Zohar <zohar@linux.ibm.com>


From xen-devel-bounces@lists.xenproject.org Tue May 19 12:21:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 12:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb1F7-0000dW-IX; Tue, 19 May 2020 12:21:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zPhu=7B=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1jb1F6-0000dQ-7A
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 12:21:08 +0000
X-Inumbo-ID: 34951ae8-99cb-11ea-9887-bc764e2007e4
Received: from mail-oi1-x244.google.com (unknown [2607:f8b0:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34951ae8-99cb-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 12:21:02 +0000 (UTC)
Received: by mail-oi1-x244.google.com with SMTP id s198so12118372oie.6
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 05:21:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=8/gZGNNUOMV0cNspyJua+QUzz3j0JQh8jq+6Z4+9XIc=;
 b=xl/C5nmsn8hHP41gZxCIkruWmIRYKYMIGB/O2pJclNNg9Bjms3yD5E94NP3Id5qdco
 GPw/Ynq+Fep0xOXtMn1sLFHVXhzxys4yyoPG/b6kwlJow8Bh6cKofVUI40ByKa7GPtZ+
 Pj4TlLl0GyoBfItBXQoMAPCBva6lUpO7qJPVCb49PEvGZUIgivzZxAq1MzG87JAfJ9rQ
 vzIZgKyyuxBSu5SBr9u2FjMW/yGpop/Fby8yw6HZkcv8JC3hf7FZFNyhwA5TmK4XtPP4
 jFkF3rO48HnEDxGpIN5OwlcFHNZfCvKcOG9NkRIeQ3HfoWZtaDybARA8GEKkSqBBmWIb
 987Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=8/gZGNNUOMV0cNspyJua+QUzz3j0JQh8jq+6Z4+9XIc=;
 b=Zqck0Z8oBMgtA4cZhQxhYGuGdOm+X4FdqV6MeWcOFN2w6x2MFNM3e4DI5BupsdQ1LD
 YLXyQ6dalsVxz0Gp0JPiIaV+SV0r7jDGtGvEzWfHdKnut1TppmTm8/PyM8dlQVrVmyu5
 R5Dei1x41QJDrmS6mhnKyjeM6V4QqPABPyQKqpxRc/Opp7cBQyweQCTlJ4Dokgi07GPt
 9Kgf6Hgj3oY1vWZHh3xWPa3qF1hfZJSifwiMbaKaJjpQiETmXuvyu3GnumpLKfm8rFtz
 XtUSX2WmEzjayLqgmBYDMuXbYj9uNZBl4X1fKOJvFQ6X9s/w3x7Kr4P3J/sGngwXaPze
 DbvQ==
X-Gm-Message-State: AOAM5300sUfB/pq3vzBrh9FyPqctFlDyLYm63gQOE8HkmJ9sSr6QyqD0
 KZ3van7+M5VQk3X2ghatfxsRAm3GiiPSOJOxpm7g3Q==
X-Google-Smtp-Source: ABdhPJx62nq7PczRR5OTY9N7Lsn/a95RAsGm383b2c1f/a8hDbeSIXlAn2Pv9r6UoqPXa+cO48FDwyBRhljHs04F5Bg=
X-Received: by 2002:aca:eb96:: with SMTP id j144mr2711690oih.48.1589890862477; 
 Tue, 19 May 2020 05:21:02 -0700 (PDT)
MIME-Version: 1.0
References: <20200504101443.3165-1-roger.pau@citrix.com>
 <20200511134043.GH2116@perard.uk.xensource.com>
 <20200519112806.GF54375@Air-de-Roger>
In-Reply-To: <20200519112806.GF54375@Air-de-Roger>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Tue, 19 May 2020 13:20:51 +0100
Message-ID: <CAFEAcA-RWR_6OQV1EgeYj0WmE89FDKqcywTpgfrMyr8FrELN+Q@mail.gmail.com>
Subject: Re: [PATCH] xen: fix build without pci passthrough
To: =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 QEMU Developers <qemu-devel@nongnu.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 19 May 2020 at 12:28, Roger Pau Monné <roger.pau@citrix.com> wrote:
>
> On Mon, May 11, 2020 at 02:40:43PM +0100, Anthony PERARD wrote:
> > On Mon, May 04, 2020 at 12:14:43PM +0200, Roger Pau Monne wrote:
> > > diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> > > index 179775db7b..660dd8a008 100644
> > > --- a/hw/xen/xen_pt.h
> > > +++ b/hw/xen/xen_pt.h
> > > @@ -1,6 +1,7 @@
> > >  #ifndef XEN_PT_H
> > >  #define XEN_PT_H
> > >
> > > +#include "qemu/osdep.h"
> >
> > Why do you need osdep?
>
> For CONFIG_XEN_PCI_PASSTHROUGH IIRC.

All .c files should always include osdep as the first include
in the file, and .h files should never include osdep (we note
this in CODING_STYLE.rst).

If you added this #include to fix a compile issue, that would
suggest that there's a .c file somewhere that's missing the
mandatory osdep include. I did a quick eyeball of all the files
that include xen_pt.h, though, and none of them are missing the
osdep include. So I think you should be able to simply drop the
osdep include here. If that produces an error, let us know what
fails and we can work out what's gone wrong.

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Tue May 19 12:54:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 12:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb1kt-0003B8-Tb; Tue, 19 May 2020 12:53:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jb1kt-0003B0-4K
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 12:53:59 +0000
X-Inumbo-ID: cd72ae5c-99cf-11ea-a912-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd72ae5c-99cf-11ea-a912-12813bfff9fa;
 Tue, 19 May 2020 12:53:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BaOutRfSDx+to1hwThP+cQcy/NluPqkZXXsfJTUN10g=; b=3My5/EGXtULfTC3vhMVntNIqZ
 7eV6wzdoOVk0vLD1q4OWX0Ir4Ai+eOJde/mtWeEv2hx7gkEAVvanmrldgz4e/GML1jKufc/Tr6B+q
 Y3pSJ+NDLDP2boA4UPxaYRzqQeljqiu1A7DDFw2dW47bQVzJd+glcgU3/voB3bBW8OfwU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb1kq-0000M1-Of; Tue, 19 May 2020 12:53:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb1kq-00049v-84; Tue, 19 May 2020 12:53:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jb1kq-0003Rl-6I; Tue, 19 May 2020 12:53:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150242-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150242: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=7efd9f3d45480c12328e4419547a98022f7af43a
X-Osstest-Versions-That: xen=475ffdbbf5778329319ef6f7bd6315c163163440
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 12:53:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150242 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150242/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7efd9f3d45480c12328e4419547a98022f7af43a
baseline version:
 xen                  475ffdbbf5778329319ef6f7bd6315c163163440

Last test of basis   150233  2020-05-18 18:00:22 Z    0 days
Testing same since   150242  2020-05-19 10:01:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Eric Shelton <eshelton@pobox.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Wei Liu <wei.liu2@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   475ffdbbf5..7efd9f3d45  7efd9f3d45480c12328e4419547a98022f7af43a -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 19 13:04:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 13:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb1ue-00046z-1P; Tue, 19 May 2020 13:04:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb1uc-00046u-8K
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 13:04:02 +0000
X-Inumbo-ID: 3493d952-99d1-11ea-a913-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3493d952-99d1-11ea-a913-12813bfff9fa;
 Tue, 19 May 2020 13:04:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 90296AD2A;
 Tue, 19 May 2020 13:04:01 +0000 (UTC)
Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Paul Durrant <paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
Date: Tue, 19 May 2020 15:03:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514104416.16657-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 12:44, Paul Durrant wrote:
> --- /dev/null
> +++ b/xen/common/save.c
> @@ -0,0 +1,313 @@
> +/*
> + * save.c: Save and restore PV guest state common to all domain types.
> + *
> + * Copyright Amazon.com Inc. or its affiliates.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/compile.h>
> +#include <xen/save.h>
> +
> +struct domain_context {
> +    struct domain *domain;
> +    const char *name; /* for logging purposes */

With this comment, why ...

> +    struct domain_save_descriptor desc;
> +    size_t len; /* for internal accounting */
> +    union {
> +        struct domain_save_ops *save;
> +        struct domain_load_ops *load;
> +    } ops;
> +    void *priv;
> +    bool log;

... this separate field? Couldn't "no logging" simply be expressed by
name being NULL?

> +int domain_save_end(struct domain_context *c)
> +{
> +    struct domain *d = c->domain;
> +    uint8_t pad[DOMAIN_SAVE_ALIGN] = {};

Preferably moved into the narrower scope, and probably wants to be made
"static const".

> +    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
> +    int rc;
> +
> +    if ( len )
> +    {
> +        rc = domain_save_data(c, pad, len);
> +
> +        if ( rc )
> +            return rc;
> +    }
> +    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));

While you mention the auto-padding as a change in v3, it's not really
clear to me why it was introduced. Could you add half a sentence to
the description clarifying the motivation?
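
[Editorial note: the auto-padding under discussion is simple alignment arithmetic. ROUNDUP and the alignment value below are assumptions chosen for illustration, not Xen's actual definitions.]

```c
#include <assert.h>
#include <stddef.h>

/* Stand-in for Xen's ROUNDUP(); 'align' must be a power of two. */
#define ROUNDUP(x, align) (((x) + (align) - 1) & ~((size_t)(align) - 1))
#define DOMAIN_SAVE_ALIGN 8   /* assumed value, for illustration only */

/* Number of zero bytes domain_save_end() would append so that the
 * accumulated length ends up DOMAIN_SAVE_ALIGN-aligned. */
static size_t pad_len(size_t len)
{
    return ROUNDUP(len, DOMAIN_SAVE_ALIGN) - len;
}
```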

> +int domain_save(struct domain *d, struct domain_save_ops *ops, void *priv,
> +                bool dry_run)
> +{
> +    struct domain_context c = {
> +        .domain = d,
> +        .ops.save = ops,
> +        .priv = priv,
> +        .log = !dry_run,
> +    };
> +    static struct domain_save_header h = {

const?

> +        .magic = DOMAIN_SAVE_MAGIC,
> +        .xen_major = XEN_VERSION,
> +        .xen_minor = XEN_SUBVERSION,
> +        .version = DOMAIN_SAVE_VERSION,
> +    };
> +    struct domain_save_end e = {};

const? (static would likely be quite pointless here)

> +int domain_load(struct domain *d, struct domain_load_ops *ops, void *priv)
> +{
> +    struct domain_context c = {
> +        .domain = d,
> +        .ops.load = ops,
> +        .priv = priv,
> +        .log = true,
> +    };
> +    unsigned int instance;
> +    struct domain_save_header h;
> +    int rc;
> +
> +    ASSERT(d != current->domain);
> +
> +    rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
> +    if ( rc )
> +        return rc;
> +
> +    rc = DOMAIN_LOAD_ENTRY(HEADER, &c, &instance, &h, sizeof(h));
> +    if ( rc )
> +        return rc;
> +
> +    if ( instance || h.magic != DOMAIN_SAVE_MAGIC ||
> +         h.version != DOMAIN_SAVE_VERSION )
> +        return -EINVAL;
> +
> +    domain_pause(d);
> +
> +    for (;;)
> +    {
> +        unsigned int i;
> +        domain_load_handler load;
> +
> +        rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
> +        if ( rc )
> +            return rc;
> +
> +        rc = -EINVAL;
> +
> +        if ( c.desc.typecode == DOMAIN_SAVE_CODE(END) )
> +        {
> +            struct domain_save_end e;
> +
> +            rc = DOMAIN_LOAD_ENTRY(END, &c, &instance, NULL, sizeof(e));

Without using &e here I don't see how you can get away without an
"unused variable" warning.

> --- /dev/null
> +++ b/xen/include/public/save.h
> @@ -0,0 +1,80 @@
> +/*
> + * save.h
> + *
> + * Structure definitions for common PV/HVM domain state that is held by
> + * Xen and must be saved along with the domain's memory.
> + *
> + * Copyright Amazon.com Inc. or its affiliates.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + */
> +
> +#ifndef XEN_PUBLIC_SAVE_H
> +#define XEN_PUBLIC_SAVE_H
> +
> +#include "xen.h"
> +
> +#if defined(__XEN__) || defined(__XEN_TOOLS__)

Move #include down below here?

> +/* Entry data is preceded by a descriptor */
> +struct domain_save_descriptor {
> +    uint16_t typecode;
> +
> +    /*
> +     * Instance number of the entry (since there may be multiple of some
> +     * types of entry).

With a German bias I'm inclined to ask: "entries"?

> +     */
> +    uint16_t instance;
> +
> +    /* Entry length not including this descriptor */
> +    uint32_t length;
> +};
> +
> +/*
> + * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
> + * binds these things together.
> + */
> +#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
> +    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };

Perhaps point out in the comment that this type is not meant to
have instantiations?

> +#define DOMAIN_SAVE_CODE(_x) \
> +    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)(0))->c))
> +#define DOMAIN_SAVE_TYPE(_x) \
> +    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)(0))->t)

Feels like I already mentioned the oddity of having parentheses
around a single token that's not a macro parameter name.

> --- /dev/null
> +++ b/xen/include/xen/save.h
> @@ -0,0 +1,165 @@
> +/*
> + * save.h: support routines for save/restore
> + *
> + * Copyright Amazon.com Inc. or its affiliates.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef XEN_SAVE_H
> +#define XEN_SAVE_H
> +
> +#include <xen/init.h>
> +#include <xen/sched.h>
> +#include <xen/types.h>
> +
> +#include <public/save.h>
> +
> +struct domain_context;
> +
> +int domain_save_begin(struct domain_context *c, unsigned int typecode,
> +                      const char *name, unsigned int instance);
> +
> +#define DOMAIN_SAVE_BEGIN(_x, _c, _instance) \
> +    domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance))

As per prior conversation I would have expected no leading underscores
here anymore, and no parenthesization of what is still _c. More of
these further down.

> +static inline int domain_load_entry(struct domain_context *c,
> +                                    unsigned int typecode, const char *name,
> +                                    unsigned int *instance, void *dst,
> +                                    size_t len)
> +{
> +    int rc;
> +
> +    rc = domain_load_begin(c, typecode, name, instance);

For some reason I've spotted this only here: Why is instance a pointer
parameter of the function, when typecode is a value? Both live next to
one another in struct domain_save_descriptor, and hence instance could
be retrieved at the same time as typecode.

> +/*
> + * Register save and restore handlers. Save handlers will be invoked
> + * in order of DOMAIN_SAVE_CODE().
> + */
> +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
> +    static int __init __domain_register_##_x##_save_restore(void) \
> +    {                                                             \
> +        domain_register_save_type(                                \
> +            DOMAIN_SAVE_CODE(_x),                                 \
> +            #_x,                                                  \
> +            &(_save),                                             \
> +            &(_load));                                            \
> +                                                                  \
> +        return 0;                                                 \
> +    }                                                             \
> +    __initcall(__domain_register_##_x##_save_restore);

I'm puzzled by part of the comment: Invoking by save code looks
reasonable for the saving side (albeit END doesn't match this rule
afaics), but is this going to be good enough for the consuming side?
There may be dependencies between types, and with fixed ordering
there may be no way to insert a depended upon type ahead of an
already defined one (at least as long as the codes are meant to be
stable).

> +/*
> + * Entry points:
> + *
> + * ops:     These are callback functions provided by the caller that will
> + *          be used to write to (in the save case) or read from (in the
> + *          load case) the context buffer. See above for more detail.
> + * priv:    This is a pointer that will be passed to the copy function to
> + *          allow it to identify the context buffer and the current state
> + *          of the save or load operation.
> + * dry_run: If this is set then the caller of domain_save() is only trying
> + *          to acquire the total size of the data, not the data itself.
> + *          In this case the caller may supply different ops to avoid doing
> + *          unnecessary work.
> + */
> +int domain_save(struct domain *d, struct domain_save_ops *ops, void *priv,
> +                bool dry_run);
> +int domain_load(struct domain *d, struct domain_load_ops *ops, void *priv);

I guess ops want to be pointer to const in both cases?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 13:49:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 13:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb2cU-0007Uc-GQ; Tue, 19 May 2020 13:49:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb2cU-0007UX-30
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 13:49:22 +0000
X-Inumbo-ID: 8a9a1b45-99d7-11ea-a91a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8a9a1b45-99d7-11ea-a91a-12813bfff9fa;
 Tue, 19 May 2020 13:49:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C84A0ABE3;
 Tue, 19 May 2020 13:49:22 +0000 (UTC)
Subject: Re: [PATCH v3 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
To: Paul Durrant <paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-3-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7455ebb7-89c8-75f0-5904-aec344c8b85f@suse.com>
Date: Tue, 19 May 2020 15:49:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514104416.16657-3-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 12:44, Paul Durrant wrote:
> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -1129,6 +1129,43 @@ struct xen_domctl_vuart_op {
>                                   */
>  };
>  
> +/*
> + * XEN_DOMCTL_getdomaincontext
> + * ---------------------------
> + *
> + * buffer (IN):   The buffer into which the context data should be
> + *                copied, or NULL to query the buffer size that should
> + *                be allocated.
> + * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
> + *                zero, and the value passed out will be the size of the
> + *                buffer to allocate.
> + *                If 'buffer' is non-NULL then the value passed in must
> + *                be the size of the buffer into which data may be copied.
> + *                The value passed out will be the size of data written.
> + */
> +struct xen_domctl_getdomaincontext {
> +    uint32_t size;
> +    uint32_t pad;

This and its counterpart don't seem to get checked to be zero.
While an option for a domctl, any desire to use the field in
the future would then require an interface version bump.

Jan
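
[Editorial note: the two-call convention documented in the quoted comment above (query the size with a NULL buffer, then fetch into an allocated one) can be modeled in plain C. getdomaincontext() below is a hypothetical user-space stand-in for the domctl, not Xen code.]

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical stand-in for the domctl's calling convention. */
static int getdomaincontext(void *buffer, uint32_t *size)
{
    static const char ctx[] = "example-context";  /* stand-in payload */

    if (buffer == NULL) {
        if (*size != 0)
            return -1;            /* size must be passed in as zero */
        *size = sizeof(ctx);      /* report the size to allocate */
        return 0;
    }
    if (*size < sizeof(ctx))
        return -1;                /* caller's buffer is too small */
    memcpy(buffer, ctx, sizeof(ctx));
    *size = sizeof(ctx);          /* report the size actually written */
    return 0;
}

/* Typical caller: query, then fetch. Returns 1 on success. */
static int demo(void)
{
    uint32_t size = 0;
    char buf[64];

    if (getdomaincontext(NULL, &size) != 0)
        return 0;
    if (size == 0 || size > sizeof(buf))
        return 0;
    if (getdomaincontext(buf, &size) != 0)
        return 0;
    return strcmp(buf, "example-context") == 0;
}
```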


From xen-devel-bounces@lists.xenproject.org Tue May 19 14:05:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:05:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb2rh-0000m4-R8; Tue, 19 May 2020 14:05:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jb2rg-0000lz-AQ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:05:04 +0000
X-Inumbo-ID: bba42412-99d9-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x433.google.com (unknown [2a00:1450:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bba42412-99d9-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 14:05:02 +0000 (UTC)
Received: by mail-wr1-x433.google.com with SMTP id l18so16073412wrn.6
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 07:05:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=mzsuszs7g+FSF1DaIiHQwcq2ir70+uMpmZMBvtVzG64=;
 b=hmmY/GMsRIka7etjd5OU0d0CfnGWqFHOGmIBFIvniXFlJXlAOpl7dGYWBVp/vCmD4/
 aN4IFMALHn5bnc7rLlVBqGm4Y/qX5RM7z8dbJdP6/4NOnMjcDoWqscz1jV5VeNxMfF+B
 kKChkIQ2NiJ+Q4dsUo3xM+/zusrHfqs7W7FZD6F7SvTN3FkL3v0oUNbUo0bRn6FGIlmu
 89YVVHGcu8wGGVtaewz31rBbZui11dEi+fD35tpRd6UvFzdjVvbLT5A5+611SPgXWz3D
 lYrqgpee5jKR1BX9+ffstkjQw20+otJ4CY9jKFIZlN/v8ofjXnZzZJNN81/je/NnGO+n
 WcwQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=mzsuszs7g+FSF1DaIiHQwcq2ir70+uMpmZMBvtVzG64=;
 b=knwEQ/b1ea19NX2Pbj3p9EGlcbCIF1xcson4+0BDWy/j4b3Gnc9/Us2mnLCoRR2R2i
 i1DuxynZJZ29C5lISNO8VvDe4FjzFep+2/aim6Awx3UjbgiUDUuxwnNON5VCddmHYQjI
 xjPFMSbKZVCPav9cLcdEz8jBdW5Wu0CGAE1FWdbJJiPhCQnnITnboxKvcvLRMp4b7erM
 RGNbPlNA347AoZUG8/rkpVHvcbi0lf5VLJGY4J5au1rQ+550fBMbJ1KU2h1kEWw4F6Wm
 qkwvVhbGbNUVJCIK+m1lTpoDioqU+YteJr8tubZ3phA1nkH8fL4BXWKG4JmDU49ob/8J
 d1YQ==
X-Gm-Message-State: AOAM532kkZvgBcLEc/ncp1LBF9kMhS+4ZOqsBhYLAo6o+6evIf55BZ1j
 fFJoejLa9fLq+/IgH4//yXI=
X-Google-Smtp-Source: ABdhPJwHMsjjGZveH+YO9QansC3pNp/VvazRpRyoDudSqigjBb+N71fnkFxOCglfTy2XBJJtnlM+bw==
X-Received: by 2002:adf:fb0f:: with SMTP id c15mr27421298wrr.410.1589897101872; 
 Tue, 19 May 2020 07:05:01 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id v5sm20995010wrr.93.2020.05.19.07.05.00
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 07:05:01 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
 <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
In-Reply-To: <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
Subject: RE: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Date: Tue, 19 May 2020 15:04:59 +0100
Message-ID: <000401d62de6$7cb6efa0$7624cee0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH3ss8UVmWNzdPcdrMljFFlRuFHxQJ5sln4ApsafleoQ7loAA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 14:04
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>; Wei Liu <wl@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger
> Pau Monné <roger.pau@citrix.com>
> Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for save/restore of 'domain' context
> 
> On 14.05.2020 12:44, Paul Durrant wrote:
> > --- /dev/null
> > +++ b/xen/common/save.c
> > @@ -0,0 +1,313 @@
> > +/*
> > + * save.c: Save and restore PV guest state common to all domain types.
> > + *
> > + * Copyright Amazon.com Inc. or its affiliates.
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> > + * more details.
> > + *
> > + * You should have received a copy of the GNU General Public License along with
> > + * this program; If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#include <xen/compile.h>
> > +#include <xen/save.h>
> > +
> > +struct domain_context {
> > +    struct domain *domain;
> > +    const char *name; /* for logging purposes */
> 
> With this comment, why ...
> 
> > +    struct domain_save_descriptor desc;
> > +    size_t len; /* for internal accounting */
> > +    union {
> > +        struct domain_save_ops *save;
> > +        struct domain_load_ops *load;
> > +    } ops;
> > +    void *priv;
> > +    bool log;
> 
> ... this separate field? Couldn't "no logging" simply be expressed by
> name being NULL?
> 

Ok. The log bool predated the name pointer so, yes, these could be
combined.

> > +int domain_save_end(struct domain_context *c)
> > +{
> > +    struct domain *d = c->domain;
> > +    uint8_t pad[DOMAIN_SAVE_ALIGN] = {};
> 
> Preferably moved into the more narrow scope, and probably wants making
> "static const".
> 

Ok.

> > +    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
> > +    int rc;
> > +
> > +    if ( len )
> > +    {
> > +        rc = domain_save_data(c, pad, len);
> > +
> > +        if ( rc )
> > +            return rc;
> > +    }
> > +    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));
> 
> While you mention the auto-padding as a change in v3, it's not really
> clear to me why it was introduced. Could you add half a sentence to
> the description clarifying the motivation?
> 

Julien favoured it, and it does seem like a good idea, as it avoids the
need to put explicit trailing 'pad' fields in entries. I'll add to the
commit comment to explain.

> > +int domain_save(struct domain *d, struct domain_save_ops *ops, void *priv,
> > +                bool dry_run)
> > +{
> > +    struct domain_context c = {
> > +        .domain = d,
> > +        .ops.save = ops,
> > +        .priv = priv,
> > +        .log = !dry_run,
> > +    };
> > +    static struct domain_save_header h = {
> 
> const?
> 

Yes, it can be.

> > +        .magic = DOMAIN_SAVE_MAGIC,
> > +        .xen_major = XEN_VERSION,
> > +        .xen_minor = XEN_SUBVERSION,
> > +        .version = DOMAIN_SAVE_VERSION,
> > +    };
> > +    struct domain_save_end e = {};
> 
> const? (static would likely be quite pointless here)
> 

Ok.

> > +int domain_load(struct domain *d, struct domain_load_ops *ops, void *priv)
> > +{
> > +    struct domain_context c = {
> > +        .domain = d,
> > +        .ops.load = ops,
> > +        .priv = priv,
> > +        .log = true,
> > +    };
> > +    unsigned int instance;
> > +    struct domain_save_header h;
> > +    int rc;
> > +
> > +    ASSERT(d != current->domain);
> > +
> > +    rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc = DOMAIN_LOAD_ENTRY(HEADER, &c, &instance, &h, sizeof(h));
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( instance || h.magic != DOMAIN_SAVE_MAGIC ||
> > +         h.version != DOMAIN_SAVE_VERSION )
> > +        return -EINVAL;
> > +
> > +    domain_pause(d);
> > +
> > +    for (;;)
> > +    {
> > +        unsigned int i;
> > +        domain_load_handler load;
> > +
> > +        rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
> > +        if ( rc )
> > +            return rc;
> > +
> > +        rc = -EINVAL;
> > +
> > +        if ( c.desc.typecode == DOMAIN_SAVE_CODE(END) )
> > +        {
> > +            struct domain_save_end e;
> > +
> > +            rc = DOMAIN_LOAD_ENTRY(END, &c, &instance, NULL, sizeof(e));
> 
> Without using &e here I don't see how you can get away without an
> "unused variable" warning.

Hmm. I definitely don't get a warning, but yes, this ought to be changed.

> 
> > --- /dev/null
> > +++ b/xen/include/public/save.h
> > @@ -0,0 +1,80 @@
> > +/*
> > + * save.h
> > + *
> > + * Structure definitions for common PV/HVM domain state that is held by
> > + * Xen and must be saved along with the domain's memory.
> > + *
> > + * Copyright Amazon.com Inc. or its affiliates.
> > + *
> > + * Permission is hereby granted, free of charge, to any person obtaining a copy
> > + * of this software and associated documentation files (the "Software"), to
> > + * deal in the Software without restriction, including without limitation the
> > + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> > + * sell copies of the Software, and to permit persons to whom the Software is
> > + * furnished to do so, subject to the following conditions:
> > + *
> > + * The above copyright notice and this permission notice shall be included in
> > + * all copies or substantial portions of the Software.
> > + *
> > + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> > + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> > + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> > + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> > + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> > + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> > + * DEALINGS IN THE SOFTWARE.
> > + */
> > +
> > +#ifndef XEN_PUBLIC_SAVE_H
> > +#define XEN_PUBLIC_SAVE_H
> > +
> > +#include "xen.h"
> > +
> > +#if defined(__XEN__) || defined(__XEN_TOOLS__)
> 
> Move #include down below here?
> 

Sure.

> > +/* Entry data is preceded by a descriptor */
> > +struct domain_save_descriptor {
> > +    uint16_t typecode;
> > +
> > +    /*
> > +     * Instance number of the entry (since there may by multiple of some
> > +     * types of entry).
> 
> With a German bias I'm inclined to ask: "entries"?
> 

Not sure. It's still understandable, so I'm happy to change it. Also s/by/be.

> > +     */
> > +    uint16_t instance;
> > +
> > +    /* Entry length not including this descriptor */
> > +    uint32_t length;
> > +};
> > +
> > +/*
> > + * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
> > + * binds these things together.
> > + */
> > +#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
> > +    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
> 
> Perhaps point out in the comment that this type is not meant to
> have instantiations?

Ok.

> 
> > +#define DOMAIN_SAVE_CODE(_x) \
> > +    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)(0))->c))
> > +#define DOMAIN_SAVE_TYPE(_x) \
> > +    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)(0))->t)
> 
> Feels like I already mentioned the oddity of having parentheses
> around a single token that's not a macro parameter name.
> 

Ok. Missed that.

> > --- /dev/null
> > +++ b/xen/include/xen/save.h
> > @@ -0,0 +1,165 @@
> > +/*
> > + * save.h: support routines for save/restore
> > + *
> > + * Copyright Amazon.com Inc. or its affiliates.
> > + *
> > + * This program is free software; you can redistribute it and/or modify it
> > + * under the terms and conditions of the GNU General Public License,
> > + * version 2, as published by the Free Software Foundation.
> > + *
> > + * This program is distributed in the hope it will be useful, but WITHOUT
> > + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> > + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> > + * more details.
> > + *
> > + * You should have received a copy of the GNU General Public License along with
> > + * this program; If not, see <http://www.gnu.org/licenses/>.
> > + */
> > +
> > +#ifndef XEN_SAVE_H
> > +#define XEN_SAVE_H
> > +
> > +#include <xen/init.h>
> > +#include <xen/sched.h>
> > +#include <xen/types.h>
> > +
> > +#include <public/save.h>
> > +
> > +struct domain_context;
> > +
> > +int domain_save_begin(struct domain_context *c, unsigned int typecode,
> > +                      const char *name, unsigned int instance);
> > +
> > +#define DOMAIN_SAVE_BEGIN(_x, _c, _instance) \
> > +    domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance))
> 
> As per prior conversation I would have expected no leading underscores
> here anymore, and no parenthesization of what is still _c. More of
> these further down.

What's wrong with leading underscores in macro arguments? They can't
pollute any namespace. Also, I prefer to keep the parentheses for
arguments.

>=20
> > +static inline int domain_load_entry(struct domain_context *c,
> > +                                    unsigned int typecode, const char *name,
> > +                                    unsigned int *instance, void *dst,
> > +                                    size_t len)
> > +{
> > +    int rc;
> > +
> > +    rc = domain_load_begin(c, typecode, name, instance);
> 
> For some reason I've spotted this only here: Why is instance a pointer
> parameter of the function, when typecode is a value? Both live next to
> one another in struct domain_save_descriptor, and hence instance could
> be retrieved at the same time as typecode.
> 

Because the typecode is known a priori: it has to be known for the
correct handler to be invoked. It is only provided here for verification
purposes. I could have provided the instance as an argument to the load
handler, but I prefer making the interactions for save and load more
symmetric.

> > +/*
> > + * Register save and restore handlers. Save handlers will be invoked
> > + * in order of DOMAIN_SAVE_CODE().
> > + */
> > +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
> > +    static int __init __domain_register_##_x##_save_restore(void) \
> > +    {                                                             \
> > +        domain_register_save_type(                                \
> > +            DOMAIN_SAVE_CODE(_x),                                 \
> > +            #_x,                                                  \
> > +            &(_save),                                             \
> > +            &(_load));                                            \
> > +                                                                  \
> > +        return 0;                                                 \
> > +    }                                                             \
> > +    __initcall(__domain_register_##_x##_save_restore);
> 
> I'm puzzled by part of the comment: Invoking by save code looks
> reasonable for the saving side (albeit END doesn't match this rule
> afaics), but is this going to be good enough for the consuming side?

No, this only relates to the save side, which is why the comment says
'Save handlers'. I do note that it would be more consistent to use
'load' rather than 'restore' here, though.

> There may be dependencies between types, and with fixed ordering
> there may be no way to insert a depended upon type ahead of an
> already defined one (at least as long as the codes are meant to be
> stable).
> 

The ordering of load handlers is determined by the stream. I'll add a
sentence saying that.

> > +/*
> > + * Entry points:
> > + *
> > + * ops:     These are callback functions provided by the caller that will
> > + *          be used to write to (in the save case) or read from (in the
> > + *          load case) the context buffer. See above for more detail.
> > + * priv:    This is a pointer that will be passed to the copy function to
> > + *          allow it to identify the context buffer and the current state
> > + *          of the save or load operation.
> > + * dry_run: If this is set then the caller of domain_save() is only trying
> > + *          to acquire the total size of the data, not the data itself.
> > + *          In this case the caller may supply different ops to avoid doing
> > + *          unnecessary work.
> > + */
> > +int domain_save(struct domain *d, struct domain_save_ops *ops, void *priv,
> > +                bool dry_run);
> > +int domain_load(struct domain *d, struct domain_load_ops *ops, void *priv);
> 
> I guess ops want to be pointer to const in both cases?
> 

Yes, it can be.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 19 14:07:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb2uE-0000tD-Bq; Tue, 19 May 2020 14:07:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb2uD-0000t5-AB
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:07:41 +0000
X-Inumbo-ID: 19272469-99da-11ea-a91d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 19272469-99da-11ea-a91d-12813bfff9fa;
 Tue, 19 May 2020 14:07:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B8783AC84;
 Tue, 19 May 2020 14:07:41 +0000 (UTC)
Subject: Re: [PATCH v3 4/5] common/domain: add a domain context record for
 shared_info...
To: Paul Durrant <paul@xen.org>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-5-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bbebc62f-8066-a60e-5717-58e46cd2d172@suse.com>
Date: Tue, 19 May 2020 16:07:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514104416.16657-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 12:44, Paul Durrant wrote:
> @@ -61,6 +62,76 @@ static void dump_header(void)
>  
>  }
>  
> +static void print_binary(const char *prefix, void *val, size_t size,

const also for val?

> +                         const char *suffix)
> +{
> +    printf("%s", prefix);
> +
> +    while (size--)

Judging from style elsewhere, you look to be missing two blanks
here.

> +    {
> +        uint8_t octet = *(uint8_t *)val++;

Following the above, it would also be better not to cast const away here.

> +        unsigned int i;
> +
> +        for ( i = 0; i < 8; i++ )
> +        {
> +            printf("%u", octet & 1);
> +            octet >>= 1;
> +        }
> +    }
> +
> +    printf("%s", suffix);
> +}
> +
> +static void dump_shared_info(void)
> +{
> +    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
> +    shared_info_any_t *info;
> +    unsigned int i;
> +
> +    GET_PTR(s);
> +
> +    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
> +           s->has_32bit_shinfo ? "true" : "false", s->buffer_size);
> +
> +    info = (shared_info_any_t *)s->buffer;
> +
> +#define GET_FIELD_PTR(_f) \
> +    (s->has_32bit_shinfo ? (void *)&(info->x32._f) : (void *)&(info->x64._f))

Better cast to const void * ?

> +#define GET_FIELD_SIZE(_f) \
> +    (s->has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
> +#define GET_FIELD(_f) \
> +    (s->has_32bit_shinfo ? info->x32._f : info->x64._f)
> +
> +    /* Array lengths are the same for 32-bit and 64-bit shared info */

Not really, no:

    xen_ulong_t evtchn_pending[sizeof(xen_ulong_t) * 8];
    xen_ulong_t evtchn_mask[sizeof(xen_ulong_t) * 8];

> @@ -167,12 +238,14 @@ int main(int argc, char **argv)
>          if ( (typecode < 0 || typecode == desc->typecode) &&
>               (instance < 0 || instance == desc->instance) )
>          {
> +
>              printf("[%u] type: %u instance: %u length: %u\n", entry++,
>                     desc->typecode, desc->instance, desc->length);

Stray insertion of a blank line?

> @@ -1649,6 +1650,65 @@ int continue_hypercall_on_cpu(
>      return 0;
>  }
>  
> +static int save_shared_info(const struct domain *d, struct domain_context *c,
> +                            bool dry_run)
> +{
> +    struct domain_shared_info_context ctxt = { .buffer_size = PAGE_SIZE };

Why not sizeof(shared_info), utilizing the zero padding on the
receiving side?

> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    int rc;
> +
> +    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
> +    if ( rc )
> +        return rc;
> +
> +#ifdef CONFIG_COMPAT
> +    if ( !dry_run )
> +        ctxt.has_32bit_shinfo = has_32bit_shinfo(d);
> +#endif

Nothing will go wrong without the if(), I suppose? Better drop it
then? It could then also easily be part of the initializer of ctxt.

> +    rc = domain_save_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
> +    if ( rc )
> +        return rc;
> +
> +    return domain_save_end(c);
> +}
> +
> +static int load_shared_info(struct domain *d, struct domain_context *c)
> +{
> +    struct domain_shared_info_context ctxt;
> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    unsigned int i;
> +    int rc;
> +
> +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> +    if ( rc || i ) /* expect only a single instance */
> +        return rc;
> +
> +    rc = domain_load_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( ctxt.pad[0] || ctxt.pad[1] || ctxt.pad[2] ||
> +         ctxt.buffer_size != PAGE_SIZE )
> +        return -EINVAL;
> +
> +#ifdef CONFIG_COMPAT
> +    d->arch.has_32bit_shinfo = ctxt.has_32bit_shinfo;
> +#endif

There's nothing wrong with using has_32bit_shinfo(d) here as well.

> --- a/xen/include/public/save.h
> +++ b/xen/include/public/save.h
> @@ -73,7 +73,16 @@ struct domain_save_header {
>  };
>  DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
>  
> -#define DOMAIN_SAVE_CODE_MAX 1
> +struct domain_shared_info_context {
> +    uint8_t has_32bit_shinfo;
> +    uint8_t pad[3];

32-(or 16-)bit flags, with just a single bit used for the purpose?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 14:12:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:12:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb2yS-0001iY-Qg; Tue, 19 May 2020 14:12:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pGb=7B=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jb2yQ-0001iT-Qp
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:12:02 +0000
X-Inumbo-ID: a82fd70e-99da-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a82fd70e-99da-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 14:11:39 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: n/6oX6cGiOaqDPcC/tFnbraeFQKrlsyekU4Rv4esDDmYTujhncFqwgpA6hhyjYn3AK7AuG2g9c
 Yh4mGtQ8iDC4DyHQ+esxSQp4LrehTvUYsI/MZ5nchtA9PhaZ/yWin9Tx+nNqQYtPKVuIYRtwXf
 +v7tr1/jOVIAUhSw/G3VuwmZ3eKwFoJTvhwYfL8VdbDPFWVUs33NoFrTeQG76kWa6QHweH+geq
 jPlzWLL/2kln+QfGWEZS2JUmHQGZI/wVYjpv681o0U5RFu0w0sWYR1TRibhy00kL74dyAPJbsz
 M/4=
X-SBRS: 2.7
X-MesageID: 18593141
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18593141"
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Jan Beulich <jbeulich@suse.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
Date: Tue, 19 May 2020 15:11:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 09:34, Jan Beulich wrote:
> On 18.05.2020 17:38, Andrew Cooper wrote:
>> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>>          return;
>>  
>> +    /*
>> +     * Xen doesn't have reserved bits in its pagetables, nor do we permit PV guests to
>> +     * write any.  Such entries would be vulnerable to the L1TF sidechannel.
>> +     *
>> +     * The only logic which intentionally sets reserved bits is the shadow
>> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
>> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
>> +     * than here.
> What about SH_L1E_MAGIC and sh_l1e_gnp()? The latter gets used by
> _sh_propagate() without visible restriction to HVM.

SH_L1E_MAGIC looks to be redundant with SH_L1E_MMIO_MAGIC. 
sh_l1e_mmio() is the only path which ever creates an entry like that.

sh_l1e_gnp() is a very well hidden use of reserved bits, but surely
can't be used for PV guests, as there doesn't appear to be anything to
turn the resulting fault back into a plain not-present.

> And of course every time I look at this code I wonder how we can
> get away with (quoting a comment) "We store 28 bits of GFN in
> bits 4:32 of the entry." Do we have a hidden restriction
> somewhere guaranteeing that guests won't have (emulated MMIO)
> GFNs above 1Tb when run in shadow mode?

I've raised that several times before.  It's broken.

Given that shadow frames are limited to 44 bits anyway (and not yet
levelled safely in the migration stream), my suggestion for fixing this
was just to use one extra nibble for the extra 4 bits and call it done.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 19 14:15:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:15:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb31X-0001sE-CC; Tue, 19 May 2020 14:15:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jb31W-0001s8-UQ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:15:14 +0000
X-Inumbo-ID: 27e46618-99db-11ea-a91d-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 27e46618-99db-11ea-a91d-12813bfff9fa;
 Tue, 19 May 2020 14:15:13 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: co/52IONfVfY3pD0MBtpGnSbgSNqPj6wROJ39RkFZOsMWttiRHMLfFhEoh2VKpSiTQwBXDMnd0
 fViTxHl1iFXQE8bseAtq4QAmQbDffPEW60yj5pvjfCIR3Wg73tPi17muqUYWCDv8QA3+0ceaCZ
 fvkSMY6GpGcMq1oQvMdgOjPVGwAQ12fy2JOtunh0yf04e+JzFnNczo8lkp/Flo9WDBJhJ3Owt7
 FG5g+6p44b/DokCsHgO+dUOF+d0foZVx7i64X47mJGANiWMegq1YnUpGJ4zBhGNi6ArEZxwkYP
 1vQ=
X-SBRS: 2.7
X-MesageID: 18250272
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18250272"
Date: Tue, 19 May 2020 16:15:00 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Peter Maydell <peter.maydell@linaro.org>
Subject: Re: [PATCH] xen: fix build without pci passthrough
Message-ID: <20200519141500.GG54375@Air-de-Roger>
References: <20200504101443.3165-1-roger.pau@citrix.com>
 <20200511134043.GH2116@perard.uk.xensource.com>
 <20200519112806.GF54375@Air-de-Roger>
 <CAFEAcA-RWR_6OQV1EgeYj0WmE89FDKqcywTpgfrMyr8FrELN+Q@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CAFEAcA-RWR_6OQV1EgeYj0WmE89FDKqcywTpgfrMyr8FrELN+Q@mail.gmail.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 QEMU Developers <qemu-devel@nongnu.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 01:20:51PM +0100, Peter Maydell wrote:
> On Tue, 19 May 2020 at 12:28, Roger Pau Monné <roger.pau@citrix.com> wrote:
> >
> > On Mon, May 11, 2020 at 02:40:43PM +0100, Anthony PERARD wrote:
> > > On Mon, May 04, 2020 at 12:14:43PM +0200, Roger Pau Monne wrote:
> > > > diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
> > > > index 179775db7b..660dd8a008 100644
> > > > --- a/hw/xen/xen_pt.h
> > > > +++ b/hw/xen/xen_pt.h
> > > > @@ -1,6 +1,7 @@
> > > >  #ifndef XEN_PT_H
> > > >  #define XEN_PT_H
> > > >
> > > > +#include "qemu/osdep.h"
> > >
> > > Why do you need osdep?
> >
> > For CONFIG_XEN_PCI_PASSTHROUGH IIRC.
> 
> All .c files should always include osdep as the first include
> in the file, and .h files should never include osdep (we note
> this in CODING_STYLE.rst).
> 
> If you added this #include to fix a compile issue that would
> suggest that there's a .c file somewhere that's missing the
> mandatory osdep include. I did a quick eyeball of all the files
> that include xen_pt.h, though, and none of them are missing the
> osdep include. So I think you should be able to simply drop the
> osdep include here. If that produces an error, let us know what
> fails and we can work out what's gone wrong.

My bad, I didn't know about this rule and just looked up where
CONFIG_XEN_PCI_PASSTHROUGH was defined in order to include it. Will
remove it in v2.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 14:24:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:24:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb39l-0002jm-9a; Tue, 19 May 2020 14:23:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb39k-0002jh-6D
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:23:44 +0000
X-Inumbo-ID: 570d2cb2-99dc-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 570d2cb2-99dc-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 14:23:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A8892AD80;
 Tue, 19 May 2020 14:23:43 +0000 (UTC)
Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: paul@xen.org
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
 <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
 <000401d62de6$7cb6efa0$7624cee0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <080a1fa3-eb1e-e3b2-c52e-5c7ffdabc6eb@suse.com>
Date: Tue, 19 May 2020 16:23:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <000401d62de6$7cb6efa0$7624cee0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 16:04, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 19 May 2020 14:04
>>
>> On 14.05.2020 12:44, Paul Durrant wrote:
>>> --- /dev/null
>>> +++ b/xen/include/xen/save.h
>>> @@ -0,0 +1,165 @@
>>> +/*
>>> + * save.h: support routines for save/restore
>>> + *
>>> + * Copyright Amazon.com Inc. or its affiliates.
>>> + *
>>> + * This program is free software; you can redistribute it and/or modify it
>>> + * under the terms and conditions of the GNU General Public License,
>>> + * version 2, as published by the Free Software Foundation.
>>> + *
>>> + * This program is distributed in the hope it will be useful, but WITHOUT
>>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>>> + * more details.
>>> + *
>>> + * You should have received a copy of the GNU General Public License along with
>>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>>> + */
>>> +
>>> +#ifndef XEN_SAVE_H
>>> +#define XEN_SAVE_H
>>> +
>>> +#include <xen/init.h>
>>> +#include <xen/sched.h>
>>> +#include <xen/types.h>
>>> +
>>> +#include <public/save.h>
>>> +
>>> +struct domain_context;
>>> +
>>> +int domain_save_begin(struct domain_context *c, unsigned int typecode,
>>> +                      const char *name, unsigned int instance);
>>> +
>>> +#define DOMAIN_SAVE_BEGIN(_x, _c, _instance) \
>>> +    domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance))
>>
>> As per prior conversation I would have expected no leading underscores
>> here anymore, and no parenthesization of what is still _c. More of
>> these further down.
> 
> What's wrong with leading underscores in macro arguments? They can't
> pollute any namespace.

They can still hide file scope variables legitimately named
this way (which may get accessed by a macro without being a
macro argument). The wording of the standard is quite clear:
"All identifiers that begin with an underscore are always
reserved for use as identifiers with file scope in both the
ordinary and tag name spaces."

> Also, I prefer to keep the parentheses for arguments.

More code churn then, once we hopefully standardize on the less
obfuscating variant without them.

>>> +static inline int domain_load_entry(struct domain_context *c,
>>> +                                    unsigned int typecode, const char *name,
>>> +                                    unsigned int *instance, void *dst,
>>> +                                    size_t len)
>>> +{
>>> +    int rc;
>>> +
>>> +    rc = domain_load_begin(c, typecode, name, instance);
>>
>> For some reason I've spotted this only here: Why is instance a pointer
>> parameter of the function, when typecode is a value? Both live next to
>> one another in struct domain_save_descriptor, and hence instance could
>> be retrieved at the same time as typecode.
>>
> 
> Because the typecode is known a priori. It has to be known for the
> correct handler to be invoked. It is only provided here for
> verification purposes. I could have provided the instance as an
> argument to the load handler but I prefer making the interactions
> for save and load more symmetric.

Hmm, I don't see any symmetry violation in the alternative model,
but anyway.

>>> +/*
>>> + * Register save and restore handlers. Save handlers will be invoked
>>> + * in order of DOMAIN_SAVE_CODE().
>>> + */
>>> +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
>>> +    static int __init __domain_register_##_x##_save_restore(void) \
>>> +    {                                                             \
>>> +        domain_register_save_type(                                \
>>> +            DOMAIN_SAVE_CODE(_x),                                 \
>>> +            #_x,                                                  \
>>> +            &(_save),                                             \
>>> +            &(_load));                                            \
>>> +                                                                  \
>>> +        return 0;                                                 \
>>> +    }                                                             \
>>> +    __initcall(__domain_register_##_x##_save_restore);
>>
>> I'm puzzled by part of the comment: Invoking by save code looks
>> reasonable for the saving side (albeit END doesn't match this rule
>> afaics), but is this going to be good enough for the consuming side?
> 
> No, this only relates to the save side which is why the comment
> says 'Save handlers'. I do note that it would be more consistent
> to use 'load' rather than 'restore' here though.
> 
>> There may be dependencies between types, and with fixed ordering
>> there may be no way to insert a depended upon type ahead of an
>> already defined one (at least as long as the codes are meant to be
>> stable).
>>
> 
> The ordering of load handlers is determined by the stream. I'll
> add a sentence saying that.

I.e. the consumer of the "get" interface (and producer of the stream)
is supposed to take apart the output it gets, bring records into
suitable order (which implies it knows of all the records, and which
hence means this code may need updating in cases where I'd expect
only the hypervisor to need changing), and only then issue them to
the stream?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 14:29:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:29:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3Fa-0002v0-06; Tue, 19 May 2020 14:29:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pGb=7B=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jb3FY-0002uv-IR
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:29:44 +0000
X-Inumbo-ID: 2e0f95e3-99dd-11ea-a91e-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e0f95e3-99dd-11ea-a91e-12813bfff9fa;
 Tue, 19 May 2020 14:29:43 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3RkqjoLPiMHHbHmwCUBf41A60PsfdwCEGc/9nLgMy404z1QdU0P1qliTWRymrGvDUXQIXU2TNC
 Z5ars5gBElsJjJ2tOAcGPxIXWakORoc6ZWI4llidUrlenJwjIH+Iz+B4Z+RU+4DxFYlt0zeaH6
 QGmeleueh4VwSBwq1hfbBMZxNCWTt2m1Dm/FaS2JjEmunWNJ/lE/0tqGUwNRjYDt8JH0p99+uG
 IJ8nYIiQWkZdHjed2EquMzh1u/20CCRGHsMmc7tFcrQ3FQ5ivKYxVmwWutOjSbUDfCSA9mvOpa
 jHg=
X-SBRS: 2.7
X-MesageID: 18151436
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18151436"
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Jan Beulich <jbeulich@suse.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <bf0d9e00-cb42-34b1-26ee-93628eea094c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d925943b-cad8-07c3-c21c-322ffc4a75da@citrix.com>
Date: Tue, 19 May 2020 15:29:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <bf0d9e00-cb42-34b1-26ee-93628eea094c@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 09:14, Jan Beulich wrote:
> On 18.05.2020 17:38, Andrew Cooper wrote:
>> The reserved_bit_page_fault() paths effectively turn reserved bit faults into
>> a warning, but in the light of L1TF, the real impact is far more serious.
>>
>> Xen does not have any reserved bits set in its pagetables, nor do we permit PV
>> guests to write any.  An HVM shadow guest may have reserved bits via the MMIO
>> fastpath, but those faults are handled in the VMExit #PF intercept, rather
>> than Xen's #PF handler.
>>
>> There is no need to disable interrupts (in spurious_page_fault()) for
>> __page_fault_type() to look at the rsvd bit, nor should extable fixup be
>> tolerated.
> I'm afraid I don't understand the connection of the first half of this
> to the patch - you don't alter spurious_page_fault() in this regard (at
> all, actually).

The disabling of interrupts is in spurious_page_fault().  But the point is
that there is no need to enter this logic at all for a reserved bit fault.

>
> As to extable fixup, I'm not sure: If a reserved bit ends up slipping
> into the non-Xen parts of the page tables, and if guest accessors then
> become able to trip a corresponding #PF, the bug will need an XSA with
> the proposed change, while - afaict - it won't if the exception gets
> recovered from. (There may then still be a log spam issue, I admit.)

We need to issue an XSA anyway because such a construct would be an L1TF
gadget.

What this change does is make it substantially more obvious, and turns
an information leak into a DoS.

>> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>>          return;
>>  
>> +    /*
>> +     * Xen does not have reserved bits in its pagetables, nor do we permit
>> +     * PV guests to write any.  Such entries would be vulnerable to the
>> +     * L1TF sidechannel.
>> +     *
>> +     * The only logic which intentionally sets reserved bits is the shadow
>> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
>> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
>> +     * than here.
>> +     */
>> +    if ( error_code & PFEC_reserved_bit )
>> +        goto fatal;
> Judging from the description, wouldn't this then better go even further
> up, ahead of the fixup_page_fault() invocation? In fact the function
> has two PFEC_reserved_bit checks to _avoid_ taking action, which look
> like they could then be dropped.

Only for certain Xen-only fixup.  The path into paging_fault() is not
guarded.

Depending on whether GNP is actually used for PV guests, this is where
it would be fixed up.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 19 14:31:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:31:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3H1-0003fB-BN; Tue, 19 May 2020 14:31:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jb3H0-0003f4-68
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:31:14 +0000
X-Inumbo-ID: 63b1812e-99dd-11ea-b9cf-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63b1812e-99dd-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 14:31:13 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: CmRLdw6LI7GbmUBiN8Dk3dr+z8cZBmJRvieC/1Iylk3BPxEU3SgNwAHcpTcJzXLbGOqrbm8TWC
 15mQPDS2nlav+b5ExdiiPjOUHjTYKE6G5niJ6uI4t7/lkFVAkXlT0X8WmqV3r/Rvh5hPxAeNm4
 p2w0rKmhWkL3zVhDxRswNWqGNkFWJEuau6eW2pTMLI5x84vE1Djo/0R+GKhihf5tMptLEzmbht
 MpO5QkZjry+eDaJZY7wOqTTinQkNdM0rozNXzYhreS0zJFOHUBqdP5cARl2l1PO996SOrG8FPj
 Af4=
X-SBRS: 2.7
X-MesageID: 18167038
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18167038"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <qemu-devel@nongnu.org>
Subject: [PATCH v2] xen: fix build without pci passthrough
Date: Tue, 19 May 2020 16:31:01 +0200
Message-ID: <20200519143101.75330-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Paul
 Durrant <paul@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

has_igd_gfx_passthru is only available when QEMU is built with
CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
code without checking if it's available.

Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: xen-devel@lists.xenproject.org
---
Changes since v1:
 - Do not include osdep in header file.
 - Always add the setters/getters of igd-passthru, report an error
   when attempting to set igd-passthru without built in
   pci-passthrough support.
---
 hw/xen/xen-common.c | 4 ++++
 hw/xen/xen_pt.h     | 6 ++++++
 2 files changed, 10 insertions(+)

diff --git a/hw/xen/xen-common.c b/hw/xen/xen-common.c
index 70564cc952..d758770da0 100644
--- a/hw/xen/xen-common.c
+++ b/hw/xen/xen-common.c
@@ -134,7 +134,11 @@ static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
 
 static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
 {
+#ifdef CONFIG_XEN_PCI_PASSTHROUGH
     has_igd_gfx_passthru = value;
+#else
+    error_setg(errp, "Xen PCI passthrough support not built in");
+#endif
 }
 
 static void xen_setup_post(MachineState *ms, AccelState *accel)
diff --git a/hw/xen/xen_pt.h b/hw/xen/xen_pt.h
index 179775db7b..7430235a27 100644
--- a/hw/xen/xen_pt.h
+++ b/hw/xen/xen_pt.h
@@ -322,7 +322,13 @@ extern void *pci_assign_dev_load_option_rom(PCIDevice *dev,
                                             unsigned int domain,
                                             unsigned int bus, unsigned int slot,
                                             unsigned int function);
+
+#ifdef CONFIG_XEN_PCI_PASSTHROUGH
 extern bool has_igd_gfx_passthru;
+#else
+# define has_igd_gfx_passthru false
+#endif
+
 static inline bool is_igd_vga_passthrough(XenHostPCIDevice *dev)
 {
     return (has_igd_gfx_passthru
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 19 14:49:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:49:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3YB-0004hI-1Y; Tue, 19 May 2020 14:48:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb3YA-0004hD-C3
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:48:58 +0000
X-Inumbo-ID: de2745cc-99df-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de2745cc-99df-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 14:48:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 300FDADF7;
 Tue, 19 May 2020 14:48:59 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
 <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3088e420-a72a-1b2d-144f-115610488418@suse.com>
Date: Tue, 19 May 2020 16:48:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 16:11, Andrew Cooper wrote:
> On 19/05/2020 09:34, Jan Beulich wrote:
>> On 18.05.2020 17:38, Andrew Cooper wrote:
>>> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>>>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>>>          return;
>>>  
>>> +    /*
>>> +     * Xen does not have reserved bits in its pagetables, nor do we permit
>>> +     * PV guests to write any.  Such entries would be vulnerable to the
>>> +     * L1TF sidechannel.
>>> +     *
>>> +     * The only logic which intentionally sets reserved bits is the shadow
>>> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
>>> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
>>> +     * than here.
>> What about SH_L1E_MAGIC and sh_l1e_gnp()? The latter gets used by
>> _sh_propagate() without visible restriction to HVM.
> 
> SH_L1E_MAGIC looks to be redundant with SH_L1E_MMIO_MAGIC. 
> sh_l1e_mmio() is the only path which ever creates an entry like that.
> 
> sh_l1e_gnp() is a very well hidden use of reserved bits, but surely
> can't be used for PV guests, as there doesn't appear to be anything to
> turn the resulting fault back into a plain not-present.

Well, in this case the implied question remains: How does this fit
with what _sh_propagate() does?

>> And of course every time I look at this code I wonder how we can
>> get away with (quoting a comment) "We store 28 bits of GFN in
>> bits 4:32 of the entry." Do we have a hidden restriction
>> somewhere guaranteeing that guests won't have (emulated MMIO)
>> GFNs above 1Tb when run in shadow mode?
> 
> I've raised that several times before.  It's broken.
> 
> Given that shadow frames are limited to 44 bits anyway (and not yet
> levelled safely in the migration stream), my suggestion for fixing this
> was just to use one extra nibble for the extra 4 bits and call it done.

Would you remind(?) me of where this 44-bit restriction is coming
from?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 14:55:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 14:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3eZ-0005Xx-O2; Tue, 19 May 2020 14:55:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb3eY-0005Xs-3w
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 14:55:34 +0000
X-Inumbo-ID: c8e16c47-99e0-11ea-a92c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c8e16c47-99e0-11ea-a92c-12813bfff9fa;
 Tue, 19 May 2020 14:55:32 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 965D1ABCB;
 Tue, 19 May 2020 14:55:34 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <bf0d9e00-cb42-34b1-26ee-93628eea094c@suse.com>
 <d925943b-cad8-07c3-c21c-322ffc4a75da@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f06370cb-5cff-2e4a-571c-0b61656e4829@suse.com>
Date: Tue, 19 May 2020 16:55:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d925943b-cad8-07c3-c21c-322ffc4a75da@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 16:29, Andrew Cooper wrote:
> On 19/05/2020 09:14, Jan Beulich wrote:
>> On 18.05.2020 17:38, Andrew Cooper wrote:
>>> The reserved_bit_page_fault() paths effectively turn reserved bit faults into
>>> a warning, but in the light of L1TF, the real impact is far more serious.
>>>
>>> Xen does not have any reserved bits set in its pagetables, nor do we permit PV
>>> guests to write any.  An HVM shadow guest may have reserved bits via the MMIO
>>> fastpath, but those faults are handled in the VMExit #PF intercept, rather
>>> than Xen's #PF handler.
>>>
>>> There is no need to disable interrupts (in spurious_page_fault()) for
>>> __page_fault_type() to look at the rsvd bit, nor should extable fixup be
>>> tolerated.
>> I'm afraid I don't understand the connection of the first half of this
>> to the patch - you don't alter spurious_page_fault() in this regard (at
>> all, actually).
> 
> The disabling of interrupts is in spurious_page_fault().  But the point is
> that there is no need to enter this logic at all for a reserved bit fault.
> 
>>
>> As to extable fixup, I'm not sure: If a reserved bit ends up slipping
>> into the non-Xen parts of the page tables, and if guest accessors then
>> become able to trip a corresponding #PF, the bug will need an XSA with
>> the proposed change, while - afaict - it won't if the exception gets
>> recovered from. (There may then still be a log spam issue, I admit.)
> 
> We need to issue an XSA anyway because such a construct would be an L1TF
> gadget.
> 
> What this change does is make it substantially more obvious, and turns
> an information leak into a DoS.

For L1TF-affected hardware. For unaffected hardware it turns a possible
(but not guaranteed) log spam DoS into a reliable crash.

>>> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>>>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>>>          return;
>>>  
>>> +    /*
>>> +     * Xen does not have reserved bits in its pagetables, nor do we permit
>>> +     * PV guests to write any.  Such entries would be vulnerable to the
>>> +     * L1TF sidechannel.
>>> +     *
>>> +     * The only logic which intentionally sets reserved bits is the shadow
>>> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
>>> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
>>> +     * than here.
>>> +     */
>>> +    if ( error_code & PFEC_reserved_bit )
>>> +        goto fatal;
>> Judging from the description, wouldn't this then better go even further
>> up, ahead of the fixup_page_fault() invocation? In fact the function
>> has two PFEC_reserved_bit checks to _avoid_ taking action, which look
>> like they could then be dropped.
> 
> Only for certain Xen-only fixup.  The path into paging_fault() is not
> guarded.

Hmm, yes indeed.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:06:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:06:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3oP-0006Tb-Ov; Tue, 19 May 2020 15:05:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpDv=7B=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jb3oO-0006TW-IB
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:05:44 +0000
X-Inumbo-ID: 356a1880-99e2-11ea-b9cf-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 356a1880-99e2-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 15:05:43 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 7OfRBbwAFIdeoyZ4RogUx6px3o7TbyFjVkKaHsBPJP1ELm20ETXeFANHeCnqM8HBlCxJiTtRi7
 al5H0E7VPfu8hDMEruEhrMJtQSdW1m6RloXpHA0AcexNYL40xcz2GhuScUzjqiztaTI8BXJWgm
 4RhdM98lIEXvHqGSP8V2AZWUUxOnl1n/CCNl1bogauPRyWnsDyG8Pmne8VVK8vKu4ODeMPNMxF
 JZEQGZcWk6hQKjwFkKlDF73L/ujYEXLbOoh6TR8nklzFALCUIus4ceqSySObVvVHeRNKJdz1K6
 gWw=
X-SBRS: 2.7
X-MesageID: 18599821
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18599821"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24259.62913.259268.987283@mariner.uk.xensource.com>
Date: Tue, 19 May 2020 16:05:37 +0100
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 6/7] xen/guest_access: Consolidate guest access helpers in
 xen/guest_access.h
In-Reply-To: <aa209d94-2b39-7932-919b-9842f376e0dc@xen.org>
References: <20200404131017.27330-1-julien@xen.org>
 <20200404131017.27330-7-julien@xen.org>
 <e2588f6e-1f13-b66f-8e3d-b8568f67b62a@suse.com>
 <041a9f9f-cc9e-eac5-cdd2-555fb1c88e6f@xen.org>
 <cf6c0e0b-ade0-587f-ea0e-80b02b21b1a9@suse.com>
 <c8e66108-7ac1-fb51-841f-21886b731f04@xen.org>
 <f02f09ec-b643-8321-e235-ce0ee5526ab3@suse.com>
 <69deb8f4-bafe-734c-f6fa-de41ecf539d2@xen.org>
 <c38f4581-42a6-bb4a-1f84-066528edd3ee@xen.org>
 <aa209d94-2b39-7932-919b-9842f376e0dc@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, George Dunlap <George.Dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi.  My attention was drawn to this thread.

As I understand it, everyone is agreed that deduplicating the
implementation is good (I also agree).  The debate is only between:

1. Put it in xen/ until an arch comes along that needs something
  different, at which point maybe introduce an asm-generic-style
  thing with default implementations.

2. Say, now, that this is a default implementation and it should go in
   asm-generic.

My starting point is that Julien, as the primary author of this
cleanup, should be given leeway on a matter of taste like this.
(There are as I understand it no wider implications.)

Also, ISTM that it can be argued that introducing a new abstraction is
an additional piece of work.  Doing that is certainly not hampered by
Julien's change.  So that would be another reason to take Julien's
patch as-is.

On the merits, I don't have anything to add to the arguments already
presented.  I am considerably more persuaded by Julien's arguments
than Jan's.

So on all levels I think this commit should go in, unless there are
other concerns that have not been discussed here?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:09:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3rn-0006bJ-9D; Tue, 19 May 2020 15:09:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpDv=7B=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jb3rl-0006bE-EP
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:09:13 +0000
X-Inumbo-ID: b12081a9-99e2-11ea-a92d-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b12081a9-99e2-11ea-a92d-12813bfff9fa;
 Tue, 19 May 2020 15:09:12 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: yGINXa6yE3yL0yvjx4nwvA+P8UvLuZVSbOS2joma59YC2VIMr+OdPVQDioZ3XqgkmmyVQND8zM
 e6pXMgRbaC52nmcmSk1VG1MydnX+IVcp7BaAtGJlmjkAmaPosHDH+kDn48KXmVy6JuuGYtlaG1
 QeFIFICIOUsOFWAJIkxSAQyEzQzPQKU0dYYCV+hp9qb92R0pb3eHdTAr79iWJnzcKbQerGwlZP
 O/voYuljNmtt5ODebrJobFYejEW4nN9rBDTFlQLDP2lPLbPE5EqdD9OSgY7muosLN4n+1cdsWg
 Gag=
X-SBRS: 2.7
X-MesageID: 18256908
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18256908"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24259.63123.218728.507213@mariner.uk.xensource.com>
Date: Tue, 19 May 2020 16:09:07 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v7 06/19] libxl: Use libxl__xs_* in
 libxl__write_stub_dmargs
In-Reply-To: <20200519015503.115236-7-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <20200519015503.115236-7-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v7 06/19] libxl: Use libxl__xs_* in libxl__write_stub_dmargs"):
> Re-work libxl__write_stub_dmargs to use libxl_xs_* functions in a loop.

Cool, thank you!

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:10:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3sm-0007KW-Kk; Tue, 19 May 2020 15:10:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rpDv=7B=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jb3sl-0007KR-To
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:10:15 +0000
X-Inumbo-ID: d7b92018-99e2-11ea-9887-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7b92018-99e2-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 15:10:15 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: UjjAFFzsPSwk/KYosV2Pe+2/hXCaV4VzkjbANRIO1h0850cizeihSQN+kdf3bixJaJ4XpsCTKL
 fUCVoLQ+cJQHHY6xPMYe3cKblKhVzn4WOQDovWQ1H+E86w4FKKPa6n1mo5IVcJ8a595fwV5rxu
 rcpGvFOdro9EFaoPZLugrHrVacZH1NmLykvZ83pBAO5TxMGOf2Gq5MtWzlR12H1a2H4SZoR+Gd
 6MhctZ6N7t3A0w/oO089GrJ+yrmxdKdw5s/NXEXdJbS/IYBBDG1KmxDSNycn1jVOMFCVQtCUXI
 BRw=
X-SBRS: 2.7
X-MesageID: 18172747
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18172747"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24259.63185.474995.498745@mariner.uk.xensource.com>
Date: Tue, 19 May 2020 16:10:09 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v7 07/19] libxl: write qemu arguments into separate
 xenstore keys
In-Reply-To: <20200519015503.115236-8-jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <20200519015503.115236-8-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?iso-8859-1?Q?Marczykowski-G=F3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH v7 07/19] libxl: write qemu arguments into separate xenstore keys"):
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> This allows using arguments with spaces, like -append, without
> nominating any special "separator" character.
> 
> Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> 
> Write arguments in dm-argv directory instead of overloading mini-os's
> dmargs string.
> 
> Make libxl__write_stub_dmargs vary behaviour based on the
> is_linux_stubdom flag.

Thank you, I like this.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:11:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:11:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3tY-0007Q3-WE; Tue, 19 May 2020 15:11:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jb3tX-0007Pu-Uh
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:11:04 +0000
X-Inumbo-ID: f4134eb4-99e2-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x336.google.com (unknown [2a00:1450:4864:20::336])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4134eb4-99e2-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 15:11:02 +0000 (UTC)
Received: by mail-wm1-x336.google.com with SMTP id z4so3443607wmi.2
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 08:11:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=qMwYAFJLioADAxSImpNd1LRDbB+GBLFEvjKE5vTlJUc=;
 b=kUanKqJQbExiRWpiSKBsdVmzttDXKDPlniyxDVytrPWAjvWUect3bnZvb0e390Qk3c
 SNK2vJhFwYn/dFAtQlVoN0ZyrQOftrR7j0njSZEfyeAMYQ2Q21io94/3ulA+ApIFnBR9
 kKgbWwtHcN163diTIrRFNeYXBv8xIZ6ZqI8hlNfMGkngKuYhH8tA+r/e2v4J8eTUVWXq
 KL1OFKCCjKXgItC8G0v3NnOsXiW76HM8vrm5u8y5gDqRSgtlL3WtGt0aBmQQK29DbcGy
 eSqVmKzY9kZDv63X/5y1TePOri3VBazLcIB+sS5BufTZoBjwsd4IQal3o+80ClvCVERf
 /n/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=qMwYAFJLioADAxSImpNd1LRDbB+GBLFEvjKE5vTlJUc=;
 b=CsetT95hJh6gYZKXDEwjTyJFmwK4c5Hr6GdvidZrF4Y53GQqyBVeLoQSfDMH8vgutt
 HpDuKwvVqu3EH0O2tL6+0S7j7zQs4Aq1rqSnnmfcu4DtYdmy7LpCNIrtOtUjuhXbdpRY
 aHYX0scahAX2yDofGvt/UhsAXDZMCXumKwGOje9CMWaDo7VjqHGLjFXpi+vjlQLQ+ELZ
 kHqbX0II748oJxzFvf9jDvewusJiJ+4FGQca5mNQUS/DyHaOnFR3LykXj+hGbR7mRs0P
 Md6ksSZ3IzkCVNnznnrikqlbAr0NNpDsEE7EpYcsJ4dBF2WKo7VQsnF67Xxd0AvFSHyO
 z9kQ==
X-Gm-Message-State: AOAM532VS48YKzXmjVhD9gjtsd3bW9ANHSez1RGoHSjp5IONHfSbuasq
 J1oWbbpKLV6aoEG3FniNk3c=
X-Google-Smtp-Source: ABdhPJx9iMQRuoMMIngtTRF/vNSuh3VXSdQe9V2O7UlK3MaJPMT+u6etpCLzRHAQ5v+Zt8ssGfIYHA==
X-Received: by 2002:a1c:25c3:: with SMTP id l186mr6045677wml.103.1589901062015; 
 Tue, 19 May 2020 08:11:02 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id u16sm20778836wrq.17.2020.05.19.08.10.59
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 08:11:01 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
 <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
 <000401d62de6$7cb6efa0$7624cee0$@xen.org>
 <080a1fa3-eb1e-e3b2-c52e-5c7ffdabc6eb@suse.com>
In-Reply-To: <080a1fa3-eb1e-e3b2-c52e-5c7ffdabc6eb@suse.com>
Subject: RE: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Date: Tue, 19 May 2020 16:10:59 +0100
Message-ID: <000601d62def$b4f64380$1ee2ca80$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH3ss8UVmWNzdPcdrMljFFlRuFHxQJ5sln4ApsaflcBRS23fQEQzGDfqDEiQUA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 15:24
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Ian Jackson'
> <ian.jackson@eu.citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano Stabellini'
> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>;
> 'Roger Pau Monné' <roger.pau@citrix.com>
> Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for save/restore of 'domain' context
>
> On 19.05.2020 16:04, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 19 May 2020 14:04
> >>
> >> On 14.05.2020 12:44, Paul Durrant wrote:
> >>> --- /dev/null
> >>> +++ b/xen/include/xen/save.h
> >>> @@ -0,0 +1,165 @@
> >>> +/*
> >>> + * save.h: support routines for save/restore
> >>> + *
> >>> + * Copyright Amazon.com Inc. or its affiliates.
> >>> + *
> >>> + * This program is free software; you can redistribute it and/or modify it
> >>> + * under the terms and conditions of the GNU General Public License,
> >>> + * version 2, as published by the Free Software Foundation.
> >>> + *
> >>> + * This program is distributed in the hope it will be useful, but WITHOUT
> >>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> >>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> >>> + * more details.
> >>> + *
> >>> + * You should have received a copy of the GNU General Public License along with
> >>> + * this program; If not, see <http://www.gnu.org/licenses/>.
> >>> + */
> >>> +
> >>> +#ifndef XEN_SAVE_H
> >>> +#define XEN_SAVE_H
> >>> +
> >>> +#include <xen/init.h>
> >>> +#include <xen/sched.h>
> >>> +#include <xen/types.h>
> >>> +
> >>> +#include <public/save.h>
> >>> +
> >>> +struct domain_context;
> >>> +
> >>> +int domain_save_begin(struct domain_context *c, unsigned int typecode,
> >>> +                      const char *name, unsigned int instance);
> >>> +
> >>> +#define DOMAIN_SAVE_BEGIN(_x, _c, _instance) \
> >>> +    domain_save_begin((_c), DOMAIN_SAVE_CODE(_x), #_x, (_instance))
> >>
> >> As per prior conversation I would have expected no leading underscores
> >> here anymore, and no parenthesization of what is still _c. More of
> >> these further down.
> >
> > What's wrong with leading underscores in macro arguments? They can't
> > pollute any namespace.
>
> They can still hide file scope variables legitimately named
> this way (which may get accessed by a macro without being a
> macro argument). The wording of the standard is quite clear:
> "All identifiers that begin with an underscore are always
> reserved for use as identifiers with file scope in both the
> ordinary and tag name spaces."
>

Ok.

> > Also, I prefer to keep the parentheses for arguments.
>
> More code churn then once we hopefully standardize of the less
> obfuscating variant without.
>

If we standardize that way, so be it.

> >>> +static inline int domain_load_entry(struct domain_context *c,
> >>> +                                    unsigned int typecode, const char *name,
> >>> +                                    unsigned int *instance, void *dst,
> >>> +                                    size_t len)
> >>> +{
> >>> +    int rc;
> >>> +
> >>> +    rc = domain_load_begin(c, typecode, name, instance);
> >>
> >> For some reason I've spotted this only here: Why is instance a pointer
> >> parameter of the function, when typecode is a value? Both live next to
> >> one another in struct domain_save_descriptor, and hence instance could
> >> be retrieved at the same time as typecode.
> >>
> >
> > Because the typecode is known a priori. It has to be known for the
> > correct handler to be invoked. It is only provided here for
> > verification purposes. I could have provided the instance as an
> > argument to the load handler but I prefer making the interactions
> > for save and load more symmetric.
>
> Hmm, I don't see any symmetry violation in the alternative model,
> but anyway.
>
> >>> +/*
> >>> + * Register save and restore handlers. Save handlers will be invoked
> >>> + * in order of DOMAIN_SAVE_CODE().
> >>> + */
> >>> +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
> >>> +    static int __init __domain_register_##_x##_save_restore(void) \
> >>> +    {                                                             \
> >>> +        domain_register_save_type(                                \
> >>> +            DOMAIN_SAVE_CODE(_x),                                 \
> >>> +            #_x,                                                  \
> >>> +            &(_save),                                             \
> >>> +            &(_load));                                            \
> >>> +                                                                  \
> >>> +        return 0;                                                 \
> >>> +    }                                                             \
> >>> +    __initcall(__domain_register_##_x##_save_restore);
> >>
> >> I'm puzzled by part of the comment: Invoking by save code looks
> >> reasonable for the saving side (albeit END doesn't match this rule
> >> afaics), but is this going to be good enough for the consuming side?
> >
> > No, this only relates to the save side which is why the comment
> > says 'Save handlers'. I do note that it would be more consistent
> > to use 'load' rather than 'restore' here though.
> >
> >> There may be dependencies between types, and with fixed ordering
> >> there may be no way to insert a depended upon type ahead of an
> >> already defined one (at least as long as the codes are meant to be
> >> stable).
> >>
> >
> > The ordering of load handlers is determined by the stream. I'll
> > add a sentence saying that.
>
> I.e. the consumer of the "get" interface (and producer of the stream)
> is supposed to take apart the output it gets, bring records into
> suitable order (which implies it knows of all the records, and which
> hence means this code may need updating in cases where I'd expect
> only the hypervisor needs), and only then issue to the stream?

The intention is that the stream is always in a suitable order so the
load side does not have to do any re-ordering.

  Paul




From xen-devel-bounces@lists.xenproject.org Tue May 19 15:12:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:12:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb3ud-0007XC-CF; Tue, 19 May 2020 15:12:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jb3uc-0007X4-33
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:12:10 +0000
X-Inumbo-ID: 1bc5493a-99e3-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1bc5493a-99e3-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 15:12:09 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id l11so16402461wru.0
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 08:12:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=BQQ4RGfaBDiChHD4UyQzlyaQqcVc9ngnZqQ32zLBwh8=;
 b=VnZizwWZHqLTaTFOjr9rzEY/JRZX473RjNO0owwTz3GPO77MZPWYvVjNaPoXgQ4eZG
 ffXnTx7gZ0RQQt4CzmjNVs5nTDsfWhoIK5iKAviCQ57dzURYw8rQ/SwgNJRvDokxKUD3
 xQgW0qMBU8q4dFn+D/rOQC3TYW6uw0j5HQ6l7yL62xJw4s/3p4MXH2am1I0SmWTR5790
 GtybqJmrxHXxui3G1gKrbJDmZWC6VMiLioevNDxuOHJ5W5BDaAWXuw3YGaJ+N7g4nuVh
 3QyUZG39wwUz9uvKr+dMjdU+0H5KNa/uc0vGIX4ThGKgxcuIjLO+hZmbVQR9lWzprRJ8
 +j9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=BQQ4RGfaBDiChHD4UyQzlyaQqcVc9ngnZqQ32zLBwh8=;
 b=JZ8hRvK65TM34kJFYkQbAJceU0sXlYtdRT36w+pLHu0N+2KcYguTfuWYt1erMwZf3N
 fc5ctUmolZ0kgy21cyJkK/PiLi+y53EEMpVHo9+LT/x5Sk7bp+cth6BNdZniiKE6t+fk
 m2GvVxNZpEDmHd11K0bZ4fB+K4+F98MfC2cKQfwrhNCKx9Xbv1XV/9qVhcCtw2jrR4SD
 GJgiNabo5SH/orV6rdpS4O5d5xjCpKNfQowgdOFSWZIeoZNb5MEk3Ax46jKaU+p0SEiS
 Uvl5BP3o5GnVZDHaMrNIiPMT9MlDBePIeaGdWsDKqoxGXVVKmKi1KL8FKj+ZaOSlINr5
 Vb7w==
X-Gm-Message-State: AOAM530IthkTQ0iQMtyx605xahRNrruvgtehhcOM2aHypjUzxTUUUAI5
 FFTdVf67jk/TaPxW1Wsn3+0=
X-Google-Smtp-Source: ABdhPJzKC5wCniCoTCE5LpNwDSVOTZYCzIuaLFHGJL1mzBy1svCtORPhKxuWICOo6Qd61vDrSjO3Ag==
X-Received: by 2002:adf:ef01:: with SMTP id e1mr25169611wro.28.1589901128674; 
 Tue, 19 May 2020 08:12:08 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id z11sm21260621wro.48.2020.05.19.08.12.07
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 08:12:08 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-3-paul@xen.org>
 <7455ebb7-89c8-75f0-5904-aec344c8b85f@suse.com>
In-Reply-To: <7455ebb7-89c8-75f0-5904-aec344c8b85f@suse.com>
Subject: RE: [PATCH v3 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
Date: Tue, 19 May 2020 16:12:06 +0100
Message-ID: <000701d62def$dce3f950$96abebf0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH3ss8UVmWNzdPcdrMljFFlRuFHxQK96sr+AURHycuoTGhNIA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 14:49
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>; Ian Jackson <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>; Andrew Cooper
> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>;
> Stefano Stabellini <sstabellini@kernel.org>
> Subject: Re: [PATCH v3 2/5] xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
> 
> On 14.05.2020 12:44, Paul Durrant wrote:
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -1129,6 +1129,43 @@ struct xen_domctl_vuart_op {
> >                                   */
> >  };
> >
> > +/*
> > + * XEN_DOMCTL_getdomaincontext
> > + * ---------------------------
> > + *
> > + * buffer (IN):   The buffer into which the context data should be
> > + *                copied, or NULL to query the buffer size that should
> > + *                be allocated.
> > + * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
> > + *                zero, and the value passed out will be the size of the
> > + *                buffer to allocate.
> > + *                If 'buffer' is non-NULL then the value passed in must
> > + *                be the size of the buffer into which data may be copied.
> > + *                The value passed out will be the size of data written.
> > + */
> > +struct xen_domctl_getdomaincontext {
> > +    uint32_t size;
> > +    uint32_t pad;
> 
> This and its counterpart don't seem to get checked to be zero.
> While an option for a domctl, any desire to use the field in
> the future would then require an interface version bump.
> 

Indeed. It does need to be zero checked.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 19 15:18:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:18:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb40S-0007lV-9I; Tue, 19 May 2020 15:18:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb40R-0007lQ-HR
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:18:11 +0000
X-Inumbo-ID: f2eb2f24-99e3-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2eb2f24-99e3-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 15:18:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D6670B227;
 Tue, 19 May 2020 15:18:11 +0000 (UTC)
Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: paul@xen.org
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
 <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
 <000401d62de6$7cb6efa0$7624cee0$@xen.org>
 <080a1fa3-eb1e-e3b2-c52e-5c7ffdabc6eb@suse.com>
 <000601d62def$b4f64380$1ee2ca80$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0ee39765-bc1a-e795-5b20-52ba7026e8d4@suse.com>
Date: Tue, 19 May 2020 17:18:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <000601d62def$b4f64380$1ee2ca80$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 17:10, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 19 May 2020 15:24
>>
>> On 19.05.2020 16:04, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 19 May 2020 14:04
>>>>
>>>> On 14.05.2020 12:44, Paul Durrant wrote:
>>>>> +/*
>>>>> + * Register save and restore handlers. Save handlers will be invoked
>>>>> + * in order of DOMAIN_SAVE_CODE().
>>>>> + */
>>>>> +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
>>>>> +    static int __init __domain_register_##_x##_save_restore(void) \
>>>>> +    {                                                             \
>>>>> +        domain_register_save_type(                                \
>>>>> +            DOMAIN_SAVE_CODE(_x),                                 \
>>>>> +            #_x,                                                  \
>>>>> +            &(_save),                                             \
>>>>> +            &(_load));                                            \
>>>>> +                                                                  \
>>>>> +        return 0;                                                 \
>>>>> +    }                                                             \
>>>>> +    __initcall(__domain_register_##_x##_save_restore);
>>>>
>>>> I'm puzzled by part of the comment: Invoking by save code looks
>>>> reasonable for the saving side (albeit END doesn't match this rule
>>>> afaics), but is this going to be good enough for the consuming side?
>>>
>>> No, this only relates to the save side which is why the comment
>>> says 'Save handlers'. I do note that it would be more consistent
>>> to use 'load' rather than 'restore' here though.
>>>
>>>> There may be dependencies between types, and with fixed ordering
>>>> there may be no way to insert a depended upon type ahead of an
>>>> already defined one (at least as long as the codes are meant to be
>>>> stable).
>>>>
>>>
>>> The ordering of load handlers is determined by the stream. I'll
>>> add a sentence saying that.
>>
>> I.e. the consumer of the "get" interface (and producer of the stream)
>> is supposed to take apart the output it gets, bring records into
>> suitable order (which implies it knows of all the records, and which
>> hence means this code may need updating in cases where I'd expect
>> only the hypervisor needs), and only then issue to the stream?
> 
> The intention is that the stream is always in a suitable order so the
> load side does not have to do any re-ordering.

I understood this to be the intention, but what I continue to not
understand is where / how the save side orders it suitably. "Save
handlers will be invoked in order of DOMAIN_SAVE_CODE()" does not
allow for any ordering, unless at the time of the introduction of
a particular code you already know what others it may depend on
in the future, reserving appropriate codes.

And as said - END also doesn't look to fit this comment.

Jan
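[Editorial note: the ordering question above can be made concrete with a small standalone sketch. This is not Xen code and all names are hypothetical stand-ins; it only illustrates the mechanism under discussion, in which the save code doubles as the registration array index, so saving iterates in ascending code order while loading dispatches on whatever code the stream presents next.]

```c
#include <assert.h>
#include <stddef.h>

#define CODE_MAX 8

typedef int (*handler_fn)(void *ctx);

static struct handler {
    const char *name;
    handler_fn save;
    handler_fn load;
} handlers[CODE_MAX + 1];

/* Record of the codes save_all() actually ran, in order. */
unsigned int save_order[CODE_MAX + 1];
unsigned int n_saved;

static void register_save_type(unsigned int code, const char *name,
                               handler_fn save, handler_fn load)
{
    assert(code <= CODE_MAX && handlers[code].save == NULL);
    handlers[code].name = name;
    handlers[code].save = save;
    handlers[code].load = load;
}

/* Save side: fixed iteration in save-code order, regardless of the
 * order in which handlers were registered. */
static int save_all(void *ctx)
{
    unsigned int code;

    for ( code = 0; code <= CODE_MAX; code++ )
        if ( handlers[code].save )
        {
            int rc = handlers[code].save(ctx);

            if ( rc )
                return rc;
            save_order[n_saved++] = code;
        }

    return 0;
}

static int noop(void *ctx) { (void)ctx; return 0; }

/* Register out of code order; saving still proceeds in code order. */
unsigned int demo(void)
{
    register_save_type(3, "SHARED_INFO", noop, noop);
    register_save_type(1, "HEADER", noop, noop);
    save_all(NULL);
    return n_saved;
}
```

Note that nothing in this sketch orders the load side: a loader would call `handlers[code].load` for each code read from the stream, which is why any required ordering must already be present in the stream.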


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:21:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:21:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb43d-00006t-QZ; Tue, 19 May 2020 15:21:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jb43c-00006o-Iv
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:21:28 +0000
X-Inumbo-ID: 67bfc652-99e4-11ea-9887-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67bfc652-99e4-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 15:21:26 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id 50so16322704wrc.11
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 08:21:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=wXgEDGvb5Fzh9IMx3Yk8Hp4pGlB6t9IjXAeNzQkMs4A=;
 b=Jcxdx13QVLKlLzqUmgv6xivc37j7zmyJfGh03mp75uBauVHX16k198wUwg2/7C1hr0
 V2LNNid+TmoN7T6bD3xL6qF+IZIThgwIASu1+PZX3RE6WjcRYGpFgr32wXl0KitqABS/
 SZZJ8KxaoE2p0ub5auPIrP8dg4gS9ohskuD7w2lw/QrR2r1LfKNQSG3ulGJgMLx8USWB
 7lCPJ8a1bQuZeNsyipdxqU3bg8yGerkbl5heSGiiQrtQHCkHip+p61Cr67KJphAE+5Wt
 41Erzohbf9qmUEz7CNgmYJNFu/s7CST+XNuMOgopyprCRWPhX8sxwjASGm3aXFiGmorN
 CziA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=wXgEDGvb5Fzh9IMx3Yk8Hp4pGlB6t9IjXAeNzQkMs4A=;
 b=dMi2H3EUEDzUVzpR3EcZPFSz183/cJxC5M113RcldptS2PYMgtwlGogPW9f+NKGDf7
 ILurbwxa5TdCyhcQAxXS8FGJfP0UnUNFZivSGWgP7XY8426HQWth4iAl7Rgonw6XrdE8
 SsTCNNf7m7KwG59knbEWr47gNd6kYEf45TpFTDh2LnRvvvWxqlTTg61LXuYAxVAveu5g
 wP9hGkgxBfgnafM1FaewbqsQj6SidwMbCMHRCw7Po3yrWRiSHJ7Mr6IEI4edXzotJ3nl
 +Am0TlUukme2uhMa4lFz2O1OKm07mmdNwcxud0rEcVCydabdAXwGA+GS1zuPzaMa/YEI
 NUdw==
X-Gm-Message-State: AOAM531nwoLUL7yx7tTItvBzL2nSWvsm5Rtnwyd6awj9xiIjDbV8gKD4
 APx8M7DgM3tx3syXwmwmSgA=
X-Google-Smtp-Source: ABdhPJyOxuPj2dcs9nOMzT61S9PLj6jv3HlSQEwI/k2KJQo/C7I4DY/P2+2RJ/4/PE35b+hwnu/zXA==
X-Received: by 2002:a5d:4d89:: with SMTP id b9mr26965878wru.210.1589901685656; 
 Tue, 19 May 2020 08:21:25 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id w20sm18980wmk.25.2020.05.19.08.21.24
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 08:21:25 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-5-paul@xen.org>
 <bbebc62f-8066-a60e-5717-58e46cd2d172@suse.com>
In-Reply-To: <bbebc62f-8066-a60e-5717-58e46cd2d172@suse.com>
Subject: RE: [PATCH v3 4/5] common/domain: add a domain context record for
 shared_info...
Date: Tue, 19 May 2020 16:21:23 +0100
Message-ID: <000a01d62df1$28e876e0$7ab964a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH3ss8UVmWNzdPcdrMljFFlRuFHxQHAdhdAAmVNhmmoS0wfcA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 15:08
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: Re: [PATCH v3 4/5] common/domain: add a domain context record for shared_info...
> 
> On 14.05.2020 12:44, Paul Durrant wrote:
> > @@ -61,6 +62,76 @@ static void dump_header(void)
> >
> >  }
> >
> > +static void print_binary(const char *prefix, void *val, size_t size,
> 
> const also for val?

Yes, it can be.

> 
> > +                         const char *suffix)
> > +{
> > +    printf("%s", prefix);
> > +
> > +    while (size--)
> 
> Judging from style elsewhere you look to be missing two blanks
> here.
> 

Yes.

> > +    {
> > +        uint8_t octet = *(uint8_t *)val++;
> 
> Following the above then also better don't cast const away here.
> 
> > +        unsigned int i;
> > +
> > +        for ( i = 0; i < 8; i++ )
> > +        {
> > +            printf("%u", octet & 1);
> > +            octet >>= 1;
> > +        }
> > +    }
> > +
> > +    printf("%s", suffix);
> > +}
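[Editorial note: folding in the review points above (keeping `val` const throughout, no cast dropping const, and the file's `while ( ... )` spacing), the helper might look like the following sketch. The buffer-rendering variant is an addition made here purely so the LSB-first output is easy to check; it is not part of the patch.]

```c
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>
#include <string.h>

/* Render size bytes of val as bits, LSB first per octet, into buf
 * (which must hold size * 8 + 1 characters). Returns buf. */
char *format_binary(char *buf, const void *val, size_t size)
{
    const uint8_t *p = val;
    char *q = buf;

    while ( size-- )
    {
        uint8_t octet = *p++;
        unsigned int i;

        for ( i = 0; i < 8; i++ )
        {
            *q++ = '0' + (octet & 1);
            octet >>= 1;
        }
    }

    *q = '\0';

    return buf;
}

void print_binary(const char *prefix, const void *val, size_t size,
                  const char *suffix)
{
    const uint8_t *p = val;
    char bits[9];

    printf("%s", prefix);

    while ( size-- )
        printf("%s", format_binary(bits, p++, 1));

    printf("%s", suffix);
}
```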
> > +
> > +static void dump_shared_info(void)
> > +{
> > +    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
> > +    shared_info_any_t *info;
> > +    unsigned int i;
> > +
> > +    GET_PTR(s);
> > +
> > +    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
> > +           s->has_32bit_shinfo ? "true" : "false", s->buffer_size);
> > +
> > +    info = (shared_info_any_t *)s->buffer;
> > +
> > +#define GET_FIELD_PTR(_f) \
> > +    (s->has_32bit_shinfo ? (void *)&(info->x32._f) : (void *)&(info->x64._f))
> 
> Better cast to const void * ?
> 

Ok.

> > +#define GET_FIELD_SIZE(_f) \
> > +    (s->has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
> > +#define GET_FIELD(_f) \
> > +    (s->has_32bit_shinfo ? info->x32._f : info->x64._f)
> > +
> > +    /* Array lengths are the same for 32-bit and 64-bit shared info */
> 
> Not really, no:
> 
>     xen_ulong_t evtchn_pending[sizeof(xen_ulong_t) * 8];
>     xen_ulong_t evtchn_mask[sizeof(xen_ulong_t) * 8];
> 

Oh, I must have misread.

> > @@ -167,12 +238,14 @@ int main(int argc, char **argv)
> >          if ( (typecode < 0 || typecode == desc->typecode) &&
> >               (instance < 0 || instance == desc->instance) )
> >          {
> > +
> >              printf("[%u] type: %u instance: %u length: %u\n", entry++,
> >                     desc->typecode, desc->instance, desc->length);
> 
> Stray insertion of a blank line?
> 

Yes.

> > @@ -1649,6 +1650,65 @@ int continue_hypercall_on_cpu(
> >      return 0;
> >  }
> >
> > +static int save_shared_info(const struct domain *d, struct domain_context *c,
> > +                            bool dry_run)
> > +{
> > +    struct domain_shared_info_context ctxt = { .buffer_size = PAGE_SIZE };
> 
> Why not sizeof(shared_info), utilizing the zero padding on the
> receiving side?
> 

Ok, yes, I guess that would work.

> > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > +    int rc;
> > +
> > +    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
> > +    if ( rc )
> > +        return rc;
> > +
> > +#ifdef CONFIG_COMPAT
> > +    if ( !dry_run )
> > +        ctxt.has_32bit_shinfo = has_32bit_shinfo(d);
> > +#endif
> 
> Nothing will go wrong without the if(), I suppose? Better drop it
> then? It could then also easily be part of the initializer of ctxt.
> 

Ok. I said last time I wanted to keep it as it was illustrative but I'll drop it since it has now come up twice.

> > +    rc = domain_save_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    return domain_save_end(c);
> > +}
> > +
> > +static int load_shared_info(struct domain *d, struct domain_context *c)
> > +{
> > +    struct domain_shared_info_context ctxt;
> > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> > +    if ( rc || i ) /* expect only a single instance */
> > +        return rc;
> > +
> > +    rc = domain_load_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( ctxt.pad[0] || ctxt.pad[1] || ctxt.pad[2] ||
> > +         ctxt.buffer_size != PAGE_SIZE )
> > +        return -EINVAL;
> > +
> > +#ifdef CONFIG_COMPAT
> > +    d->arch.has_32bit_shinfo = ctxt.has_32bit_shinfo;
> > +#endif
> 
> There's nothing wrong with using has_32bit_shinfo(d) here as well.
> 

I just thought it looked odd.
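[Editorial note: the save/load pair above follows a fixed-header-then-payload pattern, where the header size is `offsetof(typeof(ctxt), buffer)`. The pattern can be sketched in isolation; the record layout and names below are hypothetical stand-ins, not the Xen interfaces.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical record layout standing in for
 * struct domain_shared_info_context. */
struct record {
    uint32_t has_32bit_shinfo;
    uint32_t buffer_size;
    uint8_t buffer[];
};

/*
 * Write the fixed header (everything up to the flexible buffer member,
 * i.e. offsetof(struct record, buffer) bytes) followed by buffer_size
 * bytes of payload. The load side can then read the header alone to
 * learn how much payload follows.
 */
size_t save_record(uint8_t *dst, const struct record *hdr,
                   const void *payload)
{
    size_t hdr_size = offsetof(struct record, buffer);

    memcpy(dst, hdr, hdr_size);
    memcpy(dst + hdr_size, payload, hdr->buffer_size);

    return hdr_size + hdr->buffer_size;
}

/* Save a 4-byte payload and report the total stream length. */
size_t demo(void)
{
    struct record hdr = { .has_32bit_shinfo = 0, .buffer_size = 4 };
    uint8_t out[16];

    return save_record(out, &hdr, "abcd");
}
```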

> > --- a/xen/include/public/save.h
> > +++ b/xen/include/public/save.h
> > @@ -73,7 +73,16 @@ struct domain_save_header {
> >  };
> >  DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
> >
> > -#define DOMAIN_SAVE_CODE_MAX 1
> > +struct domain_shared_info_context {
> > +    uint8_t has_32bit_shinfo;
> > +    uint8_t pad[3];
> 
> 32-(or 16-)bit flags, with just a single bit used for the purpose?
> 

I debated that. Given this is xen/tools-only, would a bit-field be acceptable?

  Paul

> Jan
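[Editorial note on the bit-field question: bit-field layout is implementation-defined, which is why public/ABI headers generally avoid bit-fields even for tools-only interfaces. A flags-word variant of the structure along the lines Jan suggests could look like the sketch below; the names are illustrative, not the final interface.]

```c
#include <stdint.h>
#include <stddef.h>

/* Illustrative variant: a 32-bit flags word with one bit currently
 * assigned replaces the uint8_t + explicit pad[3], keeping the wire
 * layout fixed without relying on compiler bit-field packing. */
struct domain_shared_info_context {
    uint32_t flags;
#define DOMAIN_SHARED_INFO_HAS_32BIT_SHINFO (1u << 0)
    uint32_t buffer_size;
    uint8_t buffer[];
};
```

Unassigned flag bits can then double as the zero-checked padding the load side already validates.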



From xen-devel-bounces@lists.xenproject.org Tue May 19 15:21:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:21:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb43m-00007d-3w; Tue, 19 May 2020 15:21:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mazm=7B=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jb43l-00007R-4M
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:21:37 +0000
X-Inumbo-ID: 6d4aa331-99e4-11ea-a936-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d4aa331-99e4-11ea-a936-12813bfff9fa;
 Tue, 19 May 2020 15:21:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ssw3aErBh1F5cDhoZ4UrOm04Qc5bIPY02p4cVcWV6b4=; b=4nA8jeiwBuPaF52qs9KghxVkno
 9uFHnw3uSdyQd9TJLRjgnB2hhv188nOfMkmQULB9JBkUj0UlefaniwkkPBle+MpYk4UXLAbjmYBJG
 VUj97tFRWyptY+g5BqmqJjL4Y5PyYlyj9Xoa/H4qjNtbwXIWa2m56+duD5AUTxS7fILo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jb43j-0003c4-3l; Tue, 19 May 2020 15:21:35 +0000
Received: from 82.149.115.87.dyn.plus.net ([87.115.149.82] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jb43i-0003ix-QV; Tue, 19 May 2020 15:21:35 +0000
Date: Tue, 19 May 2020 16:21:32 +0100
From: Wei Liu <wl@xen.org>
To: Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [PATCH v7 07/19] libxl: write qemu arguments into separate
 xenstore keys
Message-ID: <20200519152132.3ivs6gpembnjai3o@debian>
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <20200519015503.115236-8-jandryuk@gmail.com>
 <24259.63185.474995.498745@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <24259.63185.474995.498745@mariner.uk.xensource.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 04:10:09PM +0100, Ian Jackson wrote:
> Jason Andryuk writes ("[PATCH v7 07/19] libxl: write qemu arguments into separate xenstore keys"):
> > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > 
> > This allows using arguments with spaces, like -append, without
> > nominating any special "separator" character.
> > 
> > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> > 
> > Write arguments in dm-argv directory instead of overloading mini-os's
> > dmargs string.
> > 
> > Make libxl__write_stub_dmargs vary behaviour based on the
> > is_linux_stubdom flag.
> 
> Thank you, I like this.
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Cool. Now this series is all acked. I will commit all the patches today.

Wei.
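[Editorial note: the scheme being acked here, one xenstore key per argument, sidesteps separator quoting entirely. Below is a toy sketch of the write side; the key names follow the patch's dm-argv directory, but the store itself is faked with an array, since the real code goes through libxl's xenstore helpers.]

```c
#include <stdio.h>
#include <string.h>

/* Faked key/value sink standing in for xenstore writes. */
struct kv {
    char key[32];
    char val[64];
};

/*
 * Write each device-model argument to its own numbered key under
 * dm-argv/, so an argument such as "-append console=hvc0" never needs
 * a separator character that its contents might clash with.
 */
static size_t write_dm_argv(struct kv *out, const char *const argv[],
                            size_t n)
{
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        snprintf(out[i].key, sizeof(out[i].key), "dm-argv/%zu", i);
        snprintf(out[i].val, sizeof(out[i].val), "%s", argv[i]);
    }

    return n;
}

/* Arguments containing spaces survive the round trip unmodified. */
int demo(void)
{
    const char *argv[] = { "-append", "console=hvc0 root=/dev/xvda1" };
    struct kv store[2];

    write_dm_argv(store, argv, 2);

    return strcmp(store[0].key, "dm-argv/0") == 0 &&
           strcmp(store[1].val, "console=hvc0 root=/dev/xvda1") == 0;
}
```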


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:28:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:28:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4A1-0000P8-Rs; Tue, 19 May 2020 15:28:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb4A0-0000P3-0I
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:28:04 +0000
X-Inumbo-ID: 53bca976-99e5-11ea-a939-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 53bca976-99e5-11ea-a939-12813bfff9fa;
 Tue, 19 May 2020 15:28:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 329AFB270;
 Tue, 19 May 2020 15:28:04 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/mem-paging: further adjustments to
 p2m_mem_paging_prep()'s error handling
Message-ID: <4543e93b-f861-ea0b-9de0-cba1aa938eb7@suse.com>
Date: Tue, 19 May 2020 17:28:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Address late comments on ecb913be4aaa ("x86/mem-paging: correct
p2m_mem_paging_prep()'s error handling"):
- insert a gprintk() ahead of domain_crash(),
- add a comment.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -385,6 +385,9 @@ static int prepare(struct domain *d, gfn
              * The domain can't possibly know about this page yet, so failure
              * here is a clear indication of something fishy going on.
              */
+            gprintk(XENLOG_ERR,
+                    "%pd: fresh page for GFN %"PRI_gfn" in unexpected state\n",
+                    d, gfn_x(gfn));
             domain_crash(d);
             page = NULL;
             goto out;
@@ -423,6 +426,10 @@ static int prepare(struct domain *d, gfn
 
     if ( page )
     {
+        /*
+         * Free the page on error.  Drop our temporary reference in all
+         * cases.
+         */
         if ( ret )
             put_page_alloc_ref(page);
         put_page(page);


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:30:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:30:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4Ci-0001Be-9y; Tue, 19 May 2020 15:30:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pGb=7B=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jb4Ch-0001BZ-Jj
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:30:51 +0000
X-Inumbo-ID: b83ee8c8-99e5-11ea-9887-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b83ee8c8-99e5-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 15:30:51 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: qUqGNsjT6JdZExnSe0VG+HDbHyyq6KcCmGsB3luSxDaYmgQ10Ed2vs6UBY5Be+ZU/53rMPY7d3
 kEo0wyBSSrS8bn5oVC1UzD/r8Gkl3rojPuJ3EepKaWJpH9misww9O4KZXWvkczw/kPLgI1lmha
 dwcVeZvFMRPfYa6PVi0Wp52DrZgkcOYTxwLN4DbSZiDoE3DoRNMkSgmr+u1wwMSb5Zn64Lkzb4
 MtabyWeEtY/w2cvKq92Lfkf3IiiW0aKrqBF0r9o00A9i3JzwI8NWEDi9+clyxxFSmelV5J+M4s
 oMg=
X-SBRS: 2.7
X-MesageID: 18159075
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18159075"
Subject: Re: [PATCH] x86/mem-paging: further adjustments to
 p2m_mem_paging_prep()'s error handling
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <4543e93b-f861-ea0b-9de0-cba1aa938eb7@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <87071b03-a479-c7fc-ea03-b9ad32a6f4ba@citrix.com>
Date: Tue, 19 May 2020 16:30:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4543e93b-f861-ea0b-9de0-cba1aa938eb7@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 16:28, Jan Beulich wrote:
> Address late comments on ecb913be4aaa ("x86/mem-paging: correct
> p2m_mem_paging_prep()'s error handling"):
> - insert a gprintk() ahead of domain_crash(),
> - add a comment.
>
> Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:32:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:32:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4EQ-0001Hq-MT; Tue, 19 May 2020 15:32:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jb4EP-0001Hi-Au
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:32:37 +0000
X-Inumbo-ID: f719c7ca-99e5-11ea-9887-bc764e2007e4
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f719c7ca-99e5-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 15:32:36 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id t8so1008312wmi.0
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 08:32:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=8xUB9agd8Bv+WpPFwSobRBf7XV+WGDNHnCZb3KIZVLM=;
 b=MVqS2U2rrZxnnYm2ohM2jjwi7KzmEETA3Ve/Jz2JhuaEpf1RWIXN5G1n5mxiu9Viau
 XZ1kELnNKU+bzHxCfHP+DqsFOSU3noMKFmpAYSrAOavrHRjNIaSyfI/ooONpyro9+UsO
 hBqd1YT0cbhLFDBDimCoOe+l3JqrjTvl+bFkXdDM9hlnlFI0RBany20veOo5zi637caT
 u7dZUlSbstol0rUxy6xFyf4vxgdfb6CdHVpw1foL/KjYMoSUf+b/O65uh/BWT7/4sFj1
 qSrWBUrYheVN60gsiDIg9a9ebysCqfVDbsWLz1IG+widpEMDWOxVmGrlRRE2KlH0YfY4
 yoNQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=8xUB9agd8Bv+WpPFwSobRBf7XV+WGDNHnCZb3KIZVLM=;
 b=rTgv3h6NUaWbvWFm3vMC/FjIdLkYDV62nd4rJmSGV66SrMmdP1YJO9ZnkUD7IoTC83
 hwGqPFrnXBQBQjRFcNApAiq0NdDZN6fMPEgG7d1Q4Lgx6UT/RWI1THQIp6v8niUvsmr0
 Jqedf3HaUUFNPH4Ri3q2dFU69NbQDGu/O7mOl5bezLXI2ECuzPgipkbsBJ2euwoMV1sI
 E4xJxlRTarYLW/w9Gngj699a4u3r49BgpCWmSY3kUQKeAfVupuMkCtzlXznStQmJIl34
 ZUzAJhguN/z2aiHOecG/i8sV++qNO2X4dxtEkwAA+0xjLhE0eQOOfB/xc5GxX48UMjwR
 orTw==
X-Gm-Message-State: AOAM5317PTHwP/j4TgY7E1jIYruX+4SXdo9Y8IkZ4TYb9yUZOIUy+Xyu
 Xih8U+QSdkpi0tOycs6LCrE=
X-Google-Smtp-Source: ABdhPJxZpGvC/+B/cW+EmjllVgRwEBABC4XSXInrbQSYR8vV3NLIiQWplr0uBZt869xK2tZ646G64w==
X-Received: by 2002:a1c:7205:: with SMTP id n5mr6424018wmc.189.1589902355529; 
 Tue, 19 May 2020 08:32:35 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id c140sm81503wmd.18.2020.05.19.08.32.33
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 08:32:34 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
 <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
 <000401d62de6$7cb6efa0$7624cee0$@xen.org>
 <080a1fa3-eb1e-e3b2-c52e-5c7ffdabc6eb@suse.com>
 <000601d62def$b4f64380$1ee2ca80$@xen.org>
 <0ee39765-bc1a-e795-5b20-52ba7026e8d4@suse.com>
In-Reply-To: <0ee39765-bc1a-e795-5b20-52ba7026e8d4@suse.com>
Subject: RE: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Date: Tue, 19 May 2020 16:32:32 +0100
Message-ID: <000d01d62df2$b82534f0$286f9ed0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH3ss8UVmWNzdPcdrMljFFlRuFHxQJ5sln4ApsaflcBRS23fQEQzGDfAiNx2zUB4Wnn86gQ/8tw
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 16:18
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' =
<pdurrant@amazon.com>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'George Dunlap' =
<george.dunlap@citrix.com>; 'Ian Jackson'
> <ian.jackson@eu.citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano =
Stabellini'
> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Volodymyr Babchuk' =
<Volodymyr_Babchuk@epam.com>;
> 'Roger Pau Monn=C3=A9' <roger.pau@citrix.com>
> Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for =
save/restore of 'domain' context
>=20
> On 19.05.2020 17:10, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 19 May 2020 15:24
> >>
> >> On 19.05.2020 16:04, Paul Durrant wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: 19 May 2020 14:04
> >>>>
> >>>> On 14.05.2020 12:44, Paul Durrant wrote:
> >>>>> +/*
> >>>>> + * Register save and restore handlers. Save handlers will be =
invoked
> >>>>> + * in order of DOMAIN_SAVE_CODE().
> >>>>> + */
> >>>>> +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)          =
  \
> >>>>> +    static int __init =
__domain_register_##_x##_save_restore(void) \
> >>>>> +    {                                                           =
  \
> >>>>> +        domain_register_save_type(                              =
  \
> >>>>> +            DOMAIN_SAVE_CODE(_x),                               =
  \
> >>>>> +            #_x,                                                =
  \
> >>>>> +            &(_save),                                           =
  \
> >>>>> +            &(_load));                                          =
  \
> >>>>> +                                                                =
  \
> >>>>> +        return 0;                                               =
  \
> >>>>> +    }                                                           =
  \
> >>>>> +    __initcall(__domain_register_##_x##_save_restore);
> >>>>
> >>>> I'm puzzled by part of the comment: Invoking by save code looks
> >>>> reasonable for the saving side (albeit END doesn't match this =
rule
> >>>> afaics), but is this going to be good enough for the consuming =
side?
> >>>
> >>> No, this only relates to the save side which is why the comment
> >>> says 'Save handlers'. I do note that it would be more consistent
> >>> to use 'load' rather than 'restore' here though.
> >>>
> >>>> There may be dependencies between types, and with fixed ordering
> >>>> there may be no way to insert a depended upon type ahead of an
> >>>> already defined one (at least as long as the codes are meant to be
> >>>> stable).
> >>>>
> >>>
> >>> The ordering of load handlers is determined by the stream. I'll
> >>> add a sentence saying that.
> >>
> >> I.e. the consumer of the "get" interface (and producer of the stream)
> >> is supposed to take apart the output it gets, bring records into
> >> suitable order (which implies it knows of all the records, and which
> >> hence means this code may need updating in cases where I'd expect
> >> only the hypervisor needs), and only then issue to the stream?
> >
> > The intention is that the stream is always in a suitable order so the
> > load side does not have to do any re-ordering.
>
> I understood this to be the intention, but what I continue to not
> understand is where / how the save side orders it suitably. "Save
> handlers will be invoked in order of DOMAIN_SAVE_CODE()" does not
> allow for any ordering, unless at the time of the introduction of
> a particular code you already know what others it may depend on
> in the future, reserving appropriate codes.
>

That's just how it is *now*. If a new code is defined that needs to be
in the stream before one of the existing ones then we'll have to
introduce a more elaborate scheme to deal with that at the time. Using
the save code as the array index and iterating in that order is purely
a convenience, and the load side does not depend on entries being in
save code order.
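[Editor's note: the scheme described here — the save code doubling as an array index, with the save side iterating in index order while the load side is driven purely by record order in the stream — can be sketched as below. All names are illustrative, not Xen's actual interfaces.]

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch only: a save-code-indexed handler table.  The
 * names here are hypothetical, not the real Xen save/restore API. */
#define SAVE_CODE_MAX 8

typedef int (*save_handler_t)(void *ctx);

static save_handler_t save_handlers[SAVE_CODE_MAX + 1];

/* Registration simply uses the save code as the array index... */
static void register_save_handler(unsigned int code, save_handler_t fn)
{
    assert(code <= SAVE_CODE_MAX);
    save_handlers[code] = fn;
}

/* ...and the save side invokes handlers in index (i.e. save code)
 * order, as a convenience.  The load side would instead dispatch on
 * whatever record codes appear in the stream, in stream order. */
static int save_all(void *ctx)
{
    for ( unsigned int code = 0; code <= SAVE_CODE_MAX; code++ )
    {
        if ( save_handlers[code] )
        {
            int rc = save_handlers[code](ctx);

            if ( rc )
                return rc;
        }
    }

    return 0;
}
```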

> And as said - END also doesn't look to fit this comment.
>

Ok, I can add a comment stating that exception.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 19 15:33:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:33:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4FB-0001NB-42; Tue, 19 May 2020 15:33:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pGb=7B=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jb4FA-0001N3-5t
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:33:24 +0000
X-Inumbo-ID: 131064de-99e6-11ea-b07b-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 131064de-99e6-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 15:33:23 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: bhOoR8XA0PDJ8HCZobthv2CoaFFacnY5R6GzLR3CPfpHfNOt3VWEsRZk/AWHaZIQxiZfu8bjcu
 fysGA41uJTaL2vZs1h62pDb6h2iGMchgtgEBKwlQ67wD7xk559ysTh5BrYvwZ+JUopWFm9UnKF
 8W+6nP3BPgIeJp6H8iHRVPkzhFZZPYE1R2Akff1Gl2z33zOiZ6gLvU1KPKzHKN8w5jQiALCB4B
 ciN9ahghIGRSOmc3KWvoDTrlCCvWPp5dpuBP8L/6WDxIUMN/Ip9/bnKVUI5cJ4fRewJ8NQEneH
 pkM=
X-SBRS: 2.7
X-MesageID: 18176037
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18176037"
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Jan Beulich <jbeulich@suse.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
 <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
 <3088e420-a72a-1b2d-144f-115610488418@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1750cbe5-ef48-6dc7-e372-cbc0a8cbc9cc@citrix.com>
Date: Tue, 19 May 2020 16:33:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <3088e420-a72a-1b2d-144f-115610488418@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 15:48, Jan Beulich wrote:
> On 19.05.2020 16:11, Andrew Cooper wrote:
>> On 19/05/2020 09:34, Jan Beulich wrote:
>>> On 18.05.2020 17:38, Andrew Cooper wrote:
>>>> @@ -1439,6 +1418,18 @@ void do_page_fault(struct cpu_user_regs *regs)
>>>>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>>>>          return;
>>>>  
>>>> +    /*
>>>> +     * Xen doesn't have reserved bits in its pagetables, nor do we permit PV guests to
>>>> +     * write any.  Such entries would be vulnerable to the L1TF sidechannel.
>>>> +     *
>>>> +     * The only logic which intentionally sets reserved bits is the shadow
>>>> +     * MMIO fastpath (SH_L1E_MMIO_*), which is careful not to be
>>>> +     * L1TF-vulnerable, and handled via the VMExit #PF intercept path, rather
>>>> +     * than here.
>>> What about SH_L1E_MAGIC and sh_l1e_gnp()? The latter gets used by
>>> _sh_propagate() without visible restriction to HVM.
>> SH_L1E_MAGIC looks to be redundant with SH_L1E_MMIO_MAGIC. 
>> sh_l1e_mmio() is the only path which ever creates an entry like that.
>>
>> sh_l1e_gnp() is a very well hidden use of reserved bits, but surely
>> can't be used for PV guests, as there doesn't appear to be anything to
>> turn the resulting fault back into a plain not-present.
> Well, in this case the implied question remains: How does this fit
> with what _sh_propagate() does?

I'm in the process of investigating.

>>> And of course every time I look at this code I wonder how we can
>>> get away with (quoting a comment) "We store 28 bits of GFN in
>>> bits 4:32 of the entry." Do we have a hidden restriction
>>> somewhere guaranteeing that guests won't have (emulated MMIO)
>>> GFNs above 1Tb when run in shadow mode?
>> I've raised that several times before.  It's broken.
>>
>> Given that shadow frames are limited to 44 bits anyway (and not yet
>> levelled safely in the migration stream), my suggestion for fixing this
>> was just to use one extra nibble for the extra 4 bits and call it done.
> Would you remind(?) me of where this 44-bit restriction is coming
> from?

From paging_max_paddr_bits(),

/* Shadowed superpages store GFNs in 32-bit page_info fields. */

~Andrew
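[Editor's note: the GFN packing discussed above — a GFN stored starting at bit 4 of a 64-bit entry — can be sketched as below. The field width and position are illustrative of the idea only, not the exact SH_L1E_MMIO_* layout; GFN_BITS of 32 reflects Andrew's "one extra nibble" suggestion on top of the current 28.]

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative sketch: pack a GFN into bits 4+ of a 64-bit entry.
 * With a 28-bit field, emulated-MMIO GFNs at or above 2^28 pages
 * (1TiB of guest-physical space) cannot be represented; widening the
 * field by one nibble raises the limit to 2^32 pages. */
#define GFN_SHIFT 4
#define GFN_BITS  32
#define GFN_MASK  ((UINT64_C(1) << GFN_BITS) - 1)

static uint64_t pack_gfn(uint64_t gfn)
{
    assert((gfn & ~GFN_MASK) == 0);   /* must fit in the field */
    return gfn << GFN_SHIFT;
}

static uint64_t unpack_gfn(uint64_t entry)
{
    return (entry >> GFN_SHIFT) & GFN_MASK;
}
```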


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:33:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:33:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4Fi-0001R4-D6; Tue, 19 May 2020 15:33:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Ytr=7B=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jb4Fg-0001Qq-MQ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:33:56 +0000
X-Inumbo-ID: 2675279e-99e6-11ea-ae69-bc764e2007e4
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2675279e-99e6-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 15:33:56 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id w15so4257911lfe.11
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 08:33:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=8sLs83mH2h+oYBjYFuHYn1fthSV/529dzyPGDzaPuLo=;
 b=Aig9h+DA8PVgu4kajsyQpjHbvnMicsUx4cz5HMrzD9uS+bt7A30WU7SAA+Iz4WnLow
 wG6KBr9wWu3P9e5dxAW3VcWmM5eUMT8X7dLtdgJa06R7bSwke0QmOVjQiE4m5EUyAtHW
 Clid9HFS26vsHuaCJkkWL7NVNleOjgHvpjXKmmydBUEZuyO+Z7hW4yinp8KyRzgfu8Q0
 hQ+0OuiT9dEXYiDyC6Z3aMi3/AMiADktnctntEhb34wxkx3rkdcPsgkVStd9F3Hy445y
 bnXjw33I7eSIqQOt9VikcZBOObpRs0iAb+4AZcVlXi0FG2hzIXDNDh8UgEAcS5+m/80l
 C9Xg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=8sLs83mH2h+oYBjYFuHYn1fthSV/529dzyPGDzaPuLo=;
 b=s6ARqYY7m6LG4ICllgfF08WsmKPC6xHtyEDUPaK0l4Cd639tCk1arToVo067usfJLV
 gZXW8QiOrw48dddqpHQmME/uMNtipjGxf7dIhWs0axUjkqZEg8GVJWaVtop6jjXyVY7P
 DRWPN++LitFTkS4Db8BKFg7U5q93s3qxESrHcULDPYEp0TKWpwI7p+xpkYPE7l9M0Jc5
 A+7jBm7YjQ1Ub34a9aMjCNpED2TVDS37AaV66UM1HbjBMmH3hDkOUadjFnEJrHXLi2cp
 V+IAQl9XAe2+fSie49h1FNrefpfq/6AEfsKr+gibrJL6ihtHruhsJ675tj2YlEdWGZAu
 lJug==
X-Gm-Message-State: AOAM5330wODfkWHycSqvCAjMbRXVlQrsuL4fTNGUq13fiozni8+F7HuL
 Kk3EwFdOBkLCqxRIdZHao7CmIMcwtP1zekrSFf0=
X-Google-Smtp-Source: ABdhPJwfRyShlC9luUCjTtLsftL5qLA4CApmlpLFWIIclalY9/Ou3dMwYfhZGgbo/HOwpAxnLkvYfWE2XpxXFJOgsxs=
X-Received: by 2002:a05:6512:3049:: with SMTP id
 b9mr5397233lfb.44.1589902434988; 
 Tue, 19 May 2020 08:33:54 -0700 (PDT)
MIME-Version: 1.0
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <20200519015503.115236-8-jandryuk@gmail.com>
 <24259.63185.474995.498745@mariner.uk.xensource.com>
 <20200519152132.3ivs6gpembnjai3o@debian>
In-Reply-To: <20200519152132.3ivs6gpembnjai3o@debian>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 19 May 2020 11:33:43 -0400
Message-ID: <CAKf6xpuD2761=9WvY3zryAMEE4XS2vfM-ds=c=FrRbVm-iHH7g@mail.gmail.com>
Subject: Re: [PATCH v7 07/19] libxl: write qemu arguments into separate
 xenstore keys
To: Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 11:21 AM Wei Liu <wl@xen.org> wrote:
>
> On Tue, May 19, 2020 at 04:10:09PM +0100, Ian Jackson wrote:
> > Jason Andryuk writes ("[PATCH v7 07/19] libxl: write qemu arguments into separate xenstore keys"):
> > > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > >
> > > This allows using arguments with spaces, like -append, without
> > > nominating any special "separator" character.
> > >
> > > Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > > Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
> > >
> > > Write arguments in dm-argv directory instead of overloading mini-os's
> > > dmargs string.
> > >
> > > Make libxl__write_stub_dmargs vary behaviour based on the
> > > is_linux_stubdom flag.
> >
> > Thank you, I like this.

Since I was touching the code, I might as well make the change. :)

> > Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
>
> Cool. Now this series is all acked. I will commit all the patches today.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:34:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:34:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4Fv-0001Tt-MU; Tue, 19 May 2020 15:34:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb4Fu-0001Te-S2
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:34:10 +0000
X-Inumbo-ID: 2f029626-99e6-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f029626-99e6-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 15:34:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BCD4EAD88;
 Tue, 19 May 2020 15:34:11 +0000 (UTC)
Subject: Re: [PATCH v3 4/5] common/domain: add a domain context record for
 shared_info...
To: paul@xen.org
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-5-paul@xen.org>
 <bbebc62f-8066-a60e-5717-58e46cd2d172@suse.com>
 <000a01d62df1$28e876e0$7ab964a0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <035bded3-9542-31e1-aacf-515be43b8536@suse.com>
Date: Tue, 19 May 2020 17:34:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <000a01d62df1$28e876e0$7ab964a0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 17:21, Paul Durrant wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 19 May 2020 15:08
>>
>> On 14.05.2020 12:44, Paul Durrant wrote:
>>> --- a/xen/include/public/save.h
>>> +++ b/xen/include/public/save.h
>>> @@ -73,7 +73,16 @@ struct domain_save_header {
>>>  };
>>>  DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
>>>
>>> -#define DOMAIN_SAVE_CODE_MAX 1
>>> +struct domain_shared_info_context {
>>> +    uint8_t has_32bit_shinfo;
>>> +    uint8_t pad[3];
>>
>> 32-(or 16-)bit flags, with just a single bit used for the purpose?
>>
> 
> I debated that. Given this is xen/tools-only, would a bit-field be acceptable?

Looking at domctl.h and sysctl.h, the only instance I can find is a
live-patching struct, and I'd suppose the addition of bitfields there
was missed in the hasty review back then. While it might be
acceptable, I'd recommend against it. It'll bite us at the latest with
a port to an arch where endianness is not fixed, and hence may vary
between hypercall caller and callee. The standard way of using
#define-s looks more future-proof.

Jan
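[Editor's note: for illustration, the #define-based flags style recommended above might look as below. All names are hypothetical, not the actual save.h interface; the point is that the bit layout of a fixed-width flags word is independent of compiler bit-field ordering and endianness conventions.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical flag name; not the real public interface. */
#define DOMAIN_SAVE_32BIT_SHINFO (UINT32_C(1) << 0)

/* A fixed-width flags word replaces the single-purpose
 * has_32bit_shinfo byte plus padding, leaving room for 31 further
 * boolean properties with a stable, well-defined bit layout. */
struct shared_info_context_sketch {
    uint32_t flags;
};

static int has_32bit_shinfo(const struct shared_info_context_sketch *ctx)
{
    return !!(ctx->flags & DOMAIN_SAVE_32BIT_SHINFO);
}
```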


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:35:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:35:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4HX-0001hv-2B; Tue, 19 May 2020 15:35:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jb4HV-0001hp-Ms
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:35:49 +0000
X-Inumbo-ID: 69eae8c4-99e6-11ea-9887-bc764e2007e4
Received: from mail-wm1-x330.google.com (unknown [2a00:1450:4864:20::330])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69eae8c4-99e6-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 15:35:49 +0000 (UTC)
Received: by mail-wm1-x330.google.com with SMTP id w64so4117795wmg.4
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 08:35:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=iZ9GK+2pKiMpaDV0l248S1LD25qGZ49UXbb330WifgI=;
 b=WQyFWW1OHP/EgPG0Kg8ZCfISC+AuO32mkjjgpfa6dQbzBwa/bGPQ7zG/CpdM6gR7Fk
 S6HjYbG1qAVtW6PzQu/dfbF2bnEK/oRcdSVm3/aIJBZoORhsAtLETwYl9OA3AaWJDPhX
 l5WYLAC5/gp87cIXeOvv2fZd8+X2eppxERoVntdf0rQMx2A842OfPIg3SdS3eruazegP
 FSZZbePcLd4HJieb1ry5dKylNCaEiVChXkEjVERWDxe+imA0sqdfzuG4mFT7YewHnNvy
 vS1yUQZHIn8+SNel/miXa268i6t96Vs2zERfyyUL9TJqc6A3Qq6oug9W6+8i6dGd06RE
 tsFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=iZ9GK+2pKiMpaDV0l248S1LD25qGZ49UXbb330WifgI=;
 b=c+HMlQ77Zsr4mjsaGaDCbXEuFD+M3TZw1dcgvjxrPByhTLXP3SCkEKgvU+/ps7q7XD
 1xYI69VIO6AtY2zQvnaNjNIj5LTnqi0vvu1tzaQckpamGPoCX9Al5E+anK/XfkqRhYTt
 8qN4cHpiEZsfvHwNumsvSMqi6S/JkfPcJw+tdlYDvMV8SC5gmGS8OF/pPP/PQXojsiVm
 Lk3bOlllTco49A8hGXAJ2YtsMfwtrG6RvuR9Ly5Xi+RVXMTBosTxxVleMcgt/As3rULj
 L4Y+ADV8UA01WJqkYSPgUY+NVJcxHUTIRA1ZWa6OuKEvEuaPGnGLM7UO5CKUOWRjNXaD
 zFkg==
X-Gm-Message-State: AOAM531fUC2B2JUV4GtdK2qGj6W9X0Ys1HOnfwE/hw5uZA96vOQ72dQG
 JLlagBHEzt95NJNz7ToXtIw=
X-Google-Smtp-Source: ABdhPJyk1ExcUlWNRggZ0HhHqmKdA1p4JSHGtP2Co9otprbafUdV+vN570LmIsEZk2IYGwz4E1jEig==
X-Received: by 2002:a1c:bb0a:: with SMTP id l10mr6097942wmf.186.1589902548360; 
 Tue, 19 May 2020 08:35:48 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id l13sm20983585wrm.55.2020.05.19.08.35.46
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 08:35:47 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-5-paul@xen.org>
 <bbebc62f-8066-a60e-5717-58e46cd2d172@suse.com>
 <000a01d62df1$28e876e0$7ab964a0$@xen.org>
 <035bded3-9542-31e1-aacf-515be43b8536@suse.com>
In-Reply-To: <035bded3-9542-31e1-aacf-515be43b8536@suse.com>
Subject: RE: [PATCH v3 4/5] common/domain: add a domain context record for
 shared_info...
Date: Tue, 19 May 2020 16:35:46 +0100
Message-ID: <000e01d62df3$2b222850$816678f0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH3ss8UVmWNzdPcdrMljFFlRuFHxQHAdhdAAmVNhmkClMN59gEjf36DqC2QVyA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 16:34
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>; 'Ian Jackson'
> <ian.jackson@eu.citrix.com>; 'Wei Liu' <wl@xen.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>;
> 'George Dunlap' <george.dunlap@citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano Stabellini'
> <sstabellini@kernel.org>
> Subject: Re: [PATCH v3 4/5] common/domain: add a domain context record for shared_info...
> 
> On 19.05.2020 17:21, Paul Durrant wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 19 May 2020 15:08
> >>
> >> On 14.05.2020 12:44, Paul Durrant wrote:
> >>> --- a/xen/include/public/save.h
> >>> +++ b/xen/include/public/save.h
> >>> @@ -73,7 +73,16 @@ struct domain_save_header {
> >>>  };
> >>>  DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
> >>>
> >>> -#define DOMAIN_SAVE_CODE_MAX 1
> >>> +struct domain_shared_info_context {
> >>> +    uint8_t has_32bit_shinfo;
> >>> +    uint8_t pad[3];
> >>
> >> 32-(or 16-)bit flags, with just a single bit used for the purpose?
> >>
> >
> > I debated that. Given this is xen/tools-only, would a bit-field be acceptable?
> 
> Looking at domctl.h and sysctl.h, the only instance I can find is a
> live-patching struct, and I'd suppose the addition of bitfields there
> was missed in the hasty review back then. While it might be
> acceptable, I'd recommend against it. It'll bite us at the latest with
> a port to an arch where endianness is not fixed, and hence may vary
> between hypercall caller and callee. The standard way of using
> #define-s looks more future-proof.
> 

Ok, I'll go with a flag. It's probably better in the long run.

  Paul

> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 19 15:37:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4Ij-0001oN-Cj; Tue, 19 May 2020 15:37:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb4Ih-0001oF-W9
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:37:04 +0000
X-Inumbo-ID: 960c79ea-99e6-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 960c79ea-99e6-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 15:37:03 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 89A04AC20;
 Tue, 19 May 2020 15:37:04 +0000 (UTC)
Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: paul@xen.org
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
 <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
 <000401d62de6$7cb6efa0$7624cee0$@xen.org>
 <080a1fa3-eb1e-e3b2-c52e-5c7ffdabc6eb@suse.com>
 <000601d62def$b4f64380$1ee2ca80$@xen.org>
 <0ee39765-bc1a-e795-5b20-52ba7026e8d4@suse.com>
 <000d01d62df2$b82534f0$286f9ed0$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <00609ed0-14f1-e957-52e8-8832f2708b91@suse.com>
Date: Tue, 19 May 2020 17:37:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <000d01d62df2$b82534f0$286f9ed0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 17:32, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 19 May 2020 16:18
>> To: paul@xen.org
>> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' <pdurrant@amazon.com>; 'Andrew Cooper'
>> <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>; 'Ian Jackson'
>> <ian.jackson@eu.citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano Stabellini'
>> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>;
>> 'Roger Pau Monné' <roger.pau@citrix.com>
>> Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for save/restore of 'domain' context
>>
>> On 19.05.2020 17:10, Paul Durrant wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 19 May 2020 15:24
>>>>
>>>> On 19.05.2020 16:04, Paul Durrant wrote:
>>>>>> From: Jan Beulich <jbeulich@suse.com>
>>>>>> Sent: 19 May 2020 14:04
>>>>>>
>>>>>> On 14.05.2020 12:44, Paul Durrant wrote:
>>>>>>> +/*
>>>>>>> + * Register save and restore handlers. Save handlers will be invoked
>>>>>>> + * in order of DOMAIN_SAVE_CODE().
>>>>>>> + */
>>>>>>> +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
>>>>>>> +    static int __init __domain_register_##_x##_save_restore(void) \
>>>>>>> +    {                                                             \
>>>>>>> +        domain_register_save_type(                                \
>>>>>>> +            DOMAIN_SAVE_CODE(_x),                                 \
>>>>>>> +            #_x,                                                  \
>>>>>>> +            &(_save),                                             \
>>>>>>> +            &(_load));                                            \
>>>>>>> +                                                                  \
>>>>>>> +        return 0;                                                 \
>>>>>>> +    }                                                             \
>>>>>>> +    __initcall(__domain_register_##_x##_save_restore);
>>>>>>
>>>>>> I'm puzzled by part of the comment: Invoking by save code looks
>>>>>> reasonable for the saving side (albeit END doesn't match this rule
>>>>>> afaics), but is this going to be good enough for the consuming side?
>>>>>
>>>>> No, this only relates to the save side which is why the comment
>>>>> says 'Save handlers'. I do note that it would be more consistent
>>>>> to use 'load' rather than 'restore' here though.
>>>>>
>>>>>> There may be dependencies between types, and with fixed ordering
>>>>>> there may be no way to insert a depended upon type ahead of an
>>>>>> already defined one (at least as long as the codes are meant to be
>>>>>> stable).
>>>>>>
>>>>>
>>>>> The ordering of load handlers is determined by the stream. I'll
>>>>> add a sentence saying that.
>>>>
>>>> I.e. the consumer of the "get" interface (and producer of the stream)
>>>> is supposed to take apart the output it gets, bring records into
>>>> suitable order (which implies it knows of all the records, and which
>>>> hence means this code may need updating in cases where I'd expect
>>>> only the hypervisor needs), and only then issue to the stream?
>>>
>>> The intention is that the stream is always in a suitable order so the
>>> load side does not have to do any re-ordering.
>>
>> I understood this to be the intention, but what I continue to not
>> understand is where / how the save side orders it suitably. "Save
>> handlers will be invoked in order of DOMAIN_SAVE_CODE()" does not
>> allow for any ordering, unless at the time of the introduction of
>> a particular code you already know what others it may depend on
>> in the future, reserving appropriate codes.
>>
> 
> That's just how it is *now*. If a new code is defined that needs to
> be in the stream before one of the existing ones then we'll have to
> introduce a more elaborate scheme to deal with that at the time.
> Using the save code as the array index and iterating in that order
> is purely a convenience, and the load side does not depend on
> entries being in save code order.

Could you then make the comment say so? This will allow people
wanting to modify this to do so more easily, without much digging
in code or mail history.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:38:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:38:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4Jk-0001tz-Ph; Tue, 19 May 2020 15:38:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jb4Jj-0001tl-FY
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:38:07 +0000
X-Inumbo-ID: b84df470-99e6-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b84df470-99e6-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 15:38:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lu9fVUjnP1nJRYVJFdYJ71roB+S1AxcL3rz3SmtHGTc=; b=BFea3RLmIg8fSknDjinz017fw
 Qbr0xB+39KU0xy0C8f1zyrgco3INyQ34S/J05QXNEo4damF8yAg4myPYRbifhyN53S8b0vvMOltkR
 AWNwHMR0nGExaDfB2DcBE6yrzoMNAgQDZwC/YuYN+X0rwszPlGdx6nDMmOtFBTxf97dbE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb4Jb-0003zD-VI; Tue, 19 May 2020 15:38:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb4Jb-0004Ij-Je; Tue, 19 May 2020 15:37:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jb4Jb-0007tl-IV; Tue, 19 May 2020 15:37:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150238-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150238: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=475ffdbbf5778329319ef6f7bd6315c163163440
X-Osstest-Versions-That: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 15:37:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150238 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150238/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 150227

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150227
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150227
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150227
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150227
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150227
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150227
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150227
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  475ffdbbf5778329319ef6f7bd6315c163163440
baseline version:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f

Last test of basis   150227  2020-05-18 01:51:25 Z    1 days
Failing since        150234  2020-05-18 18:06:10 Z    0 days    2 attempts
Testing same since   150238  2020-05-19 06:09:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 475ffdbbf5778329319ef6f7bd6315c163163440
Author: Olaf Hering <olaf@aepfle.de>
Date:   Mon May 18 16:44:00 2020 +0200

    tools: use HOSTCC/CPP to compile rombios code and helper
    
    Use also HOSTCFLAGS for biossums while touching the code.
    
    Spotted by inspecting build logfile.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2b532519d64e653a6bbfd9eefed6040a09c8876d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:18:56 2020 +0200

    x86: determine MXCSR mask in all cases
    
    For its use(s) by the emulator to be correct in all cases, the filling
    of the variable needs to be independent of XSAVE availability. As
    there's no suitable function in i387.c to put the logic in, keep it in
    xstate_init(), arrange for the function to be called unconditionally,
    and pull the logic ahead of all return paths there.
    
    Fixes: 9a4496a35b20 ("x86emul: support {,V}{LD,ST}MXCSR")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4c73a2a939a51dee47db77b31208157dbc29fe98
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:17:51 2020 +0200

    x86/mem-paging: consistently use gfn_t
    
    Where gprintk()s get touched anyway to switch to PRI_gfn, also switch to
    %pd for the domain logged.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 02a2b19c8e2532998e262727d32c574267ac6b48
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:16:55 2020 +0200

    x86/mem-paging: move code to its dedicated source file
    
    Do a little bit of style adjustment along the way, and drop the
    "p2m_mem_paging_" prefixes from the now static functions.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9f40699c956c10419a91db0ff8d0c985dd3b2800
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:15:46 2020 +0200

    x86/mem-paging: use guest handle for XENMEM_paging_op_prep
    
    While it should have been this way from the beginning, not doing so will
    become an actual problem with PVH Dom0. The interface change is binary
    compatible, but requires tools side producers to be re-built.
    
    Drop the bogus/unnecessary page alignment restriction on the input
    buffer at the same time.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5dc999d711a75e1a83d4b21da203d3c3197ec0e0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:13:38 2020 +0200

    x86/mm: no-one passes a NULL domain to init_xen_l4_slots()
    
    Drop the NULL checks - they've been introduced by commit 8d7b633ada
    ("x86/mm: Consolidate all Xen L4 slot writing into
    init_xen_l4_slots()") without giving a reason; I'm told this was done
    in anticipation of the function potentially getting called with a NULL
    argument down the road.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 97fb0253e6c2f2221bfd0895b7ffe3a99330d847
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sat May 16 19:50:45 2020 +0100

    x86/hvm: Fix shifting in stdvga_mem_read()
    
    stdvga_mem_read() has a return type of uint8_t, which promotes to int rather
    than unsigned int.  Shifting by 24 may hit the sign bit.
    
    Spotted by Coverity.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 3d6e92e309987c9e33177c9ccd155e58dbd5d0db
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sat May 16 13:10:07 2020 +0100

    x86/hvm: Fix memory leaks in hvm_copy_context_and_params()
    
    Any error from hvm_save() or hvm_set_param() leaks the c.data allocation.
    
    Spotted by Coverity.
    
    Fixes: 353744830 "x86/hvm: introduce hvm_copy_context_and_params"
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:39:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:39:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4KV-00020N-6s; Tue, 19 May 2020 15:38:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FgAx=7B=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jb4KU-00020E-DM
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:38:54 +0000
X-Inumbo-ID: d7b5cafe-99e6-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x430.google.com (unknown [2a00:1450:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7b5cafe-99e6-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 15:38:53 +0000 (UTC)
Received: by mail-wr1-x430.google.com with SMTP id v12so16377922wrp.12
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 08:38:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=45qFdJLV05+sb+eUMWjh35IdxZxs0X3EIKbxPAbyabk=;
 b=l9svqPdRpNxGedIKHVmFoZRcPwJAB1V47TsZqXBtKRknXQSrgGPCVdwJzdKZIo1ZDj
 6zNK1dWUK6TAcshGfNj48WErBspkPb87UyO6bW5m0hXe9W188c4pN0LmiJU6CjPMmPR7
 +Lh2gipVfoM9gaVxeclHZ47I7WaQpiYau4xQcK1Bo+S3epsVRZF8nrhfokq42JoW7F+R
 QNCf2xzXv6HSO2xOa3Rs2F9qAewf6OVpVpZTXQYDiQr+yTKBgNznSkjmCSuDqqFYqODg
 wyNN2hq02dTmh2vADGWnN/VMcbS7mgkwskjSvv5YdoZkYybdndWdnYqZIrtOJGhx50U6
 qqxw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=45qFdJLV05+sb+eUMWjh35IdxZxs0X3EIKbxPAbyabk=;
 b=Zn1o1MIbux56oBa+hltCeEaVT+XEP6CyE7DwjNlZq656o3f1SYTtCDZVeHF9mHBx2+
 f3G2oZDRAD4+gM3XRHrmU7IdZ3VYwpgG8H9leyniOzZlEPqsVMQiJjE4Tqg8ie/ptmfn
 WnrnJ3XOkDDhJL1dAalzciioxJ1Y9E1q3xm6HJMaltegtiln9IaA1aF8LHyDUjj4D0Dh
 obzLkKHv+42qR2M8Nxc6yEmJyNPn8dzujTfMkndu/IGPoJ7IdNq+aQSzIahLKMNR3KXO
 w/yVz8onJX3lN3EJFjF6XUppfHpq8TxFgRIsSHfwVeL6u22OBc8wHo4mTf7PC4u4ggYZ
 iXwA==
X-Gm-Message-State: AOAM5336Q03RvRJgvuwpIDa+SR9nPY4N0DudTlHKmIlFmJDmPXkWt6Vn
 dgki0KoaAS3NFQZptHO8mws=
X-Google-Smtp-Source: ABdhPJyeNS1cgONRaE5skr+PNBSoFHYI05sg/gtVgVgKiZ+hrJPzl2L12AnkvDuW7xOyUMdta1VDIw==
X-Received: by 2002:a5d:6541:: with SMTP id z1mr26474996wrv.264.1589902732446; 
 Tue, 19 May 2020 08:38:52 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id a74sm87093wme.23.2020.05.19.08.38.50
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 19 May 2020 08:38:51 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jan Beulich'" <jbeulich@suse.com>
References: <20200514104416.16657-1-paul@xen.org>
 <20200514104416.16657-2-paul@xen.org>
 <c1da7ff1-2c3a-02d1-cfa1-18840db37566@suse.com>
 <000401d62de6$7cb6efa0$7624cee0$@xen.org>
 <080a1fa3-eb1e-e3b2-c52e-5c7ffdabc6eb@suse.com>
 <000601d62def$b4f64380$1ee2ca80$@xen.org>
 <0ee39765-bc1a-e795-5b20-52ba7026e8d4@suse.com>
 <000d01d62df2$b82534f0$286f9ed0$@xen.org>
 <00609ed0-14f1-e957-52e8-8832f2708b91@suse.com>
In-Reply-To: <00609ed0-14f1-e957-52e8-8832f2708b91@suse.com>
Subject: RE: [PATCH v3 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Date: Tue, 19 May 2020 16:38:49 +0100
Message-ID: <000f01d62df3$98b61160$ca223420$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH3ss8UVmWNzdPcdrMljFFlRuFHxQJ5sln4ApsaflcBRS23fQEQzGDfAiNx2zUB4Wnn8wJM5mTBAk6V1OKn7ChesA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 19 May 2020 16:37
> To: paul@xen.org
> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' =
<pdurrant@amazon.com>; 'Andrew Cooper'
> <andrew.cooper3@citrix.com>; 'George Dunlap' =
<george.dunlap@citrix.com>; 'Ian Jackson'
> <ian.jackson@eu.citrix.com>; 'Julien Grall' <julien@xen.org>; 'Stefano =
Stabellini'
> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Volodymyr Babchuk' =
<Volodymyr_Babchuk@epam.com>;
> 'Roger Pau Monn=C3=A9' <roger.pau@citrix.com>
> Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework for =
save/restore of 'domain' context
>=20
> On 19.05.2020 17:32, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 19 May 2020 16:18
> >> To: paul@xen.org
> >> Cc: xen-devel@lists.xenproject.org; 'Paul Durrant' =
<pdurrant@amazon.com>; 'Andrew Cooper'
> >> <andrew.cooper3@citrix.com>; 'George Dunlap' =
<george.dunlap@citrix.com>; 'Ian Jackson'
> >> <ian.jackson@eu.citrix.com>; 'Julien Grall' <julien@xen.org>; =
'Stefano Stabellini'
> >> <sstabellini@kernel.org>; 'Wei Liu' <wl@xen.org>; 'Volodymyr =
Babchuk' <Volodymyr_Babchuk@epam.com>;
> >> 'Roger Pau Monn=C3=A9' <roger.pau@citrix.com>
> >> Subject: Re: [PATCH v3 1/5] xen/common: introduce a new framework =
for save/restore of 'domain'
> context
> >>
> >> On 19.05.2020 17:10, Paul Durrant wrote:
> >>>> From: Jan Beulich <jbeulich@suse.com>
> >>>> Sent: 19 May 2020 15:24
> >>>>
> >>>> On 19.05.2020 16:04, Paul Durrant wrote:
> >>>>>> From: Jan Beulich <jbeulich@suse.com>
> >>>>>> Sent: 19 May 2020 14:04
> >>>>>>
> >>>>>> On 14.05.2020 12:44, Paul Durrant wrote:
> >>>>>>> +/*
> >>>>>>> + * Register save and restore handlers. Save handlers will be invoked
> >>>>>>> + * in order of DOMAIN_SAVE_CODE().
> >>>>>>> + */
> >>>>>>> +#define DOMAIN_REGISTER_SAVE_RESTORE(_x, _save, _load)            \
> >>>>>>> +    static int __init __domain_register_##_x##_save_restore(void) \
> >>>>>>> +    {                                                             \
> >>>>>>> +        domain_register_save_type(                                \
> >>>>>>> +            DOMAIN_SAVE_CODE(_x),                                 \
> >>>>>>> +            #_x,                                                  \
> >>>>>>> +            &(_save),                                             \
> >>>>>>> +            &(_load));                                            \
> >>>>>>> +                                                                  \
> >>>>>>> +        return 0;                                                 \
> >>>>>>> +    }                                                             \
> >>>>>>> +    __initcall(__domain_register_##_x##_save_restore);
> >>>>>>
> >>>>>> I'm puzzled by part of the comment: Invoking by save code looks
> >>>>>> reasonable for the saving side (albeit END doesn't match this rule
> >>>>>> afaics), but is this going to be good enough for the consuming side?
> >>>>>
> >>>>> No, this only relates to the save side which is why the comment
> >>>>> says 'Save handlers'. I do note that it would be more consistent
> >>>>> to use 'load' rather than 'restore' here though.
> >>>>>
> >>>>>> There may be dependencies between types, and with fixed ordering
> >>>>>> there may be no way to insert a depended upon type ahead of an
> >>>>>> already defined one (at least as long as the codes are meant to be
> >>>>>> stable).
> >>>>>>
> >>>>>
> >>>>> The ordering of load handlers is determined by the stream. I'll
> >>>>> add a sentence saying that.
> >>>>
> >>>> I.e. the consumer of the "get" interface (and producer of the stream)
> >>>> is supposed to take apart the output it gets, bring records into
> >>>> suitable order (which implies it knows of all the records, and which
> >>>> hence means this code may need updating in cases where I'd expect
> >>>> only the hypervisor needs), and only then issue to the stream?
> >>>
> >>> The intention is that the stream is always in a suitable order so the
> >>> load side does not have to do any re-ordering.
> >>
> >> I understood this to be the intention, but what I continue to not
> >> understand is where / how the save side orders it suitably. "Save
> >> handlers will be invoked in order of DOMAIN_SAVE_CODE()" does not
> >> allow for any ordering, unless at the time of the introduction of
> >> a particular code you already know what others it may depend on
> >> in the future, reserving appropriate codes.
> >>
> >
> > That's just how it is *now*. If a new code is defined that needs to
> > be in the stream before one of the existing ones then we'll have to
> > introduce a more elaborate scheme to deal with that at the time.
> > Using the save code as the array index and iterating in that order
> > is purely a convenience, and the load side does not depend on
> > entries being in save code order.
>
> Could you then make the comment indicate so? This will allow people
> wanting to modify this to do so more easily, without much digging in
> code or mail history.

Ok, I'll add some words to that effect.

  Paul

>
> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 19 15:53:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4YF-0003eh-Fw; Tue, 19 May 2020 15:53:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fY7P=7B=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jb4YE-0003ec-9D
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:53:06 +0000
X-Inumbo-ID: d399d59e-99e8-11ea-9887-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d399d59e-99e8-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 15:53:05 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: El7oOCcy1szBJ5zYjhYjThPqPzQODLg8uLNYfOHWUvO6Kb3R3YqEhEkxJkWmWZu7/B9Dmmdh9q
 GrymMtBPqIErwSlD4nIxYx7YidRjO4QcnRPw+jwHHBlMBV0c+roR0b/1UvaHr+pCE2Vl0Wml8t
 ASWDpvsj54Cb16cDC1gjtudozZkVsB31t9vWpeHYDfUcS6T2bsA00AnUwqRi/gHMORBFBkGgOy
 lsq6UaA/bSMbr7DqErUgz4vU/5Otv2jm06g8UtW9KNyXaHE7rr5cPxUJAsUC9F00uU07R2xbeA
 KJs=
X-SBRS: 2.7
X-MesageID: 18178493
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18178493"
Date: Tue, 19 May 2020 16:52:58 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH v2] xen: fix build without pci passthrough
Message-ID: <20200519155258.GC2105@perard.uk.xensource.com>
References: <20200519143101.75330-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200519143101.75330-1-roger.pau@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 04:31:01PM +0200, Roger Pau Monne wrote:
> has_igd_gfx_passthru is only available when QEMU is built with
> CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
> code without checking if it's available.
> 
> Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> Cc: Anthony Perard <anthony.perard@citrix.com>
> Cc: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org
> ---
> Changes since v1:
>  - Do not include osdep in header file.
>  - Always add the setters/getters of igd-passthru, report an error
>    when attempting to set igd-passthru without built in
>    pci-passthrough support.
> ---
>  hw/xen/xen-common.c | 4 ++++
>  hw/xen/xen_pt.h     | 6 ++++++
>  2 files changed, 10 insertions(+)
> 
> diff --git a/hw/xen/xen-common.c b/hw/xen/xen-common.c
> index 70564cc952..d758770da0 100644
> --- a/hw/xen/xen-common.c
> +++ b/hw/xen/xen-common.c
> @@ -134,7 +134,11 @@ static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
>  
>  static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
>  {
> +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
>      has_igd_gfx_passthru = value;
> +#else
> +    error_setg(errp, "Xen PCI passthrough support not built in");
> +#endif
>  }
>  

There's an issue that I haven't thought about before.
CONFIG_XEN_PCI_PASSTHROUGH is never defined in xen-common.c. So
xen_set_igd_gfx_passthru will always return an error.

I'm not sure what to do about that yet: maybe change the way that
CONFIG_ is defined, or maybe have the setter/getter in xen_pt.c
with a stub in stubs/ which would return an error, or maybe some
other way.

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 19 15:59:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 15:59:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4e2-00045f-RO; Tue, 19 May 2020 15:59:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pGb=7B=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jb4e1-00045Y-Lg
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 15:59:05 +0000
X-Inumbo-ID: a9cda2da-99e9-11ea-ae69-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9cda2da-99e9-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 15:59:04 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: t+VMYjCUDVBb8Qz48o/FakOmXoKdr+6rgoF9hwzgU2wY4/XZkYPIyn87gDMTNEI/N7HR+gmNOQ
 AnsAorALKTled5xWBKXc0Ed352SvY7/RGYDgCUQxLhMcKNWPBq6nu/oT9Es1oJsZtrfsM0zhuN
 YK1x8FgwHknb4wnmNMmCUq9bDO3EMDWUHYvusSzeTOOXnnkyw8dEsHxItZgb59tRZywyMjJyAz
 idolsfTvEmuf+6oY6ug7AvzJC/FXPya4KTyVdNzDNaWPH2UUZVVjc1yu3MXTEcgdx+ah9/t2mB
 bw8=
X-SBRS: 2.7
X-MesageID: 17893715
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="17893715"
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Jan Beulich <jbeulich@suse.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <bf0d9e00-cb42-34b1-26ee-93628eea094c@suse.com>
 <d925943b-cad8-07c3-c21c-322ffc4a75da@citrix.com>
 <f06370cb-5cff-2e4a-571c-0b61656e4829@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8004a377-dd66-0248-8ed9-2145080ec570@citrix.com>
Date: Tue, 19 May 2020 16:59:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <f06370cb-5cff-2e4a-571c-0b61656e4829@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 15:55, Jan Beulich wrote:
> On 19.05.2020 16:29, Andrew Cooper wrote:
>> On 19/05/2020 09:14, Jan Beulich wrote:
>>> On 18.05.2020 17:38, Andrew Cooper wrote:
>>>> The reserved_bit_page_fault() paths effectively turn reserved bit faults into
>>>> a warning, but in the light of L1TF, the real impact is far more serious.
>>>>
>>>> Xen does not have any reserved bits set in its pagetables, nor do we permit PV
>>>> guests to write any.  An HVM shadow guest may have reserved bits via the MMIO
>>>> fastpath, but those faults are handled in the VMExit #PF intercept, rather
>>>> than Xen's #PF handler.
>>>>
>>>> There is no need to disable interrupts (in spurious_page_fault()) for
>>>> __page_fault_type() to look at the rsvd bit, nor should extable fixup be
>>>> tolerated.
>>> I'm afraid I don't understand the connection of the first half of this
>>> to the patch - you don't alter spurious_page_fault() in this regard (at
>>> all, actually).
>> The disabling of interrupts is in spurious_page_fault().  But the point is
>> that there is no need to enter this logic at all for a reserved page fault.
>>
>>> As to extable fixup, I'm not sure: If a reserved bit ends up slipping
>>> into the non-Xen parts of the page tables, and if guest accessors then
>>> become able to trip a corresponding #PF, the bug will need an XSA with
>>> the proposed change, while - afaict - it won't if the exception gets
>>> recovered from. (There may then still be log spam issue, I admit.)
>> We need to issue an XSA anyway because such a construct would be an L1TF
>> gadget.
>>
>> What this change does is make it substantially more obvious, and turns
>> an information leak into a DoS.
> For L1TF-affected hardware. For unaffected hardware it turns a possible
> (but not guaranteed) log spam DoS into a reliable crash.

It represents unexpected corruption of our most critical security
resource in the processor.

Obviously we need to account for any legitimate uses Xen has of reserved
bits (so far maybe GNP for PV guests), but BUG()-like behaviour *is* the
response appropriate to the severity of finding corrupt PTEs.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 19 16:10:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 16:10:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4oZ-0005Wh-Rw; Tue, 19 May 2020 16:09:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wtzB=7B=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jb4oY-0005Wc-M9
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 16:09:58 +0000
X-Inumbo-ID: 2f28f0aa-99eb-11ea-a945-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f28f0aa-99eb-11ea-a945-12813bfff9fa;
 Tue, 19 May 2020 16:09:58 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A56FDAB91;
 Tue, 19 May 2020 16:09:59 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
 <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
 <3088e420-a72a-1b2d-144f-115610488418@suse.com>
 <1750cbe5-ef48-6dc7-e372-cbc0a8cbc9cc@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4a5c33c0-9245-126b-123e-3980a9135190@suse.com>
Date: Tue, 19 May 2020 18:09:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <1750cbe5-ef48-6dc7-e372-cbc0a8cbc9cc@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 17:33, Andrew Cooper wrote:
> On 19/05/2020 15:48, Jan Beulich wrote:
>> On 19.05.2020 16:11, Andrew Cooper wrote:
>>> Given that shadow frames are limited to 44 bits anyway (and not yet
>>> levelled safely in the migration stream), my suggestion for fixing this
>>> was just to use one extra nibble for the extra 4 bits and call it done.
>> Would you remind(?) me of where this 44-bit restriction is coming
>> from?
> 
> From paging_max_paddr_bits(),
> 
> /* Shadowed superpages store GFNs in 32-bit page_info fields. */

Ah, that's an abuse of the backlink field. After some looking
around I first thought the up field could be used to store the
GFN instead, as it's supposedly used for single-page shadows
only. Then however I found

static inline int sh_type_has_up_pointer(struct domain *d, unsigned int t)
{
    /* Multi-page shadows don't have up-pointers */
    if ( t == SH_type_l1_32_shadow
         || t == SH_type_fl1_32_shadow
         || t == SH_type_l2_32_shadow )
        return 0;
    /* Pinnable shadows don't have up-pointers either */
    return !sh_type_is_pinnable(d, t);
}

It's unclear to me in which way SH_type_l1_32_shadow and
SH_type_l2_32_shadow are "multi-page" shadows; I'd rather have
expected all three SH_type_fl1_*_shadow to be. Tim?

In any event there would be 12 bits to reclaim from the up
pointer - it being a physical address, there'll not be more
than 52 significant bits.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 19 16:13:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 16:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb4rr-0006Kj-Ce; Tue, 19 May 2020 16:13:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jb4rq-0006Ke-Ky
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 16:13:22 +0000
X-Inumbo-ID: a85e03f2-99eb-11ea-a945-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a85e03f2-99eb-11ea-a945-12813bfff9fa;
 Tue, 19 May 2020 16:13:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=J2emaG9PbUmb1ROZKRHOWXcAQ/FCHmYp6Uteem4b0UM=; b=W0Wq1PyNeIwAplrzbkdU+zahY
 rKOk9yqEqCSRWOqehI/LJKbzlREEgDhLvQ/7+24CKORwi7shsEliUAzPwNzaiX/6HqNBO8I3VakdP
 /k2JRIGtByRomlSAuSpLMO14O7nAy85ZKXWkkbCdFgAgPG+MfirGgwoEnMQtp95ot4eGQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb4ro-0005Jo-Qd; Tue, 19 May 2020 16:13:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb4ro-0005xZ-8l; Tue, 19 May 2020 16:13:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jb4ro-0008L9-8A; Tue, 19 May 2020 16:13:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150245-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150245: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=271ade5a621005f86ec928280dc6ac85f2c4c95a
X-Osstest-Versions-That: xen=7efd9f3d45480c12328e4419547a98022f7af43a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 16:13:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150245 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150245/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  271ade5a621005f86ec928280dc6ac85f2c4c95a
baseline version:
 xen                  7efd9f3d45480c12328e4419547a98022f7af43a

Last test of basis   150242  2020-05-19 10:01:19 Z    0 days
Testing same since   150245  2020-05-19 13:01:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7efd9f3d45..271ade5a62  271ade5a621005f86ec928280dc6ac85f2c4c95a -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 19 16:55:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 16:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5WM-0001Gl-Pj; Tue, 19 May 2020 16:55:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5WL-0001Gg-LM
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 16:55:13 +0000
X-Inumbo-ID: 818b6d5e-99f1-11ea-a952-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 818b6d5e-99f1-11ea-a952-12813bfff9fa;
 Tue, 19 May 2020 16:55:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gto+boTdw2omfbvehY12Pb4RamepkHtOCM4G5Kfb5+k=; b=dD57fle5yNNRL4AmhWyQZvSUaS
 /GTI8o8LC1/9MPfaTPWqFcFAyI54oIoxYpmErIQ//5jdfGPG5Pst3The0nhHSLIYOF1GG8RgfmwMN
 TPcfz5QgduPz51XlvyKwgePJL0/NVkv7elv+3Xo5pzSuMrzq6EWn3UY6zw+wBPxGy5LU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5WH-00068L-HT; Tue, 19 May 2020 16:55:09 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5WH-00026d-AD; Tue, 19 May 2020 16:55:09 +0000
Subject: Re: [PATCH for-4.14 2/3] xen/arm: Take into account the DMA width
 when allocating Dom0 memory banks
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-3-julien@xen.org>
 <aa95369bf22df89404243dd4e7374f8015ccc9ad.camel@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a88cb65b-469f-464e-cbfa-20d56ff5c839@xen.org>
Date: Tue, 19 May 2020 17:55:07 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <aa95369bf22df89404243dd4e7374f8015ccc9ad.camel@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 "jgrall@amazon.com" <jgrall@amazon.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "roman@zededa.com" <roman@zededa.com>, "minyard@acm.org" <minyard@acm.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 18/05/2020 21:34, Volodymyr Babchuk wrote:
> Hi Julien,

Hi Volodymyr,

Thank you for the review.

> 
> On Mon, 2020-05-18 at 12:30 +0100, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> At the moment, Xen is assuming that all the devices are at least 32-bit
>> DMA capable. However, some SoCs have devices that may be able to access
>> a much more restricted range. For instance, the Raspberry Pi 4 has devices
>> that can only access the first GB of RAM.
>>
>> The function arch_get_dma_bit_size() will return the lowest DMA width on
>> the platform. Use it to decide what is the limit for the low memory.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   xen/arch/arm/domain_build.c | 32 +++++++++++++++++++-------------
>>   1 file changed, 19 insertions(+), 13 deletions(-)
>>
>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>> index 430708753642..abc4e463d27c 100644
>> --- a/xen/arch/arm/domain_build.c
>> +++ b/xen/arch/arm/domain_build.c
>> @@ -211,10 +211,13 @@ fail:
>>    *    the ramdisk and DTB must be placed within a certain proximity of
>>    *    the kernel within RAM.
>>    * 3. For dom0 we want to place as much of the RAM as we reasonably can
>> - *    below 4GB, so that it can be used by non-LPAE enabled kernels (32-bit)
>> + *    below 4GB, so that it can be used by non-LPAE enabled kernels (32-bit).
> Is full stop really needed there?

I meant to remove the line below, as it is now part of 4). I will
remove it in the next version.

Best regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 19 17:06:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:06:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5hC-0002DJ-0h; Tue, 19 May 2020 17:06:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5hA-0002DE-IB
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:06:24 +0000
X-Inumbo-ID: 110f5548-99f3-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 110f5548-99f3-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 17:06:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=89uvTAHM8r7Gcoc2v6jeqC5cUXOEfk1Yfte8B/+lU7I=; b=3vC/AbDTTQzmfh//YYGH+C7nI3
 mR+KSnKUqHskuhKWNsIKVvujSU+XONKTXjxDHUA8GyJCWbe0YSeodIadN5tmpEAIfPdtWGW6+x62l
 /I97rmGH9g7s82BuL+ga245BZTD1mtiS4ooPZcMdlKD+iws8B+LMOt/Nufa4F4OvG+DU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5h6-0006OU-6M; Tue, 19 May 2020 17:06:20 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5h5-0002vS-V1; Tue, 19 May 2020 17:06:20 +0000
Subject: Re: [PATCH for-4.14 3/3] xen/arm: plat: Allocate as much as possible
 memory below 1GB for dom0 for RPI
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-4-julien@xen.org>
 <bc9a1121a7484ef484c30869793698f912987d23.camel@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3570aa57-d5b0-7db2-536f-5cef3c8b2e01@xen.org>
Date: Tue, 19 May 2020 18:06:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <bc9a1121a7484ef484c30869793698f912987d23.camel@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 "jgrall@amazon.com" <jgrall@amazon.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "roman@zededa.com" <roman@zededa.com>, "minyard@acm.org" <minyard@acm.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 18/05/2020 21:36, Volodymyr Babchuk wrote:
> Hi Julien,

Hi,


> On Mon, 2020-05-18 at 12:30 +0100, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The Raspberry Pi 4 has devices that can only DMA into the first GB of
>> RAM. Therefore we want to allocate as much memory as possible below 1GB
>> for dom0.
>>
>> Use the recently introduced dma_bitsize field to specify the DMA width
>> supported.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> Reported-by: Corey Minyard <minyard@acm.org>
>> ---
>>   xen/arch/arm/platforms/brcm-raspberry-pi.c | 1 +
>>   1 file changed, 1 insertion(+)
>>
>> diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
>> index b697fa2c6c0e..ad5483437b31 100644
>> --- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
>> +++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
>> @@ -43,6 +43,7 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
>>   PLATFORM_START(rpi4, "Raspberry Pi 4")
>>       .compatible     = rpi4_dt_compat,
>>       .blacklist_dev  = rpi4_blacklist_dev,
>> +    .dma_bitsize    = 10,
> 
> I'm confused. Should it be 30?

Argh, yes. I computed the number of bits for 1024 and forgot to add 20 :(.

I will fix it in the next revision.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 19 17:07:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5iJ-0002Hn-BW; Tue, 19 May 2020 17:07:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5iI-0002Hf-2w
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:07:34 +0000
X-Inumbo-ID: 3ab6facd-99f3-11ea-a955-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ab6facd-99f3-11ea-a955-12813bfff9fa;
 Tue, 19 May 2020 17:07:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zBPupmTA7a2M6q+7ZyaoHEWiEA41qdpUVWh+L9qVbtQ=; b=zkYjjoc7eDBzN1/TzLxwSokr0V
 f4NbHPkQlInE3iuM49kKGFxnJFHoAqtEBVvF5YQb9RXfADDHWng11TD37EyH+sAW/zqq7Og6GaUJQ
 7BgvSo1BB7FcySKzocW4c+qgWvpQb+z61LPWxpCLntTyEGHagureR0pi/C0snrv4+I/E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5iF-0006RF-3p; Tue, 19 May 2020 17:07:31 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5iE-00030w-3J; Tue, 19 May 2020 17:07:30 +0000
Subject: Re: [PATCH for-4.14 3/3] xen/arm: plat: Allocate as much as possible
 memory below 1GB for dom0 for RPI
To: minyard@acm.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-4-julien@xen.org>
 <bc9a1121a7484ef484c30869793698f912987d23.camel@epam.com>
 <20200519000242.GA3948@minyard.net>
From: Julien Grall <julien@xen.org>
Message-ID: <d48d6fa3-8c09-0a89-e399-09e25909b590@xen.org>
Date: Tue, 19 May 2020 18:07:28 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519000242.GA3948@minyard.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "jgrall@amazon.com" <jgrall@amazon.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "roman@zededa.com" <roman@zededa.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Corey,

On 19/05/2020 01:02, Corey Minyard wrote:
> On Mon, May 18, 2020 at 08:36:08PM +0000, Volodymyr Babchuk wrote:
>> Hi Julien,
>>
>> On Mon, 2020-05-18 at 12:30 +0100, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> The Raspberry Pi 4 has devices that can only DMA into the first GB of
>>> RAM. Therefore we want to allocate as much memory as possible below 1GB
>>> for dom0.
>>>
>>> Use the recently introduced dma_bitsize field to specify the DMA width
>>> supported.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>> Reported-by: Corey Minyard <minyard@acm.org>
>>> ---
>>>   xen/arch/arm/platforms/brcm-raspberry-pi.c | 1 +
>>>   1 file changed, 1 insertion(+)
>>>
>>> diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
>>> index b697fa2c6c0e..ad5483437b31 100644
>>> --- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
>>> +++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
>>> @@ -43,6 +43,7 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
>>>   PLATFORM_START(rpi4, "Raspberry Pi 4")
>>>       .compatible     = rpi4_dt_compat,
>>>       .blacklist_dev  = rpi4_blacklist_dev,
>>> +    .dma_bitsize    = 10,
>>
>> I'm confused. Should it be 30?
> 
> Indeed it should.  I just tested this series, and Linux fails to boot
> with this set to 10.  With it set to 30 it works.
> 
> With this set to 30, you can have a:
> 
> Tested-by: Corey Minyard <cminyard@mvista.com>
> 
> for all three patches.

Thank you for the testing! I will fix the bug and resend the series.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 19 17:20:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5uy-0003vA-36; Tue, 19 May 2020 17:20:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5uw-0003uu-AI
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:20:38 +0000
X-Inumbo-ID: 0d1efad6-99f5-11ea-a95a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d1efad6-99f5-11ea-a95a-12813bfff9fa;
 Tue, 19 May 2020 17:20:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ph7bs82KFKJYE5DJAdCJd2/AW9SaWrzXSk1YjKd+pXk=; b=Qie2SCxkxvamT9ybAE0Z2M7mXp
 Sm/gE4ceiUNR1CbEJ1JA90DTuyW1hT5y9YJP0p1bdDhMdjsEmXI12ZHzpEpGPmk6s0IsPHKJjRbvz
 FbjG++WkKM0FMVKiVoeXhLrSSKkpabQ8As9uRP2HoqR3/FKsIik1D7PywBnYcbHNOXZo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5us-0006gn-NE; Tue, 19 May 2020 17:20:34 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5us-0003ie-Dx; Tue, 19 May 2020 17:20:34 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 2/3] xen/arm: Take into account the DMA width when
 allocating Dom0 memory banks
Date: Tue, 19 May 2020 18:20:27 +0100
Message-Id: <20200519172028.31169-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200519172028.31169-1-julien@xen.org>
References: <20200519172028.31169-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Corey Minyard <cminyard@mvista.com>, Julien Grall <jgrall@amazon.com>,
 roman@zededa.com, jeff.kubascik@dornerworks.com, minyard@acm.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, Xen assumes that all devices are at least 32-bit
DMA capable. However, some SoCs have devices that can only access
a much more restricted range. For instance, the Raspberry Pi 4 has
devices that can only access the first GB of RAM.

The function arch_get_dma_bitsize() returns the lowest DMA width on
the platform. Use it to decide the limit for low memory.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Tested-by: Corey Minyard <cminyard@mvista.com>

---
    Changes in v2:
        - Remove left-over in the comment
        - Add Corey's tested-by
---
 xen/arch/arm/domain_build.c | 33 +++++++++++++++++++--------------
 1 file changed, 19 insertions(+), 14 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 430708753642..3d7a75c31881 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -211,10 +211,12 @@ fail:
  *    the ramdisk and DTB must be placed within a certain proximity of
  *    the kernel within RAM.
  * 3. For dom0 we want to place as much of the RAM as we reasonably can
- *    below 4GB, so that it can be used by non-LPAE enabled kernels (32-bit)
- *    or when a device assigned to dom0 can only do 32-bit DMA access.
- * 4. For 32-bit dom0 the kernel must be located below 4GB.
- * 5. We want to have a few largers banks rather than many smaller ones.
+ *    below 4GB, so that it can be used by non-LPAE enabled kernels (32-bit).
+ * 4. Some devices assigned to dom0 can only do 32-bit DMA access or
+ *    are even more restricted. We want to allocate as much of the RAM
+ *    as we reasonably can that can be accessed from all the devices.
+ * 5. For 32-bit dom0 the kernel must be located below 4GB.
+ * 6. We want to have a few larger banks rather than many smaller ones.
  *
  * For the first two requirements we need to make sure that the lowest
  * bank is sufficiently large.
@@ -245,9 +247,9 @@ fail:
  * we give up.
  *
  * For 32-bit domain we require that the initial allocation for the
- * first bank is under 4G. For 64-bit domain, the first bank is preferred
- * to be allocated under 4G. Then for the subsequent allocations we
- * initially allocate memory only from below 4GB. Once that runs out
+ * first bank is part of the low mem. For 64-bit, the first bank is preferred
+ * to be allocated in the low mem. Then for subsequent allocation, we
+ * initially allocate memory only from low mem. Once that runs out
  * (as described above) we allow higher allocations and continue until
  * that runs out (or we have allocated sufficient dom0 memory).
  */
@@ -262,6 +264,7 @@ static void __init allocate_memory_11(struct domain *d,
     int i;
 
     bool lowmem = true;
+    unsigned int lowmem_bitsize = min(32U, arch_get_dma_bitsize());
     unsigned int bits;
 
     /*
@@ -282,7 +285,7 @@ static void __init allocate_memory_11(struct domain *d,
      */
     while ( order >= min_low_order )
     {
-        for ( bits = order ; bits <= (lowmem ? 32 : PADDR_BITS); bits++ )
+        for ( bits = order ; bits <= lowmem_bitsize; bits++ )
         {
             pg = alloc_domheap_pages(d, order, MEMF_bits(bits));
             if ( pg != NULL )
@@ -296,24 +299,26 @@ static void __init allocate_memory_11(struct domain *d,
         order--;
     }
 
-    /* Failed to allocate bank0 under 4GB */
+    /* Failed to allocate bank0 in the lowmem region. */
     if ( is_32bit_domain(d) )
         panic("Unable to allocate first memory bank\n");
 
-    /* Try to allocate memory from above 4GB */
-    printk(XENLOG_INFO "No bank has been allocated below 4GB.\n");
+    /* Try to allocate memory from above the lowmem region */
+    printk(XENLOG_INFO "No bank has been allocated below %u-bit.\n",
+           lowmem_bitsize);
     lowmem = false;
 
  got_bank0:
 
     /*
-     * If we failed to allocate bank0 under 4GB, continue allocating
-     * memory from above 4GB and fill in banks.
+     * If we failed to allocate bank0 in the lowmem region,
+     * continue allocating from above the lowmem and fill in banks.
      */
     order = get_allocation_size(kinfo->unassigned_mem);
     while ( kinfo->unassigned_mem && kinfo->mem.nr_banks < NR_MEM_BANKS )
     {
-        pg = alloc_domheap_pages(d, order, lowmem ? MEMF_bits(32) : 0);
+        pg = alloc_domheap_pages(d, order,
+                                 lowmem ? MEMF_bits(lowmem_bitsize) : 0);
         if ( !pg )
         {
             order --;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 17:20:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5us-0003ud-JG; Tue, 19 May 2020 17:20:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5ur-0003uN-Ct
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:20:33 +0000
X-Inumbo-ID: 0b64377e-99f5-11ea-a95a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b64377e-99f5-11ea-a95a-12813bfff9fa;
 Tue, 19 May 2020 17:20:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=baEkwJOPnZ+/I3Zcxf0VTvOVfRphCLzRTTzHYn+gkoI=; b=VZB8ZG9SmgSyY7eUf34BgFqJPC
 Jh3SNSF3V6nQf5xig4egkzBJzhvKXne3/fZ4mh/TtoCSBoTVQS+NFMr0guWTnxLo/GuM9oHecGX3J
 fxgDaql+dT1p6vwnsU5qlSCYnhtdYaV5habOxmz36uXHbzFFZADyMoRdH+EfxlCYM648=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5up-0006gb-Tq; Tue, 19 May 2020 17:20:31 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5up-0003ie-JF; Tue, 19 May 2020 17:20:31 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
Date: Tue, 19 May 2020 18:20:25 +0100
Message-Id: <20200519172028.31169-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 jeff.kubascik@dornerworks.com, minyard@acm.org, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

Hi all,

At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
only use the first GB of memory.

This is because several devices cannot DMA above 1GB but Xen doesn't
necessarily allocate memory for Dom0 below 1GB.

This small series is trying to address the problem by allowing a
platform to restrict where Dom0 banks are allocated.

This is also a candidate for Xen 4.14. Without it, a user will not be
able to use all the RAM on the Raspberry Pi 4.

This series has only been lightly tested. I would appreciate more testing on
the Raspberry Pi 4 to confirm it removes the restriction.

Cheers,

Cc: paul@xen.org

Julien Grall (3):
  xen/arm: Allow a platform to override the DMA width
  xen/arm: Take into account the DMA width when allocating Dom0 memory
    banks
  xen/arm: plat: Allocate as much as possible memory below 1GB for dom0
    for RPI

 xen/arch/arm/domain_build.c                | 33 +++++++++++++---------
 xen/arch/arm/platform.c                    |  5 ++++
 xen/arch/arm/platforms/brcm-raspberry-pi.c |  1 +
 xen/include/asm-arm/mm.h                   |  2 ++
 xen/include/asm-arm/numa.h                 |  5 ----
 xen/include/asm-arm/platform.h             |  2 ++
 6 files changed, 29 insertions(+), 19 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 17:20:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5v0-0003vU-Aq; Tue, 19 May 2020 17:20:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5uy-0003vI-Fx
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:20:40 +0000
X-Inumbo-ID: 0dc478a8-99f5-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0dc478a8-99f5-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 17:20:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=SOZlp61CWrsQD7qa0dasRDDLg9x7AQZ16sVemh7nqoo=; b=VlugWGv3QcqQsn6sakAdRX3wC+
 jaNvLHe2n3WAdZIvmI1rzb9fMEeBAk8bDRF0s/TStPZ2dxwYkgJiPS1FrEXBtGhFAKZptrSLBDCWq
 j0xnHURlEPqkzKUhfrclevpmamAyl9anxhdxDUvs1w1f2zjt7hZqvMLmoQmMxoYfRMnE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5ut-0006gs-Sx; Tue, 19 May 2020 17:20:35 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5ut-0003ie-Js; Tue, 19 May 2020 17:20:35 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 3/3] xen/arm: plat: Allocate as much as possible
 memory below 1GB for dom0 for RPI
Date: Tue, 19 May 2020 18:20:28 +0100
Message-Id: <20200519172028.31169-4-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200519172028.31169-1-julien@xen.org>
References: <20200519172028.31169-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Corey Minyard <cminyard@mvista.com>, Julien Grall <jgrall@amazon.com>,
 roman@zededa.com, jeff.kubascik@dornerworks.com, minyard@acm.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

The Raspberry Pi 4 has devices that can only DMA into the first GB of
RAM. Therefore we want to allocate as much memory as possible below 1GB
for dom0.

Use the recently introduced dma_bitsize field to specify the DMA width
supported.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reported-by: Corey Minyard <minyard@acm.org>
Tested-by: Corey Minyard <cminyard@mvista.com>

---
    Changes in v2:
        - 1G is 30 bits not 10!
        - Add Corey's tested-by
---
 xen/arch/arm/platforms/brcm-raspberry-pi.c | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/arch/arm/platforms/brcm-raspberry-pi.c b/xen/arch/arm/platforms/brcm-raspberry-pi.c
index b697fa2c6c0e..f5ae58a7d5f2 100644
--- a/xen/arch/arm/platforms/brcm-raspberry-pi.c
+++ b/xen/arch/arm/platforms/brcm-raspberry-pi.c
@@ -43,6 +43,7 @@ static const struct dt_device_match rpi4_blacklist_dev[] __initconst =
 PLATFORM_START(rpi4, "Raspberry Pi 4")
     .compatible     = rpi4_dt_compat,
     .blacklist_dev  = rpi4_blacklist_dev,
+    .dma_bitsize    = 30,
 PLATFORM_END
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 17:20:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5uu-0003uo-RP; Tue, 19 May 2020 17:20:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5ut-0003uj-Gw
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:20:35 +0000
X-Inumbo-ID: 0cb3400c-99f5-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0cb3400c-99f5-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 17:20:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/nUeUpuVV0k6Tn2LGbWheh4wHL0r38S9VrtLJqRIuNc=; b=gSJKewAqElCrYrsymiW8f6VAGR
 qilPwMQ04D3whhl1Nh9PPboEHO56jG0QnHneY+CsGa992Bt5kP4cc4hFs1HDF7d9LJJtuxTQsHpbR
 A+ghRUv/gfL+SUYEwZWam2xlf+BSpfcdb9WfgRMxPg5prTlrvJeo84ijOmRPorOIqxwU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5ur-0006gi-IC; Tue, 19 May 2020 17:20:33 +0000
Received: from 54-240-197-227.amazon.com ([54.240.197.227]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5ur-0003ie-81; Tue, 19 May 2020 17:20:33 +0000
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 1/3] xen/arm: Allow a platform to override the DMA
 width
Date: Tue, 19 May 2020 18:20:26 +0100
Message-Id: <20200519172028.31169-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200519172028.31169-1-julien@xen.org>
References: <20200519172028.31169-1-julien@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Corey Minyard <cminyard@mvista.com>, minyard@acm.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 roman@zededa.com, George Dunlap <george.dunlap@citrix.com>,
 jeff.kubascik@dornerworks.com, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Julien Grall <jgrall@amazon.com>

At the moment, Xen assumes that all devices are at least 32-bit
DMA capable. However, some SoCs have devices that can only access
a much more restricted range. For instance, the RPi has devices that can
only access the first 1GB of RAM.

The structure platform_desc is now extended to allow a platform to
override the DMA width. The new field is used to implement
arch_get_dma_bitsize().

The prototype is now moved to asm-arm/mm.h as the function is not NUMA
specific. The implementation is done in platform.c so we don't have to
include platform.h everywhere. This should be fine as the function is
not expected to be called on a hot path.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Tested-by: Corey Minyard <cminyard@mvista.com>

---

Cc: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>

    Changes in v2:
        - Add Corey's tested-by
        - Add Volodymyr's reviewed-by

I noticed that arch_get_dma_bitsize() is only called when there is more
than one NUMA node. I am a bit unsure of the reason behind it.

The goal for Arm is to use arch_get_dma_bitsize() when deciding how low
the first Dom0 bank should be allocated.
---
 xen/arch/arm/platform.c        | 5 +++++
 xen/include/asm-arm/mm.h       | 2 ++
 xen/include/asm-arm/numa.h     | 5 -----
 xen/include/asm-arm/platform.h | 2 ++
 4 files changed, 9 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/platform.c b/xen/arch/arm/platform.c
index 8eb0b6e57a5a..4db5bbb4c51d 100644
--- a/xen/arch/arm/platform.c
+++ b/xen/arch/arm/platform.c
@@ -155,6 +155,11 @@ bool platform_device_is_blacklisted(const struct dt_device_node *node)
     return (dt_match_node(blacklist, node) != NULL);
 }
 
+unsigned int arch_get_dma_bitsize(void)
+{
+    return ( platform && platform->dma_bitsize ) ? platform->dma_bitsize : 32;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 7df91280bc77..f8ba49b1188f 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -366,6 +366,8 @@ int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
     return -EOPNOTSUPP;
 }
 
+unsigned int arch_get_dma_bitsize(void);
+
 #endif /*  __ARCH_ARM_MM__ */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h
index 490d1f31aa14..31a6de4e2346 100644
--- a/xen/include/asm-arm/numa.h
+++ b/xen/include/asm-arm/numa.h
@@ -25,11 +25,6 @@ extern mfn_t first_valid_mfn;
 #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
 #define __node_distance(a, b) (20)
 
-static inline unsigned int arch_get_dma_bitsize(void)
-{
-    return 32;
-}
-
 #endif /* __ARCH_ARM_NUMA_H */
 /*
  * Local variables:
diff --git a/xen/include/asm-arm/platform.h b/xen/include/asm-arm/platform.h
index ed4d30a1be7c..997eb2521631 100644
--- a/xen/include/asm-arm/platform.h
+++ b/xen/include/asm-arm/platform.h
@@ -38,6 +38,8 @@ struct platform_desc {
      * List of devices which must not pass-through to a guest
      */
     const struct dt_device_match *blacklist_dev;
+    /* Override the DMA width (32-bit by default). */
+    unsigned int dma_bitsize;
 };
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 17:23:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:23:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb5xJ-0004H7-TW; Tue, 19 May 2020 17:23:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=B//R=7B=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jb5xJ-0004H2-36
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:23:05 +0000
X-Inumbo-ID: 66032f78-99f5-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66032f78-99f5-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 17:23:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Zp70TmI07WNb2Wb5ISiUmdisxzXfcuiOTTdhy02W+MI=; b=7DRQ+WRkGC5zmYHnEwd+qMroPz
 66oYAC+0/4XXtfVgmPmoMSlWbw5y8mKgN1DI6OI7+SZ+oxMPrfAyzBjC0MSEMwk1ZTFwm+Ujloq81
 y6ecLFfsTvcxzGcoJMh8frelCkklTVx94GtJvuK+OmVTFN/n843qwjfElYOw6UCH926U=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jb5xD-0006lV-7T; Tue, 19 May 2020 17:22:59 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jb5xD-00049s-0v; Tue, 19 May 2020 17:22:59 +0000
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
References: <20200518113008.15422-1-julien@xen.org>
 <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
Date: Tue, 19 May 2020 18:22:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, minyard@acm.org,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 19/05/2020 04:08, Tamas K Lengyel wrote:
> On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org> wrote:
>>
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Hi all,
>>
>> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
>> only use the first GB of memory.
>>
>> This is because several devices cannot DMA above 1GB but Xen doesn't
>> necessarily allocate memory for Dom0 below 1GB.
>>
>> This small series is trying to address the problem by allowing a
>> platform to restrict where Dom0 banks are allocated.
>>
>> This is also a candidate for Xen 4.14. Without it, a user will not be
>> able to use all the RAM on the Raspberry Pi 4.
>>
>> This series has only been lightly tested. I would appreciate more
>> testing on the Raspberry Pi 4 to confirm that it removes the restriction.
> 
> Hi Julien,

Hi,

> could you post a git branch somewhere? I can try this on my rpi4 that
> already runs 4.13.

I have pushed a branch based on unstable and the v2 of the series:

git://xenbits.xen.org/people/julieng/xen-unstable.git

branch arm-dma/v2

Thank you in advance for the testing!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 19 17:48:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 17:48:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb6M4-00069R-0H; Tue, 19 May 2020 17:48:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PPOd=7B=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jb6M3-00069G-32
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 17:48:39 +0000
X-Inumbo-ID: f7c8ed28-99f8-11ea-b9cf-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7c8ed28-99f8-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 17:48:38 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Kto5CE9hKXDANA3dv6AV+qfGgge3TBQmrrbG0XmUpjg/89DP+YqxeH4ymio+763qd+feXYZuBG
 sTOzD35a1V2G1ShTHilVH2uX6550WljdJUjg+8qrrpvjk87C17Fuu8zUmuXwGPPywqZcS+hc0e
 mrCi3b5gnyEWE9ED6/y/I6+4OakaLs8rT+TjKZ3ARif/rRGJThwHRc8+CF7yIGUCqu2MoKy5Ik
 g2A/eAiEqU3u6zjIxHyqk/3s8T3EoQgsfPlVJh97lTykBCoOFA0XGWUoAUQgjQRkRRH/gKvKJ0
 nzs=
X-SBRS: 2.7
X-MesageID: 18192776
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="18192776"
Date: Tue, 19 May 2020 19:48:30 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2] xen: fix build without pci passthrough
Message-ID: <20200519174830.GH54375@Air-de-Roger>
References: <20200519143101.75330-1-roger.pau@citrix.com>
 <20200519155258.GC2105@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200519155258.GC2105@perard.uk.xensource.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 qemu-devel@nongnu.org, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 04:52:58PM +0100, Anthony PERARD wrote:
> On Tue, May 19, 2020 at 04:31:01PM +0200, Roger Pau Monne wrote:
> > has_igd_gfx_passthru is only available when QEMU is built with
> > CONFIG_XEN_PCI_PASSTHROUGH, and hence shouldn't be used in common
> > code without checking if it's available.
> > 
> > Fixes: 46472d82322d0 ('xen: convert "-machine igd-passthru" to an accelerator property')
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Cc: Stefano Stabellini <sstabellini@kernel.org>
> > Cc: Anthony Perard <anthony.perard@citrix.com>
> > Cc: Paul Durrant <paul@xen.org>
> > Cc: xen-devel@lists.xenproject.org
> > ---
> > Changes since v1:
> >  - Do not include osdep in header file.
> >  - Always add the setters/getters of igd-passthru, report an error
> >    when attempting to set igd-passthru without built in
> >    pci-passthrough support.
> > ---
> >  hw/xen/xen-common.c | 4 ++++
> >  hw/xen/xen_pt.h     | 6 ++++++
> >  2 files changed, 10 insertions(+)
> > 
> > diff --git a/hw/xen/xen-common.c b/hw/xen/xen-common.c
> > index 70564cc952..d758770da0 100644
> > --- a/hw/xen/xen-common.c
> > +++ b/hw/xen/xen-common.c
> > @@ -134,7 +134,11 @@ static bool xen_get_igd_gfx_passthru(Object *obj, Error **errp)
> >  
> >  static void xen_set_igd_gfx_passthru(Object *obj, bool value, Error **errp)
> >  {
> > +#ifdef CONFIG_XEN_PCI_PASSTHROUGH
> >      has_igd_gfx_passthru = value;
> > +#else
> > +    error_setg(errp, "Xen PCI passthrough support not built in");
> > +#endif
> >  }
> >  
> 
> There's an issue that I haven't thought about before.
> CONFIG_XEN_PCI_PASSTHROUGH is never defined in xen-common.c. So
> xen_set_igd_gfx_passthru will always return an error.
> 
> I'm not sure what to do about that yet. Maybe change the way that
> CONFIG_ is defined, or maybe have the setter/getter in xen_pt.c
> with a stub in stubs/ which would return an error, or maybe some
> other way.

Hm, I think making it available is OK? Would it make sense to set it
in $config_host_mak instead of $config_target_mak?

It's a host property, not a target one AFAICT, as the support in the
host determines whether PCI passthrough is supported or not.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 19 18:00:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 18:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb6XI-0007p6-2z; Tue, 19 May 2020 18:00:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0pGb=7B=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jb6XG-0007oF-U0
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 18:00:14 +0000
X-Inumbo-ID: 96aa549e-99fa-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96aa549e-99fa-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 18:00:14 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: K5fX4F3VQg+Ua+ggxX2n0/rQf6qXrRkr2HvTisQ9WAYwW3I2M+xI5OUTrVBGc5x7w8tIYlNjm5
 gw57uLtW0UF3HAIAbYCYidUTG1h/p+fih1kReNqXPSoWcHmT1+nh0j29KBj01sLZGVHojOEUCL
 s8dJhzgXwTQGVz5RRvBygDWRoWQVyumhFfgIsk2mXoZhvLhg2k7BMwvepK/K4gIY5AGbGQSx6o
 laFcNYksRRVWm2DhkIm7PsDEAd88+P/BULF/wqc0MIdQPiTa1HfsFHiT96GKkCJv9alhB6E4CY
 FZ8=
X-SBRS: 2.7
X-MesageID: 17907427
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,410,1583211600"; d="scan'208";a="17907427"
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Jan Beulich <jbeulich@suse.com>, Tim Deegan <tim@xen.org>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
 <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
 <3088e420-a72a-1b2d-144f-115610488418@suse.com>
 <1750cbe5-ef48-6dc7-e372-cbc0a8cbc9cc@citrix.com>
 <4a5c33c0-9245-126b-123e-3980a9135190@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1808df24-ecde-97c6-c296-ecf385260395@citrix.com>
Date: Tue, 19 May 2020 19:00:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4a5c33c0-9245-126b-123e-3980a9135190@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 17:09, Jan Beulich wrote:
> On 19.05.2020 17:33, Andrew Cooper wrote:
>> On 19/05/2020 15:48, Jan Beulich wrote:
>>> On 19.05.2020 16:11, Andrew Cooper wrote:
>>>> Given that shadow frames are limited to 44 bits anyway (and not yet
>>>> levelled safely in the migration stream), my suggestion for fixing this
>>>> was just to use one extra nibble for the extra 4 bits and call it done.
>>> Would you remind(?) me of where this 44-bit restriction is coming
>>> from?
>> From paging_max_paddr_bits(),
>>
>> /* Shadowed superpages store GFNs in 32-bit page_info fields. */
> Ah, that's an abuse of the backlink field. After some looking
> around I first thought the up field could be used to store the
> GFN instead, as it's supposedly used for single-page shadows
> only. Then however I found
>
> static inline int sh_type_has_up_pointer(struct domain *d, unsigned int t)
> {
>     /* Multi-page shadows don't have up-pointers */
>     if ( t == SH_type_l1_32_shadow
>          || t == SH_type_fl1_32_shadow
>          || t == SH_type_l2_32_shadow )
>         return 0;
>     /* Pinnable shadows don't have up-pointers either */
>     return !sh_type_is_pinnable(d, t);
> }
>
> It's unclear to me in which way SH_type_l1_32_shadow and
> SH_type_l2_32_shadow are "multi-page" shadows; I'd rather have
> expected all three SH_type_fl1_*_shadow to be. Tim?

I suspect the comment is incomplete, and should include "4k shadows
don't have up-pointers".

>
> In any event there would be 12 bits to reclaim from the up
> pointer - it being a physical address, there'll not be more
> than 52 significant bits.

Right, but for L1TF safety, the address bits in the PTE must not be
cacheable.

Currently, on fully populated multi-socket servers, the MMIO fastpath
relies on the top 4G of address space not being cacheable, which is the
safest we can reasonably manage.  Extending this by a nibble takes us to
16G which is not meaningfully less safe.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 19 18:59:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 18:59:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7S0-0003ix-DP; Tue, 19 May 2020 18:58:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jb7Rz-0003is-6T
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 18:58:51 +0000
X-Inumbo-ID: c333acf6-9a02-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c333acf6-9a02-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 18:58:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+C1SdnNtOWw2H9mY1P2QfImnSPC3bQqCsyxV2XohmLM=; b=r2+KseXb82zyqmFntJZQDb21j
 9JsTfx67Z/E2Xnwtst4Xl+UiJY0/YFwFmoKHqLQRiAimDmNZh3gSz8fuscofMp6fyzZAppyz6gEfN
 DrTU/rH6l6ZXcalXj8Px/16dU7PNsp6RIBQrR8pkMHyi90R+/EhcjNSykyoHZPavi0/qI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb7Rs-0000LT-3Y; Tue, 19 May 2020 18:58:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb7Rr-0007Ej-Op; Tue, 19 May 2020 18:58:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jb7Rr-0007TC-OG; Tue, 19 May 2020 18:58:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150243-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150243: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=a89af8c20ab289785806fcc1589b0f4265bd4e90
X-Osstest-Versions-That: qemuu=a28c9c8c9fc42484efe1bf5a77affe842e54e38b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 18:58:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150243 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150243/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150235
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150235
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150235
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150235
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150235
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150235
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150235
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a89af8c20ab289785806fcc1589b0f4265bd4e90
baseline version:
 qemuu                a28c9c8c9fc42484efe1bf5a77affe842e54e38b

Last test of basis   150235  2020-05-18 19:06:44 Z    0 days
Testing same since   150243  2020-05-19 11:07:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Eric Blake <eblake@redhat.com>
  Eyal Moscovici <eyal.moscovici@oracle.com>
  Mark Kanda <mark.kanda@oracle.com>
  Peter Maydell <peter.maydell@linaro.org>
  Raphael Pour <raphael.pour@hetzner.com>
  Yoav Elnekave <yoav.elnekave@oracle.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   a28c9c8c9f..a89af8c20a  a89af8c20ab289785806fcc1589b0f4265bd4e90 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Tue May 19 19:02:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:02:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7Ve-0004Yk-1c; Tue, 19 May 2020 19:02:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7Vc-0004Yf-6a
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:02:36 +0000
X-Inumbo-ID: 4ccb2c46-9a03-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ccb2c46-9a03-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:02:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Va-0001da-Qf; Tue, 19 May 2020 20:02:34 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 01/38] ts-logs-capture: Cope if xl shutdown leaves
 domain running for a bit
Date: Tue, 19 May 2020 20:01:53 +0100
Message-Id: <20200519190230.29519-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This seems mostly to affect buster, but it could in principle affect
earlier releases too, I think.

In principle it would be nice to fix this bug, and to have a proper
test for it, but a reliable test is hard and an unreliable one is not
useful.  So I guess we are going to have this workaround
indefinitely...

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-logs-capture | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-logs-capture b/ts-logs-capture
index 0320a5a5..d75a2fda 100755
--- a/ts-logs-capture
+++ b/ts-logs-capture
@@ -272,6 +272,7 @@ sub shutdown_guests () {
 		( xl shutdown -a -F -w ; echo y ) &
 	    ) | (
 		read x
+		sleep 10 # xl shutdown is a bit racy :-/
 		xl list | awk '!/^Domain-0 |^Name / {print $2}' \
 		| xargs -t -r -n1 xl destroy ||:
 	    )
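
As an aside, the awk filter in the hunk above can be tried against some
sample `xl list'-style output (the guest names here are made up) to see
which domain IDs it would select for destruction:

```shell
# Feed the awk filter from the hunk above some sample `xl list` output
# (hypothetical guests).  It prints the ID of every domain except the
# header line and Domain-0.
printf '%s\n' \
  'Name          ID   Mem VCPUs State  Time(s)' \
  'Domain-0       0  4096     4 r----    100.0' \
  'guest-a        1  1024     2 -b---     10.0' \
  'guest-b        2   512     1 -b---      5.0' \
| awk '!/^Domain-0 |^Name / {print $2}'
# prints: 1 and 2, one per line
```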
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:02:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:02:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7Vi-0004ZL-9S; Tue, 19 May 2020 19:02:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7Vh-0004Z9-5i
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:02:41 +0000
X-Inumbo-ID: 4ced10f4-9a03-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ced10f4-9a03-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:02:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vb-0001da-2r; Tue, 19 May 2020 20:02:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 02/38] ts-xen-build-prep: Install rsync
Date: Tue, 19 May 2020 20:01:54 +0100
Message-Id: <20200519190230.29519-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

osstest uses this for transferring configuration, build artefacts, and
so on.

In Debian stretch and earlier, rsync happened to be pulled in by
something else.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-xen-build-prep | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index e9298d54..8e73f763 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -197,7 +197,7 @@ END
 }
 
 sub prep () {
-    my @packages = qw(mercurial
+    my @packages = qw(mercurial rsync
                       build-essential bin86 bcc iasl bc
                       flex bison cmake
                       libpci-dev libncurses5-dev libssl-dev python-dev
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:02:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:02:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7Vn-0004af-Hw; Tue, 19 May 2020 19:02:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7Vm-0004aK-6t
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:02:46 +0000
X-Inumbo-ID: 4cb0cda6-9a03-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4cb0cda6-9a03-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:02:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Va-0001da-JW; Tue, 19 May 2020 20:02:34 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 00/38] Upgrade most hosts/guests to buster
Date: Tue, 19 May 2020 20:01:52 +0100
Message-Id: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Paul Durrant <xadimgnik@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

buster is Debian stable right now.  We don't want to be using
oldstable because Debian have a history of breaking it.

Paul: we should have a conversation about whether this should be
pushed soon, or deferred until after the Xen 4.14 release.

I have yet to do a final full formal retest of this series so it may
not be final.

Ian Jackson (38):
  ts-logs-capture: Cope if xl shutdown leaves domain running for a bit
  ts-xen-build-prep: Install rsync
  lvcreate arguments: pass --yes -Z y -W y
  TestSupport: allow more time for apt
  Booting: Use `--' rather than `---' to introduce host cmdline
  di_installcmdline_core: Pass locale on d-i command line
  setupboot_grub2: Drop $submenu variable
  ts-leak-check: Ignore buster's udevd too
  Bodge systemd random seed arrangements
  Debian guests: Write systemd random seed file
  ts-debian-di-install: Provide guest with more RAM
  Debian: preseed: use priority= alias
  Debian: Specify `priority=critical' rather than locale
  Honour 'LinuxSerialConsole <suite>' host property
  buster: make-hosts-flight: Add to possible suites for hosts flight
  buster: Extend grub2 uefi no install workaround
  buster: ts-host-install: Extend net.ifnames workaround
  buster: Deinstall the "systemd" package
  buster: preseed partman-auto-lvm/guided_size
  buster: ts-host-install: NTP not honoured bug remains
  buster: Extend ARM clock workaround
  buster: Extend guest bootloader workaround
  Honour DebianImageFile_SUITE_ARCH
  buster: Specify DebianImageFile_SUITE_ARCH
  20_linux_xen: Copy Debian buster version into our initramfs area
  20_linux_xen: Adhoc template substitution
  20_linux_xen: Ignore xenpolicy and config files too
  20_linux_xen: Support Xen Security Modules (XSM/FLASK)
  mg-debian-installer-update: support overlay-initrd-SUITE
  overlay-initrd-buster/sbin/reopen-console: Copy from Debian
  overlay-initrd-buster/sbin/reopen-console: Fix #932416
  buster: chiark-scripts: Install a new version on buster too
  buster: Provide TftpDiVersion
  buster: grub, arm64: extend chainloading workaround
  buster: setupboot_grub2: Note what files exist in /boot
  buster: setupboot_grub2: Handle missing policy file bug
  buster: Extend workaround for dhcpd EROFS bug
  buster: Switch to Debian buster as the default suite

 Osstest.pm                                    |   2 +-
 Osstest/Debian.pm                             |  60 +++-
 Osstest/TestSupport.pm                        |  15 +-
 make-hosts-flight                             |   2 +-
 mfi-common                                    |   9 +
 mg-debian-installer-update                    |  20 ++
 overlay-buster/etc/grub.d/20_linux_xen        | 327 ++++++++++++++++++
 overlay-initrd-buster/sbin/reopen-console     | 126 +++++++
 .../override.conf                             |   3 +
 overlay/usr/local/bin/random-seed-add         |  33 ++
 production-config                             |   5 +
 sg-run-job                                    |   1 +
 ts-debian-di-fixup                            |  29 ++
 ts-debian-di-install                          |   2 +-
 ts-debian-fixup                               |   1 +
 ts-host-install                               |   4 +-
 ts-leak-check                                 |   1 +
 ts-logs-capture                               |   1 +
 ts-xen-build-prep                             |   6 +-
 19 files changed, 617 insertions(+), 30 deletions(-)
 create mode 100755 overlay-buster/etc/grub.d/20_linux_xen
 create mode 100755 overlay-initrd-buster/sbin/reopen-console
 create mode 100644 overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
 create mode 100755 overlay/usr/local/bin/random-seed-add
 create mode 100755 ts-debian-di-fixup

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:02:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:02:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7Vs-0004cU-Qi; Tue, 19 May 2020 19:02:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7Vr-0004c1-61
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:02:51 +0000
X-Inumbo-ID: 4d086b92-9a03-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d086b92-9a03-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:02:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vb-0001da-A4; Tue, 19 May 2020 20:02:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 03/38] lvcreate arguments: pass --yes -Z y -W y
Date: Tue, 19 May 2020 20:01:55 +0100
Message-Id: <20200519190230.29519-4-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The documentation seems to think this is the default, but empirically
it isn't.  In our environment --yes is fine.

I have reported this to Debian as #953183.  Also vaguely related (and
discovered by me at the same time) is #953185.

This came up while trying to get things to work on buster.  I don't know
what has changed.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 2 +-
 ts-xen-build-prep      | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 1e7da676..43766ee3 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -935,7 +935,7 @@ sub lv_create ($$$$) {
     my ($ho, $vg, $lv, $mb) = @_;
     my $lvdev = "/dev/$vg/$lv";
     target_cmd_root($ho, "lvremove -f $lvdev ||:");
-    target_cmd_root($ho, "lvcreate -L ${mb}M -n $lv $vg");
+    target_cmd_root($ho, "lvcreate --yes -Z y -W y -L ${mb}M -n $lv $vg");
     target_cmd_root($ho, "dd if=/dev/zero of=$lvdev count=10");
     return $lvdev;
 }
diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index 8e73f763..dabb9921 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -61,7 +61,7 @@ sub determine_vg_lv () {
 sub lvextend_stage1 () {
     target_cmd_root($ho, <<END);
         set -ex; if ! test -f /root/swap_osstest_enabled; then
-            lvcreate -L 10G -n swap_osstest_build $vg ||:
+            lvcreate --yes -Z y -W y -L 10G -n swap_osstest_build $vg ||:
             mkswap /dev/$vg/swap_osstest_build ||:
             swapon /dev/$vg/swap_osstest_build
             touch /root/swap_osstest_enabled
@@ -84,7 +84,7 @@ sub vginfo () {
 
 sub lvcreate () {
     target_cmd_output_root($ho,
-			   "lvdisplay $lv || lvcreate -l 1 -n $lvleaf $vg");
+			   "lvdisplay $lv || lvcreate --yes -Z y -W y -l 1 -n $lvleaf $vg");
 }
 
 sub lvextend1 ($$$) {
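
For reference, what the added flags ask of lvcreate, per the lvm
documentation (illustrative only: this needs a real volume group, and
$vg is a variable from the script above):

```shell
#   --yes   answer yes to any prompts (e.g. wipe-signature confirmations)
#   -Z y    zero the start of the new logical volume
#   -W y    wipe any detected filesystem/RAID signatures
lvcreate --yes -Z y -W y -L 10G -n swap_osstest_build $vg
```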
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:02:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:02:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7Vy-0004eV-2z; Tue, 19 May 2020 19:02:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7Vw-0004dz-6o
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:02:56 +0000
X-Inumbo-ID: 4d3b7140-9a03-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d3b7140-9a03-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:02:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vb-0001da-Fs; Tue, 19 May 2020 20:02:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 04/38] TestSupport: allow more time for apt
Date: Tue, 19 May 2020 20:01:56 +0100
Message-Id: <20200519190230.29519-5-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Empirically some of these operations can take longer than 30s,
especially with a cold cache.

Note that because of host sharing and our on-host apt lock, the
timeout needs to be the same for every apt operation: a fast operation
could be blocked behind a slow one.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 43766ee3..f4e9414c 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -637,12 +637,12 @@ sub target_install_packages_nonfree_nonconcurrent ($@) {
     my ($ho, @packages) = @_;
     my $slist= '/etc/apt/sources.list';
     my $xsuites= 'contrib non-free';
-    target_cmd_root($ho, <<END, 30);
+    target_cmd_root($ho, <<END, 300);
     perl -i~ -pe 'next unless m/^deb/; s{ main\$}{\$& $xsuites};' $slist
     apt-get update
 END
     target_run_pkgmanager_install($ho,\@packages);
-    target_cmd_root($ho, <<END, 30);
+    target_cmd_root($ho, <<END, 300);
     mv $slist~ $slist
     apt-get update
 END
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:03:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7W3-0004gn-C6; Tue, 19 May 2020 19:03:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7W1-0004fs-6R
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:03:01 +0000
X-Inumbo-ID: 4d508eb8-9a03-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d508eb8-9a03-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:02:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vb-0001da-NG; Tue, 19 May 2020 20:02:35 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 05/38] Booting: Use `--' rather than `---' to
 introduce host cmdline
Date: Tue, 19 May 2020 20:01:57 +0100
Message-Id: <20200519190230.29519-6-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Because systemd did something obnoxious, the kernel retaliated in the
game of Core Wars by hiding all arguments before `--' from userspace.
So use `---' instead so that all the arguments remain visible.

This, in some sense, now applies to host installs a change we had
already made to Debian HVM guests.  See osstest#493b7395
  ts-debian-hvm-install: Use ---, and no longer duplicate $gconsole
and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=762007
  Kernel command line handling change breaks d-i user-params functionality

This change is fine for all non-ancient versions of Debian, so I have
not made it conditional.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index f4e9414c..ff8103f2 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -2909,7 +2909,7 @@ label overwrite
 	menu label ^Overwrite
 	menu default
 	kernel $kern
-	append $dicmd initrd=$initrd -- $hocmd
+	append $dicmd initrd=$initrd --- $hocmd
 	ipappend $xopts{ipappend}
 	$dtbs
 default overwrite
@@ -2956,7 +2956,7 @@ sub setup_netboot_di_uefi ($$$$$;%) {
 set default=0
 set timeout=5
 menuentry 'overwrite' {
-  linux $kern $dicmd -- $hocmd
+  linux $kern $dicmd --- $hocmd
   initrd $initrd
 }
 END
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:03:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:03:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7W6-0004iN-KY; Tue, 19 May 2020 19:03:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7W6-0004iD-6u
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:03:06 +0000
X-Inumbo-ID: 4d73e76e-9a03-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d73e76e-9a03-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:02:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vb-0001da-Vh; Tue, 19 May 2020 20:02:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 06/38] di_installcmdline_core: Pass locale on d-i
 command line
Date: Tue, 19 May 2020 20:01:58 +0100
Message-Id: <20200519190230.29519-7-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In buster, d-i wants the locale when setting up the network, i.e. before
the preseed is loaded.

We leave it in the preseed too because why not.

I think this change should be fine for older versions of Debian.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 6e9d2072..ba975b87 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -699,7 +699,8 @@ sub di_installcmdline_core ($$;@) {
                "hostname=$tho->{Name}",
                "$xopts{PreseedScheme}=$ps_url",
                "netcfg/dhcp_timeout=150",
-               "netcfg/choose_interface=$netcfg_interface"
+               "netcfg/choose_interface=$netcfg_interface",
+               "debian-installer/locale=en_GB",
                );
 
     my $debconf_priority= $xopts{DebconfPriority};
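
Put together, the d-i command line built here would carry entries along
these lines (the hostname, preseed scheme, URL, and interface below are
illustrative placeholders, not values from this patch):

```
hostname=host1 preseed/url=http://example/preseed.cfg netcfg/dhcp_timeout=150
netcfg/choose_interface=eth0 debian-installer/locale=en_GB
```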
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:03:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:03:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7WB-0004lg-U8; Tue, 19 May 2020 19:03:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7WB-0004lJ-7w
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:03:11 +0000
X-Inumbo-ID: 4d983100-9a03-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d983100-9a03-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:02:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vc-0001da-6h; Tue, 19 May 2020 20:02:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 07/38] setupboot_grub2: Drop $submenu variable
Date: Tue, 19 May 2020 20:01:59 +0100
Message-Id: <20200519190230.29519-8-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We really only used this to check how many levels deep in { we are.
That can be done by checking $#offsets, which is >0 if we are in a
submenu and not otherwise.  We lose the ability to report the start
line of the submenu, but that's OK.

But as a bonus, we no longer bomb out on nested submenus: previously
the first } would cause $submenu to be undef.  Now we pop from
@offsets and all is fine.

Nested submenus are present in Debian buster.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index ba975b87..b8bf67dc 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -452,15 +452,13 @@ sub setupboot_grub2 ($$$$) {
         my @offsets = (0);
         my $entry;
         my $chainentry;
-        my $submenu;
         while (<$f>) {
             next if m/^\s*\#/ || !m/\S/;
             if (m/^\s*\}\s*$/) {
-                die unless $entry || $submenu;
-                if (!$entry && $submenu) {
-                    logm("Met end of a submenu $submenu->{StartLine}..$.. ".
+                die unless $entry || $#offsets;
+                if (!$entry && $#offsets) {
+                    logm("Met end of a submenu at $. (@offsets) ".
                         "Our want kern is $want_kernver");
-                    $submenu= undef;
                     pop @offsets;
                     $offsets[$#offsets]++;
                     next;
@@ -510,7 +508,6 @@ sub setupboot_grub2 ($$$$) {
                 $offsets[$#offsets]++;
             }
             if (m/^\s*submenu\s+[\'\"](.*)[\'\"].*\{\s*$/) {
-                $submenu={ StartLine =>$., MenuEntryPath => join ">", @offsets };
                 push @offsets,(0);
             }
             if (m/^\s*chainloader\s*\/EFI\/osstest\/xen.efi/) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:03:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7WH-0004oo-7e; Tue, 19 May 2020 19:03:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7WG-0004o4-7z
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:03:16 +0000
X-Inumbo-ID: 4dc93020-9a03-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dc93020-9a03-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:02:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vc-0001da-EC; Tue, 19 May 2020 20:02:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 08/38] ts-leak-check: Ignore buster's udevd too
Date: Tue, 19 May 2020 20:02:00 +0100
Message-Id: <20200519190230.29519-9-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For reasons I don't propose to investigate, on buster udevd shows up
like this:

  2019-11-26 18:13:48 Z LEAKED [process 2633 /lib/systemd/systemd-udevd] process: root      2633  1555  0 18:10 ?        00:00:00 /lib/systemd/systemd-udevd

This does not match our suppression.  Add an additional suppression.
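
A minimal sketch of why the old pattern missed (assuming, as the
non-match above suggests, that suppression patterns are anchored full
matches against the process command):

```python
import re

# buster's udevd reports its full path as the command:
cmd = "/lib/systemd/systemd-udevd"

print(bool(re.fullmatch(r"udevd", cmd)))             # False: old suppression
print(bool(re.fullmatch(r".*/systemd-udevd", cmd)))  # True: new suppression
```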

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-leak-check | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-leak-check b/ts-leak-check
index 41e6245d..f3cca8aa 100755
--- a/ts-leak-check
+++ b/ts-leak-check
@@ -202,6 +202,7 @@ xenstore /vm
 xenstore /libxl
 
 process .* udevd
+process .* .*/systemd-udevd
 process .* /.+/systemd-shim
 
 file /var/run/xenstored/db
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:03:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:03:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7WM-0004rz-Gz; Tue, 19 May 2020 19:03:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7WL-0004rB-7l
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:03:21 +0000
X-Inumbo-ID: 4de532a2-9a03-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4de532a2-9a03-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:02:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vc-0001da-Lt; Tue, 19 May 2020 20:02:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 09/38] Bodge systemd random seed arrangements
Date: Tue, 19 May 2020 20:02:01 +0100
Message-Id: <20200519190230.29519-10-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

systemd does not regard the contents of the random seed file as useful
for the purposes of placating the kernel's entropy tracker.  As a
result, the system hangs at boot waiting for entropy.

Fix this by providing a small program which can be used to load a seed
file into /dev/random and also call RNDADDTOENTCNT to add the
appropriate amount to the kernel's counter.

Arrange to run this program instead of
   /lib/systemd/systemd-random-seed load

With systemd the random seed file is in /var/lib/systemd/random-seed
rather than /var/lib/urandom/random-seed.

Unfortunately we must hardcode the actual numerical value of
RNDADDTOENTCNT because we don't have a suitable compiler anywhere
nearby.  It seems to have the same value on i386, amd64, armhf and
arm64, our currently supported architectures.

Thanks to Colin Watson for pointers to the systemd random unit and
Matthew Vernon for instructions on overriding just ExecStart.

I think this change should be a no-op on non-systemd systems.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 .../override.conf                             |  3 ++
 overlay/usr/local/bin/random-seed-add         | 33 +++++++++++++++++++
 2 files changed, 36 insertions(+)
 create mode 100644 overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
 create mode 100755 overlay/usr/local/bin/random-seed-add

diff --git a/overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf b/overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
new file mode 100644
index 00000000..f6cc0f84
--- /dev/null
+++ b/overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
@@ -0,0 +1,3 @@
+[Service]
+ExecStart=
+ExecStart=/usr/local/bin/random-seed-add /var/lib/systemd/random-seed
diff --git a/overlay/usr/local/bin/random-seed-add b/overlay/usr/local/bin/random-seed-add
new file mode 100755
index 00000000..89e75c4d
--- /dev/null
+++ b/overlay/usr/local/bin/random-seed-add
@@ -0,0 +1,33 @@
+#!/usr/bin/perl -w
+use strict;
+
+open R, '>', '/dev/random' or die "open /dev/random: $!\n";
+R->autoflush(1);
+
+sub rndaddtoentcnt ($) {
+    my ($bits) = @_;
+    my $x = pack 'L', $bits;
+    my $r = ioctl R, 0x40045201, $x;
+    defined $r or die "RNDADDTOENTCNT: $!\n";
+}
+
+sub process_stdin ($) {
+    my ($f) = @_;
+    my $got = read STDIN, $_, 512;
+    defined $got or die "read $f: $!\n";
+    return if !$got;
+    print R $_ or die "write /dev/random: $!\n";
+    my $bits = length($_) * 8;
+    rndaddtoentcnt($bits);
+}
+
+if (!@ARGV) {
+    process_stdin('stdin');
+} else {
+    die "no options supported\n" if $ARGV[0] =~ m/^\-/;
+    foreach my $f (@ARGV) {
+        open STDIN, '<', $f or die "open for reading $f: $!\n";
+        process_stdin($f);
+    }
+}
+
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7bx-0005ZU-CT; Tue, 19 May 2020 19:09:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7bw-0005ZP-JY
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:08 +0000
X-Inumbo-ID: 36abac6e-9a04-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36abac6e-9a04-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:09:08 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vl-0001da-2x; Tue, 19 May 2020 20:02:45 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 37/38] buster: Extend workaround for dhcpd EROFS bug
Date: Tue, 19 May 2020 20:02:29 +0100
Message-Id: <20200519190230.29519-38-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 6c289cc7..e1ce757e 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1617,7 +1617,7 @@ sub debian_dhcp_rofs_fix ($$) {
     # / is still ro.  In stretch, the isc dhcp client spins requesting
     # an address and then sending a DHCPDECLINE (and then, usually,
     # eventually works).
-    return '' unless $ho->{Suite} =~ m/stretch/;
+    return '' unless $ho->{Suite} =~ m/stretch|buster/;
     my $script = "$rootdir/lib/udev/ifupdown-hotplug";
     <<END.<<'ENDQ'.<<END
 set -ex
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7c2-0005Zu-KO; Tue, 19 May 2020 19:09:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7c1-0005Zo-Gq
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:13 +0000
X-Inumbo-ID: 37edfa64-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37edfa64-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:09:10 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vj-0001da-A8; Tue, 19 May 2020 20:02:43 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 29/38] mg-debian-installer-update: support
 overlay-initrd-SUITE
Date: Tue, 19 May 2020 20:02:21 +0100
Message-Id: <20200519190230.29519-30-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This lets us patch the installer more easily.

No uses yet.
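
The reason simply appending works (a sketch; the payload contents are
made up): the kernel unpacks an initramfs made of several cpio.gz
segments back to back, and gzip itself permits concatenated members:

```python
import gzip

# What the ">> $d.new" append in the patch produces: the installer's own
# archive followed by one more gzipped segment for the overlay.
base = gzip.compress(b"installer files")
overlay = gzip.compress(b"overlay files")
combined = base + overlay

# A multi-member gzip stream decompresses to both payloads in order:
print(gzip.decompress(combined))   # b'installer filesoverlay files'
```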

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 mg-debian-installer-update | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/mg-debian-installer-update b/mg-debian-installer-update
index f1e682f9..fb4fe2ab 100755
--- a/mg-debian-installer-update
+++ b/mg-debian-installer-update
@@ -33,6 +33,8 @@ sbase=$site/dists/$suite
 
 src=$sbase/main/installer-$arch/current/images/netboot/
 
+osstest_dir="$(pwd)"
+
 case ${suite}_${arch} in
     lenny_armhf|squeeze_armhf|lenny_arm64|squeeze_arm64|wheezy_arm64)
         # No such thing.
@@ -188,6 +190,24 @@ if [ "x$specialkernel" != x ]; then
     rm -rf x
 fi
 
+overlay_initrd=$osstest_dir/overlay-initrd-$suite
+if [ -e "$overlay_initrd" ]; then
+    for f in $files; do
+        s=${f/:*} ; d=${f/*:}
+        case "$d" in
+            *initrd*)
+                echo "adding $overlay_initrd to $d"
+                (
+                    set -e
+                    cd "$overlay_initrd"
+                    find -print0 | cpio -0 -Hnewc -o \
+                        | gzip -9nf
+                ) >>$d.new
+                ;;
+        esac
+    done
+fi
+
 for f in $files; do
         s=${f/:*} ; d=${f/*:}
         mv -f $d.new $d
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7c7-0005b0-TH; Tue, 19 May 2020 19:09:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7c6-0005ah-Gt
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:18 +0000
X-Inumbo-ID: 3923b658-9a04-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3923b658-9a04-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:09:12 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vg-0001da-Ud; Tue, 19 May 2020 20:02:41 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 19/38] buster: preseed partman-auto-lvm/guided_size
Date: Tue, 19 May 2020 20:02:11 +0100
Message-Id: <20200519190230.29519-20-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Otherwise we get this question:

  | You may use the whole volume group for guided partitioning, or part
  | of it.  [...]
  | Amount of volume group to use for guided partitioning:

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index cd8b2de0..bec72788 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -938,6 +938,7 @@ d-i partman/choose_partition select finish
 d-i partman/confirm boolean true
 d-i partman-lvm/confirm boolean true
 d-i partman-lvm/device_remove_lvm_span boolean true
+d-i partman-auto-lvm/guided_size string max
 
 d-i partman/confirm_nooverwrite true
 d-i partman-lvm/confirm_nooverwrite true
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cC-0005ce-59; Tue, 19 May 2020 19:09:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cB-0005cI-H8
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:23 +0000
X-Inumbo-ID: 3a4b0a68-9a04-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a4b0a68-9a04-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:09:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vk-0001da-Sz; Tue, 19 May 2020 20:02:44 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 36/38] buster: setupboot_grub2: Handle missing policy
 file bug
Date: Tue, 19 May 2020 20:02:28 +0100
Message-Id: <20200519190230.29519-37-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a complex interaction between update-grub and the Xen build
system on ARM64.  It's not clear exactly what is to blame, but since
we have our own 20_linux_xen bodge anyway, let's wait to fix this
properly until we no longer carry it.
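
The check being added can be sketched like this (illustrative Python;
the file names are examples, and the set stands in for %bootfiles,
which is built by listing /boot on the host):

```python
# Menu entries may name the policy file with a leading "/"; strip at
# most one slash (the m{^/?} match in the Perl) and test membership in
# the set of files actually present in /boot.
bootfiles = {"vmlinuz-4.19.0-9-arm64", "xen-4.11-arm64.gz"}   # example listing

def policy_on_disk(xenpolicy):
    name = xenpolicy[1:] if xenpolicy.startswith("/") else xenpolicy
    return name in bootfiles

# buster's update-grub mentions a policy file that isn't there on ARM64,
# so the entry gets skipped:
print(policy_on_disk("/xenpolicy-4.11"))   # False
```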

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index c0b669c9..6c289cc7 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -496,7 +496,17 @@ sub setupboot_grub2 ($$$$) {
 			 " kernel $entry->{KernVer}, not $want_kernver)");
 		} elsif ($want_xsm && !defined $entry->{Xenpolicy}) {
 		    logm("(skipping entry at $entry->{StartLine}..$.;".
-			 " XSM policy file not present)");
+			 " XSM policy file not mentioned)");
+		} elsif ($ho->{Suite} =~ m/buster/ &&
+			 defined $entry->{Xenpolicy} &&
+			 !$bootfiles{
+                             $entry->{Xenpolicy} =~ m{^/?} ? $' : die
+						 }) {
+		    # Our 20_linux_xen bodge with buster's update-grub
+		    # generates entries which mention /boot/xenpolicy-xen
+		    # even though that file doesn't exist on ARM64.
+		    logm("(skipping entry at $entry->{StartLine}..$.;".
+			 " XSM policy file not on disk!)");
 		} else {
 		    # yes!
 		    last;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cH-0005fA-EA; Tue, 19 May 2020 19:09:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cG-0005eR-GX
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:28 +0000
X-Inumbo-ID: 3b9798a0-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b9798a0-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:09:16 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vk-0001da-Du; Tue, 19 May 2020 20:02:44 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 34/38] buster: grub,
 arm64: extend chainloading workaround
Date: Tue, 19 May 2020 20:02:26 +0100
Message-Id: <20200519190230.29519-35-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

multiboot[2] still isn't supported by buster's grub on ARM*, so extend
the chainload workaround to cover buster too.

Also link to the bug report.

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 77508d19..151677ed 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -443,9 +443,10 @@ sub setupboot_grub2 ($$$$) {
     my $kernkey= (defined $xenhopt ? 'KernDom0' : 'KernOnly');
 
     # Grub2 on jessie/stretch ARM* doesn't do multiboot, so we must chainload.
+    # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=884770
     my $need_uefi_chainload =
         get_host_property($ho, "firmware") eq "uefi" &&
-        $ho->{Suite} =~ m/jessie|stretch/ && $ho->{Arch} =~ m/^arm/;
+        $ho->{Suite} =~ m/jessie|stretch|buster/ && $ho->{Arch} =~ m/^arm/;
 
     my $parsemenu= sub {
         my $f= bl_getmenu_open($ho, $rmenu, "$stash/$ho->{Name}--grub.cfg.1");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cM-0005hG-NU; Tue, 19 May 2020 19:09:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cL-0005go-H8
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:33 +0000
X-Inumbo-ID: 3e4c4d34-9a04-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e4c4d34-9a04-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:09:20 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vj-0001da-Tb; Tue, 19 May 2020 20:02:44 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 32/38] buster: chiark-scripts: Install a new version
 on buster too
Date: Tue, 19 May 2020 20:02:24 +0100
Message-Id: <20200519190230.29519-33-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We need various fixes that are not in buster, sadly.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/production-config b/production-config
index f0ddc132..e3870d47 100644
--- a/production-config
+++ b/production-config
@@ -107,6 +107,7 @@ TftpGrubVersion XXXX-XX-XX
 
 DebianExtraPackages_jessie chiark-scripts_6.0.3~citrix1_all.deb
 DebianExtraPackages_stretch chiark-scripts_6.0.4~citrix1_all.deb
+DebianExtraPackages_buster chiark-scripts_6.0.5~citrix1_all.deb
 
 DebianExtraPackages_uefi_i386_jessie   extradebs-uefi-i386-2018-04-01/
 DebianExtraPackages_uefi_amd64_jessie  extradebs-uefi-amd64-2018-04-01/
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cS-0005kI-02; Tue, 19 May 2020 19:09:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cQ-0005ja-I5
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:38 +0000
X-Inumbo-ID: 3f8783d0-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f8783d0-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:09:22 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vk-0001da-LP; Tue, 19 May 2020 20:02:44 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 35/38] buster: setupboot_grub2: Note what files exist
 in /boot
Date: Tue, 19 May 2020 20:02:27 +0100
Message-Id: <20200519190230.29519-36-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Nothing uses this yet.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 151677ed..c0b669c9 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -448,6 +448,11 @@ sub setupboot_grub2 ($$$$) {
         get_host_property($ho, "firmware") eq "uefi" &&
         $ho->{Suite} =~ m/jessie|stretch|buster/ && $ho->{Arch} =~ m/^arm/;
 
+    my %bootfiles =
+	map { $_ => 1 }
+	split / /,
+	target_cmd_output_root($ho, "cd /boot && echo *");
+
     my $parsemenu= sub {
         my $f= bl_getmenu_open($ho, $rmenu, "$stash/$ho->{Name}--grub.cfg.1");
     
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cX-0005ni-9K; Tue, 19 May 2020 19:09:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cV-0005mc-I7
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:43 +0000
X-Inumbo-ID: 4106a6a0-9a04-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4106a6a0-9a04-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:09:25 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vf-0001da-IG; Tue, 19 May 2020 20:02:40 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 17/38] buster: ts-host-install: Extend net.ifnames
 workaround
Date: Tue, 19 May 2020 20:02:09 +0100
Message-Id: <20200519190230.29519-18-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Really we should fix this by making a .deb in Debian that we could
install.  But this is a longer-term project.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-install | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-host-install b/ts-host-install
index 7a72a867..fe26f70f 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -282,7 +282,7 @@ END
 
     # Don't use "Predictable Network Interface Names"
     # https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
-    push @hocmdline, "net.ifnames=0" if $ho->{Suite} =~ m/stretch/;
+    push @hocmdline, "net.ifnames=0" if $ho->{Suite} =~ m/stretch|buster/;
 
     push @hocmdline,
         get_host_property($ho, "linux-boot-append $ho->{Suite}", ''),
-- 
2.20.1
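[Editorial note] The one-line change above just widens the suite regex from `stretch` to `stretch|buster`. A shell analogue of that alternation (the loop and labels are illustrative; the suite names are from the patch):

```shell
# For each suite, decide whether the net.ifnames=0 workaround applies,
# mirroring the m/stretch|buster/ match in ts-host-install.
result=""
for suite in jessie stretch buster; do
  if echo "$suite" | grep -Eq 'stretch|buster'; then
    result="$result $suite:ifnames=0"
  else
    result="$result $suite:default"
  fi
done
result=${result# }   # drop the leading separator
echo "$result"
```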



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cb-0005qW-I9; Tue, 19 May 2020 19:09:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7ca-0005pm-Hr
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:48 +0000
X-Inumbo-ID: 4313cf9a-9a04-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4313cf9a-9a04-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:09:28 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vl-0001da-9m; Tue, 19 May 2020 20:02:45 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 38/38] buster: Switch to Debian buster as the default
 suite
Date: Tue, 19 May 2020 20:02:30 +0100
Message-Id: <20200519190230.29519-39-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest.pm b/Osstest.pm
index 1e381d8f..63dddd95 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -87,7 +87,7 @@ our %c = qw(
 
     Images images
 
-    DebianSuite stretch
+    DebianSuite buster
     DebianMirrorSubpath debian
 
     TestHostKeypairPath id_rsa_osstest
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:09:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cg-0005uD-RJ; Tue, 19 May 2020 19:09:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cf-0005tF-Hz
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:53 +0000
X-Inumbo-ID: 44479798-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44479798-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:09:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vh-0001da-HU; Tue, 19 May 2020 20:02:41 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 21/38] buster: Extend ARM clock workaround
Date: Tue, 19 May 2020 20:02:13 +0100
Message-Id: <20200519190230.29519-22-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index bec72788..6fed0b75 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -248,7 +248,7 @@ END
 	push @xenkopt, $xenkopt;
 	# https://bugs.xenproject.org/xen/bug/45
 	push @xenkopt, "clk_ignore_unused"
-	    if $ho->{Suite} =~ m/wheezy|jessie|stretch/;
+	    if $ho->{Suite} =~ m/wheezy|jessie|stretch|buster/;
 
 	$xenkopt = join ' ', @xenkopt;
 	logm("Dom0 Linux options: $xenkopt");
-- 
2.20.1
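[Editorial note] The hunk above appends `clk_ignore_unused` to the accumulated dom0 kernel options for the listed suites, then joins them with spaces. A shell analogue of that flow (the option values are illustrative; the suite list is from the patch):

```shell
# Accumulate dom0 kernel options, adding the ARM clock workaround for
# the suites named in the patch, as Osstest/Debian.pm does for @xenkopt.
xenkopt="console=hvc0 earlyprintk=xen"
suite=buster
case "$suite" in
  wheezy|jessie|stretch|buster)
    xenkopt="$xenkopt clk_ignore_unused" ;;
esac
echo "Dom0 Linux options: $xenkopt"
```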



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cm-0005yq-4N; Tue, 19 May 2020 19:10:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7ck-0005xW-IA
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:09:58 +0000
X-Inumbo-ID: 45943be2-9a04-11ea-ae69-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45943be2-9a04-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 19:09:33 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vg-0001da-DV; Tue, 19 May 2020 20:02:40 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 18/38] buster: Deinstall the "systemd" package
Date: Tue, 19 May 2020 20:02:10 +0100
Message-Id: <20200519190230.29519-19-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The systemd package installs a PAM rule which causes logins to hang.  It
also seems to cause some kind of udev wedge.

We are using sysvinit so this package is not desirable.  Empirically,
removing it makes the system work.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 0958d080..cd8b2de0 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -842,6 +842,7 @@ sub preseed_base ($$$;@) {
     if ( $suite !~ /squeeze|wheezy/ ) {
        preseed_hook_command($ho, 'late_command', $sfx, <<END)
 in-target apt-get install -y sysvinit-core
+in-target apt-get remove -y systemd
 END
     }
 
-- 
2.20.1
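[Editorial note] The order of the two `in-target` lines in the late_command above presumably matters: sysvinit-core is installed before systemd is removed, so the target is never left without an init. A stubbed sketch of what the command amounts to (the `in_target` helper is illustrative; we are not inside a debian-installer target here):

```shell
# Stub out in-target so the sequence can be shown without a d-i chroot.
in_target() { echo "in-target: $*"; }

log=$(
  in_target apt-get install -y sysvinit-core
  in_target apt-get remove -y systemd
)
echo "$log"
```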



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cq-0006PT-Ig; Tue, 19 May 2020 19:10:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cp-0006FG-IF
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:03 +0000
X-Inumbo-ID: 46d66bc4-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46d66bc4-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:09:35 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vj-0001da-Fu; Tue, 19 May 2020 20:02:43 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 30/38] overlay-initrd-buster/sbin/reopen-console: Copy
 from Debian
Date: Tue, 19 May 2020 20:02:22 +0100
Message-Id: <20200519190230.29519-31-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We are going to patch this file to work around a bug, using the new
overlay mechanism.

The first step is to include the file in our overlay so that it
overwrites the stock copy.  Currently the two are identical, so this is
a no-op: no functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-initrd-buster/sbin/reopen-console | 94 +++++++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100755 overlay-initrd-buster/sbin/reopen-console

diff --git a/overlay-initrd-buster/sbin/reopen-console b/overlay-initrd-buster/sbin/reopen-console
new file mode 100755
index 00000000..dd354deb
--- /dev/null
+++ b/overlay-initrd-buster/sbin/reopen-console
@@ -0,0 +1,94 @@
+#!/bin/sh
+
+# First find the enabled consoles from the kernel, noting if one is 'preferred'
+# Record these.
+# Run the startup scripts on the preferred console
+
+# In order to have D-I appear on all consoles, modify the inittab to
+# add one entry for each console, running debian-installer.
+# Finally HUP init so that it runs those installers
+# (but doesn't rerun the sysinit startup stuff, including this script)
+
+
+NL="
+"
+
+LOGGER_UP=0
+LOG_FILE=/var/log/reopen-console
+
+log() {
+	# In very early startup we don't have syslog. Log to file that
+	# we can flush out later so we can at least see what happened
+	# at early startup
+	if [ $LOGGER_UP -eq 1 ]; then
+	        logger -t reopen-console "$@"
+	else
+		echo "$@" >> $LOG_FILE
+	fi
+}
+
+flush_logger () {
+	cat $LOG_FILE | logger -t reopen-console
+	rm $LOG_FILE
+}
+
+consoles=
+preferred=
+# Retrieve all enabled consoles from kernel; ignore those
+# for which no device file exists
+
+kernelconsoles="$(cat /proc/consoles)"
+for cons in $(echo "$kernelconsoles" | sed -n -r -e 's/(^.*)  .*\((.*)\).*$/\1/p' )
+do
+	log "Looking at console $cons from /proc/consoles"
+	status=$(echo "$kernelconsoles" | grep $cons | sed -n -r -e 's/(^.*) *.*\((.*)\).*$/\2/p' )
+	if [ -e "/dev/$cons" ] && [ $(echo "$status" | grep -o 'E') ]; then
+		consoles="${consoles:+$consoles$NL}$cons"
+		log "   Adding $cons to consoles list"
+	fi
+	# 'C' console is 'most preferred'.
+	if [ $(echo "$status" | grep -o 'C') ]; then
+		preferred="$cons"
+		log "   $cons is preferred"
+	fi
+done
+
+if [ -z "$consoles" ]; then
+	# Nothing found? Default to /dev/console.
+	log "Found no consoles! Defaulting to /dev/console"
+	consoles=console
+fi
+if [ -z "$preferred" ]; then
+	# None marked preferred? Use the first one
+	preferred=$(echo "$consoles" | head -n 1)
+	log "Found no preferred console. Picking $preferred"
+fi
+
+for cons in $consoles
+do
+	echo "/dev/$cons " >> /var/run/console-devices
+done
+echo "/dev/$preferred " > /var/run/console-preferred
+
+
+# Add debian-installer lines into inittab - one per console
+for cons in $consoles
+do
+	log "Adding inittab entry for $cons"
+	echo "$cons::respawn:/sbin/debian-installer" >> /etc/inittab
+done
+
+# Run the startup scripts once, using the preferred console
+cons=$(cat /var/run/console-preferred)
+# Some other session may have that console as ctty. Steal it from them
+/sbin/steal-ctty $cons "$@"
+
+# Now we should have syslog running, so flush our log entries
+LOGGER_UP=1
+flush_logger
+
+# Finally restart init to run debian-installer on discovered consoles
+log "Restarting init to start d-i on the consoles we found"
+kill -HUP 1
+
+exit 0
-- 
2.20.1
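[Editorial note] The /proc/consoles parsing in the script above can be exercised offline by feeding the same sed expression a captured sample instead of the live file (the sample lines below are illustrative of the `name flags (caps) dev` format, not taken from a real host):

```shell
# Sample in the shape of /proc/consoles: name, flags, capabilities in
# parentheses, then major:minor.
kernelconsoles='tty0                 -WU (E  p  )    4:1
ttyS0                -W- (EC p a)    4:64'

# Same sed as reopen-console: capture everything before the last run of
# two spaces that still precedes the parenthesised capability field.
found=""
for cons in $(echo "$kernelconsoles" | sed -n -r -e 's/(^.*)  .*\((.*)\).*$/\1/p'); do
  found="$found$cons "
done
echo "$found"
```

Note the extracted names carry trailing padding; the script relies on the unquoted `for` word-splitting to strip it, as the loop here does.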



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7cv-0006ll-SY; Tue, 19 May 2020 19:10:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cu-0006ke-IV
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:08 +0000
X-Inumbo-ID: 481b40cc-9a04-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 481b40cc-9a04-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:09:37 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vd-0001da-CR; Tue, 19 May 2020 20:02:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 12/38] Debian: preseed: use priority= alias
Date: Tue, 19 May 2020 20:02:04 +0100
Message-Id: <20200519190230.29519-13-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This marginally reduces command-line clutter.  This alias has been
supported approximately forever.  (And this code is currently only
used when DebconfPriority is set, which it generally isn't.)

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 78d8c97e..c3fbc32c 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -702,7 +702,7 @@ sub di_installcmdline_core ($$;@) {
                );
 
     my $debconf_priority= $xopts{DebconfPriority};
-    push @cl, "debconf/priority=$debconf_priority"
+    push @cl, "priority=$debconf_priority"
         if defined $debconf_priority;
     push @cl, "rescue/enable=true" if $xopts{RescueMode};
 
-- 
2.20.1
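[Editorial note] The change above swaps the long parameter for d-i's short alias; both end up as one word on the installer command line. A hypothetical assembly of that command line (values illustrative):

```shell
# Build the installer command line as di_installcmdline_core does,
# appending the priority only when DebconfPriority is set.
debconf_priority=critical
cl="auto=true"
if [ -n "$debconf_priority" ]; then
  cl="$cl priority=$debconf_priority"   # alias for debconf/priority=
fi
echo "$cl"
```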



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7d1-0006qf-6s; Tue, 19 May 2020 19:10:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7cz-0006p6-IL
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:13 +0000
X-Inumbo-ID: 49889432-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49889432-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:09:39 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vi-0001da-Gw; Tue, 19 May 2020 20:02:42 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 25/38] 20_linux_xen: Copy Debian buster version into
 our initramfs area
Date: Tue, 19 May 2020 20:02:17 +0100
Message-Id: <20200519190230.29519-26-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is from 41e42571ebc50fa351cd63ce40044946652c5c72 in Debian's grub
package.

We are going to want to modify this to support XSM/FLASK and cope with
upstream build outputs.

In this commit we dump the exact file contents across.  It has no
effect yet because of the ".in" extension: the file is in fact a
template.

At the time of writing I am trying to send our substantive changes
upstream via Debian's Gitlab:
  https://salsa.debian.org/grub-team/grub/-/merge_requests/18

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-buster/etc/grub.d/20_linux_xen.in | 299 ++++++++++++++++++++++
 1 file changed, 299 insertions(+)
 create mode 100644 overlay-buster/etc/grub.d/20_linux_xen.in

diff --git a/overlay-buster/etc/grub.d/20_linux_xen.in b/overlay-buster/etc/grub.d/20_linux_xen.in
new file mode 100644
index 00000000..98ef163c
--- /dev/null
+++ b/overlay-buster/etc/grub.d/20_linux_xen.in
@@ -0,0 +1,299 @@
+#! /bin/sh
+set -e
+
+# grub-mkconfig helper script.
+# Copyright (C) 2006,2007,2008,2009,2010  Free Software Foundation, Inc.
+#
+# GRUB is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# GRUB is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GRUB.  If not, see <http://www.gnu.org/licenses/>.
+
+prefix="@prefix@"
+exec_prefix="@exec_prefix@"
+datarootdir="@datarootdir@"
+
+. "$pkgdatadir/grub-mkconfig_lib"
+
+export TEXTDOMAIN=@PACKAGE@
+export TEXTDOMAINDIR="@localedir@"
+
+CLASS="--class gnu-linux --class gnu --class os --class xen"
+SUPPORTED_INITS="sysvinit:/lib/sysvinit/init systemd:/lib/systemd/systemd upstart:/sbin/upstart"
+
+if [ "x${GRUB_DISTRIBUTOR}" = "x" ] ; then
+  OS=GNU/Linux
+else
+  OS="${GRUB_DISTRIBUTOR} GNU/Linux"
+  CLASS="--class $(echo ${GRUB_DISTRIBUTOR} | tr 'A-Z' 'a-z' | cut -d' ' -f1|LC_ALL=C sed 's,[^[:alnum:]_],_,g') ${CLASS}"
+fi
+
+# loop-AES arranges things so that /dev/loop/X can be our root device, but
+# the initrds that Linux uses don't like that.
+case ${GRUB_DEVICE} in
+  /dev/loop/*|/dev/loop[0-9])
+    GRUB_DEVICE=`losetup ${GRUB_DEVICE} | sed -e "s/^[^(]*(\([^)]\+\)).*/\1/"`
+    # We can't cope with devices loop-mounted from files here.
+    case ${GRUB_DEVICE} in
+      /dev/*) ;;
+      *) exit 0 ;;
+    esac
+  ;;
+esac
+
+# btrfs may reside on multiple devices. We cannot pass them as value of root= parameter
+# and mounting btrfs requires user space scanning, so force UUID in this case.
+if [ "x${GRUB_DEVICE_UUID}" = "x" ] || [ "x${GRUB_DISABLE_LINUX_UUID}" = "xtrue" ] \
+    || ! test -e "/dev/disk/by-uuid/${GRUB_DEVICE_UUID}" \
+    || ( test -e "${GRUB_DEVICE}" && uses_abstraction "${GRUB_DEVICE}" lvm ); then
+  LINUX_ROOT_DEVICE=${GRUB_DEVICE}
+else
+  LINUX_ROOT_DEVICE=UUID=${GRUB_DEVICE_UUID}
+fi
+
+# Allow overriding GRUB_CMDLINE_LINUX and GRUB_CMDLINE_LINUX_DEFAULT.
+if [ "${GRUB_CMDLINE_LINUX_XEN_REPLACE}" ]; then
+  GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX_XEN_REPLACE}"
+fi
+if [ "${GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT}" ]; then
+  GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT}"
+fi
+
+case x"$GRUB_FS" in
+    xbtrfs)
+	rootsubvol="`make_system_path_relative_to_its_root /`"
+	rootsubvol="${rootsubvol#/}"
+	if [ "x${rootsubvol}" != x ]; then
+	    GRUB_CMDLINE_LINUX="rootflags=subvol=${rootsubvol} ${GRUB_CMDLINE_LINUX}"
+	fi;;
+    xzfs)
+	rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2>/dev/null || true`
+	bootfs="`make_system_path_relative_to_its_root / | sed -e "s,@$,,"`"
+	LINUX_ROOT_DEVICE="ZFS=${rpool}${bootfs%/}"
+	;;
+esac
+
+title_correction_code=
+
+linux_entry ()
+{
+  os="$1"
+  version="$2"
+  xen_version="$3"
+  type="$4"
+  args="$5"
+  xen_args="$6"
+  if [ -z "$boot_device_id" ]; then
+      boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
+  fi
+  if [ x$type != xsimple ] ; then
+      if [ x$type = xrecovery ] ; then
+	  title="$(gettext_printf "%s, with Xen %s and Linux %s (%s)" "${os}" "${xen_version}" "${version}" "$(gettext "${GRUB_RECOVERY_TITLE}")")"
+      elif [ "${type#init-}" != "$type" ] ; then
+	  title="$(gettext_printf "%s, with Xen %s and Linux %s (%s)" "${os}" "${xen_version}" "${version}" "${type#init-}")"
+      else
+	  title="$(gettext_printf "%s, with Xen %s and Linux %s" "${os}" "${xen_version}" "${version}")"
+      fi
+      replacement_title="$(echo "Advanced options for ${OS}" | sed 's,>,>>,g')>$(echo "$title" | sed 's,>,>>,g')"
+      if [ x"Xen ${xen_version}>$title" = x"$GRUB_ACTUAL_DEFAULT" ]; then
+         quoted="$(echo "$GRUB_ACTUAL_DEFAULT" | grub_quote)"
+         title_correction_code="${title_correction_code}if [ \"x\$default\" = '$quoted' ]; then default='$(echo "$replacement_title" | grub_quote)'; fi;"
+         grub_warn "$(gettext_printf "Please don't use old title \`%s' for GRUB_DEFAULT, use \`%s' (for versions before 2.00) or \`%s' (for 2.00 or later)" "$GRUB_ACTUAL_DEFAULT" "$replacement_title" "gnulinux-advanced-$boot_device_id>gnulinux-$version-$type-$boot_device_id")"
+      fi
+      echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'xen-gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
+  else
+      title="$(gettext_printf "%s, with Xen hypervisor" "${os}")"
+      echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'xen-gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
+  fi
+  if [ x$type != xrecovery ] ; then
+      save_default_entry | grub_add_tab | sed "s/^/$submenu_indentation/"
+  fi
+
+  if [ -z "${prepare_boot_cache}" ]; then
+    prepare_boot_cache="$(prepare_grub_to_access_device ${GRUB_DEVICE_BOOT} | grub_add_tab)"
+  fi
+  printf '%s\n' "${prepare_boot_cache}" | sed "s/^/$submenu_indentation/"
+  xmessage="$(gettext_printf "Loading Xen %s ..." ${xen_version})"
+  lmessage="$(gettext_printf "Loading Linux %s ..." ${version})"
+  sed "s/^/$submenu_indentation/" << EOF
+	echo	'$(echo "$xmessage" | grub_quote)'
+        if [ "\$grub_platform" = "pc" -o "\$grub_platform" = "" ]; then
+            xen_rm_opts=
+        else
+            xen_rm_opts="no-real-mode edd=off"
+        fi
+	${xen_loader}	${rel_xen_dirname}/${xen_basename} placeholder ${xen_args} \${xen_rm_opts}
+	echo	'$(echo "$lmessage" | grub_quote)'
+	${module_loader}	${rel_dirname}/${basename} placeholder root=${linux_root_device_thisversion} ro ${args}
+EOF
+  if test -n "${initrd}" ; then
+    # TRANSLATORS: ramdisk isn't identifier. Should be translated.
+    message="$(gettext_printf "Loading initial ramdisk ...")"
+    sed "s/^/$submenu_indentation/" << EOF
+	echo	'$(echo "$message" | grub_quote)'
+	${module_loader}	--nounzip   ${rel_dirname}/${initrd}
+EOF
+  fi
+  sed "s/^/$submenu_indentation/" << EOF
+}
+EOF
+}
+
+linux_list=
+for i in /boot/vmlinu[xz]-* /vmlinu[xz]-* /boot/kernel-*; do
+    if grub_file_is_not_garbage "$i"; then
+    	basename=$(basename $i)
+	version=$(echo $basename | sed -e "s,^[^0-9]*-,,g")
+	dirname=$(dirname $i)
+	config=
+	for j in "${dirname}/config-${version}" "${dirname}/config-${alt_version}" "/etc/kernels/kernel-config-${version}" ; do
+	    if test -e "${j}" ; then
+		config="${j}"
+		break
+	    fi
+	done
+        if (grep -qx "CONFIG_XEN_DOM0=y" "${config}" 2> /dev/null || grep -qx "CONFIG_XEN_PRIVILEGED_GUEST=y" "${config}" 2> /dev/null); then linux_list="$linux_list $i" ; fi
+    fi
+done
+if [ "x${linux_list}" = "x" ] ; then
+    exit 0
+fi
+
+file_is_not_sym () {
+    case "$1" in
+	*/xen-syms-*)
+	    return 1;;
+	*)
+	    return 0;;
+    esac
+}
+
+xen_list=
+for i in /boot/xen*; do
+    if grub_file_is_not_garbage "$i" && file_is_not_sym "$i" ; then xen_list="$xen_list $i" ; fi
+done
+prepare_boot_cache=
+boot_device_id=
+
+title_correction_code=
+
+machine=`uname -m`
+
+case "$machine" in
+    i?86) GENKERNEL_ARCH="x86" ;;
+    mips|mips64) GENKERNEL_ARCH="mips" ;;
+    mipsel|mips64el) GENKERNEL_ARCH="mipsel" ;;
+    arm*) GENKERNEL_ARCH="arm" ;;
+    *) GENKERNEL_ARCH="$machine" ;;
+esac
+
+# Extra indentation to add to menu entries in a submenu. We're not in a submenu
+# yet, so it's empty. In a submenu it will be equal to '\t' (one tab).
+submenu_indentation=""
+
+is_top_level=true
+
+while [ "x${xen_list}" != "x" ] ; do
+    list="${linux_list}"
+    current_xen=`version_find_latest $xen_list`
+    xen_basename=`basename ${current_xen}`
+    xen_dirname=`dirname ${current_xen}`
+    rel_xen_dirname=`make_system_path_relative_to_its_root $xen_dirname`
+    xen_version=`echo $xen_basename | sed -e "s,.gz$,,g;s,^xen-,,g"`
+    if [ -z "$boot_device_id" ]; then
+	boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
+    fi
+    if [ "x$is_top_level" != xtrue ]; then
+	echo "	submenu '$(gettext_printf "Xen hypervisor, version %s" "${xen_version}" | grub_quote)' \$menuentry_id_option 'xen-hypervisor-$xen_version-$boot_device_id' {"
+    fi
+    if ($grub_file --is-x86-multiboot2 $current_xen); then
+	xen_loader="multiboot2"
+	module_loader="module2"
+    else
+	xen_loader="multiboot"
+	module_loader="module"
+    fi
+    while [ "x$list" != "x" ] ; do
+	linux=`version_find_latest $list`
+	gettext_printf "Found linux image: %s\n" "$linux" >&2
+	basename=`basename $linux`
+	dirname=`dirname $linux`
+	rel_dirname=`make_system_path_relative_to_its_root $dirname`
+	version=`echo $basename | sed -e "s,^[^0-9]*-,,g"`
+	alt_version=`echo $version | sed -e "s,\.old$,,g"`
+	linux_root_device_thisversion="${LINUX_ROOT_DEVICE}"
+
+	initrd=
+	for i in "initrd.img-${version}" "initrd-${version}.img" "initrd-${version}.gz" \
+	   "initrd-${version}" "initramfs-${version}.img" \
+	   "initrd.img-${alt_version}" "initrd-${alt_version}.img" \
+	   "initrd-${alt_version}" "initramfs-${alt_version}.img" \
+	   "initramfs-genkernel-${version}" \
+	   "initramfs-genkernel-${alt_version}" \
+	   "initramfs-genkernel-${GENKERNEL_ARCH}-${version}" \
+	   "initramfs-genkernel-${GENKERNEL_ARCH}-${alt_version}" ; do
+	    if test -e "${dirname}/${i}" ; then
+		initrd="$i"
+		break
+	    fi
+	done
+	if test -n "${initrd}" ; then
+	    gettext_printf "Found initrd image: %s\n" "${dirname}/${initrd}" >&2
+	else
+    # "UUID=" magic is parsed by initrds.  Since there's no initrd, it can't work here.
+	    linux_root_device_thisversion=${GRUB_DEVICE}
+	fi
+
+	if [ "x$is_top_level" = xtrue ] && [ "x${GRUB_DISABLE_SUBMENU}" != xy ]; then
+	    linux_entry "${OS}" "${version}" "${xen_version}" simple \
+		"${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}" "${GRUB_CMDLINE_XEN} ${GRUB_CMDLINE_XEN_DEFAULT}"
+
+	    submenu_indentation="$grub_tab$grub_tab"
+    
+	    if [ -z "$boot_device_id" ]; then
+		boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
+	    fi
+            # TRANSLATORS: %s is replaced with an OS name
+	    echo "submenu '$(gettext_printf "Advanced options for %s (with Xen hypervisor)" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"
+	echo "	submenu '$(gettext_printf "Xen hypervisor, version %s" "${xen_version}" | grub_quote)' \$menuentry_id_option 'xen-hypervisor-$xen_version-$boot_device_id' {"
+	   is_top_level=false
+	fi
+
+	linux_entry "${OS}" "${version}" "${xen_version}" advanced \
+	    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}" "${GRUB_CMDLINE_XEN} ${GRUB_CMDLINE_XEN_DEFAULT}"
+	for supported_init in ${SUPPORTED_INITS}; do
+	    init_path="${supported_init#*:}"
+	    if [ -x "${init_path}" ] && [ "$(readlink -f /sbin/init)" != "$(readlink -f "${init_path}")" ]; then
+		linux_entry "${OS}" "${version}" "${xen_version}" "init-${supported_init%%:*}" \
+		    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT} init=${init_path}" "${GRUB_CMDLINE_XEN} ${GRUB_CMDLINE_XEN_DEFAULT}"
+
+	    fi
+	done
+	if [ "x${GRUB_DISABLE_RECOVERY}" != "xtrue" ]; then
+	    linux_entry "${OS}" "${version}" "${xen_version}" recovery \
+		"single ${GRUB_CMDLINE_LINUX}" "${GRUB_CMDLINE_XEN}"
+	fi
+
+	list=`echo $list | tr ' ' '\n' | fgrep -vx "$linux" | tr '\n' ' '`
+    done
+    if [ x"$is_top_level" != xtrue ]; then
+	echo '	}'
+    fi
+    xen_list=`echo $xen_list | tr ' ' '\n' | fgrep -vx "$current_xen" | tr '\n' ' '`
+done
+
+# If at least one kernel was found, then we need to
+# add a closing '}' for the submenu command.
+if [ x"$is_top_level" != xtrue ]; then
+  echo '}'
+fi
+
+echo "$title_correction_code"
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7d6-0006vZ-HW; Tue, 19 May 2020 19:10:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7d4-0006ts-J7
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:18 +0000
X-Inumbo-ID: 4b4a7d26-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b4a7d26-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:09:42 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vd-0001da-W2; Tue, 19 May 2020 20:02:38 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 14/38] Honour 'LinuxSerialConsole <suite>' host
 property
Date: Tue, 19 May 2020 20:02:06 +0100
Message-Id: <20200519190230.29519-15-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This works like the existing LinuxSerialConsole property, but applies
only to the named suite.

I originally wrote this to try to work around #940028, where multiple
d-i autoinstalls running in parallel lead to hard-to-debug lossage.
Explicitly specifying the console causes the installer to run on only
that one.

However, it turns out that explicitly specifying the console does not
always work and a better fix is needed.  Nevertheless, having added
this feature it seems foolish to throw it away.

Currently there are no hosts with this property, so there is no
functional change.
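A minimal shell sketch of the lookup order the Perl below implements
(property values here are hypothetical; the real code reads them from
the host properties database, and returns early for the value NONE):

```shell
# Suite-specific property wins; otherwise fall back to the generic
# property, then to the ttyS0 default -- mirroring the //= chain in
# get_host_native_linux_console.
suite_console=""          # "LinuxSerialConsole buster", unset here
generic_console="ttyS1"   # "LinuxSerialConsole"
console="${suite_console:-${generic_console:-ttyS0}}"
baud=115200               # $c{Baud} in osstest's config
printf 'console=%s,%sn8\n' "$console" "$baud"
```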

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index ff8103f2..7eeac49f 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1447,7 +1447,10 @@ sub get_target_property ($$;$) {
 sub get_host_native_linux_console ($) {
     my ($ho) = @_;
 
-    my $console = get_host_property($ho, "LinuxSerialConsole", "ttyS0");
+    my $console;
+    $console //= get_host_property($ho, "LinuxSerialConsole $ho->{Suite}")
+	if $ho->{Suite};
+    $console //= get_host_property($ho, "LinuxSerialConsole", "ttyS0");
     return $console if $console eq 'NONE';
 
     return "$console,$c{Baud}n8";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dA-00071G-RZ; Tue, 19 May 2020 19:10:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7d9-000707-Iv
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:23 +0000
X-Inumbo-ID: 4c80120a-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c80120a-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:09:44 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vi-0001da-95; Tue, 19 May 2020 20:02:42 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 24/38] buster: Specify DebianImageFile_SUITE_ARCH
Date: Tue, 19 May 2020 20:02:16 +0100
Message-Id: <20200519190230.29519-25-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/production-config b/production-config
index 103b8915..f0ddc132 100644
--- a/production-config
+++ b/production-config
@@ -98,6 +98,9 @@ DebianSnapshotBackports_jessie http://snapshot.debian.org/archive/debian/2019020
 DebianImageVersion_wheezy 7.2.0
 DebianImageVersion_jessie 8.2.0
 DebianImageVersion_stretch 9.4.0
+DebianImageFile_buster_amd64 debian-10.2.0-amd64-xfce-CD-1.iso
+DebianImageFile_buster_i386 debian-10.2.0-i386-xfce-CD-1.iso
+
 
 # Update with ./mg-netgrub-loader-update
 TftpGrubVersion XXXX-XX-XX
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dG-00077n-6M; Tue, 19 May 2020 19:10:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dE-00075c-KD
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:28 +0000
X-Inumbo-ID: 4dce2cdc-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dce2cdc-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:09:47 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vc-0001da-TG; Tue, 19 May 2020 20:02:36 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 10/38] Debian guests: Write systemd random seed file
Date: Tue, 19 May 2020 20:02:02 +0100
Message-Id: <20200519190230.29519-11-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This involves a new ts-debian-di-fixup script, which runs after
xen-tools.
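The guest-side commands the new helper runs boil down to writing 1 KiB
of kernel randomness with restrictive permissions; sketched here
against a temporary directory rather than a mounted guest volume (the
real target is /var/lib/systemd/random-seed on the guest's root LV):

```shell
seed_dir=$(mktemp -d)
umask 077                                  # seed must not be world-readable
dd if=/dev/urandom of="$seed_dir/random-seed" bs=1k count=1 2>/dev/null
stat -c '%s %a' "$seed_dir/random-seed"    # expect 1024 bytes, mode 600
```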

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm  | 14 ++++++++++++++
 sg-run-job         |  1 +
 ts-debian-di-fixup | 29 +++++++++++++++++++++++++++++
 ts-debian-fixup    |  1 +
 4 files changed, 45 insertions(+)
 create mode 100755 ts-debian-di-fixup

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index b8bf67dc..78d8c97e 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -49,6 +49,7 @@ BEGIN {
                       di_installcmdline_core
                       di_vg_name
                       debian_dhcp_rofs_fix
+		      debian_write_random_seed
                       );
     %EXPORT_TAGS = ( );
 
@@ -1612,4 +1613,17 @@ mv '$script.new' '$script'
 END
 }
 
+sub debian_write_random_seed ($) {
+    my ($gho) = @_;
+    my $mountpoint = '/mnt';
+    my $ho = $gho->{Host};
+    target_cmd_root($ho, <<END);
+        set -ex
+        mount /dev/$gho->{Vg}/$gho->{Lv} $mountpoint
+        umask 077
+        dd if=/dev/urandom of=$mountpoint/var/lib/systemd/random-seed bs=1k count=1
+END
+    guest_umount_lv($ho, $gho);
+}
+
 1;
diff --git a/sg-run-job b/sg-run-job
index aa7953ac..9255096d 100755
--- a/sg-run-job
+++ b/sg-run-job
@@ -511,6 +511,7 @@ proc run-job/test-debian {} {
 
 proc install-guest-debian-di {} {
     run-ts . = ts-debian-di-install
+    run-ts . = ts-debian-di-fixup + debian
     run-ts . = ts-guest-start + debian
 }
 
diff --git a/ts-debian-di-fixup b/ts-debian-di-fixup
new file mode 100755
index 00000000..68cda2f5
--- /dev/null
+++ b/ts-debian-di-fixup
@@ -0,0 +1,29 @@
+#!/usr/bin/perl -w
+# This is part of "osstest", an automated testing framework for Xen.
+# Copyright (C) 2009-2013 Citrix Inc.
+# 
+# This program is free software: you can redistribute it and/or modify
+# it under the terms of the GNU Affero General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+# 
+# This program is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU Affero General Public License for more details.
+# 
+# You should have received a copy of the GNU Affero General Public License
+# along with this program.  If not, see <http://www.gnu.org/licenses/>.
+
+use strict qw(vars);
+use DBI;
+BEGIN { unshift @INC, qw(.); }
+use Osstest;
+use Osstest::TestSupport;
+use Osstest::Debian;
+
+tsreadconfig();
+
+our ($ho,$gho) = ts_get_host_guest(@ARGV);
+
+debian_write_random_seed($gho);
diff --git a/ts-debian-fixup b/ts-debian-fixup
index fef9836e..45bbcd27 100755
--- a/ts-debian-fixup
+++ b/ts-debian-fixup
@@ -202,6 +202,7 @@ sub writecfg () {
     target_putfile_root($ho,10, $cfgstash, $cfgfile);
 }
 
+debian_write_random_seed($gho);
 savecfg();
 ether();
 target_kernkind_check($gho);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dL-0007E3-MQ; Tue, 19 May 2020 19:10:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dJ-0007By-Jy
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:33 +0000
X-Inumbo-ID: 4f048c04-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4f048c04-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:09:48 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vj-0001da-43; Tue, 19 May 2020 20:02:43 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 28/38] 20_linux_xen: Support Xen Security Modules
 (XSM/FLASK)
Date: Tue, 19 May 2020 20:02:20 +0100
Message-Id: <20200519190230.29519-29-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

XSM is enabled by adding "flask=enforcing" as a Xen command line
argument, and providing the policy file as a grub module.

We generate entries both with and without XSM.  If XSM is not compiled
into Xen, then there are no policy files, so no change to the boot
options.
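For illustration, when a policy file is present the generated grub.cfg
entry gains one extra module line loading the policy alongside the
kernel and initrd (the version numbers and paths here are hypothetical):

```
menuentry 'Debian GNU/Linux, with Xen 4.13 (XSM enabled) and Linux 4.19.0-6-amd64' {
	multiboot2	/boot/xen-4.13.gz placeholder flask=enforcing
	module2	/boot/vmlinuz-4.19.0-6-amd64 placeholder root=/dev/mapper/root ro
	module2	--nounzip   /boot/initrd.img-4.19.0-6-amd64
	module2	/boot/xenpolicy-4.13
}
```

Note that only the displayed xen_version string gets the "(XSM
enabled)" suffix; the files actually loaded keep their plain names.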

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-buster/etc/grub.d/20_linux_xen | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/overlay-buster/etc/grub.d/20_linux_xen b/overlay-buster/etc/grub.d/20_linux_xen
index 01dfcb57..4d3294a2 100755
--- a/overlay-buster/etc/grub.d/20_linux_xen
+++ b/overlay-buster/etc/grub.d/20_linux_xen
@@ -84,6 +84,11 @@ esac
 title_correction_code=
 
 linux_entry ()
+{
+  linux_entry_xsm "$@" false
+  linux_entry_xsm "$@" true
+}
+linux_entry_xsm ()
 {
   os="$1"
   version="$2"
@@ -91,6 +96,18 @@ linux_entry ()
   type="$4"
   args="$5"
   xen_args="$6"
+  xsm="$7"
+  # If user wants to enable XSM support, make sure there's
+  # corresponding policy file.
+  if ${xsm} ; then
+      xenpolicy="xenpolicy-$xen_version"
+      if test ! -e "${xen_dirname}/${xenpolicy}" ; then
+	  return
+      fi
+      xen_args="$xen_args flask=enforcing"
+      xen_version="$(gettext_printf "%s (XSM enabled)" "$xen_version")"
+      # xen_version is used for messages only; actual file is xen_basename
+  fi
   if [ -z "$boot_device_id" ]; then
       boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
   fi
@@ -140,6 +157,13 @@ EOF
     sed "s/^/$submenu_indentation/" << EOF
 	echo	'$(echo "$message" | grub_quote)'
 	${module_loader}	--nounzip   ${rel_dirname}/${initrd}
+EOF
+  fi
+  if test -n "${xenpolicy}" ; then
+    message="$(gettext_printf "Loading XSM policy ...")"
+    sed "s/^/$submenu_indentation/" << EOF
+	echo	'$(echo "$message" | grub_quote)'
+	${module_loader}     ${rel_dirname}/${xenpolicy}
 EOF
   fi
   sed "s/^/$submenu_indentation/" << EOF
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dQ-0007JZ-1w; Tue, 19 May 2020 19:10:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dO-0007Hm-KN
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:38 +0000
X-Inumbo-ID: 5048cd82-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5048cd82-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:09:51 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vh-0001da-6z; Tue, 19 May 2020 20:02:41 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 20/38] buster: ts-host-install: NTP not honoured bug
 remains
Date: Tue, 19 May 2020 20:02:12 +0100
Message-Id: <20200519190230.29519-21-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Debian #778564 remains open.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-install | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-host-install b/ts-host-install
index fe26f70f..253dbb5d 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -152,7 +152,7 @@ END
 	    my $done= 0;
 	    while (<EI>) {
 		if (m/^server\b|^pool\b\s/) {
-		    if ($ho->{Suite} =~ m/lenny|squeeze|wheezy|jessie|stretch/) {
+		    if ($ho->{Suite} =~ m/lenny|squeeze|wheezy|jessie|stretch|buster/) {
 			$_= $done ? "" : "server $ntpserver\n";
 		    } else {
 			m/^server \Q$ntpserver\E\s/ or
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dU-0007Op-DC; Tue, 19 May 2020 19:10:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dT-0007Nk-KH
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:43 +0000
X-Inumbo-ID: 58a87a5e-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58a87a5e-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:10:05 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vd-0001da-4l; Tue, 19 May 2020 20:02:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 11/38] ts-debian-di-install: Provide guest with more
 RAM
Date: Tue, 19 May 2020 20:02:03 +0100
Message-Id: <20200519190230.29519-12-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

buster cannot boot with so little RAM because its initramfs and kernel
are too large.  Bump it to 2G.

However, our armhf test nodes have very little RAM, and Debian armhf
still fits in them as a guest, so use a smaller value there.

Keying this off the architecture rather than the available host memory
is better because you do need the bigger value precisely if you are
not using armhf, and this makes osstest less dependent on a completely
accurate and populated host properties database.
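The selection from the one-line diff, sketched as a case switch (the
Perl keys the choice off a regex match on the arch runvar; the value
below is hypothetical):

```shell
arch="armhf"                 # runvar arch; illustrative value
case "$arch" in
    armhf*) ram_mb=768  ;;   # small armhf hosts: smaller guest
    *)      ram_mb=2048 ;;   # buster needs ~2G for kernel + initramfs
esac
echo "ram_mb=$ram_mb"
```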

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-debian-di-install | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-debian-di-install b/ts-debian-di-install
index 9abb4956..d84407cf 100755
--- a/ts-debian-di-install
+++ b/ts-debian-di-install
@@ -64,7 +64,7 @@ $gn ||= 'debian';
 
 our $ho= selecthost($whhost);
 
-our $ram_mb=    512;
+our $ram_mb= $r{arch} =~ m/^armhf/ ? 768 : 2048;
 our $disk_mb= 10000;
 
 our $guesthost= $gn.
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dZ-0007Ud-Nh; Tue, 19 May 2020 19:10:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dY-0007TI-KL
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:48 +0000
X-Inumbo-ID: 5a236182-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a236182-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:10:07 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Ve-0001da-Ef; Tue, 19 May 2020 20:02:38 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 16/38] buster: Extend grub2 uefi no install workaround
Date: Tue, 19 May 2020 20:02:08 +0100
Message-Id: <20200519190230.29519-17-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

src:grub2 is RFH (Request For Help) in Debian, which is a contributory
factor to the patches in #789798 and #792547 languishing.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 8380c428..0958d080 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1459,7 +1459,7 @@ d-i partman-auto/expert_recipe string					\\
 END
 
     if (get_host_property($ho, "firmware") eq "uefi") {
-	die unless $ho->{Suite} =~ m/jessie|stretch/;
+	die unless $ho->{Suite} =~ m/jessie|stretch|buster/;
 	# Prevent grub-install from making a new Debian boot entry, so
 	# we always reboot from the network. Debian bug #789798 proposes a
 	# properly preseedable solution to this.
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7de-0007Za-2Q; Tue, 19 May 2020 19:10:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dd-0007Yv-Ka
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:53 +0000
X-Inumbo-ID: 5bef4652-9a04-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bef4652-9a04-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:10:10 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vd-0001da-MC; Tue, 19 May 2020 20:02:37 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 13/38] Debian: Specify `priority=critical' rather than
 locale
Date: Tue, 19 May 2020 20:02:05 +0100
Message-Id: <20200519190230.29519-14-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In buster, it appears that specifying locale on the command line is
not sufficient.  Rather than adding more things to the command line,
just say `priority=critical', by defaulting $debconf_priority
to 'critical'.

I think this change should be fine for earlier suites too.
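
As a hedged illustration (a shell analogue of the Perl `//` defaulted
lookup in Debian.pm; the variable names below are illustrative, not
osstest's), the effect is that priority= is now always appended:

```shell
# Simulate no DebconfPriority option being supplied; ${var:-word}
# mirrors the Perl defined-or default of 'critical'.
DEBCONF_PRIORITY=""
debconf_priority="${DEBCONF_PRIORITY:-critical}"
# The installer command line now always carries priority=..., with the
# debian-installer/locale entry dropped.
cl="netcfg/dhcp_timeout=150 priority=$debconf_priority"
echo "$cl"
```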

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index c3fbc32c..8380c428 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -698,12 +698,10 @@ sub di_installcmdline_core ($$;@) {
                "$xopts{PreseedScheme}=$ps_url",
                "netcfg/dhcp_timeout=150",
                "netcfg/choose_interface=$netcfg_interface",
-               "debian-installer/locale=en_GB",
                );
 
-    my $debconf_priority= $xopts{DebconfPriority};
-    push @cl, "priority=$debconf_priority"
-        if defined $debconf_priority;
+    my $debconf_priority= $xopts{DebconfPriority} // 'critical';
+    push @cl, "priority=$debconf_priority";
     push @cl, "rescue/enable=true" if $xopts{RescueMode};
 
     if ($r{syslog_server}) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:10:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:10:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dj-0007fF-Ct; Tue, 19 May 2020 19:10:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7di-0007eJ-K0
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:10:58 +0000
X-Inumbo-ID: 5d1e0f68-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d1e0f68-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:10:13 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vk-0001da-5E; Tue, 19 May 2020 20:02:44 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 33/38] buster: Provide TftpDiVersion
Date: Tue, 19 May 2020 20:02:25 +0100
Message-Id: <20200519190230.29519-34-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/production-config b/production-config
index e3870d47..6372ac9a 100644
--- a/production-config
+++ b/production-config
@@ -91,6 +91,7 @@ TftpNetbootGroup osstest
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-02-10
+TftpDiVersion_buster 2020-05-19
 
 DebianSnapshotBackports_jessie http://snapshot.debian.org/archive/debian/20190206T211314Z/
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:11:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:11:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7do-0007lJ-P7; Tue, 19 May 2020 19:11:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dn-0007k3-KF
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:11:03 +0000
X-Inumbo-ID: 5e641cfa-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e641cfa-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:10:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vi-0001da-UW; Tue, 19 May 2020 20:02:42 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 27/38] 20_linux_xen: Ignore xenpolicy and config files
 too
Date: Tue, 19 May 2020 20:02:19 +0100
Message-Id: <20200519190230.29519-28-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"file_is_not_sym" currently only checks for xen-syms.  Extend it to
disregard xenpolicy (XSM policy files) and files ending in .config (which
are built by the Xen upstream build system in some configurations and
can therefore end up in /boot).

Rename the function accordingly, to "file_is_not_xen_garbage".
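
A hedged sketch of the renamed helper, exercised on sample /boot paths
(the paths here are illustrative, not from osstest):

```shell
# The same case patterns the patch adds: filter out symbol files, XSM
# policy blobs, and stray build-system .config files.
file_is_not_xen_garbage () {
    case "$1" in
	*/xen-syms-*) return 1;;
	*/xenpolicy-*) return 1;;
	*/*.config) return 1;;
	*) return 0;;
    esac
}
file_is_not_xen_garbage /boot/xen-4.13-amd64.gz && echo "kept"
file_is_not_xen_garbage /boot/xenpolicy-4.13 || echo "filtered"
file_is_not_xen_garbage /boot/xen-4.13.config || echo "filtered"
```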

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-buster/etc/grub.d/20_linux_xen | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/overlay-buster/etc/grub.d/20_linux_xen b/overlay-buster/etc/grub.d/20_linux_xen
index fb3ed82f..01dfcb57 100755
--- a/overlay-buster/etc/grub.d/20_linux_xen
+++ b/overlay-buster/etc/grub.d/20_linux_xen
@@ -167,10 +167,14 @@ if [ "x${linux_list}" = "x" ] ; then
     exit 0
 fi
 
-file_is_not_sym () {
+file_is_not_xen_garbage () {
     case "$1" in
 	*/xen-syms-*)
 	    return 1;;
+	*/xenpolicy-*)
+	    return 1;;
+	*/*.config)
+	    return 1;;
 	*)
 	    return 0;;
     esac
@@ -178,7 +182,7 @@ file_is_not_sym () {
 
 xen_list=
 for i in /boot/xen*; do
-    if grub_file_is_not_garbage "$i" && file_is_not_sym "$i" ; then xen_list="$xen_list $i" ; fi
+    if grub_file_is_not_garbage "$i" && file_is_not_xen_garbage "$i" ; then xen_list="$xen_list $i" ; fi
 done
 prepare_boot_cache=
 boot_device_id=
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:11:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7du-0007qr-3u; Tue, 19 May 2020 19:11:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7ds-0007pG-KR
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:11:08 +0000
X-Inumbo-ID: 5fe4877c-9a04-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5fe4877c-9a04-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:10:17 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vh-0001da-Qg; Tue, 19 May 2020 20:02:41 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
Date: Tue, 19 May 2020 20:02:14 +0100
Message-Id: <20200519190230.29519-23-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 6fed0b75..77508d19 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1064,7 +1064,7 @@ END
     logm("\$arch is $arch, \$suite is $suite");
     if ($xopts{PvMenuLst} &&
 	$arch =~ /^arm/ &&
-	$suite =~ /wheezy|jessie|stretch|sid/ ) {
+	$suite =~ /wheezy|jessie|stretch|buster|sid/ ) {
 
 	# Debian doesn't currently know what bootloader to install in
 	# a Xen guest on ARM. We install pv-grub-menu above which
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:11:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7dy-0007w1-Fn; Tue, 19 May 2020 19:11:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7dx-0007vD-Kf
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:11:13 +0000
X-Inumbo-ID: 6172d4c2-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6172d4c2-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:10:19 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vj-0001da-Lc; Tue, 19 May 2020 20:02:43 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 31/38] overlay-initrd-buster/sbin/reopen-console: Fix
 #932416
Date: Tue, 19 May 2020 20:02:23 +0100
Message-Id: <20200519190230.29519-32-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This bug affects us.  Cherry-pick the changes to the relevant file
from the commit in the upstream debian-installer repo:

  https://salsa.debian.org/installer-team/rootskel/commit/0ee43d05b83f8ef5a856f3282e002a111809cef9
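
The core of the cherry-picked detection can be sketched as below (a
literal string stands in for /proc/cmdline; this is an illustration,
not the full script):

```shell
# Look for the preseeding trigger words on the kernel command line;
# grep -w treats '=' as a word boundary, so "url=..." matches "url".
cmdline="auto url=http://example.test/preseed.cfg console=hvc0"
PRESEEDING=0
for WORD in auto url; do
    if echo "$cmdline" | grep -qw "$WORD"; then
	PRESEEDING=1
    fi
done
echo "PRESEEDING=$PRESEEDING"
```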

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-initrd-buster/sbin/reopen-console | 36 +++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/overlay-initrd-buster/sbin/reopen-console b/overlay-initrd-buster/sbin/reopen-console
index dd354deb..13b15a33 100755
--- a/overlay-initrd-buster/sbin/reopen-console
+++ b/overlay-initrd-buster/sbin/reopen-console
@@ -16,6 +16,17 @@ NL="
 LOGGER_UP=0
 LOG_FILE=/var/log/reopen-console
 
+# If we're running with preseeding, we have a problem with running d-i
+# on multiple consoles. We'll end up running each of those d-i
+# instances in parallel with all kinds of hilarious undefined
+# behaviour as they trip over each other! If we detect that we're
+# preseeding (via any of the possible preseed methods), DO NOT run d-i
+# multiple times. Instead, fall back to the older, more simple
+# behaviour and run it once. If the user wants to see or interact with
+# their preseed on a specific console, they get to tell us which one
+# they want to use.
+PRESEEDING=0
+
 log() {
 	# In very early startup we don't have syslog. Log to file that
 	# we can flush out later so we can at least see what happened
@@ -32,6 +43,20 @@ flush_logger () {
 	rm $LOG_FILE
 }
 
+# If we have a preseed.cfg in the initramfs
+if [ -e /preseed.cfg ]; then
+    log "Found /preseed.cfg; falling back to simple mode for preseeding"
+    PRESEEDING=1
+fi
+
+# Have we been told to do preseeding stuff on the boot command line?
+for WORD in auto url; do
+    if (grep -qw "$WORD" /proc/cmdline); then
+	log "Found \"$WORD\" in the command line; falling back to simple mode for preseeding"
+	PRESEEDING=1
+    fi
+done
+
 consoles=
 preferred=
 # Retrieve all enabled consoles from kernel; ignore those
@@ -44,7 +69,7 @@ do
 	status=$(echo "$kernelconsoles" | grep $cons | sed -n -r -e 's/(^.*) *.*\((.*)\).*$/\2/p' )
 	if [ -e "/dev/$cons" ] && [ $(echo "$status" | grep -o 'E') ]; then
 		consoles="${consoles:+$consoles$NL}$cons"
-		log "   Adding $cons to consoles list"
+		log "   Adding $cons to possible consoles list"
 	fi
 	# 'C' console is 'most prefered'.
 	if [ $(echo "$status" | grep -o 'C') ]; then
@@ -64,6 +89,13 @@ if [ -z "$preferred" ]; then
 	log "Found no preferred console. Picking $preferred"
 fi
 
+# If we're preseeding, do simple stuff here (see above). We just
+# want one console. Let's pick the preferred one ONLY
+if [ $PRESEEDING = 1 ]; then
+    log "Running with preseeding. Picking preferred $preferred ONLY"
+    consoles=$preferred
+fi
+
 for cons in $consoles
 do
 	echo "/dev/$cons " >> /var/run/console-devices
@@ -88,7 +120,7 @@ LOGGER_UP=1
 flush_logger
 
 # Finally restart init to run debian-installer on discovered consoles
-log "Restarting init to start d-i on the consoles we found"
+log "Restarting init to start d-i on the console(s) we found"
 kill -HUP 1
 
 exit 0
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:11:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:11:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7e3-00081x-SL; Tue, 19 May 2020 19:11:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7e2-00080z-La
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:11:18 +0000
X-Inumbo-ID: 62b144ea-9a04-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62b144ea-9a04-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 19:10:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vi-0001da-Nc; Tue, 19 May 2020 20:02:42 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 26/38] 20_linux_xen: Adhoc template substitution
Date: Tue, 19 May 2020 20:02:18 +0100
Message-Id: <20200519190230.29519-27-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This file is a template that various build-time variables get
substituted into.  Make those substitutions by hand (actually, by
copying the values from our file for stretch).  And rename the file.

So now we are using our file instead of the grub package's.  But it is
the same...
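
For illustration, a hypothetical one-liner that would perform the same
substitutions mechanically (the patch instead bakes the values in by
hand, copied from the stretch file):

```shell
# Substitute two of the @...@ placeholders the way a configure-style
# build would; only a fragment of the template is shown here.
subst=$(printf 'prefix="@prefix@"\nexport TEXTDOMAIN=@PACKAGE@\n' |
  sed -e 's|@prefix@|/usr|' -e 's|@PACKAGE@|grub|')
echo "$subst"
```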

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 .../etc/grub.d/{20_linux_xen.in => 20_linux_xen}       | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
 rename overlay-buster/etc/grub.d/{20_linux_xen.in => 20_linux_xen} (98%)
 mode change 100644 => 100755

diff --git a/overlay-buster/etc/grub.d/20_linux_xen.in b/overlay-buster/etc/grub.d/20_linux_xen
old mode 100644
new mode 100755
similarity index 98%
rename from overlay-buster/etc/grub.d/20_linux_xen.in
rename to overlay-buster/etc/grub.d/20_linux_xen
index 98ef163c..fb3ed82f
--- a/overlay-buster/etc/grub.d/20_linux_xen.in
+++ b/overlay-buster/etc/grub.d/20_linux_xen
@@ -17,14 +17,14 @@ set -e
 # You should have received a copy of the GNU General Public License
 # along with GRUB.  If not, see <http://www.gnu.org/licenses/>.
 
-prefix="@prefix@"
-exec_prefix="@exec_prefix@"
-datarootdir="@datarootdir@"
+prefix="/usr"
+exec_prefix="/usr"
+datarootdir="/usr/share"
 
 . "$pkgdatadir/grub-mkconfig_lib"
 
-export TEXTDOMAIN=@PACKAGE@
-export TEXTDOMAINDIR="@localedir@"
+export TEXTDOMAIN=grub
+export TEXTDOMAINDIR="${datarootdir}/locale"
 
 CLASS="--class gnu-linux --class gnu --class os --class xen"
 SUPPORTED_INITS="sysvinit:/lib/sysvinit/init systemd:/lib/systemd/systemd upstart:/sbin/upstart"
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:11:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:11:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7e8-00087K-Cs; Tue, 19 May 2020 19:11:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7e7-00086U-Lp
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:11:23 +0000
X-Inumbo-ID: 64302fb6-9a04-11ea-b9cf-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64302fb6-9a04-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 19:10:24 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Vi-0001da-1j; Tue, 19 May 2020 20:02:42 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 23/38] Honour DebianImageFile_SUITE_ARCH
Date: Tue, 19 May 2020 20:02:15 +0100
Message-Id: <20200519190230.29519-24-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This lets us specify the whole filename, not just a version.
This is needed because for buster we are going to use
   debian-10.2.0-ARCH-xfce-CD-1.iso
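
A hedged sketch of the resulting lookup order in usual_debianhvm_image
(the helper and variable names below are illustrative stand-ins, not
osstest's real getconfig machinery):

```shell
# Precedence: 1. explicit $DEBIAN_IMAGE_FILE override, 2. a
# DebianImageFile_SUITE_ARCH config entry, 3. fall back to building a
# name from the image version as before.
lookup_image () {
    local arch=$1
    if [ -n "$DEBIAN_IMAGE_FILE" ]; then echo "$DEBIAN_IMAGE_FILE"; return; fi
    if [ -n "$config_image_file" ]; then echo "$config_image_file"; return; fi
    echo "debian-$image_version-$arch-CD-1.iso"
}
DEBIAN_IMAGE_FILE=""
config_image_file="debian-10.2.0-amd64-xfce-CD-1.iso"
lookup_image amd64
```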

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 mfi-common | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mfi-common b/mfi-common
index b40f057e..640cf328 100644
--- a/mfi-common
+++ b/mfi-common
@@ -522,6 +522,15 @@ job_create_test () {
 
 usual_debianhvm_image () {
   local arch=$1; shift
+  if [ -n "$DEBIAN_IMAGE_FILE" ]; then
+      echo $DEBIAN_IMAGE_FILE
+      return
+  fi
+  local file=`getconfig DebianImageFile_${guestsuite}_${arch}`
+  if [ -n "$file" ]; then
+      echo $file
+      return
+  fi
   local ver=$DEBIAN_IMAGE_VERSION
   if [ -z "$ver" ] ; then
       ver=`getconfig DebianImageVersion_$guestsuite`
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 19:11:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 19:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb7eD-0008E0-OS; Tue, 19 May 2020 19:11:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+xc8=7B=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jb7eC-0008Cb-Ml
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 19:11:28 +0000
X-Inumbo-ID: 658c60d2-9a04-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 658c60d2-9a04-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 19:10:26 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jb7Ve-0001da-5v; Tue, 19 May 2020 20:02:38 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 15/38] buster: make-hosts-flight: Add to possible
 suites for hosts flight
Date: Tue, 19 May 2020 20:02:07 +0100
Message-Id: <20200519190230.29519-16-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 make-hosts-flight | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/make-hosts-flight b/make-hosts-flight
index 92da1c7c..e2c3776a 100755
--- a/make-hosts-flight
+++ b/make-hosts-flight
@@ -26,7 +26,7 @@ blessing=$4
 buildflight=$5
 
 : ${ALL_ARCHES:=amd64 i386 arm64 armhf}
-: ${ALL_SUITES:=stretch jessie}
+: ${ALL_SUITES:=buster stretch jessie}
 # ^ most preferred suite first
 
 : ${PERHOST_MAXWAIT:=20000} # seconds
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Tue May 19 20:16:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 20:16:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb8f5-0005rP-Oy; Tue, 19 May 2020 20:16:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jb8f4-0005rK-Mk
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 20:16:26 +0000
X-Inumbo-ID: 99c178b6-9a0d-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 99c178b6-9a0d-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 20:16:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=etZXsnzVQ7zNlinxdEQZ8/QNncZ6eQcuHQDviSFOmXs=; b=xRe+/nXIcaztViNrU35Jh+kxZ
 1vHn7sizu/84EynGy6INpJIep6CtqBw4GDjYcjRL0zl8WxhvdgYB4eoA8SY6QiQqAmnUbLQ0EMVjp
 DikRBiEPJhUCFz9jDRul0nEasET/QPAmuYS5+o3D7j1WLp4oC7S9JOPrcFNftU3W+x/Ak=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb8ew-0002AI-Jg; Tue, 19 May 2020 20:16:18 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jb8ew-0002Ag-8M; Tue, 19 May 2020 20:16:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jb8ew-00021i-7e; Tue, 19 May 2020 20:16:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150249-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150249: tolerable all pass
X-Osstest-Failures: xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=e235fa2794c95365519eac714d6ea82f8e64752e
X-Osstest-Versions-That: xen=271ade5a621005f86ec928280dc6ac85f2c4c95a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 20:16:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150249 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150249/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e
baseline version:
 xen                  271ade5a621005f86ec928280dc6ac85f2c4c95a

Last test of basis   150245  2020-05-19 13:01:12 Z    0 days
Testing same since   150249  2020-05-19 17:01:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

ssh: Could not resolve hostname xenbits.xen.org: Temporary failure in name resolution
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
------------------------------------------------------------
commit e235fa2794c95365519eac714d6ea82f8e64752e
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Mon May 18 21:55:03 2020 -0400

    libxl: Check stubdomain kernel & ramdisk presence
    
    Just out of context is the following comment for libxl__domain_make:
    /* fixme: this function can leak the stubdom if it fails */
    
    When the stubdomain kernel or ramdisk is not present, the domid and
    stubdomain name will indeed be leaked.  Avoid the leak by checking for
    the files' presence and erroring out when they are absent.  This
    doesn't fix all cases, but it avoids a big one when using a Linux
    device model stubdomain.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit b6af49af6093a9a0e0e0b4d39ab06da106f4bdf7
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Mon May 18 21:55:02 2020 -0400

    docs: Add device-model-domid to xenstore-paths
    
    Document device-model-domid for when using a device model stubdomain.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit cfc47cd4e24f3bfbe9b69f3196d1dd31f7423c31
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:55:01 2020 -0400

    libxl: consider also qemu in stubdomain in libxl__dm_active check
    
    Since qemu-xen can now run in a stubdomain too, handle this case when
    checking its state.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit fa9d82825a8ddee1894528576f383efddcdc3691
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:55:00 2020 -0400

    libxl: ignore emulated IDE disks beyond the first 4
    
    QEMU supports only 4 emulated IDE disks; when given more (or disks
    with higher indexes), it will fail to start. Since the disks can still
    be accessed using the PV interface, just ignore the emulated path and
    log a warning, instead of rejecting the configuration altogether.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit ab9ce23f5af8f77078d63b11ff8bd7280e0e6b50
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:59 2020 -0400

    libxl: require qemu in dom0 for multiple stubdomain consoles
    
    Device model stubdomains (both Mini-OS + qemu-trad and linux + qemu-xen)
    are always started with at least 3 consoles: log, save, and restore.
    Until xenconsoled learns how to handle multiple consoles, this is needed
    for save/restore support.
    
    For Mini-OS stubdoms, this is a bug.  In practice, it works in most
    cases because there is something else that triggers qemu in dom0 too:
    vfb/vkb added if vnc/sdl/spice is enabled.
    
    Additionally, Linux-based stubdomain waits for all the backends to
    initialize during boot. Lack of some console backends results in
    stubdomain startup timeout.
    
    This is a temporary patch until xenconsoled is improved.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    [Updated commit message with Marek's explanation from mailing list.]
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 83c845033dc8bb3a35ae245effb7832b6823174a
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:58 2020 -0400

    libxl: use vchan for QMP access with Linux stubdomain
    
    Access to QMP of QEMU in a Linux stubdomain is possible over a vchan
    connection. Handle the actual vchan connection in a separate process
    (vchan-socket-proxy). This simplifies integration with QMP (already
    quite complex), and also allows preliminary filtering of (potentially
    malicious) QMP input.
    Since only one client can be connected to the vchan server at the same
    time, and this is not enforced by libxenvchan itself, additional
    client-side locking is needed. It is implicitly implemented by
    vchan-socket-proxy, as it handles only one connection at a time. Note
    that qemu supports only one simultaneous client on a control socket
    anyway (though in the UNIX socket case it enforces this server-side),
    so this doesn't add any extra limitation.
    
    libxl qmp client code already has locking to handle concurrent access
    attempts to the same qemu qmp interface.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    
    Squash in changes of regenerated autotools files.
    
    Kill the vchan-socket-proxy so we don't leak the daemonized processes.
    libxl__stubdomain_is_linux_running() works against the guest_domid, but
    the xenstore path is beneath the stubdomain.  This leads to the use of
    libxl_is_stubdom in addition to libxl__stubdomain_is_linux_running() so
    that the stubdomain calls kill for the qmp-proxy.
    
    Also call libxl__qmp_cleanup() to remove the unix sockets used by
    vchan-socket-proxy.  vchan-socket-proxy only creates qmp-libxl-$domid,
    and libxl__qmp_cleanup removes that as well as qmp-libxenstat-$domid.
    However, it tolerates ENOENT, and a stray qmp-libxenstat-$domid should
    not exist.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 379ab27086be37fbb8d23c4e001e33e05dc18b2e
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Mon May 18 21:54:57 2020 -0400

    libxl: Refactor kill_device_model to libxl__kill_xs_path
    
    Move kill_device_model to libxl__kill_xs_path so we have a helper to
    kill a process from a pid stored in xenstore.  We'll be using it to kill
    vchan-qmp-proxy.
    
    libxl__kill_xs_path takes a "what" string for use in printing error
    messages.  kill_device_model is retained in libxl_dm.c to provide the
    string.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 14fe3ace50c3853670370d8b8ff93b066420a5e0
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:56 2020 -0400

    tools: add simple vchan-socket-proxy
    
    Add a simple proxy for tunneling a socket connection over vchan. This
    is based on the existing vchan-node* applications, but extended with
    socket support. vchan-socket-proxy serves both as a client and as a
    server, depending on its parameters. It can be used to transparently
    communicate with an application in another domain that normally
    exposes a UNIX socket interface. Specifically, it's written to
    communicate with qemu running within a stubdom.
    
    Server mode listens for vchan connections and, when one is opened,
    connects to the specified UNIX socket.  Client mode listens on a UNIX
    socket and, when someone connects, opens a vchan connection.  Only
    a single connection at a time is supported.
    
    Additionally, the socket can be provided as a number, in which case it
    is interpreted as an already-open FD (for a listening UNIX socket,
    listen() needs to have been called already), or as "-", meaning
    stdin/stdout, in which case the tool is reduced to vchan-node2
    functionality.
    
    Example usage:
    
    1. (in dom0) vchan-socket-proxy --mode=client <DOMID>
        /local/domain/<DOMID>/data/vchan/1234 /run/qemu.(DOMID)
    
    2. (in DOMID) vchan-socket-proxy --mode=server 0
       /local/domain/<DOMID>/data/vchan/1234 /run/qemu.(DOMID)
    
    This will listen on /run/qemu.(DOMID) in dom0 and, whenever a
    connection is made, it will connect to DOMID, where the server process
    will in turn connect to /run/qemu.(DOMID). When the client
    disconnects, the vchan connection is terminated and the server
    vchan-socket-proxy process also disconnects from qemu.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 7f6bce6386824a6c69ea852cfa40673b0350f4d1
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:55 2020 -0400

    tools: add missing libxenvchan cflags
    
    libxenvchan.h includes xenevtchn.h and xengnttab.h, so applications
    built with it need the applicable -I flags in CFLAGS too.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 449901fc966613c1829d92570df237d4d904cdf5
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:54 2020 -0400

    libxl: add save/restore support for qemu-xen in stubdomain
    
    Rely on a wrapper script in stubdomain to attach relevant consoles to
    qemu.  The save console (1) must be attached to fdset/1.  When
    performing a restore, $STUBDOM_RESTORE_INCOMING_ARG must be replaced on
    the qemu command line by "fd:$FD", where $FD is an open file descriptor
    number to the restore console (2).
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    
    Address TODO in dm_state_save_to_fdset: Only remove savefile for
    non-stubdom.
    Use $STUBDOM_RESTORE_INCOMING_ARG instead of fd:3 and update commit
    message.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit dae844977e1e10dc859ec21612f1811ca5d5f128
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:53 2020 -0400

    tools/libvchan: notify server when client is connected
    
    Let the server know when the client is connected. Otherwise the
    server will notice only when the client sends some data.
    This change does not break existing clients, as libvchan users should
    handle spurious notifications anyway (for example, an acknowledgement
    of the remote side reading the data).
    
    Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Replace spaces with tabs to match the file's whitespace.
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit f76a0fa109ab48ec6e910bce3b45804ef6f0915d
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:52 2020 -0400

    xl: add stubdomain related options to xl config parser
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit dabc571b7292c3cdd51734b709a663eaa45345a1
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:51 2020 -0400

    libxl: write qemu arguments into separate xenstore keys
    
    This allows using arguments with spaces, like -append, without
    nominating any special "separator" character.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    
    Write arguments in dm-argv directory instead of overloading mini-os's
    dmargs string.
    
    Make libxl__write_stub_dmargs vary behaviour based on the
    is_linux_stubdom flag.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 6d721be59d1b57088ab6ae92bcf79d0ac91fad18
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Mon May 18 21:54:50 2020 -0400

    libxl: Use libxl__xs_* in libxl__write_stub_dmargs
    
    Re-work libxl__write_stub_dmargs to use libxl_xs_* functions in a loop.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 19 21:22:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 21:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jb9gK-0003Jo-Tf; Tue, 19 May 2020 21:21:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2w/r=7B=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jb9gJ-0003Jj-0T
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 21:21:47 +0000
X-Inumbo-ID: bde4ef30-9a16-11ea-a991-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bde4ef30-9a16-11ea-a991-12813bfff9fa;
 Tue, 19 May 2020 21:21:45 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id BD7652072C;
 Tue, 19 May 2020 21:21:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589923305;
 bh=lFItuiU0pJUpD20mjrwscQrNyRXkmctTL7PaHATfXeU=;
 h=Date:From:To:cc:Subject:From;
 b=Rm/b/c1sL3T9ygTYuV73v9qIV4bPgCH6i5qp/gs5MPwBsRTDEGfDBm2+5h8TirQXN
 3O6ARRszLJaT/tle/+W516Rec3S2vjK/9HjBYPJ8SoFHxOVOqonn4agAXXpo9N94WR
 IviIFT0hMX5XKRfEqUj+N8ijFUfBnDB1JQTsgkh4=
Date: Tue, 19 May 2020 14:21:37 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: jgross@suse.com, boris.ostrovsky@oracle.com
Subject: grant table issues mapping a ring order 10
Message-ID: <alpine.DEB.2.21.2005191252040.27502@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, sstabellini@kernel.org, jbeulich@suse.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Juergen, Boris,

I am trying to increase the size of the rings used for Xen 9pfs
connections, for performance reasons and also to reduce the likelihood
of the backend having to wait on the frontend to free up space in the
ring.

FYI I realized that we cannot choose order 11 or greater in Linux
because then we run into the hard limit CONFIG_FORCE_MAX_ZONEORDER=11.
But that is not the reason why I am writing to you :-)


The reason why I am writing is that even order 10 fails for some
grant-table related reason I cannot explain. There are two rings, each
of them order 10. Mapping the first ring results in an error. (Order 9
works fine, resulting in both rings being mapped correctly.)

QEMU tries to map the refs but gets an error:

  gnttab: error: mmap failed: Invalid argument
  xen be: 9pfs-0: xen be: 9pfs-0: xengnttab_map_domain_grant_refs failed: Invalid argument
  xengnttab_map_domain_grant_refs failed: Invalid argument

The error comes from Xen. The hypervisor returns GNTST_bad_gntref to
Linux (drivers/xen/grant-table.c:gnttab_map_refs). Then:

	if (map->map_ops[i].status) {
		err = -EINVAL;
		continue;
	}

So Linux returns -EINVAL to QEMU. The refs seem to be garbage. The
following printks are in Xen, in the implementation of map_grant_ref:

(XEN) DEBUG map_grant_ref 1017 ref=998 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=999 nr=2560
(XEN) DEBUG map_grant_ref 1013 ref=2050669706 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x7a3abc8a for d1
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=19 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1013 ref=56423797 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x35cf575 for d1
(XEN) DEBUG map_grant_ref 1013 ref=348793 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x55279 for d1
(XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1013 ref=2070386184 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x7b679608 for d1
(XEN) DEBUG map_grant_ref 1013 ref=3421871 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af for d1
(XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1013 ref=875999099 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af7b for d1
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1013 ref=2705045486 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0xa13bb7ee for d1
(XEN) DEBUG map_grant_ref 1013 ref=4294967295 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0xffffffff for d1
(XEN) DEBUG map_grant_ref 1013 ref=213291910 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0xcb69386 for d1
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1013 ref=4912 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0x1330 for d1
(XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
(XEN) DEBUG map_grant_ref 1017 ref=24 nr=2560
(XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
(XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
(XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560


Full logs: https://pastebin.com/QLTUaUGJ
It is worth mentioning that no limits are being reached: we are below
2500 entries per domain and below the 64 pages of grant refs per domain.

What seems to happen is that after ref 999, the next refs are garbage.
Do you have any ideas why?


I tracked the gnttab_expand calls in Dom0 and they seemed to be done
correctly. We need 5 grant table pages:

- order 10 -> 1024 refs
- 2 rings -> 2048 refs
- 512 refs per grant table page -> 4 pages
- plus a few other refs by default -> 5 pages

[    3.896558] DEBUG gnttab_expand 1287 cur=1 extra=1 max=64 rc=0
[    5.115189] DEBUG gnttab_expand 1287 cur=2 extra=1 max=64 rc=0
[    6.334027] DEBUG gnttab_expand 1287 cur=3 extra=1 max=64 rc=0
[    7.350523] DEBUG gnttab_expand 1287 cur=4 extra=1 max=64 rc=0

As expected, gnttab_expand is called 4 times to add 4 more pages to the
initial page.
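The page arithmetic above can be sketched as a quick check (the 512
refs/page figure assumes v1 grant entries, 8 bytes each in a 4096-byte
page; EXTRA_DEFAULT_REFS is a rough stand-in for the "few other refs by
default", not an exact count):

```python
# Back-of-the-envelope check of the grant-table page arithmetic above.
RING_ORDER = 10           # order-10 ring -> 1024 pages, one grant ref each
NR_RINGS = 2              # two rings per 9pfs connection
GREFS_PER_PAGE = 512      # v1 grant entry: 8 bytes; 4096 / 8 = 512
EXTRA_DEFAULT_REFS = 32   # assumption: the "few other refs by default"

refs_needed = NR_RINGS * (1 << RING_ORDER)   # 2 * 1024 = 2048 refs
# ceiling division: total refs over refs-per-page
pages = -(-(refs_needed + EXTRA_DEFAULT_REFS) // GREFS_PER_PAGE)
print(refs_needed, pages)                    # 2048 5
```

2048 refs are well below the 64-page per-domain limit mentioned above,
and 5 total pages is consistent with gnttab_expand being called four
times after the initial page.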


Thanks,

Stefano


From xen-devel-bounces@lists.xenproject.org Tue May 19 23:25:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:25:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBbY-0005IF-BC; Tue, 19 May 2020 23:25:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBbX-0005IA-58
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:24:59 +0000
X-Inumbo-ID: f4332c30-9a27-11ea-a9a0-12813bfff9fa
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f4332c30-9a27-11ea-a9a0-12813bfff9fa;
 Tue, 19 May 2020 23:24:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930698; x=1621466698;
 h=date:from:to:subject:message-id:mime-version;
 bh=sOfgRrPRmYHAfVrUOSXb1jwG6rejPn1YQXzhPKMaw1U=;
 b=Zno3tRzqCySeyGzd4HCkBCeo0lJfVOCMjEo6aqGqydve6c7cPBt+cjcg
 vXzVsV7ffCpDb1JSfJaPuE8ldfW/W8xSr5vz6tw4vCBMbhW5gDekZkQKh
 88xxFlbk/8e4MXOajtiCZnXc0xyhU6NPxE0E+VsM6uZFIFDb/M1FzYYOZ 8=;
IronPort-SDR: TapJ5+eP/s1RHA3K19MF7wVhCWSkQk3+kRa1I2OBkLAKB8EewWlyo02ycYQtyZZFMpV1Mo7QXf
 a/UcclI1rQNQ==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31066594"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 19 May 2020 23:24:43 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2b-c7131dcf.us-west-2.amazon.com (Postfix) with ESMTPS
 id A24D9A24BC; Tue, 19 May 2020 23:24:41 +0000 (UTC)
Received: from EX13D08UEE003.ant.amazon.com (10.43.62.118) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:24:29 +0000
Received: from EX13MTAUEE002.ant.amazon.com (10.43.62.24) by
 EX13D08UEE003.ant.amazon.com (10.43.62.118) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:24:29 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.62.224) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:24:29 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id DD57040712; Tue, 19 May 2020 23:24:28 +0000 (UTC)
Date: Tue, 19 May 2020 23:24:28 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 00/12] Fix PM hibernation in Xen guests
Message-ID: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,
This series fixes PM hibernation for HVM guests running on the Xen
hypervisor. A running guest can now be hibernated and resumed
successfully at a later time. The fixes for PM hibernation are added to
the block and network device drivers, i.e. xen-blkfront and
xen-netfront. Any other driver that needs S4 support and doesn't have
it yet can follow the same method of introducing freeze/thaw/restore
callbacks.
The patches have been tested against the upstream kernel and Xen 4.11.
Large-scale testing has also been done on Xen-based Amazon EC2
instances. All this testing involved running a memory-exhausting
workload in the background.

Guest hibernation does not involve any support from the hypervisor, so
the guest has complete control over its state. Infrastructure
restrictions on saving guest state can be overcome by guest-initiated
hibernation.

These patches were sent out as an RFC before, and all the feedback has
been incorporated. The last RFC (v3) can be found here:
https://lkml.org/lkml/2020/2/14/2789

Known issues:
1. KASLR causes intermittent hibernation failures: the VM fails to
resume and has to be restarted. I will investigate this issue
separately; it shouldn't be a blocker for this patch series.
2. During hibernation, I sometimes observed that freezing of tasks
fails due to a busy XFS workqueue [xfs-cil/xfs-sync]. This is also
intermittent (maybe 1 out of 200 runs), and hibernation is aborted in
this case; retrying hibernation may work. This is a known issue between
hibernation and some filesystems such as XFS, discussed by the
community for years without an effective resolution so far.

Testing How to:
---------------
1. Set up the Xen hypervisor on a physical machine [I used Ubuntu 16.04
+ upstream Xen 4.11].
2. Bring up an HVM guest with a kernel compiled with the hibernation
patches [I used Ubuntu 18.04 netboot bionic images and also Amazon
Linux on-prem images].
3. Create a swap file of size >= RAM size.
4. Update the grub parameters and reboot.
5. Trigger PM hibernation from within the VM.

Example:
Set up a file-backed swap space. The swap file size must be >= total
memory on the system:
sudo dd if=/dev/zero of=/swap bs=$(( 1024 * 1024 )) count=4096 # 4096MiB
sudo chmod 600 /swap
sudo mkswap /swap
sudo swapon /swap

Update resume device/resume offset in grub if using swap file:
resume=/dev/xvda1 resume_offset=200704 no_console_suspend=1

Execute:
--------
sudo pm-hibernate
OR
echo reboot > /sys/power/disk && echo disk > /sys/power/state

Compute resume offset code (a Python 3 version of the FIBMAP helper;
run it as root against the swap file):

#!/usr/bin/env python3
import sys
import array
import fcntl

# swap file (path passed as the first argument)
f = open(sys.argv[1], 'r')
buf = array.array('I', [0])  # logical block 0 of the file

# FIBMAP: map logical block 0 to its physical block number
fcntl.ioctl(f.fileno(), 0x01, buf)
print(buf[0])


Anchal Agarwal (5):
  x86/xen: Introduce new function to map HYPERVISOR_shared_info on
    Resume
  genirq: Shutdown irq chips in suspend/resume during hibernation
  xen: Introduce wrapper for save/restore sched clock offset
  xen: Update sched clock offset to avoid system instability in
    hibernation
  PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA

Munehisa Kamata (7):
  xen/manage: keep track of the on-going suspend mode
  xenbus: add freeze/thaw/restore callbacks support
  x86/xen: add system core suspend and resume callbacks
  xen-blkfront: add callbacks for PM suspend and hibernation
  xen-netfront: add callbacks for PM suspend and hibernation
  xen/time: introduce xen_{save,restore}_steal_clock
  x86/xen: save and restore steal clock

 arch/x86/xen/enlighten_hvm.c      |   8 ++
 arch/x86/xen/suspend.c            |  72 ++++++++++++++++++
 arch/x86/xen/time.c               |  18 ++++-
 arch/x86/xen/xen-ops.h            |   3 +
 drivers/block/xen-blkfront.c      | 122 ++++++++++++++++++++++++++++--
 drivers/net/xen-netfront.c        |  98 +++++++++++++++++++++++-
 drivers/xen/events/events_base.c  |   1 +
 drivers/xen/manage.c              |  73 ++++++++++++++++++
 drivers/xen/time.c                |  29 ++++++-
 drivers/xen/xenbus/xenbus_probe.c |  99 +++++++++++++++++++-----
 include/linux/irq.h               |   2 +
 include/xen/xen-ops.h             |   8 ++
 include/xen/xenbus.h              |   3 +
 kernel/irq/chip.c                 |   2 +-
 kernel/irq/internals.h            |   1 +
 kernel/irq/pm.c                   |  31 +++++---
 kernel/power/user.c               |   6 +-
 17 files changed, 536 insertions(+), 40 deletions(-)

-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:25:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:25:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBbj-0005Iy-NJ; Tue, 19 May 2020 23:25:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBbi-0005Ir-7n
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:25:10 +0000
X-Inumbo-ID: fac25a26-9a27-11ea-9887-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fac25a26-9a27-11ea-9887-bc764e2007e4;
 Tue, 19 May 2020 23:25:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930709; x=1621466709;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=3obvm2K6wIdoHPoMRFNSCaSnwN5wi8ylwo2ias+p+AA=;
 b=M4WvS4C+rybkeMqsRnWUkm7t+xL/zJE0be3On6d9TPDMAfMwa86DZoWh
 lNPlFtYMVf3JuOAFT/hzUVmRzXirRSio6ECzyKh1dD9aiFYMXZHWk6Mw7
 doQ3a7PD4gNuVS3NoWIFSbDmkkO4XZ/vRXQisKH4QuGtYZAtHEZHAkuSJ 0=;
IronPort-SDR: P7emNvCRDvwDfHCN+LiCzJs6+SMASxtZyMSlkE+xUu3oDxVsLwu5RZokCrWc8Mmwr+VHYTbzkF
 Khz51BvVsG6g==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="45949175"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2a-e7be2041.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 19 May 2020 23:25:07 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2a-e7be2041.us-west-2.amazon.com (Postfix) with ESMTPS
 id 8694BA0161; Tue, 19 May 2020 23:25:05 +0000 (UTC)
Received: from EX13D08UEE001.ant.amazon.com (10.43.62.126) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:24:52 +0000
Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by
 EX13D08UEE001.ant.amazon.com (10.43.62.126) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:24:52 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:24:51 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id AD64640712; Tue, 19 May 2020 23:24:51 +0000 (UTC)
Date: Tue, 19 May 2020 23:24:51 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 01/12] xen/manage: keep track of the on-going suspend mode
Message-ID: <20200519232451.GA18632@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Guest hibernation is different from Xen suspend/resume/live migration.
Xen save/restore does not use pm_ops, which guest hibernation requires.
Hibernation in the guest follows the ACPI path and is guest-initiated;
the hibernation image is saved within the guest, in contrast to the
latter modes, which are Xen toolstack-assisted and where image
creation/storage is under the control of the hypervisor/host machine.
To differentiate between Xen suspend and PM hibernation, keep track
of the on-going suspend mode, mainly by using a new PM notifier.
Introduce simple functions which report the on-going suspend mode so
that other Xen-related code can behave differently according to the
current suspend mode.
Since Xen suspend doesn't have a corresponding PM event, its main logic
is modified to acquire pm_mutex and set the current mode.

Although acquiring pm_mutex is still the right thing to do, we may see
a deadlock if PM hibernation is interrupted by Xen suspend: PM
hibernation depends on the xenwatch thread to process xenbus state
transactions, but in that scenario the thread will sleep waiting for
pm_mutex, which is already held by the PM hibernation context. The Xen
shutdown code may need some changes to avoid this issue.

[Anchal Changelog: Code refactoring]
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/xen/manage.c  | 73 +++++++++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h |  3 ++
 2 files changed, 76 insertions(+)

diff --git a/drivers/xen/manage.c b/drivers/xen/manage.c
index cd046684e0d1..0b30ab522b77 100644
--- a/drivers/xen/manage.c
+++ b/drivers/xen/manage.c
@@ -14,6 +14,7 @@
 #include <linux/freezer.h>
 #include <linux/syscore_ops.h>
 #include <linux/export.h>
+#include <linux/suspend.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -40,6 +41,31 @@ enum shutdown_state {
 /* Ignore multiple shutdown requests. */
 static enum shutdown_state shutting_down = SHUTDOWN_INVALID;
 
+enum suspend_modes {
+	NO_SUSPEND = 0,
+	XEN_SUSPEND,
+	PM_SUSPEND,
+	PM_HIBERNATION,
+};
+
+/* Protected by pm_mutex */
+static enum suspend_modes suspend_mode = NO_SUSPEND;
+
+bool xen_suspend_mode_is_xen_suspend(void)
+{
+	return suspend_mode == XEN_SUSPEND;
+}
+
+bool xen_suspend_mode_is_pm_suspend(void)
+{
+	return suspend_mode == PM_SUSPEND;
+}
+
+bool xen_suspend_mode_is_pm_hibernation(void)
+{
+	return suspend_mode == PM_HIBERNATION;
+}
+
 struct suspend_info {
 	int cancelled;
 };
@@ -99,6 +125,10 @@ static void do_suspend(void)
 	int err;
 	struct suspend_info si;
 
+	lock_system_sleep();
+
+	suspend_mode = XEN_SUSPEND;
+
 	shutting_down = SHUTDOWN_SUSPEND;
 
 	err = freeze_processes();
@@ -162,6 +192,10 @@ static void do_suspend(void)
 	thaw_processes();
 out:
 	shutting_down = SHUTDOWN_INVALID;
+
+	suspend_mode = NO_SUSPEND;
+
+	unlock_system_sleep();
 }
 #endif	/* CONFIG_HIBERNATE_CALLBACKS */
 
@@ -387,3 +421,42 @@ int xen_setup_shutdown_event(void)
 EXPORT_SYMBOL_GPL(xen_setup_shutdown_event);
 
 subsys_initcall(xen_setup_shutdown_event);
+
+static int xen_pm_notifier(struct notifier_block *notifier,
+			   unsigned long pm_event, void *unused)
+{
+	switch (pm_event) {
+	case PM_SUSPEND_PREPARE:
+		suspend_mode = PM_SUSPEND;
+		break;
+	case PM_HIBERNATION_PREPARE:
+	case PM_RESTORE_PREPARE:
+		suspend_mode = PM_HIBERNATION;
+		break;
+	case PM_POST_SUSPEND:
+	case PM_POST_RESTORE:
+	case PM_POST_HIBERNATION:
+		/* Set back to the default */
+		suspend_mode = NO_SUSPEND;
+		break;
+	default:
+		pr_warn("Received unknown PM event 0x%lx\n", pm_event);
+		return -EINVAL;
+	}
+
+	return 0;
+}
+
+static struct notifier_block xen_pm_notifier_block = {
+	.notifier_call = xen_pm_notifier
+};
+
+static int xen_setup_pm_notifier(void)
+{
+	if (!xen_hvm_domain())
+		return -ENODEV;
+
+	return register_pm_notifier(&xen_pm_notifier_block);
+}
+
+subsys_initcall(xen_setup_pm_notifier);
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 095be1d66f31..4ffe031adfc7 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -40,6 +40,9 @@ u64 xen_steal_clock(int cpu);
 
 int xen_setup_shutdown_event(void);
 
+bool xen_suspend_mode_is_xen_suspend(void);
+bool xen_suspend_mode_is_pm_suspend(void);
+bool xen_suspend_mode_is_pm_hibernation(void);
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)
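The mode tracking introduced above behaves like a small state machine: PREPARE events select a mode, the matching POST events clear it back to the default. The following is a minimal userspace sketch of that logic, not the kernel code itself; the PM_* event constants and helper names are stand-ins for the kernel ones, and pm_mutex serialization is omitted.

```c
#include <stdbool.h>

/* Stand-ins for the kernel's PM event constants (values illustrative). */
enum pm_event_model {
	PM_HIBERNATION_PREPARE,
	PM_POST_HIBERNATION,
	PM_SUSPEND_PREPARE,
	PM_POST_SUSPEND,
	PM_RESTORE_PREPARE,
	PM_POST_RESTORE,
};

enum suspend_modes {
	NO_SUSPEND = 0,
	XEN_SUSPEND,
	PM_SUSPEND,
	PM_HIBERNATION,
};

/* In the kernel this is protected by pm_mutex; single-threaded here. */
enum suspend_modes suspend_mode = NO_SUSPEND;

bool mode_is_xen_suspend(void)     { return suspend_mode == XEN_SUSPEND; }
bool mode_is_pm_hibernation(void)  { return suspend_mode == PM_HIBERNATION; }

/* Mirrors the notifier: PREPARE events set a mode, POST events clear it. */
int pm_notifier_model(enum pm_event_model event)
{
	switch (event) {
	case PM_SUSPEND_PREPARE:
		suspend_mode = PM_SUSPEND;
		break;
	case PM_HIBERNATION_PREPARE:
	case PM_RESTORE_PREPARE:
		suspend_mode = PM_HIBERNATION;
		break;
	case PM_POST_SUSPEND:
	case PM_POST_RESTORE:
	case PM_POST_HIBERNATION:
		suspend_mode = NO_SUSPEND;
		break;
	default:
		return -1; /* unknown event */
	}
	return 0;
}
```

Callers such as the xenbus suspend path only ever consult the query helpers, never the raw variable, so the mode can later be re-protected without touching them.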
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:26:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:26:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBcP-0005QD-0i; Tue, 19 May 2020 23:25:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBcN-0005Pu-V3
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:25:51 +0000
X-Inumbo-ID: 1403dbe0-9a28-11ea-a9a0-12813bfff9fa
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1403dbe0-9a28-11ea-a9a0-12813bfff9fa;
 Tue, 19 May 2020 23:25:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930751; x=1621466751;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=u2I4SVbLijVxq/T2OZK7zwcQCdIZfr/59rBd0BiigZo=;
 b=YZCkOO/8hzWi08cqB+yEJtfF/G8MiVyO41OGu3js6Oti75WS24ZC7Y6o
 amdqRzp/Rgizj+Bs7fS9uuXpi4d8Qb0g1P6MEEjx4vqInee0RLJiQOTcB
 VWtzMhq/mRfUI8VfMsVnhXtc5uTzYBrUvznWyK2JuySf77bVh65hruclx w=;
IronPort-SDR: PUzafnrD9npruL/kMbXLm5hpJ1popdhgSPJqodhqecm7/PsBTkLK6Um1IrM4U5sGzedbKOXRIc
 CO06+Sbl0hSw==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31182118"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 19 May 2020 23:25:37 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com (Postfix) with ESMTPS
 id 76EF6A06D8; Tue, 19 May 2020 23:25:35 +0000 (UTC)
Received: from EX13D08UEE002.ant.amazon.com (10.43.62.92) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:25:09 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEE002.ant.amazon.com (10.43.62.92) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:25:09 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:25:09 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id CE7BE40712; Tue, 19 May 2020 23:25:08 +0000 (UTC)
Date: Tue, 19 May 2020 23:25:08 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 02/12] xenbus: add freeze/thaw/restore callbacks support
Message-ID: <7fd12227f923eacc5841b47bd69f72b4105843a7.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Since commit b3e96c0c7562 ("xen: use freeze/restore/thaw PM events for
suspend/resume/chkpt"), xenbus uses PMSG_FREEZE, PMSG_THAW and
PMSG_RESTORE events for Xen suspend. However, they're actually assigned
to xenbus_dev_suspend(), xenbus_dev_cancel() and xenbus_dev_resume()
respectively, and only suspend and resume callbacks are supported at
driver level. To support PM suspend and PM hibernation, modify the bus
level PM callbacks to invoke not only device driver's suspend/resume but
also freeze/thaw/restore.

Note that we'll use the freeze/restore callbacks even for PM suspend,
whereas suspend/resume callbacks are normally used in that case, because
the existing xenbus device drivers already have suspend/resume callbacks
specifically designed for Xen suspend. This allows the device drivers
to keep their existing callbacks without modification.

[Anchal Changelog: Refactored the callbacks code]
Signed-off-by: Agarwal Anchal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/xen/xenbus/xenbus_probe.c | 99 +++++++++++++++++++++++++------
 include/xen/xenbus.h              |  3 +
 2 files changed, 84 insertions(+), 18 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 8c4d05b687b7..1589b9b2cb56 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -49,6 +49,7 @@
 #include <linux/io.h>
 #include <linux/slab.h>
 #include <linux/module.h>
+#include <linux/suspend.h>
 
 #include <asm/page.h>
 #include <asm/pgtable.h>
@@ -599,27 +600,44 @@ int xenbus_dev_suspend(struct device *dev)
 	struct xenbus_driver *drv;
 	struct xenbus_device *xdev
 		= container_of(dev, struct xenbus_device, dev);
-
+	bool xen_suspend = xen_suspend_mode_is_xen_suspend();
 	DPRINTK("%s", xdev->nodename);
 
 	if (dev->driver == NULL)
 		return 0;
 	drv = to_xenbus_driver(dev->driver);
-	if (drv->suspend)
-		err = drv->suspend(xdev);
-	if (err)
-		pr_warn("suspend %s failed: %i\n", dev_name(dev), err);
+
+	if (xen_suspend) {
+		if (drv->suspend)
+			err = drv->suspend(xdev);
+	} else {
+		if (drv->freeze) {
+			err = drv->freeze(xdev);
+			if (!err) {
+				free_otherend_watch(xdev);
+				free_otherend_details(xdev);
+				return 0;
+			}
+		}
+	}
+
+	if (err) {
+		pr_warn("%s %s failed: %i\n", xen_suspend ?
+			"suspend" : "freeze", dev_name(dev), err);
+		return err;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_suspend);
 
 int xenbus_dev_resume(struct device *dev)
 {
-	int err;
+	int err = 0;
 	struct xenbus_driver *drv;
 	struct xenbus_device *xdev
 		= container_of(dev, struct xenbus_device, dev);
-
+	bool xen_suspend = xen_suspend_mode_is_xen_suspend();
 	DPRINTK("%s", xdev->nodename);
 
 	if (dev->driver == NULL)
@@ -627,24 +645,32 @@ int xenbus_dev_resume(struct device *dev)
 	drv = to_xenbus_driver(dev->driver);
 	err = talk_to_otherend(xdev);
 	if (err) {
-		pr_warn("resume (talk_to_otherend) %s failed: %i\n",
+		pr_warn("%s (talk_to_otherend) %s failed: %i\n",
+			xen_suspend ? "resume" : "restore",
 			dev_name(dev), err);
 		return err;
 	}
 
-	xdev->state = XenbusStateInitialising;
+	if (xen_suspend) {
+		xdev->state = XenbusStateInitialising;
+		if (drv->resume)
+			err = drv->resume(xdev);
+	} else {
+		if (drv->restore)
+			err = drv->restore(xdev);
+	}
 
-	if (drv->resume) {
-		err = drv->resume(xdev);
-		if (err) {
-			pr_warn("resume %s failed: %i\n", dev_name(dev), err);
-			return err;
-		}
+	if (err) {
+		pr_warn("%s %s failed: %i\n",
+			xen_suspend ? "resume" : "restore",
+			dev_name(dev), err);
+		return err;
 	}
 
 	err = watch_otherend(xdev);
 	if (err) {
-		pr_warn("resume (watch_otherend) %s failed: %d.\n",
+		pr_warn("%s (watch_otherend) %s failed: %d.\n",
+			xen_suspend ? "resume" : "restore",
 			dev_name(dev), err);
 		return err;
 	}
@@ -655,8 +681,45 @@ EXPORT_SYMBOL_GPL(xenbus_dev_resume);
 
 int xenbus_dev_cancel(struct device *dev)
 {
-	/* Do nothing */
-	DPRINTK("cancel");
+	int err = 0;
+	struct xenbus_driver *drv;
+	struct xenbus_device *xdev
+		= container_of(dev, struct xenbus_device, dev);
+	bool xen_suspend = xen_suspend_mode_is_xen_suspend();
+
+	if (xen_suspend) {
+		/* Do nothing */
+		DPRINTK("cancel");
+		return 0;
+	}
+
+	DPRINTK("%s", xdev->nodename);
+
+	if (dev->driver == NULL)
+		return 0;
+	drv = to_xenbus_driver(dev->driver);
+	err = talk_to_otherend(xdev);
+	if (err) {
+		pr_warn("thaw (talk_to_otherend) %s failed: %d.\n",
+			dev_name(dev), err);
+		return err;
+	}
+
+	if (drv->thaw) {
+		err = drv->thaw(xdev);
+		if (err) {
+			pr_warn("thaw %s failed: %i\n", dev_name(dev), err);
+			return err;
+		}
+	}
+
+	err = watch_otherend(xdev);
+	if (err) {
+		pr_warn("thaw (watch_otherend) %s failed: %d.\n",
+			dev_name(dev), err);
+		return err;
+	}
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_cancel);
diff --git a/include/xen/xenbus.h b/include/xen/xenbus.h
index 5a8315e6d8a6..8da964763255 100644
--- a/include/xen/xenbus.h
+++ b/include/xen/xenbus.h
@@ -104,6 +104,9 @@ struct xenbus_driver {
 	int (*remove)(struct xenbus_device *dev);
 	int (*suspend)(struct xenbus_device *dev);
 	int (*resume)(struct xenbus_device *dev);
+	int (*freeze)(struct xenbus_device *dev);
+	int (*thaw)(struct xenbus_device *dev);
+	int (*restore)(struct xenbus_device *dev);
 	int (*uevent)(struct xenbus_device *, struct kobj_uevent_env *);
 	struct device_driver driver;
 	int (*read_otherend_details)(struct xenbus_device *dev);
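The resulting dispatch rule is: under Xen suspend the legacy ->suspend callback runs; under PM suspend/hibernation the new ->freeze callback runs instead, with either hook optional. A hypothetical userspace model of that decision (struct and function names below are illustrative stand-ins, not the kernel API):

```c
#include <stddef.h>

/* Minimal stand-ins for struct xenbus_device / xenbus_driver; only the
 * members needed to illustrate the new dispatch are modeled. */
struct xenbus_device_model {
	const char *nodename;
};

struct xenbus_driver_model {
	int (*suspend)(struct xenbus_device_model *dev);
	int (*freeze)(struct xenbus_device_model *dev);
};

/* Mirrors xenbus_dev_suspend(): Xen suspend keeps the legacy ->suspend
 * path; PM suspend/hibernation takes the new ->freeze path. A missing
 * callback is treated as success, as in the bus code. */
int dev_suspend_model(struct xenbus_driver_model *drv,
		      struct xenbus_device_model *dev,
		      int xen_suspend)
{
	if (xen_suspend)
		return drv->suspend ? drv->suspend(dev) : 0;
	return drv->freeze ? drv->freeze(dev) : 0;
}

/* Instrumented fake callbacks recording which path was taken. */
int took_suspend, took_freeze;
int fake_suspend(struct xenbus_device_model *dev)
{
	(void)dev;
	took_suspend = 1;
	return 0;
}
int fake_freeze(struct xenbus_device_model *dev)
{
	(void)dev;
	took_freeze = 1;
	return 0;
}
```

A frontend that never implements ->freeze keeps working under Xen suspend exactly as before, which is the compatibility point the commit message makes.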
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:26:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:26:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBcr-0005Uv-B5; Tue, 19 May 2020 23:26:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBcr-0005Un-0P
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:26:21 +0000
X-Inumbo-ID: 24fe38e6-9a28-11ea-a9a0-12813bfff9fa
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24fe38e6-9a28-11ea-a9a0-12813bfff9fa;
 Tue, 19 May 2020 23:26:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930780; x=1621466780;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=SCTjmV5xVHiNIjiUIMGTdFjC64fXGAUedyZ2c0jICro=;
 b=CekwH+sbAr2pQsEc8oiWkbBH7ofiPVvejYJBFfA/XnznZP0Liel6g0nW
 Mzeh2uGLO9HiCH783//og6xcgE9RFhxYJTOoaJDWhlEFRSsUfnt0Ffb+2
 7KzlZJGf3ynT47xtPhx850MM2v5cW9OPQgeKidCfS6GdlOocYuuWcwjfU 4=;
IronPort-SDR: Kg+TLt+LY+bqUJUwBpjU3RbEzJfqG2xPZmKQMMNfSWm/LI+Yb1NN+YBg1sA0a/PiOEnBmgV9Bu
 QFRUnvjW9CbQ==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="36215719"
Received: from sea32-co-svc-lb4-vlan2.sea.corp.amazon.com (HELO
 email-inbound-relay-1e-17c49630.us-east-1.amazon.com) ([10.47.23.34])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 19 May 2020 23:26:16 +0000
Received: from EX13MTAUWC001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1e-17c49630.us-east-1.amazon.com (Postfix) with ESMTPS
 id 61936A2313; Tue, 19 May 2020 23:26:09 +0000 (UTC)
Received: from EX13D05UWC003.ant.amazon.com (10.43.162.226) by
 EX13MTAUWC001.ant.amazon.com (10.43.162.135) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:25:48 +0000
Received: from EX13MTAUWC001.ant.amazon.com (10.43.162.135) by
 EX13D05UWC003.ant.amazon.com (10.43.162.226) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:25:48 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.162.232) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:25:48 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 91F2940712; Tue, 19 May 2020 23:25:48 +0000 (UTC)
Date: Tue, 19 May 2020 23:25:48 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 03/12] x86/xen: Introduce new function to map
 HYPERVISOR_shared_info on Resume
Message-ID: <529f544a64bb93b920bf86b1d3f86d93b0a4219b.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce a small function which re-uses the shared page's physical
address allocated during guest initialization in reserve_shared_info(),
rather than allocating a new page during the resume flow.
It also maps the shared_info page by calling xen_hvm_init_shared_info().

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/enlighten_hvm.c | 7 +++++++
 arch/x86/xen/xen-ops.h       | 1 +
 2 files changed, 8 insertions(+)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index e138f7de52d2..75b1ec7a0fcd 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -27,6 +27,13 @@
 
 static unsigned long shared_info_pfn;
 
+void xen_hvm_map_shared_info(void)
+{
+	xen_hvm_init_shared_info();
+	if (shared_info_pfn)
+		HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
+}
+
 void xen_hvm_init_shared_info(void)
 {
 	struct xen_add_to_physmap xatp;
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index 45a441c33d6d..d84c357994bd 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -56,6 +56,7 @@ void xen_enable_syscall(void);
 void xen_vcpu_restore(void);
 
 void xen_callback_vector(void);
+void xen_hvm_map_shared_info(void);
 void xen_hvm_init_shared_info(void);
 void xen_unplug_emulated_devices(void);
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:26:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:26:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBd7-0005Yb-K4; Tue, 19 May 2020 23:26:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBd7-0005YR-39
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:26:37 +0000
X-Inumbo-ID: 2f2669ce-9a28-11ea-a9a0-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f2669ce-9a28-11ea-a9a0-12813bfff9fa;
 Tue, 19 May 2020 23:26:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930797; x=1621466797;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=+5sqW/9tRYGzDifQrxQJ7j+i6WgVO1jvWkTIJZXYHhc=;
 b=A17VHiO105y+0KbNt1NGFoV0iUwj2GmB8gEjSckhk0GE5cMdr1EwXde6
 l1Eu/XU2IXqJ9YfCfLC7jLNtYcXN5SzTAlo3wSIXfOb8YzPfUw1B71Nyg
 PHlNoi1LxqAayfb+uTxDmktSSa6UfHFT67JfiR2WVnruy6Momz9yK82jt c=;
IronPort-SDR: mWw8AR8ej5ULAmO2NWPA9U3fjvDcfrEySiPbnKH0hbJ0nS6cPVOD41yjXFXqxCxYYtwfKRalTW
 k2MS/P3b7VvQ==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31243916"
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-538b0bfb.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 19 May 2020 23:26:23 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-538b0bfb.us-west-2.amazon.com (Postfix) with ESMTPS
 id CA0F8A1EEC; Tue, 19 May 2020 23:26:20 +0000 (UTC)
Received: from EX13D01UWB002.ant.amazon.com (10.43.161.136) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.249) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:26:15 +0000
Received: from EX13MTAUWB001.ant.amazon.com (10.43.161.207) by
 EX13d01UWB002.ant.amazon.com (10.43.161.136) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:26:15 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.161.249) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:26:15 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 5012E40712; Tue, 19 May 2020 23:26:15 +0000 (UTC)
Date: Tue, 19 May 2020 23:26:15 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 04/12] x86/xen: add system core suspend and resume callbacks
Message-ID: <79cf02631dc00e62ebf90410bfbbdb52fe7024cb.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Add Xen PVHVM-specific system core callbacks for PM suspend and
hibernation support. The callbacks suspend and resume Xen
primitives like shared_info, pvclock and the grant table. Note that
Xen suspend handles them in its own manner, but the system core
callbacks are invoked from that context as well. So if the callbacks
are called from the Xen suspend context, return immediately.

Signed-off-by: Agarwal Anchal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 arch/x86/xen/enlighten_hvm.c |  1 +
 arch/x86/xen/suspend.c       | 53 ++++++++++++++++++++++++++++++++++++
 include/xen/xen-ops.h        |  3 ++
 3 files changed, 57 insertions(+)

diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
index 75b1ec7a0fcd..138e71786e03 100644
--- a/arch/x86/xen/enlighten_hvm.c
+++ b/arch/x86/xen/enlighten_hvm.c
@@ -204,6 +204,7 @@ static void __init xen_hvm_guest_init(void)
 	if (xen_feature(XENFEAT_hvm_callback_vector))
 		xen_have_vector_callback = 1;
 
+	xen_setup_syscore_ops();
 	xen_hvm_smp_init();
 	WARN_ON(xen_cpuhp_setup(xen_cpu_up_prepare_hvm, xen_cpu_dead_hvm));
 	xen_unplug_emulated_devices();
diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index 1d83152c761b..784c4484100b 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -2,17 +2,22 @@
 #include <linux/types.h>
 #include <linux/tick.h>
 #include <linux/percpu-defs.h>
+#include <linux/syscore_ops.h>
+#include <linux/kernel_stat.h>
 
 #include <xen/xen.h>
 #include <xen/interface/xen.h>
+#include <xen/interface/memory.h>
 #include <xen/grant_table.h>
 #include <xen/events.h>
+#include <xen/xen-ops.h>
 
 #include <asm/cpufeatures.h>
 #include <asm/msr-index.h>
 #include <asm/xen/hypercall.h>
 #include <asm/xen/page.h>
 #include <asm/fixmap.h>
+#include <asm/pvclock.h>
 
 #include "xen-ops.h"
 #include "mmu.h"
@@ -82,3 +87,51 @@ void xen_arch_suspend(void)
 
 	on_each_cpu(xen_vcpu_notify_suspend, NULL, 1);
 }
+
+static int xen_syscore_suspend(void)
+{
+	struct xen_remove_from_physmap xrfp;
+	int ret;
+
+	/* Xen suspend does similar work in its own logic */
+	if (xen_suspend_mode_is_xen_suspend())
+		return 0;
+
+	xrfp.domid = DOMID_SELF;
+	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
+
+	ret = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
+	if (!ret)
+		HYPERVISOR_shared_info = &xen_dummy_shared_info;
+
+	return ret;
+}
+
+static void xen_syscore_resume(void)
+{
+	/* Xen suspend does similar work in its own logic */
+	if (xen_suspend_mode_is_xen_suspend())
+		return;
+
+	/* No need to setup vcpu_info as it's already moved off */
+	xen_hvm_map_shared_info();
+
+	pvclock_resume();
+
+	gnttab_resume();
+}
+
+/*
+ * These callbacks will be called with interrupts disabled and when having only
+ * one CPU online.
+ */
+static struct syscore_ops xen_hvm_syscore_ops = {
+	.suspend = xen_syscore_suspend,
+	.resume = xen_syscore_resume
+};
+
+void __init xen_setup_syscore_ops(void)
+{
+	if (xen_hvm_domain())
+		register_syscore_ops(&xen_hvm_syscore_ops);
+}
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 4ffe031adfc7..89b1e88712d6 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -43,6 +43,9 @@ int xen_setup_shutdown_event(void);
 bool xen_suspend_mode_is_xen_suspend(void);
 bool xen_suspend_mode_is_pm_suspend(void);
 bool xen_suspend_mode_is_pm_hibernation(void);
+
+void xen_setup_syscore_ops(void);
+
 extern unsigned long *xen_contiguous_bitmap;
 
 #if defined(CONFIG_XEN_PV) || defined(CONFIG_ARM) || defined(CONFIG_ARM64)
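The net effect of the gating above is that the PM-path teardown (unmapping shared_info via XENMEM_remove_from_physmap) and the matching resume-path remap only run when the on-going suspend is NOT a Xen suspend. A rough userspace model of that invariant, with the mapping state reduced to a flag and all names being illustrative stand-ins rather than the kernel API:

```c
#include <stdbool.h>

/* Models xen_suspend_mode_is_xen_suspend() and the shared_info mapping. */
bool mode_is_xen_suspend;
bool shared_info_mapped = true;

/* Mirrors xen_syscore_suspend(): a no-op under Xen suspend, which does
 * the equivalent work in its own logic; otherwise tear down the map. */
int syscore_suspend_model(void)
{
	if (mode_is_xen_suspend)
		return 0;
	shared_info_mapped = false;  /* models XENMEM_remove_from_physmap */
	return 0;
}

/* Mirrors xen_syscore_resume(): restore the mapping on the PM path only. */
void syscore_resume_model(void)
{
	if (mode_is_xen_suspend)
		return;
	shared_info_mapped = true;   /* models xen_hvm_map_shared_info() */
}
```

This is why the mode-tracking from patch 01 must be in place before these callbacks register: without it, a Xen suspend would tear down state the toolstack-assisted path expects to handle itself.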
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:27:07 2020
Date: Tue, 19 May 2020 23:26:37 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume during
 hibernation
Message-ID: <fce013fc1348f02b8e4ec61e7a631093c72f993c.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>

Many legacy device drivers do not implement power management (PM)
functions, which means that interrupts requested by these drivers stay
active when the kernel is hibernated.

This does not matter on bare metal and on most hypervisors because the
interrupt is restored on resume without any noticeable side effects as
it stays connected to the same physical or virtual interrupt line.

The XEN interrupt mechanism is different as it maintains a mapping
between the Linux interrupt number and a XEN event channel. If the
interrupt stays active on hibernation this mapping is preserved but
there is unfortunately no guarantee that on resume the same event
channels are reassigned to these devices. This can result in event
channel conflicts which prevent the affected devices from being
restored correctly.

One way to solve this would be to add the necessary power management
functions to all affected legacy device drivers, but that's a
questionable effort which does not provide any benefits on non-XEN
environments.

The least intrusive and most efficient solution is to provide a
mechanism which allows the core interrupt code to tear down these
interrupts on hibernation and bring them back up again on resume. This
allows the XEN event channel mechanism to assign an arbitrary event
channel on resume without affecting the functionality of these
devices.

Fortunately, all these device interrupts are handled by a dedicated XEN
interrupt chip, so the chip can be marked to indicate that all interrupts
connected to it are handled this way. This is pretty much in line with the
other interrupt chip specific quirks, e.g. IRQCHIP_MASK_ON_SUSPEND.

Add a new quirk flag IRQCHIP_SHUTDOWN_ON_SUSPEND and add support for it
in the core interrupt suspend/resume paths.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/xen/events/events_base.c |  1 +
 include/linux/irq.h              |  2 ++
 kernel/irq/chip.c                |  2 +-
 kernel/irq/internals.h           |  1 +
 kernel/irq/pm.c                  | 31 ++++++++++++++++++++++---------
 5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 3a791c8485d0..decf65bd3451 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1613,6 +1613,7 @@ static struct irq_chip xen_pirq_chip __read_mostly = {
 	.irq_set_affinity	= set_affinity_irq,
 
 	.irq_retrigger		= retrigger_dynirq,
+	.flags                  = IRQCHIP_SHUTDOWN_ON_SUSPEND,
 };
 
 static struct irq_chip xen_percpu_chip __read_mostly = {
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 8d5bc2c237d7..94cb8c994d06 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -542,6 +542,7 @@ struct irq_chip {
  * IRQCHIP_EOI_THREADED:	Chip requires eoi() on unmask in threaded mode
  * IRQCHIP_SUPPORTS_LEVEL_MSI	Chip can provide two doorbells for Level MSIs
  * IRQCHIP_SUPPORTS_NMI:	Chip can deliver NMIs, only for root irqchips
+ * IRQCHIP_SHUTDOWN_ON_SUSPEND: Shutdown non-wake irqs in the suspend path
  */
 enum {
 	IRQCHIP_SET_TYPE_MASKED		= (1 <<  0),
@@ -553,6 +554,7 @@ enum {
 	IRQCHIP_EOI_THREADED		= (1 <<  6),
 	IRQCHIP_SUPPORTS_LEVEL_MSI	= (1 <<  7),
 	IRQCHIP_SUPPORTS_NMI		= (1 <<  8),
+	IRQCHIP_SHUTDOWN_ON_SUSPEND     = (1 <<  9),
 };
 
 #include <linux/irqdesc.h>
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 41e7e37a0928..fd59489ff14b 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -233,7 +233,7 @@ __irq_startup_managed(struct irq_desc *desc, struct cpumask *aff, bool force)
 }
 #endif
 
-static int __irq_startup(struct irq_desc *desc)
+int __irq_startup(struct irq_desc *desc)
 {
 	struct irq_data *d = irq_desc_get_irq_data(desc);
 	int ret = 0;
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 7db284b10ac9..b6fca5eacff7 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -80,6 +80,7 @@ extern void __enable_irq(struct irq_desc *desc);
 extern int irq_activate(struct irq_desc *desc);
 extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
 extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
+extern int __irq_startup(struct irq_desc *desc);
 
 extern void irq_shutdown(struct irq_desc *desc);
 extern void irq_shutdown_and_deactivate(struct irq_desc *desc);
diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
index 8f557fa1f4fe..dc48a25f1756 100644
--- a/kernel/irq/pm.c
+++ b/kernel/irq/pm.c
@@ -85,16 +85,25 @@ static bool suspend_device_irq(struct irq_desc *desc)
 	}
 
 	desc->istate |= IRQS_SUSPENDED;
-	__disable_irq(desc);
-
 	/*
-	 * Hardware which has no wakeup source configuration facility
-	 * requires that the non wakeup interrupts are masked at the
-	 * chip level. The chip implementation indicates that with
-	 * IRQCHIP_MASK_ON_SUSPEND.
+	 * Some irq chips (e.g. XEN PIRQ) require a full shutdown on suspend
+	 * as some legacy drivers (e.g. floppy) do nothing during the
+	 * suspend path.
 	 */
-	if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
-		mask_irq(desc);
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND) {
+		irq_shutdown(desc);
+	} else {
+		__disable_irq(desc);
+
+		/*
+		 * Hardware which has no wakeup source configuration facility
+		 * requires that the non wakeup interrupts are masked at the
+		 * chip level. The chip implementation indicates that with
+		 * IRQCHIP_MASK_ON_SUSPEND.
+		 */
+		if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
+			mask_irq(desc);
+	}
 	return true;
 }
 
@@ -152,7 +161,11 @@ static void resume_irq(struct irq_desc *desc)
 	irq_state_set_masked(desc);
 resume:
 	desc->istate &= ~IRQS_SUSPENDED;
-	__enable_irq(desc);
+
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND)
+		__irq_startup(desc);
+	else
+		__enable_irq(desc);
 }
 
 static void resume_irqs(bool want_early)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:28:07 2020
Date: Tue, 19 May 2020 23:27:50 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Message-ID: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>

From: Munehisa Kamata <kamatam@amazon.com>

S4 power transition states are quite different from Xen suspend/resume.
The former is visible to the guest, so frontend drivers must be aware of
the state transitions and take appropriate action when needed. In the
transition to S4 we need to make sure that at least all in-flight blkif
requests get completed, since they probably contain bits of the guest's
memory image and that's not going to get saved any other way. Hence,
re-issuing in-flight requests as in the Xen resume case will not work
here. This is in contrast to Xen suspend, where we need to freeze with as
little processing as possible to avoid dirtying RAM late in the migration
cycle, and we know that in-flight data can wait.

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
events must implement these xenbus_driver callbacks. The freeze handler
stops the block-layer queue and disconnects the frontend from the backend
while freeing ring_info and associated resources. Before disconnecting
from the backend, we need to prevent any new I/O from being queued and
wait for existing I/O to complete. Freezing/unfreezing the queues
guarantees that there are no requests in use on the shared ring. However,
for sanity we should check the state of the ring before disconnecting to
make sure that there are no outstanding requests to be processed on the
ring. The restore handler re-allocates ring_info, unquiesces and
unfreezes the queue and re-connects to the backend, so that the rest of
the kernel can continue to use the block device transparently.

Note: if an older backend doesn't have commit 12ea729645ace ("xen/blkback:
unmap all persistent grants when frontend gets disconnected"), the
frontend may see a massive amount of grant table warnings when freeing
resources:
[   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
[   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!

In this case, persistent grants would need to be disabled.

[Anchal Changelog: Removed timeout/request during blkfront freeze.
Reworked the whole patch to work with blk-mq and incorporate upstream's
comments]

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/block/xen-blkfront.c | 122 +++++++++++++++++++++++++++++++++--
 1 file changed, 115 insertions(+), 7 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 3b889ea950c2..464863ed7093 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -48,6 +48,8 @@
 #include <linux/list.h>
 #include <linux/workqueue.h>
 #include <linux/sched/mm.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -80,6 +82,8 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -219,6 +223,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -1005,6 +1010,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1057,7 +1063,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
 		case XEN_SCSI_DISK5_MAJOR:
 		case XEN_SCSI_DISK6_MAJOR:
 		case XEN_SCSI_DISK7_MAJOR:
-			*offset = (*minor / PARTS_PER_DISK) + 
+			*offset = (*minor / PARTS_PER_DISK) +
 				((major - XEN_SCSI_DISK1_MAJOR + 1) * 16) +
 				EMULATED_SD_DISK_NAME_OFFSET;
 			*minor = *minor +
@@ -1072,7 +1078,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
 		case XEN_SCSI_DISK13_MAJOR:
 		case XEN_SCSI_DISK14_MAJOR:
 		case XEN_SCSI_DISK15_MAJOR:
-			*offset = (*minor / PARTS_PER_DISK) + 
+			*offset = (*minor / PARTS_PER_DISK) +
 				((major - XEN_SCSI_DISK8_MAJOR + 8) * 16) +
 				EMULATED_SD_DISK_NAME_OFFSET;
 			*minor = *minor +
@@ -1353,6 +1359,8 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	unsigned int i;
 	struct blkfront_ring_info *rinfo;
 
+	if (info->connected == BLKIF_STATE_FREEZING)
+		goto free_rings;
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1360,6 +1368,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+free_rings:
 	for_each_rinfo(info, rinfo, i)
 		blkif_free_ring(rinfo);
 
@@ -1563,8 +1572,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
-		return IRQ_HANDLED;
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED &&
+		     info->connected != BLKIF_STATE_FREEZING)) {
+		return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2027,6 +2038,7 @@ static int blkif_recover(struct blkfront_info *info)
 	unsigned int segs;
 	struct blkfront_ring_info *rinfo;
 
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
@@ -2048,6 +2060,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2364,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2481,12 +2497,36 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				blkif_free(info, 0);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+				break;
+			}
+
 			break;
+		}
+
+		/*
+		 * We may somehow receive backend's Closed again while thawing
+		 * or restoring and it causes thawing or restoring to fail.
+		 * Ignore such unexpected state regardless of the backend state.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN) {
+			dev_dbg(&dev->dev,
+					"ignore the backend's Closed state: %s",
+					dev->nodename);
+			break;
+		}
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2630,6 +2670,71 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	/* This is a reasonable timeout, as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	unsigned long flags;
+	int err = 0;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_freeze_queue(info->rq);
+	blk_mq_quiesce_queue(info->rq);
+
+	for_each_rinfo(info, rinfo, i) {
+		/* No more gnttab callback work. */
+		gnttab_cancel_free_callback(&rinfo->callback);
+		/* Flush gnttab callback work. Must be done with no locks held. */
+		flush_work(&rinfo->work);
+	}
+
+	for_each_rinfo(info, rinfo, i) {
+		spin_lock_irqsave(&rinfo->ring_lock, flags);
+		if (RING_FULL(&rinfo->ring) ||
+		    RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {
+			xenbus_dev_error(dev, -EBUSY,
+					 "Hibernation failed: the ring is still busy");
+			info->connected = BLKIF_STATE_CONNECTED;
+			spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+			return -EBUSY;
+		}
+		spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+	}
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; "
+				 "the device may be left in an inconsistent state");
+	}
+
+	return err;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err = 0;
+
+	err = talk_to_blkback(dev, info);
+	blk_mq_unquiesce_queue(info->rq);
+	blk_mq_unfreeze_queue(info->rq);
+	if (!err)
+		blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2653,6 +2758,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore
 };
 
 static void purge_persistent_grants(struct blkfront_info *info)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:28:26 2020
Date: Tue, 19 May 2020 23:28:08 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 07/12] xen-netfront: add callbacks for PM suspend and
 hibernation
Message-ID: <d0371ece7f89c6c0a33fe51567878c3151ec0256.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>

From: Munehisa Kamata <kamatam@amazon.com>

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. The freeze handler simply disconnects the frontend from the
backend and frees the resources associated with the queues after
disabling the net_device from the system. The restore handler just
changes the frontend state and lets the xenbus handler re-allocate the
resources and re-connect to the backend. This can be performed
transparently to the rest of the system. The handlers are used for both
PM suspend and hibernation so that we can keep the existing
suspend/resume callbacks for Xen suspend without modification. Freezing
netfront devices is normally expected to finish within a few hundred
milliseconds, but it can rarely take more than 5 seconds and hit the
hard-coded timeout; that depends on the backend state, which may be
congested and/or have a complex configuration. While it's a rare case, a
longer default timeout seems more reasonable here to avoid hitting the
timeout. Also, make it configurable via a module parameter so that we
can cover broader setups than the ones we currently know about.

[Anchal changelog: Variable name fix and checkpatch.pl fixes]
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/net/xen-netfront.c | 98 +++++++++++++++++++++++++++++++++++++-
 1 file changed, 97 insertions(+), 1 deletion(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 482c6c8b0fb7..65edcdd6e05f 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -43,6 +43,7 @@
 #include <linux/moduleparam.h>
 #include <linux/mm.h>
 #include <linux/slab.h>
+#include <linux/completion.h>
 #include <net/ip.h>
 
 #include <xen/xen.h>
@@ -56,6 +57,12 @@
 #include <xen/interface/memory.h>
 #include <xen/interface/grant_table.h>
 
+enum netif_freeze_state {
+	NETIF_FREEZE_STATE_UNFROZEN,
+	NETIF_FREEZE_STATE_FREEZING,
+	NETIF_FREEZE_STATE_FROZEN,
+};
+
 /* Module parameters */
 #define MAX_QUEUES_DEFAULT 8
 static unsigned int xennet_max_queues;
@@ -63,6 +70,12 @@ module_param_named(max_queues, xennet_max_queues, uint, 0644);
 MODULE_PARM_DESC(max_queues,
 		 "Maximum number of queues per virtual interface");
 
+static unsigned int netfront_freeze_timeout_secs = 10;
+module_param_named(freeze_timeout_secs,
+		   netfront_freeze_timeout_secs, uint, 0644);
+MODULE_PARM_DESC(freeze_timeout_secs,
+		 "Timeout in seconds for freezing a netfront device");
+
 static const struct ethtool_ops xennet_ethtool_ops;
 
 struct netfront_cb {
@@ -160,6 +173,10 @@ struct netfront_info {
 	struct netfront_stats __percpu *tx_stats;
 
 	atomic_t rx_gso_checksum_fixup;
+
+	int freeze_state;
+
+	struct completion wait_backend_disconnected;
 };
 
 struct netfront_rx_info {
@@ -721,6 +738,21 @@ static int xennet_close(struct net_device *dev)
 	return 0;
 }
 
+static int xennet_disable_interrupts(struct net_device *dev)
+{
+	struct netfront_info *np = netdev_priv(dev);
+	unsigned int num_queues = dev->real_num_tx_queues;
+	unsigned int queue_index;
+	struct netfront_queue *queue;
+
+	for (queue_index = 0; queue_index < num_queues; ++queue_index) {
+		queue = &np->queues[queue_index];
+		disable_irq(queue->tx_irq);
+		disable_irq(queue->rx_irq);
+	}
+	return 0;
+}
+
 static void xennet_move_rx_slot(struct netfront_queue *queue, struct sk_buff *skb,
 				grant_ref_t ref)
 {
@@ -1301,6 +1333,8 @@ static struct net_device *xennet_create_dev(struct xenbus_device *dev)
 
 	np->queues = NULL;
 
+	init_completion(&np->wait_backend_disconnected);
+
 	err = -ENOMEM;
 	np->rx_stats = netdev_alloc_pcpu_stats(struct netfront_stats);
 	if (np->rx_stats == NULL)
@@ -1794,6 +1828,50 @@ static int xennet_create_queues(struct netfront_info *info,
 	return 0;
 }
 
+static int netfront_freeze(struct xenbus_device *dev)
+{
+	struct netfront_info *info = dev_get_drvdata(&dev->dev);
+	unsigned long timeout = netfront_freeze_timeout_secs * HZ;
+	int err = 0;
+
+	xennet_disable_interrupts(info->netdev);
+
+	netif_device_detach(info->netdev);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FREEZING;
+
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/* We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; "
+				 "the device may be left in an inconsistent state");
+		return err;
+	}
+
+	/* Tear down queues */
+	xennet_disconnect_backend(info);
+	xennet_destroy_queues(info);
+
+	info->freeze_state = NETIF_FREEZE_STATE_FROZEN;
+
+	return err;
+}
+
+static int netfront_restore(struct xenbus_device *dev)
+{
+	/* Kick the backend to re-connect */
+	xenbus_switch_state(dev, XenbusStateInitialising);
+
+	return 0;
+}
+
 /* Common code used when first setting up, and when resuming. */
 static int talk_to_netback(struct xenbus_device *dev,
 			   struct netfront_info *info)
@@ -1999,6 +2077,8 @@ static int xennet_connect(struct net_device *dev)
 		spin_unlock_bh(&queue->rx_lock);
 	}
 
+	np->freeze_state = NETIF_FREEZE_STATE_UNFROZEN;
+
 	return 0;
 }
 
@@ -2036,10 +2116,23 @@ static void netback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			/* dpm context is waiting for the backend */
+			if (np->freeze_state == NETIF_FREEZE_STATE_FREEZING)
+				complete(&np->wait_backend_disconnected);
 			break;
+		}
+
 		/* Fall through - Missed the backend's CLOSING state. */
 	case XenbusStateClosing:
+		/* We may see an unexpected Closed or Closing from the backend.
+		 * Just ignore it so as not to prevent the frontend from being
+		 * re-connected in the case of PM suspend or hibernation.
+		 */
+		if (np->freeze_state == NETIF_FREEZE_STATE_FROZEN &&
+		    dev->state == XenbusStateInitialising) {
+			break;
+		}
 		xenbus_frontend_closed(dev);
 		break;
 	}
@@ -2186,6 +2279,9 @@ static struct xenbus_driver netfront_driver = {
 	.probe = netfront_probe,
 	.remove = xennet_remove,
 	.resume = netfront_resume,
+	.freeze = netfront_freeze,
+	.thaw	= netfront_restore,
+	.restore = netfront_restore,
 	.otherend_changed = netback_changed,
 };
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:28:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBf8-0005xo-2h; Tue, 19 May 2020 23:28:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBf6-0005xQ-So
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:28:40 +0000
X-Inumbo-ID: 7852e852-9a28-11ea-a9a0-12813bfff9fa
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7852e852-9a28-11ea-a9a0-12813bfff9fa;
 Tue, 19 May 2020 23:28:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930919; x=1621466919;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=8oFwSiGiXazezHFt6ilNhdZTejmHfEW4a5eyQv6e/gQ=;
 b=O67L4Ghoyg8obEDO3ivTUOcNgc7b8Q8+mRkR6K8hVvkPyO1A3y6cEHm4
 Cu8St2CGH61ACGXV0/rhMCbirWBWM3hGOPReuITNJYBwt05ARxHxkLcB5
 NyHESGaaztZUoQwkmxfly6yrh36RwxACxwzpf5kDmA4UyjbJxsCzYxFY+ 0=;
IronPort-SDR: ypi20ESB/vUTFj2KU/00Es79bg7nB7GrK8X0Zx694qkzKCpMWUQK1Y3nZ8y0w9sxgpu4l6EFGx
 eREArqyAyGtA==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31067058"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-c5104f52.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 19 May 2020 23:28:37 +0000
Received: from EX13MTAUWA001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-c5104f52.us-west-2.amazon.com (Postfix) with ESMTPS
 id 03FA7A1E06; Tue, 19 May 2020 23:28:35 +0000 (UTC)
Received: from EX13D07UWA001.ant.amazon.com (10.43.160.145) by
 EX13MTAUWA001.ant.amazon.com (10.43.160.58) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:28:30 +0000
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D07UWA001.ant.amazon.com (10.43.160.145) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:28:30 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:28:30 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 8B45240712; Tue, 19 May 2020 23:28:30 +0000 (UTC)
Date: Tue, 19 May 2020 23:28:30 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 08/12] xen/time: introduce xen_{save,restore}_steal_clock
Message-ID: <ae90ece495d29f54fc9986a07f45ab6659136573.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Currently, the steal time accounting code in the scheduler expects the
steal clock callback to provide a monotonically increasing value. If
the accounting code receives a value smaller than the previous one, it
uses a negative value to calculate the steal time, which results in
incorrectly updated idle and steal time accounting. This breaks
userspace tools which read /proc/stat.

top - 08:05:35 up  2:12,  3 users,  load average: 0.00, 0.07, 0.23
Tasks:  80 total,   1 running,  79 sleeping,   0 stopped,   0 zombie
Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,30100.0%id,  0.0%wa,  0.0%hi, 0.0%si,-1253874204672.0%st

This can actually happen when a Xen PVHVM guest gets restored from
hibernation, because such a restored guest is just a fresh domain from
Xen's perspective, and the time information in the runstate info starts
over from scratch.

This patch introduces xen_save_steal_clock(), which saves the current
values in the runstate info into per-cpu variables. Its counterpart,
xen_restore_steal_clock(), sets an offset if it finds that the current
values in the runstate info are smaller than the previous ones.
xen_steal_clock() is also modified to use the offset to ensure that the
scheduler only sees a monotonically increasing number.

Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 drivers/xen/time.c    | 29 ++++++++++++++++++++++++++++-
 include/xen/xen-ops.h |  2 ++
 2 files changed, 30 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/time.c b/drivers/xen/time.c
index 0968859c29d0..3560222cc0dd 100644
--- a/drivers/xen/time.c
+++ b/drivers/xen/time.c
@@ -23,6 +23,9 @@ static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
 
 static DEFINE_PER_CPU(u64[4], old_runstate_time);
 
+static DEFINE_PER_CPU(u64, xen_prev_steal_clock);
+static DEFINE_PER_CPU(u64, xen_steal_clock_offset);
+
 /* return an consistent snapshot of 64-bit time/counter value */
 static u64 get64(const u64 *p)
 {
@@ -149,7 +152,7 @@ bool xen_vcpu_stolen(int vcpu)
 	return per_cpu(xen_runstate, vcpu).state == RUNSTATE_runnable;
 }
 
-u64 xen_steal_clock(int cpu)
+static u64 __xen_steal_clock(int cpu)
 {
 	struct vcpu_runstate_info state;
 
@@ -157,6 +160,30 @@ u64 xen_steal_clock(int cpu)
 	return state.time[RUNSTATE_runnable] + state.time[RUNSTATE_offline];
 }
 
+u64 xen_steal_clock(int cpu)
+{
+	return __xen_steal_clock(cpu) + per_cpu(xen_steal_clock_offset, cpu);
+}
+
+void xen_save_steal_clock(int cpu)
+{
+	per_cpu(xen_prev_steal_clock, cpu) = xen_steal_clock(cpu);
+}
+
+void xen_restore_steal_clock(int cpu)
+{
+	u64 steal_clock = __xen_steal_clock(cpu);
+
+	if (per_cpu(xen_prev_steal_clock, cpu) > steal_clock) {
+		/* Need to update the offset */
+		per_cpu(xen_steal_clock_offset, cpu) =
+		    per_cpu(xen_prev_steal_clock, cpu) - steal_clock;
+	} else {
+		/* Avoid unnecessary steal clock warp */
+		per_cpu(xen_steal_clock_offset, cpu) = 0;
+	}
+}
+
 void xen_setup_runstate_info(int cpu)
 {
 	struct vcpu_register_runstate_memory_area area;
diff --git a/include/xen/xen-ops.h b/include/xen/xen-ops.h
index 89b1e88712d6..74fb5eb3aad8 100644
--- a/include/xen/xen-ops.h
+++ b/include/xen/xen-ops.h
@@ -37,6 +37,8 @@ void xen_time_setup_guest(void);
 void xen_manage_runstate_time(int action);
 void xen_get_runstate_snapshot(struct vcpu_runstate_info *res);
 u64 xen_steal_clock(int cpu);
+void xen_save_steal_clock(int cpu);
+void xen_restore_steal_clock(int cpu);
 
 int xen_setup_shutdown_event(void);
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:29:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:29:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBfl-000672-D2; Tue, 19 May 2020 23:29:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBfk-00066o-Eb
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:29:20 +0000
X-Inumbo-ID: 90840622-9a28-11ea-a9a0-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90840622-9a28-11ea-a9a0-12813bfff9fa;
 Tue, 19 May 2020 23:29:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930961; x=1621466961;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=tsY3KKFSv8tIe/IlzAFXH+9mav4/7deKA0fP8gv9p2Q=;
 b=vmMjaxgHOywhPJlAoNX9xeMAU8OPxh5hG+wSL9v+0UCtVxxJpwgckF4x
 Kmf2XF7s3XGF6YSf1agN9mq5SGKNLPB4UaWDnGLwyeNMqu/8KtbY0DdXC
 uAEq2LkXD9Ka1KN4wP/16mThzWMdBa3MRkJScJepwGftD4nb47gOYnKMG k=;
IronPort-SDR: OuWSF/7THtiGViFkEIVnxtJ6/3N1YwtQJST5wNdhAbY4Qz78HugXzY7+6VAKZad4uN2yJc0PFS
 vqxB8NkWeCbg==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31244299"
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1e-c7c08562.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 19 May 2020 23:29:20 +0000
Received: from EX13MTAUWC001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1e-c7c08562.us-east-1.amazon.com (Postfix) with ESMTPS
 id 5678B2410A6; Tue, 19 May 2020 23:29:13 +0000 (UTC)
Received: from EX13D05UWC001.ant.amazon.com (10.43.162.82) by
 EX13MTAUWC001.ant.amazon.com (10.43.162.135) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:28:55 +0000
Received: from EX13MTAUWC001.ant.amazon.com (10.43.162.135) by
 EX13D05UWC001.ant.amazon.com (10.43.162.82) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:28:55 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.162.232) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:28:55 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 5905D40712; Tue, 19 May 2020 23:28:55 +0000 (UTC)
Date: Tue, 19 May 2020 23:28:55 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 09/12] x86/xen: save and restore steal clock
Message-ID: <6f39a1594a25ab5325f34e1e297900d699cd92bf.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

Save the steal clock values of all present CPUs in the syscore suspend
callback. Also, restore the boot CPU's steal clock in the syscore
resume callback. For non-boot CPUs, restore it after they are brought
up, because the runstate info for non-boot CPUs is not active until
then.

Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/suspend.c | 13 ++++++++++++-
 arch/x86/xen/time.c    |  3 +++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index 784c4484100b..dae0f74f5390 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -91,12 +91,20 @@ void xen_arch_suspend(void)
 static int xen_syscore_suspend(void)
 {
 	struct xen_remove_from_physmap xrfp;
-	int ret;
+	int cpu, ret;
 
 	/* Xen suspend does similar stuffs in its own logic */
 	if (xen_suspend_mode_is_xen_suspend())
 		return 0;
 
+	for_each_present_cpu(cpu) {
+		/*
+		 * Nonboot CPUs are already offline, but the last copy of
+		 * runstate info is still accessible.
+		 */
+		xen_save_steal_clock(cpu);
+	}
+
 	xrfp.domid = DOMID_SELF;
 	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
 
@@ -118,6 +126,9 @@ static void xen_syscore_resume(void)
 
 	pvclock_resume();
 
+	/* Nonboot CPUs will be resumed when they're brought up */
+	xen_restore_steal_clock(smp_processor_id());
+
 	gnttab_resume();
 }
 
diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index c8897aad13cd..33d754564b09 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -545,6 +545,9 @@ static void xen_hvm_setup_cpu_clockevents(void)
 {
 	int cpu = smp_processor_id();
 	xen_setup_runstate_info(cpu);
+	if (cpu)
+		xen_restore_steal_clock(cpu);
+
 	/*
 	 * xen_setup_timer(cpu) - snprintf is bad in atomic context. Hence
 	 * doing it xen_hvm_cpu_notify (which gets called by smp_init during
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:29:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBg7-0006Cd-Mj; Tue, 19 May 2020 23:29:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZgSX=7B=amazon.com=prvs=401c57c8a=sblbir@srs-us1.protection.inumbo.net>)
 id 1jbBg6-0006CK-H0
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:29:42 +0000
X-Inumbo-ID: 9d683f66-9a28-11ea-b9cf-bc764e2007e4
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d683f66-9a28-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 23:29:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930982; x=1621466982;
 h=from:to:subject:date:message-id:references:in-reply-to:
 content-id:content-transfer-encoding:mime-version;
 bh=8yZoMAyXrRYJVt5Iyc8YC1zGknZc8xmhubY/BGGSFCw=;
 b=CWOUD2QVMZ0N7mOBIcUZN45QnIPX0rPuhHJgdHnu95RELwFSaOKNsw24
 AAQtoNz/iQbTZ41WI55PTJSNUw7s+jZg7mlupEYxX6HYz6JXouf/dpjxK
 +GtDdlUudDBeWcZChyOphyk3FTrGQ9eMXXOYsNW1iidtT3kuOHicRemfh M=;
IronPort-SDR: hykFHVKUxK6DZ14+U9SJdTDE56dr5hWCBRiFClMEcEe84C8zjyl5L9RuS+Y8800ogb50L6wed6
 oshAh/tPsu7w==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31067168"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6002.iad6.amazon.com with ESMTP;
 19 May 2020 23:29:28 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com (Postfix) with ESMTPS
 id 91F4CA1E08; Tue, 19 May 2020 23:29:26 +0000 (UTC)
Received: from EX13D10UWB001.ant.amazon.com (10.43.161.111) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.249) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:26 +0000
Received: from EX13D01UWB002.ant.amazon.com (10.43.161.136) by
 EX13D10UWB001.ant.amazon.com (10.43.161.111) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:25 +0000
Received: from EX13D01UWB002.ant.amazon.com ([10.43.161.136]) by
 EX13d01UWB002.ant.amazon.com ([10.43.161.136]) with mapi id 15.00.1497.006;
 Tue, 19 May 2020 23:29:25 +0000
From: "Singh, Balbir" <sblbir@amazon.com>
To: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "Agarwal,
 Anchal" <anchalag@amazon.com>, "peterz@infradead.org" <peterz@infradead.org>, 
 "Woodhouse, David" <dwmw@amazon.co.uk>, "vkuznets@redhat.com"
 <vkuznets@redhat.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "tglx@linutronix.de" <tglx@linutronix.de>, "linux-pm@vger.kernel.org"
 <linux-pm@vger.kernel.org>, "Valentin, Eduardo" <eduval@amazon.com>,
 "linux-mm@kvack.org" <linux-mm@kvack.org>, "jgross@suse.com"
 <jgross@suse.com>, "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "axboe@kernel.dk" <axboe@kernel.dk>, "x86@kernel.org" <x86@kernel.org>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>, "hpa@zytor.com"
 <hpa@zytor.com>, "rjw@rjwysocki.net" <rjw@rjwysocki.net>, "mingo@redhat.com"
 <mingo@redhat.com>, "Kamata, Munehisa" <kamatam@amazon.com>, "pavel@ucw.cz"
 <pavel@ucw.cz>, "bp@alien8.de" <bp@alien8.de>, "netdev@vger.kernel.org"
 <netdev@vger.kernel.org>, "len.brown@intel.com" <len.brown@intel.com>,
 "davem@davemloft.net" <davem@davemloft.net>, "benh@kernel.crashing.org"
 <benh@kernel.crashing.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume during
 hibernation
Thread-Topic: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume
 during hibernation
Thread-Index: AQHWLjTykqjazEodhU6xyc4OZjlkO6iwDkCA
Date: Tue, 19 May 2020 23:29:25 +0000
Message-ID: <d489ede4d70ae22a601ee0afc92bda936baa8b11.camel@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
 <fce013fc1348f02b8e4ec61e7a631093c72f993c.1589926004.git.anchalag@amazon.com>
In-Reply-To: <fce013fc1348f02b8e4ec61e7a631093c72f993c.1589926004.git.anchalag@amazon.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.160.100]
Content-Type: text/plain; charset="utf-8"
Content-ID: <B9AD7F83A1AFE949822D37CF84235DA3@amazon.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 2020-05-19 at 23:26 +0000, Anchal Agarwal wrote:
> Signed-off--by: Thomas Gleixner <tglx@linutronix.de>

The Signed-off-by line needs to be fixed (hint: you have --)

Balbir Singh


From xen-devel-bounces@lists.xenproject.org Tue May 19 23:29:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBgB-0006E1-Vc; Tue, 19 May 2020 23:29:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBgB-0006Dm-HQ
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:29:47 +0000
X-Inumbo-ID: a07f9622-9a28-11ea-a9a0-12813bfff9fa
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a07f9622-9a28-11ea-a9a0-12813bfff9fa;
 Tue, 19 May 2020 23:29:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930988; x=1621466988;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=Dbh0kC6AUhdBwnm/rg5xRtYEoyS/bWB7gw8bESiUXu8=;
 b=ujARxPOZrE2ff4RNd5HwI/Tui0eVklwzmkSIeS+vo7z1bLgqkk56Ahib
 yWb27O88x/8y4B2JfKMBqVDxFuO69pVerN/KMDelOBCHeLkR3vkxCltzU
 Uz6qByLCepoC46jVTcyTuKS+baWy29gWwevisJicw7l0RgkK/r0wuzqYk U=;
IronPort-SDR: QyxaVj7m3HsYActdDDQN//Bklv0kPtNEJKpgFFZzEhwvzUrO/yoZHFxDlLIQE70DrdHpZ7H+j2
 ZsDe6R6BRV9g==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="32497101"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2c-1968f9fa.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 19 May 2020 23:29:33 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2c-1968f9fa.us-west-2.amazon.com (Postfix) with ESMTPS
 id 3F3DDA217A; Tue, 19 May 2020 23:29:31 +0000 (UTC)
Received: from EX13D08UEE002.ant.amazon.com (10.43.62.92) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:11 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEE002.ant.amazon.com (10.43.62.92) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:11 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:29:11 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id CEAED40712; Tue, 19 May 2020 23:29:10 +0000 (UTC)
Date: Tue, 19 May 2020 23:29:10 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 10/12] xen: Introduce wrapper for save/restore sched clock
 offset
Message-ID: <9bb393ade4a6a30a2d4ba3ba191603a2b2fb8512.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce wrappers to save and restore xen_sched_clock_offset, to be
used by the PM hibernation code to avoid system instability during
resume.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/time.c    | 15 +++++++++++++--
 arch/x86/xen/xen-ops.h |  2 ++
 2 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
index 33d754564b09..1fc2beb7a6c1 100644
--- a/arch/x86/xen/time.c
+++ b/arch/x86/xen/time.c
@@ -386,12 +386,23 @@ static const struct pv_time_ops xen_time_ops __initconst = {
 static struct pvclock_vsyscall_time_info *xen_clock __read_mostly;
 static u64 xen_clock_value_saved;
 
+/* This is needed to maintain a monotonic clock value during PM hibernation */
+void xen_save_sched_clock_offset(void)
+{
+	xen_clock_value_saved = xen_clocksource_read() - xen_sched_clock_offset;
+}
+
+void xen_restore_sched_clock_offset(void)
+{
+	xen_sched_clock_offset = xen_clocksource_read() - xen_clock_value_saved;
+}
+
 void xen_save_time_memory_area(void)
 {
 	struct vcpu_register_time_memory_area t;
 	int ret;
 
-	xen_clock_value_saved = xen_clocksource_read() - xen_sched_clock_offset;
+	xen_save_sched_clock_offset();
 
 	if (!xen_clock)
 		return;
@@ -434,7 +445,7 @@ void xen_restore_time_memory_area(void)
 out:
 	/* Need pvclock_resume() before using xen_clocksource_read(). */
 	pvclock_resume();
-	xen_sched_clock_offset = xen_clocksource_read() - xen_clock_value_saved;
+	xen_restore_sched_clock_offset();
 }
 
 static void xen_setup_vsyscall_time_info(void)
diff --git a/arch/x86/xen/xen-ops.h b/arch/x86/xen/xen-ops.h
index d84c357994bd..9f49124df033 100644
--- a/arch/x86/xen/xen-ops.h
+++ b/arch/x86/xen/xen-ops.h
@@ -72,6 +72,8 @@ void xen_save_time_memory_area(void);
 void xen_restore_time_memory_area(void);
 void xen_init_time_ops(void);
 void xen_hvm_init_time_ops(void);
+void xen_save_sched_clock_offset(void);
+void xen_restore_sched_clock_offset(void);
 
 irqreturn_t xen_debug_interrupt(int irq, void *dev_id);
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:29:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBgM-0006Hg-8h; Tue, 19 May 2020 23:29:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBgL-0006HF-65
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:29:57 +0000
X-Inumbo-ID: a669ce68-9a28-11ea-b07b-bc764e2007e4
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a669ce68-9a28-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 23:29:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589930997; x=1621466997;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=r7lbs73sfsvJ86GyRkYhd6/aeNxAVZ1wrAJ3MEWq1QM=;
 b=vIqPsPAplA6WpXtqyybTqMVOdFQYsDFx+OvQzEHCEdTY3aiD7ihfOx59
 g+5BMFzLzIVKJIoC8HddpFK/h1w7AKRMg77HKOsDj3+9/F3v4v+Z7Pp9M
 Zk9x3dzk4C1EP8xfQi0G4jq2PYRjluRTA/stvNspoNmy/5TtEmH7VzYiX 0=;
IronPort-SDR: 9zWNpH8TfAOVXYXC65zH0YTjctxGH5l94tqUBCBdTwFQ9nB5vkppYKlmCsdTEitS5G1lbnrQj9
 3msnk5oBY8Ww==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31182653"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 19 May 2020 23:29:55 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com (Postfix) with ESMTPS
 id 304E8A2003; Tue, 19 May 2020 23:29:53 +0000 (UTC)
Received: from EX13D08UEE001.ant.amazon.com (10.43.62.126) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:40 +0000
Received: from EX13MTAUEE002.ant.amazon.com (10.43.62.24) by
 EX13D08UEE001.ant.amazon.com (10.43.62.126) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:39 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.62.224) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:29:39 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 81BF340712; Tue, 19 May 2020 23:29:39 +0000 (UTC)
Date: Tue, 19 May 2020 23:29:39 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 11/12] xen: Update sched clock offset to avoid system
 instability in hibernation
Message-ID: <4066d54951f4ba9437eb0ff6aeb5288fcd20c2fb.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Save/restore xen_sched_clock_offset in the syscore suspend/resume path
during PM hibernation. Commit 867cefb4cb10 ("xen: Fix x86 sched_clock()
interface for xen") fixed Xen guest time handling during migration; a
similar issue is seen during PM hibernation when the system runs a
CPU-intensive workload. On resume, pvclock resets its value to 0, but
xen_sched_clock_offset is never updated. Because the offset is stale,
the system no longer sees a monotonic clock value, and the scheduler
then thinks that heavy CPU-hog tasks need more time on the CPU, which
can freeze the system when it resumes from hibernation under heavy CPU
load.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 arch/x86/xen/suspend.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
index dae0f74f5390..7e5275944810 100644
--- a/arch/x86/xen/suspend.c
+++ b/arch/x86/xen/suspend.c
@@ -105,6 +105,8 @@ static int xen_syscore_suspend(void)
 		xen_save_steal_clock(cpu);
 	}
 
+	xen_save_sched_clock_offset();
+
 	xrfp.domid = DOMID_SELF;
 	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
 
@@ -126,6 +128,12 @@ static void xen_syscore_resume(void)
 
 	pvclock_resume();
 
+	/*
+	 * Restore xen_sched_clock_offset during resume to maintain
+	 * monotonic clock value
+	 */
+	xen_restore_sched_clock_offset();
+
 	/* Nonboot CPUs will be resumed when they're brought up */
 	xen_restore_steal_clock(smp_processor_id());
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:30:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:30:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBgR-0006Xq-L2; Tue, 19 May 2020 23:30:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBgQ-0006NE-DH
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:30:02 +0000
X-Inumbo-ID: a8fd3945-9a28-11ea-a9a1-12813bfff9fa
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a8fd3945-9a28-11ea-a9a1-12813bfff9fa;
 Tue, 19 May 2020 23:30:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589931002; x=1621467002;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=fitAmmYR+siRHznU7MJYKk9c4Tw82/jy6e5chPwlJLw=;
 b=YygAM4rxhevUTIpwXDea+e1znhZ7YOkK+Xbwgvyk2RpqEX1HneGo34qH
 FUR9cIh77T27+TwpzuT/FfNlYH1JbfxCVlwArylM/KP1tL7hgMuWok+zE
 H0QLewXYVd0zlqr1NelNitMrBpv7JsNH4728d9RmELnUU2LHe0srjoA2i M=;
IronPort-SDR: Wl3dSMwhpOL5k2tH39DJuBMefCB10ew0UQM8WlE40AAwlaQq7gFVjSYFqv7rMMiJuNURig22t8
 62l/Qk1vmD/Q==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="36216330"
Received: from sea32-co-svc-lb4-vlan2.sea.corp.amazon.com (HELO
 email-inbound-relay-2b-859fe132.us-west-2.amazon.com) ([10.47.23.34])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 19 May 2020 23:30:01 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2b-859fe132.us-west-2.amazon.com (Postfix) with ESMTPS
 id C56E42225D4; Tue, 19 May 2020 23:29:59 +0000 (UTC)
Received: from EX13D07UWB004.ant.amazon.com (10.43.161.196) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:52 +0000
Received: from EX13MTAUWB001.ant.amazon.com (10.43.161.207) by
 EX13D07UWB004.ant.amazon.com (10.43.161.196) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:29:52 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.161.249) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:29:52 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 847C740712; Tue, 19 May 2020 23:29:52 +0000 (UTC)
Date: Tue, 19 May 2020 23:29:52 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 12/12] PM / hibernate: update the resume offset on
 SNAPSHOT_SET_SWAP_AREA
Message-ID: <40de33ca69c0d3bcf8c827862768ae5d399698d6.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Aleksei Besogonov <cyberax@amazon.com>

The SNAPSHOT_SET_SWAP_AREA ioctl is supposed to set the hibernation
offset on a running kernel to enable hibernating to a swap file.
However, it does not actually update the swsusp_resume_block variable.
As a result, hibernation fails at the last step (after all the data
has been written out), in the validation of the swap signature in
mark_swapfiles().

Before this patch, command line processing was the only place where
swsusp_resume_block was set.

[Changelog: resolved patch conflict as the code was fragmented into
snapshot_set_swap_area]
Signed-off-by: Aleksei Besogonov <cyberax@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
---
 kernel/power/user.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/kernel/power/user.c b/kernel/power/user.c
index 7959449765d9..1afa1f0a223e 100644
--- a/kernel/power/user.c
+++ b/kernel/power/user.c
@@ -235,8 +235,12 @@ static int snapshot_set_swap_area(struct snapshot_data *data,
 		return -EINVAL;
 	}
 	data->swap = swap_type_of(swdev, offset, NULL);
-	if (data->swap < 0)
+	if (data->swap < 0) {
 		return -ENODEV;
+	} else {
+		swsusp_resume_device = swdev;
+		swsusp_resume_block = offset;
+	}
 	return 0;
 }
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:35:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:35:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBlF-0007S0-9R; Tue, 19 May 2020 23:35:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBlD-0007Rv-QD
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:34:59 +0000
X-Inumbo-ID: 5ac7ba50-9a29-11ea-a9a1-12813bfff9fa
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ac7ba50-9a29-11ea-a9a1-12813bfff9fa;
 Tue, 19 May 2020 23:34:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589931300; x=1621467300;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=8UFUtTeRpjYPvMFrw037W1OadSP0YXrEVVUVgG1wRzY=;
 b=eD0YSA4t2c+JEAMcGmGDCgaxymJMy7Tqw4Xysygs1hOtArFEa3szm4ir
 W+mYZEFycmo7HQeORs1VV3N5k709aLqMVx+CM4D/3nmhBc+Uy79rKfsRq
 sNEiDPbe/NCKv59tiVaUHr6obpHddepzDhS4W6r+3SUBp2DO5vZ34W6H5 Y=;
IronPort-SDR: PstKWIwiKXwlx7AwfI2MpbFwnnvbl4PRGOucOXTFL8WICxveEsjrTgaNNQtydlcjq5vYiBt8hp
 NCHu/6FrICEg==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="31244955"
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 19 May 2020 23:34:58 +0000
Received: from EX13MTAUEE002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2a-d0be17ee.us-west-2.amazon.com (Postfix) with ESMTPS
 id E4F73A2003; Tue, 19 May 2020 23:34:55 +0000 (UTC)
Received: from EX13D08UEE002.ant.amazon.com (10.43.62.92) by
 EX13MTAUEE002.ant.amazon.com (10.43.62.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:34:46 +0000
Received: from EX13MTAUEA002.ant.amazon.com (10.43.61.77) by
 EX13D08UEE002.ant.amazon.com (10.43.62.92) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:34:45 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.61.169) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Tue, 19 May 2020 23:34:45 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 12AFA40712; Tue, 19 May 2020 23:34:45 +0000 (UTC)
Date: Tue, 19 May 2020 23:34:45 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume during
 hibernation
Message-ID: <fce013fc1348f02b8e4ec61e7a631093c72f993c.1589926004.git.anchalag@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Resending with fixed Signed-off-by.

Many legacy device drivers do not implement power management (PM)
functions, which means that interrupts requested by these drivers stay
active when the kernel is hibernated.

This does not matter on bare metal and on most hypervisors because the
interrupt is restored on resume without any noticeable side effects, as
it stays connected to the same physical or virtual interrupt line.

The XEN interrupt mechanism is different as it maintains a mapping
between the Linux interrupt number and a XEN event channel. If the
interrupt stays active on hibernation this mapping is preserved but
there is unfortunately no guarantee that on resume the same event
channels are reassigned to these devices. This can result in event
channel conflicts which prevent the affected devices from being
restored correctly.

One way to solve this would be to add the necessary power management
functions to all affected legacy device drivers, but that's a
questionable effort which does not provide any benefits on non-XEN
environments.

The least intrusive and most efficient solution is to provide a
mechanism which allows the core interrupt code to tear down these
interrupts on hibernation and bring them back up again on resume. This
allows the XEN event channel mechanism to assign an arbitrary event
channel on resume without affecting the functionality of these
devices.

Fortunately, all these device interrupts are handled by a dedicated XEN
interrupt chip, so the chip can be marked to indicate that all interrupts
connected to it are handled this way. This is pretty much in line with
the other interrupt-chip-specific quirks, e.g. IRQCHIP_MASK_ON_SUSPEND.

Add a new quirk flag IRQCHIP_SHUTDOWN_ON_SUSPEND and add support for
it to the core interrupt suspend/resume paths.

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
 drivers/xen/events/events_base.c |  1 +
 include/linux/irq.h              |  2 ++
 kernel/irq/chip.c                |  2 +-
 kernel/irq/internals.h           |  1 +
 kernel/irq/pm.c                  | 31 ++++++++++++++++++++++---------
 5 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
index 3a791c8485d0..decf65bd3451 100644
--- a/drivers/xen/events/events_base.c
+++ b/drivers/xen/events/events_base.c
@@ -1613,6 +1613,7 @@ static struct irq_chip xen_pirq_chip __read_mostly = {
 	.irq_set_affinity	= set_affinity_irq,
 
 	.irq_retrigger		= retrigger_dynirq,
+	.flags                  = IRQCHIP_SHUTDOWN_ON_SUSPEND,
 };
 
 static struct irq_chip xen_percpu_chip __read_mostly = {
diff --git a/include/linux/irq.h b/include/linux/irq.h
index 8d5bc2c237d7..94cb8c994d06 100644
--- a/include/linux/irq.h
+++ b/include/linux/irq.h
@@ -542,6 +542,7 @@ struct irq_chip {
  * IRQCHIP_EOI_THREADED:	Chip requires eoi() on unmask in threaded mode
  * IRQCHIP_SUPPORTS_LEVEL_MSI	Chip can provide two doorbells for Level MSIs
  * IRQCHIP_SUPPORTS_NMI:	Chip can deliver NMIs, only for root irqchips
+ * IRQCHIP_SHUTDOWN_ON_SUSPEND: Shut down non-wake irqs in the suspend path
  */
 enum {
 	IRQCHIP_SET_TYPE_MASKED		= (1 <<  0),
@@ -553,6 +554,7 @@ enum {
 	IRQCHIP_EOI_THREADED		= (1 <<  6),
 	IRQCHIP_SUPPORTS_LEVEL_MSI	= (1 <<  7),
 	IRQCHIP_SUPPORTS_NMI		= (1 <<  8),
+	IRQCHIP_SHUTDOWN_ON_SUSPEND     = (1 <<  9),
 };
 
 #include <linux/irqdesc.h>
diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 41e7e37a0928..fd59489ff14b 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -233,7 +233,7 @@ __irq_startup_managed(struct irq_desc *desc, struct cpumask *aff, bool force)
 }
 #endif
 
-static int __irq_startup(struct irq_desc *desc)
+int __irq_startup(struct irq_desc *desc)
 {
 	struct irq_data *d = irq_desc_get_irq_data(desc);
 	int ret = 0;
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 7db284b10ac9..b6fca5eacff7 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -80,6 +80,7 @@ extern void __enable_irq(struct irq_desc *desc);
 extern int irq_activate(struct irq_desc *desc);
 extern int irq_activate_and_startup(struct irq_desc *desc, bool resend);
 extern int irq_startup(struct irq_desc *desc, bool resend, bool force);
+extern int __irq_startup(struct irq_desc *desc);
 
 extern void irq_shutdown(struct irq_desc *desc);
 extern void irq_shutdown_and_deactivate(struct irq_desc *desc);
diff --git a/kernel/irq/pm.c b/kernel/irq/pm.c
index 8f557fa1f4fe..dc48a25f1756 100644
--- a/kernel/irq/pm.c
+++ b/kernel/irq/pm.c
@@ -85,16 +85,25 @@ static bool suspend_device_irq(struct irq_desc *desc)
 	}
 
 	desc->istate |= IRQS_SUSPENDED;
-	__disable_irq(desc);
-
 	/*
-	 * Hardware which has no wakeup source configuration facility
-	 * requires that the non wakeup interrupts are masked at the
-	 * chip level. The chip implementation indicates that with
-	 * IRQCHIP_MASK_ON_SUSPEND.
+	 * Some irq chips (e.g. XEN PIRQ) require a full shutdown on suspend
+	 * as some of the legacy drivers (e.g. floppy) do nothing during the
+	 * suspend path.
 	 */
-	if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
-		mask_irq(desc);
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND) {
+		irq_shutdown(desc);
+	} else {
+		__disable_irq(desc);
+
+	       /*
+		* Hardware which has no wakeup source configuration facility
+		* requires that the non wakeup interrupts are masked at the
+		* chip level. The chip implementation indicates that with
+		* IRQCHIP_MASK_ON_SUSPEND.
+		*/
+		if (irq_desc_get_chip(desc)->flags & IRQCHIP_MASK_ON_SUSPEND)
+			mask_irq(desc);
+	}
 	return true;
 }
 
@@ -152,7 +161,11 @@ static void resume_irq(struct irq_desc *desc)
 	irq_state_set_masked(desc);
 resume:
 	desc->istate &= ~IRQS_SUSPENDED;
-	__enable_irq(desc);
+
+	if (irq_desc_get_chip(desc)->flags & IRQCHIP_SHUTDOWN_ON_SUSPEND)
+		__irq_startup(desc);
+	else
+		__enable_irq(desc);
 }
 
 static void resume_irqs(bool want_early)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Tue May 19 23:36:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:36:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBmc-0007Yx-L5; Tue, 19 May 2020 23:36:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1yeG=7B=amazon.com=prvs=4015a1a96=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbBmb-0007Yr-UW
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:36:25 +0000
X-Inumbo-ID: 8d9a399e-9a29-11ea-b9cf-bc764e2007e4
Received: from smtp-fw-33001.amazon.com (unknown [207.171.190.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d9a399e-9a29-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 23:36:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1589931385; x=1621467385;
 h=from:to:subject:date:message-id:references:in-reply-to:
 content-id:content-transfer-encoding:mime-version;
 bh=0h+Ql3kIm4ll7H2x0wPz8bL2x8DkfsmOA0XR/gfEPpY=;
 b=PA0WHi9a1pXITQUUnk2ahZnIEkbGdTpTqmQBDre65zpPSqrs8cj4Z5tQ
 bm0WWZqR3dWuG1pjDNKH5oeQD5IOFqhh/hTIS76GFoZWM5G59RAo9WqwY
 8iDf5CjsFJAw6peHKPOa/ajgiRgVntKw083xi5Kw7U2fT0rxTlBkOGaL9 g=;
IronPort-SDR: Sh8oCgbi6ol5SpCMMV88N8TUDQi1X1SmJ82frou2UB9gXc2YCfQwJsx4ulZpnJZdnAgU8Z+Lwj
 +kWTUjNdOG8g==
X-IronPort-AV: E=Sophos;i="5.73,411,1583193600"; d="scan'208";a="45951027"
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1d-9ec21598.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-33001.sea14.amazon.com with ESMTP;
 19 May 2020 23:36:23 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-9ec21598.us-east-1.amazon.com (Postfix) with ESMTPS
 id 14090A20B8; Tue, 19 May 2020 23:36:15 +0000 (UTC)
Received: from EX13D01UWB002.ant.amazon.com (10.43.161.136) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.249) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:36:15 +0000
Received: from EX13D07UWB001.ant.amazon.com (10.43.161.238) by
 EX13d01UWB002.ant.amazon.com (10.43.161.136) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 19 May 2020 23:36:15 +0000
Received: from EX13D07UWB001.ant.amazon.com ([10.43.161.238]) by
 EX13D07UWB001.ant.amazon.com ([10.43.161.238]) with mapi id 15.00.1497.006;
 Tue, 19 May 2020 23:36:15 +0000
From: "Agarwal, Anchal" <anchalag@amazon.com>
To: "Singh, Balbir" <sblbir@amazon.com>, "boris.ostrovsky@oracle.com"
 <boris.ostrovsky@oracle.com>, "linux-kernel@vger.kernel.org"
 <linux-kernel@vger.kernel.org>, "peterz@infradead.org"
 <peterz@infradead.org>, "Woodhouse, David" <dwmw@amazon.co.uk>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>, "sstabellini@kernel.org"
 <sstabellini@kernel.org>, "tglx@linutronix.de" <tglx@linutronix.de>,
 "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>, "Valentin, Eduardo"
 <eduval@amazon.com>, "linux-mm@kvack.org" <linux-mm@kvack.org>,
 "jgross@suse.com" <jgross@suse.com>, "konrad.wilk@oracle.com"
 <konrad.wilk@oracle.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
 "x86@kernel.org" <x86@kernel.org>, "roger.pau@citrix.com"
 <roger.pau@citrix.com>, "hpa@zytor.com" <hpa@zytor.com>, "rjw@rjwysocki.net"
 <rjw@rjwysocki.net>, "mingo@redhat.com" <mingo@redhat.com>, "Kamata,
 Munehisa" <kamatam@amazon.com>, "pavel@ucw.cz" <pavel@ucw.cz>, "bp@alien8.de"
 <bp@alien8.de>, "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
 "len.brown@intel.com" <len.brown@intel.com>, "davem@davemloft.net"
 <davem@davemloft.net>, "benh@kernel.crashing.org" <benh@kernel.crashing.org>, 
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume during
 hibernation
Thread-Topic: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume
 during hibernation
Thread-Index: AQHWLjMagtxHeVJBCkiD+MEoaDFA5aiwDkSA//+Mf4A=
Date: Tue, 19 May 2020 23:36:14 +0000
Message-ID: <18B5CBBA-FF9E-4DBF-8631-EE9AF4925861@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
 <fce013fc1348f02b8e4ec61e7a631093c72f993c.1589926004.git.anchalag@amazon.com>
 <d489ede4d70ae22a601ee0afc92bda936baa8b11.camel@amazon.com>
In-Reply-To: <d489ede4d70ae22a601ee0afc92bda936baa8b11.camel@amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.161.193]
Content-Type: text/plain; charset="utf-8"
Content-ID: <997D8B2104EB634CAA6BF7D36DEE734D@amazon.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Thanks. Looks like I sent an old one without the fix. Did resend the
patch again.

    On Tue, 2020-05-19 at 23:26 +0000, Anchal Agarwal wrote:
    > Signed-off--by: Thomas Gleixner <tglx@linutronix.de>

    The Signed-off-by line needs to be fixed (hint: you have --)

    Balbir Singh


From xen-devel-bounces@lists.xenproject.org Tue May 19 23:39:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:39:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBpM-0007hU-3d; Tue, 19 May 2020 23:39:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NLmZ=7B=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1jbBpK-0007hP-Tt
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:39:15 +0000
X-Inumbo-ID: f2b35126-9a29-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x834.google.com (unknown [2607:f8b0:4864:20::834])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2b35126-9a29-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 23:39:14 +0000 (UTC)
Received: by mail-qt1-x834.google.com with SMTP id z18so1198822qto.2
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 16:39:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:in-reply-to:user-agent;
 bh=XEZQ7OYpVxyRqBV7t8uMteTmkrGbMYoCI/OeTqmbrJY=;
 b=aCcBTwGLdVSB7IFPcFD1/e9gg6W8dVc7fHHGFL2a2XokliWa356TaKDBzKgLXdpctH
 Mo6nEocpgYEZKTm9EgWpy2pt6fwOzBKTRIbxaRMmCTi63xWkM7ZmWDpwFdRhhX7ZPVnk
 EI9g66doi777brPkxhyx3k8mqoLyk8rJTfkdTG/9W18Z9yTWvROYvwlvcqCXtTg/nmL1
 /E20Q8y3eLcH4tpzMf6lmSc3gWMh33m1xlot6pUvDYRj212p2166fhprmD36aXZDKfQi
 AnQg9JJ+rAERGKEcTQHuEYuCFHBUDeaECQfXmc2ZsRVp+BKOIKww1uTGsgo7sCxeR+yO
 IQHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=XEZQ7OYpVxyRqBV7t8uMteTmkrGbMYoCI/OeTqmbrJY=;
 b=JtFFD3KcXisaGNnbl7PRsMkmIbFf5gsPb54a2/fl/BnQDzBfsOxKKvQmNLh/3Cn3M7
 sGbdED8xYUcvxx5GyCHlm/b/37oLzL+h1WyEAlEt5rCEs/vfMHgkVB/4WbrWx9IHHdsr
 XlD0xbvTtINh6F+a0tFE5gl2T639OeWL63kx2gTOiZPfvx2ETAinAVv8GNjG8krHE+nV
 zzl0DUQb/vPCah5LpQxj+ozouN6PluVawbZ/qOenMpI7kF1LJlr0GYXXY2+QUrJA/TOJ
 cN6gRPzFgap1D+y2yeeHTi+xCPz7mUlAsV07CGo7EpVeHgyaUngy0feiP5Mo3jyyuCxO
 Lixw==
X-Gm-Message-State: AOAM53016ANP+lZPI3ghUTQGRlYOW2kBPK8D5A2u0MrxrlgDKk7jCoE8
 iaoa7kuVQiHHPItnXWZ1ZFioc1CeqiJJXQ==
X-Google-Smtp-Source: ABdhPJz89Y7T87BKBrdCINIRM04ldnJTQowwJdg2ufyv3kaSOZ92HHS9vyScACkx9xrhBfuzaiFR3A==
X-Received: by 2002:ac8:3609:: with SMTP id m9mr2594942qtb.107.1589931553709; 
 Tue, 19 May 2020 16:39:13 -0700 (PDT)
Received: from piano ([216.186.244.35])
 by smtp.gmail.com with ESMTPSA id l184sm858490qke.115.2020.05.19.16.39.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 19 May 2020 16:39:13 -0700 (PDT)
Date: Tue, 19 May 2020 18:39:11 -0500
From: Bobby Eshleman <bobbyeshleman@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: Re: [RFC] UEFI Secure Boot on Xen Hosts
Message-ID: <20200519233911.GA22451@piano>
References: <20200429225108.GA54201@bobbye-pc>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200429225108.GA54201@bobbye-pc>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: michal.zygowski@3mdeb.com, daniel.kiper@oracle.com, ardb@kernel.org,
 krystian.hebel@3mdeb.com, olivier.lambert@vates.fr, piotr.krol@3mdeb.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, Apr 29, 2020 at 05:51:08PM -0500, Bobby Eshleman wrote:
> 
> # Option #1: PE/COFF and Shim
> 

... snip ...

> 
> # Option #3: Lean on Grub2's LoadFile2() Verification
> 

... snip ...

It's safe to say that the options boiled down to #1 and #3.  Seeing as
we may not be able to start experimenting with the new Grub
functionality for some time, and seeing as the security properties of
each approach are very similar, I think that option #1 is probably the
best path for what we are looking to achieve in supporting UEFI SB.
Without any strong objections against this, that'll be the path we
start heading down (starting with Daniel's patch set), and we will be
hoping to get it upstream.

If possible, the implementation would support both SHIM_LOCK and LoadFile2(),
potentially by having one fall back to the other upon a reported security
violation, or by detecting the functionality provided by Grub in some
manner...  but this will be easier to evaluate after seeing how the
LoadFile2() mechanism works.

If LoadFile2() proves itself a better approach, we would not be opposed to
moving in that direction when it is available.

I started joining community calls shortly after the intent of 'docs/design'
was discussed there.  Is this a change that merits a 'docs/design' RFC?

Best regards,
Bobby


From xen-devel-bounces@lists.xenproject.org Tue May 19 23:44:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbBuC-000095-NM; Tue, 19 May 2020 23:44:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TpIb=7B=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbBuB-000090-No
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:44:15 +0000
X-Inumbo-ID: a581e150-9a2a-11ea-b9cf-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a581e150-9a2a-11ea-b9cf-bc764e2007e4;
 Tue, 19 May 2020 23:44:14 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id e16so1244439wra.7
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 16:44:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=9o2XJvwUYUJOqQdC3ryt4U9cktrO55WZ4SLWOL1Z3nc=;
 b=hbyd/+95r8CtUveK8j7x3zpT3qrPbaCISfL6gphoitrGFMNvZ6kThS/rC0cQNt2vHS
 PwbpLCe/xvnWms5evzaBg95biY5ASAX1oj1u0/4Lce7WRAynHXTb0MZVygk8/EsMumxa
 p+7jArN8O3vpBt/FKe1INHgyFx5h3AElRuGBNt/zlxpaA5VwwgX8Fz9bsxAbzTHZ9YRH
 HoO+DpTWRAOfURwurhDLugz7Y7J3INqCkplygtc3S5RHdzdWutdUrPLUUCoVrpdppght
 vKmnunJiqWHuOWJ8IYpleTvFVmOjBT/CbVM+ZawKx+cx22pkr548uOzA+JlDsGCd0zH0
 ECdw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=9o2XJvwUYUJOqQdC3ryt4U9cktrO55WZ4SLWOL1Z3nc=;
 b=EGjCbPzEszDsIafw2asaju+lf16//BpJ0iFKgSRGa3JEQQGURsFTH4vzgYU/QtLl+g
 gE9Zcd18ag7HaKrWtE+9XyHeCIuxEo0hKz5avBZoYoWX2VDgwjPOrJcy7UMsxB9sBFGC
 AyQPUExIyD1Gbci6Arg9Z6YNgPH+dSWkntD8jHrKfA2nBk/+kxjWCpusG4sUYsZDh3MU
 8OLO6xh6bmAs6bz6dGNwmbZaC+aPR1MdrhHrVzBGq+sn1rElSGN9Rbwz11BFCQ5qXInw
 WX3M+mDfpWSBkUfG9xHGuU1dobgAH1BLbeE66V4djX9WEwNX1P59IGvCxnzclLdlV707
 MKsQ==
X-Gm-Message-State: AOAM533rECezEdBLVX9XhmJ/9xwZtYQLFEC9JPdozzeJogCZrUTWF7NC
 1uQwtWqRn2CChIhTFXZ3oCTuklzKTnoUtZz/JBM=
X-Google-Smtp-Source: ABdhPJz3nXqPjVWdJbf38GAVHIB1bg4qnN3gaLv55SJkMvYDTO4F4UJV0ecKgf/Dt1N4f7bhQvBCwbAuf/KAYwloXRM=
X-Received: by 2002:adf:fccd:: with SMTP id f13mr1375325wrs.386.1589931853949; 
 Tue, 19 May 2020 16:44:13 -0700 (PDT)
MIME-Version: 1.0
References: <20200518113008.15422-1-julien@xen.org>
 <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
 <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
In-Reply-To: <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Tue, 19 May 2020 17:43:37 -0600
Message-ID: <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
To: Julien Grall <julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, minyard@acm.org,
 Paul Durrant <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 11:23 AM Julien Grall <julien@xen.org> wrote:
>
>
>
> On 19/05/2020 04:08, Tamas K Lengyel wrote:
> > On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org> wrote:
> >>
> >> From: Julien Grall <jgrall@amazon.com>
> >>
> >> Hi all,
> >>
> >> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> >> only use the first GB of memory.
> >>
> >> This is because several devices cannot DMA above 1GB but Xen doesn't
> >> necessarily allocate memory for Dom0 below 1GB.
> >>
> >> This small series is trying to address the problem by allowing a
> >> platform to restrict where Dom0 banks are allocated.
> >>
> >> This is also a candidate for Xen 4.14. Without it, a user will not be
> >> able to use all the RAM on the Raspberry Pi 4.
> >>
> >> This series has only been slightly tested. I would appreciate more testing
> >> on the Raspberry Pi 4 to confirm this removes the restriction.
> >
> > Hi Julien,
>
> Hi,
>
> > could you post a git branch somewhere? I can try this on my rpi4 that
> > already runs 4.13.
>
> I have pushed a branch based on unstable and the v2 of the series:
>
> git://xenbits.xen.org/people/julieng/xen-unstable.git
>
> branch arm-dma/v2
>
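For reference, fetching the quoted branch amounts to the following (commands
inferred from the repository URL and branch name given above, not spelled out
in the thread):

```shell
# Clone Julien's tree and check out the v2 branch named above
git clone git://xenbits.xen.org/people/julieng/xen-unstable.git
cd xen-unstable
git checkout arm-dma/v2
```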

I've updated the image I built with
https://github.com/tklengyel/xen-rpi4-builder a while ago, defining
2048m as total_mem. Xen seems to boot fine and passes execution to
dom0. With dom0_mem set to 512m on the Xen command line it was
working, but when I increased the memory for dom0 the boot got stuck
at:

[    1.427788] of_cfs_init
[    1.429667] of_cfs_init: OK
[    1.432561] clk: Not disabling unused clocks
[    1.437239] Waiting for root device /dev/mmcblk0p2...
[    1.451599] mmc1: queuing unknown CIS tuple 0x80 (2 bytes)
[    1.458156] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
[    1.464729] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
[    1.472804] mmc1: queuing unknown CIS tuple 0x80 (7 bytes)
[    1.479370] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
[    1.546902] random: fast init done
[    1.564590] mmc1: new high speed SDIO card at address 0001

Could this be because the DTB was compiled from a fresh checkout of
https://github.com/raspberrypi/linux.git (branch rpi-4.19.y) whereas the
kernel itself is from a checkout ~5 months ago? I suspect that must be
the cause, because even if I decrease dom0_mem back to 512m it still
gets stuck at the same spot, whereas it was booting fine before.

Tamas


From xen-devel-bounces@lists.xenproject.org Tue May 19 23:48:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:48:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbByU-0000LW-Ct; Tue, 19 May 2020 23:48:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2w/r=7B=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbByS-0000LO-Vq
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:48:41 +0000
X-Inumbo-ID: 43e532de-9a2b-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43e532de-9a2b-11ea-ae69-bc764e2007e4;
 Tue, 19 May 2020 23:48:40 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6E3E220674;
 Tue, 19 May 2020 23:48:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589932119;
 bh=hx4rmMhuqqP1pu0djhl9/fSl/R6fQ24MUbPbLgsY9LA=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=djCUAKaVo/WbJYUGCQ5k34+gsaO40u5NWCm0QD4cgYVzDIeh4VKID8fzB9xYBvaig
 VTslOiFWJ+e7tE2OGDjmXLkNp0uU9VmzZLYYZjB+3ZT5muEs59w25ciz1QF++8MI1F
 pZMmmbvGd8f4rCK2yLx6v+S4eFbujGJsyeCWGrDo=
Date: Tue, 19 May 2020 16:48:38 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi
 4
In-Reply-To: <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2005191645400.27502@sstabellini-ThinkPad-T480s>
References: <20200518113008.15422-1-julien@xen.org>
 <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
 <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
 <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 roman@zededa.com, George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 19 May 2020, Tamas K Lengyel wrote:
> On Tue, May 19, 2020 at 11:23 AM Julien Grall <julien@xen.org> wrote:
> >
> >
> >
> > On 19/05/2020 04:08, Tamas K Lengyel wrote:
> > > On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org> wrote:
> > >>
> > >> From: Julien Grall <jgrall@amazon.com>
> > >>
> > >> Hi all,
> > >>
> > >> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> > >> only use the first GB of memory.
> > >>
> > >> This is because several devices cannot DMA above 1GB but Xen doesn't
> > >> necessarily allocate memory for Dom0 below 1GB.
> > >>
> > >> This small series is trying to address the problem by allowing a
> > >> platform to restrict where Dom0 banks are allocated.
> > >>
> > >> This is also a candidate for Xen 4.14. Without it, a user will not be
> > >> able to use all the RAM on the Raspberry Pi 4.
> > >>
> > >> This series has only been slightly tested. I would appreciate more
> > >> testing on the Raspberry Pi 4 to confirm this removes the restriction.
> > >
> > > Hi Julien,
> >
> > Hi,
> >
> > > could you post a git branch somewhere? I can try this on my rpi4 that
> > > already runs 4.13.
> >
> > I have pushed a branch based on unstable and the v2 of the series:
> >
> > git://xenbits.xen.org/people/julieng/xen-unstable.git
> >
> > branch arm-dma/v2
> >
> 
> I've updated the image I built with
> https://github.com/tklengyel/xen-rpi4-builder a while ago, defining
> 2048m as total_mem. Xen seems to boot fine and passes execution to
> dom0. With dom0_mem set to 512m on the Xen command line it was
> working, but when I increased the memory for dom0 the boot got stuck
> at:
> 
> [    1.427788] of_cfs_init
> [    1.429667] of_cfs_init: OK
> [    1.432561] clk: Not disabling unused clocks
> [    1.437239] Waiting for root device /dev/mmcblk0p2...
> [    1.451599] mmc1: queuing unknown CIS tuple 0x80 (2 bytes)
> [    1.458156] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> [    1.464729] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> [    1.472804] mmc1: queuing unknown CIS tuple 0x80 (7 bytes)
> [    1.479370] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> [    1.546902] random: fast init done
> [    1.564590] mmc1: new high speed SDIO card at address 0001
> 
> Could this be because the DTB was compiled from a fresh checkout of
> https://github.com/raspberrypi/linux.git (branch rpi-4.19.y) whereas the
> kernel itself is from a checkout ~5 months ago? I suspect that must be
> the cause, because even if I decrease dom0_mem back to 512m it still
> gets stuck at the same spot, whereas it was booting fine before.

Just so that you are aware, there is a known issue with setting dom0_mem
greater than 512M. I have a WIP patch to fix it in Linux that I plan to
send to the list soon.


From xen-devel-bounces@lists.xenproject.org Tue May 19 23:50:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbC0M-000160-PC; Tue, 19 May 2020 23:50:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F61s=7B=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jbC0L-00015t-L6
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:50:37 +0000
X-Inumbo-ID: 89514c90-9a2b-11ea-b07b-bc764e2007e4
Received: from mail-qt1-x842.google.com (unknown [2607:f8b0:4864:20::842])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 89514c90-9a2b-11ea-b07b-bc764e2007e4;
 Tue, 19 May 2020 23:50:36 +0000 (UTC)
Received: by mail-qt1-x842.google.com with SMTP id m44so1187672qtm.8
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 16:50:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=g+2zeIx8NpuvSM61jzGrcp2yiL758KDce2BV7V9SnAs=;
 b=cWZNOkxiU1cJ3ZLqrs88LJ/NaB6HF+6TeDsc9heDqXrKI0WK2G4MJ4byP+EEcJ7qNc
 7znYEpaCmyYwigXj2B2YV+F13WCyHptPx3WD9I9l17A4MPEq9ymfgACBdA7kkO7cHby2
 lr8ysHh9mam6oYfR0yH66hK+lxfYKa7s0lutbh9Nt+pcUokl61nf2LDyuXi7OQVPU7PA
 XPcJJzMnHoQyK94+SgN98AQIlGNlcwY83xmv+V/uYdU9ApQ/zgZAP7HKO7FZdl2l5WSO
 F6CD0+N+Az6T8zD8vmoi0E7UhOCnqJZ+e+1ZUd3R/0Lyiq+ONkyCPCO5VAhVEbzD0Un9
 v/Hw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=g+2zeIx8NpuvSM61jzGrcp2yiL758KDce2BV7V9SnAs=;
 b=BrlzyZLJuDNCEiJgzoNINmltiUSOet7Nf6t1/ACUChJhBlcxhkfTEcTdlWd2jI8Qx6
 Uwd+P560dyK6lbxepbyfz0KlRj23Mzzg2WkM9zZbokn5129pir06UfmFVksTHKc9tbne
 V2aA0f3e+tQW54seSCfxsRdj4xGqZsHG7GMCENtGEwnhKVrThx4fo6Nq2Y9qlrQbvBxH
 PHblrBQIyiasO0vCG2nt0+/q9XAuRMWqoMK+ENEoXd+F9YEdmVN0U/7xUSBHoA6DRLFu
 TicSfYLIHQW1dWV7XDRT0c41x5DgVGK0GdSynAXdAbuu5mL2Z6ZhNJRgjJn4Aw9eIUoN
 2mdQ==
X-Gm-Message-State: AOAM533hodpQuBO91UuCmx++PqE2x/u8GlnVhnT17fMz4DAXUwci81NY
 r2A1HXovW2hlvqkH09WbxMtqjzYikzozT7jweJqb7A==
X-Google-Smtp-Source: ABdhPJyjR2Oq07MldvBp147h4gGXZeuUd6NDFAPgZDjAN1D+JPIRKvIlTLYth6u8TKPBWFTPneDHogLqME8K7U3NgMo=
X-Received: by 2002:ac8:1bd2:: with SMTP id m18mr2577310qtk.77.1589932236323; 
 Tue, 19 May 2020 16:50:36 -0700 (PDT)
MIME-Version: 1.0
References: <20200518113008.15422-1-julien@xen.org>
 <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
 <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
 <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
In-Reply-To: <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Tue, 19 May 2020 16:50:25 -0700
Message-ID: <CAMmSBy9QQ4RPZnX6d4Mf6OqQjmN0+jLXL9nGMOQjnTt1axn4fQ@mail.gmail.com>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 4:44 PM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Tue, May 19, 2020 at 11:23 AM Julien Grall <julien@xen.org> wrote:
> >
> >
> >
> > On 19/05/2020 04:08, Tamas K Lengyel wrote:
> > > On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org> wrote:
> > >>
> > >> From: Julien Grall <jgrall@amazon.com>
> > >>
> > >> Hi all,
> > >>
> > >> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> > >> only use the first GB of memory.
> > >>
> > >> This is because several devices cannot DMA above 1GB but Xen doesn't
> > >> necessarily allocate memory for Dom0 below 1GB.
> > >>
> > >> This small series is trying to address the problem by allowing a
> > >> platform to restrict where Dom0 banks are allocated.
> > >>
> > >> This is also a candidate for Xen 4.14. Without it, a user will not be
> > >> able to use all the RAM on the Raspberry Pi 4.
> > >>
> > >> This series has only been slightly tested. I would appreciate more
> > >> testing on the Raspberry Pi 4 to confirm this removes the restriction.
> > >
> > > Hi Julien,
> >
> > Hi,
> >
> > > could you post a git branch somewhere? I can try this on my rpi4 that
> > > already runs 4.13.
> >
> > I have pushed a branch based on unstable and the v2 of the series:
> >
> > git://xenbits.xen.org/people/julieng/xen-unstable.git
> >
> > branch arm-dma/v2
> >
>
> I've updated the image I built with
> https://github.com/tklengyel/xen-rpi4-builder a while ago, defining
> 2048m as total_mem. Xen seems to boot fine and passes execution to
> dom0. With dom0_mem set to 512m on the Xen command line it was
> working, but when I increased the memory for dom0 the boot got stuck
> at:
>
> [    1.427788] of_cfs_init
> [    1.429667] of_cfs_init: OK
> [    1.432561] clk: Not disabling unused clocks
> [    1.437239] Waiting for root device /dev/mmcblk0p2...
> [    1.451599] mmc1: queuing unknown CIS tuple 0x80 (2 bytes)
> [    1.458156] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> [    1.464729] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> [    1.472804] mmc1: queuing unknown CIS tuple 0x80 (7 bytes)
> [    1.479370] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> [    1.546902] random: fast init done
> [    1.564590] mmc1: new high speed SDIO card at address 0001
>
> Could this be because the DTB was compiled from a fresh checkout of
> https://github.com/raspberrypi/linux.git (branch rpi-4.19.y) whereas the
> kernel itself is from a checkout ~5 months ago? I suspect that must be
> the cause, because even if I decrease dom0_mem back to 512m it still
> gets stuck at the same spot, whereas it was booting fine before.

Stefano and I are testing the fix right now -- for now just set your
dom0_mem to less than 512m.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Tue May 19 23:55:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 19 May 2020 23:55:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbC4t-0001IY-Av; Tue, 19 May 2020 23:55:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=wYyw=7B=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbC4r-0001IT-Rl
 for xen-devel@lists.xenproject.org; Tue, 19 May 2020 23:55:17 +0000
X-Inumbo-ID: 309ecd60-9a2c-11ea-a9a2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 309ecd60-9a2c-11ea-a9a2-12813bfff9fa;
 Tue, 19 May 2020 23:55:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2Yke0jIJiDUC2IkaNTEsYjveeQjzONWsiV5/dPCzSkA=; b=hlb0CKr4g5kcnGTmeTp97+6YC
 ybNiPUpZvg2Fis7tUAY/2aU8+Sh9XKc4gDB9bpPJroQfiHDB4AANx1J6nRMQ6pN1qH8FQ8ftq8VSu
 gp4QpWbBSzJ1WrjPA0O+YODymOh06XKxe0yysf8siXSiDowPD0YTsbXQW3ZnFN4m7G7zI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbC4q-0006j2-SS; Tue, 19 May 2020 23:55:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbC4q-00066H-8j; Tue, 19 May 2020 23:55:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbC4q-0007FU-7f; Tue, 19 May 2020 23:55:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150265-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150265: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=e235fa2794c95365519eac714d6ea82f8e64752e
X-Osstest-Versions-That: xen=271ade5a621005f86ec928280dc6ac85f2c4c95a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 19 May 2020 23:55:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150265 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150265/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e
baseline version:
 xen                  271ade5a621005f86ec928280dc6ac85f2c4c95a

Last test of basis   150245  2020-05-19 13:01:12 Z    0 days
Testing same since   150249  2020-05-19 17:01:31 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   271ade5a62..e235fa2794  e235fa2794c95365519eac714d6ea82f8e64752e -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 20 00:20:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 00:20:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbCTN-0004Qw-BG; Wed, 20 May 2020 00:20:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbCTL-0004QD-9Y
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 00:20:35 +0000
X-Inumbo-ID: b55b292e-9a2f-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b55b292e-9a2f-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 00:20:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Arx0pdm49yPPqQpEp2+ej4is1fFJ9Q1fx0TquliTq/Q=; b=rOK8g8Ubi1xfAtEKYhR6fkJQq
 kfi+6rMXoLXqNZL3DwdXQI7nzppvnEkxwsFlSW61Nc6YsXQEPXKoFeKbinS+7T4UHD4moSzut/DAr
 5GdQQvQoBPfk213ojyDKhtv4GpSR6SDq1MkezcnUpWaVvXdKFgVfPuRmcUh8ZrcmfwyOM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbCTD-0007r1-Oy; Wed, 20 May 2020 00:20:27 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbCTD-00073K-IL; Wed, 20 May 2020 00:20:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbCTD-0006IV-Ha; Wed, 20 May 2020 00:20:27 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150244-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150244: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=642b151f45dd54809ea00ecd3976a56c1ec9b53d
X-Osstest-Versions-That: linux=b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 00:20:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150244 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150244/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-vhd      11 guest-start      fail in 150236 pass in 150244
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 150236

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150230
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150230
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150230
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150230
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150230
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150230
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150230
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150230
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150230
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                642b151f45dd54809ea00ecd3976a56c1ec9b53d
baseline version:
 linux                b9bbe6ed63b2b9f2c9ee5cbd0f2c946a2723f4ce

Last test of basis   150230  2020-05-18 09:16:04 Z    1 days
Testing same since   150236  2020-05-18 20:40:36 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dan Carpenter <dan.carpenter@oracle.com>
  David Howells <dhowells@redhat.com>
  Eric Sandeen <sandeen@sandeen.net>
  Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Namjae Jeon <namjae.jeon@samsung.com>
  Paul E. McKenney <paulmck@kernel.org> (RCU viewpoint)
  Roberto Sassu <roberto.sassu@huawei.com>
  Wei Yongjun <weiyongjun1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   b9bbe6ed63b2..642b151f45dd  642b151f45dd54809ea00ecd3976a56c1ec9b53d -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed May 20 02:16:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 02:16:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbEGr-0006pg-6X; Wed, 20 May 2020 02:15:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qY6k=7C=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbEGp-0006pb-8n
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 02:15:47 +0000
X-Inumbo-ID: d025439c-9a3f-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d025439c-9a3f-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 02:15:45 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id m185so1264337wme.3
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 19:15:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=Y1NQF7JEWHycCKE/MRXEY/iLaS0F6TnWPG8a7PHW3xk=;
 b=J0MQ9WFmvALZb0neoUTra1JD4abEfV+HLIRdr4HECIC3h4nwMZtsqjVHIlv2S5f5Dy
 kVFfkTbu+0QK8uw+qxG1ft6kxloVHnUGN+wMXxfhDR4beqJuDKJBRenDCs3xqbrJrITr
 uTRxWFvBwvEczCfWlW3siCBn1D480EE9YFDHTmQBX4wEzSh7K39LIAe41aElJXU7U9wI
 celgRCY8FW18mbwlsGwGvVoUB/z/q7+JGnXVS6zpnrKwu/TYj+PvUmR2bm5ylk3dC6wm
 RNMcHxumfaxYl5UeDdkhQglpMVsv0P+/akQjkV1m5zLpzb51AKxbBK4ISkanj8/rTXU2
 3EEg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=Y1NQF7JEWHycCKE/MRXEY/iLaS0F6TnWPG8a7PHW3xk=;
 b=f7pbmOyz/kGpt8sMwoPx0iW+o88Pru6iIbVYzuzjwTrb38JlR9AtVGPwo67G2yFysV
 meHpovZ9FvXIUqleyIcJ1bd+PS2y4tkhb2jv8G3hN840jpqcIm88eYEs0PvF7s4tsX8p
 cF4Q/N81jAxHkAHUzyfwvU331l0iLGzqWxzXgnXHL7VrQGUZCFaFKapgcHJ4caVXGAZH
 tVem9g+5Ab6rp3HSb+S6T1IBZUkmanxHbEHb833Ar550oyLkgBba6zkFQpLRjApRECXy
 3SgGytaqW5FC/8nbP4u4g5uDaQwEYYgg6uidPhimKhd7+LToOYiOHQSPKgDg25jpDKHW
 l+8A==
X-Gm-Message-State: AOAM532pAX6ITRnMDmxKGlnoBaCpSuK1LkU8LMtVD7ypiJ1vfRYnbyuu
 eBxYco+Hrc37/CIa/ntK7Pkxxoso7aqwfUbA1eU=
X-Google-Smtp-Source: ABdhPJxX9lRGUJsYId/9jlkmY3fKEUvTvhX8/NnJr65CrvQMwKiIrXksf4qU/VLEShSW7XRhrQUFxHggic3Tsh/e9J4=
X-Received: by 2002:a7b:c959:: with SMTP id i25mr2253112wml.84.1589940944975; 
 Tue, 19 May 2020 19:15:44 -0700 (PDT)
MIME-Version: 1.0
References: <20200518113008.15422-1-julien@xen.org>
 <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
 <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
 <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
 <CAMmSBy9QQ4RPZnX6d4Mf6OqQjmN0+jLXL9nGMOQjnTt1axn4fQ@mail.gmail.com>
In-Reply-To: <CAMmSBy9QQ4RPZnX6d4Mf6OqQjmN0+jLXL9nGMOQjnTt1axn4fQ@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Tue, 19 May 2020 20:15:08 -0600
Message-ID: <CABfawh=nsM9kz0i2+NmCwapWXqL5k+gzaJLLnfGv06e5bZUvyA@mail.gmail.com>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
To: Roman Shaposhnik <roman@zededa.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 5:50 PM Roman Shaposhnik <roman@zededa.com> wrote:
>
> On Tue, May 19, 2020 at 4:44 PM Tamas K Lengyel
> <tamas.k.lengyel@gmail.com> wrote:
> >
> > On Tue, May 19, 2020 at 11:23 AM Julien Grall <julien@xen.org> wrote:
> > >
> > >
> > >
> > > On 19/05/2020 04:08, Tamas K Lengyel wrote:
> > > > On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org> wrote:
> > > >>
> > > >> From: Julien Grall <jgrall@amazon.com>
> > > >>
> > > >> Hi all,
> > > >>
> > > >> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> > > >> only use the first GB of memory.
> > > >>
> > > >> This is because several devices cannot DMA above 1GB but Xen doesn't
> > > >> necessarily allocate memory for Dom0 below 1GB.
> > > >>
> > > >> This small series is trying to address the problem by allowing a
> > > >> platform to restrict where Dom0 banks are allocated.
> > > >>
> > > >> This is also a candidate for Xen 4.14. Without it, a user will not be
> > > >> able to use all the RAM on the Raspberry Pi 4.
> > > >>
> > > >> This series has only been lightly tested. I would appreciate more testing on
> > > >> the Raspberry Pi 4 to confirm that this removes the restriction.
> > > >
> > > > Hi Julien,
> > >
> > > Hi,
> > >
> > > > could you post a git branch somewhere? I can try this on my rpi4 that
> > > > already runs 4.13.
> > >
> > > I have pushed a branch based on unstable and the v2 of the series:
> > >
> > > git://xenbits.xen.org/people/julieng/xen-unstable.git
> > >
> > > branch arm-dma/v2
> > >
> >
> > I've updated the image I built a while ago with
> > https://github.com/tklengyel/xen-rpi4-builder and set total_mem to
> > 2048m; Xen seems to boot fine and passes execution to dom0. With
> > dom0_mem=512m on the Xen command line it was working, but when I
> > increased the memory for dom0 the boot now gets
> > stuck at:
> >
> > [    1.427788] of_cfs_init
> > [    1.429667] of_cfs_init: OK
> > [    1.432561] clk: Not disabling unused clocks
> > [    1.437239] Waiting for root device /dev/mmcblk0p2...
> > [    1.451599] mmc1: queuing unknown CIS tuple 0x80 (2 bytes)
> > [    1.458156] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> > [    1.464729] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> > [    1.472804] mmc1: queuing unknown CIS tuple 0x80 (7 bytes)
> > [    1.479370] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> > [    1.546902] random: fast init done
> > [    1.564590] mmc1: new high speed SDIO card at address 0001
> >
> > Could this be because the DTB I compiled is from a fresh checkout of
> > https://github.com/raspberrypi/linux.git (branch rpi-4.19.y), whereas the
> > kernel itself is from a checkout ~5 months ago? I guess that must be
> > the cause, because even if I decrease dom0_mem to 512m it still
> > gets stuck at the same spot, whereas it was booting fine before.
>
> Stefano and I are testing the fix right now -- for now just set your
> Dom0 mem to less than 512m.

Actually, it seems to work after I recompiled the kernel and reinstalled
all the kernel modules. Xen boots with 4 GB of RAM and dom0 boots with 2 GB:

xl info:
...
total_memory           : 3956
free_memory            : 1842

cat /proc/meminfo
MemTotal:        1963844 kB

During boot I get an emergency shell on the console complaining about
xenbr0 not coming up, but if I just hit continue it boots fine and the
network is up. So AFAICT things are good.
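For anyone cross-checking these figures, here is a small, hedged sketch (Python, operating on the sample output quoted above rather than on a live host) of how the `xl info` numbers relate to the dom0 allocation:

```python
# Hedged sketch: parse `xl info`-style "key : value" lines and derive the
# memory currently allocated to domains (dom0 plus overhead), in MiB.
# The sample text is copied from this message, not read from a running Xen host.
sample = """\
total_memory           : 3956
free_memory            : 1842
"""

def parse_xl_info(text):
    info = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:  # skip lines without a "key : value" shape
            info[key.strip()] = int(value.strip())
    return info

info = parse_xl_info(sample)
allocated_mib = info["total_memory"] - info["free_memory"]
print(allocated_mib)  # 2114 MiB, consistent with a ~2 GiB dom0
```

The MemTotal of 1963844 kB from /proc/meminfo is roughly 1918 MiB, which is consistent with a 2g dom0 once kernel reservations are subtracted.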

Cheers,
Tamas


From xen-devel-bounces@lists.xenproject.org Wed May 20 02:29:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 02:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbETa-0007oh-Bi; Wed, 20 May 2020 02:28:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Eu2h=7C=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jbETY-0007oc-K5
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 02:28:56 +0000
X-Inumbo-ID: a7210b3c-9a41-11ea-b07b-bc764e2007e4
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a7210b3c-9a41-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 02:28:55 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id i14so2109660qka.10
 for <xen-devel@lists.xenproject.org>; Tue, 19 May 2020 19:28:55 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=WGOOkDJsScw7s5V+9RdisLqP/Tk+LGbdqMPrItt2fl4=;
 b=LQGR0USgPmirv0ZmSovlpHX7FcGiV4vlQdS9Yw/5drsGXKk7IZYsgcrxwjqHosQpEC
 2mtBjK7z2MVmYZFMOVkdLCXJFe+ST6/pMNhpIcrBAqo4k6fO/HzeCVFMvMioMWUnPLpR
 p4UXA5y8zDtGm1DbIM+CfTjCbR9zyWAsHBgXf7OKKQZ1Ol1KFGcriNQ2CHFF/OvfmiIl
 nuPaOtEQvngUIYnGc8rOE8RfhJ/EggSU5u/CV/rO5qRJM4ryAxjrMZ3ybsXte2YI8mSg
 B83T75SzaLgL7usdy52fQfMQWl809SXBbw00hElvWbDt5ApnvzcPFNWHOG6oNk13j5qa
 +GnQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=WGOOkDJsScw7s5V+9RdisLqP/Tk+LGbdqMPrItt2fl4=;
 b=BjOgUQ66Tp8hjJ7Xt2J8BWR2JPVjbgGYz5UKAvVlblaggtLjaa85DKn7CYabRUz0WO
 VQAeXhuFr17Yp75JHrqJ7RmZMwFiUiR31/LgvR/A8eF0UQ9OGFoDuaAxliGiLXbGZ50e
 RZgCq5aCgGO+GPULcniSNaOQahJIh6JPcFDIv98I4IlKh20PqZXgKCr19DEK3MDfrc/Y
 h1k+//mECr24LWZncfqlikE7mB5nqnA8zW69lZywJPlcBQcCIuMHxrtwVwltEvumawUH
 Xv4vxJwP6xv9V+fZPBaWsXc4bNsUu1PPh/OBClgiqKzMEYdNHoHmr36UupspiyyPgIm6
 TJAA==
X-Gm-Message-State: AOAM5308bU4I1/djMcrWYXRrcneH1rr8Dn7QRw0ogS1vKPeqdFlprQug
 fOEfxSGh6FyBjPBtDrhsYSdYRgQcpoEZhBGncteZvQ==
X-Google-Smtp-Source: ABdhPJxCXhDxAh7AefgL873OGWdFwOMWlURpaRs5ogyGVpXBqsVkzyPHmtuW/Y9Ns+X/7PkatZcAsuXb9JjRanZVNy4=
X-Received: by 2002:a37:4017:: with SMTP id n23mr2388775qka.291.1589941735312; 
 Tue, 19 May 2020 19:28:55 -0700 (PDT)
MIME-Version: 1.0
References: <20200518113008.15422-1-julien@xen.org>
 <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
 <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
 <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
 <CAMmSBy9QQ4RPZnX6d4Mf6OqQjmN0+jLXL9nGMOQjnTt1axn4fQ@mail.gmail.com>
 <CABfawh=nsM9kz0i2+NmCwapWXqL5k+gzaJLLnfGv06e5bZUvyA@mail.gmail.com>
In-Reply-To: <CABfawh=nsM9kz0i2+NmCwapWXqL5k+gzaJLLnfGv06e5bZUvyA@mail.gmail.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Tue, 19 May 2020 19:28:43 -0700
Message-ID: <CAMmSBy-372BGtHGDsx6SHTwj7sZV4Qvq1XF+kbenkEcwboZF5w@mail.gmail.com>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Content-Type: multipart/alternative; boundary="00000000000058d77105a60b28fc"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--00000000000058d77105a60b28fc
Content-Type: text/plain; charset="UTF-8"

On Tue, May 19, 2020, 7:15 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com>
wrote:

> On Tue, May 19, 2020 at 5:50 PM Roman Shaposhnik <roman@zededa.com> wrote:
> >
> > On Tue, May 19, 2020 at 4:44 PM Tamas K Lengyel
> > <tamas.k.lengyel@gmail.com> wrote:
> > >
> > > On Tue, May 19, 2020 at 11:23 AM Julien Grall <julien@xen.org> wrote:
> > > >
> > > >
> > > >
> > > > On 19/05/2020 04:08, Tamas K Lengyel wrote:
> > > > > On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org>
> wrote:
> > > > >>
> > > > >> From: Julien Grall <jgrall@amazon.com>
> > > > >>
> > > > >> Hi all,
> > > > >>
> > > > >> At the moment, a user who wants to boot Xen on the Raspberry Pi 4
> can
> > > > >> only use the first GB of memory.
> > > > >>
> > > > >> This is because several devices cannot DMA above 1GB but Xen
> doesn't
> > > > >> necessarily allocate memory for Dom0 below 1GB.
> > > > >>
> > > > >> This small series is trying to address the problem by allowing a
> > > > >> platform to restrict where Dom0 banks are allocated.
> > > > >>
> > > > >> This is also a candidate for Xen 4.14. Without it, a user will
> not be
> > > > >> able to use all the RAM on the Raspberry Pi 4.
> > > > >>
> > > > >> This series has only been lightly tested. I would appreciate more
> > > > >> testing on the Raspberry Pi 4 to confirm that this removes the restriction.
> > > > >
> > > > > Hi Julien,
> > > >
> > > > Hi,
> > > >
> > > > > could you post a git branch somewhere? I can try this on my rpi4
> that
> > > > > already runs 4.13.
> > > >
> > > > I have pushed a branch based on unstable and the v2 of the series:
> > > >
> > > > git://xenbits.xen.org/people/julieng/xen-unstable.git
> > > >
> > > > branch arm-dma/v2
> > > >
> > >
> > > I've updated the image I built a while ago with
> > > https://github.com/tklengyel/xen-rpi4-builder and set total_mem to
> > > 2048m; Xen seems to boot fine and passes execution to dom0. With
> > > dom0_mem=512m on the Xen command line it was working, but when I
> > > increased the memory for dom0 the boot now gets
> > > stuck at:
> > >
> > > [    1.427788] of_cfs_init
> > > [    1.429667] of_cfs_init: OK
> > > [    1.432561] clk: Not disabling unused clocks
> > > [    1.437239] Waiting for root device /dev/mmcblk0p2...
> > > [    1.451599] mmc1: queuing unknown CIS tuple 0x80 (2 bytes)
> > > [    1.458156] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> > > [    1.464729] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> > > [    1.472804] mmc1: queuing unknown CIS tuple 0x80 (7 bytes)
> > > [    1.479370] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
> > > [    1.546902] random: fast init done
> > > [    1.564590] mmc1: new high speed SDIO card at address 0001
> > >
> > > Could this be because the DTB I compiled is from a fresh checkout of
> > > https://github.com/raspberrypi/linux.git (branch rpi-4.19.y), whereas the
> > > kernel itself is from a checkout ~5 months ago? I guess that must be
> > > the cause, because even if I decrease dom0_mem to 512m it still
> > > gets stuck at the same spot, whereas it was booting fine before.
> >
> > Stefano and I are testing the fix right now -- for now just set your
> > Dom0 mem to less than 512m.
>
> Actually seems to work after I recompiled the kernel and reinstalled
> all kernel modules. Xen boots with 4gb RAM and dom0 boots with 2g:
>
> xl info:
> ...
> total_memory           : 3956
> free_memory            : 1842
>
> cat /proc/meminfo
> MemTotal:        1963844 kB
>
> I get an emergency shell during boot on the console complaining about
> xenbr0 not coming up but if I just hit continue it boots fine and the
> network is up. So AFAICT things are good.
>

What exact version of the kernel are you using and what did you build it
from?

FWIW: 5.6.x clearly has an issue with DMA.

Thanks,
Roman.

>



From xen-devel-bounces@lists.xenproject.org Wed May 20 03:06:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 03:06:59 +0000
MIME-Version: 1.0
References: <20200518113008.15422-1-julien@xen.org>
 <CABfawh=-XVaRxQ+WyM9ZV7jO5hEO=jAWa4m=b_1bQ41NgEB-2A@mail.gmail.com>
 <297448b7-7837-cbe5-dee4-da80ca03cd29@xen.org>
 <CABfawhkMEu0kMH7dac6OrUxpif8v+m7MeWePRg8UYL7MstJNFA@mail.gmail.com>
 <CAMmSBy9QQ4RPZnX6d4Mf6OqQjmN0+jLXL9nGMOQjnTt1axn4fQ@mail.gmail.com>
 <CABfawh=nsM9kz0i2+NmCwapWXqL5k+gzaJLLnfGv06e5bZUvyA@mail.gmail.com>
 <CAMmSBy-372BGtHGDsx6SHTwj7sZV4Qvq1XF+kbenkEcwboZF5w@mail.gmail.com>
In-Reply-To: <CAMmSBy-372BGtHGDsx6SHTwj7sZV4Qvq1XF+kbenkEcwboZF5w@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Tue, 19 May 2020 21:06:01 -0600
Message-ID: <CABfawhkoSz-zSdyuFvu=p7pPE=uG1QN1E0XawjRbLa47Cx3Nww@mail.gmail.com>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Rasberry Pi 4
To: Roman Shaposhnik <roman@zededa.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 minyard@acm.org, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jeff Kubascik <jeff.kubascik@dornerworks.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 8:28 PM Roman Shaposhnik <roman@zededa.com> wrote:
>
> On Tue, May 19, 2020, 7:15 PM Tamas K Lengyel <tamas.k.lengyel@gmail.com> wrote:
>>
>> On Tue, May 19, 2020 at 5:50 PM Roman Shaposhnik <roman@zededa.com> wrote:
>> >
>> > On Tue, May 19, 2020 at 4:44 PM Tamas K Lengyel
>> > <tamas.k.lengyel@gmail.com> wrote:
>> > >
>> > > On Tue, May 19, 2020 at 11:23 AM Julien Grall <julien@xen.org> wrote:
>> > > >
>> > > >
>> > > >
>> > > > On 19/05/2020 04:08, Tamas K Lengyel wrote:
>> > > > > On Mon, May 18, 2020 at 5:32 AM Julien Grall <julien@xen.org> wrote:
>> > > > >>
>> > > > >> From: Julien Grall <jgrall@amazon.com>
>> > > > >>
>> > > > >> Hi all,
>> > > > >>
>> > > > >> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
>> > > > >> only use the first GB of memory.
>> > > > >>
>> > > > >> This is because several devices cannot DMA above 1GB but Xen doesn't
>> > > > >> necessarily allocate memory for Dom0 below 1GB.
>> > > > >>
>> > > > >> This small series is trying to address the problem by allowing a
>> > > > >> platform to restrict where Dom0 banks are allocated.
>> > > > >>
>> > > > >> This is also a candidate for Xen 4.14. Without it, a user will not be
>> > > > >> able to use all the RAM on the Raspberry Pi 4.
>> > > > >>
>> > > > >> This series has only been slightly tested. I would appreciate more tests on
>> > > > >> the Raspberry Pi 4 to confirm this removes the restriction.
>> > > > >
>> > > > > Hi Julien,
>> > > >
>> > > > Hi,
>> > > >
>> > > > > could you post a git branch somewhere? I can try this on my rpi4 that
>> > > > > already runs 4.13.
>> > > >
>> > > > I have pushed a branch based on unstable and the v2 of the series:
>> > > >
>> > > > git://xenbits.xen.org/people/julieng/xen-unstable.git
>> > > >
>> > > > branch arm-dma/v2
>> > > >
>> > >
>> > > I've updated my image I built with
>> > > https://github.com/tklengyel/xen-rpi4-builder a while ago and I've
>> > > defined 2048m as total_mem and Xen seems to be booting fine and passes
>> > > execution to dom0. With 512m being set as the Xen cmdline for dom0_mem
>> > > it was working. When I increased the mem for dom0 the boot is now
>> > > stuck at:
>> > >
>> > > [    1.427788] of_cfs_init
>> > > [    1.429667] of_cfs_init: OK
>> > > [    1.432561] clk: Not disabling unused clocks
>> > > [    1.437239] Waiting for root device /dev/mmcblk0p2...
>> > > [    1.451599] mmc1: queuing unknown CIS tuple 0x80 (2 bytes)
>> > > [    1.458156] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
>> > > [    1.464729] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
>> > > [    1.472804] mmc1: queuing unknown CIS tuple 0x80 (7 bytes)
>> > > [    1.479370] mmc1: queuing unknown CIS tuple 0x80 (3 bytes)
>> > > [    1.546902] random: fast init done
>> > > [    1.564590] mmc1: new high speed SDIO card at address 0001
>> > >
>> > > Could this be because I compiled the DTB from a fresh checkout of
>> > > https://github.com/raspberrypi/linux.git branch rpi-4.19.y, whereas the
>> > > kernel itself is from a checkout ~5 months ago? I guess that must be
>> > > the cause because even if I decrease the dom0_mem to 512m it still
>> > > gets stuck at the same spot whereas it was booting fine before.
>> >
>> > Stefano and I are testing the fix right now -- for now just set your
>> > Dom0 mem to less than 512m.
>>
>> Actually seems to work after I recompiled the kernel and reinstalled
>> all kernel modules. Xen boots with 4gb RAM and dom0 boots with 2g:
>>
>> xl info:
>> ...
>> total_memory           : 3956
>> free_memory            : 1842
>>
>> cat /proc/meminfo
>> MemTotal:        1963844 kB
>>
>> I get an emergency shell during boot on the console complaining about
>> xenbr0 not coming up but if I just hit continue it boots fine and the
>> network is up. So AFAICT things are good.
>
>
> What exact version of the kernel are you using and what did you build it from?
>
> FWIW: 5.6.x clearly has an issue with DMA.

As I said above: https://github.com/raspberrypi/linux.git, branch
rpi-4.19.y. I applied the Linux patches from the xen-rpi4-builder
repo, changing only the dom0_mem option in patch 1. I reverted the
xen-rpi4-builder repo by a couple of revisions so as not to build
using the DTB overlay.

Tamas
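As a sanity check on the numbers reported earlier in this thread (dom0 given 2g via dom0_mem, /proc/meminfo showing MemTotal 1963844 kB), the gap between the two is just the kernel's own reservations. A minimal sketch, using only the figures quoted above:

```python
# Sanity-check of the memory figures from this thread: dom0_mem=2g was
# passed on the Xen command line, and dom0's /proc/meminfo reported
# MemTotal: 1963844 kB. The difference is memory reserved by the kernel
# itself (image, page structs, firmware carve-outs), expected to be small.
GIB_KB = 1024 * 1024          # 1 GiB expressed in kB

dom0_mem_kb = 2 * GIB_KB      # dom0_mem=2g
meminfo_kb = 1963844          # MemTotal from /proc/meminfo in dom0

gap_kb = dom0_mem_kb - meminfo_kb
print(gap_kb)                         # kB not visible to dom0 userspace
print(round(gap_kb / dom0_mem_kb, 3)) # fraction of dom0_mem, well under 10%
```

A gap of a few percent like this is normal, so the reported numbers are consistent with dom0 really receiving the full 2 GiB.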


From xen-devel-bounces@lists.xenproject.org Wed May 20 03:14:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 03:14:45 +0000
Subject: Re: grant table issues mapping a ring order 10
To: Stefano Stabellini <sstabellini@kernel.org>, jgross@suse.com
References: <alpine.DEB.2.21.2005191252040.27502@sstabellini-ThinkPad-T480s>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <6f4c33ad-235b-a7be-b5fd-1a80d339e449@oracle.com>
Date: Tue, 19 May 2020 23:14:25 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005191252040.27502@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 5:21 PM, Stefano Stabellini wrote:
> Hi Juergen, Boris,
>
> I am trying to increase the size of the rings used for Xen 9pfs
> connections for performance reasons and also to reduce the likelihood of
> the backend having to wait on the frontend to free up space from the
> ring.
>
> FYI I realized that we cannot choose order 11 or greater in Linux
> because then we run into the hard limit CONFIG_FORCE_MAX_ZONEORDER=11.
> But that is not the reason why I am writing to you :-)
>
>
> The reason why I am writing is that even order 10 fails for some
> grant-table related reason I cannot explain. There are two rings, each
> of them order 10. Mapping the first ring results in an error. (Order 9
> works fine, resulting in both rings being mapped correctly.)
>
> QEMU tries to map the refs but gets an error:
>
>   gnttab: error: mmap failed: Invalid argument
>   xen be: 9pfs-0: xen be: 9pfs-0: xengnttab_map_domain_grant_refs failed: Invalid argument
>   xengnttab_map_domain_grant_refs failed: Invalid argument
>
> The error comes from Xen. The hypervisor returns GNTST_bad_gntref to
> Linux (drivers/xen/grant-table.c:gnttab_map_refs). Then:
>
> 	if (map->map_ops[i].status) {
> 		err = -EINVAL;
> 		continue;
> 	}
>
> So Linux returns -EINVAL to QEMU. The refs seem to be garbage. The
> following printks are in Xen in the implementation of map_grant_ref:
>
> (XEN) DEBUG map_grant_ref 1017 ref=998 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=999 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=2050669706 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x7a3abc8a for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=19 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=56423797 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x35cf575 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=348793 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x55279 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=2070386184 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x7b679608 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=3421871 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af for d1
> (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=875999099 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af7b for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=2705045486 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa13bb7ee for d1
> (XEN) DEBUG map_grant_ref 1013 ref=4294967295 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xffffffff for d1
> (XEN) DEBUG map_grant_ref 1013 ref=213291910 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xcb69386 for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=4912 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x1330 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
> (XEN) DEBUG map_grant_ref 1017 ref=24 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>
>
> Full logs https://pastebin.com/QLTUaUGJ
> It is worth mentioning that no limits are being reached: we are below
> 2500 entries per domain and below the 64 pages of grant refs per domain.
>
> What seems to happen is that after ref 999, the next refs are garbage.
> Do you have any ideas why?


Y1K? ;-)


Have you tried verifying that entry #1000 is properly initialized?


-boris


>
>
> I tracked the gnttab_expand calls in Dom0 and they seemed to be done
> correctly. We need 5 grant table pages:
>
> - order 10 -> 1024 refs
> - 2 rings -> 2048 refs
> - 512 refs per grant table page -> 4 pages
> - plus a few other refs by default -> 5 pages
>
> [    3.896558] DEBUG gnttab_expand 1287 cur=1 extra=1 max=64 rc=0
> [    5.115189] DEBUG gnttab_expand 1287 cur=2 extra=1 max=64 rc=0
> [    6.334027] DEBUG gnttab_expand 1287 cur=3 extra=1 max=64 rc=0
> [    7.350523] DEBUG gnttab_expand 1287 cur=4 extra=1 max=64 rc=0
>
> As expected gnttab_expand gets called 4 times to add 4 more pages to the
> initial page.
> initial page.
>
>
> Thanks,
>
> Stefano
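Stefano's page arithmetic above can be checked with a small sketch. It assumes grant table v1, where each 4 KiB grant-table page holds 4096/8 = 512 entries, consistent with the "512 refs per grant table page" figure quoted in his mail; the "plus a few other refs" slack is represented here by an arbitrary 32:

```python
# Sketch of the grant-table capacity arithmetic from the thread above,
# assuming grant table v1: each 4 KiB page holds 4096 / 8 = 512 entries.
ENTRIES_PER_PAGE = 4096 // 8  # 512 v1 entries per grant-table page

def refs_for_ring(order):
    """An order-n ring spans 2**n pages, each needing one grant ref."""
    return 1 << order

def pages_needed(total_refs):
    """Grant-table pages needed to hold total_refs entries, rounded up."""
    return -(-total_refs // ENTRIES_PER_PAGE)  # ceiling division

ring_refs = 2 * refs_for_ring(10)   # two order-10 rings -> 2048 refs
print(ring_refs)                    # 2048
print(pages_needed(ring_refs))      # 4 pages for the rings alone
print(pages_needed(ring_refs + 32)) # a few default refs push it to 5 pages
```

This matches the observed behaviour: gnttab_expand is called 4 times, growing the table from the initial page to 5 pages.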





From xen-devel-bounces@lists.xenproject.org Wed May 20 04:12:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 04:12:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150247-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150247: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl:xen-boot:fail:regression
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=7efd9f3d45480c12328e4419547a98022f7af43a
X-Osstest-Versions-That: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 04:11:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150247 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150247/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl           7 xen-boot                 fail REGR. vs. 150227

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150227
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150227
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150227
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150227
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150227
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150227
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150227
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7efd9f3d45480c12328e4419547a98022f7af43a
baseline version:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f

Last test of basis   150227  2020-05-18 01:51:25 Z    2 days
Failing since        150234  2020-05-18 18:06:10 Z    1 days    3 attempts
Testing same since   150247  2020-05-19 16:07:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Eric Shelton <eshelton@pobox.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Simon Gaiser <simon@invisiblethingslab.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 7efd9f3d45480c12328e4419547a98022f7af43a
Author: Eric Shelton <eshelton@pobox.com>
Date:   Mon May 18 21:54:49 2020 -0400

    libxl: Handle Linux stubdomain specific QEMU options.
    
    This patch creates an appropriate command line for the QEMU instance
    running in a Linux-based stubdomain.
    
    NOTE: a number of items are not currently implemented for Linux-based
    stubdomains, such as:
    - save/restore
    - QMP socket
    - graphics output (e.g., VNC)
    
    Signed-off-by: Eric Shelton <eshelton@pobox.com>
    
    Simon:
     * fix disk path
     * fix cdrom path and "format"
    
    Signed-off-by: Simon Gaiser <simon@invisiblethingslab.com>
    [drop Qubes-specific parts]
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    
    Allow setting stubdomain_ramdisk independently from stubdomain_kernel
    Add a qemu- prefix for qemu-stubdom-linux-{kernel,rootfs} since stubdom
    doesn't convey device-model.  Use qemu- since this code is qemu specific.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 46587a30d4cb7702a8c2074c98d687b6f9602e6c
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:48 2020 -0400

    libxl: Allow running qemu-xen in stubdomain
    
    No longer prohibit using a stubdomain with qemu-xen.
    To help distinguish MiniOS and Linux stubdomains, add the inline helper
    functions libxl__stubdomain_is_linux() and
    libxl__stubdomain_is_linux_running(). These should be used where the
    difference really is MiniOS vs. Linux, not qemu-xen vs.
    qemu-xen-traditional.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 11b7f8725d5f992a384a6ca55a08e5e908c06d85
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:47 2020 -0400

    libxl: fix qemu-trad cmdline for no sdl/vnc case
    
    When qemu is running in a stubdomain, any attempt to initialize vnc/sdl
    there will crash it (on a failed attempt to load a keymap from a file).
    If a vfb is present, all those cases are skipped. But since
    b053f0c4c9e533f3d97837cf897eb920b8355ed3 "libxl: do not start dom0 qemu
    for stubdomain when not needed" it is possible to create a stubdomain
    without a vfb and, contrary to the comment, -vnc none does trigger the
    VNC initialization code (it just skips exposing it externally).
    Change the implicit SDL-avoiding method to the -nographics option, used
    when neither SDL nor VNC is enabled.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
    Acked-by: Wei Liu <wei.liu2@citrix.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

commit cd7181308e196cba5375202262d1e27d9f0ac49c
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:46 2020 -0400

    Document ioemu Linux stubdomain protocol
    
    Add documentation for upcoming Linux stubdomain for qemu-upstream.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit c7c145470168c67ba45911370bd6902917e59da5
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date:   Mon May 18 21:54:45 2020 -0400

    Document ioemu MiniOS stubdomain protocol
    
    Add documentation based on reverse-engineered toolstack-ioemu stubdomain
    protocol.
    
    Signed-off-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

commit 475ffdbbf5778329319ef6f7bd6315c163163440
Author: Olaf Hering <olaf@aepfle.de>
Date:   Mon May 18 16:44:00 2020 +0200

    tools: use HOSTCC/CPP to compile rombios code and helper
    
    Use also HOSTCFLAGS for biossums while touching the code.
    
    Spotted by inspecting build logfile.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 2b532519d64e653a6bbfd9eefed6040a09c8876d
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:18:56 2020 +0200

    x86: determine MXCSR mask in all cases
    
    For its use(s) by the emulator to be correct in all cases, the filling
    of the variable needs to be independent of XSAVE availability. As
    there's no suitable function in i387.c to put the logic in, keep it in
    xstate_init(), arrange for the function to be called unconditionally,
    and pull the logic ahead of all return paths there.
    
    Fixes: 9a4496a35b20 ("x86emul: support {,V}{LD,ST}MXCSR")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 4c73a2a939a51dee47db77b31208157dbc29fe98
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:17:51 2020 +0200

    x86/mem-paging: consistently use gfn_t
    
    Where gprintk()s get touched anyway to switch to PRI_gfn, also switch to
    %pd for the domain logged.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 02a2b19c8e2532998e262727d32c574267ac6b48
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:16:55 2020 +0200

    x86/mem-paging: move code to its dedicated source file
    
    Do a little bit of style adjustment along the way, and drop the
    "p2m_mem_paging_" prefixes from the now static functions.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 9f40699c956c10419a91db0ff8d0c985dd3b2800
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:15:46 2020 +0200

    x86/mem-paging: use guest handle for XENMEM_paging_op_prep
    
    While it should have been this way from the beginning, not doing so will
    become an actual problem with PVH Dom0. The interface change is binary
    compatible, but requires tools side producers to be re-built.
    
    Drop the bogus/unnecessary page alignment restriction on the input
    buffer at the same time.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5dc999d711a75e1a83d4b21da203d3c3197ec0e0
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 18 17:13:38 2020 +0200

    x86/mm: no-one passes a NULL domain to init_xen_l4_slots()
    
    Drop the NULL checks - they've been introduced by commit 8d7b633ada
    ("x86/mm: Consolidate all Xen L4 slot writing into
    init_xen_l4_slots()") without giving a reason; I'm told this was done
    in anticipation of the function potentially getting called with a NULL
    argument down the road.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

commit 97fb0253e6c2f2221bfd0895b7ffe3a99330d847
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sat May 16 19:50:45 2020 +0100

    x86/hvm: Fix shifting in stdvga_mem_read()
    
    stdvga_mem_read() has a return type of uint8_t, which promotes to int rather
    than unsigned int.  Shifting by 24 may hit the sign bit.
    
    Spotted by Coverity.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 3d6e92e309987c9e33177c9ccd155e58dbd5d0db
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Sat May 16 13:10:07 2020 +0100

    x86/hvm: Fix memory leaks in hvm_copy_context_and_params()
    
    Any error from hvm_save() or hvm_set_param() leaks the c.data allocation.
    
    Spotted by Coverity.
    
    Fixes: 353744830 "x86/hvm: introduce hvm_copy_context_and_params"
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 20 05:00:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 05:00:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbGqL-0005Ri-Af; Wed, 20 May 2020 05:00:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zw8I=7C=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1jbGqK-0005Rd-2s
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 05:00:36 +0000
X-Inumbo-ID: d60767c4-9a56-11ea-ae69-bc764e2007e4
Received: from mga01.intel.com (unknown [192.55.52.88])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d60767c4-9a56-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 05:00:34 +0000 (UTC)
IronPort-SDR: +1QIwMlZXnqeRELOSwoC8YMAB+I6e4M8vn4R7wuXpq2shnSyrXdD9iOzz5Yqgv+5VHBuPy0W5F
 CUCfFGrzDdSw==
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
Received: from fmsmga008.fm.intel.com ([10.253.24.58])
 by fmsmga101.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 19 May 2020 22:00:27 -0700
IronPort-SDR: Fcrn2Nlt+mrIWVnVztz1Y9/abzA1uWpTyz4VkYyeSXOtJSayTXi4fGdMP8Ylwt00fscP+HoQHF
 xl5qCHJosw+w==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,412,1583222400"; 
 d="gz'50?scan'50,208,50";a="254925887"
Received: from lkp-server01.sh.intel.com (HELO lkp-server01) ([10.239.97.150])
 by fmsmga008.fm.intel.com with ESMTP; 19 May 2020 22:00:20 -0700
Received: from kbuild by lkp-server01 with local (Exim 4.89)
 (envelope-from <lkp@intel.com>)
 id 1jbGq4-0006m1-4O; Wed, 20 May 2020 13:00:20 +0800
Date: Wed, 20 May 2020 13:00:01 +0800
From: kbuild test robot <lkp@intel.com>
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
 boris.ostrovsky@oracle.com, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com,
 roger.pau@citrix.com, axboe@kernel.dk, davem@davemloft.net,
 rjw@rjwysocki.net, len.brown@intel.com, pavel@ucw.cz,
 peterz@infradead.org, eduval@amazon.com, sblbir@amazon.com,
 xen-devel@lists.xenproject.org, vkuznets@redhat.com,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 dwmw@amazon.co.uk, benh@kernel.crashing.org
Subject: Re: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Message-ID: <202005201221.3QB506km%lkp@intel.com>
References: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="2oS5YaxWCcQjTEyO"
Content-Disposition: inline
In-Reply-To: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: clang-built-linux@googlegroups.com, kbuild-all@lists.01.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--2oS5YaxWCcQjTEyO
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Anchal,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.7-rc6]
[cannot apply to xen-tip/linux-next tip/irq/core tip/auto-latest next-20200519]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest to use '--base' option to specify the
base tree in git format-patch, please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Anchal-Agarwal/Fix-PM-hibernation-in-Xen-guests/20200520-073211
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 03fb3acae4be8a6b680ffedb220a8b6c07260b40
config: x86_64-randconfig-a016-20200519 (attached as .config)
compiler: clang version 11.0.0 (https://github.com/llvm/llvm-project e6658079aca6d971b4e9d7137a3a2ecbc9c34aec)
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kbuild test robot <lkp@intel.com>

All errors/warnings (new ones prefixed by >>, old ones prefixed by <<):

>> drivers/block/xen-blkfront.c:2699:30: warning: missing terminating '"' character [-Winvalid-pp-token]
xenbus_dev_error(dev, err, "Hibernation Failed.
^
>> drivers/block/xen-blkfront.c:2699:30: error: expected expression
drivers/block/xen-blkfront.c:2700:26: warning: missing terminating '"' character [-Winvalid-pp-token]
The ring is still busy");
^
>> drivers/block/xen-blkfront.c:2726:1: error: function definition is not allowed here
{
^
>> drivers/block/xen-blkfront.c:2762:10: error: use of undeclared identifier 'blkfront_restore'
.thaw = blkfront_restore,
^
drivers/block/xen-blkfront.c:2763:13: error: use of undeclared identifier 'blkfront_restore'
.restore = blkfront_restore
^
drivers/block/xen-blkfront.c:2767:1: error: function definition is not allowed here
{
^
drivers/block/xen-blkfront.c:2800:1: error: function definition is not allowed here
{
^
drivers/block/xen-blkfront.c:2822:1: error: function definition is not allowed here
{
^
>> drivers/block/xen-blkfront.c:2863:13: error: use of undeclared identifier 'xlblk_init'
module_init(xlblk_init);
^
drivers/block/xen-blkfront.c:2867:1: error: function definition is not allowed here
{
^
>> drivers/block/xen-blkfront.c:2874:13: error: use of undeclared identifier 'xlblk_exit'
module_exit(xlblk_exit);
^
>> drivers/block/xen-blkfront.c:2880:24: error: expected '}'
MODULE_ALIAS("xenblk");
^
drivers/block/xen-blkfront.c:2674:1: note: to match this '{'
{
^
>> drivers/block/xen-blkfront.c:2738:45: warning: ISO C90 forbids mixing declarations and code [-Wdeclaration-after-statement]
static const struct block_device_operations xlvbd_block_fops =
^
3 warnings and 11 errors generated.

vim +2699 drivers/block/xen-blkfront.c

  2672	
  2673	static int blkfront_freeze(struct xenbus_device *dev)
  2674	{
  2675		unsigned int i;
  2676		struct blkfront_info *info = dev_get_drvdata(&dev->dev);
  2677		struct blkfront_ring_info *rinfo;
  2678		/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
  2679		unsigned int timeout = 5 * HZ;
  2680		unsigned long flags;
  2681		int err = 0;
  2682	
  2683		info->connected = BLKIF_STATE_FREEZING;
  2684	
  2685		blk_mq_freeze_queue(info->rq);
  2686		blk_mq_quiesce_queue(info->rq);
  2687	
  2688		for_each_rinfo(info, rinfo, i) {
  2689		    /* No more gnttab callback work. */
  2690		    gnttab_cancel_free_callback(&rinfo->callback);
  2691		    /* Flush gnttab callback work. Must be done with no locks held. */
  2692		    flush_work(&rinfo->work);
  2693		}
  2694	
  2695		for_each_rinfo(info, rinfo, i) {
  2696		    spin_lock_irqsave(&rinfo->ring_lock, flags);
  2697		    if (RING_FULL(&rinfo->ring)
  2698			    || RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {
> 2699			xenbus_dev_error(dev, err, "Hibernation Failed.
  2700				The ring is still busy");
  2701			info->connected = BLKIF_STATE_CONNECTED;
  2702			spin_unlock_irqrestore(&rinfo->ring_lock, flags);
  2703			return -EBUSY;
  2704		}
  2705		    spin_unlock_irqrestore(&rinfo->ring_lock, flags);
  2706		}
  2707		/* Kick the backend to disconnect */
  2708		xenbus_switch_state(dev, XenbusStateClosing);
  2709	
  2710		/*
  2711		 * We don't want to move forward before the frontend is disconnected
  2712		 * from the backend cleanly.
  2713		 */
  2714		timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
  2715						      timeout);
  2716		if (!timeout) {
  2717			err = -EBUSY;
  2718			xenbus_dev_error(dev, err, "Freezing timed out;"
  2719					 "the device may become inconsistent state");
  2720		}
  2721	
  2722		return err;
  2723	}
  2724	
  2725	static int blkfront_restore(struct xenbus_device *dev)
> 2726	{
  2727		struct blkfront_info *info = dev_get_drvdata(&dev->dev);
  2728		int err = 0;
  2729	
  2730		err = talk_to_blkback(dev, info);
  2731		blk_mq_unquiesce_queue(info->rq);
  2732		blk_mq_unfreeze_queue(info->rq);
  2733		if (!err)
  2734		    blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
  2735		return err;
  2736	}
  2737	
> 2738	static const struct block_device_operations xlvbd_block_fops =
  2739	{
  2740		.owner = THIS_MODULE,
  2741		.open = blkif_open,
  2742		.release = blkif_release,
  2743		.getgeo = blkif_getgeo,
  2744		.ioctl = blkif_ioctl,
  2745		.compat_ioctl = blkdev_compat_ptr_ioctl,
  2746	};
  2747	
  2748	
  2749	static const struct xenbus_device_id blkfront_ids[] = {
  2750		{ "vbd" },
  2751		{ "" }
  2752	};
  2753	
  2754	static struct xenbus_driver blkfront_driver = {
  2755		.ids  = blkfront_ids,
  2756		.probe = blkfront_probe,
  2757		.remove = blkfront_remove,
  2758		.resume = blkfront_resume,
  2759		.otherend_changed = blkback_changed,
  2760		.is_ready = blkfront_is_ready,
  2761		.freeze = blkfront_freeze,
> 2762		.thaw = blkfront_restore,
  2763		.restore = blkfront_restore
  2764	};
  2765	
  2766	static void purge_persistent_grants(struct blkfront_info *info)
> 2767	{
  2768		unsigned int i;
  2769		unsigned long flags;
  2770		struct blkfront_ring_info *rinfo;
  2771	
  2772		for_each_rinfo(info, rinfo, i) {
  2773			struct grant *gnt_list_entry, *tmp;
  2774	
  2775			spin_lock_irqsave(&rinfo->ring_lock, flags);
  2776	
  2777			if (rinfo->persistent_gnts_c == 0) {
  2778				spin_unlock_irqrestore(&rinfo->ring_lock, flags);
  2779				continue;
  2780			}
  2781	
  2782			list_for_each_entry_safe(gnt_list_entry, tmp, &rinfo->grants,
  2783						 node) {
  2784				if (gnt_list_entry->gref == GRANT_INVALID_REF ||
  2785				    gnttab_query_foreign_access(gnt_list_entry->gref))
  2786					continue;
  2787	
  2788				list_del(&gnt_list_entry->node);
  2789				gnttab_end_foreign_access(gnt_list_entry->gref, 0, 0UL);
  2790				rinfo->persistent_gnts_c--;
  2791				gnt_list_entry->gref = GRANT_INVALID_REF;
  2792				list_add_tail(&gnt_list_entry->node, &rinfo->grants);
  2793			}
  2794	
  2795			spin_unlock_irqrestore(&rinfo->ring_lock, flags);
  2796		}
  2797	}
  2798	
  2799	static void blkfront_delay_work(struct work_struct *work)
  2800	{
  2801		struct blkfront_info *info;
  2802		bool need_schedule_work = false;
  2803	
  2804		mutex_lock(&blkfront_mutex);
  2805	
  2806		list_for_each_entry(info, &info_list, info_list) {
  2807			if (info->feature_persistent) {
  2808				need_schedule_work = true;
  2809				mutex_lock(&info->mutex);
  2810				purge_persistent_grants(info);
  2811				mutex_unlock(&info->mutex);
  2812			}
  2813		}
  2814	
  2815		if (need_schedule_work)
  2816			schedule_delayed_work(&blkfront_work, HZ * 10);
  2817	
  2818		mutex_unlock(&blkfront_mutex);
  2819	}
  2820	
  2821	static int __init xlblk_init(void)
> 2822	{
  2823		int ret;
  2824		int nr_cpus = num_online_cpus();
  2825	
  2826		if (!xen_domain())
  2827			return -ENODEV;
  2828	
  2829		if (!xen_has_pv_disk_devices())
  2830			return -ENODEV;
  2831	
  2832		if (register_blkdev(XENVBD_MAJOR, DEV_NAME)) {
  2833			pr_warn("xen_blk: can't get major %d with name %s\n",
  2834				XENVBD_MAJOR, DEV_NAME);
  2835			return -ENODEV;
  2836		}
  2837	
  2838		if (xen_blkif_max_segments < BLKIF_MAX_SEGMENTS_PER_REQUEST)
  2839			xen_blkif_max_segments = BLKIF_MAX_SEGMENTS_PER_REQUEST;
  2840	
  2841		if (xen_blkif_max_ring_order > XENBUS_MAX_RING_GRANT_ORDER) {
  2842			pr_info("Invalid max_ring_order (%d), will use default max: %d.\n",
  2843				xen_blkif_max_ring_order, XENBUS_MAX_RING_GRANT_ORDER);
  2844			xen_blkif_max_ring_order = XENBUS_MAX_RING_GRANT_ORDER;
  2845		}
  2846	
  2847		if (xen_blkif_max_queues > nr_cpus) {
  2848			pr_info("Invalid max_queues (%d), will use default max: %d.\n",
  2849				xen_blkif_max_queues, nr_cpus);
  2850			xen_blkif_max_queues = nr_cpus;
  2851		}
  2852	
  2853		INIT_DELAYED_WORK(&blkfront_work, blkfront_delay_work);
  2854	
  2855		ret = xenbus_register_frontend(&blkfront_driver);
  2856		if (ret) {
  2857			unregister_blkdev(XENVBD_MAJOR, DEV_NAME);
  2858			return ret;
  2859		}
  2860	
  2861		return 0;
  2862	}
> 2863	module_init(xlblk_init);
  2864	
  2865	
  2866	static void __exit xlblk_exit(void)
  2867	{
  2868		cancel_delayed_work_sync(&blkfront_work);
  2869	
  2870		xenbus_unregister_driver(&blkfront_driver);
  2871		unregister_blkdev(XENVBD_MAJOR, DEV_NAME);
  2872		kfree(minors);
  2873	}
> 2874	module_exit(xlblk_exit);
  2875	
  2876	MODULE_DESCRIPTION("Xen virtual block device frontend");
  2877	MODULE_LICENSE("GPL");
  2878	MODULE_ALIAS_BLOCKDEV_MAJOR(XENVBD_MAJOR);
  2879	MODULE_ALIAS("xen:vbd");
> 2880	MODULE_ALIAS("xenblk");

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

--2oS5YaxWCcQjTEyO
Content-Type: application/gzip
Content-Disposition: attachment; filename=".config.gz"
Content-Transfer-Encoding: base64

TLnjy0K8wgc8CMYIgCcr6KeiKYlYeh0i4ri1auR7G2w6RHGaypUtw3LVMgXIp7NZs/W5DdBn
jUPjbT2pCSbT3JV12U+OGGe93K45VqzuFfzNciU8PGWYsbUHHNO1t+No1SXLX5BPUzqMFF6J
OLgjAn912LH1ZbUcK8CJnTYeqhkPzQ4HLd36jvTJenaowbUlGPPThu+s+A71Q2i/09RUORsq
z/eR5SpleRwcY4ShdkB0QdCsLelww8URF3RM1B4rmgilZIFMBWD43ppg+Ej9FYOszMpHXWLa
CKQcypmIRzA2iw2an+J5uLNQC7OZ2zUBsUe6X5kxtz66CWgeccHWAG02qNvK/2XsSprcNpb0
fX5Fn+Y2EdgBToQPRQAkIWJrFEiidUFo5PazY2TLIdsRb/79VBa2WjLBPsiW8vtQrL0ya8nU
GP4B/e0ownqfBEKkhSVAFwPrFFXa+qi60Kfam/dtXUtVo29t0yry0d5Q7S51Aka0IiHFOmMV
IyUT0gSTJuiiD55Jd7OToD+sH+9tclRrVmBs5FQHtMSH0POR6pZAgI1ZCSC5bdMk9iO09AAF
qF21MOo+nTZjC943nZ14nfZiHKEtDVAc4/q6wokT6n6ewjk42LP8ldGmVYx1Qnmod1Aqq62M
N2QzDxeDkurhKttRGOXtiXqOOa8fx2pMTycqEPPCqnl76yBeM+oodaV1fujhOqCAEiciAhyv
nJaHARWFeCHxMkqERrG/5lRe6ET7poFcceK96VUw/ARbYeaZHTV1BOY5H5hfBQk17fXJDxvZ
gAQBZnWAeR7pR+Fr7xlyseDsjXxhFQdOgK2bAgn9KD5g6d7S7IB7NlMZnoMO7SFrc6GQ7Hz8
uRSZRkraPqpZFTMAfumxFhNibLkQYv/fqDhFezHydsVUwqtcrK7ozJsLTdg697M5nuvszfaC
ET08BytMxdMgrnYQbGqfsKN/QPPM+57v91NhjEQRUuNiyXW9JEtcZKGTPiY9CoixbQBR6gRr
waJm2rVMVa67lFYQ3/P2J5k+jffm8v5SpSHao/uqdZ8sFZKy3wskZW9mEoTAQXsoIM8KV7Wh
u9fFIPpN2t5mg8P6XsBREu2bUvfe9Yj7Bxsl8fx9yiPx49gnwtornMTdsxuBcXAzu49IwKMA
VF+QyN74F4RSzNo9agpPYFQ/LVHkxRf8wY9Oyi9YXNeVI4+Z0IzIoyZr65B6CbeOQnghTJ8W
rLT+6rjoWiO1NKbcrp4FENe7L7juLX3B8irvznkNXl/mI0TYBmFvY8V/cpRjspkujWz6t5eX
1ob00RXSBe/YdwWhDC3ULJ9erJ2bu8h33o6PghNOTZEvTqzoxALD0Acw2AfgP2jy6Yzl+8NJ
armlUoKHPvI/TxLS8jTjYsaw2xeEpy5/VZBtPxxeICwAWn9Zflc/3q1jiMnLzIjkFgvu9iJl
W+6P2QV4bbpCzf0cZuTv92/w8uHH75hnIRlQaOqsacn0SXTCeJOOWc+xgm0DUVD9wBmQ31FT
AwpeQfMVhd20zIy16WU3MbzkS8HVI3KkwR+sTy9Zg3UuDg4uG86Lo+F2BvVxf0wrhtIBsCpT
Plj65Z8/vsITlcV7ldVo1Skz3tuCRDkNX39EyrkfE+vbAnu4GgDRKqa7qR5uI8jvWe8lsWM9
V1Qp0nE1vHzTYl5t0KVM1SNSAGSoA0dXi6Q8O4SxWz0wJw0yweXk2JLp5+Ugt29oblLC+55C
MHzwyUbhQVy6uIm+4sQz/hVPnuAHui0mfKcxYd8dfWy8oqGn19G8U48UdkbIEBALhS4OwOg2
9gr6Vma0mwZSpj1Rlu2Tur52cUIR6g8kVcDqHZciEurpErJlO9Ts4RE0L1JMLQVQJNSWmVlb
05z1emPddX1OjlZM2abkgwbASG8J62wNOf4AZUwv/eOjRJgl8YetW+HAD5nUuz7Co57xS9or
jzz8chDAn1j9eUyrhgp0DpyrMAiIp94AywtI6FbAhoZ6d1juLJlTiHmrYZYu79yMqQXkRFis
jZBg9yI3WFf3V3kS4JbaTEgODrYxuaJeaJVBv1ixCRND2EeGSS6leX3y3GOFzaL5Z+l/prXm
T8LJLWDbnWH917u8v+kS5WbNNvvNMvNoz4StgDfwC/a1bxVd7leosumyvpXQNXFwj5kSrcM+
cmmc5yntE0ASiiCOhr2FmFehuhuzitBi8+tbIvo2NT+bcYvZcQgdWw3QkxT2PJm35TWUItMc
qTNTRZheapjZhqtNCbYhMSdYVmZ3WR5cLPp3yyPX0S8ITU6IcTtx80+s5WOSJ/jG7kZATzhW
2HOtcQXyBL8asZTQeKOiiEP9iE/5HbrfSUJC+LFZCQe0chTYWKQXqb3uroi1VAtETNq+Gu1h
vmJn+k+W7BljN2qhEAwIxL7fZx+l68X+Pqes/NCnZ98+9cPksFOBr9Ww01GsB4H6bzfppWZn
ht0dkcqv+TRKEdqVvwBW3UvV0gt04aMKXcezZa618sm3Q9T6I8EE+SQgl+h1v8iSmdESFAQ/
1l8IpmI533G2qmh6BmVM+NJTeBa7iW2xLJhQlncm/zWBHRLvQR2jTTnr+b1eCWl28ANsGevk
UxAl1o7qjI2yRteP8zPsZmgewxeReY19A07FAC50m7LX7nJsBHAMeZv8gfJblaOpw9aO3NnZ
ZQnF7SymL7VZNBC0OvyIbqOBbZ1E+CDUWWCBY1W8kbLQVzUoBZnHXpk1Lp7bhSF6AbwF2P8d
wxDeEMW0trG1D9ttab2m1DHC/ZxG8ojIGgYJO8RR+g6rQz8MQzwrxKv8jTDZdVgJJ+QeGn7s
V7zg5cFHA1tpnMiLXYanIBaKCLW/FYrQamKi/SWGKWQqJYk9orNPisGTBpBqwrOePusS+zmZ
lj0iKwKMYnzN21iLdfUBWogaTRrHMsk0NIkC/I6TwUJvXumcg0f0TQmG+P6MwYqx7QWDo1pn
JoTOMPY1egWb90H0jUUdnyIEoVByIKaGKm1doec+LXQbBsQLXZWUJOHTVhIkQltVSa/xAb3W
qnCEXeuiE8X0Ho1C8GYxzQwFMSxhFZnM0ieFaU+3zzl+AVgh3ZPEUd9EGVBCQwccelSYeLN9
bUgoOajceqKwYbw8Cx3zSdm4sFSdiJhzBZh4wbMOARdHXNGqu7+jmIYo5hn3wnQ0dNA3jCYp
JpMPXZ8YZDvPRExSQCevGWkaZphdCma/MVJ0ONOLsMUw1XgdCdF+Z+rf6bwvonBnwe9KplIx
+WEKU1l0qfZllqdNJvTKTVh0Y52vgEh1k4vOTsgjRb7tq3bjp/uaEr71KjpsU79hHIXB6rcG
/WE4Wm1RpBJq8vWYodhQ4d8U0/sdrBxdWlU7mZQVeS9S1Q+qkLG+EC1ZNX1uJJfXhK9L0MqG
8JKhrlqnHNpZ7tjDSF+Unoo/Bh/1woQoyAaxI5qoaH27Nz16jA/VlGcd6329jYRtreeO913O
qs8MfygiCLO3mr1MFuema8vbea+Y5xurMetEYH0vPiyU5hettfgzNHI7OYwqiL45+R0ZtBLD
FThDNIUbQEQQDaHmVdFr7kQBLjpjRA/HZhizO7HNn4N3ZzCTGv35uTxoPf/48uevv339C/M7
yM7YRHE/M3DlvmVpFkiX/ef2xn9yoy0NAPmj6MHrXIPZI5nqj0H8A2JtFmN2LDApN6RZO7Lb
sLirVydficoHd6gvzQ3meXmC19pbCwB2rfjsVx1PVPxwxSHEc9uUzflNdO8TfgEFPjkdIZgE
etlBYYHr/1G0VSaGWVeBt1WrrGme6rIzuLus2JZVowgUBt/xCzzQX9HVcdP7H1+///z+4+X7
j5df37/9Kf4GfrCVU3dIYAoSEDt6PNEF4UXpRthlvIUADmB7YeEekgH7foXNEyLFPRKVzeku
RlcpQa209K+NGBAMTVb9Sv+oY1lO3FIBmFUZ5T0d4Lq53XNG48XBxSxIgO7n3Bged9GuZp3d
q8eZ2PCSrV0x/KkSgLesNJNjhNNWOQ7P7OwRgY4AF9Nhd+Pjq+jtJKdLWTdmj/GSoc5YV0p5
z6yivg7EtSOBHZv0Qo/COcKK0VAKoWV1vl4Xyn77689vX/7vpf3yx/s3o/NLopjyRJpCPxJj
Wr/QslGOTS5WbbBDvfiAz846ub+7jvu4iS5TYlb8Rp6rxpLzomqp3ORlkbHxmvlh7xKm1EY+
5cVQ1PBu0xWqhHdkhOGqffEGV9FOb07seEFWeBHznWelLiDe1VX87+ATl28QbnFIEhc/sVfY
dd2UEIDDiQ+fU/zq68b+lBVj2YucV7lDho/e6NeiPmcFb+FK4zVzDnFGeKRRmixnGWS/7K/i
By6Zm6BvnJW2ZBW/1RAi7zC5YsASFfDR8cPXp80DzHMQxs8aHjT8ukycILmUxG0phdzcGZSp
7v0wJLY0UfbBIfY4NnZTFlU+jGWawV/rm+iO+OmV8gn4YO3z9DI2PVxrODxr9YZn8Ed08t4L
k3gMfTSI6/aB+C/jEOh3vN8H1zk5flCrbx02Zsd4ewRPu+DpeguPijdjx96yQgz7ropi9/Cs
1hV2sjcXz+wmvco6+XRxwljk9vCBT4SWPXZHMRwy4lWM3U15lLlRRi0zJjf3L8xDZ7CNEvmf
nMHx8SrTeNXHM5knCXPEis2D0MtPxMMl/EPGnpQuL67NGPiP+8k9o2WTRmn5Kvpb5/JBfw1g
0bjjx/c4ezjYKQDCDvzeLXMy0UJGqB6EjRXHz4utspPD/RkdLHWWDoEXsCtuvNnkMArZFX//
vZH7VhhRmeMlwgwiTvwscuBXfc4+RG7P7tNpq+9u5ds0xR3i8fE6nJ9NK2KqanPRaYa2dcIw
9WJjcp71TUPHUJv22BXZOcf60Ipoagrc8f3xy5ev7y/HH7/9/K93Q2NJs5rP1pNuzMxLmBDV
ViQXjQkKxwibHNitIakYQpDXS9HC+6OsHWAH/ZyPxyR07v54euhFAQW/7Ws/iKy5E1TtseVJ
pD+FNsCAbjRhfIg/RYI7a5gYxcHxLMsDxJ5PL+OTbjU3AMnqL0UNfgbTyBd15jqEH01Jbfil
OLLpWkRMGkwGLTbzbeDYlR9JE+vQqTV8lMwAr6NQdA/i7sXydZu5HneIa73ScKgZeJkexF+G
yA8oo0alxdohq4ZmLWYbsuweh6ZiYgwoezTo6eR9ze4FPamxLm3PtP1SDfyE7y1N1pbr3Xzi
ojiEIAHSZUj8MMZeQC0M0HI99S6iCviBNsurUEA04sKpCjGZ+q+4hbeQurxlLbqfuTDEuhAm
EZYJWDH8kJ5Jhpw2pO/HZrgXwtam5yEZS5roWPkwBcqG/fic96iFJHTEvO7lRsz4eiu6q8EC
T/dTWMZljj39+PL7+8v//PPLLxCpxwyVfTqOaZWBM44tHSGrm744vaki5e/z/o7c7dG+SsWf
U1GWnZiMLSBt2jfxFbMAYWGe82NZ6J/wN46nBQCaFgBqWmvFQ66aLi/O9ZjXWcGwLazlFxs1
9scJ4pGehBacZ6PqR03IL3l6O+q/D1usJUS016TgO3LepuJGpsDyhrz2hf76yW62X5ewVciD
HKhFuWuBdjqBthVuXMGHb0LH9yh7URCoqMIAiZUKIolTeFHxngTvZ0ZYUADmHFdRBJafsD0X
6LGB6xr1ezljm+QCaIR6YwR3g/Z3s+XBgZrKFF+Qyk9X3EmsiIl1XmBlnghzBr8jBr2GdmwM
P0rv50HD9G8ucftsQimI48Y1IOzOCKUB0ILse1RsRKjXvBGDtSD71/Wtw+dRgfkZsV8IP9k0
WdPg6jPAvVCtyIL2QjvK6T7NiPAucpSRiaasqwriYAyqD65uE1NSxdPbadC66bTjqfSyo1i0
hz4IVUNeVr28ebfJZBxpuZdvR5OGLpeDzdTo0alAfhQVRnhul40PaiWRfS5GlBMbCfIqdnGL
Al2q5Fx3/PL1f7/99q9f/375z5cyzZZLjMipD2y4pCXjfD64RHK2TtQacauMDbdCvWyQdm1i
E9u36zZsvlOE1uTGkg7innBe06YaH2WOb09uPM6EdY1NghvFPLpXMpLBDRyHhGIUUt7XIBnC
bsJYJHm7zmFY6hI64GmXbRKGeD9VMgC6UYdPShtr113n2jk036pKPu6h58Rli+fymEUucU9V
qd8uHdIan+E31nzf9wnL6iXzYHsypNbjUdBnDSVmhuAAZPuXsHoa/V+j3PcUGlCNA1IJUGtJ
wdLy1num8Tnn3Dr0XdLmzU2d1LjxjzWSuiJq08oSjHmZ2cIiTw9hosuzik2BLO10Lo8sb3UR
z1+tuQbkHXtUQsfQhRDiXGgoEKb9BGerOvqJqUHeF8lY1O2tH82QhjXsEnM4wEW7ylJAWTsk
49JZuFoRbzWDJ4ZinWvULgIYHKyLJTDjP/meVh3Tmf7YlJmY9Aozy23XQCxX4gfv8IiMi3rv
irq/mt/KU24qqxUb+fl4O1mNc4PQWR3SZreqerPF0GZjfhfqAo7ZUrEk20AmVMjcrDPkJxlc
49BFaHpV37K7KeJq3JEpO/KSxs2NQsOtDPDbG+kTC1pGtFrFam9AHdbI3mS1JsvcJCGcDwLM
iwvx4FTCfVEMhPvVFZamFuGQE0i3xIpXZMDErscCUx62AH4QHngEduyTGF+W5DhnjusQbsMA
rgrqia0c18PbmdiQkF/zwEvohhRwRGh2Eu6HE/3TGetKtlNjZ+kniYRL9rb7+ZQ84b1tSZ6G
p+RpXCxKhDshAAmrBbA8vTSUf6AankZnBRHMeYOJd2wbIfv0NAW62ZYkaEZec9dwrYzgdL85
VcnO9HDJCJ1kAekxKpZGN95pNfnwPBnonC8E+ieuTXd2PdMAUXtOU9KtXw5REAU54bpQdp3B
CJOiwXXlhfRgb9PhQq/AXdH2BRGuROJV7tPFEuiB/mWJEi8apuWCuIo/LUMsoSxEBX8yP0t7
teH00LgPHnHHAtC36mRMlFOE4+y/2D8///Zd8y0l+yGbOguqYa5f/YfxiVDJWFk2cEfsc/5T
FGhLamusxDd+tATTM1ZzdQTgxlx0G2DF+eC92emlrGCvWHoSmDTC3VRdzyux76NTgUdMn/FL
cTIcRMnFLs3M/UTjO9hrjrAfbBvUpdqGXjK7+H1T56CP2MidCQ1nMLSeJrUEqwuhHWUeaItC
biN6vNlVWoEvn9Ys6QKln8USF3vuoRoOcJgi1GTCI4jxVdeHURB+jC5+3/83UamTy56pRqyP
q+LaNaBdNz09II9pFfnS5QofH5eC9+WO/ZDlvDjX8txA8K1xyr+nL3LIvfzy/cfL6cf7+19f
v3x7f0nb2xqtPP3+++/f/1Co3/+EF61/IZ/8txLnfS7SiZdCzezQ4gLGGa3rrN/fhMmLvfvT
EuJIb5BAmxWm1TFDufh1KmPCrjoV2PuLhVRUg8zZbdp0WeL67dWomgQ036WIPBceSXMsF0VF
GVMSnbzMTPeJS2EQlUYZBSKUWFSIjR2BsL6pYIIpPDWqsJ4vnGY6pPnAF3MW7FJP5bq+kUG2
TSZtX28s1n6EdT1+hHUu8e1onZXWH0krPX2IVZXj/rSz8Urs+r06Z87cCny02cNiASVG/Ix0
CniCI9GsfBOaW30ea1btqGdybuuvwiZL7xzfPV1ovDmtHdqervrqt68/vr9/e//694/vf8Bm
lBAJ9Ut8+fJFjjp1b3oZkh//ys7PFDLanDwpmlQz4NizkvGDPvKJnJ52mmzoT+2ZmXPE52Hs
M+yNwtpScOthVY9mPQwu4mBeP9fecYjJ6zorKWO38dYXJbYkC8yNNZfFGjKQSLSD6E4kLNRS
ABc0dhwP1QMAc91kvDz2l/KFhzuzW2nXwHUC9IcEggdD2ghBmBCfhiG10TMTItdHCi7kWlCG
VR76SYTKQ3N/VcrLNIz0AKILdMw88jBv5fQjT7GXO6tex/2w9NH2maD99CfOXgVNjJD+Ados
mziBVwaox3KVESL9dgZMVyo6/DxlpLEkECOtDkBElDXw8BgXKoEoRbxbiNg1fcKgtGGwRhDG
892djb6FE1Cm2ko4YCUJ/dK3tlslBN75CWd5C0faC/u9MTN8BhownNhM8yqWh5zH7m5PFgQ9
xsYqT3wX6SUg95AhPcnx2XTG0Ln03FcRNqfDW4mxu/qObxmWcsFnwsRyEvxKhEYSdhh2Wqpx
QsfcSl+QKCaAg0chPjaGpsQcBOBVcnAjcAE0ZsW56JlluUtnj2nlRsleBwVGnCA9dAbwppHg
YaB+UkBPFqiFhTYugJN/Hxyg8wQgx3V4AftO5Dwd9guPiBilsETNMjQbEqFmqRV/+gOh6/0b
TR8Asg4kSNSBGBi+t7f4d6VYRJFhBTsN2LAGuRG+8P8pe5bmxm0m79+v0DGp2mwk6r1bOUAk
JTHiawhSkufCcmzGUcW2/Mma2sz++kU3SAoAG9TsYcZ2d7PxagANoB+NzrzJQ9P9yCQJNhHz
uPk0qmDoRrbYzBe/kJ+jfTMT/4vzne75eqPJ1rVqfE+7tBzWOY+csRpTVEXMhoS+UyNso9Og
+2eOoJpMddvpFpWzsdN3JwEEU2I14WA5zQjlOWfcmU5JjQhRs5574ppm3rvTCwoIqUiWPJ2P
iDUAEQ7RCIEQWiaxIudit5yMiCUuX7PlYk4hwv3YGbLAdYg1WUHS8qkSkOtbSzAeHakWtmjn
SOrwGsEdcdFp71SHXNBrtOceR5PeoeRj5jhznyiAS53LgpmSjSw8NhpbnBgaGgyANyYzYjQU
0WI6ItUswNw5MCDJnRoIkkVfrwiC+YhYUQFOKUQAp1dUxJCZyBQCSicDePdlv8Xc7YP+GYwE
pLIFmEXfmUIQLOhTqsTcEe2aiJRpCCkzJE+KiLk7qEs6HaNKQK6KgKGz96gE9LgvF+Rp6Ste
4Sxnac+TV6Mrzi2xtFqafDae2t9rW5I7CnI+o4OmNQQxKxbTCbFGx9L0gmomouh0XBoFvSKm
DDKgMtqYVL9k0thKTQFsktr7I535jcD2eII6xCZj6RbJTA62OLCIAzvc2gK3+14ZeMrNYf3V
1sjHHXhtWnAIORNvcvpOVhBm7EA0odiqnhvA72bpKu83P6qn0+MrVoe4p4Mv2ATccgnmiHSz
4qiXgKByvTZb0rF3VXFczT+IkAJeYDu94Ye7gDaDATREb8los3eJDsRfPfgk44yMkyOxxYZl
Zo0i5rIwtPNMs8QLdv4DZd+GXDHujd5298F4gASgGOFNEmcyw04Nv8FkfyvkfsS7sNDXEnog
7KuonCki0SrIOpK4WWf0oz4iwyQLksLWSlEGOpfrBe0efB1wYGGepGbB+8A/oDO7hfnmIUM/
KZ1X4DLPN1kFuU0Cf2crPbUIAPNDEG9JNyLZqJgHYk7qeZcBE7q27OCI9Y05Gfpxsk86TBJx
8LfPPPSpiESXdxoZiV7MyPg9EvuwDhk3hgLjXm26LYkCN0t4sqZMCxAPy1uGEqR/V4R5gENu
FZk4p07IgEmy3N+ZHFMWQ94fIWf0cw7S+DkLH2L6dg0JxAIANtBWfMhi9Ch3bZKcZhAoRe88
sWgQFa5d8i18MMF7GMQ7g1Xus6gD8kMw4/SNFUFwT0Nz3cxUU22cHRDOgXF1jWlBnRWCRyzL
f08edL4qtPNJHuwTA5Kk3DdlHPyNN1Fngm2zgucR47anKyAqYHMrU045E+CqEQQYoU4r7xjE
kVGvr36W6A1rIJ1GfX3wxDZmLikyCV25LVYk3BVNSaL6r87uF5qp35q3QmIHxi0YAsPpWkLL
UJrlUDqLFHKv2d4bHquzIEsv5+v56UxkFQN+u5W24gMIVxey0nf4mmS3x85/yeBdpPYDD4iN
BqRE0OoywERhAd/auke+GQsCs5OMXGMdFq1ZmFqk0iHJ1g1KcOQUSqP0ML0JAuBrS3cdKLbc
SF9Y0TrK98AJjVoe0FwrTINSJmXWPhO/xraQ2Wh1lrmi1YyXW9fTqqHXSRpuq9/FcVLErl/G
/kGJBCmTnJ0+n6rX18f36vztE8e0tsfRBahJLAi+IwE3OqHjN6C1KslpQ9saB0ZHYkQDTu1C
Dc0qRE8Yntfz0+xtjt298SEM/cqMMaj2RJEnQhtNxfjKXJC/Of/S5kSsTa7z53Xgnt+vl/Pr
KzizdXVoHLbZ/DgcwqBY23kE2TIIFLRfo/VuRWiWJNjoMs/NdiM+z2FUuVCOe5kbxvwtfM0p
myS1Tqrtjjoqx8IZDbdpt9oBT0ej2bGLWIuRBPukDgITUzujGqELCNFxGkHR37M8XIxG3RJb
sKhtoqOyBZvNIMgKURv4AJz60Qi2c+oDkZH+jgP39fHzkzpzoRC6tKqNcxc8YUjlErAHzxiF
PGoPe7HYJf9rgI3LkwwcfZ+rD7HkfQ7A2s7lweCPb9fBKtzBClByb/D2+L2xyXt8/TwP/qgG
71X1XD3/tyi20jhtq9cPNEB7O1+qwen9z7PZpoaS6pPg7fHl9P6iBENUx95zF6rPK8JALzWU
LgiImtoTv+A89GJSjUCWOHyebjh4QySWaIMtxYZ5G9+2qiCFBwkysiRsw+Kkr49X0Wdvg83r
t2oQPn6vLk1/RygqERP9+VypXYmcIGZrEodUgAks6OCOjfVXQHBTIcDQNLPNiOi2qEvTtqkz
rnrj5MI44Oam3zLqbEmybkyN09CCk/UthKCOc4iWOJ3RkzFlH59fquuv3rfH11/E8l1hVw8u
1b+/nS6V3OgkSaMLDK44Bar3xz9eq2dTvrEgsfkFqTimMNuaiVSqIBA8LH5At89Nr7cuSZ6B
m2AUcO7DHRrpXocTYhsIdVENsKFCZUJjCtEZrRZTqGm/NAwlZ7ATzPXbxnZRwC4nddWC87me
GgIXGHR1I1npKoxl0fWjwPLiVWMdKtglLtdekRfHbn323LcpeKG/SXK4JNA7KzS3ofouSPyc
u7OxicNMuJ0+9QjFXd1kc/BnDBl9j4btgau6OugWSYQEZbQW2oE4u0Fc4w11sYK9EAhtarXf
MLOaoW1DhoDSrlBBV5mepgjblhxYlgUmGCMjG5oR93O5E6+DY15kxlIRcHCaVgN+AfRB0B0N
Rl+xz46ODgZ9S/x0pqOjoVtvudBzxS/j6XBMYyYz1bSk9pXYgSubnzVNaUU3/ev75+lJHBBx
e6DnQ7pVrvDiJJUKpesHe70YjPW9N44VOdvuE0BbxQEm6dh0BFOOeJYqaiXjVmLUBmFmzikF
U/v+2r+CwEE+78PTSOgCuDw96Lp9ja0VlTIuInEEW6/BaddRBqS6nD7+qi6ivTfFXx+PRo2V
a6HWl5sMoBbJbxRL47B2ZM68s7pE+x5GgBx3dFMep/ANquu2D6F8Q9RXnlu3RN/Xyb0ciImT
BIu86XQ8s9c49nPHmXd27xoMPjtW8USaBW3Kh12e7OggaTi9N87Q1hm1PEiz6U7/44lk2DMI
6NvdnhHU+UJKkL4erMANKeFBboi/UHx4GRpLTtF6m2mUEQQ9qaXZxBXMHREwY+jlr2vqGgLh
xJ5L0/UdaFqiZOXTV7caVfwjrPwfJCp5seI9Om5Lm8WeJfqRztKnTOQ1EvuotCRrMcIlt/f6
mo6XYNDUQ2xjAWh7nLoucWduKmiITnC/c9bd+ziajDjitstvrZJ/XKqn89vH+bN6Hjyd3/88
vXy7PDa3UhpfuOS1azrWZKO4OFjeW3Fx6BVEuXRYki3gbCtiFx6tekhUQempxp0piNEr7pzk
NlSPGwdn6RiJK1IPHzHPy8jepI18GOrB90nIpvRWG9rlHHdPdiAbqqy894VH0YseUtJwEIsS
Z7c6X4g5KwDF6ytouNMjOESRphakhwyChvgRmXu7xprHXUFcrsJEjQ7Tgppb1sWtDA5mDIXN
Yx2+BL2zM+EE4lfu/Qpf/8hVJ/CxH04By72tLZE81CFYRyWnzgX4rW5YCSB3NbeE+gDsHhLK
eHS3Ir5YjYdDk2fBt7YPClH9YCZGeKj3uvtl63bqtuVf7A2tY+HSToxAEenBbiI/4nng7ghq
uLGH++1blfC2G0OeUbDSeP5FzCqD41AMJ83tARLVxBu/fUWCyFLEmRk/bMKAERVDPGP5yFlq
nSzh8XjoTJeUFb7Ep0X3Gz6e0elaZRvATVm1urtBpybUzYbD0WQ0mnQK8cPR1BmOaeNmpMDs
0d0WIZiycGqw0kuq89FsSVr1tuihaieL0Dadoc4rddmypwZ6GDPJHjKyTwjg1OkAp1NMNlm/
ZZk4Z9RtGoCpy9YWO+uWstAS3jdAzUmilmRfHFojFoQGAjthavZYDaX6AFCzsflBk4E6Z3lh
ziIz1R0Cu/EAa7A7ciZ8uKDMWGX5h6jzVZs8zyrqnjjwdEtrPLQnTo/05uOpmlFUSpSZLFQ+
mLkM8ht2islDd7ocWQKBSH514lJbJTrpUNsZNP2nU9wu9xwxSeylBXw8Wofj0dI6j2oKaaVs
rGn4avHH6+n9759GP6OWkG1Wgzqa3rd3SN5EvNEPfrpZNfx8OwLL0YHrnO6o8gfuWqK5yk4J
j24a0upPQ5D59N6KePCStmPjwJ0vVj2DxuHJ+cGi3smBD8QYFfUqYOtrJelm29P55fTyol1b
qS+yvCth9VOtPVSdRpaIjWub0LqNRhjlPd3bEG19oSetfPYD/Nroi/dJXUvmLY2IiTPBPrBE
7tUoTRsAmqp5k9dHC4fl9HGFZ4zPwVWOzU3a4+r65+n1CqnKUDce/ARDeH28CNX5544C0A4V
pAEMbDF19a5gYlTp+xyNLmWG0R9NFvu559NR8g12YEprl9tmDMzAPcx1hV4UrCCvEz0ygfg/
FvpcTKmuvtgDMB5FIFRYN1ONhxB1Mxpp+QGc4JTlLtwM3b4HgFi6J7PFaNHFNBpgyxaAW1fo
ng/0CQ3wApcnpAIMWOOuFkDxPvLbGPQCMDg1WQ00ZRFIxfa0hgLIy4uWAOJBmrVGhDHIarWy
vXZvDmZDUJXOZXlDrASvNTBstZp+9fnYrIHE+clXKhnWjeC4oJh63AyPrGNKV8ybwmK5rJLO
adcGhWRmJnExSLYP0WI6I/Ma1xRib54t1fd2BVGnSqcQywXVvnqj7y9OqAp6joYGl/GpO77T
oICHI2dIuT3qFHqeFgNH5qKvSY6CYEp9m7pr08mIohiqz3YaZmzFzEj5Q9Sid+wmo3xBDR3C
y4OXd3GrL2NnRxVX54rua5+ZI735spuBuhnRTrbxGsHF4WmpxoJuEOsI/PQJTmKqjWj4dDEi
pUl8YYnK3ZD4kTiV0kGbWy57QdInbhmkYCfHj3tiri86OzG4CveuVjB6S6LPED6xLitkbnSV
gJRqwEwsGexVkr45DQRLegWZLVVP47bPlloMl9uITaxjOTMyYlFLx2RhqYPqv6vMLmfkUN3s
pvOlsewRwW5gGB/fn+9vPh4Xp3dyNZKYcnug1Wu9puR+gtK5dJ2OkLWvdr1Vc6OEmM5iOB16
gRYYI8kRQTAlJwPsVYtpuWZRQNoxKXTziaW3nMmQ8sZrCZozOQGnll6e70bznFFCM1nkC0Ju
AT6mdkQBny6pSkc8mjlklJfbejxZUPKZpVOXmiQw4kOqKHmH0TuT5VVGv6hhHP9usV8f4i9R
2gj/+f0XOODowtUpDlxyYjJVQ7va5+I3cl1PXZaSHepiMpe+TTGfjdW7hrZD52Psz9Yfj1fv
n+eLrRFexGrD6M7kEqhVse6aRfOH2EUbhVvh/IBQ7dWg/pwaKIkqo2Tv17ma+siaJOGW7MKS
SBxvLa4IRjOUE1Bx7LMGSiFjFfVSop+jCkxDT7cTcCkMysaPg+wLzUy0wI9qCpMx8+mTIuC4
n7mJJekNFuwGjV+klUYcMikZw8+zwnirFcBoPXOotUlUvVw9pHAPH7GYbfT4FRAbtwkUT3ws
c0PfhKnOFR35cWFykc2y8ijTSM3YXQNXEPNW91WoMbbwsk0NIqpaEQy4TDqm+BTcWHspdZe/
BzvB0kuVa2YJ6jQToTFpcytx4LvGa0+N+qn7t9ap4ely/jz/eR1sv39Ul1/2g5dv1eeVcifZ
PqR+tidnzD0uyOZYvTdXjR2fF/ARvvV6FwgPP0n2UG6TPA3Vo7edpgyDKMh/m44clRZvHsRM
3fi8uRXSmYHJt7/P3W2nIu5OSyEkgLpBCFDB2zDLJY4YDqzBA6+7MuCq/R7gxD8wyermKwLk
Js6NTBc3aNldO1WajMWYJqPEYM4GW4mMmInkhyDJwxUQ6V+ICQW8mh550yuU7sETmJMu3yRh
zcdS91SsAmLy6DXAaJpwQeJz6MM3FbeFLDHpPooK/Rt/HRhMijwpjyFTTYsa5jqEP0Rch2Ah
+zSSE7GeBYSA31q8yfyHFekYzHO2CeKNOrBCjHyLJUmWh4vR0qEvUQUyDGgDxmwxH1m/4lPj
6kBeHwXJ4PNaeyS0igCi2NNT9Vpdzm/V1VAPmNghRzNnSG8zNdbME1d3oMFVlvT++Hp+AZPz
59PL6fr4CrexoirdcucLS4o9gerYxTUl9nFXy2/Qf5x+eT5dqidQDaw1yedjsyp6efe4SXaP
H49Pguz9qfqh5o+mtAmAQM0ndHXuF1GnLYY6ih8Szb+/X/+qPk9GBZYLSzh+RNGpjaycpaNO
df2f8+Vv7LXv/1td/mMQvH1Uz1hd19IN06UZwLIu6geZ1eJ9FeIuvqwuL98HKI4wCQJXL8uf
L6Z0u+wM5P1w9Xl+hXXiB8bVEadO07ijLuUem9bNk5jItyJkirpp1/mAf1SPf3/7AJaf4Bfy
+VFVT39pAX5pihvvem0rMbpDpwD2/nw5nzQPEsa3EWk7qAU2gGywcEMv1GBQ49VFuOHZrcMq
YRaf+TD3y40XzZ0JfUzciN0s3bBVYnlbK+JA1IanjH6jk++EpRvuxHYTQ66v3eGrpSqQk3FN
l7Lj86HFyicNJrrU1449n39XV8WZrNMlG8Z3fi62ahb5h8RM9tikH9PZKJt4EJbsGEDe2rUl
dWXghx4amVsepnapa03H+iUknSmOi9ktljmhTqNucIjo3ZO5frb16LMX4MpDkPmh0CpoCnT3
2EQFLSYQvaUMWZontHEe4nsL8FxvxSgdwfPDUMzSVaDeSyFQlqd2AIKzlSUhtmSULBa2JLjF
70HOi752NCQ5W4UWa8pNCmmzXBQui4PNNsUXRdpiUyB7OwqSgAplh9KmMOwAh+wlqe7fsQ3i
XcowSQQtHHXAenii5KlTWvL2SCqMGbO3PfPWnvdxLmTbKffWR+o65rsfhwkdEVsS7Fe5JedV
kUFSknJcroo8txhV3IgwGkuZpJm/Ce4QC+W6l2nqytwhaKFmCdklA3/0SVJD8sWyrjX2gau8
zNa7IKSFpaHadi501CXBjVL6WiNlMcO4Or01xd1mPrNLD4T7yFnWxwRuy9HwV4yroI3zgJGB
eKLwSOaBwEAy4gTrwznVssLVomfpCYnNLD68td0UhDkRkNh3u5awMmKE2Per5wHHwP6DXGz5
72eh1n6/vXnbw1FgwBe4oRPcEYTyRm48/9+yzKIKTHotdjf/C1jV51lCy4+kTsGA1eY0fSMJ
LFJUUwhVIDdpGjmLpP2DOqLR2sPbmdJybeduM3EUbIWBHtRIrOksTm4yQxQudA44s4dJsiuU
YLJ4jgXFRKwJQn1RTsI3paW5KqrT0biv56e/Zepi0KfVEVYUnZ5Ld0BvuUen8lBYNC/iP0C3
nCzoB02FjAdTI4q5jcoSO1CnGtEmCDrR5EeILInpFCLXc/25JWmiQWbLt6yScVC5SpdeoxRC
+ax9l+pA75MKyd69W6mVOL8uLKaUCtk6OIrVA25U6fMsLaKttB94GsS1t4AUXKTk528XsY50
ngNFif5eTOeFM1VefgR0FXot9DYNIRYDeHkLdTyfTVZ0DakCFR4sCFcJdcMeiE4oFCMpqeHD
AfP0NEDkIH18qdCYTXFJvGnxd0iVlQxLQhMni1cOpNqSfMztIaveztfq43J+Ih5XfQhP1Zo1
tWfYzheS08fb5wvBJI24dlWGALDUp4yEJbJ9LrgVqjFXdnlIqHww8sDJs3DiDn7i3z+v1dsg
ERL21+njZzjuPp3+FJ3qGbdjb2KHEmBISaW+oDVHVAItv/uUe53lsy4W0avL+fH56fxm+47E
y/uVY/rrLVHWl/Ml+GJjco9U2lL+Z3S0MejgEPnl2+OrqJq17iS+VfUhJmprd3A8vZ7e/zEY
3U6qkPFn7+r3tcQX7c3GD433bWuHkygoGk1t6j8Hm7MgfD+rlalR5SbZN5FXk9jzI6Ze+KtE
qdCRILFRrKcf1EhAs+diMyfPjjc6MI3mKVMzcmtsGOfB3jcbQbg43VrcPQbVJP4RtMiGl//P
9UkszXXUH88cIUlcMs9tcn23RTWoLPhqJLE1CI6ps1gQX645EwoC9bxfE9TuEOZ37SluPLGk
Eq0JhQYymkzntMHUjWY8ntL74I2k4y1A0FgMCWuC1hbN/DLN4+mINHKoCbJ8sZyPFdOzGs6j
6VQ1xKjBTSCFG0K+vym3dCoygHdVjCJAwUpXi1WmIMANK4l5QediB8IdXD+VWoJzANd2zEJd
oIqVv6pxEZRvOqRYPIeJ2JI4em15Ey7OUkmBvzGnn1CazbV+QNHsvRogZXXLvGM4nkx1cgRZ
Qos3WMOREMFzx5paqMHTTFcRG6kml+JvR7fHEZDJ/3H2LMuN40je5yscddqN6I4WSYmWDn2g
SEpimS8TlCz7wlDZ6rJibcvhR8zUfP0iAYJCAgm5Zy7lUmYSbyQSiXw4lE7zIuZL09YHKRYV
+VNUVBIFpLlVwq+uyQiZaEkQNW4Cg8N0a1EsRVu6gNbSihltFQ3oQInyr7Ys0dIuiJ/mmEsg
PaJX2/j7lTfSc40VceAHyOkyuhxPJhYAp2xQQBRDHoBhaHh9RtMxmR+LY2aTiSfux6gIgJoA
LBRvYz7plNcXx4T+BAeDjyOHwyFrr/h9Egej56B5ZL55/fcvicMql6lXQFPaRnh/XI5mXkOz
cHh/c2RRAJTDbQteJkPK6BoQM4MDcIi7lBllissRY5y1gEPCUdhlUscXNVGeOzSwiJLO4gMP
i6FZPL+xd/T9GZDkKQyImbaMxO8A/Z5OL9HvGfb7BMiYTgoAKNIhDsSF0RYkDq1kIUL0sNP9
M/b4qvQATDKnGfCuZY1KSvLSx2Wn5SbNqxpMgNo0bs3QuNNxQK+s1faS5HV5G/tjPX+cAODs
CgI0oxaYxGiDCjLMyDcAnqc7QEgIkrAAFJBuFKCVCfWcIEVcB77uFQKAMXZHANCM7G0ZrS8N
V08p6ciBp1hGIqTJokqkDynmHbyz9G5tM/hsNPVohZxCk/7FCjlmI1/ruQR7vhegsevBoynz
SBt19dmUIevdHhx6LPRDqzxelkevI4m+nDmsBCR6Gowpa70eGU7tDjDp1+v4qOAy79bcTxzR
5vF4QuYU7L0t+PrRNw+HhgA1dtlmEXojvM02WQ3hsiDXhVFtfwncGsvlP7cFWbwdXz4u0pcH
7dgAiaBJ+QHWBzPEZWpf9AqC1yd+kbTe+qdBSF80VkU8NjV7gzZhKEsW9rh/FlHIpBmvfrS1
Od8v9eoUmnsQv9IQy27wGwsRPQzJEHHMpvoWz6LrXkLQ7onscuQwBoJmZI14y1/WrhyUNSNd
bzZ30xlK/211W5ozHx6UOTMYQUj94D/0RCNK3pOSPnbrN9AnAf4UIZssX18UBRueJOV4SnUS
q9V3ZpuEbMnq4SvZKOOiciJQUZ6V3sIqGH3WGo2hcWiSDVw/wb1dkNwnfMvs5EKnJarJKNSi
K/DfgR7IAH5P8e+x7+HfY0PM4BD6KjSZzHzwR9bD5fVQo4TJLKDNNQDnOBw4KvTHjVMemoS6
d4T8bV+zJuEsdF7NJpcT4yrHIXSmIkCFLlmLo5x9uLwcObvOJTGXHBWMqIOec60pvqzGYKJO
eu4mddUCSpOT2HiMsx1xycQzEi8heSYkj94i9APsTMQFkIlHKUoAMcWBOrjcMb70yVsKx8yw
kMKPL96F0dQ3w1kYFJPJJT2UEn1pXGFNdOhR/ZQnnhxCzYruzD4cTDkfPp+ff/WaTl3HauEE
cgEBgPcv978Go7x/Q7iHJGF/1HmuNN3yOUO8Lew+jm9/JIf3j7fDj08waNQ5wGzSC+zoGcTx
nXTUety973/POdn+4SI/Hl8v/ofX+78Xfw3tetfapde1GAc4EZwAmXPRN+Q/rUZ998XwIPb4
89fb8f3++LrnVZtHstDrjDD7A5AXGF2QQJeBq9AOkdnKomTbsPHE0MQsPccOW2wj5nOpn+RO
2nm4vG2qLtAU1kW9DkZ6OJgeQB408mvQm9AoMJQ4g4YIHSa6XfLbxYjaFfbwS9Fgv3v6eNSk
JAV9+7hodh/7i+L4cvjAs7VIx2M9mq0EaGcbaHlH5qUJID6SGqhKNKTeLtmqz+fDw+HjF7GA
Cj/w0IGRrFryBrWCu8bITNmjUrlAeLwWpy9qme+TBbVr/XhmGRfwJvi3jybCan1vScIZFEST
ed7v3j/f9s97LiJ/8tEgzMldesMe61jLPZbUOcyLzAuRyAu/zXTGPdSlEV1sKzblvXemWx4I
aIHhqtiGuvhcbmDXhGLX4BcJhCL3pk5hSBz91slZESZsS7LBM5Ohb0AYURxgQ4eeFO0y4M3h
5+OHtmJP8wLmWFHueFtOvvNl6Tobo2QNagrHfOeBy0CWoyC9K8Uf64TNAn3HCsgMLY6Vdzkx
fussOy4C35t6GBD46Hegu1THEM9sgn+HEySSLGs/qkekYkCieHdGIz1jsxLsWe7PRlhNg3Gk
275AeT5iJbo6PKdU3BpB3VQab/nOIs/3dO/duhlNfKStanCMsw2fu3HMEOfkzNUITChhtKqv
rCLw5ifaWdUtn2Kttpo3T0S3Q4zM83QPX/g91hlbexUEuksu31PrTcb8CQEykiYPYGNrtjEL
xg5TIoG7pBaAms6Wz9lE9+AWgKkBuLz0EWA8CYx4uBNv6lP+V5u4zM0ZkDCHrnKTFkJPcwbp
sHba5KFHMuo7Pne+eg3ruRXmLNKPcffzZf8h9f0kz7lypKwVCG0Go6vRbKYrNfq3qSJaliTQ
VJOcEPi9JVpypqatHm3vAHXaVkXapk2H45UXRRxMfDIldc/WRVW0sKSadw5NyFKDCXURT6bj
gGIkPcqVu9igMlPB9+im4NvJfXgaZNYhrHxPqZmXa+Lz6ePw+rT/l6FnE2oZ04ZflaZ/08sp
90+HF2tl2bOYlXGeleQsalTylbhrqlYk63ScxkSVojEqgNzF7+DQ8/LAL3sve6w7WjUiWhz9
li0Mbpt13dLoFmK45VVVI12XvmogapVCkm2nW9gLBC9c8BXBQXYvPz+f+P9fj+8H4fymb9lh
l39Nju5Zr8cPLrYciBf1ia/zwIR5RmQaUAmMHd5rAjelxRGJI5UMcT2Wh7AG8AL8HmJyYkEz
IuO5tHVu3iIc3SaHhE8Flqvzop55lueNo2T5tbyWv+3fQUAkbiLzehSOCmScNy9qV0aCJF/x
04B+Xk9qLgBSo4DkDZwauMYzmsU1jCRVSFHnnqe/mYvfxpO5hGEOXueBh+9aBZuE5GULEMGl
xXGNRutQ8p4sMagV7QTdQFe1PwrRneWujrjsSft7WrN3EtRfwDmQOjhZMDNPe/0YRt/1S+T4
r8Mz3Pdg6z4c3qV/KVG2kDknI/J1M0vAlyJr026D1Xpzz3fs1Doj08w2C3CA1S2YWLPQL+1s
yxsxwmgkPm/ySZCPtvZJNQzr2R7/Fy6fM1qVA76gWMnxRbHy0Ng/v4KeDW9bpFSdkeHUOHvL
ik4kV6riao0Cr2tbsU0L5IBW5NvZKPQoaUui9KtRW/BbTmj8RjGdWn7okGtEIPzEYOSBN53Q
q58aB+320NKO85si7WjHfRmy+PRDno56cwDoDgQP2KgtwJssh8QChiW9RrVgebdoC7NoEbea
fluTaMacWdROBITTCKISkaEdmm7Atze0BUmP63Kcb1MKVc31xf3j4ZXIfttcgw09Un7yzmdk
LPooSZuoU2FwlPRklj0UXUMyOCPtk3wqbvkw+Q4lk0wswb+u4pbMZ8eZdNoqD6McC00SN2/i
grXz/oWYrEUSSpPT5Y2zljbr4ycrLUu9ur1gnz/ehanyaRj7OD44JZYG7IqMC/6JRJ8GIy66
q6qMREovIKMnln/eh7Dq2qppXF6QOl3ydwpjGRdO6YC4QAa7ICu20+LamaZL9mwLTpWqf9SW
4lT1Nur8aVmIbGRoY+lIGAdHATFE4gICjQdA7VFdr6oy7YqkCEN8dwZ8Fad5Bc+pTZLSGjCg
uo6ros+V5qheo8hi3ATlVQmNN6tvOdCOJ6ARyBVoJuE4HXVouQ21gvm5EZisdzWMaofTXZKn
nOa74WSoycJzi2vU+zeI3ScO12epOEeOhqqNZ8iG3YRtpCEbnVXdKTyBYkRl0lQZOnB6UDfP
Ss6LbO9AMyrBIN7My02SFYilq4yztREB4XREJUBDouYtPYzVwipOFSbq767SW92lPEIJvkRk
oYiynlPhjvWf9vnXgHshq7sU3H4Ka4BXNxcfb7t7IUGa5wDDxx3/CSrGFoJzscyhMBhoIMYS
5ZIAFCIdma6aK8Dtp+Hsh0OYkYxUw5Lh0G2yBaQc1eQkuQ3alb012pXzcB4IvvBV5xRLnJvJ
RDNHzZxDnC+3bqkXiwGtImCfHnnsuVQfQewMTcGWt3Bo17BZDINiCyWEE739UFRXLJuBlLkf
ZwzSeFMTHRqoeuswdNcakFmcjq1nlQFbRPFqW1mG8zrZvMmSZYqFGtGqRZOmd2mPJ/vRN6wG
1ZEUwilHCFGL9Os/tZ9vfhIugMkityHdokhpKPTU6r3C2c2nqIZm2IVEC3o5DgR0EL8FwzPC
MpEGCNhWaeQd10iKiLV96gDz6x61WtPihUYSCUd2JxUX0WgeLpDz1BkypU2pZtdFV9XoeJUx
ZzoRRM4RVyzTX2TgF8i+hs8Oy7Nirqd2AYAULuK2yU3e0cR2XICBgK9OZ/K0wsobrnRN2CFM
2p4cIJaQEDF0v7mY77O0u6nAtFPkIkB39AhUBi1nwAwM2xm9Sxj4rWI5Jd22Pp1HkGMClDq1
B3SQ/m3L25DbKJbG60a+pJ8w4w4fiwK0Zryt/PYM9dOVj911jc/UZSQn+D5PkBQIv+1L6WmA
irkYZ+3pLs34aEJCQkYAOamefG2Ag58vJIRAW0wrqttGbUvN0Xejpu/GIJw6og2Bo5zTKaV/
A5p3yCJGRmiUtesxDTnkel211OPxlp4gADct/l2VIv6nkflCw0AAhqzBKGMqARQxPoIQVYff
RbWb3YL5xiKrYgmjjB/axuqmgp3645AzezIx74InLM0JsImbdclvJSWn69xxeSW1W10i8bL/
X1SXLiBpshElWImqWT4Mllr1vrHmBADWCUUmV64NJhepQlILFRPJASXnS+CFATISLWXZIrGN
vEiZp2tfNwQlBUVyRkZThzGPtIPCxXMg7gBmhhLS51qsan2kMn65A7CMa6nOAH5ZAk+MWwce
8saWcXNbt1ha0cFcblqiZc6xMNUkC1gwGStau9+YgEwChGO0VmU00J0q6mH92QPOoEUmBpWa
MsEx9M8FAKL0ihznZOAbddZDDtKe/iZqSjRGEmzwhetF0XYb9JQkQdTBIkqI29wok0P66LWa
PL5uqwUbo00gYXhfiKOMncKwxhyg85Y+EDK5tis+e3l0iwo8wfhWTrIGQgUlOm+kCKL8JuIC
0aLK8+qGJIUrOrreargtXwiib2eb2BUpH6mqHsInx7v7Rxxab8HE8UnKOj21JE9+51fiP5JN
IsQdS9rhct0sDEf4KKzyLEX3oTtORudAThbqPFCV0xXK58KK/cFPlD/SLfxbtnSTFgY/LBj/
DkE2Jgn8Vsm3Yi6RQ+DlP8fBJYXPKojiwXgHvx3ej9PpZPa7940iXLcLw81fVksx+1atVU3P
f/aUEejmhpZVzw2T1FC97z8fjhd/UcMnJCLjfQBAV2bUKR0Jyt42t76BceQyNT9RyeSIgiZe
ZXnS6MGb5acZF5+beGUldZQf1WuhpZayf4+5SptSn1Wl51FXlqLG3RKALwQJSeMSAlfrJeeU
c72WHiS6rq3BVIbQSlEYZ9G/FTgHZksIsxYbX8k/Bhvjd7JN1Bi7hpjQoeqMyawGMjYcFr4a
yBwrKiB7HyVncAs3LhXnoAu7cn/IUXW+dqLnZ9o6P9McN+r7whY/T5fXeeb+Mm6iwoFi1+uI
rRzIzdZdZpGVfC06kFVxZtxqN+663I7PYkM3tiEqVfsQcp1ri1X+Bu4HscoHaQ7xBEmS31UD
mlbwK7rx36VbxX+Lcjr2/xbdHWsTkhCTaX08Pwh2DH+jhIHg28P+r6fdx/6bRaiUvhhuxpnq
wVK96245X7u65MNZw8a5Cc7sq6ZyrQ4uQUKkXoPxKKTB0uD3xjd+IzsZCTG5tI4c6/2REIeD
f1NVbVc6ugRfgpwo009wQZzsXE8EB06aAxFue5IxiDnLpZuayhfJSSgz1mUjoh3wW0KlOazA
9cP8Cb1FFZoeoWxdNnVs/u6WTJN+OYDf9gDWXTXzCVoOklx1IyvFtTCF+017WzveA9VHTpEl
TusVvVjiDF/y4bc49xl1MxBYSAJxc2rZkC0El3GTRhD5D87YFd0moFrXkOHBjXed/gJp6W5O
UNr854SHN55avGydIfyifVUSuQ9o58ad1Y5dm+srOdfYki3qAlrJyt04QLYwCHcZUJaHmEQ3
bUaYqe7QYGB8Z5VTRwQrg+jLdqFM5gbGc2J8JyZwtzik7JAMEucghaETM3NgZoHrm5lzyGeB
q2uzsaue6eUYY/hVEFZSN3V84PnYC9BEkmE3ctC4xVlmfqgqc32k8NZSUgjK3kzHOzo3ocGh
qxrXSlT4mbNjtF0VInGtrIHAaO1VlU27xqxRQOnnL0AXUQzSYkTp7hQ+TvMWW7GcMGWbrhtK
rTGQNFXUZlFJfn7bZHmeUcZXimQZpblugTLAmzS9ssEZbyuKdDggynXW2mDRddk6A9Oum6uM
rTDC1A0kOWX7sC4zWPk6YQ/qSoi0mGd3wjp/yCRHPYVX3Q2yO0OvVjLKw/7+8w3MQq2MeL3V
xVA7/Ob37ut1Cum6bAWSEjLThmVc5itb+KLJyqXjytYXSb0rSVVqmlBt6JJVV/FKROfJN8Ve
jQ0p6JiwSWubLNamzX6TUhCsIhgK6uVZurvAmVopKnEhnXCYMEurI9IcQoS5XkVNkpa842uR
CK++lQmvzNhAFhmtsueiJyiDpdUH3Sh4Z4pFMQVfWas0rx2Po0PzGV/X9FAMJG1VVLf00/FA
E9V1xOv8orK8ipI6+2JAb6OCNsg7tTlagGViRr+Fa7Vx+bq6KcENldpN6iFJn4oBCCHZyohv
eEd0dkcbObzr5VhIUgumys2awdp3pnJJN1TrlObxtPz1GBa8R39+g4AGD8d/vvz2a/e8++3p
uHt4Pbz89r77a8/LOTz8BuHqfwIz+O3H61/fJH+42r+97J8uHndvD3thR3/iE9KBaf98fINI
9wfwzT38e4fDKsSx0HGJPGagucrKrMXDl7WwCuMrztVKMjLliYLvBf1VhMPFYwuftqH3+HVJ
0YAFi0ZCKk4dHVFo9zgMYW1MTqpauuWTKh6g9OcHYHDVoKJ/+/X6cby4P77tL45vF4/7p1cR
/wIRw7NShEIb6WDfhqdRQgJtUnYVZ/VKf10yEPYncKsigTZpoz8OnWAkoa0sUQ13tiRyNf6q
rm3qq7q2SwBNjE16SvpJwpHY2KPWtE0H/nC4W4t3Zqv45cLzp8U6txDlOqeBdtPFH2L21+0q
xRlte4xDgFDLICvswpb5GiwO4cSAhEdqLdefP54O97//3/7Xxb1Y1j/fdq+Pv6zV3LDIKjJZ
EU1L48Rxf1f4JmE0c1WjsW42qT+ZeLRPtkUF3dEppaXt58cjeJrd7z72Dxfpi+gc+Pn98/Dx
eBG9vx/vDwKV7D52Vm/juLAHkIDFKy5YRf6orvLbPu27uYGXGeT4diL4f1iZdYylxD5Pr7ON
BU15jZxTbtQMzkXgm+fjw/7d7sc8ttu8mNuw1t43MbHY09j+Nm9uiGVQLWh7tx5d85a5F/CW
qJrLkjdNZHODcqUNvlnNCSlG+FyLNNJosyWVWf3MQc7bdl1Qax9inVtLcbV7f3TNTxHZE7Qq
ImrPb40hM/GbAgfIUl6b+/cPu94mDnxiaQiwNMmlkTSUT2dOscDtVpw7xjHKT9foKvXnSI2o
Y1yPODqJuemtVrXeKMkWVHslxtXmJXlUnlljw7KBNHKkakidJ8nYKrdIJtbwFBnf4cLfxJ6h
pkgobgJgHHn5hPBN3zWLIvAp30DFhFaRRxQMYL6nWEorNU5UvHqbzqKaeL6kspmgKIICTzyC
Za6igGptcb6ZLZc45xVlvqgO3GXjzezqbmqqEWIJdWJ5dWU2bCcpOx5eH3FGGHUe2CyPw2Q2
ChusFWt2JirXczJ6uMI38Zj4jIu8N4uMUXddg8J6wjDxciPYWy+CtFKZLUcoxFcf9mclZ89/
n9J3k4JehO4J4OyzXEDP187akBhaAdc+dA9xQqwCDgu6NEldtS7EX/u8XEV3xI2CRTmL/JHd
9l6SoZrfo75sPUtTosK0qdPSbl8PF+eyq2uKBo25tXlPRD7RRJsVnEW3KWWVq5A31SIjToce
7lpOCu3oJUZ3wU1066RBI/GPPofaK/jg43u8WjoLnIxciWx3lQWbjm1Glt/ZrRUv+BYUXuFV
i5rdy8Px+aL8fP6xf1MRGanmRSXLurimLpxJMwdLnHJNY1aU3CQx8vQ2p1XgYvpp8ERhFfk9
a9sUvFEbaaFnXx87uOObJ7hCdL34Q907BV5d2M+tyIG4KR2PtQYdKA3c/RRnU29Fr2szng4/
3nZvvy7ejp8fhxdCVoVwbNQpJeD0mSIiuNkinU0kWYzyJnaUJInODYGgIm+PNh3FagE+iHMN
y+7SPz3vHM35BiuyL5ts3CLPN3wQiMyiVpSPd8RuiyL9/8qOZTlu5HbPV7h8SqoSl+woXu1B
Bz5nmCHZFJuckXRhOV6tonKkdVlyyp8fAN0k0d1oyjnsQwCm2U+8GkCjS5288RgTsA6bIbsx
rS2NHlOX7PofZ79OWYHu6SrDWBmTDcO70B0yfYFxxkfEYyvRjBkk/QXOttboi1+aMnsRawb+
Tkb785vfMcn24f7JlFf4/O+7z18enu55ZKx9sHt2wdoLDOmSwRDCXswOdaWXuw52p+BT0DnB
/7t8+3b1Gv5MB+cm06pN+hsTfF1eLiUMY8esT6r849RdsbAPC5nSos2A5fXOI1ZYekAebVqB
Pnssel6xek7UB1W3zbqbqewp2Zv7yzhJXbQRbFtguGzFow9mVFm1Ofyrh9lL+cVbpvrcySjv
q6aY2rFJoY98uHgb4+SgzNUFsmrJsvJQHpjCNDHeJ2u662xvgnD6ovQo0MldogJIb+d2deX6
DbMpy4DzO6D3H12K0L6Ezgzj5P7KtZjRVJ7v/9wDTBg4gEV6I5VSdAjOhZ8m/cl7+NejgAWJ
YSMqXebI/oyVHgIuFDoQMmaVWrvfSdtuc9Ww4Quf5BF/a1sIxTRVH36LvBDkmKvg3BrW7kF5
FKMLlVrmsYwOlEUuutRi/3hcogeW6K9vEez/bV21yzxaKFVHiDwbbEmqRFxXi03cZ6BX6LCH
Y7nVrgaOLbnuLDrN/hmMwW53C1wHP+1uq05EpID4IGLq2yYREde3EXoVgbPtPTMTfns7b1x6
W1rVChX7RwmK998X8g/wgww1FNeDLpABSbDp0HTrFxg8bURwqRmcsn+OST2hJ4MtQNL3yY3h
clzsa5VVwG2PxUQEKwoZI7BUXojBgDAwcnJYLcJzvhgtjZxeQZtAfuyGvYdDBDRB19h+aD3i
kjzvpwFsG0d65PRyVlYnFFq7J3VcYucKayUg8dguEQlMvp8qNdTOk4D0SSzpEslc1bva7AjW
lysunWqVun9x5j4PvHZT+7L6FuMPVkDVX6GiyNptusopiAx/lDlrEouDYOI7yGm2cmOmP6Do
drQailKYd/cx1yrc87tiwOqOqswToa4P/mYaSFzzHByFhvsSLcuhFz+4oCQQ3iab19nZcmCF
F1UL64i1RSbn4hQAfrmBhXq0qXNlPeq9l2U8p6Rkh1NSs7ghDdvL2clm1lzJvBSk89Q195Z9
1kgJ+vXbw9PLF1Ow7fHu+T6M0SFV8EDz6WhyBoyRo/J9pAkoB11mV4NeVy+3rr9EKa5GTOs6
X6fQaNtBC+cswgdjrW1X8qJOpPTH/KZNmirzswXBjEgVmg9F3wOB8xQsBs/CP0d8Qkk7LxlF
52vxazz85+5vLw+PVrd+JtLPBv4tnF3zLWvaBjBM6xuzwim1w7AalEBZQ2JE+SnpS9l9tMtT
zMSuukgqctHSHXIzotcR83KF2S17mDtK0Lz8cHZ+wbdnB0wbq+G4OUE9WPrUbKLl0hD7AuuZ
YZ4inINafAiTRqdNqi8mIDXJwOWTj6HuYYK5U/AGwz1suQTnnJrWDWs2Ad74MGvnvJn80wv9
J/4+uT2C+d2/vt/fY4BH9fT88u07VplnW6JJdhVlvFE9txC4RJmY5bk8+/FeojL12uQWbC03
jZF2+Jzy27fe4LXHjo2sh/3CVxL/lqz3hc+lOrEJ79Vt4YbWEI43ZoiHPpHK0xhkis+Sa68N
yj8LG+JfldNOiAyWf9c2scptyFgNoRjO81PL6k6syejw95odAg92WhpzXptGdgjKFD4YFgkB
NA0iIWkB8XhCdWplTwc5OFSllZtx7cKnVtlKBlGK26JXId8iIrBtNzrfKziRSSxcZDGvB8xu
cCQSQUwjYvaBaV6lWCNAh12zCNHai5Bi0NdrHyLJ2Wt/zWcsRn3G+9JnI/HDVz8CPApzVm3t
mdjHLEOfBel7j6HWXMejc293LWgzNXBCv9nX4KgFkco0Gbfkx7OzM3+kC+0rU77QLTF2ZRmd
lYWYggJ1lgTs3bD/USduOSoNYi63yKLNo1LPNHJs/GaPDYU0uDrdgurTcKUB3O3A9N9JO3Zh
pZa26ocxqYVGDGJj+syj5BStKHzHYqmyQwXSDlQiquSOu2a115gkSDSfUg+BU+CaIDZE02BX
/7OExRfCkx07LhaMS3J5FsRSrnzSH7HeYwXUILYE6d+oP74+//UNPnf1/asR3PtPT/dc6QXe
lmFYp3IKejhgVB7GYj1EBkl2yThAV2dLSGWHsePP3c57TZVDFImKLT5M3HAy+sLP0Phdw2Bu
71PIMUq+hAuFqcCC44BJbzqRRnq8d+0OI6TuSO7eKLHtu8Mo8GPTHot5DomWzuPpCnQ80PRy
5eTKkQA3jYsSfHs3mOQF0O5++44qHRfJDrvwDQsCuio9wSjjkGuRUtsu08BVOBRF54hiK0lB
9jR0h2589BhQt6ogf37++vCEQXYwssfvL3c/7uB/7l4+v3v37i9r/6lkDTW3I2PSt467Xh15
hRpm/SGiT06miRbmWXbrExoH7ncfPTPjUFwXgWTUMFr8WaAlyeSnk8GA+FInTHwIvnTSTnaw
gVLHPDZFwflFF/JXi4gKgmRQaGHquig66UM4uXR1alULR/ugnsBZwrj+mNazDnK29x+Zvf9/
LP3ik6NkYOCQJHo8s2gusLR0kewnmKxpbDFeAra38Y1vyWujc7xOAdogiGpdRFj1F6Nc//bp
5dMb1Ko/471VYEDTnZd/QCSgm81uYLPYiyQRof7UTqSTZopeDYk9WLLZY7cfGVj2YHaAFabn
Ewy6nsRj7FHLWFQB3yzMiQu6IjL1yb+pQcQr+wtJUH8ki3uRYR/ec3ywKxBYXIlVZOay/M6Q
/HkHpm2s7J7s642tYmprgeWDNS2l/uPVSZvdDIodP4otWPdzyN1aeuAFUP2lq8qUY2scCNvY
HRire5lm9jn5ZXAF5HSqhj36Rn2bWyKzRZ7Q6+aTW7KGTABoDy8wPRKsokPri5RgqbVD0AiG
jdx4wMy2Zppm25BGjo8N+CXZTFcyl4uTv3J5G88CiyOGQSG9I+LgP3hrgB5w9N/4c8yaspn6
+sSdo1Y2optaHGvwvdmq9D9kCQW/cXAaUKMhp7P9jeQri+2r2JZaj4GzF7aaBnaBNTG4nknW
jDSyYp4oYA+7nVP9Y51AWiH+Wkt/BepmGTS4NOXBjToUDml/gkNr4eLRb5pKRYdrj7TZ0DrY
k7pNOr1X4WadEbNTz9s4Kcg12G92EilfzFN7CJ60LT5chRl09INYCfuZHM7cJuFc8nwubiiM
+ACNpcW6GqsjiyMkTbsrgzWct5MPj30D27AdwCpwfSUWFZ4Pi3vjhnEp9qUstzoWraA5/2EB
fE5Ep9eJJxHYgBhuMn8jqekODxdDTmO122lIQEp2cVcE/2CMODxBdDExLVrbfC5uWmDoZvzA
fuIf5Su/TYmqAizNpPZZ9f7vv57TbWHE5Ndg2tWFm4xMIL4wYu4zpzJXL86wOJouauWEWEO2
pSXOJDT8SE0YQ7I/wfEqkgNtlM22ovWmLUGP9Y5AcFUxl7ClM3/FynMZmmOJD+zhsW5yDF7a
9JUDGTrLKlt5xr3yMdnllibQk39cfJT0RlfDDyUYRuTaCy6SXaNbDjrpaxsGJmdiF2U1dbuB
asxsaG4n6dWEXI1p7SdhWoO3Tula1FNBFlkg1TvCkWCUBL48sV2pwJzEs+sLJ2ScISKlzBeK
MbhkDGn8JGNfl6UrS3R9RFKGuiR6u29amHU034xpqm3ftZknut+JaNvdiNnMaM1GuzC2J/PG
B2jujhia4eZCkI5iRLAvpLsxqJNmDQd3T/P76+Hu+QWtXHTTZH/89+7bp/s7fkdyGGV+J/o2
vUrBXSOTiaNQJanF8cbFX7XFYMryv/IDX60Lez1TBIWPF0RVux59hJgLmNnvsQpARDXJoZir
b0h9QZpKLdah//MSHRSyeHX7uFwybjHEQ6Z4GqlxEWvQodRxlj3Mc2Op112AZPa2g8r59ngt
JX2QKPGquR8bysjgt5Q92Duk0hv/lhfcXx9y93UW425EhUerSHFuImmqFm91ZL5JFNHfW2nI
y4jLGuVq9gJL2NBoUgy+2sDzyLAolRPJtaGfmEuqKN640T6eb7MxmqB9cR2VPGYGTXyMSR6X
Fn6m0ll3wxUYgh8AMShJdhHahkc/OkAboeM3BWA46bUsWMwN8xipGkLY67gSRXhUyEtQGeIU
PQa2Uv2YjfmMVbAjbJVLCUZmux8abx7m2x8XSt4YqhntzVoXzCOGs+8V3VMe+XRSqDZM56rr
xzpVVn1zSnj1ULPaQRVh07VY5JDdIlS/hsoEuR09NCoPlhs0tAzs2s2dSUHwopSam/BrmAAo
PA9u2RBZMAa1RUz01/8ACjJe/ZSXAgA=

--2oS5YaxWCcQjTEyO--


From xen-devel-bounces@lists.xenproject.org Wed May 20 05:08:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 05:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbGyA-0005nR-LM; Wed, 20 May 2020 05:08:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=zw8I=7C=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1jbGy9-0005nG-Aq
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 05:08:41 +0000
X-Inumbo-ID: f74aa0da-9a57-11ea-a9bc-12813bfff9fa
Received: from mga12.intel.com (unknown [192.55.52.136])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f74aa0da-9a57-11ea-a9bc-12813bfff9fa;
 Wed, 20 May 2020 05:08:39 +0000 (UTC)
IronPort-SDR: ySTQMKYGoj2FXjKvHpuTlm2pJVtSzJ1n2BdiyEnrIiKH+Snd/VXsJIxw8XSNgvrGkxI5swAot1
 F0k32nD5LbLw==
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
Received: from fmsmga002.fm.intel.com ([10.253.24.26])
 by fmsmga106.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 19 May 2020 22:08:38 -0700
IronPort-SDR: 1k5l/KRH87TbCkCVcOQq6NMcLVluqaxrYFWkuxwQ55j1aqVylMpyn4oRyLyw1v5DBII7afIAWg
 2vs0IYuRyuAA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,412,1583222400"; 
 d="gz'50?scan'50,208,50";a="299814262"
Received: from lkp-server01.sh.intel.com (HELO lkp-server01) ([10.239.97.150])
 by fmsmga002.fm.intel.com with ESMTP; 19 May 2020 22:08:31 -0700
Received: from kbuild by lkp-server01 with local (Exim 4.89)
 (envelope-from <lkp@intel.com>)
 id 1jbGxy-000Gwl-NP; Wed, 20 May 2020 13:08:30 +0800
Date: Wed, 20 May 2020 13:07:58 +0800
From: kbuild test robot <lkp@intel.com>
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
 boris.ostrovsky@oracle.com, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com,
 roger.pau@citrix.com, axboe@kernel.dk, davem@davemloft.net,
 rjw@rjwysocki.net, len.brown@intel.com, pavel@ucw.cz,
 peterz@infradead.org, eduval@amazon.com, sblbir@amazon.com,
 xen-devel@lists.xenproject.org, vkuznets@redhat.com,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 dwmw@amazon.co.uk, benh@kernel.crashing.org
Subject: Re: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Message-ID: <202005201349.83AY8NAQ%lkp@intel.com>
References: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="5vNYLRcllDrimb99"
Content-Disposition: inline
In-Reply-To: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: kbuild-all@lists.01.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--5vNYLRcllDrimb99
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Anchal,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on linus/master]
[also build test ERROR on v5.7-rc6]
[cannot apply to xen-tip/linux-next tip/irq/core tip/auto-latest next-20200519]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify the
base tree in git format-patch; please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Anchal-Agarwal/Fix-PM-hibernation-in-Xen-guests/20200520-073211
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 03fb3acae4be8a6b680ffedb220a8b6c07260b40
config: x86_64-rhel (attached as .config)
compiler: gcc-7 (Ubuntu 7.5.0-6ubuntu2) 7.5.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=x86_64 

If you fix the issue, kindly add the following tag as appropriate
Reported-by: kbuild test robot <lkp@intel.com>

All error/warnings (new ones prefixed by >>, old ones prefixed by <<):

drivers/block/xen-blkfront.c: In function 'blkfront_freeze':
>> drivers/block/xen-blkfront.c:2699:30: warning: missing terminating " character
xenbus_dev_error(dev, err, "Hibernation Failed.
^
>> drivers/block/xen-blkfront.c:2699:30: error: missing terminating " character
xenbus_dev_error(dev, err, "Hibernation Failed.
^~~~~~~~~~~~~~~~~~~~
>> drivers/block/xen-blkfront.c:2700:4: error: 'The' undeclared (first use in this function)
The ring is still busy");
^~~
drivers/block/xen-blkfront.c:2700:4: note: each undeclared identifier is reported only once for each function it appears in
>> drivers/block/xen-blkfront.c:2700:8: error: expected ')' before 'ring'
The ring is still busy");
^~~~
drivers/block/xen-blkfront.c:2700:26: warning: missing terminating " character
The ring is still busy");
^
drivers/block/xen-blkfront.c:2700:26: error: missing terminating " character
The ring is still busy");
^~~
>> drivers/block/xen-blkfront.c:2704:2: error: expected ';' before '}' token
}
^

vim +2699 drivers/block/xen-blkfront.c

  2672	
  2673	static int blkfront_freeze(struct xenbus_device *dev)
  2674	{
  2675		unsigned int i;
  2676		struct blkfront_info *info = dev_get_drvdata(&dev->dev);
  2677		struct blkfront_ring_info *rinfo;
  2678		/* This would be a reasonable timeout, as used in xenbus_dev_shutdown() */
  2679		unsigned int timeout = 5 * HZ;
  2680		unsigned long flags;
  2681		int err = 0;
  2682	
  2683		info->connected = BLKIF_STATE_FREEZING;
  2684	
  2685		blk_mq_freeze_queue(info->rq);
  2686		blk_mq_quiesce_queue(info->rq);
  2687	
  2688		for_each_rinfo(info, rinfo, i) {
  2689		    /* No more gnttab callback work. */
  2690		    gnttab_cancel_free_callback(&rinfo->callback);
  2691		    /* Flush gnttab callback work. Must be done with no locks held. */
  2692		    flush_work(&rinfo->work);
  2693		}
  2694	
  2695		for_each_rinfo(info, rinfo, i) {
  2696		    spin_lock_irqsave(&rinfo->ring_lock, flags);
  2697		    if (RING_FULL(&rinfo->ring)
  2698			    || RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {
> 2699			xenbus_dev_error(dev, err, "Hibernation Failed.
> 2700				The ring is still busy");
  2701			info->connected = BLKIF_STATE_CONNECTED;
  2702			spin_unlock_irqrestore(&rinfo->ring_lock, flags);
  2703			return -EBUSY;
> 2704		}
  2705		    spin_unlock_irqrestore(&rinfo->ring_lock, flags);
  2706		}
  2707		/* Kick the backend to disconnect */
  2708		xenbus_switch_state(dev, XenbusStateClosing);
  2709	
  2710		/*
  2711	 * We don't want to move forward before the frontend is disconnected
  2712		 * from the backend cleanly.
  2713		 */
  2714		timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
  2715						      timeout);
  2716		if (!timeout) {
  2717			err = -EBUSY;
  2718			xenbus_dev_error(dev, err, "Freezing timed out;"
  2719					 "the device may become inconsistent state");
  2720		}
  2721	
  2722		return err;
  2723	}
  2724	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

--5vNYLRcllDrimb99
Content-Type: application/gzip
Content-Disposition: attachment; filename=".config.gz"
Content-Transfer-Encoding: base64

gpTie1eNlb7GwMs88JAOL/ypIPiomzAP6yoPzEp2tV21LFSo2aaNdzvU69EgymseGFbBxvVb
fvv6K8BEJXIhS9MMJG+VLi4k62UwGYBJEogQpEjge+WOgGVT2MHvDGBw7b3nhXtMCihP07LD
FSsjRbRhPCQbaiJ9uL9vCUTYC+RisUhvkbFDt+k2GPcw1NOk9hWjYLAl1IKNvDqbGr8PNPrA
c7EmbnVMUrESYiPfIuW1G2xwzEltHYrOKIq0bXJ5g3kfsVTZpDLnxVf6WbfuvTXcJY9pTjI7
RGj6+AQmsWhC6Kojyqo3N8N8SLD0qrGCfzyWqc0FDxDTA2mA9UcneCca9cGxfij7IzcNSqqn
ys6sI3MJt4FQpzLrhZC20YA9p0uqbZOM21LA1HlmADpTSa8BE9Pqn0HS1Cb0SjAkLMJ6JBHU
EuDyetjSGH1t2Vno0IPeEcDqgsHbR5ZTw15EQjP4V8pmDjlEwlbhiy3jbsBAespehsbFmHtZ
q/QtVlbXByvar0TbEWIViDMsSJfEXUmbnrLq6NQiRbPqYATVETyNjpX5lweCbBXA9hW0QApo
O3YEYYXyn8BWCgETLDeIaUNf1xB/MGSxTdAgS2L+CmpZ8wjIPZ6PurxABudxwGCX6C5qiEIr
4fTC34HtrdGOnfv8VFPnF+gOLO5pBIJXIcHZdrHWjumJQmRfmHXDSeciijqwNhX/1vg3M8GS
jnHn7tNQ65FMEwb1VxrP4nTGjcOkGmy6bhKW50vVovEYgarkqT1s5VVigQzzMauFjoZqTZvE
Hf2lhUQmTdUFTsVhgtrl8qmOV2FVpEuI2wCJrZPq8NBj0Y7l+WMoFacvExmXmf70zZkLmaYO
GIKbRJD9D2QOW6WhbKjEwHwTt9jw5YRA/PLTVUKoOFqRoAEq5UvxTSobDDpw0jowwQzbZm8C
WJzHBOHFn5/fPn3//PIfMWzoV/rHp+9oClRVLGx9NBDkbbpaBp4gBpo6Jfv1Cn/psWnw9EoD
jZibWXyRd2md4+zO7MDNyTrRHLILggxpT61jSiE3bn6sEtb6QDGaYcahsVFoT/58NWZbJaZI
70TNAv7Ht9c3IzMFFtBBVc+i9TLghzTgN7iiecR3S+yeA2yRbc1UChOs56vdLvYwuyiyE4Qr
cF/UmL5GnmO7RWTPGLNSiihI0doQyLixskGlVL/HKFD0dr9bux1Tka/Eog7oDOErM75e78PT
K/CbJapQVMi9GckRYNYlrQG1zCwgvyxsfV8zIStLC2Yuote/Xt9evtz9LpaKpr/72xexZj7/
dffy5feXjx9fPt79pql+FRLjB7HC/+6unlSs4ZCdDeAFj86OpczYZ4esc5BYGiqHhOc4X+HW
ZCfAc7AJeRTsM8NvR6ClBb0EDOAFdvb4qjw7PnO9pcQcpPWRi5ambp9VkAXv7Kf/ERfMVyFn
CZrf1D5//vj8/c3a3+bQWQXmU2fTxEl2hyj1qtNqUyVVezg/PfWVw7taZC2puGCWMc5Noln5
2FtW52qd1pBGTak25WCqtz/U6alHYixF7+6YOYqDJ6I1y+05cUfrrShn1UDmlKCZy0QCB/QN
kmD6buMqN8ot0SxjTla5moVTudbg0cJVJAyrhMNuK3WmODGK51dYQ1P2OcM426pAqTpwDQGg
O5XAWUXwC5Lp4Eph/LkFKSvHWT2g0NGgA4OfdrylHwLMNZxcU6EhQG8QDzFeQE0SYsOBJnhI
ADIvtos+zwPqKUFQqf0TGFjdQS5KQw0xwrxUsAIzRIkJNsbTaCcuoEVAhwQU7MACm0Sup44F
UlwKZAc+xWGsd8JZ6KfH8qGo++ODM9XTkjU4MEx5Cb07+ycoFK1/fHv79uHbZ73svUUu/nU8
HuxvOKZ4oTygLBNUbU43cRdQm0IjwdOH10UgNBqqXaprS1IUP/2DQbGGNb/78PnTy9e3V2zG
oGCaMwj7eS/FWbytgUa+jZjRbkbMdLn4OKkC/DL155+QY+z57dsPn5Fta9Hbbx/+5Qs7AtVH
692uVxLayPxBhCuZbM0OWGSTg3kVGkDNprq3PY/cOrJ2F9cBVwOfNg0kX7MJL4UTWVffEv5M
jH1mJShWpxkQgMKMcQIE4q8JoDOyGQhDHQNXla4S76/CuVkdPHyR1vGSL3Cnj4GId9F6gb1n
DAQDr2Z9Bo1LT7RpHi+M4rGIB7L8URzYVSiT+thQU3UhQ5WxQVKWVQmJsubJaEYawePhYTkG
KnF1XWhzq8kjLVjJbjbJUnqTJqdXxpNzg9+44xc5lw3j9PaEtexIG7fRYYGJDW69bmlAfxAM
iUxflrNCyLjrKDYphqy5TiHWPLgxotUyDcgcsir+yA98eOwqXr58+/HX3Zfn79+FQCOLITym
6kKR1fjAlbnPFVyZg2h40Qxjx303l4xRUrKAhahEFsluwwMmZ8rYqNutcWlTomcu3WEK+oPb
gUHlEZ5JdWCLk+lXjQVrgtm5Pmwj5zXTmYV2h5svqi88N0cCuXTiw9oESE5Ph4BHm3S1ww/j
uVGOwrWEvvzn+/PXj+hKm/FiVN8ZnNQCb64TQSAnizIUAQXYcpYADLVmCNqapfHOtbQxJBdn
kGq7HTJs8MMS8rFaacVuTpnSDc3MiDgKq5llAdl2ZOaUgMfiQEQVVYybsCmbsyxdxu4KG9aH
P5SR77wxRPmKvp9buWpZzE1CulzuAiFc1AAZr/jMOdU1JFotlujQkCEob2ae3BraJPyjNSM1
WCd6UclMcGZsEnwS5PtZTy4ogydxMhq4xVJMYPhvS9AXZ0XFz3WdP/qlFTwoj1tEXnKlGiLM
AgX+XCG6NIMGBT1E9IUTZRFw/0gISNOiezzeBtaGRfITteAC40DCE/ydeOhsCD8kqA3hh/qT
hxgCA8/SgD/IdhEwDXeI8NEMvWW8BqJZGlHRbu9uG4cmr3fbgEfNQBJUHYx1tMtNICTPQCIm
ZxWt8cmxaPb43Jg08Xq+v0CzDTweGDTr3R5TcY/LoUiWq60psQ3f50jORwpvQvE+8N4z1NG0
+9UaS33u5FmQP8VxZNlXKqBW+Dn6EmUx9fwmLnjM4q/kVcN7krD2fDw3Z9N8x0FZcUhGbLZd
RphHpUGwilZItQDfYfAiWsRRCLEOITYhxD6AWEb4eIoo2mJhwgyKfWzmr5oQ7baLFnitrZgm
3HBqolhFgVpXETofArGJA4htqKrtGu0gX25nu8fT7SbGZ6xjQjQqhySeM5Xc7yDTnt+v+2iB
Iw6kiNYndXkgw5HxL4oUwcgg+/g4IUDM3EDbrkaHKc03oJszhTO+iZF5zwQ3jq3oDGKE86Lw
MWx9L/jFBJkRIXUs1gccsYsPRwyzXm7XHEEIOaPIsLEeWt7Sc0taVM80UB3zdbTjSO8FIl6g
iO1mQbAGBSJkBagITuy0idBnxnHKkoJQbCqToqYd1ihbr1FPiAEPLyf4ugTZDqvxfRq4iQcC
sZKbKI7nWhXiLSV2zqYRJa8R/LKyabZB4xGXLvgOYNKhl59BIa5rZHkDIo7Q40aiYtyBwaBY
hQsHrFBNCnQbSy9bNEqvSbFZbJCLRmIi5D6RiA1ymQFijy4VKSpt4/nloogCkdYMos0mvjGi
zWaJ93uzWSFXiESskaNMIuZGNLtUirReLvBbpE1DHovT7ZWifoDjRy82KIcCD1azxbZLZO0W
W2QBCOgWhSKfPi92yPxBfB8Uira2Q1vbo/Xukc8ooGhr+3W8RFgyiVhhO1kikC7W6W673CD9
AcQqRrpftmkP8e0Lxtuqwb5XmbZiL2H2OCbFFudjBEpIiPO7Cmj2AXFopKll6paZTkjt096Y
rFqaQ/kzgYOBz4zxMSSQGOQQeFmbbrU+PRzqkG+Mpip5fRbyX81vETbLdRyIBmXQ7Bab+Wlj
Tc3Xq4B+ZyTi+WYXLed47LyI14sNwtLL60huN+xaWO4iTIJyTvZV4PSKFz9x1AqigPhqn4O7
Gx1ZrlaYDAFi+GaHjq/uqLh95jvY1ny1WN24VQTRernZYu61A8k5zfaLBdI/QMQ4a91lNY1m
7/SnfBNgy/mpnf1yAo9fGwKxxG0kDYp07nLU9m0Id15QcfciRxgtUtAwYt0RqDhazJ1dgmJz
jRfIIQsJL1bbYgaDHfEKlyz3SEcFf7/edJ2OjB7AY4e0RCw36IS3Lb+1A4RIswkEjTcu8yje
ZTs7rp5HxLe7GN0MErWd+65ETPQOk7pYSeIFwgwBvMMFhZIsb52Obbqd04K0pyLF+Km2qFXu
a79CwOD6OItkbgIFwQpbagAPsGFFvY7m1u+FEbD/xuUigdzsNgRBtBDiGoND2hGsI9fdcrtd
orZhBsUuyvxKAbEPIuIQAuGUJBy9oxUG9B6urYFPmIvLoEVYAIXalIjMLlBiY54QOV9hqET5
RzA80nq6P9yidtwnYGof0rC094vIVEpJzo1YBg0aJA4G0jLuOuQ7RLSgjegj+CtrPyFQgpDH
vuBGTnpN7Cg+B/C1YTKoG+QCNAMuDnjtANMfqwskDav7K+MU67FJeCCsUY6z+BsGUgQc1iF8
LmrDNxSw6/Y763YSQYOdofwPjp66YQXLl/Y4mg4dUkYvh4Y+zNJMn+2s/N29tcW+vr18hhDv
P75gDuUq1Z781mlOzCNDsDl9fQ+vQ0U9LisvSR+v0j5rOdbJaWkL0uVq0SG9MGsDEnyw+glv
ti5nQOnJ6vMYbgCbjKHo6DD3lwsZnKemt8EBUVZX8lidsfe8kUa5EEp/G52SKkOagDCt0u9L
1Ca2mt+UtDnxJvj6/Pbhj4/f/nlX/3h5+/Tl5dufb3fHb2JcX7/ZMzzWUzdUNwPLM1xhKIQy
rw6t6Vw4tZCRFqJnoStVp/QbyqE0T4w1EMZjlkib5c4TZdd5PGhJlt2N7pD04cwaGhwSyS46
pKpDMeBzVoDjC6CnfQXQbbSINHSsjSZpL+SjVaAyqVfeUbsuLniBxaJvzZwJXNRzYG2dxuhH
ouemmukzS7aiQqsR0NtySxFwJQdxpAUq2CwXC8oTWcfkM0OBzbWrFb12iAAy5kCubRdL0OhG
8cGtY7e1Iaca8Xs91YKmLwefXTf9dAppS4JfWSpKomVguOWld+KmbhZqpPjirc/rQE0yoaY2
LHLXBuCW22SrRovfBA8FnNh43cATWtM0sC8edLfd+sC9ByxIenryeilWHq2FNLOc31fqiC4o
Cw6mZPvFMjyLJUu3i2gXxBcQRjWOApPRqRh/776MhkG//v78+vJxOvnS5x8fjQMPYvmk/qoS
dSiD+MFC5UY1ggKrhkN83IpzZqUq5KZvC5DwujEdtGWplEEyLrz0gLWBPGPVTJkBbUOVXzRU
KENl4EVtImt/TdiASWWSFgSpFsDTJEgi1feUBahHvNn+hBDMSqj1qftOjUPPIYdPWpRexYGR
OUSosbx0L/jHn18/QDoePyn2sJgPmcd+AAwehAPGanXBUmUAGMjaIsuTNt5tF2H3IyCScbQX
ATsYSZDt19uouOKuDLKdro4X4WiaQFKAE3Igfy8MJSNwHASLA3odBx/KDJK5TkgSXCcyoANP
oCMaVwZodCiaoUTnZbjqIo2WkKN8bnwDTWiApxYc8jhL8S4CWhT13N+MFtSh/XAmzT3qp6hJ
8zoF8+JpEwFAOcsikoP8uumpzcAj6UbTEHdIysI/QxdywJJkD3wTMGsF9HtSPomdLtgEfEMD
zb2QiWbmareri13AtHbCh9eaxG8CwY7Uhumi1ToQnVwTbLebfXhBSoJdIGemJtjtA/FdR3wc
HoPE72+U3+P2yRLfbpZzxWl5iKOkwJc7fZLu+ZglCBS2/EStaoVoFEiiKJB1eliLTY7P2TlN
otXixnGKWvWa+Ha9CNQv0em6Xe/CeE7T+fY5W203nUdjUhTrReTOigSGrzhJcv+4E0syfIoB
C4tLUUm3vjVvQvRNA64sgG5ZT4rlct1BVGKShc/4vF7uZ9Y8mE8GTOZ1M3kxszxIXgQSkUIc
32gRsJhUQX5DMfTnIgDLTkmCHW5wPhEELDGHYYmBz1ywsord5gbBPjAEg2D+Bh6J5m46QSSO
1mUgCPs1Xy2WM4tJEGwWqxurDTJPbpfzNHmxXM/sVCWNhY4fcKBx9xhp2FNVktkJGmjm5uda
7FYzV49AL6N5bkyT3GhkuV7cqmW/d96izcAmIb53qqWhR1BqonGbm9QJ9iAAToa0nDUYt9+k
Q1BlMzJK05d0RBj6hQbO3AB8g8LfX/B6eFU+4ghSPlY45kSaGsUUKYUgwCiuK8wyE9PV9EyZ
IM9EMYZhFQVGY87ehaXUmLwmNeJIW12hpf2bFXaApqFPDcFSq6px2kEeRIGW9imzh6yii1og
HVHK/mQ0a0i7tOe4bSgpnkhtQbX7lm7I6u+xaur8fMSTlEuCMymJVVsLiTHNLosZGxy8nepn
8okANpC9QNTXJVXXZxfMcFZmbB0VamZUpS8vHz8933349gNJaKhKpaSAsJOeNk5hxUDzShyq
lxBBxo6sJfkMRUPAEyqA5JmpCDTUO7JrYsNqZHDQ4gdYPOdWVDcHI2bNcCW9sIzCbryYH0YB
L6tcXE3nBMIUEjSg10Q3fWujrAqg5dRKsouvHnBoDqyjguVlpczFXR5Ro19F2p5L8zyQwOR8
AOdQBJoVYpKPCOJSkDyvDMNtMUnDiTsp2QWsKFAuG1CllZ4KdGU9pVKLZdUKUfZIRmrINP9u
Z2IgtQ8IiHLgVgACiaUQQEuwvPDsJfaTkPry0GuAID/nNKSGkbvA17vIdQKZO6YlqB5IXn7/
8PzFD0sNpOojpDnhxnOyg3CSWhpER66icBmgYr1ZxDaIt5fFxozQIYvmO9PKb6ytT2j5gMEF
gLp1KETNiCUBTKisTbkjn3g0tK0KjtUL8flqhjb5nsLj0HsUlUM2kiTN8B7di0pTbP8bJFXJ
3FlVmII0aE+LZg++IGiZ8rpboGOoLmvT8NhCmKacDqJHy9QkjRfbAGa7dFeEgTJtOCYUp5aV
iYEo96KleBfGoYMVzAzrkiAG/ZLwn/UCXaMKhXdQotZh1CaMwkcFqE2wrWgdmIyHfaAXgEgD
mGVg+sBqY4WvaIGLoiVmamfSiBNgh0/luRTsCbqs2020ROGVCu6GdKatzjUet92guezWS3RB
XtLFMkYnQHCQpMAQHWtkyOOUtRj6KV26B199Td2+C1DQT3bABxIL62NaHIGYmSQUfmqWm5Xb
CfHRrjTxxsTj2Bb0VPUC1frv7uTr8+dv/7wTGOAtvdtFFa0vjcB67IUGj2EpUOT/5ezZltzG
dfwV13k4ldTOqehiyfJDHmhJtjUtWYpIq+V5cXm6nYlr+5Lq7pyd7NcvQUo2L6A7Zx9SaQMQ
LyAIgiQISDvHaMsZCfwqltiOSRKuM05q90WIa+wNHoxXbJhVPTPSRSm9/nR/+uv0dnh4p/dk
6yXq9FSh0uyyzSuJRLeCw2D3Ad/u9mapA5h/afJzxJCSEtdXwGsDxapYc9hVoWhZA0oWJZiV
vcMlYefo2UQHkHM+nPHFAtLOVIbJJ3KGJmqzlQ+EfYLXNiL3whsLCzVmkiIVc5Q3w+reVmzv
+Qgi7R3dF4hhv3KlMdVcW/AuDeHbmM6Gd83MUx9dqPAAKWfVJA29seGbuuN6dK/P7BEpdo8I
PGOMm0ZbGwEJUImPjONy7nlIayXc2r+P6CZl3TQKEEx2G/ge0rKUG2XtardnaKu7yMfGlPzB
Dd0Z0v08XW8KSlzs6RAY9Mh39DTE4JsdzZEOkm0cY2IGbfWQtqZ5HIQIfZ766tuzszhwmx0Z
p7LKgwirtupL3/fp0sa0rAySvt+ic7Fb0Bs8yuFI8kfmGyE/FAIhf/vFNlvlTK9ZYrJcfd9b
UVlpa0yXRZAGIqxhWjeYjjLxV/bEQE6or78hUnZmv4F+/HDQFpaP15aVvALm2WubhIuFxbl6
DDSY/h5QyFIwYEQqDhm55fnrm4gten/8eno63k9eDvenZ6PNmo1DipY2+KhuRcbm9KbFo6sK
SaJFgD+tlbte2Ksbu165S747fH/7oR0XGTyr8h1+aD6YC3VZx73jomBY9m6jxPGIaSSI8Tua
C1q/qrDb/+lwNrasgy9ZStEx5AgIoGoepaJOWYlf+SgfgHA4BWi5cNQ1IPYiOjTf3OGuToNx
lvfFthrCub1PV7fFVVut6vEgZMPhGAt93VHCyeBP337++XK6v8LntPctgw5gTusqUR9iDseQ
MjeOHpz0/EWUoE9vR3yCVJ+4queIRcmn1qJoMxSLTHYBl26/3DAIvWhqG5ScYkBhH1dNbp7R
7RcsmRpLCgfZZiwlZOaHVrkDGO3miLMt3xGD9FKgxMM99UztYq+CQwaRYaQNg5V0M9/39oVy
RHsB6z0cSGua6bRycTJuhC4IDCalxQYTc92S4Abc9q6saEZkXAx/1QTne3ZWG5YMxDkx7bWG
+WY9DcMO5CqyOSf9MI5bAaHD1nXTqKfG4vR2pd3eiAZli7bIVtYZ8AiHZUUKunPdplUB8c+c
+E3Otg0k0uM/cBU0Lc8BDgffO4f+nYKLaRXwf+/SiQBY14jkEP1CrXBjcY1QrvZSFfJlvqrS
T+BzOUZYV73suSUFKN2Uktcl5+Pynzqc5SSaRZolM9yvFNOZw53oQuBINy3shdblziRMNbpw
3EuJsivSF+Kva/WviSOgqYJ3paVc7G/y3BEKXFjHBPY2G7x+0T0yd7yIVvjqsEmG9nH1N/Ni
PDbgWMiSGyZ4HySF9DuwxIUd/z68Toqn17eXH48isjIQJn9PltVwazH5QNlEOB9/VEMh/mcf
GqK5PL0cb/m/yYciz/OJH86nHx0afFm0eWbujwegPGizb9/gtGhMQzmamHfPj4/gECCb9vwd
3AMsYx1sgKlvrXOsM++W0h030yiFhlRDmHb1i8V2GRjq8QJHrvAEnCuTuqHoF+aF2QXlumQL
9HXUXDPQFXYaO8D7TuG/0B0F2fC5p43LBd5qN5EXuFijkFdGcj0/PN2dHh4OLz8v6Tzefjzx
/3/jlE+vz/DHKbjjv76ffpt8fXl+euOi+PrRvFSDS9S2EwlraF7mqX2xzBhRnUgHY7oVV6hK
gpH86e75XtR/fxz/GlrCG8sngUj/8O348J3/B9lFXsfQ3+QHbLAuX31/eea7rPOHj6e/NTEf
hYxsMzWR3wDOyGwaam+Ez4h54ogPOFDkJJ76Ee5Go5CgAYkGY5024dQ+WExpGHq2bUujUD2x
ukDLMCBID8ouDDxSpEF4bUuwzQi3C9275Nsqmc2sagGqBr0ZrsqbYEarBtmPCxeaBVtyg9je
37UZPQ+nOW58jsSRMPQFaXe6Pz6rxPaV/Mx3uFmerW9/fh0f4R55Z3x8DX9DPd8Rx3EY9DKJ
u1kcX6MRmgENaafiET6zrol8RxpzhcLhz36mmHmOMC/jPj1IHDFeRoK5K96lQnCNjUBw9ayh
a/rQCPalSAgogoOmJxDBmvkz7O4gSkTwEKW049OVMoIZIu6ASHAPa0VQZ9c6KCneKyN0+MQq
FA5X8oHiJkkcXs3DQKxpEng2n9PD4/HlMKhs7KhLfl53QXxVjQJBdG1CAoEjLq1CcI1PdQfx
tq4SRLEjC9dIMJs53h2cCd7r5iy+OtxQxTslzK9X0dE4dsSlHjQPm1euINlnCub716Y+p+i8
98rortdCWy/0mtQRD0jStL9H041vSV3JxQ17gz6Ke5QgKmH5cHj95hZRkjV+HF2bJOAxHF9r
LSeIp7FDF50euYXy7yOY8WdDRl+Cm4yPbOhbxzkSIYKaXSyfT7JUbnF/f+FmD/jhoqXCyjmL
gjU9n1Rn7UTYfGd6be8L8ZcMfSPtx9Pr3ZGbjk/HZ8gbqBtktrKYhWiInkE2omA292x9aXkj
K3Hk/x+G4jmkutVaJVa5/YW0lAGnbJbOLU37LEgSTyaCaju0vUgJunU8+vjJgn+8vj0/nv73
CKds0ho3zW1BD0ngmlLZ7ag4bqj6kK7diU2C+TWkugTa5c58J3aeqBH0NKTYc7u+FEhtzVTR
FS089D5LI2KB1zvaDbjY0WGBC524QA2KZuD80NGfL8zX7rNVXG84aOm4SPMp0HFTJ67qS/6h
GnLWxs6YA5tOpzTxXBwABRBbR/SqOPiOzixTPmgOBglccAXnaM5Qo+PL3M2hZcpNOBf3kqSl
4Jvh4BDbkrnnOXpCi8CPHDJfsLkfOkSy5YsScwp8X4ae32KpwDUxq/zM59yaOvgh8AveMemb
NmYdRjSMqnpejxM4rV2O2/1xiy1czF/fuHo9vNxPPrwe3vgKcHo7frycDOjnSJQtvGSubAgH
YGw5DIAD3Nz7GwGaVwYcGPNNkE0a+75x9w5i3xteG3yoMxr63nn1NDp1d/jz4Tj5rwnX0nwd
fXs5wVWzo3tZ2xu+H6N6TIMsMxpY6LNItGWTJNNZgAHPzeOgf9Ff4TXfokyt+xUBDEKjBhb6
RqV/lHxEwhgDmqMXrf1pgIxekCT2OHvYOAe2RIghxSTCs/ibeEloM93zktgmDUxvjC6nfj83
vx+mauZbzZUoyVq7Vl5+b9ITW7bl5zEGnGHDZTKCS44pxYzyJcSg42JttR9SPxGzaskvsYaf
RYxNPvyKxNOGL+9m+wDWWx0JLEcvCdRO1c4SFWJHTcMcM2ZSGU9niY91aWq0YtMzWwK59EeI
9IeRMb6j/9wCB6cWeAZgFNqYXeZwiPXp8s+RnTGmk3CBMtqYp6giDWNLrriRGngtAp365j2h
cD0ynZ4kMECBsE9AlF1i9lo6JcEzjxp71QQk0p9uv7RuJAcz29qGgOymg9Z2Si3M+sScLpLL
ASpIpsaUWmt23nAxyuvcPL+8fZuQx+PL6e7w9Onm+eV4eJqwyyz6lIq1JGOds2VcQgPPdFCs
20gP3TgCfXMAFinfgpqKs1xlLAzNQgdohELV+JESzMfPFCyYpp6huck2iYIAg+2t+6MB3k1L
pGD/rI0Kmv26Opqb48dnVoJrwcCjWhX6ovrP/6helkLwEUuTiaV7GtpH2aObr1L25Pnp4edg
fH1qylKvgAOwhQj8Zz1T/yqo+fmEkubp5E4mZh6POCZfn1+kOWFZMeG83/1uiMBmsQ4is4cC
ikUxHpCNOR4CZggIBKOempIogObXEmhMRti6hlbDVjRZldgjizPWXEMJW3Bj0FR0XAHEcWRY
l0XPt9KRIc9i0xBYwiZcUq32ret2S0M8Io74iqY1C9xuFOu8xOKMpvKCFaIgvnw93B0nH/JN
5AWB/3Ec/Qcsy/uoUT1hiemrcWN7YbLn54fXyRucmv/7+PD8ffJ0/B+nTbytqt2owPX9hrWt
EIWvXg7fv53uXm1/MrJqLheG/Aek84unOkhEiNFBtKA6oCuI8jBchJRZMeWGsluRPWkXFkA8
ZFw1W/o5nqooelswyCVb14rTUqsu/W21rwo4EKJa3CqAZ7wb216koHKlixZkIq8UzculmS9a
IbqpKAiG7ukzwJeLEWU2QJTMm1FRBi+F6rJe7fZtvsSeoMIHS/E+9hyPVOvzgKy7vJW35Xzx
1KuTBGVObiDDMgSq1rPJK6RlTbI937xmlxt+m3lpjj38ACRjxhBwgLiqb8gKYpzVpd70riUV
yj74DoOv8mpP1+B+dObs+d56uAuaPFuX00oBEGUpXXOLMNYLBjgtSultZ8AheTwcq80T7V7P
Qps3GcpRqatt0pZpK+0MfIzMqoD1WluS5Q5fU0DzOcqnjBO9qbddTraOISzmmpP/ABkdZtt6
kX/+xz8sdEoatm3zfd62dauPscTXlXQccRFAPN+GWTNF4FYds/Th/cvjpxNHTrLjnz/++uv0
9Jd6Inz+9FbU52SFoLniFK+R7KvK4W51pqO3XO1CHFX5Qb34PU+Zw/vN+obrs/Rmn5Ffastq
i7s3XIpF9JZNVda3XDF0XB2zlqQyhfQ77ZX1d4uSbG72ecdF8Vfo2+0G4uPumwqdI8hw6sPc
vDx/PXFLfvXjdH+8n9Tf3058NTuAN5MxwUdpkiGrhTvMljb5JvvMDQiLcp2Tli1ywsSq1Hak
BDKbjktvXjXsHEuYW08WDaxVbf5lC46Piy3d3ZKCfQZT2KKkXOmfi/IRAsDRsgBJ2rZS5/sI
t65xRdOzK5EjTBucji9RDh3QVberZa9rAQnja0lqrj+rSn+CPMD4tt+iCy3gNiv1Lwllxiq+
IqvALD8tWm7M7b/wJVFHfOlLs6OLOl27ZborWgZ5vhuXQmzIRpg3wy7i9fvD4eekOTwdH15N
lSNIuXamzQLS3HPrg9VbXnnKhWeDyr1Rnlrv4Bb802rLBaM16WKALl5O938drdbJh3pFz//o
Z4kZftJokF2aXljONqQr3MbTqvKDbeiIlMmKzQ6I1n0SRjM85N9IU5TFPHCExFNpQkeiUZVm
6ojoNdJUhRck4RdHLOCBqM0b0riS/g40lM2id+riJLMwci9NvSkxqswu6l5czzopynxFUvSF
6FmK6rbIN0zolj2E5L45u4wuXw6Px8mfP75+5XZKZj7v4lZtWmWQr+8im0t4bsmK5U4FqWv5
aE0K2xJpFi9ARHLvcooExoEql+D9Wpat5tg4INK62fHCiYUoKm53LspC/4Tu6KWsRwNxLstE
XMpS9Au0qm7zYrXZ8xWmIBu8b6JGzcl1CY/xllxLiAdPGqv4pqfO8sHAxVQ0p2BFKdrCZNht
e9i+HV7u5eM32yUDmCP0Jyo+HNtUuKMJfLjjqi3wHI7rnIC0uGECKG5gcxbh00uMFmVOJN/1
ORLAc+QW5AbnFGC00c+XhcHuzdThFAMbuBV+YLAUT4I34OvsZCP1MxFA1oXf8DlcOItvi86J
K1wOSRxX5okXzfDHfPApbL5dyIqwtna298q2A0aX7fzAWS1h+LtKYBPu4AMY0vE558QWTs53
brZu8ppP5MIppDe7FlerHBdmSydzurrO6topRx1L4sDZUcZX9Nw9MVzPOMRUdRaa8g1k4XjB
AeyD+KNuJE237s5yq80pXwu++PdsGrlVBJhcW0dANogjL88rlm3NRXWDWwcgqzmX1U1dOTsI
R84BmswQ5vWOK9fOUOXSm8fNk5npgDcYTeiCKTTu4nD33w+nv769Tf45KdNsjMtonbNx3BDH
SkYCVBsGuHK69LxgGjCH766gqSi3XlZLRwxkQcK6MPK+4GYbEEhrCx/3Ee+y6gDPsjqYVk50
t1oF0zAgWM4wwI+vvMzuk4qG8Xy5cjgmD73n8nyzvMIgaW460TWrQm5pYusIBBUsi9Wa6YOk
hq0/U8ATv9ahXy5UzS12BHfBiyzfKhsuqC9pXe1vyxyfGRc6StbEEQBeqSdrksThQWlQOZxk
L1Tgaxl679UoqHC3YoWoSSJHZF+F0870AJdyuijwZmXzDtkii31HWG6FCW3apxt8H/fOPB/H
d51VxWiupc9Pr898734/7LiGF1r2k+6ViCtHazUvAwfyv2TWIL69rMtSxL18B88V3B85nKRf
nDxxOjA8C8q175hbab/YjVnAsE2FuHCwGqmB+f/lttrQz4mH49v6ln4OorOObkmVL7ZLSIpj
lYwgefMYt+f3TcsN9XZ3nbat2XiQftHwaJmDic7ITQ4n7OjgvzOSZwVXrzRDH35D3vNtv3c+
pFRoLAPYJknLLQuCqahkaJt1tTN+RuvtRs0JCD/3EO9xSIuBwuEYjGvAQs2ZopWyycTRVauD
mrTSAevbLG90EM2/XNY+Bd6S24qbyTrwd03YR8gQf0yLAEll6+HqRHuct4HQnz0fao5EOT+0
28QbWNlZrbZ1i3DAirqptoP0YKtl9HMY6PUPG+F9XWaO4KiiHW2d7pdGoR3E4qfixD5dUrPr
FyzfDuC2pWi143G9KKIiXEEYfZevMvkk0sEUDkU3qckUMeSgAyywpAbe218M/B3VkVXTHsRl
n3dcedkf26J0+QJExEJxW9X+pmq2U8/fb0lrVFE3Zcjn4gKHQoE6puttapLOZ3sIFZ0aIiQf
vuv9bVJqzCOEoQTiIhsVo91iDdFMYgmkruzXgkUQWnm/9eMownypLtwyywXBrsgm6NH8sSMf
RA5F2Afmer8N5FkYIp05hfFV5ifJ3GwJKcFrz9lFjp7ijmISW0TTyDcYTot1YzCXrzdF32Aw
cdxjKEiyTRLVqWiEBQgs9Kwe3TpSYQPuDxaGAZo+l2MXTPoRap8IoLhgFvk1HZ+mxPPVW1UB
E5EpjNnQ77iJjMwSATfrTuk0SNCExxKpxey9wPg2/3af0UYf/5T1S6M1GWlLYnJ1JZIl67CS
7GxC+fUU+XqKfW0A+apPDEhhAPJ0XYcrHVZssmJVY7AChWa/47Q9TmyAuVr0vRsfBdoKbUCY
ZWyoH848DGjphZz689AlnoBUg75dYGZIBAUj4kCYK+CyStDHLGIFz0ylChBjhnJDxZ+pPtxn
oDnM4sQt6T0cahR7U7crPzDLLevSEIyyj6fxNDfWx4rklLV1iEMxHnEjSK5iGnc2VRBhtqbU
qv26NT9oi4YVGZb/RmCrPDR6xEHzGAFFgVk0BD9Ou2KBxmcXBqc8PDMXOJIEpm4YgJjCFWdS
NTUmUNcHgdWgXbU0MmGJ/dw6+5d4BahEmxGSQ0xRIoPHlAWWVrEhqIDgRrcAOOWVDKbvIs8N
lafjRM/VPLgjiQjFJLx/0AwVI5kwS3hzIDjYjd0BiZb3jC4sLVYVQbsv8Z2pAi8osXt24OSF
hhMLcdmJKSMKnugJv22sKb8m1l5sFArxHMjNED1G2YgdzpFsBGL2eJcN31kM7dra3C6MN3sY
dqz1VcMZt2GISIGXkAVtQDK4iSDPGSI/sBTefrM2TXYJh3ZIoGGDN4YNB2EuTcDeiCCigcEZ
5ErmipF2S3zPt4vY0j7Y2eCUFOSLA4ypWVmUHwSl/VEMUX1MJSPiTRZLI826bpalmfMibiyi
qfFTQgW/vk7BuASY6UAsoo7wbQB20i6WWt6926I1LPgROhiC+r6zuNLtul9iiW2EKFE4qDNL
EzXV7Y17n7/IFzUeiUVrKUQs9hwRvzRCRmhK8FNwja6qHYnxRqqr448nXgVMn8TqMgJ6c182
uZwPjm/obsPWYL9Z5r+4kkEuYwYSsRVbbM+e/esisw8xOfAy/PzHfkEYy9udSBW0WbG1hm3J
rZLDA759VL8dNeNwkEq/H+/Amx8qttysgZ5MId6xxhGApulWuOcgfZL4VufFGbhfYm9CBVqc
2v+0QHrGIwGmW8yeEagtqFG9y4v8/xh7kuXGcWR/RTGnnsNEaJf8XvQBBCEJbYKkCVJLXRju
KnW1o7yF7Yo39fcPCXABwATdh+6yMpPYl8xELsktT/0uRAzsyHZ4SFpNwPcRzF6ovWBCbetK
DYyrXxe/LnV9SBJIlWTw1Z6E0YJQdTVgFimAzYss5rfsIv1hMldnuNJ8HgqKodFqIEuu7kkZ
qSsWE+E1lYlp5o6CWoP7LC24dN2gOujYqDOw/R5BJ6iZiEEpTk/4g8ASbNNqzBc1aP5M7ZmA
UKrB+ve7Aj+bNDKBOLbBtXnIGs6v/0hDxvq7L9fbBcZVAlK1X29Cd7nfXpgLqCiYrlEXeFJs
aJb7o3Xk7KRlhkCN+0tjUOmUxanigfyieImfvoD7g0QF9lQIuPLE0wPxarhVAi9XR51tPgnw
hGqGziVOWOw3JmFpdgwtBBid5pBDoLWtAnAQ6kfuJvhrMYEJBXxRiShhOYnnY1T7m+V0DH86
MJb4G8U5MdSEC7UUmb/AhZr3ImCIYvCXXUIkHjcSCHTOun0W2oWC0yKDJzF3NAVcgQXzjkuh
+G/eLmGnlrTENPYGU/C9W4zisGyJSh+KSmBR57PakM5asMBjuy5nqRq8FHuuM+iSJJf07FWp
jv6ExijQWPQh8O6FFEdDeTiCxRLHUDtusUaoIxOmnFP/C3jtG9zSBZiGoNoIjc0oJaXbR3W1
DcZfEiGrdO8B4Wq0GSQInxdcwzJnDEwlb/0WypIRzACgwamNoTgcW7ujEV0aIre3IrTO9mB3
TCR3YhB2wHCzjXFMbTaf2wRBivKP7OK3w4aHy1V3ceaWp85vyZi34MqDOieFD1Pifdm8OVkV
2/Cx7VABU1nnAVszTTHffWFF6IA9EZp5TTpx3iT3cMo5c7XxAqVABf7QtbDwsH25xIrvdLN8
6slQN0pW1IcKl100L5nkuNijjy7FPs3nng1YG4cJYao1tw3pE1AW30isg71uARqKNjtUU5Nf
YOfPhdYCDldGIHD8q4YFPH9cHydcXQJuMd0AGLWDIoDi0CEIFNEpWuwqrR5mB6qkLV6WCWts
gt0RGFg3a8WBjt1rX3Q6aQnT2k3cIUirFJKcg+wVJFB/pgM7GQtPCmACiKwP1J0ot3nOS5pJ
7pKqy4Uy887SJa9FwpLB9A6iDptcJMZDp7E18fvuvpgHO5iV4dFRuPp0UAd7wgN+UC2Vzj8A
VMHN1EyH1POxV2eNAgSypBrNU+d2pDqakMvvcxtt5rrfTy/vH2BF0joLx0ObbT2Z6815OoWp
CtR6hqVnZtL5UMPjaE/RtKIdxWCWDbS1yXNQrK/KhxZgyK/GsS5LBFuWsGakkjyxb00TnMZr
+E7i5qJ2U7qWhqf6XM1n00PuD6FDxGU+m63PozQ7tWhUSaM0ii9ZLOezkenK0DHMuu4MxyIb
66p9LgQWQgWa6rFGy2Q7GzTZoSi24Jx/sxklgiZGVOCCe0sgZXirAV4nEBAeH9ftGWNmO6GP
9+/vQ7WP3oPUyxyorV1sSQuAp9ijKkUXDTtV1/v/TPS4lFkBpujfrq/gNj95eZ5IKvnkz58f
kyi5hSOwlvHk6f5XG6jr/vH9ZfLndfJ8vX67fvtf1firU9Lh+viqw0E8Qaboh+e/XtzWN3T2
nW+BR1MxtjSDd5oGoE+n3NvQXcGkJDvipSFtkTvFOzp8kI3kMp77qUhbnPqblDhKxnExvQnj
Visc90clcnnIAqWShFQxwXFZyjydg429JYUIfNhGv1dDRAMjxFLV2Whtwj66e48MI8DDQuZP
9+Dviqc+FjHd+mOqxVNPC6PgPNdPPGFWIE4D3K8uVO+6GE3pqK/jE10MrmgFqw8ZGhuhw++J
zhaDfRpXJFG3RTLc4Pnj/YfaG0+T/ePPa3MdtmkUPC4CChpcXKZlJJdIveGEF/TAFb/KwqcW
XA2b9TACEkwjNA0/hyopN3N/X2i7KW8HGlsq6hu7Wrhe8+0eCgY7dF8Y0hBeUDDoxZoDfiYL
J0qahWs00BiKHhbLGYrR3NeBDba+wcJ7CqjhWaJfmPCyc3XP+olhG1SzG8UWRTM3w5OF2ZUx
V4OVocgjVzIViuG5/eRnI3B6phZ+sF8tUsnEgyO+aeV2Nl+EF2tPtVpgL2/2qtGOQIE+nXB4
VaFw0NHnJK3zwdnq4HFcIjmOyCKuVi/FR0rQUsnmbiYIGw26nvH+i0xuAjvQ4MC/nxRDqcyi
MZHn0QacqxFBoCFKyVEEhiVP5gs78KuFykq+3q7w5X1HSYXvizt1rII8iSJlTvPt2b9SGxzZ
4ecCINQIKck9RgdIclYUBJ40E2ab7NokFxFlSWAIUUWps9MjVmibb6zoszrSBjxJc/6cAoNu
kuTgKJHylOFrET6jge/OoI6pRRno44nLQ5SlnxzPUlazAQ/VTGsZ2gJVHm+2u+lmgb1w2ect
8Iwtbwt3liupo5cXE3zt5SFXoLl3R5C4Koer8Sj9Azhh+6x0Xzs0mMZ+19rDnV42dB1mW+gF
dOEhMYjHngpTy25w+sMbnNcFeKeN1Q0PwrrVGA2vxU7JoESWEDNqH5xDrkT+6Lj3j8YWDFe7
u3+SQb/LgqSUHXlUkDLDHs10v7ITKQqeFYOvQwFf9LwdJCuNVLXjZwjXEypem1HsTn7pF/VJ
6KphX/TYngdrFBQB6t/5aubmv7RJJKfwx2I1XQw+b3DLdSCtiR5GyG6v5k0HMR8ZATV7mVRX
VEhpU/qnCKjvEVGBnsEOwIVVjOwTNijirCUfYe+6/O9f7w9f7x8nyf0vLDIcfJYfrGemtMne
e6aMH33eD5R79XFMBwhc68J39bWUr4H22M3BmXgDHYmf5BNB1IWAu/yQFHuCtqigy7W2FJkj
2FYcSytRG58uqej6Kbi+Pbz+fX1Tne7Vbr66rVXyVDHuzKmrK0bRrbIkSJCfyXyDWxRpqew4
WjygFyMaKKg7zEFGMR0tnYh4tVqsx0jUPTmfb8JVaHwgRYwevuwWN4DSR8p+Pg3vZaNeG58d
42A4UFTZax9dCM4RzSNtByl56V8kqg3qhgooasyfSNY3qHZ//+379WPy+naFZCEv79dvECvy
r4fvP9/uW1W5U5r/MuVOlG/W5Q5jiT+E6/GvUxpWl5q9tAtv2F2VUuCjgnt1bICanVrChRqe
5n3DvITXAfhumbJGCmlUfyPKEVp30zxSDqGiFiMnmDEHGMEPHpscbBztcVdlgz6xKGRXqE8b
ckJHwlrvny8862n1krORow38Y00YTmTyhR2cW/2oI/AcQkCtR+S2xej0rJXnkwDk/s1u3pp0
rleT7vUfvKJAOSHNKeBkfLDdlTpQDflhKVUMqeO92eNz/7NCyQkHPQwINaE5WkuelDvh99ug
dvBvIC8TUJ0iiT0z6IHjO6G+HpSLOpQChkYbJ2+K0Fb5qojBrB4riMTuwip5oH5dlWo8X6sl
g0kousq7g5vsGYAHeRfsb5nJA49I7XliODQi4Nraj+qZpagVkGBCKknPUbG2sOECalIUPb28
/ZIfD19/YDGXuq+rVEvTSripBMaAC5kXWbdd+u+lgY3WG94Bfiv0mhBOqpoG84dWL6f1YntG
sIViKHowPP66Vjz6iVTHyHA83TtoHTbM0kRRAaJHCpLf4QT8erp341yYXGMsxsZYl0DQQH4a
BcnCXGfJHozzMC1+vRzB55TcjBYQeH83heeLm+Vy2CYFXmH5GBrsanU+t0YDTwOcHcO7By4Q
4HqOVL1doa5xzSyyI2Sf5sngQz0OgaAdHcF6MUIQEzqbL+U0kDPQFHIKhJfRyydW3GZw2IzJ
h5RL80zlflpSsl4FYoAYgoSubmaBaF7dQlr9d2S16ve+Px8fnn/8Nvu3vo+LfTRpIrn8fIYA
wYjdzeS33ijKyvprOgyirxh0RiRnmic4p9ESFAwX2jQeAqiGsSmnm200MhIlV4NRNQsUHZDy
7eH7d+dssq0r/BOlNbrwIis4uEydGuY50GtLg4+5xK8Dh0qU2DXqkHQhYwMN6e0hQ02hgejM
DhFRrPWRB8KmOZRj50vX+8baRp8XehYeXj8gN8b75MNMRb8G0+vHXw+PHxCkWvOGk99gxj7u
3xTr6C/AbmYKkkruuGG6XSZq5khwRHLiGWzjZEqeDEVs94oD7xLsZneHuHEb65V9msfjEU+8
gW/wXP0/VWyHHfOkh+ldo87GEaSpwK7SomDnvAncqQN0SH2/Vnjgj0GtzHrPt5DqJo+ZgL9y
sjfBIodEJI6bGfwE3Um19u1uUYryQPHnU4uInvcRrsuzx2L3aTl8OeUnlEidbkuL8rOCMlrE
AVsVi+poApjmx39CXMnQeraIovRc1oEXe4sM6jti70uAqIuzpZbQEMlP6ALkeea62vm4mmKK
9AGVeUbAF4BFoS1JxsuTRY62VMHLUENDF5NHgysD7EnPSX3EvUSY4kNqUmZgfihpUVnGkBo1
sOAEqEfT7GN5ke5m0ciQCNogwS+4Fm4QQI3aH1AneNNenZjD/0JDTdB91XmIRs9RgUcTs81q
bjH9Gsa385vNagB105w1MI+tMlC2mM3R6CkafV5s/WJWy2HRG9d5uSFE2rCaIR8vBjDZhND2
oLfnYftn0xRnODU6T2OM3SxKqv1af9kAQWfL9Xa2HWJagckCHaiScC84sI2T9a+3j6/Tf/Ut
AhKFLrMDfvQAPrT0AJcezS2i2QMFmDy0QcAtNg0IFSO965a2D4eIUwi4tfpG4HXFmQ6/FG51
ccT1P2D7DS1FpMH2OxJFqy8sYPrUE7HsCx6UsCc5b6fY81dLEMvZYuqkz3UxNVXsUVVgfIVN
uFmGitgs61OMXgQ90drOZdnCBTmvnTyOLaKQK7rAvuAyUdt2G0LMkU/OCr4agnO62xrZc9An
jZoGXncdooVLhJHY2YMdxBZBiOWs3CLjYeAwyu4KBlx0t5jfYt2Qi9XiZopddS3FTixmruKh
mwC1pmbY6WgRrOxMi/aHc2S4mVhM5+giLI4Kg8dn7km220D81K6zsVrJ28E+BGXiJ/sQxvZm
vHBNgrOGzlbCdS0OCa5BsEmW423RJLg6wCa5wVWyzs4LxC3vRv1mEwjE3E/2crX9jATSt46T
wGZfjq8Ac1KMj6/aVfNZIJZ1Vw7NNzdYKjh97s8hSEwb26NbP5AyfnieD8Z8MV8gp4+B14eT
50bjNnozttNgf9xQpGyD6cp2LVRHW0tFJocniVo3cztVrgVfzZC9DvAVeoLCgb9d1TsiOOpb
b9FtluiozZfT5RAuy9vZpiRbrE6x3JZbLDiUTbBAjiaAr24QuBTrOda66G65nWLzka/oFBkn
mKYu++LL839AzfLJobQr1V/eCdxFsZDX5/eXN3yGlaDVuz11xfbQwCsBiI6DjBogtLF072TU
AFgTJ12rwVOWSBerH5GsusG0vyBqNPdh+VS7vCl0IKJjS3AOicwanZEyVEOenOsQToetPkDt
tdgLXD7raZD1FZ+gbOoFqG2g/WpoyTz3GQVmoaY1OPgEdRqWFRTpROZSfK5XWje/9PHh+vxh
zS+Rl5TW5bkppJ9DYGmthnfLoC6I9ptsi4yq3dCHThcKBjVWXJ2Thjp2Os3naLc1qhbZkTXp
W8bI2oRigVRKhujAiO9w2iYWcrvRjU11bo3unIg1y+Vmi/FFt1LtVosvNb91NNHfp/9dbLYe
wnO+ozuyh8N3ablv9DA17iX7fW7FR+MCpo9yDjaK+JI3hsMmqQ5KATaD2k8+qbOAh7JNgonn
Fl4/oNljNai4nXnHwJ1nNeU7F5DDCbhnKS/uHOMMhYohxaJB4UXXxI5PCwDJCprJhVcF5VYg
M6eKlJUBwyf4rqgCcZkBK3bqwghiD0csBn9DcNwpCp4JUWn7BusW0Rh15N7tYhdoN1wTpZku
IFR67r5ktzAI4D3ySS0EsSLIdWB16J4x8N5x7tNw4enG2zVc3NXRJYfXV0FSsncd6OGSaWML
Y83T6dGsBph0aYKl1QDouOH0sEYX5jS3QeKJSBtsBLHkXG6uwegAa+j8t80LZWo8xjk6CeDF
pJZFmVjnggHGuWWzq0E+hTcWGmYMyPtqNVB7DKKt0uij9N73PTwEJZGNSzmS4Kvxvf769vL+
8tfH5PDr9fr2n+Pk+8/r+wcSkKvNf+L89uOZN9Cq5Ikc0LbzY7nif1a9buP5+hxMdgCxxpB5
t8Dw0pEVl/qQlXmCqrKAWKttddZZOQwUDgQ6n+6xpAfr/dzUQm8hV6RNvJMuDRiKkbLBOKWC
cs6MjvYicnDqPzBZbSOp+d3bp8GnLY0uSKpD29c6wuJndMAf+nQdk6DXOVC7DVRbFcpvR+DJ
LTg/QgAvOZ6WxyZsygnSwW7AiOyi1LFEReyOPvC9WsmorbH8ZgrKILpQoMADBM3Mj+rwdrtu
coLZlVRlVp8TYAd++ZX7Uy68RaArOea6jm5nIIu+b/i+YJcIjbkly/bVrr/vCy7FHGzwcFYi
g5BpAeE82c5u5tjFpVBOQG7zWx05l1wNBKUiD+HKWx7EnZiLgtqdVw2AbeaLCOt6sd3M5pVD
vZ1ttwx/Oi9KuZpPceXGsVyvV7hCSKOCaeWk2KyGoqF8vd7/+PkKj+M67cf76/X69W9bupQ5
I7eVZ8nZB1bBvrY+NlNeDyLfmVzRz9/eXh6+OXmiG1BfhJIEayUFbuZLNPtVG36ycULtZmh3
KsuLTkdRZiV4pim+1k683uMhXUWDtnNW7NXxkO8JZHLEubOUqxNS5oE4gZAIbYd/eeIJnU2n
U20e+glFjq/+W7mZBjRkOV8uFoOh3t+//7h+OJm5vSnaE3nLSpM5BiKMohPuFWN1l7Mk1l4M
gXP/Nqd+gNcGc5e49tSnHTbR5+26CxRhBWlpJS84R092RGX1o45EtnMsIhLOUv3wfRL4uB4q
cmI8iDZKAyi6PFRpDIlH0BxC4iya1vTTwshdsNwzJ5kYVNv1jRWH2O0IJBRpXRQDn7jDYTy/
9sJ2bIOwnXVCci/uoAaPFa7xTuEASSMXyBjLaV+8A3UIYxpHxDEjUXJuos6riGcB+RzwRVRi
0mWDq5Dysu0WXYAaDZNKXPmog3vpydpeC55kdbG75Yl97lR/8FJWg4638BIc1h1+ep/D8UP1
5sODaebGsdz+SMFGpgiw7vKDTIbqgsIkqpiRnMSDBpuAXBJiYdu5XsF07xboXbtvBwxpRuzM
t10rXCqtVdwRCgZKPOBKhXzxD+gau2Swj0J67NLq3MP9SeIiFWN+yy5qehIn45I5B7RVhczn
gzzwDpUObXoMJcRs1JBpqc7GuZKqQ1ngDJ2SzpIMCxht0Bm5LQvPmtVgjt5m6U//qoC4zIvg
0dQQ1IsmVHuWF2zPAwErW+IcEjREVVniBumKV/ZXG8D8E5MaBaI2g8YME5qYhsOV28DvbMP/
1sY+Kvst26+eBnkYKPw8gtAJrZaKYistJZIWqxLkdE3a9iLl5CQlOtrrsEsQmREDQsVagnMU
uhdZMrFZ64ZhGyDL1a1fIK2D1yzt9aPWjSJJS04CLkQiOY8FbmrWdS6Ha7EIOLU2Fs0QfFFB
UkYRCwkdxU5xnddvE3l9vH79mJSK4Xx+eXz5/qu38AiHyNNOsKD9hex62otqGJXciZj3z+vy
qyorxR1oBhN/XTRUlc66DLGI7tqI/iPUuaDhuDg9Cc9D5nqaQnGupU/TrkBhjMXs1SR2sVbX
1AEHKnoolJjWLQZ8Awl1l5I0G10zNLkFmT3JMiVvWHobEEEVDjIyKG7bEmKNSTbgfu/yUD49
vTxP6OPL1x8mk+3/vbz9sNdB/w3Mzc0yYJ5vkUm+WizxB2uPavVPqJa4stciojFlm0B2WJtM
AkNdU9zJziIceBl0qR7RwbKYjJOSP1LUU8d8JF9+vn29Dh8UVa3sWIK93WphWeHCz1o7A/2y
KKMk7ij7tmHld5eFuuOizNIk59R5J2pfD6MMkySMwpxnR+v9imdE2tE9DQ2x9SMG1DM1Rqy6
Pl/fHr5OjA49v/9+1XboVninXnT6hNTapromwx3hu6mlaIJOEilLtQmrPea92NDar3VExAaM
gOqj9UitvioMx2rbUpuHU9HohIbgWh7HGCK38SiHYBPukizPL/WJBGujJNFpXSG83iflFnd1
wZy3iUb/2/bHmAten14+rq9vL1/R53YGkXJBaYfuKuRjU+jr0/t3tLxcyOa1eK+DPxQBftIQ
mlcBvGqnCosfgLynIC4M1T+qE7/JX+8f16dJpo6Cvx9e/w1anK8Pf6ml+v+VPVlz20iPf8Xl
p92qzIwtH7G3yg8USUkc8TIPSfYLS+MoiWpiO2XL9SX76xdAd5N9oGnvw4wjAOy70QAaDQxP
7oS55hGOPgDXz6YbgjLdMGjx3as4RD2fuViRbfzlefvl4fnR9x2LF5EON+Vfs5fd7vVhC/vr
9vklufUV8h6peFzyZ7bxFeDgCHn7tv0BTfO2ncXr8xVaYYTEXcP+x/7pl1Nmb0OAxbPpVmHL
rg3u496M96FVMMgIaH9BiaW/1Rc/j+bPQPj0bLiWCFQ3L1YypBvsTPGIw1S1BzLYjyggYOgY
j/av0aJOgtmj3qXEhyV16UtBY5QJ7DRZuXtF9ZJ5ij0MiavqKcPHBqVaNWLxr8MDHL0y7idT
oiDvZnUAIgpv6JMkXpVR4nsN8+z8mpcpJCEGoTjzmJYlSdnkF6cXo82pmqvrz2e8a4okqbOL
C49bnqRQsWA8oiTemfEHDPt2LG+MlyLwE9VNtgDEwVHoxSURr7sQDgfaixXRBRqPDI0UIGrN
yyLn7RtI0BQe5YC+hl3j/xLfO3kTX61AeudvbEBy1ISsdeY+AUGg3yqD2LSsa+/L9oFgLNwx
UtF7W1NYFyphdXv0AAzL0PmUCmfjtJVSYo5tX6ShKsboUlIfS81HOsJjcnEHgts/r8QzB24n
XTVkxKO+uGmYdcsiDyh2FCL5Xi7uMIRON7nKMwoV9T4VluelEhs/zjJeTjG7oH2KLNUKrj3s
vXDqjgUows8vj9sn4GSgT+wPzy/cZIyR9ddkgbG44GcX+sN2nDtN0S+UlGCbR1XhCYrfXzYp
20wyzVdRokcbVPGQ0btjgOJ7t3Rp/LYyySNFo3lXTPX44fiAcabd1YtKCfbbgkXBxoFRbqnB
9S7YSJcXA6b9gOZHgWb3lwCrTwq6ZKFIqyxNWruNl5r0s2cRwvN0fXR42T5gSGDGJlM3Y3qC
HepHpSpwixy+xBs73oAXc86QcG6DcmG4GNKtngic6uMOdVJ48ualSeb7iIxNoWvX0hT21hvV
Jytsm5lyRDQFCBrc2R5vYmlb64JYGISLuFtjph35cFf3RQrSJAqaGAQOdAKs2cSpgAM1KTDG
Cw7sSefRUgF3xoc+A8x5p/ubEABzW8IZQGVaKGxWUScbaHrqouo4bKukubMadu59IPb3NJro
xPjbSwwVZFMaPYM3xfgyE3Cezv/toJScTgjNDQR+37ZFo+nnG767CNZfauLvIk/Rx9Z62alh
0GamZyBClHr0qoFA5o0rvIBq9EjO81k9MRorAWTEwUvjKNX2fxHa5ArSFRM9xngP7oVyYJ9t
bYSz72nqJmhquxLxKDcL6iVm59WmRUezwz9tKmsCFMQY8uHsVliYf5AYcBvPK18kg564akHM
DXKg6/xuxoLaLz0JvJiZd6qLZ/jK3uf0nCepGExudU+s4SAADrqxQSVZtwmapnLBzGpVKG5z
Ek4MqGf7EEVSoOTqUdlE+WTrYR8DW4Q1nV4YhtVHd1/ksW/T4jzpZ6r4DedHZMBYHoVyt/Vu
WsJk1LCiZKtM0ljts6E4VJwxmOqdBz9Dp0ryWEr0+KwGuAvSudEewOLqYaNEzGrhTa/JEzYg
EQDazVqVgU2nIPL8QZUlS2g+tIVmsUL6iR6oZBnrb2w0rQSD4UmydVDlloOZQPhYu8A2VWyw
9ttZ1nQrLmqQwEys5oVN6kKGazkliLZNMavNY0/AzI1Gp6C2H8PWTNspvYLZZVrANKbBnfh+
4Ik9FJMFJhXefUUJd8JzlEG6DkAimoEiVKwNVjsQJ3kU8yKRRrSBdUI9fo8wi2EEi9J1CA63
D9/1t0azWhzLjxagPzO0FS4Qi6RuinnliTuoqPwsWVEUU+Q4nZ2LSU0Z0lB4WH0aBuhIBRqR
p63qgkSMhRiX6I+qyP6KVhEJf47sB8Ls9eXlibGs/i7SJNaW5z0Q6euwjWZqGaka+VqEbayo
/wLZ4a94g//PG74dM3GoaJf+8J0BWdkk+FuZ9DGUBHpB35yffebwSYFPS0Bzvznevj7s91o0
AZ2sbWa8ayU13ncg5Q0j6ykpfKz3Qkd+3b19eT76yo0K3iQYLIAAS/NlGMFWmQQOyvoAlk5x
GCeW9S1ASlBtDGZFQBxSTOKVNLqXN6HCRZJGle5bLb7AZHyYbA33WWu3PCxbNKCETaXVtIwr
w8HcCrjQZKXzkztNBcISQhbtHM6HqV6ABFHftMMzFnfZseEK3aeNmydz9HgIra/EH4tNwx5d
BVUnz3Vl4nBnua86qcUDN+GbYTCnosLIgH5VIohGcDM/LqYD34dd+D8ElEji6JFbR9o6HWnO
mLLkyqeDSj5NfGJZCNzROCvptxCrrBgdEsWHRKtv26Be6CUpiBCzHO3PRIuTcqRcioOTlR3m
TE75giSFP+QuS4kyVMgGiezJrc3Sw+9F5Ba3/PT+fKy89L5gStvcs2Xd1w1vTO8pzsm+NiXf
iHte0u9p42waY96OsebNqmCexSASSkkACr0506SnjW8tZUkO3MaSnLKRTVL6cbf55nwUe+nH
Vkylir1i9Gn9UKDfeLDh445ewTEOCEECk9ajeYOxojv/KN0i/BDl1fnkQ3S4UlhCk0zr4/gg
uC+lrBJ6guMvu68/tofdsUNo5a6ScLyFZ4Z45qiqJh74j7604BxYeTneCBOtCt/qACUJ/fat
U0Yh1fk1CDSo9XFulYQ4Mz9dnZnnMMGM2D4IqddsTk5B3J3an3eaIlXmipmCElC0mpGZMFbE
bUGdgrzFfaHq6+i+GZlBQGovSC1RkQVJfnP87+7laffjz+eXb8fWiOB3WQJitye4mCRSVg2o
fBprA0P5QnN3pFGrk2HUopydPUmEglKcIpE5XJbNjkBJTX4vbVS6YdyAIDKGJILZdiYxsmc6
4qY6wrk2OxSJKRFDzwvMSITv4t6jUfP4Hh0uGGEB6Oqac15UVL65mVfkmxxXSaGZbUhWsH6K
/mpDDSPCDvGQ61ht6zavytD+3c31TFIShs8FZTQMbf2UITQf6btlNb0wOIb4TM16klM/MUtj
iK+/2Rd38hNz7UjopqwaCrJoKKlxufDIWol5NuJvoWhzTISw+GpzPTS0fzqt06zjAD0UUQBf
WKi2xMeXFtASZwhGioIFcwI7DlD+ynTAkwpFt2y+jkV666wRyaaMPGjSSCuC5xooCvzCvof1
X5eGckI/eXO2QKkdwm0iPTQL/BiOybfD16tjHaP08Q70cfObHvP57LPGhAzM5wsP5urixIuZ
eDH+0nwtuLr01nN56sV4W6DHbLMw516Mt9WXl17MtQdzfeb75to7otdnvv5cn/vqufps9Sep
i6uri+vuyvPB6cRbP6CsoaYAJeZqUuWf8tVOePAZD/a0/YIHX/Lgzzz4mgefeppy6mnLqdWY
ZZFcdRUDa00YBgoC7UFPBqfAYQzKYsjB4Txtq4LBVAWIQGxZd1WSplxp8yDm4VWsZ25W4CTE
zHURg8jbpPH0jW1S01bLpF6YCLTzaY4MaWb8cA+INk9CK0G3xCRFt77VDT3GFbtwpN09vL3s
D7/d0EbSW6OvBn93VXzbYu465xxQAm5c1QmI8KDHAn2V5OaFzVSWw3zZVHiRGllOIvLyZ4Dr
zemiRVdAjSTyelwelEgVZXFN/lFNlfAmj+G6z4IY5j9VnlRaNEUA2UIjJBxQvwJ5j+W2hA+J
7Sm/28yqjKm+DBpN6pB+JxtNxkvrjILZoHmAYoffXF5cnF0oNL1NWQRVFOexCF6OFxciXEUg
7KmDwcAm428YQMTEO7K6aCvPzScKXZRIMK7Q5X0RpyXrr9H3soZtmbcbpv8S0+Er9zJAFZUb
akUlRc8PVIU2nTgtypEqg1Vo3+07NHRFDHulrECrWgVpG9+ceonrJIJ1Q4IkbBAo93qMdAIr
WLcRTS4uuZ4Dd+HD2PQkTZEVd5zbaU8RlDC0mW5cd1CW+MvjNZOG24ye0n+3NOhFRRCViecJ
pyK6CzwB5obBCWboJ+lJAqbVBhpVsc5xL3HMVXlUmPtwLqpI5nmA2T05ZFDfZZgxGfaCye0G
Eo0bVlbCgL6UNkq0DZ/oD2USjO8XBzVqL2VYYTTBm9MTHYv8oWpTM3IiIpo4Q59W9jgBdD7v


From xen-devel-bounces@lists.xenproject.org Wed May 20 06:01:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 06:01:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbHmc-0002ko-6H; Wed, 20 May 2020 06:00:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jY/2=7C=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jbHma-0002kj-Q1
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 06:00:48 +0000
X-Inumbo-ID: 3ff70394-9a5f-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ff70394-9a5f-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 06:00:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6E859AD11;
 Wed, 20 May 2020 06:00:49 +0000 (UTC)
Subject: Re: grant table issues mapping a ring order 10
To: Stefano Stabellini <sstabellini@kernel.org>, boris.ostrovsky@oracle.com
References: <alpine.DEB.2.21.2005191252040.27502@sstabellini-ThinkPad-T480s>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <03bad8fd-9826-7652-1c08-549e22634f8d@suse.com>
Date: Wed, 20 May 2020 08:00:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005191252040.27502@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.20 23:21, Stefano Stabellini wrote:
> Hi Juergen, Boris,
> 
> I am trying to increase the size of the rings used for Xen 9pfs
> connections for performance reasons and also to reduce the likelihood of
> the backend having to wait on the frontend to free up space from the
> ring.
> 
> FYI I realized that we cannot choose order 11 or greater in Linux
> because then we run into the hard limit CONFIG_FORCE_MAX_ZONEORDER=11.
> But that is not the reason why I am writing to you :-)
> 
> 
> The reason why I am writing is that even order 10 fails for some
> grant-table related reason I cannot explain. There are two rings, each
> of them order 10. Mapping the first ring results in an error. (Order 9
> works fine, resulting in both rings being mapped correctly.)
> 
> QEMU tries to map the refs but gets an error:
> 
>    gnttab: error: mmap failed: Invalid argument
>    xen be: 9pfs-0: xen be: 9pfs-0: xengnttab_map_domain_grant_refs failed: Invalid argument
>    xengnttab_map_domain_grant_refs failed: Invalid argument
> 
> The error comes from Xen. The hypervisor returns GNTST_bad_gntref to
> Linux (drivers/xen/grant-table.c:gnttab_map_refs). Then:
> 
>      	if (map->map_ops[i].status) {
> 			err = -EINVAL;
> 			continue;
> 		}
> 
> So Linux returns -EINVAL to QEMU. The refs seem to be garbage. The
> following printks are in Xen in the implementation of map_grant_ref:
> 
> (XEN) DEBUG map_grant_ref 1017 ref=998 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=999 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=2050669706 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x7a3abc8a for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=19 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=56423797 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x35cf575 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=348793 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x55279 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=2070386184 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x7b679608 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=3421871 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af for d1
> (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=875999099 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af7b for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=2705045486 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa13bb7ee for d1
> (XEN) DEBUG map_grant_ref 1013 ref=4294967295 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xffffffff for d1
> (XEN) DEBUG map_grant_ref 1013 ref=213291910 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xcb69386 for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=4912 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0x1330 for d1
> (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
> (XEN) DEBUG map_grant_ref 1017 ref=24 nr=2560
> (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> 
> 
> Full logs https://pastebin.com/QLTUaUGJ
> It is worth mentioning that no limits are being reached: we are below
> 2500 entries per domain and below the 64 pages of grant refs per domain.
> 
> What seems to happen is that after ref 999, the next refs are garbage.
> Do you have any ideas why?

I don't think there is enough space for all the needed grant refs in the
initial interface page passed via Xenstore. So how do you pass the refs
to the backend?
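
[Editorial note: the capacity concern above can be checked with quick
arithmetic. This is a back-of-the-envelope sketch, not the actual
struct xen_9pfs_data_intf layout; the header size below is an assumed
placeholder.]

```python
# How many 32-bit grant refs fit in one 4 KiB interface page after
# a small ring header, versus how many an order-N ring needs.
PAGE_SIZE = 4096
REF_SIZE = 4            # sizeof(grant_ref_t), a 32-bit value
HEADER_BYTES = 64       # assumed header size, not the real struct layout

def refs_that_fit(page_size=PAGE_SIZE, header=HEADER_BYTES):
    # Remaining bytes after the header, divided into 32-bit refs.
    return (page_size - header) // REF_SIZE

def refs_needed(order):
    # An order-N ring spans 2^N pages, each needing one grant ref.
    return 1 << order

for order in (9, 10):
    need, fit = refs_needed(order), refs_that_fit()
    # 9: needs 512, fits; 10: needs 1024 > 1008, does not fit
    print(order, need, fit, need <= fit)
```

An order-9 ring (512 refs) fits comfortably, while order 10 (1024 refs)
overflows a single page regardless of the exact header size. This would
be consistent with the observation above that refs become garbage right
after ref 999: reads past the end of the interface page pick up whatever
memory follows.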


Juergen


From xen-devel-bounces@lists.xenproject.org Wed May 20 06:09:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 06:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbHui-0002zE-1m; Wed, 20 May 2020 06:09:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbHuh-0002z9-Ln
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 06:09:11 +0000
X-Inumbo-ID: 6b6a944a-9a60-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b6a944a-9a60-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 06:09:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uzQrjMfa02vktwWZLLe/tdrhbF3DbpXszYIK7HGWnAs=; b=7DN837A5Ekfz5mjB+W2nayOm7
 8WnFtA1tFzYaRx70XhyqRM8bNjbP4StJ9US8whjJD3daOqRlIAMO1VwwtBPqZ5YwkVaR6VmgoKQU3
 VjdoVFePTxECuJdk3KUc1isCjpiFDeMLdQtVfSZDSsRBy5ernoFS6WbZl2k5UlcL35V/w=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbHuf-0000p7-DY; Wed, 20 May 2020 06:09:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbHuf-0000JS-1f; Wed, 20 May 2020 06:09:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbHuf-0000kr-0z; Wed, 20 May 2020 06:09:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150260-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150260: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-armhf-armhf-libvirt-raw:xen-boot:fail:regression
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=f2465433b43fb87766d79f42191607dac4aed5b4
X-Osstest-Versions-That: qemuu=a89af8c20ab289785806fcc1589b0f4265bd4e90
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 06:09:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150260 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150260/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-libvirt-raw  7 xen-boot                 fail REGR. vs. 150243

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150243

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150243
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150243
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150243
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150243
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150243
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                f2465433b43fb87766d79f42191607dac4aed5b4
baseline version:
 qemuu                a89af8c20ab289785806fcc1589b0f4265bd4e90

Last test of basis   150243  2020-05-19 11:07:14 Z    0 days
Testing same since   150260  2020-05-19 19:14:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  John Snow <jsnow@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Max Reitz <mreitz@redhat.com>
  Oleksandr Natalenko <oleksandr@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1109 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 20 07:10:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 07:10:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbIrt-0000O9-UU; Wed, 20 May 2020 07:10:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RvRK=7C=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jbIrs-0000O4-Vv
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 07:10:21 +0000
X-Inumbo-ID: f6dc6910-9a68-11ea-9887-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6dc6910-9a68-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 07:10:20 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jbIrm-000Dvy-0I; Wed, 20 May 2020 07:10:14 +0000
Date: Wed, 20 May 2020 08:10:13 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
Message-ID: <20200520071013.GA53525@deinos.phlegethon.org>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
 <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
 <3088e420-a72a-1b2d-144f-115610488418@suse.com>
 <1750cbe5-ef48-6dc7-e372-cbc0a8cbc9cc@citrix.com>
 <4a5c33c0-9245-126b-123e-3980a9135190@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <4a5c33c0-9245-126b-123e-3980a9135190@suse.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At 18:09 +0200 on 19 May (1589911795), Jan Beulich wrote:
> static inline int sh_type_has_up_pointer(struct domain *d, unsigned int t)
> {
>     /* Multi-page shadows don't have up-pointers */
>     if ( t == SH_type_l1_32_shadow
>          || t == SH_type_fl1_32_shadow
>          || t == SH_type_l2_32_shadow )
>         return 0;
>     /* Pinnable shadows don't have up-pointers either */
>     return !sh_type_is_pinnable(d, t);
> }
> 
> It's unclear to me in which way SH_type_l1_32_shadow and
> SH_type_l2_32_shadow are "multi-page" shadows; I'd rather have
> expected all three SH_type_fl1_*_shadow to be. Tim?

They are multi-page in the sense that the shadow itself is more than a
page long (because it needs to have 1024 8-byte entries).

The FL1 shadows are the same size as their equivalent L1s.

Tim.


From xen-devel-bounces@lists.xenproject.org Wed May 20 07:45:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 07:45:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbJPY-00037i-KR; Wed, 20 May 2020 07:45:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbJPW-00037d-TW
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 07:45:06 +0000
X-Inumbo-ID: d1e89ea8-9a6d-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1e89ea8-9a6d-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 07:45:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=F6h0UqHrQAAUaF2QDouqv8LKU2R+fjgjVqcGe9N32w4=; b=dQtF1qK+1eXRCoDUk7KGNcLKQ
 ap/CCFHW9bEjaZAL0A98h/By3Ph4xEjbzXOlZh/HXg3KdSHix1zOIoHlFTnyhHmuLoQoQRFWRI+z8
 yubm/hExHC2jFmE1jb3xGm/yGhGKMGOi778tV/GWKojUAnQoxL2yyHBmdIU5UbVnDjA34=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbJPU-0002lf-Kl; Wed, 20 May 2020 07:45:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbJPU-0003gb-Cd; Wed, 20 May 2020 07:45:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbJPU-0001os-Ag; Wed, 20 May 2020 07:45:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150268: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=6866a324abe2b27010f2085afd2353e9d10014fe
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 07:45:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150268 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150268/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6866a324abe2b27010f2085afd2353e9d10014fe
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  124 days
Failing since        146211  2020-01-18 04:18:52 Z  123 days  114 attempts
Testing same since   150268  2020-05-20 04:18:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 18546 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 20 07:49:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 07:49:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbJTc-0003Ih-BH; Wed, 20 May 2020 07:49:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbJTb-0003Ib-1h
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 07:49:19 +0000
X-Inumbo-ID: 685782fa-9a6e-11ea-a9cb-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 685782fa-9a6e-11ea-a9cb-12813bfff9fa;
 Wed, 20 May 2020 07:49:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 597FFAC69;
 Wed, 20 May 2020 07:49:19 +0000 (UTC)
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
 <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
 <3088e420-a72a-1b2d-144f-115610488418@suse.com>
 <1750cbe5-ef48-6dc7-e372-cbc0a8cbc9cc@citrix.com>
 <4a5c33c0-9245-126b-123e-3980a9135190@suse.com>
 <1808df24-ecde-97c6-c296-ecf385260395@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <90bf918e-3b87-b1be-344f-80a1bd6803a8@suse.com>
Date: Wed, 20 May 2020 09:48:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <1808df24-ecde-97c6-c296-ecf385260395@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 20:00, Andrew Cooper wrote:
> On 19/05/2020 17:09, Jan Beulich wrote:
>> In any event there would be 12 bits to reclaim from the up
>> pointer - it being a physical address, there'll not be more
>> than 52 significant bits.
> 
> Right, but for L1TF safety, the address bits in the PTE must not be
> cacheable.

So if I understand this right, your response was only indirectly
related to what I said: You mean that no matter whether we find
a way to store full-width GFNs, SH_L1E_MMIO_MAGIC can't have
arbitrarily many set bits dropped. On L1TF vulnerable hardware,
that is (i.e. in principle the constant could become a variable
to be determined at boot).

> Currently, on fully populated multi-socket servers, the MMIO fastpath
> relies on the top 4G of address space not being cacheable, which is the
> safest we can reasonably manage.  Extending this by a nibble takes us to
> 16G which is not meaningfully less safe.

That's 64G (36 address bits), isn't it? Looking at
l1tf_calculations(), I'd be worried in particular that Penryn /
Dunnington might not support more than 36 address bits (I don't
think I have anywhere to check). Even if it was 38, 39, or 40
bits, 64G becomes a not insignificant part of the overall 256G /
512G / 1T address space. Then again the top quarter assumption
in l1tf_calculations() would still be met in this latter case.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 07:50:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 07:50:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbJUY-00040P-L3; Wed, 20 May 2020 07:50:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbJUY-00040F-5M
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 07:50:18 +0000
X-Inumbo-ID: 884b03cb-9a6e-11ea-a9cc-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 884b03cb-9a6e-11ea-a9cc-12813bfff9fa;
 Wed, 20 May 2020 07:50:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 76364AE39;
 Wed, 20 May 2020 07:50:14 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v3 0/3] x86: XSA-298 follow-up
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
Date: Wed, 20 May 2020 09:50:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

A few things stumbled across while doing the investigations.

1: relax GDT check in arch_set_info_guest()
2: relax LDT check in arch_set_info_guest()
3: PV: polish pv_set_gdt()

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 07:53:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 07:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbJY0-0004Ei-4m; Wed, 20 May 2020 07:53:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbJXy-0004Ed-Pz
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 07:53:50 +0000
X-Inumbo-ID: 0a86d896-9a6f-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a86d896-9a6f-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 07:53:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E3BC7ACED;
 Wed, 20 May 2020 07:53:51 +0000 (UTC)
Subject: [PATCH v3 1/3] x86: relax GDT check in arch_set_info_guest()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
Message-ID: <acbaead9-0f6c-3606-e809-57dafe9b3f01@suse.com>
Date: Wed, 20 May 2020 09:53:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

It is wrong for us to check frames beyond the guest-specified limit
(in the compat case the corresponding loop bound is already correct).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v3: Move nr_gdt_frames range check earlier. Avoid |= where not really
    needed.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -840,6 +840,7 @@ int arch_set_info_guest(
 #ifdef CONFIG_PV
     mfn_t cr3_mfn;
     struct page_info *cr3_page = NULL;
+    unsigned int nr_gdt_frames;
     int rc = 0;
 #endif
 
@@ -957,6 +958,10 @@ int arch_set_info_guest(
     /* Ensure real hardware interrupts are enabled. */
     v->arch.user_regs.eflags |= X86_EFLAGS_IF;
 
+    nr_gdt_frames = DIV_ROUND_UP(c(gdt_ents), 512);
+    if ( nr_gdt_frames > ARRAY_SIZE(v->arch.pv.gdt_frames) )
+        return -EINVAL;
+
     if ( !v->is_initialised )
     {
         if ( !compat && !(flags & VGCF_in_kernel) && !c.nat->ctrlreg[1] )
@@ -988,9 +993,9 @@ int arch_set_info_guest(
             fail = compat_pfn_to_cr3(pfn) != c.cmp->ctrlreg[3];
         }
 
-        for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i )
-            fail |= v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);
         fail |= v->arch.pv.gdt_ents != c(gdt_ents);
+        for ( i = 0; !fail && i < nr_gdt_frames; ++i )
+            fail = v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);
 
         fail |= v->arch.pv.ldt_base != c(ldt_base);
         fail |= v->arch.pv.ldt_ents != c(ldt_ents);
@@ -1095,12 +1100,8 @@ int arch_set_info_guest(
     else
     {
         unsigned long gdt_frames[ARRAY_SIZE(v->arch.pv.gdt_frames)];
-        unsigned int nr_frames = DIV_ROUND_UP(c.cmp->gdt_ents, 512);
-
-        if ( nr_frames > ARRAY_SIZE(v->arch.pv.gdt_frames) )
-            return -EINVAL;
 
-        for ( i = 0; i < nr_frames; ++i )
+        for ( i = 0; i < nr_gdt_frames; ++i )
             gdt_frames[i] = c.cmp->gdt_frames[i];
 
         rc = (int)pv_set_gdt(v, gdt_frames, c.cmp->gdt_ents);



From xen-devel-bounces@lists.xenproject.org Wed May 20 07:54:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 07:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbJYO-0004Go-DI; Wed, 20 May 2020 07:54:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbJYN-0004Gf-CS
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 07:54:15 +0000
X-Inumbo-ID: 193cb860-9a6f-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 193cb860-9a6f-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 07:54:14 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A9D38AC37;
 Wed, 20 May 2020 07:54:16 +0000 (UTC)
Subject: [PATCH v3 2/3] x86: relax LDT check in arch_set_info_guest()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
Message-ID: <802123c8-026d-ba84-5f0d-0faecd5a46a9@suse.com>
Date: Wed, 20 May 2020 09:54:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

It is wrong for us to check the base address when there's no LDT in the
first place. Once we no longer do this check, we can also set the base
address to a non-canonical value when the LDT is empty.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v3: Re-base over changes to earlier patch.
v2: Set v->arch.pv.ldt_base to non-canonical for an empty LDT, plus
    related necessary adjustments.

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -967,8 +967,10 @@ int arch_set_info_guest(
         if ( !compat && !(flags & VGCF_in_kernel) && !c.nat->ctrlreg[1] )
             return -EINVAL;
 
-        v->arch.pv.ldt_base = c(ldt_base);
         v->arch.pv.ldt_ents = c(ldt_ents);
+        v->arch.pv.ldt_base = v->arch.pv.ldt_ents
+                              ? c(ldt_base)
+                              : (unsigned long)ZERO_BLOCK_PTR;
     }
     else
     {
@@ -997,8 +999,9 @@ int arch_set_info_guest(
         for ( i = 0; !fail && i < nr_gdt_frames; ++i )
             fail = v->arch.pv.gdt_frames[i] != c(gdt_frames[i]);
 
-        fail |= v->arch.pv.ldt_base != c(ldt_base);
         fail |= v->arch.pv.ldt_ents != c(ldt_ents);
+        if ( v->arch.pv.ldt_ents )
+            fail |= v->arch.pv.ldt_base != c(ldt_base);
 
         if ( fail )
            return -EOPNOTSUPP;
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -1583,7 +1583,7 @@ void arch_get_info_guest(struct vcpu *v,
     }
     else
     {
-        c(ldt_base = v->arch.pv.ldt_base);
+        c(ldt_base = v->arch.pv.ldt_ents ? v->arch.pv.ldt_base : 0);
         c(ldt_ents = v->arch.pv.ldt_ents);
         for ( i = 0; i < ARRAY_SIZE(v->arch.pv.gdt_frames); ++i )
             c(gdt_frames[i] = v->arch.pv.gdt_frames[i]);
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -3669,14 +3669,15 @@ long do_mmuext_op(
         case MMUEXT_SET_LDT:
         {
             unsigned int ents = op.arg2.nr_ents;
-            unsigned long ptr = ents ? op.arg1.linear_addr : 0;
+            unsigned long ptr = ents ? op.arg1.linear_addr
+                                     : (unsigned long)ZERO_BLOCK_PTR;
 
             if ( unlikely(currd != pg_owner) )
                 rc = -EPERM;
             else if ( paging_mode_external(currd) )
                 rc = -EINVAL;
-            else if ( ((ptr & (PAGE_SIZE - 1)) != 0) || !__addr_ok(ptr) ||
-                      (ents > 8192) )
+            else if ( (ents > 8192) ||
+                      (ents && ((ptr & (PAGE_SIZE - 1)) || !__addr_ok(ptr))) )
             {
                 gdprintk(XENLOG_WARNING,
                          "Bad args to SET_LDT: ptr=%lx, ents=%x\n", ptr, ents);



From xen-devel-bounces@lists.xenproject.org Wed May 20 07:54:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 07:54:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbJYo-0004Ke-MS; Wed, 20 May 2020 07:54:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbJYm-0004KP-S6
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 07:54:40 +0000
X-Inumbo-ID: 2822b28a-9a6f-11ea-a9cc-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2822b28a-9a6f-11ea-a9cc-12813bfff9fa;
 Wed, 20 May 2020 07:54:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7345DAC37;
 Wed, 20 May 2020 07:54:41 +0000 (UTC)
Subject: [PATCH v3 3/3] x86/PV: polish pv_set_gdt()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
Message-ID: <ceae9c78-d0be-c1eb-88fc-c036b050f668@suse.com>
Date: Wed, 20 May 2020 09:54:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

There's no need to invoke get_page_from_gfn(), and there's also no need
to update the passed in frames[]. Invoke get_page_and_type() directly.

Also make the function's frames[] parameter const, change its return
type to int, and drop the bogus casts from two of its invocations.

Finally, a little bit of cosmetics.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1099,7 +1099,7 @@ int arch_set_info_guest(
         return rc;
 
     if ( !compat )
-        rc = (int)pv_set_gdt(v, c.nat->gdt_frames, c.nat->gdt_ents);
+        rc = pv_set_gdt(v, c.nat->gdt_frames, c.nat->gdt_ents);
     else
     {
         unsigned long gdt_frames[ARRAY_SIZE(v->arch.pv.gdt_frames)];
@@ -1107,7 +1107,7 @@ int arch_set_info_guest(
         for ( i = 0; i < nr_gdt_frames; ++i )
             gdt_frames[i] = c.cmp->gdt_frames[i];
 
-        rc = (int)pv_set_gdt(v, gdt_frames, c.cmp->gdt_ents);
+        rc = pv_set_gdt(v, gdt_frames, c.cmp->gdt_ents);
     }
     if ( rc != 0 )
         return rc;
--- a/xen/arch/x86/pv/descriptor-tables.c
+++ b/xen/arch/x86/pv/descriptor-tables.c
@@ -81,7 +81,8 @@ void pv_destroy_gdt(struct vcpu *v)
     }
 }
 
-long pv_set_gdt(struct vcpu *v, unsigned long *frames, unsigned int entries)
+int pv_set_gdt(struct vcpu *v, const unsigned long frames[],
+               unsigned int entries)
 {
     struct domain *d = v->domain;
     l1_pgentry_t *pl1e;
@@ -95,17 +96,11 @@ long pv_set_gdt(struct vcpu *v, unsigned
     /* Check the pages in the new GDT. */
     for ( i = 0; i < nr_frames; i++ )
     {
-        struct page_info *page;
+        mfn_t mfn = _mfn(frames[i]);
 
-        page = get_page_from_gfn(d, frames[i], NULL, P2M_ALLOC);
-        if ( !page )
+        if ( !mfn_valid(mfn) ||
+             !get_page_and_type(mfn_to_page(mfn), d, PGT_seg_desc_page) )
             goto fail;
-        if ( !get_page_type(page, PGT_seg_desc_page) )
-        {
-            put_page(page);
-            goto fail;
-        }
-        frames[i] = mfn_x(page_to_mfn(page));
     }
 
     /* Tear down the old GDT. */
@@ -124,9 +119,8 @@ long pv_set_gdt(struct vcpu *v, unsigned
 
  fail:
     while ( i-- > 0 )
-    {
         put_page_and_type(mfn_to_page(_mfn(frames[i])));
-    }
+
     return -EINVAL;
 }
 
--- a/xen/include/asm-x86/pv/mm.h
+++ b/xen/include/asm-x86/pv/mm.h
@@ -25,7 +25,8 @@
 
 int pv_ro_page_fault(unsigned long addr, struct cpu_user_regs *regs);
 
-long pv_set_gdt(struct vcpu *v, unsigned long *frames, unsigned int entries);
+int pv_set_gdt(struct vcpu *v, const unsigned long frames[],
+               unsigned int entries);
 void pv_destroy_gdt(struct vcpu *v);
 
 bool pv_map_ldt_shadow_page(unsigned int off);
@@ -43,8 +44,8 @@ static inline int pv_ro_page_fault(unsig
     return 0;
 }
 
-static inline long pv_set_gdt(struct vcpu *v, unsigned long *frames,
-                              unsigned int entries)
+static inline int pv_set_gdt(struct vcpu *v, const unsigned long frames[],
+                             unsigned int entries)
 { ASSERT_UNREACHABLE(); return -EINVAL; }
 static inline void pv_destroy_gdt(struct vcpu *v) { ASSERT_UNREACHABLE(); }
 
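The error path kept by this patch is the classic partial-acquire unwind: on failure at frame i, drop only the references already taken. A toy model (hypothetical `get_frame()`/`put_frame()` standing in for `get_page_and_type()`/`put_page_and_type()`, with a visible refcount array) shows the shape:

```c
#include <assert.h>
#include <stdbool.h>

/* Toy per-frame refcounts; a stand-in for real page references. */
static int refcnt[4];

static bool get_frame(unsigned int i, bool fail)
{
    if (fail)
        return false;
    refcnt[i]++;
    return true;
}

static void put_frame(unsigned int i)
{
    refcnt[i]--;
}

/* Mirrors pv_set_gdt()'s fail path: frames 0 .. i-1 were acquired,
 * so exactly those are released, in reverse order. */
static int acquire_frames(unsigned int nr, unsigned int bad)
{
    unsigned int i;

    for (i = 0; i < nr; i++)
        if (!get_frame(i, i == bad))
            goto fail;
    return 0;

 fail:
    while (i-- > 0)
        put_frame(i);
    return -22; /* -EINVAL */
}
```

Note the `while ( i-- > 0 )` idiom releases index i-1 down to 0 and never touches the frame whose acquisition failed.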



From xen-devel-bounces@lists.xenproject.org Wed May 20 08:32:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 08:32:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbK8b-0008Og-0s; Wed, 20 May 2020 08:31:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbK8a-0008Ob-7F
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 08:31:40 +0000
X-Inumbo-ID: 5317d7ea-9a74-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5317d7ea-9a74-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 08:31:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 1989FAE68;
 Wed, 20 May 2020 08:31:41 +0000 (UTC)
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
Date: Wed, 20 May 2020 10:31:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200514140522.GD54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 14.05.2020 16:05, Roger Pau Monné wrote:
> On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
>> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
>>   efi/mkreloc: efi/mkreloc.c
>>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>   
>> +nocov-y += hweight.o
>> +noubsan-y += hweight.o
>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> 
> Why not use clobbers in the asm to list the scratch registers? Is it
> that much expensive?

The goal is to disturb the call sites as little as possible. There's
no point avoiding the scratch registers when no call is made (i.e.
when the POPCNT insn can be used). Taking away from the compiler 7
out of 15 registers (that it can hold live data in) seems quite a
lot to me.

>> --- a/xen/include/asm-x86/bitops.h
>> +++ b/xen/include/asm-x86/bitops.h
>> @@ -475,9 +475,36 @@ static inline int fls(unsigned int x)
>>    *
>>    * The Hamming Weight of a number is the total number of bits set in it.
>>    */
>> +#ifndef __clang__
>> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
>> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
>> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
>> +
>> +#define hweight_(n, x, insn, setup, cout, cin) ({                         \
>> +    unsigned int res_;                                                    \
>> +    /*                                                                    \
>> +     * For the function call the POPCNT input register needs to be marked \
>> +     * modified as well. Set up a local variable of appropriate type      \
>> +     * for this purpose.                                                  \
>> +     */                                                                   \
>> +    typeof((uint##n##_t)(x) + 0U) val_ = (x);                             \
>> +    alternative_io(setup "; call generic_hweight" #n,                     \
>> +                   insn, X86_FEATURE_POPCNT,                              \
>> +                   ASM_OUTPUT2([res] "=a" (res_), [val] cout (val_)),     \
>> +                   [src] cin (val_));                                     \
>> +    res_;                                                                 \
>> +})
>> +#define hweight64(x) hweight_(64, x, POPCNT_64, "", "+D", "g")
>> +#define hweight32(x) hweight_(32, x, POPCNT_32, "", "+D", "g")
>> +#define hweight16(x) hweight_(16, x, "movzwl %w[src], %[val]; " POPCNT_32, \
>> +                              "mov %[src], %[val]", "=&D", "rm")
>> +#define hweight8(x)  hweight_( 8, x, "movzbl %b[src], %[val]; " POPCNT_32, \
>> +                              "mov %[src], %[val]", "=&D", "rm")
> 
> Why not just convert types < 32bits into uint32_t and avoid the asm
> prefix? You are already converting them in hweight_ to an uintX_t.

I don't think I do - there's a conversion to the promoted type
when adding in an unsigned int value (which is there to retain
uint64_t for the 64-bit case, while using unsigned int for all
smaller sizes, as per the parameter types of generic_hweight*()).

> That would make the asm simpler as you would then only need to make
> sure the input is in %rdi and the output is fetched from %rax?

That's an option, yes, but I again wanted to limit the impact to
generated code (including code size) as much as possible.
generic_hweight{8,16}() take unsigned int arguments, not
uint{8,16}_t ones. Hence at the call site (when a function call
is needed) no zero extension is necessary. Therefore the MOVZ
above is to (mainly) overlay the MOV during patching, while the
POPCNT is to (mainly) overlay the CALL.

If the simpler asm()-s were considered more important than the
quality of the generated code, I think we could simply have

#define hweight16(x) hweight32((uint16_t)(x))
#define hweight8(x)  hweight32(( uint8_t)(x))
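The generic fallback that POPCNT patches over is a plain population count. A minimal C model (the standard SWAR formulation, not necessarily Xen's exact generic_hweight32(); the `_model` helpers just illustrate Jan's zero-extend-and-reuse suggestion above):

```c
#include <stdint.h>

/* SWAR population count: the usual shape of a generic_hweight32()-style
 * fallback, replaced by the POPCNT insn when the CPU supports it. */
static unsigned int hweight32_generic(uint32_t x)
{
    x = x - ((x >> 1) & 0x55555555u);                 /* 2-bit sums   */
    x = (x & 0x33333333u) + ((x >> 2) & 0x33333333u); /* 4-bit sums   */
    x = (x + (x >> 4)) & 0x0f0f0f0fu;                 /* 8-bit sums   */
    return (x * 0x01010101u) >> 24;                   /* total in top byte */
}

/* The suggested simplification for sub-32-bit widths: the cast
 * zero-extends, so the 32-bit helper gives the right answer. */
static unsigned int hweight16_model(uint16_t x) { return hweight32_generic(x); }
static unsigned int hweight8_model(uint8_t x)   { return hweight32_generic(x); }
```

The trade-off discussed above is that doing the zero-extension in C moves it to every call site, whereas the MOVZ-in-alternative variant keeps it inside the patched instruction sequence.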

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 08:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 08:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbKBv-00004z-Ga; Wed, 20 May 2020 08:35:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jY/2=7C=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jbKBu-00004t-Jl
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 08:35:06 +0000
X-Inumbo-ID: cd437894-9a74-11ea-a9d1-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd437894-9a74-11ea-a9d1-12813bfff9fa;
 Wed, 20 May 2020 08:35:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F281BACA7;
 Wed, 20 May 2020 08:35:05 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] tools/libxengnttab: correct size of allocated memory
Date: Wed, 20 May 2020 10:35:01 +0200
Message-Id: <20200520083501.31704-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The size of the memory allocated for the IOCTL_GNTDEV_MAP_GRANT_REF
ioctl() parameters is calculated incorrectly, resulting in too much
memory being allocated.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/libs/gnttab/freebsd.c | 2 +-
 tools/libs/gnttab/linux.c   | 8 +++-----
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/tools/libs/gnttab/freebsd.c b/tools/libs/gnttab/freebsd.c
index 886b588303..0588501d0f 100644
--- a/tools/libs/gnttab/freebsd.c
+++ b/tools/libs/gnttab/freebsd.c
@@ -74,7 +74,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     void *addr = NULL;
     int domids_stride;
     unsigned int refs_size = ROUNDUP(count *
-                                     sizeof(struct ioctl_gntdev_map_grant_ref),
+                                     sizeof(struct ioctl_gntdev_grant_ref),
                                      PAGE_SHIFT);
 
     domids_stride = (flags & XENGNTTAB_GRANT_MAP_SINGLE_DOMAIN) ? 0 : 1;
diff --git a/tools/libs/gnttab/linux.c b/tools/libs/gnttab/linux.c
index a01bb6c698..74331a4c7b 100644
--- a/tools/libs/gnttab/linux.c
+++ b/tools/libs/gnttab/linux.c
@@ -91,9 +91,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
 {
     int fd = xgt->fd;
     struct ioctl_gntdev_map_grant_ref *map;
-    unsigned int map_size = ROUNDUP((sizeof(*map) + (count - 1) *
-                                    sizeof(struct ioctl_gntdev_map_grant_ref)),
-                                    PAGE_SHIFT);
+    unsigned int map_size = sizeof(*map) + (count - 1) * sizeof(map->refs[0]);
     void *addr = NULL;
     int domids_stride = 1;
     int i;
@@ -102,10 +100,10 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
         domids_stride = 0;
 
     if ( map_size <= PAGE_SIZE )
-        map = alloca(sizeof(*map) +
-                     (count - 1) * sizeof(struct ioctl_gntdev_map_grant_ref));
+        map = alloca(map_size);
     else
     {
+        map_size = ROUNDUP(map_size, PAGE_SHIFT);
         map = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON | MAP_POPULATE, -1, 0);
         if ( map == MAP_FAILED )
-- 
2.26.1
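The bug fixed above is scaling the per-entry part of a header-plus-trailing-array allocation by the size of the whole containing struct instead of one array element. A sketch with an illustrative layout (field names and sizes modeled loosely on the gntdev ABI, not the exact kernel definitions):

```c
#include <stddef.h>
#include <stdint.h>

/* Illustrative layout: fixed header followed by one ref per grant. */
struct grant_ref {
    uint32_t domid;
    uint32_t ref;
};

struct map_grant_ref {
    uint64_t index_out;
    uint32_t count;
    struct grant_ref refs[1];   /* really `count` entries */
};

/* The corrected computation: header plus (count - 1) extra elements,
 * scaled by sizeof(refs[0]) rather than sizeof(*map). count >= 1. */
static size_t map_size(unsigned int count)
{
    const struct map_grant_ref *map = NULL; /* only used inside sizeof */

    return sizeof(*map) + (count - 1) * sizeof(map->refs[0]);
}
```

With the old formula every additional grant cost a full `sizeof(struct ioctl_gntdev_map_grant_ref)` instead of one ref's worth, which is why far too much memory was requested.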



From xen-devel-bounces@lists.xenproject.org Wed May 20 08:53:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 08:53:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbKTp-0001qo-4p; Wed, 20 May 2020 08:53:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jY/2=7C=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jbKTn-0001qj-OY
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 08:53:35 +0000
X-Inumbo-ID: 63305f32-9a77-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63305f32-9a77-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 08:53:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8B86EAB76;
 Wed, 20 May 2020 08:53:36 +0000 (UTC)
Subject: Re: grant table issues mapping a ring order 10
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>, boris.ostrovsky@oracle.com
References: <alpine.DEB.2.21.2005191252040.27502@sstabellini-ThinkPad-T480s>
 <03bad8fd-9826-7652-1c08-549e22634f8d@suse.com>
Message-ID: <ecd0bdf8-6e65-24a7-8383-c244853f7ae6@suse.com>
Date: Wed, 20 May 2020 10:53:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <03bad8fd-9826-7652-1c08-549e22634f8d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, jbeulich@suse.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.20 08:00, Jürgen Groß wrote:
> On 19.05.20 23:21, Stefano Stabellini wrote:
>> Hi Juergen, Boris,
>>
>> I am trying to increase the size of the rings used for Xen 9pfs
>> connections for performance reasons and also to reduce the likelihood of
>> the backend having to wait on the frontend to free up space from the
>> ring.
>>
>> FYI I realized that we cannot choose order 11 or greater in Linux
>> because then we run into the hard limit CONFIG_FORCE_MAX_ZONEORDER=11.
>> But that is not the reason why I am writing to you :-)
>>
>>
>> The reason why I am writing is that even order 10 fails for some
>> grant-table related reason I cannot explain. There are two rings, each
>> of them order 10. Mapping the first ring results in an error. (Order 9
>> works fine, resulting in both rings being mapped correctly.)
>>
>> QEMU tries to map the refs but gets an error:
>>
>>    gnttab: error: mmap failed: Invalid argument
>>    xen be: 9pfs-0: xen be: 9pfs-0: xengnttab_map_domain_grant_refs 
>> failed: Invalid argument
>>    xengnttab_map_domain_grant_refs failed: Invalid argument
>>
>> The error comes from Xen. The hypervisor returns GNTST_bad_gntref to
>> Linux (drivers/xen/grant-table.c:gnttab_map_refs). Then:
>>
>>          if (map->map_ops[i].status) {
>>             err = -EINVAL;
>>             continue;
>>         }
>>
>> So Linux returns -EINVAL to QEMU. The refs seem to be garbage. The
>> following printks are in Xen in the implementation of map_grant_ref:
>>
>> (XEN) DEBUG map_grant_ref 1017 ref=998 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=999 nr=2560
>> (XEN) DEBUG map_grant_ref 1013 ref=2050669706 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x7a3abc8a for d1
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=19 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1013 ref=56423797 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x35cf575 for d1
>> (XEN) DEBUG map_grant_ref 1013 ref=348793 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x55279 for d1
>> (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1013 ref=2070386184 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x7b679608 for d1
>> (XEN) DEBUG map_grant_ref 1013 ref=3421871 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af for d1
>> (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1013 ref=875999099 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af7b for d1
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1013 ref=2705045486 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa13bb7ee for d1
>> (XEN) DEBUG map_grant_ref 1013 ref=4294967295 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0xffffffff for d1
>> (XEN) DEBUG map_grant_ref 1013 ref=213291910 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0xcb69386 for d1
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1013 ref=4912 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0x1330 for d1
>> (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
>> (XEN) DEBUG map_grant_ref 1017 ref=24 nr=2560
>> (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
>> (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>> (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
>>
>>
>> Full logs https://pastebin.com/QLTUaUGJ
>> It is worth mentioning that no limits are being reached: we are below
>> 2500 entries per domain and below the 64 pages of grant refs per domain.
>>
>> What seems to happen is that after ref 999, the next refs are garbage.
>> Do you have any ideas why?
> 
> I don't think there is enough space for all the needed grant refs in the
> initial interface page passed via Xenstore. So how do you pass the refs
> to the backend?

Looking at the full log, this seems to be the problem: processing starts
with ref=9 and the last successful ref is 999, so 991 refs have been
processed. Each ref needs 4 bytes, so a page could hold 1024 refs, but
the first 132 bytes of the page are used for other information, leaving
1024 - 33 == 991 refs possible.
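The arithmetic above can be written down directly (4 KiB interface page, 132-byte header region, 4-byte grant refs; the constant names are just for this sketch):

```c
#include <assert.h>

enum {
    PAGE_BYTES    = 4096,  /* one Xenstore-advertised interface page  */
    HDR_BYTES     = 132,   /* non-ref information at the page start   */
    REF_BYTES     = 4,     /* one grant ref                           */
    REFS_PER_PAGE = (PAGE_BYTES - HDR_BYTES) / REF_BYTES, /* == 991   */
};
```

This matches the observed log: refs 9 through 999 inclusive are 991 refs, after which the reads run off the page and pick up garbage.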


Juergen


From xen-devel-bounces@lists.xenproject.org Wed May 20 08:56:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 08:56:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbKWa-0001zO-NP; Wed, 20 May 2020 08:56:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbKWZ-0001zJ-UH
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 08:56:27 +0000
X-Inumbo-ID: c9db357c-9a77-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9db357c-9a77-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 08:56:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EDCF1AF2D;
 Wed, 20 May 2020 08:56:28 +0000 (UTC)
Subject: Re: [PATCH] x86: refine guest_mode()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7b62d06c-1369-2857-81c0-45e2434357f4@suse.com>
 <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
Date: Wed, 20 May 2020 10:56:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200518145101.GV54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18.05.2020 16:51, Roger Pau Monné wrote:
> On Tue, Apr 28, 2020 at 08:30:12AM +0200, Jan Beulich wrote:
>> On 27.04.2020 22:11, Andrew Cooper wrote:
>>> On 27/04/2020 16:15, Jan Beulich wrote:
>>>> On 27.04.2020 16:35, Andrew Cooper wrote:
>>>>> On 27/04/2020 09:03, Jan Beulich wrote:
>>>>>> --- a/xen/include/asm-x86/regs.h
>>>>>> +++ b/xen/include/asm-x86/regs.h
>>>>>> @@ -10,9 +10,10 @@
>>>>>>      /* Frame pointer must point into current CPU stack. */                    \
>>>>>>      ASSERT(diff < STACK_SIZE);                                                \
>>>>>>      /* If not a guest frame, it must be a hypervisor frame. */                \
>>>>>> -    ASSERT((diff == 0) || (r->cs == __HYPERVISOR_CS));                        \
>>>>>> +    if ( diff < PRIMARY_STACK_SIZE )                                          \
>>>>>> +        ASSERT(!diff || ((r)->cs == __HYPERVISOR_CS));                        \
>>>>>>      /* Return TRUE if it's a guest frame. */                                  \
>>>>>> -    (diff == 0);                                                              \
>>>>>> +    !diff || ((r)->cs != __HYPERVISOR_CS);                                    \
>>>>> The (diff == 0) already worried me before because it doesn't fail safe,
>>>>> but this makes things more problematic.  Consider the case back when we
>>>>> had __HYPERVISOR_CS32.
>>>> Yes - if __HYPERVISOR_CS32 would ever have been to be used for
>>>> anything, it would have needed checking for here.
>>>>
>>>>> Guest mode is strictly "(r)->cs & 3".
>>>> As long as CS (a) gets properly saved (it's a "manual" step for
>>>> SYSCALL/SYSRET as well as #VMEXIT) and (b) didn't get clobbered. I
>>>> didn't write this code, I don't think, so I can only guess that
>>>> there were intentions behind this along these lines.
>>>
>>> Hmm - the VMExit case might be problematic here, due to the variability
>>> in the poison used.
>>
>> "Variability" is an understatement - there's no poisoning at all
>> in release builds afaics (and to be honest it seems a somewhat
>> pointless to write the same values over and over again in debug
>> mode). With this, ...
>>
>>>>> Everything else is expectations about how things ought to be laid out,
>>>>> but for safety in release builds, the final judgement should not depend
>>>>> on the expectations evaluating true.
>>>> Well, I can switch to a purely CS.RPL based approach, as long as
>>>> we're happy to live with the possible downside mentioned above.
>>>> Of course this would then end up being a more intrusive change
>>>> than originally intended ...
>>>
>>> I'd certainly prefer to go for something which is more robust, even if
>>> it is a larger change.
>>
>> ... what's your suggestion? Basing on _just_ CS.RPL obviously won't
>> work. Not even if we put in place the guest's CS (albeit that
>> somewhat depends on the meaning we assign to the macro's returned
>> value).
> 
> Just to check I'm following this correctly, using CS.RPL won't work
> for HVM guests, as HVM can legitimately use a RPL of 0 (which is not
> the case for PV guests). Doesn't the same apply to the usage of
> __HYPERVISOR_CS? (A HVM guest could also use the same code segment
> value as Xen?)

Of course (and in particular Xen as a guest would). My "Basing on
_just_ CS.RPL" wasn't meant to exclude the rest of the selector,
but to contrast this to the case where "diff" also is involved in
the calculation (which looks to be what Andrew would prefer to see
go away).
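The two predicates being contrasted can be modeled standalone (the selector values are illustrative, with __HYPERVISOR_CS taken as 0xe008 per Xen's x86 GDT layout; the helper names are hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

#define HYPERVISOR_CS 0xe008u  /* Xen's x86 hypervisor code selector */

/* The patched guest_mode() shape: a zero offset from the stack top,
 * or a non-Xen code selector, identifies a guest frame. */
static bool guest_frame_by_cs(unsigned long diff, uint16_t cs)
{
    return !diff || cs != HYPERVISOR_CS;
}

/* The pure-RPL alternative: judge only by the selector's low two bits.
 * This misclassifies an HVM guest legitimately running with RPL 0 -
 * the corner case raised in the thread. */
static bool guest_frame_by_rpl(uint16_t cs)
{
    return (cs & 3) != 0;
}
```

The last test case below is the problematic one: a ring-0 HVM guest selector looks like a hypervisor frame to the RPL-only check.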

>> Using current inside the macro to determine whether the
>> guest is HVM would also seem fragile to me - there are quite a few
>> uses of guest_mode(). Which would leave passing in a const struct
>> vcpu * (or domain *), requiring to touch all call sites, including
>> Arm's.
> 
> Fragile or slow? Are there corner cases where guest_mode is used where
> current is not reliable?

This question is why I said "there are quite a few uses of
guest_mode()" - auditing them all is just one side of the coin.
The other is to prevent a new use appearing in the future that
can be reached by a call path in the time window where a lazy
context switch is pending (i.e. when current has already been
updated, but register state hasn't been yet).

>> Compared to this it would seem to me that the change as presented
>> is a clear improvement without becoming overly large of a change.
> 
> Using the cs register is already part of the guest_mode code, even if
> just in debug mode, hence I don't see it as a regression from existing
> code. It however feels weird to me that the reporter of the issue
> doesn't agree with the fix, and hence would like to know if there's a
> way we could achieve consensus on this.

Indeed. I'd be happy to make further adjustments, if only I had a
clear understanding of what is wanted (or why leaving things as
they are is better than a little bit of an improvement).

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 09:04:36 2020
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, jgross@suse.com, ian.jackson@eu.citrix.com,
 wei.liu2@citrix.com, konrad.wilk@oracle.com
Subject: [PATCH 1/2] xen/displif: Protocol version 2
Date: Wed, 20 May 2020 12:04:24 +0300
Message-Id: <20200520090425.28558-2-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200520090425.28558-1-andr2000@gmail.com>
References: <20200520090425.28558-1-andr2000@gmail.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

1. Change protocol version from string to integer

The version string, which is in fact an integer, is hard to handle in
code that supports multiple protocol versions. To simplify this, make
the version an integer.

2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE

There are cases when a display data buffer is created with a non-zero
offset to the start of the data. Handle such cases and provide that
offset when creating a display buffer.

3. Add XENDISPL_OP_GET_EDID command

Add an optional request for reading the Extended Display
Identification Data (EDID) structure, which allows better
configuration of the display connectors than the configuration set in
XenStore. With this change connectors may have multiple resolutions
defined, with respect to detailed timing definitions and additional
properties normally provided by displays.

If this request is not supported by the backend, then the visible
area is defined by the relevant XenStore "resolution" property.

If the backend provides extended display identification data (EDID)
via the XENDISPL_OP_GET_EDID request, then the EDID values must take
precedence over the resolutions defined in XenStore.

4. Bump protocol version to 2.

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 xen/include/public/io/displif.h | 83 +++++++++++++++++++++++++++++++--
 1 file changed, 80 insertions(+), 3 deletions(-)

diff --git a/xen/include/public/io/displif.h b/xen/include/public/io/displif.h
index cc5de9cb1f35..4d43ba5078c8 100644
--- a/xen/include/public/io/displif.h
+++ b/xen/include/public/io/displif.h
@@ -38,7 +38,7 @@
  *                           Protocol version
  ******************************************************************************
  */
-#define XENDISPL_PROTOCOL_VERSION     "1"
+#define XENDISPL_PROTOCOL_VERSION     2
 
 /*
  ******************************************************************************
@@ -202,6 +202,9 @@
  *      Width and height of the connector in pixels separated by
  *      XENDISPL_RESOLUTION_SEPARATOR. This defines visible area of the
  *      display.
+ *      If backend provides extended display identification data (EDID) with
+ *      XENDISPL_OP_GET_EDID request then EDID values must take precedence
+ *      over the resolutions defined here.
  *
  *------------------ Connector Request Transport Parameters -------------------
  *
@@ -349,6 +352,7 @@
 #define XENDISPL_OP_FB_DETACH         0x13
 #define XENDISPL_OP_SET_CONFIG        0x14
 #define XENDISPL_OP_PG_FLIP           0x15
+#define XENDISPL_OP_GET_EDID          0x16
 
 /*
  ******************************************************************************
@@ -377,6 +381,10 @@
 #define XENDISPL_FIELD_BE_ALLOC       "be-alloc"
 #define XENDISPL_FIELD_UNIQUE_ID      "unique-id"
 
+#define XENDISPL_EDID_BLOCK_SIZE      128
+#define XENDISPL_EDID_BLOCK_COUNT     256
+#define XENDISPL_EDID_MAX_SIZE        (XENDISPL_EDID_BLOCK_SIZE * XENDISPL_EDID_BLOCK_COUNT)
+
 /*
  ******************************************************************************
  *                          STATUS RETURN CODES
@@ -451,7 +459,9 @@
  * +----------------+----------------+----------------+----------------+
  * |                           gref_directory                          | 40
  * +----------------+----------------+----------------+----------------+
- * |                             reserved                              | 44
+ * |                             data_ofs                              | 44
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 48
  * +----------------+----------------+----------------+----------------+
  * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
  * +----------------+----------------+----------------+----------------+
@@ -494,6 +504,7 @@
  *   buffer size (buffer_sz) exceeds what can be addressed by this single page,
  *   then reference to the next page must be supplied (see gref_dir_next_page
  *   below)
+ * data_ofs - uint32_t, offset of the data in the buffer, octets
  */
 
 #define XENDISPL_DBUF_FLG_REQ_ALLOC       (1 << 0)
@@ -506,6 +517,7 @@ struct xendispl_dbuf_create_req {
     uint32_t buffer_sz;
     uint32_t flags;
     grant_ref_t gref_directory;
+    uint32_t data_ofs;
 };
 
 /*
@@ -731,6 +743,42 @@ struct xendispl_page_flip_req {
     uint64_t fb_cookie;
 };
 
+/*
+ * Request EDID - request EDID describing current connector:
+ *         0                1                 2               3        octet
+ * +----------------+----------------+----------------+----------------+
+ * |               id                | _OP_GET_EDID   |   reserved     | 4
+ * +----------------+----------------+----------------+----------------+
+ * |                             buffer_sz                             | 8
+ * +----------------+----------------+----------------+----------------+
+ * |                          gref_directory                           | 12
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 16
+ * +----------------+----------------+----------------+----------------+
+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 64
+ * +----------------+----------------+----------------+----------------+
+ *
+ * Notes:
+ *   - This request is optional and if not supported then visible area
+ *     is defined by the relevant XenStore's "resolution" property.
+ *   - Shared buffer, allocated for EDID storage, must not be less than
+ *     XENDISPL_EDID_MAX_SIZE octets.
+ *
+ * buffer_sz - uint32_t, buffer size to be allocated, octets
+ * gref_directory - grant_ref_t, a reference to the first shared page
+ *   describing EDID buffer references. See XENDISPL_OP_DBUF_CREATE for
+ *   grant page directory structure (struct xendispl_page_directory).
+ *
+ * See response format for this request.
+ */
+
+struct xendispl_get_edid_req {
+    uint32_t buffer_sz;
+    grant_ref_t gref_directory;
+};
+
 /*
  *---------------------------------- Responses --------------------------------
  *
@@ -753,6 +801,31 @@ struct xendispl_page_flip_req {
  * id - uint16_t, private guest value, echoed from request
  * status - int32_t, response status, zero on success and -XEN_EXX on failure
  *
+ *
+ * Get EDID response - response for XENDISPL_OP_GET_EDID:
+ *         0                1                 2               3        octet
+ * +----------------+----------------+----------------+----------------+
+ * |               id                |    operation   |    reserved    | 4
+ * +----------------+----------------+----------------+----------------+
+ * |                              status                               | 8
+ * +----------------+----------------+----------------+----------------+
+ * |                             edid_sz                               | 12
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 16
+ * +----------------+----------------+----------------+----------------+
+ * |/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/|
+ * +----------------+----------------+----------------+----------------+
+ * |                             reserved                              | 64
+ * +----------------+----------------+----------------+----------------+
+ *
+ * edid_sz - uint32_t, size of the EDID, octets
+ */
+
+struct xendispl_get_edid_resp {
+    uint32_t edid_sz;
+};
+
+/*
  *----------------------------------- Events ----------------------------------
  *
  * Events are sent via a shared page allocated by the front and propagated by
@@ -804,6 +877,7 @@ struct xendispl_req {
         struct xendispl_fb_detach_req fb_detach;
         struct xendispl_set_config_req set_config;
         struct xendispl_page_flip_req pg_flip;
+        struct xendispl_get_edid_req get_edid;
         uint8_t reserved[56];
     } op;
 };
@@ -813,7 +887,10 @@ struct xendispl_resp {
     uint8_t operation;
     uint8_t reserved;
     int32_t status;
-    uint8_t reserved1[56];
+    union {
+        struct xendispl_get_edid_resp get_edid;
+        uint8_t reserved1[56];
+    } op;
 };
 
 struct xendispl_evt {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 09:04:36 2020
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, jgross@suse.com, ian.jackson@eu.citrix.com,
 wei.liu2@citrix.com, konrad.wilk@oracle.com
Subject: [PATCH 0/2] displif: Protocol version 2
Date: Wed, 20 May 2020 12:04:23 +0300
Message-Id: <20200520090425.28558-1-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Hello all,

this series extends the existing displif protocol with a new request
and adds an additional parameter to the existing one. It also adds
support for the new parameter in libgnttab via ioctl. The relevant
changes to the backend can be found at [1] and to the frontend at [2].

List of changes:

1. Change protocol version from string to integer

The version string, which is in fact an integer, is hard to handle in
code that supports multiple protocol versions. To simplify this, make
the version an integer.

2. Pass buffer offset with XENDISPL_OP_DBUF_CREATE

There are cases when a display data buffer is created with a non-zero
offset to the start of the data. Handle such cases and provide that
offset when creating a display buffer. Add the corresponding field to
the protocol and handle it in libgnttab.
This change is required to bring up a virtual display on the iMX8
platform, which uses an offset of 64 bytes for buffers allocated on
the GPU side and then imported into the PV DRM frontend. Without the
offset the final picture looks shifted.

3. Add XENDISPL_OP_GET_EDID command

Add an optional request for reading the Extended Display
Identification Data (EDID) structure, which allows better
configuration of the display connectors than the configuration set in
XenStore. With this change connectors may have multiple resolutions
defined, with respect to detailed timing definitions and additional
properties normally provided by displays.

If this request is not supported by the backend, then the visible
area is defined by the relevant XenStore "resolution" property.

If the backend provides extended display identification data (EDID)
via the XENDISPL_OP_GET_EDID request, then the EDID values must take
precedence over the resolutions defined in XenStore.

4. Bump protocol version to 2.
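
As a rough illustration (not part of the patch), a frontend might
populate the new GET_EDID request along these lines. The struct and
helper below are simplified local copies for the sketch only; the
canonical definitions live in xen/include/public/io/displif.h:

```c
#include <stdint.h>
#include <string.h>

typedef uint32_t grant_ref_t;

/* Simplified local copies for illustration only. */
#define XENDISPL_EDID_BLOCK_SIZE  128
#define XENDISPL_EDID_BLOCK_COUNT 256
#define XENDISPL_EDID_MAX_SIZE \
    (XENDISPL_EDID_BLOCK_SIZE * XENDISPL_EDID_BLOCK_COUNT)

struct xendispl_get_edid_req {
    uint32_t buffer_sz;
    grant_ref_t gref_directory;
};

/* Fill a GET_EDID request: the shared buffer must be able to hold
 * XENDISPL_EDID_MAX_SIZE octets, and gref_directory points at the
 * first page of the grant page directory describing it. */
static void fill_get_edid_req(struct xendispl_get_edid_req *req,
                              grant_ref_t first_dir_page)
{
    memset(req, 0, sizeof(*req));
    req->buffer_sz = XENDISPL_EDID_MAX_SIZE;
    req->gref_directory = first_dir_page;
}
```

The backend then reports the actual EDID size back in edid_sz of the
response; a size of zero (or a non-zero status) would mean no EDID is
available and the XenStore "resolution" property applies.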

Open questions and notes on the changes:

1. gnttab minor version change from 2 to 3
As I understand it, the version needs to be bumped when new
functionality is added, but I would like to hear from the maintainers
on that.

2. gnttab and version 2 of the ioctls
Because we add an additional parameter (data offset) and the
structures used to pass ioctl arguments cannot be extended (there are
not enough reserved fields), there are two ways to solve this:
- break the existing API and add a data_ofs field to the existing
structures
- create a copy of the ioctl (v2) with the additional parameter.

The first way seems easier, but it breaks the existing API, so I
decided to introduce v2 of the same ioctl, which behaves gracefully
with respect to the existing users but adds some amount of copy-paste
code. (I tried to minimize the copy-paste so it is easier to
maintain, but the result looked ugly to me because of the many
"if (protocol v1)" constructs.)
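
To make the trade-off concrete, a minimal sketch of the chosen
approach (all names here are hypothetical stand-ins, not the actual
gntdev entry points): the v1 entry point keeps its ABI and implicitly
passes a zero offset, while v2 exposes the new parameter, and both
share one internal helper so the copy-paste stays small:

```c
#include <stdint.h>

/* Hypothetical shared helper taking the new data offset parameter.
 * The body is a stand-in for the real ioctl work. */
static int dmabuf_exp_from_refs_common(uint32_t flags, uint32_t count,
                                       uint32_t data_ofs,
                                       uint32_t *out_ofs)
{
    (void)flags;
    if (count == 0)
        return -1;          /* no grant references supplied */
    *out_ofs = data_ofs;    /* stand-in for programming the offset */
    return 0;
}

/* v1 entry point: existing ABI, data offset implicitly zero. */
static int exp_from_refs_v1(uint32_t flags, uint32_t count,
                            uint32_t *out_ofs)
{
    return dmabuf_exp_from_refs_common(flags, count, 0, out_ofs);
}

/* v2 entry point: caller supplies the dma-buf data offset. */
static int exp_from_refs_v2(uint32_t flags, uint32_t count,
                            uint32_t data_ofs, uint32_t *out_ofs)
{
    return dmabuf_exp_from_refs_common(flags, count, data_ofs,
                                       out_ofs);
}
```

With this shape the "if (protocol v1)" branches collapse into the two
thin wrappers instead of being scattered through the common path.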

Please note that struct ioctl_gntdev_dmabuf_imp_to_refs, i.e.
version 1 of the ioctl, has a uint32_t reserved field which could be
used for the data offset, but its counterpart
(ioctl_gntdev_dmabuf_exp_from_refs) doesn't have any, so it would be
inconsistent to introduce version 2 of the ioctl only for the latter.

The other open question is whether to keep that reserved field in the
ioctl_gntdev_dmabuf_imp_to_refs_v2 structure or drop it.

3. displif protocol version string to int conversion
The existing protocol defines its version as the string "1", which is
OK, but makes it hard to use directly with the C/C++ preprocessor,
which needs an integer for constructs like
"#if XENDISPL_PROTOCOL_VERSION > 2".

At the same time this change may break existing users of the protocol
which still expect the version as a string. I tried something like

#define STR_HELPER(x) #x
#define STR(x) STR_HELPER(x)

#define XENDISPL_PROTOCOL_VERSION_INT 1
#define XENDISPL_PROTOCOL_VERSION STR(XENDISPL_PROTOCOL_VERSION_INT)

but I am not sure whether this is the right way to solve the issue,
so in this series I have deliberately changed the protocol version to
an int.
Another possible approach is for every user of the header to keep a
local copy (this is what we now use in the display backend). The
local copy can then be changed in whatever way suits the concrete
user. This cannot be done (?) for the Linux kernel though.
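
For what it's worth, here is a small self-contained sketch of the
stringify trick quoted above. The *_INT / *_STR names and the value 2
are only illustrative; the integer form works in preprocessor
conditionals while the stringified form would keep string-based users
working:

```c
#include <string.h>

#define STR_HELPER(x) #x
#define STR(x) STR_HELPER(x)

/* Hypothetical names: keep one integer source of truth and derive
 * the string from it. */
#define XENDISPL_PROTOCOL_VERSION_INT 2
#define XENDISPL_PROTOCOL_VERSION_STR STR(XENDISPL_PROTOCOL_VERSION_INT)

/* Returns 1 when the integer protocol version enables the version-2
 * features (e.g. GET_EDID); this is exactly the kind of conditional
 * a string version cannot express. */
static int protocol_has_edid(void)
{
#if XENDISPL_PROTOCOL_VERSION_INT >= 2
    return 1;
#else
    return 0;
#endif
}
```

XENDISPL_PROTOCOL_VERSION_STR expands to the string "2", so code that
still writes the version into XenStore as a string could keep doing
so unchanged.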

Thank you,
Oleksandr

[1] https://github.com/xen-troops/displ_be
[2] https://github.com/xen-troops/linux/pull/87

Oleksandr Andrushchenko (2):
  xen/displif: Protocol version 2
  libgnttab: Add support for Linux dma-buf offset

 tools/include/xen-sys/Linux/gntdev.h  | 53 +++++++++++++++++
 tools/libs/gnttab/Makefile            |  2 +-
 tools/libs/gnttab/freebsd.c           | 15 +++++
 tools/libs/gnttab/gnttab_core.c       | 17 ++++++
 tools/libs/gnttab/include/xengnttab.h | 13 ++++
 tools/libs/gnttab/libxengnttab.map    |  6 ++
 tools/libs/gnttab/linux.c             | 86 +++++++++++++++++++++++++++
 tools/libs/gnttab/minios.c            | 15 +++++
 tools/libs/gnttab/private.h           |  9 +++
 xen/include/public/io/displif.h       | 83 +++++++++++++++++++++++++-
 10 files changed, 295 insertions(+), 4 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 09:04:41 2020
From: Oleksandr Andrushchenko <andr2000@gmail.com>
To: xen-devel@lists.xenproject.org, jgross@suse.com, ian.jackson@eu.citrix.com,
 wei.liu2@citrix.com, konrad.wilk@oracle.com
Subject: [PATCH 2/2] libgnttab: Add support for Linux dma-buf offset
Date: Wed, 20 May 2020 12:04:25 +0300
Message-Id: <20200520090425.28558-3-andr2000@gmail.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200520090425.28558-1-andr2000@gmail.com>
References: <20200520090425.28558-1-andr2000@gmail.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

From: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>

Add version 2 of the dma-buf ioctls, which adds a data_ofs parameter.

A dma-buf is backed by a scatter-gather table and has an offset
parameter which tells where the actual data starts. The relevant
ioctls are extended to support that offset:
  - when a dma-buf is created (exported) from grant references,
    data_ofs is used to set the offset field in the scatter list
    of the new dma-buf
  - when a dma-buf is imported and grant references are provided,
    data_ofs is used to report that offset to user-space
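
The two directions above can be sketched with a toy model (the
sg_entry struct and both helpers are illustrative only, not the
kernel's scatterlist API): on export the caller-supplied data_ofs
(e.g. 64 on iMX8) becomes the offset of the first scatter-gather
entry, and on import the offset found in the dma-buf is reported back
to user-space:

```c
#include <stdint.h>

/* Toy model of a dma-buf's first scatter-gather entry. */
struct sg_entry {
    uint64_t dma_addr;  /* bus address of the backing page run */
    uint32_t offset;    /* where the actual data starts, octets */
    uint32_t length;    /* length of this run, octets */
};

/* Export path: propagate the caller's data_ofs into the scatter
 * list of the newly created dma-buf. */
static void exp_set_offset(struct sg_entry *sg, uint32_t data_ofs)
{
    sg->offset = data_ofs;
}

/* Import path: report the dma-buf's data offset back to the caller
 * (this is what the new OUT parameter carries). */
static uint32_t imp_get_offset(const struct sg_entry *sg)
{
    return sg->offset;
}
```

Without this round-trip the frontend has no way to learn the 64-byte
offset of a GPU-allocated buffer, which is why the final picture
looked shifted.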

Signed-off-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
 tools/include/xen-sys/Linux/gntdev.h  | 53 +++++++++++++++++
 tools/libs/gnttab/Makefile            |  2 +-
 tools/libs/gnttab/freebsd.c           | 15 +++++
 tools/libs/gnttab/gnttab_core.c       | 17 ++++++
 tools/libs/gnttab/include/xengnttab.h | 13 ++++
 tools/libs/gnttab/libxengnttab.map    |  6 ++
 tools/libs/gnttab/linux.c             | 86 +++++++++++++++++++++++++++
 tools/libs/gnttab/minios.c            | 15 +++++
 tools/libs/gnttab/private.h           |  9 +++
 9 files changed, 215 insertions(+), 1 deletion(-)

diff --git a/tools/include/xen-sys/Linux/gntdev.h b/tools/include/xen-sys/Linux/gntdev.h
index d16076044c71..0c43393cbee5 100644
--- a/tools/include/xen-sys/Linux/gntdev.h
+++ b/tools/include/xen-sys/Linux/gntdev.h
@@ -274,4 +274,57 @@ struct ioctl_gntdev_dmabuf_imp_release {
     uint32_t reserved;
 };
 
+/*
+ * Version 2 of the ioctls adds @data_ofs parameter.
+ *
+ * dma-buf is backed by a scatter-gather table and has offset
+ * parameter which tells where the actual data starts.
+ * Relevant ioctls are extended to support that offset:
+ *   - when dma-buf is created (exported) from grant references then
+ *     @data_ofs is used to set the offset field in the scatter list
+ *     of the new dma-buf
+ *   - when dma-buf is imported and grant references are provided then
+ *     @data_ofs is used to report that offset to user-space
+ */
+#define IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2 \
+    _IOC(_IOC_NONE, 'G', 13, \
+         sizeof(struct ioctl_gntdev_dmabuf_exp_from_refs_v2))
+struct ioctl_gntdev_dmabuf_exp_from_refs_v2 {
+    /* IN parameters. */
+    /* Specific options for this dma-buf: see GNTDEV_DMA_FLAG_XXX. */
+    uint32_t flags;
+    /* Number of grant references in @refs array. */
+    uint32_t count;
+    /* Offset of the data in the dma-buf. */
+    uint32_t data_ofs;
+    /* OUT parameters. */
+    /* File descriptor of the dma-buf. */
+    uint32_t fd;
+    /* The domain ID of the grant references to be mapped. */
+    uint32_t domid;
+    /* Variable IN parameter. */
+    /* Array of grant references of size @count. */
+    uint32_t refs[1];
+};
+
+#define IOCTL_GNTDEV_DMABUF_IMP_TO_REFS_V2 \
+    _IOC(_IOC_NONE, 'G', 14, \
+         sizeof(struct ioctl_gntdev_dmabuf_imp_to_refs_v2))
+struct ioctl_gntdev_dmabuf_imp_to_refs_v2 {
+    /* IN parameters. */
+    /* File descriptor of the dma-buf. */
+    uint32_t fd;
+    /* Number of grant references in @refs array. */
+    uint32_t count;
+    /* The domain ID for which references to be granted. */
+    uint32_t domid;
+    /* Reserved - must be zero. */
+    uint32_t reserved;
+    /* OUT parameters. */
+    /* Offset of the data in the dma-buf. */
+    uint32_t data_ofs;
+    /* Array of grant references of size @count. */
+    uint32_t refs[1];
+};
+
 #endif /* __LINUX_PUBLIC_GNTDEV_H__ */
diff --git a/tools/libs/gnttab/Makefile b/tools/libs/gnttab/Makefile
index 2da8fbbb7f6f..5ee2d965214f 100644
--- a/tools/libs/gnttab/Makefile
+++ b/tools/libs/gnttab/Makefile
@@ -2,7 +2,7 @@ XEN_ROOT = $(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
 MAJOR    = 1
-MINOR    = 2
+MINOR    = 3
 LIBNAME  := gnttab
 USELIBS  := toollog toolcore
 
diff --git a/tools/libs/gnttab/freebsd.c b/tools/libs/gnttab/freebsd.c
index 886b588303a0..baf0f60aa4d3 100644
--- a/tools/libs/gnttab/freebsd.c
+++ b/tools/libs/gnttab/freebsd.c
@@ -319,6 +319,14 @@ int osdep_gnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
     abort();
 }
 
+int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                         uint32_t flags, uint32_t count,
+                                         const uint32_t *refs,
+                                         uint32_t *dmabuf_fd, uint32_t data_ofs)
+{
+    abort();
+}
+
 int osdep_gnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt,
                                           uint32_t fd, uint32_t wait_to_ms)
 {
@@ -331,6 +339,13 @@ int osdep_gnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
     abort();
 }
 
+int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                       uint32_t fd, uint32_t count,
+                                       uint32_t *refs, uint32_t *data_ofs)
+{
+    abort();
+}
+
 int osdep_gnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd)
 {
     abort();
diff --git a/tools/libs/gnttab/gnttab_core.c b/tools/libs/gnttab/gnttab_core.c
index 92e7228a2671..3af3cec80045 100644
--- a/tools/libs/gnttab/gnttab_core.c
+++ b/tools/libs/gnttab/gnttab_core.c
@@ -144,6 +144,15 @@ int xengnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
                                              refs, fd);
 }
 
+int xengnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                      uint32_t flags, uint32_t count,
+                                      const uint32_t *refs, uint32_t *fd,
+                                      uint32_t data_ofs)
+{
+    return osdep_gnttab_dmabuf_exp_from_refs_v2(xgt, domid, flags, count,
+                                                refs, fd, data_ofs);
+}
+
 int xengnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt, uint32_t fd,
                                        uint32_t wait_to_ms)
 {
@@ -156,6 +165,14 @@ int xengnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
     return osdep_gnttab_dmabuf_imp_to_refs(xgt, domid, fd, count, refs);
 }
 
+int xengnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                    uint32_t fd, uint32_t count, uint32_t *refs,
+                                    uint32_t *data_ofs)
+{
+    return osdep_gnttab_dmabuf_imp_to_refs_v2(xgt, domid, fd, count, refs,
+                                              data_ofs);
+}
+
 int xengnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd)
 {
     return osdep_gnttab_dmabuf_imp_release(xgt, fd);
diff --git a/tools/libs/gnttab/include/xengnttab.h b/tools/libs/gnttab/include/xengnttab.h
index 111fc88caeb3..0956bd91e0df 100644
--- a/tools/libs/gnttab/include/xengnttab.h
+++ b/tools/libs/gnttab/include/xengnttab.h
@@ -322,12 +322,19 @@ int xengnttab_grant_copy(xengnttab_handle *xgt,
  * Returns 0 if dma-buf was successfully created and the corresponding
  * dma-buf's file descriptor is returned in @fd.
  *
+ * Version 2 also accepts @data_ofs offset of the data in the buffer.
+ *
  * [1] https://elixir.bootlin.com/linux/latest/source/Documentation/driver-api/dma-buf.rst
  */
 int xengnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
                                    uint32_t flags, uint32_t count,
                                    const uint32_t *refs, uint32_t *fd);
 
+int xengnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                      uint32_t flags, uint32_t count,
+                                      const uint32_t *refs, uint32_t *fd,
+                                      uint32_t data_ofs);
+
 /*
  * This will block until the dma-buf with the file descriptor @fd is
  * released. This is only valid for buffers created with
@@ -345,10 +352,16 @@ int xengnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt, uint32_t fd,
 /*
  * Import a dma-buf with file descriptor @fd and export granted references
  * to the pages of that dma-buf into array @refs of size @count.
+ *
+ * Version 2 also provides @data_ofs, the offset of the data within the buffer.
  */
 int xengnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
                                  uint32_t fd, uint32_t count, uint32_t *refs);
 
+int xengnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                    uint32_t fd, uint32_t count, uint32_t *refs,
+                                    uint32_t *data_ofs);
+
 /*
  * This will close all references to an imported buffer, so it can be
  * released by the owner. This is only valid for buffers created with
diff --git a/tools/libs/gnttab/libxengnttab.map b/tools/libs/gnttab/libxengnttab.map
index d2a9b7e18bea..ddf77e064b08 100644
--- a/tools/libs/gnttab/libxengnttab.map
+++ b/tools/libs/gnttab/libxengnttab.map
@@ -36,3 +36,9 @@ VERS_1.2 {
 		xengnttab_dmabuf_imp_to_refs;
 		xengnttab_dmabuf_imp_release;
 } VERS_1.1;
+
+VERS_1.3 {
+    global:
+		xengnttab_dmabuf_exp_from_refs_v2;
+		xengnttab_dmabuf_imp_to_refs_v2;
+} VERS_1.2;
diff --git a/tools/libs/gnttab/linux.c b/tools/libs/gnttab/linux.c
index a01bb6c698c6..75e249fb3202 100644
--- a/tools/libs/gnttab/linux.c
+++ b/tools/libs/gnttab/linux.c
@@ -352,6 +352,51 @@ out:
     return rc;
 }
 
+int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                         uint32_t flags, uint32_t count,
+                                         const uint32_t *refs,
+                                         uint32_t *dmabuf_fd,
+                                         uint32_t data_ofs)
+{
+    struct ioctl_gntdev_dmabuf_exp_from_refs_v2 *from_refs_v2 = NULL;
+    int rc = -1;
+
+    if ( !count )
+    {
+        errno = EINVAL;
+        goto out;
+    }
+
+    from_refs_v2 = malloc(sizeof(*from_refs_v2) +
+                          (count - 1) * sizeof(from_refs_v2->refs[0]));
+    if ( !from_refs_v2 )
+    {
+        errno = ENOMEM;
+        goto out;
+    }
+
+    from_refs_v2->flags = flags;
+    from_refs_v2->count = count;
+    from_refs_v2->domid = domid;
+    from_refs_v2->data_ofs = data_ofs;
+
+    memcpy(from_refs_v2->refs, refs, count * sizeof(from_refs_v2->refs[0]));
+
+    if ( (rc = ioctl(xgt->fd, IOCTL_GNTDEV_DMABUF_EXP_FROM_REFS_V2,
+                     from_refs_v2)) )
+    {
+        GTERROR(xgt->logger, "ioctl DMABUF_EXP_FROM_REFS_V2 failed");
+        goto out;
+    }
+
+    *dmabuf_fd = from_refs_v2->fd;
+    rc = 0;
+
+out:
+    free(from_refs_v2);
+    return rc;
+}
+
 int osdep_gnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt,
                                           uint32_t fd, uint32_t wait_to_ms)
 {
@@ -413,6 +458,47 @@ out:
     return rc;
 }
 
+int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                       uint32_t fd, uint32_t count,
+                                       uint32_t *refs,
+                                       uint32_t *data_ofs)
+{
+    struct ioctl_gntdev_dmabuf_imp_to_refs_v2 *to_refs_v2 = NULL;
+    int rc = -1;
+
+    if ( !count )
+    {
+        errno = EINVAL;
+        goto out;
+    }
+
+    to_refs_v2 = malloc(sizeof(*to_refs_v2) +
+                        (count - 1) * sizeof(to_refs_v2->refs[0]));
+    if ( !to_refs_v2 )
+    {
+        errno = ENOMEM;
+        goto out;
+    }
+
+    to_refs_v2->fd = fd;
+    to_refs_v2->count = count;
+    to_refs_v2->domid = domid;
+
+    if ( (rc = ioctl(xgt->fd, IOCTL_GNTDEV_DMABUF_IMP_TO_REFS_V2, to_refs_v2)) )
+    {
+        GTERROR(xgt->logger, "ioctl DMABUF_IMP_TO_REFS_V2 failed");
+        goto out;
+    }
+
+    memcpy(refs, to_refs_v2->refs, count * sizeof(*refs));
+    *data_ofs = to_refs_v2->data_ofs;
+    rc = 0;
+
+out:
+    free(to_refs_v2);
+    return rc;
+}
+
 int osdep_gnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd)
 {
     struct ioctl_gntdev_dmabuf_imp_release release;
diff --git a/tools/libs/gnttab/minios.c b/tools/libs/gnttab/minios.c
index f78caadd3043..298416b2a98d 100644
--- a/tools/libs/gnttab/minios.c
+++ b/tools/libs/gnttab/minios.c
@@ -120,6 +120,14 @@ int osdep_gnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
     return -1;
 }
 
+int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                         uint32_t flags, uint32_t count,
+                                         const uint32_t *refs, uint32_t *fd,
+                                         uint32_t data_ofs)
+{
+    return -1;
+}
+
 int osdep_gnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt,
                                           uint32_t fd, uint32_t wait_to_ms)
 {
@@ -133,6 +141,13 @@ int osdep_gnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
     return -1;
 }
 
+int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                       uint32_t fd, uint32_t count,
+                                       uint32_t *refs, uint32_t *data_ofs)
+{
+    return -1;
+}
+
 int osdep_gnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd)
 {
     return -1;
diff --git a/tools/libs/gnttab/private.h b/tools/libs/gnttab/private.h
index c5e23639b141..07271637f609 100644
--- a/tools/libs/gnttab/private.h
+++ b/tools/libs/gnttab/private.h
@@ -39,6 +39,11 @@ int osdep_gnttab_dmabuf_exp_from_refs(xengnttab_handle *xgt, uint32_t domid,
                                       uint32_t flags, uint32_t count,
                                       const uint32_t *refs, uint32_t *fd);
 
+int osdep_gnttab_dmabuf_exp_from_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                         uint32_t flags, uint32_t count,
+                                         const uint32_t *refs, uint32_t *fd,
+                                         uint32_t data_ofs);
+
 int osdep_gnttab_dmabuf_exp_wait_released(xengnttab_handle *xgt,
                                           uint32_t fd, uint32_t wait_to_ms);
 
@@ -46,6 +51,10 @@ int osdep_gnttab_dmabuf_imp_to_refs(xengnttab_handle *xgt, uint32_t domid,
                                     uint32_t fd, uint32_t count,
                                     uint32_t *refs);
 
+int osdep_gnttab_dmabuf_imp_to_refs_v2(xengnttab_handle *xgt, uint32_t domid,
+                                       uint32_t fd, uint32_t count,
+                                       uint32_t *refs, uint32_t *data_ofs);
+
 int osdep_gnttab_dmabuf_imp_release(xengnttab_handle *xgt, uint32_t fd);
 
 int osdep_gntshr_open(xengntshr_handle *xgs);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 09:27:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 09:27:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbL0B-0004zG-90; Wed, 20 May 2020 09:27:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbL0A-0004zA-MK
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 09:27:02 +0000
X-Inumbo-ID: 0f5fcd2a-9a7c-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f5fcd2a-9a7c-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 09:27:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 380DFACC3;
 Wed, 20 May 2020 09:27:03 +0000 (UTC)
Subject: Re: [PATCH v6 03/15] x86/mm: rewrite virt_to_xen_l*e
To: Hongyan Xia <hx242@xen.org>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <949d2dc54fd7d3230db6a0934d73668a9999eb1a.1587735799.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5128cd55-99af-65eb-af82-1e84dcf108d0@suse.com>
Date: Wed, 20 May 2020 11:27:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <949d2dc54fd7d3230db6a0934d73668a9999eb1a.1587735799.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.04.2020 16:08, Hongyan Xia wrote:
> @@ -4998,31 +5005,40 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
>      if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
>      {
>          bool locking = system_state > SYS_STATE_boot;
> -        l2_pgentry_t *l2t = alloc_xen_pagetable();
> +        l2_pgentry_t *l2t;
> +        mfn_t l2mfn = alloc_xen_pagetable_new();
>  
> -        if ( !l2t )
> +        if ( mfn_eq(l2mfn, INVALID_MFN) )
> +        {
> +            UNMAP_DOMAIN_PAGE(pl3e);
>              return NULL;
> +        }
> +        l2t = map_domain_page(l2mfn);
>          clear_page(l2t);
> +        UNMAP_DOMAIN_PAGE(l2t);
>          if ( locking )
>              spin_lock(&map_pgdir_lock);
>          if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
>          {
> -            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
> -            l2t = NULL;
> +            l3e_write(pl3e, l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
> +            l2mfn = INVALID_MFN;
>          }
>          if ( locking )
>              spin_unlock(&map_pgdir_lock);
> -        if ( l2t )
> -            free_xen_pagetable(l2t);
> +        free_xen_pagetable_new(l2mfn);
>      }
>  
>      BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
> -    return l3e_to_l2e(*pl3e) + l2_table_offset(v);
> +    pl2e = map_l2t_from_l3e(*pl3e) + l2_table_offset(v);
> +    unmap_domain_page(pl3e);

To avoid undue pressure on the number of active mappings I'd like
to ask that you unmap first, then establish the new mapping.

> @@ -5095,6 +5119,10 @@ int map_pages_to_xen(
>  
>      while ( nr_mfns != 0 )
>      {
> +        /* Clean up mappings mapped in the previous iteration. */
> +        UNMAP_DOMAIN_PAGE(pl3e);
> +        UNMAP_DOMAIN_PAGE(pl2e);
> +
>          pl3e = virt_to_xen_l3e(virt);

As you don't add a comment here (and at other call sites of
virt_to_xen_l<N>e()), ...

> @@ -5260,9 +5288,12 @@ int map_pages_to_xen(
>              /* Normal page mapping. */
>              if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
>              {
> +                /* This forces the mapping to be populated. */
>                  pl1e = virt_to_xen_l1e(virt);

... I don't see why one needs adding here.

> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
>  #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
>  #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
>  #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
> -#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
> +
> +#define vmap_to_mfn(va) ({                                                  \
> +        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va));   \
> +        unsigned long pfn_ = l1e_get_pfn(*pl1e_);                           \

l1e_get_mfn()?

> +        unmap_domain_page(pl1e_);                                           \
> +        _mfn(pfn_); })
> +
>  #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
>  
>  #endif /* !defined(__ASSEMBLY__) */

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 09:31:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 09:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbL4O-0005oL-SR; Wed, 20 May 2020 09:31:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AW6E=7C=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbL4N-0005oF-8Y
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 09:31:23 +0000
X-Inumbo-ID: aab2176a-9a7c-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aab2176a-9a7c-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 09:31:22 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: F8eck1O+vaUAbn1FkBOm3lKkkBTvFU7rWh4Yk06Q7uS9V9Vtn2ldauRwocpU6NcuypdrHN+J0N
 +C5OMgi2zK1T/xToqcqX2mDymXHTBKvX76xuKkyFFq7IGvmS+ydhN5zs3Lcru2Tno9hMyUtY+F
 PpdZnMuIDHKKqGgH504Jr/EwujyXz7LOfoIe5sUDpsYgdZQhd0AGHdMnHC28ceH1uiLTEra0/q
 YhZXxQ/KBFAuGdhuLS1AwliKZuHz9x9ykMxwAuElgmWruDwewZRu9+18MZ8T++ztTHeS9WrPNM
 v2c=
X-SBRS: 2.7
X-MesageID: 18246321
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,413,1583211600"; d="scan'208";a="18246321"
Date: Wed, 20 May 2020 11:31:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
Message-ID: <20200520093106.GI54375@Air-de-Roger>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 10:31:38AM +0200, Jan Beulich wrote:
> On 14.05.2020 16:05, Roger Pau Monné wrote:
> > On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
> >> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
> >>   efi/mkreloc: efi/mkreloc.c
> >>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
> >>   
> >> +nocov-y += hweight.o
> >> +noubsan-y += hweight.o
> >> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> > 
> > Why not use clobbers in the asm to list the scratch registers? Is it
> > that much expensive?
> 
> The goal is to disturb the call sites as little as possible. There's
> no point avoiding the scratch registers when no call is made (i.e.
> when the POPCNT insn can be used). Taking away from the compiler 7
> out of 15 registers (that it can hold live data in) seems quite a
> lot to me.

IMO using -ffixed-reg for all those registers is even worse, as that
prevents the generated code in hweight from using any of those, thus
greatly limiting the number of registers and likely making the
generated code rely heavily on pushing and popping from the stack?

This also has the side effect of limiting the usage of popcnt to gcc,
which IMO is also not ideal.

I also wonder: since the pre-patching asm in place is a call
instruction, wouldn't the inline asm at build time already have to
assume that the scratch registers are clobbered?

> >> --- a/xen/include/asm-x86/bitops.h
> >> +++ b/xen/include/asm-x86/bitops.h
> >> @@ -475,9 +475,36 @@ static inline int fls(unsigned int x)
> >>    *
> >>    * The Hamming Weight of a number is the total number of bits set in it.
> >>    */
> >> +#ifndef __clang__
> >> +/* POPCNT encodings with %{r,e}di input and %{r,e}ax output: */
> >> +#define POPCNT_64 ".byte 0xF3, 0x48, 0x0F, 0xB8, 0xC7"
> >> +#define POPCNT_32 ".byte 0xF3, 0x0F, 0xB8, 0xC7"
> >> +
> >> +#define hweight_(n, x, insn, setup, cout, cin) ({                         \
> >> +    unsigned int res_;                                                    \
> >> +    /*                                                                    \
> >> +     * For the function call the POPCNT input register needs to be marked \
> >> +     * modified as well. Set up a local variable of appropriate type      \
> >> +     * for this purpose.                                                  \
> >> +     */                                                                   \
> >> +    typeof((uint##n##_t)(x) + 0U) val_ = (x);                             \
> >> +    alternative_io(setup "; call generic_hweight" #n,                     \
> >> +                   insn, X86_FEATURE_POPCNT,                              \
> >> +                   ASM_OUTPUT2([res] "=a" (res_), [val] cout (val_)),     \
> >> +                   [src] cin (val_));                                     \
> >> +    res_;                                                                 \
> >> +})
> >> +#define hweight64(x) hweight_(64, x, POPCNT_64, "", "+D", "g")
> >> +#define hweight32(x) hweight_(32, x, POPCNT_32, "", "+D", "g")
> >> +#define hweight16(x) hweight_(16, x, "movzwl %w[src], %[val]; " POPCNT_32, \
> >> +                              "mov %[src], %[val]", "=&D", "rm")
> >> +#define hweight8(x)  hweight_( 8, x, "movzbl %b[src], %[val]; " POPCNT_32, \
> >> +                              "mov %[src], %[val]", "=&D", "rm")
> > 
> > Why not just convert types < 32bits into uint32_t and avoid the asm
> > prefix? You are already converting them in hweight_ to an uintX_t.
> 
> I don't think I do - there's a conversion to the promoted type
> when adding in an unsigned int value (which is there to retain
> uint64_t for the 64-bit case, while using unsigned int for all
> smaller sizes, as per the parameter types of generic_hweight*()).
> 
> > That would made the asm simpler as you would then only need to make
> > sure the input is in %rdi and the output is fetched from %rax?
> 
> That's an option, yes, but I again wanted to limit the impact to
> generated code (including code size) as much as possible.
> generic_hweight{8,16}() take unsigned int arguments, not
> uint{8,16}_t ones. Hence at the call site (when a function call
> is needed) no zero extension is necessary. Therefore the MOVZ
> above is to (mainly) overlay the MOV during patching, while the
> POPCNT is to (mainly) overlay the CALL.
> 
> If the simpler asm()-s were considered more important than the
> quality of the generated code, I think we could simply have
> 
> #define hweight16(x) hweight32((uint16_t)(x))
> #define hweight8(x)  hweight32(( uint8_t)(x))

I would definitely prefer simpler asm for sure, so getting rid of the
prefix would be a clear +1 from me. Unless the prefixed version has a
measurable performance impact during normal operation.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 20 09:36:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 09:36:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbL8s-0005zs-Fe; Wed, 20 May 2020 09:36:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbL8q-0005zn-NY
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 09:36:00 +0000
X-Inumbo-ID: 505e4fc6-9a7d-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 505e4fc6-9a7d-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 09:36:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B6AF5AA4F;
 Wed, 20 May 2020 09:36:01 +0000 (UTC)
Subject: Re: [PATCH v6 06/15] x86_64/mm: introduce pl2e in paging_init
To: Hongyan Xia <hx242@xen.org>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <40759ec2fa21e365ad8a59ab0de59b3f7f5fb42a.1587735799.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <823027e8-dd72-bb02-6fca-037202d2432b@suse.com>
Date: Wed, 20 May 2020 11:35:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <40759ec2fa21e365ad8a59ab0de59b3f7f5fb42a.1587735799.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.04.2020 16:08, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> Introduce pl2e so that we can use l2_ro_mpt to point to the page table
> itself.
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>

In principle I'm fine with the change, as a preparatory one for
the next patch. The description, however, suggests this is a
change for the sake of making a change: introducing a local
variable that's not really needed for anything.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 09:46:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 09:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLIe-0006wC-F9; Wed, 20 May 2020 09:46:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbLIc-0006w7-W8
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 09:46:07 +0000
X-Inumbo-ID: b997d02e-9a7e-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b997d02e-9a7e-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 09:46:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C6AB8AED9;
 Wed, 20 May 2020 09:46:07 +0000 (UTC)
Subject: Re: [PATCH v6 07/15] x86_64/mm: switch to new APIs in paging_init
To: Hongyan Xia <hx242@xen.org>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <0655cc2d3dc27141ef102076c4ad390a37191b37.1587735799.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <80d185d4-c7c3-53b9-d851-ab56ea4bc755@suse.com>
Date: Wed, 20 May 2020 11:46:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <0655cc2d3dc27141ef102076c4ad390a37191b37.1587735799.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.04.2020 16:08, Hongyan Xia wrote:
> @@ -493,22 +494,28 @@ void __init paging_init(void)
>          if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
>                _PAGE_PRESENT) )
>          {
> -            l3_pgentry_t *pl3t = alloc_xen_pagetable();
> +            l3_pgentry_t *pl3t;
> +            mfn_t l3mfn = alloc_xen_pagetable_new();
>  
> -            if ( !pl3t )
> +            if ( mfn_eq(l3mfn, INVALID_MFN) )
>                  goto nomem;
> +
> +            pl3t = map_domain_page(l3mfn);
>              clear_page(pl3t);
>              l4e_write(&idle_pg_table[l4_table_offset(va)],
> -                      l4e_from_paddr(__pa(pl3t), __PAGE_HYPERVISOR_RW));
> +                      l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR_RW));
> +            unmap_domain_page(pl3t);

This can be moved up, and once it is you'll notice that you're
open-coding clear_domain_page(). I wonder whether I didn't spot
the same in other patches of this series.

Besides the previously raised point of possibly having an
allocation function that returns a mapping of the page right
away (not needed here): are there many cases where allocation
of a new page table isn't accompanied by clearing the page? If
not, should the function perhaps do the clearing itself (and
then, once it holds a mapping anyway, it would be all the more
natural to return that mapping to callers that want one)?

> @@ -662,6 +677,8 @@ void __init paging_init(void)
>      return;
>  
>   nomem:
> +    UNMAP_DOMAIN_PAGE(l2_ro_mpt);
> +    UNMAP_DOMAIN_PAGE(l3_ro_mpt);
>      panic("Not enough memory for m2p table\n");
>  }

I don't think this is a very useful addition.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 09:54:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 09:54:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLQe-0007ry-CL; Wed, 20 May 2020 09:54:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbLQd-0007rt-HR
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 09:54:23 +0000
X-Inumbo-ID: e18e8e64-9a7f-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e18e8e64-9a7f-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 09:54:22 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7F383AC49;
 Wed, 20 May 2020 09:54:24 +0000 (UTC)
Subject: Re: [PATCH v6 08/15] x86_64/mm: switch to new APIs in setup_m2p_table
To: Hongyan Xia <hx242@xen.org>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <a5e7c92fdc538c23f0173bec8e3b026dcf665c11.1587735799.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ac76fddd-06f8-9e60-24b7-bd942265b3c6@suse.com>
Date: Wed, 20 May 2020 11:54:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a5e7c92fdc538c23f0173bec8e3b026dcf665c11.1587735799.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.04.2020 16:08, Hongyan Xia wrote:
> --- a/xen/arch/x86/x86_64/mm.c
> +++ b/xen/arch/x86/x86_64/mm.c
> @@ -379,13 +379,13 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
>  {
>      unsigned long i, va, smap, emap;
>      unsigned int n;
> -    l2_pgentry_t *l2_ro_mpt = NULL;
>      l3_pgentry_t *l3_ro_mpt = NULL;
>      int ret = 0;
>  
>      ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
>              & _PAGE_PRESENT);
> -    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
> +    l3_ro_mpt = map_l3t_from_l4e(
> +                    idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
>  
>      smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
>      emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
> @@ -424,6 +424,7 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
>                  break;
>          if ( n < CNT )
>          {
> +            l2_pgentry_t *l2_ro_mpt;
>              mfn_t mfn = alloc_hotadd_mfn(info);
>  
>              ret = map_pages_to_xen(
> @@ -440,30 +441,30 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
>                    _PAGE_PSE));
>              if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
>                _PAGE_PRESENT )
> -                l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
> -                  l2_table_offset(va);
> +                l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]);
>              else
>              {
> -                l2_ro_mpt = alloc_xen_pagetable();
> -                if ( !l2_ro_mpt )
> +                mfn_t l2_ro_mpt_mfn = alloc_xen_pagetable_new();
> +
> +                if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
>                  {
>                      ret = -ENOMEM;
>                      goto error;
>                  }
>  
> +                l2_ro_mpt = map_domain_page(l2_ro_mpt_mfn);
>                  clear_page(l2_ro_mpt);
>                  l3e_write(&l3_ro_mpt[l3_table_offset(va)],
> -                          l3e_from_paddr(__pa(l2_ro_mpt),
> -                                         __PAGE_HYPERVISOR_RO | _PAGE_USER));
> -                l2_ro_mpt += l2_table_offset(va);
> +                          l3e_from_mfn(l2_ro_mpt_mfn,
> +                                       __PAGE_HYPERVISOR_RO | _PAGE_USER));
>              }
> +            l2_ro_mpt += l2_table_offset(va);
>  
>              /* NB. Cannot be GLOBAL: guest user mode should not see it. */
>              l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
>                     /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
> +            unmap_domain_page(l2_ro_mpt);
>          }
> -        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
> -            l2_ro_mpt = NULL;

I think you want to consider retaining these two lines and the wider
scope of l2_ro_mpt, to leverage it to avoid mapping and unmapping
the same L2 page over and over on each loop iteration.
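
To make the cost concrete, here is a rough, self-contained sketch of the
two patterns (the map/unmap helpers, counters and page-size constant
below are dummy stand-ins invented purely for illustration, not the real
map_domain_page()/unmap_domain_page() machinery):

```c
#include <assert.h>
#include <stddef.h>

#define ENTRIES_PER_L2 512   /* entries covered by one mapped L2 page */

static int map_count;        /* instrumentation: how often we "map" */

/* Dummy stand-in for map_domain_page(): count calls, return non-NULL. */
static void *map_l2(unsigned long idx)
{
    map_count++;
    return (void *)(idx + 1);
}

/* Dummy stand-in for unmap_domain_page(). */
static void unmap_l2(void *m) { (void)m; }

/* Naive pattern: map and unmap on every loop iteration. */
int walk_naive(unsigned long start, unsigned long end)
{
    map_count = 0;
    for ( unsigned long i = start; i < end; i++ )
    {
        void *l2 = map_l2(i / ENTRIES_PER_L2);
        /* ... write the L2 entry here ... */
        unmap_l2(l2);
    }
    return map_count;
}

/* Cached pattern: keep the mapping, remap only when crossing into a
 * new L2 page, and unmap once after the loop. */
int walk_cached(unsigned long start, unsigned long end)
{
    void *l2 = NULL;
    unsigned long cur = (unsigned long)-1;

    map_count = 0;
    for ( unsigned long i = start; i < end; i++ )
    {
        if ( i / ENTRIES_PER_L2 != cur )
        {
            if ( l2 )
                unmap_l2(l2);
            cur = i / ENTRIES_PER_L2;
            l2 = map_l2(cur);
        }
        /* ... write the L2 entry here ... */
    }
    if ( l2 )
        unmap_l2(l2);
    return map_count;
}
```

With 512 entries covered per mapped L2 page, the cached variant maps
each L2 page once rather than once per loop iteration.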

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 09:56:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 09:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLSY-0007z8-Qt; Wed, 20 May 2020 09:56:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbLSX-0007z0-Ht
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 09:56:21 +0000
X-Inumbo-ID: 27ae8661-9a80-11ea-a9dd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 27ae8661-9a80-11ea-a9dd-12813bfff9fa;
 Wed, 20 May 2020 09:56:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0EFEAAC49;
 Wed, 20 May 2020 09:56:22 +0000 (UTC)
Subject: Re: [PATCH v6 15/15] x86/mm: drop _new suffix for page table APIs
To: Hongyan Xia <hx242@xen.org>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <9ff8ad5d4ba7602f3d7137a650aba5de52dacd80.1587735799.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <49a8bbc1-339e-b7aa-1959-1e68a15777f7@suse.com>
Date: Wed, 20 May 2020 11:56:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <9ff8ad5d4ba7602f3d7137a650aba5de52dacd80.1587735799.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.04.2020 16:09, Hongyan Xia wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> No functional change.
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Wed May 20 10:09:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:09:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLeu-0000dt-Rw; Wed, 20 May 2020 10:09:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbLet-0000do-MA
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:09:07 +0000
X-Inumbo-ID: f09c54c0-9a81-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f09c54c0-9a81-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 10:09:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BA678ABBE;
 Wed, 20 May 2020 10:09:08 +0000 (UTC)
Subject: Re: [PATCH v6 13/15] x86/mm: drop old page table APIs
To: Hongyan Xia <hx242@xen.org>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <d6a642544c5ce0b975cdab8ad054f7a348f17c8d.1587735799.git.hongyxia@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <be3d3dd0-6001-41a5-7390-44dc8c327d8f@suse.com>
Date: Wed, 20 May 2020 12:09:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d6a642544c5ce0b975cdab8ad054f7a348f17c8d.1587735799.git.hongyxia@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 24.04.2020 16:09, Hongyan Xia wrote:
> --- a/xen/arch/x86/smpboot.c
> +++ b/xen/arch/x86/smpboot.c
> @@ -815,7 +815,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
>      if ( !opt_xpti_hwdom && !opt_xpti_domu )
>          return 0;
>  
> -    rpt = alloc_xen_pagetable();
> +    rpt = alloc_xenheap_page();

So the idea of not using alloc_domheap_page() + map_domain_page_global()
here is that in the long run alloc_xenheap_page() will resolve to just
this? If so, while I'd have preferred the greater flexibility until then,
this is fair enough, i.e.
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 10:16:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:16:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLlj-0001W8-Mg; Wed, 20 May 2020 10:16:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jY/2=7C=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jbLli-0001W3-Jr
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:16:10 +0000
X-Inumbo-ID: ec8d1d96-9a82-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec8d1d96-9a82-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 10:16:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 431F8B1F2;
 Wed, 20 May 2020 10:16:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] docs: update xenstore-migration.md
Date: Wed, 20 May 2020 12:16:05 +0200
Message-Id: <20200520101605.4263-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Update connection record details: make flags common for sockets and
domains, and add pending incoming data.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/designs/xenstore-migration.md | 63 ++++++++++++++++--------------
 1 file changed, 34 insertions(+), 29 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 34a2afd17e..e361d6b5e7 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -147,31 +147,45 @@ the domain being migrated.
 ```
     0       1       2       3       4       5       6       7    octet
 +-------+-------+-------+-------+-------+-------+-------+-------+
-| conn-id                       | conn-type     | conn-spec
+| conn-id                       | conn-type     | flags         |
++-------------------------------+---------------+---------------+
+| conn-spec
 ...
 +-------------------------------+-------------------------------+
-| data-len                      | data
-+-------------------------------+
+| in-data-len                   | out-data-len                  |
++-------------------------------+-------------------------------+
+| data
 ...
 ```
 
 
-| Field       | Description                                     |
-|-------------|-------------------------------------------------|
-| `conn-id`   | A non-zero number used to identify this         |
-|             | connection in subsequent connection-specific    |
-|             | records                                         |
-|             |                                                 |
-| `conn-type` | 0x0000: shared ring                             |
-|             | 0x0001: socket                                  |
-|             | 0x0002 - 0xFFFF: reserved for future use        |
-|             |                                                 |
-| `conn-spec` | See below                                       |
-|             |                                                 |
-| `data-len`  | The length (in octets) of any pending data not  |
-|             | yet written to the connection                   |
-|             |                                                 |
-| `data`      | Pending data (may be empty)                     |
+| Field          | Description                                  |
+|----------------|----------------------------------------------|
+| `conn-id`      | A non-zero number used to identify this      |
+|                | connection in subsequent connection-specific |
+|                | records                                      |
+|                |                                              |
+| `flags`        | A bit-wise OR of:                            |
+|                | 0001: read-only                              |
+|                |                                              |
+| `conn-type`    | 0x0000: shared ring                          |
+|                | 0x0001: socket                               |
+|                | 0x0002 - 0xFFFF: reserved for future use     |
+|                |                                              |
+| `conn-spec`    | See below                                    |
+|                |                                              |
+| `in-data-len`  | The length (in octets) of any data read      |
+|                | from the connection not yet processed        |
+|                |                                              |
+| `out-data-len` | The length (in octets) of any pending data   |
+|                | not yet written to the connection            |
+|                |                                              |
+| `data`         | Pending data, first read data, then written  |
+|                | data (any of both may be empty)              |
+
+In the case of a live update, the connection record for the connection via
+which the live update command was issued will contain the response to the
+live update command in the pending write data.
 
 The format of `conn-spec` is dependent upon `conn-type`.
 
@@ -182,8 +196,6 @@ For `shared ring` connections it is as follows:
 
 ```
     0       1       2       3       4       5       6       7    octet
-                                                +-------+-------+
-                                                | flags         |
 +---------------+---------------+---------------+---------------+
 | domid         | tdomid        | evtchn                        |
 +-------------------------------+-------------------------------+
@@ -198,8 +210,6 @@ For `shared ring` connections it is as follows:
 |           | it has been subject to an SET_TARGET              |
 |           | operation [2] or DOMID_INVALID [3] otherwise      |
 |           |                                                   |
-| `flags`   | Must be zero                                      |
-|           |                                                   |
 | `evtchn`  | The port number of the interdomain channel used   |
 |           | by `domid` to communicate with xenstored          |
 |           |                                                   |
@@ -211,8 +221,6 @@ For `socket` connections it is as follows:
 
 
 ```
-                                                +-------+-------+
-                                                | flags         |
 +---------------+---------------+---------------+---------------+
 | socket-fd                     | pad                           |
 +-------------------------------+-------------------------------+
@@ -221,9 +229,6 @@ For `socket` connections it is as follows:
 
 | Field       | Description                                     |
 |-------------|-------------------------------------------------|
-| `flags`     | A bit-wise OR of:                               |
-|             | 0001: read-only                                 |
-|             |                                                 |
 | `socket-fd` | The file descriptor of the connected socket     |
 
 This type of connection is only relevant for live update, where the xenstored
@@ -398,4 +403,4 @@ explanation of node permissions.
 
 [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;hb=HEAD#l612
 
-[4] https://wiki.xen.org/wiki/XenBus
\ No newline at end of file
+[4] https://wiki.xen.org/wiki/XenBus
-- 
2.26.1
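
As a sketch of the fixed-width fields in the updated record layout (the
struct and macro names below are invented for illustration; byte order,
packing conventions beyond the stated field widths, and the
variable-length conn-spec/data portions are not modeled):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* The "read-only" bit from the flags table; name is illustrative. */
#define XS_CONN_FLAG_RDONLY 0x0001

/* Leading fixed fields, read off the diagram: conn-id occupies
 * octets 0-3, conn-type octets 4-5, flags octets 6-7. */
struct conn_rec_head {
    uint32_t conn_id;     /* non-zero connection identifier */
    uint16_t conn_type;   /* 0x0000 shared ring, 0x0001 socket */
    uint16_t flags;       /* bit-wise OR, e.g. XS_CONN_FLAG_RDONLY */
} __attribute__((packed));

/* After the variable-length conn-spec come two 4-octet lengths,
 * followed by the data itself (read data first, then written data). */
struct conn_rec_data_head {
    uint32_t in_data_len;   /* data read but not yet processed */
    uint32_t out_data_len;  /* data pending, not yet written */
    /* uint8_t data[]; */
} __attribute__((packed));
```

The structs merely document field sizes and order; a real
producer/consumer would still have to honour the record's wire encoding.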



From xen-devel-bounces@lists.xenproject.org Wed May 20 10:17:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:17:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLmq-0001do-60; Wed, 20 May 2020 10:17:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbLmp-0001dj-Kr
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:17:19 +0000
X-Inumbo-ID: 13ed24f9-9a83-11ea-a9e1-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 13ed24f9-9a83-11ea-a9e1-12813bfff9fa;
 Wed, 20 May 2020 10:17:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C04E5B1EF;
 Wed, 20 May 2020 10:17:17 +0000 (UTC)
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
Date: Wed, 20 May 2020 12:17:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520093106.GI54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.2020 11:31, Roger Pau Monné wrote:
> On Wed, May 20, 2020 at 10:31:38AM +0200, Jan Beulich wrote:
>> On 14.05.2020 16:05, Roger Pau Monné wrote:
>>> On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
>>>> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
>>>>   efi/mkreloc: efi/mkreloc.c
>>>>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>>>   
>>>> +nocov-y += hweight.o
>>>> +noubsan-y += hweight.o
>>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>>>
>>> Why not use clobbers in the asm to list the scratch registers? Is it
>>> that much expensive?
>>
>> The goal is to disturb the call sites as little as possible. There's
>> no point avoiding the scratch registers when no call is made (i.e.
>> when the POPCNT insn can be used). Taking away from the compiler 7
>> out of 15 registers (that it can hold live data in) seems quite a
>> lot to me.
> 
> IMO using -ffixed-reg for all those registers is even worse, as that
> prevents the generated code in hweight from using any of those, thus
> greatly limiting the number of registers and likely making the
> generated code rely heavily on pushing and popping from the stack?

Okay, that's the other side of the idea behind all this: Virtually no
hardware we run on will lack POPCNT support, hence the quality of
these fallback routines matters only on very old hardware, where we
likely don't perform optimally already anyway.

> This also has the side effect of limiting the usage of popcnt to gcc,
> which IMO is also not ideal.

Agreed. I don't know enough about clang to be able to think of
possible alternatives. In any event there's no change to current
behavior for hypervisors built with clang.

> I also wondered, since the in-place asm before patching is a call
> instruction, wouldn't the inline asm at build time already assume that the
> scratch registers are clobbered?

That would imply the compiler peeks into the string literal of the
asm(). At least gcc doesn't, and even if it did it couldn't infer an
ABI from seeing a CALL insn.
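
For context, the kind of software fallback at stake is the usual
parallel (SWAR) bit count; a generic C version (an illustration, not
Xen's actual hweight implementation) can be checked against the builtin
that a POPCNT-capable build would effectively boil down to:

```c
#include <assert.h>
#include <stdint.h>

/* Portable 64-bit population count: fold pair/nibble/byte sums in
 * parallel, then gather the byte sums with a multiply. */
static unsigned int hweight64_sw(uint64_t w)
{
    w -= (w >> 1) & 0x5555555555555555ULL;                       /* 2-bit sums */
    w  = (w & 0x3333333333333333ULL) +
         ((w >> 2) & 0x3333333333333333ULL);                     /* 4-bit sums */
    w  = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL;                 /* byte sums  */
    return (w * 0x0101010101010101ULL) >> 56;                    /* total      */
}
```

On hardware with POPCNT the whole routine collapses to a single
instruction, which is why the fallback's register discipline is the only
contentious part.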

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 10:18:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:18:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLnh-0001jW-HG; Wed, 20 May 2020 10:18:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbLng-0001jK-67
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:18:12 +0000
X-Inumbo-ID: 325c8a8c-9a83-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 325c8a8c-9a83-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 10:18:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hF5aIMMfRUp6EW25+tUAk8ULKTlxeJcZmh6D/WopuCc=; b=Ul2w7TwlzJyuBicEJuyeg0yPm
 7GTPU3o2b++oJunReR+ucPZIWs6ceyd08q8UZtu1s6VXKSoxWImJWuRP5qDJUM1jlv3FE/pMkig+x
 XCnvMaaDdr18GJhQ3Y2X125kikh7wrVI5ZdPFXh0+GurlcXGrpbXbWYwZujtYAerDE++k=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbLna-0006Zt-81; Wed, 20 May 2020 10:18:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbLnZ-0004Xc-W5; Wed, 20 May 2020 10:18:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbLnZ-000556-VU; Wed, 20 May 2020 10:18:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150274: all pass - PUSHED
X-Osstest-Versions-This: xen=e235fa2794c95365519eac714d6ea82f8e64752e
X-Osstest-Versions-That: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 10:18:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150274 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150274/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e
baseline version:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f

Last test of basis   150224  2020-05-17 09:19:02 Z    3 days
Testing same since   150274  2020-05-20 09:20:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Eric Shelton <eshelton@pobox.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Simon Gaiser <simon@invisiblethingslab.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   664e1bc12f..e235fa2794  e235fa2794c95365519eac714d6ea82f8e64752e -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed May 20 10:20:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:20:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLqA-0002Zg-12; Wed, 20 May 2020 10:20:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AW6E=7C=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbLq9-0002Zb-1j
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:20:45 +0000
X-Inumbo-ID: 9037e76e-9a83-11ea-b07b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9037e76e-9a83-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 10:20:44 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Y4wQz6edF6UM0teSuTD5OyGrhtAOEpz14EK6fx5zCHKyQPc4kiUvJ2KOSNipNdLnnvbuIjrEFx
 IQh6qCgjbOsQaD6HJUdnto7z5H1A2AaYjBXm0VC6AKzaLDRoNpESKHoxMOIgXV1gi9Dc8/7B8v
 cF5Tv+YVImQhjO51/eQ4l2pPIsPFiGqKF1rDc/3Nol4yaNYCaSnURs5O0SgPfTnXnVkwIFcSs1
 W7Rgm+AnRY3i5nIsf6gC/UoANPDG54jCA8Da3ff4BNCyhPcZ22GWiVCOXYOfbAV8cPHInBYtM7
 pAU=
X-SBRS: 2.7
X-MesageID: 18230849
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,413,1583211600"; d="scan'208";a="18230849"
Date: Wed, 20 May 2020 12:20:35 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 1/3] x86: relax GDT check in arch_set_info_guest()
Message-ID: <20200520093656.GJ54375@Air-de-Roger>
References: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
 <acbaead9-0f6c-3606-e809-57dafe9b3f01@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <acbaead9-0f6c-3606-e809-57dafe9b3f01@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 09:53:50AM +0200, Jan Beulich wrote:
> It is wrong for us to check frames beyond the guest specified limit
> (in the compat case another loop bound is already correct).
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 20 10:28:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:28:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbLxO-0002o1-Qd; Wed, 20 May 2020 10:28:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AW6E=7C=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbLxN-0002nw-EJ
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:28:13 +0000
X-Inumbo-ID: 9b6e3d30-9a84-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b6e3d30-9a84-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 10:28:12 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 5e78sDjByEWZjSRz+Ncz0hl6WYyS1Xob+q7KkspJ5AGk3oamWNP5llub8m2/MQhvTAKe0NXI6k
 s46QKT+lat1DyHXDClnvy/go+A7wAkwToGmvmAzDaAx1RYrOjHjnFnEmGJKpAbRL8FV1YgleJU
 loP5l/c9uHRGrAaMnwO+INl0/69Vwpp87Dnk2PWxjmM05sv7OzrkopImHok5ZbAkqIwhd4QXxt
 oqTtJT6VU6qfvoRzUuvCwU/KyvR6HRD9Nr2lnFt/A9AVGltI1ANZ03Hja3RBiSakxrl03UghmF
 aWQ=
X-SBRS: 2.7
X-MesageID: 18331715
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,413,1583211600"; d="scan'208";a="18331715"
Date: Wed, 20 May 2020 12:28:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
Message-ID: <20200520102805.GK54375@Air-de-Roger>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
 <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 12:17:15PM +0200, Jan Beulich wrote:
> On 20.05.2020 11:31, Roger Pau Monné wrote:
> > On Wed, May 20, 2020 at 10:31:38AM +0200, Jan Beulich wrote:
> >> On 14.05.2020 16:05, Roger Pau Monné wrote:
> >>> On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
> >>>> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
> >>>>   efi/mkreloc: efi/mkreloc.c
> >>>>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
> >>>>   
> >>>> +nocov-y += hweight.o
> >>>> +noubsan-y += hweight.o
> >>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> >>>
> >>> Why not use clobbers in the asm to list the scratch registers? Is it
> >>> that much expensive?
> >>
> >> The goal is to disturb the call sites as little as possible. There's
> >> no point avoiding the scratch registers when no call is made (i.e.
> >> when the POPCNT insn can be used). Taking away from the compiler 7
> >> out of 15 registers (that it can hold live data in) seems quite a
> >> lot to me.
> > 
> > IMO using -ffixed-reg for all those registers is even worse, as that
> > prevents the generated code in hweight from using any of those, thus
> > greatly limiting the number of registers and likely making the
> > generated code rely heavily on pushing and popping from the stack?
> 
> Okay, that's the other side of the idea behind all this: Virtually no
> hardware we run on will lack POPCNT support, hence the quality of
> these fallback routines matters only on very old hardware, where we
> likely don't perform optimally already anyway.
> 
> > This also has the side effect of limiting the usage of popcnt to gcc,
> > which IMO is also not ideal.
> 
> Agreed. I don't know enough about clang to be able to think of
> possible alternatives. In any event there's no change to current
> behavior for hypervisors built with clang.
> 
> > I also wondered, since the in-place asm before patching is a call
> > instruction, wouldn't inline asm at build time already assume that the
> > scratch registers are clobbered?
> 
> That would imply the compiler peeks into the string literal of the
> asm(). At least gcc doesn't, and even if it did it couldn't infer an
> ABI from seeing a CALL insn.

Please bear with me, but then I don't understand what Linux is doing
in arch/x86/include/asm/arch_hweight.h. I see no clobbers there, nor
does it seem like the __sw_hweight{32/64} functions are built without
using the scratch registers.

Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 20 10:34:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:34:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbM3J-0003gR-IC; Wed, 20 May 2020 10:34:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/hkT=7C=zohomail.eu=elliotkillick@srs-us1.protection.inumbo.net>)
 id 1jbM3H-0003gM-Ix
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:34:20 +0000
X-Inumbo-ID: 7538a44c-9a85-11ea-b07b-bc764e2007e4
Received: from sender11-pp-o93.zoho.eu (unknown [31.186.226.251])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7538a44c-9a85-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 10:34:18 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1589970856; cv=none; d=zohomail.eu; s=zohoarc; 
 b=XVSwE/m1XA/r1o/v3UFNMI7+eMIYA6oss6uN9RUcp+NeHmgxiPzqmFdV0YXvlIzBawpUEYGqM+VS74/i5UfDoDtTV6PUln0yJgadldMXHvsnASw4wRPVlUUx2CKmmanijmL+qxDRvtD9ltRAY105tBeMC7fo/JRKMQ/ui8SL2BM=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.eu;
 s=zohoarc; t=1589970856;
 h=Content-Type:Content-Transfer-Encoding:Date:From:MIME-Version:Message-ID:Subject:To;
 bh=pvI8uhi+G5NcTAaxI75Nu7A+wbFrd2Zrmhd/Q23JWAo=; 
 b=BGpaPCXD1i8bZqFyyDLFj7GjOtB2sgXllKer5MiJ7Rkc47yqBfjaFEPnZMJLdhzizG5eUatZUIpAmbNj+sU7uzLyLoT5tJbnWVR1pJxuESsvVfOASvTDKWgHSonisMpzEj/2qde37m+j8zH/ZZyZy0MXKm91g+qrYNJYuMEr29g=
ARC-Authentication-Results: i=1; mx.zohomail.eu;
 dkim=pass  header.i=zohomail.eu;
 spf=pass  smtp.mailfrom=elliotkillick@zohomail.eu;
 dmarc=pass header.from=<elliotkillick@zohomail.eu>
 header.from=<elliotkillick@zohomail.eu>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1589970856; 
 s=zoho; d=zohomail.eu; i=elliotkillick@zohomail.eu;
 h=To:From:Message-ID:Subject:Date:MIME-Version:Content-Type:Content-Transfer-Encoding;
 bh=pvI8uhi+G5NcTAaxI75Nu7A+wbFrd2Zrmhd/Q23JWAo=;
 b=Q79oMTlvpQw04cSBVCBQVVaIuIyB5zo3jww4+eqn2zYGOI9DrWuWmUaoBV2+4pzZ
 yPKlcl90U/3n493RD/OZYWV0Q+buoU8UVGgpt4pqQgu+2JUEExnffGoh9ox6E1XU/ui
 5Z3SGeTL4OEhFR+3wXzHsl05CYhkuH7n9BFcMip0=
Received: from [10.137.0.35]
 (CPEac202e7c9cc3-CMac202e7c9cc0.cpe.net.cable.rogers.com [99.231.147.74]) by
 mx.zoho.eu with SMTPS id 1589970853561616.0091882740427;
 Wed, 20 May 2020 12:34:13 +0200 (CEST)
To: xen-devel@lists.xenproject.org
From: Elliot Killick <elliotkillick@zohomail.eu>
Message-ID: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
Subject: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
Date: Wed, 20 May 2020 10:33:56 +0000
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

Xen is crashing Windows 10 (64-bit) VMs consistently whenever IDA
Debugger
(https://www.hex-rays.com/products/ida/support/download_freeware/)
launches the Local Windows Debugger. The crash occurs when trying to
launch the debugger against any executable (e.g. calc.exe) right at the
time IDA says it is "Moving segment from <X address> to <Y address>".

Tested on Windows 7, 8 and Linux as well, but the bug is only triggered
on Windows 10. It happens whether or not IDA is running with
administrator privileges, and no drivers/VM tools are installed.
Windows has a bug check code of zero and leaves no memory dump, there
is nothing in the QEMU logs in Dom0, and the domain just powers off
immediately, leaving only a record of the incident in hypervisor.log.
So it does appear to be a Xen issue. This is a modern Intel CPU.

Does anyone have some ideas on what may be causing this?

Thank you,
Elliot

hypervisor.log:

(XEN) d24v1 vmentry failure (reason 0x80000022): MSR loading (entry 1)
(XEN)   msr 000001dd val 1ffff80676f52be5 (mbz 0)
(XEN) ************* VMCS Area **************
(XEN) *** Guest State ***
(XEN) CR0: actual=0x0000000080050031, shadow=0x0000000080050031, gh_mask=ffffffffffffffff
(XEN) CR4: actual=0x0000000000172678, shadow=0x0000000000170678, gh_mask=ffffffffffffffff
(XEN) CR3 = 0x00000001b2725002
(XEN) RSP = 0xffff960c962d1af8 (0xffff960c962d1af8)  RIP = 0xfffff80676dc29d0 (0xfffff80676dc29d0)
(XEN) RFLAGS=0x00000002 (0x00000002)  DR7 = 0x0000000000000400
(XEN) Sysenter RSP=0000000000000000 CS:RIP=0000:0000000000000000
(XEN)        sel  attr  limit    base
(XEN)   CS: 0010 0209b 00000000 0000000000000000
(XEN)   DS: 002b 0c0f3 ffffffff 0000000000000000
(XEN)   SS: 0000 1c000 ffffffff 0000000000000000
(XEN)   ES: 002b 0c0f3 ffffffff 0000000000000000
(XEN)   FS: 0053 040f3 00007c00 0000000000000000
(XEN)   GS: 002b 0c0f3 ffffffff ffffb181c2d00000
(XEN) GDTR:            00000057 ffffb181c2d15fb0
(XEN) LDTR: 0000 1c000 ffffffff 0000000000000000
(XEN) IDTR:            00000fff ffffb181c2d13000
(XEN)   TR: 0040 0008b 00000067 ffffb181c2d14000
(XEN) EFER = 0x0000000000000400  PAT = 0x0007010600070106
(XEN) PreemptionTimer = 0x00000000  SM Base = 0x00000000
(XEN) DebugCtl = 0x0000000000000001  DebugExceptions = 0x0000000000000000
(XEN) Interruptibility = 00000000  ActivityState = 00000000
(XEN) *** Host State ***
(XEN) RIP = 0xffff82d0801f0840 (vmx_asm_vmexit_handler)  RSP = 0xffff8304204f7f70
(XEN) CS=e008 SS=0000 DS=0000 ES=0000 FS=0000 GS=0000 TR=e040
(XEN) FSBase=0000000000000000 GSBase=0000000000000000 TRBase=ffff83042bb02c80
(XEN) GDTBase=ffff83042baf2000 IDTBase=ffff8304204ee000
(XEN) CR0=0000000080050033 CR3=0000000417a40000 CR4=00000000001526e0
(XEN) Sysenter RSP=ffff8304204f7fa0 CS:RIP=e008:ffff82d0802144b0
(XEN) EFER = 0x0000000000000000  PAT = 0x0000050100070406
(XEN) *** Control State ***
(XEN) PinBased=0000003f CPUBased=b62065fa SecondaryExec=000054eb
(XEN) EntryControls=000053ff ExitControls=000fefff
(XEN) ExceptionBitmap=00060002 PFECmask=00000000 PFECmatch=00000000
(XEN) VMEntry: intr_info=0000002f errcode=00000000 ilen=00000000
(XEN) VMExit: intr_info=00000000 errcode=00000000 ilen=00000002
(XEN)         reason=80000022 qualification=0000000000000002
(XEN) IDTVectoring: info=00000000 errcode=00000000
(XEN) TSC Offset = 0xffff797cd2ddfef4  TSC Multiplier = 0x0000000000000000
(XEN) TPR Threshold = 0x00  PostedIntrVec = 0x00
(XEN) EPT pointer = 0x000000041444701e  EPTP index = 0x0000
(XEN) PLE Gap=00000080 Window=00001000
(XEN) Virtual processor ID = 0xf71d VMfunc controls = 0000000000000000
(XEN) **************************************
(XEN) domain_crash called from vmx.c:3324
(XEN) Domain 24 (vcpu#1) crashed on cpu#1:
(XEN) ----[ Xen-4.8.5-15.fc25  x86_64  debug=n   Not tainted ]----
(XEN) CPU:    1
(XEN) RIP:    0010:[<fffff80676dc29d0>]
(XEN) RFLAGS: 0000000000000002   CONTEXT: hvm guest (d24v1)
(XEN) rax: 0000000000000001   rbx: 0000000000000000   rcx: 00000000000001d9
(XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
(XEN) rbp: ffff960c962d1b80   rsp: ffff960c962d1af8   r8:  0000000000000002
(XEN) r9:  ffffb181c2d00000   r10: ffffc48c879b6080   r11: 0000000000000000
(XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
(XEN) r15: 0000000000000000   cr0: 0000000080050031   cr4: 0000000000170678
(XEN) cr3: 00000001b2725002   cr2: 00007ff89f231770
(XEN) fsb: 0000000000000000   gsb: ffffb181c2d00000   gss: 000000146673a000
(XEN) ds: 002b   es: 002b   fs: 0053   gs: 002b   ss: 0000   cs: 0010



From xen-devel-bounces@lists.xenproject.org Wed May 20 10:51:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:51:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMIw-0005Lj-UT; Wed, 20 May 2020 10:50:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EGgk=7C=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jbMIv-0005Le-Hu
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:50:29 +0000
X-Inumbo-ID: b7b326f6-9a87-11ea-ae69-bc764e2007e4
Received: from mail-ej1-x62b.google.com (unknown [2a00:1450:4864:20::62b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7b326f6-9a87-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 10:50:28 +0000 (UTC)
Received: by mail-ej1-x62b.google.com with SMTP id d7so3196868eja.7
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 03:50:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=kXc3YsXD6WdzJLKK/MADRHqsmnNYXZb1UrKhbte5gqU=;
 b=diYlybjPAwMFN/TSud4BonXo4C/RCNMpGBar/EgLWY9VzP3qUu3qPtGo7DdldZ5U/I
 CJPFwu5pNuzzUNEhUmiS3gXBQJ47b6Fz05CCtEdQkYBWmaSusB11x4beyc47Pe7fCgPL
 zQC3vD7fy3XJ0WckAsEzLcKjcC1r/Ia+/wDWIK7OlLPw1h17RU2BbjPGhkU5savowTQ6
 A2+NWoxKwpJQSPFO1i0ueJHRaI4EE4XGLXeYy93bYmGr6uX86vVf+2Dy3eF1NkAznHXU
 NvANnPtOcad9mfzJgxSvZppGGDGJZyxOkQIrfldNNC+TtGd9CBUYUtAaQxeZeEnuBUZv
 qFtQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=kXc3YsXD6WdzJLKK/MADRHqsmnNYXZb1UrKhbte5gqU=;
 b=pQ6Y1Y2+3F9m7cVCpjHvpPimwVyUG8D4jiXybgzWplqr8+KL9e7ltM/9WYy5G37BG6
 RM/79EdgbLaPPSrbpyjjEvLVzmcmYSBe7kESU9kA2g62IKSeARBarhpjd/u85bv/My0W
 q99X+1HSwL4QpAZqqsQJnSznTHmJtFkbVN85G6MJcHUOUxaWoPiVTxh3aFWdsLeTvbGW
 eJDUI5S9xRnfEZ5izXnEgAsjsrdWDmlBsI6R8U9nflHPZQlV2CXTiYlwyb9Go30xSJJA
 OLHYqvql6NmPRhBukvWAAaYYCAOFj4Snd3E3ckLyj82ZCdZfSAm/q+StBy18leFSt9Go
 Fi7w==
X-Gm-Message-State: AOAM532euLZ0ol9BwXQSBN3EX6Y42cKjdLRAKFfj/iOvJDmT9APrcms+
 3eGjkBLtUAokPI0b85zAAT8=
X-Google-Smtp-Source: ABdhPJzwoKWYQshUjlG53R7HghVrVszBOYZcRQ6iXW1dZpJqVY12+qoQYbfCLiAyYk57d5t46mxxfA==
X-Received: by 2002:a17:906:4815:: with SMTP id
 w21mr3340872ejq.533.1589971827772; 
 Wed, 20 May 2020 03:50:27 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id oq4sm1606166ejb.0.2020.05.20.03.50.26
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Wed, 20 May 2020 03:50:27 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <20200520101605.4263-1-jgross@suse.com>
In-Reply-To: <20200520101605.4263-1-jgross@suse.com>
Subject: RE: [PATCH] docs: update xenstore-migration.md
Date: Wed, 20 May 2020 11:50:25 +0100
Message-ID: <002a01d62e94$78de4b40$6a9ae1c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQJtc1rpVxCWDPAud8mNEtXyIsLDq6eCQa8g
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> Sent: 20 May 2020 11:16
> To: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien@xen.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>
> Subject: [PATCH] docs: update xenstore-migration.md
> 
> Update connection record details: make flags common for sockets and
> domains, and add pending incoming data.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

LGTM

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>  docs/designs/xenstore-migration.md | 63 ++++++++++++++++--------------
>  1 file changed, 34 insertions(+), 29 deletions(-)
> 
> diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
> index 34a2afd17e..e361d6b5e7 100644
> --- a/docs/designs/xenstore-migration.md
> +++ b/docs/designs/xenstore-migration.md
> @@ -147,31 +147,45 @@ the domain being migrated.
>  ```
>      0       1       2       3       4       5       6       7    octet
>  +-------+-------+-------+-------+-------+-------+-------+-------+
> -| conn-id                       | conn-type     | conn-spec
> +| conn-id                       | conn-type     | flags         |
> ++-------------------------------+---------------+---------------+
> +| conn-spec
>  ...
>  +-------------------------------+-------------------------------+
> -| data-len                      | data
> -+-------------------------------+
> +| in-data-len                   | out-data-len                  |
> ++-------------------------------+-------------------------------+
> +| data
>  ...
>  ```
> 
> 
> -| Field       | Description                                     |
> -|-------------|-------------------------------------------------|
> -| `conn-id`   | A non-zero number used to identify this         |
> -|             | connection in subsequent connection-specific    |
> -|             | records                                         |
> -|             |                                                 |
> -| `conn-type` | 0x0000: shared ring                             |
> -|             | 0x0001: socket                                  |
> -|             | 0x0002 - 0xFFFF: reserved for future use        |
> -|             |                                                 |
> -| `conn-spec` | See below                                       |
> -|             |                                                 |
> -| `data-len`  | The length (in octets) of any pending data not  |
> -|             | yet written to the connection                   |
> -|             |                                                 |
> -| `data`      | Pending data (may be empty)                     |
> +| Field          | Description                                  |
> +|----------------|----------------------------------------------|
> +| `conn-id`      | A non-zero number used to identify this      |
> +|                | connection in subsequent connection-specific |
> +|                | records                                      |
> +|                |                                              |
> +| `flags`        | A bit-wise OR of:                            |
> +|                | 0001: read-only                              |
> +|                |                                              |
> +| `conn-type`    | 0x0000: shared ring                          |
> +|                | 0x0001: socket                               |
> +|                | 0x0002 - 0xFFFF: reserved for future use     |
> +|                |                                              |
> +| `conn-spec`    | See below                                    |
> +|                |                                              |
> +| `in-data-len`  | The length (in octets) of any data read      |
> +|                | from the connection not yet processed        |
> +|                |                                              |
> +| `out-data-len` | The length (in octets) of any pending data   |
> +|                | not yet written to the connection            |
> +|                |                                              |
> +| `data`         | Pending data, first read data, then written  |
> +|                | data (any of both may be empty)              |
> +
> +In case of live update the connection record for the connection via which
> +the live update command was issued will contain the response for the live
> +update command in the pending write data.
> 
>  The format of `conn-spec` is dependent upon `conn-type`.
> 
> @@ -182,8 +196,6 @@ For `shared ring` connections it is as follows:
> 
>  ```
>      0       1       2       3       4       5       6       7    octet
> -                                                +-------+-------+
> -                                                | flags         |
>  +---------------+---------------+---------------+---------------+
>  | domid         | tdomid        | evtchn                        |
>  +-------------------------------+-------------------------------+
> @@ -198,8 +210,6 @@ For `shared ring` connections it is as follows:
>  |           | it has been subject to an SET_TARGET              |
>  |           | operation [2] or DOMID_INVALID [3] otherwise      |
>  |           |                                                   |
> -| `flags`   | Must be zero                                      |
> -|           |                                                   |
>  | `evtchn`  | The port number of the interdomain channel used   |
>  |           | by `domid` to communicate with xenstored          |
>  |           |                                                   |
> @@ -211,8 +221,6 @@ For `socket` connections it is as follows:
> 
> 
>  ```
> -                                                +-------+-------+
> -                                                | flags         |
>  +---------------+---------------+---------------+---------------+
>  | socket-fd                     | pad                           |
>  +-------------------------------+-------------------------------+
> @@ -221,9 +229,6 @@ For `socket` connections it is as follows:
> 
>  | Field       | Description                                     |
>  |-------------|-------------------------------------------------|
> -| `flags`     | A bit-wise OR of:                               |
> -|             | 0001: read-only                                 |
> -|             |                                                 |
>  | `socket-fd` | The file descriptor of the connected socket     |
> 
>  This type of connection is only relevant for live update, where the xenstored
> @@ -398,4 +403,4 @@ explanation of node permissions.
> 
>  [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;hb=HEAD#l612
> 
> -[4] https://wiki.xen.org/wiki/XenBus
> \ No newline at end of file
> +[4] https://wiki.xen.org/wiki/XenBus
> --
> 2.26.1
> 




From xen-devel-bounces@lists.xenproject.org Wed May 20 10:57:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 10:57:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMPi-0005ia-Ud; Wed, 20 May 2020 10:57:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbMPh-0005iV-U4
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 10:57:29 +0000
X-Inumbo-ID: b248cd5a-9a88-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b248cd5a-9a88-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 10:57:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BD81FACAE;
 Wed, 20 May 2020 10:57:30 +0000 (UTC)
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
 <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
 <20200520102805.GK54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0e97e3af-b66e-4924-a76c-9e33cdd1a726@suse.com>
Date: Wed, 20 May 2020 12:57:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520102805.GK54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.2020 12:28, Roger Pau Monné wrote:
> On Wed, May 20, 2020 at 12:17:15PM +0200, Jan Beulich wrote:
>> On 20.05.2020 11:31, Roger Pau Monné wrote:
>>> On Wed, May 20, 2020 at 10:31:38AM +0200, Jan Beulich wrote:
>>>> On 14.05.2020 16:05, Roger Pau Monné wrote:
>>>>> On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
>>>>>> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
>>>>>>   efi/mkreloc: efi/mkreloc.c
>>>>>>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>>>>>   
>>>>>> +nocov-y += hweight.o
>>>>>> +noubsan-y += hweight.o
>>>>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>>>>>
>>>>> Why not use clobbers in the asm to list the scratch registers? Is it
>>>>> that much expensive?
>>>>
>>>> The goal is to disturb the call sites as little as possible. There's
>>>> no point avoiding the scratch registers when no call is made (i.e.
>>>> when the POPCNT insn can be used). Taking away from the compiler 7
>>>> out of 15 registers (that it can hold live data in) seems quite a
>>>> lot to me.
>>>
>>> IMO using -ffixed-reg for all those registers is even worse, as that
>>> prevents the generated code in hweight from using any of those, thus
>>> greatly limiting the number of registers and likely making the
>>> generated code rely heavily on pushing and popping from the stack?
>>
>> Okay, that's the other side of the idea behind all this: Virtually no
>> hardware we run on will lack POPCNT support, hence the quality of
>> these fallback routines matters only on very old hardware, where we
>> likely don't perform optimally already anyway.
>>
>>> This also has the side effect of limiting the usage of popcnt to gcc,
>>> which IMO is also not ideal.
>>
>> Agreed. I don't know enough about clang to be able to think of
>> possible alternatives. In any event there's no change to current
>> behavior for hypervisors built with clang.
>>
>>> I also wondered, since the in-place asm before patching is a call
>>> instruction, wouldn't inline asm at build time already assume that the
>>> scratch registers are clobbered?
>>
>> That would imply the compiler peeks into the string literal of the
>> asm(). At least gcc doesn't, and even if it did it couldn't infer an
>> ABI from seeing a CALL insn.
> 
> Please bear with me, but then I don't understand what Linux is doing
> in arch/x86/include/asm/arch_hweight.h. I see no clobbers there, nor
> does it seem like the __sw_hweight{32/64} functions are built without
> using the scratch registers.

__sw_hweight{32,64} are implemented in assembly, avoiding most
scratch registers while pushing/popping the ones which do get
altered.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:11:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMcc-0007UH-PX; Wed, 20 May 2020 11:10:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbMcb-0007U9-DA
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:10:49 +0000
X-Inumbo-ID: 8ea4719a-9a8a-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ea4719a-9a8a-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 11:10:48 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: eLO6tUA21tGf+7O4ReGpA9+cZahwiIW4E3ec2XRHNrbQv0yQ+RTToqplQ/0GyjN4tF3fCMbH4e
 bjt9fHJB8l6AJIaxN+UCNTBZ+OeeBzcwJAFlpkj+OX26jt3bOLQq+SrnPU4xZlVpLyAOJsMJ+J
 0ip7QbWnkLkiEvgCKrkIqipX0BA3osEBdHqaEcdtZAUBw27hL2XQeEjbqRhnnYDESygJ1MlR+D
 J3HoILODW7ctGCPBo3rIFW3P3djXteYvmPrVUcgF4YauViL/+3h/XLCTlUEUo6RtDAdKgCzuGe
 PzY=
X-SBRS: 2.7
X-MesageID: 18334345
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,413,1583211600"; d="scan'208";a="18334345"
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Elliot Killick <elliotkillick@zohomail.eu>,
 <xen-devel@lists.xenproject.org>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
Date: Wed, 20 May 2020 12:10:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 11:33, Elliot Killick wrote:
> Hello,
>
> Xen is crashing Windows 10 (64-bit) VMs consistently whenever IDA
> Debugger
> (https://www.hex-rays.com/products/ida/support/download_freeware/)
> launches the Local Windows Debugger. The crash occurs when trying to
> launch the debugger against any executable (e.g. calc.exe) right at the
> time IDA says it is "Moving segment from <X address> to <Y address>".
>
> Tested on Windows 7, 8 and Linux as well but the bug is only triggered
> on Windows 10. Happens whether or not IDA is running with administrator
> privileges. No drivers/VM tools installed. Windows has a bug check code
> of zero, leaves no memory dump, nothing in the logs from QEMU in Dom0,
> the domain just powers off immediately leaving a record of the incident
> in the hypervisor.log. So, it does appear to be a Xen issue. Modern
> Intel CPU.
>
> Does anyone have some ideas on what may be causing this?

What exact CPU do you have?  This looks exactly like the
Haswell/Broadwell TSX errata.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:11:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:11:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMcZ-0007U3-H5; Wed, 20 May 2020 11:10:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbMcY-0007Ty-7I
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:10:46 +0000
X-Inumbo-ID: 8c7dbc15-9a8a-11ea-a9e9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8c7dbc15-9a8a-11ea-a9e9-12813bfff9fa;
 Wed, 20 May 2020 11:10:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A945AADC9;
 Wed, 20 May 2020 11:10:46 +0000 (UTC)
Subject: Re: iommu=no-igfx
To: buy computer <buycomputer40@gmail.com>
References: <CANSXg2FGtiDT05sQUpSAshAsdP4wSjPgQbfw_+aKJuAzSwvJuQ@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <da7e41b5-88a1-13ab-d52b-0652c16608af@suse.com>
Date: Wed, 20 May 2020 13:10:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CANSXg2FGtiDT05sQUpSAshAsdP4wSjPgQbfw_+aKJuAzSwvJuQ@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Kevin Tian <kevin.tian@intel.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 11.05.2020 19:43, buy computer wrote:
> I've been working on a Windows 10 HVM on a Debian 10 dom0. When I was first
> trying to make the VM, I was getting IOMMU errors. I had a hard time
> figuring out what to do about this, and finally discovered that putting
> iommu=no-igfx in the grub stopped the errors.
> 
> Unfortunately, without the graphics support the VM is understandably slow,
> and can crash. I was also only now pointed to the page
> <https://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#iommu>
> which says to report any errors that get fixed by using iommu=no-igfx.

Thanks for the report. For context I'll quote the commit message of
the commit introducing the option as well as the request to report
issues fixed with it:

"As we still cannot find a proper fix for this problem, this patch adds
 iommu=igfx option to control whether Intel graphics IOMMU is enabled.
 Running Xen with iommu=no-igfx is similar to running Linux with
 intel_iommu=igfx_off, which disables IOMMU for Intel GPU. This can be
 used by users to manually workaround the problem before a fix is
 available for i915 driver."

This was in 2015, referencing Linux >= 3.19. I have no idea whether
the underlying driver issue(s) has/have been fixed. The addresses
referenced are variable enough and all within RAM, so I'd conclude
this is not a "missing RMRR" issue.

Cc-ing the VT-d maintainer for possible insights or thoughts.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:20:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:20:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMlb-0008UR-M7; Wed, 20 May 2020 11:20:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rB2F=7C=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbMla-0008Qo-79
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:20:06 +0000
X-Inumbo-ID: d9cc5314-9a8b-11ea-a9ed-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9cc5314-9a8b-11ea-a9ed-12813bfff9fa;
 Wed, 20 May 2020 11:20:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Bk+vYCiNsG8hJBYO3OBXFvWsQJMd7ctxt+h9CazVwRc=; b=ns/LMFQM6EARgfsJTCIVj/MxOw
 pkCIkTk0mrQrvcD2vWPU7v2E+GWY2fx7pCIuRjhB8aLX543HAOtvFVwep05jpviQh+GKMCn8Hws+y
 n/XxGQpYTOhsd8tyj8fJF4ItbYNAl6F7EPSgzBgY8PjAnUI2NlTMyvbP0+DyO3gBUldM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbMlW-0007tI-5j; Wed, 20 May 2020 11:20:02 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbMlV-0000xH-QA; Wed, 20 May 2020 11:20:02 +0000
Subject: Re: [PATCH] docs: update xenstore-migration.md
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20200520101605.4263-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2f5da70b-09ec-ade0-0953-2853c93c353b@xen.org>
Date: Wed, 20 May 2020 12:19:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520101605.4263-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 20/05/2020 11:16, Juergen Gross wrote:
> Update connection record details: make flags common for sockets and
> domains, and add pending incoming data.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

> ---
>   docs/designs/xenstore-migration.md | 63 ++++++++++++++++--------------
>   1 file changed, 34 insertions(+), 29 deletions(-)
> 
> diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
> index 34a2afd17e..e361d6b5e7 100644
> --- a/docs/designs/xenstore-migration.md
> +++ b/docs/designs/xenstore-migration.md
> @@ -147,31 +147,45 @@ the domain being migrated.
>   ```
>       0       1       2       3       4       5       6       7    octet
>   +-------+-------+-------+-------+-------+-------+-------+-------+
> -| conn-id                       | conn-type     | conn-spec
> +| conn-id                       | conn-type     | flags         |
> ++-------------------------------+---------------+---------------+
> +| conn-spec
>   ...
>   +-------------------------------+-------------------------------+
> -| data-len                      | data
> -+-------------------------------+
> +| in-data-len                   | out-data-len                  |
> ++-------------------------------+-------------------------------+
> +| data
>   ...
>   ```
>   
>   
> -| Field       | Description                                     |
> -|-------------|-------------------------------------------------|
> -| `conn-id`   | A non-zero number used to identify this         |
> -|             | connection in subsequent connection-specific    |
> -|             | records                                         |
> -|             |                                                 |
> -| `conn-type` | 0x0000: shared ring                             |
> -|             | 0x0001: socket                                  |
> -|             | 0x0002 - 0xFFFF: reserved for future use        |
> -|             |                                                 |
> -| `conn-spec` | See below                                       |
> -|             |                                                 |
> -| `data-len`  | The length (in octets) of any pending data not  |
> -|             | yet written to the connection                   |
> -|             |                                                 |
> -| `data`      | Pending data (may be empty)                     |
> +| Field          | Description                                  |
> +|----------------|----------------------------------------------|
> +| `conn-id`      | A non-zero number used to identify this      |
> +|                | connection in subsequent connection-specific |
> +|                | records                                      |
> +|                |                                              |
> +| `flags`        | A bit-wise OR of:                            |
> +|                | 0001: read-only                              |
> +|                |                                              |
> +| `conn-type`    | 0x0000: shared ring                          |
> +|                | 0x0001: socket                               |
> +|                | 0x0002 - 0xFFFF: reserved for future use     |
> +|                |                                              |
> +| `conn-spec`    | See below                                    |
> +|                |                                              |
> +| `in-data-len`  | The length (in octets) of any data read      |
> +|                | from the connection not yet processed        |
> +|                |                                              |
> +| `out-data-len` | The length (in octets) of any pending data   |
> +|                | not yet written to the connection            |
> +|                |                                              |
> +| `data`         | Pending data, first read data, then written  |
> +|                | data (any of both may be empty)              |
> +
> +In case of live update the connection record for the connection via which
> +the live update command was issued will contain the response for the live
> +update command in the pending write data.
>   
>   The format of `conn-spec` is dependent upon `conn-type`.
>   
> @@ -182,8 +196,6 @@ For `shared ring` connections it is as follows:
>   
>   ```
>       0       1       2       3       4       5       6       7    octet
> -                                                +-------+-------+
> -                                                | flags         |
>   +---------------+---------------+---------------+---------------+
>   | domid         | tdomid        | evtchn                        |
>   +-------------------------------+-------------------------------+
> @@ -198,8 +210,6 @@ For `shared ring` connections it is as follows:
>   |           | it has been subject to an SET_TARGET              |
>   |           | operation [2] or DOMID_INVALID [3] otherwise      |
>   |           |                                                   |
> -| `flags`   | Must be zero                                      |
> -|           |                                                   |
>   | `evtchn`  | The port number of the interdomain channel used   |
>   |           | by `domid` to communicate with xenstored          |
>   |           |                                                   |
> @@ -211,8 +221,6 @@ For `socket` connections it is as follows:
>   
>   
>   ```
> -                                                +-------+-------+
> -                                                | flags         |
>   +---------------+---------------+---------------+---------------+
>   | socket-fd                     | pad                           |
>   +-------------------------------+-------------------------------+
> @@ -221,9 +229,6 @@ For `socket` connections it is as follows:
>   
>   | Field       | Description                                     |
>   |-------------|-------------------------------------------------|
> -| `flags`     | A bit-wise OR of:                               |
> -|             | 0001: read-only                                 |
> -|             |                                                 |
>   | `socket-fd` | The file descriptor of the connected socket     |
>   
>   This type of connection is only relevant for live update, where the xenstored
> @@ -398,4 +403,4 @@ explanation of node permissions.
>   
>   [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;hb=HEAD#l612
>   
> -[4] https://wiki.xen.org/wiki/XenBus
> \ No newline at end of file
> +[4] https://wiki.xen.org/wiki/XenBus
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:20:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:20:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMmM-00007C-Vh; Wed, 20 May 2020 11:20:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/hkT=7C=zohomail.eu=elliotkillick@srs-us1.protection.inumbo.net>)
 id 1jbMmM-00006L-FV
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:20:54 +0000
X-Inumbo-ID: f71a8b64-9a8b-11ea-b9cf-bc764e2007e4
Received: from sender11-pp-o92.zoho.eu (unknown [31.186.226.250])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f71a8b64-9a8b-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 11:20:53 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1589973650; cv=none; d=zohomail.eu; s=zohoarc; 
 b=Te8D8MxAZn8lO8f1D6jw2Vod41cgY0fLXmT63HCkreH322mL0tWy8z98PrIel+odBF2bwwjDLhgGmqJsIqsynyTIgfa4Hz1UhM4g6ZkzoORrOtuAXcyYXlaobKZCjZQfBRC0LYWSvCjui3U3TuWAOUH9OdA1I0bmklwBvm7SvOc=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.eu;
 s=zohoarc; t=1589973650;
 h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To;
 bh=aR7gBnILmO1RKcA2jKLkKUAJB/f8NlfON0JDkwcQY3Y=; 
 b=cAljNXntotx474riSRV1ig4S+u1y22eLAIB/VXrST4OYS5TywPh9RJbt+4T2NK1m7IHs9/FwPITzQq+wrrL3YVsIec9Up/gD5SIn7WBBVftDHbwC46LP34mrjbBetkcnqFk9Bw4i6SHJiP/2xyqDbTmFxTJaR4HTVK0aoz82UEA=
ARC-Authentication-Results: i=1; mx.zohomail.eu;
 dkim=pass  header.i=zohomail.eu;
 spf=pass  smtp.mailfrom=elliotkillick@zohomail.eu;
 dmarc=pass header.from=<elliotkillick@zohomail.eu>
 header.from=<elliotkillick@zohomail.eu>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1589973650; 
 s=zoho; d=zohomail.eu; i=elliotkillick@zohomail.eu;
 h=Subject:To:References:From:Cc:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
 bh=aR7gBnILmO1RKcA2jKLkKUAJB/f8NlfON0JDkwcQY3Y=;
 b=UbtnNdKz71aaL+0922UNn6KNjoveyZ/ueU1h2r3oQxO7itQVjPhfxWea15WycpWb
 xQjiECQiM2T/8JaVh0rSDJxGxTifO01dCOKLf9dV/0xvDgVobzP4fhOcgLYYSIb1ML4
 xjAA88HeK54z11SaclJwRtqPzlHpZDPlMqTT9+uE=
Received: from [10.137.0.35]
 (CPEac202e7c9cc3-CMac202e7c9cc0.cpe.net.cable.rogers.com [99.231.147.74]) by
 mx.zoho.eu with SMTPS id 1589973648928178.0051062278701;
 Wed, 20 May 2020 13:20:48 +0200 (CEST)
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
 <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
From: Elliot Killick <elliotkillick@zohomail.eu>
Message-ID: <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
Date: Wed, 20 May 2020 11:20:44 +0000
MIME-Version: 1.0
In-Reply-To: <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 2020-05-20 11:10, Andrew Cooper wrote:
> On 20/05/2020 11:33, Elliot Killick wrote:
>> Hello,
>>
>> Xen is crashing Windows 10 (64-bit) VMs consistently whenever IDA
>> Debugger
>> (https://www.hex-rays.com/products/ida/support/download_freeware/)
>> launches the Local Windows Debugger. The crash occurs when trying to
>> launch the debugger against any executable (e.g. calc.exe) right at the
>> time IDA says it is "Moving segment from <X address> to <Y address>".
>>
>> Tested on Windows 7, 8 and Linux as well but the bug is only triggered
>> on Windows 10. Happens whether or not IDA is running with administrator
>> privileges. No drivers/VM tools installed. Windows has a bug check code
>> of zero, leaves no memory dump, nothing in the logs from QEMU in Dom0,
>> the domain just powers off immediately leaving a record of the incident
>> in the hypervisor.log. So, it does appear to be a Xen issue. Modern
>> Intel CPU.
>>
>> Does anyone have some ideas on what may be causing this?
>
> What exact CPU do you have?  This looks exactly like the
> Haswell/Broadwell TSX errata.
>
> ~Andrew
>

i5-4590



From xen-devel-bounces@lists.xenproject.org Wed May 20 11:24:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:24:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMpj-0000Jp-FX; Wed, 20 May 2020 11:24:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rB2F=7C=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbMpi-0000Jk-UY
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:24:22 +0000
X-Inumbo-ID: 74443f54-9a8c-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74443f54-9a8c-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 11:24:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jL40Ufp1XIfYCFNcE6i0/bw407ND1pBiZ7ZeS+1PF90=; b=m9kMUv/IzqkbQE0gO8seBwDgL7
 OWI9Fi8tOL6hV8nINN7ziHZUoTklZvoYQ+u0vqTSr4/x9IrDrFZmG4dpMi8gCQ791pyXdyG6uJbjz
 /BFZb17Zsn0SvQ3Ptlk9BDEAiCL6onqdcqRddVhw2ya2pti+cWqjbNuEJNMWhYP9C2Ns=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbMpi-0007zj-9j; Wed, 20 May 2020 11:24:22 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbMpi-0001FI-3p; Wed, 20 May 2020 11:24:22 +0000
Subject: Re: [OSSTEST PATCH 21/38] buster: Extend ARM clock workaround
To: Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xenproject.org
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-22-ian.jackson@eu.citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <09efa8af-2fc7-f166-3cd5-1da4c5c10ca2@xen.org>
Date: Wed, 20 May 2020 12:24:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519190230.29519-22-ian.jackson@eu.citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 19/05/2020 20:02, Ian Jackson wrote:
> CC: Julien Grall <julien@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:26:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbMrn-0000QC-SL; Wed, 20 May 2020 11:26:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rB2F=7C=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbMrm-0000Q7-7n
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:26:30 +0000
X-Inumbo-ID: bf7baa91-9a8c-11ea-a9ef-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf7baa91-9a8c-11ea-a9ef-12813bfff9fa;
 Wed, 20 May 2020 11:26:29 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=W6iLhcTIkVKf6db6un2M1F/DBtaEYgUu3r3lVVDOgB0=; b=4Wvk4Sz+bAcRkA43so9xeksvRR
 3YIjGx/ZuIp9XcSkH/9kv2FINZMecBcyHR39xrDwGyyPIl76lfYhpYfXn7MUO6rZBL/03saU1bNqs
 GJJSRQiQwGALneuMEwKO6D+w1Ijg/EbCLKicMWMr8hwjRhDnNpa06vEpqCjkrLMHbxz8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbMrl-00081t-HE; Wed, 20 May 2020 11:26:29 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbMrl-0001Ii-AT; Wed, 20 May 2020 11:26:29 +0000
Subject: Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
To: Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xenproject.org
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-23-ian.jackson@eu.citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
Date: Wed, 20 May 2020 12:26:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519190230.29519-23-ian.jackson@eu.citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Ian,

On 19/05/2020 20:02, Ian Jackson wrote:
> CC: Julien Grall <julien@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

> ---
>   Osstest/Debian.pm | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
> index 6fed0b75..77508d19 100644
> --- a/Osstest/Debian.pm
> +++ b/Osstest/Debian.pm
> @@ -1064,7 +1064,7 @@ END
>       logm("\$arch is $arch, \$suite is $suite");
>       if ($xopts{PvMenuLst} &&
>   	$arch =~ /^arm/ &&
> -	$suite =~ /wheezy|jessie|stretch|sid/ ) {
> +	$suite =~ /wheezy|jessie|stretch|buster|sid/ ) {
>   
>   	# Debian doesn't currently know what bootloader to install in
>   	# a Xen guest on ARM. We install pv-grub-menu above which

OOI, what does Debian install for x86 HVM guest? Is there any ticket 
tracking this issue?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:35:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbN0j-0001K9-TA; Wed, 20 May 2020 11:35:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbN0i-0001K4-C0
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:35:44 +0000
X-Inumbo-ID: 08d86c3f-9a8e-11ea-a9f3-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08d86c3f-9a8e-11ea-a9f3-12813bfff9fa;
 Wed, 20 May 2020 11:35:43 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ywem84RArb9AGL+qMApH7vTM/3l1wdppDpiZwRXhuieTzAp4r4qpTEZCgFW5W0rF1dqr7nnCF1
 AMePiEunqm7+3VfZWxHZ+hx8wcIz1gt8eIdvERAJtzgYDyXIsNf1T+J22h3jzwB5F4foKV/ruj
 Ung1J6gbvjHyZyJBKF5t8d3Sf1U45a0UcYqJnS/2LNWGuVReM+lf9R8GlDJb6A5M9QzlZPBteT
 Cmbjvz4+yN14QZwMZf0wSVoZhrdyIuUMArTK+ywJod7SpbswZG6O2Qc/zhw4ofqQGt6yyUCtnC
 nso=
X-SBRS: 2.7
X-MesageID: 18335873
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,413,1583211600"; d="scan'208";a="18335873"
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Elliot Killick <elliotkillick@zohomail.eu>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
 <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
 <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <1b76cd6a-c6a2-c9c9-1d8b-32a9a1dbc557@citrix.com>
Date: Wed, 20 May 2020 12:27:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: FTLPEX02CAS03.citrite.net (10.13.99.94) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 12:20, Elliot Killick wrote:
> On 2020-05-20 11:10, Andrew Cooper wrote:
>> On 20/05/2020 11:33, Elliot Killick wrote:
>>> Hello,
>>>
>>> Xen is crashing Windows 10 (64-bit) VMs consistently whenever IDA
>>> Debugger
>>> (https://www.hex-rays.com/products/ida/support/download_freeware/)
>>> launches the Local Windows Debugger. The crash occurs when trying to
>>> launch the debugger against any executable (e.g. calc.exe) right at the
>>> time IDA says it is "Moving segment from <X address> to <Y address>".
>>>
>>> Tested on Windows 7, 8 and Linux as well, but the bug is only triggered
>>> on Windows 10. Happens whether or not IDA is running with administrator
>>> privileges. No drivers/VM tools installed. Windows has a bug check code
>>> of zero, leaves no memory dump, nothing in the logs from QEMU in Dom0,
>>> the domain just powers off immediately leaving a record of the incident
>>> in the hypervisor.log. So, it does appear to be a Xen issue. Modern
>>> Intel CPU.
>>>
>>> Does anyone have some ideas on what may be causing this?
>> What exact CPU do you have?  This looks exactly like the
>> Haswell/Broadwell TSX errata.
>>
>> ~Andrew
>>
> i5-4590

How about the output of `head /proc/cpuinfo` in dom0?

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:36:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:36:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbN1D-0001Mk-6C; Wed, 20 May 2020 11:36:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbN1C-0001MY-HR
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:36:14 +0000
X-Inumbo-ID: 1b19e076-9a8e-11ea-a9f3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b19e076-9a8e-11ea-a9f3-12813bfff9fa;
 Wed, 20 May 2020 11:36:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=szOXndscGeXJY85HFeTWIHlLcKQRltqSF7o73Kw6iXM=; b=UObGy3wQ48YOaPY7651w6kx80
 cTV+/Cg5SaSTVFjGBdNTnMDQSNyYe6l2OWtz8HzgP7BnpT2eciadfULpRnHYMgvl6e0O/+pEBfq6W
 zNyHnVNjArz0WM/MlkY9+FfoIraKdKThg5wc8zmq/iYEgpTP2vCC5gUQe8D6e79KQGhmE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbN19-0008EJ-Ja; Wed, 20 May 2020 11:36:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbN19-0001Nx-Bn; Wed, 20 May 2020 11:36:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbN19-0005Bh-BA; Wed, 20 May 2020 11:36:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150266: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=115a54162a6c0d0ef2aef25ebd0b61fc5e179ebe
X-Osstest-Versions-That: linux=642b151f45dd54809ea00ecd3976a56c1ec9b53d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 11:36:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150266 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150266/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150244
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150244
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150244
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150244
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150244
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150244
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150244
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150244
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150244
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150244
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                115a54162a6c0d0ef2aef25ebd0b61fc5e179ebe
baseline version:
 linux                642b151f45dd54809ea00ecd3976a56c1ec9b53d

Last test of basis   150244  2020-05-19 12:10:39 Z    0 days
Testing same since   150266  2020-05-20 00:40:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Alain Volmat <alain.volmat@st.com>
  Alexander Monakov <amonakov@ispras.ru>
  Atsushi Nemoto <atsushi.nemoto@sord.co.jp>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Codrin Ciubotariu <codrin.ciubotariu@microchip.com>
  Eugeniy Paltsev <Eugeniy.Paltsev@synopsys.com>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Ilya Dryomov <idryomov@gmail.com>
  Jerry Snitselaar <jsnitsel@redhat.com>
  Joerg Roedel <jroedel@suse.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Ludovic Desroches <ludovic.desroches@microchip.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Qii Wang <qii.wang@mediatek.com>
  Raul E Rangel <rrangel@chromium.org>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Thor Thayer <thor.thayer@linux.intel.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Wei Liu <wei.liu@kernel.org>
  Wolfram Sang <wsa@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   642b151f45dd..115a54162a6c  115a54162a6c0d0ef2aef25ebd0b61fc5e179ebe -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:36:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:36:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbN1a-0001Q1-HL; Wed, 20 May 2020 11:36:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rB2F=7C=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbN1Z-0001Pu-Ug
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:36:37 +0000
X-Inumbo-ID: 2a4625dc-9a8e-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a4625dc-9a8e-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 11:36:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pgJTbuyPYmB+dyjkzYvEhRods9vwp8ua0UNTnQqMXt0=; b=f2NiTNdb6jI85b/dpN0OA5xHka
 p17a6SPu2itrm0+IgxA2y7YQwRR1+m9P9ipiOZ1OVP+0twdBMC+/F/7/DDzlLtrMyvQ7wZcOxHcKX
 /CO9FOxRa3z6qw/FukdApN9vmufPRKKwZ1AJCWn6UAOwbbUBgaNyr/S6Jh4j8HFOH2pw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbN1Z-0008G2-6o; Wed, 20 May 2020 11:36:37 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbN1Y-0001y9-Vq; Wed, 20 May 2020 11:36:37 +0000
Subject: Re: [OSSTEST PATCH 34/38] buster: grub, arm64: extend chainloading
 workaround
To: Ian Jackson <ian.jackson@eu.citrix.com>, xen-devel@lists.xenproject.org
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-35-ian.jackson@eu.citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f3becdd7-a7e1-3e99-ca90-4ce0f74aa467@xen.org>
Date: Wed, 20 May 2020 12:36:35 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519190230.29519-35-ian.jackson@eu.citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 19/05/2020 20:02, Ian Jackson wrote:
> multiboot[2] isn't supported.
> 
> Also link to the bug report.
> 
> CC: Julien Grall <julien@xen.org>
> CC: Stefano Stabellini <sstabellini@kernel.org>
> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

> ---
>   Osstest/Debian.pm | 3 ++-
>   1 file changed, 2 insertions(+), 1 deletion(-)
> 
> diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
> index 77508d19..151677ed 100644
> --- a/Osstest/Debian.pm
> +++ b/Osstest/Debian.pm
> @@ -443,9 +443,10 @@ sub setupboot_grub2 ($$$$) {
>       my $kernkey= (defined $xenhopt ? 'KernDom0' : 'KernOnly');
>   
>       # Grub2 on jessie/stretch ARM* doesn't do multiboot, so we must chainload.
> +    # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=884770
>       my $need_uefi_chainload =
>           get_host_property($ho, "firmware") eq "uefi" &&
> -        $ho->{Suite} =~ m/jessie|stretch/ && $ho->{Arch} =~ m/^arm/;
> +        $ho->{Suite} =~ m/jessie|stretch|buster/ && $ho->{Arch} =~ m/^arm/;

FWIW, the next version of Debian seems to have a newer GRUB version with 
Xen on Arm support.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:43:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbN8L-0002O3-9d; Wed, 20 May 2020 11:43:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AW6E=7C=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbN8K-0002Ny-8x
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:43:36 +0000
X-Inumbo-ID: 23254fe8-9a8f-11ea-a9f3-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23254fe8-9a8f-11ea-a9f3-12813bfff9fa;
 Wed, 20 May 2020 11:43:35 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: lRKbqWturC4hsG5QIz48e3pbeVgDPAqfaFvxF3Xc+AfoVXNnyc6EoQQhvbG40da9FiKoCSC6ou
 gyQdlm+sS9L5M3zFXz1OIlKN6iw/72P2XWFpLup74WDTMeIoRLi3pj/6SGkpwTgEVp/D9VGsKj
 mE7osdQMX0Sn+QyaYkWq9YiaiVIL5esfWZrexNkAOlowev4U33o/cjtXyHVAl1Z45iK4ZG0y2H
 j554uMjk4xZI3BP52FuTtLy6sVIJfN9Td95kWRsdhA78WYuwg+zNkOPFIDTryFRjt/Vwnf62nA
 vLQ=
X-SBRS: 2.7
X-MesageID: 18336284
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,413,1583211600"; d="scan'208";a="18336284"
Date: Wed, 20 May 2020 13:43:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
Message-ID: <20200520114327.GL54375@Air-de-Roger>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
 <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
 <20200520102805.GK54375@Air-de-Roger>
 <0e97e3af-b66e-4924-a76c-9e33cdd1a726@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0e97e3af-b66e-4924-a76c-9e33cdd1a726@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 12:57:27PM +0200, Jan Beulich wrote:
> On 20.05.2020 12:28, Roger Pau Monné wrote:
> > On Wed, May 20, 2020 at 12:17:15PM +0200, Jan Beulich wrote:
> >> On 20.05.2020 11:31, Roger Pau Monné wrote:
> >>> On Wed, May 20, 2020 at 10:31:38AM +0200, Jan Beulich wrote:
> >>>> On 14.05.2020 16:05, Roger Pau Monné wrote:
> >>>>> On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
> >>>>>> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
> >>>>>>   efi/mkreloc: efi/mkreloc.c
> >>>>>>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
> >>>>>>   
> >>>>>> +nocov-y += hweight.o
> >>>>>> +noubsan-y += hweight.o
> >>>>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> >>>>>
> >>>>> Why not use clobbers in the asm to list the scratch registers? Is it
> >>>>> that much expensive?
> >>>>
> >>>> The goal is to disturb the call sites as little as possible. There's
> >>>> no point avoiding the scratch registers when no call is made (i.e.
> >>>> when the POPCNT insn can be used). Taking away from the compiler 7
> >>>> out of 15 registers (that it can hold live data in) seems quite a
> >>>> lot to me.
> >>>
> >>> IMO using -ffixed-reg for all those registers is even worse, as that
> >>> prevents the generated code in hweight from using any of those, thus
> >>> greatly limiting the amount of registers and likely making the
> >>> generated code rely heavily on pushing and popping from the stack?
> >>
> >> Okay, that's the other side of the idea behind all this: Virtually no
> >> hardware we run on will lack POPCNT support, hence the quality of
> >> these fallback routines matters only on very old hardware, where we
> >> likely don't perform optimally already anyway.
> >>
> >>> This also has the side effect of limiting the usage of popcnt to gcc,
> >>> which IMO is also not ideal.
> >>
> >> Agreed. I don't know enough about clang to be able to think of
> >> possible alternatives. In any event there's no change to current
> >> behavior for hypervisors built with clang.
> >>
> >>> I also wondered, since the in-place asm before patching is a call
> >>> instruction, wouldn't inline asm at build time already assume that the
> >>> scratch registers are clobbered?
> >>
> >> That would imply the compiler peeks into the string literal of the
> >> asm(). At least gcc doesn't, and even if it did it couldn't infer an
> >> ABI from seeing a CALL insn.
> > 
> > Please bear with me, but then I don't understand what Linux is doing
> > in arch/x86/include/asm/arch_hweight.h. I see no clobbers there,
> > nor does it seem like the __sw_hweight{32/64} functions are built
> > without using the scratch registers.
> 
> __sw_hweight{32,64} are implemented in assembly, avoiding most
> scratch registers while pushing/popping the ones which do get
> altered.

Oh right, I was looking at lib/hweight.c instead of the arch one.

Would you agree to use the no_caller_saved_registers attribute (which
is available AFAICT for both gcc and clang) for generic_hweightXX and
then remove the asm prefix code in favour of the defines for
hweight{8/16}?

I think that would make it easier to read.

Thanks, Roger.
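
[Editor's note: the generic_hweightXX fallback discussed above is, in essence, the classic parallel ("SWAR") population count. A minimal illustrative sketch follows; the function name is for illustration and this is not Xen's actual source.]

```c
#include <stdint.h>

/* Parallel ("SWAR") population count, the technique behind
 * generic_hweight32()-style fallbacks used when the POPCNT
 * instruction is unavailable.  Illustrative sketch only. */
unsigned int sw_hweight32(uint32_t w)
{
    w = w - ((w >> 1) & 0x55555555u);                 /* 2-bit sums */
    w = (w & 0x33333333u) + ((w >> 2) & 0x33333333u); /* 4-bit sums */
    w = (w + (w >> 4)) & 0x0f0f0f0fu;                 /* 8-bit sums */
    return (w * 0x01010101u) >> 24;                   /* add the bytes */
}
```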


From xen-devel-bounces@lists.xenproject.org Wed May 20 11:47:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 11:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbNBg-0002Ye-SF; Wed, 20 May 2020 11:47:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/hkT=7C=zohomail.eu=elliotkillick@srs-us1.protection.inumbo.net>)
 id 1jbNBf-0002YX-AC
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 11:47:03 +0000
X-Inumbo-ID: 9e26030e-9a8f-11ea-b07b-bc764e2007e4
Received: from sender11-pp-o93.zoho.eu (unknown [31.186.226.251])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e26030e-9a8f-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 11:47:02 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1589975219; cv=none; d=zohomail.eu; s=zohoarc; 
 b=chTRUJ3vgHGLesrZHWnOhV+zdtjfA7ARzIkqQ1FdXoFTwGQ9KyxOZ0l14jY1++IyIQ/0v0lo02TFGZCkr4j+5dqYnxooUh8/HEu8GIv/Ixzr9M3kIWsUf+fjKKFq08u6Bgp3IrYPoVeEdFg+33j5SsiAsGAaklD+vrkgeiLbPSc=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.eu;
 s=zohoarc; t=1589975219;
 h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To;
 bh=3KZK3LTh1DP6fs9SUzKRT3SHJyw42JnzKR0mhAkLv+s=; 
 b=KCLDNP1DhQGzHxkOOAt5cNP0DlziRjVr1jbvEOUaWvEqzZIoC88SoByesT17jEXohPHOOmAEkt8qEeGyrsZRVcJF9+6Te5GtXTygXjqkNuiVFDhI0coul1wDjm0lJEpWW6yG6CDU9WP6vrDEwFSkpyWDFoKea6mLgX+D/mlSpio=
ARC-Authentication-Results: i=1; mx.zohomail.eu;
 dkim=pass  header.i=zohomail.eu;
 spf=pass  smtp.mailfrom=elliotkillick@zohomail.eu;
 dmarc=pass header.from=<elliotkillick@zohomail.eu>
 header.from=<elliotkillick@zohomail.eu>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1589975219; 
 s=zoho; d=zohomail.eu; i=elliotkillick@zohomail.eu;
 h=Subject:To:References:Cc:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
 bh=3KZK3LTh1DP6fs9SUzKRT3SHJyw42JnzKR0mhAkLv+s=;
 b=ca48+O65GZtTCFHs7oiN+9hFc5f4addQ9wwe9kksx9Z7ha46rY9jpMeOPzznWd9n
 FZFwGD/0yJeVv1tq5nWgAKbR/XeP91cAJq66bCp5YeiLzSt/qQBc6bwk9euLXO4/Z2V
 gp+IFWrVG6hBQ9gWtmHx44dn2OG5NXSd0YekcK9c=
Received: from [10.137.0.35]
 (CPEac202e7c9cc3-CMac202e7c9cc0.cpe.net.cable.rogers.com [99.231.147.74]) by
 mx.zoho.eu with SMTPS id 1589975218562995.3045505491439;
 Wed, 20 May 2020 13:46:58 +0200 (CEST)
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
 <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
 <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
 <1b76cd6a-c6a2-c9c9-1d8b-32a9a1dbc557@citrix.com>
From: Elliot Killick <elliotkillick@zohomail.eu>
Message-ID: <657e7522-bd0f-bea3-7ce8-2f6c4ec72407@zohomail.eu>
Date: Wed, 20 May 2020 11:46:54 +0000
MIME-Version: 1.0
In-Reply-To: <1b76cd6a-c6a2-c9c9-1d8b-32a9a1dbc557@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 2020-05-20 11:27, Andrew Cooper wrote:
> On 20/05/2020 12:20, Elliot Killick wrote:
>> On 2020-05-20 11:10, Andrew Cooper wrote:
>>> On 20/05/2020 11:33, Elliot Killick wrote:
>>>> Hello,
>>>>
>>>> Xen is crashing Windows 10 (64-bit) VMs consistently whenever IDA
>>>> Debugger
>>>> (https://www.hex-rays.com/products/ida/support/download_freeware/)
>>>> launches the Local Windows Debugger. The crash occurs when trying to
>>>> launch the debugger against any executable (e.g. calc.exe) right at the
>>>> time IDA says it is "Moving segment from <X address> to <Y address>".
>>>>
>>>> Tested on Windows 7, 8 and Linux as well but the bug is only triggered
>>>> on Windows 10. Happens whether or not IDA is running with administrator
>>>> privileges. No drivers/VM tools installed. Windows has a bug check code
>>>> of zero, leaves no memory dump, nothing in the logs from QEMU in Dom0,
>>>> the domain just powers off immediately leaving a record of the incident
>>>> in the hypervisor.log. So, it does appear to be a Xen issue. Modern
>>>> Intel CPU.
>>>>
>>>> Does anyone have some ideas on what may be causing this?
>>> What exact CPU do you have?  This looks exactly like the
>>> Haswell/Broadwell TSX errata.
>>>
>>> ~Andrew
>>>
>> i5-4590
>
> How about the output of `head /proc/cpuinfo` in dom0?
>
> ~Andrew
>

processor	: 0
vendor_id	: GenuineIntel
cpu family	: 6
model		: 60
model name	: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
stepping	: 3
microcode	: 0x27
cpu MHz		: 3299.926
cache size	: 6144 KB
physical id	: 0



From xen-devel-bounces@lists.xenproject.org Wed May 20 12:14:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 12:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbNcI-00056Q-Pk; Wed, 20 May 2020 12:14:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q69C=7C=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jbNcH-00056L-JN
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 12:14:33 +0000
X-Inumbo-ID: 74c2b15c-9a93-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74c2b15c-9a93-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 12:14:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jbNcD-0001Rk-DA; Wed, 20 May 2020 13:14:29 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: grub-devel@gnu.org
Subject: [GRUB PATCH 1/2] 20_linux_xen: Ignore xenpolicy and config files too
Date: Wed, 20 May 2020 13:14:19 +0100
Message-Id: <20200520121420.7965-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200520121420.7965-1-ian.jackson@eu.citrix.com>
References: <20200520121420.7965-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"file_is_not_sym" currently only checks for xen-syms.  Extend it to
disregard xenpolicy (XSM policy files) and files ending .config (which
are built by the Xen upstream build system in some configurations and
can therefore end up in /boot).

Rename the function accordingly, to "file_is_not_xen_garbage".

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 util/grub.d/20_linux_xen.in | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/util/grub.d/20_linux_xen.in b/util/grub.d/20_linux_xen.in
index 81e5f0d7e..30da49d66 100644
--- a/util/grub.d/20_linux_xen.in
+++ b/util/grub.d/20_linux_xen.in
@@ -181,10 +181,14 @@ if [ "x${linux_list}" = "x" ] ; then
     exit 0
 fi
 
-file_is_not_sym () {
+file_is_not_xen_garbage () {
     case "$1" in
 	*/xen-syms-*)
 	    return 1;;
+	*/xenpolicy-*)
+	    return 1;;
+	*/*.config)
+	    return 1;;
 	*)
 	    return 0;;
     esac
@@ -192,7 +196,7 @@ file_is_not_sym () {
 
 xen_list=
 for i in /boot/xen*; do
-    if grub_file_is_not_garbage "$i" && file_is_not_sym "$i" ; then xen_list="$xen_list $i" ; fi
+    if grub_file_is_not_garbage "$i" && file_is_not_xen_garbage "$i" ; then xen_list="$xen_list $i" ; fi
 done
 prepare_boot_cache=
 boot_device_id=
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 12:14:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 12:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbNcD-00056F-Hv; Wed, 20 May 2020 12:14:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q69C=7C=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jbNcC-00056A-Mt
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 12:14:28 +0000
X-Inumbo-ID: 736f75c4-9a93-11ea-9887-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 736f75c4-9a93-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 12:14:28 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jbNcB-0001Rk-65; Wed, 20 May 2020 13:14:27 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: grub-devel@gnu.org
Subject: [GRUB PATCH 0/2] Better Xen support
Date: Wed, 20 May 2020 13:14:18 +0100
Message-Id: <20200520121420.7965-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi. As maintainer of the Xen Project upstream CI, I do testing of
upstream Xen builds on Debian systems.

We use grub's 20_linux_xen to do the bootloader setup.  However, it is
missing some features so we are carrying some patches.  Here they are
for your consideration.

Regards, Ian.

Ian Jackson (2):
  20_linux_xen: Ignore xenpolicy and config files too
  20_linux_xen: Support Xen Security Modules (XSM/FLASK)

 util/grub.d/20_linux_xen.in | 32 ++++++++++++++++++++++++++++++--
 1 file changed, 30 insertions(+), 2 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 12:14:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 12:14:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbNcO-00056p-1F; Wed, 20 May 2020 12:14:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=q69C=7C=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jbNcM-00056k-Jl
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 12:14:38 +0000
X-Inumbo-ID: 7544f0c2-9a93-11ea-b07b-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7544f0c2-9a93-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 12:14:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jbNcE-0001Rk-AZ; Wed, 20 May 2020 13:14:30 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: grub-devel@gnu.org
Subject: [GRUB PATCH 2/2] 20_linux_xen: Support Xen Security Modules
 (XSM/FLASK)
Date: Wed, 20 May 2020 13:14:20 +0100
Message-Id: <20200520121420.7965-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200520121420.7965-1-ian.jackson@eu.citrix.com>
References: <20200520121420.7965-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

XSM is enabled by adding "flask=enforcing" as a Xen command line
argument, and providing the policy file as a grub module.

We make entries both with and without XSM.  If XSM is not compiled
into Xen, then there are no policy files, so no change to the boot
options.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 util/grub.d/20_linux_xen.in | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/util/grub.d/20_linux_xen.in b/util/grub.d/20_linux_xen.in
index 30da49d66..7a092b898 100644
--- a/util/grub.d/20_linux_xen.in
+++ b/util/grub.d/20_linux_xen.in
@@ -94,6 +94,11 @@ esac
 title_correction_code=
 
 linux_entry ()
+{
+  linux_entry_xsm "$@" false
+  linux_entry_xsm "$@" true
+}
+linux_entry_xsm ()
 {
   os="$1"
   version="$2"
@@ -101,6 +106,18 @@ linux_entry ()
   type="$4"
   args="$5"
   xen_args="$6"
+  xsm="$7"
+  # If user wants to enable XSM support, make sure there's
+  # corresponding policy file.
+  if ${xsm} ; then
+      xenpolicy="xenpolicy-$xen_version"
+      if test ! -e "${xen_dirname}/${xenpolicy}" ; then
+	  return
+      fi
+      xen_args="$xen_args flask=enforcing"
+      xen_version="$(gettext_printf "%s (XSM enabled)" "$xen_version")"
+      # xen_version is used for messages only; actual file is xen_basename
+  fi
   if [ -z "$boot_device_id" ]; then
       boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
   fi
@@ -154,6 +171,13 @@ EOF
     sed "s/^/$submenu_indentation/" << EOF
 	echo	'$(echo "$message" | grub_quote)'
 	${module_loader}	--nounzip   $(echo $initrd_path)
+EOF
+  fi
+  if test -n "${xenpolicy}" ; then
+    message="$(gettext_printf "Loading XSM policy ...")"
+    sed "s/^/$submenu_indentation/" << EOF
+	echo	'$(echo "$message" | grub_quote)'
+	${module_loader}     ${rel_dirname}/${xenpolicy}
 EOF
   fi
   sed "s/^/$submenu_indentation/" << EOF
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 12:32:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 12:32:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbNtG-0006wI-Hh; Wed, 20 May 2020 12:32:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbNtG-0006wD-4E
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 12:32:06 +0000
X-Inumbo-ID: e96626fe-9a95-11ea-a9ff-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e96626fe-9a95-11ea-a9ff-12813bfff9fa;
 Wed, 20 May 2020 12:32:04 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tgETOYZF0Amr79w9J5VUeDFooHZSvwk6D0nnal5tDs+/2zGHjE6umka7eyhHo8unlu9OoXp061
 LpkMK6rF2rMFgiVrYhIfF/Dc7m//cPhauQLMtULnnF6Lvmewcc6i4aE9X/gZ0XSWyLZvsPrxwW
 V3CwQwIsbBlQbZ+qGtzihmVpPKpVUFvb1n5mJSkCaCGleQvhcpYumGKH3LC48eWWq4vPvP0GO3
 buHg1Dp/a0CnWA9BwNOCkeulsjqu/wuEpQrFk3RGnPmhdqjlnTuNlBHVZx4sX7LXvfDHDkHZU+
 JOM=
X-SBRS: 2.7
X-MesageID: 18239068
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18239068"
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Elliot Killick <elliotkillick@zohomail.eu>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
 <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
 <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
 <1b76cd6a-c6a2-c9c9-1d8b-32a9a1dbc557@citrix.com>
 <657e7522-bd0f-bea3-7ce8-2f6c4ec72407@zohomail.eu>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <dc1ef4b6-9406-b625-c157-6ebec2a6afda@citrix.com>
Date: Wed, 20 May 2020 13:31:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <657e7522-bd0f-bea3-7ce8-2f6c4ec72407@zohomail.eu>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 12:46, Elliot Killick wrote:
> On 2020-05-20 11:27, Andrew Cooper wrote:
>> On 20/05/2020 12:20, Elliot Killick wrote:
>>> On 2020-05-20 11:10, Andrew Cooper wrote:
>>>> On 20/05/2020 11:33, Elliot Killick wrote:
>>>>> Hello,
>>>>>
>>>>> Xen is crashing Windows 10 (64-bit) VMs consistently whenever IDA
>>>>> Debugger
>>>>> (https://www.hex-rays.com/products/ida/support/download_freeware/)
>>>>> launches the Local Windows Debugger. The crash occurs when trying to
>>>>> launch the debugger against any executable (e.g. calc.exe) right at the
>>>>> time IDA says it is "Moving segment from <X address> to <Y address>".
>>>>>
>>>>> Tested on Windows 7, 8 and Linux as well but the bug is only triggered
>>>>> on Windows 10. Happens whether or not IDA is running with administrator
>>>>> privileges. No drivers/VM tools installed. Windows has a bug check code
>>>>> of zero, leaves no memory dump, nothing in the logs from QEMU in Dom0,
>>>>> the domain just powers off immediately leaving a record of the incident
>>>>> in the hypervisor.log. So, it does appear to be a Xen issue. Modern
>>>>> Intel CPU.
>>>>>
>>>>> Does anyone have some ideas on what may be causing this?
>>>> What exact CPU do you have?  This looks exactly like the
>>>> Haswell/Broadwell TSX errata.
>>>>
>>>> ~Andrew
>>>>
>>> i5-4590
>> How about the output of `head /proc/cpuinfo` in dom0?
>>
>> ~Andrew
>>
> processor	: 0
> vendor_id	: GenuineIntel
> cpu family	: 6
> model		: 60
> model name	: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
> stepping	: 3
> microcode	: 0x27
> cpu MHz		: 3299.926
> cache size	: 6144 KB
> physical id	: 0

Ok, so the erratum is one of HSM182/HSD172.

Xen has workarounds for all of these.  However, I also see:

> (XEN) ----[ Xen-4.8.5-15.fc25  x86_64  debug=n   Not tainted ]----

which is an obsolete version of Xen these days.  It looks like these
issues were first fixed in Xen 4.9, but you should upgrade to something
rather newer.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 12:53:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 12:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbODM-0000Bz-C6; Wed, 20 May 2020 12:52:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbODL-0000Bt-HK
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 12:52:51 +0000
X-Inumbo-ID: cf89e07e-9a98-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf89e07e-9a98-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 12:52:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AACCFAD11;
 Wed, 20 May 2020 12:52:51 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] VT-x: extend LBR Broadwell errata coverage
Message-ID: <df6e8dad-b4c0-0821-46eb-e4aa86f8ccfa@suse.com>
Date: Wed, 20 May 2020 14:52:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For lbr_tsx_fixup_check() simply name a few more specific errata numbers.

For bdf93_fixup_check(), however, more models are affected. Oddly enough,
despite being the same model and stepping, the erratum is listed for Xeon
E3 but not its Core counterpart. With this it's of course also uncertain
whether the absence of the erratum for Xeon D is actually meaningful.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2870,8 +2870,10 @@ static void __init lbr_tsx_fixup_check(v
     case 0x45: /* HSM182 - 4th gen Core */
     case 0x46: /* HSM182, HSD172 - 4th gen Core (GT3) */
     case 0x3d: /* BDM127 - 5th gen Core */
-    case 0x47: /* BDD117 - 5th gen Core (GT3) */
-    case 0x4f: /* BDF85  - Xeon E5-2600 v4 */
+    case 0x47: /* BDD117 - 5th gen Core (GT3)
+                  BDW117 - Xeon E3-1200 v4 */
+    case 0x4f: /* BDF85  - Xeon E5-2600 v4
+                  BDX88  - Xeon E7-x800 v4 */
     case 0x56: /* BDE105 - Xeon D-1500 */
         break;
     default:
@@ -2895,15 +2897,26 @@ static void __init lbr_tsx_fixup_check(v
 static void __init bdf93_fixup_check(void)
 {
     /*
-     * Broadwell erratum BDF93:
+     * Broadwell erratum BDF93 et al:
      *
      * Reads from MSR_LER_TO_LIP (MSR 1DEH) may return values for bits[63:61]
      * that are not equal to bit[47].  Attempting to context switch this value
      * may cause a #GP.  Software should sign extend the MSR.
      */
-    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4f )
+    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+         boot_cpu_data.x86 != 6 )
+        return;
+
+    switch ( boot_cpu_data.x86_model )
+    {
+    case 0x3d: /* BDM131 - 5th gen Core */
+    case 0x47: /* BDD??? - 5th gen Core (H-Processor line)
+                  BDW120 - Xeon E3-1200 v4 */
+    case 0x4f: /* BDF93  - Xeon E5-2600 v4
+                  BDX93  - Xeon E7-x800 v4 */
         bdf93_fixup_needed = true;
+        break;
+    }
 }
 
 static int is_last_branch_msr(u32 ecx)
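
[Editor's note: the sign-extension workaround described in the BDF93 comment above can be sketched standalone. The helper name and the 48-bit (bit 47) field width are assumptions for illustration, not Xen's actual code.]

```c
#include <stdint.h>

/* BDF93-style fixup: replicate bit 47 of the MSR value into bits
 * 63:48 before context switching it, so the address is canonical
 * and writing it back cannot #GP.  Helper name is hypothetical. */
uint64_t sign_extend_bit47(uint64_t msr_val)
{
    /* Shift the 48-bit field to the top, then arithmetic-shift back. */
    return (uint64_t)((int64_t)(msr_val << 16) >> 16);
}
```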


From xen-devel-bounces@lists.xenproject.org Wed May 20 13:12:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 13:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbOWL-0001vY-1Y; Wed, 20 May 2020 13:12:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbOWK-0001vT-1k
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 13:12:28 +0000
X-Inumbo-ID: 8d19e664-9a9b-11ea-aa07-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d19e664-9a9b-11ea-aa07-12813bfff9fa;
 Wed, 20 May 2020 13:12:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B0FB6ACB1;
 Wed, 20 May 2020 13:12:28 +0000 (UTC)
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
 <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
 <20200520102805.GK54375@Air-de-Roger>
 <0e97e3af-b66e-4924-a76c-9e33cdd1a726@suse.com>
 <20200520114327.GL54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d0a15359-339f-6edd-034c-cd6385e929d1@suse.com>
Date: Wed, 20 May 2020 15:12:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520114327.GL54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.2020 13:43, Roger Pau Monné wrote:
> On Wed, May 20, 2020 at 12:57:27PM +0200, Jan Beulich wrote:
>> On 20.05.2020 12:28, Roger Pau Monné wrote:
>>> On Wed, May 20, 2020 at 12:17:15PM +0200, Jan Beulich wrote:
>>>> On 20.05.2020 11:31, Roger Pau Monné wrote:
>>>>> On Wed, May 20, 2020 at 10:31:38AM +0200, Jan Beulich wrote:
>>>>>> On 14.05.2020 16:05, Roger Pau Monné wrote:
>>>>>>> On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
>>>>>>>> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
>>>>>>>>   efi/mkreloc: efi/mkreloc.c
>>>>>>>>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
>>>>>>>>   
>>>>>>>> +nocov-y += hweight.o
>>>>>>>> +noubsan-y += hweight.o
>>>>>>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
>>>>>>>
>>>>>>> Why not use clobbers in the asm to list the scratch registers? Is it
>>>>>>> that much expensive?
>>>>>>
>>>>>> The goal is to disturb the call sites as little as possible. There's
>>>>>> no point avoiding the scratch registers when no call is made (i.e.
>>>>>> when the POPCNT insn can be used). Taking away from the compiler 7
>>>>>> out of 15 registers (that it can hold live data in) seems quite a
>>>>>> lot to me.
>>>>>
>>>>> IMO using -ffixed-reg for all those registers is even worse, as that
>>>>> prevents the generated code in hweight from using any of those, thus
>>>>> greatly limiting the number of registers and likely making the
>>>>> generated code rely heavily on pushing and popping from the stack?
>>>>
>>>> Okay, that's the other side of the idea behind all this: Virtually no
>>>> hardware we run on will lack POPCNT support, hence the quality of
>>>> these fallback routines matters only on very old hardware, where we
>>>> likely don't perform optimally already anyway.
>>>>
>>>>> This also has the side effect of limiting the usage of popcnt to gcc,
>>>>> which IMO is also not ideal.
>>>>
>>>> Agreed. I don't know enough about clang to be able to think of
>>>> possible alternatives. In any event there's no change to current
>>>> behavior for hypervisors built with clang.
>>>>
>>>>> I also wondered, since the in-place asm before patching is a call
>>>>> instruction, wouldn't inline asm at build time already assume that the
>>>>> scratch registers are clobbered?
>>>>
>>>> That would imply the compiler peeks into the string literal of the
>>>> asm(). At least gcc doesn't, and even if it did it couldn't infer an
>>>> ABI from seeing a CALL insn.
>>>
>>> Please bear with me, but then I don't understand what Linux is doing
>>> in arch/x86/include/asm/arch_hweight.h. I see no clobbers there,
>>> nor does it seem like the __sw_hweight{32/64} functions are built
>>> without the usage of the scratch registers.
>>
>> __sw_hweight{32,64} are implemented in assembly, avoiding most
>> scratch registers while pushing/popping the ones which do get
>> altered.
> 
> Oh right, I was looking at lib/hweight.c instead of the arch one.
> 
> Would you agree to use the no_caller_saved_registers attribute (which
> is available AFAICT for both gcc and clang) for generic_hweightXX and
> then remove the asm prefix code in favour of the defines for
> hweight{8/16}?

At least for gcc no_caller_saved_registers isn't old enough to be
used unconditionally (nor is its companion -mgeneral-regs-only).
If you tell me it's fine to use unconditionally with clang, then
I can see about making this the preferred variant, with the
present one as a fallback.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 13:37:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 13:37:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbOtj-0003fP-2C; Wed, 20 May 2020 13:36:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbOth-0003fK-Rb
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 13:36:37 +0000
X-Inumbo-ID: ecf8baa8-9a9e-11ea-aa14-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ecf8baa8-9a9e-11ea-aa14-12813bfff9fa;
 Wed, 20 May 2020 13:36:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 967E4B249;
 Wed, 20 May 2020 13:36:37 +0000 (UTC)
Subject: Re: [PATCH 1/3] xen/monitor: Control register values
To: Tamas K Lengyel <tamas@tklengyel.com>
References: <cover.1589561218.git.tamas@tklengyel.com>
 <72d4d282dd20b79ebdbaf1f70865ea38b075c5c0.1589561218.git.tamas@tklengyel.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ec19beb2-6e69-4e62-b260-0d76b2a7f5a7@suse.com>
Date: Wed, 20 May 2020 15:36:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <72d4d282dd20b79ebdbaf1f70865ea38b075c5c0.1589561218.git.tamas@tklengyel.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Alexandru Isaila <aisaila@bitdefender.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 18:53, Tamas K Lengyel wrote:
> Extend the monitor_op domctl to include an option that enables
> controlling what values certain registers are permitted to hold
> by a monitor subscriber.

This needs a bit more explanation, especially for those of us
who aren't that introspection savvy. For example, from the text
here I didn't expect a simple bool control, but something where
actual (register) values get passed back and forth.

> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -2263,9 +2263,10 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
>      {
>          ASSERT(v->arch.vm_event);
>  
> -        if ( hvm_monitor_crX(CR0, value, old_value) )
> +        if ( hvm_monitor_crX(CR0, value, old_value) &&
> +             v->domain->arch.monitor.control_register_values )
>          {
> -            /* The actual write will occur in hvm_do_resume(), if permitted. */
> +            /* The actual write will occur in hvm_do_resume, if permitted. */

Please can you leave alone this and the similar comments below.
And for consistency _add_ parentheses to the one new instance
you add?

> --- a/xen/arch/x86/monitor.c
> +++ b/xen/arch/x86/monitor.c
> @@ -144,7 +144,15 @@ int arch_monitor_domctl_event(struct domain *d,
>                                struct xen_domctl_monitor_op *mop)
>  {
>      struct arch_domain *ad = &d->arch;
> -    bool requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
> +    bool requested_status;
> +
> +    if ( XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS == mop->op )
> +    {
> +        ad->monitor.control_register_values = true;

And there's no way to clear this flag again?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 13:43:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 13:43:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbP06-0004WZ-Mt; Wed, 20 May 2020 13:43:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PdaM=7C=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jbP05-0004WU-7V
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 13:43:13 +0000
X-Inumbo-ID: d913194c-9a9f-11ea-ae69-bc764e2007e4
Received: from mail-ed1-x544.google.com (unknown [2a00:1450:4864:20::544])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d913194c-9a9f-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 13:43:12 +0000 (UTC)
Received: by mail-ed1-x544.google.com with SMTP id l25so3108370edj.4
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 06:43:12 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=RUIKCU/ZJORNs0A4gvnhf+OWb3nG8Xn3Lrmx2o7yhTs=;
 b=W/FT6hw/8kMejKs/XUDm0f2KSG6asWYRqRJHTzuh+Oo8E0kkNZiJytnpKTEVh6hU4m
 Ee7BHmvi3pf3Zr/t71AALly+Y/PN0R1NGvjkxZQHpxiRAQK3e9IituX6GMQRluCHwGgV
 sGYkIlZp+eti6Bxnqb+qf/xSfgXrmzwH3Oio4TWOouwecKL1HQQZgTa10J59nOwHZcun
 Y3NPgCYbDkiftL253etdrSemOcMz7OWKE0kPja4WUXCQMVvdcMafJ+9kSyhu1VFcSCxr
 VwaxIT++kfJVzKz/Zy3BAJJrtt5wTUGUoRPlv39PspCnP9+S+q8pzCAIltY7m+ehdXiT
 vRrQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=RUIKCU/ZJORNs0A4gvnhf+OWb3nG8Xn3Lrmx2o7yhTs=;
 b=kTLL9xhIuePGKd/VA1XqGxSGycUNMqz1//+z10+2OFc8/LfgJklAxG+ZEfIQ1iABwt
 nO4Wqju13pBKj6siVFr1BEASsNnWxJQeS9ffqMQ8Lm9Dk0nneCs38Z4ze/TIXZhxxyWm
 Sc6HgBXe70fAfqSL/oJSfzRkqknsxDJZzMzC9huqeq7xsTwAjhtN3u9s+k9a9nvWQLnY
 HMfj0PdcqKvFaSOEpBJcZ3hTykL0TK/WzYJbqm7gTW2SXaeaeKx2+P0ZeozCKtAuWYmF
 ZLEBLtWQ3wOVWG27FoyXczi5PkerzJUWn+baSJKMwiR1YFoCBswVRxbawxKmFouLyyZJ
 SgCA==
X-Gm-Message-State: AOAM531AO7cMDZOBaHL+NsqBDxSxOXyX/4PXGP14obfBqUjFsV71aJmX
 FXY129miAlr0yBtjat92H4Zxq4YBdVw=
X-Google-Smtp-Source: ABdhPJznjXfUyzWjbAAdS3E5rmJl153LX5+vjbc/Z59yjB0MUn/5pbJnPg+twZ4/tGLpnNa4rVRDzg==
X-Received: by 2002:a05:6402:31b5:: with SMTP id
 dj21mr3668236edb.160.1589982191493; 
 Wed, 20 May 2020 06:43:11 -0700 (PDT)
Received: from mail-wr1-f51.google.com (mail-wr1-f51.google.com.
 [209.85.221.51])
 by smtp.gmail.com with ESMTPSA id b11sm2034885ejl.3.2020.05.20.06.43.09
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 20 May 2020 06:43:09 -0700 (PDT)
Received: by mail-wr1-f51.google.com with SMTP id l11so3242260wru.0
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 06:43:09 -0700 (PDT)
X-Received: by 2002:adf:fccd:: with SMTP id f13mr4464584wrs.386.1589982189358; 
 Wed, 20 May 2020 06:43:09 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1589561218.git.tamas@tklengyel.com>
 <72d4d282dd20b79ebdbaf1f70865ea38b075c5c0.1589561218.git.tamas@tklengyel.com>
 <ec19beb2-6e69-4e62-b260-0d76b2a7f5a7@suse.com>
In-Reply-To: <ec19beb2-6e69-4e62-b260-0d76b2a7f5a7@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 20 May 2020 07:42:33 -0600
X-Gmail-Original-Message-ID: <CABfawhmg+ZtzKvJecAJE8=C+rnPiywUy8vO81Yz9dC4j2h-feg@mail.gmail.com>
Message-ID: <CABfawhmg+ZtzKvJecAJE8=C+rnPiywUy8vO81Yz9dC4j2h-feg@mail.gmail.com>
Subject: Re: [PATCH 1/3] xen/monitor: Control register values
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 7:36 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 15.05.2020 18:53, Tamas K Lengyel wrote:
> > Extend the monitor_op domctl to include an option that enables
> > controlling what values certain registers are permitted to hold
> > by a monitor subscriber.
>
> This needs a bit more explanation, especially for those of us
> who aren't that introspection savvy. For example, from the text
> here I didn't expect a simple bool control, but something where
> actual (register) values get passed back and forth.
>
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -2263,9 +2263,10 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
> >      {
> >          ASSERT(v->arch.vm_event);
> >
> > -        if ( hvm_monitor_crX(CR0, value, old_value) )
> > +        if ( hvm_monitor_crX(CR0, value, old_value) &&
> > +             v->domain->arch.monitor.control_register_values )
> >          {
> > -            /* The actual write will occur in hvm_do_resume(), if permitted. */
> > +            /* The actual write will occur in hvm_do_resume, if permitted. */
>
> Please can you leave alone this and the similar comments below.
> And for consistency _add_ parentheses to the one new instance
> you add?

I changed it because now it doesn't fit into the 80-character limit below,
and then changed it everywhere _for_ consistency.

>
> > --- a/xen/arch/x86/monitor.c
> > +++ b/xen/arch/x86/monitor.c
> > @@ -144,7 +144,15 @@ int arch_monitor_domctl_event(struct domain *d,
> >                                struct xen_domctl_monitor_op *mop)
> >  {
> >      struct arch_domain *ad = &d->arch;
> > -    bool requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
> > +    bool requested_status;
> > +
> > +    if ( XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS == mop->op )
> > +    {
> > +        ad->monitor.control_register_values = true;
>
> And there's no way to clear this flag again?

There is. Disable the monitor vm_event interface and reinitialize.

Tamas


From xen-devel-bounces@lists.xenproject.org Wed May 20 13:45:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 13:45:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbP2V-0004eI-82; Wed, 20 May 2020 13:45:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbP2U-0004eD-Ab
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 13:45:42 +0000
X-Inumbo-ID: 3200f858-9aa0-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3200f858-9aa0-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 13:45:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id F38ACAC46;
 Wed, 20 May 2020 13:45:42 +0000 (UTC)
Subject: Re: [PATCH 3/3] xen/vm_event: Add safe to disable vm_event
To: Tamas K Lengyel <tamas@tklengyel.com>
References: <cover.1589561218.git.tamas@tklengyel.com>
 <1168bacc61f655f559c236cdf63a6b2beccd4d6b.1589561218.git.tamas@tklengyel.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <28e50e15-410e-096d-51f1-e304c9ef8cdb@suse.com>
Date: Wed, 20 May 2020 15:45:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <1168bacc61f655f559c236cdf63a6b2beccd4d6b.1589561218.git.tamas@tklengyel.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Alexandru Isaila <aisaila@bitdefender.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15.05.2020 18:53, Tamas K Lengyel wrote:
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -563,15 +563,41 @@ void hvm_do_resume(struct vcpu *v)
>          v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
>      }
>  
> -    if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
> +    if ( unlikely(v->arch.vm_event) )
>      {
> -        struct x86_event info;
> +        struct domain *d = v->domain;

const

> +        if ( v->arch.monitor.next_interrupt_enabled )
> +        {
> +            struct x86_event info;
> +
> +            if ( hvm_get_pending_event(v, &info) )
> +            {
> +                hvm_monitor_interrupt(info.vector, info.type, info.error_code,
> +                                      info.cr2);
> +                v->arch.monitor.next_interrupt_enabled = false;
> +            }
> +        }
>  
> -        if ( hvm_get_pending_event(v, &info) )
> +        if ( d->arch.monitor.safe_to_disable )
>          {
> -            hvm_monitor_interrupt(info.vector, info.type, info.error_code,
> -                                  info.cr2);
> -            v->arch.monitor.next_interrupt_enabled = false;
> +            struct vcpu *check_vcpu;

const again, requiring a respective adjustment to patch 2.

> +            bool pending_op = false;
> +
> +            for_each_vcpu ( d, check_vcpu )
> +            {
> +                if ( vm_event_check_pending_op(check_vcpu) )
> +                {
> +                    pending_op = true;
> +                    break;
> +                }
> +            }
> +
> +            if ( !pending_op )
> +            {
> +                hvm_monitor_safe_to_disable();

This new function returns bool without the caller caring about the
return value.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 13:48:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 13:48:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbP57-0004nc-Ms; Wed, 20 May 2020 13:48:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbP56-0004nX-7Z
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 13:48:24 +0000
X-Inumbo-ID: 924a1ca8-9aa0-11ea-aa1f-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 924a1ca8-9aa0-11ea-aa1f-12813bfff9fa;
 Wed, 20 May 2020 13:48:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AF64EAC90;
 Wed, 20 May 2020 13:48:24 +0000 (UTC)
Subject: Re: [PATCH 1/3] xen/monitor: Control register values
To: Tamas K Lengyel <tamas@tklengyel.com>
References: <cover.1589561218.git.tamas@tklengyel.com>
 <72d4d282dd20b79ebdbaf1f70865ea38b075c5c0.1589561218.git.tamas@tklengyel.com>
 <ec19beb2-6e69-4e62-b260-0d76b2a7f5a7@suse.com>
 <CABfawhmg+ZtzKvJecAJE8=C+rnPiywUy8vO81Yz9dC4j2h-feg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d0cd29c0-3070-ceb2-cd21-4ae359a0ec57@suse.com>
Date: Wed, 20 May 2020 15:48:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CABfawhmg+ZtzKvJecAJE8=C+rnPiywUy8vO81Yz9dC4j2h-feg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.2020 15:42, Tamas K Lengyel wrote:
> On Wed, May 20, 2020 at 7:36 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 15.05.2020 18:53, Tamas K Lengyel wrote:
>>> Extend the monitor_op domctl to include an option that enables
>>> controlling what values certain registers are permitted to hold
>>> by a monitor subscriber.
>>
>> This needs a bit more explanation, especially for those of us
>> who aren't that introspection savvy. For example, from the text
>> here I didn't expect a simple bool control, but something where
>> actual (register) values get passed back and forth.
>>
>>> --- a/xen/arch/x86/hvm/hvm.c
>>> +++ b/xen/arch/x86/hvm/hvm.c
>>> @@ -2263,9 +2263,10 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
>>>      {
>>>          ASSERT(v->arch.vm_event);
>>>
>>> -        if ( hvm_monitor_crX(CR0, value, old_value) )
>>> +        if ( hvm_monitor_crX(CR0, value, old_value) &&
>>> +             v->domain->arch.monitor.control_register_values )
>>>          {
>>> -            /* The actual write will occur in hvm_do_resume(), if permitted. */
>>> +            /* The actual write will occur in hvm_do_resume, if permitted. */
>>
>> Please can you leave alone this and the similar comments below.
>> And for consistency _add_ parentheses to the one new instance
>> you add?
> 
> I changed it because now it doesn't fit into the 80-character limit below,
> and then changed it everywhere _for_ consistency.

The 80-char limit is easy to deal with - wrap the line.

>>> --- a/xen/arch/x86/monitor.c
>>> +++ b/xen/arch/x86/monitor.c
>>> @@ -144,7 +144,15 @@ int arch_monitor_domctl_event(struct domain *d,
>>>                                struct xen_domctl_monitor_op *mop)
>>>  {
>>>      struct arch_domain *ad = &d->arch;
>>> -    bool requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
>>> +    bool requested_status;
>>> +
>>> +    if ( XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS == mop->op )
>>> +    {
>>> +        ad->monitor.control_register_values = true;
>>
>> And there's no way to clear this flag again?
> 
> There is. Disable the monitor vm_event interface and reinitialize.

Quite heavy-handed, isn't it?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:08:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:08:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbPNu-0006ZY-Dt; Wed, 20 May 2020 14:07:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbPNt-0006ZT-A4
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:07:49 +0000
X-Inumbo-ID: 48ed572a-9aa3-11ea-b07b-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48ed572a-9aa3-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 14:07:48 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 98xVz1GJeLLKXYKeJAcgFHQZ3aHnKNcAeNxvoVIdbuwkn4VN4E+uzokvaB9ItCm15E37jppm5M
 lAWeVl/Sg0ECkRwiTw0PaCyKuLxGYj4kv6ioZoCVYj/aFFwh4RQLY6rLMhP+NiK9hcrEERZf9r
 l/treWeTpf9NBh4I9o2u0Udg5U3WiirQl4etGHaX9OuK0qYKe/u0yugaSyaCSFH28cFIccpXVH
 IF6ge19dVw5nUKwiV0tKDrOZYjMRMBeBKe2Ivb2LaaEjnFK+B1X/wqj+B4DX4eS7LBtisnJLCO
 84U=
X-SBRS: 2.7
X-MesageID: 18249144
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18249144"
Subject: Re: [PATCH] VT-x: extend LBR Broadwell errata coverage
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <df6e8dad-b4c0-0821-46eb-e4aa86f8ccfa@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e107f97b-4bb7-31ee-20d1-ddf8f7e00c21@citrix.com>
Date: Wed, 20 May 2020 15:07:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <df6e8dad-b4c0-0821-46eb-e4aa86f8ccfa@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 13:52, Jan Beulich wrote:
> For lbr_tsx_fixup_check() simply name a few more specific errata numbers.
>
> For bdf93_fixup_check(), however, more models are affected. Oddly enough
> despite being the same model and stepping, the erratum is listed for Xeon
> E3 but not its Core counterpart.

That is probably a documentation error.  These processors are made from
the same die, and are not going to deviate in this regard.

> With this it's of course also uncertain
> whether the absence of the erratum for Xeon D is actually meaningful.

Given BDE105, it is exceedingly unlikely that this erratum alone was
fixed, leaving the other related ones present.

The complicating factor is that the TSX errata were addressed in some
later Broadwell parts.  Both these errata groups are to do with a
mismatch of TSX metadata in LBR/LER records.

The former group affects Haswell and Broadwell, but only when microcode
has disabled TSX, and manifests in the processor rejecting
architecturally-correct last-branch-to records.  Any model in the list
which still has TSX enabled in up-to-date microcode doesn't get the
workaround.

The latter group affects Broadwell only, and manifests as an
architecturally incorrect ler-from record, which shouldn't have any TSX
metadata to begin with.

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2870,8 +2870,10 @@ static void __init lbr_tsx_fixup_check(v

There is a comment out of context here which is now stale.

If simply adding to the list is something you'd prefer to avoid,
what about /* Haswell/Broadwell LBR TSX metadata errata */ or similar?

>      case 0x45: /* HSM182 - 4th gen Core */
>      case 0x46: /* HSM182, HSD172 - 4th gen Core (GT3) */
>      case 0x3d: /* BDM127 - 5th gen Core */
> -    case 0x47: /* BDD117 - 5th gen Core (GT3) */
> -    case 0x4f: /* BDF85  - Xeon E5-2600 v4 */
> +    case 0x47: /* BDD117 - 5th gen Core (GT3)
> +                  BDW117 - Xeon E3-1200 v4 */
> +    case 0x4f: /* BDF85  - Xeon E5-2600 v4
> +                  BDX88  - Xeon E7-x800 v4 */
>      case 0x56: /* BDE105 - Xeon D-1500 */
>          break;
>      default:
> @@ -2895,15 +2897,26 @@ static void __init lbr_tsx_fixup_check(v
>  static void __init bdf93_fixup_check(void)

Seeing as this is no longer just BDF93, how about ler_tsx_fixup_check() ?

~Andrew

>  {
>      /*
> -     * Broadwell erratum BDF93:
> +     * Broadwell erratum BDF93 et al:
>       *
>       * Reads from MSR_LER_TO_LIP (MSR 1DEH) may return values for bits[63:61]
>       * that are not equal to bit[47].  Attempting to context switch this value
>       * may cause a #GP.  Software should sign extend the MSR.
>       */
> -    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> -         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4f )
> +    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
> +         boot_cpu_data.x86 != 6 )
> +        return;
> +
> +    switch ( boot_cpu_data.x86_model )
> +    {
> +    case 0x3d: /* BDM131 - 5th gen Core */
> +    case 0x47: /* BDD??? - 5th gen Core (H-Processor line)
> +                  BDW120 - Xeon E3-1200 v4 */
> +    case 0x4f: /* BDF93  - Xeon E5-2600 v4
> +                  BDX93  - Xeon E7-x800 v4 */
>          bdf93_fixup_needed = true;
> +        break;
> +    }
>  }
>  
>  static int is_last_branch_msr(u32 ecx)
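
The sign extension the quoted comment asks for amounts to copying bit 47
into bits 63:48 of the value read from MSR_LER_TO_LIP; a portable sketch
(helper name is illustrative, not Xen's actual code):

```c
#include <stdint.h>

/*
 * Sketch of the BDF93-style workaround: canonicalise a 48-bit linear
 * address by replicating bit 47 into bits 63:48, so a later context
 * switch of MSR_LER_TO_LIP doesn't #GP on a non-canonical value.
 */
static uint64_t canonicalise_lip(uint64_t val)
{
    return (val & 0x0000ffffffffffffULL) |
           ((val & (1ULL << 47)) ? 0xffff000000000000ULL : 0);
}
```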



From xen-devel-bounces@lists.xenproject.org Wed May 20 14:14:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:14:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbPU4-0007PY-5U; Wed, 20 May 2020 14:14:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DIPQ=7C=net-space.pl=dkiper@srs-us1.protection.inumbo.net>)
 id 1jbPU2-0007PP-AE
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:14:10 +0000
X-Inumbo-ID: 2b35deb9-9aa4-11ea-aa29-12813bfff9fa
Received: from dibed.net-space.pl (unknown [84.10.22.86])
 by us1-amaz-eas2.inumbo.com (Halon) with SMTP
 id 2b35deb9-9aa4-11ea-aa29-12813bfff9fa;
 Wed, 20 May 2020 14:14:09 +0000 (UTC)
Received: from router-fw.i.net-space.pl ([192.168.52.1]:38256 "EHLO
 tomti.i.net-space.pl") by router-fw-old.i.net-space.pl with ESMTP
 id S1953456AbgETNko (ORCPT <rfc822;xen-devel@lists.xenproject.org>);
 Wed, 20 May 2020 15:40:44 +0200
X-Comment: RFC 2476 MSA function at dibed.net-space.pl logged sender identity
 as: dkiper
Date: Wed, 20 May 2020 15:40:40 +0200
From: Daniel Kiper <dkiper@net-space.pl>
To: Ian Jackson <ian.jackson@eu.citrix.com>
Subject: Re: [GRUB PATCH 0/2] Better Xen support
Message-ID: <20200520134040.kwyugkbmel7y4i6q@tomti.i.net-space.pl>
References: <20200520121420.7965-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200520121420.7965-1-ian.jackson@eu.citrix.com>
User-Agent: NeoMutt/20170113 (1.7.2)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: grub-devel@gnu.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 01:14:18PM +0100, Ian Jackson wrote:
> Hi. As maintainer of the Xen Project upstream CI, I do testing of
> upstream Xen builds on Debian systems.
>
> We use grub's 20_linux_xen to do the bootloader setup.  However, it is
> missing some features so we are carrying some patches.  Here they are
> for your consideration.
>
> Regards, Ian.
>
> Ian Jackson (2):
>   20_linux_xen: Ignore xenpolicy and config files too
>   20_linux_xen: Support Xen Security Modules (XSM/FLASK)

Reviewed-by: Daniel Kiper <daniel.kiper@oracle.com>

Daniel


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:19:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:19:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbPZY-0007aY-RD; Wed, 20 May 2020 14:19:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbPZX-0007aT-Ia
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:19:51 +0000
X-Inumbo-ID: f3e49ad4-9aa4-11ea-aa29-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f3e49ad4-9aa4-11ea-aa29-12813bfff9fa;
 Wed, 20 May 2020 14:19:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aGpjqc8V1/9YRpCaYv7Q9mn2bt7z70BApPMl6nYQWu8=; b=cFL44RwLCxOqhjf2esCeYki3m
 4UVCPYmjH9FkCDTW8gJlyy/ltTKUuQP5FssvK5rH3eBeHiQc1svLLmvVnUTbhzAI35Sv4G3Z9zP5q
 s86UygQVliI7+7n2w72GtseqnbtqwMRdCgzKrZWFoAZcO++a/ST/DzAW5vW5Y76NITIVI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbPZQ-0003LF-44; Wed, 20 May 2020 14:19:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbPZP-0001jD-RT; Wed, 20 May 2020 14:19:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbPZP-0002W2-Qz; Wed, 20 May 2020 14:19:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150276-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150276: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=cdea123f1976549ecc72644588cc5ce1491606c4
X-Osstest-Versions-That: xen=e235fa2794c95365519eac714d6ea82f8e64752e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 14:19:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150276 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150276/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cdea123f1976549ecc72644588cc5ce1491606c4
baseline version:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e

Last test of basis   150265  2020-05-19 21:01:18 Z    0 days
Testing same since   150276  2020-05-20 11:01:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  David Woodhouse <dwmw@amazon.co.uk>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e235fa2794..cdea123f19  cdea123f1976549ecc72644588cc5ce1491606c4 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:20:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:20:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbPZk-0007t5-33; Wed, 20 May 2020 14:20:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbPZj-0007kg-08
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:20:03 +0000
X-Inumbo-ID: fe0cf718-9aa4-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe0cf718-9aa4-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 14:20:02 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C3FE9AF44;
 Wed, 20 May 2020 14:20:03 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86emul: correct {evex} assembler capability check
Message-ID: <2c0c9040-5ae4-ec08-9ddc-b88b99645950@suse.com>
Date: Wed, 20 May 2020 16:20:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The {evex} pseudo prefix gets rejected by gas for insns not allowing
EVEX encoding. Except there's a gas bug due to which its check gets
bypassed for insns without operands. Let's not rely on that bug to
remain there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -112,7 +112,7 @@ $(foreach flavor,$(SIMD) $(FMA),$(eval $
 
 # Also explicitly check for {evex} pseudo-prefix support, which got introduced
 # only after AVX512F and some of its extensions.
-TARGET-$(shell echo 'asm("{evex} vzeroall");' | $(CC) -x c -c -o /dev/null - || echo y) :=
+TARGET-$(shell echo 'asm("{evex} vmovaps %xmm0$(comma)%xmm0");' | $(CC) -x c -c -o /dev/null - || echo y) :=
 
 ifeq ($(TARGET-y),)
 $(warning Test harness not built, use newer compiler than "$(CC)" (version $(shell $(CC) -dumpversion)) and an "{evex}" capable assembler)
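The probe pattern the Makefile uses can be sketched as a standalone POSIX shell script. This is a minimal sketch, not part of the patch: the `probe` helper name and the extra `nop` probe are illustrative additions; the compiler invocation and the `{evex} vmovaps` instruction string are taken from the patch above.

```shell
#!/bin/sh
# Probe whether the toolchain's assembler accepts a given instruction,
# mirroring the Makefile's $(shell echo 'asm(...);' | $(CC) ...) check:
# compile a one-line asm() statement and inspect the compiler exit status.
probe() {
    if printf 'asm("%s");\n' "$1" | "${CC:-cc}" -x c -c -o /dev/null - 2>/dev/null; then
        echo "supported: $1"
    else
        echo "unsupported: $1"
    fi
}

probe 'nop'                          # accepted by any target assembler
probe '{evex} vmovaps %xmm0,%xmm0'   # needs an {evex}-capable gas
```

Feeding the snippet on stdin (`-x c -c -o /dev/null -`) avoids temporary files, and the exit status alone decides the result, exactly as in the `TARGET-$(shell ...)` assignment.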


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:41:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:41:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbPuG-0001cd-UP; Wed, 20 May 2020 14:41:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbPuF-0001cV-Td
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:41:15 +0000
X-Inumbo-ID: f4beb0fe-9aa7-11ea-ae69-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4beb0fe-9aa7-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 14:41:14 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sYwtYo5bEK5hVQ+5/4808OUOJfIg1YQVoHSEf6OeDo+JxvD/4LYxIbui7qrkwLcAwAaoewDUiV
 3qhpXsXU8BiMHq7Jcrh16jA1sa9vtSVQhyld8e3U4gC+mG96TwX8vMFWn9inb3dvevcA7Ai8lI
 puF4f2iaW2dog1B+H2ZyxYp6onjUJXVMrhRqRD/jjUEBtcU7R9wzoas1ZeVyltRHHA3GBXfYt7
 ktlrTMkjV4i1tLj7tq9gmTef8Nke1g0vlHlpAcq7SMTVac1rbIIxbqCWd0p8vhR33uLwi/zYSJ
 b3w=
X-SBRS: 2.7
X-MesageID: 18254154
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18254154"
Subject: Re: [PATCH] x86emul: correct {evex} assembler capability check
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
References: <2c0c9040-5ae4-ec08-9ddc-b88b99645950@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a1215ec5-132f-e71a-39d7-c17a0d65969c@citrix.com>
Date: Wed, 20 May 2020 15:41:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <2c0c9040-5ae4-ec08-9ddc-b88b99645950@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 15:20, Jan Beulich wrote:
> The {evex} pseudo prefix gets rejected by gas for insns not allowing
> EVEX encoding. Except there's a gas bug due to which its check gets
> bypassed for insns without operands. Let's not rely on that bug to
> remain there.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:50:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:50:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbQ2p-0002Ve-Tl; Wed, 20 May 2020 14:50:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NWk/=7C=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jbQ2o-0002S1-Pj
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:50:06 +0000
X-Inumbo-ID: 30a97440-9aa9-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30a97440-9aa9-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 14:50:05 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Z9Y9y88xAnxg57My9pEQR+PfbEdb7JKaUhhBfu5OWwXLjqx4JP9yQwH7SiSyfbfHfgaiaZ7jEp
 5IM0RJ7LqubvuFkknKBHuzVLoHsiEooPkhyIZWzbfl42TMsH41NJzEiAk/0PywaM7Ss6Siv27o
 jejkRoQUc7n+ho3lAEiwI6Z375EmweYxM0lVMgCMtmjvJu7kmV4PJP729DIiJ5XdLLUQwBuCZ7
 3Qlo6pEP8z/5WtWtOOvO7lqRqsOXnOMG72hg+2vGeNxBmyCLTVuT48vKf0mvQGuljZRTMaUfrx
 tm0=
X-SBRS: 2.7
X-MesageID: 18022566
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18022566"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24261.17303.413916.29534@mariner.uk.xensource.com>
Date: Wed, 20 May 2020 15:49:59 +0100
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/libxengnttab: correct size of allocated memory
In-Reply-To: <20200520083501.31704-1-jgross@suse.com>
References: <20200520083501.31704-1-jgross@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei
 Liu <wl@xen.org>, Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Juergen Gross writes ("[PATCH] tools/libxengnttab: correct size of allocated memory"):
> The size of the memory allocated for the IOCTL_GNTDEV_MAP_GRANT_REF
> ioctl() parameters is calculated incorrectly, which results in too much
> memory being allocated.

Added Roger to CC.

Firstly,

Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thank you.


But, looking at this code, why on earth is it written this way?

The FreeBSD code checks to see if the allocation is less than a page and
if so uses malloc, and otherwise uses mmap!  Why not unconditionally use
malloc?

Likewise, the Linux code has its own mmap-based memory-obtainer.  ISTM
that malloc is probably going to be better.  Often it will be able to
give out even a substantial amount without making a syscall.

Essentially, we have two (similar but not identical) tiny custom
memory allocators here.  Also, the Linux and FreeBSD code are
remarkably similar, which bothers me.

Anyway, these observations are no criticism of Juergen's patch.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:57:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbQ9Z-0002iv-RG; Wed, 20 May 2020 14:57:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NWk/=7C=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jbQ9Z-0002iq-0P
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:57:05 +0000
X-Inumbo-ID: 2a71fe66-9aaa-11ea-9887-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a71fe66-9aaa-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 14:57:04 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 381OR0lM7YUVTL6/Jp1k/rQ5yIXprlJ842bZ6ZCPlei49ASmBv1r2C1rE92jBiFMjFwp/HV3ZA
 WT+RAlBoucY/OJO2me87WZQ2vUx9LCglCLvN1h0QGWfsIAdYljzS/VguYnyOnKc+VaoEWZSsoB
 ysYb+q6+3no+Uh7Rc9CNW/NGXgrBgcXqVSMd4LcuMMs7WpVnCufoRoSPQaTHCD1ldzCjUS1krU
 El0lK0ZF3yLeMOChFspltBES2wDziz7UBpRXq23nhqe991EpirCIvV87RJXyWIpfjREGSSWK3Y
 gOA=
X-SBRS: 2.7
X-MesageID: 18276678
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18276678"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24261.17724.382954.918761@mariner.uk.xensource.com>
Date: Wed, 20 May 2020 15:57:00 +0100
To: Julien Grall <julien@xen.org>
Subject: Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
In-Reply-To: <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-23-ian.jackson@eu.citrix.com>
 <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
> On 19/05/2020 20:02, Ian Jackson wrote:
> > CC: Julien Grall <julien@xen.org>
> > CC: Stefano Stabellini <sstabellini@kernel.org>
> > Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Thanks.

> >   	# Debian doesn't currently know what bootloader to install in
> >   	# a Xen guest on ARM. We install pv-grub-menu above which
> 
> OOI, what does Debian install for x86 HVM guest? Is there any ticket 
> tracking this issue?

On x86, it installs grub.  (grub2, x86, PC, to be precise.)
I'm not aware of any ticket or bug about this.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:58:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:58:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbQAR-0002n8-4e; Wed, 20 May 2020 14:57:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NWk/=7C=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jbQAP-0002mz-ER
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:57:57 +0000
X-Inumbo-ID: 497b7bc0-9aaa-11ea-aa32-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 497b7bc0-9aaa-11ea-aa32-12813bfff9fa;
 Wed, 20 May 2020 14:57:56 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: nY5r0Gry+ldeq6TLb+N+71DylejcoRpA2XCOx0otUhMnDRhGdGYDFyMzpAGDBhkztINLB+SSrF
 +x3ZdZVxxTPct+8PD/yX3CHBSQq5NBTksy0zRMTohyc63m6g6LnWKgkLkXv6ZLhITzYiCHHF4v
 x3pngUcfoQSz3W0Ow9Bs94Ep6hrBRXKJz/jHuBUw5Xx6Fwg9OybTdbdNLjltk+Sgku0nSfqb/G
 IsdKAtKW3Ty2HPCrxKiW538KBPki5mZ15pCfpVKsRYV+NLm2BQ373wE2NR/B418ZIRVRTASCKq
 MB4=
X-SBRS: 2.7
X-MesageID: 18023814
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18023814"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24261.17765.769078.328560@mariner.uk.xensource.com>
Date: Wed, 20 May 2020 15:57:41 +0100
To: Julien Grall <julien@xen.org>
Subject: Re: [OSSTEST PATCH 34/38] buster: grub, arm64: extend chainloading
 workaround
In-Reply-To: <f3becdd7-a7e1-3e99-ca90-4ce0f74aa467@xen.org>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-35-ian.jackson@eu.citrix.com>
 <f3becdd7-a7e1-3e99-ca90-4ce0f74aa467@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Julien Grall writes ("Re: [OSSTEST PATCH 34/38] buster: grub, arm64: extend chainloading workaround"):
> On 19/05/2020 20:02, Ian Jackson wrote:
> > multiboot[2] isn't supported.
> > 
> > Also link to the bug report.
> > 
> > CC: Julien Grall <julien@xen.org>
> > CC: Stefano Stabellini <sstabellini@kernel.org>
> > Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> > ---
> >   Osstest/Debian.pm | 3 ++-
> >   1 file changed, 2 insertions(+), 1 deletion(-)
> > 
> > diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
> > index 77508d19..151677ed 100644
> > --- a/Osstest/Debian.pm
> > +++ b/Osstest/Debian.pm
> > @@ -443,9 +443,10 @@ sub setupboot_grub2 ($$$$) {
> >       my $kernkey= (defined $xenhopt ? 'KernDom0' : 'KernOnly');
> >   
> >       # Grub2 on jessie/stretch ARM* doesn't do multiboot, so we must chainload.
> > +    # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=884770
> >       my $need_uefi_chainload =
> >           get_host_property($ho, "firmware") eq "uefi" &&
> > -        $ho->{Suite} =~ m/jessie|stretch/ && $ho->{Arch} =~ m/^arm/;
> > +        $ho->{Suite} =~ m/jessie|stretch|buster/ && $ho->{Arch} =~ m/^arm/;
> 
> FWIW, the next version of Debian seems to have a newer GRUB version with 
> Xen on Arm support.

Cool, we can drop this eventually then :-).

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Wed May 20 14:59:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 14:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbQC8-0002wI-GJ; Wed, 20 May 2020 14:59:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/hkT=7C=zohomail.eu=elliotkillick@srs-us1.protection.inumbo.net>)
 id 1jbQC6-0002wB-Hy
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 14:59:42 +0000
X-Inumbo-ID: 882a9248-9aaa-11ea-aa32-12813bfff9fa
Received: from sender11-pp-o93.zoho.eu (unknown [31.186.226.251])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 882a9248-9aaa-11ea-aa32-12813bfff9fa;
 Wed, 20 May 2020 14:59:41 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1589986779; cv=none; d=zohomail.eu; s=zohoarc; 
 b=hsuZSaxiZNrWx4SZfuji5fG9mpoim99fDkem1jGRnxduetF9Oo5F6GhhBb1Cjc5di167pKl7LbQMu5B3zK33BRMnCYR49Q3Lgq/Od8r3B8TI/reJ+e4XupDTfRNP4FckxPsBbGvuoK6x5D07n5dVHEfIvBB0FcX+NfRB+Rj9h+Y=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.eu;
 s=zohoarc; t=1589986779;
 h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To;
 bh=cw3sbluNfCHHl1K+t3LIMvsgkm5XgmIsw435d5vASq0=; 
 b=lnbO5MAseOwSBJE3NflNvVACPew4nNZEBgDSzr2Qm7de37Mo/R3HxLbUZQplPKU8Y6NEU5igABE+gToalTuelEgQtfM5GTP412GVJWoLALlYmys622NYORGnyLadyOknFeCYfRuLTblenuFm3SCiqpJW9WnLNfKFu6WtSdPNhKs=
ARC-Authentication-Results: i=1; mx.zohomail.eu;
 dkim=pass  header.i=zohomail.eu;
 spf=pass  smtp.mailfrom=elliotkillick@zohomail.eu;
 dmarc=pass header.from=<elliotkillick@zohomail.eu>
 header.from=<elliotkillick@zohomail.eu>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1589986779; 
 s=zoho; d=zohomail.eu; i=elliotkillick@zohomail.eu;
 h=Subject:To:References:From:Cc:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
 bh=cw3sbluNfCHHl1K+t3LIMvsgkm5XgmIsw435d5vASq0=;
 b=BZlg6TKEz5zMnEsvvJ5z6Mo6vz+Pr76P4XfYK/Uzr0kiaXRNowZXhlYbOKY1qRAp
 1VJuNv3p0bL9l4gNa2foG5UjVzS+Jvm5CAI9kf5AAj5a8kK4Ehu3heOiq9VtPf67UcP
 PD/iGmEGCS2/G4PcokQHrtNsTXAacAECoYwlI9Y0=
Received: from [10.137.0.35]
 (CPEac202e7c9cc3-CMac202e7c9cc0.cpe.net.cable.rogers.com [99.231.147.74]) by
 mx.zoho.eu with SMTPS id 1589986777008891.008836865654;
 Wed, 20 May 2020 16:59:37 +0200 (CEST)
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
 <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
 <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
 <1b76cd6a-c6a2-c9c9-1d8b-32a9a1dbc557@citrix.com>
 <657e7522-bd0f-bea3-7ce8-2f6c4ec72407@zohomail.eu>
 <dc1ef4b6-9406-b625-c157-6ebec2a6afda@citrix.com>
From: Elliot Killick <elliotkillick@zohomail.eu>
Message-ID: <325c716c-df62-d24c-2e48-3a100e84f48d@zohomail.eu>
Date: Wed, 20 May 2020 14:59:33 +0000
MIME-Version: 1.0
In-Reply-To: <dc1ef4b6-9406-b625-c157-6ebec2a6afda@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 2020-05-20 12:31, Andrew Cooper wrote:
> On 20/05/2020 12:46, Elliot Killick wrote:
>> On 2020-05-20 11:27, Andrew Cooper wrote:
>>> On 20/05/2020 12:20, Elliot Killick wrote:
>>>> On 2020-05-20 11:10, Andrew Cooper wrote:
>>>>> On 20/05/2020 11:33, Elliot Killick wrote:
>>>>>> Hello,
>>>>>>
>>>>>> Xen is crashing Windows 10 (64-bit) VMs consistently whenever IDA
>>>>>> Debugger
>>>>>> (https://www.hex-rays.com/products/ida/support/download_freeware/)
>>>>>> launches the Local Windows Debugger. The crash occurs when trying to
>>>>>> launch the debugger against any executable (e.g. calc.exe) right at the
>>>>>> time IDA says it is "Moving segment from <X address> to <Y address>".
>>>>>>
>>>>>> Tested on Windows 7, 8 and Linux as well but the bug is only triggered
>>>>>> on Windows 10. Happens whether or not IDA is running with administrator
>>>>>> privileges. No drivers/VM tools installed. Windows has a bug check code
>>>>>> of zero, leaves no memory dump, nothing in the logs from QEMU in Dom0,
>>>>>> the domain just powers off immediately leaving a record of the incident
>>>>>> in the hypervisor.log. So, it does appear to be a Xen issue. Modern
>>>>>> Intel CPU.
>>>>>>
>>>>>> Does anyone have some ideas on what may be causing this?
>>>>> What exact CPU do you have?  This looks exactly like the
>>>>> Haswell/Broadwell TSX errata.
>>>>>
>>>>> ~Andrew
>>>>>
>>>> i5-4590
>>> How about the output of `head /proc/cpuinfo` in dom0?
>>>
>>> ~Andrew
>>>
>> processor	: 0
>> vendor_id	: GenuineIntel
>> cpu family	: 6
>> model		: 60
>> model name	: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
>> stepping	: 3
>> microcode	: 0x27
>> cpu MHz		: 3299.926
>> cache size	: 6144 KB
>> physical id	: 0
>
> Ok, so the errata is one of HSM182/HSD172.
>
> Xen has workarounds for all of these.  However, I also see:
>
>> (XEN) ----[ Xen-4.8.5-15.fc25  x86_64  debug=n   Not tainted ]----
>
> which is an obsolete version of Xen these days.  It looks like these
> issues were first fixed in Xen 4.9, but you should upgrade to something
> rather newer.
>
> ~Andrew
>

Ah, so this is originally a CPU bug which Xen has had to patch over.

As for the Xen version, that's controlled by the "distribution" of Xen I
run, which is Qubes. To remedy this I could run the testing stream of
Qubes, which currently provides the latest version of Xen (4.13), but
that could bring its own set of problems.

Thank you for the help, Andrew!



From xen-devel-bounces@lists.xenproject.org Wed May 20 15:13:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 15:13:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbQPY-0004ct-Nc; Wed, 20 May 2020 15:13:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AW6E=7C=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbQPX-0004co-Ib
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 15:13:35 +0000
X-Inumbo-ID: 78abc6fb-9aac-11ea-aa37-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78abc6fb-9aac-11ea-aa37-12813bfff9fa;
 Wed, 20 May 2020 15:13:34 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 9I7pS4R0rMyDtZTOjbFVaS7Cv7j3HPVLB/JECRhE6Ttnogz5g9eh+Zdjx46xt5QIGbc+M+uww2
 8MXjII+TlH5drdANaxj9qkDj+nthPr6w7U1Io/jMY0IJeDoy7PG3tavcGs5gObcJOkdi8j/WqE
 TgfNFhyLWgzsU64hPb6EyQ+7ueTXz3KAdrVB7wreFXNfRHlCynKfW2Ow8S69NNAyv+R19eDgUR
 /51P3PSIUAl7P3vqV8A+XfmQpWcn+aYi4iEsD8xhGG9VCq27WMoGRH6o7wprKqCXgF6+79fdbZ
 uUU=
X-SBRS: 2.7
X-MesageID: 17993308
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="17993308"
Date: Wed, 20 May 2020 17:13:26 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: refine guest_mode()
Message-ID: <20200520151326.GM54375@Air-de-Roger>
References: <7b62d06c-1369-2857-81c0-45e2434357f4@suse.com>
 <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
 <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 10:56:26AM +0200, Jan Beulich wrote:
> On 18.05.2020 16:51, Roger Pau Monné wrote:
> > On Tue, Apr 28, 2020 at 08:30:12AM +0200, Jan Beulich wrote:
> >> On 27.04.2020 22:11, Andrew Cooper wrote:
> >>> On 27/04/2020 16:15, Jan Beulich wrote:
> >>>> On 27.04.2020 16:35, Andrew Cooper wrote:
> >>>>> On 27/04/2020 09:03, Jan Beulich wrote:
> >>>>>> --- a/xen/include/asm-x86/regs.h
> >>>>>> +++ b/xen/include/asm-x86/regs.h
> >>>>>> @@ -10,9 +10,10 @@
> >>>>>>      /* Frame pointer must point into current CPU stack. */                    \
> >>>>>>      ASSERT(diff < STACK_SIZE);                                                \
> >>>>>>      /* If not a guest frame, it must be a hypervisor frame. */                \
> >>>>>> -    ASSERT((diff == 0) || (r->cs == __HYPERVISOR_CS));                        \
> >>>>>> +    if ( diff < PRIMARY_STACK_SIZE )                                          \
> >>>>>> +        ASSERT(!diff || ((r)->cs == __HYPERVISOR_CS));                        \
> >>>>>>      /* Return TRUE if it's a guest frame. */                                  \
> >>>>>> -    (diff == 0);                                                              \
> >>>>>> +    !diff || ((r)->cs != __HYPERVISOR_CS);                                    \
> >>>>> The (diff == 0) already worried me before because it doesn't fail safe,
> >>>>> but this makes things more problematic.  Consider the case back when we
> >>>>> had __HYPERVISOR_CS32.
> >>>> Yes - if __HYPERVISOR_CS32 had ever been used for
> >>>> anything, it would have needed checking for here.
> >>>>
> >>>>> Guest mode is strictly "(r)->cs & 3".
> >>>> As long as CS (a) gets properly saved (it's a "manual" step for
> >>>> SYSCALL/SYSRET as well as #VMEXIT) and (b) didn't get clobbered. I
> >>>> didn't write this code, I don't think, so I can only guess that
> >>>> there were intentions behind this along these lines.
> >>>
> >>> Hmm - the VMExit case might be problematic here, due to the variability
> >>> in the poison used.
> >>
> >> "Variability" is an understatement - there's no poisoning at all
> >> in release builds afaics (and to be honest it seems somewhat
> >> pointless to write the same values over and over again in debug
> >> mode). With this, ...
> >>
> >>>>> Everything else is expectations about how things ought to be laid out,
> >>>>> but for safety in release builds, the final judgement should not depend
> >>>>> on the expectations evaluating true.
> >>>> Well, I can switch to a purely CS.RPL based approach, as long as
> >>>> we're happy to live with the possible downside mentioned above.
> >>>> Of course this would then end up being a more intrusive change
> >>>> than originally intended ...
> >>>
> >>> I'd certainly prefer to go for something which is more robust, even if
> >>> it is a larger change.
> >>
> >> ... what's your suggestion? Basing on _just_ CS.RPL obviously won't
> >> work. Not even if we put in place the guest's CS (albeit that
> >> somewhat depends on the meaning we assign to the macro's returned
> >> value).
> > 
> > Just to check I'm following this correctly, using CS.RPL won't work
> > for HVM guests, as HVM can legitimately use a RPL of 0 (which is not
> > the case for PV guests). Doesn't the same apply to the usage of
> > __HYPERVISOR_CS? (A HVM guest could also use the same code segment
> > value as Xen?)
> 
> Of course (and in particular Xen as a guest would). My "Basing on
> _just_ CS.RPL" wasn't meant to exclude the rest of the selector,
> but to contrast this to the case where "diff" also is involved in
> the calculation (which looks to be what Andrew would prefer to see
> go away).
> 
> >> Using current inside the macro to determine whether the
> >> guest is HVM would also seem fragile to me - there are quite a few
> >> uses of guest_mode(). Which would leave passing in a const struct
> >> vcpu * (or domain *), requiring to touch all call sites, including
> >> Arm's.
> > 
> > Fragile or slow? Are there corner cases where guest_mode is used where
> > current is not reliable?
> 
> This question is why I said "there are quite a few uses of
> guest_mode()" - auditing them all is just one side of the coin.
> The other is to prevent a new use appearing in the future that
> can be reached by a call path in the time window where a lazy
> context switch is pending (i.e. when current has already been
> updated, but register state hasn't been yet).
> 
> >> Compared to this it would seem to me that the change as presented
> >> is a clear improvement without becoming overly large of a change.
> > 
> > Using the cs register is already part of the guest_mode code, even if
> > just in debug mode, hence I don't see it as a regression from existing
> > code. It however feels weird to me that the reporter of the issue
> > doesn't agree with the fix, and hence would like to know if there's a
> > way we could achieve consensus on this.
> 
> Indeed. I'd be happy to make further adjustments, if only I had a
> clear understanding of what is wanted (or why leaving things as
> they are is better than a little bit of an improvement).

OK, so I think I'm starting to understand this all. Sorry it's taken
me so long. So it's my understanding that diff != 0 can only happen in
Xen context, or when on an IST that has a different stack (i.e. MCE, NMI
or DF according to current.h) and running in PV mode?

Wouldn't it then be fine to use (r)->cs & 3 to check whether we are in
guest mode if diff != 0? I see a lot of other places where cs & 3 is
already used to that effect AFAICT (like entry.S).
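[A minimal standalone sketch of the CPL test proposed above — hypothetical
types and selector values, not Xen's actual guest_mode() macro; and note,
per the earlier part of the thread, that an HVM guest can legitimately run
at RPL 0, so this check on its own is only meaningful for PV frames:]

```c
#include <stdbool.h>
#include <stdint.h>

/* Toy stand-in for the saved register frame; only %cs matters here. */
struct fake_regs {
    uint16_t cs;   /* code segment selector saved on entry to Xen */
};

/* The low two bits of a selector are its Requested Privilege Level.
 * Xen's own code segments have RPL 0, while PV guest segments have
 * RPL 1 (32-bit kernel) or RPL 3 (64-bit kernel and all user space). */
static bool frame_is_guest(const struct fake_regs *r)
{
    return (r->cs & 3) != 0;
}
```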

Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 20 15:27:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 15:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbQdE-0005b5-2c; Wed, 20 May 2020 15:27:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PdaM=7C=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jbQdC-0005b0-ME
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 15:27:42 +0000
X-Inumbo-ID: 720c3044-9aae-11ea-b9cf-bc764e2007e4
Received: from mail-ed1-x542.google.com (unknown [2a00:1450:4864:20::542])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 720c3044-9aae-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 15:27:42 +0000 (UTC)
Received: by mail-ed1-x542.google.com with SMTP id s19so3453106edt.12
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 08:27:42 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=1IQTSrcX8iPeYBi+ixiL6DxQngvP7b5n2Cxfe27WGoo=;
 b=lifyw1BM2ARewwsMdMMhdLz9ReEGuBTtb0NAcRNPUim6TBx+hUFUOSV5Iwm0u0vL0J
 k5EfwFq8+ED8Ru3zMjwcSM+POferALpD6JRYYXrRRz2DW94KZzsmlDjhBql2OCaKWsUp
 XOS5C+1NcBT24Wce5I9jynpHY9aCq8maP0nG2ayQVHNv7H5zotmUSfaVHNnlRY+/QQIj
 1hDfnoNpgJgFekZQneqwFSPZaPBmGGFPqkBBvYZxF7vlHeia2a3sWZmd2/cZwmHoDSAK
 lBayiJHE4cwTjhEFBkVMT5ux6WsWXlUhMvTQolMuo2doRf6KX+yyIK/G4PvHGRGsMIEP
 yYwA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=1IQTSrcX8iPeYBi+ixiL6DxQngvP7b5n2Cxfe27WGoo=;
 b=R7v/3LroEKrY0urOk9sVdZyZZm/Eq+dcgCRl2kueT8jPD8SbolZVI/dq5se8xou5s7
 CSFAK8shFHcG7tA7+LISMVCFFCJepmRu94wjqEiAjjxvibEbeKf3kg8mOc+3CnHefAjc
 +6UkKGVR4G6E0CFGrK2K7eqxRYU+jvc76Gzx1b+Sn37gU6MXNpOpuI8C4wzjNpG/TSgS
 zKk0C1c/742QZLub6z7HZU9Uo0FeDr2OKGfJRpFFivB64S87V/f8yEJbgOfN52z1YFOs
 VZsGMxvFrH8xa26TYlZd+696MlN0bsKLb8XaEVCKMiKUV9vY3hSqbxDrZXAFkIwOQDbD
 YjQQ==
X-Gm-Message-State: AOAM53173Jy4wWlZQSParfZ1zJ2Y6h/EwEtXDjWGv5fJtNJRQOHWeU1Q
 +GiSyKSj41PgZa0b/vYssT9rlk08OCk=
X-Google-Smtp-Source: ABdhPJxGzGyQ0AbBoyFbmjOkA2WpCIcHHmBTM6D1FGvrLSYRg2v7E38jZGg9fBm+ZU2RtWf5su+Ydw==
X-Received: by 2002:a50:a7e3:: with SMTP id i90mr4059419edc.6.1589988460676;
 Wed, 20 May 2020 08:27:40 -0700 (PDT)
Received: from mail-wr1-f44.google.com (mail-wr1-f44.google.com.
 [209.85.221.44])
 by smtp.gmail.com with ESMTPSA id a15sm2131427ejr.90.2020.05.20.08.27.39
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 20 May 2020 08:27:39 -0700 (PDT)
Received: by mail-wr1-f44.google.com with SMTP id k13so3609387wrx.3
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 08:27:39 -0700 (PDT)
X-Received: by 2002:adf:ecc3:: with SMTP id s3mr4991755wro.301.1589988459356; 
 Wed, 20 May 2020 08:27:39 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1589561218.git.tamas@tklengyel.com>
 <1168bacc61f655f559c236cdf63a6b2beccd4d6b.1589561218.git.tamas@tklengyel.com>
 <28e50e15-410e-096d-51f1-e304c9ef8cdb@suse.com>
In-Reply-To: <28e50e15-410e-096d-51f1-e304c9ef8cdb@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 20 May 2020 09:27:02 -0600
X-Gmail-Original-Message-ID: <CABfawhmsbJJ4-TwCgeVhA7yw+v_YstR3RyyjYJfV17KwYm4=Bg@mail.gmail.com>
Message-ID: <CABfawhmsbJJ4-TwCgeVhA7yw+v_YstR3RyyjYJfV17KwYm4=Bg@mail.gmail.com>
Subject: Re: [PATCH 3/3] xen/vm_event: Add safe to disable vm_event
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 7:45 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 15.05.2020 18:53, Tamas K Lengyel wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -563,15 +563,41 @@ void hvm_do_resume(struct vcpu *v)
> >          v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
> >      }
> >
> > -    if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
> > +    if ( unlikely(v->arch.vm_event) )
> >      {
> > -        struct x86_event info;
> > +        struct domain *d = v->domain;
>
> const
>
> > +        if ( v->arch.monitor.next_interrupt_enabled )
> > +        {
> > +            struct x86_event info;
> > +
> > +            if ( hvm_get_pending_event(v, &info) )
> > +            {
> > +                hvm_monitor_interrupt(info.vector, info.type, info.error_code,
> > +                                      info.cr2);
> > +                v->arch.monitor.next_interrupt_enabled = false;
> > +            }
> > +        }
> >
> > -        if ( hvm_get_pending_event(v, &info) )
> > +        if ( d->arch.monitor.safe_to_disable )
> >          {
> > -            hvm_monitor_interrupt(info.vector, info.type, info.error_code,
> > -                                  info.cr2);
> > -            v->arch.monitor.next_interrupt_enabled = false;
> > +            struct vcpu *check_vcpu;
>
> const again, requiring a respective adjustment to patch 2.
>
> > +            bool pending_op = false;
> > +
> > +            for_each_vcpu ( d, check_vcpu )
> > +            {
> > +                if ( vm_event_check_pending_op(check_vcpu) )
> > +                {
> > +                    pending_op = true;
> > +                    break;
> > +                }
> > +            }
> > +
> > +            if ( !pending_op )
> > +            {
> > +                hvm_monitor_safe_to_disable();
>
> This new function returns bool without the caller caring about the
> return value.

Yeah, there is actually nothing to be done if the event can't be sent
for whatever reason; I guess I'll just turn it into void.
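[The per-domain scan in the quoted patch can be pictured with a toy
reimplementation — hypothetical types and names standing in for
struct vcpu, for_each_vcpu and vm_event_check_pending_op():]

```c
#include <stdbool.h>
#include <stddef.h>

/* Toy vCPU list: each element records whether a vm_event op is pending. */
struct toy_vcpu {
    bool pending_op;
    struct toy_vcpu *next;
};

/* Mirrors the loop in the patch: the domain is only "safe to disable"
 * monitoring once no vCPU has an outstanding vm_event operation. */
static bool any_pending_op(const struct toy_vcpu *v)
{
    for ( ; v != NULL; v = v->next )
        if (v->pending_op)
            return true;
    return false;
}
```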

Tamas


From xen-devel-bounces@lists.xenproject.org Wed May 20 15:49:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 15:49:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbQxg-0007Kq-UG; Wed, 20 May 2020 15:48:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbQxf-0007Kl-Al
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 15:48:51 +0000
X-Inumbo-ID: 65c509e0-9ab1-11ea-aa41-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 65c509e0-9ab1-11ea-aa41-12813bfff9fa;
 Wed, 20 May 2020 15:48:50 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: kFHHhQQZkZ/WRdHAoJXow45TVVp3YTvzq7jcBvb/6nLmL25Ef1TF4Om6GeXaBj1qJC+BsaDJHW
 Kz6fhN2kv+1TUEv2HMTeATeWexUVveTeRURJ50RlbjhE390YwLrXcfz8YouDvrzI7cg9ShBYPS
 dxhzqEzhzufNFWFvLOwVdZBmEk5uHfa3T8c28/wWDEHun29/wefreFOS6/KuFlJmjdQkh6xRex
 SCqYZ9eBW5umhyv7kP5y0VdhOUvWtl/0nJGlf5xhYmmBOc0qM0lL4Qr0qLcCaIlmkmPOoG6u7n
 pbw=
X-SBRS: 2.7
X-MesageID: 18367733
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18367733"
Subject: Re: [PATCH] x86/traps: Rework #PF[Rsvd] bit handling
To: Jan Beulich <jbeulich@suse.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <2783ddc5-9919-3c97-ba52-2f734e7d72d5@suse.com>
 <62d4999b-7db3-bac6-28ed-bb636347df38@citrix.com>
 <3088e420-a72a-1b2d-144f-115610488418@suse.com>
 <1750cbe5-ef48-6dc7-e372-cbc0a8cbc9cc@citrix.com>
 <4a5c33c0-9245-126b-123e-3980a9135190@suse.com>
 <1808df24-ecde-97c6-c296-ecf385260395@citrix.com>
 <90bf918e-3b87-b1be-344f-80a1bd6803a8@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <43539b81-7824-9c29-acbe-a1edaf562523@citrix.com>
Date: Wed, 20 May 2020 16:48:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <90bf918e-3b87-b1be-344f-80a1bd6803a8@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>, Wei
 Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 08:48, Jan Beulich wrote:
> On 19.05.2020 20:00, Andrew Cooper wrote:
>> On 19/05/2020 17:09, Jan Beulich wrote:
>>> In any event there would be 12 bits to reclaim from the up
>>> pointer - it being a physical address, there'll not be more
>>> than 52 significant bits.
>> Right, but for L1TF safety, the address bits in the PTE must not be
>> cacheable.
> So if I understand this right, your response was only indirectly
> related to what I said: You mean that no matter whether we find
> a way to store full-width GFNs, SH_L1E_MMIO_MAGIC can't have
> arbitrarily many set bits dropped.

Yes

> On L1TF vulnerable hardware,
> that is (i.e. in principle the constant could become a variable
> to be determined at boot).

The only thing which can usefully be done at runtime is to disable the
fastpath.

If cacheable memory overlaps with the used address bits, there are no
safe values to use.

>
>> Currently, on fully populated multi-socket servers, the MMIO fastpath
>> relies on the top 4G of address space not being cacheable, which is the
>> safest we can reasonably manage.  Extending this by a nibble takes us to
>> 16G which is not meaningfully less safe.
> That's 64G (36 address bits), isn't it?

Yes it is.  I can't count.
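[The arithmetic being corrected here, as an illustrative helper only
(the function name is made up; this is not code from the shadow code):
with 32 usable address bits the window the PTE address field can alias
is 4G, and freeing one more nibble gives 36 bits, i.e. 64G rather than
16G:]

```c
#include <stdint.h>

/* Span of address space reachable through `bits` free address bits. */
static uint64_t aliasable_span(unsigned int bits)
{
    return 1ULL << bits;
}
```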

> Looking at
> l1tf_calculations(), I'd be worried in particular Penryn /
> Dunnington might not support more than 36 address bits (I don't
> think I have anywhere to check). Even if it was 38, 39, or 40
> bits, 64G becomes a not insignificant part of the overall 256G /
> 512G / 1T address space. Then again the top quarter assumption
> in l1tf_calculations() would still be met in this latter case.

I'm honestly not too worried.  Intel has ceased supporting anything
older than SandyBridge, and there are other unfixed speculative security
issues.

Anyone using these processors has bigger problems.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 15:56:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 15:56:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbR4k-0008Ds-Qu; Wed, 20 May 2020 15:56:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=txLX=7C=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jbR4j-0008Dn-1z
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 15:56:09 +0000
X-Inumbo-ID: 6ac35e30-9ab2-11ea-aa45-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ac35e30-9ab2-11ea-aa45-12813bfff9fa;
 Wed, 20 May 2020 15:56:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 88B60AA4F;
 Wed, 20 May 2020 15:56:09 +0000 (UTC)
Subject: Re: [PATCH] VT-x: extend LBR Broadwell errata coverage
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <df6e8dad-b4c0-0821-46eb-e4aa86f8ccfa@suse.com>
 <e107f97b-4bb7-31ee-20d1-ddf8f7e00c21@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c193aaec-cebc-ca0b-88f9-aabfca6b988a@suse.com>
Date: Wed, 20 May 2020 17:56:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e107f97b-4bb7-31ee-20d1-ddf8f7e00c21@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.2020 16:07, Andrew Cooper wrote:
> On 20/05/2020 13:52, Jan Beulich wrote:
>> @@ -2895,15 +2897,26 @@ static void __init lbr_tsx_fixup_check(v
>>  static void __init bdf93_fixup_check(void)
> 
> Seeing as this is no longer just BDF93, how about ler_tsx_fixup_check() ?

I did consider renaming, and didn't do so just because this would
grow the patch size quite a bit. I'm fine doing so, but with the
name you suggest, is this one really as directly TSX-related as the
other one? I had thought of something like lbr_bdw_fixup_check().

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 20 15:56:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 15:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbR57-0008Fg-2g; Wed, 20 May 2020 15:56:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AW6E=7C=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbR55-0008FS-Cy
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 15:56:31 +0000
X-Inumbo-ID: 78704b60-9ab2-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78704b60-9ab2-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 15:56:30 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: huEG3xWiDCZyS+UsXq1qLYEvn4On/A+8tuklwQxifEsjCpsyoR4RgKAybu41l1kw/ESMuBa/n2
 uWRedc87mXLU3kj/MHcPJcayn5uq93XrsfkCKd6dt8spKJDHxn7P/ChTopgPqJkawdMY5QS8vc
 6vVNOTBRw93PCkozaEQS7lI0nsVN6RSCqItx3zHc73NUXg1QMMieRe3/YzxYfNHQC9bPd0IuaC
 nroz7Gu/0CtK/0/O9zjLiVk2Ds1W4I3E83aMwDprihz/GWitkr6eyACPiBgwCoBaZX4epMKQg8
 TIA=
X-SBRS: 2.7
X-MesageID: 18368514
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18368514"
Date: Wed, 20 May 2020 17:56:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Ian Jackson <ian.jackson@citrix.com>
Subject: Re: [PATCH] tools/libxengnttab: correct size of allocated memory
Message-ID: <20200520155621.GN54375@Air-de-Roger>
References: <20200520083501.31704-1-jgross@suse.com>
 <24261.17303.413916.29534@mariner.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <24261.17303.413916.29534@mariner.uk.xensource.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 03:49:59PM +0100, Ian Jackson wrote:
> Juergen Gross writes ("[PATCH] tools/libxengnttab: correct size of allocated memory"):
> > The size of the memory allocated for the IOCTL_GNTDEV_MAP_GRANT_REF
> > ioctl() parameters is calculated wrong, which results in too much
> > memory allocated.
> 
> Added Roger to CC.
> 
> Firstly,
> 
> Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>

For the FreeBSD bits:

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> 
> Thank you.
> 
> 
> But, looking at this code, why on earth is it done the way it is?
> 
> The FreeBSD code checks to see if it's less than a page and if so uses
> malloc and otherwise uses mmap !  Why not unconditionally use malloc ?
> 
> Likewise, the Linux code has its own mmap-based memory-obtainer.  ISTM
> that malloc is probably going to be better.  Often it will be able to
> give out even a substantial amount without making a syscall.
> 
> Essentially, we have two (similar but not identical) tiny custom
> memory allocators here.  Also, the Linux and FreeBSD code are
> remarkably similar which bothers me.

Right. This is due to the FreeBSD file being mostly a clone of the
Linux one. I agree the duplication could be abstracted away.

I really have no idea why malloc or mmap is picked the way it is;
maybe at some point requesting regions > PAGE_SIZE directly via mmap
was considered faster?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 20 16:40:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 16:40:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbRlA-0003tt-LL; Wed, 20 May 2020 16:40:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X9kX=7C=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jbRl9-0003to-J5
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 16:39:59 +0000
X-Inumbo-ID: 89ff5119-9ab8-11ea-aa4a-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89ff5119-9ab8-11ea-aa4a-12813bfff9fa;
 Wed, 20 May 2020 16:39:57 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: vic9C3J8syvj01Av36Yw7bN1hPC785ufRTbsmxUCFdUpQp/Lx1JyLyuiUSd4szXY4rOrWTtJ7E
 cnR9LsQv08y7meAPD+9gcM4Sx55Cee40gQZdnjkC8xyiJjbiCacP7iqtHkPYtGQP2vOcvDfYhk
 hlG9yRGH8NVeLvQgLna+c2SaueOJ1PUE9WQ76d3z80Eo5zU+MAlTcc5yowBpc8LHQ88xwIKKri
 MUauDF9t+sad6SJkB2bjqS0SvBZbwLO8YEzB4HUR2R0pOr8HP9y1tVZNY18qBAfS62bdfq8Rw5
 vVE=
X-SBRS: 2.7
X-MesageID: 18273329
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18273329"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [XEN PATCH] tools/xenstore: mark variable in header as extern
Date: Wed, 20 May 2020 17:39:42 +0100
Message-ID: <20200520163942.131919-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This patch fixes the "multiple definition of `xprintf'" (and `xgt_handle')
build errors seen with GCC 10.1.0.

These are the errors reported:
    gcc xs_tdb_dump.o utils.o tdb.o talloc.o      -o xs_tdb_dump
    /usr/bin/ld: utils.o:./utils.h:27: multiple definition of `xprintf'; xs_tdb_dump.o:./utils.h:27: first defined here
    [...]
    gcc xenstored_core.o xenstored_watch.o xenstored_domain.o xenstored_transaction.o xenstored_control.o xs_lib.o talloc.o utils.o tdb.o hashtable.o xenstored_posix.o      -lsystemd   -Wl,-rpath-link=... ../libxc/libxenctrl.so -lrt  -o xenstored
    /usr/bin/ld: xenstored_watch.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
    /usr/bin/ld: xenstored_domain.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
    /usr/bin/ld: xenstored_transaction.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
    /usr/bin/ld: xenstored_control.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
    /usr/bin/ld: xenstored_posix.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here

A difference I noticed compared with an earlier version of the build chain
is that before, I had:
    $ nm xs_tdb_dump.o | grep xprintf
    0000000000000008 C xprintf
And now, it's:
    0000000000000000 B xprintf
With the patch applied, the symbol is no longer present in xs_tdb_dump.o.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 tools/xenstore/utils.h          | 2 +-
 tools/xenstore/xenstored_core.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/xenstore/utils.h b/tools/xenstore/utils.h
index 522c3594a2ba..6a1b5de9bdc1 100644
--- a/tools/xenstore/utils.h
+++ b/tools/xenstore/utils.h
@@ -24,7 +24,7 @@ static inline bool strends(const char *a, const char *b)
 void barf(const char *fmt, ...) __attribute__((noreturn));
 void barf_perror(const char *fmt, ...) __attribute__((noreturn));
 
-void (*xprintf)(const char *fmt, ...);
+extern void (*xprintf)(const char *fmt, ...);
 
 #define eprintf(_fmt, _args...) xprintf("[ERR] %s" _fmt, __FUNCTION__, ##_args)
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 56a279cfbb47..c4c32bc88f0c 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -204,7 +204,7 @@ void finish_daemonize(void);
 /* Open a pipe for signal handling */
 void init_pipe(int reopen_log_pipe[2]);
 
-xengnttab_handle **xgt_handle;
+extern xengnttab_handle **xgt_handle;
 
 int remember_string(struct hashtable *hash, const char *str);
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Wed May 20 16:52:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 16:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbRwj-0005Ve-Rg; Wed, 20 May 2020 16:51:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbRwi-0005VY-LU
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 16:51:56 +0000
X-Inumbo-ID: 3649bdb8-9aba-11ea-aa4d-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3649bdb8-9aba-11ea-aa4d-12813bfff9fa;
 Wed, 20 May 2020 16:51:55 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LijN6eRsVk4qRDzaB7Cgb3Y+TEBp7oqqgC4EbXBnSwREU68jJYVQcphcAH/9A2p5QFEwV+DolS
 pMhk+65xjIeCPZSrhiCzVFGxGz0djXnPkZCeS6l024ecfS/Y9WcDn1diiLmauts5molnya+K5q
 np+TJTKs9ILt0odvu63bXafPvYpWY+EHxOHyqHYg7bhUxePviG3OBh6kPIoPcHWgsXN4NId9Bl
 ycUkP1uuc+0UVC/4y+T32Us93nfGHvKfJZ584+sYjVpKcTOzRrMxgMgCcnw9sXI1onq0pN7mGN
 boQ=
X-SBRS: 2.7
X-MesageID: 18274377
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18274377"
Subject: Re: [PATCH] VT-x: extend LBR Broadwell errata coverage
To: Jan Beulich <jbeulich@suse.com>
References: <df6e8dad-b4c0-0821-46eb-e4aa86f8ccfa@suse.com>
 <e107f97b-4bb7-31ee-20d1-ddf8f7e00c21@citrix.com>
 <c193aaec-cebc-ca0b-88f9-aabfca6b988a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7f5f70ff-7f30-b7db-eb51-c2f2df9ce58a@citrix.com>
Date: Wed, 20 May 2020 17:51:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <c193aaec-cebc-ca0b-88f9-aabfca6b988a@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Kevin
 Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 16:56, Jan Beulich wrote:
> On 20.05.2020 16:07, Andrew Cooper wrote:
>> On 20/05/2020 13:52, Jan Beulich wrote:
>>> @@ -2895,15 +2897,26 @@ static void __init lbr_tsx_fixup_check(v
>>>  static void __init bdf93_fixup_check(void)
>> Seeing as this is no longer just BDF93, how about ler_tsx_fixup_check() ?
> I did consider renaming, and didn't do so just because this would
> grow the patch size quite a bit.

I don't see that as a problem.

> I'm fine doing so, but with the
> name you suggest, is this one really as directly TSX related as the
> other one? I had thought of something like lbr_bdw_fixup_check().

The erratum's wording doesn't mention TSX, but the breakage manifests in
the same way, with bits 61 and 62 clear but hardware expecting to see a
canonicalised value on restore.

Also, it is very specifically LER-from which gets clobbered, rather than
any of the regular LBR registers.

I'm not overly fussed what the naming is, but it oughtn't to include
bdf93 any more, now the scope of the workaround has been extended.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 16:53:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 16:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbRxu-0005cP-7N; Wed, 20 May 2020 16:53:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbRxs-0005cC-KL
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 16:53:08 +0000
X-Inumbo-ID: 5ceab6e8-9aba-11ea-aa4d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5ceab6e8-9aba-11ea-aa4d-12813bfff9fa;
 Wed, 20 May 2020 16:53:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Oa/VsOHrITyWoGfbd8JGX2ESFIMzrDfkIrrrZ4HWdU8=; b=or8E8LSDSKSC1BNJjBGmzIC9M
 50jD1bGM0xQ3NxIhzo7PhhLTqLeaMlPFijdWAi4apvAPEOATScMkkBrgsuKr2FpMiPqhCULZN9agC
 Ir4XgVPW8iT6oLUR9yFFg7YV/4npwhSbzJ+6MJSUUuK0vMhgTqDFCiD4EBIiNC6LAi9yk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbRxk-00076x-0X; Wed, 20 May 2020 16:53:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbRxj-0000oW-P3; Wed, 20 May 2020 16:52:59 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbRxj-0003Cw-OQ; Wed, 20 May 2020 16:52:59 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150267: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 xen-unstable:test-armhf-armhf-xl-rtds:guest-stop:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=e235fa2794c95365519eac714d6ea82f8e64752e
X-Osstest-Versions-That: xen=664e1bc12f8658da124a4eff7a8f16da073bd47f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 16:52:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150267 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150267/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150227
 test-armhf-armhf-xl-rtds     15 guest-stop               fail REGR. vs. 150227

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150227
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150227
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150227
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150227
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150227
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150227
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e
baseline version:
 xen                  664e1bc12f8658da124a4eff7a8f16da073bd47f

Last test of basis   150227  2020-05-18 01:51:25 Z    2 days
Failing since        150234  2020-05-18 18:06:10 Z    1 days    4 attempts
Testing same since   150267  2020-05-20 04:13:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Eric Shelton <eshelton@pobox.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Olaf Hering <olaf@aepfle.de>
  Simon Gaiser <simon@invisiblethingslab.com>
  Wei Liu <wei.liu2@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   664e1bc12f..e235fa2794  e235fa2794c95365519eac714d6ea82f8e64752e -> master


From xen-devel-bounces@lists.xenproject.org Wed May 20 16:54:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 16:54:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbRz8-0005jl-Ma; Wed, 20 May 2020 16:54:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbRz8-0005je-7N
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 16:54:26 +0000
X-Inumbo-ID: 8f7cb715-9aba-11ea-aa4d-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f7cb715-9aba-11ea-aa4d-12813bfff9fa;
 Wed, 20 May 2020 16:54:25 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Lvc6ObjI9N98SWdsow66xDUKaclVoU264U7YC2yYbq/qGZNFyV9pjHxRFhAwfpcXP2a6VR0BBi
 ljlol3mxhHb+EhP8U4JOu0sZt0VJoluONgXOvXFTcthD8wHmaBbjxCQYJRmLrJ50FyzLoe5FCn
 DGiVnLQLER8+ZXjTk8Zl/SUGRR77yWb+ZaPKhleoJ5GleGMFHmeR6Mjhopm614Oatr0AG/pvi3
 CES+dFpJXPWO0BynbG85PR1nowkcoY/wei0iAWnjn2ZjguY3fDDfYdDsdgqA0hQT4wrhLEv/os
 go0=
X-SBRS: 2.7
X-MesageID: 18375230
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18375230"
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Elliot Killick <elliotkillick@zohomail.eu>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
 <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
 <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
 <1b76cd6a-c6a2-c9c9-1d8b-32a9a1dbc557@citrix.com>
 <657e7522-bd0f-bea3-7ce8-2f6c4ec72407@zohomail.eu>
 <dc1ef4b6-9406-b625-c157-6ebec2a6afda@citrix.com>
 <325c716c-df62-d24c-2e48-3a100e84f48d@zohomail.eu>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c09d62a4-6362-8eb9-a3b0-79c429850db6@citrix.com>
Date: Wed, 20 May 2020 17:53:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <325c716c-df62-d24c-2e48-3a100e84f48d@zohomail.eu>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 15:59, Elliot Killick wrote:
> On 2020-05-20 12:31, Andrew Cooper wrote:
>> On 20/05/2020 12:46, Elliot Killick wrote:
>>> processor	: 0
>>> vendor_id	: GenuineIntel
>>> cpu family	: 6
>>> model		: 60
>>> model name	: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
>>> stepping	: 3
>>> microcode	: 0x27
>>> cpu MHz		: 3299.926
>>> cache size	: 6144 KB
>>> physical id	: 0
>> Ok, so the erratum is one of HSM182/HSD172.
>>
>> Xen has workarounds for all of these.  However, I also see:
>>
>>> (XEN) ----[ Xen-4.8.5-15.fc25  x86_64  debug=n   Not tainted ]----
>> which is an obsolete version of Xen these days.  It looks like these
>> issues were first fixed in Xen 4.9, but you should upgrade to something
>> rather newer.
>>
>> ~Andrew
>>
> Ah, so this is originally a CPU bug which Xen has had to patch over.

Yes.  It was an unintended consequence of Intel being forced to disable
TSX in most of their Haswell/Broadwell CPUs.

> As for the Xen version, that's controlled by the "distribution" of Xen I
> run which is Qubes. To remedy this I could run the testing stream of
> Qubes which currently provides the latest version of Xen (4.13) but that
> could bring its own set of problems.

Ah, in which case Qubes will probably consider backporting the fixes. 
Open a bug with them, and I can probably point out a minimum set of
backports to make it work.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 17:00:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 17:00:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbS4n-00076i-PA; Wed, 20 May 2020 17:00:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4N77=7C=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbS4m-00076d-Cj
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 17:00:16 +0000
X-Inumbo-ID: 5f7d14ea-9abb-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5f7d14ea-9abb-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 17:00:14 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: UsxlX1X7VTEfq6+h6MalDh9nt0ZhtNSXX5fYOoHqBiJc2h+85Kl+KLUDe4XrnxNh+HPoBwTZRo
 IOi0c+7PPnW3bU+6auttrhQg/h9WUeKA7WF1Z37pZyYDWQP3rTAjHO4DK05YqsM2Ep8PLbT5pX
 T7r/unLZOCZni1Qvpk+/97nfvDlVox9Q5BxuNocjKbItmqimxAzDRJaSkD6ZbpEadpgFkfs/p5
 0c8rk7KgAtHYdajo3UoQdrE+CWfPSBO2JCOHmcd3m89H/5M7hyW0Sb4WFiYzm6atuByY2weEdI
 C/Q=
X-SBRS: 2.7
X-MesageID: 18293887
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18293887"
Subject: Re: [XEN PATCH] tools/xenstore: mark variable in header as extern
To: Anthony PERARD <anthony.perard@citrix.com>,
 <xen-devel@lists.xenproject.org>
References: <20200520163942.131919-1-anthony.perard@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d13a7f0b-8d19-9f11-6f52-adde304b3c07@citrix.com>
Date: Wed, 20 May 2020 17:59:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200520163942.131919-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 17:39, Anthony PERARD wrote:
> This patch fixes the "multiple definition of `xprintf'" (or xgt_handle)
> build error with GCC 10.1.0.
>
> These are the errors reported:
>     gcc xs_tdb_dump.o utils.o tdb.o talloc.o      -o xs_tdb_dump
>     /usr/bin/ld: utils.o:./utils.h:27: multiple definition of `xprintf'; xs_tdb_dump.o:./utils.h:27: first defined here
>     [...]
>     gcc xenstored_core.o xenstored_watch.o xenstored_domain.o xenstored_transaction.o xenstored_control.o xs_lib.o talloc.o utils.o tdb.o hashtable.o xenstored_posix.o      -lsystemd   -Wl,-rpath-link=... ../libxc/libxenctrl.so -lrt  -o xenstored
>     /usr/bin/ld: xenstored_watch.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
>     /usr/bin/ld: xenstored_domain.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
>     /usr/bin/ld: xenstored_transaction.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
>     /usr/bin/ld: xenstored_control.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
>     /usr/bin/ld: xenstored_posix.o:./xenstored_core.h:207: multiple definition of `xgt_handle'; xenstored_core.o:./xenstored_core.h:207: first defined here
>
> A difference that I noticed with an earlier version of the build chain
> is that before, I had:
>     $ nm xs_tdb_dump.o | grep xprintf
>     0000000000000008 C xprintf
> And now, it's:
>     0000000000000000 B xprintf
> With the patch applied, the symbol isn't in xs_tdb_dump.o anymore.
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Ah - this will be a side effect of defaulting to -fno-common now.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed May 20 17:18:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 17:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbSMZ-0008DH-AD; Wed, 20 May 2020 17:18:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AW6E=7C=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbSMX-0008D6-Dt
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 17:18:37 +0000
X-Inumbo-ID: f0278f8c-9abd-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0278f8c-9abd-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 17:18:36 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uweaNdcDgBW1fhKufj7EQpRybnowEQbPy/Dd5z0s3f7GtfpVb3JEo+OWOm9+ff292dZyeXCG7M
 OSX3MZNsqFxMv+o8LQm1+kboYdlOSvaBoJpJslzvbH8avixXjnKbOSb/Q7RiKCJT2WSl/aAbfs
 89460la65joHaGxszw58o1TXjDEBCKcOQhaMvTt+xMeSYdLChf4uLPaX8DYHggTBQXsVFBt509
 c4cEJsqd8KMZqqwFA1TBHeb7P94LfJ5uIomgNzWu5u/P2NFLYIj0iEttZ66BdmgZWUCX9zGk3N
 bW4=
X-SBRS: 2.7
X-MesageID: 18377617
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18377617"
Date: Wed, 20 May 2020 19:18:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
Message-ID: <20200520171829.GO54375@Air-de-Roger>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
 <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
 <20200520102805.GK54375@Air-de-Roger>
 <0e97e3af-b66e-4924-a76c-9e33cdd1a726@suse.com>
 <20200520114327.GL54375@Air-de-Roger>
 <d0a15359-339f-6edd-034c-cd6385e929d1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d0a15359-339f-6edd-034c-cd6385e929d1@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 03:12:25PM +0200, Jan Beulich wrote:
> On 20.05.2020 13:43, Roger Pau Monné wrote:
> > On Wed, May 20, 2020 at 12:57:27PM +0200, Jan Beulich wrote:
> >> On 20.05.2020 12:28, Roger Pau Monné wrote:
> >>> On Wed, May 20, 2020 at 12:17:15PM +0200, Jan Beulich wrote:
> >>>> On 20.05.2020 11:31, Roger Pau Monné wrote:
> >>>>> On Wed, May 20, 2020 at 10:31:38AM +0200, Jan Beulich wrote:
> >>>>>> On 14.05.2020 16:05, Roger Pau Monné wrote:
> >>>>>>> On Mon, Jul 15, 2019 at 02:39:04PM +0000, Jan Beulich wrote:
> >>>>>>>> @@ -251,6 +255,10 @@ boot/mkelf32: boot/mkelf32.c
> >>>>>>>>   efi/mkreloc: efi/mkreloc.c
> >>>>>>>>   	$(HOSTCC) $(HOSTCFLAGS) -g -o $@ $<
> >>>>>>>>   
> >>>>>>>> +nocov-y += hweight.o
> >>>>>>>> +noubsan-y += hweight.o
> >>>>>>>> +hweight.o: CFLAGS += $(foreach reg,cx dx si 8 9 10 11,-ffixed-r$(reg))
> >>>>>>>
> >>>>>>> Why not use clobbers in the asm to list the scratch registers? Is it
> >>>>>>> that much expensive?
> >>>>>>
> >>>>>> The goal is to disturb the call sites as little as possible. There's
> >>>>>> no point avoiding the scratch registers when no call is made (i.e.
> >>>>>> when the POPCNT insn can be used). Taking away from the compiler 7
> >>>>>> out of 15 registers (that it can hold live data in) seems quite a
> >>>>>> lot to me.
> >>>>>
> >>>>> IMO using -ffixed-reg for all those registers is even worse, as that
> >>>>> prevents the generated code in hweight from using any of those, thus
> >>>>> greatly limiting the amount of registers and likely making the
> >>>>> generated code rely heavily on pushing and popping from the stack?
> >>>>
> >>>> Okay, that's the other side of the idea behind all this: Virtually no
> >>>> hardware we run on will lack POPCNT support, hence the quality of
> >>>> these fallback routines matters only on very old hardware, where we
> >>>> likely don't perform optimally already anyway.
> >>>>
> >>>>> This also has the side effect of limiting the usage of popcnt to gcc,
> >>>>> which IMO is also not ideal.
> >>>>
> >>>> Agreed. I don't know enough about clang to be able to think of
> >>>> possible alternatives. In any event there's no change to current
> >>>> behavior for hypervisors built with clang.
> >>>>
> >>>>> I also wondered, since the in-place asm before patching is a call
> >>>>> instruction, wouldn't inline asm at build time already assume that the
> >>>>> scratch registers are clobbered?
> >>>>
> >>>> That would imply the compiler peeks into the string literal of the
> >>>> asm(). At least gcc doesn't, and even if it did it couldn't infer an
> >>>> ABI from seeing a CALL insn.
> >>>
> >>> Please bear with me, but then I don't understand what Linux is doing
> >>> in arch/x86/include/asm/arch_hweight.h. I see no clobbers there,
> >>> neither it seems like the __sw_hweight{32/64} functions are built
> >>> without the usage of the scratch registers.
> >>
> >> __sw_hweight{32,64} are implemented in assembly, avoiding most
> >> scratch registers while pushing/popping the ones which do get
> >> altered.
> > 
> > Oh right, I was looking at lib/hweight.c instead of the arch one.
> > 
> > Would you agree to use the no_caller_saved_registers attribute (which
> > is available AFAICT for both gcc and clang) for generic_hweightXX and
> > then remove the asm prefix code in favour of the defines for
> > hweight{8/16}?
> 
> At least for gcc no_caller_saved_registers isn't old enough to be
> used unconditionally (nor is its companion -mgeneral-regs-only).
> If you tell me it's fine to use unconditionally with clang, then
> I can see about making this the preferred variant, with the
> present one as a fallback.

Hm, so my suggestion was bad, no_caller_saved_registers is only
implemented starting with clang 5, which is newer than the minimum we
currently require (3.5).

So apart from adding a clobber to the asm instance covering the
scratch registers the only option I can see as viable is using a bunch
of dummy global variables assigned to the registers we need to prevent
the software generic_hweightXX functions from using, but that's ugly
and will likely trigger warnings at least, since I'm not sure the
compiler will find it safe to clobber a function call register with a
global variable. Or adding a prologue / epilogue to the call
instruction in order to push / pop the relevant registers. None seems
like a very good option IMO.

I also assume that using no_caller_saved_registers when available or
else keeping the current behavior is not an acceptable solution? FWIW,
from a FreeBSD PoV I would be OK with that, as I don't think there are
any supported targets with clang < 5.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 20 17:20:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 17:20:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbSO2-0000VV-LU; Wed, 20 May 2020 17:20:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NWk/=7C=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jbSO1-0000VO-Bz
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 17:20:09 +0000
X-Inumbo-ID: 275435c8-9abe-11ea-b9cf-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 275435c8-9abe-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 17:20:08 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Hy70aDVNs4PsAR3OYYC29RETWWM1PnlDmQqWOkvKGaKuLHIUl5bkbHqFyn0pg/buDtjH/me2P1
 XlLk9bukeYrFZ32aSqMDYJ6ZLAMPGfLzCmz1VEaoeXv5ukGWLb8q0VfUyt9397rjQVD1nb0FKb
 0AXWfGa570ObZVba+ZSBCAAypS99jl+SRLekpMh8gOVXTyVUYg4KNq6n5vzlRS0jRlnjA2OEJQ
 iF6WOirsCvcGMt1DzvNW20teer5XPWqJCyqc8Uhu0S2vA73fJ1Ngn2n5cpSnWGFViOrmosP3SC
 dT8=
X-SBRS: 2.7
X-MesageID: 18296172
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,414,1583211600"; d="scan'208";a="18296172"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24261.26307.721031.349605@mariner.uk.xensource.com>
Date: Wed, 20 May 2020 18:20:03 +0100
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: [XEN PATCH] tools/xenstore: mark variable in header as extern
In-Reply-To: <20200520163942.131919-1-anthony.perard@citrix.com>
References: <20200520163942.131919-1-anthony.perard@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Anthony PERARD writes ("[XEN PATCH] tools/xenstore: mark variable in header as extern"):
> This patch fixes the "multiple definition of `xprintf'" (or xgt_handle)
> build error with GCC 10.1.0.

Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Wed May 20 17:53:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 17:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbSth-0003AY-9u; Wed, 20 May 2020 17:52:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbStg-0003AT-7P
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 17:52:52 +0000
X-Inumbo-ID: b91f7612-9ac2-11ea-b9cf-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b91f7612-9ac2-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 17:52:51 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 31A18206B6;
 Wed, 20 May 2020 17:52:50 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1589997170;
 bh=YBKhQKVIzAMV/Xq4T8XDemXDl9HFFlWw9sVxS+xBgAM=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=ZE7TQFMhu37nyJUuEiBz+0/HVOel0PmP0Wz27Ismmpo7CUMIoQQgDYsPnI2J1B1jY
 YWkcvgen+iAMEiuejlCC138ok84SeJss1bgUh69aFe1qHtuGXKoXRewm/IJJ1RmQOH
 4d9D8tv0KRSZ08wHoW2nQQ4m/zs9TdjGiBSZ+AT4=
Date: Wed, 20 May 2020 10:52:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: =?UTF-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
Subject: Re: grant table issues mapping a ring order 10
In-Reply-To: <ecd0bdf8-6e65-24a7-8383-c244853f7ae6@suse.com>
Message-ID: <alpine.DEB.2.21.2005201050310.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005191252040.27502@sstabellini-ThinkPad-T480s>
 <03bad8fd-9826-7652-1c08-549e22634f8d@suse.com>
 <ecd0bdf8-6e65-24a7-8383-c244853f7ae6@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1880897936-1589997170=:27502"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, boris.ostrovsky@oracle.com,
 Stefano Stabellini <sstabellini@kernel.org>, jbeulich@suse.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1880897936-1589997170=:27502
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Wed, 20 May 2020, Jürgen Groß wrote:
> On 20.05.20 08:00, Jürgen Groß wrote:
> > On 19.05.20 23:21, Stefano Stabellini wrote:
> > > Hi Juergen, Boris,
> > > 
> > > I am trying to increase the size of the rings used for Xen 9pfs
> > > connections for performance reasons and also to reduce the likelihood of
> > > the backend having to wait on the frontend to free up space from the
> > > ring.
> > > 
> > > FYI I realized that we cannot choose order 11 or greater in Linux
> > > because then we run into the hard limit CONFIG_FORCE_MAX_ZONEORDER=11.
> > > But that is not the reason why I am writing to you :-)
> > > 
> > > 
> > > The reason why I am writing is that even order 10 fails for some
> > > grant-table related reason I cannot explain. There are two rings, each
> > > of them order 10. Mapping the first ring results into an error. (Order 9
> > > works fine, resulting in both rings being mapped correctly.)
> > > 
> > > QEMU tries to map the refs but gets an error:
> > > 
> > >    gnttab: error: mmap failed: Invalid argument
> > >    xen be: 9pfs-0: xen be: 9pfs-0: xengnttab_map_domain_grant_refs failed:
> > > Invalid argument
> > >    xengnttab_map_domain_grant_refs failed: Invalid argument
> > > 
> > > The error comes from Xen. The hypervisor returns GNTST_bad_gntref to
> > > Linux (drivers/xen/grant-table.c:gnttab_map_refs). Then:
> > > 
> > >          if (map->map_ops[i].status) {
> > >             err = -EINVAL;
> > >             continue;
> > >         }
> > > 
> > > So Linux returns -EINVAL to QEMU. The refs seem to be garbage. The
> > > following printks are in Xen in the implementation of map_grant_ref:
> > > 
> > > (XEN) DEBUG map_grant_ref 1017 ref=998 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=999 nr=2560
> > > (XEN) DEBUG map_grant_ref 1013 ref=2050669706 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x7a3abc8a for d1
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=19 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1013 ref=56423797 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x35cf575 for d1
> > > (XEN) DEBUG map_grant_ref 1013 ref=348793 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x55279 for d1
> > > (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1013 ref=2070386184 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x7b679608 for d1
> > > (XEN) DEBUG map_grant_ref 1013 ref=3421871 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af for d1
> > > (XEN) DEBUG map_grant_ref 1013 ref=1589921828 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x5ec44824 for d1
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1013 ref=875999099 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x3436af7b for d1
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1013 ref=2705045486 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0xa13bb7ee for d1
> > > (XEN) DEBUG map_grant_ref 1013 ref=4294967295 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0xffffffff for d1
> > > (XEN) DEBUG map_grant_ref 1013 ref=213291910 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0xcb69386 for d1
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1013 ref=4912 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0x1330 for d1
> > > (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
> > > (XEN) DEBUG map_grant_ref 1017 ref=24 nr=2560
> > > (XEN) DEBUG map_grant_ref 1013 ref=167788925 nr=2560
> > > (XEN) grant_table.c:1015:d0v0 Bad ref 0xa00417d for d1
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > (XEN) DEBUG map_grant_ref 1017 ref=0 nr=2560
> > > 
> > > 
> > > Full logs https://pastebin.com/QLTUaUGJ
> > > It is worth mentioning that no limits are being reached: we are below
> > > 2500 entries per domain and below the 64 pages of grant refs per domain.
> > > 
> > > What seems to happen is that after ref 999, the next refs are garbage.
> > > Do you have any ideas why?
> > 
> > I don't think there is enough space for all the needed grant refs in the
> > initial interface page passed via Xenstore. So how do you pass the refs
> > to the backend?
> 
> Looking into the full log this seems to be the problem: The processing
> is starting with ref=9 and the last successful ref is 999, so 991 refs
> have been processed. Each ref needs 4 bytes, so a page could hold 1024
> refs, but the first 132 bytes of the page are used for other information
> resulting in 1024-33 == 991 refs possible.

  O_O

Dho! That is definitely the issue. Thank you Jurgen!

I added all sorts of checks to the grant table pages and forgot about
the initial shared page used to pass the ring refs themselves.
--8323329-1880897936-1589997170=:27502--


From xen-devel-bounces@lists.xenproject.org Wed May 20 18:18:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 18:18:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbTHx-00058C-FI; Wed, 20 May 2020 18:17:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbTHv-000585-T6
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 18:17:55 +0000
X-Inumbo-ID: 362621a8-9ac6-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 362621a8-9ac6-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 18:17:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pVIGebkV9eQn57K1hg+cJ9CeRuEUb+EuxPyBZMycJv4=; b=J4aWlMA7zbD3DtnhYKEFeDrZV
 5oprtDB+GrMi1FkKtVVth4J0JnPgW85DbymeSdwSRFFskSms8bZ8DtWikdIgrQRJgZIwd/pxNrg24
 X5esji5P3LdPjSR/+20z5iez9yzj1W/uK17mEO4PXa3dES/CA5JrpSfOK8WfJVjx+u6aw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbTHo-0000dA-Uf; Wed, 20 May 2020 18:17:49 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbTHo-0005l9-L4; Wed, 20 May 2020 18:17:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbTHo-0003qK-KT; Wed, 20 May 2020 18:17:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150271-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150271: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-armhf-armhf-libvirt-raw:xen-boot:fail:heisenbug
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=f2465433b43fb87766d79f42191607dac4aed5b4
X-Osstest-Versions-That: qemuu=a89af8c20ab289785806fcc1589b0f4265bd4e90
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 18:17:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150271 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150271/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-libvirt-raw  7 xen-boot         fail in 150260 pass in 150271
 test-amd64-amd64-xl-rtds     15 guest-saverestore          fail pass in 150260

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds 16 guest-localmigrate fail in 150260 REGR. vs. 150243

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150243
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150243
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150243
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150243
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150243
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150243
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                f2465433b43fb87766d79f42191607dac4aed5b4
baseline version:
 qemuu                a89af8c20ab289785806fcc1589b0f4265bd4e90

Last test of basis   150243  2020-05-19 11:07:14 Z    1 days
Testing same since   150260  2020-05-19 19:14:51 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  John Snow <jsnow@redhat.com>
  Kevin Wolf <kwolf@redhat.com>
  Lukas Straub <lukasstraub2@web.de>
  Max Reitz <mreitz@redhat.com>
  Oleksandr Natalenko <oleksandr@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Stefan Hajnoczi <stefanha@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   a89af8c20a..f2465433b4  f2465433b43fb87766d79f42191607dac4aed5b4 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed May 20 18:39:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 18:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbTc4-0006ra-9f; Wed, 20 May 2020 18:38:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gBXs=7C=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbTc3-0006rV-2s
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 18:38:43 +0000
X-Inumbo-ID: 1fd68733-9ac9-11ea-aa71-12813bfff9fa
Received: from ppsw-41.csi.cam.ac.uk (unknown [131.111.8.141])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fd68733-9ac9-11ea-aa71-12813bfff9fa;
 Wed, 20 May 2020 18:38:42 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:40160
 helo=[192.168.1.219])
 by ppsw-41.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.159]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbTby-000lux-SB (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 20 May 2020 19:38:38 +0100
Subject: Re: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-3-roger.pau@citrix.com>
 <e9e337ae-295e-5577-3c6d-a42721190b07@suse.com>
 <20200518154527.GW54375@Air-de-Roger>
 <6ce247e4-ef85-6793-68a6-0d1cde7f886d@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b3fc4a9d-faf8-1904-3104-9c9f9da4773d@citrix.com>
Date: Wed, 20 May 2020 19:38:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <6ce247e4-ef85-6793-68a6-0d1cde7f886d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18/05/2020 16:47, Jan Beulich wrote:
>
> On 18.05.2020 17:45, Roger Pau Monné wrote:
>> On Mon, May 18, 2020 at 05:05:12PM +0200, Jan Beulich wrote:
>>>
>>> On 15.05.2020 15:58, Roger Pau Monne wrote:
>>>> --- a/docs/misc/xen-command-line.pandoc
>>>> +++ b/docs/misc/xen-command-line.pandoc
>>>> @@ -652,6 +652,15 @@ Specify the size of the console debug trace buffer. By specifying `cpu:`
>>>>  additionally a trace buffer of the specified size is allocated per cpu.
>>>>  The debug trace feature is only enabled in debugging builds of Xen.
>>>>  
>>>> +### disable-c6-errata
>>> Hmm, yes please - a disable for errata! ;-)
>>>
>>> How about "avoid-c6-errata", and then perhaps as a sub-option to
>>> "cpuidle="? (If we really want a control for this in the first
>>> place.)
>> Right, I see I'm very bad at naming. Not sure it's even worth it
>> maybe?
>>
>> I can remove it completely from the patch if that is OK.
> I'd be fine without. Andrew?

Yeah - the only thing people can do with this is shoot themselves in the
foot.

There's frankly no need to give them the option in the first place.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 18:41:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 18:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbTec-0007cQ-NT; Wed, 20 May 2020 18:41:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbTeb-0007cL-AX
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 18:41:21 +0000
X-Inumbo-ID: 7f725a0e-9ac9-11ea-b9cf-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f725a0e-9ac9-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 18:41:20 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D813A207D3;
 Wed, 20 May 2020 18:41:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590000080;
 bh=XMxptrvNj+mtd+IirGS6+uvsvJbFxSvrUCTSYI+SSDQ=;
 h=From:To:Cc:Subject:Date:From;
 b=BiuUkA8Z2bjqWFNEiZxOHABQXBdkmQ2BQ8GVOhrzdXHaY8jsPs6xTjOWajWwkI1xy
 UUwxVehuSvmywW37PRO8qtPrneVtcXogxeG0/FOXuwSLo4S0ovdDN2ywFCZSBHxPGR
 kPqEwR0VKJWc+pUWeO/jDLX6kWeSfvmEhR+l1wMo=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com
Subject: [PATCH] 9p/xen: increase XEN_9PFS_RING_ORDER
Date: Wed, 20 May 2020 11:41:13 -0700
Message-Id: <20200520184113.24727-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: lucho@ionkov.net, sstabellini@kernel.org, ericvh@gmail.com,
 rminnich@sandia.gov, linux-kernel@vger.kernel.org,
 v9fs-developer@lists.sourceforge.net, xen-devel@lists.xenproject.org,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Increase XEN_9PFS_RING_ORDER to 9 for performance reasons. Order 9 is the
maximum allowed by the protocol.

We can't assume that all backends will support order 9. The xenstore
property max-ring-page-order specifies the max order supported by the
backend. We'll use max-ring-page-order for the size of the ring.

This means that the size of the ring is not static
(XEN_FLEX_RING_SIZE(9)) anymore. Change XEN_9PFS_RING_SIZE to take an
argument and base the calculation on the order chosen at setup time.

Finally, reduce p9_xen_trans.maxsize to a quarter of its original value.
We need to divide it by 2 because two rings come out of the same order
allocation: the in ring and the out ring. This was a mistake in the
original code. Divide it by 2 again because we don't want a single
request/reply to fill up an entire ring. Multiple requests/replies can
be outstanding at any given time, and if one of them used the full ring
we would risk forcing the backend to wait for the client to read back
more replies before continuing, which hurts performance.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 net/9p/trans_xen.c | 61 ++++++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index 086a4abdfa7c..cf5ea74be7cc 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -44,8 +44,8 @@
 #include <net/9p/transport.h>
 
 #define XEN_9PFS_NUM_RINGS 2
-#define XEN_9PFS_RING_ORDER 6
-#define XEN_9PFS_RING_SIZE  XEN_FLEX_RING_SIZE(XEN_9PFS_RING_ORDER)
+#define XEN_9PFS_RING_ORDER 9
+#define XEN_9PFS_RING_SIZE(ring)  XEN_FLEX_RING_SIZE(ring->intf->ring_order)
 
 struct xen_9pfs_header {
 	uint32_t size;
@@ -130,8 +130,8 @@ static bool p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size)
 	prod = ring->intf->out_prod;
 	virt_mb();
 
-	return XEN_9PFS_RING_SIZE -
-		xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) >= size;
+	return XEN_9PFS_RING_SIZE(ring) -
+		xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) >= size;
 }
 
 static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
@@ -165,17 +165,18 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
 	prod = ring->intf->out_prod;
 	virt_mb();
 
-	if (XEN_9PFS_RING_SIZE - xen_9pfs_queued(prod, cons,
-						 XEN_9PFS_RING_SIZE) < size) {
+	if (XEN_9PFS_RING_SIZE(ring) -
+	    xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) < size) {
 		spin_unlock_irqrestore(&ring->lock, flags);
 		goto again;
 	}
 
-	masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
-	masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+	masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE(ring));
+	masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));
 
 	xen_9pfs_write_packet(ring->data.out, p9_req->tc->sdata, size,
-			      &masked_prod, masked_cons, XEN_9PFS_RING_SIZE);
+			      &masked_prod, masked_cons,
+			      XEN_9PFS_RING_SIZE(ring));
 
 	p9_req->status = REQ_STATUS_SENT;
 	virt_wmb();			/* write ring before updating pointer */
@@ -204,19 +205,19 @@ static void p9_xen_response(struct work_struct *work)
 		prod = ring->intf->in_prod;
 		virt_rmb();
 
-		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) <
+		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) <
 		    sizeof(h)) {
 			notify_remote_via_irq(ring->irq);
 			return;
 		}
 
-		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
-		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE(ring));
+		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));
 
 		/* First, read just the header */
 		xen_9pfs_read_packet(&h, ring->data.in, sizeof(h),
 				     masked_prod, &masked_cons,
-				     XEN_9PFS_RING_SIZE);
+				     XEN_9PFS_RING_SIZE(ring));
 
 		req = p9_tag_lookup(priv->client, h.tag);
 		if (!req || req->status != REQ_STATUS_SENT) {
@@ -230,11 +231,11 @@ static void p9_xen_response(struct work_struct *work)
 		memcpy(req->rc, &h, sizeof(h));
 		req->rc->offset = 0;
 
-		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));
 		/* Then, read the whole packet (including the header) */
 		xen_9pfs_read_packet(req->rc->sdata, ring->data.in, h.size,
 				     masked_prod, &masked_cons,
-				     XEN_9PFS_RING_SIZE);
+				     XEN_9PFS_RING_SIZE(ring));
 
 		virt_mb();
 		cons += h.size;
@@ -264,7 +265,7 @@ static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
 
 static struct p9_trans_module p9_xen_trans = {
 	.name = "xen",
-	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
+	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT - 2),
 	.def = 1,
 	.create = p9_xen_create,
 	.close = p9_xen_close,
@@ -292,14 +293,16 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
 		if (priv->rings[i].irq > 0)
 			unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
 		if (priv->rings[i].data.in) {
-			for (j = 0; j < (1 << XEN_9PFS_RING_ORDER); j++) {
+			for (j = 0;
+			     j < (1 << priv->rings[i].intf->ring_order);
+			     j++) {
 				grant_ref_t ref;
 
 				ref = priv->rings[i].intf->ref[j];
 				gnttab_end_foreign_access(ref, 0, 0);
 			}
 			free_pages((unsigned long)priv->rings[i].data.in,
-				   XEN_9PFS_RING_ORDER -
+				   priv->rings[i].intf->ring_order -
 				   (PAGE_SHIFT - XEN_PAGE_SHIFT));
 		}
 		gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
@@ -320,7 +323,8 @@ static int xen_9pfs_front_remove(struct xenbus_device *dev)
 }
 
 static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
-					 struct xen_9pfs_dataring *ring)
+					 struct xen_9pfs_dataring *ring,
+					 unsigned int order)
 {
 	int i = 0;
 	int ret = -ENOMEM;
@@ -339,21 +343,21 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
 		goto out;
 	ring->ref = ret;
 	bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-			XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+			order - (PAGE_SHIFT - XEN_PAGE_SHIFT));
 	if (!bytes) {
 		ret = -ENOMEM;
 		goto out;
 	}
-	for (; i < (1 << XEN_9PFS_RING_ORDER); i++) {
+	for (; i < (1 << order); i++) {
 		ret = gnttab_grant_foreign_access(
 				dev->otherend_id, virt_to_gfn(bytes) + i, 0);
 		if (ret < 0)
 			goto out;
 		ring->intf->ref[i] = ret;
 	}
-	ring->intf->ring_order = XEN_9PFS_RING_ORDER;
+	ring->intf->ring_order = order;
 	ring->data.in = bytes;
-	ring->data.out = bytes + XEN_9PFS_RING_SIZE;
+	ring->data.out = bytes + XEN_FLEX_RING_SIZE(order);
 
 	ret = xenbus_alloc_evtchn(dev, &ring->evtchn);
 	if (ret)
@@ -371,7 +375,7 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
 		for (i--; i >= 0; i--)
 			gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
 		free_pages((unsigned long)bytes,
-			   XEN_9PFS_RING_ORDER -
+			   ring->intf->ring_order -
 			   (PAGE_SHIFT - XEN_PAGE_SHIFT));
 	}
 	gnttab_end_foreign_access(ring->ref, 0, 0);
@@ -401,8 +405,10 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
 		return -EINVAL;
 	max_ring_order = xenbus_read_unsigned(dev->otherend,
 					      "max-ring-page-order", 0);
-	if (max_ring_order < XEN_9PFS_RING_ORDER)
-		return -EINVAL;
+	if (max_ring_order > XEN_9PFS_RING_ORDER)
+		max_ring_order = XEN_9PFS_RING_ORDER;
+	if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
+		p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order);
 
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
@@ -419,7 +425,8 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
 
 	for (i = 0; i < priv->num_rings; i++) {
 		priv->rings[i].priv = priv;
-		ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i]);
+		ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i],
+						    max_ring_order);
 		if (ret < 0)
 			goto error;
 	}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 19:35:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 19:35:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbUU0-0003U9-RZ; Wed, 20 May 2020 19:34:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbUTz-0003U4-B3
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 19:34:27 +0000
X-Inumbo-ID: ea45961e-9ad0-11ea-aa85-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea45961e-9ad0-11ea-aa85-12813bfff9fa;
 Wed, 20 May 2020 19:34:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lwWxZB2KTQrkyeL1Tdj9AYSfqQq6b6KjVtnmwkDl8PY=; b=exvD9NwffHy1NQOkPI1rjGlTR
 UvMFyQ/sw6B4Sg6TQZuxXQyEwGkJBTb66Hj4SPIBr3V7G98+k9KSzDr5eFzKaFsK3A1Un6AAelzgB
 4hBA2le71ymoqKB1MZucSn9q8wne78zFO2nSl0zoAASeFi3fnh/VCBNBmhJ1it2DYlSE8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbUTx-0002BN-Lu; Wed, 20 May 2020 19:34:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbUTw-0000Vg-F1; Wed, 20 May 2020 19:34:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbUTw-0007Oa-EH; Wed, 20 May 2020 19:34:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150273: tolerable FAIL
X-Osstest-Failures: linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=1cdaf895c99d319c0007d0b62818cf85fc4b087f
X-Osstest-Versions-That: linux=cbaf2369956178e68fb714a30dc86cf768dd596a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 19:34:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150273 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150273/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 150172

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150172
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                1cdaf895c99d319c0007d0b62818cf85fc4b087f
baseline version:
 linux                cbaf2369956178e68fb714a30dc86cf768dd596a

Last test of basis   150172  2020-05-14 06:09:31 Z    6 days
Testing same since   150273  2020-05-20 06:46:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Adam Ford <aford173@gmail.com>
  Adam McCoy <adam@forsedomani.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Alan Maguire <alan.maguire@oracle.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexei Starovoitov <ast@kernel.org>
  Amir Goldstein <amir73il@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ansuel Smith <ansuelsmth@gmail.com>
  Arnd Bergmann <arnd@arndb.de>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Ben Chuang <ben.chuang@genesyslogic.com.tw>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Catalin Marinas <catalin.marinas@arm.com>
  Chen-Yu Tsai <wens@csie.org>
  Chris Chiu <chiu@endlessm.com>
  Chris Down <chris@chrisdown.name>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Christophe Leroy <christophe.leroy@c-s.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Dave Flogeras <dflogeras2@gmail.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Eric Dumazet <edumazet@google.com>
  Eric W. Biederman <ebiederm@xmission.com>
  Eugeniu Rosca <erosca@de.adit-jv.com>
  Fabio Estevam <festevam@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gerd Hoffmann <kraxel@redhat.com>
  Grace Kao <grace.kao@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grzegorz Kowal <custos.mentis@gmail.com>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Heiner Kallweit <hkallweit1@gmail.com>
  Hugh Dickins <hughd@google.com>
  Ilie Halip <ilie.halip@gmail.com>
  Ioana Ciornei <ioana.ciornei@nxp.com>
  Iris Liu <iris@onechronos.com>
  J. Bruce Fields <bfields@redhat.com>
  Jack Morgenstein <jackm@dev.mellanox.co.il>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jan Kara <jack@suse.cz>
  Jason Gunthorpe <jgg@mellanox.com>
  Jeremy Linton <jeremy.linton@arm.com>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesus Ramos <jesus-ramos@live.com>
  Jim Mattson <jmattson@google.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Weiner <hannes@cmpxchg.org>
  John Fastabend <john.fastabend@gmail.com>
  John Stultz <john.stultz@linaro.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Jue Wang <juew@google.com>
  Justin Swartz <justin.swartz@risingedge.co.za>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kamal Mostafa <kamal@canonical.com>
  Kelly Littlepage <kelly@onechronos.com>
  Kevin Hilman <khilman@baylibre.com>
  Kishon Vijay Abraham I <kishon@ti.com>
  Kyungtae Kim <kt0755@gmail.com>
  Leon Romanovsky <leonro@mellanox.com>
  Li Jun <jun.li@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luo bin <luobin9@huawei.com>
  Maciej Żenczykowski <maze@google.com>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Maor Gottlieb <maorg@mellanox.com>
  Marc Zyngier <maz@kernel.org>
  Marek Olšák <marek.olsak@amd.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Maxime Ripard <maxime@cerno.tech>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Vokáč <michal.vokac@ysoft.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Neil Horman <nhorman@tuxdriver.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Moore <paul@paul-moore.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Jones <pjones@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Sutter <phil@nwl.cc>
  Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
  Potnuri Bharat Teja <bharat@chelsio.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raul E Rangel <rrangel@chromium.org>
  Renius Chen <renius.chen@genesyslogic.com.tw>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Samu Nuutamo <samu.nuutamo@vincit.fi>
  Samuel Zou <zou_wei@huawei.com>
  Santosh Shilimkar <santosh.shilimkar@oracle.com>
  Sarthak Garg <sartgarg@codeaurora.org>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Shawn Guo <shawnguo@kernel.org>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Ser <contact@emersion.fr>
  Soheil Hassas Yeganeh <soheil@google.com>
  Sriharsha Allenki <sallenki@codeaurora.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefano Brivio <sbrivio@redhat.com>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Sung Lee <sung.lee@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tejun Heo <tj@kernel.org>
  Tiecheng Zhou <Tiecheng.Zhou@amd.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Ursula Braun <ubraun@linux.ibm.com>
  Vasily Averin <vvs@virtuozzo.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Minet <v.minet@criteo.com>
  Vineeth Pillai <vineethrp@gmail.com>
  Vinod Koul <vkoul@kernel.org>
  Waiman Long <longman@redhat.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Willem de Bruijn <willemb@google.com>
  Wu Bo <wubo40@huawei.com>
  Xiao Yang <yangx.jy@cn.fujitsu.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yafang Shao <laoar.shao@gmail.com>
  Yang Shi <yang.shi@linux.alibaba.com>
  Yang Yingliang <yangyingliang@huawei.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yuiko Oshino <yuiko.oshino@microchip.com>
  Zefan Li <lizefan@huawei.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

ssh: Could not resolve hostname xenbits.xen.org: Temporary failure in name resolution
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
(No revision log; it would be 4158 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 20 19:52:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 19:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbUky-0005BC-Kr; Wed, 20 May 2020 19:52:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbUkw-0005B7-R1
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 19:51:58 +0000
X-Inumbo-ID: 59dbbca4-9ad3-11ea-aa85-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59dbbca4-9ad3-11ea-aa85-12813bfff9fa;
 Wed, 20 May 2020 19:51:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AG4rOfxxnQToOdQUY/WXCChcFE1WfCX5Sod8W/RkDWk=; b=qAlM5U+PuJDaipZabi92YBsmq
 KKtZzwHGeUCpNOJGVyQFGS1gE6UIFx4kBKSoGifslVarE7wPDDR/kcoOdOs34iaWbGANOqxHBmtdA
 eePwDwXZH0tDyNbY5/SpFADBoIn/mk7GcwIrlWHDOJSZIlz/aDxNHyIEKV7/Sb655Wdts=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbUkp-0002Xj-Pf; Wed, 20 May 2020 19:51:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbUkp-0001Bc-Da; Wed, 20 May 2020 19:51:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbUkp-0004u4-Cs; Wed, 20 May 2020 19:51:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150278-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150278: all pass
X-Osstest-Versions-This: ovmf=d3733188a2162abf72dd08c0cedd1119b5cfe6c4
X-Osstest-Versions-That: ovmf=7b6327ff03bb4436261ffad246ba3a14de10391f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 19:51:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150278 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150278/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d3733188a2162abf72dd08c0cedd1119b5cfe6c4
baseline version:
 ovmf                 7b6327ff03bb4436261ffad246ba3a14de10391f

Last test of basis   150232  2020-05-18 16:09:20 Z    2 days
Testing same since   150278  2020-05-20 13:10:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Shenglei Zhang <shenglei.zhang@intel.com>
  Zhang, Shenglei <shenglei.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

ssh: Could not resolve hostname xenbits.xen.org: Temporary failure in name resolution
fatal: Could not read from remote repository.

Please make sure you have the correct access rights
and the repository exists.
------------------------------------------------------------
commit d3733188a2162abf72dd08c0cedd1119b5cfe6c4
Author: Zhang, Shenglei <shenglei.zhang@intel.com>
Date:   Wed May 20 11:08:47 2020 +0800

    NetworkPkg/DxeNetLib: Change the order of conditions in IF statement
    
    The condition, NET_HEADSPACE(&(Nbuf->BlockOp[Index])) < Len, is
    meaningless if Index = 0. So checking 'Index != 0' should be
    performed first in the if statement.
    
    Cc: Maciej Rabeda <maciej.rabeda@linux.intel.com>
    Cc: Siyuan Fu <siyuan.fu@intel.com>
    Cc: Jiaxin Wu <jiaxin.wu@intel.com>
    Signed-off-by: Shenglei Zhang <shenglei.zhang@intel.com>
    Reviewed-by: Maciej Rabeda <maciej.rabeda@linux.intel.com>
    Reviewed-by: Philippe Mathieu-Daude <philmd@redhat.com>
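The reordering described in the commit message relies on C's short-circuit evaluation of `&&`. A minimal standalone sketch of that behaviour (the `headspace` helper and the call counter are hypothetical stand-ins, not the EDK II code):

```c
#include <assert.h>

static int calls; /* counts how often the second operand is evaluated */

/* Hypothetical stand-in for NET_HEADSPACE(&Nbuf->BlockOp[Index]) */
static int headspace(int index)
{
    calls++;
    return index * 10;
}

/* With `index != 0` first, headspace() is never evaluated for
 * index 0, mirroring the reordered condition in the commit above. */
static int condition(int index, int len)
{
    return index != 0 && headspace(index) < len;
}
```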


From xen-devel-bounces@lists.xenproject.org Wed May 20 20:48:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 20:48:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbVd0-00014U-MI; Wed, 20 May 2020 20:47:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbVcz-00014P-0j
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 20:47:49 +0000
X-Inumbo-ID: 29a4b20e-9adb-11ea-aa8a-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 29a4b20e-9adb-11ea-aa8a-12813bfff9fa;
 Wed, 20 May 2020 20:47:48 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id D4752207D8;
 Wed, 20 May 2020 20:47:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590007667;
 bh=pFgqAYPdTivuUZr14bc7fno90C0BiU2RfDO4wiHaAio=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=tLct13YcQQztlMgsWhfmqelxdPnoUXZr/8o7FXEJqbxcId6ab55//VCBCeRv3xOsE
 IEHdIuSxaCxcYqNmrqzhljKujCvKPuf5zACGA996C9MtZjcsZi/CuYgbC3hhdkW2xv
 wOQc/tokPdONm0NXR+9SOL0nuB9ZnWw3R+azUk4Y=
Date: Wed, 20 May 2020 13:47:46 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Dominique Martinet <asmadeus@codewreck.org>
Subject: Re: [V9fs-developer] [PATCH] 9p/xen: increase XEN_9PFS_RING_ORDER
In-Reply-To: <20200520193647.GA17565@nautica>
Message-ID: <alpine.DEB.2.21.2005201340310.27502@sstabellini-ThinkPad-T480s>
References: <20200520184113.24727-1-sstabellini@kernel.org>
 <20200520193647.GA17565@nautica>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, lucho@ionkov.net,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 ericvh@gmail.com, linux-kernel@vger.kernel.org,
 v9fs-developer@lists.sourceforge.net, rminnich@sandia.gov,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 20 May 2020, Dominique Martinet wrote:
> Stefano Stabellini wrote on Wed, May 20, 2020:
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > Increase XEN_9PFS_RING_ORDER to 9 for performance reasons. Order 9 is the
> > max allowed by the protocol.
> > 
> > We can't assume that all backends will support order 9. The xenstore
> > property max-ring-page-order specifies the max order supported by the
> > backend. We'll use max-ring-page-order for the size of the ring.
> > 
> > This means that the size of the ring is not static
> > (XEN_FLEX_RING_SIZE(9)) anymore. Change XEN_9PFS_RING_SIZE to take an
> > argument and base the calculation on the order chosen at setup time.
> > 
> > 
> > Finally, modify p9_xen_trans.maxsize to be divided by 4 compared to the
> > original value. We need to divide it by 2 because we have two rings
> > coming off the same order allocation: the in and out rings. This was a
> > mistake in the original code. Also divide it further by 2 because we
> > don't want a single request/reply to fill up the entire ring. There can
> > be multiple requests/replies outstanding at any given time and if we use
> > the full ring with one, we risk forcing the backend to wait for the
> > client to read back more replies before continuing, which is not
> > performant.
> 
> Sounds good to me overall. A couple of comments inline.
> Also worth noting I need to rebuild myself a test setup so might take a
> bit of time to actually run tests, but I might just trust you on this
> one for now if it builds with no new warning... Looks like it would
> probably work :p
> 
> > [...]
> > @@ -264,7 +265,7 @@ static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
> >  
> >  static struct p9_trans_module p9_xen_trans = {
> >  	.name = "xen",
> > -	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
> > +	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT - 2),
> >  	.def = 1,
> >  	.create = p9_xen_create,
> >  	.close = p9_xen_close,
> > [...]
> > @@ -401,8 +405,10 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
> >  		return -EINVAL;
> >  	max_ring_order = xenbus_read_unsigned(dev->otherend,
> >  					      "max-ring-page-order", 0);
> > -	if (max_ring_order < XEN_9PFS_RING_ORDER)
> > -		return -EINVAL;
> > +	if (max_ring_order > XEN_9PFS_RING_ORDER)
> > +		max_ring_order = XEN_9PFS_RING_ORDER;
> 
> (If there are backends with very small max_ring_orders, we no longer
> error out when we encounter one, it might make sense to add a min
> define? Although to be honest 9p works with pretty small maxsizes so I
> don't see much reason to error out, and even order 0 will be one page
> worth.. I hope there is no xenbus that small though :))

Your point is valid but the size calculation (XEN_FLEX_RING_SIZE) should
work correctly even with order 0:

    (1UL << ((0) + XEN_PAGE_SHIFT - 1)) = 1 << (12 - 1) = 2048

So I am thinking that the protocol should still work correctly, although
the performance might be undesirable.

FYI The smallest backend I know of has order 6.


> > +	if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
> > +		p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order);
> 
> So base maxsize initial value is 1 << (order + page_shift - 2) ; but
> this is 1 << (order + page_shift - 1) -- I agree with the logic you gave
> in commit message so would think this needs to be shifted down one more
> like the base value as well.
> What do you think?

Yes, you are right, thanks for noticing this! I meant to do that here
too but somehow forgot. This should be:

   p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order) / 2;
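For concreteness, the arithmetic under discussion can be sketched as below, assuming XEN_PAGE_SHIFT is 12. The macro mirrors the flex-ring definition (the in and out rings split one 2^order-page allocation in half, hence the "- 1"); the maxsize helper is a sketch of the corrected cap, not the actual patch:

```c
#include <assert.h>

#define XEN_PAGE_SHIFT 12

/* Each of the two rings (in/out) gets half of the 2^order-page
 * allocation shared between them. */
#define XEN_FLEX_RING_SIZE(order) \
    (1UL << ((order) + XEN_PAGE_SHIFT - 1))

/* Corrected maxsize: cap a single request/reply at half a ring so
 * several can be outstanding at once. */
static unsigned long maxsize_for(unsigned int order)
{
    return XEN_FLEX_RING_SIZE(order) / 2;
}
```

Note this agrees with the `1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT - 2)` form in the hunk quoted earlier, and with the 2048-byte order-0 ring computed above.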


From xen-devel-bounces@lists.xenproject.org Wed May 20 20:57:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 20:57:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbVm4-0001xK-Jf; Wed, 20 May 2020 20:57:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AfT1=7C=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbVm2-0001xF-UY
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 20:57:10 +0000
X-Inumbo-ID: 789d868c-9adc-11ea-aa8c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 789d868c-9adc-11ea-aa8c-12813bfff9fa;
 Wed, 20 May 2020 20:57:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ifjRi86x/xmXwRucq2mjbL6VOWmvQ3tgL5JGxhzIQNc=; b=YJyf5BYEEcd8PgM8caPdykdnW
 f8dkk2G5lowxruS2g+BeQmPNon1Xb3z1dPCAuFwJlq2wmeULkK4Mylie/j6lRCwUOuOVAM2eN1VKr
 xKVxUZXq53MRJO/Hffgz9mLD1AoNBJxQFWIIdD6y6FsknGc/sahNV5wwJ91MEZHzyn+Oc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbVm1-0003xL-1X; Wed, 20 May 2020 20:57:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbVm0-0003RS-MX; Wed, 20 May 2020 20:57:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbVm0-0007iv-Lt; Wed, 20 May 2020 20:57:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150280-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150280: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=dacdbf7088d6a3705a9831e73991c2b14c519a65
X-Osstest-Versions-That: xen=cdea123f1976549ecc72644588cc5ce1491606c4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 20 May 2020 20:57:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150280 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150280/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dacdbf7088d6a3705a9831e73991c2b14c519a65
baseline version:
 xen                  cdea123f1976549ecc72644588cc5ce1491606c4

Last test of basis   150276  2020-05-20 11:01:29 Z    0 days
Testing same since   150280  2020-05-20 18:01:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cdea123f19..dacdbf7088  dacdbf7088d6a3705a9831e73991c2b14c519a65 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 20 21:28:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 21:28:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbWFx-0004Wv-1n; Wed, 20 May 2020 21:28:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gBXs=7C=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbWFw-0004Wq-1Y
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 21:28:04 +0000
X-Inumbo-ID: c9392e4e-9ae0-11ea-ae69-bc764e2007e4
Received: from ppsw-41.csi.cam.ac.uk (unknown [131.111.8.141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9392e4e-9ae0-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 21:28:03 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:45964
 helo=[192.168.1.219])
 by ppsw-41.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.159]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbWFp-000ECA-Pt (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 20 May 2020 22:27:57 +0100
Subject: Re: [PATCH] VT-x: extend LBR Broadwell errata coverage
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <df6e8dad-b4c0-0821-46eb-e4aa86f8ccfa@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6697e119-c5a2-4f3b-0d81-e6420dfaff87@citrix.com>
Date: Wed, 20 May 2020 22:27:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <df6e8dad-b4c0-0821-46eb-e4aa86f8ccfa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 13:52, Jan Beulich wrote:
> For lbr_tsx_fixup_check() simply name a few more specific errata numbers.
>
> For bdf93_fixup_check(), however, more models are affected. Oddly enough
> despite being the same model and stepping, the erratum is listed for Xeon
> E3 but not its Core counterpart. With this it's of course also uncertain
> whether the absence of the erratum for Xeon D is actually meaningful.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2870,8 +2870,10 @@ static void __init lbr_tsx_fixup_check(v
>      case 0x45: /* HSM182 - 4th gen Core */
>      case 0x46: /* HSM182, HSD172 - 4th gen Core (GT3) */
>      case 0x3d: /* BDM127 - 5th gen Core */
> -    case 0x47: /* BDD117 - 5th gen Core (GT3) */
> -    case 0x4f: /* BDF85  - Xeon E5-2600 v4 */
> +    case 0x47: /* BDD117 - 5th gen Core (GT3)
> +                  BDW117 - Xeon E3-1200 v4 */
> +    case 0x4f: /* BDF85  - Xeon E5-2600 v4
> +                  BDX88  - Xeon E7-x800 v4 */

After cross referencing with Roger's errata patch, BDH75, and ...

>      case 0x56: /* BDE105 - Xeon D-1500 */
>          break;
>      default:
> @@ -2895,15 +2897,26 @@ static void __init lbr_tsx_fixup_check(v
>  static void __init bdf93_fixup_check(void)
>  {
>      /*
> -     * Broadwell erratum BDF93:
> +     * Broadwell erratum BDF93 et al:
>       *
>       * Reads from MSR_LER_TO_LIP (MSR 1DEH) may return values for bits[63:61]
>       * that are not equal to bit[47].  Attempting to context switch this value
>       * may cause a #GP.  Software should sign extend the MSR.
>       */
> -    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> -         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4f )
> +    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
> +         boot_cpu_data.x86 != 6 )
> +        return;
> +
> +    switch ( boot_cpu_data.x86_model )
> +    {
> +    case 0x3d: /* BDM131 - 5th gen Core */
> +    case 0x47: /* BDD??? - 5th gen Core (H-Processor line)
> +                  BDW120 - Xeon E3-1200 v4 */
> +    case 0x4f: /* BDF93  - Xeon E5-2600 v4
> +                  BDX93  - Xeon E7-x800 v4 */

BDH80, which is "Intel® Core™ i7 Processor Family for LGA2011-v3
Socket", so high end desktop, despite being electrically compatible with
the E5 servers.

~Andrew

>          bdf93_fixup_needed = true;
> +        break;
> +    }
>  }
>  
>  static int is_last_branch_msr(u32 ecx)
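The sign extension the quoted comment calls for can be sketched generically as follows (an illustrative snippet, not Xen's implementation): treat bit 47 of the 48-bit canonical address as the sign bit.

```c
#include <stdint.h>

/* Sign-extend a 48-bit canonical address from bit 47, as the BDF93
 * workaround requires for MSR_LER_TO_LIP: shift the value up so
 * bit 47 becomes the sign bit, then shift back down. Relies on
 * arithmetic right shift of signed values, as GCC/Clang provide. */
static uint64_t sign_extend_47(uint64_t val)
{
    return (uint64_t)((int64_t)(val << 16) >> 16);
}
```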



From xen-devel-bounces@lists.xenproject.org Wed May 20 21:30:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 21:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbWI5-0005HC-ED; Wed, 20 May 2020 21:30:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gBXs=7C=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbWI4-0005H6-9I
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 21:30:16 +0000
X-Inumbo-ID: 183e61d0-9ae1-11ea-9887-bc764e2007e4
Received: from ppsw-41.csi.cam.ac.uk (unknown [131.111.8.141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 183e61d0-9ae1-11ea-9887-bc764e2007e4;
 Wed, 20 May 2020 21:30:15 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:46030
 helo=[192.168.1.219])
 by ppsw-41.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.159]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbWI0-000FXT-RE (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 20 May 2020 22:30:12 +0100
Subject: Re: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-3-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <eaa63636-6e39-d401-e218-14ae37440667@citrix.com>
Date: Wed, 20 May 2020 22:30:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200515135802.63853-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 15/05/2020 14:58, Roger Pau Monne wrote:
> Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
> BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
> Dispatched Before an Interrupt of The Same Priority Completes".

HSM175 et al, so presumably a HSD, and HSE as well.

On the Broadwell side at least, BDD and BDW in addition.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 22:01:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 22:01:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbWlv-0007qr-RY; Wed, 20 May 2020 22:01:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbWlu-0007qm-Kn
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 22:01:06 +0000
X-Inumbo-ID: 671bfd04-9ae5-11ea-b07b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 671bfd04-9ae5-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 22:01:06 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E4E1B2084C;
 Wed, 20 May 2020 22:01:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590012065;
 bh=fHxDw7+31cIHIoMoLe56plFoMHEmT+hzv9jPnj7sg34=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=1D8ds4X1XZguhQAlA0bQy6ElmSDOZqmeaDfhuy8WcHRm8MDzkNP7qo4JLj4n9wIa8
 r2y0HbAXtE2mwwUmLn0GqrVScm6XzeITWWZtbJAGwR9uBeo2Rsj4zAkypOOZMypdld
 +bvkJJD2JiwBacp8dqSEEwZge0g8UzWYHenqCkdw=
Date: Wed, 20 May 2020 15:01:04 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH for-4.14 1/3] xen/arm: Allow a platform to override the
 DMA width
In-Reply-To: <8e8afff48273589fd06192203c0452fb1e69cd1f.camel@epam.com>
Message-ID: <alpine.DEB.2.21.2005201500560.27502@sstabellini-ThinkPad-T480s>
References: <20200518113008.15422-1-julien@xen.org>
 <20200518113008.15422-2-julien@xen.org>
 <8e8afff48273589fd06192203c0452fb1e69cd1f.camel@epam.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "julien@xen.org" <julien@xen.org>, "minyard@acm.org" <minyard@acm.org>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "jgrall@amazon.com" <jgrall@amazon.com>, "roman@zededa.com" <roman@zededa.com>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 "jeff.kubascik@dornerworks.com" <jeff.kubascik@dornerworks.com>,
 "jbeulich@suse.com" <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 18 May 2020, Volodymyr Babchuk wrote:
> Hi Julien,
> 
> On Mon, 2020-05-18 at 12:30 +0100, Julien Grall wrote:
> > From: Julien Grall <jgrall@amazon.com>
> > 
> > At the moment, Xen is assuming that all the devices are at least 32-bit
> > DMA capable. However, some SoCs have devices that may only be able to
> > access a much more restricted range. For instance, the RPI has devices that can
> > only access the first 1GB of RAM.
> > 
> > The structure platform_desc is now extended to allow a platform to
> > override the DMA width. The new field is used to implement
> > arch_get_dma_bit_size().
> > 
> > The prototype is now moved in asm-arm/mm.h as the function is not NUMA
> > specific. The implementation is done in platform.c so we don't have to
> > include platform.h everywhere. This should be fine as the function is
> > not expected to be called in hotpath.
> > 
> > Signed-off-by: Julien Grall <jgrall@amazon.com>
> Reviewed-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> > ---
> > 
> > Cc: Jan Beulich <jbeulich@suse.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > Cc: George Dunlap <george.dunlap@citrix.com>
> > 
> > I noticed that arch_get_dma_bit_size() is only called when there is more
> > than one NUMA node. I am a bit unsure what is the reason behind it.
> > 
> > The goal for Arm is to use arch_get_dma_bit_size() when deciding how low
> > the first Dom0 bank should be allocated.
> > ---
> >  xen/arch/arm/platform.c        | 5 +++++
> >  xen/include/asm-arm/mm.h       | 2 ++
> >  xen/include/asm-arm/numa.h     | 5 -----
> >  xen/include/asm-arm/platform.h | 2 ++
> >  4 files changed, 9 insertions(+), 5 deletions(-)
> > 
> > diff --git a/xen/arch/arm/platform.c b/xen/arch/arm/platform.c
> > index 8eb0b6e57a5a..4db5bbb4c51d 100644
> > --- a/xen/arch/arm/platform.c
> > +++ b/xen/arch/arm/platform.c
> > @@ -155,6 +155,11 @@ bool platform_device_is_blacklisted(const struct dt_device_node *node)
> >      return (dt_match_node(blacklist, node) != NULL);
> >  }
> >  
> > +unsigned int arch_get_dma_bitsize(void)
> > +{
> > +    return ( platform && platform->dma_bitsize ) ? platform->dma_bitsize : 32;
> > +}
> > +
> >  /*
> >   * Local variables:
> >   * mode: C
> > diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> > index 7df91280bc77..f8ba49b1188f 100644
> > --- a/xen/include/asm-arm/mm.h
> > +++ b/xen/include/asm-arm/mm.h
> > @@ -366,6 +366,8 @@ int arch_acquire_resource(struct domain *d, unsigned int type, unsigned int id,
> >      return -EOPNOTSUPP;
> >  }
> >  
> > +unsigned int arch_get_dma_bitsize(void);
> > +
> >  #endif /*  __ARCH_ARM_MM__ */
> >  /*
> >   * Local variables:
> > diff --git a/xen/include/asm-arm/numa.h b/xen/include/asm-arm/numa.h
> > index 490d1f31aa14..31a6de4e2346 100644
> > --- a/xen/include/asm-arm/numa.h
> > +++ b/xen/include/asm-arm/numa.h
> > @@ -25,11 +25,6 @@ extern mfn_t first_valid_mfn;
> >  #define node_start_pfn(nid) (mfn_x(first_valid_mfn))
> >  #define __node_distance(a, b) (20)
> >  
> > -static inline unsigned int arch_get_dma_bitsize(void)
> > -{
> > -    return 32;
> > -}
> > -
> >  #endif /* __ARCH_ARM_NUMA_H */
> >  /*
> >   * Local variables:
> > diff --git a/xen/include/asm-arm/platform.h b/xen/include/asm-arm/platform.h
> > index ed4d30a1be7c..997eb2521631 100644
> > --- a/xen/include/asm-arm/platform.h
> > +++ b/xen/include/asm-arm/platform.h
> > @@ -38,6 +38,8 @@ struct platform_desc {
> >       * List of devices which must not pass-through to a guest
> >       */
> >      const struct dt_device_match *blacklist_dev;
> > +    /* Override the DMA width (32-bit by default). */
> > +    unsigned int dma_bitsize;
> >  };
> >  
> >  /*
> 
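The override logic in the hunk above amounts to: use the platform's dma_bitsize when it is set, otherwise fall back to a 32-bit DMA width. A standalone sketch of that fallback and the address limit it implies (function names are illustrative, not Xen's):

```c
#include <stdint.h>

/* Sketch of the override: a zero dma_bitsize means "no override",
 * so the 32-bit default applies. */
static unsigned int effective_dma_bitsize(unsigned int plat_dma_bitsize)
{
    return plat_dma_bitsize ? plat_dma_bitsize : 32;
}

/* Highest address (exclusive) reachable with the given DMA width. */
static uint64_t dma_limit(unsigned int bits)
{
    return UINT64_C(1) << bits;
}
```

For example, a platform whose devices can only reach the first 1GB of RAM (the Raspberry Pi 4 case motivating this series) would set dma_bitsize to 30.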


From xen-devel-bounces@lists.xenproject.org Wed May 20 22:13:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 22:13:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbWxY-0000Nw-1i; Wed, 20 May 2020 22:13:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbWxX-0000No-3z
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 22:13:07 +0000
X-Inumbo-ID: 14ba0248-9ae7-11ea-b9cf-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14ba0248-9ae7-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 22:13:06 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id A1F19207F9;
 Wed, 20 May 2020 22:13:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590012786;
 bh=tq7/f6iaVdvUCdJrnIHbL8rMDfqcEiBj//6g+yk7qlY=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=moJe1oYMc3HhHjpkA8qODSAdO3VyVNECSvn34rvJiCxojubSOBkFGuCV4UQlo0LoK
 oguvJ8RqU0gXpp6TJ7Bvm2cw5/MVctWOfNZMleqKeY+N9DaKadpHvKeFT05F/rQw8z
 BgUjHB3u/gQKyRAw+GYVBrKNjp9crbKTWiZU3gKc=
Date: Wed, 20 May 2020 15:13:05 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi
 4
In-Reply-To: <20200518113008.15422-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2005201512380.27502@sstabellini-ThinkPad-T480s>
References: <20200518113008.15422-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, minyard@acm.org, paul@xen.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 roman@zededa.com, George Dunlap <george.dunlap@citrix.com>,
 jeff.kubascik@dornerworks.com, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, 18 May 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> only use the first GB of memory.
> 
> This is because several devices cannot DMA above 1GB but Xen doesn't
> necessarily allocate memory for Dom0 below 1GB.
> 
> This small series is trying to address the problem by allowing a
> platform to restrict where Dom0 banks are allocated.
> 
> This is also a candidate for Xen 4.14. Without it, a user will not be
> able to use all the RAM on the Raspberry Pi 4.

The series looks good to me aside from the couple of minor issues being
discussed.


> This series has only been lightly tested. I would appreciate more testing
> on the Raspberry Pi 4 to confirm that removing the restriction works.
> 
> Cheers,
> 
> Cc: paul@xen.org
> 
> Julien Grall (3):
>   xen/arm: Allow a platform to override the DMA width
>   xen/arm: Take into account the DMA width when allocating Dom0 memory
>     banks
>   xen/arm: plat: Allocate as much as possible memory below 1GB for dom0
>     for RPI
> 
>  xen/arch/arm/domain_build.c                | 32 +++++++++++++---------
>  xen/arch/arm/platform.c                    |  5 ++++
>  xen/arch/arm/platforms/brcm-raspberry-pi.c |  1 +
>  xen/include/asm-arm/mm.h                   |  2 ++
>  xen/include/asm-arm/numa.h                 |  5 ----
>  xen/include/asm-arm/platform.h             |  2 ++
>  6 files changed, 29 insertions(+), 18 deletions(-)
> 
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 20 22:17:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 22:17:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbX1n-0000Wx-Jm; Wed, 20 May 2020 22:17:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gBXs=7C=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbX1m-0000Ws-D0
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 22:17:30 +0000
X-Inumbo-ID: b168c048-9ae7-11ea-b9cf-bc764e2007e4
Received: from ppsw-30.csi.cam.ac.uk (unknown [131.111.8.130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b168c048-9ae7-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 22:17:29 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:53322
 helo=[192.168.1.219])
 by ppsw-30.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.156]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbX1i-000Ef4-dM (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 20 May 2020 23:17:26 +0100
Subject: Re: [PATCH v5] x86: clear RDRAND CPUID bit on AMD family 15h/16h
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4f76749b-54bd-7c39-6c90-279ce25cb57c@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e2cf9d9e-492d-fa24-0e9c-bf62b6704b34@citrix.com>
Date: Wed, 20 May 2020 23:17:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4f76749b-54bd-7c39-6c90-279ce25cb57c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 18/05/2020 14:19, Jan Beulich wrote:
> Inspired by Linux commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24:
>
>     There have been reports of RDRAND issues after resuming from suspend on
>     some AMD family 15h and family 16h systems. This issue stems from a BIOS
>     not performing the proper steps during resume to ensure RDRAND continues
>     to function properly.
>
>     Update the CPU initialization to clear the RDRAND CPUID bit for any family
>     15h and 16h processor that supports RDRAND. If it is known that the family
>     15h or family 16h system does not have an RDRAND resume issue or that the
>     system will not be placed in suspend, the "cpuid=rdrand" kernel parameter
>     can be used to stop the clearing of the RDRAND CPUID bit.
>
>     Note, that clearing the RDRAND CPUID bit does not prevent a processor
>     that normally supports the RDRAND instruction from executing it. So any
>     code that determined the support based on family and model won't #UD.
>
> Warn if no explicit choice was given on affected hardware.
>
> Check RDRAND functions at boot as well as after S3 resume (the retry
> limit chosen is entirely arbitrary).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

> ---
> Still slightly RFC, and still in particular because of the change to
> parse_xen_cpuid(): Alternative approach suggestions are welcome. But now
> also because with many CPUs there may now be a lot of warnings in case
> of issues.

It would still be nice if we could find a better way of determining
whether S3 is supported on the platform, which would at least let us
separate server and client platforms.

A straight string search for _S3 in the DSDT does look to be effective
on a sample of 5 boxes I've tried.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 23:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbYOe-0007w5-9A; Wed, 20 May 2020 23:45:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbYOd-0007w0-2x
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 23:45:11 +0000
X-Inumbo-ID: f0324c70-9af3-11ea-aaaa-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f0324c70-9af3-11ea-aaaa-12813bfff9fa;
 Wed, 20 May 2020 23:45:09 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F189F20748;
 Wed, 20 May 2020 23:45:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590018308;
 bh=ngrtvfLYBKfv8MQkCaYk8tYirfeqXPlMFjsOE/YFGgk=;
 h=Date:From:To:cc:Subject:From;
 b=aFQEiP489M9V1lc9WQ1OxfNV/2V+7N4zYIpsMfXXKumiUZCXkMtf0oX/l0ZaBRhko
 XXDUqKcjb3djxslDnJAwC1DX+oVg2M4ipvZyxQQ2EMXpfKDr0PuWTiEpSvLXpbWjS3
 1PiciD+5N1v3vJ8I1FAXnL7M5Kig9bCT9MNlgPYI=
Date: Wed, 20 May 2020 16:45:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: jgross@suse.com, boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
Subject: [PATCH 00/10] fix swiotlb-xen for RPi4
Message-ID: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org, tamas@tklengyel.com,
 linux-kernel@vger.kernel.org, roman@zededa.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

This series is a collection of fixes to get Linux running on the RPi4 as
dom0.

Conceptually there are only two significant changes:

- make sure not to call virt_to_page on vmalloc virt addresses (patch
  #1)
- use phys_to_dma and dma_to_phys to translate phys to/from dma
  addresses (all other patches)

In particular, regarding the second part: the RPi4 is the first board
that Xen can run on where DMA addresses differ from physical addresses,
and swiotlb-xen was written with the assumption that phys addr == dma
addr.

This series adds the phys_to_dma and dma_to_phys calls to make it work.


Cheers,

Stefano


From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 23:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbYOt-0007xb-0i; Wed, 20 May 2020 23:45:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbYOr-0007xF-BZ
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 23:45:25 +0000
X-Inumbo-ID: f9c78d2c-9af3-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9c78d2c-9af3-11ea-ae69-bc764e2007e4;
 Wed, 20 May 2020 23:45:25 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 30682208B3;
 Wed, 20 May 2020 23:45:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590018324;
 bh=L20gKWj8zlek7VJV9PmfeUsnev8ccHkezeGqO5r/j0c=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=pnTOgRAhyAKhhIqmhcyTNxvwKS7YZkziiXSq2B8Bf3L2jwD/2wcYZFCXYaLBTB/H3
 a5SP8AwC6k1h40imuvELO7bRpU77mncT/fN5LRmTcW8ndOxsBdcFDJtcHDYQ+ee4yM
 Rw5m1CKwyl4TbFieBldPA5YNa+ZIT6x9HG5pDjsw=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 06/10] swiotlb-xen: add struct device* parameter to
 xen_dma_sync_for_device
Date: Wed, 20 May 2020 16:45:16 -0700
Message-Id: <20200520234520.22563-6-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

The parameter is unused in this patch.
No functional changes.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 arch/arm/xen/mm.c         | 5 +++--
 drivers/xen/swiotlb-xen.c | 4 ++--
 include/xen/swiotlb-xen.h | 5 +++--
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 1a00e8003c64..f2414ea40a79 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -81,8 +81,9 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
 }
 
-void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
+			     phys_addr_t paddr, size_t size,
+			     enum dma_data_direction dir)
 {
 	if (pfn_valid(PFN_DOWN(handle)))
 		arch_sync_dma_for_device(paddr, size, dir);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index f9aa932973dd..ef58f05ae445 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -405,7 +405,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 
 done:
 	if (!dev_is_dma_coherent(dev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		xen_dma_sync_for_device(dev_addr, phys, size, dir);
+		xen_dma_sync_for_device(dev, dev_addr, phys, size, dir);
 	return dev_addr;
 }
 
@@ -455,7 +455,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
-		xen_dma_sync_for_device(dma_addr, paddr, size, dir);
+		xen_dma_sync_for_device(dev, dma_addr, paddr, size, dir);
 }
 
 /*
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index f62d1854780b..6d235fe2b92d 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -7,8 +7,9 @@
 void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 			  phys_addr_t paddr, size_t size,
 			  enum dma_data_direction dir);
-void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir);
+void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
+			     phys_addr_t paddr, size_t size,
+			     enum dma_data_direction dir);
 
 extern int xen_swiotlb_init(int verbose, bool early);
 extern const struct dma_map_ops xen_swiotlb_dma_ops;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 23:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbYOr-0007xN-P6; Wed, 20 May 2020 23:45:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbYOq-0007x1-9y
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 23:45:24 +0000
X-Inumbo-ID: f88ee45b-9af3-11ea-aaaa-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f88ee45b-9af3-11ea-aaaa-12813bfff9fa;
 Wed, 20 May 2020 23:45:23 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 88F6120873;
 Wed, 20 May 2020 23:45:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590018322;
 bh=Vv+EdXpTbwnuZjYd1z0k52yCJtFShb1TwXsEeRgOU74=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=sBVUvkEnVb5CTQSI2t90TDC/MfBITVXjrBOqmirzQIzAeJwdr1sVZyo9BaUXBQWb6
 LLUhyQ43M/LP6uZtRnjF0h93Ma1z5Q2Gci6bg0La5Pw99Q96J4Tt/c0jbgFC2I1Iiz
 ES8P300ufpQlz2z+r0K7tnfeD8kqZ32GjZyT1K4U=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 02/10] swiotlb-xen: remove start_dma_addr
Date: Wed, 20 May 2020 16:45:12 -0700
Message-Id: <20200520234520.22563-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

It is not strictly needed: call virt_to_phys on xen_io_tlb_start
instead. Not having a start_dma_addr around will be useful for the
next patches.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 5 +----
 1 file changed, 1 insertion(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index a42129cba36e..b5e0492b07b9 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -52,8 +52,6 @@ static unsigned long xen_io_tlb_nslabs;
  * Quick lookup value of the bus address of the IOTLB.
  */
 
-static u64 start_dma_addr;
-
 /*
  * Both of these functions should avoid XEN_PFN_PHYS because phys_addr_t
  * can be 32bit when dma_addr_t is 64bit leading to a loss in
@@ -241,7 +239,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
 		m_ret = XEN_SWIOTLB_EFIXUP;
 		goto error;
 	}
-	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
 	if (early) {
 		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
 			 verbose))
@@ -389,7 +386,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 	 */
 	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
 
-	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys,
+	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start), phys,
 				     size, size, dir, attrs);
 	if (map == (phys_addr_t)DMA_MAPPING_ERROR)
 		return DMA_MAPPING_ERROR;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 23:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbYOq-0007x6-HA; Wed, 20 May 2020 23:45:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbYOp-0007ww-Hg
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 23:45:23 +0000
X-Inumbo-ID: f88ee45a-9af3-11ea-aaaa-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f88ee45a-9af3-11ea-aaaa-12813bfff9fa;
 Wed, 20 May 2020 23:45:23 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 1812920748;
 Wed, 20 May 2020 23:45:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590018322;
 bh=2rJi6KfJ7cKpdb3KgYosM8AG09eJiSBLH85aYQo6tfE=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=Z/XEnES7H/3TARKOmkB1qbDupxUWdE2Iq1shY3iRXvUvUqgWMI9JC8cDHfeS8Yl+u
 +HcspWBrKeLpETCuygnY/0/UPeXGziAinuXs3JHNLhNZdDIMXyospJd0iz+iNx4WVf
 DXuaL5c43gHfkKb/gZV0rdGZ8hqWUrfcfPK0H0bI=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 01/10] swiotlb-xen: use vmalloc_to_page on vmalloc virt
 addresses
Date: Wed, 20 May 2020 16:45:11 -0700
Message-Id: <20200520234520.22563-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Don't just assume that virt_to_page works on all virtual addresses.
Instead, add an is_vmalloc_addr check and use vmalloc_to_page on
vmalloc virt addresses.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b6d27762c6f8..a42129cba36e 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	int order = get_order(size);
 	phys_addr_t phys;
 	u64 dma_mask = DMA_BIT_MASK(32);
+	struct page *pg;
 
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
@@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
 
+	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
+				      virt_to_page(vaddr);
 	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
 		     range_straddles_page_boundary(phys, size)) &&
-	    TestClearPageXenRemapped(virt_to_page(vaddr)))
+	    TestClearPageXenRemapped(pg))
 		xen_destroy_contiguous_region(phys, order);
 
 	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 23:45:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbYOv-0007yH-8N; Wed, 20 May 2020 23:45:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbYOu-0007y3-I1
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 23:45:28 +0000
X-Inumbo-ID: f9098f8e-9af3-11ea-aaaa-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f9098f8e-9af3-11ea-aaaa-12813bfff9fa;
 Wed, 20 May 2020 23:45:23 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id E850620884;
 Wed, 20 May 2020 23:45:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590018323;
 bh=BdtSeWgXsb31UNe8YibrIGXB2m1r+NP2vHy9sK75VrU=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=D4juGHuWLwXCUQ4BiyjK2os9avjDJbMjWHGYOfBv0bIT/y1SSkYFzmXf+J0FUlGmI
 uNxOlH4gYVmZy9fLvLCB6TRrezbkFec377Yz93CtKc+vcXA4qedTTTUz7DcXF6j/EL
 PaBHNJUauI8pORC7ruz8zCF/f70GVyBFHjifAemc=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 03/10] swiotlb-xen: add struct device* parameter to
 xen_phys_to_bus
Date: Wed, 20 May 2020 16:45:13 -0700
Message-Id: <20200520234520.22563-3-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

The parameter is unused in this patch.
No functional changes.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index b5e0492b07b9..958ee5517e0b 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -57,7 +57,7 @@ static unsigned long xen_io_tlb_nslabs;
  * can be 32bit when dma_addr_t is 64bit leading to a loss in
  * information if the shift is done before casting to 64bit.
  */
-static inline dma_addr_t xen_phys_to_bus(phys_addr_t paddr)
+static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
 {
 	unsigned long bfn = pfn_to_bfn(XEN_PFN_DOWN(paddr));
 	dma_addr_t dma = (dma_addr_t)bfn << XEN_PAGE_SHIFT;
@@ -78,9 +78,9 @@ static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
 	return paddr;
 }
 
-static inline dma_addr_t xen_virt_to_bus(void *address)
+static inline dma_addr_t xen_virt_to_bus(struct device *dev, void *address)
 {
-	return xen_phys_to_bus(virt_to_phys(address));
+	return xen_phys_to_bus(dev, virt_to_phys(address));
 }
 
 static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
@@ -309,7 +309,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	 * Do not use virt_to_phys(ret) because on ARM it doesn't correspond
 	 * to *dma_handle. */
 	phys = *dma_handle;
-	dev_addr = xen_phys_to_bus(phys);
+	dev_addr = xen_phys_to_bus(hwdev, phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
 	    !range_straddles_page_boundary(phys, size))
 		*dma_handle = dev_addr;
@@ -367,7 +367,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 				unsigned long attrs)
 {
 	phys_addr_t map, phys = page_to_phys(page) + offset;
-	dma_addr_t dev_addr = xen_phys_to_bus(phys);
+	dma_addr_t dev_addr = xen_phys_to_bus(dev, phys);
 
 	BUG_ON(dir == DMA_NONE);
 	/*
@@ -392,7 +392,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 		return DMA_MAPPING_ERROR;
 
 	phys = map;
-	dev_addr = xen_phys_to_bus(map);
+	dev_addr = xen_phys_to_bus(dev, map);
 
 	/*
 	 * Ensure that the address returned is DMA'ble
@@ -536,7 +536,7 @@ xen_swiotlb_sync_sg_for_device(struct device *dev, struct scatterlist *sgl,
 static int
 xen_swiotlb_dma_supported(struct device *hwdev, u64 mask)
 {
-	return xen_virt_to_bus(xen_io_tlb_end - 1) <= mask;
+	return xen_virt_to_bus(hwdev, xen_io_tlb_end - 1) <= mask;
 }
 
 const struct dma_map_ops xen_swiotlb_dma_ops = {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 23:45:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbYOx-0007zF-HQ; Wed, 20 May 2020 23:45:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbYOw-0007yu-AZ
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 23:45:30 +0000
X-Inumbo-ID: fa02ce0a-9af3-11ea-b9cf-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fa02ce0a-9af3-11ea-b9cf-bc764e2007e4;
 Wed, 20 May 2020 23:45:25 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 98AEA208B8;
 Wed, 20 May 2020 23:45:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590018324;
 bh=68bz29KGiO4K1vLdUXFPavN9cfZPRxxaUFsbZLLeOQI=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=dTE+kbnCMNQDzzVwFHnyCGj6ceXloekheSqEc/mfi4uXHnNX9YcX6KNuk+7Xjfu1e
 4Ivslg+Y9YsbA3JPm/E/DYimCIufADyo/rByZJDKZvlNR4WU+bwFRttIiSiZLilf48
 QwBlp0h7JbG+8wQ41SDtLC7LELN2RoJXH8oONqRI=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 07/10] swiotlb-xen: add struct device* parameter to
 is_xen_swiotlb_buffer
Date: Wed, 20 May 2020 16:45:17 -0700
Message-Id: <20200520234520.22563-7-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

The parameter is unused in this patch.
No functional changes.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index ef58f05ae445..c50448fd9b75 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -97,7 +97,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 	return 0;
 }
 
-static int is_xen_swiotlb_buffer(dma_addr_t dma_addr)
+static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 {
 	unsigned long bfn = XEN_PFN_DOWN(dma_addr);
 	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
@@ -428,7 +428,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 		xen_dma_sync_for_cpu(hwdev, dev_addr, paddr, size, dir);
 
 	/* NOTE: We use dev_addr here, not paddr! */
-	if (is_xen_swiotlb_buffer(dev_addr))
+	if (is_xen_swiotlb_buffer(hwdev, dev_addr))
 		swiotlb_tbl_unmap_single(hwdev, paddr, size, size, dir, attrs);
 }
 
@@ -441,7 +441,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	if (!dev_is_dma_coherent(dev))
 		xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
 
-	if (is_xen_swiotlb_buffer(dma_addr))
+	if (is_xen_swiotlb_buffer(dev, dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
 }
 
@@ -451,7 +451,7 @@ xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 {
 	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
-	if (is_xen_swiotlb_buffer(dma_addr))
+	if (is_xen_swiotlb_buffer(dev, dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
 
 	if (!dev_is_dma_coherent(dev))
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:45 2020
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 04/10] swiotlb-xen: add struct device* parameter to
 xen_bus_to_phys
Date: Wed, 20 May 2020 16:45:14 -0700
Message-Id: <20200520234520.22563-4-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

The parameter is unused in this patch.
No functional changes.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 958ee5517e0b..9b4306a56feb 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -67,7 +67,7 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
 	return dma;
 }
 
-static inline phys_addr_t xen_bus_to_phys(dma_addr_t baddr)
+static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
 {
 	unsigned long xen_pfn = bfn_to_pfn(XEN_PFN_DOWN(baddr));
 	dma_addr_t dma = (dma_addr_t)xen_pfn << XEN_PAGE_SHIFT;
@@ -339,7 +339,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 
 	/* do not use virt_to_phys because on ARM it doesn't return you the
 	 * physical address */
-	phys = xen_bus_to_phys(dev_addr);
+	phys = xen_bus_to_phys(hwdev, dev_addr);
 
 	/* Convert the size to actually allocated. */
 	size = 1UL << (order + XEN_PAGE_SHIFT);
@@ -420,7 +420,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
 static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 		size_t size, enum dma_data_direction dir, unsigned long attrs)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dev_addr);
+	phys_addr_t paddr = xen_bus_to_phys(hwdev, dev_addr);
 
 	BUG_ON(dir == DMA_NONE);
 
@@ -436,7 +436,7 @@ static void
 xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dma_addr);
+	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
 	if (!dev_is_dma_coherent(dev))
 		xen_dma_sync_for_cpu(dma_addr, paddr, size, dir);
@@ -449,7 +449,7 @@ static void
 xen_swiotlb_sync_single_for_device(struct device *dev, dma_addr_t dma_addr,
 		size_t size, enum dma_data_direction dir)
 {
-	phys_addr_t paddr = xen_bus_to_phys(dma_addr);
+	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
 	if (is_xen_swiotlb_buffer(dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_DEVICE);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:47 2020
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 09/10] xen/arm: introduce phys/dma translations in
 xen_dma_sync_for_*
Date: Wed, 20 May 2020 16:45:19 -0700
Message-Id: <20200520234520.22563-9-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Add phys_to_dma/dma_to_phys calls to
xen_dma_sync_for_cpu, xen_dma_sync_for_device, and
xen_arch_need_swiotlb.

In xen_arch_need_swiotlb, take the opportunity to switch to the simpler
pfn_valid check we use everywhere else.

dma_cache_maint is fixed by the next patch.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 arch/arm/xen/mm.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f2414ea40a79..7639251bcc79 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -1,5 +1,6 @@
 // SPDX-License-Identifier: GPL-2.0-only
 #include <linux/cpu.h>
+#include <linux/dma-direct.h>
 #include <linux/dma-noncoherent.h>
 #include <linux/gfp.h>
 #include <linux/highmem.h>
@@ -75,7 +76,7 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 			  phys_addr_t paddr, size_t size,
 			  enum dma_data_direction dir)
 {
-	if (pfn_valid(PFN_DOWN(handle)))
+	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
 		arch_sync_dma_for_cpu(paddr, size, dir);
 	else if (dir != DMA_TO_DEVICE)
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
@@ -85,7 +86,7 @@ void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
 			     phys_addr_t paddr, size_t size,
 			     enum dma_data_direction dir)
 {
-	if (pfn_valid(PFN_DOWN(handle)))
+	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
 		arch_sync_dma_for_device(paddr, size, dir);
 	else if (dir == DMA_FROM_DEVICE)
 		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
@@ -97,8 +98,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
 			   phys_addr_t phys,
 			   dma_addr_t dev_addr)
 {
-	unsigned int xen_pfn = XEN_PFN_DOWN(phys);
-	unsigned int bfn = XEN_PFN_DOWN(dev_addr);
+	unsigned int bfn = XEN_PFN_DOWN(dma_to_phys(dev, dev_addr));
 
 	/*
 	 * The swiotlb buffer should be used if
@@ -115,7 +115,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
 	 * require a bounce buffer because the device doesn't support coherent
 	 * memory and we are not able to flush the cache.
 	 */
-	return (!hypercall_cflush && (xen_pfn != bfn) &&
+	return (!hypercall_cflush && !pfn_valid(bfn) &&
 		!dev_is_dma_coherent(dev));
 }
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:49 2020
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 05/10] swiotlb-xen: add struct device* parameter to
 xen_dma_sync_for_cpu
Date: Wed, 20 May 2020 16:45:15 -0700
Message-Id: <20200520234520.22563-5-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

The parameter is unused in this patch.
No functional changes.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 arch/arm/xen/mm.c         | 5 +++--
 drivers/xen/swiotlb-xen.c | 4 ++--
 include/xen/swiotlb-xen.h | 5 +++--
 3 files changed, 8 insertions(+), 6 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index d40e9e5fc52b..1a00e8003c64 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -71,8 +71,9 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
  * pfn_valid returns true the pages is local and we can use the native
  * dma-direct functions, otherwise we call the Xen specific version.
  */
-void xen_dma_sync_for_cpu(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir)
+void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
+			  phys_addr_t paddr, size_t size,
+			  enum dma_data_direction dir)
 {
 	if (pfn_valid(PFN_DOWN(handle)))
 		arch_sync_dma_for_cpu(paddr, size, dir);
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 9b4306a56feb..f9aa932973dd 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -425,7 +425,7 @@ static void xen_swiotlb_unmap_page(struct device *hwdev, dma_addr_t dev_addr,
 	BUG_ON(dir == DMA_NONE);
 
 	if (!dev_is_dma_coherent(hwdev) && !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
-		xen_dma_sync_for_cpu(dev_addr, paddr, size, dir);
+		xen_dma_sync_for_cpu(hwdev, dev_addr, paddr, size, dir);
 
 	/* NOTE: We use dev_addr here, not paddr! */
 	if (is_xen_swiotlb_buffer(dev_addr))
@@ -439,7 +439,7 @@ xen_swiotlb_sync_single_for_cpu(struct device *dev, dma_addr_t dma_addr,
 	phys_addr_t paddr = xen_bus_to_phys(dev, dma_addr);
 
 	if (!dev_is_dma_coherent(dev))
-		xen_dma_sync_for_cpu(dma_addr, paddr, size, dir);
+		xen_dma_sync_for_cpu(dev, dma_addr, paddr, size, dir);
 
 	if (is_xen_swiotlb_buffer(dma_addr))
 		swiotlb_tbl_sync_single(dev, paddr, size, dir, SYNC_FOR_CPU);
diff --git a/include/xen/swiotlb-xen.h b/include/xen/swiotlb-xen.h
index ffc0d3902b71..f62d1854780b 100644
--- a/include/xen/swiotlb-xen.h
+++ b/include/xen/swiotlb-xen.h
@@ -4,8 +4,9 @@
 
 #include <linux/swiotlb.h>
 
-void xen_dma_sync_for_cpu(dma_addr_t handle, phys_addr_t paddr, size_t size,
-		enum dma_data_direction dir);
+void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
+			  phys_addr_t paddr, size_t size,
+			  enum dma_data_direction dir);
 void xen_dma_sync_for_device(dma_addr_t handle, phys_addr_t paddr, size_t size,
 		enum dma_data_direction dir);
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:51 2020
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 10/10] xen/arm: call dma_to_phys on the dma_addr_t parameter
 of dma_cache_maint
Date: Wed, 20 May 2020 16:45:20 -0700
Message-Id: <20200520234520.22563-10-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Add a struct device* parameter to dma_cache_maint.

Translate the dma_addr_t parameter of dma_cache_maint by calling
dma_to_phys, both for the first page and for each following page when
the operation spans multiple pages.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 arch/arm/xen/mm.c | 15 +++++++++------
 1 file changed, 9 insertions(+), 6 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 7639251bcc79..6ddf3b3c1ab5 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -43,15 +43,18 @@ unsigned long xen_get_swiotlb_free_pages(unsigned int order)
 static bool hypercall_cflush = false;
 
 /* buffers in highmem or foreign pages cannot cross page boundaries */
-static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
+static void dma_cache_maint(struct device *dev, dma_addr_t handle,
+			    size_t size, u32 op)
 {
 	struct gnttab_cache_flush cflush;
 
-	cflush.a.dev_bus_addr = handle & XEN_PAGE_MASK;
 	cflush.offset = xen_offset_in_page(handle);
 	cflush.op = op;
+	handle &= XEN_PAGE_MASK;
 
 	do {
+		cflush.a.dev_bus_addr = dma_to_phys(dev, handle);
+
 		if (size + cflush.offset > XEN_PAGE_SIZE)
 			cflush.length = XEN_PAGE_SIZE - cflush.offset;
 		else
@@ -60,7 +63,7 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
 		HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
 
 		cflush.offset = 0;
-		cflush.a.dev_bus_addr += cflush.length;
+		handle += cflush.length;
 		size -= cflush.length;
 	} while (size);
 }
@@ -79,7 +82,7 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
 	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
 		arch_sync_dma_for_cpu(paddr, size, dir);
 	else if (dir != DMA_TO_DEVICE)
-		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
+		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_INVAL);
 }
 
 void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
@@ -89,9 +92,9 @@ void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
 	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
 		arch_sync_dma_for_device(paddr, size, dir);
 	else if (dir == DMA_FROM_DEVICE)
-		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
+		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_INVAL);
 	else
-		dma_cache_maint(handle, size, GNTTAB_CACHE_CLEAN);
+		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_CLEAN);
 }
 
 bool xen_arch_need_swiotlb(struct device *dev,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:45:55 2020
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com,
	boris.ostrovsky@oracle.com,
	konrad.wilk@oracle.com
Subject: [PATCH 08/10] swiotlb-xen: introduce phys_to_dma/dma_to_phys
 translations
Date: Wed, 20 May 2020 16:45:18 -0700
Message-Id: <20200520234520.22563-8-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
Cc: xen-devel@lists.xenproject.org, sstabellini@kernel.org,
 linux-kernel@vger.kernel.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Call dma_to_phys in is_xen_swiotlb_buffer.
Call phys_to_dma in xen_phys_to_bus.
Call dma_to_phys in xen_bus_to_phys.

Everything is taken care of by these changes except for
xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a
few explicit phys_to_dma/dma_to_phys calls.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 drivers/xen/swiotlb-xen.c | 20 ++++++++++++--------
 1 file changed, 12 insertions(+), 8 deletions(-)

diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index c50448fd9b75..d011c4c7aa72 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -64,14 +64,16 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
 
 	dma |= paddr & ~XEN_PAGE_MASK;
 
-	return dma;
+	return phys_to_dma(dev, dma);
 }
 
-static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
+static inline phys_addr_t xen_bus_to_phys(struct device *dev,
+					  dma_addr_t dma_addr)
 {
+	phys_addr_t baddr = dma_to_phys(dev, dma_addr);
 	unsigned long xen_pfn = bfn_to_pfn(XEN_PFN_DOWN(baddr));
-	dma_addr_t dma = (dma_addr_t)xen_pfn << XEN_PAGE_SHIFT;
-	phys_addr_t paddr = dma;
+	phys_addr_t paddr = (xen_pfn << XEN_PAGE_SHIFT) |
+			    (baddr & ~XEN_PAGE_MASK);
 
 	paddr |= baddr & ~XEN_PAGE_MASK;
 
@@ -99,7 +101,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
 
 static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 {
-	unsigned long bfn = XEN_PFN_DOWN(dma_addr);
+	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
 	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
 	phys_addr_t paddr = XEN_PFN_PHYS(xen_pfn);
 
@@ -304,11 +306,11 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 	if (hwdev && hwdev->coherent_dma_mask)
 		dma_mask = hwdev->coherent_dma_mask;
 
-	/* At this point dma_handle is the physical address, next we are
+	/* At this point dma_handle is the dma address, next we are
 	 * going to set it to the machine address.
 	 * Do not use virt_to_phys(ret) because on ARM it doesn't correspond
 	 * to *dma_handle. */
-	phys = *dma_handle;
+	phys = dma_to_phys(hwdev, *dma_handle);
 	dev_addr = xen_phys_to_bus(hwdev, phys);
 	if (((dev_addr + size - 1 <= dma_mask)) &&
 	    !range_straddles_page_boundary(phys, size))
@@ -319,6 +321,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
 			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
 			return NULL;
 		}
+		*dma_handle = phys_to_dma(hwdev, *dma_handle);
 		SetPageXenRemapped(virt_to_page(ret));
 	}
 	memset(ret, 0, size);
@@ -351,7 +354,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
 	    TestClearPageXenRemapped(pg))
 		xen_destroy_contiguous_region(phys, order);
 
-	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
+	xen_free_coherent_pages(hwdev, size, vaddr, phys_to_dma(hwdev, phys),
+				attrs);
 }
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 20 23:51:40 2020
MIME-Version: 1.0
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
From: Roman Shaposhnik <roman@zededa.com>
Date: Wed, 20 May 2020 16:51:23 -0700
Message-ID: <CAMmSBy9VBxjSCRcfyiZ-RY8eyYOooeNfCqrvirhWzfpSbAQyuw@mail.gmail.com>
Subject: Re: [PATCH 00/10] fix swiotlb-xen for RPi4
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, tamas@tklengyel.com,
 Konrad Wilk <konrad.wilk@oracle.com>, linux-kernel@vger.kernel.org,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 4:45 PM Stefano Stabellini
<sstabellini@kernel.org> wrote:
>
> Hi all,
>
> This series is a collection of fixes to get Linux running on the RPi4 as
> dom0.
>
> Conceptually there are only two significant changes:
>
> - make sure not to call virt_to_page on vmalloc virt addresses (patch
>   #1)
> - use phys_to_dma and dma_to_phys to translate phys to/from dma
>   addresses (all other patches)
>
> In particular in regards to the second part, the RPi4 is the first
> board where Xen can run that has the property that dma addresses are
> different from physical addresses, and swiotlb-xen was written with the
> assumption that phys addr == dma addr.
>
> This series adds the phys_to_dma and dma_to_phys calls to make it work.

Great to see this! Stefano, any chance you can put it in a branch some place
so I can test the final version?

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed May 20 23:56:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 20 May 2020 23:56:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbYZR-0001SJ-Ac; Wed, 20 May 2020 23:56:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2h4=7C=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbYZQ-0001SE-Nl
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 23:56:20 +0000
X-Inumbo-ID: 8070b7da-9af5-11ea-b07b-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8070b7da-9af5-11ea-b07b-bc764e2007e4;
 Wed, 20 May 2020 23:56:20 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7773220708;
 Wed, 20 May 2020 23:56:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590018979;
 bh=b+7RuEJNy8D/QcJ0wriRra2UVUAspE2s+m1lbqHAsB0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=l2UoEQEzsRADKyvFaNG6S8GjntlJDaSWSTSicqkJe5Jzsc4y5ZqffdC9pc/WobvTJ
 XDhda7d9E/z8h6aUD/kiErxVMDFOwyqPxeOOewgJg/JGosWUhzo7V6kpi1O13U8R84
 6SGCtDLy2ymm24kRznYs+6GNzvx50a6okZAfHW6Q=
Date: Wed, 20 May 2020 16:56:19 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Roman Shaposhnik <roman@zededa.com>
Subject: Re: [PATCH 00/10] fix swiotlb-xen for RPi4
In-Reply-To: <CAMmSBy9VBxjSCRcfyiZ-RY8eyYOooeNfCqrvirhWzfpSbAQyuw@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2005201653310.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <CAMmSBy9VBxjSCRcfyiZ-RY8eyYOooeNfCqrvirhWzfpSbAQyuw@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Konrad Wilk <konrad.wilk@oracle.com>, linux-kernel@vger.kernel.org,
 tamas@tklengyel.com, Xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 20 May 2020, Roman Shaposhnik wrote:
> On Wed, May 20, 2020 at 4:45 PM Stefano Stabellini
> <sstabellini@kernel.org> wrote:
> >
> > Hi all,
> >
> > This series is a collection of fixes to get Linux running on the RPi4 as
> > dom0.
> >
> > Conceptually there are only two significant changes:
> >
> > - make sure not to call virt_to_page on vmalloc virt addresses (patch
> >   #1)
> > - use phys_to_dma and dma_to_phys to translate phys to/from dma
> >   addresses (all other patches)
> >
> > In particular in regards to the second part, the RPi4 is the first
> > board where Xen can run that has the property that dma addresses are
> > different from physical addresses, and swiotlb-xen was written with the
> > assumption that phys addr == dma addr.
> >
> > This series adds the phys_to_dma and dma_to_phys calls to make it work.
> 
> Great to see this! Stefano, any chance you can put it in a branch some place
> so I can test the final version?

Here it is, but keep in mind that it is based on Linux master (because
it is meant to go upstream):

  git://git.kernel.org/pub/scm/linux/kernel/git/sstabellini/xen.git fix-rip4-v1


From xen-devel-bounces@lists.xenproject.org Thu May 21 01:29:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 01:29:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jba0r-0002K4-8S; Thu, 21 May 2020 01:28:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KCrj=7D=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jba0q-0002Jz-9j
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 01:28:44 +0000
X-Inumbo-ID: 684dbefc-9b02-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 684dbefc-9b02-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 01:28:43 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id k19so5428916edv.9
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 18:28:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=QvUTjO36+ARpFUBm1M1MQO4pROw0xKLgs4hkEFY4Hr0=;
 b=AfJEpZwPFkoIVKQOF3pKkfl90esJmuGjuhAQkuZwEC2sffLm67SAMGgkQYtXFBIj6o
 8KQVYd2Z//ykhHMsVUmSlO77vaUe2S3g0j6B3HApYSm51mTw9me5GIXWITIweCjl2Pz2
 XgzCVTKC+rWqqEJIRyQuJRaBgsg+fQfO2A6kG3G6c9rlUobWU4ISuktON+s2vkn+hvbW
 J7Oy886zXjXXxMtDyi7qqfrPHKfREMS4C+BluvN4sz3Erp9p5QY8etknOjRF9tCwVqVB
 CGJkD/oqpNXUayS735If8AOywHIgEODe7Na9p32k2Xy4n6B5xn7R5+LmqvpQ9lKvNG0W
 wGYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=QvUTjO36+ARpFUBm1M1MQO4pROw0xKLgs4hkEFY4Hr0=;
 b=DZta3ozP7iV0F1zy+VI7Zz2p9g0tKkBv3y8+TqYTVR8I/o/dy1FJlzWbzprWmJFqfh
 E/0F3wdN91VvAfPQBMuuXItRIwJNbSSZ/Ub8RaY7lBEIey9kF0fioBELCX0JAma67Ys+
 JOYMbGS9WPdbf5Evb0rg+Exk38C144AqefQIIeJfBw/FipIXet1mIz6hBpcsEgvXK4VA
 7wmOn134GHKeqJst6b18fal6gFEk/Q8jVH4R+y+dLcxzq+JLgG3K46Hnf7fRPywmSlZ9
 4vQmwnfZcc6/Z009Ov+EF7SAoff98hueJt9S4w9LosIEbXaHxl3o3HE6PzaYI/bIeal1
 dB1Q==
X-Gm-Message-State: AOAM533zje3kVkgN25cuhPPTsNhKSaidATjkqrXdRPXy4QnEtx4nh4B/
 AJIVuASlXTuvG0fQldTw11RFUfKEvQ4=
X-Google-Smtp-Source: ABdhPJzTb/tj8Y9vvRs5quUI8IId0yPtwsH4QdKneJOaniLCQP0d49EBg88JmUkdCkx1/QUfVFLfHQ==
X-Received: by 2002:a50:9e2c:: with SMTP id z41mr5736987ede.193.1590024522010; 
 Wed, 20 May 2020 18:28:42 -0700 (PDT)
Received: from mail-wm1-f53.google.com (mail-wm1-f53.google.com.
 [209.85.128.53])
 by smtp.gmail.com with ESMTPSA id cq10sm3196530edb.48.2020.05.20.18.28.40
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 20 May 2020 18:28:41 -0700 (PDT)
Received: by mail-wm1-f53.google.com with SMTP id z4so4190685wmi.2
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 18:28:40 -0700 (PDT)
X-Received: by 2002:a1c:acc8:: with SMTP id v191mr7126410wme.154.1590024520148; 
 Wed, 20 May 2020 18:28:40 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1589561218.git.tamas@tklengyel.com>
 <72d4d282dd20b79ebdbaf1f70865ea38b075c5c0.1589561218.git.tamas@tklengyel.com>
 <ec19beb2-6e69-4e62-b260-0d76b2a7f5a7@suse.com>
 <CABfawhmg+ZtzKvJecAJE8=C+rnPiywUy8vO81Yz9dC4j2h-feg@mail.gmail.com>
 <d0cd29c0-3070-ceb2-cd21-4ae359a0ec57@suse.com>
In-Reply-To: <d0cd29c0-3070-ceb2-cd21-4ae359a0ec57@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 20 May 2020 19:28:03 -0600
X-Gmail-Original-Message-ID: <CABfawhnjp8a3XEpbTiZ5VGyZ9uFQqow0Gzf1sVei64MzOX6XVA@mail.gmail.com>
Message-ID: <CABfawhnjp8a3XEpbTiZ5VGyZ9uFQqow0Gzf1sVei64MzOX6XVA@mail.gmail.com>
Subject: Re: [PATCH 1/3] xen/monitor: Control register values
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 7:48 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 20.05.2020 15:42, Tamas K Lengyel wrote:
> > On Wed, May 20, 2020 at 7:36 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 15.05.2020 18:53, Tamas K Lengyel wrote:
> >>> Extend the monitor_op domctl to include option that enables
> >>> controlling what values certain registers are permitted to hold
> >>> by a monitor subscriber.
> >>
> >> This needs a bit more explanation, especially for those of us
> >> who aren't that introspection savvy. For example, from the text
> >> here I didn't expect a simple bool control, but something where
> >> actual (register) values get passed back and forth.
> >>
> >>> --- a/xen/arch/x86/hvm/hvm.c
> >>> +++ b/xen/arch/x86/hvm/hvm.c
> >>> @@ -2263,9 +2263,10 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
> >>>      {
> >>>          ASSERT(v->arch.vm_event);
> >>>
> >>> -        if ( hvm_monitor_crX(CR0, value, old_value) )
> >>> +        if ( hvm_monitor_crX(CR0, value, old_value) &&
> >>> +             v->domain->arch.monitor.control_register_values )
> >>>          {
> >>> -            /* The actual write will occur in hvm_do_resume(), if permitted. */
> >>> +            /* The actual write will occur in hvm_do_resume, if permitted. */
> >>
> >> Please can you leave alone this and the similar comments below.
> >> And for consistency _add_ parentheses to the one new instance
> >> you add?
> >
> > I changed it because now it doesn't fit into the 80-character limit
> > below, and then changed it everywhere _for_ consistency.

>
> The 80-char limit is easy to deal with - wrap the line.
>
> >>> --- a/xen/arch/x86/monitor.c
> >>> +++ b/xen/arch/x86/monitor.c
> >>> @@ -144,7 +144,15 @@ int arch_monitor_domctl_event(struct domain *d,
> >>>                                struct xen_domctl_monitor_op *mop)
> >>>  {
> >>>      struct arch_domain *ad = &d->arch;
> >>> -    bool requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
> >>> +    bool requested_status;
> >>> +
> >>> +    if ( XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS == mop->op )
> >>> +    {
> >>> +        ad->monitor.control_register_values = true;
> >>
> >> And there's no way to clear this flag again?
> >
> > There is. Disable the monitor vm_event interface and reinitialize.
>
> Quite heavy handed, isn't it?

Not really. It's perfectly suitable for what it's used for. You either
need this feature for the duration of your monitoring or you don't.
There is no in-between.

Tamas


From xen-devel-bounces@lists.xenproject.org Thu May 21 02:19:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 02:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbane-0006wv-Vi; Thu, 21 May 2020 02:19:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KCrj=7D=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jbane-0006wp-0W
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 02:19:10 +0000
X-Inumbo-ID: 73c1592c-9b09-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73c1592c-9b09-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 02:19:09 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id g9so5492080edr.8
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 19:19:09 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=tZDQejP2Uh/kuBZJ8hJtfh64z2Jug7j1tZWZWWhfToE=;
 b=LJKZMAkKz3K9MatJq3u4dRRMNmb8aJarYtW50F7bwNk5j2q8vHx6OCPhMXDPB6Idj2
 nLoANbkr3piMSnOfeFoS9aXJfvvTk3j/wXMwLm+IapkxRKbjJup834gep0X9S2KTgD+l
 VNb/fK7xkU4dwLQCwB1OOeJgDDmxMFkVt0e0hQGTMCRuATLsyLIJHDXrqWv3meHVIqK8
 i/IovH3nvVM1mOfAxcOxH6D4rePr66Nb2rFirqEWO8ze34LTi3Mzly/FO4aUvVH0+8SA
 6b1woAh5albYbqpAbm0IfAqHXpL6iL1LT3gYpsryAQsmzma4MbNKfikP5IwGbF9rgj2S
 8z5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=tZDQejP2Uh/kuBZJ8hJtfh64z2Jug7j1tZWZWWhfToE=;
 b=jmGCd7bcgfga5IPgpstELlzZniNJsw20DLAfOGX6R8yP0H97N7Ojw5kjTIhWYo+L2F
 qx3Nt6v1+M7+NzuRlWh/etP1kDoiQLDJOMUlhlxsGTyAMZNCof24og8dHD3v4GpZv+Ij
 cR7zIfJ/hYsPQ8wG4aR/EudkxyJ/eTmvm9+aIYqfFH5Be4uSHvPMA6jtkCZJy0HA6qIa
 oADwfUg/hJbT9Ad1M0a843y2QjOGwVyEtvc+OUSgzHz0I0kyLkIl+MhCZVD5mpAP+DJ3
 9qsmkCd034v2YhzhxaB9by9qe5+BlIMqfYFgNghrtIc2Uk0hvUMe/cRDOZebl/zk5zbv
 D5vA==
X-Gm-Message-State: AOAM530mxxWHF0guWG6BYZDZdg6KPd8cR2NkX9QhMS8pcPYofuI2anPo
 NIllrceUcg+6lISy+VYVRdZoIO7j/Go=
X-Google-Smtp-Source: ABdhPJyu3X4vnsw6CZxBSTpViFnrfQB3f0TcDmaQrhqWjD62Ir4XbkzWOgTC398Fv5HCxx4NdrZt9g==
X-Received: by 2002:a05:6402:b06:: with SMTP id
 bm6mr6263568edb.17.1590027548179; 
 Wed, 20 May 2020 19:19:08 -0700 (PDT)
Received: from mail-wr1-f53.google.com (mail-wr1-f53.google.com.
 [209.85.221.53])
 by smtp.gmail.com with ESMTPSA id y10sm3316108ejw.25.2020.05.20.19.19.06
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 20 May 2020 19:19:07 -0700 (PDT)
Received: by mail-wr1-f53.google.com with SMTP id l18so5127817wrn.6
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 19:19:06 -0700 (PDT)
X-Received: by 2002:adf:fccd:: with SMTP id f13mr6753320wrs.386.1590027546599; 
 Wed, 20 May 2020 19:19:06 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1589561218.git.tamas@tklengyel.com>
 <1168bacc61f655f559c236cdf63a6b2beccd4d6b.1589561218.git.tamas@tklengyel.com>
 <28e50e15-410e-096d-51f1-e304c9ef8cdb@suse.com>
In-Reply-To: <28e50e15-410e-096d-51f1-e304c9ef8cdb@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 20 May 2020 20:18:30 -0600
X-Gmail-Original-Message-ID: <CABfawhk36A0vtXVR+TkP3FL7_-zBMWgnPks34r1-XjZG+uBNJw@mail.gmail.com>
Message-ID: <CABfawhk36A0vtXVR+TkP3FL7_-zBMWgnPks34r1-XjZG+uBNJw@mail.gmail.com>
Subject: Re: [PATCH 3/3] xen/vm_event: Add safe to disable vm_event
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 7:45 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 15.05.2020 18:53, Tamas K Lengyel wrote:
> > --- a/xen/arch/x86/hvm/hvm.c
> > +++ b/xen/arch/x86/hvm/hvm.c
> > @@ -563,15 +563,41 @@ void hvm_do_resume(struct vcpu *v)
> >          v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
> >      }
> >
> > -    if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
> > +    if ( unlikely(v->arch.vm_event) )
> >      {
> > -        struct x86_event info;
> > +        struct domain *d = v->domain;
>
> const

This can't be const; we disable the safe_to_disable option below after
sending the one-shot async event.

Tamas


From xen-devel-bounces@lists.xenproject.org Thu May 21 02:27:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 02:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbavx-0007q3-Tn; Thu, 21 May 2020 02:27:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbavx-0007py-43
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 02:27:45 +0000
X-Inumbo-ID: a35087d4-9b0a-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a35087d4-9b0a-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 02:27:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9W/CtYQ4eTucuKBFtMiNqD3RGLlpL3PtWSRa5mhZFm8=; b=FOazj5dO20Xih784FBXE2STss
 /+AQs4CV9c/qclAHs2kJzko/r9QAMysfDc0X7LeiFH1evn3KO4maElhNeUVm2U/JyBogapgtxgGt2
 yCfrYMwss9HZFjrMlFinMKf/1U/o+gW9UE9pmWCVstYrEAr5IhvIZLgOU6DGXWBft3OOg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbavp-0004jI-GP; Thu, 21 May 2020 02:27:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbavp-0008OL-65; Thu, 21 May 2020 02:27:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbavp-0008I5-4l; Thu, 21 May 2020 02:27:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150279-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150279: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=cdea123f1976549ecc72644588cc5ce1491606c4
X-Osstest-Versions-That: xen=e235fa2794c95365519eac714d6ea82f8e64752e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 02:27:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150279 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150279/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 150267

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150267
 test-armhf-armhf-xl-rtds   16 guest-start/debian.repeat fail blocked in 150267
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150267
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150267
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150267
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150267
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150267
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150267
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150267
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150267
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150267
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cdea123f1976549ecc72644588cc5ce1491606c4
baseline version:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e

Last test of basis   150267  2020-05-20 04:13:30 Z    0 days
Testing same since   150279  2020-05-20 17:07:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  David Woodhouse <dwmw@amazon.co.uk>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit cdea123f1976549ecc72644588cc5ce1491606c4
Author: Jan Beulich <jbeulich@suse.com>
Date:   Wed May 20 12:49:28 2020 +0200

    x86/mem-paging: further adjustments to p2m_mem_paging_prep()'s error handling
    
    Address late comments on ecb913be4aaa ("x86/mem-paging: correct
    p2m_mem_paging_prep()'s error handling"):
    - insert a gprintk() ahead of domain_crash(),
    - add a comment.
    
    Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 5fef1fd713660406a6187ef352fbf79986abfe43
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Wed May 20 12:48:37 2020 +0200

    x86/idle: rework C6 EOI workaround
    
    Change the C6 EOI workaround (errata AAJ72) to use x86_match_cpu. Also
    call the workaround from mwait_idle, previously it was only used by
    the ACPI idle driver. Finally make sure the routine is called for all
    states equal or greater than ACPI_STATE_C3, note that the ACPI driver
    doesn't currently handle them, but the errata condition shouldn't be
    limited by that.
    
    Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit f546619ce1c5d4de694597c28358cd8e23eea231
Author: David Woodhouse <dwmw@amazon.co.uk>
Date:   Wed May 20 12:47:48 2020 +0200

    x86/setup: lift dom0 creation out into create_dom0() function
    
    The creation of dom0 can be relatively self-contained. Shift it into
    a separate function and simplify __start_xen() a little bit.
    
    This is a cleanup in its own right, but will be even more desirable
    when live update provides an alternative path through __start_xen()
    that doesn't involve creating a new dom0 at all.
    
    Move the calculation of the 'initrd' parameter for create_dom0()
    down past the cosmetic printk about NX support, because in the fullness
    of time the whole initrd and create_dom0() part will be under the same
    "not live update" conditional. And in the meantime it's just neater.
    
    Also drop the explicit check for initrd to be module #0 since that would
    be the dom0 kernel and the corresponding bit is always clear in
    module_map.
    
    Signed-off-by: David Woodhouse <dwmw@amazon.co.uk>
    Acked-by: Jan Beulich <jbeulich@suse.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu May 21 02:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 02:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbb04-0000FC-Is; Thu, 21 May 2020 02:32:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsrW=7D=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbb02-0000F6-WD
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 02:31:59 +0000
X-Inumbo-ID: 3e6df7a6-9b0b-11ea-ae69-bc764e2007e4
Received: from mail-ot1-f66.google.com (unknown [209.85.210.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e6df7a6-9b0b-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 02:31:58 +0000 (UTC)
Received: by mail-ot1-f66.google.com with SMTP id a68so4351376otb.10
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 19:31:58 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=lqT0rYGXAALaM5oRy93eFgEEeqmfVVoL1lRpleol+kI=;
 b=XhXPv8JzlqkYS86ILA/PhouyjuWLD4MAINuI4mEzyuQVQ19cu+hCPIpUmDMOxzeY+D
 MGCoioICMa3fvfb5Cxd2toa39PiKQJuajdqo40NkHWr02UhjmP2CrY9szfzZsldx6rQW
 r67DGQtCqI0BbZ5bX6pPkno2nvypX/PQ43DWMq/bkKd9ehSJz2F5huOXaruoIn7lpUex
 YD0bR9nMVKtAqCQ6wkRwgHsjXvqvYxyM7o1ErkCdzjQA5DOsqVkXRWGmzikcxXF2gWLL
 QyakteMb/8povRPfbG3p2IIwmvZw5R6swAwf4c/JJgsaYdn5+HjbtDL9Th4t/rZKYVo/
 7x6A==
X-Gm-Message-State: AOAM533HnFXWVsf36owVT8TzQX+L0TbxToztuoX4t8ioELWjRBxMmz+l
 01GdcqU12gM4XfDS1D/Uil4zfpdp4Ow=
X-Google-Smtp-Source: ABdhPJwP80gNyO2BO1OARPyAuTYqYMbZOfnfg26W2PGWnf8JfkcjHemYO/SXXkhq7TuVtBsb+DyR7w==
X-Received: by 2002:a9d:6352:: with SMTP id y18mr5968701otk.4.1590028317478;
 Wed, 20 May 2020 19:31:57 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id r17sm1312480ooq.2.2020.05.20.19.31.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 20 May 2020 19:31:56 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 0/3] vm_event: fix race-condition when disabling
 monitor events
Date: Wed, 20 May 2020 20:31:51 -0600
Message-Id: <cover.1590028160.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For the last couple of years we have received numerous reports from users of
monitor vm_events of spurious guest crashes when using events. In particular,
it has been observed that the problem occurs when vm_events are being disabled.
The nature of the guest crash varied widely and only occurred occasionally,
which made debugging the issue particularly hard. We had discussions about this
issue even here on the xen-devel mailing list, with no luck figuring it out.

The bug has now been identified as a race-condition between register event
handling and disabling the vm_event interface.

Patch 96760e2fba100d694300a81baddb5740e0f8c0ee, "vm_event: deny register writes
if refused by vm_event reply", is the patch that introduced the error. With
this patch, emulation of register write events can be postponed until the
corresponding vm_event handler decides whether to allow such a write to take
place. Unfortunately this can only be implemented by performing the deny/allow
step when the vCPU gets scheduled. Due to that postponed emulation of the
event, if the user decides to pause the VM in the vm_event handler and then
disable events, the entire emulation step is skipped the next time the vCPU is
resumed. Even if the user doesn't pause during the vm_event handling but exits
immediately and disables vm_event, the situation becomes racy, as disabling
vm_event may succeed before the guest's vCPUs get scheduled with the pending
emulation task. This has been particularly the case with VMs that have several
vCPUs, as after the VM is unpaused it may actually take a long time before all
vCPUs get scheduled.

The only solution currently is to poll each vCPU before vm_events are disabled,
to verify that they have all been scheduled and that it is therefore safe to
disable vm_events. The following patches resolve this issue in a much nicer way.

Patch 1 adds an option to the monitor_op domctl that needs to be specified if
    the user wants to actually use the postponed register-write handling
    mechanism. If that option is not specified then handling is performed the
    same way as before patch 96760e2fba100d694300a81baddb5740e0f8c0ee.
    
Patch 2 performs sanity checking when disabling vm_events to determine whether
    it's safe to free all vm_event structures. The vCPUs still get unpaused to
    allow them to get scheduled and perform any of their pending operations,
    but otherwise an -EAGAIN error is returned, signaling to the user that they
    need to wait and try disabling the interface again.
    
Patch 3 adds a vm_event specifically to signal to the user when it is safe to
    continue disabling the interface.
    
Shout out to our friends at CERT.pl for stumbling upon a crucial piece of
information that led to finally squashing this nasty bug.

v2: minor adjustments based on Jan's comments

Tamas K Lengyel (3):
  xen/monitor: Control register values
  xen/vm_event: add vm_event_check_pending_op
  xen/vm_event: Add safe to disable vm_event

 xen/arch/x86/hvm/hvm.c            | 63 +++++++++++++++++++++++--------
 xen/arch/x86/hvm/monitor.c        | 14 +++++++
 xen/arch/x86/monitor.c            | 23 ++++++++++-
 xen/arch/x86/vm_event.c           | 23 +++++++++++
 xen/common/vm_event.c             | 17 +++++++--
 xen/include/asm-arm/vm_event.h    |  7 ++++
 xen/include/asm-x86/domain.h      |  2 +
 xen/include/asm-x86/hvm/monitor.h |  1 +
 xen/include/asm-x86/vm_event.h    |  2 +
 xen/include/public/domctl.h       |  3 ++
 xen/include/public/vm_event.h     |  8 ++++
 11 files changed, 144 insertions(+), 19 deletions(-)

-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 02:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 02:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbb09-0000Fl-2X; Thu, 21 May 2020 02:32:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsrW=7D=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbb08-0000Ff-05
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 02:32:04 +0000
X-Inumbo-ID: 41046766-9b0b-11ea-b9cf-bc764e2007e4
Received: from mail-ot1-f49.google.com (unknown [209.85.210.49])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 41046766-9b0b-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 02:32:02 +0000 (UTC)
Received: by mail-ot1-f49.google.com with SMTP id c3so4341408otr.12
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 19:32:02 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=ivGMi/gUy4Fl/JUt0Ag3q+pDqnqpG1RucRU6tciEaHo=;
 b=rPOmcE/IfRmDjyKkPhDsSgcycA3HomMbHx8ZRysVJQQ8F0J3hTJQruK0O2dm1qqdeb
 fZonUdNct1xrsVMqWX6/1Ls5iwFNrYAE65l2W2GUKmBe6hH+LWBtigWZGxoM9I5oHyYB
 j9dwO86SOQ5CGk56PG6idRXe+V9ie9K543pdfRTFuOtkwhUNaWmWfbSUSJlD4agD5yMT
 +5HdHRQ8KrOC2xQA3diPtYGcg8O0obolw/PXO4MspNO/Z3em3MPiOiVbrC8CJEdAr0To
 AacrqpvnJQK5txTe//3xLmnxjqJjW1F2D6jIFObP52SQe9orMOfK3GD42MmwZa+BCIFT
 tglA==
X-Gm-Message-State: AOAM532zDz8ysJsE/EwZN18MtLKEEAYosynP/CQzSC67G0/gBxU8YmdY
 DuK88X65Fv0/NLCO23oRVtC3dHBHsj0=
X-Google-Smtp-Source: ABdhPJxUduLr0BaPkbikFoD30T7EOE8KTQA1pOoCB/sAKmae1i5hjytRziYvUc3qzyOGZQnpsRZe+A==
X-Received: by 2002:a9d:6e3:: with SMTP id 90mr5703777otx.261.1590028321780;
 Wed, 20 May 2020 19:32:01 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id r17sm1312480ooq.2.2020.05.20.19.32.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 20 May 2020 19:32:01 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 3/3] xen/vm_event: Add safe to disable vm_event
Date: Wed, 20 May 2020 20:31:54 -0600
Message-Id: <682dde916f982e2889b2be2263418e9506a82c1e.1590028160.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <cover.1590028160.git.tamas@tklengyel.com>
References: <cover.1590028160.git.tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Instead of having to repeatedly try to disable vm_events, request a specific
vm_event to be sent when the domain is safe to continue with shutting down
the vm_event interface.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/hvm/hvm.c            | 38 ++++++++++++++++++++++++++-----
 xen/arch/x86/hvm/monitor.c        | 14 ++++++++++++
 xen/arch/x86/monitor.c            | 13 +++++++++++
 xen/include/asm-x86/domain.h      |  1 +
 xen/include/asm-x86/hvm/monitor.h |  1 +
 xen/include/public/domctl.h       |  2 ++
 xen/include/public/vm_event.h     |  8 +++++++
 7 files changed, 71 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index e6780c685b..fc7e1e2b22 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -563,15 +563,41 @@ void hvm_do_resume(struct vcpu *v)
         v->arch.hvm.inject_event.vector = HVM_EVENT_VECTOR_UNSET;
     }
 
-    if ( unlikely(v->arch.vm_event) && v->arch.monitor.next_interrupt_enabled )
+    if ( unlikely(v->arch.vm_event) )
     {
-        struct x86_event info;
+        struct domain *d = v->domain;
+
+        if ( v->arch.monitor.next_interrupt_enabled )
+        {
+            struct x86_event info;
+
+            if ( hvm_get_pending_event(v, &info) )
+            {
+                hvm_monitor_interrupt(info.vector, info.type, info.error_code,
+                                      info.cr2);
+                v->arch.monitor.next_interrupt_enabled = false;
+            }
+        }
 
-        if ( hvm_get_pending_event(v, &info) )
+        if ( d->arch.monitor.safe_to_disable )
         {
-            hvm_monitor_interrupt(info.vector, info.type, info.error_code,
-                                  info.cr2);
-            v->arch.monitor.next_interrupt_enabled = false;
+            const struct vcpu *check_vcpu;
+            bool pending_op = false;
+
+            for_each_vcpu ( d, check_vcpu )
+            {
+                if ( vm_event_check_pending_op(check_vcpu) )
+                {
+                    pending_op = true;
+                    break;
+                }
+            }
+
+            if ( !pending_op )
+            {
+                hvm_monitor_safe_to_disable();
+                d->arch.monitor.safe_to_disable = false;
+            }
         }
     }
 }
diff --git a/xen/arch/x86/hvm/monitor.c b/xen/arch/x86/hvm/monitor.c
index f5d89e71d1..75fd1a4b68 100644
--- a/xen/arch/x86/hvm/monitor.c
+++ b/xen/arch/x86/hvm/monitor.c
@@ -300,6 +300,20 @@ bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
     return monitor_traps(curr, true, &req) >= 0;
 }
 
+void hvm_monitor_safe_to_disable(void)
+{
+    struct vcpu *curr = current;
+    struct arch_domain *ad = &curr->domain->arch;
+    vm_event_request_t req = {};
+
+    if ( !ad->monitor.safe_to_disable )
+        return;
+
+    req.reason = VM_EVENT_REASON_SAFE_TO_DISABLE;
+
+    monitor_traps(curr, 0, &req);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
index 1517a97f50..86e0ba2fbc 100644
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -339,6 +339,19 @@ int arch_monitor_domctl_event(struct domain *d,
         break;
     }
 
+    case XEN_DOMCTL_MONITOR_EVENT_SAFE_TO_DISABLE:
+    {
+        bool old_status = ad->monitor.safe_to_disable;
+
+        if ( unlikely(old_status == requested_status) )
+            return -EEXIST;
+
+        domain_pause(d);
+        ad->monitor.safe_to_disable = requested_status;
+        domain_unpause(d);
+        break;
+    }
+
     default:
         /*
          * Should not be reached unless arch_monitor_get_capabilities() is
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index d890ab7a22..948b750c71 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -417,6 +417,7 @@ struct arch_domain
          */
         unsigned int inguest_pagefault_disabled                            : 1;
         unsigned int control_register_values                               : 1;
+        unsigned int safe_to_disable                                       : 1;
         struct monitor_msr_bitmap *msr_bitmap;
         uint64_t write_ctrlreg_mask[4];
     } monitor;
diff --git a/xen/include/asm-x86/hvm/monitor.h b/xen/include/asm-x86/hvm/monitor.h
index 66de24cb75..dbc113a635 100644
--- a/xen/include/asm-x86/hvm/monitor.h
+++ b/xen/include/asm-x86/hvm/monitor.h
@@ -52,6 +52,7 @@ bool hvm_monitor_emul_unimplemented(void);
 
 bool hvm_monitor_check_p2m(unsigned long gla, gfn_t gfn, uint32_t pfec,
                            uint16_t kind);
+void hvm_monitor_safe_to_disable(void);
 
 #endif /* __ASM_X86_HVM_MONITOR_H__ */
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index cbcd25f12c..247e809a6c 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1040,6 +1040,8 @@ struct xen_domctl_psr_cmt_op {
 #define XEN_DOMCTL_MONITOR_EVENT_EMUL_UNIMPLEMENTED    10
 /* Enabled by default */
 #define XEN_DOMCTL_MONITOR_EVENT_INGUEST_PAGEFAULT     11
+/* Always async, disables automatically on first event */
+#define XEN_DOMCTL_MONITOR_EVENT_SAFE_TO_DISABLE       12
 
 struct xen_domctl_monitor_op {
     uint32_t op; /* XEN_DOMCTL_MONITOR_OP_* */
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index fdd3ad8a30..b66d2a8634 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -159,6 +159,14 @@
 #define VM_EVENT_REASON_DESCRIPTOR_ACCESS       13
 /* Current instruction is not implemented by the emulator */
 #define VM_EVENT_REASON_EMUL_UNIMPLEMENTED      14
+/*
+ * When shutting down vm_event it may not be immediately safe to complete the
+ * process as some vCPUs may be pending synchronization. This async event
+ * type can be used to receive a notification when it's safe to finish
+ * disabling the vm_event interface. All other event types need to be disabled
+ * before registering for this one.
+ */
+#define VM_EVENT_REASON_SAFE_TO_DISABLE         15
 
 /* Supported values for the vm_event_write_ctrlreg index. */
 #define VM_EVENT_X86_CR0    0
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 02:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 02:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbb0A-0000G5-AO; Thu, 21 May 2020 02:32:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsrW=7D=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbb09-0000Fu-Ji
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 02:32:05 +0000
X-Inumbo-ID: 40174f9e-9b0b-11ea-aac4-12813bfff9fa
Received: from mail-ot1-f68.google.com (unknown [209.85.210.68])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 40174f9e-9b0b-11ea-aac4-12813bfff9fa;
 Thu, 21 May 2020 02:32:01 +0000 (UTC)
Received: by mail-ot1-f68.google.com with SMTP id x22so4382150otq.4
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 19:32:01 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=pAJ2F4WcFZUpTGHDJjHpvcdiKzqA1QwBYP6YeTTN2xA=;
 b=trw9u8caOYW5eF3wYbyTdCbUE5+W6Xzwh84I3eQzZEuF7l/BuuW+YA/GILGJdeYijr
 914WBBt8SzLt1nDxJOCsCLpLs8o3gUpSdTzkktvvazr9LqW3jGNLMesFeYnU7bFFkf/R
 07mPaFvYWLkEgnBQwxtcCFNuwCk9yp+SvzOlH2qNGypXu+3IN1Wq4794/ZAx+zZWNzaD
 lXufyR8femby7rg+WymPv3Uiqgy3iCaFfdWAG3CfVvND7LQRxV5FVBL0yftVwOFnkU96
 ePRZHA/TG2vbN+HXLCHa6c+QkvFlS+pgKldmEoRwmU7Bduf0U7YDxS+WGU6Hn1sKVZay
 5Y9Q==
X-Gm-Message-State: AOAM531fgnWXr/n7rMzKlRwICOVQAP93fcExn/HJIoM5ARktRYJbhJcj
 P1ABzbUFeYrhEycGqbE8P/+z0xSc8pk=
X-Google-Smtp-Source: ABdhPJyQJj0ZOz8w1ZTZHCbS7aBVzXmfSTHCOdfzxNPmjTi6gGNMTmXlrLX26xIuNU7kfnzl5fPkfg==
X-Received: by 2002:a05:6830:10c5:: with SMTP id
 z5mr5574207oto.325.1590028320358; 
 Wed, 20 May 2020 19:32:00 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id r17sm1312480ooq.2.2020.05.20.19.31.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 20 May 2020 19:31:59 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 2/3] xen/vm_event: add vm_event_check_pending_op
Date: Wed, 20 May 2020 20:31:53 -0600
Message-Id: <52492e7b44f311b09e9a433f2e5a2ba607a85c78.1590028160.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <cover.1590028160.git.tamas@tklengyel.com>
References: <cover.1590028160.git.tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Perform sanity checking when shutting vm_event down to determine whether
it is safe to do so. Error out with -EAGAIN if pending operations are
found for the domain.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/vm_event.c        | 23 +++++++++++++++++++++++
 xen/common/vm_event.c          | 17 ++++++++++++++---
 xen/include/asm-arm/vm_event.h |  7 +++++++
 xen/include/asm-x86/vm_event.h |  2 ++
 4 files changed, 46 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/vm_event.c b/xen/arch/x86/vm_event.c
index 848d69c1b0..a23aadc112 100644
--- a/xen/arch/x86/vm_event.c
+++ b/xen/arch/x86/vm_event.c
@@ -297,6 +297,29 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
     };
 }
 
+bool vm_event_check_pending_op(const struct vcpu *v)
+{
+    struct monitor_write_data *w = &v->arch.vm_event->write_data;
+
+    if ( !v->arch.vm_event->sync_event )
+        return false;
+
+    if ( w->do_write.cr0 )
+        return true;
+    if ( w->do_write.cr3 )
+        return true;
+    if ( w->do_write.cr4 )
+        return true;
+    if ( w->do_write.msr )
+        return true;
+    if ( v->arch.vm_event->set_gprs )
+        return true;
+    if ( v->arch.vm_event->emulate_flags )
+        return true;
+
+    return false;
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 127f2d58f1..2df327a42c 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -183,6 +183,7 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
     if ( vm_event_check_ring(ved) )
     {
         struct vcpu *v;
+        bool pending_op = false;
 
         spin_lock(&ved->lock);
 
@@ -192,9 +193,6 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
             return -EBUSY;
         }
 
-        /* Free domU's event channel and leave the other one unbound */
-        free_xen_event_channel(d, ved->xen_port);
-
         /* Unblock all vCPUs */
         for_each_vcpu ( d, v )
         {
@@ -203,8 +201,21 @@ static int vm_event_disable(struct domain *d, struct vm_event_domain **p_ved)
                 vcpu_unpause(v);
                 ved->blocked--;
             }
+
+            if ( vm_event_check_pending_op(v) )
+                pending_op = true;
         }
 
+        /* vm_event ops are still pending until vCPUs get scheduled */
+        if ( pending_op )
+        {
+            spin_unlock(&ved->lock);
+            return -EAGAIN;
+        }
+
+        /* Free domU's event channel and leave the other one unbound */
+        free_xen_event_channel(d, ved->xen_port);
+
         destroy_ring_for_helper(&ved->ring_page, ved->ring_pg_struct);
 
         vm_event_cleanup_domain(d);
diff --git a/xen/include/asm-arm/vm_event.h b/xen/include/asm-arm/vm_event.h
index 14d1d341cc..978b224dc3 100644
--- a/xen/include/asm-arm/vm_event.h
+++ b/xen/include/asm-arm/vm_event.h
@@ -58,4 +58,11 @@ void vm_event_sync_event(struct vcpu *v, bool value)
     /* Not supported on ARM. */
 }
 
+static inline
+bool vm_event_check_pending_op(const struct vcpu *v)
+{
+    /* Not supported on ARM. */
+    return false;
+}
+
 #endif /* __ASM_ARM_VM_EVENT_H__ */
diff --git a/xen/include/asm-x86/vm_event.h b/xen/include/asm-x86/vm_event.h
index 785e741fba..97860d0d99 100644
--- a/xen/include/asm-x86/vm_event.h
+++ b/xen/include/asm-x86/vm_event.h
@@ -54,4 +54,6 @@ void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp);
 
 void vm_event_sync_event(struct vcpu *v, bool value);
 
+bool vm_event_check_pending_op(const struct vcpu *v);
+
 #endif /* __ASM_X86_VM_EVENT_H__ */
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 02:32:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 02:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbb05-0000FN-Qt; Thu, 21 May 2020 02:32:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsrW=7D=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbb04-0000FB-Jb
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 02:32:00 +0000
X-Inumbo-ID: 3f3f565c-9b0b-11ea-aac4-12813bfff9fa
Received: from mail-ot1-f66.google.com (unknown [209.85.210.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f3f565c-9b0b-11ea-aac4-12813bfff9fa;
 Thu, 21 May 2020 02:31:59 +0000 (UTC)
Received: by mail-ot1-f66.google.com with SMTP id z3so4359263otp.9
 for <xen-devel@lists.xenproject.org>; Wed, 20 May 2020 19:31:59 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=5Ow8fbLTVv346fUNfXtdLXKcBCHvqTr422Z5j3+Oz/U=;
 b=JhBGCIVcf5xcsJJVIlQUu3WTdn1O8+x9cRS+ZH3eeIhkzUK+8UctL4Vk+YQj5LWpyP
 UnNoNu4e+FAVYw9xfHPZ8kuhFFxIh6Gy4XLJMNQBHeZ0HDef7Agq/jhgjkhDgQqrl4V7
 C7nVq45+smAMkqfgrTji+TB46IcHTW7Skbj9c/Swc/jX25KgWjqYlqlarj7W8K45OdkK
 lZnOdtu4XjWETRxgT7hOiE+dO27ZhhTX9U1x4Ow1fKyrgPKPWA6Az0hz8a7SFUnvc05k
 vQrycbvVXhFrg43mwENavy7C9yuliZ5wjOEgomGcYKATPwlkNxhGBGXyx+usFsepcFMW
 WLZg==
X-Gm-Message-State: AOAM533s4cWjL7jDyOGXSY/JApHrBjzVcjncvzjBtKcE5tEKrDHz67w7
 X/7yiKKbazlKeTURzq4ubh51Qz31UO4=
X-Google-Smtp-Source: ABdhPJyFeLElcRnr7TE7cCAmcA9TvgH5Chz90ASlGNCYFdu2paAWRUkURzFw3aRpYskqrdgMbdQVwQ==
X-Received: by 2002:a9d:3a7:: with SMTP id f36mr5760174otf.197.1590028318891; 
 Wed, 20 May 2020 19:31:58 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id r17sm1312480ooq.2.2020.05.20.19.31.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 20 May 2020 19:31:58 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 1/3] xen/monitor: Control register values
Date: Wed, 20 May 2020 20:31:52 -0600
Message-Id: <b3c147cc226f3a30daec73b2ffd57bd285bc8659.1590028160.git.tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.1
In-Reply-To: <cover.1590028160.git.tamas@tklengyel.com>
References: <cover.1590028160.git.tamas@tklengyel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jan Beulich <jbeulich@suse.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Extend the monitor_op domctl with an option that enables a monitor
subscriber to control what values certain registers are permitted
to hold.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/hvm/hvm.c       | 25 ++++++++++++++++---------
 xen/arch/x86/monitor.c       | 10 +++++++++-
 xen/include/asm-x86/domain.h |  1 +
 xen/include/public/domctl.h  |  1 +
 4 files changed, 27 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 09ee299bc7..e6780c685b 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -2263,7 +2263,8 @@ int hvm_set_cr0(unsigned long value, bool may_defer)
     {
         ASSERT(v->arch.vm_event);
 
-        if ( hvm_monitor_crX(CR0, value, old_value) )
+        if ( hvm_monitor_crX(CR0, value, old_value) &&
+             v->domain->arch.monitor.control_register_values )
         {
             /* The actual write will occur in hvm_do_resume(), if permitted. */
             v->arch.vm_event->write_data.do_write.cr0 = 1;
@@ -2362,7 +2363,8 @@ int hvm_set_cr3(unsigned long value, bool may_defer)
     {
         ASSERT(v->arch.vm_event);
 
-        if ( hvm_monitor_crX(CR3, value, old) )
+        if ( hvm_monitor_crX(CR3, value, old) &&
+             v->domain->arch.monitor.control_register_values )
         {
             /* The actual write will occur in hvm_do_resume(), if permitted. */
             v->arch.vm_event->write_data.do_write.cr3 = 1;
@@ -2443,7 +2445,8 @@ int hvm_set_cr4(unsigned long value, bool may_defer)
     {
         ASSERT(v->arch.vm_event);
 
-        if ( hvm_monitor_crX(CR4, value, old_cr) )
+        if ( hvm_monitor_crX(CR4, value, old_cr) &&
+             v->domain->arch.monitor.control_register_values )
         {
             /* The actual write will occur in hvm_do_resume(), if permitted. */
             v->arch.vm_event->write_data.do_write.cr4 = 1;
@@ -3587,13 +3590,17 @@ int hvm_msr_write_intercept(unsigned int msr, uint64_t msr_content,
 
         ASSERT(v->arch.vm_event);
 
-        /* The actual write will occur in hvm_do_resume() (if permitted). */
-        v->arch.vm_event->write_data.do_write.msr = 1;
-        v->arch.vm_event->write_data.msr = msr;
-        v->arch.vm_event->write_data.value = msr_content;
-
         hvm_monitor_msr(msr, msr_content, msr_old_content);
-        return X86EMUL_OKAY;
+
+        if ( v->domain->arch.monitor.control_register_values )
+        {
+            /* The actual write will occur in hvm_do_resume(), if permitted. */
+            v->arch.vm_event->write_data.do_write.msr = 1;
+            v->arch.vm_event->write_data.msr = msr;
+            v->arch.vm_event->write_data.value = msr_content;
+
+            return X86EMUL_OKAY;
+        }
     }
 
     if ( (ret = guest_wrmsr(v, msr, msr_content)) != X86EMUL_UNHANDLEABLE )
diff --git a/xen/arch/x86/monitor.c b/xen/arch/x86/monitor.c
index bbcb7536c7..1517a97f50 100644
--- a/xen/arch/x86/monitor.c
+++ b/xen/arch/x86/monitor.c
@@ -144,7 +144,15 @@ int arch_monitor_domctl_event(struct domain *d,
                               struct xen_domctl_monitor_op *mop)
 {
     struct arch_domain *ad = &d->arch;
-    bool requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
+    bool requested_status;
+
+    if ( XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS == mop->op )
+    {
+        ad->monitor.control_register_values = true;
+        return 0;
+    }
+
+    requested_status = (XEN_DOMCTL_MONITOR_OP_ENABLE == mop->op);
 
     switch ( mop->event )
     {
diff --git a/xen/include/asm-x86/domain.h b/xen/include/asm-x86/domain.h
index 5b6d909266..d890ab7a22 100644
--- a/xen/include/asm-x86/domain.h
+++ b/xen/include/asm-x86/domain.h
@@ -416,6 +416,7 @@ struct arch_domain
          * This is used to filter out pagefaults.
          */
         unsigned int inguest_pagefault_disabled                            : 1;
+        unsigned int control_register_values                               : 1;
         struct monitor_msr_bitmap *msr_bitmap;
         uint64_t write_ctrlreg_mask[4];
     } monitor;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 1ad34c35eb..cbcd25f12c 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1025,6 +1025,7 @@ struct xen_domctl_psr_cmt_op {
 #define XEN_DOMCTL_MONITOR_OP_DISABLE           1
 #define XEN_DOMCTL_MONITOR_OP_GET_CAPABILITIES  2
 #define XEN_DOMCTL_MONITOR_OP_EMULATE_EACH_REP  3
+#define XEN_DOMCTL_MONITOR_OP_CONTROL_REGISTERS 4
 
 #define XEN_DOMCTL_MONITOR_EVENT_WRITE_CTRLREG         0
 #define XEN_DOMCTL_MONITOR_EVENT_MOV_TO_MSR            1
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 04:35:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 04:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbcvR-0002YL-IR; Thu, 21 May 2020 04:35:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/tWO=7C=notk.org=asmadeus@srs-us1.protection.inumbo.net>)
 id 1jbUWX-0003bV-FF
 for xen-devel@lists.xenproject.org; Wed, 20 May 2020 19:37:05 +0000
X-Inumbo-ID: 47fa91e2-9ad1-11ea-aa85-12813bfff9fa
Received: from nautica.notk.org (unknown [91.121.71.147])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 47fa91e2-9ad1-11ea-aa85-12813bfff9fa;
 Wed, 20 May 2020 19:37:04 +0000 (UTC)
Received: by nautica.notk.org (Postfix, from userid 1001)
 id C6D98C009; Wed, 20 May 2020 21:37:02 +0200 (CEST)
Date: Wed, 20 May 2020 21:36:47 +0200
From: Dominique Martinet <asmadeus@codewreck.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [V9fs-developer] [PATCH] 9p/xen: increase XEN_9PFS_RING_ORDER
Message-ID: <20200520193647.GA17565@nautica>
References: <20200520184113.24727-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20200520184113.24727-1-sstabellini@kernel.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-Mailman-Approved-At: Thu, 21 May 2020 04:35:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, lucho@ionkov.net, xen-devel@lists.xenproject.org,
 ericvh@gmail.com, linux-kernel@vger.kernel.org, rminnich@sandia.gov,
 v9fs-developer@lists.sourceforge.net, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Stefano Stabellini wrote on Wed, May 20, 2020:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Increase XEN_9PFS_RING_ORDER to 9 for performance reasons. Order 9 is
> the max allowed by the protocol.
> 
> We can't assume that all backends will support order 9. The xenstore
> property max-ring-page-order specifies the max order supported by the
> backend. We'll use max-ring-page-order for the size of the ring.
> 
> This means that the size of the ring is not static
> (XEN_FLEX_RING_SIZE(9)) anymore. Change XEN_9PFS_RING_SIZE to take an
> argument and base the calculation on the order chosen at setup time.
> 
> 
> Finally, modify p9_xen_trans.maxsize to be divided by 4 compared to the
> original value. We need to divide it by 2 because we have two rings
> coming off the same order allocation: the in and out rings. This was a
> mistake in the original code. Also divide it further by 2 because we
> don't want a single request/reply to fill up the entire ring. There can
> be multiple requests/replies outstanding at any given time and if we use
> the full ring with one, we risk forcing the backend to wait for the
> client to read back more replies before continuing, which is not
> performant.

Sounds good to me overall. A couple of comments inline.
Also worth noting that I need to rebuild a test setup, so it might take a
bit of time to actually run tests, but I might just trust you on this one
for now if it builds with no new warnings... Looks like it would
probably work :p

> [...]
> @@ -264,7 +265,7 @@ static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
>  
>  static struct p9_trans_module p9_xen_trans = {
>  	.name = "xen",
> -	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
> +	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT - 2),
>  	.def = 1,
>  	.create = p9_xen_create,
>  	.close = p9_xen_close,
> [...]
> @@ -401,8 +405,10 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
>  		return -EINVAL;
>  	max_ring_order = xenbus_read_unsigned(dev->otherend,
>  					      "max-ring-page-order", 0);
> -	if (max_ring_order < XEN_9PFS_RING_ORDER)
> -		return -EINVAL;
> +	if (max_ring_order > XEN_9PFS_RING_ORDER)
> +		max_ring_order = XEN_9PFS_RING_ORDER;

(If there are backends with very small max_ring_orders, we no longer
error out when we encounter one; it might make sense to add a minimum
order define? Although to be honest 9p works with pretty small maxsizes,
so I don't see much reason to error out, and even order 0 will be one
page's worth... I hope there is no xenbus that small, though :))

> +	if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
> +		p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order);

So the base maxsize initial value is 1 << (order + page_shift - 2); but
this is 1 << (order + page_shift - 1) -- I agree with the logic you gave
in the commit message, so I would think this needs to be shifted down one
more, like the base value.
What do you think?

-- 
Dominique


From xen-devel-bounces@lists.xenproject.org Thu May 21 07:11:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 07:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbfM4-0007nQ-Dv; Thu, 21 May 2020 07:11:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbfM3-0007nL-Rq
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 07:10:59 +0000
X-Inumbo-ID: 3497a9b2-9b32-11ea-aae0-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3497a9b2-9b32-11ea-aae0-12813bfff9fa;
 Thu, 21 May 2020 07:10:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=CzF+1w5+sqQ/DUCZkf1HP8t5uaStRKdFjn9MDcPtajo=; b=TeFT452P3BA1BfCptQyCWquzX
 pWfixuDKpURLENfuv+25WqZYh1OyOOnRTYcmz7J0BgXr6sZmq2nyth4wbWTKbJu6WRhZxBe4KmHD9
 kb77OfLFvzA4+hnxAyI9vkXAnU1fflIWWjSB1SfAREdD1Bh6Vbq3cpdG1dNpSC6UmFFSE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbfLv-0002dN-VN; Thu, 21 May 2020 07:10:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbfLv-0005uI-OR; Thu, 21 May 2020 07:10:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbfLv-0007gx-Nt; Thu, 21 May 2020 07:10:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150284-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150284: all pass - PUSHED
X-Osstest-Versions-This: ovmf=bc5012b8fbf9f769a62d8a7a2dbf04343c16d398
X-Osstest-Versions-That: ovmf=7b6327ff03bb4436261ffad246ba3a14de10391f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 07:10:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150284 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150284/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 bc5012b8fbf9f769a62d8a7a2dbf04343c16d398
baseline version:
 ovmf                 7b6327ff03bb4436261ffad246ba3a14de10391f

Last test of basis   150232  2020-05-18 16:09:20 Z    2 days
Failing since        150278  2020-05-20 13:10:59 Z    0 days    2 attempts
Testing same since   150284  2020-05-20 20:09:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Liming Gao <liming.gao@intel.com>
  Oleksiy Yakovlev <oleksiyy@ami.com>
  Shenglei Zhang <shenglei.zhang@intel.com>
  Wei6 Xu <wei6.xu@intel.com>
  Zhang, Shenglei <shenglei.zhang@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   7b6327ff03..bc5012b8fb  bc5012b8fbf9f769a62d8a7a2dbf04343c16d398 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 21 07:43:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 07:43:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbfrX-0001yC-TE; Thu, 21 May 2020 07:43:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xZmg=7D=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jbfrW-0001y7-SV
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 07:43:30 +0000
X-Inumbo-ID: c3578920-9b36-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3578920-9b36-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 07:43:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9FACEAD80;
 Thu, 21 May 2020 07:43:31 +0000 (UTC)
Message-ID: <86969ba1ea7e270267cfaa3408a89b55c86b3dca.camel@suse.com>
Subject: Re: [PATCH 0/3] Automation: improve openSUSE containers + podman
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 21 May 2020 09:43:27 +0200
In-Reply-To: <158827088416.19371.17008531228521109457.stgit@Palanthas>
References: <158827088416.19371.17008531228521109457.stgit@Palanthas>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-ML6hfz1BfZRjRT0SoTVQ"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-ML6hfz1BfZRjRT0SoTVQ
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-04-30 at 20:27 +0200, Dario Faggioli wrote:
> Hello,
> 
> This short series contains some improvements for building Xen in
> openSUSE containers. In fact, the build dependencies inside the
> Tumbleweed container are updated and more handy helpers are added, in
> containerize, for referring to both Leap and Tumbleweed containers.
> 
> In addition to that, in patch 3, the containerize script is enhanced
> so that it is now possible to use podman instead of docker. Rootless
> mode for podman also works (provided the system is properly
> configured) which, IMO, is rather nice.
> 
> Docker of course continues to work, and is kept as the default.
> 
Ping?

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-ML6hfz1BfZRjRT0SoTVQ
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7GMR8ACgkQFkJ4iaW4
c+7YDQ//RWk+u4pS1RMuM6z8AEtE6hwzo/adAFemzE4DH6k2ul9dY/VNdjtmZ9Xc
nT88fBdcoyd+OQhtOfPmdgmzi0lVatV6jLBwYPD/maViq/rpBj65Cswlro+KhTMH
8snqm3V4VUNVZrsCX3shKjG1KVIq9QparXHv8X4Vo8Wv87gIAQdLfG0LKDckhEFN
ffl8Sk56lczaL70rM6lfwsHDXRDs8m7i4T++Pe/KZa99MI2rwxcdc8adoebtxaWq
cP72CPU9lSyAtd86AfiEDE7dPeTciAU24QDDm0zCYbPz0Wx8bpMcsQYVKsFk2381
911WIniUeKjYpwYpDjrxFrlfbYs0VwPfNMFHK6My/PI0O21WunUvay+AuJMziLPz
dZhGB2x9x2cnoFLC66Tle4J9IcZu8AJCzHRCVwvmnh6txoU9SSoUmma9TSJO81nA
lHr00NQ+0io7uD0KVfCElC81xLm720KM8SfcO+P/FEHSavePDh9TCJHCeoLx2s/N
Zxn4LXsPhUbQL+4iJAzG7Dq7TaGqGFUxmtvO2PmecLWPtxphNSIMZhwTm5eRLVFx
Ob5vj9/AL0i0FmW1lfgDcBqTrvSpiaUeR4nctukJ4oHXs0iNHFTGoSrJTlfre7Ec
uzAFGfhXAMODWoWAyTB1F3VcvcK4xgT78d+acUwUY4VzNLVVIe4=
=MdS1
-----END PGP SIGNATURE-----

--=-ML6hfz1BfZRjRT0SoTVQ--



From xen-devel-bounces@lists.xenproject.org Thu May 21 07:52:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 07:52:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbg0X-0002rt-RL; Thu, 21 May 2020 07:52:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbg0W-0002ro-TC
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 07:52:48 +0000
X-Inumbo-ID: 0f1e1cc4-9b38-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f1e1cc4-9b38-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 07:52:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bFW0k45yKRf92HMUtT4EnWCjes7I1Dp1enEDM5P9gBM=; b=UlgDB25MFN030AVrGGbemG4I7
 CvbzgEFrVLnZOFGXCaKUqrf8OzGFf//3/pviEjV+LxGcQvQxEMG/TaSJDZVgz5nKJY6FZTdqqWWaA
 9A5jWyat2vE1UwYJ/pJiFrwm/GWElHShzKbIeobKks2zoZGhIJVUcn+XIqHztyh2DbffM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbg0U-0003U3-2b; Thu, 21 May 2020 07:52:46 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbg0T-0007bx-Iw; Thu, 21 May 2020 07:52:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbg0T-0004az-ID; Thu, 21 May 2020 07:52:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150283-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150283: regressions - FAIL
X-Osstest-Failures: linux-5.4:build-amd64-libvirt:libvirt-build:fail:regression
 linux-5.4:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=1cdaf895c99d319c0007d0b62818cf85fc4b087f
X-Osstest-Versions-That: linux=cbaf2369956178e68fb714a30dc86cf768dd596a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 07:52:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150283 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150283/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 150172

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore          fail pass in 150273

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 150172

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds     16 guest-localmigrate  fail in 150273 like 150172
 test-amd64-amd64-libvirt    13 migrate-support-check fail in 150273 never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check fail in 150273 never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail in 150273 never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check fail in 150273 never pass
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                1cdaf895c99d319c0007d0b62818cf85fc4b087f
baseline version:
 linux                cbaf2369956178e68fb714a30dc86cf768dd596a

Last test of basis   150172  2020-05-14 06:09:31 Z    7 days
Testing same since   150273  2020-05-20 06:46:17 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Adam Ford <aford173@gmail.com>
  Adam McCoy <adam@forsedomani.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Alan Maguire <alan.maguire@oracle.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexei Starovoitov <ast@kernel.org>
  Amir Goldstein <amir73il@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ansuel Smith <ansuelsmth@gmail.com>
  Arnd Bergmann <arnd@arndb.de>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Ben Chuang <ben.chuang@genesyslogic.com.tw>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Catalin Marinas <catalin.marinas@arm.com>
  Chen-Yu Tsai <wens@csie.org>
  Chris Chiu <chiu@endlessm.com>
  Chris Down <chris@chrisdown.name>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Christophe Leroy <christophe.leroy@c-s.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Dave Flogeras <dflogeras2@gmail.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Eric Dumazet <edumazet@google.com>
  Eric W. Biederman <ebiederm@xmission.com>
  Eugeniu Rosca <erosca@de.adit-jv.com>
  Fabio Estevam <festevam@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gerd Hoffmann <kraxel@redhat.com>
  Grace Kao <grace.kao@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grzegorz Kowal <custos.mentis@gmail.com>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Heiner Kallweit <hkallweit1@gmail.com>
  Hugh Dickins <hughd@google.com>
  Ilie Halip <ilie.halip@gmail.com>
  Ioana Ciornei <ioana.ciornei@nxp.com>
  Iris Liu <iris@onechronos.com>
  J. Bruce Fields <bfields@redhat.com>
  Jack Morgenstein <jackm@dev.mellanox.co.il>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jan Kara <jack@suse.cz>
  Jason Gunthorpe <jgg@mellanox.com>
  Jeremy Linton <jeremy.linton@arm.com>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesus Ramos <jesus-ramos@live.com>
  Jim Mattson <jmattson@google.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Weiner <hannes@cmpxchg.org>
  John Fastabend <john.fastabend@gmail.com>
  John Stultz <john.stultz@linaro.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Jue Wang <juew@google.com>
  Justin Swartz <justin.swartz@risingedge.co.za>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kamal Mostafa <kamal@canonical.com>
  Kelly Littlepage <kelly@onechronos.com>
  Kevin Hilman <khilman@baylibre.com>
  Kishon Vijay Abraham I <kishon@ti.com>
  Kyungtae Kim <kt0755@gmail.com>
  Leon Romanovsky <leonro@mellanox.com>
  Li Jun <jun.li@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luo bin <luobin9@huawei.com>
  Maciej Żenczykowski <maze@google.com>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Maor Gottlieb <maorg@mellanox.com>
  Marc Zyngier <maz@kernel.org>
  Marek Olšák <marek.olsak@amd.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Maxime Ripard <maxime@cerno.tech>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Vokáč <michal.vokac@ysoft.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Neil Horman <nhorman@tuxdriver.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Moore <paul@paul-moore.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Jones <pjones@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Sutter <phil@nwl.cc>
  Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
  Potnuri Bharat Teja <bharat@chelsio.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raul E Rangel <rrangel@chromium.org>
  Renius Chen <renius.chen@genesyslogic.com.tw>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Samu Nuutamo <samu.nuutamo@vincit.fi>
  Samuel Zou <zou_wei@huawei.com>
  Santosh Shilimkar <santosh.shilimkar@oracle.com>
  Sarthak Garg <sartgarg@codeaurora.org>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Shawn Guo <shawnguo@kernel.org>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Ser <contact@emersion.fr>
  Soheil Hassas Yeganeh <soheil@google.com>
  Sriharsha Allenki <sallenki@codeaurora.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefano Brivio <sbrivio@redhat.com>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Sung Lee <sung.lee@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tejun Heo <tj@kernel.org>
  Tiecheng Zhou <Tiecheng.Zhou@amd.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Ursula Braun <ubraun@linux.ibm.com>
  Vasily Averin <vvs@virtuozzo.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Minet <v.minet@criteo.com>
  Vineeth Pillai <vineethrp@gmail.com>
  Vinod Koul <vkoul@kernel.org>
  Waiman Long <longman@redhat.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Willem de Bruijn <willemb@google.com>
  Wu Bo <wubo40@huawei.com>
  Xiao Yang <yangx.jy@cn.fujitsu.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yafang Shao <laoar.shao@gmail.com>
  Yang Shi <yang.shi@linux.alibaba.com>
  Yang Yingliang <yangyingliang@huawei.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yuiko Oshino <yuiko.oshino@microchip.com>
  Zefan Li <lizefan@huawei.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 4158 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:01:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:01:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbg92-0004La-70; Thu, 21 May 2020 08:01:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VGGW=7D=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbg90-0004LV-Mh
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:01:34 +0000
X-Inumbo-ID: 49c1ef76-9b39-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49c1ef76-9b39-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 08:01:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LzQcau/CTPhcsq7cGzAcVt098Z2d/I8LKnftpNJpgSo=; b=M5yzdliQtL6ecGvEyiMqU8gcmm
 VZCqgzHHR+lWgFbOO+SHL+yg3mG+eAAvKtIgwwZ4Sp5OsroQvTtvRs3AoSF4Rh9rom3+W0BAxUp4z
 uod9wfTjCWD0T8VM8sHHqldWI8WUDt/bDkzv1Iqct/fHvpGlM6u9pAfLNEoF7eTsDxYw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbg8z-0004CT-E1; Thu, 21 May 2020 08:01:33 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbg8z-0006PQ-6i; Thu, 21 May 2020 08:01:33 +0000
Subject: Re: [PATCH 01/10] swiotlb-xen: use vmalloc_to_page on vmalloc virt
 addresses
To: Stefano Stabellini <sstabellini@kernel.org>, jgross@suse.com,
 boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-1-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <23e5b6d8-c5d9-b43f-41cd-9d02d8ec0a7f@xen.org>
Date: Thu, 21 May 2020 09:01:31 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520234520.22563-1-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 21/05/2020 00:45, Stefano Stabellini wrote:
> From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> 
> Don't just assume that virt_to_page works on all virtual addresses.
> Instead add a is_vmalloc_addr check and use vmalloc_to_page on vmalloc
> virt addresses.

Can you provide an example where swiotlb is used with vmalloc()?

> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>   drivers/xen/swiotlb-xen.c | 5 ++++-
>   1 file changed, 4 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index b6d27762c6f8..a42129cba36e 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>   	int order = get_order(size);
>   	phys_addr_t phys;
>   	u64 dma_mask = DMA_BIT_MASK(32);
> +	struct page *pg;
>   
>   	if (hwdev && hwdev->coherent_dma_mask)
>   		dma_mask = hwdev->coherent_dma_mask;
> @@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>   	/* Convert the size to actually allocated. */
>   	size = 1UL << (order + XEN_PAGE_SHIFT);
>   
> +	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
> +				      virt_to_page(vaddr);

Common DMA code seems to protect this check with CONFIG_DMA_REMAP. Is it 
something we want to do here as well? Or is there any other condition 
where vmalloc can happen?

>   	if (!WARN_ON((dev_addr + size - 1 > dma_mask) ||
>   		     range_straddles_page_boundary(phys, size)) &&
> -	    TestClearPageXenRemapped(virt_to_page(vaddr)))
> +	    TestClearPageXenRemapped(pg))
>   		xen_destroy_contiguous_region(phys, order);
>   
>   	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:05:12 2020
Subject: Re: [PATCH 02/10] swiotlb-xen: remove start_dma_addr
To: Stefano Stabellini <sstabellini@kernel.org>, jgross@suse.com,
 boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-2-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <6241b8f6-5c51-0486-55ae-d571b117a184@xen.org>
Date: Thu, 21 May 2020 09:05:07 +0100
In-Reply-To: <20200520234520.22563-2-sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 linux-kernel@vger.kernel.org

Hi,

On 21/05/2020 00:45, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> It is not strictly needed: call virt_to_phys() on xen_io_tlb_start
> instead. It will be useful not to have start_dma_addr around in the
> next patches.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>   drivers/xen/swiotlb-xen.c | 5 +----
>   1 file changed, 1 insertion(+), 4 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index a42129cba36e..b5e0492b07b9 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -52,8 +52,6 @@ static unsigned long xen_io_tlb_nslabs;
>    * Quick lookup value of the bus address of the IOTLB.
>    */
>   
> -static u64 start_dma_addr;
> -
>   /*
>    * Both of these functions should avoid XEN_PFN_PHYS because phys_addr_t
>    * can be 32bit when dma_addr_t is 64bit leading to a loss in
> @@ -241,7 +239,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>   		m_ret = XEN_SWIOTLB_EFIXUP;
>   		goto error;
>   	}
> -	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
>   	if (early) {
>   		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
>   			 verbose))
> @@ -389,7 +386,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>   	 */
>   	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
>   
> -	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys,
> +	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start), phys,

xen_virt_to_bus() is implemented as xen_phys_to_bus(virt_to_phys()). Can 
you explain how the two are equivalent?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:08:03 2020
Subject: Re: [PATCH 08/10] swiotlb-xen: introduce phys_to_dma/dma_to_phys
 translations
To: Stefano Stabellini <sstabellini@kernel.org>, jgross@suse.com,
 boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-8-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <0fa24e23-ee7a-3ff0-cee0-8734bcda5f6a@xen.org>
Date: Thu, 21 May 2020 09:07:53 +0100
In-Reply-To: <20200520234520.22563-8-sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 linux-kernel@vger.kernel.org

Hi,

On 21/05/2020 00:45, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Call dma_to_phys in is_xen_swiotlb_buffer.
> Call phys_to_dma in xen_phys_to_bus.
> Call dma_to_phys in xen_bus_to_phys.
> 
> Everything is taken care of by these changes except for
> xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a
> few explicit phys_to_dma/dma_to_phys calls.

The commit message explains what the code is doing but doesn't explain 
why this is needed.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>   drivers/xen/swiotlb-xen.c | 20 ++++++++++++--------
>   1 file changed, 12 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index c50448fd9b75..d011c4c7aa72 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -64,14 +64,16 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
>   
>   	dma |= paddr & ~XEN_PAGE_MASK;
>   
> -	return dma;
> +	return phys_to_dma(dev, dma);
>   }
>   
> -static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
> +static inline phys_addr_t xen_bus_to_phys(struct device *dev,
> +					  dma_addr_t dma_addr)
>   {
> +	phys_addr_t baddr = dma_to_phys(dev, dma_addr);
>   	unsigned long xen_pfn = bfn_to_pfn(XEN_PFN_DOWN(baddr));
> -	dma_addr_t dma = (dma_addr_t)xen_pfn << XEN_PAGE_SHIFT;
> -	phys_addr_t paddr = dma;
> +	phys_addr_t paddr = (xen_pfn << XEN_PAGE_SHIFT) |
> +			    (baddr & ~XEN_PAGE_MASK);
>   
>   	paddr |= baddr & ~XEN_PAGE_MASK;
>   
> @@ -99,7 +101,7 @@ static inline int range_straddles_page_boundary(phys_addr_t p, size_t size)
>   
>   static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
>   {
> -	unsigned long bfn = XEN_PFN_DOWN(dma_addr);
> +	unsigned long bfn = XEN_PFN_DOWN(dma_to_phys(dev, dma_addr));
>   	unsigned long xen_pfn = bfn_to_local_pfn(bfn);
>   	phys_addr_t paddr = XEN_PFN_PHYS(xen_pfn);
>   
> @@ -304,11 +306,11 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>   	if (hwdev && hwdev->coherent_dma_mask)
>   		dma_mask = hwdev->coherent_dma_mask;
>   
> -	/* At this point dma_handle is the physical address, next we are
> +	/* At this point dma_handle is the dma address, next we are
>   	 * going to set it to the machine address.
>   	 * Do not use virt_to_phys(ret) because on ARM it doesn't correspond
>   	 * to *dma_handle. */
> -	phys = *dma_handle;
> +	phys = dma_to_phys(hwdev, *dma_handle);
>   	dev_addr = xen_phys_to_bus(hwdev, phys);
>   	if (((dev_addr + size - 1 <= dma_mask)) &&
>   	    !range_straddles_page_boundary(phys, size))
> @@ -319,6 +321,7 @@ xen_swiotlb_alloc_coherent(struct device *hwdev, size_t size,
>   			xen_free_coherent_pages(hwdev, size, ret, (dma_addr_t)phys, attrs);
>   			return NULL;
>   		}
> +		*dma_handle = phys_to_dma(hwdev, *dma_handle);
>   		SetPageXenRemapped(virt_to_page(ret));
>   	}
>   	memset(ret, 0, size);
> @@ -351,7 +354,8 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>   	    TestClearPageXenRemapped(pg))
>   		xen_destroy_contiguous_region(phys, order);
>   
> -	xen_free_coherent_pages(hwdev, size, vaddr, (dma_addr_t)phys, attrs);
> +	xen_free_coherent_pages(hwdev, size, vaddr, phys_to_dma(hwdev, phys),
> +				attrs);
>   }
>   
>   /*
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:10:53 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150281-mainreport@xen.org>
Subject: [linux-linus test] 150281: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 08:10:44 +0000

flight 150281 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150281/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     17 guest-saverestore.2      fail REGR. vs. 150266

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150266
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150266
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150266
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150266
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150266
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150266
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150266
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150266
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150266
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                2ea1940b84e55420a9e8feddcafd173edfe4df11
baseline version:
 linux                115a54162a6c0d0ef2aef25ebd0b61fc5e179ebe

Last test of basis   150266  2020-05-20 00:40:00 Z    1 days
Testing same since   150281  2020-05-20 19:40:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chris Chiu <chiu@endlessm.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Miklos Szeredi <mszeredi@redhat.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Vivek Goyal <vgoyal@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   115a54162a6c..2ea1940b84e5  2ea1940b84e55420a9e8feddcafd173edfe4df11 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:19:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbgPc-0005gF-JD; Thu, 21 May 2020 08:18:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VGGW=7D=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbgPa-0005gA-ML
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:18:42 +0000
X-Inumbo-ID: acbea86a-9b3b-11ea-aae1-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acbea86a-9b3b-11ea-aae1-12813bfff9fa;
 Thu, 21 May 2020 08:18:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eaQg7A9nmVi1QmaGDxLtL8MFACOo/B2TJw/SJawKC+A=; b=GUZo9FazFxZSxKd+FOJykkk6iR
 ft986fw1d+t4MM0L17Mc3bMzOa6eTJEFNAZXMNnsRs5LyxArZp9tRwrtrfc6YDYkjko+A+ixiieRj
 fDJWdmUATQn+Lyi1010A53/Hl1nCmPSj/bPn4V/xMvMXabBtglTgOwrqCOA5ZHynXwFU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbgPV-0004ZD-Vs; Thu, 21 May 2020 08:18:37 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbgPV-0007G4-OR; Thu, 21 May 2020 08:18:37 +0000
Subject: Re: [PATCH 09/10] xen/arm: introduce phys/dma translations in
 xen_dma_sync_for_*
To: Stefano Stabellini <sstabellini@kernel.org>, jgross@suse.com,
 boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-9-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <83c1120f-fe63-dc54-7b82-15a91c748de8@xen.org>
Date: Thu, 21 May 2020 09:18:35 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520234520.22563-9-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 21/05/2020 00:45, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Add phys_to_dma/dma_to_phys calls to
> xen_dma_sync_for_cpu, xen_dma_sync_for_device, and
> xen_arch_need_swiotlb.
> 
> In xen_arch_need_swiotlb, take the opportunity to switch to the simpler
> pfn_valid check we use everywhere else.
> 
> dma_cache_maint is fixed by the next patch.

Like patch #8, this explains what the code is doing, not why it is 
necessary.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>   arch/arm/xen/mm.c | 10 +++++-----
>   1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> index f2414ea40a79..7639251bcc79 100644
> --- a/arch/arm/xen/mm.c
> +++ b/arch/arm/xen/mm.c
> @@ -1,5 +1,6 @@
>   // SPDX-License-Identifier: GPL-2.0-only
>   #include <linux/cpu.h>
> +#include <linux/dma-direct.h>
>   #include <linux/dma-noncoherent.h>
>   #include <linux/gfp.h>
>   #include <linux/highmem.h>
> @@ -75,7 +76,7 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
>   			  phys_addr_t paddr, size_t size,
>   			  enum dma_data_direction dir)
>   {
> -	if (pfn_valid(PFN_DOWN(handle)))
> +	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
>   		arch_sync_dma_for_cpu(paddr, size, dir);
>   	else if (dir != DMA_TO_DEVICE)
>   		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
> @@ -85,7 +86,7 @@ void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
>   			     phys_addr_t paddr, size_t size,
>   			     enum dma_data_direction dir)
>   {
> -	if (pfn_valid(PFN_DOWN(handle)))
> +	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
>   		arch_sync_dma_for_device(paddr, size, dir);
>   	else if (dir == DMA_FROM_DEVICE)
>   		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
> @@ -97,8 +98,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
>   			   phys_addr_t phys,
>   			   dma_addr_t dev_addr)
>   {
> -	unsigned int xen_pfn = XEN_PFN_DOWN(phys);
> -	unsigned int bfn = XEN_PFN_DOWN(dev_addr);
> +	unsigned int bfn = XEN_PFN_DOWN(dma_to_phys(dev, dev_addr));
>   
>   	/*
>   	 * The swiotlb buffer should be used if
> @@ -115,7 +115,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
>   	 * require a bounce buffer because the device doesn't support coherent
>   	 * memory and we are not able to flush the cache.
>   	 */
> -	return (!hypercall_cflush && (xen_pfn != bfn) &&
> +	return (!hypercall_cflush && !pfn_valid(bfn) &&

I believe this change is incorrect. The bfn is a frame based on Xen page 
granularity (always 4K), while pfn_valid() expects a frame based on the 
kernel page granularity.

>   		!dev_is_dma_coherent(dev));
>   }
>   
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:26:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:26:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbgWh-0006Xn-CN; Thu, 21 May 2020 08:26:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VGGW=7D=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbgWg-0006Xi-0b
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:26:02 +0000
X-Inumbo-ID: b4083ba8-9b3c-11ea-aae2-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4083ba8-9b3c-11ea-aae2-12813bfff9fa;
 Thu, 21 May 2020 08:26:01 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hXHVUCa/FWiIinv37wJMJKlq+4q06rxlpbzT5I9rDrE=; b=dgiAhbOin/mrdQK1tmYjd7ud82
 lc7lhmM+8uyQKk7hTF/rU7sIE1Xhhve5VCs8P2pGzLw9ZorI1DRN8EFZGtxH/8jbwk5j43SfvZeh7
 bSGiG1z6ZsxO3vHAhqC0tkGl5CHnhfA94ecvzTrAjWTliiIhdjonP+Rl4AyCDxER7Bvo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbgWd-0004iC-Jl; Thu, 21 May 2020 08:25:59 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbgWd-0007d5-Bc; Thu, 21 May 2020 08:25:59 +0000
Subject: Re: [PATCH 10/10] xen/arm: call dma_to_phys on the dma_addr_t
 parameter of dma_cache_maint
To: Stefano Stabellini <sstabellini@kernel.org>, jgross@suse.com,
 boris.ostrovsky@oracle.com, konrad.wilk@oracle.com
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-10-sstabellini@kernel.org>
From: Julien Grall <julien@xen.org>
Message-ID: <c8f54c6b-d59e-2e67-1647-32c730b0a124@xen.org>
Date: Thu, 21 May 2020 09:25:57 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520234520.22563-10-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 21/05/2020 00:45, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Add a struct device* parameter to dma_cache_maint.
> 
> Translate the dma_addr_t parameter of dma_cache_maint by calling
> dma_to_phys. Do it for the first page and all the following pages, in
> case of multipage handling.

The term 'page' is confusing here. Are we referring to a Xen page or a 
Linux page?

Also, the same comment as on patches #8 and #9 applies to the commit message.

> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>   arch/arm/xen/mm.c | 15 +++++++++------
>   1 file changed, 9 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
> index 7639251bcc79..6ddf3b3c1ab5 100644
> --- a/arch/arm/xen/mm.c
> +++ b/arch/arm/xen/mm.c
> @@ -43,15 +43,18 @@ unsigned long xen_get_swiotlb_free_pages(unsigned int order)
>   static bool hypercall_cflush = false;
>   
>   /* buffers in highmem or foreign pages cannot cross page boundaries */
> -static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
> +static void dma_cache_maint(struct device *dev, dma_addr_t handle,
> +			    size_t size, u32 op)
>   {
>   	struct gnttab_cache_flush cflush;
>   
> -	cflush.a.dev_bus_addr = handle & XEN_PAGE_MASK;
>   	cflush.offset = xen_offset_in_page(handle);
>   	cflush.op = op;
> +	handle &= XEN_PAGE_MASK;
>   
>   	do {
> +		cflush.a.dev_bus_addr = dma_to_phys(dev, handle);
> +
>   		if (size + cflush.offset > XEN_PAGE_SIZE)
>   			cflush.length = XEN_PAGE_SIZE - cflush.offset;
>   		else
> @@ -60,7 +63,7 @@ static void dma_cache_maint(dma_addr_t handle, size_t size, u32 op)
>   		HYPERVISOR_grant_table_op(GNTTABOP_cache_flush, &cflush, 1);
>   
>   		cflush.offset = 0;
> -		cflush.a.dev_bus_addr += cflush.length;
> +		handle += cflush.length;
>   		size -= cflush.length;
>   	} while (size);
>   }
> @@ -79,7 +82,7 @@ void xen_dma_sync_for_cpu(struct device *dev, dma_addr_t handle,
>   	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
>   		arch_sync_dma_for_cpu(paddr, size, dir);
>   	else if (dir != DMA_TO_DEVICE)
> -		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
> +		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_INVAL);
>   }
>   
>   void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
> @@ -89,9 +92,9 @@ void xen_dma_sync_for_device(struct device *dev, dma_addr_t handle,
>   	if (pfn_valid(PFN_DOWN(dma_to_phys(dev, handle))))
>   		arch_sync_dma_for_device(paddr, size, dir);
>   	else if (dir == DMA_FROM_DEVICE)
> -		dma_cache_maint(handle, size, GNTTAB_CACHE_INVAL);
> +		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_INVAL);
>   	else
> -		dma_cache_maint(handle, size, GNTTAB_CACHE_CLEAN);
> +		dma_cache_maint(dev, handle, size, GNTTAB_CACHE_CLEAN);
>   }
>   
>   bool xen_arch_need_swiotlb(struct device *dev,
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:45:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbgot-0008GF-Vt; Thu, 21 May 2020 08:44:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6/I=7D=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbgot-0008GA-7N
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:44:51 +0000
X-Inumbo-ID: 548e61b8-9b3f-11ea-aae5-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 548e61b8-9b3f-11ea-aae5-12813bfff9fa;
 Thu, 21 May 2020 08:44:49 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: KyszrD7WlIUOasSueQmkZfh8h/tadSY9ereIbPAzS+VeucWBulawpXZzmfLe6K3h2YwN8SMEsR
 dGD7q4gCOItboomsr8HFzHVm0wm75dDKpM0kMwVC2IF0Ml68KxjxYBEo026p59xk+vJskw5H8J
 83mSnCplcuL00UM0xWyTA8IxdR/ANHhEPqwzXPswJ3F8+8YiA/sXneY88wt3KiM9NIRahzwnyi
 +Ibu8n2G34NnEH6AWc2a7zbXzP8pGfmI8kZk1JgWCj2HH11eCfXp8wWk7e1sta0Y9gwDMigLih
 f9o=
X-SBRS: 2.7
X-MesageID: 18327743
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18327743"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] xen/trace: Don't dump offline CPUs in debugtrace_dump_worker()
Date: Thu, 21 May 2020 09:44:22 +0100
Message-ID: <20200521084422.24073-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The 'T' debugkey reliably wedges on one of my systems, which has a sparse
APIC_ID layout due to a non-power-of-2 number of cores per socket.  The
per_cpu(dt_cpu_data, cpu) calculation falls over the deliberately non-canonical
poison value.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Julien Grall <julien@xen.org>
CC: Juergen Gross <jgross@suse.com>

What is weird, however, is that instead of crashing, Xen wedges without
printing a clean backtrace.  Usually it blocks after just a few characters.
The best I managed to get (and can't reproduce) is:

88 cpupool_rm_domain(dom=1,pool=0) n_dom 1
(XEN) wrap: 0
(XEN) debugtrace_dump() global buffer finished
(XEN) ----[ Xen-4.14-unstable  x86_64  debug=y   Not tainted ]----
(XEN) CPU:    3
(XEN) RIP:    e008:[<ffff82d040207b51>] common/debugtrace.c#debugtrace_dump_worker+0x6c/0xa1
(XEN) RFLAGS: 0000000000010006   CONTEXT: hypervisor (d0v13)
(XEN) rax: 80007d2fbf6d1000   rbx: 0000000000000030   rcx: 00000000fffffffa
(XEN) rdx: ffff82d040473c04   rsi: ffff83103ff0fc48   rdi: ffff83103ff0fc3e
(XEN) rbp: ffff83103ff0fc78   rsp: ffff83103ff0fc38   r8:  0000000000000001
(XEN) r9:  0000000000000038   r10: 0000000000000030   r11: 0000000000000002
(XEN) r12: ffff82d0409535a0   r13: ffff83103ff0fc38   r14: ffff82d040930000
(XEN) r15: ffff82d040473bfe   cr0: 0000000080050033   cr4: 0000000000362660
(XEN) cr3: 0000001dd0f74000   cr2: 000000000041e5b0
(XEN) fsb: 00007f5bb0f15780   gsb: ffff88827ad40000   gss: 0000000000000000
(XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: e010   cs: e008
(XEN) Xen code around <ffff82d040207b51> (common/debugtrace.c#debugtrace_dump_worker+0x6c/0xa1):
(XEN)  5e 2d 03 00 49 8b 04 24 <4a> 8b 3c 30 4c 89 ee e8 e6 fe ff ff 83 c3 01 49
(XEN) Xen stack trace from rsp=ffff83103ff0fc38:
(XEN)    ff00383420757063 ffff83103ff0fc48 0000000000000292 0000000000000292
(XEN)    ffff83103ff0fef8 ffff83103ff0ffff ffff83103ff0fd28 ffff83103ff0fef8
(XEN)    ffff83103ff0fc98 ffff82d040207c05 ffff83103ff0fc98 0000000000000054
(XEN)    ffff83103ff0fcb8 ffff82d04021d04a 0000000000000000 0000000000000000
(XEN)    ffff83103ff0fe48 ffff82d0402329f6 ffff831033cd9000 0000000000000000
(XEN)    ffff831000800027 0000000000000001 000000003ff0fcf8 0000000000000286
(XEN)    ffff83103ff0fd28 0000000000000000 00007f5bb0f27010 0000000000000000
(XEN)    0000000000000000 ffff82004009c938 ffff83103ff0fe54 ffff82d04035b055
(XEN)    ffff831033cd9000 ffff82c000000000 ffff83103ff0fd68 ffff82d040234612
(XEN)    ffff831033c8e068 0000000000000003 0000000000823679 ffff83103ff0fdf8
(XEN)    ffff83103ff0fd88 ffff82d040350f14 ffff83103ff0fdb8 0000001300000007
(XEN)    00007f5bb0f28010 0000000000000001 00007f5bafeb02c4 000000000000001c
(XEN)    00007f5bb01f02a0 0000000000000001 00007ffd208ae538 0000000000637a70
(XEN)    0000000000637a30 0000000000424e59 00007f5baff0d88b 00007f5bb01f33c0
(XEN)    0000000000000000 0000000000000002 00007f5baff0d913 0000000000000000
(XEN)    ffff82d0403b33d4 ffff83103ff0fef8 0000000000000230 ffff831033c8e000
(XEN)    0000000000000001 deadbeefdeadf00d ffff83103ff0fee8 ffff82d04032f0e0
(XEN)    ffff82d0403b33d4 00007f5bb0f27010 deadbeefdeadf00d deadbeefdeadf00d
(XEN)    deadbeefdeadf00d deadbeefdeadf00d ffff82d0403b33d4 ffff82d0403b33c8
(XEN)    ffff82d0403b33d4 ffff82d0403b33c8 ffff82d0403b33d4 ffff82d0403b33c8
(XEN) Xen call trace:
(XEN)    [<ffff82d040207b51>] R common/debugtrace.c#debugtrace_dump_worker+0x6c/0xa1
(XEN)    [<ffff82d040207c05>] F common/debugtrace.c#debugtrace_key+0x7f/0x81
(XEN)    [<ffff82d04021d04a>] F handle_keypress+0xb2/0xc9
(XEN)    [<ffff82d0402329f6>] F do_sysctl+0x6bc/0x148b
(XEN)    [<ffff82d04032f0e0>] F pv_hypercall+0x2fd/0x578
(XEN)    [<ffff82d0403b3432>] F lstar_enter+0x112/0x120
(XEN)

which lacks the remainder of the #GP output from the non-canonical memory
reference in mov (%rax,%r14,1), %rdi.  The wedge also doesn't suffer a
watchdog timeout, which is even more concerning.
---
 xen/common/debugtrace.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/common/debugtrace.c b/xen/common/debugtrace.c
index c21ec99ee0..f3794b9453 100644
--- a/xen/common/debugtrace.c
+++ b/xen/common/debugtrace.c
@@ -95,7 +95,7 @@ static void debugtrace_dump_worker(void)
 
     debugtrace_dump_buffer(dt_data, "global");
 
-    for ( cpu = 0; cpu < nr_cpu_ids; cpu++ )
+    for_each_online_cpu ( cpu )
     {
         char buf[16];
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu May 21 08:45:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:45:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbgpd-0008KA-8j; Thu, 21 May 2020 08:45:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S6UB=7D=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbgpb-0008Iy-LU
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:45:35 +0000
X-Inumbo-ID: 6efb2dec-9b3f-11ea-aae5-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6efb2dec-9b3f-11ea-aae5-12813bfff9fa;
 Thu, 21 May 2020 08:45:34 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 5oCZn25bClIxw0GsmZhUudrBK0bc81jdjDBeG8szzRIMS6ZMavw5YrZokg9KZwKLk1OK7pqcoY
 pdKSf2ARU/DdrmEwwT5TyCS8M73sFyWIxgOeK3PEkduB9gWtpCQCXKSwLbvCQRL+d2gPrY2epk
 dAp+2xrO2dqg0+77oW9rWXq5xczobgRgRd1OugBEbyKmQrgDR7bPQpdi+X92u209nTsbrbcuaQ
 OsCjcnOHwrZIKh5uKUZvRuDMK9kEkDn+yLH7lZVjLOflz1RguNiQsOhv3tk8LzISkk0+OGvP1k
 6Ig=
X-SBRS: 2.7
X-MesageID: 18093687
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18093687"
Date: Thu, 21 May 2020 10:45:23 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
Message-ID: <20200521084523.GP54375@Air-de-Roger>
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-3-roger.pau@citrix.com>
 <eaa63636-6e39-d401-e218-14ae37440667@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <eaa63636-6e39-d401-e218-14ae37440667@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 10:30:11PM +0100, Andrew Cooper wrote:
> On 15/05/2020 14:58, Roger Pau Monne wrote:
> > Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
> > BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
> > Dispatched Before an Interrupt of The Same Priority Completes".
> 
> HSM175 et al, so presumably a HSD, and HSE as well.
> 
> On the broadwell side at least, BDD BDW in addition

But that is a different erratum AFAICT ('An APIC Timer Interrupt
During Core C6 Entry May be Lost'), and the workaround should also be
different, I think. Should we instead mark the local APIC timer as not
reliable in C6 or deeper states via lapic_timer_reliable_states, so that
it is disabled before entering sleep?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 21 08:59:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:59:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbh3C-0000wc-FZ; Thu, 21 May 2020 08:59:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbh3A-0000wX-QE
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:59:36 +0000
X-Inumbo-ID: 6533a2c4-9b41-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6533a2c4-9b41-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 08:59:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hC9tgQbr/TL4+Mod3DucwRrUzybUG/qIhAb+8o6gIVI=; b=k5Dmfcbpspf0I0G1yKqu78X1TF
 kbs5GDTJdQbCC8B/D9IYxBh8epzlNAMJ2InHykUpqQxyk5nc68zWPSA5YBQXcW7/lMeTd2HbxHvA0
 0QPIVUKUZD2nslYmbWHwx6ixD0J3oV0CqRHplUzNb2WTBmjaqJ7ubY+6SUZLHO6qRLd8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbh39-0005Ob-QO; Thu, 21 May 2020 08:59:35 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbh39-0000wi-HB; Thu, 21 May 2020 08:59:35 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 0/5] domain context infrastructure
Date: Thu, 21 May 2020 09:59:27 +0100
Message-Id: <20200521085932.10508-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (5):
  xen/common: introduce a new framework for save/restore of 'domain'
    context
  xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
  tools/misc: add xen-domctx to present domain context
  common/domain: add a domain context record for shared_info...
  tools/libxc: make use of domain context SHARED_INFO record...

 .gitignore                             |   1 +
 tools/flask/policy/modules/xen.if      |   4 +-
 tools/libxc/include/xenctrl.h          |   5 +
 tools/libxc/xc_domain.c                |  56 +++++
 tools/libxc/xc_sr_common.c             |  67 ++++++
 tools/libxc/xc_sr_common.h             |  11 +-
 tools/libxc/xc_sr_common_x86_pv.c      |  74 ++++++
 tools/libxc/xc_sr_common_x86_pv.h      |   3 +
 tools/libxc/xc_sr_restore_x86_pv.c     |  26 +-
 tools/libxc/xc_sr_save_x86_pv.c        |  43 ++--
 tools/libxc/xg_save_restore.h          |   1 +
 tools/misc/Makefile                    |   4 +
 tools/misc/xen-domctx.c                | 278 ++++++++++++++++++++++
 xen/common/Makefile                    |   1 +
 xen/common/domain.c                    |  59 +++++
 xen/common/domctl.c                    | 173 ++++++++++++++
 xen/common/save.c                      | 314 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/domctl.h            |  41 ++++
 xen/include/public/save.h              | 100 ++++++++
 xen/include/xen/save.h                 | 170 +++++++++++++
 xen/xsm/flask/hooks.c                  |   6 +
 xen/xsm/flask/policy/access_vectors    |   4 +
 24 files changed, 1405 insertions(+), 46 deletions(-)
 create mode 100644 tools/misc/xen-domctx.c
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 08:59:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:59:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbh3G-0000wv-NV; Thu, 21 May 2020 08:59:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbh3F-0000wl-5o
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:59:41 +0000
X-Inumbo-ID: 672dff70-9b41-11ea-aae7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 672dff70-9b41-11ea-aae7-12813bfff9fa;
 Thu, 21 May 2020 08:59:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=WiiyBckKp4IeQF4EvI/vsFSw/vDZFhVSPuyj349dhzY=; b=BdbA5KYep/zur2zLkHSuQWLhFj
 PZVxTtP1f4P45P8vVAMsu99JomYHBbJq0REisbz3YNbIsGbtcqvCPJyUgMZehniIfATAeTJVui2zv
 iQwdExYry+l9evXU0UfajssnMdcG1G/uom/2yKkBVPysSBhVU/ZFLj02/VyNdcgkRy0U=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbh3B-0005Of-Rb; Thu, 21 May 2020 08:59:37 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbh3B-0000wi-D7; Thu, 21 May 2020 08:59:37 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 1/5] xen/common: introduce a new framework for save/restore
 of 'domain' context
Date: Thu, 21 May 2020 09:59:28 +0100
Message-Id: <20200521085932.10508-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521085932.10508-1-paul@xen.org>
References: <20200521085932.10508-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To allow enlightened HVM guests (i.e. those that have PV drivers) to be
migrated without their co-operation, it will be necessary to transfer 'PV'
state such as event channel state, grant entry state, etc.

Currently there is a framework (entered via the hvm_save/load() functions)
that allows a domain's 'HVM' (architectural) state to be transferred, but
'PV' state is also common to pure PV guests and so this framework is not
really suitable.

This patch adds the new public header and low-level implementation of a new
common framework, entered via the domain_save/load() functions. Subsequent
patches will introduce other parts of the framework, and code that will
make use of it within the current version of the libxc migration stream.

This patch also marks the HVM-only framework as deprecated in favour of the
new framework.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Addressed further comments from Jan

v3:
 - Addressed comments from Julien and Jan
 - Save handlers no longer need to state entry length up-front
 - Save handlers expected to deal with multiple instances internally
 - Entries are now auto-padded to an 8-byte boundary

v2:
 - Allow multi-stage save/load to avoid the need to double-buffer
 - Get rid of the masks and add an 'ignore' flag instead
 - Create copy function union to preserve const save buffer
 - Deprecate HVM-only framework
---
 xen/common/Makefile                    |   1 +
 xen/common/save.c                      | 314 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/save.h              |  89 +++++++
 xen/include/xen/save.h                 | 170 +++++++++++++
 6 files changed, 584 insertions(+)
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8cde65370..90553ba5d7 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -37,6 +37,7 @@ obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
+obj-y += save.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
diff --git a/xen/common/save.c b/xen/common/save.c
new file mode 100644
index 0000000000..f7b815b5ef
--- /dev/null
+++ b/xen/common/save.c
@@ -0,0 +1,314 @@
+/*
+ * save.c: Save and restore PV guest state common to all domain types.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/compile.h>
+#include <xen/save.h>
+
+struct domain_context {
+    struct domain *domain;
+    const char *name; /* for logging purposes */
+    struct domain_save_descriptor desc;
+    size_t len; /* for internal accounting */
+    union {
+        const struct domain_save_ops *save;
+        const struct domain_load_ops *load;
+    } ops;
+    void *priv;
+};
+
+static struct {
+    const char *name;
+    domain_save_handler save;
+    domain_load_handler load;
+} handlers[DOMAIN_SAVE_CODE_MAX + 1];
+
+void __init domain_register_save_type(unsigned int typecode,
+                                      const char *name,
+                                      domain_save_handler save,
+                                      domain_load_handler load)
+{
+    BUG_ON(typecode >= ARRAY_SIZE(handlers));
+
+    ASSERT(!handlers[typecode].save);
+    ASSERT(!handlers[typecode].load);
+
+    handlers[typecode].name = name;
+    handlers[typecode].save = save;
+    handlers[typecode].load = load;
+}
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int instance)
+{
+    int rc;
+
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+    ASSERT(!c->desc.length); /* Should always be zero during domain_save() */
+    ASSERT(!c->len); /* Verify domain_save_end() was called */
+
+    c->desc.instance = instance;
+
+    rc = c->ops.save->begin(c->priv, &c->desc);
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+int domain_save_data(struct domain_context *c, const void *src, size_t len)
+{
+    int rc = c->ops.save->append(c->priv, src, len);
+
+    if ( !rc )
+        c->len += len;
+
+    return rc;
+}
+
+#define DOMAIN_SAVE_ALIGN 8
+
+int domain_save_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
+    int rc;
+
+    if ( len )
+    {
+        static const uint8_t pad[DOMAIN_SAVE_ALIGN] = {};
+
+        rc = domain_save_data(c, pad, len);
+
+        if ( rc )
+            return rc;
+    }
+    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));
+
+    if ( c->name )
+        gdprintk(XENLOG_INFO, "%pd save: %s[%u] +%zu (-%zu)\n", d, c->name,
+                 c->desc.instance, c->len, len);
+
+    rc = c->ops.save->end(c->priv, c->len);
+    c->len = 0;
+
+    return rc;
+}
+
+int domain_save(struct domain *d, const struct domain_save_ops *ops,
+                void *priv, bool dry_run)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.save = ops,
+        .priv = priv,
+    };
+    static const struct domain_save_header h = {
+        .magic = DOMAIN_SAVE_MAGIC,
+        .xen_major = XEN_VERSION,
+        .xen_minor = XEN_SUBVERSION,
+        .version = DOMAIN_SAVE_VERSION,
+    };
+    const struct domain_save_end e = {};
+    unsigned int i;
+    int rc;
+
+    ASSERT(d != current->domain);
+    domain_pause(d);
+
+    c.name = !dry_run ? "HEADER" : NULL;
+    c.desc.typecode = DOMAIN_SAVE_CODE(HEADER);
+
+    rc = DOMAIN_SAVE_ENTRY(HEADER, &c, 0, &h, sizeof(h));
+    if ( rc )
+        goto out;
+
+    for ( i = 0; i < ARRAY_SIZE(handlers); i++ )
+    {
+        domain_save_handler save = handlers[i].save;
+
+        if ( !save )
+            continue;
+
+        c.name = !dry_run ? handlers[i].name : NULL;
+        memset(&c.desc, 0, sizeof(c.desc));
+        c.desc.typecode = i;
+
+        rc = save(d, &c, dry_run);
+        if ( rc )
+            goto out;
+    }
+
+    c.name = !dry_run ? "END" : NULL;
+    memset(&c.desc, 0, sizeof(c.desc));
+    c.desc.typecode = DOMAIN_SAVE_CODE(END);
+
+    rc = DOMAIN_SAVE_ENTRY(END, &c, 0, &e, sizeof(e));
+
+ out:
+    domain_unpause(d);
+
+    return rc;
+}
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int *instance)
+{
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+
+    ASSERT(!c->len); /* Verify domain_load_end() was called */
+
+    *instance = c->desc.instance;
+
+    return 0;
+}
+
+int domain_load_data(struct domain_context *c, void *dst, size_t len)
+{
+    size_t copy_len = min_t(size_t, len, c->desc.length - c->len);
+    int rc;
+
+    c->len += copy_len;
+    ASSERT(c->len <= c->desc.length);
+
+    rc = copy_len ? c->ops.load->read(c->priv, dst, copy_len) : 0;
+    if ( rc )
+        return rc;
+
+    /* Zero extend if the entry is exhausted */
+    len -= copy_len;
+    if ( len )
+    {
+        dst += copy_len;
+        memset(dst, 0, len);
+    }
+
+    return 0;
+}
+
+int domain_load_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    size_t len = c->desc.length - c->len;
+
+    while ( c->len != c->desc.length ) /* unconsumed data or pad */
+    {
+        uint8_t pad;
+        int rc = domain_load_data(c, &pad, sizeof(pad));
+
+        if ( rc )
+            return rc;
+
+        if ( pad )
+            return -EINVAL;
+    }
+
+    gdprintk(XENLOG_INFO, "%pd load: %s[%u] +%zu (-%zu)\n", d, c->name,
+             c->desc.instance, c->len, len);
+
+    c->len = 0;
+
+    return 0;
+}
+
+int domain_load(struct domain *d, const struct domain_load_ops *ops,
+                void *priv)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.load = ops,
+        .priv = priv,
+    };
+    unsigned int instance;
+    struct domain_save_header h;
+    int rc;
+
+    ASSERT(d != current->domain);
+
+    rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+    if ( rc )
+        return rc;
+
+    c.name = "HEADER";
+
+    rc = DOMAIN_LOAD_ENTRY(HEADER, &c, &instance, &h, sizeof(h));
+    if ( rc )
+        return rc;
+
+    if ( instance || h.magic != DOMAIN_SAVE_MAGIC ||
+         h.version != DOMAIN_SAVE_VERSION )
+        return -EINVAL;
+
+    domain_pause(d);
+
+    for (;;)
+    {
+        unsigned int i;
+        domain_load_handler load;
+
+        rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+        if ( rc )
+            break; /* don't return with the domain still paused */
+
+        rc = -EINVAL;
+
+        if ( c.desc.typecode == DOMAIN_SAVE_CODE(END) )
+        {
+            struct domain_save_end e;
+
+            c.name = "END";
+
+            rc = DOMAIN_LOAD_ENTRY(END, &c, &instance, &e, sizeof(e));
+
+            if ( !rc && instance )
+                rc = -EINVAL;
+
+            break;
+        }
+
+        i = c.desc.typecode;
+        if ( i >= ARRAY_SIZE(handlers) )
+            break;
+
+        c.name = handlers[i].name;
+        load = handlers[i].load;
+
+        rc = load ? load(d, &c) : -EOPNOTSUPP;
+        if ( rc )
+            break;
+    }
+
+    domain_unpause(d);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
index 75b8e65bcb..d5b0c15203 100644
--- a/xen/include/public/arch-arm/hvm/save.h
+++ b/xen/include/public/arch-arm/hvm/save.h
@@ -26,6 +26,11 @@
 #ifndef __XEN_PUBLIC_HVM_SAVE_ARM_H__
 #define __XEN_PUBLIC_HVM_SAVE_ARM_H__
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif
 
 /*
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 773a380bc2..e61e2dbcd7 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -648,6 +648,11 @@ struct hvm_msr {
  */
 #define HVM_SAVE_CODE_MAX 20
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
 
 /*
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
new file mode 100644
index 0000000000..551dbbddb8
--- /dev/null
+++ b/xen/include/public/save.h
@@ -0,0 +1,89 @@
+/*
+ * save.h
+ *
+ * Structure definitions for common PV/HVM domain state that is held by
+ * Xen and must be saved along with the domain's memory.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef XEN_PUBLIC_SAVE_H
+#define XEN_PUBLIC_SAVE_H
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+#include "xen.h"
+
+/* Entry data is preceded by a descriptor */
+struct domain_save_descriptor {
+    uint16_t typecode;
+
+    /*
+     * Instance number of the entry (since there may be multiple of some
+     * types of entries).
+     */
+    uint16_t instance;
+
+    /* Entry length not including this descriptor */
+    uint32_t length;
+};
+
+/*
+ * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
+ * binds these things together, although it is not intended that the
+ * resulting type is ever instantiated.
+ */
+#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
+    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
+
+#define DOMAIN_SAVE_CODE(_x) \
+    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->c))
+#define DOMAIN_SAVE_TYPE(_x) \
+    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->t)
+
+/*
+ * All entries will be zero-padded to the next 64-bit boundary when saved,
+ * so there is no need to include trailing pad fields in structure
+ * definitions.
+ * When loading, entries will be zero-extended if the load handler reads
+ * beyond the length specified in the descriptor.
+ */
+
+/* Terminating entry */
+struct domain_save_end {};
+DECLARE_DOMAIN_SAVE_TYPE(END, 0, struct domain_save_end);
+
+#define DOMAIN_SAVE_MAGIC   0x53415645
+#define DOMAIN_SAVE_VERSION 0x00000001
+
+/* Initial entry */
+struct domain_save_header {
+    uint32_t magic;                /* Must be DOMAIN_SAVE_MAGIC */
+    uint16_t xen_major, xen_minor; /* Xen version */
+    uint32_t version;              /* Save format version */
+};
+DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
+
+#define DOMAIN_SAVE_CODE_MAX 1
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
+#endif /* XEN_PUBLIC_SAVE_H */
diff --git a/xen/include/xen/save.h b/xen/include/xen/save.h
new file mode 100644
index 0000000000..f2e58bafef
--- /dev/null
+++ b/xen/include/xen/save.h
@@ -0,0 +1,170 @@
+/*
+ * save.h: support routines for save/restore
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef XEN_SAVE_H
+#define XEN_SAVE_H
+
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+
+#include <public/save.h>
+
+struct domain_context;
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int instance);
+
+#define DOMAIN_SAVE_BEGIN(x, c, i) \
+    domain_save_begin((c), DOMAIN_SAVE_CODE(x), (i))
+
+int domain_save_data(struct domain_context *c, const void *data, size_t len);
+int domain_save_end(struct domain_context *c);
+
+static inline int domain_save_entry(struct domain_context *c,
+                                    unsigned int typecode,
+                                    unsigned int instance, const void *src,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_save_begin(c, typecode, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, src, len);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+#define DOMAIN_SAVE_ENTRY(x, c, i, s, l) \
+    domain_save_entry((c), DOMAIN_SAVE_CODE(x), (i), (s), (l))
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int *instance);
+
+#define DOMAIN_LOAD_BEGIN(x, c, i) \
+    domain_load_begin((c), DOMAIN_SAVE_CODE(x), (i))
+
+int domain_load_data(struct domain_context *c, void *data, size_t len);
+int domain_load_end(struct domain_context *c);
+
+static inline int domain_load_entry(struct domain_context *c,
+                                    unsigned int typecode,
+                                    unsigned int *instance, void *dst,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_load_begin(c, typecode, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_load_data(c, dst, len);
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+#define DOMAIN_LOAD_ENTRY(x, c, i, d, l) \
+    domain_load_entry((c), DOMAIN_SAVE_CODE(x), (i), (d), (l))
+
+/*
+ * The 'dry_run' flag indicates that the caller of domain_save() (see below)
+ * is not trying to actually acquire the data, only the size of the data.
+ * The save handler can therefore limit work to only that which is necessary
+ * to call domain_save_data() the correct number of times with accurate values
+ * for 'len'.
+ */
+typedef int (*domain_save_handler)(const struct domain *d,
+                                   struct domain_context *c,
+                                   bool dry_run);
+typedef int (*domain_load_handler)(struct domain *d,
+                                   struct domain_context *c);
+
+void domain_register_save_type(unsigned int typecode, const char *name,
+                               domain_save_handler save,
+                               domain_load_handler load);
+
+/*
+ * Register save and load handlers.
+ *
+ * Save handlers will be invoked in an order which copes with any inter-
+ * entry dependencies. For now this means that HEADER will come first and
+ * END will come last, all others being invoked in order of 'typecode'.
+ *
+ * Load handlers will be invoked in the order of entries present in the
+ * buffer.
+ */
+#define DOMAIN_REGISTER_SAVE_LOAD(x, s, l)                    \
+    static int __init __domain_register_##x##_save_load(void) \
+    {                                                         \
+        domain_register_save_type(                            \
+            DOMAIN_SAVE_CODE(x),                              \
+            #x,                                               \
+            &(s),                                             \
+            &(l));                                            \
+                                                              \
+        return 0;                                             \
+    }                                                         \
+    __initcall(__domain_register_##x##_save_load);
+
+/* Callback functions */
+struct domain_save_ops {
+    /*
+     * Begin a new entry with the given descriptor (only type and instance
+     * are valid).
+     */
+    int (*begin)(void *priv, const struct domain_save_descriptor *desc);
+    /* Append data/padding to the buffer */
+    int (*append)(void *priv, const void *data, size_t len);
+    /*
+     * Complete the entry by updating the descriptor with the total
+     * length of the appended data (not including padding).
+     */
+    int (*end)(void *priv, size_t len);
+};
+
+struct domain_load_ops {
+    /* Read data/padding from the buffer */
+    int (*read)(void *priv, void *data, size_t len);
+};
+
+/*
+ * Entry points:
+ *
+ * ops:     These are callback functions provided by the caller that will
+ *          be used to write to (in the save case) or read from (in the
+ *          load case) the context buffer. See above for more detail.
+ * priv:    This is a pointer that will be passed to the copy function to
+ *          allow it to identify the context buffer and the current state
+ *          of the save or load operation.
+ * dry_run: If this is set then the caller of domain_save() is only trying
+ *          to acquire the total size of the data, not the data itself.
+ *          In this case the caller may supply different ops to avoid doing
+ *          unnecessary work.
+ */
+int domain_save(struct domain *d, const struct domain_save_ops *ops,
+                void *priv, bool dry_run);
+int domain_load(struct domain *d, const struct domain_load_ops *ops,
+                void *priv);
+
+#endif /* XEN_SAVE_H */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 08:59:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:59:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbh3J-0000xq-41; Thu, 21 May 2020 08:59:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbh3H-0000xJ-N9
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:59:43 +0000
X-Inumbo-ID: 693d80ec-9b41-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 693d80ec-9b41-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 08:59:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=l7LfIwH4Qwi+PKHHgB16HyW1BuWSKpamDKSJlCwi4Co=; b=VGBwrgMURZ2Uc7HVcO6oQ02NhK
 eDbPpp0Bw9PchBy19ybxSXTpMFciaafrEs6tniuR6d0g05Nh0MzkgAs4ZpqCkG0SsBJDOZO5U8l45
 PPL/RCg02ZNzxf7QdqXKqve9A4ywRfsc1AWb6HPZ9rmCTvJcSAK7o/U/wMRLXDFmFSlM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbh3G-0005Ox-4a; Thu, 21 May 2020 08:59:42 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbh3F-0000wi-Rm; Thu, 21 May 2020 08:59:42 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 4/5] common/domain: add a domain context record for
 shared_info...
Date: Thu, 21 May 2020 09:59:31 +0100
Message-Id: <20200521085932.10508-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521085932.10508-1-paul@xen.org>
References: <20200521085932.10508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

... and update xen-domctx to dump some information describing the record.

NOTE: The domain may or may not be using the embedded vcpu_info array, so
      ultimately separate context records will be added for vcpu_info when
      this becomes necessary.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v4:
 - Addressed comments from Jan

v3:
 - Actually dump some of the content of shared_info

v2:
 - Drop the header change to define a 'Xen' page size and instead use a
   variable-length struct now that the framework makes this feasible
 - Guard use of 'has_32bit_shinfo' in common code with CONFIG_COMPAT
---
 tools/misc/xen-domctx.c   | 78 +++++++++++++++++++++++++++++++++++++++
 xen/common/domain.c       | 59 +++++++++++++++++++++++++++++
 xen/include/public/save.h | 13 ++++++-
 3 files changed, 149 insertions(+), 1 deletion(-)

diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
index 243325dfce..6ead7ea89d 100644
--- a/tools/misc/xen-domctx.c
+++ b/tools/misc/xen-domctx.c
@@ -31,6 +31,7 @@
 #include <errno.h>
 
 #include <xenctrl.h>
+#include <xen-tools/libs.h>
 #include <xen/xen.h>
 #include <xen/domctl.h>
 #include <xen/save.h>
@@ -61,6 +62,82 @@ static void dump_header(void)
 
 }
 
+static void print_binary(const char *prefix, const void *val, size_t size,
+                         const char *suffix)
+{
+    printf("%s", prefix);
+
+    while ( size-- )
+    {
+        uint8_t octet = *(const uint8_t *)val++;
+        unsigned int i;
+
+        for ( i = 0; i < 8; i++ )
+        {
+            printf("%u", octet & 1);
+            octet >>= 1;
+        }
+    }
+
+    printf("%s", suffix);
+}
+
+static void dump_shared_info(void)
+{
+    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+    bool has_32bit_shinfo;
+    shared_info_any_t *info;
+    unsigned int i, n;
+
+    GET_PTR(s);
+    has_32bit_shinfo = s->flags & DOMAIN_SAVE_32BIT_SHINFO;
+
+    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
+           has_32bit_shinfo ? "true" : "false", s->buffer_size);
+
+    info = (shared_info_any_t *)s->buffer;
+
+#define GET_FIELD_PTR(_f)            \
+    (has_32bit_shinfo ?              \
+     (const void *)&(info->x32._f) : \
+     (const void *)&(info->x64._f))
+#define GET_FIELD_SIZE(_f) \
+    (has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
+#define GET_FIELD(_f) \
+    (has_32bit_shinfo ? info->x32._f : info->x64._f)
+
+    n = has_32bit_shinfo ?
+        ARRAY_SIZE(info->x32.evtchn_pending) :
+        ARRAY_SIZE(info->x64.evtchn_pending);
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                 evtchn_pending: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
+                     GET_FIELD_SIZE(evtchn_pending[0]), "\n");
+    }
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                    evtchn_mask: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
+                     GET_FIELD_SIZE(evtchn_mask[0]), "\n");
+    }
+
+    printf("                 wc: version: %u sec: %u nsec: %u\n",
+           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));
+
+#undef GET_FIELD
+#undef GET_FIELD_SIZE
+#undef GET_FIELD_PTR
+}
+
 static void dump_end(void)
 {
     DOMAIN_SAVE_TYPE(END) *e;
@@ -173,6 +250,7 @@ int main(int argc, char **argv)
             switch (desc->typecode)
             {
             case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(SHARED_INFO): dump_shared_info(); break;
             case DOMAIN_SAVE_CODE(END): dump_end(); break;
             default:
                 printf("Unknown type %u: skipping\n", desc->typecode);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..14e96c3bc2 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -33,6 +33,7 @@
 #include <xen/xenoprof.h>
 #include <xen/irq.h>
 #include <xen/argo.h>
+#include <xen/save.h>
 #include <asm/debugger.h>
 #include <asm/p2m.h>
 #include <asm/processor.h>
@@ -1649,6 +1650,64 @@ int continue_hypercall_on_cpu(
     return 0;
 }
 
+static int save_shared_info(const struct domain *d, struct domain_context *c,
+                            bool dry_run)
+{
+    struct domain_shared_info_context ctxt = {
+#ifdef CONFIG_COMPAT
+        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
+#endif
+        .buffer_size = sizeof(shared_info_t),
+    };
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    int rc;
+
+    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+static int load_shared_info(struct domain *d, struct domain_context *c)
+{
+    struct domain_shared_info_context ctxt;
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    unsigned int i;
+    int rc;
+
+    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
+    if ( rc || i ) /* expect only a single instance */
+        return rc;
+
+    rc = domain_load_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    if ( ctxt.buffer_size != sizeof(shared_info_t) )
+        return -EINVAL;
+
+#ifdef CONFIG_COMPAT
+    has_32bit_shinfo(d) = ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO;
+#endif
+
+    rc = domain_load_data(c, d->shared_info, sizeof(shared_info_t));
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+DOMAIN_REGISTER_SAVE_LOAD(SHARED_INFO, save_shared_info, load_shared_info);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
index 551dbbddb8..0e855a4b97 100644
--- a/xen/include/public/save.h
+++ b/xen/include/public/save.h
@@ -82,7 +82,18 @@ struct domain_save_header {
 };
 DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
 
-#define DOMAIN_SAVE_CODE_MAX 1
+struct domain_shared_info_context {
+    uint32_t flags;
+
+#define DOMAIN_SAVE_32BIT_SHINFO 0x00000001
+
+    uint32_t buffer_size;
+    uint8_t buffer[XEN_FLEX_ARRAY_DIM]; /* Implementation specific size */
+};
+
+DECLARE_DOMAIN_SAVE_TYPE(SHARED_INFO, 2, struct domain_shared_info_context);
+
+#define DOMAIN_SAVE_CODE_MAX 2
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 08:59:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbh3L-0000z0-CD; Thu, 21 May 2020 08:59:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbh3K-0000yV-5F
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:59:46 +0000
X-Inumbo-ID: 681b97a8-9b41-11ea-aae7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 681b97a8-9b41-11ea-aae7-12813bfff9fa;
 Thu, 21 May 2020 08:59:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=q3yI5jX4C2dU+w1+3ZXLIp5svvvO70bzN9dQNXuQr2Q=; b=zdZLwobKbVGpriFIheXuLtr2Gv
 pACt5W32eooKocQ7+XxpiHIYUisql5Er8ABBiQzdRmwZu4o2Ke/6OM1IjIsZMBhYvuWMpDo0HfxEo
 PXdILFl4TIA1PIg1MSTme/mkE4wqZH+ydewDCaBzQCSYTF7PZei8pqVoIzLknIeinun0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbh3E-0005Om-Jr; Thu, 21 May 2020 08:59:40 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbh3E-0000wi-Au; Thu, 21 May 2020 08:59:40 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 3/5] tools/misc: add xen-domctx to present domain context
Date: Thu, 21 May 2020 09:59:30 +0100
Message-Id: <20200521085932.10508-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521085932.10508-1-paul@xen.org>
References: <20200521085932.10508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This tool is analogous to 'xen-hvmctx', which presents HVM context.
Subsequent patches will add 'dump' functions when new records are
introduced.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>

v3:
 - Re-worked to avoid copying onto stack
 - Added optional typecode and instance arguments

v2:
 - Change name from 'xen-ctx' to 'xen-domctx'
---
 .gitignore              |   1 +
 tools/misc/Makefile     |   4 +
 tools/misc/xen-domctx.c | 200 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 205 insertions(+)
 create mode 100644 tools/misc/xen-domctx.c

diff --git a/.gitignore b/.gitignore
index 7418ce9829..6da3030f0d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -209,6 +209,7 @@ tools/misc/xen_cpuperf
 tools/misc/xen-cpuid
 tools/misc/xen-detect
 tools/misc/xen-diag
+tools/misc/xen-domctx
 tools/misc/xen-tmem-list-parse
 tools/misc/xen-livepatch
 tools/misc/xenperf
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 63947bfadc..ef25524354 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -30,6 +30,7 @@ INSTALL_SBIN                   += xenpm
 INSTALL_SBIN                   += xenwatchdogd
 INSTALL_SBIN                   += xen-livepatch
 INSTALL_SBIN                   += xen-diag
+INSTALL_SBIN                   += xen-domctx
 INSTALL_SBIN += $(INSTALL_SBIN-y)
 
 # Everything to be installed in a private bin/
@@ -108,6 +109,9 @@ xen-livepatch: xen-livepatch.o
 xen-diag: xen-diag.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xen-domctx: xen-domctx.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
 xen-lowmemd: xen-lowmemd.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
new file mode 100644
index 0000000000..243325dfce
--- /dev/null
+++ b/tools/misc/xen-domctx.c
@@ -0,0 +1,200 @@
+/*
+ * xen-domctx.c
+ *
+ * Print out domain save records in a human-readable way.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <xenctrl.h>
+#include <xen/xen.h>
+#include <xen/domctl.h>
+#include <xen/save.h>
+
+static void *buf = NULL;
+static size_t len, off;
+
+#define GET_PTR(_x)                                                        \
+    do {                                                                   \
+        if ( len - off < sizeof(*(_x)) )                                   \
+        {                                                                  \
+            fprintf(stderr,                                                \
+                    "error: need another %zu bytes, only %zu available\n", \
+                    sizeof(*(_x)), len - off);                             \
+            exit(1);                                                       \
+        }                                                                  \
+        (_x) = buf + off;                                                  \
+    } while (false)
+
+static void dump_header(void)
+{
+    DOMAIN_SAVE_TYPE(HEADER) *h;
+
+    GET_PTR(h);
+
+    printf("    HEADER: magic %#x, version %u\n",
+           h->magic, h->version);
+
+}
+
+static void dump_end(void)
+{
+    DOMAIN_SAVE_TYPE(END) *e;
+
+    GET_PTR(e);
+
+    printf("    END\n");
+}
+
+static void usage(const char *prog)
+{
+    fprintf(stderr, "usage: %s <domid> [ <typecode> [ <instance> ]]\n",
+            prog);
+    exit(1);
+}
+
+int main(int argc, char **argv)
+{
+    char *s, *e;
+    long domid;
+    long typecode = -1;
+    long instance = -1;
+    unsigned int entry;
+    xc_interface *xch;
+    int rc;
+
+    if ( argc < 2 || argc > 4 )
+        usage(argv[0]);
+
+    s = e = argv[1];
+    domid = strtol(s, &e, 0);
+
+    if ( *s == '\0' || *e != '\0' ||
+         domid < 0 || domid >= DOMID_FIRST_RESERVED )
+    {
+        fprintf(stderr, "invalid domid '%s'\n", s);
+        exit(1);
+    }
+
+    if ( argc >= 3 )
+    {
+        s = e = argv[2];
+        typecode = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid typecode '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    if ( argc == 4 )
+    {
+        s = e = argv[3];
+        instance = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid instance '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    xch = xc_interface_open(0, 0, 0);
+    if ( !xch )
+    {
+        fprintf(stderr, "error: can't open libxc handle\n");
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get record length for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+
+    buf = malloc(len);
+    if ( !buf )
+    {
+        fprintf(stderr, "error: can't allocate %zu bytes\n", len);
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, buf, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get domain record for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+    off = 0;
+
+    entry = 0;
+    for ( ; ; )
+    {
+        struct domain_save_descriptor *desc;
+
+        GET_PTR(desc);
+
+        off += sizeof(*desc);
+
+        if ( (typecode < 0 || typecode == desc->typecode) &&
+             (instance < 0 || instance == desc->instance) )
+        {
+            printf("[%u] type: %u instance: %u length: %u\n", entry++,
+                   desc->typecode, desc->instance, desc->length);
+
+            switch (desc->typecode)
+            {
+            case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(END): dump_end(); break;
+            default:
+                printf("Unknown type %u: skipping\n", desc->typecode);
+                break;
+            }
+        }
+
+        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
+            break;
+
+        off += desc->length;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 08:59:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 08:59:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbh3Q-000112-Ku; Thu, 21 May 2020 08:59:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbh3P-00010T-5F
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:59:51 +0000
X-Inumbo-ID: 681b97a9-9b41-11ea-aae7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 681b97a9-9b41-11ea-aae7-12813bfff9fa;
 Thu, 21 May 2020 08:59:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZZDDA75nTtiForn2IeXE4ehc4LRX7YW6c2uWXA5HXMU=; b=7EFbaliBVQRRtfXTQxu2zVBDuw
 5/BBPV4tN6ZY+cadSp07m3OGWwcSn3nlrct6PLiYQXMfXq63uDCxKPbtPlXExw0pRaUrscaouxVyj
 cZuRupxnNznEk3P+lSZIsS/HO2RNSA6XIy1sZpWilGpqq0nO9BkX2Mu2YtGlTF2Arj4g=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbh3D-0005Oj-JZ; Thu, 21 May 2020 08:59:39 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbh3D-0000wi-9G; Thu, 21 May 2020 08:59:39 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
Date: Thu, 21 May 2020 09:59:29 +0100
Message-Id: <20200521085932.10508-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521085932.10508-1-paul@xen.org>
References: <20200521085932.10508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These domctls provide a mechanism to get and set domain context from
the toolstack.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v4:
 - Add missing zero pad checks

v3:
 - Addressed comments from Julien and Jan
 - Use vmalloc() rather than xmalloc_bytes()

v2:
 - drop mask parameter
 - const-ify some more buffers
---
 tools/flask/policy/modules/xen.if   |   4 +-
 tools/libxc/include/xenctrl.h       |   5 +
 tools/libxc/xc_domain.c             |  56 +++++++++
 xen/common/domctl.c                 | 173 ++++++++++++++++++++++++++++
 xen/include/public/domctl.h         |  41 +++++++
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   4 +
 7 files changed, 287 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 8eb2293a52..2bc9db4f64 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -53,7 +53,7 @@ define(`create_domain_common', `
 	allow $1 $2:domain2 { set_cpu_policy settsc setscheduler setclaim
 			set_vnumainfo get_vnumainfo cacheflush
 			psr_cmt_op psr_alloc soft_reset
-			resource_map get_cpu_policy };
+			resource_map get_cpu_policy setcontext };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -97,7 +97,7 @@ define(`migrate_domain_out', `
 	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
-	allow $1 $2:domain2 gettsc;
+	allow $1 $2:domain2 { gettsc getcontext };
 	allow $1 $2:shadow { enable disable logdirty };
 ')
 
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..0ce2372e2f 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -867,6 +867,11 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
                              uint8_t *hvm_ctxt,
                              uint32_t size);
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size);
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size);
+
 /**
  * This function will return guest IO ABI protocol
  *
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 71829c2bce..e462a6f728 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -537,6 +537,62 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
     return ret;
 }
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_getdomaincontext,
+        .domain = domid,
+        .u.getdomaincontext.size = *size,
+    };
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, *size, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.getdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    if ( ret )
+        return ret;
+
+    if ( domctl.u.getdomaincontext.size > *size )
+    {
+        errno = EOVERFLOW;
+        return -1;
+    }
+    *size = domctl.u.getdomaincontext.size;
+
+    return 0;
+}
+
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_setdomaincontext,
+        .domain = domid,
+        .u.setdomaincontext.size = size,
+    };
+    DECLARE_HYPERCALL_BOUNCE_IN(ctxt_buf, size);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.setdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    return ret;
+}
+
 int xc_vcpu_getcontext(xc_interface *xch,
                        uint32_t domid,
                        uint32_t vcpu,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a69b3b59a8..44758034a6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -25,6 +25,8 @@
 #include <xen/hypercall.h>
 #include <xen/vm_event.h>
 #include <xen/monitor.h>
+#include <xen/save.h>
+#include <xen/vmap.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -358,6 +360,168 @@ static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
     return ERR_PTR(ret);
 }
 
+struct domctl_context
+{
+    void *buffer;
+    struct domain_save_descriptor *desc;
+    size_t len;
+    size_t cur;
+};
+
+static int dry_run_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len + len < c->len )
+        return -EOVERFLOW;
+
+    c->len += len;
+
+    return 0;
+}
+
+static int dry_run_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    return dry_run_append(priv, NULL, sizeof(*desc));
+}
+
+static int dry_run_end(void *priv, size_t len)
+{
+    return 0;
+}
+
+static struct domain_save_ops dry_run_ops = {
+    .begin = dry_run_begin,
+    .append = dry_run_append,
+    .end = dry_run_end,
+};
+
+static int save_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < sizeof(*desc) )
+        return -ENOSPC;
+
+    c->desc = c->buffer + c->cur; /* stash pointer to descriptor */
+    *c->desc = *desc;
+
+    c->cur += sizeof(*desc);
+
+    return 0;
+}
+
+static int save_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENOSPC;
+
+    memcpy(c->buffer + c->cur, data, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static int save_end(void *priv, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    c->desc->length = len;
+
+    return 0;
+}
+
+static struct domain_save_ops save_ops = {
+    .begin = save_begin,
+    .append = save_append,
+    .end = save_end,
+};
+
+static int getdomaincontext(struct domain *d,
+                            struct xen_domctl_getdomaincontext *gdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( gdc->pad )
+        return -EINVAL;
+
+    if ( guest_handle_is_null(gdc->buffer) ) /* query for buffer size */
+    {
+        if ( gdc->size )
+            return -EINVAL;
+
+        /* dry run to acquire buffer size */
+        rc = domain_save(d, &dry_run_ops, &c, true);
+        if ( rc )
+            return rc;
+
+        gdc->size = c.len;
+        return 0;
+    }
+
+    c.len = gdc->size;
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = domain_save(d, &save_ops, &c, false);
+
+    gdc->size = c.cur;
+    if ( !rc && copy_to_guest(gdc->buffer, c.buffer, gdc->size) )
+        rc = -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
+static int load_read(void *priv, void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENODATA;
+
+    memcpy(data, c->buffer + c->cur, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static struct domain_load_ops load_ops = {
+    .read = load_read,
+};
+
+static int setdomaincontext(struct domain *d,
+                            const struct xen_domctl_setdomaincontext *sdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR, .len = sdc->size };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( sdc->pad )
+        return -EINVAL;
+
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = !copy_from_guest(c.buffer, sdc->buffer, c.len) ?
+        domain_load(d, &load_ops, &c) : -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
@@ -942,6 +1106,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
+    case XEN_DOMCTL_getdomaincontext:
+        ret = getdomaincontext(d, &op->u.getdomaincontext);
+        copyback = !ret;
+        break;
+
+    case XEN_DOMCTL_setdomaincontext:
+        ret = setdomaincontext(d, &op->u.setdomaincontext);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 1ad34c35eb..1b133bda59 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1129,6 +1129,43 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/*
+ * XEN_DOMCTL_getdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer into which the context data should be
+ *                copied, or NULL to query the buffer size that should
+ *                be allocated.
+ * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
+ *                zero, and the value passed out will be the size of the
+ *                buffer to allocate.
+ *                If 'buffer' is non-NULL then the value passed in must
+ *                be the size of the buffer into which data may be copied.
+ *                The value passed out will be the size of data written.
+ */
+struct xen_domctl_getdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(void) buffer;
+};
+
+/* XEN_DOMCTL_setdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer from which the context data should be
+ *                copied.
+ * size (IN):     The size of the buffer from which data may be copied.
+ *                This data must include DOMAIN_SAVE_CODE_HEADER at the
+ *                start and terminate with a DOMAIN_SAVE_CODE_END record.
+ *                Any data beyond the DOMAIN_SAVE_CODE_END record will be
+ *                ignored.
+ */
+struct xen_domctl_setdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(const_void) buffer;
+};
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1210,6 +1247,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_getdomaincontext              84
+#define XEN_DOMCTL_setdomaincontext              85
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1270,6 +1309,8 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_getdomaincontext  getdomaincontext;
+        struct xen_domctl_setdomaincontext  setdomaincontext;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4649e6fd95..6f3db276ef 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -745,6 +745,12 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_get_cpu_policy:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CPU_POLICY);
 
+    case XEN_DOMCTL_setdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SETCONTEXT);
+
+    case XEN_DOMCTL_getdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GETCONTEXT);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c055c14c26..fccfb9de82 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -245,6 +245,10 @@ class domain2
     resource_map
 # XEN_DOMCTL_get_cpu_policy
     get_cpu_policy
+# XEN_DOMCTL_setdomaincontext
+    setcontext
+# XEN_DOMCTL_getdomaincontext
+    getcontext
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 09:00:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 09:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbh3V-00013n-2p; Thu, 21 May 2020 08:59:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbh3U-000139-7q
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 08:59:56 +0000
X-Inumbo-ID: 692ef2ad-9b41-11ea-aae7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 692ef2ad-9b41-11ea-aae7-12813bfff9fa;
 Thu, 21 May 2020 08:59:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+ncaDSHzhJAOarT0i8BUB/EZXUJTeKmdCE4wfOo/qYs=; b=y7iy3JT9IYHCAd1v0dGgqFwWgI
 Qw0YjPT7IzWxhTqUBwSmSRsBW5Wg+Rk+rkBDUiU+H8Mxcg3fiqgY1pKxDx8gjUTcSZvEtHenUfeXJ
 x5YHq30EgtfVP6jSBk1LEVmJ7CA4n+fBwmRY6iF9cNe3sgTpdNndejef4OtsDe4Vqt2o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbh3H-0005P5-6J; Thu, 21 May 2020 08:59:43 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbh3G-0000wi-TY; Thu, 21 May 2020 08:59:43 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v4 5/5] tools/libxc: make use of domain context SHARED_INFO
 record...
Date: Thu, 21 May 2020 09:59:32 +0100
Message-Id: <20200521085932.10508-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521085932.10508-1-paul@xen.org>
References: <20200521085932.10508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... in the save/restore code.

This patch replaces direct mapping of the shared_info_frame (retrieved
using XEN_DOMCTL_getdomaininfo) with save/load of the domain context
SHARED_INFO record.

No modifications are made to the definition of the migration stream at
this point. Subsequent patches will define a record in the libxc domain
image format for passing domain context and convert the save/restore code
to use that.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>

v3:
 - Moved basic get/set domain context functions to common code

v2:
 - Re-based (now making use of DOMAIN_SAVE_FLAG_IGNORE)
---
 tools/libxc/xc_sr_common.c         | 67 +++++++++++++++++++++++++++
 tools/libxc/xc_sr_common.h         | 11 ++++-
 tools/libxc/xc_sr_common_x86_pv.c  | 74 ++++++++++++++++++++++++++++++
 tools/libxc/xc_sr_common_x86_pv.h  |  3 ++
 tools/libxc/xc_sr_restore_x86_pv.c | 26 ++++-------
 tools/libxc/xc_sr_save_x86_pv.c    | 43 ++++++++---------
 tools/libxc/xg_save_restore.h      |  1 +
 7 files changed, 181 insertions(+), 44 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index dd9a11b4b5..1acb3765aa 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -138,6 +138,73 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
     return 0;
 };
 
+int get_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    size_t len = 0;
+    int rc;
+
+    if ( ctx->domain_context.buffer )
+    {
+        ERROR("Domain context already present");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get size of domain context");
+        return -1;
+    }
+
+    ctx->domain_context.buffer = malloc(len);
+    if ( !ctx->domain_context.buffer )
+    {
+        PERROR("Unable to allocate memory for domain context");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                              &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get domain context");
+        return -1;
+    }
+
+    ctx->domain_context.len = len;
+
+    return 0;
+}
+
+int set_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    int rc;
+
+    if ( !ctx->domain_context.buffer )
+    {
+        ERROR("Domain context not present");
+        return -1;
+    }
+
+    rc = xc_domain_setcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                              ctx->domain_context.len);
+
+    if ( rc < 0 )
+    {
+        PERROR("Unable to set domain context");
+        return -1;
+    }
+
+    return 0;
+}
+
+void common_cleanup(struct xc_sr_context *ctx)
+{
+    free(ctx->domain_context.buffer);
+}
+
 static void __attribute__((unused)) build_assertions(void)
 {
     BUILD_BUG_ON(sizeof(struct xc_sr_ihdr) != 24);
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 5dd51ccb15..0d61978b08 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -208,6 +208,11 @@ struct xc_sr_context
 
     xc_dominfo_t dominfo;
 
+    struct {
+        void *buffer;
+        unsigned int len;
+    } domain_context;
+
     union /* Common save or restore data. */
     {
         struct /* Save data. */
@@ -314,7 +319,7 @@ struct xc_sr_context
                 /* The guest pfns containing the p2m leaves */
                 xen_pfn_t *p2m_pfns;
 
-                /* Read-only mapping of guests shared info page */
+                /* Pointer to shared_info (located in context buffer) */
                 shared_info_any_t *shinfo;
 
                 /* p2m generation count for verifying validity of local p2m. */
@@ -425,6 +430,10 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
 int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
                   const xen_pfn_t *original_pfns, const uint32_t *types);
 
+int get_domain_context(struct xc_sr_context *ctx);
+int set_domain_context(struct xc_sr_context *ctx);
+void common_cleanup(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_common_x86_pv.c b/tools/libxc/xc_sr_common_x86_pv.c
index d3d425cb82..69d9b142b8 100644
--- a/tools/libxc/xc_sr_common_x86_pv.c
+++ b/tools/libxc/xc_sr_common_x86_pv.c
@@ -182,6 +182,80 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
     return rc;
 }
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int off = 0;
+    int rc;
+
+#define GET_PTR(_x)                                                         \
+    do {                                                                    \
+        if ( ctx->domain_context.len - off < sizeof(*(_x)) )                \
+        {                                                                   \
+            ERROR("Need another %lu bytes of context, only %u available\n", \
+                  sizeof(*(_x)), ctx->domain_context.len - off);            \
+            return -1;                                                      \
+        }                                                                   \
+        (_x) = ctx->domain_context.buffer + off;                            \
+    } while (false);
+
+    rc = get_domain_context(ctx);
+    if ( rc )
+        return rc;
+
+    for ( ; ; )
+    {
+        struct domain_save_descriptor *desc;
+
+        GET_PTR(desc);
+
+        off += sizeof(*desc);
+
+        switch (desc->typecode)
+        {
+        case DOMAIN_SAVE_CODE(SHARED_INFO):
+        {
+            DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+
+            GET_PTR(s);
+
+            ctx->x86.pv.shinfo = (shared_info_any_t *)s->buffer;
+            break;
+        }
+        default:
+            break;
+        }
+
+        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
+            break;
+
+        off += desc->length;
+    }
+
+    if ( !ctx->x86.pv.shinfo )
+    {
+        ERROR("Failed to get SHARED_INFO\n");
+        return -1;
+    }
+
+    return 0;
+
+#undef GET_PTR
+}
+
+int x86_pv_set_shinfo(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( !ctx->x86.pv.shinfo )
+    {
+        ERROR("SHARED_INFO buffer not present\n");
+        return -1;
+    }
+
+    return set_domain_context(ctx);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_sr_common_x86_pv.h b/tools/libxc/xc_sr_common_x86_pv.h
index 2ed03309af..01442f48fb 100644
--- a/tools/libxc/xc_sr_common_x86_pv.h
+++ b/tools/libxc/xc_sr_common_x86_pv.h
@@ -97,6 +97,9 @@ int x86_pv_domain_info(struct xc_sr_context *ctx);
  */
 int x86_pv_map_m2p(struct xc_sr_context *ctx);
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx);
+int x86_pv_set_shinfo(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index 904ccc462a..21982a38ad 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -865,7 +865,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
     xc_interface *xch = ctx->xch;
     unsigned int i;
     int rc = -1;
-    shared_info_any_t *guest_shinfo = NULL;
+    shared_info_any_t *guest_shinfo;
     const shared_info_any_t *old_shinfo = rec->data;
 
     if ( !ctx->x86.pv.restore.seen_pv_info )
@@ -878,18 +878,14 @@ static int handle_shared_info(struct xc_sr_context *ctx,
     {
         ERROR("X86_PV_SHARED_INFO record wrong size: length %u"
               ", expected 4096", rec->length);
-        goto err;
+        return -1;
     }
 
-    guest_shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
-        ctx->dominfo.shared_info_frame);
-    if ( !guest_shinfo )
-    {
-        PERROR("Failed to map Shared Info at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        goto err;
-    }
+    rc = x86_pv_get_shinfo(ctx);
+    if ( rc )
+        return rc;
+
+    guest_shinfo = ctx->x86.pv.shinfo;
 
     MEMCPY_FIELD(guest_shinfo, old_shinfo, vcpu_info, ctx->x86.pv.width);
     MEMCPY_FIELD(guest_shinfo, old_shinfo, arch, ctx->x86.pv.width);
@@ -904,13 +900,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
 
     MEMSET_ARRAY_FIELD(guest_shinfo, evtchn_mask, 0xff, ctx->x86.pv.width);
 
-    rc = 0;
-
- err:
-    if ( guest_shinfo )
-        munmap(guest_shinfo, PAGE_SIZE);
-
-    return rc;
+    return x86_pv_set_shinfo(ctx);
 }
 
 /* restore_ops function. */
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index f3ccf5bb4b..bf87789340 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -9,25 +9,6 @@ static inline bool is_canonical_address(xen_vaddr_t vaddr)
     return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
 }
 
-/*
- * Maps the guests shared info page.
- */
-static int map_shinfo(struct xc_sr_context *ctx)
-{
-    xc_interface *xch = ctx->xch;
-
-    ctx->x86.pv.shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
-    if ( !ctx->x86.pv.shinfo )
-    {
-        PERROR("Failed to map shared info frame at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        return -1;
-    }
-
-    return 0;
-}
-
 /*
  * Copy a list of mfns from a guest, accounting for differences between guest
  * and toolstack width.  Can fail if truncation would occur.
@@ -854,13 +835,26 @@ static int write_x86_pv_p2m_frames(struct xc_sr_context *ctx)
  */
 static int write_shared_info(struct xc_sr_context *ctx)
 {
+    xc_interface *xch = ctx->xch;
     struct xc_sr_record rec = {
         .type = REC_TYPE_SHARED_INFO,
         .length = PAGE_SIZE,
-        .data = ctx->x86.pv.shinfo,
     };
+    int rc;
 
-    return write_record(ctx, &rec);
+    if ( !(rec.data = calloc(1, PAGE_SIZE)) )
+    {
+        ERROR("Cannot allocate buffer for SHARED_INFO data");
+        return -1;
+    }
+
+    memcpy(rec.data, ctx->x86.pv.shinfo, sizeof(*ctx->x86.pv.shinfo));
+
+    rc = write_record(ctx, &rec);
+
+    free(rec.data);
+
+    return rc;
 }
 
 /*
@@ -1041,7 +1035,7 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
     if ( rc )
         return rc;
 
-    rc = map_shinfo(ctx);
+    rc = x86_pv_get_shinfo(ctx);
     if ( rc )
         return rc;
 
@@ -1112,12 +1106,11 @@ static int x86_pv_cleanup(struct xc_sr_context *ctx)
     if ( ctx->x86.pv.p2m )
         munmap(ctx->x86.pv.p2m, ctx->x86.pv.p2m_frames * PAGE_SIZE);
 
-    if ( ctx->x86.pv.shinfo )
-        munmap(ctx->x86.pv.shinfo, PAGE_SIZE);
-
     if ( ctx->x86.pv.m2p )
         munmap(ctx->x86.pv.m2p, ctx->x86.pv.nr_m2p_frames * PAGE_SIZE);
 
+    common_cleanup(ctx);
+
     return 0;
 }
 
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index 303081df0d..296b523963 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -19,6 +19,7 @@
 
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
+#include <xen/save.h>
 
 /*
 ** We process save/restore/migrate in batches of pages; the below
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 09:05:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 09:05:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbh8O-0002J7-M8; Thu, 21 May 2020 09:05:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6/I=7D=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jbh8N-0002J2-Uq
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 09:04:59 +0000
X-Inumbo-ID: 25005354-9b42-11ea-9887-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25005354-9b42-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 09:04:58 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ZxisDAJGiEEQi+wn7XLXo4t7iJDISE4CYylorTX8S37foyF5JiulHCVRVgMB4ZmwIqM+w2vYsl
 0jM6loOkmIbvYZFK/CpsQzUhCCAElw9TojHcuBLAcGhAUi50y08+zvCPzu4eF2WFlagt2DaUdv
 fjuTgBXnAUN+romk8ZsHS8vf/7sP7a5q9KHeWvN0Przo+0Cfe6yEc3hDo4+DFlKR33WkalJ7mL
 thw31QDtDcQuorTBVUHycblNk3FIxsB3WlpzlsZTWiVQc7UGtHA3JToHuL/VZTViGGbfSrx3Oa
 TBA=
X-SBRS: 2.7
X-MesageID: 18780728
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18780728"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
Date: Thu, 21 May 2020 10:04:28 +0100
Message-ID: <20200521090428.11425-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When compiling with SHOPT_OUT_OF_SYNC disabled, the build fails with:

  common.c:41:12: error: ‘sh_remove_write_access_from_sl1p’ declared ‘static’ but never defined [-Werror=unused-function]
   static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
              ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

due to an unguarded forward declaration.

It turns out there is no need to forward declare
sh_remove_write_access_from_sl1p() to begin with, so move it to just ahead of
its first user, which is within a larger #ifdef'd SHOPT_OUT_OF_SYNC block.

Fix up for style while moving it.  No functional change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Tim Deegan <tim@xen.org>
---
 xen/arch/x86/mm/shadow/common.c | 56 ++++++++++++++++++-----------------------
 1 file changed, 25 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
index 0ac3f880e1..6dff240e97 100644
--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -38,9 +38,6 @@
 #include <xen/numa.h>
 #include "private.h"
 
-static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
-                                            mfn_t smfn, unsigned long offset);
-
 DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
 
 static int sh_enable_log_dirty(struct domain *, bool log_global);
@@ -252,6 +249,31 @@ static inline void _sh_resync_l1(struct vcpu *v, mfn_t gmfn, mfn_t snpmfn)
         SHADOW_INTERNAL_NAME(sh_resync_l1, 4)(v, gmfn, snpmfn);
 }
 
+static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
+                                            mfn_t smfn, unsigned long off)
+{
+    struct page_info *sp = mfn_to_page(smfn);
+
+    ASSERT(mfn_valid(smfn));
+    ASSERT(mfn_valid(gmfn));
+
+    if ( sp->u.sh.type == SH_type_l1_32_shadow ||
+         sp->u.sh.type == SH_type_fl1_32_shadow )
+        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 2)
+            (d, gmfn, smfn, off);
+
+    if ( sp->u.sh.type == SH_type_l1_pae_shadow ||
+         sp->u.sh.type == SH_type_fl1_pae_shadow )
+        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 3)
+            (d, gmfn, smfn, off);
+
+    if ( sp->u.sh.type == SH_type_l1_64_shadow ||
+         sp->u.sh.type == SH_type_fl1_64_shadow )
+        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 4)
+            (d, gmfn, smfn, off);
+
+    return 0;
+}
 
 /*
  * Fixup arrays: We limit the maximum number of writable mappings to
@@ -2001,34 +2023,6 @@ int sh_remove_write_access(struct domain *d, mfn_t gmfn,
 }
 #endif /* CONFIG_HVM */
 
-#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
-static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
-                                            mfn_t smfn, unsigned long off)
-{
-    struct page_info *sp = mfn_to_page(smfn);
-
-    ASSERT(mfn_valid(smfn));
-    ASSERT(mfn_valid(gmfn));
-
-    if ( sp->u.sh.type == SH_type_l1_32_shadow
-         || sp->u.sh.type == SH_type_fl1_32_shadow )
-    {
-        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p,2)
-            (d, gmfn, smfn, off);
-    }
-    else if ( sp->u.sh.type == SH_type_l1_pae_shadow
-              || sp->u.sh.type == SH_type_fl1_pae_shadow )
-        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p,3)
-            (d, gmfn, smfn, off);
-    else if ( sp->u.sh.type == SH_type_l1_64_shadow
-              || sp->u.sh.type == SH_type_fl1_64_shadow )
-        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p,4)
-            (d, gmfn, smfn, off);
-
-    return 0;
-}
-#endif
-
 /**************************************************************************/
 /* Remove all mappings of a guest frame from the shadow tables.
  * Returns non-zero if we need to flush TLBs. */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu May 21 09:23:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 09:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbhQK-000425-7A; Thu, 21 May 2020 09:23:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S6UB=7D=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbhQI-000420-NU
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 09:23:30 +0000
X-Inumbo-ID: baa57a7c-9b44-11ea-aaeb-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id baa57a7c-9b44-11ea-aaeb-12813bfff9fa;
 Thu, 21 May 2020 09:23:28 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: kyRAjqcOd+fWqAf3a2AmUTJbOE4pK11IFSZLY/WDgzlKb/gif3ZyZhxKLrnWfMbdk6B7sZBw4v
 kzpgDXp4skGPCoN9LmL2JJfWCupc55sylwoKNqM5V/2jb8+GKgfhiEhgguBJI1YAXA3BCOjqht
 dMnTLoRt/IZbfXbGgs+jSBUMFMdzPxX1Hnf+aKiBJ7VtjPyKr9bfwSrf81ywj6ghMbvrlITC3v
 4aBQ4GDAJ0pOHc0eWGNmCB6nkeaNswSCT3hmArQpRN8vbYk2Wby9WCR0IcIyNRY73nA9riwQQS
 qMM=
X-SBRS: 2.7
X-MesageID: 18329862
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18329862"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v4] x86/idle: prevent entering C6 with in service interrupts
 on Intel
Date: Thu, 21 May 2020 11:22:58 +0200
Message-ID: <20200521092258.82503-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
Dispatched Before an Interrupt of The Same Priority Completes".

Apply the errata to all server and client models (big cores) from
Broadwell to Cascade Lake. The workaround is grouped together with the
existing fix for errata AAJ72, and "eoi" is dropped from the function
name.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Remove command line option.
 - Switch order of static vs const.

Changes since v2:
 - Use x86_match_cpu and apply the workaround to all models from
   Broadwell to Cascade Lake.
 - Rename command line option to disable-c6-errata.

Changes since v1:
 - Unify workaround with errata_c6_eoi_workaround.
 - Properly check state in both acpi and mwait drivers.
---
 xen/arch/x86/acpi/cpu_idle.c  | 38 +++++++++++++++++++++++++++++++----
 xen/arch/x86/cpu/mwait-idle.c |  2 +-
 xen/include/asm-x86/cpuidle.h |  2 +-
 3 files changed, 36 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 82f108d301..178cb607c2 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -548,7 +548,7 @@ void trace_exit_reason(u32 *irq_traced)
     }
 }
 
-bool errata_c6_eoi_workaround(void)
+bool errata_c6_workaround(void)
 {
     static int8_t __read_mostly fix_needed = -1;
 
@@ -573,10 +573,40 @@ bool errata_c6_eoi_workaround(void)
             INTEL_FAM6_MODEL(0x2f),
             { }
         };
+        /*
+         * Errata BDX99, CLX30, SKX100, CFW125, BDF104, BDH85, BDM135, KWB131:
+         * A Pending Fixed Interrupt May Be Dispatched Before an Interrupt of
+         * The Same Priority Completes.
+         *
+         * Resuming from C6 Sleep-State, with Fixed Interrupts of the same
+         * priority queued (in the corresponding bits of the IRR and ISR APIC
+         * registers), the processor may dispatch the second interrupt (from
+         * the IRR bit) before the first interrupt has completed and written to
+         * the EOI register, causing the first interrupt to never complete.
+         */
+        static const struct x86_cpu_id isr_errata[] = {
+            /* Broadwell */
+            INTEL_FAM6_MODEL(0x47),
+            INTEL_FAM6_MODEL(0x3d),
+            INTEL_FAM6_MODEL(0x4f),
+            INTEL_FAM6_MODEL(0x56),
+            /* Skylake (client) */
+            INTEL_FAM6_MODEL(0x5e),
+            INTEL_FAM6_MODEL(0x4e),
+            /* {Sky/Cascade}lake (server) */
+            INTEL_FAM6_MODEL(0x55),
+            /* {Kaby/Coffee/Whiskey/Amber} Lake */
+            INTEL_FAM6_MODEL(0x9e),
+            INTEL_FAM6_MODEL(0x8e),
+            /* Cannon Lake */
+            INTEL_FAM6_MODEL(0x66),
+            { }
+        };
 #undef INTEL_FAM6_MODEL
 
-        fix_needed = cpu_has_apic && !directed_eoi_enabled &&
-                     x86_match_cpu(eoi_errata);
+        fix_needed = cpu_has_apic &&
+                     ((!directed_eoi_enabled && x86_match_cpu(eoi_errata)) ||
+                      x86_match_cpu(isr_errata));
     }
 
     return (fix_needed && cpu_has_pending_apic_eoi());
@@ -685,7 +715,7 @@ static void acpi_processor_idle(void)
         return;
     }
 
-    if ( (cx->type >= ACPI_STATE_C3) && errata_c6_eoi_workaround() )
+    if ( (cx->type >= ACPI_STATE_C3) && errata_c6_workaround() )
         cx = power->safe_state;
 
 
diff --git a/xen/arch/x86/cpu/mwait-idle.c b/xen/arch/x86/cpu/mwait-idle.c
index 88a3e160c5..52eab81bf8 100644
--- a/xen/arch/x86/cpu/mwait-idle.c
+++ b/xen/arch/x86/cpu/mwait-idle.c
@@ -770,7 +770,7 @@ static void mwait_idle(void)
 		return;
 	}
 
-	if ((cx->type >= 3) && errata_c6_eoi_workaround())
+	if ((cx->type >= 3) && errata_c6_workaround())
 		cx = power->safe_state;
 
 	eax = cx->address;
diff --git a/xen/include/asm-x86/cpuidle.h b/xen/include/asm-x86/cpuidle.h
index 51368694dc..0981a8fd64 100644
--- a/xen/include/asm-x86/cpuidle.h
+++ b/xen/include/asm-x86/cpuidle.h
@@ -26,6 +26,6 @@ void update_idle_stats(struct acpi_processor_power *,
 void update_last_cx_stat(struct acpi_processor_power *,
                          struct acpi_processor_cx *, uint64_t);
 
-bool errata_c6_eoi_workaround(void);
+bool errata_c6_workaround(void);
 
 #endif /* __X86_ASM_CPUIDLE_H__ */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 21 09:47:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 09:47:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbhnS-0005qU-1e; Thu, 21 May 2020 09:47:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S6UB=7D=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbhnQ-0005qO-UV
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 09:47:24 +0000
X-Inumbo-ID: 11949196-9b48-11ea-aaec-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11949196-9b48-11ea-aaec-12813bfff9fa;
 Thu, 21 May 2020 09:47:23 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: T9lQX/REEirGqbQZzQ6/9OzM3tXUks4W10u7PepyFJvggWXvRcoOoh1wMySRTWkhBmiKJ2yxg3
 m6HYmkDseFUzaMYVFDk3pTIwzgl9V4x6jCed+u3oE5BH2Tn0vfOPjWMbYbYPcRqK+iqdxRTGqK
 T/N3dKWEiF2YJr6g7m5aPo+iJI4Fe1ysuF5JxzefxARj++NNuN431aYB/fd5oM5lwYRZ52u+HB
 yRxeTh9ZfkrcScqmZqQ411VTOITFh9v0+jGuOLBZ3BnH0HEeo3/TFz8ORvHLtNmkX9kTbQYmdC
 Q3I=
X-SBRS: 2.7
X-MesageID: 18062843
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18062843"
Date: Thu, 21 May 2020 11:47:17 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v5] x86: clear RDRAND CPUID bit on AMD family 15h/16h
Message-ID: <20200521094717.GQ54375@Air-de-Roger>
References: <4f76749b-54bd-7c39-6c90-279ce25cb57c@suse.com>
 <e2cf9d9e-492d-fa24-0e9c-bf62b6704b34@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e2cf9d9e-492d-fa24-0e9c-bf62b6704b34@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 20, 2020 at 11:17:25PM +0100, Andrew Cooper wrote:
> On 18/05/2020 14:19, Jan Beulich wrote:
> > Inspired by Linux commit c49a0a80137c7ca7d6ced4c812c9e07a949f6f24:
> >
> >     There have been reports of RDRAND issues after resuming from suspend on
> >     some AMD family 15h and family 16h systems. This issue stems from a BIOS
> >     not performing the proper steps during resume to ensure RDRAND continues
> >     to function properly.
> >
> >     Update the CPU initialization to clear the RDRAND CPUID bit for any family
> >     15h and 16h processor that supports RDRAND. If it is known that the family
> >     15h or family 16h system does not have an RDRAND resume issue or that the
> >     system will not be placed in suspend, the "cpuid=rdrand" kernel parameter
> >     can be used to stop the clearing of the RDRAND CPUID bit.
> >
> >     Note, that clearing the RDRAND CPUID bit does not prevent a processor
> >     that normally supports the RDRAND instruction from executing it. So any
> >     code that determined the support based on family and model won't #UD.
> >
> > Warn if no explicit choice was given on affected hardware.
> >
> > Check RDRAND functions at boot as well as after S3 resume (the retry
> > limit chosen is entirely arbitrary).
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> > ---
> > Still slightly RFC, and still in particular because of the change to
> > parse_xen_cpuid(): Alternative approach suggestions are welcome. But now
> > also because with many CPUs there may now be a lot of warnings in case
> > of issues.
> 
> It would still be nice if we could find a better way of determining
> whether S3 is supported on the platform, which would at least let us
> sort server and client platforms.
> 
> A straight string search for _S3 in the DSDT does look to be effective,
> on a sample of 5 boxes I've tried.

Hm, that's an interesting idea. There's also the _S3D device method
that could give a false positive? (ie: a device having a _S3D method
even when the DSDT doesn't have the _S3 method)

Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 21 09:50:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 09:50:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbhpz-0006Cv-J6; Thu, 21 May 2020 09:50:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nFaT=7D=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jbhpy-00062p-F0
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 09:50:02 +0000
X-Inumbo-ID: 702eddc2-9b48-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 702eddc2-9b48-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 09:50:01 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id e1so6038020wrt.5
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 02:50:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:subject:date:message-id:mime-version
 :content-transfer-encoding:content-language:thread-index;
 bh=+d0IzG1+YMWC+2mUHTbCIH/IWYDliQDZXZ1jnDEbUKU=;
 b=n17iZNNS/P4KQ7TUGm3qXYLnlNr1E0OsfexmAM5Cn8DZ2Ad0tUoZ5QvrNERab3h/bH
 RXZhw4gVamKLslmZliajYt1XOY5FisjOqfwRNy4qk8R/UJ6r1FG5+toUjcGSjfgZhcm7
 /GNgPlGYTrLq/9iq9mVEMVM+e/yd02ZpwrDG3F0EnEoNB5VHj9urt/bZ9c+iVfe8+4f6
 LSRTaoLzXMptajLSsYSgEy5GmWd4GdLhDllZWzfOw1miZK7c8oXX1pzDx41+1xB1MRK6
 5FGG19wJsMzOOrbr+sLQVfYOR9DxLmqcBnwgR4s9+O9oNbebBE5n9pswzcaY7D5/CDTN
 B9/Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index;
 bh=+d0IzG1+YMWC+2mUHTbCIH/IWYDliQDZXZ1jnDEbUKU=;
 b=sccpnChkRHqKYNcsjkAT8ajBIVLpXZg/edcjypinTVUHPDUXPWh2LerIRHDCDXZMyG
 6GMtqEVTYV6c8v16DzHVsKYnOIT1VCjEqIZYahDm3RtP0Pn15B+RMhB/XE7a8tise0fH
 rsA2sTPhdMEIOvxgUJ+FE4BFx+MTwxcKLrDj9BAajh1yGHY31MDA8dXnk9vjJ7CaVCMl
 Sb5+ruS6bf4hLeqHPe3/rfRqUwyrV3T90+tm2F02mwPR2wFZu4eLrUfy8SdAUj98OOFo
 JK8waL2ts6vfbx9adK467jBlTZ19lT/7U8zfOksCsyjW4OduOKsBqVGf/FJy6HwjGS8d
 B2lA==
X-Gm-Message-State: AOAM533KI/ECPr+dDSiOFNmaF1HCw33ZV3tGJ0XVls7V90ky8S319W23
 l13NJGeYowhB3z3jHy28kFbleK+lo6c=
X-Google-Smtp-Source: ABdhPJxLFYu5eq/e/TIEq65KOggQQHQwp9EHEw/981p7p2YUElM8rx621n1wXjIXRzaDhzua+R0tEg==
X-Received: by 2002:a5d:4b88:: with SMTP id b8mr7923109wrt.341.1590054600438; 
 Thu, 21 May 2020 02:50:00 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id t6sm6191567wma.4.2020.05.21.02.49.59
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 21 May 2020 02:49:59 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: <xen-devel@lists.xenproject.org>
Subject: Freeze date for Xen 4.14
Date: Thu, 21 May 2020 10:49:58 +0100
Message-ID: <003d01d62f55$31437b40$93ca71c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AdYvU+RMtJio/i3kSdu9RNELiAmwvw==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

  Code freeze for 4.14 is currently scheduled for CoB tomorrow (Friday May 22nd); however, there is a public holiday today in many
countries, meaning that a few maintainers and committers are not available. I am also aware of a few pending series that are close to
being fully acked but not quite there yet. As release manager I therefore think it is prudent to slip the freeze date by one
week, to Friday May 29th. The release, however, remains scheduled for Friday June 26th.

  Paul Durrant



From xen-devel-bounces@lists.xenproject.org Thu May 21 10:13:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 10:13:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbiCD-00006M-BX; Thu, 21 May 2020 10:13:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nFaT=7D=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jbiCC-00006H-Js
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 10:13:00 +0000
X-Inumbo-ID: a591a190-9b4b-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x542.google.com (unknown [2a00:1450:4864:20::542])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a591a190-9b4b-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 10:12:59 +0000 (UTC)
Received: by mail-ed1-x542.google.com with SMTP id g9so6284396edw.10
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 03:12:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=xTOAQMdFJ8ZQa7G8tNEiJrlhqwWuXNce7Xeryge4dCY=;
 b=TgVmMm0sZduOIJ27pb2jz92xZDTCtvZyEQNTPqkYWpXCAtWnVZzDyAcf5vYT1scEXJ
 1lz7D/JAPAvqi7i1bd+k/+Q1+Aaz5q+NQplbqfGYIw3zoqP5VDSSMRkyAgebB4UbnvXn
 ggezOJc8/FB7psmg87hNSMu8lvHWkm6yCxKZGdjqG8kcKPozdT9UAlZc6+KRfFqaf9hE
 gqS84orfky+gwH+F2N6ZUXpmSs8gJtWJvmuZUR/GlmpVE1Y3doCqSd7ScYqt5dF4yGVG
 pMySmgzykp6EW9vY3sBISM/XObVSAcliqV2T/JYQU4Aq9INhMSrw0ezko8wFlL+z0qFR
 961g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=xTOAQMdFJ8ZQa7G8tNEiJrlhqwWuXNce7Xeryge4dCY=;
 b=X5nbchDwMiyh9dBMn5i7j31IZRoYleUVFZjtNmIn9cBvroCmJe8GnIlCrYvN1w7695
 Y+w3/feos8aMG41HVA4QqNE0dLEwz00v+mAvQMH5wprHhfcVZEPCRjjDIicEA8+EUvvt
 1+zzU1IBgQjAx/Rf3v00kYxVTcmUmBHVx0XT/Jpm2a+chyGKS1SKlT7xfN+WEaWR5BP/
 bXu/UqcZAimkh8g8otTU9B7DsMz9lBX1q0SCf+josEOshmRqPuBJRxXORlT6J5pREHi7
 r97tElUp7j8G1KcmH/SXRhOj4nnjIUo5Q51aTrUyoCeNXmDXDzZZh+ctJNNywrEPgkTd
 ojnQ==
X-Gm-Message-State: AOAM530/9yjpnq6GrlL8aVHk42TqYLIGroCyQDj7vflR0ot02zPo6eRf
 fSD/hSdmTSLV4d8vKiClBPg=
X-Google-Smtp-Source: ABdhPJybgDobutCpQyXSEUrAQNSmHWCs161JKh8wLT47yfnEVBYIilZ7dWrA7pqBD/v17gIeYEY6yg==
X-Received: by 2002:a05:6402:348:: with SMTP id
 r8mr7025133edw.130.1590055978659; 
 Thu, 21 May 2020 03:12:58 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id v3sm4644534edd.16.2020.05.21.03.12.57
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 21 May 2020 03:12:58 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Andrew Cooper'" <andrew.cooper3@citrix.com>,
 "'Xen-devel'" <xen-devel@lists.xenproject.org>
References: <20200521090428.11425-1-andrew.cooper3@citrix.com>
In-Reply-To: <20200521090428.11425-1-andrew.cooper3@citrix.com>
Subject: RE: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
Date: Thu, 21 May 2020 11:12:56 +0100
Message-ID: <003e01d62f58$66bbcea0$34336be0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 8bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQDdpNCtQ9xLcYIXGO2ueKFGd4o5OqqjZl1A
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Tim Deegan' <tim@xen.org>, 'Jan Beulich' <JBeulich@suse.com>,
 'Wei Liu' <wl@xen.org>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Andrew Cooper
> Sent: 21 May 2020 10:04
> To: Xen-devel <xen-devel@lists.xenproject.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Tim Deegan <tim@xen.org>; Wei Liu <wl@xen.org>; Jan
> Beulich <JBeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
>
> When compiling with SHOPT_OUT_OF_SYNC disabled, the build fails with:
>
>   common.c:41:12: error: ‘sh_remove_write_access_from_sl1p’ declared ‘static’ but never defined [-
> Werror=unused-function]
>    static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
>               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>
> due to an unguarded forward declaration.

Is this, perhaps, an argument for making SHADOW_OPTIMIZATIONS tunable via Kconfig so that randconfig could catch things like this?

  Paul

>
> It turns out there is no need to forward declare
> sh_remove_write_access_from_sl1p() to begin with, so move it to just ahead of
> its first users, which is within a larger #ifdef'd SHOPT_OUT_OF_SYNC block.
>
> Fix up for style while moving it.  No functional change.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Tim Deegan <tim@xen.org>
> ---
>  xen/arch/x86/mm/shadow/common.c | 56 ++++++++++++++++++-----------------------
>  1 file changed, 25 insertions(+), 31 deletions(-)
>
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 0ac3f880e1..6dff240e97 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -38,9 +38,6 @@
>  #include <xen/numa.h>
>  #include "private.h"
>
> -static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
> -                                            mfn_t smfn, unsigned long offset);
> -
>  DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
>
>  static int sh_enable_log_dirty(struct domain *, bool log_global);
> @@ -252,6 +249,31 @@ static inline void _sh_resync_l1(struct vcpu *v, mfn_t gmfn, mfn_t snpmfn)
>          SHADOW_INTERNAL_NAME(sh_resync_l1, 4)(v, gmfn, snpmfn);
>  }
>
> +static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
> +                                            mfn_t smfn, unsigned long off)
> +{
> +    struct page_info *sp = mfn_to_page(smfn);
> +
> +    ASSERT(mfn_valid(smfn));
> +    ASSERT(mfn_valid(gmfn));
> +
> +    if ( sp->u.sh.type == SH_type_l1_32_shadow ||
> +         sp->u.sh.type == SH_type_fl1_32_shadow )
> +        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 2)
> +            (d, gmfn, smfn, off);
> +
> +    if ( sp->u.sh.type == SH_type_l1_pae_shadow ||
> +         sp->u.sh.type == SH_type_fl1_pae_shadow )
> +        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 3)
> +            (d, gmfn, smfn, off);
> +
> +    if ( sp->u.sh.type == SH_type_l1_64_shadow ||
> +         sp->u.sh.type == SH_type_fl1_64_shadow )
> +        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p, 4)
> +            (d, gmfn, smfn, off);
> +
> +    return 0;
> +}
>
>  /*
>   * Fixup arrays: We limit the maximum number of writable mappings to
> @@ -2001,34 +2023,6 @@ int sh_remove_write_access(struct domain *d, mfn_t gmfn,
>  }
>  #endif /* CONFIG_HVM */
>
> -#if (SHADOW_OPTIMIZATIONS & SHOPT_OUT_OF_SYNC)
> -static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
> -                                            mfn_t smfn, unsigned long off)
> -{
> -    struct page_info *sp = mfn_to_page(smfn);
> -
> -    ASSERT(mfn_valid(smfn));
> -    ASSERT(mfn_valid(gmfn));
> -
> -    if ( sp->u.sh.type == SH_type_l1_32_shadow
> -         || sp->u.sh.type == SH_type_fl1_32_shadow )
> -    {
> -        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p,2)
> -            (d, gmfn, smfn, off);
> -    }
> -    else if ( sp->u.sh.type == SH_type_l1_pae_shadow
> -              || sp->u.sh.type == SH_type_fl1_pae_shadow )
> -        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p,3)
> -            (d, gmfn, smfn, off);
> -    else if ( sp->u.sh.type == SH_type_l1_64_shadow
> -              || sp->u.sh.type == SH_type_fl1_64_shadow )
> -        return SHADOW_INTERNAL_NAME(sh_rm_write_access_from_sl1p,4)
> -            (d, gmfn, smfn, off);
> -
> -    return 0;
> -}
> -#endif
> -
>  /**************************************************************************/
>  /* Remove all mappings of a guest frame from the shadow tables.
>   * Returns non-zero if we need to flush TLBs. */
> --
> 2.11.0
>




From xen-devel-bounces@lists.xenproject.org Thu May 21 10:26:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 10:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbiP8-00016T-DK; Thu, 21 May 2020 10:26:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2WPz=7D=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbiP7-00016O-Ug
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 10:26:21 +0000
X-Inumbo-ID: 82eff54b-9b4d-11ea-aaee-12813bfff9fa
Received: from ppsw-40.csi.cam.ac.uk (unknown [131.111.8.140])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 82eff54b-9b4d-11ea-aaee-12813bfff9fa;
 Thu, 21 May 2020 10:26:20 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:40636
 helo=[192.168.1.219])
 by ppsw-40.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.158]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbiP2-000tJn-jr (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 21 May 2020 11:26:16 +0100
Subject: Re: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
To: paul@xen.org, 'Xen-devel' <xen-devel@lists.xenproject.org>
References: <20200521090428.11425-1-andrew.cooper3@citrix.com>
 <003e01d62f58$66bbcea0$34336be0$@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3c829c7d-2a3f-0d15-ea77-3a88df57aaaa@citrix.com>
Date: Thu, 21 May 2020 11:26:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <003e01d62f58$66bbcea0$34336be0$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Tim Deegan' <tim@xen.org>, 'Jan Beulich' <JBeulich@suse.com>,
 'Wei Liu' <wl@xen.org>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/05/2020 11:12, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Andrew Cooper
>> Sent: 21 May 2020 10:04
>> To: Xen-devel <xen-devel@lists.xenproject.org>
>> Cc: Andrew Cooper <andrew.cooper3@citrix.com>; Tim Deegan <tim@xen.org>; Wei Liu <wl@xen.org>; Jan
>> Beulich <JBeulich@suse.com>; Roger Pau Monné <roger.pau@citrix.com>
>> Subject: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
>>
>> When compiling with SHOPT_OUT_OF_SYNC disabled, the build fails with:
>>
>>   common.c:41:12: error: ‘sh_remove_write_access_from_sl1p’ declared ‘static’ but never defined [-
>> Werror=unused-function]
>>    static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
>>               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> due to an unguarded forward declaration.
> Is this, perhaps, an argument for making SHADOW_OPTIMIZATIONS tunable via Kconfig so that randconfig could catch things like this?

Given enough TUITS, yes.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 21 10:26:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 10:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbiPW-000181-MZ; Thu, 21 May 2020 10:26:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S6UB=7D=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbiPV-00017t-Kv
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 10:26:45 +0000
X-Inumbo-ID: 9164d8f2-9b4d-11ea-aaee-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9164d8f2-9b4d-11ea-aaee-12813bfff9fa;
 Thu, 21 May 2020 10:26:44 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 75XxUCYNwSU5PDE8kjHRJJnoYPfbEYOuogidHwKMPh1e+DSLxqQMLlnJ33ExmqwJtdUTpMnj7N
 qS3fxZWWqc2XRA5mQG7uyBglXqkij/rZPJk7zWali7ClFsZ0597tgpUul6jbFiVQBQsJIhIQi+
 pwq+6IUjyDckFYlaVU1tuqUmOOCpGHKP3/7tbmEBOKu5BeIK9MxwGQW0NylCy8LsFzoxSyh+vJ
 OUYgsoEApNquTyHh3VeGEa2ojkuEcZHGdxXtnY1aUZzXLk95Nzzij2kVMjrrVn3F6JhV8koWjQ
 0FI=
X-SBRS: 2.7
X-MesageID: 18785460
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18785460"
Date: Thu, 21 May 2020 12:26:36 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
Message-ID: <20200521102636.GR54375@Air-de-Roger>
References: <20200521090428.11425-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200521090428.11425-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 21, 2020 at 10:04:28AM +0100, Andrew Cooper wrote:
> When compiling with SHOPT_OUT_OF_SYNC disabled, the build fails with:
> 
>   common.c:41:12: error: ‘sh_remove_write_access_from_sl1p’ declared ‘static’ but never defined [-Werror=unused-function]
>    static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
>               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> due to an unguarded forward declaration.
> 
> It turns out there is no need to forward declare
> sh_remove_write_access_from_sl1p() to begin with, so move it to just ahead of
> its first users, which is within a larger #ifdef'd SHOPT_OUT_OF_SYNC block.
> 
> Fix up for style while moving it.  No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Tim Deegan <tim@xen.org>
> ---
>  xen/arch/x86/mm/shadow/common.c | 56 ++++++++++++++++++-----------------------
>  1 file changed, 25 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
> index 0ac3f880e1..6dff240e97 100644
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -38,9 +38,6 @@
>  #include <xen/numa.h>
>  #include "private.h"
>  
> -static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
> -                                            mfn_t smfn, unsigned long offset);
> -
>  DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
>  
>  static int sh_enable_log_dirty(struct domain *, bool log_global);
> @@ -252,6 +249,31 @@ static inline void _sh_resync_l1(struct vcpu *v, mfn_t gmfn, mfn_t snpmfn)
>          SHADOW_INTERNAL_NAME(sh_resync_l1, 4)(v, gmfn, snpmfn);
>  }
>  
> +static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
> +                                            mfn_t smfn, unsigned long off)
> +{
> +    struct page_info *sp = mfn_to_page(smfn);
> +
> +    ASSERT(mfn_valid(smfn));
> +    ASSERT(mfn_valid(gmfn));
> +
> +    if ( sp->u.sh.type == SH_type_l1_32_shadow ||
> +         sp->u.sh.type == SH_type_fl1_32_shadow )

Using a switch would also be nice IMO and would avoid some of the code
churn.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 21 10:32:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 10:32:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbiUk-00023n-85; Thu, 21 May 2020 10:32:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2WPz=7D=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbiUi-00023i-Gs
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 10:32:08 +0000
X-Inumbo-ID: 51f14678-9b4e-11ea-aaee-12813bfff9fa
Received: from ppsw-40.csi.cam.ac.uk (unknown [131.111.8.140])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51f14678-9b4e-11ea-aaee-12813bfff9fa;
 Thu, 21 May 2020 10:32:07 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:40788
 helo=[192.168.1.219])
 by ppsw-40.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.158]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbiUf-000yE5-jc (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 21 May 2020 11:32:05 +0100
Subject: Re: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200521090428.11425-1-andrew.cooper3@citrix.com>
 <20200521102636.GR54375@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <91c7f537-1d0e-4a9d-16a8-c02fa30d2d80@citrix.com>
Date: Thu, 21 May 2020 11:32:04 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200521102636.GR54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/05/2020 11:26, Roger Pau Monné wrote:
> On Thu, May 21, 2020 at 10:04:28AM +0100, Andrew Cooper wrote:
>> When compiling with SHOPT_OUT_OF_SYNC disabled, the build fails with:
>>
>>   common.c:41:12: error: ‘sh_remove_write_access_from_sl1p’ declared ‘static’ but never defined [-Werror=unused-function]
>>    static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
>>               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> due to an unguarded forward declaration.
>>
>> It turns out there is no need to forward declare
>> sh_remove_write_access_from_sl1p() to begin with, so move it to just ahead of
>> its first users, which is within a larger #ifdef'd SHOPT_OUT_OF_SYNC block.
>>
>> Fix up for style while moving it.  No functional change.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Tim Deegan <tim@xen.org>
>> ---
>>  xen/arch/x86/mm/shadow/common.c | 56 ++++++++++++++++++-----------------------
>>  1 file changed, 25 insertions(+), 31 deletions(-)
>>
>> diff --git a/xen/arch/x86/mm/shadow/common.c b/xen/arch/x86/mm/shadow/common.c
>> index 0ac3f880e1..6dff240e97 100644
>> --- a/xen/arch/x86/mm/shadow/common.c
>> +++ b/xen/arch/x86/mm/shadow/common.c
>> @@ -38,9 +38,6 @@
>>  #include <xen/numa.h>
>>  #include "private.h"
>>  
>> -static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
>> -                                            mfn_t smfn, unsigned long offset);
>> -
>>  DEFINE_PER_CPU(uint32_t,trace_shadow_path_flags);
>>  
>>  static int sh_enable_log_dirty(struct domain *, bool log_global);
>> @@ -252,6 +249,31 @@ static inline void _sh_resync_l1(struct vcpu *v, mfn_t gmfn, mfn_t snpmfn)
>>          SHADOW_INTERNAL_NAME(sh_resync_l1, 4)(v, gmfn, snpmfn);
>>  }
>>  
>> +static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
>> +                                            mfn_t smfn, unsigned long off)
>> +{
>> +    struct page_info *sp = mfn_to_page(smfn);
>> +
>> +    ASSERT(mfn_valid(smfn));
>> +    ASSERT(mfn_valid(gmfn));
>> +
>> +    if ( sp->u.sh.type == SH_type_l1_32_shadow ||
>> +         sp->u.sh.type == SH_type_fl1_32_shadow )
> Using a switch would also be nice IMO and would avoid some of the code
> churn.

Good point.  Happy to do that if Tim agrees (but I won't bother sending
a v2 just now).

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 21 11:09:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 11:09:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbj50-0004qI-17; Thu, 21 May 2020 11:09:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2WPz=7D=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbj4z-0004qD-06
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 11:09:37 +0000
X-Inumbo-ID: 8d778145-9b53-11ea-aaf1-12813bfff9fa
Received: from ppsw-40.csi.cam.ac.uk (unknown [131.111.8.140])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d778145-9b53-11ea-aaf1-12813bfff9fa;
 Thu, 21 May 2020 11:09:35 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:41892
 helo=[192.168.1.219])
 by ppsw-40.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.158]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbj4u-000Ue9-jF (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 21 May 2020 12:09:32 +0100
Subject: Re: [PATCH v7 12/19] tools: add simple vchan-socket-proxy
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <20200519015503.115236-13-jandryuk@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <edb28eba-622e-e9e6-5d22-0d5e86b503bd@citrix.com>
Date: Thu, 21 May 2020 12:09:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200519015503.115236-13-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 02:54, Jason Andryuk wrote:
> +static int connect_socket(const char *path_or_fd) {
> +    int fd;
> +    char *endptr;
> +    struct sockaddr_un addr;
> +
> +    fd = strtoll(path_or_fd, &endptr, 0);
> +    if (*endptr == '\0') {
> +        set_nonblocking(fd, 1);
> +        return fd;
> +    }
> +
> +    fd = socket(AF_UNIX, SOCK_STREAM, 0);
> +    if (fd == -1)
> +        return -1;
> +
> +    addr.sun_family = AF_UNIX;
> +    strncpy(addr.sun_path, path_or_fd, sizeof(addr.sun_path));

Coverity has identified issues, some perhaps more concerning than others.

Here, addr.sun_path not necessarily NUL terminated.

> +    if (connect(fd, (const struct sockaddr *)&addr, sizeof(addr)) == -1) {
> +        close(fd);
> +        return -1;
> +    }
> +
> +    set_nonblocking(fd, 1);
> +
> +    return fd;
> +}
> +
> +static int listen_socket(const char *path_or_fd) {
> +    int fd;
> +    char *endptr;
> +    struct sockaddr_un addr;
> +
> +    fd = strtoll(path_or_fd, &endptr, 0);
> +    if (*endptr == '\0') {
> +        return fd;
> +    }
> +
> +    /* if not a number, assume a socket path */
> +    fd = socket(AF_UNIX, SOCK_STREAM, 0);
> +    if (fd == -1)
> +        return -1;
> +
> +    addr.sun_family = AF_UNIX;
> +    strncpy(addr.sun_path, path_or_fd, sizeof(addr.sun_path));

And here.

> +    if (bind(fd, (const struct sockaddr *)&addr, sizeof(addr)) == -1) {
> +        close(fd);
> +        return -1;
> +    }
> +    if (listen(fd, 5) != 0) {
> +        close(fd);
> +        return -1;
> +    }
> +
> +    return fd;
> +}
> +
> +static struct libxenvchan *connect_vchan(int domid, const char *path) {
> +    struct libxenvchan *ctrl = NULL;
> +    struct xs_handle *xs = NULL;
> +    xc_interface *xc = NULL;
> +    xc_dominfo_t dominfo;
> +    char **watch_ret;
> +    unsigned int watch_num;
> +    int ret;
> +
> +    xs = xs_open(XS_OPEN_READONLY);
> +    if (!xs) {
> +        perror("xs_open");
> +        goto out;
> +    }
> +    xc = xc_interface_open(NULL, NULL, XC_OPENFLAG_NON_REENTRANT);
> +    if (!xc) {
> +        perror("xc_interface_open");
> +        goto out;
> +    }
> +    /* wait for vchan server to create *path* */
> +    xs_watch(xs, path, "path");
> +    xs_watch(xs, "@releaseDomain", "release");

Return values not checked.
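The usual remedy is to test both registrations and take the existing cleanup path on failure. A self-contained sketch, with a stub standing in for the real xs_watch() (which, per xenstore.h, returns false on failure):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Stub standing in for xs_watch() so the checking pattern is
 * self-contained; the real function registers a xenstore watch.
 */
static bool xs_watch_stub(void *xs, const char *path, const char *token)
{
    (void)xs;
    return path != NULL && token != NULL;   /* pretend NULL args fail */
}

/* The shape of the fix: check both registrations, and let the caller
 * take its existing cleanup path ("goto out" in the patch) on failure. */
static int register_watches(void *xs, const char *path)
{
    if (!xs_watch_stub(xs, path, "path") ||
        !xs_watch_stub(xs, "@releaseDomain", "release"))
        return -1;
    return 0;
}
```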

> +    while ((watch_ret = xs_read_watch(xs, &watch_num))) {
> +        /* don't care exactly which watch fired */
> +        free(watch_ret);
> +        ctrl = libxenvchan_client_init(NULL, domid, path);
> +        if (ctrl)
> +            break;
> +
> +        ret = xc_domain_getinfo(xc, domid, 1, &dominfo);
> +        /* break the loop if the domain is definitely gone, but continue
> +         * if it is still there or the call failed (e.g. EPERM) */
> +        if (ret == -1 && errno == ESRCH)
> +            break;
> +        if (ret == 1 && (dominfo.domid != (uint32_t)domid || dominfo.dying))
> +            break;
> +    }
> +
> +out:
> +    if (xc)
> +        xc_interface_close(xc);
> +    if (xs)
> +        xs_close(xs);
> +    return ctrl;
> +}
> +
> +
> +static void discard_buffers(struct libxenvchan *ctrl) {
> +    /* discard local buffers */
> +    insiz = 0;
> +    outsiz = 0;
> +
> +    /* discard remaining incoming data */
> +    while (libxenvchan_data_ready(ctrl)) {
> +        if (libxenvchan_read(ctrl, inbuf, BUFSIZE) == -1) {
> +            perror("vchan read");
> +            exit(1);
> +        }
> +    }
> +}
> +
> +int data_loop(struct libxenvchan *ctrl, int input_fd, int output_fd)
> +{
> +    int ret;
> +    int libxenvchan_fd;
> +    int max_fd;
> +
> +    libxenvchan_fd = libxenvchan_fd_for_select(ctrl);
> +    for (;;) {
> +        fd_set rfds;
> +        fd_set wfds;
> +        FD_ZERO(&rfds);
> +        FD_ZERO(&wfds);
> +
> +        max_fd = -1;
> +        if (input_fd != -1 && insiz != BUFSIZE) {
> +            FD_SET(input_fd, &rfds);
> +            if (input_fd > max_fd)
> +                max_fd = input_fd;
> +        }
> +        if (output_fd != -1 && outsiz) {
> +            FD_SET(output_fd, &wfds);
> +            if (output_fd > max_fd)
> +                max_fd = output_fd;
> +        }
> +        FD_SET(libxenvchan_fd, &rfds);
> +        if (libxenvchan_fd > max_fd)
> +            max_fd = libxenvchan_fd;
> +        ret = select(max_fd + 1, &rfds, &wfds, NULL, NULL);
> +        if (ret < 0) {
> +            perror("select");
> +            exit(1);
> +        }
> +        if (FD_ISSET(libxenvchan_fd, &rfds)) {
> +            libxenvchan_wait(ctrl);
> +            if (!libxenvchan_is_open(ctrl)) {
> +                if (verbose)
> +                    fprintf(stderr, "vchan client disconnected\n");
> +                while (outsiz)
> +                    socket_wr(output_fd);
> +                close(output_fd);
> +                close(input_fd);
> +                discard_buffers(ctrl);
> +                break;
> +            }
> +            vchan_wr(ctrl);
> +        }
> +
> +        if (FD_ISSET(input_fd, &rfds)) {
> +            ret = read(input_fd, inbuf + insiz, BUFSIZE - insiz);
> +            if (ret < 0 && errno != EAGAIN)
> +                exit(1);
> +            if (verbose)
> +                fprintf(stderr, "from-unix: %.*s\n", ret, inbuf + insiz);
> +            if (ret == 0) {
> +                /* EOF on socket, write everything in the buffer and close the
> +                 * input_fd socket */
> +                while (insiz) {
> +                    vchan_wr(ctrl);
> +                    libxenvchan_wait(ctrl);
> +                }
> +                close(input_fd);
> +                input_fd = -1;

Dead store.

> +                /* TODO: maybe signal the vchan client somehow? */
> +                break;
> +            }
> +            if (ret)
> +                insiz += ret;
> +            vchan_wr(ctrl);
> +        }
> +        if (FD_ISSET(output_fd, &wfds))
> +            socket_wr(output_fd);
> +        while (libxenvchan_data_ready(ctrl) && outsiz < BUFSIZE) {
> +            ret = libxenvchan_read(ctrl, outbuf + outsiz, BUFSIZE - outsiz);
> +            if (ret < 0)
> +                exit(1);
> +            if (verbose)
> +                fprintf(stderr, "from-vchan: %.*s\n", ret, outbuf + outsiz);
> +            outsiz += ret;
> +            socket_wr(output_fd);
> +        }
> +    }
> +    return 0;
> +}
> +
> +/**
> +    Simple libxenvchan application, both client and server.
> +    Both sides may write and read, both from the libxenvchan and from
> +    stdin/stdout (just like netcat).
> +*/
> +
> +static struct option options[] = {
> +    { "mode",       required_argument, NULL, 'm' },
> +    { "verbose",          no_argument, NULL, 'v' },
> +    { "state-path", required_argument, NULL, 's' },
> +    { }
> +};
> +
> +int main(int argc, char **argv)
> +{
> +    int is_server = 0;
> +    int socket_fd = -1;
> +    int input_fd, output_fd;
> +    struct libxenvchan *ctrl = NULL;
> +    const char *socket_path;
> +    int domid;
> +    const char *vchan_path;
> +    const char *state_path = NULL;
> +    int opt;
> +
> +    while ((opt = getopt_long(argc, argv, "m:vs:", options, NULL)) != -1) {
> +        switch (opt) {
> +            case 'm':
> +                if (strcmp(optarg, "server") == 0)
> +                    is_server = 1;
> +                else if (strcmp(optarg, "client") == 0)
> +                    is_server = 0;
> +                else {
> +                    fprintf(stderr, "invalid argument for --mode: %s\n", optarg);
> +                    usage(argv);
> +                    return 1;
> +                }
> +                break;
> +            case 'v':
> +                verbose = 1;
> +                break;
> +            case 's':
> +                state_path = optarg;
> +                break;
> +            case '?':
> +                usage(argv);
> +        }
> +    }
> +
> +    if (argc-optind != 3)
> +        usage(argv);
> +
> +    domid = atoi(argv[optind]);
> +    vchan_path = argv[optind+1];
> +    socket_path = argv[optind+2];
> +
> +    if (is_server) {
> +        ctrl = libxenvchan_server_init(NULL, domid, vchan_path, 0, 0);
> +        if (!ctrl) {
> +            perror("libxenvchan_server_init");
> +            exit(1);
> +        }
> +    } else {
> +        if (strcmp(socket_path, "-") == 0) {
> +            input_fd = 0;
> +            output_fd = 1;
> +        } else {
> +            socket_fd = listen_socket(socket_path);
> +            if (socket_fd == -1) {
> +                perror("listen socket");
> +                return 1;
> +            }
> +        }
> +    }
> +
> +    if (state_path) {
> +        struct xs_handle *xs;
> +
> +        xs = xs_open(0);
> +        if (!xs) {
> +            perror("xs_open");
> +            return 1;
> +        }
> +        if (!xs_write(xs, XBT_NULL, state_path, "running", strlen("running"))) {
> +            perror("xs_write");
> +            return 1;
> +        }
> +        xs_close(xs);
> +    }
> +
> +    for (;;) {
> +        if (is_server) {
> +            /* wait for vchan connection */
> +            while (libxenvchan_is_open(ctrl) != 1)
> +                libxenvchan_wait(ctrl);
> +            /* vchan client connected, setup local FD if needed */
> +            if (strcmp(socket_path, "-") == 0) {
> +                input_fd = 0;
> +                output_fd = 1;
> +            } else {
> +                input_fd = output_fd = connect_socket(socket_path);
> +            }
> +            if (input_fd == -1) {
> +                perror("connect socket");
> +                return 1;
> +            }
> +            if (data_loop(ctrl, input_fd, output_fd) != 0)
> +                break;
> +            /* keep it running only when given a UNIX socket path */
> +            if (socket_path[0] != '/')
> +                break;
> +        } else {
> +            /* wait for local socket connection */
> +            if (strcmp(socket_path, "-") != 0)
> +                input_fd = output_fd = accept(socket_fd, NULL, NULL);
> +            if (input_fd == -1) {
> +                perror("accept");

Leakage of socket_fd, and ...

> +                return 1;
> +            }
> +            set_nonblocking(input_fd, 1);
> +            set_nonblocking(output_fd, 1);
> +            ctrl = connect_vchan(domid, vchan_path);
> +            if (!ctrl) {
> +                perror("vchan client init");

All 3 FDs here.

~Andrew

> +                return 1;
> +            }
> +            if (data_loop(ctrl, input_fd, output_fd) != 0)
> +                break;
> +            /* don't reconnect if output was stdout */
> +            if (strcmp(socket_path, "-") == 0)
> +                break;
> +
> +            libxenvchan_close(ctrl);
> +            ctrl = NULL;
> +        }
> +    }
> +    return 0;
> +}
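The descriptor leaks Andrew points at above are typically fixed by funnelling every error exit through one cleanup path, so each FD is closed exactly once. A minimal sketch under that assumption; the function name and the 'ok' flag (standing in for connect_vchan()/accept() succeeding) are hypothetical:

```c
#include <assert.h>
#include <unistd.h>

/*
 * Sketch only: route error exits through a single label so
 * socket_fd/input_fd/output_fd are each closed exactly once.
 */
static int client_iteration(int socket_fd, int input_fd, int output_fd,
                            int ok)
{
    int ret = 1;

    if (!ok)
        goto err;       /* previously: return 1, leaking the FDs */
    ret = 0;
 err:
    if (input_fd != -1 && input_fd != output_fd)
        close(input_fd);
    if (output_fd != -1)
        close(output_fd);
    if (socket_fd != -1)
        close(socket_fd);
    return ret;
}
```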



From xen-devel-bounces@lists.xenproject.org Thu May 21 12:23:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 12:23:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbkE4-0003Hv-J3; Thu, 21 May 2020 12:23:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=S6UB=7D=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jbkE2-0003Hq-Uj
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 12:23:02 +0000
X-Inumbo-ID: cffb0806-9b5d-11ea-aafb-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cffb0806-9b5d-11ea-aafb-12813bfff9fa;
 Thu, 21 May 2020 12:23:01 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: uV5mpF+hvX5MhqHVXPacy/c6Ks9JGV2IiDP3LEu+MtuwbO4aTbvXhxZPHFCVp/6zVapaQDSyPa
 prIMLS9A0PchOBkGfEvWoaemPHIXvazb7kVmt+yTgaooKDN5w00Gi+vmettwazkG4Yl8kCi2dR
 n4LlAzkIWKdDVG0F7wdKZI3W/HIXTkapRU9gdQdhj6YEyETNljHA3KELwLrahsygBIbwSJeH5B
 XD7mVbWyY0G+Ql8jGuwnOWFoO07X9naAiM9aK6n1kZ4VmHtCQkgHFs0NB5AuZuFbo4R84l5zBk
 pVE=
X-SBRS: 2.7
X-MesageID: 18358494
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18358494"
Date: Thu, 21 May 2020 14:22:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
Message-ID: <20200521111441.GS54375@Air-de-Roger>
References: <20200521090428.11425-1-andrew.cooper3@citrix.com>
 <20200521102636.GR54375@Air-de-Roger>
 <91c7f537-1d0e-4a9d-16a8-c02fa30d2d80@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <91c7f537-1d0e-4a9d-16a8-c02fa30d2d80@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Tim Deegan <tim@xen.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 21, 2020 at 11:32:04AM +0100, Andrew Cooper wrote:
> On 21/05/2020 11:26, Roger Pau Monné wrote:
> > On Thu, May 21, 2020 at 10:04:28AM +0100, Andrew Cooper wrote:
> >> +static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
> >> +                                            mfn_t smfn, unsigned long off)
> >> +{
> >> +    struct page_info *sp = mfn_to_page(smfn);
> >> +
> >> +    ASSERT(mfn_valid(smfn));
> >> +    ASSERT(mfn_valid(gmfn));
> >> +
> >> +    if ( sp->u.sh.type == SH_type_l1_32_shadow ||
> >> +         sp->u.sh.type == SH_type_fl1_32_shadow )
> > Using a switch would also be nice IMO and would avoid some of the code
> > churn.
> 
> Good point.  Happy to do that if Tim agrees (but I won't bother sending
> a v2 just now).

Sure, feel free to keep my RB after that.
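For illustration, the if-chain in the quoted hunk could take the switch form Roger suggests. A sketch only; the SH_type_* values below are illustrative constants, not Xen's actual shadow-type numbering:

```c
#include <assert.h>

/* Illustrative stand-ins for Xen's shadow page types. */
enum { SH_type_l1_32_shadow, SH_type_fl1_32_shadow, SH_type_l1_64_shadow };

/* The type checks expressed as a switch, grouping the 32-bit cases. */
static int is_32bit_l1(unsigned int type)
{
    switch ( type )
    {
    case SH_type_l1_32_shadow:
    case SH_type_fl1_32_shadow:
        return 1;
    default:
        return 0;
    }
}
```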

Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 21 12:29:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 12:29:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbkJr-0003VP-A4; Thu, 21 May 2020 12:29:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbkJp-0003UY-Rd
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 12:29:01 +0000
X-Inumbo-ID: a38476d0-9b5e-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a38476d0-9b5e-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 12:28:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xs5UMk5nNnqEEXlm7pF8IJ2sKoY+CP/8DvsctX11D8Y=; b=FfeWJRpH9T3IN0ZYBIXqRhYLg
 NxyUQKDsYRDwFgTu9IHndqD0G/gbNTzk2jY0QWvByWxKFUEoKjzcGsNrXG5OwJizgGq79cu0J6iy7
 sbBeCTVFV9TNM15zxhI3B3fO1rkxxEc5+icLTWlaQxivf+W50LGtVgH37cxyyu+kRJlDE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbkJj-0001Q8-MP; Thu, 21 May 2020 12:28:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbkJj-00061b-Ac; Thu, 21 May 2020 12:28:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbkJj-0000tA-9z; Thu, 21 May 2020 12:28:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150293-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150293: all pass - PUSHED
X-Osstest-Versions-This: ovmf=3f89db869028fa65a37756fd7f391cbd69f4579c
X-Osstest-Versions-That: ovmf=bc5012b8fbf9f769a62d8a7a2dbf04343c16d398
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 12:28:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150293 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150293/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 3f89db869028fa65a37756fd7f391cbd69f4579c
baseline version:
 ovmf                 bc5012b8fbf9f769a62d8a7a2dbf04343c16d398

Last test of basis   150284  2020-05-20 20:09:31 Z    0 days
Testing same since   150293  2020-05-21 07:12:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chasel Chiu <chasel.chiu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   bc5012b8fb..3f89db8690  3f89db869028fa65a37756fd7f391cbd69f4579c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 21 12:52:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 12:52:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbkg1-0005zu-7i; Thu, 21 May 2020 12:51:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VGGW=7D=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbkfz-0005zo-Sc
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 12:51:55 +0000
X-Inumbo-ID: d882e7ba-9b61-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d882e7ba-9b61-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 12:51:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pentjSjMUHUSmtrKHJagsi5c/5Jl6LKgka80A+BCb6Y=; b=iRkkHf/XF7NMCSxBl7XxpTn4eF
 fAW9q5NI3OKwDXk13r3O7/vTDsg8mrlqJjQ3WGdFTjme/JHHQPnwUCwogmff8MaQ9meGNwDKqh4DF
 SVGyPTrPnV65EiiFAz8oSQuwZOxrPEKv8IREhZHeZSlZtbbASJdIvCqnTTipK1gEGyQ4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbkfs-0001sg-93; Thu, 21 May 2020 12:51:48 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbkfr-0004vm-KG; Thu, 21 May 2020 12:51:48 +0000
Subject: Re: [PATCH v10 04/12] xen: add basic hypervisor filesystem support
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-5-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <dad8b3d7-8588-303d-6209-b9e2c9943ff5@xen.org>
Date: Thu, 21 May 2020 13:51:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519072106.26894-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Juergen,

On 19/05/2020 08:20, Juergen Gross wrote:
> Add the infrastructure for the hypervisor filesystem.
> 
> This includes the hypercall interface and the base functions for
> entry creation, deletion and modification.
> 
> In order not to have to repeat the same pattern multiple times in case
> adding a new node should BUG_ON() failure, the helpers for adding a
> node (hypfs_add_dir() and hypfs_add_leaf()) get a nofault parameter
> causing the BUG() in case of a failure.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> V1:
> - rename files from filesystem.* to hypfs.*
> - add dummy write entry support
> - rename hypercall filesystem_op to hypfs_op
> - add support for unsigned integer entries
> 
> V2:
> - test new entry name to be valid
> 
> V3:
> - major rework, especially by supporting binary contents of entries
> - addressed all comments
> 
> V4:
> - sort #includes alphabetically (Wei Liu)
> - add public interface structures to xlat.lst (Jan Beulich)
> - let DIRENTRY_SIZE() add 1 for trailing nul byte (Jan Beulich)
> - remove hypfs_add_entry() (Jan Beulich)
> - len -> ulen (Jan Beulich)
> - switch sequence of tests in hypfs_get_entry_rel() (Jan Beulich)
> - add const qualifier (Jan Beulich)
> - return -ENOBUFS if only direntry but no entry contents are returned
>    (Jan Beulich)
> - use xmalloc() instead of xzalloc() (Jan Beulich)
> - better error handling in hypfs_write_leaf() (Jan Beulich)
> - return -EOPNOTSUPP for unknown sub-command (Jan Beulich)
> - use plain integers for enum-like constants in public interface
>    (Jan Beulich)
> - rename XEN_HYPFS_OP_read_contents to XEN_HYPFS_OP_read (Jan Beulich)
> - add some comments in include/public/hypfs.h (Jan Beulich)
> - use const_char for user parameter path (Jan Beulich)
> - add helpers for XEN_HYPFS_TYPE_BOOL and XEN_HYPFS_TYPE_INT entry
>    definitions (Jan Beulich)
> - make statically defined entries __read_mostly (Jan Beulich)
> 
> V5:
> - switch to xsm for privilege check
> 
> V6:
> - use memchr() for testing correct string length (Jan Beulich)
> - reject writing to non-string leafs with wrong length (Jan Beulich)
> - only support bools of natural size (Julien Grall)
> - adjust blank padding in header (Jan Beulich)
> - adjust comments in public header (Jan Beulich)
> - rename hypfs_string_set() and add comment (Jan Beulich)
> - add common HYPFS_INIT helper macro (Jan Beulich)
> - really check structures added to xlat.lst (Jan Beulich)
> - add missing xsm parts (Jan Beulich)
> 
> V7:
> - simplify compat check (Jan Beulich)
> - add max write size (Jan Beulich)
> - better length testing of written string (Jan Beulich)
> 
> V8:
> - add Kconfig item CONFIG_HYPFS (Jan Beulich)
> - init write pointer in HYPFS_*_INIT_WRITABLE() (Jan Beulich)
> - expand write ASSERT()s (Jan Beulich)
> 
> V9:
> - move hypfs to correct position in Makefile (Jan Beulich)
> - avoid recursion in hypfs_get_entry_rel() (Jan Beulich)
> - make hypfs_get_entry() static (Jan Beulich)
> - assert locking in read/write functions (Jan Beulich)
> - add ASSERT() in hypfs_write_leaf() (Jan Beulich)
> - add encoding test in hypfs_write_leaf() (Jan Beulich)
> - test parameters of XEN_HYPFS_OP_get_version to be zero (Jan Beulich)
> - add parentheses in macro (Jan Beulich)
> - make ulen type unsigned int in functions (Jan Beulich)
> 
> V10:
> - add locking ASSERT()s (Jan Beulich)
> - correct indentation (Jan Beulich)
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/flask/policy/modules/dom0.te  |   2 +-
>   xen/arch/arm/traps.c                |   3 +
>   xen/arch/x86/hvm/hypercall.c        |   3 +
>   xen/arch/x86/hypercall.c            |   3 +
>   xen/arch/x86/pv/hypercall.c         |   3 +
>   xen/common/Kconfig                  |  11 +
>   xen/common/Makefile                 |   1 +
>   xen/common/hypfs.c                  | 422 ++++++++++++++++++++++++++++
>   xen/include/Makefile                |   1 +
>   xen/include/public/hypfs.h          | 129 +++++++++
>   xen/include/public/xen.h            |   1 +
>   xen/include/xen/hypercall.h         |  10 +
>   xen/include/xen/hypfs.h             | 121 ++++++++
>   xen/include/xlat.lst                |   2 +
>   xen/include/xsm/dummy.h             |   6 +
>   xen/include/xsm/xsm.h               |   6 +
>   xen/xsm/dummy.c                     |   1 +
>   xen/xsm/flask/hooks.c               |   6 +
>   xen/xsm/flask/policy/access_vectors |   2 +
>   19 files changed, 732 insertions(+), 1 deletion(-)
>   create mode 100644 xen/common/hypfs.c
>   create mode 100644 xen/include/public/hypfs.h
>   create mode 100644 xen/include/xen/hypfs.h
> 
> diff --git a/tools/flask/policy/modules/dom0.te b/tools/flask/policy/modules/dom0.te
> index 272f6a4f75..20925e38a2 100644
> --- a/tools/flask/policy/modules/dom0.te
> +++ b/tools/flask/policy/modules/dom0.te
> @@ -11,7 +11,7 @@ allow dom0_t xen_t:xen {
>   	mtrr_del mtrr_read microcode physinfo quirk writeconsole readapic
>   	writeapic privprofile nonprivprofile kexec firmware sleep frequency
>   	getidle debug getcpuinfo heap pm_op mca_op lockprof cpupool_op
> -	getscheduler setscheduler
> +	getscheduler setscheduler hypfs_op
>   };
>   allow dom0_t xen_t:xen2 {
>   	resource_op psr_cmt_op psr_alloc pmu_ctrl get_symbol
> diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
> index 30c4c1830b..8f40d0e0b6 100644
> --- a/xen/arch/arm/traps.c
> +++ b/xen/arch/arm/traps.c
> @@ -1381,6 +1381,9 @@ static arm_hypercall_t arm_hypercall_table[] = {
>   #ifdef CONFIG_ARGO
>       HYPERCALL(argo_op, 5),
>   #endif
> +#ifdef CONFIG_HYPFS
> +    HYPERCALL(hypfs_op, 5),
> +#endif
>   };
>   
>   #ifndef NDEBUG
> diff --git a/xen/arch/x86/hvm/hypercall.c b/xen/arch/x86/hvm/hypercall.c
> index c41c2179c9..b6ccaf4457 100644
> --- a/xen/arch/x86/hvm/hypercall.c
> +++ b/xen/arch/x86/hvm/hypercall.c
> @@ -150,6 +150,9 @@ static const hypercall_table_t hvm_hypercall_table[] = {
>   #endif
>       HYPERCALL(xenpmu_op),
>       COMPAT_CALL(dm_op),
> +#ifdef CONFIG_HYPFS
> +    HYPERCALL(hypfs_op),
> +#endif
>       HYPERCALL(arch_1)
>   };
>   
> diff --git a/xen/arch/x86/hypercall.c b/xen/arch/x86/hypercall.c
> index 7f299d45c6..dd00983005 100644
> --- a/xen/arch/x86/hypercall.c
> +++ b/xen/arch/x86/hypercall.c
> @@ -72,6 +72,9 @@ const hypercall_args_t hypercall_args_table[NR_hypercalls] =
>   #ifdef CONFIG_HVM
>       ARGS(hvm_op, 2),
>       ARGS(dm_op, 3),
> +#endif
> +#ifdef CONFIG_HYPFS
> +    ARGS(hypfs_op, 5),
>   #endif
>       ARGS(mca, 1),
>       ARGS(arch_1, 1),
> diff --git a/xen/arch/x86/pv/hypercall.c b/xen/arch/x86/pv/hypercall.c
> index b0d1d0ed77..53a52360fa 100644
> --- a/xen/arch/x86/pv/hypercall.c
> +++ b/xen/arch/x86/pv/hypercall.c
> @@ -84,6 +84,9 @@ const hypercall_table_t pv_hypercall_table[] = {
>   #ifdef CONFIG_HVM
>       HYPERCALL(hvm_op),
>       COMPAT_CALL(dm_op),
> +#endif
> +#ifdef CONFIG_HYPFS
> +    HYPERCALL(hypfs_op),
>   #endif
>       HYPERCALL(mca),
>       HYPERCALL(arch_1),
> diff --git a/xen/common/Kconfig b/xen/common/Kconfig
> index fe9b41f721..e768ea36b2 100644
> --- a/xen/common/Kconfig
> +++ b/xen/common/Kconfig
> @@ -116,6 +116,17 @@ config SPECULATIVE_HARDEN_BRANCH
>   
>   endmenu
>   
> +config HYPFS
> +	bool "Hypervisor file system support"
> +	default y
> +	---help---
> +	  Support Xen hypervisor file system. This file system is used to
> +	  present various hypervisor internal data to dom0 and in some
> +	  cases to allow modifying settings. Disabling the support might
> +	  result in some features not being available.
> +
> +	  If unsure, say Y.
> +
>   config KEXEC
>   	bool "kexec support"
>   	default y
> diff --git a/xen/common/Makefile b/xen/common/Makefile
> index e8cde65370..bf7d0e25a3 100644
> --- a/xen/common/Makefile
> +++ b/xen/common/Makefile
> @@ -14,6 +14,7 @@ obj-$(CONFIG_CRASH_DEBUG) += gdbstub.o
>   obj-$(CONFIG_GRANT_TABLE) += grant_table.o
>   obj-y += guestcopy.o
>   obj-bin-y += gunzip.init.o
> +obj-$(CONFIG_HYPFS) += hypfs.o
>   obj-y += irq.o
>   obj-y += kernel.o
>   obj-y += keyhandler.o
> diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
> new file mode 100644
> index 0000000000..9c2213a068
> --- /dev/null
> +++ b/xen/common/hypfs.c
> @@ -0,0 +1,422 @@
> +/******************************************************************************
> + *
> + * hypfs.c
> + *
> + * Simple sysfs-like file system for the hypervisor.
> + */
> +
> +#include <xen/err.h>
> +#include <xen/guest_access.h>
> +#include <xen/hypercall.h>
> +#include <xen/hypfs.h>
> +#include <xen/lib.h>
> +#include <xen/rwlock.h>
> +#include <public/hypfs.h>
> +
> +#ifdef CONFIG_COMPAT
> +#include <compat/hypfs.h>
> +CHECK_hypfs_dirlistentry;
> +#endif
> +
> +#define DIRENTRY_NAME_OFF offsetof(struct xen_hypfs_dirlistentry, name)
> +#define DIRENTRY_SIZE(name_len) \
> +    (DIRENTRY_NAME_OFF +        \
> +     ROUNDUP((name_len) + 1, alignof(struct xen_hypfs_direntry)))
> +
> +static DEFINE_RWLOCK(hypfs_lock);
> +enum hypfs_lock_state {
> +    hypfs_unlocked,
> +    hypfs_read_locked,
> +    hypfs_write_locked
> +};
> +static DEFINE_PER_CPU(enum hypfs_lock_state, hypfs_locked);
> +
> +HYPFS_DIR_INIT(hypfs_root, "");
> +
> +static void hypfs_read_lock(void)
> +{
> +    ASSERT(this_cpu(hypfs_locked) != hypfs_write_locked);
> +
> +    read_lock(&hypfs_lock);
> +    this_cpu(hypfs_locked) = hypfs_read_locked;
> +}
> +
> +static void hypfs_write_lock(void)
> +{
> +    ASSERT(this_cpu(hypfs_locked) == hypfs_unlocked);
> +
> +    write_lock(&hypfs_lock);
> +    this_cpu(hypfs_locked) = hypfs_write_locked;
> +}
> +
> +static void hypfs_unlock(void)
> +{
> +    enum hypfs_lock_state locked = this_cpu(hypfs_locked);
> +
> +    this_cpu(hypfs_locked) = hypfs_unlocked;
> +
> +    switch ( locked )
> +    {
> +    case hypfs_read_locked:
> +        read_unlock(&hypfs_lock);
> +        break;
> +    case hypfs_write_locked:
> +        write_unlock(&hypfs_lock);
> +        break;
> +    default:
> +        BUG();
> +    }
> +}
> +
> +static int add_entry(struct hypfs_entry_dir *parent, struct hypfs_entry *new)
> +{
> +    int ret = -ENOENT;
> +    struct hypfs_entry *e;
> +
> +    hypfs_write_lock();
> +
> +    list_for_each_entry ( e, &parent->dirlist, list )
> +    {
> +        int cmp = strcmp(e->name, new->name);
> +
> +        if ( cmp > 0 )
> +        {
> +            ret = 0;
> +            list_add_tail(&new->list, &e->list);
> +            break;
> +        }
> +        if ( cmp == 0 )
> +        {
> +            ret = -EEXIST;
> +            break;
> +        }
> +    }
> +
> +    if ( ret == -ENOENT )
> +    {
> +        ret = 0;
> +        list_add_tail(&new->list, &parent->dirlist);
> +    }
> +
> +    if ( !ret )
> +    {
> +        unsigned int sz = strlen(new->name);
> +
> +        parent->e.size += DIRENTRY_SIZE(sz);
> +    }
> +
> +    hypfs_unlock();
> +
> +    return ret;
> +}
> +
> +int hypfs_add_dir(struct hypfs_entry_dir *parent,
> +                  struct hypfs_entry_dir *dir, bool nofault)
> +{
> +    int ret;
> +
> +    ret = add_entry(parent, &dir->e);
> +    BUG_ON(nofault && ret);
> +
> +    return ret;
> +}
> +
> +int hypfs_add_leaf(struct hypfs_entry_dir *parent,
> +                   struct hypfs_entry_leaf *leaf, bool nofault)
> +{
> +    int ret;
> +
> +    if ( !leaf->content )
> +        ret = -EINVAL;
> +    else
> +        ret = add_entry(parent, &leaf->e);
> +    BUG_ON(nofault && ret);
> +
> +    return ret;
> +}
> +
> +static int hypfs_get_path_user(char *buf,
> +                               XEN_GUEST_HANDLE_PARAM(const_char) uaddr,
> +                               unsigned long ulen)
> +{
> +    if ( ulen > XEN_HYPFS_MAX_PATHLEN )
> +        return -EINVAL;
> +
> +    if ( copy_from_guest(buf, uaddr, ulen) )
> +        return -EFAULT;
> +
> +    if ( memchr(buf, 0, ulen) != buf + ulen - 1 )
> +        return -EINVAL;
> +
> +    return 0;
> +}
> +
> +static struct hypfs_entry *hypfs_get_entry_rel(struct hypfs_entry_dir *dir,
> +                                               const char *path)
> +{
> +    const char *end;
> +    struct hypfs_entry *entry;
> +    unsigned int name_len;
> +    bool again = true;
> +
> +    while ( again )
> +    {
> +        if ( dir->e.type != XEN_HYPFS_TYPE_DIR )
> +            return NULL;
> +
> +        if ( !*path )
> +            return &dir->e;
> +
> +        end = strchr(path, '/');
> +        if ( !end )
> +            end = strchr(path, '\0');
> +        name_len = end - path;
> +
> +        again = false;
> +
> +        list_for_each_entry ( entry, &dir->dirlist, list )
> +        {
> +            int cmp = strncmp(path, entry->name, name_len);
> +            struct hypfs_entry_dir *d = container_of(entry,
> +                                                     struct hypfs_entry_dir, e);
> +
> +            if ( cmp < 0 )
> +                return NULL;
> +            if ( !cmp && strlen(entry->name) == name_len )
> +            {
> +                if ( !*end )
> +                    return entry;
> +
> +                again = true;
> +                dir = d;
> +                path = end + 1;
> +
> +                break;
> +            }
> +        }
> +    }
> +
> +    return NULL;
> +}
> +
> +static struct hypfs_entry *hypfs_get_entry(const char *path)
> +{
> +    if ( path[0] != '/' )
> +        return NULL;
> +
> +    return hypfs_get_entry_rel(&hypfs_root, path + 1);
> +}
> +
> +int hypfs_read_dir(const struct hypfs_entry *entry,
> +                   XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    const struct hypfs_entry_dir *d;
> +    const struct hypfs_entry *e;
> +    unsigned int size = entry->size;
> +
> +    ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
> +
> +    d = container_of(entry, const struct hypfs_entry_dir, e);
> +
> +    list_for_each_entry ( e, &d->dirlist, list )
> +    {
> +        struct xen_hypfs_dirlistentry direntry;
> +        unsigned int e_namelen = strlen(e->name);
> +        unsigned int e_len = DIRENTRY_SIZE(e_namelen);
> +
> +        direntry.e.pad = 0;
> +        direntry.e.type = e->type;
> +        direntry.e.encoding = e->encoding;
> +        direntry.e.content_len = e->size;
> +        direntry.e.max_write_len = e->max_size;
> +        direntry.off_next = list_is_last(&e->list, &d->dirlist) ? 0 : e_len;
> +        if ( copy_to_guest(uaddr, &direntry, 1) )
> +            return -EFAULT;
> +
> +        if ( copy_to_guest_offset(uaddr, DIRENTRY_NAME_OFF,
> +                                  e->name, e_namelen + 1) )
> +            return -EFAULT;
> +
> +        guest_handle_add_offset(uaddr, e_len);
> +
> +        ASSERT(e_len <= size);
> +        size -= e_len;
> +    }
> +
> +    return 0;
> +}
> +
> +int hypfs_read_leaf(const struct hypfs_entry *entry,
> +                    XEN_GUEST_HANDLE_PARAM(void) uaddr)
> +{
> +    const struct hypfs_entry_leaf *l;
> +
> +    ASSERT(this_cpu(hypfs_locked) != hypfs_unlocked);
> +
> +    l = container_of(entry, const struct hypfs_entry_leaf, e);
> +
> +    return copy_to_guest(uaddr, l->content, entry->size) ? -EFAULT : 0;
> +}
> +
> +static int hypfs_read(const struct hypfs_entry *entry,
> +                      XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
> +{
> +    struct xen_hypfs_direntry e;
> +    long ret = -EINVAL;
> +
> +    if ( ulen < sizeof(e) )
> +        goto out;
> +
> +    e.pad = 0;
> +    e.type = entry->type;
> +    e.encoding = entry->encoding;
> +    e.content_len = entry->size;
> +    e.max_write_len = entry->max_size;
> +
> +    ret = -EFAULT;
> +    if ( copy_to_guest(uaddr, &e, 1) )
> +        goto out;
> +
> +    ret = -ENOBUFS;
> +    if ( ulen < entry->size + sizeof(e) )
> +        goto out;
> +
> +    guest_handle_add_offset(uaddr, sizeof(e));
> +
> +    ret = entry->read(entry, uaddr);
> +
> + out:
> +    return ret;
> +}
> +
> +int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
> +{
> +    char *buf;
> +    int ret;
> +
> +    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
> +    ASSERT(ulen <= leaf->e.max_size);
> +
> +    if ( leaf->e.type != XEN_HYPFS_TYPE_STRING &&
> +         leaf->e.type != XEN_HYPFS_TYPE_BLOB && ulen != leaf->e.size )
> +        return -EDOM;
> +
> +    buf = xmalloc_array(char, ulen);
> +    if ( !buf )
> +        return -ENOMEM;
> +
> +    ret = -EFAULT;
> +    if ( copy_from_guest(buf, uaddr, ulen) )
> +        goto out;
> +
> +    ret = -EINVAL;
> +    if ( leaf->e.type == XEN_HYPFS_TYPE_STRING &&
> +         leaf->e.encoding == XEN_HYPFS_ENC_PLAIN &&
> +         memchr(buf, 0, ulen) != (buf + ulen - 1) )
> +        goto out;
> +
> +    ret = 0;
> +    memcpy(leaf->write_ptr, buf, ulen);
> +    leaf->e.size = ulen;
> +
> + out:
> +    xfree(buf);
> +    return ret;
> +}
> +
> +int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen)
> +{
> +    bool buf;
> +
> +    ASSERT(this_cpu(hypfs_locked) == hypfs_write_locked);
> +    ASSERT(leaf->e.type == XEN_HYPFS_TYPE_BOOL &&
> +           leaf->e.size == sizeof(bool) &&
> +           leaf->e.max_size == sizeof(bool));
> +
> +    if ( ulen != leaf->e.max_size )
> +        return -EDOM;
> +
> +    if ( copy_from_guest(&buf, uaddr, ulen) )
> +        return -EFAULT;
> +
> +    *(bool *)leaf->write_ptr = buf;
> +
> +    return 0;
> +}
> +
> +static int hypfs_write(struct hypfs_entry *entry,
> +                       XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned long ulen)
> +{
> +    struct hypfs_entry_leaf *l;
> +
> +    if ( !entry->write )
> +        return -EACCES;
> +
> +    ASSERT(entry->max_size);
> +
> +    if ( ulen > entry->max_size )
> +        return -ENOSPC;
> +
> +    l = container_of(entry, struct hypfs_entry_leaf, e);
> +
> +    return entry->write(l, uaddr, ulen);
> +}
> +
> +long do_hypfs_op(unsigned int cmd,
> +                 XEN_GUEST_HANDLE_PARAM(const_char) arg1, unsigned long arg2,
> +                 XEN_GUEST_HANDLE_PARAM(void) arg3, unsigned long arg4)
> +{
> +    int ret;
> +    struct hypfs_entry *entry;
> +    static char path[XEN_HYPFS_MAX_PATHLEN];
> +
> +    if ( xsm_hypfs_op(XSM_PRIV) )
> +        return -EPERM;
> +
> +    if ( cmd == XEN_HYPFS_OP_get_version )
> +    {
> +        if ( !guest_handle_is_null(arg1) || arg2 ||
> +             !guest_handle_is_null(arg3) || arg4 )
> +            return -EINVAL;
> +
> +        return XEN_HYPFS_VERSION;
> +    }
> +
> +    if ( cmd == XEN_HYPFS_OP_write_contents )
> +        hypfs_write_lock();
> +    else
> +        hypfs_read_lock();
> +
> +    ret = hypfs_get_path_user(path, arg1, arg2);
> +    if ( ret )
> +        goto out;
> +
> +    entry = hypfs_get_entry(path);
> +    if ( !entry )
> +    {
> +        ret = -ENOENT;
> +        goto out;
> +    }
> +
> +    switch ( cmd )
> +    {
> +    case XEN_HYPFS_OP_read:
> +        ret = hypfs_read(entry, arg3, arg4);
> +        break;
> +
> +    case XEN_HYPFS_OP_write_contents:
> +        ret = hypfs_write(entry, arg3, arg4);
> +        break;
> +
> +    default:
> +        ret = -EOPNOTSUPP;
> +        break;
> +    }
> +
> + out:
> +    hypfs_unlock();
> +
> +    return ret;
> +}
> diff --git a/xen/include/Makefile b/xen/include/Makefile
> index 2a10725d68..089314dc72 100644
> --- a/xen/include/Makefile
> +++ b/xen/include/Makefile
> @@ -9,6 +9,7 @@ headers-y := \
>       compat/event_channel.h \
>       compat/features.h \
>       compat/grant_table.h \
> +    compat/hypfs.h \
>       compat/kexec.h \
>       compat/memory.h \
>       compat/nmi.h \
> diff --git a/xen/include/public/hypfs.h b/xen/include/public/hypfs.h
> new file mode 100644
> index 0000000000..63a5df1629
> --- /dev/null
> +++ b/xen/include/public/hypfs.h
> @@ -0,0 +1,129 @@
> +/******************************************************************************
> + * Xen Hypervisor Filesystem
> + *
> + * Copyright (c) 2019, SUSE Software Solutions Germany GmbH
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a copy
> + * of this software and associated documentation files (the "Software"), to
> + * deal in the Software without restriction, including without limitation the
> + * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
> + * sell copies of the Software, and to permit persons to whom the Software is
> + * furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
> + * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + *
> + */
> +
> +#ifndef __XEN_PUBLIC_HYPFS_H__
> +#define __XEN_PUBLIC_HYPFS_H__
> +
> +#include "xen.h"
> +
> +/*
> + * Definitions for the __HYPERVISOR_hypfs_op hypercall.
> + */
> +
> +/* Highest version number of the hypfs interface currently defined. */
> +#define XEN_HYPFS_VERSION      1
> +
> +/* Maximum length of a path in the filesystem. */
> +#define XEN_HYPFS_MAX_PATHLEN  1024
> +
> +struct xen_hypfs_direntry {
> +    uint8_t type;
> +#define XEN_HYPFS_TYPE_DIR     0
> +#define XEN_HYPFS_TYPE_BLOB    1
> +#define XEN_HYPFS_TYPE_STRING  2
> +#define XEN_HYPFS_TYPE_UINT    3
> +#define XEN_HYPFS_TYPE_INT     4
> +#define XEN_HYPFS_TYPE_BOOL    5
> +    uint8_t encoding;
> +#define XEN_HYPFS_ENC_PLAIN    0
> +#define XEN_HYPFS_ENC_GZIP     1
> +    uint16_t pad;              /* Returned as 0. */
> +    uint32_t content_len;      /* Current length of data. */
> +    uint32_t max_write_len;    /* Max. length for writes (0 if read-only). */
> +};
> +
> +struct xen_hypfs_dirlistentry {
> +    struct xen_hypfs_direntry e;
> +    /* Offset in bytes to next entry (0 == this is the last entry). */
> +    uint16_t off_next;
> +    /* Zero terminated entry name, possibly with some padding for alignment. */
> +    char name[XEN_FLEX_ARRAY_DIM];
> +};
> +
> +/*
> + * Hypercall operations.
> + */
> +
> +/*
> + * XEN_HYPFS_OP_get_version
> + *
> + * Read highest interface version supported by the hypervisor.
> + *
> + * arg1 - arg4: all 0/NULL
> + *
> + * Possible return values:
> + * >0: highest supported interface version
> + * <0: negative Xen errno value
> + */
> +#define XEN_HYPFS_OP_get_version     0
> +
> +/*
> + * XEN_HYPFS_OP_read
> + *
> + * Read a filesystem entry.
> + *
> + * Returns the direntry and contents of an entry in the buffer supplied by the
> + * caller (struct xen_hypfs_direntry with the contents following directly
> + * after it).
> + * The data buffer must be at least the size of the direntry returned. If the
> + * data buffer is not large enough for all the data, -ENOBUFS is returned and
> + * no entry contents are copied, but the direntry will still contain the size
> + * needed for the contents.
> + * The format of the contents is according to its entry type and encoding.
> + * The contents of a directory are multiple struct xen_hypfs_dirlistentry
> + * items.
> + *
> + * arg1: XEN_GUEST_HANDLE(path name)
> + * arg2: length of path name (including trailing zero byte)
> + * arg3: XEN_GUEST_HANDLE(data buffer written by hypervisor)
> + * arg4: data buffer size
> + *
> + * Possible return values:
> + * 0: success
> + * <0: negative Xen errno value
> + */
> +#define XEN_HYPFS_OP_read              1
> +
> +/*
> + * XEN_HYPFS_OP_write_contents
> + *
> + * Write contents of a filesystem entry.
> + *
> + * Writes an entry with the contents of a buffer supplied by the caller.
> + * The data type and encoding can't be changed. The size can be changed only
> + * for blobs and strings.
> + *
> + * arg1: XEN_GUEST_HANDLE(path name)
> + * arg2: length of path name (including trailing zero byte)
> + * arg3: XEN_GUEST_HANDLE(content buffer read by hypervisor)
> + * arg4: content buffer size
> + *
> + * Possible return values:
> + * 0: success
> + * <0: negative Xen errno value
> + */
> +#define XEN_HYPFS_OP_write_contents    2
> +
> +#endif /* __XEN_PUBLIC_HYPFS_H__ */
> diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
> index 75b1619d0d..945ef30273 100644
> --- a/xen/include/public/xen.h
> +++ b/xen/include/public/xen.h
> @@ -130,6 +130,7 @@ DEFINE_XEN_GUEST_HANDLE(xen_ulong_t);
>   #define __HYPERVISOR_argo_op              39
>   #define __HYPERVISOR_xenpmu_op            40
>   #define __HYPERVISOR_dm_op                41
> +#define __HYPERVISOR_hypfs_op             42
>   
>   /* Architecture-specific hypercall definitions. */
>   #define __HYPERVISOR_arch_0               48
> diff --git a/xen/include/xen/hypercall.h b/xen/include/xen/hypercall.h
> index d82a293377..655acc7f47 100644
> --- a/xen/include/xen/hypercall.h
> +++ b/xen/include/xen/hypercall.h
> @@ -150,6 +150,16 @@ do_dm_op(
>       unsigned int nr_bufs,
>       XEN_GUEST_HANDLE_PARAM(xen_dm_op_buf_t) bufs);
>   
> +#ifdef CONFIG_HYPFS
> +extern long
> +do_hypfs_op(
> +    unsigned int cmd,
> +    XEN_GUEST_HANDLE_PARAM(const_char) arg1,
> +    unsigned long arg2,
> +    XEN_GUEST_HANDLE_PARAM(void) arg3,
> +    unsigned long arg4);
> +#endif
> +
>   #ifdef CONFIG_COMPAT
>   
>   extern int
> diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
> new file mode 100644
> index 0000000000..5c6a0ccece
> --- /dev/null
> +++ b/xen/include/xen/hypfs.h
> @@ -0,0 +1,121 @@
> +#ifndef __XEN_HYPFS_H__
> +#define __XEN_HYPFS_H__
> +
> +#ifdef CONFIG_HYPFS
> +#include <xen/list.h>
> +#include <xen/string.h>
> +#include <public/hypfs.h>
> +
> +struct hypfs_entry_leaf;
> +
> +struct hypfs_entry {
> +    unsigned short type;
> +    unsigned short encoding;
> +    unsigned int size;
> +    unsigned int max_size;
> +    const char *name;
> +    struct list_head list;
> +    int (*read)(const struct hypfs_entry *entry,
> +                XEN_GUEST_HANDLE_PARAM(void) uaddr);
> +    int (*write)(struct hypfs_entry_leaf *leaf,
> +                 XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
> +};
> +
> +struct hypfs_entry_leaf {
> +    struct hypfs_entry e;
> +    union {
> +        const void *content;
> +        void *write_ptr;
> +    };
> +};
> +
> +struct hypfs_entry_dir {
> +    struct hypfs_entry e;
> +    struct list_head dirlist;
> +};
> +
> +#define HYPFS_DIR_INIT(var, nam)                  \
> +    struct hypfs_entry_dir __read_mostly var = {  \
> +        .e.type = XEN_HYPFS_TYPE_DIR,             \
> +        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
> +        .e.name = (nam),                          \
> +        .e.size = 0,                              \
> +        .e.max_size = 0,                          \
> +        .e.list = LIST_HEAD_INIT(var.e.list),     \
> +        .e.read = hypfs_read_dir,                 \
> +        .dirlist = LIST_HEAD_INIT(var.dirlist),   \
> +    }
> +
> +#define HYPFS_VARSIZE_INIT(var, typ, nam, msz)    \
> +    struct hypfs_entry_leaf __read_mostly var = { \
> +        .e.type = (typ),                          \
> +        .e.encoding = XEN_HYPFS_ENC_PLAIN,        \
> +        .e.name = (nam),                          \
> +        .e.max_size = (msz),                      \
> +        .e.read = hypfs_read_leaf,                \
> +    }
> +
> +/* Content and size need to be set via hypfs_string_set_reference(). */
> +#define HYPFS_STRING_INIT(var, nam)               \
> +    HYPFS_VARSIZE_INIT(var, XEN_HYPFS_TYPE_STRING, nam, 0)
> +
> +/*
> + * Set content and size of a XEN_HYPFS_TYPE_STRING node. The node will point
> + * to str, so any later modification of *str should be followed by a call
> + * to hypfs_string_set_reference() in order to update the size of the node
> + * data.
> + */
> +static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
> +                                              const char *str)
> +{
> +    leaf->content = str;
> +    leaf->e.size = strlen(str) + 1;
> +}
> +
> +#define HYPFS_FIXEDSIZE_INIT(var, typ, nam, contvar, wr) \
> +    struct hypfs_entry_leaf __read_mostly var = {        \
> +        .e.type = (typ),                                 \
> +        .e.encoding = XEN_HYPFS_ENC_PLAIN,               \
> +        .e.name = (nam),                                 \
> +        .e.size = sizeof(contvar),                       \
> +        .e.max_size = (wr) ? sizeof(contvar) : 0,        \
> +        .e.read = hypfs_read_leaf,                       \
> +        .e.write = (wr),                                 \
> +        .content = &(contvar),                           \
> +    }
> +
> +#define HYPFS_UINT_INIT(var, nam, contvar)                       \
> +    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, NULL)
> +#define HYPFS_UINT_INIT_WRITABLE(var, nam, contvar)              \
> +    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_UINT, nam, contvar, \
> +                         hypfs_write_leaf)
> +
> +#define HYPFS_INT_INIT(var, nam, contvar)                        \
> +    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, NULL)
> +#define HYPFS_INT_INIT_WRITABLE(var, nam, contvar)               \
> +    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_INT, nam, contvar, \
> +                         hypfs_write_leaf)
> +
> +#define HYPFS_BOOL_INIT(var, nam, contvar)                       \
> +    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, NULL)
> +#define HYPFS_BOOL_INIT_WRITABLE(var, nam, contvar)              \
> +    HYPFS_FIXEDSIZE_INIT(var, XEN_HYPFS_TYPE_BOOL, nam, contvar, \
> +                         hypfs_write_bool)
> +
> +extern struct hypfs_entry_dir hypfs_root;
> +
> +int hypfs_add_dir(struct hypfs_entry_dir *parent,
> +                  struct hypfs_entry_dir *dir, bool nofault);
> +int hypfs_add_leaf(struct hypfs_entry_dir *parent,
> +                   struct hypfs_entry_leaf *leaf, bool nofault);
> +int hypfs_read_dir(const struct hypfs_entry *entry,
> +                   XEN_GUEST_HANDLE_PARAM(void) uaddr);
> +int hypfs_read_leaf(const struct hypfs_entry *entry,
> +                    XEN_GUEST_HANDLE_PARAM(void) uaddr);
> +int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
> +int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
> +                     XEN_GUEST_HANDLE_PARAM(void) uaddr, unsigned int ulen);
> +#endif
> +
> +#endif /* __XEN_HYPFS_H__ */
> diff --git a/xen/include/xlat.lst b/xen/include/xlat.lst
> index 95f5e5592b..0921d4a8d0 100644
> --- a/xen/include/xlat.lst
> +++ b/xen/include/xlat.lst
> @@ -86,6 +86,8 @@
>   ?	vcpu_hvm_context		hvm/hvm_vcpu.h
>   ?	vcpu_hvm_x86_32			hvm/hvm_vcpu.h
>   ?	vcpu_hvm_x86_64			hvm/hvm_vcpu.h
> +?	hypfs_direntry			hypfs.h
> +?	hypfs_dirlistentry		hypfs.h
>   ?	kexec_exec			kexec.h
>   !	kexec_image			kexec.h
>   !	kexec_range			kexec.h
> diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
> index 295dd67c48..2368acebed 100644
> --- a/xen/include/xsm/dummy.h
> +++ b/xen/include/xsm/dummy.h
> @@ -434,6 +434,12 @@ static XSM_INLINE int xsm_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
>       return xsm_default_action(action, current->domain, NULL);
>   }
>   
> +static XSM_INLINE int xsm_hypfs_op(XSM_DEFAULT_VOID)
> +{
> +    XSM_ASSERT_ACTION(XSM_PRIV);
> +    return xsm_default_action(action, current->domain, NULL);
> +}
> +
>   static XSM_INLINE long xsm_do_xsm_op(XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
>   {
>       return -ENOSYS;
> diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
> index e22d6160b5..a80bcf3e42 100644
> --- a/xen/include/xsm/xsm.h
> +++ b/xen/include/xsm/xsm.h
> @@ -127,6 +127,7 @@ struct xsm_operations {
>       int (*resource_setup_misc) (void);
>   
>       int (*page_offline)(uint32_t cmd);
> +    int (*hypfs_op)(void);
>   
>       long (*do_xsm_op) (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op);
>   #ifdef CONFIG_COMPAT
> @@ -536,6 +537,11 @@ static inline int xsm_page_offline(xsm_default_t def, uint32_t cmd)
>       return xsm_ops->page_offline(cmd);
>   }
>   
> +static inline int xsm_hypfs_op(xsm_default_t def)
> +{
> +    return xsm_ops->hypfs_op();
> +}
> +
>   static inline long xsm_do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
>   {
>       return xsm_ops->do_xsm_op(op);
> diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
> index 5705e52791..d4cce68089 100644
> --- a/xen/xsm/dummy.c
> +++ b/xen/xsm/dummy.c
> @@ -103,6 +103,7 @@ void __init xsm_fixup_ops (struct xsm_operations *ops)
>       set_to_dummy_if_null(ops, resource_setup_misc);
>   
>       set_to_dummy_if_null(ops, page_offline);
> +    set_to_dummy_if_null(ops, hypfs_op);
>       set_to_dummy_if_null(ops, hvm_param);
>       set_to_dummy_if_null(ops, hvm_control);
>       set_to_dummy_if_null(ops, hvm_param_nested);
> diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
> index 4649e6fd95..a2c78e445c 100644
> --- a/xen/xsm/flask/hooks.c
> +++ b/xen/xsm/flask/hooks.c
> @@ -1173,6 +1173,11 @@ static inline int flask_page_offline(uint32_t cmd)
>       }
>   }
>   
> +static inline int flask_hypfs_op(void)
> +{
> +    return domain_has_xen(current->domain, XEN__HYPFS_OP);
> +}
> +
>   static int flask_add_to_physmap(struct domain *d1, struct domain *d2)
>   {
>       return domain_has_perm(d1, d2, SECCLASS_MMU, MMU__PHYSMAP);
> @@ -1812,6 +1817,7 @@ static struct xsm_operations flask_ops = {
>       .resource_setup_misc = flask_resource_setup_misc,
>   
>       .page_offline = flask_page_offline,
> +    .hypfs_op = flask_hypfs_op,
>       .hvm_param = flask_hvm_param,
>       .hvm_control = flask_hvm_param,
>       .hvm_param_nested = flask_hvm_param_nested,
> diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
> index c055c14c26..c9e385fb9b 100644
> --- a/xen/xsm/flask/policy/access_vectors
> +++ b/xen/xsm/flask/policy/access_vectors
> @@ -67,6 +67,8 @@ class xen
>       lockprof
>   # XEN_SYSCTL_cpupool_op
>       cpupool_op
> +# hypfs hypercall
> +    hypfs_op
>   # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_getinfo, XEN_SYSCTL_sched_id, XEN_DOMCTL_SCHEDOP_getvcpuinfo
>       getscheduler
>   # XEN_SYSCTL_scheduler_op with XEN_DOMCTL_SCHEDOP_putinfo, XEN_DOMCTL_SCHEDOP_putvcpuinfo
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 12:53:01 2020
Subject: Re: [PATCH v10 09/12] xen: add runtime parameter access support to
 hypfs
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-10-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fcc51570-cbfe-4fca-42d9-2b684f52e651@xen.org>
Date: Thu, 21 May 2020 13:52:39 +0100
In-Reply-To: <20200519072106.26894-10-jgross@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Juergen,

On 19/05/2020 08:21, Juergen Gross wrote:
> Add support to read and modify values of hypervisor runtime parameters
> via the hypervisor file system.
> 
> As runtime parameters can be modified via a sysctl, too, this path has
> to take the hypfs rw_lock as writer.
> 
> For custom runtime parameters the connection between the parameter
> value and the file system is done via an init function which will set
> the initial value (if needed) and the leaf properties.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 13:12:46 2020
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <20200519015503.115236-13-jandryuk@gmail.com>
 <edb28eba-622e-e9e6-5d22-0d5e86b503bd@citrix.com>
In-Reply-To: <edb28eba-622e-e9e6-5d22-0d5e86b503bd@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 21 May 2020 09:12:13 -0400
Message-ID: <CAKf6xpuGDHrWFrj_f+bZU9xVK878YKPR=u86MpzDw-0xbFvZVg@mail.gmail.com>
Subject: Re: [PATCH v7 12/19] tools: add simple vchan-socket-proxy
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 21, 2020 at 7:09 AM Andrew Cooper <andrew.cooper3@citrix.com> wrote:
>
> On 19/05/2020 02:54, Jason Andryuk wrote:
> > +static int connect_socket(const char *path_or_fd) {
> > +    int fd;
> > +    char *endptr;
> > +    struct sockaddr_un addr;
> > +
> > +    fd = strtoll(path_or_fd, &endptr, 0);
> > +    if (*endptr == '\0') {
> > +        set_nonblocking(fd, 1);
> > +        return fd;
> > +    }
> > +
> > +    fd = socket(AF_UNIX, SOCK_STREAM, 0);
> > +    if (fd == -1)
> > +        return -1;
> > +
> > +    addr.sun_family = AF_UNIX;
> > +    strncpy(addr.sun_path, path_or_fd, sizeof(addr.sun_path));
>
> Coverity has identified issues, some perhaps more concerning than others.

Thanks.  I'll take a look.

-Jason


From xen-devel-bounces@lists.xenproject.org Thu May 21 13:34:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 13:34:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jblKO-0001Gd-2S; Thu, 21 May 2020 13:33:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzzg=7D=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jblKN-0001GS-00
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 13:33:39 +0000
X-Inumbo-ID: ac620dae-9b67-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac620dae-9b67-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 13:33:37 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 5gmKjkebEWCNOFvhOQXaQo4wL+J51hJQuVeY2/iYeTHSBCg77L82HAyYeNxrLhJeXIcIWCLoU1
 5R+k+iQjMla6m6+UO47x/hX6o+CFL4iyWxLkSSoMQDdr+WzsbcJ0Q+h3MNlE+ck956YVl8dn1B
 q1N8hC1Out+sqzoooUoZ8/9c4FFQ6khmMSqCbK0P3OFfqMoXPNP7bHwGdylTjN8/uoijqh85yV
 gYpceTcHpy4JiUgFIBBFniLY4IZWk/4zNNBOLWG3HdnN/Or7dDzXXe+ayvhYdEsL/PI+VXtefT
 Cr0=
X-SBRS: 2.7
X-MesageID: 18366171
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18366171"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24262.33569.802678.575518@mariner.uk.xensource.com>
Date: Thu, 21 May 2020 14:33:21 +0100
To: Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v4 3/5] tools/misc: add xen-domctx to present domain
 context
In-Reply-To: <20200521085932.10508-4-paul@xen.org>
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-4-paul@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Paul
 Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paul Durrant writes ("[PATCH v4 3/5] tools/misc: add xen-domctx to present domain context"):
> This tool is analogous to 'xen-hvmctx' which presents HVM context.
> Subsequent patches will add 'dump' functions when new records are
> introduced.

This looks plausible to me.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

However, I have CC'd Andrew in case he wants to have an opinion.
Andy's done a lot of work on the save/restore stuff and he should be
given a chance to object.  Please would committers not commit this
week, or until Andy also acks it.

Thanks,
Ian.

> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wl@xen.org>
> 
> v3:
>  - Re-worked to avoid copying onto stack
>  - Added optional typecode and instance arguments
> 
> v2:
>  - Change name from 'xen-ctx' to 'xen-domctx'
> ---
>  .gitignore              |   1 +
>  tools/misc/Makefile     |   4 +
>  tools/misc/xen-domctx.c | 200 ++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 205 insertions(+)
>  create mode 100644 tools/misc/xen-domctx.c
> 
> diff --git a/.gitignore b/.gitignore
> index 7418ce9829..6da3030f0d 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -209,6 +209,7 @@ tools/misc/xen_cpuperf
>  tools/misc/xen-cpuid
>  tools/misc/xen-detect
>  tools/misc/xen-diag
> +tools/misc/xen-domctx
>  tools/misc/xen-tmem-list-parse
>  tools/misc/xen-livepatch
>  tools/misc/xenperf
> diff --git a/tools/misc/Makefile b/tools/misc/Makefile
> index 63947bfadc..ef25524354 100644
> --- a/tools/misc/Makefile
> +++ b/tools/misc/Makefile
> @@ -30,6 +30,7 @@ INSTALL_SBIN                   += xenpm
>  INSTALL_SBIN                   += xenwatchdogd
>  INSTALL_SBIN                   += xen-livepatch
>  INSTALL_SBIN                   += xen-diag
> +INSTALL_SBIN                   += xen-domctx
>  INSTALL_SBIN += $(INSTALL_SBIN-y)
>  
>  # Everything to be installed in a private bin/
> @@ -108,6 +109,9 @@ xen-livepatch: xen-livepatch.o
>  xen-diag: xen-diag.o
>  	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
>  
> +xen-domctx: xen-domctx.o
> +	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
> +
>  xen-lowmemd: xen-lowmemd.o
>  	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
>  
> diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
> new file mode 100644
> index 0000000000..243325dfce
> --- /dev/null
> +++ b/tools/misc/xen-domctx.c
> @@ -0,0 +1,200 @@
> +/*
> + * xen-domctx.c
> + *
> + * Print out domain save records in a human-readable way.
> + *
> + * Copyright Amazon.com Inc. or its affiliates.
> + *
> + * Permission is hereby granted, free of charge, to any person obtaining a
> + * copy of this software and associated documentation files (the "Software"),
> + * to deal in the Software without restriction, including without limitation
> + * the rights to use, copy, modify, merge, publish, distribute, sublicense,
> + * and/or sell copies of the Software, and to permit persons to whom the
> + * Software is furnished to do so, subject to the following conditions:
> + *
> + * The above copyright notice and this permission notice shall be included in
> + * all copies or substantial portions of the Software.
> + *
> + * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
> + * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
> + * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
> + * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
> + * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
> + * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
> + * DEALINGS IN THE SOFTWARE.
> + */
> +
> +#include <inttypes.h>
> +#include <stdio.h>
> +#include <stdlib.h>
> +#include <string.h>
> +#include <errno.h>
> +
> +#include <xenctrl.h>
> +#include <xen/xen.h>
> +#include <xen/domctl.h>
> +#include <xen/save.h>
> +
> +static void *buf = NULL;
> +static size_t len, off;
> +
> +#define GET_PTR(_x)                                                        \
> +    do {                                                                   \
> +        if ( len - off < sizeof(*(_x)) )                                   \
> +        {                                                                  \
> +            fprintf(stderr,                                                \
> +                    "error: need another %lu bytes, only %lu available\n", \
> +                    sizeof(*(_x)), len - off);                             \
> +            exit(1);                                                       \
> +        }                                                                  \
> +        (_x) = buf + off;                                                  \
> +    } while (false);
> +
> +static void dump_header(void)
> +{
> +    DOMAIN_SAVE_TYPE(HEADER) *h;
> +
> +    GET_PTR(h);
> +
> +    printf("    HEADER: magic %#x, version %u\n",
> +           h->magic, h->version);
> +
> +}
> +
> +static void dump_end(void)
> +{
> +    DOMAIN_SAVE_TYPE(END) *e;
> +
> +    GET_PTR(e);
> +
> +    printf("    END\n");
> +}
> +
> +static void usage(const char *prog)
> +{
> +    fprintf(stderr, "usage: %s <domid> [ <typecode> [ <instance> ]]\n",
> +            prog);
> +    exit(1);
> +}
> +
> +int main(int argc, char **argv)
> +{
> +    char *s, *e;
> +    long domid;
> +    long typecode = -1;
> +    long instance = -1;
> +    unsigned int entry;
> +    xc_interface *xch;
> +    int rc;
> +
> +    if ( argc < 2 || argc > 4 )
> +        usage(argv[0]);
> +
> +    s = e = argv[1];
> +    domid = strtol(s, &e, 0);
> +
> +    if ( *s == '\0' || *e != '\0' ||
> +         domid < 0 || domid >= DOMID_FIRST_RESERVED )
> +    {
> +        fprintf(stderr, "invalid domid '%s'\n", s);
> +        exit(1);
> +    }
> +
> +    if ( argc >= 3 )
> +    {
> +        s = e = argv[2];
> +        typecode = strtol(s, &e, 0);
> +
> +        if ( *s == '\0' || *e != '\0' )
> +        {
> +            fprintf(stderr, "invalid typecode '%s'\n", s);
> +            exit(1);
> +        }
> +    }
> +
> +    if ( argc == 4 )
> +    {
> +        s = e = argv[3];
> +        instance = strtol(s, &e, 0);
> +
> +        if ( *s == '\0' || *e != '\0' )
> +        {
> +            fprintf(stderr, "invalid instance '%s'\n", s);
> +            exit(1);
> +        }
> +    }
> +
> +    xch = xc_interface_open(0, 0, 0);
> +    if ( !xch )
> +    {
> +        fprintf(stderr, "error: can't open libxc handle\n");
> +        exit(1);
> +    }
> +
> +    rc = xc_domain_getcontext(xch, domid, NULL, &len);
> +    if ( rc < 0 )
> +    {
> +        fprintf(stderr, "error: can't get record length for dom %lu: %s\n",
> +                domid, strerror(errno));
> +        exit(1);
> +    }
> +
> +    buf = malloc(len);
> +    if ( !buf )
> +    {
> +        fprintf(stderr, "error: can't allocate %lu bytes\n", len);
> +        exit(1);
> +    }
> +
> +    rc = xc_domain_getcontext(xch, domid, buf, &len);
> +    if ( rc < 0 )
> +    {
> +        fprintf(stderr, "error: can't get domain record for dom %lu: %s\n",
> +                domid, strerror(errno));
> +        exit(1);
> +    }
> +    off = 0;
> +
> +    entry = 0;
> +    for ( ; ; )
> +    {
> +        struct domain_save_descriptor *desc;
> +
> +        GET_PTR(desc);
> +
> +        off += sizeof(*desc);
> +
> +        if ( (typecode < 0 || typecode == desc->typecode) &&
> +             (instance < 0 || instance == desc->instance) )
> +        {
> +            printf("[%u] type: %u instance: %u length: %u\n", entry++,
> +                   desc->typecode, desc->instance, desc->length);
> +
> +            switch (desc->typecode)
> +            {
> +            case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
> +            case DOMAIN_SAVE_CODE(END): dump_end(); break;
> +            default:
> +                printf("Unknown type %u: skipping\n", desc->typecode);
> +                break;
> +            }
> +        }
> +
> +        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
> +            break;
> +
> +        off += desc->length;
> +    }
> +
> +    return 0;
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> -- 
> 2.20.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 21 13:34:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 13:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jblL9-0001KG-Cb; Thu, 21 May 2020 13:34:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zzzg=7D=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jblL8-0001KA-Ft
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 13:34:26 +0000
X-Inumbo-ID: c8f99a4a-9b67-11ea-9887-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8f99a4a-9b67-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 13:34:24 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: zVjLvLt9e6qx2PPdBy4Mr7/KIQtphzeMivAVa7L4K3hz7m6Ftr4FPsE2WP0K2ckAE4D797aqVU
 vOCQui2wRGVtku2UC32XwABYxK//bRK1uOXlQcTD2KD4xw9Zlrpr5yTyTDpSkA8y+fsPuOp72t
 61EUibRRX7OLFQF2YpSVFQyusbDawffoXMXg45Z4rcA/QkSoETW2I/iK7CBrp5FdTbTc+y6rGt
 PWO1yzq/QCnTs3h9xghfdTSvwl1J608I3MmWZnKyOnFGim8c5HzzMU13EY4MtAITg7stGZ0Vrq
 0+U=
X-SBRS: 2.7
X-MesageID: 18348881
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18348881"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24262.33627.309482.955954@mariner.uk.xensource.com>
Date: Thu, 21 May 2020 14:34:19 +0100
To: Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v4 5/5] tools/libxc: make use of domain context SHARED_INFO
 record...
In-Reply-To: <20200521085932.10508-6-paul@xen.org>
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-6-paul@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Paul
 Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paul Durrant writes ("[PATCH v4 5/5] tools/libxc: make use of domain context SHARED_INFO record..."):
> ... in the save/restore code.
> 
> This patch replaces direct mapping of the shared_info_frame (retrieved
> using XEN_DOMCTL_getdomaininfo) with save/load of the domain context
> SHARED_INFO record.
> 
> No modifications are made to the definition of the migration stream at
> this point. Subsequent patches will define a record in the libxc domain
> image format for passing domain context and convert the save/restore code
> to use that.

Andy, I think this needs your opinion.

Thanks,
Ian.

> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wl@xen.org>
> 
> v3:
>  - Moved basic get/set domain context functions to common code
> 
> v2:
>  - Re-based (now making use of DOMAIN_SAVE_FLAG_IGNORE)
> ---
>  tools/libxc/xc_sr_common.c         | 67 +++++++++++++++++++++++++++
>  tools/libxc/xc_sr_common.h         | 11 ++++-
>  tools/libxc/xc_sr_common_x86_pv.c  | 74 ++++++++++++++++++++++++++++++
>  tools/libxc/xc_sr_common_x86_pv.h  |  3 ++
>  tools/libxc/xc_sr_restore_x86_pv.c | 26 ++++-------
>  tools/libxc/xc_sr_save_x86_pv.c    | 43 ++++++++---------
>  tools/libxc/xg_save_restore.h      |  1 +
>  7 files changed, 181 insertions(+), 44 deletions(-)
> 
> diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
> index dd9a11b4b5..1acb3765aa 100644
> --- a/tools/libxc/xc_sr_common.c
> +++ b/tools/libxc/xc_sr_common.c
> @@ -138,6 +138,73 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
>      return 0;
>  };
>  
> +int get_domain_context(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +    size_t len = 0;
> +    int rc;
> +
> +    if ( ctx->domain_context.buffer )
> +    {
> +        ERROR("Domain context already present");
> +        return -1;
> +    }
> +
> +    rc = xc_domain_getcontext(xch, ctx->domid, NULL, &len);
> +    if ( rc < 0 )
> +    {
> +        PERROR("Unable to get size of domain context");
> +        return -1;
> +    }
> +
> +    ctx->domain_context.buffer = malloc(len);
> +    if ( !ctx->domain_context.buffer )
> +    {
> +        PERROR("Unable to allocate memory for domain context");
> +        return -1;
> +    }
> +
> +    rc = xc_domain_getcontext(xch, ctx->domid, ctx->domain_context.buffer,
> +                              &len);
> +    if ( rc < 0 )
> +    {
> +        PERROR("Unable to get domain context");
> +        return -1;
> +    }
> +
> +    ctx->domain_context.len = len;
> +
> +    return 0;
> +}
> +
> +int set_domain_context(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +    int rc;
> +
> +    if ( !ctx->domain_context.buffer )
> +    {
> +        ERROR("Domain context not present");
> +        return -1;
> +    }
> +
> +    rc = xc_domain_setcontext(xch, ctx->domid, ctx->domain_context.buffer,
> +                              ctx->domain_context.len);
> +
> +    if ( rc < 0 )
> +    {
> +        PERROR("Unable to set domain context");
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
> +void common_cleanup(struct xc_sr_context *ctx)
> +{
> +    free(ctx->domain_context.buffer);
> +}
> +
>  static void __attribute__((unused)) build_assertions(void)
>  {
>      BUILD_BUG_ON(sizeof(struct xc_sr_ihdr) != 24);
> diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
> index 5dd51ccb15..0d61978b08 100644
> --- a/tools/libxc/xc_sr_common.h
> +++ b/tools/libxc/xc_sr_common.h
> @@ -208,6 +208,11 @@ struct xc_sr_context
>  
>      xc_dominfo_t dominfo;
>  
> +    struct {
> +        void *buffer;
> +        unsigned int len;
> +    } domain_context;
> +
>      union /* Common save or restore data. */
>      {
>          struct /* Save data. */
> @@ -314,7 +319,7 @@ struct xc_sr_context
>                  /* The guest pfns containing the p2m leaves */
>                  xen_pfn_t *p2m_pfns;
>  
> -                /* Read-only mapping of guests shared info page */
> +                /* Pointer to shared_info (located in context buffer) */
>                  shared_info_any_t *shinfo;
>  
>                  /* p2m generation count for verifying validity of local p2m. */
> @@ -425,6 +430,10 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
>  int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
>                    const xen_pfn_t *original_pfns, const uint32_t *types);
>  
> +int get_domain_context(struct xc_sr_context *ctx);
> +int set_domain_context(struct xc_sr_context *ctx);
> +void common_cleanup(struct xc_sr_context *ctx);
> +
>  #endif
>  /*
>   * Local variables:
> diff --git a/tools/libxc/xc_sr_common_x86_pv.c b/tools/libxc/xc_sr_common_x86_pv.c
> index d3d425cb82..69d9b142b8 100644
> --- a/tools/libxc/xc_sr_common_x86_pv.c
> +++ b/tools/libxc/xc_sr_common_x86_pv.c
> @@ -182,6 +182,80 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
>      return rc;
>  }
>  
> +int x86_pv_get_shinfo(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +    unsigned int off = 0;
> +    int rc;
> +
> +#define GET_PTR(_x)                                                         \
> +    do {                                                                    \
> +        if ( ctx->domain_context.len - off < sizeof(*(_x)) )                \
> +        {                                                                   \
> +            ERROR("Need another %lu bytes of context, only %u available\n", \
> +                  sizeof(*(_x)), ctx->domain_context.len - off);            \
> +            return -1;                                                      \
> +        }                                                                   \
> +        (_x) = ctx->domain_context.buffer + off;                            \
> +    } while (false);
> +
> +    rc = get_domain_context(ctx);
> +    if ( rc )
> +        return rc;
> +
> +    for ( ; ; )
> +    {
> +        struct domain_save_descriptor *desc;
> +
> +        GET_PTR(desc);
> +
> +        off += sizeof(*desc);
> +
> +        switch (desc->typecode)
> +        {
> +        case DOMAIN_SAVE_CODE(SHARED_INFO):
> +        {
> +            DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
> +
> +            GET_PTR(s);
> +
> +            ctx->x86.pv.shinfo = (shared_info_any_t *)s->buffer;
> +            break;
> +        }
> +        default:
> +            break;
> +        }
> +
> +        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
> +            break;
> +
> +        off += desc->length;
> +    }
> +
> +    if ( !ctx->x86.pv.shinfo )
> +    {
> +        ERROR("Failed to get SHARED_INFO\n");
> +        return -1;
> +    }
> +
> +    return 0;
> +
> +#undef GET_PTR
> +}
> +
> +int x86_pv_set_shinfo(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +
> +    if ( !ctx->x86.pv.shinfo )
> +    {
> +        ERROR("SHARED_INFO buffer not present\n");
> +        return -1;
> +    }
> +
> +    return set_domain_context(ctx);
> +}
> +
>  /*
>   * Local variables:
>   * mode: C
> diff --git a/tools/libxc/xc_sr_common_x86_pv.h b/tools/libxc/xc_sr_common_x86_pv.h
> index 2ed03309af..01442f48fb 100644
> --- a/tools/libxc/xc_sr_common_x86_pv.h
> +++ b/tools/libxc/xc_sr_common_x86_pv.h
> @@ -97,6 +97,9 @@ int x86_pv_domain_info(struct xc_sr_context *ctx);
>   */
>  int x86_pv_map_m2p(struct xc_sr_context *ctx);
>  
> +int x86_pv_get_shinfo(struct xc_sr_context *ctx);
> +int x86_pv_set_shinfo(struct xc_sr_context *ctx);
> +
>  #endif
>  /*
>   * Local variables:
> diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
> index 904ccc462a..21982a38ad 100644
> --- a/tools/libxc/xc_sr_restore_x86_pv.c
> +++ b/tools/libxc/xc_sr_restore_x86_pv.c
> @@ -865,7 +865,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
>      xc_interface *xch = ctx->xch;
>      unsigned int i;
>      int rc = -1;
> -    shared_info_any_t *guest_shinfo = NULL;
> +    shared_info_any_t *guest_shinfo;
>      const shared_info_any_t *old_shinfo = rec->data;
>  
>      if ( !ctx->x86.pv.restore.seen_pv_info )
> @@ -878,18 +878,14 @@ static int handle_shared_info(struct xc_sr_context *ctx,
>      {
>          ERROR("X86_PV_SHARED_INFO record wrong size: length %u"
>                ", expected 4096", rec->length);
> -        goto err;
> +        return -1;
>      }
>  
> -    guest_shinfo = xc_map_foreign_range(
> -        xch, ctx->domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
> -        ctx->dominfo.shared_info_frame);
> -    if ( !guest_shinfo )
> -    {
> -        PERROR("Failed to map Shared Info at mfn %#lx",
> -               ctx->dominfo.shared_info_frame);
> -        goto err;
> -    }
> +    rc = x86_pv_get_shinfo(ctx);
> +    if ( rc )
> +        return rc;
> +
> +    guest_shinfo = ctx->x86.pv.shinfo;
>  
>      MEMCPY_FIELD(guest_shinfo, old_shinfo, vcpu_info, ctx->x86.pv.width);
>      MEMCPY_FIELD(guest_shinfo, old_shinfo, arch, ctx->x86.pv.width);
> @@ -904,13 +900,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
>  
>      MEMSET_ARRAY_FIELD(guest_shinfo, evtchn_mask, 0xff, ctx->x86.pv.width);
>  
> -    rc = 0;
> -
> - err:
> -    if ( guest_shinfo )
> -        munmap(guest_shinfo, PAGE_SIZE);
> -
> -    return rc;
> +    return x86_pv_set_shinfo(ctx);
>  }
>  
>  /* restore_ops function. */
> diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
> index f3ccf5bb4b..bf87789340 100644
> --- a/tools/libxc/xc_sr_save_x86_pv.c
> +++ b/tools/libxc/xc_sr_save_x86_pv.c
> @@ -9,25 +9,6 @@ static inline bool is_canonical_address(xen_vaddr_t vaddr)
>      return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
>  }
>  
> -/*
> - * Maps the guests shared info page.
> - */
> -static int map_shinfo(struct xc_sr_context *ctx)
> -{
> -    xc_interface *xch = ctx->xch;
> -
> -    ctx->x86.pv.shinfo = xc_map_foreign_range(
> -        xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
> -    if ( !ctx->x86.pv.shinfo )
> -    {
> -        PERROR("Failed to map shared info frame at mfn %#lx",
> -               ctx->dominfo.shared_info_frame);
> -        return -1;
> -    }
> -
> -    return 0;
> -}
> -
>  /*
>   * Copy a list of mfns from a guest, accounting for differences between guest
>   * and toolstack width.  Can fail if truncation would occur.
> @@ -854,13 +835,26 @@ static int write_x86_pv_p2m_frames(struct xc_sr_context *ctx)
>   */
>  static int write_shared_info(struct xc_sr_context *ctx)
>  {
> +    xc_interface *xch = ctx->xch;
>      struct xc_sr_record rec = {
>          .type = REC_TYPE_SHARED_INFO,
>          .length = PAGE_SIZE,
> -        .data = ctx->x86.pv.shinfo,
>      };
> +    int rc;
>  
> -    return write_record(ctx, &rec);
> +    if ( !(rec.data = calloc(1, PAGE_SIZE)) )
> +    {
> +        ERROR("Cannot allocate buffer for SHARED_INFO data");
> +        return -1;
> +    }
> +
> +    memcpy(rec.data, ctx->x86.pv.shinfo, sizeof(*ctx->x86.pv.shinfo));
> +
> +    rc = write_record(ctx, &rec);
> +
> +    free(rec.data);
> +
> +    return rc;
>  }
>  
>  /*
> @@ -1041,7 +1035,7 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
>      if ( rc )
>          return rc;
>  
> -    rc = map_shinfo(ctx);
> +    rc = x86_pv_get_shinfo(ctx);
>      if ( rc )
>          return rc;
>  
> @@ -1112,12 +1106,11 @@ static int x86_pv_cleanup(struct xc_sr_context *ctx)
>      if ( ctx->x86.pv.p2m )
>          munmap(ctx->x86.pv.p2m, ctx->x86.pv.p2m_frames * PAGE_SIZE);
>  
> -    if ( ctx->x86.pv.shinfo )
> -        munmap(ctx->x86.pv.shinfo, PAGE_SIZE);
> -
>      if ( ctx->x86.pv.m2p )
>          munmap(ctx->x86.pv.m2p, ctx->x86.pv.nr_m2p_frames * PAGE_SIZE);
>  
> +    common_cleanup(ctx);
> +
>      return 0;
>  }
>  
> diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
> index 303081df0d..296b523963 100644
> --- a/tools/libxc/xg_save_restore.h
> +++ b/tools/libxc/xg_save_restore.h
> @@ -19,6 +19,7 @@
>  
>  #include <xen/foreign/x86_32.h>
>  #include <xen/foreign/x86_64.h>
> +#include <xen/save.h>
>  
>  /*
>  ** We process save/restore/migrate in batches of pages; the below
> -- 
> 2.20.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 21 14:06:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 14:06:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jblpK-0004KP-F2; Thu, 21 May 2020 14:05:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q4aB=7D=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1jblpJ-0004KK-1l
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 14:05:37 +0000
X-Inumbo-ID: 2382a374-9b6c-11ea-ab12-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 2382a374-9b6c-11ea-ab12-12813bfff9fa;
 Thu, 21 May 2020 14:05:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590069935;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=WuW4LhQZmtRPxOdsGv0UfenYyJdURtJzEmFBQoF1B8A=;
 b=YUfTJero3p2jdhnnMks/cpQ8zrPzYHOSEwEI8zHN5nrMmB9jRnJ8S6C49HgB2hpmttNGa+
 bK8ONEb2kNpbJAmUxdcSOu5OCG2zIrLmVeA5IEMDXs0F/9DskWTnpGnuX2MjVHlG9AVOpx
 P2ucfdzAeQBRy0i67seZCj8g64jPNPg=
Received: from mail-wr1-f72.google.com (mail-wr1-f72.google.com
 [209.85.221.72]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-272-FFKfPPF2Mm6lhDgX6Ek3FA-1; Thu, 21 May 2020 10:05:22 -0400
X-MC-Unique: FFKfPPF2Mm6lhDgX6Ek3FA-1
Received: by mail-wr1-f72.google.com with SMTP id x8so2966982wrl.16
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 07:05:22 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=WuW4LhQZmtRPxOdsGv0UfenYyJdURtJzEmFBQoF1B8A=;
 b=arP8bO8HnPvM7nRnSCs3TVUxB+8ag2+Xzjy+q09WfuIz4tNY2nZCrQ4Y4uyRcwLrU9
 jQQug0vhGSkXvcGuKHnUSxh1XU0qb7tFIQvjwC7cJr24nPOAlAl93CwvuLUFx8j3xWHb
 SFaBf5Pt6+35uUhsCRFjsPj/06p0+VHf/5r9jWkmWIH8sFLOD6ZpfFpyJgxdpnPQD0mZ
 UoeJodOTHvSkuxh0Qo7qxyqLa6gxdElY8jvF80FDAT8i8IpdO6LbhWLmIfJl5YHBOA26
 vdMc2E7Jf4V25R2O43CzJRfATn4BvvhyN2keRplpf2KQUedaqSHdTXL2+n9ZHWZQ/L9O
 +sfw==
X-Gm-Message-State: AOAM532ByhzWCogUNstfIqGbGw8RH2iI1QoIU2B5y23U5W0cdvWj9p6B
 i1kznhP21Q6OfwJbSHf33cxiqloAzDN56JT+Ix8XzAIwEvn1V7YzyVaxGPimp4q9bqsZHxIealA
 07e9UXc7heGrILijtW+IzVnF5Vi0=
X-Received: by 2002:a7b:c205:: with SMTP id x5mr9562688wmi.135.1590069920835; 
 Thu, 21 May 2020 07:05:20 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJz+9vDo+k5k5h2/AhbhlgqJeb0CrNG8dWFa5E8+nrQcjn7Lroo1lNJb24UV5jL3E+o+6JSXCg==
X-Received: by 2002:a7b:c205:: with SMTP id x5mr9562654wmi.135.1590069920578; 
 Thu, 21 May 2020 07:05:20 -0700 (PDT)
Received: from [192.168.178.58] ([151.30.94.134])
 by smtp.gmail.com with ESMTPSA id c140sm6952309wmd.18.2020.05.21.07.05.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 21 May 2020 07:05:19 -0700 (PDT)
Subject: Re: [PATCH v3 0/3] various: Remove unnecessary casts
To: Markus Armbruster <armbru@redhat.com>, =?UTF-8?Q?C=c3=a9dric_Le_Goater?=
 <clg@kaod.org>
References: <20200512070020.22782-1-f4bug@amsat.org>
 <871rnlsps6.fsf@dusky.pond.sub.org>
 <8791b385-8493-f81d-5ee3-cca5b8559c27@redhat.com>
 <87imgt9ycp.fsf@dusky.pond.sub.org>
 <2f4607cf-90a9-ca9a-4ef6-a8358631cdf0@kaod.org>
 <87k1187dbo.fsf@dusky.pond.sub.org>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <ffae374d-9429-5bdc-d415-053aaa1b033c@redhat.com>
Date: Thu, 21 May 2020 16:05:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <87k1187dbo.fsf@dusky.pond.sub.org>
Content-Language: en-US
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>, Paul Durrant <paul@xen.org>,
 Jason Wang <jasowang@redhat.com>,
 Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>, qemu-devel@nongnu.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <f4bug@amsat.org>,
 Gerd Hoffmann <kraxel@redhat.com>,
 "Edgar E. Iglesias" <edgar.iglesias@gmail.com>,
 Stefano Stabellini <sstabellini@kernel.org>, qemu-block@nongnu.org,
 qemu-trivial@nongnu.org, David Hildenbrand <david@redhat.com>,
 Halil Pasic <pasic@linux.ibm.com>,
 Christian Borntraeger <borntraeger@de.ibm.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 Joel Stanley <joel@jms.id.au>, Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, Richard Henderson <rth@twiddle.net>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, Corey Minyard <minyard@acm.org>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>, qemu-s390x@nongnu.org,
 qemu-arm@nongnu.org, Peter Chubb <peter.chubb@nicta.com.au>,
 John Snow <jsnow@redhat.com>, David Gibson <david@gibson.dropbear.id.au>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Andrew Jeffery <andrew@aj.id.au>, Cornelia Huck <cohuck@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, qemu-ppc@nongnu.org,
 Aurelien Jarno <aurelien@aurel32.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/20 06:34, Markus Armbruster wrote:
> Cédric Le Goater <clg@kaod.org> writes:
> 
>> On 5/18/20 3:17 PM, Markus Armbruster wrote:
>>> Paolo Bonzini <pbonzini@redhat.com> writes:
>>>
>>>> On 15/05/20 07:58, Markus Armbruster wrote:
>>>>> Philippe Mathieu-Daudé <f4bug@amsat.org> writes:
>>>>>
>>>>>> Remove unnecessary casts using coccinelle scripts.
>>>>>>
>>>>>> The CPU()/OBJECT() patches don't introduce any logical change;
>>>>>> the DEVICE() one removes various OBJECT_CHECK() calls.
>>>>> Queued, thanks!
>>>>>
>>>>> Managing expectations: I'm not a QOM maintainer, I don't want to become
>>>>> one, and I don't normally queue QOM patches :)
>>>>>
>>>>
>>>> I want to be a QOM maintainer again, but it's not the best time for me
>>>> to be one.  So thanks for picking up my slack.
>>>
>>> You're welcome :)
>>
>> Could you help me get this patch merged? :)
>>
>> http://patchwork.ozlabs.org/project/qemu-devel/patch/20200404153340.164861-1-clg@kaod.org/
> 
> I have more QOM patches in the pipe, and I may well post another QOM
> pull request while Paolo is busy with other stuff.  I'll consider
> including other QOM patches then.  Non-trivial ones need an R-by from
> Paolo, Daniel or Eduardo.

I queued Cedric's.

Paolo



From xen-devel-bounces@lists.xenproject.org Thu May 21 14:11:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 14:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbluu-0005Gt-N2; Thu, 21 May 2020 14:11:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jblut-0005Gk-Fx
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 14:11:23 +0000
X-Inumbo-ID: efcfac40-9b6c-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efcfac40-9b6c-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 14:11:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ixjdOhzo5pe+RlHFVK/FH3RZHTV26Zm0hhmEQ6K3++w=; b=cVZVRryHMF3n3bv56hPMxAqfl
 vjKq/5IZXCU9kCpb/L6gY7C5vgD7O6VVKsyzwncyj5I2tFcnJS9RNTbNpTL/DVSZbFlyxVED/NEt6
 U/BQn+mG8VUZdNJVqCL/0DGUot6fLkEcmvTV6Z0MHJiTmatqItdDDjgMM7w7n6LK6fqqk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jblum-0003az-Tq; Thu, 21 May 2020 14:11:16 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jblum-0002m2-Jp; Thu, 21 May 2020 14:11:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jblum-0006eu-J7; Thu, 21 May 2020 14:11:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150285-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150285: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=dacdbf7088d6a3705a9831e73991c2b14c519a65
X-Osstest-Versions-That: xen=e235fa2794c95365519eac714d6ea82f8e64752e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 14:11:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150285 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150285/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150267
 test-armhf-armhf-xl-rtds   16 guest-start/debian.repeat fail blocked in 150267
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150267
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150267
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150267
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150267
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150267
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150267
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150267
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150267
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150267
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dacdbf7088d6a3705a9831e73991c2b14c519a65
baseline version:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e

Last test of basis   150267  2020-05-20 04:13:30 Z    1 days
Failing since        150279  2020-05-20 17:07:26 Z    0 days    2 attempts
Testing same since   150285  2020-05-21 02:29:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  David Woodhouse <dwmw@amazon.co.uk>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e235fa2794..dacdbf7088  dacdbf7088d6a3705a9831e73991c2b14c519a65 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 21 14:22:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 14:22:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbm5X-0006MD-Q4; Thu, 21 May 2020 14:22:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=uQnW=7D=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jbm5W-0006M8-St
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 14:22:22 +0000
X-Inumbo-ID: 7c053ab2-9b6e-11ea-9887-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c053ab2-9b6e-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 14:22:22 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 43Tj/gR47oKP9Ex2PPBWEahlOGo6x/IxY+jx4kPMsU9YzGoiIELkT+92xNL9+4fdswIs353hDQ
 qv7qm02BmarxzkJ5rHUOUgPgf5WtQ21ml0ujeIEIFWivlfOgRjw2EtGNF7Otu8OYJf4XoRNi3i
 1MxBe3FzCO2FpUF3i6bMs4j6HIVwIrsWuuGQJDPCEDJdZ/9+F9cbcLbE/ddyOsyq8YHMveEfyA
 vGj1LhvZR2NIZrNaLy1hGS5rnj7zTjOcnefl1eYiI0qavC5BZZciV1c7Q4gQxaLBgWliLsMsPA
 /Dk=
X-SBRS: 2.7
X-MesageID: 18354433
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,417,1583211600"; d="scan'208";a="18354433"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] CHANGELOG: Add an entry for golang binding updates
Date: Thu, 21 May 2020 15:21:38 +0100
Message-ID: <20200521142138.3528654-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 George Dunlap <george.dunlap@citrix.com>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Wasn't sure whether this sort of thing was what was wanted, but
thought it was worth trying.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Paul Durrant <paul@xen.org>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 CHANGELOG.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index ccb5055c87..5aa6af612f 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -16,6 +16,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    fixes.
  - Hypervisor framework to ease porting Xen to run on hypervisors.
  - Initial support to run on Hyper-V.
+ - Golang bindings: IDL generation of structs, more functions covered,
+  more module-friendly (still Experimental)
 
 ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 14:33:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 14:33:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbmFn-0007Nq-V2; Thu, 21 May 2020 14:32:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nFaT=7D=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jbmFm-0007Nl-Qa
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 14:32:58 +0000
X-Inumbo-ID: f7238086-9b6f-11ea-ae69-bc764e2007e4
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7238086-9b6f-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 14:32:58 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id e1so6871925wrt.5
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 07:32:58 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=zKSOPYOLfBtebqtUZ9nUuJvjZj6P1Uk4p6vE8GifzlQ=;
 b=qW7qhMIfVIwQOWESBC9LoZ+K6qYnNqkXC4fYjS/F6lM4GkvSTuqjbXIAOlgJj6GEkp
 wqjfj+WOCS5zw3kbyP9hNLTbuFjnhsADkDxIKCyeluPq09nkMqlAwCEJNEU+bzS5oYFL
 hJqaYDxjK392v1PwT5U+pOBvxCTUSA7opzbClI3Rp7QqPQv52aDnXwLeVYlUor/nagiw
 3kdDyiBfxMq6gG1W7PhN0WZqsxqGBKZa5Ioub8nuEnwkVUk/e0adnY43zQi2zLO15EvD
 r7685nh1YtjOAzoH1lFx4fN3ynHnNUEcz6iSOTZSeoYlZqtUpZPIPiPqZmCfoXZv4PpA
 Xkwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=zKSOPYOLfBtebqtUZ9nUuJvjZj6P1Uk4p6vE8GifzlQ=;
 b=ngAWE9dTyJPgcMThc+Qq5odfIlSO9giqhs9KZ8rWMXy7tpXU4nWWrCrmMnCXOAtNoF
 lXRmP/i8H276RI/Pt1+uKm0Rmz5ZfSazYIcsgzMtTl1eCPxB8jbNBQ3MtJwEyhy5O+NR
 NeL+KFWW/7sFqsbaO+vP54fzRxgW4c+rbzrrelgcv/0/BDRcLhl6c2KYkNd4K5Qf9572
 maorRQ/hQkaiiS+//RUiGvWDFuJgkG3PjPgTFjmM0lit+cs/0BLGoxEs7mqVszBv8Fd8
 SCOtACt83LHOP/k0nGHzuKYTf8BNNPi173aIq/yXFqp7eO6cJD42rTKRGeqCVZk/bekA
 nBYQ==
X-Gm-Message-State: AOAM532hteFeGttP348TcBcBqef/5ZGrvkEXnAhZNOuBecmwy2Yh7rv9
 6MU6zG9I77oqC7n81I7X0PU=
X-Google-Smtp-Source: ABdhPJx6K5IKcwYwxqWXDuiuH1dQNuOJBw4oyiNFCEUV9HGKNRqkn/fPLWBUIJ4WfEnmwi02x8jteA==
X-Received: by 2002:adf:f304:: with SMTP id i4mr8605450wro.169.1590071577450; 
 Thu, 21 May 2020 07:32:57 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id n7sm6629538wro.94.2020.05.21.07.32.56
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 21 May 2020 07:32:56 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'George Dunlap'" <george.dunlap@citrix.com>,
 <xen-devel@lists.xenproject.org>
References: <20200521142138.3528654-1-george.dunlap@citrix.com>
In-Reply-To: <20200521142138.3528654-1-george.dunlap@citrix.com>
Subject: RE: [PATCH] CHANGELOG: Add an entry for golang binding updates
Date: Thu, 21 May 2020 15:32:55 +0100
Message-ID: <004201d62f7c$b846b4a0$28d41de0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQGYdPJLPKTHHfC2jORfcIiXGcpiN6kuDtzQ
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: George Dunlap <george.dunlap@citrix.com>
> Sent: 21 May 2020 15:22
> To: xen-devel@lists.xenproject.org
> Cc: George Dunlap <george.dunlap@citrix.com>; Paul Durrant <paul@xen.org>; Nick Rosbrook
> <rosbrookn@ainfosec.com>
> Subject: [PATCH] CHANGELOG: Add an entry for golang binding updates
> 
> Wasn't sure whether this sort of thing was what was wanted, but
> thought it was worth trying.
> 

Seems worthy of comment so...

Reviewed-by: Paul Durrant <paul@xen.org>

> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> CC: Paul Durrant <paul@xen.org>
> CC: Nick Rosbrook <rosbrookn@ainfosec.com>
> ---
>  CHANGELOG.md | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index ccb5055c87..5aa6af612f 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -16,6 +16,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>     fixes.
>   - Hypervisor framework to ease porting Xen to run on hypervisors.
>   - Initial support to run on Hyper-V.
> + - Golang bindings: IDL generation of structs, more functions covered,
> +  more module-friendly (still Experimental)
> 
>  ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
> 
> --
> 2.25.1




From xen-devel-bounces@lists.xenproject.org Thu May 21 14:41:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 14:41:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbmNd-0008I6-Pr; Thu, 21 May 2020 14:41:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q4aB=7D=redhat.com=pbonzini@srs-us1.protection.inumbo.net>)
 id 1jbmNc-0008I0-G4
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 14:41:04 +0000
X-Inumbo-ID: 16be1d1c-9b71-11ea-ab1c-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 16be1d1c-9b71-11ea-ab1c-12813bfff9fa;
 Thu, 21 May 2020 14:41:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590072060;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=63tZ1S/XxcYup4/85gY+n1PB55HQiLXxW5r6TabZ8Eg=;
 b=Ys9I5uOer/iWmdZpTU+VbsGFsQ0iKgY9JJX12czT0pXFwUxZWhIDgamYJ5QSjFgDo+Uu/e
 reRF1vzle0sWxjMOzDT1/8Gm7Cda31Vj9WSstgRpPCr1M/dJkjnQS1028n9J7mK6i2OkkQ
 vtVhr8oOtJI8J1RbTL2byIOdHEiaiUY=
Received: from mail-wm1-f70.google.com (mail-wm1-f70.google.com
 [209.85.128.70]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-311-fqQNHY9OM7iae41F7NE7Mw-1; Thu, 21 May 2020 10:40:58 -0400
X-MC-Unique: fqQNHY9OM7iae41F7NE7Mw-1
Received: by mail-wm1-f70.google.com with SMTP id g10so2780384wme.0
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 07:40:58 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:cc:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=63tZ1S/XxcYup4/85gY+n1PB55HQiLXxW5r6TabZ8Eg=;
 b=C80v1pViF9LbjXe/AzR/q5WH51uz00tj+AQtsRGuB3hKwG2YPwGpTZ4YibGzdx7VBM
 M26Yy+FcKZu8uxWIrpMvVyxg+Z581uBzz1iwP78Npb027xS3fnX4uwKmvuC+SFnjHf90
 WSq/uW+RDsU/I/kJoKlu9G5oshGEf4MxKfqfSUftJmPapw5o3QwDtQHi6eYm2Rcj4O7R
 EMOdubHLm1zQu81EH2TcpOO4SOZTG4+Tnx5CSr50b0csEJKBOQAZndCcFjQS2i4+eF11
 Gt7Ny0vWEUteWmJsnqY5+t7HsIDxZUQTSJS22IKH1MZoQe8ETRde6HO9hGkvmcVh3/V3
 pRxQ==
X-Gm-Message-State: AOAM532KSKrBf7uWraV3XHmzAXWcVi5mKSQIcuQILtiMDZOIIW9FO2uA
 Q0HPmmBWe6QmDbxAF+i0HrBeoCPMl9xj0+4+9UCx6I4jTa4GiMuYMhHb6f7Rsbr2PebayW74RNL
 GMqtlZA/vtLDmnJgD32GA1pT3QgA=
X-Received: by 2002:a5d:6401:: with SMTP id z1mr9482890wru.226.1590072056977; 
 Thu, 21 May 2020 07:40:56 -0700 (PDT)
X-Google-Smtp-Source: ABdhPJxeMEzPsusOZht9DqRCOuYlIdd8vXViaBwpj/wrVLsNLhNSnLqfnz6zWNuoyfxt4yj+Ii4Whg==
X-Received: by 2002:a5d:6401:: with SMTP id z1mr9482858wru.226.1590072056572; 
 Thu, 21 May 2020 07:40:56 -0700 (PDT)
Received: from [192.168.178.58] ([151.30.94.134])
 by smtp.gmail.com with ESMTPSA id p17sm10732958wmi.3.2020.05.21.07.40.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 21 May 2020 07:40:55 -0700 (PDT)
Subject: Re: [PATCH v3] accel: Move Xen accelerator code under accel/xen/
To: =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 qemu-devel@nongnu.org
References: <20200508100222.7112-1-philmd@redhat.com>
From: Paolo Bonzini <pbonzini@redhat.com>
Message-ID: <0ac2fab6-8c8f-f9c0-8fcf-57877a0284e3@redhat.com>
Date: Thu, 21 May 2020 16:40:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.6.0
MIME-Version: 1.0
In-Reply-To: <20200508100222.7112-1-philmd@redhat.com>
Content-Language: en-US
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Juan Quintela <quintela@redhat.com>,
 "Dr. David Alan Gilbert" <dgilbert@redhat.com>,
 Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Igor Mammedov <imammedo@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Aurelien Jarno <aurelien@aurel32.net>, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 08/05/20 12:02, Philippe Mathieu-Daudé wrote:
> This code is not related to hardware emulation.
> Move it under accel/ with the other hypervisors.
> 
> Reviewed-by: Paul Durrant <paul@xen.org>
> Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> ---
> We could also move the memory management functions from
> hw/i386/xen/xen-hvm.c but it is not trivial.
> 
> v2: Use g_assert_not_reached() instead of abort()
> v3: (quintela)
>  - Do not expose xen_allowed
>  - Do not abort in xen_hvm_modified_memory
> ---
>  include/exec/ram_addr.h                    |  2 +-
>  include/hw/xen/xen.h                       | 11 -------
>  include/sysemu/xen.h                       | 38 ++++++++++++++++++++++
>  hw/xen/xen-common.c => accel/xen/xen-all.c |  8 +++++
>  hw/acpi/piix4.c                            |  2 +-
>  hw/i386/pc.c                               |  1 +
>  hw/i386/pc_piix.c                          |  1 +
>  hw/i386/pc_q35.c                           |  1 +
>  hw/i386/xen/xen-hvm.c                      |  1 +
>  hw/i386/xen/xen_platform.c                 |  1 +
>  hw/isa/piix3.c                             |  1 +
>  hw/pci/msix.c                              |  1 +
>  migration/savevm.c                         |  2 +-
>  softmmu/vl.c                               |  2 +-
>  stubs/xen-hvm.c                            |  9 -----
>  target/i386/cpu.c                          |  2 +-
>  MAINTAINERS                                |  2 ++
>  accel/Makefile.objs                        |  1 +
>  accel/xen/Makefile.objs                    |  1 +
>  hw/xen/Makefile.objs                       |  2 +-
>  20 files changed, 63 insertions(+), 26 deletions(-)
>  create mode 100644 include/sysemu/xen.h
>  rename hw/xen/xen-common.c => accel/xen/xen-all.c (98%)
>  create mode 100644 accel/xen/Makefile.objs
> 
> diff --git a/include/exec/ram_addr.h b/include/exec/ram_addr.h
> index 5e59a3d8d7..4e05292f91 100644
> --- a/include/exec/ram_addr.h
> +++ b/include/exec/ram_addr.h
> @@ -21,7 +21,7 @@
>  
>  #ifndef CONFIG_USER_ONLY
>  #include "cpu.h"
> -#include "hw/xen/xen.h"
> +#include "sysemu/xen.h"
>  #include "sysemu/tcg.h"
>  #include "exec/ramlist.h"
>  #include "exec/ramblock.h"
> diff --git a/include/hw/xen/xen.h b/include/hw/xen/xen.h
> index 5ac1c6dc55..771dd447f2 100644
> --- a/include/hw/xen/xen.h
> +++ b/include/hw/xen/xen.h
> @@ -20,13 +20,6 @@ extern uint32_t xen_domid;
>  extern enum xen_mode xen_mode;
>  extern bool xen_domid_restrict;
>  
> -extern bool xen_allowed;
> -
> -static inline bool xen_enabled(void)
> -{
> -    return xen_allowed;
> -}
> -
>  int xen_pci_slot_get_pirq(PCIDevice *pci_dev, int irq_num);
>  void xen_piix3_set_irq(void *opaque, int irq_num, int level);
>  void xen_piix_pci_write_config_client(uint32_t address, uint32_t val, int len);
> @@ -39,10 +32,6 @@ void xenstore_store_pv_console_info(int i, struct Chardev *chr);
>  
>  void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory);
>  
> -void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
> -                   struct MemoryRegion *mr, Error **errp);
> -void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
> -
>  void xen_register_framebuffer(struct MemoryRegion *mr);
>  
>  #endif /* QEMU_HW_XEN_H */
> diff --git a/include/sysemu/xen.h b/include/sysemu/xen.h
> new file mode 100644
> index 0000000000..1ca292715e
> --- /dev/null
> +++ b/include/sysemu/xen.h
> @@ -0,0 +1,38 @@
> +/*
> + * QEMU Xen support
> + *
> + * This work is licensed under the terms of the GNU GPL, version 2 or later.
> + * See the COPYING file in the top-level directory.
> + */
> +
> +#ifndef SYSEMU_XEN_H
> +#define SYSEMU_XEN_H
> +
> +#ifdef CONFIG_XEN
> +
> +bool xen_enabled(void);
> +
> +#ifndef CONFIG_USER_ONLY
> +void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length);
> +void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
> +                   struct MemoryRegion *mr, Error **errp);
> +#endif
> +
> +#else /* !CONFIG_XEN */
> +
> +#define xen_enabled() 0
> +#ifndef CONFIG_USER_ONLY
> +static inline void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
> +{
> +    /* nothing */
> +}
> +static inline void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size,
> +                                 MemoryRegion *mr, Error **errp)
> +{
> +    g_assert_not_reached();
> +}
> +#endif
> +
> +#endif /* CONFIG_XEN */
> +
> +#endif
> diff --git a/hw/xen/xen-common.c b/accel/xen/xen-all.c
> similarity index 98%
> rename from hw/xen/xen-common.c
> rename to accel/xen/xen-all.c
> index a15070f7f6..4f22c53731 100644
> --- a/hw/xen/xen-common.c
> +++ b/accel/xen/xen-all.c
> @@ -16,6 +16,7 @@
>  #include "hw/xen/xen_pt.h"
>  #include "chardev/char.h"
>  #include "sysemu/accel.h"
> +#include "sysemu/xen.h"
>  #include "sysemu/runstate.h"
>  #include "migration/misc.h"
>  #include "migration/global_state.h"
> @@ -31,6 +32,13 @@
>      do { } while (0)
>  #endif
>  
> +static bool xen_allowed;
> +
> +bool xen_enabled(void)
> +{
> +    return xen_allowed;
> +}
> +
>  xc_interface *xen_xc;
>  xenforeignmemory_handle *xen_fmem;
>  xendevicemodel_handle *xen_dmod;
> diff --git a/hw/acpi/piix4.c b/hw/acpi/piix4.c
> index 964d6f5990..daed273687 100644
> --- a/hw/acpi/piix4.c
> +++ b/hw/acpi/piix4.c
> @@ -30,6 +30,7 @@
>  #include "hw/acpi/acpi.h"
>  #include "sysemu/runstate.h"
>  #include "sysemu/sysemu.h"
> +#include "sysemu/xen.h"
>  #include "qapi/error.h"
>  #include "qemu/range.h"
>  #include "exec/address-spaces.h"
> @@ -41,7 +42,6 @@
>  #include "hw/mem/nvdimm.h"
>  #include "hw/acpi/memory_hotplug.h"
>  #include "hw/acpi/acpi_dev_interface.h"
> -#include "hw/xen/xen.h"
>  #include "migration/vmstate.h"
>  #include "hw/core/cpu.h"
>  #include "trace.h"
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index 97e345faea..1a599e1de9 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -56,6 +56,7 @@
>  #include "sysemu/tcg.h"
>  #include "sysemu/numa.h"
>  #include "sysemu/kvm.h"
> +#include "sysemu/xen.h"
>  #include "sysemu/qtest.h"
>  #include "sysemu/reset.h"
>  #include "sysemu/runstate.h"
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index 3862e5120e..c00472b4c5 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -53,6 +53,7 @@
>  #include "cpu.h"
>  #include "qapi/error.h"
>  #include "qemu/error-report.h"
> +#include "sysemu/xen.h"
>  #ifdef CONFIG_XEN
>  #include <xen/hvm/hvm_info_table.h>
>  #include "hw/xen/xen_pt.h"
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index 3349e38a4c..e929749d8e 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -36,6 +36,7 @@
>  #include "hw/rtc/mc146818rtc.h"
>  #include "hw/xen/xen.h"
>  #include "sysemu/kvm.h"
> +#include "sysemu/xen.h"
>  #include "hw/kvm/clock.h"
>  #include "hw/pci-host/q35.h"
>  #include "hw/qdev-properties.h"
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index 82ece6b9e7..041303a2fa 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -28,6 +28,7 @@
>  #include "qemu/range.h"
>  #include "sysemu/runstate.h"
>  #include "sysemu/sysemu.h"
> +#include "sysemu/xen.h"
>  #include "sysemu/xen-mapcache.h"
>  #include "trace.h"
>  #include "exec/address-spaces.h"
> diff --git a/hw/i386/xen/xen_platform.c b/hw/i386/xen/xen_platform.c
> index 0f7b05e5e1..a1492fdecd 100644
> --- a/hw/i386/xen/xen_platform.c
> +++ b/hw/i386/xen/xen_platform.c
> @@ -33,6 +33,7 @@
>  #include "hw/xen/xen-legacy-backend.h"
>  #include "trace.h"
>  #include "exec/address-spaces.h"
> +#include "sysemu/xen.h"
>  #include "sysemu/block-backend.h"
>  #include "qemu/error-report.h"
>  #include "qemu/module.h"
> diff --git a/hw/isa/piix3.c b/hw/isa/piix3.c
> index fd1c78879f..1a5267e19f 100644
> --- a/hw/isa/piix3.c
> +++ b/hw/isa/piix3.c
> @@ -28,6 +28,7 @@
>  #include "hw/irq.h"
>  #include "hw/isa/isa.h"
>  #include "hw/xen/xen.h"
> +#include "sysemu/xen.h"
>  #include "sysemu/sysemu.h"
>  #include "sysemu/reset.h"
>  #include "sysemu/runstate.h"
> diff --git a/hw/pci/msix.c b/hw/pci/msix.c
> index 29187898f2..2c7ead7667 100644
> --- a/hw/pci/msix.c
> +++ b/hw/pci/msix.c
> @@ -19,6 +19,7 @@
>  #include "hw/pci/msix.h"
>  #include "hw/pci/pci.h"
>  #include "hw/xen/xen.h"
> +#include "sysemu/xen.h"
>  #include "migration/qemu-file-types.h"
>  #include "migration/vmstate.h"
>  #include "qemu/range.h"
> diff --git a/migration/savevm.c b/migration/savevm.c
> index c00a6807d9..b979ea6e7f 100644
> --- a/migration/savevm.c
> +++ b/migration/savevm.c
> @@ -28,7 +28,6 @@
>  
>  #include "qemu/osdep.h"
>  #include "hw/boards.h"
> -#include "hw/xen/xen.h"
>  #include "net/net.h"
>  #include "migration.h"
>  #include "migration/snapshot.h"
> @@ -59,6 +58,7 @@
>  #include "sysemu/replay.h"
>  #include "sysemu/runstate.h"
>  #include "sysemu/sysemu.h"
> +#include "sysemu/xen.h"
>  #include "qjson.h"
>  #include "migration/colo.h"
>  #include "qemu/bitmap.h"
> diff --git a/softmmu/vl.c b/softmmu/vl.c
> index afd2615fb3..0344e5fd2e 100644
> --- a/softmmu/vl.c
> +++ b/softmmu/vl.c
> @@ -36,6 +36,7 @@
>  #include "sysemu/runstate.h"
>  #include "sysemu/seccomp.h"
>  #include "sysemu/tcg.h"
> +#include "sysemu/xen.h"
>  
>  #include "qemu/error-report.h"
>  #include "qemu/sockets.h"
> @@ -178,7 +179,6 @@ static NotifierList exit_notifiers =
>  static NotifierList machine_init_done_notifiers =
>      NOTIFIER_LIST_INITIALIZER(machine_init_done_notifiers);
>  
> -bool xen_allowed;
>  uint32_t xen_domid;
>  enum xen_mode xen_mode = XEN_EMULATE;
>  bool xen_domid_restrict;
> diff --git a/stubs/xen-hvm.c b/stubs/xen-hvm.c
> index b7d53b5e2f..6954a5b696 100644
> --- a/stubs/xen-hvm.c
> +++ b/stubs/xen-hvm.c
> @@ -35,11 +35,6 @@ int xen_is_pirq_msi(uint32_t msi_data)
>      return 0;
>  }
>  
> -void xen_ram_alloc(ram_addr_t ram_addr, ram_addr_t size, MemoryRegion *mr,
> -                   Error **errp)
> -{
> -}
> -
>  qemu_irq *xen_interrupt_controller_init(void)
>  {
>      return NULL;
> @@ -49,10 +44,6 @@ void xen_register_framebuffer(MemoryRegion *mr)
>  {
>  }
>  
> -void xen_hvm_modified_memory(ram_addr_t start, ram_addr_t length)
> -{
> -}
> -
>  void xen_hvm_init(PCMachineState *pcms, MemoryRegion **ram_memory)
>  {
>  }
> diff --git a/target/i386/cpu.c b/target/i386/cpu.c
> index 9c256ab159..f9b3ef1ef2 100644
> --- a/target/i386/cpu.c
> +++ b/target/i386/cpu.c
> @@ -29,6 +29,7 @@
>  #include "sysemu/reset.h"
>  #include "sysemu/hvf.h"
>  #include "sysemu/cpus.h"
> +#include "sysemu/xen.h"
>  #include "kvm_i386.h"
>  #include "sev_i386.h"
>  
> @@ -54,7 +55,6 @@
>  #include "hw/i386/topology.h"
>  #ifndef CONFIG_USER_ONLY
>  #include "exec/address-spaces.h"
> -#include "hw/xen/xen.h"
>  #include "hw/i386/apic_internal.h"
>  #include "hw/boards.h"
>  #endif
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 1f84e3ae2c..95ddddfb1d 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -438,6 +438,7 @@ M: Paul Durrant <paul@xen.org>
>  L: xen-devel@lists.xenproject.org
>  S: Supported
>  F: */xen*
> +F: accel/xen/*
>  F: hw/9pfs/xen-9p*
>  F: hw/char/xen_console.c
>  F: hw/display/xenfb.c
> @@ -451,6 +452,7 @@ F: hw/i386/xen/
>  F: hw/pci-host/xen_igd_pt.c
>  F: include/hw/block/dataplane/xen*
>  F: include/hw/xen/
> +F: include/sysemu/xen.h
>  F: include/sysemu/xen-mapcache.h
>  
>  Guest CPU Cores (HAXM)
> diff --git a/accel/Makefile.objs b/accel/Makefile.objs
> index 17e5ac6061..ff72f0d030 100644
> --- a/accel/Makefile.objs
> +++ b/accel/Makefile.objs
> @@ -2,4 +2,5 @@ common-obj-$(CONFIG_SOFTMMU) += accel.o
>  obj-$(call land,$(CONFIG_SOFTMMU),$(CONFIG_POSIX)) += qtest.o
>  obj-$(CONFIG_KVM) += kvm/
>  obj-$(CONFIG_TCG) += tcg/
> +obj-$(CONFIG_XEN) += xen/
>  obj-y += stubs/
> diff --git a/accel/xen/Makefile.objs b/accel/xen/Makefile.objs
> new file mode 100644
> index 0000000000..7482cfb436
> --- /dev/null
> +++ b/accel/xen/Makefile.objs
> @@ -0,0 +1 @@
> +obj-y += xen-all.o
> diff --git a/hw/xen/Makefile.objs b/hw/xen/Makefile.objs
> index 84df60a928..340b2c5096 100644
> --- a/hw/xen/Makefile.objs
> +++ b/hw/xen/Makefile.objs
> @@ -1,5 +1,5 @@
>  # xen backend driver support
> -common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-common.o xen-bus.o xen-bus-helper.o xen-backend.o
> +common-obj-$(CONFIG_XEN) += xen-legacy-backend.o xen_devconfig.o xen_pvdev.o xen-bus.o xen-bus-helper.o xen-backend.o
>  
>  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen-host-pci-device.o
>  obj-$(CONFIG_XEN_PCI_PASSTHROUGH) += xen_pt.o xen_pt_config_init.o xen_pt_graphics.o xen_pt_msi.o
> 

Queued, thanks.

Paolo



From xen-devel-bounces@lists.xenproject.org Thu May 21 14:55:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 14:55:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbmbs-0000xK-4A; Thu, 21 May 2020 14:55:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=OubW=7D=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jbmbq-0000xF-GJ
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 14:55:46 +0000
X-Inumbo-ID: 26a84776-9b73-11ea-b07b-bc764e2007e4
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26a84776-9b73-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 14:55:46 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id r3so3207355qve.1
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 07:55:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id;
 bh=cGvCZX+A0EzA0r2FDdND3g4LWpEzQENF+HcMEUiPxHE=;
 b=Rdfvq+bfMRI+cBy9w0lkkvgWZzVi4hbhRo0aDkW/UaBsIFOooOl8FsgUK9MOjPllIh
 jFGeiETO6KWbroyRqI3/e2iyap2h0M/4wzYl70g4pnPih23UgV7LbwBpIqdxkZUHjvBP
 sEE01kKy5pVonFyFkq743tW5ebNIICrSFQ7DA82NsxfIH1x4i88FeW4HXgy4fFtxPzMd
 foR/wZV3q627eHxxgC40JNI+5TbfknZlHIyYs4DnvTPpR1L+Fx4JfJqDVp6TqVFAi2xR
 6sx3uQwtxl5jPlwJ7M8BlP7MJb/vlkmf88LEGw9IYqufkpj4Nh3T2h7xymU3J2DrPqxT
 ebgw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id;
 bh=cGvCZX+A0EzA0r2FDdND3g4LWpEzQENF+HcMEUiPxHE=;
 b=jlih5XN4dMAZjTXfwCsZ4NMYXKkVQqD4g87yKwwQFzwwUsch1O2vMxAh3tajHMG3G7
 3TxJK8uN8oehg/3lWdmJTsFz4VLkRxbYV9z18Xn3ccER406JtUuogZFb5rTMxn3BgYcC
 Bqw7eEYVfN3qNJ5NFXIrrB74SW7mOf3iRfl7JVdH31X3+z/BmdT5PGwPrm+skUK6jbQS
 SmLArS4a9kopVFzSq4wSbJNKNkJv7dIp+L5crDlSgOps7jNAfGaSIQ1AwvwIrfJkLTt3
 xksX8SlcbZd2QPWbXUGfwBg84TcAUzNJB4xogxJ0/HvwdRfaGzS/tx1kDrR8jCHYXtHP
 qY/g==
X-Gm-Message-State: AOAM5332toHH4qzXx/qzs3Y7VRVN+EdS94GBvQ34U2rzzlXZAgzNgyvm
 kGiq1jojJYzw+26dAI1i23N/WFl91r4=
X-Google-Smtp-Source: ABdhPJwZH/JAPdYzAO2i+paeAGSEK5CanslYlbW8asmm6LOhLE+4jdZSbz/7m8rJmgHznR5M0a/xUw==
X-Received: by 2002:a05:6214:1543:: with SMTP id
 t3mr10755668qvw.122.1590072945250; 
 Thu, 21 May 2020 07:55:45 -0700 (PDT)
Received: from six.lan (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id x124sm4990687qkb.108.2020.05.21.07.55.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 21 May 2020 07:55:44 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] golang/xenlight: add an empty line after DO NOT EDIT comment
Date: Thu, 21 May 2020 10:55:25 -0400
Message-Id: <49cc21c24b65ef5e1ce9810397c0fcd9d43f77f4.1590072675.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When generating documentation, pkg.go.dev and godoc.org assume a comment
that immediately precedes the package declaration is a "package
comment", and should be shown in the documentation. Add an empty line
after the DO NOT EDIT comment in generated files to prevent these
comments from appearing as "package comments."

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/gengotypes.py  | 1 +
 tools/golang/xenlight/helpers.gen.go | 1 +
 tools/golang/xenlight/types.gen.go   | 1 +
 3 files changed, 3 insertions(+)

diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index e9ad92afa0..2b71aa1ea8 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -737,6 +737,7 @@ if __name__ == '__main__':
     // This file is generated by:
     // {}
     //
+
     """.format(' '.join(sys.argv))
 
     xenlight_golang_generate_types(types=types,
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 109e9515a2..d464e38565 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -3,6 +3,7 @@
 // This file is generated by:
 // gengotypes.py ../../libxl/libxl_types.idl
 //
+
 package xenlight
 
 import (
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index df68fd0e88..65c2742bc3 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -3,6 +3,7 @@
 // This file is generated by:
 // gengotypes.py ../../libxl/libxl_types.idl
 //
+
 package xenlight
 
 type Error int
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 14:58:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 14:58:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbmej-00014g-Jd; Thu, 21 May 2020 14:58:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VGGW=7D=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbmei-00014b-O6
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 14:58:44 +0000
X-Inumbo-ID: 90e4335c-9b73-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 90e4335c-9b73-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 14:58:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Jw7tDixVzzdfXbWQISf1SWJhIzBqcoa+jzKwO76v2e0=; b=d+y5+NmlhiaU7WLBO5VPqwaP1Y
 Wq8LItwcU/D4NuFd8GhMV+YB7se8AAV0ZbwI2d6Fe0byTYgjjPj2WZ2NPD++XFGhHKZM6h8DzjxvE
 TVNWHTG04BXzJuyGdRF9YBogaggvi906qz7j0T6B7cNxgv1VCz9nfXU7mNSsCR67anN0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbmed-0004ZQ-Ry; Thu, 21 May 2020 14:58:39 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbmed-0003P7-Jm; Thu, 21 May 2020 14:58:39 +0000
Subject: Re: [PATCH v4 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-2-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <fd51463d-b656-44ca-d795-1cfe786d943d@xen.org>
Date: Thu, 21 May 2020 15:58:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521085932.10508-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 21/05/2020 09:59, Paul Durrant wrote:
> To allow enlightened HVM guests (i.e. those that have PV drivers) to be
> migrated without their co-operation it will be necessary to transfer 'PV'
> state such as event channel state, grant entry state, etc.
> 
> Currently there is a framework (entered via the hvm_save/load() functions)
> that allows a domain's 'HVM' (architectural) state to be transferred but
> 'PV' state is also common with pure PV guests and so this framework is not
> really suitable.
> 
> This patch adds the new public header and low level implementation of a new
> common framework, entered via the domain_save/load() functions. Subsequent
> patches will introduce other parts of the framework, and code that will
> make use of it within the current version of the libxc migration stream.
> 
> This patch also marks the HVM-only framework as deprecated in favour of the
> new framework.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 15:05:10 2020
Subject: Re: [PATCH v4 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-3-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <8681d346-b704-40a8-c070-1b679836b08a@xen.org>
Date: Thu, 21 May 2020 16:04:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521085932.10508-3-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>

Hi Paul,

On 21/05/2020 09:59, Paul Durrant wrote:
> These domctls provide a mechanism to get and set domain context from
> the toolstack.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 15:13:43 2020
Subject: Re: [PATCH v4 4/5] common/domain: add a domain context record for
 shared_info...
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-5-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <dd544d5d-aa25-f035-d96b-747f07c23513@xen.org>
Date: Thu, 21 May 2020 16:13:08 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521085932.10508-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>

Hi Paul,

On 21/05/2020 09:59, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... and update xen-domctx to dump some information describing the record.
> 
> NOTE: The domain may or may not be using the embedded vcpu_info array so
>        ultimately separate context records will be added for vcpu_info when
>        this becomes necessary.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Julien Grall <julien@xen.org>
> Cc: Stefano Stabellini <sstabellini@kernel.org>
> 
> v4:
>   - Addressed comments from Jan
> 
> v3:
>   - Actually dump some of the content of shared_info
> 
> v2:
>   - Drop the header change to define a 'Xen' page size and instead use a
>     variable length struct now that the framework makes this feasible
>   - Guard use of 'has_32bit_shinfo' in common code with CONFIG_COMPAT
> ---
>   tools/misc/xen-domctx.c   | 78 +++++++++++++++++++++++++++++++++++++++
>   xen/common/domain.c       | 59 +++++++++++++++++++++++++++++
>   xen/include/public/save.h | 13 ++++++-
>   3 files changed, 149 insertions(+), 1 deletion(-)
> 
> diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
> index 243325dfce..6ead7ea89d 100644
> --- a/tools/misc/xen-domctx.c
> +++ b/tools/misc/xen-domctx.c
> @@ -31,6 +31,7 @@
>   #include <errno.h>
>   
>   #include <xenctrl.h>
> +#include <xen-tools/libs.h>
>   #include <xen/xen.h>
>   #include <xen/domctl.h>
>   #include <xen/save.h>
> @@ -61,6 +62,82 @@ static void dump_header(void)
>   
>   }
>   
> +static void print_binary(const char *prefix, const void *val, size_t size,
> +                         const char *suffix)
> +{
> +    printf("%s", prefix);
> +
> +    while ( size-- )
> +    {
> +        uint8_t octet = *(const uint8_t *)val++;
> +        unsigned int i;
> +
> +        for ( i = 0; i < 8; i++ )
> +        {
> +            printf("%u", octet & 1);
> +            octet >>= 1;
> +        }
> +    }
> +
> +    printf("%s", suffix);
> +}
> +
> +static void dump_shared_info(void)
> +{
> +    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
> +    bool has_32bit_shinfo;
> +    shared_info_any_t *info;
> +    unsigned int i, n;
> +
> +    GET_PTR(s);
> +    has_32bit_shinfo = s->flags & DOMAIN_SAVE_32BIT_SHINFO;
> +
> +    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
> +           has_32bit_shinfo ? "true" : "false", s->buffer_size);
> +
> +    info = (shared_info_any_t *)s->buffer;
> +
> +#define GET_FIELD_PTR(_f)            \
> +    (has_32bit_shinfo ?              \
> +     (const void *)&(info->x32._f) : \
> +     (const void *)&(info->x64._f))
> +#define GET_FIELD_SIZE(_f) \
> +    (has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
> +#define GET_FIELD(_f) \
> +    (has_32bit_shinfo ? info->x32._f : info->x64._f)
> +
> +    n = has_32bit_shinfo ?
> +        ARRAY_SIZE(info->x32.evtchn_pending) :
> +        ARRAY_SIZE(info->x64.evtchn_pending);
> +
> +    for ( i = 0; i < n; i++ )
> +    {
> +        const char *prefix = !i ?
> +            "                 evtchn_pending: " :
> +            "                                 ";
> +
> +        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
> +                 GET_FIELD_SIZE(evtchn_pending[0]), "\n");
> +    }
> +
> +    for ( i = 0; i < n; i++ )
> +    {
> +        const char *prefix = !i ?
> +            "                    evtchn_mask: " :
> +            "                                 ";
> +
> +        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
> +                 GET_FIELD_SIZE(evtchn_mask[0]), "\n");
> +    }
> +
> +    printf("                 wc: version: %u sec: %u nsec: %u\n",
> +           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));
> +
> +#undef GET_FIELD
> +#undef GET_FIELD_SIZE
> +#undef GET_FIELD_PTR
> +}
> +
>   static void dump_end(void)
>   {
>       DOMAIN_SAVE_TYPE(END) *e;
> @@ -173,6 +250,7 @@ int main(int argc, char **argv)
>               switch (desc->typecode)
>               {
>               case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
> +            case DOMAIN_SAVE_CODE(SHARED_INFO): dump_shared_info(); break;
>               case DOMAIN_SAVE_CODE(END): dump_end(); break;
>               default:
>                   printf("Unknown type %u: skipping\n", desc->typecode);
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7cc9526139..14e96c3bc2 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -33,6 +33,7 @@
>   #include <xen/xenoprof.h>
>   #include <xen/irq.h>
>   #include <xen/argo.h>
> +#include <xen/save.h>
>   #include <asm/debugger.h>
>   #include <asm/p2m.h>
>   #include <asm/processor.h>
> @@ -1649,6 +1650,64 @@ int continue_hypercall_on_cpu(
>       return 0;
>   }
>   
> +static int save_shared_info(const struct domain *d, struct domain_context *c,
> +                            bool dry_run)
> +{
> +    struct domain_shared_info_context ctxt = {
> +#ifdef CONFIG_COMPAT
> +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> +#endif
> +        .buffer_size = sizeof(shared_info_t),
> +    };
> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    int rc;
> +
> +    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
> +    if ( rc )
> +        return rc;
> +
> +    rc = domain_save_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
> +    if ( rc )
> +        return rc;
> +
> +    return domain_save_end(c);
> +}
> +
> +static int load_shared_info(struct domain *d, struct domain_context *c)
> +{
> +    struct domain_shared_info_context ctxt;
> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    unsigned int i;
> +    int rc;
> +
> +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> +    if ( rc || i ) /* expect only a single instance */
> +        return rc;

This will return 0 if there are multiple instances. Is that intended?

> +
> +    rc = domain_load_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( ctxt.buffer_size != sizeof(shared_info_t) )
> +        return -EINVAL;
> +
> +#ifdef CONFIG_COMPAT
> +    has_32bit_shinfo(d) = ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO;
> +#endif
Should we check that the flag is not set when compat is not supported?

This would prevent someone from trying to restore a compat shared info on 
a platform that does not support it.

> +
> +    rc = domain_load_data(c, d->shared_info, sizeof(shared_info_t));
> +    if ( rc )
> +        return rc;
> +
> +    return domain_load_end(c);
> +}
> +
> +DOMAIN_REGISTER_SAVE_LOAD(SHARED_INFO, save_shared_info, load_shared_info);
> +
>   /*
>    * Local variables:
>    * mode: C
> diff --git a/xen/include/public/save.h b/xen/include/public/save.h
> index 551dbbddb8..0e855a4b97 100644
> --- a/xen/include/public/save.h
> +++ b/xen/include/public/save.h
> @@ -82,7 +82,18 @@ struct domain_save_header {
>   };
>   DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
>   
> -#define DOMAIN_SAVE_CODE_MAX 1
> +struct domain_shared_info_context {
> +    uint32_t flags;
> +
> +#define DOMAIN_SAVE_32BIT_SHINFO 0x00000001
> +
> +    uint32_t buffer_size;
> +    uint8_t buffer[XEN_FLEX_ARRAY_DIM]; /* Implementation specific size */
> +};
> +
> +DECLARE_DOMAIN_SAVE_TYPE(SHARED_INFO, 2, struct domain_shared_info_context);
> +
> +#define DOMAIN_SAVE_CODE_MAX 2
>   
>   #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
>   
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 15:26:17 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Julien Grall'" <julien@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-5-paul@xen.org>
 <dd544d5d-aa25-f035-d96b-747f07c23513@xen.org>
In-Reply-To: <dd544d5d-aa25-f035-d96b-747f07c23513@xen.org>
Subject: RE: [PATCH v4 4/5] common/domain: add a domain context record for
 shared_info...
Date: Thu, 21 May 2020 16:25:51 +0100
Message-ID: <004401d62f84$1d9a2c40$58ce84c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIT1cQvb66Smwxss7bTr3fjVXX5DAJ/pCGkAnVDoB+oD7LJEA==
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>

> -----Original Message-----
[snip]
> > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > index 7cc9526139..14e96c3bc2 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -33,6 +33,7 @@
> >   #include <xen/xenoprof.h>
> >   #include <xen/irq.h>
> >   #include <xen/argo.h>
> > +#include <xen/save.h>
> >   #include <asm/debugger.h>
> >   #include <asm/p2m.h>
> >   #include <asm/processor.h>
> > @@ -1649,6 +1650,64 @@ int continue_hypercall_on_cpu(
> >       return 0;
> >   }
> >
> > +static int save_shared_info(const struct domain *d, struct domain_context *c,
> > +                            bool dry_run)
> > +{
> > +    struct domain_shared_info_context ctxt = {
> > +#ifdef CONFIG_COMPAT
> > +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> > +#endif
> > +        .buffer_size = sizeof(shared_info_t),
> > +    };
> > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > +    int rc;
> > +
> > +    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc = domain_save_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    return domain_save_end(c);
> > +}
> > +
> > +static int load_shared_info(struct domain *d, struct domain_context *c)
> > +{
> > +    struct domain_shared_info_context ctxt;
> > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> > +    if ( rc || i ) /* expect only a single instance */
> > +        return rc;
> 
> This will return 0 if there are multiple instances. Is that intended?
> 

No, it ought to be an error... probably ENOENT.

> > +
> > +    rc = domain_load_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( ctxt.buffer_size != sizeof(shared_info_t) )
> > +        return -EINVAL;
> > +
> > +#ifdef CONFIG_COMPAT
> > +    has_32bit_shinfo(d) = ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO;
> > +#endif
> Should we check that the flag is not set when compat is not supported?
> 
> This would prevent someone from trying to restore a compat shared info on
> a platform that does not support it.
> 

That would be prudent, yes. Probably a straight EINVAL for this one.

  Paul



From xen-devel-bounces@lists.xenproject.org Thu May 21 15:43:53 2020
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2] x86/traps: Rework #PF[Rsvd] bit handling
Date: Thu, 21 May 2020 16:43:06 +0100
Message-ID: <20200521154306.29019-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200518153820.18170-1-andrew.cooper3@citrix.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

The reserved_bit_page_fault() paths effectively turn reserved bit faults into
a warning, but in the light of L1TF, the real impact is far more serious.

Make #PF[Rsvd] a hard error, irrespective of mode.  Any new panic() caused by
this constitutes pagetable corruption, and probably an L1TF gadget needing
fixing.

Drop the PFEC_reserved_bit check in __page_fault_type() which has been made
dead by the rearrangement in do_page_fault().

Additionally, drop the comment for do_page_fault().  It is inaccurate (bit 0
being set isn't always a protection violation) and stale (missing bits
5,6,15,31).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Reword commit message and comment in do_page_fault().
---
 xen/arch/x86/traps.c | 42 ++++++++++++++++--------------------------
 1 file changed, 16 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 1f6f1dde76..e8a0877344 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1137,15 +1137,6 @@ void do_int3(struct cpu_user_regs *regs)
     pv_inject_hw_exception(TRAP_int3, X86_EVENT_NO_EC);
 }
 
-static void reserved_bit_page_fault(unsigned long addr,
-                                    struct cpu_user_regs *regs)
-{
-    printk("%pv: reserved bit in page table (ec=%04X)\n",
-           current, regs->error_code);
-    show_page_walk(addr);
-    show_execution_state(regs);
-}
-
 #ifdef CONFIG_PV
 static int handle_ldt_mapping_fault(unsigned int offset,
                                     struct cpu_user_regs *regs)
@@ -1248,10 +1239,6 @@ static enum pf_type __page_fault_type(unsigned long addr,
     if ( in_irq() )
         return real_fault;
 
-    /* Reserved bit violations are never spurious faults. */
-    if ( error_code & PFEC_reserved_bit )
-        return real_fault;
-
     required_flags  = _PAGE_PRESENT;
     if ( error_code & PFEC_write_access )
         required_flags |= _PAGE_RW;
@@ -1413,14 +1400,6 @@ static int fixup_page_fault(unsigned long addr, struct cpu_user_regs *regs)
     return 0;
 }
 
-/*
- * #PF error code:
- *  Bit 0: Protection violation (=1) ; Page not present (=0)
- *  Bit 1: Write access
- *  Bit 2: User mode (=1) ; Supervisor mode (=0)
- *  Bit 3: Reserved bit violation
- *  Bit 4: Instruction fetch
- */
 void do_page_fault(struct cpu_user_regs *regs)
 {
     unsigned long addr, fixup;
@@ -1439,6 +1418,21 @@ void do_page_fault(struct cpu_user_regs *regs)
     if ( unlikely(fixup_page_fault(addr, regs) != 0) )
         return;
 
+    /*
+     * Xen doesn't have reserved bits set in its pagetables, nor do we permit
+     * PV guests to write any.  Such entries would generally be vulnerable to
+     * the L1TF sidechannel.
+     *
+     * The shadow pagetable logic may use reserved bits as part of
+     * SHOPT_FAST_FAULT_PATH.  Pagefaults arising from these will be resolved
+     * via the fixup_page_fault() path.
+     *
+     * Anything remaining is an error, constituting corruption of the
+     * pagetables and probably an L1TF vulnerable gadget.
+     */
+    if ( error_code & PFEC_reserved_bit )
+        goto fatal;
+
     if ( unlikely(!guest_mode(regs)) )
     {
         enum pf_type pf_type = spurious_page_fault(addr, regs);
@@ -1457,13 +1451,12 @@ void do_page_fault(struct cpu_user_regs *regs)
         if ( likely((fixup = search_exception_table(regs)) != 0) )
         {
             perfc_incr(copy_user_faults);
-            if ( unlikely(regs->error_code & PFEC_reserved_bit) )
-                reserved_bit_page_fault(addr, regs);
             this_cpu(last_extable_addr) = regs->rip;
             regs->rip = fixup;
             return;
         }
 
+    fatal:
         if ( debugger_trap_fatal(TRAP_page_fault, regs) )
             return;
 
@@ -1475,9 +1468,6 @@ void do_page_fault(struct cpu_user_regs *regs)
               error_code, _p(addr));
     }
 
-    if ( unlikely(regs->error_code & PFEC_reserved_bit) )
-        reserved_bit_page_fault(addr, regs);
-
     pv_inject_page_fault(regs->error_code, addr);
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu May 21 15:45:32 2020
MIME-Version: 1.0
References: <49cc21c24b65ef5e1ce9810397c0fcd9d43f77f4.1590072675.git.rosbrookn@ainfosec.com>
In-Reply-To: <49cc21c24b65ef5e1ce9810397c0fcd9d43f77f4.1590072675.git.rosbrookn@ainfosec.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Thu, 21 May 2020 11:45:07 -0400
Message-ID: <CAEBZRSczH+baW+hV9MVnup4qPauu=uEZGK7EhrkFPy+pfS0Fmg@mail.gmail.com>
Subject: Re: [PATCH] golang/xenlight: add an empty line after DO NOT EDIT
 comment
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> When generating documentation, pkg.go.dev and godoc.org assume a comment
> that immediately precedes the package declaration is a "package
> comment", and should be shown in the documentation. Add an empty line
> after the DO NOT EDIT comment in generated files to prevent these
> comments from appearing as "package comments."

George,

When I re-generated the code, there were also changes to
types/helpers.gen.go corresponding to recent changes from the linux
stubdom series. How should we make sure the xenlight package is
up-to-date for the 4.14 release?

Thanks,
NR
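
[Editorial note: the package-comment behavior described in the quoted commit message can be observed with Go's own parser. A short illustrative sketch follows; the generated-header text is a stand-in, not the exact output of the xenlight generator.]

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
)

// packageDoc parses Go source and returns the text of the package doc
// comment, if any. A comment immediately preceding the package clause
// (no blank line) is treated as the package comment; a blank line
// detaches it, which is what the patch relies on.
func packageDoc(src string) string {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, "x.go", src, parser.ParseComments)
	if err != nil {
		panic(err)
	}
	if f.Doc == nil {
		return ""
	}
	return f.Doc.Text()
}

func main() {
	// No blank line: the DO NOT EDIT comment is taken as the package comment.
	attached := "// Code generated. DO NOT EDIT.\npackage xenlight\n"
	// Blank line added (as in the patch): no package comment is attached.
	detached := "// Code generated. DO NOT EDIT.\n\npackage xenlight\n"

	fmt.Printf("attached doc: %q\n", packageDoc(attached)) // non-empty
	fmt.Printf("detached doc: %q\n", packageDoc(detached)) // empty
}
```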


From xen-devel-bounces@lists.xenproject.org Thu May 21 16:00:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbncX-00085O-P0; Thu, 21 May 2020 16:00:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nFaT=7D=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jbncW-00083X-1m
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:00:32 +0000
X-Inumbo-ID: 321e1096-9b7c-11ea-ae69-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 321e1096-9b7c-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 16:00:31 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id g12so5958089wrw.1
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 09:00:31 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=spyphAFyJlXm/hizdcLp1TmVYviGGpi8QG1vV3BS/z0=;
 b=hJCs6nLwDU8DrnP44yaAEWgxNEiVHYXDQapS29DcWm96j/Y1AkcozbCeVZWzcYbsWy
 tgcDZ9VDpvkuaCIUdwuH3SO2KPOZE/X3XToGZrKg1nuAfNusCnRPaujTP1O1HWmyAF+B
 iSFqnug5gmutgHCSkaTryOBS0yZVx362gqZcC4IBqo/9gaFG6DL+0mHplM2irWFnyLqz
 G/wlI7qVxAL0MH1bcu5fZUCtDBlLmuwv6DAhUA/0kmhAgS8t7UFNJDhIUqMl26PCI/Ui
 bzkWR1aFGtUNo3O+ylqBSdPjlnakw+K2idl95yBssvtzMNvebqA8goCN7qLvcEqTG3QO
 /+5w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=spyphAFyJlXm/hizdcLp1TmVYviGGpi8QG1vV3BS/z0=;
 b=MNVBzCY/n5xqGmtgxWl0gljgn+5C4FmsIHDjx5sa4U0flJKhXVQ7ZZggw3Dzj13d4J
 bLhnFF8pGdFsVdKTboHpD8wBjHMKpgozleLOCkj6/DcQo4GMQWiNq8B9jQnYrDEy2Mqr
 DnNjuBBrLFtdeCEHBIuqDl0nlcRIptj3jbc95a/ONG01vD37TCz2DtxhcxBHfcnb6myi
 kfwfeVlzBSi+0k5djodDjmBzT592UpWlT17nzv1jJ083UfABYiCnxerwMMJkCBZRskeA
 BuAN/3no0Kb40GWBZq7oH4teLi0PSijYXfHflxGf+HEif5VkryAWdJ8b3Dy4DYs1tEHX
 RpCw==
X-Gm-Message-State: AOAM532Ry3rRtZUtAe9KhWnjG+ZLfhl8wEPP2g/3/0YMwlpMgm+gMRQm
 Zn1TtYoXvcpp5tcBNFKy4jI=
X-Google-Smtp-Source: ABdhPJwa9bmXHpKIjt0/NV+HqpHW183nSJ8PtUAeGScEV8q8uO20duP+mYu+rh187XltJN4Bs821+A==
X-Received: by 2002:a5d:6884:: with SMTP id h4mr557173wru.198.1590076830419;
 Thu, 21 May 2020 09:00:30 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id 88sm4117361wre.45.2020.05.21.09.00.28
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 21 May 2020 09:00:29 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: <paul@xen.org>, "'Julien Grall'" <julien@xen.org>,
 <xen-devel@lists.xenproject.org>
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-5-paul@xen.org>
 <dd544d5d-aa25-f035-d96b-747f07c23513@xen.org>
 <004401d62f84$1d9a2c40$58ce84c0$@xen.org>
In-Reply-To: <004401d62f84$1d9a2c40$58ce84c0$@xen.org>
Subject: RE: [PATCH v4 4/5] common/domain: add a domain context record for
 shared_info...
Date: Thu, 21 May 2020 17:00:28 +0100
Message-ID: <000001d62f88$f354b180$d9fe1480$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIT1cQvb66Smwxss7bTr3fjVXX5DAJ/pCGkAnVDoB8CdO2uVqf8F1Ig
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Paul Durrant <xadimgnik@gmail.com>
> Sent: 21 May 2020 16:26
> To: 'Julien Grall' <julien@xen.org>; xen-devel@lists.xenproject.org
> Cc: 'Paul Durrant' <pdurrant@amazon.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'Wei Liu'
> <wl@xen.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>;
> 'Jan Beulich' <jbeulich@suse.com>; 'Stefano Stabellini' <sstabellini@kernel.org>
> Subject: RE: [PATCH v4 4/5] common/domain: add a domain context record for shared_info...
> 
> > -----Original Message-----
> [snip]
> > > diff --git a/xen/common/domain.c b/xen/common/domain.c
> > > index 7cc9526139..14e96c3bc2 100644
> > > --- a/xen/common/domain.c
> > > +++ b/xen/common/domain.c
> > > @@ -33,6 +33,7 @@
> > >   #include <xen/xenoprof.h>
> > >   #include <xen/irq.h>
> > >   #include <xen/argo.h>
> > > +#include <xen/save.h>
> > >   #include <asm/debugger.h>
> > >   #include <asm/p2m.h>
> > >   #include <asm/processor.h>
> > > @@ -1649,6 +1650,64 @@ int continue_hypercall_on_cpu(
> > >       return 0;
> > >   }
> > >
> > > +static int save_shared_info(const struct domain *d, struct domain_context *c,
> > > +                            bool dry_run)
> > > +{
> > > +    struct domain_shared_info_context ctxt = {
> > > +#ifdef CONFIG_COMPAT
> > > +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> > > +#endif
> > > +        .buffer_size = sizeof(shared_info_t),
> > > +    };
> > > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > > +    int rc;
> > > +
> > > +    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
> > > +    if ( rc )
> > > +        return rc;
> > > +
> > > +    rc = domain_save_data(c, &ctxt, hdr_size);
> > > +    if ( rc )
> > > +        return rc;
> > > +
> > > +    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
> > > +    if ( rc )
> > > +        return rc;
> > > +
> > > +    return domain_save_end(c);
> > > +}
> > > +
> > > +static int load_shared_info(struct domain *d, struct domain_context *c)
> > > +{
> > > +    struct domain_shared_info_context ctxt;
> > > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > > +    unsigned int i;
> > > +    int rc;
> > > +
> > > +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> > > +    if ( rc || i ) /* expect only a single instance */
> > > +        return rc;
> >
> > This will return 0 if there are multiple instances. Is that intended?
> >
> 
> No, it ought to be an error... probably ENOENT.

Actually I think ENXIO might be better... ENOENT tends to imply something is missing rather than unexpected.

  Paul



From xen-devel-bounces@lists.xenproject.org Thu May 21 16:03:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:03:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbnf1-0008DS-5z; Thu, 21 May 2020 16:03:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VGGW=7D=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbnez-0008DK-BL
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:03:05 +0000
X-Inumbo-ID: 8dcd5df2-9b7c-11ea-ab2c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8dcd5df2-9b7c-11ea-ab2c-12813bfff9fa;
 Thu, 21 May 2020 16:03:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=V/yymTJ9MAfGq6/OyCXO5NReLvjSch/kQX3c2neQAHM=; b=YG1EEXYsSiipwysN4/lWy5EcRg
 HhwU6X/sS0tvJbDxCduJ3Et+dlYO2qQVJTv5GHoJVzJzVqtca14WrXV/yDg494Rd3IgT/y+vQ7IkT
 AbW0kzs2jUWuRqauSWsSc7tIJK/I+ObChiiGUvHmuG2bpnlW1+7yoJoaO8eL3gKHQhQg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbneu-0006TN-0W; Thu, 21 May 2020 16:03:00 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbnet-0007GG-OZ; Thu, 21 May 2020 16:02:59 +0000
Subject: Re: [PATCH v4 4/5] common/domain: add a domain context record for
 shared_info...
To: paul@xen.org, xen-devel@lists.xenproject.org
References: <20200521085932.10508-1-paul@xen.org>
 <20200521085932.10508-5-paul@xen.org>
 <dd544d5d-aa25-f035-d96b-747f07c23513@xen.org>
 <004401d62f84$1d9a2c40$58ce84c0$@xen.org>
 <000001d62f88$f354b180$d9fe1480$@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <0f562117-e541-8821-ae1a-a8f56c8004bc@xen.org>
Date: Thu, 21 May 2020 17:02:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <000001d62f88$f354b180$d9fe1480$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Paul Durrant' <pdurrant@amazon.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 21/05/2020 17:00, Paul Durrant wrote:
>> -----Original Message-----
>> From: Paul Durrant <xadimgnik@gmail.com>
>> Sent: 21 May 2020 16:26
>> To: 'Julien Grall' <julien@xen.org>; xen-devel@lists.xenproject.org
>> Cc: 'Paul Durrant' <pdurrant@amazon.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>; 'Wei Liu'
>> <wl@xen.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'George Dunlap' <george.dunlap@citrix.com>;
>> 'Jan Beulich' <jbeulich@suse.com>; 'Stefano Stabellini' <sstabellini@kernel.org>
>> Subject: RE: [PATCH v4 4/5] common/domain: add a domain context record for shared_info...
>>
>>> -----Original Message-----
>> [snip]
>>>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>>>> index 7cc9526139..14e96c3bc2 100644
>>>> --- a/xen/common/domain.c
>>>> +++ b/xen/common/domain.c
>>>> @@ -33,6 +33,7 @@
>>>>    #include <xen/xenoprof.h>
>>>>    #include <xen/irq.h>
>>>>    #include <xen/argo.h>
>>>> +#include <xen/save.h>
>>>>    #include <asm/debugger.h>
>>>>    #include <asm/p2m.h>
>>>>    #include <asm/processor.h>
>>>> @@ -1649,6 +1650,64 @@ int continue_hypercall_on_cpu(
>>>>        return 0;
>>>>    }
>>>>
>>>> +static int save_shared_info(const struct domain *d, struct domain_context *c,
>>>> +                            bool dry_run)
>>>> +{
>>>> +    struct domain_shared_info_context ctxt = {
>>>> +#ifdef CONFIG_COMPAT
>>>> +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
>>>> +#endif
>>>> +        .buffer_size = sizeof(shared_info_t),
>>>> +    };
>>>> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
>>>> +    int rc;
>>>> +
>>>> +    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
>>>> +    if ( rc )
>>>> +        return rc;
>>>> +
>>>> +    rc = domain_save_data(c, &ctxt, hdr_size);
>>>> +    if ( rc )
>>>> +        return rc;
>>>> +
>>>> +    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
>>>> +    if ( rc )
>>>> +        return rc;
>>>> +
>>>> +    return domain_save_end(c);
>>>> +}
>>>> +
>>>> +static int load_shared_info(struct domain *d, struct domain_context *c)
>>>> +{
>>>> +    struct domain_shared_info_context ctxt;
>>>> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
>>>> +    unsigned int i;
>>>> +    int rc;
>>>> +
>>>> +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
>>>> +    if ( rc || i ) /* expect only a single instance */
>>>> +        return rc;
>>>
>>> This will return 0 if there are multiple instances. Is that intended?
>>>
>>
>> No, it ought to be an error... probably ENOENT.
> 
> Actually I think ENXIO might be better... ENOENT tends to imply something is missing rather than unexpected.

ENXIO could work. Another one would be E2BIG.

I don't have a preference.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 16:19:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbnub-0000pV-LW; Thu, 21 May 2020 16:19:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbnub-0000pQ-1w
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:19:13 +0000
X-Inumbo-ID: ca8bf7ba-9b7e-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca8bf7ba-9b7e-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 16:19:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eb9/Zj/OqUDGEzqNOsht2LoIKITl2aJpdik09eEQgiE=; b=s5tcWVVA+cZ5YaZYMjbZfUALk
 jzgJYF2oiEE/X45iyeYasofTdAuhFBJeSjFWyHaOeu4krwjgEVMBZ5t9zMbWhcGg/VHvgpMsCZTG3
 A0Ye4IQTI8qnHHnekKIs9Hb0OvN+FQyKiNdKD7o56VwbHjm1bLQ9/trLg2IKEw/YaTnu0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbnuS-0006nH-QG; Thu, 21 May 2020 16:19:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbnuS-0002ME-Fr; Thu, 21 May 2020 16:19:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbnuS-0000w1-F8; Thu, 21 May 2020 16:19:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150294-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150294: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 linux-5.4:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=1cdaf895c99d319c0007d0b62818cf85fc4b087f
X-Osstest-Versions-That: linux=cbaf2369956178e68fb714a30dc86cf768dd596a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 16:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150294 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150294/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start      fail in 150273 pass in 150294
 test-amd64-amd64-xl-rtds     15 guest-saverestore          fail pass in 150273

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate  fail in 150273 like 150172
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150172
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                1cdaf895c99d319c0007d0b62818cf85fc4b087f
baseline version:
 linux                cbaf2369956178e68fb714a30dc86cf768dd596a

Last test of basis   150172  2020-05-14 06:09:31 Z    7 days
Testing same since   150273  2020-05-20 06:46:17 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Adam Ford <aford173@gmail.com>
  Adam McCoy <adam@forsedomani.com>
  Adrian Hunter <adrian.hunter@intel.com>
  Alan Maguire <alan.maguire@oracle.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alexei Starovoitov <ast@kernel.org>
  Amir Goldstein <amir73il@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ansuel Smith <ansuelsmth@gmail.com>
  Arnd Bergmann <arnd@arndb.de>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Ben Chuang <ben.chuang@genesyslogic.com.tw>
  Bob Peterson <rpeterso@redhat.com>
  Borislav Petkov <bp@suse.de>
  Catalin Marinas <catalin.marinas@arm.com>
  Chen-Yu Tsai <wens@csie.org>
  Chris Chiu <chiu@endlessm.com>
  Chris Down <chris@chrisdown.name>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Christophe Leroy <christophe.leroy@c-s.fr>
  Chuck Lever <chuck.lever@oracle.com>
  Cong Wang <xiyou.wangcong@gmail.com>
  Daisuke Matsuda <matsuda-daisuke@fujitsu.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Dave Flogeras <dflogeras2@gmail.com>
  Dave Wysochanski <dwysocha@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Douglas Gilbert <dgilbert@interlog.com>
  Eric Dumazet <edumazet@google.com>
  Eric W. Biederman <ebiederm@xmission.com>
  Eugeniu Rosca <erosca@de.adit-jv.com>
  Fabio Estevam <festevam@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gerd Hoffmann <kraxel@redhat.com>
  Grace Kao <grace.kao@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Grzegorz Kowal <custos.mentis@gmail.com>
  Guenter Roeck <linux@roeck-us.net>
  Guillaume Nault <gnault@redhat.com>
  Hangbin Liu <liuhangbin@gmail.com>
  Heiko Stuebner <heiko@sntech.de>
  Heiner Kallweit <hkallweit1@gmail.com>
  Hugh Dickins <hughd@google.com>
  Ilie Halip <ilie.halip@gmail.com>
  Ioana Ciornei <ioana.ciornei@nxp.com>
  Iris Liu <iris@onechronos.com>
  J. Bruce Fields <bfields@redhat.com>
  Jack Morgenstein <jackm@dev.mellanox.co.il>
  Jakub Kicinski <kuba@kernel.org>
  Jamal Hadi Salim <jhs@mojatatu.com>
  Jan Kara <jack@suse.cz>
  Jason Gunthorpe <jgg@mellanox.com>
  Jeremy Linton <jeremy.linton@arm.com>
  Jerome Brunet <jbrunet@baylibre.com>
  Jesus Ramos <jesus-ramos@live.com>
  Jim Mattson <jmattson@google.com>
  Jiri Kosina <jkosina@suse.cz>
  Johannes Weiner <hannes@cmpxchg.org>
  John Fastabend <john.fastabend@gmail.com>
  John Stultz <john.stultz@linaro.org>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Jue Wang <juew@google.com>
  Justin Swartz <justin.swartz@risingedge.co.za>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Kamal Mostafa <kamal@canonical.com>
  Kelly Littlepage <kelly@onechronos.com>
  Kevin Hilman <khilman@baylibre.com>
  Kishon Vijay Abraham I <kishon@ti.com>
  Kyungtae Kim <kt0755@gmail.com>
  Leon Romanovsky <leonro@mellanox.com>
  Li Jun <jun.li@nxp.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Luben Tuikov <luben.tuikov@amd.com>
  Lubomir Rintel <lkundrak@v3.sk>
  Luo bin <luobin9@huawei.com>
  Maciej Żenczykowski <maze@google.com>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Maor Gottlieb <maorg@mellanox.com>
  Marc Zyngier <maz@kernel.org>
  Marek Olšák <marek.olsak@amd.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin KaFai Lau <kafai@fb.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Maxime Ripard <maxime@cerno.tech>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Michal Hocko <mhocko@suse.com>
  Michal Vokáč <michal.vokac@ysoft.com>
  Mika Westerberg <mika.westerberg@linux.intel.com>
  Mike Marciniszyn <mike.marciniszyn@intel.com>
  Neil Armstrong <narmstrong@baylibre.com>
  Neil Horman <nhorman@tuxdriver.com>
  Nikolay Aleksandrov <nikolay@cumulusnetworks.com>
  Olga Kornievskaia <kolga@netapp.com>
  Olga Kornievskaia <olga.kornievskaia@gmail.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Abeni <pabeni@redhat.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Moore <paul@paul-moore.com>
  Peter Chen <peter.chen@nxp.com>
  Peter Jones <pjones@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Sutter <phil@nwl.cc>
  Pierre-Eric Pelloux-Prayer <pierre-eric.pelloux-prayer@amd.com>
  Potnuri Bharat Teja <bharat@chelsio.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Raul E Rangel <rrangel@chromium.org>
  Renius Chen <renius.chen@genesyslogic.com.tw>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Samu Nuutamo <samu.nuutamo@vincit.fi>
  Samuel Zou <zou_wei@huawei.com>
  Santosh Shilimkar <santosh.shilimkar@oracle.com>
  Sarthak Garg <sartgarg@codeaurora.org>
  Sasha Levin <sashal@kernel.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Shawn Guo <shawnguo@kernel.org>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Ser <contact@emersion.fr>
  Soheil Hassas Yeganeh <soheil@google.com>
  Sriharsha Allenki <sallenki@codeaurora.org>
  Stefan Hajnoczi <stefanha@redhat.com>
  Stefano Brivio <sbrivio@redhat.com>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Sultan Alsawaf <sultan@kerneltoast.com>
  Sung Lee <sung.lee@amd.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tejun Heo <tj@kernel.org>
  Tiecheng Zhou <Tiecheng.Zhou@amd.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Ursula Braun <ubraun@linux.ibm.com>
  Vasily Averin <vvs@virtuozzo.com>
  Veerabhadrarao Badiganti <vbadigan@codeaurora.org>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vincent Minet <v.minet@criteo.com>
  Vineeth Pillai <vineethrp@gmail.com>
  Vinod Koul <vkoul@kernel.org>
  Waiman Long <longman@redhat.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Willem de Bruijn <willemb@google.com>
  Wu Bo <wubo40@huawei.com>
  Xiao Yang <yangx.jy@cn.fujitsu.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yafang Shao <laoar.shao@gmail.com>
  Yang Shi <yang.shi@linux.alibaba.com>
  Yang Yingliang <yangyingliang@huawei.com>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Yuiko Oshino <yuiko.oshino@microchip.com>
  Zefan Li <lizefan@huawei.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   cbaf23699561..1cdaf895c99d  1cdaf895c99d319c0007d0b62818cf85fc4b087f -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu May 21 16:19:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:19:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbnv6-0000th-2T; Thu, 21 May 2020 16:19:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbnv5-0000tZ-3P
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:19:43 +0000
X-Inumbo-ID: e0a80282-9b7e-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0a80282-9b7e-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 16:19:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=aF3lgFMr/vmotaadmn736tS5L3slZKqVfZEiA8oxJlI=; b=nRFb7j9Gk59bhy5a/unLlltxIA
 dZz8TkvsTjpoWkSfUeBjor+jc0IHndhfhzcxa5YMI+tVJrC1yXF4Y7rPRC0xTTtiSkFOZYpSpf3KC
 m4inZfaPTZjKF6sL2i0wEILF40CkstlxAHklosdUCtvk/0VcaPzGask8E7wPnK33Rt/s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbnv4-0006nV-7s; Thu, 21 May 2020 16:19:42 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbnv3-00088L-Uf; Thu, 21 May 2020 16:19:42 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 0/5] domain context infrastructure
Date: Thu, 21 May 2020 17:19:34 +0100
Message-Id: <20200521161939.4508-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (5):
  xen/common: introduce a new framework for save/restore of 'domain'
    context
  xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
  tools/misc: add xen-domctx to present domain context
  common/domain: add a domain context record for shared_info...
  tools/libxc: make use of domain context SHARED_INFO record...

 .gitignore                             |   1 +
 tools/flask/policy/modules/xen.if      |   4 +-
 tools/libxc/include/xenctrl.h          |   5 +
 tools/libxc/xc_domain.c                |  56 +++++
 tools/libxc/xc_sr_common.c             |  67 ++++++
 tools/libxc/xc_sr_common.h             |  11 +-
 tools/libxc/xc_sr_common_x86_pv.c      |  74 ++++++
 tools/libxc/xc_sr_common_x86_pv.h      |   3 +
 tools/libxc/xc_sr_restore_x86_pv.c     |  26 +-
 tools/libxc/xc_sr_save_x86_pv.c        |  43 ++--
 tools/libxc/xg_save_restore.h          |   1 +
 tools/misc/Makefile                    |   4 +
 tools/misc/xen-domctx.c                | 278 ++++++++++++++++++++++
 xen/common/Makefile                    |   1 +
 xen/common/domain.c                    |  65 +++++
 xen/common/domctl.c                    | 173 ++++++++++++++
 xen/common/save.c                      | 314 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/domctl.h            |  41 ++++
 xen/include/public/save.h              | 100 ++++++++
 xen/include/xen/save.h                 | 170 +++++++++++++
 xen/xsm/flask/hooks.c                  |   6 +
 xen/xsm/flask/policy/access_vectors    |   4 +
 24 files changed, 1411 insertions(+), 46 deletions(-)
 create mode 100644 tools/misc/xen-domctx.c
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 16:19:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbnvB-0000ud-Aa; Thu, 21 May 2020 16:19:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbnvA-0000uQ-2a
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:19:48 +0000
X-Inumbo-ID: e24521b0-9b7e-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e24521b0-9b7e-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 16:19:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Sl8/Ipswh+utcciRPDSVGPSr0kxIyFRjP4FJbLloSIY=; b=tsUXmy0C9USvafkXAyIhtWgvvy
 z+N7xmDQxY/fMr85ZlyIV74Mo5iXVOP6Gx0UgADzl7NynfZKhTV+UzWOrzqAJKWoOzMm/+nJ6ID4h
 XgkUHYwhjuikcuQF+jAC2Ha/1snJM3UF9AGSbxvZy+Ty/qMtJ/GH0uYToKBv5ZDVLG2s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbnv6-0006nZ-9o; Thu, 21 May 2020 16:19:44 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbnv5-00088L-RV; Thu, 21 May 2020 16:19:44 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 1/5] xen/common: introduce a new framework for save/restore
 of 'domain' context
Date: Thu, 21 May 2020 17:19:35 +0100
Message-Id: <20200521161939.4508-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521161939.4508-1-paul@xen.org>
References: <20200521161939.4508-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To allow enlightened HVM guests (i.e. those that have PV drivers) to be
migrated without their co-operation, it will be necessary to transfer 'PV'
state such as event channel state, grant entry state, etc.

Currently there is a framework (entered via the hvm_save/load() functions)
that allows a domain's 'HVM' (architectural) state to be transferred, but
'PV' state is also common to pure PV guests, so that framework is not
really suitable.

This patch adds the new public header and low level implementation of a new
common framework, entered via the domain_save/load() functions. Subsequent
patches will introduce other parts of the framework, and code that will
make use of it within the current version of the libxc migration stream.

This patch also marks the HVM-only framework as deprecated in favour of the
new framework.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Julien Grall <julien@xen.org>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Addressed further comments from Jan

v3:
 - Addressed comments from Julien and Jan
 - Save handlers no longer need to state entry length up-front
 - Save handlers expected to deal with multiple instances internally
 - Entries are now auto-padded to 8 byte boundary

v2:
 - Allow multi-stage save/load to avoid the need to double-buffer
 - Get rid of the masks and add an 'ignore' flag instead
 - Create copy function union to preserve const save buffer
 - Deprecate HVM-only framework
---
 xen/common/Makefile                    |   1 +
 xen/common/save.c                      | 314 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/save.h              |  89 +++++++
 xen/include/xen/save.h                 | 170 +++++++++++++
 6 files changed, 584 insertions(+)
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8cde65370..90553ba5d7 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -37,6 +37,7 @@ obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
+obj-y += save.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
diff --git a/xen/common/save.c b/xen/common/save.c
new file mode 100644
index 0000000000..f7b815b5ef
--- /dev/null
+++ b/xen/common/save.c
@@ -0,0 +1,314 @@
+/*
+ * save.c: Save and restore PV guest state common to all domain types.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/compile.h>
+#include <xen/save.h>
+
+struct domain_context {
+    struct domain *domain;
+    const char *name; /* for logging purposes */
+    struct domain_save_descriptor desc;
+    size_t len; /* for internal accounting */
+    union {
+        const struct domain_save_ops *save;
+        const struct domain_load_ops *load;
+    } ops;
+    void *priv;
+};
+
+static struct {
+    const char *name;
+    domain_save_handler save;
+    domain_load_handler load;
+} handlers[DOMAIN_SAVE_CODE_MAX + 1];
+
+void __init domain_register_save_type(unsigned int typecode,
+                                      const char *name,
+                                      domain_save_handler save,
+                                      domain_load_handler load)
+{
+    BUG_ON(typecode >= ARRAY_SIZE(handlers));
+
+    ASSERT(!handlers[typecode].save);
+    ASSERT(!handlers[typecode].load);
+
+    handlers[typecode].name = name;
+    handlers[typecode].save = save;
+    handlers[typecode].load = load;
+}
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int instance)
+{
+    int rc;
+
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+    ASSERT(!c->desc.length); /* Should always be zero during domain_save() */
+    ASSERT(!c->len); /* Verify domain_save_end() was called */
+
+    rc = c->ops.save->begin(c->priv, &c->desc);
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+int domain_save_data(struct domain_context *c, const void *src, size_t len)
+{
+    int rc = c->ops.save->append(c->priv, src, len);
+
+    if ( !rc )
+        c->len += len;
+
+    return rc;
+}
+
+#define DOMAIN_SAVE_ALIGN 8
+
+int domain_save_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
+    int rc;
+
+    if ( len )
+    {
+        static const uint8_t pad[DOMAIN_SAVE_ALIGN] = {};
+
+        rc = domain_save_data(c, pad, len);
+
+        if ( rc )
+            return rc;
+    }
+    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));
+
+    if ( c->name )
+        gdprintk(XENLOG_INFO, "%pd save: %s[%u] +%zu (-%zu)\n", d, c->name,
+                 c->desc.instance, c->len, len);
+
+    rc = c->ops.save->end(c->priv, c->len);
+    c->len = 0;
+
+    return rc;
+}
+
+int domain_save(struct domain *d, const struct domain_save_ops *ops,
+                void *priv, bool dry_run)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.save = ops,
+        .priv = priv,
+    };
+    static const struct domain_save_header h = {
+        .magic = DOMAIN_SAVE_MAGIC,
+        .xen_major = XEN_VERSION,
+        .xen_minor = XEN_SUBVERSION,
+        .version = DOMAIN_SAVE_VERSION,
+    };
+    const struct domain_save_end e = {};
+    unsigned int i;
+    int rc;
+
+    ASSERT(d != current->domain);
+    domain_pause(d);
+
+    c.name = !dry_run ? "HEADER" : NULL;
+    c.desc.typecode = DOMAIN_SAVE_CODE(HEADER);
+
+    rc = DOMAIN_SAVE_ENTRY(HEADER, &c, 0, &h, sizeof(h));
+    if ( rc )
+        goto out;
+
+    for ( i = 0; i < ARRAY_SIZE(handlers); i++ )
+    {
+        domain_save_handler save = handlers[i].save;
+
+        if ( !save )
+            continue;
+
+        c.name = !dry_run ? handlers[i].name : NULL;
+        memset(&c.desc, 0, sizeof(c.desc));
+        c.desc.typecode = i;
+
+        rc = save(d, &c, dry_run);
+        if ( rc )
+            goto out;
+    }
+
+    c.name = !dry_run ? "END" : NULL;
+    memset(&c.desc, 0, sizeof(c.desc));
+    c.desc.typecode = DOMAIN_SAVE_CODE(END);
+
+    rc = DOMAIN_SAVE_ENTRY(END, &c, 0, &e, sizeof(e));
+
+ out:
+    domain_unpause(d);
+
+    return rc;
+}
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int *instance)
+{
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+
+    ASSERT(!c->len); /* Verify domain_load_end() was called */
+
+    *instance = c->desc.instance;
+
+    return 0;
+}
+
+int domain_load_data(struct domain_context *c, void *dst, size_t len)
+{
+    size_t copy_len = min_t(size_t, len, c->desc.length - c->len);
+    int rc;
+
+    c->len += copy_len;
+    ASSERT(c->len <= c->desc.length);
+
+    rc = copy_len ? c->ops.load->read(c->priv, dst, copy_len) : 0;
+    if ( rc )
+        return rc;
+
+    /* Zero extend if the entry is exhausted */
+    len -= copy_len;
+    if ( len )
+    {
+        dst += copy_len;
+        memset(dst, 0, len);
+    }
+
+    return 0;
+}
+
+int domain_load_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    size_t len = c->desc.length - c->len;
+
+    while ( c->len != c->desc.length ) /* unconsumed data or pad */
+    {
+        uint8_t pad;
+        int rc = domain_load_data(c, &pad, sizeof(pad));
+
+        if ( rc )
+            return rc;
+
+        if ( pad )
+            return -EINVAL;
+    }
+
+    gdprintk(XENLOG_INFO, "%pd load: %s[%u] +%zu (-%zu)\n", d, c->name,
+             c->desc.instance, c->len, len);
+
+    c->len = 0;
+
+    return 0;
+}
+
+int domain_load(struct domain *d, const struct domain_load_ops *ops,
+                void *priv)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.load = ops,
+        .priv = priv,
+    };
+    unsigned int instance;
+    struct domain_save_header h;
+    int rc;
+
+    ASSERT(d != current->domain);
+
+    rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+    if ( rc )
+        return rc;
+
+    c.name = "HEADER";
+
+    rc = DOMAIN_LOAD_ENTRY(HEADER, &c, &instance, &h, sizeof(h));
+    if ( rc )
+        return rc;
+
+    if ( instance || h.magic != DOMAIN_SAVE_MAGIC ||
+         h.version != DOMAIN_SAVE_VERSION )
+        return -EINVAL;
+
+    domain_pause(d);
+
+    for (;;)
+    {
+        unsigned int i;
+        domain_load_handler load;
+
+        rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+        if ( rc )
+            break; /* don't return with the domain still paused */
+
+        rc = -EINVAL;
+
+        if ( c.desc.typecode == DOMAIN_SAVE_CODE(END) )
+        {
+            struct domain_save_end e;
+
+            c.name = "END";
+
+            rc = DOMAIN_LOAD_ENTRY(END, &c, &instance, &e, sizeof(e));
+
+            if ( !rc && instance )
+                rc = -EINVAL; /* break, so the domain gets unpaused */
+
+            break;
+        }
+
+        i = c.desc.typecode;
+        if ( i >= ARRAY_SIZE(handlers) )
+            break;
+
+        c.name = handlers[i].name;
+        load = handlers[i].load;
+
+        rc = load ? load(d, &c) : -EOPNOTSUPP;
+        if ( rc )
+            break;
+    }
+
+    domain_unpause(d);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
index 75b8e65bcb..d5b0c15203 100644
--- a/xen/include/public/arch-arm/hvm/save.h
+++ b/xen/include/public/arch-arm/hvm/save.h
@@ -26,6 +26,11 @@
 #ifndef __XEN_PUBLIC_HVM_SAVE_ARM_H__
 #define __XEN_PUBLIC_HVM_SAVE_ARM_H__
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif
 
 /*
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 773a380bc2..e61e2dbcd7 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -648,6 +648,11 @@ struct hvm_msr {
  */
 #define HVM_SAVE_CODE_MAX 20
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
 
 /*
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
new file mode 100644
index 0000000000..551dbbddb8
--- /dev/null
+++ b/xen/include/public/save.h
@@ -0,0 +1,89 @@
+/*
+ * save.h
+ *
+ * Structure definitions for common PV/HVM domain state that is held by
+ * Xen and must be saved along with the domain's memory.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef XEN_PUBLIC_SAVE_H
+#define XEN_PUBLIC_SAVE_H
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+#include "xen.h"
+
+/* Entry data is preceded by a descriptor */
+struct domain_save_descriptor {
+    uint16_t typecode;
+
+    /*
+     * Instance number of the entry (since there may be multiple of some
+     * types of entries).
+     */
+    uint16_t instance;
+
+    /* Entry length not including this descriptor */
+    uint32_t length;
+};
+
+/*
+ * Each entry has a typecode associated with it. DECLARE_DOMAIN_SAVE_TYPE
+ * binds the typecode and entry structure together, although it is not
+ * intended that the resulting type is ever instantiated.
+ */
+#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
+    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
+
+#define DOMAIN_SAVE_CODE(_x) \
+    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->c))
+#define DOMAIN_SAVE_TYPE(_x) \
+    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->t)
+
+/*
+ * All entries will be zero-padded to the next 64-bit boundary when saved,
+ * so there is no need to include trailing pad fields in structure
+ * definitions.
+ * When loading, entries will be zero-extended if the load handler reads
+ * beyond the length specified in the descriptor.
+ */
+
+/* Terminating entry */
+struct domain_save_end {};
+DECLARE_DOMAIN_SAVE_TYPE(END, 0, struct domain_save_end);
+
+#define DOMAIN_SAVE_MAGIC   0x53415645
+#define DOMAIN_SAVE_VERSION 0x00000001
+
+/* Initial entry */
+struct domain_save_header {
+    uint32_t magic;                /* Must be DOMAIN_SAVE_MAGIC */
+    uint16_t xen_major, xen_minor; /* Xen version */
+    uint32_t version;              /* Save format version */
+};
+DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
+
+#define DOMAIN_SAVE_CODE_MAX 1
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
+#endif /* XEN_PUBLIC_SAVE_H */
diff --git a/xen/include/xen/save.h b/xen/include/xen/save.h
new file mode 100644
index 0000000000..f2e58bafef
--- /dev/null
+++ b/xen/include/xen/save.h
@@ -0,0 +1,170 @@
+/*
+ * save.h: support routines for save/restore
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef XEN_SAVE_H
+#define XEN_SAVE_H
+
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+
+#include <public/save.h>
+
+struct domain_context;
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int instance);
+
+#define DOMAIN_SAVE_BEGIN(x, c, i) \
+    domain_save_begin((c), DOMAIN_SAVE_CODE(x), (i))
+
+int domain_save_data(struct domain_context *c, const void *data, size_t len);
+int domain_save_end(struct domain_context *c);
+
+static inline int domain_save_entry(struct domain_context *c,
+                                    unsigned int typecode,
+                                    unsigned int instance, const void *src,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_save_begin(c, typecode, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, src, len);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+#define DOMAIN_SAVE_ENTRY(x, c, i, s, l) \
+    domain_save_entry((c), DOMAIN_SAVE_CODE(x), (i), (s), (l))
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int *instance);
+
+#define DOMAIN_LOAD_BEGIN(x, c, i) \
+    domain_load_begin((c), DOMAIN_SAVE_CODE(x), (i))
+
+int domain_load_data(struct domain_context *c, void *data, size_t len);
+int domain_load_end(struct domain_context *c);
+
+static inline int domain_load_entry(struct domain_context *c,
+                                    unsigned int typecode,
+                                    unsigned int *instance, void *dst,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_load_begin(c, typecode, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_load_data(c, dst, len);
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+#define DOMAIN_LOAD_ENTRY(x, c, i, d, l) \
+    domain_load_entry((c), DOMAIN_SAVE_CODE(x), (i), (d), (l))
+
+/*
+ * The 'dry_run' flag indicates that the caller of domain_save() (see below)
+ * is not trying to actually acquire the data, only the size of the data.
+ * The save handler can therefore limit work to only that which is necessary
+ * to call domain_save_data() the correct number of times with accurate values
+ * for 'len'.
+ */
+typedef int (*domain_save_handler)(const struct domain *d,
+                                   struct domain_context *c,
+                                   bool dry_run);
+typedef int (*domain_load_handler)(struct domain *d,
+                                   struct domain_context *c);
+
+void domain_register_save_type(unsigned int typecode, const char *name,
+                               domain_save_handler save,
+                               domain_load_handler load);
+
+/*
+ * Register save and load handlers.
+ *
+ * Save handlers will be invoked in an order which copes with any inter-
+ * entry dependencies. For now this means that HEADER will come first and
+ * END will come last, all others being invoked in order of 'typecode'.
+ *
+ * Load handlers will be invoked in the order of entries present in the
+ * buffer.
+ */
+#define DOMAIN_REGISTER_SAVE_LOAD(x, s, l)                    \
+    static int __init __domain_register_##x##_save_load(void) \
+    {                                                         \
+        domain_register_save_type(                            \
+            DOMAIN_SAVE_CODE(x),                              \
+            #x,                                               \
+            &(s),                                             \
+            &(l));                                            \
+                                                              \
+        return 0;                                             \
+    }                                                         \
+    __initcall(__domain_register_##x##_save_load);
+
+/* Callback functions */
+struct domain_save_ops {
+    /*
+     * Begin a new entry with the given descriptor (only type and instance
+     * are valid).
+     */
+    int (*begin)(void *priv, const struct domain_save_descriptor *desc);
+    /* Append data/padding to the buffer */
+    int (*append)(void *priv, const void *data, size_t len);
+    /*
+     * Complete the entry by updating the descriptor with the total
+     * length of the appended data (not including padding).
+     */
+    int (*end)(void *priv, size_t len);
+};
+
+struct domain_load_ops {
+    /* Read data/padding from the buffer */
+    int (*read)(void *priv, void *data, size_t len);
+};
+
+/*
+ * Entry points:
+ *
+ * ops:     These are callback functions provided by the caller that will
+ *          be used to write to (in the save case) or read from (in the
+ *          load case) the context buffer. See above for more detail.
+ * priv:    This is a pointer that will be passed to the copy function to
+ *          allow it to identify the context buffer and the current state
+ *          of the save or load operation.
+ * dry_run: If this is set then the caller of domain_save() is only trying
+ *          to acquire the total size of the data, not the data itself.
+ *          In this case the caller may supply different ops to avoid doing
+ *          unnecessary work.
+ */
+int domain_save(struct domain *d, const struct domain_save_ops *ops,
+                void *priv, bool dry_run);
+int domain_load(struct domain *d, const struct domain_load_ops *ops,
+                void *priv);
+
+#endif /* XEN_SAVE_H */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 16:20:03 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 5/5] tools/libxc: make use of domain context SHARED_INFO
 record...
Date: Thu, 21 May 2020 17:19:39 +0100
Message-Id: <20200521161939.4508-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521161939.4508-1-paul@xen.org>
References: <20200521161939.4508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>

... in the save/restore code.

This patch replaces direct mapping of the shared_info_frame (retrieved
using XEN_DOMCTL_getdomaininfo) with save/load of the domain context
SHARED_INFO record.

No modifications are made to the definition of the migration stream at
this point. Subsequent patches will define a record in the libxc domain
image format for passing domain context and convert the save/restore code
to use that.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>

NOTE: Ian requested ack from Andrew

v5:
 - Added BUILD_BUG_ON() in write_shared_info() to ensure copied data is not
   bigger than the record buffer

v4:
 - write_shared_info() now needs to allocate the record data since the
   shared info buffer is smaller than PAGE_SIZE

v3:
 - Moved basic get/set domain context functions to common code

v2:
 - Re-based (now making use of DOMAIN_SAVE_FLAG_IGNORE)
---
 tools/libxc/xc_sr_common.c         | 67 +++++++++++++++++++++++++++
 tools/libxc/xc_sr_common.h         | 11 ++++-
 tools/libxc/xc_sr_common_x86_pv.c  | 74 ++++++++++++++++++++++++++++++
 tools/libxc/xc_sr_common_x86_pv.h  |  3 ++
 tools/libxc/xc_sr_restore_x86_pv.c | 26 ++++-------
 tools/libxc/xc_sr_save_x86_pv.c    | 44 ++++++++----------
 tools/libxc/xg_save_restore.h      |  1 +
 7 files changed, 182 insertions(+), 44 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index dd9a11b4b5..1acb3765aa 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -138,6 +138,73 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
     return 0;
 };
 
+int get_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    size_t len = 0;
+    int rc;
+
+    if ( ctx->domain_context.buffer )
+    {
+        ERROR("Domain context already present");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get size of domain context");
+        return -1;
+    }
+
+    ctx->domain_context.buffer = malloc(len);
+    if ( !ctx->domain_context.buffer )
+    {
+        PERROR("Unable to allocate memory for domain context");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                              &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get domain context");
+        return -1;
+    }
+
+    ctx->domain_context.len = len;
+
+    return 0;
+}
+
+int set_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    int rc;
+
+    if ( !ctx->domain_context.buffer )
+    {
+        ERROR("Domain context not present");
+        return -1;
+    }
+
+    rc = xc_domain_setcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                              ctx->domain_context.len);
+
+    if ( rc < 0 )
+    {
+        PERROR("Unable to set domain context");
+        return -1;
+    }
+
+    return 0;
+}
+
+void common_cleanup(struct xc_sr_context *ctx)
+{
+    free(ctx->domain_context.buffer);
+}
+
 static void __attribute__((unused)) build_assertions(void)
 {
     BUILD_BUG_ON(sizeof(struct xc_sr_ihdr) != 24);
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 5dd51ccb15..0d61978b08 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -208,6 +208,11 @@ struct xc_sr_context
 
     xc_dominfo_t dominfo;
 
+    struct {
+        void *buffer;
+        unsigned int len;
+    } domain_context;
+
     union /* Common save or restore data. */
     {
         struct /* Save data. */
@@ -314,7 +319,7 @@ struct xc_sr_context
                 /* The guest pfns containing the p2m leaves */
                 xen_pfn_t *p2m_pfns;
 
-                /* Read-only mapping of guests shared info page */
+                /* Pointer to shared_info (located in context buffer) */
                 shared_info_any_t *shinfo;
 
                 /* p2m generation count for verifying validity of local p2m. */
@@ -425,6 +430,10 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
 int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
                   const xen_pfn_t *original_pfns, const uint32_t *types);
 
+int get_domain_context(struct xc_sr_context *ctx);
+int set_domain_context(struct xc_sr_context *ctx);
+void common_cleanup(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_common_x86_pv.c b/tools/libxc/xc_sr_common_x86_pv.c
index d3d425cb82..69d9b142b8 100644
--- a/tools/libxc/xc_sr_common_x86_pv.c
+++ b/tools/libxc/xc_sr_common_x86_pv.c
@@ -182,6 +182,80 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
     return rc;
 }
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int off = 0;
+    int rc;
+
+#define GET_PTR(_x)                                                         \
+    do {                                                                    \
+        if ( ctx->domain_context.len - off < sizeof(*(_x)) )                \
+        {                                                                   \
+            ERROR("Need another %lu bytes of context, only %u available\n", \
+                  sizeof(*(_x)), ctx->domain_context.len - off);            \
+            return -1;                                                      \
+        }                                                                   \
+        (_x) = ctx->domain_context.buffer + off;                            \
+    } while (false);
+
+    rc = get_domain_context(ctx);
+    if ( rc )
+        return rc;
+
+    for ( ; ; )
+    {
+        struct domain_save_descriptor *desc;
+
+        GET_PTR(desc);
+
+        off += sizeof(*desc);
+
+        switch (desc->typecode)
+        {
+        case DOMAIN_SAVE_CODE(SHARED_INFO):
+        {
+            DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+
+            GET_PTR(s);
+
+            ctx->x86.pv.shinfo = (shared_info_any_t *)s->buffer;
+            break;
+        }
+        default:
+            break;
+        }
+
+        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
+            break;
+
+        off += desc->length;
+    }
+
+    if ( !ctx->x86.pv.shinfo )
+    {
+        ERROR("Failed to get SHARED_INFO\n");
+        return -1;
+    }
+
+    return 0;
+
+#undef GET_PTR
+}
+
+int x86_pv_set_shinfo(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( !ctx->x86.pv.shinfo )
+    {
+        ERROR("SHARED_INFO buffer not present\n");
+        return -1;
+    }
+
+    return set_domain_context(ctx);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_sr_common_x86_pv.h b/tools/libxc/xc_sr_common_x86_pv.h
index 2ed03309af..01442f48fb 100644
--- a/tools/libxc/xc_sr_common_x86_pv.h
+++ b/tools/libxc/xc_sr_common_x86_pv.h
@@ -97,6 +97,9 @@ int x86_pv_domain_info(struct xc_sr_context *ctx);
  */
 int x86_pv_map_m2p(struct xc_sr_context *ctx);
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx);
+int x86_pv_set_shinfo(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index 904ccc462a..21982a38ad 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -865,7 +865,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
     xc_interface *xch = ctx->xch;
     unsigned int i;
     int rc = -1;
-    shared_info_any_t *guest_shinfo = NULL;
+    shared_info_any_t *guest_shinfo;
     const shared_info_any_t *old_shinfo = rec->data;
 
     if ( !ctx->x86.pv.restore.seen_pv_info )
@@ -878,18 +878,14 @@ static int handle_shared_info(struct xc_sr_context *ctx,
     {
         ERROR("X86_PV_SHARED_INFO record wrong size: length %u"
               ", expected 4096", rec->length);
-        goto err;
+        return -1;
     }
 
-    guest_shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
-        ctx->dominfo.shared_info_frame);
-    if ( !guest_shinfo )
-    {
-        PERROR("Failed to map Shared Info at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        goto err;
-    }
+    rc = x86_pv_get_shinfo(ctx);
+    if ( rc )
+        return rc;
+
+    guest_shinfo = ctx->x86.pv.shinfo;
 
     MEMCPY_FIELD(guest_shinfo, old_shinfo, vcpu_info, ctx->x86.pv.width);
     MEMCPY_FIELD(guest_shinfo, old_shinfo, arch, ctx->x86.pv.width);
@@ -904,13 +900,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
 
     MEMSET_ARRAY_FIELD(guest_shinfo, evtchn_mask, 0xff, ctx->x86.pv.width);
 
-    rc = 0;
-
- err:
-    if ( guest_shinfo )
-        munmap(guest_shinfo, PAGE_SIZE);
-
-    return rc;
+    return x86_pv_set_shinfo(ctx);
 }
 
 /* restore_ops function. */
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index f3ccf5bb4b..fdd172b639 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -9,25 +9,6 @@ static inline bool is_canonical_address(xen_vaddr_t vaddr)
     return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
 }
 
-/*
- * Maps the guests shared info page.
- */
-static int map_shinfo(struct xc_sr_context *ctx)
-{
-    xc_interface *xch = ctx->xch;
-
-    ctx->x86.pv.shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
-    if ( !ctx->x86.pv.shinfo )
-    {
-        PERROR("Failed to map shared info frame at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        return -1;
-    }
-
-    return 0;
-}
-
 /*
  * Copy a list of mfns from a guest, accounting for differences between guest
  * and toolstack width.  Can fail if truncation would occur.
@@ -854,13 +835,27 @@ static int write_x86_pv_p2m_frames(struct xc_sr_context *ctx)
  */
 static int write_shared_info(struct xc_sr_context *ctx)
 {
+    xc_interface *xch = ctx->xch;
     struct xc_sr_record rec = {
         .type = REC_TYPE_SHARED_INFO,
         .length = PAGE_SIZE,
-        .data = ctx->x86.pv.shinfo,
     };
+    int rc;
 
-    return write_record(ctx, &rec);
+    if ( !(rec.data = calloc(1, PAGE_SIZE)) )
+    {
+        ERROR("Cannot allocate buffer for SHARED_INFO data");
+        return -1;
+    }
+
+    BUILD_BUG_ON(sizeof(*ctx->x86.pv.shinfo) > PAGE_SIZE);
+    memcpy(rec.data, ctx->x86.pv.shinfo, sizeof(*ctx->x86.pv.shinfo));
+
+    rc = write_record(ctx, &rec);
+
+    free(rec.data);
+
+    return rc;
 }
 
 /*
@@ -1041,7 +1036,7 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
     if ( rc )
         return rc;
 
-    rc = map_shinfo(ctx);
+    rc = x86_pv_get_shinfo(ctx);
     if ( rc )
         return rc;
 
@@ -1112,12 +1107,11 @@ static int x86_pv_cleanup(struct xc_sr_context *ctx)
     if ( ctx->x86.pv.p2m )
         munmap(ctx->x86.pv.p2m, ctx->x86.pv.p2m_frames * PAGE_SIZE);
 
-    if ( ctx->x86.pv.shinfo )
-        munmap(ctx->x86.pv.shinfo, PAGE_SIZE);
-
     if ( ctx->x86.pv.m2p )
         munmap(ctx->x86.pv.m2p, ctx->x86.pv.nr_m2p_frames * PAGE_SIZE);
 
+    common_cleanup(ctx);
+
     return 0;
 }
 
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index 303081df0d..296b523963 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -19,6 +19,7 @@
 
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
+#include <xen/save.h>
 
 /*
 ** We process save/restore/migrate in batches of pages; the below
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 16:20:04 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 3/5] tools/misc: add xen-domctx to present domain context
Date: Thu, 21 May 2020 17:19:37 +0100
Message-Id: <20200521161939.4508-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521161939.4508-1-paul@xen.org>
References: <20200521161939.4508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>

This tool is analogous to 'xen-hvmctx', which presents HVM context.
Subsequent patches will add 'dump' functions when new records are
introduced.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>

NOTE: Ian requested ack from Andrew

v3:
 - Re-worked to avoid copying onto stack
 - Added optional typecode and instance arguments

v2:
 - Change name from 'xen-ctx' to 'xen-domctx'
---
 .gitignore              |   1 +
 tools/misc/Makefile     |   4 +
 tools/misc/xen-domctx.c | 200 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 205 insertions(+)
 create mode 100644 tools/misc/xen-domctx.c

diff --git a/.gitignore b/.gitignore
index 7418ce9829..6da3030f0d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -209,6 +209,7 @@ tools/misc/xen_cpuperf
 tools/misc/xen-cpuid
 tools/misc/xen-detect
 tools/misc/xen-diag
+tools/misc/xen-domctx
 tools/misc/xen-tmem-list-parse
 tools/misc/xen-livepatch
 tools/misc/xenperf
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 63947bfadc..ef25524354 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -30,6 +30,7 @@ INSTALL_SBIN                   += xenpm
 INSTALL_SBIN                   += xenwatchdogd
 INSTALL_SBIN                   += xen-livepatch
 INSTALL_SBIN                   += xen-diag
+INSTALL_SBIN                   += xen-domctx
 INSTALL_SBIN += $(INSTALL_SBIN-y)
 
 # Everything to be installed in a private bin/
@@ -108,6 +109,9 @@ xen-livepatch: xen-livepatch.o
 xen-diag: xen-diag.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xen-domctx: xen-domctx.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
 xen-lowmemd: xen-lowmemd.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
new file mode 100644
index 0000000000..243325dfce
--- /dev/null
+++ b/tools/misc/xen-domctx.c
@@ -0,0 +1,200 @@
+/*
+ * xen-domctx.c
+ *
+ * Print out domain save records in a human-readable way.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <xenctrl.h>
+#include <xen/xen.h>
+#include <xen/domctl.h>
+#include <xen/save.h>
+
+static void *buf = NULL;
+static size_t len, off;
+
+#define GET_PTR(_x)                                                        \
+    do {                                                                   \
+        if ( len - off < sizeof(*(_x)) )                                   \
+        {                                                                  \
+            fprintf(stderr,                                                \
+                    "error: need another %lu bytes, only %lu available\n", \
+                    sizeof(*(_x)), len - off);                             \
+            exit(1);                                                       \
+        }                                                                  \
+        (_x) = buf + off;                                                  \
+    } while (false);
+
+static void dump_header(void)
+{
+    DOMAIN_SAVE_TYPE(HEADER) *h;
+
+    GET_PTR(h);
+
+    printf("    HEADER: magic %#x, version %u\n",
+           h->magic, h->version);
+
+}
+
+static void dump_end(void)
+{
+    DOMAIN_SAVE_TYPE(END) *e;
+
+    GET_PTR(e);
+
+    printf("    END\n");
+}
+
+static void usage(const char *prog)
+{
+    fprintf(stderr, "usage: %s <domid> [ <typecode> [ <instance> ]]\n",
+            prog);
+    exit(1);
+}
+
+int main(int argc, char **argv)
+{
+    char *s, *e;
+    long domid;
+    long typecode = -1;
+    long instance = -1;
+    unsigned int entry;
+    xc_interface *xch;
+    int rc;
+
+    if ( argc < 2 || argc > 4 )
+        usage(argv[0]);
+
+    s = e = argv[1];
+    domid = strtol(s, &e, 0);
+
+    if ( *s == '\0' || *e != '\0' ||
+         domid < 0 || domid >= DOMID_FIRST_RESERVED )
+    {
+        fprintf(stderr, "invalid domid '%s'\n", s);
+        exit(1);
+    }
+
+    if ( argc >= 3 )
+    {
+        s = e = argv[2];
+        typecode = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid typecode '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    if ( argc == 4 )
+    {
+        s = e = argv[3];
+        instance = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid instance '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    xch = xc_interface_open(0, 0, 0);
+    if ( !xch )
+    {
+        fprintf(stderr, "error: can't open libxc handle\n");
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get record length for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+
+    buf = malloc(len);
+    if ( !buf )
+    {
+        fprintf(stderr, "error: can't allocate %zu bytes\n", len);
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, buf, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get domain record for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+    off = 0;
+
+    entry = 0;
+    for ( ; ; )
+    {
+        struct domain_save_descriptor *desc;
+
+        GET_PTR(desc);
+
+        off += sizeof(*desc);
+
+        if ( (typecode < 0 || typecode == desc->typecode) &&
+             (instance < 0 || instance == desc->instance) )
+        {
+            printf("[%u] type: %u instance: %u length: %u\n", entry++,
+                   desc->typecode, desc->instance, desc->length);
+
+            switch (desc->typecode)
+            {
+            case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(END): dump_end(); break;
+            default:
+                printf("Unknown type %u: skipping\n", desc->typecode);
+                break;
+            }
+        }
+
+        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
+            break;
+
+        off += desc->length;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 16:20:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbnvL-0000ys-Ac; Thu, 21 May 2020 16:19:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbnvK-0000yS-3T
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:19:58 +0000
X-Inumbo-ID: e3bbed44-9b7e-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3bbed44-9b7e-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 16:19:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QfGmoXyhQuN/EVxQAxJlmVEjrcceHfUNWPHkPI0BpEQ=; b=0UcGuSWjabQ6qD4s4WMXkM5UJm
 JhbRjcH0xTpYT6vGQcXaa9L+rYKNtWZl+1l1QbYX6R7eyVmkvkdN7KxdYhlOncJYTSTGO5Dxp3BrJ
 0JttAfT/5a3/Ng6JM8HMSd98J56qjwLtU0GCMlV6CFzimu4OY9Hy3bk8h8aXIGq8IzUw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbnv8-0006ne-1u; Thu, 21 May 2020 16:19:46 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbnv7-00088L-OK; Thu, 21 May 2020 16:19:45 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
Date: Thu, 21 May 2020 17:19:36 +0100
Message-Id: <20200521161939.4508-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521161939.4508-1-paul@xen.org>
References: <20200521161939.4508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These domctls provide a mechanism for the toolstack to get and set domain
context maintained by the hypervisor.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Julien Grall <julien@xen.org>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v4:
 - Add missing zero pad checks

v3:
 - Addressed comments from Julien and Jan
 - Use vmalloc() rather than xmalloc_bytes()

v2:
 - drop mask parameter
 - const-ify some more buffers
---
 tools/flask/policy/modules/xen.if   |   4 +-
 tools/libxc/include/xenctrl.h       |   5 +
 tools/libxc/xc_domain.c             |  56 +++++++++
 xen/common/domctl.c                 | 173 ++++++++++++++++++++++++++++
 xen/include/public/domctl.h         |  41 +++++++
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   4 +
 7 files changed, 287 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 8eb2293a52..2bc9db4f64 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -53,7 +53,7 @@ define(`create_domain_common', `
 	allow $1 $2:domain2 { set_cpu_policy settsc setscheduler setclaim
 			set_vnumainfo get_vnumainfo cacheflush
 			psr_cmt_op psr_alloc soft_reset
-			resource_map get_cpu_policy };
+			resource_map get_cpu_policy setcontext };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -97,7 +97,7 @@ define(`migrate_domain_out', `
 	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
-	allow $1 $2:domain2 gettsc;
+	allow $1 $2:domain2 { gettsc getcontext };
 	allow $1 $2:shadow { enable disable logdirty };
 ')
 
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..0ce2372e2f 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -867,6 +867,11 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
                              uint8_t *hvm_ctxt,
                              uint32_t size);
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size);
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size);
+
 /**
  * This function will return guest IO ABI protocol
  *
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 71829c2bce..e462a6f728 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -537,6 +537,62 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
     return ret;
 }
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_getdomaincontext,
+        .domain = domid,
+        .u.getdomaincontext.size = *size,
+    };
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, *size, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.getdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    if ( ret )
+        return ret;
+
+    if ( domctl.u.getdomaincontext.size > *size )
+    {
+        errno = EOVERFLOW;
+        return -1;
+    }
+    *size = domctl.u.getdomaincontext.size;
+
+    return 0;
+}
+
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_setdomaincontext,
+        .domain = domid,
+        .u.setdomaincontext.size = size,
+    };
+    DECLARE_HYPERCALL_BOUNCE_IN(ctxt_buf, size);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.setdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    return ret;
+}
+
 int xc_vcpu_getcontext(xc_interface *xch,
                        uint32_t domid,
                        uint32_t vcpu,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a69b3b59a8..44758034a6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -25,6 +25,8 @@
 #include <xen/hypercall.h>
 #include <xen/vm_event.h>
 #include <xen/monitor.h>
+#include <xen/save.h>
+#include <xen/vmap.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -358,6 +360,168 @@ static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
     return ERR_PTR(ret);
 }
 
+struct domctl_context
+{
+    void *buffer;
+    struct domain_save_descriptor *desc;
+    size_t len;
+    size_t cur;
+};
+
+static int dry_run_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len + len < c->len )
+        return -EOVERFLOW;
+
+    c->len += len;
+
+    return 0;
+}
+
+static int dry_run_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    return dry_run_append(priv, NULL, sizeof(*desc));
+}
+
+static int dry_run_end(void *priv, size_t len)
+{
+    return 0;
+}
+
+static struct domain_save_ops dry_run_ops = {
+    .begin = dry_run_begin,
+    .append = dry_run_append,
+    .end = dry_run_end,
+};
+
+static int save_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < sizeof(*desc) )
+        return -ENOSPC;
+
+    c->desc = c->buffer + c->cur; /* stash pointer to descriptor */
+    *c->desc = *desc;
+
+    c->cur += sizeof(*desc);
+
+    return 0;
+}
+
+static int save_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENOSPC;
+
+    memcpy(c->buffer + c->cur, data, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static int save_end(void *priv, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    c->desc->length = len;
+
+    return 0;
+}
+
+static struct domain_save_ops save_ops = {
+    .begin = save_begin,
+    .append = save_append,
+    .end = save_end,
+};
+
+static int getdomaincontext(struct domain *d,
+                            struct xen_domctl_getdomaincontext *gdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( gdc->pad )
+        return -EINVAL;
+
+    if ( guest_handle_is_null(gdc->buffer) ) /* query for buffer size */
+    {
+        if ( gdc->size )
+            return -EINVAL;
+
+        /* dry run to acquire buffer size */
+        rc = domain_save(d, &dry_run_ops, &c, true);
+        if ( rc )
+            return rc;
+
+        gdc->size = c.len;
+        return 0;
+    }
+
+    c.len = gdc->size;
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = domain_save(d, &save_ops, &c, false);
+
+    gdc->size = c.cur;
+    if ( !rc && copy_to_guest(gdc->buffer, c.buffer, gdc->size) )
+        rc = -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
+static int load_read(void *priv, void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENODATA;
+
+    memcpy(data, c->buffer + c->cur, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static struct domain_load_ops load_ops = {
+    .read = load_read,
+};
+
+static int setdomaincontext(struct domain *d,
+                            const struct xen_domctl_setdomaincontext *sdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR, .len = sdc->size };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( sdc->pad )
+        return -EINVAL;
+
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = !copy_from_guest(c.buffer, sdc->buffer, c.len) ?
+        domain_load(d, &load_ops, &c) : -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
@@ -942,6 +1106,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
+    case XEN_DOMCTL_getdomaincontext:
+        ret = getdomaincontext(d, &op->u.getdomaincontext);
+        copyback = !ret;
+        break;
+
+    case XEN_DOMCTL_setdomaincontext:
+        ret = setdomaincontext(d, &op->u.setdomaincontext);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 1ad34c35eb..1b133bda59 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1129,6 +1129,43 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/*
+ * XEN_DOMCTL_getdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer into which the context data should be
+ *                copied, or NULL to query the buffer size that should
+ *                be allocated.
+ * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
+ *                zero, and the value passed out will be the size of the
+ *                buffer to allocate.
+ *                If 'buffer' is non-NULL then the value passed in must
+ *                be the size of the buffer into which data may be copied.
+ *                The value passed out will be the size of data written.
+ */
+struct xen_domctl_getdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(void) buffer;
+};
+
+/* XEN_DOMCTL_setdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer from which the context data should be
+ *                copied.
+ * size (IN):     The size of the buffer from which data may be copied.
+ *                This data must include DOMAIN_SAVE_CODE_HEADER at the
+ *                start and terminate with a DOMAIN_SAVE_CODE_END record.
+ *                Any data beyond the DOMAIN_SAVE_CODE_END record will be
+ *                ignored.
+ */
+struct xen_domctl_setdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(const_void) buffer;
+};
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1210,6 +1247,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_getdomaincontext              84
+#define XEN_DOMCTL_setdomaincontext              85
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1270,6 +1309,8 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_getdomaincontext  getdomaincontext;
+        struct xen_domctl_setdomaincontext  setdomaincontext;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4649e6fd95..6f3db276ef 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -745,6 +745,12 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_get_cpu_policy:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CPU_POLICY);
 
+    case XEN_DOMCTL_setdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SETCONTEXT);
+
+    case XEN_DOMCTL_getdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GETCONTEXT);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c055c14c26..fccfb9de82 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -245,6 +245,10 @@ class domain2
     resource_map
 # XEN_DOMCTL_get_cpu_policy
     get_cpu_policy
+# XEN_DOMCTL_setdomaincontext
+    setcontext
+# XEN_DOMCTL_getdomaincontext
+    getcontext
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 16:20:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbnvQ-0001Qf-KH; Thu, 21 May 2020 16:20:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=/yRJ=7D=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jbnvP-0001DP-3j
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:20:03 +0000
X-Inumbo-ID: e4ea68a8-9b7e-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4ea68a8-9b7e-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 16:19:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/LS/E6IoF0MQgW42gOcJtVpnONoGSVfqwfRwEvxiDwc=; b=5Go9HNihDCxMPDKISozp/bPwXd
 bOYSDD1UcGPjly5CJX0D1Q10KHKtuXh6YDNajBkVYMGsZQvM7ZdrgpnHzywOc8+MdD2SnrJLxOqLm
 8Jz41l/9biOMYzHObvpmDqLLwLsuzce145G/81elqBWI6MrQygYmUjL/UT3gcLNEibBk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jbnvA-0006nw-PB; Thu, 21 May 2020 16:19:48 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <paul@xen.org>)
 id 1jbnvA-00088L-Fg; Thu, 21 May 2020 16:19:48 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v5 4/5] common/domain: add a domain context record for
 shared_info...
Date: Thu, 21 May 2020 17:19:38 +0100
Message-Id: <20200521161939.4508-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200521161939.4508-1-paul@xen.org>
References: <20200521161939.4508-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

... and update xen-domctx to dump some information describing the record.

NOTE: The domain may or may not be using the embedded vcpu_info array, so
      separate context records will ultimately be added for vcpu_info when
      this becomes necessary.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v5:
 - Addressed comments from Julien

v4:
 - Addressed comments from Jan

v3:
 - Actually dump some of the content of shared_info

v2:
 - Drop the header change to define a 'Xen' page size and instead use a
   variable length struct now that the framework makes this is feasible
 - Guard use of 'has_32bit_shinfo' in common code with CONFIG_COMPAT
---
 tools/misc/xen-domctx.c   | 78 +++++++++++++++++++++++++++++++++++++++
 xen/common/domain.c       | 65 ++++++++++++++++++++++++++++++++
 xen/include/public/save.h | 13 ++++++-
 3 files changed, 155 insertions(+), 1 deletion(-)

diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
index 243325dfce..6ead7ea89d 100644
--- a/tools/misc/xen-domctx.c
+++ b/tools/misc/xen-domctx.c
@@ -31,6 +31,7 @@
 #include <errno.h>
 
 #include <xenctrl.h>
+#include <xen-tools/libs.h>
 #include <xen/xen.h>
 #include <xen/domctl.h>
 #include <xen/save.h>
@@ -61,6 +62,82 @@ static void dump_header(void)
 
 }
 
+static void print_binary(const char *prefix, const void *val, size_t size,
+                         const char *suffix)
+{
+    printf("%s", prefix);
+
+    while ( size-- )
+    {
+        uint8_t octet = *(const uint8_t *)val++;
+        unsigned int i;
+
+        for ( i = 0; i < 8; i++ )
+        {
+            printf("%u", octet & 1);
+            octet >>= 1;
+        }
+    }
+
+    printf("%s", suffix);
+}
+
+static void dump_shared_info(void)
+{
+    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+    bool has_32bit_shinfo;
+    shared_info_any_t *info;
+    unsigned int i, n;
+
+    GET_PTR(s);
+    has_32bit_shinfo = s->flags & DOMAIN_SAVE_32BIT_SHINFO;
+
+    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
+           has_32bit_shinfo ? "true" : "false", s->buffer_size);
+
+    info = (shared_info_any_t *)s->buffer;
+
+#define GET_FIELD_PTR(_f)            \
+    (has_32bit_shinfo ?              \
+     (const void *)&(info->x32._f) : \
+     (const void *)&(info->x64._f))
+#define GET_FIELD_SIZE(_f) \
+    (has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
+#define GET_FIELD(_f) \
+    (has_32bit_shinfo ? info->x32._f : info->x64._f)
+
+    n = has_32bit_shinfo ?
+        ARRAY_SIZE(info->x32.evtchn_pending) :
+        ARRAY_SIZE(info->x64.evtchn_pending);
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                 evtchn_pending: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
+                     GET_FIELD_SIZE(evtchn_pending[0]), "\n");
+    }
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                    evtchn_mask: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
+                     GET_FIELD_SIZE(evtchn_mask[0]), "\n");
+    }
+
+    printf("                 wc: version: %u sec: %u nsec: %u\n",
+           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));
+
+#undef GET_FIELD
+#undef GET_FIELD_SIZE
+#undef GET_FIELD_PTR
+}
+
 static void dump_end(void)
 {
     DOMAIN_SAVE_TYPE(END) *e;
@@ -173,6 +250,7 @@ int main(int argc, char **argv)
             switch (desc->typecode)
             {
             case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(SHARED_INFO): dump_shared_info(); break;
             case DOMAIN_SAVE_CODE(END): dump_end(); break;
             default:
                 printf("Unknown type %u: skipping\n", desc->typecode);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..9d156da84d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -33,6 +33,7 @@
 #include <xen/xenoprof.h>
 #include <xen/irq.h>
 #include <xen/argo.h>
+#include <xen/save.h>
 #include <asm/debugger.h>
 #include <asm/p2m.h>
 #include <asm/processor.h>
@@ -1649,6 +1650,70 @@ int continue_hypercall_on_cpu(
     return 0;
 }
 
+static int save_shared_info(const struct domain *d, struct domain_context *c,
+                            bool dry_run)
+{
+    struct domain_shared_info_context ctxt = {
+#ifdef CONFIG_COMPAT
+        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
+#endif
+        .buffer_size = sizeof(shared_info_t),
+    };
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    int rc;
+
+    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+static int load_shared_info(struct domain *d, struct domain_context *c)
+{
+    struct domain_shared_info_context ctxt;
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    unsigned int i;
+    int rc;
+
+    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
+    if ( rc )
+        return rc;
+
+    if ( i ) /* expect only a single instance */
+        return -ENXIO;
+
+    rc = domain_load_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    if ( ctxt.buffer_size != sizeof(shared_info_t) )
+        return -EINVAL;
+
+    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
+#ifdef CONFIG_COMPAT
+        has_32bit_shinfo(d) = true;
+#else
+        return -EINVAL;
+#endif
+
+    rc = domain_load_data(c, d->shared_info, sizeof(shared_info_t));
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+DOMAIN_REGISTER_SAVE_LOAD(SHARED_INFO, save_shared_info, load_shared_info);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
index 551dbbddb8..0e855a4b97 100644
--- a/xen/include/public/save.h
+++ b/xen/include/public/save.h
@@ -82,7 +82,18 @@ struct domain_save_header {
 };
 DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
 
-#define DOMAIN_SAVE_CODE_MAX 1
+struct domain_shared_info_context {
+    uint32_t flags;
+
+#define DOMAIN_SAVE_32BIT_SHINFO 0x00000001
+
+    uint32_t buffer_size;
+    uint8_t buffer[XEN_FLEX_ARRAY_DIM]; /* Implementation specific size */
+};
+
+DECLARE_DOMAIN_SAVE_TYPE(SHARED_INFO, 2, struct domain_shared_info_context);
+
+#define DOMAIN_SAVE_CODE_MAX 2
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 16:27:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:27:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbo2j-0002LR-KP; Thu, 21 May 2020 16:27:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2WPz=7D=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jbo2i-0002LM-Sm
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:27:36 +0000
X-Inumbo-ID: fac560aa-9b7f-11ea-9887-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fac560aa-9b7f-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 16:27:36 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:38838
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jbo2c-000KMA-LG (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 21 May 2020 17:27:30 +0100
Subject: Re: [PATCH v3 2/2] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200515135802.63853-1-roger.pau@citrix.com>
 <20200515135802.63853-3-roger.pau@citrix.com>
 <eaa63636-6e39-d401-e218-14ae37440667@citrix.com>
 <20200521084523.GP54375@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <84486d84-4452-af18-f7e7-753faf5a125d@citrix.com>
Date: Thu, 21 May 2020 17:27:29 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200521084523.GP54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/05/2020 09:45, Roger Pau Monné wrote:
> On Wed, May 20, 2020 at 10:30:11PM +0100, Andrew Cooper wrote:
>> On 15/05/2020 14:58, Roger Pau Monne wrote:
>>> Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
>>> BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
>>> Dispatched Before an Interrupt of The Same Priority Completes".
>> HSM175 et al, so presumably a HSD, and HSE as well.
>>
>> On the Broadwell side at least, BDD and BDW in addition.
> But that is a different erratum AFAICT ('An APIC Timer Interrupt
> During Core C6 Entry May be Lost') and the workaround should also be
> different, I think.

Hmm, so it is.

The issue in question here definitely does affect Haswell, because that
is where we first observed it.  There was also a report on xen-devel
against Haswell.

If errata entries are missing for some parts, then I think Intel needs
some more chasing to work out the real extent of the problems.

> We should mark the lapic timer as not reliable on
> C6 or higher states in lapic_timer_reliable_states, so that it's
> disabled before entering sleep?

Probably should.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 21 16:46:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 16:46:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jboL4-00048p-82; Thu, 21 May 2020 16:46:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2WPz=7D=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jboL2-00048j-LD
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 16:46:32 +0000
X-Inumbo-ID: 9f341300-9b82-11ea-b9cf-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f341300-9b82-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 16:46:31 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:39364
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jboKy-000WUp-Kv (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 21 May 2020 17:46:28 +0100
Subject: Re: [PATCH v3] x86/PV: remove unnecessary toggle_guest_pt() overhead
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <24d8b606-f74b-9367-d67e-e952838c7048@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d7840278-b999-65fa-40bf-2b78e5266837@citrix.com>
Date: Thu, 21 May 2020 17:46:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <24d8b606-f74b-9367-d67e-e952838c7048@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 05/05/2020 07:16, Jan Beulich wrote:
> While the mere updating of ->pv_cr3 and ->root_pgt_changed aren't overly
> expensive (but still needed only for the toggle_guest_mode() path), the
> effect of the latter on the exit-to-guest path is not insignificant.
> Move the logic into toggle_guest_mode(), on the basis that
> toggle_guest_pt() will always be invoked in pairs, yet we can't safely
> undo the setting of root_pgt_changed during the second of these
> invocations.
>
> While at it, add a comment ahead of toggle_guest_pt() to clarify its
> intended usage.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'm still of the opinion that the commit message wants rewriting to get
the important points across clearly.

And those are that toggle_guest_pt() is called in pairs specifically to
read kernel data structures when emulating a userspace action, and that
this doesn't modify cr3 from the guest's point of view, and therefore
doesn't need the resync on the exit-to-guest path.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 21 17:22:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 17:22:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbotZ-0007cg-VV; Thu, 21 May 2020 17:22:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3PC2=7D=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jbotY-0007cb-7C
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 17:22:12 +0000
X-Inumbo-ID: 9b218cc0-9b87-11ea-b07b-bc764e2007e4
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b218cc0-9b87-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 17:22:11 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LHLv7O166772;
 Thu, 21 May 2020 17:22:07 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=hpmMnei2rM/L7Pi5nGnRDXvCuZpucliPxYWYs7TRDM8=;
 b=tZKSSkDuYeu27zDjK1lcJ6IfjCMQ8V1HkApiYMG296CARX++Sl+qibbMpSUIF4hXyH2v
 nkgUwhxPEykrb0QUzGtuNBxuAyJpZdM2Y4gxUgPklDOwMgxVcO/EMECmPJVI0AplgSs/
 CC3X6HcHtohvzo3gSm43fKbipOquGqfVYIVjn4kMBIkPX5SQMXGkRq7qWOLE1PuKuC8H
 3kmyYemwoQpPmnouUrrZOXDOFvL8dD66i3eWlHn1CRhnOHdyEFS5D53zAOZj7QIkOd0k
 Hh3mmeDMJEndG3IOq6/q4MIEC4i3i6fJisYyJiEEVQPLXXwGumQVv0M6TedzSrXsF/RL lA== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 31284m9rtm-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 21 May 2020 17:22:07 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LHJD2D116734;
 Thu, 21 May 2020 17:22:07 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3020.oracle.com with ESMTP id 312t3bthpd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 21 May 2020 17:22:07 +0000
Received: from abhmp0013.oracle.com (abhmp0013.oracle.com [141.146.116.19])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04LHM5nV003894;
 Thu, 21 May 2020 17:22:06 GMT
Received: from [10.39.200.114] (/10.39.200.114)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 21 May 2020 10:22:05 -0700
Subject: Re: [PATCH] xen/events: avoid NULL pointer dereference in
 evtchn_from_irq()
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20200319071428.12115-1-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <30719c35-6de7-d400-7bb8-cff4570f8971@oracle.com>
Date: Thu, 21 May 2020 13:22:03 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200319071428.12115-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 mlxlogscore=999
 phishscore=0 mlxscore=0 malwarescore=0 suspectscore=2 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005210124
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=2
 mlxscore=0
 cotscore=-2147483648 impostorscore=0 malwarescore=0 mlxlogscore=999
 lowpriorityscore=0 phishscore=0 spamscore=0 bulkscore=0 adultscore=0
 priorityscore=1501 clxscore=1011 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2004280000 definitions=main-2005210125
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 3/19/20 3:14 AM, Juergen Gross wrote:
> There have been reports of races in evtchn_from_irq() where the info
> pointer has been NULL.
>
> Avoid that case by testing info before dereferencing it.
>
> In order to avoid accessing a just-freed info structure, do the kfree()
> via kfree_rcu().


Looks like no one ever responded to this.


This change looks fine, but is there any background on the problem? I
looked in the archives and didn't find the relevant discussion.


-boris


>
> Cc: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Cc: stable@vger.kernel.org
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  drivers/xen/events/events_base.c     | 10 ++++++++--
>  drivers/xen/events/events_internal.h |  3 +++
>  2 files changed, 11 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/xen/events/events_base.c b/drivers/xen/events/events_base.c
> index 499eff7d3f65..838762fe3d6e 100644
> --- a/drivers/xen/events/events_base.c
> +++ b/drivers/xen/events/events_base.c
> @@ -247,10 +247,16 @@ static void xen_irq_info_cleanup(struct irq_info *info)
>   */
>  unsigned int evtchn_from_irq(unsigned irq)
>  {
> +	struct irq_info *info;
> +
>  	if (WARN(irq >= nr_irqs, "Invalid irq %d!\n", irq))
>  		return 0;
>
> -	return info_for_irq(irq)->evtchn;
> +	info = info_for_irq(irq);
> +	if (info == NULL)
> +		return 0;
> +
> +	return info->evtchn;
>  }
>
>  unsigned irq_from_evtchn(unsigned int evtchn)
> @@ -436,7 +442,7 @@ static void xen_free_irq(unsigned irq)
>
>  	WARN_ON(info->refcnt > 0);
>
> -	kfree(info);
> +	kfree_rcu(info, rcu);
>
>  	/* Legacy IRQ descriptors are managed by the arch. */
>  	if (irq < nr_legacy_irqs())
> diff --git a/drivers/xen/events/events_internal.h b/drivers/xen/events/events_internal.h
> index 82938cff6c7a..c421055843c8 100644
> --- a/drivers/xen/events/events_internal.h
> +++ b/drivers/xen/events/events_internal.h
> @@ -7,6 +7,8 @@
>  #ifndef __EVENTS_INTERNAL_H__
>  #define __EVENTS_INTERNAL_H__
>
> +#include <linux/rcupdate.h>
> +
>  /* Interrupt types. */
>  enum xen_irq_type {
>  	IRQT_UNBOUND = 0,
> @@ -30,6 +32,7 @@ enum xen_irq_type {
>   */
>  struct irq_info {
>  	struct list_head list;
> +	struct rcu_head rcu;
>  	int refcnt;
>  	enum xen_irq_type type;	/* type */
>  	unsigned irq;





From xen-devel-bounces@lists.xenproject.org Thu May 21 17:26:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 17:26:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jboxy-0007mz-Fq; Thu, 21 May 2020 17:26:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3PC2=7D=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jboxw-0007mu-Gs
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 17:26:44 +0000
X-Inumbo-ID: 3cd69e3f-9b88-11ea-ab44-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3cd69e3f-9b88-11ea-ab44-12813bfff9fa;
 Thu, 21 May 2020 17:26:43 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LHLLgG190443;
 Thu, 21 May 2020 17:26:42 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=MgkxCsFV38a7Wm3TQBmGlrqbzqH0CEUY75tnU5xc+/M=;
 b=MNxGwd76rYyZ+9bUtdxxDy60kYW46J5sFIXcHI3P89/FiOPL8AUZxFegfI0WwgFXrNhe
 2/5zWH3xJ/jJNzS+h0vI6qqorCRDDNkfEQoUNJjPFAjkSAfrCCEREEakGfihvomis+wR
 Eq9flqUgfZ/ZbBeuXeiKTh0XeaI6VhoLSINcg2pwKwZJnXZGFANJ6OFZreVtIRxQ4wO+
 +ZkOyHdQrkchVveUx0XHyefWxb9JFdqvRHYF6MNjNyTM5JT5p9nVWzh3A7vvydtF+Q6l
 avUh/xIwjR9AhWdP3Fg6LHS9qZ53L7881euFpq0BxXfSebThtRwxWKOZFj7+It9KD6Sx Aw== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2130.oracle.com with ESMTP id 3127krhtax-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 21 May 2020 17:26:42 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LHOWQB067775;
 Thu, 21 May 2020 17:26:41 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 314gm9met1-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 21 May 2020 17:26:41 +0000
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04LHQeuK023027;
 Thu, 21 May 2020 17:26:40 GMT
Received: from [10.39.200.114] (/10.39.200.114)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 21 May 2020 10:26:40 -0700
Subject: Re: [PATCH] x86/xen: drop an unused parameter gsi_override
To: Wei Liu <wei.liu@kernel.org>, linux-pci@vger.kernel.org,
 Xen Development List <xen-devel@lists.xenproject.org>
References: <20200428153640.76476-1-wei.liu@kernel.org>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <c60b771d-b61a-d63d-f593-52e8d07c0dc8@oracle.com>
Date: Thu, 21 May 2020 13:26:37 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200428153640.76476-1-wei.liu@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 mlxlogscore=999
 adultscore=0 phishscore=0 mlxscore=0 spamscore=0 suspectscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005210126
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 phishscore=0 spamscore=0
 bulkscore=0 clxscore=1015 priorityscore=1501 mlxscore=0 impostorscore=0
 suspectscore=0 mlxlogscore=999 malwarescore=0 cotscore=-2147483648
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005210125
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, sstabellini@kernel.org,
 konrad.wilk@oracle.com, x86@kernel.org, linux-kernel@vger.kernel.org,
 Michael Kelley <mikelley@microsoft.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 4/28/20 11:36 AM, Wei Liu wrote:
> All callers within the same file pass in -1 (no override).
>
> Signed-off-by: Wei Liu <wei.liu@kernel.org>



Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Thu May 21 17:50:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 17:50:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbpKc-0001qt-Cs; Thu, 21 May 2020 17:50:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VGGW=7D=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jbpKb-0001qo-1G
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 17:50:09 +0000
X-Inumbo-ID: 827e4dee-9b8b-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 827e4dee-9b8b-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 17:50:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=oGAzJmBZGs/I95NNBDCcc68aRW8PyOqJUPeoeVkGNIQ=; b=mRisFQyrALZIJ+csPVe+p1nIi9
 8YB3j9vywe6MR58g11LA4J6kz9Kiui6cXM9H/LtGZQWFKHelEzwan1no3PKSnt60jOrEaT4WtcnDh
 jDxGiIB+YrLis0JhASSMa//txte/fU5Eh05p3wh/Zo0SEKtc0IDLru20EjRkqWJ4mzlQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jbpKU-0000IS-So; Thu, 21 May 2020 17:50:02 +0000
Received: from 54-240-197-225.amazon.com ([54.240.197.225]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jbpKU-0004Aj-KL; Thu, 21 May 2020 17:50:02 +0000
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
To: Stefano Stabellini <sstabellini@kernel.org>
References: <20200518113008.15422-1-julien@xen.org>
 <alpine.DEB.2.21.2005201512380.27502@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <687bea7e-ade9-8fd3-9a61-e3f5cd6d17be@xen.org>
Date: Thu, 21 May 2020 18:50:00 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005201512380.27502@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: minyard@acm.org, paul@xen.org, Andrew Cooper <andrew.cooper3@citrix.com>,
 Julien Grall <jgrall@amazon.com>, roman@zededa.com,
 George Dunlap <george.dunlap@citrix.com>, jeff.kubascik@dornerworks.com,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano,

On 20/05/2020 23:13, Stefano Stabellini wrote:
> On Mon, 18 May 2020, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Hi all,
>>
>> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
>> only use the first GB of memory.
>>
>> This is because several devices cannot DMA above 1GB but Xen doesn't
>> necessarily allocate memory for Dom0 below 1GB.
>>
>> This small series is trying to address the problem by allowing a
>> platform to restrict where Dom0 banks are allocated.
>>
>> This is also a candidate for Xen 4.14. Without it, a user will not be
>> able to use all the RAM on the Raspberry Pi 4.
> 
> The series looks good to me aside from the couple of minor issues being
> discussed

Thanks, I sent v2 yesterday but forgot to run
add_maintainers.pl ([1]). Do you want me to resend it with you CCed?

Cheers,

[1] <20200519172028.31169-1-julien@xen.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 21 18:12:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 18:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbpgF-0003p9-9n; Thu, 21 May 2020 18:12:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RqGm=7D=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbpgD-0003p4-Nd
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 18:12:29 +0000
X-Inumbo-ID: a17095f6-9b8e-11ea-ab47-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a17095f6-9b8e-11ea-ab47-12813bfff9fa;
 Thu, 21 May 2020 18:12:28 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 8AC8A20823;
 Thu, 21 May 2020 18:12:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590084748;
 bh=YV+FItUYezSQ7ZHcbNGENFz/MIrO0KddrFYZlCw3IsE=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=IJdy/cjOQYV2ETw3kOm32SIMSqO66krfMim2OrDkRRHYn7dpDokKYHSj624oFmZ8D
 AY8zaj1yoOTzWEMrws72jx8QtCSmlSJ0/geyhEyUaVxCOEcRJVxX/plpwNZJ9tyg8H
 nMaRdITCdViMmAaZI3iTzdZ3yxiCWikOKw9/YS3o=
Date: Thu, 21 May 2020 11:12:26 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi
 4
In-Reply-To: <687bea7e-ade9-8fd3-9a61-e3f5cd6d17be@xen.org>
Message-ID: <alpine.DEB.2.21.2005211112090.27502@sstabellini-ThinkPad-T480s>
References: <20200518113008.15422-1-julien@xen.org>
 <alpine.DEB.2.21.2005201512380.27502@sstabellini-ThinkPad-T480s>
 <687bea7e-ade9-8fd3-9a61-e3f5cd6d17be@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, minyard@acm.org, paul@xen.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, Julien Grall <jgrall@amazon.com>,
 roman@zededa.com, George Dunlap <george.dunlap@citrix.com>,
 jeff.kubascik@dornerworks.com, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 21 May 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 20/05/2020 23:13, Stefano Stabellini wrote:
> > On Mon, 18 May 2020, Julien Grall wrote:
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > Hi all,
> > > 
> > > At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> > > only use the first GB of memory.
> > > 
> > > This is because several devices cannot DMA above 1GB but Xen doesn't
> > > necessarily allocate memory for Dom0 below 1GB.
> > > 
> > > This small series is trying to address the problem by allowing a
> > > platform to restrict where Dom0 banks are allocated.
> > > 
> > > This is also a candidate for Xen 4.14. Without it, a user will not be
> > > able to use all the RAM on the Raspberry Pi 4.
> > 
> > The series looks good to me aside from the couple of minor issues being
> > discussed
> 
> Thanks, I have sent the v2 yesterday but forgot to call add_maintainers.pl
> ([1]). Do you want me to resend it with you CCed?
> 
> Cheers,
> 
> [1] <20200519172028.31169-1-julien@xen.org>

No worries, I found them now.


From xen-devel-bounces@lists.xenproject.org Thu May 21 18:46:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 18:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbqCn-0006cg-9m; Thu, 21 May 2020 18:46:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/ZHf=7D=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1jbqCm-0006ca-C7
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 18:46:08 +0000
X-Inumbo-ID: 543340c2-9b93-11ea-ab51-12813bfff9fa
Received: from out1-smtp.messagingengine.com (unknown [66.111.4.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 543340c2-9b93-11ea-ab51-12813bfff9fa;
 Thu, 21 May 2020 18:46:06 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 281AC5C00B3;
 Thu, 21 May 2020 14:46:06 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Thu, 21 May 2020 14:46:06 -0400
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
 messagingengine.com; h=cc:content-type:date:from:in-reply-to
 :message-id:mime-version:references:subject:to:x-me-proxy
 :x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=5vLYKu
 Xa+UUaGWnNM1CRpSE460GjOXDkyZjKCdT783g=; b=0KHI1XI5mrzKJUKfroMclt
 caUTTAnia0Qon+dGzPX/wRI2SOknY4bkod/GdNT2f0RxPuXNVd0BEtZgt55udaDf
 psD22x2ec17tAnU0Yj+HH7vbun1DKimUWva0SX98SBrnfc+r2A9QH3ZzG+BrXP3U
 wSI5kv86DwsKayk9LwQhW9Z83wimA2a0MyznQzwiCWiJmKaG+c+HaIoNpRDmcMWv
 5wC2CUwc1BUCrhGDOa1TjfPFI9i8lGEZZPI4D6IJhW3VxSgztcJ1E5h1UtxHNErt
 Z9PmKeo7z5/0/H28EerMW6KqkFT/1mvAlxwpsDO2UJH+ChYN2FkLCIIcZ7J/Ra0g
 ==
X-ME-Sender: <xms:bczGXqTWopXQpDgSc8qtXCEPUf9U7SAbPbN2Rnzhd_p09YKK_Zb8SQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduhedrudduuddguddvtdcutefuodetggdotefrod
 ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
 necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
 enucfjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
 khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
 hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepieek
 geeivdelkeejgeetkeelgeevgefhfeeghfffieehudeuteekueetgeetiefgnecuffhomh
 grihhnpehmrghrkhhmrghilhdrohhrghenucfkphepledurdeihedrfeegrdeffeenucev
 lhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgrihhlfhhrohhmpehmrghrmhgrrh
 gvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrdgtohhm
X-ME-Proxy: <xmx:bczGXvzXUmL1HdDyECiX4Iav9VuYdxbb-p2Fq4JFeYjxLWIutkeGuQ>
 <xmx:bczGXn22Aufr2TGoqgHvvxB3f9Gf3f-jIO-8ISISerhrQgrSjhnrng>
 <xmx:bczGXmBtO7Jubmnk77hQyhgnyL6w0DxBlY7Uo1JBa5YjPDWzUs8HFg>
 <xmx:bszGXrbEzmw5gVwWBC4Tj0CHrR8SuUZbxOxzKVR3STZbQshtbYgJkQ>
Received: from mail-itl (unknown [91.65.34.33])
 by mail.messagingengine.com (Postfix) with ESMTPA id C1469306645C;
 Thu, 21 May 2020 14:46:04 -0400 (EDT)
Date: Thu, 21 May 2020 20:46:02 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH] xen/events: avoid NULL pointer dereference in
 evtchn_from_irq()
Message-ID: <20200521184602.GP98582@mail-itl>
References: <20200319071428.12115-1-jgross@suse.com>
 <30719c35-6de7-d400-7bb8-cff4570f8971@oracle.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature"; boundary="KR/qxknboQ7+Tpez"
Content-Disposition: inline
In-Reply-To: <30719c35-6de7-d400-7bb8-cff4570f8971@oracle.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--KR/qxknboQ7+Tpez
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Subject: Re: [PATCH] xen/events: avoid NULL pointer dereference in
 evtchn_from_irq()

On Thu, May 21, 2020 at 01:22:03PM -0400, Boris Ostrovsky wrote:
> On 3/19/20 3:14 AM, Juergen Gross wrote:
> > There have been reports of races in evtchn_from_irq() where the info
> > pointer has been NULL.
> >
> > Avoid that case by testing info before dereferencing it.
> >
> > In order to avoid accessing a just freed info structure do the kfree()
> > via kfree_rcu().
>
>
> Looks like no one ever responded to this.
>
>
> This change looks fine but is there a background on the problem? I
> looked in the archives and didn't find the relevant discussion.

Here is the original bug report:
https://xen.markmail.org/thread/44apwkwzeme4uavo

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
A: Because it messes up the order in which people normally read text.
Q: Why is top-posting such a bad thing?

--KR/qxknboQ7+Tpez
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAl7GzGkACgkQ24/THMrX
1yygDAf/RzUmiuF2TM7HvZsoGQmZxnc+ZrcCx9F+FXoYyHUYYjyoj5hnMWPJit1g
GPLie+6PbPhjOyC+6tYK9TGgEv2HxyO8PNLTWDRDRnxrf8rBTqMKRvcbm8FYZV+J
9baXNldjf6TvgddPwik9bqetHX+e/QpyeovpcSOzbP1dXKWnxUbypEinsiNxtT97
vkiW/LfYtzb8arPdFVmVl/9YPmvk+080mm2eTQYARy7qVlM70zqsvtWBYQYFcxd2
N7OH7AoSARVyncbIT1B5bTrte9HTFB4ewJ1CvTvc3wOGDbUCiyVz7nDxNRQo6/S/
3x7RfR3nRBE3RwMeEj2cAdsgzOkI9g==
=oact
-----END PGP SIGNATURE-----

--KR/qxknboQ7+Tpez--


From xen-devel-bounces@lists.xenproject.org Thu May 21 19:33:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 19:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbqvz-0002ZJ-8O; Thu, 21 May 2020 19:32:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RqGm=7D=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbqvx-0002ZE-IX
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 19:32:49 +0000
X-Inumbo-ID: da7143a4-9b99-11ea-ae69-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da7143a4-9b99-11ea-ae69-bc764e2007e4;
 Thu, 21 May 2020 19:32:49 +0000 (UTC)
Received: from sstabellini-ThinkPad-T480s.hsd1.ca.comcast.net
 (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id CB978207D3;
 Thu, 21 May 2020 19:32:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590089568;
 bh=wVdNaS0ZwKTecNDaqDlBLOhKCIthZf+jeeG5uNktzc8=;
 h=From:To:Cc:Subject:Date:From;
 b=NBLDLTkRTKeIfjBmoz7sgk+dNW0U0DUEyoIHVSfRSPNr/V73wzHCQ6PLDfpIjPC00
 NWqxvApQHe0eDRSi55HDN7vJ/etlgvlt2G6xFFl4T/TT9lduHQUV9q+Y2q47g6B44w
 DbbdfkFw0COPKQyEsJhdKUOm6DYqiU6XGxLzsfLs=
From: Stefano Stabellini <sstabellini@kernel.org>
To: jgross@suse.com
Subject: [PATCH v2] 9p/xen: increase XEN_9PFS_RING_ORDER
Date: Thu, 21 May 2020 12:32:42 -0700
Message-Id: <20200521193242.15953-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: lucho@ionkov.net, sstabellini@kernel.org, ericvh@gmail.com,
 asmadeus@codewreck.org, linux-kernel@vger.kernel.org,
 v9fs-developer@lists.sourceforge.net, xen-devel@lists.xenproject.org,
 boris.ostrovsky@oracle.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Increase XEN_9PFS_RING_ORDER to 9 for performance reasons. Order 9 is the
max allowed by the protocol.

We can't assume that all backends will support order 9. The xenstore
property max-ring-page-order specifies the max order supported by the
backend. We'll use max-ring-page-order for the size of the ring.

This means that the size of the ring is not static
(XEN_FLEX_RING_SIZE(9)) anymore. Change XEN_9PFS_RING_SIZE to take an
argument and base the calculation on the order chosen at setup time.

Finally, modify p9_xen_trans.maxsize to be divided by 4 compared to the
original value. We need to divide it by 2 because we have two rings
coming off the same order allocation: the in and out rings. This was a
mistake in the original code. Also divide it further by 2 because we
don't want a single request/reply to fill up the entire ring. There can
be multiple requests/replies outstanding at any given time and if we use
the full ring with one, we risk forcing the backend to wait for the
client to read back more replies before continuing, which is not
performant.

Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
Changes in v2:
- Fix setting of p9_xen_trans.maxsize in xen_9pfs_front_probe to match
  the initial setting for the reasons explained in the commit message

---
 net/9p/trans_xen.c | 61 ++++++++++++++++++++++++++--------------------
 1 file changed, 34 insertions(+), 27 deletions(-)

diff --git a/net/9p/trans_xen.c b/net/9p/trans_xen.c
index 3963eb11c3fb..3debad93be1a 100644
--- a/net/9p/trans_xen.c
+++ b/net/9p/trans_xen.c
@@ -43,8 +43,8 @@
 #include <net/9p/transport.h>
 
 #define XEN_9PFS_NUM_RINGS 2
-#define XEN_9PFS_RING_ORDER 6
-#define XEN_9PFS_RING_SIZE  XEN_FLEX_RING_SIZE(XEN_9PFS_RING_ORDER)
+#define XEN_9PFS_RING_ORDER 9
+#define XEN_9PFS_RING_SIZE(ring)  XEN_FLEX_RING_SIZE(ring->intf->ring_order)
 
 struct xen_9pfs_header {
 	uint32_t size;
@@ -132,8 +132,8 @@ static bool p9_xen_write_todo(struct xen_9pfs_dataring *ring, RING_IDX size)
 	prod = ring->intf->out_prod;
 	virt_mb();
 
-	return XEN_9PFS_RING_SIZE -
-		xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) >= size;
+	return XEN_9PFS_RING_SIZE(ring) -
+		xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) >= size;
 }
 
 static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
@@ -167,17 +167,18 @@ static int p9_xen_request(struct p9_client *client, struct p9_req_t *p9_req)
 	prod = ring->intf->out_prod;
 	virt_mb();
 
-	if (XEN_9PFS_RING_SIZE - xen_9pfs_queued(prod, cons,
-						 XEN_9PFS_RING_SIZE) < size) {
+	if (XEN_9PFS_RING_SIZE(ring) -
+	    xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) < size) {
 		spin_unlock_irqrestore(&ring->lock, flags);
 		goto again;
 	}
 
-	masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
-	masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+	masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE(ring));
+	masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));
 
 	xen_9pfs_write_packet(ring->data.out, p9_req->tc.sdata, size,
-			      &masked_prod, masked_cons, XEN_9PFS_RING_SIZE);
+			      &masked_prod, masked_cons,
+			      XEN_9PFS_RING_SIZE(ring));
 
 	p9_req->status = REQ_STATUS_SENT;
 	virt_wmb();			/* write ring before updating pointer */
@@ -207,19 +208,19 @@ static void p9_xen_response(struct work_struct *work)
 		prod = ring->intf->in_prod;
 		virt_rmb();
 
-		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE) <
+		if (xen_9pfs_queued(prod, cons, XEN_9PFS_RING_SIZE(ring)) <
 		    sizeof(h)) {
 			notify_remote_via_irq(ring->irq);
 			return;
 		}
 
-		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE);
-		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+		masked_prod = xen_9pfs_mask(prod, XEN_9PFS_RING_SIZE(ring));
+		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));
 
 		/* First, read just the header */
 		xen_9pfs_read_packet(&h, ring->data.in, sizeof(h),
 				     masked_prod, &masked_cons,
-				     XEN_9PFS_RING_SIZE);
+				     XEN_9PFS_RING_SIZE(ring));
 
 		req = p9_tag_lookup(priv->client, h.tag);
 		if (!req || req->status != REQ_STATUS_SENT) {
@@ -233,11 +234,11 @@ static void p9_xen_response(struct work_struct *work)
 		memcpy(&req->rc, &h, sizeof(h));
 		req->rc.offset = 0;
 
-		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE);
+		masked_cons = xen_9pfs_mask(cons, XEN_9PFS_RING_SIZE(ring));
 		/* Then, read the whole packet (including the header) */
 		xen_9pfs_read_packet(req->rc.sdata, ring->data.in, h.size,
 				     masked_prod, &masked_cons,
-				     XEN_9PFS_RING_SIZE);
+				     XEN_9PFS_RING_SIZE(ring));
 
 		virt_mb();
 		cons += h.size;
@@ -267,7 +268,7 @@ static irqreturn_t xen_9pfs_front_event_handler(int irq, void *r)
 
 static struct p9_trans_module p9_xen_trans = {
 	.name = "xen",
-	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT),
+	.maxsize = 1 << (XEN_9PFS_RING_ORDER + XEN_PAGE_SHIFT - 2),
 	.def = 1,
 	.create = p9_xen_create,
 	.close = p9_xen_close,
@@ -295,14 +296,16 @@ static void xen_9pfs_front_free(struct xen_9pfs_front_priv *priv)
 		if (priv->rings[i].irq > 0)
 			unbind_from_irqhandler(priv->rings[i].irq, priv->dev);
 		if (priv->rings[i].data.in) {
-			for (j = 0; j < (1 << XEN_9PFS_RING_ORDER); j++) {
+			for (j = 0;
+			     j < (1 << priv->rings[i].intf->ring_order);
+			     j++) {
 				grant_ref_t ref;
 
 				ref = priv->rings[i].intf->ref[j];
 				gnttab_end_foreign_access(ref, 0, 0);
 			}
 			free_pages((unsigned long)priv->rings[i].data.in,
-				   XEN_9PFS_RING_ORDER -
+				   priv->rings[i].intf->ring_order -
 				   (PAGE_SHIFT - XEN_PAGE_SHIFT));
 		}
 		gnttab_end_foreign_access(priv->rings[i].ref, 0, 0);
@@ -323,7 +326,8 @@ static int xen_9pfs_front_remove(struct xenbus_device *dev)
 }
 
 static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
-					 struct xen_9pfs_dataring *ring)
+					 struct xen_9pfs_dataring *ring,
+					 unsigned int order)
 {
 	int i = 0;
 	int ret = -ENOMEM;
@@ -342,21 +346,21 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
 		goto out;
 	ring->ref = ret;
 	bytes = (void *)__get_free_pages(GFP_KERNEL | __GFP_ZERO,
-			XEN_9PFS_RING_ORDER - (PAGE_SHIFT - XEN_PAGE_SHIFT));
+			order - (PAGE_SHIFT - XEN_PAGE_SHIFT));
 	if (!bytes) {
 		ret = -ENOMEM;
 		goto out;
 	}
-	for (; i < (1 << XEN_9PFS_RING_ORDER); i++) {
+	for (; i < (1 << order); i++) {
 		ret = gnttab_grant_foreign_access(
 				dev->otherend_id, virt_to_gfn(bytes) + i, 0);
 		if (ret < 0)
 			goto out;
 		ring->intf->ref[i] = ret;
 	}
-	ring->intf->ring_order = XEN_9PFS_RING_ORDER;
+	ring->intf->ring_order = order;
 	ring->data.in = bytes;
-	ring->data.out = bytes + XEN_9PFS_RING_SIZE;
+	ring->data.out = bytes + XEN_FLEX_RING_SIZE(order);
 
 	ret = xenbus_alloc_evtchn(dev, &ring->evtchn);
 	if (ret)
@@ -374,7 +378,7 @@ static int xen_9pfs_front_alloc_dataring(struct xenbus_device *dev,
 		for (i--; i >= 0; i--)
 			gnttab_end_foreign_access(ring->intf->ref[i], 0, 0);
 		free_pages((unsigned long)bytes,
-			   XEN_9PFS_RING_ORDER -
+			   ring->intf->ring_order -
 			   (PAGE_SHIFT - XEN_PAGE_SHIFT));
 	}
 	gnttab_end_foreign_access(ring->ref, 0, 0);
@@ -404,8 +408,10 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
 		return -EINVAL;
 	max_ring_order = xenbus_read_unsigned(dev->otherend,
 					      "max-ring-page-order", 0);
-	if (max_ring_order < XEN_9PFS_RING_ORDER)
-		return -EINVAL;
+	if (max_ring_order > XEN_9PFS_RING_ORDER)
+		max_ring_order = XEN_9PFS_RING_ORDER;
+	if (p9_xen_trans.maxsize > XEN_FLEX_RING_SIZE(max_ring_order))
+		p9_xen_trans.maxsize = XEN_FLEX_RING_SIZE(max_ring_order) / 2;
 
 	priv = kzalloc(sizeof(*priv), GFP_KERNEL);
 	if (!priv)
@@ -422,7 +428,8 @@ static int xen_9pfs_front_probe(struct xenbus_device *dev,
 
 	for (i = 0; i < priv->num_rings; i++) {
 		priv->rings[i].priv = priv;
-		ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i]);
+		ret = xen_9pfs_front_alloc_dataring(dev, &priv->rings[i],
+						    max_ring_order);
 		if (ret < 0)
 			goto error;
 	}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 19:41:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 19:41:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbr4C-0003Un-5g; Thu, 21 May 2020 19:41:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Nal7=7D=kernel.org=helgaas@srs-us1.protection.inumbo.net>)
 id 1jbr4B-0003Ui-CD
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 19:41:19 +0000
X-Inumbo-ID: 0a3434e2-9b9b-11ea-ab5e-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0a3434e2-9b9b-11ea-ab5e-12813bfff9fa;
 Thu, 21 May 2020 19:41:18 +0000 (UTC)
Received: from localhost (mobile-166-175-190-200.mycingular.net
 [166.175.190.200])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 74A8A2065F;
 Thu, 21 May 2020 19:41:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590090077;
 bh=V7Rxedh2MenOwI8XK4KoWriB9ieXIxOg9p/fYNUmu6o=;
 h=Date:From:To:Cc:Subject:In-Reply-To:From;
 b=DGrKWPYF4T61W/bPMJjljuNf8dXTHk0f6MkZK2ldwJgILEIsVJksvKR9Ef0tXf/Mm
 /v1VApdxpv0cJTjhytbqfnysZVkWTwx8Gte2VacSGA8MWFcyB34b+jg8pX6rCQInJz
 56CdXIhZgakIXA9UioojzKLvEfweY5Zp+71czsoo=
Date: Thu, 21 May 2020 14:41:15 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Wei Liu <wei.liu@kernel.org>
Subject: Re: [PATCH] x86/xen: drop an unused parameter gsi_override
Message-ID: <20200521194115.GA1169412@bjorn-Precision-5520>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200428153640.76476-1-wei.liu@kernel.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, sstabellini@kernel.org,
 konrad.wilk@oracle.com, linux-pci@vger.kernel.org, x86@kernel.org,
 linux-kernel@vger.kernel.org, Michael Kelley <mikelley@microsoft.com>,
 Xen Development List <xen-devel@lists.xenproject.org>,
 boris.ostrovsky@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, Apr 28, 2020 at 03:36:40PM +0000, Wei Liu wrote:
> All callers within the same file pass in -1 (no override).
> 
> Signed-off-by: Wei Liu <wei.liu@kernel.org>

Applied to pci/virtualization for v5.8, thanks!

I don't see anything else in linux-next that touches this file, but if
somebody wants to merge this via another tree, just let me know.

> ---
>  arch/x86/pci/xen.c | 16 ++++++----------
>  1 file changed, 6 insertions(+), 10 deletions(-)
> 
> diff --git a/arch/x86/pci/xen.c b/arch/x86/pci/xen.c
> index 91220cc25854..e3f1ca316068 100644
> --- a/arch/x86/pci/xen.c
> +++ b/arch/x86/pci/xen.c
> @@ -60,8 +60,7 @@ static int xen_pcifront_enable_irq(struct pci_dev *dev)
>  }
>  
>  #ifdef CONFIG_ACPI
> -static int xen_register_pirq(u32 gsi, int gsi_override, int triggering,
> -			     bool set_pirq)
> +static int xen_register_pirq(u32 gsi, int triggering, bool set_pirq)
>  {
>  	int rc, pirq = -1, irq = -1;
>  	struct physdev_map_pirq map_irq;
> @@ -94,9 +93,6 @@ static int xen_register_pirq(u32 gsi, int gsi_override, int triggering,
>  		name = "ioapic-level";
>  	}
>  
> -	if (gsi_override >= 0)
> -		gsi = gsi_override;
> -
>  	irq = xen_bind_pirq_gsi_to_irq(gsi, map_irq.pirq, shareable, name);
>  	if (irq < 0)
>  		goto out;
> @@ -112,12 +108,12 @@ static int acpi_register_gsi_xen_hvm(struct device *dev, u32 gsi,
>  	if (!xen_hvm_domain())
>  		return -1;
>  
> -	return xen_register_pirq(gsi, -1 /* no GSI override */, trigger,
> +	return xen_register_pirq(gsi, trigger,
>  				 false /* no mapping of GSI to PIRQ */);
>  }
>  
>  #ifdef CONFIG_XEN_DOM0
> -static int xen_register_gsi(u32 gsi, int gsi_override, int triggering, int polarity)
> +static int xen_register_gsi(u32 gsi, int triggering, int polarity)
>  {
>  	int rc, irq;
>  	struct physdev_setup_gsi setup_gsi;
> @@ -128,7 +124,7 @@ static int xen_register_gsi(u32 gsi, int gsi_override, int triggering, int polar
>  	printk(KERN_DEBUG "xen: registering gsi %u triggering %d polarity %d\n",
>  			gsi, triggering, polarity);
>  
> -	irq = xen_register_pirq(gsi, gsi_override, triggering, true);
> +	irq = xen_register_pirq(gsi, triggering, true);
>  
>  	setup_gsi.gsi = gsi;
>  	setup_gsi.triggering = (triggering == ACPI_EDGE_SENSITIVE ? 0 : 1);
> @@ -148,7 +144,7 @@ static int xen_register_gsi(u32 gsi, int gsi_override, int triggering, int polar
>  static int acpi_register_gsi_xen(struct device *dev, u32 gsi,
>  				 int trigger, int polarity)
>  {
> -	return xen_register_gsi(gsi, -1 /* no GSI override */, trigger, polarity);
> +	return xen_register_gsi(gsi, trigger, polarity);
>  }
>  #endif
>  #endif
> @@ -491,7 +487,7 @@ int __init pci_xen_initial_domain(void)
>  		if (acpi_get_override_irq(irq, &trigger, &polarity) == -1)
>  			continue;
>  
> -		xen_register_pirq(irq, -1 /* no GSI override */,
> +		xen_register_pirq(irq,
>  			trigger ? ACPI_LEVEL_SENSITIVE : ACPI_EDGE_SENSITIVE,
>  			true /* Map GSI to PIRQ */);
>  	}
> -- 
> 2.20.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 21 20:09:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 20:09:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbrUy-0005VQ-De; Thu, 21 May 2020 20:09:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RqGm=7D=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbrUw-0005VI-UB
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 20:08:59 +0000
X-Inumbo-ID: e756c012-9b9e-11ea-ab61-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e756c012-9b9e-11ea-ab61-12813bfff9fa;
 Thu, 21 May 2020 20:08:58 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id F3C84207D3;
 Thu, 21 May 2020 20:08:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590091737;
 bh=vEJTuttOxs4JpAIMsGvnSXLjxqVm1B/URIYI7oYtQbk=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=TRQaadxHYfti3eW2b+FoYOFMPyROpwEIbtKj2CuoXe/gsth2rSYpPkyxC2bCY2ld+
 H7f7yW0zmuCF4wvtXlGgLOeG46DSJIRKkfz3SK4yZfm4ElplcFkB5qoxb2rU/gC/SL
 1ne6kHxu9PzSY7msjlXmfe6JAp5tY2tYcMpikrms=
Date: Thu, 21 May 2020 13:08:56 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 09/10] xen/arm: introduce phys/dma translations in
 xen_dma_sync_for_*
In-Reply-To: <83c1120f-fe63-dc54-7b82-15a91c748de8@xen.org>
Message-ID: <alpine.DEB.2.21.2005211247461.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-9-sstabellini@kernel.org>
 <83c1120f-fe63-dc54-7b82-15a91c748de8@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Stefano Stabellini <sstabellini@kernel.org>,
 konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 21 May 2020, Julien Grall wrote:
> > @@ -97,8 +98,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
> >   			   phys_addr_t phys,
> >   			   dma_addr_t dev_addr)
> >   {
> > -	unsigned int xen_pfn = XEN_PFN_DOWN(phys);
> > -	unsigned int bfn = XEN_PFN_DOWN(dev_addr);
> > +	unsigned int bfn = XEN_PFN_DOWN(dma_to_phys(dev, dev_addr));
> >     	/*
> >   	 * The swiotlb buffer should be used if
> > @@ -115,7 +115,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
> >   	 * require a bounce buffer because the device doesn't support coherent
> >   	 * memory and we are not able to flush the cache.
> >   	 */
> > -	return (!hypercall_cflush && (xen_pfn != bfn) &&
> > +	return (!hypercall_cflush && !pfn_valid(bfn) &&
> 
> I believe this change is incorrect. The bfn is a frame based on Xen page
> granularity (always 4K) while pfn_valid() is expecting a frame based on the
> Kernel page granularity.

Given that kernel granularity >= Xen granularity, it looks like it would
be safe to use PFN_DOWN instead of XEN_PFN_DOWN:

  unsigned int bfn = PFN_DOWN(dma_to_phys(dev, dev_addr));
  return (!hypercall_cflush && !pfn_valid(bfn) &&


From xen-devel-bounces@lists.xenproject.org Thu May 21 20:14:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 20:14:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbra9-0006NN-1N; Thu, 21 May 2020 20:14:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RqGm=7D=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbra7-0006NI-UK
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 20:14:19 +0000
X-Inumbo-ID: a703ad76-9b9f-11ea-9887-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a703ad76-9b9f-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 20:14:19 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9DF29207D3;
 Thu, 21 May 2020 20:14:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590092058;
 bh=suEz0MB32IOGGNvUKp3xFnbJ4st9KmSvw9lfG5y2jOo=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=WlZTEqico2+j5W2o2v1gAmMgQWuOHXzbhkBt9WlqipMd8XnTSJOpUwAfQDMAkhnYs
 BeZhoWcOssLSSCVcIbnvr5xtnEY3JG4I9ynz7NmIQe5cx2t0tCQgox7+ient/cKZKo
 4ux4u+UDXvWH828O3t8s4bL2+8dSDoAtBUKZErgI=
Date: Thu, 21 May 2020 13:14:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 for-4.14 0/3] Remove the 1GB limitation on Raspberry Pi 4
In-Reply-To: <20200519172028.31169-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2005211312340.27502@sstabellini-ThinkPad-T480s>
References: <20200519172028.31169-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: minyard@acm.org, paul@xen.org, Julien Grall <jgrall@amazon.com>,
 roman@zededa.com, jeff.kubascik@dornerworks.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, 19 May 2020, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> At the moment, a user who wants to boot Xen on the Raspberry Pi 4 can
> only use the first GB of memory.
> 
> This is because several devices cannot DMA above 1GB but Xen doesn't
> necessarily allocate memory for Dom0 below 1GB.
> 
> This small series is trying to address the problem by allowing a
> platform to restrict where Dom0 banks are allocated.
> 
> This is also a candidate for Xen 4.14. Without it, a user will not be
> able to use all the RAM on the Raspberry Pi 4.
> 
> This series has only been slightly tested. I would appreciate more testing
> on the Raspberry Pi 4 to confirm it removes the restriction.

You can add my reviewed-by to all patches in the series.



From xen-devel-bounces@lists.xenproject.org Thu May 21 20:17:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 20:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbrdY-0006Wi-Gc; Thu, 21 May 2020 20:17:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DSRu=7D=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jbrdW-0006Wb-Sr
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 20:17:50 +0000
X-Inumbo-ID: 2493e17a-9ba0-11ea-b9cf-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2493e17a-9ba0-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 20:17:50 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id z72so7714884wmc.2
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 13:17:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=huWgXpbDzdljKw3GvlKFMkS9v32YlszvZUodY9Vu2AM=;
 b=ljg05M5aahcskGiy83jjBBWZfs2xI5NxpP4k2fna2chn8rGXb6cpKB3Ef+xrCKF9yn
 fQC/P/qaZ+2l4SQYIMEb6qRa2UXz12dYEOnRQiH85v1UTuSC4oBFApfWrfv4jSTQp6P2
 yoDl/ib6de/OZOfk7HfYqAUJjl1/u8UWS/W/QSGfCiDmKzeNUB/Z8Ib6P50jRUSQVjDM
 YPfTI5HF8oUlAEgikI2fUumFbbtOfwWNlHE++B0GUji8XWMBv5r+YmHiQ+N90tOpsCSv
 TYp8Gv596H0ws+AtWZyXGaNYXCgld6k3Tm8qzgYs52/rlQa/zjGouo+MPG/6WsSbm+SO
 rogg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=huWgXpbDzdljKw3GvlKFMkS9v32YlszvZUodY9Vu2AM=;
 b=ofFyfsug6m9yr7bH9/lIddjw2sidvvdCxcAwUVtraOXjK2Z+9w/OWaoj+C6x8VEQ0u
 9zLwOIdMtdE+fgGjiFrHuuKKD9vetvXLQs0XFskx6yMTDCH63dZr1VuYzb4k1mO98jl9
 d28bZ0eZvGTuZ9aio5DlcNN1UBHdvsZMGDeDuzAJm23hAf2zsiJ02MRxX4zhYCN7M/t7
 CtlU9Zlo5DhRrKHWrLuHIYPdd7JN/f6LoC6xERzUWnVzuYglDwo9HfGKeoAV1yoVlzTQ
 IsCFzmmE1iCqCCP2VMwWW65IoOujwmjL1Q24JJcvgU6ay7A0TdBwJcGBLxGcRQum4k8B
 2cIA==
X-Gm-Message-State: AOAM532HDBPeuIGGW1kzbuqDNmZ/MTij8nX7lNfhKEl9qUO0PqLbLcTT
 LKlrBr5NKUNR6zEzlA5yi6FVHwHFEhzT0EyBwcc=
X-Google-Smtp-Source: ABdhPJwFCX6ZTfZC+CYhpgpI3DM3w4YWTm9Ir6r9l2PdlZuiYiDYb0T2GXm6T8zhCVnCb/vwD8/q+rFFDW8SaepTFM4=
X-Received: by 2002:a7b:cf05:: with SMTP id l5mr10778996wmg.100.1590092269456; 
 Thu, 21 May 2020 13:17:49 -0700 (PDT)
MIME-Version: 1.0
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-9-sstabellini@kernel.org>
 <83c1120f-fe63-dc54-7b82-15a91c748de8@xen.org>
 <alpine.DEB.2.21.2005211247461.27502@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2005211247461.27502@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 21 May 2020 21:17:38 +0100
Message-ID: <CAJ=z9a24gHgS1yrpzLmW35zPSR6F6NUvYJUyJ9p1+oLp7THD8w@mail.gmail.com>
Subject: Re: [PATCH 09/10] xen/arm: introduce phys/dma translations in
 xen_dma_sync_for_*
To: Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Jürgen Groß <jgross@suse.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, linux-kernel@vger.kernel.org,
 xen-devel <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 21 May 2020 at 21:08, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Thu, 21 May 2020, Julien Grall wrote:
> > > @@ -97,8 +98,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
> > >                        phys_addr_t phys,
> > >                        dma_addr_t dev_addr)
> > >   {
> > > -   unsigned int xen_pfn = XEN_PFN_DOWN(phys);
> > > -   unsigned int bfn = XEN_PFN_DOWN(dev_addr);
> > > +   unsigned int bfn = XEN_PFN_DOWN(dma_to_phys(dev, dev_addr));
> > >             /*
> > >      * The swiotlb buffer should be used if
> > > @@ -115,7 +115,7 @@ bool xen_arch_need_swiotlb(struct device *dev,
> > >      * require a bounce buffer because the device doesn't support coherent
> > >      * memory and we are not able to flush the cache.
> > >      */
> > > -   return (!hypercall_cflush && (xen_pfn != bfn) &&
> > > +   return (!hypercall_cflush && !pfn_valid(bfn) &&
> >
> > I believe this change is incorrect. The bfn is a frame based on Xen page
> > granularity (always 4K) while pfn_valid() is expecting a frame based on the
> > Kernel page granularity.
>
> Given that kernel granularity >= xen granularity it looks like it would
> be safe to use PFN_DOWN instead of XEN_PFN_DOWN:
>
>   unsigned int bfn = PFN_DOWN(dma_to_phys(dev, dev_addr));

Yes. But is the change worth it? pfn_valid() is definitely
going to be more expensive than the current check.

Cheers,


From xen-devel-bounces@lists.xenproject.org Thu May 21 21:08:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 21:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbsPk-0002aj-JF; Thu, 21 May 2020 21:07:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbsPj-0002ae-Eu
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 21:07:39 +0000
X-Inumbo-ID: 15c159f0-9ba7-11ea-ab67-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15c159f0-9ba7-11ea-ab67-12813bfff9fa;
 Thu, 21 May 2020 21:07:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dHjMN/Xc2cnhSDdluGWgUd9yU6V/kE3p57vAGdY8PMU=; b=qCFAA1P1CcxZYQk1cjLm6KviU
 C32xF52664/VVYwA2KwSHAlHbFZDf1/dwUrDHVESFWgFf+tXKr6PpRTfXg7L2kVDLT+tEveD0qGgI
 v3HJ2I7OM0rfL5RH1X4a84LYGGmKRtaa9PS6HDjsVFpVOMfByiUNgJmjUkolp9cAKwrqo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbsPa-0004Vs-T2; Thu, 21 May 2020 21:07:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbsPa-0000I2-JV; Thu, 21 May 2020 21:07:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbsPa-00020f-Iw; Thu, 21 May 2020 21:07:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150296-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150296: regressions - FAIL
X-Osstest-Failures: linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=b85051e755b0e9d6dd8f17ef1da083851b83287d
X-Osstest-Versions-That: linux=2ea1940b84e55420a9e8feddcafd173edfe4df11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 21:07:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150296 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150296/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386-pvops              6 kernel-build             fail REGR. vs. 150281

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150281
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150281
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150281
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150281
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150281
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b85051e755b0e9d6dd8f17ef1da083851b83287d
baseline version:
 linux                2ea1940b84e55420a9e8feddcafd173edfe4df11

Last test of basis   150281  2020-05-20 19:40:19 Z    1 days
Testing same since   150296  2020-05-21 08:13:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Christoph Hellwig <hch@lst.de>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Johannes Berg <johannes.berg@intel.com>
  Kamal Dasu <kdasu.kdev@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Weinberger <richard@nod.at>
  Sascha Hauer <s.hauer@pengutronix.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b85051e755b0e9d6dd8f17ef1da083851b83287d
Merge: fea371e259eb f3a6a6c5e0f5
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed May 20 13:23:55 2020 -0700

    Merge tag 'fixes-for-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux
    
    Pull MTD fixes from Richard Weinberger:
    
     - Fix a PM regression in brcmnand driver
    
     - Propagate ECC information correctly on SPI-NAND
    
     - Make sure no MTD name is used multiple time in nvmem
    
    * tag 'fixes-for-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/mtd/linux:
      mtd:rawnand: brcmnand: Fix PM resume crash
      mtd: Fix mtd not registered due to nvmem name collision
      mtd: spinand: Propagate ECC information to the MTD structure

commit fea371e259ebcc85d2d51a036743189bee487289
Merge: d303402c2883 0e7572cffe44
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed May 20 13:07:01 2020 -0700

    Merge tag 'for-linus-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs
    
    Pull UBI and UBIFS fixes from Richard Weinberger:
    
     - Correctly set next cursor for detailed_erase_block_info debugfs file
    
     - Don't use crypto_shash_descsize() for digest size in UBIFS
    
     - Remove broken lazytime support from UBIFS
    
    * tag 'for-linus-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/ubifs:
      ubi: Fix seq_file usage in detailed_erase_block_info debugfs file
      ubifs: fix wrong use of crypto_shash_descsize()
      ubifs: remove broken lazytime support

commit d303402c288340f9094fa0ba697600335a41bc3e
Merge: 2ea1940b84e5 2e27d33d22af
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed May 20 12:56:21 2020 -0700

    Merge tag 'for-linus-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml
    
    Pull UML fixes from Richard Weinberger:
    
     - Two missing includes which caused build issues on recent systems
    
     - Correctly set TRANS_GRE_LEN in our vector network driver
    
    * tag 'for-linus-5.7-rc6' of git://git.kernel.org/pub/scm/linux/kernel/git/rw/uml:
      um: Fix typo in vector driver transport option definition
      um: syscall.c: include <asm/unistd.h>
      um: Fix xor.h include

commit f3a6a6c5e0f5a303fd8ec84ea33c0da5869d715f
Author: Kamal Dasu <kdasu.kdev@gmail.com>
Date:   Sat May 2 16:41:36 2020 -0400

    mtd:rawnand: brcmnand: Fix PM resume crash
    
    This change fixes a crash observed on PM resume. The bug
    was introduced by the change that added flash-edu support.
    
    Fixes: a5d53ad26a8b ("mtd: rawnand: brcmnand: Add support for flash-edu for dma transfers")
    
    Signed-off-by: Kamal Dasu <kdasu.kdev@gmail.com>
    Acked-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit 7b01b7239d0dc9832e0d0d23605c1ff047422a2c
Author: Ricardo Ribalda Delgado <ribalda@kernel.org>
Date:   Thu Apr 30 15:17:21 2020 +0200

    mtd: Fix mtd not registered due to nvmem name collision
    
    When the nvmem framework is enabled, a nvmem device is created per mtd
    device/partition.
    
    It is not uncommon for a system to have multiple mtd devices with
    partitions that share the same name, e.g. when DT overlays are allowed
    and the same mtd-carrying device is attached twice.
    
    Under those circumstances, the mtd device fails to register due to a
    name duplication in the nvmem framework.
    
    With this patch we use the mtdX name, which is unique, instead of the
    partition name.
    
    [    8.948991] sysfs: cannot create duplicate filename '/bus/nvmem/devices/Production Data'
    [    8.948992] CPU: 7 PID: 246 Comm: systemd-udevd Not tainted 5.5.0-qtec-standard #13
    [    8.948993] Hardware name: AMD Dibbler/Dibbler, BIOS 05.22.04.0019 10/26/2019
    [    8.948994] Call Trace:
    [    8.948996]  dump_stack+0x50/0x70
    [    8.948998]  sysfs_warn_dup.cold+0x17/0x2d
    [    8.949000]  sysfs_do_create_link_sd.isra.0+0xc2/0xd0
    [    8.949002]  bus_add_device+0x74/0x140
    [    8.949004]  device_add+0x34b/0x850
    [    8.949006]  nvmem_register.part.0+0x1bf/0x640
    ...
    [    8.948926] mtd mtd8: Failed to register NVMEM device
    
    Fixes: c4dfa25ab307 ("mtd: add support for reading MTD devices via the nvmem API")
    Signed-off-by: Ricardo Ribalda Delgado <ribalda@kernel.org>
    Acked-by: Miquel Raynal <miquel.raynal@bootlin.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit 3507273d5a4d3c2e46f9d3f9ed9449805f5dff07
Author: Miquel Raynal <miquel.raynal@bootlin.com>
Date:   Wed May 13 15:10:29 2020 +0200

    mtd: spinand: Propagate ECC information to the MTD structure
    
    This is done by default in the raw NAND core (nand_base.c) but was
    missing in the SPI-NAND core. Without these two lines the ecc_strength
    and ecc_step_size values are not exported to the user through sysfs.
    
    Fixes: 7529df465248 ("mtd: nand: Add core infrastructure to support SPI NANDs")
    Cc: stable@vger.kernel.org
    Signed-off-by: Miquel Raynal <miquel.raynal@bootlin.com>
    Reviewed-by: Boris Brezillon <boris.brezillon@collabora.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit 0e7572cffe442290c347e779bf8bd4306bb0aa7c
Author: Richard Weinberger <richard@nod.at>
Date:   Sat May 2 14:48:02 2020 +0200

    ubi: Fix seq_file usage in detailed_erase_block_info debugfs file
    
    3bfa7e141b0b ("fs/seq_file.c: seq_read(): add info message about buggy .next functions")
    showed that we were not using seq_file correctly.
    So make sure that our ->next function always updates the position.
    
    Fixes: 7bccd12d27b7 ("ubi: Add debugfs file for tracking PEB state")
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit 3c3c32f85b6cc05e5db78693457deff03ac0f434
Author: Eric Biggers <ebiggers@google.com>
Date:   Fri May 1 22:59:45 2020 -0700

    ubifs: fix wrong use of crypto_shash_descsize()
    
    crypto_shash_descsize() returns the size of the shash_desc context
    needed to compute the hash, not the size of the hash itself.
    
    crypto_shash_digestsize() would be correct, or alternatively using
    c->hash_len and c->hmac_desc_len which already store the correct values.
    But actually it's simpler to just use stack arrays, so do that instead.
    
    Fixes: 49525e5eecca ("ubifs: Add helper functions for authentication support")
    Fixes: da8ef65f9573 ("ubifs: Authenticate replayed journal")
    Cc: <stable@vger.kernel.org> # v4.20+
    Cc: Sascha Hauer <s.hauer@pengutronix.de>
    Signed-off-by: Eric Biggers <ebiggers@google.com>
    Acked-by: Sascha Hauer <s.hauer@pengutronix.de>
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit 2e27d33d22afa3d12746f854d6a4fad7ad7b86de
Author: Ignat Korchagin <ignat@cloudflare.com>
Date:   Sat Apr 25 09:18:42 2020 +0100

    um: Fix typo in vector driver transport option definition
    
    No big problem for now, as "raw" and "gre" have the same length, but
    this could go wrong if a future transport name does not.
    
    Signed-off-by: Ignat Korchagin <ignat@cloudflare.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit e6da5df0eefc0ff5c48aba29157d738888b214e1
Author: Johannes Berg <johannes.berg@intel.com>
Date:   Wed Apr 15 09:51:52 2020 +0200

    um: syscall.c: include <asm/unistd.h>
    
    Without CONFIG_SECCOMP, we don't get this include recursively
    through the existing includes, thus failing the build on not
    having __NR_syscall_max defined. Add the necessary include to
    fix this.
    
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Acked-By: Anton Ivanov <anton.ivanov@cambridgegreys.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit d0e20fd4c1db7cb28874402f78f39870d84398e9
Author: Johannes Berg <johannes.berg@intel.com>
Date:   Sun Apr 5 21:33:57 2020 +0200

    um: Fix xor.h include
    
    Two independent changes here ended up going into the tree one after
    another without a necessary rename; fix that.
    
    Reported-by: Thomas Meyer <thomas@m3y3r.de>
    Fixes: f185063bff91 ("um: Move timer-internal.h to non-shared")
    Signed-off-by: Johannes Berg <johannes.berg@intel.com>
    Reviewed-by: Brendan Higgins <brendanhiggins@google.com>
    Signed-off-by: Richard Weinberger <richard@nod.at>

commit ecf84096a526f2632ee85c32a3d05de3fa60ce80
Author: Christoph Hellwig <hch@lst.de>
Date:   Thu Apr 9 13:33:05 2020 +0200

    ubifs: remove broken lazytime support
    
    When "ubifs: introduce UBIFS_ATIME_SUPPORT to ubifs" introduced atime
    support to ubifs, it also added lazytime support.  As far as I can tell
    the lazytime support is terminally broken, as it causes
    mark_inode_dirty_sync to be called from __writeback_single_inode, which
    will then trigger the locking assert in ubifs_dirty_inode.  Just remove
    the broken lazytime support for now, it can be added back later,
    especially as some infrastructure changes should make that easier soon.
    
    Fixes: 8c1c5f263833 ("ubifs: introduce UBIFS_ATIME_SUPPORT to ubifs")
    Signed-off-by: Christoph Hellwig <hch@lst.de>
    Signed-off-by: Richard Weinberger <richard@nod.at>


From xen-devel-bounces@lists.xenproject.org Thu May 21 21:44:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 21:44:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbszO-00066j-GA; Thu, 21 May 2020 21:44:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QHd7=7D=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jbszN-00066e-6P
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 21:44:29 +0000
X-Inumbo-ID: 3e7f85a6-9bac-11ea-b07b-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e7f85a6-9bac-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 21:44:28 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 08GFkIz6P4d/Td+27tImWBk/VaI65rsGnGEyI50VD17NIZbQKRIEoVJxmmvzUS3xVi87M7fYZH
 i67gKdKdsMK1t3ehwV4f4WSG09mW8II54YUmuLtKgJMVTVqRgdJshHYgyDwlJ0LcJNk7DRVF9y
 QvZaPbD+W7SUUoE7YIZ/r2dz/eNxzOVX444vsbDshAHYz31PaNUVEdIt7GSg1Fv/oMBZt9a3ke
 pLdfW6MUXzOjF+M4qsehEBTFS0/vx7+astlQHtt9YKUFN5R3bHiv8FsGioLSNFo8fQ2W6MPTfT
 HTg=
X-SBRS: 2.7
X-MesageID: 18159679
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,419,1583211600"; d="scan'208";a="18159679"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was marked
 for recalculation
Date: Thu, 21 May 2020 22:43:58 +0100
Message-ID: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, Igor Druzhinin <igor.druzhinin@citrix.com>,
 wl@xen.org, jbeulich@suse.com, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If a recalculation NPT fault hasn't been handled explicitly in
hvm_hap_nested_page_fault() then it's potentially safe to retry - the
US bit has been reinstated in the PTE and any real fault would be
correctly re-raised next time.

This covers a specific case of migration with a vGPU assigned on AMD:
global log-dirty is enabled and causes an immediate recalculation NPT
fault in the MMIO area upon access. This type of fault isn't handled
explicitly in hvm_hap_nested_page_fault() (which isn't called on EPT
misconfig exits on Intel), which results in a domain crash.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
 xen/arch/x86/hvm/svm/svm.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 46a1aac..f0d0bd3 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
         /* inject #VMEXIT(NPF) into guest. */
         nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
         return;
+    case 0:
+        /* If a recalculation page fault hasn't been handled - just retry. */
+        if ( pfec & PFEC_user_mode )
+            return;
     }
 
     /* Everything else is an error. */
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Thu May 21 21:59:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 21:59:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbtE4-00078D-Qh; Thu, 21 May 2020 21:59:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3PC2=7D=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jbtE2-000787-TJ
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 21:59:38 +0000
X-Inumbo-ID: 5d4bc88a-9bae-11ea-b9cf-bc764e2007e4
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d4bc88a-9bae-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 21:59:38 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LLqDip034420;
 Thu, 21 May 2020 21:59:34 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=vvb0I8OQwnhxZvAKCewq0Vfhy5f8VV+kFj/ptCpXlC0=;
 b=BudTa7uvqT5tGY65n7q7NcYymLqgv9Z0UsmOidnj3sdxN4JcHD/1DHLK7wzUwz6rldVj
 sy/Lbr8bfkEBzoKcBPQrdqm8TDUHbbwpOBZ8F35oJxuefEQnYM/5z3QHQyPSoijCMrqn
 WJS5NQnUY/82X/EbxbB6CGpf/wE6RD5sTCxcSY1WTw75n7NOZ33a7NlLppVdocJ+wiDm
 cddXDgp+AoATliq0QK91+bkO4v8mdYygXEP/saV8bZgaqgDLavTL4e3pLSai3EO4V3xz
 u3SK8tFjvINwnadesI9PivjxtEBa6tCrsq1IN84vpp2+in754oZ9auUzajzDEaC798ir Zg== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 31284mawhv-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 21 May 2020 21:59:34 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LLqYNu059297;
 Thu, 21 May 2020 21:57:33 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 314gma0g5p-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 21 May 2020 21:57:33 +0000
Received: from abhmp0002.oracle.com (abhmp0002.oracle.com [141.146.116.8])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04LLvUUl029917;
 Thu, 21 May 2020 21:57:31 GMT
Received: from [10.39.200.114] (/10.39.200.114)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 21 May 2020 14:57:29 -0700
Subject: Re: [PATCH] xen/events: avoid NULL pointer dereference in
 evtchn_from_irq()
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
References: <20200319071428.12115-1-jgross@suse.com>
 <30719c35-6de7-d400-7bb8-cff4570f8971@oracle.com>
 <20200521184602.GP98582@mail-itl>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <c36de3eb-c0ad-45e1-e08b-cb7d86d197f6@oracle.com>
Date: Thu, 21 May 2020 17:57:26 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200521184602.GP98582@mail-itl>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 mlxlogscore=999
 adultscore=0 phishscore=0 mlxscore=0 spamscore=0 suspectscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005210162
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0
 mlxscore=0
 cotscore=-2147483648 impostorscore=0 malwarescore=0 mlxlogscore=999
 lowpriorityscore=0 phishscore=0 spamscore=0 bulkscore=0 adultscore=0
 priorityscore=1501 clxscore=1015 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2004280000 definitions=main-2005210162
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org,
 stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/21/20 2:46 PM, Marek Marczykowski-Górecki wrote:
> On Thu, May 21, 2020 at 01:22:03PM -0400, Boris Ostrovsky wrote:
>> On 3/19/20 3:14 AM, Juergen Gross wrote:
>>> There have been reports of races in evtchn_from_irq() where the info
>>> pointer has been NULL.
>>>
>>> Avoid that case by testing info before dereferencing it.
>>>
>>> In order to avoid accessing a just freed info structure do the kfree()
>>> via kfree_rcu().
>>
>> Looks like no one ever responded to this.
>>
>>
>> This change looks fine but is there a background on the problem? I
>> looked in the archives and didn't find the relevant discussion.
> Here is the original bug report:
> https://xen.markmail.org/thread/44apwkwzeme4uavo
>


Thanks. Do we know what the race is? Is it because an event is being
delivered from a dying guest who is in the process of tearing down event
channels?


-boris



From xen-devel-bounces@lists.xenproject.org Thu May 21 22:49:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 22:49:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbtzy-000371-Mx; Thu, 21 May 2020 22:49:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbtzy-00036w-4D
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 22:49:10 +0000
X-Inumbo-ID: 481490da-9bb5-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 481490da-9bb5-11ea-b07b-bc764e2007e4;
 Thu, 21 May 2020 22:49:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=L5aY0IXO0Lrs7x/MP4bqPAdiqpvciv+M96xSbuL4XNs=; b=cAdKc/kLu5Kn/zOT04WWFF0U7
 z4OMIY3SderU9EgqFDqcWoIdyVogrPo5cpGPVCaZOcZJFJLyQTdiwWaJGm/jA1IzeXDqmEsNdhczp
 rDagObJd/9DuwtUAmwsIwYvtAuam5zbyZy5yduecU64tt/PxSzdCdoQ6r8pfzj0XCvdYw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbtzw-0006Xa-GV; Thu, 21 May 2020 22:49:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbtzw-0003xY-5J; Thu, 21 May 2020 22:49:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbtzw-0007Pf-2d; Thu, 21 May 2020 22:49:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150300-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150300: all pass - PUSHED
X-Osstest-Versions-This: ovmf=74f90d38c446e247469e2a775970eeed89216909
X-Osstest-Versions-That: ovmf=3f89db869028fa65a37756fd7f391cbd69f4579c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 22:49:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150300 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150300/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 74f90d38c446e247469e2a775970eeed89216909
baseline version:
 ovmf                 3f89db869028fa65a37756fd7f391cbd69f4579c

Last test of basis   150293  2020-05-21 07:12:32 Z    0 days
Testing same since   150300  2020-05-21 12:39:29 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ard.biesheuvel@arm.com>
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   3f89db8690..74f90d38c4  74f90d38c446e247469e2a775970eeed89216909 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 21 22:53:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 22:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbu4A-0003y1-9H; Thu, 21 May 2020 22:53:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IBHf=7D=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbu48-0003xw-Rd
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 22:53:28 +0000
X-Inumbo-ID: e1d08404-9bb5-11ea-ab77-12813bfff9fa
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1d08404-9bb5-11ea-ab77-12813bfff9fa;
 Thu, 21 May 2020 22:53:27 +0000 (UTC)
IronPort-SDR: 3hvyuLfT+JCx8wWlRVjScDS6g4Ek2mcx4VT2cpPMPLRvtoNGDdayjm99TLu3GvO2WiIu1VmVlV
 NbNOpgiwuB0A==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 21 May 2020 15:53:26 -0700
IronPort-SDR: w0bSNiQcmLVIv0ZueWiix7Cr6PDZK7pTALGuCNsSNQqTMQe2wDGd8mDXi8pNa6bG55CWSe1fg5
 Z7eY+/HMssTg==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,419,1583222400"; d="scan'208";a="412567790"
Received: from azehtab-mobl2.amr.corp.intel.com (HELO ubuntu.localdomain)
 ([10.255.68.236])
 by orsmga004.jf.intel.com with ESMTP; 21 May 2020 15:53:25 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 1/2] x86/mem_sharing: Prohibit interrupt injection
 for forks
Date: Thu, 21 May 2020 15:53:22 -0700
Message-Id: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When running shallow forks without device models, it may be undesirable for
Xen to inject interrupts. With Windows forks we have observed the kernel going
into infinite loops when trying to process such interrupts. By disabling
interrupt injection, the fuzzer can exercise the target code without
interference.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 xen/arch/x86/hvm/vmx/intr.c      | 4 ++++
 xen/arch/x86/mm/mem_sharing.c    | 6 +++++-
 xen/include/asm-x86/hvm/domain.h | 2 ++
 xen/include/public/memory.h      | 1 +
 4 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 000e14af49..3814795e3f 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -256,6 +256,10 @@ void vmx_intr_assist(void)
     if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
         return;
 
+    /* Block event injection for VM fork if requested */
+    if ( unlikely(v->domain->arch.hvm.mem_sharing.prohibit_interrupts) )
+        return;
+
     /* Crank the handle on interrupt state. */
     pt_vector = pt_update_irq(v);
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 7271e5c90b..7352fce866 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -2106,7 +2106,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         rc = -EINVAL;
         if ( mso.u.fork.pad )
             goto out;
-        if ( mso.u.fork.flags & ~XENMEM_FORK_WITH_IOMMU_ALLOWED )
+        if ( mso.u.fork.flags & ~(XENMEM_FORK_WITH_IOMMU_ALLOWED |
+                                  XENMEM_FORK_PROHIBIT_INTERRUPTS) )
             goto out;
 
         rc = rcu_lock_live_remote_domain_by_id(mso.u.fork.parent_domain,
@@ -2134,6 +2135,9 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
             rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                                "lh", XENMEM_sharing_op,
                                                arg);
+        else if ( !rc && (mso.u.fork.flags & XENMEM_FORK_PROHIBIT_INTERRUPTS) )
+            d->arch.hvm.mem_sharing.prohibit_interrupts = true;
+
         rcu_unlock_domain(pd);
         break;
     }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 95fe18cddc..e114f818d3 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -74,6 +74,8 @@ struct mem_sharing_domain
      * to resume the search.
      */
     unsigned long next_shared_gfn_to_relinquish;
+
+    bool prohibit_interrupts;
 };
 #endif
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index dbd35305df..fe2e6caa68 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -537,6 +537,7 @@ struct xen_mem_sharing_op {
         struct mem_sharing_op_fork {      /* OP_FORK */
             domid_t parent_domain;        /* IN: parent's domain id */
 #define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
+#define XENMEM_FORK_PROHIBIT_INTERRUPTS (1u << 1)
             uint16_t flags;               /* IN: optional settings */
             uint32_t pad;                 /* Must be set to 0 */
         } fork;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 22:53:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 22:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbu4F-0003yp-GV; Thu, 21 May 2020 22:53:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IBHf=7D=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1jbu4D-0003yZ-Rd
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 22:53:33 +0000
X-Inumbo-ID: e31c22f0-9bb5-11ea-ab77-12813bfff9fa
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e31c22f0-9bb5-11ea-ab77-12813bfff9fa;
 Thu, 21 May 2020 22:53:29 +0000 (UTC)
IronPort-SDR: y/I/yRtwb4qe2ZC/aI9RLUEPE5102Mu4foVGhb9fVcBXvM5xDjUchRi/rB57W3ogWUtHWy6CAc
 gm8VpqjOr7sA==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 21 May 2020 15:53:27 -0700
IronPort-SDR: 3ae7Lmt9oxNiI0YYrpyIj/B7bCMRzlEYzmBU++wyJZIwSlS8CUg+yeFKmhOlK192W871iSuXiI
 HtJKujlMEXIg==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,419,1583222400"; d="scan'208";a="412567795"
Received: from azehtab-mobl2.amr.corp.intel.com (HELO ubuntu.localdomain)
 ([10.255.68.236])
 by orsmga004.jf.intel.com with ESMTP; 21 May 2020 15:53:26 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14 2/2] tools/libxc: xc_memshr_fork with interrupts
 disabled
Date: Thu, 21 May 2020 15:53:23 -0700
Message-Id: <c2830cae9affe327170c900731a7ca050ddb91ea.1590101479.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
References: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Toolstack side for creating forks with interrupt injection disabled.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
 tools/libxc/include/xenctrl.h | 3 ++-
 tools/libxc/xc_memshr.c       | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..0ea839b72a 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2242,7 +2242,8 @@ int xc_memshr_range_share(xc_interface *xch,
 int xc_memshr_fork(xc_interface *xch,
                    uint32_t source_domain,
                    uint32_t client_domain,
-                   bool allow_with_iommu);
+                   bool allow_with_iommu,
+                   bool prohibit_interrupts);
 
 /*
  * Note: this function is only intended to be used on short-lived forks that
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 2300cc7075..e2de1d3aa2 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -240,7 +240,7 @@ int xc_memshr_debug_gref(xc_interface *xch,
 }
 
 int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid,
-                   bool allow_with_iommu)
+                   bool allow_with_iommu, bool prohibit_interrupts)
 {
     xen_mem_sharing_op_t mso;
 
@@ -251,6 +251,8 @@ int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid,
 
     if ( allow_with_iommu )
         mso.u.fork.flags |= XENMEM_FORK_WITH_IOMMU_ALLOWED;
+    if ( prohibit_interrupts )
+        mso.u.fork.flags |= XENMEM_FORK_PROHIBIT_INTERRUPTS;
 
     return xc_memshr_memop(xch, domid, &mso);
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 21 23:01:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 23:01:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbuC4-000511-8I; Thu, 21 May 2020 23:01:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3PC2=7D=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jbuC2-00050w-WD
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 23:01:39 +0000
X-Inumbo-ID: 06177100-9bb7-11ea-ab77-12813bfff9fa
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06177100-9bb7-11ea-ab77-12813bfff9fa;
 Thu, 21 May 2020 23:01:37 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LMwVpJ129100;
 Thu, 21 May 2020 23:01:35 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=4X32WpZ18TJUn7A/N8unUu5lU8V4n9qJ4tKmC9pfW5k=;
 b=BSIluP8gg9bh3wENoVDpNyOMfNc+XrPU31TgvQ2wKUe95/PkOhFG/L8uxwiOELI2S2vx
 Kxivch4y+077nNTB0Pv5IdRQsHbUabxcNsK5PeAgDXvt0CLEzV8QSPm5bPoBvD0JiwPd
 tBVzIwHBoKmZyfLlUMgvwxdhMCRnQkHzdVgl2Z85mfwVdOPyecsfbpO3JQ4m9cvH7FXh
 miBoLe0GqwQ46j//4hQ0PV791p0i4AcIthiFzIDGGmjlKv0MWxQrse5mmoIPooVDefNV
 m+ftv0hMX3ktKZTYaEMCJs0FO8EjK0TSekY3er6lhwPF1No6Oh2j2HmCHelML2QGvdIa CQ== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2120.oracle.com with ESMTP id 31501rhku3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Thu, 21 May 2020 23:01:34 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04LMwWPK051969;
 Thu, 21 May 2020 22:59:34 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3030.oracle.com with ESMTP id 314gma2j7d-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 21 May 2020 22:59:34 +0000
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04LMxV3U002782;
 Thu, 21 May 2020 22:59:31 GMT
Received: from [10.39.200.114] (/10.39.200.114)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 21 May 2020 15:59:31 -0700
Subject: Re: [PATCH 08/10] swiotlb-xen: introduce phys_to_dma/dma_to_phys
 translations
To: Stefano Stabellini <sstabellini@kernel.org>, jgross@suse.com,
 konrad.wilk@oracle.com
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-8-sstabellini@kernel.org>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <7aab1c00-c115-b989-61e5-fd7d28fa0d07@oracle.com>
Date: Thu, 21 May 2020 18:59:28 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200520234520.22563-8-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 mlxlogscore=999
 adultscore=0 phishscore=0 mlxscore=0 spamscore=0 suspectscore=2
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005210172
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9628
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0
 lowpriorityscore=0 spamscore=0
 mlxlogscore=999 clxscore=1015 priorityscore=1501 cotscore=-2147483648
 impostorscore=0 bulkscore=0 adultscore=0 malwarescore=0 phishscore=0
 mlxscore=0 suspectscore=2 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005210172
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/20/20 7:45 PM, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
>
> Call dma_to_phys in is_xen_swiotlb_buffer.
> Call phys_to_dma in xen_phys_to_bus.
> Call dma_to_phys in xen_bus_to_phys.
>
> Everything is taken care of by these changes except for
> xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a
> few explicit phys_to_dma/dma_to_phys calls.
>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> ---
>  drivers/xen/swiotlb-xen.c | 20 ++++++++++++--------
>  1 file changed, 12 insertions(+), 8 deletions(-)
>
> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> index c50448fd9b75..d011c4c7aa72 100644
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -64,14 +64,16 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
>  
>  	dma |= paddr & ~XEN_PAGE_MASK;
>  
> -	return dma;
> +	return phys_to_dma(dev, dma);
>  }
>  
> -static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
> +static inline phys_addr_t xen_bus_to_phys(struct device *dev,
> +					  dma_addr_t dma_addr)


Since now dma address != bus address this is no longer
xen_bus_to_phys(). (And I guess the same is true for xen_phys_to_bus()).


>  {
> +	phys_addr_t baddr = dma_to_phys(dev, dma_addr);
>  	unsigned long xen_pfn = bfn_to_pfn(XEN_PFN_DOWN(baddr));
> -	dma_addr_t dma = (dma_addr_t)xen_pfn << XEN_PAGE_SHIFT;
> -	phys_addr_t paddr = dma;
> +	phys_addr_t paddr = (xen_pfn << XEN_PAGE_SHIFT) |
> +			    (baddr & ~XEN_PAGE_MASK);
>  
>  	paddr |= baddr & ~XEN_PAGE_MASK;


This line needs to go, no?


-boris



From xen-devel-bounces@lists.xenproject.org Thu May 21 23:02:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 23:02:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbuCf-00054k-MO; Thu, 21 May 2020 23:02:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KCrj=7D=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jbuCe-00054d-MF
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 23:02:16 +0000
X-Inumbo-ID: 1cfe8174-9bb7-11ea-b9cf-bc764e2007e4
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1cfe8174-9bb7-11ea-b9cf-bc764e2007e4;
 Thu, 21 May 2020 23:02:16 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id d7so10841765eja.7
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 16:02:16 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=W1t2A+3OScu1Y5T9zYIQYGjRttA9a3YJrKguzcDBeFU=;
 b=LooFSHSgRrWaLhQgC9LOaX8V3zb6TX21SfL8F93K5YNGxyCnd06vVKAvqn4fNmwpV+
 /6xeCoIKzsD+T1Y/0mCMx6iVGXE8kODc0ibMqgrCZD+Lmj74soRmWIAo04Z2b/DOAa2W
 Wz+o3sUvf1hPjiCKPLhFx6emCmgzaLljzjFr4LH42HW3ohp0/ilk5uZQwe1Wqtcc1jGm
 y1UYbNcZqZ2+5rPCn6ujZWanNZRk/zIg7agMk5YYwM7INfidbuR6041v5noLWhF5z03A
 DT9RtZ6r9DtQphNVO9LlWo3r0kYatJ6EXQLFyOyxS2bt+9S6kV+B2Ge9DBGsxjK61xPK
 u+lw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=W1t2A+3OScu1Y5T9zYIQYGjRttA9a3YJrKguzcDBeFU=;
 b=Il1lBK6bNpKCVO4rMyRLmXjZu3cJSDx5Z20nqa2zL3DCBAff+n5J6dDwg1pzoxlZGQ
 hyEEHv4RC7B1Xq9EVkGHuRoa5eO0IRnlSxkfS8sSc7TswWnM+v0gRAQREkroQDki/DPh
 po6qZhzoicypmUbE0c43cqPqHbOaw2DOBZFfJUwz+OUIIsMHDlotfYkSV+HflarrCDDa
 moYvtWUN9j8jnFM2vfeKQxwcachh9RFGbsyBcwP99QmhKbY+NTN6tMK5Ty2Les8RnOOW
 cK5MK5i52Iv6R19GdUvs59e/9eRTy6F0GTagw8kdkRVXWJlk6dsZpM/An0BFeZOgIy19
 jAUQ==
X-Gm-Message-State: AOAM533DUuS+gPw7mHgS6GCihMEZl+PnxQOHMJaWXEZFCag7Hpg3NKz3
 K8yyfBPWM7WGS2ClbnKG+du11AZFdrM=
X-Google-Smtp-Source: ABdhPJzOidNgnT9eDhmTQm11SM1BeHCgUps3t69Px0BcnShRvI8AbQzDZTILasw95GojNL4mtBL+lw==
X-Received: by 2002:a17:906:cecb:: with SMTP id
 si11mr5611275ejb.122.1590102134345; 
 Thu, 21 May 2020 16:02:14 -0700 (PDT)
Received: from mail-wr1-f41.google.com (mail-wr1-f41.google.com.
 [209.85.221.41])
 by smtp.gmail.com with ESMTPSA id c23sm6148867ejm.116.2020.05.21.16.02.13
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 21 May 2020 16:02:13 -0700 (PDT)
Received: by mail-wr1-f41.google.com with SMTP id l11so8351096wru.0
 for <xen-devel@lists.xenproject.org>; Thu, 21 May 2020 16:02:13 -0700 (PDT)
X-Received: by 2002:adf:ecc3:: with SMTP id s3mr756171wro.301.1590102133151;
 Thu, 21 May 2020 16:02:13 -0700 (PDT)
MIME-Version: 1.0
References: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
In-Reply-To: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Thu, 21 May 2020 17:01:37 -0600
X-Gmail-Original-Message-ID: <CABfawhmtKSpcb6biat5OhB15o9OKAV30pDZ2Smz_nVcg1Wvikw@mail.gmail.com>
Message-ID: <CABfawhmtKSpcb6biat5OhB15o9OKAV30pDZ2Smz_nVcg1Wvikw@mail.gmail.com>
Subject: Re: [PATCH for-4.14 1/2] x86/mem_sharing: Prohibit interrupt
 injection for forks
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
> index 000e14af49..3814795e3f 100644
> --- a/xen/arch/x86/hvm/vmx/intr.c
> +++ b/xen/arch/x86/hvm/vmx/intr.c
> @@ -256,6 +256,10 @@ void vmx_intr_assist(void)
>      if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
>          return;
>

Just noticed after sending the patch that this block needs to be wrapped in

#ifdef CONFIG_MEM_SHARING

> +    /* Block event injection for VM fork if requested */
> +    if ( unlikely(v->domain->arch.hvm.mem_sharing.prohibit_interrupts) )
> +        return;

#endif

> +
>      /* Crank the handle on interrupt state. */
>      pt_vector = pt_update_irq(v);
>

I can resend if necessary, but it's also a trivial fixup when applying,
so let me know what would be preferred. I pushed the fixed-up version
to http://xenbits.xen.org/gitweb/?p=people/tklengyel/xen.git;a=shortlog;h=refs/heads/fork_interrupts.

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Thu May 21 23:03:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 23:03:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbuDj-0005Bu-0C; Thu, 21 May 2020 23:03:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbuDi-0005Bm-9A
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 23:03:22 +0000
X-Inumbo-ID: 4359f7ea-9bb7-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4359f7ea-9bb7-11ea-9887-bc764e2007e4;
 Thu, 21 May 2020 23:03:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KAVEe/OA3XlG+yMD2Nf4GEjKXaaw/dVfBvxw1bpD3pI=; b=z6f/ste2Ex0LRne6lAYZfaMcN
 JtlyY1aWe2K7pro28R5didvX2VPGFDnDp1DhBsBW7g+58wEwjrq/Kt+QjTZUwd/yHbqKnXTlkWw9B
 A886W5iA8iSSV+gSJTTiby2+Fr/h+e2IaPUCzUEnUIbZqLmMC3Un/ooHJ/6dmfMCUOqPQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbuDf-0006s5-M5; Thu, 21 May 2020 23:03:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbuDZ-0004Oa-KX; Thu, 21 May 2020 23:03:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbuDZ-0004MB-Je; Thu, 21 May 2020 23:03:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150297-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150297: regressions - FAIL
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=956ae3e9265fd36cb1c487cb3c868e906bd55897
X-Osstest-Versions-That: qemuu=f2465433b43fb87766d79f42191607dac4aed5b4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 23:03:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150297 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150297/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-nested-intel 17 debian-hvm-install/l1/l2 fail REGR. vs. 150271

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150271
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150271
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150271
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150271
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150271
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150271
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                956ae3e9265fd36cb1c487cb3c868e906bd55897
baseline version:
 qemuu                f2465433b43fb87766d79f42191607dac4aed5b4

Last test of basis   150271  2020-05-20 06:14:37 Z    1 days
Testing same since   150297  2020-05-21 10:37:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 956ae3e9265fd36cb1c487cb3c868e906bd55897
Merge: f2465433b4 150c7a91ce
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Tue May 19 19:18:41 2020 +0100

    Merge remote-tracking branch 'remotes/rth/tags/pull-fpu-20200519' into staging
    
    Misc cleanups
    
    # gpg: Signature made Tue 19 May 2020 16:51:38 BST
    # gpg:                using RSA key 7A481E78868B4DB6A85A05C064DF38E8AF7E215F
    # gpg:                issuer "richard.henderson@linaro.org"
    # gpg: Good signature from "Richard Henderson <richard.henderson@linaro.org>" [full]
    # Primary key fingerprint: 7A48 1E78 868B 4DB6 A85A  05C0 64DF 38E8 AF7E 215F
    
    * remotes/rth/tags/pull-fpu-20200519:
      softfloat: Return bool from all classification predicates
      softfloat: Inline floatx80 compare specializations
      softfloat: Inline float128 compare specializations
      softfloat: Inline float64 compare specializations
      softfloat: Inline float32 compare specializations
      softfloat: Name compare relation enum
      softfloat: Name rounding mode enum
      softfloat: Change tininess_before_rounding to bool
      softfloat: Replace flag with bool
      softfloat: Use post test for floatN_mul
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

commit 150c7a91ce7862bcaf7422f6038dcf0ba4a7eee3
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue May 5 12:16:24 2020 -0700

    softfloat: Return bool from all classification predicates
    
    This includes *_is_any_nan, *_is_neg, *_is_inf, etc.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit c6baf65000f826a713e8d9b5b35e617b0ca9ab5d
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue May 5 10:53:15 2020 -0700

    softfloat: Inline floatx80 compare specializations
    
    Replace the floatx80 compare specializations with inline functions
    that call the standard floatx80_compare{,_quiet} functions.
    Use bool as the return type.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b7b1ac684fea49c6bfe1ad8b706aed7b09116d15
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue May 5 10:50:32 2020 -0700

    softfloat: Inline float128 compare specializations
    
    Replace the float128 compare specializations with inline functions
    that call the standard float128_compare{,_quiet} functions.
    Use bool as the return type.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 0673ecdf6cb2b1445a85283db8cbacb251c46516
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue May 5 10:40:23 2020 -0700

    softfloat: Inline float64 compare specializations
    
    Replace the float64 compare specializations with inline functions
    that call the standard float64_compare{,_quiet} functions.
    Use bool as the return type.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 5da2d2d8e53d80e92a61720ea995c86b33cbf25d
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue May 5 10:33:18 2020 -0700

    softfloat: Inline float32 compare specializations
    
    Replace the float32 compare specializations with inline functions
    that call the standard float32_compare{,_quiet} functions.
    Use bool as the return type.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 71bfd65c5fcd72f8af2735905415c7ce4220f6dc
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue May 5 10:22:05 2020 -0700

    softfloat: Name compare relation enum
    
    Give the previously unnamed enum a typedef name.  Use it in the
    prototypes of compare functions.  Use it to hold the results
    of the compare functions.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit 3dede407cc61b64997f0c30f6dbf4df09949abc9
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Tue May 5 09:01:49 2020 -0700

    softfloat: Name rounding mode enum
    
    Give the previously unnamed enum a typedef name.  Use the packed
    attribute so that we do not affect the layout of the float_status
    struct.  Use it in the prototypes of relevant functions.
    
    Adjust switch statements as necessary to avoid compiler warnings.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit a828b373bdabc7e53d1e218e3fc76f85b6674688
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Mon May 4 21:19:39 2020 -0700

    softfloat: Change tininess_before_rounding to bool
    
    Slightly tidies the usage within softfloat.c and the
    representation in float_status.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit c120391c0090d9c40425c92cdb00f38ea8588ff6
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Mon May 4 19:54:57 2020 -0700

    softfloat: Replace flag with bool
    
    We have had this on the to-do list for quite some time.
    
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>

commit b240c9c497b9880ac0ba29465907d5ebecd48083
Author: Richard Henderson <richard.henderson@linaro.org>
Date:   Mon May 4 16:57:21 2020 -0700

    softfloat: Use post test for floatN_mul
    
    The existing f{32,64}_addsub_post test, which checks for zero
    inputs, is identical to f{32,64}_mul_fast_test.  Which means
    we can eliminate the fast_test/fast_op hooks in favor of
    reusing the same post hook.
    
    This means we have one fewer test along the fast path for multiply.
    
    Tested-by: Alex Bennée <alex.bennee@linaro.org>
    Reviewed-by: Alex Bennée <alex.bennee@linaro.org>
    Signed-off-by: Richard Henderson <richard.henderson@linaro.org>


From xen-devel-bounces@lists.xenproject.org Thu May 21 23:08:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 23:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbuIj-0005Pm-Kh; Thu, 21 May 2020 23:08:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PcJi=7D=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbuIi-0005Ph-BL
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 23:08:32 +0000
X-Inumbo-ID: fc5a84a8-9bb7-11ea-ab77-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc5a84a8-9bb7-11ea-ab77-12813bfff9fa;
 Thu, 21 May 2020 23:08:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ARVyA5mw4enp7IaJ18RDZNA+6W/2szBSGgPo0rHUKA4=; b=hYV6OHkuLwmT3P7uUfZnzq3e3
 Eois7vsacj9c/3XHyWDr/sdweU4cvKD8kIDzObLKb5+ug9Qdyb50Ui9bnd+rlBpn/E0S16+LO3m7I
 nQkIsE5lsoL4Kx0XWE1H6XFp/Ntgnb+5aH/EtNqTXPj/+pica4GGrz9m3p/Zo4vjOS0Rw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbuIg-0006ys-01; Thu, 21 May 2020 23:08:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbuIf-0004bk-Kr; Thu, 21 May 2020 23:08:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbuIf-0002tu-KG; Thu, 21 May 2020 23:08:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150308-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150308: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=7e9db04923854b7f4edca33948f55abee22907b9
X-Osstest-Versions-That: seabios=b8eda131954452bb5a236100a6572fe8f27d8021
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 21 May 2020 23:08:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150308 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150308/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150229
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150229
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150229
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150229
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              7e9db04923854b7f4edca33948f55abee22907b9
baseline version:
 seabios              b8eda131954452bb5a236100a6572fe8f27d8021

Last test of basis   150229  2020-05-18 08:09:37 Z    3 days
Testing same since   150308  2020-05-21 18:10:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kevin O'Connor <kevin@koconnor.net>
  Paul Menzel <pmenzel@molgen.mpg.de>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   b8eda13..7e9db04  7e9db04923854b7f4edca33948f55abee22907b9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 21 23:49:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 21 May 2020 23:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbuvi-0000YL-T1; Thu, 21 May 2020 23:48:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t+zW=7D=amazon.com=prvs=403ae7f01=anchalag@srs-us1.protection.inumbo.net>)
 id 1jbuvh-0000YG-FE
 for xen-devel@lists.xenproject.org; Thu, 21 May 2020 23:48:49 +0000
X-Inumbo-ID: 9cdc208a-9bbd-11ea-ab78-12813bfff9fa
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9cdc208a-9bbd-11ea-ab78-12813bfff9fa;
 Thu, 21 May 2020 23:48:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1590104927; x=1621640927;
 h=date:from:to:subject:message-id:references:mime-version:
 in-reply-to; bh=bFBcoGFNtJUXMRWNnV/gDxB1SnY31ZK3eObGVk9SRlM=;
 b=bT6j0kcl3J48oHmgBvqWMp5/aQhJISBZTtAYo6Yhnhusx+gLpmugnlxn
 ikDuUsLvP7k5tWxtD1bWVep6A4yLpa60S9EdGY2Cf7SR9JCH/xg9MjAKW
 9X1lJ8vjZ63mH6MvK9adUNUcJ30WiwvIq6wuJn/hYh4DTbTWk6C6EYBdb s=;
IronPort-SDR: KEECJbF89LVvbmK/V5gpwOiRljggN8CQxES2EGUodOb5EvSvt9pIHqMEIklMg34dBV4oIlMtKE
 TxW9oTB6jSeQ==
X-IronPort-AV: E=Sophos;i="5.73,419,1583193600"; d="scan'208";a="31681872"
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2b-859fe132.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 21 May 2020 23:48:33 +0000
Received: from EX13MTAUWC001.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2b-859fe132.us-west-2.amazon.com (Postfix) with ESMTPS
 id 15A63223F24; Thu, 21 May 2020 23:48:31 +0000 (UTC)
Received: from EX13D05UWC004.ant.amazon.com (10.43.162.223) by
 EX13MTAUWC001.ant.amazon.com (10.43.162.135) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 21 May 2020 23:48:23 +0000
Received: from EX13MTAUWC001.ant.amazon.com (10.43.162.135) by
 EX13D05UWC004.ant.amazon.com (10.43.162.223) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 21 May 2020 23:48:23 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.162.232) with Microsoft SMTP
 Server id 15.0.1497.2 via Frontend Transport; Thu, 21 May 2020 23:48:22 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id 4F1A240712; Thu, 21 May 2020 23:48:23 +0000 (UTC)
Date: Thu, 21 May 2020 23:48:23 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <tglx@linutronix.de>, <mingo@redhat.com>, <bp@alien8.de>, <hpa@zytor.com>, 
 <x86@kernel.org>, <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
 <linux-pm@vger.kernel.org>, <linux-mm@kvack.org>, <kamatam@amazon.com>,
 <sstabellini@kernel.org>, <konrad.wilk@oracle.com>, <roger.pau@citrix.com>,
 <axboe@kernel.dk>, <davem@davemloft.net>, <rjw@rjwysocki.net>,
 <len.brown@intel.com>, <pavel@ucw.cz>, <peterz@infradead.org>,
 <eduval@amazon.com>, <sblbir@amazon.com>, <anchalag@amazon.com>,
 <xen-devel@lists.xenproject.org>, <vkuznets@redhat.com>,
 <netdev@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
 <dwmw@amazon.co.uk>, <benh@kernel.crashing.org>
Subject: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Message-ID: <20200521234823.GA2131@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Munehisa Kamata <kamatam@amazon.com>

S4 power transition states are quite different from xen suspend/resume. The
former is visible to the guest, so frontend drivers should be aware of the
state transitions and be able to take appropriate action when needed. When
transitioning to S4 we need to make sure that at least all the in-flight
blkif requests get completed, since they probably contain bits of the
guest's memory image and that is not going to get saved any other way.
Hence, re-issuing in-flight requests as is done for xen resume will not
work here. This is in contrast to xen suspend, where we need to freeze with
as little processing as possible to avoid dirtying RAM late in the
migration cycle, and where we know that in-flight data can wait.

Add freeze, thaw and restore callbacks for PM suspend and hibernation
support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
events need to implement these xenbus_driver callbacks. The freeze handler
stops the block-layer queue and disconnects the frontend from the backend
while freeing ring_info and associated resources. Before disconnecting from
the backend, we need to prevent any new IO from being queued and wait for
existing IO to complete. Freezing/unfreezing the queues guarantees that
there are no requests in use on the shared ring. However, for sanity we
should check the state of the ring before disconnecting, to make sure there
are no outstanding requests to be processed on it. The restore handler
re-allocates ring_info, unquiesces and unfreezes the queue, and reconnects
to the backend, so that the rest of the kernel can continue to use the
block device transparently.

Note: with older backends, if a backend does not have commit 12ea729645ace
("xen/blkback: unmap all persistent grants when frontend gets
disconnected"), the frontend may see a massive amount of grant table
warnings when freeing resources:

[   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
[   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!

In this case, persistent grants would need to be disabled.

[Anchal Changelog: Removed timeout/request during blkfront freeze.
Reworked the whole patch to work with blk-mq and incorporated upstream's
comments]

Fixes build errors reported by kbuild due to a line break.
Reported-by: kbuild test robot <lkp@intel.com>

Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
---
 drivers/block/xen-blkfront.c | 118 +++++++++++++++++++++++++++++++++--
 1 file changed, 112 insertions(+), 6 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 3b889ea950c2..34b0e51697b6 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -48,6 +48,8 @@
 #include <linux/list.h>
 #include <linux/workqueue.h>
 #include <linux/sched/mm.h>
+#include <linux/completion.h>
+#include <linux/delay.h>
 
 #include <xen/xen.h>
 #include <xen/xenbus.h>
@@ -80,6 +82,8 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_FREEZING,
+	BLKIF_STATE_FROZEN
 };
 
 struct grant {
@@ -219,6 +223,7 @@ struct blkfront_info
 	struct list_head requests;
 	struct bio_list bio_list;
 	struct list_head info_list;
+	struct completion wait_backend_disconnected;
 };
 
 static unsigned int nr_minors;
@@ -1005,6 +1010,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
 	info->sector_size = sector_size;
 	info->physical_sector_size = physical_sector_size;
 	blkif_set_queue_limits(info);
+	init_completion(&info->wait_backend_disconnected);
 
 	return 0;
 }
@@ -1057,7 +1063,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
 		case XEN_SCSI_DISK5_MAJOR:
 		case XEN_SCSI_DISK6_MAJOR:
 		case XEN_SCSI_DISK7_MAJOR:
-			*offset = (*minor / PARTS_PER_DISK) + 
+			*offset = (*minor / PARTS_PER_DISK) +
 				((major - XEN_SCSI_DISK1_MAJOR + 1) * 16) +
 				EMULATED_SD_DISK_NAME_OFFSET;
 			*minor = *minor +
@@ -1072,7 +1078,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
 		case XEN_SCSI_DISK13_MAJOR:
 		case XEN_SCSI_DISK14_MAJOR:
 		case XEN_SCSI_DISK15_MAJOR:
-			*offset = (*minor / PARTS_PER_DISK) + 
+			*offset = (*minor / PARTS_PER_DISK) +
 				((major - XEN_SCSI_DISK8_MAJOR + 8) * 16) +
 				EMULATED_SD_DISK_NAME_OFFSET;
 			*minor = *minor +
@@ -1353,6 +1359,8 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	unsigned int i;
 	struct blkfront_ring_info *rinfo;
 
+	if (info->connected == BLKIF_STATE_FREEZING)
+		goto free_rings;
 	/* Prevent new requests being issued until we fix things up. */
 	info->connected = suspend ?
 		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
@@ -1360,6 +1368,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
 	if (info->rq)
 		blk_mq_stop_hw_queues(info->rq);
 
+free_rings:
 	for_each_rinfo(info, rinfo, i)
 		blkif_free_ring(rinfo);
 
@@ -1563,8 +1572,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
 	struct blkfront_info *info = rinfo->dev_info;
 
-	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
+	if (unlikely(info->connected != BLKIF_STATE_CONNECTED &&
+		     info->connected != BLKIF_STATE_FREEZING)) {
 		return IRQ_HANDLED;
+	}
 
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
@@ -2027,6 +2038,7 @@ static int blkif_recover(struct blkfront_info *info)
 	unsigned int segs;
 	struct blkfront_ring_info *rinfo;
 
+	bool frozen = info->connected == BLKIF_STATE_FROZEN;
 	blkfront_gather_backend_features(info);
 	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
 	blkif_set_queue_limits(info);
@@ -2048,6 +2060,9 @@ static int blkif_recover(struct blkfront_info *info)
 		kick_pending_request_queues(rinfo);
 	}
 
+	if (frozen)
+		return 0;
+
 	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
 		/* Requeue pending requests (flush or discard) */
 		list_del_init(&req->queuelist);
@@ -2364,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
 
 		return;
 	case BLKIF_STATE_SUSPENDED:
+	case BLKIF_STATE_FROZEN:
 		/*
 		 * If we are recovering from suspension, we need to wait
 		 * for the backend to announce it's features before
@@ -2481,12 +2497,36 @@ static void blkback_changed(struct xenbus_device *dev,
 		break;
 
 	case XenbusStateClosed:
-		if (dev->state == XenbusStateClosed)
+		if (dev->state == XenbusStateClosed) {
+			if (info->connected == BLKIF_STATE_FREEZING) {
+				blkif_free(info, 0);
+				info->connected = BLKIF_STATE_FROZEN;
+				complete(&info->wait_backend_disconnected);
+				break;
+			}
+
+			break;
+		}
+
+		/*
+		 * We may somehow receive backend's Closed again while thawing
+		 * or restoring and it causes thawing or restoring to fail.
+		 * Ignore such unexpected state regardless of the backend state.
+		 */
+		if (info->connected == BLKIF_STATE_FROZEN) {
+			dev_dbg(&dev->dev,
+					"ignore the backend's Closed state: %s",
+					dev->nodename);
 			break;
+		}
 		/* fall through */
 	case XenbusStateClosing:
-		if (info)
-			blkfront_closing(info);
+		if (info) {
+			if (info->connected == BLKIF_STATE_FREEZING)
+				xenbus_frontend_closed(dev);
+			else
+				blkfront_closing(info);
+		}
 		break;
 	}
 }
@@ -2630,6 +2670,69 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
 	mutex_unlock(&blkfront_mutex);
 }
 
+static int blkfront_freeze(struct xenbus_device *dev)
+{
+	unsigned int i;
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	struct blkfront_ring_info *rinfo;
+	/* This would be a reasonable timeout, as used in xenbus_dev_shutdown() */
+	unsigned int timeout = 5 * HZ;
+	unsigned long flags;
+	int err = 0;
+
+	info->connected = BLKIF_STATE_FREEZING;
+
+	blk_mq_freeze_queue(info->rq);
+	blk_mq_quiesce_queue(info->rq);
+
+	for_each_rinfo(info, rinfo, i) {
+		/* No more gnttab callback work. */
+		gnttab_cancel_free_callback(&rinfo->callback);
+		/* Flush gnttab callback work. Must be done with no locks held. */
+		flush_work(&rinfo->work);
+	}
+
+	for_each_rinfo(info, rinfo, i) {
+		spin_lock_irqsave(&rinfo->ring_lock, flags);
+		if (RING_FULL(&rinfo->ring)
+			|| RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {
+			xenbus_dev_error(dev, -EBUSY, "Hibernation failed: the ring is still busy");
+			info->connected = BLKIF_STATE_CONNECTED;
+			spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+			return -EBUSY;
+		}
+		spin_unlock_irqrestore(&rinfo->ring_lock, flags);
+	}
+	/* Kick the backend to disconnect */
+	xenbus_switch_state(dev, XenbusStateClosing);
+
+	/*
+	 * We don't want to move forward before the frontend is disconnected
+	 * from the backend cleanly.
+	 */
+	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
+					      timeout);
+	if (!timeout) {
+		err = -EBUSY;
+		xenbus_dev_error(dev, err, "Freezing timed out; the device "
+				 "may be left in an inconsistent state");
+	}
+	return err;
+}
+
+static int blkfront_restore(struct xenbus_device *dev)
+{
+	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
+	int err = 0;
+
+	err = talk_to_blkback(dev, info);
+	blk_mq_unquiesce_queue(info->rq);
+	blk_mq_unfreeze_queue(info->rq);
+	if (!err)
+		blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);
+	return err;
+}
+
 static const struct block_device_operations xlvbd_block_fops =
 {
 	.owner = THIS_MODULE,
@@ -2653,6 +2756,9 @@ static struct xenbus_driver blkfront_driver = {
 	.resume = blkfront_resume,
 	.otherend_changed = blkback_changed,
 	.is_ready = blkfront_is_ready,
+	.freeze = blkfront_freeze,
+	.thaw = blkfront_restore,
+	.restore = blkfront_restore,
 };
 
 static void purge_persistent_grants(struct blkfront_info *info)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 22 00:26:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 00:26:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbvW2-0004lx-4j; Fri, 22 May 2020 00:26:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Somd=7E=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jbvW0-0004ls-To
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 00:26:20 +0000
X-Inumbo-ID: db8d919c-9bc2-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db8d919c-9bc2-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 00:26:20 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: rLk77qZTgtHcHA50Wf4zZehurxawrcI6a+T+LPFa5uROfHZGX4LZDRs4/5HRvGzPR6X96cNiIB
 ZaS5P2oikfTuTJ9kCDWjqB4c5kgU3VsowI6rqyykgDGHJDTRCsAVc3QCB7h+jcXX1/2tiVDMBl
 DdF4CtnefIWdzz6EomrC+OzstG6AHT1fNAZlhAXWfN04tzkFKt2LW+BbKNCz9/kwU6XPSJUeML
 Us1Q7/Novv8jgX9eCErCEbSpwuIBdXVTp0ksm/tzIXt4rNJ+66K7QcI4q9CFrTw27+gc3/fntI
 p0E=
X-SBRS: 2.7
X-MesageID: 18421092
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,419,1583211600"; d="scan'208";a="18421092"
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: <xen-devel@lists.xenproject.org>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <5727f0fd-38ed-91af-ee2c-0f1f6fb830f1@citrix.com>
Date: Fri, 22 May 2020 01:26:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: andrew.cooper3@citrix.com, wl@xen.org, jbeulich@suse.com,
 roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/05/2020 22:43, Igor Druzhinin wrote:
> If a recalculation NPT fault hasn't been handled explicitly in
> hvm_hap_nested_page_fault() then it's potentially safe to retry -
> US bit has been re-instated in PTE and any real fault would be correctly
> re-raised next time.
> 
> This covers a specific case of migration with vGPU assigned on AMD:
> global log-dirty is enabled and causes immediate recalculation NPT
> fault in MMIO area upon access. This type of fault isn't described
> explicitly in hvm_hap_nested_page_fault (this isn't called on
> EPT misconfig exit on Intel) which results in domain crash.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---

Alternatively, I can re-raise the fault immediately after recalculation is
done, which is less efficient (it will take one more VMEXIT) but safer IMO -
hvm_hap_nested_page_fault might leave the VM in an inconsistent state in
case of a real failure, and a second page fault could conceal it.

Another alternative is to pass a fall_through bool into hvm_hap_nested_page_fault
to tell it the expected behavior in that case and avoid guessing in the SVM
code. I think that's an improvement over the suggestion in v1 and a candidate for v2.

Igor


From xen-devel-bounces@lists.xenproject.org Fri May 22 01:44:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 01:44:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbwjA-0004Sm-TQ; Fri, 22 May 2020 01:44:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mxxc=7E=amazon.com=prvs=40441b980=sblbir@srs-us1.protection.inumbo.net>)
 id 1jbwj9-0004Sh-H9
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 01:43:59 +0000
X-Inumbo-ID: b49481bc-9bcd-11ea-b07b-bc764e2007e4
Received: from smtp-fw-2101.amazon.com (unknown [72.21.196.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b49481bc-9bcd-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 01:43:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1590111840; x=1621647840;
 h=from:to:subject:date:message-id:references:in-reply-to:
 content-id:content-transfer-encoding:mime-version;
 bh=n/q9AlKaX0CH8H7SaLZwpolAQTXmIBZYGWFCXorwbbQ=;
 b=CwtmrhpQPU7Cvs03uzOHy/+EZjA8PmkyQo/M7u8XakiFxZxOmRDiDVpN
 KqO//tjbD8rZZh4t9Zc7MSt6/RckzaMab7YvSEFpu+sahGaYZokVHk1Oz
 wh77U3kSmYFkKje2xBB8rliaRMTOCx1x/jOFHk+KT7tNk0dzS0kJtWuhR 4=;
IronPort-SDR: 10RDmI1c2uqp4SDgqcSm7g8lCY0lagez6ZlQ+gRsxTPsmKfrlLAe2ZGpTRamKe9T1KvVjxsLMt
 9gULPZ8gXQeQ==
X-IronPort-AV: E=Sophos;i="5.73,419,1583193600"; d="scan'208";a="31753098"
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1a-67b371d8.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-out-2101.iad2.amazon.com with ESMTP;
 22 May 2020 01:43:47 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1a-67b371d8.us-east-1.amazon.com (Postfix) with ESMTPS
 id 119F7A2193; Fri, 22 May 2020 01:43:39 +0000 (UTC)
Received: from EX13D10UWB004.ant.amazon.com (10.43.161.121) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 22 May 2020 01:43:39 +0000
Received: from EX13D01UWB002.ant.amazon.com (10.43.161.136) by
 EX13D10UWB004.ant.amazon.com (10.43.161.121) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 22 May 2020 01:43:39 +0000
Received: from EX13D01UWB002.ant.amazon.com ([10.43.161.136]) by
 EX13d01UWB002.ant.amazon.com ([10.43.161.136]) with mapi id 15.00.1497.006;
 Fri, 22 May 2020 01:43:39 +0000
From: "Singh, Balbir" <sblbir@amazon.com>
To: "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>,
 "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>, "Agarwal,
 Anchal" <anchalag@amazon.com>, "peterz@infradead.org" <peterz@infradead.org>, 
 "Woodhouse, David" <dwmw@amazon.co.uk>, "vkuznets@redhat.com"
 <vkuznets@redhat.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "tglx@linutronix.de" <tglx@linutronix.de>, "linux-pm@vger.kernel.org"
 <linux-pm@vger.kernel.org>, "Valentin, Eduardo" <eduval@amazon.com>,
 "linux-mm@kvack.org" <linux-mm@kvack.org>, "jgross@suse.com"
 <jgross@suse.com>, "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "axboe@kernel.dk" <axboe@kernel.dk>, "x86@kernel.org" <x86@kernel.org>,
 "roger.pau@citrix.com" <roger.pau@citrix.com>, "hpa@zytor.com"
 <hpa@zytor.com>, "rjw@rjwysocki.net" <rjw@rjwysocki.net>, "mingo@redhat.com"
 <mingo@redhat.com>, "Kamata, Munehisa" <kamatam@amazon.com>, "pavel@ucw.cz"
 <pavel@ucw.cz>, "bp@alien8.de" <bp@alien8.de>, "netdev@vger.kernel.org"
 <netdev@vger.kernel.org>, "len.brown@intel.com" <len.brown@intel.com>,
 "davem@davemloft.net" <davem@davemloft.net>, "benh@kernel.crashing.org"
 <benh@kernel.crashing.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Thread-Topic: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Thread-Index: AQHWLjUeTCtt92OWtESWVO30imQxZaizODaAgAAgMwA=
Date: Fri, 22 May 2020 01:43:38 +0000
Message-ID: <eea5ebc9adcd46b368c8d856e865a411b946f364.camel@amazon.com>
References: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
 <20200521234823.GA2131@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
In-Reply-To: <20200521234823.GA2131@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.161.175]
Content-Type: text/plain; charset="utf-8"
Content-ID: <415874684795E24488089E35061B3040@amazon.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> @@ -1057,7 +1063,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
>  		case XEN_SCSI_DISK5_MAJOR:
>  		case XEN_SCSI_DISK6_MAJOR:
>  		case XEN_SCSI_DISK7_MAJOR:
> -			*offset = (*minor / PARTS_PER_DISK) + 
> +			*offset = (*minor / PARTS_PER_DISK) +
>  				((major - XEN_SCSI_DISK1_MAJOR + 1) * 16) +
>  				EMULATED_SD_DISK_NAME_OFFSET;
>  			*minor = *minor +
> @@ -1072,7 +1078,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
>  		case XEN_SCSI_DISK13_MAJOR:
>  		case XEN_SCSI_DISK14_MAJOR:
>  		case XEN_SCSI_DISK15_MAJOR:
> -			*offset = (*minor / PARTS_PER_DISK) + 
> +			*offset = (*minor / PARTS_PER_DISK) +
>  				((major - XEN_SCSI_DISK8_MAJOR + 8) * 16) +
>  				EMULATED_SD_DISK_NAME_OFFSET;
>  			*minor = *minor +

These seem like whitespace fixes? If so, they should be in a separate patch

Balbir


From xen-devel-bounces@lists.xenproject.org Fri May 22 03:55:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 03:55:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbylo-0007k7-T9; Fri, 22 May 2020 03:54:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h2Re=7E=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbyln-0007k2-R4
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 03:54:51 +0000
X-Inumbo-ID: fbc7a7f0-9bdf-11ea-ab92-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbc7a7f0-9bdf-11ea-ab92-12813bfff9fa;
 Fri, 22 May 2020 03:54:49 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 7733F20759;
 Fri, 22 May 2020 03:54:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590119688;
 bh=IOb0RQXn20mPjLvcXrm5g1uV8k0WGkYBZ/xCUkJOpgs=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=aR+s4YbN4WY3+tkkj48KZxEgCx1VjrO1vm0r1wNMQmzemwTf89enT66INgI5pVVER
 PKJNzG8Jq6pwsmDp7EOMvvfRmjTrWjeT/QV0ZXVda0Uz4aJBL5l9u4Yl1wGvrTnwnx
 0IQQAfkPycuEiV4Dv9WiJbM49iZE8D+vPYKqCvhA=
Date: Thu, 21 May 2020 20:54:47 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 01/10] swiotlb-xen: use vmalloc_to_page on vmalloc virt
 addresses
In-Reply-To: <23e5b6d8-c5d9-b43f-41cd-9d02d8ec0a7f@xen.org>
Message-ID: <alpine.DEB.2.21.2005211235590.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-1-sstabellini@kernel.org>
 <23e5b6d8-c5d9-b43f-41cd-9d02d8ec0a7f@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Stefano Stabellini <sstabellini@kernel.org>,
 konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 21 May 2020, Julien Grall wrote:
> Hi,
> 
> On 21/05/2020 00:45, Stefano Stabellini wrote:
> > From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > 
> > Don't just assume that virt_to_page works on all virtual addresses.
> > Instead add a is_vmalloc_addr check and use vmalloc_to_page on vmalloc
> > virt addresses.
> 
> Can you provide an example where swiotlb is used with vmalloc()?

The issue was reported here happening on the Raspberry Pi 4:
https://marc.info/?l=xen-devel&m=158862573216800

If you are asking where in the Linux codebase the vmalloc is happening
specifically, I don't know for sure; my information is limited to the
stack trace that you see in the link (I don't have a Raspberry Pi 4 yet,
but I shall have one soon.)


> > Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > ---
> >   drivers/xen/swiotlb-xen.c | 5 ++++-
> >   1 file changed, 4 insertions(+), 1 deletion(-)
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index b6d27762c6f8..a42129cba36e 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
> >   	int order = get_order(size);
> >   	phys_addr_t phys;
> >   	u64 dma_mask = DMA_BIT_MASK(32);
> > +	struct page *pg;
> >     	if (hwdev && hwdev->coherent_dma_mask)
> >   		dma_mask = hwdev->coherent_dma_mask;
> > @@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
> >   	/* Convert the size to actually allocated. */
> >   	size = 1UL << (order + XEN_PAGE_SHIFT);
> >   +	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
> > +				      virt_to_page(vaddr);
> 
> Common DMA code seems to protect this check with CONFIG_DMA_REMAP. Is it
> something we want to do it here as well? Or is there any other condition where
> vmalloc can happen?

I can see it in dma_direct_free_pages:

	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
		vunmap(cpu_addr);

I wonder why the common DMA code does that. is_vmalloc_addr should work
regardless of CONFIG_DMA_REMAP. Maybe just for efficiency?


From xen-devel-bounces@lists.xenproject.org Fri May 22 03:55:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 03:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbymI-0007nG-5S; Fri, 22 May 2020 03:55:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h2Re=7E=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jbymG-0007mG-FF
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 03:55:20 +0000
X-Inumbo-ID: 0ddb59d3-9be0-11ea-ab92-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ddb59d3-9be0-11ea-ab92-12813bfff9fa;
 Fri, 22 May 2020 03:55:20 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0F8B620759;
 Fri, 22 May 2020 03:55:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590119719;
 bh=X9liTyH3K6qtc7i+pOoBtNjTMSeIvTNSkx5f9x/WbU8=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=xg72ogsw7M4B8R9hfY5qynnL5VL4GU1PiWUwtI59DbCpIADLESAPLlV2eTHnsymLA
 sI5fe9xs5/3QQuJ4+IL97JYVroylQwE6gUKikmi03+bTwUuB3BuQ2Q/HUw8mkdD8UH
 5vvjEMO2HqFyTflVHshis/W14h45AxXdtBMq+M58=
Date: Thu, 21 May 2020 20:55:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 02/10] swiotlb-xen: remove start_dma_addr
In-Reply-To: <6241b8f6-5c51-0486-55ae-d571b117a184@xen.org>
Message-ID: <alpine.DEB.2.21.2005211243060.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-2-sstabellini@kernel.org>
 <6241b8f6-5c51-0486-55ae-d571b117a184@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Stefano Stabellini <sstabellini@kernel.org>,
 konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 21 May 2020, Julien Grall wrote:
> Hi,
> 
> On 21/05/2020 00:45, Stefano Stabellini wrote:
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > 
> > It is not strictly needed. Call virt_to_phys on xen_io_tlb_start
> > instead. It will be useful not to have a start_dma_addr around with the
> > next patches.
> > 
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > ---
> >   drivers/xen/swiotlb-xen.c | 5 +----
> >   1 file changed, 1 insertion(+), 4 deletions(-)
> > 
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index a42129cba36e..b5e0492b07b9 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -52,8 +52,6 @@ static unsigned long xen_io_tlb_nslabs;
> >    * Quick lookup value of the bus address of the IOTLB.
> >    */
> >   -static u64 start_dma_addr;
> > -
> >   /*
> >    * Both of these functions should avoid XEN_PFN_PHYS because phys_addr_t
> >    * can be 32bit when dma_addr_t is 64bit leading to a loss in
> > @@ -241,7 +239,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
> >   		m_ret = XEN_SWIOTLB_EFIXUP;
> >   		goto error;
> >   	}
> > -	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
> >   	if (early) {
> >   		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
> >   			 verbose))
> > @@ -389,7 +386,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
> >   	 */
> >   	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
> >   -	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys,
> > +	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start), phys,
> 
> xen_virt_to_bus() is implemented as xen_phys_to_bus(virt_to_phys()). Can you
> explain how the two are equivalent?

They are not equivalent. Looking at what swiotlb_tbl_map_single expects,
and also the implementation of swiotlb_init_with_tbl, I think
virt_to_phys is actually the one we want.

swiotlb_tbl_map_single compares the argument with __pa(tlb) which is
__pa(xen_io_tlb_start) which is virt_to_phys(xen_io_tlb_start).


From xen-devel-bounces@lists.xenproject.org Fri May 22 03:56:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 03:56:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jbynW-0007uI-LC; Fri, 22 May 2020 03:56:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jbynV-0007u4-B0
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 03:56:37 +0000
X-Inumbo-ID: 37f5b082-9be0-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37f5b082-9be0-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 03:56:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qr7zaaCo/mrOx/0YmVzm/NgAd1gz2MRYSUpaVAiW4gc=; b=liREDUXdX+8XQiBDfCC2iKMpU
 //gLkDLq54o7C5mkg7x1eD/5dZ8s0dgrokaSTMq0sKjCHmdeagx1aO/6yscQcYFnn2GZ4Eu62U6Xy
 e7cPbfKNMVxF6GP/CYKZT4hyPOCiw/s3qzd3jyV1R6w8H9BUF68f4RGfBw81d2RiFcwiE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbynN-0006jb-Dx; Fri, 22 May 2020 03:56:29 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jbynN-0001CL-6V; Fri, 22 May 2020 03:56:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jbynN-000894-49; Fri, 22 May 2020 03:56:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150310-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150310: regressions - FAIL
X-Osstest-Failures: linux-linus:build-amd64-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=d2f8825ab78e4c18686f3e1a756a30255bb00bf3
X-Osstest-Versions-That: linux=2ea1940b84e55420a9e8feddcafd173edfe4df11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 03:56:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150310 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150310/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-pvops             6 kernel-build             fail REGR. vs. 150281

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150281
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150281
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150281
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d2f8825ab78e4c18686f3e1a756a30255bb00bf3
baseline version:
 linux                2ea1940b84e55420a9e8feddcafd173edfe4df11

Last test of basis   150281  2020-05-20 19:40:19 Z    1 days
Failing since        150296  2020-05-21 08:13:02 Z    0 days    2 attempts
Testing same since   150310  2020-05-21 21:39:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andreas Färber <afaerber@suse.de>
  Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dave Jiang <dave.jiang@intel.com>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Jason Wang <jasowang@redhat.com>
  Johannes Berg <johannes.berg@intel.com>
  Kamal Dasu <kdasu.kdev@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Rafal Hibner <rafal.hibner@secom.com.pl>
  Rafał Hibner <rafal.hibner@secom.com.pl>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Weinberger <richard@nod.at>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  YueHaibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            fail    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 494 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 22 05:16:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 05:16:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc01t-0006rL-Ky; Fri, 22 May 2020 05:15:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jc01r-0006rG-WA
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 05:15:32 +0000
X-Inumbo-ID: 3eba0462-9beb-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3eba0462-9beb-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 05:15:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lsmdK8o/uUNPnG8YGzxCBMKlokBDXVxGPVaYGVhilo0=; b=wSV6B4xsySQrOPMDQK0XvgxJs
 t560kfNMO5lbiohwcNqHDmVaVhHX7EPniea6mqQqp49tLzxtLhC39EQvK8pFcP1rqQXTfUFgFdiRK
 hwoIQHNtXWNijgmT10M9OzIutBKw2sk+AIoI8Ikvy/kh2s+WkVQiHB6aiR2HyvFf6LS/w=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc01l-0000SP-Oq; Fri, 22 May 2020 05:15:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc01l-0005Ny-GW; Fri, 22 May 2020 05:15:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jc01l-00038C-FM; Fri, 22 May 2020 05:15:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150313-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150313: all pass - PUSHED
X-Osstest-Versions-This: ovmf=1a2ad3ba9efdd0db4bf1b6c114eedb59d6c483ca
X-Osstest-Versions-That: ovmf=74f90d38c446e247469e2a775970eeed89216909
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 05:15:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150313 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150313/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1a2ad3ba9efdd0db4bf1b6c114eedb59d6c483ca
baseline version:
 ovmf                 74f90d38c446e247469e2a775970eeed89216909

Last test of basis   150300  2020-05-21 12:39:29 Z    0 days
Testing same since   150313  2020-05-21 23:10:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  James Anandraj <james.sushanth.anandraj@intel.com>
  Liu, Zhiguang <Zhiguang.Liu@intel.com>
  Maggie Chu <maggie.chu@intel.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   74f90d38c4..1a2ad3ba9e  1a2ad3ba9efdd0db4bf1b6c114eedb59d6c483ca -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 22 05:59:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 05:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc0i1-0001v4-FI; Fri, 22 May 2020 05:59:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qXeb=7E=notk.org=asmadeus@srs-us1.protection.inumbo.net>)
 id 1jc0i0-0001uz-Tf
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 05:59:04 +0000
X-Inumbo-ID: 5689fc36-9bf1-11ea-ae69-bc764e2007e4
Received: from nautica.notk.org (unknown [2001:41d0:1:7a93::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5689fc36-9bf1-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 05:59:03 +0000 (UTC)
Received: by nautica.notk.org (Postfix, from userid 1001)
 id 8C85FC009; Fri, 22 May 2020 07:59:02 +0200 (CEST)
Date: Fri, 22 May 2020 07:58:47 +0200
From: Dominique Martinet <asmadeus@codewreck.org>
To: Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] 9p/xen: increase XEN_9PFS_RING_ORDER
Message-ID: <20200522055847.GA2833@nautica>
References: <20200521193242.15953-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20200521193242.15953-1-sstabellini@kernel.org>
User-Agent: Mutt/1.5.21 (2010-09-15)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, lucho@ionkov.net, ericvh@gmail.com,
 linux-kernel@vger.kernel.org, v9fs-developer@lists.sourceforge.net,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Stefano Stabellini wrote on Thu, May 21, 2020:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> 
> Increase XEN_9PFS_RING_ORDER to 9 for performance reasons. Order 9 is
> the max allowed by the protocol.
> 
> We can't assume that all backends will support order 9. The xenstore
> property max-ring-page-order specifies the max order supported by the
> backend. We'll use max-ring-page-order for the size of the ring.
> 
> This means that the size of the ring is not static
> (XEN_FLEX_RING_SIZE(9)) anymore. Change XEN_9PFS_RING_SIZE to take an
> argument and base the calculation on the order chosen at setup time.
> 
> Finally, modify p9_xen_trans.maxsize to be divided by 4 compared to the
> original value. We need to divide it by 2 because we have two rings
> coming off the same order allocation: the in and out rings. This was a
> mistake in the original code. Also divide it further by 2 because we
> don't want a single request/reply to fill up the entire ring. There can
> be multiple requests/replies outstanding at any given time and if we use
> the full ring with one, we risk forcing the backend to wait for the
> client to read back more replies before continuing, which is not
> performant.
> 
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

LGTM, I'll try to find some time to test this by the end of next week,
or I'll trust you if I can't make it -- ping me around June 1st if I
haven't replied by then...

Cheers,
-- 
Dominique


From xen-devel-bounces@lists.xenproject.org Fri May 22 07:28:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 07:28:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc25d-0001F8-GJ; Fri, 22 May 2020 07:27:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jc25b-0001F3-Ux
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 07:27:31 +0000
X-Inumbo-ID: ae9e9466-9bfd-11ea-ab99-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae9e9466-9bfd-11ea-ab99-12813bfff9fa;
 Fri, 22 May 2020 07:27:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6DiX88hEX6h7Gt5eVIyLVBBzsG49wC95KORly2WJXK8=; b=xKZZysMlpEsQMiLAbR/hpfkRk
 VesODAlN1PNbpUFEUo89gkcta++y6mrOUNt6/4D2gfNGnm3rvfHcGxSj7asyVJb/W1rfTd1al+pKG
 5y0redt1CvM8xj9uM1Vq5c4NEmpYB/NHWOq2UT/NQBDdazOEDAtcmZa4GHKEx1Gw0d748=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc25U-0003Ed-Ea; Fri, 22 May 2020 07:27:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc25T-0003Pu-Pn; Fri, 22 May 2020 07:27:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jc25T-0005kQ-ON; Fri, 22 May 2020 07:27:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150317-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150317: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=d265171b5784657dccb5215d28a72b213553db0a
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 07:27:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150317 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150317/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d265171b5784657dccb5215d28a72b213553db0a
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  126 days
Failing since        146211  2020-01-18 04:18:52 Z  125 days  116 attempts
Testing same since   150287  2020-05-21 04:18:44 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19104 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 22 07:59:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 07:59:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc2aY-0003sr-4a; Fri, 22 May 2020 07:59:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jc2aW-0003sm-4m
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 07:59:28 +0000
X-Inumbo-ID: 27df29e0-9c02-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27df29e0-9c02-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 07:59:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Pg0SQoRYKCS+RW1GNCfgQFwx1EjMYRFV/iZDQwX8yvc=; b=0F88W1xWRJul0SS3hoBln1hCt
 1Kg3yvVDeXtDVI8XMQWnaWLt5w5cpJScnusEc4FPyynzlvxccKPTZi/dymO8t+IT/hBwfjRf4Xy0Q
 zRKzA9lBWwNQPps9JvkWgEClxV8BuowBoo0d9clWhEH8u+gjj0KRjhyNpaZ8j9WdeOBzI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc2aT-0003qm-UI; Fri, 22 May 2020 07:59:25 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc2aT-0004aV-Gf; Fri, 22 May 2020 07:59:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jc2aT-0000Bu-G6; Fri, 22 May 2020 07:59:25 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150312-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150312: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=ae3aa5da96f4ccf0c2a28851449d92db9fcfad71
X-Osstest-Versions-That: qemuu=f2465433b43fb87766d79f42191607dac4aed5b4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 07:59:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150312 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150312/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150271
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150271
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150271
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150271
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150271
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150271
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ae3aa5da96f4ccf0c2a28851449d92db9fcfad71
baseline version:
 qemuu                f2465433b43fb87766d79f42191607dac4aed5b4

Last test of basis   150271  2020-05-20 06:14:37 Z    2 days
Failing since        150297  2020-05-21 10:37:07 Z    0 days    2 attempts
Testing same since   150312  2020-05-21 23:10:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Daniel P. Berrangé <berrange@redhat.com>
  Eric Blake <eblake@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  John Snow <jsnow@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Richard Henderson <richard.henderson@linaro.org>
  Volker Rümelin <vr_qemu@t-online.de>
  xiaoqiang zhao <zxq_yx_007@163.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   f2465433b4..ae3aa5da96  ae3aa5da96f4ccf0c2a28851449d92db9fcfad71 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri May 22 08:09:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 08:09:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc2kS-0005NJ-GL; Fri, 22 May 2020 08:09:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc2kR-0005NE-6d
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 08:09:43 +0000
X-Inumbo-ID: 96ddfa96-9c03-11ea-b07b-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96ddfa96-9c03-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 08:09:42 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: YOZoPeQ1Ubz6UHRo9X+yoS1foh4kVrSzE2tRnSH2NkMy/QxYRi99DRgFD3cuznlITZAZZMFKKN
 hMKbaRTSh0QC5Alis3eT/kBxQ+UrxlPaUOHP7OvLSkR/k73GabypOt6hpgWVBzim3K6E03vosK
 zcke2Aumn3k7unHOy9PFAZVULP226y/TB1nd5zmSTMlK+LQcL7XJF4QLykmkpSlyku2euXLiCj
 apRbl7VYbJCHScub3YAZSfU8qKGrZ6XETV2Q1WCjtEtpVprpdHISL8TM2pF50MLFkAXZngHv7l
 IUg=
X-SBRS: 2.7
X-MesageID: 18521085
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,420,1583211600"; d="scan'208";a="18521085"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/idle: prevent entering C3/C6 on some Intel CPUs due to
 errata
Date: Fri, 22 May 2020 10:09:28 +0200
Message-ID: <20200522080928.87786-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Apply a workaround for errata BA80, AAK120, AAM108, AAO67, BD59,
AAY54: Rapid Core C3/C6 Transition May Cause Unpredictable System
Behavior.

Limit maximum C state to C2 when SMT is enabled on the affected CPUs.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/cpu/intel.c | 37 +++++++++++++++++++++++++++++++++++++
 1 file changed, 37 insertions(+)

diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index b77c1a78ed..69e99bb358 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -296,6 +296,41 @@ static void early_init_intel(struct cpuinfo_x86 *c)
 	ctxt_switch_levelling(NULL);
 }
 
+/*
+ * Errata BA80, AAK120, AAM108, AAO67, BD59, AAY54: Rapid Core C3/C6 Transition
+ * May Cause Unpredictable System Behavior
+ *
+ * Under a complex set of internal conditions, cores rapidly performing C3/C6
+ * transitions in a system with Intel Hyper-Threading Technology enabled may
+ * cause a machine check error (IA32_MCi_STATUS.MCACOD = 0x0106), system hang
+ * or unpredictable system behavior.
+ */
+static void probe_c3_errata(const struct cpuinfo_x86 *c)
+{
+#define INTEL_FAM6_MODEL(m) { X86_VENDOR_INTEL, 6, m, X86_FEATURE_ALWAYS }
+    static const struct x86_cpu_id models[] = {
+        /* Nehalem */
+        INTEL_FAM6_MODEL(0x1a),
+        INTEL_FAM6_MODEL(0x1e),
+        INTEL_FAM6_MODEL(0x1f),
+        INTEL_FAM6_MODEL(0x2e),
+        /* Westmere (note Westmere-EX is not affected) */
+        INTEL_FAM6_MODEL(0x2c),
+        INTEL_FAM6_MODEL(0x25),
+        { }
+    };
+#undef INTEL_FAM6_MODEL
+
+    /* Serialized by the AP bringup code. */
+    if ( max_cstate > 1 && (c->apicid & (c->x86_num_siblings - 1)) &&
+         x86_match_cpu(models) )
+    {
+        printk(XENLOG_WARNING
+               "Disabling C-states C3 and C6 due to CPU errata\n");
+        max_cstate = 1;
+    }
+}
+
 /*
  * P4 Xeon errata 037 workaround.
  * Hardware prefetcher may cause stale data to be loaded into the cache.
@@ -323,6 +358,8 @@ static void Intel_errata_workarounds(struct cpuinfo_x86 *c)
 
 	if (cpu_has_tsx_force_abort && opt_rtm_abort)
 		wrmsrl(MSR_TSX_FORCE_ABORT, TSX_FORCE_ABORT_RTM);
+
+	probe_c3_errata(c);
 }
 
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 22 08:18:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 08:18:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc2sv-0006It-FT; Fri, 22 May 2020 08:18:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=h8Ze=7E=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jc2st-0006In-HQ
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 08:18:27 +0000
X-Inumbo-ID: cf567320-9c04-11ea-ab9a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf567320-9c04-11ea-ab9a-12813bfff9fa;
 Fri, 22 May 2020 08:18:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BF3A7BE73;
 Fri, 22 May 2020 08:18:27 +0000 (UTC)
Subject: Re: [PATCH] xen/events: avoid NULL pointer dereference in
 evtchn_from_irq()
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20200319071428.12115-1-jgross@suse.com>
 <30719c35-6de7-d400-7bb8-cff4570f8971@oracle.com>
 <20200521184602.GP98582@mail-itl>
 <c36de3eb-c0ad-45e1-e08b-cb7d86d197f6@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <5839ff92-92e4-667a-8ed1-f5f9f3453299@suse.com>
Date: Fri, 22 May 2020 10:18:23 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <c36de3eb-c0ad-45e1-e08b-cb7d86d197f6@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 linux-kernel@vger.kernel.org, stable@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.05.20 23:57, Boris Ostrovsky wrote:
> On 5/21/20 2:46 PM, Marek Marczykowski-Górecki wrote:
>> On Thu, May 21, 2020 at 01:22:03PM -0400, Boris Ostrovsky wrote:
>>> On 3/19/20 3:14 AM, Juergen Gross wrote:
>>>> There have been reports of races in evtchn_from_irq() where the info
>>>> pointer has been NULL.
>>>>
>>>> Avoid that case by testing info before dereferencing it.
>>>>
>>>> In order to avoid accessing a just freed info structure do the kfree()
>>>> via kfree_rcu().
>>>
>>> Looks like no one ever responded to this.
>>>
>>>
>>> This change looks fine but is there a background on the problem? I
>>> looked in the archives and didn't find the relevant discussion.
>> Here is the original bug report:
>> https://xen.markmail.org/thread/44apwkwzeme4uavo
>>
> 
> 
> Thanks. Do we know what the race is? Is it because an event is being
> delivered from a dying guest who is in the process of tearing down event
> channels?

Missing synchronization between event channel (de-)allocation and
handling.
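
A minimal userspace sketch of the guard the patch adds (names and layout
are illustrative, not the kernel's actual evtchn code): read the info
pointer once and test it before dereferencing, so a slot that teardown
has already cleared yields an invalid-channel sentinel instead of a NULL
dereference:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for the per-IRQ info structure. */
struct irq_info {
    unsigned int evtchn;
};

/* Slot that a concurrent teardown path may set to NULL at any time. */
static struct irq_info *info_for_irq[16];

/* Guarded accessor: read the pointer once, test before dereferencing.
 * Returns 0 (no event channel) when the info has already been torn
 * down, mirroring the NULL test the patch adds to evtchn_from_irq(). */
static unsigned int evtchn_from_irq(int irq)
{
    const struct irq_info *info = info_for_irq[irq];

    return info ? info->evtchn : 0;
}
```

This only shows the NULL guard; in the kernel the structure must also
stay valid for readers that already fetched the pointer, which is why
the patch under discussion frees it via kfree_rcu() rather than kfree().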

I have a patch sitting here; I just haven't had the time to do proper
testing and send it out.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 22 08:27:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 08:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc31p-0007EF-DP; Fri, 22 May 2020 08:27:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc31n-0007EA-IG
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 08:27:39 +0000
X-Inumbo-ID: 18489e90-9c06-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 18489e90-9c06-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 08:27:38 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: kfeIehtZkrHh3ta2CPMWx7GsDMwDQeFOdAfi2v6OfaLvj7307kdp+22g3VDGdm9FRRO/16oS+P
 UeCkPNuS1wiHfdaEnVDLS4rdYDBPmRmwUEV5xhCMSWwdFG1aYi1lFOAKv9jcnUX7SDmYrIhKXP
 6LvHqUYvakrkK06WERmSWMlQYjbfpwxBNyQHnz9oQ8XMHvPL7Nzg08J6mmWWWCsYRJNwvCjWds
 fQOLrtKwTiU5mvmsN9Jsnfn6g+qEzZjov34Sw7An+/TQwKHJy7IcDQKGewZkt/H4H9xK4Tq03/
 cig=
X-SBRS: 2.7
X-MesageID: 18154315
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18154315"
Date: Fri, 22 May 2020 10:27:28 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Subject: Re: [PATCH for-4.14 1/2] x86/mem_sharing: Prohibit interrupt
 injection for forks
Message-ID: <20200522082728.GT54375@Air-de-Roger>
References: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tamas
 K Lengyel <tamas@tklengyel.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 21, 2020 at 03:53:22PM -0700, Tamas K Lengyel wrote:
> When running shallow forks without device models it may be undesirable for Xen
> to inject interrupts. With Windows forks we have observed the kernel going into
> infinite loops when trying to process such interrupts. By disabling interrupt
> injection the fuzzer can exercise the target code without interference.

Could you add some more information about why Windows goes into
infinite loops? Is it trying to access MMIO regions of emulated
devices and getting ~0 back instead of the expected data, which
causes it to loop indefinitely?

> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> ---
>  xen/arch/x86/hvm/vmx/intr.c      | 4 ++++
>  xen/arch/x86/mm/mem_sharing.c    | 6 +++++-
>  xen/include/asm-x86/hvm/domain.h | 2 ++
>  xen/include/public/memory.h      | 1 +
>  4 files changed, 12 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
> index 000e14af49..3814795e3f 100644
> --- a/xen/arch/x86/hvm/vmx/intr.c
> +++ b/xen/arch/x86/hvm/vmx/intr.c

I think you are missing the AMD side of this change. A similar
adjustment should be done to svm_intr_assist, or else the commit
message should note why this is Intel-only.

> @@ -256,6 +256,10 @@ void vmx_intr_assist(void)
>      if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
>          return;
>  
> +    /* Block event injection for VM fork if requested */
> +    if ( unlikely(v->domain->arch.hvm.mem_sharing.prohibit_interrupts) )
> +        return;
> +
>      /* Crank the handle on interrupt state. */
>      pt_vector = pt_update_irq(v);
>  
> diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
> index 7271e5c90b..7352fce866 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -2106,7 +2106,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>          rc = -EINVAL;
>          if ( mso.u.fork.pad )
>              goto out;
> -        if ( mso.u.fork.flags & ~XENMEM_FORK_WITH_IOMMU_ALLOWED )
> +        if ( mso.u.fork.flags & ~(XENMEM_FORK_WITH_IOMMU_ALLOWED |
> +                                  XENMEM_FORK_PROHIBIT_INTERRUPTS) )


Nit: I would move the XENMEM_FORK_ option ORing to a new line, so that
you don't have to use a whole line every time a new option is added.
I.e.:

        if ( mso.u.fork.flags &
             ~(XENMEM_FORK_WITH_IOMMU_ALLOWED | XENMEM_FORK_BLOCK_INTERRUPTS) )


>              goto out;
>  
>          rc = rcu_lock_live_remote_domain_by_id(mso.u.fork.parent_domain,
> @@ -2134,6 +2135,9 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
>              rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
>                                                 "lh", XENMEM_sharing_op,
>                                                 arg);
> +        else if ( !rc && (mso.u.fork.flags & XENMEM_FORK_PROHIBIT_INTERRUPTS) )
> +            d->arch.hvm.mem_sharing.prohibit_interrupts = true;
> +
>          rcu_unlock_domain(pd);
>          break;
>      }
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
> index 95fe18cddc..e114f818d3 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -74,6 +74,8 @@ struct mem_sharing_domain
>       * to resume the search.
>       */
>      unsigned long next_shared_gfn_to_relinquish;
> +
> +    bool prohibit_interrupts;

Nit: I would prefer block_interrupts, prohibit seems very formal to
me, but I'm not a native speaker, so feel free to ignore this
suggestion.

>  };
>  #endif
>  
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index dbd35305df..fe2e6caa68 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -537,6 +537,7 @@ struct xen_mem_sharing_op {
>          struct mem_sharing_op_fork {      /* OP_FORK */
>              domid_t parent_domain;        /* IN: parent's domain id */
>  #define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
> +#define XENMEM_FORK_PROHIBIT_INTERRUPTS (1u << 1)

FWIW, I would also use BLOCK here instead of PROHIBIT.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 08:29:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 08:29:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc33s-0007LH-Tw; Fri, 22 May 2020 08:29:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc33s-0007LB-A7
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 08:29:48 +0000
X-Inumbo-ID: 6466b7ef-9c06-11ea-ab9a-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6466b7ef-9c06-11ea-ab9a-12813bfff9fa;
 Fri, 22 May 2020 08:29:46 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HUwpCuqjl1gvR5QYmSxkcaTLLvGqV49ZsNtttLIaQ8j2NZBrxfqKvfDj0oSEr+zg+7Rz/U3+Ty
 3i/d8s8mzDfLYk2sDqI+O1Ehlxe42DlypJ/v4L4u5Q3C0xWA+Ur4k3MYw18WiftPbLNgC5SmMJ
 Q2N/DPcKCGowgxjHfnlHeAVCRw3S/Jj502i1vouyUKBsWiWJ0cyclqrsJ4EYvzKqmcfdrPBLtI
 1xCiYDWRlYalZsLRHB4J6NE27tquKMfzOuXfUXr0hZH5QFi+mgAgP/R7m0oqZgIzlTaceQtwmD
 qG0=
X-SBRS: 2.7
X-MesageID: 18421233
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18421233"
Date: Fri, 22 May 2020 10:29:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Subject: Re: [PATCH for-4.14 2/2] tools/libxc: xc_memshr_fork with interrupts
 disabled
Message-ID: <20200522082939.GU54375@Air-de-Roger>
References: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
 <c2830cae9affe327170c900731a7ca050ddb91ea.1590101479.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c2830cae9affe327170c900731a7ca050ddb91ea.1590101479.git.tamas.lengyel@intel.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 21, 2020 at 03:53:23PM -0700, Tamas K Lengyel wrote:
> Toolstack side for creating forks with interrupt injection disabled.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

I have the same suggestion to use block instead of prohibit, so if you
agree to change it in patch #1, it should also be changed here.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 08:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 08:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3FC-0000Za-0f; Fri, 22 May 2020 08:41:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aod8=7E=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jc3FA-0000ZV-H5
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 08:41:28 +0000
X-Inumbo-ID: 06684430-9c08-11ea-ab9a-12813bfff9fa
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.88]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06684430-9c08-11ea-ab9a-12813bfff9fa;
 Fri, 22 May 2020 08:41:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bZ8mvoVssdd5lvUpx+KPlVr59eHE7C91DlklRpTWB9k=;
 b=QTFKlWyCVPAflAEl/bdlIl+jZ2FlyE/6tqdZqegTkinejwZ3cwkeDpv5fmO0Nna7QF/l7oz32+J77mXCoFPuT4KNjA+t3SL7xDhSV0UbQicviYsgirOfTB5AaA+RHzVeL3ps839bR0DTq9ARp8exYUSEAtgs5ghZl26X+9WqT9Q=
Received: from AM6P194CA0079.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:8f::20)
 by VI1PR08MB4493.eurprd08.prod.outlook.com (2603:10a6:803:ff::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.27; Fri, 22 May
 2020 08:41:25 +0000
Received: from AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8f:cafe::26) by AM6P194CA0079.outlook.office365.com
 (2603:10a6:209:8f::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.23 via Frontend
 Transport; Fri, 22 May 2020 08:41:25 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT025.mail.protection.outlook.com (10.152.16.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 22 May 2020 08:41:24 +0000
Received: ("Tessian outbound cff7dd4de28a:v57");
 Fri, 22 May 2020 08:41:24 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 644b2724f829d406
X-CR-MTA-TID: 64aa7808
Received: from eed332ec7ed3.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D5D7C0D2-73ED-48A2-9468-439AB220F9C7.1; 
 Fri, 22 May 2020 08:41:19 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id eed332ec7ed3.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 22 May 2020 08:41:19 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hc4wifgZLMYva5e5BQg2NxUzC4SiNDv9rinzkg57EP0mG1FHpYm2mMkmwwb+X8wE8QNn2FLCyTehGbIYV75ljQTEQ0Yq2KzuJBep1YXQUyvzgXFNT2ID8IXmU8o/TrPB90TvWisOsJy98HWxuMC2kxX6HqmFI1QUIJiM2LmTN1HkCTSFQBadhU1ZQTvwL3argeq+ivbSeoNbeleB/6jpGVoMbhQCEL87VQGiJ0QqTsiklkmrMhAvDhjp2ub5L/2qvb7TVOHSKaOsLRpxGR9M08RZxDCbY8wW9Xq99DWs1GXnLk0w8Ap+wrzsbyXmGKxO6EZAO1TBwJpK7ikgk1DHlw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bZ8mvoVssdd5lvUpx+KPlVr59eHE7C91DlklRpTWB9k=;
 b=UZ9KzEqCaNuuUN5Tn1X6+7ib69UDJh7dNmVVawulAWxCDS8ko8I6GoS/TldbtZGHnxLoAeZc3/NEUCxH9QGI4BUboa5KLHhG9P0l+fOoGLqseyaV3bmNlULx2PBXO5K58DTKubDOYn96cnRlq1PHvA0JhlPIqx27KpZ1L34X7Sl0Ui4atBwc+z+itI8lV9/psj7K3RlOdBQuyDTYfsARNfSU3bLA+uP9HxyUWxShJZXDS6lNDErAUCoUsG6YHwI8QMklBv33iKLtO0+RdbXk6EAWuni8aB5xJAlI5zYS1FmF/Uigrz4v2ZCo8xGhTU6UJs0kBpvpxrbGwVJiN5EM0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bZ8mvoVssdd5lvUpx+KPlVr59eHE7C91DlklRpTWB9k=;
 b=QTFKlWyCVPAflAEl/bdlIl+jZ2FlyE/6tqdZqegTkinejwZ3cwkeDpv5fmO0Nna7QF/l7oz32+J77mXCoFPuT4KNjA+t3SL7xDhSV0UbQicviYsgirOfTB5AaA+RHzVeL3ps839bR0DTq9ARp8exYUSEAtgs5ghZl26X+9WqT9Q=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3644.eurprd08.prod.outlook.com (2603:10a6:10:4d::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 22 May
 2020 08:41:18 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3000.034; Fri, 22 May 2020
 08:41:18 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Thread-Topic: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Thread-Index: AQHWIr9BT1TmXNUg30iaUVp4td7OCaiz5AWA
Date: Fri, 22 May 2020 08:41:17 +0000
Message-ID: <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
In-Reply-To: <20200505092454.9161-3-roger.pau@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d8c84307-16ee-4f66-7550-08d7fe2be972
x-ms-traffictypediagnostic: DB7PR08MB3644:|VI1PR08MB4493:
X-Microsoft-Antispam-PRVS: <VI1PR08MB4493E898EB7237573BD8C85E9DB40@VI1PR08MB4493.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
x-forefront-prvs: 04111BAC64
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: /4cPXOI7ZvPktb691ghgCxLqyk14aA4i7feVfz2g4Vpjry3BDth12LdI3KDSX1w3m8OnNrru33WGAaNHJiVTs/fA3sFZPHCSg8yeTUrpBwCx4bEmBsUtU6VRDsZWcCLOFHtjnkBEKFuVPLqD9WoT0El/CJnpPDrpyxzQqwFkEbLrAtRAKHUAQtDm0LRUf4P69fkK3I3M8ErHxjrjS7c7jvVi+s0/XPF8tdWCkGOBr9EIHhg8TyzzAZd5+vxV0wObYs1gVXQB0GkIxR4Wwbgw2OX5OoIXLzpVHIgC2R76X3h5tCuLLyVJocjHW1uJaw1J
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(39860400002)(346002)(366004)(396003)(316002)(54906003)(186003)(6506007)(26005)(2906002)(71200400001)(4326008)(478600001)(53546011)(36756003)(6916009)(86362001)(6486002)(8936002)(8676002)(33656002)(6512007)(66556008)(66476007)(5660300002)(66446008)(76116006)(2616005)(66946007)(91956017)(64756008);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: YLvyEfmfteBG5JP2w34F+tgyCGoHhC7TrfkEzSWfZdxam0P/x2B61U5pgqEWnEhsoPYNSLavbKI9N6XrFc15UjddHWcCPf+lxh8Mek/V/rft/I2RcaygsyQ7njNymYrwPE7d6WJM+LKBgHKTaGe6qK6wASeSwNMWeLt9+LBGjb3p2JBocl7xGcz+8AfqoiOjNlTYPXcVLddbkbKPWmBLVmIs+lVFgWXpNTGDhGXPPl9XoxHXF7Spzp8av4gYhS/1H4vF5ST5W60b2cIJ8jbUc+mY6Dbx6F6XAH0o72HQzhQjcuSpmNRC9Hdn20rNHoQMz+sHI7IT7xhauAyQV+kUzdXaZMj7DDBve0tQsNYZmTgOesq+j5umsp2L4AhoPEvWNESKI8Fi1J/sUphPOkYoa6HaGvuH7dBe85w9zJZqP/kOWDZmUpJFBxF8j5tpcrddmIuh1p1UFXZaIJpkxNmYw6cMv0zm2VIOhLNP98lzEtc=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <6AC0A0E50451744D9EB9A66A23A5EB82@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3644
Original-Authentication-Results: citrix.com; dkim=none (message not signed)
 header.d=none;citrix.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(376002)(39860400002)(346002)(136003)(46966005)(36756003)(478600001)(6862004)(316002)(6486002)(36906005)(82310400002)(70586007)(6512007)(81166007)(82740400003)(54906003)(5660300002)(8936002)(6506007)(2906002)(356005)(70206006)(8676002)(186003)(26005)(53546011)(86362001)(336012)(2616005)(47076004)(33656002)(4326008);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 3e83e76d-02c6-4430-f06f-08d7fe2be549
X-Forefront-PRVS: 04111BAC64
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: YJa49NdKfZmYvX1+UPqB7i3IO9R3XdKXgcXmDr48cuCiene3TRmAZWXfbmuOHd8zOWIfB6tQ8HuiFf4+HB+yzTrkWyT0Z4nBqTLco5ql2D8lgXLIAh0drpTHQNJt43wXMuJ0v05OQKPJBqUgGBo8uYGJFhBMIIjw+S+wTNhoa1MSocf6cinZfAen9xgHhG3SBTLRLdwNcFel5VT1P9soV+WFIvl/uP1SFwRxXSbZ3pbsRlAB28CZu+VKhWUpkIXsAgMs6nC6F7vjO3NYaJVTj+RaQ/i/gdgjIvdxmy1raJ8xoYuho7EUtV0abGf5rBr3BLcxiCTTH6jo0540Mx5Xztkpoc/91gNjDVbFaQK1ydAiaXSigTm68vuBvJcOEE1NQGKYg1ak7wJa/RwZz6EA8M5vHG7YI5+7AT0s4//sbK1XSNiNj45f8C22qjmYTk4krALNN07Kib+u/OxVDIYAEh/MmSGIIU95ZPf5qBiptMgK2V5l/anRvhglIkqEVhWH9XbodWP3M0odAQz6u/GyUQ==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2020 08:41:24.9326 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d8c84307-16ee-4f66-7550-08d7fe2be972
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4493
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

As a consequence of this fix, the following has been committed (I guess
as a consequence of regenerating the configure scripts):

diff --git a/tools/configure b/tools/configure
index 375430df3f..36596389b8 100755
--- a/tools/configure
+++ b/tools/configure
@@ -4678,6 +4678,10 @@ for ldflag in $APPEND_LIB
 do
    APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
 done
+if  ! -z $EXTRA_PREFIX ; then
+    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
+    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
+fi
 CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
 LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"

This should be:

if [ ! -z $EXTRA_PREFIX ]; then

as in other configure scripts.

During configure I now get the following error:

./configure: line 4681: -z: command not found

which is ignored, but it adds -L/lib and -I/include to CPPFLAGS and
LDFLAGS.

What should be the procedure to actually fix that (as the problem is
coming from the configure script regeneration, I guess)?

Bertrand

> On 5 May 2020, at 10:24, Roger Pau Monne <roger.pau@citrix.com> wrote:
> 
> The path provided by EXTRA_PREFIX should be added to the search path
> of the configure script, like it's done in Config.mk. Not doing so
> makes the search path for configure differ from the search path used
> by the build.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Please re-run autoconf.sh after applying.
> ---
> m4/set_cflags_ldflags.m4 | 4 ++++
> 1 file changed, 4 insertions(+)
> 
> diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> index cbad3c10b0..08f5c983cc 100644
> --- a/m4/set_cflags_ldflags.m4
> +++ b/m4/set_cflags_ldflags.m4
> @@ -15,6 +15,10 @@ for ldflag in $APPEND_LIB
> do
>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> done
> +if [ ! -z $EXTRA_PREFIX ]; then
> +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> +fi
> CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
> LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"])
> 
> -- 
> 2.26.2
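
For reference, the failure mode reported for the regenerated configure
script can be reproduced in any POSIX shell (the functions below are a
hypothetical minimal repro, not Xen's configure): without brackets, the
shell looks up `-z` as a command, the lookup fails with "command not
found", and `!` inverts that failure, so the branch is entered even when
the variable is empty.

```shell
# broken_check mimics the regenerated configure line "if ! -z $VAR":
# "-z" is executed as a command, fails with "command not found", and
# "!" inverts that failure, so the branch is always taken.
broken_check() {
    if ! -z "$1" 2>/dev/null; then echo taken; else echo skipped; fi
}

# correct_check matches the m4 source, using the test builtin.
correct_check() {
    if [ ! -z "$1" ]; then echo taken; else echo skipped; fi
}
```

The brackets likely vanished because `[` and `]` are m4's quote
characters, so autoconf strips them from an insufficiently quoted m4
fragment when tools/configure is regenerated.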


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:06:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:06:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3cw-0002X4-4j; Fri, 22 May 2020 09:06:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ov9E=7E=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jc3cu-0002Wz-FH
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:06:00 +0000
X-Inumbo-ID: 7328e0c2-9c0b-11ea-ab9c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7328e0c2-9c0b-11ea-ab9c-12813bfff9fa;
 Fri, 22 May 2020 09:05:58 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jc3cq-0005m6-6W; Fri, 22 May 2020 09:05:56 +0000
Received: from 82.149.115.87.dyn.plus.net ([87.115.149.82] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.89)
 (envelope-from <wl@xen.org>)
 id 1jc3cp-0006E0-Ry; Fri, 22 May 2020 09:05:56 +0000
Date: Fri, 22 May 2020 10:05:53 +0100
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Message-ID: <20200522090553.eegs4fcltfqjuhzo@debian>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
 <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 08:41:17AM +0000, Bertrand Marquis wrote:
> Hi,
> 
> As a consequence of this fix, the following has been committed (I guess as a consequence of regenerating the configure scripts):
> diff --git a/tools/configure b/tools/configure
> index 375430df3f..36596389b8 100755
> --- a/tools/configure
> +++ b/tools/configure
> @@ -4678,6 +4678,10 @@ for ldflag in $APPEND_LIB
>  do
>      APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
>  done
> +if  ! -z $EXTRA_PREFIX ; then
> +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> +fi
>  CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
>  LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS”
> 
> This should be:
> if  [ ! -z $EXTRA_PREFIX ]; then
> 
> As on other configure scripts.
> 
> During configure I have not the following error:
> ./configure: line 4681: -z: command not found
> 
> Which is ignored but is adding -L/lib and -I/include to the CPPFLAGS and LDFLAGS
> 
> What should be the procedure to actually fix that (as the problem is coming from the configure script regeneration I guess) ? 

Does the following patch work for you?

diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
index 08f5c983cc63..cd34c139bc94 100644
--- a/m4/set_cflags_ldflags.m4
+++ b/m4/set_cflags_ldflags.m4
@@ -15,7 +15,7 @@ for ldflag in $APPEND_LIB
 do
     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
 done
-if [ ! -z $EXTRA_PREFIX ]; then
+if test ! -z $EXTRA_PREFIX ; then
     CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
     LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
 fi


You will need to run autogen.sh to regenerate tools/configure.

Wei.

> 
> Bertrand
> 
> > On 5 May 2020, at 10:24, Roger Pau Monne <roger.pau@citrix.com> wrote:
> > 
> > The path provided by EXTRA_PREFIX should be added to the search path
> > of the configure script, like it's done in Config.mk. Not doing so
> > makes the search path for configure differ from the search path used
> > by the build.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Please re-run autoconf.sh after applying.
> > ---
> > m4/set_cflags_ldflags.m4 | 4 ++++
> > 1 file changed, 4 insertions(+)
> > 
> > diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> > index cbad3c10b0..08f5c983cc 100644
> > --- a/m4/set_cflags_ldflags.m4
> > +++ b/m4/set_cflags_ldflags.m4
> > @@ -15,6 +15,10 @@ for ldflag in $APPEND_LIB
> > do
> >     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> > done
> > +if [ ! -z $EXTRA_PREFIX ]; then
> > +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> > +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> > +fi
> > CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
> > LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"])
> > 
> > -- 
> > 2.26.2
> > 
> > 
> 
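The root cause of the mangled test discussed above is m4 quoting: autoconf runs `.m4` macro bodies through m4, which treats `[` and `]` as its quote characters, so a shell `[ ! -z ... ]` test loses its brackets when `configure` is regenerated, while the `test` builtin has no brackets to lose. A minimal sketch of the fixed logic, with an assumed `EXTRA_PREFIX` value purely for illustration:

```shell
# In the .m4 source, "if [ ! -z $EXTRA_PREFIX ]" is emitted into the
# generated configure as "if  ! -z $EXTRA_PREFIX", which the shell
# tries to run as a command; "test" survives m4 processing intact.
EXTRA_PREFIX=/opt/xen   # assumed value, for illustration only
CPPFLAGS=""
LDFLAGS=""
if test ! -z "$EXTRA_PREFIX"; then
    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
fi
echo "CPPFLAGS=$CPPFLAGS"   # -> CPPFLAGS= -I/opt/xen/include
echo "LDFLAGS=$LDFLAGS"     # -> LDFLAGS= -L/opt/xen/lib
```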


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:09:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:09:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3g8-0002gK-N4; Fri, 22 May 2020 09:09:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=de5w=7E=epam.com=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jc3g7-0002gF-93
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:09:19 +0000
X-Inumbo-ID: ea548afc-9c0b-11ea-ae69-bc764e2007e4
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.57]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ea548afc-9c0b-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 09:09:18 +0000 (UTC)
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com (2603:10a6:803:72::14)
 by VI1PR03MB3134.eurprd03.prod.outlook.com (2603:10a6:802:31::29)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.27; Fri, 22 May
 2020 09:09:16 +0000
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4]) by VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4%7]) with mapi id 15.20.3021.027; Fri, 22 May 2020
 09:09:15 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Denis Kirjanov <kda@linux-powerpc.org>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4] public/io/netif.h: add a new extra type for XDP
Thread-Topic: [PATCH v4] public/io/netif.h: add a new extra type for XDP
Thread-Index: AQHWMBiqME7497+04E6+8Omw+GVY7Q==
Date: Fri, 22 May 2020 09:09:15 +0000
Message-ID: <696ee8f6-f44b-35c3-2c9f-676cb9e5ad95@epam.com>
References: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
In-Reply-To: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <FF5905AD4337CB478C258972088D1AF0@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jgross@suse.com" <jgross@suse.com>, "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/18/20 6:04 PM, Denis Kirjanov wrote:
> The patch adds a new extra type to be able to diffirentiate
> between RX responses on xen-netfront side with the adjusted offset
> required for XDP processing.
>
> The offset value from a guest is passed via xenstore.
>
> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
> ---
> v4:
> - updated the commit and documenation
>
> v3:
> - updated the commit message
>
> v2:
> - added documentation
> - fixed padding for netif_extra_info
> ---
> ---
>   xen/include/public/io/netif.h | 18 +++++++++++++++++-
>   1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> index 9fcf91a..a92bf04 100644
> --- a/xen/include/public/io/netif.h
> +++ b/xen/include/public/io/netif.h
> @@ -161,6 +161,17 @@
>    */
>  
>   /*
> + * "xdp-headroom" is used to request that extra space is added
> + * for XDP processing.  The value is measured in bytes and passed by

not sure that we should use word "bytes" here as the rest of the 
protocol (mostly)

talks about octets. It is somewhat mixed here, no strong opinion

> + * the frontend to be consistent between both ends.
> + * If the value is greater than zero that means that
> + * an RX response is going to be passed to an XDP program for processing.
> + *
> + * "feature-xdp-headroom" is set to "1" by the netback side like other features
> + * so a guest can check if an XDP program can be processed.
> + */
> +
> +/*
>    * Control ring
>    * ============
>    *
> @@ -985,7 +996,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>   #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>   #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>   #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>  
>   /* netif_extra_info_t flags. */
>   #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
> @@ -1018,6 +1030,10 @@ struct netif_extra_info {
>               uint8_t algorithm;
>               uint8_t value[4];
>           } hash;
> +        struct {
> +            uint16_t headroom;
why do you need "pad" field here?
> +            uint16_t pad[2]
> +        } xdp;
>           uint16_t pad[3];
>       } u;
>   };


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:10:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3he-0003TT-2P; Fri, 22 May 2020 09:10:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jc3hd-0003TL-6N
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:10:53 +0000
X-Inumbo-ID: 228f853e-9c0c-11ea-9887-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 228f853e-9c0c-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 09:10:52 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 2.7
X-MesageID: 18423670
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18423670"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v7 00/19] Add support for qemu-xen runnning in a
 Linux-based stubdomain
Thread-Topic: [PATCH v7 00/19] Add support for qemu-xen runnning in a
 Linux-based stubdomain
Thread-Index: AQHWLYCfnvKV6FW89kWP3GSPgwH2N6iyfZwA
Date: Fri, 22 May 2020 09:10:48 +0000
Message-ID: <4510049C-2AD1-4AE4-B0E5-F4231450EDB6@citrix.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
In-Reply-To: <20200519015503.115236-1-jandryuk@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <CE6482AAEA84DC4B9DAEA5E44713F5EE@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano
 Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, Samuel
 Thibault <samuel.thibault@ens-lyon.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On May 19, 2020, at 2:54 AM, Jason Andryuk <jandryuk@gmail.com> wrote:
> 
> General idea is to allow freely set device_model_version and
> device_model_stubdomain_override and choose the right options based on this
> choice.  Also, allow to specific path to stubdomain kernel/ramdisk, for greater
> flexibility.

Excited to see this patch series get in.  But I didn’t really notice any documents explaining how to actually use it — is there a blog post anywhere describing how to get the kernel / initrd image and so on?

Also, would it be possible to add a follow-up series which modifies SUPPORT.md and CHANGELOG.md?

Thanks,
 -George
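For readers looking for the usage documentation George asks about, a guest configuration under this series might look roughly like the sketch below; `device_model_version` and `device_model_stubdomain_override` are the existing xl options named in the cover letter, while the option names and paths for the stubdomain kernel/ramdisk are assumptions for illustration only:

```
# Hypothetical xl guest config fragment (the stubdomain image option
# names and paths are assumed, not taken from the series):
type = "hvm"
device_model_version = "qemu-xen"
device_model_stubdomain_override = 1
stubdomain_kernel = "/usr/lib/xen/boot/stubdom-kernel"     # assumed
stubdomain_ramdisk = "/usr/lib/xen/boot/stubdom-initrd"    # assumed
```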


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:14:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:14:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3l4-0003d5-In; Fri, 22 May 2020 09:14:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jc3l3-0003d0-Gx
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:14:25 +0000
X-Inumbo-ID: a0e3ffd2-9c0c-11ea-b9cf-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0e3ffd2-9c0c-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 09:14:24 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 2.7
X-MesageID: 18524478
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18524478"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH] golang/xenlight: add an empty line after DO NOT EDIT
 comment
Thread-Topic: [PATCH] golang/xenlight: add an empty line after DO NOT EDIT
 comment
Thread-Index: AQHWL3/q51M6HvGdpEqSwwkQaxvEFqiygIWA
Date: Fri, 22 May 2020 09:14:20 +0000
Message-ID: <D90B93AD-977A-468E-840E-2E2354905968@citrix.com>
References: <49cc21c24b65ef5e1ce9810397c0fcd9d43f77f4.1590072675.git.rosbrookn@ainfosec.com>
In-Reply-To: <49cc21c24b65ef5e1ce9810397c0fcd9d43f77f4.1590072675.git.rosbrookn@ainfosec.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <A82681681961B74C9EEB81D141E2E1EB@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>, Paul Durrant <paul@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

CC’ing the release manager, since we’re past the last posting date

> On May 21, 2020, at 3:55 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
> When generating documentation, pkg.go.dev and godoc.org assume a comment
> that immediately precedes the package declaration is a "package
> comment", and should be shown in the documentation. Add an empty line
> after the DO NOT EDIT comment in generated files to prevent these
> comments from appearing as "package comments."
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Reviewed-by: George Dunlap <george.dunlap@citrix.com>

Paul, I would classify this as a bug fix: It won’t have any functional effect on the code itself, but it fixes how it’s displayed; e.g.:

https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc
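The effect Nick describes can be sketched with a generated file's header (an illustration, not actual generator output): godoc and pkg.go.dev attach only the comment block that immediately precedes the `package` clause as the package comment, so the blank line keeps the generated-code marker out of the rendered docs.

```go
// Code generated by a tool. DO NOT EDIT.

// Package xenlight provides Go bindings to libxl.
// Only this comment block, which immediately precedes the package
// clause, is rendered as the package comment; the generated-code
// marker above is separated by a blank line and is therefore ignored.
package xenlight
```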


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:17:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:17:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3nk-0003m2-04; Fri, 22 May 2020 09:17:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc3nj-0003lw-22
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:17:11 +0000
X-Inumbo-ID: 03d8a944-9c0d-11ea-b9cf-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03d8a944-9c0d-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 09:17:10 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:36904
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc3ng-0002Hs-Jg (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 10:17:08 +0100
Subject: Re: [PATCH] x86/idle: prevent entering C3/C6 on some Intel CPUs due
 to errata
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20200522080928.87786-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4881423b-c925-6e59-55e6-3b36a9323ef6@citrix.com>
Date: Fri, 22 May 2020 10:17:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200522080928.87786-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 09:09, Roger Pau Monne wrote:
> Apply a workaround for errata BA80, AAK120, AAM108, AAO67, BD59,
> AAY54: Rapid Core C3/C6 Transition May Cause Unpredictable System
> Behavior.
>
> Limit maximum C state to C2 when SMT is enabled on the affected CPUs.

C1

> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

A fix for this is long overdue.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:17:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:17:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3np-0003mi-7s; Fri, 22 May 2020 09:17:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1mxZ=7E=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jc3no-0003mX-2F
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:17:16 +0000
X-Inumbo-ID: 04ecedae-9c0d-11ea-ae69-bc764e2007e4
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04ecedae-9c0d-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 09:17:12 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id e2so12109750eje.13
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 02:17:12 -0700 (PDT)
X-Received: by 2002:a17:906:1313:: with SMTP id
 w19mr7712209ejb.79.1590139031582; 
 Fri, 22 May 2020 02:17:11 -0700 (PDT)
MIME-Version: 1.0
Received: by 2002:a50:34e:0:0:0:0:0 with HTTP;
 Fri, 22 May 2020 02:17:10 -0700 (PDT)
X-Originating-IP: [5.35.46.149]
In-Reply-To: <696ee8f6-f44b-35c3-2c9f-676cb9e5ad95@epam.com>
References: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
 <696ee8f6-f44b-35c3-2c9f-676cb9e5ad95@epam.com>
From: Denis Kirjanov <kda@linux-powerpc.org>
Date: Fri, 22 May 2020 12:17:10 +0300
Message-ID: <CAOJe8K3wZCkc2d1nxYqesTm2Cnt3QjsLDPn7U5KJbC7Bc=oVKA@mail.gmail.com>
Subject: Re: [PATCH v4] public/io/netif.h: add a new extra type for XDP
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jgross@suse.com" <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/22/20, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> On 5/18/20 6:04 PM, Denis Kirjanov wrote:
>> The patch adds a new extra type to be able to differentiate
>> between RX responses on xen-netfront side with the adjusted offset
>> required for XDP processing.
>>
>> The offset value from a guest is passed via xenstore.
>>
>> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
>> ---
>> v4:
>> - updated the commit and documentation
>>
>> v3:
>> - updated the commit message
>>
>> v2:
>> - added documentation
>> - fixed padding for netif_extra_info
>> ---
>> ---
>>   xen/include/public/io/netif.h | 18 +++++++++++++++++-
>>   1 file changed, 17 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/include/public/io/netif.h
>> b/xen/include/public/io/netif.h
>> index 9fcf91a..a92bf04 100644
>> --- a/xen/include/public/io/netif.h
>> +++ b/xen/include/public/io/netif.h
>> @@ -161,6 +161,17 @@
>>    */
>>
>>   /*
>> + * "xdp-headroom" is used to request that extra space is added
>> + * for XDP processing.  The value is measured in bytes and passed by
>
> not sure that we should use word "bytes" here as the rest of the
> protocol (mostly)
>
> talks about octets. It is somewhat mixed here, no strong opinion

sure, but since the public header mixes it I've decided to use that word.


>
>> + * the frontend to be consistent between both ends.
>> + * If the value is greater than zero that means that
>> + * an RX response is going to be passed to an XDP program for
>> processing.
>> + *
>> + * "feature-xdp-headroom" is set to "1" by the netback side like other
>> features
>> + * so a guest can check if an XDP program can be processed.
>> + */
>> +
>> +/*
>>    * Control ring
>>    * ============
>>    *
>> @@ -985,7 +996,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>>   #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>>   #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>>   #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
>> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
>> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
>> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>>
>>   /* netif_extra_info_t flags. */
>>   #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
>> @@ -1018,6 +1030,10 @@ struct netif_extra_info {
>>               uint8_t algorithm;
>>               uint8_t value[4];
>>           } hash;
>> +        struct {
>> +            uint16_t headroom;
> why do you need "pad" field here?

To state that we have a fixed size available.

>> +            uint16_t pad[2];
>> +        } xdp;
>>           uint16_t pad[3];
>>       } u;
>>   };


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:22:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3so-0004kE-Rp; Fri, 22 May 2020 09:22:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1mxZ=7E=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jc3sn-0004k7-Bp
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:22:25 +0000
X-Inumbo-ID: bf00ecd6-9c0d-11ea-ae69-bc764e2007e4
Received: from mail-ej1-x644.google.com (unknown [2a00:1450:4864:20::644])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf00ecd6-9c0d-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 09:22:24 +0000 (UTC)
Received: by mail-ej1-x644.google.com with SMTP id z5so12230839ejb.3
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 02:22:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=mime-version:in-reply-to:references:from:date:message-id:subject:to
 :cc; bh=sh+C8y8QBl00UA7On7O9K6RZaiY2Jwq1gQ2haVyYZY8=;
 b=yRgMnrWZntJwvaPhNGPB1K9IEovahaQaQMWce7fkULLsgsb9FNzyQYEh/hBFhXcGsA
 XFofnap883h0kBxs6cagX0d8G78FsEPvwirkTZxOaSL9lGkcqwExDM5sHyY4iKMf6XQy
 xD0X0aWM8sy8HR7HBhjL16GlVzRrrmU5zJGRC4xVp6iYLjVeTa8+uNlP4+bOiPlkrNUw
 4NPds4d6szfh5YoFTUWPHBrzzlv+Z0ETXHkLHdHNaXtaeCPCQnkCUCW1OxswOrjelLXN
 CFmYvpJxpwOuQNBir8aru+lecDrC/DaCFGBIVNt0Z6IwKDWxVSRz4c9n5LnH2cLsSIQV
 e0eg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:in-reply-to:references:from:date
 :message-id:subject:to:cc;
 bh=sh+C8y8QBl00UA7On7O9K6RZaiY2Jwq1gQ2haVyYZY8=;
 b=L2cUUIZ+Q11e+zpwtkUuctcacTmMlGNT/ThdPOh2/Qm+s7xkBWhFO5gMbPmuu1Fwi8
 nuvldm1C2JQ5tVh7ebqGhaBcEgACP+ajS8eXIIi1Rpt9VevdWY5pCVP0zl0DNHSaBw2w
 0d4qozM24SkO8Q7dqtYJcVavABjV7lB8mTttAsAx5XYc+rSaR2+NlfGg1DL1HO3vm7Bv
 tYY5QYsM56RtlagbiRw5gZAF0gke6Iqw7ANme28ZvU3e6Tf7GlRDQnCXOE35BL6/1zqj
 jw6D8bVUo4GqEZm82r18Et+wqgcsM1cQIokg/HLdwQbslrIz+PR2WbN6FsNRs6zPyLMd
 O+9Q==
X-Gm-Message-State: AOAM5325Hgso8I5spw3d0BtnIXupqC5kHMKWfI5aGnb/+LpwmKgM6buP
 mvEREpsdHTQwJLOG+kJRnfKuMnhA3lH2dsnEwPmdvtQtALc=
X-Google-Smtp-Source: ABdhPJyxgjaTA9zbHVY2bqlDipJWD4+oH5caXRBiEBss48xW1OGpGkE3ZCUEnzSDDx+DQ72BrtimtV1wxBS+jBxSppE=
X-Received: by 2002:a17:906:81c6:: with SMTP id
 e6mr6958185ejx.241.1590139343621; 
 Fri, 22 May 2020 02:22:23 -0700 (PDT)
MIME-Version: 1.0
Received: by 2002:a50:34e:0:0:0:0:0 with HTTP;
 Fri, 22 May 2020 02:22:23 -0700 (PDT)
X-Originating-IP: [5.35.46.149]
In-Reply-To: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
References: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
From: Denis Kirjanov <kda@linux-powerpc.org>
Date: Fri, 22 May 2020 12:22:23 +0300
Message-ID: <CAOJe8K35MObqmxX=Kfah7+vMxGCezboAGMJ7FiX7hqg8LAJ_KQ@mail.gmail.com>
Subject: Re: [PATCH v4] public/io/netif.h: add a new extra type for XDP
To: xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/18/20, Denis Kirjanov <kda@linux-powerpc.org> wrote:
> The patch adds a new extra type to be able to differentiate
> between RX responses on xen-netfront side with the adjusted offset
> required for XDP processing.
>
> The offset value from a guest is passed via xenstore.

I'm going to send a new version for Linux with the above changes applied.

>
> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
> ---
> v4:
> - updated the commit and documentation
>
> v3:
> - updated the commit message
>
> v2:
> - added documentation
> - fixed padding for netif_extra_info
> ---
> ---
>  xen/include/public/io/netif.h | 18 +++++++++++++++++-
>  1 file changed, 17 insertions(+), 1 deletion(-)
>
> diff --git a/xen/include/public/io/netif.h b/xen/include/public/io/netif.h
> index 9fcf91a..a92bf04 100644
> --- a/xen/include/public/io/netif.h
> +++ b/xen/include/public/io/netif.h
> @@ -161,6 +161,17 @@
>   */
>
>  /*
> + * "xdp-headroom" is used to request that extra space is added
> + * for XDP processing.  The value is measured in bytes and passed by
> + * the frontend to be consistent between both ends.
> + * If the value is greater than zero that means that
> + * an RX response is going to be passed to an XDP program for processing.
> + *
> + * "feature-xdp-headroom" is set to "1" by the netback side like other
> features
> + * so a guest can check if an XDP program can be processed.
> + */
> +
> +/*
>   * Control ring
>   * ============
>   *
> @@ -985,7 +996,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>  #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>  #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>  #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>
>  /* netif_extra_info_t flags. */
>  #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
> @@ -1018,6 +1030,10 @@ struct netif_extra_info {
>              uint8_t algorithm;
>              uint8_t value[4];
>          } hash;
> +        struct {
> +            uint16_t headroom;
> +            uint16_t pad[2];
> +        } xdp;
>          uint16_t pad[3];
>      } u;
>  };
> --
> 1.8.3.1
>
>


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:25:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:25:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc3w5-0004uQ-En; Fri, 22 May 2020 09:25:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=de5w=7E=epam.com=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jc3w4-0004sp-4d
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:25:48 +0000
X-Inumbo-ID: 379ff538-9c0e-11ea-ab9c-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.64]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 379ff538-9c0e-11ea-ab9c-12813bfff9fa;
 Fri, 22 May 2020 09:25:47 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WlHBJayJwfAeqv+M3KTCfFgQ8HCkznBbda8IoTQIzcdqIrsIF5mT+GVuyf4iUhxPMrPDd3hbOqwqnYBkMuI/5Zq49sw1TBxO+AY1DU1NHuD/X5FlEhF3lfRm04iDC3533hlRHB2iYcHhwndHNL5IpR45vjykJMHq7xOOITDGQISNg2Ruo8WWHORibc1B5u91/sdTjWh3vk9joikilOsitwUvXDZfWbsUL+gzXGrflLdlJCoPYDDnVTLMrRah0moqTzYzHAZXEEhP9qppOgLmx+Yu5eY7dNTDmATVIgOrIyiOGNJlQMkyTTmWkf3a7p2UR85cdLC2iGQIJbdx+aPKzA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RNCa96JLCRBSccjpUgMoGQO3IFUEWUejdi8Mpr4pH6I=;
 b=m31LmrFcP+Fwq7/GBOHQ7XP98g+06N6M3WL9fgX4RAsFGCiJ+mjmQM8E6LkJurGWKveHBU/kLmnCVI6VGwd4WzSkuaPcdmjmeXXO5RUD35UKEfOFNNGhoaGd4VYpVMRFx2yTxhsoeMoj+XzXEhdnt2L57agtwPC+0UfrT8cCFFbzMsdLTinPizyp/Zl+8dRG2o4EiVORAh6wm6M7rYlYI5sQW7nd9lKiam/WMRT8g/78rF+z8UTHsit9Bm2f+3SPZkdgT6foJlPEixICuXFjxT6Cn7EwqCi7Bng1TiygbFbmnHx2h+PflpMdDin1LoQwY4QNJqpP94FXDIr4xLkJAA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=RNCa96JLCRBSccjpUgMoGQO3IFUEWUejdi8Mpr4pH6I=;
 b=BDZTnUJtgxHB56SAzMwXz7g5XoEPLPcPZmAEnTN8zMROWViNtYKx6yCKqn71M6DDTdSONLRyGMVryp1EgawTtcdIPk743BqsokdxrxZxTOjyo8T8kgJw0MgE/gYWTIilfTpSfRf1Fq/laRe0L8GjSTeo1+bjRsEfp7FBGmpf2Ew=
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com (2603:10a6:803:72::14)
 by VI1PR03MB2909.eurprd03.prod.outlook.com (2603:10a6:802:39::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.27; Fri, 22 May
 2020 09:25:45 +0000
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4]) by VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4%7]) with mapi id 15.20.3021.027; Fri, 22 May 2020
 09:25:45 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Denis Kirjanov <kda@linux-powerpc.org>
Subject: Re: [PATCH v4] public/io/netif.h: add a new extra type for XDP
Thread-Topic: [PATCH v4] public/io/netif.h: add a new extra type for XDP
Thread-Index: AQHWMBiqME7497+04E6+8Omw+GVY7aiz01kAgAACZQA=
Date: Fri, 22 May 2020 09:25:45 +0000
Message-ID: <05f78e98-79b9-b1a7-b892-67033b9cd245@epam.com>
References: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
 <696ee8f6-f44b-35c3-2c9f-676cb9e5ad95@epam.com>
 <CAOJe8K3wZCkc2d1nxYqesTm2Cnt3QjsLDPn7U5KJbC7Bc=oVKA@mail.gmail.com>
In-Reply-To: <CAOJe8K3wZCkc2d1nxYqesTm2Cnt3QjsLDPn7U5KJbC7Bc=oVKA@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: linux-powerpc.org; dkim=none (message not signed)
 header.d=none; linux-powerpc.org; dmarc=none action=none header.from=epam.com; 
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 10c0f6a7-1cec-4855-8552-08d7fe321b52
x-ms-traffictypediagnostic: VI1PR03MB2909:
x-microsoft-antispam-prvs: <VI1PR03MB2909758D40D9B6B6D382BA84E7B40@VI1PR03MB2909.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-forefront-prvs: 04111BAC64
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: B7olqOZMzfs78n0GfIxwgorAM0YARVx0yxy2f9ZgbQtzrA3w5YUia5GLZhsY69Po6LN2y142HZwKs92lNOLp/YVhp2yG+MaC7M8W4YaIzMc7cvborCMQ/IbplIo+y64i/48PV1TEeuC6otKMnvv3Iq2WNmm9WxbE2VO1RhFnU8R4N8AJPi7hP7rzyDd8D78A+ALWGhFQ9fOMpH5BTkw0gWYLp7+mkPrR5JY/s45F5m1phJpYizqQhfg60WUezL1ouIm9c2Qsp1iJY4lXPNPFOKbiGRdz7WYW9ddk71L5Q/T225l2ApHUwddGmmMM/DoBBFv6Dh8ZqeYE9BiiEvYc/tQp2ZOBznajgjcB/Xb08VeVRGEFz+mZt1ENlhwqtQqFDgoIe8Ot+e527Dv7q6gJBCdHv25gu1C+zO+uwGr3u9tF+b1h00j07VWIxTzKScd7
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR03MB3998.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(366004)(39860400002)(396003)(346002)(136003)(376002)(66476007)(66446008)(26005)(6916009)(36756003)(186003)(2906002)(6506007)(71200400001)(8936002)(53546011)(2616005)(478600001)(316002)(86362001)(66946007)(31686004)(64756008)(66556008)(54906003)(8676002)(4326008)(6486002)(76116006)(5660300002)(6512007)(31696002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: AxjN9C6qJ3niN0CRBnwelaw5HKI5dvr5hX3wvcsJHx+lUOoVulBdIcUM8LkYxLLZ8/4LJghc0JSF2OfJbdauRbkNxHNIP9RqwL1SMu9kvhnkdY2q/0TFosSEuZLein6eh4M/7nANBEUCu2/U43H4KT1OlcWPAD0rVsRtJ+jOJK7CiqHDwJnsdFADRc76L9r4BdujXiZEbJqRW2MKHFs1BRlzfwk+I1KZmWkHkuuMju1gyS1PcnRRa5mEvkNTHuiD+dnl0ONE4FxNpfrxg4iIZzoNCRUvmwdDhFUMYhet5ssTIL76umllh94nTSWxElBgGO1i4zP64tB+a4RVezuX3iAmPmjDVLkPAUUPOGH63voqeNS+Vzbd1sIVjKsMDtr/1goe+4pRTjkTSZojE3r2QxMs3RxCRJZh6l+ycJVxJDXLpNYc/FQMw2E92JgO7UydmxQThNvVT3Otp5ZtC5o49/XAZ9QnnR3nMSk1QjYkHfs=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <934A95F41034AB4E85A3A113EDC7F10F@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 10c0f6a7-1cec-4855-8552-08d7fe321b52
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 May 2020 09:25:45.5857 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: FfU4pEskUtUPub3WLn+Nl0U2hxPXOB4Ewc8curXqn93XmyQ3y5mg9eADyzXEpmrtkU+FgvZXTbqfoFWiy9vvogqBORenTBz/SllsDAt1pZomKGfVuGSE81SYmXOFsEKq
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB2909
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jgross@suse.com" <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/22/20 12:17 PM, Denis Kirjanov wrote:
> On 5/22/20, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>> On 5/18/20 6:04 PM, Denis Kirjanov wrote:
>>> The patch adds a new extra type to be able to differentiate
>>> between RX responses on xen-netfront side with the adjusted offset
>>> required for XDP processing.
>>>
>>> The offset value from a guest is passed via xenstore.
>>>
>>> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
>>> ---
>>> v4:
>>> - updated the commit and documentation
>>>
>>> v3:
>>> - updated the commit message
>>>
>>> v2:
>>> - added documentation
>>> - fixed padding for netif_extra_info
>>> ---
>>> ---
>>>    xen/include/public/io/netif.h | 18 +++++++++++++++++-
>>>    1 file changed, 17 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/include/public/io/netif.h
>>> b/xen/include/public/io/netif.h
>>> index 9fcf91a..a92bf04 100644
>>> --- a/xen/include/public/io/netif.h
>>> +++ b/xen/include/public/io/netif.h
>>> @@ -161,6 +161,17 @@
>>>     */
>>>
>>>    /*
>>> + * "xdp-headroom" is used to request that extra space is added
>>> + * for XDP processing.  The value is measured in bytes and passed by
>> not sure that we should use word "bytes" here as the rest of the
>> protocol (mostly)
>>
>> talks about octets. It is somewhat mixed here, no strong opinion
> sure, but since the public header mixes it I've decided to use that word.
>
>
>>> + * the frontend to be consistent between both ends.
>>> + * If the value is greater than zero that means that
>>> + * an RX response is going to be passed to an XDP program for
>>> processing.
>>> + *
>>> + * "feature-xdp-headroom" is set to "1" by the netback side like other
>>> features
>>> + * so a guest can check if an XDP program can be processed.
>>> + */
>>> +
>>> +/*
>>>     * Control ring
>>>     * ============
>>>     *
>>> @@ -985,7 +996,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>>>    #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>>>    #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>>>    #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
>>> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
>>> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
>>> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>>>
>>>    /* netif_extra_info_t flags. */
>>>    #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
>>> @@ -1018,6 +1030,10 @@ struct netif_extra_info {
>>>                uint8_t algorithm;
>>>                uint8_t value[4];
>>>            } hash;
>>> +        struct {
>>> +            uint16_t headroom;
>> why do you need "pad" field here?
> To state that we have a fixed size available.

Well, I would expect "reserved" or something in that case and "pad" in case

there are other fields following (see gso above).

But here I think "pad" is not required, just like mcast doesn't add any

>
>>> +            uint16_t pad[2];
>>> +        } xdp;
>>>            uint16_t pad[3];
>>>        } u;
>>>    };


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:33:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:33:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc43J-0005oq-94; Fri, 22 May 2020 09:33:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1mxZ=7E=linux-powerpc.org=kda@srs-us1.protection.inumbo.net>)
 id 1jc43I-0005ol-HF
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:33:16 +0000
X-Inumbo-ID: 43144c1a-9c0f-11ea-ae69-bc764e2007e4
Received: from mail-ej1-x643.google.com (unknown [2a00:1450:4864:20::643])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43144c1a-9c0f-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 09:33:15 +0000 (UTC)
Received: by mail-ej1-x643.google.com with SMTP id x20so12184071ejb.11
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 02:33:15 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=linux-powerpc-org.20150623.gappssmtp.com; s=20150623;
 h=mime-version:in-reply-to:references:from:date:message-id:subject:to
 :cc; bh=RH1+WXN6zq8qRr4lfcznEOYCFz+5Fo0QRgFKgbmOPTM=;
 b=jl5osZ4B8Ubh5NjrGD3o/fVYOGZl3EyMgILzYm8J0dpcJXetrJ2hRiuPjSKPlJ35rb
 rbXMMgriVIAnE92LE9wgY+HUQQdqAWc23yRkhVYoWFavogIwnrAB49ithsea2z+ZW6IA
 VVAv5WlPktygdsEkiAEj5VpirmvrkIWhdl4TuvmsQxhaaLCAIqTMuGh/sQEMnnaOTo05
 om0HtfpuzpHJQDshNdoNL9eA+NnpUuErXjpfXnMOEyRC3AQnv2dBOoIMFRr13Ic8eXUe
 JRVi780oFjv/t+qsIbFYWszmOHGyhcDtCH0btyDxm2PCQNi8kWw5iXU9UHIZcmi2N8vw
 9gvQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:in-reply-to:references:from:date
 :message-id:subject:to:cc;
 bh=RH1+WXN6zq8qRr4lfcznEOYCFz+5Fo0QRgFKgbmOPTM=;
 b=CwpfybQ8Rk73W7SnRyeNOdPb6+GEYqYYli9YaO61v7leN+/ARWEQsOhxFnlCkZrygN
 pd6uSnh6gW4lDzKs9fUYgh6lUkHmYu7GEK0HY0vJ5sDyxiGCHrL5GyUv6syNS8e57tIM
 bGTwvLf/FI6damklJ4WAOF/WSgSpHOBUfuck4sVZMH6Kj9RcLcfV22q5NqC55YwO91nc
 IutTGoXHNWMFI0saut7FW0e1Mr2JA9oCtk9t17tSJnZS3WE7z4wEs51hrmvz2/97O2T4
 +74R7ZZU8x/JgmSQrc1qyr7iUpRs5owKlDwPXcR18fwf1q4sxspMbwyplBXqRWpxPqoB
 4MSQ==
X-Gm-Message-State: AOAM532YiVipAuzYKWtNG6c6f7s3avQcU+wbAbfUUGZdUEUpeiOqPyXa
 INegNyuXD/M63/iCZQDea409qNszA6226CpUcdF7Cw==
X-Google-Smtp-Source: ABdhPJx1yzRZWqJNFKDprfJBjvkP0d18D7w8ZsQp01MpthIgHuSB6XOK9lOtSXPXqukIIhxq4KPp07/9GKS6QvXTp/I=
X-Received: by 2002:a17:906:5f98:: with SMTP id
 a24mr7360453eju.214.1590139994738; 
 Fri, 22 May 2020 02:33:14 -0700 (PDT)
MIME-Version: 1.0
Received: by 2002:a50:34e:0:0:0:0:0 with HTTP;
 Fri, 22 May 2020 02:33:14 -0700 (PDT)
X-Originating-IP: [5.35.46.149]
In-Reply-To: <05f78e98-79b9-b1a7-b892-67033b9cd245@epam.com>
References: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
 <696ee8f6-f44b-35c3-2c9f-676cb9e5ad95@epam.com>
 <CAOJe8K3wZCkc2d1nxYqesTm2Cnt3QjsLDPn7U5KJbC7Bc=oVKA@mail.gmail.com>
 <05f78e98-79b9-b1a7-b892-67033b9cd245@epam.com>
From: Denis Kirjanov <kda@linux-powerpc.org>
Date: Fri, 22 May 2020 12:33:14 +0300
Message-ID: <CAOJe8K3Zwf7SjDwdPntucgu13meGahDNMi3oaqc6g9Y7Ttoo4Q@mail.gmail.com>
Subject: Re: [PATCH v4] public/io/netif.h: add a new extra type for XDP
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jgross@suse.com" <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/22/20, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
> On 5/22/20 12:17 PM, Denis Kirjanov wrote:
>> On 5/22/20, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
>> wrote:
>>> On 5/18/20 6:04 PM, Denis Kirjanov wrote:
>>>> The patch adds a new extra type to be able to differentiate
>>>> between RX responses on xen-netfront side with the adjusted offset
>>>> required for XDP processing.
>>>>
>>>> The offset value from a guest is passed via xenstore.
>>>>
>>>> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
>>>> ---
>>>> v4:
>>>> - updated the commit and documentation
>>>>
>>>> v3:
>>>> - updated the commit message
>>>>
>>>> v2:
>>>> - added documentation
>>>> - fixed padding for netif_extra_info
>>>> ---
>>>> ---
>>>>    xen/include/public/io/netif.h | 18 +++++++++++++++++-
>>>>    1 file changed, 17 insertions(+), 1 deletion(-)
>>>>
>>>> diff --git a/xen/include/public/io/netif.h
>>>> b/xen/include/public/io/netif.h
>>>> index 9fcf91a..a92bf04 100644
>>>> --- a/xen/include/public/io/netif.h
>>>> +++ b/xen/include/public/io/netif.h
>>>> @@ -161,6 +161,17 @@
>>>>     */
>>>>
>>>>    /*
>>>> + * "xdp-headroom" is used to request that extra space is added
>>>> + * for XDP processing.  The value is measured in bytes and passed by
>>> not sure that we should use word "bytes" here as the rest of the
>>> protocol (mostly)
>>>
>>> talks about octets. It is somewhat mixed here, no strong opinion
>> sure, but since the public header mixes it I've decided to use that word.
>>
>>
>>>> + * the frontend to be consistent between both ends.
>>>> + * If the value is greater than zero that means that
>>>> + * an RX response is going to be passed to an XDP program for
>>>> processing.
>>>> + *
>>>> + * "feature-xdp-headroom" is set to "1" by the netback side like other
>>>> features
>>>> + * so a guest can check if an XDP program can be processed.
>>>> + */
>>>> +
>>>> +/*
>>>>     * Control ring
>>>>     * ============
>>>>     *
>>>> @@ -985,7 +996,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>>>>    #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>>>>    #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>>>>    #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
>>>> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
>>>> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
>>>> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>>>>
>>>>    /* netif_extra_info_t flags. */
>>>>    #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
>>>> @@ -1018,6 +1030,10 @@ struct netif_extra_info {
>>>>                uint8_t algorithm;
>>>>                uint8_t value[4];
>>>>            } hash;
>>>> +        struct {
>>>> +            uint16_t headroom;
>>> why do you need "pad" field here?
>> To state that we have a fixed size available.
>
> Well, I would expect "reserved" or something in that case and "pad" in case
>
> there are other fields following (see gso above).

it can be consistent with other names, like the pad at the end of the structure.

If it really matters I can change it, no problem.

>
> But here I think "pad" is not required, just like mcast doesn't add any
because it's already 6 bytes long

>
>>
>>>> +            uint16_t pad[2];
>>>> +        } xdp;
>>>>            uint16_t pad[3];
>>>>        } u;
>>>>    };


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:38:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc47w-0005ym-Sz; Fri, 22 May 2020 09:38:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aod8=7E=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jc47v-0005yh-4J
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:38:03 +0000
X-Inumbo-ID: edc5b072-9c0f-11ea-ae69-bc764e2007e4
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.77]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id edc5b072-9c0f-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 09:38:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bzPfIXO3K/ScES5DMso5xkC2MD69PRxoFgTXLOdu9s4=;
 b=KhCCwgTngd2w27Y9jdnM8ksIiRseirOgTDXOcKbIQw3hnrSgdDnv9Ic7Bp0mpnf58kkVCz1BJYTWz6WxUzaYN3wnNcU4MN/xeYLueGwmG3Nz2kUr2TQJdKVR5tv4zxx+PpkwjWeO3sY+qfKEVq7GdJQldjnUTJyo+Ll/ZBKHDKc=
Received: from AM6P192CA0087.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:8d::28)
 by DB7PR08MB3177.eurprd08.prod.outlook.com (2603:10a6:5:26::25) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.31; Fri, 22 May
 2020 09:37:59 +0000
Received: from VE1EUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8d:cafe::6f) by AM6P192CA0087.outlook.office365.com
 (2603:10a6:209:8d::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.25 via Frontend
 Transport; Fri, 22 May 2020 09:37:59 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT045.mail.protection.outlook.com (10.152.19.51) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 22 May 2020 09:37:59 +0000
Received: ("Tessian outbound 9eabd37e4fee:v57");
 Fri, 22 May 2020 09:37:59 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 94b1b538a5c58e36
X-CR-MTA-TID: 64aa7808
Received: from 73cced15552f.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8455706F-7F8A-42C8-BEBF-6AC7D0768FA5.1; 
 Fri, 22 May 2020 09:37:53 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 73cced15552f.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 22 May 2020 09:37:53 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dn+QhS4rNJ9hi4FEcBjRGy3XCYgJnQX+gJaEJK93/zWxSmuN7cncZ78AMkepXrO1zqFvOFtFGlSV+0JB1xf+8oiYZ7U4P9SvKzYu8w9mEipCtTsrxtrX0w54wAVV1E5ftxUBE11bC9msrACt11j3GKXxQcWFFqmm3i/7j15Kfro2T2O7wrPelOMl8e7KuvptxKGCMcScUF2Qm5RuauCfMb6O9el74/r+55U6Ek/4NCVX21IsDIIus6pjVtaHBdOB3jENX29b/XKtwkWwNtU/lthQJBGpQqEzCqttAfubyn8DxTIC6aiaWONnPkQF1QYmOmt8HwUyzp29KUxRs7N39Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bzPfIXO3K/ScES5DMso5xkC2MD69PRxoFgTXLOdu9s4=;
 b=IyhSeZRMM3D9q1dbdIFZGlRy3zJvE6Rb584bcsQ+A/gquNaYdqHLlzpctda2nL5bWQlLlQgNYgjLNsguvX0hdvMxw00gv9M9B0nJ/V0jBIAotCzA09V0KblhPpiiEI/ZbycVItXSQxxmWnyo3NGmafrycQY7/BPaya2+9/99OtZkSD0PdtqB+acKEjgTUZsWvOttBlxGn2wxMktRo17ffufE6djyrArpVmqec9BGWWfD2rH1W64AbivPxXfSGhkgWtVgnu0sOXRPQdI2mbr57YElcHs/KaLy7z1qxi/bzAFUjr82Uo6Z8bwWHSNxrgb0PH4LDFIokknSyLaSGAVMrg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bzPfIXO3K/ScES5DMso5xkC2MD69PRxoFgTXLOdu9s4=;
 b=KhCCwgTngd2w27Y9jdnM8ksIiRseirOgTDXOcKbIQw3hnrSgdDnv9Ic7Bp0mpnf58kkVCz1BJYTWz6WxUzaYN3wnNcU4MN/xeYLueGwmG3Nz2kUr2TQJdKVR5tv4zxx+PpkwjWeO3sY+qfKEVq7GdJQldjnUTJyo+Ll/ZBKHDKc=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3819.eurprd08.prod.outlook.com (2603:10a6:10:30::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.24; Fri, 22 May
 2020 09:37:52 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3000.034; Fri, 22 May 2020
 09:37:52 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Thread-Topic: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Thread-Index: AQHWIr9BT1TmXNUg30iaUVp4td7OCaiz5AWAgAAG4ICAAAjugA==
Date: Fri, 22 May 2020 09:37:51 +0000
Message-ID: <9FBA46AA-9727-4AA9-A23D-B72F5AE9C35C@arm.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
 <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
 <20200522090553.eegs4fcltfqjuhzo@debian>
In-Reply-To: <20200522090553.eegs4fcltfqjuhzo@debian>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: dbeb545b-0543-4d57-3e5d-08d7fe33d08b
x-ms-traffictypediagnostic: DB7PR08MB3819:|DB7PR08MB3177:
X-Microsoft-Antispam-PRVS: <DB7PR08MB31773BAE32F77B6EFC75B0FE9DB40@DB7PR08MB3177.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
x-forefront-prvs: 04111BAC64
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: tq9x3672QMwMpmlO2jg4sDXySpjzUws8EY5AzlhFb0MAjBssB4CU8A7h1hu2AiQItBBk9Rxaj+Vxv+uMVFVeGprHRwVxLXylXH9o73RWbXtzzvWNhDDM22esB2E1hnffBPR26fj/2IGt/ZKEwCL6vZxfvzXa3SBEityt6LkmC0EKDFXYtbS1id/YVU7IYVsSsUciCcRc2IjwS2E67UuXkwIKPyg39+LpMa6L1B7oezJjTDXZDcBf/sdiQpVO2Q0tddQMMnHBH0ZSSJZtsFBnbpo1wdWH1MCcOtt5TbOy8Vbc77xr5fuZPVT7Lt43ZmB+
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(366004)(39860400002)(396003)(136003)(376002)(186003)(8676002)(6512007)(6916009)(2906002)(26005)(316002)(6486002)(6506007)(54906003)(8936002)(36756003)(66556008)(66446008)(478600001)(4326008)(53546011)(71200400001)(66476007)(5660300002)(86362001)(64756008)(91956017)(2616005)(76116006)(66946007)(33656002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: mvSk+VAZGCv2p5gS1GBmuVXL+PM6Fbhm33A9jOP3oW1Nc+/fjXewUPWURU+0zG02hkn1xzWKZLarXjQ9PK2+IQ0xsd/QkGBTy+sIiF1sPUhE2rSQAI9ouaE8DrdL6fBW2T4Suygc0z0ofDsvMUHBzUnpdG4JiXaccH5kFcCI1zSk4uzFbVJ4eTNsN5aSFfOyY8BiVpz4atH5Zxhn2bqRogzVBZyFu/8zz7W3u3E1xz97WUWiD50wAEIiYrY0t4ZIpxk3bPHZQFwncB7eQQIJnF7d95Gah9a6Zl+D5O5VnFHPp+RvpsOQHiWZGj6hDqaPGP1ogkTZqsJeJ6tlthJQfz+z9jbTuxv4sv6TCTlhY/ph1gY5FPIEktDTMRzO7cszG3MmGhyVGz8sQgj/q6cMdIgpIU9docOIRAtqN5DrKg2uoB7P0VNWQk/y/tq4XQewlX+2Lg/v+Rex5Z/xQcZoFvQaDvb+n6To+WQ6fsrzXcc=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F0DA0D9308217C4D889CAE828EBF436A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3819
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT045.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(346002)(396003)(39860400002)(46966005)(316002)(33656002)(36756003)(82310400002)(54906003)(70206006)(356005)(47076004)(70586007)(336012)(4326008)(6862004)(8676002)(8936002)(53546011)(82740400003)(81166007)(26005)(5660300002)(36906005)(2906002)(86362001)(6486002)(6506007)(6512007)(2616005)(478600001)(186003);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: a244ce4c-4805-4fbd-5cee-08d7fe33cc45
X-Forefront-PRVS: 04111BAC64
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ywgG8JuQdKO/ne5c0G9s2dbFINTyNW8/9JRLj6z9C8PXu9M3Rshui8NvY41Md9VIp+50OepNIaW7F4n423WVQ5U9djqlNx/gV+jGuEKtupv0IXKULbcB77tNpe2FRADhiG3qqNRFRnWQBsSl69aYPFEWN2Gti4ZdkecHALnwQcgniN2jXDaoxZT9YfPqbvhCW3luV59RS9Qfh+69FSQzqUGzp6dxkBzPQtW69ORKw0CGCVPQ7wI6A2vwHU3KYUVtAyNTkJDux/iiSLPX8i6kXHb46RByosIwbfIO8srWeBQ8IDPuKSwx64tIfWj1XL0mEivj3MtZeeR6lvTdpT/vtXsUGEWQgCpLG9eKZjsDx2QsBTgOibH3Yu57oN/hrU3Xv+0UFxzsida+F+3lDtRe4ne37tHTGP0etM1qKknEjmTk6/bulfzVR/t4qXuUwnnYlI1Q1HAkMidqXEkv6QStO2AvRa21nvvUHO5zykkwd4HL9ANNRDqqT+z6E77X/sHW9mI3eBXhZGIcqKhBDYHAPg==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 22 May 2020 09:37:59.0728 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dbeb545b-0543-4d57-3e5d-08d7fe33d08b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3177
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

> On 22 May 2020, at 10:05, Wei Liu <wl@xen.org> wrote:
> 
> On Fri, May 22, 2020 at 08:41:17AM +0000, Bertrand Marquis wrote:
>> Hi,
>> 
>> As a consequence of this fix, the following has been committed (I guess as a consequence of regenerating the configure scripts):
>> diff --git a/tools/configure b/tools/configure
>> index 375430df3f..36596389b8 100755
>> --- a/tools/configure
>> +++ b/tools/configure
>> @@ -4678,6 +4678,10 @@ for ldflag in $APPEND_LIB
>> do
>>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
>> done
>> +if  ! -z $EXTRA_PREFIX ; then
>> +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
>> +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
>> +fi
>> CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
>> LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS”
>> 
>> This should be:
>> if  [ ! -z $EXTRA_PREFIX ]; then
>> 
>> As on other configure scripts.
>> 
>> During configure I have not the following error:
>> ./configure: line 4681: -z: command not found
>> 
>> Which is ignored but is adding -L/lib and -I/include to the CPPFLAGS and LDFLAGS
>> 
>> What should be the procedure to actually fix that (as the problem is coming from the configure script regeneration I guess) ? 
> 
> Does the following patch work for you?
> 
> diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> index 08f5c983cc63..cd34c139bc94 100644
> --- a/m4/set_cflags_ldflags.m4
> +++ b/m4/set_cflags_ldflags.m4
> @@ -15,7 +15,7 @@ for ldflag in $APPEND_LIB
> do
>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> done
> -if [ ! -z $EXTRA_PREFIX ]; then
> +if test ! -z $EXTRA_PREFIX ; then
>     CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
>     LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> fi
> 
> 
> You will need to run autogen.sh to regenerate tools/configure.
> 

Yes that works on my side and generate tools/configure using “test”

But why are the [] being removed when generating tools/configure ?

Bertrand

> Wei.
> 
>> 
>> Bertrand
>> 
>>> On 5 May 2020, at 10:24, Roger Pau Monne <roger.pau@citrix.com> wrote:
>>> 
>>> The path provided by EXTRA_PREFIX should be added to the search path
>>> of the configure script, like it's done in Config.mk. Not doing so
>>> makes the search path for configure differ from the search path used
>>> by the build.
>>> 
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>> Please re-run autoconf.sh after applying.
>>> ---
>>> m4/set_cflags_ldflags.m4 | 4 ++++
>>> 1 file changed, 4 insertions(+)
>>> 
>>> diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
>>> index cbad3c10b0..08f5c983cc 100644
>>> --- a/m4/set_cflags_ldflags.m4
>>> +++ b/m4/set_cflags_ldflags.m4
>>> @@ -15,6 +15,10 @@ for ldflag in $APPEND_LIB
>>> do
>>>    APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
>>> done
>>> +if [ ! -z $EXTRA_PREFIX ]; then
>>> +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
>>> +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
>>> +fi
>>> CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
>>> LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"])
>>> 
>>> -- 
>>> 2.26.2
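[Editor's note: the bracket stripping Bertrand asks about happens because autoconf runs the .m4 input through m4 with `[` and `]` configured as quote characters, so one level of brackets is removed when tools/configure is regenerated; `test` contains no quote characters and survives unchanged. A minimal standalone sketch of the resulting failure mode (hypothetical script, not part of the patch):]

```shell
#!/bin/sh
# After m4 strips the brackets, "if [ ! -z $EXTRA_PREFIX ]; then"
# becomes "if  ! -z $EXTRA_PREFIX ; then" in the generated configure.
EXTRA_PREFIX=""

# "-z" alone is not a command, so it fails with "command not found",
# and the leading ! inverts that failure: the branch is taken even
# though EXTRA_PREFIX is empty (matching the -I/include -L/lib report).
if ! -z $EXTRA_PREFIX 2>/dev/null; then
    taken=yes
else
    taken=no
fi
echo "broken branch taken: $taken"

# Spelling the condition with `test` avoids m4's quote characters
# entirely and behaves as intended:
if test -n "$EXTRA_PREFIX"; then
    echo "EXTRA_PREFIX set"
else
    echo "EXTRA_PREFIX empty"
fi
```

(Quadrigraphs `@<:@`/`@:>@` are the other conventional way to get literal brackets through m4.)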


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:44:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4Dt-0006sg-Lf; Fri, 22 May 2020 09:44:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RPHd=7E=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jc4Ds-0006sa-GW
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:44:12 +0000
X-Inumbo-ID: ca42b8f6-9c10-11ea-ae69-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca42b8f6-9c10-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 09:44:12 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id f5so952114wmh.2
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 02:44:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=HlFVZYh5Hqq/NcSIeyeKR6lcWbBcZlAZWkWZy73uudo=;
 b=IqjZtBPLKjZ9qWS4u1si/Sxr//dDLqoKplPOKWFMOtZviYSjnsSNK9Bb/i8AbarPhK
 nMGkcA91U0Zbsr4RGdBTd59ml/O4B08NW9R85TepHoVMGuZOZFlXZZEvZ38WUlfgHROm
 PFNtmOWH2yv0Jdo1h02AcJTjJ1G4t+4tWGdxwsCZj2Zdc8RIOSfvyaV95tSsr+psJua8
 le5vDEhc7bolR9jKozSekBOGOlBFhi4yXaReAF1jEfCwFxfn2OuKzAW/URUQ4/mtgv+0
 0+V0Ow4QCtQZP7kFOMTZx3BdZtjEgzXIJu6hKITqp49t4Sredav/x+ebH3IPX+aApcLL
 wE9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=HlFVZYh5Hqq/NcSIeyeKR6lcWbBcZlAZWkWZy73uudo=;
 b=LB9BaYuiHlHIW2gIO/QKXydSrC7Rvag1+72PsXrItNyGf4Sy9sKD/+gwC4QGNWlqz7
 RlO/Fhp2tZtbYwf3buYJf/wKucOzmSNAs4asHMeZNMM4I9mvOLIXRzch2Pygi0WkCIDB
 woXgo5j2Zu0CpFc89eJtK8QqY9qYLoGMLMFig4DWJY31c1tv/PUs6aXmPor6kFQ7HSsL
 fsgp7EDj4CNUxNgmQA4pQ+sBJq6vIvTUDKZow0GqEBlLZ90Q0/GvnXfl3qHr++UifEMg
 FPEKGcCXwce2L7YEqddS1JMvsgbweqZJADc76bLXpRZRFTOz5VmJEk6q7J7qgviMj0m0
 vgig==
X-Gm-Message-State: AOAM533WUXpePdReVPF+t1t39bGhC9hXmsehdLEwtVM9+Rd7HoJHD8dy
 nxicq0LNu244E57tO1HSG5U=
X-Google-Smtp-Source: ABdhPJwNt5X1nPCncv2PWpGLlfk5nvCfgV8zHpWsObLiRojTmpfCOJ0cnNsR2x9jy9u1udVrapNbyA==
X-Received: by 2002:a1c:6142:: with SMTP id v63mr12075838wmb.61.1590140651124; 
 Fri, 22 May 2020 02:44:11 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id d9sm9076260wmd.10.2020.05.22.02.44.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 22 May 2020 02:44:10 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'George Dunlap'" <George.Dunlap@citrix.com>,
 "'Nick Rosbrook'" <rosbrookn@gmail.com>
References: <49cc21c24b65ef5e1ce9810397c0fcd9d43f77f4.1590072675.git.rosbrookn@ainfosec.com>
 <D90B93AD-977A-468E-840E-2E2354905968@citrix.com>
In-Reply-To: <D90B93AD-977A-468E-840E-2E2354905968@citrix.com>
Subject: RE: [PATCH] golang/xenlight: add an empty line after DO NOT EDIT
 comment
Date: Fri, 22 May 2020 10:44:09 +0100
Message-ID: <001201d6301d$8b6364a0$a22a2de0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKRHSeOjHTtxvPh0rbQyb/gF3wxUQKaoV/+pykrGdA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Nick Rosbrook' <rosbrookn@ainfosec.com>,
 'xen-devel' <xen-devel@lists.xenproject.org>, 'Wei Liu' <wl@xen.org>,
 'Ian Jackson' <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: George Dunlap <George.Dunlap@citrix.com>
> Sent: 22 May 2020 10:14
> To: Nick Rosbrook <rosbrookn@gmail.com>
> Cc: xen-devel <xen-devel@lists.xenproject.org>; Nick Rosbrook <rosbrookn@ainfosec.com>; Ian Jackson
> <Ian.Jackson@citrix.com>; Wei Liu <wl@xen.org>; Paul Durrant <paul@xen.org>
> Subject: Re: [PATCH] golang/xenlight: add an empty line after DO NOT EDIT comment
> 
> CC’ing the release manager, since we’re past the last posting date
> 
> > On May 21, 2020, at 3:55 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> >
> > When generating documentation, pkg.go.dev and godoc.org assume a comment
> > that immediately precedes the package declaration is a "package
> > comment", and should be shown in the documentation. Add an empty line
> > after the DO NOT EDIT comment in generated files to prevent these
> > comments from appearing as "package comments."
> >
> > Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> 
> Reviewed-by: George Dunlap <george.dunlap@citrix.com>
> 
> Paul, I would classify this as a bug fix: It won’t have any functional effect on the code itself, but
> it fixes how it’s displayed; e.g.:
> 
> https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc
> 

Since it is apparently a pure whitespace change I have no problem with this going in. We're not at freeze yet so technically you don't need my release-ack as yet :-)

  Paul
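[Editor's note: the behavior the patch relies on can be checked directly with Go's standard go/parser: a comment group ending on the line immediately before the package clause is attached as the file's doc comment, while one separated by a blank line is not. A minimal sketch, not part of the patch:]

```go
package main

import (
	"fmt"
	"go/parser"
	"go/token"
)

// docComment parses src and returns the text of the package doc
// comment, or "" if the parser attached none.
func docComment(src string) string {
	f, err := parser.ParseFile(token.NewFileSet(), "x.go", src, parser.ParseComments)
	if err != nil {
		panic(err)
	}
	if f.Doc == nil {
		return ""
	}
	return f.Doc.Text()
}

func main() {
	// Comment directly above the package clause: becomes the package doc.
	attached := "// Code generated by gengotypes.py. DO NOT EDIT.\npackage xenlight\n"
	// Blank line in between: the comment is detached from the package doc.
	detached := "// Code generated by gengotypes.py. DO NOT EDIT.\n\npackage xenlight\n"

	fmt.Printf("attached doc: %q\n", docComment(attached)) // non-empty
	fmt.Printf("detached doc: %q\n", docComment(detached)) // ""
}
```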



From xen-devel-bounces@lists.xenproject.org Fri May 22 09:45:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:45:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4FT-0006zH-1V; Fri, 22 May 2020 09:45:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc4FS-0006yb-8v
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:45:50 +0000
X-Inumbo-ID: 04915530-9c11-11ea-aba8-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04915530-9c11-11ea-aba8-12813bfff9fa;
 Fri, 22 May 2020 09:45:49 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:37674
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc4FO-000NKL-M2 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 10:45:46 +0100
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: Igor Druzhinin <igor.druzhinin@citrix.com>, xen-devel@lists.xenproject.org
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <dae35bcf-5b85-a760-9d15-139973215334@citrix.com>
Date: Fri, 22 May 2020 10:45:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: wl@xen.org, jbeulich@suse.com, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21/05/2020 22:43, Igor Druzhinin wrote:
> If a recalculation NPT fault hasn't been handled explicitly in
> hvm_hap_nested_page_fault() then it's potentially safe to retry -
> US bit has been re-instated in PTE and any real fault would be correctly
> re-raised next time.
>
> This covers a specific case of migration with vGPU assigned on AMD:
> global log-dirty is enabled and causes immediate recalculation NPT
> fault in MMIO area upon access. This type of fault isn't described
> explicitly in hvm_hap_nested_page_fault (this isn't called on
> EPT misconfig exit on Intel) which results in domain crash.
>
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
>  xen/arch/x86/hvm/svm/svm.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 46a1aac..f0d0bd3 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>          /* inject #VMEXIT(NPF) into guest. */
>          nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
>          return;
> +    case 0:
> +        /* If a recalculation page fault hasn't been handled - just retry. */
> +        if ( pfec & PFEC_user_mode )
> +            return;

This smells like it is a recipe for livelocks.

Everything should have been handled properly by the call to
p2m_pt_handle_deferred_changes() which precedes svm_do_nested_pgfault().

It is legitimate for the MMIO mapping to end up being transiently
recalculated, but the fact that p2m_pt_handle_deferred_changes() doesn't
fix it up suggests that the bug is there.

Do you have the complete NPT walk to the bad mapping? Do we have
_PAGE_USER in the leaf mapping, or is this perhaps a spurious fault?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:50:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:50:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4Jm-0007ml-KQ; Fri, 22 May 2020 09:50:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jc4Jl-0007mg-PF
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:50:17 +0000
X-Inumbo-ID: a3a0895c-9c11-11ea-aba8-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3a0895c-9c11-11ea-aba8-12813bfff9fa;
 Fri, 22 May 2020 09:50:16 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 2EOl9xU0oWRpS/14ijNqMJw2MTZ/WKZsksGprQsRZUnBHIul/sC2q2piRMqB5USAJTglWXyzzA
 wAVe9PBVuU9qW+7VQdQFgqCsPOqN9W1Qq3WmTAYLUNRb3Apy1aGU4oHHXgkUF1BH+21w/saO1I
 UCWyHVj+em1s25fqVx2dBa7sKFYlhtnBbIhb33cgw4uARbX2WCs94DTHnBiChEiVwC9UwJ2OZt
 /AYPVCMa4tIWOLaOMaIVhUzbTcoSfWb0ef1aWMTtkrGzDhBrXleNXRyrlumxwQa7UcceL8cIhV
 Yrg=
X-SBRS: 2.7
X-MesageID: 18159426
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18159426"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] golang: Update generated files after libxl_types.idl change
Date: Fri, 22 May 2020 10:49:56 +0100
Message-ID: <20200522094956.3611661-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

c/s 7efd9f3d45 ("libxl: Handle Linux stubdomain specific QEMU
options.") modified libxl_types.idl.  Run gengotypes.py again to update
the generated golang bindings.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/golang/xenlight/helpers.gen.go | 10 ++++++++++
 tools/golang/xenlight/types.gen.go   |  3 +++
 2 files changed, 13 insertions(+)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 109e9515a2..b5bd0de830 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1163,6 +1163,9 @@ func (x *DomainBuildInfo) fromC(xc *C.libxl_domain_build_info) error {
 	if err := x.DeviceModelStubdomain.fromC(&xc.device_model_stubdomain); err != nil {
 		return fmt.Errorf("converting field DeviceModelStubdomain: %v", err)
 	}
+	x.StubdomainMemkb = uint64(xc.stubdomain_memkb)
+	x.StubdomainKernel = C.GoString(xc.stubdomain_kernel)
+	x.StubdomainRamdisk = C.GoString(xc.stubdomain_ramdisk)
 	x.DeviceModel = C.GoString(xc.device_model)
 	x.DeviceModelSsidref = uint32(xc.device_model_ssidref)
 	x.DeviceModelSsidLabel = C.GoString(xc.device_model_ssid_label)
@@ -1489,6 +1492,13 @@ func (x *DomainBuildInfo) toC(xc *C.libxl_domain_build_info) (err error) {
 	if err := x.DeviceModelStubdomain.toC(&xc.device_model_stubdomain); err != nil {
 		return fmt.Errorf("converting field DeviceModelStubdomain: %v", err)
 	}
+	xc.stubdomain_memkb = C.uint64_t(x.StubdomainMemkb)
+	if x.StubdomainKernel != "" {
+		xc.stubdomain_kernel = C.CString(x.StubdomainKernel)
+	}
+	if x.StubdomainRamdisk != "" {
+		xc.stubdomain_ramdisk = C.CString(x.StubdomainRamdisk)
+	}
 	if x.DeviceModel != "" {
 		xc.device_model = C.CString(x.DeviceModel)
 	}
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index df68fd0e88..15516ae552 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -509,6 +509,9 @@ type DomainBuildInfo struct {
 	MaxMaptrackFrames     uint32
 	DeviceModelVersion    DeviceModelVersion
 	DeviceModelStubdomain Defbool
+	StubdomainMemkb       uint64
+	StubdomainKernel      string
+	StubdomainRamdisk     string
 	DeviceModel           string
 	DeviceModelSsidref    uint32
 	DeviceModelSsidLabel  string
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 09:52:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:52:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4MC-0007xw-1X; Fri, 22 May 2020 09:52:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc4MA-0007xo-5Q
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:52:46 +0000
X-Inumbo-ID: fbfd67a0-9c11-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fbfd67a0-9c11-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 09:52:44 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9CDC0AB7D;
 Fri, 22 May 2020 09:52:46 +0000 (UTC)
Subject: Re: [PATCH] x86: refine guest_mode()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7b62d06c-1369-2857-81c0-45e2434357f4@suse.com>
 <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
 <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
 <20200520151326.GM54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <38d546f9-8043-8d94-8298-8fd035078a8a@suse.com>
Date: Fri, 22 May 2020 11:52:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520151326.GM54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.2020 17:13, Roger Pau Monné wrote:
> On Wed, May 20, 2020 at 10:56:26AM +0200, Jan Beulich wrote:
>> On 18.05.2020 16:51, Roger Pau Monné wrote:
>>> On Tue, Apr 28, 2020 at 08:30:12AM +0200, Jan Beulich wrote:
>>>> On 27.04.2020 22:11, Andrew Cooper wrote:
>>>>> On 27/04/2020 16:15, Jan Beulich wrote:
>>>>>> On 27.04.2020 16:35, Andrew Cooper wrote:
>>>>>>> On 27/04/2020 09:03, Jan Beulich wrote:
>>>>>>>> --- a/xen/include/asm-x86/regs.h
>>>>>>>> +++ b/xen/include/asm-x86/regs.h
>>>>>>>> @@ -10,9 +10,10 @@
>>>>>>>>      /* Frame pointer must point into current CPU stack. */                    \
>>>>>>>>      ASSERT(diff < STACK_SIZE);                                                \
>>>>>>>>      /* If not a guest frame, it must be a hypervisor frame. */                \
>>>>>>>> -    ASSERT((diff == 0) || (r->cs == __HYPERVISOR_CS));                        \
>>>>>>>> +    if ( diff < PRIMARY_STACK_SIZE )                                          \
>>>>>>>> +        ASSERT(!diff || ((r)->cs == __HYPERVISOR_CS));                        \
>>>>>>>>      /* Return TRUE if it's a guest frame. */                                  \
>>>>>>>> -    (diff == 0);                                                              \
>>>>>>>> +    !diff || ((r)->cs != __HYPERVISOR_CS);                                    \
>>>>>>> The (diff == 0) already worried me before because it doesn't fail safe,
>>>>>>> but this makes things more problematic.  Consider the case back when we
>>>>>>> had __HYPERVISOR_CS32.
>>>>>> Yes - if __HYPERVISOR_CS32 would ever have been to be used for
>>>>>> anything, it would have needed checking for here.
>>>>>>
>>>>>>> Guest mode is strictly "(r)->cs & 3".
>>>>>> As long as CS (a) gets properly saved (it's a "manual" step for
>>>>>> SYSCALL/SYSRET as well as #VMEXIT) and (b) didn't get clobbered. I
>>>>>> didn't write this code, I don't think, so I can only guess that
>>>>>> there were intentions behind this along these lines.
>>>>>
>>>>> Hmm - the VMExit case might be problematic here, due to the variability
>>>>> in the poison used.
>>>>
>>>> "Variability" is an understatement - there's no poisoning at all
>>>> in release builds afaics (and to be honest it seems somewhat
>>>> pointless to write the same values over and over again in debug
>>>> mode). With this, ...
>>>>
>>>>>>> Everything else is expectations about how things ought to be laid out,
>>>>>>> but for safety in release builds, the final judgement should not depend
>>>>>>> on the expectations evaluating true.
>>>>>> Well, I can switch to a purely CS.RPL based approach, as long as
>>>>>> we're happy to live with the possible downside mentioned above.
>>>>>> Of course this would then end up being a more intrusive change
>>>>>> than originally intended ...
>>>>>
>>>>> I'd certainly prefer to go for something which is more robust, even if
>>>>> it is a larger change.
>>>>
>>>> ... what's your suggestion? Basing on _just_ CS.RPL obviously won't
>>>> work. Not even if we put in place the guest's CS (albeit that
>>>> somewhat depends on the meaning we assign to the macro's returned
>>>> value).
>>>
>>> Just to check I'm following this correctly, using CS.RPL won't work
>>> for HVM guests, as HVM can legitimately use a RPL of 0 (which is not
>>> the case for PV guests). Doesn't the same apply to the usage of
>>> __HYPERVISOR_CS? (A HVM guest could also use the same code segment
>>> value as Xen?)
>>
>> Of course (and in particular Xen as a guest would). My "Basing on
>> _just_ CS.RPL" wasn't meant to exclude the rest of the selector,
>> but to contrast this to the case where "diff" also is involved in
>> the calculation (which looks to be what Andrew would prefer to see
>> go away).
>>
>>>> Using current inside the macro to determine whether the
>>>> guest is HVM would also seem fragile to me - there are quite a few
>>>> uses of guest_mode(). Which would leave passing in a const struct
>>>> vcpu * (or domain *), requiring to touch all call sites, including
>>>> Arm's.
>>>
>>> Fragile or slow? Are there corner cases where guest_mode is used where
>>> current is not reliable?
>>
>> This question is why I said "there are quite a few uses of
>> guest_mode()" - auditing them all is just one side of the coin.
>> The other is to prevent a new use appearing in the future that
>> can be reached by a call path in the time window where a lazy
>> context switch is pending (i.e. when current has already been
>> updated, but register state hasn't been yet).
>>
>>>> Compared to this it would seem to me that the change as presented
>>>> is a clear improvement without becoming overly large of a change.
>>>
>>> Using the cs register is already part of the guest_mode code, even if
>>> just in debug mode, hence I don't see it as a regression from existing
>>> code. It however feels weird to me that the reporter of the issue
>>> doesn't agree with the fix, and hence would like to know if there's a
>>> way we could achieve consensus on this.
>>
>> Indeed. I'd be happy to make further adjustments, if only I had a
>> clear understanding of what is wanted (or why leaving things as
>> they are is better than a little bit of an improvement).
> 
> OK, so I think I'm starting to understand this all. Sorry it's taken
> me so long. So it's my understanding that diff != 0 can only happen in
> Xen context, or when in an IST that has a different stack (ie: MCE, NMI
> or DF according to current.h) and running in PV mode?
> 
> Wouldn't it then be fine to use (r)->cs & 3 to check we are in guest
> mode if diff != 0? I see a lot of other places where cs & 3 is already
> used to that effect AFAICT (like entry.S).

Technically this would be correct afaics, but the idea with all this
is (or should I say "looks to be"?) to have the checks be as tight as
possible, to make sure we don't mistakenly consider something "guest
mode" which really isn't. IOW your suggestion would be fine with me
if we could exclude bugs anywhere in the code. But since this isn't
realistic, I consider your suggestion to be relaxing things by too
much.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:55:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:55:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4OI-00085C-EP; Fri, 22 May 2020 09:54:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RPHd=7E=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jc4OG-000855-N1
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:54:56 +0000
X-Inumbo-ID: 4a41949a-9c12-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a41949a-9c12-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 09:54:56 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id f5so980449wmh.2
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 02:54:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=iaKsEP9YxVg/i27p2cuZ3z6rXXbvQ5bbumLMMOklFro=;
 b=b7p5QMd8vlzhHPIwg7bzdDLzc6HJSaOj7CiEhyPr8m4/fsogObBAdjkagqAFrKR332
 AU/mzPUBarTktfZPiybu/DKRxZk5DH4FuuRZWFAmmlJ85++xyxCH6/8PGuleZp49a62B
 jaBfdiy9uWGVMSUvXPOj+8aOn+8qIvCC+MZQ9COn79xsdWd0VA1YHgpkVs7wQzSoE9Jg
 kWqFKA4Wp+UyJWUuB4RX+v6ArEF0wa9sxR2sie9O+mtPfW9UZGVbWlmbybQUZsQGRd+Q
 OJR5In2fzlPAirlFKRoljO4r+Cnufj9jM1ASGI3iW1a32h1XaBxZExZJkwmDcd7GgSvw
 U4OQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=iaKsEP9YxVg/i27p2cuZ3z6rXXbvQ5bbumLMMOklFro=;
 b=iwJUGnUHoQcTE/hLliks0D9uoPAesudZgglHbHeIsbNgTYZ64+DiwXW2ZvgJSQFyUN
 Gn7Ubg/M2SeAzr5sSatP9qPOA3LU2ZvfZxscROfZO5PiaEOTiC7RhLa0nROcrQsB8dTk
 OYlSThAqAeQ59VM3IyxVEnEWuLaEl19MDnSMCY2t3ryaLEhnJS7Emh9oGoJLPWxcWuhh
 HV+f84eHzfJHQR2bL/xw1Yk3tfQqvLn8aPr1OR4PanWvHUgnHK2ylV2YxWCW6yV13mmV
 b7e+P0qKNDgwMpX2z0T3TpDXqCygt6pXGR06ahRYN5FhiDSV/LkdLKchSZd1vIchgrjN
 Y7+w==
X-Gm-Message-State: AOAM5332qd4FbVhXHHJB+e9Hcd4XQSIUvNFXm3bVYv6kzptQm/8f8y6M
 yX3OEhdiVu8mtzCbhiUDQ6o=
X-Google-Smtp-Source: ABdhPJyfNbwbyaOn7KKmyTfGxtoqbs9RWSOJEJW4kFlt7983av6rC9Xp8EuyhFqXBNVFaIGoZdEfqQ==
X-Received: by 2002:a1c:98cc:: with SMTP id a195mr5020506wme.32.1590141295432; 
 Fri, 22 May 2020 02:54:55 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id m7sm9206225wmc.40.2020.05.22.02.54.53
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 22 May 2020 02:54:54 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'George Dunlap'" <George.Dunlap@citrix.com>,
 "'Jason Andryuk'" <jandryuk@gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <4510049C-2AD1-4AE4-B0E5-F4231450EDB6@citrix.com>
In-Reply-To: <4510049C-2AD1-4AE4-B0E5-F4231450EDB6@citrix.com>
Subject: RE: [PATCH v7 00/19] Add support for qemu-xen running in a
 Linux-based stubdomain
Date: Fri, 22 May 2020 10:54:52 +0100
Message-ID: <001301d6301f$0b546cd0$21fd4670$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQGuWQTXNTg7A7WCU27izkdscEVquQGwNvazqPYKEdA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'xen-devel' <xen-devel@lists.xenproject.org>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <Andrew.Cooper3@citrix.com>,
 'Jan Beulich' <jbeulich@suse.com>, 'Ian Jackson' <Ian.Jackson@citrix.com>,
 'Anthony Perard' <anthony.perard@citrix.com>,
 'Samuel Thibault' <samuel.thibault@ens-lyon.org>,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of George Dunlap
> Sent: 22 May 2020 10:11
> To: Jason Andryuk <jandryuk@gmail.com>
> Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Samuel Thibault
> <samuel.thibault@ens-lyon.org>; Wei Liu <wl@xen.org>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan
> Beulich <jbeulich@suse.com>; Ian Jackson <Ian.Jackson@citrix.com>; Anthony Perard
> <anthony.perard@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>; Daniel De Graaf
> <dgdegra@tycho.nsa.gov>
> Subject: Re: [PATCH v7 00/19] Add support for qemu-xen running in a Linux-based stubdomain
> 
> 
> > On May 19, 2020, at 2:54 AM, Jason Andryuk <jandryuk@gmail.com> wrote:
> >
> > The general idea is to allow freely setting device_model_version and
> > device_model_stubdomain_override and choosing the right options based on this
> > choice.  Also, allow specifying the path to the stubdomain kernel/ramdisk, for
> > greater flexibility.
> 
> Excited to see this patch series get in.  But I didn’t really notice any documents
> explaining how to actually use it — is there a blog post anywhere describing how to
> get the kernel / initrd image and so on?
> 
> Also, would it be possible to add a follow-up series which modifies SUPPORT.md and CHANGELOG.md?

Yes please. In future I think we should encourage the patch to
CHANGELOG.md to be the last patch of a series such as this.

  Paul

> 
> Thanks,
>  -George



From xen-devel-bounces@lists.xenproject.org Fri May 22 09:58:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4S2-0008Ew-Vx; Fri, 22 May 2020 09:58:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc4S2-0008Er-7U
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:58:50 +0000
X-Inumbo-ID: d56170d6-9c12-11ea-aba9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d56170d6-9c12-11ea-aba9-12813bfff9fa;
 Fri, 22 May 2020 09:58:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 605C2AD60;
 Fri, 22 May 2020 09:58:51 +0000 (UTC)
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <55a4a24d-7fac-527c-6bcf-8d689136bac2@suse.com>
 <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
 <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
 <20200520102805.GK54375@Air-de-Roger>
 <0e97e3af-b66e-4924-a76c-9e33cdd1a726@suse.com>
 <20200520114327.GL54375@Air-de-Roger>
 <d0a15359-339f-6edd-034c-cd6385e929d1@suse.com>
 <20200520171829.GO54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2be7b657-1027-2fef-fd26-131c27e940db@suse.com>
Date: Fri, 22 May 2020 11:58:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200520171829.GO54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20.05.2020 19:18, Roger Pau Monné wrote:
> I also assume that using no_caller_saved_registers when available or
> else keeping the current behavior is not an acceptable solution? FWIW,
> from a FreeBSD PoV I would be OK with that, as I don't think there are
> any supported targets with clang < 5.

By "current behavior" do you mean what the patch currently
does (matching my earlier response that I may try going this
route) or the unpatched upstream tree?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 09:59:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 09:59:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4SP-0008I9-BQ; Fri, 22 May 2020 09:59:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nXpb=7E=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jc4SO-0008Hw-J6
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 09:59:12 +0000
X-Inumbo-ID: e2ae8094-9c12-11ea-b07b-bc764e2007e4
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e2ae8094-9c12-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 09:59:11 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id w64so9212687wmg.4
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 02:59:11 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:content-transfer-encoding
 :in-reply-to:user-agent;
 bh=vo0TO+fBgnYreyEx5cxh9gkmxSN5jFGdN8I+8XzFfYI=;
 b=iIGKOzDxHOzfvjqBSfS7f1g77I29rHX+/lqD2XcBn+DS6xhR3Eri0L8FWg9g+TI1Qg
 2ZdAyG+BtHTrU1oDhecBxLaJZqetzjyW/aevpQIbX46MsO+2eiPWV97gEDt0AM+TYs5L
 SmGjh7Y8KYg1KnX1gnbU367KOsRoySMkSAyxQ+idSUaq3a4H2ucOkKw2l/uR1WIU2Ljv
 JepXw7O1lTJU1BzYmsS4RNsmAgDXTkJuoYlkcP9Wit7VREaVfDiUPt0RFAsqSay226kR
 IItDq3QxUfaBus/JQvtGi2UY8HmdywErFBu2VTqcrNEE3FfsB1rfoPoe1TYB0PPY7sWU
 FqeQ==
X-Gm-Message-State: AOAM533Xi+g+qwUVP8tDZov0ShD9k4SsUY73wumKBYkn2pUN5pv4UnCT
 Mv1A99ufddcTQB54Z+qDnkY=
X-Google-Smtp-Source: ABdhPJxzB3s1XUsarnDILUCrUGdK5hh0Ixpcne2INW8jZxlWhT4w/lwbIsmMF/r6NtW/ae5cHPWQtg==
X-Received: by 2002:a1c:96d8:: with SMTP id y207mr5676302wmd.167.1590141551119; 
 Fri, 22 May 2020 02:59:11 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id d15sm63318wrq.30.2020.05.22.02.59.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 22 May 2020 02:59:10 -0700 (PDT)
Date: Fri, 22 May 2020 09:59:09 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Message-ID: <20200522095909.h3qc3mhogocdvwas@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
 <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
 <20200522090553.eegs4fcltfqjuhzo@debian>
 <9FBA46AA-9727-4AA9-A23D-B72F5AE9C35C@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <9FBA46AA-9727-4AA9-A23D-B72F5AE9C35C@arm.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 09:37:51AM +0000, Bertrand Marquis wrote:
> Hi,
> 
> > On 22 May 2020, at 10:05, Wei Liu <wl@xen.org> wrote:
> > 
> > On Fri, May 22, 2020 at 08:41:17AM +0000, Bertrand Marquis wrote:
> >> Hi,
> >> 
> >> As a consequence of this fix, the following has been committed (I guess as a consequence of regenerating the configure scripts):
> >> diff --git a/tools/configure b/tools/configure
> >> index 375430df3f..36596389b8 100755
> >> --- a/tools/configure
> >> +++ b/tools/configure
> >> @@ -4678,6 +4678,10 @@ for ldflag in $APPEND_LIB
> >> do
> >>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> >> done
> >> +if  ! -z $EXTRA_PREFIX ; then
> >> +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> >> +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> >> +fi
> >> CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
> >> LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS”
> >> 
> >> This should be:
> >> if  [ ! -z $EXTRA_PREFIX ]; then
> >> 
> >> As on other configure scripts.
> >> 
>> During configure I now get the following error:
>> ./configure: line 4681: -z: command not found
>> 
>> This is ignored, but it adds -I/include and -L/lib to CPPFLAGS and LDFLAGS.
>> 
>> What would be the procedure to actually fix that (as the problem comes from the configure script regeneration, I guess)?
> > 
> > Does the following patch work for you?
> > 
> > diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> > index 08f5c983cc63..cd34c139bc94 100644
> > --- a/m4/set_cflags_ldflags.m4
> > +++ b/m4/set_cflags_ldflags.m4
> > @@ -15,7 +15,7 @@ for ldflag in $APPEND_LIB
> > do
> >     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> > done
> > -if [ ! -z $EXTRA_PREFIX ]; then
> > +if test ! -z $EXTRA_PREFIX ; then
> >     CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> >     LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> > fi
> > 
> > 
> > You will need to run autogen.sh to regenerate tools/configure.
> > 
> 
> Yes, that works on my side and generates tools/configure using “test”.
> 
> But why are the [] being removed when generating tools/configure?

No idea why autoconf removed [] really.

I think switching to test is better anyway since that's what is used
throughout tools/configure.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:01:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4Uw-0000pT-Pc; Fri, 22 May 2020 10:01:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nXpb=7E=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jc4Uv-0000pO-O7
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:01:49 +0000
X-Inumbo-ID: 3fe9636f-9c13-11ea-aba9-12813bfff9fa
Received: from mail-wm1-f65.google.com (unknown [209.85.128.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3fe9636f-9c13-11ea-aba9-12813bfff9fa;
 Fri, 22 May 2020 10:01:48 +0000 (UTC)
Received: by mail-wm1-f65.google.com with SMTP id f134so8131267wmf.1
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 03:01:48 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=HK4/FwLeGgXI+j5s40BKVWhGHh1cP+TEI9vZo9YfCEc=;
 b=m23ZgspuHqlmrM5OqG0mxoWCmZvgmaR2MJp6nvMqRDKIWISKyKzgQhk/eA0cBuNrTE
 f8CTPIVgdzkWxkre5ywnvanahdAjcJLi4cOvObLDoX6preMQF7hHxFk7k1fmJJbdzctA
 zJ52lazrQqaPjm0FQhLQrp+5q40uORczLUazmzoFZCkD62AzR2kpBh6oIQ7zIyRx3stI
 cTiIUMwxd36R5sni4vYHVN5w82ba3s1OSIRokeuHsY3KGJIIolWphAgX4fXu+VJViGuF
 G6eYXCPiNhrn4Sq4H6+7Bsylw1cvhk7H+mye7GZeL0Li2Oc5Lg8TUAMhtOqwNV+1OBCI
 Anuw==
X-Gm-Message-State: AOAM5311hRWUEq++hf5l4sGm9pRMffgSJoXNrBPmII0AmNCHjV1vguIO
 MANfCNC3EPY5Yzo2cPqWvPQ=
X-Google-Smtp-Source: ABdhPJx3SXpHFJNnubGIQMpwKvWOyu1PaejIWtIouPs1OeOQ9KHrpFQz6lKp634UEYhGLkojrljdwQ==
X-Received: by 2002:a1c:64c1:: with SMTP id
 y184mr11312321wmb.175.1590141707578; 
 Fri, 22 May 2020 03:01:47 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h196sm1202838wme.22.2020.05.22.03.01.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 22 May 2020 03:01:46 -0700 (PDT)
Date: Fri, 22 May 2020 10:01:45 +0000
From: Wei Liu <wl@xen.org>
To: George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH] golang: Update generated files after libxl_types.idl
 change
Message-ID: <20200522100145.w6xd5v7ioubzkni5@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200522094956.3611661-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200522094956.3611661-1-george.dunlap@citrix.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>, xen-devel@lists.xenproject.org,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 10:49:56AM +0100, George Dunlap wrote:
> c/s 7efd9f3d45 ("libxl: Handle Linux stubdomain specific QEMU
> options.") modified libl_types.idl.  Run gengotypes.py again to update

libxl_types.idl.

> the generated golang bindings.
> 

Can we perhaps add a dependency on golang's side such that it can be
auto-generated in the future?

In any case

Acked-by: Wei Liu <wl@xen.org>

(I haven't looked at the generated code)


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:05:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4Yl-0000y2-Ay; Fri, 22 May 2020 10:05:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Somd=7E=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jc4Yj-0000xx-AW
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:05:45 +0000
X-Inumbo-ID: cc84cdcc-9c13-11ea-aba9-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc84cdcc-9c13-11ea-aba9-12813bfff9fa;
 Fri, 22 May 2020 10:05:44 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: y7eq3Hxw3uie9y4rdx1w4vE06kp/+C79LbQUjxLbRHkoTo0/t5VK6MXIXnjXf3u6zfsBlOxTkW
 Vgnpt/VHI+Oec7D5ijLXVqYz+OKOqq/XGamFsaTdxIvVmLMrKjfIIAuIYU60eEVQB8PKzz52pW
 zgYz2qr5JQpaC5wtueWTyJCQUFPJyk2wlfO7w76UOD+Z6no7nNtOgpUQ++SlcO1TRv+eXYFkyx
 pfmTpNEjkZJs6apbiFi7ytQKbeT4KAKG9kySmdIfkD6BH8ocCw0pTsmRlL4a1gBcI2hMPltAbe
 ujg=
X-SBRS: 2.7
X-MesageID: 18193849
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18193849"
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: Andrew Cooper <andrew.cooper3@citrix.com>, <xen-devel@lists.xenproject.org>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <dae35bcf-5b85-a760-9d15-139973215334@citrix.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <506f21d4-ed81-2cd5-46af-162407553c91@citrix.com>
Date: Fri, 22 May 2020 11:05:39 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <dae35bcf-5b85-a760-9d15-139973215334@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: wl@xen.org, jbeulich@suse.com, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 10:45, Andrew Cooper wrote:
> On 21/05/2020 22:43, Igor Druzhinin wrote:
>> If a recalculation NPT fault hasn't been handled explicitly in
>> hvm_hap_nested_page_fault() then it's potentially safe to retry -
>> US bit has been re-instated in PTE and any real fault would be correctly
>> re-raised next time.
>>
>> This covers a specific case of migration with vGPU assigned on AMD:
>> global log-dirty is enabled and causes immediate recalculation NPT
>> fault in MMIO area upon access. This type of fault isn't described
>> explicitly in hvm_hap_nested_page_fault (this isn't called on
>> EPT misconfig exit on Intel) which results in domain crash.
>>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> ---
>>  xen/arch/x86/hvm/svm/svm.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>> index 46a1aac..f0d0bd3 100644
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>>          /* inject #VMEXIT(NPF) into guest. */
>>          nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
>>          return;
>> +    case 0:
>> +        /* If a recalculation page fault hasn't been handled - just retry. */
>> +        if ( pfec & PFEC_user_mode )
>> +            return;
> 
> This smells like it is a recipe for livelocks.
> 
> Everything should have been handled properly by the call to
> p2m_pt_handle_deferred_changes() which precedes svm_do_nested_pgfault().
> 
> It is legitimate for the MMIO mapping to end up being transiently
> recalculated, but the fact that p2m_pt_handle_deferred_changes() doesn't
> fix it up suggests that the bug is there.
> 
> Do you have the complete NPT walk to the bad mapping? Do we have
> _PAGE_USER in the leaf mapping, or is this perhaps a spurious fault?

It does fix it up. The problem is that currently in SVM we enter
svm_do_nested_pgfault immediately after p2m_pt_handle_deferred_changes
has finished.

Yes, we don't have _PAGE_USER initially and, yes, it's fixed up
correctly in p2m_pt_handle_deferred_changes but svm_do_nested_pgfault
doesn't know about it.

Please read my second email about the alternatives suggested to resolve
the issue you're worrying about.

Igor


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:07:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4aQ-00015m-Nc; Fri, 22 May 2020 10:07:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc4aQ-00015h-9p
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:07:30 +0000
X-Inumbo-ID: 0b65bbd2-9c14-11ea-aba9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b65bbd2-9c14-11ea-aba9-12813bfff9fa;
 Fri, 22 May 2020 10:07:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7E252AC63;
 Fri, 22 May 2020 10:07:31 +0000 (UTC)
Subject: Re: [PATCH v3] x86/PV: remove unnecessary toggle_guest_pt() overhead
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <24d8b606-f74b-9367-d67e-e952838c7048@suse.com>
 <d7840278-b999-65fa-40bf-2b78e5266837@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a6084473-2fb7-4106-66a4-d180ef483314@suse.com>
Date: Fri, 22 May 2020 12:07:27 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d7840278-b999-65fa-40bf-2b78e5266837@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.05.2020 18:46, Andrew Cooper wrote:
> On 05/05/2020 07:16, Jan Beulich wrote:
>> While the mere updating of ->pv_cr3 and ->root_pgt_changed aren't overly
>> expensive (but still needed only for the toggle_guest_mode() path), the
>> effect of the latter on the exit-to-guest path is not insignificant.
>> Move the logic into toggle_guest_mode(), on the basis that
>> toggle_guest_pt() will always be invoked in pairs, yet we can't safely
>> undo the setting of root_pgt_changed during the second of these
>> invocations.
>>
>> While at it, add a comment ahead of toggle_guest_pt() to clarify its
>> intended usage.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I'm still of the opinion that the commit message wants rewriting to get
> the important points across clearly.
> 
> And those are that toggle_guest_pt() is called in pairs specifically to
> read kernel data structures when emulating a userspace action, and that
> this doesn't modify cr3 from the guest's point of view, and therefore
> doesn't need the resync on exit-to-guest path.

Is this

"toggle_guest_pt() is called in pairs, to read guest kernel data
 structures when emulating a guest userspace action. Hence this doesn't
 modify cr3 from the guest's point of view, and therefore doesn't need
 any resync on the exit-to-guest path. Therefore move the updating of
 ->pv_cr3 and ->root_pgt_changed into toggle_guest_mode(), since undoing
 the changes during the second of these invocations wouldn't be a safe
 thing to do."

any better?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:08:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:08:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4bp-0001C5-2u; Fri, 22 May 2020 10:08:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc4bo-0001C0-GL
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:08:56 +0000
X-Inumbo-ID: 3ea04792-9c14-11ea-ae69-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ea04792-9c14-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 10:08:55 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: YQiBpXNXpK/yGTBHwopQ1csvroi95vSn+S2R+jofk447T3THtGYNymmuv32J7SvQ7qPcGu4/xu
 /VYE9/Sn6YmO1RYL8/Y5fQt69Cgr9/tN/yP2LopZ1VezGELgP7woopXbC7dZw+P8/IoiEIOwjN
 QJ+ZapphVIZZah0NFlT2tzv13TJkoKLA19ZJ5e3UvWmRmADIb3p1SpZOk5bUnP2QUaNwAPosSg
 0co6/a7u2KtmfnzhtZLxMu/VW0GKPwMVzro4OOIs8LcO4VnBqZgL7C3q9sMMjziEhiZ1CUc2nZ
 gns=
X-SBRS: 2.7
X-MesageID: 18447877
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18447877"
Date: Fri, 22 May 2020 12:08:46 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
Message-ID: <20200522100846.GV54375@Air-de-Roger>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, wl@xen.org, jbeulich@suse.com,
 andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 21, 2020 at 10:43:58PM +0100, Igor Druzhinin wrote:
> If a recalculation NPT fault hasn't been handled explicitly in
> hvm_hap_nested_page_fault() then it's potentially safe to retry -
> US bit has been re-instated in PTE and any real fault would be correctly
> re-raised next time.
> 
> This covers a specific case of migration with vGPU assigned on AMD:
> global log-dirty is enabled and causes immediate recalculation NPT
> fault in MMIO area upon access. This type of fault isn't described
> explicitly in hvm_hap_nested_page_fault (this isn't called on
> EPT misconfig exit on Intel) which results in domain crash.

Couldn't direct MMIO regions be handled like other types of memory for
the purposes of logdirty mode?

I assume there's already a path here used for other memory types when
logdirty is turned on, and hence would seem better to just make direct
MMIO regions also use that path?

> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
>  xen/arch/x86/hvm/svm/svm.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 46a1aac..f0d0bd3 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>          /* inject #VMEXIT(NPF) into guest. */
>          nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
>          return;
> +    case 0:
> +        /* If a recalculation page fault hasn't been handled - just retry. */
> +        if ( pfec & PFEC_user_mode )
> +            return;

I'm slightly worried that this diverges from the EPT implementation
now, in the sense that returning 0 from hvm_hap_nested_page_fault will
no longer trigger a guest crash.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:10:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:10:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4d8-0001wa-Ex; Fri, 22 May 2020 10:10:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=de5w=7E=epam.com=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1jc4d7-0001wO-9T
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:10:17 +0000
X-Inumbo-ID: 6ec346d6-9c14-11ea-ae69-bc764e2007e4
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::614])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6ec346d6-9c14-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 10:10:16 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QisDxXEcjL0FLe4iwHsxNprKObQa3cQg+djV5J9gyQIVjErRWJxG2ORnjnFP+LsfepebskpmrB0x9Wfs20MMhbJ5twFJWMFwTk1mogFEW3ac/gDN09XEeT/0P3xD5BVGaoBAiPPnq8eHIc7nRYZpR6VHSZuRxDihGOIfYeVU7yNsyDEl+9OWnmQhFM/xS4RMRWHVpmToW4hv1yzwyDCI79Br9dZcjgQ8CNAVp/yt80Qo94ccsZVB8FHErCKFHc4Dqeu1GCmDtMPCHTfZD0aw2azWyLh8AWsPQxvDkxUf0PB8oqTq00oVuTFZ20yVq9bhYvGsvKbchgdOoXNnJHsEhw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z81P0w2E9rPDUXDKcF8PG7ztBVmG5KoTvDdoezk8nW0=;
 b=Z31QwQIqLm4yG2JZT7MRfrdRV1ZTzgpPf+0UV2dJzGpcU1NfiwWtn1VCfatzNYPLuS7PJKKSTuaHEbCyq7/6WO5cM2ry818qeuG5oChfbmd7c3XC7kJeNyCHKWOAqrWWbwxISmQ4aZifP8E1X09HovhBhz4h1zuCIoeODgTxbcEVqfcSnU3fPgMLZvgFhYRdQkXETA7VVP8Kev5LXXflmql8gExggnkc4+CZEIvgsOa8xx1S+aYyIJcd0kFzAPZDatkQOPKd7SPcfK8TNt/tccrhJ9fuEarrcRMEYWgjlPQ+Flvzj6J0g4g7pbgsSPauztv6hJ5ZbwGvQ0MjVAFWHQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=z81P0w2E9rPDUXDKcF8PG7ztBVmG5KoTvDdoezk8nW0=;
 b=MrWP3STeHIMqkecKfpLn9KHWD3dxV/xYslIpUMN2/w/RC1Xtbg2EWLZD49Cm7TFW2Ym3O3KPUNsCQDTDjZl4SkEnhroI8rYrRKyOwosUVbwTmh6My5SAJEN+fcmqVa0WEZy51TrT8dyoaFgEefolCkXUc7T7f+uy/09XHqlSbdM=
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com (2603:10a6:803:72::14)
 by VI1PR03MB6333.eurprd03.prod.outlook.com (2603:10a6:800:13c::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 22 May
 2020 10:10:13 +0000
Received: from VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4]) by VI1PR03MB3998.eurprd03.prod.outlook.com
 ([fe80::28ec:3584:94d:27a4%7]) with mapi id 15.20.3021.027; Fri, 22 May 2020
 10:10:13 +0000
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Denis Kirjanov <kda@linux-powerpc.org>
Subject: Re: [PATCH v4] public/io/netif.h: add a new extra type for XDP
Thread-Topic: [PATCH v4] public/io/netif.h: add a new extra type for XDP
Thread-Index: AQHWMBiqME7497+04E6+8Omw+GVY7aiz01kAgAACZQCAAAIYAIAAClQA
Date: Fri, 22 May 2020 10:10:13 +0000
Message-ID: <aff85c9c-fd98-c848-6062-4694d7f1a9c1@epam.com>
References: <1589814292-1789-1-git-send-email-kda@linux-powerpc.org>
 <696ee8f6-f44b-35c3-2c9f-676cb9e5ad95@epam.com>
 <CAOJe8K3wZCkc2d1nxYqesTm2Cnt3QjsLDPn7U5KJbC7Bc=oVKA@mail.gmail.com>
 <05f78e98-79b9-b1a7-b892-67033b9cd245@epam.com>
 <CAOJe8K3Zwf7SjDwdPntucgu13meGahDNMi3oaqc6g9Y7Ttoo4Q@mail.gmail.com>
In-Reply-To: <CAOJe8K3Zwf7SjDwdPntucgu13meGahDNMi3oaqc6g9Y7Ttoo4Q@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: linux-powerpc.org; dkim=none (message not signed)
 header.d=none; linux-powerpc.org; dmarc=none action=none header.from=epam.com; 
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: bb0301b2-25d5-4762-117e-08d7fe3851c0
x-ms-traffictypediagnostic: VI1PR03MB6333:
x-microsoft-antispam-prvs: <VI1PR03MB6333F1B6DB38AC539D10EE2EE7B40@VI1PR03MB6333.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-forefront-prvs: 04111BAC64
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: ql7VWJ0g+BdyOx3/WHid4LPPVPfljHep7SEIkYDgwhBgFkzbymRELEzQO4lYDv2STqvTJ5Zd3wj+V1XWYMYh/+0tX4VE17YPxasqP7B7whVrYSDj8/8PF2Ya6zpkgtAk3njcbUbSDpnwBIgbAkq3ORj1/hag2qhGpcVBoSykU8avAzBWlRtjZbRgFutUa+z0Dm1NMq/VDAG+PfaERZfR20h3ya9nO6ww1WOtthUp1MDm1ukI3MJszUddm9rRzjW1uep89979FZ/0LbjkUXPpxq/U7NpNZp5BF1zqs1KGNbGC2d4wg38QeJU7t719J/sbpOH1zZ0k1FzzgAOUZZAGOA==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR03MB3998.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(366004)(396003)(136003)(346002)(39860400002)(376002)(26005)(6506007)(53546011)(2616005)(6486002)(2906002)(186003)(36756003)(8676002)(8936002)(86362001)(316002)(31686004)(478600001)(31696002)(6512007)(6916009)(66556008)(4326008)(64756008)(66446008)(66476007)(76116006)(66946007)(71200400001)(5660300002)(54906003);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: E0lQ0gyBC88Vfn7kfyA+B5qvUW3h/053KNOZizvSbmir1wuj4YvK1D1xVDJwBIBy2xpWIRcKQJhobAaTXk+dXK5x3cin/XZljIc1+7bUn6SRXc1Rw6ueGuEIDswt07fHyFEaqpIJx0USU31FClXcuwpmLbrG4GIVFiqjio8DXhmF9Z0LbEUJnJWSbyiGZuLGE0iW/MXVr4Ftkx/fSc20iw4T48ha1OSYEf9GwwpBbA+TTiBFTuvjABSemrn5ukk4ZOBF4jNUwSrEUcqPKMhCOQu66EhBqJ0UtoLJEjbPxps0dgWutE1PPcTiFGrF+LD+sSKVhHHccMMOywk5wzk/bgTBpUgd+PntgfDY1Cet2c5hHeQCIPJ6AswjfX/1D9p5IH7/9FXrUIyRzTX7bJ2tZSnbB1U6FCtqUuQEO1ABLRDQvTI8wxulzCsRmj/nwp5JpR8+j7736FLQBkRVPALY6HP+3RN4PDbdvYUFLXsdA5c=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <039DE3ACB2D7504BB39CAFF65F064479@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: bb0301b2-25d5-4762-117e-08d7fe3851c0
X-MS-Exchange-CrossTenant-originalarrivaltime: 22 May 2020 10:10:13.8850 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 5it0DEQT3Q1GfuA/DinBSIDw02k0TZyTuMIIqAxhzVEFbqZgpT5jJtopFW2oeWHCVE6UpA72vYyCRhxu71htLPESUInn1MmJGK4jgk8/huvQd3lQuSXdLomQCr5K8l1f
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR03MB6333
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "jgross@suse.com" <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "paul@xen.org" <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


On 5/22/20 12:33 PM, Denis Kirjanov wrote:
> On 5/22/20, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com> wrote:
>> On 5/22/20 12:17 PM, Denis Kirjanov wrote:
>>> On 5/22/20, Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
>>> wrote:
>>>> On 5/18/20 6:04 PM, Denis Kirjanov wrote:
>>>>> The patch adds a new extra type to be able to diffirentiate
>>>>> between RX responses on xen-netfront side with the adjusted offset
>>>>> required for XDP processing.
>>>>>
>>>>> The offset value from a guest is passed via xenstore.
>>>>>
>>>>> Signed-off-by: Denis Kirjanov <denis.kirjanov@suse.com>
>>>>> ---
>>>>> v4:
>>>>> - updated the commit and documenation
>>>>>
>>>>> v3:
>>>>> - updated the commit message
>>>>>
>>>>> v2:
>>>>> - added documentation
>>>>> - fixed padding for netif_extra_info
>>>>> ---
>>>>> ---
>>>>>     xen/include/public/io/netif.h | 18 +++++++++++++++++-
>>>>>     1 file changed, 17 insertions(+), 1 deletion(-)
>>>>>
>>>>> diff --git a/xen/include/public/io/netif.h
>>>>> b/xen/include/public/io/netif.h
>>>>> index 9fcf91a..a92bf04 100644
>>>>> --- a/xen/include/public/io/netif.h
>>>>> +++ b/xen/include/public/io/netif.h
>>>>> @@ -161,6 +161,17 @@
>>>>>      */
>>>>>
>>>>>     /*
>>>>> + * "xdp-headroom" is used to request that extra space is added
>>>>> + * for XDP processing.  The value is measured in bytes and passed by
>>>> not sure that we should use word "bytes" here as the rest of the
>>>> protocol (mostly)
>>>>
>>>> talks about octets. It is somewhat mixed here, no strong opinion
>>> sure, but since the public header mixes it I've decided to use that word.
>>>
>>>
>>>>> + * the frontend to be consistent between both ends.
>>>>> + * If the value is greater than zero that means that
>>>>> + * an RX response is going to be passed to an XDP program for
>>>>> processing.
>>>>> + *
>>>>> + * "feature-xdp-headroom" is set to "1" by the netback side like other
>>>>> features
>>>>> + * so a guest can check if an XDP program can be processed.
>>>>> + */
>>>>> +
>>>>> +/*
>>>>>      * Control ring
>>>>>      * ============
>>>>>      *
>>>>> @@ -985,7 +996,8 @@ typedef struct netif_tx_request netif_tx_request_t;
>>>>>     #define XEN_NETIF_EXTRA_TYPE_MCAST_ADD (2)  /* u.mcast */
>>>>>     #define XEN_NETIF_EXTRA_TYPE_MCAST_DEL (3)  /* u.mcast */
>>>>>     #define XEN_NETIF_EXTRA_TYPE_HASH      (4)  /* u.hash */
>>>>> -#define XEN_NETIF_EXTRA_TYPE_MAX       (5)
>>>>> +#define XEN_NETIF_EXTRA_TYPE_XDP       (5)  /* u.xdp */
>>>>> +#define XEN_NETIF_EXTRA_TYPE_MAX       (6)
>>>>>
>>>>>     /* netif_extra_info_t flags. */
>>>>>     #define _XEN_NETIF_EXTRA_FLAG_MORE (0)
>>>>> @@ -1018,6 +1030,10 @@ struct netif_extra_info {
>>>>>                 uint8_t algorithm;
>>>>>                 uint8_t value[4];
>>>>>             } hash;
>>>>> +        struct {
>>>>> +            uint16_t headroom;
>>>> why do you need "pad" field here?
>>> To state that we have a fixed size available.
>> Well, I would expect "reserved" or something in that case and "pad" in case
>>
>> there are other fields following (see gso above).
> it can be consistent with other names like pad at then end of the structure.
>
> If it really matters I can change it, no problem.
My point is that IMO it is not required at all, but this is up to
maintainers to decide
>
>> But here I think "pad" is not required, just like mcast doesn't add any
> because it's already 6-bytes long
you are right
>
>>>>> +            uint16_t pad[2]
>>>>> +        } xdp;
>>>>>             uint16_t pad[3];
>>>>>         } u;
>>>>>     };


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:14:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4hE-0002CP-2G; Fri, 22 May 2020 10:14:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Somd=7E=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jc4hC-0002CK-J4
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:14:30 +0000
X-Inumbo-ID: 05af5a4e-9c15-11ea-aba9-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05af5a4e-9c15-11ea-aba9-12813bfff9fa;
 Fri, 22 May 2020 10:14:29 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: NfqJwmCDKLrTc+jp4yFEKqPI5eWD2h76oct4UKWna0tTU6a/EdOgeh6S73OwCRSjv4X6PCPkCJ
 fu0llXY3dX6+Lbj9ZFmAoLCt1Q/pxUIjpXR0pQcv4dyB+9s0QnV4held04X5z0kJNlCrt+ePQP
 hQEuZN7qPyBUnVoB597x+cHfrsfEyrA2MyszHN6BDXrMgxi8vUW6VPzoNpdveO7SckpS9p6nkS
 AIVXWSwGvi2JKc84tw5/UIDg3joEn95wTRPT3woHxIlRExklzudx4c23W1E2PcccAr/eJT/k3p
 ZKI=
X-SBRS: 2.7
X-MesageID: 18427125
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18427125"
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
Date: Fri, 22 May 2020 11:14:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <20200522100846.GV54375@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, wl@xen.org, jbeulich@suse.com,
 andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 11:08, Roger Pau Monné wrote:
> On Thu, May 21, 2020 at 10:43:58PM +0100, Igor Druzhinin wrote:
>> If a recalculation NPT fault hasn't been handled explicitly in
>> hvm_hap_nested_page_fault() then it's potentially safe to retry -
>> US bit has been re-instated in PTE and any real fault would be correctly
>> re-raised next time.
>>
>> This covers a specific case of migration with vGPU assigned on AMD:
>> global log-dirty is enabled and causes immediate recalculation NPT
>> fault in MMIO area upon access. This type of fault isn't described
>> explicitly in hvm_hap_nested_page_fault (this isn't called on
>> EPT misconfig exit on Intel) which results in domain crash.
> 
> Couldn't direct MMIO regions be handled like other types of memory for
> the purposes of logdirty mode?
> 
> I assume there's already a path here used for other memory types when
> logdirty is turned on, and hence would seem better to just make direct
> MMIO regions also use that path?

The problem with handling only the MMIO case is that the issue still stays.
It will be hit with some other memory type since it's not MMIO specific.
The issue is that if a global recalculation is requested, the next hit to
this type will cause a transient fault which, even after the due fixup,
is not handled correctly by either of our handlers.

>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> ---
>>  xen/arch/x86/hvm/svm/svm.c | 4 ++++
>>  1 file changed, 4 insertions(+)
>>
>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>> index 46a1aac..f0d0bd3 100644
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>>          /* inject #VMEXIT(NPF) into guest. */
>>          nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
>>          return;
>> +    case 0:
>> +        /* If a recalculation page fault hasn't been handled - just retry. */
>> +        if ( pfec & PFEC_user_mode )
>> +            return;
> 
> I'm slightly worried that this diverges from the EPT implementation
> now, in the sense that returning 0 from hvm_hap_nested_page_fault will
> no longer trigger a guest crash.

My second alternative from my follow-up email addresses this. I didn't
like this aspect either.

Igor


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:19:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4lz-0002Mp-Lb; Fri, 22 May 2020 10:19:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc4ly-0002Mk-Pc
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:19:26 +0000
X-Inumbo-ID: b5f95af8-9c15-11ea-b9cf-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5f95af8-9c15-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 10:19:25 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:38624
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc4lv-000lca-KQ (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 11:19:23 +0100
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: Igor Druzhinin <igor.druzhinin@citrix.com>, xen-devel@lists.xenproject.org
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <dae35bcf-5b85-a760-9d15-139973215334@citrix.com>
 <506f21d4-ed81-2cd5-46af-162407553c91@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <15df5f0a-43f7-ca0c-613f-25a1a1a19640@citrix.com>
Date: Fri, 22 May 2020 11:19:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <506f21d4-ed81-2cd5-46af-162407553c91@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: wl@xen.org, jbeulich@suse.com, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 11:05, Igor Druzhinin wrote:
> On 22/05/2020 10:45, Andrew Cooper wrote:
>> On 21/05/2020 22:43, Igor Druzhinin wrote:
>>> If a recalculation NPT fault hasn't been handled explicitly in
>>> hvm_hap_nested_page_fault() then it's potentially safe to retry -
>>> US bit has been re-instated in PTE and any real fault would be correctly
>>> re-raised next time.
>>>
>>> This covers a specific case of migration with vGPU assigned on AMD:
>>> global log-dirty is enabled and causes immediate recalculation NPT
>>> fault in MMIO area upon access. This type of fault isn't described
>>> explicitly in hvm_hap_nested_page_fault (this isn't called on
>>> EPT misconfig exit on Intel) which results in domain crash.
>>>
>>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>> ---
>>>  xen/arch/x86/hvm/svm/svm.c | 4 ++++
>>>  1 file changed, 4 insertions(+)
>>>
>>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>>> index 46a1aac..f0d0bd3 100644
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>>>          /* inject #VMEXIT(NPF) into guest. */
>>>          nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
>>>          return;
>>> +    case 0:
>>> +        /* If a recalculation page fault hasn't been handled - just retry. */
>>> +        if ( pfec & PFEC_user_mode )
>>> +            return;
>> This smells like it is a recipe for livelocks.
>>
>> Everything should have been handled properly by the call to
>> p2m_pt_handle_deferred_changes() which precedes svm_do_nested_pgfault().
>>
>> It is legitimate for the MMIO mapping to end up being transiently
>> recalculated, but the fact that p2m_pt_handle_deferred_changes() doesn't
>> fix it up suggests that the bug is there.
>>
>> Do you have the complete NPT walk to the bad mapping? Do we have
>> _PAGE_USER in the leaf mapping, or is this perhaps a spurious fault?
> It does fix it up. The problem is that currently in SVM we enter
> svm_do_nested_pgfault immediately after p2m_pt_handle_deferred_changes
> is finished.

Oh - so we do.  I'd read the entry condition for svm_do_nested_pgfault()
incorrectly.

Jan - why did you chose to do it this way?  If
p2m_pt_handle_deferred_changes() has made a modification, there is
surely nothing relevant to do in svm_do_nested_pgfault().

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:19:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4m8-0002Nc-UJ; Fri, 22 May 2020 10:19:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc4m7-0002NO-NR
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:19:35 +0000
X-Inumbo-ID: bb7bc402-9c15-11ea-abab-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb7bc402-9c15-11ea-abab-12813bfff9fa;
 Fri, 22 May 2020 10:19:34 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HmLuVmKF0041qSAPfbrl73oRNrN5n9et4OeZebTCWbsXxtXsbRxOTSkrCgM3AHw/d+mYMsqtjr
 UbfH1urKxTmQZTWYt1pJLiMCITidX5iLL4OJ3BGuWDA+I2NgAqbN4IqyMfqkSv/9GUprw6x9fX
 9C7kmswsZrjZCgUx6CUnULQUKgcO7tNwaSKLtnEDGDUn/EP2QoJBVR97EHRcSCjPiJ+hg3Cswf
 ifyD3F7XKG0BjNC6E0xa+Cm09Mn1zsUhzF3X97Z7DKO+lsdoNklf/L/mO0iRlb/YEs8lKtDLPk
 G8c=
X-SBRS: 2.7
X-MesageID: 18527956
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18527956"
Date: Fri, 22 May 2020 12:19:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] x86: use POPCNT for hweight<N>() when available
Message-ID: <20200522101927.GW54375@Air-de-Roger>
References: <20200514140522.GD54375@Air-de-Roger>
 <83534bf1-fa57-1d4a-c615-f656338a8457@suse.com>
 <20200520093106.GI54375@Air-de-Roger>
 <53fdfbe2-615a-72f9-7f2d-26402a0a64d0@suse.com>
 <20200520102805.GK54375@Air-de-Roger>
 <0e97e3af-b66e-4924-a76c-9e33cdd1a726@suse.com>
 <20200520114327.GL54375@Air-de-Roger>
 <d0a15359-339f-6edd-034c-cd6385e929d1@suse.com>
 <20200520171829.GO54375@Air-de-Roger>
 <2be7b657-1027-2fef-fd26-131c27e940db@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <2be7b657-1027-2fef-fd26-131c27e940db@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 11:58:40AM +0200, Jan Beulich wrote:
> On 20.05.2020 19:18, Roger Pau Monné wrote:
> > I also assume that using no_caller_saved_registers when available or
> > else keeping the current behavior is not an acceptable solution? FWIW,
> > from a FreeBSD PoV I would be OK with that, as I don't think there are
> > any supported targets with clang < 5.
> 
> By "current behavior" do you mean what the patch currently
> does (matching my earlier response that I may try going this
> route) or the unpatched upstream tree?

Sorry this wasn't clear. By current I meant what's in the upstream
tree. IMO having both no_caller_saved_registers, the -ffixed-reg stuff
and the current upstream non-POPCNT implementation is kind of too much
divergence?

My preference would be to add popcnt support only when
no_caller_saved_registers is supported by the compiler, as I think
that's the cleanest solution. This is purely a performance
optimization, so it doesn't seem that bad to me to request a newish
compiler in order to enable it.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:23:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:23:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4qC-0003Jr-J9; Fri, 22 May 2020 10:23:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc4qB-0003Jm-9p
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:23:47 +0000
X-Inumbo-ID: 51a499ea-9c16-11ea-abac-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51a499ea-9c16-11ea-abac-12813bfff9fa;
 Fri, 22 May 2020 10:23:46 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: s/bgTSOTcEaJVbv6mKIF/XXCZ9sFNoWnbzQnvccLWj08Gw8huL5npkPCnARoqQ8wcd4r8jiH5/
 tWbJD8Zkn8L9RBacqZeDrY555faKVNtKW1fkW8V+Wv8+TcVm3Zg3zRcoqBt2I3Obm4NMG++Hwk
 L1wBekOd7kORuGTgoSHYFmdS4bd6ab3B/g6KepU+LoCbs/i+gL7SM3e//1zvoJIagZt+VijCdZ
 nx+byZCBFRfhwdEh5l/o1hvPzKsys50g+YR/vrjIz8cR/Vg8CVDIekN3+SHvYotSvvhvo8jXTu
 ixU=
X-SBRS: 2.7
X-MesageID: 18161341
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18161341"
Date: Fri, 22 May 2020 12:23:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
Message-ID: <20200522102339.GX54375@Air-de-Roger>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
 <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, wl@xen.org, jbeulich@suse.com,
 andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 11:14:24AM +0100, Igor Druzhinin wrote:
> On 22/05/2020 11:08, Roger Pau Monné wrote:
> > On Thu, May 21, 2020 at 10:43:58PM +0100, Igor Druzhinin wrote:
> >> If a recalculation NPT fault hasn't been handled explicitly in
> >> hvm_hap_nested_page_fault() then it's potentially safe to retry -
> >> US bit has been re-instated in PTE and any real fault would be correctly
> >> re-raised next time.
> >>
> >> This covers a specific case of migration with vGPU assigned on AMD:
> >> global log-dirty is enabled and causes immediate recalculation NPT
> >> fault in MMIO area upon access. This type of fault isn't described
> >> explicitly in hvm_hap_nested_page_fault (this isn't called on
> >> EPT misconfig exit on Intel) which results in domain crash.
> > 
> > Couldn't direct MMIO regions be handled like other types of memory for
> > the purposes of logdiry mode?
> > 
> > I assume there's already a path here used for other memory types when
> > logdirty is turned on, and hence would seem better to just make direct
> > MMIO regions also use that path?
> 
> The problem with handling only the MMIO case is that the issue still stays.
> It will be hit with some other memory type since it's not MMIO specific.
> The issue is that if global recalculation is called, the next hit to
> this type will cause a transient fault which will not be handled
> correctly after a due fixup by either of our handlers.

I admit I should go look at the code, but for example RAM p2m types
don't require this fix, so I assume there's some different path taken
in that case that avoids all this?

Ie: when global logdirty is enabled you will start to get nested page
faults for every access, yet only direct MMIO types require this fix?

Thanks, Roger


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:25:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4rI-0003Oj-Tt; Fri, 22 May 2020 10:24:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=owPK=7E=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jc4rH-0003Ob-OA
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:24:55 +0000
X-Inumbo-ID: 7a2bb4ac-9c16-11ea-b9cf-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a2bb4ac-9c16-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 10:24:54 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tgrRlMpCO0BybmgfjOhfOL32RnJz6f5bXzxP33D5/jL1e1EpQYCtLmxkvk3gOlYagw/plfz5Hq
 TBk/BAlqY3CiRze1LPmXZ6SQOVZOJjWfi7pSFz/VnadKH+HSJ4XjgI9JUZCYRD8pT1atRWyt/Y
 hq+9d0AJiWPVV+szftI4GF3bpzInsFNncpE3M9qLEmZ1D1YhDelpRAbrJEwkF8C2phO/FAmTSa
 Fr85v7TCgVDRfSYhmiXkgVQ1eeipDypSROjl8CAb3wWtQXfyuPVgmmNsZ5UIhvxwkYgpzRGIOE
 BJ8=
X-SBRS: 2.7
X-MesageID: 18427591
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18427591"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2] x86/ioemul: Rewrite stub generation to be shadow stack
 compatible
Date: Fri, 22 May 2020 11:24:35 +0100
Message-ID: <20200522102435.14329-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The logic is completely undocumented and almost impossible to follow.  It
actually uses return oriented programming.  Rewrite it to conform to more
normal call mechanics, and leave a big comment explaining things.  As well as
the code being easier to follow, it will execute faster as it isn't fighting
the branch predictor.

Move the ioemul_handle_quirk() function pointer from traps.c to
ioport_emulate.c.  There is no reason for it to be in neither of the two
translation units which use it.  Alter the behaviour to return the number of
bytes written into the stub.

Introduce a new nocall annotation using __attribute__((error)) to prohibit
calls being made.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Add ELF metadata for {load,save}_guest_gprs()
 * Make ioemul_handle_quirk() __read_mostly
 * Add new nocall tag
---
 xen/arch/x86/ioport_emulate.c  | 11 ++---
 xen/arch/x86/pv/emul-priv-op.c | 92 +++++++++++++++++++++++++++++++-----------
 xen/arch/x86/pv/gpr_switch.S   | 44 ++++++++------------
 xen/arch/x86/traps.c           |  3 --
 xen/include/asm-x86/io.h       |  3 +-
 xen/include/xen/compiler.h     |  2 +
 6 files changed, 95 insertions(+), 60 deletions(-)

diff --git a/xen/arch/x86/ioport_emulate.c b/xen/arch/x86/ioport_emulate.c
index 499c1f6056..b9d5d27188 100644
--- a/xen/arch/x86/ioport_emulate.c
+++ b/xen/arch/x86/ioport_emulate.c
@@ -8,7 +8,10 @@
 #include <xen/sched.h>
 #include <xen/dmi.h>
 
-static bool ioemul_handle_proliant_quirk(
+unsigned int __read_mostly (*ioemul_handle_quirk)(
+    uint8_t opcode, char *io_emul_stub, struct cpu_user_regs *regs);
+
+static unsigned int ioemul_handle_proliant_quirk(
     u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs)
 {
     static const char stub[] = {
@@ -19,18 +22,16 @@ static bool ioemul_handle_proliant_quirk(
         0xa8, 0x80, /*    test $0x80, %al */
         0x75, 0xfb, /*    jnz 1b          */
         0x9d,       /*    popf            */
-        0xc3,       /*    ret             */
     };
     uint16_t port = regs->dx;
     uint8_t value = regs->al;
 
     if ( (opcode != 0xee) || (port != 0xcd4) || !(value & 0x80) )
-        return false;
+        return 0;
 
     memcpy(io_emul_stub, stub, sizeof(stub));
-    BUILD_BUG_ON(IOEMUL_QUIRK_STUB_BYTES < sizeof(stub));
 
-    return true;
+    return sizeof(stub);
 }
 
 /* This table is the set of system-specific I/O emulation hooks. */
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 3b705299cf..3fb36a3e19 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -47,51 +47,97 @@ struct priv_op_ctxt {
     unsigned int bpmatch;
 };
 
-/* I/O emulation support. Helper routines for, and type of, the stack stub. */
-void host_to_guest_gpr_switch(struct cpu_user_regs *);
-unsigned long guest_to_host_gpr_switch(unsigned long);
+/* I/O emulation helpers.  Use non-standard calling conventions. */
+void nocall load_guest_gprs(struct cpu_user_regs *);
+void nocall save_guest_gprs(void);
 
 typedef void io_emul_stub_t(struct cpu_user_regs *);
 
 static io_emul_stub_t *io_emul_stub_setup(struct priv_op_ctxt *ctxt, u8 opcode,
                                           unsigned int port, unsigned int bytes)
 {
+    /*
+     * Construct a stub for IN/OUT emulation.
+     *
+     * Some platform drivers communicate with the SMM handler using GPRs as a
+     * mailbox.  Therefore, we must perform the emulation with the hardware
+     * domain's registers in view.
+     *
+     * We write a stub of the following form, using the guest load/save
+     * helpers (non-standard ABI), and one of several possible stubs
+     * performing the real I/O.
+     */
+    static const char prologue[] = {
+        0x53,       /* push %rbx */
+        0x55,       /* push %rbp */
+        0x41, 0x54, /* push %r12 */
+        0x41, 0x55, /* push %r13 */
+        0x41, 0x56, /* push %r14 */
+        0x41, 0x57, /* push %r15 */
+        0x57,       /* push %rdi (param for save_guest_gprs) */
+    };              /* call load_guest_gprs */
+                    /* <I/O stub> */
+                    /* call save_guest_gprs */
+    static const char epilogue[] = {
+        0x5f,       /* pop %rdi  */
+        0x41, 0x5f, /* pop %r15  */
+        0x41, 0x5e, /* pop %r14  */
+        0x41, 0x5d, /* pop %r13  */
+        0x41, 0x5c, /* pop %r12  */
+        0x5d,       /* pop %rbp  */
+        0x5b,       /* pop %rbx  */
+        0xc3,       /* ret       */
+    };
+
     struct stubs *this_stubs = &this_cpu(stubs);
     unsigned long stub_va = this_stubs->addr + STUB_BUF_SIZE / 2;
-    long disp;
-    bool use_quirk_stub = false;
+    unsigned int quirk_bytes = 0;
+    char *p;
+
+    /* Helpers - Read outer scope but only modify p. */
+#define APPEND_BUFF(b) ({ memcpy(p, b, sizeof(b)); p += sizeof(b); })
+#define APPEND_CALL(f)                                                  \
+    ({                                                                  \
+        long disp = (long)(f) - (stub_va + p - ctxt->io_emul_stub + 5); \
+        BUG_ON((int32_t)disp != disp);                                  \
+        *p++ = 0xe8;                                                    \
+        *(int32_t *)p = disp; p += 4;                                   \
+    })
 
     if ( !ctxt->io_emul_stub )
         ctxt->io_emul_stub =
             map_domain_page(_mfn(this_stubs->mfn)) + (stub_va & ~PAGE_MASK);
 
-    /* call host_to_guest_gpr_switch */
-    ctxt->io_emul_stub[0] = 0xe8;
-    disp = (long)host_to_guest_gpr_switch - (stub_va + 5);
-    BUG_ON((int32_t)disp != disp);
-    *(int32_t *)&ctxt->io_emul_stub[1] = disp;
+    p = ctxt->io_emul_stub;
+
+    APPEND_BUFF(prologue);
+    APPEND_CALL(load_guest_gprs);
 
+    /* Some platforms might need to quirk the stub for specific inputs. */
     if ( unlikely(ioemul_handle_quirk) )
-        use_quirk_stub = ioemul_handle_quirk(opcode, &ctxt->io_emul_stub[5],
-                                             ctxt->ctxt.regs);
+    {
+        quirk_bytes = ioemul_handle_quirk(opcode, p, ctxt->ctxt.regs);
+        p += quirk_bytes;
+    }
 
-    if ( !use_quirk_stub )
+    /* Default I/O stub. */
+    if ( likely(!quirk_bytes) )
     {
-        /* data16 or nop */
-        ctxt->io_emul_stub[5] = (bytes != 2) ? 0x90 : 0x66;
-        /* <io-access opcode> */
-        ctxt->io_emul_stub[6] = opcode;
-        /* imm8 or nop */
-        ctxt->io_emul_stub[7] = !(opcode & 8) ? port : 0x90;
-        /* ret (jumps to guest_to_host_gpr_switch) */
-        ctxt->io_emul_stub[8] = 0xc3;
+        *p++ = (bytes != 2) ? 0x90 : 0x66;  /* data16 or nop */
+        *p++ = opcode;                      /* <opcode>      */
+        *p++ = !(opcode & 8) ? port : 0x90; /* imm8 or nop   */
     }
 
-    BUILD_BUG_ON(STUB_BUF_SIZE / 2 < MAX(9, /* Default emul stub */
-                                         5 + IOEMUL_QUIRK_STUB_BYTES));
+    APPEND_CALL(save_guest_gprs);
+    APPEND_BUFF(epilogue);
+
+    BUG_ON(STUB_BUF_SIZE / 2 < (p - ctxt->io_emul_stub));
 
     /* Handy function-typed pointer to the stub. */
     return (void *)stub_va;
+
+#undef APPEND_CALL
+#undef APPEND_BUFF
 }
 
 
diff --git a/xen/arch/x86/pv/gpr_switch.S b/xen/arch/x86/pv/gpr_switch.S
index 6d26192c2c..e7f5bfcd2d 100644
--- a/xen/arch/x86/pv/gpr_switch.S
+++ b/xen/arch/x86/pv/gpr_switch.S
@@ -9,59 +9,49 @@
 
 #include <asm/asm_defns.h>
 
-ENTRY(host_to_guest_gpr_switch)
-        movq  (%rsp), %rcx
-        movq  %rdi, (%rsp)
+/* Load guest GPRs.  Parameter in %rdi, clobbers all registers. */
+ENTRY(load_guest_gprs)
         movq  UREGS_rdx(%rdi), %rdx
-        pushq %rbx
         movq  UREGS_rax(%rdi), %rax
         movq  UREGS_rbx(%rdi), %rbx
-        pushq %rbp
         movq  UREGS_rsi(%rdi), %rsi
         movq  UREGS_rbp(%rdi), %rbp
-        pushq %r12
-        movq  UREGS_r8(%rdi), %r8
+        movq  UREGS_r8 (%rdi), %r8
         movq  UREGS_r12(%rdi), %r12
-        pushq %r13
-        movq  UREGS_r9(%rdi), %r9
+        movq  UREGS_r9 (%rdi), %r9
         movq  UREGS_r13(%rdi), %r13
-        pushq %r14
         movq  UREGS_r10(%rdi), %r10
         movq  UREGS_r14(%rdi), %r14
-        pushq %r15
         movq  UREGS_r11(%rdi), %r11
         movq  UREGS_r15(%rdi), %r15
-        pushq %rcx /* dummy push, filled by guest_to_host_gpr_switch pointer */
-        pushq %rcx
-        leaq  guest_to_host_gpr_switch(%rip),%rcx
-        movq  %rcx,8(%rsp)
         movq  UREGS_rcx(%rdi), %rcx
         movq  UREGS_rdi(%rdi), %rdi
         ret
 
-ENTRY(guest_to_host_gpr_switch)
+        .size load_guest_gprs, . - load_guest_gprs
+        .type load_guest_gprs, STT_FUNC
+
+
+/* Save guest GPRs.  Parameter on the stack above the return address. */
+ENTRY(save_guest_gprs)
         pushq %rdi
-        movq  7*8(%rsp), %rdi
+        movq  2*8(%rsp), %rdi
         movq  %rax, UREGS_rax(%rdi)
-        popq  UREGS_rdi(%rdi)
+        popq        UREGS_rdi(%rdi)
         movq  %r15, UREGS_r15(%rdi)
         movq  %r11, UREGS_r11(%rdi)
-        popq  %r15
         movq  %r14, UREGS_r14(%rdi)
         movq  %r10, UREGS_r10(%rdi)
-        popq  %r14
         movq  %r13, UREGS_r13(%rdi)
-        movq  %r9, UREGS_r9(%rdi)
-        popq  %r13
+        movq  %r9,  UREGS_r9 (%rdi)
         movq  %r12, UREGS_r12(%rdi)
-        movq  %r8, UREGS_r8(%rdi)
-        popq  %r12
+        movq  %r8,  UREGS_r8 (%rdi)
         movq  %rbp, UREGS_rbp(%rdi)
         movq  %rsi, UREGS_rsi(%rdi)
-        popq  %rbp
         movq  %rbx, UREGS_rbx(%rdi)
         movq  %rdx, UREGS_rdx(%rdi)
-        popq  %rbx
         movq  %rcx, UREGS_rcx(%rdi)
-        popq  %rcx
         ret
+
+        .size save_guest_gprs, . - save_guest_gprs
+        .type save_guest_gprs, STT_FUNC
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3f93ec285e..f383b07f6e 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -117,9 +117,6 @@ idt_entry_t *idt_tables[NR_CPUS] __read_mostly;
  */
 DEFINE_PER_CPU_PAGE_ALIGNED(struct tss_page, tss_page);
 
-bool (*ioemul_handle_quirk)(
-    u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
-
 static int debug_stack_lines = 20;
 integer_param("debug_stack_lines", debug_stack_lines);
 
diff --git a/xen/include/asm-x86/io.h b/xen/include/asm-x86/io.h
index 8708b79b99..c4ec52cba7 100644
--- a/xen/include/asm-x86/io.h
+++ b/xen/include/asm-x86/io.h
@@ -49,8 +49,7 @@ __OUT(w,"w",short)
 __OUT(l,,int)
 
 /* Function pointer used to handle platform specific I/O port emulation. */
-#define IOEMUL_QUIRK_STUB_BYTES 10
-extern bool (*ioemul_handle_quirk)(
+extern unsigned int (*ioemul_handle_quirk)(
     u8 opcode, char *io_emul_stub, struct cpu_user_regs *regs);
 
 #endif
diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index 8c846261d2..c22439b7a4 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -20,6 +20,8 @@
 
 #define __weak        __attribute__((__weak__))
 
+#define nocall        __attribute__((error("Nonstandard ABI")))
+
 #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
 #define unreachable() do {} while (1)
 #else
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 22 10:25:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:25:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4ra-0003S2-9p; Fri, 22 May 2020 10:25:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Somd=7E=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jc4rZ-0003Rs-E4
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:25:13 +0000
X-Inumbo-ID: 850f8a1a-9c16-11ea-9887-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 850f8a1a-9c16-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 10:25:12 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: R5BCy4Mm6hu+Y/B//3BD/MyNFQ7VieS0kynn/j1Jlvpij2/3pt77sF5e/dNfy+xhR0GzD1mSMp
 YRlW9u5pg5XObifXw88+woch64g5Fvx3MDpQIESudlmz5LzsGq2LRmIMAYN/95wZ4CcvbHvvqg
 DaNQaZTO4scThbMdlg9hua6jTtB/f99mvl5dPuX5wYf1kXGj6uHyowczed257G1NUohUNB75/n
 r4Sq4UNYnnX+AndykIOKpCVwWh3G49s3JeSKRvM2iF6YHgEGdbb9/9YyWwpaYathrpjqoRdEiH
 zTo=
X-SBRS: 2.7
X-MesageID: 18427610
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18427610"
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: Andrew Cooper <andrew.cooper3@citrix.com>, <xen-devel@lists.xenproject.org>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <dae35bcf-5b85-a760-9d15-139973215334@citrix.com>
 <506f21d4-ed81-2cd5-46af-162407553c91@citrix.com>
 <15df5f0a-43f7-ca0c-613f-25a1a1a19640@citrix.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <c74e1075-1f66-9757-7d2c-429cacd337d4@citrix.com>
Date: Fri, 22 May 2020 11:25:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <15df5f0a-43f7-ca0c-613f-25a1a1a19640@citrix.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: wl@xen.org, jbeulich@suse.com, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 11:19, Andrew Cooper wrote:
> On 22/05/2020 11:05, Igor Druzhinin wrote:
>> On 22/05/2020 10:45, Andrew Cooper wrote:
>>> On 21/05/2020 22:43, Igor Druzhinin wrote:
>>>> If a recalculation NPT fault hasn't been handled explicitly in
>>>> hvm_hap_nested_page_fault() then it's potentially safe to retry -
>>>> US bit has been re-instated in PTE and any real fault would be correctly
>>>> re-raised next time.
>>>>
>>>> This covers a specific case of migration with vGPU assigned on AMD:
>>>> global log-dirty is enabled and causes immediate recalculation NPT
>>>> fault in MMIO area upon access. This type of fault isn't described
>>>> explicitly in hvm_hap_nested_page_fault (this isn't called on
>>>> EPT misconfig exit on Intel) which results in domain crash.
>>>>
>>>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>>> ---
>>>>  xen/arch/x86/hvm/svm/svm.c | 4 ++++
>>>>  1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>>>> index 46a1aac..f0d0bd3 100644
>>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>>> @@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>>>>          /* inject #VMEXIT(NPF) into guest. */
>>>>          nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
>>>>          return;
>>>> +    case 0:
>>>> +        /* If a recalculation page fault hasn't been handled - just retry. */
>>>> +        if ( pfec & PFEC_user_mode )
>>>> +            return;
>>> This smells like it is a recipe for livelocks.
>>>
>>> Everything should have been handled properly by the call to
>>> p2m_pt_handle_deferred_changes() which precedes svm_do_nested_pgfault().
>>>
>>> It is legitimate for the MMIO mapping to end up being transiently
>>> recalculated, but the fact that p2m_pt_handle_deferred_changes() doesn't
>>> fix it up suggests that the bug is there.
>>>
>>> Do you have the complete NPT walk to the bad mapping? Do we have
>>> _PAGE_USER in the leaf mapping, or is this perhaps a spurious fault?
>> It does fix it up. The problem is that currently in SVM we enter
>> svm_do_nested_pgfault immediately after p2m_pt_handle_deferred_changes
>> is finished.
> 
> Oh - so we do.  I'd read the entry condition for svm_do_nested_pgfault()
> incorrectly.
> 
> Jan - why did you choose to do it this way?  If
> p2m_pt_handle_deferred_changes() has made a modification, there is
> surely nothing relevant to do in svm_do_nested_pgfault().

In Jan's defense, that saves one additional VMEXIT in the rare case
that the fault had other implications (e.g. a write to an RO page in
log-dirty mode) in addition to the recalculation.

Igor


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:28:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:28:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4u6-0003fg-OB; Fri, 22 May 2020 10:27:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Somd=7E=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jc4u5-0003fb-IE
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:27:49 +0000
X-Inumbo-ID: e1fee3ce-9c16-11ea-abaf-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1fee3ce-9c16-11ea-abaf-12813bfff9fa;
 Fri, 22 May 2020 10:27:48 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 2Wg7ssBM/W1QFfjkaBmKGEA0nhL9lqdJdx1//WCDmf09/2x8MkjW+CVgw7o5NSkFBsQ7/087LY
 KqrJOHwi66w8+q0s7I/cjr90EvW6JQ8V9M2i+k6P2LkoEeoA+fQD3HpNtOr9FuYSSr46RUkwuk
 /XLJdT/1gUJJrnliK2B2zOe1dJPTUQhqKFb/A58osTid8vvWxqJzR53J51MivFDqCr1cVqTRWU
 ylD/i3ObDp7jYj7gKhmKzdfV3RWIGXi4Jp8ahRiUtS+gJY00wPld6XZrXXWzOPw/RMywwBkCyn
 W/Y=
X-SBRS: 2.7
X-MesageID: 18195004
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18195004"
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
 <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
 <20200522102339.GX54375@Air-de-Roger>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <fe6e5c7f-df0f-5436-a7cd-2949464ab9a7@citrix.com>
Date: Fri, 22 May 2020 11:27:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <20200522102339.GX54375@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, wl@xen.org, jbeulich@suse.com,
 andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 11:23, Roger Pau Monné wrote:
> On Fri, May 22, 2020 at 11:14:24AM +0100, Igor Druzhinin wrote:
>> On 22/05/2020 11:08, Roger Pau Monné wrote:
>>> On Thu, May 21, 2020 at 10:43:58PM +0100, Igor Druzhinin wrote:
>>>> If a recalculation NPT fault hasn't been handled explicitly in
>>>> hvm_hap_nested_page_fault() then it's potentially safe to retry -
>>>> US bit has been re-instated in PTE and any real fault would be correctly
>>>> re-raised next time.
>>>>
>>>> This covers a specific case of migration with vGPU assigned on AMD:
>>>> global log-dirty is enabled and causes immediate recalculation NPT
>>>> fault in MMIO area upon access. This type of fault isn't described
>>>> explicitly in hvm_hap_nested_page_fault (this isn't called on
>>>> EPT misconfig exit on Intel) which results in domain crash.
>>>
>>> Couldn't direct MMIO regions be handled like other types of memory for
>>> the purposes of logdirty mode?
>>>
>>> I assume there's already a path here used for other memory types when
>>> logdirty is turned on, and hence would seem better to just make direct
>>> MMIO regions also use that path?
>>
>> The problem of handling only the MMIO case is that the issue still stays.
>> It will be hit with some other memory type since it's not MMIO specific.
>> The issue is that if a global recalculation is called, the next hit to
>> this type will cause a transient fault which will not be handled
>> correctly, even after a due fixup, by either of our handlers.
> 
> I admit I should go look at the code, but for example RAM p2m types
> don't require this fix, so I assume there's some different path taken
> in that case that avoids all this?
> 
> Ie: when global logdirty is enabled you will start to get nested page
> faults for every access, yet only direct MMIO types require this fix?

It's not "only MMIO" - it's just that the MMIO area is what gets hit in my
particular case. I'd prefer this fix to address the general issue;
otherwise, for SVM we would have to write handlers in
hvm_hap_nested_page_fault() for every case as soon as we hit it.

Igor


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:33:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:33:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc4z4-0004X2-Bt; Fri, 22 May 2020 10:32:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jc4z2-0004Wx-Hu
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:32:56 +0000
X-Inumbo-ID: 98cabdf8-9c17-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98cabdf8-9c17-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 10:32:55 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HrmzpA6NjPjOkeGyoU3BOKkESjZ0TNVahvFc58wj+wYdNDwMYiACcby+7Y/9nRpCdhwBE5H/xr
 2hrb4V6yB9eV3KNvkor47TQ1PWnErVMguvEbPdKhGVxC4zJaHinZ0OPJQs6xCRxgBjYS9tBSNd
 863NXC0BGqJyBESYDtLquq+4U6wpG3shhzxTuvO3xucCdIZCOWwn3XcfHJi+uTPnhQfrHSdCIm
 jwCyiXpoxgIymt1Qg4Oe3wG4JmujaGr/96XTmLaOhOs/vDfvjK6nJe0qwWoSUOIP3Gb4hrwNU0
 nT0=
X-SBRS: 2.7
X-MesageID: 18161851
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18161851"
From: George Dunlap <George.Dunlap@citrix.com>
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH] golang: Update generated files after libxl_types.idl
 change
Thread-Topic: [PATCH] golang: Update generated files after libxl_types.idl
 change
Thread-Index: AQHWMB5jZYuungko1U2ePr9Wy7HjSqizvjyAgAAIsIA=
Date: Fri, 22 May 2020 10:32:51 +0000
Message-ID: <3FE58AF4-3C20-4615-A28C-31273C4BF301@citrix.com>
References: <20200522094956.3611661-1-george.dunlap@citrix.com>
 <20200522100145.w6xd5v7ioubzkni5@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
In-Reply-To: <20200522100145.w6xd5v7ioubzkni5@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <F0E7A2754202414C8E5DAB1A16B7DE86@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 22, 2020, at 11:01 AM, Wei Liu <wl@xen.org> wrote:
> 
> On Fri, May 22, 2020 at 10:49:56AM +0100, George Dunlap wrote:
>> c/s 7efd9f3d45 ("libxl: Handle Linux stubdomain specific QEMU
>> options.") modified libl_types.idl.  Run gengotypes.py again to update
> 
> libxl_types.idl.
> 
>> the geneated golang bindings.
>> 
> 
> Can we perhaps add a dependency on golang's side such that it can be
> auto-generated in the future?

Indeed, I'm trying to think of a good solution to this.

 -George


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:41:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:41:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc56l-0005O6-3p; Fri, 22 May 2020 10:40:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc56j-0005NG-Dz
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:40:53 +0000
X-Inumbo-ID: b558f56a-9c18-11ea-b07b-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b558f56a-9c18-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 10:40:52 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:39184
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc56T-00116l-KE (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 11:40:37 +0100
Subject: Re: [PATCH v3 3/5] x86/HVM: move NOFLUSH handling out of hvm_set_cr3()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <b461a8a6-8a36-4cec-341a-7730f249b3c4@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <573a92f9-32e3-3f08-d1fc-e8a8e6c177b0@citrix.com>
Date: Fri, 22 May 2020 11:40:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b461a8a6-8a36-4cec-341a-7730f249b3c4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Petre Pircalabu <ppircalabu@bitdefender.com>,
 Kevin Tian <kevin.tian@intel.com>, Tamas K Lengyel <tamas@tklengyel.com>,
 Wei Liu <wl@xen.org>, Razvan Cojocaru <rcojocaru@bitdefender.com>,
 Paul Durrant <paul@xen.org>, George Dunlap <George.Dunlap@eu.citrix.com>,
 Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/09/2019 16:25, Jan Beulich wrote:
> The bit is meaningful only for MOV-to-CR3 insns, not anywhere else, in
> particular not when loading nested guest state.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 22 10:49:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 10:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc5ET-0005eY-QS; Fri, 22 May 2020 10:48:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc5ER-0005eT-Vm
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 10:48:52 +0000
X-Inumbo-ID: d2a6b2be-9c19-11ea-abb1-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2a6b2be-9c19-11ea-abb1-12813bfff9fa;
 Fri, 22 May 2020 10:48:51 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: oMCupNkvvGSw15n75VORzuOhWKwFc/VttI6152HTwXDbEpTjCjpYPj0kA4ukxoGlqC8XInmsNM
 S6pxCm+2BZ3I8JBq7G7XBOss1vkTX5nvwLaEt+Si5mo0OPIIZBRJQci/N0WH+TciDrRHXgt8Qd
 oznCZB+OGVJmMJ7X5bkktaf2p+zfygAKrTUBrqzPm9BmuK4pF+kUFPR4DdcoV08pxZD2ZEcTvb
 mFzKRDvTlJd5uzASC6xurpNM63jDV2aSD4b8cX2lgI3VjT7BJkBG/Pt9vPY6d0TZxhpPqfUe73
 u7Q=
X-SBRS: 2.7
X-MesageID: 18428895
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18428895"
Date: Fri, 22 May 2020 12:48:44 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: refine guest_mode()
Message-ID: <20200522104844.GY54375@Air-de-Roger>
References: <7b62d06c-1369-2857-81c0-45e2434357f4@suse.com>
 <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
 <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
 <20200520151326.GM54375@Air-de-Roger>
 <38d546f9-8043-8d94-8298-8fd035078a8a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <38d546f9-8043-8d94-8298-8fd035078a8a@suse.com>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 11:52:42AM +0200, Jan Beulich wrote:
> On 20.05.2020 17:13, Roger Pau Monné wrote:
> > OK, so I think I'm starting to understand this all. Sorry it's taken
> > me so long. So it's my understanding that diff != 0 can only happen in
> > Xen context, or when in an IST that has a different stack (ie: MCE, NMI
> > or DF according to current.h) and running in PV mode?
> > 
> > Wouldn't it then be fine to use (r)->cs & 3 to check we are in guest
> > mode if diff != 0? I see a lot of other places where cs & 3 is already
> > used to that effect AFAICT (like entry.S).
> 
> Technically this would be correct afaics, but the idea with all this
> is (or should I say "looks to be"?) to have the checks be as tight as
> possible, to make sure we don't mistakenly consider something "guest
> mode" which really isn't. IOW your suggestion would be fine with me
> if we could exclude bugs anywhere in the code. But since this isn't
> realistic, I consider your suggestion to be relaxing things by too
> much.

OK, so I take it that (long term) we might also want to change the cs & 3
checks in entry.S to check against __HYPERVISOR_CS explicitly?

What I would prefer is to have some kind of homogeneity in how guest
mode vs Xen mode checks are performed, so that we don't confuse
people.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:00:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:00:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc5PZ-0007HN-HP; Fri, 22 May 2020 11:00:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc5PX-0007HI-TQ
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:00:19 +0000
X-Inumbo-ID: 6c67d9a4-9c1b-11ea-ae69-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c67d9a4-9c1b-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 11:00:19 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:39742
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc5PS-000Dj6-LG (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 12:00:14 +0100
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <17f1b674-92f9-6ee9-8e10-0fc30f055fe8@citrix.com>
Date: Fri, 22 May 2020 12:00:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/09/2019 16:23, Jan Beulich wrote:
> When there's no XPTI-enabled PV domain at all, there's no need to issue
> respective TLB flushes. Hardwire opt_xpti_* to false when !PV, and
> record the creation of PV domains by bumping opt_xpti_* accordingly.
>
> As to the sticky opt_xpti_domu vs increment/decrement of opt_xpti_hwdom,
> this is done this way to avoid
> (a) widening the former variable,
> (b) any risk of a missed flush, which would result in an XSA if a DomU
>     was able to exercise it, and
> (c) any races updating the variable.
> Fundamentally the TLB flush done when context switching out the domain's
> vCPU-s the last time before destroying the domain ought to be
> sufficient, so in principle DomU handling could be made to match hwdom's.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I am still concerned about the added complexity for no obvious use case.

Under what circumstances do we expect XPTI-ness to come and go on a
system, outside of custom dev-testing scenarios?


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:12:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc5an-0008Jb-J1; Fri, 22 May 2020 11:11:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc5al-0008JU-OO
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:11:55 +0000
X-Inumbo-ID: 0b5442fe-9c1d-11ea-ae69-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b5442fe-9c1d-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 11:11:55 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cmEcbCWDR4PUoj0+CoKGxJixscdbKuGR2xM4meav+piJVMvswePK4yf8NwykuYcJm3jgo/YvM0
 meq4BlCiqbIWJzCgfJHPKPgbNsev7BFeLSafRqr8aDmQvx6Kl0rHwcGOCTDrvQF/iUMfalz+Q3
 PpRDtwXPrVbgBJr7QMRkxJRE3NSxuMyTbv1ej4SImqJPdl7sha6mU4U3fDHi4v953S7gQ3DIcI
 xMaIkvZtwlxW43aQkzBy7TXoe0KUreORonwk+X5SZoFyuduTQSYtXopZfh3rYeW/8RpkQKRKSG
 dXM=
X-SBRS: 2.7
X-MesageID: 18888693
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18888693"
Date: Fri, 22 May 2020 13:11:46 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
Message-ID: <20200522111146.GZ54375@Air-de-Roger>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
 <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
 <20200522102339.GX54375@Air-de-Roger>
 <fe6e5c7f-df0f-5436-a7cd-2949464ab9a7@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <fe6e5c7f-df0f-5436-a7cd-2949464ab9a7@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, wl@xen.org, jbeulich@suse.com,
 andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 11:27:38AM +0100, Igor Druzhinin wrote:
> On 22/05/2020 11:23, Roger Pau Monné wrote:
> > On Fri, May 22, 2020 at 11:14:24AM +0100, Igor Druzhinin wrote:
> >> On 22/05/2020 11:08, Roger Pau Monné wrote:
> >>> On Thu, May 21, 2020 at 10:43:58PM +0100, Igor Druzhinin wrote:
> >>>> If a recalculation NPT fault hasn't been handled explicitly in
> >>>> hvm_hap_nested_page_fault() then it's potentially safe to retry -
> >>>> US bit has been re-instated in PTE and any real fault would be correctly
> >>>> re-raised next time.
> >>>>
> >>>> This covers a specific case of migration with vGPU assigned on AMD:
> >>>> global log-dirty is enabled and causes immediate recalculation NPT
> >>>> fault in MMIO area upon access. This type of fault isn't described
> >>>> explicitly in hvm_hap_nested_page_fault (this isn't called on
> >>>> EPT misconfig exit on Intel) which results in domain crash.
> >>>
> >>> Couldn't direct MMIO regions be handled like other types of memory for
> >>> the purposes of logdirty mode?
> >>>
> >>> I assume there's already a path here used for other memory types when
> >>> logdirty is turned on, and hence would seem better to just make direct
> >>> MMIO regions also use that path?
> >>
> >> The problem of handling only the MMIO case is that the issue still stays.
> >> It will be hit with some other memory type since it's not MMIO specific.
> >> The issue is that if a global recalculation is called, the next hit to
> >> this type will cause a transient fault which will not be handled
> >> correctly, even after a due fixup, by either of our handlers.
> > 
> > I admit I should go look at the code, but for example RAM p2m types
> > don't require this fix, so I assume there's some different path taken
> > in that case that avoids all this?
> > 
> > Ie: when global logdirty is enabled you will start to get nested page
> > faults for every access, yet only direct MMIO types require this fix?
> 
> It's not "only MMIO" - it's just that the MMIO area is what gets hit in my
> particular case. I'd prefer this fix to address the general issue;
> otherwise, for SVM we would have to write handlers in
> hvm_hap_nested_page_fault() for every case as soon as we hit it.

Hm, I'm not sure I agree. p2m memory types are limited, and IMO we
want to have strict control over how they are handled.
hvm_hap_nested_page_fault is already full of special casing for each
memory type for that reason.

That being said, I also don't like the fact that logdirty is handled
differently between EPT and NPT, as on EPT it's handled as a
misconfig while on NPT it's handled as a violation.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:13:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:13:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc5cZ-0008PL-Uz; Fri, 22 May 2020 11:13:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc5cZ-0008PG-Dp
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:13:47 +0000
X-Inumbo-ID: 4d358fea-9c1d-11ea-abb2-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d358fea-9c1d-11ea-abb2-12813bfff9fa;
 Fri, 22 May 2020 11:13:46 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8fL352PuG9f5QMJ2FfSj/JcrUwDdpE7MsH8LrDH5TDs81SkadWchJ77puOkGmVTm5AlzKe0gVG
 lR/V/WctzZB4URk7L5fpcdsL3XOIMUNYEZh05Mb8P7BoYjVaIgXbY+j25hpLu790fMASjX5I9s
 1alP4Rr19rQbPWckGPCGXFzekOQFksHRFgRgkhws5yFb55sefpeBO/EvlNFI3d+ZjknaYj38XI
 yw5e9tUqqJRWXMEqWyd+kvXQ5Ohb04k6aRDNbw2vwQ4s279Z1LWglgo8nH1OaNvDY8BNvnCq4L
 O9I=
X-SBRS: 2.7
X-MesageID: 18451695
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18451695"
Date: Fri, 22 May 2020 13:13:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
Message-ID: <20200522111337.GA54375@Air-de-Roger>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
 <17f1b674-92f9-6ee9-8e10-0fc30f055fe8@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <17f1b674-92f9-6ee9-8e10-0fc30f055fe8@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 12:00:14PM +0100, Andrew Cooper wrote:
> On 25/09/2019 16:23, Jan Beulich wrote:
> > When there's no XPTI-enabled PV domain at all, there's no need to issue
> > respective TLB flushes. Hardwire opt_xpti_* to false when !PV, and
> > record the creation of PV domains by bumping opt_xpti_* accordingly.
> >
> > As to the sticky opt_xpti_domu vs increment/decrement of opt_xpti_hwdom,
> > this is done this way to avoid
> > (a) widening the former variable,
> > (b) any risk of a missed flush, which would result in an XSA if a DomU
> >     was able to exercise it, and
> > (c) any races updating the variable.
> > Fundamentally the TLB flush done when context switching out the domain's
> > vCPU-s the last time before destroying the domain ought to be
> > sufficient, so in principle DomU handling could be made to match hwdom's.
> >
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I am still concerned about the added complexity for no obvious use case.
> 
> Under what circumstances do we expect XPTI-ness to come and go on a
> system, outside of custom dev-testing scenarios?

XPTI-ness will be sticky, in the sense that once enabled it cannot be
disabled anymore.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:19:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:19:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc5iP-0000Bb-Ol; Fri, 22 May 2020 11:19:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc5iO-0000BW-Bl
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:19:48 +0000
X-Inumbo-ID: 24f6bd45-9c1e-11ea-abb2-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 24f6bd45-9c1e-11ea-abb2-12813bfff9fa;
 Fri, 22 May 2020 11:19:47 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 827xNiK5/8ob4m5UXYy49GyQHOI+OOajdsyGOfrpNNOw+xD7Da6ppFGieENMyiTEC5vQevqbtq
 F6OlV5OqrVzO9bUf1a7DEhyF3MxhNH0LDJqU0voMxjQxVoBu+oA1/8m2iOVXZ24KU2AWH1q6Dc
 ZRW1v9MXlsQ3IO+bxOgM2yRTF56PiFfH/YBa+yIGIukFNZiMgpjM1G7JCEYECaVvpDNw605dlV
 nmlvcvK50NjsrLjlOTaWRkMzQnoMJhnS/mVzeHIzezt9ZJ1kTo8U4GyswtLIspg47fdjJ9sgHi
 gkc=
X-SBRS: 2.7
X-MesageID: 18452004
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18452004"
Date: Fri, 22 May 2020 13:19:40 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Message-ID: <20200522111940.GB54375@Air-de-Roger>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
 <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
 <20200522090553.eegs4fcltfqjuhzo@debian>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200522090553.eegs4fcltfqjuhzo@debian>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 10:05:53AM +0100, Wei Liu wrote:
> On Fri, May 22, 2020 at 08:41:17AM +0000, Bertrand Marquis wrote:
> > Hi,
> > 
> > As a consequence of this fix, the following has been committed (I guess from regenerating the configure scripts):
> > diff --git a/tools/configure b/tools/configure
> > index 375430df3f..36596389b8 100755
> > --- a/tools/configure
> > +++ b/tools/configure
> > @@ -4678,6 +4678,10 @@ for ldflag in $APPEND_LIB
> >  do
> >      APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> >  done
> > +if  ! -z $EXTRA_PREFIX ; then
> > +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> > +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> > +fi
> >  CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
> > LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"
> > 
> > This should be:
> > if  [ ! -z $EXTRA_PREFIX ]; then
> > 
> > As on other configure scripts.
> > 
> > During configure I now get the following error:
> > ./configure: line 4681: -z: command not found
> > 
> > The error is ignored, but -L/lib and -I/include still end up added to the CPPFLAGS and LDFLAGS.
> > 
> > What would be the procedure to actually fix this (since the problem, I guess, comes from regenerating the configure script)?
> 
> Does the following patch work for you?
> 
> diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> index 08f5c983cc63..cd34c139bc94 100644
> --- a/m4/set_cflags_ldflags.m4
> +++ b/m4/set_cflags_ldflags.m4
> @@ -15,7 +15,7 @@ for ldflag in $APPEND_LIB
>  do
>      APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
>  done
> -if [ ! -z $EXTRA_PREFIX ]; then
> +if test ! -z $EXTRA_PREFIX ; then
>      CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
>      LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
>  fi

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

My bad, I assume [] is expanded by m4, as that seems to be part of the
language?

Thanks, Roger.
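Roger's guess can be demonstrated directly: autoconf reads its input through m4 with the quote characters changed to [ and ], so a [ ! -z ... ] conditional in an .m4 fragment has its brackets consumed during expansion. A minimal sketch of the effect (assuming GNU m4 is available; FOO is just a placeholder variable):

```shell
# autoconf effectively does changequote([,]), so m4 treats [ ... ] as a
# quoted string and strips the brackets when emitting the output:
printf 'changequote([,])dnl\nif [ ! -z $FOO ]; then\n' | m4
# The brackets vanish, leaving the broken line seen in tools/configure:
#   if  ! -z $FOO ; then

# Spelling the conditional with `test` uses no quote characters at all,
# so it survives m4 expansion unchanged:
printf 'changequote([,])dnl\nif test ! -z "$FOO"; then\n' | m4
```

The first command reproduces the exact broken line that landed in tools/configure; the second shows why the `test` spelling is immune.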


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:25:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:25:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc5nV-00013Y-CV; Fri, 22 May 2020 11:25:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nXpb=7E=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jc5nT-00013T-QL
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:25:03 +0000
X-Inumbo-ID: dfff8454-9c1e-11ea-abb2-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dfff8454-9c1e-11ea-abb2-12813bfff9fa;
 Fri, 22 May 2020 11:25:01 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id x14so4438432wrp.2
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 04:25:01 -0700 (PDT)
X-Received: by 2002:adf:e588:: with SMTP id l8mr271699wrm.255.1590146700383;
 Fri, 22 May 2020 04:25:00 -0700 (PDT)
Received: from localhost.localdomain (82.149.115.87.dyn.plus.net.
 [87.115.149.82])
 by smtp.gmail.com with ESMTPSA id 1sm648806wms.25.2020.05.22.04.24.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 22 May 2020 04:24:59 -0700 (PDT)
From: Wei Liu <wl@xen.org>
To: Xen Development List <xen-devel@lists.xenproject.org>
Subject: [PATCH] m4: use test instead of []
Date: Fri, 22 May 2020 12:24:57 +0100
Message-Id: <20200522112457.6640-1-wl@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Roger Pau Monne <roger.pau@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

It is reported that [] was removed by autoconf, which caused the
following error:

  ./configure: line 4681: -z: command not found

Switch to test. That's what is used throughout our configure scripts.

Reported-by: Bertrand Marquis <Bertrand.Marquis@arm.com>
Fixes: 8a6b1665d987 ("configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS")
Reviewed-by: Roger Pau Monne <roger.pau@citrix.com>
Signed-off-by: Wei Liu <wl@xen.org>
---
Run autogen.sh before committing.

Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
---
 m4/set_cflags_ldflags.m4 | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
index 08f5c983cc63..cd34c139bc94 100644
--- a/m4/set_cflags_ldflags.m4
+++ b/m4/set_cflags_ldflags.m4
@@ -15,7 +15,7 @@ for ldflag in $APPEND_LIB
 do
     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
 done
-if [ ! -z $EXTRA_PREFIX ]; then
+if test ! -z $EXTRA_PREFIX ; then
     CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
     LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
 fi
-- 
2.20.1
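As a sanity check, the corrected conditional can be exercised outside of configure. This is only an illustrative sketch (the /opt/xen value is made up); quoting the variable, or using test -n, additionally guards against word splitting when EXTRA_PREFIX is unset:

```shell
#!/bin/sh
# What the regenerated configure snippet does once `test` is used:
EXTRA_PREFIX=/opt/xen          # illustrative value
CPPFLAGS=""
LDFLAGS=""
if test ! -z "$EXTRA_PREFIX"; then
    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
fi
echo "CPPFLAGS=$CPPFLAGS"
echo "LDFLAGS=$LDFLAGS"
```

With EXTRA_PREFIX empty or unset the condition is false, so the bogus -I/include and -L/lib flags produced by the broken bracket expansion no longer appear.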



From xen-devel-bounces@lists.xenproject.org Fri May 22 11:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc62E-0002jL-Mw; Fri, 22 May 2020 11:40:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aod8=7E=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jc62C-0002jG-Ru
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:40:17 +0000
X-Inumbo-ID: 00c7dbbc-9c21-11ea-abb5-12813bfff9fa
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.61]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00c7dbbc-9c21-11ea-abb5-12813bfff9fa;
 Fri, 22 May 2020 11:40:15 +0000 (UTC)
Received: from DB6PR0301CA0047.eurprd03.prod.outlook.com (2603:10a6:4:54::15)
 by VI1PR08MB2768.eurprd08.prod.outlook.com (2603:10a6:802:19::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3000.20; Fri, 22 May
 2020 11:40:13 +0000
Received: from DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:54:cafe::7b) by DB6PR0301CA0047.outlook.office365.com
 (2603:10a6:4:54::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.24 via Frontend
 Transport; Fri, 22 May 2020 11:40:13 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT027.mail.protection.outlook.com (10.152.20.121) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 22 May 2020 11:40:13 +0000
Received: ("Tessian outbound 952576a3272a:v57");
 Fri, 22 May 2020 11:40:13 +0000
Received: from f96988efb91d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 D8C981BB-DB24-427B-A764-EB8EAABD3FC6.1; 
 Fri, 22 May 2020 11:40:08 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f96988efb91d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 22 May 2020 11:40:08 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3915.eurprd08.prod.outlook.com (2603:10a6:10:34::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.23; Fri, 22 May
 2020 11:40:06 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3000.034; Fri, 22 May 2020
 11:40:06 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Thread-Topic: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Thread-Index: AQHWIr9BT1TmXNUg30iaUVp4td7OCaiz5AWAgAAG4ICAACVhAIAABbUA
Date: Fri, 22 May 2020 11:40:06 +0000
Message-ID: <26610F4A-34B2-4B61-BEB2-ED2D16283B81@arm.com>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
 <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
 <20200522090553.eegs4fcltfqjuhzo@debian>
 <20200522111940.GB54375@Air-de-Roger>
In-Reply-To: <20200522111940.GB54375@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <BF5D092CB1AF6B4C935BC35A460887BD@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 22 May 2020, at 12:19, Roger Pau Monné <roger.pau@citrix.com> wrote:
> 
> On Fri, May 22, 2020 at 10:05:53AM +0100, Wei Liu wrote:
>> On Fri, May 22, 2020 at 08:41:17AM +0000, Bertrand Marquis wrote:
>>> Hi,
>>> 
>>> As a consequence of this fix, the following was committed (presumably from regenerating the configure scripts):
>>> diff --git a/tools/configure b/tools/configure
>>> index 375430df3f..36596389b8 100755
>>> --- a/tools/configure
>>> +++ b/tools/configure
>>> @@ -4678,6 +4678,10 @@ for ldflag in $APPEND_LIB
>>> do
>>>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
>>> done
>>> +if  ! -z $EXTRA_PREFIX ; then
>>> +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
>>> +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
>>> +fi
>>> CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
>>> LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"
>>> 
>>> This should be:
>>> if  [ ! -z $EXTRA_PREFIX ]; then
>>> 
>>> As on other configure scripts.
>>> 
>>> During configure I now get the following error:
>>> ./configure: line 4681: -z: command not found
>>> 
>>> The error is ignored, but -L/lib and -I/include still end up added to the CPPFLAGS and LDFLAGS.
>>> 
>>> What would be the procedure to actually fix this (since the problem, I guess, comes from regenerating the configure script)?
>> 
>> Does the following patch work for you?
>> 
>> diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
>> index 08f5c983cc63..cd34c139bc94 100644
>> --- a/m4/set_cflags_ldflags.m4
>> +++ b/m4/set_cflags_ldflags.m4
>> @@ -15,7 +15,7 @@ for ldflag in $APPEND_LIB
>> do
>>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
>> done
>> -if [ ! -z $EXTRA_PREFIX ]; then
>> +if test ! -z $EXTRA_PREFIX ; then
>>     CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
>>     LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
>> fi
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

> 
> My bad, I assume [] is expanded by m4, as that seems to be part of the
> language?
> 
> Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:41:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:41:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc62l-0002op-W9; Fri, 22 May 2020 11:40:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc62l-0002oi-0v
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:40:51 +0000
X-Inumbo-ID: 15aec0d6-9c21-11ea-abb5-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15aec0d6-9c21-11ea-abb5-12813bfff9fa;
 Fri, 22 May 2020 11:40:50 +0000 (UTC)
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:40818
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc62g-000gMc-KU (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 12:40:46 +0100
Subject: Re: [PATCH v3 2/5] x86/mm: honor opt_pcid also for 32-bit PV domains
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <74eb1e77-7445-92fa-25b1-ece1d6699eb9@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a56bd0b8-0b46-7252-13e1-9acf642b7935@citrix.com>
Date: Fri, 22 May 2020 12:40:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <74eb1e77-7445-92fa-25b1-ece1d6699eb9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/09/2019 16:23, Jan Beulich wrote:
> I can't see any technical or performance reason why we should treat
> 32-bit PV different from 64-bit PV in this regard.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

There are technical reasons, and a very good perf reason not to...

There is no such thing as a user/kernel split for 32bit guests (as
TF_kernel_mode remains set unconditionally), and there is no such thing
as an XPTI split (32bit code can't attack Xen using meltdown).

What you would gain is the perf hit of maintaining unused PCIDs' worth
of mappings (seeing as INVPCID is horribly expensive even on modern CPUs).

The only way this might not hurt performance is if it was tied to a PV
ABI extension letting 32bit PV guests split their user/kernel mappings
and have Xen handle the transition automatically, at which point a
user/kernel PCID split in Xen would be better than the guest kernel
trying to do KPTI on its own.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:42:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:42:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc63n-0002w2-9w; Fri, 22 May 2020 11:41:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nXpb=7E=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jc63m-0002vu-I6
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:41:54 +0000
X-Inumbo-ID: 3b807a2a-9c21-11ea-abb5-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3b807a2a-9c21-11ea-abb5-12813bfff9fa;
 Fri, 22 May 2020 11:41:53 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id l17so9876852wrr.4
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 04:41:53 -0700 (PDT)
X-Received: by 2002:adf:f4d0:: with SMTP id h16mr3087077wrp.230.1590147712805; 
 Fri, 22 May 2020 04:41:52 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a184sm9469377wmh.24.2020.05.22.04.41.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 22 May 2020 04:41:52 -0700 (PDT)
Date: Fri, 22 May 2020 11:41:50 +0000
From: Wei Liu <wl@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [PATCH 2/3] configure: also add EXTRA_PREFIX to {CPP/LD}FLAGS
Message-ID: <20200522114150.txqx7q2menpouekk@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200505092454.9161-1-roger.pau@citrix.com>
 <20200505092454.9161-3-roger.pau@citrix.com>
 <C053A44F-FFDE-4C07-B1FD-76FA8456ADCD@arm.com>
 <20200522090553.eegs4fcltfqjuhzo@debian>
 <20200522111940.GB54375@Air-de-Roger>
 <26610F4A-34B2-4B61-BEB2-ED2D16283B81@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <26610F4A-34B2-4B61-BEB2-ED2D16283B81@arm.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 11:40:06AM +0000, Bertrand Marquis wrote:
> 
> 
> > On 22 May 2020, at 12:19, Roger Pau Monné <roger.pau@citrix.com> wrote:
> > 
> > On Fri, May 22, 2020 at 10:05:53AM +0100, Wei Liu wrote:
> >> On Fri, May 22, 2020 at 08:41:17AM +0000, Bertrand Marquis wrote:
> >>> Hi,
> >>> 
> >>> As a consequence of this fix, the following was committed (presumably from regenerating the configure scripts):
> >>> diff --git a/tools/configure b/tools/configure
> >>> index 375430df3f..36596389b8 100755
> >>> --- a/tools/configure
> >>> +++ b/tools/configure
> >>> @@ -4678,6 +4678,10 @@ for ldflag in $APPEND_LIB
> >>> do
> >>>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> >>> done
> >>> +if  ! -z $EXTRA_PREFIX ; then
> >>> +    CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> >>> +    LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> >>> +fi
> >>> CPPFLAGS="$PREPEND_CPPFLAGS $CPPFLAGS $APPEND_CPPFLAGS"
> >>> LDFLAGS="$PREPEND_LDFLAGS $LDFLAGS $APPEND_LDFLAGS"
> >>> 
> >>> This should be:
> >>> if  [ ! -z $EXTRA_PREFIX ]; then
> >>> 
> >>> As on other configure scripts.
> >>> 
> >>> During configure I now get the following error:
> >>> ./configure: line 4681: -z: command not found
> >>> 
> >>> The error is ignored, but -L/lib and -I/include still end up added to the CPPFLAGS and LDFLAGS.
> >>> 
> >>> What would be the procedure to actually fix this (since the problem, I guess, comes from regenerating the configure script)?
> >> 
> >> Does the following patch work for you?
> >> 
> >> diff --git a/m4/set_cflags_ldflags.m4 b/m4/set_cflags_ldflags.m4
> >> index 08f5c983cc63..cd34c139bc94 100644
> >> --- a/m4/set_cflags_ldflags.m4
> >> +++ b/m4/set_cflags_ldflags.m4
> >> @@ -15,7 +15,7 @@ for ldflag in $APPEND_LIB
> >> do
> >>     APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
> >> done
> >> -if [ ! -z $EXTRA_PREFIX ]; then
> >> +if test ! -z $EXTRA_PREFIX ; then
> >>     CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"
> >>     LDFLAGS="$LDFLAGS -L$EXTRA_PREFIX/lib"
> >> fi
> > 
> > Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> Reviewed-by: Bertrand Marquis <bertrand.marquis@arm.com>

Thanks. I will transfer your tag to the proper patch I just sent.

Wei.


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:43:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 11:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc64t-00034D-O2; Fri, 22 May 2020 11:43:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc64s-00033v-9D
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 11:43:02 +0000
X-Inumbo-ID: 638dc8c4-9c21-11ea-abb5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 638dc8c4-9c21-11ea-abb5-12813bfff9fa;
 Fri, 22 May 2020 11:43:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5B99BAE2C;
 Fri, 22 May 2020 11:43:02 +0000 (UTC)
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
 <17f1b674-92f9-6ee9-8e10-0fc30f055fe8@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cc806683-8a56-3876-6bd4-1ab660347440@suse.com>
Date: Fri, 22 May 2020 13:42:53 +0200
In-Reply-To: <17f1b674-92f9-6ee9-8e10-0fc30f055fe8@citrix.com>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>

On 22.05.2020 13:00, Andrew Cooper wrote:
> On 25/09/2019 16:23, Jan Beulich wrote:
>> When there's no XPTI-enabled PV domain at all, there's no need to issue
>> respective TLB flushes. Hardwire opt_xpti_* to false when !PV, and
>> record the creation of PV domains by bumping opt_xpti_* accordingly.
>>
>> As to the sticky opt_xpti_domu vs increment/decrement of opt_xpti_hwdom,
>> this is done this way to avoid
>> (a) widening the former variable,
>> (b) any risk of a missed flush, which would result in an XSA if a DomU
>>     was able to exercise it, and
>> (c) any races updating the variable.
>> Fundamentally the TLB flush done when context switching out the domain's
>> vCPU-s the last time before destroying the domain ought to be
>> sufficient, so in principle DomU handling could be made to match hwdom's.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I am still concerned about the added complexity for no obvious use case.
> 
> Under what circumstances do we expect XPTI-ness to come and go on a
> system, outside of custom dev-testing scenarios?

Run a PVH Dom0 with just HVM guests for a while on a system, until you
find a need to run a PV guest there (perhaps because of an emergency).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 11:58:33 2020
Subject: Re: [PATCH v3 1/5] x86: suppress XPTI-related TLB flushes when
 possible
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <3ce4ab2c-8cb6-1482-6ce9-3d5b019e10c1@suse.com>
 <ae47cb2c-2fff-cd08-0a26-683cef1f3303@suse.com>
 <17f1b674-92f9-6ee9-8e10-0fc30f055fe8@citrix.com>
 <20200522111337.GA54375@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e6071bf3-360e-fd7d-849e-dfbd8ccbe142@citrix.com>
Date: Fri, 22 May 2020 12:58:09 +0100
In-Reply-To: <20200522111337.GA54375@Air-de-Roger>
Cc: George Dunlap <George.Dunlap@eu.citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>

On 22/05/2020 12:13, Roger Pau Monné wrote:
> On Fri, May 22, 2020 at 12:00:14PM +0100, Andrew Cooper wrote:
>> On 25/09/2019 16:23, Jan Beulich wrote:
>>> When there's no XPTI-enabled PV domain at all, there's no need to issue
>>> respective TLB flushes. Hardwire opt_xpti_* to false when !PV, and
>>> record the creation of PV domains by bumping opt_xpti_* accordingly.
>>>
>>> As to the sticky opt_xpti_domu vs increment/decrement of opt_xpti_hwdom,
>>> this is done this way to avoid
>>> (a) widening the former variable,
>>> (b) any risk of a missed flush, which would result in an XSA if a DomU
>>>     was able to exercise it, and
>>> (c) any races updating the variable.
>>> Fundamentally the TLB flush done when context switching out the domain's
>>> vCPU-s the last time before destroying the domain ought to be
>>> sufficient, so in principle DomU handling could be made to match hwdom's.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> I am still concerned about the added complexity for no obvious use case.
>>
>> Under what circumstances do we expect XPTI-ness to come and go on a
>> system, outside of custom dev-testing scenarios?
> XPTI-ness will be sticky, in the sense that once enabled cannot be
> disabled anymore.

I guess the question was a little too rhetorical, so let's spell it out.

You're either on Meltdown vulnerable hardware, or not.  If not, none of
this logic is relevant (AFAICT).

If you're on Meltdown-vulnerable hardware and in a production
environment, you're running with XPTI (at which point none of this
complexity is relevant).

The only plausible case I can see where this would make a difference is
a dev environment starting with a non-XPTI dom0, then booting an XPTI
guest, at which point can we seriously justify bizarre logic like:

    opt_xpti_hwdom -= IS_ENABLED(CONFIG_LATE_HWDOM) &&
                      !d->domain_id && opt_xpti_hwdom;

just for an alleged perf improvement in a development corner case?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 22 12:00:34 2020
Subject: Re: [PATCH] x86: refine guest_mode()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <7b62d06c-1369-2857-81c0-45e2434357f4@suse.com>
 <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
 <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
 <20200520151326.GM54375@Air-de-Roger>
 <38d546f9-8043-8d94-8298-8fd035078a8a@suse.com>
 <20200522104844.GY54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a31bd761-54eb-56b8-7c60-93202d26e7d0@suse.com>
Date: Fri, 22 May 2020 14:00:22 +0200
In-Reply-To: <20200522104844.GY54375@Air-de-Roger>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>

On 22.05.2020 12:48, Roger Pau Monné wrote:
> On Fri, May 22, 2020 at 11:52:42AM +0200, Jan Beulich wrote:
>> On 20.05.2020 17:13, Roger Pau Monné wrote:
>>> OK, so I think I'm starting to understand this all. Sorry it's taken
>>> me so long. So it's my understanding that diff != 0 can only happen in
>>> Xen context, or when in an IST that has a different stack (ie: MCE, NMI
>>> or DF according to current.h) and running in PV mode?
>>>
>>> Wouldn't it then be fine to use (r)->cs & 3 to check we are in guest
>>> mode if diff != 0? I see a lot of other places where cs & 3 is already
>>> used to that effect AFAICT (like entry.S).
>>
>> Technically this would be correct afaics, but the idea with all this
>> is (or should I say "looks to be"?) to have the checks be as tight as
>> possible, to make sure we don't mistakenly consider something "guest
>> mode" which really isn't. IOW your suggestion would be fine with me
>> if we could exclude bugs anywhere in the code. But since this isn't
>> realistic, I consider your suggestion to be relaxing things by too
>> much.
> 
> OK, so I take it that (long term) we might also want to change the cs & 3
> checks in entry.S to check against __HYPERVISOR_CS explicitly?

I didn't think so, no (not least because there is, afaik, no
guarantee that EFI runtime calls couldn't play with segment
registers; they shouldn't, yes, but there are a lot of other "should"s
many don't obey). Those are guaranteed PV-only code paths. The
main issue here is that ->cs cannot be relied upon when a frame
points at HVM state.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 12:02:32 2020
From: Jan Beulich <jbeulich@suse.com>
Subject: Xen 4.13.1 released
To: xen-announce@lists.xenproject.org
Message-ID: <c6fd6980-06a7-c50a-b305-4ba394392866@suse.com>
Date: Fri, 22 May 2020 14:02:26 +0200
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>

All,

I am pleased to announce the release of Xen 4.13.1. This is available
immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.13
(tag RELEASE-4.13.1) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-13-series/xen-project-4-13-1/
(where a list of changes can also be found).

We recommend that all users of the 4.13 stable series update to this
first point release.

Regards, Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 12:03:14 2020
From: Jan Beulich <jbeulich@suse.com>
Subject: Xen 4.12.3 released
To: xen-announce@lists.xenproject.org
Message-ID: <87784992-fbef-e056-b5c6-48c2a258a225@suse.com>
Date: Fri, 22 May 2020 14:03:01 +0200
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>

All,

I am pleased to announce the release of Xen 4.12.3. This is available
immediately from its git repository
http://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=refs/heads/stable-4.12
(tag RELEASE-4.12.3) or from the XenProject download page
https://xenproject.org/downloads/xen-project-archives/xen-project-4-12-series/xen-project-4-12-3/
(where a list of changes can also be found).

We recommend that all users of the 4.12 stable series update to this
latest point release.

Regards, Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 12:49:31 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150315-mainreport@xen.org>
Subject: [xen-unstable test] 150315: tolerable FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 12:48:55 +0000

flight 150315 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150315/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150285
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150285
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150285
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150285
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150285
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150285
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150285
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150285
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150285
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150285
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150285
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  dacdbf7088d6a3705a9831e73991c2b14c519a65
baseline version:
 xen                  dacdbf7088d6a3705a9831e73991c2b14c519a65

Last test of basis   150315  2020-05-22 01:51:14 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri May 22 12:57:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 12:57:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7Eg-0001Z1-CY; Fri, 22 May 2020 12:57:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tdEc=7E=samsung.com=m.szyprowski@srs-us1.protection.inumbo.net>)
 id 1jc7Ee-0001Yv-Iv
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 12:57:12 +0000
X-Inumbo-ID: bfce60d0-9c2b-11ea-b9cf-bc764e2007e4
Received: from mailout2.w1.samsung.com (unknown [210.118.77.12])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bfce60d0-9c2b-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 12:57:10 +0000 (UTC)
Received: from eucas1p1.samsung.com (unknown [182.198.249.206])
 by mailout2.w1.samsung.com (KnoxPortal) with ESMTP id
 20200522125710euoutp023290a64a1c815ef9fe50c3c339252a67~RWwc6Oqo81640916409euoutp02B
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 12:57:09 +0000 (GMT)
DKIM-Filter: OpenDKIM Filter v2.11.0 mailout2.w1.samsung.com
 20200522125710euoutp023290a64a1c815ef9fe50c3c339252a67~RWwc6Oqo81640916409euoutp02B
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=samsung.com;
 s=mail20170921; t=1590152230;
 bh=YNTaZW2eCrOBB+/EZo2VHocehjAUUuW398emjA1vya4=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=s8y14ezWQp/obQT5eQzlHBx2D6Ufr/neV8oEDj4QmWuKnIp4m5g7xix59W6ecmC7N
 T/nvlIAf/rGSuF7c0Ti7mHHY3RlU0vsLq6Ckm1BJuAlLy2TKWKVYz0sK3Dutt+bWO8
 j/Kn/oVcRQpDbsN2pVw4bZHwV4KrnWb/mSs6tuA8=
Received: from eusmges1new.samsung.com (unknown [203.254.199.242]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTP id
 20200522125709eucas1p2c9cc74a5e6327f14ff2fec8d00997ac0~RWwcNyFI80249702497eucas1p2n;
 Fri, 22 May 2020 12:57:09 +0000 (GMT)
Received: from eucas1p2.samsung.com ( [182.198.249.207]) by
 eusmges1new.samsung.com (EUCPMTA) with SMTP id 58.BE.61286.52CC7CE5; Fri, 22
 May 2020 13:57:09 +0100 (BST)
Received: from eusmtrp2.samsung.com (unknown [182.198.249.139]) by
 eucas1p2.samsung.com (KnoxPortal) with ESMTPA id
 20200522125708eucas1p233b80b0741f087a84d47f24b6d91985f~RWwbxlE_S0903009030eucas1p2y;
 Fri, 22 May 2020 12:57:08 +0000 (GMT)
Received: from eusmgms1.samsung.com (unknown [182.198.249.179]) by
 eusmtrp2.samsung.com (KnoxPortal) with ESMTP id
 20200522125708eusmtrp2ce3887019627c346b790bcbe15a7947f~RWwbwTwIw1358513585eusmtrp2R;
 Fri, 22 May 2020 12:57:08 +0000 (GMT)
X-AuditID: cbfec7f2-ef1ff7000001ef66-6f-5ec7cc252cc0
Received: from eusmtip1.samsung.com ( [203.254.199.221]) by
 eusmgms1.samsung.com (EUCPMTA) with SMTP id 3E.42.08375.42CC7CE5; Fri, 22
 May 2020 13:57:08 +0100 (BST)
Received: from AMDC2765.digital.local (unknown [106.120.51.73]) by
 eusmtip1.samsung.com (KnoxPortal) with ESMTPA id
 20200522125708eusmtip1f843fc2153bbac42a292dde233075588~RWwbEtm5i3030830308eusmtip1G;
 Fri, 22 May 2020 12:57:08 +0000 (GMT)
From: Marek Szyprowski <m.szyprowski@samsung.com>
To: dri-devel@lists.freedesktop.org, iommu@lists.linux-foundation.org,
 linaro-mm-sig@lists.linaro.org, linux-kernel@vger.kernel.org
Subject: [PATCH v5 39/38] drm: xen: fix common struct sg_table related issues
Date: Fri, 22 May 2020 14:56:52 +0200
Message-Id: <20200522125652.18435-1-m.szyprowski@samsung.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20200513132114.6046-1-m.szyprowski@samsung.com>
X-Brightmail-Tracker: H4sIAAAAAAAAA0VSWUwTURTNm5m2A2lhLChPMIKNmGAUJPIxCUowGjN+GCQxxBhBqkxYpIAt
 rYIfEEAgpRCEDwgg4gbYssm+uLCIxUgGWSSgRTajghCwQFmMVIap+HfuWd65uXk4Kn7Bc8TD
 o2JpeZQ0UsK3xhrfrvcdde3VBx0bNdmSmcw7hHyeX80jzY33UHJoZYFPPtN1I2TJa29yeWgS
 IWunh3nkYGsRn6x8MyYgF3ULPLJj8SuPXK3PRXxFVEVxBaAMT1sA9dJUglFNpgkeNZ6hR6i6
 JwnU581plModKQNU1+IQRrWNJvKprHotoJZq918QXrY+EUJHhqtouYdPsHVYkXEKi9kgbmf9
 mkcSQYGNGuA4JLzgj5azamCNi4lyANM0sxg3LANY+qXQMiwBONdbg6iB1XbC/KkZ5YQyABt0
 hfydiKmsFLAuPuEJ1fNqPovtibsA9mQKWRNKrCGQKe0RsIId4Qcfd+RsBzDCFaZWt2EsFhEn
 YRHzG+XqnKGupn0bW23xxoZ1HvsQJBgB1LYsCzjTGbhmnsA4bAdn9fUWfh80tzxAuEAygJNM
 pYAbNAAOJuUDzuUNDcwGn70HSrjB6lYPjj4F+yaSUe5MNnBkfhdLo1swpzHPQotgeqqYcx+C
 BfqqndqODwOW/SmY1tVmOVc2gNrMfiQbOBf8LysBQAscaKVCFkorPKPoW+4KqUyhjAp1vx4t
 qwVbf+r9pt7YDFYGrnUCAgcSoYho1QeJeVKVIk7WCSCOSuxFD227g8SiEGlcPC2PvipXRtKK
 TuCEYxIH0fFHM4FiIlQaS9+g6Rha/k9FcCvHRIAFCjGlMHXmSsJg3rqmZpbRaMTdScOB5y6t
 1+WQU2NU8ccn5r1ZQcHO8VW9PqXnIwwZvuW946tNpp+ky528g6fz5+r+9Dn5Od484hWGvPLP
 Kw9w2/OdSVUdkDh57k4x1Fz0D8hua9d+S+jyu48CXbBLpSo9i5b1+xvDUyIKiiWYIkzqeRiV
 K6R/AfS9l3RPAwAA
X-Brightmail-Tracker: H4sIAAAAAAAAA+NgFprHIsWRmVeSWpSXmKPExsVy+t/xu7oqZ47HGfxYoGPRe+4kk8XGGetZ
 Lf5vm8hsceXrezaLlauPMlks2G9t8eXKQyaLTY+vsVpc3jWHzWLtkbvsFh9Wv2e1OPjhCavF
 9y2TmRx4PdbMW8PocWfpTkaPvd8WsHhs//aA1eN+93Emj81L6j1u/3vM7DH5xnJGj8MfrrB4
 7L7ZwObRt2UVo8fnTXIBPFF6NkX5pSWpChn5xSW2StGGFkZ6hpYWekYmlnqGxuaxVkamSvp2
 NimpOZllqUX6dgl6GXM+PWIp+CVQ0ffxLVMD4yy+LkZODgkBE4n/t3YwdzFycQgJLGWUmNe4
 mgkiISNxcloDK4QtLPHnWhcbRNEnRolVp5+zgSTYBAwlut5CJEQEOhklpnV/ZAdxmAX+MUmc
 2LsdbJSwgK/EsrUfwDpYBFQl2tbvZgGxeQVsJeac+80MsUJeYvWGA2A2J1D809afYKuFBGwk
 1rbOYZvAyLeAkWEVo0hqaXFuem6xoV5xYm5xaV66XnJ+7iZGYNxsO/Zz8w7GSxuDDzEKcDAq
 8fDu2H48Tog1say4MvcQowQHs5II70L+o3FCvCmJlVWpRfnxRaU5qcWHGE2BjprILCWanA+M
 6bySeENTQ3MLS0NzY3NjMwslcd4OgYMxQgLpiSWp2ampBalFMH1MHJxSDYxqyz02X2Ts7561
 6b280cSlX37ILbB+toZpbzjXEcP7Vb+yA57YdfYbz6/qPLRV++U3sbovi0QvnOXQfax5806o
 k6ziy2m3J3YfT3jE4iD45fShHMe5+j8tlK+YPGu0tKtN6XT24HMOjeafW/Jk/7sb09ZwyU67
 7cB6d8oXjwOux5o80tp1eSuUWIozEg21mIuKEwHG6J9ksQIAAA==
X-CMS-MailID: 20200522125708eucas1p233b80b0741f087a84d47f24b6d91985f
X-Msg-Generator: CA
Content-Type: text/plain; charset="utf-8"
X-RootMTR: 20200522125708eucas1p233b80b0741f087a84d47f24b6d91985f
X-EPHeader: CA
CMS-TYPE: 201P
X-CMS-RootMailID: 20200522125708eucas1p233b80b0741f087a84d47f24b6d91985f
References: <20200513132114.6046-1-m.szyprowski@samsung.com>
 <CGME20200522125708eucas1p233b80b0741f087a84d47f24b6d91985f@eucas1p2.samsung.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bartlomiej Zolnierkiewicz <b.zolnierkie@samsung.com>,
 David Airlie <airlied@linux.ie>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Daniel Vetter <daniel@ffwll.ch>, xen-devel@lists.xenproject.org,
 Robin Murphy <robin.murphy@arm.com>, Christoph Hellwig <hch@lst.de>,
 linux-arm-kernel@lists.infradead.org,
 Marek Szyprowski <m.szyprowski@samsung.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The Documentation/DMA-API-HOWTO.txt file states that the dma_map_sg()
function returns the number of entries created in the DMA address space.
However, the subsequent calls to dma_sync_sg_for_{device,cpu}() and
dma_unmap_sg() must be made with the original number of entries passed to
dma_map_sg().

struct sg_table is a common structure used for describing a non-contiguous
memory buffer, used commonly in the DRM and graphics subsystems. It
consists of a scatterlist with memory pages and DMA addresses (sgl entry),
as well as the number of scatterlist entries: CPU pages (orig_nents entry)
and DMA mapped pages (nents entry).

It turned out to be a common mistake to confuse the nents and orig_nents
entries, calling DMA-mapping functions with the wrong number of entries or
ignoring the number of mapped entries returned by the dma_map_sg()
function.

Fix the code to refer to proper nents or orig_nents entries. This driver
reports the number of the pages in the imported scatterlist, so it should
refer to sg_table->orig_nents entry.

Signed-off-by: Marek Szyprowski <m.szyprowski@samsung.com>
Acked-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
---
For more information, see '[PATCH v5 00/38] DRM: fix struct sg_table nents
vs. orig_nents misuse' thread:
https://lore.kernel.org/linux-iommu/20200513132114.6046-1-m.szyprowski@samsung.com/T/

This patch has been resurrected at Oleksandr Andrushchenko's request. It
was part of the v2 patchset, but I then dropped it because it only fixes
the debug output, so I didn't consider it critical.
---
 drivers/gpu/drm/xen/xen_drm_front_gem.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/gpu/drm/xen/xen_drm_front_gem.c b/drivers/gpu/drm/xen/xen_drm_front_gem.c
index f0b85e0..ba4bdc5 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_gem.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_gem.c
@@ -215,7 +215,7 @@ struct drm_gem_object *
 		return ERR_PTR(ret);
 
 	DRM_DEBUG("Imported buffer of size %zu with nents %u\n",
-		  size, sgt->nents);
+		  size, sgt->orig_nents);
 
 	return &xen_obj->base;
 }
-- 
1.9.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 12:58:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 12:58:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7G5-0001f0-Qw; Fri, 22 May 2020 12:58:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc7G4-0001es-7v
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 12:58:40 +0000
X-Inumbo-ID: f4c1c5fc-9c2b-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4c1c5fc-9c2b-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 12:58:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4EF1EAE56;
 Fri, 22 May 2020 12:58:41 +0000 (UTC)
Subject: Re: [PATCH] xen/trace: Don't dump offline CPUs in
 debugtrace_dump_worker()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200521084422.24073-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dd29969b-d261-59b4-824d-5748a2f831d8@suse.com>
Date: Fri, 22 May 2020 14:58:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521084422.24073-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Xen-devel <xen-devel@lists.xenproject.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.05.2020 10:44, Andrew Cooper wrote:
> The 'T' debugkey reliably wedges on one of my systems, which has a sparse
> APIC_ID layout due to a non-power-of-2 number of cores per socket.  The
> per_cpu(dt_cpu_data, cpu) calculation falls over the deliberately non-canonical
> poison value.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Fri May 22 13:02:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:02:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7Js-0002aR-Bc; Fri, 22 May 2020 13:02:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1fqI=7E=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jc7Jr-0002aM-7K
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:02:35 +0000
X-Inumbo-ID: 80b1fa96-9c2c-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80b1fa96-9c2c-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 13:02:34 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: TnwUKv5bTqmcFXwtCe5H4Hduz2A7z2zRYLoIyXYyW++u2WsKMGr74h/+DhQrzZCmNTwUl3zL5v
 dDUFFTfeuGcGqxV3vounC0T2UFnL3p8/1GbxZiFGFrhY6tciuYSt2EwHdx+aCA0dzhmskAFfbT
 ML3izgfaSOIijc3nd6PwrCRY8ChhxmcrII3QPcjMgAC6x5BS0H+zFO3VeNHwOcRd3AFcvRGLH4
 WotyL7thTPXcz2RxanM8FD6eHkwttM7nm9eabgsAfY+yjTZ/208JPyCv/z6ptJHPPh7QA55I6F
 +s0=
X-SBRS: 2.7
X-MesageID: 18171851
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18171851"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24263.52582.870176.527318@mariner.uk.xensource.com>
Date: Fri, 22 May 2020 14:02:30 +0100
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Subject: Re: [PATCH for-4.14 2/2] tools/libxc: xc_memshr_fork with interrupts
 disabled
In-Reply-To: <c2830cae9affe327170c900731a7ca050ddb91ea.1590101479.git.tamas.lengyel@intel.com>
References: <7666b5bba73a1410446789a0c4ea908376da3487.1590101479.git.tamas.lengyel@intel.com>
 <c2830cae9affe327170c900731a7ca050ddb91ea.1590101479.git.tamas.lengyel@intel.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Tamas K Lengyel writes ("[PATCH for-4.14 2/2] tools/libxc: xc_memshr_fork with interrupts disabled"):
> Toolstack side for creating forks with interrupt injection disabled.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Subject to the hypervisor folks being happy with the underlying
feature.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:04:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7Ly-0002hc-PN; Fri, 22 May 2020 13:04:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc7Lx-0002hX-JI
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:04:45 +0000
X-Inumbo-ID: ce9ce3ec-9c2c-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce9ce3ec-9c2c-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 13:04:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D5DF0AA7C;
 Fri, 22 May 2020 13:04:46 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
 <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
 <20200522102339.GX54375@Air-de-Roger>
 <fe6e5c7f-df0f-5436-a7cd-2949464ab9a7@citrix.com>
 <20200522111146.GZ54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4831dc51-cea1-2870-422b-2af7d6d1f2d6@suse.com>
Date: Fri, 22 May 2020 15:04:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200522111146.GZ54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, andrew.cooper3@citrix.com,
 wl@xen.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.05.2020 13:11, Roger Pau Monné wrote:
> That being said, I also don't like the fact that log-dirty is handled
> differently between EPT and NPT, as on EPT it's handled as a
> misconfig while on NPT it's handled as a violation.

Because, well, there is no concept of misconfig in NPT.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:06:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:06:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7O1-0002qi-5M; Fri, 22 May 2020 13:06:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1fqI=7E=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jc7Nz-0002qd-Q6
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:06:51 +0000
X-Inumbo-ID: 194ada16-9c2d-11ea-9887-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 194ada16-9c2d-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 13:06:50 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: qZk4mYrVsw+c4Gdzpq6OaABIxRPgsXvG+P+jAtdV+8GmocgIrcBu6PXoETdAGjv6yKRWX3R5kL
 QI3sz9mOtBxwg3YwonKtFdFT1qRKJMfsYlBb9uEStjSVhOaq4cuMI+up5Vxs2Dp1FhEFgdH8p/
 r+HM+kcaBAOp2+8zxUjtsCCfOg1Bl6qaMlHnzN7R8e4JpuhT2jAAOKqNtRigqdaiNk521BKtHW
 8gJ0/CVbtCCE4uRl2CIkr+32hMkqipmchih8csZ7ByJsC+nCbUXqWYUh0ynIjenZRKkkOFe6cu
 CXg=
X-SBRS: 2.7
X-MesageID: 18205861
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18205861"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24263.52837.360685.406105@mariner.uk.xensource.com>
Date: Fri, 22 May 2020 14:06:45 +0100
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH] m4: use test instead of []
In-Reply-To: <20200522112457.6640-1-wl@xen.org>
References: <20200522112457.6640-1-wl@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen Development List <xen-devel@lists.xenproject.org>, Ian
 Jackson <Ian.Jackson@citrix.com>, Roger Pau Monne <roger.pau@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Wei Liu writes ("[PATCH] m4: use test instead of []"):
> It is reported that [] was removed by autoconf, which caused the
> following error:
> 
>   ./configure: line 4681: -z: command not found
> 
> Switch to test. That's what is used throughout our configure scripts.

The reason for [ ] being removed is that configure.ac et al are
processed by m4 with quote characters set to [ ].

>      APPEND_LDFLAGS="$APPEND_LDFLAGS -L$ldflag"
>  done
> -if [ ! -z $EXTRA_PREFIX ]; then
> +if test ! -z $EXTRA_PREFIX ; then
>      CPPFLAGS="$CPPFLAGS -I$EXTRA_PREFIX/include"

If $EXTRA_PREFIX contains nothing (or just whitespace) this expands to
  test ! -z
which only works by accident.  It is parsed as
  if not (string_is_nonempty("-z"))

Variable expansions in test expressions should generally be in " ".

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:11:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:11:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7SJ-0003jH-PN; Fri, 22 May 2020 13:11:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc7SI-0003jC-3b
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:11:18 +0000
X-Inumbo-ID: b83ea616-9c2d-11ea-abca-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b83ea616-9c2d-11ea-abca-12813bfff9fa;
 Fri, 22 May 2020 13:11:17 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:43492
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc7SG-000fyW-JW (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 14:11:16 +0100
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
 <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
 <20200522102339.GX54375@Air-de-Roger>
 <fe6e5c7f-df0f-5436-a7cd-2949464ab9a7@citrix.com>
 <20200522111146.GZ54375@Air-de-Roger>
 <4831dc51-cea1-2870-422b-2af7d6d1f2d6@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ef3411ac-9e7c-0ef7-ad9f-c24f8ebf32a6@citrix.com>
Date: Fri, 22 May 2020 14:11:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4831dc51-cea1-2870-422b-2af7d6d1f2d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, wl@xen.org,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 14:04, Jan Beulich wrote:
> On 22.05.2020 13:11, Roger Pau Monné wrote:
>> That being said, I also don't like the fact that log-dirty is handled
>> differently between EPT and NPT, as on EPT it's handled as a
>> misconfig while on NPT it's handled as a violation.
> Because, well, there is no concept of misconfig in NPT.

Indeed.  Intel chose to split EPT errors into two - MISCONFIG for
structural errors (not present, or reserved bits set) and VIOLATION for
permissions errors.

AMD reused the same silicon pagewalker design, so have a single
NPT_FAULT vmexit which behaves much more like a regular pagefault,
encoding structural vs permission errors in the error code.
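As a rough illustration of that split (bit names follow the usual x86 PFEC convention; this is a sketch for discussion, not Xen's actual decode logic, and the authoritative bit semantics are in the AMD APM):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* x86 pagefault-style error code bits, as reused by AMD's NPT fault
 * EXITINFO1.  Names are illustrative, following the PFEC_* convention. */
#define PFEC_present    (1u << 0) /* clear => entry not present (structural) */
#define PFEC_write      (1u << 1)
#define PFEC_user       (1u << 2)
#define PFEC_rsvd       (1u << 3) /* reserved bit set (structural) */
#define PFEC_insn_fetch (1u << 4)

/* A "structural" error is what Intel would report as EPT MISCONFIG:
 * the entry was not present, or had reserved bits set.  Everything
 * else is a permissions error (Intel's EPT VIOLATION). */
static bool npt_fault_is_structural(uint64_t ec)
{
    return !(ec & PFEC_present) || (ec & PFEC_rsvd);
}
```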

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:28:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:28:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7iQ-0004mI-6U; Fri, 22 May 2020 13:27:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc7iP-0004mD-PH
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:27:57 +0000
X-Inumbo-ID: 0c3298d4-9c30-11ea-b9cf-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c3298d4-9c30-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 13:27:57 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:43970
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc7iM-000sCb-M2 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 14:27:54 +0100
Subject: Re: [PATCH v3 1/3] x86: relax GDT check in arch_set_info_guest()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
 <acbaead9-0f6c-3606-e809-57dafe9b3f01@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <58510f15-68d6-c773-5166-a38c72573442@citrix.com>
Date: Fri, 22 May 2020 14:27:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <acbaead9-0f6c-3606-e809-57dafe9b3f01@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 20/05/2020 08:53, Jan Beulich wrote:
> It is wrong for us to check frames beyond the guest specified limit
> (in the compat case another loop bound is already correct).
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'm still not overly convinced this is a good idea, because all it will
allow people to do is write lazy code which breaks on older Xen.

However, if you still insist, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:31:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7lU-0005i8-BU; Fri, 22 May 2020 13:31:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mwuj=7E=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jc7lS-0005hz-UA
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:31:06 +0000
X-Inumbo-ID: 7ccf5000-9c30-11ea-b07b-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ccf5000-9c30-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 13:31:06 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id q2so12593564ljm.10
 for <xen-devel@lists.xenproject.org>; Fri, 22 May 2020 06:31:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=Ak1+Sq274vtepf4F/AMOKn+/SUcTSsWehkImMnDWeMg=;
 b=jKgF7CQcaAjDQDVOHthIQ3XotjTb0RDq7FMZ8hkoFVphja0cVBOQAcG6DeZ8tmWkUB
 eqYcZ1ypAK/GMY7ZMmHIEcwvJxEOhk5lQS2thb4Oc0MaFhdtVjo6ubZ8QmblgfjghmI/
 4E7OTkrq2NAY94p2UwBhKXZL/uRNC6ku6yx/mLyPvGv69QM7lR+uQjIyFPfrEfQXbTxm
 zOWQdFHQRiinT2wGHEJvVdwrwzKb3zmOQb+VI0k42e2Mn4o+NiuT71LJO9RA9y+yYE7r
 oXrzCY9HOFdiFXAM+I2sn8R+m7Tv0iqBZvvWA71lhhkvS/ZwGjW9m1CTfJWQx/MmKAU4
 LKCw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=Ak1+Sq274vtepf4F/AMOKn+/SUcTSsWehkImMnDWeMg=;
 b=ao4Rp/06dKszFzCABijCH8SUYA8uzV5aHWg4kTRkF2EoeVDOIiZCnHlyQbuG0ivnvL
 EKRI5we9d/QRi+vPYPGddvINqiKbNPg0MMtu6jE5IkIkOFrW93LG3jAFMg31pKC2/xJ+
 PSMrLwmmILYyQr6Mc7N8i+rcbVlmQaZn25V+QYmTvLI3vXK39fdjRQzz4yo3EqsYhCf/
 k8j3QU4Fu6LJSC3SM3vzvCZOBFn4UfX9DLIm87EpPFIZfqsua1m7eFq3NugZULUYBb7o
 +Xcmqh5e9p30i/CVaDXaCRzQsvXDWGUhsw6gcxUttYn6/PJ8KUsPFOv8e+JlwMc1rYip
 SRPQ==
X-Gm-Message-State: AOAM532U55hriMBoDNNpD/3kh1AucgK/JeXx0SFEkosk6FHtOWwikbP2
 kiBcA9MGmJX5rptZkk5v/ZyoRqMBPljmeddDfO8=
X-Google-Smtp-Source: ABdhPJwgaQ/wOSeT5ZTxq4/k1vLcj0qJp4eugiaAjpOEkGHNl9iHrkTNb6GbRJZzueZHfr3F2rAfXc+wfjf9fOfzCZY=
X-Received: by 2002:a2e:b0e7:: with SMTP id h7mr6640951ljl.196.1590154264938; 
 Fri, 22 May 2020 06:31:04 -0700 (PDT)
MIME-Version: 1.0
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <4510049C-2AD1-4AE4-B0E5-F4231450EDB6@citrix.com>
 <001301d6301f$0b546cd0$21fd4670$@xen.org>
In-Reply-To: <001301d6301f$0b546cd0$21fd4670$@xen.org>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 22 May 2020 09:30:53 -0400
Message-ID: <CAKf6xptVdXnoU0QVoS6bS_DUS8SkN6Jt2ueGJ0vhX8+SyFLt2g@mail.gmail.com>
Subject: Re: [PATCH v7 00/19] Add support for qemu-xen running in a
 Linux-based stubdomain
To: Paul Durrant <paul@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 5:54 AM Paul Durrant <xadimgnik@gmail.com> wrote:
>
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of George Dunlap
> > Sent: 22 May 2020 10:11
> > To: Jason Andryuk <jandryuk@gmail.com>
> > Cc: Stefano Stabellini <sstabellini@kernel.org>; Julien Grall <julien@xen.org>; Samuel Thibault
> > <samuel.thibault@ens-lyon.org>; Wei Liu <wl@xen.org>; Andrew Cooper <Andrew.Cooper3@citrix.com>; Jan
> > Beulich <jbeulich@suse.com>; Ian Jackson <Ian.Jackson@citrix.com>; Anthony Perard
> > <anthony.perard@citrix.com>; xen-devel <xen-devel@lists.xenproject.org>; Daniel De Graaf
> > <dgdegra@tycho.nsa.gov>
> > Subject: Re: [PATCH v7 00/19] Add support for qemu-xen running in a Linux-based stubdomain
> >
> >
> > > On May 19, 2020, at 2:54 AM, Jason Andryuk <jandryuk@gmail.com> wrote:
> > >
> > > General idea is to allow freely set device_model_version and
> > > device_model_stubdomain_override and choose the right options based on this
> > > choice.  Also, allow to specify path to stubdomain kernel/ramdisk, for greater
> > > flexibility.
> >
> > Excited to see this patch series get in.  But I didn’t really notice any documents explaining how to
> > actually use it — is there a blog post anywhere describing how to get the kernel / initrd image and so
> > on?

Yeah, it's not really collected anywhere, but below are the quick
start instructions.

The cover letter mentioned this repo (forked from Marek's):
https://github.com/jandryuk/qubes-vmm-xen-stubdom-linux
   (branch initramfs-tools, tag for-upstream-v6)

Clone it and then run:
$ make get-sources
$ make -f Makefile.stubdom

Output:
kernel: build/linux/arch/x86/boot/bzImage
ramdisk: build/rootfs/stubdom-linux-rootfs

To make them available system-wide, copy to
/usr/lib/xen/boot/qemu-stubdom-linux-kernel and
/usr/lib/xen/boot/qemu-stubdom-linux-rootfs respectively. Obviously
this should match your installation's "$lib/xen/boot/" location.

A second option is to set paths to those files manually in a VM's
xl.cfg with stubdomain_kernel="/path" and stubdomain_ramdisk="/path".

Update your xl configuration with:
device_model_stubdomain_override = 1
device_model_version = "qemu-xen"

Start the domain and that should be it.  Maybe additionally use
serial = "pty" to access the VM with `xl console -t serial $NAME`.
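For reference, the options above combine into an xl.cfg fragment along these lines (the paths are illustrative and depend on where you installed the images):

```
# Run qemu-xen in a Linux-based stubdomain
device_model_stubdomain_override = 1
device_model_version = "qemu-xen"
stubdomain_kernel = "/usr/lib/xen/boot/qemu-stubdom-linux-kernel"
stubdomain_ramdisk = "/usr/lib/xen/boot/qemu-stubdom-linux-rootfs"
# Optional: expose the stubdomain console via `xl console -t serial $NAME`
serial = "pty"
```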

Some limitations are listed here:
https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/misc/stubdom.txt;h=c717a95d17d2e562639a5574e89df3c4db8712fa;hb=HEAD#l124
Limitations:
 - PCI passthrough requires permissive mode
 - only one nic is supported
 - at most 26 emulated disks are supported (more are still available
   as PV disks)
 - graphics output (VNC/SDL/Spice) not supported

> > Also, would it be possible to add a follow-up series which modifies
> > SUPPORT.md and CHANGELOG.md?
>
> Yes please. In future I think we should encourage the patch to
> CHANGELOG.md to be the last patch of a series such as this.

I can do this.  What is the SUPPORT status for this?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:33:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:33:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7nQ-0005so-Oc; Fri, 22 May 2020 13:33:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc7nP-0005sa-Kw
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:33:07 +0000
X-Inumbo-ID: c48df004-9c30-11ea-b9cf-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c48df004-9c30-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 13:33:06 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LsteYU31AyrEhPUfZTgRE8v7ddfrDt4R/9S1KCMro4qUdmaGpBJvP56a92ktwAOszUbuQMAzNk
 ODj/MLM+IL9J7MJtz4JfurNRZLx36sMbWxxu7RxELg2rsYIaDHHCiu2gcPAI75A+TzuN1CwcwN
 /IPGi1c5RYopwxMnnr3xqeLYeKHUVgf0v0v77NGuEn1/AIyp6avHODLYqgBoFTs5ygnTlW41Wg
 ozwwL/d35BDvX+m6FIwAV7bis0IqPmd4mpwPAkzUDE7K5N7lWL12to+iPKVwWbzpJUdK1tj0R7
 p9M=
X-SBRS: 2.7
X-MesageID: 18174891
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18174891"
Date: Fri, 22 May 2020 15:32:59 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
Message-ID: <20200522133259.GC54375@Air-de-Roger>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
 <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
 <20200522102339.GX54375@Air-de-Roger>
 <fe6e5c7f-df0f-5436-a7cd-2949464ab9a7@citrix.com>
 <20200522111146.GZ54375@Air-de-Roger>
 <4831dc51-cea1-2870-422b-2af7d6d1f2d6@suse.com>
 <ef3411ac-9e7c-0ef7-ad9f-c24f8ebf32a6@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ef3411ac-9e7c-0ef7-ad9f-c24f8ebf32a6@citrix.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, wl@xen.org,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 02:11:15PM +0100, Andrew Cooper wrote:
> On 22/05/2020 14:04, Jan Beulich wrote:
> > On 22.05.2020 13:11, Roger Pau Monné wrote:
> >> That being said, I also don't like the fact that log-dirty is handled
> >> differently between EPT and NPT, as on EPT it's handled as a
> >> misconfig while on NPT it's handled as a violation.
> > Because, well, there is no concept of misconfig in NPT.
> 
> Indeed.  Intel chose to split EPT errors into two - MISCONFIG for
> structural errors (not present, or reserved bits set) and VIOLATION for
> permissions errors.
> 
> AMD reused the same silicon pagewalker design, so have a single
> NPT_FAULT vmexit which behaves much more like a regular pagefault,
> encoding structural vs permission errors in the error code.

Maybe I should clarify, I understand that NPT doesn't have such
differentiation regarding nested page table faults vs EPT, but I feel
like it would be clearer if part of the code could be shared, ie:
unify EPT resolve_misconfig and NPT do_recalc into a single function
for example that uses the necessary p2m-> helpers for the differing
implementations. I think we should be able to tell apart when a NPT
page fault is a recalc one by looking at the bits in the EXITINFO1
error field?
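Just to make the suggestion concrete, the unification could have roughly this shape — a hypothetical sketch only; the hook names and the cut-down p2m struct here are invented for illustration, and the real Xen types (struct p2m_domain etc.) are much richer:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Per-implementation hooks: EPT would wire up resolve_misconfig(),
 * NPT would wire up do_recalc() plus an EXITINFO1-based predicate.
 * Both hook names below are made up for this sketch. */
struct p2m {
    int  (*recalc_entry)(struct p2m *p2m, unsigned long gfn);
    bool (*fault_is_recalc)(uint64_t exit_info);
};

/* Single shared entry point for both the EPT misconfig exit and the
 * NPT fault path.  Returns the hook's result for recalc faults, or
 * -1 so the caller falls through to normal fault handling. */
static int handle_recalc_fault(struct p2m *p2m, unsigned long gfn,
                               uint64_t exit_info)
{
    if ( !p2m->fault_is_recalc(exit_info) )
        return -1;
    return p2m->recalc_entry(p2m, gfn);
}
```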

Anyway, this was just a rant, and it's tangential to the issue at
hand, sorry for distracting.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:34:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7p8-00062u-40; Fri, 22 May 2020 13:34:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc7p7-00062o-I3
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:34:53 +0000
X-Inumbo-ID: 036762a6-9c31-11ea-abd3-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 036762a6-9c31-11ea-abd3-12813bfff9fa;
 Fri, 22 May 2020 13:34:51 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 58067AFEC;
 Fri, 22 May 2020 13:34:53 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <dae35bcf-5b85-a760-9d15-139973215334@citrix.com>
 <506f21d4-ed81-2cd5-46af-162407553c91@citrix.com>
 <15df5f0a-43f7-ca0c-613f-25a1a1a19640@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f5fb6ae3-37b8-1cf9-00dc-e70775792e8c@suse.com>
Date: Fri, 22 May 2020 15:34:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <15df5f0a-43f7-ca0c-613f-25a1a1a19640@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, roger.pau@citrix.com,
 wl@xen.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.05.2020 12:19, Andrew Cooper wrote:
> On 22/05/2020 11:05, Igor Druzhinin wrote:
>> On 22/05/2020 10:45, Andrew Cooper wrote:
>>> On 21/05/2020 22:43, Igor Druzhinin wrote:
>>>> If a recalculation NPT fault hasn't been handled explicitly in
>>>> hvm_hap_nested_page_fault() then it's potentially safe to retry -
>>>> US bit has been re-instated in PTE and any real fault would be correctly
>>>> re-raised next time.
>>>>
>>>> This covers a specific case of migration with vGPU assigned on AMD:
>>>> global log-dirty is enabled and causes immediate recalculation NPT
>>>> fault in MMIO area upon access. This type of fault isn't described
>>>> explicitly in hvm_hap_nested_page_fault (this isn't called on
>>>> EPT misconfig exit on Intel) which results in domain crash.
>>>>
>>>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>>> ---
>>>>  xen/arch/x86/hvm/svm/svm.c | 4 ++++
>>>>  1 file changed, 4 insertions(+)
>>>>
>>>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>>>> index 46a1aac..f0d0bd3 100644
>>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>>> @@ -1726,6 +1726,10 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>>>>          /* inject #VMEXIT(NPF) into guest. */
>>>>          nestedsvm_vmexit_defer(v, VMEXIT_NPF, pfec, gpa);
>>>>          return;
>>>> +    case 0:
>>>> +        /* If a recalculation page fault hasn't been handled - just retry. */
>>>> +        if ( pfec & PFEC_user_mode )
>>>> +            return;
>>> This smells like it is a recipe for livelocks.
>>>
>>> Everything should have been handled properly by the call to
>>> p2m_pt_handle_deferred_changes() which precedes svm_do_nested_pgfault().
>>>
>>> It is legitimate for the MMIO mapping to end up being transiently
>>> recalculated, but the fact that p2m_pt_handle_deferred_changes() doesn't
>>> fix it up suggests that the bug is there.
>>>
>>> Do you have the complete NPT walk to the bad mapping? Do we have
>>> _PAGE_USER in the leaf mapping, or is this perhaps a spurious fault?
>> It does fix it up. The problem is that currently in SVM we enter
>> svm_do_nested_pgfault immediately after p2m_pt_handle_deferred_changes
>> is finished.
> 
> Oh - so we do.  I'd read the entry condition for svm_do_nested_pgfault()
> incorrectly.
> 
> Jan - why did you chose to do it this way?  If
> p2m_pt_handle_deferred_changes() has made a modification, there is
> surely nothing relevant to do in svm_do_nested_pgfault().

Why? There can very well be multiple reasons for a single exit
to occur, and hence it did seem to make sense to allow processing
of such possible further reasons right away, instead of re-
entering the guest just to deal with another VM exit. What I can
accept is that the error code may not correctly represent the
"remaining" fault reason anymore at this point.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:37:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:37:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7ri-0006FC-Kx; Fri, 22 May 2020 13:37:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1fqI=7E=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jc7rh-0006F7-JN
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:37:33 +0000
X-Inumbo-ID: 63763ea6-9c31-11ea-9887-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63763ea6-9c31-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 13:37:33 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 1D+RNv08gLkJTwbk4eBgP2Pnej0xaJZMD3gaEoSuM8ug9MRhupsUJ18/vp9k3vY62KL8qapM7z
 5vgY3c0Djb6L7oB4D3YU0pQxStjVVwrfU9GgkVVSCM1s7+dLmZh8aFZ5EwvKChFCgrHpzcznLP
 c3hOYmr7UsQfjN9uVFpAlqk7pBVZLK1mCrnSjhSZf0/dPPeZvm3WV1MwSkJxXiLr+fQ1MfcDTb
 Vq+dlW4oArVfu+faDeLV6BqO+dV7NQiQWSw0EjUo5TjT+oBjy3E/46aPLaRg5HCyg+blJa72V0
 /6o=
X-SBRS: 2.7
X-MesageID: 18461993
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,421,1583211600"; d="scan'208";a="18461993"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24263.54679.11598.155391@mariner.uk.xensource.com>
Date: Fri, 22 May 2020 14:37:27 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH v7 00/19] Add support for qemu-xen running in a
 Linux-based stubdomain
In-Reply-To: <CAKf6xptVdXnoU0QVoS6bS_DUS8SkN6Jt2ueGJ0vhX8+SyFLt2g@mail.gmail.com>
References: <20200519015503.115236-1-jandryuk@gmail.com>
 <4510049C-2AD1-4AE4-B0E5-F4231450EDB6@citrix.com>
 <001301d6301f$0b546cd0$21fd4670$@xen.org>
 <CAKf6xptVdXnoU0QVoS6bS_DUS8SkN6Jt2ueGJ0vhX8+SyFLt2g@mail.gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 Anthony Perard <anthony.perard@citrix.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("Re: [PATCH v7 00/19] Add support for qemu-xen running in a Linux-based stubdomain"):
> I can do this.  What is the SUPPORT status for this?

I think that given we aren't testing it upstream, the answer
probably has to be "Tech Preview".

In general, I would love to see this (including your stub builder)
being tested by osstest.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:42:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc7wI-0007Gj-KQ; Fri, 22 May 2020 13:42:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc7wI-0007Ge-0r
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:42:18 +0000
X-Inumbo-ID: 0a86a730-9c32-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a86a730-9c32-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 13:42:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E5AC2B00E;
 Fri, 22 May 2020 13:42:14 +0000 (UTC)
Subject: Re: [PATCH v4] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200521092258.82503-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b59a2d2e-e66d-f3a1-dd85-fd7c7eccc98a@suse.com>
Date: Fri, 22 May 2020 15:42:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521092258.82503-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.05.2020 11:22, Roger Pau Monne wrote:
> Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
> BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
> Dispatched Before an Interrupt of The Same Priority Completes".

While the change looks good to me as far as Broadwell goes, I
think it was before this posting that Andrew also pointed at
a specific Haswell erratum instance, still on the v3 thread.
Am I to infer that a v5 will follow, adding the affected Haswell
models to the table?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:51:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc84p-0008GB-Ga; Fri, 22 May 2020 13:51:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc84o-0008G6-10
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:51:06 +0000
X-Inumbo-ID: 47bc7a0c-9c33-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47bc7a0c-9c33-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 13:51:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 00F20B00E;
 Fri, 22 May 2020 13:51:06 +0000 (UTC)
Subject: Re: [PATCH v2] x86/traps: Rework #PF[Rsvd] bit handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200518153820.18170-1-andrew.cooper3@citrix.com>
 <20200521154306.29019-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b99da362-202f-07f2-d3bb-fe2ca82a44ab@suse.com>
Date: Fri, 22 May 2020 15:51:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521154306.29019-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.05.2020 17:43, Andrew Cooper wrote:
> @@ -1439,6 +1418,21 @@ void do_page_fault(struct cpu_user_regs *regs)
>      if ( unlikely(fixup_page_fault(addr, regs) != 0) )
>          return;
>  
> +    /*
> +     * Xen doesn't have reserved bits set in its pagetables, nor do we permit
> +     * PV guests to write any.  Such entries would generally be vulnerable to
> +     * the L1TF sidechannel.
> +     *
> +     * The shadow pagetable logic may use reserved bits as part of
> +     * SHOPT_FAST_FAULT_PATH.  Pagefaults arising from these will be resolved
> +     * via the fixup_page_fault() path.
> +     *
> +     * Anything remaining is an error, constituting corruption of the
> +     * pagetables and probably an L1TF vulnerable gadget.
> +     */
> +    if ( error_code & PFEC_reserved_bit )
> +        goto fatal;
> +
>      if ( unlikely(!guest_mode(regs)) )
>      {
>          enum pf_type pf_type = spurious_page_fault(addr, regs);
> @@ -1457,13 +1451,12 @@ void do_page_fault(struct cpu_user_regs *regs)
>          if ( likely((fixup = search_exception_table(regs)) != 0) )

While I still don't fully agree with declining to fix up such faults
when the fault location has recovery code attached, I realize we're
not going to reach agreement here, so somewhat hesitantly:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:55:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:55:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc88b-0008U9-1G; Fri, 22 May 2020 13:55:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=L400=7E=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jc88Z-0008U4-I4
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:54:59 +0000
X-Inumbo-ID: d28a5992-9c33-11ea-ae69-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d28a5992-9c33-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 13:54:58 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: LOEMMvQG1ac9ouyD5JCkDfvFyZREDLzgACTZ7g+zx61aJv5f68UArgf0/6vGVvrDjGAhps7zm5
 FtHqkmdbfBRzfrufN+xt+E6qeSZesySCVtgVBKEYBl4TUk4K6irHifXGaFMLQk3yPrN7Zyxt2m
 4vA4qrebewRiE0kyd+UvvqMjlr0QVPxpz8XX55ZQf5E0C5l4t0lE6VoCYO5cGXdvgv8BIK+QQ4
 kiSt2hQ6OgN/iKwX6AM+9UE59xvF17VMMIraNWTdadK5lJqqDrJt7+BBJuz5RimStYPcRBVpqV
 OX8=
X-SBRS: 2.7
X-MesageID: 18543226
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18543226"
Date: Fri, 22 May 2020 15:54:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v4] x86/idle: prevent entering C6 with in service
 interrupts on Intel
Message-ID: <20200522135429.GD54375@Air-de-Roger>
References: <20200521092258.82503-1-roger.pau@citrix.com>
 <b59a2d2e-e66d-f3a1-dd85-fd7c7eccc98a@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
In-Reply-To: <b59a2d2e-e66d-f3a1-dd85-fd7c7eccc98a@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 03:42:10PM +0200, Jan Beulich wrote:
> On 21.05.2020 11:22, Roger Pau Monne wrote:
> > Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
> > BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
> > Dispatched Before an Interrupt of The Same Priority Completes".
> 
> While the change looks good to me as far as Broadwell goes, I
> think it was before this posting that Andrew also pointed at
> a specific Haswell erratum instance, still on the v3 thread.
> Am I to imply a v5 will follow adding affected Haswell models
> to the table?

Those refer to a different erratum; see:

https://lore.kernel.org/xen-devel/84486d84-4452-af18-f7e7-753faf5a125d@citrix.com/

We believe this also affects Haswell, but the erratum has not been
published for those CPUs (yet, at least). If/when it is published I'm
happy to add it; in the meantime I think we should go with what Intel
has published.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 22 13:56:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 13:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc89q-00009j-CK; Fri, 22 May 2020 13:56:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jc89p-00009d-45
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 13:56:17 +0000
X-Inumbo-ID: fe154bb2-9c33-11ea-abde-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe154bb2-9c33-11ea-abde-12813bfff9fa;
 Fri, 22 May 2020 13:56:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=oSHYVePbCc2eu4BhzmX0O+n/m4QVkUYHbi+g4K3Ga4Q=; b=qmGs3YiQC4jQp0u6zsWiGPR0s
 N/JgwX1DWHZDWcbH+NycPlDv8KC+ocfQGEEu2GkwEjnC3PufkPZDRKUYj2x93heDgzuorFpU6VQZw
 9aw7qQtU8LCSLFnw9Cl+PFzacFzcwXMkBcTaryk9sjlYc1pLuy7T7DiOdG3GNJafHkJQw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc89i-0003cC-Jn; Fri, 22 May 2020 13:56:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc89i-0006sK-AU; Fri, 22 May 2020 13:56:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jc89i-00009p-9s; Fri, 22 May 2020 13:56:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150319-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150319: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f6d102046817bb5c08876ff78a6a00f4d29ee269
X-Osstest-Versions-That: xen=dacdbf7088d6a3705a9831e73991c2b14c519a65
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 13:56:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150319 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150319/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f6d102046817bb5c08876ff78a6a00f4d29ee269
baseline version:
 xen                  dacdbf7088d6a3705a9831e73991c2b14c519a65

Last test of basis   150280  2020-05-20 18:01:10 Z    1 days
Testing same since   150319  2020-05-22 11:02:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   dacdbf7088..f6d1020468  f6d102046817bb5c08876ff78a6a00f4d29ee269 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:03:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc8Gd-0001BQ-6c; Fri, 22 May 2020 14:03:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc8Gc-0001BL-FC
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:03:18 +0000
X-Inumbo-ID: fbce02d2-9c34-11ea-abe0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbce02d2-9c34-11ea-abe0-12813bfff9fa;
 Fri, 22 May 2020 14:03:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8336FB114;
 Fri, 22 May 2020 14:03:18 +0000 (UTC)
Subject: Re: [PATCH v2] x86/ioemul: Rewrite stub generation to be shadow stack
 compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200522102435.14329-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <502e32d2-5e4c-7c84-417e-7b1e44a0d8b4@suse.com>
Date: Fri, 22 May 2020 16:03:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200522102435.14329-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.05.2020 12:24, Andrew Cooper wrote:
> --- a/xen/arch/x86/ioport_emulate.c
> +++ b/xen/arch/x86/ioport_emulate.c
> @@ -8,7 +8,10 @@
>  #include <xen/sched.h>
>  #include <xen/dmi.h>
>  
> -static bool ioemul_handle_proliant_quirk(
> +unsigned int __read_mostly (*ioemul_handle_quirk)(

I guess the more correct (and long-term supported) placement is

unsigned int (*__read_mostly ioemul_handle_quirk)(

as you mean to modify the variable, not its type.

> --- a/xen/arch/x86/pv/emul-priv-op.c
> +++ b/xen/arch/x86/pv/emul-priv-op.c
> @@ -47,51 +47,97 @@ struct priv_op_ctxt {
>      unsigned int bpmatch;
>  };
>  
> -/* I/O emulation support. Helper routines for, and type of, the stack stub. */
> -void host_to_guest_gpr_switch(struct cpu_user_regs *);
> -unsigned long guest_to_host_gpr_switch(unsigned long);
> +/* I/O emulation helpers.  Use non-standard calling conventions. */
> +void nocall load_guest_gprs(struct cpu_user_regs *);
> +void nocall save_guest_gprs(void);
>  
>  typedef void io_emul_stub_t(struct cpu_user_regs *);
>  
>  static io_emul_stub_t *io_emul_stub_setup(struct priv_op_ctxt *ctxt, u8 opcode,
>                                            unsigned int port, unsigned int bytes)
>  {
> +    /*
> +     * Construct a stub for IN/OUT emulation.
> +     *
> +     * Some platform drivers communicate with the SMM handler using GPRs as a
> +     * mailbox.  Therefore, we must perform the emulation with the hardware
> +     * domain's registers in view.
> +     *
> +     * We write a stub of the following form, using the guest load/save
> +     * helpers (non-standard ABI), and one of several possible stubs
> +     * performing the real I/O.
> +     */
> +    static const char prologue[] = {
> +        0x53,       /* push %rbx */
> +        0x55,       /* push %rbp */
> +        0x41, 0x54, /* push %r12 */
> +        0x41, 0x55, /* push %r13 */
> +        0x41, 0x56, /* push %r14 */
> +        0x41, 0x57, /* push %r15 */
> +        0x57,       /* push %rdi (param for save_guest_gprs) */
> +    };              /* call load_guest_gprs */
> +                    /* <I/O stub> */
> +                    /* call save_guest_gprs */
> +    static const char epilogue[] = {
> +        0x5f,       /* pop %rdi  */
> +        0x41, 0x5f, /* pop %r15  */
> +        0x41, 0x5e, /* pop %r14  */
> +        0x41, 0x5d, /* pop %r13  */
> +        0x41, 0x5c, /* pop %r12  */
> +        0x5d,       /* pop %rbp  */
> +        0x5b,       /* pop %rbx  */
> +        0xc3,       /* ret       */
> +    };
> +
>      struct stubs *this_stubs = &this_cpu(stubs);
>      unsigned long stub_va = this_stubs->addr + STUB_BUF_SIZE / 2;
> -    long disp;
> -    bool use_quirk_stub = false;
> +    unsigned int quirk_bytes = 0;
> +    char *p;
> +
> +    /* Helpers - Read outer scope but only modify p. */
> +#define APPEND_BUFF(b) ({ memcpy(p, b, sizeof(b)); p += sizeof(b); })
> +#define APPEND_CALL(f)                                                  \
> +    ({                                                                  \
> +        long disp = (long)(f) - (stub_va + p - ctxt->io_emul_stub + 5); \
> +        BUG_ON((int32_t)disp != disp);                                  \
> +        *p++ = 0xe8;                                                    \
> +        *(int32_t *)p = disp; p += 4;                                   \
> +    })
>  
>      if ( !ctxt->io_emul_stub )
>          ctxt->io_emul_stub =
>              map_domain_page(_mfn(this_stubs->mfn)) + (stub_va & ~PAGE_MASK);
>  
> -    /* call host_to_guest_gpr_switch */
> -    ctxt->io_emul_stub[0] = 0xe8;
> -    disp = (long)host_to_guest_gpr_switch - (stub_va + 5);
> -    BUG_ON((int32_t)disp != disp);
> -    *(int32_t *)&ctxt->io_emul_stub[1] = disp;
> +    p = ctxt->io_emul_stub;
> +
> +    APPEND_BUFF(prologue);
> +    APPEND_CALL(load_guest_gprs);
>  
> +    /* Some platforms might need to quirk the stub for specific inputs. */
>      if ( unlikely(ioemul_handle_quirk) )
> -        use_quirk_stub = ioemul_handle_quirk(opcode, &ctxt->io_emul_stub[5],
> -                                             ctxt->ctxt.regs);
> +    {
> +        quirk_bytes = ioemul_handle_quirk(opcode, p, ctxt->ctxt.regs);
> +        p += quirk_bytes;
> +    }
>  
> -    if ( !use_quirk_stub )
> +    /* Default I/O stub. */
> +    if ( likely(!quirk_bytes) )
>      {
> -        /* data16 or nop */
> -        ctxt->io_emul_stub[5] = (bytes != 2) ? 0x90 : 0x66;
> -        /* <io-access opcode> */
> -        ctxt->io_emul_stub[6] = opcode;
> -        /* imm8 or nop */
> -        ctxt->io_emul_stub[7] = !(opcode & 8) ? port : 0x90;
> -        /* ret (jumps to guest_to_host_gpr_switch) */
> -        ctxt->io_emul_stub[8] = 0xc3;
> +        *p++ = (bytes != 2) ? 0x90 : 0x66;  /* data16 or nop */
> +        *p++ = opcode;                      /* <opcode>      */
> +        *p++ = !(opcode & 8) ? port : 0x90; /* imm8 or nop   */
>      }
>  
> -    BUILD_BUG_ON(STUB_BUF_SIZE / 2 < MAX(9, /* Default emul stub */
> -                                         5 + IOEMUL_QUIRK_STUB_BYTES));
> +    APPEND_CALL(save_guest_gprs);
> +    APPEND_BUFF(epilogue);
> +
> +    BUG_ON(STUB_BUF_SIZE / 2 < (p - ctxt->io_emul_stub));

I continue to view this as a dis-improvement: a bug here may go
unnoticed for a very long time, until someone runs Xen on one of the
affected ProLiants _and_ invokes an action triggering the quirk code.
I don't see any better alternative than what I think I had already
described, though. Hence, preferably with the remark further up
addressed:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:05:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:05:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc8Ib-0001IK-J9; Fri, 22 May 2020 14:05:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc8Ia-0001IE-Bw
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:05:20 +0000
X-Inumbo-ID: 44890b82-9c35-11ea-abe0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 44890b82-9c35-11ea-abe0-12813bfff9fa;
 Fri, 22 May 2020 14:05:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D0DFCAD12;
 Fri, 22 May 2020 14:05:20 +0000 (UTC)
Subject: Re: [PATCH v4] x86/idle: prevent entering C6 with in service
 interrupts on Intel
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20200521092258.82503-1-roger.pau@citrix.com>
 <b59a2d2e-e66d-f3a1-dd85-fd7c7eccc98a@suse.com>
 <20200522135429.GD54375@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5140b52a-4c47-e36d-6a04-4a24d70e539c@suse.com>
Date: Fri, 22 May 2020 16:05:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200522135429.GD54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.05.2020 15:54, Roger Pau Monné wrote:
> On Fri, May 22, 2020 at 03:42:10PM +0200, Jan Beulich wrote:
>> On 21.05.2020 11:22, Roger Pau Monne wrote:
>>> Apply a workaround for Intel errata BDX99, CLX30, SKX100, CFW125,
>>> BDF104, BDH85, BDM135, KWB131: "A Pending Fixed Interrupt May Be
>>> Dispatched Before an Interrupt of The Same Priority Completes".
>>
>> While the change looks good to me as far as Broadwell goes, I
>> think it was before this posting that Andrew also pointed at
>> a specific Haswell erratum instance, still on the v3 thread.
>> Am I to imply a v5 will follow adding affected Haswell models
>> to the table?
> 
> Those refer to a different erratum; see:
> 
> https://lore.kernel.org/xen-devel/84486d84-4452-af18-f7e7-753faf5a125d@citrix.com/
> 
> We believe this also affects Haswell, but the erratum has not been
> published for those CPUs (yet, at least). If/when it is published I'm
> happy to add it; in the meantime I think we should go with what Intel
> has published.

Oh, sorry, yes. I should have looked at the full thread again, not
just Andrew's initial response.

Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:15:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:15:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc8Rz-0002LN-HA; Fri, 22 May 2020 14:15:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc8Ry-0002LI-6c
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:15:02 +0000
X-Inumbo-ID: 9fb9c428-9c36-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9fb9c428-9c36-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 14:15:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 365D1AD12;
 Fri, 22 May 2020 14:15:03 +0000 (UTC)
Subject: Re: [PATCH v3 1/3] x86: relax GDT check in arch_set_info_guest()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <cbed3c45-3685-4bce-9719-93b1e8a2599a@suse.com>
 <acbaead9-0f6c-3606-e809-57dafe9b3f01@suse.com>
 <58510f15-68d6-c773-5166-a38c72573442@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ee21b6eb-cf34-74ee-ac73-33ff76fb07db@suse.com>
Date: Fri, 22 May 2020 16:14:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <58510f15-68d6-c773-5166-a38c72573442@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.05.2020 15:27, Andrew Cooper wrote:
> On 20/05/2020 08:53, Jan Beulich wrote:
>> It is wrong for us to check frames beyond the guest specified limit
>> (in the compat case another loop bound is already correct).
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I'm still not overly convinced this is a good idea, because all it will
> allow people to do is write lazy code which breaks on older Xen.

That sounds a little like keeping bugs for the sake of keeping things
broken. The range of misbehaving versions could be shrunk by
backporting this change; so far, though, I haven't intended to.

> However, if you still insist, Acked-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks!

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:24:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc8bI-0003Z8-8K; Fri, 22 May 2020 14:24:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc8bG-0003Z0-Qu
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:24:38 +0000
X-Inumbo-ID: f68cded9-9c37-11ea-abe7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f68cded9-9c37-11ea-abe7-12813bfff9fa;
 Fri, 22 May 2020 14:24:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D5970B142;
 Fri, 22 May 2020 14:24:37 +0000 (UTC)
Subject: Re: [PATCH v5 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
To: Paul Durrant <paul@xen.org>
References: <20200521161939.4508-1-paul@xen.org>
 <20200521161939.4508-2-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <62f13c6d-8d5f-7d06-d7e9-d17b960c8264@suse.com>
Date: Fri, 22 May 2020 16:24:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521161939.4508-2-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.05.2020 18:19, Paul Durrant wrote:
> To allow enlightened HVM guests (i.e. those that have PV drivers) to be
> migrated without their co-operation it will be necessary to transfer 'PV'
> state such as event channel state, grant entry state, etc.
> 
> Currently there is a framework (entered via the hvm_save/load() functions)
> that allows a domain's 'HVM' (architectural) state to be transferred but
> 'PV' state is also common with pure PV guests and so this framework is not
> really suitable.
> 
> This patch adds the new public header and low level implementation of a new
> common framework, entered via the domain_save/load() functions. Subsequent
> patches will introduce other parts of the framework, and code that will
> make use of it within the current version of the libxc migration stream.
> 
> This patch also marks the HVM-only framework as deprecated in favour of the
> new framework.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> Acked-by: Julien Grall <julien@xen.org>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark:

> +int domain_load_end(struct domain_context *c)
> +{
> +    struct domain *d = c->domain;
> +    size_t len = c->desc.length - c->len;
> +
> +    while ( c->len != c->desc.length ) /* unconsumed data or pad */
> +    {
> +        uint8_t pad;
> +        int rc = domain_load_data(c, &pad, sizeof(pad));
> +
> +        if ( rc )
> +            return rc;
> +
> +        if ( pad )
> +            return -EINVAL;
> +    }
> +
> +    gdprintk(XENLOG_INFO, "%pd load: %s[%u] +%zu (-%zu)\n", d, c->name,
> +             c->desc.instance, c->len, len);

Unlike on the save side, you assume c->name to be non-NULL here.
We're not going to crash because of this, but it feels a little odd
anyway, specifically with the function being non-static (although, on
the positive side, the type is private to this file).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:34:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc8ku-0004ae-61; Fri, 22 May 2020 14:34:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jc8ks-0004aZ-A0
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:34:34 +0000
X-Inumbo-ID: 5a491de6-9c39-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a491de6-9c39-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 14:34:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D72DBABB2;
 Fri, 22 May 2020 14:34:34 +0000 (UTC)
Subject: Re: [PATCH v5 4/5] common/domain: add a domain context record for
 shared_info...
To: Paul Durrant <paul@xen.org>
References: <20200521161939.4508-1-paul@xen.org>
 <20200521161939.4508-5-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6090d080-e771-81fe-0b64-e25c7a8c52eb@suse.com>
Date: Fri, 22 May 2020 16:34:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200521161939.4508-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 21.05.2020 18:19, Paul Durrant wrote:
> @@ -1649,6 +1650,70 @@ int continue_hypercall_on_cpu(
>      return 0;
>  }
>  
> +static int save_shared_info(const struct domain *d, struct domain_context *c,
> +                            bool dry_run)
> +{
> +    struct domain_shared_info_context ctxt = {
> +#ifdef CONFIG_COMPAT
> +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> +#endif
> +        .buffer_size = sizeof(shared_info_t),

But this size varies between native and compat.

> +static int load_shared_info(struct domain *d, struct domain_context *c)
> +{
> +    struct domain_shared_info_context ctxt;
> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    unsigned int i;
> +    int rc;
> +
> +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> +    if ( rc )
> +        return rc;
> +
> +    if ( i ) /* expect only a single instance */
> +        return -ENXIO;
> +
> +    rc = domain_load_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( ctxt.buffer_size != sizeof(shared_info_t) )
> +        return -EINVAL;

While on the save side things could be left as they are (yet
I'd prefer a change), this should be flexible enough to allow
at least the smaller compat size as well in the compat case.
I wonder whether any smaller sizes might be acceptable, once
again with the rest getting zero-padded.

> +    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
> +#ifdef CONFIG_COMPAT
> +        has_32bit_shinfo(d) = true;
> +#else
> +        return -EINVAL;
> +#endif

Am I mis-remembering or was a check lost of the remaining
flags being zero? If I am, one needs adding in any case, imo.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:36:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:36:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc8mD-0004eX-G9; Fri, 22 May 2020 14:35:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/N6j=7E=amazon.co.uk=prvs=4046e5fc3=pdurrant@srs-us1.protection.inumbo.net>)
 id 1jc8mB-0004eO-Fi
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:35:55 +0000
X-Inumbo-ID: 8b49f08c-9c39-11ea-9887-bc764e2007e4
Received: from smtp-fw-4101.amazon.com (unknown [72.21.198.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8b49f08c-9c39-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 14:35:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1590158155; x=1621694155;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=gDmbTZSvOmX+Uj7ZaaaqCdVohExrLwLH2EEgMr0TZ5A=;
 b=PeeyO5xk3KpwKlN1loQwIaOrUqZ62lyebx5V4kFiHV/7OWxkI318d4Iu
 rADxh0tB2qikruH7kek8L9wjh9IP2IdYqAbCKwIWxFflBe4UXJiPWtGIl
 RYv7mHbBklN7LsVceFz4hRYVKAcHWRxcDx6pr/omXMhdMhMNwpphxuiXU E=;
IronPort-SDR: Xo61MzYupChIJvTMlSQqhS1wri8KARmx1nGHZq2cUhSz3cCD1NXAjro2Fp5BJdmuOEHvSzykry
 Z3yjX4BHhgCw==
X-IronPort-AV: E=Sophos;i="5.73,422,1583193600"; d="scan'208";a="31779345"
Subject: RE: [PATCH v5 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Thread-Topic: [PATCH v5 1/5] xen/common: introduce a new framework for
 save/restore of 'domain' context
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-4101.iad4.amazon.com with ESMTP;
 22 May 2020 14:35:42 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2b-4e24fd92.us-west-2.amazon.com (Postfix) with ESMTPS
 id C295AA22C0; Fri, 22 May 2020 14:35:40 +0000 (UTC)
Received: from EX13D32EUC002.ant.amazon.com (10.43.164.94) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 22 May 2020 14:35:40 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 22 May 2020 14:35:39 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 22 May 2020 14:35:39 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Thread-Index: AQHWL4wYvpA3OyuRlUmGyLSLdUQhs6i0KlMAgAACDQA=
Date: Fri, 22 May 2020 14:35:39 +0000
Message-ID: <e94b7451bf8d43339a69e1f392a768a6@EX13D32EUC003.ant.amazon.com>
References: <20200521161939.4508-1-paul@xen.org>
 <20200521161939.4508-2-paul@xen.org>
 <62f13c6d-8d5f-7d06-d7e9-d17b960c8264@suse.com>
In-Reply-To: <62f13c6d-8d5f-7d06-d7e9-d17b960c8264@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.198]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, George
 Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Volodymyr
 Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monné <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 22 May 2020 15:25
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Julien Grall
> <julien@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>;
> Ian Jackson <ian.jackson@eu.citrix.com>; Stefano Stabellini <sstabellini@kernel.org>; Wei Liu
> <wl@xen.org>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné <roger.pau@citrix.com>
> Subject: RE: [EXTERNAL] [PATCH v5 1/5] xen/common: introduce a new framework for save/restore of
> 'domain' context
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 21.05.2020 18:19, Paul Durrant wrote:
> > To allow enlightened HVM guests (i.e. those that have PV drivers) to be
> > migrated without their co-operation it will be necessary to transfer 'PV'
> > state such as event channel state, grant entry state, etc.
> >
> > Currently there is a framework (entered via the hvm_save/load() functions)
> > that allows a domain's 'HVM' (architectural) state to be transferred but
> > 'PV' state is also common with pure PV guests and so this framework is not
> > really suitable.
> >
> > This patch adds the new public header and low level implementation of a new
> > common framework, entered via the domain_save/load() functions. Subsequent
> > patches will introduce other parts of the framework, and code that will
> > make use of it within the current version of the libxc migration stream.
> >
> > This patch also marks the HVM-only framework as deprecated in favour of the
> > new framework.
> >
> > Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> > Acked-by: Julien Grall <julien@xen.org>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> with one remark:
> 
> > +int domain_load_end(struct domain_context *c)
> > +{
> > +    struct domain *d = c->domain;
> > +    size_t len = c->desc.length - c->len;
> > +
> > +    while ( c->len != c->desc.length ) /* unconsumed data or pad */
> > +    {
> > +        uint8_t pad;
> > +        int rc = domain_load_data(c, &pad, sizeof(pad));
> > +
> > +        if ( rc )
> > +            return rc;
> > +
> > +        if ( pad )
> > +            return -EINVAL;
> > +    }
> > +
> > +    gdprintk(XENLOG_INFO, "%pd load: %s[%u] +%zu (-%zu)\n", d, c->name,
> > +             c->desc.instance, c->len, len);
> 
> Unlike on the save side you assume c->name to be non-NULL here.
> We're not going to crash because of this, but it feels a little
> odd anyway, specifically with the function being non-static
> (albeit on the positive side we have the type being private to
> this file).

Yes, I didn't think it was worth the test since it is supposed to be a "can't happen" case. If we're worried about the load handler calling in with a bad value of c then we could add a magic number into the struct and ASSERT it is correct in domain_[save|load]_[begin|data|end].

  Paul

> 
> Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:45:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:45:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc8ve-0005fW-E8; Fri, 22 May 2020 14:45:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/N6j=7E=amazon.co.uk=prvs=4046e5fc3=pdurrant@srs-us1.protection.inumbo.net>)
 id 1jc8vd-0005ew-F4
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:45:41 +0000
X-Inumbo-ID: e84aca26-9c3a-11ea-abe8-12813bfff9fa
Received: from smtp-fw-6001.amazon.com (unknown [52.95.48.154])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e84aca26-9c3a-11ea-abe8-12813bfff9fa;
 Fri, 22 May 2020 14:45:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1590158740; x=1621694740;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=kJ3G5h5R9joMKLzin3L9d4K2HP8nBlD8g5ghuL/Bd64=;
 b=m+3EztwoSEPZSLxbPXYCGIpYgyL8deaIwp5zr0wZ63nTL1+aWezjaeDB
 bcwqRlB7QINkA3FTUX/qAr+FPBtyT06vXdJVR0laKvSbaqjM/tIePGTWd
 NnxKc1iEBnU+1l4XXbG5xobytqa/5tCfvaQbItLE1bAvK99yMqQIJLYmj Y=;
IronPort-SDR: jiMOppQ1yMgSPtqEVMYZsmrQNzIynvVqEpNBpwySO1L4nivtzQh0bLusrPZKj+wFKaDQRwzgq8
 YkvQw8g1vFnA==
X-IronPort-AV: E=Sophos;i="5.73,422,1583193600"; d="scan'208";a="33102427"
Subject: RE: [PATCH v5 4/5] common/domain: add a domain context record for
 shared_info...
Thread-Topic: [PATCH v5 4/5] common/domain: add a domain context record for
 shared_info...
Received: from iad12-co-svc-p1-lb1-vlan3.amazon.com (HELO
 email-inbound-relay-2a-f14f4a47.us-west-2.amazon.com) ([10.43.8.6])
 by smtp-border-fw-out-6001.iad6.amazon.com with ESMTP;
 22 May 2020 14:45:26 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2a-f14f4a47.us-west-2.amazon.com (Postfix) with ESMTPS
 id 2CD90A27F7; Fri, 22 May 2020 14:45:25 +0000 (UTC)
Received: from EX13D32EUC002.ant.amazon.com (10.43.164.94) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 22 May 2020 14:45:24 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC002.ant.amazon.com (10.43.164.94) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 22 May 2020 14:45:23 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Fri, 22 May 2020 14:45:23 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Thread-Index: AQHWL4u/5V4oBl6iUEe2r7qYlan7rai0LReAgAAA1EA=
Date: Fri, 22 May 2020 14:45:23 +0000
Message-ID: <2981a3e0608840bc8c41723c7f71fc39@EX13D32EUC003.ant.amazon.com>
References: <20200521161939.4508-1-paul@xen.org>
 <20200521161939.4508-5-paul@xen.org>
 <6090d080-e771-81fe-0b64-e25c7a8c52eb@suse.com>
In-Reply-To: <6090d080-e771-81fe-0b64-e25c7a8c52eb@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.164.198]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 22 May 2020 15:34
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: RE: [EXTERNAL] [PATCH v5 4/5] common/domain: add a domain context record for shared_info...
> 
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 21.05.2020 18:19, Paul Durrant wrote:
> > @@ -1649,6 +1650,70 @@ int continue_hypercall_on_cpu(
> >      return 0;
> >  }
> >
> > +static int save_shared_info(const struct domain *d, struct domain_context *c,
> > +                            bool dry_run)
> > +{
> > +    struct domain_shared_info_context ctxt = {
> > +#ifdef CONFIG_COMPAT
> > +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> > +#endif
> > +        .buffer_size = sizeof(shared_info_t),
> 
> But this size varies between native and compat.
> 
> > +static int load_shared_info(struct domain *d, struct domain_context *c)
> > +{
> > +    struct domain_shared_info_context ctxt;
> > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( i ) /* expect only a single instance */
> > +        return -ENXIO;
> > +
> > +    rc = domain_load_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( ctxt.buffer_size != sizeof(shared_info_t) )
> > +        return -EINVAL;
> 
> While on the save side things could be left as they are (yet
> I'd prefer a change), this should be flexible enough to allow
> at least the smaller compat size as well in the compat case.

Ok, I guess we can optimize the buffer size down if only the compat version is needed. Seems like slightly pointless complexity though.

> I wonder whether any smaller sizes might be acceptable, once
> again with the rest getting zero-padded.
> 

If the need arises to zero extend an older shared_info variant then that can be done in future.

> > +    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
> > +#ifdef CONFIG_COMPAT
> > +        has_32bit_shinfo(d) = true;
> > +#else
> > +        return -EINVAL;
> > +#endif
> 
> Am I mis-remembering or was a check lost of the remaining
> flags being zero? If I am, one needs adding in any case, imo.
> 

It wasn't flags before, but you're quite right; they should be zero-checked.

  Paul

> Jan


From xen-devel-bounces@lists.xenproject.org Fri May 22 14:55:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 14:55:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc94u-0006fT-DO; Fri, 22 May 2020 14:55:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jc94s-0006fO-V4
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 14:55:14 +0000
X-Inumbo-ID: 3b199aba-9c3c-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b199aba-9c3c-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 14:55:09 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=e5uh8SK0CzB13ud1UpLCQsQg+EtHlPgoL2KVr0i3fRQ=; b=dZ64Ix5C89TT2O3kt/FsJfQRR
 gYslCF571gCa/GDsXT7Bj2LMZJ7JIc9pgzga1OcajyhVcpAATl2H35oU1ZBDwJVbriTAogkcqj8M8
 EqPdhIxOrdlIaYYqAHt4HwPWYP9VJCm8Im36WW41g02owx72jtMv02oTv/Qi3mijNL6GM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc94m-0004wS-Mm; Fri, 22 May 2020 14:55:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jc94m-0001w6-EO; Fri, 22 May 2020 14:55:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jc94m-0005HP-Dk; Fri, 22 May 2020 14:55:08 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150318-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150318: all pass - PUSHED
X-Osstest-Versions-This: ovmf=1c877c716038a862e876cac8f0929bab4a96e849
X-Osstest-Versions-That: ovmf=1a2ad3ba9efdd0db4bf1b6c114eedb59d6c483ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 14:55:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150318 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150318/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1c877c716038a862e876cac8f0929bab4a96e849
baseline version:
 ovmf                 1a2ad3ba9efdd0db4bf1b6c114eedb59d6c483ca

Last test of basis   150313  2020-05-21 23:10:47 Z    0 days
Testing same since   150318  2020-05-22 05:16:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Liming Gao <liming.gao@intel.com>
  Michael D Kinney <michael.d.kinney@intel.com>
  Vitaly Cheptsov <vit9696@protonmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1a2ad3ba9e..1c877c7160  1c877c716038a862e876cac8f0929bab4a96e849 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 22 15:08:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 15:08:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc9H9-0007h2-EM; Fri, 22 May 2020 15:07:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=owPK=7E=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jc9H7-0007gx-RN
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 15:07:53 +0000
X-Inumbo-ID: 01c67b96-9c3e-11ea-ae69-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01c67b96-9c3e-11ea-ae69-bc764e2007e4;
 Fri, 22 May 2020 15:07:52 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: pywG/WyueHGMEDxq8X+fh8wZanoO5OpJS1+aiLTRESYwg1GCNloMICFlG+JfSS9bVyJjzZXMYM
 eT7dAQQ0eRjsFMHCZm9tpf7UtUysia/NIrFNKt4JqpD5LiA05zStGO4h7c2dToLqv5n7uV/iqM
 MtFLu6LY0TL+PF2XKkizuvEcwfo7dTnXBo/paJmK0KRxMrKY8JNqU3tDuCb32BaJcPUtbnO0oK
 OsoWXO/rTvP0BA0QNTszdvAYx6t+Sg/I8zA/ABx/L33KVZfUKk0K6zaSH4k3ncEwF8rxc/6VPK
 zfY=
X-SBRS: 2.7
X-MesageID: 18449305
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18449305"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/idle: Extend ISR/C6 erratum workaround to Haswell
Date: Fri, 22 May 2020 16:07:33 +0100
Message-ID: <20200522150733.18422-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This bug was first discovered against Haswell.  It is definitely affected.

(The XenServer ticket for this bug was opened on 2013-05-30, coming up on 7
years ago, and predates Broadwell.)

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

We've followed up with Intel and, based on those conversations, I was expecting
Haswell to be treated the same as Broadwell in this regard.
---
 xen/arch/x86/acpi/cpu_idle.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index 178cb607c2..a2248ea11f 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -583,8 +583,16 @@ bool errata_c6_workaround(void)
          * registers), the processor may dispatch the second interrupt (from
          * the IRR bit) before the first interrupt has completed and written to
          * the EOI register, causing the first interrupt to never complete.
+         *
+         * Note: no erratum has been issued for Haswell, but this issue was
+         * first discovered on Haswell hardware, which is affected.
          */
         static const struct x86_cpu_id isr_errata[] = {
+            /* Haswell */
+            INTEL_FAM6_MODEL(0x3c),
+            INTEL_FAM6_MODEL(0x3f),
+            INTEL_FAM6_MODEL(0x45),
+            INTEL_FAM6_MODEL(0x46),
             /* Broadwell */
             INTEL_FAM6_MODEL(0x47),
             INTEL_FAM6_MODEL(0x3d),
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Fri May 22 15:53:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 15:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jc9zP-0003cu-4j; Fri, 22 May 2020 15:53:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hXvX=7E=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jc9zO-0003cj-6J
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 15:53:38 +0000
X-Inumbo-ID: 65a34e68-9c44-11ea-b07b-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65a34e68-9c44-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 15:53:37 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:48624
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jc9zL-000TZn-Le (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 22 May 2020 16:53:35 +0100
Subject: Re: [PATCH] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1590097438-28829-1-git-send-email-igor.druzhinin@citrix.com>
 <20200522100846.GV54375@Air-de-Roger>
 <04ec4ab4-a121-c5be-0a65-316e237dd793@citrix.com>
 <20200522102339.GX54375@Air-de-Roger>
 <fe6e5c7f-df0f-5436-a7cd-2949464ab9a7@citrix.com>
 <20200522111146.GZ54375@Air-de-Roger>
 <4831dc51-cea1-2870-422b-2af7d6d1f2d6@suse.com>
 <ef3411ac-9e7c-0ef7-ad9f-c24f8ebf32a6@citrix.com>
 <20200522133259.GC54375@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <15036c44-640e-de1c-985b-b9c68592963f@citrix.com>
Date: Fri, 22 May 2020 16:53:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200522133259.GC54375@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, wl@xen.org,
 Jan Beulich <jbeulich@suse.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 14:32, Roger Pau Monné wrote:
> On Fri, May 22, 2020 at 02:11:15PM +0100, Andrew Cooper wrote:
>> On 22/05/2020 14:04, Jan Beulich wrote:
>>> On 22.05.2020 13:11, Roger Pau Monné wrote:
>>>> That being said, I also don't like the fact that log-dirty is handled
>>>> differently between EPT and NPT, as on EPT it's handled as a
>>>> misconfig while on NPT it's handled as a violation.
>>> Because, well, there is no concept of misconfig in NPT.
>> Indeed.  Intel chose to split EPT errors into two - MISCONFIG for
>> structural errors (not present, or reserved bits set) and VIOLATION for
>> permissions errors.
>>
>> AMD reused the same silicon pagewalker design, so have a single
>> NPT_FAULT vmexit which behaves much more like a regular pagefault,
>> encoding structural vs permission errors in the error code.
> Maybe I should clarify, I understand that NPT doesn't have such
> differentiation regarding nested page table faults vs EPT, but I feel
> like it would be clearer if part of the code could be shared, ie:
> unify EPT resolve_misconfig and NPT do_recalc into a single function
> for example that uses the necessary p2m-> helpers for the differing
> implementations. I think we should be able to tell apart when a NPT
> page fault is a recalc one by looking at the bits in the EXITINFO1
> error field?

But they use fundamentally different mechanisms.

EPT uses an invalid caching type, while NPT sets the User bit (and even
this is going to have to change when we want to support GMET for Windows
VBS in the long term).

You could abstract a few things out into common logic, but none of the
bit positions match (not even the recalc software bit), and the result
would be more complicated than its current form.
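The structural-vs-permission split being discussed can be sketched with the architectural x86 pagefault error-code bits, which AMD's NPT_FAULT exit reuses in EXITINFO1. This is an illustrative decoder under that assumption only: Xen's actual p2m code also looks at its software recalc bit, which is not modelled here.

```c
#include <stdbool.h>
#include <stdint.h>

/* Architectural x86 pagefault error-code bits (Intel SDM / AMD APM),
 * as reused by the NPT_FAULT exit's error code. */
#define PFEC_present (1u << 0)  /* fault on a present entry     */
#define PFEC_write   (1u << 1)  /* write access                 */
#define PFEC_user    (1u << 2)  /* user-mode access             */
#define PFEC_rsvd    (1u << 3)  /* reserved bit set in an entry */
#define PFEC_insn    (1u << 4)  /* instruction fetch            */

/* Roughly Intel's MISCONFIG bucket: the walk itself was malformed
 * (not-present entry or reserved bits set), as opposed to a
 * VIOLATION-style permissions error on an otherwise valid mapping. */
static bool npt_fault_is_structural(uint32_t ec)
{
    return !(ec & PFEC_present) || (ec & PFEC_rsvd);
}
```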

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 22 15:59:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 15:59:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcA4s-0003oG-Nn; Fri, 22 May 2020 15:59:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=knsM=7E=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jcA4q-0003oB-UY
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 15:59:16 +0000
X-Inumbo-ID: 2fdf4d62-9c45-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fdf4d62-9c45-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 15:59:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 09A28B251;
 Fri, 22 May 2020 15:59:17 +0000 (UTC)
Subject: Re: [PATCH] x86/idle: Extend ISR/C6 erratum workaround to Haswell
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200522150733.18422-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5cbeb49b-96fa-8bc7-407d-724defef5671@suse.com>
Date: Fri, 22 May 2020 17:59:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200522150733.18422-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.05.2020 17:07, Andrew Cooper wrote:
> This bug was first discovered against Haswell.  It is definitely affected.
> 
> (The XenServer ticket for this bug was opened on 2013-05-30 which is coming up
> on 7 years old, and predates Broadwell).
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri May 22 16:13:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAIC-00062y-WB; Fri, 22 May 2020 16:13:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jcAIC-00062s-0V
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:13:04 +0000
X-Inumbo-ID: 1c853447-9c47-11ea-abf6-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c853447-9c47-11ea-abf6-12813bfff9fa;
 Fri, 22 May 2020 16:13:02 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: KkaoinM9/gs5S/yJZ56rjkJY/pe8/WowyyXkJncXmN4y0QC5OF7G1CWrKU+ZOsmBOMQlV9ceUJ
 uoweMJ/cOPDmMxN/30QtE5MSpe2LSz1uLJcD17JXFm/fiZrmi4vcAbkwxL5fiRItJO5cE2If2i
 yhsXdoj3YZrJxSIaOIJRXdMBsUiwXMQK7WTwlvWmPx1Ny866FhJNDxNPf1HSk3T6dihUkdZ07z
 rJyrUu2/cgluUg59wO9ZhrCfj5mMOB6cpwLUpn3E1GKU7Kgiu8tCzWB1tgFGZT3xqjdQ+dxzFL
 3XE=
X-SBRS: 2.7
X-MesageID: 18915172
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18915172"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 1/5] golang: Add a minimum go version to go.mod
Date: Fri, 22 May 2020 17:12:36 +0100
Message-ID: <20200522161240.3748320-2-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200522161240.3748320-1-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

`go build` wants to add the current go version to go.mod as the
minimum every time we run `make` in the directory.  Add 1.11 (the
earliest Go version that supports modules) there to make it happy.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/go.mod | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/golang/xenlight/go.mod b/tools/golang/xenlight/go.mod
index 926474d929..7dfbd758d1 100644
--- a/tools/golang/xenlight/go.mod
+++ b/tools/golang/xenlight/go.mod
@@ -1 +1,3 @@
 module xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight
+
+go 1.11
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 16:13:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAIE-000638-8A; Fri, 22 May 2020 16:13:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jcAID-00062x-12
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:13:05 +0000
X-Inumbo-ID: 1d602c9a-9c47-11ea-b9cf-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d602c9a-9c47-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 16:13:04 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 22nWqa1HOEUlJZzT1IqShg9g3FD7lnx5clfJoCR5UU4RJeD7RLjKAHXlulsxeGBSZr9kwcBq6p
 Oy1ENo0CiMdKvFjQsIlR7atnKsxFqvzpZeFVLjQtBXShWl6iV4DI/1GAaoekfU6IuU4PohRQ5z
 UcyUoKQy+nuTnUzxsOgo8/6S66HWg2Rh4qxx9tDSbLlYQxPCxy9RfAfpX0QYJVLBFwb2Evo8/C
 vEDYjbxDqkVY7Yfqxa8Llk5WySqiiCRKAURglXu/Kd46UCLf0I7rLgDmrJQN3DGtI/YdUurAaA
 m0c=
X-SBRS: 2.7
X-MesageID: 18454768
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18454768"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 5/5] gitignore: Ignore golang package directory
Date: Fri, 22 May 2020 17:12:40 +0100
Message-ID: <20200522161240.3748320-6-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200522161240.3748320-1-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Jan Beulich <jbeulich@suse.com>
CC: Konrad Wilk <konrad.wilk@oracle.com>
CC: Stefano Stabellini <sstabellini@kernel.org>
CC: Julien Grall <julien.grall@arm.com>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 .gitignore | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.gitignore b/.gitignore
index 7418ce9829..caaaa15b49 100644
--- a/.gitignore
+++ b/.gitignore
@@ -406,6 +406,7 @@ tools/python/xen/lowlevel/xl/_pyxl_types.h
 tools/xenstore/xenstore-watch
 tools/xl/_paths.h
 tools/xl/xl
+tools/golang/src
 
 docs/txt/misc/*.txt
 docs/txt/man/*.txt
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 16:13:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAII-00063f-GS; Fri, 22 May 2020 16:13:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jcAIG-00063T-TR
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:13:08 +0000
X-Inumbo-ID: 1ca260c1-9c47-11ea-abf6-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ca260c1-9c47-11ea-abf6-12813bfff9fa;
 Fri, 22 May 2020 16:13:03 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: H1HFa8+grZ7oFej9hGO2VMHPPWnJV8aaJrw+GbjtDg0jVZCqiLR8a8/3JJWBzAZDaiIUWJ7wvS
 4qlm0rfOfsKkuCu97+rr9taChIiTTdlwYn58dnuegKQgRujqSYEKsy35qA5oWahlIrRO8H7S0B
 MWAmMGuWkHdl1w69dKsmUXiKsLWl+z5kxvMetkiHnOnqgVcsYc7UEFekwfbeEd3kgtMYuVAYu8
 X6T8btkJNKaSrpunjGjQlfI4Zuu36pv/fjWzZpdAEMS2rxDmSQYI2UJ+mLNlF8lELqDmearTjX
 HRo=
X-SBRS: 2.7
X-MesageID: 18557067
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18557067"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 0/5] Golang build fixes
Date: Fri, 22 May 2020 17:12:35 +0100
Message-ID: <20200522161240.3748320-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a series of patches that improve build for the golang xenlight
bindings.  The most important patch is #3, which will update the
generated golang bindings from the tools/libxl directory when
libxl_types.idl is updated, even if the person building doesn't have
the golang packages enabled.

George Dunlap (5):
  golang: Add a minimum go version to go.mod
  golang: Add a variable for the libxl source directory
  libxl: Generate golang bindings in libxl Makefile
  golang/xenlight: Use XEN_PKG_DIR variable rather than open-coding
  gitignore: Ignore golang package directory

 .gitignore                     |  1 +
 tools/golang/xenlight/Makefile | 12 +++++++++---
 tools/golang/xenlight/go.mod   |  2 ++
 tools/libxl/Makefile           | 12 +++++++++++-
 4 files changed, 23 insertions(+), 4 deletions(-)

--
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
CC: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Fri May 22 16:13:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:13:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAIJ-00063q-O2; Fri, 22 May 2020 16:13:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jcAIH-00063Z-UH
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:13:09 +0000
X-Inumbo-ID: 1e073512-9c47-11ea-b9cf-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e073512-9c47-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 16:13:04 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: hM8AKnw8g800qnanAC29WuSXC2r7EZt/ZAlllla/TZS8Cgy1GhwEwsSWZ/4iQ4cSRBxyWy/6BS
 gRy3I3ErzCwV0xvhLktGPxm67r1ZvMmKMjNSMkjydZDTu9KeJ0NwpKqsqenIKZ3OlvzW0CPZF0
 DvRzR/gwTDTKwgocx/kQptGH8yerMHgsTg4EMhGYY8QYoWhrxuGoLDkeBsQvgOYCCCeiZlo2FH
 OXzfByK2XulSVVY1aOOzdVHH5T28UUBDIFh4P+Xh1z0SIfBQxDFQOjlqfyYoQifOSQ7l00EONl
 6WM=
X-SBRS: 2.7
X-MesageID: 18454769
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18454769"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather than
 open-coding
Date: Fri, 22 May 2020 17:12:39 +0100
Message-ID: <20200522161240.3748320-5-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200522161240.3748320-1-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 751f916276..6ab36c0aa9 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -19,7 +19,7 @@ package: $(XEN_GOPATH)$(GOXL_PKG_DIR)
 
 GOXL_GEN_FILES = types.gen.go helpers.gen.go
 
-$(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go $(GOXL_GEN_FILES)
+$(XEN_GOPATH)$(GOXL_PKG_DIR): xenlight.go $(GOXL_GEN_FILES)
 	$(INSTALL_DIR) $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) xenlight.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) types.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 16:13:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAIO-000657-1R; Fri, 22 May 2020 16:13:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jcAIL-00064Q-TR
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:13:13 +0000
X-Inumbo-ID: 216b5274-9c47-11ea-abf6-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 216b5274-9c47-11ea-abf6-12813bfff9fa;
 Fri, 22 May 2020 16:13:11 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: KxPNs5keHhc+fidn6/H1Nm9Y7zYU1beE8+bp8J+5ykM+OnhXTmM65r1mXRxUpW1JVCUCLJo1YO
 ZzROBGSVu/j+QAKZE9cXmewpX5ej+JVbvfL46Dj0hNEmTCY3igcdnTAoKd/IFIHb8LBhk8SjDR
 nS4dWO0Xl126vYI19Dk3cdoxyOLwNu0T9RSsqWEJRBFciI4IlzFKTUmHKIvgVYSri+kfYNhsCR
 rVHuDwIhUOSsZcj5050J6MwQ1xcGhsDlEunnaRSro97/0hawLlz5/THJpybteRzRt5XH1kBmhm
 jE0=
X-SBRS: 2.7
X-MesageID: 18189347
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18189347"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 3/5] libxl: Generate golang bindings in libxl Makefile
Date: Fri, 22 May 2020 17:12:38 +0100
Message-ID: <20200522161240.3748320-4-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200522161240.3748320-1-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The generated golang bindings (types.gen.go and helpers.gen.go) are
left checked in so that they can be fetched from xenbits using the
golang tooling.  This means that they must be updated whenever
libxl_types.idl (or other dependencies) are updated.  However, the
golang bindings are only built optionally; we can't assume that anyone
updating libxl_types.idl will also descend into the tools/golang tree
to re-generate the bindings.

Fix this by re-generating the golang bindings from the libxl Makefile
when the IDL dependencies are updated, so that anyone who updates
libxl_types.idl will also end up updating the golang generated files
as well.

 - Make a variable for the generated files, and a target in
   xenlight/Makefile which will only re-generate the files.

 - Add a target in libxl/Makefile to call external idl generation
   targets (currently only golang).

For ease of testing, also add a specific target in libxl/Makefile just
to check and update files generated from the IDL.

This does mean that there are two potential paths for generating the
files during a parallel build; but that shouldn't be an issue, since
tools/golang/xenlight should never be built until after tools/libxl
has completed building anyway.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/Makefile |  6 +++++-
 tools/libxl/Makefile           | 12 +++++++++++-
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index cd0a62505f..751f916276 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -17,12 +17,16 @@ all: build
 .PHONY: package
 package: $(XEN_GOPATH)$(GOXL_PKG_DIR)
 
-$(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go types.gen.go helpers.gen.go
+GOXL_GEN_FILES = types.gen.go helpers.gen.go
+
+$(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go $(GOXL_GEN_FILES)
 	$(INSTALL_DIR) $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) xenlight.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) types.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) helpers.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 
+idl-gen: $(GOXL_GEN_FILES)
+
 %.gen.go: gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
 	XEN_ROOT=$(XEN_ROOT) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
 
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 69fcf21577..2a06a7ebb8 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -218,7 +218,7 @@ testidl.c: libxl_types.idl gentest.py libxl.h $(AUTOINCS)
 .PHONY: all
 all: $(CLIENTS) $(TEST_PROGS) $(PKG_CONFIG) $(PKG_CONFIG_LOCAL) \
 		libxenlight.so libxenlight.a libxlutil.so libxlutil.a \
-	$(AUTOSRCS) $(AUTOINCS)
+	$(AUTOSRCS) $(AUTOINCS) idl-external
 
 $(LIBXL_OBJS) $(LIBXLU_OBJS) $(SAVE_HELPER_OBJS) \
 		$(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): \
@@ -274,6 +274,16 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 	$(call move-if-changed,__libxl_type$(stem)_json.h,_libxl_type$(stem)_json.h)
 	$(call move-if-changed,__libxl_type$(stem).c,_libxl_type$(stem).c)
 
+.PHONY: idl-external
+idl-external:
+	$(MAKE) -C $(XEN_ROOT)/tools/golang/xenlight idl-gen
+
+LIBXL_IDLGEN_FILES = _libxl_types.h _libxl_types_json.h _libxl_types_private.h _libxl_types.c \
+	_libxl_types_internal.h _libxl_types_internal_json.h _libxl_types_internal_private.h _libxl_types_internal.c
+
+
+idl-gen: $(LIBXL_IDLGEN_FILES) idl-external
+
 libxenlight.so: libxenlight.so.$(MAJOR)
 	$(SYMLINK_SHLIB) $< $@
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 16:13:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:13:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAIO-00065m-B0; Fri, 22 May 2020 16:13:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2cgu=7E=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jcAIM-00064q-Ui
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:13:14 +0000
X-Inumbo-ID: 216abf58-9c47-11ea-b9cf-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 216abf58-9c47-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 16:13:11 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: QCTX7pdxLRE62cejAiA0UeIvrskqgUuAcylRwpuAgvfZw2mdhem3bUqZTBVw3kIeQ34PcL54UV
 CC41qdZAe/c1TPCpzJOkVsJaU53QCWMFDJs6DlKOekAHGia5PvH7Oi48DuEia0PTI/ettFO3Cr
 qgXj/yp+Pi5jHyUf+zOiFenC0HKkB8oq5b+n9X2OvZ3TNANxwJLUQgQNKheV3czVPljI9Ajrkl
 6xPOqeNFkxhqzHjPh4e2y8Ura0nIj/mRCg6tPfZ7aLLU0HSZPZZoJsqDYje/rpFYEKQOVswPHj
 Be0=
X-SBRS: 2.7
X-MesageID: 18476572
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,422,1583211600"; d="scan'208";a="18476572"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 2/5] golang: Add a variable for the libxl source directory
Date: Fri, 22 May 2020 17:12:37 +0100
Message-ID: <20200522161240.3748320-3-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200522161240.3748320-1-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

...rather than duplicating the path in several places.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/Makefile | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 37ed1358c4..cd0a62505f 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -9,6 +9,8 @@ GOXL_INSTALL_DIR = $(GOCODE_DIR)$(GOXL_PKG_DIR)
 
 GO ?= go
 
+LIBXL_SRC_DIR = ../../libxl
+
 .PHONY: all
 all: build
 
@@ -21,8 +23,8 @@ $(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go types.gen.go helpers.
 	$(INSTALL_DATA) types.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) helpers.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 
-%.gen.go: gengotypes.py $(XEN_ROOT)/tools/libxl/libxl_types.idl $(XEN_ROOT)/tools/libxl/idl.py
-	XEN_ROOT=$(XEN_ROOT) $(PYTHON) gengotypes.py ../../libxl/libxl_types.idl
+%.gen.go: gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
+	XEN_ROOT=$(XEN_ROOT) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
 
 # Go will do its own dependency checking, and not actually go through
 # with the build if none of the input files have changed.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 16:34:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:34:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAcT-0008No-FX; Fri, 22 May 2020 16:34:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UE2V=7E=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1jcAcS-0008Nj-CG
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:34:00 +0000
X-Inumbo-ID: 0891861c-9c4a-11ea-b07b-bc764e2007e4
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0891861c-9c4a-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 16:33:58 +0000 (UTC)
IronPort-SDR: 0GUlAE8WTWb7Nv2TTpTc79aqv9ClrXzTz4m8utzAedEbBB2orqWneWllCFgriE7jgRyzpAQ07J
 DHb/5OnyZzzw==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 May 2020 09:33:57 -0700
IronPort-SDR: cp+bdUzCCs1ZaOHkSL95CIMpt6nO1es0ml2hVDZF0p4hoiQPeIrJ5E0erNfpHj9Jym3aaNWHXS
 FQunWTL8V4ow==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,422,1583222400"; d="scan'208";a="301157055"
Received: from rpenaran-mobl.amr.corp.intel.com (HELO ubuntu.localdomain)
 ([10.212.41.203])
 by orsmga008.jf.intel.com with ESMTP; 22 May 2020 09:33:55 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt injection
 for forks
Date: Fri, 22 May 2020 09:33:52 -0700
Message-Id: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When running shallow forks without device models it may be undesirable for Xen
to inject interrupts. With Windows forks we have observed the kernel entering
infinite loops while trying to process such interrupts, likely because it
attempts to interact with devices that do not respond when no QEMU instance is
running. By disabling interrupt injection the fuzzer can exercise the target
code without interference.

Forks and memory sharing are only available on Intel CPUs, so this only
applies to VMX.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
v2: prohibit => block
    minor style adjustments
---
 xen/arch/x86/hvm/vmx/intr.c      | 6 ++++++
 xen/arch/x86/mm/mem_sharing.c    | 6 +++++-
 xen/include/asm-x86/hvm/domain.h | 2 ++
 xen/include/public/memory.h      | 1 +
 4 files changed, 14 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 000e14af49..80bfbb4787 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -256,6 +256,12 @@ void vmx_intr_assist(void)
     if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
         return;
 
+#ifdef CONFIG_MEM_SHARING
+    /* Block event injection for VM fork if requested */
+    if ( unlikely(v->domain->arch.hvm.mem_sharing.block_interrupts) )
+        return;
+#endif
+
     /* Crank the handle on interrupt state. */
     pt_vector = pt_update_irq(v);
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 7271e5c90b..0c45a8d67e 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -2106,7 +2106,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         rc = -EINVAL;
         if ( mso.u.fork.pad )
             goto out;
-        if ( mso.u.fork.flags & ~XENMEM_FORK_WITH_IOMMU_ALLOWED )
+        if ( mso.u.fork.flags &
+             ~(XENMEM_FORK_WITH_IOMMU_ALLOWED | XENMEM_FORK_BLOCK_INTERRUPTS) )
             goto out;
 
         rc = rcu_lock_live_remote_domain_by_id(mso.u.fork.parent_domain,
@@ -2134,6 +2135,9 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
             rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                                "lh", XENMEM_sharing_op,
                                                arg);
+        else if ( !rc && (mso.u.fork.flags & XENMEM_FORK_BLOCK_INTERRUPTS) )
+            d->arch.hvm.mem_sharing.block_interrupts = true;
+
         rcu_unlock_domain(pd);
         break;
     }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 95fe18cddc..37e494d234 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -74,6 +74,8 @@ struct mem_sharing_domain
      * to resume the search.
      */
     unsigned long next_shared_gfn_to_relinquish;
+
+    bool block_interrupts;
 };
 #endif
 
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index dbd35305df..1e4959638d 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -537,6 +537,7 @@ struct xen_mem_sharing_op {
         struct mem_sharing_op_fork {      /* OP_FORK */
             domid_t parent_domain;        /* IN: parent's domain id */
 #define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
+#define XENMEM_FORK_BLOCK_INTERRUPTS   (1u << 1)
             uint16_t flags;               /* IN: optional settings */
             uint32_t pad;                 /* Must be set to 0 */
         } fork;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 16:34:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:34:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAcX-0008ON-Rr; Fri, 22 May 2020 16:34:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UE2V=7E=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1jcAcX-0008OG-9W
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:34:05 +0000
X-Inumbo-ID: 0a47d7e0-9c4a-11ea-b07b-bc764e2007e4
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a47d7e0-9c4a-11ea-b07b-bc764e2007e4;
 Fri, 22 May 2020 16:34:00 +0000 (UTC)
IronPort-SDR: GAK5B0aPxgcDVjfYmww21crZRwn4iCU7cYH2NlSS0orDwucsU7hVgiHYvycYpmGImkAyovvUSX
 c4xNnwyYH6aQ==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 22 May 2020 09:33:57 -0700
IronPort-SDR: iZyueIouIt1QOBCDMYx+urq1Nqe60j6GkJlwLC/tYvEctZCC1KG7/uhyQAzfRKkM2XIumvoZ13
 lDdnoyMd4Fcw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,422,1583222400"; d="scan'208";a="301157059"
Received: from rpenaran-mobl.amr.corp.intel.com (HELO ubuntu.localdomain)
 ([10.212.41.203])
 by orsmga008.jf.intel.com with ESMTP; 22 May 2020 09:33:56 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14 2/2] tools/libxc: xc_memshr_fork with interrupts
 blocked
Date: Fri, 22 May 2020 09:33:53 -0700
Message-Id: <a903f59b9dc202114e06b7e4c7e0bb6e3fb588e9.1590165055.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Toolstack side for creating forks with interrupt injection blocked.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxc/include/xenctrl.h | 3 ++-
 tools/libxc/xc_memshr.c       | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..804ff001d7 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2242,7 +2242,8 @@ int xc_memshr_range_share(xc_interface *xch,
 int xc_memshr_fork(xc_interface *xch,
                    uint32_t source_domain,
                    uint32_t client_domain,
-                   bool allow_with_iommu);
+                   bool allow_with_iommu,
+                   bool block_interrupts);
 
 /*
  * Note: this function is only intended to be used on short-lived forks that
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 2300cc7075..a6cfd7dccf 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -240,7 +240,7 @@ int xc_memshr_debug_gref(xc_interface *xch,
 }
 
 int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid,
-                   bool allow_with_iommu)
+                   bool allow_with_iommu, bool block_interrupts)
 {
     xen_mem_sharing_op_t mso;
 
@@ -251,6 +251,8 @@ int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid,
 
     if ( allow_with_iommu )
         mso.u.fork.flags |= XENMEM_FORK_WITH_IOMMU_ALLOWED;
+    if ( block_interrupts )
+        mso.u.fork.flags |= XENMEM_FORK_BLOCK_INTERRUPTS;
 
     return xc_memshr_memop(xch, domid, &mso);
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 22 16:56:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 16:56:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcAxV-0001vA-NI; Fri, 22 May 2020 16:55:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcAxU-0001v5-83
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 16:55:44 +0000
X-Inumbo-ID: 0fe1e094-9c4d-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fe1e094-9c4d-11ea-b9cf-bc764e2007e4;
 Fri, 22 May 2020 16:55:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=G3IK/R5QLfnttLjbos8/D0jySFKnIDjwZxEqYD/DGys=; b=yEC4qSJ3Nmje3+l8zLOmY4Vg3
 9lDn2FcdHrvFl5S+Wes3/C12jdEhWxXPkPprZa6k4oS41IenWenVKdwLjGTXOOp1wLY+EyuS1EXIW
 f+JJOEGZzyY+ALYGMu5ngm9+ROWeqH2jf2DPIO106eYB3uxBEBWOQRytdGmgkhyFVPsqw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcAxN-0007yn-OR; Fri, 22 May 2020 16:55:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcAxN-0007qU-Er; Fri, 22 May 2020 16:55:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcAxN-00073l-EF; Fri, 22 May 2020 16:55:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150324-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150324: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=1658a39b0031baf9ec44c8d51d1d1369a964a4f9
X-Osstest-Versions-That: xen=f6d102046817bb5c08876ff78a6a00f4d29ee269
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 16:55:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150324 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150324/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1658a39b0031baf9ec44c8d51d1d1369a964a4f9
baseline version:
 xen                  f6d102046817bb5c08876ff78a6a00f4d29ee269

Last test of basis   150319  2020-05-22 11:02:07 Z    0 days
Testing same since   150324  2020-05-22 14:01:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   f6d1020468..1658a39b00  1658a39b0031baf9ec44c8d51d1d1369a964a4f9 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 22 17:34:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 17:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcBYm-0005Tx-MG; Fri, 22 May 2020 17:34:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h2Re=7E=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jcBYl-0005Ts-2W
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 17:34:15 +0000
X-Inumbo-ID: 7478d9cc-9c52-11ea-9887-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7478d9cc-9c52-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 17:34:14 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id BC899206C3;
 Fri, 22 May 2020 17:34:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590168854;
 bh=bkgshU6AbS0fJo/CTj2DzVqhKPnZ0Xphtqqjh9kikv0=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=eSi4IF3LWHeaVP4rszG5wzmIBjjqjNqDcvCQBTU8KUyFwkM0Oh+Ow6iUg1jw3OAk0
 0w/3No0va/aFOz3I71+LNi36xW+LCxxNTBq257tnueccNsalMDzxVLsdBZpkO0j5lq
 5Iu6CCt39GlOmDVt5KSccs6JAEBpwqcDoIzXEL/4=
Date: Fri, 22 May 2020 10:34:13 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH 08/10] swiotlb-xen: introduce phys_to_dma/dma_to_phys
 translations
In-Reply-To: <7aab1c00-c115-b989-61e5-fd7d28fa0d07@oracle.com>
Message-ID: <alpine.DEB.2.21.2005221033180.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-8-sstabellini@kernel.org>
 <7aab1c00-c115-b989-61e5-fd7d28fa0d07@oracle.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Stefano Stabellini <sstabellini@kernel.org>,
 konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, 21 May 2020, Boris Ostrovsky wrote:
> On 5/20/20 7:45 PM, Stefano Stabellini wrote:
> > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> >
> > Call dma_to_phys in is_xen_swiotlb_buffer.
> > Call phys_to_dma in xen_phys_to_bus.
> > Call dma_to_phys in xen_bus_to_phys.
> >
> > Everything is taken care of by these changes except for
> > xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a
> > few explicit phys_to_dma/dma_to_phys calls.
> >
> > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > ---
> >  drivers/xen/swiotlb-xen.c | 20 ++++++++++++--------
> >  1 file changed, 12 insertions(+), 8 deletions(-)
> >
> > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > index c50448fd9b75..d011c4c7aa72 100644
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -64,14 +64,16 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
> >  
> >  	dma |= paddr & ~XEN_PAGE_MASK;
> >  
> > -	return dma;
> > +	return phys_to_dma(dev, dma);
> >  }
> >  
> > -static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
> > +static inline phys_addr_t xen_bus_to_phys(struct device *dev,
> > +					  dma_addr_t dma_addr)
> 
> 
> Since now dma address != bus address this is no longer
> xen_bus_to_phys(). (And I guess the same is true for xen_phys_to_bus()).

Should I rename them to xen_dma_to_phys and xen_phys_to_dma?

 
> >  {
> > +	phys_addr_t baddr = dma_to_phys(dev, dma_addr);
> >  	unsigned long xen_pfn = bfn_to_pfn(XEN_PFN_DOWN(baddr));
> > -	dma_addr_t dma = (dma_addr_t)xen_pfn << XEN_PAGE_SHIFT;
> > -	phys_addr_t paddr = dma;
> > +	phys_addr_t paddr = (xen_pfn << XEN_PAGE_SHIFT) |
> > +			    (baddr & ~XEN_PAGE_MASK);
> >  
> >  	paddr |= baddr & ~XEN_PAGE_MASK;
> 
> 
> This line needs to go, no?

Yes, good point


From xen-devel-bounces@lists.xenproject.org Fri May 22 17:48:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 17:48:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcBm0-0006VQ-Si; Fri, 22 May 2020 17:47:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcBlz-0006VL-N5
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 17:47:55 +0000
X-Inumbo-ID: 5c60f214-9c54-11ea-ac03-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c60f214-9c54-11ea-ac03-12813bfff9fa;
 Fri, 22 May 2020 17:47:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=G6dH/LlXQ4QNMzSsv9QEOsr82dl1jiEF9sKk5DE5HUc=; b=bEvGb+rSFY2YPds5V92i+g1p3
 j1Z50CIA4rtGzRi+jndrXMQRRXjeJZsiZxG8wC+UVaMOJPnkJ8YVrjIEyXpkLJmBY0vvhmZc29osp
 FkiZ/PLfjGF8zoGmoRFvfKNbdl5DcCiU4Wq1JHXe7E4YYZMO9WjUr+b++l2xqiGhty+sQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcBlw-0000dk-ON; Fri, 22 May 2020 17:47:52 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcBlw-0001lZ-Fd; Fri, 22 May 2020 17:47:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcBlw-00074A-Ci; Fri, 22 May 2020 17:47:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150316-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150316: regressions - FAIL
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-vhd:xen-boot:fail:regression
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=051143e1602d90ea71887d92363edd539d411de5
X-Osstest-Versions-That: linux=2ea1940b84e55420a9e8feddcafd173edfe4df11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 17:47:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150316 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150316/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd       7 xen-boot                 fail REGR. vs. 150281
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 150281

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150281
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150281
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150281
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150281
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150281
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                051143e1602d90ea71887d92363edd539d411de5
baseline version:
 linux                2ea1940b84e55420a9e8feddcafd173edfe4df11

Last test of basis   150281  2020-05-20 19:40:19 Z    1 days
Failing since        150296  2020-05-21 08:13:02 Z    1 days    3 attempts
Testing same since   150316  2020-05-22 03:59:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andreas Färber <afaerber@suse.de>
  Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dave Jiang <dave.jiang@intel.com>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Jason Wang <jasowang@redhat.com>
  Johannes Berg <johannes.berg@intel.com>
  John Johansen <john.johansen@canonical.com>
  Kamal Dasu <kdasu.kdev@gmail.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael S. Tsirkin <mst@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Namjae Jeon <namjae.jeon@samsung.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Rafal Hibner <rafal.hibner@secom.com.pl>
  Rafał Hibner <rafal.hibner@secom.com.pl>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Weinberger <richard@nod.at>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  YueHaibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 608 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 22 18:11:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 18:11:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcC8S-0000jQ-Vg; Fri, 22 May 2020 18:11:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZMCQ=7E=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jcC8R-0000jL-Iy
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 18:11:07 +0000
X-Inumbo-ID: 9993c367-9c57-11ea-ac04-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9993c367-9c57-11ea-ac04-12813bfff9fa;
 Fri, 22 May 2020 18:11:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5VJHe7bH8hAm6eUuJu27AVIfqO1GywinPUVd6f11dDs=; b=DWY+wdnfmcr/84yBeVywBch+ZK
 fTpXAobGMc4xRDwXBwl6sZrqstYLh9PhmX7hH6BpImIMbDqxH0KwBngIa7FzJ54wfi8+uWb/5s0Lc
 2zUFol+D+MKCo9/SWg2AWsCLCz2nUacA9ebfwE5fJvSq2hL3jqNbwIifqKzMYPf+uifE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jcC8O-0001Aw-A6; Fri, 22 May 2020 18:11:04 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jcC8O-00043H-2H; Fri, 22 May 2020 18:11:04 +0000
Subject: Re: [PATCH 01/10] swiotlb-xen: use vmalloc_to_page on vmalloc virt
 addresses
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-1-sstabellini@kernel.org>
 <23e5b6d8-c5d9-b43f-41cd-9d02d8ec0a7f@xen.org>
 <alpine.DEB.2.21.2005211235590.27502@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <a7f1d5c1-1ee1-61e3-22be-1db4dced08eb@xen.org>
Date: Fri, 22 May 2020 19:11:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005211235590.27502@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano,

On 22/05/2020 04:54, Stefano Stabellini wrote:
> On Thu, 21 May 2020, Julien Grall wrote:
>> Hi,
>>
>> On 21/05/2020 00:45, Stefano Stabellini wrote:
>>> From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>>
>>> Don't just assume that virt_to_page works on all virtual addresses.
>>> Instead add a is_vmalloc_addr check and use vmalloc_to_page on vmalloc
>>> virt addresses.
>>
>> Can you provide an example where swiotlb is used with vmalloc()?
> 
> The issue was reported here happening on the Raspberry Pi 4:
> https://marc.info/?l=xen-devel&m=158862573216800

Thanks. It would be good if the commit message contained a bit more detail.

> 
> If you are asking where in the Linux codebase the vmalloc is happening
> specifically, I don't know for sure, my information is limited to the
> stack trace that you see in the link (I don't have a Raspberry Pi 4 yet
> but I shall have one soon.)

Looking at the code, there is a comment in xen_swiotlb_alloc_coherent() 
suggesting that xen_alloc_coherent_pages() may return an ioremap'ped 
region on Arm. So it feels to me that commit 
b877ac9815a8fe7e5f6d7fdde3dc34652408840a ("xen/swiotlb: remember having 
called xen_create_contiguous_region()") has always been broken on Arm.

As an aside, your commit message also suggests this is an issue for 
every virtual address used in swiotlb, but only the virt_to_page() call 
in xen_swiotlb_free_coherent() is modified. Is that intended? If so, I 
think you want to update your commit message.

> 
> 
>>> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> ---
>>>    drivers/xen/swiotlb-xen.c | 5 ++++-
>>>    1 file changed, 4 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index b6d27762c6f8..a42129cba36e 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -335,6 +335,7 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>    	int order = get_order(size);
>>>    	phys_addr_t phys;
>>>    	u64 dma_mask = DMA_BIT_MASK(32);
>>> +	struct page *pg;
>>>      	if (hwdev && hwdev->coherent_dma_mask)
>>>    		dma_mask = hwdev->coherent_dma_mask;
>>> @@ -346,9 +347,11 @@ xen_swiotlb_free_coherent(struct device *hwdev, size_t size, void *vaddr,
>>>    	/* Convert the size to actually allocated. */
>>>    	size = 1UL << (order + XEN_PAGE_SHIFT);
>>>    +	pg = is_vmalloc_addr(vaddr) ? vmalloc_to_page(vaddr) :
>>> +				      virt_to_page(vaddr);
>>
>> Common DMA code seems to protect this check with CONFIG_DMA_REMAP. Is it
>> something we want to do it here as well? Or is there any other condition where
>> vmalloc can happen?
> 
> I can see it in dma_direct_free_pages:
> 
> 	if (IS_ENABLED(CONFIG_DMA_REMAP) && is_vmalloc_addr(cpu_addr))
> 		vunmap(cpu_addr);
> 
> I wonder why the common DMA code does that. is_vmalloc_addr should work
> regardless of CONFIG_DMA_REMAP. Maybe just for efficiency?

is_vmalloc_addr() doesn't look very expensive (although it is not an 
inline function), so I am not sure of the reason behind it. It might be 
worth asking the author of the config.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 22 18:16:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 18:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcCDM-0000v5-Ux; Fri, 22 May 2020 18:16:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZMCQ=7E=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jcCDK-0000uw-V8
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 18:16:11 +0000
X-Inumbo-ID: 5011d0b0-9c58-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5011d0b0-9c58-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 18:16:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hLKSuZTQ4zo02OD599iePD2/rvDfeT9Ihh6Ut+Ss5iQ=; b=DyxCae+oxFpfbkgZWHTM5ZUy30
 QNvkMcKVhEsiKm65FENKqqRVklyCPJplTthhwbFSX5pNHFWGcMlwchcEd58WvagHoo2/Knwx0WCKi
 1VCuyhyQtCmyJ7q8REVY+XFLjUDnZloNlsyddVIjeKQZ4GLFweJW5+XnQUD1pO/1+JeU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jcCDI-0001HQ-Sg; Fri, 22 May 2020 18:16:08 +0000
Received: from 54-240-197-238.amazon.com ([54.240.197.238]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jcCDI-0004LV-Ix; Fri, 22 May 2020 18:16:08 +0000
Subject: Re: [PATCH 02/10] swiotlb-xen: remove start_dma_addr
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-2-sstabellini@kernel.org>
 <6241b8f6-5c51-0486-55ae-d571b117a184@xen.org>
 <alpine.DEB.2.21.2005211243060.27502@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <ab89bf08-a02f-85af-8f83-6a851d72ccf2@xen.org>
Date: Fri, 22 May 2020 19:16:06 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005211243060.27502@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 22/05/2020 04:55, Stefano Stabellini wrote:
> On Thu, 21 May 2020, Julien Grall wrote:
>> Hi,
>>
>> On 21/05/2020 00:45, Stefano Stabellini wrote:
>>> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>
>>> It is not strictly needed. Call virt_to_phys on xen_io_tlb_start
>>> instead. It will be useful not to have a start_dma_addr around with the
>>> next patches.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> ---
>>>    drivers/xen/swiotlb-xen.c | 5 +----
>>>    1 file changed, 1 insertion(+), 4 deletions(-)
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index a42129cba36e..b5e0492b07b9 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -52,8 +52,6 @@ static unsigned long xen_io_tlb_nslabs;
>>>     * Quick lookup value of the bus address of the IOTLB.
>>>     */
>>>    -static u64 start_dma_addr;
>>> -
>>>    /*
>>>     * Both of these functions should avoid XEN_PFN_PHYS because phys_addr_t
>>>     * can be 32bit when dma_addr_t is 64bit leading to a loss in
>>> @@ -241,7 +239,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
>>>    		m_ret = XEN_SWIOTLB_EFIXUP;
>>>    		goto error;
>>>    	}
>>> -	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
>>>    	if (early) {
>>>    		if (swiotlb_init_with_tbl(xen_io_tlb_start, xen_io_tlb_nslabs,
>>>    			 verbose))
>>> @@ -389,7 +386,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device *dev, struct page *page,
>>>    	 */
>>>    	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
>>>    -	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys,
>>> +	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start), phys,
>>
>> xen_virt_to_bus() is implemented as xen_phys_to_bus(virt_to_phys()). Can you
>> explain how the two are equivalent?
> 
> They are not equivalent. Looking at what swiotlb_tbl_map_single expects,
> and also the implementation of swiotlb_init_with_tbl, I think
> virt_to_phys is actually the one we want.
> 
> swiotlb_tbl_map_single compares the argument with __pa(tlb) which is
> __pa(xen_io_tlb_start) which is virt_to_phys(xen_io_tlb_start).

I can't find such a check in master. What is your baseline? Could you 
point to the exact line of code?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 22 18:29:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 18:29:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcCPv-00028b-3S; Fri, 22 May 2020 18:29:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0gwU=7E=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jcCPt-00028W-F7
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 18:29:09 +0000
X-Inumbo-ID: 1fde5a88-9c5a-11ea-ac08-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1fde5a88-9c5a-11ea-ac08-12813bfff9fa;
 Fri, 22 May 2020 18:29:08 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04MIHqY9089651;
 Fri, 22 May 2020 18:29:07 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=obEvZ/H7PmoKjhzmT9MX3mkfA+fX0+O5yN3D+7XaHGI=;
 b=FDdPgOIxynFIH4KxkHZ2jAXL5uyC9v0/hK++HLRmqtwsBcmXOuwJM9Emxrwz4dM3OuHW
 AVgGhSwhHHozVHKCX4WrTFyD5riZZih1syNfigx9P3cjHfGFCBsP7PR3t2X78tn+kwxR
 z8sbXRBDceHUWLv2K/NKyb8k/z3XV5+KAdEeLNFdrpP+jLh8ZHHMOVHXgVszJWd8uS9f
 ZF8IQzkhVGVybX23aYKE2PE023rUpTRK3jxdjgsYNy1/s+fFP59kjLQt1nXapoz40fa5
 PZ1VAMv8KcRQiP105K5xjDcZR78PrpBiTkwopJzI5Aw16JlbIakQdFo/CdGyDie94XDU lQ== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 3127krq7mr-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 22 May 2020 18:29:06 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04MIJIuk192805;
 Fri, 22 May 2020 18:29:06 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by aserp3020.oracle.com with ESMTP id 312t3f9e29-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 22 May 2020 18:29:06 +0000
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04MIT5Hf031272;
 Fri, 22 May 2020 18:29:05 GMT
Received: from [10.39.212.182] (/10.39.212.182)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 22 May 2020 11:29:04 -0700
Subject: Re: [PATCH 08/10] swiotlb-xen: introduce phys_to_dma/dma_to_phys
 translations
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-8-sstabellini@kernel.org>
 <7aab1c00-c115-b989-61e5-fd7d28fa0d07@oracle.com>
 <alpine.DEB.2.21.2005221033180.27502@sstabellini-ThinkPad-T480s>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <bf29c0a5-9933-a20e-d37e-4ed44ac7df84@oracle.com>
Date: Fri, 22 May 2020 14:29:02 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005221033180.27502@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9629
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 mlxlogscore=999
 phishscore=0 mlxscore=0 malwarescore=0 suspectscore=2 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005220147
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9629
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 phishscore=0 spamscore=0
 bulkscore=0 clxscore=1015 priorityscore=1501 mlxscore=0 impostorscore=0
 suspectscore=2 mlxlogscore=999 malwarescore=0 cotscore=-2147483648
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005220147
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>,
 linux-kernel@vger.kernel.org, konrad.wilk@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/22/20 1:34 PM, Stefano Stabellini wrote:
> On Thu, 21 May 2020, Boris Ostrovsky wrote:
>> On 5/20/20 7:45 PM, Stefano Stabellini wrote:
>>> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>>
>>> Call dma_to_phys in is_xen_swiotlb_buffer.
>>> Call phys_to_dma in xen_phys_to_bus.
>>> Call dma_to_phys in xen_bus_to_phys.
>>>
>>> Everything is taken care of by these changes except for
>>> xen_swiotlb_alloc_coherent and xen_swiotlb_free_coherent, which need a
>>> few explicit phys_to_dma/dma_to_phys calls.
>>>
>>> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
>>> ---
>>>  drivers/xen/swiotlb-xen.c | 20 ++++++++++++--------
>>>  1 file changed, 12 insertions(+), 8 deletions(-)
>>>
>>> diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
>>> index c50448fd9b75..d011c4c7aa72 100644
>>> --- a/drivers/xen/swiotlb-xen.c
>>> +++ b/drivers/xen/swiotlb-xen.c
>>> @@ -64,14 +64,16 @@ static inline dma_addr_t xen_phys_to_bus(struct device *dev, phys_addr_t paddr)
>>>  
>>>  	dma |= paddr & ~XEN_PAGE_MASK;
>>>  
>>> -	return dma;
>>> +	return phys_to_dma(dev, dma);
>>>  }
>>>  
>>> -static inline phys_addr_t xen_bus_to_phys(struct device *dev, dma_addr_t baddr)
>>> +static inline phys_addr_t xen_bus_to_phys(struct device *dev,
>>> +					  dma_addr_t dma_addr)
>>
>> Since now dma address != bus address this is no longer
>> xen_bus_to_phys(). (And I guess the same is true for xen_phys_to_bus()).
> Should I rename them to xen_dma_to_phys and xen_phys_to_dma?


Yes, please.


-boris




From xen-devel-bounces@lists.xenproject.org Fri May 22 20:00:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 20:00:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcDpX-0002AW-Ma; Fri, 22 May 2020 19:59:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcDpW-0002AR-H8
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 19:59:42 +0000
X-Inumbo-ID: c65c8946-9c66-11ea-ac16-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c65c8946-9c66-11ea-ac16-12813bfff9fa;
 Fri, 22 May 2020 19:59:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bigCs7kz9dupaZRr/XhUHqVc22ZabbthmQQmI/R8tyc=; b=g9hOWW0Bh6lY6ahjPibomoXzA
 hC+G1lk91WFUgw+9p1pzToLRT9sRp0Jlj0DYY/ChG4Y1/1uG3gqCepqfPgrqhn+7FNDKBpFE3Pnnt
 6DJZbUTyiz6uhhMKreYOk2LkmGnpUYWYbv/r7B3ro+VFlPJKqxOzB46Lh94A6eYKr5tds=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcDpV-0003N1-93; Fri, 22 May 2020 19:59:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcDpU-00014G-RX; Fri, 22 May 2020 19:59:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcDpU-0005Bc-Ql; Fri, 22 May 2020 19:59:40 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150320-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150320: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=d19f1ab0de8b763159513e3eaa12c5bc68122361
X-Osstest-Versions-That: qemuu=ae3aa5da96f4ccf0c2a28851449d92db9fcfad71
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 19:59:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150320 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150320/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150312
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150312
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150312
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150312
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150312
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150312
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d19f1ab0de8b763159513e3eaa12c5bc68122361
baseline version:
 qemuu                ae3aa5da96f4ccf0c2a28851449d92db9fcfad71

Last test of basis   150312  2020-05-21 23:10:46 Z    0 days
Testing same since   150320  2020-05-22 11:36:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Amanieu d'Antras <amanieu@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Guenter Roeck <linux@roeck-us.net>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Richard Henderson <richard.henderson@linaro.org>
  Thomas Huth <thuth@redhat.com>
  Wainer dos Santos Moschetta <wainersm@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   ae3aa5da96..d19f1ab0de  d19f1ab0de8b763159513e3eaa12c5bc68122361 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri May 22 20:11:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 20:11:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcE10-00049T-Bt; Fri, 22 May 2020 20:11:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=obdr=7E=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcE0y-00049O-P1
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 20:11:32 +0000
X-Inumbo-ID: 6ae5f686-9c68-11ea-ac1c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ae5f686-9c68-11ea-ac1c-12813bfff9fa;
 Fri, 22 May 2020 20:11:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fye7boAr+hM7Zxox/wr14KaRlgtb4dvpXwf/sG1HE58=; b=r0magAaEND3ms0I9OdcjE1HqV
 9CD1d/s1Qzfqg3x78cstQaPQ2uAvgkdm4wJQjemOSez0c9wXP+CHCD5c4jHI8S5gIklN6OPwftzHL
 PfITnKkY6SZ/jiNpYh6KQ6ZofUN0IdAJ25PJPPrHg1ghB4rq9ZXT9FC8Di00mKePV46RQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcE0s-0003hM-OU; Fri, 22 May 2020 20:11:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcE0s-0001ky-Cq; Fri, 22 May 2020 20:11:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcE0s-00005u-C5; Fri, 22 May 2020 20:11:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150330-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150330: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=87827167bb1737e826b0a8fe0abe07c0ace36ac5
X-Osstest-Versions-That: xen=1658a39b0031baf9ec44c8d51d1d1369a964a4f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 22 May 2020 20:11:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150330 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150330/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  87827167bb1737e826b0a8fe0abe07c0ace36ac5
baseline version:
 xen                  1658a39b0031baf9ec44c8d51d1d1369a964a4f9

Last test of basis   150324  2020-05-22 14:01:40 Z    0 days
Testing same since   150330  2020-05-22 17:02:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1658a39b00..87827167bb  87827167bb1737e826b0a8fe0abe07c0ace36ac5 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 22 20:36:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 20:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcEP2-000676-Cx; Fri, 22 May 2020 20:36:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h2Re=7E=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jcEP0-00066N-UM
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 20:36:22 +0000
X-Inumbo-ID: e590fa9a-9c6b-11ea-ac22-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e590fa9a-9c6b-11ea-ac22-12813bfff9fa;
 Fri, 22 May 2020 20:36:21 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id DAB8C20723;
 Fri, 22 May 2020 20:36:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590179781;
 bh=IK1thAcWlCaqCoeU+81mOVb4TLhtZ5VdD8ydtfEDSgs=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=RmpDsrr5nt9VA49OfsdkCWmcG0mYVN1lGMYIb4+IQoZLGBtI/9xzwXOBng+nnnv/q
 s+CeSUi8X84jzr1XX2my8nGSl5T199PQ2Se5rhGEzgd8/K3xyp86L02PEim92xLt+R
 WdnSgson2U54CaVck5NBfmV5zlQWQ3kK7kcOlk7U=
Date: Fri, 22 May 2020 13:36:20 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 01/10] swiotlb-xen: use vmalloc_to_page on vmalloc virt
 addresses
In-Reply-To: <a7f1d5c1-1ee1-61e3-22be-1db4dced08eb@xen.org>
Message-ID: <alpine.DEB.2.21.2005221329590.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-1-sstabellini@kernel.org>
 <23e5b6d8-c5d9-b43f-41cd-9d02d8ec0a7f@xen.org>
 <alpine.DEB.2.21.2005211235590.27502@sstabellini-ThinkPad-T480s>
 <a7f1d5c1-1ee1-61e3-22be-1db4dced08eb@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Stefano Stabellini <sstabellini@kernel.org>,
 konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 22 May 2020, Julien Grall wrote:
> Hi Stefano,
> 
> On 22/05/2020 04:54, Stefano Stabellini wrote:
> > On Thu, 21 May 2020, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 21/05/2020 00:45, Stefano Stabellini wrote:
> > > > From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
> > > > 
> > > > Don't just assume that virt_to_page works on all virtual addresses.
> > > > Instead, add an is_vmalloc_addr check and use vmalloc_to_page on vmalloc
> > > > virt addresses.
> > > 
> > > Can you provide an example where swiotlb is used with vmalloc()?
> > 
> > The issue was reported here, happening on the Raspberry Pi 4:
> > https://marc.info/?l=xen-devel&m=158862573216800
> 
> Thanks, it would be good if the commit message contained a bit more detail.
> 
> > 
> > If you are asking where in the Linux codebase the vmalloc is happening
> > specifically, I don't know for sure; my information is limited to the
> > stack trace that you see in the link. (I don't have a Raspberry Pi 4 yet,
> > but I shall have one soon.)
> 
> Looking at the code, there is a comment in xen_swiotlb_alloc_coherent()
> suggesting that xen_alloc_coherent_pages() may return an ioremap'ped region on
> Arm. So it feels to me that commit b877ac9815a8fe7e5f6d7fdde3dc34652408840a
> "xen/swiotlb: remember having called xen_create_contiguous_region()" has
> always been broken on Arm.

Yes, I think you are right.


> As an aside, your commit message also suggests this is an issue for every
> virtual address used in swiotlb, but only the virt_to_page() call in
> xen_swiotlb_free_coherent() is modified. Is that intended? If so, I think
> you want to update your commit message.

I see; yes, I can explain that better.



From xen-devel-bounces@lists.xenproject.org Fri May 22 20:47:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 22 May 2020 20:47:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcEa6-00076m-Fi; Fri, 22 May 2020 20:47:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h2Re=7E=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jcEa5-00076h-S3
 for xen-devel@lists.xenproject.org; Fri, 22 May 2020 20:47:49 +0000
X-Inumbo-ID: 7f43f146-9c6d-11ea-9887-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f43f146-9c6d-11ea-9887-bc764e2007e4;
 Fri, 22 May 2020 20:47:49 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 44A1C20723;
 Fri, 22 May 2020 20:47:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590180468;
 bh=ABXEd3vc08BMg4ogCVIQEmBIL34CZ7hlNe8SZ9PbUAg=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=QBQVmp1VQZ2dAmCNmxEI92dXe8V0WuRDH3d/aYG8pRWxtwKEeupy1RLtLeK0fqVGJ
 yH0YKWXQ2E1JqwtNUKz1bfqrKnayppYXP5JBYeNJZVX+JM4GetGxI8H2+TPmrB1hHL
 Dext8r1i2WPfHkst+1gu6kSoUmo/96C3RHGFAxus=
Date: Fri, 22 May 2020 13:47:47 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 02/10] swiotlb-xen: remove start_dma_addr
In-Reply-To: <ab89bf08-a02f-85af-8f83-6a851d72ccf2@xen.org>
Message-ID: <alpine.DEB.2.21.2005221336530.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2005201628330.27502@sstabellini-ThinkPad-T480s>
 <20200520234520.22563-2-sstabellini@kernel.org>
 <6241b8f6-5c51-0486-55ae-d571b117a184@xen.org>
 <alpine.DEB.2.21.2005211243060.27502@sstabellini-ThinkPad-T480s>
 <ab89bf08-a02f-85af-8f83-6a851d72ccf2@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: jgross@suse.com, Stefano Stabellini <sstabellini@kernel.org>,
 konrad.wilk@oracle.com, linux-kernel@vger.kernel.org,
 xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 22 May 2020, Julien Grall wrote:
> On 22/05/2020 04:55, Stefano Stabellini wrote:
> > On Thu, 21 May 2020, Julien Grall wrote:
> > > Hi,
> > > 
> > > On 21/05/2020 00:45, Stefano Stabellini wrote:
> > > > From: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > > 
> > > > It is not strictly needed. Call virt_to_phys on xen_io_tlb_start
> > > > instead. It will be useful not to have a start_dma_addr around with the
> > > > next patches.
> > > > 
> > > > Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
> > > > ---
> > > >    drivers/xen/swiotlb-xen.c | 5 +----
> > > >    1 file changed, 1 insertion(+), 4 deletions(-)
> > > > 
> > > > diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
> > > > index a42129cba36e..b5e0492b07b9 100644
> > > > --- a/drivers/xen/swiotlb-xen.c
> > > > +++ b/drivers/xen/swiotlb-xen.c
> > > > @@ -52,8 +52,6 @@ static unsigned long xen_io_tlb_nslabs;
> > > >     * Quick lookup value of the bus address of the IOTLB.
> > > >     */
> > > >    -static u64 start_dma_addr;
> > > > -
> > > >    /*
> > > >     * Both of these functions should avoid XEN_PFN_PHYS because
> > > > phys_addr_t
> > > >     * can be 32bit when dma_addr_t is 64bit leading to a loss in
> > > > @@ -241,7 +239,6 @@ int __ref xen_swiotlb_init(int verbose, bool early)
> > > >    		m_ret = XEN_SWIOTLB_EFIXUP;
> > > >    		goto error;
> > > >    	}
> > > > -	start_dma_addr = xen_virt_to_bus(xen_io_tlb_start);
> > > >    	if (early) {
> > > >    		if (swiotlb_init_with_tbl(xen_io_tlb_start,
> > > > xen_io_tlb_nslabs,
> > > >    			 verbose))
> > > > @@ -389,7 +386,7 @@ static dma_addr_t xen_swiotlb_map_page(struct device
> > > > *dev, struct page *page,
> > > >    	 */
> > > >    	trace_swiotlb_bounced(dev, dev_addr, size, swiotlb_force);
> > > >    -	map = swiotlb_tbl_map_single(dev, start_dma_addr, phys,
> > > > +	map = swiotlb_tbl_map_single(dev, virt_to_phys(xen_io_tlb_start),
> > > > phys,
> > > 
> > > xen_virt_to_bus() is implemented as xen_phys_to_bus(virt_to_phys()). Can
> > > you
> > > explain how the two are equivalent?
> > 
> > They are not equivalent. Looking at what swiotlb_tbl_map_single expects,
> > and also the implementation of swiotlb_init_with_tbl, I think
> > virt_to_phys is actually the one we want.
> > 
> > swiotlb_tbl_map_single compares the argument with __pa(tlb) which is
> > __pa(xen_io_tlb_start) which is virt_to_phys(xen_io_tlb_start).
> 
> I can't find such a check in master. What is your baseline? Could you point to
> the exact line of code?

My base is b85051e755b0e9d6dd8f17ef1da083851b83287d, which is master
from a couple of days back.


xen_swiotlb_init calls swiotlb_init_with_tbl, which takes a virt address
as a parameter (xen_io_tlb_start); that address gets converted to phys
and stored in io_tlb_start as a physical address.

Later, xen_swiotlb_map_page calls swiotlb_tbl_map_single, passing a dma
addr as a parameter (tbl_dma_addr). tbl_dma_addr is used to calculate
the right slot in the swiotlb buffer to use. (Strangely, tbl_dma_addr is
a dma_addr_t and is not converted to phys_addr_t before doing arithmetic
on it... I think tbl_dma_addr should be a phys addr.)

The comparison with io_tlb_start is done here:

	do {
		while (iommu_is_span_boundary(index, nslots, offset_slots,
					      max_slots)) {
			index += stride;
			if (index >= io_tlb_nslabs)
				index = 0;
			if (index == wrap)
				goto not_found;
		}

index is relative to io_tlb_start, and offset_slots is derived from tbl_dma_addr.


From xen-devel-bounces@lists.xenproject.org Sat May 23 00:10:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 00:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcHjV-0000Nj-Om; Sat, 23 May 2020 00:09:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcHjV-0000Ne-1X
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 00:09:45 +0000
X-Inumbo-ID: b4894556-9c89-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b4894556-9c89-11ea-ae69-bc764e2007e4;
 Sat, 23 May 2020 00:09:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AEycuEuvksZ/QCn8WRxCpKjaBxPCwUEje/L5LmuV4tQ=; b=0xMd0n/nlqznsjQqTe/h+z9oK
 HdD4qBBq4uuogi1DzKmSXkNYSRHyPl2x+U49biqTrtdvvv1XXv6gz2YKP5QWnf+BuNLFmW2D5BoO9
 s4rQRYgjWoTKYmAkiQLE80HRVZNcF8QwQOmS3i4NN/t0N2hdzj45280+LE6w0JT1a937I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcHjT-0000jE-UU; Sat, 23 May 2020 00:09:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcHjT-0006e0-H5; Sat, 23 May 2020 00:09:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcHjT-0006kf-GQ; Sat, 23 May 2020 00:09:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150333-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150333: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=abf378e6483195b98a3f32e2c9d017e0eeeb275f
X-Osstest-Versions-That: xen=87827167bb1737e826b0a8fe0abe07c0ace36ac5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 00:09:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150333 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150333/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  abf378e6483195b98a3f32e2c9d017e0eeeb275f
baseline version:
 xen                  87827167bb1737e826b0a8fe0abe07c0ace36ac5

Last test of basis   150330  2020-05-22 17:02:53 Z    0 days
Testing same since   150333  2020-05-22 21:00:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   87827167bb..abf378e648  abf378e6483195b98a3f32e2c9d017e0eeeb275f -> smoke


From xen-devel-bounces@lists.xenproject.org Sat May 23 00:51:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 00:51:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcINN-0004gG-48; Sat, 23 May 2020 00:50:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcINL-0004gB-Sv
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 00:50:55 +0000
X-Inumbo-ID: 717cf81a-9c8f-11ea-ac46-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 717cf81a-9c8f-11ea-ac46-12813bfff9fa;
 Sat, 23 May 2020 00:50:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/lx2S1KnlYeEEVhsyMyiqAXepm7Gy8WbfHtHOghIc8w=; b=G7ktd9qFKpwH9HbhhtB+Rkeqk
 w7kXxEmiCtT/v0bko++b+ryDoBZsCfVgV6QzgwI52QpeoCP0t/o5cEnEInCUz4kLHWzg9F15MOF7N
 /wrFerEBWIxnuwQr39HjGi7xrc6jLciwSw5V9oJrm6AOAOD5Bma8KgUg0Wi7NkG2MDzBA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcINE-0001X4-F3; Sat, 23 May 2020 00:50:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcINE-0000EV-62; Sat, 23 May 2020 00:50:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcINE-0001TG-5O; Sat, 23 May 2020 00:50:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150326-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150326: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=f6d102046817bb5c08876ff78a6a00f4d29ee269
X-Osstest-Versions-That: xen=dacdbf7088d6a3705a9831e73991c2b14c519a65
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 00:50:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150326 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150326/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-seattle   7 xen-boot                 fail REGR. vs. 150315

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150315

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150315
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150315
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150315
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150315
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150315
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150315
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150315
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150315
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150315
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150315
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  f6d102046817bb5c08876ff78a6a00f4d29ee269
baseline version:
 xen                  dacdbf7088d6a3705a9831e73991c2b14c519a65

Last test of basis   150315  2020-05-22 01:51:14 Z    0 days
Testing same since   150326  2020-05-22 14:06:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit f6d102046817bb5c08876ff78a6a00f4d29ee269
Author: George Dunlap <george.dunlap@citrix.com>
Date:   Fri May 22 10:35:10 2020 +0100

    golang: Update generated files after libxl_types.idl change
    
    c/s 7efd9f3d45 ("libxl: Handle Linux stubdomain specific QEMU
    options.") modified libxl_types.idl.  Run gengotypes.py again to update
    the generated golang bindings.
    
    Signed-off-by: George Dunlap <george.dunlap@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Sat May 23 02:14:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 02:14:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcJfs-0004sN-5j; Sat, 23 May 2020 02:14:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcJfq-0004sI-OX
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 02:14:06 +0000
X-Inumbo-ID: 0f34f7aa-9c9b-11ea-ac53-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f34f7aa-9c9b-11ea-ac53-12813bfff9fa;
 Sat, 23 May 2020 02:13:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Iupffri/zdm7KOkDvvvFvptqe2w6x1DZubwcQSDicjQ=; b=AWoeOr7Z2RrdXXc+u6Ojp97eW
 RwODLqTISDZ8m1QEZaUWym7d42uiU+ca0ClWAt4eKza2JsE5wO8f3PORv1GQO7e0WZsrZV3N7J1Eu
 kryUBzlE8fgr/bzEu7DD/cVWzF5MrXDtNDzJ4yE37gPcr8Y9KLge+GM0T8VsQhbFuUnes=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcJfh-00051l-5p; Sat, 23 May 2020 02:13:57 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcJfg-0003KC-Ot; Sat, 23 May 2020 02:13:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcJfg-0006Jy-N6; Sat, 23 May 2020 02:13:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150331-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150331: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
 linux-linus:build-i386-pvops:kernel-build:fail:regression
 linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=4286d192c803571e8ca43b0f1f8ea04d663a278a
X-Osstest-Versions-That: linux=2ea1940b84e55420a9e8feddcafd173edfe4df11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 02:13:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150331 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150331/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-thunderx  7 xen-boot                 fail REGR. vs. 150281
 build-i386-pvops              6 kernel-build             fail REGR. vs. 150281

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150281
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150281
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150281
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150281
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                4286d192c803571e8ca43b0f1f8ea04d663a278a
baseline version:
 linux                2ea1940b84e55420a9e8feddcafd173edfe4df11

Last test of basis   150281  2020-05-20 19:40:19 Z    2 days
Failing since        150296  2020-05-21 08:13:02 Z    1 days    4 attempts
Testing same since   150331  2020-05-22 18:10:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Andreas Färber <afaerber@suse.de>
  Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Bin Lu <Bin.Lu@arm.com>
  Brent Lu <brent.lu@intel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Christian Lachner <gladiac@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dave Jiang <dave.jiang@intel.com>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Jason Wang <jasowang@redhat.com>
  Johannes Berg <johannes.berg@intel.com>
  John Johansen <john.johansen@canonical.com>
  Kamal Dasu <kdasu.kdev@gmail.com>
  Keno Fischer <keno@juliacomputing.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Namjae Jeon <namjae.jeon@samsung.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  PeiSen Hou <pshou@realtek.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Rafal Hibner <rafal.hibner@secom.com.pl>
  Rafał Hibner <rafal.hibner@secom.com.pl>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Weinberger <richard@nod.at>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Bahling <sbahling@suse.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Will Deacon <will@kernel.org>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  YueHaibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             fail    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 873 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 23 03:54:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 03:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcLES-0005IN-Hs; Sat, 23 May 2020 03:53:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcLEQ-0005II-QZ
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 03:53:54 +0000
X-Inumbo-ID: 048809e2-9ca9-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 048809e2-9ca9-11ea-ae69-bc764e2007e4;
 Sat, 23 May 2020 03:53:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=HLuAJCsRJ4lAN08m2eNEkUUss1Y/sVtKURGXxJQUbcU=; b=bEEeRSlFODaqo7Q4vJ/hJA8fC
 iufDAfUmhotJJLeHasfJ/8Ax8p0542LX+JcP5+LXoNVqvB6hQS86DxngSNEo9G3jcfC/3VqBuWCfF
 nmuMkmG67s+bEBIhgsKl6IEWBEzwXfTANoNDDibRTi6yDAZpokHusNjq/dbSSPvpPB72c=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcLEO-000717-9H; Sat, 23 May 2020 03:53:52 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcLEO-0006vF-0i; Sat, 23 May 2020 03:53:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcLEO-0007sh-07; Sat, 23 May 2020 03:53:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150335-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150335: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=5e015d48a5ee68ba03addad2698364d8f015afec
X-Osstest-Versions-That: xen=abf378e6483195b98a3f32e2c9d017e0eeeb275f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 03:53:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150335 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150335/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec
baseline version:
 xen                  abf378e6483195b98a3f32e2c9d017e0eeeb275f

Last test of basis   150333  2020-05-22 21:00:46 Z    0 days
Testing same since   150335  2020-05-23 01:00:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   abf378e648..5e015d48a5  5e015d48a5ee68ba03addad2698364d8f015afec -> smoke


From xen-devel-bounces@lists.xenproject.org Sat May 23 06:06:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 06:06:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcNIY-0000cp-LE; Sat, 23 May 2020 06:06:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcNIX-0000ck-QR
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 06:06:17 +0000
X-Inumbo-ID: 82e85546-9cbb-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82e85546-9cbb-11ea-9887-bc764e2007e4;
 Sat, 23 May 2020 06:06:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Gui42K4ceKStMOq0LR/7Yc8hGQaAb55AjsFKcHc/4Ac=; b=dNKWcOVAMelpX7zfL+H6R588L
 zs2lLWG9WYauOiIu7nacS3VX0DzrYm4fkyHu+U35bMjWK1ZMHGF8ApHj7w0Ex2/S/wpQxA4BACpEo
 VuBBLh0a44uwpMyxE/30mgcT6OHJA7RpHap/6FhNvoF+sHT5atus2Ayb1a0mjWkeFsp5U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcNIV-0001rL-2I; Sat, 23 May 2020 06:06:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcNIU-0007Vs-Nb; Sat, 23 May 2020 06:06:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcNIU-0005yQ-Ll; Sat, 23 May 2020 06:06:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150332-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150332: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=1cc9c62e42445f2346bf0842cafe82c5ba404529
X-Osstest-Versions-That: qemuu=d19f1ab0de8b763159513e3eaa12c5bc68122361
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 06:06:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150332 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150332/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150320
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150320
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150320
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150320
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150320
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150320
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                1cc9c62e42445f2346bf0842cafe82c5ba404529
baseline version:
 qemuu                d19f1ab0de8b763159513e3eaa12c5bc68122361

Last test of basis   150320  2020-05-22 11:36:31 Z    0 days
Testing same since   150332  2020-05-22 20:08:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Mark Cave-Ayland <mark.cave-ayland@ilande.co.uk>
  Peter Maydell <peter.maydell@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   d19f1ab0de..1cc9c62e42  1cc9c62e42445f2346bf0842cafe82c5ba404529 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat May 23 07:28:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 07:28:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcOZT-0007sA-Oy; Sat, 23 May 2020 07:27:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcOZS-0007s5-0T
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 07:27:50 +0000
X-Inumbo-ID: e6969c50-9cc6-11ea-ac75-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6969c50-9cc6-11ea-ac75-12813bfff9fa;
 Sat, 23 May 2020 07:27:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=JKJ6KrFVf2N4tYx2kTF+LBHbun8nZ28jzU3/jNThKVA=; b=yuAi9m3cIHA9qQNr9KaKlL9NY
 QHku3CXTHzBRh7tX4uYh3Nk1QWWjIoC/SKKbvFfvCl0qo+euSOO2JTbElileY9m2wUuM1dcZ21fBA
 0pij2FS3NAz+6BaC563Up20yYvXnZ5Z/OtEvgmEa/5gHBBbDS6+SoM6vdfbr84liYtLbU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcOZP-0003Xd-2F; Sat, 23 May 2020 07:27:47 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcOZO-0003ca-Pa; Sat, 23 May 2020 07:27:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcOZO-0001lm-Ow; Sat, 23 May 2020 07:27:46 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150339-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150339: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=f718709431429fbb4e1fc6781f3a3752a7f43f70
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 07:27:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150339 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150339/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              f718709431429fbb4e1fc6781f3a3752a7f43f70
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  127 days
Failing since        146211  2020-01-18 04:18:52 Z  126 days  117 attempts
Testing same since   150339  2020-05-23 04:19:51 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19289 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 23 10:57:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 10:57:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcRpT-00020h-NA; Sat, 23 May 2020 10:56:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcRpS-000202-Ii
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 10:56:34 +0000
X-Inumbo-ID: 1049cabe-9ce4-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1049cabe-9ce4-11ea-b07b-bc764e2007e4;
 Sat, 23 May 2020 10:56:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eNhMAdfkPefOglfsF+qkC9WplE/CsbQf2gWrPLhxltw=; b=05Yq0hwmS+qbT3DusT+ItgGpo
 h8VszWUlCsZelOPsBKRBawmXN1v4CF+FU/wSMTZXpVWnvF2RpLHXQeaJQJ7pNt+xe1Mhzstl7zaky
 3oHo0MkFI1hCIlz5qODQdLE+TR8vBY0lc4cS1SfVXCi7VyVIZsYmR/wLwZD+cjUqY2SBk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcRpQ-0008MK-En; Sat, 23 May 2020 10:56:32 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcRpQ-0007pA-4R; Sat, 23 May 2020 10:56:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcRpQ-0004Bu-3p; Sat, 23 May 2020 10:56:32 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150336-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150336: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=abf378e6483195b98a3f32e2c9d017e0eeeb275f
X-Osstest-Versions-That: xen=dacdbf7088d6a3705a9831e73991c2b14c519a65
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 10:56:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150336 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150336/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150315

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150315
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150315
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150315
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150315
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150315
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150315
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150315
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150315
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150315
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150315
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  abf378e6483195b98a3f32e2c9d017e0eeeb275f
baseline version:
 xen                  dacdbf7088d6a3705a9831e73991c2b14c519a65

Last test of basis   150315  2020-05-22 01:51:14 Z    1 days
Failing since        150326  2020-05-22 14:06:40 Z    0 days    2 attempts
Testing same since   150336  2020-05-23 01:06:17 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   dacdbf7088..abf378e648  abf378e6483195b98a3f32e2c9d017e0eeeb275f -> master


From xen-devel-bounces@lists.xenproject.org Sat May 23 11:48:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 11:48:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcSdi-0006h9-SL; Sat, 23 May 2020 11:48:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcSdi-0006h4-7U
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 11:48:30 +0000
X-Inumbo-ID: 4deb6cfe-9ceb-11ea-acbe-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4deb6cfe-9ceb-11ea-acbe-12813bfff9fa;
 Sat, 23 May 2020 11:48:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=mi0pDxwwzlRSIvraBTO+dOt3CsJawcNj1wtdGhfAIoc=; b=dqBU4nlvJqNgY5advLOxYumcEW
 TlMFxXdVUDwaB3WSwJyaFMsYRgN4ta+adYBZN8jVl8U7QFam8IgeXmDBb3KpLOUfQv+GYjgd1fYLB
 glxIUhJdjEYfQk3Y/mKCZHZ3LadtK+O9yo1sAMF7fTcnNuynXT4HIKqRM4OkE/iHRNpo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcSda-00011N-0N; Sat, 23 May 2020 11:48:22 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcSdZ-0000YQ-Om; Sat, 23 May 2020 11:48:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcSdZ-0003vb-O9; Sat, 23 May 2020 11:48:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [libvirt bisection] complete build-amd64-libvirt
Message-Id: <E1jcSdZ-0003vb-O9@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 11:48:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job build-amd64-libvirt
testid libvirt-build

Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  4d5f50d86b760864240c695adc341379fb47a796
  Bug not present: a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/148859/


  commit 4d5f50d86b760864240c695adc341379fb47a796
  Author: Pavel Hrdina <phrdina@redhat.com>
  Date:   Wed Jan 8 22:54:31 2020 +0100
  
      bootstrap.conf: stop creating AUTHORS file
      
      The existence of AUTHORS file is required for GNU projects but since
      commit <8bfb36db40f38e92823b657b5a342652064b5adc> we do not require
      these files to exist.
      
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/libvirt/build-amd64-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/libvirt/build-amd64-libvirt.libvirt-build --summary-out=tmp/150342.bisection-summary --basis-template=146182 --blessings=real,real-bisect libvirt build-amd64-libvirt libvirt-build
Searching for failure / basis pass:
 150339 fail [host=albana1] / 146182 [host=rimava1] 146156 [host=huxelrebe1] 146103 [host=fiano0] 146061 [host=chardonnay1] 145969 [host=godello1] 145906 [host=godello1] 145842 [host=godello1] 145779 [host=godello0] 145511 [host=huxelrebe1] 145212 [host=godello0] 145173 [host=godello0] 145133 [host=godello1] 145054 [host=godello0] 144995 [host=godello1] 144958 [host=albana0] 144920 [host=godello0] 144885 ok.
Failure / basis pass flights: 150339 / 144885
(tree with no url: minios)
(tree in basispass but not in latest: libvirt_gnulib)
Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 dacdbf7088d6a3705a9831e73991c2b14c519a65
Basis pass 6f894a29d812381ffaf8e321f710ceb4bef8f944 317d3eeb963a515e15a63fa356d8ebcda7041a51 804666c86e7b6f04fe5c5cfdb13199c19e0e99b0 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 c9115affa6f83aebe29ae9cbf503aa163911a5bb
Generating revisions with ./adhoc-revtuple-generator  git://libvirt.org/libvirt.git#6f894a29d812381ffaf8e321f710ceb4bef8f944-f718709431429fbb4e1fc6781f3a3752a7f43f70 https://gitlab.com/keycodemap/keycodemapdb.git#317d3eeb963a515e15a63fa356d8ebcda7041a51-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/osstest/ovmf.git#804666c86e7b6f04fe5c5cfdb13199c19e0e99b0-1c877c716038a862e876cac8f0929bab4a96e849 git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484fe09f50876798-3c659044118e34603161457db9934a34f816d78b git://xenbits.xen.org/qemu-xen.git#933ebad2470a169504799a1d95b8e410bd9847ef-410cc30fdc590417ae730d635bbc70257adf6750 git://xenbits.xen.org/osstest/seabios.git#f21b5a4aeb020f2a5e2c6503f906a9349dd2f069-7e9db04923854b7f4edca33948f55abee22907b9 git://xenbits.xen.org/xen.git#c9115affa6f83aebe29ae9cbf503aa163911a5bb-dacdbf7088d6a3705a9831e73991c2b14c519a65
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.
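The gc.log lockout above is recoverable by hand. A minimal sketch of the recovery the messages ask for, run against a throwaway repository (the temp-dir path is an assumption for illustration only):

```shell
# git refuses further automatic gc while .git/gc.log exists; the fix is to
# address the reported cause (unreachable loose objects) and remove the file.
set -e
repo=$(mktemp -d)
git -C "$repo" init -q

# Simulate the leftover gc.log from a failed background gc run.
echo "warning: There are too many unreachable loose objects;" \
     "run 'git prune' to remove them." > "$repo/.git/gc.log"

git -C "$repo" prune        # drop unreachable loose objects (the root cause)
rm -f "$repo/.git/gc.log"   # automatic cleanup resumes once the file is gone
git -C "$repo" gc --quiet   # verify gc now completes without complaint
```
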

Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 212271 nodes in revision graph
Searching for test results:
 144885 pass 6f894a29d812381ffaf8e321f710ceb4bef8f944 317d3eeb963a515e15a63fa356d8ebcda7041a51 804666c86e7b6f04fe5c5cfdb13199c19e0e99b0 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 c9115affa6f83aebe29ae9cbf503aa163911a5bb
 144958 [host=albana0]
 144920 [host=godello0]
 144995 [host=godello1]
 145054 [host=godello0]
 145173 [host=godello0]
 145133 [host=godello1]
 145212 [host=godello0]
 145511 [host=huxelrebe1]
 145542 [host=godello1]
 145589 [host=godello1]
 145621 [host=godello1]
 145639 [host=godello1]
 145626 [host=godello1]
 145632 [host=godello1]
 145627 [host=godello1]
 145623 [host=godello1]
 145636 [host=godello1]
 145630 [host=godello1]
 145633 [host=godello1]
 145637 [host=godello1]
 145638 [host=godello1]
 145640 [host=godello1]
 145642 [host=godello1]
 145643 [host=godello1]
 145644 [host=godello1]
 145656 [host=godello1]
 145710 [host=godello0]
 145779 [host=godello0]
 145842 [host=godello1]
 145906 [host=godello1]
 145969 [host=godello1]
 146061 [host=chardonnay1]
 146103 [host=fiano0]
 146182 [host=rimava1]
 146156 [host=huxelrebe1]
 146223 [host=albana0]
 146238 [host=albana0]
 146239 [host=albana0]
 146256 [host=albana0]
 146241 [host=albana0]
 146240 [host=albana0]
 146211 [host=albana0]
 146243 [host=albana0]
 146245 [host=albana0]
 146260 [host=albana0]
 146249 [host=albana0]
 146250 [host=albana0]
 146264 [host=albana0]
 146252 [host=albana0]
 146253 [host=albana0]
 146265 [host=albana0]
 146255 [host=albana0]
 146266 [host=albana0]
 146269 [host=albana0]
 146299 fail irrelevant
 146344 [host=albana0]
 146374 [host=albana0]
 146410 fail irrelevant
 146455 fail irrelevant
 146509 [host=albana0]
 146489 [host=albana0]
 146528 [host=albana0]
 146546 [host=albana0]
 146565 [host=albana0]
 146586 [host=albana0]
 146616 [host=albana0]
 146636 fail irrelevant
 146660 [host=albana0]
 146689 [host=albana0]
 146737 [host=albana0]
 146756 fail irrelevant
 146714 fail irrelevant
 146775 [host=albana0]
 146799 fail irrelevant
 146843 []
 146921 [host=albana0]
 146995 fail irrelevant
 147040 [host=albana0]
 147084 fail irrelevant
 147141 [host=albana0]
 147195 [host=albana0]
 147265 fail irrelevant
 147340 [host=albana0]
 147419 [host=albana0]
 147477 fail irrelevant
 147520 [host=albana0]
 147583 [host=albana0]
 147649 [host=albana0]
 147703 fail irrelevant
 147784 [host=albana0]
 147736 [host=albana0]
 147885 [host=albana0]
 147831 [host=albana0]
 147981 [host=albana0]
 148068 []
 148144 [host=albana0]
 148196 [host=albana0]
 148269 fail irrelevant
 148331 fail irrelevant
 148406 [host=albana0]
 148459 [host=albana0]
 148503 [host=albana0]
 148547 [host=albana0]
 148615 [host=albana0]
 148583 [host=albana0]
 148651 [host=albana0]
 148688 []
 148729 fail irrelevant
 148775 fail irrelevant
 148828 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 710ff7490ad897383eb35d1becadabd21a733f24 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d dda31ce9521c3b6a7750076f79427be77dea9b5b
 148847 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148799 fail irrelevant
 148831 fail d61f95cf6a6fbd564e104c168d325581acd9cd8d 317d3eeb963a515e15a63fa356d8ebcda7041a51 9a1f14ad721bbcd833ec5108944c44a502392f03 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d e0fbb9121a684b5604a4e572c9c7e4016ad5505c
 148834 pass 4aeb0cc4d7876f9a2c6a024a32d883808096da77 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148849 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148811 pass 6f894a29d812381ffaf8e321f710ceb4bef8f944 317d3eeb963a515e15a63fa356d8ebcda7041a51 804666c86e7b6f04fe5c5cfdb13199c19e0e99b0 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 c9115affa6f83aebe29ae9cbf503aa163911a5bb
 148819 fail irrelevant
 148821 fail 79ebc31a1b671577f413a4fed4addca8ae3423c9 317d3eeb963a515e15a63fa356d8ebcda7041a51 eafd990f2606431d45cf0bbdbfee6d5959628de7 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d ef4666f63c9677b22a05b204e85fef5f207c0a5c
 148836 fail 2feaa925bba06e77be918bcbfab63bc8201c8f19 317d3eeb963a515e15a63fa356d8ebcda7041a51 4e2ac8062cbe907be9fbf6b2e6f1fc947690c4de d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 1eeedaf5a0d9ed6324f3bd5b700bb22eb4355341
 148822 pass 8b58b5ee03c6d4b7916d9ee6cdf40571e1e12919 317d3eeb963a515e15a63fa356d8ebcda7041a51 cf3ad972a2105ffa3795ddb1d9c149c7fc369f9b d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 09488b2bb76da2c78b9e25c7041e004baba1ca6a
 148824 fail 29d43bf96a3e5886f1b32c78bbb16d1507bd0d9e 317d3eeb963a515e15a63fa356d8ebcda7041a51 9a1f14ad721bbcd833ec5108944c44a502392f03 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 4345dff75a7838649c75a85aeb0e0de93853201d
 148852 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148839 fail d0236e2a554f2321512276b897e8a8a44f68e969 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 3c4b2eef4941c8a81d04337c6df31175a881635f
 148843 fail c02e9621b950f9af024c7abed2eef1f70bdb47aa 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148830 fail irrelevant
 148845 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148856 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148859 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148887 [host=albana0]
 148954 [host=albana0]
 149001 fail irrelevant
 149043 [host=albana0]
 149074 [host=albana0]
 149123 [host=albana0]
 149154 [host=albana0]
 149193 [host=albana0]
 149234 [host=albana0]
 149268 [host=albana0]
 149314 fail irrelevant
 149376 fail irrelevant
 149407 [host=albana0]
 149434 [host=albana0]
 149455 fail irrelevant
 149482 [host=albana0]
 149550 fail irrelevant
 149508 [host=albana0]
 149590 [host=albana0]
 149643 [host=albana0]
 149629 [host=albana0]
 149615 fail irrelevant
 149635 fail irrelevant
 149684 fail irrelevant
 149666 [host=albana0]
 149696 [host=albana0]
 149732 fail irrelevant
 149773 [host=albana0]
 149746 [host=albana0]
 149803 [host=albana0]
 149826 fail irrelevant
 149886 fail irrelevant
 149833 [host=albana0]
 149850 [host=albana0]
 149870 [host=albana0]
 149909 fail de7e9840e7f888f1a872c86b0cb793b283193137 317d3eeb963a515e15a63fa356d8ebcda7041a51 e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb 0135be8bd8cd60090298f02310691b688d95c3a8
 149902 fail de7e9840e7f888f1a872c86b0cb793b283193137 317d3eeb963a515e15a63fa356d8ebcda7041a51 e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb 0135be8bd8cd60090298f02310691b688d95c3a8
 149895 fail irrelevant
 150053 [host=albana0]
 150062 [host=albana0]
 150083 [host=albana0]
 150099 fail irrelevant
 150121 [host=albana0]
 150155 [host=albana0]
 150131 [host=albana0]
 150146 [host=albana0]
 150170 fail irrelevant
 150210 fail irrelevant
 150228 fail 144dfe4215902b40a9d17fdb326054bbd8e07563 27acf0ef828bf719b2053ba398b195829413dbdd 9099dcbd61c8d22b5eedda783d143c222d2705a3 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 665dce17c04b574bb0ebcde4cac129c3dd9e681c 664e1bc12f8658da124a4eff7a8f16da073bd47f
 150190 [host=albana0]
 150222 fail 144dfe4215902b40a9d17fdb326054bbd8e07563 27acf0ef828bf719b2053ba398b195829413dbdd 9099dcbd61c8d22b5eedda783d143c222d2705a3 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 665dce17c04b574bb0ebcde4cac129c3dd9e681c 57880053dd24012e9f59c23b630fefe07e15dc49
 150237 [host=albana0]
 150268 [host=albana0]
 150287 fail irrelevant
 150317 [host=albana0]
 150339 fail f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 dacdbf7088d6a3705a9831e73991c2b14c519a65
 150340 pass 6f894a29d812381ffaf8e321f710ceb4bef8f944 317d3eeb963a515e15a63fa356d8ebcda7041a51 804666c86e7b6f04fe5c5cfdb13199c19e0e99b0 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 c9115affa6f83aebe29ae9cbf503aa163911a5bb
 150342 fail f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 dacdbf7088d6a3705a9831e73991c2b14c519a65
Searching for interesting versions
 Result found: flight 144885 (pass), for basis pass
 Result found: flight 150339 (fail), for basis failure
 Repro found: flight 150340 (pass), for basis pass
 Repro found: flight 150342 (fail), for basis failure
 0 revisions at a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
No revisions left to test, checking graph state.
 Result found: flight 148845 (pass), for last pass
 Result found: flight 148847 (fail), for first failure
 Repro found: flight 148849 (pass), for last pass
 Repro found: flight 148852 (fail), for first failure
 Repro found: flight 148856 (pass), for last pass
 Repro found: flight 148859 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  4d5f50d86b760864240c695adc341379fb47a796
  Bug not present: a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/148859/

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.


  commit 4d5f50d86b760864240c695adc341379fb47a796
  Author: Pavel Hrdina <phrdina@redhat.com>
  Date:   Wed Jan 8 22:54:31 2020 +0100
  
      bootstrap.conf: stop creating AUTHORS file
      
      The existence of AUTHORS file is required for GNU projects but since
      commit <8bfb36db40f38e92823b657b5a342652064b5adc> we do not require
      these files to exist.
      
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.17268 to fit
pnmtopng: 28 colors found
Revision graph left in /home/logs/results/bisect/libvirt/build-amd64-libvirt.libvirt-build.{dot,ps,png,html,svg}.
----------------------------------------
150342: tolerable FAIL

flight 150342 libvirt real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/150342/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build           fail baseline untested


jobs:
 build-amd64                                                  pass    
 build-amd64-libvirt                                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat May 23 12:23:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 12:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcTAm-0001v1-4T; Sat, 23 May 2020 12:22:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RGEO=7F=intel.com=lkp@srs-us1.protection.inumbo.net>)
 id 1jcTAk-0001uv-PE
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 12:22:38 +0000
X-Inumbo-ID: 15bfd702-9cf0-11ea-9887-bc764e2007e4
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15bfd702-9cf0-11ea-9887-bc764e2007e4;
 Sat, 23 May 2020 12:22:36 +0000 (UTC)
IronPort-SDR: BrsEpy2Nrjj8Nlxo1bwTDG6jbHPYfpwBt6cPDDgK1io4t9qtduwnjGFkqMBMyO1uxGiWTnHuwe
 5BlHZcNpJklA==
X-Amp-Result: UNKNOWN
X-Amp-Original-Verdict: FILE UNKNOWN
X-Amp-File-Uploaded: False
Received: from orsmga005.jf.intel.com ([10.7.209.41])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 23 May 2020 05:22:35 -0700
IronPort-SDR: 222iafxzv61I8KbH5GVQ5D/h52NTZfEfszMH+7M/y2JJSP2akYJ4JZ8DB/PoGo2hp3IkmPgSIH
 q2szgQOqa0pg==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,425,1583222400"; 
 d="gz'50?scan'50,208,50";a="441175164"
Received: from xsang-optiplex-9020.sh.intel.com (HELO xsang-OptiPlex-9020)
 ([10.239.159.140])
 by orsmga005.jf.intel.com with ESMTP; 23 May 2020 05:22:27 -0700
Date: Sat, 23 May 2020 20:32:16 +0800
From: kbuild test robot <lkp@intel.com>
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de,
 mingo@redhat.com, bp@alien8.de, hpa@zytor.com, x86@kernel.org,
 boris.ostrovsky@oracle.com, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com,
 roger.pau@citrix.com, axboe@kernel.dk, davem@davemloft.net,
 rjw@rjwysocki.net, len.brown@intel.com, pavel@ucw.cz,
 peterz@infradead.org, eduval@amazon.com, sblbir@amazon.com,
 xen-devel@lists.xenproject.org, vkuznets@redhat.com,
 netdev@vger.kernel.org, linux-kernel@vger.kernel.org,
 dwmw@amazon.co.uk, benh@kernel.crashing.org
Subject: Re: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Message-ID: <20200523123216.GC14189@xsang-OptiPlex-9020>
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="tsOsTdHNUZQcU9Ye"
Content-Disposition: inline
In-Reply-To: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: kbuild-all@lists.01.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--tsOsTdHNUZQcU9Ye
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline

Hi Anchal,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on linus/master]
[also build test WARNING on v5.7-rc6]
[cannot apply to xen-tip/linux-next tip/irq/core tip/auto-latest next-20200519]
[if your patch is applied to the wrong git tree, please drop us a note to help
improve the system. BTW, we also suggest using the '--base' option to specify
the base tree in git format-patch; please see https://stackoverflow.com/a/37406982]

url:    https://github.com/0day-ci/linux/commits/Anchal-Agarwal/Fix-PM-hibernation-in-Xen-guests/20200520-073211
base:   https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git 03fb3acae4be8a6b680ffedb220a8b6c07260b40
config: x86_64-allmodconfig (attached as .config)
reproduce:
        # apt-get install sparse
        # sparse version: v0.6.1-193-gb8fad4bc-dirty
        # save the attached .config to linux build tree
        make C=1 ARCH=x86_64 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__'
:::::: branch date: 11 hours ago
:::::: commit date: 11 hours ago

If you fix the issue, kindly add the following tag as appropriate:
Reported-by: kbuild test robot <lkp@intel.com>


sparse warnings: (new ones prefixed by >>)

>> drivers/block/xen-blkfront.c:2700:0: sparse: sparse: missing terminating " character
   drivers/block/xen-blkfront.c:2701:0: sparse: sparse: missing terminating " character
   drivers/block/xen-blkfront.c:2700:25: sparse: sparse: Expected ) in function call
   drivers/block/xen-blkfront.c:2700:25: sparse: sparse: got The

# https://github.com/0day-ci/linux/commit/1997467d18e784a64ee0fe00875492e9605f6147
git remote add linux-review https://github.com/0day-ci/linux
git remote update linux-review
git checkout 1997467d18e784a64ee0fe00875492e9605f6147
vim +2700 drivers/block/xen-blkfront.c

9f27ee59503865 Jeremy Fitzhardinge 2007-07-17  2672  
1997467d18e784 Munehisa Kamata     2020-05-19  2673  static int blkfront_freeze(struct xenbus_device *dev)
1997467d18e784 Munehisa Kamata     2020-05-19  2674  {
1997467d18e784 Munehisa Kamata     2020-05-19  2675  	unsigned int i;
1997467d18e784 Munehisa Kamata     2020-05-19  2676  	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
1997467d18e784 Munehisa Kamata     2020-05-19  2677  	struct blkfront_ring_info *rinfo;
1997467d18e784 Munehisa Kamata     2020-05-19  2678  	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
1997467d18e784 Munehisa Kamata     2020-05-19  2679  	unsigned int timeout = 5 * HZ;
1997467d18e784 Munehisa Kamata     2020-05-19  2680  	unsigned long flags;
1997467d18e784 Munehisa Kamata     2020-05-19  2681  	int err = 0;
1997467d18e784 Munehisa Kamata     2020-05-19  2682  
1997467d18e784 Munehisa Kamata     2020-05-19  2683  	info->connected = BLKIF_STATE_FREEZING;
1997467d18e784 Munehisa Kamata     2020-05-19  2684  
1997467d18e784 Munehisa Kamata     2020-05-19  2685  	blk_mq_freeze_queue(info->rq);
1997467d18e784 Munehisa Kamata     2020-05-19  2686  	blk_mq_quiesce_queue(info->rq);
1997467d18e784 Munehisa Kamata     2020-05-19  2687  
1997467d18e784 Munehisa Kamata     2020-05-19  2688  	for_each_rinfo(info, rinfo, i) {
1997467d18e784 Munehisa Kamata     2020-05-19  2689  	    /* No more gnttab callback work. */
1997467d18e784 Munehisa Kamata     2020-05-19  2690  	    gnttab_cancel_free_callback(&rinfo->callback);
1997467d18e784 Munehisa Kamata     2020-05-19  2691  	    /* Flush gnttab callback work. Must be done with no locks held. */
1997467d18e784 Munehisa Kamata     2020-05-19  2692  	    flush_work(&rinfo->work);
1997467d18e784 Munehisa Kamata     2020-05-19  2693  	}
1997467d18e784 Munehisa Kamata     2020-05-19  2694  
1997467d18e784 Munehisa Kamata     2020-05-19  2695  	for_each_rinfo(info, rinfo, i) {
1997467d18e784 Munehisa Kamata     2020-05-19  2696  	    spin_lock_irqsave(&rinfo->ring_lock, flags);
1997467d18e784 Munehisa Kamata     2020-05-19  2697  	    if (RING_FULL(&rinfo->ring)
1997467d18e784 Munehisa Kamata     2020-05-19  2698  		    || RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {
1997467d18e784 Munehisa Kamata     2020-05-19  2699  		xenbus_dev_error(dev, err, "Hibernation Failed.
1997467d18e784 Munehisa Kamata     2020-05-19 @2700  			The ring is still busy");
1997467d18e784 Munehisa Kamata     2020-05-19  2701  		info->connected = BLKIF_STATE_CONNECTED;
1997467d18e784 Munehisa Kamata     2020-05-19  2702  		spin_unlock_irqrestore(&rinfo->ring_lock, flags);
1997467d18e784 Munehisa Kamata     2020-05-19  2703  		return -EBUSY;
1997467d18e784 Munehisa Kamata     2020-05-19  2704  	}
1997467d18e784 Munehisa Kamata     2020-05-19  2705  	    spin_unlock_irqrestore(&rinfo->ring_lock, flags);
1997467d18e784 Munehisa Kamata     2020-05-19  2706  	}
1997467d18e784 Munehisa Kamata     2020-05-19  2707  	/* Kick the backend to disconnect */
1997467d18e784 Munehisa Kamata     2020-05-19  2708  	xenbus_switch_state(dev, XenbusStateClosing);
1997467d18e784 Munehisa Kamata     2020-05-19  2709  
1997467d18e784 Munehisa Kamata     2020-05-19  2710  	/*
1997467d18e784 Munehisa Kamata     2020-05-19  2711  	 * We don't want to move forward before the frontend is diconnected
1997467d18e784 Munehisa Kamata     2020-05-19  2712  	 * from the backend cleanly.
1997467d18e784 Munehisa Kamata     2020-05-19  2713  	 */
1997467d18e784 Munehisa Kamata     2020-05-19  2714  	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
1997467d18e784 Munehisa Kamata     2020-05-19  2715  					      timeout);
1997467d18e784 Munehisa Kamata     2020-05-19  2716  	if (!timeout) {
1997467d18e784 Munehisa Kamata     2020-05-19  2717  		err = -EBUSY;
1997467d18e784 Munehisa Kamata     2020-05-19  2718  		xenbus_dev_error(dev, err, "Freezing timed out;"
1997467d18e784 Munehisa Kamata     2020-05-19  2719  				 "the device may become inconsistent state");
1997467d18e784 Munehisa Kamata     2020-05-19  2720  	}
1997467d18e784 Munehisa Kamata     2020-05-19  2721  
1997467d18e784 Munehisa Kamata     2020-05-19  2722  	return err;
1997467d18e784 Munehisa Kamata     2020-05-19  2723  }
1997467d18e784 Munehisa Kamata     2020-05-19  2724  

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

--tsOsTdHNUZQcU9Ye
Content-Type: application/gzip
Content-Disposition: attachment; filename=".config.gz"
Content-Transfer-Encoding: base64

HJLjw9CshI0W1L+eEFO/5dbm5TIejULHMNbYj+nAs9LgoO6s2pp6apbl9fm73c42G8VoGW8P
1rCdBGw4t1mzg1lupy6WUi4rPu7URN0jNPUjehhmTFwG1XuTMRprS0FFyVdRPo0bpUPmVMNQ
M5pNO4pjOJXFL/zb8+H+6fbPL4eJQwXWyX66vjn8utAvX78+PD5PzIpHuWH0HRZCuKYm/UCD
yi1IokaI+C1e2IPCqpAaVkUZ0HPSes6ZLgfAdiNyKpx0+QJZmiG9kx5lq1jbBjWPiMUtrKT7
xQP0oxS9YYjPWas7LFJzNCHO/UTCdB/bFgtuFaZcjaB+Ak7L+Dfza1uDrl1GAsxNMxdncRAC
4f3OeVnv/J1RDv1/jjc4y77+O8HbnVtzS1XoCAorc93c+AbTVyvrcpEqRA41geTW1ztb6JY8
BQSApq8de4Bti8E8M4fPj9eLT8PKvF3mMMMr3zTBgJ4J5cAHXNOqqwGC5Q9h0R3FlHHZfA+3
WEoxf2e7HmrQaTsE1jUt3UAIc8X89CnL2EOtY+8VoWOtrc+849OZsMdNGY8xRvuEMnss4HC/
KtInA0PSWGMGi832LdPxgw5ENtKG1hJWeXWgXq8insetv6Pj+YqDAIS1BjEA7NVNvJNd/IMT
GH3Z7N6engUgvWKnthEx7OzthYcGv6Zy/Xjz1+3z4QYzGb99PHwFfkJDbWba+uxa+ATDZ9dC
2BBwCUp7pK+lJwJ0gPQPF9xrJZAru2irx4azrjCaEfvX67hmFxN/YCtnNKzryilylw3G4oEy
/AEY2Zq4k75XcMpsGcW3Z0XCbtJTqLprnMGFz+1yDLBRe8gnwN1rYrhPNgufhq6xwjbq3L0C
BHinGuA/I8rg1ZAvdYazwMr6RF35bHM8NDFOv/Np+Cu74fBl1/h0O1cKA5mpnwjZ8DAWNT2T
cj2upFxHSLS/UW+JZSepbT7ccw3n7FwZ/7sZ0T67knsJ2gpTxv7x4ZwAddcshEiRfZ1OoKzJ
zP0PF/lnHHa7EoaHb9XHUnk9Jn/d21nfIu5S15iO6H+JKD4DxZdw8TH55VSt563QP/F0wXOo
8Hjw15KONgzSMw6y2toMFujflEY4V7FA0NpNMCL6AealdWVz/sCIKrrh7vGtr5OPnutOnSTG
H55bqX7TwjqB6RxTIiOFTbymQwENJg8WRPmQNyYtk2j8wYEUSc9v/n74h/19sWw8mV6s9OyG
udv4CH07Xyh5BFfI7shrjt5tRL/Q/+rM8JtYCVosiZvoU7vW17D0z16IKD4CJy3xrCpgrAg5
ey8xaKn+TUWAHn4AZVIAybZRI9haOTNz/KqFAYew5yNXxh8zG4oqDn4WirP13Fg68gMnsSz/
2x83wboAzO0fkaSNK8qCExrS+z9KZ9su2Sfi8blinNd0bOCQWGgApoZKDuV8F2eRzdZRDMV+
/8fZuy25jSPtoq9SMRcrZmL/vVokdaB2hC8okpJg8VQEJbF8w6i2q7srxnY5ytUz3evpNxLg
AZlIyr32REy79H04EccEkMhMY3iJZw2aMjnDfSoslfBsGEYdU09pK+BlqTEa1USOngN0Ch19
0J7hyofettE1HTJgFxcca3oux6RrvXWbS8QOwiTV0zo4KCK5Ha96GJaixnmJbHpsb7XJXZNV
3QqjNDK+GbSPMvSxGF4sYOhLcej1CixDOH05ez4iEsB4brUTRsmdaw3oZ7QtOWxaoxslCTSD
obn62tpDe5ai0U2HY6Nz1FTeSlVf4A/aY3jVHqU9JWBwAhqsa/YTXhq1fw1tqfsaGT4uLz/9
8vj96dPdv82L4W+vL78+46skCNR/OZOqZgeRGhvoAsa8Oe2W3cbekN/KF1UMGKqE3YBR6XDe
y/5g7zEkVcP+QM2ndm/XD9olvJy2VFJN+/TKg+iGtJ9GKGCUDPWhh0OdCxY2MUZyelYzSWv8
s5u+cHXcB4PaZm6kpo9wsma0Ii0GNZ6Fw26QFNSifH95s7h9qNX6b4QKwr+Tltqt3vxs6JbH
d//4/vuj9w/CwrxRo20UIRybl5THtitxIHhoelXCrJSwHo92WzqRaw0gax9WqKGsJraHfFdm
TmGksXhFFYB2WPsOrKSotUo/biVTIFD63LhO7/HjsMn+j5qE+rtci4Jjqp08sCBSHJlMtDTp
AdQOblBd4y2s8+aehkeniRtLrTxl0+A38y6n1dLxR/XanPR8Dbjrjq8BASbL1IT4MMPGJa06
lVKX39OS0Ud9Nsp9JzR9WUXjxWr1+Pr2DBPWXfPXN/th7qhNOOrlWfNvXKp90KRvOEd08TmP
imieT1NZtvM0fjRCyCjZ32D1dUuTxvMhaiFjYWcuWu6T4L0s96W5EgxYoolqwRF5FLOwTErJ
EWCqLxHyRHZz8Kqw7eR5x0QBO3hw02IeLDj0WcXU10lMslmSc1EApvY3DuznnTNtPZQr1Znt
K6dILXIcAcfSXDIP8rIOOcYafyM1XeKSDo5mNOf4FIZIfg/H+A4G2x77oLaHsUUwALWiq7GO
W0725KyhpWKJ0jxMSJSoiy/LLPL0sLMvMwZ4t7e0B9SPbpgziIkzoIi9r8m0KirZOOZHk5vm
hAM9FsaGwSJZeKhnFcZiRKW2lueC0dWeVFGbEk6P6tyaRbWAZCKrkVlekZqcWiyU8DhD6lac
4Ua5VdtKTriX4/MMjVxf+agOPgqncOMKWqVZVFWwbkRJoldxozrDiPCDlaBul+7hHzjvwSZ2
rbDmfUF/XTaFmDTNzd3in08f/3h7hHsnMAR/px82vll9cSeKfd7A7tLZ4HCU+oEPzHV54TRq
sjioNqqDkci/SDYyrkVl3cb1sJJaLCVaSLI/35ou0Wa+Q39k/vTl5fWvu3xS1HDO/2++s5se
6anV6hxxzATp5zLDgT99OmjOA4a3W2BAuuGySVt4FpFy1MVcqDqvCZ0QbqZmRtPvKVxeGw49
2CKdfn1xAvV5FRcs1lvD0XyBbVfVTgtuY6Ek2sx9gZ+mzrwNwXj/NbP0ZGyLzI2zr0r6hyKN
mdThufaSRNqBsIrWVwOY3s5t8Qmmj5bqFCYxJCEyj05ifdbfUVNcxwf9tqbuGmpdaae2zfac
YMwylFiTB05g3bPnk20Abag43YWMLemkfrdcbEeTBngunlN7ncOP16pUvaJwnnzfPq9jT+mM
sTV7u8MGy415OmbjY11JwMsefAPlInGWRuappj1bqpYiwZCBTzVEiPgzQrb0CSDYOpLvNlYV
skeGH/rsxq/WwLjHK+tJGyPdzzxDm41ijEj+OOlwyVvXuJEwvzm+FeHIG/eYjfJBNsn/xce+
+8fn//PyDxzqQ1WW2ZTg7py41UHCBPsy4zVy2eDSmLubLScK/u4f/+eXPz6RMnKWBHUs6+fO
PsE2RZzG2lAGy6RBb6MpN5KG3dlM0A5vt4d7R638Mdy6olkkrWt8P0Ps6OvbSo27lwSjkFJp
C2T4xN3YeyLvy42GykEfLZa2bWITEAxoXJB2rrE+RM38TM+ytb15lXGnRtCBk80q/Jy6f5BI
jJ8fwHqu2qMf88hWsdTH0/AwQk8yoJu4Z7NoUnMjYAsUfQuZSUGJSVlFzOHPyzKTAOIqSCoM
XOCoSUdK/HATTOuqDPG5E4Apg6k2J3qq8rQz1q+Gi1wtcBVPb/99ef03qGA7kpZaN0+ptTKY
3+qDI+uBA+xE8S9Qr8Q7VRIF7gPsH04nAqwpbRXuPTLUpX6BdiU+FtVolB1spWmA8PszDU12
MzCutuKgTCOQgRUgjGBACsTawzDpV/ql/Be7QVQvdQA3XZlbc4H6QWquTSpt4RlZnrZAElyg
DiYqI+xi1xUKHV9rauM0NeL2YqfmBZHSATUkBpKzeWmIOGPmxoSIbCPeI6d2U7vSFhxHJs4i
KW2dWMVURUV/d8kxdkH9ptxB66iuyECqBGkgUR20/mR+binRNeeisHWlxvBcEox/EKit/uPI
i5iR4QLfquFK5FLtIDwOtLSw1E5U5VmehDOTVJdG4OKfE/5L9+XZAaZasYsFZHTEHRDM6bjI
OH4dRg1O1K6msHhAaVAPNVpezbCgOzQ6lREHQz0wcB1dORgg1W3glt6aNiBp9eeBOXIdqZ2w
BvuIxmcev6osrqX99G+kjlBjDCxn8IddFjH4JT1EksGLCwPCoQXWuR6pjMv0ktrPWUb4IbX7
ywiLTC2Can/CUEnMf1WcHBh0t7Mm/0Feq6Eszk5riPPuH69PXydxFOA8WaF7MDV41vhXP3fC
UcGeYzq8LdeEseUOC0iX2CsZdKu1M47W7kBaz4+ktTtmIMtcVGsCCbsvmKizI2vtopAEmkk0
Iu2HpQPSrZEZfkCLRMhYHzw0D1VKSDYvNOlqBE1PA8JHvjGhQhHPO7gxo7A7P4/gDxJ0p2OT
T3pYd9m1LyHDKSk15nBkdt/0rSpjUgIZklw1VGhS1T9JLzYYZE1cEarUwHEiKElh6Rlmv6qp
+gV7/+BGqY4P+k5RCQ853s6oEFTZaoSYOXNXi0TtUKZYXwZXlq9PIMP++vz57enVcXfppMzJ
zz3VC95opespY/KxLwQXtw9ApQycsnG7xCQ/8MY74I0A6GWxS5dyb7/mhsms0Hs6hGpnPkYK
obBKCN5UMllAUsbBDptBRzqGTbndxmZhEylnOGPmYYaktuwROdgEmWd1j5zh9dghSTfmQZha
feKKZw726ZpNyLiZiaIEjUw06UwxInh4G81U+L6pZphj4AczlKjjGWaSWXle9QRtFq6QMwFk
kc8VqKpmywomp+coMRepcb69YQavDY/9YYY2m/dbQ+uQnZXsjjtUEeEE1W+uzQCmJQaMNgZg
9KMBcz4XQHd73xN5JNU0gu1bTJ+jdgOq57UPKL1+6XIhsn+c8H6esJgG7iBAc/SLjaHpDt4k
ZsZUOhZXdMjeaRUBi8IYJUIwngUBcMNANWBE1xiGSAO6+wbAyt17EOkQRidqDZVNRHPEB+sT
ZiqWfKu+vUaY1j/CFagfcmOASUwflyDEnA+QL5PksxqnbzR8j0nOlbtWwPn4DL6/JjyuSu/i
ppuYY1D6bRbHDdd27MtaOmj1NeP3u48vX355/vr06e7LC1yTf+ckg7Yxixibqu6KN2ipS4ny
fHt8/e3pbS6rJqoPsFfWr6j4NPsg2qimPOc/CDWIYLdD3f4KK9SwaN8O+IOiJzKuboc4Zj/g
f1wIOL42r69uBgOXdrcD8LLVFOBGUfBEwsQtwN/VD+qi2P+wCMV+VkS0ApVU5mMCwakjekzL
BhoWmR/Uy7ji3AynMvxBADrRcGFqdGrLBflbXVdtdXIpfxhG7dBB5bqig/vL49vH32/MIw14
3k6SWm9q+UxMINjR3eJ7J4s3g2Rn2cx2/z6MkvfTYq4hhzBFsXto0rlamUKZveUPQ5FVmQ91
o6mmQLc6dB+qOt/ktdh+M0B6+XFV35jQTIA0Lm7z8nZ8WPF/XG/z4uoU5Hb7MBcUbhBtX/8H
YS63e0vmN7dzydLi0BxvB/lhfeS23T6W/0EfM6c4YNrwVqhiP7eBH4NgkYrhtVbbrRD99dPN
IMcHObNNn8Kcmh/OPVRkdUPcXiX6MGmUzQknQ4j4R3OP3iLfDEDlVyYI9g0wE0Ift/4glPap
eCvIzdWjDwKa9LcCnAP/nW3H6NZB1pAMGJhN0cGqeSwcte/81ZqgOwEyRycqJ/zIoIGDSTwa
eg6mJy7BHsfjDHO30tN6WLOpAlswXz1m6n6DpmaJAlxG3UjzFnGLm/9ERQp83dyz2pkgbVJ7
TtU/nesGwIgukwHV9se82PP8Xm1ZzdB3b6+PX7+DURJ4HfX28vHl893nl8dPd788fn78+hGu
/r9TmzQmOXNK1ZBr1pE4JzNEZFY6lpsloiOP98dn0+d8H7SdaXHrmlbc1YWy2AnkQvuSIuVl
76S0cyMC5mSZHCkiHSR3w9g7FgMV94MgqitCHufrQvW6sTOEVpz8RpzcxBFFkra4Bz1++/b5
+aOejO5+f/r8zY2LDqn60u7jxmnStD/j6tP+f//G4f0ebujqSN94LNFhgFkVXNzsJBi8P9YC
HB1eDccyJII50XBRfeoykzi+A8CHGTQKl7o+iIdEKOYEnCm0OUgscv0uV7hnjM5xLID40Fi1
lcJFRU8GDd5vb448jkRgm6ir8eqGYZsmowQffNyb4sM1RLqHVoZG+3QUg9vEogB0B08KQzfK
w6cVh2wuxX7fJuYSZSpy2Ji6dVVHVwoNlnwprvoW367RXAspYvqU6d3JjcHbj+7/rP/e+J7G
8RoPqXEcr7mhhpdFPI5RhHEcE7QfxzhxPGAxxyUzl+kwaNF9+3puYK3nRpZFpGexXs5wMEHO
UHCIMUMdsxkCyt27ROAD5HOF5DqRTTczhKzdFJlTwp6ZyWN2crBZbnZY88N1zYyt9dzgWjNT
jJ0vP8fYIQr9oMYaYbcGELs+roelNUnjr09vf2P4qYCFPlrsDnW0A1OfJXKh9qOE3GHZX5Oj
kdbf3+cpvSTpCfeuRA8fNyl0Z4nJQUdg36U7OsB6ThFw1Xlu3GhANU6/QiRqW4sJF34XsEyU
l/ZW0mbsFd7CxRy8ZnFyOGIxeDNmEc7RgMXJhs/+ktkeCPBn1GmVPbBkMldhULaOp9yl1C7e
XILo5NzCyZn6bpib/qJIdyYCOD4wNIp+8aQuaMaYAu7iWCTf5wZXn1AHgXxmyzaSwQw8F6fZ
18QHA2KcR6KzRZ0+5GQMaBwfP/4bme0YEubTJLGsSPhMB351ye4A96kxekqniV4Fz2iqGiWk
PFnZLz5mw4FZCfbRx2yMGU9OOrxbgjm2N2dh9xCTI1IRrROJfphHxghB6owAkDZvwJjVF/uX
mkdVLp3d/BaMtuUa17ZaSgLicka2MV/1Q4mn9lQ0IGBuUsTIWbliMqTGAUhelRFGdrW/Dpcc
pjoLHZb43Bh+uR5bNHoJcCQ0f2ogtY+X0fx2QHNw7k7IzpQiDmpXJYuyxLpsPQuTZL+AuBay
9AQibdeVPfCFAGoVPcCK4t3zVFRvg8DjuV0d565uFwlwIyrM5WmR8CEO8kpV5gdq9jvSWSZv
Tjxxkh94ogQHuA3P3ccz2agm2QaLgCfl+8jzFiueVDKGyGxRQDcvaZgJ6w4Xe+NvETkijLg1
pdCLX/TlRWYfLakfvj1wouxkJ3AxlpAxnDUVeoddSfyrS6IH25yHxhq48SnQoU2SoP2p+gm2
n5DvS9+qwSyqLA2V6liij12rrVVlSxI94L7CHIjiGLuhFagV7nkGRGF82Wmzx7LiCbxTs5m8
3IkMyfo2O1ghZslzwuR2UATY5DsmNV+cw62YMPtyJbVT5SvHDoG3i1wIIiWLNE2hP6+WHNYV
Wf9H2lZq+oP6t9/KWSHpTY5FOd1DLbM0T7PMGuMYWna5/+PpjyclevzcG8FAsksfuot3904S
3bHZMeBexi6KVscB1L7AHVTfJTK51UQBRYPgpoEBmehNep8x6G7vgvFOumDaMCGbiP+GA1vY
RDoXqRpX/6ZM9SR1zdTOPZ+jPO14Ij6Wp9SF77k6irUtCAcG2yk8E0dc2lzSxyNTfZVgY/P4
oHnuppKdD1x7MUEnh4OjkDvIt/t7VgaexF9VATdDDLV0M5DE2RBWiXH7UpuncB/X9J/w7h/f
fn3+9aX79fH72z96Ff7Pj9+/P//aXy/gsRtn5NWaApxj7R5uYnNx4RB6Jlu6uO17YsDMreyw
JhqAGPcdUPcthM5MXiqmCApdMyUA+2MOyuj8mO8mukJjEkSlQOP6UA0s8SEmzbGD2QnrjVkG
PkPF9C1rj2t1IZZB1Wjh5PxnIrTTc46Io0IkLCMqmfJxkF2aoUKiGD9pAcBoW5BPABwMhdob
BaOxv3MTyEXtzJWAyyivMiZhp2gAUvVBU7SUqoaahAVtDI2ednzwmGqOmlJXmXRRfMgzoE6v
08lymluGafRDN66EeclUlNgztWT0sN0n0yYDjKkEdOJOaXrCXVZ6gp0vmnh4J4/bWs/swn7B
l8RWd0gKMDUuy+yCDg+V2BBpo3scNvxp6dHbpG1d2MITZOJswm23wxac43fIdkJU5KYcyxhn
QRwDZ7Jo21uqreRF7RlhwvnCgPgln01cWtQTUZy0SG1Xc5fhNbyDkPONEc7U7n2H1AkvxjHT
JY8Fl562Ffdjwtl3Hx/UunFhIhb9mxJcQD0mUZ8DRO26SxzG3XBoVE0szMvtwlY0OEoqkOk6
xS85QCklgKsKOBRF1H3dWPHhVydtdyEaUYUgJYhtTyrwqyvTHOz8deZOxOq3tW0vpN5L7RbA
2kW0Nt+bw4M89BDnCMeSgN5qt2CL6YF4Tdnd2z+qffceGXVSgGzqNMody6CQpL4yNEfx2KzG
3dvT9zdnR1KdGvxUBo4d6rJSO81CkOsXJyFC2IY7xoaO8jpKdJ30hkE//vvp7a5+/PT8MqoA
2d7S0BYefqlpJo86mUUX/IwInHiNAWsw39AfhUft//ZXd1/7wn56+s/zxyfXp2N+ErYEvK7Q
ONxV9yk4JbAnywc1qjrwlbBPWhY/Mrhqogl7iHK7Pm8WdOxC9vQDntfQFSAAO/scDYADCfDe
2wbboXYUcJeYrBxXdRD44mR4aR1IZg6EtEABiKMsBp0feFduz8XARc3Ww6H3Wepmc6gd6H1U
fOiE+ivA+OkSQROAL959Qgp7LpYCQ61Qsx7OrzICHvmGGUi7/ARz2iwXk9zieLNZMBCYj+dg
PnGhHZIV9Otyt4j5jSIarlH/WbarFnNVGp34GnwfeYsF+YQ0l+6nGlCtXuTD9qG3XnhzTcYX
Y6ZwMe5KPe5mWWWtm0r/JW7NDwRfa2BxDa1zFqjkWntsyUrcPQ8u2MjYOorA80il53HlrzQ4
6d+6yYzJn+VuNvkQzl9VALdJXFAmAPoYPTAh+1Zy8DzeRS6qW8NBz6aLog8kH4Knkt15MNKF
rFgxc9c43dqXsXCxnib21apaavcgFKFABuoaZOpbxS3SCidWgDXL2PHIMlBGN5Rh47zBKR1F
QgCJItimOdVP5xBSB0lwHNddlwV2aZwceUbaN2e7xhLCjSvWz388vb28vP0+u4KCKgB22wYV
EpM6bjCPbkegAmKxa1CHscAuOjdl70cDlXUMsLPNe9kE3OmwBBTIIWRib8wMeo7qhsNgqUfC
qEUdlyxclCfhfLZmdrGs2ChRcwycL9BM5pRfw8FV1CnLmEbiGKYuNA6NxBbqsG5blsnri1ut
ce4vgtZp2UrNtC66ZzpB0mSe2zGC2MGycxpHdULxy9Ge/3d9MSnQOa1vKh+Fa05OKIU5feRe
zShoJ2IKUkthz3+zY2uUe/dqa1Dbt2YDQhQNJ1hbTlU7SuRPb2DJJrpuT8hjz7472cN2ZncB
Goo1dhgCfS5DNkwGBB9bXFP9btnuoBoCqxoEktWDE0hYoy3eH+Aexr511vc9nrYUg+1aD2Fh
LUkz8M3aqe11oRZtyQSKwXXrXhg/NV1ZnLlA4H5CfSL45ABXY3V6SHZMMLDwPTjWgSAdNgw5
hgOTzdEUBMwC/OMfTKbqR5pl5yxSuwyBbI2gQMYhKOhF1Gwt9GfjXHTX+OxYL3USDbZ9GfqK
WhrBcAOHImViRxpvQIxeiIpVzXIxOvslZHMSHEk6fn+JZ+U/INq2Zh27QRUIJo9hTGQ8O1pH
/juh3v3jy/PX72+vT5+739/+4QTMU3lk4uNFf4SdNrPTkYMZVWxWGsVV4YozQxaloDayB6q3
lThXs12e5fOkbBzDx1MDNLNUGe9mObGTjpbSSFbzVF5lNzjwazzLHq95Nc+qFjQ292+GiOV8
TegAN4reJNk8adq1t2HCdQ1og/5RWqumsQ/p5CvqKuD53hf0s08wgxl0cr5W70/CvtAxv0k/
7UFRVLb5ox49VPQsfFvR34NzDAq39MRKYViXrQepke1IWBcI8IsLAZHJaYbYk01NWh21yqOD
gN6S2lDQZAcW1gV0QD+daO3R8xjQiTsIUFxAYGELND0AbiZcEIsmgB5pXHlMsng6JXx8vds/
P33+dBe/fPnyx9fhjdU/VdB/9YKKbWVAJdDU+812s4hwsnkq4F0wyUvkGICFwbPPFADc29uj
HuiET2qmKlbLJQPNhIQCOXAQMBBu5Anm0g18popzEdeldofIw25KE+WUEgurA+KW0aBuWQB2
89MCL+0wsvE99W/Eo24qsnF7osHmwjKdtK2Y7mxAJpVgf62LFQtyeW5XWkvCOqL+W917SKTi
Lk3R/aBr6XBAsGnERH0/cQ9wqEstzllTJVzMDD4o067NBb3zAz6X2GghiLXa0tgIGj+lyOg7
OFso0VVg2hwbsCbf3/1MQY1Pz+nCwehjz5wVm8DCViN1f3WXDGZEcgKsmUq1MhfBeIzv6tJ2
BampgvEpizwq0R9dUuaRsF1OwvkhTDzIAcbgHgRiQAAcPLIn6R5w/FQA3qWxLT/qoLLKXYRT
nRk57WBMqk9jdV9wMBDK/1bgtNauH4uYUzXXZa9y8tldUpGP6aqGfEy3uxIAHbpBfeZSOID2
R2uaBnOwszpJ0oR4IQUIrDyA8wHjy0afEeEAsjnvUNt0+srMBpUEAQQcmGpnHUjTGGIgA+O6
r8YR/nztI0pvdQ2GyeHhR37OMCHKCwbU8CBAhO4JNeRXyB+Yzh5bVQXIXPNOH2L1bL67R3F1
g1Gydc4m1sWzKQLTfWhWq9ViPurgKYIPIY/VKJWo33cfX76+vb58/vz06p5B6qJGdXJBClC6
L5o7nq64kkraN+q/IHkgFBw7RiSFOo5Idz6WsnHMpo/E8FVcOXDwFoIykDteLkEn05yCMOob
kdExG8EJdESmJQPqlL84RW6O5yKBS5g0Zz5oYJ2+r+pGdf74KKoZWMcnBRm5lMbSb0WaFOlB
JCQ2PAqQDRnXvV6D7UOhX7S+P//29fr4+qR7kDZoIqldCTPN0SksuZqyOygpdZfU0aZtOcxN
YCCcL1fpwo0Tj84URFO0NGn7UJRkyhJ5uybRZZVGtRfQcmfRg+pScVSlc7iT4VGQrprqw0/a
I9Wyk0RdSAenklarNKal61HuuwfKqUF9ug1X3hg+iZosL6kucgd9CK9IaitGQ+rZwNsuSR8c
YK53j5x9gqWZcyGqo6BiRKd10qcHbDd6rPFG9/KLmvuePwP9dKtHwxOBSyoyOsZ6mKvckev7
4uQ2Zj5TcyP5+Onp68cnQ0/z9HfXiIvOJ46SFDkks1GuYAPltPpAMIPHpm6lOQ2j6X7xh58z
uv/k16VxzUq/fvr28vwVV4CSWJKqFAWZGwa0lyP2VPBQwktjHlGg7Mcsxky///f57ePvP1wv
5bXXtgI/tiTR+SSmFPBNC716N7+19/AuFvZ5sopm5O6+wD99fHz9dPfL6/On3+yDhQd4rzGl
p392pWW33iBqoS2PFGwERWBRVduy1AlZyqPY2St+st742ylfEfqLrW9/F3wAvNfUprtsVbGo
EuhuqAe6RoqN77m49jMwmIEOFpTu5dq67Zq2I162xyRy+LQDOqIdOXLZMyZ7zqm++sCBL6rC
hbWP7y42h2G61erHb8+fwMur6SdO/7I+fbVpmYwq2bUMDuHXIR9eCUa+y9StZgK7B8+UTpf8
8PT16fX5Y7+RvSupg6mzNuI+2DP8i4U77T9ouqBRFdPklT1gB0RNqWf0srgBW9xZiaS+2qS9
F7XR+tydRTa+Jdo/v375LywHYB7LtnG0v+rBhW7mBkgfACQqIdu3qr5iGjKxSj/FOmvtNfLl
LN3t1d5Lq6wy4VxP9Iobzj7GRqIfNoQFl4v6BaHlqLWnjBN6nptDtQpJLdAZ66hYUqeSolon
wkToqBvQI3hdZHx06jiROew3MUEz3zokkg+yl1GFtF3KDZ7ywDsc7F9NNJa+nDP1I9LP/pDj
JKm2wOgco04PyCee+a12ctuNNXwMCCdmNKDMRA4J0rDS3iOOWC6cgFfPgfLcVocdMq/v3QTV
eEm0CoOTfRzv3PLbSgAwGcpjVJuev0ctDs769HJvrPVa/XBmQjDKL398d0+887Jt7FceIE5m
ahUqusw+KwEpuEt3wnZ8JeAwsavyDtXvXmagVoS9sR5FH2jSFrAKMy6mZVEYN4VjaofC1meF
X6CpIuwbBA3mzYknpKj3PHPetQ6RNwn6oQfDqAk3uUf/9vj6HSveqrBRvdFu1SVOYhfna7U3
4SjbGTuhyj2HGu0FtQdSM2KDlNsnsqlbjEMvqlSrMOmp3gX+3G5RxlqI9gqsPZX/5M0moDYF
+hBL7Xqtox43GNwMgE/Id6zr+aFudZWf1Z9KYNdG5e8iFbQBU4ufzSl39viX0wi77KSmQtoE
uuQupHbvVsdtsGMC8qurrd2YwHy9T3B0KfcJ8jOIad3A6MW4bqerbf+sb9FGgOIGuNHWLwaG
hbSO8p/rMv95//nxu5Jsf3/+xiiDQw/bC5zk+zRJYzKhA36As0MXVvH14xPwpFXah84Dqfbn
xEfvwOzU2v8Afj0Vzx7lDgGzmYAk2CEt87SpH3AZYN7cRcWpu4qkOXbeTda/yS5vsuHtfNc3
6cB3a054DMaFWzIYKQ1yyTgGgkME9OxvbNE8kXSmA1wJdJGLnhtB+m4d5QQoCRDtpLEQMImx
8z3WHAU8fvsGby16ELzWm1CPH9UaQbt1CWtPO7iIJf0S7Dfnzlgy4OAHhIsA31837xZ/hgv9
Py5IlhbvWAJaWzf2O5+jyz2fJXPqadMHcDcvZrhK7Ri003JEy3jlL+KEfH6RNpogy5tcrRYE
k7u4O7RkDVE9ZrNunWYW8dEFU7nzHTA+hYulG1bGOx8cD9vPqPrivj19xli2XC4OpFxIF94A
eOc+YV2ktrkPagtDeos5a7vUaiqrSbwsamr8uuVHvVR3Zfn0+def4LThUftEUUnNP9iBbPJ4
tfJI1hrrQBNKtKT5DUVVZRSTRE3E1OUId9daGMewyJccDuNMJXl8rPzg5K/WpOlk46/IxCAz
Z2qojg6k/k8x9btryibKjPLOcrFdE1ZtF2RqWM8P7eT0Uu4b0c0clD9///dP5defYmiYuate
/dVlfLDtyhlvCGr7k7/zli7avFtOPeHHjYz6s9opG11RLAQUKTAs2LeTaTQy3fchnBsnm5RR
Ls/FgSedVh4IvwUx4FDbdyPjB6RxDAdtxyjPBU2ZCaCdLWM5MLp27gfbUXf69Xx/LPPfn5Uw
+Pj589PnOwhz96tZO6YzTNycOp1EfUcmmAwM4c4YNpk0DKfqUfFZEzFcqSZifwbvv2WO6k9G
3LhgJKhk8F6OZ5g42qdcwZs85YLnUX1JM46RWQz7vsCn87+Jd5OFu6yZtlVboOWmbQtuotdV
0haRZPCD2pDP9RfYZ4p9zDCX/dpbYNWz6RNaDlXT3j6LqYRuOkZ0EQXbZZq23RbJPucSfP9h
uQkXDCHA/pOIobczXQOiLRea5NP0Vzvdq+ZynCH3ki2lmh5a7svgDGC1WDKMvgxjarU5sXVN
pyZTb/pSmilNkwdKFshjbjyZ+yyuhwhuqLgP3qyxYq5r+rUif/7+Ec8i0rXwNkaG/yClv5Ex
J/dM/xHyVBb6UvkWafZejF/WW2ETfS65+HHQozhwM5EVbrdrmHVGVuPw05WVVSrPu/9l/vXv
lFx19+Xpy8vrX7xgo4Phz74H4xXjRnNcTH+csFMsKqz1oFZGXWqnqE1pqwoDH8kqTZMOjQbA
h9uz+3OUIJU+IM0F655EAZ0+9e+eBDbCpJPGCOPlh1Bspz3vhAN016xrjqr1j6VaQYiwpAPs
0l3/Xt5fUA7sB6Ez1YEAH5xcbub4BAXX57/o3O+4y2O1VK5tW2JJY81y9g6o3MMFcoNf+Ckw
yjIVaScRqFaNBtxGIzCN6uyBp07l7j0CkociykWMc+pHj42hI9xSq0yj3zm6ECvBMLlM1VIK
01OOQvaa0AgDfcUssuTuqAaDPWpoNoPaHxz44LclA/CFAJ39jGrA6GnmFJaYVrEIrW0neM65
Be2pqA3DzXbtEkowX7opFaUu7oQXFfoxvtrQrzumu1TXjoKQEY2Mlb122Qnb1OiBrjirnrWz
7TdSpjPvXYwSpLCVJ4aQ6AF5gray6lNFMq4p1SC0Kuzu9+fffv/p89N/1E/34lpH66qEpqTq
i8H2LtS40IEtxuiYxvHQ2ceLGvv9QQ/uKvvctAfxk+MeTKRtvKQH96LxOTBwwBT5ZrXAOEQd
ysCkU+pUa9sm4AhWVwc87UTsgk0jHLAs7POSCVy7vQiUMKQESUhUvXw8nnN+UJsp5lxziHrO
beN+AwpWdHgUnmSZpzDTy5WBN3aI+bhJvbP6FPz6cZcv7CgDKE8c2IYuiHb5FtgX31tznHMA
oMca2HSJkwsdggPc35HJqUowfSXa6hGoX8DdJrJeDAq05gKBUaC1SLgrRlxvqoidYGquDmup
+4h5pHLJU1ftCVByYjC2ygW5PoOAxsEeXOT/hfDjFWl1amwf7ZS0KkkK5KmRDhgTABnSNoj2
q8CCpAvbDJNXz7hZDvh8aqZU06MIuzpHGd+9+ZRpIZWECC7Cguyy8K1WipKVv2q7pLLNIVsg
fpthE0jyS855/qClihESu1xJofb0eYyKxl5KjDyYC7WJsaekRuxz0h00pLbV1tGhatZt4Mul
bZVEnwJ00ra6qoTdrJRnePELt/ix7chBHkTXWjUdy9UqWHX5/mAvNjY6vhWFL92QELG+ZzM6
IdJ2yH6sOpFZ4pS+YY5LtdlGRxMaBokVPRyHQh7qswPQU9GoSuQ2XPiRbY5PyMzfLmyb1Qax
J/uhczSKQVrfA7E7esj+zYDrHLe2KYBjHq+DlbUOJtJbh9bv3jzaDu5OS2K8pzraCv4g7QrQ
HIyrYFDQn0pQU13+UQevQeaKex1ymexTe38O+lt1I62SV5cqKuzFMvaxMGp+q36uso7qzvd0
Tekxl6Zqk5e7KpMGV53St/YJE7hywCw9RLa/zh7Oo3Ydbtzg2yBu1wzatksXFknThdtjldpf
3XNp6i30Gcg4sZBPGitht/EWZGgajL6XnEA1B8hzPt6p6hprnv58/H4n4B31H1+evr59v/v+
++Pr0yfLu+Dn569Pd5/UbPb8Df6carWBuzu7rP8/EuPmRTLRGaV72USV7TDITFj2Q78R6uyF
akKbloWPib2+WFYDh04lvr4pcVZt5e7+193r0+fHN/VBrmfFfgKNsV6KjMUeIxclSyFgiolV
oScca4lCkvYAUnxpz+2XEi1Mt0o/RDmkxfXeqhzzezwa6NK6LkGVKwbh5WE6+0njY0nGcpSp
PkmOu4cxPgejZ5jHaBcVURdZIc9gMND+JrS0ThHVblbYNivszdHnp8fvT0oQfrpLXj7qzqm1
OX5+/vQE///fr9/f9LUauEH8+fnrry93L1/1FkZvn+zdoJLGWyX0ddg+BsDGPJvEoJL5mL2i
pqTicOCD7RtS/+6YMDfStAWsUQRPs5MoXByCM0KihkfbBLrpJZtXE1WMmKgIvDvWNRPJUyfK
2DaSo7eNdRl3k6EkqG+411T7laGP/vzLH7/9+vwnbQHnDmrcEjnHWeMuJU/Wy8UcrpatIzkE
tb4I9v/cl2p1uf3+nfXEyvoGRnffTjNmmrDc73clDHeHmf1iUJJZ27rRo7T/AVuaI+Vm84/S
eI0uWkYiE96qDRgiTzZLNkYjRMtUm65vJnxTC7BcyERQMp3PNRzIegx+rJpgzeyW3+sH4sxA
kLHncxVVqQ9gqq8JvY3P4r7HVJDGmXQKGW6W3orJNon9hWqErsyY4TmyRXplPuVyPTFTgBRa
eY8jVCVypZZZvF2kXDU2da7EVhe/iCj045brCk0cruPFgumjpi8O40fGUgz32c7QAbJDxqbr
SMBc2NRowbUf5+k46LmnRpzn2holk5EuTF+Ku7e/vj3d/VPJLf/+n7u3x29P/3MXJz8puexf
7tCW9mnBsTZYw9RwzYQ7MJh9uaYLOm6kCB7rBxXIpJHGs/JwQMrWGpXaeihoXKMvbgZR7Tup
en2V4Va22iSzsND/5RgZyVk8EzsZ8RFoIwKqn1Yi63uGqqsxh0l1gnwdqaKrMcsyLU8aR2cP
BtIqpcZgNqn+9rALTCCGWbLMrmj9WaJVdVvagzb1SdChLwXXTg28Vo8IktCxsu1zakiF3qJx
OqBu1UdU9gTsGHkbeyU1aBQzuUci3qCsegBWAXAbXfe2KS0PBUMIuOaAXX4WPXS5fLeyVOOG
IGZXYx75WNtvxOZK9HjnxAQLX8a8DDwax47r+mJvabG3Pyz29sfF3t4s9vZGsbd/q9jbJSk2
AHRPaDqGMIOI9pceJneGevK9uME1xqZvGJD8spQWNL+cc2earuCEq6QdCO6m1WijMDxhrum8
qDL07QtatYnXa4RaKsEy918OYV8pTGAksl3ZMgw9FRgJpl6UEMKiPtSKthd1QDpldqxbvG9S
tdwhQnvl8Kj3XrDuDxV/3stjTMemAZl2VkSXXGPwmsCSOpYjZ49RY7DKdIMfkp4PoR9Eu3Aj
uvcb36PLHlA76XRvOOeoSFC18VaLoS1FmyUMNITIc1JT3w/1jjbhg71w9ccF1QXPy3Bqb1J2
DvT7d/ayKWskkamVzz6G1j/tyd/91e0L50skD/WTyp5KBkneBt7Woz1j35sUYVGmTxyShsoo
aqGioUTlyAiFQDbJBjBC1qyMcFbRVUzktOuID9oiQmWrxU+EhIdvcVNTWaFJ6UooH/JVEIdq
3vRnGdhB9bf5oHOoDwO8ubD9SXUTHaR1/URCwZjXIdbLuRDoBVhfp3QSVMj4QIvi+GGfhu/1
eIA7dFrj91mELkaaOAfMR8u5BbKLACQyyCzjlHWfJoJ9m6GI/YzPV5DRqn08N8FJkW88+gVJ
HGxXf9KVA2pzu1kS+JpsvC3tCOaLSEfMOTmnykOzv8FF3u2hDucKTU3yGVnxmGZSlGS8IyF1
UJGwjt2NqrkSzFa+fZRucGc493ghivcR2TH11D2ZJnvY9MWVMzptw9g90NVJRKcihR7VQLy6
cJozYaPsHDkSPNkejpKOrSgLB2vUXkGk37STAzoA0UkXptTyFJPrW3y2pTP6UJVJQrBKDzRj
2MEyfvDf57ffVVf4+pPc7+++Pr49/+dpstxu7bd0TsjIoIa0y8pUDYTcuLiyjmLHKMy6qmGR
twSJ00tEIGNMB2P3JVJy0Bn1D0QwqJDYW9v9zxRKP/ZnvkaKzL6O0dB0lgY19JFW3cc/vr+9
fLlTky9XbVWitqLoklfncy/R406Td0ty3uX2OYRC+ALoYJaLFWhqdEqkU1cSjovAcQ45ixgY
OnMO+IUjQK0Snv3QvnEhQEEBuEcSMiWotszkNIyDSIpcrgQ5Z7SBL4I2xUU0asGcTuX/bj3r
0YsU7A2CTBtppI4kOP/YO3hjC4MGIweUPViFa9vcgkbpmaUBybnkCAYsuKbgQ4U9R2pUiQo1
geh55gg6xQSw9QsODVgQ90dN0GPMCaS5OeepGnXU/DVapE3MoLAABT5F6cGoRtXowSPNoErK
RyNeo+aM1KkemB/QmapGwacS2mAaNIkJQk+Je/BIEdDNrK9lfaJJqmG1Dp0EBA02mFMhKD0d
r5wRppGrKHblpDtdifKnl6+f/6KjjAyt/g4ESfam4Y3uI2lipiFMo9GvK6uGpuiqdwLorFkm
+n6OGe82kEGSXx8/f/7l8eO/736++/z02+NHRkO8GhdxNP07tys6nLPfZ+5l7CkoTzp4WW+P
4DzRh3ILB/FcxA20RM/fEkvhykb17gIVs4uzs364PWI7o4pGftOVp0f742XnXGe8Ucz1M6JG
MMp7idVUSU5T0DH3ttA7hOnfxudRobbFtbYIic6sSTjt19S1yQ7pC1D1F+jlRqItd6qx1oBG
UIKERcWdwdq8qGyPnwrVao0IkUVUyWOJweYo9CP2i1Bie4FeqEEiuNoHpJP5PUL1Owg3MLJB
CJG13RsbAVeltnijICW7a0MzsopiHBjvXBTwIa1xWzA9zEY72101ImRD2hS0zhFyJkGMPSDU
dvssQt5BFQTvERsOGl4qgnlbbYRdCtwR+mB7280VNCLxXdlXmG4AiWDQwDg4uX8AwwgT0qsD
EiU5tS0WxP4DYHslvtudH7AKb8AAgsazVsXBt6Wj96iTtCat/s6ChLJRcxVhSWW7ygm/P0uk
lmt+YyXDHrMzH4LZh549xhxn9gx6/tZjyEvogI1XWEZRIE3TOy/YLu/+uX9+fbqq///LvTHc
izrFdnAGpCvRdmSEVXX4DFyg6hnRUkLPmDRtbhVqnJthwoIlvjdzhN0KgJFbeCue7hpsln/y
4DUEFgIFIC5hQAbAUxFohU4/4QMOZ3S3M0J0zk7vz0r0/kDdUe+tYSWow/smtZWwB0Qfk3W7
uowS7cB2JkANBoxqtdctZkNERVLOZhDFjapaGDHU3/YUBuxs7aIsKuwZUrUA9pYMQGPbdBAV
BOiywNbyqXAk9RvFIX5vqa/bXVSnZ9twwsH2dKZKIG2FSRCky0KWxKB6j7mPjhSHPaJqT6UK
gdviplZ/IM8Jzc5x2VCDJZiG/gaDevRZfM/ULoP8x6LKUUx30f23LqVEXtsuSEu+V3ZHRSky
9GASkrnU1tZPO+lFQeBteppjnwpRHaNUze9OSfueCy5WLojciPZYbH/kgJX5dvHnn3O4vTAM
KQu1jnDh1U7E3noSAgvylLR1xKImdyciDeL5AiB0Fw6A6taRwFBauICjHt3DYEtSSYO1fUw3
cBqGPuatrzfY8Ba5vEX6s2R9M9P6Vqb1rUxrN1NYSoyHMFxpH9R/XISrx0LEYD4GB+5B/ShV
dXjBRtGsSJrNRvVpHEKjvq08bqNcMUaujkFVLJth+QJF+S6SMkpK8hkTzmV5LGvxwR7aFsgW
MSKf4zgN0i2iVlE1SlIcdkD1Bzg32ihEA5f0YC9quudBvMlzgQpNcjumMxWlZvjSGrvG6Q4d
vBptbJlVI6C9Yxw/M/hDEZMEjrZIqpHxsmIwdvL2+vzLH6BN3JsIjV4//v789vTx7Y9XzsPl
ylYyWwU6496oJMJzbXeVI8CCBUfIOtrxBHiXxB7au0RGYBiik3vfJchrnwGNikbcdwe1cWDY
vNmgA78Rv4Rhul6sOQrOzfQD+JP84Dz7Z0Ntl5vN3whC3LfMBsMeZLhg4Wa7+htBZlLS344u
Ch2qO2SlEsB8LJngIJVtL2akZRyrTV0mmNSjehsEnouDS2KY5uYIPqeBVCPeJe/jKDy5CYJH
jSY9qf09Uy9SlR260zaw3wFxLN+QKAR+Fz4E6U/YlegTbwKuAUgAvgFpIOsUbjKz/jengHEb
AU7g0eN29wsuaQHTfYCs36eZfRxtLiKDeGVf4U5oaNmdvpQ1utxvHqpj6QiMJssoiarGPino
AW2dbY82kXasQ2rvutLGC7yWD5lFsT7isW9KweSplDPhm9TehEdxilQ7zO+uzIUSZ8RBrXn2
YmGezDRyptR59MFOOy2iqXX4CLaL0zwJPfC3aUvnFYiY6CS/v2LOY7T5UZG79mDbexyQLol3
KBODGt9IMd7R0HvKEeouPv8BagurJnHrriO612+P2cC27yH1Q23Ko5gc8AzwhOhAoysPNl2o
4hLJ2RmSsTIP/0rxT/QmaqaXnevS9r1ifnfFLgwXCzaG2Yzbw21nO4RTP4xjGPAenWbgqekv
wkHF3OLt0+UcGsnWhS5a22866uG6Vwf0N31brPVkcYJqTquRa6DdAbWU/gmFiSjGqKQ9yCbN
8ftDlQf55WQI2D7TjqXK/R7OGgiJOrtG6Jtp1ERgKsYOH7Ft6fiEUN9kncvALy1ZHq9qUrM1
gTSD9oxmC5u1aRKpkYWqD2V4EWer6wxObmBmsu1G2PhlBt/ZRhZtorYJk6NerkcsE/dn7DVg
QFBmdrmNjo0l8/ZKN401Aies8w5M0IAJuuQw3NgWrlV8GMIu9YAiD5n2p4i6Rt6VZbj90xrq
5vfUs6dJv4LnqXgWR+nKuLSXCDHTBbQhd2vKMaohzHoSt+D8yL4CmFtukpRM9805E8give8t
7Ov4HlCiSzZtrUykL+hnl1+t+aiHkFadwQr0vm7C1NBRMrCaiSJsuSJJl60lXfaXsF1oa8kn
+dZbWLOdSnTlr11trVbUMT3bHCoGv1pJMt/WAlFDBh9nDgj5RCtB8KmW2m7oUx/Pz/q3M+ca
VP3DYIGD6UPW2oHl6eEYXU98uT5gh1jmd1dUsr8gzOEeL53rQPuoVuLbA5v0vk5TqaY2a+Sh
F+lg4W+PXHgAUt0TaRVAPTES/CCiAqlwQMCkiiIfD7UJVnOZsVeASfi4mIHQnDahbukMfiv1
7r6UfB2d34tGWt6sB0XC/PLeC3nR41CWB7tSDxde+BxN/U9Bj6JdHRO/w+uMfoiwTwlWLZa4
Io/CC1rPxJ1SLCSpEYWgH7DN2WMEdyeFBPhXd4wzW2NbY2hun0Jd9iTcbF89nqNransAFHNT
rQj9le1tzKbgrbg1XJD+dIqfgeqfKf2txrj9bkwcrOVG/aBTAEBJHCHA/mbRogSwyC+MZE9S
7DcBkQvtKCQqaS8RGqS5K8AJt7S/G36RxCOUiOLRb3tq3efe4mTXkNVk73O+5w+aUZPYdVkv
nTU4v+COm8PtiG258lLZd5RVG3nrECchT3Y3hV+OhiFgIItL262UmpFthXf1i8YrY9iVNq3f
5eiFzITbg6pIwPW2HC6ltOYD0taYotnS4oTOiG+5qsWoKG0L1VmrpgX74s4AuH01SMwhA0SN
Wg/BjLsk28lA1q40w3sWyFp5vUnvr4wquP1hIq7tcXySYbi0mgR+2/dP5rdKObOxDypS64rz
Vh4lWV2L2A/f2yeVA2K0IqjpbsW2/lLRVgzVIBvVmeezxK439SFeGacZvKUkChku1//iE3+w
nb7CL29hd/8BwVPLPo2ygi9tETW4rAMwBZZhEPr8flr9CZYNrYlN+vZwvrR24eDX4DQJ3mzg
uxOcbF0Wpe1auNgjB+9VF1VVv+lEgTQe7fTFDyZIv7ezsz9fq4X/LbkrDLbIF6x5ldDi21Vq
xrEHehs6Vmn8E1FINOlV8Vz2xUVt+uxGBvX9BE2NWRXPF788ITecxw6tWiqdkl+YKzDM1vRO
5JBb7RxmvCnOQwret/ZUr2FIJi0k6DVYC1I5Jwv07zPGkPdZFKDz9vsMn6aY3/SgokfRLNlj
7nkEPGrDadp6UOpHl9nH+QDQ7NIkxTFqpIAMSFnyWxVQQtE2IKfQcbRBkk0P4CPtATxH9hmO
8SyFZMs6n+sXoAs85lqvF0t+6PdH//b5njVCQy/YxuR3U5YO0FX2Xm0A9V15cxW9Nx7Chp6/
xah+bFD3r5GtwofeejtT+AKez1rT1hELFXV04U8g4MzTLlT/mws6OAeYMtHi3NwZhEzTe7Yv
yDKL6n0W2Wfv2FbyPgaDwYjt8jgBAxIFRkkXHQO6RhMUs4c+WOB8DIazs8sq4AB8SiXe+gt6
RTUGtetfyC16ZyWkt+U7HlwLOdOuzOOtF9tuN9NKxPhhpYq39ewLC40sZ5Y2Wcag4NPaj2TV
4oDulAFQUajK0phEo2UBK4Em12pvSHw1mEyzvXGERkO7x7TJFXB4MqO2tDg1Qzn63QZWa1qN
rgEMLKr7cGEfzRhYLR5q9+vArsvtAZdu0sTpgAHNbNQc70uHcm8UDK4aY18dIge29e0HKLcv
ZnoQG+EfwdABzdaSNs6cdKmSsNe/qnrIU9s4tNG/mn7HEbyftdMSZz7hh6Ks4JnGdPqlWrvN
8JnBhM2WsEmPZ9u3bf+bDWoHE4NTBrJsWATeuCkirtSGoDo+QF9GSQHhhjTCLlK+05TtJK9B
12lWYS+25KN+dPVR2HdkI0ROAwFX21I1tm2dESvhq/iAbm3N7+66QnPJiAYaHXc9Pb47y96n
H7s3skKJwg3nhoqKB75E7n12/xnG/uQUqbdHGbW0QXsiy1TXmLv46M9o6ZwLsG+/ct8niT2g
0j2aPeAnfdR9sqV6Ne6Rw9AySupzUdir64SpLVit5PQav3DVJ607fAJkVGyMARMMIsOEGjHO
C2gwUG8Hc0kMfi4EqjVDiGYXId89fW5dfm55dD6TnidOOGxKz7zdwfOjuQCq0ut0pjz9o4Ys
bdOahOgvvDDIFIQ7u9QEUuswiF5rlgTNyxbJqwaEjXEuBC1AfkEWFDVmDlEIqGbfpSBYf9VG
UHLBbrDK1hxV05q+jcCAbS3jClq2Y0fMlGzf1OIAj3sMYawkC3Gnfs66KpP2eIgSeGqDdHfz
hAD9TT9BzR5zh9HRRyoBtYUgCoYbBuzih0Oheo2Dw7CjFTJctbtJL8PQw2gs4ighH9FftWEQ
1h4nzaSCAwrfBZs49Dwm7DJkwPWGA7cY3Is2JU0g4iqjdWKMmbbX6AHjGZjtabyF58WEaBsM
9KenPOgtDoQwM0BLw+sDNhczOm0zcOMxDJwIYbjQd4IRSR08tjSgR0Z7T9SEi4Bg926qgz4Z
AfVmjYC9pIhRrTKGkSb1Fva7Z9AVUv1VxCTBQQkMgf3qeFDj1q8P6BVKX7knGW63K/QmF13E
VhX+0e0kjAoCqsVRSfkpBvciQ/tfwPKqIqH09I1vShVcIp1qAFC0BudfZj5BelN5CNLuypGu
rUSfKrNjjLnRsbttaFQT2oQTwfRLFfhrPUyXx5fvbz99f/70dKem/NE6IYhKT0+fnj5pE7bA
FE9v/315/fdd9Onx29vTq/s2SgUy2n69/vAXm4gj++oRkFN0RbsqwKr0EMkziVo3WejZpssn
0McgnBij3RSA6v/oFGYoJkzg3qadI7adtwkjl42TWCsisEyX2jsRmyhihjAXdfM8EPlOMEyS
b9f2W5IBl/V2s1iweMjiaixvVrTKBmbLMods7S+Ymilg1g2ZTGDu3rlwHstNGDDh6wKueLDp
artK5Hkn9eGotnZ3IwjmwC9ivlrbLow1XPgbf4GxnbFvjMPVuZoBzi1G00qtCn4Yhhg+xb63
JYlC2T5E55r2b13mNvQDb9E5IwLIU5TlgqnwezWzX6/25g2YoyzdoGqxXHkt6TBQUdWxdEaH
qI5OOaRI61obXcD4JVtz/So+bn0Oj+5jz7OKcUUnWfBeMFMzWXdNrP0GhJl0anN8Hprkoe8h
Jcejo/6OErA9i0Bg58XG0dybaNttEhNgK3G4eYQHtRo4/o1wcVob5wXo+E8FXZ1Q0Vcnpjwr
8yg9rSmKNCH7gCoPVfmR2r1luFDbU3e8oswUQmvKRpmSKG7XxGXagret3r/XuOHWPLPF7vO2
p/8RMnnsnZL2JVAbxVh9emZnE0d1tvU2Cz6n9SlD2ajfnURnKD2IZqQecz8YUMcgQI+rRu6N
YE1MvVr5oOJhnUKoydJbsCcUKh1vwdXYNS6CtT3z9gBbW56Hu5D6zXzIiLqx3Q/E4yVP8dsr
++hO6/FSyFzRYTRqNut4tSCG+e2MOK1h+33PMjD6tTbdSbnDgNpvp1IH7LQrTs2PNY5DsI0y
BVFxOWdXip/XXg5+oL0cmM74F/0qfEOj03GA40N3cKHChbLKxY6kGGrrLDFyvNYFSZ+a6lgG
1HrJCN2qkynErZrpQzkF63G3eD0xV0hsh8gqBqnYKbTuMZU+KdGq0XafsEIBO9d1pjxuBAM7
s3kUz5J7QjKDhajSRsI2uQG/0ItcOyZR6xLV1UeHrj0Al1qisY3XDQSpb4B9moA/lwAQYByp
bGy3nQNjrInFZ+TZfiCRYuEAksJkYidsR3nmt1PkK+3GCllu1ysEBNslAHqL9Pzfz/Dz7mf4
C0LeJU+//PHbb89ff7srv4HTD9uXxJXvmRjX8/D48OnvZGClc0XuWnuADB2FJpcchcrJbx2r
rPSWUP3nnEU1iq/5HZhV6LfJlrmM2xWgY7rfP8F7yRFwfGwtR9O7sNnKoF27BkNz021RKZFl
APMb3khrG7s04Eh0xQX5oOrpyn4iM2D2nVCP2WNP7RTz1PmtrQnZGRjU2PHZXzt4e6WGj3Xa
kLVOUk2eOFgB79MyB4bZ2cX0Qj0DG9HLPpguVfOXcYlX8Gq1dIRIwJxAWO1GAehSpQdGs7bG
Q5X1+YrH3VtXoO2i1+4JjlqkmgiUBG7fkg4ILumIxlxQLEhOsP0lI+pOTQZXlX1kYDD5BN2P
SWmgZpMcA5hvmdQDYVilLa86eM1CVva0q3G4hR6zzJUYt/Cs61QAqP4jQLixNIQvFBTy58LH
b1AGkAnJeDIH+EwBUo4/fT6i74QjKS0CEsJbEcD3uytSELdrTu1ZzGHhWN9147cLbtOColF1
H33KFaLbTwNtmJQUo/16WV1XB9769qVcD0kXSgi08YPIhXY0YhimbloUUpt0mhaU64wgvKz1
AJ45BhB1kQEk42PIxOkC/ZdwuNneCvvkCUK3bXt2ke5cwH7bPnetm2sY2iHVTzI+DEa+CiBV
Sf4uJWlpNHZQ51NHcD9zcqJWRiu8FN3W1tKpJbMwA4jnPEBw1WsfMfZ7HztP2wxLfMW2Ls1v
Exxnghh7brWTthUprpnnr9ChEvymcQ2GcgIQ7bMzrIxzzXDTmd80YYPhhPVlweTtLkG+Zuzv
+PCQ2CpycE72IcF2guC359VXF6HdwE5YX1qmhf2O7r4p9uj+uAe002dHAqijh9iVC5RgvLIL
p6KHC1UYeAHKHVSbs9wrUjEBux9dP9i1MHl9zqP2DqybfX76/v1u9/ry+OmXRyX7Ob5urwIM
vwl/uVjkdnVPKDlhsBmjFW2c8oSTdPnD3MfE7LNK9UV6fbREuySL8S9sxmlAyGMjQM1+DmP7
mgDolksjre1mVDWiGjbywT74jIoWHc0EiwVSEN1HNb6Cgodc5zgm3wJWBbpE+uuVb6t9ZfYc
Br/AKt/kuDqLqh25KlEFhksvK+Udsvmtfo13bbYnxDRNoZcpKdC5XLK4fXRKsx1LRU24rve+
fdvAsczmZAqVqyDL90s+iTj2keVmlDrqkjaT7De+/RrDTjBSa+ZMXpq6Xda4Rnc0FkUGqlbB
1vbZZlyF96TrKjwHLXzriK5/4tel+H5jiS8NemclVDFaZYGKBXPHPhJZiUzwCJnY77vUL7CK
Zi0F8Iv6qhiDgTPqJEvx1i/XaX5BP1VfryiUeaW+h9UT1heA7n5/fP3030fONJGJctzH1D2p
QXUXZ3As+Go0uuT7WjQfKK51qPZRS3HYCRRYTUfj1/XaVtw1oKrk93Y79AVBY79PtopcTNqP
UouLtV9TP7oKOZEfkHHJ6v3gfvvjbdY9nyiqsyVB6J9mZ/EFY/u92qvkGTJ9bhgwS4i0Hw0s
KzXxpacc2WHUTB41tWh7Rpfx/P3p9TMsB6N7gO+kiJ22r8lkM+BdJSP74pCwMq5TNdDad97C
X94O8/Busw5xkPflA5N1emFB43zEqvvE1H1Ce7CJcEofiO/QAVFzl9UhLLTCFuwxY8vGhNly
TFWpRrWlrYlqTruEwe8bb7Hi8gdiwxO+t+aIOKvkBumyj5R+NQ/ap+twxdDZiS+cMZDAEFjf
D8G6C6dcak0crZfemmfCpcfVteneXJHzMPCDGSLgCLXWb4IV12y5LTdOaFV7tk/ZkZDFRXbV
tUZmmkdW5K3q/B1PFum1see6kSirtAC5nCtIlQvwfcTVwvCahGmKMkv2Al6wgIVpLlnZlNfo
GnHFlHokgXdMjjwXfG9RmelYbIK5raI0Vda9RN5SpvpQE9qS7SmBGnpcjCb3u6Y8x0e+5ptr
tlwE3LBpZ0YmaLh1Kfc1am0GZTaG2dlaMVNPak66Ednp1lrZ4aeaeu1lb4C6SA1uJmi3e0g4
GN7GqX+riiOVCB1VoOx2k+xkvjuzQQa3HVy+Yp/uyvLEcSDmnIiLuYlNwcYgsg3mcvNFkinc
E9nPAa18da8QbK77MoYjLD7bSz7XQnxBZFoL+6WHQfWioMtAGdVbVsgNl4Hjh8j29GZAqAKi
Oo1wzf01w7GlvUg1p0RORkSV23zY2CeYEkwk3jYMi71UnNUfBgQeHqleOkWYiCDhUPsZwYjG
5c52DzDih71tF2aCa1s3EcFdzjJnoVaz3H54PXL6/iaKOUqKJL0KrFQ+kk1uiyJTcsbV1hyB
a5eSvv2+aSTVzqEWJVcGcIWdoUOOqezgMqGsucw0tYvst/YTB7pC/PdeRaJ+MMyHY1ocz1z7
Jbst1xpRnsYlV+jmXO/KQx3tW67ryNXC1rkaCRBFz2y7t1XEdUKAO+2gi2XwLYLVDNlJ9RQl
znGFqKSOi8RGhuSzrdqa60t7KaK1Mxgb0D+0pkHz2ygLxmkcIZcOEyUq9LLPog6NfQpkEceo
uKLHLhZ32qkfLONo0/acmVdVNcZlvnQ+CmZWs9uwvmwC4ZZe7eAbYT9Lt/kwrPJwvbBNr1ls
lMhNuFzPkZvQNkrrcNtbHJ5MGR51CczPRazVlsy7kTBoOXW5bbyPpbsm2PC1FZ3heXYbi5pP
Ynf2vYXtPMsh/ZlKAcX9skg7ERdhYG8G5gKtbGu2KNBDGDf5wbOPozDfNLKi/krcALPV2POz
7WN4amiFC/GDLJbzeSTRdhEs5zlb1xxxsFzb6jc2eYzySh7FXKnTtJkpjRq5WTQzhAznSEco
SAtHvTPNNZjiYslDWSZiJuOjWoXTiudEJlRfnIlI3tzZlFzLh83amynMufgwV3WnZu97/sxk
kaKlGDMzTaVnw+7a+1ydDTDbwdR22PPCuchqS7yabZA8l5430/XUBLIHrQFRzQUgojCq97xd
n7OukTNlFkXaipn6yE8bb6bLq721ElWLmUkvTZpu36zaxcwkX0ey2qV1/QBr8HUmc3EoZyZE
/XctDseZ7PXfVzHT/A146w2CVTtfKed45y3nmurWVH1NGv2ib7aLXPMQ2XLG3HbT3uBshwqU
8/wbXMBzWv+/zKtSimZmiOWt7LJ6dm3M0e0T7uxesAln1iz9aMLMbrMFq6Livb2/pHyQz3Oi
uUGmWq6d582EM0sneQz9xlvcyL4243E+QEKVPJxCgGEJJZ/9IKFDCf5HZ+n3kUTGx52qyG7U
Q+qLefLDAxiOErfSbpTEEy9XZ1s9mwYyc898GpF8uFED+m/R+HOiUSOX4dwgVk2oV8+ZmU/R
/mLR3pA2TIiZCdmQM0PDkDOrVk92Yq5eKuQCCE2qeYdMOtgrrMhStBVBnJyfrmTjoW0w5vL9
bIb4cBJR+LU4purlTHspaq82VMG88CbbcL2aa49KrleLzczc+iFt1r4/04k+kCMEJFCWmdjV
orvsVzPFrstj3ovoM+mLe4le2PXHmMK2vWOwYVPVlQU6j7XYOVJtfrylk4lBceMjBtV1z2hP
OBEYYdGnnZTWux3VRYlEYthdHqFHnP2NVNAuVB016BS/rwaZdxdVxRFytd1f68WyOrloHm6X
nnOVMJLwBn82xf5SYCY2XHZsVDfiq9iw26CvGYYOt/5qNm643W7mopqlFEo1U0t5FC7deo3U
Emq/vzHoobLNVwwYmKlQcn3q1ImmkjQukxlOVyZlYpil5gscNZmSZ3dNwfQf0dVwNpj6lIJ7
EPVFPe2wbfN+6zQoWC3MIzf0QxphKxN9sXNv4SQC7gsz6C4zzVMrgWL+U/XM43vhjcpoK1+N
faieuXh++v4/f/X+Sy6L2kMsebHu/vP7r7BIsy/8Pfz1dq/yv5BGjeEYETcDoWRXlo4riyFp
9InQjLb6WbQE4RV6BFV5so1iK7Nw7+xR3+5Q9ZyL+ugdagA0GlF7G8N1pQpGLKG9VTjo5di9
Pv32mz2wTHe38KA2X+nq8tLK0czVYhQzrMMNNs35yRFo2aUO5piJ1WBs2GMZ/O3GNM3D0790
yCzp8nPePTo+JJT3kpHp7t3totrTH+9gQvn28K7K9NYGq+v7P59goT5tzTz8FYr+/evrb9d3
3ACXIm5ZxfOscuaJlYbnYoNsWKXv5BlclXVwU9X1Ifg6wW1sKS1zp1StkvM4L6AEl9iY5z2K
CY0YGsDzy3I4ObG5+LcS8+TKuC46Y7KrgFdmN6liJflsaKbdWXliy+XcrGf6sbIVlb4Zq5Fi
4phmJfzVsAO8bkwJsTSdKuoD+nYuQsmV3TFhZIYkgzcvNP5zHpPfCXxME0Z+kwyHeE0X355O
Rb5e5foCsQC/g/ersU7atKRDO6v7wM3ZKdFzw6uHxhwruuIFLlaazWpD5nhmI5KNq6Eb9aW7
/uU+16ZI8Gs6zpcPTtVtqh9gSExZChj9Qa+XTH/qXiOgLM5aV4ffYztkCOF6Peg11NSOliCZ
MaEbuSLdzUvj5fUkUoi3DRmzwDs6ScZoiAj6k7oRJWs0igxcy8PTorlY3yatfitZUtYNcEDR
55Oq4I9c75iSQmUyYeAGS0zQMpyMMt2sKWzM2rZuRUZ+yRLzcVspk21DfSkisTzyd9vQQs3l
0YT5NpYFno0OQYTlwrX97dbcwZoEiYhNR5PTx4GFcbGoTQ84RH6yMuetqhJhTZX6OBdwQKX1
hw4e29aqHQAxeV5vIi+yGbUcN6Bj0tWi7klwupD/6S+v799Wf9EFOJhmHRPzqwl0f4XaE0DV
WY09cu4ggIen72KG8M+vxq00EBTrij1upAsud01tWPmiINCxzzPwl1aYdNqejQ128AUBabK2
HWZhe+fBYCiCxXH4JdNvpd2YrP6yo/CBDCluk9K4br98wIOt7gZvxlPuBfrqycTHREyz+vbR
LhLg9Wm0iY+XtCO/2WyJNBwfyyjcELnHi+4ZFwuzjeGiUyOiHZUdSehO/QxiR8dhLv40QiwW
dbfNM9OeohURUsvDJKDynfNC6CTiC0VQ1TUxROSDwIn8Ncne9DZrECuq1CUTOBknERFEufa6
iKooidPNJE63q9AniiX+HPgnG7ZcIS+pYkWpu/BePoADU+MhCoPZeURYgolWK91N7lK9SdiR
eQdi4xGdlwdhsFsxm9iX5tNJS0iis1OJEngYUUkS8lRjz8pg5RNNuj0LnGq558h4hG3JQFgS
YCoURjSrSbFev68moQXsHC1m51AsK5cCI/IK+JoIX+IOhbejVcpm51G9fWc8O3gr+7WjTjYe
WYegHdZOJUfkWHQ236O6dJk02x0qCv1tyx+3qvn6/dePR7KUB8blHRMfj5dSn7iZyXO1sl1C
BKiYJUDTyvRuEpOyJjr4ue0SsoZ9Sm0LPPSIGgM8pFvQJgrHPStz3bWmSev3Eg1mR15I1ES2
fhR+KLP+CZnIlKFCISvXX6+o/oc2jA2c6n8Cp4YK3p28bceoBr+OOqp+AA+ooVvgIaFeS15u
fCpr8ed1RHWotgkTqitDqyR6rNqAp/GQkFf7tARuOqLR+g+My+RkMPCoWc+Xx+pz2dj49Ozi
rJJfvv8tafr7/YnxcudviDgsZzQLkR/Al2JN5GTP4WpmCY4vWmLAkEYMDtjRhc2z3tt4Sohm
zS6gSv3crj0KB/uQVmSeKmDgOCuJtnbzYoyj6aKQCor31Sa3laaAB6Jwu2G9C6gmfiYS2ZYs
ZcaZ7tIQsBXLUkOd+IucWiT1cbfyAmrCwzuqsZnnmrchyQNnQjahHj+kpvyJv6Y+sG5lLBGX
ERmDvEtDpL46EyNGWQ+GWdWCd77hs/2GbwJycdBtN9S8nViiS82zDSjFI0qYGncTuozbLvXg
2MhqVIs91OLSm1+/v7283lcBmktJOMwg2rxlErRowLxI6lE3pUzhGcHZgZ+F4cW/xpwNGwvw
0JFivzSMP1aJ6CJjVsEFeGkbUME5IzLog+3BrDrkVWZi57ztennbXX5nplBZpxlIrXnpBGuH
lomh5mBs1bIhRwZKMVjUx2xsmW4jO/UuLzJjgE6hr5bkxibzvAFjUoncoAsRsdJ/pkkLKOTM
SPAx5/LDG5KXB/D2g0DltVJgm7WNDrZ/y5p1VAB1MzICh63KQQxtZqSnwPxdJnuU+tlqDhzh
G1ZhMz5ga7FmbEwjH4F0JiI6a63tXZcDNwuxipv9VNy3kBtwRm0AxWACsk+bIS0QOOlHaGlK
Nm2KgguknlSVvshJneevRtbEprgivBUqftHBkeBsTCcTkBA4KlKp2MwgvqCcl91pPHILSj4b
EHh3Ad0jmnd50O9q3wijxUMykGXhhNpihs0SWOThwAAAKd2LL+/NbEyAGRjfqwZ1U4XTLT6z
+mTjyMaY6TclJ1T7NmEtyoF2KRAxXY6zASrKmB91spHKaaBQQa2uTJPnp+v3d0qZGgkXP8wb
wzddqjTaLci439uuWGWgcAFUy/VFopq9tfrYiFT8FkPyORurusv3jxbHs2IPCeNGyoA5ZuCl
CMtLVO5F64eaBpnIfC+G5ChHyyf6cSTrh/kK+xLmMV2bOvzExfwqwr+lw7NPq/8E2wgRyOVr
smcHWLautT3dGyYqocs++StdeTOe5Ll5nf/YeZuTvqKYvGfAaXhW6DCMn7NrjRWC21rWZGjC
yiIPZu3cuMCl2Bg8sc7cX/5yW6jC5X7p0r0Q4+qeXMvqIhWxktV4ZThoxq2VlxLUlJlxKxIs
lHUzWgCaaXKft59NIi2zkiSYPu0BgGdtUhue5iDcJLfXDEBUWTcg0bY33HQIqNxv9BdrADoS
a5DzXhB5XZa9vC/hIUbMez7vUxNEIlUtP7+VqEQNzTcjIzhjsOTEwKp7D15gMd4PFHxIEVoa
lhgLNJ8j3SYQ7ecxfmzAerRklWhl2joUJnhiXpqfDXOdc1wPh97QaiCoW9+p32DA1VugWQgL
Zt19m6hz2jBbvtRvx05gzIqi1hfEE55XTW+lVZSvUWk3UCgyeB0gG615N0qK+AW3UbSi3Cdn
rRucpUuDvO70K8gKbHP93QKFpY22F3g2/TMqCVScEjNuDiuIGxekFHbmhoH0BJr5kZgc6ybH
57cqmTyHf3t9eXv55/vD8ccf19e/nR9++/P69k48cySfMtC0p3raQFl2/UAoetlpQm91uQwo
H0Uv0zhcv8/2e1ay4OGmOdwfBAgGO3X7OB7rrin0VZVbZizyMu8+hZ6vy0qjATDukQs05M4C
BKAfZmexxtIaq4okOcGrUrqwfiETZODeIusmxggVzpZV8UmHXQYn/gN3EMu7VQZ5qEyzrRs2
4qmFpFpWdTIPUCYJ+k6RsP6TpDZtkT0BhMzgRN+HsOa8G6E1Z3h+6f9Tdm3NjePK+a/4MalK
ciRKoqSHPFAkJXFFkDBBXTwvLB+PMutaX6bs2Trr/PqgAZLqBprS5mU8+r7Glbij0a0Yf1uY
5arN9IKBSPWApvs4BWG3am68zeMsyok4Bd81NP5tdABNIzLIA56uMwqAYd7mlMM648tN0f2A
QjGJHKSbhqmORm6SrNKLYPhAqJ8wXaALu6nSB2KRpQWaVGEHb7WjlqYrTImAPk3QzTDFz/vt
b/c8oketQqNZeWbf0ma30muu6eKKmIhOWHLkiIpMxf7M1JKrski8nNFleAt2KzUXV0o3/UJ6
eKaiwVRlnBNvogjGaw4MhyyMLzAv8AKfomGYjWSBHUz3sJhwWQHv17oyszIYjaCEAwIyDibh
dT6csLyeWontYwz7hUqimEXVOBR+9Wpcr/m5VE0IDuXyAsIDeDjlslMHixGTGw0zbcDAfsUb
eMbDcxbGOl0dLMQkiPwmvM5nTIuJYKGdleOg8dsHcFlWlQ1TbZl5mxqMdrFHxeEJrjBKjxAy
DrnmltyPA28kaQrN1E0UjGf+V2g5PwlDCCbtjhiH/kiguTxayZhtNbqTRH4QjSYR2wEFl7qG
91yFwPP/+4mHqxk7EmT9UONyi2A2o+vovm71P8dIryyScsOzEUQ8Hk2YtnGhZ0xXwDTTQjAd
cl+9p8OT34ovdHA9a9RDtUeDjuI1esZ0WkSf2KzlUNchUTSi3Pw0GQynB2iuNgy3HDODxYXj
0oN7omxMXua6HFsDHee3vgvH5bPlwsE4m4Rp6WRKYRsqmlKu8uHkKp8FgxMakMxUGsNKMh7M
uZ1PuCSTmmrKdvBDYY40xyOm7Wz0KmUrmXWSWIcnP+NZLF2bIn227ldlVCUBl4XfKr6SdvBG
Yk/Nn3S1YLxGmdltmBtiEn/YtIwYDiS4UCKdcuUR4LHi3oP1uB3OAn9iNDhT+YATNVKEz3nc
zgtcXRZmROZajGW4aaCqkxnTGVXIDPeCWKK5RF1nJdmrXGaYOIsGJwhd52b5Q8wJkBbOEIVp
Zs1cd9lhFvr0dIC3tcdz5mDFZ+73kfVDGt1LjjfH9gOFTOoltyguTKiQG+k1nuz9D29hsJg6
QKlsI/zWexC7Bdfp9ezsdyqYsvl5nFmE7OzfPPOXSXhkvTaq8p+d29AkTNG6j3l17TQQsOb7
SFXua3LqVdV6l7IM9gQhRba/m7h6kHoLHcdUawJz9S4b5I6p9BJNKaKnxRXWaVjMxyRfeje1
SBEAv/SKwfFnVNV6IYfruIzrtCysZUF6TleHIW4O5jd8Mqsgn5V3n79aXzK9koGhoqen88v5
4/31/IuoHkRJpnt7gFVNW8ioiPRnA054G+fb48v7D3DV8P35x/Ovxxd4SagTdVOYk62m/m0t
SV7ivhYPTqmj//n8n9+fP85PcEE0kGY9n9BEDUCtp3RgFsRMdm4lZp1SPP58fNJib0/nv1EP
ZIeif8+nIU74dmT2xs/kRv+xtPp6+/X7+fOZJLVc4LWw+T3FSQ3GYd1bnX/96/3jD1MTX/97
/viPu+z15/m7yVjMFm22nExw/H8zhrZp/tJNVYc8f/z4ujMNDBpwFuME0vkCj40t0H46B7Qf
GTXdofjtK5fz5/sLnHnd/H6BGgdj0nJvhe09ijIds4t3vWqUmJuWYXWEf54f//jzJ8TzCa5S
Pn+ez0+/o4tdmUa7PRqZWgDuduttE8VFjScGn8WDs8PKMsd+3B12n8i6GmJX+BUkpZI0rvPd
FTY91VfY4fwmV6LdpQ/DAfMrAanLb4eTu3I/yNYnWQ0XBEzX/jd1+st95z60PUu1bpPw3VaS
lnBCnm6qskkO5HoKqK1xos2j4ANrIdzIWq4q4x24iXFpHabNRPek/L/EafaP8B/zO3H+/vx4
p/78p++57BKW3il18LzF++q4FisN3WqpJvjW1zKggzF1Qavf+cWATZwmFTEzbmyAH7DVuzbD
cg8OxDb7rg4+35+ap8fX88fj3adV7POU+sC2eVenTWJ+YWUyG3EvAHbKXVIvIQ+Zyi6P+aO3
7x/vz9+x6siWvhXHF1T6R6t3YfQsKBGLqEPRxGejd5ug2T+ih/t12mwSoXf9aAW7zqoUHFx4
VjbXx7p+gEP5pi5rcOdh3M2FU5+PdSotPelvxTqNR88gqmrWchOBksMF3BeZLrCSEbbgajDr
ioY81sWEc9GLqe2KrlUFVF6+a055cYL/HL9VaJOlB/MaDx/2dxNtxDgIp7tmnXvcKgnDyRQ/
6GuJ7UlP2qNVwRNzL1WDzyYDOCOvtwnLMX4ogPAJ3n4SfMbj0wF57M0I4dPFEB56uIwTPa37
FVRFi8Xcz44Kk1EQ+dFrfDwOGDyVevnNxLMdj0d+bpRKxsFiyeLkORTB+XiIkjfGZwxez+eT
WcXii+XBw/We6YGo3nR4rhbByK/NfTwOx36yGiaPrTpYJlp8zsRzNNY6yhr3ApXrETCKkDXn
HoJNjkIGBECReUzOdjrEsbh4gfGavke3x6YsV6D1gjVKjaIC2O8t0gKrsFmC3GULT0nCIKrc
4ztCg5nh2sGSTAQORBarBiEXozs1J+8BuitWd+RrYRj6KuzipyM67/Y+Q4wFd6Bjo6aH8TXA
BSzlirgc6hhJ3dp0MDiR8EDfA0xfJvM4P6FuODqS2r3pUFKpfW6OTL0othpJ6+lAaui1R/HX
6r9OFW9RVYPSuWkOVD+2NbfYHPRkj84nVZH4lhjt5O/BMpuaPVbrUPHzj/Mvf9nVTdmbSO3S
ullXkUiPZYUXu61EJNNTe0CGl2FOxF2oU5aDojs0rjWqRGN103gLwT1nK8CuH9SO/qJ4faXr
6tQy5jS90tsN3GogoNF1JN1uJ2NzeP3lAA2t4g4lH7QDSSvpQKoEnWOrEMc1WtucFmHvd9vX
7TL6H0eBxyCRNSuB3TZGeZYWxsIMEdzuo2PqBLZq+RBFaxV1VeZ4DDoJKq83GfcUOWVRKZxY
ozittsmaAo3vkczCJKRxDLUR+PQzUjAWRLIupQMyMRqYxAhIsaJgmqYy9uK0KBFM4mSF7wqS
NM/1BnqVlTxoQn9xhBLCIdzkDVit6sKD9l5a5YJoARiUZrxF9H9UXGWSDIA9GeExqkdzbFsZ
Hr/qncN6l+V4Nbn/LavV3itDh9fwUAcPahIW27EZJbBZ5620biQJ4n9WAEmzXQk4EEVAoncX
UeLlx75v0nNRQnzaguG7Hcg79tcxrLuRinwjOlTG6BGtoxjsf2W4nzJiQ2RrMZYaUKUidsof
ILdlvUsfGjhNwdaJoGMb60BKBg22cG+peFvD/yaTdepS8DAsPRC7a+2znqLWI1nQHOjk2L7t
SYu8PLpoGe3qipjItPiBtHO1r3QlphP6lVu0mehxva5LX14zZiXQlLJKNxknoQd4P7hQmddS
AKMDWzmeNale9+wI5nUFGdt3EsaMLFZPi4Te92/8Jtni93j1ZT5kaz4ZtcfWnvKq9lLtKOqr
uUOd0VjHHQvnlkRG/giU+7mVURGpUm9l/XKUxQMLQmpG+RMpVZqDgXno9rdS6gVC5cUCtgys
D4qs0AJFDa680afKT/0MiSPbx1s91qWguCoytytkuJ4sVCmvhSuh12IaKdL4Ygjo7df5BQ7Q
zt/v1PkFTrLr89Pvb+8v7z++LiaLfEXeNkrjXUrpES2urXVzaJh4FfT/TYDGX+/1pG2ONCZu
mfcFLFr0uiy971ZAzDCQgEV4cFtAumTbqdc5mAlNKxF5QUWWtN3P7V8tX0FgPl4p+jdd/TOQ
C5PJmHkG0vL7IqtBwvt08d7AXyxs9LdRsxbWdNpFuj/6kZnEbW2doOf1XffZ6q1Q2rdBrGNp
mNJfr/SEBFcyKUPUxICsn6YF6NqyAysp1IaRVdta+jBZs3ZgLpl49ehZo3nHwLtVAnMVZ3G0
CwbvcMgavU8E5FfYGELHHFZM8nZ2VUwJzLROHLb1lLES5sGO5xcD6x2WXpborSd5TIKo9lHa
ZantPXvuED+rPWNmUo7QrTMF38goAaGXYFFRouHtcphgrOr2Sv+vDo7n41J/S8jlFwH03DWf
cRhtZvkOtNz11htufS6PIEDBGw4i9UQrYbePtYrbQ8ru/UX8/vr6/nYXv7w//XG3/nh8PcPl
3GVYRMearoUNRIEqRVSTF4UAK7kYjyi0VcmOyw9jwIuSy+lixnKOfS/EbLOQWPZGlIpFNkDI
ASKbkQNLh5oNUo6OLmKmg8x8xDIrMdZbCbb64iRO5yO+9oAjZtYwp+x+WbIsHMWpKGNT3KQi
K3iqNXHAUSoQUhEFRQ3WxzwcTfkyw6Nu/XeTFjTMfVll97Rt5Wo8ChaR7o95km3Y2KzFBy5j
eRlvi2gTVWw412gZpvCBEsLLk16MsUkdYv5brJL5eHHiG+w6O+lh3CgGkzJHxlCnomB51J9t
Nhox6JxFly6qF5J6qF3p7WFzrHR9arAIFls8Y5sctydRLtiEYOWFRZsNebTSUbuyiNjKyqjF
yE4+ftgUe+Xj2yrwwUJJDmQkVUWxSjflVVpVDwOjwjbTPT+MD5MR33oNvxyiwpDvzEDNBynf
Pw0d88AZ2UV/KAWf2GBQAtsy2K9YYUQM5m1VgqvnbvrI3n6c356f7tR7zLhJzwp4k6uXGJve
GPwXx7VmZwa5YLYaJudXAi4GuNOYnBp2VK2Xn3ZuRMt9poBMtXQusNEWx3hOitvptt2e2HkW
OQMwN9r1+Q9IgJ11zf16nQ5MmnUANzDDlB4xiMlYXyATmxsScJ1+Q2SbrW9IwFXOdYlVIm9I
RPvkhsRmclViHFyhbmVAS9yoKy3xm9zcqC0tJNabeL25KnH1q2mBW98ERNLiikg4ny+vUFdz
YASu1oWVkOkNiTi6lcr1clqRm+W8XuFG4mrTCufL+RXqRl1pgRt1pSVulRNErpbTGLgapq73
PyNxtQ8biauVpCWGGhRQNzOwvJ6BxZgsMyg1nwxSi2uUvT+9lqiWudpIjcTVz2sl5N6coPBT
qiM0NJ73QlGS346nKK7JXO0RVuJWqa83WStytcku4MXXMHVpbhct2KuzJ7JRgrcPG/uVmTMq
Y8Nokyi0vDRQJUUcszkD+jLDG+FoNpH4rNeAJmUZK7B6uSB2antaiQQSYhiNIitukbxvNnHc
6E3ulKJCeHDWCk9HeNHZoeEIv/7K+oixzWVAcxa1slgZSRfOoiF+ydWjpNwX1JXNfTSxsssQ
P2QFNPdRHYOtCC9im5yb4VaYLcdyyaMhG4ULt8ILB5V7Fu8iWeAWoNqvh7IBT9IzJTWsN4cj
gm9Y0KTnwUIpH7TaCJ60rmg96EH2pjMKm1aE6xmyXO/B8gjNNeD3odJLYukUp43Fj9rWkwt3
WfSItlI8PAerMx7RJkq07DswIKAUWSPB9B0crmUHXCSweLYmnX0ndbWeYnok15kHozvJVKQH
Z8NZfYvGDjJXy8A9MqsW0XwSTX2Q7Jku4IQDZxw4Z8N7mTLoikVjLob5ggOXDLjkgi+5lJZu
3RmQq5QlV9RlyKYUskmFbAxsZS0XLMqXy8vZMhqFG3iaTE9Nt/pzuxGAETq9SQ2aWG54ajJA
7dVKhzKeq1WaOwKtITsdEkYI9/CDsLXkWd1J+Gm8vTu9cNblLpjEDaf0KNoR0BO/MlHE5JYY
jCuOR2xIywXD3HTCciaf2To7uCfXBmvW+9l01MgqxqcnYPURxfVKCBUvF+FoiJhElDFJUSX0
HrLfTHGMzpBwbRb77OIqu8RFsunFewJlh2Y9Bo1J5VGzUdZE8BEZfBsOwZVHTHU08EVdeT8z
oZacjD14oeFgwsITHl5Mag7fstKHiV/2BWiIBBxcTf2iLCFJHwZpCqKOU8M7eDLPANo71MYL
Yv72pgu2PSqZFcZp8ZePOXYpEUGXuYhQWbXmCYlV3TFBjSZvVSqafWuEG52Iqfc/P57O/gmi
Me9FbPxaRFblinbZ9FCDM6oZvleHnw0tvpZc5YkrqVFVxc7xeqeY6ZgY606rXby1xe7BnSV2
jzgag7AOuq5rUY10n3Dw7CTBsKyDmvcuoYvCkb4DVYmXX9v9fFB3vq1yYPv6xQGtMXUXLWQs
5n5OW2PnTV3HLtVat/dC2G+SrE6QCgxbuLfkUs3HYy+ZqM4jNfeq6aRcSFaZiAIv87rdVqlX
94Upf62/YSQHsikzVUfxFrcfPc8d5sKo02S4CUa1ANWIrHYhVXvRdvpHcMl0aSOtBX/3s8OF
k949emUFu77ud4YpiS/Jb0YNhWRPbdtuFwsOFfUeLWG6dUGpuz4jXOPPmLaF0EXP/Co9oRuh
7WICbU1UCwbD5wgtiD2o2iTgwRk83Ylrv8yqNioV6HvEugLGfuvubwp4uMSf0PhzNy+4dFxg
KtY7yXBGvT5glOWrEt2fmXd2gFxUYjqdY7FF+qbW2UAzgf5XHXULoYH6F2WCxC7xYUdnQp0E
tNdBHgiXRw7YZt0xjGYPSuA8hKgCwUgqk9iNAqxQi+Tege0aQKgNQa091aw8YF+PZaSyBP8G
GepV1UAXdVGrOA/PgJ+f7gx5Jx9/nI1/3DvlqYq1iTZyY1Rn3XgvDOxGb9G92eQrcmYoUTcF
cFQXtf0bxaJxdqoxXy5srejB5rreVuV+gwz7luvGMUzbBsJGqyORuFI91Bzwi/Qe9fKiI6wa
t8pbG/Y0/QvIlAiR6iCGQvXukll+nZdSPjTHaCDeOMrNhwFrDnxk1b0eKok93kyauhAKV5vu
4wrK9uoicAJikmgN7a4efLugarKEtefRrR+D67nMgaHrOZDpug7WmlPt0Pap/Ov7r/PPj/cn
xodFKso6be/x0QN5L4SN6efr5w8mEqo1Z34a3TUXs6e64Ba9KaIadnbDAuQA1mMVMQiMaIWN
51i8NVaMDQCQcvQ1D2/EQGe+W//qOeDt+/H54+y70uhlfVcxF8q0Oo5oF/E2kTK++zf19fnr
/HpX6v3C788//x1elT89/48eGRK3rmEBKUWT6A1CBr6I01y668sL3bWA6PXl/Ye9Kfe/nn2Y
HUfFAWvRtKi55Y7UHmuqWWqjp+wyzop1yTAkC4RM0yukwHFeHjgzubfF+rSqvlypdDyerpP9
DcsJWGmgL4MIVZT0eYxhZBB1QS7Z8lO/rFGWY5MDPNf1oFpX3cdffbw/fn96f+XL0O1y7Cu9
L1y0zm0pqiY2LmsY5CT/sf44nz+fHvXkcv/+kd3zCd7vszj2XL/A4a+CJwkEMeaTMIIGphRc
hNBFsdDbBfLYwT4jjVuf7tgIyY3c9tYM+DLAgmwj40PAtjOz0oz3UIe0QjsbC8SygZ8u7PX+
+msgZbsPvBcb7JPagoWkmup+NNbkNro0Y3pqu/yiCzLdXaqI3BgCas7JjxU+RQBYxdK5uGOT
NJm5//PxRbengcZpF45gPJy4UrNXZXr6AR+KycqZrWAprldCjvhGrTIHynN8cm8gmVTtcKcc
5l5kA4y5r/vyIJk4IJ1OuomEuQQEQXiMifXkWkIGbjUoobzw7fBG0WNcKOWMSe3CnDzzZL8I
bsXe9QaoOfl3DwidsSg+UEcwvn5A8IqHYzYSfNlwQZes7JKNGN83IHTKomz5yJUDhvn0Qj4S
vpLItQOCB0qIM1iBp4AYm8GwggwkyhVR6e73j5tqzaDcUGimoqF7AHXgMFjgejgkgOe5FmaT
NIfZqooEzUbnhelQ5nW0MVYsZe5OeUZocksI7ST35nSqn4atv4Dnl+e3gfH7lOml5ak5xHvc
55gQOMFveCT4dgqW4ZwW/WI56G8t9LqopHmzDC+Ouqy3P+8271rw7R3nvKWaTXkADxXw9Lcs
khTGYDThIiE9VMIRRUQWrkQAlhwqOgzQe6VZGQ2G1pseu7onOfcWs7BfaptL+xzdFJjsp+z5
5jClm41HXiqvfX/55ebSwF3aRYl17FkRKcV+SORiLmiNn0af4P1b9/HSv349vb+1+wi/Iqxw
EyVx8xsx1NARVfYNtLA9/CQD7O+9hdcqWk6x3kCL0+emLdg/SZ1MsdoEYc0TNI8T0Wk8nc3n
HDGZYIOHF3w+D7ETa0wspixBncq3uKv038F1MSOKBC1u517QHwBHAh5d1YvlfOJXrxKz/2vt
25rbxpV1/4orT3tXzazR3dKpmgeKpCTGvJkXWfYLyxNrEtXEdo4vayf7159uACS7G6CSVXUe
ZmJ93bgQl0YDaHTPqTN4A6OTUmdTAsG3X5zpECJkDAb0UgF042hDlFFtG92kYUJApbrRt5jt
4TNl0kN1PptgiDz2kWoIlwV9phexh8cYTafebNi5aYc1/trFir5aACzrhL76QvoVuptodJgK
AldFhC/E8MmbLotR9Z/05RdJw6vVllqiHOtYJpSlvLEDH2m4ZR+oWvte+5fcb5K3Mi20otAh
nl5OLEC6s9Rg687SwOvEGy9HDss4IEyolyf4PRtZv/nzxnXiw6yQHgIoOszPaxt4ExZv05vS
B0N4CBjQl04aWAmAPtgmwVN1cdQTlups8/BQU02EIt6pVZsUfZ0M0PAp9Dk6fKWkXx3KYCV+
Cj8lCuJeSg7+x6vxaExEWeJPmWNy2BuB3j23AOFpyICsQAS5RV/iLWc0yjgAq/l8LLysGFQC
tJIHH4bNnAEL5sO49D3uEL2srpZT6pAZgbU3///mgbZRfpjRn0ZFgwgGl6PVuJgzZEzdwuPv
FZt3l5OF8GW7Govfgp8a/8Hv2SVPvxhZv0GMK0cJXoF+HuMBspj7sBwuxO9lw6vGgiPib1H1
S7qeotve5SX7vZpw+mq24r9XzEeMOu8CRYRg6uDKS7x5MBEUUD9GBxtbLjmGt0nqsRiHfeXz
ayxAjNnMocBboTja5hyNU1GdMN2HcZbjoX4V+sxTS7uHoex49RwXqHMxWJ1WHSZzju4iUE/I
mNsdWPCfKPUmB9ES7c2jAPsn/ZyQHC4FFOfLS9mUce7jQ0QLnFrFx5U/mV2OBUBf6iqA6nwa
IMMHNbzRRADjMZUCGllyYEr9E+ILYeajLvHz6YQ65EdgRqOAI7BiScxTK3xxARonRhLlfRmm
zd1YNpY+VS69gqGpV1+yQERoF8ETavVSjjilRe5xwJgXc5yi46o3h8xOpFTPaADfD+AAk87S
toS3RcZrWqTzajEW3136k0s5HNCvbSEgNd7wHqyOuWc3HVJZfyldRzpcQsFGmSY7mDVFJoFJ
KiAYaNQWWVnAiPZXxlf+aDl2YNSSqcVm5Yg6jtTweDKeLi1wtMQHyjbvshzNbXgx5jEdFAwZ
UOt3jV2u6G5EY8spfV1usMVSVqqENY258Ec0gX2V6FiAq9ifzekL+Oomno2mI5h6jBPfck8t
QbrfLFSca+aYN0cXZujaleHmiMTMvf/cFfzm5fnp7SJ8eqDH5qCGFSHeyoaOPEkKc5X17evp
75PQFJZTuozuEn+m3tSTK6QulbZs+3J8PH1CF+rKDTDNC62UmnxnlFKyhiXhYjmSv6XerDDu
F8QvWYiwyLvm0yRP8OU3kZdYclQoP8DbfMoM4Ev6c3+3VEt3b7kiv4o2MXf5UYq56uA4S2xi
0Nu9dBt3hzy704MpV/lN1+aPJEZor+frLRwXoILcb9K6j3PnT6uYlF3tdK/o+9Uyb9PJOqkN
QJmTJsFKyR1Cx6DdpPTneVbGLFklKuOmsaEiaKaHTPQAPa9git3rieFWmeejBVOE59PFiP/m
2uR8Nhnz37OF+M20xfl8NSl0WHaJCmAqgBGv12IyK6QyPGc+RvRvm2e1kPED5pfzufi95L8X
Y/GbV+bycsRrK3XsKY+0seSxADHKNY00H+RZJZByNqM7lFaLY0ygfY3Z5g7VsQVd2JLFZMp+
e4f5mGtn8+WEa1r4RJ8Dqwnbs6lF2rNXdE8u/pWO1bicwKo0l/B8fjmW2CU7HDDYgu4Y9dKj
SydRLs6M9S5iysP74+MPcyTPp7Ty2d+Ee+aXRM0tfTTe+vQfoLSOhn4MMnRHZSxSBKuQqubm
5fh/349Pn350kTr+Fz7hIgjKP/I4bmO8aHtDZQp2//b88kdwen17Of31jpFLWHCQ+YQF6zib
TuWcf7l/Pf4eA9vx4SJ+fv528V9Q7n9f/N3V65XUi5a1gb0OkxMAqP7tSv9P827T/aRNmLD7
/OPl+fXT87ej8dZvncCNuDBDaDx1QAsJTbhUPBTlbM7W9u14Yf2Wa73CmHjaHLxyArsjytdj
PD3BWR5kJVTaPj0PS/J6OqIVNYBzidGpnUdeijR8IqbIjgOxqNpOtXMTa67aXaWVguP917cv
RMtq0Ze3i+L+7XiRPD+d3njPbsLZjIlbBdDXn95hOpJ7UEQmTF9wFUKItF66Vu+Pp4fT2w/H
YEsmU6raB7uKCrYd7h9GB2cX7uokCqKKiJtdVU6oiNa/eQ8ajI+LqqbJyuiSHdfh7wnrGut7
jFcYEKQn6LHH4/3r+8vx8Qjq9Tu0jzW52KmygRY2xHXiSMybyDFvIse8ycrlJS2vReScMSg/
hU0OC3YUs8d5sVDzgrtXJQQ2YQjBpZDFZbIIysMQ7px9Le1Mfk00Zevema6hGWC7NyxuGkX7
xUl1d3z6/OXNMaKNX17amx9h0LIF2wtqPP2hXR5Pma97+A0CgZ7N5kG5Yg6XFMLef69348u5
+E0HkQ/ax5jGmUCAxYiFTTCLa5qAkjvnvxf0sJvuX5TnRHyjRLpzm0+8fES3/xqBTxuN6CXW
NWz7x7zdOiW/jCcr9oifUyb0eT8iY6qW0VsQmjvBeZU/lt54QjWpIi9GcyYg2o1aMp1PSWvF
VcFCJcZ76NIZDcUI0nTG43QahOwE0szjYTOyHMOlknxzqOBkxLEyGo9pXfA3ewteXU2ndIBh
sIV9VE7mDohPux5mM67yy+mMuvlTAL2Ua9upgk6Z02NLBSwFcEmTAjCb01ggdTkfLydkwd77
acybUiMscECYqGMZiVBHg/t4wfwA3EFzT/T9Yyc++FTX5oT3n5+Ob/ruxSEErrivBfWbbqSu
Rit2CGuuBRNvmzpB5yWiIvBLLG8LcsZ9B4jcYZUlYRUWXPVJ/Ol8Qh0RGmGq8nfrMW2dzpEd
ak7n6zzx58wmQRDEABRE9sktsUimTHHhuDtDQxPh8Zxdqzv9/evb6dvX43dunIoHJDU7LmKM
Rjn49PX0NDRe6BlN6sdR6ugmwqPv35siq7xKuxQnK52jHFWD6uX0+TNuCH7HyHtPD7D9ezry
r9gV5rWa6yJf+YUu6rxyk/XWNs7P5KBZzjBUuIJgSJWB9Og313WA5f40s0o/gbYKu90H+O/z
+1f4+9vz60nFrrS6Qa1CsyZXkRPI7P95Fmxz9e35DfSLk8O2YT6hQi4oQfLw25z5TB5CsLhQ
GqDHEn4+Y0sjAuOpOKeYS2DMdI0qj6WKP/Apzs+EJqcqbpzkKxOvaDA7nUTvpF+Or6iSOYTo
Oh8tRgl5/rJO8glXivG3lI0Ks5TDVktZezQYYBDvYD2gpnx5OR0QoHkhAj7Qvov8fCx2Tnk8
Zj571G9hiaAxLsPzeMoTlnN+x6d+i4w0xjMCbHopplAlP4OiTnVbU/jSP2fbyF0+GS1Iwrvc
A61yYQE8+xYU0tcaD72y/YTRQu1hUk5XU3YlYTObkfb8/fSI2zacyg+nVx1Y1pYCqENyRS4K
MARAVIXszV6yHjPtOedBmTcYz5aqvmWxYRdxh9WcrVhAJjN5H8+n8ejQWS917XP2K/7jCK4r
tu/EiK586v4kL720HB+/4VGZcxoroTryYNkI6WsGPIFdLbn0ixLtyD/TBsjOWchzSeLDarSg
WqhG2J1lAjuQhfhN5kUF6wrtbfWbqpp4BjJezlloYtcndxp8RXaQ8AMDd/Rnngh49CkdAlFQ
CYA/cEOovIkqf1dR+0eEcdTlGR15iFZZJpKjYbJVLfFkWaUsvLRUL4T7oZiEKm6V2fnCz4v1
y+nhs8PcFll9bzX2D7MJz6CCLclsybGNd9Xdw6hcn+9fHlyZRsgNe9k55R4y+UVeNKMmM5M6
EoAfxgU/g0QAHISUgwKWi/FZsIv9wOc+sJHYGePY8BWzMTaoiGeGYFiA9icw8yCNga0rCIFK
A1kEw3w1PQhG40yBg7toTcPnIhTR5VcDh7GFUDMWA4FSIXKP8+mKqvwa05c1pV9ZBDS7kSBd
uFqkyf3IhVrRV5CkzFYEVF0pt2qS0Tgg5uhBVACdyTRBoh0jMEoOs2CxFN2Lzh0YoB6gcMQ4
kkBfDpzQhiNmaPvMhIPatxPH0EhFQtR9jUKqSALMkU0HQRtbaB6KOYaGJpxLPSsQUBT6Xm5h
u8KaXdVNbAE8EBiC2j8Lx+4OrdiIiuuLT19O3xyRcIprHuzZgxkSUUNxL0AfEcDXZ/5ReQ3x
KFvbfyDAfWQG8ewgQmE2ip7xBKkqZ0vcvdJCqd9uJFj57Ja6eP7svvWkBNUNQup2ASYr0Msq
ZMbgiKYV7mvl8yPMzM+SdZSKmzrZtl1euedf8cCH2gCmgqk74Zt2jJUMCTK/otF8tD93v4+Q
+INTvGpHH78Z8FCORweJGgkrUSljGWyMaGQiHtVDY2hraGHKMHF7I/EYw0ZdW6iWiRLWkssF
ak+vjVdY1UfrO5nE4bJHE7pXpzIX80TUlziPJmIwdZkrs1YiI8nHc6tpyszHqNUWzL3DabBz
LS8L7XyEDeDNNq5DSby7TWkgDe2HrI0bMGXGAoK40I8E9J5id4uB2V/V27NemGC8jQKmKIZt
/eEAmyTCkHuMjHC7HuIzmKyiUh2IOooHg7QtHwvDamD0GtOVIYkrdxp0WAf4lBPUGFuulUdF
B6XZHuKf0Vw5NtvxxBtOaIhTXN1DFwe6OD5HU1+PDCaqB+fTgTQcGehwGLx5Ot9nyqmk1aA6
rIbjU3qCaIC0nDiKRhQ7PmCrMuajnBd61FK/g61+NB9gZ9/5IsuKggVOpER7uLSUEiZSIWqg
Xljh6/5rux5JdFAB2Jxj0HhRshIZl0sOHKUwLjqOrEqMwZdmjg7QArbZF4cJOlOzmsTQC1hI
eWLtUmp6OVfvzuK6xMNYa7bqpcTVM5pgt8ke9hgN5Au1qSsWtJZQlwf8UutDQXdsJssU1PQy
8gdIdhMgya5Hkk8dKDpMs4pFtKZvx1rwUNpjRT0wsDP28nyXpSE6s4buHXFq5odxhqZ5RRCK
YtSybuenFyTozYkDZy4TetRuGYXjfNuVgwTZ0ISkGnyAWoocC0951rE+pHdia8uI7umrGtu7
QI4WTrc/j9ODMrJnYf+G3ZoZHUkEpUOaUQODXIZ5J0Q174fJqkA2l9pXl/aHlPN8PxmPNOWH
nZmao5bM7NZ+O0NKmg6Q7BZB61HcQo2nUBf4PGtZ7eizAXq0m40uHQuv2k9hNL/drWhptV0a
r2ZNPqk5JfCMmiDgZDleOHAvWcxnzin28XIyDpub6K6H1Z7W6Np8tcMYnFEeikaroLjxZCym
NfBukyhS3pcZQWvDYZLwY02mSHX8+FbeZ97CdFhUL4+lmXVHIFgQo6uojxgptd/H0Te38IOf
HyCg40xq/e748vfzy6M6Yn3Utk5k69jX/gxbp3ZS5yEFOpim8Q8NIE+hoGlnbV28p4eX59MD
Ob5NgyJjfpA00MDWLEDfj8y5I6PRwy+Rqo3n/uGv09PD8eW3L/9j/vj304P+68NweU5ffG3F
22RxtE73QZQQ8bmOr7DgJmfeYtIACey3H3sR2dMgR0V0GfxBifmG7BZ0oQr7IbDAIxuubCPr
oZkw2pWVEj8W9rYRfcQPuYEWF+25o1xSAn4qAo8CEOW26M6JXjlRzEHpDx49190j7w/2U56B
alDt+6NEJFVw5mc0VKt5TR9uamqxrtnbfUyI/u+szFoqy06T8OGgKAd1DVGIXtI3rrzV864y
8KgLu3YtE7l0uKMeqEWLepj8lbTGeLikhG7ZcDaGtsSWX9V6ZXMmKdN9Cc20zemeFuOblrnV
puZFmshHubptMW2EeXPx9nL/SV2LycOvkh4Fww8dVRcfI0S+i4AeZStOEKbfCJVZXfghcURm
03awYlbr0Kuc1E1VMEcoJpr0zka4EO9QHsu7g7fOLEonCmqJq7jKlW8rvHtDUbvN20Tq2OOR
/mqSbdEdiAxS0HU82ahox7U5Cl+xIFokdSTuyLhlFJe8ku7vcwcRj1GGvsU8aXPnCmvMTNqq
trTE83eHbOKgroso2NofuSnC8C60qKYCOS5qrU8jnl8RbiN6oASi34krMNjENtJsktCNNsxb
HaPIijLiUNmNt6kdKBv5rF+SXPZMGbEfTRoqZx5NmgVEH0dK4qk9L/fFQggssDXB4f+Nvxkg
KZ+RjFQyn/sKWYfo44SDGXVZV4WdTIM/bWdTXhJolv6ylrB1AriOqwhGxCHsXEYSqy2Hh8Aa
H4duL1cT0qAGLMczenOPKG84RJSbfbeNmFW5HFafnGi9sMCgyN1HZVawc/QyYt6i4Zdy98RL
L+Mo4akAMO4EmWO8Hk+3gaAp8y/4O2XKNEV1ygyDW1ET4qxGnh4Yj2awHfeChhr0EsswP60k
obUqYyTYYITXIZVJVaIyDpiHoEzdb/eWSPzOWD8mOn09XuidB/X85YMUgq1Rhi91fR+NZrrP
3ntoElLBClWiV4qSxTIo0Ucw3bOEh2rS0CMUAzQHr6Ju2Vs4z8oIxpUf26Qy9OsCHz1QylRm
Ph3OZTqYy0zmMhvOZXYmF7GDUdgVDOBKqcqkiI/rYMJ/ybRQSLJW3UDUoDAqcf/CatuBwOqz
OxmDKw8Z3KMuyUh2BCU5GoCS7Ub4KOr20Z3Jx8HEohEUIxp6YkAFMgYPohz8fV1nlcdZHEUj
XFT8d5bCUgn6pV/UayelCHMv0vd9hHjjFSn1pMSI6kMcfpa2m5LPBwOooCUYVC2IyWYF1B7B
3iJNNqF7+Q7uXOg15ujWwYMtWspCVIVxubrCCwMnke6Y1pUchy3iavWOpsaoCa/BOr/jKGo8
VYYpc2vmjGARI0SDXgmfXblyCzcNbEWjDSkqjWLZqpuJ+BgFYDuxjzZscsq0sOPDW5I92hVF
N4dVhHqqjtq/yEe5lNdnOhG9EG1Lwa0vWiw6ifFd5gJnNnhXVuRg5S5LQ9k6Jd/S69+wcgcM
c8pPtK6iH9wizVpHKMppY0QYMUFPDGqMkAboV+R2gA55half3OaikSgMyvOWfxCOEtY/LeQQ
zIaAJx8VXoJE29Sramh5ypVmFRt2gQQiDWhzrT6hJ/laxKzEaMyWRKqTSXlC+qmfoOJW6vBd
aSobNqDyAkDDhoKMtaCGxXdrsCpCehixSapmP5YAWdpUKr+KRT6AtAc1/VlcXWWbki/FGuOD
D9qLAT7b/Gu3/FxmQn/F3u0ABjIiiArU4YKIurx3MHjxjQeq6CaLmd9ywoqHgQcn5QDdrT7H
SU1CaJMsx17Xr7fvP32hgQE2pVAFDCBleQvjpWG2ZR5xW5I1nDWcrVGsNHHEYhIhCWcZbe4O
k1kRCi2/f1quP0p/YPB7kSV/BPtAqaCWBgra/gqvQ5k2kcURtd65AyYqSupgo/n7Et2laFP+
rPxj41V/hAf8f1q567HRS0CvU5eQjiF7yYK/2/gfPmwucw+2u7PppYseZRjQooSv+nB6fV4u
56vfxx9cjHW1WVKhKQvViCPb97e/l12OaSUmkwJENyqsuKE9d7at9GXB6/H94fnib1cbKgWU
mZwicKWObDi2TwbB9uFPUCe5YEDLGCphFIitDjsgUCSyQpD8XRQHRZjKFOh7pvB3ak7Vsrp+
XitTKNwYdpSrsEg33C86/VklufXTtSpqgtAqdvUWxPeaZmAg9W1kPQyTDexYi5C5j1dfskNv
Y9EWr/p9kUr/o4dDP6420d4rxCRydG1XdFT6ahXGkGdhQpXKwku3Um/wAjegR1uLbQRTqBZt
N4RnyaW3ZavXTqSH3znowlxZlVVTgNQtZUWs3Y3UI1vE5DSy8BtQHELpE7enAsVSVzW1rJPE
KyzYHjYd7tx3tTsAx+YLSUSBxKe3XMXQLHf4RlxgTLXUkHpNZ4H1WpkfdjslU6oKmZSCnunY
MFEWUFoyU21nFmV0Fzp3ZJRp4+2zuoAqOwqD+ok+bhEYqnv0ax7oNiKLTsvAGqFDeXP1MFOx
Nexhk5EQZTKN6OgOtzuzr3Rd7UKc/B7XhX1YmZkKpX5rFRzkrGRsElrb8rr2yh1N3iJaIdea
CukiTta6lKPxOzY8sE5y6E3lCMyVkeFQ55jODndyouYMYvxc0aKNO5x3Ywez7RNBMwd6uHPl
W7patpmpG+G1Ckt8FzoYwmQdBkHoSrspvG2CDuSNgogZTDtlRZ6cJFEKUoJpxomUn7kArtPD
zIYWbkjI1MLKXiNrz79CF963ehDSXpcMMBidfW5llFU7R19rNhBwax4nNgeNleke6jeqVDGe
draisa+4YYDePkecnSXu/GHyctYLZKtaOHCGqYME+TUkgFzXjo7vatmc7e741F/kJ1//Kylo
g/wKP2sjVwJ3o3Vt8uHh+PfX+7fjB4tRX+rKxlXx6iS4ESc5Bi7oLT1oT3u+6shVSItzpT0Q
MW9Pr7CQ2+UWGeK0DuJb3HV609Icx98t6Y6+I+nQzo4UtfI4SqLqz3Enk9bZodzwbUlY3WTF
lVu1TOUeBk9kJuL3VP7mX6KwGecpb+jFheZoxhZCDfHSdlGDbXxWU7vitF1OBbaJYQ/lStGW
16hXBCjA1ZrdwKZER33588M/x5en49d/Pb98/mClSiKMV8wWeUNr+wpKXIexbMZ2sSYgnq9o
h/hNkIp2l1tFhExQyjrIbeUFGAL2jQF0ldUVAfaXBFxcMwHkbPelINXopnE5pfTLyElo+8RJ
PNOC20J5Zgd9PSMfqXQo8VPWHL+tayw2BIwH035Zr9OCRtDVv5stXS8MhisfbPvTlNbR0PjY
BgS+CTNpror13Mqp7dIoVZ8e4sEpGsOWVr5iPBj0kBdVU2Aw2F7DDPMdP7XTgBh/BnUJn5Y0
1Bt+xLJHDVgdjk04S+Ph4V3/aSb0A+e5Cb2rJr/B/fNOkOrchxwEKGSowtQnCEwemHWYrKS+
cMGzDmF5p6lD9SiTtdGvBcFu6Czw+FZcbs3t6nqujDq+BpqzpGctq5xlqH6KxApzdbYm2MtM
GpfsR79W28dnSG7P35oZ9QbBKJfDFOq2iFGW1NOYoEwGKcO5DdVguRgsh/qmE5TBGlA/VIIy
G6QM1pr6xhaU1QBlNR1Ksxps0dV06HtYyAleg0vxPVGZ4eigdhcswXgyWD6QRFN7pR9F7vzH
bnjihqdueKDucze8cMOXbng1UO+BqowH6jIWlbnKomVTOLCaY4nn4wbMS23YD2GL7rvwtApr
6rimoxQZqDzOvG6LKI5duW290I0XIX3w3sIR1IqFu+sIaR1VA9/mrFJVF1dRueMEdarfIXj1
T39YBvFp5GtjtW6nY6AmxbB7cXSndcbWEN2x14my5uaaHu0zAyDt2vz46f0Fnaw8f0M/T+Qg
ny9F+KspwusarcCFYMegqRHo7WmFbEWUbkta7bVJ7qhgVeBOIdBl9bsYfXPb4rQOTbBrMijP
E8eVnZ4QJGGpXr5WRUQXRnt16ZLgRktpQLssu3LkuXGVYzYtpBFQfOh8YN7EQj3v0kXwM43W
OMwGM20OGxojsyPnXkUUFGOMeyAfGZcJRmXK8aCn8TBG3GI+ny5a8g4Nq3deEYQptC3eROMt
pNKRfI/dnFhMZ0jNBjJAdfQcD7ZOmXv0hh60Ybzn1hbQ5GtxK+SrlHiCK4OTO8m6ZT788frX
6emP99fjy+Pzw/H3L8ev38gbjq4ZYcLAhD44GthQmnWWVRiDydUJLY9Rm89xhCqU0BkOb+/L
O12LR1mNwMRDe3Q0x6vD/qbBYi6jAIag0mRh2kG+q3OsE5gk9OBwMl/Y7AnrWY6jeW+6rZ2f
qOgwoGGvVbEO5BxenodpoK0qYlc7VFmS3WaDBPRZpGwl8grkRlXc/jkZzZZnmesgqhq0exqP
JrMhziwBpt6+Ks7QKcZwLbodRmcmElYVu6jqUsAXezB2XZm1JLEVcdPJad4gn9yxuRmMRZWr
9QWjvoALXZzYQswFiKRA98Cc910z5tZLPNcI8TboiSByiUq1085uUpR5PyE3oVfERIIp0yNF
xBtdkKGqWupKii5bA2ydOZvzMHIgkaIGeDkD6zZPSqS5sJLroN7myEX0ytskCXG5E8tlz0KW
2YINyp6l9SJk82D3NXW4iQazVzOKEGhnwg8YNV6JcyP3iyYKDjDvKBV7qKjjkOkMSEAHaHh+
7WotIKfbjkOmLKPtz1K3xhNdFh9Oj/e/P/WHbZRJTbdyp0KXs4IkA0jQn5SnZvaH1y/3Y1aS
OuyFvTGoq7e88YoQmt9FgKlZeFEZChQtAc6x6yd851lQz4vwODsqkhuvwOVhW/6E9yo8YGif
nzOqUGG/lKWu4zlOyAuonDg82IHY6qfarq5SM8tcIBnBDbIOpEiWBuwCHtOuY1iw0GTKnbWa
J4f5aMVhRFr95Pj26Y9/jj9e//iOIAy4f9FHpuzLTMVAV6zck2l42gMTqOl1qOWeUmYcLGa9
AkUUP7ltNGQmJ/r7hP1o8OSr2ZR1zYK17zECd1V4ZklX52OlSBgETtzRaAgPN9rx34+s0dp5
5dDuuplq82A9nfLbYtXr+6/xtovlr3EHnu+QFbicfcCoLA/P//P024/7x/vfvj7fP3w7Pf32
ev/3EThPD7+dnt6On3HX9tvr8evp6f37b6+P95/++e3t+fH5x/Nv99++3YMK/PLbX9/+/qC3
eVfqNuHiy/3Lw1E5DO23e/pF0BH4f1ycnk4YPeD0v/c8cgwOQ9RUUaXTyyQlKAtbWPm6b6Q7
n5YDX65xhv6BkLvwljxc9y6MltzEtoUfYGirOwJ61lnepjIskcaSMPHzW4keaAQ3DeXXEoFJ
GyxAcPnZXpKqbq8A6VCDxwDG5EhVMmGdLS61IUYtWNtTvvz49vZ88en55Xjx/HKhNzp9b2lm
tHr28kjmYeCJjcNCQ41EOtBmLa/8KN9RfVgQ7CTicL0HbdaCStYeczJ2SrBV8cGaeEOVv8pz
m/uKvk5rc8DLY5s18VJv68jX4HYC7rqTc3fDQbyLMFzbzXiyTOrYSp7WsRu0i8/Vvxaz+scx
EpR1kW/h3K9qOw6ixM4BnZM1ZsN+oFHaDD1Mt1HavXjM3//6evr0O0j+i09quH9+uf/25Yc1
yovSmiZNYA+10LerHvrBzgEWQenZrVIX+3Ayn49XZ0jms7QTi/e3L+gC/NP92/HhInxSH4Ge
1P/n9Pblwnt9ff50UqTg/u3e+irfT+z2c2D+Drbv3mQEqtItD6bRTeBtVI5p5BBBgD/KNGpg
BzixuzG8jvaOFtp5INX37ZeuVRAxPHJ5tb9j7Vtd4W/W9ndU9kzwq9JRtp02Lm4sLHOUkWNl
JHhwFALK0k1BXXW202g32Mw9yd2ShO7tDzbdCyIvrWq7g9E+s2vp3f3rl6GGTjz743YIyuY/
uJphr5O3bu+Pr292CYU/ndgpNSzdOFOiG4XuiF0C7HBQS4WEQfm+Cid2p2rc7kODOwUNlF+N
R0G0GaYM1W7rrNzgsOg6HarR0Gu3VtgHLmxuLyERzDnlZc7ugCIJXPMbYebbsYMnc7tJAJ5O
bG6z57VBGOUldY/UkyD3YeJ8PDmb0lXWfOwQTDvPkUXiwPAJ05o6RWyXrW0xXtkZ3+Su4lSv
N2pENGnUjXWti52+fWHv5zv5aq/agDXUnwaBSbaCmNbryB7fsM+3hw6oujebyDl7NMGKmSvp
A+PU95IwjiPHsmgIP0toVhmQfb/OORlmxTso95cgzZ6HCj1felk5BAWi55IxL2M9Nm3CIBxK
s3GrXVc7786hgJdeXHqOmdku/IOEoeJL5pqiA4ucucrkuFrThjPUPGeaibAMZ5PYWBXaI666
yZxD3OBD46IlD5TOyc30xrsd5GEfqmXA8+M3jObB9szdcFBGuLbWQu3GDbac2Vo6Wp3baWc7
eyEw5uU6MMb908Pz40X6/vjX8aUN8OqqnpeWUePnRWqLyKBY44l/WrspTuVCU1x7PUXxK3t7
hASrhI9RVYXovbXIcrsncOPUeLktSVtC41ymO2q3fx3kcLVHR3TulMU9XquB4cJh3DHQrfvX
018v9y8/Ll6e399OTw59DsMwupYQhbtkv3kCtg91BMcBtYjQWjfN53h+UoqWNc4MNOlsGQOp
RRHD+y5OPl/U+VxcYhzxTn0r1P3oeHy2qoNaIMvqXDXP5vDTrR4yDahRuxt72oV7PLW7idLU
cWaB1LJOlyAbbNFFiZaVo2QpXStkTzyTPvcCblNt09QUOUcvHQMM6ejQ2fe8ZGi54Dymt9HD
c1jaXceYPTXlf8ob5J43USnc9Y/87OCHjrMcpBrnsk6hjW07t/euqrtVSJf2IMc5IDTHQKNq
auVWelryUItrauTYQfZU1yENy3kymrlz9337mM7gTWALa9VK+dlU+udQyrw8Ux6O6I0tupF+
7dlKlsGbYLdczb8PNAEy+NMDjRQjqYvJMLHNe2/veVnu5+iQ/wDZZ/qst4/qRGA9bxpVLAqt
RWr8NJ3PBz408UCQD8yKzK/CLK0Og0Wbmt1F7ulxPSDqrtEp/NChccewcxxDGlqYqpNcbcHd
XQi5mdqCnHdIA0l2nuMiSdbvRhm/xGH6J+xwnUxZMihRomRbhb5bq0K6ccI3JDjsGEO0V3Zh
XFIvbgZoohzfLUTKK5N7shnGioY7JqDxIuBMqz2HuKe3twlR9g4IGuYThVCUf/4ydE/flmif
LXTUa/dKoGhDQ1YRd3nhrpGXxNk28jE4xc/olvU/u11WztGdxLxex4anrNeDbFWeMJ6uNuqi
1w/RFBDfLYeWk7n8yi+X+BZ8j1TMw3B0WbR5SxxTXrYWS858L7VPY0jcpzL37nmon3qp9/n9
i2qtwmOI9b/Vwf7rxd/oAPv0+UkHyPv05fjpn9PTZ+JNsbN2UOV8+ASJX//AFMDW/HP88a9v
x8feRlE9fxs2YbDp5Z8fZGp9F08a1UpvcWj7v9loRQ0AtQ3ETytzxizC4lC6kfI6A7XuHbf8
QoO2Wa6jFCulPBpt/uwi1A/tpvS9LL2vbZFmDUoQ7GG5Da/wLrWGFSmEMUCtbNroNmVVpD6a
vxYqGAIdXJQFJO4ANcXIPVVErR5b0iZKA7S+QY/b1ADEz4qAhWoo0ItAWidrqCP9RhyvNI5T
F5LHj6TTxpYkYIxzZglQteHBJ4N+kh/8nbZ3K8KN4EBPJBs8pDO+RyO+cPogRSPqvxygMTt9
A4lhHdBDDau6YSs7Xi78YD97t+yPAgcxFa5vl3wFJpTZwIqrWLziRpiSCQ7oJeca7POzJr5v
98mzDdi82RcsPjnWN/ciP/o+ToMsoV/ckdxvyRHVDhI4jt4O8IgiZpLiTu+LBcqevzPUlTN7
D09R50N45HbWz/34XcEu/sMdwvK3ugiSmIq6kNu8kbeYWaBHbfJ7rNrB7LMIJaw3dr5r/6OF
8cHaf1CzZY+rCWENhImTEt9RmxFCoO4oGH82gJPPb+WD45kAqEJBU2ZxlvBYZT2KbzmW7gRY
4BAJUo0Xw8kobe0TlbSCla0MUQb1DD3WXNGQOQRfJ054U9LADcoZHjFrqsICzXQ4fPCKwrvV
co9qQmXmgwYc7WEXgAw9CUVllLGgBRrC18INk8iIM6OgVDXLFkFU7Lf0dYiiIQGfg+DZpJTi
SMMnIk3VLGZskQmUuaofe8r7wU4dw7oEvLJsRuY67V7s8FxQyeZOHsubKKviNWfz1Ufpu+fj
3/fvX98w8PLb6fP78/vrxaO2Drt/Od6DYvC/x/9DzkqVre9d2CTrW5hH/cOIjlDipakmUsFP
yegnBh/gbwfkO8sqcjuB5UzewbUWYHvHoF3ia/8/l/T79WER078Z3FBPE+U21lORjMUsSepG
vpTR7kYdpuN+XqPn1ybbbJRJH6M0BRtzwTVVIuJszX85lts05s+j46JuhDtDP77Dl1LkA4pr
PPskRSV5xJ3w2J8RRAljgR+bgAxUjNeCHubLihry1j7616q4nqqOcFs5tw9KIhVbdIvvOZIw
2wR09tI0yqt3Qx+lbTK8OpNeEhCVTMvvSwuhQk5Bi+/jsYAuv49nAsLQTLEjQw90x9SBo0+g
ZvbdUdhIQOPR97FMjce4dk0BHU++TyYCBok5XnynOluJ8TtiKnxKjIWUkZHQyRsV74UZNAJg
QgjY3LXxn7qJ63InX7FLpsTHPb9gUHPjxovp6yeEgjCnNtIlyE42ZdAGmL4JzdYfvS05VtGD
j86cbiNk7WO47W67tVTot5fT09s/F/eQ8uHx+PrZfsCp9khXDffNZkB0J8CEhfZyg8+iYny2
1tlVXg5yXNfon3PWd4beaFs5dBzKEN2UH6D/DTKXb1MviWwPE7fJGt8ANGFRAAOd/Eouwn+w
OVtnpX4BYlpxsGW6u9rT1+Pvb6dHs718VayfNP5it6M5ZktqtDrgbtc3BdRKudT9czleTWgX
57DqY+gh6swG33Loo0D6aGkX4tsydCcL44sKQSP8tQNodL+YeJXP34UxiqoIujG/FUO2dePP
popx861Wce0CAwMP5DVtyl9uLNW06pr59KkdsMHxr/fPn9FgO3p6fXt5fzw+vdE4Ex6ePZW3
JY3sTMDOWFy3/58gfVxcOpKyOwcTZbnEZ8sp7GM/fBAfT91teko5Qy1xG5Blxf7VZuvLcD+K
KOx1e0x5IWPPKwhNzQ2zLH3Yjzfj0egDY0M3JnpeVQWVNop4xaoYrM80HVKvwlsVlpqngT+r
KK3RpV8Fe/ciy3eR36tUvdBcl57x2o4aDxuxiiZ+igprbJ3VaVBKFD2MUsUdJpzOkcjIXxpk
vJv1Uzw58k1h9PlDlxkRoijTYEsQpqVj9iBVKGOC0EoP67Gmyji7YZevCsuzqMy4i26ON2lm
nOYPctyFReaqUsPOajReZCAZvIYfIXTnRZVwwat+N8Jxrwatey+dv/Y1PQQ7ji84fcP2V5ym
gqUM5szf53MaRsJFuTtE184ju5guA1yib7tJVsb1umWlL2IRFnYiSuyYYQr6TAyCWJb2Mxz1
IKU06VPc8WI0Gg1wysMGRuwe52ysMdLxoE/zpvQ9aybodaYumdvhEpbLwJDwlbdYPXVK+sas
RZQ9MlfaOlKxdoD5dhN79JlgJ64MC+xEa8+SAQMwfC1GHOCP7wyo3POrEHhFkRVt0E3RpmYp
xc23e4nxmJwUBPx6LlR8dfdmqK0FSS+9eW7nuJqsrsx9Wrd51QR9z+bYuGqy3imOOWjVU9+n
eEJkW9JVDJ1dpFQAcwAATBfZ87fX3y7i50//vH/TGsfu/ukz1W1B/vm4ombseIHBxlvCmBPV
Lq6u+oUNj7BrlF4VdCR7lp9tqkFi5yKCsqkSfoVHVg0dZoiicAxt6BDpOPTGHr8DOiXJnTzn
KkzYBisseboKk/eMWEKzw1jGoC9cOUbOzTVopKCXBjTGjBoiOus/Wbiqc/2uXdWAAvrwjlqn
Y53WokbuHxTIoyEprBXC/QtCR958lGJ7X4Vhrhdmfe2E7156BeS/Xr+dnvAtDHzC4/vb8fsR
/ji+ffrXv/71331FtYcAzHKrtoHyeCAvsr0jzomGC+9GZ5BCK4pX+njYU3nWOoqniHUVHkJr
nSzhW7hXXCP93Ow3N5oCy1h2w13RmJJuSua6U6PaTIuLCe15Ov+TvdFtmYHgGEvGUUWV4Tax
jMMwdxWELaoMPI1SUYoGghmBh0hiHey/zLUn/w86uRvjylckSDWxKCkhKlzEqj0btE9Tp2ia
DeNV3+xYS7BWOgZgUOxgfVb7YCJGtQ/Ri4f7t/sLVI4/4Z0qjfymGy6yta/cBdJjSI20iyEN
3KSUnkbplKAmFnUbmUdM9YG68fz9IjReM8r2y0Bzc+rpen74tTVlQNPjH+MeBMiHItcBDyfA
NV5t2rtlZTJmKXlfIxRe9zaPXZPwjxLz7tps0ot2e87IOpIS7FDwspbeqULVdiDOY62cKTfQ
KtI5mRKApv5tRX0UKSPnfpw6nLxmuf4s5i4KGnpTp/o44jx1C7vBnZunPQWSXpQdxOYmqnZ4
vGup0g42E9AHz7wku2FLlKKv3m/TbbFiwYAjqoeRE7ZYqaW+b7TjIQ76JjedNRl96suVMZb4
TF0Vn4tkdVYoY0iEe3xNgfxsDcAOxoFQwlf7dhuTrIxfVO4oNoedVgKztbh2f6tVXrtJlAUZ
RsfRt/hi1DfUqbmV9eBg+sk4GhpCPx89vz5wuiqAgEEjIe6dDFcZUSnSsKrnqCeO4hp0w42V
RGsu1iy5gSlroRhvVUaQM5NXD93SGn1lChuTXWYPy5bQ7WD4EFnD2oT+ZfSHWy6TWtxLYWHw
lD8RlSB0+f7DgAfKttCKf3cF+axDq60YjGtMKj+7didc5xsLa7tb4sM5mOIxjlcRBXZjD8iQ
djLwq1w0hqqKaLtla6fOSM9us7HkNDUlXZZLdG735EeZsRer+2DsJDKN/WzfdZ2cOO1Isk5p
WkLlweKYi+OjXkD9CofaEthjlX6TO5Nu5IulmUw4ddEgyOVtCpNb1wBkmKDSYeYgo1YB3d9k
Oz8aT1czdVVrTgr6iCQe+nV3jXpyLrHHc5vI+Khm0T2U70rDQWRFZlGURvR9uXBpRFwJtYWx
9g5k7mPqktqwLBeNuVdRIpo6+KOpBvIK1tuBBFhMcwioTwD0K5ZvKxHpy2g+xEdjkNXrWJ6h
mp1ZvFa3fLSl8EJcbAY1yA/S1ErdjyKrjaLMDKDRYTmiHUwIoTtAScdRq3/O8wyEQzIanro3
w205fVOQWzEbNbfQRYyenkSOKYz9bC5CqF6ZqxjGuNUyJXRjv05vMOZh0WTKuIk4pjW4vg9T
UkqasBtNlw9Wer9ZHV/fcIeFu37/+d/Hl/vPR+KNFitFJqmqo3Xi3IdflqzhQU1JQXMe5rGY
9HnysxO/bKNk/nB+pLiw0kHoz3J1+oWsVC97B2PDelFcxtS0AhF99C/24IqQeFdh69VXkKKs
29Nwwga3yoN1cdysmVSpo64w93y7/E5GXqE7pX7I60PQEjQKWLD0jKXGe5wbf6kr1aJWAZLY
PVUBK7XSO/Uxin7lSn0/XgVV4pyy+vgKl+8SJIXLpTEyoP/dXejpEy5KkIn6Q1i16pQ0GrKT
b93vzGA+D/MVyl7NordUalDXnXi0Uomatg2XYK5IBkrQJzWLGT9TaYnEZ9Zg/qq9duEBV4Zh
BmNNoY2fnC6mDVepXXvx1FdAqDKXuZYiGyvzRwYaew+ZFcAwp2P3YqCvMuvoDFVbDg7TUe/c
gKIwzFGggbDyQH2mPYFlmBoF3jBR27UMNVV8lVhNYq4HhpKoowafve/WLZlbTY4PCHaZumrb
02KUnTy0fK/zDhXWurIUOZtookQ1xN/OdUQ/caAE0b3WOs9HoPJbzR2d6zGY0IAyCuKXU7Ig
dFMH20DXKamRTvswV5YiPFdpiNTWC49N6draFsJRALj2sLuFGbdvRSg9zTq7oFte/fj7DnUc
qiJeo3O3zFfCGxeT/weUWWae4KcEAA==

--tsOsTdHNUZQcU9Ye--


From xen-devel-bounces@lists.xenproject.org Sat May 23 14:19:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 14:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcUzZ-0003dO-TA; Sat, 23 May 2020 14:19:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcUzY-0003dJ-Sw
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 14:19:12 +0000
X-Inumbo-ID: 5bfc2634-9d00-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bfc2634-9d00-11ea-b9cf-bc764e2007e4;
 Sat, 23 May 2020 14:19:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BjYobLojXN+q3WU2h60RqBJRxIsJqa7+eJyM+hsmKK0=; b=3kQreXuP3YSKEX0uQTyzGt5AR
 E2Ez+WSUbM6ZkXhzDGHya0a2fRdaVxU0fK+R291Vm9AiDeO9APHt1JHhCCvq03MxJ0UaQ3p7s6Ckz
 sHMm+wZq17nMjUOkIeK9Olmedsr4te4Fg2Ffsv4MNW8Mw3XuZZNyf4wLnUk71ENSRwUmM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcUzR-00048p-5d; Sat, 23 May 2020 14:19:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcUzQ-00073E-S9; Sat, 23 May 2020 14:19:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcUzQ-0001Qp-RS; Sat, 23 May 2020 14:19:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150337-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150337: regressions - FAIL
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-vhd:leak-check/check:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=444565650a5fe9c63ddf153e6198e31705dedeb2
X-Osstest-Versions-That: linux=2ea1940b84e55420a9e8feddcafd173edfe4df11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 14:19:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150337 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150337/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl-vhd      18 leak-check/check         fail REGR. vs. 150281

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150281
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150281
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150281
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150281
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                444565650a5fe9c63ddf153e6198e31705dedeb2
baseline version:
 linux                2ea1940b84e55420a9e8feddcafd173edfe4df11

Last test of basis   150281  2020-05-20 19:40:19 Z    2 days
Failing since        150296  2020-05-21 08:13:02 Z    2 days    5 attempts
Testing same since   150337  2020-05-23 02:17:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Andreas Färber <afaerber@suse.de>
  Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
  Bin Lu <Bin.Lu@arm.com>
  Brent Lu <brent.lu@intel.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Christian Lachner <gladiac@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dave Jiang <dave.jiang@intel.com>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Jason Wang <jasowang@redhat.com>
  Jens Axboe <axboe@kernel.dk>
  Johannes Berg <johannes.berg@intel.com>
  John Johansen <john.johansen@canonical.com>
  Kamal Dasu <kdasu.kdev@gmail.com>
  Kefeng Wang <wangkefeng.wang@huawei.com>
  Keno Fischer <keno@juliacomputing.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Namjae Jeon <namjae.jeon@samsung.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pavel Begunkov <asml.silence@gmail.com>
  PeiSen Hou <pshou@realtek.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Rafal Hibner <rafal.hibner@secom.com.pl>
  Rafał Hibner <rafal.hibner@secom.com.pl>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Weinberger <richard@nod.at>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Bahling <sbahling@suse.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Will Deacon <will@kernel.org>
  Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  YueHaibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1250 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 23 16:30:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 16:30:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcX1t-0007Sw-3Z; Sat, 23 May 2020 16:29:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o9iM=7F=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jcX1s-0007Sp-1I
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 16:29:44 +0000
X-Inumbo-ID: 9b341980-9d12-11ea-ae69-bc764e2007e4
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b341980-9d12-11ea-ae69-bc764e2007e4;
 Sat, 23 May 2020 16:29:43 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id w15so8231878lfe.11
 for <xen-devel@lists.xenproject.org>; Sat, 23 May 2020 09:29:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=G4UfA6yGjbjJo0RoiZsoNurp5VKfjqBLnL5LS80IK8w=;
 b=B84jTBIXtn1mi6hYEl67fMjv7637uBLxPb2bLpJMR76guWz0ZbRyZiJOakEUKPfaSi
 5Jj6gVJ8UIGPdMeJ5kwrBDPZ9jat/efaDkjEzZlNp9l8pMYhAbN7lQab7Aeqk0rFVSZt
 Cm6guUIUKseEVzqF7LX8OH5KdV18AOkSArMJJiS1PJqBG6CMG8zFQ7uyeB8ZBVbZ21Pz
 2K3yXhGk+xVH68pg4RsFSmm3Gj5q5nO9jxZjYTIg22Q4vSbVsyNTLL1Lz8OAHLZviSme
 KepSsqZnNkIkVGQvrOPEw0DHIAIU8WPfuU7yAv+eyaI0ecFBhm/iCeMnDmdy/IZkQpgu
 Qidw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=G4UfA6yGjbjJo0RoiZsoNurp5VKfjqBLnL5LS80IK8w=;
 b=pFva1vpBkO0xLy2kFSNBqvRq9Uveknc10kD2ykNQHUEBaq665/hgl0e0CvBJJVNiWd
 g6GxP2PH8vqf3s8rerSLpJx4D46DUwpSCE7gad80qnqGF7S/h4iL72vKs0GcEezyNlaN
 63Zg6NcJHMmQJwK1i5PobcJMbe91uAE/mu5kOAIMCbmiSHsA/feoqtBdBvox0oGubndk
 DAlCc4Srrt/u9Tbo0RA4ogUTejsV2tsDmzShcIPSp+sM5l50Kw95pf2+mWtjeB3KsfKa
 vK1B6xgVq6jNJyni0gm3xlgLcoeE5ydZ2M6O5enwCaFCQtLIztuWuYURz5qf9OePPvCh
 Touw==
X-Gm-Message-State: AOAM5329tIyRMK9LUCsHl0fB1qCTaFlGeZnoCvoK2qk51Fc9bJ8268BF
 UYHyJHYGifcmcpwnvWukvOYcOcTZURTU3Soh8/A=
X-Google-Smtp-Source: ABdhPJz5yJkwNNWXIkmUF+JCqXHgnr3WEv3U6EEQz95DGysLbWk7fJXgR8X2OAiPfoMporz/0y78ClIoxlhn3f4TfcY=
X-Received: by 2002:a19:150:: with SMTP id 77mr10245454lfb.71.1590251382248;
 Sat, 23 May 2020 09:29:42 -0700 (PDT)
MIME-Version: 1.0
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-2-george.dunlap@citrix.com>
In-Reply-To: <20200522161240.3748320-2-george.dunlap@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Sat, 23 May 2020 12:29:31 -0400
Message-ID: <CAEBZRSe6qoZB1om8wMEeOy_4TA0W=MQ3cr8QMAr2HqLDmXAQig@mail.gmail.com>
Subject: Re: [PATCH 1/5] golang: Add a minimum go version to go.mod
To: George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> `go build` wants to add the current go version to go.mod as the
> minimum every time we run `make` in the directory.  Add 1.11 (the
> earliest Go version that supports modules) there to make it happy.
>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>
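For readers unfamiliar with the mechanism being reviewed: the change amounts to a `go` directive in go.mod, which pins the minimum toolchain version so `go build` stops rewriting the file. A minimal sketch (the module path below is a placeholder, not the actual path from the Xen tree; only the `go 1.11` line corresponds to what the patch describes):

```
module example.org/xenlight  // placeholder module path, for illustration only

go 1.11
```

Go 1.11 is the earliest release with module support, so it is the lowest version the directive can meaningfully name.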


From xen-devel-bounces@lists.xenproject.org Sat May 23 16:33:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 16:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcX54-0008MN-Is; Sat, 23 May 2020 16:33:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o9iM=7F=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jcX53-0008MH-5y
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 16:33:01 +0000
X-Inumbo-ID: 10d4e714-9d13-11ea-b9cf-bc764e2007e4
Received: from mail-lj1-x244.google.com (unknown [2a00:1450:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10d4e714-9d13-11ea-b9cf-bc764e2007e4;
 Sat, 23 May 2020 16:33:00 +0000 (UTC)
Received: by mail-lj1-x244.google.com with SMTP id b6so16339140ljj.1
 for <xen-devel@lists.xenproject.org>; Sat, 23 May 2020 09:33:00 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=NqhY+ZxbtADDYEqvt/Yw7Q+gVgjIuXnGiqU33hyjXJY=;
 b=AEmCgZxwQiOVVppAGQVY05jnejhSrey8pQ30yEfoVNh1fKU2bmjfr+Ql5d4doDEzB3
 sMpnKhiBMfWEV4EFdeT7dVqA2jlptqoibj3I7sbHF+0HqaYWOa1FCdLHrk2BGIkX+/pI
 Lv4EvxwvAA0/1GGCoK9Srmg3D17GKIPwsLdv5AZLZPmIhjdxRh2qPZfyCEBb4mdSiwIe
 1EmxJ3UHybEln9bwHKDIDaQYdR9ZZrGmuEo4QcuAcqyLQLjrvIbKyGCxToo2ndEL67P6
 dKcYeufTjNZgqeJICmCJ73OJOpKxb5lcJ6hfy0j0uCMB1ikQoDBCaMJN+tP0bc1iOhIK
 RFeA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=NqhY+ZxbtADDYEqvt/Yw7Q+gVgjIuXnGiqU33hyjXJY=;
 b=Jnm+7O8I76aH1hEcPueEBso8T9duA4hjJfRPYmXju6d23Z4dEUE853dM+kXBszg9Ai
 DYRqO6LVcUuwPBo9QEPddg8nqLYjxVJoLA14i3ednvod5bYLAujfUKgP2rVYDx/2I4ov
 cX9EvgmtAmvAaGP8Ur3j+WimgysDjxq4vbip9w97NJl91O5FwlObYu7gXpZjBHePuMye
 BQisZ0Tz2gAmeDVcJobRc8ktsynO8HXa61G33t08jDvfPXGaoHRi4WoRVJJY8hqHuzgK
 DSEphut9/7vty8eNs78Qj0cGubDujiz0SnNqNb8pD47E/rkz+UsUTMOtltX6xSh6PnTn
 gkEg==
X-Gm-Message-State: AOAM533JfoMSxrwU9+MCutyLVYo42XVHXMW2S7IXduXX1fc/yMG56jKc
 7NuraICu+VApj5aCPlNDRrk9ZBV1WxKocxeBukw=
X-Google-Smtp-Source: ABdhPJwDRIon/jS0FLkKJumGGLKDVPgCdISoAaXFR+SvrXX7dy77CM6RHvi78rMdsOoNCMYpqZVq3S7JD5oPJ0y5bFA=
X-Received: by 2002:a2e:96cd:: with SMTP id d13mr9390790ljj.219.1590251579687; 
 Sat, 23 May 2020 09:32:59 -0700 (PDT)
MIME-Version: 1.0
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-3-george.dunlap@citrix.com>
In-Reply-To: <20200522161240.3748320-3-george.dunlap@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Sat, 23 May 2020 12:32:48 -0400
Message-ID: <CAEBZRSdiCBWf-axqqYy156b8=2191kRgFuFKWTXRm_YBzZH+WA@mail.gmail.com>
Subject: Re: [PATCH 2/5] golang: Add a variable for the libxl source directory
To: George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> ...rather than duplicating the path in several places.
>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>
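The refactoring under review is the usual Make idiom of naming a path once and referencing the variable thereafter. A hypothetical sketch of the pattern (variable name and paths invented for illustration, not taken from the patch itself):

```make
# Define the libxl source location in one place...
LIBXL_SRC_DIR := $(XEN_ROOT)/tools/libxl

# ...then refer to the variable instead of repeating the path.
CGO_CFLAGS += -I$(LIBXL_SRC_DIR)
$(XEN_GO_PKG): $(LIBXL_SRC_DIR)/libxl_types.idl
```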


From xen-devel-bounces@lists.xenproject.org Sat May 23 16:40:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 16:40:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcXC8-0000qP-BE; Sat, 23 May 2020 16:40:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o9iM=7F=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jcXC7-0000qK-2t
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 16:40:19 +0000
X-Inumbo-ID: 15ba66a4-9d14-11ea-b9cf-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15ba66a4-9d14-11ea-b9cf-bc764e2007e4;
 Sat, 23 May 2020 16:40:18 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id c11so14166867ljn.2
 for <xen-devel@lists.xenproject.org>; Sat, 23 May 2020 09:40:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=S0SOyc3LUuwBwXHuqtQ2x6cooeUtHm7VNacfs/Maa3Y=;
 b=gl3Nl6ssvtIxTPPjGJrOLVf2t1xQzGYlIQ7eo8C12NP8MLj7RWG2p3Wfl5sEloJbHg
 //xEI7JSxrcD5IJsjkexYOKpAdOF1EWyyjQIKLGUh9rTjiyX9mkSz4/zpFp6VZUuOygd
 emfZMvi7xlxObeD2KZu7HIU+dBOWQalvCqW3f3+mAtcyHF2SqZI/Dei3ZYJXHYLOMzAp
 VySCt4r2+OHyVRM3zExT/9lvim6DtNDNZ9hdSkVT9FmeN59OFKopbdAHtvUdvxHR5Lgv
 xRsOLt2NZqk1zYrlLvtZeoxDTyGQmwmlCqrU/h0z6j0Nwg2zDQdSyu2UXXJ3mpL5fPuM
 d10Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=S0SOyc3LUuwBwXHuqtQ2x6cooeUtHm7VNacfs/Maa3Y=;
 b=RqVWTBncqeD5LudBKJ6gpQ95R7zSppLjzarB2ZtqZEhfkGa7VJDmTkaHOvRcv5JRMb
 88W2oBN7gS9k/1iMUhyTMVJVuX5NSyaujY4WJ5coc6tGLqj3+H4U9W6kIoTmAjGPEGmh
 ATkBdav2PiZM7/rAKNcL3iX1KTLuvRYIBWJ2jq0r3RPKG1C3AhWbcoEl93uf3jhTbe7t
 4kdO/20M9QQ3TYhWLrSMIyE4ti18GJS1ftfw9F0YIhlXTqACIf1bXTXT7amehARl83bZ
 pnpGFb0kEwfpw40TLtYSedg5tvnRgXMSncLiYvGqr6C889OdTHeBa3g2raK2wZksf0gS
 fjHw==
X-Gm-Message-State: AOAM533rmvRWlbxQfTB82l+VHJsIHDImv5xCYUubljB7J48tSO+Ozsqu
 7C3gZ04GOwxXRFampkEm6DsthHAOtLqwiF6dnoI=
X-Google-Smtp-Source: ABdhPJxKYpHH4rv5m3lZYg3OgdevtCHi/6a+G5R0tBgusZ1Vo76R025KyzHpupJzgwy8cbdi8zDzwG51iqDGb+y72Nc=
X-Received: by 2002:a05:651c:1208:: with SMTP id
 i8mr6567289lja.103.1590252017364; 
 Sat, 23 May 2020 09:40:17 -0700 (PDT)
MIME-Version: 1.0
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-5-george.dunlap@citrix.com>
In-Reply-To: <20200522161240.3748320-5-george.dunlap@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Sat, 23 May 2020 12:40:06 -0400
Message-ID: <CAEBZRSfF8KAnzz5LW8GhcuJu=2rex3d6bvgz=a7-kLMp-itjqQ@mail.gmail.com>
Subject: Re: [PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather than
 open-coding
To: George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>


From xen-devel-bounces@lists.xenproject.org Sat May 23 16:48:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 16:48:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcXK9-0001GC-9p; Sat, 23 May 2020 16:48:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o9iM=7F=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jcXK8-0001G5-3A
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 16:48:36 +0000
X-Inumbo-ID: 3e05134c-9d15-11ea-b9cf-bc764e2007e4
Received: from mail-lf1-x143.google.com (unknown [2a00:1450:4864:20::143])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e05134c-9d15-11ea-b9cf-bc764e2007e4;
 Sat, 23 May 2020 16:48:35 +0000 (UTC)
Received: by mail-lf1-x143.google.com with SMTP id h188so8291628lfd.7
 for <xen-devel@lists.xenproject.org>; Sat, 23 May 2020 09:48:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=dFe3xhhoFWohY7kyQoEsaym4h3cRs1TlV6VivlpKLzg=;
 b=N9ovKB1E+OxhCKcISs04gTOvKiXi8P5l8JpPVzKgdxWxuMxvIMPMe8ZliOGMpoRBQd
 yNMna47lrw1x4SlLZ41Q2Qs7kdRhDq0ToHGfgw3f/R/UPABS+pgFRtPrg1druccqU8dU
 frBpL/U/pfZh+H/1ChoDFrmHVkFLUAzHmd/kvu6Q/nHxd28u+y35NrHEXOgI4HgefVTr
 xU9vtGj6k7MRTi4vm+bLKYMVofDCV7PWns5nXAaTCf849ucWAv3XAp8Os4QDl9OS3unV
 DqVGcFF0pxHiOkGcfPoNz5FNJVtlsXHN5PSPs4stG+Qr6fWUo59go5/n82CxbUXmnuDk
 nkxQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=dFe3xhhoFWohY7kyQoEsaym4h3cRs1TlV6VivlpKLzg=;
 b=HBR3w8BUZpMckntEdclD5FCYULTZlEeyp8yDuawDcn4un9cjrdLd+bxq7zfRLu5Xaj
 Zq5DLUOldhu7NX5Nl31v+gLMU1IIOAMMdaxBlKqVmgx8Aw7VjdcVC9hjlkVevJcXqJFq
 q5owMSXTfUz7UTGhK7WzOaZVKp0jMuoXCNLvgk1klwdVEbsBBg24kCLcV9ASe1ee4oR5
 MpVTEWeemZVfRjTrPlJXfJsvKds003tFH7g4Zb0mY08wA3aa/lnq/yEtdPhfZdIiXCw3
 1kEzKLNRA4n/0DOZth+9s2o1RLnRHTeO/SLWSfYcZuuoch8cf1PPdq0qeRbbWdYL3ujo
 kztA==
X-Gm-Message-State: AOAM532j1ct86q1l7wIxPq20LIW9FVxXRlxFkj2madt5xchhoSwAeOiK
 jzVoFWPKB36kKlokO8kiH1qqHWa6jFebe/+bvmg=
X-Google-Smtp-Source: ABdhPJy6GUvTW5Lz0tZHuLsinCQpNaxP+LehX/nbL5c9xVLhawjv0ptLRtZQcnH0dAKaIXxmuaSA7eYkYMBmgDbua6A=
X-Received: by 2002:a05:6512:3139:: with SMTP id
 p25mr10318594lfd.214.1590252514255; 
 Sat, 23 May 2020 09:48:34 -0700 (PDT)
MIME-Version: 1.0
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-5-george.dunlap@citrix.com>
 <CAEBZRSfF8KAnzz5LW8GhcuJu=2rex3d6bvgz=a7-kLMp-itjqQ@mail.gmail.com>
In-Reply-To: <CAEBZRSfF8KAnzz5LW8GhcuJu=2rex3d6bvgz=a7-kLMp-itjqQ@mail.gmail.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Sat, 23 May 2020 12:48:23 -0400
Message-ID: <CAEBZRScpycd2_A8moi68AA3asbsUSRjkW1kUVdpsdwgx-SZKpQ@mail.gmail.com>
Subject: Re: [PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather than
 open-coding
To: George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> > Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Oh, I just noticed your commit message calls the variable
"XEN_PKG_DIR", but it's actually named "GOXL_PKG_DIR."


From xen-devel-bounces@lists.xenproject.org Sat May 23 17:00:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 17:00:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcXVz-00036j-Dz; Sat, 23 May 2020 17:00:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o9iM=7F=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jcXVy-00036c-Aa
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 17:00:50 +0000
X-Inumbo-ID: f361723e-9d16-11ea-b07b-bc764e2007e4
Received: from mail-lf1-x142.google.com (unknown [2a00:1450:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f361723e-9d16-11ea-b07b-bc764e2007e4;
 Sat, 23 May 2020 17:00:49 +0000 (UTC)
Received: by mail-lf1-x142.google.com with SMTP id 202so8300202lfe.5
 for <xen-devel@lists.xenproject.org>; Sat, 23 May 2020 10:00:49 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=9fDPcNUZbFygQrZ1OlCwnzyfUVPdIJzfte36WKBK3I4=;
 b=UnAzIn+3r9pO5BkhG4ujop/i71xK5SFgkJhBVTLvFzIZI+LJDZUuMJOOCYZDA9Ckxh
 zUcALSWGjDuWCJqWWKHnPtM2CwmEOSpwt9lBmMyVoVS/04cfAongYgqW7k0E5QTVKx8c
 H3qDom1lRp6RCUoxTQpgJ2mitwYaKq134CR80Cbyhb+bwLJk+sV9Jvpnk1M1yA0xxjVr
 M4JqCiv+gr9wc3uwY4L635R6fsrAymiGt+y7ABYWmuWkAqO6E+h8dfOxGWZCrCN1RV0z
 BLzm7djTY9A07SWLsb83SrW49yHtSApLJf9ih7qMi5tSCWQMTg8QHLq90ZicoBWK88Lw
 vtTQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=9fDPcNUZbFygQrZ1OlCwnzyfUVPdIJzfte36WKBK3I4=;
 b=mMSwYYGXtc+oon4EFKDECgG9gB60lLajLW3Uub1JVLDtm6IiRLHMC0ElEjmkb8ylEI
 SIfBk7c+3z3c4+UG/KBEjxcyvK5E/xaRmiJLJ+otoc0mLjw9RUbGOtsrJJqwDpQjg899
 SfN158K/0p5csFuM2QjqRcIRzmsKNWcPZTpMkTlOk/FtJTsM6chj0qj3GjZFQFjBCvZg
 SYMHnal+uBbk6fA3yQ4CX3s6LOrp1CsLNUPfvqRsRe+XhmMo42kB+rBZdxLWmOhdCOGq
 Rtu+DS0DXpkzt4SPHCKrOHgw++is6cIlj9BjOeDaXciJG/H1JZCB0f0TZsqjnzFS9Ewh
 l5VA==
X-Gm-Message-State: AOAM530DZD01D/E9L7P4IuH2KNAXT+nEGdjLiJFUOS0RJ2dQRTnR8eOF
 2q+nNL8w1Q1cwfnAsjAWV9uzyGKJT8LXWfpSJmc=
X-Google-Smtp-Source: ABdhPJxrRCNRvbWb2rqnbtUYaBeJ8YjxU69t2Z41coZBu81PXvf0lJ6v8Uj8COeelWR67dK0nQPDTE5Io3ohqp1Qukw=
X-Received: by 2002:a05:6512:3139:: with SMTP id
 p25mr10336280lfd.214.1590253248070; 
 Sat, 23 May 2020 10:00:48 -0700 (PDT)
MIME-Version: 1.0
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-4-george.dunlap@citrix.com>
In-Reply-To: <20200522161240.3748320-4-george.dunlap@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Sat, 23 May 2020 13:00:37 -0400
Message-ID: <CAEBZRScr-6Lt29tHFNgOBmZhXBW0oBB-NcoqTBi9xr5GgQaZVQ@mail.gmail.com>
Subject: Re: [PATCH 3/5] libxl: Generate golang bindings in libxl Makefile
To: George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> The generated golang bindings (types.gen.go and helpers.gen.go) are
> left checked in so that they can be fetched from xenbits using the
> golang tooling.  This means that they must be updated whenever
> libxl_types.idl (or other dependencies) are updated.  However, the
> golang bindings are only built optionally; we can't assume that anyone
> updating libxl_types.idl will also descend into the tools/golang tree
> to re-generate the bindings.
>
> Fix this by re-generating the golang bindings from the libxl Makefile
> when the IDL dependencies are updated, so that anyone who updates
> libxl_types.idl will also end up updating the golang generated files
> as well.
>
>  - Make a variable for the generated files, and a target in
>    xenlight/Makefile which will only re-generate the files.
>
>  - Add a target in libxl/Makefile to call external idl generation
>    targets (currently only golang).
>
> For ease of testing, also add a specific target in libxl/Makefile just
> to check and update files generated from the IDL.
>
> This does mean that there are two potential paths for generating the
> files during a parallel build; but that shouldn't be an issue, since
> tools/golang/xenlight should never be built until after tools/libxl
> has completed building anyway.
>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

For the golang side:

Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>
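
[Editor's note: the regeneration scheme described in the quoted commit message
above can be sketched roughly as follows. This is an illustrative
reconstruction only; the rule names, variable names, and paths here are
hypothetical and are not taken from the actual patch.]

```make
# --- tools/golang/xenlight/Makefile (sketch) ---
# A variable naming the generated files, plus a target that only
# re-generates them (no compilation), as the commit message describes.
XL_GEN_FILES := types.gen.go helpers.gen.go

.PHONY: gen-types
gen-types: $(XL_GEN_FILES)

# Re-run the generator whenever the IDL (or the generator itself) changes.
$(XL_GEN_FILES): gengotypes.py $(XEN_ROOT)/tools/libxl/libxl_types.idl
	$(PYTHON) gengotypes.py $(XEN_ROOT)/tools/libxl/libxl_types.idl

# --- tools/libxl/Makefile (sketch) ---
# Hook invoked from libxl's IDL rules so that anyone updating
# libxl_types.idl also refreshes the external (golang) bindings.
.PHONY: idl-external
idl-external:
	$(MAKE) -C $(XEN_ROOT)/tools/golang/xenlight gen-types
```

Because the libxl-side hook recurses into tools/golang/xenlight, both build
paths produce the same files; as the commit message notes, this is safe
since the golang tree is not built until tools/libxl has finished.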


From xen-devel-bounces@lists.xenproject.org Sat May 23 17:03:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 17:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcXYI-0003GK-Qk; Sat, 23 May 2020 17:03:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=o9iM=7F=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jcXYH-0003GE-SX
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 17:03:13 +0000
X-Inumbo-ID: 4945a3a0-9d17-11ea-b07b-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4945a3a0-9d17-11ea-b07b-bc764e2007e4;
 Sat, 23 May 2020 17:03:13 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id z18so16315805lji.12
 for <xen-devel@lists.xenproject.org>; Sat, 23 May 2020 10:03:13 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=S0SOyc3LUuwBwXHuqtQ2x6cooeUtHm7VNacfs/Maa3Y=;
 b=lWn3o6id9Uoadkd2EHp2kvr7hSDT7e6MxKbRarDOsPMkTCrkSQ+tRJNt2+yx5OGMkb
 pxug7DUE83tCvU5ITeqxAm6XDHNjGQ0X3ZPuyDb1mEsHOlPd6MG4uTrctS0tjeqGb/+I
 iYyLetj4F9iUYwno7KVEOfd7o/GuLJTGtGdVbGRXdDkfLpMD6xjdenHmgZ/ek7vvdqtd
 ou0B7Jxg19YdmmlwEiwwRuOyzWbxCZx9/qs88NOAgxeJ9WM5ZmuI7R07o7DBkrAhk0sg
 dned+tPkVrDaTX+b4xaY+5cVmBzkGsLt37eb0ZeAuJZ2SjHIfVcZWmtsdkXD8Yc5fIFz
 LbnQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=S0SOyc3LUuwBwXHuqtQ2x6cooeUtHm7VNacfs/Maa3Y=;
 b=pWL/jX7fWcmVECNFB+K91JA/7XdBddYw02bEv43IuglWKi4TzwJfw32cZ9UNzba7dH
 dCXtp5avmGrEotXRBry+VoC6x5273gCa7n2XXXXeTxtiDTEY4al/NZ2yYvANmAGEsV6L
 EC6McfxvrKbjdYEiD4MpaiP3/mp5ku1J/0ZoU2mNEs8NQ19TsNW+0uPsQqKrq5yhdqq4
 O0bX5gDWyDVOXOR6xUXZX5SJknUJJQ0Y+SdlK+htrRGlMCihpPa6u6N2hMpHM8M4g4VM
 1KZkXSyxdT5w85ySHtzfUcfLnY1ZUFB4JOcLgnQPBaDwemrC/MGdlFUmfWUvVdpq5DMj
 wGWA==
X-Gm-Message-State: AOAM533/QtWM+/A8mW64vBqGuxLlEKRYOzDnYHy798nQOXw1t9n0TYkW
 a/X7Xx46ScWv0fU3Uw6r35k1eVkUEhWLDrx8ntKYfivFr/8=
X-Google-Smtp-Source: ABdhPJwoAP+Gpa2ri9m6QTTyMsHz2hIzra7+AphB85o44jODdGS/ij+6dI4uMcYSMp2aMRrBuvnmLiq8hn6Pcaf2rN4=
X-Received: by 2002:a05:651c:1208:: with SMTP id
 i8mr6600041lja.103.1590253392390; 
 Sat, 23 May 2020 10:03:12 -0700 (PDT)
MIME-Version: 1.0
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-6-george.dunlap@citrix.com>
In-Reply-To: <20200522161240.3748320-6-george.dunlap@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Sat, 23 May 2020 13:03:01 -0400
Message-ID: <CAEBZRSeSRFa=HJhJB_xX8a_itkoDQ9MAgAioEeLogdh6k8k_wg@mail.gmail.com>
Subject: Re: [PATCH 5/5] gitignore: Ignore golang package directory
To: George Dunlap <george.dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <ian.jackson@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>


From xen-devel-bounces@lists.xenproject.org Sat May 23 21:05:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 21:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcbKb-0000EP-1N; Sat, 23 May 2020 21:05:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcbKa-0000EK-IJ
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 21:05:20 +0000
X-Inumbo-ID: 18b9251e-9d39-11ea-ad54-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18b9251e-9d39-11ea-ad54-12813bfff9fa;
 Sat, 23 May 2020 21:05:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Nvfc88kY6tEGi6kGeGQlvmu1mJk63BcBO2Gps+PSsqI=; b=CiU5oU6h7/mGnlBvUh3UcO5PK
 SVcj/h5XIP7/TKOptcNtfsbOk24N04q7ibDld0EE5AXDqTKMp91l/8Rn6LX+Eno7PB4+Plp2GcxlW
 U1QhrDO6G0iMM3nKY523vuQfYWxHXzs5vAAoDtuycT/VIisEWgDDHu6pCDJbIeKDMmGEM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcbKT-0004ct-Nk; Sat, 23 May 2020 21:05:13 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcbKT-0002jL-9S; Sat, 23 May 2020 21:05:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcbKT-00041m-8l; Sat, 23 May 2020 21:05:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150344-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [examine test] 150344: tolerable trouble: pass/starved
X-Osstest-Failures: examine:examine-albana0:memdisk-try-append:fail:nonblocking
 examine:examine-albana1:memdisk-try-append:fail:nonblocking
 examine:examine-rimava1:hosts-allocate:starved:nonblocking
 examine:examine-rochester0:hosts-allocate:starved:nonblocking
 examine:examine-debina0:hosts-allocate:starved:nonblocking
 examine:examine-godello0:hosts-allocate:starved:nonblocking
 examine:examine-elbling1:hosts-allocate:starved:nonblocking
X-Osstest-Versions-That: flight=149782
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 21:05:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150344 examine real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150344/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 examine-albana0               4 memdisk-try-append           fail  like 149782
 examine-albana1               4 memdisk-try-append           fail  like 149782
 examine-rimava1               2 hosts-allocate               starved  n/a
 examine-rochester0            2 hosts-allocate               starved  n/a
 examine-debina0               2 hosts-allocate               starved  n/a
 examine-godello0              2 hosts-allocate               starved  n/a
 examine-elbling1              2 hosts-allocate               starved  n/a

baseline version:
 flight               149782

jobs:
 examine-albana0                                              pass    
 examine-albana1                                              pass    
 examine-arndale-bluewater                                    pass    
 examine-cubietruck-braque                                    pass    
 examine-chardonnay0                                          pass    
 examine-chardonnay1                                          pass    
 examine-debina0                                              starved 
 examine-debina1                                              pass    
 examine-elbling0                                             pass    
 examine-elbling1                                             starved 
 examine-fiano0                                               pass    
 examine-fiano1                                               pass    
 examine-cubietruck-gleizes                                   pass    
 examine-godello0                                             starved 
 examine-godello1                                             pass    
 examine-huxelrebe0                                           pass    
 examine-huxelrebe1                                           pass    
 examine-italia0                                              pass    
 examine-arndale-lakeside                                     pass    
 examine-laxton0                                              pass    
 examine-laxton1                                              pass    
 examine-arndale-metrocentre                                  pass    
 examine-cubietruck-metzinger                                 pass    
 examine-cubietruck-picasso                                   pass    
 examine-pinot0                                               pass    
 examine-pinot1                                               pass    
 examine-rimava1                                              starved 
 examine-rochester0                                           starved 
 examine-rochester1                                           pass    
 examine-arndale-westfield                                    pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Push not applicable.



From xen-devel-bounces@lists.xenproject.org Sat May 23 21:17:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 23 May 2020 21:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcbVr-0001I5-4R; Sat, 23 May 2020 21:16:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BlJ7=7F=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcbVp-0001I0-LF
 for xen-devel@lists.xenproject.org; Sat, 23 May 2020 21:16:57 +0000
X-Inumbo-ID: bad33fc8-9d3a-11ea-ad55-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bad33fc8-9d3a-11ea-ad55-12813bfff9fa;
 Sat, 23 May 2020 21:16:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=VdG4RGgMC9Nf3TjR8rGZE6F9BYUzdYKXG6X9+/JzDbw=; b=sKq8HGXAgsN/XL4IXb2tDzV4D
 br0VqNTFBzUBeNb6jIZtbqSnkPcE/cGfHCrvWBB3qvmvX/C/IzQAN343JiuNmgB8RuIDwN8ayZTEd
 QHrwbdX8B2gpSp1sdRWftyIb1eR+A2YHJISsrG4EvQxEidJsqfo3MiOsPe0finpIF50rQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcbVm-0004tg-UQ; Sat, 23 May 2020 21:16:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcbVm-0003CC-K2; Sat, 23 May 2020 21:16:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcbVm-000343-Ip; Sat, 23 May 2020 21:16:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150341-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150341: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=5e015d48a5ee68ba03addad2698364d8f015afec
X-Osstest-Versions-That: xen=abf378e6483195b98a3f32e2c9d017e0eeeb275f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 23 May 2020 21:16:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150341 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150341/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150336
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150336
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150336
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150336
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150336
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150336
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150336
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150336
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150336
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150336
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150336
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec
baseline version:
 xen                  abf378e6483195b98a3f32e2c9d017e0eeeb275f

Last test of basis   150336  2020-05-23 01:06:17 Z    0 days
Testing same since   150341  2020-05-23 10:57:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   abf378e648..5e015d48a5  5e015d48a5ee68ba03addad2698364d8f015afec -> master


From xen-devel-bounces@lists.xenproject.org Sun May 24 00:15:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 00:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jceIL-0008Q5-Qc; Sun, 24 May 2020 00:15:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jceIK-0008Q0-75
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 00:15:12 +0000
X-Inumbo-ID: 9f4762de-9d53-11ea-ad6a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f4762de-9d53-11ea-ad6a-12813bfff9fa;
 Sun, 24 May 2020 00:15:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YmvhygnBJO6BJ64WkMrsZ2rVmMF2nFHNBLz1gN3vr8o=; b=iNtRbme1gr/T6rcT06mBLC8Vh
 A+cNzZv2gyWlw3XfUXwzpfdYckDppLitbJB0HIRM64mM2hO/TsWh5ttC+2jqcuZAp7ovnT9pX8KAa
 iUOakRGq7fOvTy/A4nw+loYGbbYl4W9FLiNZfvopUnMMOgupeAjq9/hZPzz7v/IxafPn0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jceIE-0000gm-65; Sun, 24 May 2020 00:15:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jceID-0000D8-Qo; Sun, 24 May 2020 00:15:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jceID-0002BK-Q8; Sun, 24 May 2020 00:15:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150343-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150343: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=e644645abf4788e919beeb97925fb6bf43e890a2
X-Osstest-Versions-That: linux=2ea1940b84e55420a9e8feddcafd173edfe4df11
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 00:15:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150343 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150343/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150281

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150281
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150281
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150281
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150281
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150281
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150281
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e644645abf4788e919beeb97925fb6bf43e890a2
baseline version:
 linux                2ea1940b84e55420a9e8feddcafd173edfe4df11

Last test of basis   150281  2020-05-20 19:40:19 Z    3 days
Failing since        150296  2020-05-21 08:13:02 Z    2 days    6 attempts
Testing same since   150343  2020-05-23 14:40:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Roland Scheidegger (VMware)" <rscheidegger.oss@gmail.com>
  Al Viro <viro@zeniv.linux.org.uk>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Andreas Färber <afaerber@suse.de>
  Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Bijan Mottahedeh <bijan.mottahedeh@oracle.com>
  Bin Lu <Bin.Lu@arm.com>
  Bodo Stroesser <bstroesser@ts.fujitsu.com>
  Brent Lu <brent.lu@intel.com>
  Bryant G. Ly <bryangly@gmail.com>
  Can Guo <cang@codeaurora.org>
  Catalin Marinas <catalin.marinas@arm.com>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Christian Gmeiner <christian.gmeiner@gmail.com>
  Christian Lachner <gladiac@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Colin Ian King <colin.king@canonical.com>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dave Airlie <airlied@redhat.com>
  Dave Jiang <dave.jiang@intel.com>
  Eric Biggers <ebiggers@google.com>
  Ewan D. Milne <emilne@redhat.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Guixiong Wei <guixiongwei@gmail.com>
  Ignat Korchagin <ignat@cloudflare.com>
  Jan Schmidt <jan@centricular.com>
  Jason Wang <jasowang@redhat.com>
  Jason Yan <yanaijie@huawei.com>
  Jens Axboe <axboe@kernel.dk>
  Johannes Berg <johannes.berg@intel.com>
  John Johansen <john.johansen@canonical.com>
  Kamal Dasu <kdasu.kdev@gmail.com>
  Kefeng Wang <wangkefeng.wang@huawei.com>
  Keno Fischer <keno@juliacomputing.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lucas Stach <l.stach@pengutronix.de>
  Martin K. Petersen <martin.petersen@oracle.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael S. Tsirkin <mst@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Namjae Jeon <namjae.jeon@samsung.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Nicholas Kazlauskas <nicholas.kazlauskas@amd.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paul Cercueil <paul@crapouillou.net>
  Pavel Begunkov <asml.silence@gmail.com>
  PeiSen Hou <pshou@realtek.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Rafal Hibner <rafal.hibner@secom.com.pl>
  Rafał Hibner <rafal.hibner@secom.com.pl>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Weinberger <richard@nod.at>
  Ritesh Harjani <riteshh@linux.ibm.com>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Roland Scheidegger <sroland@vmware.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Scott Bahling <sbahling@suse.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thierry Reding <treding@nvidia.com>
  Thomas Hellstrom <thellstrom@vmware.com>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Vladimir Stempen <vladimir.stempen@amd.com>
  Will Deacon <will@kernel.org>
  Xiaoguang Wang <xiaoguang.wang@linux.alibaba.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  YueHaibing <yuehaibing@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   2ea1940b84e5..e644645abf47  e644645abf4788e919beeb97925fb6bf43e890a2 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun May 24 07:00:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 07:00:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jckbw-0001ar-7L; Sun, 24 May 2020 06:59:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jckbu-0001am-K5
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 06:59:50 +0000
X-Inumbo-ID: 2591eda4-9d8c-11ea-ad9b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2591eda4-9d8c-11ea-ad9b-12813bfff9fa;
 Sun, 24 May 2020 06:59:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RkA1bT8vVPds7UGkFqKiPKSTSpeYf/duzNjnaPK6zI4=; b=3nKHySWIQhmqcK/vRJdnVI+DN
 hHajDczaQaMCZSceahVr59nDZ8Ja7/V77Y/UZfhpWONMcXiRme7KSC0X3tnqKey6s/I3joYqzK5JU
 ntXpXtNCRw0oOzyhLCs3ni/UPaIrQFcDkp7qgANaI3MD6Ep3MrN7IR2em+psPyJYvF0Ng=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jckbn-0002lK-0W; Sun, 24 May 2020 06:59:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jckbm-0001xR-Nv; Sun, 24 May 2020 06:59:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jckbm-0006Xm-NH; Sun, 24 May 2020 06:59:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150347-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150347: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=f718709431429fbb4e1fc6781f3a3752a7f43f70
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 06:59:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150347 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150347/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a

version targeted for testing:
 libvirt              f718709431429fbb4e1fc6781f3a3752a7f43f70
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  128 days
Failing since        146211  2020-01-18 04:18:52 Z  127 days  118 attempts
Testing same since   150339  2020-05-23 04:19:51 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19289 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 24 09:06:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 09:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcmZf-0004d4-3R; Sun, 24 May 2020 09:05:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcmZe-0004cz-Bw
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 09:05:38 +0000
X-Inumbo-ID: b8241f46-9d9d-11ea-ada5-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8241f46-9d9d-11ea-ada5-12813bfff9fa;
 Sun, 24 May 2020 09:05:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=y4wHZm6YuXTWr9Yiy/iRmbvRLP504kgOpwfQMLIlCcc=; b=XaVcVKTiRJehe0eyarB71aAQO
 U+lbL/C+6P+Bp9mVzpQdzGZDxGUKyGW9pxb2uNtokLalZIY057S/VjIEitZ2HzfqaSujYxq+1ei9C
 gzr/Z91377Kf5k7fULcamgUeh/+iBYUYqIh+LsGcb7rIVPtnAWo8WBwpIv+O4doxERD4w=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcmZW-0005pB-TW; Sun, 24 May 2020 09:05:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcmZW-0008P8-J5; Sun, 24 May 2020 09:05:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcmZW-0003Tf-IJ; Sun, 24 May 2020 09:05:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150345-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150345: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=caffb99b6929f41a69edbb5aef3a359bf45f3315
X-Osstest-Versions-That: linux=e644645abf4788e919beeb97925fb6bf43e890a2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 09:05:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150345 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150345/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150343
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150343
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150343
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150343
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150343
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150343
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150343
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150343
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                caffb99b6929f41a69edbb5aef3a359bf45f3315
baseline version:
 linux                e644645abf4788e919beeb97925fb6bf43e890a2

Last test of basis   150343  2020-05-23 14:40:06 Z    0 days
Testing same since   150345  2020-05-24 00:39:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Elder <elder@linaro.org>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexei Starovoitov <ast@kernel.org>
  Amit Cohen <amitc@mellanox.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Antoine Tenart <antoine.tenart@bootlin.com>
  Arnd Bergmann <arnd@arndb.de>
  ASSOGBA Emery <assogba.emery@gmail.com>
  Baoquan He <bhe@redhat.com>
  Boris Sukholitko <boris.sukholitko@broadcom.com>
  Brendan Higgins <brendanhiggins@google.com>
  Calvin Johnson <calvin.johnson@oss.nxp.com>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christoph Paasch <cpaasch@apple.com>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Claudiu Manoil <claudiu.manoil@nxp.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Dave Young <dyoung@redhat.com>
  David Ahern <dsahern@gmail.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  DENG Qingfang <dqfext@gmail.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dragos Bogdan <dragos.bogdan@analog.com>
  Edward Cree <ecree@solarflare.com>
  Eran Ben Elisha <eranbe@mellanox.com>
  Eric Dumazet <edumazet@google.com>
  Fabrice Gasnier <fabrice.gasnier@st.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Gerald Schaefer <gerald.schaefer@de.ibm.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Heiner Kallweit <hkallweit1@gmail.com>
  Ido Schimmel <idosch@mellanox.com>
  Jakub Kicinski <kuba@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  Jamal Hadi Salim <jhs@mojatatu.com>
  James Morris <jamorris@linux.microsoft.com>
  Jason A. Donenfeld <Jason@zx2c4.com>
  Jere Leppänen <jere.leppanen@nokia.com>
  Jeremy Kerr <jk@ozlabs.org>
  Jiri Pirko <jiri@mellanox.com>
  John Hubbard <jhubbard@nvidia.com>
  John Paul Adrian Glaubitz <glaubitz@physik.fu-berlin.de>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan McDowell <noodles@earth.li>
  Kalle Valo <kvalo@codeaurora.org>
  Klaus Doth <kdlnx@doth.eu>
  KP Singh <kpsingh@google.com>
  Kurt Kanzenbach <kurt@linutronix.de>
  Leon Romanovsky <leonro@mellanox.com>
  Leon Yu <leoyu@nvidia.com>
  Lianbo Jiang <lijiang@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Luca Coelho <luciano.coelho@intel.com>
  Madhuparna Bhowmik <madhuparnabhowmik10@gmail.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Maor Dickman <maord@mellanox.com>
  Marc Payne <marc.payne@mdpsys.co.uk>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marco Elver <elver@google.com>
  Martin KaFai Lau <kafai@fb.com>
  Matt Ranostay <matt.ranostay@konsulko.com>
  Matteo Croce <mcroce@redhat.com>
  Matteo Ghidoni <matteo.ghidoni@ch.abb.com>
  Michal Kubecek <mkubecek@suse.cz>
  Mike Rapoport <rppt@linux.ibm.com>
  Moshe Shemesh <moshe@mellanox.com>
  Naoya Horiguchi <n-horiguchi@ah.jp.nec.com>
  Naoya Horiguchi <naoya.horiguchi@nec.com>
  Nathan Chancellor <natechancellor@gmail.com> [build, clang-11]
  Neil Horman <nhorman@tuxdriver.com>
  Nick Desaulniers <ndesaulniers@google.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Oscar Carter <oscar.carter@gmx.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Philipp Rudo <prudo@linux.ibm.com>
  Qiushi Wu <wu000273@umn.edu>
  Randy Dunlap <rdunlap@infradead.org>
  Roi Dayan <roid@mellanox.com>
  Roman Mashak <mrv@mojatatu.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sabrina Dubroca <sd@queasysnail.net>
  Saeed Mahameed <saeedm@mellanox.com>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Samuel Iglesias Gonsalvez <siglesias@igalia.com>
  Saravana Kannan <saravanak@google.com>
  Sedat Dilek <sedat.dilek@gmail.com>
  Shaokun Zhang <zhangshaokun@hisilicon.com>
  Shay Drory <shayd@mellanox.com>
  Stan Johnson <userm57@yahoo.com>
  Stephen Worley <sworley@cumulusnetworks.com>
  Tang Bin <tangbin@cmss.chinamobile.com>
  Tariq Toukan <tariqt@mellanox.com>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Todd Malsbary <todd.malsbary@linux.intel.com>
  Tomas Winkler <tomas.winkler@intel.com>
  Uladzislau Rezki <uladzislau.rezki@sony.com>
  Vadim Fedorenko <vfedorenko@novek.ru>
  Valentin Longchamp <valentin@longchamp.me>
  Vasily Gorbik <gor@linux.ibm.com>
  Vitaly Wool <vitaly.wool@konsulko.com>
  Vladimir Oltean <vladimir.oltean@nxp.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  You-Sheng Yang <vicamo@gmail.com>
  Yuqi Jin <jinyuqi@huawei.com>
  Zhang Shengju <zhangshengju@cmss.chinamobile.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   e644645abf47..caffb99b6929  caffb99b6929f41a69edbb5aef3a359bf45f3315 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun May 24 09:45:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 09:45:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcnBX-0007zG-6x; Sun, 24 May 2020 09:44:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcnBW-0007zB-6K
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 09:44:46 +0000
X-Inumbo-ID: 2f1b169a-9da3-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f1b169a-9da3-11ea-b07b-bc764e2007e4;
 Sun, 24 May 2020 09:44:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Y2Y93Z/UETto/LR+IAtl5PuITCBuqhJSr8NFFCflDSM=; b=Zq1t2mAlkPc07o7XwUiGH3dQOe
 8XwVY/CgsyEA7lSaNgLCrQktwt9E/Qf2ITJmvi3pOdpevGtcyEHWKX/7luGIPdNsGrEVDrkZDRVad
 +Pg5EAzmVD/uVFHHT0CViZ3gNOVBuDdASwLbPIuuqumrN4piTaIP0YUBpSfoZ3GM1VLo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcnBN-0006bG-VE; Sun, 24 May 2020 09:44:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcnBN-0002d8-O0; Sun, 24 May 2020 09:44:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcnBN-0008VN-NK; Sun, 24 May 2020 09:44:37 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [libvirt bisection] complete build-i386-libvirt
Message-Id: <E1jcnBN-0008VN-NK@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 09:44:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job build-i386-libvirt
testid libvirt-build

Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  4d5f50d86b760864240c695adc341379fb47a796
  Bug not present: a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/148912/


  commit 4d5f50d86b760864240c695adc341379fb47a796
  Author: Pavel Hrdina <phrdina@redhat.com>
  Date:   Wed Jan 8 22:54:31 2020 +0100
  
      bootstrap.conf: stop creating AUTHORS file
      
      The existence of AUTHORS file is required for GNU projects but since
      commit <8bfb36db40f38e92823b657b5a342652064b5adc> we do not require
      these files to exist.
      
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/libvirt/build-i386-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/libvirt/build-i386-libvirt.libvirt-build --summary-out=tmp/150350.bisection-summary --basis-template=146182 --blessings=real,real-bisect libvirt build-i386-libvirt libvirt-build
Searching for failure / basis pass:
 150347 fail [host=italia0] / 146182 [host=pinot1] 146156 [host=huxelrebe0] 146103 [host=pinot1] 146061 ok.
Failure / basis pass flights: 150347 / 146061
(tree with no url: minios)
(tree in basispass but not in latest: libvirt_gnulib)
Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 5e015d48a5ee68ba03addad2698364d8f015afec
Basis pass 7d608469621a3fda72dff2a89308e68cc9fb4c9a 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 03bfe526ecadc86f31eda433b91dc90be0563919
Generating revisions with ./adhoc-revtuple-generator  git://libvirt.org/libvirt.git#7d608469621a3fda72dff2a89308e68cc9fb4c9a-f718709431429fbb4e1fc6781f3a3752a7f43f70 https://gitlab.com/keycodemap/keycodemapdb.git#317d3eeb963a515e15a63fa356d8ebcda7041a51-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/osstest/ovmf.git#70911f1f4aee0366b6122f2b90d367ec0f066beb-1c877c716038a862e876cac8f0929bab4a96e849 git://xenbits.xen.org/qemu-xen-traditional.git#d0d8ad39ecb51cd7497cd524484fe09f50876798-3c659044118e34603161457db9934a34f816d78b git://xenbits.xen.org/qemu-xen.git#933ebad2470a169504799a1d95b8e410bd9847ef-410cc30fdc590417ae730d635bbc70257adf6750 git://xenbits.xen.org/osstest/seabios.git#f21b5a4aeb020f2a5e2c6503f906a9349dd2f069-7e9db04923854b7f4edca33948f55abee22907b9 git://xenbits.xen.org/xen.git#03bfe526ecadc86f31eda433b91dc90be0563919-5e015d48a5ee68ba03addad2698364d8f015afec
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 178548 nodes in revision graph
Searching for test results:
 146061 pass 7d608469621a3fda72dff2a89308e68cc9fb4c9a 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 03bfe526ecadc86f31eda433b91dc90be0563919
 146103 [host=pinot1]
 146182 [host=pinot1]
 146156 [host=huxelrebe0]
 146270 pass 7d608469621a3fda72dff2a89308e68cc9fb4c9a 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 03bfe526ecadc86f31eda433b91dc90be0563919
 146289 pass irrelevant
 146240 fail irrelevant
 146211 fail irrelevant
 146272 fail irrelevant
 146290 fail irrelevant
 146275 pass irrelevant
 146276 pass irrelevant
 146291 pass irrelevant
 146278 pass irrelevant
 146282 fail irrelevant
 146279 pass irrelevant
 146280 pass irrelevant
 146292 fail irrelevant
 146287 fail irrelevant
 146299 fail irrelevant
 146344 fail irrelevant
 146374 fail irrelevant
 146410 fail irrelevant
 146455 fail irrelevant
 146509 fail irrelevant
 146489 fail irrelevant
 146528 fail irrelevant
 146546 fail irrelevant
 146565 fail irrelevant
 146586 fail irrelevant
 146616 fail irrelevant
 146636 fail irrelevant
 146660 fail irrelevant
 146689 fail irrelevant
 146737 fail irrelevant
 146756 []
 146714 []
 146775 fail irrelevant
 146799 fail irrelevant
 146843 fail irrelevant
 146921 fail irrelevant
 146995 fail irrelevant
 147040 fail irrelevant
 147084 fail irrelevant
 147141 fail irrelevant
 147195 fail irrelevant
 147265 fail irrelevant
 147340 fail irrelevant
 147419 fail irrelevant
 147477 fail irrelevant
 147520 fail irrelevant
 147583 fail irrelevant
 147649 fail irrelevant
 147703 fail irrelevant
 147784 fail irrelevant
 147736 fail irrelevant
 147885 fail irrelevant
 147831 fail irrelevant
 147981 fail irrelevant
 148068 fail irrelevant
 148144 fail irrelevant
 148196 fail irrelevant
 148269 fail irrelevant
 148331 fail irrelevant
 148406 fail irrelevant
 148459 fail irrelevant
 148503 fail irrelevant
 148547 fail irrelevant
 148615 fail irrelevant
 148583 fail irrelevant
 148651 fail irrelevant
 148688 fail irrelevant
 148729 fail irrelevant
 148775 fail irrelevant
 148799 fail irrelevant
 148871 fail 0d0d60ddc5e58359cff5be8dfd6dd27e98da0282 317d3eeb963a515e15a63fa356d8ebcda7041a51 957ca63190bc2770d0383408bf87587112e84881 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 3dd724dff085e13ad520f8e35aea717db2ff07d0
 148830 fail irrelevant
 148861 pass 7d608469621a3fda72dff2a89308e68cc9fb4c9a 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 03bfe526ecadc86f31eda433b91dc90be0563919
 148875 fail 4ed55c0be11da3e7e29986a8b3b4b0a32b58be47 317d3eeb963a515e15a63fa356d8ebcda7041a51 c8b8157e126ae2fb6f65842677251d300ceff104 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d f910c3ebc6a178c5cbbc0868134be536fae7f7cf
 148883 pass 6b4140dafb6b3db9ead2e260757f1c3583936f19 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148867 fail irrelevant
 148888 fail 2feaa925bba06e77be918bcbfab63bc8201c8f19 317d3eeb963a515e15a63fa356d8ebcda7041a51 d9c919744b9b6272cdf1c81f33a09539a4bd031b d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 1eeedaf5a0d9ed6324f3bd5b700bb22eb4355341
 148876 fail 6c1dddaf97b4ef70e27961c9f79b15c79a863ac5 317d3eeb963a515e15a63fa356d8ebcda7041a51 9a1f14ad721bbcd833ec5108944c44a502392f03 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d f44a192d22a37dcb9171b95978b43637bc09718d
 148880 pass 0169f5ecdeefb91463b07a2e6f3f3b40c84323e9 317d3eeb963a515e15a63fa356d8ebcda7041a51 710ff7490ad897383eb35d1becadabd21a733f24 d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 321b658847a06417b6a0b6964e939ed0ecf16551
 148894 fail d0236e2a554f2321512276b897e8a8a44f68e969 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148892 fail d0236e2a554f2321512276b897e8a8a44f68e969 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d b05ec9263e56ef0784da766e829cfe08569d1d88
 148898 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148899 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148904 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148907 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148887 fail irrelevant
 148909 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148912 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 148954 fail irrelevant
 149001 fail irrelevant
 149043 fail irrelevant
 149074 fail irrelevant
 149123 fail irrelevant
 149154 fail irrelevant
 149193 fail irrelevant
 149234 fail irrelevant
 149268 fail irrelevant
 149314 fail irrelevant
 149376 fail irrelevant
 149407 fail irrelevant
 149434 fail irrelevant
 149455 fail irrelevant
 149482 fail irrelevant
 149550 fail irrelevant
 149508 fail irrelevant
 149590 fail irrelevant
 149643 fail irrelevant
 149629 fail irrelevant
 149615 fail irrelevant
 149635 fail irrelevant
 149684 fail irrelevant
 149666 fail irrelevant
 149696 fail irrelevant
 149732 fail irrelevant
 149773 fail irrelevant
 149746 fail irrelevant
 149803 fail irrelevant
 149826 fail irrelevant
 149886 fail irrelevant
 149833 fail irrelevant
 149850 fail irrelevant
 149870 fail irrelevant
 149909 fail de7e9840e7f888f1a872c86b0cb793b283193137 317d3eeb963a515e15a63fa356d8ebcda7041a51 e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb 0135be8bd8cd60090298f02310691b688d95c3a8
 149902 fail de7e9840e7f888f1a872c86b0cb793b283193137 317d3eeb963a515e15a63fa356d8ebcda7041a51 e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb 0135be8bd8cd60090298f02310691b688d95c3a8
 149895 fail irrelevant
 150053 fail irrelevant
 150062 fail irrelevant
 150083 fail irrelevant
 150099 fail irrelevant
 150121 fail 23bf93884c6346206e87c0f14d93f905e8c81267 27acf0ef828bf719b2053ba398b195829413dbdd c8543b8d830d22882dab4ece47f0413f9c6eb431 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
 150155 fail 65a12c467cd683809b4d445b8cf1c3ae250209b2 27acf0ef828bf719b2053ba398b195829413dbdd 88899a372cfc44f8612315f4b43a084d1814fe69 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
 150131 fail 23bf93884c6346206e87c0f14d93f905e8c81267 27acf0ef828bf719b2053ba398b195829413dbdd c8543b8d830d22882dab4ece47f0413f9c6eb431 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
 150146 fail irrelevant
 150170 fail irrelevant
 150210 fail irrelevant
 150228 fail 144dfe4215902b40a9d17fdb326054bbd8e07563 27acf0ef828bf719b2053ba398b195829413dbdd 9099dcbd61c8d22b5eedda783d143c222d2705a3 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 665dce17c04b574bb0ebcde4cac129c3dd9e681c 664e1bc12f8658da124a4eff7a8f16da073bd47f
 150190 fail irrelevant
 150222 fail 144dfe4215902b40a9d17fdb326054bbd8e07563 27acf0ef828bf719b2053ba398b195829413dbdd 9099dcbd61c8d22b5eedda783d143c222d2705a3 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 665dce17c04b574bb0ebcde4cac129c3dd9e681c 57880053dd24012e9f59c23b630fefe07e15dc49
 150237 fail irrelevant
 150268 fail irrelevant
 150287 fail irrelevant
 150347 fail f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 5e015d48a5ee68ba03addad2698364d8f015afec
 150348 pass 7d608469621a3fda72dff2a89308e68cc9fb4c9a 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 03bfe526ecadc86f31eda433b91dc90be0563919
 150350 fail f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 3c659044118e34603161457db9934a34f816d78b 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 5e015d48a5ee68ba03addad2698364d8f015afec
 150317 fail irrelevant
 150339 fail irrelevant
Searching for interesting versions
 Result found: flight 146061 (pass), for basis pass
 Result found: flight 150347 (fail), for basis failure
 Repro found: flight 150348 (pass), for basis pass
 Repro found: flight 150350 (fail), for basis failure
 0 revisions at a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a d0d8ad39ecb51cd7497cd524484fe09f50876798 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
No revisions left to test, checking graph state.
 Result found: flight 148898 (pass), for last pass
 Result found: flight 148899 (fail), for first failure
 Repro found: flight 148904 (pass), for last pass
 Repro found: flight 148907 (fail), for first failure
 Repro found: flight 148909 (pass), for last pass
 Repro found: flight 148912 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  4d5f50d86b760864240c695adc341379fb47a796
  Bug not present: a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/148912/

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.


  commit 4d5f50d86b760864240c695adc341379fb47a796
  Author: Pavel Hrdina <phrdina@redhat.com>
  Date:   Wed Jan 8 22:54:31 2020 +0100
  
      bootstrap.conf: stop creating AUTHORS file
      
      The existence of AUTHORS file is required for GNU projects but since
      commit <8bfb36db40f38e92823b657b5a342652064b5adc> we do not require
      these files to exist.
      
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.204627 to fit
pnmtopng: 37 colors found
Revision graph left in /home/logs/results/bisect/libvirt/build-i386-libvirt.libvirt-build.{dot,ps,png,html,svg}.
----------------------------------------
150350: tolerable FAIL

flight 150350 libvirt real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/150350/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-i386-libvirt            6 libvirt-build           fail baseline untested


jobs:
 build-i386                                                   pass    
 build-i386-libvirt                                           fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun May 24 10:13:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 10:13:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcncw-0002At-MP; Sun, 24 May 2020 10:13:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcncv-0002Ao-MF
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 10:13:05 +0000
X-Inumbo-ID: 28223a90-9da7-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28223a90-9da7-11ea-b07b-bc764e2007e4;
 Sun, 24 May 2020 10:13:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=s6FsZrmcnsHYj+X2BKNwmFK0HWHEgGzWD9wY8dzJI68=; b=QF+EXP8zknw5rMkUMI0m3/gvB
 kI/OzW9guV1XbpT93yF3RH+QzTXy0sfX/clREBKy7n1G6HLHi0HmrfiCQPAGTc0oYQpPJVDrY/4s5
 aa+EfJJzZ1FTckD8CxpBruIozIVwOZKoanthwS9q7oaIQ4CWwihhmkwCIYQBzO8u31V+4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcnct-0007H9-Vf; Sun, 24 May 2020 10:13:04 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcncs-0003F3-Ov; Sun, 24 May 2020 10:13:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcncs-0004wY-OK; Sun, 24 May 2020 10:13:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150349-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150349: all pass - PUSHED
X-Osstest-Versions-This: xen=5e015d48a5ee68ba03addad2698364d8f015afec
X-Osstest-Versions-That: xen=e235fa2794c95365519eac714d6ea82f8e64752e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 10:13:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150349 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150349/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec
baseline version:
 xen                  e235fa2794c95365519eac714d6ea82f8e64752e

Last test of basis   150274  2020-05-20 09:20:08 Z    4 days
Testing same since   150349  2020-05-24 09:19:09 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  David Woodhouse <dwmw@amazon.co.uk>
  George Dunlap <george.dunlap@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Nick Rosbrook <rosbrookn@ainfosec.com>
  Nick Rosbrook <rosbrookn@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e235fa2794..5e015d48a5  5e015d48a5ee68ba03addad2698364d8f015afec -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 24 11:19:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 11:19:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcoeP-0007Ij-Jo; Sun, 24 May 2020 11:18:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcoeN-0007Ie-RB
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 11:18:39 +0000
X-Inumbo-ID: 4cafd6fc-9db0-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4cafd6fc-9db0-11ea-b9cf-bc764e2007e4;
 Sun, 24 May 2020 11:18:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=qGxQlRRf6ezwyBg27yYmb4BFcZKCCcI/Yok8naweY2o=; b=12f8oj+FROYQ3nywm50Clu2xjA
 WYkwOQDNsFQmPd1lI7Ll/nmsKV6bmd8lQ9/wsQYaLkfD0BUuC2g6DKqZbzy3wSZMY6zDtqHWbwuNv
 TfxljmD585COA2ROzYTGoRP0w5LrHzAoibVVDfZCOLe6kGKwXaL//vYbfnyogmDVRLqE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcoeF-0000CA-14; Sun, 24 May 2020 11:18:31 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcoeE-0006N6-Oe; Sun, 24 May 2020 11:18:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcoeE-0005vM-O6; Sun, 24 May 2020 11:18:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [libvirt bisection] complete build-arm64-libvirt
Message-Id: <E1jcoeE-0005vM-O6@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 11:18:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job build-arm64-libvirt
testid libvirt-build

Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  810613a60efe3924c536b3663246900bc08910a5
  Bug not present: f6a750e678fb0ca3898cba08b6698f079008924c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/146332/


  commit 810613a60efe3924c536b3663246900bc08910a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Dec 23 15:37:26 2019 +0000
  
      src: replace strptime()/timegm()/mktime() with GDateTime APIs set
      
      All places where we use strptime/timegm()/mktime() are handling
      conversion of dates in a format compatible with ISO 8601, so we
      can use the GDateTime APIs to simplify code.
      
      Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/libvirt/build-arm64-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/libvirt/build-arm64-libvirt.libvirt-build --summary-out=tmp/150352.bisection-summary --basis-template=146182 --blessings=real,real-bisect libvirt build-arm64-libvirt libvirt-build
Searching for failure / basis pass:
 150347 fail [host=rochester1] / 146182 [host=laxton0] 146156 [host=rochester0] 146103 [host=rochester0] 146061 [host=laxton1] 145969 [host=laxton1] 145906 [host=rochester0] 145842 [host=laxton0] 145779 [host=laxton1] 145511 [host=laxton1] 145212 [host=rochester0] 145173 [host=laxton0] 145133 [host=laxton0] 145054 [host=laxton1] 144995 [host=laxton1] 144958 [host=laxton1] 144920 [host=laxton1] 144885 [host=laxton1] 144853 [host=laxton1] 144828 [host=laxton1] 144802 [host=laxton0] 144517 [host=la\
 xton1] 144501 [host=laxton1] 144408 [host=rochester0] 144368 [host=laxton0] 144345 [host=rochester0] 144318 [host=laxton0] 144304 [host=laxton0] 144290 [host=laxton1] 144279 [host=rochester0] 144260 [host=laxton1] 144244 [host=laxton0] 144233 [host=laxton0] 144215 [host=rochester0] 144204 [host=laxton0] 144192 [host=laxton1] 144181 [host=laxton0] 144165 [host=laxton0] 144144 [host=laxton1] 144097 [host=laxton0] 143904 [host=rochester0] 143789 [host=rochester0] 143589 [host=laxton0] 143484 [host=\
 laxton1] 143391 [host=laxton1] 143316 [host=rochester0] 143263 [host=rochester0] 143218 [host=rochester0] 143189 [host=laxton0] 143140 [host=laxton1] 143051 ok.
Failure / basis pass flights: 150347 / 143051
(tree in basispass but not in latest: libvirt_gnulib)
Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 5e015d48a5ee68ba03addad2698364d8f015afec
Basis pass 8e09cf1d5a6b8bcf21bfb7d409a2ecf94be54ff1 6280c94f306df6a20bbc100ba15a5a81af0366e6 f413d9bee3f6cabd4b11ad0a1ab9ff865092fb16 933ebad2470a169504799a1d95b8e410bd9847ef 120996f147131eca8af90e30c900bc14bc824d9f 518c935fac4d30b3ec35d4b6add82b17b7d7aca3
Generating revisions with ./adhoc-revtuple-generator  git://libvirt.org/libvirt.git#8e09cf1d5a6b8bcf21bfb7d409a2ecf94be54ff1-f718709431429fbb4e1fc6781f3a3752a7f43f70 https://gitlab.com/keycodemap/keycodemapdb.git#6280c94f306df6a20bbc100ba15a5a81af0366e6-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/osstest/ovmf.git#f413d9bee3f6cabd4b11ad0a1ab9ff865092fb16-1c877c716038a862e876cac8f0929bab4a96e849 git://xenbits.xen.org/qemu-xen.git#933ebad2470a169504799a1d95b8e410bd9847ef-410cc30f\
 dc590417ae730d635bbc70257adf6750 git://xenbits.xen.org/osstest/seabios.git#120996f147131eca8af90e30c900bc14bc824d9f-7e9db04923854b7f4edca33948f55abee22907b9 git://xenbits.xen.org/xen.git#518c935fac4d30b3ec35d4b6add82b17b7d7aca3-5e015d48a5ee68ba03addad2698364d8f015afec
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 126446 nodes in revision graph
Searching for test results:
 143051 pass 8e09cf1d5a6b8bcf21bfb7d409a2ecf94be54ff1 6280c94f306df6a20bbc100ba15a5a81af0366e6 f413d9bee3f6cabd4b11ad0a1ab9ff865092fb16 933ebad2470a169504799a1d95b8e410bd9847ef 120996f147131eca8af90e30c900bc14bc824d9f 518c935fac4d30b3ec35d4b6add82b17b7d7aca3
 143085 [host=laxton0]
 143140 [host=laxton1]
 143189 [host=laxton0]
 143218 [host=rochester0]
 143263 [host=rochester0]
 143316 [host=rochester0]
 143391 [host=laxton1]
 143484 [host=laxton1]
 143589 [host=laxton0]
 143789 [host=rochester0]
 143904 [host=rochester0]
 143957 [host=laxton1]
 143959 [host=laxton1]
 143953 [host=laxton0]
 143935 [host=laxton1]
 143954 [host=laxton1]
 143958 [host=laxton1]
 143961 [host=laxton1]
 143962 [host=laxton1]
 143963 [host=laxton1]
 143964 [host=laxton1]
 143966 [host=laxton1]
 143968 [host=laxton1]
 143969 [host=laxton1]
 143971 [host=laxton1]
 143981 [host=laxton0]
 144004 [host=laxton0]
 144038 [host=laxton0]
 144071 [host=laxton1]
 144127 [host=laxton1]
 144097 [host=laxton0]
 144132 [host=laxton1]
 144144 [host=laxton1]
 144165 [host=laxton0]
 144181 [host=laxton0]
 144187 [host=laxton1]
 144188 [host=laxton1]
 144233 [host=laxton0]
 144192 [host=laxton1]
 144196 [host=laxton0]
 144261 [host=rochester0]
 144215 [host=rochester0]
 144204 [host=laxton0]
 144248 [host=rochester0]
 144270 [host=laxton1]
 144253 [host=laxton1]
 144262 [host=laxton0]
 144254 [host=rochester0]
 144260 [host=laxton1]
 144256 [host=rochester0]
 144244 [host=laxton0]
 144263 [host=laxton1]
 144265 [host=laxton0]
 144257 pass irrelevant
 144259 [host=laxton0]
 144271 [host=laxton0]
 144266 [host=laxton1]
 144273 [host=laxton0]
 144279 [host=rochester0]
 144290 [host=laxton1]
 144304 [host=laxton0]
 144318 [host=laxton0]
 144345 [host=rochester0]
 144348 [host=laxton1]
 144368 [host=laxton0]
 144408 [host=rochester0]
 144501 [host=laxton1]
 144517 [host=laxton1]
 144526 [host=laxton1]
 144581 [host=laxton1]
 144580 [host=laxton1]
 144565 [host=laxton1]
 144567 [host=laxton1]
 144568 [host=laxton1]
 144569 [host=laxton1]
 144570 [host=laxton1]
 144571 [host=laxton1]
 144572 [host=laxton1]
 144573 [host=laxton1]
 144575 [host=laxton1]
 144576 [host=laxton1]
 144577 [host=laxton1]
 144633 [host=laxton1]
 144579 [host=laxton1]
 144615 [host=laxton1]
 144689 [host=laxton0]
 144751 [host=laxton0]
 144802 [host=laxton0]
 144778 [host=laxton1]
 144815 [host=laxton1]
 144817 [host=laxton1]
 144828 [host=laxton1]
 144853 [host=laxton1]
 144885 [host=laxton1]
 144958 [host=laxton1]
 144920 [host=laxton1]
 144995 [host=laxton1]
 145054 [host=laxton1]
 145141 pass irrelevant
 145173 [host=laxton0]
 145146 [host=laxton1]
 145133 [host=laxton0]
 145181 [host=rochester0]
 145212 [host=rochester0]
 145511 [host=laxton1]
 145597 [host=laxton0]
 145582 [host=laxton0]
 145542 [host=laxton0]
 145613 [host=laxton0]
 145584 [host=laxton0]
 145598 [host=laxton0]
 145589 [host=laxton0]
 145606 [host=laxton0]
 145600 [host=laxton0]
 145588 [host=laxton0]
 145590 [host=laxton0]
 145575 [host=laxton0]
 145579 [host=laxton0]
 145591 [host=laxton0]
 145580 [host=laxton0]
 145608 [host=laxton0]
 145593 [host=laxton0]
 145615 [host=laxton0]
 145595 [host=laxton0]
 145609 [host=laxton0]
 145596 [host=laxton0]
 145601 [host=laxton0]
 145616 [host=laxton0]
 145604 [host=laxton0]
 145611 [host=laxton0]
 145612 [host=laxton0]
 145617 [host=laxton0]
 145618 [host=laxton0]
 145619 [host=laxton0]
 145656 [host=laxton1]
 145710 [host=laxton0]
 145779 [host=laxton1]
 145842 [host=laxton0]
 145906 [host=rochester0]
 145969 [host=laxton1]
 146061 [host=laxton1]
 146066 [host=laxton1]
 146103 [host=rochester0]
 146182 [host=laxton0]
 146156 [host=rochester0]
 146240 fail irrelevant
 146211 fail irrelevant
 146298 fail irrelevant
 146314 pass 26d9748ff114a060ee751959d108d062f737f5d9 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146293 pass 8e09cf1d5a6b8bcf21bfb7d409a2ecf94be54ff1 6280c94f306df6a20bbc100ba15a5a81af0366e6 f413d9bee3f6cabd4b11ad0a1ab9ff865092fb16 933ebad2470a169504799a1d95b8e410bd9847ef 120996f147131eca8af90e30c900bc14bc824d9f 518c935fac4d30b3ec35d4b6add82b17b7d7aca3
 146303 pass 4de5d01a4ee76e6ea31dc61025e26459473d1104 6280c94f306df6a20bbc100ba15a5a81af0366e6 c8ff8e05afb6a20b1ae66aa80bb8636b664be0b2 933ebad2470a169504799a1d95b8e410bd9847ef c9ba5276e3217ac6a1ec772dbebf568ba3a8a55d 05de315b00bf2951617b8ef28811b1f1f2dd5742
 146305 fail fe1f2bfbe3ca8944df37c6b77f813eaab572a2f7 317d3eeb963a515e15a63fa356d8ebcda7041a51 49accdedf956f175041040e677163b7cbb746283 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7b3c5b70a32303b46d0d051e695f18d72cce5ed0
 146301 pass 330b55682921886dfc1709b6ab6e3c6e72c25629 317d3eeb963a515e15a63fa356d8ebcda7041a51 665afccc52e1a02ee329147e02f04b8e9cf1d571 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 0cd791c499bdc698d14a24050ec56d60b45732e0
 146299 fail irrelevant
 146304 pass ce33c21f238206706ca62d84ffb1fcb7bba74e89 317d3eeb963a515e15a63fa356d8ebcda7041a51 ec8c74e8bcc66a43ff766254e68b0504f68e024f 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 25164571fc11ed3010c5885a98a68fac3b891d33
 146311 fail fe1f2bfbe3ca8944df37c6b77f813eaab572a2f7 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 34492a38606fe2a1a4fb9ba8f17deb3f656961ee
 146309 pass 6c6d93bc62fd2be9ccf07b579c1f10edd3de7e4c 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5abd9cc2cebe7fac001f7bb7b647c47cf54af1a 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 50ea2445f49825208439c864fecb9d9fd8791277
 146310 pass cf44ec557753c2c266c7cb9d1cf0bceb7d613bec 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146312 pass bf7d2a26a3a6c22dd1adbd144815da12f4a40db4 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146313 fail fe1f2bfbe3ca8944df37c6b77f813eaab572a2f7 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146316 fail 810613a60efe3924c536b3663246900bc08910a5 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146318 pass f6a750e678fb0ca3898cba08b6698f079008924c 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146344 [host=rochester0]
 146320 fail 810613a60efe3924c536b3663246900bc08910a5 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146325 pass f6a750e678fb0ca3898cba08b6698f079008924c 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146327 fail 810613a60efe3924c536b3663246900bc08910a5 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146329 pass f6a750e678fb0ca3898cba08b6698f079008924c 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146332 fail 810613a60efe3924c536b3663246900bc08910a5 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
 146374 [host=rochester0]
 146410 [host=rochester0]
 146455 [host=rochester0]
 146509 fail irrelevant
 146489 fail irrelevant
 146528 [host=rochester0]
 146546 fail irrelevant
 146565 fail irrelevant
 146586 fail irrelevant
 146616 [host=rochester0]
 146636 fail irrelevant
 146660 fail irrelevant
 146689 [host=rochester0]
 146737 [host=rochester0]
 146756 [host=rochester0]
 146714 [host=rochester0]
 146775 [host=rochester0]
 146799 [host=rochester0]
 146843 [host=rochester0]
 146921 [host=rochester0]
 146995 fail irrelevant
 147040 fail irrelevant
 147084 fail irrelevant
 147141 fail irrelevant
 147195 fail irrelevant
 147265 fail irrelevant
 147340 fail irrelevant
 147419 [host=rochester0]
 147477 fail irrelevant
 147520 [host=rochester0]
 147583 [host=rochester0]
 147649 fail irrelevant
 147703 [host=rochester0]
 147784 [host=rochester0]
 147736 [host=rochester0]
 147885 [host=rochester0]
 147831 [host=rochester0]
 147981 [host=rochester0]
 148068 [host=rochester0]
 148144 [host=rochester0]
 148196 fail irrelevant
 148269 fail irrelevant
 148331 [host=rochester0]
 148406 [host=rochester0]
 148459 fail irrelevant
 148503 fail irrelevant
 148547 [host=rochester0]
 148615 [host=rochester0]
 148583 [host=rochester0]
 148651 fail irrelevant
 148688 [host=rochester0]
 148729 [host=rochester0]
 148775 [host=rochester0]
 148799 fail irrelevant
 148830 [host=rochester0]
 148887 [host=rochester0]
 148940 [host=rochester0]
 148941 [host=rochester0]
 148943 [host=rochester0]
 148944 [host=rochester0]
 148954 fail irrelevant
 148947 [host=rochester0]
 148915 [host=rochester0]
 148948 [host=rochester0]
 148929 [host=rochester0]
 148950 [host=rochester0]
 148931 [host=rochester0]
 148934 [host=rochester0]
 148936 [host=rochester0]
 148951 [host=rochester0]
 148955 [host=rochester0]
 148956 [host=rochester0]
 148958 [host=rochester0]
 149001 [host=rochester0]
 149043 [host=rochester0]
 149074 [host=rochester0]
 149123 [host=rochester0]
 149154 [host=rochester0]
 149193 [host=rochester0]
 149234 fail irrelevant
 149268 [host=rochester0]
 149314 [host=rochester0]
 149376 [host=rochester0]
 149407 [host=rochester0]
 149434 [host=rochester0]
 149455 fail irrelevant
 149482 fail irrelevant
 149550 [host=rochester0]
 149508 [host=rochester0]
 149590 [host=rochester0]
 149643 [host=rochester0]
 149629 [host=rochester0]
 149615 [host=rochester0]
 149635 fail irrelevant
 149684 fail irrelevant
 149666 fail irrelevant
 149696 fail irrelevant
 149732 fail irrelevant
 149773 fail irrelevant
 149746 fail irrelevant
 149803 fail irrelevant
 149826 fail irrelevant
 149886 fail irrelevant
 149833 fail irrelevant
 149850 fail irrelevant
 149870 fail irrelevant
 149909 fail de7e9840e7f888f1a872c86b0cb793b283193137 317d3eeb963a515e15a63fa356d8ebcda7041a51 e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb 0135be8bd8cd60090298f02310691b688d95c3a8
 149902 fail de7e9840e7f888f1a872c86b0cb793b283193137 317d3eeb963a515e15a63fa356d8ebcda7041a51 e54310451f1ac2ce4ccb90a110f45bb9b4f3ccd6 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb 0135be8bd8cd60090298f02310691b688d95c3a8
 149895 fail irrelevant
 150053 fail irrelevant
 150062 fail irrelevant
 150083 fail irrelevant
 150099 fail irrelevant
 150121 fail 23bf93884c6346206e87c0f14d93f905e8c81267 27acf0ef828bf719b2053ba398b195829413dbdd c8543b8d830d22882dab4ece47f0413f9c6eb431 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
 150155 fail 65a12c467cd683809b4d445b8cf1c3ae250209b2 27acf0ef828bf719b2053ba398b195829413dbdd 88899a372cfc44f8612315f4b43a084d1814fe69 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb a82582b1af6a4a57ca53bcfad9f71428cb5f9a54
 150131 fail 23bf93884c6346206e87c0f14d93f905e8c81267 27acf0ef828bf719b2053ba398b195829413dbdd c8543b8d830d22882dab4ece47f0413f9c6eb431 410cc30fdc590417ae730d635bbc70257adf6750 eaaf726038cb9b2d01312d6430b4e93842aa96eb e0d92d9bd7997c6bcda17a19aba4f3957dd1a2e9
 150146 fail irrelevant
 150170 fail irrelevant
 150210 fail irrelevant
 150228 fail 144dfe4215902b40a9d17fdb326054bbd8e07563 27acf0ef828bf719b2053ba398b195829413dbdd 9099dcbd61c8d22b5eedda783d143c222d2705a3 410cc30fdc590417ae730d635bbc70257adf6750 665dce17c04b574bb0ebcde4cac129c3dd9e681c 664e1bc12f8658da124a4eff7a8f16da073bd47f
 150190 []
 150222 fail 144dfe4215902b40a9d17fdb326054bbd8e07563 27acf0ef828bf719b2053ba398b195829413dbdd 9099dcbd61c8d22b5eedda783d143c222d2705a3 410cc30fdc590417ae730d635bbc70257adf6750 665dce17c04b574bb0ebcde4cac129c3dd9e681c 57880053dd24012e9f59c23b630fefe07e15dc49
 150237 fail irrelevant
 150268 fail irrelevant
 150287 fail irrelevant
 150347 fail f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 5e015d48a5ee68ba03addad2698364d8f015afec
 150351 pass 8e09cf1d5a6b8bcf21bfb7d409a2ecf94be54ff1 6280c94f306df6a20bbc100ba15a5a81af0366e6 f413d9bee3f6cabd4b11ad0a1ab9ff865092fb16 933ebad2470a169504799a1d95b8e410bd9847ef 120996f147131eca8af90e30c900bc14bc824d9f 518c935fac4d30b3ec35d4b6add82b17b7d7aca3
 150352 fail f718709431429fbb4e1fc6781f3a3752a7f43f70 27acf0ef828bf719b2053ba398b195829413dbdd 1c877c716038a862e876cac8f0929bab4a96e849 410cc30fdc590417ae730d635bbc70257adf6750 7e9db04923854b7f4edca33948f55abee22907b9 5e015d48a5ee68ba03addad2698364d8f015afec
 150317 fail irrelevant
 150339 fail irrelevant
Searching for interesting versions
 Result found: flight 143051 (pass), for basis pass
 Result found: flight 150347 (fail), for basis failure
 Repro found: flight 150351 (pass), for basis pass
 Repro found: flight 150352 (fail), for basis failure
 0 revisions at f6a750e678fb0ca3898cba08b6698f079008924c 317d3eeb963a515e15a63fa356d8ebcda7041a51 b948a496150f4ae4f656c0f0ab672608723c80e6 933ebad2470a169504799a1d95b8e410bd9847ef f21b5a4aeb020f2a5e2c6503f906a9349dd2f069 7ad3d07b37e8f3b15772de8bc1367c68ea681eee
No revisions left to test, checking graph state.
 Result found: flight 146318 (pass), for last pass
 Result found: flight 146320 (fail), for first failure
 Repro found: flight 146325 (pass), for last pass
 Repro found: flight 146327 (fail), for first failure
 Repro found: flight 146329 (pass), for last pass
 Repro found: flight 146332 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  810613a60efe3924c536b3663246900bc08910a5
  Bug not present: f6a750e678fb0ca3898cba08b6698f079008924c
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/146332/

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.


  commit 810613a60efe3924c536b3663246900bc08910a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Dec 23 15:37:26 2019 +0000
  
      src: replace strptime()/timegm()/mktime() with GDateTime APIs set
      
      All places where we use strptime/timegm()/mktime() are handling
      conversion of dates in a format compatible with ISO 8601, so we
      can use the GDateTime APIs to simplify code.
      
      Reviewed-by: Fabiano Fidêncio <fidencio@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.124825 to fit
pnmtopng: 42 colors found
Revision graph left in /home/logs/results/bisect/libvirt/build-arm64-libvirt.libvirt-build.{dot,ps,png,html,svg}.
----------------------------------------
150352: tolerable FAIL

flight 150352 libvirt real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/150352/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build           fail baseline untested


jobs:
 build-arm64                                                  pass    
 build-arm64-libvirt                                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun May 24 11:53:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 11:53:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcpBS-00028l-ET; Sun, 24 May 2020 11:52:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0KGn=7G=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1jcpBQ-00028g-IQ
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 11:52:48 +0000
X-Inumbo-ID: 1614e902-9db5-11ea-ae69-bc764e2007e4
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1614e902-9db5-11ea-ae69-bc764e2007e4;
 Sun, 24 May 2020 11:52:47 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.92.3 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1jcpBM-0001Dn-OP; Sun, 24 May 2020 11:52:44 +0000
Date: Sun, 24 May 2020 12:52:44 +0100
From: Tim Deegan <tim@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/shadow: Reposition sh_remove_write_access_from_sl1p()
Message-ID: <20200524115244.GA4663@deinos.phlegethon.org>
References: <20200521090428.11425-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200521090428.11425-1-andrew.cooper3@citrix.com>
User-Agent: Mutt/1.11.1 (2018-12-01)
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org);
 SAEximRunCond expanded to false
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

At 10:04 +0100 on 21 May (1590055468), Andrew Cooper wrote:
> When compiling with SHOPT_OUT_OF_SYNC disabled, the build fails with:
> 
>   common.c:41:12: error: ‘sh_remove_write_access_from_sl1p’ declared ‘static’ but never defined [-Werror=unused-function]
>    static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
>               ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> due to an unguarded forward declaration.
> 
> It turns out there is no need to forward declare
> sh_remove_write_access_from_sl1p() to begin with, so move it to just ahead of
> its first users, which is within a larger #ifdef'd SHOPT_OUT_OF_SYNC block.
> 
> Fix up for style while moving it.  No functional change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thank you!  This is fine, either as-is or with the suggested change to
a switch.

Reviewed-by: Tim Deegan <tim@xen.org>



From xen-devel-bounces@lists.xenproject.org Sun May 24 13:07:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 13:07:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcqKp-00086Z-11; Sun, 24 May 2020 13:06:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcqKn-00086U-JY
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 13:06:33 +0000
X-Inumbo-ID: 631ba97a-9dbf-11ea-9887-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 631ba97a-9dbf-11ea-9887-bc764e2007e4;
 Sun, 24 May 2020 13:06:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=o9Sa5rwF4DubFOgewW1HmNED/sb061upmopCHnqq+3o=; b=4h65I3OsDGwoB+g8gYlc3mnzT
 X47XMeFc/n8tqDS03Dnrs31yVeDZE2622F4Ya6Z8850SwPMLjtIN2X94PDeqe8rX2PY2QLuxsxDAW
 W4zXrHFQVnqz+YjFV25FrVVViMIgrMxE8wx9+E12BT9LLq1LazWie/2K7Ch3/drePQlkw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcqKk-0002NR-UG; Sun, 24 May 2020 13:06:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcqKk-0000IH-Lm; Sun, 24 May 2020 13:06:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcqKk-0000el-L5; Sun, 24 May 2020 13:06:30 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150346-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150346: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=5e015d48a5ee68ba03addad2698364d8f015afec
X-Osstest-Versions-That: xen=5e015d48a5ee68ba03addad2698364d8f015afec
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 13:06:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150346 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150346/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate         fail pass in 150341

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail in 150341 blocked in 150346
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150341
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150341
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150341
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150341
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150341
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150341
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150341
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150341
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150341
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150341
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec
baseline version:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec

Last test of basis   150346  2020-05-24 01:51:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    




Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 24 14:13:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 14:13:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcrMi-0005Xd-6z; Sun, 24 May 2020 14:12:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=S3bL=7G=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jcrMh-0005XY-1D
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 14:12:35 +0000
X-Inumbo-ID: 9d347156-9dc8-11ea-ade6-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d347156-9dc8-11ea-ade6-12813bfff9fa;
 Sun, 24 May 2020 14:12:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:References:Cc:To:From:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kvOL9Cz6N64DzUfTy7+qmNgXhcKQ1cFxJ8rQuEOCVGU=; b=aEHiODXvFYHi9SIQiwX4PG+B9p
 noKVeui+uO+ck5/CRADXWto6vRoTQPpQr+5Etlw+ZuwtJUOXX+Z3pkJWRNc7tv4VUAx9Ob5yaUM/P
 cKUznL4g+trPx4gFePgjXxhX0sUmX4tWx9hJJEIhywGtdQGATESOk3Pht9REmveVccX0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jcrMf-0003rV-LT; Sun, 24 May 2020 14:12:33 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.89)
 (envelope-from <julien@xen.org>)
 id 1jcrMf-0002ur-3R; Sun, 24 May 2020 14:12:33 +0000
Subject: Re: [PATCH 12/12] xen/arm: call iomem_permit_access for passthrough
 devices
From: Julien Grall <julien@xen.org>
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-12-sstabellini@kernel.org>
 <521c8e55-73e8-950f-2d94-70b0c664bd3d@xen.org>
 <alpine.DEB.2.21.2004291318270.28941@sstabellini-ThinkPad-T480s>
 <f7f01eca-2415-e102-318f-0c58606fda96@xen.org>
Message-ID: <453b58f8-d9ee-bbe7-ac05-b5268620e79f@xen.org>
Date: Sun, 24 May 2020 15:12:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <f7f01eca-2415-e102-318f-0c58606fda96@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 30/04/2020 14:01, Julien Grall wrote:
> On 29/04/2020 21:47, Stefano Stabellini wrote:
>> On Wed, 15 Apr 2020, Julien Grall wrote:
>> But doesn't it make sense to give domU permission if it is going to
>> get the memory mapped? But admittedly I can't think of something that
>> would break because of the lack of the iomem_permit_access call in
>> this code path.
> 
> On Arm, the permissions are only useful if you plan for your DomU to 
> delegate the regions to another domain. As your domain is not even 
> aware it is running on Xen (we don't expose a 'xen' node in the DT), 
> it makes little sense to add the permission.

I actually found one use while helping a user last week: you can dump 
the list of MMIO regions assigned to a guest from the Xen console.

This uses d->iomem_caps, which is modified via iomem_permit_access(). 
Without that call, there is no easy way to confirm the list of MMIO 
regions assigned to a guest. Although...

> Even today, you can map IOMEM to a DomU and then revert the permission 
> right after. The IOMEM will still be mapped in the guest and it will 
> act normally.

... this would not help the case where permissions are reverted. But I 
am assuming this shouldn't happen for Dom0less.

Stefano, I am not sure what your plan is for the series itself for Xen 
4.14. I think this patch could go in now. Any thoughts?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sun May 24 17:17:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 17:17:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcuFD-0004Mp-7Y; Sun, 24 May 2020 17:17:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcuFC-0004Mj-36
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 17:17:02 +0000
X-Inumbo-ID: 5e6bd6ac-9de2-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e6bd6ac-9de2-11ea-ae69-bc764e2007e4;
 Sun, 24 May 2020 17:16:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ii8c5HaA6/c2Cfy3+Hg7Fub9NqQT5x5HU8rIfbGDXrA=; b=ghVtanZJyV0GcagO6ceAIl+lb
 P6kno4v9NXQMBqrwjiipqRd4Cm/2d/zqutHIshZPdtRWAVqB7Lz4B3nJ/lODLCIgxZzLfuT9S4x98
 2x6swXzYn42u4zL5E6vb2psmJF78nhwpTbUtmgtbq42Z4YQ7uJ28ZbKM/IQP79BICGlJk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcuF5-00086V-5c; Sun, 24 May 2020 17:16:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcuF4-00089K-Jp; Sun, 24 May 2020 17:16:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcuF4-0003XN-Ik; Sun, 24 May 2020 17:16:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150354-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150354: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=437b0aa06a014ce174e24c0d3530b3e9ab19b18b
X-Osstest-Versions-That: xen=5e015d48a5ee68ba03addad2698364d8f015afec
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 17:16:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150354 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150354/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  437b0aa06a014ce174e24c0d3530b3e9ab19b18b
baseline version:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec

Last test of basis   150335  2020-05-23 01:00:38 Z    1 days
Testing same since   150354  2020-05-24 15:01:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Corey Minyard <cminyard@mvista.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e015d48a5..437b0aa06a  437b0aa06a014ce174e24c0d3530b3e9ab19b18b -> smoke


From xen-devel-bounces@lists.xenproject.org Sun May 24 22:32:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 24 May 2020 22:32:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jcz9p-0005iU-7A; Sun, 24 May 2020 22:31:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2e/K=7G=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jcz9n-0005iP-Vn
 for xen-devel@lists.xenproject.org; Sun, 24 May 2020 22:31:48 +0000
X-Inumbo-ID: 5994cd7e-9e0e-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5994cd7e-9e0e-11ea-ae69-bc764e2007e4;
 Sun, 24 May 2020 22:31:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Kfz1JPzz7r1sCZxRJxPUTPMOEc4oPPrBvODoNhbFmww=; b=xe9mnDfnhspIXYdPLeYhQlmjd
 7YBxPut/+VGIH4yeYdA1EYYlnK6zaGay8bkpheJREBAhKl4v5VncdibN3ZN8IQX669X+OtBTPetHM
 fEVMgJG5kk1ToLIEogrUFc/oJ9oqdzIPFWj5b1BurhJUDQv0ANqC3KPAjLcf6oW2DgWyg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcz9l-0006AT-7L; Sun, 24 May 2020 22:31:45 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jcz9k-0005HU-U2; Sun, 24 May 2020 22:31:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jcz9k-0002YQ-TN; Sun, 24 May 2020 22:31:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150353-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150353: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=fea8f3ed739536fca027cf56af7f5576f37ef9cd
X-Osstest-Versions-That: qemuu=1cc9c62e42445f2346bf0842cafe82c5ba404529
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 24 May 2020 22:31:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150353 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150353/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150332
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150332

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150332
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150332
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150332
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150332
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150332
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                fea8f3ed739536fca027cf56af7f5576f37ef9cd
baseline version:
 qemuu                1cc9c62e42445f2346bf0842cafe82c5ba404529

Last test of basis   150332  2020-05-22 20:08:15 Z    2 days
Testing same since   150353  2020-05-24 14:07:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Mansour Ahmadi <mansourweb@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   1cc9c62e42..fea8f3ed73  fea8f3ed739536fca027cf56af7f5576f37ef9cd -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Mon May 25 02:16:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:16:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd2es-0000qd-30; Mon, 25 May 2020 02:16:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6HJX=7H=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1jd2er-0000qY-0J
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:16:05 +0000
X-Inumbo-ID: ae8e8a58-9e2d-11ea-ae71-12813bfff9fa
Received: from mga09.intel.com (unknown [134.134.136.24])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ae8e8a58-9e2d-11ea-ae71-12813bfff9fa;
 Mon, 25 May 2020 02:16:03 +0000 (UTC)
IronPort-SDR: PzJLQh5JAQ0f2ZMICeFKYd4tW21tuW8yAjhWpOa8ge+30XuPo7+sikBcWqyGwIn52Y8Hj0Y7sl
 0yq0xyQ9dv0w==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga007.fm.intel.com ([10.253.24.52])
 by orsmga102.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 May 2020 19:16:02 -0700
IronPort-SDR: 3iRDry9kWQHFIJQ+1+wxKBRaHYULvy/bSsV1KHhCZiwsxK2QpWBk06JCiMVT1mDtInNyhmQeyU
 qrvCeQ+FSdvQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,431,1583222400"; d="scan'208";a="254793388"
Received: from orsmsx110.amr.corp.intel.com ([10.22.240.8])
 by fmsmga007.fm.intel.com with ESMTP; 24 May 2020 19:16:01 -0700
Received: from ORSEDG001.ED.cps.intel.com (10.7.248.4) by
 ORSMSX110.amr.corp.intel.com (10.22.240.8) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Sun, 24 May 2020 19:16:01 -0700
Received: from NAM11-CO1-obe.outbound.protection.outlook.com (104.47.56.173)
 by edgegateway.intel.com (134.134.137.100) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Sun, 24 May 2020 19:16:01 -0700
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jT+xX5BJsYcSZGgogAeoPyN24Y8y/PMly0xsMV5Mrs6ccdWx1kc1ccpIEBohoNSZdcqmshoU/TCpGa++iJYwBjslpH6n2mt5fr13taxMg4d7ZU7FHNTlGpilkek82OnkIgUV6XISsmIR81QgKFQNrTEjx34Cvz/GW0PHVZpuEtnpJ0vHnTWHWflzeMavKQ4xaXqyO9RZ1AjZM5EQtonGPKKd8bdkch5FtJT3s/Q3A30pTj8IDLiSfaH7yvgb9i/laX/jfD2sCwVKJbfPb+fTiPkdO05Jr2O1O4u0QBdsSrOPJelRzlJlmio9fRsRQrc8yNG6V8AOF3DKiDwZqtvXqA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Dv4DsR2Os+Y+RPx06qPQuuoviqK7msY4b3UoAh36O4U=;
 b=g+qxA5N6Iu+pppSvpxsEeMoFJVdMcySHRLKFC7fmHJadXJ53KUMDdrrm4ISasOz62a/Gek4L7NjGs6aUasdA2A2XbhmoGOU9s/nZBK42OZAIYOe6WH39pwki6XzcrQp+7p+ZHRAprjVtriRVg2gR6FGVyGmTsuHZrZ5FA2wi6najCfLLoG/bl0SC0TFE7gGGE0SUXS5QFDVBwjnB5EyureEYxMYKmOd8qUxzwHF17MRJAY0mPJb5Pc316T/j8RMTe4+WUXlfE88T2xH3OMh7mP9gS84I3bfYVWo7YVQwiLFaYlzTPFXHwlEHGrDD8w1ZdMz2JhFR2YHroyPQZtqoeA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com; 
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Dv4DsR2Os+Y+RPx06qPQuuoviqK7msY4b3UoAh36O4U=;
 b=Rzk9U8cP2XLhy8zEO97NtpgKnJ3NC+H8D4/YLyxB4aeXM50Vrmr/qMtFwcmye/EoHlncewfUsLXVgzfeIv3D+uUxFt+e3jG6IY3Q00SWazONCcjzq79daaEbEWPrTPwUEK945a08x1iTuoE8nd+WBvh79GD9zfBPbuSbfz6ahDA=
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR11MB1792.namprd11.prod.outlook.com (2603:10b6:300:10b::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.23; Mon, 25 May
 2020 02:15:59 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41%2]) with mapi id 15.20.3021.029; Mon, 25 May 2020
 02:15:59 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, buy computer <buycomputer40@gmail.com>
Subject: RE: iommu=no-igfx
Thread-Topic: iommu=no-igfx
Thread-Index: AQHWKBPp26X0x5YGAUu393CB+jzHkKiw3nQAgAdCTsA=
Date: Mon, 25 May 2020 02:15:59 +0000
Message-ID: <MWHPR11MB1645DC1C5782DDA28C9BB1CB8CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <CANSXg2FGtiDT05sQUpSAshAsdP4wSjPgQbfw_+aKJuAzSwvJuQ@mail.gmail.com>
 <da7e41b5-88a1-13ab-d52b-0652c16608af@suse.com>
In-Reply-To: <da7e41b5-88a1-13ab-d52b-0652c16608af@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-version: 11.2.0.6
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.55.52.208]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: b498a320-528f-4774-ce17-08d800519114
x-ms-traffictypediagnostic: MWHPR11MB1792:
x-microsoft-antispam-prvs: <MWHPR11MB1792D603F0B66027C4681E198CB30@MWHPR11MB1792.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:6430;
x-forefront-prvs: 0414DF926F
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: yYLd2AAJZjDopHLJwKJaUdDPT4wxQzSdMqO7ilBl1WV0BndpE9l/GhpJhBQVw2n/RAIvWwFaVtX3K6e0XwMaxk1XV81XIFx8ugcO6o4oMWv/c8JVdnppsOMmI5pevGSyTEk3nSFMYpu0M1+8OyjjuFAAIT5VMfk1Jfn7dvIMRFsP+xNVR9zPUnemO9wq2Lv5SqetcFmPAkhqGaBUV+F9R0G5QEa5riEaQawvIE4mFq7McJaBod/Xo3RZ+f7jLXIxJmfdxHM67azUlWPBQs1bcwE3Q5XThDgeCmYKhoFhs4y3qKLO4YX/Gqpy54XlBKgbkjr976cU2PQjZVbwgXi9F5pcmwC9hr6CN28Kn9NkGe7fQy3AFC9JZxcKlKyyIuJCWEKkIa5vL2zuhLaJm2qIvA==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:MWHPR11MB1645.namprd11.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(136003)(346002)(396003)(39860400002)(376002)(366004)(52536014)(110136005)(55016002)(71200400001)(478600001)(9686003)(186003)(8676002)(5660300002)(316002)(8936002)(3480700007)(2906002)(26005)(86362001)(53546011)(4326008)(7696005)(66946007)(33656002)(66476007)(66556008)(64756008)(76116006)(6506007)(66446008);
 DIR:OUT; SFP:1102; 
x-ms-exchange-antispam-messagedata: YFw0uibh4Dpm1BA7mGNithu/ihv+/+8AMBhUuR7J36yUpGOROq1AiJVPfJPK1xfo6C8+mVc8rGVBUs7A6T2L8F3sYvLZCiBj2Dc0C99z02QJ9Aun5acIlLvtOT+r+XVGobD8jD7Iy65GtVowbLkea/n1Sprg42M1Andx3IPVtE40snGOx22YCNIoNcj5jUdfcdvVh8h7/mXduPtceEJeHWwGhbUbNn272lJsRDqhIjfgTTZiyDMd/apcncAv0zxepzkiCsk6Qih8us7lFcunNKqPfotcvktYX9d9YbEuM6atBdZY4kq0sS+WMth854U+zg/+bsstu3Iehfob8/nIgnyWolva5ebi9fiwlHoSsP7UldfTdYy7A69pdnMGklpQEFdi4YlXGiSrjBm/TfAKLtpJb1+B5vvSH+V6poDYz+WzTAwWu5XSjFXMT6rOhTQVWyFcaZNOMAIILpDKEfhOKYVlQbGXdc+TmAo8z5WnGkE=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-Network-Message-Id: b498a320-528f-4774-ce17-08d800519114
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 May 2020 02:15:59.7566 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: yUgdS3UALeEI3bebw921luyIb7efvGaQU7/3y34BxOqgvxn3cUMgkUbek7aVXu8joK5Qgd33wEtUDHjjmEb4DA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1792
X-OriginatorOrg: intel.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Wednesday, May 20, 2020 7:11 PM
>
> On 11.05.2020 19:43, buy computer wrote:
> > I've been working on a Windows 10 HVM on a Debian 10 dom0. When I was
> > first trying to make the VM, I was getting IOMMU errors. I had a hard
> > time figuring out what to do about this, and finally discovered that
> > putting iommu=no-igfx in the grub stopped the errors.
> >
> > Unfortunately, without the graphics support the VM is understandably
> > slow, and can crash. I was also only now pointed to the page
> > <https://xenbits.xen.org/docs/unstable/misc/xen-command-line.html#iommu>
> > which says to report any errors that get fixed by using iommu=no-igfx.

What are the platform and Linux kernel version in this context?

>
> Thanks for the report. For context I'll quote the commit message of
> the commit introducing the option as well as the request to report
> issues fixed with it:
>
> "As we still cannot find a proper fix for this problem, this patch adds
>  iommu=igfx option to control whether Intel graphics IOMMU is enabled.
>  Running Xen with iommu=no-igfx is similar to running Linux with
>  intel_iommu=igfx_off, which disables IOMMU for Intel GPU. This can be
>  used by users to manually workaround the problem before a fix is
>  available for i915 driver."
>
> This was in 2015, referencing Linux >= 3.19. I have no idea whether
> the underlying driver issue(s) has/have been fixed. The addresses
> referenced are variable enough and all within RAM, so I'd conclude
> this is not a "missing RMRR" issue.

Variable enough, but not within RAM. From the E820:

(XEN)  0000000100000000 - 0000000871800000 (usable)

But the referenced addresses are way higher:

(XEN) [VT-D]DMAR:[DMA Read] Request device [0000:00:02.0] fault addr 76c615d000, iommu reg = ffff82c000a0c000
(XEN) [VT-D]DMAR: reason 06 - PTE Read access is not set

>
> Cc-ing the VT-d maintainer for possible insights or thoughts.
>
> Jan

I don't have other thoughts beyond the weird addresses. It might be
good to add some tracing in dom0's i915 driver to see whether those
addresses are intended or not.

Thanks
Kevin
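For reference, the iommu=no-igfx workaround discussed in this thread goes on the Xen hypervisor command line, not the dom0 kernel line. A minimal sketch for a Debian-style GRUB setup (the file path and the variable's existing contents are assumptions; adjust to your system):

```shell
# /etc/default/grub -- append the option to the Xen hypervisor command line.
# GRUB_CMDLINE_XEN_DEFAULT is the variable Debian's grub-mkconfig uses for
# Xen boot entries (GRUB_CMDLINE_LINUX_* only affects the dom0 kernel).
GRUB_CMDLINE_XEN_DEFAULT="iommu=no-igfx"
```

After editing, regenerate the configuration with `update-grub` and reboot; the option disables the IOMMU for the integrated graphics device only, leaving it active for other devices.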


From xen-devel-bounces@lists.xenproject.org Mon May 25 02:23:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:23:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd2m4-0001ij-Rh; Mon, 25 May 2020 02:23:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6HJX=7H=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1jd2m3-0001ie-QA
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:23:31 +0000
X-Inumbo-ID: b9109682-9e2e-11ea-ae71-12813bfff9fa
Received: from mga17.intel.com (unknown [192.55.52.151])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9109682-9e2e-11ea-ae71-12813bfff9fa;
 Mon, 25 May 2020 02:23:30 +0000 (UTC)
IronPort-SDR: 50EyUgrY01Bi04spkkuLiQSIBaSCvBivcvalZ07IiGvAPRseYkrc/I4kIUqsW+XUdglCdLmHZv
 MQh9Vc1gmu+w==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga004.jf.intel.com ([10.7.209.38])
 by fmsmga107.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 May 2020 19:23:29 -0700
IronPort-SDR: unpRITaDVihedDHsd1pyS0il930zIp7e3H+1/vBeNO39cLG0/h1MIY1DfLYgokUfZEkPDiEtgX
 yysZKGig247A==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,431,1583222400"; d="scan'208";a="413373530"
Received: from fmsmsx107.amr.corp.intel.com ([10.18.124.205])
 by orsmga004.jf.intel.com with ESMTP; 24 May 2020 19:23:28 -0700
Received: from fmsmsx603.amr.corp.intel.com (10.18.126.83) by
 fmsmsx107.amr.corp.intel.com (10.18.124.205) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Sun, 24 May 2020 19:23:28 -0700
Received: from fmsmsx603.amr.corp.intel.com (10.18.126.83) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Sun, 24 May 2020 19:23:28 -0700
Received: from FMSEDG002.ED.cps.intel.com (10.1.192.134) by
 fmsmsx603.amr.corp.intel.com (10.18.126.83) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.1713.5
 via Frontend Transport; Sun, 24 May 2020 19:23:27 -0700
Received: from NAM12-DM6-obe.outbound.protection.outlook.com (104.47.59.174)
 by edgegateway.intel.com (192.55.55.69) with Microsoft SMTP Server (TLS) id
 14.3.439.0; Sun, 24 May 2020 19:23:27 -0700
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TrcE6iMwum8rLs8kCGb0YLOmGqqT0jKr1ilaNfl2bp7FxhWhsDyoc/4jwO2k5oTFQeKMitUv2LJ/KvmE7cwZ+ansGlTgkunQAkhCxQKMnVH551wgemQcc3KuWEWgn36Zh4aHnIKZleNW27qpIjJk8uvjCqvTrqz776jsc+Oozgwant+WO9B7tQakwTqJNUCMBbo1FIr8Xl0ZKvqQUQQdmv6yD5ipFY1p0Q3B1sbG+UCpsaSe7CrnvlCjGXbUrbxtFndwC0oZUb7VqwBQ8u1Rwnc4wKVm2WtF7AF1xmvIIxcdYOuc6FcR1pCOBbme0/AGK7ix7wOgukzY7nZzgJ3Oeg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rqPAxylaS6SBe/APk128XonbFl3wXDRqrfyy3LBRXco=;
 b=HlakCpWjneAQRHRfbD6TFjGL4WByw9A2EYxBIZBkaO1Mu+l88qI53qE4Nm3ht+ngWsqol0qaYH3D96QO5sPhrQpc8/TwOLSbMaw6skredXbhVU1UR+uw4rmfTqFBvzk4O9HOnNeYy9/ZnYkdBCPDQTUxFGem/8fArf+v3sQ9GswHyldPekDraOjXA6sSQX3TGep/QIwCKxqYiJVkOq4kbB81ihqBAWJLyrACv6V8oESujStsEiOQ6sboZu2ITjNnGZsw5tnC+zTZwpXEL3aKrVVdn6+OMtfs/5bLl6CddNwvWpWhNKZwDMvWR63+CzyrVFu34Gt9dUqrYby4W5BYjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com; 
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rqPAxylaS6SBe/APk128XonbFl3wXDRqrfyy3LBRXco=;
 b=Kw5Bv4IWih1pHNed/jaiOr2X6vUzjEpDfW2IDDIfMmIRPIABQbTdiiEx8yDznQg1Xzk4+fQ/68Uojc669Gi/poB5r3OfJdreLaKVUkQO8TCO0O4fvcQXE+04dz/YQxcHrWki5amOkjTzqjM1JMKYcRti5L3wZ8Zgb6XNdEUYiiE=
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR11MB1693.namprd11.prod.outlook.com (2603:10b6:300:2b::21) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.23; Mon, 25 May
 2020 02:23:26 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41%2]) with mapi id 15.20.3021.029; Mon, 25 May 2020
 02:23:26 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Juergen Gross <jgross@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v10 01/12] xen/vmx: let opt_ept_ad always reflect the
 current setting
Thread-Topic: [PATCH v10 01/12] xen/vmx: let opt_ept_ad always reflect the
 current setting
Thread-Index: AQHWLa4sYjkLeqlY40OG/TZ4iiK9pai4G3aw
Date: Mon, 25 May 2020 02:23:26 +0000
Message-ID: <MWHPR11MB1645A00E365615DD577663E98CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-2-jgross@suse.com>
In-Reply-To: <20200519072106.26894-2-jgross@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-version: 11.2.0.6
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.55.52.208]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: f6d8fad5-8a42-4e61-e553-08d800529b45
x-ms-traffictypediagnostic: MWHPR11MB1693:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR11MB1693C9F0D50D5599ED1F23988CB30@MWHPR11MB1693.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-forefront-prvs: 0414DF926F
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: Y8i8Z+/EfC9t82DlZmERcA8RsqhPxdjr+KZuk3TaBJ+EX/UVchrRda0RksgQfE0T93o8ReXeE9YV4kGsfZf7eMpYgxE1qv5CfmmiM8fmrn+0O39YsAe8UJMh6UAi0kbmwtDKwCBa8l9t1K+21O6DlaH72baF+K93MCSF2raFMTFbval9ZpZkyPaXUj7neUIpXhq4KrlwtualfuT5nExCcqBqhQEhRdNS0o62mnu7FQ/CbCE2dXkVLlQx4ntKaY5Q8YinKSpVerNn96TZVFzTEARg8WuPxw9eG3tmNy0t6GMVBthiJre43SUQlLleZ7OEL8H3Bwgk5Tcn2tg3ym0NQw==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:MWHPR11MB1645.namprd11.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(39860400002)(376002)(346002)(136003)(366004)(396003)(71200400001)(9686003)(55016002)(33656002)(316002)(5660300002)(66446008)(110136005)(2906002)(54906003)(478600001)(52536014)(64756008)(66556008)(4326008)(86362001)(66476007)(76116006)(66946007)(8676002)(8936002)(7696005)(6506007)(26005)(186003);
 DIR:OUT; SFP:1102; 
x-ms-exchange-antispam-messagedata: jbsyxuNBiet+V48FbTLP0JGJaLMbuAoLgpwztKLc2gr9rAHaWBsFqwezX/vbe+wcJ+T6PH22XDAGkZ4qwuv2XeHf5mM9MqkBEFvIGpj6uuejwxn4fJ4PbYzYgBvNgYeR9EsEYCDYDMz7bxddXyiNLmj+kcwXiaJuiB/AtJhb2wrO/J20FLl1T5eJO0basqe94bSgMz/LTxaMEq6Q1RwjfE9Ya1schDHbHVcdoe6UmZjQdeQeaezjjBbdoXDwQMRBervqp7Owagfy3HHG5daPLfIr4hzAc1Pvr76H1zMU7NliyJFCdhqcpa59rXwF+XpzSnVpffUaVAroEOV4kOdQ4X2lEWuNzlxYeekAeez3rPoYlIgBQd3h3QQ550qgXnK1yYuS+fQ4n5jYMFTV1wfNLB8sZYt9yYbXu6pBvH2hLujc+lSfZ2mBvxQCK1Ci93pUI5rNCs8MwE15X7GQFcr/zq6Bi7bQkadTK/ZinUSArxc=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-Network-Message-Id: f6d8fad5-8a42-4e61-e553-08d800529b45
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 May 2020 02:23:26.3783 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: GXa9uzuGxqzVz6TtvJoPSP5CvtNPlZwNMBF7qH8lhpyvXfmn84wnCQeFjHIEzcLEteSCFtmz7nE3jvXTcrljcw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1693
X-OriginatorOrg: intel.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, "Nakajima, Jun" <jun.nakajima@intel.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> From: Juergen Gross <jgross@suse.com>
> Sent: Tuesday, May 19, 2020 3:21 PM
>
> In case opt_ept_ad has not been set explicitly by the user via command
> line or runtime parameter, it is treated as "no" on Avoton cpus.
>
> Change that handling by setting opt_ept_ad to 0 for this cpu type
> explicitly if no user value has been set.
>
> By putting this into the (renamed) boot time initialization of vmcs.c
> _vmx_cpu_up() can be made static.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
>  xen/arch/x86/hvm/vmx/vmcs.c        | 22 +++++++++++++++-------
>  xen/arch/x86/hvm/vmx/vmx.c         |  4 +---
>  xen/include/asm-x86/hvm/vmx/vmcs.h |  3 +--
>  3 files changed, 17 insertions(+), 12 deletions(-)
>
> diff --git a/xen/arch/x86/hvm/vmx/vmcs.c b/xen/arch/x86/hvm/vmx/vmcs.c
> index 4c23645454..221af9737a 100644
> --- a/xen/arch/x86/hvm/vmx/vmcs.c
> +++ b/xen/arch/x86/hvm/vmx/vmcs.c
> @@ -315,10 +315,6 @@ static int vmx_init_vmcs_config(void)
>
>          if ( !opt_ept_ad )
>              _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
> -        else if ( /* Work around Erratum AVR41 on Avoton processors. */
> -                  boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4d &&
> -                  opt_ept_ad < 0 )
> -            _vmx_ept_vpid_cap &= ~VMX_EPT_AD_BIT;
>
>          /*
>           * Additional sanity checking before using EPT:
> @@ -652,7 +648,7 @@ void vmx_cpu_dead(unsigned int cpu)
>      vmx_pi_desc_fixup(cpu);
>  }
>
> -int _vmx_cpu_up(bool bsp)
> +static int _vmx_cpu_up(bool bsp)
>  {
>      u32 eax, edx;
>      int rc, bios_locked, cpu = smp_processor_id();
> @@ -2108,9 +2104,21 @@ static void vmcs_dump(unsigned char ch)
>      printk("**************************************\n");
>  }
>
> -void __init setup_vmcs_dump(void)
> +int __init vmx_vmcs_init(void)
>  {
> -    register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
> +    int ret;
> +
> +    if ( opt_ept_ad < 0 )
> +        /* Work around Erratum AVR41 on Avoton processors. */
> +        opt_ept_ad = !(boot_cpu_data.x86 == 6 &&
> +                       boot_cpu_data.x86_model == 0x4d);
> +
> +    ret = _vmx_cpu_up(true);
> +
> +    if ( !ret )
> +        register_keyhandler('v', vmcs_dump, "dump VT-x VMCSs", 1);
> +
> +    return ret;
>  }
>
>  static void __init __maybe_unused build_assertions(void)
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 6efa80e422..11a4dd94cf 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2482,7 +2482,7 @@ const struct hvm_function_table * __init start_vmx(void)
>  {
>      set_in_cr4(X86_CR4_VMXE);
>
> -    if ( _vmx_cpu_up(true) )
> +    if ( vmx_vmcs_init() )
>      {
>          printk("VMX: failed to initialise.\n");
>          return NULL;
> @@ -2553,8 +2553,6 @@ const struct hvm_function_table * __init start_vmx(void)
>          vmx_function_table.get_guest_bndcfgs = vmx_get_guest_bndcfgs;
>      }
>
> -    setup_vmcs_dump();
> -
>      lbr_tsx_fixup_check();
>      bdf93_fixup_check();
>
> diff --git a/xen/include/asm-x86/hvm/vmx/vmcs.h b/xen/include/asm-x86/hvm/vmx/vmcs.h
> index 95c1dea7b8..906810592f 100644
> --- a/xen/include/asm-x86/hvm/vmx/vmcs.h
> +++ b/xen/include/asm-x86/hvm/vmx/vmcs.h
> @@ -21,11 +21,10 @@
>  #include <xen/mm.h>
>
>  extern void vmcs_dump_vcpu(struct vcpu *v);
> -extern void setup_vmcs_dump(void);
> +extern int vmx_vmcs_init(void);
> extern int  vmx_cpu_up_prepare(unsigned int cpu);
>  extern void vmx_cpu_dead(unsigned int cpu);
>  extern int  vmx_cpu_up(void);
> -extern int  _vmx_cpu_up(bool bsp);
>  extern void vmx_cpu_down(void);
>
>  struct vmcs_struct {
> --
> 2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:33:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:33:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd2vj-0002ep-Tb; Mon, 25 May 2020 02:33:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6HJX=7H=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1jd2vj-0002ek-Cl
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:33:31 +0000
X-Inumbo-ID: 1e1f5ed6-9e30-11ea-b9cf-bc764e2007e4
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e1f5ed6-9e30-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 02:33:29 +0000 (UTC)
IronPort-SDR: uHwM7ujdEqmmpeOYslF5Lbpw0yuH32rFRDdNIjMkmQLTcBvLrz3EEQT2uenvPFxahl1JxZ0fO1
 6D3+K1MZyrGQ==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 24 May 2020 19:33:28 -0700
IronPort-SDR: vMHLu0t7D/NyD7s4L6q7Xwpl80XDocytsW1mjDz5sa7gdObN8qVglpaiz/SvPVY2A41pMzEVTA
 OcyyXWxCTAkA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,431,1583222400"; d="scan'208";a="467829781"
Received: from orsmsx109.amr.corp.intel.com ([10.22.240.7])
 by fmsmga005.fm.intel.com with ESMTP; 24 May 2020 19:33:27 -0700
Received: from orsmsx604.amr.corp.intel.com (10.22.229.17) by
 ORSMSX109.amr.corp.intel.com (10.22.240.7) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Sun, 24 May 2020 19:33:27 -0700
Received: from orsmsx611.amr.corp.intel.com (10.22.229.24) by
 ORSMSX604.amr.corp.intel.com (10.22.229.17) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Sun, 24 May 2020 19:33:27 -0700
Received: from ORSEDG002.ED.cps.intel.com (10.7.248.5) by
 orsmsx611.amr.corp.intel.com (10.22.229.24) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.1713.5
 via Frontend Transport; Sun, 24 May 2020 19:33:26 -0700
Received: from NAM11-DM6-obe.outbound.protection.outlook.com (104.47.57.170)
 by edgegateway.intel.com (134.134.137.101) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Sun, 24 May 2020 19:33:26 -0700
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XNCWDyBIO5ILJEmT7XnLzB7e8TJnULbR4hHg++ElgAEjzzGK/bTuDtliXaO8wCIRL7UYlnyw/Y5Plh8BNejOrq3e15UEl5wB4K1RJzpDRZrEF20XL2FdVmsrFkfAS6+7VEwli+llVOIkCFkfFSOJ5rDep6AZ9yk45OYpkkmKaJds6vsDAYkWrHEQxncAvUZPNbjwcrujWF4Q9hbYLBj+Q4mF2wK80fkrKnxAVTLBExQ3SFfsK/oMfmMtXr9pTb5BLpxmuD4Hvm5CsLvaqejEfqt8co13RQicZppdQRyj7FSr7NXGDExLOXiczX2pugGwVQzwfJI6ycE6L7jD6oWXfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7HgphL6cEfsGgB11HgodpgfBNCryVIgilxOHkxsMPl4=;
 b=fAcAwuA8/wPSx9I/txwU/gIE6SjmTKlOpNMqnsUD6QYjRfNM6bGTfJbxkhg6Py+Zo+AYX3XyCI17JtcEvCRjzJ+5pWnhM0svuKS7BzN18GBgVKBSPpUO+D6bu9s69DsNma21nB+k8vCt4pbTBojHzfXgYvKs5dwmCWGDLdCyU49VJ4/+ScJAzYHk/sBoLrgq5MuUpwDKG2u/d0CKoUHuvk58Nge+2Qk0xQgN4YYcOQc1UHCS1iIs+HF4VmuizHKxcDY9JmDBktCv94XfHz/UWNd0wYm4AhG2hsIxSZ0iXJiXC03OkZZifCffzLY3cp7bCgzwl2MBu0XOUyr4vcHxww==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com; 
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7HgphL6cEfsGgB11HgodpgfBNCryVIgilxOHkxsMPl4=;
 b=fsB9UVAEt5HvXl0aAmwqqSc62EQBazYJcJenWIxwGuboPzUp4CJrsgUiz6hy7KOChnxE2gSsdDbTXoJv+00NAnbO8/ZA1/rqNNZx0uuJAQNmgLbKqDe7bi/1QREfYFEva3T6zhS1BVp041KyOvA0yXiJoLuvmHqoWKSrTxWABvo=
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR11MB1406.namprd11.prod.outlook.com (2603:10b6:300:23::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Mon, 25 May
 2020 02:33:25 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41%2]) with mapi id 15.20.3021.029; Mon, 25 May 2020
 02:33:25 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: "Lengyel, Tamas" <tamas.lengyel@intel.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
Thread-Topic: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
Thread-Index: AQHWMFbYrI+6YEB8D0iGHN/GPjct36i4F49g
Date: Mon, 25 May 2020 02:33:25 +0000
Message-ID: <MWHPR11MB1645167038AA1273F6CC6D848CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
In-Reply-To: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-version: 11.2.0.6
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: intel.com; dkim=none (message not signed)
 header.d=none;intel.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.55.52.208]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 6fcf4800-cd2a-4467-a9e5-08d800540056
x-ms-traffictypediagnostic: MWHPR11MB1406:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR11MB140632FDFDE702EE4F125C628CB30@MWHPR11MB1406.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:4125;
x-forefront-prvs: 0414DF926F
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: F1mh5Po4PD8nwEOrpPJIv0ZrLOMZhzPWNd0NKGwH+ud/DfsIgSL0qZqsEdkB9zlqCTXDQWBRYPNFDPo+shQ5+Fv76fwBr+IC8UfpbJ2sJcrQyRHJwBLjhZOX8PiHH8wOkDO/gkpkMIdhbfL5eEd1othrTWbCxdziml1/P9l43578UhwA6oOYlcTHkxQPSmgPlfOmp2ErqnQJrsTWk68INnTi4h5ueRyUNLifhL4ZKGw0fr2YaBCHrGXlpvSMGPJdMdRTS79erWqNQ3M8royYw7/BEX1LUHNEp7dxgpORprKvo61hwDjG+g5UgHbVWifwOg2QExS/gHrY/+GOS7xsbA==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:MWHPR11MB1645.namprd11.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(39860400002)(376002)(136003)(366004)(346002)(396003)(478600001)(316002)(6506007)(33656002)(86362001)(64756008)(66946007)(66476007)(66446008)(7416002)(7696005)(66556008)(4326008)(110136005)(52536014)(76116006)(54906003)(55016002)(71200400001)(8936002)(2906002)(9686003)(26005)(186003)(8676002)(5660300002);
 DIR:OUT; SFP:1102; 
x-ms-exchange-antispam-messagedata: qywZ3SKdBI1eiMtachTZwos3cQ86EFYvezgMFmjTl7nR0RiBPU0ZHNYyHZcMZULSrdwvf7btbhTCuPi7B4E8ZBwB34U/yJSwXF4+ia2Cwl6QAzrNsFSh7svo8A/cvqVlWEAfjDrUz75njaofAFZAOPyMBzwspm4ZZEWa+lC9qAXryR9KoPTLbsEEct/pnzsD4TEofrJlGQV2zNDgJJIryIupXlEeel5XnxYsDsMuLGZuuOuWMTpiiOnvH5wkhLHuUaCE+WQqe70Bo2pAY1P/HdcWRglckA0cGD8cUU8kzdkK27z+M/fS4+f8F1uDNQCYd4KoFbQQLLJrLtTnU7Qirq37r/xXvc5glCypQH+7+8mzYik1j5yDRFVk6YNOoYOrn8kggoCMD0NqaH1OCgQCfJfy/+T2VljAicBpUw6MQNNniZVO2Z+066dncv2w46XEPht1e2Ncav1l2QXYBeRrsDA8/UJupZ910Zz5PDj0wm4=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-Network-Message-Id: 6fcf4800-cd2a-4467-a9e5-08d800540056
X-MS-Exchange-CrossTenant-originalarrivaltime: 25 May 2020 02:33:25.3738 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: W9rfYKGTWtMqIId5strOUxJo0dHJSl5wwzV60tGhw5E/MH/qcIgbhjt7oLve/h4Ixb+nBoWsOm3XTq7lGFNorw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1406
X-OriginatorOrg: intel.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, "Nakajima, Jun" <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Tamas
 K Lengyel <tamas@tklengyel.com>, Jan Beulich <jbeulich@suse.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> From: Lengyel, Tamas <tamas.lengyel@intel.com>
> Sent: Saturday, May 23, 2020 12:34 AM
>=20
> When running shallow forks without device models it may be undesirable fo=
r
> Xen

what is shallow forks? and why interrupt injection is not desired without
device model? If it means just without Qemu thing, you still get local APIC
interrupts such as timers, PMI, etc.

> to inject interrupts. With Windows forks we have observed the kernel goin=
g
> into
> infinite loops when trying to process such interrupts, likely because it
> attempts

what is the relationship between shallow forks and windows forks then?

> to interact with devices that are not responding without QEMU running. By
> disabling interrupt injection the fuzzer can exercise the target code wit=
hout
> interference.

what is the fuzzer?

>=20
> Forks & memory sharing are only available on Intel CPUs so this only appl=
ies
> to vmx.

I feel lots of background is missing thus difficult to judge whether below =
change
is desired...

>=20
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
> ---
> v2: prohibit =3D> block
>     minor style adjustments
> ---
>  xen/arch/x86/hvm/vmx/intr.c      | 6 ++++++
>  xen/arch/x86/mm/mem_sharing.c    | 6 +++++-
>  xen/include/asm-x86/hvm/domain.h | 2 ++
>  xen/include/public/memory.h      | 1 +
>  4 files changed, 14 insertions(+), 1 deletion(-)
>=20
> diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
> index 000e14af49..80bfbb4787 100644
> --- a/xen/arch/x86/hvm/vmx/intr.c
> +++ b/xen/arch/x86/hvm/vmx/intr.c
> @@ -256,6 +256,12 @@ void vmx_intr_assist(void)
>      if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
>          return;
>=20
> +#ifdef CONFIG_MEM_SHARING
> +    /* Block event injection for VM fork if requested */
> +    if ( unlikely(v->domain->arch.hvm.mem_sharing.block_interrupts) )
> +        return;
> +#endif
> +
>      /* Crank the handle on interrupt state. */
>      pt_vector =3D pt_update_irq(v);
>=20
> diff --git a/xen/arch/x86/mm/mem_sharing.c
> b/xen/arch/x86/mm/mem_sharing.c
> index 7271e5c90b..0c45a8d67e 100644
> --- a/xen/arch/x86/mm/mem_sharing.c
> +++ b/xen/arch/x86/mm/mem_sharing.c
> @@ -2106,7 +2106,8 @@ int
> mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op
> _t) arg)
>          rc =3D -EINVAL;
>          if ( mso.u.fork.pad )
>              goto out;
> -        if ( mso.u.fork.flags & ~XENMEM_FORK_WITH_IOMMU_ALLOWED )
> +        if ( mso.u.fork.flags &
> +             ~(XENMEM_FORK_WITH_IOMMU_ALLOWED |
> XENMEM_FORK_BLOCK_INTERRUPTS) )
>              goto out;
>=20
>          rc =3D rcu_lock_live_remote_domain_by_id(mso.u.fork.parent_domai=
n,
> @@ -2134,6 +2135,9 @@ int
> mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op
> _t) arg)
>              rc =3D hypercall_create_continuation(__HYPERVISOR_memory_op,
>                                                 "lh", XENMEM_sharing_op,
>                                                 arg);
> +        else if ( !rc && (mso.u.fork.flags &
> XENMEM_FORK_BLOCK_INTERRUPTS) )
> +            d->arch.hvm.mem_sharing.block_interrupts =3D true;
> +
>          rcu_unlock_domain(pd);
>          break;
>      }
> diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-
> x86/hvm/domain.h
> index 95fe18cddc..37e494d234 100644
> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -74,6 +74,8 @@ struct mem_sharing_domain
>       * to resume the search.
>       */
>      unsigned long next_shared_gfn_to_relinquish;
> +
> +    bool block_interrupts;
>  };
>  #endif
>=20
> diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
> index dbd35305df..1e4959638d 100644
> --- a/xen/include/public/memory.h
> +++ b/xen/include/public/memory.h
> @@ -537,6 +537,7 @@ struct xen_mem_sharing_op {
>          struct mem_sharing_op_fork {      /* OP_FORK */
>              domid_t parent_domain;        /* IN: parent's domain id */
>  #define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
> +#define XENMEM_FORK_BLOCK_INTERRUPTS   (1u << 1)
>              uint16_t flags;               /* IN: optional settings */
>              uint32_t pad;                 /* Must be set to 0 */
>          } fork;
> --
> 2.25.1
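The hunk above extends the standard flag-validation idiom: mask the caller's flags with the complement of every known bit, and reject the request if anything unknown is set. A minimal self-contained sketch of that idiom (the `validate_fork_flags()` helper is hypothetical; the flag values mirror the public header shown in the patch):

```c
#include <stdint.h>

/* Values mirror xen/include/public/memory.h as shown in the patch. */
#define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
#define XENMEM_FORK_BLOCK_INTERRUPTS   (1u << 1)

/* validate_fork_flags() is a hypothetical stand-alone helper: masking
 * the flags with the complement of all known bits isolates any unknown
 * bits, and any unknown bit set gets rejected -- the same idiom the
 * hunk above extends with the new XENMEM_FORK_BLOCK_INTERRUPTS bit. */
static int validate_fork_flags(uint16_t flags)
{
    if (flags & ~(XENMEM_FORK_WITH_IOMMU_ALLOWED |
                  XENMEM_FORK_BLOCK_INTERRUPTS))
        return -1;  /* the hypercall path returns -EINVAL here */
    return 0;
}
```

Because unknown bits fail today, a future flag can be added without breaking existing callers, which by definition never set it.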



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3CR-0004LJ-CW; Mon, 25 May 2020 02:50:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3CQ-0004L8-Id
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:50:46 +0000
X-Inumbo-ID: 884632d8-9e32-11ea-9887-bc764e2007e4
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 884632d8-9e32-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 02:50:46 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id l3so7511743qvo.7
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:50:46 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=ds4Pyg1znJ3S0Yg3vWRMgNmyuvMQ1euqy6dagIketK0=;
 b=L6qXpcjjOcA1qYvY32zs5lbEBNrn4lYO4CNDF/NB7xae9LlITGYHYax3YmsIOiM+Sp
 jtSgD7NFMXilhhjJvcKaB3m53Bb3FHtw1MTeTrtnjfcdm0gDch061xr58+AaSlt2JGay
 E7v7NdpspaHZnLR2/3kVuluY4k5xPHwNyG8bb8yjNZALerzj8ZK/4p7Fqr2Fjo0Htb2R
 b65Ga0UDcyvOLQMMAp56Db8WjfO2k7mIKbwSOxxIcYrkADN0d3kt3FZ5eElN0wMmRrI9
 f29GsizKwpS0oNNoiBwWDbcBLnsafwZw0tyT8JfD0eGotV1EQw7nS9eN7TlY5BmZb+Yd
 6NwQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=ds4Pyg1znJ3S0Yg3vWRMgNmyuvMQ1euqy6dagIketK0=;
 b=Vm+n91htwbg7m5fqhnwN+iCew+vnvZjHFIadjLuAzhVFM93aDAV5xels6jX6Cs/+ak
 HPWmV1/5iKC3fPjtWmTw8ZFYiIWPFgbtK/6VZHOIRp6JiCVCNaNT9hETcjgIbOiurkky
 zCMjwjLd2d/ThBx5Ak9ZVVK7RoWsGnts5HlXD7kFcM0DEptyDPWfcvT1mg5hUiQADBYp
 OiKBe2VwiQA8O5qcphxCAsuSVqZ6Xg89Q3x0gAExtvOLvS1sGFZgW07KVHrP6cwHZ2JT
 wZ77pvMRtwtT9rYQItfmoOdmgRXoaUFGSylrmK5DQG3kIKF5sEzdnDixT/6k2iOP+qLe
 dl9w==
X-Gm-Message-State: AOAM533Vbfo7248m7BGsCGGTwEgOznCy/WRVhg4NeRtfPk+dLFNwTI2h
 rHLRLl9S116A9YDbEArbdskogCuD
X-Google-Smtp-Source: ABdhPJwVY4tyeXHUFHKqVTHep4vSGYq3+fpnSZ41i5Zzo0H8ntsDwuXfy/s0AQhbGmkAQpLlA5F9mg==
X-Received: by 2002:a05:6214:8e4:: with SMTP id
 dr4mr13187822qvb.97.1590375045443; 
 Sun, 24 May 2020 19:50:45 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.50.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:50:44 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 0/8] Coverity fixes for vchan-socket-proxy
Date: Sun, 24 May 2020 22:49:47 -0400
Message-Id: <20200525024955.225415-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This series addresses some Coverity reports.  To handle closing FDs, a
state struct is introduced to track FDs closed in both main() and
data_loop().

Jason Andryuk (8):
  vchan-socket-proxy: Ensure UNIX path NUL terminated
  vchan-socket-proxy: Check xs_watch return value
  vchan-socket-proxy: Unify main return value
  vchan-socket-proxy: Use a struct to store state
  vchan-socket-proxy: Switch data_loop() to take state
  vchan-socket-proxy: Set closed FDs to -1
  vchan-socket-proxy: Cleanup resources on exit
  vchan-socket-proxy: Handle closing shared input/output_fd

 tools/libvchan/vchan-socket-proxy.c | 164 ++++++++++++++++++----------
 1 file changed, 106 insertions(+), 58 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3CX-0004La-KW; Mon, 25 May 2020 02:50:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3CW-0004LS-Cv
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:50:52 +0000
X-Inumbo-ID: 8bc771ba-9e32-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x841.google.com (unknown [2607:f8b0:4864:20::841])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8bc771ba-9e32-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 02:50:51 +0000 (UTC)
Received: by mail-qt1-x841.google.com with SMTP id l1so12932530qtp.6
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:50:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=TuFE1UFStx7DhYra1LcAmMWFmhp1FfRuQDwhQTRAYLQ=;
 b=a5R6y24hmZf7dseYfL279Z6P8Y/N4v32REhzXWvuI6RgseeMyz30DzODrA7F59/VeF
 VVMqWBMdPM43TC/sErMk6cyiuc1FBjV3KVGIpsHiaE463qPFvnucfc980Fw/WStrE4GR
 KrVpzAQNrqnCiRpbiUPW9pRY9XFfVINzYBEzWYc/se878fXjCqWn1oKwUQJ2YSyQfPjz
 +ZbawcHb84gd55HRTaBj1auKqYLRYCHR6jWm12LnjRCmDScGxiY3Ok08TxB4I2bffNwL
 oLZT96v4GZz+VoM1Cxiuy8MhumTmN6AQcU7wS89p+5fn8YnrcG4z7whjDkSl+AntixmM
 /jOg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=TuFE1UFStx7DhYra1LcAmMWFmhp1FfRuQDwhQTRAYLQ=;
 b=uh41joMsT7lXRpfraetULzxWvqhyJSlCysB58MZdTsZNPXca4lNAfPTCO2mdZEc1we
 J+MX3kSj2JZiW8kvXY/w+8/HUnZXMVHalgFEN9N6kt4uKCLw+YKYiMXGqlKS95WPe8ld
 tNURRKM2QsafeWVHBXpiujtQW1YFjyi5sR+Yg5Y/LD0EaofQirE2C+kmkgz9vLti4nTb
 yjt2rx9DybWA9aQeTJnmotJ+1NgyZJfhXXPfYhrexg5ALxCevkFCtCSg20WmjpvWwgB7
 gRy9dbPxD+1snysbwdqnG/Yr1L4sVpmEN4XmPX9aUpsGeUdIoeB5TJ7glPUPNZAw3faV
 VLtg==
X-Gm-Message-State: AOAM531iT4+S2d0p+QaXj08oaIaDHVmOGxgiUXxcrVwrtaXVAi5sgYm7
 1c3ohMwP0Tpx2mq+OA7hfq+49Yc7
X-Google-Smtp-Source: ABdhPJx6/03cefrmuDCSXXmpl/Oe1/sw6rOyEM8gKdoi/casVrLcDhpzCbgzgqTwZikXJCpRGw5g0A==
X-Received: by 2002:ac8:543:: with SMTP id c3mr25433015qth.8.1590375051323;
 Sun, 24 May 2020 19:50:51 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.50.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:50:50 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/8] vchan-socket-proxy: Ensure UNIX path NUL terminated
Date: Sun, 24 May 2020 22:49:48 -0400
Message-Id: <20200525024955.225415-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Check the socket path length to ensure sun_path is NUL terminated.

This was spotted by Citrix's Coverity.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index 13700c5d67..6d860af340 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -148,6 +148,12 @@ static int connect_socket(const char *path_or_fd) {
         return fd;
     }
 
+    if (strlen(path_or_fd) >= sizeof(addr.sun_path)) {
+        fprintf(stderr, "UNIX socket path \"%s\" too long (%zu >= %zu)\n",
+                path_or_fd, strlen(path_or_fd), sizeof(addr.sun_path));
+        return -1;
+    }
+
     fd = socket(AF_UNIX, SOCK_STREAM, 0);
     if (fd == -1)
         return -1;
@@ -174,6 +180,12 @@ static int listen_socket(const char *path_or_fd) {
         return fd;
     }
 
+    if (strlen(path_or_fd) >= sizeof(addr.sun_path)) {
+        fprintf(stderr, "UNIX socket path \"%s\" too long (%zu >= %zu)\n",
+                path_or_fd, strlen(path_or_fd), sizeof(addr.sun_path));
+        return -1;
+    }
+
     /* if not a number, assume a socket path */
     fd = socket(AF_UNIX, SOCK_STREAM, 0);
     if (fd == -1)
-- 
2.25.1
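The check added above guards against silent truncation: `sun_path` is a fixed-size array (typically 108 bytes on Linux), and copying a path of `sizeof(sun_path)` or more characters into it cannot preserve the NUL terminator. A self-contained sketch of the same check (`check_unix_path()` is a hypothetical helper, not the proxy's actual function):

```c
#include <stdio.h>
#include <string.h>
#include <sys/un.h>

/* check_unix_path() reproduces the patch's length check: reject any
 * path that would not fit in sun_path together with its NUL
 * terminator, before ever constructing the sockaddr. */
static int check_unix_path(const char *path)
{
    struct sockaddr_un addr;

    if (strlen(path) >= sizeof(addr.sun_path)) {
        fprintf(stderr, "UNIX socket path \"%s\" too long (%zu >= %zu)\n",
                path, strlen(path), sizeof(addr.sun_path));
        return -1;
    }
    return 0;
}
```

Rejecting the path up front is safer than truncating it, since a truncated path would make bind()/connect() silently target the wrong filesystem entry.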



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3Cb-0004M2-Ss; Mon, 25 May 2020 02:50:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Ca-0004Lo-Dm
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:50:56 +0000
X-Inumbo-ID: 8d478886-9e32-11ea-b9cf-bc764e2007e4
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d478886-9e32-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 02:50:54 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id p4so7501250qvr.10
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:50:54 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=T0BiCkhzclg2VvasscglUktTq98DwBMZs8CpLP6l+Lg=;
 b=A5uYZZ2wzL/XRZaWDvdvR+WIEUjNsUaDYhREgp5q/oggWg9b4VK9We2mfzinR3lXXT
 EgSB3jM6fAwWsrfr8vuAHnBvoHG5vIMjjiV3Qi9mKi6d/Si+PzsbnzMbgehVJfr8VP8X
 f+wHjkozqeBFVGi60RXJimffcBsM9e5BBslRc6wWvr58aPIzf2LwV5NxlvO642947tLG
 Un2+3BLVDfOgPBwDHJOyaVB1edpmnV6YC0DKRFuI6TPSHJZTEtHjsSK95g1Zz9/snez7
 DRQHsvXgD5iLVgnUF8d5kHMpteE+NJnqqkC7HoaHv3iZQl8k4Hp43+/iR+dlpWnRtrvF
 hxhw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=T0BiCkhzclg2VvasscglUktTq98DwBMZs8CpLP6l+Lg=;
 b=qXt/iXpeiOb3R5aZbOlr8gWxLsLTQBDrY7CU6BiEYMaLBoS6pKyBI9gHe+r3dTryl+
 wlQ5jMpn2d8Jw3BNQgWtlvSdNDoIUx+vCNfi1I6jfOxY/vbU14gAlYqv9xwvc+bV5p1x
 65r3ipfZbgxdopVrlcpKQIPf06RC4iUtTlp7Jzc3L8C7Qe2AMgciNqO3xVUdMEddDzqh
 g0eY88OPYSVF551yUQx+Xx5+3sT8vZAAITO4QAwB5TANFmegy3SbhbPCyMNnN+TW0WHk
 IvK+uScg1Y33pw5GJYFHcjjupGcWb884Nas3XVnX2/PgF5/x45hSE++tEGkhpquSWa5H
 gAIQ==
X-Gm-Message-State: AOAM533V99aojHwSHjQ8Jgg6ldtZbta2pxAaPLS7qHWfsqrRt4PxGhid
 6GtgyO6rdPbBjnGl1L7Wcb/JauAs
X-Google-Smtp-Source: ABdhPJxUoxNI9bSF8blnHfGiLAFojgcXDTSl8/QBkDtJj1CY8bzGzXHV9M3Dgcidm14+QfZ0s4GmTQ==
X-Received: by 2002:a0c:fb4b:: with SMTP id b11mr13428732qvq.96.1590375053889; 
 Sun, 24 May 2020 19:50:53 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.50.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:50:53 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 2/8] vchan-socket-proxy: Check xs_watch return value
Date: Sun, 24 May 2020 22:49:49 -0400
Message-Id: <20200525024955.225415-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Check the return value of xs_watch and error out on failure.

This was found by Citrix's Coverity.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 11 +++++++++--
 1 file changed, 9 insertions(+), 2 deletions(-)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index 6d860af340..bd12632311 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -225,8 +225,15 @@ static struct libxenvchan *connect_vchan(int domid, const char *path) {
         goto out;
     }
     /* wait for vchan server to create *path* */
-    xs_watch(xs, path, "path");
-    xs_watch(xs, "@releaseDomain", "release");
+    if (!xs_watch(xs, path, "path")) {
+        fprintf(stderr, "xs_watch(%s) failed.\n", path);
+        goto out;
+    }
+    if (!xs_watch(xs, "@releaseDomain", "release")) {
+        fprintf(stderr, "xs_watch(@releaseDomain) failed.\n");
+        goto out;
+    }
+
     while ((watch_ret = xs_read_watch(xs, &watch_num))) {
         /* don't care about exact which fired the watch */
         free(watch_ret);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3Ch-0004NC-52; Mon, 25 May 2020 02:51:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Cf-0004Mo-D8
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:51:01 +0000
X-Inumbo-ID: 8ec275b8-9e32-11ea-b9cf-bc764e2007e4
Received: from mail-qt1-x843.google.com (unknown [2607:f8b0:4864:20::843])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ec275b8-9e32-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 02:50:56 +0000 (UTC)
Received: by mail-qt1-x843.google.com with SMTP id o19so12912009qtr.10
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:50:56 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=3D6lM8zZXrDZrYYs+sM9xLtHGYSewtj+rrLJ+C4zgl4=;
 b=myWRm19nfnWsV4/CRIazx+Qpwm3OypAgqd6JXVhMlnbvpdVctR7ZkbvcDEmYTTZrB3
 rj3W0eFYZeBRWs3DVQafy+38/4VG4sKlqmoeiWnNV0dRPABzREMDEV9IW7+1LqV3BiN4
 dvDjxI/dJFSQTzODcueatJRivY0yeUwKyGBjIV203VWeC2pLm6WwtroxWsjD6MchYz0P
 Cg9iIFgb+mEsaBcPLo0uBBszY5dkcs7hQ9tCIiTXY5tBfg1p45cMDgQTwpd2JM10E9OK
 SKh/z3ytasVvUOrIPt8g0LRI20bouAewUd3NZHOk6tuMooCImAqGlfIm+7GafZdBbE8p
 vX3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=3D6lM8zZXrDZrYYs+sM9xLtHGYSewtj+rrLJ+C4zgl4=;
 b=q3dt/pPkQPm17sg6+gvwztfRSabZXq4v6er/UbiBvsHEl1Bsess5DiuDtLM75z3FnW
 5VFU5/kr1WZsyOHwX0i3vm21Qz9q1FRk82vXd9WIEVKwquVlfOfKVVlMdAywvS83QSHN
 xrtRmSIDnPJvye4+39GShYCK49kQa4PFdBhJJ6BOWtPbp58wz7xJr+z5j+1ZfNzWLuPd
 N25g9cuEN2XLEYgotzRMgK1v6/NtG/OK8r0ql2EZNjkONR/JorFJvLxS+dG7c2LXCtzy
 IAqJ3fUesa64a5bVCwNj4tohOP6X8MekVxnqi/8dki9n3bnqluVBYUGBMDySxghAln+I
 wjDQ==
X-Gm-Message-State: AOAM5300vT6UKmnp7of3e5/mTkt4E8lD2p6FLwWXJ+LXqiNSvPlicqBL
 T2PUnghwFgkD7xXIo9bzotAC0qLK
X-Google-Smtp-Source: ABdhPJx+rgbZ9bRQ3+GrhZW7rJFE2qtB40YncDhT2ua91BgbMD+5muIbaNWUuBv5zcCLqYFB120now==
X-Received: by 2002:ac8:65d1:: with SMTP id t17mr18084693qto.46.1590375056329; 
 Sun, 24 May 2020 19:50:56 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.50.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:50:55 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 3/8] vchan-socket-proxy: Unify main return value
Date: Sun, 24 May 2020 22:49:50 -0400
Message-Id: <20200525024955.225415-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce 'ret' for main's return value and remove direct returns.  This
is in preparation for a unified exit path with resource cleanup.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index bd12632311..d85e24ee93 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -381,6 +381,7 @@ int main(int argc, char **argv)
     const char *vchan_path;
     const char *state_path = NULL;
     int opt;
+    int ret;
 
     while ((opt = getopt_long(argc, argv, "m:vs:", options, NULL)) != -1) {
         switch (opt) {
@@ -447,6 +448,8 @@ int main(int argc, char **argv)
         xs_close(xs);
     }
 
+    ret = 0;
+
     for (;;) {
         if (is_server) {
             /* wait for vchan connection */
@@ -461,7 +464,8 @@ int main(int argc, char **argv)
             }
             if (input_fd == -1) {
                 perror("connect socket");
-                return 1;
+                ret = 1;
+                break;
             }
             if (data_loop(ctrl, input_fd, output_fd) != 0)
                 break;
@@ -474,14 +478,16 @@ int main(int argc, char **argv)
                 input_fd = output_fd = accept(socket_fd, NULL, NULL);
             if (input_fd == -1) {
                 perror("accept");
-                return 1;
+                ret = 1;
+                break;
             }
             set_nonblocking(input_fd, 1);
             set_nonblocking(output_fd, 1);
             ctrl = connect_vchan(domid, vchan_path);
             if (!ctrl) {
                 perror("vchan client init");
-                return 1;
+                ret = 1;
+                break;
             }
             if (data_loop(ctrl, input_fd, output_fd) != 0)
                 break;
@@ -493,5 +499,6 @@ int main(int argc, char **argv)
             ctrl = NULL;
         }
     }
-    return 0;
+
+    return ret;
 }
-- 
2.25.1
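The refactor above is the classic single-exit pattern: replace every mid-loop `return 1;` with `ret = 1; break;` so all paths funnel through one exit point. A toy sketch of the shape `main()` takes after this patch (`run_loop()` and `fail_at` are illustrative stand-ins for the accept/connect loop, not the proxy's code):

```c
/* run_loop() mimics main()'s connection loop after this patch:
 * failures set 'ret' and break out of the loop instead of returning
 * directly, so the single exit point at the bottom is where a later
 * patch can hang resource cleanup. 'fail_at' marks which iteration
 * simulates a failed accept()/connect(); pass a negative value for
 * a clean run. */
static int run_loop(int fail_at)
{
    int ret = 0;

    for (int i = 0; i < 3; i++) {
        if (i == fail_at) {
            ret = 1;   /* was: return 1; */
            break;
        }
        /* ... per-connection work ... */
    }

    /* unified exit path: closing FDs / freeing the ctrl goes here */
    return ret;
}
```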



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3Cl-0004PA-H9; Mon, 25 May 2020 02:51:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Ck-0004Ol-EJ
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:51:06 +0000
X-Inumbo-ID: 9005aba2-9e32-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf41.google.com (unknown [2607:f8b0:4864:20::f41])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9005aba2-9e32-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 02:50:59 +0000 (UTC)
Received: by mail-qv1-xf41.google.com with SMTP id r3so7548284qve.1
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:50:59 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=y9V1tgqrbsz8wnP0O1UzYnljlbdkN5+LC5mMxx9pPO4=;
 b=cnyTYQB33jVd69hyM6RmnHA2+rsRnAQVdhGjUC68MMxQbGsHlnpStndqhrcOoLGwP/
 HYVqoiTnWJ2kqpJWiL8Bk0dIOJh3eX0LNrS/pUvRQThjvzfT9VwKZ8VfXIxNJg+fAOVe
 OWJbriZfk4xqj82/E97TCGsSDT3LwGAzdAJ3uDfIx42sfsBrbuHRv+OZnkfq3eUz1gZr
 DzzZ1FOcOenLRZTw/6EbgwUfjRpqwyWmTmF1C09A4f7HTCrxgIoES9wTtYs2gJMjZadn
 llK0U86PwKlQiTIrGqk9zWjaty7YAYwO/nLMPL8S3MGcYtZl55oo2ul0ugX4UK3fFK+C
 PlVA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=y9V1tgqrbsz8wnP0O1UzYnljlbdkN5+LC5mMxx9pPO4=;
 b=eRpKNsmN75ja5iSBHa72gxeGSIOr6CSFZNv0pkUNOaT55RuwlY7hllOP8ActCLbIKU
 O44KDq5bY3n8YwGGcjg/q3ux+IN65ZZVQVBZy4Jfau6yBNkQ80mU7B/cFqWb7bea0MPU
 mY/S0pMxfNKlFKR5r5TbswTnZi5f7LxPlrp5BIHf/euJXv+ccdFjKqJ79IVmi7UXecWL
 F3mw1dUdxejdcB7cgN+WimeVzNSBgQSywfc5aXkzS0WIIF6nVE7k2chvR1dXoQcWUiPX
 L8KHszXRQf/SIzAaM005foTah6Vr1jCjMYon1FW4uAuDEKrBE89NweOlXNC5J6yTwZPB
 AZcQ==
X-Gm-Message-State: AOAM530OFos4XhXiBl7Snty1KSGgDaeb3vL/yZ8gU+/x2GsqHTdytYZw
 uLtrtbwDUdsW4IwxuX8v7mxh0d+J
X-Google-Smtp-Source: ABdhPJyeCUoOdYimoZsXvuzqA0tzdnRAtxmTY6XHOdXIW6cDTT6vGhf5s3BHQpaKoNNCv5yFr+iCog==
X-Received: by 2002:ad4:57cb:: with SMTP id y11mr13770669qvx.26.1590375058455; 
 Sun, 24 May 2020 19:50:58 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.50.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:50:57 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 4/8] vchan-socket-proxy: Use a struct to store state
Date: Sun, 24 May 2020 22:49:51 -0400
Message-Id: <20200525024955.225415-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Use a struct to group the vchan ctrl and FDs.  This will facilitate
tracking the state of open and closed FDs and ctrl in data_loop().

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 52 +++++++++++++++++------------
 1 file changed, 30 insertions(+), 22 deletions(-)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index d85e24ee93..39f4bb1452 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -89,6 +89,12 @@ int insiz = 0;
 int outsiz = 0;
 int verbose = 0;
 
+struct vchan_proxy_state {
+    struct libxenvchan *ctrl;
+    int output_fd;
+    int input_fd;
+};
+
 static void vchan_wr(struct libxenvchan *ctrl) {
     int ret;
 
@@ -374,8 +380,9 @@ int main(int argc, char **argv)
 {
     int is_server = 0;
     int socket_fd = -1;
-    int input_fd, output_fd;
-    struct libxenvchan *ctrl = NULL;
+    struct vchan_proxy_state state = { .ctrl = NULL,
+                                       .input_fd = -1,
+                                       .output_fd = -1 };
     const char *socket_path;
     int domid;
     const char *vchan_path;
@@ -415,15 +422,15 @@ int main(int argc, char **argv)
     socket_path = argv[optind+2];
 
     if (is_server) {
-        ctrl = libxenvchan_server_init(NULL, domid, vchan_path, 0, 0);
-        if (!ctrl) {
+        state.ctrl = libxenvchan_server_init(NULL, domid, vchan_path, 0, 0);
+        if (!state.ctrl) {
             perror("libxenvchan_server_init");
             exit(1);
         }
     } else {
         if (strcmp(socket_path, "-") == 0) {
-            input_fd = 0;
-            output_fd = 1;
+            state.input_fd = 0;
+            state.output_fd = 1;
         } else {
             socket_fd = listen_socket(socket_path);
             if (socket_fd == -1) {
@@ -453,21 +460,21 @@ int main(int argc, char **argv)
     for (;;) {
         if (is_server) {
             /* wait for vchan connection */
-            while (libxenvchan_is_open(ctrl) != 1)
-                libxenvchan_wait(ctrl);
+            while (libxenvchan_is_open(state.ctrl) != 1)
+                libxenvchan_wait(state.ctrl);
             /* vchan client connected, setup local FD if needed */
             if (strcmp(socket_path, "-") == 0) {
-                input_fd = 0;
-                output_fd = 1;
+                state.input_fd = 0;
+                state.output_fd = 1;
             } else {
-                input_fd = output_fd = connect_socket(socket_path);
+                state.input_fd = state.output_fd = connect_socket(socket_path);
             }
-            if (input_fd == -1) {
+            if (state.input_fd == -1) {
                 perror("connect socket");
                 ret = 1;
                 break;
             }
-            if (data_loop(ctrl, input_fd, output_fd) != 0)
+            if (data_loop(state.ctrl, state.input_fd, state.output_fd) != 0)
                 break;
             /* keep it running only when get UNIX socket path */
             if (socket_path[0] != '/')
@@ -475,28 +482,29 @@ int main(int argc, char **argv)
         } else {
             /* wait for local socket connection */
             if (strcmp(socket_path, "-") != 0)
-                input_fd = output_fd = accept(socket_fd, NULL, NULL);
-            if (input_fd == -1) {
+                state.input_fd = state.output_fd = accept(socket_fd,
+                                                          NULL, NULL);
+            if (state.input_fd == -1) {
                 perror("accept");
                 ret = 1;
                 break;
             }
-            set_nonblocking(input_fd, 1);
-            set_nonblocking(output_fd, 1);
-            ctrl = connect_vchan(domid, vchan_path);
-            if (!ctrl) {
+            set_nonblocking(state.input_fd, 1);
+            set_nonblocking(state.output_fd, 1);
+            state.ctrl = connect_vchan(domid, vchan_path);
+            if (!state.ctrl) {
                 perror("vchan client init");
                 ret = 1;
                 break;
             }
-            if (data_loop(ctrl, input_fd, output_fd) != 0)
+            if (data_loop(state.ctrl, state.input_fd, state.output_fd) != 0)
                 break;
             /* don't reconnect if output was stdout */
             if (strcmp(socket_path, "-") == 0)
                 break;
 
-            libxenvchan_close(ctrl);
-            ctrl = NULL;
+            libxenvchan_close(state.ctrl);
+            state.ctrl = NULL;
         }
     }
 
-- 
2.25.1
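With the ctrl and both FDs grouped as above, "closed" gets a single canonical representation (NULL ctrl, -1 FDs) that every helper can test for. A sketch of that payoff (the struct mirrors the patch; `state_init()` is a hypothetical helper, not part of the series):

```c
#include <stddef.h>

struct libxenvchan;  /* opaque here; only a pointer is stored */

/* Mirrors the struct introduced by the patch. */
struct vchan_proxy_state {
    struct libxenvchan *ctrl;
    int output_fd;
    int input_fd;
};

/* state_init() is a hypothetical helper showing the point of the
 * grouping: functions now take one pointer instead of three loose
 * variables, and the "everything closed" state is well defined, which
 * the later cleanup patches in this series rely on. */
static void state_init(struct vchan_proxy_state *s)
{
    s->ctrl = NULL;
    s->input_fd = -1;
    s->output_fd = -1;
}
```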



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3Cq-0004R0-Pt; Mon, 25 May 2020 02:51:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Cp-0004Qe-EC
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:51:11 +0000
X-Inumbo-ID: 92a2e492-9e32-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf44.google.com (unknown [2607:f8b0:4864:20::f44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92a2e492-9e32-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 02:51:03 +0000 (UTC)
Received: by mail-qv1-xf44.google.com with SMTP id p4so7501325qvr.10
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:51:03 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=11/9Z+esWpnjGME4OYhJ11vNo1bXthXQQy5dVc6MS2w=;
 b=dUU5g9Gx88n6jfEOpIMwOpx2rQOev5secFI/pcOIwqVQs7iia4Ks117tv6/vjBQoK6
 ndINVcAH47xkE/9+cFfL4Sv2T/0ZQAI3yvLSZ6rdycRPTdjpZEi3MbPqzuTNc+6lnHJW
 LpG5CKzDTQ5zRmgnE+SU9Y/uCzW8NRnT/hziA/ckmkMBJDYWuGJdK1g+Z0X1Ls7sRrQ3
 3cI3jHxFHPtTc3zshG8RJ96sUeOtE4UyWPpVHjKsOchp6P5LpcbS59LuZtmlePy6vuQE
 4cbCRBcmD744cJqURtaHNIOtoYnAa6cC/Y7WI9Dq2zT+Ce4uoZ9zYbleWYuLdNeJlOiu
 Gvow==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=11/9Z+esWpnjGME4OYhJ11vNo1bXthXQQy5dVc6MS2w=;
 b=NkkTi0M4OodIwv5mDVKjKwtAb8eCRm6u12/hTLc5AHuoZMczSQfCUbszuHKKMH4lxO
 rJVx51edUuOdJbz7Bv0yLhUEc9PAW2FWON0iSkvBPzEMtvSSBkPqJLFC/HhbLhXvqNxv
 b/KHtqVyVKLNRRldiSyVRc78Jng/kESHwMD/NUIRe+hu2U35X92gjsvkTKt0/nZDWIWM
 nzHAPPqu9N895gagXRUTWStcBuoSCwQsEup3JsH+1M4QwfiQagOW/12P96Plvc0TkQyT
 VN/0nPASUmqWTIEGDfNOTrzSdLq4Y+9CTYFugVmT7PExxbTLp8eyGkytxwrlUsbNOSLm
 V8vQ==
X-Gm-Message-State: AOAM532+d0//xbTS/1dB1R9pjFDcfwpy1DwI1EWPFqa8oI40tUqI1eUL
 aCFpbAV/Nzq0g4lI1OAxo8639AgJ
X-Google-Smtp-Source: ABdhPJxh5j5c3BvA8/blIR/5rb6cXSRpiv6MBzO1RF04GCNvAYfvlxsJ2jqcP5+ri/t0wK0Thqws0Q==
X-Received: by 2002:a0c:a144:: with SMTP id d62mr13412979qva.229.1590375062897; 
 Sun, 24 May 2020 19:51:02 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.51.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:51:02 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 6/8] vchan-socket-proxy: Set closed FDs to -1
Date: Sun, 24 May 2020 22:49:53 -0400
Message-Id: <20200525024955.225415-7-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These FDs are closed, so set them to -1 to mark them as no longer valid.
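The idiom being applied here (invalidate the stored descriptor right after
close(), so a later select() or close() cannot act on a number the kernel may
have recycled) can be sketched as follows; the helper name is hypothetical
and not part of the patch:

```c
#include <unistd.h>

/* close_and_invalidate: hypothetical helper showing the idiom this patch
 * applies by hand.  Closing an fd without resetting the stored value leaves
 * a stale number that the kernel may reuse for an unrelated descriptor. */
static void close_and_invalidate(int *fd)
{
    if (*fd >= 0) {
        close(*fd);  /* release the descriptor */
        *fd = -1;    /* sentinel: callers can now test for "no fd" */
    }
}
```

With the sentinel in place, a second call becomes a harmless no-op instead of
a double close.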

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index 32be410609..f3f6e5ec09 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -319,7 +319,9 @@ int data_loop(struct vchan_proxy_state *state)
                 while (outsiz)
                     socket_wr(state->output_fd);
                 close(state->output_fd);
+                state->output_fd = -1;
                 close(state->input_fd);
+                state->input_fd = -1;
                 discard_buffers(state->ctrl);
                 break;
             }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3Cw-0004UQ-44; Mon, 25 May 2020 02:51:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Cu-0004TA-E9
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:51:16 +0000
X-Inumbo-ID: 919bb92a-9e32-11ea-ae69-bc764e2007e4
Received: from mail-qv1-xf42.google.com (unknown [2607:f8b0:4864:20::f42])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 919bb92a-9e32-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 02:51:01 +0000 (UTC)
Received: by mail-qv1-xf42.google.com with SMTP id dh1so7485372qvb.13
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:51:01 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=c+GBcLzdTfVlujqDgxVJJIXEMu0nDYhq6Ss2jg+Z8wQ=;
 b=oC0Y6l2LjLu/Y0zlJEc2vdTDFQkI7DecoziRTK9np3LNgQrgm0tm0uju7/F7CcyB7z
 70mb+ThUac1YUdZ2oOpOpc75eye9eO9erLaAwY2b3FEiqFIhERCW7h8FOK268/jl8ON2
 8M6TyCD2r44k3StyFFcNDIWLp/aSZsARlM4gF1HOKhQQaj3OT8CTnQK4GRqoVbyH8+Pj
 NTzRorLfvgL1Wrq0sn3h5KxeMlgKti7qJnXzsU4urdQiKCRz6Gtgw3nhgxGYn8n9/mou
 JJjmQETY1UrLpEOpWf4+iuegaT4OAvSy2YkTD1w11fLniGeZnMmq7ilZAa1Ym8K5xIrT
 7drA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=c+GBcLzdTfVlujqDgxVJJIXEMu0nDYhq6Ss2jg+Z8wQ=;
 b=PKnA99fc8izHIhesYHei0vq8dgo/u29VdnO7gE1k3GK4EDiDXe+HAoz6OmgwGRJ5eE
 J8/9vI9WAFCY+pXobdUBNQEfNPm8O8DOWCmdPti5U0j+2jhjOrzMUhhRYXpCrpnokTh9
 qep0P32xXAIQk4Y+2AX27oEAqR9LqYyUzdXZMaCpN4dHR3+uBEMM1KKUqOpEIg9N5u9b
 PhetEomxyoD5Pw7FGAZy/2Urc4vOP2hgmZuoRs87VEeOEwjjGE++IMyK5YtTYTXz7NmZ
 CXgmFgaOU333xo2MBl7DMtMJqwgKkFaRoo3AqFDtialbVg4w4ZTHYbN3amPQGiDCdnlm
 QXvw==
X-Gm-Message-State: AOAM533noFIgm6V/VJ46rfCWIKXSlzPIlhenJQMdif5e9h//k1Xvdcek
 lHgwC5cu1c1Ba1KqJHa5z0X63js8
X-Google-Smtp-Source: ABdhPJySewG2QNC01azxs4/d/Rg6cedR6xmr/aJH5EFOXstoH8Bi1+nJsZncrKYTGqifOvtadjfq9Q==
X-Received: by 2002:a0c:ee25:: with SMTP id l5mr13346075qvs.5.1590375060651;
 Sun, 24 May 2020 19:51:00 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.50.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:51:00 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 5/8] vchan-socket-proxy: Switch data_loop() to take state
Date: Sun, 24 May 2020 22:49:52 -0400
Message-Id: <20200525024955.225415-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Switch data_loop to take a pointer to vchan_proxy_state.

No functional change.

This removes a dead store to input_fd identified by Coverity.
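The shape of the refactor can be sketched with a reduced struct; the names
below are illustrative stand-ins, not the proxy's real definitions.  Passing
the struct by pointer means an assignment like input_fd = -1 updates the
caller's state, whereas the same assignment to a by-value parameter was
exactly the dead store Coverity reported:

```c
/* Reduced stand-in for struct vchan_proxy_state: the control handle and
 * both descriptors travel together instead of as three parameters. */
struct proxy_state {
    void *ctrl;      /* placeholder for struct libxenvchan * */
    int input_fd;
    int output_fd;
};

/* Writing through the pointer is visible to the caller; with the old
 * by-value int input_fd parameter, this assignment changed nothing the
 * caller could see. */
static void mark_input_closed(struct proxy_state *s)
{
    s->input_fd = -1;
}
```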

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 65 +++++++++++++++--------------
 1 file changed, 33 insertions(+), 32 deletions(-)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index 39f4bb1452..32be410609 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -279,13 +279,13 @@ static void discard_buffers(struct libxenvchan *ctrl) {
     }
 }
 
-int data_loop(struct libxenvchan *ctrl, int input_fd, int output_fd)
+int data_loop(struct vchan_proxy_state *state)
 {
     int ret;
     int libxenvchan_fd;
     int max_fd;
 
-    libxenvchan_fd = libxenvchan_fd_for_select(ctrl);
+    libxenvchan_fd = libxenvchan_fd_for_select(state->ctrl);
     for (;;) {
         fd_set rfds;
         fd_set wfds;
@@ -293,15 +293,15 @@ int data_loop(struct libxenvchan *ctrl, int input_fd, int output_fd)
         FD_ZERO(&wfds);
 
         max_fd = -1;
-        if (input_fd != -1 && insiz != BUFSIZE) {
-            FD_SET(input_fd, &rfds);
-            if (input_fd > max_fd)
-                max_fd = input_fd;
+        if (state->input_fd != -1 && insiz != BUFSIZE) {
+            FD_SET(state->input_fd, &rfds);
+            if (state->input_fd > max_fd)
+                max_fd = state->input_fd;
         }
-        if (output_fd != -1 && outsiz) {
-            FD_SET(output_fd, &wfds);
-            if (output_fd > max_fd)
-                max_fd = output_fd;
+        if (state->output_fd != -1 && outsiz) {
+            FD_SET(state->output_fd, &wfds);
+            if (state->output_fd > max_fd)
+                max_fd = state->output_fd;
         }
         FD_SET(libxenvchan_fd, &rfds);
         if (libxenvchan_fd > max_fd)
@@ -312,52 +312,53 @@ int data_loop(struct libxenvchan *ctrl, int input_fd, int output_fd)
             exit(1);
         }
         if (FD_ISSET(libxenvchan_fd, &rfds)) {
-            libxenvchan_wait(ctrl);
-            if (!libxenvchan_is_open(ctrl)) {
+            libxenvchan_wait(state->ctrl);
+            if (!libxenvchan_is_open(state->ctrl)) {
                 if (verbose)
                     fprintf(stderr, "vchan client disconnected\n");
                 while (outsiz)
-                    socket_wr(output_fd);
-                close(output_fd);
-                close(input_fd);
-                discard_buffers(ctrl);
+                    socket_wr(state->output_fd);
+                close(state->output_fd);
+                close(state->input_fd);
+                discard_buffers(state->ctrl);
                 break;
             }
-            vchan_wr(ctrl);
+            vchan_wr(state->ctrl);
         }
 
-        if (FD_ISSET(input_fd, &rfds)) {
-            ret = read(input_fd, inbuf + insiz, BUFSIZE - insiz);
+        if (FD_ISSET(state->input_fd, &rfds)) {
+            ret = read(state->input_fd, inbuf + insiz, BUFSIZE - insiz);
             if (ret < 0 && errno != EAGAIN)
                 exit(1);
             if (verbose)
                 fprintf(stderr, "from-unix: %.*s\n", ret, inbuf + insiz);
             if (ret == 0) {
                 /* EOF on socket, write everything in the buffer and close the
-                 * input_fd socket */
+                 * state->input_fd socket */
                 while (insiz) {
-                    vchan_wr(ctrl);
-                    libxenvchan_wait(ctrl);
+                    vchan_wr(state->ctrl);
+                    libxenvchan_wait(state->ctrl);
                 }
-                close(input_fd);
-                input_fd = -1;
+                close(state->input_fd);
+                state->input_fd = -1;
                 /* TODO: maybe signal the vchan client somehow? */
                 break;
             }
             if (ret)
                 insiz += ret;
-            vchan_wr(ctrl);
+            vchan_wr(state->ctrl);
         }
-        if (FD_ISSET(output_fd, &wfds))
-            socket_wr(output_fd);
-        while (libxenvchan_data_ready(ctrl) && outsiz < BUFSIZE) {
-            ret = libxenvchan_read(ctrl, outbuf + outsiz, BUFSIZE - outsiz);
+        if (FD_ISSET(state->output_fd, &wfds))
+            socket_wr(state->output_fd);
+        while (libxenvchan_data_ready(state->ctrl) && outsiz < BUFSIZE) {
+            ret = libxenvchan_read(state->ctrl, outbuf + outsiz,
+                                   BUFSIZE - outsiz);
             if (ret < 0)
                 exit(1);
             if (verbose)
                 fprintf(stderr, "from-vchan: %.*s\n", ret, outbuf + outsiz);
             outsiz += ret;
-            socket_wr(output_fd);
+            socket_wr(state->output_fd);
         }
     }
     return 0;
@@ -474,7 +475,7 @@ int main(int argc, char **argv)
                 ret = 1;
                 break;
             }
-            if (data_loop(state.ctrl, state.input_fd, state.output_fd) != 0)
+            if (data_loop(&state) != 0)
                 break;
             /* keep it running only when get UNIX socket path */
             if (socket_path[0] != '/')
@@ -497,7 +498,7 @@ int main(int argc, char **argv)
                 ret = 1;
                 break;
             }
-            if (data_loop(state.ctrl, state.input_fd, state.output_fd) != 0)
+            if (data_loop(&state) != 0)
                 break;
             /* don't reconnect if output was stdout */
             if (strcmp(socket_path, "-") == 0)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3D0-0004XS-Eq; Mon, 25 May 2020 02:51:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Cz-0004Wx-Ds
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:51:21 +0000
X-Inumbo-ID: 940556bc-9e32-11ea-9887-bc764e2007e4
Received: from mail-qk1-x743.google.com (unknown [2607:f8b0:4864:20::743])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 940556bc-9e32-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 02:51:05 +0000 (UTC)
Received: by mail-qk1-x743.google.com with SMTP id b27so6457948qka.4
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:51:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=T3VIo617dxU+JNhTIh5muYXiVHbUoUU/vbppSclO4R4=;
 b=H//Y5QfjKCvFbeIzygw4SVcQFuOlrO12RIEjI8DQTOOE+qbkmX7/0YTKPVn5s1L/v8
 l+w9aN4jdqWihCj9TawrZN28mG674NgdJW1Fg6kClyaezu8wv1hdAiHPb/vbJYu/IUQc
 cTsJjwCJ+rBrNR6tWK/1frwx1HfgPpXr2Cn9WyEB91lPH3nyG30FDMiDLcaZOqUKb3my
 cWrygZpzPAyXmRh/PsSInNZEKQfJcmWpdeWiD1uLHurVmaChHLEbb590JT6ybJbhwL5l
 ZUAGYI0jDTi+AA6RqGsFnPVYUOHtCyDGlpZyqVKFgwrg+3PJImzFiKmnnHA8v3mIfFzA
 UTtA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=T3VIo617dxU+JNhTIh5muYXiVHbUoUU/vbppSclO4R4=;
 b=fFERzSq6w5k7aHGPtaiZaQakK2M5TZ5mQIHitoNHNAJuPWYFTFHyeGHKjqpY3i+TcP
 briLO9fCKjEfiUSMVAcgjwysleSO52VS0+aAymgwUoCZSnxd2Gvmh7Sxss0rmZksgWoh
 A1Z4LZGLdhSKdagexUKzqP5hMF5P5nATw5FoWMdUSRf3B1VEmD5xyfQzPA8RnAHQTQD7
 qBjWEKM9HqJ44pbPRVX2K/SkuP2Sh80zp+Fnk6UKIhFFXij+JvsYVwZ7E1A/E+eHYlC5
 DgBauTk0s5Fwlbx2UpiWKrUEJLu53OkgUMazyYgqSCjMpFuE26lWVt2OA0BgxDiUn11i
 o+kQ==
X-Gm-Message-State: AOAM5338D6qXK9ZTgnKhOrE2hI0ik5q6CiCnpslCPUav48lnqIQzRYmu
 mKsVLX8dG+PgMNw4PjI37i3AGLgf
X-Google-Smtp-Source: ABdhPJz87lVhG4951kT7XtwU45jYIGytRok/y5eoofYCmkEqwqVqOBqV4LiWrWTA+t0mWR5DoZGHpg==
X-Received: by 2002:ae9:f214:: with SMTP id m20mr25017275qkg.232.1590375065126; 
 Sun, 24 May 2020 19:51:05 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.51.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:51:04 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 7/8] vchan-socket-proxy: Cleanup resources on exit
Date: Sun, 24 May 2020 22:49:54 -0400
Message-Id: <20200525024955.225415-8-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Close open FDs and close the vchan connection when exiting the program.

This addresses some Coverity findings about leaking file descriptors.
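The exit-path pattern the patch adds can be sketched as a guarded teardown;
the struct and helper below are illustrative, with -1 as the "never opened"
sentinel:

```c
#include <unistd.h>

/* Illustrative teardown: release only what was actually acquired.  Returns
 * the number of descriptors closed, which keeps the sketch testable. */
struct resources {
    int input_fd;
    int output_fd;
    int socket_fd;
};

static int cleanup(struct resources *r)
{
    int closed = 0;
    if (r->output_fd >= 0) { close(r->output_fd); r->output_fd = -1; closed++; }
    if (r->input_fd >= 0)  { close(r->input_fd);  r->input_fd = -1;  closed++; }
    if (r->socket_fd >= 0) { close(r->socket_fd); r->socket_fd = -1; closed++; }
    return closed;
}
```

Unconditional close() calls here would fail with EBADF on the paths where a
descriptor was never opened; the >= 0 guards avoid that.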

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index f3f6e5ec09..a04b46ee04 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -511,5 +511,14 @@ int main(int argc, char **argv)
         }
     }
 
+    if (state.output_fd >= 0)
+        close(state.output_fd);
+    if (state.input_fd >= 0)
+        close(state.input_fd);
+    if (state.ctrl)
+        libxenvchan_close(state.ctrl);
+    if (socket_fd >= 0)
+        close(socket_fd);
+
     return ret;
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:51:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3D5-0004c9-OI; Mon, 25 May 2020 02:51:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3D4-0004b4-FE
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:51:26 +0000
X-Inumbo-ID: 95a74246-9e32-11ea-b9cf-bc764e2007e4
Received: from mail-qk1-x744.google.com (unknown [2607:f8b0:4864:20::744])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95a74246-9e32-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 02:51:08 +0000 (UTC)
Received: by mail-qk1-x744.google.com with SMTP id n11so10967497qkn.8
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:51:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Vlm0vRlSx5YbGIPqCOuB7/AqLs+Hl6v+Oi0EjKoqVGw=;
 b=FE8JO6G2FC2LN9LRfxO8JC1zcOYD8p79JwblsTCdvw+HETKRl1qYtuFao3GJ1cn9k0
 Yu7pzPYZpk6OHjjdjoh1WBJZHt6s5cuE954MfHPoztKvITO1t8CBHOXMpHYMJfbDIHeH
 TJTAmrPKs0pL4avp+EMaOuclniWL0MjJ5URGDkD+KuHBlETcXHb9w9WniHEtvVIyiFhr
 qtoFhDkbx15ycWXpLAKr32KsrEn/QDEIidSpr4ZdWIELD6QcsirudHXGNknzdn7lx6Gp
 SAgbbd+HYRF4jJTsuEZ4jn7zdW2xJdDBPB0GFmO0PRE93q7iv0jcAMuCfsOU+hYrG31l
 0iTQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
 :references:mime-version:content-transfer-encoding;
 bh=Vlm0vRlSx5YbGIPqCOuB7/AqLs+Hl6v+Oi0EjKoqVGw=;
 b=Pld02SXTaGSR/kUZjD92aLWBP0wKdLAb0BQ87CZuM8WxIqHgR0LEo20uJqbFfcTn2r
 NMxNyJcEcm4zd7z7/z1b8DigesSukAhXI6qp86lvnihHxrWyZdnvJd6aDKWZt9SZ/sYz
 SAikjkixswIbGZGa6Py8avxEfgYtTYgjrwOM2jgLi6qYd21nZh8q3Ws9KMs5Irabyloh
 0QtHwtS2yX3bvCtyPD6xwACvQSLZ3W7xj3WeKvjx0Fmw2cHxoADZxNd8rfMNfUco6DTc
 6J/5S3EEmb2PBEqKI0NRfKu1H50aqaXlAvH1rz0+QY0kKLDiv/gIZ1mBHywMmn2ajTHv
 Lrow==
X-Gm-Message-State: AOAM5313bk61MvTgXuDS0g8XeIAw8TwgpMuSqZC9FETQQG/boZRP0GwX
 YfJcVlkkhZb9yZectfsjRrPU/iEw
X-Google-Smtp-Source: ABdhPJy3ohNXkaaZhuxn/Fk6T6gmp+Y4SLoqqiX7Y6XxduJszM+OOBBfOp6+MpVwoI2b1eYyXz9pDA==
X-Received: by 2002:a37:b3c7:: with SMTP id
 c190mr24190289qkf.466.1590375067957; 
 Sun, 24 May 2020 19:51:07 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id h134sm13539512qke.6.2020.05.24.19.51.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:51:07 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 8/8] vchan-socket-proxy: Handle closing shared input/output_fd
Date: Sun, 24 May 2020 22:49:55 -0400
Message-Id: <20200525024955.225415-9-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
References: <20200525024955.225415-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, marmarek@invisiblethingslab.com,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

input_fd and output_fd may be the same FD.  In that case, mark both as -1
when closing one.  That avoids leaving a dangling FD reference.
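The case being handled can be sketched as follows; the helper is
hypothetical, but the comparison mirrors the patch: when both slots hold the
same descriptor, one close() releases it for both, so the surviving slot
must be reset as well:

```c
#include <unistd.h>

/* Hypothetical helper mirroring the patch: close the input side, and if the
 * output side is the same underlying descriptor, invalidate that slot too,
 * since the number it holds is now free for the kernel to recycle. */
static void close_input(int *input_fd, int *output_fd)
{
    close(*input_fd);
    if (*input_fd == *output_fd)
        *output_fd = -1;  /* shared descriptor: the other slot is stale */
    *input_fd = -1;
}
```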

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libvchan/vchan-socket-proxy.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libvchan/vchan-socket-proxy.c b/tools/libvchan/vchan-socket-proxy.c
index a04b46ee04..07ead251a2 100644
--- a/tools/libvchan/vchan-socket-proxy.c
+++ b/tools/libvchan/vchan-socket-proxy.c
@@ -342,6 +342,8 @@ int data_loop(struct vchan_proxy_state *state)
                     libxenvchan_wait(state->ctrl);
                 }
                 close(state->input_fd);
+                if (state->input_fd == state->output_fd)
+                    state->output_fd = -1;
                 state->input_fd = -1;
                 /* TODO: maybe signal the vchan client somehow? */
                 break;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:54:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3Fl-0005Fh-BM; Mon, 25 May 2020 02:54:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Fj-0005Fc-Pu
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:54:11 +0000
X-Inumbo-ID: 02b26712-9e33-11ea-ae69-bc764e2007e4
Received: from mail-qk1-x741.google.com (unknown [2607:f8b0:4864:20::741])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02b26712-9e33-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 02:54:11 +0000 (UTC)
Received: by mail-qk1-x741.google.com with SMTP id b6so16400916qkh.11
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:54:11 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=Gd2QZnhqRG2wfOUzgNesrJhEW8WJZJ6w2ov6jlm8W5o=;
 b=OsB9g1UAriBq4kboxMcUwtzv278fUPt2vOcCEXJJ1O6mM2vFpeOZ5zGTjpjngJfYHZ
 S4c+F6BU5o818jJa5XdtW7uNLyyR7VtRvoSHzJW5oLphUcxqd3VafQ9d9v8gqj5N1K+q
 8I4TTcv6xofD/yhm99g6PUSnAaoTRaraFKLClVYoWk2rrqlSXYh61E328dGDZClSLV18
 rE7VcnnWyvE3ATHmoP8QeVFhUHpm8QH/EVqzTo3vl1MnC/dDsrPmezIGHMjS5CeL6NUY
 qZAKIf1zcgkT2AtaLrjd+AkLfgnPYf9Lhj8iSbrhpZ7ShjN6Gj7nBDCeu0ser/kTLpOr
 BsTw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=Gd2QZnhqRG2wfOUzgNesrJhEW8WJZJ6w2ov6jlm8W5o=;
 b=NlaOZI10Esu5cndpTWmmB4M2gF+f4c1JoaK4IgESWXCZA9+X4UX3MwmnfG6qYHErhl
 TIFDXrqzAotPVYX22Aa9SJ1EuBKKgHhazkPRcMc9bx+nd0dX2F+kAOwXZDcblxc1RFVl
 5usu5/TsFEpo/zGK0K4B27qUeffXqZMAMzIO6vr/vS9oi1+V9EEf6fAyKI8xmCCky/oN
 PFOvo7EkI6qOThRrDOmZ5ehn05Ws3+emX2ZoVlotAMeA+c/BJ8+N4srRIPF1rdQaOSfr
 0DtK+kCRtSfUreADZsPPc1OSXfAj3SURkBCSxgbzKS4p51nT9wi7hRJQ+012H+bKn1Xy
 qZWg==
X-Gm-Message-State: AOAM530qFHoYU31ml9khW8rijGuDDUXI5bjUpXOqDhBNXITAxpAFuU9A
 iogLhQb9/0CWu8L+54cbo5ikT5Bh
X-Google-Smtp-Source: ABdhPJwfuCYfsVOLTBWJmgJF0ndj8awyy8F6sa6kpfnh8VSsdrOS7osVB7vGROUh4v0+2Wz1K0E10A==
X-Received: by 2002:a37:6e42:: with SMTP id j63mr8169954qkc.329.1590375250901; 
 Sun, 24 May 2020 19:54:10 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id o14sm12962296qkj.27.2020.05.24.19.54.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:54:10 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] CHANGELOG: Add qemu-xen linux device model stubdomains
Date: Sun, 24 May 2020 22:54:02 -0400
Message-Id: <20200525025402.225884-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Community Manager <community.manager@xenproject.org>,
 Paul Durrant <paul@xen.org>, Jason Andryuk <jandryuk@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add qemu-xen linux device model stubdomain.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index ccb5055c87..52ed470903 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -16,6 +16,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    fixes.
  - Hypervisor framework to ease porting Xen to run on hypervisors.
  - Initial support to run on Hyper-V.
+ - libxl support for running qemu-xen device model in a linux stubdomain.
 
 ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 02:55:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 02:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3Gm-0005KM-Lf; Mon, 25 May 2020 02:55:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jd3Gl-0005KG-7m
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 02:55:15 +0000
X-Inumbo-ID: 2880af94-9e33-11ea-ae69-bc764e2007e4
Received: from mail-qt1-x844.google.com (unknown [2607:f8b0:4864:20::844])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2880af94-9e33-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 02:55:14 +0000 (UTC)
Received: by mail-qt1-x844.google.com with SMTP id m44so12933066qtm.8
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 19:55:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=OuwE3uDSMnd3YhVaR0Jjj0ukVHHDRwTxTijFqTevWsY=;
 b=UiYVTOOJjuzzNGsN7JOkvx2OzXqdzXXosyG3cYLmbduVd8NThatlQshr9l0nCfJA2R
 fsOHu8CSa6plajp7QUa5NSFfApBmxBNq+8zWsYfbzad773SlG/nPEJaaNp0q4wOezUpB
 SwIMtX4SKwFxukM7aBov4UZEYn+PXx1KqJtK/YCvqnmuWW//+18poNR8U4KDWg991nXx
 60dCPu6xtCiYwjME1PASk1KLySeRVoVX8E2v2uoMVpMsFvyjgqOrgmsN3YSSKNW/Hth8
 qfxnazovfWRBHX7IDpyAashDiW3l9lS1H2gOy4k5aTLDN+Zr6Ab0yRkTa/1ghOMvv2mA
 Q+zw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=OuwE3uDSMnd3YhVaR0Jjj0ukVHHDRwTxTijFqTevWsY=;
 b=Y3QGUX8Wu/ig6/f0UdItsCWt+V3PrOgf2Kc55nOaAGQpExhxv7s9uTfnmaJfF908Aj
 mO/gWpEUfeuQQ570b3/Zp5HEiMNW/ctSHGmlzQaX650ZoE4XoKJSu40Q7wGVhKEgedGE
 5nWBqCzFL2aql/76b+bO8rtEN3wgfuL4T4Zwlp+59r5z3lnMu6ig90xhfFHK6/+qtDSq
 VdCAjNT9zRhPt/x+M4YOc7QiwDdTl6pGkARFC5mFZmjHflslKJ5ZuqWbOeKCG8BxVMh5
 tnyAaFQ7mZBMf3qdwTk5kJMMwqoL9AWSVPAss9uZWPHbjFSCcAooyJ5+aWJCk6gtNGmP
 NN1w==
X-Gm-Message-State: AOAM533+UI0MOCaMKlNB1u9HzTMGBIThR1hqGaSAPw3YgmKLeOp73bP6
 rRR/2kNYs/Hkf27oR2H8L1soR3Xi
X-Google-Smtp-Source: ABdhPJzdVpB3atFyil/XFxkZ4T9UIE0QW8WSS0kMYpTZ1Q9tdAuBDATICxxxs4ULkcHQ3QKuq0X5Zw==
X-Received: by 2002:ac8:8f7:: with SMTP id y52mr26765599qth.104.1590375314321; 
 Sun, 24 May 2020 19:55:14 -0700 (PDT)
Received: from shine.lan ([2001:470:8:67e:344b:9349:9475:b6a2])
 by smtp.gmail.com with ESMTPSA id n13sm2739984qtb.20.2020.05.24.19.55.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 24 May 2020 19:55:13 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
Date: Sun, 24 May 2020 22:55:06 -0400
Message-Id: <20200525025506.225959-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add qemu-xen linux device model stubdomain to the Toolstack section as a
Tech Preview.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 SUPPORT.md | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index e3a366fd56..25becc9192 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -153,6 +153,12 @@ Go (golang) bindings for libxl
 
     Status: Experimental
 
+### Linux device model stubdomains
+
+Support for running qemu-xen device model in a linux stubdomain.
+
+    Status: Tech Preview
+
 ## Toolstack/3rd party
 
 ### libvirt driver for xl
-- 
2.25.1
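
For anyone wanting to try the feature, a guest configuration using the
Linux device model stubdomain might look roughly like the sketch below.
The option names (`device_model_stubdomain_override`, `stubdomain_kernel`,
`stubdomain_ramdisk`) and the file paths are assumptions based on the
stubdomain work, not part of this patch -- check xl.cfg(5) on your Xen
version before relying on them.

```
# Minimal HVM guest sketch using a Linux device-model stubdomain.
# Names and paths are illustrative assumptions.
type = "hvm"
name = "guest-with-linux-stubdom"
memory = 2048

# Run qemu-xen in a stubdomain instead of in dom0
device_model_stubdomain_override = 1

# Kernel/ramdisk for the Linux stubdomain (built separately)
stubdomain_kernel  = "/usr/lib/xen/boot/qemu-stubdom-linux-kernel"
stubdomain_ramdisk = "/usr/lib/xen/boot/qemu-stubdom-linux-rootfs"
```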



From xen-devel-bounces@lists.xenproject.org Mon May 25 03:35:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 03:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd3t7-0000pK-P8; Mon, 25 May 2020 03:34:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AvLo=7H=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jd3t6-0000pF-Pq
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 03:34:52 +0000
X-Inumbo-ID: b0ff8bec-9e38-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0ff8bec-9e38-11ea-b07b-bc764e2007e4;
 Mon, 25 May 2020 03:34:51 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id g12so14582158wrw.1
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 20:34:51 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=WFt2U/4Q6uJzgd+Uj0nMH6M2jMoWfRNDoTmzInOyY/Y=;
 b=LJVqcIqrLKhirSUV1vufxjx+7P1U/9qwNpOWwaQVbvLcqXp0R/069We4KqnnKykNr1
 khXf5CDrP9GQegVFbpZKtJMzZ0i0hrvsZVP/h8zMWD+e6v7Fj64iR97yHKtNNbuBjBYp
 kXZ4y2AJHua99QgUsb28m5Z7NRuV05X4RuDiyDgvN6wBOz24yZBTsoSwviWPFGbLgjRH
 dAFUNKN8zIISTqhQmxoPNtVSHNvYw76LeP0QWH3N4tavjvqr8faygILVz2toJh5fmc1s
 p1Z96oMrt4LV0FYKDNYIgOAJWboq41MrHDMkJtPjAFXDVhbbcNUablYGpgBY9Ep+Y0M3
 qN+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=WFt2U/4Q6uJzgd+Uj0nMH6M2jMoWfRNDoTmzInOyY/Y=;
 b=g+EAPiNHfn6CVbCdVZmC47uTc5/qccjNSSTCRrKNboz2I/znedIoq/4Qgv5gqCMGsw
 a0E7/2a0VlURBjnDoBKcok2ZvtBwlG0mHuzolMLzU139QRx+OSHTyys85dKHBQ6eD1SY
 LFY4HpHvInjkvmbNl3Av+yVrReoNht7Z5wB60YkuVen7M3KrtKkv6uQhJTmmJYTi6ZI/
 9jwOR91pI2EUEtS/JcqWeX0LyE8852PAFwvdzewHlk96383WXtOkNsPjdMYhV+bHNXNB
 i+3rC25Y6gOD3Z7X5JkaY3TvHXeV81LegK4FWyM7YLGN7oBUI4OS7iQw4hooisd8xzuW
 R4Hw==
X-Gm-Message-State: AOAM531wxlebn4rM/Xmdpyk6tStK/BlV875XbrX6jFLrzzGx6JayH9n5
 ZvTgIBihKTClmT9SibNgnf1If2s2tlh3Dmi87e0=
X-Google-Smtp-Source: ABdhPJz+dBDKGggBvE0hzelO51+s8kh+SkgfvijCDCKDVePO6KaytGax0SK1hpXNvXjsxj9ef8aDwpI1HWqFWaePdo4=
X-Received: by 2002:a5d:4b04:: with SMTP id v4mr14012962wrq.182.1590377690927; 
 Sun, 24 May 2020 20:34:50 -0700 (PDT)
MIME-Version: 1.0
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <MWHPR11MB1645167038AA1273F6CC6D848CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
In-Reply-To: <MWHPR11MB1645167038AA1273F6CC6D848CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Sun, 24 May 2020 21:34:15 -0600
Message-ID: <CABfawhm6iOhiUuQg6ONQcKAjcg5H=jATFLM4YQ4BLwLkDBdX2A@mail.gmail.com>
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: "Tian, Kevin" <kevin.tian@intel.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 "Lengyel, Tamas" <tamas.lengyel@intel.com>, "Nakajima,
 Jun" <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, May 24, 2020 at 8:33 PM Tian, Kevin <kevin.tian@intel.com> wrote:
>
> > From: Lengyel, Tamas <tamas.lengyel@intel.com>
> > Sent: Saturday, May 23, 2020 12:34 AM
> >
> > When running shallow forks without device models it may be undesirable for
> > Xen
>
> what is shallow forks? and why interrupt injection is not desired without
> device model? If it means just without Qemu thing, you still get local APIC
> interrupts such as timers, PMI, etc.

I refer to shallow forks as VM forks that run without a device model
(i.e. QEMU). Effectively these are domains that run only with CPU and
memory, both of which are copied from the parent VM as needed. When an
interrupt is injected into a VM fork (because its state is copied from
a parent where an interrupt might be pending), the interrupt handler
may try to talk to the device model, which is not present for the
fork. In such situations the VM fork ends up executing the interrupt
handler instead of the code we want to fuzz, which defeats the purpose
of the exercise.
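
As a rough sketch of the workflow (command names follow the VM forking
series; treat the exact subcommand and flags as assumptions that may
differ between Xen versions):

```
# Pause the parent VM whose CPU/memory state the forks will share
# copy-on-write (the domain name here is illustrative)
xl pause parent-vm

# Create a shallow fork of the paused parent. No device model is
# started for the fork, so QEMU-backed devices simply do not exist
# in it -- which is why injecting device interrupts makes no sense.
xl fork-vm -p <parent-domid>
```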

>
> > to inject interrupts. With Windows forks we have observed the kernel going
> > into
> > infinite loops when trying to process such interrupts, likely because it
> > attempts
>
> what is the relationship between shallow forks and windows forks then?

They are the same, but we only observed this behavior with Windows forks.

>
> > to interact with devices that are not responding without QEMU running. By
> > disabling interrupt injection the fuzzer can exercise the target code without
> > interference.
>
> what is the fuzzer?

https://github.com/intel/kernel-fuzzer-for-xen-project/

>
> >
> > Forks & memory sharing are only available on Intel CPUs so this only applies
> > to vmx.
>
> I feel lots of background is missing thus difficult to judge whether below change
> is desired...

You may find it worthwhile to review the VM forking series for some
context: https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg01162.html.
In a nutshell, it's an experimental feature geared towards fuzzing and
it's disabled by default (note that it's gated on CONFIG_MEM_SHARING
being enabled).
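
For reference, that gating happens at hypervisor build-configuration
time; enabling it on a tree that exposes the option looks roughly like
the fragment below (a sketch assuming the standard Xen Kconfig flow --
check your tree's Kconfig for the exact prompt and dependencies):

```
# In xen/.config (or toggle via `make -C xen menuconfig`,
# under the technology-preview options):
CONFIG_MEM_SHARING=y
```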

Tamas


From xen-devel-bounces@lists.xenproject.org Mon May 25 04:02:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 04:02:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd4JD-0003WY-0U; Mon, 25 May 2020 04:01:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jd4JB-0003WT-NF
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 04:01:49 +0000
X-Inumbo-ID: 7546ddb8-9e3c-11ea-ae7f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7546ddb8-9e3c-11ea-ae7f-12813bfff9fa;
 Mon, 25 May 2020 04:01:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xXsJKBeKV8vmpmsqI1XIeoUFmxZBKHH0o+ww8Ib3k/Q=; b=Covk2uALOOzIUKXoBsovnWRJA
 Lj2tgjCI451jOv9a/r4/xiQRjrKa68UAxEvazkqPfOAbhf2DQW4jd97vMHNEwDzpOvgmFElQn9XjQ
 48TO1rlkEtbwRx3++sgzlkeL4e8GyHbNy18U0cMPwPK2o8pFPgsYnqjlm4qcmFDMIRw8o=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jd4JA-0007PT-Bs; Mon, 25 May 2020 04:01:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jd4J9-0007JV-Ud; Mon, 25 May 2020 04:01:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jd4J9-0002Gw-Sn; Mon, 25 May 2020 04:01:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150355-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150355: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=437b0aa06a014ce174e24c0d3530b3e9ab19b18b
X-Osstest-Versions-That: xen=5e015d48a5ee68ba03addad2698364d8f015afec
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 04:01:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150355 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150355/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start              fail REGR. vs. 150346

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150341
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150346
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150346
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150346
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150346
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150346
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150346
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150346
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150346
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150346
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  437b0aa06a014ce174e24c0d3530b3e9ab19b18b
baseline version:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec

Last test of basis   150346  2020-05-24 01:51:34 Z    1 days
Testing same since   150355  2020-05-24 17:36:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Corey Minyard <cminyard@mvista.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e015d48a5..437b0aa06a  437b0aa06a014ce174e24c0d3530b3e9ab19b18b -> master


From xen-devel-bounces@lists.xenproject.org Mon May 25 05:03:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 05:03:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd5GF-0000iS-Qy; Mon, 25 May 2020 05:02:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jd5GF-0000iN-1i
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 05:02:51 +0000
X-Inumbo-ID: fa6a1520-9e44-11ea-ae88-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa6a1520-9e44-11ea-ae88-12813bfff9fa;
 Mon, 25 May 2020 05:02:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zs9GZwf0ncnGqn1NG4B3deSE1OxXDnOEKj+d6G8V3DA=; b=vXNbKLsdp2AY9o/K76sp2TgZJ
 0g05iL+Tb4qTB2g4dN0wesewJPtDasdXqsUIOIkRuHzOfaQLxKEhYl0O3eUM14o5q6oyMalsAQHHg
 AmBuMqBAqiyFVEwmjO+4Nr0JPL+9lo4LfyIlYzXYZiBS6zau36XGz/4UaKECKYTCwPMOs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jd5GC-0000eM-4r; Mon, 25 May 2020 05:02:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jd5GB-0000cZ-Ij; Mon, 25 May 2020 05:02:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jd5GB-0000D3-I4; Mon, 25 May 2020 05:02:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150357-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150357: regressions - trouble:
 blocked/fail/pass/starved
X-Osstest-Failures: seabios:build-amd64-xsm:xen-build:fail:regression
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: seabios=d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
X-Osstest-Versions-That: seabios=7e9db04923854b7f4edca33948f55abee22907b9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 05:02:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150357 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150357/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 150308

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150308
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150308
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150308
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64  2 hosts-allocate              starved n/a

version targeted for testing:
 seabios              d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
baseline version:
 seabios              7e9db04923854b7f4edca33948f55abee22907b9

Last test of basis   150308  2020-05-21 18:10:18 Z    3 days
Testing same since   150357  2020-05-25 02:10:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kevin O'Connor <kevin@koconnor.net>
  Matt DeVillier <matt.devillier@gmail.com>

jobs:
 build-amd64-xsm                                              fail    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          starved 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
Author: Kevin O'Connor <kevin@koconnor.net>
Date:   Sun May 24 21:57:19 2020 -0400

    boot: Fixup check for only one item in boot list
    
    Signed-off-by: Kevin O'Connor <kevin@koconnor.net>

commit 926fd4e05e667e7835073ee7c8612c11e23dc57f
Author: Matt DeVillier <matt.devillier@gmail.com>
Date:   Sun May 24 17:45:34 2020 -0500

    boot: Fix logic for boot menu display
    
    Commit c61193d3 [boot: Extend `etc/show-boot-menu`...] changed the
    logic surrounding the use of show_boot_menu incorrectly, leading the
    boot menu to be skipped by default with no way to override. Correct
    the logic error so that show_boot_menu works as documented.
    
    Test: build/boot SeaBIOS, verify boot menu option shown by default.
    
    Signed-off-by: Matt DeVillier <matt.devillier@gmail.com>


From xen-devel-bounces@lists.xenproject.org Mon May 25 05:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 05:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd5V4-0001jN-8J; Mon, 25 May 2020 05:18:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aFRw=7H=gmail.com=buycomputer40@srs-us1.protection.inumbo.net>)
 id 1jd5V3-0001jI-9V
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 05:18:09 +0000
X-Inumbo-ID: 1e4818f0-9e47-11ea-9887-bc764e2007e4
Received: from mail-lf1-x130.google.com (unknown [2a00:1450:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e4818f0-9e47-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 05:18:08 +0000 (UTC)
Received: by mail-lf1-x130.google.com with SMTP id e125so9821287lfd.1
 for <xen-devel@lists.xenproject.org>; Sun, 24 May 2020 22:18:08 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
 bh=uVmlM1Szc7BB+byHXsR7/2eU40xH68Kxf4npVbZ9IkM=;
 b=CXrczYQJEF1jd5qLrHoK4Hl/r5GAE6PqgGwVsOg4ebUAKtqmGxAPBO2mLbQ6B6x04v
 dJY7ajVj9F0seQ0zxurjwfxgpTMWKvd7GM1ftwdfTTHOmcYRTL97e1B8KsRcMsS75W+k
 Emr5G+/J/Ltkd+KN7nJFvQo61MhQ9wiN8dBpCE7hRA/Q6G3YBmgLkM8jkhtpj9nsNcyu
 J6QR9iuQMxvNmXQpeb+bVr1IbN2SwQF1LVNUhzNRQISPJ2B1XTHxubwDMwJ1y3nTAi6m
 Bd0DypK8EMiKPUadG/mbFq0ftxiPjx14XhJovoRzL8Gt/d+LUjQo0mFtRUtSOjruPAeE
 0QFw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to;
 bh=uVmlM1Szc7BB+byHXsR7/2eU40xH68Kxf4npVbZ9IkM=;
 b=aPbqGYleEUESEy1sD3+5fa/b8t3MmD7l4TyujQYB5UHq9wtHD9niILF3qvs3405PU5
 ZyKFwkD1R+kSMSm5pSKed0cxwOgVATjRTorKX2PPjxiwy1tgZHNPa8Y1wA8FLyT3SEho
 X/quu5slFwx02xhMEfGPrX221zJIksFjbZvh6CATe5R24PQbTvB1CVmIyJjxPw1f7RqI
 0A6RWiV78RajySkQ3otC1VRzSz7z4/HFHoilN4sley/mdXqXOTeYvrqmfXXyc3c3Daix
 NUAJ96LTuL3scHzsr25jMCzyaqTYZz7iv8/6KXg9DumCYFcc28TJ5xkwwn3oPcZ7AYv3
 nH3A==
X-Gm-Message-State: AOAM533XTdWDmZkjfWaQ38j8gGQXXbgZBjU3EdkeDl3ItnT/mFSQAaAn
 8p8IQCY8KtFTjkAQtV+P0wZwp3VQTI1MHsRMhEQ=
X-Google-Smtp-Source: ABdhPJw8qdjdyHu3/5T+MbgwGEeB5/0K7SFBAzpohMFiQVkN7CAyWX6+tG/7Qv+gkObYR02enJDcskFP+kzoWqCLuq0=
X-Received: by 2002:a19:150:: with SMTP id 77mr13425593lfb.71.1590383887202;
 Sun, 24 May 2020 22:18:07 -0700 (PDT)
MIME-Version: 1.0
References: <CANSXg2FGtiDT05sQUpSAshAsdP4wSjPgQbfw_+aKJuAzSwvJuQ@mail.gmail.com>
 <da7e41b5-88a1-13ab-d52b-0652c16608af@suse.com>
 <MWHPR11MB1645DC1C5782DDA28C9BB1CB8CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
In-Reply-To: <MWHPR11MB1645DC1C5782DDA28C9BB1CB8CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
From: buy computer <buycomputer40@gmail.com>
Date: Mon, 25 May 2020 08:17:54 +0300
Message-ID: <CANSXg2EiauZfTMsmqzcB2ShUCr67rB+mHBm4EVtWhMaUL8NL-w@mail.gmail.com>
Subject: Re: iommu=no-igfx
To: "Tian, Kevin" <kevin.tian@intel.com>, xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="000000000000a742b305a6721ab9"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--000000000000a742b305a6721ab9
Content-Type: text/plain; charset="UTF-8"

On Mon, May 25, 2020 at 5:16 AM Tian, Kevin <kevin.tian@intel.com> wrote:

> > From: Jan Beulich <jbeulich@suse.com>
> > Sent: Wednesday, May 20, 2020 7:11 PM
> >
> > On 11.05.2020 19:43, buy computer wrote:
> > > I've been working on a Windows 10 HVM on a Debian 10 dom0. When I
> > was first
> > > trying to make the VM, I was getting IOMMU errors. I had a hard time
> > > figuring out what to do about this, and finally discovered that putting
> > > iommu=no-igfx in the grub stopped the errors.
> > >
> > > Unfortunately, without the graphics support the VM is understandably
> > slow,
> > > and can crash. I was also only now pointed to the page
> > > <https://xenbits.xen.org/docs/unstable/misc/xen-command-
> > line.html#iommu>
> > > which says to report any errors that get fixed by using iommu=no-igfx.
>
> what is the platform and linux kernel version in this context?
>
>
I'm not sure what you meant by 'platform', so I'll try to cover all the
bases.
Kernel: 4.19.0-9-amd64 GNU/Linux
Debian 10.4
Lenovo E490 ThinkPad
Intel Integrated Graphics 620


> >
> > Thanks for the report. For context I'll quote the commit message of
> > the commit introducing the option as well as the request to report
> > issues fixed with it:
> >
> > "As we still cannot find a proper fix for this problem, this patch adds
> >  iommu=igfx option to control whether Intel graphics IOMMU is enabled.
> >  Running Xen with iommu=no-igfx is similar to running Linux with
> >  intel_iommu=igfx_off, which disables IOMMU for Intel GPU. This can be
> >  used by users to manually workaround the problem before a fix is
> >  available for i915 driver."
> >
> > This was in 2015, referencing Linux >= 3.19. I have no idea whether
> > the underlying driver issue(s) has/have been fixed. The addresses
> > referenced are variable enough and all within RAM, so I'd conclude
> > this is not a "missing RMRR" issue.
>
> Variable enough but not within RAM. From E820:
>
> (XEN)  0000000100000000 - 0000000871800000 (usable)
>
> But the referenced addresses are way higher:
>
> (XEN) [VT-D]DMAR:[DMA Read] Request device [0000:00:02.0] fault
> addr 76c615d000, iommu reg = ffff82c000a0c000
> (XEN) [VT-D]DMAR: reason 06 - PTE Read access is not set
>
> >
> > Cc-ing the VT-d maintainer for possible insights or thoughts.
> >
> > Jan
>
> I don't have other thoughts except the weird addresses. It might be
> good to add some trace in dom0's i915 driver to see whether those
> addresses are intended or not.
>
>
Thanks for the insight! I'd love to help with the trace, but I don't know
how to do that. If you could point me in the right direction, I'd try to
give it a shot.

Thanks
> Kevin
>

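P.S. The address comparison quoted above can be sanity-checked with a few
lines of hex arithmetic (the constants are copied from the quoted Xen log
lines; the snippet itself is only an illustration, not Xen code):

```python
# Constants taken from the quoted Xen output above.
USABLE_START = 0x0000000100000000  # start of the E820 "(usable)" range
USABLE_END   = 0x0000000871800000  # end of the E820 "(usable)" range (~33.8 GiB)
FAULT_ADDR   = 0x76c615d000        # VT-d DMAR fault address (~475 GiB)

# The faulting DMA address lies far above the top of usable RAM,
# which supports the "variable but not within RAM" observation.
in_ram = USABLE_START <= FAULT_ADDR < USABLE_END
print(f"fault {FAULT_ADDR:#x} within usable RAM: {in_ram}")  # -> False
```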

--000000000000a742b305a6721ab9--


From xen-devel-bounces@lists.xenproject.org Mon May 25 05:39:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 05:39:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd5p3-0003Uh-Qz; Mon, 25 May 2020 05:38:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jd5p2-0003Uc-F5
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 05:38:48 +0000
X-Inumbo-ID: fe1945c4-9e49-11ea-ae8c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe1945c4-9e49-11ea-ae8c-12813bfff9fa;
 Mon, 25 May 2020 05:38:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KeypeIFnM1YdZVcCmQ/U0rIGOZ4+XXzf7Q9rM3RHKY8=; b=G/HXBwZij0EMLJyG8dQWsZ+5f
 3XBQx3oX63VqIIeh00cQpiVdUokZa8TW6CQqZDIc9oHU+yPC3Z86TE2w2PlO+p45H7sTsVHQCtpn8
 taj3c/4yEydpF0619DFYYvGhoc2KqgonLX2UY333jn5+yFZgS8QVxvfECLNq6YFYuZP7s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jd5ov-0001MF-DO; Mon, 25 May 2020 05:38:41 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jd5ov-0001VI-3z; Mon, 25 May 2020 05:38:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jd5ov-0006Mk-3O; Mon, 25 May 2020 05:38:41 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150356-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150356: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:hosts-allocate:starved:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=98790bbac4db1697212ce9462ec35ca09c4a2810
X-Osstest-Versions-That: linux=caffb99b6929f41a69edbb5aef3a359bf45f3315
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 05:38:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150356 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150356/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150345
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150345
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150345
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150345
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150345
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150345
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64  2 hosts-allocate             starved n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  2 hosts-allocate             starved n/a

version targeted for testing:
 linux                98790bbac4db1697212ce9462ec35ca09c4a2810
baseline version:
 linux                caffb99b6929f41a69edbb5aef3a359bf45f3315

Last test of basis   150345  2020-05-24 00:39:39 Z    1 days
Testing same since   150356  2020-05-24 17:39:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ard Biesheuvel <ardb@kernel.org>
  Arvind Sankar <nivedita@alum.mit.edu>
  Benjamin Thiel <b.thiel@posteo.de>
  Borislav Petkov <bp@suse.de>
  Dave Young <dyoung@redhat.com>
  Heinrich Schuchardt <xypron.glpk@gmx.de>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Lenny Szubowicz <lszubowi@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Loïc Yhuel <loic.yhuel@gmail.com>
  Mike Lothian <mike@fireburn.co.uk>
  Nathan Chancellor <natechancellor@gmail.com>
  Pavankumar Kondeti <pkondeti@codeaurora.org>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Auld <pauld@redhat.com>
  Punit Agrawal <punit1.agrawal@toshiba.co.jp>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Valentin Schneider <valentin.schneider@arm.com>
  Vincent Guittot <vincent.guittot@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         starved 
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         starved 
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/linux-pvops.git
   caffb99b6929..98790bbac4db  98790bbac4db1697212ce9462ec35ca09c4a2810 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Mon May 25 06:06:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 06:06:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd6FP-00068Q-4o; Mon, 25 May 2020 06:06:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jd6FO-00068L-F2
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 06:06:02 +0000
X-Inumbo-ID: ce7c6a0e-9e4d-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce7c6a0e-9e4d-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 06:06:00 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A634FB327;
 Mon, 25 May 2020 06:06:01 +0000 (UTC)
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Tamas K Lengyel <tamas.lengyel@intel.com>
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
Date: Mon, 25 May 2020 08:05:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22.05.2020 18:33, Tamas K Lengyel wrote:
> When running shallow forks without device models it may be undesirable for Xen
> to inject interrupts. With Windows forks we have observed the kernel going into
> infinite loops when trying to process such interrupts, likely because it attempts
> to interact with devices that are not responding without QEMU running. By
> disabling interrupt injection the fuzzer can exercise the target code without
> interference.
> 
> Forks & memory sharing are only available on Intel CPUs so this only applies
> to vmx.

Looking at e.g. mem_sharing_control() I can't seem to confirm this.
Would you mind pointing me at where this restriction comes from?

> --- a/xen/arch/x86/hvm/vmx/intr.c
> +++ b/xen/arch/x86/hvm/vmx/intr.c
> @@ -256,6 +256,12 @@ void vmx_intr_assist(void)
>      if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
>          return;
>  
> +#ifdef CONFIG_MEM_SHARING
> +    /* Block event injection for VM fork if requested */
> +    if ( unlikely(v->domain->arch.hvm.mem_sharing.block_interrupts) )
> +        return;
> +#endif

The two earlier returns are temporary as far as the guest is concerned,
i.e. eventually the interrupt(s) will get delivered. The one you add
looks as if it is a permanent thing, i.e. interrupt requests will pile
up and potentially confuse a guest down the road. This _may_ be okay
for your short-lived-shallow-fork scenario, but then wants at least
calling out in the public header by a comment (and I think the same
goes for XENMEM_FORK_WITH_IOMMU_ALLOWED that's already there).

> --- a/xen/include/asm-x86/hvm/domain.h
> +++ b/xen/include/asm-x86/hvm/domain.h
> @@ -74,6 +74,8 @@ struct mem_sharing_domain
>       * to resume the search.
>       */
>      unsigned long next_shared_gfn_to_relinquish;
> +
> +    bool block_interrupts;
>  };

Please can you avoid unnecessary growth of the structure by inserting
next to the pre-existing bool rather than at the end?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 25 07:02:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 07:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd782-0002p9-FL; Mon, 25 May 2020 07:02:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eGcw=7H=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jd781-0002p4-8O
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 07:02:29 +0000
X-Inumbo-ID: b0b4c3d8-9e55-11ea-ae97-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b0b4c3d8-9e55-11ea-ae97-12813bfff9fa;
 Mon, 25 May 2020 07:02:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C7A5DABCF;
 Mon, 25 May 2020 07:02:27 +0000 (UTC)
Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
To: paul@xen.org, 'Jan Beulich' <jbeulich@suse.com>,
 'Kevin Tian' <kevin.tian@intel.com>, 'Julien Grall' <julien@xen.org>,
 'Jun Nakajima' <jun.nakajima@intel.com>, 'Wei Liu' <wl@xen.org>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>
References: <20200519072106.26894-1-jgross@suse.com>
 <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
 <305d829f-24a9-1a6d-2131-fed92c22c305@suse.com>
 <000f01d62db4$57181e90$05485bb0$@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <fc55f4dc-c802-2153-cd6a-736a29e8a396@suse.com>
Date: Mon, 25 May 2020 09:02:23 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <000f01d62db4$57181e90$05485bb0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.20 10:06, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: 19 May 2020 08:45
>> To: Jürgen Groß <jgross@suse.com>; Kevin Tian <kevin.tian@intel.com>; Julien Grall <julien@xen.org>;
>> Jun Nakajima <jun.nakajima@intel.com>; Wei Liu <wl@xen.org>; Ian Jackson <ian.jackson@eu.citrix.com>;
>> Daniel De Graaf <dgdegra@tycho.nsa.gov>; Paul Durrant <paul@xen.org>
>> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini <sstabellini@kernel.org>; Andrew Cooper
>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Anthony PERARD
>> <anthony.perard@citrix.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné
>> <roger.pau@citrix.com>
>> Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
>>
>> On 19.05.2020 09:30, Jürgen Groß wrote:
>>> On 19.05.20 09:20, Juergen Gross wrote:
>>>>
>>>> Juergen Gross (12):
>>>>     xen/vmx: let opt_ept_ad always reflect the current setting
>>>>     xen: add a generic way to include binary files as variables
>>>>     docs: add feature document for Xen hypervisor sysfs-like support
>>>>     xen: add basic hypervisor filesystem support
>>>>     libs: add libxenhypfs
>>>>     tools: add xenfs tool
>>>>     xen: provide version information in hypfs
>>>>     xen: add /buildinfo/config entry to hypervisor filesystem
>>>>     xen: add runtime parameter access support to hypfs
>>>>     tools/libxl: use libxenhypfs for setting xen runtime parameters
>>>>     tools/libxc: remove xc_set_parameters()
>>>>     xen: remove XEN_SYSCTL_set_parameter support
>>>>
>>>>    .gitignore                          |   6 +
>>>>    docs/features/hypervisorfs.pandoc   |  92 +++++
>>>>    docs/man/xenhypfs.1.pod             |  61 ++++
>>>>    docs/misc/hypfs-paths.pandoc        | 165 +++++++++
>>>>    tools/Rules.mk                      |   8 +-
>>>>    tools/flask/policy/modules/dom0.te  |   4 +-
>>>>    tools/libs/Makefile                 |   1 +
>>>>    tools/libs/hypfs/Makefile           |  16 +
>>>>    tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
>>>>    tools/libs/hypfs/include/xenhypfs.h |  90 +++++
>>>>    tools/libs/hypfs/libxenhypfs.map    |  10 +
>>>>    tools/libs/hypfs/xenhypfs.pc.in     |  10 +
>>>>    tools/libxc/include/xenctrl.h       |   1 -
>>>>    tools/libxc/xc_misc.c               |  21 --
>>>>    tools/libxl/Makefile                |   3 +-
>>>>    tools/libxl/libxl.c                 |  53 ++-
>>>>    tools/libxl/libxl_internal.h        |   1 +
>>>>    tools/libxl/xenlight.pc.in          |   2 +-
>>>>    tools/misc/Makefile                 |   6 +
>>>>    tools/misc/xenhypfs.c               | 192 ++++++++++
>>>>    tools/xl/xl_misc.c                  |   1 -
>>>>    xen/arch/arm/traps.c                |   3 +
>>>>    xen/arch/arm/xen.lds.S              |  13 +-
>>>>    xen/arch/x86/hvm/hypercall.c        |   3 +
>>>>    xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
>>>>    xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
>>>>    xen/arch/x86/hypercall.c            |   3 +
>>>>    xen/arch/x86/pv/domain.c            |  21 +-
>>>>    xen/arch/x86/pv/hypercall.c         |   3 +
>>>>    xen/arch/x86/xen.lds.S              |  12 +-
>>>>    xen/common/Kconfig                  |  23 ++
>>>>    xen/common/Makefile                 |  13 +
>>>>    xen/common/grant_table.c            |  62 +++-
>>>>    xen/common/hypfs.c                  | 452 +++++++++++++++++++++++
>>>>    xen/common/kernel.c                 |  84 ++++-
>>>>    xen/common/sysctl.c                 |  36 --
>>>>    xen/drivers/char/console.c          |  72 +++-
>>>>    xen/include/Makefile                |   1 +
>>>>    xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
>>>>    xen/include/public/hypfs.h          | 129 +++++++
>>>>    xen/include/public/sysctl.h         |  19 +-
>>>>    xen/include/public/xen.h            |   1 +
>>>>    xen/include/xen/hypercall.h         |  10 +
>>>>    xen/include/xen/hypfs.h             | 123 +++++++
>>>>    xen/include/xen/kernel.h            |   3 +
>>>>    xen/include/xen/lib.h               |   1 -
>>>>    xen/include/xen/param.h             | 126 +++++--
>>>>    xen/include/xlat.lst                |   2 +
>>>>    xen/include/xsm/dummy.h             |   6 +
>>>>    xen/include/xsm/xsm.h               |   6 +
>>>>    xen/tools/binfile                   |  43 +++
>>>>    xen/xsm/dummy.c                     |   1 +
>>>>    xen/xsm/flask/Makefile              |   5 +-
>>>>    xen/xsm/flask/flask-policy.S        |  16 -
>>>>    xen/xsm/flask/hooks.c               |   9 +-
>>>>    xen/xsm/flask/policy/access_vectors |   4 +-
>>>>    56 files changed, 2445 insertions(+), 193 deletions(-)
>>>>    create mode 100644 docs/features/hypervisorfs.pandoc
>>>>    create mode 100644 docs/man/xenhypfs.1.pod
>>>>    create mode 100644 docs/misc/hypfs-paths.pandoc
>>>>    create mode 100644 tools/libs/hypfs/Makefile
>>>>    create mode 100644 tools/libs/hypfs/core.c
>>>>    create mode 100644 tools/libs/hypfs/include/xenhypfs.h
>>>>    create mode 100644 tools/libs/hypfs/libxenhypfs.map
>>>>    create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
>>>>    create mode 100644 tools/misc/xenhypfs.c
>>>>    create mode 100644 xen/common/hypfs.c
>>>>    create mode 100644 xen/include/public/hypfs.h
>>>>    create mode 100644 xen/include/xen/hypfs.h
>>>>    create mode 100755 xen/tools/binfile
>>>>    delete mode 100644 xen/xsm/flask/flask-policy.S
>>>>
>>>
>>> There are some Acks missing on this series, so please have a look at the
>>> patches!
>>>
>>> Missing in particular are:
>>>
>>> - Patch 1: VMX maintainers
>>> - Patch 2 + 4: XSM maintainer
>>> - Patch 4 + 9: Arm maintainer
>>> - Patch 10 + 11: tools maintainers
>>>
>>> I'd really like the series to go into 4.14 (deadline this Friday).
>>
> 
> I would also like to see this in 4.14.
> 
>> FTR I'm intending to waive the need for the first three of the named
>> sets if they don't arrive by Friday (and there I don't mean last
>> minute on Friday) - they're not overly intrusive (maybe with the
>> exception of the XSM parts in #4) and the series has been pending
>> for long enough. I don't feel comfortable to do so for patch 10,
>> though; patch 11 looks to be simple enough again.
>>
>> Paul, as the release manager, please let me know if you disagree.
>>
> 
> Looking at patch #4, I'm not confident that the XSM parts are complete (e.g. does xen.if need updating?). Also I'd put the new access vector in xen2, since that's where set_parameter currently is (and will be removed from in a later patch), but the xen class does appear to have space so that's really just my taste.

I don't think xen.if needs updating, as it contains only macros for
groups of operations.

As the new hypercall isn't only replacing set_parameter, but has much
wider semantics, I don't think it should go to xen2. There will
probably be more interfaces replaced and/or added after all.


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 25 07:19:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 07:19:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jd7Nh-0003r9-TX; Mon, 25 May 2020 07:18:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XQbn=7H=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jd7Ng-0003r1-GU
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 07:18:40 +0000
X-Inumbo-ID: f488715c-9e57-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f488715c-9e57-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 07:18:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0C93BAD12;
 Mon, 25 May 2020 07:18:40 +0000 (UTC)
Message-ID: <a3acb930c5656524f6229592be5d542f0cb9da60.camel@suse.com>
Subject: Re: [PATCH v3 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
From: Dario Faggioli <dfaggioli@suse.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Date: Mon, 25 May 2020 09:18:36 +0200
In-Reply-To: <20200514153614.2240-3-jgross@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>
 <20200514153614.2240-3-jgross@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-oAq6UK647ZbDv2w2R5Uq"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-oAq6UK647ZbDv2w2R5Uq
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-05-14 at 17:36 +0200, Juergen Gross wrote:
> With support of core scheduling sched_unit_migrate_finish() gained a
> call of sync_vcpu_execstate() as it was believed to be called as a
> result of vcpu migration in any case.
>
> In case of migrating a vcpu away from a physical cpu for a short
> period
> of time ionly without ever being scheduled on the selected new cpu
          ^
with this typo taken care of (I guess, upon commit)...

> this
> might not be true, so drop the call and let the lazy state syncing do
> its job.
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-oAq6UK647ZbDv2w2R5Uq
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7LcUwACgkQFkJ4iaW4
c+6ROQ/+MJGiLNYQIM8h0pflGhWozbng0aZ5OEwdkBg3jrHTbtfD+UjZqJxG+/ck
0UZUi9CzPEW8O2L0ikM6zTPe5Rl6z1isp8G0Airs4ZnLFcN/5zFce6DJRNl2b61E
ubTxnYLKH5dP/DMLGCZmJww/OsK925NI0sWzvtXbFRUbP1zEyvD2Xjf//N/2UCH0
YijZy6qQLSDBX28aao5/1thtVnQoVHfAJKBZ711ojE6X4dmKEb4qGTtDQ8n2mpDg
VdwtT/wp3goraA7hgdpV7GLhAZxj86qhyTac8cqQZIbx5GTzI3d0CbjYyg1D7Vq2
C6+3mX4Cruaj5xxw2q3y0C6rwObWfpooZUkf/ULxALbcaw+Tkvag1m67c5BMMYEZ
ZSi+NGZnP5e2puci3hEHEJukSGuiKfTyuXpqcMq7qdDFTbq8UQEdqL1WltJacueI
i8ISYHzF6RXdRP5tdauk3tozA2VP1sYXlcH8nHRnteGhuNylaFvQYBz1pHYt4ARo
6uzHJD6UnBCJcIBqlWkEamBy6NAYU+wirx+qI4xpfRurczwouosh41Cxs+RJPuTb
oGCQy741iZf1yWQTQEsOPRjUbEd/TwVXXeBZ84eKTbL5jlmoJvs0wvGsXTaGb24w
rCQhmUzha782jjpKoyGhThZDym1PeFWly8MWzrTal8GpOupDweA=
=94pE
-----END PGP SIGNATURE-----

--=-oAq6UK647ZbDv2w2R5Uq--



From xen-devel-bounces@lists.xenproject.org Mon May 25 12:20:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 12:20:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdC4x-0004NZ-TM; Mon, 25 May 2020 12:19:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RZKA=7H=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jdC4w-0004NU-2f
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 12:19:38 +0000
X-Inumbo-ID: fe9f03a2-9e81-11ea-ae69-bc764e2007e4
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe9f03a2-9e81-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 12:19:37 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id j21so20306781ejy.1
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 05:19:35 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=dhZgBKYxFLaJ+pXXELZQ9RPmxhzJI8715ah/Xe7Fhk0=;
 b=x7B3DzsJTyorEeWuMx7d0uf3TMhxX051u4zFlkNiQq++F9in9O8JpS5OAmxRUptUeR
 FvDd134b9jGeIegBS6ZQxNvNlbk4gC7KTkAd+nZOmqh07a1GcKh3sJxbxr9A5cPJoP+0
 2F3H9p2H9XovZ25OuhZP5f4ibbMYHuNH5OkRUm+a/hzTgKeqwSKk18YbMYTH/FfJCcNj
 BqjVg4SvFRI29TpcBqmOZgsVQeGNq6XArCI/F5jZe/MvR30AQRdu1ZhTQhYmclDUuflG
 8xjVX889uIElXclE/DOoeRI7JMDJEg8IPUyJM/9ssCBj8xvT7+cG8LIj9CqeGqNj4+Nh
 RFMg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=dhZgBKYxFLaJ+pXXELZQ9RPmxhzJI8715ah/Xe7Fhk0=;
 b=m03kz/xYPpFSlyEoxqZ1u3TNfTxGyQKfd1UBh27BGJHkwrRwkmLALBiW60LoCC6F2I
 cqYnJQ5VVLGig8KbdMqGBGTFSAsfjV6MQIm2sXuC0r+NL4K6ZZCKbmd916xGuSZ8JjD3
 P3jWUJDsdpRW/UqS81f9Jqq1jHv6+NLpRhAtGwq4KYbHUfDe+lbodiCrl/BvrGuvR8wZ
 Y3LA26AwWj75niYHCMEuTTYDmnvZpbzj+xi8y+aFog/7K7bp+vhPNxIc81cdHPZcgV3W
 afXe+olFS92+e1Q2PNfUxhK4J3GNsgOpN7Ez4AU+VvzCN2BuF+EjjiDHA46G7Bs0gQCE
 0AEQ==
X-Gm-Message-State: AOAM531Fs7QDV+Lmf7gR0Wt2l5zslrJlqC1gGWhc7utqhFoJ7cS0F/f6
 bHGD01DSWvosRO4IR2uWhl6cBaqDBh8=
X-Google-Smtp-Source: ABdhPJxydRNilDAnJrFy/jHIdRlFI5Dz3HUT/3FxBRU72+AAQz7z7eYrZsOIiLdY1hknESCkUM2Wsw==
X-Received: by 2002:a17:906:34c5:: with SMTP id
 h5mr9527537ejb.325.1590409173566; 
 Mon, 25 May 2020 05:19:33 -0700 (PDT)
Received: from mail-wm1-f45.google.com (mail-wm1-f45.google.com.
 [209.85.128.45])
 by smtp.gmail.com with ESMTPSA id dt12sm15746384ejb.102.2020.05.25.05.19.31
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 25 May 2020 05:19:32 -0700 (PDT)
Received: by mail-wm1-f45.google.com with SMTP id u188so16485225wmu.1
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 05:19:31 -0700 (PDT)
X-Received: by 2002:a1c:acc8:: with SMTP id
 v191mr26548126wme.154.1590409171099; 
 Mon, 25 May 2020 05:19:31 -0700 (PDT)
MIME-Version: 1.0
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
In-Reply-To: <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 25 May 2020 06:18:55 -0600
X-Gmail-Original-Message-ID: <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
Message-ID: <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 25, 2020 at 12:06 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 22.05.2020 18:33, Tamas K Lengyel wrote:
> > When running shallow forks without device models it may be undesirable for Xen
> > to inject interrupts. With Windows forks we have observed the kernel going into
> > infinite loops when trying to process such interrupts, likely because it attempts
> > to interact with devices that are not responding without QEMU running. By
> > disabling interrupt injection the fuzzer can exercise the target code without
> > interference.
> >
> > Forks & memory sharing are only available on Intel CPUs so this only applies
> > to vmx.
>
> Looking at e.g. mem_sharing_control() I can't seem to confirm this.
> Would you mind pointing me at where this restriction comes from?

Both mem_access and mem_sharing are only implemented for EPT:
http://xenbits.xen.org/hg/xen-unstable.hg/file/5eadf9363c25/xen/arch/x86/mm/p2m-ept.c#l126.

>
> > --- a/xen/arch/x86/hvm/vmx/intr.c
> > +++ b/xen/arch/x86/hvm/vmx/intr.c
> > @@ -256,6 +256,12 @@ void vmx_intr_assist(void)
> >      if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
> >          return;
> >
> > +#ifdef CONFIG_MEM_SHARING
> > +    /* Block event injection for VM fork if requested */
> > +    if ( unlikely(v->domain->arch.hvm.mem_sharing.block_interrupts) )
> > +        return;
> > +#endif
>
> The two earlier returns are temporary as far as the guest is concerned,
> i.e. eventually the interrupt(s) will get delivered. The one you add
> looks as if it is a permanent thing, i.e. interrupt requests will pile
> up and potentially confuse a guest down the road. This _may_ be okay
> for your short-lived-shallow-fork scenario, but then wants at least
> calling out in the public header by a comment (and I think the same
> goes for XENMEM_FORK_WITH_IOMMU_ALLOWED that's already there).

This is indeed only for short-lived forks; that's why it is an
optional flag that can be enabled when creating forks and is not on
by default. In that use-case the VM executes for fractions of a second
and we want to execute only very specific code segments with
absolutely no interference. Interrupts in that case are just a
nuisance that in the best case slows the fuzzing process down and, as
we observed, in the worst case can completely stall it.

>
> > --- a/xen/include/asm-x86/hvm/domain.h
> > +++ b/xen/include/asm-x86/hvm/domain.h
> > @@ -74,6 +74,8 @@ struct mem_sharing_domain
> >       * to resume the search.
> >       */
> >      unsigned long next_shared_gfn_to_relinquish;
> > +
> > +    bool block_interrupts;
> >  };
>
> Please can you avoid unnecessary growth of the structure by inserting
> next to the pre-existing bool rather than at the end?

Sure. Do you want me to resend the patch for that?

Tamas


From xen-devel-bounces@lists.xenproject.org Mon May 25 12:38:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 12:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdCN7-00067W-GZ; Mon, 25 May 2020 12:38:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RZKA=7H=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jdCN6-00067R-50
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 12:38:24 +0000
X-Inumbo-ID: 9ef58310-9e84-11ea-b9cf-bc764e2007e4
Received: from mail-ed1-x543.google.com (unknown [2a00:1450:4864:20::543])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ef58310-9e84-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 12:38:23 +0000 (UTC)
Received: by mail-ed1-x543.google.com with SMTP id e10so14905350edq.0
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 05:38:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=UDnWw0ElEdYX87EaI0aeQxvrrFALObdoslB3urYMcgw=;
 b=pNE8FmMfkJsqyqLvG4l9RAWZ7JADihHzgoyKL3TLhsXkKJZV/YwbQTaew2A/Vg4/ww
 BVIdrsRvyWH5/31cRIQ04ImH4eHtUkzqxQmw6knwQd0BFiwvhOg/+2PWz/Nunp5qVJX4
 VtKbbouHJAlO69GHMSOPxuml2tLqx6FxUtS2gUpM47hOpA7NtaYKgurIjXFL6PWrYra1
 zmwS80/bHU3VRMgD2ahqzOhiQAFty45ldSud2q8voqJWC7LmCIYrwqiDINlPyESFpzU6
 X+v50Z4TK5ANi4jBKHzCqDaxXOa1Ys7SMc2mfObDyFjAi4Lz2RyC+t6uPIxPwZtxjuXA
 80Zg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=UDnWw0ElEdYX87EaI0aeQxvrrFALObdoslB3urYMcgw=;
 b=MdPPju2D7SuGsE+XdYm/qvOrNvxrX6vxllnZhOfgq6CPKdn8neG9ToMWJSU5VsIv/N
 aj+Xhh0ameMbYq62J+TfEXtF+poUyFGgpZvRsIs/rkcurZlIQlrBz+N6P0Pi96MAMTqc
 RtrGVZ4zTK3BRWNTxU+XWZryNdLf7O0qAlbE/bbryg8c9MfZ6dxw0dL6+hervfPMS9H1
 qZqLMdRHRSXYy8bD6WCI2LPDDxV/y1IAu85WkJM/HaGlQC0SVvHFjCQn6F0JYLnGzabT
 a6g33RnaUOSIEJVP5Cuwza4p0ZdDQqon2oLzk2KPjDv9UHpRlu95c+qKP8wXkfnUuxX6
 iHFw==
X-Gm-Message-State: AOAM532elsbzkkiTTc4XIYw0/bav1k1auj9qyIj1gzo4vKIXbgmkslQ5
 oksbL9uDdXScWgU7Hdh2hGtYzfVv5e8=
X-Google-Smtp-Source: ABdhPJxkU63lIA3reQoJNTrQDy4Ax+OXyuYpsktoHa0cwISg2xLybbHAFV3xK8vhagcgu+RNiwOD0w==
X-Received: by 2002:a05:6402:8c1:: with SMTP id
 d1mr15157207edz.265.1590410302058; 
 Mon, 25 May 2020 05:38:22 -0700 (PDT)
Received: from mail-wr1-f47.google.com (mail-wr1-f47.google.com.
 [209.85.221.47])
 by smtp.gmail.com with ESMTPSA id k90sm15553076edc.2.2020.05.25.05.38.20
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 25 May 2020 05:38:20 -0700 (PDT)
Received: by mail-wr1-f47.google.com with SMTP id x14so11572447wrp.2
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 05:38:20 -0700 (PDT)
X-Received: by 2002:a5d:61d0:: with SMTP id q16mr536273wrv.182.1590410300269; 
 Mon, 25 May 2020 05:38:20 -0700 (PDT)
MIME-Version: 1.0
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
 <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
In-Reply-To: <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 25 May 2020 06:37:44 -0600
X-Gmail-Original-Message-ID: <CABfawh=A3ZO-9jEiVYg72fHvZRqWzC5j8WsW6L2V7x9_tVKuPA@mail.gmail.com>
Message-ID: <CABfawh=A3ZO-9jEiVYg72fHvZRqWzC5j8WsW6L2V7x9_tVKuPA@mail.gmail.com>
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 25, 2020 at 6:18 AM Tamas K Lengyel <tamas@tklengyel.com> wrote:
>
> On Mon, May 25, 2020 at 12:06 AM Jan Beulich <jbeulich@suse.com> wrote:
> >
> > On 22.05.2020 18:33, Tamas K Lengyel wrote:
> > > When running shallow forks without device models it may be undesirable for Xen
> > > to inject interrupts. With Windows forks we have observed the kernel going into
> > > infinite loops when trying to process such interrupts, likely because it attempts
> > > to interact with devices that are not responding without QEMU running. By
> > > disabling interrupt injection the fuzzer can exercise the target code without
> > > interference.
> > >
> > > Forks & memory sharing are only available on Intel CPUs so this only applies
> > > to vmx.
> >
> > Looking at e.g. mem_sharing_control() I can't seem to be able to confirm
> > this. Would you mind pointing me at where this restriction is coming from?
>
> Both mem_access and mem_sharing are only implemented for EPT:
> http://xenbits.xen.org/hg/xen-unstable.hg/file/5eadf9363c25/xen/arch/x86/mm/p2m-ept.c#l126.
>
> >
> > > --- a/xen/arch/x86/hvm/vmx/intr.c
> > > +++ b/xen/arch/x86/hvm/vmx/intr.c
> > > @@ -256,6 +256,12 @@ void vmx_intr_assist(void)
> > >      if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
> > >          return;
> > >
> > > +#ifdef CONFIG_MEM_SHARING
> > > +    /* Block event injection for VM fork if requested */
> > > +    if ( unlikely(v->domain->arch.hvm.mem_sharing.block_interrupts) )
> > > +        return;
> > > +#endif
> >
> > The two earlier returns are temporary as far as the guest is concerned,
> > i.e. eventually the interrupt(s) will get delivered. The one you add
> > looks as if it is a permanent thing, i.e. interrupt requests will pile
> > up and potentially confuse a guest down the road. This _may_ be okay
> > for your short-lived-shallow-fork scenario, but then wants at least
> > calling out in the public header by a comment (and I think the same
> > goes for XENMEM_FORK_WITH_IOMMU_ALLOWED that's already there).
>
> This is indeed only for short-lived forks; that's why it is an
> optional flag that can be enabled when creating forks and is not on
> by default. In that use-case the VM executes for fractions of a second
> and we want to execute only very specific code segments with
> absolutely no interference. Interrupts in that case are just a
> nuisance that in the best case slows the fuzzing process down and, as
> we observed, in the worst case can completely stall it.
>
> >
> > > --- a/xen/include/asm-x86/hvm/domain.h
> > > +++ b/xen/include/asm-x86/hvm/domain.h
> > > @@ -74,6 +74,8 @@ struct mem_sharing_domain
> > >       * to resume the search.
> > >       */
> > >      unsigned long next_shared_gfn_to_relinquish;
> > > +
> > > +    bool block_interrupts;
> > >  };
> >
> > Please can you avoid unnecessary growth of the structure by inserting
> > next to the pre-existing bool rather than at the end?
>
> Sure. Do you want me to resend the patch for that?

I'll just resend it anyway with the requested comments in the public header.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon May 25 13:00:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 13:00:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdCiP-00007F-09; Mon, 25 May 2020 13:00:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=arI+=7H=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1jdCiO-000078-JK
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 13:00:24 +0000
X-Inumbo-ID: afcbd092-9e87-11ea-aecf-12813bfff9fa
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id afcbd092-9e87-11ea-aecf-12813bfff9fa;
 Mon, 25 May 2020 13:00:19 +0000 (UTC)
IronPort-SDR: jMtoG8pplBeVLv1tousLMSXG3dfh6cjm5XGgF2W98PykInYcjsPEA2/XjeMU8A23cQUYo66LkR
 NwwJFewDLEig==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2020 06:00:14 -0700
IronPort-SDR: 0AKhMnDC6ZUJOL+FPfKYZyI3pQtVz/B/iWd/hEHIT6WXrtwyQZZYUIcDnSp1LSjDTPzF35/ecU
 MhdzVH4TCuLQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,433,1583222400"; d="scan'208";a="266140646"
Received: from dmaltesx-mobl.amr.corp.intel.com (HELO ubuntu.localdomain)
 ([10.209.105.3])
 by orsmga003.jf.intel.com with ESMTP; 25 May 2020 06:00:13 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 for-4.14 2/2] tools/libxc: xc_memshr_fork with interrupts
 blocked
Date: Mon, 25 May 2020 06:00:09 -0700
Message-Id: <60dfdc24b87af20cf09e2cbd551fc62c34234c11.1590411162.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <a3b3410c707636aa201641e14b1ab43d4b8821e1.1590411162.git.tamas.lengyel@intel.com>
References: <a3b3410c707636aa201641e14b1ab43d4b8821e1.1590411162.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Toolstack side for creating forks with interrupt injection blocked.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 tools/libxc/include/xenctrl.h | 3 ++-
 tools/libxc/xc_memshr.c       | 4 +++-
 2 files changed, 5 insertions(+), 2 deletions(-)

diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..804ff001d7 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -2242,7 +2242,8 @@ int xc_memshr_range_share(xc_interface *xch,
 int xc_memshr_fork(xc_interface *xch,
                    uint32_t source_domain,
                    uint32_t client_domain,
-                   bool allow_with_iommu);
+                   bool allow_with_iommu,
+                   bool block_interrupts);
 
 /*
  * Note: this function is only intended to be used on short-lived forks that
diff --git a/tools/libxc/xc_memshr.c b/tools/libxc/xc_memshr.c
index 2300cc7075..a6cfd7dccf 100644
--- a/tools/libxc/xc_memshr.c
+++ b/tools/libxc/xc_memshr.c
@@ -240,7 +240,7 @@ int xc_memshr_debug_gref(xc_interface *xch,
 }
 
 int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid,
-                   bool allow_with_iommu)
+                   bool allow_with_iommu, bool block_interrupts)
 {
     xen_mem_sharing_op_t mso;
 
@@ -251,6 +251,8 @@ int xc_memshr_fork(xc_interface *xch, uint32_t pdomid, uint32_t domid,
 
     if ( allow_with_iommu )
         mso.u.fork.flags |= XENMEM_FORK_WITH_IOMMU_ALLOWED;
+    if ( block_interrupts )
+        mso.u.fork.flags |= XENMEM_FORK_BLOCK_INTERRUPTS;
 
     return xc_memshr_memop(xch, domid, &mso);
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 13:00:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 13:00:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdCiL-00006w-Ee; Mon, 25 May 2020 13:00:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=arI+=7H=intel.com=tamas.lengyel@srs-us1.protection.inumbo.net>)
 id 1jdCiJ-00006r-Nj
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 13:00:19 +0000
X-Inumbo-ID: acb92ec2-9e87-11ea-aecf-12813bfff9fa
Received: from mga18.intel.com (unknown [134.134.136.126])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acb92ec2-9e87-11ea-aecf-12813bfff9fa;
 Mon, 25 May 2020 13:00:15 +0000 (UTC)
IronPort-SDR: S3R7iCJlP0t0cJ0AblaCDF+4C5KhDzTKGYAXIziOALNcsX2Fhu6f1FL+8jvrZEeZB5sFnEjPLn
 uLQN/LDCziGA==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga003.jf.intel.com ([10.7.209.27])
 by orsmga106.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 25 May 2020 06:00:13 -0700
IronPort-SDR: YjREmtBNF/KTsshmsp6hSNW1/dSLa83eOThGjEH6MT6BD7q40jsToTeQGvPxLvSb0MjD3hTfy9
 9AfA5f/u71zQ==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,433,1583222400"; d="scan'208";a="266140632"
Received: from dmaltesx-mobl.amr.corp.intel.com (HELO ubuntu.localdomain)
 ([10.209.105.3])
 by orsmga003.jf.intel.com with ESMTP; 25 May 2020 06:00:11 -0700
From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3 for-4.14 1/2] x86/mem_sharing: block interrupt injection
 for forks
Date: Mon, 25 May 2020 06:00:08 -0700
Message-Id: <a3b3410c707636aa201641e14b1ab43d4b8821e1.1590411162.git.tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When running shallow forks, i.e. VM forks without device models (QEMU), it may
be undesirable for Xen to inject interrupts. When creating such forks from
Windows VMs we have observed the kernel trying to process interrupts
immediately after the fork starts executing. However, without QEMU running,
such interrupt handling may not be possible because it may attempt to interact
with devices that are not emulated by a backend. In the best case such
interrupt handling only presents a detour in the VM fork's execution
flow, but in the worst case, as we actually observed, it can completely stall
it. By disabling interrupt injection a fuzzer can exercise the target code
without interference. For other use-cases this option probably doesn't make
sense; that's why it is not enabled by default.

Forks & memory sharing are only available on Intel CPUs so this only applies
to vmx. Note that this is part of the experimental VM forking feature that's
completely disabled by default and can only be enabled by using
XEN_CONFIG_EXPERT during compile time.

Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>
---
v3: add comments in the public header how this option only makes sense for
     short lived forks
    minor style adjustment
v2: prohibit => block
    minor style adjustments
---
 xen/arch/x86/hvm/vmx/intr.c      | 6 ++++++
 xen/arch/x86/mm/mem_sharing.c    | 6 +++++-
 xen/include/asm-x86/hvm/domain.h | 2 +-
 xen/include/public/memory.h      | 3 +++
 4 files changed, 15 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/intr.c b/xen/arch/x86/hvm/vmx/intr.c
index 000e14af49..80bfbb4787 100644
--- a/xen/arch/x86/hvm/vmx/intr.c
+++ b/xen/arch/x86/hvm/vmx/intr.c
@@ -256,6 +256,12 @@ void vmx_intr_assist(void)
     if ( unlikely(v->arch.vm_event) && v->arch.vm_event->sync_event )
         return;
 
+#ifdef CONFIG_MEM_SHARING
+    /* Block event injection for VM fork if requested */
+    if ( unlikely(v->domain->arch.hvm.mem_sharing.block_interrupts) )
+        return;
+#endif
+
     /* Crank the handle on interrupt state. */
     pt_vector = pt_update_irq(v);
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 7271e5c90b..0c45a8d67e 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -2106,7 +2106,8 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         rc = -EINVAL;
         if ( mso.u.fork.pad )
             goto out;
-        if ( mso.u.fork.flags & ~XENMEM_FORK_WITH_IOMMU_ALLOWED )
+        if ( mso.u.fork.flags &
+             ~(XENMEM_FORK_WITH_IOMMU_ALLOWED | XENMEM_FORK_BLOCK_INTERRUPTS) )
             goto out;
 
         rc = rcu_lock_live_remote_domain_by_id(mso.u.fork.parent_domain,
@@ -2134,6 +2135,9 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
             rc = hypercall_create_continuation(__HYPERVISOR_memory_op,
                                                "lh", XENMEM_sharing_op,
                                                arg);
+        else if ( !rc && (mso.u.fork.flags & XENMEM_FORK_BLOCK_INTERRUPTS) )
+            d->arch.hvm.mem_sharing.block_interrupts = true;
+
         rcu_unlock_domain(pd);
         break;
     }
diff --git a/xen/include/asm-x86/hvm/domain.h b/xen/include/asm-x86/hvm/domain.h
index 95fe18cddc..9d247baf4d 100644
--- a/xen/include/asm-x86/hvm/domain.h
+++ b/xen/include/asm-x86/hvm/domain.h
@@ -67,7 +67,7 @@ struct hvm_ioreq_server {
 #ifdef CONFIG_MEM_SHARING
 struct mem_sharing_domain
 {
-    bool enabled;
+    bool enabled, block_interrupts;
 
     /*
      * When releasing shared gfn's in a preemptible manner, recall where
diff --git a/xen/include/public/memory.h b/xen/include/public/memory.h
index dbd35305df..850bd72c52 100644
--- a/xen/include/public/memory.h
+++ b/xen/include/public/memory.h
@@ -536,7 +536,10 @@ struct xen_mem_sharing_op {
         } debug;
         struct mem_sharing_op_fork {      /* OP_FORK */
             domid_t parent_domain;        /* IN: parent's domain id */
+/* Only makes sense for short-lived forks */
 #define XENMEM_FORK_WITH_IOMMU_ALLOWED (1u << 0)
+/* Only makes sense for short-lived forks */
+#define XENMEM_FORK_BLOCK_INTERRUPTS   (1u << 1)
             uint16_t flags;               /* IN: optional settings */
             uint32_t pad;                 /* Must be set to 0 */
         } fork;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 13:06:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 13:06:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdCod-0000Qz-O1; Mon, 25 May 2020 13:06:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdCoc-0000Qu-3h
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 13:06:50 +0000
X-Inumbo-ID: 98013906-9e88-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 98013906-9e88-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 13:06:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C5566AE37;
 Mon, 25 May 2020 13:06:50 +0000 (UTC)
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Tamas K Lengyel <tamas@tklengyel.com>
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
 <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e5a2899c-f375-55e8-fc6c-940b70929ae6@suse.com>
Date: Mon, 25 May 2020 15:06:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.05.2020 14:18, Tamas K Lengyel wrote:
> On Mon, May 25, 2020 at 12:06 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 22.05.2020 18:33, Tamas K Lengyel wrote:
>>> When running shallow forks without device models it may be undesirable for Xen
>>> to inject interrupts. With Windows forks we have observed the kernel going into
>>> infinite loops when trying to process such interrupts, likely because it attempts
>>> to interact with devices that are not responding without QEMU running. By
>>> disabling interrupt injection the fuzzer can exercise the target code without
>>> interference.
>>>
>>> Forks & memory sharing are only available on Intel CPUs so this only applies
>>> to vmx.
>>
>> Looking at e.g. mem_sharing_control() I can't seem to be able to confirm
>> this. Would you mind pointing me at where this restriction is coming from?
> 
> Both mem_access and mem_sharing are only implemented for EPT:
> http://xenbits.xen.org/hg/xen-unstable.hg/file/5eadf9363c25/xen/arch/x86/mm/p2m-ept.c#l126.

p2m-pt.c:p2m_type_to_flags() has a similar case label. And I can't
spot a respective restriction in mem_sharing_memop(), i.e. it looks
to me as if enabling mem-sharing on NPT (to satisfy hap_enabled()
in mem_sharing_control()) would be possible.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 25 13:12:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 13:12:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdCtw-0001Ia-D7; Mon, 25 May 2020 13:12:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdCtv-0001Hk-AP
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 13:12:19 +0000
X-Inumbo-ID: 5bf1b87c-9e89-11ea-aecf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5bf1b87c-9e89-11ea-aecf-12813bfff9fa;
 Mon, 25 May 2020 13:12:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0CBA6B02E;
 Mon, 25 May 2020 13:12:19 +0000 (UTC)
Subject: Re: [PATCH v3 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
To: Dario Faggioli <dfaggioli@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>
 <20200514153614.2240-3-jgross@suse.com>
 <a3acb930c5656524f6229592be5d542f0cb9da60.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b127c20f-1ae8-242f-8435-3165a728a451@suse.com>
Date: Mon, 25 May 2020 15:12:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a3acb930c5656524f6229592be5d542f0cb9da60.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.05.2020 09:18, Dario Faggioli wrote:
> On Thu, 2020-05-14 at 17:36 +0200, Juergen Gross wrote:
>> With support of core scheduling sched_unit_migrate_finish() gained a
>> call of sync_vcpu_execstate() as it was believed to be called as a
>> result of vcpu migration in any case.
>>
>> In case of migrating a vcpu away from a physical cpu for a short
>> period
>> of time ionly without ever being scheduled on the selected new cpu 
>           ^
> with this typo taken care of (I guess, upon commit)...
> 
>> this
>> might not be true, so drop the call and let the lazy state syncing do
>> its job.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>
> Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

Hmm, I'm puzzled: This had gone in a week and a half ago with your
R-b sent on the 15th.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 25 13:26:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 13:26:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdD7T-0002J3-Os; Mon, 25 May 2020 13:26:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XQbn=7H=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jdD7S-0002Iy-KN
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 13:26:18 +0000
X-Inumbo-ID: 4fc6f74a-9e8b-11ea-aed4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4fc6f74a-9e8b-11ea-aed4-12813bfff9fa;
 Mon, 25 May 2020 13:26:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 96C07ABC2;
 Mon, 25 May 2020 13:26:18 +0000 (UTC)
Message-ID: <3c5f439741f63c9f60431f2cd86a8ad7a117ec7d.camel@suse.com>
Subject: Re: [PATCH v3 2/3] xen/sched: don't call sync_vcpu_execstate() in
 sched_unit_migrate_finish()
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Mon, 25 May 2020 15:26:14 +0200
In-Reply-To: <b127c20f-1ae8-242f-8435-3165a728a451@suse.com>
References: <20200514153614.2240-1-jgross@suse.com>
 <20200514153614.2240-3-jgross@suse.com>
 <a3acb930c5656524f6229592be5d542f0cb9da60.camel@suse.com>
 <b127c20f-1ae8-242f-8435-3165a728a451@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-o+4t4rc6s2Dtzj++GNSl"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-o+4t4rc6s2Dtzj++GNSl
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Mon, 2020-05-25 at 15:12 +0200, Jan Beulich wrote:
> On 25.05.2020 09:18, Dario Faggioli wrote:
> >=20
> > > Signed-off-by: Juergen Gross <jgross@suse.com>
> > > Reviewed-by: Jan Beulich <jbeulich@suse.com>
> > >=20
> > Reviewed-by: Dario Faggioli <dfaggioli@suse.com>
>=20
> Hmm, I'm puzzled: This had gone in a week and a half ago with your
> R-b sent on the 15th.
>
Well, at least I'm consistent! :-P

Being a bit more serious: yes, I see this now. Unfortunately, it somehow
was stuck in the "wrong" folder in my MUA, making me think it was still
pending.

And of course I could have double checked either the tree or my sent
folder, but I didn't... Sorry for the noise. :-(

Regards
--=20
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-o+4t4rc6s2Dtzj++GNSl
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7Lx3YACgkQFkJ4iaW4
c+6DfA//ZjX/bnJRmjyF1jHRvwKCYlpXHxuxK9lMPRCbmfpC1NpR41Q0rhDbJxHa
pTh6+bsa0iow+SBy4IGBPQfYc1vQ0MJucEaxrksPibodcYbtmv4cqsmu/jLebXgh
NEiOYQSiFOWU4GVzVtGvIKsJ5Qrw0vrqOJ8LcR2FV7zh0xAw/c+8zuVGZxA5r5Uj
SLWPn3cHFazG9nKJ+qI+3s8QWcbxAeBTSvEjjSjGSpiAVxd6oVxfpeLo0ntST7bI
5D8b+sds1eSIgarxApx+OnZIFGCv5JxFX9ko3Ho2vTGVmbYu681F+/g7YCvmgRY+
fky6IGkXOURn+kb/sidTJ1FObTFo27Jfx7qTYnGI9GN+TYHTOr9vEJIRfUW3xhGk
xYgge9jRYv762dX7dF6XGvG1j/mRDtYFfV320f8vnes8eTfE+Wq4MTKuJuyAR2U4
ajuSDONF+4/9aUNg48vSkPanxYTf5H9w6ar8VYBLsZ+OkuiTXZnGpOPjf/vTA3xw
Ah60TRWTKLHqItD1Bxcq+/TJ9KJLX4HosuVnFih62mW7gX7jxZTkFoOj0bM1qUwg
I93RjNFDBakhDc1tWvlp5Gzpx7X30Dbe1jesW8EV+6QWgmVx4i/i2zL8PmC+54Cc
JwZ0eNnccvsEm6Eps7vLRDJ9R4u9kxOryvmBVklFsN/nMjTgoVM=
=7ZRU
-----END PGP SIGNATURE-----

--=-o+4t4rc6s2Dtzj++GNSl--



From xen-devel-bounces@lists.xenproject.org Mon May 25 13:47:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 13:47:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdDRF-00043X-Ez; Mon, 25 May 2020 13:46:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RZKA=7H=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jdDRD-00043R-Re
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 13:46:43 +0000
X-Inumbo-ID: 2aa57dbc-9e8e-11ea-9887-bc764e2007e4
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2aa57dbc-9e8e-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 13:46:43 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id h16so15053665eds.5
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 06:46:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=cIIpcEZlk83IdO9fNWCqBgDU/dAAypxanXhjaWPR5Zc=;
 b=hXqReaxqU4aHxacq8cYSGCSDcWvaF4t4+gR+f4Irh+F1NY02JLX9HYNYvGYYThSsT9
 AfpmQriut/0ztGzpySsdu9VSyK5p/+DGz5QOagPU9MgePL3TSqMNIKbPnT7F1VqMqVks
 3g2+dBUgIfa03AfYKFou2zM9QeG7qaWvt1zqpWJg0IfH2Tl+wvDgbtgZmJblQWQQ4tfV
 0P+9D40nKSqGAqrqMqEjLF4ktMNEaxkbwzg23FF9sSRK3TOeOl4XU6fkYOPaaMWXeu+J
 3wLy9t+/sG/k412xiG/ks0MRJYmizHSfECGeoR4qBZ+VybArUr4PiPkJlzR35yiawX9C
 sLSQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=cIIpcEZlk83IdO9fNWCqBgDU/dAAypxanXhjaWPR5Zc=;
 b=R06GrucqUEZdC5YZJnjXVjkO1MLurMZyDvCH1bwmCFm9dBf2MyzTgqEAuXFd6vrshH
 NcSTDv+9xe32b+fdT99ctQpRmucIWkgntyf08vhxG2QZ7W9c9UB2FjqS4Qow3cau5fxH
 SluelowREIBFPvC6IjtI+ymYKFVzFoMM2NSkAmmZy5Pqi705VfFSVefWi3nPBsnQdubX
 s17isQcQVoepPSIA5CcLCQoJ4yty2Ub25GkGhR8T5PMe8coCU2vde2x6FyU0TyFabzxe
 PxXkA8rtOphEETiF/LwnAtRO/VCO5sy7EYqMv+g/mfpn9x31kMc/uPCWAVpBbyovYOuW
 HsAA==
X-Gm-Message-State: AOAM530TLtuoeDafKjGj8rsZtEMAFmU6GvttZDvdeKHmr3QPXKSHX8O3
 Zn/fY5ILHsZu9yeuYGGlANxYKnVSpTk=
X-Google-Smtp-Source: ABdhPJwNSGnIdk98hq7a1+kKHw3MxkdpByxQ0Ss2wOiqgO6W6U1yt/bp+cApai5NWNzGVTBP5bP0iw==
X-Received: by 2002:aa7:d487:: with SMTP id b7mr15278016edr.351.1590414401679; 
 Mon, 25 May 2020 06:46:41 -0700 (PDT)
Received: from mail-wr1-f44.google.com (mail-wr1-f44.google.com.
 [209.85.221.44])
 by smtp.gmail.com with ESMTPSA id l1sm3291319ejd.114.2020.05.25.06.46.40
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 25 May 2020 06:46:40 -0700 (PDT)
Received: by mail-wr1-f44.google.com with SMTP id i15so17080856wrx.10
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 06:46:40 -0700 (PDT)
X-Received: by 2002:a5d:61d0:: with SMTP id q16mr792065wrv.182.1590414400100; 
 Mon, 25 May 2020 06:46:40 -0700 (PDT)
MIME-Version: 1.0
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
 <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
 <e5a2899c-f375-55e8-fc6c-940b70929ae6@suse.com>
In-Reply-To: <e5a2899c-f375-55e8-fc6c-940b70929ae6@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 25 May 2020 07:46:04 -0600
X-Gmail-Original-Message-ID: <CABfawhnB4WY6U-XcigT+X=n4e8qdDMFokMWR1Sc_s-oMeyZRWg@mail.gmail.com>
Message-ID: <CABfawhnB4WY6U-XcigT+X=n4e8qdDMFokMWR1Sc_s-oMeyZRWg@mail.gmail.com>
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 25, 2020 at 7:06 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.05.2020 14:18, Tamas K Lengyel wrote:
> > On Mon, May 25, 2020 at 12:06 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 22.05.2020 18:33, Tamas K Lengyel wrote:
> >>> When running shallow forks without device models it may be undesirable for Xen
> >>> to inject interrupts. With Windows forks we have observed the kernel going into
> >>> infinite loops when trying to process such interrupts, likely because it attempts
> >>> to interact with devices that are not responding without QEMU running. By
> >>> disabling interrupt injection the fuzzer can exercise the target code without
> >>> interference.
> >>>
> >>> Forks & memory sharing are only available on Intel CPUs so this only applies
> >>> to vmx.
> >>
> >> Looking at e.g. mem_sharing_control() I can't seem to be able to confirm
> >> this. Would you mind pointing me at where this restriction is coming from?
> >
> > Both mem_access and mem_sharing are only implemented for EPT:
> > http://xenbits.xen.org/hg/xen-unstable.hg/file/5eadf9363c25/xen/arch/x86/mm/p2m-ept.c#l126.
>
> p2m-pt.c:p2m_type_to_flags() has a similar case label.

It doesn't do anything though, does it? For mem_sharing to work you
actively have to restrict the memory permissions on the shared entries
to be read/execute only. That's only done for EPT.

> And I can't
> spot a respective restriction in mem_sharing_memop(), i.e. it looks
> to me as if enabling mem-sharing on NPT (to satisfy hap_enabled()
> in mem_sharing_control()) would be possible.

If you are looking for an explicit gate like that, then you are right,
there isn't one. You can ask the original authors of this subsystem
why that is. If you feel like adding an extra gate, I wouldn't object.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon May 25 14:07:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdDkp-0005tQ-5J; Mon, 25 May 2020 14:06:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdDkn-0005tL-Vk
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:06:58 +0000
X-Inumbo-ID: fe50b21a-9e90-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe50b21a-9e90-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 14:06:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7857FAC24;
 Mon, 25 May 2020 14:06:58 +0000 (UTC)
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Tamas K Lengyel <tamas@tklengyel.com>
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
 <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
 <e5a2899c-f375-55e8-fc6c-940b70929ae6@suse.com>
 <CABfawhnB4WY6U-XcigT+X=n4e8qdDMFokMWR1Sc_s-oMeyZRWg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <78714288-89b0-6a53-4f74-f2306ae6e749@suse.com>
Date: Mon, 25 May 2020 16:06:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <CABfawhnB4WY6U-XcigT+X=n4e8qdDMFokMWR1Sc_s-oMeyZRWg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.05.2020 15:46, Tamas K Lengyel wrote:
> On Mon, May 25, 2020 at 7:06 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 25.05.2020 14:18, Tamas K Lengyel wrote:
>>> On Mon, May 25, 2020 at 12:06 AM Jan Beulich <jbeulich@suse.com> wrote:
>>>>
>>>> On 22.05.2020 18:33, Tamas K Lengyel wrote:
>>>>> When running shallow forks without device models it may be undesirable for Xen
>>>>> to inject interrupts. With Windows forks we have observed the kernel going into
>>>>> infinite loops when trying to process such interrupts, likely because it attempts
>>>>> to interact with devices that are not responding without QEMU running. By
>>>>> disabling interrupt injection the fuzzer can exercise the target code without
>>>>> interference.
>>>>>
>>>>> Forks & memory sharing are only available on Intel CPUs so this only applies
>>>>> to vmx.
>>>>
>>>> Looking at e.g. mem_sharing_control() I can't seem to be able to confirm
>>>> this. Would you mind pointing me at where this restriction is coming from?
>>>
>>> Both mem_access and mem_sharing are only implemented for EPT:
>>> http://xenbits.xen.org/hg/xen-unstable.hg/file/5eadf9363c25/xen/arch/x86/mm/p2m-ept.c#l126.
>>
>> p2m-pt.c:p2m_type_to_flags() has a similar case label.
> 
> It doesn't do anything though, does it? For mem_sharing to work you
> actively have to restrict the memory permissions on the shared entries
> to be read/execute only. That's only done for EPT.

Does it not? It seems to me that it does, seeing that the case sits
together with the p2m_ram_ro and p2m_ram_logdirty ones:

    case p2m_ram_ro:
    case p2m_ram_logdirty:
    case p2m_ram_shared:
        return flags | P2M_BASE_FLAGS;

>> And I can't
>> spot a respective restriction in mem_sharing_memop(), i.e. it looks
>> to me as if enabling mem-sharing on NPT (to satisfy hap_enabled()
>> in mem_sharing_control()) would be possible.
> 
> If you are looking for an explicit gate like that, then you are right,
> there isn't one. You can ask the original authors of this subsystem
> why that is. If you feel like adding an extra gate, I wouldn't object.

Well, the question here isn't about gating - that's an independent
bug if it's indeed missing. The question is whether SVM code also
needs touching, as was previously requested. You tried to address
this by stating an Intel-only limitation, which I couldn't find
proof for (so far).

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 25 14:15:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:15:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdDsh-0006na-W5; Mon, 25 May 2020 14:15:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RZKA=7H=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jdDsf-0006nV-Th
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:15:05 +0000
X-Inumbo-ID: 21200952-9e92-11ea-b07b-bc764e2007e4
Received: from mail-ed1-x542.google.com (unknown [2a00:1450:4864:20::542])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 21200952-9e92-11ea-b07b-bc764e2007e4;
 Mon, 25 May 2020 14:15:05 +0000 (UTC)
Received: by mail-ed1-x542.google.com with SMTP id e10so15141579edq.0
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 07:15:05 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=ucr8+ugM3xNV+eS4h/rXd6GByzTSfj97ee8k7HUgYww=;
 b=n8N76yKOdxWyw4pK+GJ6QORn+G0BWsunOktbSHPz8fXQfNd48OqbtBbJDdEBzW4Nm0
 knBXRkzma1MDyZh/8EfN/ZeMvKxXgsHdQWudTIOrrqX8P3pJAcfOgshvC2IWJcNNK1Jv
 eRASkR2KVV160GqqY6357l9kLC7GTvKzNJ+rUVYzsBbVw3T0IVfaG+yI21pkL/g56LVl
 99rHoIfWGd5txj6bkD4sbmKMbhV744GKkdZLppSeUAl9Jr70DBmpswXklB7A2Szg3IwL
 R7X5nTm1PAPraX2UQROsr+pNqsh/JfXjlWZmP4LKWP1NWZldtpxkut8E/DI5NY8sQYIK
 agEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=ucr8+ugM3xNV+eS4h/rXd6GByzTSfj97ee8k7HUgYww=;
 b=RvRkIxELtdBRgSXqxMb6LkiYpwRBlmL1gO4YTgq5SdqWJ1mM6Tqydd/iva1Pw9tn4w
 bA6A4l5cN2ky780IPG4FrDeKy6fyjJAPj8l/jVlfH5cQMfR2ERm5X7ENT+JcEre+vb2H
 zaXaNN3nmf8Bh8VY0FpyT1EwvByD87c3m8/82Xzjlrglz4TITUOtB4chZOoizyKJP9sS
 0fT9D7e+Vcq5qz5iNQudm5kdjkezpBWKtB8QlgTaDwuNNdzNbPd4/Unnl9Ymw2O+EW4V
 PHAiSpAFyJ4hS5eqIm4qntBnbXNSJoj6Cw1o4LHz0ghJOL9nPoJ3oyN4SBnOvpxXYfjT
 ewXw==
X-Gm-Message-State: AOAM530MOHj0m2J8piY45Uu1z7LBP1cVXca4S++Eq/YVbnue39rNEKv4
 e9M+aB00WGLzS5w8clGLr3/TtW4WLIY=
X-Google-Smtp-Source: ABdhPJxVit5FhbsyL55qOXqUluLSKrW0X/ikjdLQhRnylUUM84QzLRaNBydIAIgZc4XBda/prnVtAg==
X-Received: by 2002:a50:a985:: with SMTP id n5mr15715676edc.338.1590416103820; 
 Mon, 25 May 2020 07:15:03 -0700 (PDT)
Received: from mail-wr1-f54.google.com (mail-wr1-f54.google.com.
 [209.85.221.54])
 by smtp.gmail.com with ESMTPSA id o18sm15668538eji.97.2020.05.25.07.15.02
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 25 May 2020 07:15:03 -0700 (PDT)
Received: by mail-wr1-f54.google.com with SMTP id q11so5058620wrp.3
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 07:15:02 -0700 (PDT)
X-Received: by 2002:a5d:5707:: with SMTP id a7mr15180113wrv.259.1590416102538; 
 Mon, 25 May 2020 07:15:02 -0700 (PDT)
MIME-Version: 1.0
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
 <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
 <e5a2899c-f375-55e8-fc6c-940b70929ae6@suse.com>
 <CABfawhnB4WY6U-XcigT+X=n4e8qdDMFokMWR1Sc_s-oMeyZRWg@mail.gmail.com>
 <78714288-89b0-6a53-4f74-f2306ae6e749@suse.com>
In-Reply-To: <78714288-89b0-6a53-4f74-f2306ae6e749@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 25 May 2020 08:14:27 -0600
X-Gmail-Original-Message-ID: <CABfawhkMzpekYLqXqZZZ5Mxum-eHqMAWvAguaRakFasJKtPfFQ@mail.gmail.com>
Message-ID: <CABfawhkMzpekYLqXqZZZ5Mxum-eHqMAWvAguaRakFasJKtPfFQ@mail.gmail.com>
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Jan Beulich <jbeulich@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 25, 2020 at 8:06 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 25.05.2020 15:46, Tamas K Lengyel wrote:
> > On Mon, May 25, 2020 at 7:06 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>
> >> On 25.05.2020 14:18, Tamas K Lengyel wrote:
> >>> On Mon, May 25, 2020 at 12:06 AM Jan Beulich <jbeulich@suse.com> wrote:
> >>>>
> >>>> On 22.05.2020 18:33, Tamas K Lengyel wrote:
> >>>>> When running shallow forks without device models it may be undesirable for Xen
> >>>>> to inject interrupts. With Windows forks we have observed the kernel going into
> >>>>> infinite loops when trying to process such interrupts, likely because it attempts
> >>>>> to interact with devices that are not responding without QEMU running. By
> >>>>> disabling interrupt injection the fuzzer can exercise the target code without
> >>>>> interference.
> >>>>>
> >>>>> Forks & memory sharing are only available on Intel CPUs so this only applies
> >>>>> to vmx.
> >>>>
> >>>> Looking at e.g. mem_sharing_control() I can't seem to be able to confirm
> >>>> this. Would you mind pointing me at where this restriction is coming from?
> >>>
> >>> Both mem_access and mem_sharing are only implemented for EPT:
> >>> http://xenbits.xen.org/hg/xen-unstable.hg/file/5eadf9363c25/xen/arch/x86/mm/p2m-ept.c#l126.
> >>
> >> p2m-pt.c:p2m_type_to_flags() has a similar case label.
> >
> > It doesn't do anything though, does it? For mem_sharing to work you
> > actively have to restrict the memory permissions on the shared entries
> > to be read/execute only. That's only done for EPT.
>
> Does it not? It seems to me that it does, seeing that the case sits
> together with the p2m_ram_ro and p2m_ram_logdirty ones:
>
>     case p2m_ram_ro:
>     case p2m_ram_logdirty:
>     case p2m_ram_shared:
>         return flags | P2M_BASE_FLAGS;
>
> >> And I can't
> >> spot a respective restriction in mem_sharing_memop(), i.e. it looks
> >> to me as if enabling mem-sharing on NPT (to satisfy hap_enabled()
> >> in mem_sharing_control()) would be possible.
> >
> > If you are looking for an explicit gate like that, then you are right,
> > there isn't one. You can ask the original authors of this subsystem
> > why that is. If you feel like adding an extra gate, I wouldn't object.
>
> Well, the question here isn't about gating - that's an independent
> bug if it's indeed missing. The question is whether SVM code also
> needs touching, as was previously requested. You tried to address
> this by stating an Intel-only limitation, which I couldn't find
> proof for (so far).

Well, as far as I'm concerned VM forking is for Intel hardware only.
If mem_sharing happens to work for non-Intel hw - I was unaware of that
- then I'll just add an extra check for the VM fork hypercall that
gates it. It may technically be possible to make it available for
other hw as well, but at this time that's completely out of scope.

Tamas


From xen-devel-bounces@lists.xenproject.org Mon May 25 14:24:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:24:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE1L-0007hI-TC; Mon, 25 May 2020 14:24:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE1K-0007hD-Pv
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:24:02 +0000
X-Inumbo-ID: 610c89d6-9e93-11ea-aee4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 610c89d6-9e93-11ea-aee4-12813bfff9fa;
 Mon, 25 May 2020 14:24:01 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 77E3AAC9F;
 Mon, 25 May 2020 14:24:03 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v10 0/9] x86emul: further work
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Date: Mon, 25 May 2020 16:23:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The first two patches are bug fixes, in part pointed out by the
3rd patch. The remainder is further enabling.

1: address x86_insn_is_mem_{access,write}() omissions 
2: rework CMP and TEST emulation
3: also test decoding and mem access / write logic
4: disable FPU/MMX/SIMD insn emulation when !HVM
5: support MOVDIR{I,64B} insns
6: support ENQCMD insn
7: support FNSTENV and FNSAVE
8: support FLDENV and FRSTOR
9: support FXSAVE/FXRSTOR

Main changes from v9 are several fixes in patch 1 and the new
patch 2, both a result of the new patch 3. For other changes
see the individual patches.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 25 14:26:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:26:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE3Q-0007o5-9l; Mon, 25 May 2020 14:26:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE3P-0007nz-1R
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:26:11 +0000
X-Inumbo-ID: ad078142-9e93-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad078142-9e93-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 14:26:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2FC91ABEC;
 Mon, 25 May 2020 14:26:11 +0000 (UTC)
Subject: [PATCH v10 1/9] x86emul: address x86_insn_is_mem_{access,write}()
 omissions
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <f41a4f27-bbe2-6450-38c1-6c4e23f2b07b@suse.com>
Date: Mon, 25 May 2020 16:26:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

First of all explain in comments what the functions' purposes are. Then
make them actually match their comments.

Note that fc6fa977be54 ("x86emul: extend x86_insn_is_mem_write()
coverage") didn't actually fix the function's behavior for {,V}STMXCSR:
Both are covered by generic code higher up in the function, due to
x86_decode_twobyte() already doing suitable adjustments. And VSTMXCSR
wouldn't have been covered anyway without a further X86EMUL_OPC_VEX()
case label. Keep the inner case label in a comment for reference.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v10: Move ARPL case to the earlier switch() in x86_insn_is_mem_write().
     Make group 5 handling actually work there. Drop VMPTRST case. Also
     handle CLFLUSH*, CLWB, UDn, and remaining PREFETCH* in
     x86_insn_is_mem_access().
v9: New.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -11474,25 +11474,87 @@ x86_insn_operand_ea(const struct x86_emu
     return state->ea.mem.off;
 }
 
+/*
+ * This function means to return 'true' for all supported insns with explicit
+ * accesses to memory.  This means also insns which don't have an explicit
+ * memory operand (like POP), but it does not mean e.g. segment selector
+ * loads, where the descriptor table access is considered an implicit one.
+ */
 bool
 x86_insn_is_mem_access(const struct x86_emulate_state *state,
                        const struct x86_emulate_ctxt *ctxt)
 {
+    if ( mode_64bit() && state->not_64bit )
+        return false;
+
     if ( state->ea.type == OP_MEM )
-        return ctxt->opcode != 0x8d /* LEA */ &&
-               (ctxt->opcode != X86EMUL_OPC(0x0f, 0x01) ||
-                (state->modrm_reg & 7) != 7) /* INVLPG */;
+    {
+        switch ( ctxt->opcode )
+        {
+        case 0x8d: /* LEA */
+        case X86EMUL_OPC(0x0f, 0x0d): /* PREFETCH */
+        case X86EMUL_OPC(0x0f, 0x18)
+         ... X86EMUL_OPC(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC_66(0x0f, 0x18)
+         ... X86EMUL_OPC_66(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC_F3(0x0f, 0x18)
+         ... X86EMUL_OPC_F3(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC_F2(0x0f, 0x18)
+         ... X86EMUL_OPC_F2(0x0f, 0x1f): /* NOP space */
+        case X86EMUL_OPC(0x0f, 0xb9): /* UD1 */
+        case X86EMUL_OPC(0x0f, 0xff): /* UD0 */
+            return false;
+
+        case X86EMUL_OPC(0x0f, 0x01):
+            return (state->modrm_reg & 7) != 7; /* INVLPG */
+
+        case X86EMUL_OPC(0x0f, 0xae):
+            return (state->modrm_reg & 7) != 7; /* CLFLUSH */
+
+        case X86EMUL_OPC_66(0x0f, 0xae):
+            return (state->modrm_reg & 7) < 6; /* CLWB, CLFLUSHOPT */
+        }
+
+        return true;
+    }
 
     switch ( ctxt->opcode )
     {
+    case 0x06 ... 0x07: /* PUSH / POP %es */
+    case 0x0e:          /* PUSH %cs */
+    case 0x16 ... 0x17: /* PUSH / POP %ss */
+    case 0x1e ... 0x1f: /* PUSH / POP %ds */
+    case 0x50 ... 0x5f: /* PUSH / POP reg */
+    case 0x60 ... 0x61: /* PUSHA / POPA */
+    case 0x68: case 0x6a: /* PUSH imm */
     case 0x6c ... 0x6f: /* INS / OUTS */
+    case 0x8f:          /* POP r/m */
+    case 0x9a:          /* CALL (far, direct) */
+    case 0x9c ... 0x9d: /* PUSHF / POPF */
     case 0xa4 ... 0xa7: /* MOVS / CMPS */
     case 0xaa ... 0xaf: /* STOS / LODS / SCAS */
+    case 0xc2 ... 0xc3: /* RET (near) */
+    case 0xc8 ... 0xc9: /* ENTER / LEAVE */
+    case 0xca ... 0xcb: /* RET (far) */
     case 0xd7:          /* XLAT */
+    case 0xe8:          /* CALL (near, direct) */
+    case X86EMUL_OPC(0x0f, 0xa0):         /* PUSH %fs */
+    case X86EMUL_OPC(0x0f, 0xa1):         /* POP %fs */
+    case X86EMUL_OPC(0x0f, 0xa8):         /* PUSH %gs */
+    case X86EMUL_OPC(0x0f, 0xa9):         /* POP %gs */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* MASKMOV{Q,DQU} */
                                           /* VMASKMOVDQU */
         return true;
 
+    case 0xff:
+        switch ( state->modrm_reg & 7 )
+        {
+        case 2: /* CALL (near, indirect) */
+        case 6: /* PUSH r/m */
+            return true;
+        }
+        break;
+
     case X86EMUL_OPC(0x0f, 0x01):
         /* Cover CLZERO. */
         return (state->modrm_rm & 7) == 4 && (state->modrm_reg & 7) == 7;
@@ -11501,10 +11563,20 @@ x86_insn_is_mem_access(const struct x86_
     return false;
 }
 
+/*
+ * This function is intended to return 'true' for all supported insns with
+ * explicit writes to memory.  This includes insns which don't have an
+ * explicit memory operand (like PUSH), but excludes e.g. segment selector
+ * loads, where the (possible) descriptor table write is considered an
+ * implicit access.
+ */
 bool
 x86_insn_is_mem_write(const struct x86_emulate_state *state,
                       const struct x86_emulate_ctxt *ctxt)
 {
+    if ( mode_64bit() && state->not_64bit )
+        return false;
+
     switch ( state->desc & DstMask )
     {
     case DstMem:
@@ -11516,19 +11588,48 @@ x86_insn_is_mem_write(const struct x86_e
         break;
 
     default:
+        switch ( ctxt->opcode )
+        {
+        case 0x63:                         /* ARPL */
+            return !mode_64bit();
+        }
+
         return false;
     }
 
     if ( state->modrm_mod == 3 )
-        /* CLZERO is the odd one. */
-        return ctxt->opcode == X86EMUL_OPC(0x0f, 0x01) &&
-               (state->modrm_rm & 7) == 4 && (state->modrm_reg & 7) == 7;
+    {
+        switch ( ctxt->opcode )
+        {
+        case 0xff: /* Grp5 */
+            break;
+
+        case X86EMUL_OPC(0x0f, 0x01): /* CLZERO is the odd one. */
+            return (state->modrm_rm & 7) == 4 && (state->modrm_reg & 7) == 7;
+
+        default:
+            return false;
+        }
+    }
 
     switch ( ctxt->opcode )
     {
+    case 0x06:                           /* PUSH %es */
+    case 0x0e:                           /* PUSH %cs */
+    case 0x16:                           /* PUSH %ss */
+    case 0x1e:                           /* PUSH %ds */
+    case 0x50 ... 0x57:                  /* PUSH reg */
+    case 0x60:                           /* PUSHA */
+    case 0x68: case 0x6a:                /* PUSH imm */
     case 0x6c: case 0x6d:                /* INS */
+    case 0x9a:                           /* CALL (far, direct) */
+    case 0x9c:                           /* PUSHF */
     case 0xa4: case 0xa5:                /* MOVS */
     case 0xaa: case 0xab:                /* STOS */
+    case 0xc8:                           /* ENTER */
+    case 0xe8:                           /* CALL (near, direct) */
+    case X86EMUL_OPC(0x0f, 0xa0):        /* PUSH %fs */
+    case X86EMUL_OPC(0x0f, 0xa8):        /* PUSH %gs */
     case X86EMUL_OPC(0x0f, 0xab):        /* BTS */
     case X86EMUL_OPC(0x0f, 0xb3):        /* BTR */
     case X86EMUL_OPC(0x0f, 0xbb):        /* BTC */
@@ -11586,6 +11687,16 @@ x86_insn_is_mem_write(const struct x86_e
         }
         break;
 
+    case 0xff:
+        switch ( state->modrm_reg & 7 )
+        {
+        case 2: /* CALL (near, indirect) */
+        case 3: /* CALL (far, indirect) */
+        case 6: /* PUSH r/m */
+            return true;
+        }
+        break;
+
     case X86EMUL_OPC(0x0f, 0x01):
         switch ( state->modrm_reg & 7 )
         {
@@ -11600,7 +11711,7 @@ x86_insn_is_mem_write(const struct x86_e
         switch ( state->modrm_reg & 7 )
         {
         case 0: /* FXSAVE */
-        case 3: /* {,V}STMXCSR */
+        /* case 3: STMXCSR - handled above */
         case 4: /* XSAVE */
         case 6: /* XSAVEOPT */
             return true;
@@ -11616,7 +11727,6 @@ x86_insn_is_mem_write(const struct x86_e
         case 1: /* CMPXCHG{8,16}B */
         case 4: /* XSAVEC */
         case 5: /* XSAVES */
-        case 7: /* VMPTRST */
             return true;
         }
         break;



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:26:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:26:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE41-0007ro-Mi; Mon, 25 May 2020 14:26:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE40-0007rg-O2
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:26:48 +0000
X-Inumbo-ID: c38e4676-9e93-11ea-aee4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c38e4676-9e93-11ea-aee4-12813bfff9fa;
 Mon, 25 May 2020 14:26:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EDF8EAC9F;
 Mon, 25 May 2020 14:26:48 +0000 (UTC)
Subject: [PATCH v10 2/9] x86emul: rework CMP and TEST emulation
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <5843dca9-1a1a-a32e-3cb0-95cd93533723@suse.com>
Date: Mon, 25 May 2020 16:26:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Unlike similarly encoded insns, these don't write their memory operands,
and hence x86_insn_is_mem_write() should return false for them. However,
rather than adding special logic there, rework how their emulation gets
done, by making the decoding attributes properly describe the r/o nature
of their memory operands.

Note how this also allows dropping custom LOCK prefix checks.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v10: New.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -84,7 +84,7 @@ static const opcode_desc_t opcode_table[
     ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
     ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
     /* 0x38 - 0x3F */
-    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
+    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
     ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
     ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
     /* 0x40 - 0x4F */
@@ -2481,7 +2481,6 @@ x86_decode_onebyte(
     case 0x60: /* pusha */
     case 0x61: /* popa */
     case 0x62: /* bound */
-    case 0x82: /* Grp1 (x86/32 only) */
     case 0xc4: /* les */
     case 0xc5: /* lds */
     case 0xce: /* into */
@@ -2491,6 +2490,14 @@ x86_decode_onebyte(
         state->not_64bit = true;
         break;
 
+    case 0x82: /* Grp1 (x86/32 only) */
+        state->not_64bit = true;
+        /* fall through */
+    case 0x80: case 0x81: case 0x83: /* Grp1 */
+        if ( (modrm_reg & 7) == 7 ) /* cmp */
+            state->desc = (state->desc & ByteOp) | DstNone | SrcMem;
+        break;
+
     case 0x90: /* nop / pause */
         if ( repe_prefix() )
             ctxt->opcode |= X86EMUL_OPC_F3(0, 0);
@@ -2521,6 +2528,11 @@ x86_decode_onebyte(
         imm2 = insn_fetch_type(uint8_t);
         break;
 
+    case 0xf6: case 0xf7: /* Grp3 */
+        if ( !(modrm_reg & 6) ) /* test */
+            state->desc = (state->desc & ByteOp) | DstNone | SrcMem;
+        break;
+
     case 0xff: /* Grp5 */
         switch ( modrm_reg & 7 )
         {
@@ -3928,13 +3940,11 @@ x86_emulate(
         break;
 
     case 0x38: case 0x39: cmp: /* cmp reg,mem */
-        if ( ops->rmw && dst.type == OP_MEM &&
-             (rc = read_ulong(dst.mem.seg, dst.mem.off, &dst.val,
-                              dst.bytes, ctxt, ops)) != X86EMUL_OKAY )
-            goto done;
-        /* fall through */
+        emulate_2op_SrcV("cmp", dst, src, _regs.eflags);
+        dst.type = OP_NONE;
+        break;
+
     case 0x3a ... 0x3d: /* cmp */
-        generate_exception_if(lock_prefix, EXC_UD);
         emulate_2op_SrcV("cmp", src, dst, _regs.eflags);
         dst.type = OP_NONE;
         break;
@@ -4239,7 +4249,9 @@ x86_emulate(
         case 4: goto and;
         case 5: goto sub;
         case 6: goto xor;
-        case 7: goto cmp;
+        case 7:
+            dst.val = imm1;
+            goto cmp;
         }
         break;
 
@@ -5233,11 +5245,8 @@ x86_emulate(
             unsigned long u[2], v;
 
         case 0 ... 1: /* test */
-            generate_exception_if(lock_prefix, EXC_UD);
-            if ( ops->rmw && dst.type == OP_MEM &&
-                 (rc = read_ulong(dst.mem.seg, dst.mem.off, &dst.val,
-                                  dst.bytes, ctxt, ops)) != X86EMUL_OKAY )
-                goto done;
+            dst.val = imm1;
+            dst.bytes = src.bytes;
             goto test;
         case 2: /* not */
             if ( ops->rmw && dst.type == OP_MEM )



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:27:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE4R-0007vY-W0; Mon, 25 May 2020 14:27:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE4Q-0007vJ-E8
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:27:14 +0000
X-Inumbo-ID: d006009c-9e93-11ea-aee4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d006009c-9e93-11ea-aee4-12813bfff9fa;
 Mon, 25 May 2020 14:27:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A86B2ABEC;
 Mon, 25 May 2020 14:27:09 +0000 (UTC)
Subject: [PATCH v10 3/9] x86emul: also test decoding and mem access / write
 logic
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <627a6fbb-8e84-0509-78c8-942736e64503@suse.com>
Date: Mon, 25 May 2020 16:27:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

x86_insn_is_mem_{access,write}() (and their interaction with
x86_decode()) have become sufficiently complex that we should have a way
to test this logic. Start by covering legacy-encoded GPR insns, with the
exception of a few that the main emulator doesn't support yet (left as
comments in the respective tables, or about to be added by subsequent
patches). This has already helped spot a few flaws in said logic,
addressed by (revised) earlier patches.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v10: New.

--- a/tools/tests/x86_emulator/Makefile
+++ b/tools/tests/x86_emulator/Makefile
@@ -250,7 +250,7 @@ xop.h avx512f.h: simd-fma.c
 
 endif # 32-bit override
 
-$(TARGET): x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o wrappers.o
+$(TARGET): x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o
 	$(HOSTCC) $(HOSTCFLAGS) -o $@ $^
 
 .PHONY: clean
@@ -289,7 +289,7 @@ x86.h := $(addprefix $(XEN_ROOT)/tools/i
                      cpuid.h cpuid-autogen.h)
 x86_emulate.h := x86-emulate.h x86_emulate/x86_emulate.h $(x86.h)
 
-x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o wrappers.o: %.o: %.c $(x86_emulate.h)
+x86-emulate.o cpuid.o test_x86_emulator.o evex-disp8.o predicates.o wrappers.o: %.o: %.c $(x86_emulate.h)
 	$(HOSTCC) $(HOSTCFLAGS) -c -g -o $@ $<
 
 x86-emulate.o: x86_emulate/x86_emulate.c
--- /dev/null
+++ b/tools/tests/x86_emulator/predicates.c
@@ -0,0 +1,671 @@
+#include "x86-emulate.h"
+
+#include <stdio.h>
+
+enum mem_access { mem_none, mem_read, mem_write };
+enum pfx { pfx_none, pfx_66, pfx_f3, pfx_f2 };
+static const uint8_t prefixes[] = { 0x66, 0xf3, 0xf2 };
+
+#define F false
+#define T true
+
+#define N mem_none
+#define R mem_read
+#define W mem_write
+
+/*
+ * ModR/M bytes and immediates don't need spelling out in the opcodes,
+ * unless the implied zeros aren't good enough.
+ */
+static const struct {
+    uint8_t opc[8];
+    uint8_t len[2]; /* 32- and 64-bit mode */
+    bool modrm:1; /* Should register form (also) be tested? */
+    unsigned int mem:2;
+    unsigned int pfx:2;
+#define REG(opc, more...) \
+    { { (opc) | 0 }, more }, /* %?ax */ \
+    { { (opc) | 1 }, more }, /* %?cx */ \
+    { { (opc) | 2 }, more }, /* %?dx */ \
+    { { (opc) | 3 }, more }, /* %?bx */ \
+    { { (opc) | 4 }, more }, /* %?sp */ \
+    { { (opc) | 5 }, more }, /* %?bp */ \
+    { { (opc) | 6 }, more }, /* %?si */ \
+    { { (opc) | 7 }, more }  /* %?di */
+#define CND(opc, more...) \
+    { { (opc) | 0x0 }, more }, /* ..o */ \
+    { { (opc) | 0x1 }, more }, /* ..no */ \
+    { { (opc) | 0x2 }, more }, /* ..c / ..b */ \
+    { { (opc) | 0x3 }, more }, /* ..nc / ..nb */ \
+    { { (opc) | 0x4 }, more }, /* ..z / ..e */ \
+    { { (opc) | 0x5 }, more }, /* ..nz / ..ne */ \
+    { { (opc) | 0x6 }, more }, /* ..be / ..na */ \
+    { { (opc) | 0x7 }, more }, /* ..a / ..nbe */ \
+    { { (opc) | 0x8 }, more }, /* ..s */ \
+    { { (opc) | 0x9 }, more }, /* ..ns */ \
+    { { (opc) | 0xa }, more }, /* ..pe / ..p */ \
+    { { (opc) | 0xb }, more }, /* ..po / ..np */ \
+    { { (opc) | 0xc }, more }, /* ..l / ..nge */ \
+    { { (opc) | 0xd }, more }, /* ..ge / ..nl */ \
+    { { (opc) | 0xe }, more }, /* ..le / ..ng */ \
+    { { (opc) | 0xf }, more }  /* ..g / ..nle */
+} legacy[] = {
+    { { 0x00 }, { 2, 2 }, T, W }, /* add */
+    { { 0x01 }, { 2, 2 }, T, W }, /* add */
+    { { 0x02 }, { 2, 2 }, T, R }, /* add */
+    { { 0x03 }, { 2, 2 }, T, R }, /* add */
+    { { 0x04 }, { 2, 2 }, F, N }, /* add */
+    { { 0x05 }, { 5, 5 }, F, N }, /* add */
+    { { 0x06 }, { 1, 0 }, F, W }, /* push %es */
+    { { 0x07 }, { 1, 0 }, F, R }, /* pop %es */
+    { { 0x08 }, { 2, 2 }, T, W }, /* or */
+    { { 0x09 }, { 2, 2 }, T, W }, /* or */
+    { { 0x0a }, { 2, 2 }, T, R }, /* or */
+    { { 0x0b }, { 2, 2 }, T, R }, /* or */
+    { { 0x0c }, { 2, 2 }, F, N }, /* or */
+    { { 0x0d }, { 5, 5 }, F, N }, /* or */
+    { { 0x0e }, { 1, 0 }, F, W }, /* push %cs */
+    { { 0x10 }, { 2, 2 }, T, W }, /* adc */
+    { { 0x11 }, { 2, 2 }, T, W }, /* adc */
+    { { 0x12 }, { 2, 2 }, T, R }, /* adc */
+    { { 0x13 }, { 2, 2 }, T, R }, /* adc */
+    { { 0x14 }, { 2, 2 }, F, N }, /* adc */
+    { { 0x15 }, { 5, 5 }, F, N }, /* adc */
+    { { 0x16 }, { 1, 0 }, F, W }, /* push %ss */
+    { { 0x17 }, { 1, 0 }, F, R }, /* pop %ss */
+    { { 0x18 }, { 2, 2 }, T, W }, /* sbb */
+    { { 0x19 }, { 2, 2 }, T, W }, /* sbb */
+    { { 0x1a }, { 2, 2 }, T, R }, /* sbb */
+    { { 0x1b }, { 2, 2 }, T, R }, /* sbb */
+    { { 0x1c }, { 2, 2 }, F, N }, /* sbb */
+    { { 0x1d }, { 5, 5 }, F, N }, /* sbb */
+    { { 0x1e }, { 1, 0 }, F, W }, /* push %ds */
+    { { 0x1f }, { 1, 0 }, F, R }, /* pop %ds */
+    { { 0x20 }, { 2, 2 }, T, W }, /* and */
+    { { 0x21 }, { 2, 2 }, T, W }, /* and */
+    { { 0x22 }, { 2, 2 }, T, R }, /* and */
+    { { 0x23 }, { 2, 2 }, T, R }, /* and */
+    { { 0x24 }, { 2, 2 }, F, N }, /* and */
+    { { 0x25 }, { 5, 5 }, F, N }, /* and */
+    { { 0x27 }, { 1, 0 }, F, N }, /* daa */
+    { { 0x28 }, { 2, 2 }, T, W }, /* sub */
+    { { 0x29 }, { 2, 2 }, T, W }, /* sub */
+    { { 0x2a }, { 2, 2 }, T, R }, /* sub */
+    { { 0x2b }, { 2, 2 }, T, R }, /* sub */
+    { { 0x2c }, { 2, 2 }, F, N }, /* sub */
+    { { 0x2d }, { 5, 5 }, F, N }, /* sub */
+    { { 0x2f }, { 1, 0 }, F, N }, /* das */
+    { { 0x30 }, { 2, 2 }, T, W }, /* xor */
+    { { 0x31 }, { 2, 2 }, T, W }, /* xor */
+    { { 0x32 }, { 2, 2 }, T, R }, /* xor */
+    { { 0x33 }, { 2, 2 }, T, R }, /* xor */
+    { { 0x34 }, { 2, 2 }, F, N }, /* xor */
+    { { 0x35 }, { 5, 5 }, F, N }, /* xor */
+    { { 0x37 }, { 1, 0 }, F, N }, /* aaa */
+    { { 0x38 }, { 2, 2 }, T, R }, /* cmp */
+    { { 0x39 }, { 2, 2 }, T, R }, /* cmp */
+    { { 0x3a }, { 2, 2 }, T, R }, /* cmp */
+    { { 0x3b }, { 2, 2 }, T, R }, /* cmp */
+    { { 0x3c }, { 2, 2 }, F, N }, /* cmp */
+    { { 0x3d }, { 5, 5 }, F, N }, /* cmp */
+    { { 0x3f }, { 1, 0 }, F, N }, /* aas */
+    REG(0x40,   { 1, 0 }, F, N ), /* inc */
+    REG(0x48,   { 1, 0 }, F, N ), /* dec */
+    REG(0x50,   { 1, 0 }, F, W ), /* push */
+    REG(0x58,   { 1, 0 }, F, R ), /* pop */
+    { { 0x60 }, { 1, 0 }, F, W }, /* pusha */
+    { { 0x61 }, { 1, 0 }, F, R }, /* popa */
+    { { 0x62 }, { 2, 0 }, F, R }, /* bound */
+    { { 0x63 }, { 2, 0 }, F, W }, /* arpl */
+    { { 0x63 }, { 0, 2 }, F, R }, /* movsxd */
+    { { 0x68 }, { 5, 5 }, F, W }, /* push */
+    { { 0x69 }, { 6, 6 }, T, R }, /* imul */
+    { { 0x6a }, { 2, 2 }, F, W }, /* push */
+    { { 0x6b }, { 3, 3 }, T, R }, /* imul */
+    { { 0x6c }, { 1, 1 }, F, W }, /* ins */
+    { { 0x6d }, { 1, 1 }, F, W }, /* ins */
+    { { 0x6e }, { 1, 1 }, F, R }, /* outs */
+    { { 0x6f }, { 1, 1 }, F, R }, /* outs */
+    CND(0x70,   { 2, 2 }, F, N ), /* j<cc> */
+    { { 0x80, 0x00 }, { 3, 3 }, T, W }, /* add */
+    { { 0x80, 0x08 }, { 3, 3 }, T, W }, /* or */
+    { { 0x80, 0x10 }, { 3, 3 }, T, W }, /* adc */
+    { { 0x80, 0x18 }, { 3, 3 }, T, W }, /* sbb */
+    { { 0x80, 0x20 }, { 3, 3 }, T, W }, /* and */
+    { { 0x80, 0x28 }, { 3, 3 }, T, W }, /* sub */
+    { { 0x80, 0x30 }, { 3, 3 }, T, W }, /* xor */
+    { { 0x80, 0x38 }, { 3, 3 }, T, R }, /* cmp */
+    { { 0x81, 0x00 }, { 6, 6 }, T, W }, /* add */
+    { { 0x81, 0x08 }, { 6, 6 }, T, W }, /* or */
+    { { 0x81, 0x10 }, { 6, 6 }, T, W }, /* adc */
+    { { 0x81, 0x18 }, { 6, 6 }, T, W }, /* sbb */
+    { { 0x81, 0x20 }, { 6, 6 }, T, W }, /* and */
+    { { 0x81, 0x28 }, { 6, 6 }, T, W }, /* sub */
+    { { 0x81, 0x30 }, { 6, 6 }, T, W }, /* xor */
+    { { 0x81, 0x38 }, { 6, 6 }, T, R }, /* cmp */
+    { { 0x82, 0x00 }, { 3, 0 }, T, W }, /* add */
+    { { 0x82, 0x08 }, { 3, 0 }, T, W }, /* or */
+    { { 0x82, 0x10 }, { 3, 0 }, T, W }, /* adc */
+    { { 0x82, 0x18 }, { 3, 0 }, T, W }, /* sbb */
+    { { 0x82, 0x20 }, { 3, 0 }, T, W }, /* and */
+    { { 0x82, 0x28 }, { 3, 0 }, T, W }, /* sub */
+    { { 0x82, 0x30 }, { 3, 0 }, T, W }, /* xor */
+    { { 0x82, 0x38 }, { 3, 0 }, T, R }, /* cmp */
+    { { 0x83, 0x00 }, { 3, 3 }, T, W }, /* add */
+    { { 0x83, 0x08 }, { 3, 3 }, T, W }, /* or */
+    { { 0x83, 0x10 }, { 3, 3 }, T, W }, /* adc */
+    { { 0x83, 0x18 }, { 3, 3 }, T, W }, /* sbb */
+    { { 0x83, 0x20 }, { 3, 3 }, T, W }, /* and */
+    { { 0x83, 0x28 }, { 3, 3 }, T, W }, /* sub */
+    { { 0x83, 0x30 }, { 3, 3 }, T, W }, /* xor */
+    { { 0x83, 0x38 }, { 3, 3 }, T, R }, /* cmp */
+    { { 0x84 }, { 2, 2 }, T, R }, /* test */
+    { { 0x85 }, { 2, 2 }, T, R }, /* test */
+    { { 0x86 }, { 2, 2 }, T, W }, /* xchg */
+    { { 0x87 }, { 2, 2 }, T, W }, /* xchg */
+    { { 0x88 }, { 2, 2 }, T, W }, /* mov */
+    { { 0x89 }, { 2, 2 }, T, W }, /* mov */
+    { { 0x8a }, { 2, 2 }, T, R }, /* mov */
+    { { 0x8b }, { 2, 2 }, T, R }, /* mov */
+    { { 0x8c }, { 2, 2 }, T, W }, /* mov */
+    { { 0x8d }, { 2, 2 }, F, N }, /* lea */
+    { { 0x8e }, { 2, 2 }, T, R }, /* mov */
+    { { 0x8f, 0x00 }, { 2, 2 }, F, W }, /* pop */
+    { { 0x8f, 0xc0 }, { 2, 2 }, F, R }, /* pop */
+    REG(0x90,   { 1, 0 }, F, N ), /* xchg */
+    { { 0x98 }, { 1, 1 }, F, N }, /* cbw */
+    { { 0x99 }, { 1, 1 }, F, N }, /* cwd */
+    { { 0x9a }, { 7, 0 }, F, W }, /* lcall */
+    { { 0x9b }, { 1, 1 }, F, N }, /* wait */
+    { { 0x9c }, { 1, 1 }, F, W }, /* pushf */
+    { { 0x9d }, { 1, 1 }, F, R }, /* popf */
+    { { 0x9e }, { 1, 1 }, F, N }, /* sahf */
+    { { 0x9f }, { 1, 1 }, F, N }, /* lahf */
+    { { 0xa0 }, { 5, 9 }, F, R }, /* mov */
+    { { 0xa1 }, { 5, 9 }, F, R }, /* mov */
+    { { 0xa2 }, { 5, 9 }, F, W }, /* mov */
+    { { 0xa3 }, { 5, 9 }, F, W }, /* mov */
+    { { 0xa4 }, { 1, 1 }, F, W }, /* movs */
+    { { 0xa5 }, { 1, 1 }, F, W }, /* movs */
+    { { 0xa6 }, { 1, 1 }, F, R }, /* cmps */
+    { { 0xa7 }, { 1, 1 }, F, R }, /* cmps */
+    { { 0xa8 }, { 2, 2 }, F, N }, /* test */
+    { { 0xa9 }, { 5, 5 }, F, N }, /* test */
+    { { 0xaa }, { 1, 1 }, F, W }, /* stos */
+    { { 0xab }, { 1, 1 }, F, W }, /* stos */
+    { { 0xac }, { 1, 1 }, F, R }, /* lods */
+    { { 0xad }, { 1, 1 }, F, R }, /* lods */
+    { { 0xae }, { 1, 1 }, F, R }, /* scas */
+    { { 0xaf }, { 1, 1 }, F, R }, /* scas */
+    REG(0xb0,   { 2, 2 }, F, N ), /* mov */
+    REG(0xb8,   { 5, 5 }, F, N ), /* mov */
+    { { 0xc0, 0x00 }, { 3, 3 }, T, W }, /* rol */
+    { { 0xc0, 0x08 }, { 3, 3 }, T, W }, /* ror */
+    { { 0xc0, 0x10 }, { 3, 3 }, T, W }, /* rcl */
+    { { 0xc0, 0x18 }, { 3, 3 }, T, W }, /* rcr */
+    { { 0xc0, 0x20 }, { 3, 3 }, T, W }, /* shl */
+    { { 0xc0, 0x28 }, { 3, 3 }, T, W }, /* shr */
+    { { 0xc0, 0x30 }, { 3, 3 }, T, W }, /* sal */
+    { { 0xc0, 0x38 }, { 3, 3 }, T, W }, /* sar */
+    { { 0xc1, 0x00 }, { 3, 3 }, T, W }, /* rol */
+    { { 0xc1, 0x08 }, { 3, 3 }, T, W }, /* ror */
+    { { 0xc1, 0x10 }, { 3, 3 }, T, W }, /* rcl */
+    { { 0xc1, 0x18 }, { 3, 3 }, T, W }, /* rcr */
+    { { 0xc1, 0x20 }, { 3, 3 }, T, W }, /* shl */
+    { { 0xc1, 0x28 }, { 3, 3 }, T, W }, /* shr */
+    { { 0xc1, 0x30 }, { 3, 3 }, T, W }, /* sal */
+    { { 0xc1, 0x38 }, { 3, 3 }, T, W }, /* sar */
+    { { 0xc2 }, { 3, 3 }, F, R }, /* ret */
+    { { 0xc3 }, { 1, 1 }, F, R }, /* ret */
+    { { 0xc4 }, { 2, 0 }, F, R }, /* les */
+    { { 0xc5 }, { 2, 0 }, F, R }, /* lds */
+    { { 0xc6, 0x00 }, { 3, 3 }, T, W }, /* mov */
+    { { 0xc6, 0xf8 }, { 3, 3 }, F, N }, /* xabort */
+    { { 0xc7, 0x00 }, { 6, 6 }, T, W }, /* mov */
+    { { 0xc7, 0xf8 }, { 6, 6 }, F, N }, /* xbegin */
+    { { 0xc8 }, { 4, 4 }, F, W }, /* enter */
+    { { 0xc9 }, { 1, 1 }, F, R }, /* leave */
+    { { 0xca }, { 3, 3 }, F, R }, /* lret */
+    { { 0xcb }, { 1, 1 }, F, R }, /* lret */
+    { { 0xcc }, { 1, 1 }, F, N }, /* int3 */
+    { { 0xcd }, { 2, 2 }, F, N }, /* int */
+    { { 0xce }, { 1, 0 }, F, N }, /* into */
+    { { 0xcf }, { 1, 1 }, F, N }, /* iret */
+    { { 0xd0, 0x00 }, { 2, 2 }, T, W }, /* rol */
+    { { 0xd0, 0x08 }, { 2, 2 }, T, W }, /* ror */
+    { { 0xd0, 0x10 }, { 2, 2 }, T, W }, /* rcl */
+    { { 0xd0, 0x18 }, { 2, 2 }, T, W }, /* rcr */
+    { { 0xd0, 0x20 }, { 2, 2 }, T, W }, /* shl */
+    { { 0xd0, 0x28 }, { 2, 2 }, T, W }, /* shr */
+    { { 0xd0, 0x30 }, { 2, 2 }, T, W }, /* sal */
+    { { 0xd0, 0x38 }, { 2, 2 }, T, W }, /* sar */
+    { { 0xd1, 0x00 }, { 2, 2 }, T, W }, /* rol */
+    { { 0xd1, 0x08 }, { 2, 2 }, T, W }, /* ror */
+    { { 0xd1, 0x10 }, { 2, 2 }, T, W }, /* rcl */
+    { { 0xd1, 0x18 }, { 2, 2 }, T, W }, /* rcr */
+    { { 0xd1, 0x20 }, { 2, 2 }, T, W }, /* shl */
+    { { 0xd1, 0x28 }, { 2, 2 }, T, W }, /* shr */
+    { { 0xd1, 0x30 }, { 2, 2 }, T, W }, /* sal */
+    { { 0xd1, 0x38 }, { 2, 2 }, T, W }, /* sar */
+    { { 0xd2, 0x00 }, { 2, 2 }, T, W }, /* rol */
+    { { 0xd2, 0x08 }, { 2, 2 }, T, W }, /* ror */
+    { { 0xd2, 0x10 }, { 2, 2 }, T, W }, /* rcl */
+    { { 0xd2, 0x18 }, { 2, 2 }, T, W }, /* rcr */
+    { { 0xd2, 0x20 }, { 2, 2 }, T, W }, /* shl */
+    { { 0xd2, 0x28 }, { 2, 2 }, T, W }, /* shr */
+    { { 0xd2, 0x30 }, { 2, 2 }, T, W }, /* sal */
+    { { 0xd2, 0x38 }, { 2, 2 }, T, W }, /* sar */
+    { { 0xd3, 0x00 }, { 2, 2 }, T, W }, /* rol */
+    { { 0xd3, 0x08 }, { 2, 2 }, T, W }, /* ror */
+    { { 0xd3, 0x10 }, { 2, 2 }, T, W }, /* rcl */
+    { { 0xd3, 0x18 }, { 2, 2 }, T, W }, /* rcr */
+    { { 0xd3, 0x20 }, { 2, 2 }, T, W }, /* shl */
+    { { 0xd3, 0x28 }, { 2, 2 }, T, W }, /* shr */
+    { { 0xd3, 0x30 }, { 2, 2 }, T, W }, /* sal */
+    { { 0xd3, 0x38 }, { 2, 2 }, T, W }, /* sar */
+    { { 0xd4 }, { 2, 0 }, F, N }, /* aam */
+    { { 0xd5 }, { 2, 0 }, F, N }, /* aad */
+    { { 0xd6 }, { 1, 0 }, F, N }, /* salc */
+    { { 0xd7 }, { 1, 1 }, F, R }, /* xlat */
+    { { 0xe0 }, { 2, 2 }, F, N }, /* loopne */
+    { { 0xe1 }, { 2, 2 }, F, N }, /* loope */
+    { { 0xe2 }, { 2, 2 }, F, N }, /* loop */
+    { { 0xe3 }, { 2, 2 }, F, N }, /* j?cxz */
+    { { 0xe4 }, { 2, 2 }, F, N }, /* in */
+    { { 0xe5 }, { 2, 2 }, F, N }, /* in */
+    { { 0xe6 }, { 2, 2 }, F, N }, /* out */
+    { { 0xe7 }, { 2, 2 }, F, N }, /* out */
+    { { 0xe8 }, { 5, 5 }, F, W }, /* call */
+    { { 0xe9 }, { 5, 5 }, F, N }, /* jmp */
+    { { 0xea }, { 7, 0 }, F, N }, /* ljmp */
+    { { 0xeb }, { 2, 2 }, F, N }, /* jmp */
+    { { 0xec }, { 1, 1 }, F, N }, /* in */
+    { { 0xed }, { 1, 1 }, F, N }, /* in */
+    { { 0xee }, { 1, 1 }, F, N }, /* out */
+    { { 0xef }, { 1, 1 }, F, N }, /* out */
+    { { 0xf1 }, { 1, 1 }, F, N }, /* icebp */
+    { { 0xf4 }, { 1, 1 }, F, N }, /* hlt */
+    { { 0xf5 }, { 1, 1 }, F, N }, /* cmc */
+    { { 0xf6, 0x00 }, { 3, 3 }, T, R }, /* test */
+    { { 0xf6, 0x08 }, { 3, 3 }, T, R }, /* test */
+    { { 0xf6, 0x10 }, { 2, 2 }, T, W }, /* not */
+    { { 0xf6, 0x18 }, { 2, 2 }, T, W }, /* neg */
+    { { 0xf6, 0x20 }, { 2, 2 }, T, R }, /* mul */
+    { { 0xf6, 0x28 }, { 2, 2 }, T, R }, /* imul */
+    { { 0xf6, 0x30 }, { 2, 2 }, T, R }, /* div */
+    { { 0xf6, 0x38 }, { 2, 2 }, T, R }, /* idiv */
+    { { 0xf7, 0x00 }, { 6, 6 }, T, R }, /* test */
+    { { 0xf7, 0x08 }, { 6, 6 }, T, R }, /* test */
+    { { 0xf7, 0x10 }, { 2, 2 }, T, W }, /* not */
+    { { 0xf7, 0x18 }, { 2, 2 }, T, W }, /* neg */
+    { { 0xf7, 0x20 }, { 2, 2 }, T, R }, /* mul */
+    { { 0xf7, 0x28 }, { 2, 2 }, T, R }, /* imul */
+    { { 0xf7, 0x30 }, { 2, 2 }, T, R }, /* div */
+    { { 0xf7, 0x38 }, { 2, 2 }, T, R }, /* idiv */
+    { { 0xf8 }, { 1, 1 }, F, N }, /* clc */
+    { { 0xf9 }, { 1, 1 }, F, N }, /* stc */
+    { { 0xfa }, { 1, 1 }, F, N }, /* cli */
+    { { 0xfb }, { 1, 1 }, F, N }, /* sti */
+    { { 0xfc }, { 1, 1 }, F, N }, /* cld */
+    { { 0xfd }, { 1, 1 }, F, N }, /* std */
+    { { 0xfe, 0x00 }, { 2, 2 }, T, W }, /* inc */
+    { { 0xfe, 0x08 }, { 2, 2 }, T, W }, /* dec */
+    { { 0xff, 0x00 }, { 2, 2 }, T, W }, /* inc */
+    { { 0xff, 0x08 }, { 2, 2 }, T, W }, /* dec */
+    { { 0xff, 0x10 }, { 2, 2 }, F, W }, /* call */
+    { { 0xff, 0x18 }, { 2, 2 }, F, W }, /* lcall */
+    { { 0xff, 0x20 }, { 2, 2 }, T, R }, /* jmp */
+    { { 0xff, 0x28 }, { 2, 2 }, F, R }, /* ljmp */
+    { { 0xff, 0x30 }, { 2, 2 }, F, W }, /* push */
+    { { 0xff, 0xd0 }, { 2, 2 }, F, W }, /* call */
+    { { 0xff, 0xf0 }, { 2, 2 }, F, W }, /* push */
+}, legacy_0f[] = {
+    { { 0x00, 0x00 }, { 2, 2 }, T, W }, /* sldt */
+    { { 0x00, 0x08 }, { 2, 2 }, T, W }, /* str */
+    { { 0x00, 0x10 }, { 2, 2 }, T, R }, /* lldt */
+    { { 0x00, 0x18 }, { 2, 2 }, T, R }, /* ltr */
+    { { 0x00, 0x20 }, { 2, 2 }, T, R }, /* verr */
+    { { 0x00, 0x28 }, { 2, 2 }, T, R }, /* verw */
+    { { 0x01, 0x00 }, { 2, 2 }, F, W }, /* sgdt */
+    { { 0x01, 0x08 }, { 2, 2 }, F, W }, /* sidt */
+    { { 0x01, 0x10 }, { 2, 2 }, F, R }, /* lgdt */
+    { { 0x01, 0x18 }, { 2, 2 }, F, R }, /* lidt */
+    { { 0x01, 0x20 }, { 2, 2 }, T, W }, /* smsw */
+    /*{ 0x01, 0x28 }, { 2, 2 }, F, W, pfx_f3 }, rstorssp */
+    { { 0x01, 0x30 }, { 2, 2 }, T, R }, /* lmsw */
+    { { 0x01, 0x38 }, { 2, 2 }, F, N }, /* invlpg */
+    { { 0x01, 0xc0 }, { 2, 2 }, T, N }, /* enclv */
+    { { 0x01, 0xc1 }, { 2, 2 }, T, N }, /* vmcall */
+    /*{ 0x01, 0xc2 }, { 2, 2 }, F, R }, vmlaunch */
+    /*{ 0x01, 0xc3 }, { 2, 2 }, F, R }, vmresume */
+    { { 0x01, 0xc4 }, { 2, 2 }, T, N }, /* vmxoff */
+    { { 0x01, 0xc5 }, { 2, 2 }, T, N }, /* pconfig */
+    { { 0x01, 0xc8 }, { 2, 2 }, T, N }, /* monitor */
+    { { 0x01, 0xc9 }, { 2, 2 }, T, N }, /* mwait */
+    { { 0x01, 0xca }, { 2, 2 }, T, N }, /* clac */
+    { { 0x01, 0xcb }, { 2, 2 }, T, N }, /* stac */
+    { { 0x01, 0xcf }, { 2, 2 }, T, N }, /* encls */
+    { { 0x01, 0xd0 }, { 2, 2 }, T, N }, /* xgetbv */
+    { { 0x01, 0xd1 }, { 2, 2 }, T, N }, /* xsetbv */
+    { { 0x01, 0xd4 }, { 2, 2 }, T, N }, /* vmfunc */
+    { { 0x01, 0xd5 }, { 2, 2 }, T, N }, /* xend */
+    { { 0x01, 0xd6 }, { 2, 2 }, T, N }, /* xtest */
+    { { 0x01, 0xd7 }, { 2, 2 }, T, N }, /* enclu */
+    /*{ 0x01, 0xd8 }, { 2, 2 }, F, R }, vmrun */
+    { { 0x01, 0xd9 }, { 2, 2 }, T, N }, /* vmmcall */
+    { { 0x01, 0xd9 }, { 2, 2 }, T, N, pfx_f3 }, /* vmgexit */
+    { { 0x01, 0xd9 }, { 2, 2 }, T, N, pfx_f2 }, /* vmgexit */
+    /*{ 0x01, 0xda }, { 2, 2 }, F, R }, vmload */
+    /*{ 0x01, 0xdb }, { 2, 2 }, F, W }, vmsave */
+    { { 0x01, 0xdc }, { 2, 2 }, T, N }, /* stgi */
+    { { 0x01, 0xdd }, { 2, 2 }, T, N }, /* clgi */
+    /*{ 0x01, 0xde }, { 2, 2 }, F, R }, skinit */
+    { { 0x01, 0xdf }, { 2, 2 }, T, N }, /* invlpga */
+    { { 0x01, 0xe8 }, { 2, 2 }, T, N }, /* serialize */
+    /*{ 0x01, 0xe8 }, { 2, 2 }, F, W, pfx_f3 }, setssbsy */
+    { { 0x01, 0xe8 }, { 2, 2 }, T, N, pfx_f2 }, /* xsusldtrk */
+    { { 0x01, 0xe9 }, { 2, 2 }, T, N, pfx_f2 }, /* xresldtrk */
+    /*{ 0x01, 0xea }, { 2, 2 }, F, W, pfx_f3 }, saveprevssp */
+    { { 0x01, 0xee }, { 2, 2 }, T, N }, /* rdpkru */
+    { { 0x01, 0xef }, { 2, 2 }, T, N }, /* wrpkru */
+    { { 0x01, 0xf8 }, { 0, 2 }, T, N }, /* swapgs */
+    { { 0x01, 0xf9 }, { 2, 2 }, T, N }, /* rdtscp */
+    { { 0x01, 0xfa }, { 2, 2 }, T, N }, /* monitorx */
+    { { 0x01, 0xfa }, { 2, 2 }, T, N, pfx_f3 }, /* mcommit */
+    { { 0x01, 0xfb }, { 2, 2 }, T, N }, /* mwaitx */
+    { { 0x01, 0xfc }, { 2, 2 }, F, W }, /* clzero */
+    { { 0x01, 0xfd }, { 2, 2 }, T, N }, /* rdpru */
+    { { 0x01, 0xfe }, { 2, 2 }, T, N }, /* invlpgb */
+    { { 0x01, 0xfe }, { 0, 2 }, T, N, pfx_f3 }, /* rmpadjust */
+    { { 0x01, 0xfe }, { 0, 2 }, T, N, pfx_f2 }, /* rmpupdate */
+    { { 0x01, 0xff }, { 2, 2 }, T, N }, /* tlbsync */
+    { { 0x01, 0xff }, { 0, 2 }, T, N, pfx_f3 }, /* psmash */
+    { { 0x01, 0xff }, { 0, 2 }, T, N, pfx_f2 }, /* pvalidate */
+    { { 0x02 }, { 2, 2 }, T, R }, /* lar */
+    { { 0x03 }, { 2, 2 }, T, R }, /* lsl */
+    { { 0x05 }, { 1, 1 }, F, N }, /* syscall */
+    { { 0x06 }, { 1, 1 }, F, N }, /* clts */
+    { { 0x07 }, { 1, 1 }, F, N }, /* sysret */
+    { { 0x08 }, { 1, 1 }, F, N }, /* invd */
+    { { 0x09 }, { 1, 1 }, F, N }, /* wbinvd */
+    { { 0x09 }, { 1, 1 }, F, N, pfx_f3 }, /* wbnoinvd */
+    { { 0x0b }, { 1, 1 }, F, N }, /* ud2 */
+    { { 0x0d, 0x00 }, { 2, 2 }, F, N }, /* prefetch */
+    { { 0x0d, 0x08 }, { 2, 2 }, F, N }, /* prefetchw */
+    { { 0x0e }, { 1, 1 }, F, N }, /* femms */
+    { { 0x18, 0x00 }, { 2, 2 }, F, N }, /* prefetchnta */
+    { { 0x18, 0x08 }, { 2, 2 }, F, N }, /* prefetcht0 */
+    { { 0x18, 0x10 }, { 2, 2 }, F, N }, /* prefetcht1 */
+    { { 0x18, 0x18 }, { 2, 2 }, F, N }, /* prefetcht2 */
+    /*{ 0x1a }, { 2, 2 }, F, R }, bndldx */
+    /*{ 0x1a }, { 2, 2 }, T, R, pfx_66 }, bndmov */
+    { { 0x1a }, { 2, 2 }, T, N, pfx_f3 }, /* bndcl */
+    { { 0x1a }, { 2, 2 }, T, N, pfx_f2 }, /* bndcu */
+    /*{ 0x1b }, { 2, 2 }, F, W }, bndstx */
+    /*{ 0x1b }, { 2, 2 }, T, W, pfx_66 }, bndmov */
+    { { 0x1b }, { 2, 2 }, F, N, pfx_f3 }, /* bndmk */
+    { { 0x1b }, { 2, 2 }, T, N, pfx_f2 }, /* bndcn */
+    { { 0x1c, 0x00 }, { 2, 2 }, F, N }, /* cldemote */
+    { { 0x1e, 0xc8 }, { 2, 2 }, F, N, pfx_f3 }, /* rdssp */
+    { { 0x1e, 0xfa }, { 2, 2 }, F, N, pfx_f3 }, /* endbr64 */
+    { { 0x1e, 0xfb }, { 2, 2 }, F, N, pfx_f3 }, /* endbr32 */
+    { { 0x1f, 0x00 }, { 2, 2 }, T, N }, /* nop */
+    { { 0x20 }, { 2, 2 }, T, N }, /* mov */
+    { { 0x21 }, { 2, 2 }, T, N }, /* mov */
+    { { 0x22 }, { 2, 2 }, T, N }, /* mov */
+    { { 0x23 }, { 2, 2 }, T, N }, /* mov */
+    { { 0x30 }, { 1, 1 }, F, N }, /* wrmsr */
+    { { 0x31 }, { 1, 1 }, F, N }, /* rdtsc */
+    { { 0x32 }, { 1, 1 }, F, N }, /* rdmsr */
+    { { 0x33 }, { 1, 1 }, F, N }, /* rdpmc */
+    { { 0x34 }, { 1, 1 }, F, N }, /* sysenter */
+    { { 0x35 }, { 1, 1 }, F, N }, /* sysexit */
+    CND(0x40,   { 2, 2 }, T, R ), /* cmov<cc> */
+    /*{ 0x78 }, { 2, 2 }, T, W }, vmread */
+    { { 0x79 }, { 2, 2 }, T, R }, /* vmwrite */
+    CND(0x80,   { 5, 5 }, F, N ), /* j<cc> */
+    CND(0x90,   { 2, 2 }, T, W ), /* set<cc> */
+    { { 0xa0 }, { 1, 1 }, F, W }, /* push %fs */
+    { { 0xa1 }, { 1, 1 }, F, R }, /* pop %fs */
+    { { 0xa2 }, { 1, 1 }, F, N }, /* cpuid */
+    { { 0xa3 }, { 2, 2 }, T, R }, /* bt */
+    { { 0xa4 }, { 3, 3 }, T, W }, /* shld */
+    { { 0xa5 }, { 2, 2 }, T, W }, /* shld */
+    { { 0xa8 }, { 1, 1 }, F, W }, /* push %gs */
+    { { 0xa9 }, { 1, 1 }, F, R }, /* pop %gs */
+    { { 0xaa }, { 1, 1 }, F, N }, /* rsm */
+    { { 0xab }, { 2, 2 }, T, W }, /* bts */
+    { { 0xac }, { 3, 3 }, T, W }, /* shrd */
+    { { 0xad }, { 2, 2 }, T, W }, /* shrd */
+    { { 0xae, 0x00 }, { 2, 2 }, F, W }, /* fxsave */
+    { { 0xae, 0x08 }, { 2, 2 }, F, R }, /* fxrstor */
+    { { 0xae, 0x10 }, { 2, 2 }, F, R }, /* ldmxcsr */
+    { { 0xae, 0x18 }, { 2, 2 }, F, W }, /* stmxcsr */
+    { { 0xae, 0x20 }, { 2, 2 }, F, W }, /* xsave */
+    { { 0xae, 0x20 }, { 2, 2 }, F, R, pfx_f3 }, /* ptwrite */
+    { { 0xae, 0x28 }, { 2, 2 }, F, R }, /* xrstor */
+    { { 0xae, 0x30 }, { 2, 2 }, F, W }, /* xsaveopt */
+    { { 0xae, 0x30 }, { 2, 2 }, F, N, pfx_66 }, /* clwb */
+    /*{ 0xae, 0x30 }, { 2, 2 }, F, W, pfx_f3 }, clrssbsy */
+    { { 0xae, 0x38 }, { 2, 2 }, F, N }, /* clflush */
+    { { 0xae, 0x38 }, { 2, 2 }, F, N, pfx_66 }, /* clflushopt */
+    { { 0xae, 0xc0 }, { 0, 2 }, F, N, pfx_f3 }, /* rdfsbase */
+    { { 0xae, 0xc8 }, { 0, 2 }, F, N, pfx_f3 }, /* rdgsbase */
+    { { 0xae, 0xd0 }, { 0, 2 }, F, N, pfx_f3 }, /* wrfsbase */
+    { { 0xae, 0xd8 }, { 0, 2 }, F, N, pfx_f3 }, /* wrgsbase */
+    { { 0xae, 0xe8 }, { 2, 2 }, F, N }, /* lfence */
+    /*{ 0xae, 0xe8 }, { 2, 2 }, F, R, pfx_f3 }, incssp */
+    { { 0xae, 0xf0 }, { 2, 2 }, F, N }, /* mfence */
+    { { 0xae, 0xf0 }, { 2, 2 }, F, N, pfx_66 }, /* tpause */
+    { { 0xae, 0xf0 }, { 2, 2 }, F, N, pfx_f3 }, /* umonitor */
+    { { 0xae, 0xf0 }, { 2, 2 }, F, N, pfx_f2 }, /* umwait */
+    { { 0xae, 0xf8 }, { 2, 2 }, F, N }, /* sfence */
+    { { 0xaf }, { 2, 2 }, T, R }, /* imul */
+    { { 0xb0 }, { 2, 2 }, F, W }, /* cmpxchg */
+    { { 0xb1 }, { 2, 2 }, F, W }, /* cmpxchg */
+    { { 0xb2 }, { 2, 2 }, F, R }, /* lss */
+    { { 0xb3 }, { 2, 2 }, T, W }, /* btr */
+    { { 0xb4 }, { 2, 2 }, F, R }, /* lfs */
+    { { 0xb5 }, { 2, 2 }, F, R }, /* lgs */
+    { { 0xb6 }, { 2, 2 }, F, R }, /* movzx */
+    { { 0xb7 }, { 2, 2 }, F, R }, /* movzx */
+    { { 0xb8 }, { 2, 2 }, F, R }, /* popcnt */
+    { { 0xb9 }, { 2, 2 }, F, N }, /* ud1 */
+    { { 0xba, 0x20 }, { 3, 3 }, T, R }, /* bt */
+    { { 0xba, 0x28 }, { 3, 3 }, T, W }, /* bts */
+    { { 0xba, 0x30 }, { 3, 3 }, T, W }, /* btr */
+    { { 0xba, 0x38 }, { 3, 3 }, T, W }, /* btc */
+    { { 0xbb }, { 2, 2 }, T, W }, /* btc */
+    { { 0xbc }, { 2, 2 }, T, R }, /* bsf */
+    { { 0xbc }, { 2, 2 }, T, R, pfx_f3 }, /* tzcnt */
+    { { 0xbd }, { 2, 2 }, T, R }, /* bsr */
+    { { 0xbd }, { 2, 2 }, T, R, pfx_f3 }, /* lzcnt */
+    { { 0xbe }, { 2, 2 }, F, R }, /* movsx */
+    { { 0xbf }, { 2, 2 }, F, R }, /* movsx */
+    { { 0xc0 }, { 2, 2 }, F, W }, /* xadd */
+    { { 0xc1 }, { 2, 2 }, F, W }, /* xadd */
+    { { 0xc3 }, { 2, 2 }, F, W }, /* movnti */
+    { { 0xc7, 0x08 }, { 2, 2 }, F, W }, /* cmpxchg8b */
+    { { 0xc7, 0x18 }, { 2, 2 }, F, R }, /* xrstors */
+    { { 0xc7, 0x20 }, { 2, 2 }, F, W }, /* xsavec */
+    { { 0xc7, 0x28 }, { 2, 2 }, F, W }, /* xsaves */
+    { { 0xc7, 0x30 }, { 2, 2 }, F, R }, /* vmptrld */
+    { { 0xc7, 0x30 }, { 2, 2 }, F, R, pfx_66 }, /* vmclear */
+    { { 0xc7, 0x30 }, { 2, 2 }, F, R, pfx_f3 }, /* vmxon */
+    { { 0xc7, 0x38 }, { 2, 2 }, F, R }, /* vmptrst */
+    { { 0xc7, 0xf0 }, { 2, 2 }, F, N }, /* rdrand */
+    { { 0xc7, 0xf8 }, { 2, 2 }, F, N }, /* rdseed */
+    { { 0xc7, 0xf8 }, { 2, 2 }, F, N, pfx_f3 }, /* rdpid */
+    REG(0xc8,   { 1, 1 }, F, N ), /* bswap */
+    { { 0xff }, { 2, 2 }, F, N }, /* ud0 */
+}, legacy_0f38[] = {
+    { { 0x80 }, { 2, 2 }, T, R, pfx_66 }, /* invept */
+    { { 0x81 }, { 2, 2 }, T, R, pfx_66 }, /* invvpid */
+    { { 0x82 }, { 2, 2 }, T, R, pfx_66 }, /* invpcid */
+    { { 0xf0 }, { 2, 2 }, T, R }, /* movbe */
+    { { 0xf0 }, { 2, 2 }, T, R, pfx_f2 }, /* crc32 */
+    { { 0xf1 }, { 2, 2 }, T, W }, /* movbe */
+    { { 0xf1 }, { 2, 2 }, T, R, pfx_f2 }, /* crc32 */
+    /*{ 0xf5 }, { 2, 2 }, F, W, pfx_66 }, wruss */
+    /*{ 0xf6 }, { 2, 2 }, F, W }, wrss */
+    { { 0xf6 }, { 2, 2 }, T, R, pfx_66 }, /* adcx */
+    { { 0xf6 }, { 2, 2 }, T, R, pfx_f3 }, /* adox */
+};
+#undef CND
+#undef REG
+#undef F
+#undef N
+#undef R
+#undef T
+#undef W
+
+static unsigned int errors;
+
+static void print_insn(const uint8_t *instr, unsigned int len)
+{
+    if ( !errors++ )
+        puts("");
+    while ( len-- )
+        printf("%02x%c", *instr++, len ? ' ' : ':');
+}
+
+void do_test(uint8_t *instr, unsigned int len, unsigned int modrm,
+             enum mem_access mem, struct x86_emulate_ctxt *ctxt,
+             int (*fetch)(enum x86_segment seg,
+                          unsigned long offset,
+                          void *p_data,
+                          unsigned int bytes,
+                          struct x86_emulate_ctxt *ctxt))
+{
+    struct x86_emulate_state *s;
+
+    if ( !modrm || mem != mem_none )
+    {
+        s = x86_decode_insn(ctxt, fetch);
+
+        if ( x86_insn_length(s, ctxt) != len )
+        {
+            print_insn(instr, len);
+            printf(" length %u (expected %u)\n", x86_insn_length(s, ctxt), len);
+        }
+
+        if ( x86_insn_is_mem_access(s, ctxt) != (mem != mem_none) )
+        {
+            print_insn(instr, len);
+            printf(" mem access %d (expected %d)\n",
+                   x86_insn_is_mem_access(s, ctxt), mem != mem_none);
+        }
+
+        if ( x86_insn_is_mem_write(s, ctxt) != (mem == mem_write) )
+        {
+            print_insn(instr, len);
+            printf(" mem write %d (expected %d)\n",
+                   x86_insn_is_mem_write(s, ctxt), mem == mem_write);
+        }
+
+        x86_emulate_free_state(s);
+    }
+
+    if ( modrm )
+    {
+        instr[modrm] |= 0xc0;
+
+        s = x86_decode_insn(ctxt, fetch);
+
+        if ( x86_insn_length(s, ctxt) != len )
+        {
+            print_insn(instr, len);
+            printf(" length %u (expected %u)\n", x86_insn_length(s, ctxt), len);
+        }
+
+        if ( x86_insn_is_mem_access(s, ctxt) ||
+             x86_insn_is_mem_write(s, ctxt) )
+        {
+            print_insn(instr, len);
+            printf(" mem access %d / write %d unexpected\n",
+                   x86_insn_is_mem_access(s, ctxt),
+                   x86_insn_is_mem_write(s, ctxt));
+        }
+
+        x86_emulate_free_state(s);
+    }
+}
+
+void predicates_test(void *instr, struct x86_emulate_ctxt *ctxt,
+                     int (*fetch)(enum x86_segment seg,
+                                  unsigned long offset,
+                                  void *p_data,
+                                  unsigned int bytes,
+                                  struct x86_emulate_ctxt *ctxt))
+{
+    unsigned int m;
+
+    ctxt->regs->eip = (unsigned long)instr;
+
+    for ( m = 0; m < sizeof(long) / sizeof(int); ++m )
+    {
+        unsigned int t;
+
+        ctxt->addr_size = 32 << m;
+        ctxt->sp_size = 32 << m;
+        ctxt->lma = ctxt->sp_size == 64;
+
+        printf("Testing %u-bit decoding / predicates...", ctxt->sp_size);
+
+        for ( t = 0; t < ARRAY_SIZE(legacy); ++t )
+        {
+            if ( !legacy[t].len[m] )
+                continue;
+
+            assert(!legacy[t].pfx);
+
+            memset(instr + 1, 0xcc, 14);
+            memcpy(instr, legacy[t].opc, legacy[t].len[m]);
+
+            do_test(instr, legacy[t].len[m], legacy[t].modrm, legacy[t].mem,
+                    ctxt, fetch);
+        }
+
+        for ( t = 0; t < ARRAY_SIZE(legacy_0f); ++t )
+        {
+            uint8_t *ptr = instr;
+
+            if ( !legacy_0f[t].len[m] )
+                continue;
+
+            memset(instr + 2, 0xcc, 13);
+            if ( legacy_0f[t].pfx )
+                *ptr++ = prefixes[legacy_0f[t].pfx - 1];
+            *ptr++ = 0x0f;
+            memcpy(ptr, legacy_0f[t].opc, legacy_0f[t].len[m]);
+
+            do_test(instr, legacy_0f[t].len[m] + ((void *)ptr - instr),
+                    legacy_0f[t].modrm ? (void *)ptr - instr + 1 : 0,
+                    legacy_0f[t].mem, ctxt, fetch);
+        }
+
+        for ( t = 0; t < ARRAY_SIZE(legacy_0f38); ++t )
+        {
+            uint8_t *ptr = instr;
+
+            if ( !legacy_0f38[t].len[m] )
+                continue;
+
+            memset(instr + 2, 0xcc, 13);
+            if ( legacy_0f38[t].pfx )
+                *ptr++ = prefixes[legacy_0f38[t].pfx - 1];
+            *ptr++ = 0x0f;
+            *ptr++ = 0x38;
+            memcpy(ptr, legacy_0f38[t].opc, legacy_0f38[t].len[m]);
+
+            do_test(instr, legacy_0f38[t].len[m] + ((void *)ptr - instr),
+                    legacy_0f38[t].modrm ? (void *)ptr - instr + 1 : 0,
+                    legacy_0f38[t].mem, ctxt, fetch);
+        }
+
+        if ( errors )
+            exit(1);
+
+        puts(" okay");
+    }
+}
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -4810,6 +4810,8 @@ int main(int argc, char **argv)
     if ( stack_exec )
         evex_disp8_test(instr, &ctxt, &emulops);
 
+    predicates_test(instr, &ctxt, fetch);
+
     for ( j = 0; j < ARRAY_SIZE(blobs); j++ )
     {
         if ( blobs[j].check_cpu && !blobs[j].check_cpu() )
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -1,7 +1,13 @@
 #include "x86-emulate.h"
 
+#include <errno.h>
 #include <sys/mman.h>
 
+#define DEFINE_PER_CPU(type, var) type per_cpu_##var
+#define this_cpu(var) per_cpu_##var
+
+#define ERR_PTR(val) NULL
+
 #define cpu_has_amd_erratum(nr) 0
 #define cpu_has_mpx false
 #define read_bndcfgu() 0
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -101,6 +101,12 @@ WRAP(puts);
 
 void evex_disp8_test(void *instr, struct x86_emulate_ctxt *ctxt,
                      const struct x86_emulate_ops *ops);
+void predicates_test(void *instr, struct x86_emulate_ctxt *ctxt,
+                     int (*fetch)(enum x86_segment seg,
+                                  unsigned long offset,
+                                  void *p_data,
+                                  unsigned int bytes,
+                                  struct x86_emulate_ctxt *ctxt));
 
 static inline uint64_t xgetbv(uint32_t xcr)
 {
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -10,6 +10,7 @@
  */
 
 #include <xen/domain_page.h>
+#include <xen/err.h>
 #include <xen/event.h>
 #include <asm/x86_emulate.h>
 #include <asm/processor.h> /* current_cpu_info */
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -11382,10 +11382,6 @@ int x86_emulate_wrapper(
 }
 #endif
 
-#ifdef __XEN__
-
-#include <xen/err.h>
-
 struct x86_emulate_state *
 x86_decode_insn(
     struct x86_emulate_ctxt *ctxt,
@@ -11408,7 +11404,7 @@ x86_decode_insn(
     if ( unlikely(rc != X86EMUL_OKAY) )
         return ERR_PTR(-rc);
 
-#ifndef NDEBUG
+#if defined(__XEN__) && !defined(NDEBUG)
     /*
      * While we avoid memory allocation (by use of per-CPU data) above,
      * nevertheless make sure callers properly release the state structure
@@ -11428,12 +11424,12 @@ x86_decode_insn(
 
 static inline void check_state(const struct x86_emulate_state *state)
 {
-#ifndef NDEBUG
+#if defined(__XEN__) && !defined(NDEBUG)
     ASSERT(state->caller);
 #endif
 }
 
-#ifndef NDEBUG
+#if defined(__XEN__) && !defined(NDEBUG)
 void x86_emulate_free_state(struct x86_emulate_state *state)
 {
     check_state(state);
@@ -11806,5 +11802,3 @@ x86_insn_length(const struct x86_emulate
 
     return state->ip - ctxt->regs->r(ip);
 }
-
-#endif
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -730,8 +730,6 @@ x86emul_unhandleable_rw(
     unsigned int bytes,
     struct x86_emulate_ctxt *ctxt);
 
-#ifdef __XEN__
-
 struct x86_emulate_state *
 x86_decode_insn(
     struct x86_emulate_ctxt *ctxt,
@@ -767,12 +765,14 @@ bool
 x86_insn_is_cr_access(const struct x86_emulate_state *state,
                       const struct x86_emulate_ctxt *ctxt);
 
-#ifdef NDEBUG
+#if !defined(__XEN__) || defined(NDEBUG)
 static inline void x86_emulate_free_state(struct x86_emulate_state *state) {}
 #else
 void x86_emulate_free_state(struct x86_emulate_state *state);
 #endif
 
+#ifdef __XEN__
+
 int x86emul_read_xcr(unsigned int reg, uint64_t *val,
                      struct x86_emulate_ctxt *ctxt);
 int x86emul_write_xcr(unsigned int reg, uint64_t val,



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:27:57 2020
Subject: [PATCH v10 4/9] x86emul: disable FPU/MMX/SIMD insn emulation when !HVM
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <a2c36be0-03f0-00ca-33e9-9915773ccb0f@suse.com>
Date: Mon, 25 May 2020 16:27:52 +0200
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

In a pure PV environment (the PV shim in particular) we don't really
need emulation of all these. To limit #ifdef-ary utilize some of the
CASE_*() macros we have, by providing variants expanding to
(effectively) nothing (really a label, which in turn requires passing
-Wno-unused-label to the compiler when building such configurations).

Due to the mixture of macro and #ifdef use, the placement of some of
the #ifdef-s is a little arbitrary.

The resulting object file's .text is less than half the size of the
original, and it also appears to compile a little more quickly.

This is meant as a first step; more parts can likely be disabled down
the road.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v7: Integrate into this series. Re-base.
---
I'll be happy to take suggestions on how to avoid -Wno-unused-label.

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -73,6 +73,9 @@ obj-y += vm_event.o
 obj-y += xstate.o
 extra-y += asm-macros.i
 
+ifneq ($(CONFIG_HVM),y)
+x86_emulate.o: CFLAGS-y += -Wno-unused-label
+endif
 x86_emulate.o: x86_emulate/x86_emulate.c x86_emulate/x86_emulate.h
 
 efi-y := $(shell if [ ! -r $(BASEDIR)/include/xen/compile.h -o \
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -43,6 +43,12 @@
     }                                                      \
 })
 
+#ifndef CONFIG_HVM
+# define X86EMUL_NO_FPU
+# define X86EMUL_NO_MMX
+# define X86EMUL_NO_SIMD
+#endif
+
 #include "x86_emulate/x86_emulate.c"
 
 int x86emul_read_xcr(unsigned int reg, uint64_t *val,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -3506,6 +3506,7 @@ x86_decode(
             op_bytes = 4;
         break;
 
+#ifndef X86EMUL_NO_SIMD
     case simd_packed_int:
         switch ( vex.pfx )
         {
@@ -3571,6 +3572,7 @@ x86_decode(
     case simd_256:
         op_bytes = 32;
         break;
+#endif /* !X86EMUL_NO_SIMD */
 
     default:
         op_bytes = 0;
@@ -3725,6 +3727,7 @@ x86_emulate(
         break;
     }
 
+#ifndef X86EMUL_NO_SIMD
     /* With a memory operand, fetch the mask register in use (if any). */
     if ( ea.type == OP_MEM && evex.opmsk &&
          _get_fpu(fpu_type = X86EMUL_FPU_opmask, ctxt, ops) == X86EMUL_OKAY )
@@ -3755,6 +3758,7 @@ x86_emulate(
         put_fpu(X86EMUL_FPU_opmask, false, state, ctxt, ops);
         fpu_type = X86EMUL_FPU_none;
     }
+#endif /* !X86EMUL_NO_SIMD */
 
     /* Decode (but don't fetch) the destination operand: register or memory. */
     switch ( d & DstMask )
@@ -4400,11 +4404,13 @@ x86_emulate(
         singlestep = _regs.eflags & X86_EFLAGS_TF;
         break;
 
+#ifndef X86EMUL_NO_FPU
     case 0x9b:  /* wait/fwait */
         host_and_vcpu_must_have(fpu);
         get_fpu(X86EMUL_FPU_wait);
         emulate_fpu_insn_stub(b);
         break;
+#endif
 
     case 0x9c: /* pushf */
         if ( (_regs.eflags & X86_EFLAGS_VM) &&
@@ -4814,6 +4820,7 @@ x86_emulate(
         break;
     }
 
+#ifndef X86EMUL_NO_FPU
     case 0xd8: /* FPU 0xd8 */
         host_and_vcpu_must_have(fpu);
         get_fpu(X86EMUL_FPU_fpu);
@@ -5148,6 +5155,7 @@ x86_emulate(
             }
         }
         break;
+#endif /* !X86EMUL_NO_FPU */
 
     case 0xe0 ... 0xe2: /* loop{,z,nz} */ {
         unsigned long count = get_loop_count(&_regs, ad_bytes);
@@ -6124,6 +6132,8 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0x19) ... X86EMUL_OPC(0x0f, 0x1f): /* nop */
         break;
 
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0x0e): /* femms */
         host_and_vcpu_must_have(3dnow);
         asm volatile ( "femms" );
@@ -6144,39 +6154,71 @@ x86_emulate(
         state->simd_size = simd_other;
         goto simd_0f_imm8;
 
-#define CASE_SIMD_PACKED_INT(pfx, opc)       \
+#endif /* !X86EMUL_NO_MMX */
+
+#if !defined(X86EMUL_NO_SIMD) && !defined(X86EMUL_NO_MMX)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
     case X86EMUL_OPC(pfx, opc):              \
     case X86EMUL_OPC_66(pfx, opc)
-#define CASE_SIMD_PACKED_INT_VEX(pfx, opc)   \
+#elif !defined(X86EMUL_NO_SIMD)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
+    case X86EMUL_OPC_66(pfx, opc)
+#elif !defined(X86EMUL_NO_MMX)
+# define CASE_SIMD_PACKED_INT(pfx, opc)      \
+    case X86EMUL_OPC(pfx, opc)
+#else
+# define CASE_SIMD_PACKED_INT(pfx, opc) C##pfx##_##opc
+#endif
+
+#ifndef X86EMUL_NO_SIMD
+
+# define CASE_SIMD_PACKED_INT_VEX(pfx, opc)  \
     CASE_SIMD_PACKED_INT(pfx, opc):          \
     case X86EMUL_OPC_VEX_66(pfx, opc)
 
-#define CASE_SIMD_ALL_FP(kind, pfx, opc)     \
+# define CASE_SIMD_ALL_FP(kind, pfx, opc)    \
     CASE_SIMD_PACKED_FP(kind, pfx, opc):     \
     CASE_SIMD_SCALAR_FP(kind, pfx, opc)
-#define CASE_SIMD_PACKED_FP(kind, pfx, opc)  \
+# define CASE_SIMD_PACKED_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind(pfx, opc):        \
     case X86EMUL_OPC##kind##_66(pfx, opc)
-#define CASE_SIMD_SCALAR_FP(kind, pfx, opc)  \
+# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind##_F3(pfx, opc):   \
     case X86EMUL_OPC##kind##_F2(pfx, opc)
-#define CASE_SIMD_SINGLE_FP(kind, pfx, opc)  \
+# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) \
     case X86EMUL_OPC##kind(pfx, opc):        \
     case X86EMUL_OPC##kind##_F3(pfx, opc)
 
-#define CASE_SIMD_ALL_FP_VEX(pfx, opc)       \
+# define CASE_SIMD_ALL_FP_VEX(pfx, opc)      \
     CASE_SIMD_ALL_FP(, pfx, opc):            \
     CASE_SIMD_ALL_FP(_VEX, pfx, opc)
-#define CASE_SIMD_PACKED_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_PACKED_FP_VEX(pfx, opc)   \
     CASE_SIMD_PACKED_FP(, pfx, opc):         \
     CASE_SIMD_PACKED_FP(_VEX, pfx, opc)
-#define CASE_SIMD_SCALAR_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc)   \
     CASE_SIMD_SCALAR_FP(, pfx, opc):         \
     CASE_SIMD_SCALAR_FP(_VEX, pfx, opc)
-#define CASE_SIMD_SINGLE_FP_VEX(pfx, opc)    \
+# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc)   \
     CASE_SIMD_SINGLE_FP(, pfx, opc):         \
     CASE_SIMD_SINGLE_FP(_VEX, pfx, opc)
 
+#else
+
+# define CASE_SIMD_PACKED_INT_VEX(pfx, opc)  \
+    CASE_SIMD_PACKED_INT(pfx, opc)
+
+# define CASE_SIMD_ALL_FP(kind, pfx, opc)    C##kind##pfx##_##opc
+# define CASE_SIMD_PACKED_FP(kind, pfx, opc) Cp##kind##pfx##_##opc
+# define CASE_SIMD_SCALAR_FP(kind, pfx, opc) Cs##kind##pfx##_##opc
+# define CASE_SIMD_SINGLE_FP(kind, pfx, opc) C##kind##pfx##_##opc
+
+# define CASE_SIMD_ALL_FP_VEX(pfx, opc)    CASE_SIMD_ALL_FP(, pfx, opc)
+# define CASE_SIMD_PACKED_FP_VEX(pfx, opc) CASE_SIMD_PACKED_FP(, pfx, opc)
+# define CASE_SIMD_SCALAR_FP_VEX(pfx, opc) CASE_SIMD_SCALAR_FP(, pfx, opc)
+# define CASE_SIMD_SINGLE_FP_VEX(pfx, opc) CASE_SIMD_SINGLE_FP(, pfx, opc)
+
+#endif
+
     CASE_SIMD_SCALAR_FP(, 0x0f, 0x2b):     /* movnts{s,d} xmm,mem */
         host_and_vcpu_must_have(sse4a);
         /* fall through */
@@ -6314,6 +6356,8 @@ x86_emulate(
         insn_bytes = EVEX_PFX_BYTES + 2;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x12):       /* movlpd m64,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0x12):   /* vmovlpd m64,xmm,xmm */
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x13):   /* movlp{s,d} xmm,m64 */
@@ -6420,6 +6464,8 @@ x86_emulate(
         avx512_vlen_check(false);
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0x20): /* mov cr,reg */
     case X86EMUL_OPC(0x0f, 0x21): /* mov dr,reg */
     case X86EMUL_OPC(0x0f, 0x22): /* mov reg,cr */
@@ -6446,6 +6492,8 @@ x86_emulate(
             goto done;
         break;
 
+#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD)
+
     case X86EMUL_OPC_66(0x0f, 0x2a):       /* cvtpi2pd mm/m64,xmm */
         if ( ea.type == OP_REG )
         {
@@ -6457,6 +6505,8 @@ x86_emulate(
         op_bytes = (b & 4) && (vex.pfx & VEX_PREFIX_DOUBLE_MASK) ? 16 : 8;
         goto simd_0f_fp;
 
+#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */
+
     CASE_SIMD_SCALAR_FP_VEX(0x0f, 0x2a):   /* {,v}cvtsi2s{s,d} r/m,xmm */
         if ( vex.opcx == vex_none )
         {
@@ -6803,6 +6853,8 @@ x86_emulate(
             dst.val = src.val;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0x4a):    /* kadd{w,q} k,k,k */
         if ( !vex.w )
             host_and_vcpu_must_have(avx512dq);
@@ -6857,6 +6909,8 @@ x86_emulate(
         generate_exception_if(!vex.l || vex.w, EXC_UD);
         goto opmask_common;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x50):   /* movmskp{s,d} xmm,reg */
                                            /* vmovmskp{s,d} {x,y}mm,reg */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xd7):  /* pmovmskb {,x}mm,reg */
@@ -6940,6 +6994,8 @@ x86_emulate(
                          evex.w);
         goto avx512f_all_fp;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_PACKED_FP_VEX(0x0f, 0x5b):   /* cvt{ps,dq}2{dq,ps} xmm/mem,xmm */
                                            /* vcvt{ps,dq}2{dq,ps} {x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_F3(0x0f, 0x5b):       /* cvttps2dq xmm/mem,xmm */
@@ -6970,6 +7026,8 @@ x86_emulate(
         op_bytes = 16 << evex.lr;
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x60): /* punpcklbw {,x}mm/mem,{,x}mm */
                                           /* vpunpcklbw {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x61): /* punpcklwd {,x}mm/mem,{,x}mm */
@@ -6996,6 +7054,7 @@ x86_emulate(
                                           /* vpackusbw {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6b): /* packsswd {,x}mm/mem,{,x}mm */
                                           /* vpacksswd {x,y}mm/mem,{x,y}mm,{x,y}mm */
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_66(0x0f, 0x6c):     /* punpcklqdq xmm/m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0x6c): /* vpunpcklqdq {x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_66(0x0f, 0x6d):     /* punpckhqdq xmm/m128,xmm */
@@ -7080,6 +7139,7 @@ x86_emulate(
                                           /* vpsubd {x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_66(0x0f, 0xfb):     /* psubq xmm/m128,xmm */
     case X86EMUL_OPC_VEX_66(0x0f, 0xfb): /* vpsubq {x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif /* !X86EMUL_NO_SIMD */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfc): /* paddb {,x}mm/mem,{,x}mm */
                                           /* vpaddb {x,y}mm/mem,{x,y}mm,{x,y}mm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfd): /* paddw {,x}mm/mem,{,x}mm */
@@ -7087,6 +7147,7 @@ x86_emulate(
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xfe): /* paddd {,x}mm/mem,{,x}mm */
                                           /* vpaddd {x,y}mm/mem,{x,y}mm,{x,y}mm */
     simd_0f_int:
+#ifndef X86EMUL_NO_SIMD
         if ( vex.opcx != vex_none )
         {
     case X86EMUL_OPC_VEX_66(0x0f38, 0x00): /* vpshufb {x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7128,11 +7189,14 @@ x86_emulate(
         }
         if ( vex.pfx )
             goto simd_0f_sse2;
+#endif /* !X86EMUL_NO_SIMD */
     simd_0f_mmx:
         host_and_vcpu_must_have(mmx);
         get_fpu(X86EMUL_FPU_mmx);
         goto simd_0f_common;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xf6): /* vpsadbw [xyz]mm/mem,[xyz]mm,[xyz]mm */
         generate_exception_if(evex.opmsk, EXC_UD);
         /* fall through */
@@ -7226,6 +7290,8 @@ x86_emulate(
         generate_exception_if(!evex.w, EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x6e): /* mov{d,q} r/m,{,x}mm */
                                           /* vmov{d,q} r/m,xmm */
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x7e): /* mov{d,q} {,x}mm,r/m */
@@ -7267,6 +7333,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x6e): /* vmov{d,q} r/m,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f, 0x7e): /* vmov{d,q} xmm,r/m */
         generate_exception_if((evex.lr || evex.opmsk || evex.brs ||
@@ -7339,11 +7407,15 @@ x86_emulate(
         d |= TwoOp;
         /* fall through */
     case X86EMUL_OPC_66(0x0f, 0xd6):     /* movq xmm,xmm/m64 */
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
     case X86EMUL_OPC(0x0f, 0x6f):        /* movq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0x7f):        /* movq mm,mm/m64 */
+#endif
         op_bytes = 8;
         goto simd_0f_int;
 
+#ifndef X86EMUL_NO_SIMD
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0x70):/* pshuf{w,d} $imm8,{,x}mm/mem,{,x}mm */
                                          /* vpshufd $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_F3(0x0f, 0x70):     /* pshufhw $imm8,xmm/m128,xmm */
@@ -7352,12 +7424,15 @@ x86_emulate(
     case X86EMUL_OPC_VEX_F2(0x0f, 0x70): /* vpshuflw $imm8,{x,y}mm/mem,{x,y}mm */
         d = (d & ~SrcMask) | SrcMem | TwoOp;
         op_bytes = vex.pfx ? 16 << vex.l : 8;
+#endif
     simd_0f_int_imm8:
         if ( vex.opcx != vex_none )
         {
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0e): /* vpblendw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0f): /* vpalignr $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x42): /* vmpsadbw $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif
             if ( vex.l )
             {
     simd_0f_imm8_avx2:
@@ -7365,6 +7440,7 @@ x86_emulate(
             }
             else
             {
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x08): /* vroundps $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x09): /* vroundpd $imm8,{x,y}mm/mem,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0a): /* vroundss $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
@@ -7372,6 +7448,7 @@ x86_emulate(
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0c): /* vblendps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x0d): /* vblendpd $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x40): /* vdpps $imm8,{x,y}mm/mem,{x,y}mm,{x,y}mm */
+#endif
     simd_0f_imm8_avx:
                 host_and_vcpu_must_have(avx);
             }
@@ -7405,6 +7482,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x70): /* vpshufd $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F3(0x0f, 0x70): /* vpshufhw $imm8,[xyz]mm/mem,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_F2(0x0f, 0x70): /* vpshuflw $imm8,[xyz]mm/mem,[xyz]mm{k} */
@@ -7463,6 +7542,9 @@ x86_emulate(
         opc[1] = modrm;
         opc[2] = imm1;
         insn_bytes = PFX_BYTES + 3;
+
+#endif /* !X86EMUL_NO_SIMD */
+
     simd_0f_reg_only:
         opc[insn_bytes - PFX_BYTES] = 0xc3;
 
@@ -7473,6 +7555,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0x71): /* Grp12 */
         switch ( modrm_reg & 7 )
         {
@@ -7504,6 +7588,9 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0x73):        /* Grp14 */
         switch ( modrm_reg & 7 )
         {
@@ -7513,6 +7600,9 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_MMX */
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x73):
     case X86EMUL_OPC_VEX_66(0x0f, 0x73):
         switch ( modrm_reg & 7 )
@@ -7543,7 +7633,12 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+
+#ifndef X86EMUL_NO_MMX
     case X86EMUL_OPC(0x0f, 0x77):        /* emms */
+#endif
+#ifndef X86EMUL_NO_SIMD
     case X86EMUL_OPC_VEX(0x0f, 0x77):    /* vzero{all,upper} */
         if ( vex.opcx != vex_none )
         {
@@ -7589,6 +7684,7 @@ x86_emulate(
 #endif
         }
         else
+#endif /* !X86EMUL_NO_SIMD */
         {
             host_and_vcpu_must_have(mmx);
             get_fpu(X86EMUL_FPU_mmx);
@@ -7602,6 +7698,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 1;
         goto simd_0f_reg_only;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_66(0x0f, 0x78):     /* Grp17 */
         switch ( modrm_reg & 7 )
         {
@@ -7699,6 +7797,8 @@ x86_emulate(
         op_bytes = 8;
         goto simd_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0x80) ... X86EMUL_OPC(0x0f, 0x8f): /* jcc (near) */
         if ( test_cc(b, _regs.eflags) )
             jmp_rel((int32_t)src.val);
@@ -7709,6 +7809,8 @@ x86_emulate(
         dst.val = test_cc(b, _regs.eflags);
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0x91):    /* kmov{w,q} k,mem */
     case X86EMUL_OPC_VEX_66(0x0f, 0x91): /* kmov{b,d} k,mem */
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
@@ -7857,6 +7959,8 @@ x86_emulate(
         dst.type = OP_NONE;
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xa2): /* cpuid */
         msr_val = 0;
         fail_if(ops->cpuid == NULL);
@@ -7953,6 +8057,7 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
+#ifndef X86EMUL_NO_SIMD
         case 2: /* ldmxcsr */
             generate_exception_if(vex.pfx, EXC_UD);
             vcpu_must_have(sse);
@@ -7971,6 +8076,7 @@ x86_emulate(
             get_fpu(vex.opcx ? X86EMUL_FPU_ymm : X86EMUL_FPU_xmm);
             asm volatile ( "stmxcsr %0" : "=m" (dst.val) );
             break;
+#endif /* !X86EMUL_NO_SIMD */
 
         case 5: /* lfence */
             fail_if(modrm_mod != 3);
@@ -8019,6 +8125,8 @@ x86_emulate(
         }
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
@@ -8033,6 +8141,8 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_F3(0x0f, 0xae): /* Grp15 */
         fail_if(modrm_mod != 3);
         generate_exception_if((modrm_reg & 4) || !mode_64bit(), EXC_UD);
@@ -8272,6 +8382,8 @@ x86_emulate(
         }
         goto simd_0f_imm8_avx;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_ALL_FP(_EVEX, 0x0f, 0xc2): /* vcmp{p,s}{s,d} $imm8,[xyz]mm/mem,[xyz]mm,k{k} */
         generate_exception_if((evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK) ||
                                (ea.type != OP_REG && evex.brs &&
@@ -8298,6 +8410,8 @@ x86_emulate(
         insn_bytes = EVEX_PFX_BYTES + 3;
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xc3): /* movnti */
         /* Ignore the non-temporal hint for now. */
         vcpu_must_have(sse2);
@@ -8312,6 +8426,8 @@ x86_emulate(
         ea.type = OP_MEM;
         goto simd_0f_int_imm8;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xc4):   /* vpinsrw $imm8,r32/m16,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x20): /* vpinsrb $imm8,r32/m8,xmm,xmm */
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x22): /* vpinsr{d,q} $imm8,r/m,xmm,xmm */
@@ -8329,6 +8445,8 @@ x86_emulate(
         state->simd_size = simd_other;
         goto avx512f_imm8_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xc5):  /* pextrw $imm8,{,x}mm,reg */
                                            /* vpextrw $imm8,xmm,reg */
         generate_exception_if(vex.l, EXC_UD);
@@ -8344,6 +8462,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         goto simd_0f_to_gpr;
 
+#ifndef X86EMUL_NO_SIMD
+
     CASE_SIMD_PACKED_FP(_EVEX, 0x0f, 0xc6): /* vshufp{s,d} $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w != (evex.pfx & VEX_PREFIX_DOUBLE_MASK),
                               EXC_UD);
@@ -8358,6 +8478,8 @@ x86_emulate(
         avx512_vlen_check(false);
         goto simd_imm8_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f, 0xc7): /* Grp9 */
     {
         union {
@@ -8548,6 +8670,8 @@ x86_emulate(
         }
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd2): /* vpsrld xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xd3): /* vpsrlq xmm/m128,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe2): /* vpsra{d,q} xmm/m128,[xyz]mm,[xyz]mm{k} */
@@ -8569,12 +8693,18 @@ x86_emulate(
         generate_exception_if(evex.w != (b & 1), EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0xd4):        /* paddq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0xf4):        /* pmuludq mm/m64,mm */
     case X86EMUL_OPC(0x0f, 0xfb):        /* psubq mm/m64,mm */
         vcpu_must_have(sse2);
         goto simd_0f_mmx;
 
+#endif /* !X86EMUL_NO_MMX */
+#if !defined(X86EMUL_NO_MMX) && !defined(X86EMUL_NO_SIMD)
+
     case X86EMUL_OPC_F3(0x0f, 0xd6):     /* movq2dq mm,xmm */
     case X86EMUL_OPC_F2(0x0f, 0xd6):     /* movdq2q xmm,mm */
         generate_exception_if(ea.type != OP_REG, EXC_UD);
@@ -8582,6 +8712,9 @@ x86_emulate(
         host_and_vcpu_must_have(mmx);
         goto simd_0f_int;
 
+#endif /* !X86EMUL_NO_MMX && !X86EMUL_NO_SIMD */
+#ifndef X86EMUL_NO_MMX
+
     case X86EMUL_OPC(0x0f, 0xe7):        /* movntq mm,m64 */
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
         sfence = true;
@@ -8597,6 +8730,9 @@ x86_emulate(
         vcpu_must_have(mmxext);
         goto simd_0f_mmx;
 
+#endif /* !X86EMUL_NO_MMX */
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f, 0xda): /* vpminub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xde): /* vpmaxub [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
     case X86EMUL_OPC_EVEX_66(0x0f, 0xe4): /* vpmulhuw [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
@@ -8617,6 +8753,8 @@ x86_emulate(
         op_bytes = 8 << (!!(vex.pfx & VEX_PREFIX_DOUBLE_MASK) + vex.l);
         goto simd_0f_cvt;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT_VEX(0x0f, 0xf7): /* {,v}maskmov{q,dqu} {,x}mm,{,x}mm */
         generate_exception_if(ea.type != OP_REG, EXC_UD);
         if ( vex.opcx != vex_none )
@@ -8720,6 +8858,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 3;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX_66(0x0f38, 0x19): /* vbroadcastsd xmm/m64,ymm */
     case X86EMUL_OPC_VEX_66(0x0f38, 0x1a): /* vbroadcastf128 m128,ymm */
         generate_exception_if(!vex.l, EXC_UD);
@@ -9302,6 +9442,8 @@ x86_emulate(
         ASSERT(!state->simd_size);
         break;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_66(0x0f38, 0x82): /* invpcid reg,m128 */
         vcpu_must_have(invpcid);
         generate_exception_if(ea.type != OP_MEM, EXC_UD);
@@ -9344,6 +9486,8 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f38, 0x83): /* vpmultishiftqb [xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(!evex.w, EXC_UD);
         host_and_vcpu_must_have(avx512_vbmi);
@@ -9907,6 +10051,8 @@ x86_emulate(
         generate_exception_if(evex.brs || evex.opmsk, EXC_UD);
         goto avx512f_no_sae;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC(0x0f38, 0xf0): /* movbe m,r */
     case X86EMUL_OPC(0x0f38, 0xf1): /* movbe r,m */
         vcpu_must_have(movbe);
@@ -10072,6 +10218,8 @@ x86_emulate(
                             : "0" ((uint32_t)src.val), "rm" (_regs.edx) );
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x01): /* vpermpd $imm8,ymm/m256,ymm */
         generate_exception_if(!vex.l || !vex.w, EXC_UD);
@@ -10132,6 +10280,8 @@ x86_emulate(
         avx512_vlen_check(b & 2);
         goto simd_imm8_zmm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     CASE_SIMD_PACKED_INT(0x0f3a, 0x0f): /* palignr $imm8,{,x}mm/mem,{,x}mm */
         host_and_vcpu_must_have(ssse3);
         if ( vex.pfx )
@@ -10159,6 +10309,8 @@ x86_emulate(
         insn_bytes = PFX_BYTES + 4;
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_EVEX_66(0x0f3a, 0x42): /* vdbpsadbw $imm8,[xyz]mm/mem,[xyz]mm,[xyz]mm{k} */
         generate_exception_if(evex.w, EXC_UD);
         /* fall through */
@@ -10657,6 +10809,8 @@ x86_emulate(
         generate_exception_if(vex.l, EXC_UD);
         goto simd_0f_imm8_avx;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_VEX_F2(0x0f3a, 0xf0): /* rorx imm,r/m,r */
         vcpu_must_have(bmi2);
         generate_exception_if(vex.l || vex.reg != 0xf, EXC_UD);
@@ -10671,6 +10825,8 @@ x86_emulate(
             asm ( "rorl %b1,%k0" : "=g" (dst.val) : "c" (imm1), "0" (src.val) );
         break;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_XOP(08, 0x85): /* vpmacssww xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x86): /* vpmacsswd xmm,xmm/m128,xmm,xmm */
     case X86EMUL_OPC_XOP(08, 0x87): /* vpmacssdql xmm,xmm/m128,xmm,xmm */
@@ -10706,6 +10862,8 @@ x86_emulate(
         host_and_vcpu_must_have(xop);
         goto simd_0f_imm8_ymm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_XOP(09, 0x01): /* XOP Grp1 */
         switch ( modrm_reg & 7 )
         {
@@ -10765,6 +10923,8 @@ x86_emulate(
         }
         goto unrecognized_insn;
 
+#ifndef X86EMUL_NO_SIMD
+
     case X86EMUL_OPC_XOP(09, 0x82): /* vfrczss xmm/m128,xmm */
     case X86EMUL_OPC_XOP(09, 0x83): /* vfrczsd xmm/m128,xmm */
         generate_exception_if(vex.l, EXC_UD);
@@ -10820,6 +10980,8 @@ x86_emulate(
         host_and_vcpu_must_have(xop);
         goto simd_0f_ymm;
 
+#endif /* !X86EMUL_NO_SIMD */
+
     case X86EMUL_OPC_XOP(0a, 0x10): /* bextr imm,r/m,r */
     {
         uint8_t *buf = get_stub(stub);



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:28:14 2020
References: <adfececa3e29a46f5347459a629aa534d61625aa.1590165055.git.tamas.lengyel@intel.com>
 <338c26dc-a78a-4519-11f1-1b89bd1cf4db@suse.com>
 <CABfawh=WPyW383QAe_JwRC2q8W1zHXcYntjYF-Vog34baQcrfw@mail.gmail.com>
 <e5a2899c-f375-55e8-fc6c-940b70929ae6@suse.com>
 <CABfawhnB4WY6U-XcigT+X=n4e8qdDMFokMWR1Sc_s-oMeyZRWg@mail.gmail.com>
 <78714288-89b0-6a53-4f74-f2306ae6e749@suse.com>
 <CABfawhkMzpekYLqXqZZZ5Mxum-eHqMAWvAguaRakFasJKtPfFQ@mail.gmail.com>
In-Reply-To: <CABfawhkMzpekYLqXqZZZ5Mxum-eHqMAWvAguaRakFasJKtPfFQ@mail.gmail.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Mon, 25 May 2020 08:27:30 -0600
Message-ID: <CABfawhmxMA0Tgwez_cGrTDoMtYVe5Z+sOng0QqEVMb_M6M-mQQ@mail.gmail.com>
Subject: Re: [PATCH v2 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
To: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>, Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Tamas K Lengyel <tamas.lengyel@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Jun Nakajima <jun.nakajima@intel.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>

On Mon, May 25, 2020 at 8:14 AM Tamas K Lengyel <tamas@tklengyel.com> wrote:
>
> On Mon, May 25, 2020 at 8:06 AM Jan Beulich <jbeulich@suse.com> wrote:
> >
> > On 25.05.2020 15:46, Tamas K Lengyel wrote:
> > > On Mon, May 25, 2020 at 7:06 AM Jan Beulich <jbeulich@suse.com> wrote:
> > >>
> > >> On 25.05.2020 14:18, Tamas K Lengyel wrote:
> > >>> On Mon, May 25, 2020 at 12:06 AM Jan Beulich <jbeulich@suse.com> wrote:
> > >>>>
> > >>>> On 22.05.2020 18:33, Tamas K Lengyel wrote:
> > >>>>> When running shallow forks without device models it may be undesirable for Xen
> > >>>>> to inject interrupts. With Windows forks we have observed the kernel going into
> > >>>>> infinite loops when trying to process such interrupts, likely because it attempts
> > >>>>> to interact with devices that are not responding without QEMU running. By
> > >>>>> disabling interrupt injection the fuzzer can exercise the target code without
> > >>>>> interference.
> > >>>>>
> > >>>>> Forks & memory sharing are only available on Intel CPUs so this only applies
> > >>>>> to vmx.
> > >>>>
> > >>>> Looking at e.g. mem_sharing_control() I can't seem to be able to confirm
> > >>>> this. Would you mind pointing me at where this restriction is coming from?
> > >>>
> > >>> Both mem_access and mem_sharing are only implemented for EPT:
> > >>> http://xenbits.xen.org/hg/xen-unstable.hg/file/5eadf9363c25/xen/arch/x86/mm/p2m-ept.c#l126.
> > >>
> > >> p2m-pt.c:p2m_type_to_flags() has a similar case label.
> > >
> > > It doesn't do anything though, does it? For mem_sharing to work you
> > > actively have to restrict the memory permissions on the shared entries
> > > to be read/execute only. That's only done for EPT.
> >
> > Does it not? It seems to me that it does, seeing that the case sits
> > together with the p2m_ram_ro and p2m_ram_logdirty ones:
> >
> >     case p2m_ram_ro:
> >     case p2m_ram_logdirty:
> >     case p2m_ram_shared:
> >         return flags | P2M_BASE_FLAGS;
> >
> > >> And I can't
> > >> spot a respective restriction in mem_sharing_memop(), i.e. it looks
> > >> to me as if enabling mem-sharing on NPT (to satisfy hap_enabled()
> > >> in mem_sharing_control()) would be possible.
> > >
> > > If you are looking for an explicit gate like that, then you are right,
> > > there isn't one. You can ask the original authors of this subsystem
> > > why that is. If you feel like adding an extra gate, I wouldn't object.
> >
> > Well, the question here isn't about gating - that's an independent
> > bug if it's indeed missing. The question is whether SVM code also
> > needs touching, as was previously requested. You tried to address
> > this by stating an Intel-only limitation, which I couldn't find
> > proof for (so far).
>
> Well, as far as I'm concerned VM forking is for Intel hardware only.
> If mem_sharing seems to work for non-Intel hw - I was unaware of that
> - then I'll just add an extra check for the VM fork hypercall that
> gates it. It may technically be possible to make it available for
> other hw as well, but at this time that's completely out of scope.

Actually, I'm going to just add that gate for mem_sharing entirely.
Even if it at some point worked on other architectures (doubtful), at
this time it's a use case that's completely abandoned and forgotten
and, as far as I'm concerned, unmaintained, with no plans from my side
to ever maintain it.

Tamas
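
[Editorial note: the disagreement above centers on p2m_type_to_flags()
mapping p2m_ram_shared to read/execute-only page flags. The following is a
minimal, hypothetical C sketch of that mapping — the flag values and the
reduced type set are illustrative stand-ins, not Xen's actual definitions.]

```c
#include <assert.h>

/*
 * Hypothetical toy model of the p2m type -> page-flag mapping discussed
 * above: shared entries get the base (read/execute) permissions only,
 * with the write bit deliberately left clear, so any guest write to a
 * shared page faults and can trigger an unshare.
 */
#define _PAGE_PRESENT  0x1u
#define _PAGE_RW       0x2u
#define P2M_BASE_FLAGS _PAGE_PRESENT   /* present, no write permission */

enum p2m_type { p2m_ram_rw, p2m_ram_ro, p2m_ram_logdirty, p2m_ram_shared };

static unsigned int p2m_type_to_flags(enum p2m_type t)
{
    switch ( t )
    {
    case p2m_ram_rw:
        return P2M_BASE_FLAGS | _PAGE_RW;
    case p2m_ram_ro:
    case p2m_ram_logdirty:
    case p2m_ram_shared:   /* shared pages: read/execute only */
        return P2M_BASE_FLAGS;
    }
    return 0;
}
```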


From xen-devel-bounces@lists.xenproject.org Mon May 25 14:28:34 2020
Subject: [PATCH v10 5/9] x86emul: support MOVDIR{I,64B} insns
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <ae7ff12a-edf9-45b5-b7c9-6c5b5d0739c0@suse.com>
Date: Mon, 25 May 2020 16:28:28 +0200
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>

Introduce a new blk() hook, paralleling the rmw() one in a certain way
but intended for larger data sizes; hence its HVM intermediate handling
function doesn't fall back to splitting the operation if the requested
virtual address can't be mapped.

Note that SDM revision 071 doesn't specify exception behavior for
ModRM.mod == 0b11; assuming #UD here.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
v10: Re-base over changes earlier in the series as well as later parts
     already committed.
v9: Fold in "x86/HVM: make hvmemul_blk() capable of handling r/o
    operations". Also adjust x86_insn_is_mem_write().
v7: Add blk_NONE. Move harness's setting of .blk. Correct indentation.
    Re-base.
v6: Fold MOVDIRI and MOVDIR64B changes again. Use blk() for both. All
    tags dropped.
v5: Introduce/use ->blk() hook. Correct asm() operands.
v4: Split MOVDIRI and MOVDIR64B and move this one ahead. Re-base.
v3: Update description.
---
(SDE: -tnt)
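
[Editorial note: the blk() hook described above can be sketched in
miniature as follows. This is a hypothetical toy model, not the
emulator's actual interface: one callback receives the whole mapped
range and performs the full-width store, with memcpy standing in for
the direct-store instruction and the 64-byte alignment check mirroring
MOVDIR64B's #GP condition. The blk_op enum and error codes are
illustrative assumptions.]

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>
#include <string.h>

enum blk_op { BLK_NONE, BLK_MOVDIR };

/*
 * Toy blk()-style handler: unlike the write()/rmw() paths, the whole
 * access is handled by a single call and is never split into smaller
 * chunks - if the destination can't be handled as one unit, we fail
 * instead of falling back.
 */
static int emul_blk(void *ptr, const void *data, unsigned int bytes,
                    enum blk_op op)
{
    if ( op != BLK_MOVDIR )
        return -EINVAL;

    /* MOVDIR64B requires a 64-byte-aligned destination (#GP otherwise). */
    if ( bytes == 64 && ((uintptr_t)ptr & 63) )
        return -EFAULT;

    memcpy(ptr, data, bytes);   /* stands in for the direct store */
    return 0;
}
```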

--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -510,6 +510,8 @@ static const struct {
     /*{ 0xf6 }, { 2, 2 }, F, W }, wrss */
     { { 0xf6 }, { 2, 2 }, T, R, pfx_66 }, /* adcx */
     { { 0xf6 }, { 2, 2 }, T, R, pfx_f3 }, /* adox */
+    { { 0xf8 }, { 2, 2 }, F, W, pfx_66 }, /* movdir64b */
+    { { 0xf9 }, { 2, 2 }, F, W }, /* movdiri */
 };
 #undef CND
 #undef REG
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -652,6 +652,18 @@ static int cmpxchg(
     return X86EMUL_OKAY;
 }
 
+static int blk(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    return x86_emul_blk((void *)offset, p_data, bytes, eflags, state, ctxt);
+}
+
 static int read_segment(
     enum x86_segment seg,
     struct segment_register *reg,
@@ -721,6 +733,7 @@ static struct x86_emulate_ops emulops =
     .insn_fetch = fetch,
     .write      = write,
     .cmpxchg    = cmpxchg,
+    .blk        = blk,
     .read_segment = read_segment,
     .cpuid      = emul_test_cpuid,
     .read_cr    = emul_test_read_cr,
@@ -2339,6 +2352,50 @@ int main(int argc, char **argv)
         goto fail;
     printf("okay\n");
 
+    printf("%-40s", "Testing movdiri %edx,(%ecx)...");
+    if ( stack_exec && cpu_has_movdiri )
+    {
+        instr[0] = 0x0f; instr[1] = 0x38; instr[2] = 0xf9; instr[3] = 0x11;
+
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)memset(res, -1, 16);
+        regs.edx = 0x44332211;
+
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[4]) ||
+             res[0] != 0x44332211 || ~res[1] )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing movdir64b 144(%edx),%ecx...");
+    if ( stack_exec && cpu_has_movdir64b )
+    {
+        instr[0] = 0x66; instr[1] = 0x0f; instr[2] = 0x38; instr[3] = 0xf8;
+        instr[4] = 0x8a; instr[5] = 0x90; instr[8] = instr[7] = instr[6] = 0;
+
+        regs.eip = (unsigned long)&instr[0];
+        for ( i = 0; i < 64; ++i )
+            res[i] = i - 20;
+        regs.edx = (unsigned long)res;
+        regs.ecx = (unsigned long)(res + 16);
+
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             (regs.eip != (unsigned long)&instr[9]) ||
+             res[15] != -5 || res[32] != 12 )
+            goto fail;
+        for ( i = 16; i < 32; ++i )
+            if ( res[i] != i )
+                goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
     if ( stack_exec && cpu_has_mmx )
     {
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -160,6 +160,8 @@ static inline bool xcr0_mask(uint64_t ma
 #define cpu_has_avx512_vnni (cp.feat.avx512_vnni && xcr0_mask(0xe6))
 #define cpu_has_avx512_bitalg (cp.feat.avx512_bitalg && xcr0_mask(0xe6))
 #define cpu_has_avx512_vpopcntdq (cp.feat.avx512_vpopcntdq && xcr0_mask(0xe6))
+#define cpu_has_movdiri    cp.feat.movdiri
+#define cpu_has_movdir64b  cp.feat.movdir64b
 #define cpu_has_avx512_4vnniw (cp.feat.avx512_4vnniw && xcr0_mask(0xe6))
 #define cpu_has_avx512_4fmaps (cp.feat.avx512_4fmaps && xcr0_mask(0xe6))
 #define cpu_has_serialize  cp.feat.serialize
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -47,6 +47,7 @@ $(call as-option-add,CFLAGS,CC,"rdseed %
 $(call as-option-add,CFLAGS,CC,"clwb (%rax)",-DHAVE_AS_CLWB)
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
+$(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR)
 
 # GAS's idea of true is -1.  Clang's idea is 1
 $(call as-option-add,CFLAGS,CC,\
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -1441,6 +1441,47 @@ static int hvmemul_rmw(
     return rc;
 }
 
+static int hvmemul_blk(
+    enum x86_segment seg,
+    unsigned long offset,
+    void *p_data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    struct hvm_emulate_ctxt *hvmemul_ctxt =
+        container_of(ctxt, struct hvm_emulate_ctxt, ctxt);
+    unsigned long addr;
+    uint32_t pfec = PFEC_page_present;
+    int rc;
+    void *mapping = NULL;
+
+    rc = hvmemul_virtual_to_linear(
+        seg, offset, bytes, NULL, hvm_access_write, hvmemul_ctxt, &addr);
+    if ( rc != X86EMUL_OKAY || !bytes )
+        return rc;
+
+    if ( x86_insn_is_mem_write(state, ctxt) )
+        pfec |= PFEC_write_access;
+
+    if ( is_x86_system_segment(seg) )
+        pfec |= PFEC_implicit;
+    else if ( hvmemul_ctxt->seg_reg[x86_seg_ss].dpl == 3 )
+        pfec |= PFEC_user_mode;
+
+    mapping = hvmemul_map_linear_addr(addr, bytes, pfec, hvmemul_ctxt);
+    if ( IS_ERR(mapping) )
+        return ~PTR_ERR(mapping);
+    if ( !mapping )
+        return X86EMUL_UNHANDLEABLE;
+
+    rc = x86_emul_blk(mapping, p_data, bytes, eflags, state, ctxt);
+    hvmemul_unmap_linear_addr(mapping, addr, bytes, hvmemul_ctxt);
+
+    return rc;
+}
+
 static int hvmemul_write_discard(
     enum x86_segment seg,
     unsigned long offset,
@@ -2518,6 +2559,7 @@ static const struct x86_emulate_ops hvm_
     .write         = hvmemul_write,
     .rmw           = hvmemul_rmw,
     .cmpxchg       = hvmemul_cmpxchg,
+    .blk           = hvmemul_blk,
     .validate      = hvmemul_validate,
     .rep_ins       = hvmemul_rep_ins,
     .rep_outs      = hvmemul_rep_outs,
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -548,6 +548,8 @@ static const struct ext0f38_table {
     [0xf1] = { .to_mem = 1, .two_op = 1 },
     [0xf2 ... 0xf3] = {},
     [0xf5 ... 0xf7] = {},
+    [0xf8] = { .simd_size = simd_other },
+    [0xf9] = { .to_mem = 1, .two_op = 1 /* Mov */ },
 };
 
 /* Shift values between src and dst sizes of pmov{s,z}x{b,w,d}{w,d,q}. */
@@ -851,6 +853,10 @@ struct x86_emulate_state {
         rmw_xchg,
         rmw_xor,
     } rmw;
+    enum {
+        blk_NONE,
+        blk_movdir,
+    } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
     uint8_t sib_index, sib_scale;
     uint8_t rex_prefix;
@@ -1915,6 +1921,8 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_avx512_vpopcntdq() (ctxt->cpuid->feat.avx512_vpopcntdq)
 #define vcpu_has_tsxldtrk()    (ctxt->cpuid->feat.tsxldtrk)
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
+#define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
+#define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
@@ -2736,10 +2744,12 @@ x86_decode_0f38(
     {
     case 0x00 ... 0xef:
     case 0xf2 ... 0xf5:
-    case 0xf7 ... 0xff:
+    case 0xf7 ... 0xf8:
+    case 0xfa ... 0xff:
         op_bytes = 0;
         /* fall through */
     case 0xf6: /* adcx / adox */
+    case 0xf9: /* movdiri */
         ctxt->opcode |= MASK_INSR(vex.pfx, X86EMUL_OPC_PFX_MASK);
         break;
 
@@ -10218,6 +10228,34 @@ x86_emulate(
                             : "0" ((uint32_t)src.val), "rm" (_regs.edx) );
         break;
 
+    case X86EMUL_OPC_66(0x0f38, 0xf8): /* movdir64b r,m512 */
+        host_and_vcpu_must_have(movdir64b);
+        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        src.val = truncate_ea(*dst.reg);
+        generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
+                              EXC_GP, 0);
+        fail_if(!ops->blk);
+        state->blk = blk_movdir;
+        BUILD_BUG_ON(sizeof(*mmvalp) < 64);
+        if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64,
+                             ctxt)) != X86EMUL_OKAY ||
+             (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags,
+                            state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        state->simd_size = simd_none;
+        break;
+
+    case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */
+        host_and_vcpu_must_have(movdiri);
+        generate_exception_if(dst.type != OP_MEM, EXC_UD);
+        fail_if(!ops->blk);
+        state->blk = blk_movdir;
+        if ( (rc = ops->blk(dst.mem.seg, dst.mem.off, &src.val, op_bytes,
+                            &_regs.eflags, state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        dst.type = OP_NONE;
+        break;
+
 #ifndef X86EMUL_NO_SIMD
 
     case X86EMUL_OPC_VEX_66(0x0f3a, 0x00): /* vpermq $imm8,ymm/m256,ymm */
@@ -11477,6 +11515,77 @@ int x86_emul_rmw(
     return X86EMUL_OKAY;
 }
 
+int x86_emul_blk(
+    void *ptr,
+    void *data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt)
+{
+    switch ( state->blk )
+    {
+        /*
+         * Throughout this switch(), memory clobbers are used to compensate
+         * that other operands may not properly express the (full) memory
+         * ranges covered.
+         */
+    case blk_movdir:
+        switch ( bytes )
+        {
+#ifdef __x86_64__
+        case sizeof(uint32_t):
+# ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(uint32_t *)data), "r" (ptr) : "memory" );
+# else
+            /* movdiri %esi, (%rdi) */
+            asm ( ".byte 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(uint32_t *)data), "D" (ptr) : "memory" );
+# endif
+            break;
+#endif
+
+        case sizeof(unsigned long):
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdiri %0, (%1)"
+                  :: "r" (*(unsigned long *)data), "r" (ptr) : "memory" );
+#else
+            /* movdiri %rsi, (%rdi) */
+            asm ( ".byte 0x48, 0x0f, 0x38, 0xf9, 0x37"
+                  :: "S" (*(unsigned long *)data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        case 64:
+            if ( ((unsigned long)ptr & 0x3f) )
+            {
+                ASSERT_UNREACHABLE();
+                return X86EMUL_UNHANDLEABLE;
+            }
+#ifdef HAVE_AS_MOVDIR
+            asm ( "movdir64b (%0), %1" :: "r" (data), "r" (ptr) : "memory" );
+#else
+            /* movdir64b (%rsi), %rdi */
+            asm ( ".byte 0x66, 0x0f, 0x38, 0xf8, 0x3e"
+                  :: "S" (data), "D" (ptr) : "memory" );
+#endif
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        break;
+
+    default:
+        ASSERT_UNREACHABLE();
+        return X86EMUL_UNHANDLEABLE;
+    }
+
+    return X86EMUL_OKAY;
+}
+
 static void __init __maybe_unused build_assertions(void)
 {
     /* Check the values against SReg3 encoding in opcode/ModRM bytes. */
@@ -11759,6 +11868,9 @@ x86_insn_is_mem_write(const struct x86_e
         {
         case 0x63:                         /* ARPL */
             return !mode_64bit();
+
+        case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */
+            return true;
         }
 
         return false;
--- a/xen/arch/x86/x86_emulate/x86_emulate.h
+++ b/xen/arch/x86/x86_emulate/x86_emulate.h
@@ -310,6 +310,22 @@ struct x86_emulate_ops
         struct x86_emulate_ctxt *ctxt);
 
     /*
+     * blk: Emulate a large (block) memory access.
+     * @p_data: [IN/OUT] (optional) Pointer to source/destination buffer.
+     * @eflags: [IN/OUT] Pointer to EFLAGS to be updated according to
+     *                   instruction effects.
+     * @state:  [IN/OUT] Pointer to (opaque) emulator state.
+     */
+    int (*blk)(
+        enum x86_segment seg,
+        unsigned long offset,
+        void *p_data,
+        unsigned int bytes,
+        uint32_t *eflags,
+        struct x86_emulate_state *state,
+        struct x86_emulate_ctxt *ctxt);
+
+    /*
      * validate: Post-decode, pre-emulate hook to allow caller controlled
      * filtering.
      */
@@ -793,6 +809,14 @@ x86_emul_rmw(
     unsigned int bytes,
     uint32_t *eflags,
     struct x86_emulate_state *state,
+    struct x86_emulate_ctxt *ctxt);
+int
+x86_emul_blk(
+    void *ptr,
+    void *data,
+    unsigned int bytes,
+    uint32_t *eflags,
+    struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt);
 
 static inline void x86_emul_hw_exception(
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -118,6 +118,8 @@
 #define cpu_has_avx512_bitalg   boot_cpu_has(X86_FEATURE_AVX512_BITALG)
 #define cpu_has_avx512_vpopcntdq boot_cpu_has(X86_FEATURE_AVX512_VPOPCNTDQ)
 #define cpu_has_rdpid           boot_cpu_has(X86_FEATURE_RDPID)
+#define cpu_has_movdiri         boot_cpu_has(X86_FEATURE_MOVDIRI)
+#define cpu_has_movdir64b       boot_cpu_has(X86_FEATURE_MOVDIR64B)
 
 /* CPUID level 0x80000007.edx */
 #define cpu_has_itsc            boot_cpu_has(X86_FEATURE_ITSC)
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -241,6 +241,8 @@ XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14
 XEN_CPUFEATURE(TSXLDTRK,      6*32+16) /*a  TSX load tracking suspend/resume insns */
 XEN_CPUFEATURE(RDPID,         6*32+22) /*A  RDPID instruction */
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
+XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
+XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(ITSC,          7*32+ 8) /*   Invariant TSC */



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:29:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:29:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE67-0008K9-MV; Mon, 25 May 2020 14:28:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE66-0008Jp-Ja
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:28:58 +0000
X-Inumbo-ID: 10e9585d-9e94-11ea-aee4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10e9585d-9e94-11ea-aee4-12813bfff9fa;
 Mon, 25 May 2020 14:28:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 14A0DAC9F;
 Mon, 25 May 2020 14:28:59 +0000 (UTC)
Subject: [PATCH v10 6/9] x86emul: support ENQCMD insns
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <ec0c3d44-b6dc-0d75-870e-c1b6fb1ce834@suse.com>
Date: Mon, 25 May 2020 16:28:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Note that the ISA extensions document revision 038 doesn't specify
exception behavior for ModRM.mod == 0b11; assuming #UD here.

No tests are being added to the harness - this would be quite hard, as
we can't just issue the insns against RAM. Their similarity with
MOVDIR64B should make the test case there good enough to cover any
fundamental flaws.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
TBD: This doesn't (can't) consult PASID translation tables yet, as we
     have no VMX code for this so far. I guess for this we will want to
     replace the direct ->read_msr(MSR_PASID, ...) with a new
     ->read_pasid() hook.
---
v10: Re-base over changes earlier in the series as well as later parts
     committed already.
v9: Consistently use named asm() operands. Also adjust
    x86_insn_is_mem_write(). A -> a in public header. Move
    asm-x86/msr-index.h addition and drop _IA32 from their names.
    Introduce _AC() into the emulator harness as a result.
v7: Re-base.
v6: Re-base.
v5: New.

--- a/tools/tests/x86_emulator/predicates.c
+++ b/tools/tests/x86_emulator/predicates.c
@@ -511,6 +511,8 @@ static const struct {
     { { 0xf6 }, { 2, 2 }, T, R, pfx_66 }, /* adcx */
     { { 0xf6 }, { 2, 2 }, T, R, pfx_f3 }, /* adox */
     { { 0xf8 }, { 2, 2 }, F, W, pfx_66 }, /* movdir64b */
+    { { 0xf8 }, { 2, 2 }, F, W, pfx_f3 }, /* enqcmds */
+    { { 0xf8 }, { 2, 2 }, F, W, pfx_f2 }, /* enqcmd */
     { { 0xf9 }, { 2, 2 }, F, W }, /* movdiri */
 };
 #undef CND
--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -59,6 +59,9 @@
     (type *)((char *)mptr__ - offsetof(type, member)); \
 })
 
+#define AC_(n,t) (n##t)
+#define _AC(n,t) AC_(n,t)
+
 #define hweight32 __builtin_popcount
 #define hweight64 __builtin_popcountll
 
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -48,6 +48,7 @@ $(call as-option-add,CFLAGS,CC,"clwb (%r
 $(call as-option-add,CFLAGS,CC,".equ \"x\"$$(comma)1",-DHAVE_AS_QUOTED_SYM)
 $(call as-option-add,CFLAGS,CC,"invpcid (%rax)$$(comma)%rax",-DHAVE_AS_INVPCID)
 $(call as-option-add,CFLAGS,CC,"movdiri %rax$$(comma)(%rax)",-DHAVE_AS_MOVDIR)
+$(call as-option-add,CFLAGS,CC,"enqcmd (%rax)$$(comma)%rax",-DHAVE_AS_ENQCMD)
 
 # GAS's idea of true is -1.  Clang's idea is 1
 $(call as-option-add,CFLAGS,CC,\
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -855,6 +855,7 @@ struct x86_emulate_state {
     } rmw;
     enum {
         blk_NONE,
+        blk_enqcmd,
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -901,6 +902,7 @@ typedef union {
     uint64_t __attribute__ ((aligned(16))) xmm[2];
     uint64_t __attribute__ ((aligned(32))) ymm[4];
     uint64_t __attribute__ ((aligned(64))) zmm[8];
+    uint32_t data32[16];
 } mmval_t;
 
 /*
@@ -1923,6 +1925,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_rdpid()       (ctxt->cpuid->feat.rdpid)
 #define vcpu_has_movdiri()     (ctxt->cpuid->feat.movdiri)
 #define vcpu_has_movdir64b()   (ctxt->cpuid->feat.movdir64b)
+#define vcpu_has_enqcmd()      (ctxt->cpuid->feat.enqcmd)
 #define vcpu_has_avx512_4vnniw() (ctxt->cpuid->feat.avx512_4vnniw)
 #define vcpu_has_avx512_4fmaps() (ctxt->cpuid->feat.avx512_4fmaps)
 #define vcpu_has_serialize()   (ctxt->cpuid->feat.serialize)
@@ -10245,6 +10248,36 @@ x86_emulate(
         state->simd_size = simd_none;
         break;
 
+    case X86EMUL_OPC_F2(0x0f38, 0xf8): /* enqcmd r,m512 */
+    case X86EMUL_OPC_F3(0x0f38, 0xf8): /* enqcmds r,m512 */
+        host_and_vcpu_must_have(enqcmd);
+        generate_exception_if(ea.type != OP_MEM, EXC_UD);
+        generate_exception_if(vex.pfx != vex_f2 && !mode_ring0(), EXC_GP, 0);
+        src.val = truncate_ea(*dst.reg);
+        generate_exception_if(!is_aligned(x86_seg_es, src.val, 64, ctxt, ops),
+                              EXC_GP, 0);
+        fail_if(!ops->blk);
+        BUILD_BUG_ON(sizeof(*mmvalp) < 64);
+        if ( (rc = ops->read(ea.mem.seg, ea.mem.off, mmvalp, 64,
+                             ctxt)) != X86EMUL_OKAY )
+            goto done;
+        if ( vex.pfx == vex_f2 ) /* enqcmd */
+        {
+            fail_if(!ops->read_msr);
+            if ( (rc = ops->read_msr(MSR_PASID, &msr_val,
+                                     ctxt)) != X86EMUL_OKAY )
+                goto done;
+            generate_exception_if(!(msr_val & PASID_VALID), EXC_GP, 0);
+            mmvalp->data32[0] = MASK_EXTR(msr_val, PASID_PASID_MASK);
+        }
+        mmvalp->data32[0] &= ~0x7ff00000;
+        state->blk = blk_enqcmd;
+        if ( (rc = ops->blk(x86_seg_es, src.val, mmvalp, 64, &_regs.eflags,
+                            state, ctxt)) != X86EMUL_OKAY )
+            goto done;
+        state->simd_size = simd_none;
+        break;
+
     case X86EMUL_OPC(0x0f38, 0xf9): /* movdiri mem,r */
         host_and_vcpu_must_have(movdiri);
         generate_exception_if(dst.type != OP_MEM, EXC_UD);
@@ -11525,11 +11558,36 @@ int x86_emul_blk(
 {
     switch ( state->blk )
     {
+        bool zf;
+
         /*
          * Throughout this switch(), memory clobbers are used to compensate
          * that other operands may not properly express the (full) memory
          * ranges covered.
          */
+    case blk_enqcmd:
+        ASSERT(bytes == 64);
+        if ( ((unsigned long)ptr & 0x3f) )
+        {
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+        *eflags &= ~EFLAGS_MASK;
+#ifdef HAVE_AS_ENQCMD
+        asm ( "enqcmds (%[src]), %[dst]" ASM_FLAG_OUT(, "; setz %[zf]")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : [src] "r" (data), [dst] "r" (ptr) : "memory" );
+#else
+        /* enqcmds (%rsi), %rdi */
+        asm ( ".byte 0xf3, 0x0f, 0x38, 0xf8, 0x3e"
+              ASM_FLAG_OUT(, "; setz %[zf]")
+              : [zf] ASM_FLAG_OUT("=@ccz", "=qm") (zf)
+              : "S" (data), "D" (ptr) : "memory" );
+#endif
+        if ( zf )
+            *eflags |= X86_EFLAGS_ZF;
+        break;
+
     case blk_movdir:
         switch ( bytes )
         {
@@ -11870,6 +11928,8 @@ x86_insn_is_mem_write(const struct x86_e
             return !mode_64bit();
 
         case X86EMUL_OPC_66(0x0f38, 0xf8): /* MOVDIR64B */
+        case X86EMUL_OPC_F2(0x0f38, 0xf8): /* ENQCMD */
+        case X86EMUL_OPC_F3(0x0f38, 0xf8): /* ENQCMDS */
             return true;
         }
 
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -120,6 +120,7 @@
 #define cpu_has_rdpid           boot_cpu_has(X86_FEATURE_RDPID)
 #define cpu_has_movdiri         boot_cpu_has(X86_FEATURE_MOVDIRI)
 #define cpu_has_movdir64b       boot_cpu_has(X86_FEATURE_MOVDIR64B)
+#define cpu_has_enqcmd          boot_cpu_has(X86_FEATURE_ENQCMD)
 
 /* CPUID level 0x80000007.edx */
 #define cpu_has_itsc            boot_cpu_has(X86_FEATURE_ITSC)
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -74,6 +74,10 @@
 #define MSR_PL3_SSP                         0x000006a7
 #define MSR_INTERRUPT_SSP_TABLE             0x000006a8
 
+#define MSR_PASID                           0x00000d93
+#define  PASID_PASID_MASK                   0x000fffff
+#define  PASID_VALID                        (_AC(1, ULL) << 31)
+
 /*
  * Legacy MSR constants in need of cleanup.  No new MSRs below this comment.
  */
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -243,6 +243,7 @@ XEN_CPUFEATURE(RDPID,         6*32+22) /
 XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
 XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
 XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */
+XEN_CPUFEATURE(ENQCMD,        6*32+29) /*   ENQCMD{,S} instructions */
 
 /* AMD-defined CPU features, CPUID level 0x80000007.edx, word 7 */
 XEN_CPUFEATURE(ITSC,          7*32+ 8) /*   Invariant TSC */



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:29:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:29:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE6Z-0008RW-1r; Mon, 25 May 2020 14:29:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE6Y-0008RG-5w
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:29:26 +0000
X-Inumbo-ID: 21c6cc7c-9e94-11ea-aee4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21c6cc7c-9e94-11ea-aee4-12813bfff9fa;
 Mon, 25 May 2020 14:29:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EC24FABEC;
 Mon, 25 May 2020 14:29:26 +0000 (UTC)
Subject: [PATCH v10 7/9] x86emul: support FNSTENV and FNSAVE
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <ec5d12df-dbef-af1a-7649-44a9a7d46de4@suse.com>
Date: Mon, 25 May 2020 16:29:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To avoid introducing another boolean into emulator state, the
rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode
info (affecting structure layout, albeit not size) to x86_emul_blk().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
TBD: The full 16-bit padding fields in the 32-bit structures get filled
     with all ones by modern CPUs (i.e. unlike what the comment says for
     FIP and FDP). We may want to mirror this as well (for the real mode
     variant), even if those fields' contents are unspecified.
---
v9: Fix !HVM build. Add /*state->*/ comments. Add memset(). Add/extend
    comments.
v7: New.

--- a/tools/tests/x86_emulator/x86-emulate.h
+++ b/tools/tests/x86_emulator/x86-emulate.h
@@ -129,6 +129,7 @@ static inline bool xcr0_mask(uint64_t ma
 }
 
 #define cache_line_size() (cp.basic.clflush_size * 8)
+#define cpu_has_fpu        cp.basic.fpu
 #define cpu_has_mmx        cp.basic.mmx
 #define cpu_has_fxsr       cp.basic.fxsr
 #define cpu_has_sse        cp.basic.sse
--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -748,6 +748,25 @@ static struct x86_emulate_ops emulops =
 
 #define MMAP_ADDR 0x100000
 
+/*
+ * 64-bit OSes may not (be able to) properly restore the two selectors in
+ * the FPU environment. Zap them so that memcmp() on two saved images will
+ * work regardless of whether a context switch occurred in the middle.
+ */
+static void zap_fpsel(unsigned int *env, bool is_32bit)
+{
+    if ( is_32bit )
+    {
+        env[4] &= ~0xffff;
+        env[6] &= ~0xffff;
+    }
+    else
+    {
+        env[2] &= ~0xffff;
+        env[3] &= ~0xffff;
+    }
+}
+
 #ifdef __x86_64__
 # define STKVAL_DISP 64
 static const struct {
@@ -2394,6 +2413,62 @@ int main(int argc, char **argv)
         printf("okay\n");
     }
     else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing fnstenv 4(%ecx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t three = 3;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fstenv %0"
+                       : "=m" (res[9]) : "m" (three) : "memory" );
+        zap_fpsel(&res[9], true);
+        instr[0] = 0xd9; instr[1] = 0x71; instr[2] = 0x04;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)res;
+        res[8] = 0xaa55aa55;
+        rc = x86_emulate(&ctxt, &emulops);
+        zap_fpsel(&res[1], true);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 1, res + 9, 28) ||
+             res[8] != 0xaa55aa55 ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing 16-bit fnsave (%ecx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t five = 5;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fsaves %0"
+                       : "=m" (res[25]) : "m" (five) : "memory" );
+        zap_fpsel(&res[25], false);
+        asm volatile ( "frstors %0" :: "m" (res[25]) : "memory" );
+        instr[0] = 0x66; instr[1] = 0xdd; instr[2] = 0x31;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)res;
+        res[23] = 0xaa55aa55;
+        res[24] = 0xaa55aa55;
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res, res + 25, 94) ||
+             (res[23] >> 16) != 0xaa55 ||
+             res[24] != 0xaa55aa55 ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
         printf("skipped\n");
 
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -856,6 +856,9 @@ struct x86_emulate_state {
     enum {
         blk_NONE,
         blk_enqcmd,
+#ifndef X86EMUL_NO_FPU
+        blk_fst, /* FNSTENV, FNSAVE */
+#endif
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -897,6 +900,50 @@ struct x86_emulate_state {
 #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
 #endif
 
+#ifndef X86EMUL_NO_FPU
+struct x87_env16 {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint16_t ftw;
+    union {
+        struct {
+            uint16_t fip_lo;
+            uint16_t fop:11, :1, fip_hi:4;
+            uint16_t fdp_lo;
+            uint16_t :12, fdp_hi:4;
+        } real;
+        struct {
+            uint16_t fip;
+            uint16_t fcs;
+            uint16_t fdp;
+            uint16_t fds;
+        } prot;
+    } mode;
+};
+
+struct x87_env32 {
+    uint32_t fcw:16, :16;
+    uint32_t fsw:16, :16;
+    uint32_t ftw:16, :16;
+    union {
+        struct {
+            /* some CPUs/FPUs also store the full FIP here */
+            uint32_t fip_lo:16, :16;
+            uint32_t fop:11, :1, fip_hi:16, :4;
+            /* some CPUs/FPUs also store the full FDP here */
+            uint32_t fdp_lo:16, :16;
+            uint32_t :12, fdp_hi:16, :4;
+        } real;
+        struct {
+            uint32_t fip;
+            uint32_t fcs:16, fop:11, :5;
+            uint32_t fdp;
+            uint32_t fds:16, :16;
+        } prot;
+    } mode;
+};
+#endif
+
 typedef union {
     uint64_t mmx;
     uint64_t __attribute__ ((aligned(16))) xmm[2];
@@ -4924,9 +4971,22 @@ x86_emulate(
                     goto done;
                 emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
                 break;
-            case 6: /* fnstenv - TODO */
+            case 6: /* fnstenv */
+                fail_if(!ops->blk);
+                state->blk = blk_fst;
+                /*
+                 * REX is meaningless for this insn by this point - (ab)use
+                 * the field to communicate real vs protected mode to ->blk().
+                 */
+                /*state->*/rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                    op_bytes > 2 ? sizeof(struct x87_env32)
+                                                 : sizeof(struct x87_env16),
+                                    &_regs.eflags,
+                                    state, ctxt)) != X86EMUL_OKAY )
+                    goto done;
                 state->fpu_ctrl = true;
-                goto unimplemented_insn;
+                break;
             case 7: /* fnstcw m2byte */
                 state->fpu_ctrl = true;
             fpu_memdst16:
@@ -5080,9 +5140,24 @@ x86_emulate(
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
             case 4: /* frstor - TODO */
-            case 6: /* fnsave - TODO */
                 state->fpu_ctrl = true;
                 goto unimplemented_insn;
+            case 6: /* fnsave */
+                fail_if(!ops->blk);
+                state->blk = blk_fst;
+                /*
+                 * REX is meaningless for this insn by this point - (ab)use
+                 * the field to communicate real vs protected mode to ->blk().
+                 */
+                /*state->*/rex_prefix = in_protmode(ctxt, ops);
+                if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                    op_bytes > 2 ? sizeof(struct x87_env32) + 80
+                                                 : sizeof(struct x87_env16) + 80,
+                                    &_regs.eflags,
+                                    state, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                state->fpu_ctrl = true;
+                break;
             case 7: /* fnstsw m2byte */
                 state->fpu_ctrl = true;
                 goto fpu_memdst16;
@@ -11559,6 +11634,14 @@ int x86_emul_blk(
     switch ( state->blk )
     {
         bool zf;
+#ifndef X86EMUL_NO_FPU
+        struct {
+            struct x87_env32 env;
+            struct {
+               uint8_t bytes[10];
+            } freg[8];
+        } fpstate;
+#endif
 
         /*
          * Throughout this switch(), memory clobbers are used to compensate
@@ -11588,6 +11671,98 @@ int x86_emul_blk(
             *eflags |= X86_EFLAGS_ZF;
         break;
 
+#ifndef X86EMUL_NO_FPU
+
+    case blk_fst:
+        ASSERT(!data);
+
+        /* Don't chance consuming uninitialized data. */
+        memset(&fpstate, 0, sizeof(fpstate));
+        if ( bytes > sizeof(fpstate.env) )
+            asm ( "fnsave %0" : "+m" (fpstate) );
+        else
+            asm ( "fnstenv %0" : "+m" (fpstate.env) );
+
+        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env): /* 32-bit FNSTENV */
+        case sizeof(fpstate):     /* 32-bit FNSAVE */
+            if ( !state->rex_prefix )
+            {
+                /* Convert 32-bit prot to 32-bit real/vm86 format. */
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                unsigned int fop = fpstate.env.mode.prot.fop;
+
+                memset(&fpstate.env.mode, 0, sizeof(fpstate.env.mode));
+                fpstate.env.mode.real.fip_lo = fip;
+                fpstate.env.mode.real.fip_hi = fip >> 16;
+                fpstate.env.mode.real.fop = fop;
+                fpstate.env.mode.real.fdp_lo = fdp;
+                fpstate.env.mode.real.fdp_hi = fdp >> 16;
+            }
+            memcpy(ptr, &fpstate.env, sizeof(fpstate.env));
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):                        /* 16-bit FNSTENV */
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FNSAVE */
+            if ( state->rex_prefix )
+            {
+                /* Convert 32-bit prot to 16-bit prot format. */
+                struct x87_env16 *env = ptr;
+
+                env->fcw = fpstate.env.fcw;
+                env->fsw = fpstate.env.fsw;
+                env->ftw = fpstate.env.ftw;
+                env->mode.prot.fip = fpstate.env.mode.prot.fip;
+                env->mode.prot.fcs = fpstate.env.mode.prot.fcs;
+                env->mode.prot.fdp = fpstate.env.mode.prot.fdp;
+                env->mode.prot.fds = fpstate.env.mode.prot.fds;
+            }
+            else
+            {
+                /* Convert 32-bit prot to 16-bit real/vm86 format. */
+                unsigned int fip = fpstate.env.mode.prot.fip +
+                                   (fpstate.env.mode.prot.fcs << 4);
+                unsigned int fdp = fpstate.env.mode.prot.fdp +
+                                   (fpstate.env.mode.prot.fds << 4);
+                struct x87_env16 env = {
+                    .fcw = fpstate.env.fcw,
+                    .fsw = fpstate.env.fsw,
+                    .ftw = fpstate.env.ftw,
+                    .mode.real.fip_lo = fip,
+                    .mode.real.fip_hi = fip >> 16,
+                    .mode.real.fop = fpstate.env.mode.prot.fop,
+                    .mode.real.fdp_lo = fdp,
+                    .mode.real.fdp_hi = fdp >> 16
+                };
+
+                memcpy(ptr, &env, sizeof(env));
+            }
+            if ( bytes == sizeof(struct x87_env16) )
+                ptr = NULL;
+            else
+                ptr += sizeof(struct x87_env16);
+            break;
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+            memcpy(ptr, fpstate.freg, sizeof(fpstate.freg));
+        break;
+
+#endif /* X86EMUL_NO_FPU */
+
     case blk_movdir:
         switch ( bytes )
         {



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:29:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:29:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE6v-00005W-GB; Mon, 25 May 2020 14:29:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE6u-00005J-9u
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:29:48 +0000
X-Inumbo-ID: 2f008aa4-9e94-11ea-ae69-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f008aa4-9e94-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 14:29:47 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3A330ABEC;
 Mon, 25 May 2020 14:29:49 +0000 (UTC)
Subject: [PATCH v10 8/9] x86emul: support FLDENV and FRSTOR
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <6dc6eb46-c7cb-ca88-2f92-04b99be1b9ef@suse.com>
Date: Mon, 25 May 2020 16:29:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While the Intel SDM claims that FRSTOR itself may raise #MF upon
completion, Intel has confirmed this to be a documentation error which
will be corrected in due course; the behavior matches FLDENV's, and is
as old hard copy manuals describe it.

Re-arrange a switch() statement's case label order to allow for
fall-through from FLDENV handling to FNSTENV's.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v9: Refine description. Re-base over changes to earlier patch. Add
    comments.
v7: New.

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -2442,6 +2442,27 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing fldenv 8(%edx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        asm volatile ( "fnstenv %0\n\t"
+                       "fninit"
+                       : "=m" (res[2]) :: "memory" );
+        zap_fpsel(&res[2], true);
+        instr[0] = 0xd9; instr[1] = 0x62; instr[2] = 0x08;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fnstenv %0" : "=m" (res[9]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 2, res + 9, 28) ||
+             (regs.eip != (unsigned long)&instr[3]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
     printf("%-40s", "Testing 16-bit fnsave (%ecx)...");
     if ( stack_exec && cpu_has_fpu )
     {
@@ -2468,6 +2489,31 @@ int main(int argc, char **argv)
             goto fail;
         printf("okay\n");
     }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing frstor (%edx)...");
+    if ( stack_exec && cpu_has_fpu )
+    {
+        const uint16_t seven = 7;
+
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fnsave %0\n\t"
+                       : "=&m" (res[0]) : "m" (seven) : "memory" );
+        zap_fpsel(&res[0], true);
+        instr[0] = 0xdd; instr[1] = 0x22;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)res;
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fnsave %0" : "=m" (res[27]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res, res + 27, 108) ||
+             (regs.eip != (unsigned long)&instr[2]) )
+            goto fail;
+        printf("okay\n");
+    }
     else
         printf("skipped\n");
 
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -857,6 +857,7 @@ struct x86_emulate_state {
         blk_NONE,
         blk_enqcmd,
 #ifndef X86EMUL_NO_FPU
+        blk_fld, /* FLDENV, FRSTOR */
         blk_fst, /* FNSTENV, FNSAVE */
 #endif
         blk_movdir,
@@ -4960,22 +4961,15 @@ x86_emulate(
                 dst.bytes = 4;
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
-            case 4: /* fldenv - TODO */
-                state->fpu_ctrl = true;
-                goto unimplemented_insn;
-            case 5: /* fldcw m2byte */
-                state->fpu_ctrl = true;
-            fpu_memsrc16:
-                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
-                                     2, ctxt)) != X86EMUL_OKAY )
-                    goto done;
-                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
-                break;
+            case 4: /* fldenv */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
             case 6: /* fnstenv */
                 fail_if(!ops->blk);
-                state->blk = blk_fst;
+                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
                 /*
-                 * REX is meaningless for this insn by this point - (ab)use
+                 * REX is meaningless for these insns by this point - (ab)use
                  * the field to communicate real vs protected mode to ->blk().
                  */
                 /*state->*/rex_prefix = in_protmode(ctxt, ops);
@@ -4987,6 +4981,14 @@ x86_emulate(
                     goto done;
                 state->fpu_ctrl = true;
                 break;
+            case 5: /* fldcw m2byte */
+                state->fpu_ctrl = true;
+            fpu_memsrc16:
+                if ( (rc = ops->read(ea.mem.seg, ea.mem.off, &src.val,
+                                     2, ctxt)) != X86EMUL_OKAY )
+                    goto done;
+                emulate_fpu_insn_memsrc(b, modrm_reg & 7, src.val);
+                break;
             case 7: /* fnstcw m2byte */
                 state->fpu_ctrl = true;
             fpu_memdst16:
@@ -5139,14 +5141,15 @@ x86_emulate(
                 dst.bytes = 8;
                 emulate_fpu_insn_memdst(b, modrm_reg & 7, dst.val);
                 break;
-            case 4: /* frstor - TODO */
-                state->fpu_ctrl = true;
-                goto unimplemented_insn;
+            case 4: /* frstor */
+                /* Raise #MF now if there are pending unmasked exceptions. */
+                emulate_fpu_insn_stub(0xd9, 0xd0 /* fnop */);
+                /* fall through */
             case 6: /* fnsave */
                 fail_if(!ops->blk);
-                state->blk = blk_fst;
+                state->blk = modrm_reg & 2 ? blk_fst : blk_fld;
                 /*
-                 * REX is meaningless for this insn by this point - (ab)use
+                 * REX is meaningless for these insns by this point - (ab)use
                  * the field to communicate real vs protected mode to ->blk().
                  */
                 /*state->*/rex_prefix = in_protmode(ctxt, ops);
@@ -11673,6 +11676,92 @@ int x86_emul_blk(
 
 #ifndef X86EMUL_NO_FPU
 
+    case blk_fld:
+        ASSERT(!data);
+
+        /* state->rex_prefix carries CR0.PE && !EFLAGS.VM setting */
+        switch ( bytes )
+        {
+        case sizeof(fpstate.env): /* 32-bit FLDENV */
+        case sizeof(fpstate):     /* 32-bit FRSTOR */
+            memcpy(&fpstate.env, ptr, sizeof(fpstate.env));
+            if ( !state->rex_prefix )
+            {
+                /* Convert 32-bit real/vm86 to 32-bit prot format. */
+                unsigned int fip = fpstate.env.mode.real.fip_lo +
+                                   (fpstate.env.mode.real.fip_hi << 16);
+                unsigned int fdp = fpstate.env.mode.real.fdp_lo +
+                                   (fpstate.env.mode.real.fdp_hi << 16);
+                unsigned int fop = fpstate.env.mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(fpstate.env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(fpstate.env);
+            break;
+
+        case sizeof(struct x87_env16):                        /* 16-bit FLDENV */
+        case sizeof(struct x87_env16) + sizeof(fpstate.freg): /* 16-bit FRSTOR */
+        {
+            const struct x87_env16 *env = ptr;
+
+            fpstate.env.fcw = env->fcw;
+            fpstate.env.fsw = env->fsw;
+            fpstate.env.ftw = env->ftw;
+
+            if ( state->rex_prefix )
+            {
+                /* Convert 16-bit prot to 32-bit prot format. */
+                fpstate.env.mode.prot.fip = env->mode.prot.fip;
+                fpstate.env.mode.prot.fcs = env->mode.prot.fcs;
+                fpstate.env.mode.prot.fdp = env->mode.prot.fdp;
+                fpstate.env.mode.prot.fds = env->mode.prot.fds;
+                fpstate.env.mode.prot.fop = 0; /* unknown */
+            }
+            else
+            {
+                /* Convert 16-bit real/vm86 to 32-bit prot format. */
+                unsigned int fip = env->mode.real.fip_lo +
+                                   (env->mode.real.fip_hi << 16);
+                unsigned int fdp = env->mode.real.fdp_lo +
+                                   (env->mode.real.fdp_hi << 16);
+                unsigned int fop = env->mode.real.fop;
+
+                fpstate.env.mode.prot.fip = fip & 0xf;
+                fpstate.env.mode.prot.fcs = fip >> 4;
+                fpstate.env.mode.prot.fop = fop;
+                fpstate.env.mode.prot.fdp = fdp & 0xf;
+                fpstate.env.mode.prot.fds = fdp >> 4;
+            }
+
+            if ( bytes == sizeof(*env) )
+                ptr = NULL;
+            else
+                ptr += sizeof(*env);
+            break;
+        }
+
+        default:
+            ASSERT_UNREACHABLE();
+            return X86EMUL_UNHANDLEABLE;
+        }
+
+        if ( ptr )
+        {
+            memcpy(fpstate.freg, ptr, sizeof(fpstate.freg));
+            asm volatile ( "frstor %0" :: "m" (fpstate) );
+        }
+        else
+            asm volatile ( "fldenv %0" :: "m" (fpstate.env) );
+        break;
+
     case blk_fst:
         ASSERT(!data);
 



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:30:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdE7P-0000rj-Sc; Mon, 25 May 2020 14:30:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdE7O-0000rO-V8
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:30:18 +0000
X-Inumbo-ID: 412bc824-9e94-11ea-b9cf-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 412bc824-9e94-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 14:30:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 88A19AC9F;
 Mon, 25 May 2020 14:30:19 +0000 (UTC)
Subject: [PATCH v10 9/9] x86emul: support FXSAVE/FXRSTOR
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Message-ID: <31ee7b12-c256-3186-30ca-45665b241a8b@suse.com>
Date: Mon, 25 May 2020 16:30:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Note that FPU selector handling as well as MXCSR mask saving for now
do not honor differences between host and guest visible featuresets.

While on Intel, operation of the insns with CR4.OSFXSR=0 is
implementation dependent, use the easiest solution there: simply don't
look at the bit in the first place. For AMD and the like the behavior is
well defined, so it gets handled together with FFXSR.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v9: Change a few field types in struct x86_fxsr. Leave reserved fields
    either entirely unnamed, or named "rsvd". Set state->fpu_ctrl. Avoid
    memory clobbers. Add memset() to FXSAVE logic. Add comments.
v8: Respect EFER.FFXSE and CR4.OSFXSR. Correct wrong X86EMUL_NO_*
    dependencies. Reduce #ifdef-ary.
v7: New.

--- a/tools/tests/x86_emulator/test_x86_emulator.c
+++ b/tools/tests/x86_emulator/test_x86_emulator.c
@@ -767,6 +767,12 @@ static void zap_fpsel(unsigned int *env,
     }
 }
 
+static void zap_xfpsel(unsigned int *env)
+{
+    env[3] &= ~0xffff;
+    env[5] &= ~0xffff;
+}
+
 #ifdef __x86_64__
 # define STKVAL_DISP 64
 static const struct {
@@ -2517,6 +2523,91 @@ int main(int argc, char **argv)
     else
         printf("skipped\n");
 
+    printf("%-40s", "Testing fxsave 4(%ecx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        const uint16_t nine = 9;
+
+        memset(res + 0x80, 0xcc, 0x400);
+        if ( cpu_has_sse2 )
+            asm volatile ( "pcmpeqd %xmm7, %xmm7\n\t"
+                           "pxor %xmm6, %xmm6\n\t"
+                           "psubw %xmm7, %xmm6" );
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %1\n\t"
+                       "fxsave %0"
+                       : "=m" (res[0x100]) : "m" (nine) : "memory" );
+        zap_xfpsel(&res[0x100]);
+        instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x41; instr[3] = 0x04;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)(res + 0x7f);
+        memset(res + 0x100 + 0x74, 0x33, 0x30);
+        memset(res + 0x80 + 0x74, 0x33, 0x30);
+        rc = x86_emulate(&ctxt, &emulops);
+        zap_xfpsel(&res[0x80]);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x80, res + 0x100, 0x200) ||
+             (regs.eip != (unsigned long)&instr[4]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+    printf("%-40s", "Testing fxrstor -4(%ecx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        const uint16_t eleven = 11;
+
+        memset(res + 0x80, 0xcc, 0x400);
+        asm volatile ( "fxsave %0" : "=m" (res[0x80]) :: "memory" );
+        zap_xfpsel(&res[0x80]);
+        if ( cpu_has_sse2 )
+            asm volatile ( "pxor %xmm7, %xmm6\n\t"
+                           "pxor %xmm7, %xmm3\n\t"
+                           "pxor %xmm7, %xmm0\n\t"
+                           "pxor %xmm7, %xmm7" );
+        asm volatile ( "fninit\n\t"
+                       "fld1\n\t"
+                       "fidivs %0\n\t"
+                       :: "m" (eleven) );
+        instr[0] = 0x0f; instr[1] = 0xae; instr[2] = 0x49; instr[3] = 0xfc;
+        regs.eip = (unsigned long)&instr[0];
+        regs.ecx = (unsigned long)(res + 0x81);
+        rc = x86_emulate(&ctxt, &emulops);
+        asm volatile ( "fxsave %0" : "=m" (res[0x100]) :: "memory" );
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x100, res + 0x80, 0x200) ||
+             (regs.eip != (unsigned long)&instr[4]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+
+#ifdef __x86_64__
+    printf("%-40s", "Testing fxsaveq 8(%edx)...");
+    if ( stack_exec && cpu_has_fxsr )
+    {
+        memset(res + 0x80, 0xcc, 0x400);
+        asm volatile ( "fxsaveq %0" : "=m" (res[0x100]) :: "memory" );
+        instr[0] = 0x48; instr[1] = 0x0f; instr[2] = 0xae; instr[3] = 0x42; instr[4] = 0x08;
+        regs.eip = (unsigned long)&instr[0];
+        regs.edx = (unsigned long)(res + 0x7e);
+        memset(res + 0x100 + 0x74, 0x33, 0x30);
+        memset(res + 0x80 + 0x74, 0x33, 0x30);
+        rc = x86_emulate(&ctxt, &emulops);
+        if ( (rc != X86EMUL_OKAY) ||
+             memcmp(res + 0x80, res + 0x100, 0x200) ||
+             (regs.eip != (unsigned long)&instr[5]) )
+            goto fail;
+        printf("okay\n");
+    }
+    else
+        printf("skipped\n");
+#endif
+
     printf("%-40s", "Testing movq %mm3,(%ecx)...");
     if ( stack_exec && cpu_has_mmx )
     {
--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -36,6 +36,13 @@ struct cpuid_policy cp;
 static char fpu_save_area[4096] __attribute__((__aligned__((64))));
 static bool use_xsave;
 
+/*
+ * Re-use the area above also as scratch space for the emulator itself.
+ * (When debugging the emulator, care needs to be taken when inserting
+ * printf() or alike function calls into regions using this.)
+ */
+#define FXSAVE_AREA ((struct x86_fxsr *)fpu_save_area)
+
 void emul_save_fpu_state(void)
 {
     if ( use_xsave )
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -860,6 +860,11 @@ struct x86_emulate_state {
         blk_fld, /* FLDENV, FRSTOR */
         blk_fst, /* FNSTENV, FNSAVE */
 #endif
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        blk_fxrstor,
+        blk_fxsave,
+#endif
         blk_movdir,
     } blk;
     uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
@@ -953,6 +958,29 @@ typedef union {
     uint32_t data32[16];
 } mmval_t;
 
+struct x86_fxsr {
+    uint16_t fcw;
+    uint16_t fsw;
+    uint8_t ftw, :8;
+    uint16_t fop;
+    union {
+        struct {
+            uint32_t offs;
+            uint16_t sel, :16;
+        };
+        uint64_t addr;
+    } fip, fdp;
+    uint32_t mxcsr;
+    uint32_t mxcsr_mask;
+    struct {
+        uint8_t data[10];
+        uint16_t :16, :16, :16;
+    } fpreg[8];
+    uint64_t __attribute__ ((aligned(16))) xmm[16][2];
+    uint64_t rsvd[6];
+    uint64_t avl[6];
+};
+
 /*
  * While proper alignment gets specified above, this doesn't get honored by
  * the compiler for automatic variables. Use this helper to instantiate a
@@ -1910,6 +1938,7 @@ amd_like(const struct x86_emulate_ctxt *
 #define vcpu_has_cmov()        (ctxt->cpuid->basic.cmov)
 #define vcpu_has_clflush()     (ctxt->cpuid->basic.clflush)
 #define vcpu_has_mmx()         (ctxt->cpuid->basic.mmx)
+#define vcpu_has_fxsr()        (ctxt->cpuid->basic.fxsr)
 #define vcpu_has_sse()         (ctxt->cpuid->basic.sse)
 #define vcpu_has_sse2()        (ctxt->cpuid->basic.sse2)
 #define vcpu_has_sse3()        (ctxt->cpuid->basic.sse3)
@@ -8148,6 +8177,49 @@ x86_emulate(
     case X86EMUL_OPC(0x0f, 0xae): case X86EMUL_OPC_66(0x0f, 0xae): /* Grp15 */
         switch ( modrm_reg & 7 )
         {
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+        case 0: /* fxsave */
+        case 1: /* fxrstor */
+            generate_exception_if(vex.pfx, EXC_UD);
+            vcpu_must_have(fxsr);
+            generate_exception_if(ea.type != OP_MEM, EXC_UD);
+            generate_exception_if(!is_aligned(ea.mem.seg, ea.mem.off, 16,
+                                              ctxt, ops),
+                                  EXC_GP, 0);
+            fail_if(!ops->blk);
+            op_bytes =
+#ifdef __x86_64__
+                !mode_64bit() ? offsetof(struct x86_fxsr, xmm[8]) :
+#endif
+                sizeof(struct x86_fxsr);
+            if ( amd_like(ctxt) )
+            {
+                /* Assume "normal" operation in case of missing hooks. */
+                if ( !ops->read_cr ||
+                     ops->read_cr(4, &cr4, ctxt) != X86EMUL_OKAY )
+                    cr4 = X86_CR4_OSFXSR;
+                if ( !ops->read_msr ||
+                     ops->read_msr(MSR_EFER, &msr_val, ctxt) != X86EMUL_OKAY )
+                    msr_val = 0;
+                if ( !(cr4 & X86_CR4_OSFXSR) ||
+                     (mode_64bit() && mode_ring0() && (msr_val & EFER_FFXSE)) )
+                    op_bytes = offsetof(struct x86_fxsr, xmm[0]);
+            }
+            /*
+             * This could also be X86EMUL_FPU_mmx, but it shouldn't be
+             * X86EMUL_FPU_xmm, as we don't want CR4.OSFXSR checked.
+             */
+            get_fpu(X86EMUL_FPU_fpu);
+            state->fpu_ctrl = true;
+            state->blk = modrm_reg & 1 ? blk_fxrstor : blk_fxsave;
+            if ( (rc = ops->blk(ea.mem.seg, ea.mem.off, NULL,
+                                sizeof(struct x86_fxsr), &_regs.eflags,
+                                state, ctxt)) != X86EMUL_OKAY )
+                goto done;
+            break;
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
 #ifndef X86EMUL_NO_SIMD
         case 2: /* ldmxcsr */
             generate_exception_if(vex.pfx, EXC_UD);
@@ -11634,6 +11706,8 @@ int x86_emul_blk(
     struct x86_emulate_state *state,
     struct x86_emulate_ctxt *ctxt)
 {
+    int rc = X86EMUL_OKAY;
+
     switch ( state->blk )
     {
         bool zf;
@@ -11852,6 +11926,86 @@ int x86_emul_blk(
 
 #endif /* X86EMUL_NO_FPU */
 
+#if !defined(X86EMUL_NO_FPU) || !defined(X86EMUL_NO_MMX) || \
+    !defined(X86EMUL_NO_SIMD)
+
+    case blk_fxrstor:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(state->op_bytes <= bytes);
+
+        if ( state->op_bytes < sizeof(*fxsr) )
+        {
+            if ( state->rex_prefix & REX_W )
+            {
+                /*
+                 * The only way to force fxsaveq on a wide range of gas
+                 * versions. On older versions the rex64 prefix works only if
+                 * we force an addressing mode that doesn't require extended
+                 * registers.
+                 */
+                asm volatile ( ".byte 0x48; fxsave (%1)"
+                               : "=m" (*fxsr) : "R" (fxsr) );
+            }
+            else
+                asm volatile ( "fxsave %0" : "=m" (*fxsr) );
+        }
+
+        /*
+         * Don't chance the reserved or available ranges to contain any
+         * data FXRSTOR may actually consume in some way: Copy only the
+         * defined portion, and zero the rest.
+         */
+        memcpy(fxsr, ptr, min(state->op_bytes,
+                              (unsigned int)offsetof(struct x86_fxsr, rsvd)));
+        memset(fxsr->rsvd, 0, sizeof(*fxsr) - offsetof(struct x86_fxsr, rsvd));
+
+        generate_exception_if(fxsr->mxcsr & ~mxcsr_mask, EXC_GP, 0);
+
+        if ( state->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraints are this way. */
+            asm volatile ( ".byte 0x48; fxrstor (%1)"
+                           :: "m" (*fxsr), "R" (fxsr) );
+        }
+        else
+            asm volatile ( "fxrstor %0" :: "m" (*fxsr) );
+        break;
+    }
+
+    case blk_fxsave:
+    {
+        struct x86_fxsr *fxsr = FXSAVE_AREA;
+
+        ASSERT(!data);
+        ASSERT(bytes == sizeof(*fxsr));
+        ASSERT(state->op_bytes <= bytes);
+
+        if ( state->op_bytes < sizeof(*fxsr) )
+            /* Don't chance consuming uninitialized data. */
+            memset(fxsr, 0, state->op_bytes);
+        else
+            fxsr = ptr;
+
+        if ( state->rex_prefix & REX_W )
+        {
+            /* See above for why operand/constraints are this way. */
+            asm volatile ( ".byte 0x48; fxsave (%1)"
+                           : "=m" (*fxsr) : "R" (fxsr) );
+        }
+        else
+            asm volatile ( "fxsave %0" : "=m" (*fxsr) );
+
+        if ( fxsr != ptr ) /* i.e. state->op_bytes < sizeof(*fxsr) */
+            memcpy(ptr, fxsr, state->op_bytes);
+        break;
+    }
+
+#endif /* X86EMUL_NO_{FPU,MMX,SIMD} */
+
     case blk_movdir:
         switch ( bytes )
         {
@@ -11905,7 +12059,8 @@ int x86_emul_blk(
         return X86EMUL_UNHANDLEABLE;
     }
 
-    return X86EMUL_OKAY;
+ done:
+    return rc;
 }
 
 static void __init __maybe_unused build_assertions(void)
--- a/xen/arch/x86/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate.c
@@ -43,6 +43,8 @@
     }                                                      \
 })
 
+#define FXSAVE_AREA current->arch.fpu_ctxt
+
 #ifndef CONFIG_HVM
 # define X86EMUL_NO_FPU
 # define X86EMUL_NO_MMX



From xen-devel-bounces@lists.xenproject.org Mon May 25 14:46:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 14:46:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdEMt-00022P-DH; Mon, 25 May 2020 14:46:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AvLo=7H=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jdEMs-00022K-Gn
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 14:46:18 +0000
X-Inumbo-ID: 7da9c308-9e96-11ea-9887-bc764e2007e4
Received: from mail-oi1-x242.google.com (unknown [2607:f8b0:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7da9c308-9e96-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 14:46:18 +0000 (UTC)
Received: by mail-oi1-x242.google.com with SMTP id s198so16193373oie.6
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 07:46:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=2HoAMOZQ3HefgrNkWv0E4FEusKuZTEg4JrxhK22N/y8=;
 b=ReMeZAzm0OhxA9ZFrXFaIJr4A7AitXK67M1nc1DIMnZVm4rsNnGd6GLwpZssIvlK4S
 RIPBPFac6YVzcojeIYwVYfj7eX8C7Uupl5wQaCticSg8T635U3MmG0VMYUNrNvDuFx8b
 1is+wqVRsnUSMPL9ZeXcz2OZMxBMVVDJ8/75uiVZbbaP20EasdKrKI7lwjuOTOJuyudb
 eOL0ob+v74UyXOV8Qb0UNpIPKIKJiEfmZnQi72RfoIAg0tRdGQiofKhuVneht52NP78W
 t5bFmFUD6EbsF8WKc4xOPzF3+iZCSMf2OWH0BDbHN/na+vHsKZyIdCsmvw58H8hXe/uZ
 dr+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=2HoAMOZQ3HefgrNkWv0E4FEusKuZTEg4JrxhK22N/y8=;
 b=Q3KgA1XVWFu6exGJWM6ASu1DzQwmLRqDRNB537ZntNglBf/eK7spb6oaMhoIvIW7an
 GwN+rD9FoPB7VMqsCPkuw94fMlfvgCAmSEFYu5CgOV/GuCeRQQ1eD3Uof8queLWkfd90
 n5jvNuXJoR5nBePBvmYRrVT9iYqAyHWDE+2YnEEjY61ExD9ECrRNT6a/73deNslv0Tes
 3p09OqIhoRd+tCiLhOhvmlerWhXShoxUXfCV/XpaeCqLY3lyEgMUgGZcmkewGRJgQnKs
 kCXSGFQtV7ZLZlA94hs04cB6QLhk3fskIu56DxzRc3RiPtxEI7zX1p3EtNSyWbEh839S
 2Fuw==
X-Gm-Message-State: AOAM531GNH3DvdcUvWp+Vujw5HLzyYIy/3D4aaxMDzOQs4/zQ5P6I9XR
 HXp3H/SYwfhy50sOQzakqxxRW6gg
X-Google-Smtp-Source: ABdhPJzwhYnhhOJL1Gojw7XS0y1IEPkHFTcDYYY1DCzZz6LbJNmwj/AxkU0NJSTJ2j2tzilQmSQ9ZQ==
X-Received: by 2002:aca:c594:: with SMTP id v142mr11228598oif.66.1590417977063; 
 Mon, 25 May 2020 07:46:17 -0700 (PDT)
Received: from t0.lan (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.googlemail.com with ESMTPSA id s124sm5327251oig.19.2020.05.25.07.46.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 25 May 2020 07:46:16 -0700 (PDT)
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
X-Google-Original-From: Tamas K Lengyel <tamas.lengyel@intel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] x86/mem_sharing: gate enabling on cpu_has_vmx
Date: Mon, 25 May 2020 08:46:06 -0600
Message-Id: <20200525144606.126767-1-tamas.lengyel@intel.com>
X-Mailer: git-send-email 2.26.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Tamas K Lengyel <tamas@tklengyel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Tamas K Lengyel <tamas@tklengyel.com>

It is unclear whether mem_sharing was ever made to work on other
platforms, but at this time the only verified one for it is VMX. There
are no plans to support or maintain it elsewhere. Make this explicit by
checking for VMX during initialization.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 xen/arch/x86/mm/mem_sharing.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 7271e5c90b..19922ab5d1 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1444,7 +1444,7 @@ static inline int mem_sharing_control(struct domain *d, bool enable,
 {
     if ( enable )
     {
-        if ( unlikely(!is_hvm_domain(d)) )
+        if ( unlikely(!is_hvm_domain(d) || !cpu_has_vmx) )
             return -EOPNOTSUPP;
 
         if ( unlikely(!hap_enabled(d)) )
-- 
2.26.1



From xen-devel-bounces@lists.xenproject.org Mon May 25 15:04:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 15:04:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdEeS-0003nv-VD; Mon, 25 May 2020 15:04:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdEeR-0003nq-8E
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 15:04:27 +0000
X-Inumbo-ID: 061da4b4-9e99-11ea-9887-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 061da4b4-9e99-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 15:04:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C225CAC52;
 Mon, 25 May 2020 15:04:27 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] VT-x: extend LBR Broadwell errata coverage
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <c43b9d43-2e37-d2a8-ba32-dd06062a05e2@suse.com>
Date: Mon, 25 May 2020 17:04:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For lbr_tsx_fixup_check() simply name a few more specific erratum
numbers.

For bdf93_fixup_check(), however, more models are affected. Oddly enough
despite being the same model and stepping, the erratum is listed for
Xeon E3 but not its Core counterpart. Apply the workaround uniformly,
and also for Xeon D, which only has the LBR-from one listed in its spec
update.

Seeing this broader applicability, rename anything BDF93-related to more
generic names.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Name yet another pair of errata. Speculatively cover Xeon D also
    in the 2nd case. Identifier renaming.

--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2380,7 +2380,7 @@ static void pi_notification_interrupt(st
 }
 
 static void __init lbr_tsx_fixup_check(void);
-static void __init bdf93_fixup_check(void);
+static void __init ler_to_fixup_check(void);
 
 /*
  * Calculate whether the CPU is vulnerable to Instruction Fetch page
@@ -2554,7 +2554,7 @@ const struct hvm_function_table * __init
     }
 
     lbr_tsx_fixup_check();
-    bdf93_fixup_check();
+    ler_to_fixup_check();
 
     return &vmx_function_table;
 }
@@ -2832,11 +2832,11 @@ enum
 
 #define LBR_MSRS_INSERTED      (1u << 0)
 #define LBR_FIXUP_TSX          (1u << 1)
-#define LBR_FIXUP_BDF93        (1u << 2)
-#define LBR_FIXUP_MASK         (LBR_FIXUP_TSX | LBR_FIXUP_BDF93)
+#define LBR_FIXUP_LER_TO       (1u << 2)
+#define LBR_FIXUP_MASK         (LBR_FIXUP_TSX | LBR_FIXUP_LER_TO)
 
 static bool __read_mostly lbr_tsx_fixup_needed;
-static bool __read_mostly bdf93_fixup_needed;
+static bool __read_mostly ler_to_fixup_needed;
 
 static void __init lbr_tsx_fixup_check(void)
 {
@@ -2844,7 +2844,7 @@ static void __init lbr_tsx_fixup_check(v
     uint32_t lbr_format;
 
     /*
-     * HSM182, HSD172, HSE117, BDM127, BDD117, BDF85, BDE105:
+     * Haswell erratum HSM182 et al, Broadwell erratum BDM127 et al:
      *
      * On processors that do not support Intel Transactional Synchronization
      * Extensions (Intel TSX) (CPUID.07H.EBX bits 4 and 11 are both zero),
@@ -2868,8 +2868,11 @@ static void __init lbr_tsx_fixup_check(v
     case 0x45: /* HSM182 - 4th gen Core */
     case 0x46: /* HSM182, HSD172 - 4th gen Core (GT3) */
     case 0x3d: /* BDM127 - 5th gen Core */
-    case 0x47: /* BDD117 - 5th gen Core (GT3) */
-    case 0x4f: /* BDF85  - Xeon E5-2600 v4 */
+    case 0x47: /* BDD117 - 5th gen Core (GT3)
+                  BDW117 - Xeon E3-1200 v4 */
+    case 0x4f: /* BDF85  - Xeon E5-2600 v4
+                  BDH75  - Core-i7 for LGA2011-v3 Socket
+                  BDX88  - Xeon E7-x800 v4 */
     case 0x56: /* BDE105 - Xeon D-1500 */
         break;
     default:
@@ -2890,18 +2893,31 @@ static void __init lbr_tsx_fixup_check(v
         lbr_tsx_fixup_needed = true;
 }
 
-static void __init bdf93_fixup_check(void)
+static void __init ler_to_fixup_check(void)
 {
     /*
-     * Broadwell erratum BDF93:
+     * Broadwell erratum BDF93 et al:
      *
      * Reads from MSR_LER_TO_LIP (MSR 1DEH) may return values for bits[63:61]
      * that are not equal to bit[47].  Attempting to context switch this value
      * may cause a #GP.  Software should sign extend the MSR.
      */
-    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
-         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4f )
-        bdf93_fixup_needed = true;
+    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
+         boot_cpu_data.x86 != 6 )
+        return;
+
+    switch ( boot_cpu_data.x86_model )
+    {
+    case 0x3d: /* BDM131 - 5th gen Core */
+    case 0x47: /* BDD??? - 5th gen Core (H-Processor line)
+                  BDW120 - Xeon E3-1200 v4 */
+    case 0x4f: /* BDF93  - Xeon E5-2600 v4
+                  BDH80  - Core-i7 for LGA2011-v3 Socket
+                  BDX93  - Xeon E7-x800 v4 */
+    case 0x56: /* BDE??? - Xeon D-1500 */
+        ler_to_fixup_needed = true;
+        break;
+    }
 }
 
 static int is_last_branch_msr(u32 ecx)
@@ -3276,8 +3292,8 @@ static int vmx_msr_write_intercept(unsig
             v->arch.hvm.vmx.lbr_flags |= LBR_MSRS_INSERTED;
             if ( lbr_tsx_fixup_needed )
                 v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_TSX;
-            if ( bdf93_fixup_needed )
-                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_BDF93;
+            if ( ler_to_fixup_needed )
+                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_LER_TO;
         }
 
         __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
@@ -4338,7 +4354,7 @@ static void sign_extend_msr(struct vcpu
         entry->data = canonicalise_addr(entry->data);
 }
 
-static void bdf93_fixup(void)
+static void ler_to_fixup(void)
 {
     struct vcpu *curr = current;
 
@@ -4351,8 +4367,8 @@ static void lbr_fixup(void)
 
     if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_TSX )
         lbr_tsx_fixup();
-    if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_BDF93 )
-        bdf93_fixup();
+    if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_LER_TO )
+        ler_to_fixup();
 }
 
 /* Returns false if the vmentry has to be restarted */
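
The workaround described in the erratum comment above amounts to ordinary
48-bit sign extension. A minimal standalone sketch of that operation (an
illustration only, not Xen's actual canonicalise_addr()/sign_extend_msr()
implementation):

```c
/*
 * Standalone sketch of the erratum workaround: sign-extend a 48-bit
 * value so that bits 63:48 replicate bit 47, as software must do for
 * MSR_LER_TO_LIP values on affected Broadwell parts before context
 * switching them.  Illustrative only; not Xen's canonicalise_addr().
 */
#include <stdint.h>

static uint64_t sign_extend_48(uint64_t val)
{
    /* Move bit 47 into the sign bit, then arithmetic-shift it back. */
    return (uint64_t)(((int64_t)val << 16) >> 16);
}
```

A value with bit 47 set gets bits 63:48 filled with ones; a canonical
user-space address (bit 47 clear) passes through unchanged.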


From xen-devel-bounces@lists.xenproject.org Mon May 25 15:18:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 15:18:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdEsL-0004pX-BI; Mon, 25 May 2020 15:18:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdEsK-0004pS-QI
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 15:18:48 +0000
X-Inumbo-ID: 07be82e6-9e9b-11ea-aeed-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 07be82e6-9e9b-11ea-aeed-12813bfff9fa;
 Mon, 25 May 2020 15:18:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9C855AC85;
 Mon, 25 May 2020 15:18:49 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: avoid HPET use also on certain Coffee Lake H
Message-ID: <26a90632-bb76-a24b-aef1-4c068b610c6a@suse.com>
Date: Mon, 25 May 2020 17:18:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Linux commit f8edbde885bbcab6a2b4a1b5ca614e6ccb807577 says

"Coffee Lake H SoC has similar behavior as Coffee Lake, skewed HPET
 timer once the SoCs entered PC10."

Again follow this for Xen as well, noting though that even the
pre-existing PCI ID refers to an H-processor line variant (the 6-core
one). It is also suspicious that the datasheet names 0x3e10 for the
4-core variant, while the Linux commit specifies 0x3e20, which I haven't
been able to locate in any datasheet yet. To be on the safe side, add
both until clarification can be provided by Intel.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -397,10 +397,16 @@ static int64_t __init init_hpet(struct p
          * entered PC10.
          */
         if ( pci_conf_read16(PCI_SBDF(0, 0, 0, 0),
-                             PCI_VENDOR_ID) == PCI_VENDOR_ID_INTEL &&
-             pci_conf_read16(PCI_SBDF(0, 0, 0, 0),
-                             PCI_DEVICE_ID) == 0x3ec4 )
-            hpet_address = 0;
+                             PCI_VENDOR_ID) == PCI_VENDOR_ID_INTEL )
+            switch ( pci_conf_read16(PCI_SBDF(0, 0, 0, 0),
+                                     PCI_DEVICE_ID) )
+            {
+            case 0x3e10: /* as per datasheet (4 core variant) */
+            case 0x3e20: /* as per respective Linux commit */
+            case 0x3ec4:
+                hpet_address = 0;
+                break;
+            }
 
         if ( !hpet_address )
             printk("Disabling HPET for being unreliable\n");
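
The shape of the patched check can be exercised in isolation. Below is a
hypothetical standalone helper mirroring the logic above, given the vendor
and device IDs read from PCI config space at 0000:00:00.0 (it is not
Xen's init_hpet() itself; PCI_VENDOR_ID_INTEL is 0x8086):

```c
/*
 * Hypothetical mirror of the patched check: report whether the HPET
 * should be avoided, based on the host bridge's PCI vendor/device ID.
 */
#include <stdbool.h>
#include <stdint.h>

#define PCI_VENDOR_ID_INTEL 0x8086

static bool hpet_unreliable(uint16_t vendor, uint16_t device)
{
    if ( vendor != PCI_VENDOR_ID_INTEL )
        return false;

    switch ( device )
    {
    case 0x3e10: /* as per datasheet (4-core variant) */
    case 0x3e20: /* as per the respective Linux commit */
    case 0x3ec4: /* pre-existing ID (6-core variant) */
        return true;
    }

    return false;
}
```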


From xen-devel-bounces@lists.xenproject.org Mon May 25 15:23:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 15:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdEx4-0005fY-Uw; Mon, 25 May 2020 15:23:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=MLe6=7H=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdEx4-0005fT-Cn
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 15:23:42 +0000
X-Inumbo-ID: b6b4a53c-9e9b-11ea-b07b-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b6b4a53c-9e9b-11ea-b07b-bc764e2007e4;
 Mon, 25 May 2020 15:23:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5DF77B19E;
 Mon, 25 May 2020 15:23:43 +0000 (UTC)
Subject: Re: [PATCH] x86: avoid HPET use also on certain Coffee Lake H
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <26a90632-bb76-a24b-aef1-4c068b610c6a@suse.com>
Message-ID: <de2ca5b7-5fe1-27e5-b6e6-08e695258f1f@suse.com>
Date: Mon, 25 May 2020 17:23:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <26a90632-bb76-a24b-aef1-4c068b610c6a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.05.2020 17:18, Jan Beulich wrote:
> Linux commit f8edbde885bbcab6a2b4a1b5ca614e6ccb807577 says
> 
> "Coffee Lake H SoC has similar behavior as Coffee Lake, skewed HPET
>  timer once the SoCs entered PC10."
> 
> Again follow this for Xen as well, noting though that even the
> pre-existing PCI ID refers to an H-processor line variant (the 6-core
> one). It is also suspicious that the datasheet names 0x3e10 for the
> 4-core variant, while the Linux commit specifies 0x3e20, which I haven't
> been able to locate in any datasheet yet. To be on the safe side, add
> both until clarification can be provided by Intel.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'd like to note that I've been sitting on this for several months,
hoping to be able to submit it with less uncertainty. I shall further
note that I'm sitting on a similar Ice Lake patch, triggered by
seeing Linux's e0748539e3d594dd26f0d27a270f14720b22a406. The
situation seems even worse there - I can't make the datasheet and
the Linux commit match even remotely, PCI-ID-wise. I didn't think it
made sense to submit a patch in such a case.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 25 15:31:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 15:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdF4B-0006XD-O1; Mon, 25 May 2020 15:31:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pvIA=7H=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdF4A-0006X8-Ke
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 15:31:02 +0000
X-Inumbo-ID: bd2bdaba-9e9c-11ea-b07b-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd2bdaba-9e9c-11ea-b07b-bc764e2007e4;
 Mon, 25 May 2020 15:31:02 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:39840
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdF44-000KAx-JZ (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Mon, 25 May 2020 16:30:56 +0100
Subject: Re: [PATCH v2] VT-x: extend LBR Broadwell errata coverage
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c43b9d43-2e37-d2a8-ba32-dd06062a05e2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e8f3ac62-2829-dec6-a855-ac4b50816b34@citrix.com>
Date: Mon, 25 May 2020 16:30:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <c43b9d43-2e37-d2a8-ba32-dd06062a05e2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>, Jun Nakajima <jun.nakajima@intel.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 16:04, Jan Beulich wrote:
> For lbr_tsx_fixup_check() simply name a few more specific erratum
> numbers.
>
> For bdf93_fixup_check(), however, more models are affected. Oddly enough
> despite being the same model and stepping, the erratum is listed for
> Xeon E3 but not its Core counterpart. Apply the workaround uniformly,
> and also for Xeon D, which only has the LBR-from one listed in its spec
> update.
>
> Seeing this broader applicability, rename anything BDF93-related to more
> generic names.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 25 15:51:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 15:51:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdFO5-0008KU-Fm; Mon, 25 May 2020 15:51:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TCGy=7H=knorrie.org=hans@srs-us1.protection.inumbo.net>)
 id 1jdFO3-0008KP-JT
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 15:51:35 +0000
X-Inumbo-ID: 9b7a920a-9e9f-11ea-ae69-bc764e2007e4
Received: from syrinx.knorrie.org (unknown [2001:888:2177::4d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b7a920a-9e9f-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 15:51:33 +0000 (UTC)
Received: from [IPv6:2a02:a213:2b80:f000::12] (unknown
 [IPv6:2a02:a213:2b80:f000::12])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested)
 by syrinx.knorrie.org (Postfix) with ESMTPSA id 176CD6091B256
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 17:51:33 +0200 (CEST)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Hans van Kranenburg <hans@knorrie.org>
Subject: Bug: toolstack allows too low values to be set for shadow_memory
Autocrypt: addr=hans@knorrie.org; keydata=
 mQINBFo2pooBEADwTBe/lrCa78zuhVkmpvuN+pXPWHkYs0LuAgJrOsOKhxLkYXn6Pn7e3xm+
 ySfxwtFmqLUMPWujQYF0r5C6DteypL7XvkPP+FPVlQnDIifyEoKq8JZRPsAFt1S87QThYPC3
 mjfluLUKVBP21H3ZFUGjcf+hnJSN9d9MuSQmAvtJiLbRTo5DTZZvO/SuQlmafaEQteaOswme
 DKRcIYj7+FokaW9n90P8agvPZJn50MCKy1D2QZwvw0g2ZMR8yUdtsX6fHTe7Ym+tHIYM3Tsg
 2KKgt17NTxIqyttcAIaVRs4+dnQ23J98iFmVHyT+X2Jou+KpHuULES8562QltmkchA7YxZpT
 mLMZ6TPit+sIocvxFE5dGiT1FMpjM5mOVCNOP+KOup/N7jobCG15haKWtu9k0kPz+trT3NOn
 gZXecYzBmasSJro60O4bwBayG9ILHNn+v/ZLg/jv33X2MV7oYXf+ustwjXnYUqVmjZkdI/pt
 30lcNUxCANvTF861OgvZUR4WoMNK4krXtodBoEImjmT385LATGFt9HnXd1rQ4QzqyMPBk84j
 roX5NpOzNZrNJiUxj+aUQZcINtbpmvskGpJX0RsfhOh2fxfQ39ZP/0a2C59gBQuVCH6C5qsY
 rc1qTIpGdPYT+J1S2rY88AvPpr2JHZbiVqeB3jIlwVSmkYeB/QARAQABtCZIYW5zIHZhbiBL
 cmFuZW5idXJnIDxoYW5zQGtub3JyaWUub3JnPokCTgQTAQoAOBYhBOJv1o/B6NS2GUVGTueB
 VzIYDCpVBQJaNq7KAhsDBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEOeBVzIYDCpVgDMQ
 ANSQMebh0Rr6RNhfA+g9CKiCDMGWZvHvvq3BNo9TqAo9BC4neAoVciSmeZXIlN8xVALf6rF8
 lKy8L1omocMcWw7TlvZHBr2gZHKlFYYC34R2NvxS0xO8Iw5rhEU6paYaKzlrvxuXuHMVXgjj
 bM3zBiN8W4b9VW1MoynP9nvm1WaGtFI9GIyK9j6mBCU+N5hpvFtt4DBmuWjzdDkd3sWUufYd
 nQhGimWHEg95GWhQUiFvr4HRvYJpbjRRRQG3O/5Fm0YyTYZkI5CDzQIm5lhqKNqmuf2ENstS
 8KcBImlbwlzEpK9Pa3Z5MUeLZ5Ywwv+d11fyhk53aT9bipdEipvcGa6DrA0DquO4WlQR+RKU
 ywoGTgntwFu8G0+tmD8J1UE6kIzFwE5kiFWjM0rxv1tAgV9ZWqmp3sbI7vzbZXn+KI/wosHV
 iDeW5rYg+PdmnOlYXQIJO+t0KmF5zJlSe7daylKZKTYtk7w1Fq/Oh1Rps9h1C4sXN8OAUO7h
 1SAnEtehHfv52nPxwZiI6eqbvqV0uEEyLFS5pCuuwmPpC8AmOrciY2T8T+4pmkJNO2Nd3jOP
 cnJgAQrxPvD7ACp/85LParnoz5c9/nPHJB1FgbAa7N5d8ubqJgi+k9Q2lAL9vBxK67aZlFZ0
 Kd7u1w1rUlY12KlFWzxpd4TuHZJ8rwi7PUceuQINBFo2sK8BEADSZP5cKnGl2d7CHXdpAzVF
 6K4Hxwn5eHyKC1D/YvsY+otq3PnfLJeMf1hzv2OSrGaEAkGJh/9yXPOkQ+J1OxJJs9CY0fqB
 MvHZ98iTyeFAq+4CwKcnZxLiBchQJQd0dFPujtcoMkWgzp3QdzONdkK4P7+9XfryPECyCSUF
 ib2aEkuU3Ic4LYfsBqGR5hezbJqOs96ExMnYUCEAS5aeejr3xNb8NqZLPqU38SQCTLrAmPAX
 glKVnYyEVxFUV8EXXY6AK31lRzpCqmPxLoyhPAPda9BXchRluy+QOyg+Yn4Q2DSwbgCYPrxo
 HTZKxH+E+JxCMfSW35ZE5ufvAbY3IrfHIhbNnHyxbTRgYMDbTQCDyN9F2Rvx3EButRMApj+v
 OuaMBJF/fWfxL3pSIosG9Q7uPc+qJvVMHMRNnS0Y1QQ5ZPLG0zI5TeHzMnGmSTbcvn/NOxDe
 6EhumcclFS0foHR78l1uOhUItya/48WCJE3FvOS3+KBhYvXCsG84KVsJeen+ieX/8lnSn0d2
 ZvUsj+6wo+d8tcOAP+KGwJ+ElOilqW29QfV4qvqmxnWjDYQWzxU9WGagU3z0diN97zMEO4D8
 SfUu72S5O0o9ATgid9lEzMKdagXP94x5CRvBydWu1E5CTgKZ3YZv+U3QclOG5p9/4+QNbhqH
 W4SaIIg90CFMiwARAQABiQRsBBgBCgAgFiEE4m/Wj8Ho1LYZRUZO54FXMhgMKlUFAlo2sK8C
 GwICQAkQ54FXMhgMKlXBdCAEGQEKAB0WIQRJbJ13A1ob3rfuShiywd9yY2FfbAUCWjawrwAK
 CRCywd9yY2FfbMKbEACIGLdFrD5j8rz/1fm8xWTJlOb3+o5A6fdJ2eyPwr5njJZSG9i5R28c
 dMmcwLtVisfedBUYLaMBmCEHnj7ylOgJi60HE74ZySX055hKECNfmA9Q7eidxta5WeXeTPSb
 PwTQkAgUZ576AO129MKKP4jkEiNENePMuYugCuW7XGR+FCEC2efYlVwDQy24ZfR9Q1dNK2ny
 0gH1c+313l0JcNTKjQ0e7M9KsQSKUr6Tk0VGTFZE2dp+dJF1sxtWhJ6Ci7N1yyj3buFFpD9c
 kj5YQFqBkEwt3OGtYNuLfdwR4d47CEGdQSm52n91n/AKdhRDG5xvvADG0qLGBXdWvbdQFllm
 v47TlJRDc9LmwpIqgtaUGTVjtkhw0SdiwJX+BjhtWTtrQPbseDe2pN3gWte/dPidJWnj8zzS
 ggZ5otY2reSvM+79w/odUlmtaFx+IyFITuFnBVcMF0uGmQBBxssew8rePQejYQHz0bZUDNbD
 VaZiXqP4njzBJu5+nzNxQKzQJ0VDF6ve5K49y0RpT4IjNOupZ+OtlZTQyM7moag+Y6bcJ7KK
 8+MRdRjGFFWP6H/RCSFAfoOGIKTlZHubjgetyQhMwKJQ5KnGDm+XUkeIWyevPfCVPNvqF2q3
 viQm0taFit8L+x7ATpolZuSCat5PSXtgx1liGjBpPKnERxyNLQ/erRNcEACwEJliFbQm+c2i
 6ccpx2cdtyAI1yzWuE0nr9DqpsEbIZzTCIVyry/VZgdJ27YijGJWesj/ie/8PtpDu0Cf1pty
 QOKSpC9WvRCFGJPGS8MmvzepmX2DYQ5MSKTO5tRJZ8EwCFfd9OxX2g280rdcDyCFkY3BYrf9
 ic2PTKQokx+9sLCHAC/+feSx/MA/vYpY1EJwkAr37mP7Q8KA9PCRShJziiljh5tKQeIG4sz1
 QjOrS8WryEwI160jKBBNc/M5n2kiIPCrapBGsL58MumrtbL53VimFOAJaPaRWNSdWCJSnVSv
 kCHMl/1fRgzXEMpEmOlBEY0Kdd1Ut3S2cuwejzI+WbrQLgeps2N70Ztq50PkfWkj0jeethhI
 FqIJzNlUqVkHl1zCWSFsghxiMyZmqULaGcSDItYQ+3c9fxIO/v0zDg7bLeG9Zbj4y8E47xqJ
 6brtAAEJ1RIM42gzF5GW71BqZrbFFoI0C6AzgHjaQP1xfj7nBRSBz4ObqnsuvRr7H6Jme5rl
 eg7COIbm8R7zsFjF4tC6k5HMc1tZ8xX+WoDsurqeQuBOg7rggmhJEpDK2f+g8DsvKtP14Vs0
 Sn7fVJi87b5HZojry1lZB2pXUH90+GWPF7DabimBki4QLzmyJ/ENH8GspFulVR3U7r3YYQ5K
 ctOSoRq9pGmMi231Q+xx9LkCDQRaOtArARAA50ylThKbq0ACHyomxjQ6nFNxa9ICp6byU9Lh
 hKOax0GB6l4WebMsQLhVGRQ8H7DT84E7QLRYsidEbneB1ciToZkL5YFFaVxY0Hj1wKxCFcVo
 CRNtOfoPnHQ5m/eDLaO4o0KKL/kaxZwTn2jnl6BQDGX1Aak0u4KiUlFtoWn/E/NIv5QbTGSw
 IYuzWqqYBIzFtDbiQRvGw0NuKxAGMhwXy8VP05mmNwRdyh/CC4rWQPBTvTeMwr3nl8/G+16/
 cn4RNGhDiGTTXcX03qzZ5jZ5N7GLY5JtE6pTpLG+EXn5pAnQ7MvuO19cCbp6Dj8fXRmI0SVX
 WKSo0A2C8xH6KLCRfUMzD7nvDRU+bAHQmbi5cZBODBZ5yp5CfIL1KUCSoiGOMpMin3FrarIl
 cxhNtoE+ya23A+JVtOwtM53ESra9cJL4WPkyk/E3OvNDmh8U6iZXn4ZaKQTHaxN9yvmAUhZQ
 iQi/sABwxCcQQ2ydRb86Vjcbx+FUr5OoEyQS46gc3KN5yax9D3H9wrptOzkNNMUhFj0oK0fX
 /MYDWOFeuNBTYk1uFRJDmHAOp01rrMHRogQAkMBuJDMrMHfolivZw8RKfdPzgiI500okLTzH
 C0wgSSAOyHKGZjYjbEwmxsl3sLJck9IPOKvqQi1DkvpOPFSUeX3LPBIav5UUlXt0wjbzInUA
 EQEAAYkCNgQYAQoAIBYhBOJv1o/B6NS2GUVGTueBVzIYDCpVBQJaOtArAhsMAAoJEOeBVzIY
 DCpV4kgP+wUh3BDRhuKaZyianKroStgr+LM8FIUwQs3Fc8qKrcDaa35vdT9cocDZjkaGHprp
 mlN0OuT2PB+Djt7am2noV6Kv1C8EnCPpyDBCwa7DntGdGcGMjH9w6aR4/ruNRUGS1aSMw8sR
 QgpTVWEyzHlnIH92D+k+IhdNG+eJ6o1fc7MeC0gUwMt27Im+TxVxc0JRfniNk8PUAg4kvJq7
 z7NLBUcJsIh3hM0WHQH9AYe/mZhQq5oyZTsz4jo/dWFRSlpY7zrDS2TZNYt4cCfZj1bIdpbf
 SpRi9M3W/yBF2WOkwYgbkqGnTUvr+3r0LMCH2H7nzENrYxNY2kFmDX9bBvOWsWpcMdOEo99/
 Iayz5/q2d1rVjYVFRm5U9hG+C7BYvtUOnUvSEBeE4tnJBMakbJPYxWe61yANDQubPsINB10i
 ngzsm553yqEjLTuWOjzdHLpE4lzD416ExCoZy7RLEHNhM1YQSI2RNs8umlDfZM9Lek1+1kgB
 vT3RH0/CpPJgveWV5xDOKuhD8j5l7FME+t2RWP+gyLid6dE0C7J03ir90PlTEkMEHEzyJMPt
 OhO05Phy+d51WPTo1VSKxhL4bsWddHLfQoXW8RQ388Q69JG4m+JhNH/XvWe3aQFpYP+GZuzO
 hkMez0lHCaVOOLBSKHkAHh9i0/pH+/3hfEa4NsoHCpyy
Message-ID: <37137142-1e34-0f78-c950-91bcd6543eb8@knorrie.org>
Date: Mon, 25 May 2020 17:51:32 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.4.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This bug report is a follow-up to the thread "Domu windows 2012 crash"
on the xen-users list. In that thread we found out that it is possible
to set a value for shadow_memory that is lower than a safe minimum.

This became apparent after XSA-304, which caused more of this type of
memory to be used. Having a hardcoded line like shadow_memory = 8
results in random crashes of the guest, with the following errors in
xl dmesg on the host:

(XEN) Failed to shatter gfn 105245: -12
(XEN) d75v1 EPT violation 0x19c (--x/rw-) gpa 0x00000105245760 mfn
0x285245 type 0
(XEN) d75v1 Walking EPT tables for GFN 105245:
(XEN) d75v1  epte 9c000004105f9007
(XEN) d75v1  epte 9c000002800000f3
(XEN) d75v1  --- GLA 0x7ff98b40d760
(XEN) domain_crash called from vmx.c:3497
(XEN) Domain 75 (vcpu#1) crashed on cpu#4:
(XEN) ----[ Xen-4.11.4-pre  x86_64  debug=n   Not tainted ]----
(XEN) CPU:    4
(XEN) RIP:    0033:[<00007ff98b40d760>]
(XEN) RFLAGS: 0000000000010216   CONTEXT: hvm guest (d75v1)
(XEN) rax: 0000000000001212   rbx: 000000c714d9da58   rcx: 0000023500001590
(XEN) rdx: 000000c700000001   rsi: 000000c714d9da18   rdi: 000000c714d9db20
(XEN) rbp: 000000c714d9d950   rsp: 000000c714d9d918   r8:  0000023500001470
(XEN) r9:  00000235000014f0   r10: 00007ff99c5c0923   r11: 000000c714d9d970
(XEN) r12: 0000000000000000   r13: 000000c714d9d9d0   r14: 000000c714d9da58
(XEN) r15: 0000000000000006   cr0: 0000000080050031   cr4: 0000000000060678
(XEN) cr3: 00000001d9458002   cr2: 00007ff98b6fa048
(XEN) fsb: 0000000000000000   gsb: 000000c714e9e000   gss: ffffcd015dc40000
(XEN) ds: 002b   es: 002b   fs: 0053   gs: 002b   ss: 002b   cs: 0033

Why do users have this shadow_memory line in their guest config file?
Well, I guess it's simply because there are not many proper examples of
how to run Windows under Xen, so whoever searches the web for it will
end up on a page like...


https://www.howtoforge.com/how-to-run-fully-virtualized-guests-hvm-with-xen-3.2-on-debian-lenny-x86_64

...and then starts copy-pasting and adjusting until things work. Or they
already have such a config from 10+ years ago and only adjust things
when it breaks after upgrading Xen.
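
For reference, a minimal guest config sketch (hypothetical, illustrative
values in xl syntax) showing the line in question; omitting shadow_memory
entirely lets the toolstack compute its own default rather than pinning
it to an unsafe value:

```
# Hypothetical xl HVM guest config fragment -- values are illustrative
type = "hvm"
memory = 4096
vcpus = 2
# shadow_memory = 8   # too low; after XSA-304 this leads to the crashes
#                     # shown above.  Omit the line so the toolstack
#                     # computes a safe default instead.
```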

Hans van Kranenburg


From xen-devel-bounces@lists.xenproject.org Mon May 25 16:08:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 16:08:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdFeG-0001UA-Sp; Mon, 25 May 2020 16:08:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdFeF-0001U5-3M
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 16:08:19 +0000
X-Inumbo-ID: f1fefe5c-9ea1-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1fefe5c-9ea1-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 16:08:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vRhnUOBLBuTxpl6ZrPE6u2mJQ271KicF3fvF1IHVnPw=; b=PE2/g57BFou7MO6ArMrxddaL0
 5cXxKLmnvTuJ5wUCsdCA9wBTBz0Pnkvbb3ape4nvFMOSLE46E+MDvrfiBmfNScabGRQ6UshgjFRLK
 oTTWELXjTtILw9U5dew8xVewt1iGlVUsP5jP1rkfeXToyJrBKrMxn8a9VFROKGuHJ29ws=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdFeD-0007Co-4m; Mon, 25 May 2020 16:08:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdFeC-0007l4-PC; Mon, 25 May 2020 16:08:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdFeC-0001HP-OR; Mon, 25 May 2020 16:08:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150359-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150359: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=f718709431429fbb4e1fc6781f3a3752a7f43f70
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 16:08:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150359 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150359/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f718709431429fbb4e1fc6781f3a3752a7f43f70
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  129 days
Failing since        146211  2020-01-18 04:18:52 Z  128 days  119 attempts
Testing same since   150339  2020-05-23 04:19:51 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19289 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 25 16:21:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 16:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdFqi-00039A-9P; Mon, 25 May 2020 16:21:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=g7v1=7H=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdFqh-000395-1s
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 16:21:11 +0000
X-Inumbo-ID: ba31e596-9ea3-11ea-ae69-bc764e2007e4
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba31e596-9ea3-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 16:21:03 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id E07E2122804;
 Mon, 25 May 2020 18:04:01 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590422641;
 bh=BNe2R8tifla9iLYf7B+jPurPruxQbGfizIkZY85wFFQ=;
 h=Date:From:To:Cc:Subject:From;
 b=nJDNlPUTcplLpcXGA1OC2kJC3ykorw2KC32zRXopyDAAongcoYRPSOE3Bzd80aUG9
 +wCuKbPx+NljUZPxapYylXrc/h6neHckG2N7y1bnncIOHRvaNVzc1nLitX2NXvmfrI
 dy7DiYnSMM4j2em88jDei3j8xQ+80YE1A1Np8sL4kbWN+rwVh5zs57cZxvY07GAJoU
 CVMuzrROpBSyr9rukyFJX81KFdLq0hvAPpy3CAZawPDKsFCFFkFuaEvcf+Ud3uzAIF
 jq3rCgwUyDSM2C+x7EkJunx32dYsbYOZFgAhrCX9vBAu38J4+esTMG/Pwb/XJRDYYQ
 tOntArvN7qIyA==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id B515E268436E; Mon, 25 May 2020 18:04:01 +0200 (CEST)
Date: Mon, 25 May 2020 18:04:01 +0200
From: Martin Lucina <martin@lucina.net>
To: xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
Subject: Xen PVH domU start-of-day VCPU state
Message-ID: <20200525160401.GA3091@nodbug.lucina.net>
Mail-Followup-To: xen-devel@lists.xenproject.org,
 mirageos-devel@lists.xenproject.org, anil@recoil.org,
 dave@recoil.org
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: dave@recoil.org, anil@recoil.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

I'm trying to bootstrap a new PVH-only Xen domU OS "from scratch", to
replace our existing use of Mini-OS for the early boot/low-level support
layer in MirageOS. I've done this by creating new Xen bindings for Solo5
[1], basing them on our existing virtio code [2].

Unfortunately, I can't seem to get past the first few instructions on VCPU
boot. Here's what I have at the moment (abridged):

    .section .note.solo5.xen

            .align  4
            .long   4
            .long   4
            .long   XEN_ELFNOTE_PHYS32_ENTRY
            .ascii "Xen\0"
            .long   _start32

    /* ... */

    .code32

    ENTRY(_start32)
            cld

            lgdt (gdt64_ptr)
            ljmp $0x10, $1f

    1:      movl $0x18, %eax
            movl %eax, %ds
            movl %eax, %es
            movl %eax, %ss

            xorl %eax, %eax
            movl %eax, %fs
            movl %eax, %gs

I have verified, via xl -v create -c ..., that the domain builder appears
to be doing the right thing, and is interpreting the ELF NOTE correctly.
However, for some reason I cannot fathom, I get a triple fault on the ljmp
following the lgdt instruction above:

    (XEN) d31v0 Triple fault - invoking HVM shutdown action 1
    (XEN) *** Dumping Dom31 vcpu#0 state: ***
    (XEN) ----[ Xen-4.11.4-pre  x86_64  debug=n   Not tainted ]----
    (XEN) CPU:    0
    (XEN) RIP:    0000:[<0000000000100028>]
    (XEN) RFLAGS: 0000000000010002   CONTEXT: hvm guest (d31v0)
    (XEN) rax: 0000000000000000   rbx: 0000000000116000   rcx: 0000000000000000
    (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
    (XEN) rbp: 0000000000000000   rsp: 0000000000000000   r8:  0000000000000000
    (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
    (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
    (XEN) r15: 0000000000000000   cr0: 0000000000000011   cr4: 0000000000000000
    (XEN) cr3: 0000000000000000   cr2: 0000000000000000
    (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
    (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0000

Cross-checking 0x100028 via gdb:

    Dump of assembler code for function _start32:
       0x00100020 <+0>:	cld
       0x00100021 <+1>:	lgdtl  0x108040
       0x00100028 <+8>:	ljmp   $0x10,$0x10002f
       0x0010002f <+15>:	mov    $0x18,%eax

I've spent a couple of days trying various things and cross-checking both
with the Mini-OS PVH/HVM startup [3] and the Intel SDM, but no joy. I've
also re-checked the GDT selector values used by the original virtio code
which this is based on, and they appear to be fine.

This is not helped by the fact that the Xen domU PVH start-of-day VCPU
state does not seem to be documented anywhere, with the exception of
"struct hvm_start_info is passed in %ebx" as stated in
arch-x86/hvm/start_info.h.
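
For what it's worth, that one documented piece of state can at least be
sanity-checked very early in the entry path. A sketch only (assuming the
layout in arch-x86/hvm/start_info.h, where magic is the first 32-bit field
and reads 0x336ec578, "xEn3"; `.Lbad_start_info` is an illustrative label):

```asm
    /* Sketch: %ebx holds the struct hvm_start_info pointer at entry
     * (per arch-x86/hvm/start_info.h); verify the magic before
     * trusting anything else in the structure. */
    movl    0(%ebx), %eax
    cmpl    $0x336ec578, %eax       /* XEN_HVM_START_MAGIC_VALUE */
    jne     .Lbad_start_info
```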

In case it's relevant, I'm testing with Xen 4.11.4 as shipped with Debian
10, on an Intel Broadwell CPU.

Any ideas? Any help much appreciated.

Thanks,

-mato

[1] https://github.com/mato/solo5/tree/xen/bindings/xen / https://github.com/mato/solo5/commit/f2539d588883a2e8854998c75bdea9b10f113ed6
[2] https://github.com/mato/solo5/tree/xen/bindings/virtio
[3] https://xenbits.xen.org/gitweb/?p=mini-os.git;a=blob;f=arch/x86/x86_hvm.S;h=6e8ad983a16adbe97b343f7dbc17e281ee0c389f;hb=HEAD



From xen-devel-bounces@lists.xenproject.org Mon May 25 16:43:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 16:43:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdGBw-00059P-WB; Mon, 25 May 2020 16:43:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eGcw=7H=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jdGBv-00058h-Ca
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 16:43:07 +0000
X-Inumbo-ID: c66a1128-9ea6-11ea-aef8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c66a1128-9ea6-11ea-aef8-12813bfff9fa;
 Mon, 25 May 2020 16:42:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id D7BD3AC51;
 Mon, 25 May 2020 16:42:53 +0000 (UTC)
Subject: Re: Xen PVH domU start-of-day VCPU state
To: xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
 anil@recoil.org, dave@recoil.org
References: <20200525160401.GA3091@nodbug.lucina.net>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
Date: Mon, 25 May 2020 18:42:50 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200525160401.GA3091@nodbug.lucina.net>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.05.20 18:04, Martin Lucina wrote:
> Hi,
> 
> I'm trying to bootstrap a new PVH-only Xen domU OS "from scratch", to
> replace our existing use of Mini-OS for the early boot/low-level support
> layer in MirageOS. I've done this by creating new Xen bindings for Solo5
> [1], basing them on our existing virtio code [2].
> 
> Unfortunately, I can't seem to get past the first few instructions on VCPU
> boot. Here's what I have at the moment (abridged):
> 
>      .section .note.solo5.xen
> 
>              .align  4
>              .long   4
>              .long   4
>              .long   XEN_ELFNOTE_PHYS32_ENTRY
>              .ascii "Xen\0"
>              .long   _start32
> 
>      /* ... */
> 
>      .code32
> 
>      ENTRY(_start32)
>              cld
> 
>              lgdt (gdt64_ptr)
>              ljmp $0x10, $1f

You need to set up virtual addressing and enable 64-bit mode before using
a 64-bit GDT.

See Mini-OS source arch/x86/x86_hvm.S


Juergen


From xen-devel-bounces@lists.xenproject.org Mon May 25 16:58:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 16:58:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdGQI-0006DJ-Az; Mon, 25 May 2020 16:57:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdGQG-0006DE-TI
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 16:57:56 +0000
X-Inumbo-ID: de716ec2-9ea8-11ea-aef9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de716ec2-9ea8-11ea-aef9-12813bfff9fa;
 Mon, 25 May 2020 16:57:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=c2/IQ9ZG/AWZehEem14ySvlZN+UCiKjw8uDxulxmCL0=; b=OaeIKEDaubRQHfhGhqz69btN+
 DqN3UCoLMRDG1/+hQh+hlcRWxnXjC77oF7nBozJCAvwMlz16RurlTnPldkuY1G/B/hKtBiL7ZnP0r
 my7pVZDwfMze0A4TLlQY8WZwd2kWHgg3+wvsLzpXIVaKhWWNmiL8zeOn4mDspZjoZ+FWg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdGQB-0008D9-1R; Mon, 25 May 2020 16:57:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdGQA-0002hr-PB; Mon, 25 May 2020 16:57:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdGQA-0007wx-OG; Mon, 25 May 2020 16:57:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150363-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150363: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=b4d01ede23847bed9471ca0b7071394aef693a1a
X-Osstest-Versions-That: xen=437b0aa06a014ce174e24c0d3530b3e9ab19b18b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 16:57:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150363 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150363/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b4d01ede23847bed9471ca0b7071394aef693a1a
baseline version:
 xen                  437b0aa06a014ce174e24c0d3530b3e9ab19b18b

Last test of basis   150354  2020-05-24 15:01:15 Z    1 days
Testing same since   150363  2020-05-25 07:00:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   437b0aa06a..b4d01ede23  b4d01ede23847bed9471ca0b7071394aef693a1a -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 25 16:59:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 16:59:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdGS2-0006Jn-Pq; Mon, 25 May 2020 16:59:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pvIA=7H=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdGS1-0006JW-V8
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 16:59:45 +0000
X-Inumbo-ID: 1f3be284-9ea9-11ea-aef9-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f3be284-9ea9-11ea-aef9-12813bfff9fa;
 Mon, 25 May 2020 16:59:40 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:42506
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdGRu-0000n7-M1 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Mon, 25 May 2020 17:59:38 +0100
Subject: Re: Xen PVH domU start-of-day VCPU state
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
 anil@recoil.org, dave@recoil.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
Date: Mon, 25 May 2020 17:59:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 17:42, Jürgen Groß wrote:
> On 25.05.20 18:04, Martin Lucina wrote:
>> Hi,
>>
>> I'm trying to bootstrap a new PVH-only Xen domU OS "from scratch", to
>> replace our existing use of Mini-OS for the early boot/low-level support
>> layer in MirageOS. I've done this by creating new Xen bindings for Solo5
>> [1], basing them on our existing virtio code [2].
>>
>> Unfortunately, I can't seem to get past the first few instructions on
>> VCPU
>> boot. Here's what I have at the moment (abridged):
>>
>>      .section .note.solo5.xen
>>
>>              .align  4
>>              .long   4
>>              .long   4
>>              .long   XEN_ELFNOTE_PHYS32_ENTRY
>>              .ascii "Xen\0"
>>              .long   _start32
>>
>>      /* ... */
>>
>>      .code32
>>
>>      ENTRY(_start32)
>>              cld
>>
>>              lgdt (gdt64_ptr)
>>              ljmp $0x10, $1f
>
> You need to set up virtual addressing and enable 64-bit mode before using
> a 64-bit GDT.
>
> See Mini-OS source arch/x86/x86_hvm.S

Or
https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD

But yes - Juergen is correct.  Until you have enabled long mode, lgdt
will only load the bottom 32 bits of GDTR.base.
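
Roughly, the sequence needed before that ljmp looks like the following.
A sketch only, assuming identity-mapped boot page tables; `gdt32_ptr`,
`boot_pml4`, `entry64` and the selector value $0x08 are illustrative
names, not taken from the code above:

```asm
    .code32
sketch_start32:
    lgdt    (gdt32_ptr)             /* loading the GDT itself is fine here */

    movl    %cr4, %eax
    orl     $0x20, %eax             /* CR4.PAE, required for long mode */
    movl    %eax, %cr4

    movl    $boot_pml4, %eax        /* identity-mapping page table root */
    movl    %eax, %cr3

    movl    $0xc0000080, %ecx       /* IA32_EFER MSR */
    rdmsr
    orl     $0x100, %eax            /* EFER.LME */
    wrmsr

    movl    %cr0, %eax
    orl     $0x80000000, %eax       /* CR0.PG; CR0.PE is already set */
    movl    %eax, %cr0              /* long mode is now active */

    ljmp    $0x08, $entry64         /* far jump to a 64-bit code selector */

    .code64
entry64:
    /* reload data selectors, set up a stack, ... */
```

Only after the wrmsr/CR0.PG step is a far jump through a descriptor with
the L bit set legal; attempting it from legacy protected mode faults, and
with no IDT installed that escalates straight to a triple fault.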

Is there a less abridged version to look at?

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 25 17:31:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 17:31:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdGvw-0001GG-A2; Mon, 25 May 2020 17:30:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pvIA=7H=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdGvv-0001GB-JN
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 17:30:39 +0000
X-Inumbo-ID: 7327a302-9ead-11ea-af01-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7327a302-9ead-11ea-af01-12813bfff9fa;
 Mon, 25 May 2020 17:30:39 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:43380
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdGvs-000HbU-M1 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Mon, 25 May 2020 18:30:36 +0100
Subject: Re: [PATCH] x86: avoid HPET use also on certain Coffee Lake H
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <26a90632-bb76-a24b-aef1-4c068b610c6a@suse.com>
 <de2ca5b7-5fe1-27e5-b6e6-08e695258f1f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5f5483cc-a8e8-51a3-6c47-5b061ff97108@citrix.com>
Date: Mon, 25 May 2020 18:30:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <de2ca5b7-5fe1-27e5-b6e6-08e695258f1f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 16:23, Jan Beulich wrote:
> On 25.05.2020 17:18, Jan Beulich wrote:
>> Linux commit f8edbde885bbcab6a2b4a1b5ca614e6ccb807577 says
>>
>> "Coffee Lake H SoC has similar behavior as Coffee Lake, skewed HPET
>>  timer once the SoCs entered PC10."
>>
>> Again follow this for Xen as well, noting though that even the
>> pre-existing PCI ID refers to an H-processor line variant (the 6-core
>> one). It is also suspicious that the datasheet names 0x3e10 for the
>> 4-core variant, while the Linux commit specifies 0x3e20, which I haven't
>> been able to locate in any datasheet yet.

3e20 is the host bridge ID for CFL-R (Gen 9) Core i9 (8c/16t) as found
in the Dell XPS 15 7590 amongst other things.

As such, it is a generation later than CFL.

>>  To be on the safe side, add
>> both until clarification can be provided by Intel.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Given the nature of the issue (a power-efficiency "feature" rather than a
bug), it will likely affect everything in a couple of generations worth
of CPUs.

The issue may not actually affect Xen yet, because I don't expect we've
got S0ix working yet.  It is only a problem on entry to S0i2/3 where the
HPET is halted.

> I'd like to note that I've been sitting on this for several months,
> hoping to be able to submit with less uncertainty. I shall further
> note that I'm sitting on a similar Ice Lake patch, triggered by
> seeing Linux'es e0748539e3d594dd26f0d27a270f14720b22a406. The
> situation seems even worse there - I can't make datasheet and
> Linux commit match even remotely, PCI-ID-wise. I didn't think it
> makes sense to submit a patch in such a case.

0x8a12 is an ID in the middle of a load of Ice Lake IDs, according to
the PCI-ID database, but there isn't an entry specifically for it.

I can't find a datasheet for it either.
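The kind of PCI-ID quirk matching being discussed can be sketched as follows. This is illustrative only, not Xen's actual table or code; the IDs are the CFL host-bridge IDs from the thread (0x3e10 from the datasheet, 0x3e20 from the Linux commit), and the function name is hypothetical.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

struct pci_id { uint16_t vendor, device; };

/* Host bridges whose SoC skews the HPET once PC10 is entered. */
static const struct pci_id hpet_skew_ids[] = {
    { 0x8086, 0x3e10 },  /* CFL-H 4-core, per the datasheet */
    { 0x8086, 0x3e20 },  /* CFL-R 8-core, per the Linux commit */
};

/* Return true if this host bridge is on the quirk list. */
static bool needs_hpet_quirk(uint16_t vendor, uint16_t device)
{
    for ( size_t i = 0;
          i < sizeof(hpet_skew_ids) / sizeof(hpet_skew_ids[0]); i++ )
        if ( hpet_skew_ids[i].vendor == vendor &&
             hpet_skew_ids[i].device == device )
            return true;
    return false;
}
```

Adding both IDs, as the patch does, means the quirk fires whichever ID turns out to be the real one.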

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 25 17:41:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 17:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdH6D-0002DT-9t; Mon, 25 May 2020 17:41:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pvIA=7H=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdH6B-0002DO-Hk
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 17:41:15 +0000
X-Inumbo-ID: ed4dd754-9eae-11ea-b9cf-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed4dd754-9eae-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 17:41:13 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:43642
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdH68-000Mfb-Jg (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Mon, 25 May 2020 18:41:12 +0100
Subject: Re: Xen PVH domU start-of-day VCPU state
To: xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
 anil@recoil.org, dave@recoil.org, martin@lucina.net
References: <20200525160401.GA3091@nodbug.lucina.net>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6fadfd84-0fc4-d462-a917-1c88ec0822b8@citrix.com>
Date: Mon, 25 May 2020 18:41:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200525160401.GA3091@nodbug.lucina.net>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 17:04, Martin Lucina wrote:
> Hi,
>
> I'm trying to bootstrap a new PVH-only Xen domU OS "from scratch", to
> replace our existing use of Mini-OS for the early boot/low-level support
> layer in MirageOS. I've done this by creating new Xen bindings for Solo5
> [1], basing them on our existing virtio code [2].
>
> Unfortunately, I can't seem to get past the first few instructions on VCPU
> boot. Here's what I have at the moment (abridged):
>
>     .section .note.solo5.xen
>
>             .align  4
>             .long   4
>             .long   4
>             .long   XEN_ELFNOTE_PHYS32_ENTRY
>             .ascii "Xen\0"
>             .long   _start32
>
>     /* ... */
>
>     .code32
>
>     ENTRY(_start32)
>             cld
>
>             lgdt (gdt64_ptr)
>             ljmp $0x10, $1f
>
>     1:      movl $0x18, %eax
>             movl %eax, %ds
>             movl %eax, %es
>             movl %eax, %ss
>
>             xorl %eax, %eax
>             movl %eax, %fs
>             movl %eax, %gs
>
> I have verified, via xl -v create -c ..., that the domain builder appears
> to be doing the right thing, and is interpreting the ELF NOTE correctly.
> However, for some reason I cannot fathom, I get a triple fault on the ljmp
> following the lgdt instruction above:
>
>     (XEN) d31v0 Triple fault - invoking HVM shutdown action 1
>     (XEN) *** Dumping Dom31 vcpu#0 state: ***
>     (XEN) ----[ Xen-4.11.4-pre  x86_64  debug=n   Not tainted ]----
>     (XEN) CPU:    0
>     (XEN) RIP:    0000:[<0000000000100028>]
>     (XEN) RFLAGS: 0000000000010002   CONTEXT: hvm guest (d31v0)
>     (XEN) rax: 0000000000000000   rbx: 0000000000116000   rcx: 0000000000000000
>     (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
>     (XEN) rbp: 0000000000000000   rsp: 0000000000000000   r8:  0000000000000000
>     (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
>     (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
>     (XEN) r15: 0000000000000000   cr0: 0000000000000011   cr4: 0000000000000000
>     (XEN) cr3: 0000000000000000   cr2: 0000000000000000
>     (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
>     (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0000

For extra help debugging this, you can dump the vmcs here:

andrewcoop@andrewcoop:/local/xen.git/xen$ git d
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 74c9f84462..8ae23545ae 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1687,6 +1687,7 @@ void hvm_triple_fault(void)
             "Triple fault - invoking HVM shutdown action %d\n",
             reason);
     vcpu_show_execution_state(v);
+    vmcs_dump_vcpu(v);
     domain_shutdown(d, reason);
 }
 

which will include the segment cache, including the just loaded GDT details.

> Cross-checking 0x100028 via gdb:
>
>     Dump of assembler code for function _start32:
>        0x00100020 <+0>:	cld
>        0x00100021 <+1>:	lgdtl  0x108040
>        0x00100028 <+8>:	ljmp   $0x10,$0x10002f
>        0x0010002f <+15>:	mov    $0x18,%eax
>
> I've spent a couple of days trying various things and cross-checking both
> with the Mini-OS PVH/HVM startup [3] and the Intel SDM, but no joy. I've
> also re-checked the GDT selector values used by the original virtio code
> which this is based on, and they appear to be fine.
>
> This is not helped by the fact that the Xen domU PVH start-of-day VCPU
> state does not seem to be documented anywhere, with the exception of
> "struct hvm_start_info is passed in %ebx" as stated in
> arch-x86/hvm/start_info.h.

https://xenbits.xen.org/docs/unstable/misc/pvh.html

The starting state is described there.  It is 32bit flat mode, very
similar to multiboot's entry.

> In case it's relevant, I'm testing with Xen 4.11.4 as shipped with Debian
> 10, on an Intel Broadwell CPU.
>
> Any ideas?

Sadly no.

From
https://github.com/mato/solo5/commit/f2539d588883a2e8854998c75bdea9b10f113ed6

all data looks to be linked below the 4G boundary, so the 32/64bitness
of lgdt shouldn't matter in this case.

Reordering the logic as per MiniOS/XTF will avoid the need for a 32bit
CS selector - it is safe to run on the ABI-provided %cs until you switch
into 64bit mode.

It might also be interesting to see exactly what value is in gdt64_ptr,
just to check that the base and limit are set sensibly.
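For reference, the operand of lgdt is a pseudo-descriptor: a 16-bit limit (the size of the GDT in bytes, minus one) followed by the linear base address (4 bytes when executed from 32-bit code, as in _start32 above; 8 bytes in 64-bit mode). A sketch of what a sane gdt64_ptr should contain, with hypothetical names and a 4-entry GDT:

```c
#include <stdint.h>

/* 32-bit form of the lgdt operand: 16-bit limit, then 32-bit base. */
struct __attribute__((packed)) gdt_ptr32 {
    uint16_t limit;   /* sizeof(gdt) - 1, NOT sizeof(gdt) */
    uint32_t base;    /* linear address of the first descriptor */
};

/* Hypothetical 4-entry GDT: null, 64-bit code, 32-bit code, data. */
static uint64_t gdt[4];

static struct gdt_ptr32 make_gdt_ptr(void)
{
    struct gdt_ptr32 p = {
        .limit = sizeof(gdt) - 1,            /* 4 * 8 - 1 = 31 */
        .base  = (uint32_t)(uintptr_t)gdt,
    };
    return p;
}
```

An off-by-one limit (sizeof rather than sizeof - 1) or a base that isn't the GDT's linear address are the usual ways this goes wrong.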

~Andrew


From xen-devel-bounces@lists.xenproject.org Mon May 25 19:03:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 19:03:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdIMq-0000jk-Dj; Mon, 25 May 2020 19:02:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=775J=7H=zohomail.eu=elliotkillick@srs-us1.protection.inumbo.net>)
 id 1jdIMo-0000jf-Uc
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 19:02:31 +0000
X-Inumbo-ID: 4769079e-9eba-11ea-9887-bc764e2007e4
Received: from sender11-pp-o93.zoho.eu (unknown [31.186.226.251])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4769079e-9eba-11ea-9887-bc764e2007e4;
 Mon, 25 May 2020 19:02:29 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1590433346; cv=none; d=zohomail.eu; s=zohoarc; 
 b=d2kW6MvdHWurCUa2wY2E1e9zGwAc9wlYX8l+/XpzWYA45rGZZdnB3w4ZZbKdlYQHrz6NMKjsSQ6St3mRf4u4P/TxTdwM6m3ip1sj3U4TRELnkHoYJpoRvgItAjOO9/JkAaPHCJ593Xzm483O0e2aeuAE8LyJntpsrcW9ICx862o=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.eu;
 s=zohoarc; t=1590433346;
 h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To;
 bh=zstc/sqbQ9WVCJVASP9+rIVmqfSTJUm3XzhTxVIi8+Q=; 
 b=SM/gweNYX2J0P3gC3Hi1x/rsQ6zC+J0MdMhX32vPLP5QJCchksx2AwaYTd/2P63h57h8yyDYxmxcuuaT6cSfBYGAFqKkKYbhdJlba/1DkOuuRLfF5CTLCcmyn+pJzn5EyurAFpNZn0AxDVVK9WFeggWS5cAvzu7Ku4X2BtMqlNc=
ARC-Authentication-Results: i=1; mx.zohomail.eu;
 dkim=pass  header.i=zohomail.eu;
 spf=pass  smtp.mailfrom=elliotkillick@zohomail.eu;
 dmarc=pass header.from=<elliotkillick@zohomail.eu>
 header.from=<elliotkillick@zohomail.eu>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1590433346; 
 s=zoho; d=zohomail.eu; i=elliotkillick@zohomail.eu;
 h=From:Subject:To:Cc:Message-ID:References:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
 bh=zstc/sqbQ9WVCJVASP9+rIVmqfSTJUm3XzhTxVIi8+Q=;
 b=JF22u+9rO5ikwC3gRFLNhx3N8g197yvdBWlIAA4BTfl3dsifuZTS/izLf7gdh9v5
 DVr/QHRR6sfnn241K1xOPXFBfQubj1IFegdLlstq3zDCiR/fnxHbjIRMWaowvrXNMNQ
 7uDumIkcQL91C4HGEzXPJ6n40b1tuhNptnm3UM9Q=
Received: from [10.137.0.35]
 (CPEac202e7c9cc3-CMac202e7c9cc0.cpe.net.cable.rogers.com [99.231.147.74]) by
 mx.zoho.eu with SMTPS id 1590433345040946.9235816899597;
 Mon, 25 May 2020 21:02:25 +0200 (CEST)
From: Elliot Killick <elliotkillick@zohomail.eu>
Subject: Re: [BUG] Consistent LBR/TSX vmentry failure (0x80000022) calling
 domain_crash() in vmx.c:3324
To: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <767a6155-63f1-ab0c-377a-d2a1638babf4@zohomail.eu>
References: <36815795-223f-2b96-5401-c262294cbaa8@zohomail.eu>
 <c715f89a-b2ba-490c-c027-b4c7d7069f42@citrix.com>
 <2bcd2ccc-b58e-1268-68ce-3ef534534245@zohomail.eu>
 <1b76cd6a-c6a2-c9c9-1d8b-32a9a1dbc557@citrix.com>
 <657e7522-bd0f-bea3-7ce8-2f6c4ec72407@zohomail.eu>
 <dc1ef4b6-9406-b625-c157-6ebec2a6afda@citrix.com>
 <325c716c-df62-d24c-2e48-3a100e84f48d@zohomail.eu>
 <c09d62a4-6362-8eb9-a3b0-79c429850db6@citrix.com>
Date: Mon, 25 May 2020 19:01:31 +0000
MIME-Version: 1.0
In-Reply-To: <c09d62a4-6362-8eb9-a3b0-79c429850db6@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 2020-05-20 16:53, Andrew Cooper wrote:
> On 20/05/2020 15:59, Elliot Killick wrote:
>> On 2020-05-20 12:31, Andrew Cooper wrote:
>>> On 20/05/2020 12:46, Elliot Killick wrote:
>>>> processor	: 0
>>>> vendor_id	: GenuineIntel
>>>> cpu family	: 6
>>>> model		: 60
>>>> model name	: Intel(R) Core(TM) i5-4590 CPU @ 3.30GHz
>>>> stepping	: 3
>>>> microcode	: 0x27
>>>> cpu MHz		: 3299.926
>>>> cache size	: 6144 KB
>>>> physical id	: 0
>>> Ok, so the errata is one of HSM182/HSD172.
>>>
>>> Xen has workarounds for all of these.  However, I also see:
>>>
>>>> (XEN) ----[ Xen-4.8.5-15.fc25  x86_64  debug=n   Not tainted ]----
>>> which is an obsolete version of Xen these days.  It looks like these
>>> issues were first fixed in Xen 4.9, but you should upgrade to something
>>> rather newer.
>>>
>>> ~Andrew
>>>
>> Ah, so this is originally a CPU bug which Xen has had to patch over.
>
> Yes.  It was an unintended consequence of Intel being forced to disable
> TSX in most of their Haswell/Broadwell CPUs.
>
>> As for the Xen version, that's controlled by the "distribution" of Xen I
>> run which is Qubes. To remedy this I could run the testing stream of
>> Qubes which currently provides the latest version of Xen (4.13) but that
>> could bring its own set of problems.
>
> Ah, in which case Qubes will probably consider backporting the fixes.
> Open a bug with them, and I can probably point out a minimum set of
> backports to make it work.
>
> ~Andrew
>

Hi, Andrew

I've opened up a bug with Qubes here:
https://github.com/QubesOS/qubes-issues/issues/5848

Please highlight the required backports at your earliest convenience,
thank you.



From xen-devel-bounces@lists.xenproject.org Mon May 25 19:59:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 19:59:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdJFy-0005CR-20; Mon, 25 May 2020 19:59:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdJFw-0005CM-Vh
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 19:59:29 +0000
X-Inumbo-ID: 3a94c37a-9ec2-11ea-ae69-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a94c37a-9ec2-11ea-ae69-bc764e2007e4;
 Mon, 25 May 2020 19:59:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=773jymhomIkB0nGYVQ4HFvnyJ0pVYbN3XXEz2uVQNXk=; b=Z+5kEYHxqrAQkC7uvQsI1tZCw
 usBEuk8nkpj2uuoY5BJmt0e+L/Yifmn/QSnd+Hnl7wDr0SFMKgSxSvakMZLp6Yc1VJfIZtJP82tug
 tiYsTOaeA9fKs5nv3qrhleQ2/4ct7VxN3qRc5cPYMaqnjqol1cGO8NI18XfQX2tnEtCiU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdJFq-0003Xi-PG; Mon, 25 May 2020 19:59:22 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdJFq-0005Gr-IF; Mon, 25 May 2020 19:59:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdJFq-0002GF-He; Mon, 25 May 2020 19:59:22 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150367-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150367: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=354e8318d5a9b6f32fbd3c01d1a9f1970007010b
X-Osstest-Versions-That: xen=b4d01ede23847bed9471ca0b7071394aef693a1a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 19:59:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150367 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150367/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  354e8318d5a9b6f32fbd3c01d1a9f1970007010b
baseline version:
 xen                  b4d01ede23847bed9471ca0b7071394aef693a1a

Last test of basis   150363  2020-05-25 07:00:41 Z    0 days
Testing same since   150367  2020-05-25 17:02:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b4d01ede23..354e8318d5  354e8318d5a9b6f32fbd3c01d1a9f1970007010b -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 25 21:34:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 21:34:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdKjU-0005I6-2S; Mon, 25 May 2020 21:34:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdKjT-0005I1-9m
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 21:34:03 +0000
X-Inumbo-ID: 70aa1ce6-9ecf-11ea-af32-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 70aa1ce6-9ecf-11ea-af32-12813bfff9fa;
 Mon, 25 May 2020 21:33:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5RW+nFGFTGk3pJsdIX95yiVzoB//NwDuo0grZn4JJ4c=; b=mH2j6lrUKeFk/MLDRdHHiL4HX
 LQi+dyIMG8POZV1cWnY4+0bKIiFX286BUzuAYxzJi4jxPZEPCXKaqHdhWv5eth3J2+SL9VvSlU2tB
 mWNP0l5Du9U958GXZ1exsgOO/ZAa1Jc7ZGfTnIFdnysVsgqhCBIUcaDBVzCO/DtyPm8ic=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdKjM-0005cz-Q2; Mon, 25 May 2020 21:33:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdKjM-0003L9-Gm; Mon, 25 May 2020 21:33:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdKjM-0000b1-Dd; Mon, 25 May 2020 21:33:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150360-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150360: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
X-Osstest-Versions-That: seabios=7e9db04923854b7f4edca33948f55abee22907b9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 21:33:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150360 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150360/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150308
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150308
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150308
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150308
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
baseline version:
 seabios              7e9db04923854b7f4edca33948f55abee22907b9

Last test of basis   150308  2020-05-21 18:10:18 Z    4 days
Testing same since   150357  2020-05-25 02:10:36 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Kevin O'Connor <kevin@koconnor.net>
  Matt DeVillier <matt.devillier@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   7e9db04..d9aea4a  d9aea4a7cd59e00f5ed96b6442806dde0959e1ca -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 25 22:37:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 22:37:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdLiR-00023D-Ve; Mon, 25 May 2020 22:37:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vb8S=7H=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jdLiQ-000238-QF
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 22:37:02 +0000
X-Inumbo-ID: 4017a1b2-9ed8-11ea-b9cf-bc764e2007e4
Received: from mail-lf1-x141.google.com (unknown [2a00:1450:4864:20::141])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4017a1b2-9ed8-11ea-b9cf-bc764e2007e4;
 Mon, 25 May 2020 22:37:02 +0000 (UTC)
Received: by mail-lf1-x141.google.com with SMTP id u16so9458197lfl.8
 for <xen-devel@lists.xenproject.org>; Mon, 25 May 2020 15:37:02 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=RDKATGfCW4kdMXKLacGTn8XQ0RHZLkt01AwNltWZnUQ=;
 b=a92km76raeAi29Jrv/MCVUOEM8mNwjqL50IsdS/pfbL22ofJyi084JmfmrTtZaER6q
 3BbwFRqOusSkfBnlPwfc5a2zdnl78IjRFU0xbSBgBPeVC5jJW9i2VL4wL2/l79dypfN7
 5pgKlYziVmVB1v1KTe89h59lwy8f5uq0vEXEd60LeOKQZQDdF9ntqi3iylv9VkWzaa7g
 NzUucryuJompGG1yf+OpjA7HXPg4tzLSJI1K5yy8Yeng58N8rTdbpxh0qiyq3RZHET8E
 2i13BrJIYOtiAnJFKYiW73wCZO1x99lXZVFAt/7vyeWSjESA/82pSyOq0UZK7zOj9+2U
 O6vw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=RDKATGfCW4kdMXKLacGTn8XQ0RHZLkt01AwNltWZnUQ=;
 b=XQC1pbzwvcaGURHwjSJ99AUQxcmElVOXjWgKRa/aOfsbVVR3MgwhZYa0B2ykNLL5IU
 JR8nctDDnxW0ck0qcvAHbQbtffEfPlcX/k6NJt/S4nKcenNz4kEo7GQAt83Zb2oq6dPR
 Xhdjg7kfmjqg0z9YOTI1uuUurs+VJ7yA0DlS8OJ2r6QFwwJwU2yjM5Vy6XmRZBK+jWMZ
 Fd0RmQVJndOp0V7XaR6yQwV/cGHh8Sjd5J3Vrr6EHWH0yUH/4sNVKu6MIoTEQHVkISp4
 SF+Vy21kRIZt/xmceVmqlEM65o5figXgxM6/wOUgMAOZntmiuyshe6y5ByU5PmmWNVn4
 vaLg==
X-Gm-Message-State: AOAM531zmOzm/qcr4OSXysOOVQL04znVP+uGCJtL35mUczFWfxf31NVZ
 F8EOdk/5RfFUJvaP7dhDD0ZS0QGobjNwq5oIQi0oEA==
X-Google-Smtp-Source: ABdhPJxcUsE0FpOrWOB7h+yv1PB1IkG1pqbKocKHpWKHSrF5niI7NoiU61R6etGR6pUMRZBpsMWw0a3TzcTmpGfmzd0=
X-Received: by 2002:a05:6512:3049:: with SMTP id
 b9mr15213389lfb.44.1590446220666; 
 Mon, 25 May 2020 15:37:00 -0700 (PDT)
MIME-Version: 1.0
References: <20200525024955.225415-1-jandryuk@gmail.com>
In-Reply-To: <20200525024955.225415-1-jandryuk@gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 25 May 2020 18:36:49 -0400
Message-ID: <CAKf6xpvRxeUdOOogacDvncC3yogcTN4gALVWO+V8ZJ8x__RafA@mail.gmail.com>
Subject: Re: [PATCH 0/8] Coverity fixes for vchan-socket-proxy
To: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, May 24, 2020 at 10:50 PM Jason Andryuk <jandryuk@gmail.com> wrote:
>
> This series addresses some Coverity reports.  To handle closing FDs, a
> state struct is introduced to track FDs closed in both main() and
> data_loop().

I've realized the changes here are insufficient to handle the FD
leaks.  That is, the accept()-ed FDs need to be closed inside the for
loop so they aren't leaked with each iteration.  I'll re-work for a
v2.

-Jason


From xen-devel-bounces@lists.xenproject.org Mon May 25 23:40:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 25 May 2020 23:40:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdMha-0007uc-Qm; Mon, 25 May 2020 23:40:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KePG=7H=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdMhZ-0007uX-28
 for xen-devel@lists.xenproject.org; Mon, 25 May 2020 23:40:13 +0000
X-Inumbo-ID: 13610150-9ee1-11ea-af43-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 13610150-9ee1-11ea-af43-12813bfff9fa;
 Mon, 25 May 2020 23:40:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8BoLVhSomO0K4/xVrHXDzRy2VsTYIsdS9V20fdw42bA=; b=JNrmzTspfoRuc4RtrKWU0zM1b
 dBpkoSmZeoSngBceHJ+vM/fyRDzrmQ/MtzdYmteubt03F0QjYkCQBQ1JG3C6iwoZrv1+8h3EmQOtS
 /e9iqz6LG6MvjvwHVL1kutD/FRq5/mjE1O9M44qCGFpt3hVuEi0YwhwVw4lcRFjth5avo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdMhX-0008Bx-J8; Mon, 25 May 2020 23:40:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdMhX-00010U-8a; Mon, 25 May 2020 23:40:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdMhX-0007mt-7v; Mon, 25 May 2020 23:40:11 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150358-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150358: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-saverestore.2:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=437b0aa06a014ce174e24c0d3530b3e9ab19b18b
X-Osstest-Versions-That: xen=437b0aa06a014ce174e24c0d3530b3e9ab19b18b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 25 May 2020 23:40:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150358 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150358/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     12 guest-start      fail in 150355 pass in 150358
 test-amd64-amd64-xl-rtds     17 guest-saverestore.2        fail pass in 150355

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail in 150355 blocked in 150358
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150355
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150355
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150355
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150355
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150355
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150355
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150355
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150355
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150355
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  437b0aa06a014ce174e24c0d3530b3e9ab19b18b
baseline version:
 xen                  437b0aa06a014ce174e24c0d3530b3e9ab19b18b

Last test of basis   150358  2020-05-25 04:03:33 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue May 26 00:51:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 00:51:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdNoR-00066u-SV; Tue, 26 May 2020 00:51:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdNoQ-00066p-3v
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 00:51:22 +0000
X-Inumbo-ID: 03e2c86c-9eeb-11ea-b07b-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03e2c86c-9eeb-11ea-b07b-bc764e2007e4;
 Tue, 26 May 2020 00:51:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AqCTqioUBy9ngGcgRgUuCI+7Vo/69ArYQfbveto80Mg=; b=GxILNe7bj/Ov7jA/8MnvEZnB+
 QCzYAGCyim4Y4OnGMZ0CN/bxtlSh4VA1geR3GxB3f+Gyadqair3pKungSPTk6ShgA5TvzGs2oKHoJ
 lEPN2X8hlB5Upmf3vNuEJ+111e+smFXeKbL5uZjmiMr91MpwAuDdHIh+cqruLuf4aDtJ8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdNoO-0001nv-3G; Tue, 26 May 2020 00:51:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdNoN-0002rP-QX; Tue, 26 May 2020 00:51:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdNoN-0004kz-Q3; Tue, 26 May 2020 00:51:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150369-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150369: regressions - FAIL
X-Osstest-Failures: seabios:build-amd64:xen-build:fail:regression
 seabios:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 seabios:build-amd64-libvirt:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: seabios=2e3de6253422112ae43e608661ba94ea6b345694
X-Osstest-Versions-That: seabios=d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 00:51:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150369 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150369/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 150360

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass

version targeted for testing:
 seabios              2e3de6253422112ae43e608661ba94ea6b345694
baseline version:
 seabios              d9aea4a7cd59e00f5ed96b6442806dde0959e1ca

Last test of basis   150360  2020-05-25 05:11:48 Z    0 days
Testing same since   150369  2020-05-25 21:40:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2e3de6253422112ae43e608661ba94ea6b345694
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Mon May 25 11:06:27 2020 +0200

    pci: fix mmconfig support
    
    The MODESEGMENT condition is backwards, with the effect that
    mmconfig mode is not used to configure pci bars during POST.
    
    Oops.  Fix it.
    
    The only real-mode PCI config space access seems to come from the
    iPXE option ROM initialization, which happens to work via mmconfig
    because it runs in big real mode, so this went unnoticed ...
    
    Fixes: 6a3b59ab9c7d ("pci: add mmconfig support")
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>


From xen-devel-bounces@lists.xenproject.org Tue May 26 02:28:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 02:28:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdPJV-0006qa-WD; Tue, 26 May 2020 02:27:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fnDi=7I=epam.com=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1jdPJU-0006qV-Se
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 02:27:33 +0000
X-Inumbo-ID: 7370769a-9ef8-11ea-af56-12813bfff9fa
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.83]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7370769a-9ef8-11ea-af56-12813bfff9fa;
 Tue, 26 May 2020 02:27:32 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DbdDL8yGHAb62K6iQNkLhdVALxPDNw57R/YPF2cxA2M6XOmKaC5shPRib2ZSnT+9WhEeDcVg7Q3c02EswarOg+EEo17mt9Qhk/iPFJGHLg6JRydDmOZ7AiWigfYVI075ckB+hzZyd2/KmMRE3xe714iS+Z8tWBjOGEtpCahy4TlpXEsuL+dXVjp6JS34/mquErfTRpFEObv94iwQWZ8UrWBB65Q2wolQWyaW7NAHzE3t04XVS0ba0Hp4vTrnaQBiht8sBIdDP2SPEMAWyQFudT0CzMizbOzF04fRrc3qRC9qM1Iqx+MrAXBZeqPoxTWwy1vwBP4wh37xN0V94NsKhA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eb6VUL5NMaVrqiQCWQbi5xVS92eUxRsLmpNuZWPjoUw=;
 b=NKQA6znZk+L0QE9KZ7WBy5KBalajTp2v8yHvIQNYGhUC6d9XhMRSYMJ4MdlEszRFLslyKUiIMM90WWuivK4IUBzUoZlBeDBVzIZfivI6RZg9pqd/a/TM1//4h06ff3vLAhVSv5YOu8vSHwc4XuMtE5sP2ILyLgXmaJcYvqz2S1j4pTNT3luo+ukCOlDf/SBdHaw7mDdYeWEeptreXwU9p0Lj96HJ3FF1vjxuEHXsjF+KBRA6na7JqJgtihoOfcU88u0u6LO57j+DcNAI8qINx0KsOZkAVZgnuIl+5P6PV0ngF8DZZ+HQdGTn2P4lVCKy5160mxJYO5oitrJ/tdm6iw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector2; 
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eb6VUL5NMaVrqiQCWQbi5xVS92eUxRsLmpNuZWPjoUw=;
 b=mPzqz7O9onvi9SJ4vTIZuBsaZ9cxpcddhJNPFowB6bvE+wBHGUCEnbSiJ6EDBmUZXNGl7dXYfCl8rynf8UbA155nxD1q7NbnxngDyRiQPZwIdmT4De1vFmM19uZhNQZE3eZK/WNlflGQeuWUsZ/l+mACWiu3dBJM6zl9s63IkTw=
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 (2603:10a6:803:23::18) by VI1PR0302MB3230.eurprd03.prod.outlook.com
 (2603:10a6:803:18::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.29; Tue, 26 May
 2020 02:27:29 +0000
Received: from VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75]) by VI1PR0302MB3407.eurprd03.prod.outlook.com
 ([fe80::a01e:c0b1:7174:4c75%5]) with mapi id 15.20.3021.029; Tue, 26 May 2020
 02:27:29 +0000
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "andrii.anisov@gmail.com" <andrii.anisov@gmail.com>,
 "george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
 "dfaggioli@suse.com" <dfaggioli@suse.com>, "julien.grall@arm.com"
 <julien.grall@arm.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: Re: [Xen-devel] [RFC 1/9] schedule: Introduce per-pcpu time accounting
Thread-Topic: [Xen-devel] [RFC 1/9] schedule: Introduce per-pcpu time
 accounting
Thread-Index: AQHVaIxKZPZB1+b+UEeHqKVXpxsigKdwZuYAgA3xnYCBPODKAA==
Date: Tue, 26 May 2020 02:27:29 +0000
Message-ID: <0e46fc4b29b7c3b3e6b4ca4704b9e7dac5738868.camel@epam.com>
References: <1568197942-15374-1-git-send-email-andrii.anisov@gmail.com>
 <1568197942-15374-2-git-send-email-andrii.anisov@gmail.com>
 <8c74cacb-ff73-eddc-626c-f6fa862cf5a6@arm.com>
 <f3767489-e46a-3830-8b3c-0b637b71e0b8@gmail.com>
In-Reply-To: <f3767489-e46a-3830-8b3c-0b637b71e0b8@gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
user-agent: Evolution 3.36.2 
authentication-results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 88b995d8-7686-4006-c8c5-08d8011c56ab
x-ms-traffictypediagnostic: VI1PR0302MB3230:
x-microsoft-antispam-prvs: <VI1PR0302MB3230E6EAE90BDD413E50178EE6B00@VI1PR0302MB3230.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:3173;
x-forefront-prvs: 041517DFAB
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: zcM9pqPYKCpuku+8LApW06X7eeXVBpJg4yAuqRBZMwO4a7V3jQ3WZh8SFhFoFIdctG4DcnyznfzCJ/zoRTlF9puaG1xqcTGLo52ornvvx5/QWTp90E2D1O9Fey/ro9vRuGWROgpvp9h2nhxHY4f4UaNwJMbeV0PYU/3o3Aco7G+Tj/BQxq/6tWoxDBfT815RNRdI7qnG8sc0CmzvRKQhpWXbjrw3aIRmV1ng+UiPDIJgYINOby3Lwxg0RztMUZBgBPvBxrt5U/XaE4mIuJv2K+YRJZkH9TtbQWfCiqC5h2V/jqw1egZ2NOxC9aauUQmDEoYVEqLOFXVOCgvnQ3gTMQ==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:VI1PR0302MB3407.eurprd03.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(4636009)(136003)(366004)(346002)(396003)(376002)(39860400002)(186003)(26005)(71200400001)(6506007)(36756003)(53546011)(55236004)(2906002)(15650500001)(2616005)(8676002)(4326008)(7416002)(86362001)(8936002)(6512007)(6486002)(110136005)(66556008)(64756008)(5660300002)(91956017)(316002)(478600001)(66946007)(66446008)(76116006)(54906003)(66476007);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: lZF1RoH+ReQOnLS5ZFQ5TpDRIqf5rbfVrP2OULI12vXzftXVnFojKmOV5Qs9E/q8nVYGByLM2QygY18MJAq0wRT3Zn2aYeGpdqTEkKbpj0dgdSfp2Nwfh+NVdgG2mKrsoqGrvm+5Fu8jua9kW3V9zMtoKnxR0VucRJtvrS2ELrXtSyx0XKvEMTdzVGRqgCE9mh/43xNoqKgnJZ+7rBjtGVPSjLrsY/5rEL8GjhHBmBoyHTLEpWxs/lVPwifS+Y98Rh9+iQlsV67DICyzX6ZsS5epyJt+s8C+NTA2hWkUPVcLzd/jHfOEdFK1I3IYfLHJfWevQyDOsrETaeky2S9qZa+tVXi12smXGRI8ppRhzIgLTq0shL0qe1V/tPatrxe09myOEBCjq2kWQm7DjNde5b9WHcYTD7/T5U9jSBinH15Q4+PJlDcZz3eNbwYvOpVHLQTlFUBQ9N8tX/569G4tAS057PXSg6dmoxowqlNl+XA=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <210CAF0FA772BE4D952454B1AD1F2764@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 88b995d8-7686-4006-c8c5-08d8011c56ab
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 May 2020 02:27:29.5337 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: 1Zvwx7qJ2XCYKmZYYST0jKwHauN37Be1Qrol7+9iSVld6Bx4qXVaE59oI7w04b1pr1JlehKJ1ma2Rv+U146uFLVm8PlQxHskUa0S5CunslI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0302MB3230
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "wl@xen.org" <wl@xen.org>, "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
 "tim@xen.org" <tim@xen.org>, "jbeulich@suse.com" <jbeulich@suse.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello All,

This is a gentle reminder about this RFC.

Sadly, Andrii Anisov has left our team. But I'm committed to continuing
his work on time accounting and real time scheduling.

I do realize that the proposed patches have become moldy. I can rebase
them onto current master, if it would help.

On Wed, 2019-11-06 at 13:24 +0200, Andrii Anisov wrote:
> Hello Julien,
> 
> On 28.10.19 16:28, Julien Grall wrote:
> > It would be good to get a review from the scheduler maintainers (Dario, George) to make sure they are happy with the suggested states here.
> 
> I would not say I'm completely happy with this set of states. I'd like to have a discussion on this topic with scheduler maintainers. Also because they could have a different view from x86 world.

I would love to hear any input on this topic, both from a general
scheduling standpoint and from the x86 point of view.

> > > Introduce per-pcpu time accounting what includes the following states:
> > 
> > I think we need a very detailed description of each states. Otherwise it will be hard to know how to categorize it.
> 
> I agree that we need a very detailed description of each states. Ask questions if something is not clear or doubtful. I guess we could have something better after Q&A process.
> 
> > > TACC_HYP - the pcpu executes hypervisor code like softirq processing
> > >             (including scheduling), tasklets and context switches
> > 
> > IHMO, "like" is too weak here. What do you exactly plan to introduce?
> 
> I think this should be what hypervisor does except hypercall and IO emulation (what is TACC_GSYNC).
> 
> > For instance, on Arm, you consider that leave_hypervisor_tail() is part of TACC_HYP. This function will include some handling for synchronous trap.
> 
> I guess you are saying about `p2m_flush_vm`. I doubt here, and open for suggestions.
> 
> 
> > > TACC_GUEST - the pcpu executes guests code
> > 
> > Looking at the arm64 code, you are executing some hypervisor code here. I agree this is impossible to not run any hypervisor code with TACC_GUEST, but I think this should be clarified in the documentation.
> 
> Do you mean adding few words about still having some hypervisor code near the actual context switch from/to guest (entry/return_from_trap)?
> 
> > > TACC_IDLE - the low-power state of the pcpu
> > 
> > Did you intend to mean "idle vCPU" is in use?
> 
> No. I did mean what is written.
> Currently, the idle vcpu does hypervisor work (e.g. tasklets) along with the low-power mode. IMO we have to separate them.
> 
> > > TACC_IRQ - the pcpu performs interrupts processing, without separation to
> > >             guest or hypervisor interrupts
> > > TACC_GSYNC - the pcpu executes hypervisor code to process synchronous trap
> > >               from the guest. E.g. hypercall processing or io emulation.
> > > 
> > > Currently, the only reenterant state is TACC_IRQ. It is assumed, no changes
> > > to state other than TACC_IRQ could happen until we return from nested
> > > interrupts. IRQ time is accounted in a distinct way comparing to other states.
> > 
> > s/comparing/compare/
> 
> OK.
> 
> > > It is acumulated between other states transition moments, and is substracted
> > 
> > s/acumulated/accumulated/ s/substracted/subtracted/
> 
> OK.
> 
> > > from the old state on states transion calculation.
> [1]
> > s/transion/transition/
> 
> OK.
> 
> > > Signed-off-by: Andrii Anisov <andrii_anisov@epam.com>
> > > ---
> > >   xen/common/schedule.c   | 81 +++++++++++++++++++++++++++++++++++++++++++++++++
> > >   xen/include/xen/sched.h | 27 +++++++++++++++++
> > >   2 files changed, 108 insertions(+)
> > > 
> > > diff --git a/xen/common/schedule.c b/xen/common/schedule.c
> > > index 7b71581..6dd6603 100644
> > > --- a/xen/common/schedule.c
> > > +++ b/xen/common/schedule.c
> > > @@ -1539,6 +1539,87 @@ static void schedule(void)
> > >       context_switch(prev, next);
> > >   }
> > > +DEFINE_PER_CPU(struct tacc, tacc);
> > > +
> > > +static void tacc_state_change(enum TACC_STATES new_state)
> > 
> > This should never be called with the TACC_IRQ, right?
> 
> Yes. Actually, tacc->state should never be TACC_IRQ.
> Because of TACC_IRQ reenterability it is handled through the tacc->irq_cnt and tacc->irq_enter_time.
> 
> > > +{
> > > +    s_time_t now, delta;
> > > +    struct tacc* tacc = &this_cpu(tacc);
> > > +    unsigned long flags;
> > > +
> > > +    local_irq_save(flags);
> > > +
> > > +    now = NOW();
> > > +    delta = now - tacc->state_entry_time;
> > > +
> > > +    /* We do not expect states reenterability (at least through this function)*/
> > > +    ASSERT(new_state != tacc->state);
> > > +
> > > +    tacc->state_time[tacc->state] += delta - tacc->irq_time;
> > > +    tacc->state_time[TACC_IRQ] += tacc->irq_time;
> > > +    tacc->irq_time = 0;
> > > +    tacc->state = new_state;
> > > +    tacc->state_entry_time = now;
> > > +
> > > +    local_irq_restore(flags);
> > > +}
> > > +
> > > +void tacc_hyp(int place)
> > 
> > Place is never used except for your commented printk. So what's the goal for it?
> 
> Place is just a piece of code used for debugging, as well as printk. I keept it here because this series is very RFC, yet it could be removed.
> 
> > Also, is it really necessary to provide helper for each state? Couldn't we just introduce one functions doing all the state?
> 
> I'd like calling that stuff from assembler without parameters. But have no strong opinion here.
>   
> > > +{
> > > +//    printk("\ttacc_hyp %u, place %d\n", smp_processor_id(), place);
> > > +    tacc_state_change(TACC_HYP);
> > > +}
> > > +
> > > +void tacc_guest(int place)
> > > +{
> > > +//    printk("\ttacc_guest %u, place %d\n", smp_processor_id(), place);
> > > +    tacc_state_change(TACC_GUEST);
> > > +}
> > > +
> > > +void tacc_idle(int place)
> > > +{
> > > +//    printk("\tidle cpu %u, place %d\n", smp_processor_id(), place);
> > > +    tacc_state_change(TACC_IDLE);
> > > +}
> > > +
> > > +void tacc_gsync(int place)
> > > +{
> > > +//    printk("\ttacc_gsync %u, place %d\n", smp_processor_id(), place);
> > > +    tacc_state_change(TACC_GSYNC);
> > > +}
> > > +
> > > +void tacc_irq_enter(int place)
> > > +{
> > > +    struct tacc* tacc = &this_cpu(tacc);
> > > +
> > > +//    printk("\ttacc_irq_enter %u, place %d, cnt %d\n", smp_processor_id(), place, this_cpu(tacc).irq_cnt);
> > > +    ASSERT(!local_irq_is_enabled());
> > > +    ASSERT(tacc->irq_cnt >= 0);
> > > +
> > > +    if ( tacc->irq_cnt == 0 )
> > > +    {
> > > +        tacc->irq_enter_time = NOW();
> > > +    }
> > > +
> > > +    tacc->irq_cnt++;
> > > +}
> > > +
> > > +void tacc_irq_exit(int place)
> > > +{
> > > +    struct tacc* tacc = &this_cpu(tacc);
> > > +
> > > +//    printk("\ttacc_irq_exit %u, place %d, cnt %d\n", smp_processor_id(), place, tacc->irq_cnt);
> > > +    ASSERT(!local_irq_is_enabled());
> > > +    ASSERT(tacc->irq_cnt > 0);
> > > +    if ( tacc->irq_cnt == 1 )
> > > +    {
> > > +        tacc->irq_time = NOW() - tacc->irq_enter_time;
> > 
> > If I understand correctly, you will use irq_time to update TACC_IRQ in tacc_state_change(). It may be possible to receive another interrupt before the state is changed (e.g. HYP -> GUEST). This means only the time for the last IRQ received would be accounted.
> 
> I do lock IRQs for state change. Shouldn't that protect it?
> 
> > > +        tacc->irq_enter_time = 0;
> > > +    }
> > > +
> > > +    tacc->irq_cnt--;
> > > +}
> > > +
> > >   void context_saved(struct vcpu *prev)
> > >   {
> > >       /* Clear running flag /after/ writing context to memory. */
> > > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> > > index e3601c1..04a8724 100644
> > > --- a/xen/include/xen/sched.h
> > > +++ b/xen/include/xen/sched.h
> > > @@ -1002,6 +1002,33 @@ extern void dump_runq(unsigned char key);
> > >   void arch_do_physinfo(struct xen_sysctl_physinfo *pi);
> > > +enum TACC_STATES {
> > 
> > We don't tend to use all uppercases for enum name.
> 
> OK.
> 
> > > +    TACC_HYP = 0,
> > 
> > enum begins at 0 and increment by one every time. So there is no need to hardcode a number.
> > 
> > Also, looking at the code, I think you rely on the first state to be TACC_HYP. Am I correct?
> 
> TACC_HYP is expected to be the initial state of the PCPU.
> 
> > > +    TACC_GUEST = 1,
> > > +    TACC_IDLE = 2,
> > > +    TACC_IRQ = 3,
> > > +    TACC_GSYNC = 4,
> > > +    TACC_STATES_MAX
> > > +};
> > > It would be good to document all the states in the header as well.
> 
> OK.
> 
> > > +
> > > +struct tacc
> > 
> > Please document the structure.
> 
> OK.
> 
> > > +{
> > > +    s_time_t state_time[TACC_STATES_MAX];
> > > +    s_time_t state_entry_time;
> > > +    int state;
> > 
> > This should be the enum you used above here.
> 
> Yep.
> 
> > > +
> > > +    s_time_t guest_time;
> > 
> > This is not used.
> 
> Yep, will drop it.
> 
> > > +
> > > +    s_time_t irq_enter_time;
> > > +    s_time_t irq_time;
> > > +    int irq_cnt;
> > Why do you need this to be signed?
> 
> For assertion.
>   
> > > +};
> > > +
> > > +DECLARE_PER_CPU(struct tacc, tacc);
> > > +
> > > +void tacc_hyp(int place);
> > > +void tacc_idle(int place);
> > > +
> > >   #endif /* __SCHED_H__ */
> > >   /*
> > > 
> > 
> > Cheers,
> > 


From xen-devel-bounces@lists.xenproject.org Tue May 26 04:24:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 04:24:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdR8P-00008E-RD; Tue, 26 May 2020 04:24:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdR8P-000089-0z
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 04:24:13 +0000
X-Inumbo-ID: bb3127bc-9f08-11ea-af65-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bb3127bc-9f08-11ea-af65-12813bfff9fa;
 Tue, 26 May 2020 04:24:04 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Pnap5txyM0CGz883/BX9IXPk0DasdtLTbDPbGfewYpc=; b=4srRljEsw5BC3KW/SOjRl4pyh
 rK05ZszRj8mqLfmqHe81mFTwtyafwF7PO5QnJVFdlViqvREsfgihQ1yTtBjr2rwdN+EGZyd5S55ES
 WUd2vNMkPe9gkqLAVqbSUyN93D7Ow5LXR96rx8xtgO6+NYvl03keowiG1oVG6oPmWPhtU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdR8F-0007zf-0c; Tue, 26 May 2020 04:24:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdR8E-0006tO-La; Tue, 26 May 2020 04:24:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdR8E-0000ME-JQ; Tue, 26 May 2020 04:24:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150362-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150362: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=9cb1fd0efd195590b828b9b865421ad345a4a145
X-Osstest-Versions-That: linux=98790bbac4db1697212ce9462ec35ca09c4a2810
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 04:24:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150362 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150362/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150356

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150356
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150356
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150356
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150356
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150356
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150356
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop      fail starved in 150356
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop      fail starved in 150356

version targeted for testing:
 linux                9cb1fd0efd195590b828b9b865421ad345a4a145
baseline version:
 linux                98790bbac4db1697212ce9462ec35ca09c4a2810

Last test of basis   150356  2020-05-24 17:39:04 Z    1 days
Testing same since   150362  2020-05-25 05:40:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Linus Torvalds <torvalds@linux-foundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   98790bbac4db..9cb1fd0efd19  9cb1fd0efd195590b828b9b865421ad345a4a145 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Tue May 26 05:48:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 05:48:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdSRe-0007aE-1S; Tue, 26 May 2020 05:48:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ta6a=7I=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdSRd-0007a9-I9
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 05:48:09 +0000
X-Inumbo-ID: 78cfd92a-9f14-11ea-af6e-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78cfd92a-9f14-11ea-af6e-12813bfff9fa;
 Tue, 26 May 2020 05:48:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8C2C6ACED;
 Tue, 26 May 2020 05:48:08 +0000 (UTC)
Subject: Re: [PATCH] x86: avoid HPET use also on certain Coffee Lake H
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <26a90632-bb76-a24b-aef1-4c068b610c6a@suse.com>
 <de2ca5b7-5fe1-27e5-b6e6-08e695258f1f@suse.com>
 <5f5483cc-a8e8-51a3-6c47-5b061ff97108@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6fdaf1fc-8c3d-fff9-fb65-174caad5afeb@suse.com>
Date: Tue, 26 May 2020 07:47:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <5f5483cc-a8e8-51a3-6c47-5b061ff97108@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.05.2020 19:30, Andrew Cooper wrote:
> On 25/05/2020 16:23, Jan Beulich wrote:
>> On 25.05.2020 17:18, Jan Beulich wrote:
>>> Linux commit f8edbde885bbcab6a2b4a1b5ca614e6ccb807577 says
>>>
>>> "Coffee Lake H SoC has similar behavior as Coffee Lake, skewed HPET
>>>  timer once the SoCs entered PC10."
>>>
>>> Again follow this for Xen as well, noting though that even the
>>> pre-existing PCI ID refers to a H-processor line variant (the 6-core
>>> one). It is also suspicious that the datasheet names 0x3e10 for the
>>> 4-core variant, while the Linux commit specifies 0x3e20, which I haven't
>>> been able to locate in any datasheet yet.
> 
> 3e20 is the host bridge ID for CFL-R (Gen 9) Core i9 (8c/16t) as found
> in the Dell XPS 15 7590 amongst other things.
> 
> As such, it is a generation later than CFL.

Ah, and I should have checked again before submitting - the pretty new
rev 003 datasheet actually includes all three IDs now. I've adjusted
description and code comments accordingly.

>>>  To be on the safe side, add
>>> both until clarification can be provided by Intel.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Given the nature of the issue (a power-efficiency "feature" rather than a
> bug), it will likely affect everything in a couple of generations worth
> of CPUs.
> 
> The issue may not actually affect Xen yet, because I don't expect we've
> got S0ix working yet.  It is only a problem on entry to S0i2/3 where the
> HPET is halted.

While looking into this a while ago I came across this as well, but I
couldn't deduce whether entering PC10 is indeed possible _only_ this
way.

Jan
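
For readers following the thread, the kind of check being extended can be
sketched as follows. This is an illustrative, self-contained sketch, not
Xen's actual code: the function name and structure are assumptions, and
only the two host bridge PCI device IDs actually named in the discussion
(0x3e10 and 0x3e20) are listed.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: decide whether the HPET should be avoided based on
 * the platform's host bridge PCI device ID, in the spirit of the Coffee
 * Lake workaround discussed above.  Helper name is hypothetical. */
static bool hpet_affected_by_pc10_skew(uint16_t host_bridge_did)
{
    switch ( host_bridge_did )
    {
    case 0x3e10: /* CFL-H 4-core variant, per the rev 003 datasheet */
    case 0x3e20: /* CFL-R 8-core, per Linux commit f8edbde885bb */
        return true;
    }

    return false;
}
```

A caller would consult this once at boot, before deciding whether to use
the HPET as a platform timer source.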


From xen-devel-bounces@lists.xenproject.org Tue May 26 06:50:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 06:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdTPW-0004j3-47; Tue, 26 May 2020 06:50:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ta6a=7I=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdTPU-0004bY-TJ
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 06:50:00 +0000
X-Inumbo-ID: 1dc7f20c-9f1d-11ea-af76-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1dc7f20c-9f1d-11ea-af76-12813bfff9fa;
 Tue, 26 May 2020 06:49:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 27FA1AD4D;
 Tue, 26 May 2020 06:50:01 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: extend coverage of HLE "bad page" workaround
Message-ID: <b238f66d-37a9-3080-4f2b-90225ea17102@suse.com>
Date: Tue, 26 May 2020 08:49:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Respective Core Gen10 processor lines are affected, too.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -6045,6 +6045,8 @@ const struct platform_bad_page *__init g
     case 0x000506e0: /* errata SKL167 / SKW159 */
     case 0x000806e0: /* erratum KBL??? */
     case 0x000906e0: /* errata KBL??? / KBW114 / CFW103 */
+    case 0x000a0650: /* erratum Core Gen10 U/H/S 101 */
+    case 0x000a0660: /* erratum Core Gen10 U/H/S 101 */
         *array_size = (cpuid_eax(0) >= 7 && !cpu_has_hypervisor &&
                        (cpuid_count_ebx(7, 0) & cpufeat_mask(X86_FEATURE_HLE)));
         return &hle_bad_page;
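
As an aside for readers: the case labels in this switch are raw CPUID
leaf-1 EAX values with the 4-bit stepping field cleared, so a single label
covers every stepping of a given model. A standalone illustration of that
masking follows; the helper name and the exact mask are assumptions for
illustration, not lifted from Xen's mm.c.

```c
#include <stdint.h>

/* Hypothetical helper: clear the stepping field (bits 3:0) of a CPUID
 * leaf-1 EAX value so it can be compared against stepping-independent
 * case labels such as 0x000a0650 above. */
static uint32_t cpu_sig_sans_stepping(uint32_t eax)
{
    return eax & ~0xfu;
}
```

For example, a Comet Lake part reporting EAX 0x000a0652 masks down to the
0x000a0650 label added by the patch.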


From xen-devel-bounces@lists.xenproject.org Tue May 26 06:51:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 06:51:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdTQN-0005IT-Fd; Tue, 26 May 2020 06:50:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdTQL-0005IJ-Ae
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 06:50:53 +0000
X-Inumbo-ID: 3d7dd5ee-9f1d-11ea-af76-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3d7dd5ee-9f1d-11ea-af76-12813bfff9fa;
 Tue, 26 May 2020 06:50:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hDVhusL0g6O8WwIm66OmKpJNwGxcl6En/IbTLoAzhoQ=; b=Bg9gC4xrPSADmmzBHQ4ZnWloD
 uQ6prsEfNjd0Gj4kwU/To9SgElBks2IUl4g463m230hNJYioKF4jwbFlyTrTKxwO54vymGcA9cR7Y
 HbBea2yLSXt68X2eThb8KA+vO48T25ujzc2b0By9eap54oEP1IUp9yxoe97x3pqOxQ3IM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdTQJ-00032Y-Ck; Tue, 26 May 2020 06:50:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdTQJ-0005jG-5B; Tue, 26 May 2020 06:50:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdTQJ-00026j-4c; Tue, 26 May 2020 06:50:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150372-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150372: regressions - FAIL
X-Osstest-Failures: seabios:build-i386:xen-build:fail:regression
 seabios:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
 seabios:build-i386-libvirt:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
 seabios:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=2e3de6253422112ae43e608661ba94ea6b345694
X-Osstest-Versions-That: seabios=d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 06:50:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150372 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150372/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-i386                    6 xen-build                fail REGR. vs. 150360

Tests which did not succeed, but are not blocking:
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150360
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150360
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              2e3de6253422112ae43e608661ba94ea6b345694
baseline version:
 seabios              d9aea4a7cd59e00f5ed96b6442806dde0959e1ca

Last test of basis   150360  2020-05-25 05:11:48 Z    1 days
Testing same since   150369  2020-05-25 21:40:16 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 2e3de6253422112ae43e608661ba94ea6b345694
Author: Gerd Hoffmann <kraxel@redhat.com>
Date:   Mon May 25 11:06:27 2020 +0200

    pci: fix mmconfig support
    
    The MODESEGMENT condition is backwards, with the effect that
    mmconfig mode is not used to configure pci bars during POST.
    
    Oops.  Fix it.
    
    The only real mode pci config space access seems to come from the
    ipxe option rom initialization.  Which happens to work via mmconfig
    because it runs in big real mode so this went unnoticed ...
    
    Fixes: 6a3b59ab9c7d ("pci: add mmconfig support")
    Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
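
The bug class the commit describes is a backwards mode test: the code took
the legacy port-I/O path exactly when mmconfig should have been used. The
corrected predicate can be reduced to a standalone sketch; the names below
are illustrative stand-ins, not SeaBIOS source.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative sketch: choose the PCI config-space access method.
 * 'mmconfig_base' is nonzero once an MMIO config window has been set up;
 * 'real_mode' stands in for SeaBIOS's 16-bit (MODESEGMENT) build.  The
 * buggy form effectively tested (mmconfig_base && real_mode), i.e. the
 * mode condition was backwards, so 32-bit POST code fell back to port
 * I/O even when mmconfig was available. */
static bool pci_use_mmconfig(uint64_t mmconfig_base, bool real_mode)
{
    return mmconfig_base != 0 && !real_mode;
}
```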


From xen-devel-bounces@lists.xenproject.org Tue May 26 07:05:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 07:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdTdw-0006SX-VD; Tue, 26 May 2020 07:04:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdTdv-0006SS-Iu
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 07:04:55 +0000
X-Inumbo-ID: 305b2176-9f1f-11ea-b9cf-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 305b2176-9f1f-11ea-b9cf-bc764e2007e4;
 Tue, 26 May 2020 07:04:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=xX25+8nOswmDfLaa//ljOVEC36c/NDiD/EzNfFHJoyc=; b=1w8422Cg/vjiQ4hlYupTP9b0n
 vkQ3hJVe2Jb3pP7VKHUiyEh/9PBG8BMCUimGRHYy4Z9yc6WojnE1vH0KQSl7nlNQDnkbVsOnO/Pap
 dExfzC+uK2yHVFbGuKWA9OQJJFcrggIA+hGtaM3qvqid1B0K5uieZyVHJXrgtB/8rqolQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdTdo-0003NH-Ol; Tue, 26 May 2020 07:04:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdTdo-0006H3-Dm; Tue, 26 May 2020 07:04:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdTdo-0007rD-DE; Tue, 26 May 2020 07:04:48 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150374-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150374: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=3944f6855b9d4df73754bb6e5c8023d77399879b
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 07:04:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150374 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150374/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3944f6855b9d4df73754bb6e5c8023d77399879b
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  130 days
Failing since        146211  2020-01-18 04:18:52 Z  129 days  120 attempts
Testing same since   150374  2020-05-26 04:20:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19474 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 26 07:20:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 07:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdTsM-0007TB-Dc; Tue, 26 May 2020 07:19:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/3u5=7I=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jdTsL-0007T6-GE
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 07:19:49 +0000
X-Inumbo-ID: 4850f394-9f21-11ea-b07b-bc764e2007e4
Received: from mail-wr1-x435.google.com (unknown [2a00:1450:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4850f394-9f21-11ea-b07b-bc764e2007e4;
 Tue, 26 May 2020 07:19:48 +0000 (UTC)
Received: by mail-wr1-x435.google.com with SMTP id r7so2634579wro.1
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 00:19:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=QZiLmao3gEJI1hwzYf53wNQ+KzKbFm639/BxHvqaqGk=;
 b=LF0ElGei7SmJcxO1AXrpGkJtiEgJQs3rqYTtYqmhVBNUiSj1kxwNZ/oSPVW2q7jIE5
 SLJYY/o8/G65+OEj2a64rRhsscV+ewpYWMsGZ+RI0VpXUI8wuYPYA6Dk9ywBSPhMnzXg
 aKvB2M7ZQc5u7qleiBE5MXaOC/2OlR9ddBJVZdhICIEUGt3R/NPDLsXuvaYsjKPPYqSv
 fs/GUKKjAMmD49Zl19tnai39UbdXFKL9hk20mGsUUcSkiUPWM+lh4EcrDbDAtk13mQvO
 fVIGt4rSTBC28GKH/o+UNDeGp3MZ0N9eEBYuqr2qAXcPsc8haTinLNbTguVqLXxoiCzN
 e7QA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=QZiLmao3gEJI1hwzYf53wNQ+KzKbFm639/BxHvqaqGk=;
 b=Sn+p9uslJz/KQcRo4Ura3aJy1qaYhmOe0OjQfXdYNX296eTE+5JXI99sCSrdGsCmXI
 NeR09T6papntsSUlHklWzW/U4QFJSPujLAo1fYW1RcJBJQIc0DKtdAWrmpwBzbOsCjt7
 ObCkTgApoYVMTtk8aqlG4BpmYMmSfrAqmcJB8PLVfeMC5eoKDZpaGk+5vZsNW/8GQkgX
 3ZfyHaOrGxbprFCKsVTiEHh2zFhQ52rYCjkp4WXOZxvgFuOAxCAMUc0LgV2ZTiMLhuC/
 EBLvj4AzxKnbZLrFUz9a1rc01mNfPfuX5gilzmMHbg2hsw8r9AFiR/GCP9o07X2oAY9e
 U8HQ==
X-Gm-Message-State: AOAM533CiKU9abMzEXo2h8sje/mg8yWNz/Ob2zv9+uGZfv7//tnaMZNO
 VSHU4tHQsErVZi6trNDLIC8=
X-Google-Smtp-Source: ABdhPJyqoteEteCUp3DuT8thxXXsrN5VAEpRq5eC7Pr4nllI3Wp1xZ+/V9kM3t/LEIZrnxQDQIYcxA==
X-Received: by 2002:adf:91c2:: with SMTP id 60mr20085772wri.41.1590477588030; 
 Tue, 26 May 2020 00:19:48 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id d13sm20117509wmb.39.2020.05.26.00.19.47
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 26 May 2020 00:19:47 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Jason Andryuk'" <jandryuk@gmail.com>,
	<xen-devel@lists.xenproject.org>
References: <20200525025402.225884-1-jandryuk@gmail.com>
In-Reply-To: <20200525025402.225884-1-jandryuk@gmail.com>
Subject: RE: [PATCH] CHANGELOG: Add qemu-xen linux device model stubdomains
Date: Tue, 26 May 2020 08:19:46 +0100
Message-ID: <003b01d6332e$09854750$1c8fd5f0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQDPrtn+Cl1qhDPYYjFnlqkGO6V2XKrG/eXA
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Community Manager' <community.manager@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jason Andryuk <jandryuk@gmail.com>
> Sent: 25 May 2020 03:54
> To: xen-devel@lists.xenproject.org
> Cc: Jason Andryuk <jandryuk@gmail.com>; Paul Durrant <paul@xen.org>; Community Manager
> <community.manager@xenproject.org>
> Subject: [PATCH] CHANGELOG: Add qemu-xen linux device model stubdomains
> 
> Add qemu-xen linux device model stubdomain.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Paul Durrant <paul@xen.org>

> ---
>  CHANGELOG.md | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index ccb5055c87..52ed470903 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -16,6 +16,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>     fixes.
>   - Hypervisor framework to ease porting Xen to run on hypervisors.
>   - Initial support to run on Hyper-V.
> + - libxl support for running qemu-xen device model in a linux stubdomain.
> 
>  ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
> 
> --
> 2.25.1




From xen-devel-bounces@lists.xenproject.org Tue May 26 07:41:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 07:41:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdUD7-0001j3-C1; Tue, 26 May 2020 07:41:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ta6a=7I=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdUD6-0001iy-M7
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 07:41:16 +0000
X-Inumbo-ID: 476dc06c-9f24-11ea-af7d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 476dc06c-9f24-11ea-af7d-12813bfff9fa;
 Tue, 26 May 2020 07:41:15 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AA006B029;
 Tue, 26 May 2020 07:41:17 +0000 (UTC)
Subject: Re: Bug: toolstack allows too low values to be set for shadow_memory
To: Hans van Kranenburg <hans@knorrie.org>
References: <37137142-1e34-0f78-c950-91bcd6543eb8@knorrie.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <304a7a55-20a6-1dfe-1f3a-dabe90d28f40@suse.com>
Date: Tue, 26 May 2020 09:41:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <37137142-1e34-0f78-c950-91bcd6543eb8@knorrie.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25.05.2020 17:51, Hans van Kranenburg wrote:
> This bug report is a follow-up to the thread "Domu windows 2012 crash"
> on the xen-users list. In there we found out that it is possible to set
> a value for shadow_memory that is lower than a safe minimum value.
> 
> This became apparent after XSA-304, which caused using more of this type
> of memory. Having a hardcoded line like shadow_memory = 8 results in
> random crashes of the guest,

I don't think it is the tool stack's responsibility to override
admin-requested values, or at least not as far as affecting guest
stability goes; host stability of course needs to be guaranteed,
but that's then still the hypervisor's job, not the tool stack's.

Compare this to e.g. setting too small a memory= for a guest to
be able to boot at all, or setting maxmem > memory for a guest
without balloon driver.

Furthermore - what would the suggestion be as to a "safe minimum
value"? Assuming _all_ large pages may potentially get shattered
is surely a waste of memory, unless the admin really knows
guests are going to behave that way. (In your report you also
didn't mention what memory= values the issue was observed with.
Obviously larger memory= values also require bumping shadow_memory= at
least from some point onwards.)

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 26 08:00:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 08:00:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdUVM-0003sM-Fb; Tue, 26 May 2020 08:00:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/3u5=7I=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jdUVL-0003oj-DH
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 08:00:07 +0000
X-Inumbo-ID: e91a61d4-9f26-11ea-b07b-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e91a61d4-9f26-11ea-b07b-bc764e2007e4;
 Tue, 26 May 2020 08:00:06 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id r9so2222664wmh.2
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 01:00:06 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=TDWa6IPHHqnB9ZQF5qL5hjByhVXCrB0DlhZQgkoOHis=;
 b=G5pf4Lhlyk+9taGtPaOH4JCJOD4FNZrW81g+1r86uerybaf0qWM3RPv5IyFdVUhjaD
 efiQY+sXnzuaqPwzQG2+DKfcw1+dvzI5onEkxmTuGRJWgRXZHBIpwwhOY5CDRiuLgxm3
 ms5Ek4/Og8kZgxuwHk27gN5Otmq01Rj16A9p90RufaJ8l0YMTHd//atYLZhSbR6JCNFN
 JxLYpC0e/D+dVlO5gY4JkQXsCQB4m2TSADiq3kaeUst5rw94Rxj+CVoBZsgIXIJIjeFI
 CBXW5+IQJrcj+4YZIUcHzh3eaQv9X0Cur9TaGjQLK5YrFP6kNfU9oiGbQBw6EH99rYSn
 3AfQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=TDWa6IPHHqnB9ZQF5qL5hjByhVXCrB0DlhZQgkoOHis=;
 b=megSxEtBkmUrSZBW1AG4hdUPVF9Mbw6/YKKjxFoQWPoHtglqyeMz66NwbCBWERRHFp
 DLr81f66FYcOl/LveyKlWIVOREFX4uMqXWzwbamIHRX5EEvNqMmY7yvwkr45QBD++dta
 u1BwyBVwQpMbp/RpYr2xzXzsFzgjL92I6wqWrTk7iNgjntUvegk5UQhNRAs1M6hcI3jV
 /0Aa4hpAU+3I7ucOyfx768tdmQj3FgD/hpYqmFDIJZ9J9MPOxZ5ZW7qv3Ewzf9O3QEIq
 jOfhkcb1tNm0YP/cUzg1DcFNTth4gs+u3oOc2RXf05h7VedlRhBDwUIt+BBgvhoNgvai
 JVcw==
X-Gm-Message-State: AOAM531JWotTlrjRlbTbdjiG1LJ7f2+4bxuPHHeKKBvuqTbC8Nrvz4fO
 VcDhY730syq6e30aWNR3eV8=
X-Google-Smtp-Source: ABdhPJzsDZXsRo/kd8VsDXHDUrHrCjoVuergmA7ZHKA5tamSZnVsbmqBQ7elSHnv+QW+wALrk9Py7w==
X-Received: by 2002:a1c:4405:: with SMTP id r5mr219103wma.72.1590480005225;
 Tue, 26 May 2020 01:00:05 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id d15sm11994980wrq.30.2020.05.26.01.00.03
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 26 May 2020 01:00:04 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: =?utf-8?Q?'J=C3=BCrgen_Gro=C3=9F'?= <jgross@suse.com>,
 "'Jan Beulich'" <jbeulich@suse.com>, "'Kevin Tian'" <kevin.tian@intel.com>,
 "'Julien Grall'" <julien@xen.org>,
 "'Jun Nakajima'" <jun.nakajima@intel.com>, "'Wei Liu'" <wl@xen.org>,
 "'Ian Jackson'" <ian.jackson@eu.citrix.com>,
 "'Daniel De Graaf'" <dgdegra@tycho.nsa.gov>
References: <20200519072106.26894-1-jgross@suse.com>
 <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
 <305d829f-24a9-1a6d-2131-fed92c22c305@suse.com>
 <000f01d62db4$57181e90$05485bb0$@xen.org>
 <fc55f4dc-c802-2153-cd6a-736a29e8a396@suse.com>
In-Reply-To: <fc55f4dc-c802-2153-cd6a-736a29e8a396@suse.com>
Subject: RE: [PATCH v10 00/12] Add hypervisor sysfs-like support
Date: Tue, 26 May 2020 09:00:02 +0100
Message-ID: <003f01d63333$aa399f20$feacdd60$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQF0GzHQelLbY29QZvJSPKpxy6orIQGqCQh0Aj1ZGvwCHmdLiQIdeZWcqT0U8tA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?utf-8?Q?'Roger_Pau_Monn=C3=A9'?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jürgen Groß <jgross@suse.com>
> Sent: 25 May 2020 08:02
> To: paul@xen.org; 'Jan Beulich' <jbeulich@suse.com>; 'Kevin Tian' <kevin.tian@intel.com>; 'Julien
> Grall' <julien@xen.org>; 'Jun Nakajima' <jun.nakajima@intel.com>; 'Wei Liu' <wl@xen.org>; 'Ian
> Jackson' <ian.jackson@eu.citrix.com>; 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>
> Cc: 'Stefano Stabellini' <sstabellini@kernel.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>;
> 'George Dunlap' <george.dunlap@citrix.com>; 'Anthony PERARD' <anthony.perard@citrix.com>; xen-
> devel@lists.xenproject.org; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>; 'Roger Pau Monné'
> <roger.pau@citrix.com>
> Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
> 
> On 19.05.20 10:06, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: 19 May 2020 08:45
> >> To: Jürgen Groß <jgross@suse.com>; Kevin Tian <kevin.tian@intel.com>; Julien Grall
> <julien@xen.org>;
> >> Jun Nakajima <jun.nakajima@intel.com>; Wei Liu <wl@xen.org>; Ian Jackson
> <ian.jackson@eu.citrix.com>;
> >> Daniel De Graaf <dgdegra@tycho.nsa.gov>; Paul Durrant <paul@xen.org>
> >> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini <sstabellini@kernel.org>; Andrew Cooper
> >> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Anthony PERARD
> >> <anthony.perard@citrix.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné
> >> <roger.pau@citrix.com>
> >> Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
> >>
> >> On 19.05.2020 09:30, Jürgen Groß wrote:
> >>> On 19.05.20 09:20, Juergen Gross wrote:
> >>>>
> >>>> Juergen Gross (12):
> >>>>     xen/vmx: let opt_ept_ad always reflect the current setting
> >>>>     xen: add a generic way to include binary files as variables
> >>>>     docs: add feature document for Xen hypervisor sysfs-like support
> >>>>     xen: add basic hypervisor filesystem support
> >>>>     libs: add libxenhypfs
> >>>>     tools: add xenfs tool
> >>>>     xen: provide version information in hypfs
> >>>>     xen: add /buildinfo/config entry to hypervisor filesystem
> >>>>     xen: add runtime parameter access support to hypfs
> >>>>     tools/libxl: use libxenhypfs for setting xen runtime parameters
> >>>>     tools/libxc: remove xc_set_parameters()
> >>>>     xen: remove XEN_SYSCTL_set_parameter support
> >>>>
> >>>>    .gitignore                          |   6 +
> >>>>    docs/features/hypervisorfs.pandoc   |  92 +++++
> >>>>    docs/man/xenhypfs.1.pod             |  61 ++++
> >>>>    docs/misc/hypfs-paths.pandoc        | 165 +++++++++
> >>>>    tools/Rules.mk                      |   8 +-
> >>>>    tools/flask/policy/modules/dom0.te  |   4 +-
> >>>>    tools/libs/Makefile                 |   1 +
> >>>>    tools/libs/hypfs/Makefile           |  16 +
> >>>>    tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
> >>>>    tools/libs/hypfs/include/xenhypfs.h |  90 +++++
> >>>>    tools/libs/hypfs/libxenhypfs.map    |  10 +
> >>>>    tools/libs/hypfs/xenhypfs.pc.in     |  10 +
> >>>>    tools/libxc/include/xenctrl.h       |   1 -
> >>>>    tools/libxc/xc_misc.c               |  21 --
> >>>>    tools/libxl/Makefile                |   3 +-
> >>>>    tools/libxl/libxl.c                 |  53 ++-
> >>>>    tools/libxl/libxl_internal.h        |   1 +
> >>>>    tools/libxl/xenlight.pc.in          |   2 +-
> >>>>    tools/misc/Makefile                 |   6 +
> >>>>    tools/misc/xenhypfs.c               | 192 ++++++++++
> >>>>    tools/xl/xl_misc.c                  |   1 -
> >>>>    xen/arch/arm/traps.c                |   3 +
> >>>>    xen/arch/arm/xen.lds.S              |  13 +-
> >>>>    xen/arch/x86/hvm/hypercall.c        |   3 +
> >>>>    xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
> >>>>    xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
> >>>>    xen/arch/x86/hypercall.c            |   3 +
> >>>>    xen/arch/x86/pv/domain.c            |  21 +-
> >>>>    xen/arch/x86/pv/hypercall.c         |   3 +
> >>>>    xen/arch/x86/xen.lds.S              |  12 +-
> >>>>    xen/common/Kconfig                  |  23 ++
> >>>>    xen/common/Makefile                 |  13 +
> >>>>    xen/common/grant_table.c            |  62 +++-
> >>>>    xen/common/hypfs.c                  | 452 +++++++++++++++++++++++
> >>>>    xen/common/kernel.c                 |  84 ++++-
> >>>>    xen/common/sysctl.c                 |  36 --
> >>>>    xen/drivers/char/console.c          |  72 +++-
> >>>>    xen/include/Makefile                |   1 +
> >>>>    xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
> >>>>    xen/include/public/hypfs.h          | 129 +++++++
> >>>>    xen/include/public/sysctl.h         |  19 +-
> >>>>    xen/include/public/xen.h            |   1 +
> >>>>    xen/include/xen/hypercall.h         |  10 +
> >>>>    xen/include/xen/hypfs.h             | 123 +++++++
> >>>>    xen/include/xen/kernel.h            |   3 +
> >>>>    xen/include/xen/lib.h               |   1 -
> >>>>    xen/include/xen/param.h             | 126 +++++--
> >>>>    xen/include/xlat.lst                |   2 +
> >>>>    xen/include/xsm/dummy.h             |   6 +
> >>>>    xen/include/xsm/xsm.h               |   6 +
> >>>>    xen/tools/binfile                   |  43 +++
> >>>>    xen/xsm/dummy.c                     |   1 +
> >>>>    xen/xsm/flask/Makefile              |   5 +-
> >>>>    xen/xsm/flask/flask-policy.S        |  16 -
> >>>>    xen/xsm/flask/hooks.c               |   9 +-
> >>>>    xen/xsm/flask/policy/access_vectors |   4 +-
> >>>>    56 files changed, 2445 insertions(+), 193 deletions(-)
> >>>>    create mode 100644 docs/features/hypervisorfs.pandoc
> >>>>    create mode 100644 docs/man/xenhypfs.1.pod
> >>>>    create mode 100644 docs/misc/hypfs-paths.pandoc
> >>>>    create mode 100644 tools/libs/hypfs/Makefile
> >>>>    create mode 100644 tools/libs/hypfs/core.c
> >>>>    create mode 100644 tools/libs/hypfs/include/xenhypfs.h
> >>>>    create mode 100644 tools/libs/hypfs/libxenhypfs.map
> >>>>    create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
> >>>>    create mode 100644 tools/misc/xenhypfs.c
> >>>>    create mode 100644 xen/common/hypfs.c
> >>>>    create mode 100644 xen/include/public/hypfs.h
> >>>>    create mode 100644 xen/include/xen/hypfs.h
> >>>>    create mode 100755 xen/tools/binfile
> >>>>    delete mode 100644 xen/xsm/flask/flask-policy.S
> >>>>
> >>>
> >>> There are some Acks missing on this series, so please have a look at the
> >>> patches!
> >>>
> >>> There are missing especially:
> >>>
> >>> - Patch 1: VMX maintainers
> >>> - Patch 2 + 4: XSM maintainer
> >>> - Patch 4 + 9: Arm maintainer
> >>> - Patch 10 + 11: tools maintainers
> >>>
> >>> I'd really like the series to go into 4.14 (deadline this Friday).
> >>
> >
> > I would also like to see this in 4.14.
> >
> >> FTR I'm intending to waive the need for the first three of the named
> >> sets if they don't arrive by Friday (and there I don't mean last
> >> minute on Friday) - they're not overly intrusive (maybe with the
> >> exception of the XSM parts in #4) and the series has been pending
> >> for long enough. I don't feel comfortable to do so for patch 10,
> >> though; patch 11 looks to be simple enough again.
> >>
> >> Paul, as the release manager, please let me know if you disagree.
> >>
> >
> > Looking at patch #4, I'm not confident that the XSM parts are complete (e.g. does xen.if need
> updating?). Also I'd put the new access vector in xen2, since that's where set_parameter currently is
> (and will be removed from in a later patch), but the xen class does appear to have space so that's
> really just my taste.
> 
> I don't think xen.if needs updating, as it contains only macros for
> groups of operations.
> 

Ok.

> As the new hypercall isn't only replacing set_parameter, but has much
> wider semantics, I don't think it should go to xen2. There will be
> probably more interfaces being replaced and/or added after all.
> 

If you're happy with it then, in the absence of a response from Daniel, I
think patch #4 can go in. Patches #10 and #11 have acks now, so it looks
like the series is good to go. Could you send a patch for CHANGELOG.md, as
I think we'd consider this a significant feature :-)

  Paul


> 
> Juergen



From xen-devel-bounces@lists.xenproject.org Tue May 26 08:08:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 08:08:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdUd7-0004Ez-A3; Tue, 26 May 2020 08:08:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ta6a=7I=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdUd5-0004Er-RK
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 08:08:07 +0000
X-Inumbo-ID: 07bfc196-9f28-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07bfc196-9f28-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 08:08:07 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0D888ACAD;
 Tue, 26 May 2020 08:08:07 +0000 (UTC)
Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
To: paul@xen.org
References: <20200519072106.26894-1-jgross@suse.com>
 <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
 <305d829f-24a9-1a6d-2131-fed92c22c305@suse.com>
 <000f01d62db4$57181e90$05485bb0$@xen.org>
 <fc55f4dc-c802-2153-cd6a-736a29e8a396@suse.com>
 <003f01d63333$aa399f20$feacdd60$@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88154236-0e64-55a5-7531-eeb6029a188e@suse.com>
Date: Tue, 26 May 2020 10:08:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <003f01d63333$aa399f20$feacdd60$@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?J0rDvHJnZW4gR3Jvw58n?= <jgross@suse.com>,
 'Kevin Tian' <kevin.tian@intel.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Jun Nakajima' <jun.nakajima@intel.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 10:00, Paul Durrant wrote:
>> From: Jürgen Groß <jgross@suse.com>
>> Sent: 25 May 2020 08:02
>>
>> On 19.05.20 10:06, Paul Durrant wrote:
>>> Looking at patch #4, I'm not confident that the XSM parts are complete (e.g. does xen.if need
>> updating?). Also I'd put the new access vector in xen2, since that's where set_parameter currently is
>> (and will be removed from in a later patch), but the xen class does appear to have space so that's
>> really just my taste.
>>
>> I don't think xen.if needs updating, as it contains only macros for
>> groups of operations.
>>
> 
> Ok.
> 
>> As the new hypercall isn't only replacing set_parameter, but has much
>> wider semantics, I don't think it should go to xen2. There will
>> probably be more interfaces replaced and/or added, after all.
>>
> 
> If you're happy with it then, in the absence of a response from Daniel,
> I think patch #4 can go in. Patches #10 and #11 have acks now, so it
> looks like the series is good to go.
I've pinged Daniel privately, and hence would like to give him a day or
two more to respond at least there. If I don't hear back, I'll put the
series in before the end of the week.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 26 08:18:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 08:18:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdUmp-0005Af-2f; Tue, 26 May 2020 08:18:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3e+z=7I=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jdUmn-0005Aa-5i
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 08:18:09 +0000
X-Inumbo-ID: 6e09b302-9f29-11ea-9947-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e09b302-9f29-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 08:18:08 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 42F57AEED;
 Tue, 26 May 2020 08:18:09 +0000 (UTC)
Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
To: paul@xen.org, 'Jan Beulich' <jbeulich@suse.com>,
 'Kevin Tian' <kevin.tian@intel.com>, 'Julien Grall' <julien@xen.org>,
 'Jun Nakajima' <jun.nakajima@intel.com>, 'Wei Liu' <wl@xen.org>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>
References: <20200519072106.26894-1-jgross@suse.com>
 <24935c43-2f2d-83cf-9039-ec0f97498103@suse.com>
 <305d829f-24a9-1a6d-2131-fed92c22c305@suse.com>
 <000f01d62db4$57181e90$05485bb0$@xen.org>
 <fc55f4dc-c802-2153-cd6a-736a29e8a396@suse.com>
 <003f01d63333$aa399f20$feacdd60$@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <2dcec236-038d-2279-6415-cb7a68100829@suse.com>
Date: Tue, 26 May 2020 10:18:05 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <003f01d63333$aa399f20$feacdd60$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>,
 'Anthony PERARD' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?B?J1JvZ2VyIFBhdSBNb25uw6kn?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.20 10:00, Paul Durrant wrote:
>> -----Original Message-----
>> From: Jürgen Groß <jgross@suse.com>
>> Sent: 25 May 2020 08:02
>> To: paul@xen.org; 'Jan Beulich' <jbeulich@suse.com>; 'Kevin Tian' <kevin.tian@intel.com>; 'Julien
>> Grall' <julien@xen.org>; 'Jun Nakajima' <jun.nakajima@intel.com>; 'Wei Liu' <wl@xen.org>; 'Ian
>> Jackson' <ian.jackson@eu.citrix.com>; 'Daniel De Graaf' <dgdegra@tycho.nsa.gov>
>> Cc: 'Stefano Stabellini' <sstabellini@kernel.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>;
>> 'George Dunlap' <george.dunlap@citrix.com>; 'Anthony PERARD' <anthony.perard@citrix.com>; xen-
>> devel@lists.xenproject.org; 'Volodymyr Babchuk' <Volodymyr_Babchuk@epam.com>; 'Roger Pau Monné'
>> <roger.pau@citrix.com>
>> Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
>>
>> On 19.05.20 10:06, Paul Durrant wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: 19 May 2020 08:45
>>>> To: Jürgen Groß <jgross@suse.com>; Kevin Tian <kevin.tian@intel.com>; Julien Grall
>> <julien@xen.org>;
>>>> Jun Nakajima <jun.nakajima@intel.com>; Wei Liu <wl@xen.org>; Ian Jackson
>> <ian.jackson@eu.citrix.com>;
>>>> Daniel De Graaf <dgdegra@tycho.nsa.gov>; Paul Durrant <paul@xen.org>
>>>> Cc: xen-devel@lists.xenproject.org; Stefano Stabellini <sstabellini@kernel.org>; Andrew Cooper
>>>> <andrew.cooper3@citrix.com>; George Dunlap <george.dunlap@citrix.com>; Anthony PERARD
>>>> <anthony.perard@citrix.com>; Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>; Roger Pau Monné
>>>> <roger.pau@citrix.com>
>>>> Subject: Re: [PATCH v10 00/12] Add hypervisor sysfs-like support
>>>>
>>>> On 19.05.2020 09:30, Jürgen Groß wrote:
>>>>> On 19.05.20 09:20, Juergen Gross wrote:
>>>>>>
>>>>>> Juergen Gross (12):
>>>>>>      xen/vmx: let opt_ept_ad always reflect the current setting
>>>>>>      xen: add a generic way to include binary files as variables
>>>>>>      docs: add feature document for Xen hypervisor sysfs-like support
>>>>>>      xen: add basic hypervisor filesystem support
>>>>>>      libs: add libxenhypfs
>>>>>>      tools: add xenfs tool
>>>>>>      xen: provide version information in hypfs
>>>>>>      xen: add /buildinfo/config entry to hypervisor filesystem
>>>>>>      xen: add runtime parameter access support to hypfs
>>>>>>      tools/libxl: use libxenhypfs for setting xen runtime parameters
>>>>>>      tools/libxc: remove xc_set_parameters()
>>>>>>      xen: remove XEN_SYSCTL_set_parameter support
>>>>>>
>>>>>>     .gitignore                          |   6 +
>>>>>>     docs/features/hypervisorfs.pandoc   |  92 +++++
>>>>>>     docs/man/xenhypfs.1.pod             |  61 ++++
>>>>>>     docs/misc/hypfs-paths.pandoc        | 165 +++++++++
>>>>>>     tools/Rules.mk                      |   8 +-
>>>>>>     tools/flask/policy/modules/dom0.te  |   4 +-
>>>>>>     tools/libs/Makefile                 |   1 +
>>>>>>     tools/libs/hypfs/Makefile           |  16 +
>>>>>>     tools/libs/hypfs/core.c             | 536 ++++++++++++++++++++++++++++
>>>>>>     tools/libs/hypfs/include/xenhypfs.h |  90 +++++
>>>>>>     tools/libs/hypfs/libxenhypfs.map    |  10 +
>>>>>>     tools/libs/hypfs/xenhypfs.pc.in     |  10 +
>>>>>>     tools/libxc/include/xenctrl.h       |   1 -
>>>>>>     tools/libxc/xc_misc.c               |  21 --
>>>>>>     tools/libxl/Makefile                |   3 +-
>>>>>>     tools/libxl/libxl.c                 |  53 ++-
>>>>>>     tools/libxl/libxl_internal.h        |   1 +
>>>>>>     tools/libxl/xenlight.pc.in          |   2 +-
>>>>>>     tools/misc/Makefile                 |   6 +
>>>>>>     tools/misc/xenhypfs.c               | 192 ++++++++++
>>>>>>     tools/xl/xl_misc.c                  |   1 -
>>>>>>     xen/arch/arm/traps.c                |   3 +
>>>>>>     xen/arch/arm/xen.lds.S              |  13 +-
>>>>>>     xen/arch/x86/hvm/hypercall.c        |   3 +
>>>>>>     xen/arch/x86/hvm/vmx/vmcs.c         |  47 ++-
>>>>>>     xen/arch/x86/hvm/vmx/vmx.c          |   4 +-
>>>>>>     xen/arch/x86/hypercall.c            |   3 +
>>>>>>     xen/arch/x86/pv/domain.c            |  21 +-
>>>>>>     xen/arch/x86/pv/hypercall.c         |   3 +
>>>>>>     xen/arch/x86/xen.lds.S              |  12 +-
>>>>>>     xen/common/Kconfig                  |  23 ++
>>>>>>     xen/common/Makefile                 |  13 +
>>>>>>     xen/common/grant_table.c            |  62 +++-
>>>>>>     xen/common/hypfs.c                  | 452 +++++++++++++++++++++++
>>>>>>     xen/common/kernel.c                 |  84 ++++-
>>>>>>     xen/common/sysctl.c                 |  36 --
>>>>>>     xen/drivers/char/console.c          |  72 +++-
>>>>>>     xen/include/Makefile                |   1 +
>>>>>>     xen/include/asm-x86/hvm/vmx/vmcs.h  |   3 +-
>>>>>>     xen/include/public/hypfs.h          | 129 +++++++
>>>>>>     xen/include/public/sysctl.h         |  19 +-
>>>>>>     xen/include/public/xen.h            |   1 +
>>>>>>     xen/include/xen/hypercall.h         |  10 +
>>>>>>     xen/include/xen/hypfs.h             | 123 +++++++
>>>>>>     xen/include/xen/kernel.h            |   3 +
>>>>>>     xen/include/xen/lib.h               |   1 -
>>>>>>     xen/include/xen/param.h             | 126 +++++--
>>>>>>     xen/include/xlat.lst                |   2 +
>>>>>>     xen/include/xsm/dummy.h             |   6 +
>>>>>>     xen/include/xsm/xsm.h               |   6 +
>>>>>>     xen/tools/binfile                   |  43 +++
>>>>>>     xen/xsm/dummy.c                     |   1 +
>>>>>>     xen/xsm/flask/Makefile              |   5 +-
>>>>>>     xen/xsm/flask/flask-policy.S        |  16 -
>>>>>>     xen/xsm/flask/hooks.c               |   9 +-
>>>>>>     xen/xsm/flask/policy/access_vectors |   4 +-
>>>>>>     56 files changed, 2445 insertions(+), 193 deletions(-)
>>>>>>     create mode 100644 docs/features/hypervisorfs.pandoc
>>>>>>     create mode 100644 docs/man/xenhypfs.1.pod
>>>>>>     create mode 100644 docs/misc/hypfs-paths.pandoc
>>>>>>     create mode 100644 tools/libs/hypfs/Makefile
>>>>>>     create mode 100644 tools/libs/hypfs/core.c
>>>>>>     create mode 100644 tools/libs/hypfs/include/xenhypfs.h
>>>>>>     create mode 100644 tools/libs/hypfs/libxenhypfs.map
>>>>>>     create mode 100644 tools/libs/hypfs/xenhypfs.pc.in
>>>>>>     create mode 100644 tools/misc/xenhypfs.c
>>>>>>     create mode 100644 xen/common/hypfs.c
>>>>>>     create mode 100644 xen/include/public/hypfs.h
>>>>>>     create mode 100644 xen/include/xen/hypfs.h
>>>>>>     create mode 100755 xen/tools/binfile
>>>>>>     delete mode 100644 xen/xsm/flask/flask-policy.S
>>>>>>
>>>>>
>>>>> There are some Acks missing on this series, so please have a look at the
>>>>> patches!
>>>>>
>>>>> In particular, acks are still missing from:
>>>>>
>>>>> - Patch 1: VMX maintainers
>>>>> - Patch 2 + 4: XSM maintainer
>>>>> - Patch 4 + 9: Arm maintainer
>>>>> - Patch 10 + 11: tools maintainers
>>>>>
>>>>> I'd really like the series to go into 4.14 (deadline this Friday).
>>>>
>>>
>>> I would also like to see this in 4.14.
>>>
>>>> FTR I'm intending to waive the need for the first three of the named
>>>> sets if they don't arrive by Friday (and there I don't mean last
>>>> minute on Friday) - they're not overly intrusive (maybe with the
>>>> exception of the XSM parts in #4) and the series has been pending
>>>> for long enough. I don't feel comfortable doing so for patch 10,
>>>> though; patch 11 looks to be simple enough again.
>>>>
>>>> Paul, as the release manager, please let me know if you disagree.
>>>>
>>>
>>> Looking at patch #4, I'm not confident that the XSM parts are complete (e.g. does xen.if need
>> updating?). Also I'd put the new access vector in xen2, since that's where set_parameter currently is
>> (and will be removed from in a later patch), but the xen class does appear to have space so that's
>> really just my taste.
>>
>> I don't think xen.if needs updating, as it contains only macros for
>> groups of operations.
>>
> 
> Ok.
> 
>> As the new hypercall isn't only replacing set_parameter, but has much
>> wider semantics, I don't think it should go to xen2. There will
>> probably be more interfaces replaced and/or added, after all.
>>
> 
> If you're happy with it then, in the absence of a response from Daniel, I think patch #4 can go in. Patches #10 and #11 have acks now, so it looks like the series is good to go. Could you send a patch for CHANGELOG.md, as I think we'd consider this a significant feature? :-)

Will send a patch for CHANGELOG.md and one for SUPPORT.md.
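
For illustration, such a SUPPORT.md entry might look like the following (the
wording is hypothetical, simply following the file's existing section format):

```
### Hypervisor filesystem (hypfs)

Run-time access to hypervisor information and parameters via a
filesystem-like interface.

    Status: Tech Preview
```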


Juergen


From xen-devel-bounces@lists.xenproject.org Tue May 26 08:27:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 08:27:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdUvY-000676-4h; Tue, 26 May 2020 08:27:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdUvX-000671-EM
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 08:27:11 +0000
X-Inumbo-ID: adfd328a-9f2a-11ea-9947-bc764e2007e4
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id adfd328a-9f2a-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 08:27:05 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 2B303122804;
 Tue, 26 May 2020 10:27:04 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590481624;
 bh=nMbw4GKCmitbAyTsPCbgeJZDMgcv0pZGQeT3lxD0qLU=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=O58q11Z6PMCr4KpKLZFxlfh72IAdiaHSUro7vrskbvfQB/G5jvVkuiXFfcvnHRbGe
 usat0Jdc6C2ZbqD+CQD7P3xTrQDmcHCcjupohbNSQaRK3lJ7nEHfD90zKK9xIFf7Yz
 OCT+Y1VsUkrf01ysO9rkAi2gt2KCBZIiVFClD1SLNt0PNXu/pyYfF+z+YtjyIk1i6O
 8FwTb23AFsnPBoP6FZYURvZksDbDrTLJkYRK4IMImQbyKlUsqCC7otXFPKC3uaC3a9
 NsE4Fmw5F6yWGtiJdB5e9xCA81Oo8H+TcTH9KGld8bg+pw+lIi0Lx0NmIZTvQsTY+s
 y2Qh/go8Nwzcw==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id EB756268436E; Tue, 26 May 2020 10:27:03 +0200 (CEST)
Date: Tue, 26 May 2020 10:27:03 +0200
From: Martin Lucina <martin@lucina.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526082703.GA5942@nodbug.lucina.net>
Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
 anil@recoil.org, dave@recoil.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <6fadfd84-0fc4-d462-a917-1c88ec0822b8@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6fadfd84-0fc4-d462-a917-1c88ec0822b8@citrix.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, dave@recoil.org,
 mirageos-devel@lists.xenproject.org, anil@recoil.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Monday, 25.05.2020 at 18:41, Andrew Cooper wrote:
> On 25/05/2020 17:04, Martin Lucina wrote:
> > Hi,
> >
> > I'm trying to bootstrap a new PVH-only Xen domU OS "from scratch", to
> > replace our existing use of Mini-OS for the early boot/low-level support
> > layer in MirageOS. I've done this by creating new Xen bindings for Solo5
> > [1], basing them on our existing virtio code [2].
> >
> > Unfortunately, I can't seem to get past the first few instructions on VCPU
> > boot. Here's what I have at the moment (abridged):
> >
> >     .section .note.solo5.xen
> >
> >             .align  4
> >             .long   4
> >             .long   4
> >             .long   XEN_ELFNOTE_PHYS32_ENTRY
> >             .ascii "Xen\0"
> >             .long   _start32
> >
> >     /* ... */
> >
> >     .code32
> >
> >     ENTRY(_start32)
> >             cld
> >
> >             lgdt (gdt64_ptr)
> >             ljmp $0x10, $1f
> >
> >     1:      movl $0x18, %eax
> >             movl %eax, %ds
> >             movl %eax, %es
> >             movl %eax, %ss
> >
> >             xorl %eax, %eax
> >             movl %eax, %fs
> >             movl %eax, %gs
> >
> > I have verified, via xl -v create -c ..., that the domain builder appears
> > to be doing the right thing, and is interpreting the ELF NOTE correctly.
> > However, for some reason I cannot fathom, I get a triple fault on the ljmp
> > following the lgdt instruction above:
> >
> >     (XEN) d31v0 Triple fault - invoking HVM shutdown action 1
> >     (XEN) *** Dumping Dom31 vcpu#0 state: ***
> >     (XEN) ----[ Xen-4.11.4-pre  x86_64  debug=n   Not tainted ]----
> >     (XEN) CPU:    0
> >     (XEN) RIP:    0000:[<0000000000100028>]
> >     (XEN) RFLAGS: 0000000000010002   CONTEXT: hvm guest (d31v0)
> >     (XEN) rax: 0000000000000000   rbx: 0000000000116000   rcx: 0000000000000000
> >     (XEN) rdx: 0000000000000000   rsi: 0000000000000000   rdi: 0000000000000000
> >     (XEN) rbp: 0000000000000000   rsp: 0000000000000000   r8:  0000000000000000
> >     (XEN) r9:  0000000000000000   r10: 0000000000000000   r11: 0000000000000000
> >     (XEN) r12: 0000000000000000   r13: 0000000000000000   r14: 0000000000000000
> >     (XEN) r15: 0000000000000000   cr0: 0000000000000011   cr4: 0000000000000000
> >     (XEN) cr3: 0000000000000000   cr2: 0000000000000000
> >     (XEN) fsb: 0000000000000000   gsb: 0000000000000000   gss: 0000000000000000
> >     (XEN) ds: 0000   es: 0000   fs: 0000   gs: 0000   ss: 0000   cs: 0000
> 
> For extra help debugging this, you can dump the vmcs here:
> 
> andrewcoop@andrewcoop:/local/xen.git/xen$ git d
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 74c9f84462..8ae23545ae 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1687,6 +1687,7 @@ void hvm_triple_fault(void)
>      gprintk(XENLOG_INFO,
>              "Triple fault - invoking HVM shutdown action %d\n",
>              reason);
>      vcpu_show_execution_state(v);
> +    vmcs_dump_vcpu(v);
>      domain_shutdown(d, reason);
>  }
> 
> 
> which will include the segment cache, including the just loaded GDT details.

Thanks, I'll try that and report back.

> 
> > Cross-checking 0x100028 via gdb:
> >
> >     Dump of assembler code for function _start32:
> >        0x00100020 <+0>:	cld
> >        0x00100021 <+1>:	lgdtl  0x108040
> >        0x00100028 <+8>:	ljmp   $0x10,$0x10002f
> >        0x0010002f <+15>:	mov    $0x18,%eax
> >
> > I've spent a couple of days trying various things and cross-checking both
> > with the Mini-OS PVH/HVM startup [3] and the Intel SDM, but no joy. I've
> > also re-checked the GDT selector values used by the original virtio code
> > which this is based on, and they appear to be fine.
> >
> > This is not helped by the fact that the Xen domU PVH start-of-day VCPU
> > state does not seem to be documented anywhere, with the exception of
> > "struct hvm_start_info is passed in %ebx" as stated in
> > arch-x86/hvm/start_info.h.
> 
> https://xenbits.xen.org/docs/unstable/misc/pvh.html
> 
> The starting state is described there. It is 32bit flat mode, very
> similar to multiboot's entry.
> 
> > In case it's relevant, I'm testing with Xen 4.11.4 as shipped with Debian
> > 10, on an Intel Broadwell CPU.
> >
> > Any ideas?
> 
> Sadly no.
> 
> From
> https://github.com/mato/solo5/commit/f2539d588883a2e8854998c75bdea9b10f113ed6
> 
> all data looks to be linked below the 4G boundary, so the 32/64bitness
> of lgdt shouldn't matter in this case.

That's correct, the virtio code this is based on doesn't use anything above
1GB.

> Reordering the logic as per MiniOS/XTF will avoid the need for a 32bit
> CS selector - it is safe to run on the ABI-provided %cs until you switch
> into 64bit mode.

I can try poking at the order some more, but was aiming for a minimal diff
against virtio to start with.
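
For reference, the MiniOS/XTF-style ordering being suggested could be sketched
roughly as below (GAS syntax): stay on the ABI-provided 32-bit %cs, enable long
mode first, and only then load the 64-bit GDT and far-jump. Symbol names and
the selector value (boot_pml4, gdt64_ptr, _start64, $0x08) are illustrative,
not Solo5's actual ones:

```
        .code32
        movl $boot_pml4, %eax       /* physical address of page tables */
        movl %eax, %cr3
        movl %cr4, %eax
        orl  $0x20, %eax            /* CR4.PAE */
        movl %eax, %cr4
        movl $0xc0000080, %ecx      /* MSR_EFER */
        rdmsr
        orl  $0x100, %eax           /* EFER.LME */
        wrmsr
        movl %cr0, %eax
        orl  $0x80000001, %eax      /* CR0.PG | CR0.PE */
        movl %eax, %cr0             /* now in compatibility mode */
        lgdt (gdt64_ptr)
        ljmp $0x08, $_start64       /* far jump into a 64-bit code segment */
```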

> It might also be interesting to see exactly what value is in gdt64_ptr,
> just to check that the base and limit are set sensibly.

Seems fine:

    (gdb) info address gdt64_ptr
    Symbol "gdt64_ptr" is at 0x108040 in a file compiled without debugging.
    (gdb) x /1xg 0x108040
    0x108040:	0x000000108000002f
    (gdb) p/x (struct gdtptr)gdt64_ptr
    $3 = {limit = 0x2f, base = 0x108000}

-mato


From xen-devel-bounces@lists.xenproject.org Tue May 26 08:45:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 08:45:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdVDD-0007tK-N0; Tue, 26 May 2020 08:45:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0p4g=7I=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jdVDB-0007tF-TY
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 08:45:25 +0000
X-Inumbo-ID: 3cfacf2c-9f2d-11ea-a5e9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3cfacf2c-9f2d-11ea-a5e9-12813bfff9fa;
 Tue, 26 May 2020 08:45:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=E6QvmVKZVI4IOi+Q8Bq8o7i7X6Za3KtquTWs6tFiv6w=; b=JeCINbzNa0lWIvA/Z/av2DWayv
 qcT/VmkpFm2lx7wQRtx+Ssb46cXJiHVa2QECjCe0CDgF51dGHAJ/V5hxokmFGSWd52yzIOhER2S8R
 bUtRWeeGU1GnAW1Zn/QT61hO1rEk+B5fNFnSOY4M9THMe8Djzsq93EIMECQHDmd8O+Ok=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jdVD8-0005z9-7I; Tue, 26 May 2020 08:45:22 +0000
Received: from 82.149.115.87.dyn.plus.net ([87.115.149.82] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jdVD7-0008Ji-TU; Tue, 26 May 2020 08:45:22 +0000
Date: Mon, 25 May 2020 19:46:16 +0100
From: Wei Liu <wl@xen.org>
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
Message-ID: <20200525184616.uovnuszn73jtipxg@debian>
References: <20200525025506.225959-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200525025506.225959-1-jandryuk@gmail.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, May 24, 2020 at 10:55:06PM -0400, Jason Andryuk wrote:
> Add qemu-xen linux device model stubdomain to the Toolstack section as a
> Tech Preview.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Not really related to this patch: Could you please also send a patch to
CHANGELOG.md?

Wei.

> ---
>  SUPPORT.md | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index e3a366fd56..25becc9192 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -153,6 +153,12 @@ Go (golang) bindings for libxl
>  
>      Status: Experimental
>  
> +### Linux device model stubdomains
> +
> +Support for running qemu-xen device model in a linux stubdomain.
> +
> +    Status: Tech Preview
> +
>  ## Toolstack/3rd party
>  
>  ### libvirt driver for xl
> -- 
> 2.25.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 26 08:46:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 08:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdVDz-0007w4-0h; Tue, 26 May 2020 08:46:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=0p4g=7I=xen.org=wl@srs-us1.protection.inumbo.net>)
 id 1jdVDx-0007vv-6V
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 08:46:13 +0000
X-Inumbo-ID: 57d725a2-9f2d-11ea-a5e9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 57d725a2-9f2d-11ea-a5e9-12813bfff9fa;
 Tue, 26 May 2020 08:46:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=/70pCIqdBc1+IaihE3qtCdVQmzI0BmRdQTEKFEaq7uw=; b=63MHqGT5n6awT6PUH4TIaQNUto
 FRsxPE2ZqlvxFbUZ/g8CEZCBAhVGBU3PCIT6sL7+1w8UngFhX8MBQknykjtVJsoyoJDhMsAA4jXSL
 IRBHrCduN40NTmaNskc3L6E71vNpe93TdEBvocw5mGv0NdDXzAXzwb9/q459HoX/OYgw=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jdVDr-0005zz-9I; Tue, 26 May 2020 08:46:07 +0000
Received: from 82.149.115.87.dyn.plus.net ([87.115.149.82] helo=debian)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <wl@xen.org>)
 id 1jdVDr-0008Lh-18; Tue, 26 May 2020 08:46:07 +0000
Date: Mon, 25 May 2020 19:47:01 +0100
From: Wei Liu <wl@xen.org>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Subject: Re: [PATCH] x86/mem_sharing: gate enabling on cpu_has_vmx
Message-ID: <20200525184701.hogeyzd7q7qntiwn@debian>
References: <20200525144606.126767-1-tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200525144606.126767-1-tamas.lengyel@intel.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Tamas K Lengyel <tamas@tklengyel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org,
 Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 25, 2020 at 08:46:06AM -0600, Tamas K Lengyel wrote:
> From: Tamas K Lengyel <tamas@tklengyel.com>
> 
> It is unclear whether mem_sharing was ever made to work on other architectures
> but at this time the only verified platform for it is vmx. No plans to support
> or maintain it on other architectures. Make this explicit by checking during
> initialization.
> 
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>

Reviewed-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Tue May 26 08:52:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 08:52:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdVJx-0000QC-Ls; Tue, 26 May 2020 08:52:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdVJw-0000Q7-W6
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 08:52:25 +0000
X-Inumbo-ID: 3760aee6-9f2e-11ea-9947-bc764e2007e4
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3760aee6-9f2e-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 08:52:24 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 182F3122804;
 Tue, 26 May 2020 10:52:22 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590483142;
 bh=XwDVAhYIHN59igx5z/ZwfLAUYZdwFiO8r9vROke/t0s=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=gcGJMuBsYGMtvtKYzIbnugjaX8mt0TgWIGKfN0MYQYf6YfB0rnbpeBn8oxJ+mVlW1
 BHkwh9+oXWVjWW5caGEBIcWaExTGk9g+xxUR4vQXfl+WmTHmTWRqd7WTv+0Ven7LmH
 yVfVx45jtlVfna12QO7Tb+EfsjbQNGhIGU38rdqz+m2lJ9rzZQUpmiUT1KXH7dc9X6
 jpPccijphOyl/4WDZhD51i/VXvfYzO44H99+UdxZcCdbCbnFBMcYYLlRNxpmOCK//s
 Q4dEdfQEib+QJWvJCkEyH7B8FJAapNh0KCBlVn0ZLvHDnqywCzOeFzDUn7To6Su0zz
 iZajJDdb/9pjg==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id EB3AB268436E; Tue, 26 May 2020 10:52:21 +0200 (CEST)
Date: Tue, 26 May 2020 10:52:21 +0200
From: Martin Lucina <martin@lucina.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526085221.GB5942@nodbug.lucina.net>
Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
 anil@recoil.org, dave@recoil.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, dave@recoil.org,
 mirageos-devel@lists.xenproject.org, anil@recoil.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> On 25/05/2020 17:42, Jürgen Groß wrote:
> > You need to setup virtual addressing and enable 64 bit mode before using
> > 64-bit GDT.
> >
> > See Mini-OS source arch/x86/x86_hvm.S
> 
> Or
> https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> 
> But yes - Juergen is correct. Until you have enabled long mode, lgdt
> will only load the bottom 32 bits of GDTR.base.

Ah, I missed Jurgen's and your reply here.

LGDT loading only the bottom 32 bits of GDTR.base shouldn't matter.
Examining gdt_ptr some more:

    (gdb) set architecture i386
    The target architecture is assumed to be i386
    (gdb) x /xh 0x108040
    0x108040:	0x002f
    (gdb) x /xw 0x108042
    0x108042:	0x00108000
    (gdb) x /6xb 0x108040
    0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00
    (gdb) x /8xb 0x108040
    0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00	0x00	0x00
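Decoding those six bytes as the standard GDTR image (16-bit limit followed
by a 32-bit linear base, little-endian) gives the expected values; sketched
in Python purely for illustration:

```python
import struct

# Bytes of gdt_ptr as dumped by gdb at 0x108040
gdt_ptr = bytes([0x2F, 0x00, 0x00, 0x80, 0x10, 0x00])

# GDTR image: 16-bit limit, then 32-bit linear base (little-endian)
limit, base = struct.unpack("<HI", gdt_ptr)

print(hex(limit))  # 0x2f  -> GDT spans 0x30 bytes, i.e. six 8-byte entries
print(hex(base))   # 0x108000
```

So the pointer itself looks well-formed.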

> Is there a less abridged version to look at?

https://github.com/mato/solo5/blob/xen/bindings/xen/boot.S

As I wrote in another reply, this boot.S is based on the virtio
(multiboot) boot.S, which has worked fine for years and should run in the
same environment (32-bit flat protected mode).

-mato


From xen-devel-bounces@lists.xenproject.org Tue May 26 09:31:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 09:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdVvS-0004E6-Au; Tue, 26 May 2020 09:31:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdVvQ-0004E0-Hk
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 09:31:08 +0000
X-Inumbo-ID: a0a23186-9f33-11ea-9947-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a0a23186-9f33-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 09:31:08 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: rgihcP8gIIQagQhwJFYJQIdELqxwqe1yKw2omxAEys5e239J57ICUBGKrUdg4+3CP9J3DCyblH
 GOWZWIF624KDvUvbWtiWOLTEer8VcOKIoebvRYcXxsWLLxDw1g62ZQOWTgqfrdDjlE6SMkG/5n
 OU3fXZC8Q2xymg1HxAqE9kXRzOW4kaDA9Ide46DAvMq6VEtiK95HM4d8sxtjO2aT6ow6NlJ6op
 SXF4rne7oBaKIWROrDutBKy2gqQX+kuu3z9pVgV6bdiaKRvkOHjyPZr4VHE0hj+3hDnC7iz3ex
 wMQ=
X-SBRS: None
X-MesageID: 18650797
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather than
 open-coding
Thread-Topic: [PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather
 than open-coding
Thread-Index: AQHWMFPdgo/hf5/zFkClp7r2x+3aSai1v3IAgAACUICABDzNAA==
Date: Tue, 26 May 2020 09:31:03 +0000
Message-ID: <8040BE07-B452-4036-ADE8-6E5CA0ED41A9@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-5-george.dunlap@citrix.com>
 <CAEBZRSfF8KAnzz5LW8GhcuJu=2rex3d6bvgz=a7-kLMp-itjqQ@mail.gmail.com>
 <CAEBZRScpycd2_A8moi68AA3asbsUSRjkW1kUVdpsdwgx-SZKpQ@mail.gmail.com>
In-Reply-To: <CAEBZRScpycd2_A8moi68AA3asbsUSRjkW1kUVdpsdwgx-SZKpQ@mail.gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="us-ascii"
Content-ID: <D3D4CE974867CA4CAE73CCAE56AD2B14@citrix.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On May 23, 2020, at 5:48 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
>=20
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>> Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>
>=20
> Oh, I just noticed your commit message calls the variable
> "XEN_PKG_DIR", but it's actually named "GOXL_PKG_DIR."

Oh, weird.  I presume the R-b stands if I fix the title?

 -George



From xen-devel-bounces@lists.xenproject.org Tue May 26 09:34:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 09:34:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdVz5-0004a3-UR; Tue, 26 May 2020 09:34:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vX9/=7I=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jdVz5-0004Zy-4V
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 09:34:55 +0000
X-Inumbo-ID: 24249198-9f34-11ea-9947-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24249198-9f34-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 09:34:48 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: WuLYWAQddVlidClve6sgpt1aMbR1w/JuXyEnOdxAtjCsL/iK1mzpoExUyUmnlpcZK8yY+xdhcr
 Fx2u0I2BEK/kPiAGjG/7RNvJUyMor2aGJQHKwGRK1kluQi5ZoengjiHaO+vniDa4m3buLacnvC
 uaUIVwO5Q+eN7m8KORkzoyr4DvieRelbBHDriH5preC+GwmWMPDMXvJgEq9ZbB3eb9yat4YbYw
 EysZQmH7b1ZplhSs7BuELbE/SwuPQLl0QgueZPeTqxpHKdYjFtc9D3D/KdedZSZo0MMFIuLa2t
 l44=
X-SBRS: None
X-MesageID: 18390190
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-URL-LookUp-ScanningError: 1
Date: Tue, 26 May 2020 11:34:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Martin Lucina <martin@lucina.net>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526093421.GA38408@Air-de-Roger>
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526085221.GB5942@nodbug.lucina.net>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 10:52:21AM +0200, Martin Lucina wrote:
> On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> > On 25/05/2020 17:42, Jürgen Groß wrote:
> > > You need to setup virtual addressing and enable 64 bit mode before using
> > > 64-bit GDT.
> > >
> > > See Mini-OS source arch/x86/x86_hvm.S
> > 
> > Or
> > https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> > 
> > But yes - Juergen is correct.  Until you have enabled long mode, lgdt
> > will only load the bottom 32 bits of GDTR.base.
> 
> Ah, I missed Jurgen's and your reply here.
> 
> LGDT loading only the bottom 32 bits of GDTR.base shouldn't matter.
> Examining gdt_ptr some more:
> 
>     (gdb) set architecture i386
>     The target architecture is assumed to be i386
>     (gdb) x /xh 0x108040
>     0x108040:	0x002f
>     (gdb) x /xw 0x108042
>     0x108042:	0x00108000
>     (gdb) x /6xb 0x108040
>     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00
>     (gdb) x /8xb 0x108040
>     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00	0x00	0x00

Could you also print the GDT entry at 0x10 (ie: 0x108000 + 0x10), just
to make sure it contains the right descriptor?
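For a flat 64-bit code segment you would expect the descriptor's present
and executable access bits set, the L flag set, and D/B clear. A
hypothetical sketch of the field decoding, assuming the usual 8-byte
legacy descriptor layout (not taken from the boot.S under discussion):

```python
import struct

def decode_descriptor(raw: bytes) -> dict:
    """Decode an 8-byte x86 segment descriptor (legacy layout)."""
    lo, hi = struct.unpack("<II", raw)
    base = (lo >> 16) | ((hi & 0xFF) << 16) | (hi & 0xFF000000)
    limit = (lo & 0xFFFF) | (((hi >> 16) & 0xF) << 16)
    access = (hi >> 8) & 0xFF   # P/DPL/S/type byte (bits 40-47)
    flags = (hi >> 20) & 0xF    # G/D-B/L/AVL nibble (bits 52-55)
    return {
        "base": base,
        "limit": limit,
        "present": bool(access & 0x80),
        "executable": bool(access & 0x08),
        "long_mode": bool(flags & 0x2),      # L bit: 64-bit code segment
        "default_32bit": bool(flags & 0x4),  # D/B bit; must be 0 when L=1
    }

# A conventional flat 64-bit code descriptor (base/limit ignored in long mode)
d = decode_descriptor(struct.pack("<Q", 0x00209A0000000000))
print(d["present"], d["executable"], d["long_mode"])
```

Feeding it the 8 bytes gdb prints at 0x108010 would show immediately
whether the L bit is set in the descriptor being loaded.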

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 26 09:50:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 09:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWEF-0006Qn-A3; Tue, 26 May 2020 09:50:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdWEE-0006Q4-3T
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 09:50:34 +0000
X-Inumbo-ID: 5376c4dc-9f36-11ea-a5f7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5376c4dc-9f36-11ea-a5f7-12813bfff9fa;
 Tue, 26 May 2020 09:50:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iIma8DdTuPrcR6cZS2JkkAEZKLulfgreCYyHXJK5Dok=; b=Y0Ma0BXW1c4tpDiDKgUSLQ5ot
 FB/43/ctqmVFWDcnSwGt4o+Nr7cdkGoF9mqfmBM7c10r0zmq/D/qhrvDYeG9AourjdIoGcQU2xOkg
 y9cvOGtk5xTCOVZMfyp9zgNgkaMGms/lsL2RSHHL2D094PtWY/IdrbyR1FEGkTbuleNjQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdWE6-0007Mh-Eq; Tue, 26 May 2020 09:50:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdWE6-0004Mv-2t; Tue, 26 May 2020 09:50:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdWE6-0003tK-2C; Tue, 26 May 2020 09:50:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150375-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 150375: tolerable FAIL - PUSHED
X-Osstest-Failures: seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This: seabios=2e3de6253422112ae43e608661ba94ea6b345694
X-Osstest-Versions-That: seabios=d9aea4a7cd59e00f5ed96b6442806dde0959e1ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 09:50:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150375 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150375/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150360
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150360
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150360
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150360
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass

version targeted for testing:
 seabios              2e3de6253422112ae43e608661ba94ea6b345694
baseline version:
 seabios              d9aea4a7cd59e00f5ed96b6442806dde0959e1ca

Last test of basis   150360  2020-05-25 05:11:48 Z    1 days
Testing same since   150369  2020-05-25 21:40:16 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   d9aea4a..2e3de62  2e3de6253422112ae43e608661ba94ea6b345694 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 26 09:50:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 09:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWEO-0006SF-M9; Tue, 26 May 2020 09:50:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3e+z=7I=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jdWEM-0006RL-T4
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 09:50:42 +0000
X-Inumbo-ID: 5cacc6fa-9f36-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5cacc6fa-9f36-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 09:50:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E3ABBACAC;
 Tue, 26 May 2020 09:50:43 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 0/2] add xen hypfs to meta files
Date: Tue, 26 May 2020 11:50:36 +0200
Message-Id: <20200526095038.27378-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Community Manager <community.manager@xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add entries for Xen hypfs in CHANGELOG.md and SUPPORT.md.

To be committed only after the Xen hypfs series has gone in.

Juergen Gross (2):
  CHANGELOG: add hypervisor file system support
  SUPPORT.md: add hypervisor file system

 CHANGELOG.md |  1 +
 SUPPORT.md   | 14 ++++++++++++++
 2 files changed, 15 insertions(+)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 26 09:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 09:50:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWEO-0006SN-U5; Tue, 26 May 2020 09:50:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3e+z=7I=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jdWEM-0006RM-Ty
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 09:50:42 +0000
X-Inumbo-ID: 5c7db8ce-9f36-11ea-a5f8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5c7db8ce-9f36-11ea-a5f8-12813bfff9fa;
 Tue, 26 May 2020 09:50:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E34C2AC12;
 Tue, 26 May 2020 09:50:43 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 1/2] CHANGELOG: add hypervisor file system support
Date: Tue, 26 May 2020 11:50:37 +0200
Message-Id: <20200526095038.27378-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200526095038.27378-1-jgross@suse.com>
References: <20200526095038.27378-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Community Manager <community.manager@xenproject.org>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index ccb5055c87..75b7582447 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -16,6 +16,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
    fixes.
  - Hypervisor framework to ease porting Xen to run on hypervisors.
  - Initial support to run on Hyper-V.
+ - Initial hypervisor file system (hypfs) support.
 
 ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 26 09:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 09:50:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWEP-0006SZ-85; Tue, 26 May 2020 09:50:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3e+z=7I=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jdWEO-0006Rf-3m
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 09:50:44 +0000
X-Inumbo-ID: 5cad35d7-9f36-11ea-a5f8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5cad35d7-9f36-11ea-a5f8-12813bfff9fa;
 Tue, 26 May 2020 09:50:42 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 20043AD12;
 Tue, 26 May 2020 09:50:44 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH 2/2] SUPPORT.md: add hypervisor file system
Date: Tue, 26 May 2020 11:50:38 +0200
Message-Id: <20200526095038.27378-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200526095038.27378-1-jgross@suse.com>
References: <20200526095038.27378-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 SUPPORT.md | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/SUPPORT.md b/SUPPORT.md
index e3a366fd56..a1f7eb6434 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -109,6 +109,20 @@ ARM only has one guest type at the moment
 
     Status: Supported
 
+## Hypervisor file system
+
+### Build info
+
+    Status: Supported
+
+### Hypervisor config
+
+    Status: Supported
+
+### Runtime parameters
+
+    Status: Supported
+
 ## Toolstack
 
 ### xl
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 26 09:57:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 09:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWKX-0006wa-00; Tue, 26 May 2020 09:57:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdWKW-0006wV-32
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 09:57:04 +0000
X-Inumbo-ID: 3f237e66-9f37-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f237e66-9f37-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 09:57:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eJ8mBdtX+rWpjxmef4+GLgZgBsi8WntI1hbJxUEKKq0=; b=37mPtlbdKXAfmdCJQROaLqMsO
 s87hQc/O/JpELafEdQ8RezcgSqqup7l7dXBRYzV+aEZe9XLOZ2sdXyzWaVU2PCvtrI5oPfhBSF0hG
 KqwHe3MjP0b0CtGKrPLZ588226muffhWwbXTVuzpM4ZAyKo/OoXRBDtbhXX/pHqTB/pPg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdWKS-0007Xd-Tg; Tue, 26 May 2020 09:57:00 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdWKS-0004WL-K1; Tue, 26 May 2020 09:57:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdWKS-00078A-JK; Tue, 26 May 2020 09:57:00 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150370-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150370: regressions - FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=354e8318d5a9b6f32fbd3c01d1a9f1970007010b
X-Osstest-Versions-That: xen=437b0aa06a014ce174e24c0d3530b3e9ab19b18b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 09:57:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150370 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150370/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 150358

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150358

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150358
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150358
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150358
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150358
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150358
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150358
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150358
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150358
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150358
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  354e8318d5a9b6f32fbd3c01d1a9f1970007010b
baseline version:
 xen                  437b0aa06a014ce174e24c0d3530b3e9ab19b18b

Last test of basis   150358  2020-05-25 04:03:33 Z    1 days
Testing same since   150370  2020-05-26 00:07:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 354e8318d5a9b6f32fbd3c01d1a9f1970007010b
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 21 09:45:27 2020 +0100

    x86/shadow: Reposition sh_remove_write_access_from_sl1p()
    
    When compiling with SHOPT_OUT_OF_SYNC disabled, the build fails with:
    
      common.c:41:12: error: ‘sh_remove_write_access_from_sl1p’ declared ‘static’ but never defined [-Werror=unused-function]
       static int sh_remove_write_access_from_sl1p(struct domain *d, mfn_t gmfn,
                  ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    
    due to an unguarded forward declaration.
    
    It turns out there is no need to forward declare
    sh_remove_write_access_from_sl1p() to begin with, so move it to just ahead of
    its first user, which is within a larger #ifdef'd SHOPT_OUT_OF_SYNC block.
    
    Fix up for style while moving it.  No functional change.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
    Reviewed-by: Tim Deegan <tim@xen.org>

commit b4d01ede23847bed9471ca0b7071394aef693a1a
Author: Juergen Gross <jgross@suse.com>
Date:   Mon May 25 08:21:55 2020 +0200

    vmx: let opt_ept_ad always reflect the current setting
    
    In case opt_ept_ad has not been set explicitly by the user via command
    line or runtime parameter, it is treated as "no" on Avoton cpus.
    
    Change that handling by setting opt_ept_ad to 0 for this cpu type
    explicitly if no user value has been set.
    
    By putting this into the (renamed) boot time initialization of vmcs.c
    _vmx_cpu_up() can be made static.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Kevin Tian <kevin.tian@intel.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 26 10:03:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:03:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWR0-0007tz-Po; Tue, 26 May 2020 10:03:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/3u5=7I=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jdWQz-0007tu-7V
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:03:45 +0000
X-Inumbo-ID: 2ee37cda-9f38-11ea-9947-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ee37cda-9f38-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 10:03:44 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id t18so5729577wru.6
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 03:03:44 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=QrOVKMNdUBLLHH/nbmg3vYiipoecOqTtO9FF9ATZl9E=;
 b=sgtCM8vE6hmf54xd5rSL746e0q9+uIhGHaGUmToYZ9lHGw3Ad6UQeWbAE3oZ3+yils
 3+yJzSNBGihiU34HDtszOrHPTLVFdMaUq8lw4tueyEnsAgllQDMgz57vC3AP7bMGSJ67
 IVPWt2R4c3g7JSzmqnjd3/qsWm6HvDF16ZBdTzec/trqmbj0n76T6c9CHwuScjHeAlnM
 hYjbZoFd0U4EtgY2Cd5sq1ONpUMXhuw3KYg/Zg7eRT5ay7S9vM5LF00GQmrT8qaZIPAO
 eu30WPWvDMjWC8R2WcxtzRlNZyr8Zv7G2+i9cRUFpqwJwJqMWdHMg1ErAAEDJqwMgwPJ
 vXiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=QrOVKMNdUBLLHH/nbmg3vYiipoecOqTtO9FF9ATZl9E=;
 b=iYHTwUv/8icUtLLpd5btbPRqEdXoUyrQ81iJWdricPlqQfP7BU6QBFis7oL9R4C8A/
 YHs9ZtKJia8L88ccqMLAZv1H8Zmu54Sz+hrqAMflzNZBjvdmp7dZj8cuX96S9R8aZ3oT
 O0UzFV5EeeypLlWbiFL51/l3wm0hEm3gpGldA7o4GwTUbKApaSW8tFSgze0plugWRclr
 Ck6DmLZ0723fi1y026z0C0iAnwQo4MemNOe00+VW3r8bM6cK79nAdOBdMtSFx5/IQULG
 qhT1F8VIAma7jnLHbPZisCIzc8/G4FpCETdwbeNwV+DRoUWAxLD+ePZ+Ewsp0R60Hl80
 qEFw==
X-Gm-Message-State: AOAM530E3Pl2vTdLKQT5sj+syDmtjGhyExPBEnQXr4qrU3aPNS0z7lnT
 8FSBswC8NGTb/3G9t1Qc4PI=
X-Google-Smtp-Source: ABdhPJxRV7tECLej5J7sXTjPqP+Jwph+Yk3wdqueS3QNCNiPvrwOTrnDIm9VxPf5hHesntP0uQ9lWw==
X-Received: by 2002:adf:f990:: with SMTP id f16mr13493344wrr.311.1590487423892; 
 Tue, 26 May 2020 03:03:43 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id j4sm19202837wrx.24.2020.05.26.03.03.42
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Tue, 26 May 2020 03:03:43 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <20200526095038.27378-1-jgross@suse.com>
 <20200526095038.27378-2-jgross@suse.com>
In-Reply-To: <20200526095038.27378-2-jgross@suse.com>
Subject: RE: [PATCH 1/2] CHANGELOG: add hypervisor file system support
Date: Tue, 26 May 2020 11:03:42 +0100
Message-ID: <004501d63344$f025a540$d070efc0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQKQq46lmxkKpOsd5tjB0m87I8ZJ0wFonwPmpzntqMA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Community Manager' <community.manager@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Juergen Gross <jgross@suse.com>
> Sent: 26 May 2020 10:51
> To: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>; Paul Durrant <paul@xen.org>; Community Manager
> <community.manager@xenproject.org>
> Subject: [PATCH 1/2] CHANGELOG: add hypervisor file system support
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>  CHANGELOG.md | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index ccb5055c87..75b7582447 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -16,6 +16,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>     fixes.
>   - Hypervisor framework to ease porting Xen to run on hypervisors.
>   - Initial support to run on Hyper-V.
> + - Initial hypervisor file system (hypfs) support.
> 
>  ## [4.13.0](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.13.0) - 2019-12-17
> 
> --
> 2.26.2




From xen-devel-bounces@lists.xenproject.org Tue May 26 10:04:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:04:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWRS-0007yg-Bj; Tue, 26 May 2020 10:04:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vX9/=7I=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jdWRR-0007yX-Aa
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:04:13 +0000
X-Inumbo-ID: 3c1add44-9f38-11ea-a5fb-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c1add44-9f38-11ea-a5fb-12813bfff9fa;
 Tue, 26 May 2020 10:04:07 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sEdFN5Lh6elRsteVMPSO85RlLCzyksheTxVv0xvTAvzuSXt7L8hhnqt72/SDbi20OfRac0z5Tr
 GwE/4Do88nreLCpzqNT3Y0IxHINxhOYoRlhh/tZlHU+MjMvzfMZSebve4g06MR/JhJ9L9x0My4
 WTUxLdh7JUkDzhwDScjKRyw6S5t+pDXytc8Y13XJfkTGoLlKCC5VSQEX1TffzaT5GnPhxRDtnW
 dAboLqFhhaBzW2eOs76qQ3FCh//kdAwewGfoSeTsuLYNv2qm20H+8H2fjOBDIbcJIuq5HvxPfn
 EGg=
X-SBRS: None
X-MesageID: 18394027
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-URL-LookUp-ScanningError: 1
Date: Tue, 26 May 2020 12:03:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Martin Lucina <martin@lucina.net>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526100337.GB38408@Air-de-Roger>
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526093421.GA38408@Air-de-Roger>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 11:34:21AM +0200, Roger Pau Monné wrote:
> On Tue, May 26, 2020 at 10:52:21AM +0200, Martin Lucina wrote:
> > On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> > > On 25/05/2020 17:42, Jürgen Groß wrote:
> > > > You need to setup virtual addressing and enable 64 bit mode before using
> > > > 64-bit GDT.
> > > >
> > > > See Mini-OS source arch/x86/x86_hvm.S
> > > 
> > > Or
> > > https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> > > 
> > > But yes - Juergen is correct.  Until you have enabled long mode, lgdt
> > > will only load the bottom 32 bits of GDTR.base.
> > 
> > Ah, I missed Jurgen's and your reply here.
> > 
> > LGDT loading only the bottom 32 bits of GDTR.base shouldn't matter.
> > Examining gdt_ptr some more:
> > 
> >     (gdb) set architecture i386
> >     The target architecture is assumed to be i386
> >     (gdb) x /xh 0x108040
> >     0x108040:	0x002f
> >     (gdb) x /xw 0x108042
> >     0x108042:	0x00108000
> >     (gdb) x /6xb 0x108040
> >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00
> >     (gdb) x /8xb 0x108040
> >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00	0x00	0x00
> 
> Could you also print the GDT entry at 0x10 (ie: 0x108000 + 0x10), just
> to make sure it contains the right descriptor?

Forgot to ask, but can you also add the output of readelf -lW
<kernel>?

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 26 10:08:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:08:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWVF-0008HS-Tr; Tue, 26 May 2020 10:08:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdWVE-0008HN-LR
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:08:08 +0000
X-Inumbo-ID: cb620d74-9f38-11ea-a5fe-12813bfff9fa
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb620d74-9f38-11ea-a5fe-12813bfff9fa;
 Tue, 26 May 2020 10:08:07 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 725C6122804;
 Tue, 26 May 2020 12:08:06 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590487686;
 bh=hqRXSznMCr4tJH1FgF5vK1/580jutbWPg/BvlK0PEEU=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=ns3/sJIXPIqV52TPuBJPwBfQJCtih9cK/sXO9G3+En8yA2D8lU/iLyddoJvZZM6U0
 +Ohcw1P6oSEaKtIznsl7bhChiJsoFMV3r2ysjRZzN0ICK2zkChTzPOesoW7FpI4fqf
 Hlneod7iPocEvVM0Lga6+xc7/d3gHFa1gSZYNajLfaVSyBnb800PWRCdTOfxKVmhmT
 EJzSRC55gqS6qW+TQXVrJv440cAWASK9qw7SZDflurmmJ8DDbgyGEIhgMmngs6HZqh
 5Ft6fSC9YvoYWJtAQlP3TzD7BpIARfxnxLGHdS2D0UxCcklMFzTY+MMav5JcGmK1g+
 rqbK3dB5M45Aw==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 14173268436E; Tue, 26 May 2020 12:08:06 +0200 (CEST)
Date: Tue, 26 May 2020 12:08:06 +0200
From: Martin Lucina <martin@lucina.net>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526100806.GD5942@nodbug.lucina.net>
Mail-Followup-To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 mirageos-devel@lists.xenproject.org, dave@recoil.org,
 xen-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526093421.GA38408@Air-de-Roger>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tuesday, 26.05.2020 at 11:34, Roger Pau Monné wrote:
> On Tue, May 26, 2020 at 10:52:21AM +0200, Martin Lucina wrote:
> > On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> > > On 25/05/2020 17:42, Jürgen Groß wrote:
> > > > You need to setup virtual addressing and enable 64 bit mode before using
> > > > 64-bit GDT.
> > > >
> > > > See Mini-OS source arch/x86/x86_hvm.S
> > > 
> > > Or
> > > https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> > > 
> > > But yes - Juergen is correct. Until you have enabled long mode, lgdt
> > > will only load the bottom 32 bits of GDTR.base.
> > 
> > Ah, I missed Jurgen's and your reply here.
> > 
> > LGDT loading only the bottom 32 bits of GDTR.base shouldn't matter.
> > Examining gdt_ptr some more:
> > 
> >     (gdb) set architecture i386
> >     The target architecture is assumed to be i386
> >     (gdb) x /xh 0x108040
> >     0x108040:	0x002f
> >     (gdb) x /xw 0x108042
> >     0x108042:	0x00108000
> >     (gdb) x /6xb 0x108040
> >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00
> >     (gdb) x /8xb 0x108040
> >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00	0x00	0x00
> 
> Could you also print the GDT entry at 0x10 (ie: 0x108000 + 0x10), just
> to make sure it contains the right descriptor?

I triple-checked that on Friday, but here you go:

    (gdb) x /xg 0x108010
    0x108010:	0x00cf9b000000ffff
    (gdb) x /tg 0x108010
    0x108010:	0000000011001111100110110000000000000000000000001111111111111111

Translates to:

base_31_24 = { 0 }
g = 1 (4 kB)
b = 1 (32-bit)
l = 0 (32-bit)
avl = 0
limit_19_16 = { 1 }
p = 1
dpl = 0
s = 1
type = 1011 (code, exec/read, accessed)
base23_16 = { 0 }
base15_0 = { 0 }
limit_15_0 = { 1 }

type should technically not include accessed, but that shouldn't matter.
In any case, changing it to 1010 does not help.
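For reference, the field split above can be cross-checked mechanically. A minimal Python sketch of the legacy 8-byte x86 GDT descriptor layout (bit positions per the usual descriptor format; nothing here is specific to this unikernel):

```python
def decode_gdt_descriptor(desc: int) -> dict:
    """Split a legacy 8-byte x86 GDT descriptor into its bit fields."""
    return {
        "limit": (desc & 0xFFFF) | ((desc >> 32) & 0xF0000),
        "base": ((desc >> 16) & 0xFFFFFF) | (((desc >> 56) & 0xFF) << 24),
        "type": (desc >> 40) & 0xF,   # 0b1011 = code, exec/read, accessed
        "s": (desc >> 44) & 1,
        "dpl": (desc >> 45) & 3,
        "p": (desc >> 47) & 1,
        "avl": (desc >> 52) & 1,
        "l": (desc >> 53) & 1,        # long-mode (64-bit) code segment
        "db": (desc >> 54) & 1,       # default operand size (32-bit when 1)
        "g": (desc >> 55) & 1,        # granularity (4 KiB units when 1)
    }

fields = decode_gdt_descriptor(0x00cf9b000000ffff)
# Matches the manual decode: base 0, limit 0xFFFFF, DPL 0, present,
# 32-bit (db=1, l=0) exec/read code segment.
```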

Looks like I'll have to build a patched Xen as per Andrew's suggestion, and
dump the VMCS on the triple fault.

-mato


From xen-devel-bounces@lists.xenproject.org Tue May 26 10:12:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:12:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWZ5-0000jQ-Jv; Tue, 26 May 2020 10:12:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdWZ3-0000jL-Jd
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:12:05 +0000
X-Inumbo-ID: 589ff96c-9f39-11ea-a5fe-12813bfff9fa
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 589ff96c-9f39-11ea-a5fe-12813bfff9fa;
 Tue, 26 May 2020 10:12:04 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 6AC7D122804;
 Tue, 26 May 2020 12:12:03 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590487923;
 bh=ktMxHkN8RbubbKp+2Me4K8XKCkvYrm8mNijcddKeBhQ=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=es4vTGLQqFdUhQ+yBIXQPO6grScTZUzrVVbgxaAR0elX/6pAF6McJX4Ko4AF9lYvI
 h2Eb3E9Motw5p4CKmhEGMZ+cq8ejEqdGQxihe0sy9DAgPsRdnGfVFN8+BRK5Ke9h/U
 pEY0ByiS9jDd+KtXGCS/mYD80ATOPXXkaIi2pM2FjMTwqvap/wfogDJmm3TmP4PDru
 OnjDBN8ofpW8zx8NAzev1ePPegKcuOpg4SCgZ+R7c8Wu4OVhKx7nC+v18UrzonAGpH
 37zxy7tk/iRxRZKgx3ws/kQRcBz12i8kE3Eg6OVT/bb4AbaohKqAhJDRUft9Io7Pu6
 cA/YgIOrEOXAA==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 4542B268436E; Tue, 26 May 2020 12:12:03 +0200 (CEST)
Date: Tue, 26 May 2020 12:12:03 +0200
From: Martin Lucina <martin@lucina.net>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526101203.GE5942@nodbug.lucina.net>
Mail-Followup-To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 mirageos-devel@lists.xenproject.org, dave@recoil.org,
 xen-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526100337.GB38408@Air-de-Roger>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tuesday, 26.05.2020 at 12:03, Roger Pau Monné wrote:
> On Tue, May 26, 2020 at 11:34:21AM +0200, Roger Pau Monné wrote:
> > On Tue, May 26, 2020 at 10:52:21AM +0200, Martin Lucina wrote:
> > > On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> > > > On 25/05/2020 17:42, Jürgen Groß wrote:
> > > > > You need to setup virtual addressing and enable 64 bit mode before using
> > > > > 64-bit GDT.
> > > > >
> > > > > See Mini-OS source arch/x86/x86_hvm.S
> > > > 
> > > > Or
> > > > https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> > > > 
> > > > But yes - Juergen is correct. Until you have enabled long mode, lgdt
> > > > will only load the bottom 32 bits of GDTR.base.
> > > 
> > > Ah, I missed Jurgen's and your reply here.
> > > 
> > > LGDT loading only the bottom 32 bits of GDTR.base shouldn't matter.
> > > Examining gdt_ptr some more:
> > > 
> > >     (gdb) set architecture i386
> > >     The target architecture is assumed to be i386
> > >     (gdb) x /xh 0x108040
> > >     0x108040:	0x002f
> > >     (gdb) x /xw 0x108042
> > >     0x108042:	0x00108000
> > >     (gdb) x /6xb 0x108040
> > >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00
> > >     (gdb) x /8xb 0x108040
> > >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00	0x00	0x00
> > 
> > Could you also print the GDT entry at 0x10 (ie: 0x108000 + 0x10), just
> > to make sure it contains the right descriptor?
> 
> Forgot to ask, but can you also add the output of readelf -lW
> <kernel>?

    Elf file type is EXEC (Executable file)
    Entry point 0x1001e0
    There are 7 program headers, starting at offset 64

    Program Headers:
      Type           Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
      INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
          [Requesting program interpreter: /nonexistent/solo5/]
      LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00626c 0x00626c R E 0x1000
      LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed48 RW  0x1000
      NOTE           0x0080ac 0x00000000001070ac 0x00000000001070ac 0x000018 0x000018 R   0x4
      NOTE           0x00f120 0x00000000001070c4 0x00000000001070c4 0x000014 0x000000 R   0x4
      NOTE           0x008088 0x0000000000107088 0x0000000000107088 0x000024 0x000024 R   0x4
      NOTE           0x008000 0x0000000000107000 0x0000000000107000 0x000088 0x000088 R   0x4

     Section to Segment mapping:
      Segment Sections...
       00     .interp
       01     .interp .text .rodata .eh_frame
       02     .note.solo5.manifest .note.solo5.abi .note.solo5.not-openbsd .data .bss
       03     .note.solo5.not-openbsd
       04     .note.solo5.xen
       05     .note.solo5.abi
       06     .note.solo5.manifest

The PT_INTERP and multiple PT_NOTE headers are that way due to specifics of
how Solo5 ABIs work, but I've verified that the domain builder is
interpreting XEN_ELFNOTE_PHYS32_ENTRY correctly.
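For completeness, that check amounts to walking the PT_NOTE data for a note named "Xen" with type XEN_ELFNOTE_PHYS32_ENTRY (18 in Xen's public elfnote.h). A rough Python sketch of the note walk, not the actual domain builder code:

```python
import struct

XEN_ELFNOTE_PHYS32_ENTRY = 18  # value from xen/include/public/elfnote.h

def find_phys32_entry(notes: bytes):
    """Walk a raw ELF note area; return the PHYS32_ENTRY address, or None."""
    off = 0
    while off + 12 <= len(notes):
        namesz, descsz, ntype = struct.unpack_from("<III", notes, off)
        off += 12
        name = notes[off:off + namesz].rstrip(b"\0")
        off += (namesz + 3) & ~3          # name is padded to 4 bytes
        desc = notes[off:off + descsz]
        off += (descsz + 3) & ~3          # so is the descriptor
        if name == b"Xen" and ntype == XEN_ELFNOTE_PHYS32_ENTRY:
            return int.from_bytes(desc, "little")
    return None
```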

-mato


From xen-devel-bounces@lists.xenproject.org Tue May 26 10:33:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdWtA-0002fT-K7; Tue, 26 May 2020 10:32:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vX9/=7I=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jdWt9-0002fO-Tm
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:32:51 +0000
X-Inumbo-ID: 3c54960c-9f3c-11ea-a601-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c54960c-9f3c-11ea-a601-12813bfff9fa;
 Tue, 26 May 2020 10:32:45 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: xxe8AOw4QDajVG4fl/Tl4hCojrrpsVEXkYh8SHc39ewwj/UFxLeOjKDlbL0LNICLFez0GsYRou
 ekEKJWG3u1nPA1+F61Kw4PWIiaM30pg88uDP1t2qyohyBoljWPOnWIfPD4CgK5kE6OGvqNAhxG
 pJ74kWBcbq4MmdCiklRivyLo4pt8vCViaDYcrYGXUOjgdP9wGQxtmfiUyeponFjQsBpeRl7PSe
 43VitVyCZ9rTtVWRWXfRz8m2swzx8YoMc0iWhTQ6916qx9KC27/8Mb/HqQgE58ifC2/qBz3g1+
 PKU=
X-SBRS: None
X-MesageID: 18425278
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-URL-LookUp-ScanningError: 1
Date: Tue, 26 May 2020 12:32:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Martin Lucina <martin@lucina.net>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526102707.GC38408@Air-de-Roger>
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
 <20200526101203.GE5942@nodbug.lucina.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526101203.GE5942@nodbug.lucina.net>
X-ClientProxiedBy: AMSPEX02CAS01.citrite.net (10.69.22.112) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

BTW, not sure why but my MUA (Mutt 11.0.3) seems to add everyone on Cc
to the To: field on reply, and drops your email address from the list.
I don't see a 'Reply-to:' on the headers, so I'm not sure why it does
that, but you might want to check your config.

I have to manually fix the headers to properly set the To: field to
your address and the Cc to everyone else.

On Tue, May 26, 2020 at 12:12:03PM +0200, Martin Lucina wrote:
> On Tuesday, 26.05.2020 at 12:03, Roger Pau Monné wrote:
> > On Tue, May 26, 2020 at 11:34:21AM +0200, Roger Pau Monné wrote:
> > > On Tue, May 26, 2020 at 10:52:21AM +0200, Martin Lucina wrote:
> > > > On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> > > > > On 25/05/2020 17:42, Jürgen Groß wrote:
> > > > > > You need to setup virtual addressing and enable 64 bit mode before using
> > > > > > 64-bit GDT.
> > > > > >
> > > > > > See Mini-OS source arch/x86/x86_hvm.S
> > > > > 
> > > > > Or
> > > > > https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> > > > > 
> > > > > But yes - Juergen is correct.  Until you have enabled long mode, lgdt
> > > > > will only load the bottom 32 bits of GDTR.base.
> > > > 
> > > > Ah, I missed Jurgen's and your reply here.
> > > > 
> > > > LGDT loading only the bottom 32 bits of GDTR.base shouldn't matter.
> > > > Examining gdt_ptr some more:
> > > > 
> > > >     (gdb) set architecture i386
> > > >     The target architecture is assumed to be i386
> > > >     (gdb) x /xh 0x108040
> > > >     0x108040:	0x002f
> > > >     (gdb) x /xw 0x108042
> > > >     0x108042:	0x00108000
> > > >     (gdb) x /6xb 0x108040
> > > >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00
> > > >     (gdb) x /8xb 0x108040
> > > >     0x108040:	0x2f	0x00	0x00	0x80	0x10	0x00	0x00	0x00
> > > 
> > > Could you also print the GDT entry at 0x10 (ie: 0x108000 + 0x10), just
> > > to make sure it contains the right descriptor?
> > 
> > Forgot to ask, but can you also add the output of readelf -lW
> > <kernel>?
> 
>     Elf file type is EXEC (Executable file)
>     Entry point 0x1001e0
>     There are 7 program headers, starting at offset 64
> 
>     Program Headers:
>       Type           Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
>       INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
>           [Requesting program interpreter: /nonexistent/solo5/]
>       LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00626c 0x00626c R E 0x1000
>       LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed48 RW  0x1000
>       NOTE           0x0080ac 0x00000000001070ac 0x00000000001070ac 0x000018 0x000018 R   0x4
>       NOTE           0x00f120 0x00000000001070c4 0x00000000001070c4 0x000014 0x000000 R   0x4
>       NOTE           0x008088 0x0000000000107088 0x0000000000107088 0x000024 0x000024 R   0x4
>       NOTE           0x008000 0x0000000000107000 0x0000000000107000 0x000088 0x000088 R   0x4
> 
>      Section to Segment mapping:
>       Segment Sections...
>        00     .interp
>        01     .interp .text .rodata .eh_frame
>        02     .note.solo5.manifest .note.solo5.abi .note.solo5.not-openbsd .data .bss
>        03     .note.solo5.not-openbsd
>        04     .note.solo5.xen
>        05     .note.solo5.abi
>        06     .note.solo5.manifest
> 
> The PT_INTERP and multiple PT_NOTE headers are that way due to specifics of
> how Solo5 ABIs work, but I've verified that the domain builder is
> interpreting XEN_ELFNOTE_PHYS32_ENTRY correctly.

Right, just wanted to double check that virtaddr == physaddr, since you
didn't use any offset to get the physical address of symbols, but I
guess that if this wasn't correct you wouldn't even be able to execute
the first instruction anyway.
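That double check is easy to script; a sketch that verifies p_vaddr == p_paddr for every PT_LOAD header of a little-endian ELF64 image (illustrative only, not toolstack code):

```python
import struct

PT_LOAD = 1

def load_segments_identity_mapped(elf: bytes) -> bool:
    """True if every PT_LOAD program header has p_vaddr == p_paddr (ELF64 LE)."""
    assert elf[:4] == b"\x7fELF" and elf[4] == 2, "not a 64-bit ELF"
    (e_phoff,) = struct.unpack_from("<Q", elf, 0x20)
    e_phentsize, e_phnum = struct.unpack_from("<HH", elf, 0x36)
    for i in range(e_phnum):
        base = e_phoff + i * e_phentsize
        # Elf64_Phdr starts: p_type, p_flags, p_offset, p_vaddr, p_paddr
        p_type, _, _, p_vaddr, p_paddr = struct.unpack_from("<IIQQQ", elf, base)
        if p_type == PT_LOAD and p_vaddr != p_paddr:
            return False
    return True
```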

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 26 10:51:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdXAc-0004TJ-7H; Tue, 26 May 2020 10:50:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdXAa-0004TE-U0
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:50:52 +0000
X-Inumbo-ID: c4055c24-9f3e-11ea-a606-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4055c24-9f3e-11ea-a606-12813bfff9fa;
 Tue, 26 May 2020 10:50:51 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cL5R855sXckrq7UQRAtJS3jZQhND/cvrshWa7YFrg0rmxAAQBIJc5p6kfR/U0JNkGtDtTvBaEX
 HG70zg9CV/odpI6d9x1h0y0dRQaJuqIOz6TTTkUhkSamtuVrGgrFuACpjRPhZ0wAWkvPysRbvb
 HYBvo0WsbZm5h6jtN+zu4SKosyppseOE8Nd90Ra7zIo+mOgWGfiJgqfrVpsRacVYfXrpR3bu+s
 56aTY8jvF16yVaeHAWlmZECMfKF+rRGLaMlHF5QvRR0Q5ZTIxajmkd3AFIXBhOIyTqRLNxSicY
 3Yk=
X-SBRS: None
X-MesageID: 19118755
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
From: George Dunlap <George.Dunlap@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
Thread-Topic: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
Thread-Index: AQHWMj/s7fGWchLRtk6NEQ/KHg6aZKi6EQAA
Date: Tue, 26 May 2020 10:50:47 +0000
Message-ID: <3986B3CE-1730-443C-BD10-D2161C2A75F4@citrix.com>
References: <20200525025506.225959-1-jandryuk@gmail.com>
In-Reply-To: <20200525025506.225959-1-jandryuk@gmail.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <B3F439CA8791704C92BE5E37C1E33655@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On May 25, 2020, at 3:55 AM, Jason Andryuk <jandryuk@gmail.com> wrote:
> 
> Add qemu-xen linux device model stubdomain to the Toolstack section as a
> Tech Preview.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> SUPPORT.md | 6 ++++++
> 1 file changed, 6 insertions(+)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index e3a366fd56..25becc9192 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -153,6 +153,12 @@ Go (golang) bindings for libxl
> 
>     Status: Experimental
> 
> +### Linux device model stubdomains
> +
> +Support for running qemu-xen device model in a linux stubdomain.
> +
> +    Status: Tech Preview

Acked-by: George Dunlap <george.dunlap@citrix.com>

Out of curiosity, what do you think is missing to be able to declare this ‘Supported’?  Are there any features missing, or do we just need to add a test to osstest?

 -George


From xen-devel-bounces@lists.xenproject.org Tue May 26 10:57:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:57:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdXGV-0004f0-Ua; Tue, 26 May 2020 10:56:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vX9/=7I=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jdXGU-0004ev-GK
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:56:58 +0000
X-Inumbo-ID: 9e2f297a-9f3f-11ea-a607-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e2f297a-9f3f-11ea-a607-12813bfff9fa;
 Tue, 26 May 2020 10:56:57 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: n6rV7kdbpNBzv1i57lUk+nppkCEiijrYpKl1OCbGdGTKxgsjUG290z7NYIE1TumiWo3aOyOifJ
 QusDPdZKfVpTwZ30qm0Sesbuxj9fopWvl0FduNTnyQ0ttTrX6dbZoz+6LY80c7Jq3nhuRk6bIC
 scyZsprx6g4XvmUnC/Q5DzPKr7TTbYJtxozGumxfexOC0KlPmftD+zt+J2k2xGqTK/vGUeS7Y5
 HNwVE1x9Zsp04rlvbPBPLkoiGkTxT3NerVTq9Rhgkev6ALMl3sH4ouNJKzbOqkRSENvQ/AFZH0
 apA=
X-SBRS: None
X-MesageID: 19119346
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
Date: Tue, 26 May 2020 12:56:52 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: refine guest_mode()
Message-ID: <20200526105652.GD38408@Air-de-Roger>
References: <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
 <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
 <20200520151326.GM54375@Air-de-Roger>
 <38d546f9-8043-8d94-8298-8fd035078a8a@suse.com>
 <20200522104844.GY54375@Air-de-Roger>
 <a31bd761-54eb-56b8-7c60-93202d26e7d0@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a31bd761-54eb-56b8-7c60-93202d26e7d0@suse.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 22, 2020 at 02:00:22PM +0200, Jan Beulich wrote:
> On 22.05.2020 12:48, Roger Pau Monné wrote:
> > On Fri, May 22, 2020 at 11:52:42AM +0200, Jan Beulich wrote:
> >> On 20.05.2020 17:13, Roger Pau Monné wrote:
> >>> OK, so I think I'm starting to understand this all. Sorry it's taken
> >>> me so long. So it's my understanding that diff != 0 can only happen in
> >>> Xen context, or when in an IST that has a different stack (ie: MCE, NMI
> >>> or DF according to current.h) and running in PV mode?
> >>>
> >>> Wouldn't it then be fine to use (r)->cs & 3 to check we are in guest
> >>> mode if diff != 0? I see a lot of other places where cs & 3 is already
> >>> used to that effect AFAICT (like entry.S).
> >>
> >> Technically this would be correct afaics, but the idea with all this
> >> is (or should I say "looks to be"?) to have the checks be as tight as
> >> possible, to make sure we don't mistakenly consider something "guest
> >> mode" which really isn't. IOW your suggestion would be fine with me
> >> if we could exclude bugs anywhere in the code. But since this isn't
> >> realistic, I consider your suggestion to be relaxing things by too
> >> much.
> > 
> > OK, so I take that (long time) we might also want to change the cs & 3
> > checks from entry.S to check against __HYPERVISOR_CS explicitly?
> 
> I didn't think so, no (not the least because of there not being any
> guarantee afaik that EFI runtime calls couldn't play with segment
> registers; they shouldn't, yes, but there's a lot of other "should"
> many don't obey). Those are guaranteed PV-only code paths. The
> main issue here is that ->cs cannot be relied upon when a frame
> points at HVM state.

Well, if it points at HVM state it could equally have __HYPERVISOR_CS
set by the guest.

Will things work anyway if you get here from an exception generated by
EFI code that has changed the code segment? You are going to hit the
assert at least, since diff will be != 0 and cs != __HYPERVISOR_CS?

I would prefer to keep things coherent by either using cs & 3 or
cs == __HYPERVISOR_CS everywhere if possible, as I'm still unsure of
the benefit of using __HYPERVISOR_CS.
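To make the difference concrete, the two candidate checks can be sketched side by side. The selector values below are illustrative (0xe008 follows Xen's x86-64 __HYPERVISOR_CS, but treat it as an assumption here); only the RPL-0-but-not-Xen case distinguishes the two:

```python
HYPERVISOR_CS = 0xE008  # Xen's x86-64 code selector, RPL 0 (assumed value)

def guest_mode_rpl(cs: int) -> bool:
    """Relaxed check: non-zero RPL in the saved CS means guest state."""
    return (cs & 3) != 0

def guest_mode_strict(cs: int) -> bool:
    """Tight check: anything that is not exactly Xen's CS counts as guest."""
    return cs != HYPERVISOR_CS

# Agreement: Xen itself (0xe008) and a PV guest selector with RPL 3 (0xe033).
# Divergence: an RPL-0 selector that is not Xen's, e.g. one installed by
# EFI runtime code -- the RPL check says "Xen", the strict check says "guest".
```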

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 26 10:58:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 10:58:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdXIG-0004mL-GJ; Tue, 26 May 2020 10:58:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vgeY=7I=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdXIF-0004mE-Mp
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 10:58:47 +0000
X-Inumbo-ID: db778dae-9f3f-11ea-9947-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db778dae-9f3f-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 10:58:40 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:43890
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdXI7-000qyu-KT (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Tue, 26 May 2020 11:58:39 +0100
Subject: Re: Xen PVH domU start-of-day VCPU state
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
 anil@recoil.org, dave@recoil.org, martin@lucina.net
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <36363b39-c8c1-82bf-af37-f0d917844bb4@citrix.com>
Date: Tue, 26 May 2020 11:58:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200526085221.GB5942@nodbug.lucina.net>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26/05/2020 09:52, Martin Lucina wrote:
> On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
>> On 25/05/2020 17:42, Jürgen Groß wrote:
>>> You need to setup virtual addressing and enable 64 bit mode before using
>>> 64-bit GDT.
>>>
>>> See Mini-OS source arch/x86/x86_hvm.S
>> Or
>> https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
>>
>> But yes - Juergen is correct.  Until you have enabled long mode, lgdt
>> will only load the bottom 32 bits of GDTR.base.
> Ah, I missed Jurgen's and your reply here.

So the mailing list is doing something evil and setting:

Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
    =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
    xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
    anil@recoil.org, dave@recoil.org

which causes normal replies to cut you out.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 26 11:17:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 11:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdXaA-0006et-9s; Tue, 26 May 2020 11:17:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vgeY=7I=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdXa9-0006eo-7Y
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 11:17:17 +0000
X-Inumbo-ID: 74aace80-9f42-11ea-9dbe-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74aace80-9f42-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 11:17:16 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:44426
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdXa5-0002ey-JY (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Tue, 26 May 2020 12:17:13 +0100
Subject: Re: [PATCH] x86: extend coverage of HLE "bad page" workaround
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <b238f66d-37a9-3080-4f2b-90225ea17102@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <424d1b72-5eb6-f2bc-20fe-e59bacda8dd9@citrix.com>
Date: Tue, 26 May 2020 12:17:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b238f66d-37a9-3080-4f2b-90225ea17102@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26/05/2020 07:49, Jan Beulich wrote:
> Respective Core Gen10 processor lines are affected, too.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -6045,6 +6045,8 @@ const struct platform_bad_page *__init g
>      case 0x000506e0: /* errata SKL167 / SKW159 */
>      case 0x000806e0: /* erratum KBL??? */
>      case 0x000906e0: /* errata KBL??? / KBW114 / CFW103 */
> +    case 0x000a0650: /* erratum Core Gen10 U/H/S 101 */
> +    case 0x000a0660: /* erratum Core Gen10 U/H/S 101 */

This is mired in complexity.

The enumeration of MSR_TSX_CTRL (from the TAA fix, but architectural
moving forwards on any TSX-enabled CPU) includes a confirmation that HLE
no longer exists/works.  This applies to IceLake systems, but possibly
not their initial release configuration (hence, via a later microcode
update).

HLE is also disabled in microcode on all older parts for errata reasons,
so in practice it doesn't exist anywhere now.

I think it is safe to drop this workaround, and doing so seems simpler than
encoding which microcode update turned HLE off (which sadly isn't covered by
the spec updates, as even when turned off, HLE still functions according to
its spec of "may speed things up, may do nothing"), or than handling the
interactions with the CPUID-hiding capabilities of MSR_TSX_CTRL.

~Andrew

>          *array_size = (cpuid_eax(0) >= 7 && !cpu_has_hypervisor &&
>                         (cpuid_count_ebx(7, 0) & cpufeat_mask(X86_FEATURE_HLE)));
>          return &hle_bad_page;



From xen-devel-bounces@lists.xenproject.org Tue May 26 11:55:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 11:55:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdYAU-0001hd-Au; Tue, 26 May 2020 11:54:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdYAT-0001hX-7k
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 11:54:49 +0000
X-Inumbo-ID: b27c9a4a-9f47-11ea-9947-bc764e2007e4
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b27c9a4a-9f47-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 11:54:48 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 0D2D2122804;
 Tue, 26 May 2020 13:54:47 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590494087;
 bh=QFPqT+wMGFf0hsPuefeEdPUGLpxgodt6U7aKLQIqJtQ=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=jLgJZgWPv9JNi73tAsS754tnBDH0rRz6wRinU0Se2qyr7pgjUxvs8OLsHRS95HevT
 hppI8NMQngUX7awqmCeuZu7eIS9OIgiMMh0EAy+svRk/JJYUc7XqKewSuJS87XzXXZ
 KjMhT+GIrF9yQG9gIGyiNHs5jJLqcwMR/nuWKxZcN56HqUIgCJNimNVYLHsFp9WTi4
 d1D+pmC/FD0kDs4zNY5VApzb72hGEImzYhUD8eD1ihgh2f2/53nFfCoyPEl8ebJ7tv
 XUtacmkeS9hPhjLIc5Cym0+j8IP1ZzRVKm0qvf/Z1edMa+rSw8BVe7NJTINUZ/blV8
 w7IBUPKcFkvIg==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id DF22D268436E; Tue, 26 May 2020 13:54:46 +0200 (CEST)
Date: Tue, 26 May 2020 13:54:46 +0200
From: Martin Lucina <martin@lucina.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Mail-Followup-To (was Re: Xen PVH domU start-of-day VCPU state)
Message-ID: <20200526115446.GA24386@nodbug.lucina.net>
Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <36363b39-c8c1-82bf-af37-f0d917844bb4@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <36363b39-c8c1-82bf-af37-f0d917844bb4@citrix.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tuesday, 26.05.2020 at 11:58, Andrew Cooper wrote:
> On 26/05/2020 09:52, Martin Lucina wrote:
> > On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> >> On 25/05/2020 17:42, Jürgen Groß wrote:
> >>> You need to set up virtual addressing and enable 64-bit mode before using
> >>> a 64-bit GDT.
> >>>
> >>> See Mini-OS source arch/x86/x86_hvm.S
> >> Or
> >> https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> >>
> >> But yes - Juergen is correct. Until you have enabled long mode, lgdt
> >> will only load the bottom 32 bits of GDTR.base.
> > Ah, I missed Jurgen's and your reply here.
> 
> So the mailing list is doing something evil and setting:
> 
> Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
>  =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
>  xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
>  anil@recoil.org, dave@recoil.org
> 
> which causes normal replies to cut you out.

I _think_ I've fixed this, it was due to ancient Mutt configuration (using
xensource.com / xen.org !?) for xen-devel at my end.

Dropped the other direct Ccs to lessen the noise, but I have no real way of
testing without replying, so here goes.

-mato


From xen-devel-bounces@lists.xenproject.org Tue May 26 11:59:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 11:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdYEa-0001tC-2a; Tue, 26 May 2020 11:59:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vgeY=7I=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdYEZ-0001t6-Gu
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 11:59:03 +0000
X-Inumbo-ID: 4719d3a2-9f48-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4719d3a2-9f48-11ea-81bc-bc764e2007e4;
 Tue, 26 May 2020 11:58:57 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:45642
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdYES-000V8O-L6 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Tue, 26 May 2020 12:58:56 +0100
Subject: Re: Mail-Followup-To (was Re: Xen PVH domU start-of-day VCPU state)
To: martin@lucina.net
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <36363b39-c8c1-82bf-af37-f0d917844bb4@citrix.com>
 <20200526115446.GA24386@nodbug.lucina.net>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <14a27bd6-7013-2cbb-e202-05f0b32caf9a@citrix.com>
Date: Tue, 26 May 2020 12:58:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200526115446.GA24386@nodbug.lucina.net>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26/05/2020 12:54, Martin Lucina wrote:
> On Tuesday, 26.05.2020 at 11:58, Andrew Cooper wrote:
>> On 26/05/2020 09:52, Martin Lucina wrote:
>>> On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
>>>> On 25/05/2020 17:42, Jürgen Groß wrote:
>>>>> You need to set up virtual addressing and enable 64-bit mode before using
>>>>> a 64-bit GDT.
>>>>>
>>>>> See Mini-OS source arch/x86/x86_hvm.S
>>>> Or
>>>> https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
>>>>
>>>> But yes - Juergen is correct.  Until you have enabled long mode, lgdt
>>>> will only load the bottom 32 bits of GDTR.base.
>>> Ah, I missed Jurgen's and your reply here.
>> So the mailing list is doing something evil and setting:
>>
>> Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
>>     =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
>>     xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
>>     anil@recoil.org, dave@recoil.org
>>
>> which causes normal replies to cut you out.
> I _think_ I've fixed this, it was due to ancient Mutt configuration (using
> xensource.com / xen.org !?) for xen-devel at my end.
>
> Dropped the other direct Ccs to lessen the noise, but I have no real way of
> testing without replying, so here goes.

Sorry - still no luck.  Had to add you back in manually.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 26 12:41:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 12:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdYtf-0006Iv-1q; Tue, 26 May 2020 12:41:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdYte-0006Iq-JK
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 12:41:30 +0000
X-Inumbo-ID: 355e72e8-9f4e-11ea-9dbe-bc764e2007e4
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 355e72e8-9f4e-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 12:41:24 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 93C19122804;
 Tue, 26 May 2020 14:41:23 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590496883;
 bh=mxUxU5t9xa4Ak/X6Fq1E38zK1T3h4xJHYO9Uxz717QI=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=N/ttzeYNqHDj/uLecT8xdoCkI9UMRPxwZHYz66WxZcWhHiY7NXIPaTQAXr/eJRFSV
 AoSSP3IQLc5TnXHgXgF2Gzgc1UU7Js0PAgoxfptStuy/LMz1ABWXiY+vbtlswr0Adw
 F577C6xq+ylyGDBmNJuTrKg/J0gJh4Xx6lHdcDNV4vsKgFxYmTp8rqPx5tlWwijepS
 KOqj0tW84V1ElpBIFhIoZ591GNNArX7IOa6l3to7Z0zS6mArfLBZGMScjzoohQxaaN
 0gFr6hUdsZ6RAFjLRxzDC0lSTelvrFuTm9sSVRBfEFBLjFRK4Ke6zftaeR1MmMcZnC
 i3nm4WMLwux1w==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 7A7D6268436E; Tue, 26 May 2020 14:41:23 +0200 (CEST)
Date: Tue, 26 May 2020 14:41:23 +0200
From: Martin Lucina <martin@lucina.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Mail-Followup-To (was Re: Xen PVH domU start-of-day VCPU state)
Message-ID: <20200526124123.GA25283@nodbug.lucina.net>
Mail-Followup-To: Martin Lucina <martin@lucina.net>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <36363b39-c8c1-82bf-af37-f0d917844bb4@citrix.com>
 <20200526115446.GA24386@nodbug.lucina.net>
 <14a27bd6-7013-2cbb-e202-05f0b32caf9a@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <14a27bd6-7013-2cbb-e202-05f0b32caf9a@citrix.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tuesday, 26.05.2020 at 12:58, Andrew Cooper wrote:
> On 26/05/2020 12:54, Martin Lucina wrote:
> > On Tuesday, 26.05.2020 at 11:58, Andrew Cooper wrote:
> >> On 26/05/2020 09:52, Martin Lucina wrote:
> >>> On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> >>>> On 25/05/2020 17:42, Jürgen Groß wrote:
> >>>>> You need to set up virtual addressing and enable 64-bit mode before using
> >>>>> a 64-bit GDT.
> >>>>>
> >>>>> See Mini-OS source arch/x86/x86_hvm.S
> >>>> Or
> >>>> https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> >>>>
> >>>> But yes - Juergen is correct. Until you have enabled long mode, lgdt
> >>>> will only load the bottom 32 bits of GDTR.base.
> >>> Ah, I missed Jurgen's and your reply here.
> >> So the mailing list is doing something evil and setting:
> >>
> >> Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
> >>  =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
> >>  xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
> >>  anil@recoil.org, dave@recoil.org
> >>
> >> which causes normal replies to cut you out.
> > I _think_ I've fixed this, it was due to ancient Mutt configuration (using
> > xensource.com / xen.org !?) for xen-devel at my end.
> >
> > Dropped the other direct Ccs to lessen the noise, but I have no real way of
> > testing without replying, so here goes.
> 
> Sorry - still no luck. Had to add you back in manually.

How about now?

-mato


From xen-devel-bounces@lists.xenproject.org Tue May 26 12:42:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 12:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdYuU-0006Nt-Bb; Tue, 26 May 2020 12:42:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vgeY=7I=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdYuT-0006Nn-Eu
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 12:42:21 +0000
X-Inumbo-ID: 55ca768a-9f4e-11ea-a636-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 55ca768a-9f4e-11ea-a636-12813bfff9fa;
 Tue, 26 May 2020 12:42:18 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:47048
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdYuQ-000yfY-Jg (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Tue, 26 May 2020 13:42:18 +0100
Subject: Re: Mail-Followup-To (was Re: Xen PVH domU start-of-day VCPU state)
To: Martin Lucina <martin@lucina.net>, xen-devel@lists.xenproject.org,
 mirageos-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <36363b39-c8c1-82bf-af37-f0d917844bb4@citrix.com>
 <20200526115446.GA24386@nodbug.lucina.net>
 <14a27bd6-7013-2cbb-e202-05f0b32caf9a@citrix.com>
 <20200526124123.GA25283@nodbug.lucina.net>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a6a7f5f7-6a96-3477-2239-bdd13eb00395@citrix.com>
Date: Tue, 26 May 2020 13:42:17 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200526124123.GA25283@nodbug.lucina.net>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26/05/2020 13:41, Martin Lucina wrote:
> On Tuesday, 26.05.2020 at 12:58, Andrew Cooper wrote:
>> On 26/05/2020 12:54, Martin Lucina wrote:
>>> On Tuesday, 26.05.2020 at 11:58, Andrew Cooper wrote:
>>>> On 26/05/2020 09:52, Martin Lucina wrote:
>>>>> On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
>>>>>> On 25/05/2020 17:42, Jürgen Groß wrote:
>>>>>>> You need to set up virtual addressing and enable 64-bit mode before using
>>>>>>> a 64-bit GDT.
>>>>>>>
>>>>>>> See Mini-OS source arch/x86/x86_hvm.S
>>>>>> Or
>>>>>> https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
>>>>>>
>>>>>> But yes - Juergen is correct.  Until you have enabled long mode, lgdt
>>>>>> will only load the bottom 32 bits of GDTR.base.
>>>>> Ah, I missed Jurgen's and your reply here.
>>>> So the mailing list is doing something evil and setting:
>>>>
>>>> Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
>>>>     =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
>>>>     xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
>>>>     anil@recoil.org, dave@recoil.org
>>>>
>>>> which causes normal replies to cut you out.
>>> I _think_ I've fixed this, it was due to ancient Mutt configuration (using
>>> xensource.com / xen.org !?) for xen-devel at my end.
>>>
>>> Dropped the other direct Ccs to lessen the noise, but I have no real way of
>>> testing without replying, so here goes.
>> Sorry - still no luck.  Had to add you back in manually.
> How about now?

That works.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 26 12:44:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 12:44:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdYwQ-0006bG-Ri; Tue, 26 May 2020 12:44:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdYwP-0006aN-1B
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 12:44:21 +0000
X-Inumbo-ID: 9dacff18-9f4e-11ea-a636-12813bfff9fa
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9dacff18-9f4e-11ea-a636-12813bfff9fa;
 Tue, 26 May 2020 12:44:19 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id B40E5122804;
 Tue, 26 May 2020 14:44:18 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590497058;
 bh=DCk1HFmCWgmAdqJZvcRKU/bEbj5+p//dr2VZiaADZ60=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=N1gUJLExhUISbdI+NDunp+bQwwnT9Cq+5rDSIAmeSQVlGRi3crYWxNPl8fUJA10qZ
 gTAxHtBti9DrgLOzYQ0lXII72lJV+us9sxX10Mzr3D+ywFax69ZH//XpF2PP4J37J6
 7OnPy7Nvu5dJKbTqJ74QeEgg3/BKbweMSGGR/fNocOE710B20Jkmv5031JQf6fczFF
 M1nhuEB2IPCaEWuf1VwDdt4S+w5RSwoksOTzPV477EFDXAKcDxKzS1t0D8AOWNiUu6
 lkUt9MysCtp+Go/vdTExZ8+k3PBNbjweY20uNCyjfO3PYBXgFQ2PZ893Ru2s7wKgXy
 YmsDo7J8NgbMA==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 984BC268436E; Tue, 26 May 2020 14:44:18 +0200 (CEST)
Date: Tue, 26 May 2020 14:44:18 +0200
From: Martin Lucina <martin@lucina.net>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Mail-Followup-To (was Re: Xen PVH domU start-of-day VCPU state)
Message-ID: <20200526124418.GB25283@nodbug.lucina.net>
Mail-Followup-To: Martin Lucina <martin@lucina.net>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <36363b39-c8c1-82bf-af37-f0d917844bb4@citrix.com>
 <20200526115446.GA24386@nodbug.lucina.net>
 <14a27bd6-7013-2cbb-e202-05f0b32caf9a@citrix.com>
 <20200526124123.GA25283@nodbug.lucina.net>
 <a6a7f5f7-6a96-3477-2239-bdd13eb00395@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a6a7f5f7-6a96-3477-2239-bdd13eb00395@citrix.com>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tuesday, 26.05.2020 at 13:42, Andrew Cooper wrote:
> On 26/05/2020 13:41, Martin Lucina wrote:
> > On Tuesday, 26.05.2020 at 12:58, Andrew Cooper wrote:
> >> On 26/05/2020 12:54, Martin Lucina wrote:
> >>> On Tuesday, 26.05.2020 at 11:58, Andrew Cooper wrote:
> >>>> On 26/05/2020 09:52, Martin Lucina wrote:
> >>>>> On Monday, 25.05.2020 at 17:59, Andrew Cooper wrote:
> >>>>>> On 25/05/2020 17:42, Jürgen Groß wrote:
> >>>>>>> You need to set up virtual addressing and enable 64-bit mode before using
> >>>>>>> a 64-bit GDT.
> >>>>>>>
> >>>>>>> See Mini-OS source arch/x86/x86_hvm.S
> >>>>>> Or
> >>>>>> https://xenbits.xen.org/gitweb/?p=people/andrewcoop/xen-test-framework.git;a=blob;f=arch/x86/hvm/head.S;h=f7dc72b58ab9ec68538f0087969ab6f72d181d80;hb=HEAD
> >>>>>>
> >>>>>> But yes - Juergen is correct. Until you have enabled long mode, lgdt
> >>>>>> will only load the bottom 32 bits of GDTR.base.
> >>>>> Ah, I missed Jurgen's and your reply here.
> >>>> So the mailing list is doing something evil and setting:
> >>>>
> >>>> Mail-Followup-To: Andrew Cooper <andrew.cooper3@citrix.com>,
> >>>>  =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
> >>>>  xen-devel@lists.xenproject.org, mirageos-devel@lists.xenproject.org,
> >>>>  anil@recoil.org, dave@recoil.org
> >>>>
> >>>> which causes normal replies to cut you out.
> >>> I _think_ I've fixed this, it was due to ancient Mutt configuration (using
> >>> xensource.com / xen.org !?) for xen-devel at my end.
> >>>
> >>> Dropped the other direct Ccs to lessen the noise, but I have no real way of
> >>> testing without replying, so here goes.
> >> Sorry - still no luck. Had to add you back in manually.
> > How about now?
> 
> That works.

Ok, TIL that I've been doing "subscribe" vs. "lists" in Mutt wrong. Here's
a good explanation of how it's intended to work:

https://lists.debian.org/debian-user/2003/05/msg05016.html

TL;DR if you want a Mail-Followup-To, use "lists", otherwise use
"subscribe".

-mato


From xen-devel-bounces@lists.xenproject.org Tue May 26 13:07:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 13:07:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdZIJ-0008Ve-R2; Tue, 26 May 2020 13:06:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Zc16=7I=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1jdZII-0008VZ-3n
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 13:06:58 +0000
X-Inumbo-ID: c6fdb274-9f51-11ea-9947-bc764e2007e4
Received: from mail-lj1-x241.google.com (unknown [2a00:1450:4864:20::241])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6fdb274-9f51-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 13:06:57 +0000 (UTC)
Received: by mail-lj1-x241.google.com with SMTP id w10so24441602ljo.0
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 06:06:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=pA/KPitDzDn71tBmji2Hne4OFzrRzYGwCFdjGoKaubU=;
 b=lXlW3S6YNmMsBKt2gkfn1fxYmPIKLfa2dWjgFxBj99n+gkASpyg4LSbqawo7YOqaHr
 Y+K40fSRLEsun0zcn0cXe91KXZhJnt9o4yeYGaCPuxJ64Dp2qWFhxnkSKIaG3BF5vLxu
 gapn2/Tl0lleWds09KWb+r4w2DROx3Hna0iqRurUV4BZn/i5TfZ02WK9nnWkTOAWCmxj
 zqUUL9owN/3ilx4aAFReeJIxv3TDWUhuPZVSjPctcxYBBGUit0bXZ4RJrTB5g183UQYL
 +alWOM7Cfj2MG/UacJGUGdN5oVoyW3RLheR5Kt8nO+so+OwWXHQki4KSlaDZVkPfiQhv
 6ARg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=pA/KPitDzDn71tBmji2Hne4OFzrRzYGwCFdjGoKaubU=;
 b=LHkLM2XKPsjEJRwXGJ65PSL+FITpCFk+xexxtTJ2o4mJLCeSy3MbiNmt5p3jOzzEkY
 OWf9/U6t3eXOezsiFK7nopJxAOMjNAskK99b9WsnyG7jQojROz3DKP20YASjjbHor7WI
 VkdZsBD29kGRWh42ay/AWtJ6oh9wnpypYiRrEX601hAhwC3iKX7g/h35R6YWp6vUL7JO
 UNvjscx3CCxcHsgMqMTnB/fwDUuGYIKNdvL/jqDXvs4tL1oV9LGFrmOzVIx3EkOonH9p
 /5pRBTHwfocDVTZNlvONB4XLHDTN/oXUH/pCrykfUcLf8m/6aCboi8DQgd3kQJ3cKaGq
 SpHA==
X-Gm-Message-State: AOAM530rfNi6sOqyhwzm1Hb+s4nD5szTnz/Oz5zlVJCFraZhJ9UYFpoB
 yY1hgOrO81R6rCaGyjZsvnPhi8y7SRdUjYKH3e4=
X-Google-Smtp-Source: ABdhPJzWbI2umDYcfXTOHNuNk+H1mlfS3jOJaXvIHLpyqOfekDoUUjVpxKuGIxWaL0/uBWM/DlG0pKu3mn/nn/fsg4E=
X-Received: by 2002:a2e:8053:: with SMTP id p19mr592714ljg.199.1590498416322; 
 Tue, 26 May 2020 06:06:56 -0700 (PDT)
MIME-Version: 1.0
References: <20200525025506.225959-1-jandryuk@gmail.com>
 <3986B3CE-1730-443C-BD10-D2161C2A75F4@citrix.com>
In-Reply-To: <3986B3CE-1730-443C-BD10-D2161C2A75F4@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 26 May 2020 09:06:45 -0400
Message-ID: <CAKf6xpt3ALKd2F8bP5ui+VhvhSWrTG+Hj_5TQSezOtUm_2A99w@mail.gmail.com>
Subject: Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 6:50 AM George Dunlap <George.Dunlap@citrix.com> wrote:
>
>
>
> > On May 25, 2020, at 3:55 AM, Jason Andryuk <jandryuk@gmail.com> wrote:
> >
> > Add qemu-xen linux device model stubdomain to the Toolstack section as a
> > Tech Preview.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> > SUPPORT.md | 6 ++++++
> > 1 file changed, 6 insertions(+)
> >
> > diff --git a/SUPPORT.md b/SUPPORT.md
> > index e3a366fd56..25becc9192 100644
> > --- a/SUPPORT.md
> > +++ b/SUPPORT.md
> > @@ -153,6 +153,12 @@ Go (golang) bindings for libxl
> >
> >     Status: Experimental
> >
> > +### Linux device model stubdomains
> > +
> > +Support for running qemu-xen device model in a linux stubdomain.
> > +
> > +    Status: Tech Preview
>
> Acked-by: George Dunlap <george.dunlap@citrix.com>
>
> Out of curiosity, what do you think is missing to be able to declare this
> ‘Supported’?  Are there any features missing, or do we just need to add a
> test to osstest?

Yeah, adding testing would be good.  From this list of limitations:
 - PCI passthrough requires permissive mode
 - at most 26 emulated disks are supported (more are still available
as PV disks)
 - only one nic is supported
 - graphics output (VNC/SDL/Spice) not supported

PCI passthrough requiring permissive mode is fine for now.  26
emulated disks is probably fine forever.  We should have support for
multiple nics, and I have an idea for that.

The lack of graphics output is probably the biggest limitation at this time.
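
For context, a sketch of the xl domain configuration that exercises this feature (option names as documented in xl.cfg(5) around this time; the stubdomain kernel/ramdisk paths are hypothetical and depend on how the Linux stubdomain images were built and installed):

```
# HVM guest using qemu-xen running in a Linux stubdomain
type = "hvm"
device_model_version = "qemu-xen"
device_model_stubdomain_override = 1
# For a Linux-based (rather than MiniOS) stubdomain, the stubdomain's
# kernel and ramdisk are supplied explicitly (hypothetical paths):
stubdomain_kernel  = "/usr/lib/xen/boot/stubdom-linux-kernel"
stubdomain_ramdisk = "/usr/lib/xen/boot/stubdom-linux-rootfs"
```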

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 26 13:35:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 13:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdZje-0002iB-2E; Tue, 26 May 2020 13:35:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ta6a=7I=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdZjc-0002i6-P5
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 13:35:12 +0000
X-Inumbo-ID: b9344fdc-9f55-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9344fdc-9f55-11ea-8993-bc764e2007e4;
 Tue, 26 May 2020 13:35:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AF507ADD7;
 Tue, 26 May 2020 13:35:13 +0000 (UTC)
Subject: Re: [PATCH] x86: extend coverage of HLE "bad page" workaround
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <b238f66d-37a9-3080-4f2b-90225ea17102@suse.com>
 <424d1b72-5eb6-f2bc-20fe-e59bacda8dd9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c27d838e-0331-3cab-25bf-dd16b4645152@suse.com>
Date: Tue, 26 May 2020 15:35:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <424d1b72-5eb6-f2bc-20fe-e59bacda8dd9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 13:17, Andrew Cooper wrote:
> On 26/05/2020 07:49, Jan Beulich wrote:
>> Respective Core Gen10 processor lines are affected, too.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -6045,6 +6045,8 @@ const struct platform_bad_page *__init g
>>      case 0x000506e0: /* errata SKL167 / SKW159 */
>>      case 0x000806e0: /* erratum KBL??? */
>>      case 0x000906e0: /* errata KBL??? / KBW114 / CFW103 */
>> +    case 0x000a0650: /* erratum Core Gen10 U/H/S 101 */
>> +    case 0x000a0660: /* erratum Core Gen10 U/H/S 101 */
> 
> This is mired in complexity.
> 
> The enumeration of MSR_TSX_CTRL (from the TAA fix, but architectural
> moving forwards on any TSX-enabled CPU) includes a confirmation that HLE
> no longer exists/works.  This applies to IceLake systems, but possibly
> not their initial release configuration (hence, via a later microcode
> update).
> 
> HLE is also disabled in microcode on all older parts for errata reasons,
> so in practice it doesn't exist anywhere now.
> 
> I think it is safe to drop this workaround, and this does seem a more
> simple option than encoding which microcode turned HLE off (which sadly
> isn't covered by the spec updates, as even when turned off, HLE is still
> functioning according to its spec of "may speed things up, may do
> nothing"), or the interactions with the CPUID hiding capabilities of
> MSR_TSX_CTRL.

I'm afraid I don't fully follow: For one, does what you say imply HLE is
no longer enumerated in CPUID? If so, and if we assume all CPU models
listed have had suitable ucode updates issued, we could indeed drop the
workaround (as taking effect only when HLE is enumerated). But then this
erratum does not have the usual text effectively meaning that a ucode
update is or will be available to address the issue; instead it says
that BIOS or VMM can reserve the respective address range. This -
assuming the alternative you describe is indeed viable - then is surely
a much more intrusive workaround than needed. Which I wouldn't assume
they would suggest in such a case.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 26 13:41:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 13:41:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdZpR-0003dE-Ms; Tue, 26 May 2020 13:41:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdZpQ-0003d9-5u
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 13:41:12 +0000
X-Inumbo-ID: 8f023f8e-9f56-11ea-9947-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f023f8e-9f56-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 13:41:10 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: AQ6KFv75vtoyFTBXTTrWdirrylRMCQuaPyjc3tRsE8PWFf6JUCefM9uKsLnUOyXZCtOo2dC6KX
 K8BEZ6amR7K+w86YgQgPb3ZbDH4rX8oCgronbcA0J7Wr7ome17INQT3KqU7rf0pOlb3tHkkIW7
 +xCgzObzL/rSHjAkC4Sl2/8lfEMycJ258/zc42fO4j/ldrEjTvjp2jyodDuP+YDwpAbgQvy5QG
 bsQxcEUWos5bmBMIyW44KeMguUIyVewrtQsm+pq9auC6Xuc+ehzsadOMzRygRgj43cHQWFN2Vv
 INE=
X-SBRS: None
X-MesageID: 18718884
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24269.7281.921460.211014@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 14:41:05 +0100
To: George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 2/5] golang: Add a variable for the libxl source directory
In-Reply-To: <20200522161240.3748320-3-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-3-george.dunlap@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("[PATCH 2/5] golang: Add a variable for the libxl source directory"):
> ...rather than duplicating the path in several places.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 26 13:56:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 13:56:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jda3X-0004fx-UJ; Tue, 26 May 2020 13:55:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ta6a=7I=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jda3W-0004fs-Kn
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 13:55:46 +0000
X-Inumbo-ID: 98926c52-9f58-11ea-a64a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 98926c52-9f58-11ea-a64a-12813bfff9fa;
 Tue, 26 May 2020 13:55:45 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 99161ADBB;
 Tue, 26 May 2020 13:55:47 +0000 (UTC)
Subject: Re: [PATCH] x86: refine guest_mode()
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <1704f4f6-7e77-971c-2c94-4f6a6719c34a@citrix.com>
 <5bbe6425-396c-d934-b5af-53b594a4afbc@suse.com>
 <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
 <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
 <20200520151326.GM54375@Air-de-Roger>
 <38d546f9-8043-8d94-8298-8fd035078a8a@suse.com>
 <20200522104844.GY54375@Air-de-Roger>
 <a31bd761-54eb-56b8-7c60-93202d26e7d0@suse.com>
 <20200526105652.GD38408@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dfa3604a-d53e-ae0c-fe24-099b135b308e@suse.com>
Date: Tue, 26 May 2020 15:55:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200526105652.GD38408@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 12:56, Roger Pau Monné wrote:
> On Fri, May 22, 2020 at 02:00:22PM +0200, Jan Beulich wrote:
>> On 22.05.2020 12:48, Roger Pau Monné wrote:
>>> On Fri, May 22, 2020 at 11:52:42AM +0200, Jan Beulich wrote:
>>>> On 20.05.2020 17:13, Roger Pau Monné wrote:
>>>>> OK, so I think I'm starting to understand this all. Sorry it's taken
>>>>> me so long. So it's my understanding that diff != 0 can only happen in
>>>>> Xen context, or when in an IST that has a different stack (ie: MCE, NMI
>>>>> or DF according to current.h) and running in PV mode?
>>>>>
>>>>> Wouldn't it then be fine to use (r)->cs & 3 to check we are in guest
>>>>> mode if diff != 0? I see a lot of other places where cs & 3 is already
>>>>> used to that effect AFAICT (like entry.S).
>>>>
>>>> Technically this would be correct afaics, but the idea with all this
>>>> is (or should I say "looks to be"?) to have the checks be as tight as
>>>> possible, to make sure we don't mistakenly consider something "guest
>>>> mode" which really isn't. IOW your suggestion would be fine with me
>>>> if we could exclude bugs anywhere in the code. But since this isn't
>>>> realistic, I consider your suggestion to be relaxing things by too
>>>> much.
>>>
>>> OK, so I take it that (long term) we might also want to change the cs & 3
>>> checks from entry.S to check against __HYPERVISOR_CS explicitly?
>>
>> I didn't think so, no (not the least because of there not being any
>> guarantee afaik that EFI runtime calls couldn't play with segment
>> registers; they shouldn't, yes, but there's a lot of other "should"
>> many don't obey). Those are guaranteed PV-only code paths. The
>> main issue here is that ->cs cannot be relied upon when a frame
>> points at HVM state.
> 
> Well, if it points at HVM state it could equally have __HYPERVISOR_CS
> set by the guest.

No, that's not the point. ->cs will never be __HYPERVISOR_CS in that
case, as we never store the guest's CS selector there. Instead
hvm_invalidate_regs_fields() clobbers the field in debug builds (with
a value resulting in RPL 3), but zero (i.e. a value implying RPL 0)
remains in place in release builds.

Instead of doing this clobbering in debug mode only, we could - as I
think I did suggest before - clobber always, but just once during vCPU
init rather than on every VM exit. In debug mode we could then instead
check that the dummy values didn't themselves get clobbered.

> Will things work anyway if you get here from an exception generated by
> EFI code that has changed the code segment? You are going to hit the
> assert at least, since diff will be != 0 and cs != __HYPERVISOR_CS?

What would guarantee the latter? Additionally they could in principle
also have switched stacks then, i.e. diff may then also be larger than
PRIMARY_STACK_SIZE, in which case - with the patch in place - the
assertion is bypassed altogether.

> I would prefer to keep things coherent by either using cs & 3 or
> cs == __HYPERVISOR_CS everywhere if possible, as I'm still unsure of
> the benefit of using __HYPERVISOR_CS.

See above.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 26 14:06:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 14:06:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdaDg-0005jB-0r; Tue, 26 May 2020 14:06:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdaDe-0005j6-Ly
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 14:06:14 +0000
X-Inumbo-ID: 0e31e4a1-9f5a-11ea-a64b-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e31e4a1-9f5a-11ea-a64b-12813bfff9fa;
 Tue, 26 May 2020 14:06:13 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: NzXzWeVY9LrJHh/AvpJpANxVDPhB3QGgrhdeI9O0SjnVKY/CguAusOHcWxuWJFZY7vxNN1EAs+
 WoG51D0zCnljLEJAy3ZkugXWbMc3dfbmnlWauCxgDuUkTa8dYDJtPkNisPHd5EQ+I8lShm/qEq
 RdJg+kyN5X+lVPTlA65in2U5DzsiIuIS3Rk9tNbDAwVjFdji+GXQ3UNY4y130TiSZtvMexcFJR
 O2o3ROfsmNl0XnhN5srRF383xhk11p4BDOYdXfUh1xkL/hfYm2AXxL/eJMCdEdmDiKiPNSGltS
 Csk=
X-SBRS: None
X-MesageID: 18800847
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-URL-LookUp-ScanningError: 1
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24269.8019.97048.52370@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 14:53:23 +0100
To: George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 3/5] libxl: Generate golang bindings in libxl Makefile
In-Reply-To: <20200522161240.3748320-4-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-4-george.dunlap@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("[PATCH 3/5] libxl: Generate golang bindings in libxl Makefile"):
> +.PHONY: idl-external
> +idl-external:
> +	$(MAKE) -C $(XEN_ROOT)/tools/golang/xenlight idl-gen

Unfortunately this kind of thing is forbidden.  At least, without a
rigorous proof that this isn't a concurrency hazard.

The problem is that with parallel make, the concurrency correctness
principles are as follows:
(1) different targets use nonoverlapping temporary and output files
    (makefile authors' responsibility)
(2) one invocation of make won't make the same target twice at the
    same time (fundamental principle of operation for make)
(3) the same makefile (or different makefiles with overlapping
    targets) may not be entered multiple times in parallel
    (build system authors' responsibility; precludes most use of
    make -C to sibling directories rather than to children)
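
A sketch of how principle (3) gets violated in practice (the idl-external rule follows the quoted patch; the sibling makefile's shape and the $(GO) recipe are hypothetical):

```make
# tools/libxl/Makefile (as in the quoted patch):
idl-external:
	$(MAKE) -C $(XEN_ROOT)/tools/golang/xenlight idl-gen  # enters a sibling dir

# tools/golang/xenlight/Makefile (hypothetical shape):
idl-gen:
	$(GO) generate ./...   # writes generated bindings in place

# Hazard: a parallel top-level "make -j" that also descends into
# tools/golang/xenlight can now run idl-gen twice concurrently,
# with both instances writing the same generated files.
```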

A correctness proof to make an exception would involve demonstrating
that the tools/golang directories never touch this file when invoked
as part of a recursive build.  NB, consider the clean targets too.

Alternatively, move the generated golang files to tools/libxl maybe,
and perhaps leave symlinks behind.

Or convert the whole (of tools/, maybe) to nonrecursive make using eg
subdirmk :-).  https://diziet.dreamwidth.org/5763.html

Sorry,
Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 26 14:11:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 14:11:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdaIA-0006aO-JT; Tue, 26 May 2020 14:10:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdaI9-0006aJ-Jy
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 14:10:53 +0000
X-Inumbo-ID: b50a87c8-9f5a-11ea-a64b-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b50a87c8-9f5a-11ea-a64b-12813bfff9fa;
 Tue, 26 May 2020 14:10:52 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: vZMTF4Qyd0Rdcer/lXpd7GdNRTFjtilJq0YVovdZlfHGcquLi6ofAcRq/r5mDmfKbN+q3Xi3Q7
 hdiHs8NDjlQEJRhrWjX9Mwku2E+b+SejYrRTru9+86upOTQBRXSl7SC8TFVjEej/CGmdZHgZbF
 HPcgkD7eTUZvPJnONXtvAZozf51IOFCCQbVKT0XboXcitNSxkxcVZUMp7m1IhSYm74rAP61lqE
 Y7vPm+qw4d5kfC/gPdcWC8+HGQ11Ml/WaeQR/fcL0MwGeDDhhKJg8eXXKy7owgPJD5y6SZZ0gK
 0hs=
X-SBRS: None
X-MesageID: 18441054
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24269.8059.28506.353748@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 14:54:03 +0100
To: George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH 5/5] gitignore: Ignore golang package directory
In-Reply-To: <20200522161240.3748320-6-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-6-george.dunlap@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Nick
 Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("[PATCH 5/5] gitignore: Ignore golang package directory"):
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

I have to say that finding a directory src/ in gitignore is very
startling.

This directory src/ contains only output files ?

Is there not a risk that humans will try to edit it ?

Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 26 14:11:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 14:11:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdaIw-0006dk-TE; Tue, 26 May 2020 14:11:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdaIw-0006da-7F
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 14:11:42 +0000
X-Inumbo-ID: d24348ac-9f5a-11ea-a64b-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d24348ac-9f5a-11ea-a64b-12813bfff9fa;
 Tue, 26 May 2020 14:11:41 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GtppemxZpmjLTC13PCoZMb124ksaalluAF2xf4LMMrwSYkCMYCAnOezRvX7Qv5TsKuaTPt/JAX
 k8+zb8EfEld55eZ6AeGfbcChX5BiwK+CB4RlqX9UCurbjVEGgl2UbfnH4Nf8sN/+BFiLXSLaZS
 facOY+TzKviwzDN0aPtR0bTndHH15E0ZjZc10JU+c9GJWmEeNgM27dlr0OitnNy6gK3h3t6v2m
 v/744H8j2Pw87W/V3h10OP/AzUW6j3Bw/1kGWEWus4j2GSETTyJ8/9y4BE/v2xF4rgplvgyDVE
 nZQ=
X-SBRS: None
X-MesageID: 18801744
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24269.8453.965170.734723@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 15:00:37 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
In-Reply-To: <3986B3CE-1730-443C-BD10-D2161C2A75F4@citrix.com>
References: <20200525025506.225959-1-jandryuk@gmail.com>
 <3986B3CE-1730-443C-BD10-D2161C2A75F4@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Jason Andryuk <jandryuk@gmail.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack"):
> Acked-by: George Dunlap <george.dunlap@citrix.com>
> 
> Out of curiosity, what do you think is missing to be able to declare this ‘Supported’?  Are there any features missing, or do we just  need to add a test to osstest?

I think from my point of view that would be sufficient, but others may
have other concerns.

Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 26 14:13:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 14:13:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdaKD-0006ku-7D; Tue, 26 May 2020 14:13:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdaKB-0006kj-KB
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 14:12:59 +0000
X-Inumbo-ID: 001dba00-9f5b-11ea-81bc-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 001dba00-9f5b-11ea-81bc-bc764e2007e4;
 Tue, 26 May 2020 14:12:58 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8jVSF0r7W1MiM9ewOW0jKrkPRHAUpf/tywmL5jk+QXdniHMqcQlwCb4dVWZYU3KnHr/5gv9A70
 knEpGMrWJHBBM1axt6n8CUPv+34EGrQfAsqej/PHt66MPKIaeuG3zjYl2khoT0KRiTEHCm1b1y
 oVnKCDB3w1NmjyKKp9pdb3ranjsWwMNx6z19df9tAbAqL+D9Xolo8YiP1dFWjUiLhvm4JE8eie
 dtQMh+LeZQK9ssEBmQUvK9HC6IFFWat19bfFd3ZiKOQk5m9cu5SzDGi1vp+QAtfJLq2mS5TyPw
 kxQ=
X-SBRS: None
X-MesageID: 18465246
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24269.8336.90027.993820@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 14:58:40 +0100
To: George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather than
 open-coding
In-Reply-To: <20200522161240.3748320-5-george.dunlap@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-5-george.dunlap@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("[PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather than open-coding"):
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 26 14:13:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 14:13:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdaL1-0006qf-Gm; Tue, 26 May 2020 14:13:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdaL0-0006qX-EU
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 14:13:50 +0000
X-Inumbo-ID: 1eafe4ca-9f5b-11ea-a64b-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1eafe4ca-9f5b-11ea-a64b-12813bfff9fa;
 Tue, 26 May 2020 14:13:49 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: a8e1hJP9tcy9W6KUzPsqSDXcAyUSgsjT2CQ28rO/DwSG1bVqVJdBBImOsBpm2VTW94PyzOv8Y1
 s93RDwryycUdBHFey0xCK7O7668Oy1dcSdOcVxNNdTgbDLx6QCnKf22I1vOrRsDH0N+34Tjpda
 PKRc4gC9LYKqdQJHRXR0elxZzXTMOWBsFwsy9u9fCi97SVQWlnODqdzxq92vrC9p2q9pLJKvHB
 7UhypLSKV2vcLh0y0jdC2oEyNpbI/JDZtvNJuYt/hEj6xv86Cn7U+kOKHhGuH/ytj+xVkyBqny
 rAA=
X-SBRS: None
X-MesageID: 18441670
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24269.8360.504075.118119@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 14:59:04 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [PATCH] SUPPORT: Add linux device model stubdom to Toolstack
In-Reply-To: <20200525025506.225959-1-jandryuk@gmail.com>
References: <20200525025506.225959-1-jandryuk@gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew
 Cooper <Andrew.Cooper3@citrix.com>, George Dunlap <George.Dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("[PATCH] SUPPORT: Add linux device model stubdom to Toolstack"):
> Add qemu-xen linux device model stubdomain to the Toolstack section as a
> Tech Preview.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 26 14:39:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 14:39:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdajc-0000Lf-J2; Tue, 26 May 2020 14:39:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdajb-0000La-Qt
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 14:39:15 +0000
X-Inumbo-ID: abf75f04-9f5e-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abf75f04-9f5e-11ea-8993-bc764e2007e4;
 Tue, 26 May 2020 14:39:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=L9XIyFu6C/hpOZ/sHwuBnYy3C9RV5TetydhMcbBYHTk=; b=BdyjSwVDehE1yVQfshRSHBayj
 hBXkzU9oJr0h7Oj4c3xdniQD99ZNmsVgG1RkVG1VHj5LuZH6aS7X0udsjEKtYiBTvHjNgwMYje/em
 VCwTs3aoVR5ydHT/gGOXkxAYg0QysabqOUfVAxPgQae+bpo9Uioq3OQvQDumHKKRiqNIA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdaja-000573-PM; Tue, 26 May 2020 14:39:14 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdaja-0008PV-Gt; Tue, 26 May 2020 14:39:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdaja-0004Me-Fm; Tue, 26 May 2020 14:39:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150384-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150384: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=d9b29088603f8438160eb4852cedd85cb7c61a19
X-Osstest-Versions-That: xen=354e8318d5a9b6f32fbd3c01d1a9f1970007010b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 14:39:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150384 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150384/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d9b29088603f8438160eb4852cedd85cb7c61a19
baseline version:
 xen                  354e8318d5a9b6f32fbd3c01d1a9f1970007010b

Last test of basis   150367  2020-05-25 17:02:04 Z    0 days
Testing same since   150384  2020-05-26 12:01:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   354e8318d5..d9b2908860  d9b29088603f8438160eb4852cedd85cb7c61a19 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 26 14:57:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 14:57:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdb0h-00028y-3M; Tue, 26 May 2020 14:56:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdb0g-00028t-5N
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 14:56:54 +0000
X-Inumbo-ID: 223f4440-9f61-11ea-8993-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 223f4440-9f61-11ea-8993-bc764e2007e4;
 Tue, 26 May 2020 14:56:53 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: VsQLdcC/tJ4zRDV10UdsWSp1gVaeujGl3vFAN0uwJlEkTS9rBiII3Rv+yBoTKBpvyg09yJgVAk
 AkWqpy1owbL6R6ryuoU6ir1syBXI/9gJ+FgMR3YqYtZtyN0HrFeucvYF78aVx/k5R+Jp0+en93
 W2rutPnLLFPBhf6aZ9CesBf8g4QIGTEDn4HhzOFz9S2LW5WvFzeDiSh8FjWUWdGaQL6H6lo0xt
 1FnPOmHYGDdI/X4wGrKPEs0p0tAuHyTCrb+xJXr65KoeLnvXE0hOL61bZmsqc5zsyakKlE4+PP
 +hI=
X-SBRS: 2.7
X-MesageID: 18751974
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="18751974"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH 3/5] libxl: Generate golang bindings in libxl Makefile
Thread-Topic: [PATCH 3/5] libxl: Generate golang bindings in libxl Makefile
Thread-Index: AQHWMFPgaNjzBfpXokm+gPTH75vVE6i6R9yAgAARtwA=
Date: Tue, 26 May 2020 14:56:48 +0000
Message-ID: <B1814837-4E4B-4795-887E-769E3D25608A@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-4-george.dunlap@citrix.com>
 <24269.8019.97048.52370@mariner.uk.xensource.com>
In-Reply-To: <24269.8019.97048.52370@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <6D619586DFB76D468E064C20DB71D3E0@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On May 26, 2020, at 2:53 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
> 
> George Dunlap writes ("[PATCH 3/5] libxl: Generate golang bindings in libxl Makefile"):
>> +.PHONY: idl-external
>> +idl-external:
>> +	$(MAKE) -C $(XEN_ROOT)/tools/golang/xenlight idl-gen
> 
> Unfortunately this kind of thing is forbidden.  At least, without a
> rigorous proof that this isn't a concurrency hazard.
> 
> The problem is that with parallel make, the concurrency correctness
> principles are as follows:
> (1) different targets use nonoverlapping temporary and output files
>    (makefile authors' responsibility)
> (2) one invocation of make won't make the same target twice at the
>    same time (fundamental principle of operation for make)
> (3) the same makefile (or different makefiles with overlapping
>    targets) may not be entered multiple times in parallel
>    (build system authors' responsibility; precludes most use of
>    make -C to sibling directories rather than to children)
> 
> A correctness proof to make an exception would involve demonstrating
> that the tools/golang directories never touch this file when invoked
> as part of a recursive build.  NB, consider the clean targets too.

The tools/golang/xenlight/Makefile:*.gen.go target will be triggered by
xenlight/Makefile:idl-gen and xenlight/Makefile:build.

xenlight/Makefile:build is called from tools/golang/Makefile:subdirs-all,
which is called from tools/Makefile:subdirs-all.

xenlight/Makefile:idl-gen is called from tools/libxl/Makefile:idl-external,
which is called from tools/libxl/Makefile:all, which is called from
tools/Makefile:subdirs-all.

tools/Makefile:subdirs-all is implemented as a non-parallel for loop
executing over SUBDIRS-y; tools/golang comes after tools/libxl in that
list, and so tools/golang:all will never be called until after
tools/libxl:all has completed.  This invariant (that
tools/golang/Makefile:all must not be called until
tools/libxl/Makefile:all has completed) must be kept regardless of this
patch, since xenlight/Makefile:build depends on other output from
tools/libxl/Makefile:all.

So as long as nothing else calls tools/libxl:all or
tools/libxl:idl-external, this should be safe.  We could add a comment
near xenlight/Makefile:idl-gen saying it must only be called from
libxl/Makefile:idl-external, and one near libxl/Makefile:idl-external
saying it must not be called recursively from another Makefile.

> Alternatively, move the generated golang files to tools/libxl maybe,
> and perhaps leave symlinks behind.

Would that result in the files being accessible to the golang build tools
at https://xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight ?
If not, it defeats the purpose of having the files checked into the tree.

> Or convert the whole (of tools/, maybe) to nonrecursive make using eg
> subdirmk :-).  https://diziet.dreamwidth.org/5763.html

This isn’t really a practical suggestion: I don’t have time to refactor
the entire libxl Makefile tree, and certainly don’t have time by Friday.

 -George
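The serialised subdirectory ordering George relies on above can be sketched roughly as follows; the SUBDIRS value and the echo placeholder are illustrative stand-ins, not the real tools/Makefile:

```shell
# Rough sketch of the non-parallel subdirs-all loop described above:
# each subdirectory's "all" target runs strictly in list order, so
# tools/golang is only entered after tools/libxl has completed.
# SUBDIRS here is illustrative; the real list lives in tools/Makefile.
set -e
SUBDIRS="libxl golang"
order=""
for d in $SUBDIRS; do
    # placeholder for: $(MAKE) -C tools/$d all
    echo "make -C tools/$d all"
    order="$order $d"
done
```

The key property is that the loop body is sequential even under `make -j`, which is what makes the libxl-before-golang invariant hold.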


From xen-devel-bounces@lists.xenproject.org Tue May 26 15:01:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 15:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdb4v-00031t-NF; Tue, 26 May 2020 15:01:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vgeY=7I=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdb4u-00031o-2b
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 15:01:16 +0000
X-Inumbo-ID: be8c096e-9f61-11ea-a65c-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be8c096e-9f61-11ea-a65c-12813bfff9fa;
 Tue, 26 May 2020 15:01:15 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:51506
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdb4p-000WRX-KF (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Tue, 26 May 2020 16:01:11 +0100
Subject: Re: [PATCH] x86: extend coverage of HLE "bad page" workaround
To: Jan Beulich <jbeulich@suse.com>
References: <b238f66d-37a9-3080-4f2b-90225ea17102@suse.com>
 <424d1b72-5eb6-f2bc-20fe-e59bacda8dd9@citrix.com>
 <c27d838e-0331-3cab-25bf-dd16b4645152@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2c0ff1f3-ee0c-6d14-a51c-d82b65338005@citrix.com>
Date: Tue, 26 May 2020 16:01:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <c27d838e-0331-3cab-25bf-dd16b4645152@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26/05/2020 14:35, Jan Beulich wrote:
> On 26.05.2020 13:17, Andrew Cooper wrote:
>> On 26/05/2020 07:49, Jan Beulich wrote:
>>> Respective Core Gen10 processor lines are affected, too.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -6045,6 +6045,8 @@ const struct platform_bad_page *__init g
>>>      case 0x000506e0: /* errata SKL167 / SKW159 */
>>>      case 0x000806e0: /* erratum KBL??? */
>>>      case 0x000906e0: /* errata KBL??? / KBW114 / CFW103 */
>>> +    case 0x000a0650: /* erratum Core Gen10 U/H/S 101 */
>>> +    case 0x000a0660: /* erratum Core Gen10 U/H/S 101 */
>> This is mired in complexity.
>>
>> The enumeration of MSR_TSX_CTRL (from the TAA fix, but architectural
>> moving forwards on any TSX-enabled CPU) includes a confirmation that HLE
>> no longer exists/works.  This applies to IceLake systems, but possibly
>> not their initial release configuration (hence, via a later microcode
>> update).
>>
>> HLE is also disabled in microcode on all older parts for errata reasons,
>> so in practice it doesn't exist anywhere now.
>>
>> I think it is safe to drop this workaround, and this does seem a more
>> simple option than encoding which microcode turned HLE off (which sadly
>> isn't covered by the spec updates, as even when turned off, HLE is still
>> functioning according to its spec of "may speed things up, may do
>> nothing"), or the interactions with the CPUID hiding capabilities of
>> MSR_TSX_CTRL.
> I'm afraid I don't fully follow: For one, does what you say imply HLE is
> no longer enumerated in CPUID?

No - sadly not.  For reasons of "not repeating the Haswell/Broadwell
microcode fiasco", the HLE bit will continue to exist and be set. 
(Although on CascadeLake and later, you can turn it off with MSR_TSX_CTRL.)

It was always a weird CPUID bit.  You were supposed to put
XACQUIRE/XRELEASE prefixes on your legacy locking, and it would be a nop
on old hardware and go faster on newer hardware.

There is nothing runtime code needs to look at the HLE bit for, except
perhaps for UI reporting purposes.

> But then this
> erratum does not have the usual text effectively meaning that an ucode
> update is or will be available to address the issue; instead it says
> that BIOS or VMM can reserve the respective address range.

This is not surprising at all.  Turning off HLE was an unrelated
activity, and I bet the link went unnoticed.

> This - assuming the alternative you describe is indeed viable - is
> surely a much more intrusive workaround than needed, which I wouldn't
> assume they would suggest in such a case.

My suggestion was to drop the workaround, not to complicate it with a
microcode revision matrix.

~Andrew
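As a rough illustration of the reporting-only use of the bit discussed above: on Linux/x86 the kernel mirrors the HLE CPUID bit (leaf 7, subleaf 0, EBX bit 4) as the "hle" flag in /proc/cpuinfo, so a UI can report it without issuing raw CPUID. This is an editorial sketch, not code from the thread; it simply prints "no" on machines without the flag or without /proc:

```shell
# Report whether CPUID enumerates HLE.  As discussed above, the bit may
# stay set even where microcode has disabled elision, so this is only
# suitable for reporting, not for deciding runtime behaviour.
if grep -qw hle /proc/cpuinfo 2>/dev/null; then
    hle=yes
else
    hle=no
fi
echo "HLE enumerated: $hle"
```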


From xen-devel-bounces@lists.xenproject.org Tue May 26 15:20:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 15:20:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdbN0-0004Cc-LC; Tue, 26 May 2020 15:19:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LE0D=7I=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jdbN0-0004CX-1r
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 15:19:58 +0000
X-Inumbo-ID: 5b7af0e4-9f64-11ea-9947-bc764e2007e4
Received: from mail-lj1-x242.google.com (unknown [2a00:1450:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5b7af0e4-9f64-11ea-9947-bc764e2007e4;
 Tue, 26 May 2020 15:19:57 +0000 (UTC)
Received: by mail-lj1-x242.google.com with SMTP id e4so2518529ljn.4
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 08:19:57 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=LkQKRSobVUAHtal/uWlHiNLYAfFspWICdN15Jpx80uc=;
 b=j1ewBqqXkChli7grKCYNwueXzWyyZaWWId1En+lzsZCj3tEIFvqKsYtTuvJiFRJXhF
 Pa29OX38mPo+Z15+0xzBLXit9d2BiKik3oT3X2yvzIFroCrbWOCdazvrsvcfazVQFpae
 rQyg3s5XcIedXETdhbxFXnLY1XW4rzW6XkF5XH0ioobPIFFfcgktMk8Q64+OI/lJ3tjG
 Nu3YeVZqyLPsl7O/MkW/YFQrh6FtAsNl2IDE+hu1DIZozWr+GQ3HSF4GpFVPlhYBvzBW
 c3U7Tm3U3OiMPXAsERRZ+43BDFx4OEb5THsqYHqITDsni/Je7Rr8bI4fReop9dkv0vjY
 WScQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=LkQKRSobVUAHtal/uWlHiNLYAfFspWICdN15Jpx80uc=;
 b=mb42kSRcWe70NtfIvpiUamxqQFPBeTyYQ0d3yTDOIxAmeuVq0GwZEkDD2qQCKS56/I
 URHL41tLqlvizOROMgFXUMfzEgHL9fGcmOzXs8iazfBXiwRQ/xuldlNEFS1TeRQHRXmj
 AXdVOsUF+ew5tNSiOxrVs4hw/xk2iD5seuEbfT/I5SXDekMm13/rZCGUU4OOf6qwD/w1
 MCHKLe3r0C1822fLzhx9gbz47Vn4Uaqhh6MO+6AocFzlUXSdlUJkPbR4QzrEr18lrai5
 pe15MNNAW7FcnyRaARnns8ta4fsF8vMrLWwFp/B0Sx85PKGbkMHmC2j/4wY1TnAkot+e
 r4ww==
X-Gm-Message-State: AOAM5321g7uS7/xMBZxjUlWZ9MH88Iznkh9UOW4/rgM4J2D7Gnj6skCY
 mWO4PhcnbI4+jxiMkZ9mZT1kYyhz+/VAu8BlbuQ=
X-Google-Smtp-Source: ABdhPJxTyBk69b2Q5nBvmxqW4yz5PZRYtGqxez+3t+OmIZPurtYWzEdL8ixeJqTMvnDyDVqMvsuMR96n44hCXzlmPCs=
X-Received: by 2002:a2e:9953:: with SMTP id r19mr798833ljj.235.1590506396351; 
 Tue, 26 May 2020 08:19:56 -0700 (PDT)
MIME-Version: 1.0
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-5-george.dunlap@citrix.com>
 <CAEBZRSfF8KAnzz5LW8GhcuJu=2rex3d6bvgz=a7-kLMp-itjqQ@mail.gmail.com>
 <CAEBZRScpycd2_A8moi68AA3asbsUSRjkW1kUVdpsdwgx-SZKpQ@mail.gmail.com>
 <8040BE07-B452-4036-ADE8-6E5CA0ED41A9@citrix.com>
In-Reply-To: <8040BE07-B452-4036-ADE8-6E5CA0ED41A9@citrix.com>
From: Nick Rosbrook <rosbrookn@gmail.com>
Date: Tue, 26 May 2020 11:19:41 -0400
Message-ID: <CAEBZRSe1DHR6vLzj7j8iB9kkJD2zqELcToU4U9Kbwi6tgdpXPA@mail.gmail.com>
Subject: Re: [PATCH 4/5] golang/xenlight: Use XEN_PKG_DIR variable rather than
 open-coding
To: George Dunlap <George.Dunlap@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 5:31 AM George Dunlap <George.Dunlap@citrix.com> wrote:
>
>
>
> > On May 23, 2020, at 5:48 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> >
> >>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> >> Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>
> >
> > Oh, I just noticed your commit message calls the variable
> > "XEN_PKG_DIR", but it's actually named "GOXL_PKG_DIR."
>
> Oh, weird.  I presume the R-b stands if I fix the title?

Yes, of course.

-NR


From xen-devel-bounces@lists.xenproject.org Tue May 26 15:30:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 15:30:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdbXV-0005xr-N7; Tue, 26 May 2020 15:30:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdbXU-0005xm-VR
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 15:30:48 +0000
X-Inumbo-ID: df2fb900-9f65-11ea-8993-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df2fb900-9f65-11ea-8993-bc764e2007e4;
 Tue, 26 May 2020 15:30:47 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: WPOi4qXKf/pfHPv9Ze4jVAH3PwSLSf2+5uL4bYenxgN0O/x6oiwBKbFwY7HXil2GPaWSdwRn33
 fxD0TWPbcOXVgJtxgnb4JW8aBy2YZ7zjevlhRs+EGuxzBXybIFS23NtQhSWY4vAYTdnHiPZ9xa
 QfRJ54/uUH+w8Fe9eu7hoJCbXDDF9RWXENtLNGOXpN4TaDiu2SEPydpk+jm35nK3KT9l7Cld8w
 wY4un+TBpy+adY942kApJ1Tqv+HhIecxzIlyS2CLTK/dCzI1BDChXupZ9JRY9PClYWbBVo1ZTO
 Jys=
X-SBRS: 2.7
X-MesageID: 19202148
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="19202148"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH 5/5] gitignore: Ignore golang package directory
Thread-Topic: [PATCH 5/5] gitignore: Ignore golang package directory
Thread-Index: AQHWMFPdiIMBJ2MNy0CkFZ0ooWSGmqi6SAuAgAAbAoA=
Date: Tue, 26 May 2020 15:30:43 +0000
Message-ID: <A525D330-BCF9-4998-BEC5-425BA6C26CCF@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-6-george.dunlap@citrix.com>
 <24269.8059.28506.353748@mariner.uk.xensource.com>
In-Reply-To: <24269.8059.28506.353748@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <B7C7329DB0A56D429ED95CC69AAD5C83@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On May 26, 2020, at 2:54 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
> 
> George Dunlap writes ("[PATCH 5/5] gitignore: Ignore golang package directory"):
>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> 
> I have to say that finding a directory src/ in gitignore is very
> startling.
> 
> This directory src/ contains only output files ?

With golang, you don’t really distribute package binaries; you only
distribute source files.

However, we don’t want to wait until someone tries to clone the package
to see if we’ve broken the build; so the current makefile does a “build
test” of the package files.

Before golang’s “modules” feature, the only way to do this was to have
the code to build inside $GOPATH/src/$PACKAGENAME.  We can set GOPATH
but we can’t change the “src” component of that.  So we used to set
GOPATH to $XENROOT/tools/golang, put the files in
$XENROOT/tools/golang/src/$PACKAGENAME, and 

With the “modules” feature, code can be built anywhere; the build at the
moment doesn’t require GOPATH.

If we’re willing to limit ourselves to using versions of golang which
support modules by default (1.11+), then we can probably get rid of this
bit instead.  (And if we do want to support older versions, we should
really add some code in the configure script to determine whether to
build with modules or GOPATH.)

Nick, any opinions?

> Is there not a risk that humans will try to edit it ?

I suppose someone might.  If we decide we want to support older versions
of go, we probably want to figure something out there.  Options:

1. Copy the files to a temp directory instead.  This is complicated
because we have to find a good temp directory, and we’d have to copy
them every time, slowing down the incremental build (though not that
much).

2. Put a file in the generated directory like “GENERATED_DO_NOT_EDIT”.

3. Put them in tools/golang/GENERATED_DO_NOT_EDIT/src instead.

 -George
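The pre-modules GOPATH layout described above can be sketched as follows; the import path example.org/xenlight is a hypothetical stand-in, and the actual build step is shown as a comment since it needs a go toolchain:

```shell
# Sketch of the pre-modules layout: go can only "build test" a package
# that sits under $GOPATH/src/<import path>, and the "src" path
# component is fixed, hence a src/ directory ending up in .gitignore.
XENROOT=$(mktemp -d)
PKG=example.org/xenlight            # hypothetical import path
mkdir -p "$XENROOT/tools/golang/src/$PKG"
GOPATH="$XENROOT/tools/golang"; export GOPATH
# The build test would then be:  go build "$PKG"
echo "package dir: $GOPATH/src/$PKG"
```

With modules (go 1.11+), the module root can live anywhere, which is what removes the need for this src/ indirection.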


From xen-devel-bounces@lists.xenproject.org Tue May 26 15:39:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 15:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdbfD-0006AO-H6; Tue, 26 May 2020 15:38:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vgeY=7I=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdbfC-0006AJ-Hb
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 15:38:46 +0000
X-Inumbo-ID: fc18e6ee-9f66-11ea-a661-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fc18e6ee-9f66-11ea-a661-12813bfff9fa;
 Tue, 26 May 2020 15:38:45 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:52580
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdbf7-000xmp-Lx (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Tue, 26 May 2020 16:38:41 +0100
Subject: Re: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-3-andrew.cooper3@citrix.com>
 <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
 <8f1d68b1-895a-d2a6-4dcb-55b688b03336@citrix.com>
 <b1ef905c-dab6-d1c3-4673-4c06c7e94a0a@suse.com>
 <560c3bce-211a-52ab-c919-9ca1ab9beab3@citrix.com>
 <45545b0c-2f0d-1de8-88ec-472d0a110eaa@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8f5dc985-502c-3ed5-1e4e-980a91acfba4@citrix.com>
Date: Tue, 26 May 2020 16:38:41 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <45545b0c-2f0d-1de8-88ec-472d0a110eaa@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 09:50, Jan Beulich wrote:
> On 18.05.2020 18:54, Andrew Cooper wrote:
>> On 11/05/2020 16:09, Jan Beulich wrote:
>>> On 11.05.2020 17:01, Andrew Cooper wrote:
>>>> On 04/05/2020 14:08, Jan Beulich wrote:
>>>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>>>> For one, they render the vector in a different base.
>>>>>>
>>>>>> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
>>>>>> mnemonic, which starts bringing the code/diagnostics in line with the Intel
>>>>>> and AMD manuals.
>>>>> For this "bringing in line" purpose I'd like to see whether you could
>>>>> live with some adjustments to how you're currently doing things:
>>>>> - NMI is nowhere prefixed by #, hence I think we'd better not do so
>>>>>   either; may require embedding the #-es in the names[] table, or not
>>>>>   using N() for NMI
>>>> No-one is going to get confused at seeing #NMI in an error message.  I
>>>> don't mind juggling the existing names table, but anything more
>>>> complicated is overkill.
>>>>
>>>>> - neither Coprocessor Segment Overrun nor vector 0x0f have a mnemonic
>>>>>   and hence I think we shouldn't invent one; just treat them like
>>>>>   other reserved vectors (of which at least vector 0x09 indeed is one
>>>>>   on x86-64)?
>>>> This I disagree with.  Coprocessor Segment Overrun *is* its name in both
>>>> manuals, and the avoidance of vector 0xf is clearly documented as well,
>>>> due to it being the default PIC Spurious Interrupt Vector.
>>>>
>>>> Neither CSO nor SPV is expected to be encountered in practice, but if
>>>> they are, highlighting them is a damn-sight more helpful than pretending
>>>> they don't exist.
>>> How is them occurring (and getting logged with their vector numbers)
>>> any different from other reserved, acronym-less vectors? I particularly
>>> didn't suggest to pretend they don't exist; instead I did suggest that
>>> they are as reserved as, say, vector 0x18. By inventing an acronym and
>>> logging this instead of the vector number you'll make people other than
>>> you have to look up what the odd acronym means iff such an exception
>>> ever got raised.
>> You snipped the bits in the patch where both the vector number and
>> acronym are printed together.
>>
>> Anyone who doesn't know the vector has to look it up anyway, at which
>> point they'll find that what Xen prints out matches what both manuals
>> say.  OTOH, people who know what a coprocessor segment overrun or PIC
>> spurious vector is won't need to look it up.
> And who knows how to decipher the non-standard CPO and SPV (which are
> what triggered my comments in the first place).

CSO, and no.

Anyone who doesn't know the text still has the vector number to work
with, and still needs to look it up.

At which point they will observe that the text is appropriate in context.

> What I continue to fail to
> see is why these reserved vectors need treatment different from all
> others.

Because it has nothing to do with reserved-ness.

It is about providing clarifying information (for all vectors which
currently have, or have ever had, meaning) for mere mortals who can't
(or rather, don't want to) debug crashes based on raw numbers alone.

> In addition I'm having trouble seeing how the default spurious
> PIC vector matters for us - we program the PIC to vectors 0x20-0x2f,
> i.e. a spurious PIC0 IRQ would show up at vector 0x27. (I notice we
> still blindly assume there's a pair of PICs in the first place.)

That's not relevant.  What is relevant is the actions taken when we see
vector 15 being raised.

Hitting CSO means that the legacy #FERR_FREEZE external signal has been
wired up (and it is very SMP-unsafe, which is why it was phased out with
the introduction of integrated x87s).

Hitting SPV means that the PIC wasn't reprogrammed and something wonky
is going on with one of the input pins.

Both of these are strictly more helpful in a log than "something went
wrong - figure it out yourself", and both indicate that something is
very wrong with the system.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 26 15:42:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 15:42:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdbir-00070U-3G; Tue, 26 May 2020 15:42:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=N6lx=7I=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdbip-00070P-Bs
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 15:42:31 +0000
X-Inumbo-ID: 7f183838-9f67-11ea-a664-12813bfff9fa
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f183838-9f67-11ea-a664-12813bfff9fa;
 Tue, 26 May 2020 15:42:25 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id C74C1122804;
 Tue, 26 May 2020 17:42:24 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590507744;
 bh=6B1mwmaG5KlpcOX1ZvW8LOn+uZ4wZogEf3CjfrNwJ3M=;
 h=Date:From:To:Subject:References:In-Reply-To:From;
 b=i7drQIPpJjc9AbVLB4iBbP2qe0AvwzA2hleRl0LUhof8taNt21yw+TfJljfrxfhr4
 R0NzttbfyipltZIkPQCGSHiGq+zSGCsSHEcJ9rAQq4YAZIt5ZyLzJi0n0GdtCUlp+n
 AbAAx5R+BhiPqzpfe0QgX5epJx2CKatbmpnnxCqFRaLeIusGZzWCEFPUhtH/k56EVW
 LKAJvH9CzOF5tkCCDuV488q3Rjd2mNfs2GU2g9QUz7Px5QNLDBTsfyI4f8C/u5ccS9
 eHVJIE7rcrQncNRrHdwCIHxI/pZBNoOFTRaIyA0EBBykaP50ozFfvVyB0CbX3OCsc7
 cTA4JHcJF61Cg==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 7DC03268436E; Tue, 26 May 2020 17:42:24 +0200 (CEST)
Date: Tue, 26 May 2020 17:42:24 +0200
From: Martin Lucina <martin@lucina.net>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 mirageos-devel@lists.xenproject.org, dave@recoil.org,
 xen-devel@lists.xenproject.org
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526154224.GC25283@nodbug.lucina.net>
Mail-Followup-To: Martin Lucina <martin@lucina.net>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 mirageos-devel@lists.xenproject.org, dave@recoil.org,
 xen-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
 <20200526101203.GE5942@nodbug.lucina.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526101203.GE5942@nodbug.lucina.net>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Oh! I think I've found a solution, even though I don't entirely understand the
problem/root cause:

On Tuesday, 26.05.2020 at 12:12, Martin Lucina wrote:
> > On Tue, May 26, 2020 at 11:34:21AM +0200, Roger Pau Monn wrote:
> > Forgot to ask, but can you also add the output of readelf -lW
> > <kernel>?
> 
>     Elf file type is EXEC (Executable file)
>     Entry point 0x1001e0
>     There are 7 program headers, starting at offset 64
> 
>     Program Headers:
>       Type           Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
>       INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
>           [Requesting program interpreter: /nonexistent/solo5/]
>       LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00626c 0x00626c R E 0x1000
>       LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed48 RW  0x1000
>       NOTE           0x0080ac 0x00000000001070ac 0x00000000001070ac 0x000018 0x000018 R   0x4
>       NOTE           0x00f120 0x00000000001070c4 0x00000000001070c4 0x000014 0x000000 R   0x4
                                                                               ^^^^^^^^

I should have picked up on the above, but thought it wasn't relevant.

>       NOTE           0x008088 0x0000000000107088 0x0000000000107088 0x000024 0x000024 R   0x4
>       NOTE           0x008000 0x0000000000107000 0x0000000000107000 0x000088 0x000088 R   0x4
> 
>      Section to Segment mapping:
>       Segment Sections...
>        00     .interp
>        01     .interp .text .rodata .eh_frame
>        02     .note.solo5.manifest .note.solo5.abi .note.solo5.not-openbsd .data .bss

And also the missing .note.solo5.xen above.

>        03     .note.solo5.not-openbsd
>        04     .note.solo5.xen
>        05     .note.solo5.abi
>        06     .note.solo5.manifest

Turns out that the .note.solo5.xen section as defined in boot.S was not
marked allocatable, and that was doing <something> that was confusing our
linker script[1] (?).

If I make this simple change:

--- a/bindings/xen/boot.S
+++ b/bindings/xen/boot.S
@@ -32,7 +32,7 @@
 #define ENTRY(x) .text; .globl x; .type x,%function; x:
 #define END(x)   .size x, . - x

-.section .note.solo5.xen
+.section .note.solo5.xen, "a", @note

        .align  4
        .long   4

then I get the expected output from readelf -lW, and I can get as far as
the C _start() with no issues!

FWIW, here's the diff of readelf -lW before/after:

--- before	2020-05-26 17:36:46.117885855 +0200
+++ after	2020-05-26 17:38:07.090508322 +0200
@@ -8,9 +8,9 @@
   INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
       [Requesting program interpreter: /nonexistent/solo5/]
   LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00615c 0x00615c R E 0x1000
-  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed28 RW  0x1000
+  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x006120 0x00dd28 RW  0x1000
   NOTE           0x0080ac 0x00000000001070ac 0x00000000001070ac 0x000018 0x000018 R   0x4
-  NOTE           0x00f120 0x00000000001070c4 0x00000000001070c4 0x000014 0x000000 R   0x4
+  NOTE           0x0080c4 0x00000000001070c4 0x00000000001070c4 0x000014 0x000014 R   0x4
   NOTE           0x008088 0x0000000000107088 0x0000000000107088 0x000024 0x000024 R   0x4
   NOTE           0x008000 0x0000000000107000 0x0000000000107000 0x000088 0x000088 R   0x4

@@ -18,7 +18,7 @@
   Segment Sections...
    00     .interp
    01     .interp .text .rodata .eh_frame
-   02     .note.solo5.manifest .note.solo5.abi .note.solo5.not-openbsd .data .bss
+   02     .note.solo5.manifest .note.solo5.abi .note.solo5.not-openbsd .note.solo5.xen .data .bss
    03     .note.solo5.not-openbsd
    04     .note.solo5.xen
    05     .note.solo5.abi

-mato

[1] https://github.com/mato/solo5/blob/xen/bindings/xen/solo5_xen.lds


From xen-devel-bounces@lists.xenproject.org Tue May 26 16:22:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 16:22:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdcKo-0002eO-8E; Tue, 26 May 2020 16:21:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LE0D=7I=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jdcKn-0002eJ-L3
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 16:21:45 +0000
X-Inumbo-ID: fdbb4c66-9f6c-11ea-8993-bc764e2007e4
Received: from mail-qv1-xf34.google.com (unknown [2607:f8b0:4864:20::f34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fdbb4c66-9f6c-11ea-8993-bc764e2007e4;
 Tue, 26 May 2020 16:21:45 +0000 (UTC)
Received: by mail-qv1-xf34.google.com with SMTP id v15so9706737qvr.8
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 09:21:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:content-transfer-encoding:in-reply-to
 :user-agent; bh=3e404zhK40BBVf7jsCAC5LYdQeS8cu+XadHO8+AJo8Y=;
 b=JfU+f55wOh00HAi9EVvVIEWjX0/DFX8tR92LwZHVa53x6dsayBt6d4sUevz5RMkHGr
 XjE4eZuBCvDp2o0AFz4jWzTvVLxAa/qxZ9Pc3uCd6T5wbaLtf8GchrGC4Bnm6dTsIOmY
 wVnyLGVMC+ppztqiadNB1I2d4BJoaA8hdp3Vu94CMOR0L9EpQUM3JilRXOrStUl0dBak
 JkoX8qZPhA8M2Px44899D0Qz20Wywqd4fTcG4fCaKTLjYeQOPAupmGPi70/uC40cJ2T/
 WC1jzAh9x5qaQ9cpnaF57noCnLHVBY9XMrYwkpc/M66t1aLZ+4A+xkSSWCX0hJ974Y13
 Zg8w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:content-transfer-encoding
 :in-reply-to:user-agent;
 bh=3e404zhK40BBVf7jsCAC5LYdQeS8cu+XadHO8+AJo8Y=;
 b=S9r/WKv738kWXY5Wgq49rflIdHIfTas2dGY6xt7bTlMxX99u1IfBJe5ukcifQ8L49J
 Rv54bd9iRLCyJrQkRzZuXhwpUp3U6rn2PFhO5DiVorTrgzp33RCsO65+OZlGVvjZ5Jh1
 nMFFQjiUa7X+UmfKhKz57PKU/0a/u9e5YCMUMe0o1Jzdr43cdHasC9drkNMDerIHJnEV
 2TxKQ1Cjf5KX6qstwBe9yuVvPQazwn9R4dI6V4W3NabSRvT4OMpqeuFZfn9Hm+pinmeH
 we32MN8bPIEQmC+SjMRal0ZQl1p/BovRDaEEKaYJ+8TYogWZwj9Mday45t2iJJIB3trj
 2zLA==
X-Gm-Message-State: AOAM530XySmg2+GD2rc8TNbPMPQvSlflNVPCXdWpxilgjICRhouKZnwy
 aqubI46J3+MgCJ/q4XUIaNo=
X-Google-Smtp-Source: ABdhPJyk210gdYaFNoalUbVRBrirbIXc9vU7rJ2kINFf/hkQ3yWyLfV4iDIgNg2R5EF9KENl4FhdlQ==
X-Received: by 2002:a05:6214:1392:: with SMTP id
 g18mr19512692qvz.210.1590510104681; 
 Tue, 26 May 2020 09:21:44 -0700 (PDT)
Received: from six (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id b189sm54033qkg.110.2020.05.26.09.21.43
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Tue, 26 May 2020 09:21:43 -0700 (PDT)
Date: Tue, 26 May 2020 12:21:41 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <George.Dunlap@citrix.com>
Subject: [PATCH 5/5] gitignore: Ignore golang package directory
Message-ID: <20200526162141.GA28056@six>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-6-george.dunlap@citrix.com>
 <24269.8059.28506.353748@mariner.uk.xensource.com>
 <A525D330-BCF9-4998-BEC5-425BA6C26CCF@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <A525D330-BCF9-4998-BEC5-425BA6C26CCF@citrix.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> With golang, you don’t really distribute package binaries; you only distribute source files.
> 
> However, we don’t want to wait until someone tries to clone the package to see if we’ve broken the build; so the current makefile does a “build test” of the package files.
> 
> Before golang’s “modules” feature, the only way to do this was to have the code to build inside $GOPATH/src/$PACKAGENAME.  We can set GOPATH but we can’t change the “src” component of that.  So we used to set GOPATH to $XENROOT/tools/golang, put the files in $XENROOT/tools/golang/src/$PACKAGENAME, and 
> 
> With the “modules” feature, code can be built anywhere; the build at the moment doesn’t require GOPATH.
> 
> If we’re willing to limit ourselves to using versions of golang which support modules by default (1.12+), then we can probably get rid of this bit instead.  (And if we do want to support older versions, we should really add some code in the configure script to determine whether to build with modules or GOPATH.)
> 
> Nick, any opinions?

I can't think of a reason we need to support anything older than go
1.11, so I think it would be fine to get rid of remnants of the GOPATH
build.

-NR


From xen-devel-bounces@lists.xenproject.org Tue May 26 16:30:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 16:30:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdcTH-0003Vz-4Z; Tue, 26 May 2020 16:30:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vX9/=7I=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jdcTG-0003Vu-9a
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 16:30:30 +0000
X-Inumbo-ID: 3596f44a-9f6e-11ea-a676-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3596f44a-9f6e-11ea-a676-12813bfff9fa;
 Tue, 26 May 2020 16:30:28 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: C66bPiiyt6YKoDqn5r/JZkaVknDvTkdz8iCAQJLH4uHbQspyC92LK43ixaC7HZSDdvalXk9Kij
 1QdMNsQHuHF7pXkAYXUZ2pw/ts9V7Tt1sQxU5Afm6l2DzNsepCIotmfFznhD+rlo79bVI7gtF7
 cbxKBQQFF2S7Bm/4QWLc+6fQFfpvIgKv1IKkLyQwEMMZbuz1bxcCI2Dqbh6F0vMsLB6xTpB4Wz
 jE3hf3im7nQQFP3ZCpUfGx7BzTywSkFnS0+Jf4mBzBjVWztu4PnYZzchro9xMScqESKb15tNmO
 qkk=
X-SBRS: 2.7
X-MesageID: 19210299
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="19210299"
Date: Tue, 26 May 2020 18:30:21 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Martin Lucina <martin@lucina.net>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200526163021.GE38408@Air-de-Roger>
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
 <20200526101203.GE5942@nodbug.lucina.net>
 <20200526154224.GC25283@nodbug.lucina.net>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526154224.GC25283@nodbug.lucina.net>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 05:42:24PM +0200, Martin Lucina wrote:
> Oh! I think I've found a solution, even though I don't entirely understand the
> problem/root cause:
> 
> On Tuesday, 26.05.2020 at 12:12, Martin Lucina wrote:
> > > On Tue, May 26, 2020 at 11:34:21AM +0200, Roger Pau Monné wrote:
> > > Forgot to ask, but can you also add the output of readelf -lW
> > > <kernel>?
> > 
> >     Elf file type is EXEC (Executable file)
> >     Entry point 0x1001e0
> >     There are 7 program headers, starting at offset 64
> > 
> >     Program Headers:
> >       Type           Offset   VirtAddr           PhysAddr           FileSiz  MemSiz   Flg Align
> >       INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
> >           [Requesting program interpreter: /nonexistent/solo5/]
> >       LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00626c 0x00626c R E 0x1000
> >       LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed48 RW  0x1000
> >       NOTE           0x0080ac 0x00000000001070ac 0x00000000001070ac 0x000018 0x000018 R   0x4
> >       NOTE           0x00f120 0x00000000001070c4 0x00000000001070c4 0x000014 0x000000 R   0x4
>                                                                                ^^^^^^^^
> 
> I should have picked up on the above, but thought it wasn't relevant.
> 
> >       NOTE           0x008088 0x0000000000107088 0x0000000000107088 0x000024 0x000024 R   0x4
> >       NOTE           0x008000 0x0000000000107000 0x0000000000107000 0x000088 0x000088 R   0x4
> > 
> >      Section to Segment mapping:
> >       Segment Sections...
> >        00     .interp
> >        01     .interp .text .rodata .eh_frame
> >        02     .note.solo5.manifest .note.solo5.abi .note.solo5.not-openbsd .data .bss
> 
> And also the missing .note.solo5.xen above.
> 
> >        03     .note.solo5.not-openbsd
> >        04     .note.solo5.xen
> >        05     .note.solo5.abi
> >        06     .note.solo5.manifest
> 
> Turns out that the .note.solo5.xen section as defined in boot.S was not
> marked allocatable, and that was doing <something> that was confusing our
> linker script[1] (?).

Hm, I would have said there was no need to load notes into memory, and
hence using a MemSize of 0 would be fine.

Maybe libelf loader was somehow getting confused and not loading the
image properly?

Can you paste the output of `xl -vvv create ...` when using the broken
image?

> 
> If I make this simple change:
> 
> --- a/bindings/xen/boot.S
> +++ b/bindings/xen/boot.S
> @@ -32,7 +32,7 @@
>  #define ENTRY(x) .text; .globl x; .type x,%function; x:
>  #define END(x)   .size x, . - x
> 
> -.section .note.solo5.xen
> +.section .note.solo5.xen, "a", @note
> 
>         .align  4
>         .long   4
> 
> then I get the expected output from readelf -lW, and I can get as far as
> the C _start() with no issues!
> 
> FWIW, here's the diff of readelf -lW before/after:
> 
> --- before	2020-05-26 17:36:46.117885855 +0200
> +++ after	2020-05-26 17:38:07.090508322 +0200
> @@ -8,9 +8,9 @@
>    INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
>        [Requesting program interpreter: /nonexistent/solo5/]
>    LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00615c 0x00615c R E 0x1000
> -  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed28 RW  0x1000
> +  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x006120 0x00dd28 RW  0x1000

This seems suspicious: there's a change in the size of the LOAD
segment, but your change to the note type should not affect the LOAD
segment?

Hm, maybe it does because the .note.solo5.xen was considered writable
by default?

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 26 16:33:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 16:33:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdcVq-0003kE-Nj; Tue, 26 May 2020 16:33:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdcVp-0003k8-NT
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 16:33:09 +0000
X-Inumbo-ID: 950d4848-9f6e-11ea-81bc-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 950d4848-9f6e-11ea-81bc-bc764e2007e4;
 Tue, 26 May 2020 16:33:09 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: oS50+zsSVqefo8RHIfeozCbKo1gS6/8O/T+M9H5Qc9wCUsv6qju7t2Tu7Ss+g3r0wIwQIwHTc5
 8Z05fv48sXfoYqvVFEh1rWGIs//eSGhcc/64pIJ0BA7Drve/i3Uo3T3wJ5Pp78drBscpMeiHQt
 M9BS+F6XUbAIAbwHSIfPzr211pr1aUGfj8Cuuu8HSzKXyt40Hx9f2zgyPU74JbDHqj96F21Uj2
 WEYEa54d9mGQbGc8HjvITSH1NHL753i+iZzp/0d6yzes3dxYaNkoHx2CAChfs0EOaWIO/khyvr
 So0=
X-SBRS: 2.7
X-MesageID: 18494331
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="18494331"
From: George Dunlap <George.Dunlap@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
Subject: Re: [PATCH 5/5] gitignore: Ignore golang package directory
Thread-Topic: [PATCH 5/5] gitignore: Ignore golang package directory
Thread-Index: AQHWMFPdiIMBJ2MNy0CkFZ0ooWSGmqi6SAuAgAAbAoCAAA4+gIAAAy4A
Date: Tue, 26 May 2020 16:33:04 +0000
Message-ID: <73E47FDE-098C-4174-8295-5829B9EDA10C@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-6-george.dunlap@citrix.com>
 <24269.8059.28506.353748@mariner.uk.xensource.com>
 <A525D330-BCF9-4998-BEC5-425BA6C26CCF@citrix.com>
 <20200526162141.GA28056@six>
In-Reply-To: <20200526162141.GA28056@six>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <3D406E16ABB8874AA0B5F826109F8ADA@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Nick
 Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>, Jan
 Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 26, 2020, at 5:21 PM, Nick Rosbrook <rosbrookn@gmail.com> wrote:
> 
>> With golang, you don’t really distribute package binaries; you only distribute source files.
>> 
>> However, we don’t want to wait until someone tries to clone the package to see if we’ve broken the build; so the current makefile does a “build test” of the package files.
>> 
>> Before golang’s “modules” feature, the only way to do this was to have the code to build inside $GOPATH/src/$PACKAGENAME.  We can set GOPATH but we can’t change the “src” component of that.  So we used to set GOPATH to $XENROOT/tools/golang, put the files in $XENROOT/tools/golang/src/$PACKAGENAME, and 
>> 
>> With the “modules” feature, code can be built anywhere; the build at the moment doesn’t require GOPATH.
>> 
>> If we’re willing to limit ourselves to using versions of golang which support modules by default (1.12+), then we can probably get rid of this bit instead.  (And if we do want to support older versions, we should really add some code in the configure script to determine whether to build with modules or GOPATH.)
>> 
>> Nick, any opinions?
> 
> I can't think of a reason we need to support anything older than go
> 1.11, so I think it would be fine to get rid of remnants of the GOPATH
> build.

OK, I’ll send a patch to remove the “fake GOPATH build” support.

 -George


From xen-devel-bounces@lists.xenproject.org Tue May 26 16:40:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 16:40:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdccs-0004bL-G7; Tue, 26 May 2020 16:40:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Ta6a=7I=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdccr-0004bG-13
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 16:40:25 +0000
X-Inumbo-ID: 97d1aeb0-9f6f-11ea-a67a-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97d1aeb0-9f6f-11ea-a67a-12813bfff9fa;
 Tue, 26 May 2020 16:40:23 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9A806AD12;
 Tue, 26 May 2020 16:40:24 +0000 (UTC)
Subject: Re: [PATCH] x86: extend coverage of HLE "bad page" workaround
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <b238f66d-37a9-3080-4f2b-90225ea17102@suse.com>
 <424d1b72-5eb6-f2bc-20fe-e59bacda8dd9@citrix.com>
 <c27d838e-0331-3cab-25bf-dd16b4645152@suse.com>
 <2c0ff1f3-ee0c-6d14-a51c-d82b65338005@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0df22405-bda8-8f4d-63b4-e9c4d57843b1@suse.com>
Date: Tue, 26 May 2020 18:40:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <2c0ff1f3-ee0c-6d14-a51c-d82b65338005@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 17:01, Andrew Cooper wrote:
> On 26/05/2020 14:35, Jan Beulich wrote:
>> On 26.05.2020 13:17, Andrew Cooper wrote:
>>> On 26/05/2020 07:49, Jan Beulich wrote:
>>>> Respective Core Gen10 processor lines are affected, too.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -6045,6 +6045,8 @@ const struct platform_bad_page *__init g
>>>>      case 0x000506e0: /* errata SKL167 / SKW159 */
>>>>      case 0x000806e0: /* erratum KBL??? */
>>>>      case 0x000906e0: /* errata KBL??? / KBW114 / CFW103 */
>>>> +    case 0x000a0650: /* erratum Core Gen10 U/H/S 101 */
>>>> +    case 0x000a0660: /* erratum Core Gen10 U/H/S 101 */
>>> This is mired in complexity.
>>>
>>> The enumeration of MSR_TSX_CTRL (from the TAA fix, but architectural
>>> moving forwards on any TSX-enabled CPU) includes a confirmation that HLE
>>> no longer exists/works.  This applies to IceLake systems, but possibly
>>> not their initial release configuration (hence, via a later microcode
>>> update).
>>>
>>> HLE is also disabled in microcode on all older parts for errata reasons,
>>> so in practice it doesn't exist anywhere now.
>>>
>>> I think it is safe to drop this workaround, and this does seem a more
>>> simple option than encoding which microcode turned HLE off (which sadly
>>> isn't covered by the spec updates, as even when turned off, HLE is still
>>> functioning according to its spec of "may speed things up, may do
>>> nothing"), or the interactions with the CPUID hiding capabilities of
>>> MSR_TSX_CTRL.
>> I'm afraid I don't fully follow: For one, does what you say imply HLE is
>> no longer enumerated in CPUID?
> 
> No - sadly not.  For reasons of "not repeating the Haswell/Broadwell
> microcode fiasco", the HLE bit will continue to exist and be set. 
> (Although on CascadeLake and later, you can turn it off with MSR_TSX_CTRL.)
> 
> It was always a weird CPUID bit.  You were supposed to put
> XACQUIRE/XRELEASE prefixes on your legacy locking, and it would be a nop
> on old hardware and go faster on newer hardware.
> 
> There is nothing runtime code needs to look at the HLE bit for, except
> perhaps for UI reporting purposes.

Do you know of some public Intel doc I could reference for all of this,
which I would kind of need in the description of a patch ...

>> But then this
>> erratum does not have the usual text effectively meaning that an ucode
>> update is or will be available to address the issue; instead it says
>> that BIOS or VMM can reserve the respective address range.
> 
> This is not surprising at all.  Turning off HLE was an unrelated
> activity, and I bet the link went unnoticed.
> 
>> This - assuming the alternative you describe is indeed viable - then is surely
>> a much more intrusive workaround than needed. Which I wouldn't assume
>> they would suggest in such a case.
> 
> My suggestion was to drop the workaround, not to complicate it with a
> microcode revision matrix.

... doing this? I don't think I've seen any of this in writing so far,
except by you. (I don't understand how this reply of yours relates to
what I was saying about the spec update. I understand what you are
suggesting. I merely tried to express that I'd have expected Intel to
point out the much easier workaround, rather than just a pretty involved
one.) Otherwise, may I suggest you make such a patch, to make sure it
has an adequate description?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 26 16:46:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 16:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdciR-0004q6-23; Tue, 26 May 2020 16:46:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oCnM=7I=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1jdciQ-0004q1-5B
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 16:46:10 +0000
X-Inumbo-ID: 664aa6a2-9f70-11ea-81bc-bc764e2007e4
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 664aa6a2-9f70-11ea-81bc-bc764e2007e4;
 Tue, 26 May 2020 16:46:09 +0000 (UTC)
Received: from localhost (c-67-164-102-47.hsd1.ca.comcast.net [67.164.102.47])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6A0CD20787;
 Tue, 26 May 2020 16:46:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590511568;
 bh=98RcdAfJdXvYGhomBt3uJoTyCPPiyKT2lI4hSkGoc6U=;
 h=Date:From:To:cc:Subject:In-Reply-To:References:From;
 b=ple5w+MpbkZnIDgB7dx2qegDYxlz5l3fOQML0rEwpo5BGTCKuDL4LswMWVG/He8XU
 vO2ref1LVhL+x7z0wBa1Pk9C/JGfldmajj2TuLQo9yldp1JlFhRNJ49yxznh4BJUEC
 1TlvX3/+Mnq+TuTqLewVASgnR+8fulQpzdo/CF3k=
Date: Tue, 26 May 2020 09:46:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH 12/12] xen/arm: call iomem_permit_access for passthrough
 devices
In-Reply-To: <453b58f8-d9ee-bbe7-ac05-b5268620e79f@xen.org>
Message-ID: <alpine.DEB.2.21.2005260941250.27502@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-12-sstabellini@kernel.org>
 <521c8e55-73e8-950f-2d94-70b0c664bd3d@xen.org>
 <alpine.DEB.2.21.2004291318270.28941@sstabellini-ThinkPad-T480s>
 <f7f01eca-2415-e102-318f-0c58606fda96@xen.org>
 <453b58f8-d9ee-bbe7-ac05-b5268620e79f@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr_Babchuk@epam.com, Stefano Stabellini <stefano.stabellini@xilinx.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, 24 May 2020, Julien Grall wrote:
> On 30/04/2020 14:01, Julien Grall wrote:
> > On 29/04/2020 21:47, Stefano Stabellini wrote:
> > > On Wed, 15 Apr 2020, Julien Grall wrote:
> > > But doesn't it make sense to give domU permission if it is going to get
> > > the memory mapped? But admittedly I can't think of something that would
> > > break because of the lack of the iomem_permit_access call in this code
> > > path.
> > 
> > On Arm, the permissions are only useful if you plan for your DomU to delegate the
> > regions to another domain. As your domain is not even aware it is running on
> > Xen (we don't expose 'xen' node in the DT), it makes little sense to add the
> > permission.
> 
> I actually found one use when helping a user last week. You can dump the list
> of MMIO regions assigned to a guest from Xen Console.
> 
> This will use d->iomem_caps that is modified via iomem_permit_access().
> Without it, there is no easy way to confirm the list of MMIO regions assigned
> to a guest. Although...
> 
> > Even today, you can map IOMEM to a DomU and then revert the permission right
> > after. The IOMEM will still be mapped in the guest and it will act normally.
> 
> ... this would not help the case where permissions are reverted. But I am
> assuming this shouldn't happen for Dom0less.

Thank you for looking into this


> Stefano, I am not sure what's your plan for the series itself for Xen 4.14. I
> think this patch could go in now. Any thoughts?

For the series: I have addressed all comments in my working tree except
for the ones on memory allocation (the thread "xen: introduce
reserve_heap_pages"). It looks like that part requires a complete
rewrite, and it seems that the new code is not trivial to write. So I am
thinking of not targeting 4.14. What do you think? Do you think the new
code should be "easy" enough that I could target 4.14?

For this patch: it is fine to go in now, doesn't have to wait for the
series.


From xen-devel-bounces@lists.xenproject.org Tue May 26 16:57:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 16:57:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdct8-00063G-1T; Tue, 26 May 2020 16:57:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdct7-00063B-Gt
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 16:57:13 +0000
X-Inumbo-ID: f1465156-9f71-11ea-a67a-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1465156-9f71-11ea-a67a-12813bfff9fa;
 Tue, 26 May 2020 16:57:12 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Ed4zbgttKkGzIfaAMvpWOXhlhTutbzcPEmuDnZM8pbCvXqxKC4m9CSxobFnlIikgvFyroi9lQO
 cIgQ2WhxACHkxlhldfZXtDV0TRsJkHQT3Mnre6bKvtu4ABZzaL9M4ueaxcK55zf9cNCmhjG7dZ
 cpmm+vyxv3TdptcpeZTgzOIPgnV8avyYTQF23ztnakf7YA8myDfIHWDcNqBdtUkqfIYGPtGUsj
 YjnBWodCwkiP7xVXXMpxfjFn9VWFWpHy00YQ7z8W7fRmJL97Kd6HwRHesM5BWqtfdFLxB/wYj+
 yCo=
X-SBRS: 2.7
X-MesageID: 18829419
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="18829419"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24269.19042.534647.23671@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 17:57:06 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 3/5] libxl: Generate golang bindings in libxl Makefile
In-Reply-To: <B1814837-4E4B-4795-887E-769E3D25608A@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-4-george.dunlap@citrix.com>
 <24269.8019.97048.52370@mariner.uk.xensource.com>
 <B1814837-4E4B-4795-887E-769E3D25608A@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [PATCH 3/5] libxl: Generate golang bindings in libxl Makefile"):
> tools/Makefile:subdirs-all is implemented as a non-parallel for loop executing over SUBDIRS-y; tools/golang comes after tools/libxl in that list, and so tools/golang:all will never be called until after tools/libxl:all has completed.  This invariant — that tools/golang/Makefile:all must not be called until tools/libxl/Makefile:all has completed — must be kept regardless of this patch, since xenlight/Makefile:build depends on other output from tools/libxl/Makefile:all.

I had not spotted this aspect of the situation.  But the toplevel
Makefile is parallel.  I think this means that make -C works between
different directories in tools/.

Provided no-one says `make all install' (which is a thing that people
expect to work but which is already badly broken).

> So as long as nothing else calls tools/libxl:all or tools/libxl:idl-external, this should be safe.  We could add a comments near xenlight/Makefile:idl-gen saying it must only be called from libxl/Makefile:idl-external; and to libxl/Makefile:idl-external saying it must not be called recursively from another Makefile.

So, err, I'm sold on the original patch, I think.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

I'll answer your other comments anyway:

> > Alternatively, move the generated golang files to tools/libxl maybe,
> > and perhaps leave symlinks behind.
> 
> Would that result in the files being accessible to the golang build tools at https://xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight ?  If not, it defeats the purpose of having the files checked into the tree.

Yes.  git can convey symlinks.  (I'm assuming that the golang build
tools fetch the git objects and do git checkout, rather than
trying to download individual raw files from gitweb...)

> > Or convert the whole (of tools/, maybe) to nonrecursive make using eg
> > subdirmk :-).  https://diziet.dreamwidth.org/5763.html
> 
> This isn’t really a practical suggestion: I don’t have time to refactor the entire libxl Makefile tree, and certainly don’t have time by Friday.

Yes, it wasn't a serious suggestion.  Sorry that that apparently
wasn't clear.

Regards,
Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 26 16:59:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 16:59:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdcve-0006CA-F8; Tue, 26 May 2020 16:59:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5j=7I=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdcvd-0006C5-Fy
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 16:59:49 +0000
X-Inumbo-ID: 4e717dc4-9f72-11ea-9dbe-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e717dc4-9f72-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 16:59:48 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HiL4jJvAzT0oY0O/XFxaWSH6mJVeMrECIAY7/aklf1RLqIVa/d+YREem0bBy9P4/kkVElbpzr6
 bZR64mhmdDkee7+L6lVQwnqVmJ/Jjum6/5bPNtdFM0eOqWxb1EncQ9IXuF1euKtNfSqHHoh8ak
 6gW+1VInzBeDqVfM+TNLATBzqApsugvwh7U/xhZkUqIGQEqgq84C4qE53rRPs9aI2hpT/aanSU
 pMJVjsvPKHODmO/5MOD78GQnJJ7pUjwhiszqiCh29xIFoCDLfHq7kquIfDUHV482NDHQ69LlqO
 mhM=
X-SBRS: 2.7
X-MesageID: 18474234
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="18474234"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24269.19198.604986.160896@mariner.uk.xensource.com>
Date: Tue, 26 May 2020 17:59:42 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH 5/5] gitignore: Ignore golang package directory
In-Reply-To: <A525D330-BCF9-4998-BEC5-425BA6C26CCF@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-6-george.dunlap@citrix.com>
 <24269.8059.28506.353748@mariner.uk.xensource.com>
 <A525D330-BCF9-4998-BEC5-425BA6C26CCF@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: [PATCH 5/5] gitignore: Ignore golang package directory"):
> [explanation]

Sounds quite tangled...

> Nick, any opinions?
...
> > Is there not a risk that humans will try to edit it ?

Anyway ISTM that you have definitely considered this so

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

assuming that you and Nick convince yourselves you've addressed this
possible issue.

> I suppose someone might.  If we decide we want to support older versions of go, we probably want to figure something out there.  Options:
> 
> 1. Copy the files to a temp directory instead.  This is complicated because we have to find a good temp directory, and we’d have to copy them every time, slowing down the incremental build (though not that much).

I don't think that helps much.

> 2. Put a file in the generated directory like “GENERATED_DO_NOT_EDIT”.
> 
> 3. Put them in tools/golang/GENERATED_DO_NOT_EDIT/src instead.

Do they not have a header comment saying DO NOT EDIT ?

3 is pretty ugly.  I'll leave it up to you whether to bother with 2.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Tue May 26 17:07:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 17:07:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdd2r-00075Q-E5; Tue, 26 May 2020 17:07:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdd2q-00075L-BR
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 17:07:16 +0000
X-Inumbo-ID: 58ecf8f4-9f73-11ea-9dbe-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58ecf8f4-9f73-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 17:07:15 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 8xyA8cgVOqztaCYCv9smMqkcUv3MbBxmnB58bnscb5p+c2b1evReH5p3Z7dZlOiT09IDIP6iV3
 1jJvzDO0W5ZvGxXIvn07fSMzKW/eVVVJC9E8q4XXFpcgQUCwxNu71fMYDINKy9R8zvPGkxguEW
 yzZaBNnPzfp5ftsFRxQm+BY7DaVaWKHFS1aSWHTOhqPsTwX7ZNQflPxy+jXbDz6oBeswYke3tp
 8pGWL6e7Ta2uzyCbRoaX2v+rT4gb+YDdHYfji/0d9b+0vYEfAPsMofS4RT6pgJQPt3mh97WvoC
 mV4=
X-SBRS: 2.7
X-MesageID: 18770012
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="18770012"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH 5/5] gitignore: Ignore golang package directory
Thread-Topic: [PATCH 5/5] gitignore: Ignore golang package directory
Thread-Index: AQHWMFPdiIMBJ2MNy0CkFZ0ooWSGmqi6SAuAgAAbAoCAABjdAIAAAheA
Date: Tue, 26 May 2020 17:07:11 +0000
Message-ID: <1D4C50D9-BFA4-42B9-AE76-8E836CAD6430@citrix.com>
References: <20200522161240.3748320-1-george.dunlap@citrix.com>
 <20200522161240.3748320-6-george.dunlap@citrix.com>
 <24269.8059.28506.353748@mariner.uk.xensource.com>
 <A525D330-BCF9-4998-BEC5-425BA6C26CCF@citrix.com>
 <24269.19198.604986.160896@mariner.uk.xensource.com>
In-Reply-To: <24269.19198.604986.160896@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <9E913A99B232AC45B1F42429E7115FBD@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Konrad Wilk <konrad.wilk@oracle.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 Nick Rosbrook <rosbrookn@ainfosec.com>, Julien Grall <julien.grall@arm.com>,
 Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 26, 2020, at 5:59 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
> 
> George Dunlap writes ("Re: [PATCH 5/5] gitignore: Ignore golang package directory"):
>> [explanation]
> 
> Sounds quite tangled...
> 
>> Nick, any opinions?
> ...
>>> Is there not a risk that humans will try to edit it ?
> 
> Anyway ISTM that you have definitely considered this so
> 
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
> 
> assuming that you and Nick convince yourselves you've addressed this
> possible issue.
> 
>> I suppose someone might.  If we decide we want to support older versions of go, we probably want to figure something out there.  Options:
>> 
>> 1. Copy the files to a temp directory instead.  This is complicated because we have to find a good temp directory, and we’d have to copy them every time, slowing down the incremental build (though not that much).
> 
> I don't think that helps much.
> 
>> 2. Put a file in the generated directory like “GENERATED_DO_NOT_EDIT”.
>> 
>> 3. Put them in tools/golang/GENERATED_DO_NOT_EDIT/src instead.
> 
> Do they not have a header comment saying DO NOT EDIT ?

The generated files do, but this copies all the files, including the non-generated ones.

Anyway, it turns out it has nothing to do with go modules per se, but more to do with my quixotic attempt to make it possible to build from stuff installed locally in $PREFIX, rather than having to clone something over the internet.  The current version of the “build test” doesn’t actually use this GOPATH stuff, and works even on versions of golang that don’t have module support.

I’ve got a patch that removes this whole fake-GOPATH thing; I’ll send that along in lieu of patches 4 and 5.

Thanks,
 -George


From xen-devel-bounces@lists.xenproject.org Tue May 26 17:13:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 17:13:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdd8t-0007uU-3s; Tue, 26 May 2020 17:13:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mv6+=7I=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1jdd8r-0007uP-Ol
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 17:13:29 +0000
X-Inumbo-ID: 36e8ca98-9f74-11ea-81bc-bc764e2007e4
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36e8ca98-9f74-11ea-81bc-bc764e2007e4;
 Tue, 26 May 2020 17:13:28 +0000 (UTC)
ARC-Seal: i=1; a=rsa-sha256; t=1590513205; cv=none; 
 d=zohomail.com; s=zohoarc; 
 b=ccXqLQcH5jzsedUplzrBXFOK4mUKY54J4IyDkw1wKYOr40qP1AntXrIOit/sv3J92No1B07P9TX9b+f4k6Ar8Cx2rkuum3PBAmSoq4DmNX5vq4ZEZC3nukGtHJUd3RumnZeFCBx+nomW85TfegXYMoZlEjqvORRzZFnVyVC1vtc=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com;
 s=zohoarc; 
 t=1590513205; h=Content-Type:Date:From:MIME-Version:Message-ID:Subject:To; 
 bh=FVTrQ7tdhD4ExqOEnrw+beAro04cuJgyxbyvs+1v9Tw=; 
 b=bh6vbVXbaFSGjqt3Z+vuj7xjsU4n3zVGSCTfX8MD7FplpwHAAWnrX+04BVUK8D8/Mvvz5o/X/l/Ow73gdUspCgUWxjVjDRKQvhDeplUUpvvIrDxnLVaIXjhZ+76WLVQP5LcS6tZY7SqB6P0gFFxUZnhyDR68EoArhdRb4EFX06g=
ARC-Authentication-Results: i=1; mx.zohomail.com;
 dkim=pass  header.i=apertussolutions.com;
 spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
 dmarc=pass header.from=<dpsmith@apertussolutions.com>
 header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1590513205; 
 s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
 h=Date:From:To:Message-Id:In-Reply-To:Subject:MIME-Version:Content-Type;
 bh=FVTrQ7tdhD4ExqOEnrw+beAro04cuJgyxbyvs+1v9Tw=;
 b=r/Y1yyZlEwXm38ScnyWe0tg6j9s2mLTbzFRk3KCQElpbG4KldnjjrGxIZ2RCVhGP
 shg1Db4ZlAcpNdiOL9Qn31QYske67cqrM+nADOW9nNA0unKAwSx04Zb6PAIjt8vB+Bx
 wJQFoU7+/qs3TOYrxnIOo50JDQbCvLskjfogjih4=
Received: from mail.zoho.com by mx.zohomail.com
 with SMTP id 1590513199327898.1113379833491;
 Tue, 26 May 2020 10:13:19 -0700 (PDT)
Date: Tue, 26 May 2020 13:13:19 -0400
From: Daniel Smith <dpsmith@apertussolutions.com>
To: "xen-devel" <xen-devel@lists.xenproject.org>
Message-Id: <17251f968dd.b28c8ebe731955.2247348003729398828@apertussolutions.com>
In-Reply-To: 
Subject: [BUG] PVH ACPI XSDT table construction
MIME-Version: 1.0
Content-Type: multipart/alternative; 
 boundary="----=_Part_2340908_1359952992.1590513199325"
Importance: Medium
User-Agent: Zoho Mail
X-Mailer: Zoho Mail
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

------=_Part_2340908_1359952992.1590513199325
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Greetings,



I was reviewing the ACPI construction for PVH and discovered what I believe is a flaw in the logic for selecting the XSDT tables. The current logic is,

static bool __init pvh_acpi_xsdt_table_allowed(const char *sig,
                                               unsigned long address,
                                               unsigned long size)
{
    /*
     * DSDT and FACS are pointed to from FADT and thus don't belong
     * in XSDT.
     */
    return (pvh_acpi_table_allowed(sig, address, size) &&
            strncmp(sig, ACPI_SIG_DSDT, ACPI_NAME_SIZE) &&
            strncmp(sig, ACPI_SIG_FACS, ACPI_NAME_SIZE));
}

Unless I am mistaken, the boolean logic in the return statement will always return false resulting in an empty XSDT table. I believe based on the comment what was intended here was,

    return (pvh_acpi_table_allowed(sig, address, size) &&
            !(strncmp(sig, ACPI_SIG_DSDT, ACPI_NAME_SIZE) ||
              strncmp(sig, ACPI_SIG_FACS, ACPI_NAME_SIZE)));

Thanks!

V/r,

Daniel P. Smith

Apertus Solutions, LLC
------=_Part_2340908_1359952992.1590513199325--



From xen-devel-bounces@lists.xenproject.org Tue May 26 17:44:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 17:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jddcM-00022v-Mk; Tue, 26 May 2020 17:43:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jddcL-00022q-Tt
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 17:43:57 +0000
X-Inumbo-ID: 7637081e-9f78-11ea-a68b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7637081e-9f78-11ea-a68b-12813bfff9fa;
 Tue, 26 May 2020 17:43:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=esDhJ7YBMTu8MOKHrrF8Y1IUxoSDsir2T2rgP2jnadI=; b=kwG2jfhFZiY+8NJK0pqQVprid
 oevyjyHU5l5yFvO9+MmmKmJJS+ei+LsKvLSoVzQu6U3BXL3NOTkTpyLQnK0asL39dg4bZpNHJ4IaO
 RZRpGu2IT1MnzGO05rChTxawh9EFaoKeJIDUFiMu+zVXd43EWbkuWuTs1v+gm0Bc9Vj8A=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jddcF-00013V-Cw; Tue, 26 May 2020 17:43:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jddcF-0008BR-3Q; Tue, 26 May 2020 17:43:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jddcF-0001I1-2r; Tue, 26 May 2020 17:43:51 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150388-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150388: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
X-Osstest-Versions-That: xen=d9b29088603f8438160eb4852cedd85cb7c61a19
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 17:43:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150388 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150388/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
baseline version:
 xen                  d9b29088603f8438160eb4852cedd85cb7c61a19

Last test of basis   150384  2020-05-26 12:01:19 Z    0 days
Testing same since   150388  2020-05-26 15:01:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d9b2908860..d89e5e65f3  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 26 17:57:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 17:57:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jddpg-00033C-Un; Tue, 26 May 2020 17:57:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vX9/=7I=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jddpf-000337-Qh
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 17:57:43 +0000
X-Inumbo-ID: 650e7ce6-9f7a-11ea-81bc-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 650e7ce6-9f7a-11ea-81bc-bc764e2007e4;
 Tue, 26 May 2020 17:57:42 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: NjSlT6lCFbs4DsndjP2AVVVWYUbhekmkov3gNnsNRRqog9kA6FV7rTCzkPNduU+D82ECQIYEbm
 thbB9Yka+1OvqcN/hjPVEcs4I2pMp4/IG6Gfo+IlLahN2WX5GdPpoUt+7BdoXxkYm97zu0coLe
 PoCz2tekGqFt2GZkqP9f3jLOdugN4+0QOGIxLPI143bgDGiLDfBToeo6++nEiZPtHEpNMZZUju
 Ce/ueXEwn3Z90mkMzGwKA8erJIClHfBPOZYiRuwJ1IF2f19nEOyuXDwP3O1R2kGhkT7aq48Rfj
 Aj4=
X-SBRS: 2.7
X-MesageID: 18775426
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,437,1583211600"; d="scan'208";a="18775426"
Date: Tue, 26 May 2020 19:57:34 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Daniel Smith <dpsmith@apertussolutions.com>
Subject: Re: [BUG] PVH ACPI XSDT table construction
Message-ID: <20200526175734.GF38408@Air-de-Roger>
References: <17251f968dd.b28c8ebe731955.2247348003729398828@apertussolutions.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <17251f968dd.b28c8ebe731955.2247348003729398828@apertussolutions.com>
X-ClientProxiedBy: AMSPEX02CAS02.citrite.net (10.69.22.113) To
 AMSPEX02CL02.citrite.net (10.69.22.126)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 01:13:19PM -0400, Daniel Smith wrote:
> Greetings,
> 
> 
> 
> I was reviewing the ACPI construction for PVH and discovered what I believe is a flaw in the logic for selecting the XSDT tables. The current logic is,
> 
> 
> 
> static bool __init pvh_acpi_xsdt_table_allowed(const char *sig,
> 
>                                                unsigned long address,
> 
>                                                unsigned long size)
> 
> {
> 
>     /*
> 
>      * DSDT and FACS are pointed to from FADT and thus don't belong
> 
>      * in XSDT.
> 
>      */
> 
>     return (pvh_acpi_table_allowed(sig, address, size) &&
> 
>             strncmp(sig, ACPI_SIG_DSDT, ACPI_NAME_SIZE) &&
> 
>             strncmp(sig, ACPI_SIG_FACS, ACPI_NAME_SIZE));
> 
> }
> 
> 
> 
> Unless I am mistaken, the boolean logic in the return statement will always return false resulting in an empty XSDT table. I believe based on the comment what was intended here was,
> 
> 
> 
>     return (pvh_acpi_table_allowed(sig, address, size) &&
> 
>             !(strncmp(sig, ACPI_SIG_DSDT, ACPI_NAME_SIZE) ||
> 
>               strncmp(sig, ACPI_SIG_FACS, ACPI_NAME_SIZE)));

Keep in mind that strncmp will return 0 if the signature matches, and
hence doing this won't allow any table, as it would require a
signature to match both the DSDT and the FACS one (you would require
strncmp to return 0 in both cases).

The code is correct AFAICT, as it won't add DSDT or FACS to the XSDT
(because strncmp will return 0 in that case).

Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 26 18:06:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 18:06:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jddy5-00040d-Qz; Tue, 26 May 2020 18:06:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vgeY=7I=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jddy4-00040Y-7K
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 18:06:24 +0000
X-Inumbo-ID: 9bae6968-9f7b-11ea-a690-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9bae6968-9f7b-11ea-a690-12813bfff9fa;
 Tue, 26 May 2020 18:06:23 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:57706
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jddy0-000RQN-MI (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Tue, 26 May 2020 19:06:21 +0100
Subject: Re: [PATCH 03/16] x86/traps: Factor out exception_fixup() and make
 printing consistent
To: Jan Beulich <jbeulich@suse.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-4-andrew.cooper3@citrix.com>
 <f7cb696a-5c2c-4aa6-d379-ed77772b7c35@suse.com>
 <a397dd69-2384-a4af-d127-9189a730a554@citrix.com>
 <afd75bde-9adf-d8cf-f8cf-24cb1b753253@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <9c939815-a4f9-75d7-3b6b-b8921de6cdb9@citrix.com>
Date: Tue, 26 May 2020 19:06:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <afd75bde-9adf-d8cf-f8cf-24cb1b753253@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 12/05/2020 14:05, Jan Beulich wrote:
>
> On 11.05.2020 17:14, Andrew Cooper wrote:
>> On 04/05/2020 14:20, Jan Beulich wrote:
>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/traps.c
>>>> +++ b/xen/arch/x86/traps.c
>>>> @@ -774,10 +774,27 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>>>>            trapnr, vec_name(trapnr), regs->error_code);
>>>>  }
>>>>  
>>>> +static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>>>> +{
>>>> +    unsigned long fixup = search_exception_table(regs);
>>>> +
>>>> +    if ( unlikely(fixup == 0) )
>>>> +        return false;
>>>> +
>>>> +    /* Can currently be triggered by guests.  Make sure we ratelimit. */
>>>> +    if ( IS_ENABLED(CONFIG_DEBUG) && print )
>>> I didn't think we consider dprintk()-s a possible security issue.
>>> Why would we consider so a printk() hidden behind
>>> IS_ENABLED(CONFIG_DEBUG)? IOW I think one of XENLOG_GUEST and
>>> IS_ENABLED(CONFIG_DEBUG) wants dropping.
>> Who said anything about a security issue?
> The need to rate limit is (among other aspects) to prevent a
> (logspam) security issue, isn't it?

Rate limiting (from a security aspect) is a stopgap solution to relieve
incidental pressure on the various global spinlocks involved.

It specifically does not prevent a guest from trivially filling the
console ring with junk, or for that junk to be written to
/var/log/xen/hypervisor.log at an alarming rate, both of which are
issues in production setups, but not security issues.

Technical solutions to these problems do exist, such as deleting the
offending printk(), or maintaining per-guest console rings, but both
come with downsides in terms of usability, which similarly impacts
production setups.


What ratelimiting gets you, even in debug builds, is that a quick spate of
printk()s (e.g. from any new sshd connection on an AMD system where the
MSR_VIRT_SPEC_CTRL patch is still uncommitted and the default WRMSR
behaviour breaks wrmsr_safe() logic in Linux) doesn't waste an
unreasonable quantity of space in the console ring.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 26 18:08:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 18:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jde0N-000482-84; Tue, 26 May 2020 18:08:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xfaz=7I=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1jde0L-00047J-Ev
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 18:08:45 +0000
X-Inumbo-ID: f05be670-9f7b-11ea-9dbe-bc764e2007e4
Received: from mail-qt1-x836.google.com (unknown [2607:f8b0:4864:20::836])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f05be670-9f7b-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 18:08:45 +0000 (UTC)
Received: by mail-qt1-x836.google.com with SMTP id i68so16927990qtb.5
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 11:08:45 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=subject:to:references:from:message-id:date:user-agent:mime-version
 :in-reply-to:content-language:content-transfer-encoding;
 bh=3vq5tAtSjUZ1REBUmxkczwtyXpGOgL9IzAxGNeyIljk=;
 b=O2aM6u6G0kNZGSJoihN+2/JpvbD5Da6615BD8Z11Ng9m/iFuJWVeY6qy5dXN6Xd/1i
 xiGC9Eb24Z1SVjPBT2pzOoa809pn418e4UXxlHCtbbz2S2RRq26j8iK9YEXWMb9Pftw9
 OQ8JOXmGS/rVVZcIhP0QHqARRd3YM+Vtg09DarQ2fAqIdwxdILZVCa5n2XBrZkRvv8FP
 dhlXqe78jA9N/BdciNoqrzRclFK/dRc8JyWQBsgPVs1gJyw2BsjY5WXb0D8oGflKZLxM
 gCaVMMROiKx9qbIhnz5epmcOct8fs1R9puUKId8V45HqBcfS3Y8grXhH2R2+XCDDbvpw
 fzpg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:to:references:from:message-id:date
 :user-agent:mime-version:in-reply-to:content-language
 :content-transfer-encoding;
 bh=3vq5tAtSjUZ1REBUmxkczwtyXpGOgL9IzAxGNeyIljk=;
 b=spOkzLY/+Tkkog/h6ianFJfuL4Xrmzc2Ql6F84yIiqQYiIOQ2isaYsTQv9T+edkwJe
 l3M29sZt9Mls2NWYP1dlJ0UfP0fzkpH5nKbH2f1I3Y+zullfp28nu0hMrybN3uZY3DPX
 LGhG8qcMMJ44/4frqUeWi1LDcNWTqHANZJKIzTVvdTEA9cFh9SCM+rpUY3PAvaMlShgZ
 65ahBx/ig+k1Rb1daAl1n7SWkpXx0WhzuIsniDVt2tdn7VBFlxeTKyDldnE8FcBQ0GjW
 8EK0X5qpCEg0/SECKWxzgaKGFrdSGzBMNSYMf9HVf2u9uIHGWbw8MLBPH63bk890lyaa
 1hfw==
X-Gm-Message-State: AOAM532ewctivj1FCiveE74whqgpSaivpyyGQa7uvhd4w9Y2PijuOFfH
 IuymJPORpZ2CEt0DYuGGWnR758lH
X-Google-Smtp-Source: ABdhPJzHVeE7cfrEFHlQyR35IioXLAFWVpeEjmLdCAEuPhUI3MRHXPCBVGnnfGiDJ4LDsR8KvRliKA==
X-Received: by 2002:ac8:1601:: with SMTP id p1mr2600001qtj.311.1590516524381; 
 Tue, 26 May 2020 11:08:44 -0700 (PDT)
Received: from [10.10.1.24] (c-73-129-47-101.hsd1.md.comcast.net.
 [73.129.47.101])
 by smtp.gmail.com with ESMTPSA id h77sm328386qke.37.2020.05.26.11.08.43
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 26 May 2020 11:08:43 -0700 (PDT)
Subject: Re: [BUG] PVH ACPI XSDT table construction
To: xen-devel@lists.xenproject.org
References: <17251f968dd.b28c8ebe731955.2247348003729398828@apertussolutions.com>
 <20200526175734.GF38408@Air-de-Roger>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <8c9c4a9a-653e-8a75-bfb3-10d6581831f1@gmail.com>
Date: Tue, 26 May 2020 14:08:43 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200526175734.GF38408@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/26/20 1:57 PM, Roger Pau Monné wrote:
> 
> Keep in mind that strncmp will return 0 if the signature matches, and
> hence doing this won't allow any table, as it would require a
> signature to match both the DSDT and the FACS one (you would require
> strncmp to return 0 in both cases).
> 
> The code is correct AFAICT, as it won't add DSDT or FACS to the XSDT
> (because strncmp will return 0 in that case).
> 
> Roger.
> 

Ugh, you are right. Apologies for the noise.

V/r,
DPS


From xen-devel-bounces@lists.xenproject.org Tue May 26 21:19:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 21:19:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdgy3-0002mo-91; Tue, 26 May 2020 21:18:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KCAZ=7I=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jdgy1-0002mj-TR
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 21:18:33 +0000
X-Inumbo-ID: 728b1426-9f96-11ea-a6bd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 728b1426-9f96-11ea-a6bd-12813bfff9fa;
 Tue, 26 May 2020 21:18:31 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 784E4AD4F;
 Tue, 26 May 2020 21:18:32 +0000 (UTC)
Message-ID: <8bf86f0c2bcce449cf7643aa9b98aa26ea558c2c.camel@suse.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, 
 xen-devel@lists.xenproject.org
Date: Tue, 26 May 2020 23:18:27 +0200
In-Reply-To: <7e039c65-4532-c3ea-8707-72a86cf48e0e@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <b368ccef-d3b1-1338-6325-8f81a963876d@suse.com>
 <d60d5b917d517b1dfa8292cfb456639c736ec173.camel@suse.com>
 <7e039c65-4532-c3ea-8707-72a86cf48e0e@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-UKAJFAljL5IxqXECkPZ6"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-UKAJFAljL5IxqXECkPZ6
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-04-30 at 14:52 +0200, Jürgen Groß wrote:
> On 30.04.20 14:28, Dario Faggioli wrote:
> > That being said, I can try to make things a bit more fair, when
> > CPUs come up and are added to the pool. Something around the line of
> > adding them to the runqueue with the least number of CPUs in it
> > (among the suitable ones, of course).
> > 
> > With that, when the user removes 4 CPUs, we will have the 6 vs 10
> > situation. But we would make sure that, when she adds them back, we
> > will go back to 10 vs. 10, instead than, say, 6 vs 14 or something
> > like that.
> > 
> > Was something like this that you had in mind? And in any case, what
> > do you think about it?
> 
> Yes, this would be better already.
> 
So, a couple of thoughts. Doing something like what I tried to describe
above is not too bad, and I have it pretty much ready.

With that, on an Intel system with 96 CPUs on two sockets, and
max_cpus_per_runqueue set to 16, I got, after boot, instead of just 2
giant runqueues with 48 CPUs in each:

 - 96 CPUs online, split in 6 runqueues (3 for each socket) with 16
   CPUs in each of them

I can also "tweak" it in such a way that, if one for instance boots
with "smt=no", we get to a point where we have:

 - 48 CPUs online, split in 6 runqueues, with 8 CPUs in each

Now, I think this is good, and actually better than the current
situation where, on such a system, we only have two very big runqueues
(and let me repeat that introducing a per-LLC runqueue arrangement, on
which I'm also working, won't help in this case, as NUMA node == LLC).

The problem is that if one starts to fiddle with cpupools and CPU
onlining/offlining, things can get pretty unbalanced. E.g., I can end
up with 2 runqueues on a socket, one with 16 CPUs and the other with
just a couple of them.

Now, this is all possible as of now (I mean, without this patch)
already, although at a different level. In fact, I can very well remove
all-but-1 CPUs of node 1 from Pool-0, and end up again with a runqueue
with a lot of CPUs and another with just one.

It looks like we need a way to rebalance the runqueues, which should be
doable... But despite having spent a couple of days trying to come up
with something decent that I could include in v2 of this series, I
couldn't get it to work sensibly.

So, this looks to me like an improvement that would itself need further
improvement, but I'm not sure we have the time for it right now
(Cc-ing Paul). Should we still go for what's ready? I think yes, but
I'd be interested in opinions...

Also, if anyone has any clever ideas on how to implement a mechanism
that rebalances the CPUs within the runqueues, I'm all ears and am up
for trying to implement it. :-)

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-UKAJFAljL5IxqXECkPZ6
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7Nh6QACgkQFkJ4iaW4
c+4WCQ//Rlw01mXUpz2TGQ3s+MTXAn76wT6CVBJG1ChBEqxyZsLpgl71KByVwozB
gIZFm9a7mEslNch0BmfYswsrr+SIl12YhzRdclXwdGlud8pKkkSYNq2GnKrFjjlf
sttMkU2dXla+ZNPa/T9zpilF1Eu+V1baWEwCYZXg+dJ9hZBZLymxTpc1MoHlR4pM
lUogKuPpznQWYaA9t5Dpp606BSTbgAmwPLjHUJfgA/46d4W1OsY+nXHK59VuZDNb
ObT9qmYZNFgdPfXf0Rc8ofi7+TJZ0addKIGQZ9AxgpCgTYV+owsJRl8zR2z68s6d
kvZOWKb3aZRzVxvO+OedjkSphottwqYmqznuUIft2d30YzIUkH8MWJ3PUNaerMoU
vs4FxVlUYDgFJ1y2Z4+fis3rcZbTGtsoXfBH3u+l72XbFDNjGU3gwwnqC3bUy1SC
nloFJviZhg+7c++Rlbtg3HvCAdWoS4b4XTyNklHEZ8+eU7MT8q1WzQd5ZOLmay3P
BuoceEDi4Yd87M6NdjQGGM/CKOLCV6Sj7gIPjx/BdEp2k1nRnbc4RcY4b4ZDeg3i
IS3XIJlST/s6bHolT5lraJ6bUG4V7t4y2QX8ElbTNHrtpr9D+XxpBCdVnpc6JTVo
kiLJ75gAY6YDpVP1E3hLDE4vcL1Jrtm/YQW3D+LreYxGla1iDBw=
=uLmS
-----END PGP SIGNATURE-----

--=-UKAJFAljL5IxqXECkPZ6--



From xen-devel-bounces@lists.xenproject.org Tue May 26 22:01:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:01:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhd7-0006ok-KD; Tue, 26 May 2020 22:01:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KCAZ=7I=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jdhd6-0006of-5M
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:01:00 +0000
X-Inumbo-ID: 6160eaee-9f9c-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6160eaee-9f9c-11ea-81bc-bc764e2007e4;
 Tue, 26 May 2020 22:00:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AB79CABEC;
 Tue, 26 May 2020 22:01:00 +0000 (UTC)
Message-ID: <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Wed, 27 May 2020 00:00:56 +0200
In-Reply-To: <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-qMHuAGZ7ymoi6esFNIdU"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-qMHuAGZ7ymoi6esFNIdU
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hey,

thanks for the review, and sorry for replying late... I was busy with
something and then was trying to implement a better balancing logic, as
discussed with Juergen, but with only partial success...

On Thu, 2020-04-30 at 08:45 +0200, Jan Beulich wrote:
> On 29.04.2020 19:36, Dario Faggioli wrote:
> > @@ -852,14 +862,61 @@ cpu_runqueue_match(const struct
> > [...]
> > +        ASSERT(rcpu != cpu);
> > +        if ( !cpumask_test_cpu(rcpu, cpumask_scratch_cpu(cpu)) )
> > +        {
> > +            /*
> > +             * For each CPU already in the runqueue, account for
> > it and for
> > +             * its sibling(s), independently from whether such
> > sibling(s) are
> > +             * in the runqueue already or not.
> > +             *
> > +             * Of course, if there are sibling CPUs in the
> > runqueue already,
> > +             * only count them once.
> > +             */
> > +            cpumask_or(cpumask_scratch_cpu(cpu),
> > cpumask_scratch_cpu(cpu),
> > +                       per_cpu(cpu_sibling_mask, rcpu));
> > +            nr_smts += nr_sibl;
> 
> This being common code, is it appropriate to assume all CPUs having
> the same number of siblings?=20
>
You mention common code because you are thinking of differences between
x86 and ARM? On ARM, although there might be (I'm not sure) chips that
have SMT, or that we may want to identify and treat as if they had SMT,
we currently have no support for that, so I don't think it is a
problem.

On x86, I'm not sure I'm aware of cases where the number of threads
differs among cores or sockets... are there any?

Besides, we have some SMT specific code around (especially in
scheduling) already.
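
FWIW, the accounting the hunk above does can be illustrated with a toy
version, using a plain unsigned long in place of a real cpumask (the
function name and the representation are made up here, only the logic
mirrors the patch): each CPU in the runqueue contributes its whole
sibling set, but a sibling group is counted only once.

```c
/*
 * sibling_mask[c] = bitmask of c's hyperthread siblings (including c
 * itself).  Returns how many hardware threads the runqueue spans,
 * assuming nr_sibl threads per core, without double-counting siblings
 * that were already accounted for.
 */
unsigned int count_smt_threads(unsigned long runq_mask,
                               const unsigned long sibling_mask[],
                               unsigned int nr_sibl)
{
    unsigned long seen = 0;          /* plays the role of the scratch mask */
    unsigned int c, nr_smts = 0;

    for ( c = 0; c < 8 * sizeof(runq_mask); c++ )
    {
        if ( !(runq_mask & (1UL << c)) )
            continue;                /* CPU c not in the runqueue       */
        if ( seen & (1UL << c) )
            continue;                /* sibling group already counted   */
        seen |= sibling_mask[c];     /* mark the whole sibling group    */
        nr_smts += nr_sibl;
    }
    return nr_smts;
}
```

So, with 2-way SMT and CPUs {0,1} and {2,3} being sibling pairs, a
runqueue holding CPUs 0, 1 and 2 is accounted 4 threads, not 6.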

> Even beyond that, iirc the sibling mask
> represents the online or parked siblings, but not offline ones. For
> the purpose here, don't you rather care about the full set?
> 
This is actually a good point. I indeed care about the number of
siblings a thread has, in general, not only about the ones that are
currently online.

In v2, I'll be using boot_cpu_data.x86_num_siblings, of course wrapped
in a helper that just returns 1 for ARM. What do you think, is this
better?
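
Something along these lines, that is (only the x86 field name comes
from the discussion above; the helper's name and the exact guard are
illustrative, not actual code):

```c
/*
 * Maximum number of hardware threads per core.  On x86 this comes from
 * boot_cpu_data.x86_num_siblings; on architectures without SMT support
 * (e.g. ARM, currently) it is just 1.
 */
#ifdef CONFIG_X86
unsigned int cpu_max_siblings(void)
{
    /* All cores are assumed to share the same (maximum) thread count. */
    return boot_cpu_data.x86_num_siblings;
}
#else
unsigned int cpu_max_siblings(void)
{
    return 1;
}
#endif
```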

> What about HT vs AMD Fam15's CUs? Do you want both to be treated
> the same here?
> 
Are you referring to the cores that, AFAIUI, share the L1i cache? If
yes, I thought about it, and ended up _not_ dealing with them here, but
I'm still a bit unsure.

Cache oriented runqueue organization will be the subject of another
patch series, and that's why I kept them out. However, that's a rather
special case with a lot in common to SMT... Just in case, is there a
way to identify them easily, like with a mask or something, in the code
already?

> Also could you outline the intentions with this logic in the
> description, to be able to match the goal with what gets done?
> 
Sure, I will try to state it more clearly.

> > @@ -900,6 +990,12 @@ cpu_add_to_runqueue(struct csched2_private
> > *prv, unsigned int cpu)
> >          rqd->pick_bias = cpu;
> >          rqd->id = rqi;
> >      }
> > +    else
> > +        rqd = rqd_valid;
> > +
> > +    printk(XENLOG_INFO "CPU %d (sibling={%*pbl}) will go to
> > runqueue %d with {%*pbl}\n",
> > +           cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)), rqd->id,
> > +           CPUMASK_PR(&rqd->active));
> 
> Iirc there's one per-CPU printk() already. On large systems this isn't
> very nice, so I'd like to ask that their total number at least not get
> further grown. Ideally there would be a less verbose summary after all
> CPUs have been brought up at boot, with per-CPU info be logged only
> during CPU hot online.
> 
Understood. Problem is that, here in the scheduling code, I don't see
an easy way to tell when we have finished bringing up CPUs... And it's
probably not worth looking too hard (even less adding logic) only for
the sake of printing this message.

So I think I will demote this printk to XENLOG_DEBUG (and, after some
more thinking, will also get rid of others that were already DEBUG but
not super useful).

The idea is that, after all, exactly which runqueue a CPU ends up in is
not information that should matter much to a user/administrator. For
now, it will be possible to know where they ended up via debug keys.
And even if we feel like making it more visible, that is better
achieved via some toolstack command that queries and prints the
scheduler's configuration, rather than a line like this in the boot
log.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)





--=-qMHuAGZ7ymoi6esFNIdU
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7NkZgACgkQFkJ4iaW4
c+46wxAAg6a8Btt5duG6yWc/heO5YGNXnJXfgb2/6uNV/MQzX5ehEpMRzoOVFOxV
wipVArdQBUjg156E8Qj2xvWZqMsYCimqjVxc3J8FMX+UpSrUB8pOaYz6iTR132hc
mocB1AP3Wkf8hO1eazGmTy7kKu+6X3VL7AbjIG4MDpCAAEP3xr3cLNBqJrSe0R2h
ZjmCN0Fa16T6paBvJlfNwkM6Muck3NLfdCyBv8gq9R8TkS6uXl+2FGK0uJISFu1K
ZK66IBN1LuYKe3MQt+cYw9UZ8X87pZUj6frRv9O+XykXWA3scXBsQ4A+ZiFOsFx8
SxaXcYvqJS6qUhkQPP9upseZ6eTijPZKijiNQ8hpKPSBz60qYIax9cTFUC7O8yu5
5ILFgbomoDvcDuMaDuVDaatGLXN7LdvaPc/5SEkDyPZmfHY9QmGv/L9MUBUtyk+3
gV5CglSBoShgYGWZtj0B9EBajJxMb1soiXk6I68TlaLTq51J2o8IAz2B8bJ5xXeF
ImaxKvLGJzrzsQjZjsNdjUZcuXxCwJHI+MqG9WqabI1gPAY+FaCLMHfIkTBpBylO
31GRX8VvuHYxnkfEYhcgV8fSs5U8bhSyiGLIa8dps/IPmgoOXol+FkBJx/5JX1wQ
0cOchwVDGP4aoE0VSNoDOeLJG8DVk+YttBe0xXJtx7J94m22eGE=
=zTYs
-----END PGP SIGNATURE-----

--=-qMHuAGZ7ymoi6esFNIdU--



From xen-devel-bounces@lists.xenproject.org Tue May 26 22:12:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:12:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhoT-0007md-VE; Tue, 26 May 2020 22:12:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KTO2=7I=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdhoS-0007mY-Ic
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:12:44 +0000
X-Inumbo-ID: 01de2378-9f9e-11ea-a6c3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 01de2378-9f9e-11ea-a6c3-12813bfff9fa;
 Tue, 26 May 2020 22:12:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ouOC8L0QwUvIO9KzT8rgg/6B7/NiBJSKsXykvL6FH1w=; b=p1kNiyjxtZgxKpghc4OljVbDy
 38yjVzBDUsEOunZoYVRorM/TqubRPsSZmqII7579JlzSjau1+rDuc1AONvnFbnkMF0HAVrD5NMSIL
 waDsmRF/OvmTViHdDz080aVbZDzwIqEk6vdv0wdtaNBUqIvzfQb1nef+pc2ncFhnILZfU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdhoL-0006hA-5p; Tue, 26 May 2020 22:12:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdhoK-0007SY-T2; Tue, 26 May 2020 22:12:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdhoK-0004Xr-SR; Tue, 26 May 2020 22:12:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150378-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150378: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=354e8318d5a9b6f32fbd3c01d1a9f1970007010b
X-Osstest-Versions-That: xen=437b0aa06a014ce174e24c0d3530b3e9ab19b18b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 26 May 2020 22:12:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150378 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150378/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150355
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150358
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150358
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150358
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150358
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150358
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150358
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150358
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150358
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150358
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  354e8318d5a9b6f32fbd3c01d1a9f1970007010b
baseline version:
 xen                  437b0aa06a014ce174e24c0d3530b3e9ab19b18b

Last test of basis   150358  2020-05-25 04:03:33 Z    1 days
Testing same since   150370  2020-05-26 00:07:38 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   437b0aa06a..354e8318d5  354e8318d5a9b6f32fbd3c01d1a9f1970007010b -> master


From xen-devel-bounces@lists.xenproject.org Tue May 26 22:16:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhsK-0007wl-Om; Tue, 26 May 2020 22:16:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdhsJ-0007we-Pr
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:16:43 +0000
X-Inumbo-ID: 940e59e8-9f9e-11ea-8993-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 940e59e8-9f9e-11ea-8993-bc764e2007e4;
 Tue, 26 May 2020 22:16:43 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: eTEhCJ/mCCQHeXvQIyLpMeRUv7+peNP5dOgSSyYqLeB8zC+ed1/0TLuZK6vlVfNIGko+tqoG6n
 T7tMQUngaCKt/CXqMWtz78gIUTSeILGE7F3iFtIVjw/A07LtYHI2dIOIvbWsaIIfu0zpY8wJdD
 E+JfePvQ5/J4argGdB4m3kuEaLcVRz2a2AHMkqGWHej1vCte7MTRA3UpthGlsjaOQYG52nOozm
 xYVIFntpnV9nFNthvSZuv8kjS7hXLlBLzTWqCj9lFSpl6qPgJjBYLv8bZ+t+iD4XzGpSYbigZ3
 Ol0=
X-SBRS: 2.7
X-MesageID: 18759804
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,439,1583211600"; d="scan'208";a="18759804"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 5/5] automation/containerize: Add a shortcut for Debian
 unstable
Date: Tue, 26 May 2020 23:16:12 +0100
Message-ID: <20200526221612.900922-6-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200526221612.900922-1-george.dunlap@citrix.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Doug Goldstein <cardoe@cardoe.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
v2:
- New

CC: Doug Goldstein <cardoe@cardoe.com>
CC: Wei Liu <wl@xen.org>
---
 automation/scripts/containerize | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index fbc4bc22d6..b71edd736c 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -22,6 +22,7 @@ case "_${CONTAINER}" in
     _fedora) CONTAINER="${BASE}/fedora:29";;
     _jessie) CONTAINER="${BASE}/debian:jessie" ;;
     _stretch|_) CONTAINER="${BASE}/debian:stretch" ;;
+    _unstable) CONTAINER="${BASE}/debian:unstable" ;;
     _trusty) CONTAINER="${BASE}/ubuntu:trusty" ;;
     _xenial) CONTAINER="${BASE}/ubuntu:xenial" ;;
 esac
@@ -91,4 +92,3 @@ exec docker run \
     -${termint}i --rm -- \
     ${CONTAINER} \
     ${cmd}
-
-- 
2.25.1
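
For context, the new shortcut resolves the same way as the existing aliases: the script matches `_${CONTAINER}` against a `case` list and expands a bare distro name into a full image path. A minimal standalone sketch of that logic (BASE here is a placeholder value, not the registry the real script uses):

```shell
#!/bin/sh
# Standalone sketch of the CONTAINER shortcut resolution in
# automation/scripts/containerize.  BASE is a hypothetical registry.
BASE="registry.example.org/xen-project/xen"

resolve_container() {
    CONTAINER="$1"
    case "_${CONTAINER}" in
        _stretch|_) CONTAINER="${BASE}/debian:stretch" ;;   # default
        _unstable)  CONTAINER="${BASE}/debian:unstable" ;;  # new shortcut
        *)          ;;  # anything else is used as a full image name
    esac
    printf '%s\n' "${CONTAINER}"
}

resolve_container unstable   # expands the new shortcut
resolve_container ""         # empty still falls through to the default
```

In a real checkout this would be exercised with something like `CONTAINER=unstable ./automation/scripts/containerize <cmd>` (invocation shape per the script's usage, exact command is up to the caller).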



From xen-devel-bounces@lists.xenproject.org Tue May 26 22:16:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhsI-0007wY-Gi; Tue, 26 May 2020 22:16:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdhsH-0007wT-KB
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:16:41 +0000
X-Inumbo-ID: 929bb0d8-9f9e-11ea-a6c3-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 929bb0d8-9f9e-11ea-a6c3-12813bfff9fa;
 Tue, 26 May 2020 22:16:40 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tPOJu4PQ63FYA20aQt6/WSEoTQZcS9T1BmehDgtkkbZDhK+XKyX/nX9GYBhvHr1EAbSdVXbxHR
 TYUoqgAuUOLxWX2uPTNLiL3yUTsNACjHESTxduH3LSHxi4hXY4NofWud4lNUn1XxYGxJqtYaLT
 vQqcsF61mtY7q1oNc/g7gONSHlh+NQhrvUI7qSBdd7xAjxqZBZPxFBM5Z3EugiqWwFI/tHlGkB
 +gH7tFIhiZHFA9cJr7XnDijJ9Jku3qap/j/5fQFe0PWJvghN7Vz7xMut2d6iPdbYjcvuO77jtW
 RXs=
X-SBRS: 2.7
X-MesageID: 19238409
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,439,1583211600"; d="scan'208";a="19238409"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 1/5] libxl: Generate golang bindings in libxl Makefile
Date: Tue, 26 May 2020 23:16:08 +0100
Message-ID: <20200526221612.900922-2-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200526221612.900922-1-george.dunlap@citrix.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>, Ian
 Jackson <ian.jackson@citrix.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The generated golang bindings (types.gen.go and helpers.gen.go) are
left checked in so that they can be fetched from xenbits using the
golang tooling.  This means that they must be updated whenever
libxl_types.idl (or other dependencies) are updated.  However, the
golang bindings are only built optionally; we can't assume that anyone
updating libxl_types.idl will also descend into the tools/golang tree
to re-generate the bindings.

Fix this by re-generating the golang bindings from the libxl Makefile
when the IDL dependencies are updated, so that anyone who updates
libxl_types.idl also ends up updating the generated golang files.

 - Make a variable for the generated files, and a target in
   xenlight/Makefile which will only re-generate the files.

 - Add a target in libxl/Makefile to call external idl generation
   targets (currently only golang).

For ease of testing, also add a specific target in libxl/Makefile just
to check and update files generated from the IDL.

This does mean that there are two potential paths for generating the
files during a parallel build; but that shouldn't be an issue, since
tools/golang/xenlight should never be built until after tools/libxl
has completed building anyway.
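
The cross-directory arrangement described above can be sketched roughly as follows (paths and variable names abbreviated; an illustration of the idea, not the exact Makefile text):

```make
# tools/libxl/Makefile (sketch): "all" pulls in idl-external, which
# recurses into the golang tree only to regenerate the IDL outputs.
all: $(AUTOSRCS) $(AUTOINCS) idl-external

.PHONY: idl-external
idl-external:
	$(MAKE) -C $(XEN_ROOT)/tools/golang/xenlight idl-gen

# tools/golang/xenlight/Makefile (sketch): idl-gen regenerates the
# checked-in bindings without building or installing anything.
idl-gen: types.gen.go helpers.gen.go

%.gen.go: gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
	XEN_ROOT=$(XEN_ROOT) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
```

Because idl-gen only ever runs via libxl's "all" target, which completes before the golang tree is built, the two generation paths cannot run concurrently.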

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
v2:
- Add comments explaining parallel make safety

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/Makefile | 11 ++++++++++-
 tools/libxl/Makefile           | 17 ++++++++++++++++-
 2 files changed, 26 insertions(+), 2 deletions(-)

diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index cd0a62505f..8ab4cb5665 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -17,12 +17,21 @@ all: build
 .PHONY: package
 package: $(XEN_GOPATH)$(GOXL_PKG_DIR)
 
-$(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go types.gen.go helpers.gen.go
+GOXL_GEN_FILES = types.gen.go helpers.gen.go
+
+$(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go $(GOXL_GEN_FILES)
 	$(INSTALL_DIR) $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) xenlight.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) types.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 	$(INSTALL_DATA) helpers.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
 
+# NOTE: This target is called from libxl/Makefile:all.  Since that
+# target must finish before golang/Makefile is called, this is
+# currently safe.  It must not be called from anywhere else in the
+# Makefile system without careful thought about races with
+# xenlight/Makefile:all
+idl-gen: $(GOXL_GEN_FILES)
+
 %.gen.go: gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl $(LIBXL_SRC_DIR)/idl.py
 	XEN_ROOT=$(XEN_ROOT) $(PYTHON) gengotypes.py $(LIBXL_SRC_DIR)/libxl_types.idl
 
diff --git a/tools/libxl/Makefile b/tools/libxl/Makefile
index 69fcf21577..947eb6036e 100644
--- a/tools/libxl/Makefile
+++ b/tools/libxl/Makefile
@@ -218,7 +218,7 @@ testidl.c: libxl_types.idl gentest.py libxl.h $(AUTOINCS)
 .PHONY: all
 all: $(CLIENTS) $(TEST_PROGS) $(PKG_CONFIG) $(PKG_CONFIG_LOCAL) \
 		libxenlight.so libxenlight.a libxlutil.so libxlutil.a \
-	$(AUTOSRCS) $(AUTOINCS)
+	$(AUTOSRCS) $(AUTOINCS) idl-external
 
 $(LIBXL_OBJS) $(LIBXLU_OBJS) $(SAVE_HELPER_OBJS) \
 		$(LIBXL_TEST_OBJS) $(TEST_PROG_OBJS): \
@@ -274,6 +274,21 @@ _libxl_type%.h _libxl_type%_json.h _libxl_type%_private.h _libxl_type%.c: libxl_
 	$(call move-if-changed,__libxl_type$(stem)_json.h,_libxl_type$(stem)_json.h)
 	$(call move-if-changed,__libxl_type$(stem).c,_libxl_type$(stem).c)
 
+# NOTE: This is safe to do at the moment because idl-external and
+# idl-gen are only called from libxl/Makefile:all, which must return
+# before golang/Makefile is called.  idl-external and idl-gen must
+# never be called from another part of the make system without careful thought
+# about races with tools/golang/xenlight/Makefile:all
+.PHONY: idl-external
+idl-external:
+	$(MAKE) -C $(XEN_ROOT)/tools/golang/xenlight idl-gen
+
+LIBXL_IDLGEN_FILES = _libxl_types.h _libxl_types_json.h _libxl_types_private.h _libxl_types.c \
+	_libxl_types_internal.h _libxl_types_internal_json.h _libxl_types_internal_private.h _libxl_types_internal.c
+
+
+idl-gen: $(LIBXL_IDLGEN_FILES) idl-external
+
 libxenlight.so: libxenlight.so.$(MAJOR)
 	$(SYMLINK_SHLIB) $< $@
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 26 22:16:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:16:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhsO-0007xm-4x; Tue, 26 May 2020 22:16:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdhsM-0007xL-I9
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:16:46 +0000
X-Inumbo-ID: 929bb0d9-9f9e-11ea-a6c3-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 929bb0d9-9f9e-11ea-a6c3-12813bfff9fa;
 Tue, 26 May 2020 22:16:41 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: YeKW0jUKxEquevFRXYynJNH92Ps5LOSQIwW22tVNkuVltKfq5xh4EbEVGrfkGTyrO8gAV512nt
 Wb2UTg9xXlosgsogqdUskZf5GbR3x740niaw1lMhCPZYR8TdB/6Ke9lrE8qDVQzD9inpD7aTR1
 IUTStwAAHSrLm1JZ2+rkgi6eVnpRDiMxP182HrEZWSt1RE7OdGrHnKgKMuN6+U5X38IWIPUTaX
 1ZOrTKVl5Gj+M53qgTzn5uaA+i6RQcW3DC5GGek2l82PwsS8mlVlLDggKGahQMMIOT67+UHB9k
 3QA=
X-SBRS: 2.7
X-MesageID: 19238411
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,439,1583211600"; d="scan'208";a="19238411"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 3/5] automation/archlinux: Add 32-bit glibc headers
Date: Tue, 26 May 2020 23:16:10 +0100
Message-ID: <20200526221612.900922-4-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200526221612.900922-1-george.dunlap@citrix.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This fixes the following build error in hvmloader:

usr/include/gnu/stubs.h:7:11: fatal error: gnu/stubs-32.h: No such file or directory

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
v2:
- New

CC: Doug Goldstein <cardoe@cardoe.com>
CC: Wei Liu <wl@xen.org>
CC: Anthony Perard <anthony.perard@citrix.com>
---
 automation/build/archlinux/current.dockerfile | 1 +
 1 file changed, 1 insertion(+)

diff --git a/automation/build/archlinux/current.dockerfile b/automation/build/archlinux/current.dockerfile
index 9af5d66afc..5095de65b8 100644
--- a/automation/build/archlinux/current.dockerfile
+++ b/automation/build/archlinux/current.dockerfile
@@ -19,6 +19,7 @@ RUN pacman -S --refresh --sysupgrade --noconfirm --noprogressbar --needed \
         iasl \
         inetutils \
         iproute \
+        lib32-glibc \
         libaio \
         libcacard \
         libgl \
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 26 22:17:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhsT-0007zF-EM; Tue, 26 May 2020 22:16:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdhsR-0007yj-I0
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:16:51 +0000
X-Inumbo-ID: 9315abae-9f9e-11ea-a6c3-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9315abae-9f9e-11ea-a6c3-12813bfff9fa;
 Tue, 26 May 2020 22:16:41 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: m9Edo25O6kTacSEUmULiuAAHzIyCQetXjF1b09ArClrHD6Fnoyq7QQGVutGWguCY3u0HK2th1l
 gcVFbzRyJq40yEz0otpsn4vKT7cYLvQYe6etu9kMH7LCPm/pMuUgzBVd+hRCFkKNYfdLWbYUpw
 5cCVEAssQ5gXfEm3lp4MJQnLq41S2EBoZ/IDsuNIUSog/K4VZSy9t0JWxM1XHIYFzecmasECKl
 sy8FXjwOEqwJ4rBLQIAvcEIzk3XILiby8X4a455i2tqY8mK1VXsYG+uh3fyG6GUIudTgsjXtJ3
 GJE=
X-SBRS: 2.7
X-MesageID: 19238410
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,439,1583211600"; d="scan'208";a="19238410"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 0/5] Golang build fixes / improvements
Date: Tue, 26 May 2020 23:16:07 +0100
Message-ID: <20200526221612.900922-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@citrix.com>, Doug Goldstein <cardoe@cardoe.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a series of patches that improve the build for the golang
xenlight bindings.  The key patches are #1 and #4.  Patch 1 will
update the generated golang bindings from the tools/libxl directory
when libxl_types.idl is updated, even if the person building doesn't
have the golang packages enabled.  Patch 4 adds golang packages to the
docker images which have suitable golang versions, so that the bindings
can be tested in the CI loop.

Changes in v2:
- Document requirements to make sure the parallel build is race-free
- Replace v1 patches 4-5 with a patch which will just remove the
  GOPATH-related build testing
- Introduce improvements to automation

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Wei Liu <wl@xen.org>
CC: Doug Goldstein <cardoe@cardoe.com>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>

George Dunlap (5):
  libxl: Generate golang bindings in libxl Makefile
  golang/xenlight: Get rid of GOPATH-based build artefacts
  automation/archlinux: Add 32-bit glibc headers
  automation: Add golang packages to various dockerfiles
  automation/containerize: Add a shortcut for Debian unstable

 automation/build/archlinux/current.dockerfile |  2 ++
 automation/build/debian/unstable.dockerfile   |  1 +
 automation/build/fedora/29.dockerfile         |  1 +
 automation/scripts/containerize               |  2 +-
 tools/Rules.mk                                |  1 -
 tools/golang/Makefile                         | 10 --------
 tools/golang/xenlight/Makefile                | 24 +++++++++----------
 tools/libxl/Makefile                          | 17 ++++++++++++-
 8 files changed, 32 insertions(+), 26 deletions(-)

--
2.25.1


From xen-devel-bounces@lists.xenproject.org Tue May 26 22:17:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:17:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhsX-00081x-Ne; Tue, 26 May 2020 22:16:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdhsW-00081S-Ie
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:16:56 +0000
X-Inumbo-ID: 929bb0da-9f9e-11ea-a6c3-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 929bb0da-9f9e-11ea-a6c3-12813bfff9fa;
 Tue, 26 May 2020 22:16:42 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: yW0inEishecCLkrpueYmCQX7SwgD6cEnEpRYPwL4Ph6EZXjjCl2Z7efPQFhiaJZjYHQxkbcTgK
 Ckpo8xSFETBl72pDg7BjZtr5+lGfHQ7HfOoDgHFvuJFyhQ82xEutpxtW+QEuDUhGDEUaowQaRo
 sJt/n3LXVYKMmYCrms7Xe2rKT8HOi6ToR1e5qUK97kJ1xE3HZJIoYKsp4x5WwI+iSPK8F3FDU/
 Xw9GaXBfqcQ7UKIVVhCyDcYbaIs7cFSR+ZrAKG0p9oxrmMkxkDaItTgIf8mpD0B+8kUD9eEPzF
 I+Q=
X-SBRS: 2.7
X-MesageID: 19238412
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,439,1583211600"; d="scan'208";a="19238412"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 2/5] golang/xenlight: Get rid of GOPATH-based build
 artefacts
Date: Tue, 26 May 2020 23:16:09 +0100
Message-ID: <20200526221612.900922-3-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200526221612.900922-1-george.dunlap@citrix.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
 Ian Jackson <ian.jackson@citrix.com>, George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The original build setup used a "fake GOPATH" in tools/golang to test
the mechanism of building from go package files installed on a
filesystem.  With the move to modules, this isn't necessary, and leads
to potentially confusing directories being created.  (I.e., it might
not be obvious that files under tools/golang/src shouldn't be edited.)

Get rid of the code that creates this (now unused) intermediate
directory.  Add direct dependencies from 'build' onto the source
files.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
v2:
- New

CC: Ian Jackson <ian.jackson@citrix.com>
CC: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/Rules.mk                 |  1 -
 tools/golang/Makefile          | 10 ----------
 tools/golang/xenlight/Makefile | 19 ++++---------------
 3 files changed, 4 insertions(+), 26 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 59c72e7a88..76acaef988 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -35,7 +35,6 @@ XENSTORE_XENSTORED ?= y
 debug ?= y
 debug_symbols ?= $(debug)
 
-XEN_GOPATH        = $(XEN_ROOT)/tools/golang
 XEN_GOCODE_URL    = golang.xenproject.org
 
 ifeq ($(debug_symbols),y)
diff --git a/tools/golang/Makefile b/tools/golang/Makefile
index aba11ebc39..b022e2c5a3 100644
--- a/tools/golang/Makefile
+++ b/tools/golang/Makefile
@@ -1,16 +1,6 @@
 XEN_ROOT=$(CURDIR)/../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-# In order to link against a package in Go, the package must live in a
-# directory tree in the way that Go expects.  To make this possible,
-# there must be a directory such that we can set GOPATH=${dir}, and
-# the package will be under $GOPATH/src/${full-package-path}.
-
-# So we set XEN_GOPATH to $XEN_ROOT/tools/golang.  The xenlight
-# "package build" directory ($PWD/xenlight) will create the "package
-# source" directory in the proper place.  Go programs can use this
-# package by setting GOPATH=$(XEN_GOPATH).
-
 SUBDIRS-y = xenlight
 
 .PHONY: build all
diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
index 8ab4cb5665..f8d8047524 100644
--- a/tools/golang/xenlight/Makefile
+++ b/tools/golang/xenlight/Makefile
@@ -14,17 +14,8 @@ LIBXL_SRC_DIR = ../../libxl
 .PHONY: all
 all: build
 
-.PHONY: package
-package: $(XEN_GOPATH)$(GOXL_PKG_DIR)
-
 GOXL_GEN_FILES = types.gen.go helpers.gen.go
 
-$(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go $(GOXL_GEN_FILES)
-	$(INSTALL_DIR) $(XEN_GOPATH)$(GOXL_PKG_DIR)
-	$(INSTALL_DATA) xenlight.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
-	$(INSTALL_DATA) types.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
-	$(INSTALL_DATA) helpers.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
-
 # NOTE: This target is called from libxl/Makefile:all.  Since that
 # target must finish before golang/Makefile is called, this is
 # currently safe.  It must not be called from anywhere else in the
@@ -43,23 +34,21 @@ idl-gen: $(GOXL_GEN_FILES)
 # in the LDFLAGS; and thus we need to add -L$(XEN_XENLIGHT) here
 # so that it can find the actual library.
 .PHONY: build
-build: package
+build: xenlight.go $(GOXL_GEN_FILES)
 	CGO_CFLAGS="$(CFLAGS_libxenlight) $(CFLAGS_libxentoollog)" CGO_LDFLAGS="$(LDLIBS_libxenlight) $(LDLIBS_libxentoollog) -L$(XEN_XENLIGHT) -L$(XEN_LIBXENTOOLLOG)" $(GO) build -x
 
 .PHONY: install
 install: build
 	$(INSTALL_DIR) $(DESTDIR)$(GOXL_INSTALL_DIR)
-	$(INSTALL_DATA) $(XEN_GOPATH)$(GOXL_PKG_DIR)xenlight.go $(DESTDIR)$(GOXL_INSTALL_DIR)
-	$(INSTALL_DATA) $(XEN_GOPATH)$(GOXL_PKG_DIR)types.gen.go $(DESTDIR)$(GOXL_INSTALL_DIR)
-	$(INSTALL_DATA) $(XEN_GOPATH)$(GOXL_PKG_DIR)helpers.gen.go $(DESTDIR)$(GOXL_INSTALL_DIR)
+	$(INSTALL_DATA) xenlight.go $(DESTDIR)$(GOXL_INSTALL_DIR)
+	$(INSTALL_DATA) types.gen.go $(DESTDIR)$(GOXL_INSTALL_DIR)
+	$(INSTALL_DATA) helpers.gen.go $(DESTDIR)$(GOXL_INSTALL_DIR)
 
 .PHONY: uninstall
 	rm -rf $(DESTDIR)$(GOXL_INSTALL_DIR)
 
 .PHONY: clean
 clean:
-	$(RM) -r $(XEN_GOPATH)$(GOXL_PKG_DIR)
-	$(RM) $(XEN_GOPATH)/pkg/*/$(XEN_GOCODE_URL)/xenlight.a
 
 .PHONY: distclean
 distclean: clean
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 26 22:17:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 22:17:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdhsd-00084P-1F; Tue, 26 May 2020 22:17:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=52a6=7I=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdhsb-00083n-J8
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 22:17:01 +0000
X-Inumbo-ID: 94daf39a-9f9e-11ea-a6c3-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94daf39a-9f9e-11ea-a6c3-12813bfff9fa;
 Tue, 26 May 2020 22:16:44 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cu8UWhvEil7kDZD8zHi5/jSJQD0btiknIDvUtyRoaZOu5JTNQNJuhxwRU8GYWVJ9i698NuCL1I
 Mgu4pRtHmAIP88qZMM0ZJK63RxGG+g1CyXeVQHDQm9rEtnoXG5GKUIp40KJo0oXLZoehC86n6J
 h7gEWVSr34fKFia57eCXrCS4DF5r2PaqtwfikLD59jz9b6j3m4FcBMmfvRh+4OYuJOkip8HMNW
 KAwL/l2LPuXJ+G8kGyah9iXkbYsKo4Ce2h8M5TUg11rYR0O+jvIfzZa2TL1SETYvXyYjc0IAIa
 2W4=
X-SBRS: 2.7
X-MesageID: 18520581
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,439,1583211600"; d="scan'208";a="18520581"
From: George Dunlap <george.dunlap@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 4/5] automation: Add golang packages to various dockerfiles
Date: Tue, 26 May 2020 23:16:11 +0100
Message-ID: <20200526221612.900922-5-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200526221612.900922-1-george.dunlap@citrix.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Doug
 Goldstein <cardoe@cardoe.com>, George Dunlap <george.dunlap@citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Specifically, Fedora 29, Archlinux, and Debian unstable.  This will
cause the CI loop to detect golang build failures.

CentOS 6 and 7 don't have golang packages, and the packages in
stretch, jessie, xenial, and trusty are too old.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---
v2:
- New

CC: Wei Liu <wl@xen.org>
CC: Doug Goldstein <cardoe@cardoe.com>
---
 automation/build/archlinux/current.dockerfile | 1 +
 automation/build/debian/unstable.dockerfile   | 1 +
 automation/build/fedora/29.dockerfile         | 1 +
 3 files changed, 3 insertions(+)

diff --git a/automation/build/archlinux/current.dockerfile b/automation/build/archlinux/current.dockerfile
index 5095de65b8..d8fbebaf79 100644
--- a/automation/build/archlinux/current.dockerfile
+++ b/automation/build/archlinux/current.dockerfile
@@ -16,6 +16,7 @@ RUN pacman -S --refresh --sysupgrade --noconfirm --noprogressbar --needed \
         ghostscript \
         git \
         gnutls \
+        go \
         iasl \
         inetutils \
         iproute \
diff --git a/automation/build/debian/unstable.dockerfile b/automation/build/debian/unstable.dockerfile
index d0aa5ad2bb..aeb4f3448b 100644
--- a/automation/build/debian/unstable.dockerfile
+++ b/automation/build/debian/unstable.dockerfile
@@ -45,6 +45,7 @@ RUN apt-get update && \
         nasm \
         gnupg \
         apt-transport-https \
+        golang \
         && \
         apt-get autoremove -y && \
         apt-get clean && \
diff --git a/automation/build/fedora/29.dockerfile b/automation/build/fedora/29.dockerfile
index 5be4a9e229..6a4e5b0413 100644
--- a/automation/build/fedora/29.dockerfile
+++ b/automation/build/fedora/29.dockerfile
@@ -40,5 +40,6 @@ RUN dnf -y install \
         nasm \
         ocaml \
         ocaml-findlib \
+        golang \
     && dnf clean all && \
     rm -rf /var/cache/dnf
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 26 23:58:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 26 May 2020 23:58:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdjS1-0008CV-Fl; Tue, 26 May 2020 23:57:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LE0D=7I=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jdjRz-0008CQ-Jy
 for xen-devel@lists.xenproject.org; Tue, 26 May 2020 23:57:39 +0000
X-Inumbo-ID: ae099c64-9fac-11ea-9dbe-bc764e2007e4
Received: from mail-qv1-xf31.google.com (unknown [2607:f8b0:4864:20::f31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae099c64-9fac-11ea-9dbe-bc764e2007e4;
 Tue, 26 May 2020 23:57:39 +0000 (UTC)
Received: by mail-qv1-xf31.google.com with SMTP id z9so10349556qvi.12
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 16:57:39 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:in-reply-to:user-agent;
 bh=4l5IN1Cyll7yznAYyy5s4ZQut9n46ETYfEZJFeYnMGw=;
 b=EmZuxg9ricpPdFMhg4Vh+a1FpmGfUWEppKNUW1Mg9PZ3rOTDW9twgAjEnlpCs34lV8
 TrsI7imnzTQpKQbxVdXTTpT49mDaO2tkivIl/bx/PAeU8D//uhJFo3hO9afvCjZM2Bxi
 sxMmfZ6z1PVj1UTDUvrBzIVX4rYCDI148NUVleDzF8IpmeA9igjC9Kmrow7v2nmZuu0V
 8DtBn080fnL/ss1s9PQWME8XP+ugx3bnyFi6VQCJQCI3FtAQQcxd7Hu506FDLQAAuq9M
 /PsDrw4SAWSDfIFnsLXMSV75yoiEKIUkmnyQYaiW10LwyFs6x5LIWRyO3qQM0GPXf12P
 TkyA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=4l5IN1Cyll7yznAYyy5s4ZQut9n46ETYfEZJFeYnMGw=;
 b=V1Zuy2OfkBOwqrB++VDkODhw5qJL1C9ZuDcvPgfMv4f5fZE9OlbhJy429pcOmm9Zvs
 oxGVvuhxuK/4EAgW4xklp1uPX3ihFYCWANceKHofJS4tt0plA+RrpYqqU4FGQgfZxd7I
 z6MZLgpB1hX2mlgSut2Hm89B2GPbB0N3xdDgLFHaAnuR6RsWmBIP+tufUF4MTTR+9re/
 oCu1D2HWauwP7YhS3dFtx/G83hgeud+kxCabxxeR5xGvSgXMJhN2Dk4b0ydMuVTmKNjI
 rAcKV6SZ8FqdkQOGHnShHsLdt94C2S1b2FZR52Xxx/j1wmQB+fkKsUgQkmg940EiUAPF
 gqBA==
X-Gm-Message-State: AOAM533v93Qw+IV4YFDxdZ4Pi/5KA/pcnM1lsBAODc+U/DERhpzkt9VI
 hrRIcH0HbWWcxL7AuSFTA8w=
X-Google-Smtp-Source: ABdhPJyJr73tQv4tl5NSa5CfCVYL//2P/Bh0KkB3oftZwnAo8seL/fhTwiJU8fGqgc2AOzX155BAiA==
X-Received: by 2002:ad4:57a1:: with SMTP id g1mr21703794qvx.27.1590537458788; 
 Tue, 26 May 2020 16:57:38 -0700 (PDT)
Received: from six (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id m94sm1084893qtd.29.2020.05.26.16.57.36
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Tue, 26 May 2020 16:57:37 -0700 (PDT)
Date: Tue, 26 May 2020 19:57:35 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v2 2/5] golang/xenlight: Get rid of GOPATH-based build
 artefacts
Message-ID: <20200526235735.GA2978@six>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
 <20200526221612.900922-3-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200526221612.900922-3-george.dunlap@citrix.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>, xen-devel@lists.xenproject.org,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> diff --git a/tools/golang/xenlight/Makefile b/tools/golang/xenlight/Makefile
> index 8ab4cb5665..f8d8047524 100644
> --- a/tools/golang/xenlight/Makefile
> +++ b/tools/golang/xenlight/Makefile
> @@ -14,17 +14,8 @@ LIBXL_SRC_DIR = ../../libxl
>  .PHONY: all
>  all: build
>  
> -.PHONY: package
> -package: $(XEN_GOPATH)$(GOXL_PKG_DIR)
> -
>  GOXL_GEN_FILES = types.gen.go helpers.gen.go
>  
> -$(XEN_GOPATH)/src/$(XEN_GOCODE_URL)/xenlight/: xenlight.go $(GOXL_GEN_FILES)
> -	$(INSTALL_DIR) $(XEN_GOPATH)$(GOXL_PKG_DIR)
> -	$(INSTALL_DATA) xenlight.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
> -	$(INSTALL_DATA) types.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
> -	$(INSTALL_DATA) helpers.gen.go $(XEN_GOPATH)$(GOXL_PKG_DIR)
> -

I think the GOXL_PKG_DIR variable can be removed altogether too. With
these changes it's only used to initialize GOXL_INSTALL_DIR.

-NR


From xen-devel-bounces@lists.xenproject.org Wed May 27 00:03:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 00:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdjXG-0001DP-IQ; Wed, 27 May 2020 00:03:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xzom=7J=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1jdjXF-0001DK-Ad
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 00:03:05 +0000
X-Inumbo-ID: 702300b0-9fad-11ea-9dbe-bc764e2007e4
Received: from mail-qv1-xf33.google.com (unknown [2607:f8b0:4864:20::f33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 702300b0-9fad-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 00:03:04 +0000 (UTC)
Received: by mail-qv1-xf33.google.com with SMTP id v15so10395364qvr.8
 for <xen-devel@lists.xenproject.org>; Tue, 26 May 2020 17:03:04 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=date:from:to:cc:subject:message-id:references:mime-version
 :content-disposition:in-reply-to:user-agent;
 bh=37NBoZxOHh49AYaQ2gDLWnrnuuRxdlIh80uAzZEA4PM=;
 b=OsKPJg6qIycZl8TSUN1uxpVCzvGANmELsaMWVuy7AdftEyScnJxI/fv2rDJ5tnYsvh
 uqKaiwf6gkUFnz/tI5bRLFoFJ0cwnOJOu+UN+vn04WYP0lFI1PfCHb5r+1I9JF25ygz9
 Xr5gr1MIHgpq9ADWVDRi5I0O5Rh0E1eF+xG2JOBhVOiv+HAjWsjaSrLis/decFZi+wV8
 4TvCv7/Hxpej7Ih6ks3PCZoRAQ7675J6fEntGt/UeeZxO1gjwK9C8PQPWpoSncwCTUlA
 6xSPbwtsUxTPu6KyQiaxm8ytGhb+7VEu3tb0TrNR/IfeEnvGIFQ8yFrcHGKJY/1WnsM0
 G8NA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=37NBoZxOHh49AYaQ2gDLWnrnuuRxdlIh80uAzZEA4PM=;
 b=GJNiSIovl4YGJX9qlRSmEhhctPIV71uX55+IK/r1mT/i8YlHz06Jp/9D7CJhSMlJ35
 K78HtHJA8Ze/6DXHcCxKKjYXXTfjQqws6T5xIEfAf2hOvSaLh5iOyAlzamoGpqfRkxHZ
 u/6xmcOv27AREFQv6LZpfdeSBYaK/hZDnC551VWGrhu3eMYbVsFrTzZV0dnSPE/a9G4F
 Yl5HE2dNzHWFI7qyQlYMvQzg4TkpMU8k+pkTleMd5txZgh+SXcNR/z9Rx8pWUc3LYFCZ
 UK3t1UVgeEJuyi25tGr3ZNIq9G5tmqcudO+AYL7Rmqkq1NoLIHtkEL5/SzgfH1bswSi0
 Fd1w==
X-Gm-Message-State: AOAM532o3nZNb39jCAcTKtSl9jSJ4o841xSkd6OJKuPHyrDXydyGmez0
 9GF0VMFEdJRFIuQjmgB1v88=
X-Google-Smtp-Source: ABdhPJxIZ8hF6u7aHWAiIDYuoegJAVUe+le0YM4mjWmC4NmaR4rpWSUXhuR3O6gXUSn+qrxljMoP4Q==
X-Received: by 2002:ad4:5662:: with SMTP id bm2mr22489308qvb.48.1590537784494; 
 Tue, 26 May 2020 17:03:04 -0700 (PDT)
Received: from six (cpe-67-241-56-252.twcny.res.rr.com. [67.241.56.252])
 by smtp.gmail.com with ESMTPSA id v69sm990918qkb.96.2020.05.26.17.03.03
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Tue, 26 May 2020 17:03:03 -0700 (PDT)
Date: Tue, 26 May 2020 20:03:01 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH v2 2/5] golang/xenlight: Get rid of GOPATH-based build
 artefacts
Message-ID: <20200527000301.GB2978@six>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
 <20200526221612.900922-3-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200526221612.900922-3-george.dunlap@citrix.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>, xen-devel@lists.xenproject.org,
 Ian Jackson <ian.jackson@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 11:16:09PM +0100, George Dunlap wrote:
> The original build setup used a "fake GOPATH" in tools/golang to test
> the mechanism of building from go package files installed on a
> filesystem.  With the move to modules, this isn't necessary, and leads
> to potentially confusing directories being created.  (I.e., it might
> not be obvious that files under tools/golang/src shouldn't be edited.)
> 
> Get rid of the code that creates this (now unused) intermediate
> directory.  Add direct dependencies from 'build' onto the source
> files.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>

If you want to just make that change (removing GOXL_PKG_DIR) on check-in,

Reviewed-by: Nick Rosbrook <rosbrookn@ainfosec.com>


From xen-devel-bounces@lists.xenproject.org Wed May 27 01:02:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 01:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdkSR-0007Am-6g; Wed, 27 May 2020 01:02:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dR7h=7J=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jdkSQ-0007Ah-LE
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 01:02:10 +0000
X-Inumbo-ID: b084bf4c-9fb5-11ea-81bc-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b084bf4c-9fb5-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 01:02:09 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: +tmivBmUIBfMiRW1qfV0UofSbEzv3m76rP1iLS1598pEP9EGORPXTeeRdu1M6OlUHhrl+FBzfY
 +aYegj597GkgF7eMc3EIix2mAQllPLtS95enhNVVBsDYV+fpWAHOxgk1BTI63hDUgiVBi7iL1a
 d9Q8TquhIdegVKga9CYotVzNrZVBEDcl69RR/qU15NyvOtQ4Q8x1j3wvukmrlENviLLkSx8PYw
 D8fiu/NDD7G5lfov1yOBGLsJTg4aWCw8W/zf0xvkfQGpwQu+be/8R5VSYNLJqdhlwjlrpX0BY/
 k0g=
X-SBRS: 2.7
X-MesageID: 18801100
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,439,1583211600"; d="scan'208";a="18801100"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2] x86/svm: retry after unhandled NPT fault if gfn was marked
 for recalculation
Date: Wed, 27 May 2020 02:01:48 +0100
Message-ID: <1590541308-11317-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, kevin.tian@intel.com,
 jbeulich@suse.com, wl@xen.org, andrew.cooper3@citrix.com,
 jun.nakajima@intel.com, roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If a recalculation NPT fault hasn't been handled explicitly in
hvm_hap_nested_page_fault() then it's potentially safe to retry -
the US bit has been reinstated in the PTE and any real fault would be
correctly re-raised next time. Do it by allowing
hvm_hap_nested_page_fault() to fall through in that case.

This covers a specific case of migration with a vGPU assigned on AMD:
global log-dirty is enabled and causes an immediate recalculation NPT
fault in an MMIO area upon access. This type of fault isn't handled
explicitly in hvm_hap_nested_page_fault() (the function isn't called
on EPT misconfig exits on Intel), which results in a domain crash.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
Changes in v2:
- don't gamble with retrying every recal fault and instead let
  hvm_hap_nested_page_fault know it's allowed to fall through in default case
---
 xen/arch/x86/hvm/hvm.c        | 6 +++---
 xen/arch/x86/hvm/svm/svm.c    | 7 ++++++-
 xen/arch/x86/hvm/vmx/vmx.c    | 2 +-
 xen/include/asm-x86/hvm/hvm.h | 2 +-
 4 files changed, 11 insertions(+), 6 deletions(-)

diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 74c9f84..42bd720 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1731,7 +1731,7 @@ void hvm_inject_event(const struct x86_event *event)
 }
 
 int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
-                              struct npfec npfec)
+                              struct npfec npfec, bool fall_through)
 {
     unsigned long gfn = gpa >> PAGE_SHIFT;
     p2m_type_t p2mt;
@@ -1740,7 +1740,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
     struct vcpu *curr = current;
     struct domain *currd = curr->domain;
     struct p2m_domain *p2m, *hostp2m;
-    int rc, fall_through = 0, paged = 0;
+    int rc, paged = 0;
     bool sharing_enomem = false;
     vm_event_request_t *req_ptr = NULL;
     bool sync = false;
@@ -1905,7 +1905,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
             sync = p2m_mem_access_check(gpa, gla, npfec, &req_ptr);
 
             if ( !sync )
-                fall_through = 1;
+                fall_through = true;
             else
             {
                 /* Rights not promoted (aka. sync event), work here is done */
diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 46a1aac..8ef3fed 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -1695,7 +1695,12 @@ static void svm_do_nested_pgfault(struct vcpu *v,
     else if ( pfec & NPT_PFEC_in_gpt )
         npfec.kind = npfec_kind_in_gpt;
 
-    ret = hvm_hap_nested_page_fault(gpa, ~0ul, npfec);
+    /*
+     * US bit being set in error code indicates P2M type recalculation has
+     * just been done meaning that it's possible there is nothing else to handle
+     * and we can just fall through and retry.
+     */
+    ret = hvm_hap_nested_page_fault(gpa, ~0ul, npfec, !!(pfec & PFEC_user_mode));
 
     if ( tb_init_done )
     {
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 11a4dd9..10f1eeb 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -3398,7 +3398,7 @@ static void ept_handle_violation(ept_qual_t q, paddr_t gpa)
     else
         gla = ~0ull;
 
-    ret = hvm_hap_nested_page_fault(gpa, gla, npfec);
+    ret = hvm_hap_nested_page_fault(gpa, gla, npfec, false);
     switch ( ret )
     {
     case 0:         // Unhandled L1 EPT violation
diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
index 1eb377d..03e5f1d 100644
--- a/xen/include/asm-x86/hvm/hvm.h
+++ b/xen/include/asm-x86/hvm/hvm.h
@@ -329,7 +329,7 @@ void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx);
 
 struct npfec;
 int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
-                              struct npfec npfec);
+                              struct npfec npfec, bool fall_through);
 
 /* Check CR4/EFER values */
 const char *hvm_efer_valid(const struct vcpu *v, uint64_t value,
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Wed May 27 01:08:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 01:08:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdkYU-0007NQ-1I; Wed, 27 May 2020 01:08:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdkYS-0007NL-VC
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 01:08:25 +0000
X-Inumbo-ID: 8ccd0040-9fb6-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ccd0040-9fb6-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 01:08:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/Mxi2rDGrw3YR00XcCXPZFc8AmFma4e3hvfZfpMnuE8=; b=0Zvn/aG1k1UNgVPiM04q8lRYh
 ec+DPLg0aGsdOtgEURQDYnr513vVCKq/WGQJuA6/NSLaSm1LqMSZ/I0AayiMoo0Gm64WM9rmZ/Gnp
 j3GRAPABYC0rCL6HhO2NPsME5g3ggeUZzqkS7tJlYq9ydQh9Z+CPjAde87DB+tcgRzjkw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdkYL-0003rL-Qo; Wed, 27 May 2020 01:08:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdkYL-0005Og-J2; Wed, 27 May 2020 01:08:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdkYL-0001hz-HG; Wed, 27 May 2020 01:08:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150386-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150386: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=8f72c75cfc9b3c84a9b5e7a58ee5e471cb2f19c8
X-Osstest-Versions-That: qemuu=fea8f3ed739536fca027cf56af7f5576f37ef9cd
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 01:08:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150386 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150386/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150353
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150353
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150353
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150353
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150353
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150353
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150353
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8f72c75cfc9b3c84a9b5e7a58ee5e471cb2f19c8
baseline version:
 qemuu                fea8f3ed739536fca027cf56af7f5576f37ef9cd

Last test of basis   150353  2020-05-24 14:07:02 Z    2 days
Testing same since   150386  2020-05-26 13:06:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Bruce Rogers <brogers@suse.com>
  Geoffrey McRae <geoff@hostfission.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Prasad J Pandit <pjp@fedoraproject.org>
  Volker Rümelin <vr_qemu@t-online.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   fea8f3ed73..8f72c75cfc  8f72c75cfc9b3c84a9b5e7a58ee5e471cb2f19c8 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed May 27 04:27:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 04:27:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdneD-0007FJ-0L; Wed, 27 May 2020 04:26:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=hL2n=7J=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jdneB-0007FE-K9
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 04:26:31 +0000
X-Inumbo-ID: 3bb9f283-9fd2-11ea-a6e5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3bb9f283-9fd2-11ea-a6e5-12813bfff9fa;
 Wed, 27 May 2020 04:26:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 25BAEAAC7;
 Wed, 27 May 2020 04:26:31 +0000 (UTC)
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
To: Dario Faggioli <dfaggioli@suse.com>, Jan Beulich <jbeulich@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
 <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <5939e797-09be-53d1-b87f-d6c6c97ea3a3@suse.com>
Date: Wed, 27 May 2020 06:26:26 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.20 00:00, Dario Faggioli wrote:
> Hey,
> 
> thanks for the review, and sorry for replying late... I was busy with
> something and then was trying to implement a better balancing logic, as
> discussed with Juergen, but with only partial success...
> 
> On Thu, 2020-04-30 at 08:45 +0200, Jan Beulich wrote:
>> On 29.04.2020 19:36, Dario Faggioli wrote:
>>> @@ -852,14 +862,61 @@ cpu_runqueue_match(const struct
>>> [...]
>>> +        ASSERT(rcpu != cpu);
>>> +        if ( !cpumask_test_cpu(rcpu, cpumask_scratch_cpu(cpu)) )
>>> +        {
>>> +            /*
>>> +             * For each CPU already in the runqueue, account for it and for
>>> +             * its sibling(s), independently from whether such sibling(s) are
>>> +             * in the runqueue already or not.
>>> +             *
>>> +             * Of course, if there are sibling CPUs in the runqueue already,
>>> +             * only count them once.
>>> +             */
>>> +            cpumask_or(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
>>> +                       per_cpu(cpu_sibling_mask, rcpu));
>>> +            nr_smts += nr_sibl;
>>
>> This being common code, is it appropriate to assume all CPUs have
>> the same number of siblings?
>>
> You mention common code because you are thinking of differences between
> x86 and ARM? On ARM --although there might be (I'm not sure) chips
> that have SMT, or ones that we may want to identify and treat as if
> they were SMT-- we currently have no support for that, so I don't
> think it is a problem.
> 
> On x86, I'm not sure I am aware of cases where the number of threads is
> different among cores or sockets... are there any?
> 
> Besides, we have some SMT specific code around (especially in
> scheduling) already.
> 
>> Even beyond that, iirc the sibling mask
>> represents the online or parked siblings, but not offline ones. For
>> the purpose here, don't you rather care about the full set?
>>
> This is actually a good point. I indeed care about the number of
> siblings a thread has, in general, not only about the ones that are
> currently online.
> 
> In v2, I'll be using boot_cpu_data.x86_num_siblings, of course wrapped
> in a helper that just returns 1 for ARM. What do you think, is this
> better?
> 
>> What about HT vs AMD Fam15's CUs? Do you want both to be treated
>> the same here?
>>
> Are you referring to the cores that, AFAIUI, share the L1i cache? If
> yes, I thought about it, and ended up _not_ dealing with them here, but
> I'm still a bit unsure.
> 
> Cache oriented runqueue organization will be the subject of another
> patch series, and that's why I kept them out. However, that's a rather
> special case with a lot in common with SMT... Just in case, is there a
> way to identify them easily, like with a mask or something, in the code
> already?
> 
>> Also could you outline the intentions with this logic in the
>> description, to be able to match the goal with what gets done?
>>
> Sure, I will try to state it more clearly.
> 
>>> @@ -900,6 +990,12 @@ cpu_add_to_runqueue(struct csched2_private
>>> *prv, unsigned int cpu)
>>>           rqd->pick_bias = cpu;
>>>           rqd->id = rqi;
>>>       }
>>> +    else
>>> +        rqd = rqd_valid;
>>> +
>>> +    printk(XENLOG_INFO "CPU %d (sibling={%*pbl}) will go to runqueue %d with {%*pbl}\n",
>>> +           cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)), rqd->id,
>>> +           CPUMASK_PR(&rqd->active));
>>
>> Iirc there's one per-CPU printk() already. On large systems this
>> isn't very nice, so I'd like to ask that their total number at least
>> not grow further. Ideally there would be a less verbose summary after
>> all CPUs have been brought up at boot, with per-CPU info being logged
>> only during CPU hot online.
>>
> Understood. Problem is that, here in the scheduling code, I don't see
> an easy way to tell when we have finished bringing up CPUs... And it's
> probably not worth looking too hard (even less adding logic) only for
> the sake of printing this message.

cpupool_init() is the perfect place for that.


Juergen


From xen-devel-bounces@lists.xenproject.org Wed May 27 06:00:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 06:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdp6e-0006WZ-42; Wed, 27 May 2020 06:00:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KwrG=7J=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jdp6c-0006WU-OQ
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 05:59:59 +0000
X-Inumbo-ID: 4959291e-9fdf-11ea-9947-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4959291e-9fdf-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 05:59:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590559194;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=j6YgtXDpCWCf9KnSRhU+LICwWG5pFJTKsR57WPE8YO4=;
 b=BNJC+gRWTNmjWtGnU6/e04Eqznkr18U6q9vpb14NE3Omf+Fyr1qGwGik0VMgJdOwEezLaw
 ln86CzZjjd2KZ0ktGDtDlIhpzxJgrlKUeFl4Pab/NVDDbX3eZ8Q7gujQBAUDsJs3+jgvKe
 IZmlRakMv/skATULzEpDsOkVzeh8AG8=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-387-eEH0X-D4OE60pRaysMOMXQ-1; Wed, 27 May 2020 01:59:51 -0400
X-MC-Unique: eEH0X-D4OE60pRaysMOMXQ-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 710038014D4;
 Wed, 27 May 2020 05:59:50 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-32.ams2.redhat.com
 [10.36.112.32])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 5C56279C5C;
 Wed, 27 May 2020 05:59:47 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id E4B9E11386A3; Wed, 27 May 2020 07:59:45 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PULL 02/10] xen: Fix and improve handling of device_add usb-host
 errors
Date: Wed, 27 May 2020 07:59:37 +0200
Message-Id: <20200527055945.6774-3-armbru@redhat.com>
In-Reply-To: <20200527055945.6774-1-armbru@redhat.com>
References: <20200527055945.6774-1-armbru@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

usbback_portid_add() leaks the error when qdev_device_add() fails.
Fix that.  While there, use the error to improve the error message.

The qemu_opts_from_qdict() similarly leaks on failure.  But any
failure there is a programming error.  Pass &error_abort.

Fixes: 816ac92ef769f9ffc534e49a1bb6177bddce7aa2
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Anthony Perard <anthony.perard@citrix.com>
Cc: Paul Durrant <paul@xen.org>
Cc: Gerd Hoffmann <kraxel@redhat.com>
Cc: xen-devel@lists.xenproject.org
Signed-off-by: Markus Armbruster <armbru@redhat.com>
Message-Id: <20200505101908.6207-3-armbru@redhat.com>
Acked-by: Paul Durrant <paul@xen.org>
---
 hw/usb/xen-usb.c | 19 +++++++++----------
 1 file changed, 9 insertions(+), 10 deletions(-)

diff --git a/hw/usb/xen-usb.c b/hw/usb/xen-usb.c
index 961190d0f7..4d266d7bb4 100644
--- a/hw/usb/xen-usb.c
+++ b/hw/usb/xen-usb.c
@@ -30,6 +30,7 @@
 #include "hw/usb.h"
 #include "hw/xen/xen-legacy-backend.h"
 #include "monitor/qdev.h"
+#include "qapi/error.h"
 #include "qapi/qmp/qdict.h"
 #include "qapi/qmp/qstring.h"
 
@@ -755,13 +756,16 @@ static void usbback_portid_add(struct usbback_info *usbif, unsigned port,
     qdict_put_int(qdict, "port", port);
     qdict_put_int(qdict, "hostbus", atoi(busid));
     qdict_put_str(qdict, "hostport", portname);
-    opts = qemu_opts_from_qdict(qemu_find_opts("device"), qdict, &local_err);
-    if (local_err) {
-        goto err;
-    }
+    opts = qemu_opts_from_qdict(qemu_find_opts("device"), qdict,
+                                &error_abort);
     usbif->ports[port - 1].dev = USB_DEVICE(qdev_device_add(opts, &local_err));
     if (!usbif->ports[port - 1].dev) {
-        goto err;
+        qobject_unref(qdict);
+        xen_pv_printf(&usbif->xendev, 0,
+                      "device %s could not be opened: %s\n",
+                      busid, error_get_pretty(local_err));
+        error_free(local_err);
+        return;
     }
     qobject_unref(qdict);
     speed = usbif->ports[port - 1].dev->speed;
@@ -793,11 +797,6 @@ static void usbback_portid_add(struct usbback_info *usbif, unsigned port,
     usbback_hotplug_enq(usbif, port);
 
     TR_BUS(&usbif->xendev, "port %d attached\n", port);
-    return;
-
-err:
-    qobject_unref(qdict);
-    xen_pv_printf(&usbif->xendev, 0, "device %s could not be opened\n", busid);
 }
 
 static void usbback_process_port(struct usbback_info *usbif, unsigned port)
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Wed May 27 06:18:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 06:18:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdpNv-0008RX-PG; Wed, 27 May 2020 06:17:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdpNu-0008RS-4r
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 06:17:50 +0000
X-Inumbo-ID: c95b937a-9fe1-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c95b937a-9fe1-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 06:17:48 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 6FDBAACFE;
 Wed, 27 May 2020 06:17:50 +0000 (UTC)
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
To: Dario Faggioli <dfaggioli@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
 <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cd566bb2-753f-b0eb-3c6a-bc2dc01cf37c@suse.com>
Date: Wed, 27 May 2020 08:17:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 00:00, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 08:45 +0200, Jan Beulich wrote:
>> On 29.04.2020 19:36, Dario Faggioli wrote:
>>> @@ -852,14 +862,61 @@ cpu_runqueue_match(const struct
>>> [...]
>>> +        ASSERT(rcpu != cpu);
>>> +        if ( !cpumask_test_cpu(rcpu, cpumask_scratch_cpu(cpu)) )
>>> +        {
>>> +            /*
>>> +             * For each CPU already in the runqueue, account for it and for
>>> +             * its sibling(s), independently from whether such sibling(s) are
>>> +             * in the runqueue already or not.
>>> +             *
>>> +             * Of course, if there are sibling CPUs in the runqueue already,
>>> +             * only count them once.
>>> +             */
>>> +            cpumask_or(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
>>> +                       per_cpu(cpu_sibling_mask, rcpu));
>>> +            nr_smts += nr_sibl;
>>
>> This being common code, is it appropriate to assume all CPUs have
>> the same number of siblings?
>>
> You mention common code because you are thinking of differences between
> x86 and ARM? On ARM --although there might be (I'm not sure) chips
> that have SMT, or ones that we may want to identify and treat as if
> they were SMT-- we currently have no support for that, so I don't
> think it is a problem.
> 
> On x86, I'm not sure I am aware of cases where the number of threads is
> different among cores or sockets... are there any?

I'm not aware of any either, but in common code it should also
matter what might be, not just what there is. Unless you wrap
things in e.g. CONFIG_X86.

>> Even beyond that, iirc the sibling mask
>> represents the online or parked siblings, but not offline ones. For
>> the purpose here, don't you rather care about the full set?
>>
> This is actually a good point. I indeed care about the number of
> siblings a thread has, in general, not only about the ones that are
> currently online.
> 
> In v2, I'll be using boot_cpu_data.x86_num_siblings, of course wrapped
> in a helper that just returns 1 for ARM. What do you think, is this
> better?

As per above, cpu_data[rcpu] then please.

>> What about HT vs AMD Fam15's CUs? Do you want both to be treated
>> the same here?
>>
> Are you referring to the cores that, AFAIUI, share the L1i cache? If
> yes, I thought about it, and ended up _not_ dealing with them here, but
> I'm still a bit unsure.
> 
> Cache oriented runqueue organization will be the subject of another
> patch series, and that's why I kept them out. However, that's a rather
> special case with a lot in common with SMT...

I didn't think of cache sharing in particular, but about the
concept of compute units vs hyperthreads in general.

> Just in case, is there a
> way to identify them easily, like with a mask or something, in the code
> already?

cpu_sibling_mask still gets used for both, so there's no mask
to use. As per set_cpu_sibling_map() you can look at
cpu_data[].compute_unit_id to tell, but that's of course x86-
specific (as is the entire compute unit concept).

>>> @@ -900,6 +990,12 @@ cpu_add_to_runqueue(struct csched2_private
>>> *prv, unsigned int cpu)
>>>          rqd->pick_bias = cpu;
>>>          rqd->id = rqi;
>>>      }
>>> +    else
>>> +        rqd = rqd_valid;
>>> +
>>> +    printk(XENLOG_INFO "CPU %d (sibling={%*pbl}) will go to runqueue %d with {%*pbl}\n",
>>> +           cpu, CPUMASK_PR(per_cpu(cpu_sibling_mask, cpu)), rqd->id,
>>> +           CPUMASK_PR(&rqd->active));
>>
>> Iirc there's one per-CPU printk() already. On large systems this
>> isn't very nice, so I'd like to ask that their total number at least
>> not grow further. Ideally there would be a less verbose summary after
>> all CPUs have been brought up at boot, with per-CPU info being logged
>> only during CPU hot online.
>>
> Understood. Problem is that, here in the scheduling code, I don't see
> an easy way to tell when we have finished bringing up CPUs... And it's
> probably not worth looking too hard (even less adding logic) only for
> the sake of printing this message.
> 
> So I think I will demote this printk as a XENLOG_DEBUG one (and will
> also get rid of others that were already DEBUG, but not super useful,
> after some more thinking).

Having seen Jürgen's reply as well as what you further wrote below,
I'd still like to point out that even XENLOG_DEBUG isn't quiet
enough: There may be some value to such a debugging message to you,
but it may be mainly spam to e.g. me, who still needs to run with
loglvl=all in the common case. Let's not forget, the context in
which the underlying topic came up was pretty-many-core AMD CPUs.

We had this issue elsewhere as well, and as CPU counts grew we
started hiding such messages behind a command line option, besides
a log level qualification. Similarly we hide e.g. details of the
IOMMU arrangements behind a command line option.

> The idea is that, after all, exactly which runqueue a CPU ends up on is
> not information that should matter much to a user/administrator.
> For now, it will be possible to know where they ended up via debug
> keys. And even if we feel like making it more visible, that is better
> achieved via some toolstack command that queries and prints the
> configuration of the scheduler, rather than a line like this in the
> boot log.

Good.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 27 06:22:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 06:22:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdpSm-0000p1-C7; Wed, 27 May 2020 06:22:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdpSk-0000ow-I0
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 06:22:50 +0000
X-Inumbo-ID: 7cce32e6-9fe2-11ea-9947-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7cce32e6-9fe2-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 06:22:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 735FCB140;
 Wed, 27 May 2020 06:22:51 +0000 (UTC)
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
To: Dario Faggioli <dfaggioli@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <b368ccef-d3b1-1338-6325-8f81a963876d@suse.com>
 <d60d5b917d517b1dfa8292cfb456639c736ec173.camel@suse.com>
 <7e039c65-4532-c3ea-8707-72a86cf48e0e@suse.com>
 <8bf86f0c2bcce449cf7643aa9b98aa26ea558c2c.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9948ac59-af64-d77d-57df-38a771a472b4@suse.com>
Date: Wed, 27 May 2020 08:22:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8bf86f0c2bcce449cf7643aa9b98aa26ea558c2c.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, paul@xen.org,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 23:18, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 14:52 +0200, Jürgen Groß wrote:
>> On 30.04.20 14:28, Dario Faggioli wrote:
>>> That being said, I can try to make things a bit more fair, when
>>> CPUs
>>> come up and are added to the pool. Something around the line of
>>> adding
>>> them to the runqueue with the least number of CPUs in it (among the
>>> suitable ones, of course).
>>>
>>> With that, when the user removes 4 CPUs, we will have the 6 vs 10
>>> situation. But we would make sure that, when she adds them back, we
>>> will go back to 10 vs. 10, instead than, say, 6 vs 14 or something
>>> like
>>> that.
>>>
>>> Was something like this that you had in mind? And in any case, what
>>> do
>>> you think about it?
>>
>> Yes, this would be better already.
>>
> So, a couple of thoughts. Doing something like what I tried to describe
> above is not too bad, and I have it pretty much ready.
> 
> With that, on an Intel system with 96 CPUs on two sockets, and
> max_cpus_per_runqueue set to 16, I got, after boot, instead of just 2
> giant runqueues with 48 CPUs in each:
> 
>  - 96 CPUs online, split in 6 runqueues (3 for each socket) with 16
>    CPUs in each of them
> 
> I can also "tweak" it in such a way that, if one for instance boots
> with "smt=no", we get to a point where we have:
> 
>  - 48 CPUs online, split in 6 runqueues, with 8 CPUs in each
> 
> Now, I think this is good, and actually better than the current
> situation where on such a system, we only have two very big runqueues
> (and let me repeat that introducing a per-LLC runqueue arrangement, on
> which I'm also working, won't help in this case, as NUMA node == LLC).
> 
> So, problem is that if one starts to fiddle with cpupools and cpu on
> and offlining, things can get pretty unbalanced. E.g., I can end up
> with 2 runqueues on a socket, one with 16 CPUs and the other with just
> a couple of them.
> 
> Now, this is all possible as of now (I mean, without this patch)
> already, although at a different level. In fact, I can very well remove
> all-but-1 CPUs of node 1 from Pool-0, and end up again with a runqueue
> with a lot of CPUs and another with just one.
> 
> It looks like we need a way to rebalance the runqueues, which should be
> doable... But despite having spent a couple of days trying to come up
> with something decent, that I could include in v2 of this series, I
> couldn't get it to work sensibly.

CPU on-/offlining may not need considering here at all imo. I think it
would be quite reasonable as a first step to have a static model where
from system topology (and perhaps a command line option) one can
predict which runqueue a particular CPU will end up in, no matter when
it gets brought online.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 27 06:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 06:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdpdv-0001nK-KC; Wed, 27 May 2020 06:34:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdpdu-0001nF-EM
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 06:34:22 +0000
X-Inumbo-ID: 15e29e58-9fe4-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15e29e58-9fe4-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 06:34:15 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Yt5cNIgFabXn/mrMFblKHbe4jKLY+U8qza/nCCjXjBo=; b=JMR3nJ/L5yVbmj4e2Osj+Bjos
 YAJpPT9CAPWnEmEidZUE0WfDITNxJi98aNirYhYaXn5YfrV2Hh8Phg4oKh56TiS+dGqCTCS9T8DsV
 27+jU9LMBnw4O50LU5LYN6tyFO7n1R0yZ0U3uJzx2eMqFZd96AVDXV174NPL0T3ugdLd0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdpdn-00032b-41; Wed, 27 May 2020 06:34:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdpdm-0008NH-RW; Wed, 27 May 2020 06:34:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdpdm-00042M-Pz; Wed, 27 May 2020 06:34:14 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150389-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150389: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
X-Osstest-Versions-That: xen=354e8318d5a9b6f32fbd3c01d1a9f1970007010b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 06:34:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150389 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150389/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150370
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150378
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150378
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150378
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150378
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150378
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150378
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150378
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150378
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150378
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
baseline version:
 xen                  354e8318d5a9b6f32fbd3c01d1a9f1970007010b

Last test of basis   150378  2020-05-26 09:58:43 Z    0 days
Testing same since   150389  2020-05-26 22:37:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  George Dunlap <george.dunlap@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   354e8318d5..d89e5e65f3  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707 -> master


From xen-devel-bounces@lists.xenproject.org Wed May 27 06:55:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 06:55:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdpxb-0003Wb-Cc; Wed, 27 May 2020 06:54:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdpxZ-0003WW-Mg
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 06:54:41 +0000
X-Inumbo-ID: ed1ed7a4-9fe6-11ea-a6fb-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ed1ed7a4-9fe6-11ea-a6fb-12813bfff9fa;
 Wed, 27 May 2020 06:54:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AC83BACA3;
 Wed, 27 May 2020 06:54:37 +0000 (UTC)
Subject: Re: [PATCH 02/16] x86/traps: Clean up printing in
 do_reserved_trap()/fatal_trap()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-3-andrew.cooper3@citrix.com>
 <aca22b53-895e-19bb-c54c-f1e4945c95c1@suse.com>
 <8f1d68b1-895a-d2a6-4dcb-55b688b03336@citrix.com>
 <b1ef905c-dab6-d1c3-4673-4c06c7e94a0a@suse.com>
 <560c3bce-211a-52ab-c919-9ca1ab9beab3@citrix.com>
 <45545b0c-2f0d-1de8-88ec-472d0a110eaa@suse.com>
 <8f5dc985-502c-3ed5-1e4e-980a91acfba4@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e685e84f-30c4-657e-e7b3-82245b5e9ccf@suse.com>
Date: Wed, 27 May 2020 08:54:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8f5dc985-502c-3ed5-1e4e-980a91acfba4@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 17:38, Andrew Cooper wrote:
> On 19/05/2020 09:50, Jan Beulich wrote:
>> On 18.05.2020 18:54, Andrew Cooper wrote:
>>> On 11/05/2020 16:09, Jan Beulich wrote:
>>>> On 11.05.2020 17:01, Andrew Cooper wrote:
>>>>> On 04/05/2020 14:08, Jan Beulich wrote:
>>>>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>>>>> For one, they render the vector in a different base.
>>>>>>>
>>>>>>> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
>>>>>>> mnemonic, which starts bringing the code/diagnostics in line with the Intel
>>>>>>> and AMD manuals.
>>>>>> For this "bringing in line" purpose I'd like to see whether you could
>>>>>> live with some adjustments to how you're currently doing things:
>>>>>> - NMI is nowhere prefixed by #, hence I think we'd better not do so
>>>>>>   either; may require embedding the #-es in the names[] table, or not
>>>>>>   using N() for NMI
>>>>> No-one is going to get confused at seeing #NMI in an error message.  I
>>>>> don't mind jugging the existing names table, but anything more
>>>>> complicated is overkill.
>>>>>
>>>>>> - neither Coprocessor Segment Overrun nor vector 0x0f have a mnemonic
>>>>>>   and hence I think we shouldn't invent one; just treat them like
>>>>>>   other reserved vectors (of which at least vector 0x09 indeed is one
>>>>>>   on x86-64)?
>>>>> This I disagree with.  Coprocessor Segment Overrun *is* its name in both
>>>>> manuals, and the avoidance of vector 0xf is clearly documented as well,
>>>>> due to it being the default PIC Spurious Interrupt Vector.
>>>>>
>>>>> Neither CSO or SPV are expected to be encountered in practice, but if
>>>>> they are, highlighting them is a damn-sight more helpful than pretending
>>>>> they don't exist.
>>>> How is them occurring (and getting logged with their vector numbers)
>>>> any different from other reserved, acronym-less vectors? I particularly
>>>> didn't suggest to pretend they don't exist; instead I did suggest that
>>>> they are as reserved as, say, vector 0x18. By inventing an acronym and
>>>> logging this instead of the vector number you'll make people other than
>>>> you have to look up what the odd acronym means iff such an exception
>>>> ever got raised.
>>> You snipped the bits in the patch where both the vector number and
>>> acronym are printed together.
>>>
>>> Anyone who doesn't know the vector has to look it up anyway, at which
>>> point they'll find that what Xen prints out matches what both manuals
>>> say.  OTOH, people who know what a coprocessor segment overrun or PIC
>>> spurious vector is won't need to look it up.
>> And who knows how to decipher the non-standard CPO and SPV (which are
>> what triggered my comments in the first place).
> 
> CSO, and no.
> 
> Anyone who doesn't know the text still has the vector number to work
> with, and still needs to look it up.
> 
> At which point they will observe that the text is appropriate in context.
> 
>> What I continue to fail to
>> see is why these reserved vectors need treatment different from all
>> others.
> 
> Because it has nothing to do with reserved-ness.

How does it not? The SDM page, among historic information, specifically
says "Intel reserved". Seeing more exception vectors getting used after
many years of "silence" in this area, I'm pretty sure if they ran out
of vectors they'd re-use this one. Vector 15 doesn't even have a page,
which puts it even more in the same group as other reserved ones.

> It is about providing clarifying information (for all vectors which
> currently have, or have ever had, meaning) for mere mortals who can't
> (or rather, don't want to) debug crashes based on raw numbers alone.
> 
>> In addition I'm having trouble seeing how the default spurious
>> PIC vector matters for us - we program the PIC to vectors 0x20-0x2f,
>> i.e. a spurious PIC0 IRQ would show up at vector 0x27. (I notice we
>> still blindly assume there's a pair of PICs in the first place.)
> 
> That's not relevant.  What is relevant is the actions taken when we see
> vector 15 being raised.
> 
> Hitting CSO means that legacy #FERR_FREEZE external signal has been
> wired up (and it is very SMP-unsafe, hence why it was phased out with
> the introduction of integrated x87s).

What does FERR have to do with this vector? This exception is a
stand-in for #GP (and maybe #PF) on the 386/387 pair.

> Hitting SPV means that the PIC wasn't reprogrammed and something wonky
> is going on with one of the input pins.

If the PIC was neither re-programmed nor properly masked, we're in
bigger trouble, I'm afraid.

> Both of these are strictly more helpful in a log than "something went
> wrong - figure it out yourself", and both indicate that something is
> very wrong with the system.

So what do we do? We don't seem to be able to reach agreement here,
because our views are different and neither of us can convince the
other. Looking back at my initial reply, hesitantly
Acked-by: Jan Beulich <jbeulich@suse.com>
then.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 27 07:01:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 07:01:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdq4G-0004RG-4B; Wed, 27 May 2020 07:01:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdq4E-0004RB-Bi
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 07:01:34 +0000
X-Inumbo-ID: e5a76760-9fe7-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5a76760-9fe7-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 07:01:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id EDD8BACD5;
 Wed, 27 May 2020 07:01:34 +0000 (UTC)
Subject: Re: [PATCH 03/16] x86/traps: Factor out exception_fixup() and make
 printing consistent
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200501225838.9866-1-andrew.cooper3@citrix.com>
 <20200501225838.9866-4-andrew.cooper3@citrix.com>
 <f7cb696a-5c2c-4aa6-d379-ed77772b7c35@suse.com>
 <a397dd69-2384-a4af-d127-9189a730a554@citrix.com>
 <afd75bde-9adf-d8cf-f8cf-24cb1b753253@suse.com>
 <9c939815-a4f9-75d7-3b6b-b8921de6cdb9@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0d19ed8a-d1f5-930c-3958-a34a69b7a24e@suse.com>
Date: Wed, 27 May 2020 09:01:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <9c939815-a4f9-75d7-3b6b-b8921de6cdb9@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 20:06, Andrew Cooper wrote:
> On 12/05/2020 14:05, Jan Beulich wrote:
>> On 11.05.2020 17:14, Andrew Cooper wrote:
>>> On 04/05/2020 14:20, Jan Beulich wrote:
>>>> On 02.05.2020 00:58, Andrew Cooper wrote:
>>>>> --- a/xen/arch/x86/traps.c
>>>>> +++ b/xen/arch/x86/traps.c
>>>>> @@ -774,10 +774,27 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>>>>>            trapnr, vec_name(trapnr), regs->error_code);
>>>>>  }
>>>>>  
>>>>> +static bool exception_fixup(struct cpu_user_regs *regs, bool print)
>>>>> +{
>>>>> +    unsigned long fixup = search_exception_table(regs);
>>>>> +
>>>>> +    if ( unlikely(fixup == 0) )
>>>>> +        return false;
>>>>> +
>>>>> +    /* Can currently be triggered by guests.  Make sure we ratelimit. */
>>>>> +    if ( IS_ENABLED(CONFIG_DEBUG) && print )
>>>> I didn't think we consider dprintk()-s a possible security issue.
>>>> Why would we consider so a printk() hidden behind
>>>> IS_ENABLED(CONFIG_DEBUG)? IOW I think one of XENLOG_GUEST and
>>>> IS_ENABLED(CONFIG_DEBUG) wants dropping.
>>> Who said anything about a security issue?
>> The need to rate limit is (among other aspects) to prevent a
>> (logspam) security issue, isn't it?
> 
> Rate limiting (from a security aspect) is a stopgap solution to relieve
> incidental pressure on the various global spinlocks involved.
> 
> It specifically does not prevent a guest from trivially filling the
> console ring with junk, or for that junk to be written to
> /var/log/xen/hypervisor.log at an alarming rate, both of which are
> issues in production setups, but not security issues.

IOW you assert that e.g. XSA-141 should not have been issued?

> Technical solutions to these problems do exist, such as deleting the
> offending printk(), or maintaining per-guest console rings, but both
> come with downsides in terms of usability, which similarly impacts
> production setups.
> 
> 
> What ratelimiting even in debug builds gets you is a quick spate of
> printks() (e.g. any new sshd connection on an AMD system where the
> MSR_VIRT_SPEC_CTRL patch is still uncommitted, and the default WRMSR
> behaviour breaking wrmsr_safe() logic in Linux) not wasting an
> unreasonable quantity of space in the console ring.

Hmm, okay, I can accept this perspective. Since the other comment I
gave has been taken care of by re-arrangements in a separate patch:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 27 07:56:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 07:56:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdqv0-0000Fp-Dn; Wed, 27 May 2020 07:56:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdquz-0000Ex-0T
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 07:56:05 +0000
X-Inumbo-ID: 8050c660-9fef-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8050c660-9fef-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 07:55:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=zfynMdo5N2KU69GYR6rRtXTJr8zH+yoBdSWzQLJYioc=; b=R7I0RKQ3xAHb2LXpI47vQP2DE
 DRNaB/VtNtzQXfKuCpk/INBmaSNnNCK7RE0PeJvxkJqbB4XDYVZJcXFpKzbyeW4K6dt8V3ZpM+yk9
 1seEDAVUeZXJNtk9zgD65KPzcqcqd7plpT51B+2ZhrbrwZzAyt6RdnkEdT2Pi3ScVPrho=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdqur-0004jv-Hy; Wed, 27 May 2020 07:55:57 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdqur-0004NS-AH; Wed, 27 May 2020 07:55:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdqur-0000uT-9Z; Wed, 27 May 2020 07:55:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150393-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150393: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=4eda71a8d05d968e73ab9b0fdc8a90123c57d39e
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 07:55:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150393 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150393/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4eda71a8d05d968e73ab9b0fdc8a90123c57d39e
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  131 days
Failing since        146211  2020-01-18 04:18:52 Z  130 days  121 attempts
Testing same since   150393  2020-05-27 04:19:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19542 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 27 08:00:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 08:00:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdqz0-0001Vm-9d; Wed, 27 May 2020 08:00:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YYzi=7J=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdqyy-0001Sl-J1
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 08:00:12 +0000
X-Inumbo-ID: 1573d188-9ff0-11ea-a70a-12813bfff9fa
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1573d188-9ff0-11ea-a70a-12813bfff9fa;
 Wed, 27 May 2020 08:00:09 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 95158122804;
 Wed, 27 May 2020 10:00:08 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590566408;
 bh=6rQ6u7P+sESM3XhmLHwdlYYRer/3126WoGV8Y9T1yC0=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=aPvdAznh47EwVX3q6zIkaFezmhg2LF7L49VQRqpfaZRFhVd62/GF0m49g7ijEso9W
 BFo1ITbwLFx/53S/kuT9n+art4id2s4u7SdwfbySqQx4zOfdypQH5d+ExbNvCyyZxS
 Ti2mNZk7DVSAbPqwBGSgv6onmBKpsY1E5J8sfbPVjwq9bpcS2B7WTU4FQLEkG8fS5K
 aGew6Ork10UXDzI5jC8JL3MXVYN+0//rHT6h+lEnZ/HMcljjhrkBCSBc0eouOoKSIb
 /fyuEpLSMKYl3YcM1NMS/nr2gqlmZXrRN69EP1kn8nx+fJKl4vHZmfWHnLMRMPQbrJ
 ey7USF7Sy+buQ==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 6F852265E722; Wed, 27 May 2020 10:00:08 +0200 (CEST)
Date: Wed, 27 May 2020 10:00:08 +0200
From: Martin Lucina <martin@lucina.net>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200527080008.GC4788@nodbug.lucina.net>
Mail-Followup-To: Martin Lucina <martin@lucina.net>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 mirageos-devel@lists.xenproject.org, dave@recoil.org,
 xen-devel@lists.xenproject.org
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
 <20200526101203.GE5942@nodbug.lucina.net>
 <20200526154224.GC25283@nodbug.lucina.net>
 <20200526163021.GE38408@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200526163021.GE38408@Air-de-Roger>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tuesday, 26.05.2020 at 18:30, Roger Pau Monné wrote:
> > Turns out that the .note.solo5.xen section as defined in boot.S was not
> > marked allocatable, and that was doing <something> that was confusing our
> > linker script[1] (?).
> 
> Hm, I would have said there was no need to load notes into memory, and
> hence using a MemSize of 0 would be fine.
> 
> Maybe libelf loader was somehow getting confused and not loading the
> image properly?
> 
> Can you paste the output of `xl -vvv create ...` when using the broken
> image?

Here you go:

Parsing config from ./test_hello.xl
libxl: debug: libxl_create.c:1671:do_domain_create: Domain 0:ao 0x5593c42e7e30: create: how=(nil) callback=(nil) poller=0x5593c42e7670
libxl: debug: libxl_create.c:1007:initiate_domain_create: Domain 2:running bootloader
libxl: debug: libxl_bootloader.c:335:libxl__bootloader_run: Domain 2:no bootloader configured, using user supplied kernel
libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x5593c42e9590: deregister unregistered
libxl: debug: libxl_sched.c:82:libxl__set_vcpuaffinity: Domain 2:New soft affinity for vcpu 0 has unreachable cpus
domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
domainbuilder: detail: xc_dom_kernel_file: filename="test_hello.xen"
domainbuilder: detail: xc_dom_malloc_filemap    : 191 kB
domainbuilder: detail: xc_dom_boot_xen_init: ver 4.11, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
domainbuilder: detail: xc_dom_parse_image: called
domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
domainbuilder: detail: xc_dom_probe_bzimage_kernel: kernel is not a bzImage
domainbuilder: detail: loader probe failed
domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ...
domainbuilder: detail: loader probe OK
xc: detail: ELF: phdr: paddr=0x100000 memsz=0x6264
xc: detail: ELF: phdr: paddr=0x107000 memsz=0xed48
xc: detail: ELF: memory: 0x100000 -> 0x115d48
xc: detail: ELF: note: PHYS32_ENTRY = 0x100020
xc: detail: ELF: Found PVH image
xc: detail: ELF: VIRT_BASE unset, using 0
xc: detail: ELF_PADDR_OFFSET unset, using 0
xc: detail: ELF: addresses:
xc: detail:     virt_base        = 0x0
xc: detail:     elf_paddr_offset = 0x0
xc: detail:     virt_offset      = 0x0
xc: detail:     virt_kstart      = 0x100000
xc: detail:     virt_kend        = 0x115d48
xc: detail:     virt_entry       = 0x1001e0
xc: detail:     p2m_base         = 0xffffffffffffffff
domainbuilder: detail: xc_dom_parse_elf_kernel: hvm-3.0-x86_32: 0x100000 -> 0x115d48
domainbuilder: detail: xc_dom_mem_init: mem 256 MB, pages 0x10000 pages, 4k each
domainbuilder: detail: xc_dom_mem_init: 0x10000 pages
domainbuilder: detail: xc_dom_boot_mem_init: called
domainbuilder: detail: range: start=0x0 end=0x10000400
domainbuilder: detail: xc_dom_malloc            : 512 kB
xc: detail: PHYSICAL MEMORY ALLOCATION:
xc: detail:   4KB PAGES: 0x0000000000000c00
xc: detail:   2MB PAGES: 0x000000000000007a
xc: detail:   1GB PAGES: 0x0000000000000000
domainbuilder: detail: xc_dom_build_image: called
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x16 at 0x7f5609445000
domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x116000  (pfn 0x100 + 0x16 pages)
xc: detail: ELF: phdr 1 at 0x7f5609445000 -> 0x7f560944b264
xc: detail: ELF: phdr 2 at 0x7f560944c000 -> 0x7f5609453120
domainbuilder: detail: xc_dom_load_acpi: 64 bytes at address fc008000
domainbuilder: detail: xc_dom_load_acpi: 4096 bytes at address fc000000
domainbuilder: detail: xc_dom_load_acpi: 28672 bytes at address fc001000
domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x116+0x1 at 0x7f5609ace000
domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x116000 -> 0x117000  (pfn 0x116 + 0x1 pages)
domainbuilder: detail: alloc_pgtables_hvm: doing nothing
domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x117000
domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
domainbuilder: detail: xc_dom_boot_image: called
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
domainbuilder: detail: domain builder memory footprint
domainbuilder: detail:    allocated
domainbuilder: detail:       malloc             : 515 kB
domainbuilder: detail:       anon mmap          : 0 bytes
domainbuilder: detail:    mapped
domainbuilder: detail:       file mmap          : 191 kB
domainbuilder: detail:       domU mmap          : 92 kB
domainbuilder: detail: vcpu_hvm: called
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff000
domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff001
domainbuilder: detail: xc_dom_release: called
libxl: debug: libxl_event.c:2194:libxl__ao_progress_report: ao 0x5593c42e7e30: progress report: callback queued aop=0x5593c42fea10
libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x5593c42e7e30: complete, rc=0
libxl: debug: libxl_event.c:1404:egc_run_callbacks: ao 0x5593c42e7e30: progress report: callback aop=0x5593c42fea10
libxl: debug: libxl_create.c:1708:do_domain_create: Domain 0:ao 0x5593c42e7e30: inprogress: poller=0x5593c42e7670, flags=ic
libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x5593c42e7e30: destroy
xencall:buffer: debug: total allocations:233 total releases:233
xencall:buffer: debug: current allocations:0 maximum allocations:3
xencall:buffer: debug: cache current size:3
xencall:buffer: debug: cache hits:215 misses:3 toobig:15
xencall:buffer: debug: total allocations:0 total releases:0
xencall:buffer: debug: current allocations:0 maximum allocations:0
xencall:buffer: debug: cache current size:0
xencall:buffer: debug: cache hits:0 misses:0 toobig:0

> 
> > 
> > If I make this simple change:
> > 
> > --- a/bindings/xen/boot.S
> > +++ b/bindings/xen/boot.S
> > @@ -32,7 +32,7 @@
> >  #define ENTRY(x) .text; .globl x; .type x,%function; x:
> >  #define END(x)   .size x, . - x
> > 
> > -.section .note.solo5.xen
> > +.section .note.solo5.xen, "a", @note
> > 
> >         .align  4
> >         .long   4
> > 
> > then I get the expected output from readelf -lW, and I can get as far as
> > the C _start() with no issues!
> > 
> > FWIW, here's the diff of readelf -lW before/after:
> > 
> > --- before	2020-05-26 17:36:46.117885855 +0200
> > +++ after	2020-05-26 17:38:07.090508322 +0200
> > @@ -8,9 +8,9 @@
> >    INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
> >        [Requesting program interpreter: /nonexistent/solo5/]
> >    LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00615c 0x00615c R E 0x1000
> > -  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed28 RW  0x1000
> > +  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x006120 0x00dd28 RW  0x1000
> 
> This seems suspicious, there's a change of the size of the LOAD
> section, but your change to the note type should not affect the LOAD
> section?

Indeed.

> 
> Hm, maybe it does because the .note.solo5.xen was considered writable
> by default?

I don't think so. From the broken image:

  [ 8] .note.solo5.xen   NOTE             00000000001070c4  0000f120
       0000000000000014  0000000000000000           0     0     4

From the good image:

  [ 8] .note.solo5.xen   NOTE             00000000001070c4  000080c4
       0000000000000014  0000000000000000   A       0     0     4
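(The note payload itself is unaffected by the section flags; only whether it lands in a PT_LOAD segment changes. For reference, the record the loader parses here follows the generic ELF note layout: a namesz/descsz/type header, then the NUL-terminated owner name and the descriptor, each padded to a 4-byte boundary. A minimal sketch in Python — the type value 18 is XEN_ELFNOTE_PHYS32_ENTRY from Xen's public elfnote.h, and the entry address matches the `PHYS32_ENTRY = 0x100020` line in the xl log above:)

```python
import struct

def build_elf_note(name: bytes, note_type: int, desc: bytes) -> bytes:
    """Pack a generic ELF note: namesz/descsz/type header, then the
    NUL-terminated name and the descriptor, each 4-byte aligned."""
    def pad4(b: bytes) -> bytes:
        return b + b"\0" * (-len(b) % 4)
    name_z = name + b"\0"
    header = struct.pack("<III", len(name_z), len(desc), note_type)
    return header + pad4(name_z) + pad4(desc)

# XEN_ELFNOTE_PHYS32_ENTRY (type 18), owner "Xen", carrying the
# 32-bit entry point reported by the domain builder.
note = build_elf_note(b"Xen", 18, struct.pack("<I", 0x100020))
# 12-byte header + "Xen\0" (4 bytes) + 4-byte descriptor = 20 bytes
```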

-mato


From xen-devel-bounces@lists.xenproject.org Wed May 27 09:44:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 09:44:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdsbn-0001pH-DE; Wed, 27 May 2020 09:44:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdsbm-0001pC-PG
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 09:44:22 +0000
X-Inumbo-ID: a406a2f0-9ffe-11ea-a70d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a406a2f0-9ffe-11ea-a70d-12813bfff9fa;
 Wed, 27 May 2020 09:44:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iCaLZIAG80TdIyQv/OLd7QCdqfhaM58kNRJ9trjkeTc=; b=KsGx8RuSD7L6Xm3aFyXAtoBnT
 9H2rWE30JcT9ohStkU+/TCCGx7O1lY74iZZcJzhMJzmMLbx8EjU7jCLBtcT2DOrrUzl5YbTbGkgIZ
 nA3jcSzeCobJ5/2vvVPqW0oewQAwNu6YCYOJAZ+/9abR8F1ku6/Xt8EmvF3ZB/br7Tg+g=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdsbk-0007Vt-Q2; Wed, 27 May 2020 09:44:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdsbk-0000jb-FI; Wed, 27 May 2020 09:44:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdsbk-0005dg-El; Wed, 27 May 2020 09:44:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150397-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150397: all pass - PUSHED
X-Osstest-Versions-This: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
X-Osstest-Versions-That: xen=5e015d48a5ee68ba03addad2698364d8f015afec
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 09:44:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150397 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150397/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
baseline version:
 xen                  5e015d48a5ee68ba03addad2698364d8f015afec

Last test of basis   150349  2020-05-24 09:19:09 Z    3 days
Testing same since   150397  2020-05-27 09:19:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Corey Minyard <cminyard@mvista.com>
  George Dunlap <george.dunlap@citrix.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   5e015d48a5..d89e5e65f3  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed May 27 10:18:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 10:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdt8H-0004Rq-VZ; Wed, 27 May 2020 10:17:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qkut=7J=antioche.eu.org=bouyer@srs-us1.protection.inumbo.net>)
 id 1jdt8G-0004Rl-R1
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 10:17:56 +0000
X-Inumbo-ID: 5400db18-a003-11ea-8993-bc764e2007e4
Received: from chassiron.antioche.eu.org (unknown [2001:41d0:fe9d:1101::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5400db18-a003-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 10:17:55 +0000 (UTC)
Received: from rochebonne.antioche.eu.org (rochebonne
 [IPv6:2001:41d0:fe9d:1100:221:70ff:fe0c:9885])
 by chassiron.antioche.eu.org (8.15.2/8.15.2) with ESMTP id 04RAHq3K007955
 for <xen-devel@lists.xenproject.org>; Wed, 27 May 2020 12:17:54 +0200 (MEST)
Received: by rochebonne.antioche.eu.org (Postfix, from userid 1210)
 id 8C60C2828; Wed, 27 May 2020 12:17:52 +0200 (CEST)
Date: Wed, 27 May 2020 12:17:52 +0200
From: Manuel Bouyer <bouyer@antioche.eu.org>
To: xen-devel@lists.xenproject.org
Subject: patches for Xen 4.13
Message-ID: <20200527101752.GA3094@antioche.eu.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: Mutt/1.12.1 (2019-06-15)
X-Greylist: Sender IP whitelisted, not delayed by milter-greylist-4.4.3
 (chassiron.antioche.eu.org [IPv6:2001:41d0:fe9d:1101:0:0:0:1]);
 Wed, 27 May 2020 12:17:54 +0200 (MEST)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,
I got Xen 4.13 working with NetBSD. Now I'd like to get the patches back
in the Xen sources. What is the best way to submit the patches?

-- 
Manuel Bouyer <bouyer@antioche.eu.org>
     NetBSD: 26 years of experience will always make the difference
--


From xen-devel-bounces@lists.xenproject.org Wed May 27 10:31:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 10:31:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdtKw-00060z-5p; Wed, 27 May 2020 10:31:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdtKu-00060u-K4
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 10:31:00 +0000
X-Inumbo-ID: 27a86c46-a005-11ea-a714-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 27a86c46-a005-11ea-a714-12813bfff9fa;
 Wed, 27 May 2020 10:30:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 26802AD88;
 Wed, 27 May 2020 10:31:01 +0000 (UTC)
Subject: Re: patches for Xen 4.13
To: Manuel Bouyer <bouyer@antioche.eu.org>
References: <20200527101752.GA3094@antioche.eu.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a9b3dbcc-71a9-f07c-f7ca-00e423c18138@suse.com>
Date: Wed, 27 May 2020 12:30:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527101752.GA3094@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 12:17, Manuel Bouyer wrote:
> I got Xen 4.13 working with NetBSD. Now I'd like to get the patches back
> into the Xen sources. What is the best way to submit the patches?

Patches generally need to be submitted against the staging tree;
backporting to older trees occurs on an as-needed / as-requested
basis. Note however that the staging tree is about to freeze for
4.14, so only clear bug fixes are likely to get release manager
approval. Anything else would be deferred until the tree re-opens.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 27 10:35:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 10:35:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdtPN-0006Gk-O8; Wed, 27 May 2020 10:35:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0l=7J=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdtPM-0006Gf-7O
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 10:35:36 +0000
X-Inumbo-ID: cc04dc8e-a005-11ea-a714-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cc04dc8e-a005-11ea-a714-12813bfff9fa;
 Wed, 27 May 2020 10:35:35 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:57910
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdtPK-000EiZ-JN (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 27 May 2020 11:35:34 +0100
Subject: Re: patches for Xen 4.13
To: Manuel Bouyer <bouyer@antioche.eu.org>, xen-devel@lists.xenproject.org
References: <20200527101752.GA3094@antioche.eu.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c8019c14-6ec0-7037-5202-5b2c1db76f89@citrix.com>
Date: Wed, 27 May 2020 11:35:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527101752.GA3094@antioche.eu.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/05/2020 11:17, Manuel Bouyer wrote:
> Hello,
> I got Xen 4.13 working with NetBSD. Now I'd like to get the patches back
> into the Xen sources. What is the best way to submit the patches?

Ideally, git-send-email to this list, from a branch based on
xen.git#staging.

https://wiki.xenproject.org/wiki/Submitting_Xen_Project_Patches
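A minimal sketch of that flow, demonstrated in a throwaway local
repository (the branch name, commit subject, and output directory below
are illustrative assumptions, not details from this thread; a real
submission would branch from xen.git#staging and finish with
git-send-email to the list):

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email you@example.org
git config user.name "You"

# Stand-in for the upstream staging branch.
echo base > file && git add file && git commit -qm "base"
git branch -m staging

# Work branch carrying the patches, based on staging.
git checkout -qb netbsd-support staging
echo fix >> file && git commit -qam "build: fix NetBSD support"

# One .patch file per commit not yet in staging.
git format-patch -o outgoing/ staging
ls outgoing/

# The actual submission step (not run here) would be:
#   git send-email --to=xen-devel@lists.xenproject.org outgoing/*.patch
```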

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 27 10:42:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 10:42:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdtWL-00077f-Gs; Wed, 27 May 2020 10:42:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdtWL-00077a-0l
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 10:42:49 +0000
X-Inumbo-ID: cb4ca604-a006-11ea-a717-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb4ca604-a006-11ea-a717-12813bfff9fa;
 Wed, 27 May 2020 10:42:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=t/P2ceX11wSG3BWn37Ezvgb0T0f7m3FmQgxu24Dpo2U=; b=Xj9UZI6KbJp2l6i8+pFMqHp/z
 G0n5zPEgjsjPilEec2PQs1kRScQT+XET1EmEhZmjXRXJtE4HyX4nvZkuiAYU6X0CZ41PKUYShUlcR
 0ZkN6E96fuz7ucnGYkpJaG0Vvsbl6oYbs+Hk91rZQvgoLNt5PhTAfQkIcI95GZyxuJrAs=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdtWE-0000MT-LR; Wed, 27 May 2020 10:42:42 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdtWE-0003WP-8I; Wed, 27 May 2020 10:42:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdtWE-0001jH-7d; Wed, 27 May 2020 10:42:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150392-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 150392: all pass - PUSHED
X-Osstest-Versions-This: ovmf=568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3
X-Osstest-Versions-That: ovmf=1c877c716038a862e876cac8f0929bab4a96e849
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 10:42:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150392 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150392/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3
baseline version:
 ovmf                 1c877c716038a862e876cac8f0929bab4a96e849

Last test of basis   150318  2020-05-22 05:16:35 Z    5 days
Testing same since   150392  2020-05-27 02:39:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael Kubacki <michael.kubacki@microsoft.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1c877c7160..568eee7cf3  568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 27 10:43:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 10:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdtWu-0007Aq-Qo; Wed, 27 May 2020 10:43:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eMJP=7J=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jdtWt-0007Ai-Vt
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 10:43:24 +0000
X-Inumbo-ID: e22b481c-a006-11ea-a717-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e22b481c-a006-11ea-a717-12813bfff9fa;
 Wed, 27 May 2020 10:43:21 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: tlUAiLy9f0+D4MUI4Y6F83hCylkZvlGEvyD0/lHW1DkMPMptHTE09oxOySCdo0I7J4I5i2tI1j
 Ogb+8HGZlLfbGhTo0HAeKbbmFvkWKd7eHLx1qXSRovYu7B7Q+A6yUF4Pz0uWPhAFmE7Mfn+8b/
 p1wUe8oNHvSg5OP+KHkQzjSMJWwspk351ZfZqS+s5XoHlaW7YywFIkYn+VdWJsIqLAZUQDAOuZ
 gxidGy8iwlLvdPsFyNb/kKzxePHk/S//GDA0zK+r0Sw6T7IG4frW8+4HyMdnMgGSvpsMCLA+u0
 bIs=
X-SBRS: 2.7
X-MesageID: 18558352
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,441,1583211600"; d="scan'208";a="18558352"
Date: Wed, 27 May 2020 11:43:16 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: George Dunlap <george.dunlap@citrix.com>
Subject: Re: [PATCH v2 3/5] automation/archlinux: Add 32-bit glibc headers
Message-ID: <20200527104316.GH2105@perard.uk.xensource.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
 <20200526221612.900922-4-george.dunlap@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20200526221612.900922-4-george.dunlap@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, Wei
 Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 11:16:10PM +0100, George Dunlap wrote:
> This fixes the following build error in hvmloader:
> 
> usr/include/gnu/stubs.h:7:11: fatal error: gnu/stubs-32.h: No such file or directory
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
>  automation/build/archlinux/current.dockerfile | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/automation/build/archlinux/current.dockerfile b/automation/build/archlinux/current.dockerfile
> index 9af5d66afc..5095de65b8 100644
> --- a/automation/build/archlinux/current.dockerfile
> +++ b/automation/build/archlinux/current.dockerfile
> @@ -19,6 +19,7 @@ RUN pacman -S --refresh --sysupgrade --noconfirm --noprogressbar --needed \
>          iasl \
>          inetutils \
>          iproute \
> +        lib32-glibc \
>          libaio \
>          libcacard \
>          libgl \

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 27 11:07:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 11:07:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdttl-0000kI-Qf; Wed, 27 May 2020 11:07:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdttj-0000kD-UH
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 11:06:59 +0000
X-Inumbo-ID: 2c0a21da-a00a-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c0a21da-a00a-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 11:06:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=w5YTqwbuTeUyDqN4W0Oms4W3xENRDRpj2kuxnBJHrkg=; b=ccpBh2oPOg8YYvQrkOgSx1BlF
 AS+AKj73Vk+RzFfiuR9eaMFRE4KHZfDmzbn/WAELWdK3OvodLXiAiodmUUyuJ4nOEw097ZPjOPjKT
 bp/vgmsFcW+qWcaGwjm7ZiwqWcVFISOw5mcjWDr/BYqAIExoBsEfEs3OFUL8FINTsGQSA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdttd-0000so-Aq; Wed, 27 May 2020 11:06:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdttc-0004Ja-Sa; Wed, 27 May 2020 11:06:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdttc-0004NG-Rz; Wed, 27 May 2020 11:06:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150396-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150396: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=b66e28226dd9df8a28101438f44c0a26d63b76fa
X-Osstest-Versions-That: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 11:06:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150396 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150396/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b66e28226dd9df8a28101438f44c0a26d63b76fa
baseline version:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707

Last test of basis   150388  2020-05-26 15:01:19 Z    0 days
Testing same since   150396  2020-05-27 08:09:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Tamas K Lengyel <tamas@tklengyel.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d89e5e65f3..b66e28226d  b66e28226dd9df8a28101438f44c0a26d63b76fa -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 27 11:15:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 11:15:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdu1a-0001aZ-LR; Wed, 27 May 2020 11:15:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdu1Z-0001aU-9Q
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 11:15:05 +0000
X-Inumbo-ID: 4fd65fa6-a00b-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fd65fa6-a00b-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 11:15:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9zR4cochbnPegsVII5kMzLtFgFcBgJVbeGTQWw0wFD4=; b=grQ+W/PmzAtvToNielbdWWSYH
 E2eAl7Pksqg/NbB0GM6gNk1q4JrSRIg9HE1x65s0qDn5PF9JG7xvIBT0Za5YZK8ifp1016Neq+6AE
 1COyp6yoc2nqrxvXCgdhmT9lJ5ERSqdVcl+p0qgkpmE3PimLOsKp5KDDKmSHXPHBIpsKI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdu1W-000134-VH; Wed, 27 May 2020 11:15:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdu1W-0004VF-HH; Wed, 27 May 2020 11:15:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdu1W-0002V7-9T; Wed, 27 May 2020 11:15:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150390-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150390: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=444fc5cde64330661bf59944c43844e7d4c2ccd8
X-Osstest-Versions-That: linux=9cb1fd0efd195590b828b9b865421ad345a4a145
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 11:15:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150390 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150390/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150362
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150362
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150362
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150362
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150362
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150362
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150362
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150362
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150362
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                444fc5cde64330661bf59944c43844e7d4c2ccd8
baseline version:
 linux                9cb1fd0efd195590b828b9b865421ad345a4a145

Last test of basis   150362  2020-05-25 05:40:38 Z    2 days
Testing same since   150390  2020-05-27 00:39:33 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Fredrik Strupe <fredrik@strupe.net>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Linus Torvalds <torvalds@linux-foundation.org>
  Russell King <rmk+kernel@armlinux.org.uk>
  Łukasz Stelmach <l.stelmach@samsung.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   9cb1fd0efd19..444fc5cde643  444fc5cde64330661bf59944c43844e7d4c2ccd8 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Wed May 27 11:29:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 11:29:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jduFZ-0002Zw-5W; Wed, 27 May 2020 11:29:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t1uD=7J=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1jduFX-0002Zr-P1
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 11:29:31 +0000
X-Inumbo-ID: 54d1e262-a00d-11ea-9dbe-bc764e2007e4
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54d1e262-a00d-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 11:29:31 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id x13so8979723wrv.4
 for <xen-devel@lists.xenproject.org>; Wed, 27 May 2020 04:29:31 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:date:from:to:cc:subject:message-id:references
 :mime-version:content-disposition:in-reply-to:user-agent;
 bh=7ukDdBTID2nqNbO4Nuu1d5hVjiNbzXmjnB0+7rrB2SI=;
 b=YOZ4PHMDS2V4eKlJz4rhHnIhkwbjNY7ez2xrBrVWeyoqTodkcA+LJasGuLpboA0RcQ
 hZsQ7HOGsDLQNJzcX8VUPXHLtfvOypigR5AqcEAtfxSDav44YlVbVN+e97AaA5nn+nfw
 rJyfIRJDyf5RJay6t0T4DP0BMAjiN8OMmTvNsOQh2Mg9pIBV+gUTTGlc/kjL7qNoTqvR
 cM0bSKEl0cq7DvAfnt/ldOteU0lOBd/mcq896FbRUBHKMhOz3h/CpaYwovmijN3tXtFd
 DIchCKWjRJYlP6t8YgZaoXr774VDmdALtCaQYWHPDrB2tWrxOKoAEgIuGmkgqA4Qm+JY
 BvBw==
X-Gm-Message-State: AOAM530ks+QDBM5a7psT0SOZnoMl6RFgIOIWBwu6sxEdVe/lnxM1TSXl
 Pz3AzOt80Cq0yb/qDjCdfjE=
X-Google-Smtp-Source: ABdhPJxjEiifR2Pzkt5OwOsAom9xHyoqgDSd3Z4ktCRMBjxCTJ5lwU4iuCjQ2+iyaTL0G9iDxSjgtw==
X-Received: by 2002:adf:9d8f:: with SMTP id p15mr7554651wre.421.1590578970189; 
 Wed, 27 May 2020 04:29:30 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id c25sm2536025wmb.44.2020.05.27.04.29.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 27 May 2020 04:29:29 -0700 (PDT)
Date: Wed, 27 May 2020 11:29:28 +0000
From: Wei Liu <wl@xen.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH v2 3/5] automation/archlinux: Add 32-bit glibc headers
Message-ID: <20200527112928.72amqcojenrz2a46@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
 <20200526221612.900922-4-george.dunlap@citrix.com>
 <20200527104316.GH2105@perard.uk.xensource.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20200527104316.GH2105@perard.uk.xensource.com>
User-Agent: NeoMutt/20180716
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>,
 George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 27, 2020 at 11:43:16AM +0100, Anthony PERARD wrote:
> On Tue, May 26, 2020 at 11:16:10PM +0100, George Dunlap wrote:
> > This fixes the following build error in hvmloader:
> > 
> > usr/include/gnu/stubs.h:7:11: fatal error: gnu/stubs-32.h: No such file or directory
> > 
> > Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> > ---
> >  automation/build/archlinux/current.dockerfile | 1 +
> >  1 file changed, 1 insertion(+)
> > 
> > diff --git a/automation/build/archlinux/current.dockerfile b/automation/build/archlinux/current.dockerfile
> > index 9af5d66afc..5095de65b8 100644
> > --- a/automation/build/archlinux/current.dockerfile
> > +++ b/automation/build/archlinux/current.dockerfile
> > @@ -19,6 +19,7 @@ RUN pacman -S --refresh --sysupgrade --noconfirm --noprogressbar --needed \
> >          iasl \
> >          inetutils \
> >          iproute \
> > +        lib32-glibc \
> >          libaio \
> >          libcacard \
> >          libgl \
> 
> Acked-by: Anthony PERARD <anthony.perard@citrix.com>

All automation patches:

Acked-by: Wei Liu <wl@xen.org>

Anthony, can you generate and push the new images? Thanks.

Wei.

> 
> -- 
> Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Wed May 27 12:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 12:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvXi-0001Sh-J1; Wed, 27 May 2020 12:52:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVnz=7J=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jdvR0-0000fO-UW
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 12:45:26 +0000
X-Inumbo-ID: ee852c98-a017-11ea-a740-12813bfff9fa
Received: from forwardcorp1p.mail.yandex.net (unknown [77.88.29.217])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee852c98-a017-11ea-a740-12813bfff9fa;
 Wed, 27 May 2020 12:45:24 +0000 (UTC)
Received: from mxbackcorp1j.mail.yandex.net (mxbackcorp1j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::162])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 693EE2E15B2;
 Wed, 27 May 2020 15:45:22 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 NncoSmAIxp-jIF4FJjI; Wed, 27 May 2020 15:45:22 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590583522; bh=oIN03DAZ4lQYUkl3lhide6Xs43zEdrUrV/56BgwHQX0=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=NI0AJsTTemllPECSfi+BL/N/9mtkS20hZ1UfRKA0K7Uhqy5GZMQwxjcKYKDr7hmtr
 //+nYgUXvj6RJUl+sDbsrRYAVCdOTsWshvGUgRhw/6Gd/7gE0esW0D/P+hHpqQqIKu
 j16B7hxVho1S7RMx409aZaf2tvAyPB6vHESIDjaQ=
Authentication-Results: mxbackcorp1j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b080:8308::1:12])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 bzO5qPtczO-jIWOql5q; Wed, 27 May 2020 15:45:18 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v6 1/5] virtio-blk: store opt_io_size with correct size
Date: Wed, 27 May 2020 15:45:07 +0300
Message-Id: <20200527124511.986099-2-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200527124511.986099-1-rvkagan@yandex-team.ru>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Wed, 27 May 2020 12:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, John Snow <jsnow@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The opt_io_size field in virtio_blk_config is 32 bits wide.  However,
it's written with virtio_stw_p, a 16-bit accessor; this can truncate the
value and, on big-endian systems with legacy virtio, leave the guest
reading completely bogus values.

Use the appropriate accessor to store it.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
v4 -> v5:
- expand the log [Michael]

 hw/block/virtio-blk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f5f6fc925e..413083e62f 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -918,7 +918,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
     virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
     virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
     virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
-    virtio_stw_p(vdev, &blkcfg.opt_io_size, conf->opt_io_size / blk_size);
+    virtio_stl_p(vdev, &blkcfg.opt_io_size, conf->opt_io_size / blk_size);
     blkcfg.geometry.heads = conf->heads;
     /*
      * We must ensure that the block device capacity is a multiple of
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed May 27 12:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 12:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvXi-0001Sb-An; Wed, 27 May 2020 12:52:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVnz=7J=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jdvQw-0000f6-1I
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 12:45:22 +0000
X-Inumbo-ID: ec23362a-a017-11ea-a740-12813bfff9fa
Received: from forwardcorp1o.mail.yandex.net (unknown [95.108.205.193])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ec23362a-a017-11ea-a740-12813bfff9fa;
 Wed, 27 May 2020 12:45:20 +0000 (UTC)
Received: from mxbackcorp1o.mail.yandex.net (mxbackcorp1o.mail.yandex.net
 [IPv6:2a02:6b8:0:1a2d::301])
 by forwardcorp1o.mail.yandex.net (Yandex) with ESMTP id 2C5E72E1581;
 Wed, 27 May 2020 15:45:18 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1o.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 yGuxnAsuRm-jBxmk9RU; Wed, 27 May 2020 15:45:18 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590583518; bh=42hnJ6RHsswvNBcGuhOx1Vid5BV/7/IOuRuXw9PtENo=;
 h=Message-Id:Date:Subject:To:From:Cc;
 b=GT+rFY3/A/0GhjPSqnGNvn8/0KVp00ezEslGbKJi1h1CVFHRFaME8nksjJwjD6zVw
 raZBgRPM5mlp8y4/rx6hWXKRnNlkNTSuGd95/scuF9tESi9pzKpbstRV6qtHXXiAtn
 i85pJ9QEsSf4mIai+JIHe8xloikKTWVzzaQUhOKI=
Authentication-Results: mxbackcorp1o.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b080:8308::1:12])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 bzO5qPtczO-jBWOPpYZ; Wed, 27 May 2020 15:45:11 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v6 0/5] block: enhance handling of size-related BlockConf
 properties
Date: Wed, 27 May 2020 15:45:06 +0300
Message-Id: <20200527124511.986099-1-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable
X-Mailman-Approved-At: Wed, 27 May 2020 12:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, John Snow <jsnow@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

BlockConf includes several properties counted in bytes.

Enhance their handling in several aspects, specifically:

- accept common size suffixes (k, m)
- perform consistency checks on the values
- lift the upper limit on physical_block_size and logical_block_size

Also fix the accessor for opt_io_size in virtio-blk to make it consistent
with the size of the field.

History:
v5 -> v6:
- fix forgotten xen-block and swim
- add prop_size32 instead of going with 64bit

v4 -> v5:
- re-split the patches [Philippe]
- fix/reword error messages [Philippe, Kevin]
- do early return on failed consistency check [Philippe]
- use QEMU_IS_ALIGNED instead of open coding [Philippe]
- make all BlockConf size props support suffixes
- expand the log for virtio-blk opt_io_size [Michael]

v3 -> v4:
- add patch to fix opt_io_size width in virtio-blk
- add patch to perform consistency checks [Kevin]
- check min_io_size against truncation [Kevin]

v2 -> v3:
- mention qcow2 cluster size limit in the log and comment [Eric]

v1 -> v2:
- cap the property at 2 MiB [Eric]
- accept size suffixes

Roman Kagan (5):
  virtio-blk: store opt_io_size with correct size
  block: consolidate blocksize properties consistency checks
  qdev-properties: blocksize: use same limits in code and description
  block: make size-related BlockConf properties accept size suffixes
  block: lift blocksize property limit to 2 MiB

 include/hw/block/block.h     |  14 +-
 include/hw/qdev-properties.h |   5 +-
 hw/block/block.c             |  41 ++-
 hw/block/fdc.c               |   5 +-
 hw/block/nvme.c              |   5 +-
 hw/block/swim.c              |   5 +-
 hw/block/virtio-blk.c        |   9 +-
 hw/block/xen-block.c         |   6 +-
 hw/core/qdev-properties.c    |  85 +++++-
 hw/ide/qdev.c                |   5 +-
 hw/scsi/scsi-disk.c          |  12 +-
 hw/usb/dev-storage.c         |   5 +-
 tests/qemu-iotests/172.out   | 532 +++++++++++++++++------------------
 13 files changed, 420 insertions(+), 309 deletions(-)

-- 
2.26.2


From xen-devel-bounces@lists.xenproject.org Wed May 27 12:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 12:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvXj-0001T5-Jf; Wed, 27 May 2020 12:52:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVnz=7J=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jdvRL-0000gl-Nl
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 12:45:47 +0000
X-Inumbo-ID: f91d9b40-a017-11ea-8993-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown
 [2a02:6b8:0:1472:2741:0:8b6:217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f91d9b40-a017-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 12:45:42 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 05C1E2E15A3;
 Wed, 27 May 2020 15:45:41 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 nqQj86Bk57-jbfq4Pkl; Wed, 27 May 2020 15:45:40 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590583540; bh=5vTc+cYwJ0Yp/qYZkQZ2K2YBz+RbUpeML72x3NJFPMc=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=M8ZuAGhMsx+thLLDbKB6QLd0xI3uJHv/67y4TKRY/bZ+ez9EboHcOgdO7FSZX/7gx
 xj/pb4AGN2vS4pgcLBcJJgBd6ZYem3Hkwx03tMjUZRDlse10PYNpKpubdCsW5rUtkS
 Y0DVCZz3xDn0rgPyU2RF91ofdnmEY3eHdmqMoygs=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b080:8308::1:12])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 bzO5qPtczO-jbWOOqOL; Wed, 27 May 2020 15:45:37 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v6 5/5] block: lift blocksize property limit to 2 MiB
Date: Wed, 27 May 2020 15:45:11 +0300
Message-Id: <20200527124511.986099-6-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200527124511.986099-1-rvkagan@yandex-team.ru>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Wed, 27 May 2020 12:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, John Snow <jsnow@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Logical and physical block sizes in QEMU are limited to 32 KiB.

This appears unnecessarily tight, and we've found bigger block sizes
handy at times.

Lift the limit to 2 MiB, which appears to be good enough for everybody
and matches the qcow2 cluster size limit.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v4 -> v5:
- split out into separate patch [Philippe]
- as this patch has changed significantly, lose Eric's r-b

 hw/core/qdev-properties.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index e7ccd4d276..ecd84262a9 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -784,9 +784,12 @@ const PropertyInfo qdev_prop_size32 = {
 /* lower limit is sector size */
 #define MIN_BLOCK_SIZE          512
 #define MIN_BLOCK_SIZE_STR      stringify(MIN_BLOCK_SIZE)
-/* upper limit is the max power of 2 that fits in uint16_t */
-#define MAX_BLOCK_SIZE          32768
-#define MAX_BLOCK_SIZE_STR      stringify(MAX_BLOCK_SIZE)
+/*
+ * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
+ * matches qcow2 cluster size limit
+ */
+#define MAX_BLOCK_SIZE          (2 * MiB)
+#define MAX_BLOCK_SIZE_STR      "2 MiB"
 
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed May 27 12:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 12:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvXj-0001St-3t; Wed, 27 May 2020 12:52:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVnz=7J=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jdvRB-0000g1-1k
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 12:45:37 +0000
X-Inumbo-ID: f56b1a40-a017-11ea-a740-12813bfff9fa
Received: from forwardcorp1j.mail.yandex.net (unknown [5.45.199.163])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f56b1a40-a017-11ea-a740-12813bfff9fa;
 Wed, 27 May 2020 12:45:35 +0000 (UTC)
Received: from mxbackcorp1o.mail.yandex.net (mxbackcorp1o.mail.yandex.net
 [IPv6:2a02:6b8:0:1a2d::301])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id BD9702E156F;
 Wed, 27 May 2020 15:45:33 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1o.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 RSC1xPiAcU-jSxK4aYo; Wed, 27 May 2020 15:45:33 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590583533; bh=A0YhPbFgXOi7Ibarq6sWTVZlCf7rqVdJcWDX7rNnVYU=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=WYnlLphKioMSFrz8oFbRygNduQl/bL8tD6c/NiJkHcDsYzLrYS1UyEnm0fIViPBRC
 /i9nB2GRviU/5H+9Xcv960vEViNd1mWl+0a99WkYzXHvzkDmt9FWAhM9XHz7hkxcjF
 P31LzMOyFKBdeJy8/Rew0UG3//wx5+hR5FCg/rl4=
Authentication-Results: mxbackcorp1o.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b080:8308::1:12])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 bzO5qPtczO-jSWO478g; Wed, 27 May 2020 15:45:28 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v6 3/5] qdev-properties: blocksize: use same limits in code
 and description
Date: Wed, 27 May 2020 15:45:09 +0300
Message-Id: <20200527124511.986099-4-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200527124511.986099-1-rvkagan@yandex-team.ru>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Wed, 27 May 2020 12:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, John Snow <jsnow@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Make it easier (and more visible) to keep the limits on the blocksize
properties in sync with their description, by using the same macros in
both the code and the description string.
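
The idea can be sketched as follows: the description string is assembled at
compile time from the same macros the range checks use, so the two can no
longer drift apart. stringify() is reproduced here for illustration (QEMU
provides its own equivalent).

```c
#include <assert.h>
#include <string.h>

/* Two-level expansion so macro arguments are expanded before stringizing,
 * equivalent to QEMU's stringify(). */
#define xstringify(x) #x
#define stringify(x)  xstringify(x)

#define MIN_BLOCK_SIZE      512
#define MIN_BLOCK_SIZE_STR  stringify(MIN_BLOCK_SIZE)
#define MAX_BLOCK_SIZE      32768
#define MAX_BLOCK_SIZE_STR  stringify(MAX_BLOCK_SIZE)

/* Built by string-literal concatenation from the very macros the
 * range check uses. */
static const char description[] =
    "A power of two between " MIN_BLOCK_SIZE_STR
    " and " MAX_BLOCK_SIZE_STR;
```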

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v4 -> v5:
- split out into separate patch [Philippe]

 hw/core/qdev-properties.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index cc924815da..249dc69bd8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -729,6 +729,13 @@ const PropertyInfo qdev_prop_pci_devfn = {
 
 /* --- blocksize --- */
 
+/* lower limit is sector size */
+#define MIN_BLOCK_SIZE          512
+#define MIN_BLOCK_SIZE_STR      stringify(MIN_BLOCK_SIZE)
+/* upper limit is the max power of 2 that fits in uint16_t */
+#define MAX_BLOCK_SIZE          32768
+#define MAX_BLOCK_SIZE_STR      stringify(MAX_BLOCK_SIZE)
+
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
@@ -736,8 +743,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     Property *prop = opaque;
     uint16_t value, *ptr = qdev_get_prop_ptr(dev, prop);
     Error *local_err = NULL;
-    const int64_t min = 512;
-    const int64_t max = 32768;
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -750,9 +755,12 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         return;
     }
     /* value of 0 means "unset" */
-    if (value && (value < min || value > max)) {
-        error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
-                   dev->id ? : "", name, (int64_t)value, min, max);
+    if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
+        error_setg(errp,
+                   "Property %s.%s doesn't take value %" PRIu16
+                   " (minimum: " MIN_BLOCK_SIZE_STR
+                   ", maximum: " MAX_BLOCK_SIZE_STR ")",
+                   dev->id ? : "", name, value);
         return;
     }
 
@@ -769,7 +777,8 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 
 const PropertyInfo qdev_prop_blocksize = {
     .name  = "uint16",
-    .description = "A power of two between 512 and 32768",
+    .description = "A power of two between " MIN_BLOCK_SIZE_STR
+                   " and " MAX_BLOCK_SIZE_STR,
     .get   = get_uint16,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed May 27 12:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 12:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvXi-0001Sn-Ru; Wed, 27 May 2020 12:52:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVnz=7J=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jdvR5-0000fi-Ug
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 12:45:31 +0000
X-Inumbo-ID: f1a58490-a017-11ea-a740-12813bfff9fa
Received: from forwardcorp1o.mail.yandex.net (unknown [95.108.205.193])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1a58490-a017-11ea-a740-12813bfff9fa;
 Wed, 27 May 2020 12:45:29 +0000 (UTC)
Received: from mxbackcorp1g.mail.yandex.net (mxbackcorp1g.mail.yandex.net
 [IPv6:2a02:6b8:0:1402::301])
 by forwardcorp1o.mail.yandex.net (Yandex) with ESMTP id 760832E14EF;
 Wed, 27 May 2020 15:45:28 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1g.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 EY5t1VtL3U-jMICaEGx; Wed, 27 May 2020 15:45:28 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590583528; bh=7R0FdQjVPrwj0NU+a9BW8mDIxABy3m6MTEZ4IgcH4AU=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=XnEB0m+xvqUPsgGEBKNnCHr3Pj4SBPUXBm6pEXDiyjEO+oDAb+Q5Wyj/VuYKUTIxm
 rkNBfvVM0iGMkCmzIEghsxLBVLv4pC0zURwZEQkKUa2InvSPmPgCzc2wSrm/yvJ2q9
 Vvv2sUJPBNPMINIPTzIK7+eUl9i3m9u7p1Rl2zr0=
Authentication-Results: mxbackcorp1g.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b080:8308::1:12])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 bzO5qPtczO-jMWOe2sb; Wed, 27 May 2020 15:45:22 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v6 2/5] block: consolidate blocksize properties consistency
 checks
Date: Wed, 27 May 2020 15:45:08 +0300
Message-Id: <20200527124511.986099-3-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200527124511.986099-1-rvkagan@yandex-team.ru>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Wed, 27 May 2020 12:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, John Snow <jsnow@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Several block device properties related to blocksize configuration must
satisfy certain relationships with respect to each other: the physical
block size must be no smaller than the logical block size, and
min_io_size, opt_io_size, and discard_granularity must each be a
multiple of the logical block size.

To ensure these requirements are met, add the corresponding consistency
checks to blkconf_blocksizes, adjusting its signature to communicate a
possible error to the caller.  Also remove the now-redundant consistency
checks from the individual devices.
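
The consolidated checks can be sketched stand-alone as below. This is a
simplified, hypothetical version: the real blkconf_blocksizes() operates on a
BlockConf and reports each failure individually through Error **errp, while
this sketch just returns a bool. QEMU_IS_ALIGNED is reproduced as QEMU
defines it.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Equivalent to QEMU's QEMU_IS_ALIGNED() macro. */
#define QEMU_IS_ALIGNED(n, m) (((n) % (m)) == 0)

/* Hypothetical stand-alone version of the consistency checks. */
static bool blockconf_consistent(uint32_t logical, uint32_t physical,
                                 uint32_t min_io, uint32_t opt_io,
                                 uint32_t discard)
{
    /* physical block must be no smaller than logical block */
    if (logical > physical) {
        return false;
    }
    /* min_io_size and opt_io_size must be multiples of the logical block */
    if (!QEMU_IS_ALIGNED(min_io, logical) ||
        !QEMU_IS_ALIGNED(opt_io, logical)) {
        return false;
    }
    /* discard_granularity of -1 means "unset" and is skipped */
    if (discard != (uint32_t)-1 && !QEMU_IS_ALIGNED(discard, logical)) {
        return false;
    }
    return true;
}
```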

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v5 -> v6:
- fix forgotten xen-block and swim

v4 -> v5:
- fix/reword error messages [Philippe, Kevin]
- do early return on failed consistency check [Philippe]
- use QEMU_IS_ALIGNED instead of open coding [Philippe]

 include/hw/block/block.h   |  2 +-
 hw/block/block.c           | 30 +++++++++++++++++++++++++++++-
 hw/block/fdc.c             |  5 ++++-
 hw/block/nvme.c            |  5 ++++-
 hw/block/swim.c            |  5 ++++-
 hw/block/virtio-blk.c      |  7 +------
 hw/block/xen-block.c       |  6 +-----
 hw/ide/qdev.c              |  5 ++++-
 hw/scsi/scsi-disk.c        | 12 +++++-------
 hw/usb/dev-storage.c       |  5 ++++-
 tests/qemu-iotests/172.out |  2 +-
 11 files changed, 58 insertions(+), 26 deletions(-)

diff --git a/include/hw/block/block.h b/include/hw/block/block.h
index d7246f3862..784953a237 100644
--- a/include/hw/block/block.h
+++ b/include/hw/block/block.h
@@ -87,7 +87,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
 bool blkconf_geometry(BlockConf *conf, int *trans,
                       unsigned cyls_max, unsigned heads_max, unsigned secs_max,
                       Error **errp);
-void blkconf_blocksizes(BlockConf *conf);
+bool blkconf_blocksizes(BlockConf *conf, Error **errp);
 bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
                                    bool resizable, Error **errp);
 
diff --git a/hw/block/block.c b/hw/block/block.c
index bf56c7612b..b22207c921 100644
--- a/hw/block/block.c
+++ b/hw/block/block.c
@@ -61,7 +61,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
     return true;
 }
 
-void blkconf_blocksizes(BlockConf *conf)
+bool blkconf_blocksizes(BlockConf *conf, Error **errp)
 {
     BlockBackend *blk = conf->blk;
     BlockSizes blocksizes;
@@ -83,6 +83,34 @@ void blkconf_blocksizes(BlockConf *conf)
             conf->logical_block_size = BDRV_SECTOR_SIZE;
         }
     }
+
+    if (conf->logical_block_size > conf->physical_block_size) {
+        error_setg(errp,
+                   "logical_block_size > physical_block_size not supported");
+        return false;
+    }
+
+    if (!QEMU_IS_ALIGNED(conf->min_io_size, conf->logical_block_size)) {
+        error_setg(errp,
+                   "min_io_size must be a multiple of logical_block_size");
+        return false;
+    }
+
+    if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
+        error_setg(errp,
+                   "opt_io_size must be a multiple of logical_block_size");
+        return false;
+    }
+
+    if (conf->discard_granularity != -1 &&
+        !QEMU_IS_ALIGNED(conf->discard_granularity,
+                         conf->logical_block_size)) {
+        error_setg(errp, "discard_granularity must be "
+                   "a multiple of logical_block_size");
+        return false;
+    }
+
+    return true;
 }
 
 bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index c5fb9d6ece..8eda572ef4 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -554,7 +554,10 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
         read_only = !blk_bs(dev->conf.blk) || blk_is_read_only(dev->conf.blk);
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512 ||
         dev->conf.physical_block_size != 512)
     {
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 2f3100e56c..672650e162 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1390,7 +1390,10 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
         host_memory_backend_set_mapped(n->pmrdev, true);
     }
 
-    blkconf_blocksizes(&n->conf);
+    if (!blkconf_blocksizes(&n->conf, errp)) {
+        return;
+    }
+
     if (!blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk),
                                        false, errp)) {
         return;
diff --git a/hw/block/swim.c b/hw/block/swim.c
index 8f124782f4..74f56e8f46 100644
--- a/hw/block/swim.c
+++ b/hw/block/swim.c
@@ -189,7 +189,10 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
         assert(ret == 0);
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512 ||
         dev->conf.physical_block_size != 512)
     {
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 413083e62f..4ffdb130be 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1162,12 +1162,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&conf->conf);
-
-    if (conf->conf.logical_block_size >
-        conf->conf.physical_block_size) {
-        error_setg(errp,
-                   "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(&conf->conf, errp)) {
         return;
     }
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 570489d6d9..e17fec50e1 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -239,11 +239,7 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(conf);
-
-    if (conf->logical_block_size > conf->physical_block_size) {
-        error_setg(
-            errp, "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(conf, errp)) {
         return;
     }
 
diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
index 06b11583f5..b4821b2403 100644
--- a/hw/ide/qdev.c
+++ b/hw/ide/qdev.c
@@ -187,7 +187,10 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512) {
         error_setg(errp, "logical_block_size must be 512 for IDE");
         return;
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 387503e11b..8ce68a9dd6 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2346,12 +2346,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&s->qdev.conf);
-
-    if (s->qdev.conf.logical_block_size >
-        s->qdev.conf.physical_block_size) {
-        error_setg(errp,
-                   "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
         return;
     }
 
@@ -2436,7 +2431,9 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
     if (s->qdev.conf.blk) {
         ctx = blk_get_aio_context(s->qdev.conf.blk);
         aio_context_acquire(ctx);
-        blkconf_blocksizes(&s->qdev.conf);
+        if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
+            goto out;
+        }
     }
     s->qdev.blocksize = s->qdev.conf.logical_block_size;
     s->qdev.type = TYPE_DISK;
@@ -2444,6 +2441,7 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
         s->product = g_strdup("QEMU HARDDISK");
     }
     scsi_realize(&s->qdev, errp);
+out:
     if (ctx) {
         aio_context_release(ctx);
     }
diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
index 4eba47538d..de461f37bd 100644
--- a/hw/usb/dev-storage.c
+++ b/hw/usb/dev-storage.c
@@ -599,7 +599,10 @@ static void usb_msd_storage_realize(USBDevice *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&s->conf);
+    if (!blkconf_blocksizes(&s->conf, errp)) {
+        return;
+    }
+
     if (!blkconf_apply_backend_options(&s->conf, blk_is_read_only(blk), true,
                                        errp)) {
         return;
diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
index 7abbe82427..59cc70aebb 100644
--- a/tests/qemu-iotests/172.out
+++ b/tests/qemu-iotests/172.out
@@ -1204,7 +1204,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
                 drive-type = "144"
 
 Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical_block_size=4096
-QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: Physical and logical block size must be 512 for floppy
+QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: logical_block_size > physical_block_size not supported
 
 Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physical_block_size=1024
 QEMU_PROG: -device floppy,drive=none0,physical_block_size=1024: Physical and logical block size must be 512 for floppy
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed May 27 12:52:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 12:52:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvXj-0001Sz-C0; Wed, 27 May 2020 12:52:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVnz=7J=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jdvRG-0000gK-T1
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 12:45:42 +0000
X-Inumbo-ID: f6efc8fc-a017-11ea-8993-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown [77.88.29.217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6efc8fc-a017-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 12:45:38 +0000 (UTC)
Received: from mxbackcorp1j.mail.yandex.net (mxbackcorp1j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::162])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 5C7772E1590;
 Wed, 27 May 2020 15:45:37 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 K4MuwlsXgC-jYFOeB5T; Wed, 27 May 2020 15:45:37 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590583537; bh=5W8C5oYNbfwIbGSjpB4jp+1QNqqk5DONxp7cMMQxQ3c=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=tgMD0FVC6Kkk5pyhPXCpSm0/Ii7pxD/QYpWNlhB59EwghvJb4UkNXKIFy7US5RYtx
 12WH1B3QeOzy/c2E2TZtf4jWDI5VLqlNIVr2BgqTPudT3NdAx7ebMMNeDqaL9wcCtP
 +Z+itHy1FTjGj2Hr/HHhplRQcWLk9vj62ThvVMSc=
Authentication-Results: mxbackcorp1j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b080:8308::1:12])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 bzO5qPtczO-jXWODlkU; Wed, 27 May 2020 15:45:34 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v6 4/5] block: make size-related BlockConf properties accept
 size suffixes
Date: Wed, 27 May 2020 15:45:10 +0300
Message-Id: <20200527124511.986099-5-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200527124511.986099-1-rvkagan@yandex-team.ru>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Mailman-Approved-At: Wed, 27 May 2020 12:52:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, John Snow <jsnow@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Several BlockConf properties represent sizes in bytes, so it makes sense
to accept size suffixes for them.

Turn them all into uint32_t and use size-suffix-capable setters/getters
for them; introduce DEFINE_PROP_SIZE32 and adjust DEFINE_PROP_BLOCKSIZE
accordingly.  (Making them uint64_t and reusing DEFINE_PROP_SIZE isn't
justified because guests expect at most 32-bit values.)

Also, since min_io_size is exposed to the guest by scsi and virtio-blk
devices as a uint16_t in units of logical blocks, introduce an
additional check in blkconf_blocksizes to prevent its silent truncation.
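
The truncation guard can be sketched stand-alone as below. This is a
hypothetical simplification: the real check sits inside blkconf_blocksizes()
and reports via Error **errp. The multiplication is widened to 64 bits here
to sidestep overflow when logical_block_size is large.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* scsi and virtio-blk report min_io_size to the guest as a uint16_t
 * count of logical blocks, so the byte value must not exceed
 * UINT16_MAX logical blocks. */
static bool min_io_size_fits(uint32_t min_io_size,
                             uint32_t logical_block_size)
{
    return (uint64_t)min_io_size <=
           (uint64_t)logical_block_size * UINT16_MAX;
}
```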

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v5 -> v6:
- add prop_size32 instead of going with 64bit

v4 -> v5:
- make all BlockConf size props support suffixes
- move qdev_prop_blocksize after qdev_prop_size, to reuse get_size
- reword error messages [Kevin]

 include/hw/block/block.h     |  12 +-
 include/hw/qdev-properties.h |   5 +-
 hw/block/block.c             |  11 +
 hw/core/qdev-properties.c    |  63 ++++-
 tests/qemu-iotests/172.out   | 530 +++++++++++++++++------------------
 5 files changed, 344 insertions(+), 277 deletions(-)

diff --git a/include/hw/block/block.h b/include/hw/block/block.h
index 784953a237..1e8b6253dd 100644
--- a/include/hw/block/block.h
+++ b/include/hw/block/block.h
@@ -18,9 +18,9 @@
 
 typedef struct BlockConf {
     BlockBackend *blk;
-    uint16_t physical_block_size;
-    uint16_t logical_block_size;
-    uint16_t min_io_size;
+    uint32_t physical_block_size;
+    uint32_t logical_block_size;
+    uint32_t min_io_size;
     uint32_t opt_io_size;
     int32_t bootindex;
     uint32_t discard_granularity;
@@ -51,9 +51,9 @@ static inline unsigned int get_physical_block_exp(BlockConf *conf)
                           _conf.logical_block_size),                    \
     DEFINE_PROP_BLOCKSIZE("physical_block_size", _state,                \
                           _conf.physical_block_size),                   \
-    DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0),    \
-    DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0),    \
-    DEFINE_PROP_UINT32("discard_granularity", _state,                   \
+    DEFINE_PROP_SIZE32("min_io_size", _state, _conf.min_io_size, 0),    \
+    DEFINE_PROP_SIZE32("opt_io_size", _state, _conf.opt_io_size, 0),    \
+    DEFINE_PROP_SIZE32("discard_granularity", _state,                   \
                        _conf.discard_granularity, -1),                  \
     DEFINE_PROP_ON_OFF_AUTO("write-cache", _state, _conf.wce,           \
                             ON_OFF_AUTO_AUTO),                          \
diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index f161604fb6..5252bb6b1a 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -29,6 +29,7 @@ extern const PropertyInfo qdev_prop_drive;
 extern const PropertyInfo qdev_prop_drive_iothread;
 extern const PropertyInfo qdev_prop_netdev;
 extern const PropertyInfo qdev_prop_pci_devfn;
+extern const PropertyInfo qdev_prop_size32;
 extern const PropertyInfo qdev_prop_blocksize;
 extern const PropertyInfo qdev_prop_pci_host_devaddr;
 extern const PropertyInfo qdev_prop_uuid;
@@ -196,8 +197,10 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
                         BlockdevOnError)
 #define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
     DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
+#define DEFINE_PROP_SIZE32(_n, _s, _f, _d)                       \
+    DEFINE_PROP_UNSIGNED(_n, _s, _f, _d, qdev_prop_size32, uint32_t)
 #define DEFINE_PROP_BLOCKSIZE(_n, _s, _f) \
-    DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint16_t)
+    DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint32_t)
 #define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
     DEFINE_PROP(_n, _s, _f, qdev_prop_pci_host_devaddr, PCIHostDeviceAddress)
 #define DEFINE_PROP_OFF_AUTO_PCIBAR(_n, _s, _f, _d) \
diff --git a/hw/block/block.c b/hw/block/block.c
index b22207c921..97c8129e60 100644
--- a/hw/block/block.c
+++ b/hw/block/block.c
@@ -96,6 +96,17 @@ bool blkconf_blocksizes(BlockConf *conf, Error **errp)
         return false;
     }
 
+    /*
+     * all devices which support min_io_size (scsi and virtio-blk) expose it to
+     * the guest as a uint16_t in units of logical blocks
+     */
+    if (conf->min_io_size > conf->logical_block_size * UINT16_MAX) {
+        error_setg(errp,
+                   "min_io_size must not exceed " stringify(UINT16_MAX)
+                   " logical blocks");
+        return false;
+    }
+
     if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
         error_setg(errp,
                    "opt_io_size must be a multiple of logical_block_size");
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 249dc69bd8..e7ccd4d276 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -14,6 +14,7 @@
 #include "qapi/visitor.h"
 #include "chardev/char.h"
 #include "qemu/uuid.h"
+#include "qemu/units.h"
 
 void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
                                   Error **errp)
@@ -727,6 +728,57 @@ const PropertyInfo qdev_prop_pci_devfn = {
     .set_default_value = set_default_value_int,
 };
 
+/* --- 32bit unsigned int 'size' type --- */
+
+static void get_size32(Object *obj, Visitor *v, const char *name, void *opaque,
+                       Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value = *ptr;
+
+    visit_type_size(v, name, &value, errp);
+}
+
+static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
+                       Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value;
+    Error *local_err = NULL;
+
+    if (dev->realized) {
+        qdev_prop_set_after_realize(dev, name, errp);
+        return;
+    }
+
+    visit_type_size(v, name, &value, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    if (value > UINT32_MAX) {
+        error_setg(errp,
+                   "Property %s.%s doesn't take value %" PRIu64
+                   " (maximum: " stringify(UINT32_MAX) ")",
+                   dev->id ? : "", name, value);
+        return;
+    }
+
+    *ptr = value;
+}
+
+const PropertyInfo qdev_prop_size32 = {
+    .name  = "size",
+    .get = get_size32,
+    .set = set_size32,
+    .set_default_value = set_default_value_uint,
+};
+
 /* --- blocksize --- */
 
 /* lower limit is sector size */
@@ -741,7 +793,8 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value;
     Error *local_err = NULL;
 
     if (dev->realized) {
@@ -749,7 +802,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         return;
     }
 
-    visit_type_uint16(v, name, &value, &local_err);
+    visit_type_size(v, name, &value, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         return;
@@ -757,7 +810,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     /* value of 0 means "unset" */
     if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
         error_setg(errp,
-                   "Property %s.%s doesn't take value %" PRIu16
+                   "Property %s.%s doesn't take value %" PRIu64
                    " (minimum: " MIN_BLOCK_SIZE_STR
                    ", maximum: " MAX_BLOCK_SIZE_STR ")",
                    dev->id ? : "", name, value);
@@ -776,10 +829,10 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 }
 
 const PropertyInfo qdev_prop_blocksize = {
-    .name  = "uint16",
+    .name  = "size",
     .description = "A power of two between " MIN_BLOCK_SIZE_STR
                    " and " MAX_BLOCK_SIZE_STR,
-    .get   = get_uint16,
+    .get   = get_size32,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
 };
diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
index 59cc70aebb..e782c5957e 100644
--- a/tests/qemu-iotests/172.out
+++ b/tests/qemu-iotests/172.out
@@ -24,11 +24,11 @@ Testing:
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -54,11 +54,11 @@ Testing: -fda TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -81,22 +81,22 @@ Testing: -fdb TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -119,22 +119,22 @@ Testing: -fda TEST_DIR/t.qcow2 -fdb TEST_DIR/t.qcow2.2
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -160,11 +160,11 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -187,22 +187,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2,index=1
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -225,22 +225,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=floppy,file=TEST_DIR/t
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -266,11 +266,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveA=none0
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -293,11 +293,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveB=none0
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -320,22 +320,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -361,11 +361,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -388,11 +388,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,unit=1
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -415,22 +415,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -456,22 +456,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -494,22 +494,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -532,11 +532,11 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -559,11 +559,11 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -589,22 +589,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -627,22 +627,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -665,22 +665,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -703,22 +703,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -750,22 +750,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -788,22 +788,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -832,22 +832,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -870,22 +870,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -908,22 +908,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -946,22 +946,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -999,11 +999,11 @@ Testing: -device floppy
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1026,11 +1026,11 @@ Testing: -device floppy,drive-type=120
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1053,11 +1053,11 @@ Testing: -device floppy,drive-type=144
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1080,11 +1080,11 @@ Testing: -device floppy,drive-type=288
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1110,11 +1110,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1137,11 +1137,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1167,11 +1167,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1194,11 +1194,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed May 27 13:03:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 13:03:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvi9-0002rG-OH; Wed, 27 May 2020 13:03:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Hki=7J=redhat.com=fweimer@srs-us1.protection.inumbo.net>)
 id 1jdvi8-0002rB-Hl
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 13:03:08 +0000
X-Inumbo-ID: 68fd1f60-a01a-11ea-9947-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 68fd1f60-a01a-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 13:03:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590584587;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type;
 bh=oy0V2qQifVRxyl3hnuRXtetKes4TGaiAhxy8832yoas=;
 b=XYrGYTz2e4jmKUSiAK9jElMI1dgb4uDJ5Bdic9uHARTUdIYiEeZGZPHajDPg8SckayNqNu
 spZwhiNa6HAQeTMkYys74SlloaZ1ACCZMsSQOfdEQbWBLV+DkzHrOo0eCSQ3LXDJR0KdKx
 ZNgkSymDzwoMeCVdBo3VETu6fuVGn1Y=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-218-tGzjnUSgMNe2-8nuD3ru8Q-1; Wed, 27 May 2020 09:03:04 -0400
X-MC-Unique: tGzjnUSgMNe2-8nuD3ru8Q-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DEE0D460;
 Wed, 27 May 2020 13:03:03 +0000 (UTC)
Received: from oldenburg2.str.redhat.com (ovpn-113-106.ams2.redhat.com
 [10.36.113.106])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 3622760C05;
 Wed, 27 May 2020 13:03:03 +0000 (UTC)
From: Florian Weimer <fweimer@redhat.com>
To: xen-devel@lists.xenproject.org
Subject: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
Date: Wed, 27 May 2020 15:03:01 +0200
Message-ID: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: libc-alpha@sourceware.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

I'm about to remove nosegneg support from upstream glibc, special builds
that use -mno-tls-direct-seg-refs, and the ability to load different
libraries built in this mode automatically, when the Linux kernel tells
us to do that.  I think the intended effect is that these special builds
do not use operands of the form %gs:(%eax) when %eax has the MSB set
because that had a performance hit with paravirtualization on 32-bit
x86.  Instead, the thread pointer is first loaded from %gs:0, and the
actual access does not use a segment prefix.

Before doing that, I'd like to ask if anybody is still using this
feature?

I know that we've been carrying nosegneg libraries for many years, in
some cases even after we stopped shipping 32-bit kernels. 8-/ The
feature has always been rather poorly documented, and the way the
dynamic loader selects those nosegneg library variants is still very
bizarre.

Thanks,
Florian



From xen-devel-bounces@lists.xenproject.org Wed May 27 13:06:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 13:06:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvla-000317-7r; Wed, 27 May 2020 13:06:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jdvlY-000312-RR
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 13:06:40 +0000
X-Inumbo-ID: e6be261a-a01a-11ea-8993-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6be261a-a01a-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 13:06:39 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3bVAu9sXs27VQ/uE6l96fuNeL8FVmemCrw9I2Aj1uF3PqkLV095CwOOqqTkG7fDUWnvb0ACZEM
 XuGJaPOGspcFB/dmf4dUAG4qkJXSLICYTbVf+mByAjlE8X/qz9541EoyhQcRLJAne8869trGqu
 oh8pr4varGcZFxKoTmcREJ6jrleEKxQHSaBXcLcXQWIzf32alfAofYDUx8ofNh3sHXrF077hh3
 LITD8ha08ElXDQRg1admQv4F5A2edbUN9/5Kj8YKnsDgwM8u+Wo1JKwYNEBIXWggZkvobX5qPX
 ZAk=
X-SBRS: 2.7
X-MesageID: 19289677
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,441,1583211600"; d="scan'208";a="19289677"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/boot: Fix load_system_tables() to be NMI/#MC-safe
Date: Wed, 27 May 2020 14:06:07 +0100
Message-ID: <20200527130607.32069-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

During boot, load_system_tables() is used in reinit_bsp_stack() to switch the
virtual addresses used from their .data/.bss alias, to their directmap alias.

The structure assignment is implemented as a memset() to zero first, then a
copy-in of the new data.  This causes the NMI/#MC stack pointers to
transiently become 0, at a point where we may have an NMI watchdog running.

Rewrite the logic using a volatile tss pointer (equivalent to, but more
readable than, using ACCESS_ONCE() for all writes).

This does drop the zeroing side effect for holes in the structure, but the
backing memory for the TSS is fully zeroed anyway, and architecturally, they
are all reserved.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

This wants backporting a fairly long way, technically to Xen 4.6.
---
 xen/arch/x86/cpu/common.c | 49 ++++++++++++++++++++++-------------------------
 1 file changed, 23 insertions(+), 26 deletions(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 3e0d9cbe98..a78b796fe5 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -703,11 +703,12 @@ static cpumask_t cpu_initialized;
  */
 void load_system_tables(void)
 {
-	unsigned int cpu = smp_processor_id();
+	unsigned int i, cpu = smp_processor_id();
 	unsigned long stack_bottom = get_stack_bottom(),
 		stack_top = stack_bottom & ~(STACK_SIZE - 1);
 
-	struct tss64 *tss = &this_cpu(tss_page).tss;
+	/* The TSS may be live.  Dissuade any clever optimisations. */
+	volatile struct tss64 *tss = &this_cpu(tss_page).tss;
 	seg_desc_t *gdt =
 		this_cpu(gdt) - FIRST_RESERVED_GDT_ENTRY;
 
@@ -720,30 +721,26 @@ void load_system_tables(void)
 		.limit = (IDT_ENTRIES * sizeof(idt_entry_t)) - 1,
 	};
 
-	*tss = (struct tss64){
-		/* Main stack for interrupts/exceptions. */
-		.rsp0 = stack_bottom,
-
-		/* Ring 1 and 2 stacks poisoned. */
-		.rsp1 = 0x8600111111111111ul,
-		.rsp2 = 0x8600111111111111ul,
-
-		/*
-		 * MCE, NMI and Double Fault handlers get their own stacks.
-		 * All others poisoned.
-		 */
-		.ist = {
-			[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE,
-			[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE,
-			[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE,
-			[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE,
-
-			[IST_MAX ... ARRAY_SIZE(tss->ist) - 1] =
-				0x8600111111111111ul,
-		},
-
-		.bitmap = IOBMP_INVALID_OFFSET,
-	};
+	/*
+	 * Set up the TSS.  Warning - may be live, and the NMI/#MC must remain
+	 * valid on every instruction boundary.  (Note: these are all
+	 * semantically ACCESS_ONCE() due to tss's volatile qualifier.)
+	 *
+	 * rsp0 refers to the primary stack.  #MC, #DF, NMI and #DB handlers
+	 * each get their own stacks.  No IO Bitmap.
+	 */
+	tss->rsp0 = stack_bottom;
+	tss->ist[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE;
+	tss->ist[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE;
+	tss->ist[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE;
+	tss->ist[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE;
+	tss->bitmap = IOBMP_INVALID_OFFSET;
+
+	/* All other stack pointers poisoned. */
+	for ( i = IST_MAX; i < ARRAY_SIZE(tss->ist); ++i )
+		tss->ist[i] = 0x8600111111111111ul;
+	tss->rsp1 = 0x8600111111111111ul;
+	tss->rsp2 = 0x8600111111111111ul;
 
 	BUILD_BUG_ON(sizeof(*tss) <= 0x67); /* Mandated by the architecture. */
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 13:19:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 13:19:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdvxw-0003xc-CK; Wed, 27 May 2020 13:19:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdvxv-0003xX-An
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 13:19:27 +0000
X-Inumbo-ID: aecb59e3-a01c-11ea-a745-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aecb59e3-a01c-11ea-a745-12813bfff9fa;
 Wed, 27 May 2020 13:19:25 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 03E9BAFED;
 Wed, 27 May 2020 13:19:26 +0000 (UTC)
Subject: Re: [PATCH] x86/boot: Fix load_system_tables() to be NMI/#MC-safe
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527130607.32069-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <50f66504-ab7b-2f3e-1695-003ad69ae37a@suse.com>
Date: Wed, 27 May 2020 15:19:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527130607.32069-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 15:06, Andrew Cooper wrote:
> @@ -720,30 +721,26 @@ void load_system_tables(void)
>  		.limit = (IDT_ENTRIES * sizeof(idt_entry_t)) - 1,
>  	};
>  
> -	*tss = (struct tss64){
> -		/* Main stack for interrupts/exceptions. */
> -		.rsp0 = stack_bottom,
> -
> -		/* Ring 1 and 2 stacks poisoned. */
> -		.rsp1 = 0x8600111111111111ul,
> -		.rsp2 = 0x8600111111111111ul,
> -
> -		/*
> -		 * MCE, NMI and Double Fault handlers get their own stacks.
> -		 * All others poisoned.
> -		 */
> -		.ist = {
> -			[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE,
> -			[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE,
> -			[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE,
> -			[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE,
> -
> -			[IST_MAX ... ARRAY_SIZE(tss->ist) - 1] =
> -				0x8600111111111111ul,
> -		},
> -
> -		.bitmap = IOBMP_INVALID_OFFSET,
> -	};
> +	/*
> +	 * Set up the TSS.  Warning - may be live, and the NMI/#MC must remain
> +	 * valid on every instruction boundary.  (Note: these are all
> +	 * semantically ACCESS_ONCE() due to tss's volatile qualifier.)
> +	 *
> +	 * rsp0 refers to the primary stack.  #MC, #DF, NMI and #DB handlers
> +	 * each get their own stacks.  No IO Bitmap.
> +	 */
> +	tss->rsp0 = stack_bottom;
> +	tss->ist[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE;
> +	tss->ist[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE;
> +	tss->ist[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE;
> +	tss->ist[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE;
> +	tss->bitmap = IOBMP_INVALID_OFFSET;
> +
> +	/* All other stack pointers poisoned. */
> +	for ( i = IST_MAX; i < ARRAY_SIZE(tss->ist); ++i )
> +		tss->ist[i] = 0x8600111111111111ul;
> +	tss->rsp1 = 0x8600111111111111ul;
> +	tss->rsp2 = 0x8600111111111111ul;

ACCESS_ONCE() unfortunately only has one of the two needed effects:
it guarantees that each memory location gets accessed exactly once
(which I assume could also be had with just the volatile addition,
without moving away from using an initializer), but it does not
guarantee single-insn accesses. I consider this particularly
relevant here because all of the 64-bit fields are misaligned. By
doing it like you do, we're setting ourselves up to have to re-do
this yet again in a couple of years' time (presumably using
write_atomic() instead then).

Nevertheless it is a clear improvement, so if you want to leave it
like this
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 27 13:39:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 13:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdwGv-0005ew-03; Wed, 27 May 2020 13:39:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0l=7J=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdwGt-0005er-Pf
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 13:39:03 +0000
X-Inumbo-ID: 6cec3d36-a01f-11ea-8993-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6cec3d36-a01f-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 13:39:02 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:34504
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdwGr-000762-Lf (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 27 May 2020 14:39:01 +0100
Subject: Re: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
To: Florian Weimer <fweimer@redhat.com>, xen-devel@lists.xenproject.org
References: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
Date: Wed, 27 May 2020 14:39:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: libc-alpha@sourceware.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/05/2020 14:03, Florian Weimer wrote:
> I'm about to remove nosegneg support from upstream glibc, special builds
> that use -mno-tls-direct-seg-refs, and the ability to load different
> libraries built in this mode automatically, when the Linux kernel tells
> us to do that.  I think the intended effect is that these special builds
> do not use operands of the form %gs:(%eax) when %eax has the MSB set
> because that had a performance hit with paravirtualization on 32-bit
> x86.  Instead, the thread pointer is first loaded from %gs:0, and the
> actual access does not use a segment prefix.
>
> Before doing that, I'd like to ask if anybody is still using this
> feature?
>
> I know that we've been carrying nosegneg libraries for many years, in
> some cases even after we stopped shipping 32-bit kernels. 8-/ The
> feature has always been rather poorly documented, and the way the
> dynamic loader selects those nosegneg library variants is still very
> bizarre.

I wasn't even aware of this feature, or that there was a problem wanting
fixing.

That said, I have found:

# 32-bit x86 does not perform well with -ve segment accesses on Xen.
CFLAGS-$(CONFIG_X86_32) += $(call cc-option,$(CC),-mno-tls-direct-seg-refs)

in one of our makefiles.

Why does the MSB make any difference?  %gs still needs to remain intact
so the thread pointer can be pulled out, so there is nothing that Xen or
Linux can do in the way of lazy loading.

Beyond that, it's straight-up segment base semantics on x86.  There will
be a 1-cycle AGU delay from a non-zero segment base, but that has nothing
to do with Xen and applies to all segment-based TLS accesses on x86, and
you'll win that back easily through reduced register pressure.

Are there any further details on the perf problem claim?  I find it
suspicious.


Either way, 32-bit PV is on its last legs (not too bad, for something
which was essentially killed by the AMD64 spec).

Ring 1 counting as supervisor mode as far as pagetables go has already
caused guests to suffer a major performance hit on hardware with
SMAP/SMEP (IvyBridge and later), as well as under various speculative
mitigations (we can't rely on SMEP preventing the CPU from speculating
back into Ring 1, etc.), and the forthcoming CET Shadow Stack feature
totally kills Ring 1/2 as usable concepts in the architecture.

Linux is threatening to drop PV32 support, and I've recently added an
option to Xen to compile out and/or disable PV32 (both for attack
surface reduction purposes, and as a necessary consequence of using
Shadow Stacks).

With both my XenServer and upstream x86 maintainer hats on, PV32 is
solely for legacy workloads now.  People currently using PV32 obviously
don't care about performance, or haven't been taking security updates.
I severely doubt they'll notice any change from this.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 27 13:44:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 13:44:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdwMR-0006Ua-Kt; Wed, 27 May 2020 13:44:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9XM8=7J=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1jdwMQ-0006UV-7D
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 13:44:46 +0000
X-Inumbo-ID: 383e66f9-a020-11ea-a74b-12813bfff9fa
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 383e66f9-a020-11ea-a74b-12813bfff9fa;
 Wed, 27 May 2020 13:44:44 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id E64B6D737;
 Wed, 27 May 2020 15:44:43 +0200 (CEST)
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id Pgx7Mf1c6z9U; Wed, 27 May 2020 15:44:42 +0200 (CEST)
Received: from function.home (unknown
 [IPv6:2a01:cb19:956:1b00:9eb6:d0ff:fe88:c3c7])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id C1D68D242;
 Wed, 27 May 2020 15:44:42 +0200 (CEST)
Received: from samy by function.home with local (Exim 4.93)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1jdwML-001IBK-JO; Wed, 27 May 2020 15:44:41 +0200
Date: Wed, 27 May 2020 15:44:41 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
Message-ID: <20200527134441.y5dta4n2dm3ftlmw@function>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Florian Weimer <fweimer@redhat.com>, xen-devel@lists.xenproject.org,
 libc-alpha@sourceware.org
References: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
 <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Florian Weimer <fweimer@redhat.com>, xen-devel@lists.xenproject.org,
 libc-alpha@sourceware.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

Andrew Cooper via Libc-alpha, on Wed, 27 May 2020 14:39:00 +0100, wrote:
> Why does the MSB make any difference?  %gs still needs to remain intact
> so the thread pointer can be pulled out, so there is nothing that Xen or
> Linux can do in the way of lazy loading.
> 
> Beyond that, it's straight-up segment base semantics in x86.  There will
> be a 1-cycle AGU delay from a non-zero base, but that has nothing to do with
> Xen and applies to all segment-based TLS accesses on x86, and you'll win
> that back easily through reduced register pressure.
> 
> Are there any further details on the perf problem claim?  I find it
> suspicious.

The concern is not about the indirection.

The concern is that, to keep itself safe from the guest, the hypervisor
has to restrict the size of the segment. Negative offsets, as used in
the i386 TLS model, are therefore rejected by the processor, and the
hypervisor has to emulate these accesses, at a high cost.

Samuel


From xen-devel-bounces@lists.xenproject.org Wed May 27 14:01:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:01:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdwbk-0008CA-Vd; Wed, 27 May 2020 14:00:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdwbj-0008C5-Qo
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:00:35 +0000
X-Inumbo-ID: 6ed322ce-a022-11ea-a752-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6ed322ce-a022-11ea-a752-12813bfff9fa;
 Wed, 27 May 2020 14:00:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DE1A9AFC0;
 Wed, 27 May 2020 14:00:35 +0000 (UTC)
Subject: Re: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
 <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <69bdaedf-c403-a77d-8ab1-12feffa15494@suse.com>
Date: Wed, 27 May 2020 16:00:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Florian Weimer <fweimer@redhat.com>, xen-devel@lists.xenproject.org,
 libc-alpha@sourceware.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 15:39, Andrew Cooper wrote:
> On 27/05/2020 14:03, Florian Weimer wrote:
>> I'm about to remove nosegneg support from upstream glibc, special builds
>> that use -mno-tls-direct-seg-refs, and the ability to load different
>> libraries built in this mode automatically, when the Linux kernel tells
>> us to do that.  I think the intended effect is that these special builds
>> do not use operands of the form %gs:(%eax) when %eax has the MSB set
>> because that had a performance hit with paravirtualization on 32-bit
>> x86.  Instead, the thread pointer is first loaded from %gs:0, and the
>> actual access does not use a segment prefix.
>>
>> Before doing that, I'd like to ask if anybody is still using this
>> feature?
>>
>> I know that we've been carrying nosegneg libraries for many years, in
>> some cases even after we stopped shipping 32-bit kernels. 8-/ The
>> feature has always been rather poorly documented, and the way the
>> dynamic loader selects those nosegneg library variants is still very
>> bizarre.
> 
> I wasn't even aware of this feature, or that there was a problem wanting
> fixing.
> 
> That said, I have found:
> 
> # 32-bit x86 does not perform well with -ve segment accesses on Xen.
> CFLAGS-$(CONFIG_X86_32) += $(call cc-option,$(CC),-mno-tls-direct-seg-refs)
> 
> in one of our makefiles.
> 
> Why does the MSB make any difference?  %gs still needs to remain intact
> so the thread pointer can be pulled out, so there is nothing that Xen or
> Linux can do in the way of lazy loading.
> 
> Beyond that, it's straight-up segment base semantics in x86.  There will
> be a 1-cycle AGU delay from a non-zero base, but that has nothing to do with
> Xen and applies to all segment-based TLS accesses on x86, and you'll win
> that back easily through reduced register pressure.
> 
> Are there any further details on the perf problem claim?  I find it
> suspicious.

To guard the hypervisor area, 32-bit Xen reduced the limits of guest
usable segment descriptors. While this works fine for flat ones (you
just chop off some space at the top), there's no way to represent a
full segment with a non-zero base. You can have the descriptor map
only the [base,XenBase] part or the [0,base) one. Hence Xen, from its
#GP handler, flipped the descriptor between the two options depending
on whether the current access was to the positive or negative part of
the TLS seg. (An in-practice use of expand-down segments, as you'll
surely notice.)

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 27 14:15:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdwqC-0000mk-EE; Wed, 27 May 2020 14:15:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0l=7J=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdwqA-0000md-R6
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:15:30 +0000
X-Inumbo-ID: 84525d66-a024-11ea-a755-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84525d66-a024-11ea-a755-12813bfff9fa;
 Wed, 27 May 2020 14:15:29 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:35482
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdwq8-000Vss-KN (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 27 May 2020 15:15:28 +0100
Subject: Re: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
 Florian Weimer <fweimer@redhat.com>, xen-devel@lists.xenproject.org,
 libc-alpha@sourceware.org
References: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
 <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
 <20200527134441.y5dta4n2dm3ftlmw@function>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3c6c1ab3-e9d9-78d1-8a1e-36da206c4c98@citrix.com>
Date: Wed, 27 May 2020 15:15:27 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527134441.y5dta4n2dm3ftlmw@function>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/05/2020 14:44, Samuel Thibault wrote:
> Hello,
>
> Andrew Cooper via Libc-alpha, on Wed. 27 May 2020 14:39:00 +0100, wrote:
>> Why does the MSB make any difference?  %gs still needs to remain intact
>> so the thread pointer can be pulled out, so there is nothing that Xen or
>> Linux can do in the way of lazy loading.
>>
>> Beyond that, it's straight-up segment base semantics in x86.  There will
>> be a 1-cycle AGU delay from a non-zero base, but that has nothing to do with
>> Xen and applies to all segment-based TLS accesses on x86, and you'll win
>> that back easily through reduced register pressure.
>>
>> Are there any further details on the perf problem claim?  I find it
>> suspicious.
> The concern is not about the indirection.
>
> The concern is that, to keep itself safe from the guest, the hypervisor
> has to restrict the size of the segment. Negative offsets, as used in
> the i386 TLS model, are therefore rejected by the processor, and the
> hypervisor has to emulate these accesses, at a high cost.

Oh, so the i386 TLS model relies on the calculation wrapping (modulo 4G)
when the segment limit is 4G, instead of taking a fault?

Intel states this behaviour is implementation-specific (SDM Vol3
5.3.1) and may fault, while AMD doesn't discuss it at all as far as I
can tell (APM Vol2 4.12 is the right section, but I can't see this
discussed).

While I can believe it probably works on every processor these days, it
does seem like dodgy ground to base an ABI on.

It also means that Xen isn't necessarily the only affected party.  I'm
pretty sure GRSecurity use reduced segment limits as well.

I also bet it doesn't work reliably under emulation.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 27 14:21:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:21:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdwvV-0001ar-2c; Wed, 27 May 2020 14:21:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Hki=7J=redhat.com=fweimer@srs-us1.protection.inumbo.net>)
 id 1jdwvT-0001am-CT
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:21:00 +0000
X-Inumbo-ID: 48524b22-a025-11ea-81bc-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 48524b22-a025-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 14:20:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590589257;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=TsR6FQSfpPDBRu16d0tiuTG2BxyyN88e+ocsvc5aPWY=;
 b=ZB6ErvAfsH2nTu6jSjVwOpKy5vLv7tV5tl2/ERzXAMKgGs7MAzUdDXQhiHHz1/I0zhjf/d
 YoeR7zvWsokL17hd3LRnLLfAEuchQvK2olXHewEFZR+nNYKvRNwRuhvY6fIZeyh9GUvcHF
 jd7LH4LFCnaVcnkGqdz2pf91D2rB7NQ=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-428-MuetF85rO5Sx5VW2hrFfdg-1; Wed, 27 May 2020 10:20:55 -0400
X-MC-Unique: MuetF85rO5Sx5VW2hrFfdg-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 6DB9F84B8A1;
 Wed, 27 May 2020 14:20:54 +0000 (UTC)
Received: from oldenburg2.str.redhat.com (ovpn-113-106.ams2.redhat.com
 [10.36.113.106])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 2E55F768A4;
 Wed, 27 May 2020 14:20:51 +0000 (UTC)
From: Florian Weimer <fweimer@redhat.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
References: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
 <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
 <20200527134441.y5dta4n2dm3ftlmw@function>
 <3c6c1ab3-e9d9-78d1-8a1e-36da206c4c98@citrix.com>
Date: Wed, 27 May 2020 16:20:49 +0200
In-Reply-To: <3c6c1ab3-e9d9-78d1-8a1e-36da206c4c98@citrix.com> (Andrew
 Cooper's message of "Wed, 27 May 2020 15:15:27 +0100")
Message-ID: <87sgflh326.fsf@oldenburg2.str.redhat.com>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Samuel Thibault <samuel.thibault@ens-lyon.org>, libc-alpha@sourceware.org,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

* Andrew Cooper:

> Oh, so the i386 TLS model relies on the calculation wrapping (modulo 4G)
> when the segment limit is 4G, instead of taking a fault?

That's about it.

> Intel states this behaviour is implementation-specific (SDM Vol3
> 5.3.1) and may fault, while AMD doesn't discuss it at all as far as I
> can tell (APM Vol2 4.12 is the right section, but I can't see this
> discussed).
>
> While I can believe it probably works on every processor these days, it
> does seem like dodgy ground to base an ABI on.

Sure, but it has been this way since the beginnings of NPTL, for close
to twenty years now.  The TCB is at positive offsets, and the user TLS
data at negative offsets.

> It also means that Xen isn't necessarily the only affected party.  I'm
> pretty sure GRSecurity use reduced segment limits as well.

Mostly for CS and DS, I believe, for the fake NX handling.  I think that
was never upstream, but some vendor kernels had variants of it.

> I also bet it doesn't work reliably under emulation.

It has to, given that it's so pervasively used under Linux. 8-/

Thanks,
Florian



From xen-devel-bounces@lists.xenproject.org Wed May 27 14:36:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:36:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxAg-0002m6-FS; Wed, 27 May 2020 14:36:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TPJL=7J=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jdxAf-0002m1-F8
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:36:41 +0000
X-Inumbo-ID: 7a01ebb2-a027-11ea-81bc-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 7a01ebb2-a027-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 14:36:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590590199;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=A0YiRCCQVyRX4Yfd5csRDtsOZtlp+horQF41aUZ07xM=;
 b=GXaKLxKSaCqP4qrb8uVZqIXfSA7v/PmFoOmFTZxBjLadTrfLYWx4JvTg6I1vKkybpz97Ca
 HeBOGpEeZRPZApXu8TUqJSzrZcgAwKUheFSkk///DA+XwGZhKTAMjQpDlPF/41UxFEVbIA
 Smjjes8NRFyKR7l0RL9VDO3yu2c+f90=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-493-rMNYJ8TRO2qUApmORyRw6w-1; Wed, 27 May 2020 10:36:27 -0400
X-MC-Unique: rMNYJ8TRO2qUApmORyRw6w-1
Received: from smtp.corp.redhat.com (int-mx06.intmail.prod.int.phx2.redhat.com
 [10.5.11.16])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 094AD1005510;
 Wed, 27 May 2020 14:36:26 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 028ED5C1B0;
 Wed, 27 May 2020 14:36:15 +0000 (UTC)
Subject: Re: [PATCH v6 2/5] block: consolidate blocksize properties
 consistency checks
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
 <20200527124511.986099-3-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <e087aca6-1612-6bc4-9a0a-0c5c69fb2b74@redhat.com>
Date: Wed, 27 May 2020 09:36:14 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527124511.986099-3-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.16
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Keith Busch <kbusch@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 7:45 AM, Roman Kagan wrote:
> Several block device properties related to blocksize configuration must
> be in a certain relationship WRT each other: the physical block must be
> no smaller than the logical block; min_io_size, opt_io_size, and
> discard_granularity must each be a multiple of the logical block.
> 
> To ensure these requirements are met, add corresponding consistency
> checks to blkconf_blocksizes, adjusting its signature to communicate
> possible error to the caller.  Also remove the now redundant consistency
> checks from the specific devices.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Wed May 27 14:37:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:37:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxBh-0002pq-PX; Wed, 27 May 2020 14:37:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TPJL=7J=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jdxBg-0002pi-FA
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:37:44 +0000
X-Inumbo-ID: a001dade-a027-11ea-9947-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a001dade-a027-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 14:37:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590590263;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=MyK0ZS/S5olIgXiNFZRrlNJf+GStgK6P0N6XBLc58IQ=;
 b=Y550cfNdUF4PluJFz9HIYy6RpnrofyJ0n+mpKgqXayfsdKpq9WhUukfJk6EyKzMh4GYAkJ
 z3uDKx9TBOpXH/tzZ1T6ii6SYNVcg2zHfIDma5Xvh3bW76bod+rpw3TL0gZWA6HeCos9n1
 5XBxXyXrJv6pYql2sHurTizajgG/pfc=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-91-lO8UuUcSMGG0cK5CO75n8g-1; Wed, 27 May 2020 10:37:40 -0400
X-MC-Unique: lO8UuUcSMGG0cK5CO75n8g-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 447B2800688;
 Wed, 27 May 2020 14:37:38 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 0DAB579C40;
 Wed, 27 May 2020 14:37:24 +0000 (UTC)
Subject: Re: [PATCH v6 3/5] qdev-properties: blocksize: use same limits in
 code and description
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
 <20200527124511.986099-4-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <19680d8c-400a-3842-ada1-bc3bb239fcf0@redhat.com>
Date: Wed, 27 May 2020 09:37:23 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527124511.986099-4-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Keith Busch <kbusch@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 7:45 AM, Roman Kagan wrote:
> Make it easier (and more visible) to keep the limits on the blocksize
> properties in sync with their description, by using macros both
> in the code and in the description.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Wed May 27 14:40:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxE5-0003fa-6f; Wed, 27 May 2020 14:40:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0l=7J=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdxE3-0003fQ-9W
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:40:11 +0000
X-Inumbo-ID: f6e21558-a027-11ea-a759-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6e21558-a027-11ea-a759-12813bfff9fa;
 Wed, 27 May 2020 14:40:09 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:36104
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdxE0-000m4V-KT (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 27 May 2020 15:40:08 +0100
Subject: Re: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
To: Jan Beulich <jbeulich@suse.com>
References: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
 <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
 <69bdaedf-c403-a77d-8ab1-12feffa15494@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <779861db-f9eb-634e-3d28-e113fcc37846@citrix.com>
Date: Wed, 27 May 2020 15:40:07 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <69bdaedf-c403-a77d-8ab1-12feffa15494@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Florian Weimer <fweimer@redhat.com>, xen-devel@lists.xenproject.org,
 libc-alpha@sourceware.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/05/2020 15:00, Jan Beulich wrote:
> On 27.05.2020 15:39, Andrew Cooper wrote:
>> On 27/05/2020 14:03, Florian Weimer wrote:
>>> I'm about to remove nosegneg support from upstream glibc, special builds
>>> that use -mno-tls-direct-seg-refs, and the ability to load different
>>> libraries built in this mode automatically, when the Linux kernel tells
>>> us to do that.  I think the intended effect is that these special builds
>>> do not use operands of the form %gs:(%eax) when %eax has the MSB set
>>> because that had a performance hit with paravirtualization on 32-bit
>>> x86.  Instead, the thread pointer is first loaded from %gs:0, and the
>>> actual access does not use a segment prefix.
>>>
>>> Before doing that, I'd like to ask if anybody is still using this
>>> feature?
>>>
>>> I know that we've been carrying nosegneg libraries for many years, in
>>> some cases even after we stopped shipping 32-bit kernels. 8-/ The
>>> feature has always been rather poorly documented, and the way the
>>> dynamic loader selects those nosegneg library variants is still very
>>> bizarre.
>> I wasn't even aware of this feature, or that there was a problem wanting
>> fixing.
>>
>> That said, I have found:
>>
>> # 32-bit x86 does not perform well with -ve segment accesses on Xen.
>> CFLAGS-$(CONFIG_X86_32) += $(call cc-option,$(CC),-mno-tls-direct-seg-refs)
>>
>> in one of our makefiles.
>>
>> Why does the MSB make any difference?  %gs still needs to remain intact
>> so the thread pointer can be pulled out, so there is nothing that Xen or
>> Linux can do in the way of lazy loading.
>>
>> Beyond that, it's straight-up segment base semantics in x86.  There will
>> be a 1-cycle AGU delay from a non-zero base, but that has nothing to do with
>> Xen and applies to all segment-based TLS accesses on x86, and you'll win
>> that back easily through reduced register pressure.
>>
>> Are there any further details on the perf problem claim?  I find it
>> suspicious.
> To guard the hypervisor area, 32-bit Xen reduced the limits of guest
> usable segment descriptors.

Right.  Segment limits are what kept the guest kernel (ring 1,
supervisor) out of Xen (ring 0, also supervisor).

> While this works fine for flat ones (you
> just chop off some space at the top), there's no way to represent a
> full segment with a non-zero base.

(From the other thread,) The problem isn't related to the base, per se.

It is that a segment with a non-4G limit now faults rather than
truncating usefully for the 32bit TLS model.

> You can have the descriptor map
> only the [base,XenBase] part or the [0,base) one. Hence Xen, from its
> #GP handler, flipped the descriptor between the two options depending
> on whether the current access was to the positive or negative part of
> the TLS seg. (An in-practice use of expand-down segments, as you'll
> surely notice.)

I've found gpf_emulate_4gb() in source history.  It was specific to
32bit builds of Xen (now long gone).

What I can't figure out is why this is unnecessary in 64bit builds of
Xen.  We still enforce reduced segment limits on the guest's descriptors.

I have a worrying suspicion that Xen's ABI for PV32 (on top of a 64bit
Xen) now depends on -mno-tls-direct-seg-refs.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 27 14:51:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:51:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxOR-0004bs-AG; Wed, 27 May 2020 14:50:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TPJL=7J=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jdxOQ-0004bn-CU
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:50:54 +0000
X-Inumbo-ID: 76bc841a-a029-11ea-a75a-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 76bc841a-a029-11ea-a75a-12813bfff9fa;
 Wed, 27 May 2020 14:50:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590591053;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=Jt2O02vs+NLP0r9p0Z1zjPBIZdH//DNFNRyNX/mabPM=;
 b=Q+3s8AaEYnDOTmOkL6G7LytzGrReFtkAYOQ6WPg7L7W5c0lHbNwK0DtQw10wLo7eYzQmUx
 3/XJjnWxZgnafDvjD31LbVeuTKqRK0jvQfvrZV36Xzwedu6rnvBxYVxnqCVAfDg1gocaK0
 VzR3FV0rwSSJQ3Iu1kU7c8cEvPsyDFI=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-24-Aj8ZSKYKNliX64OSfWTUIg-1; Wed, 27 May 2020 10:50:51 -0400
X-MC-Unique: Aj8ZSKYKNliX64OSfWTUIg-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id CFF17EC1B6;
 Wed, 27 May 2020 14:50:49 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id C1CD662932;
 Wed, 27 May 2020 14:50:40 +0000 (UTC)
Subject: Re: [PATCH v6 4/5] block: make size-related BlockConf properties
 accept size suffixes
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
 <20200527124511.986099-5-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <d2ac3549-e63d-d737-41fa-21965c551175@redhat.com>
Date: Wed, 27 May 2020 09:50:39 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527124511.986099-5-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Keith Busch <kbusch@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 7:45 AM, Roman Kagan wrote:
> Several BlockConf properties represent respective sizes in bytes so it
> makes sense to accept size suffixes for them.
> 
> Turn them all into uint32_t and use size-suffix-capable setters/getters
> on them; introduce DEFINE_PROP_SIZE32 and adjust DEFINE_PROP_BLOCKSIZE
> for that. (Making them uint64_t and reusing DEFINE_PROP_SIZE isn't
> justified because guests expect at most 32bit values.)
> 
> Also, since min_io_size is exposed to the guest by scsi and virtio-blk
> devices as an uint16_t in units of logical blocks, introduce an
> additional check in blkconf_blocksizes to prevent its silent truncation.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
> v5 -> v6:
> - add prop_size32 instead of going with 64bit

Would it be worth adding prop_size32 as its own patch, before using it 
here?  But I'll review this as-is.

> +++ b/hw/block/block.c
> @@ -96,6 +96,17 @@ bool blkconf_blocksizes(BlockConf *conf, Error **errp)
>           return false;
>       }
>   
> +    /*
> +     * all devices which support min_io_size (scsi and virtio-blk) expose it to
> +     * the guest as a uint16_t in units of logical blocks
> +     */
> +    if (conf->min_io_size > conf->logical_block_size * UINT16_MAX) {

This risks overflow.  Better would be:

if (conf->min_io_size / conf->logical_block_size > UINT16_MAX)

> +        error_setg(errp,
> +                   "min_io_size must not exceed " stringify(UINT16_MAX)
> +                   " logical blocks");
> +        return false;
> +    }
> +
>       if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
>           error_setg(errp,
>                      "opt_io_size must be a multiple of logical_block_size");

> +++ b/tests/qemu-iotests/172.out
> @@ -24,11 +24,11 @@ Testing:
>                 dev: floppy, id ""
>                   unit = 0 (0x0)
>                   drive = "floppy0"
> -                logical_block_size = 512 (0x200)
> -                physical_block_size = 512 (0x200)
> -                min_io_size = 0 (0x0)
> -                opt_io_size = 0 (0x0)
> -                discard_granularity = 4294967295 (0xffffffff)
> +                logical_block_size = 512 (512 B)
> +                physical_block_size = 512 (512 B)
> +                min_io_size = 0 (0 B)
> +                opt_io_size = 0 (0 B)
> +                discard_granularity = 4294967295 (4 GiB)

Although 4 GiB is not quite the same as 4294967295, the exact byte value 
next to the approximate size is not too bad.  The mechanical fallout 
from the change from int to size is fine by me.

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Wed May 27 14:53:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxQS-0004k0-MY; Wed, 27 May 2020 14:53:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TPJL=7J=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jdxQR-0004jt-9z
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:52:59 +0000
X-Inumbo-ID: c09268ca-a029-11ea-81bc-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c09268ca-a029-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 14:52:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590591177;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=ZwiwScNgaDu/uRp9hitNBzNDJPJ3dGNkzvYDl7pCbU8=;
 b=F+1el6hiaILGLcqxSChcP7BaKMywBXEZUjrJDdQ56BpVdlv4NWde8gbcv+8aAjnrsOORuE
 QMUYK9kfwlO7tzAwICgquNyfTaTMoGlHxJ96WfQfuotzsakojfSKWBeEXnzhcYW3BaHICF
 Pm7a91Du78T9sb6hEszMsTuGJbMnh2o=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-171-YCbpBZSYNWOqDOykNAawcg-1; Wed, 27 May 2020 10:52:55 -0400
X-MC-Unique: YCbpBZSYNWOqDOykNAawcg-1
Received: from smtp.corp.redhat.com (int-mx08.intmail.prod.int.phx2.redhat.com
 [10.5.11.23])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 055F7EC1A7;
 Wed, 27 May 2020 14:52:53 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 0EBE12DE60;
 Wed, 27 May 2020 14:52:43 +0000 (UTC)
Subject: Re: [PATCH v6 5/5] block: lift blocksize property limit to 2 MiB
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
 <20200527124511.986099-6-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <008081c3-4eb5-a260-b1d8-94525da51fdf@redhat.com>
Date: Wed, 27 May 2020 09:52:42 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527124511.986099-6-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.23
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Keith Busch <kbusch@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 7:45 AM, Roman Kagan wrote:
> Logical and physical block sizes in QEMU are limited to 32 KiB.
> 
> This appears unnecessary tight, and we've seen bigger block sizes handy

unnecessarily

> at times.
> 
> Lift the limitation up to 2 MiB which appears to be good enough for
> everybody, and matches the qcow2 cluster size limit.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Wed May 27 14:57:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 14:57:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxUR-0004vp-Bn; Wed, 27 May 2020 14:57:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YYzi=7J=lucina.net=martin@srs-us1.protection.inumbo.net>)
 id 1jdxUP-0004vk-O8
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:57:05 +0000
X-Inumbo-ID: 535092fe-a02a-11ea-a75b-12813bfff9fa
Received: from smtp.lucina.net (unknown [62.176.169.44])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 535092fe-a02a-11ea-a75b-12813bfff9fa;
 Wed, 27 May 2020 14:57:04 +0000 (UTC)
Received: from nodbug.lucina.net (78-141-76-187.dynamic.orange.sk
 [78.141.76.187])
 by smtp.lucina.net (Postfix) with ESMTPSA id 27A01122804;
 Wed, 27 May 2020 16:57:03 +0200 (CEST)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=lucina.net;
 s=dkim-201811; t=1590591423;
 bh=gCASpuwhr1jiW6oKsT1tI6Zwmv/ACpqwRxqwguDYUJ0=;
 h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
 b=qiW8NyhCrP/mIBzkTgusiSWV8Ui4+vu8fNB/+7vnRonbmWNdzmFd583CzU/dgmM3L
 qrXqDJFvUKn/U6fLaN71Xhg5TtiLst5ck5UZyhNDAx/2wUhYNStdyDtV8Wq2tWPgSr
 WnAQU8xn0ae7RXXIYiPGNBMqBfQ0uYu5bmQCKeIBobVQyYMqr8PC6ue5Wsc+W3rpTe
 EKNSvK8b0auZMXKO8KnI/t6JT1IgsjN1J8RN1c4ZS97BSKLvNxNG/UONrObTIwQVVm
 UpPVoZSTsK4G+ElChvnFYhqaHto/X1FYhts0XIZOXDRwKyQJEVb298+xKhkjnq26Sm
 B5H3zmLQ9lEaA==
Received: by nodbug.lucina.net (Postfix, from userid 1000)
 id 05BB4265E722; Wed, 27 May 2020 16:57:03 +0200 (CEST)
Date: Wed, 27 May 2020 16:57:02 +0200
From: Martin Lucina <martin@lucina.net>
To: Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger@xen.org>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200527145702.GE4788@nodbug.lucina.net>
Mail-Followup-To: Martin Lucina <martin@lucina.net>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger@xen.org>,
 =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 mirageos-devel@lists.xenproject.org, dave@recoil.org,
 xen-devel@lists.xenproject.org
References: <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
 <20200526101203.GE5942@nodbug.lucina.net>
 <20200526154224.GC25283@nodbug.lucina.net>
 <20200526163021.GE38408@Air-de-Roger>
 <20200527080008.GC4788@nodbug.lucina.net>
 <20200527143644.GA1195@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200527143644.GA1195@Air-de-Roger>
User-Agent: Mutt/1.10.1 (2018-07-13)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?iso-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wednesday, 27.05.2020 at 16:41, Roger Pau Monné wrote:
> > > > If I make this simple change:
> > > > 
> > > > --- a/bindings/xen/boot.S
> > > > +++ b/bindings/xen/boot.S
> > > > @@ -32,7 +32,7 @@
> > > >  #define ENTRY(x) .text; .globl x; .type x,%function; x:
> > > >  #define END(x)   .size x, . - x
> > > > 
> > > > -.section .note.solo5.xen
> > > > +.section .note.solo5.xen, "a", @note
> > > > 
> > > >         .align  4
> > > >         .long   4
> > > > 
> > > > then I get the expected output from readelf -lW, and I can get as far as
> > > > the C _start() with no issues!
> > > > 
> > > > FWIW, here's the diff of readelf -lW before/after:
> > > > 
> > > > --- before	2020-05-26 17:36:46.117885855 +0200
> > > > +++ after	2020-05-26 17:38:07.090508322 +0200
> > > > @@ -8,9 +8,9 @@
> > > >    INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
> > > >        [Requesting program interpreter: /nonexistent/solo5/]
> > > >    LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00615c 0x00615c R E 0x1000
> > > > -  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed28 RW  0x1000
> > > > +  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x006120 0x00dd28 RW  0x1000
> > > 
> > > This seems suspicious, there's a change of the size of the LOAD
> > > section, but your change to the note type should not affect the LOAD
> > > section?
> > 
> > Indeed.
> 
> You could try to disassemble the text sections with objdump -d (or -D
> for all sections) and see if there's a difference between both
> versions, but having solved the issue maybe you just want to move
> on.

I have moved on, making good progress:

    domainbuilder: detail: xc_dom_release: called
    Hello, world!
    Solo5: Xen hvm_start_info @0x0000000000119000
    Solo5: magic=0x336ec578 version=1
    Solo5: cmdline_paddr=0x0
    Solo5: memmap_paddr=0x119878 entries=5
    Solo5: memmap[0] = { 0x0, 0x10000400, 1 }
    Solo5: mem_size=0x10000000
                |      ___|
      __|  _ \  |  _ \ __ \
    \__ \ (   | | (   |  ) |
    ____/\___/ _|\___/____/
    Solo5: Bindings version v0.6.5-4-g57724f8-dirty
    Solo5: Memory map: 256 MB addressable:
    Solo5:   reserved @ (0x0 - 0xfffff)
    Solo5:       text @ (0x100000 - 0x104fff)
    Solo5:     rodata @ (0x105000 - 0x106fff)
    Solo5:       data @ (0x107000 - 0x118fff)
    Solo5:       heap >= 0x119000 < stack < 0x10000000
    Solo5: Clock source: KVM paravirtualized clock
    Solo5: trap: type=#GP ec=0x0 rip=0x103a96 rsp=0xfffff90 rflags=0x2
    Solo5: ABORT: cpu_x86_64.c:181: Fatal trap
    Solo5: Halted

(The #GP is due to the timekeeping code not yet having been ported to Xen.)

Random question: With memory="256" in the xl config, why is the size of the
first XEN_HVM_MEMMAP_TYPE_RAM memmap entry not a multiple of PAGE_SIZE? I
had to align it down, since we put the stack at the top of RAM. 0x10000400
seems... odd.

Thanks all for your help so far, I'm sure I'll run into some more details
that will need clarifying. Enough for today, now going for a walk in the
woods :-)

-mato


From xen-devel-bounces@lists.xenproject.org Wed May 27 15:02:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:02:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxZ7-0005oP-1f; Wed, 27 May 2020 15:01:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSSa=7J=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jdxZ5-0005oJ-D1
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:01:55 +0000
X-Inumbo-ID: 00ae6ba6-a02b-11ea-a75c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00ae6ba6-a02b-11ea-a75c-12813bfff9fa;
 Wed, 27 May 2020 15:01:54 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Rjr/fnn1r9nNNDmy/mckjFtMS52K4ZK3kk14ndKqHwA=; b=s6qPhnaoWtIroM/W19Qp3qPcgm
 ucfHgmuL+AECd+KMScnPkZZgsLEWmQK8uEwjFAYl3uYBFG91v9U6unIRkxBOunSqFuwxaYEOEMiZZ
 UhWxDs86uZjvPo3p5GtE3xe5nxqExzkOfY55VdKPA3BUa9RW3UoXnKiSURCYrCtl52zs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jdxZ2-0005rk-Sp; Wed, 27 May 2020 15:01:52 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jdxZ2-0002DH-Ie; Wed, 27 May 2020 15:01:52 +0000
Message-ID: <27611cd2c6f04431d3e0fe99824cc844fed96e40.camel@xen.org>
Subject: Re: [PATCH v6 07/15] x86_64/mm: switch to new APIs in paging_init
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Date: Wed, 27 May 2020 16:01:50 +0100
In-Reply-To: <80d185d4-c7c3-53b9-d851-ab56ea4bc755@suse.com>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <0655cc2d3dc27141ef102076c4ad390a37191b37.1587735799.git.hongyxia@amazon.com>
 <80d185d4-c7c3-53b9-d851-ab56ea4bc755@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 2020-05-20 at 11:46 +0200, Jan Beulich wrote:
> On 24.04.2020 16:08, Hongyan Xia wrote:
> > @@ -493,22 +494,28 @@ void __init paging_init(void)
> >          if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
> >                _PAGE_PRESENT) )
> >          {
> > -            l3_pgentry_t *pl3t = alloc_xen_pagetable();
> > +            l3_pgentry_t *pl3t;
> > +            mfn_t l3mfn = alloc_xen_pagetable_new();
> >  
> > -            if ( !pl3t )
> > +            if ( mfn_eq(l3mfn, INVALID_MFN) )
> >                  goto nomem;
> > +
> > +            pl3t = map_domain_page(l3mfn);
> >              clear_page(pl3t);
> >              l4e_write(&idle_pg_table[l4_table_offset(va)],
> > -                      l4e_from_paddr(__pa(pl3t),
> > __PAGE_HYPERVISOR_RW));
> > +                      l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR_RW));
> > +            unmap_domain_page(pl3t);
> 
> This can be moved up, and once it is you'll notice that you're
> open-coding clear_domain_page(). I wonder whether I didn't spot
> the same in other patches of this series.
> 
> Besides the previously raised point of possibly having an
> allocation function that returns a mapping of the page right
> away (not needed here) - are there many cases where allocation
> of a new page table isn't accompanied by clearing the page? If
> not, should the function perhaps do so (and then, once it has
> a mapping anyway, it would be even more so natural to return
> it for users wanting a mapping anyway)?

I grepped through all alloc_xen_pagetable(). Except the page shattering
logic in x86/mm.c where the whole page table page is written
immediately, all other call sites clear the page right away, so it is
useful to have a helper that clears it for you. I also looked at the
use of VA and MFN from the call. MFN is almost always needed while VA
is not, and if we bundle clearing into the alloc() itself, a lot of
call sites don't even need the VA.

Similar to what you suggested before, we can do:
void *alloc_map_clear_xen_pagetable(mfn_t *mfn)
which needs to be paired with an unmap call, of course.

> > @@ -662,6 +677,8 @@ void __init paging_init(void)
> >      return;
> >  
> >   nomem:
> > +    UNMAP_DOMAIN_PAGE(l2_ro_mpt);
> > +    UNMAP_DOMAIN_PAGE(l3_ro_mpt);
> >      panic("Not enough memory for m2p table\n");
> >  }
> 
> I don't think this is a very useful addition.

I was trying to avoid further mapping leaks in the panic path, but it
does not look like panic() does mappings, so these can be removed.

Hongyan



From xen-devel-bounces@lists.xenproject.org Wed May 27 15:03:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxb0-0005uq-EF; Wed, 27 May 2020 15:03:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ARaW=7J=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jdxFQ-0003lF-G0
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 14:41:36 +0000
X-Inumbo-ID: 26d14dce-a028-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26d14dce-a028-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 14:41:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=62mMkHYb9wXQ+M+kXG8GvEUvOczD2Ydx/xkpqyMXgIE=; b=t9bDoPdJpvG7EfgeWOOjc4YDuD
 TBgIUh1I5IbS15gaDzI2RXn0xmXxhozo9ym2ZA6IkwT0GQBjsBGAm1uN6Xqvf71x4iNzeepQ4JYJy
 OQGbZZ0SByjGlTKnGtBCpH+gm++7SYvHkIKMBinx3+NvPjanqVHOw5RGVOPGA92abX4s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jdxFE-0005N6-Qq; Wed, 27 May 2020 14:41:24 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jdxFE-00010X-G7; Wed, 27 May 2020 14:41:24 +0000
Date: Wed, 27 May 2020 16:41:10 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Martin Lucina <martin@lucina.net>
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200527143644.GA1195@Air-de-Roger>
References: <20200525160401.GA3091@nodbug.lucina.net>
 <a17fef73-382c-50b3-1e6b-5904fc3bf60f@suse.com>
 <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
 <20200526101203.GE5942@nodbug.lucina.net>
 <20200526154224.GC25283@nodbug.lucina.net>
 <20200526163021.GE38408@Air-de-Roger>
 <20200527080008.GC4788@nodbug.lucina.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200527080008.GC4788@nodbug.lucina.net>
X-Mailman-Approved-At: Wed, 27 May 2020 15:03:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>, mirageos-devel@lists.xenproject.org,
 dave@recoil.org, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 27, 2020 at 10:00:08AM +0200, Martin Lucina wrote:
> On Tuesday, 26.05.2020 at 18:30, Roger Pau Monné wrote:
> > > Turns out that the .note.solo5.xen section as defined in boot.S was not
> > > marked allocatable, and that was doing <something> that was confusing our
> > > linker script[1] (?).
> > 
> > Hm, I would have said there was no need to load notes into memory, and
> > hence using a MemSize of 0 would be fine.
> > 
> > Maybe libelf loader was somehow getting confused and not loading the
> > image properly?
> > 
> > Can you paste the output of `xl -vvv create ...` when using the broken
> > image?
> 
> Here you go:
> 
> Parsing config from ./test_hello.xl
> libxl: debug: libxl_create.c:1671:do_domain_create: Domain 0:ao 0x5593c42e7e30: create: how=(nil) callback=(nil) poller=0x5593c42e7670
> libxl: debug: libxl_create.c:1007:initiate_domain_create: Domain 2:running bootloader
> libxl: debug: libxl_bootloader.c:335:libxl__bootloader_run: Domain 2:no bootloader configured, using user supplied kernel
> libxl: debug: libxl_event.c:686:libxl__ev_xswatch_deregister: watch w=0x5593c42e9590: deregister unregistered
> libxl: debug: libxl_sched.c:82:libxl__set_vcpuaffinity: Domain 2:New soft affinity for vcpu 0 has unreachable cpus
> domainbuilder: detail: xc_dom_allocate: cmdline="", features=""
> domainbuilder: detail: xc_dom_kernel_file: filename="test_hello.xen"
> domainbuilder: detail: xc_dom_malloc_filemap    : 191 kB
> domainbuilder: detail: xc_dom_boot_xen_init: ver 4.11, caps xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> domainbuilder: detail: xc_dom_parse_image: called
> domainbuilder: detail: xc_dom_find_loader: trying multiboot-binary loader ...
> domainbuilder: detail: loader probe failed
> domainbuilder: detail: xc_dom_find_loader: trying HVM-generic loader ...
> domainbuilder: detail: loader probe failed
> domainbuilder: detail: xc_dom_find_loader: trying Linux bzImage loader ...
> domainbuilder: detail: xc_dom_probe_bzimage_kernel: kernel is not a bzImage
> domainbuilder: detail: loader probe failed
> domainbuilder: detail: xc_dom_find_loader: trying ELF-generic loader ...
> domainbuilder: detail: loader probe OK
> xc: detail: ELF: phdr: paddr=0x100000 memsz=0x6264
> xc: detail: ELF: phdr: paddr=0x107000 memsz=0xed48
> xc: detail: ELF: memory: 0x100000 -> 0x115d48
> xc: detail: ELF: note: PHYS32_ENTRY = 0x100020
> xc: detail: ELF: Found PVH image
> xc: detail: ELF: VIRT_BASE unset, using 0
> xc: detail: ELF_PADDR_OFFSET unset, using 0
> xc: detail: ELF: addresses:
> xc: detail:     virt_base        = 0x0
> xc: detail:     elf_paddr_offset = 0x0
> xc: detail:     virt_offset      = 0x0
> xc: detail:     virt_kstart      = 0x100000
> xc: detail:     virt_kend        = 0x115d48
> xc: detail:     virt_entry       = 0x1001e0
> xc: detail:     p2m_base         = 0xffffffffffffffff
> domainbuilder: detail: xc_dom_parse_elf_kernel: hvm-3.0-x86_32: 0x100000 -> 0x115d48
> domainbuilder: detail: xc_dom_mem_init: mem 256 MB, pages 0x10000 pages, 4k each
> domainbuilder: detail: xc_dom_mem_init: 0x10000 pages
> domainbuilder: detail: xc_dom_boot_mem_init: called
> domainbuilder: detail: range: start=0x0 end=0x10000400
> domainbuilder: detail: xc_dom_malloc            : 512 kB
> xc: detail: PHYSICAL MEMORY ALLOCATION:
> xc: detail:   4KB PAGES: 0x0000000000000c00
> xc: detail:   2MB PAGES: 0x000000000000007a
> xc: detail:   1GB PAGES: 0x0000000000000000
> domainbuilder: detail: xc_dom_build_image: called
> domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x100+0x16 at 0x7f5609445000
> domainbuilder: detail: xc_dom_alloc_segment:   kernel       : 0x100000 -> 0x116000  (pfn 0x100 + 0x16 pages)
> xc: detail: ELF: phdr 1 at 0x7f5609445000 -> 0x7f560944b264
> xc: detail: ELF: phdr 2 at 0x7f560944c000 -> 0x7f5609453120
> domainbuilder: detail: xc_dom_load_acpi: 64 bytes at address fc008000
> domainbuilder: detail: xc_dom_load_acpi: 4096 bytes at address fc000000
> domainbuilder: detail: xc_dom_load_acpi: 28672 bytes at address fc001000
> domainbuilder: detail: xc_dom_pfn_to_ptr_retcount: domU mapping: pfn 0x116+0x1 at 0x7f5609ace000
> domainbuilder: detail: xc_dom_alloc_segment:   HVM start info : 0x116000 -> 0x117000  (pfn 0x116 + 0x1 pages)
> domainbuilder: detail: alloc_pgtables_hvm: doing nothing
> domainbuilder: detail: xc_dom_build_image  : virt_alloc_end : 0x117000
> domainbuilder: detail: xc_dom_build_image  : virt_pgtab_end : 0x0
> domainbuilder: detail: xc_dom_boot_image: called
> domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_64
> domainbuilder: detail: xc_dom_compat_check: supported guest type: xen-3.0-x86_32p
> domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32 <= matches
> domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_32p
> domainbuilder: detail: xc_dom_compat_check: supported guest type: hvm-3.0-x86_64
> domainbuilder: detail: domain builder memory footprint
> domainbuilder: detail:    allocated
> domainbuilder: detail:       malloc             : 515 kB
> domainbuilder: detail:       anon mmap          : 0 bytes
> domainbuilder: detail:    mapped
> domainbuilder: detail:       file mmap          : 191 kB
> domainbuilder: detail:       domU mmap          : 92 kB
> domainbuilder: detail: vcpu_hvm: called
> domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff000
> domainbuilder: detail: xc_dom_gnttab_hvm_seed: called, pfn=0xff001
> domainbuilder: detail: xc_dom_release: called
> libxl: debug: libxl_event.c:2194:libxl__ao_progress_report: ao 0x5593c42e7e30: progress report: callback queued aop=0x5593c42fea10
> libxl: debug: libxl_event.c:1869:libxl__ao_complete: ao 0x5593c42e7e30: complete, rc=0
> libxl: debug: libxl_event.c:1404:egc_run_callbacks: ao 0x5593c42e7e30: progress report: callback aop=0x5593c42fea10
> libxl: debug: libxl_create.c:1708:do_domain_create: Domain 0:ao 0x5593c42e7e30: inprogress: poller=0x5593c42e7670, flags=ic
> libxl: debug: libxl_event.c:1838:libxl__ao__destroy: ao 0x5593c42e7e30: destroy
> xencall:buffer: debug: total allocations:233 total releases:233
> xencall:buffer: debug: current allocations:0 maximum allocations:3
> xencall:buffer: debug: cache current size:3
> xencall:buffer: debug: cache hits:215 misses:3 toobig:15
> xencall:buffer: debug: total allocations:0 total releases:0
> xencall:buffer: debug: current allocations:0 maximum allocations:0
> xencall:buffer: debug: cache current size:0
> xencall:buffer: debug: cache hits:0 misses:0 toobig:0

That looks fine AFAICT. The program headers seem to be correctly
identified and sized.

> > 
> > > 
> > > If I make this simple change:
> > > 
> > > --- a/bindings/xen/boot.S
> > > +++ b/bindings/xen/boot.S
> > > @@ -32,7 +32,7 @@
> > >  #define ENTRY(x) .text; .globl x; .type x,%function; x:
> > >  #define END(x)   .size x, . - x
> > > 
> > > -.section .note.solo5.xen
> > > +.section .note.solo5.xen, "a", @note
> > > 
> > >         .align  4
> > >         .long   4
> > > 
> > > then I get the expected output from readelf -lW, and I can get as far as
> > > the C _start() with no issues!
> > > 
> > > FWIW, here's the diff of readelf -lW before/after:
> > > 
> > > --- before	2020-05-26 17:36:46.117885855 +0200
> > > +++ after	2020-05-26 17:38:07.090508322 +0200
> > > @@ -8,9 +8,9 @@
> > >    INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
> > >        [Requesting program interpreter: /nonexistent/solo5/]
> > >    LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00615c 0x00615c R E 0x1000
> > > -  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed28 RW  0x1000
> > > +  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x006120 0x00dd28 RW  0x1000
> > 
> > This seems suspicious: there's a change in the size of the LOAD
> > segment, but your change to the note type should not affect the LOAD
> > segment?
> 
> Indeed.

You could try to disassemble the text sections with objdump -d (or -D
for all sections) and see if there's a difference between the two
versions, but having solved the issue, maybe you just want to move
on.
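For reference, the records inside an "a"/@note section follow the generic
ELF note layout: name and descriptor lengths, a type word, then the
4-byte-padded name and descriptor. A minimal C sketch of that layout and a
parser for one record, assuming a little-endian build and the conventional
"Xen" owner name used for PHYS32_ENTRY-style notes:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Generic ELF note record header (same layout as Elf64_Nhdr/Elf32_Nhdr).
 * The ".long 4" in boot.S above is the namesz field of such a record. */
struct elf_note {
    uint32_t namesz; /* length of the owner name, including the NUL */
    uint32_t descsz; /* length of the descriptor that follows the name */
    uint32_t type;   /* note type number */
};

/* Parse one note record from buf: return a pointer to its descriptor,
 * and report the type and owner name.  The name (and descriptor) are
 * padded to 4-byte alignment, matching the ".align 4" in the assembly. */
static const void *parse_note(const uint8_t *buf, uint32_t *type,
                              const char **name)
{
    struct elf_note n;

    memcpy(&n, buf, sizeof(n));            /* avoid unaligned access */
    *type = n.type;
    *name = (const char *)(buf + sizeof(n));
    return buf + sizeof(n) + ((n.namesz + 3) & ~3u);
}
```

Only notes that end up in a PT_NOTE program header are guaranteed to be
seen by loaders, which is presumably why the section needs the "a"
(SHF_ALLOC) flag and @note type: without them the section is plain
SHT_PROGBITS and the linker emits no PT_NOTE segment for it.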

Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 27 15:05:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:05:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxcw-000626-Oi; Wed, 27 May 2020 15:05:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ARaW=7J=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jdxcw-00061z-4P
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:05:54 +0000
X-Inumbo-ID: 8ee513b6-a02b-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8ee513b6-a02b-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 15:05:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:To:From:Date:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6c3/y3ONIrPCVAnbVGJRw/qLH1x2DrpPzwDDCf4/n1U=; b=jYroH60y9cMXFRr0Fe/uU9l0Xb
 M5d+GjmDIyQfj5wZrl/vLaj4iiiQlz6OnnmkMy0AKwVFaaT+8vm7JzKX+ARhpjSyslyF/RiU3VFoh
 Gu/Ag2sngTaM5bkxKrLn/lorePV+XXewpWIMuu5x0n9mpEsuCA7Q73xT/0Yo+i1n4RmE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jdxcr-0005vl-Qv; Wed, 27 May 2020 15:05:49 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jdxcr-0002MU-GF; Wed, 27 May 2020 15:05:49 +0000
Date: Wed, 27 May 2020 17:05:39 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Martin Lucina <martin@lucina.net>,
 =?utf-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>, anil@recoil.org,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 mirageos-devel@lists.xenproject.org, dave@recoil.org,
 xen-devel@lists.xenproject.org
Subject: Re: Xen PVH domU start-of-day VCPU state
Message-ID: <20200527150539.GB1195@Air-de-Roger>
References: <6a22e477-c9e7-f0d7-6cb1-615137a778be@citrix.com>
 <20200526085221.GB5942@nodbug.lucina.net>
 <20200526093421.GA38408@Air-de-Roger>
 <20200526100337.GB38408@Air-de-Roger>
 <20200526101203.GE5942@nodbug.lucina.net>
 <20200526154224.GC25283@nodbug.lucina.net>
 <20200526163021.GE38408@Air-de-Roger>
 <20200527080008.GC4788@nodbug.lucina.net>
 <20200527143644.GA1195@Air-de-Roger>
 <20200527145702.GE4788@nodbug.lucina.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200527145702.GE4788@nodbug.lucina.net>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 27, 2020 at 04:57:02PM +0200, Martin Lucina wrote:
> On Wednesday, 27.05.2020 at 16:41, Roger Pau Monné wrote:
> > > > > If I make this simple change:
> > > > > 
> > > > > --- a/bindings/xen/boot.S
> > > > > +++ b/bindings/xen/boot.S
> > > > > @@ -32,7 +32,7 @@
> > > > >  #define ENTRY(x) .text; .globl x; .type x,%function; x:
> > > > >  #define END(x)   .size x, . - x
> > > > > 
> > > > > -.section .note.solo5.xen
> > > > > +.section .note.solo5.xen, "a", @note
> > > > > 
> > > > >         .align  4
> > > > >         .long   4
> > > > > 
> > > > > then I get the expected output from readelf -lW, and I can get as far as
> > > > > the C _start() with no issues!
> > > > > 
> > > > > FWIW, here's the diff of readelf -lW before/after:
> > > > > 
> > > > > --- before	2020-05-26 17:36:46.117885855 +0200
> > > > > +++ after	2020-05-26 17:38:07.090508322 +0200
> > > > > @@ -8,9 +8,9 @@
> > > > >    INTERP         0x001000 0x0000000000100000 0x0000000000100000 0x000018 0x000018 R   0x8
> > > > >        [Requesting program interpreter: /nonexistent/solo5/]
> > > > >    LOAD           0x001000 0x0000000000100000 0x0000000000100000 0x00615c 0x00615c R E 0x1000
> > > > > -  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x007120 0x00ed28 RW  0x1000
> > > > > +  LOAD           0x008000 0x0000000000107000 0x0000000000107000 0x006120 0x00dd28 RW  0x1000
> > > > 
> > > > This seems suspicious: there's a change in the size of the LOAD
> > > > segment, but your change to the note type should not affect the LOAD
> > > > segment?
> > > 
> > > Indeed.
> > 
> > You could try to disassemble the text sections with objdump -d (or -D
> > for all sections) and see if there's a difference between the two
> > versions, but having solved the issue, maybe you just want to move
> > on.
> 
> I have moved on, making good progress:
> 
>     domainbuilder: detail: xc_dom_release: called
>     Hello, world!
>     Solo5: Xen hvm_start_info @0x0000000000119000
>     Solo5: magic=0x336ec578 version=1
>     Solo5: cmdline_paddr=0x0
>     Solo5: memmap_paddr=0x119878 entries=5
>     Solo5: memmap[0] = { 0x0, 0x10000400, 1 }
>     Solo5: mem_size=0x10000000
>                 |      ___|
>       __|  _ \  |  _ \ __ \
>     \__ \ (   | | (   |  ) |
>     ____/\___/ _|\___/____/
>     Solo5: Bindings version v0.6.5-4-g57724f8-dirty
>     Solo5: Memory map: 256 MB addressable:
>     Solo5:   reserved @ (0x0 - 0xfffff)
>     Solo5:       text @ (0x100000 - 0x104fff)
>     Solo5:     rodata @ (0x105000 - 0x106fff)
>     Solo5:       data @ (0x107000 - 0x118fff)
>     Solo5:       heap >= 0x119000 < stack < 0x10000000
>     Solo5: Clock source: KVM paravirtualized clock
>     Solo5: trap: type=#GP ec=0x0 rip=0x103a96 rsp=0xfffff90 rflags=0x2
>     Solo5: ABORT: cpu_x86_64.c:181: Fatal trap
>     Solo5: Halted
> 
> (The #GP is due to the timekeeping code not yet having been ported to Xen).
> 
> Random question: With memory="256" in the xl config, why is the size of the
> first XEN_HVM_MEMMAP_TYPE_RAM memmap entry not a multiple of PAGE_SIZE? I
> had to align it down, since we put the stack at the top of RAM. 0x10000400
> seems... odd.

IIRC some memory is stolen by the domain builder to place things like
the ACPI tables, and the end is not aligned to a page-size boundary
because that's not mandatory.
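The align-down itself is a one-liner; a sketch, assuming the usual 4 KiB
PAGE_SIZE:

```c
#define PAGE_SIZE 0x1000UL

/* Round an address down to the start of its page, e.g. for clamping the
 * end of the first RAM memmap entry before placing the stack there. */
static inline unsigned long page_align_down(unsigned long addr)
{
    return addr & ~(PAGE_SIZE - 1);
}
```

For instance, the 0x10000400 end address quoted above aligns down to
0x10000000.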

Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 27 15:12:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:12:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxj0-0006uu-Fp; Wed, 27 May 2020 15:12:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jdxiz-0006up-9M
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:12:09 +0000
X-Inumbo-ID: 6d94807e-a02c-11ea-a75c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6d94807e-a02c-11ea-a75c-12813bfff9fa;
 Wed, 27 May 2020 15:12:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jkiExLgVgn2bdmzCbOU36pm6jIiWk55dfC9RRH0phs8=; b=p5zhjGo1qwtdKhvgY6ZcD6iwd
 3HOsi4NDFIrIkRFDYI9937N2BLzMM7D/6rRsAXlef9OkzSJrML6fHmtUo6P2VlzKxzyFijYTR+flZ
 FZlGVvQQW9ceXiCT9u7h57sf9PkewpZy+XOZ/0OnLUbuPArIzWizrx6RcL4D3YrR3HJbo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdxiw-00065M-DT; Wed, 27 May 2020 15:12:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jdxiv-0000L7-T9; Wed, 27 May 2020 15:12:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jdxiv-0000KC-SO; Wed, 27 May 2020 15:12:05 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150391-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150391: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=ddc760832fa8cf5e93b9d9e6e854a5114ac63510
X-Osstest-Versions-That: qemuu=8f72c75cfc9b3c84a9b5e7a58ee5e471cb2f19c8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 15:12:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150391 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150391/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150386
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150386
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150386
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150386
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150386
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150386
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                ddc760832fa8cf5e93b9d9e6e854a5114ac63510
baseline version:
 qemuu                8f72c75cfc9b3c84a9b5e7a58ee5e471cb2f19c8

Last test of basis   150386  2020-05-26 13:06:33 Z    1 days
Testing same since   150391  2020-05-27 01:37:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dan Robertson <dan@dlrobertson.com>
  Greg Kurz <groug@kaod.org>
  Peter Maydell <peter.maydell@linaro.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   8f72c75cfc..ddc760832f  ddc760832fa8cf5e93b9d9e6e854a5114ac63510 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Wed May 27 15:17:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:17:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxnw-00076a-8N; Wed, 27 May 2020 15:17:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ARaW=7J=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jdxnu-00076V-SQ
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:17:14 +0000
X-Inumbo-ID: 24df156e-a02d-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24df156e-a02d-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 15:17:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=pdLVhj5eV1gR37ueruLD3a5quNY9FX/AGMwp0eKrLTQ=; b=nkI73BHwPfHcxUIxAogRiBCRyl
 ugRwNkDUeQPgd4iGDGqR6YFSOe87tBon5w7uWL9ZTkvfdTnafVFPVrd3OrYjIhVshBPbEIRz+2gbd
 WVqc489dy6+aSzUS/xD7hbyBLeOQQe2EoAhFrHAIwerKuJ6SFNtYGVPp2jCgQdoxQmVY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jdxns-0006By-Aq; Wed, 27 May 2020 15:17:12 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jdxnr-00033F-W4; Wed, 27 May 2020 15:17:12 +0000
Date: Wed, 27 May 2020 17:17:04 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH] x86: refine guest_mode()
Message-ID: <20200527151319.GC1195@Air-de-Roger>
References: <16939982-3ccc-f848-0694-61b154dca89a@citrix.com>
 <5ce12c86-c894-4a2c-9fa6-1c2a6007ca28@suse.com>
 <20200518145101.GV54375@Air-de-Roger>
 <d58ec87e-a871-2e65-4a69-b73a168a6afa@suse.com>
 <20200520151326.GM54375@Air-de-Roger>
 <38d546f9-8043-8d94-8298-8fd035078a8a@suse.com>
 <20200522104844.GY54375@Air-de-Roger>
 <a31bd761-54eb-56b8-7c60-93202d26e7d0@suse.com>
 <20200526105652.GD38408@Air-de-Roger>
 <dfa3604a-d53e-ae0c-fe24-099b135b308e@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <dfa3604a-d53e-ae0c-fe24-099b135b308e@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 26, 2020 at 03:55:39PM +0200, Jan Beulich wrote:
> On 26.05.2020 12:56, Roger Pau Monné wrote:
> > On Fri, May 22, 2020 at 02:00:22PM +0200, Jan Beulich wrote:
> >> On 22.05.2020 12:48, Roger Pau Monné wrote:
> >>> On Fri, May 22, 2020 at 11:52:42AM +0200, Jan Beulich wrote:
> >>>> On 20.05.2020 17:13, Roger Pau Monné wrote:
> >>>>> OK, so I think I'm starting to understand this all. Sorry it's taken
> >>>>> me so long. So it's my understanding that diff != 0 can only happen in
> >>>>> Xen context, or when in an IST that has a different stack (ie: MCE, NMI
> >>>>> or DF according to current.h) and running in PV mode?
> >>>>>
> >>>>> Wouldn't it then be fine to use (r)->cs & 3 to check we are in guest
> >>>>> mode if diff != 0? I see a lot of other places where cs & 3 is already
> >>>>> used to that effect AFAICT (like entry.S).
> >>>>
> >>>> Technically this would be correct afaics, but the idea with all this
> >>>> is (or should I say "looks to be"?) to have the checks be as tight as
> >>>> possible, to make sure we don't mistakenly consider something "guest
> >>>> mode" which really isn't. IOW your suggestion would be fine with me
> >>>> if we could exclude bugs anywhere in the code. But since this isn't
> >>>> realistic, I consider your suggestion to be relaxing things by too
> >>>> much.
> >>>
> >>> OK, so I take that (long term) we might also want to change the cs & 3
> >>> checks from entry.S to check against __HYPERVISOR_CS explicitly?
> >>
> >> I didn't think so, no (not the least because of there not being any
> >> guarantee afaik that EFI runtime calls couldn't play with segment
> >> registers; they shouldn't, yes, but there's a lot of other "should"
> >> many don't obey). Those are guaranteed PV-only code paths. The
> >> main issue here is that ->cs cannot be relied upon when a frame
> >> points at HVM state.
> > 
> > Well, if it points at HVM state it could equally have __HYPERVISOR_CS
> > set by the guest.
> 
> No, that's not the point. ->cs will never be __HYPERVISOR_CS in that
> case, as we never store the guest's CS selector there. Instead
> hvm_invalidate_regs_fields() clobbers the field in debug builds (with
> a value resulting in RPL 3), but zero (i.e. a value implying RPL 0)
> remains in place in release builds.
> 
> Instead of doing this clobbering in debug mode only, we could - as I
> think I did suggest before - clobber always, but just once during vCPU
> init rather than on every VM exit. In debug mode we could then instead
> check that the dummy values didn't themselves get clobbered.

It would make sense to clobber it always with a value that has RPL >
0, so that it's consistent with PV state.

> > Will things work anyway if you get here from an exception generated by
> > EFI code that has changed the code segment? You are going to hit the
> > assert at least, since diff will be != 0 and cs != __HYPERVISOR_CS?
> 
> What would guarantee the latter? Additionally they could in principle
> also have switched stacks then, i.e. diff may then also be larger than
> PRIMARY_STACK_SIZE, in which case - with the patch in place - the
> assertion is bypassed altogether.
> 
> > I would prefer to keep things coherent by either using cs & 3 or
> > cs == __HYPERVISOR_CS everywhere if possible, as I'm still unsure of
> > the benefit of using __HYPERVISOR_CS.
> 
> See above.

Well, I think it's an improvement overall, as it allows us to properly
handle the case where a PV guest could manage to trigger an exception
that uses a stack different from the primary one.

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Roger.
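The two checks being compared in this thread, the relaxed RPL test (cs & 3) and the exact selector comparison, can be sketched as follows. This is a stand-alone illustration; the selector values are hypothetical stand-ins, not Xen's actual GDT layout:

```c
#include <stdint.h>

/* Hypothetical selector values for illustration only; Xen's real GDT
 * layout defines __HYPERVISOR_CS differently. */
#define HYPERVISOR_CS_EXAMPLE 0xe008u  /* RPL 0 */

/* Relaxed test: any non-zero RPL (low two selector bits) is taken as
 * guest mode. */
static int guest_mode_by_rpl(uint16_t cs)
{
    return (cs & 3) != 0;
}

/* Tight test: only an exact match with the hypervisor selector is
 * treated as Xen context; everything else is assumed to be a guest. */
static int guest_mode_by_exact_cs(uint16_t cs)
{
    return cs != HYPERVISOR_CS_EXAMPLE;
}
```

Note how the two tests disagree on a zeroed ->cs (RPL 0), which is the release-build HVM frame case discussed above: the RPL test calls it "not guest", the exact-selector test calls it "guest".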


From xen-devel-bounces@lists.xenproject.org Wed May 27 15:19:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:19:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxpr-0007DS-Kh; Wed, 27 May 2020 15:19:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=NSSa=7J=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jdxpp-0007DN-R9
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:19:13 +0000
X-Inumbo-ID: 6b4cdbb2-a02d-11ea-a75c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b4cdbb2-a02d-11ea-a75c-12813bfff9fa;
 Wed, 27 May 2020 15:19:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PorrzRqCInOxYvt20+7xogE6y1SnTT80lZlH/XZ9BsI=; b=kuxbgbMWLwcnM8rlk8O0vGPYCT
 O1P4cE/1vUPQMoDot1CMPKKm5+9v6aTuu8gKp3qrnkiZ/yUkTphJks/14yKeNM9Bch+1c2sj+h9Jx
 cgzrRGBJvMOwg6xQzuhbctlHYq1yn1y54Vum6Qh9mn8wMwVy0mc5tfBba6JJosONbc5Q=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jdxpn-0006EC-4J; Wed, 27 May 2020 15:19:11 +0000
Received: from 54-240-197-232.amazon.com ([54.240.197.232]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jdxpm-00036J-Qi; Wed, 27 May 2020 15:19:11 +0000
Message-ID: <662f11745542fa3f65f14bef76961837271251fc.camel@xen.org>
Subject: Re: [PATCH v6 13/15] x86/mm: drop old page table APIs
From: Hongyan Xia <hx242@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Date: Wed, 27 May 2020 16:19:08 +0100
In-Reply-To: <be3d3dd0-6001-41a5-7390-44dc8c327d8f@suse.com>
References: <cover.1587735799.git.hongyxia@amazon.com>
 <d6a642544c5ce0b975cdab8ad054f7a348f17c8d.1587735799.git.hongyxia@amazon.com>
 <be3d3dd0-6001-41a5-7390-44dc8c327d8f@suse.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 2020-05-20 at 12:09 +0200, Jan Beulich wrote:
> On 24.04.2020 16:09, Hongyan Xia wrote:
> > --- a/xen/arch/x86/smpboot.c
> > +++ b/xen/arch/x86/smpboot.c
> > @@ -815,7 +815,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
> >      if ( !opt_xpti_hwdom && !opt_xpti_domu )
> >          return 0;
> >  
> > -    rpt = alloc_xen_pagetable();
> > +    rpt = alloc_xenheap_page();
> 
> So the idea of not using alloc_domheap_page() +
> map_domain_page_global()
> here is that in the long run alloc_xenheap_page() will resolve to
> just
> this? If so, while I'd have preferred the greater flexibility until
> then,
> this is fair enough, i.e.
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

alloc_xenheap_page() has one advantage: the fast PA->VA lookup, which
is currently required by the restore_all_guest logic. If we change how
restore_all_guest works or gain fast PA->VA lookup for globally mapped
pages, then the xenheap could probably just be removed.

Hongyan
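The fast PA->VA lookup referred to here is, in essence, direct-map arithmetic: xenheap pages sit in an always-mapped linear range, so the conversion is a constant offset rather than a page-table walk or a mapping call. A simplified user-space sketch (the base constant is purely illustrative, not Xen's actual directmap base):

```c
#include <stdint.h>

/* Illustrative constant; Xen's real directmap base is different. */
#define DIRECTMAP_BASE_EXAMPLE 0x1000000UL

/* Xenheap-style PA->VA: plain addition, valid at any time because the
 * range is permanently mapped. */
static inline uintptr_t pa_to_va(uintptr_t pa)
{
    return DIRECTMAP_BASE_EXAMPLE + pa;
}

/* ...and the reverse is plain subtraction.  Domheap pages, by
 * contrast, have no VA until something like map_domain_page() runs. */
static inline uintptr_t va_to_pa(uintptr_t va)
{
    return va - DIRECTMAP_BASE_EXAMPLE;
}
```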



From xen-devel-bounces@lists.xenproject.org Wed May 27 15:25:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdxw5-00084P-D8; Wed, 27 May 2020 15:25:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdxw4-00084K-EY
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:25:40 +0000
X-Inumbo-ID: 5210b5f0-a02e-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5210b5f0-a02e-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 15:25:39 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 969A0B1FB;
 Wed, 27 May 2020 15:25:41 +0000 (UTC)
Subject: Re: -mno-tls-direct-seg-refs support in glibc for i386 PV Xen
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <87mu5til8a.fsf@oldenburg2.str.redhat.com>
 <551ceac2-9cf6-00fd-95a6-a5b9fea6a383@citrix.com>
 <69bdaedf-c403-a77d-8ab1-12feffa15494@suse.com>
 <779861db-f9eb-634e-3d28-e113fcc37846@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2eb20e19-ae05-e2ae-431b-6ff1f85d4490@suse.com>
Date: Wed, 27 May 2020 17:25:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <779861db-f9eb-634e-3d28-e113fcc37846@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Florian Weimer <fweimer@redhat.com>, xen-devel@lists.xenproject.org,
 libc-alpha@sourceware.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 16:40, Andrew Cooper wrote:
> On 27/05/2020 15:00, Jan Beulich wrote:
>> You can have the descriptor map
>> only the [base,XenBase] part or the [0,base) one. Hence Xen, from its
>> #GP handler, flipped the descriptor between the two options depending
>> on whether the current access was to the positive or negative part of
>> the TLS seg. (An in-practice use of expand-down segments, as you'll
>> surely notice.)
> 
> I've found gpf_emulate_4gb() in source history.  It was specific to
> 32bit builds of Xen (now long gone).
> 
> What I can't figure out is why this is unnecessary in 64bit builds of
> Xen.  We still enforce reduced segment limits on the guest's descriptors.

Do we? I can't find such - neither boot_compat_gdt[] has any signs
of it, nor check_descriptor(). And we don't have a need to: The
entire range is used for the r/o M2P, i.e. protection is enforced
at the paging layer. 32-bit Xen necessarily had r/w as well as
executable sub-ranges there.

Jan
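The expand-down behaviour referenced in this thread can be expressed as a simple limit check: a normal data segment permits offsets from 0 up to the limit, while an expand-down segment permits exactly the complementary range, which is what lets a #GP handler flip one descriptor between covering the two halves. A minimal sketch (assumes byte-granular limits and 32-bit offsets; not Xen code):

```c
#include <stdint.h>

/* For a normal data segment, offsets 0..limit are valid.  For an
 * expand-down segment the valid offsets are limit+1..0xFFFFFFFF,
 * i.e. precisely the complement of the normal range. */
static int seg_access_ok(uint32_t offset, uint32_t limit, int expand_down)
{
    return expand_down ? (offset > limit) : (offset <= limit);
}
```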


From xen-devel-bounces@lists.xenproject.org Wed May 27 15:39:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:39:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdy93-0000i5-Pd; Wed, 27 May 2020 15:39:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PV0l=7J=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jdy92-0000hv-2F
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:39:04 +0000
X-Inumbo-ID: 307b211d-a030-11ea-a760-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 307b211d-a030-11ea-a760-12813bfff9fa;
 Wed, 27 May 2020 15:39:03 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:37566
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jdy8y-000MCH-KJ (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Wed, 27 May 2020 16:39:00 +0100
Subject: Re: [PATCH] x86/boot: Fix load_system_tables() to be NMI/#MC-safe
To: Jan Beulich <jbeulich@suse.com>
References: <20200527130607.32069-1-andrew.cooper3@citrix.com>
 <50f66504-ab7b-2f3e-1695-003ad69ae37a@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <e9efb665-2a16-4234-fb5b-4da391cc0572@citrix.com>
Date: Wed, 27 May 2020 16:38:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <50f66504-ab7b-2f3e-1695-003ad69ae37a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/05/2020 14:19, Jan Beulich wrote:
> On 27.05.2020 15:06, Andrew Cooper wrote:
>> @@ -720,30 +721,26 @@ void load_system_tables(void)
>>  		.limit = (IDT_ENTRIES * sizeof(idt_entry_t)) - 1,
>>  	};
>>  
>> -	*tss = (struct tss64){
>> -		/* Main stack for interrupts/exceptions. */
>> -		.rsp0 = stack_bottom,
>> -
>> -		/* Ring 1 and 2 stacks poisoned. */
>> -		.rsp1 = 0x8600111111111111ul,
>> -		.rsp2 = 0x8600111111111111ul,
>> -
>> -		/*
>> -		 * MCE, NMI and Double Fault handlers get their own stacks.
>> -		 * All others poisoned.
>> -		 */
>> -		.ist = {
>> -			[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE,
>> -			[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE,
>> -			[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE,
>> -			[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE,
>> -
>> -			[IST_MAX ... ARRAY_SIZE(tss->ist) - 1] =
>> -				0x8600111111111111ul,
>> -		},
>> -
>> -		.bitmap = IOBMP_INVALID_OFFSET,
>> -	};
>> +	/*
>> +	 * Set up the TSS.  Warning - may be live, and the NMI/#MC must remain
>> +	 * valid on every instruction boundary.  (Note: these are all
>> +	 * semantically ACCESS_ONCE() due to tss's volatile qualifier.)
>> +	 *
>> +	 * rsp0 refers to the primary stack.  #MC, #DF, NMI and #DB handlers
>> +	 * each get their own stacks.  No IO Bitmap.
>> +	 */
>> +	tss->rsp0 = stack_bottom;
>> +	tss->ist[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE;
>> +	tss->ist[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE;
>> +	tss->ist[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE;
>> +	tss->ist[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE;
>> +	tss->bitmap = IOBMP_INVALID_OFFSET;
>> +
>> +	/* All other stack pointers poisoned. */
>> +	for ( i = IST_MAX; i < ARRAY_SIZE(tss->ist); ++i )
>> +		tss->ist[i] = 0x8600111111111111ul;
>> +	tss->rsp1 = 0x8600111111111111ul;
>> +	tss->rsp2 = 0x8600111111111111ul;
> ACCESS_ONCE() unfortunately only has one of the two needed effects:
> It guarantees that each memory location gets accessed exactly once
> (which I assume can also be had with just the volatile addition,
> but without the moving away from using an initializer), but it does
> not guarantee single-insn accesses.

Linux's memory-barriers.txt disagrees, and specifically gives an example
with a misaligned int (vs two shorts) and the use of a volatile cast (by
way of {READ,WRITE}_ONCE()) to prevent load/store tearing, as the memory
location is of a size which can fit in a single access.

I'm fairly sure we're safe here.

>  I consider this in particular
> relevant here because all of the 64-bit fields are misaligned. By
> doing it like you do, we're setting us up to have to re-do this yet
> again in a couple of years time (presumably using write_atomic()
> instead then).
>
> Nevertheless it is a clear improvement, so if you want to leave it
> like this
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks,

~Andrew
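The construct under discussion can be sketched as follows, in the style of Linux's {READ,WRITE}_ONCE() (a generic illustration using the GCC __typeof__ extension, not Xen's actual definitions). It constrains the compiler to a single full-size access; whether hardware tears a misaligned access is the separate, architecture-level question raised above.

```c
#include <stdint.h>

/* The volatile cast forces the compiler to emit exactly one access of
 * the object's full size, rather than splitting, repeating, or caching
 * it across the statement. */
#define WRITE_ONCE(x, val) (*(volatile __typeof__(x) *)&(x) = (val))
#define READ_ONCE(x)       (*(volatile __typeof__(x) *)&(x))
```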


From xen-devel-bounces@lists.xenproject.org Wed May 27 15:41:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 15:41:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdyBD-0001Tw-6C; Wed, 27 May 2020 15:41:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Mtku=7J=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jdyBB-0001Tp-Kr
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 15:41:17 +0000
X-Inumbo-ID: 8066da2c-a030-11ea-a760-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8066da2c-a030-11ea-a760-12813bfff9fa;
 Wed, 27 May 2020 15:41:16 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: CLmjROcVQQK0tn0MKoOfTvUKHIX4tqItm1DRqIcOLCRMrh7uoUXcGWom/0YaSYZjr75L5VGHkK
 n+esSnZraN0kir/DTgImwLUoc48LeHfcmxuVFNVGBlC4CiTEHFrtw0YchCW2p9wEKCdHCxbYYn
 2Gju2dbJdGSc5q8rO2UUFsSUAIvJ8KCZx+UF9/h9XORd32Q7SccA/JxXh4BayEA7q5B0V1i4/t
 +uxdpXQ3oKYwpf1DRsuuJYD1lgaqBDbQ2cdz1MDA6qIn2gYOcpGrQVSctal3y43GVT0MI4cfTf
 Z/A=
X-SBRS: 2.7
X-MesageID: 18864744
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,441,1583211600"; d="scan'208";a="18864744"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24270.35349.838484.116865@mariner.uk.xensource.com>
Date: Wed, 27 May 2020 16:41:09 +0100
To: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Xen XSM/FLASK policy, grub defaults, etc.
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, cjwatson@debian.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The Xen tools build system builds a FLASK policy by default.  It does
this even if the hypervisor build for XSM is disabled.

I recently sent patches upstream to grub to support XSM in
update-grub.  update-grub is the program which examines your /boot and
generates appropriate bootloader entries.  My merge request
  https://salsa.debian.org/grub-team/grub/-/merge_requests/18
finds XSM policy files, and when they are found, generates "XSM
enabled" bootloader entries. [1]

The result of these two things together is that a default build of
grub will result in these "XSM enabled" bootloader entries.  In
practice I think these entries will boot because everything ignores
the additional XSM policy file (!) and Xen ignores the
"flask=enforcing" option (!!)

This is not particularly good.  Offering people an "XSM enabled"
option which does nothing is poor because they might think they have
the extra security when actually significantly more steps are needed.
But there doesn't appear to be any way for update-grub to tell whether
a particular hypervisor does support XSM or not.

I think the following changes would be good:

1. Xen should reject "flask=enforcing" if it is built without FLASK
support, rather than ignoring it.  This will ensure users are not
misled by these boot options since they will be broken.

2. Xen should disable the XSM policy build when FLASK is disabled.
This is unfortunately not so simple because the XSM policy build is a
tools option and FLASK is a Xen option and the configuration systems
are disjoint.  But at the very least a default build, which has no XSM
support, should not build an XSM policy file either.

3. Failing that, Xen should provide some other mechanism which would
enable something like update-grub to determine whether a particular
hypervisor can sensibly be run with a policy file and flask=enforcing.

Opinions?

Thanks,
Ian.

[1] osstest has been doing this approximately forever.  Due to
accidents of boot config ordering, these entries have not been used by
default.
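For point 3, one hypothetical mechanism would be for update-grub-like tooling to inspect a build configuration file distributed alongside the hypervisor, if one is available. The helper below is invented purely for illustration and is not an existing interface:

```c
#include <string.h>

/* Returns 1 if the given Kconfig-style text enables FLASK.  A disabled
 * option appears as "# CONFIG_XSM_FLASK is not set", which this
 * substring test correctly rejects. */
static int config_enables_flask(const char *config_text)
{
    return strstr(config_text, "CONFIG_XSM_FLASK=y") != NULL;
}
```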


From xen-devel-bounces@lists.xenproject.org Wed May 27 16:09:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 16:09:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdybw-00040d-CP; Wed, 27 May 2020 16:08:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=weHr=7J=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jdybu-00040Y-VL
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 16:08:54 +0000
X-Inumbo-ID: 5c3a9108-a034-11ea-9947-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c3a9108-a034-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 16:08:53 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ab1vbUrLt58hHPy86AVpIv4KbwTkL82ierfDBVPOjQhlKWj+usgWJnLuzlJimpA3Y7maq0moBI
 w9VozM3hH44pXHO5+Mjcl+C7u5Scn0AnLveJePrGwCgNABonoOzcpDJDq0CE5B9JesppDnoEBi
 dQq9dO2nHJZsAanrtfPy60wSCwOcKtmgnTh+4zWvRk8armB+BNTdF429mDk4g4eqn088tNjOku
 e66WS6Jr3MyWuW7q14pq9N6hQkP6LQa2pAyOI7p7Cx3n2xxAYAIRqUYhYdkhgWFDtIX4dkIU9S
 Gjs=
X-SBRS: 2.7
X-MesageID: 19312688
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,441,1583211600"; d="scan'208";a="19312688"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Topic: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Index: AQHWND0/xSJFUOR5YU28XikgbPuX46i7+DWA
Date: Wed, 27 May 2020 16:08:49 +0000
Message-ID: <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
In-Reply-To: <24270.35349.838484.116865@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <13D5DAE7DC7BCB43AD94801A4BC9B354@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


> On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
> 
> The Xen tools build system builds a FLASK policy by default.  It does
> this even if the hypervisor build for XSM is disabled.
> 
> I recently sent patches upstream to grub to support XSM in
> update-grub.  update-grub is the program which examines your /boot and
> generates appropriate bootloader entries.  My merge request
>  https://salsa.debian.org/grub-team/grub/-/merge_requests/18
> finds XSM policy files, and when theya are found, generates "XSM
> enabled" bootloader entries. [1]
> 
> The result of these two things together is that a default build of
> grub will result in these "XSM enabled" bootloader entries.  In
> practice I think these entries will boot because everything ignores
> the additional XSM policy file (!) and Xen ignores the
> "flask=enforcing" option (!!)
> 
> This is not particularly good.  Offering people an "XSM enabled"
> option which does nothing is poor because it might think they have the
> extra security but actually significantly more steps are needed.  But
> there doesn't appear to be any way for update-grub to tell whether a
> particular hypervisor does support XSM or not.
> 
> I think the following changes would be good:
> 
> 1. Xen should reject "flask=enforcing" if it is built without FLASK
> support, rather than ignoring it.  This will ensure users are not
> misled by these boot options since they will be broken.

+1

> 2. Xen should disable the XSM policy build when FLASK is disabled.
> This is unfortunately not so simple because the XSM policy build is a
> tools option and FLASK is a Xen option and the configuration systems
> are disjoint.  But at the very least a default build, which has no XSM
> support, should not build an XSM policy file either.

A simple thing to do here would be to have the flask policy controlled by a configure --flask option.  If neither --flask nor --no-flask is specified, we could maybe have configure also check the contents of xen/.config to see if CONFIG_XSM_FLASK is enabled?

> 3. Failing that, Xen should provide some other mechanism which would
> enable something like update-grub to determine whether a particular
> hypervisor can sensibly be run with a policy file and flask=enforcing.

So you want update-grub to check whether *the Xen binary it’s creating entries for* has FLASK enabled.  We generally include the Xen config used to build the hypervisor — could we have it check for CONFIG_XSM_FLASK?

 -George


From xen-devel-bounces@lists.xenproject.org Wed May 27 16:23:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 16:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdypf-0005fI-KX; Wed, 27 May 2020 16:23:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+pkZ=7J=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jdype-0005fD-Qk
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 16:23:06 +0000
X-Inumbo-ID: 57fbd334-a036-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57fbd334-a036-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 16:23:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8179CB260;
 Wed, 27 May 2020 16:23:07 +0000 (UTC)
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86: refine guest_mode()
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <369ec8ad-7d07-9c50-7458-fd68a2d717fb@suse.com>
Date: Wed, 27 May 2020 18:23:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The second of the assertions, as well as the macro's return value, has
been assuming we're on the primary stack. While for most IST exceptions
we switch back to the main stack when user mode was interrupted, for #DF
we intentionally never do, and hence a #DF actually triggering on a user
mode insn (which would still be a Xen bug) would in turn trigger this
assertion rather than cleanly logging state.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
---
v2: Correct description.
---
While we could go further and also assert we're on the correct IST
stack in an "else" to the "if()" added, I'm not fully convinced this
would be generally helpful. I'll be happy to adjust accordingly if
others think differently; at such a point though I think this should
then no longer be a macro.

--- unstable.orig/xen/include/asm-x86/regs.h	2020-01-22 20:03:18.000000000 +0100
+++ unstable/xen/include/asm-x86/regs.h	2020-04-27 10:02:40.009916762 +0200
@@ -10,9 +10,10 @@
     /* Frame pointer must point into current CPU stack. */                    \
     ASSERT(diff < STACK_SIZE);                                                \
     /* If not a guest frame, it must be a hypervisor frame. */                \
-    ASSERT((diff == 0) || (r->cs == __HYPERVISOR_CS));                        \
+    if ( diff < PRIMARY_STACK_SIZE )                                          \
+        ASSERT(!diff || ((r)->cs == __HYPERVISOR_CS));                        \
     /* Return TRUE if it's a guest frame. */                                  \
-    (diff == 0);                                                              \
+    !diff || ((r)->cs != __HYPERVISOR_CS);                                    \
 })
 
 #endif /* __X86_REGS_H__ */


From xen-devel-bounces@lists.xenproject.org Wed May 27 17:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdzwX-00030u-GA; Wed, 27 May 2020 17:34:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=baN0=7J=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jdzwW-00030n-Gh
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:34:16 +0000
X-Inumbo-ID: 4895dd22-a040-11ea-a76e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4895dd22-a040-11ea-a76e-12813bfff9fa;
 Wed, 27 May 2020 17:34:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wMlxPJkYWIuTwIy5RKdZAcuhjCIm+Q739FXn913ExJg=; b=A5hyp6XDyuvX11MxaeKZrXJAYR
 i52sUTflYDjlx5UO73u+/CfAFgwydHL/omMU8xaSf/k77lM8CkRJPF32wH4O84f70DmLEHu1DUDpj
 mN74navNwKJSgoKT97GAwYfGMHhgjUuNa3fTo0aPPotUl5syA8EG3q98fBonudMagBr8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwT-00015y-Ii; Wed, 27 May 2020 17:34:13 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwT-0003od-5l; Wed, 27 May 2020 17:34:13 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 1/5] xen/common: introduce a new framework for save/restore
 of 'domain' context
Date: Wed, 27 May 2020 18:34:03 +0100
Message-Id: <20200527173407.1398-2-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200527173407.1398-1-paul@xen.org>
References: <20200527173407.1398-1-paul@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

To allow enlightened HVM guests (i.e. those that have PV drivers) to be
migrated without their co-operation, it will be necessary to transfer 'PV'
state such as event channel state, grant entry state, etc.

Currently there is a framework (entered via the hvm_save/load() functions)
that allows a domain's 'HVM' (architectural) state to be transferred, but
'PV' state is also common to pure PV guests, and so this framework is not
really suitable.

This patch adds the new public header and low level implementation of a new
common framework, entered via the domain_save/load() functions. Subsequent
patches will introduce other parts of the framework, and code that will
make use of it within the current version of the libxc migration stream.

This patch also marks the HVM-only framework as deprecated in favour of the
new framework.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Julien Grall <julien@xen.org>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Wei Liu <wl@xen.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>

v4:
 - Addressed further comments from Jan

v3:
 - Addressed comments from Julien and Jan
 - Save handlers no longer need to state entry length up-front
 - Save handlers expected to deal with multiple instances internally
 - Entries are now auto-padded to 8 byte boundary

v2:
 - Allow multi-stage save/load to avoid the need to double-buffer
 - Get rid of the masks and add an 'ignore' flag instead
 - Create copy function union to preserve const save buffer
 - Deprecate HVM-only framework
---
 xen/common/Makefile                    |   1 +
 xen/common/save.c                      | 315 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/save.h              |  89 +++++++
 xen/include/xen/save.h                 | 170 +++++++++++++
 6 files changed, 585 insertions(+)
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h

diff --git a/xen/common/Makefile b/xen/common/Makefile
index e8cde65370..90553ba5d7 100644
--- a/xen/common/Makefile
+++ b/xen/common/Makefile
@@ -37,6 +37,7 @@ obj-y += radix-tree.o
 obj-y += rbtree.o
 obj-y += rcupdate.o
 obj-y += rwlock.o
+obj-y += save.o
 obj-y += shutdown.o
 obj-y += softirq.o
 obj-y += sort.o
diff --git a/xen/common/save.c b/xen/common/save.c
new file mode 100644
index 0000000000..ceec129b3f
--- /dev/null
+++ b/xen/common/save.c
@@ -0,0 +1,315 @@
+/*
+ * save.c: Save and restore PV guest state common to all domain types.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/compile.h>
+#include <xen/save.h>
+
+struct domain_context {
+    struct domain *domain;
+    const char *name; /* for logging purposes */
+    struct domain_save_descriptor desc;
+    size_t len; /* for internal accounting */
+    union {
+        const struct domain_save_ops *save;
+        const struct domain_load_ops *load;
+    } ops;
+    void *priv;
+};
+
+static struct {
+    const char *name;
+    domain_save_handler save;
+    domain_load_handler load;
+} handlers[DOMAIN_SAVE_CODE_MAX + 1];
+
+void __init domain_register_save_type(unsigned int typecode,
+                                      const char *name,
+                                      domain_save_handler save,
+                                      domain_load_handler load)
+{
+    BUG_ON(typecode >= ARRAY_SIZE(handlers));
+
+    ASSERT(!handlers[typecode].save);
+    ASSERT(!handlers[typecode].load);
+
+    handlers[typecode].name = name;
+    handlers[typecode].save = save;
+    handlers[typecode].load = load;
+}
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int instance)
+{
+    int rc;
+
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+    ASSERT(!c->desc.length); /* Should always be zero during domain_save() */
+    ASSERT(!c->len); /* Verify domain_save_end() was called */
+
+    rc = c->ops.save->begin(c->priv, &c->desc);
+    if ( rc )
+        return rc;
+
+    return 0;
+}
+
+int domain_save_data(struct domain_context *c, const void *src, size_t len)
+{
+    int rc = c->ops.save->append(c->priv, src, len);
+
+    if ( !rc )
+        c->len += len;
+
+    return rc;
+}
+
+#define DOMAIN_SAVE_ALIGN 8
+
+int domain_save_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    size_t len = ROUNDUP(c->len, DOMAIN_SAVE_ALIGN) - c->len; /* padding */
+    int rc;
+
+    if ( len )
+    {
+        static const uint8_t pad[DOMAIN_SAVE_ALIGN] = {};
+
+        rc = domain_save_data(c, pad, len);
+
+        if ( rc )
+            return rc;
+    }
+    ASSERT(IS_ALIGNED(c->len, DOMAIN_SAVE_ALIGN));
+
+    if ( c->name )
+        gdprintk(XENLOG_INFO, "%pd save: %s[%u] +%zu (-%zu)\n", d, c->name,
+                 c->desc.instance, c->len, len);
+
+    rc = c->ops.save->end(c->priv, c->len);
+    c->len = 0;
+
+    return rc;
+}
+
+int domain_save(struct domain *d, const struct domain_save_ops *ops,
+                void *priv, bool dry_run)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.save = ops,
+        .priv = priv,
+    };
+    static const struct domain_save_header h = {
+        .magic = DOMAIN_SAVE_MAGIC,
+        .xen_major = XEN_VERSION,
+        .xen_minor = XEN_SUBVERSION,
+        .version = DOMAIN_SAVE_VERSION,
+    };
+    const struct domain_save_end e = {};
+    unsigned int i;
+    int rc;
+
+    ASSERT(d != current->domain);
+    domain_pause(d);
+
+    c.name = !dry_run ? "HEADER" : NULL;
+    c.desc.typecode = DOMAIN_SAVE_CODE(HEADER);
+
+    rc = DOMAIN_SAVE_ENTRY(HEADER, &c, 0, &h, sizeof(h));
+    if ( rc )
+        goto out;
+
+    for ( i = 0; i < ARRAY_SIZE(handlers); i++ )
+    {
+        domain_save_handler save = handlers[i].save;
+
+        if ( !save )
+            continue;
+
+        c.name = !dry_run ? handlers[i].name : NULL;
+        memset(&c.desc, 0, sizeof(c.desc));
+        c.desc.typecode = i;
+
+        rc = save(d, &c, dry_run);
+        if ( rc )
+            goto out;
+    }
+
+    c.name = !dry_run ? "END" : NULL;
+    memset(&c.desc, 0, sizeof(c.desc));
+    c.desc.typecode = DOMAIN_SAVE_CODE(END);
+
+    rc = DOMAIN_SAVE_ENTRY(END, &c, 0, &e, sizeof(e));
+
+ out:
+    domain_unpause(d);
+
+    return rc;
+}
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int *instance)
+{
+    if ( typecode != c->desc.typecode )
+    {
+        ASSERT_UNREACHABLE();
+        return -EINVAL;
+    }
+
+    ASSERT(!c->len); /* Verify domain_load_end() was called */
+
+    *instance = c->desc.instance;
+
+    return 0;
+}
+
+int domain_load_data(struct domain_context *c, void *dst, size_t len)
+{
+    size_t copy_len = min_t(size_t, len, c->desc.length - c->len);
+    int rc;
+
+    c->len += copy_len;
+    ASSERT(c->len <= c->desc.length);
+
+    rc = copy_len ? c->ops.load->read(c->priv, dst, copy_len) : 0;
+    if ( rc )
+        return rc;
+
+    /* Zero extend if the entry is exhausted */
+    len -= copy_len;
+    if ( len )
+    {
+        dst += copy_len;
+        memset(dst, 0, len);
+    }
+
+    return 0;
+}
+
+int domain_load_end(struct domain_context *c)
+{
+    struct domain *d = c->domain;
+    size_t len = c->desc.length - c->len;
+
+    while ( c->len != c->desc.length ) /* unconsumed data or pad */
+    {
+        uint8_t pad;
+        int rc = domain_load_data(c, &pad, sizeof(pad));
+
+        if ( rc )
+            return rc;
+
+        if ( pad )
+            return -EINVAL;
+    }
+
+    ASSERT(c->name);
+    gdprintk(XENLOG_INFO, "%pd load: %s[%u] +%zu (-%zu)\n", d, c->name,
+             c->desc.instance, c->len, len);
+
+    c->len = 0;
+
+    return 0;
+}
+
+int domain_load(struct domain *d, const struct domain_load_ops *ops,
+                void *priv)
+{
+    struct domain_context c = {
+        .domain = d,
+        .ops.load = ops,
+        .priv = priv,
+    };
+    unsigned int instance;
+    struct domain_save_header h;
+    int rc;
+
+    ASSERT(d != current->domain);
+
+    rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+    if ( rc )
+        return rc;
+
+    c.name = "HEADER";
+
+    rc = DOMAIN_LOAD_ENTRY(HEADER, &c, &instance, &h, sizeof(h));
+    if ( rc )
+        return rc;
+
+    if ( instance || h.magic != DOMAIN_SAVE_MAGIC ||
+         h.version != DOMAIN_SAVE_VERSION )
+        return -EINVAL;
+
+    domain_pause(d);
+
+    for (;;)
+    {
+        unsigned int i;
+        domain_load_handler load;
+
+        rc = c.ops.load->read(c.priv, &c.desc, sizeof(c.desc));
+        if ( rc )
+            return rc;
+
+        rc = -EINVAL;
+
+        if ( c.desc.typecode == DOMAIN_SAVE_CODE(END) )
+        {
+            struct domain_save_end e;
+
+            c.name = "END";
+
+            rc = DOMAIN_LOAD_ENTRY(END, &c, &instance, &e, sizeof(e));
+
+            if ( instance )
+                return -EINVAL;
+
+            break;
+        }
+
+        i = c.desc.typecode;
+        if ( i >= ARRAY_SIZE(handlers) )
+            break;
+
+        c.name = handlers[i].name;
+        load = handlers[i].load;
+
+        rc = load ? load(d, &c) : -EOPNOTSUPP;
+        if ( rc )
+            break;
+    }
+
+    domain_unpause(d);
+
+    return rc;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-arm/hvm/save.h b/xen/include/public/arch-arm/hvm/save.h
index 75b8e65bcb..d5b0c15203 100644
--- a/xen/include/public/arch-arm/hvm/save.h
+++ b/xen/include/public/arch-arm/hvm/save.h
@@ -26,6 +26,11 @@
 #ifndef __XEN_PUBLIC_HVM_SAVE_ARM_H__
 #define __XEN_PUBLIC_HVM_SAVE_ARM_H__
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif
 
 /*
diff --git a/xen/include/public/arch-x86/hvm/save.h b/xen/include/public/arch-x86/hvm/save.h
index 773a380bc2..e61e2dbcd7 100644
--- a/xen/include/public/arch-x86/hvm/save.h
+++ b/xen/include/public/arch-x86/hvm/save.h
@@ -648,6 +648,11 @@ struct hvm_msr {
  */
 #define HVM_SAVE_CODE_MAX 20
 
+/*
+ * Further use of HVM state is deprecated. New state records should only
+ * be added to the domain state header: public/save.h
+ */
+
 #endif /* __XEN_PUBLIC_HVM_SAVE_X86_H__ */
 
 /*
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
new file mode 100644
index 0000000000..551dbbddb8
--- /dev/null
+++ b/xen/include/public/save.h
@@ -0,0 +1,89 @@
+/*
+ * save.h
+ *
+ * Structure definitions for common PV/HVM domain state that is held by
+ * Xen and must be saved along with the domain's memory.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef XEN_PUBLIC_SAVE_H
+#define XEN_PUBLIC_SAVE_H
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+#include "xen.h"
+
+/* Entry data is preceded by a descriptor */
+struct domain_save_descriptor {
+    uint16_t typecode;
+
+    /*
+     * Instance number of the entry (since there may be multiple of some
+     * types of entries).
+     */
+    uint16_t instance;
+
+    /* Entry length not including this descriptor */
+    uint32_t length;
+};
+
+/*
+ * Each entry has a type associated with it. DECLARE_DOMAIN_SAVE_TYPE
+ * binds these things together, although it is not intended that the
+ * resulting type is ever instantiated.
+ */
+#define DECLARE_DOMAIN_SAVE_TYPE(_x, _code, _type) \
+    struct DOMAIN_SAVE_TYPE_##_x { char c[_code]; _type t; };
+
+#define DOMAIN_SAVE_CODE(_x) \
+    (sizeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->c))
+#define DOMAIN_SAVE_TYPE(_x) \
+    typeof(((struct DOMAIN_SAVE_TYPE_##_x *)0)->t)
+
+/*
+ * All entries will be zero-padded to the next 64-bit boundary when saved,
+ * so there is no need to include trailing pad fields in structure
+ * definitions.
+ * When loading, entries will be zero-extended if the load handler reads
+ * beyond the length specified in the descriptor.
+ */
+
+/* Terminating entry */
+struct domain_save_end {};
+DECLARE_DOMAIN_SAVE_TYPE(END, 0, struct domain_save_end);
+
+#define DOMAIN_SAVE_MAGIC   0x53415645
+#define DOMAIN_SAVE_VERSION 0x00000001
+
+/* Initial entry */
+struct domain_save_header {
+    uint32_t magic;                /* Must be DOMAIN_SAVE_MAGIC */
+    uint16_t xen_major, xen_minor; /* Xen version */
+    uint32_t version;              /* Save format version */
+};
+DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
+
+#define DOMAIN_SAVE_CODE_MAX 1
+
+#endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
+
+#endif /* XEN_PUBLIC_SAVE_H */
diff --git a/xen/include/xen/save.h b/xen/include/xen/save.h
new file mode 100644
index 0000000000..f2e58bafef
--- /dev/null
+++ b/xen/include/xen/save.h
@@ -0,0 +1,170 @@
+/*
+ * save.h: support routines for save/restore
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef XEN_SAVE_H
+#define XEN_SAVE_H
+
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/types.h>
+
+#include <public/save.h>
+
+struct domain_context;
+
+int domain_save_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int instance);
+
+#define DOMAIN_SAVE_BEGIN(x, c, i) \
+    domain_save_begin((c), DOMAIN_SAVE_CODE(x), (i))
+
+int domain_save_data(struct domain_context *c, const void *data, size_t len);
+int domain_save_end(struct domain_context *c);
+
+static inline int domain_save_entry(struct domain_context *c,
+                                    unsigned int typecode,
+                                    unsigned int instance, const void *src,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_save_begin(c, typecode, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, src, len);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+#define DOMAIN_SAVE_ENTRY(x, c, i, s, l) \
+    domain_save_entry((c), DOMAIN_SAVE_CODE(x), (i), (s), (l))
+
+int domain_load_begin(struct domain_context *c, unsigned int typecode,
+                      unsigned int *instance);
+
+#define DOMAIN_LOAD_BEGIN(x, c, i) \
+    domain_load_begin((c), DOMAIN_SAVE_CODE(x), (i))
+
+int domain_load_data(struct domain_context *c, void *data, size_t len);
+int domain_load_end(struct domain_context *c);
+
+static inline int domain_load_entry(struct domain_context *c,
+                                    unsigned int typecode,
+                                    unsigned int *instance, void *dst,
+                                    size_t len)
+{
+    int rc;
+
+    rc = domain_load_begin(c, typecode, instance);
+    if ( rc )
+        return rc;
+
+    rc = domain_load_data(c, dst, len);
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+#define DOMAIN_LOAD_ENTRY(x, c, i, d, l) \
+    domain_load_entry((c), DOMAIN_SAVE_CODE(x), (i), (d), (l))
+
+/*
+ * The 'dry_run' flag indicates that the caller of domain_save() (see below)
+ * is not trying to actually acquire the data, only the size of the data.
+ * The save handler can therefore limit work to only that which is necessary
+ * to call domain_save_data() the correct number of times with accurate values
+ * for 'len'.
+ */
+typedef int (*domain_save_handler)(const struct domain *d,
+                                   struct domain_context *c,
+                                   bool dry_run);
+typedef int (*domain_load_handler)(struct domain *d,
+                                   struct domain_context *c);
+
+void domain_register_save_type(unsigned int typecode, const char *name,
+                               domain_save_handler save,
+                               domain_load_handler load);
+
+/*
+ * Register save and load handlers.
+ *
+ * Save handlers will be invoked in an order which copes with any inter-
+ * entry dependencies. For now this means that HEADER will come first and
+ * END will come last, all others being invoked in order of 'typecode'.
+ *
+ * Load handlers will be invoked in the order of entries present in the
+ * buffer.
+ */
+#define DOMAIN_REGISTER_SAVE_LOAD(x, s, l)                    \
+    static int __init __domain_register_##x##_save_load(void) \
+    {                                                         \
+        domain_register_save_type(                            \
+            DOMAIN_SAVE_CODE(x),                              \
+            #x,                                               \
+            &(s),                                             \
+            &(l));                                            \
+                                                              \
+        return 0;                                             \
+    }                                                         \
+    __initcall(__domain_register_##x##_save_load);
+
+/* Callback functions */
+struct domain_save_ops {
+    /*
+     * Begin a new entry with the given descriptor (only type and instance
+     * are valid).
+     */
+    int (*begin)(void *priv, const struct domain_save_descriptor *desc);
+    /* Append data/padding to the buffer */
+    int (*append)(void *priv, const void *data, size_t len);
+    /*
+     * Complete the entry by updating the descriptor with the total
+     * length of the appended data (not including padding).
+     */
+    int (*end)(void *priv, size_t len);
+};
+
+struct domain_load_ops {
+    /* Read data/padding from the buffer */
+    int (*read)(void *priv, void *data, size_t len);
+};
+
+/*
+ * Entry points:
+ *
+ * ops:     These are callback functions provided by the caller that will
+ *          be used to write to (in the save case) or read from (in the
+ *          load case) the context buffer. See above for more detail.
+ * priv:    This is a pointer that will be passed to the copy function to
+ *          allow it to identify the context buffer and the current state
+ *          of the save or load operation.
+ * dry_run: If this is set then the caller of domain_save() is only trying
+ *          to acquire the total size of the data, not the data itself.
+ *          In this case the caller may supply different ops to avoid doing
+ *          unnecessary work.
+ */
+int domain_save(struct domain *d, const struct domain_save_ops *ops,
+                void *priv, bool dry_run);
+int domain_load(struct domain *d, const struct domain_load_ops *ops,
+                void *priv);
+
+#endif /* XEN_SAVE_H */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdzwh-00033I-Lf; Wed, 27 May 2020 17:34:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=baN0=7J=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jdzwg-00032g-FB
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:34:26 +0000
X-Inumbo-ID: 4b99aecc-a040-11ea-a76e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4b99aecc-a040-11ea-a76e-12813bfff9fa;
 Wed, 27 May 2020 17:34:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6HfwJMUCo4w2pkMGpLCkcV69ZEzum7mNXZZZUP2qHUY=; b=iyZIATC7nbvbdFBWGH6A8Yz7IO
 5zJ0+kkNhkDzn0oEQj44cuPo5WQ1HxIwdHX30e9I8JJOFrZR9qi/uwE0phYSfzLN71/EcFRq6Mckk
 96mdJxxm5kUXomAcUklmwW98bACYwDc9k2nt45gNlpB3nj8jdthx6gFIy890Jde7kzpE=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwZ-00016Y-4l; Wed, 27 May 2020 17:34:19 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwY-0003od-SJ; Wed, 27 May 2020 17:34:19 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 5/5] tools/libxc: make use of domain context SHARED_INFO
 record...
Date: Wed, 27 May 2020 18:34:07 +0100
Message-Id: <20200527173407.1398-6-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200527173407.1398-1-paul@xen.org>
References: <20200527173407.1398-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

... in the save/restore code.

This patch replaces direct mapping of the shared_info_frame (retrieved
using XEN_DOMCTL_getdomaininfo) with save/load of the domain context
SHARED_INFO record.

No modifications are made to the definition of the migration stream at
this point. Subsequent patches will define a record in the libxc domain
image format for passing domain context and convert the save/restore code
to use that.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>

NOTE: Ian requested ack from Andrew

v5:
 - Added BUILD_BUG_ON() in write_shared_info() to ensure copied data is not
   bigger than the record buffer

v4:
 - write_shared_info() now needs to allocate the record data since the
   shared info buffer is smaller than PAGE_SIZE

v3:
 - Moved basic get/set domain context functions to common code

v2:
 - Re-based (now making use of DOMAIN_SAVE_FLAG_IGNORE)
---
 tools/libxc/xc_sr_common.c         | 67 +++++++++++++++++++++++++++
 tools/libxc/xc_sr_common.h         | 11 ++++-
 tools/libxc/xc_sr_common_x86_pv.c  | 74 ++++++++++++++++++++++++++++++
 tools/libxc/xc_sr_common_x86_pv.h  |  3 ++
 tools/libxc/xc_sr_restore_x86_pv.c | 26 ++++-------
 tools/libxc/xc_sr_save_x86_pv.c    | 44 ++++++++----------
 tools/libxc/xg_save_restore.h      |  1 +
 7 files changed, 182 insertions(+), 44 deletions(-)

diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
index dd9a11b4b5..1acb3765aa 100644
--- a/tools/libxc/xc_sr_common.c
+++ b/tools/libxc/xc_sr_common.c
@@ -138,6 +138,73 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
     return 0;
 };
 
+int get_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    size_t len = 0;
+    int rc;
+
+    if ( ctx->domain_context.buffer )
+    {
+        ERROR("Domain context already present");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get size of domain context");
+        return -1;
+    }
+
+    ctx->domain_context.buffer = malloc(len);
+    if ( !ctx->domain_context.buffer )
+    {
+        PERROR("Unable to allocate memory for domain context");
+        return -1;
+    }
+
+    rc = xc_domain_getcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                              &len);
+    if ( rc < 0 )
+    {
+        PERROR("Unable to get domain context");
+        return -1;
+    }
+
+    ctx->domain_context.len = len;
+
+    return 0;
+}
+
+int set_domain_context(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    int rc;
+
+    if ( !ctx->domain_context.buffer )
+    {
+        ERROR("Domain context not present");
+        return -1;
+    }
+
+    rc = xc_domain_setcontext(xch, ctx->domid, ctx->domain_context.buffer,
+                              ctx->domain_context.len);
+
+    if ( rc < 0 )
+    {
+        PERROR("Unable to set domain context");
+        return -1;
+    }
+
+    return 0;
+}
+
+void common_cleanup(struct xc_sr_context *ctx)
+{
+    free(ctx->domain_context.buffer);
+}
+
 static void __attribute__((unused)) build_assertions(void)
 {
     BUILD_BUG_ON(sizeof(struct xc_sr_ihdr) != 24);
diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
index 5dd51ccb15..0d61978b08 100644
--- a/tools/libxc/xc_sr_common.h
+++ b/tools/libxc/xc_sr_common.h
@@ -208,6 +208,11 @@ struct xc_sr_context
 
     xc_dominfo_t dominfo;
 
+    struct {
+        void *buffer;
+        unsigned int len;
+    } domain_context;
+
     union /* Common save or restore data. */
     {
         struct /* Save data. */
@@ -314,7 +319,7 @@ struct xc_sr_context
                 /* The guest pfns containing the p2m leaves */
                 xen_pfn_t *p2m_pfns;
 
-                /* Read-only mapping of guests shared info page */
+                /* Pointer to shared_info (located in context buffer) */
                 shared_info_any_t *shinfo;
 
                 /* p2m generation count for verifying validity of local p2m. */
@@ -425,6 +430,10 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec);
 int populate_pfns(struct xc_sr_context *ctx, unsigned int count,
                   const xen_pfn_t *original_pfns, const uint32_t *types);
 
+int get_domain_context(struct xc_sr_context *ctx);
+int set_domain_context(struct xc_sr_context *ctx);
+void common_cleanup(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_common_x86_pv.c b/tools/libxc/xc_sr_common_x86_pv.c
index d3d425cb82..69d9b142b8 100644
--- a/tools/libxc/xc_sr_common_x86_pv.c
+++ b/tools/libxc/xc_sr_common_x86_pv.c
@@ -182,6 +182,80 @@ int x86_pv_map_m2p(struct xc_sr_context *ctx)
     return rc;
 }
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+    unsigned int off = 0;
+    int rc;
+
+#define GET_PTR(_x)                                                         \
+    do {                                                                    \
+        if ( ctx->domain_context.len - off < sizeof(*(_x)) )                \
+        {                                                                   \
+            ERROR("Need another %zu bytes of context, only %u available",  \
+                  sizeof(*(_x)), ctx->domain_context.len - off);           \
+            return -1;                                                      \
+        }                                                                   \
+        (_x) = ctx->domain_context.buffer + off;                            \
+    } while (false)
+
+    rc = get_domain_context(ctx);
+    if ( rc )
+        return rc;
+
+    for ( ; ; )
+    {
+        struct domain_save_descriptor *desc;
+
+        GET_PTR(desc);
+
+        off += sizeof(*desc);
+
+        switch (desc->typecode)
+        {
+        case DOMAIN_SAVE_CODE(SHARED_INFO):
+        {
+            DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+
+            GET_PTR(s);
+
+            ctx->x86.pv.shinfo = (shared_info_any_t *)s->buffer;
+            break;
+        }
+        default:
+            break;
+        }
+
+        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
+            break;
+
+        off += desc->length;
+    }
+
+    if ( !ctx->x86.pv.shinfo )
+    {
+        ERROR("Failed to get SHARED_INFO");
+        return -1;
+    }
+
+    return 0;
+
+#undef GET_PTR
+}
+
+int x86_pv_set_shinfo(struct xc_sr_context *ctx)
+{
+    xc_interface *xch = ctx->xch;
+
+    if ( !ctx->x86.pv.shinfo )
+    {
+        ERROR("SHARED_INFO buffer not present");
+        return -1;
+    }
+
+    return set_domain_context(ctx);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/tools/libxc/xc_sr_common_x86_pv.h b/tools/libxc/xc_sr_common_x86_pv.h
index 2ed03309af..01442f48fb 100644
--- a/tools/libxc/xc_sr_common_x86_pv.h
+++ b/tools/libxc/xc_sr_common_x86_pv.h
@@ -97,6 +97,9 @@ int x86_pv_domain_info(struct xc_sr_context *ctx);
  */
 int x86_pv_map_m2p(struct xc_sr_context *ctx);
 
+int x86_pv_get_shinfo(struct xc_sr_context *ctx);
+int x86_pv_set_shinfo(struct xc_sr_context *ctx);
+
 #endif
 /*
  * Local variables:
diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
index 904ccc462a..21982a38ad 100644
--- a/tools/libxc/xc_sr_restore_x86_pv.c
+++ b/tools/libxc/xc_sr_restore_x86_pv.c
@@ -865,7 +865,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
     xc_interface *xch = ctx->xch;
     unsigned int i;
     int rc = -1;
-    shared_info_any_t *guest_shinfo = NULL;
+    shared_info_any_t *guest_shinfo;
     const shared_info_any_t *old_shinfo = rec->data;
 
     if ( !ctx->x86.pv.restore.seen_pv_info )
@@ -878,18 +878,14 @@ static int handle_shared_info(struct xc_sr_context *ctx,
     {
         ERROR("X86_PV_SHARED_INFO record wrong size: length %u"
               ", expected 4096", rec->length);
-        goto err;
+        return -1;
     }
 
-    guest_shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
-        ctx->dominfo.shared_info_frame);
-    if ( !guest_shinfo )
-    {
-        PERROR("Failed to map Shared Info at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        goto err;
-    }
+    rc = x86_pv_get_shinfo(ctx);
+    if ( rc )
+        return rc;
+
+    guest_shinfo = ctx->x86.pv.shinfo;
 
     MEMCPY_FIELD(guest_shinfo, old_shinfo, vcpu_info, ctx->x86.pv.width);
     MEMCPY_FIELD(guest_shinfo, old_shinfo, arch, ctx->x86.pv.width);
@@ -904,13 +900,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
 
     MEMSET_ARRAY_FIELD(guest_shinfo, evtchn_mask, 0xff, ctx->x86.pv.width);
 
-    rc = 0;
-
- err:
-    if ( guest_shinfo )
-        munmap(guest_shinfo, PAGE_SIZE);
-
-    return rc;
+    return x86_pv_set_shinfo(ctx);
 }
 
 /* restore_ops function. */
diff --git a/tools/libxc/xc_sr_save_x86_pv.c b/tools/libxc/xc_sr_save_x86_pv.c
index f3ccf5bb4b..fdd172b639 100644
--- a/tools/libxc/xc_sr_save_x86_pv.c
+++ b/tools/libxc/xc_sr_save_x86_pv.c
@@ -9,25 +9,6 @@ static inline bool is_canonical_address(xen_vaddr_t vaddr)
     return ((int64_t)vaddr >> 47) == ((int64_t)vaddr >> 63);
 }
 
-/*
- * Maps the guests shared info page.
- */
-static int map_shinfo(struct xc_sr_context *ctx)
-{
-    xc_interface *xch = ctx->xch;
-
-    ctx->x86.pv.shinfo = xc_map_foreign_range(
-        xch, ctx->domid, PAGE_SIZE, PROT_READ, ctx->dominfo.shared_info_frame);
-    if ( !ctx->x86.pv.shinfo )
-    {
-        PERROR("Failed to map shared info frame at mfn %#lx",
-               ctx->dominfo.shared_info_frame);
-        return -1;
-    }
-
-    return 0;
-}
-
 /*
  * Copy a list of mfns from a guest, accounting for differences between guest
  * and toolstack width.  Can fail if truncation would occur.
@@ -854,13 +835,27 @@ static int write_x86_pv_p2m_frames(struct xc_sr_context *ctx)
  */
 static int write_shared_info(struct xc_sr_context *ctx)
 {
+    xc_interface *xch = ctx->xch;
     struct xc_sr_record rec = {
         .type = REC_TYPE_SHARED_INFO,
         .length = PAGE_SIZE,
-        .data = ctx->x86.pv.shinfo,
     };
+    int rc;
 
-    return write_record(ctx, &rec);
+    if ( !(rec.data = calloc(1, PAGE_SIZE)) )
+    {
+        ERROR("Cannot allocate buffer for SHARED_INFO data");
+        return -1;
+    }
+
+    BUILD_BUG_ON(sizeof(*ctx->x86.pv.shinfo) > PAGE_SIZE);
+    memcpy(rec.data, ctx->x86.pv.shinfo, sizeof(*ctx->x86.pv.shinfo));
+
+    rc = write_record(ctx, &rec);
+
+    free(rec.data);
+
+    return rc;
 }
 
 /*
@@ -1041,7 +1036,7 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
     if ( rc )
         return rc;
 
-    rc = map_shinfo(ctx);
+    rc = x86_pv_get_shinfo(ctx);
     if ( rc )
         return rc;
 
@@ -1112,12 +1107,11 @@ static int x86_pv_cleanup(struct xc_sr_context *ctx)
     if ( ctx->x86.pv.p2m )
         munmap(ctx->x86.pv.p2m, ctx->x86.pv.p2m_frames * PAGE_SIZE);
 
-    if ( ctx->x86.pv.shinfo )
-        munmap(ctx->x86.pv.shinfo, PAGE_SIZE);
-
     if ( ctx->x86.pv.m2p )
         munmap(ctx->x86.pv.m2p, ctx->x86.pv.nr_m2p_frames * PAGE_SIZE);
 
+    common_cleanup(ctx);
+
     return 0;
 }
 
diff --git a/tools/libxc/xg_save_restore.h b/tools/libxc/xg_save_restore.h
index 303081df0d..296b523963 100644
--- a/tools/libxc/xg_save_restore.h
+++ b/tools/libxc/xg_save_restore.h
@@ -19,6 +19,7 @@
 
 #include <xen/foreign/x86_32.h>
 #include <xen/foreign/x86_64.h>
+#include <xen/save.h>
 
 /*
 ** We process save/restore/migrate in batches of pages; the below
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdzwg-00032h-Cu; Wed, 27 May 2020 17:34:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=baN0=7J=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jdzwe-00032U-Gl
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:34:24 +0000
X-Inumbo-ID: 4b1bae0a-a040-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b1bae0a-a040-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 17:34:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ieSZ3DmS/NEuUsfREP78zr0PFaP8o8jbCaGOy4mF6V0=; b=B6GXGkvQacL1nQ+BrDrY/MwUJX
 nBhHOW1g3mT75hVk4grXDvIgt6ennhAvqbvjlbHfFYuY8JdSta9pkOSzY9qJPmR9F6GhNPXKUVAC9
 PcVYxqmVJrBxBZzLVlAjDJE4P7QE+y7E3Y8eAKkXdUz0wmJOco5TPKHSrsgnRcMX13oI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwX-00016O-Vy; Wed, 27 May 2020 17:34:17 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwX-0003od-N0; Wed, 27 May 2020 17:34:17 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 4/5] common/domain: add a domain context record for
 shared_info...
Date: Wed, 27 May 2020 18:34:06 +0100
Message-Id: <20200527173407.1398-5-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200527173407.1398-1-paul@xen.org>
References: <20200527173407.1398-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

... and update xen-domctx to dump some information describing the record.

NOTE: The domain may or may not be using the embedded vcpu_info array, so
      separate context records for vcpu_info will ultimately be added when
      this becomes necessary.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v6:
 - Only save compat_shared_info buffer if has_32bit_shinfo is set
 - Validate flags field in load handler

v5:
 - Addressed comments from Julien

v4:
 - Addressed comments from Jan

v3:
 - Actually dump some of the content of shared_info

v2:
 - Drop the header change to define a 'Xen' page size and instead use a
   variable-length struct now that the framework makes this feasible
 - Guard use of 'has_32bit_shinfo' in common code with CONFIG_COMPAT
---
 tools/misc/xen-domctx.c   | 78 +++++++++++++++++++++++++++++++++++++++
 xen/common/domain.c       | 70 +++++++++++++++++++++++++++++++++++
 xen/include/public/save.h | 13 ++++++-
 3 files changed, 160 insertions(+), 1 deletion(-)

diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
index 243325dfce..6ead7ea89d 100644
--- a/tools/misc/xen-domctx.c
+++ b/tools/misc/xen-domctx.c
@@ -31,6 +31,7 @@
 #include <errno.h>
 
 #include <xenctrl.h>
+#include <xen-tools/libs.h>
 #include <xen/xen.h>
 #include <xen/domctl.h>
 #include <xen/save.h>
@@ -61,6 +62,82 @@ static void dump_header(void)
 
 }
 
+static void print_binary(const char *prefix, const void *val, size_t size,
+                         const char *suffix)
+{
+    printf("%s", prefix);
+
+    while ( size-- )
+    {
+        uint8_t octet = *(const uint8_t *)val++;
+        unsigned int i;
+
+        for ( i = 0; i < 8; i++ )
+        {
+            printf("%u", octet & 1);
+            octet >>= 1;
+        }
+    }
+
+    printf("%s", suffix);
+}
+
+static void dump_shared_info(void)
+{
+    DOMAIN_SAVE_TYPE(SHARED_INFO) *s;
+    bool has_32bit_shinfo;
+    shared_info_any_t *info;
+    unsigned int i, n;
+
+    GET_PTR(s);
+    has_32bit_shinfo = s->flags & DOMAIN_SAVE_32BIT_SHINFO;
+
+    printf("    SHARED_INFO: has_32bit_shinfo: %s buffer_size: %u\n",
+           has_32bit_shinfo ? "true" : "false", s->buffer_size);
+
+    info = (shared_info_any_t *)s->buffer;
+
+#define GET_FIELD_PTR(_f)            \
+    (has_32bit_shinfo ?              \
+     (const void *)&(info->x32._f) : \
+     (const void *)&(info->x64._f))
+#define GET_FIELD_SIZE(_f) \
+    (has_32bit_shinfo ? sizeof(info->x32._f) : sizeof(info->x64._f))
+#define GET_FIELD(_f) \
+    (has_32bit_shinfo ? info->x32._f : info->x64._f)
+
+    n = has_32bit_shinfo ?
+        ARRAY_SIZE(info->x32.evtchn_pending) :
+        ARRAY_SIZE(info->x64.evtchn_pending);
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                 evtchn_pending: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_pending[0]),
+                 GET_FIELD_SIZE(evtchn_pending[0]), "\n");
+    }
+
+    for ( i = 0; i < n; i++ )
+    {
+        const char *prefix = !i ?
+            "                    evtchn_mask: " :
+            "                                 ";
+
+        print_binary(prefix, GET_FIELD_PTR(evtchn_mask[0]),
+                 GET_FIELD_SIZE(evtchn_mask[0]), "\n");
+    }
+
+    printf("                 wc: version: %u sec: %u nsec: %u\n",
+           GET_FIELD(wc_version), GET_FIELD(wc_sec), GET_FIELD(wc_nsec));
+
+#undef GET_FIELD
+#undef GET_FIELD_SIZE
+#undef GET_FIELD_PTR
+}
+
 static void dump_end(void)
 {
     DOMAIN_SAVE_TYPE(END) *e;
@@ -173,6 +250,7 @@ int main(int argc, char **argv)
             switch (desc->typecode)
             {
             case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(SHARED_INFO): dump_shared_info(); break;
             case DOMAIN_SAVE_CODE(END): dump_end(); break;
             default:
                 printf("Unknown type %u: skipping\n", desc->typecode);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..da122dc8e7 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -33,6 +33,7 @@
 #include <xen/xenoprof.h>
 #include <xen/irq.h>
 #include <xen/argo.h>
+#include <xen/save.h>
 #include <asm/debugger.h>
 #include <asm/p2m.h>
 #include <asm/processor.h>
@@ -1649,6 +1650,75 @@ int continue_hypercall_on_cpu(
     return 0;
 }
 
+static int save_shared_info(const struct domain *d, struct domain_context *c,
+                            bool dry_run)
+{
+    struct domain_shared_info_context ctxt = {
+#ifdef CONFIG_COMPAT
+        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
+        .buffer_size = has_32bit_shinfo(d) ?
+                       sizeof(struct compat_shared_info) :
+                       sizeof(struct shared_info),
+#else
+        .buffer_size = sizeof(struct shared_info),
+#endif
+    };
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    int rc;
+
+    rc = DOMAIN_SAVE_BEGIN(SHARED_INFO, c, 0);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    rc = domain_save_data(c, d->shared_info, ctxt.buffer_size);
+    if ( rc )
+        return rc;
+
+    return domain_save_end(c);
+}
+
+static int load_shared_info(struct domain *d, struct domain_context *c)
+{
+    struct domain_shared_info_context ctxt;
+    size_t hdr_size = offsetof(typeof(ctxt), buffer);
+    unsigned int i;
+    int rc;
+
+    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
+    if ( rc )
+        return rc;
+
+    if ( i ) /* expect only a single instance */
+        return -ENXIO;
+
+    rc = domain_load_data(c, &ctxt, hdr_size);
+    if ( rc )
+        return rc;
+
+    if ( ctxt.buffer_size > sizeof(shared_info_t) ||
+         (ctxt.flags & ~DOMAIN_SAVE_32BIT_SHINFO) )
+        return -EINVAL;
+
+    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
+#ifdef CONFIG_COMPAT
+        has_32bit_shinfo(d) = true;
+#else
+        return -EINVAL;
+#endif
+
+    rc = domain_load_data(c, d->shared_info, sizeof(shared_info_t));
+    if ( rc )
+        return rc;
+
+    return domain_load_end(c);
+}
+
+DOMAIN_REGISTER_SAVE_LOAD(SHARED_INFO, save_shared_info, load_shared_info);
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/include/public/save.h b/xen/include/public/save.h
index 551dbbddb8..0e855a4b97 100644
--- a/xen/include/public/save.h
+++ b/xen/include/public/save.h
@@ -82,7 +82,18 @@ struct domain_save_header {
 };
 DECLARE_DOMAIN_SAVE_TYPE(HEADER, 1, struct domain_save_header);
 
-#define DOMAIN_SAVE_CODE_MAX 1
+struct domain_shared_info_context {
+    uint32_t flags;
+
+#define DOMAIN_SAVE_32BIT_SHINFO 0x00000001
+
+    uint32_t buffer_size;
+    uint8_t buffer[XEN_FLEX_ARRAY_DIM]; /* Implementation specific size */
+};
+
+DECLARE_DOMAIN_SAVE_TYPE(SHARED_INFO, 2, struct domain_shared_info_context);
+
+#define DOMAIN_SAVE_CODE_MAX 2
 
 #endif /* defined(__XEN__) || defined(__XEN_TOOLS__) */
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdzwa-00031B-Oa; Wed, 27 May 2020 17:34:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=baN0=7J=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jdzwZ-000312-HB
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:34:19 +0000
X-Inumbo-ID: 4ab3572e-a040-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ab3572e-a040-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 17:34:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=QfGmoXyhQuN/EVxQAxJlmVEjrcceHfUNWPHkPI0BpEQ=; b=1h8AoXDFuY2JpVPf1k3nl867r+
 M2Quscr/llmLBn303sK7eatmTeUtDAZSQXrYwPmdudNkxmoeC18wAG8xCfzovYHjVPeTvz9fvvJ9g
 lzaKjHwlA28Ua77T9pZkxIIX+l63+sL/wmtZPB5LsZtm5pnZ8wrLG58fS6roO2qDfofg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwV-000167-Ar; Wed, 27 May 2020 17:34:15 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwV-0003od-1N; Wed, 27 May 2020 17:34:15 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 2/5] xen/common/domctl: introduce
 XEN_DOMCTL_get/setdomaincontext
Date: Wed, 27 May 2020 18:34:04 +0100
Message-Id: <20200527173407.1398-3-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200527173407.1398-1-paul@xen.org>
References: <20200527173407.1398-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These domctls provide a mechanism to get and set domain context from
the toolstack.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Reviewed-by: Julien Grall <julien@xen.org>
---
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>

v4:
 - Add missing zero pad checks

v3:
 - Addressed comments from Julien and Jan
 - Use vmalloc() rather than xmalloc_bytes()

v2:
 - drop mask parameter
 - const-ify some more buffers
---
 tools/flask/policy/modules/xen.if   |   4 +-
 tools/libxc/include/xenctrl.h       |   5 +
 tools/libxc/xc_domain.c             |  56 +++++++++
 xen/common/domctl.c                 | 173 ++++++++++++++++++++++++++++
 xen/include/public/domctl.h         |  41 +++++++
 xen/xsm/flask/hooks.c               |   6 +
 xen/xsm/flask/policy/access_vectors |   4 +
 7 files changed, 287 insertions(+), 2 deletions(-)

diff --git a/tools/flask/policy/modules/xen.if b/tools/flask/policy/modules/xen.if
index 8eb2293a52..2bc9db4f64 100644
--- a/tools/flask/policy/modules/xen.if
+++ b/tools/flask/policy/modules/xen.if
@@ -53,7 +53,7 @@ define(`create_domain_common', `
 	allow $1 $2:domain2 { set_cpu_policy settsc setscheduler setclaim
 			set_vnumainfo get_vnumainfo cacheflush
 			psr_cmt_op psr_alloc soft_reset
-			resource_map get_cpu_policy };
+			resource_map get_cpu_policy setcontext };
 	allow $1 $2:security check_context;
 	allow $1 $2:shadow enable;
 	allow $1 $2:mmu { map_read map_write adjust memorymap physmap pinpage mmuext_op updatemp };
@@ -97,7 +97,7 @@ define(`migrate_domain_out', `
 	allow $1 $2:hvm { gethvmc getparam };
 	allow $1 $2:mmu { stat pageinfo map_read };
 	allow $1 $2:domain { getaddrsize getvcpucontext pause destroy };
-	allow $1 $2:domain2 gettsc;
+	allow $1 $2:domain2 { gettsc getcontext };
 	allow $1 $2:shadow { enable disable logdirty };
 ')
 
diff --git a/tools/libxc/include/xenctrl.h b/tools/libxc/include/xenctrl.h
index 45ff7db1e8..0ce2372e2f 100644
--- a/tools/libxc/include/xenctrl.h
+++ b/tools/libxc/include/xenctrl.h
@@ -867,6 +867,11 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
                              uint8_t *hvm_ctxt,
                              uint32_t size);
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size);
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size);
+
 /**
  * This function will return guest IO ABI protocol
  *
diff --git a/tools/libxc/xc_domain.c b/tools/libxc/xc_domain.c
index 71829c2bce..e462a6f728 100644
--- a/tools/libxc/xc_domain.c
+++ b/tools/libxc/xc_domain.c
@@ -537,6 +537,62 @@ int xc_domain_hvm_setcontext(xc_interface *xch,
     return ret;
 }
 
+int xc_domain_getcontext(xc_interface *xch, uint32_t domid,
+                         void *ctxt_buf, size_t *size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_getdomaincontext,
+        .domain = domid,
+        .u.getdomaincontext.size = *size,
+    };
+    DECLARE_HYPERCALL_BOUNCE(ctxt_buf, *size, XC_HYPERCALL_BUFFER_BOUNCE_OUT);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.getdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    if ( ret )
+        return ret;
+
+    *size = domctl.u.getdomaincontext.size;
+    if ( *size != domctl.u.getdomaincontext.size )
+    {
+        errno = EOVERFLOW;
+        return -1;
+    }
+
+    return 0;
+}
+
+int xc_domain_setcontext(xc_interface *xch, uint32_t domid,
+                         const void *ctxt_buf, size_t size)
+{
+    int ret;
+    DECLARE_DOMCTL = {
+        .cmd = XEN_DOMCTL_setdomaincontext,
+        .domain = domid,
+        .u.setdomaincontext.size = size,
+    };
+    DECLARE_HYPERCALL_BOUNCE_IN(ctxt_buf, size);
+
+    if ( xc_hypercall_bounce_pre(xch, ctxt_buf) )
+        return -1;
+
+    set_xen_guest_handle(domctl.u.setdomaincontext.buffer, ctxt_buf);
+
+    ret = do_domctl(xch, &domctl);
+
+    xc_hypercall_bounce_post(xch, ctxt_buf);
+
+    return ret;
+}
+
 int xc_vcpu_getcontext(xc_interface *xch,
                        uint32_t domid,
                        uint32_t vcpu,
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index a69b3b59a8..44758034a6 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -25,6 +25,8 @@
 #include <xen/hypercall.h>
 #include <xen/vm_event.h>
 #include <xen/monitor.h>
+#include <xen/save.h>
+#include <xen/vmap.h>
 #include <asm/current.h>
 #include <asm/irq.h>
 #include <asm/page.h>
@@ -358,6 +360,168 @@ static struct vnuma_info *vnuma_init(const struct xen_domctl_vnuma *uinfo,
     return ERR_PTR(ret);
 }
 
+struct domctl_context
+{
+    void *buffer;
+    struct domain_save_descriptor *desc;
+    size_t len;
+    size_t cur;
+};
+
+static int dry_run_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len + len < c->len )
+        return -EOVERFLOW;
+
+    c->len += len;
+
+    return 0;
+}
+
+static int dry_run_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    return dry_run_append(priv, NULL, sizeof(*desc));
+}
+
+static int dry_run_end(void *priv, size_t len)
+{
+    return 0;
+}
+
+static struct domain_save_ops dry_run_ops = {
+    .begin = dry_run_begin,
+    .append = dry_run_append,
+    .end = dry_run_end,
+};
+
+static int save_begin(void *priv, const struct domain_save_descriptor *desc)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < sizeof(*desc) )
+        return -ENOSPC;
+
+    c->desc = c->buffer + c->cur; /* stash pointer to descriptor */
+    *c->desc = *desc;
+
+    c->cur += sizeof(*desc);
+
+    return 0;
+}
+
+static int save_append(void *priv, const void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENOSPC;
+
+    memcpy(c->buffer + c->cur, data, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static int save_end(void *priv, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    c->desc->length = len;
+
+    return 0;
+}
+
+static struct domain_save_ops save_ops = {
+    .begin = save_begin,
+    .append = save_append,
+    .end = save_end,
+};
+
+static int getdomaincontext(struct domain *d,
+                            struct xen_domctl_getdomaincontext *gdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( gdc->pad )
+        return -EINVAL;
+
+    if ( guest_handle_is_null(gdc->buffer) ) /* query for buffer size */
+    {
+        if ( gdc->size )
+            return -EINVAL;
+
+        /* dry run to acquire buffer size */
+        rc = domain_save(d, &dry_run_ops, &c, true);
+        if ( rc )
+            return rc;
+
+        gdc->size = c.len;
+        return 0;
+    }
+
+    c.len = gdc->size;
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = domain_save(d, &save_ops, &c, false);
+
+    gdc->size = c.cur;
+    if ( !rc && copy_to_guest(gdc->buffer, c.buffer, gdc->size) )
+        rc = -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
+static int load_read(void *priv, void *data, size_t len)
+{
+    struct domctl_context *c = priv;
+
+    if ( c->len - c->cur < len )
+        return -ENODATA;
+
+    memcpy(data, c->buffer + c->cur, len);
+    c->cur += len;
+
+    return 0;
+}
+
+static struct domain_load_ops load_ops = {
+    .read = load_read,
+};
+
+static int setdomaincontext(struct domain *d,
+                            const struct xen_domctl_setdomaincontext *sdc)
+{
+    struct domctl_context c = { .buffer = ZERO_BLOCK_PTR, .len = sdc->size };
+    int rc;
+
+    if ( d == current->domain )
+        return -EPERM;
+
+    if ( sdc->pad )
+        return -EINVAL;
+
+    c.buffer = vmalloc(c.len);
+    if ( !c.buffer )
+        return -ENOMEM;
+
+    rc = !copy_from_guest(c.buffer, sdc->buffer, c.len) ?
+        domain_load(d, &load_ops, &c) : -EFAULT;
+
+    vfree(c.buffer);
+
+    return rc;
+}
+
 long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 {
     long ret = 0;
@@ -942,6 +1106,15 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             copyback = 1;
         break;
 
+    case XEN_DOMCTL_getdomaincontext:
+        ret = getdomaincontext(d, &op->u.getdomaincontext);
+        copyback = !ret;
+        break;
+
+    case XEN_DOMCTL_setdomaincontext:
+        ret = setdomaincontext(d, &op->u.setdomaincontext);
+        break;
+
     default:
         ret = arch_do_domctl(op, d, u_domctl);
         break;
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 1ad34c35eb..1b133bda59 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -1129,6 +1129,43 @@ struct xen_domctl_vuart_op {
                                  */
 };
 
+/*
+ * XEN_DOMCTL_getdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer into which the context data should be
+ *                copied, or NULL to query the buffer size that should
+ *                be allocated.
+ * size (IN/OUT): If 'buffer' is NULL then the value passed in must be
+ *                zero, and the value passed out will be the size of the
+ *                buffer to allocate.
+ *                If 'buffer' is non-NULL then the value passed in must
+ *                be the size of the buffer into which data may be copied.
+ *                The value passed out will be the size of the data written.
+ */
+struct xen_domctl_getdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(void) buffer;
+};
+
+/* XEN_DOMCTL_setdomaincontext
+ * ---------------------------
+ *
+ * buffer (IN):   The buffer from which the context data should be
+ *                copied.
+ * size (IN):     The size of the buffer from which data may be copied.
+ *                This data must include DOMAIN_SAVE_CODE_HEADER at the
+ *                start and terminate with a DOMAIN_SAVE_CODE_END record.
+ *                Any data beyond the DOMAIN_SAVE_CODE_END record will be
+ *                ignored.
+ */
+struct xen_domctl_setdomaincontext {
+    uint32_t size;
+    uint32_t pad;
+    XEN_GUEST_HANDLE_64(const_void) buffer;
+};
+
 struct xen_domctl {
     uint32_t cmd;
 #define XEN_DOMCTL_createdomain                   1
@@ -1210,6 +1247,8 @@ struct xen_domctl {
 #define XEN_DOMCTL_vuart_op                      81
 #define XEN_DOMCTL_get_cpu_policy                82
 #define XEN_DOMCTL_set_cpu_policy                83
+#define XEN_DOMCTL_getdomaincontext              84
+#define XEN_DOMCTL_setdomaincontext              85
 #define XEN_DOMCTL_gdbsx_guestmemio            1000
 #define XEN_DOMCTL_gdbsx_pausevcpu             1001
 #define XEN_DOMCTL_gdbsx_unpausevcpu           1002
@@ -1270,6 +1309,8 @@ struct xen_domctl {
         struct xen_domctl_monitor_op        monitor_op;
         struct xen_domctl_psr_alloc         psr_alloc;
         struct xen_domctl_vuart_op          vuart_op;
+        struct xen_domctl_getdomaincontext  getdomaincontext;
+        struct xen_domctl_setdomaincontext  setdomaincontext;
         uint8_t                             pad[128];
     } u;
 };
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index 4649e6fd95..6f3db276ef 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -745,6 +745,12 @@ static int flask_domctl(struct domain *d, int cmd)
     case XEN_DOMCTL_get_cpu_policy:
         return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GET_CPU_POLICY);
 
+    case XEN_DOMCTL_setdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__SETCONTEXT);
+
+    case XEN_DOMCTL_getdomaincontext:
+        return current_has_perm(d, SECCLASS_DOMAIN2, DOMAIN2__GETCONTEXT);
+
     default:
         return avc_unknown_permission("domctl", cmd);
     }
diff --git a/xen/xsm/flask/policy/access_vectors b/xen/xsm/flask/policy/access_vectors
index c055c14c26..fccfb9de82 100644
--- a/xen/xsm/flask/policy/access_vectors
+++ b/xen/xsm/flask/policy/access_vectors
@@ -245,6 +245,10 @@ class domain2
     resource_map
 # XEN_DOMCTL_get_cpu_policy
     get_cpu_policy
+# XEN_DOMCTL_setdomaincontext
+    setcontext
+# XEN_DOMCTL_getdomaincontext
+    getcontext
 }
 
 # Similar to class domain, but primarily contains domctls related to HVM domains
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdzwd-00031r-4m; Wed, 27 May 2020 17:34:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=baN0=7J=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jdzwb-00031V-F3
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:34:21 +0000
X-Inumbo-ID: 4a30455a-a040-11ea-a76e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a30455a-a040-11ea-a76e-12813bfff9fa;
 Wed, 27 May 2020 17:34:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:MIME-Version:References:
 In-Reply-To:Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-Type:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gk5Uuo1wIab8QnJ+XZOG7n7mFLhPsfwOIEwveUEZ4oE=; b=k5T6PmiisbaV/n+gWHjfIqwxS0
 05imcSEWrZrCA17JlA0shnHDAOq4Crt9aXEIw8lBe1CFsQLltUNS4I4Skz89eZjH7RfA+0+tAJHcT
 EmyamSH15v+VYboCncKonBOoCbkCMtwtoq0CwqJt8tBV6CmlIg1DuhLkkORTPS+6EUNA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwW-00016D-FC; Wed, 27 May 2020 17:34:16 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwW-0003od-6f; Wed, 27 May 2020 17:34:16 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 3/5] tools/misc: add xen-domctx to present domain context
Date: Wed, 27 May 2020 18:34:05 +0100
Message-Id: <20200527173407.1398-4-paul@xen.org>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200527173407.1398-1-paul@xen.org>
References: <20200527173407.1398-1-paul@xen.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This tool is analogous to 'xen-hvmctx', which presents HVM context.
Subsequent patches will add 'dump' functions when new records are
introduced.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>

NOTE: Ian requested ack from Andrew

v3:
 - Re-worked to avoid copying onto stack
 - Added optional typecode and instance arguments

v2:
 - Change name from 'xen-ctx' to 'xen-domctx'
---
 .gitignore              |   1 +
 tools/misc/Makefile     |   4 +
 tools/misc/xen-domctx.c | 200 ++++++++++++++++++++++++++++++++++++++++
 3 files changed, 205 insertions(+)
 create mode 100644 tools/misc/xen-domctx.c

diff --git a/.gitignore b/.gitignore
index 7418ce9829..6da3030f0d 100644
--- a/.gitignore
+++ b/.gitignore
@@ -209,6 +209,7 @@ tools/misc/xen_cpuperf
 tools/misc/xen-cpuid
 tools/misc/xen-detect
 tools/misc/xen-diag
+tools/misc/xen-domctx
 tools/misc/xen-tmem-list-parse
 tools/misc/xen-livepatch
 tools/misc/xenperf
diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 63947bfadc..ef25524354 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -30,6 +30,7 @@ INSTALL_SBIN                   += xenpm
 INSTALL_SBIN                   += xenwatchdogd
 INSTALL_SBIN                   += xen-livepatch
 INSTALL_SBIN                   += xen-diag
+INSTALL_SBIN                   += xen-domctx
 INSTALL_SBIN += $(INSTALL_SBIN-y)
 
 # Everything to be installed in a private bin/
@@ -108,6 +109,9 @@ xen-livepatch: xen-livepatch.o
 xen-diag: xen-diag.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
 
+xen-domctx: xen-domctx.o
+	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenctrl) $(APPEND_LDFLAGS)
+
 xen-lowmemd: xen-lowmemd.o
 	$(CC) $(LDFLAGS) -o $@ $< $(LDLIBS_libxenevtchn) $(LDLIBS_libxenctrl) $(LDLIBS_libxenstore) $(APPEND_LDFLAGS)
 
diff --git a/tools/misc/xen-domctx.c b/tools/misc/xen-domctx.c
new file mode 100644
index 0000000000..243325dfce
--- /dev/null
+++ b/tools/misc/xen-domctx.c
@@ -0,0 +1,200 @@
+/*
+ * xen-domctx.c
+ *
+ * Print out domain save records in a human-readable way.
+ *
+ * Copyright Amazon.com Inc. or its affiliates.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a
+ * copy of this software and associated documentation files (the "Software"),
+ * to deal in the Software without restriction, including without limitation
+ * the rights to use, copy, modify, merge, publish, distribute, sublicense,
+ * and/or sell copies of the Software, and to permit persons to whom the
+ * Software is furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
+ * THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#include <inttypes.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <string.h>
+#include <errno.h>
+
+#include <xenctrl.h>
+#include <xen/xen.h>
+#include <xen/domctl.h>
+#include <xen/save.h>
+
+static void *buf = NULL;
+static size_t len, off;
+
+#define GET_PTR(_x)                                                        \
+    do {                                                                   \
+        if ( len - off < sizeof(*(_x)) )                                   \
+        {                                                                  \
+            fprintf(stderr,                                                \
+                    "error: need another %zu bytes, only %zu available\n", \
+                    sizeof(*(_x)), len - off);                             \
+            exit(1);                                                       \
+        }                                                                  \
+        (_x) = buf + off;                                                  \
+    } while (false)
+
+static void dump_header(void)
+{
+    DOMAIN_SAVE_TYPE(HEADER) *h;
+
+    GET_PTR(h);
+
+    printf("    HEADER: magic %#x, version %u\n",
+           h->magic, h->version);
+}
+
+static void dump_end(void)
+{
+    DOMAIN_SAVE_TYPE(END) *e;
+
+    GET_PTR(e);
+
+    printf("    END\n");
+}
+
+static void usage(const char *prog)
+{
+    fprintf(stderr, "usage: %s <domid> [ <typecode> [ <instance> ]]\n",
+            prog);
+    exit(1);
+}
+
+int main(int argc, char **argv)
+{
+    char *s, *e;
+    long domid;
+    long typecode = -1;
+    long instance = -1;
+    unsigned int entry;
+    xc_interface *xch;
+    int rc;
+
+    if ( argc < 2 || argc > 4 )
+        usage(argv[0]);
+
+    s = e = argv[1];
+    domid = strtol(s, &e, 0);
+
+    if ( *s == '\0' || *e != '\0' ||
+         domid < 0 || domid >= DOMID_FIRST_RESERVED )
+    {
+        fprintf(stderr, "invalid domid '%s'\n", s);
+        exit(1);
+    }
+
+    if ( argc >= 3 )
+    {
+        s = e = argv[2];
+        typecode = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid typecode '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    if ( argc == 4 )
+    {
+        s = e = argv[3];
+        instance = strtol(s, &e, 0);
+
+        if ( *s == '\0' || *e != '\0' )
+        {
+            fprintf(stderr, "invalid instance '%s'\n", s);
+            exit(1);
+        }
+    }
+
+    xch = xc_interface_open(0, 0, 0);
+    if ( !xch )
+    {
+        fprintf(stderr, "error: can't open libxc handle\n");
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, NULL, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get record length for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+
+    buf = malloc(len);
+    if ( !buf )
+    {
+        fprintf(stderr, "error: can't allocate %zu bytes\n", len);
+        exit(1);
+    }
+
+    rc = xc_domain_getcontext(xch, domid, buf, &len);
+    if ( rc < 0 )
+    {
+        fprintf(stderr, "error: can't get domain record for dom %ld: %s\n",
+                domid, strerror(errno));
+        exit(1);
+    }
+    off = 0;
+
+    entry = 0;
+    for ( ; ; )
+    {
+        struct domain_save_descriptor *desc;
+
+        GET_PTR(desc);
+
+        off += sizeof(*desc);
+
+        if ( (typecode < 0 || typecode == desc->typecode) &&
+             (instance < 0 || instance == desc->instance) )
+        {
+            printf("[%u] type: %u instance: %u length: %u\n", entry++,
+                   desc->typecode, desc->instance, desc->length);
+
+            switch (desc->typecode)
+            {
+            case DOMAIN_SAVE_CODE(HEADER): dump_header(); break;
+            case DOMAIN_SAVE_CODE(END): dump_end(); break;
+            default:
+                printf("Unknown type %u: skipping\n", desc->typecode);
+                break;
+            }
+        }
+
+        if ( desc->typecode == DOMAIN_SAVE_CODE(END) )
+            break;
+
+        off += desc->length;
+    }
+
+    return 0;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:34:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:34:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jdzwW-00030j-82; Wed, 27 May 2020 17:34:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=baN0=7J=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jdzwU-00030e-Kc
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:34:14 +0000
X-Inumbo-ID: 4850a874-a040-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4850a874-a040-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 17:34:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
 Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Ouw0yJUdxV0BhlDmLIeXobKVtd5FgZApf/B2Dwn+OQM=; b=Uwu1jO1/PuroMhuSvZa5Dcb1Z+
 oU3OjuAHJOk/Arj+oLxoA6ZaKqV68f3MTYa7FjoutIHs7/aj22CKk5HyHL+W8nurzCA0o+lm5jCH4
 4hbqKqddMMOjPry5ObZ13odiCz8YXcKeVZ+SkN0ggJkCro13g9o2gZCBB49As3BTrnx0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwR-00015w-Kv; Wed, 27 May 2020 17:34:11 +0000
Received: from 54-240-197-224.amazon.com ([54.240.197.224]
 helo=u2f063a87eabd5f.cbg10.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jdzwR-0003od-AJ; Wed, 27 May 2020 17:34:11 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v6 0/5] domain context infrastructure
Date: Wed, 27 May 2020 18:34:02 +0100
Message-Id: <20200527173407.1398-1-paul@xen.org>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Paul Durrant (5):
  xen/common: introduce a new framework for save/restore of 'domain'
    context
  xen/common/domctl: introduce XEN_DOMCTL_get/setdomaincontext
  tools/misc: add xen-domctx to present domain context
  common/domain: add a domain context record for shared_info...
  tools/libxc: make use of domain context SHARED_INFO record...

 .gitignore                             |   1 +
 tools/flask/policy/modules/xen.if      |   4 +-
 tools/libxc/include/xenctrl.h          |   5 +
 tools/libxc/xc_domain.c                |  56 +++++
 tools/libxc/xc_sr_common.c             |  67 ++++++
 tools/libxc/xc_sr_common.h             |  11 +-
 tools/libxc/xc_sr_common_x86_pv.c      |  74 ++++++
 tools/libxc/xc_sr_common_x86_pv.h      |   3 +
 tools/libxc/xc_sr_restore_x86_pv.c     |  26 +-
 tools/libxc/xc_sr_save_x86_pv.c        |  44 ++--
 tools/libxc/xg_save_restore.h          |   1 +
 tools/misc/Makefile                    |   4 +
 tools/misc/xen-domctx.c                | 278 ++++++++++++++++++++++
 xen/common/Makefile                    |   1 +
 xen/common/domain.c                    |  70 ++++++
 xen/common/domctl.c                    | 173 ++++++++++++++
 xen/common/save.c                      | 315 +++++++++++++++++++++++++
 xen/include/public/arch-arm/hvm/save.h |   5 +
 xen/include/public/arch-x86/hvm/save.h |   5 +
 xen/include/public/domctl.h            |  41 ++++
 xen/include/public/save.h              | 100 ++++++++
 xen/include/xen/save.h                 | 170 +++++++++++++
 xen/xsm/flask/hooks.c                  |   6 +
 xen/xsm/flask/policy/access_vectors    |   4 +
 24 files changed, 1418 insertions(+), 46 deletions(-)
 create mode 100644 tools/misc/xen-domctx.c
 create mode 100644 xen/common/save.c
 create mode 100644 xen/include/public/save.h
 create mode 100644 xen/include/xen/save.h
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: "Roger Pau Monné" <roger.pau@citrix.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>
Cc: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Cc: Wei Liu <wl@xen.org>
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:43:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:43:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je05X-0004OV-Op; Wed, 27 May 2020 17:43:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/NoV=7J=kernel.org=helgaas@srs-us1.protection.inumbo.net>)
 id 1je05W-0004OP-7P
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:43:34 +0000
X-Inumbo-ID: 94fd9e58-a041-11ea-a76e-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94fd9e58-a041-11ea-a76e-12813bfff9fa;
 Wed, 27 May 2020 17:43:33 +0000 (UTC)
Received: from localhost (mobile-166-175-190-200.mycingular.net
 [166.175.190.200])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id C628B20663;
 Wed, 27 May 2020 17:43:31 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590601412;
 bh=g3K+YeZ1yYJxyeiZYfgVMKP3rgbKk0R7FZQrOzGYFD4=;
 h=From:To:Cc:Subject:Date:From;
 b=JMWwka5vNpbNGWnUUgtimzSN0G8Pvb5P/VTBVcE7vDaIMLZWSEFxt4+jVAomle1UR
 QaZ7Ig7WnzizcmKoBp2qZ+EAKlmGgcPaKhzeJPEPoQNUCrAe7vLoAMfpJ0g8wLPkyv
 Kf2ABixKI9pZl67nVgy2SLj51DIdykhx0R+T+qPY=
From: Bjorn Helgaas <helgaas@kernel.org>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>
Subject: [PATCH 0/2] xen: Use dev_printk() when possible
Date: Wed, 27 May 2020 12:43:24 -0500
Message-Id: <20200527174326.254329-1-helgaas@kernel.org>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bjorn Helgaas <bhelgaas@google.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Bjorn Helgaas <bhelgaas@google.com>

Use dev_printk() when possible to include device and driver information in
the conventional format.

Bjorn Helgaas (2):
  xen-pciback: Use dev_printk() when possible
  xenbus: Use dev_printk() when possible

 drivers/xen/xen-pciback/conf_space.c        | 16 +++++----
 drivers/xen/xen-pciback/conf_space_header.c | 40 +++++++++------------
 drivers/xen/xen-pciback/conf_space_quirks.c |  6 ++--
 drivers/xen/xen-pciback/pci_stub.c          | 38 +++++++++-----------
 drivers/xen/xen-pciback/pciback_ops.c       | 38 ++++++++------------
 drivers/xen/xen-pciback/vpci.c              | 10 +++---
 drivers/xen/xenbus/xenbus_probe.c           | 11 +++---
 7 files changed, 70 insertions(+), 89 deletions(-)


base-commit: 8f3d9f354286745c751374f5f1fcafee6b3f3136
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:43:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:43:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je05g-0004PS-0j; Wed, 27 May 2020 17:43:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/NoV=7J=kernel.org=helgaas@srs-us1.protection.inumbo.net>)
 id 1je05e-0004PE-9q
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:43:42 +0000
X-Inumbo-ID: 99b4c794-a041-11ea-a76e-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99b4c794-a041-11ea-a76e-12813bfff9fa;
 Wed, 27 May 2020 17:43:40 +0000 (UTC)
Received: from localhost (mobile-166-175-190-200.mycingular.net
 [166.175.190.200])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 6E06420663;
 Wed, 27 May 2020 17:43:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590601419;
 bh=FHu3tfr7BLYO0sXAFxkyhkrxhWECuYcejWzTuvTFgAI=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=CCanJ0N/Btr+b4Ga/b0dB9L0ZuLGi7ehHtsrzCbYiwyGckWrmSNf3IUPec5EAtAe5
 EJbB5a6MyeyOQi/0BuS51FMW5A2OTCsNQWLuKCKv6kLlO+wDVqbvdl369m5Jf098BO
 hOwzdpuGfVdptBwUoHkHafOOqexLno+65Brp9cMw=
From: Bjorn Helgaas <helgaas@kernel.org>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>
Subject: [PATCH 1/2] xen-pciback: Use dev_printk() when possible
Date: Wed, 27 May 2020 12:43:25 -0500
Message-Id: <20200527174326.254329-2-helgaas@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200527174326.254329-1-helgaas@kernel.org>
References: <20200527174326.254329-1-helgaas@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bjorn Helgaas <bhelgaas@google.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Bjorn Helgaas <bhelgaas@google.com>

Use dev_printk() when possible to include device and driver information in
the conventional format.

Add "#define dev_fmt" when needed to preserve DRV_NAME or KBUILD_MODNAME in
messages.

No functional change intended.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
---
 drivers/xen/xen-pciback/conf_space.c        | 16 +++++----
 drivers/xen/xen-pciback/conf_space_header.c | 40 +++++++++------------
 drivers/xen/xen-pciback/conf_space_quirks.c |  6 ++--
 drivers/xen/xen-pciback/pci_stub.c          | 38 +++++++++-----------
 drivers/xen/xen-pciback/pciback_ops.c       | 38 ++++++++------------
 drivers/xen/xen-pciback/vpci.c              | 10 +++---
 6 files changed, 65 insertions(+), 83 deletions(-)

diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c
index da51a5d34e6e..f2df4e55fc1b 100644
--- a/drivers/xen/xen-pciback/conf_space.c
+++ b/drivers/xen/xen-pciback/conf_space.c
@@ -10,6 +10,8 @@
  * Author: Ryan Wilson <hap9@epoch.ncsc.mil>
  */
 
+#define dev_fmt(fmt) DRV_NAME ": " fmt
+
 #include <linux/kernel.h>
 #include <linux/moduleparam.h>
 #include <linux/pci.h>
@@ -155,8 +157,8 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
 	u32 value = 0, tmp_val;
 
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x\n",
-		       pci_name(dev), size, offset);
+		dev_printk(KERN_DEBUG, &dev->dev, "read %d bytes at 0x%x\n",
+			   size, offset);
 
 	if (!valid_request(offset, size)) {
 		err = XEN_PCI_ERR_invalid_offset;
@@ -196,8 +198,8 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
 
 out:
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x = %x\n",
-		       pci_name(dev), size, offset, value);
+		dev_printk(KERN_DEBUG, &dev->dev,
+			   "read %d bytes at 0x%x = %x\n", size, offset, value);
 
 	*ret_val = value;
 	return xen_pcibios_err_to_errno(err);
@@ -213,9 +215,9 @@ int xen_pcibk_config_write(struct pci_dev *dev, int offset, int size, u32 value)
 	int field_start, field_end;
 
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG
-		       DRV_NAME ": %s: write request %d bytes at 0x%x = %x\n",
-		       pci_name(dev), size, offset, value);
+		dev_printk(KERN_DEBUG, &dev->dev,
+			   "write request %d bytes at 0x%x = %x\n", size,
+			   offset, value);
 
 	if (!valid_request(offset, size))
 		return XEN_PCI_ERR_invalid_offset;
diff --git a/drivers/xen/xen-pciback/conf_space_header.c b/drivers/xen/xen-pciback/conf_space_header.c
index fb4fccb4aecc..b277b689f257 100644
--- a/drivers/xen/xen-pciback/conf_space_header.c
+++ b/drivers/xen/xen-pciback/conf_space_header.c
@@ -6,6 +6,7 @@
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define dev_fmt pr_fmt
 
 #include <linux/kernel.h>
 #include <linux/pci.h>
@@ -68,8 +69,7 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
 	dev_data = pci_get_drvdata(dev);
 	if (!pci_is_enabled(dev) && is_enable_cmd(value)) {
 		if (unlikely(verbose_request))
-			printk(KERN_DEBUG DRV_NAME ": %s: enable\n",
-			       pci_name(dev));
+			dev_printk(KERN_DEBUG, &dev->dev, "enable\n");
 		err = pci_enable_device(dev);
 		if (err)
 			return err;
@@ -77,8 +77,7 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
 			dev_data->enable_intx = 1;
 	} else if (pci_is_enabled(dev) && !is_enable_cmd(value)) {
 		if (unlikely(verbose_request))
-			printk(KERN_DEBUG DRV_NAME ": %s: disable\n",
-			       pci_name(dev));
+			dev_printk(KERN_DEBUG, &dev->dev, "disable\n");
 		pci_disable_device(dev);
 		if (dev_data)
 			dev_data->enable_intx = 0;
@@ -86,34 +85,30 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
 
 	if (!dev->is_busmaster && is_master_cmd(value)) {
 		if (unlikely(verbose_request))
-			printk(KERN_DEBUG DRV_NAME ": %s: set bus master\n",
-			       pci_name(dev));
+			dev_printk(KERN_DEBUG, &dev->dev, "set bus master\n");
 		pci_set_master(dev);
 	} else if (dev->is_busmaster && !is_master_cmd(value)) {
 		if (unlikely(verbose_request))
-			printk(KERN_DEBUG DRV_NAME ": %s: clear bus master\n",
-			       pci_name(dev));
+			dev_printk(KERN_DEBUG, &dev->dev, "clear bus master\n");
 		pci_clear_master(dev);
 	}
 
 	if (!(cmd->val & PCI_COMMAND_INVALIDATE) &&
 	    (value & PCI_COMMAND_INVALIDATE)) {
 		if (unlikely(verbose_request))
-			printk(KERN_DEBUG
-			       DRV_NAME ": %s: enable memory-write-invalidate\n",
-			       pci_name(dev));
+			dev_printk(KERN_DEBUG, &dev->dev,
+				   "enable memory-write-invalidate\n");
 		err = pci_set_mwi(dev);
 		if (err) {
-			pr_warn("%s: cannot enable memory-write-invalidate (%d)\n",
-				pci_name(dev), err);
+			dev_warn(&dev->dev, "cannot enable memory-write-invalidate (%d)\n",
+				err);
 			value &= ~PCI_COMMAND_INVALIDATE;
 		}
 	} else if ((cmd->val & PCI_COMMAND_INVALIDATE) &&
 		   !(value & PCI_COMMAND_INVALIDATE)) {
 		if (unlikely(verbose_request))
-			printk(KERN_DEBUG
-			       DRV_NAME ": %s: disable memory-write-invalidate\n",
-			       pci_name(dev));
+			dev_printk(KERN_DEBUG, &dev->dev,
+				   "disable memory-write-invalidate\n");
 		pci_clear_mwi(dev);
 	}
 
@@ -157,8 +152,7 @@ static int rom_write(struct pci_dev *dev, int offset, u32 value, void *data)
 	struct pci_bar_info *bar = data;
 
 	if (unlikely(!bar)) {
-		pr_warn(DRV_NAME ": driver data not found for %s\n",
-		       pci_name(dev));
+		dev_warn(&dev->dev, "driver data not found\n");
 		return XEN_PCI_ERR_op_failed;
 	}
 
@@ -194,8 +188,7 @@ static int bar_write(struct pci_dev *dev, int offset, u32 value, void *data)
 	u32 mask;
 
 	if (unlikely(!bar)) {
-		pr_warn(DRV_NAME ": driver data not found for %s\n",
-		       pci_name(dev));
+		dev_warn(&dev->dev, "driver data not found\n");
 		return XEN_PCI_ERR_op_failed;
 	}
 
@@ -228,8 +221,7 @@ static int bar_read(struct pci_dev *dev, int offset, u32 * value, void *data)
 	struct pci_bar_info *bar = data;
 
 	if (unlikely(!bar)) {
-		pr_warn(DRV_NAME ": driver data not found for %s\n",
-		       pci_name(dev));
+		dev_warn(&dev->dev, "driver data not found\n");
 		return XEN_PCI_ERR_op_failed;
 	}
 
@@ -433,8 +425,8 @@ int xen_pcibk_config_header_add_fields(struct pci_dev *dev)
 
 	default:
 		err = -EINVAL;
-		pr_err("%s: Unsupported header type %d!\n",
-		       pci_name(dev), dev->hdr_type);
+		dev_err(&dev->dev, "Unsupported header type %d!\n",
+			dev->hdr_type);
 		break;
 	}
 
diff --git a/drivers/xen/xen-pciback/conf_space_quirks.c b/drivers/xen/xen-pciback/conf_space_quirks.c
index ed593d1042a6..7dc281086302 100644
--- a/drivers/xen/xen-pciback/conf_space_quirks.c
+++ b/drivers/xen/xen-pciback/conf_space_quirks.c
@@ -6,6 +6,8 @@
  * Author: Chris Bookholt <hap10@epoch.ncsc.mil>
  */
 
+#define dev_fmt(fmt) DRV_NAME ": " fmt
+
 #include <linux/kernel.h>
 #include <linux/pci.h>
 #include "pciback.h"
@@ -35,8 +37,8 @@ static struct xen_pcibk_config_quirk *xen_pcibk_find_quirk(struct pci_dev *dev)
 		if (match_one_device(&tmp_quirk->devid, dev) != NULL)
 			goto out;
 	tmp_quirk = NULL;
-	printk(KERN_DEBUG DRV_NAME
-	       ": quirk didn't match any device known\n");
+	dev_printk(KERN_DEBUG, &dev->dev,
+		   "quirk didn't match any device known\n");
 out:
 	return tmp_quirk;
 }
diff --git a/drivers/xen/xen-pciback/pci_stub.c b/drivers/xen/xen-pciback/pci_stub.c
index 7af93d65ed51..e876c3d6dad1 100644
--- a/drivers/xen/xen-pciback/pci_stub.c
+++ b/drivers/xen/xen-pciback/pci_stub.c
@@ -6,6 +6,7 @@
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define dev_fmt pr_fmt
 
 #include <linux/module.h>
 #include <linux/init.h>
@@ -626,11 +627,11 @@ static void pcistub_remove(struct pci_dev *dev)
 		if (found_psdev->pdev) {
 			int domid = xen_find_device_domain_owner(dev);
 
-			pr_warn("****** removing device %s while still in-use by domain %d! ******\n",
+			dev_warn(&dev->dev, "****** removing device %s while still in-use by domain %d! ******\n",
 			       pci_name(found_psdev->dev), domid);
-			pr_warn("****** driver domain may still access this device's i/o resources!\n");
-			pr_warn("****** shutdown driver domain before binding device\n");
-			pr_warn("****** to other drivers or domains\n");
+			dev_warn(&dev->dev, "****** driver domain may still access this device's i/o resources!\n");
+			dev_warn(&dev->dev, "****** shutdown driver domain before binding device\n");
+			dev_warn(&dev->dev, "****** to other drivers or domains\n");
 
 			/* N.B. This ends up calling pcistub_put_pci_dev which ends up
 			 * doing the FLR. */
@@ -711,14 +712,12 @@ static pci_ers_result_t common_process(struct pcistub_device *psdev,
 	ret = xen_pcibk_get_pcifront_dev(psdev->dev, psdev->pdev,
 		&aer_op->domain, &aer_op->bus, &aer_op->devfn);
 	if (!ret) {
-		dev_err(&psdev->dev->dev,
-			DRV_NAME ": failed to get pcifront device\n");
+		dev_err(&psdev->dev->dev, "failed to get pcifront device\n");
 		return PCI_ERS_RESULT_NONE;
 	}
 	wmb();
 
-	dev_dbg(&psdev->dev->dev,
-			DRV_NAME ": aer_op %x dom %x bus %x devfn %x\n",
+	dev_dbg(&psdev->dev->dev, "aer_op %x dom %x bus %x devfn %x\n",
 			aer_cmd, aer_op->domain, aer_op->bus, aer_op->devfn);
 	/*local flag to mark there's aer request, xen_pcibk callback will use
 	* this flag to judge whether we need to check pci-front give aer
@@ -754,8 +753,7 @@ static pci_ers_result_t common_process(struct pcistub_device *psdev,
 
 	if (test_bit(_XEN_PCIF_active,
 		(unsigned long *)&sh_info->flags)) {
-		dev_dbg(&psdev->dev->dev,
-			"schedule pci_conf service in " DRV_NAME "\n");
+		dev_dbg(&psdev->dev->dev, "schedule pci_conf service\n");
 		xen_pcibk_test_and_schedule_op(psdev->pdev);
 	}
 
@@ -786,13 +784,12 @@ static pci_ers_result_t xen_pcibk_slot_reset(struct pci_dev *dev)
 				PCI_FUNC(dev->devfn));
 
 	if (!psdev || !psdev->pdev) {
-		dev_err(&dev->dev,
-			DRV_NAME " device is not found/assigned\n");
+		dev_err(&dev->dev, "device is not found/assigned\n");
 		goto end;
 	}
 
 	if (!psdev->pdev->sh_info) {
-		dev_err(&dev->dev, DRV_NAME " device is not connected or owned"
+		dev_err(&dev->dev, "device is not connected or owned"
 			" by HVM, kill it\n");
 		kill_domain_by_device(psdev);
 		goto end;
@@ -844,13 +841,12 @@ static pci_ers_result_t xen_pcibk_mmio_enabled(struct pci_dev *dev)
 				PCI_FUNC(dev->devfn));
 
 	if (!psdev || !psdev->pdev) {
-		dev_err(&dev->dev,
-			DRV_NAME " device is not found/assigned\n");
+		dev_err(&dev->dev, "device is not found/assigned\n");
 		goto end;
 	}
 
 	if (!psdev->pdev->sh_info) {
-		dev_err(&dev->dev, DRV_NAME " device is not connected or owned"
+		dev_err(&dev->dev, "device is not connected or owned"
 			" by HVM, kill it\n");
 		kill_domain_by_device(psdev);
 		goto end;
@@ -902,13 +898,12 @@ static pci_ers_result_t xen_pcibk_error_detected(struct pci_dev *dev,
 				PCI_FUNC(dev->devfn));
 
 	if (!psdev || !psdev->pdev) {
-		dev_err(&dev->dev,
-			DRV_NAME " device is not found/assigned\n");
+		dev_err(&dev->dev, "device is not found/assigned\n");
 		goto end;
 	}
 
 	if (!psdev->pdev->sh_info) {
-		dev_err(&dev->dev, DRV_NAME " device is not connected or owned"
+		dev_err(&dev->dev, "device is not connected or owned"
 			" by HVM, kill it\n");
 		kill_domain_by_device(psdev);
 		goto end;
@@ -956,13 +951,12 @@ static void xen_pcibk_error_resume(struct pci_dev *dev)
 				PCI_FUNC(dev->devfn));
 
 	if (!psdev || !psdev->pdev) {
-		dev_err(&dev->dev,
-			DRV_NAME " device is not found/assigned\n");
+		dev_err(&dev->dev, "device is not found/assigned\n");
 		goto end;
 	}
 
 	if (!psdev->pdev->sh_info) {
-		dev_err(&dev->dev, DRV_NAME " device is not connected or owned"
+		dev_err(&dev->dev, "device is not connected or owned"
 			" by HVM, kill it\n");
 		kill_domain_by_device(psdev);
 		goto end;
diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index 787966f44589..8545ca78c4f1 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -6,6 +6,7 @@
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define dev_fmt pr_fmt
 
 #include <linux/moduleparam.h>
 #include <linux/wait.h>
@@ -148,7 +149,7 @@ int xen_pcibk_enable_msi(struct xen_pcibk_device *pdev,
 	int status;
 
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: enable MSI\n", pci_name(dev));
+		dev_printk(KERN_DEBUG, &dev->dev, "enable MSI\n");
 
 	if (dev->msi_enabled)
 		status = -EALREADY;
@@ -158,9 +159,8 @@ int xen_pcibk_enable_msi(struct xen_pcibk_device *pdev,
 		status = pci_enable_msi(dev);
 
 	if (status) {
-		pr_warn_ratelimited("%s: error enabling MSI for guest %u: err %d\n",
-				    pci_name(dev), pdev->xdev->otherend_id,
-				    status);
+		dev_warn_ratelimited(&dev->dev, "error enabling MSI for guest %u: err %d\n",
+				     pdev->xdev->otherend_id, status);
 		op->value = 0;
 		return XEN_PCI_ERR_op_failed;
 	}
@@ -170,8 +170,7 @@ int xen_pcibk_enable_msi(struct xen_pcibk_device *pdev,
 
 	op->value = dev->irq ? xen_pirq_from_irq(dev->irq) : 0;
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: MSI: %d\n", pci_name(dev),
-			op->value);
+		dev_printk(KERN_DEBUG, &dev->dev, "MSI: %d\n", op->value);
 
 	dev_data = pci_get_drvdata(dev);
 	if (dev_data)
@@ -185,8 +184,7 @@ int xen_pcibk_disable_msi(struct xen_pcibk_device *pdev,
 			  struct pci_dev *dev, struct xen_pci_op *op)
 {
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: disable MSI\n",
-		       pci_name(dev));
+		dev_printk(KERN_DEBUG, &dev->dev, "disable MSI\n");
 
 	if (dev->msi_enabled) {
 		struct xen_pcibk_dev_data *dev_data;
@@ -199,8 +197,7 @@ int xen_pcibk_disable_msi(struct xen_pcibk_device *pdev,
 	}
 	op->value = dev->irq ? xen_pirq_from_irq(dev->irq) : 0;
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: MSI: %d\n", pci_name(dev),
-			op->value);
+		dev_printk(KERN_DEBUG, &dev->dev, "MSI: %d\n", op->value);
 	return 0;
 }
 
@@ -214,8 +211,7 @@ int xen_pcibk_enable_msix(struct xen_pcibk_device *pdev,
 	u16 cmd;
 
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: enable MSI-X\n",
-		       pci_name(dev));
+		dev_printk(KERN_DEBUG, &dev->dev, "enable MSI-X\n");
 
 	if (op->value > SH_INFO_MAX_VEC)
 		return -EINVAL;
@@ -249,16 +245,14 @@ int xen_pcibk_enable_msix(struct xen_pcibk_device *pdev,
 				op->msix_entries[i].vector =
 					xen_pirq_from_irq(entries[i].vector);
 				if (unlikely(verbose_request))
-					printk(KERN_DEBUG DRV_NAME ": %s: " \
-						"MSI-X[%d]: %d\n",
-						pci_name(dev), i,
+					dev_printk(KERN_DEBUG, &dev->dev,
+						"MSI-X[%d]: %d\n", i,
 						op->msix_entries[i].vector);
 			}
 		}
 	} else
-		pr_warn_ratelimited("%s: error enabling MSI-X for guest %u: err %d!\n",
-				    pci_name(dev), pdev->xdev->otherend_id,
-				    result);
+		dev_warn_ratelimited(&dev->dev, "error enabling MSI-X for guest %u: err %d!\n",
+				     pdev->xdev->otherend_id, result);
 	kfree(entries);
 
 	op->value = result;
@@ -274,8 +268,7 @@ int xen_pcibk_disable_msix(struct xen_pcibk_device *pdev,
 			   struct pci_dev *dev, struct xen_pci_op *op)
 {
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: disable MSI-X\n",
-			pci_name(dev));
+		dev_printk(KERN_DEBUG, &dev->dev, "disable MSI-X\n");
 
 	if (dev->msix_enabled) {
 		struct xen_pcibk_dev_data *dev_data;
@@ -292,8 +285,7 @@ int xen_pcibk_disable_msix(struct xen_pcibk_device *pdev,
 	 */
 	op->value = dev->irq ? xen_pirq_from_irq(dev->irq) : 0;
 	if (unlikely(verbose_request))
-		printk(KERN_DEBUG DRV_NAME ": %s: MSI-X: %d\n",
-		       pci_name(dev), op->value);
+		dev_printk(KERN_DEBUG, &dev->dev, "MSI-X: %d\n", op->value);
 	return 0;
 }
 #endif
@@ -424,7 +416,7 @@ static irqreturn_t xen_pcibk_guest_interrupt(int irq, void *dev_id)
 		dev_data->handled++;
 		if ((dev_data->handled % 1000) == 0) {
 			if (xen_test_irq_shared(irq)) {
-				pr_info("%s IRQ line is not shared "
+				dev_info(&dev->dev, "%s IRQ line is not shared "
 					"with other domains. Turning ISR off\n",
 					 dev_data->irq_name);
 				dev_data->ack_intr = 0;
diff --git a/drivers/xen/xen-pciback/vpci.c b/drivers/xen/xen-pciback/vpci.c
index f6ba18191c0f..5447b5ab7c76 100644
--- a/drivers/xen/xen-pciback/vpci.c
+++ b/drivers/xen/xen-pciback/vpci.c
@@ -7,6 +7,7 @@
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define dev_fmt pr_fmt
 
 #include <linux/list.h>
 #include <linux/slab.h>
@@ -105,9 +106,8 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
 				       struct pci_dev_entry, list);
 
 			if (match_slot(dev, t->dev)) {
-				pr_info("vpci: %s: assign to virtual slot %d func %d\n",
-					pci_name(dev), slot,
-					PCI_FUNC(dev->devfn));
+				dev_info(&dev->dev, "vpci: assign to virtual slot %d func %d\n",
+					 slot, PCI_FUNC(dev->devfn));
 				list_add_tail(&dev_entry->list,
 					      &vpci_dev->dev_list[slot]);
 				func = PCI_FUNC(dev->devfn);
@@ -119,8 +119,8 @@ static int __xen_pcibk_add_pci_dev(struct xen_pcibk_device *pdev,
 	/* Assign to a new slot on the virtual PCI bus */
 	for (slot = 0; slot < PCI_SLOT_MAX; slot++) {
 		if (list_empty(&vpci_dev->dev_list[slot])) {
-			pr_info("vpci: %s: assign to virtual slot %d\n",
-				pci_name(dev), slot);
+			dev_info(&dev->dev, "vpci: assign to virtual slot %d\n",
+				 slot);
 			list_add_tail(&dev_entry->list,
 				      &vpci_dev->dev_list[slot]);
 			func = dev->is_virtfn ? 0 : PCI_FUNC(dev->devfn);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 17:43:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 17:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je05o-0004RL-9m; Wed, 27 May 2020 17:43:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/NoV=7J=kernel.org=helgaas@srs-us1.protection.inumbo.net>)
 id 1je05m-0004R0-Ia
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 17:43:50 +0000
X-Inumbo-ID: 9f6bdeac-a041-11ea-a76e-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9f6bdeac-a041-11ea-a76e-12813bfff9fa;
 Wed, 27 May 2020 17:43:50 +0000 (UTC)
Received: from localhost (mobile-166-175-190-200.mycingular.net
 [166.175.190.200])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 0DCEE20663;
 Wed, 27 May 2020 17:43:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590601429;
 bh=qsk677uI1G7xG388JRQ1EJcb+HhzqzXoxdTlVWzea00=;
 h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
 b=b38dsy5BnxkemvB3aPYKCSp1dQnRdVruwP9i/zmHaxl2Nc2tYliSvyXorhy6EveSt
 mrBcJSt8Swhi4bC7MYWSGFXFuo0fZmzniwoYXhi4KFqdazKK4EdZSWBjgFIgk8Nwlq
 pvKME8zXxxqF+hce0F4a5Kawn3FZbQjYHtntOCuI=
From: Bjorn Helgaas <helgaas@kernel.org>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>
Subject: [PATCH 2/2] xenbus: Use dev_printk() when possible
Date: Wed, 27 May 2020 12:43:26 -0500
Message-Id: <20200527174326.254329-3-helgaas@kernel.org>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20200527174326.254329-1-helgaas@kernel.org>
References: <20200527174326.254329-1-helgaas@kernel.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bjorn Helgaas <bhelgaas@google.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Bjorn Helgaas <bhelgaas@google.com>

Use dev_printk() when possible to include device and driver information in
the conventional format.

Add "#define dev_fmt" to preserve KBUILD_MODNAME in messages.

No functional change intended.

Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>
---
 drivers/xen/xenbus/xenbus_probe.c | 11 +++++------
 1 file changed, 5 insertions(+), 6 deletions(-)

diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 8c4d05b687b7..ee9a9170b43a 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -31,6 +31,7 @@
  */
 
 #define pr_fmt(fmt) KBUILD_MODNAME ": " fmt
+#define dev_fmt pr_fmt
 
 #define DPRINTK(fmt, args...)				\
 	pr_debug("xenbus_probe (%s:%d) " fmt ".\n",	\
@@ -608,7 +609,7 @@ int xenbus_dev_suspend(struct device *dev)
 	if (drv->suspend)
 		err = drv->suspend(xdev);
 	if (err)
-		pr_warn("suspend %s failed: %i\n", dev_name(dev), err);
+		dev_warn(dev, "suspend failed: %i\n", err);
 	return 0;
 }
 EXPORT_SYMBOL_GPL(xenbus_dev_suspend);
@@ -627,8 +628,7 @@ int xenbus_dev_resume(struct device *dev)
 	drv = to_xenbus_driver(dev->driver);
 	err = talk_to_otherend(xdev);
 	if (err) {
-		pr_warn("resume (talk_to_otherend) %s failed: %i\n",
-			dev_name(dev), err);
+		dev_warn(dev, "resume (talk_to_otherend) failed: %i\n", err);
 		return err;
 	}
 
@@ -637,15 +637,14 @@ int xenbus_dev_resume(struct device *dev)
 	if (drv->resume) {
 		err = drv->resume(xdev);
 		if (err) {
-			pr_warn("resume %s failed: %i\n", dev_name(dev), err);
+			dev_warn(dev, "resume failed: %i\n", err);
 			return err;
 		}
 	}
 
 	err = watch_otherend(xdev);
 	if (err) {
-		pr_warn("resume (watch_otherend) %s failed: %d.\n",
-			dev_name(dev), err);
+		dev_warn(dev, "resume (watch_otherend) failed: %d\n", err);
 		return err;
 	}
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed May 27 18:10:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 18:10:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je0V3-0006SL-Fi; Wed, 27 May 2020 18:09:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EnSh=7J=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1je0V2-0006SG-8n
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 18:09:56 +0000
X-Inumbo-ID: 447cce08-a045-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 447cce08-a045-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 18:09:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5MFQJumJ2IAunmsj+TT2q2BnfUmhDTmq47ASVCA/ruY=; b=s231dMdP+Z5t91qdWmgEk9KQsG
 uMWwbDPsKqy4K2FVOQ8wRxccOvElSE4DaOxcoj88lDOsKjeO7x6S3Z8e3yo+QuvI01I60zRzMAJCl
 1dDy03Q+rYVPfvQiGAVBHoZnTorb3pDcjoDgRU5eQzkb03VhSmKqE+ldPEdNJxPeo6Yc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1je0Uz-0001x3-Ry; Wed, 27 May 2020 18:09:53 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1je0Uz-0005nh-Lk; Wed, 27 May 2020 18:09:53 +0000
Subject: Re: [PATCH 12/12] xen/arm: call iomem_permit_access for passthrough
 devices
To: Stefano Stabellini <sstabellini@kernel.org>
References: <alpine.DEB.2.21.2004141746350.8746@sstabellini-ThinkPad-T480s>
 <20200415010255.10081-12-sstabellini@kernel.org>
 <521c8e55-73e8-950f-2d94-70b0c664bd3d@xen.org>
 <alpine.DEB.2.21.2004291318270.28941@sstabellini-ThinkPad-T480s>
 <f7f01eca-2415-e102-318f-0c58606fda96@xen.org>
 <453b58f8-d9ee-bbe7-ac05-b5268620e79f@xen.org>
 <alpine.DEB.2.21.2005260941250.27502@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <0f79d268-b154-0674-4c33-a06a0b585142@xen.org>
Date: Wed, 27 May 2020 19:09:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2005260941250.27502@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>, Volodymyr_Babchuk@epam.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Stefano,

On 26/05/2020 17:46, Stefano Stabellini wrote:
> On Sun, 24 May 2020, Julien Grall wrote:
>> On 30/04/2020 14:01, Julien Grall wrote:
>>> On 29/04/2020 21:47, Stefano Stabellini wrote:
>>>> On Wed, 15 Apr 2020, Julien Grall wrote:
>>>> But doesn't it make sense to give domU permission if it is going to get
>>>> the memory mapped? But admittedly I can't think of something that would
>>>> break because of the lack of the iomem_permit_access call in this code
>>>> path.
>>>
>>> On Arm, the permissions are only useful if you plan your DomU to delegate
>>> the regions to another domain. As your domain is not even aware it is
>>> running on Xen (we don't expose the 'xen' node in the DT), it makes little
>>> sense to add the permission.
>>
>> I actually found one use when helping a user last week. You can dump the list
>> of MMIO regions assigned to a guest from Xen Console.
>>
>> This will use d->iomem_caps that is modified via iomem_permit_access().
>> Without it, there is no easy way to confirm the list of MMIO regions assigned
>> to a guest. Although...
>>
>>> Even today, you can map IOMEM to a DomU and then revert the permission
>>> right after. The IOMEM will still be mapped in the guest and it will act
>>> normally.
>>
>> ... this would not help the case where permissions are reverted. But I am
>> assuming this shouldn't happen for Dom0less.
> 
> Thank you for looking into this
> 
> 
>> Stefano, I am not sure what's your plan for the series itself for Xen 4.14. I
>> think this patch could go in now. Any thoughts?
> 
> For the series: I have addressed all comments in my working tree except
> for the ones on memory allocation (the thread "xen: introduce
> reserve_heap_pages"). It looks like that part requires a complete
> rewrite, and it seems that the new code is not trivial to write. So I am
> thinking of not targeting 4.14. What do you think? Do you think the new
> code should be "easy" enough that I could target 4.14?
It may be a stretch with the code freeze on Friday. I would suggest
sending it when it is ready, and we can include it either in Xen 4.14 or
as soon as the tree re-opens.

> 
> For this patch: it is fine to go in now, doesn't have to wait for the
> series.

Feel free to add my ack on the patch.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 27 18:14:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 18:14:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je0Zf-0007HN-4j; Wed, 27 May 2020 18:14:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ARaW=7J=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1je0Ze-0007HI-15
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 18:14:42 +0000
X-Inumbo-ID: eee6dd0c-a045-11ea-a771-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eee6dd0c-a045-11ea-a771-12813bfff9fa;
 Wed, 27 May 2020 18:14:41 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=+hLus+pX/7H3a22ZBroR+aLqB4dL12CGzDLFgeud9xk=; b=CWIEmF0bhY7CTfTc3wJAdOJxZT
 pqME1DQ3IB6RT4JgwD4b31a9hNxZYmB3hCLXbBQ8xfiUQBXAuOSug9tB7fzo+Iie2PV6FPTjtObDx
 1L9nEFGjQFZ+0vij5kjPThmyCAo44sFD9EvqT62sWoYx903jNpDV9oAh8fflcnTaY/e8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1je0Zb-00022w-Hg; Wed, 27 May 2020 18:14:39 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1je0Zb-0006A4-7C; Wed, 27 May 2020 18:14:39 +0000
Date: Wed, 27 May 2020 20:14:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: Re: [PATCH v2] x86/svm: retry after unhandled NPT fault if gfn was
 marked for recalculation
Message-ID: <20200527181414.GD1195@Air-de-Roger>
References: <1590541308-11317-1-git-send-email-igor.druzhinin@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <1590541308-11317-1-git-send-email-igor.druzhinin@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: kevin.tian@intel.com, jbeulich@suse.com, wl@xen.org,
 andrew.cooper3@citrix.com, jun.nakajima@intel.com,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 27, 2020 at 02:01:48AM +0100, Igor Druzhinin wrote:
> If a recalculation NPT fault hasn't been handled explicitly in
> hvm_hap_nested_page_fault() then it's potentially safe to retry -
> US bit has been re-instated in PTE and any real fault would be correctly
> re-raised next time. Do it by allowing hvm_hap_nested_page_fault to
> fall through in that case.
> 
> This covers a specific case of migration with vGPU assigned on AMD:
> global log-dirty is enabled and causes immediate recalculation NPT
> fault in MMIO area upon access. This type of fault isn't described
> explicitly in hvm_hap_nested_page_fault (this isn't called on
> EPT misconfig exit on Intel) which results in domain crash.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
> Changes in v2:
> - don't gamble with retrying every recal fault and instead let
>   hvm_hap_nested_page_fault know it's allowed to fall through in default case
> ---
>  xen/arch/x86/hvm/hvm.c        | 6 +++---
>  xen/arch/x86/hvm/svm/svm.c    | 7 ++++++-
>  xen/arch/x86/hvm/vmx/vmx.c    | 2 +-
>  xen/include/asm-x86/hvm/hvm.h | 2 +-
>  4 files changed, 11 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
> index 74c9f84..42bd720 100644
> --- a/xen/arch/x86/hvm/hvm.c
> +++ b/xen/arch/x86/hvm/hvm.c
> @@ -1731,7 +1731,7 @@ void hvm_inject_event(const struct x86_event *event)
>  }
>  
>  int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
> -                              struct npfec npfec)
> +                              struct npfec npfec, bool fall_through)
>  {
>      unsigned long gfn = gpa >> PAGE_SHIFT;
>      p2m_type_t p2mt;
> @@ -1740,7 +1740,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>      struct vcpu *curr = current;
>      struct domain *currd = curr->domain;
>      struct p2m_domain *p2m, *hostp2m;
> -    int rc, fall_through = 0, paged = 0;
> +    int rc, paged = 0;
>      bool sharing_enomem = false;
>      vm_event_request_t *req_ptr = NULL;
>      bool sync = false;

I would assert that the parameter is never set when running on Intel,
since that code path is not supposed to use it.

I also wonder whether it would be possible to avoid passing a
parameter, and instead check whether the guest is in logdirty mode on
AMD and whether the fault is actually a logdirty-triggered one. That
would IMO make it more robust, since the caller wouldn't need to care
about whether it's a recalc fault or not.

> @@ -1905,7 +1905,7 @@ int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
>              sync = p2m_mem_access_check(gpa, gla, npfec, &req_ptr);
>  
>              if ( !sync )
> -                fall_through = 1;
> +                fall_through = true;
>              else
>              {
>                  /* Rights not promoted (aka. sync event), work here is done */
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 46a1aac..8ef3fed 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -1695,7 +1695,12 @@ static void svm_do_nested_pgfault(struct vcpu *v,
>      else if ( pfec & NPT_PFEC_in_gpt )
>          npfec.kind = npfec_kind_in_gpt;
>  
> -    ret = hvm_hap_nested_page_fault(gpa, ~0ul, npfec);
> +    /*
> +     * The US bit being set in the error code indicates that a P2M type
> +     * recalculation has just been done, meaning it's possible there is
> +     * nothing else to handle and we can just fall through and retry.
> +     */
> +    ret = hvm_hap_nested_page_fault(gpa, ~0ul, npfec, !!(pfec & PFEC_user_mode));
>  
>      if ( tb_init_done )
>      {
> diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
> index 11a4dd9..10f1eeb 100644
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -3398,7 +3398,7 @@ static void ept_handle_violation(ept_qual_t q, paddr_t gpa)
>      else
>          gla = ~0ull;
>  
> -    ret = hvm_hap_nested_page_fault(gpa, gla, npfec);
> +    ret = hvm_hap_nested_page_fault(gpa, gla, npfec, false);
>      switch ( ret )
>      {
>      case 0:         // Unhandled L1 EPT violation
> diff --git a/xen/include/asm-x86/hvm/hvm.h b/xen/include/asm-x86/hvm/hvm.h
> index 1eb377d..03e5f1d 100644
> --- a/xen/include/asm-x86/hvm/hvm.h
> +++ b/xen/include/asm-x86/hvm/hvm.h
> @@ -329,7 +329,7 @@ void hvm_fast_singlestep(struct vcpu *v, uint16_t p2midx);
>  
>  struct npfec;
>  int hvm_hap_nested_page_fault(paddr_t gpa, unsigned long gla,
> -                              struct npfec npfec);
> +                              struct npfec npfec, bool fall_through);

I would rename fall_through to recalc, recalculate or misconfig. It's
not easy to understand the meaning of the parameter when looking at
the function prototype.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 27 18:55:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 18:55:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1CI-0002Ak-83; Wed, 27 May 2020 18:54:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1je1CH-0002Af-4u
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 18:54:37 +0000
X-Inumbo-ID: 7f5df82a-a04b-11ea-a776-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f5df82a-a04b-11ea-a776-12813bfff9fa;
 Wed, 27 May 2020 18:54:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5I1JXReOWHrWHIvw53wUpwrnnjLm/8jfpP2B/coZYlY=; b=y94/7+UY2rspyQ6l9eGT7C3Oa
 wI6M275opGpAWszBtXgjezjqhUdG6GnTVhDQb85mESnKiYCfA3hjUZ+Tj4Gy2okV37iWL8yFlhypm
 Nr3Nu4FEDUKlavLdcmtnH5nSOJnhUFjxjZvizBN2MWy176J9SNsZDMklmvb3nojHTCptY=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1je1CA-0002pp-9B; Wed, 27 May 2020 18:54:30 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1je1C9-0000M3-Pm; Wed, 27 May 2020 18:54:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1je1C9-0007Nl-P2; Wed, 27 May 2020 18:54:29 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150407-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150407: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=9f3e9139fa6c3d620eb08dff927518fc88200b8d
X-Osstest-Versions-That: xen=b66e28226dd9df8a28101438f44c0a26d63b76fa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 18:54:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150407 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150407/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9f3e9139fa6c3d620eb08dff927518fc88200b8d
baseline version:
 xen                  b66e28226dd9df8a28101438f44c0a26d63b76fa

Last test of basis   150396  2020-05-27 08:09:48 Z    0 days
Testing same since   150407  2020-05-27 16:00:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b66e28226d..9f3e9139fa  9f3e9139fa6c3d620eb08dff927518fc88200b8d -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1a7-0003zq-FO; Wed, 27 May 2020 19:19:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1a6-0003zb-5F
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:14 +0000
X-Inumbo-ID: f27751dc-a04e-11ea-a777-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f27751dc-a04e-11ea-a777-12813bfff9fa;
 Wed, 27 May 2020 19:19:12 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ija9DmiPgoPmZNcL3bjjdKcpLaVY+8guYONj0oMriHZeiKRsozlQ4o9aitlckvP1s6btqJjxh1
 a18qRYR3chz2xqEmZi1iYjPkG72SPuWjfXyhVFoGpAviU1XOrbI66ZIQIbfdo4gTDSN5jj9AgB
 s6lRv7mYcHllksiQ687H74WJMWuU5rfZhBYRoEdw8kjtIZt8gT6+KHdasj18k6r8n/CwzOQovE
 kl3xiEYvA2VvsgwwvuZvjliwLI/IDNpuSXuy1dJz9R6NEaTju6WmHDxpgad7sSTQhi6ISl/cjt
 uPA=
X-SBRS: 2.7
X-MesageID: 18591103
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18591103"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 02/14] x86/traps: Factor out extable_fixup() and make
 printing consistent
Date: Wed, 27 May 2020 20:18:35 +0100
Message-ID: <20200527191847.17207-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

UD faults never had any diagnostics printed, and the others were inconsistent.

Don't use dprintk() because identifying traps.c is actively unhelpful in the
message, as it is the location of the fixup, not the fault.  Use the new
vec_name() infrastructure, rather than leaving raw numbers for the log.

  (XEN) Running stub recovery selftests...
  (XEN) Fixup #UD[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
  (XEN) Fixup #GP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6
  (XEN) Fixup #SS[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
  (XEN) Fixup #BP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Rebase
 * Rename to extable_fixup() to better distinguish from fixup_page_fault()
---
 xen/arch/x86/traps.c | 77 ++++++++++++++++++++++++----------------------------
 1 file changed, 35 insertions(+), 42 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 427178e649..eeb3e146ef 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -772,10 +772,31 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
           trapnr, vec_name(trapnr), regs->error_code);
 }
 
+static bool extable_fixup(struct cpu_user_regs *regs, bool print)
+{
+    unsigned long fixup = search_exception_table(regs);
+
+    if ( unlikely(fixup == 0) )
+        return false;
+
+    /*
+     * Don't use dprintk() because the __FILE__ reference is unhelpful.
+     * Can currently be triggered by guests.  Make sure we ratelimit.
+     */
+    if ( IS_ENABLED(CONFIG_DEBUG) && print )
+        printk(XENLOG_GUEST XENLOG_WARNING "Fixup %s[%04x]: %p [%ps] -> %p\n",
+               vec_name(regs->entry_vector), regs->error_code,
+               _p(regs->rip), _p(regs->rip), _p(fixup));
+
+    regs->rip = fixup;
+    this_cpu(last_extable_addr) = regs->rip;
+
+    return true;
+}
+
 static void do_trap(struct cpu_user_regs *regs)
 {
     unsigned int trapnr = regs->entry_vector;
-    unsigned long fixup;
 
     if ( regs->error_code & X86_XEC_EXT )
         goto hardware_trap;
@@ -793,14 +814,8 @@ static void do_trap(struct cpu_user_regs *regs)
         return;
     }
 
-    if ( likely((fixup = search_exception_table(regs)) != 0) )
-    {
-        dprintk(XENLOG_ERR, "Trap %u: %p [%ps] -> %p\n",
-                trapnr, _p(regs->rip), _p(regs->rip), _p(fixup));
-        this_cpu(last_extable_addr) = regs->rip;
-        regs->rip = fixup;
+    if ( likely(extable_fixup(regs, true)) )
         return;
-    }
 
  hardware_trap:
     if ( debugger_trap_fatal(trapnr, regs) )
@@ -1108,12 +1123,8 @@ void do_invalid_op(struct cpu_user_regs *regs)
     }
 
  die:
-    if ( (fixup = search_exception_table(regs)) != 0 )
-    {
-        this_cpu(last_extable_addr) = regs->rip;
-        regs->rip = fixup;
+    if ( likely(extable_fixup(regs, true)) )
         return;
-    }
 
     if ( debugger_trap_fatal(TRAP_invalid_op, regs) )
         return;
@@ -1129,16 +1140,8 @@ void do_int3(struct cpu_user_regs *regs)
 
     if ( !guest_mode(regs) )
     {
-        unsigned long fixup;
-
-        if ( (fixup = search_exception_table(regs)) != 0 )
-        {
-            this_cpu(last_extable_addr) = regs->rip;
-            dprintk(XENLOG_DEBUG, "Trap %u: %p [%ps] -> %p\n",
-                    TRAP_int3, _p(regs->rip), _p(regs->rip), _p(fixup));
-            regs->rip = fixup;
+        if ( likely(extable_fixup(regs, true)) )
             return;
-        }
 
         if ( !debugger_trap_fatal(TRAP_int3, regs) )
             printk(XENLOG_DEBUG "Hit embedded breakpoint at %p [%ps]\n",
@@ -1415,7 +1418,7 @@ static int fixup_page_fault(unsigned long addr, struct cpu_user_regs *regs)
 
 void do_page_fault(struct cpu_user_regs *regs)
 {
-    unsigned long addr, fixup;
+    unsigned long addr;
     unsigned int error_code;
 
     addr = read_cr2();
@@ -1461,11 +1464,9 @@ void do_page_fault(struct cpu_user_regs *regs)
         if ( pf_type != real_fault )
             return;
 
-        if ( likely((fixup = search_exception_table(regs)) != 0) )
+        if ( likely(extable_fixup(regs, false)) )
         {
             perfc_incr(copy_user_faults);
-            this_cpu(last_extable_addr) = regs->rip;
-            regs->rip = fixup;
             return;
         }
 
@@ -1521,7 +1522,6 @@ void do_general_protection(struct cpu_user_regs *regs)
 #ifdef CONFIG_PV
     struct vcpu *v = current;
 #endif
-    unsigned long fixup;
 
     if ( debugger_trap_entry(TRAP_gp_fault, regs) )
         return;
@@ -1588,14 +1588,8 @@ void do_general_protection(struct cpu_user_regs *regs)
 
  gp_in_kernel:
 
-    if ( likely((fixup = search_exception_table(regs)) != 0) )
-    {
-        dprintk(XENLOG_INFO, "GPF (%04x): %p [%ps] -> %p\n",
-                regs->error_code, _p(regs->rip), _p(regs->rip), _p(fixup));
-        this_cpu(last_extable_addr) = regs->rip;
-        regs->rip = fixup;
+    if ( likely(extable_fixup(regs, true)) )
         return;
-    }
 
  hardware_gp:
     if ( debugger_trap_fatal(TRAP_gp_fault, regs) )
@@ -1754,18 +1748,17 @@ void do_device_not_available(struct cpu_user_regs *regs)
 
     if ( !guest_mode(regs) )
     {
-        unsigned long fixup = search_exception_table(regs);
-
-        gprintk(XENLOG_ERR, "#NM: %p [%ps] -> %p\n",
-                _p(regs->rip), _p(regs->rip), _p(fixup));
         /*
          * We shouldn't be able to reach here, but for release builds have
          * the recovery logic in place nevertheless.
          */
-        ASSERT_UNREACHABLE();
-        BUG_ON(!fixup);
-        regs->rip = fixup;
-        return;
+        if ( extable_fixup(regs, true) )
+        {
+            ASSERT_UNREACHABLE();
+            return;
+        }
+
+        fatal_trap(regs, false);
     }
 
 #ifdef CONFIG_PV
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aC-00040U-0L; Wed, 27 May 2020 19:19:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aB-00040F-2C
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:19 +0000
X-Inumbo-ID: f27751dd-a04e-11ea-a777-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f27751dd-a04e-11ea-a777-12813bfff9fa;
 Wed, 27 May 2020 19:19:14 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: pWuwntnoMuy1EWQAtQPoeE0QeTGmXZH8kVIvZIS1A0pYGGSDK0Iaqz1Wu/VZdWrDd2uD9kdgDp
 hXxKMHV/BedfVZ5aqPmMWJrOhAWvWRxeGpuIgBZSir7iyKh8oeMf2f0WaF/cGfrcAd7N/3glxy
 tHJ2omm/L1ade4G2eT5t3rvxHGzFaH7jl3Yvgkytrz4GdSqn3vxARB6zJz5kLjF+ySOcvcH6PH
 8JlASs/wj6vHj7T0vaC/H2ehG67bvrm00Wlm2aFUQix7eZDi5IJggYT/mX8/I6Ed5a+f1YDXP+
 KNs=
X-SBRS: 2.7
X-MesageID: 18591104
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18591104"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 01/14] x86/traps: Clean up printing in {do_reserved,
 fatal}_trap()
Date: Wed, 27 May 2020 20:18:34 +0100
Message-ID: <20200527191847.17207-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For one, they render the vector in a different base.

Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
mnemonic, which starts bringing the code/diagnostics in line with the Intel
and AMD manuals.

Provide constants for every architecturally defined exception, even those not
implemented by Xen yet, as do_reserved_trap() is a catch-all handler.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Move "#" into vec_name() to skip it for the 3-character vectors
---
 xen/arch/x86/traps.c            | 26 +++++++++++++++++++++-----
 xen/include/asm-x86/processor.h |  6 +-----
 xen/include/asm-x86/x86-defns.h | 34 ++++++++++++++++++++++++++++++++++
 3 files changed, 56 insertions(+), 10 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index a8300c214d..427178e649 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -682,6 +682,22 @@ const char *trapstr(unsigned int trapnr)
     return trapnr < ARRAY_SIZE(strings) ? strings[trapnr] : "???";
 }
 
+static const char *vec_name(unsigned int vec)
+{
+    static const char names[][4] = {
+#define P(x) [X86_EXC_ ## x] = "#" #x
+#define N(x) [X86_EXC_ ## x] = #x
+        P(DE),  P(DB),  N(NMI), P(BP),  P(OF),  P(BR),  P(UD),  P(NM),
+        P(DF),  N(CSO), P(TS),  P(NP),  P(SS),  P(GP),  P(PF),  N(SPV),
+        P(MF),  P(AC),  P(MC),  P(XM),  P(VE),  P(CP),
+                                        P(HV),  P(VC),  P(SX),
+#undef N
+#undef P
+    };
+
+    return (vec < ARRAY_SIZE(names) && names[vec][0]) ? names[vec] : "???";
+}
+
 /*
  * This is called for faults at very unexpected times (e.g., when interrupts
  * are disabled). In such situations we can't do much that is safe. We try to
@@ -739,10 +755,9 @@ void fatal_trap(const struct cpu_user_regs *regs, bool show_remote)
         }
     }
 
-    panic("FATAL TRAP: vector = %d (%s)\n"
-          "[error_code=%04x] %s\n",
-          trapnr, trapstr(trapnr), regs->error_code,
-          (regs->eflags & X86_EFLAGS_IF) ? "" : ", IN INTERRUPT CONTEXT");
+    panic("FATAL TRAP: vec %u, %s[%04x]%s\n",
+          trapnr, vec_name(trapnr), regs->error_code,
+          (regs->eflags & X86_EFLAGS_IF) ? "" : " IN INTERRUPT CONTEXT");
 }
 
 static void do_reserved_trap(struct cpu_user_regs *regs)
@@ -753,7 +768,8 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
         return;
 
     show_execution_state(regs);
-    panic("FATAL RESERVED TRAP %#x: %s\n", trapnr, trapstr(trapnr));
+    panic("FATAL RESERVED TRAP: vec %u, %s[%04x]\n",
+          trapnr, vec_name(trapnr), regs->error_code);
 }
 
 static void do_trap(struct cpu_user_regs *regs)
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 070691882b..96deac73ed 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -43,11 +43,7 @@
 #define TRAP_virtualisation   20
 #define TRAP_nr               32
 
-#define TRAP_HAVE_EC                                                    \
-    ((1u << TRAP_double_fault) | (1u << TRAP_invalid_tss) |             \
-     (1u << TRAP_no_segment) | (1u << TRAP_stack_error) |               \
-     (1u << TRAP_gp_fault) | (1u << TRAP_page_fault) |                  \
-     (1u << TRAP_alignment_check))
+#define TRAP_HAVE_EC X86_EXC_HAVE_EC
 
 /* Set for entry via SYSCALL. Informs return code to use SYSRETQ not IRETQ. */
 /* NB. Same as VGCF_in_syscall. No bits in common with any other TRAP_ defn. */
diff --git a/xen/include/asm-x86/x86-defns.h b/xen/include/asm-x86/x86-defns.h
index 8bf503220a..5366e2d018 100644
--- a/xen/include/asm-x86/x86-defns.h
+++ b/xen/include/asm-x86/x86-defns.h
@@ -118,4 +118,38 @@
 
 #define X86_NR_VECTORS 256
 
+/* Exception Vectors */
+#define X86_EXC_DE             0 /* Divide Error. */
+#define X86_EXC_DB             1 /* Debug Exception. */
+#define X86_EXC_NMI            2 /* NMI. */
+#define X86_EXC_BP             3 /* Breakpoint. */
+#define X86_EXC_OF             4 /* Overflow. */
+#define X86_EXC_BR             5 /* BOUND Range. */
+#define X86_EXC_UD             6 /* Invalid Opcode. */
+#define X86_EXC_NM             7 /* Device Not Available. */
+#define X86_EXC_DF             8 /* Double Fault. */
+#define X86_EXC_CSO            9 /* Coprocessor Segment Overrun. */
+#define X86_EXC_TS            10 /* Invalid TSS. */
+#define X86_EXC_NP            11 /* Segment Not Present. */
+#define X86_EXC_SS            12 /* Stack-Segment Fault. */
+#define X86_EXC_GP            13 /* General Protection Fault. */
+#define X86_EXC_PF            14 /* Page Fault. */
+#define X86_EXC_SPV           15 /* PIC Spurious Interrupt Vector. */
+#define X86_EXC_MF            16 /* Maths fault (x87 FPU). */
+#define X86_EXC_AC            17 /* Alignment Check. */
+#define X86_EXC_MC            18 /* Machine Check. */
+#define X86_EXC_XM            19 /* SIMD Exception. */
+#define X86_EXC_VE            20 /* Virtualisation Exception. */
+#define X86_EXC_CP            21 /* Control-flow Protection. */
+#define X86_EXC_HV            28 /* Hypervisor Injection. */
+#define X86_EXC_VC            29 /* VMM Communication. */
+#define X86_EXC_SX            30 /* Security Exception. */
+
+/* Bitmap of exceptions which have error codes. */
+#define X86_EXC_HAVE_EC                                             \
+    ((1u << X86_EXC_DF) | (1u << X86_EXC_TS) | (1u << X86_EXC_NP) | \
+     (1u << X86_EXC_SS) | (1u << X86_EXC_GP) | (1u << X86_EXC_PF) | \
+     (1u << X86_EXC_AC) | (1u << X86_EXC_CP) |                      \
+     (1u << X86_EXC_VC) | (1u << X86_EXC_SX))
+
 #endif	/* __XEN_X86_DEFNS_H__ */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aB-00040L-Nh; Wed, 27 May 2020 19:19:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aA-000407-PL
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:18 +0000
X-Inumbo-ID: f267606a-a04e-11ea-81bc-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f267606a-a04e-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 19:19:12 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: PqU4pUb91tB9Qzz3QFZ5kdCgJyyeaIaQRM6P1RfR9bMkxLxj5+zFmOGBmAEaXDS6T9LmWcaiR/
 NznYQ7Ek1SAKgop8OoJXAsDup0hcFx2/qh5n7dSxICCbcOWKmVwtWImqvP+CbZs1q14aVZX8hl
 5omvACdCjxk6j1FDGP6amTVLSx6ufQyG5scC+Xns65sJgWJ+Ioj9Zq51LxH+cMQMtBsKpW0w9x
 s+VKO+5C8cJvfrTjrB055dJH8Po8Js3Dq2IPx3RxeeffSWkFxxl2MViGpCKNFEGgI3v39k2QT0
 Dgw=
X-SBRS: 2.7
X-MesageID: 19333916
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="19333916"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 03/14] x86/shstk: Introduce Supervisor Shadow Stack support
Date: Wed, 27 May 2020 20:18:36 +0100
Message-ID: <20200527191847.17207-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce CONFIG_HAS_AS_CET to determine whether CET instructions are
supported in the assembler, and CONFIG_XEN_SHSTK as the main build option.

Introduce cet={no-,}shstk for a user to select whether or not to use shadow
stacks at runtime, and X86_FEATURE_XEN_SHSTK to determine Xen's overall
enablement of shadow stacks.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

LLVM 6 supports CET-SS instructions while only LLVM 7 supports CET-IBT
instructions.  We'd need to split HAS_AS_CET into two if we want to support
supervisor shadow stacks with LLVM 6.  (This demonstrates exactly why picking
a handful of instructions to test is the right approach.)

v2:
 * Leave a comment identifying minimum toolchain support, to make it easier to
   remove ifdefary in the future when bumping minima.
 * Reindent CONFIG_XEN_SHSTK help text.
 * Rename xen= to cet=.  Add documentation, __init.
---
 docs/misc/xen-command-line.pandoc | 17 +++++++++++++++++
 xen/arch/x86/Kconfig              | 18 ++++++++++++++++++
 xen/arch/x86/setup.c              | 30 ++++++++++++++++++++++++++++++
 xen/include/asm-x86/cpufeature.h  |  1 +
 xen/include/asm-x86/cpufeatures.h |  1 +
 xen/scripts/Kconfig.include       |  4 ++++
 6 files changed, 71 insertions(+)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e16bb90184..d4934eabb7 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -270,6 +270,23 @@ and not running softirqs. Reduce this if softirqs are not being run frequently
 enough. Setting this to a high value may cause boot failure, particularly if
 the NMI watchdog is also enabled.
 
+### cet
+    = List of [ shstk=<bool> ]
+
+    Applicability: x86
+
+Controls for the use of Control-flow Enforcement Technology.  CET is a group
+hardware features designed to combat Return-oriented Programming (ROP, also
+call/jmp COP/JOP) attacks.
+
+*   The `shstk=` boolean controls whether Xen uses Shadow Stacks for its own
+    protection.
+
+    The option is available when `CONFIG_XEN_SHSTK` is compiled in, and
+    defaults to `true` on hardware supporting CET-SS.  Specifying
+    `cet=no-shstk` will cause Xen not to use Shadow Stacks even when support
+    is available in hardware.
+
 ### clocksource (x86)
 > `= pit | hpet | acpi | tsc`
 
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index b565f6831d..304a42ffb2 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -34,6 +34,10 @@ config ARCH_DEFCONFIG
 config INDIRECT_THUNK
 	def_bool $(cc-option,-mindirect-branch-register)
 
+config HAS_AS_CET
+	# binutils >= 2.29 and LLVM >= 7
+	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)
+
 menu "Architecture Features"
 
 source "arch/Kconfig"
@@ -97,6 +101,20 @@ config HVM
 
 	  If unsure, say Y.
 
+config XEN_SHSTK
+	bool "Supervisor Shadow Stacks"
+	depends on HAS_AS_CET && EXPERT = "y"
+	default y
+	---help---
+	  Control-flow Enforcement Technology (CET) is a set of features in
+	  hardware designed to combat Return-oriented Programming (ROP, also
+	  call/jump COP/JOP) attacks.  Shadow Stacks are one CET feature
+	  designed to provide return address protection.
+
+	  This option arranges for Xen to use CET-SS for its own protection.
+	  When CET-SS is active, 32bit PV guests cannot be used.  Backwards
+	  compatibility can be provided via the PV Shim mechanism.
+
 config SHADOW_PAGING
         bool "Shadow Paging"
         default y
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 2dec7a3fc6..584589baff 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -95,6 +95,36 @@ unsigned long __initdata highmem_start;
 size_param("highmem-start", highmem_start);
 #endif
 
+static bool __initdata opt_xen_shstk = true;
+
+static int __init parse_cet(const char *s)
+{
+    const char *ss;
+    int val, rc = 0;
+
+    do {
+        ss = strchr(s, ',');
+        if ( !ss )
+            ss = strchr(s, '\0');
+
+        if ( (val = parse_boolean("shstk", s, ss)) >= 0 )
+        {
+#ifdef CONFIG_XEN_SHSTK
+            opt_xen_shstk = val;
+#else
+            no_config_param("XEN_SHSTK", "cet", s, ss);
+#endif
+        }
+        else
+            rc = -EINVAL;
+
+        s = ss + 1;
+    } while ( *ss );
+
+    return rc;
+}
+custom_param("cet", parse_cet);
+
 cpumask_t __read_mostly cpu_present_map;
 
 unsigned long __read_mostly xen_phys_start;
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index cadef4e824..b831448eba 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -137,6 +137,7 @@
 #define cpu_has_aperfmperf      boot_cpu_has(X86_FEATURE_APERFMPERF)
 #define cpu_has_lfence_dispatch boot_cpu_has(X86_FEATURE_LFENCE_DISPATCH)
 #define cpu_has_xen_lbr         boot_cpu_has(X86_FEATURE_XEN_LBR)
+#define cpu_has_xen_shstk       boot_cpu_has(X86_FEATURE_XEN_SHSTK)
 
 #define cpu_has_msr_tsc_aux     (cpu_has_rdtscp || cpu_has_rdpid)
 
diff --git a/xen/include/asm-x86/cpufeatures.h b/xen/include/asm-x86/cpufeatures.h
index b9d3cac975..d7e42d9bb6 100644
--- a/xen/include/asm-x86/cpufeatures.h
+++ b/xen/include/asm-x86/cpufeatures.h
@@ -38,6 +38,7 @@ XEN_CPUFEATURE(XEN_LBR,           X86_SYNTH(22)) /* Xen uses MSR_DEBUGCTL.LBR */
 XEN_CPUFEATURE(SC_VERW_PV,        X86_SYNTH(23)) /* VERW used by Xen for PV */
 XEN_CPUFEATURE(SC_VERW_HVM,       X86_SYNTH(24)) /* VERW used by Xen for HVM */
 XEN_CPUFEATURE(SC_VERW_IDLE,      X86_SYNTH(25)) /* VERW used by Xen for idle */
+XEN_CPUFEATURE(XEN_SHSTK,         X86_SYNTH(26)) /* Xen uses CET Shadow Stacks */
 
 /* Bug words follow the synthetic words. */
 #define X86_NR_BUG 1
diff --git a/xen/scripts/Kconfig.include b/xen/scripts/Kconfig.include
index 8221095ca3..e1f13e1720 100644
--- a/xen/scripts/Kconfig.include
+++ b/xen/scripts/Kconfig.include
@@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
 # Return y if the linker supports <flag>, n otherwise
 ld-option = $(success,$(LD) -v $(1))
 
+# $(as-instr,<instr>)
+# Return y if the assembler supports <instr>, n otherwise
+as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
+
 # check if $(CC) and $(LD) exist
 $(error-if,$(failure,command -v $(CC)),compiler '$(CC)' not found)
 $(error-if,$(failure,command -v $(LD)),linker '$(LD)' not found)
-- 
2.11.0
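[Editorial note: the parse_cet() hunk above follows Xen's usual pattern for comma-separated boolean sub-options.  The following is a minimal standalone C sketch of that pattern; parse_bool_token() and parse_cet_model() are invented stand-ins for Xen's parse_boolean() and parse_cet(), and the "shstk=<bool>" forms the real helper accepts are elided.]

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Invented stand-in for Xen's parse_boolean(): returns 1 for "shstk",
 * 0 for "no-shstk", and -1 if the token between s and e doesn't match
 * the given name.
 */
static int parse_bool_token(const char *name, const char *s, const char *e)
{
    size_t len = e - s, nlen = strlen(name);
    bool val = true;

    if ( len > 3 && !strncmp(s, "no-", 3) )
    {
        val = false;
        s += 3;
        len -= 3;
    }

    return (len == nlen && !strncmp(s, name, nlen)) ? val : -1;
}

/*
 * Model of parse_cet()'s loop: walk comma-separated tokens, recording
 * the last "shstk" boolean seen, and return -1 (standing in for
 * -EINVAL) on any unrecognised token.
 */
static int parse_cet_model(const char *s, bool *shstk)
{
    const char *ss;
    int val, rc = 0;

    do {
        ss = strchr(s, ',');
        if ( !ss )
            ss = strchr(s, '\0');

        if ( (val = parse_bool_token("shstk", s, ss)) >= 0 )
            *shstk = val;
        else
            rc = -1;

        s = ss + 1;
    } while ( *ss );

    return rc;
}
```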



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1a7-0003zk-7p; Wed, 27 May 2020 19:19:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1a5-0003za-UD
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:13 +0000
X-Inumbo-ID: f25930ee-a04e-11ea-81bc-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f25930ee-a04e-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 19:19:12 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: mbKeQ7QpZv3IAsquSsZ+NyeeuXXDsp5Ov4NaEfAcMJJ0LNNrOgQw0Ti7QwRh7vNC2an6Gox6Hs
 eZCEKK9hD4wSVxN90TtlQHL7WVq9ol/YPFFm6tWBpni7cPHg/oUYxNKCPBKRGjUCh2Ly6s3kvd
 JkNKKmQy5NMUDurQa7eM8aLs9WzGh9bZpl2MGsu0TCHcHBgh0ZFO9TZbxqmbVvDvPmMO3IqMK3
 TmThn2U46kTojFfzBjApdDj6z4gcqzab3mVmIyHk0Nr5PHI8VhwC6cUGNhMq9Yk3lWBhiiMqqq
 RkQ=
X-SBRS: 2.7
X-MesageID: 18850555
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18850555"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 08/14] x86/cpu: Adjust reset_stack_and_jump() to be shadow
 stack compatible
Date: Wed, 27 May 2020 20:18:41 +0100
Message-ID: <20200527191847.17207-9-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We need to unwind up to the supervisor token.  See the comment for details.

The use of UNLIKELY_END_SECTION in this case highlights that it isn't safe
unless it is the final statement of an asm().  Adjust all the declarations to
end with a newline.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Drop 'cmc' which was stray debugging.
 * Replace raw numbers with defines.
 * Use a real BUG frame in .fixup, to get static branch prediction working the
   right way around.
---
 xen/include/asm-x86/asm_defns.h |  8 +++----
 xen/include/asm-x86/current.h   | 48 ++++++++++++++++++++++++++++++++++++++---
 2 files changed, 49 insertions(+), 7 deletions(-)

diff --git a/xen/include/asm-x86/asm_defns.h b/xen/include/asm-x86/asm_defns.h
index b42a19b654..035708adac 100644
--- a/xen/include/asm-x86/asm_defns.h
+++ b/xen/include/asm-x86/asm_defns.h
@@ -177,13 +177,13 @@ register unsigned long current_stack_pointer asm("rsp");
 
 #ifdef __clang__ /* clang's builtin assember can't do .subsection */
 
-#define UNLIKELY_START_SECTION ".pushsection .text.unlikely,\"ax\""
-#define UNLIKELY_END_SECTION   ".popsection"
+#define UNLIKELY_START_SECTION ".pushsection .text.unlikely,\"ax\"\n\t"
+#define UNLIKELY_END_SECTION   ".popsection\n\t"
 
 #else
 
-#define UNLIKELY_START_SECTION ".subsection 1"
-#define UNLIKELY_END_SECTION   ".subsection 0"
+#define UNLIKELY_START_SECTION ".subsection 1\n\t"
+#define UNLIKELY_END_SECTION   ".subsection 0\n\t"
 
 #endif
 
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index 99b66a0087..086326b81a 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -124,13 +124,55 @@ unsigned long get_stack_dump_bottom (unsigned long sp);
 # define CHECK_FOR_LIVEPATCH_WORK ""
 #endif
 
+#ifdef CONFIG_XEN_SHSTK
+/*
+ * We need to unwind the primary shadow stack to its supervisor token, located
+ * at 0x5ff8 from the base of the stack block.
+ *
+ * Read the shadow stack pointer, subtract it from 0x5ff8, divide by 8 to get
+ * the number of slots needing popping.
+ *
+ * INCSSPQ can't pop more than 255 entries.  We shouldn't ever need to pop
+ * that many entries, and getting this wrong will cause us to #DF later.  Turn
+ * it into a BUG() now for fractionally easier debugging.
+ */
+# define SHADOW_STACK_WORK                                      \
+    "mov $1, %[ssp];"                                           \
+    "rdsspd %[ssp];"                                            \
+    "cmp $1, %[ssp];"                                           \
+    "je .L_shstk_done.%=;" /* CET not active?  Skip. */         \
+    "mov $%c[skstk_base], %[val];"                              \
+    "and $%c[stack_mask], %[ssp];"                              \
+    "sub %[ssp], %[val];"                                       \
+    "shr $3, %[val];"                                           \
+    "cmp $255, %[val];" /* More than 255 entries?  Crash. */    \
+    UNLIKELY_START(a, shstk_adjust)                             \
+    _ASM_BUGFRAME_TEXT(0)                                       \
+    UNLIKELY_END_SECTION                                        \
+    "incsspq %q[val];"                                          \
+    ".L_shstk_done.%=:"
+#else
+# define SHADOW_STACK_WORK ""
+#endif
+
 #define switch_stack_and_jump(fn, instr)                                \
     ({                                                                  \
+        unsigned int tmp;                                               \
         __asm__ __volatile__ (                                          \
-            "mov %0,%%"__OP"sp;"                                        \
+            SHADOW_STACK_WORK                                           \
+            "mov %[stk], %%rsp;"                                        \
             instr                                                       \
-             "jmp %c1"                                                  \
-            : : "r" (guest_cpu_user_regs()), "i" (fn) : "memory" );     \
+            "jmp %c[fun];"                                              \
+            : [val] "=&r" (tmp),                                        \
+              [ssp] "=&r" (tmp)                                         \
+            : [stk] "r" (guest_cpu_user_regs()),                        \
+              [fun] "i" (fn),                                           \
+              [skstk_base] "i"                                          \
+              ((PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8),               \
+              [stack_mask] "i" (STACK_SIZE - 1),                        \
+              _ASM_BUGFRAME_INFO(BUGFRAME_bug, __LINE__,                \
+                                 __FILE__, NULL)                        \
+            : "memory" );                                               \
         unreachable();                                                  \
     })
 
-- 
2.11.0
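[Editorial note: the unwind arithmetic described in the SHADOW_STACK_WORK comment can be checked in isolation.  A minimal C model follows; shstk_slots_to_pop() is an invented name, and the constants mirror the immediates passed into the asm.]

```c
#include <assert.h>

#define PAGE_SIZE          4096UL
#define STACK_SIZE         (8 * PAGE_SIZE)
#define PRIMARY_SHSTK_SLOT 5

/* Supervisor token: 8 bytes below the top of shadow stack slot 5. */
#define SUPERVISOR_TOKEN_OFFSET ((PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8)

/*
 * Model of the SHADOW_STACK_WORK arithmetic: mask the shadow stack
 * pointer down to its offset within the 32k stack block, subtract
 * from the token's offset, and divide by 8 to get the INCSSPQ count.
 */
static unsigned long shstk_slots_to_pop(unsigned long ssp)
{
    return (SUPERVISOR_TOKEN_OFFSET - (ssp & (STACK_SIZE - 1))) >> 3;
}
```

Any count above 255 trips the BUG frame in the real code, as INCSSPQ can't pop more.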



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aH-00042G-DG; Wed, 27 May 2020 19:19:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aF-00041n-QK
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:23 +0000
X-Inumbo-ID: f33f80e4-a04e-11ea-81bc-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f33f80e4-a04e-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 19:19:13 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Tq6+s3Zh7QaRK595E15ZFckIpIk5gndEZuDz8ttfd0qmFsoM1hkOEjbDpJmnafihXdS2HWfRcK
 QVkAVn3Rogew9UeKYWAH/IlD37FBiRZJR2anYYwrexQOk8+W6IdikgmgQ91IBXK3pEfLaP1DAM
 N6gd/Mt3pHXf98aiI/OYkIiMcPVmOwN4qrs210s1yoSsDlLfidrAwIRjL7qHSPnlzZBL0xbvuQ
 67mO7Yp3pPLaQirrO8TluPLa7vsdyPssxO/H6VF+3sCQfFEywaUHox0NAOzp6g/8KgpVgvw+RP
 Q7A=
X-SBRS: 2.7
X-MesageID: 18850558
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18850558"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 07/14] x86/cpu: Adjust enable_nmis() to be shadow stack
 compatible
Date: Wed, 27 May 2020 20:18:40 +0100
Message-ID: <20200527191847.17207-8-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When executing an IRET-to-self, the shadow stack must agree with the regular
stack.  We can't manipulate SSP directly, so have to fake a shadow IRET frame
by executing 3 CALLs, then editing the result to look correct.

This is not a fastpath; it is called on the BSP long before CET can be set up,
and may be called on the crash path after CET is disabled.  Use the fact that
INCSSP is allocated from the hint nop space to construct a test for CET being
active which is safe on all processors.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/processor.h | 43 +++++++++++++++++++++++++++++++----------
 1 file changed, 33 insertions(+), 10 deletions(-)

diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 859bd9e2ec..badd7e60e5 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -545,17 +545,40 @@ static inline void enable_nmis(void)
 {
     unsigned long tmp;
 
-    asm volatile ( "mov %%rsp, %[tmp]     \n\t"
-                   "push %[ss]            \n\t"
-                   "push %[tmp]           \n\t"
-                   "pushf                 \n\t"
-                   "push %[cs]            \n\t"
-                   "lea 1f(%%rip), %[tmp] \n\t"
-                   "push %[tmp]           \n\t"
-                   "iretq; 1:             \n\t"
-                   : [tmp] "=&r" (tmp)
+    asm volatile ( "mov     %%rsp, %[rsp]        \n\t"
+                   "lea    .Ldone(%%rip), %[rip] \n\t"
+#ifdef CONFIG_XEN_SHSTK
+                   /* Check for CET-SS being active. */
+                   "mov    $1, %k[ssp]           \n\t"
+                   "rdsspq %[ssp]                \n\t"
+                   "cmp    $1, %k[ssp]           \n\t"
+                   "je     .Lshstk_done          \n\t"
+
+                   /* Push 3 words on the shadow stack */
+                   ".rept 3                      \n\t"
+                   "call 1f; nop; 1:             \n\t"
+                   ".endr                        \n\t"
+
+                   /* Fixup to be an IRET shadow stack frame */
+                   "wrssq  %q[cs], -1*8(%[ssp])  \n\t"
+                   "wrssq  %[rip], -2*8(%[ssp])  \n\t"
+                   "wrssq  %[ssp], -3*8(%[ssp])  \n\t"
+
+                   ".Lshstk_done:"
+#endif
+                   /* Write an IRET regular frame */
+                   "push   %[ss]                 \n\t"
+                   "push   %[rsp]                \n\t"
+                   "pushf                        \n\t"
+                   "push   %q[cs]                \n\t"
+                   "push   %[rip]                \n\t"
+                   "iretq                        \n\t"
+                   ".Ldone:                      \n\t"
+                   : [rip] "=&r" (tmp),
+                     [rsp] "=&r" (tmp),
+                     [ssp] "=&r" (tmp)
                    : [ss] "i" (__HYPERVISOR_DS),
-                     [cs] "i" (__HYPERVISOR_CS) );
+                     [cs] "r" (__HYPERVISOR_CS) );
 }
 
 void sysenter_entry(void);
-- 
2.11.0
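[Editorial note: the push-then-overwrite sequence in enable_nmis() above can be modelled with a plain array standing in for the shadow stack.  fake_shstk_iret_frame() is an invented name, and slot indices stand in for byte addresses (scaled by 8).]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Build the 3-word shadow IRET frame the way enable_nmis() does:
 * push 3 dummy return addresses (the ".rept 3 / call 1f" block),
 * then overwrite them with WRSSQ so that, from low to high address,
 * the frame reads: saved SSP, RIP, CS -- the order IRET consumes.
 */
static void fake_shstk_iret_frame(uint64_t *shstk, unsigned int *sspi,
                                  uint64_t cs, uint64_t rip)
{
    unsigned int ssp0 = *sspi;       /* SSP before the three CALLs */
    int i;

    for ( i = 0; i < 3; i++ )        /* shadow stack grows downwards */
        shstk[--*sspi] = 0xdead;     /* dummy return address */

    shstk[ssp0 - 1] = cs;            /* wrssq %q[cs], -1*8(%[ssp]) */
    shstk[ssp0 - 2] = rip;           /* wrssq %[rip], -2*8(%[ssp]) */
    shstk[ssp0 - 3] = ssp0 * 8;      /* wrssq %[ssp], -3*8(%[ssp]) */
}
```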



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aH-00042T-MQ; Wed, 27 May 2020 19:19:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aG-00041w-2S
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:24 +0000
X-Inumbo-ID: f518fc1a-a04e-11ea-a777-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f518fc1a-a04e-11ea-a777-12813bfff9fa;
 Wed, 27 May 2020 19:19:17 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: AqhgHtvucTXh9gKrfysPQjfVwMB0aAHR29VpIB06eomiAmpiSuq6TuZPrORn6qj4+x6hCO6gd+
 SVfFlrKExbY8F3Atho7vCyhxi4fAZnu21BA2HyMi3ltCqkKkS4O8jkHQ0LfIaBT+ViAzXYjlRf
 Xck5nRfYrDK76ervV5In3SK6anRX5eygGGYsEbTEJFpVcojs6VfxwOnIpP2Znpfl3pX2t9h21I
 tR35FF9r5Bug6GxwSAg4sHJIuukG+UQV578hX0usflysaIbNriaUFgS9dDOVaDrsVJHsfWh3+3
 0vA=
X-SBRS: 2.7
X-MesageID: 19333922
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="19333922"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 09/14] x86/spec-ctrl: Adjust DO_OVERWRITE_RSB to be shadow
 stack compatible
Date: Wed, 27 May 2020 20:18:42 +0100
Message-ID: <20200527191847.17207-10-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The 32 calls need dropping from the shadow stack as well as the regular stack.
To shorten the code, we can use the 32bit forms of RDSSP/INCSSP, but need to
double up the input to INCSSP to counter the operand-size-based multiplier.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/spec_ctrl_asm.h | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/xen/include/asm-x86/spec_ctrl_asm.h b/xen/include/asm-x86/spec_ctrl_asm.h
index c60093b090..cb34299a86 100644
--- a/xen/include/asm-x86/spec_ctrl_asm.h
+++ b/xen/include/asm-x86/spec_ctrl_asm.h
@@ -83,9 +83,9 @@
  * Requires nothing
  * Clobbers \tmp (%rax by default), %rcx
  *
- * Requires 256 bytes of stack space, but %rsp has no net change. Based on
- * Google's performance numbers, the loop is unrolled to 16 iterations and two
- * calls per iteration.
+ * Requires 256 bytes of {,shadow}stack space, but %rsp/SSP has no net
+ * change. Based on Google's performance numbers, the loop is unrolled to 16
+ * iterations and two calls per iteration.
  *
  * The call filling the RSB needs a nonzero displacement.  A nop would do, but
 * we use "1: pause; lfence; jmp 1b" to safely contain any ret-based
@@ -114,6 +114,16 @@
     sub $1, %ecx
     jnz .L\@_fill_rsb_loop
     mov %\tmp, %rsp                 /* Restore old %rsp */
+
+#ifdef CONFIG_XEN_SHSTK
+    mov $1, %ecx
+    rdsspd %ecx
+    cmp $1, %ecx
+    je .L\@_shstk_done
+    mov $64, %ecx                   /* 64 * 4 bytes, given incsspd */
+    incsspd %ecx                    /* Restore old SSP */
+.L\@_shstk_done:
+#endif
 .endm
 
 .macro DO_SPEC_CTRL_ENTRY_FROM_HVM
-- 
2.11.0
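[Editorial note: the "double up the input" point above is simple arithmetic, sketched here; incsspd_operand() is an invented helper.]

```c
#include <assert.h>

/*
 * INCSSPD advances SSP by (operand * 4) bytes; INCSSPQ by (operand * 8).
 * DO_OVERWRITE_RSB makes 32 calls, each depositing one 8-byte shadow
 * stack entry, so the 32-bit form needs double the entry count.
 */
static unsigned int incsspd_operand(unsigned int nr_calls)
{
    return (nr_calls * 8) / 4;   /* bytes to pop, in incsspd units of 4 */
}
```

This is why the hunk loads `$64` into %ecx before `incsspd` to drop 32 entries.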



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aM-00045B-1O; Wed, 27 May 2020 19:19:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aK-00044V-Pr
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:28 +0000
X-Inumbo-ID: f35ede3a-a04e-11ea-81bc-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f35ede3a-a04e-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 19:19:14 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6yVJg/lifPGO70PS2wgpoTRI1lbe6ilhKsuQKwn77mP8iPyQN+41b9d9G5++vpGzQ/ftQCiYYd
 ZgK+D1g3IJgzBulmn6XKKAh8lQHXEAR013SdwW9z7Es8GkxZwhq9lNVQL5o9+mJd8UZr4qw70k
 DPVef143ILoMh4hgxea53PUJo+MtnUWnGFsATZcYwMX6MwBk9SgtFF0lOnRrv5Cqp/CgWoewgF
 didimL59PEPr413Qm8eYsWKX+P+TE+UcqnnH6LksOZJyea05myySxgXXiSk46AV0q/4CMu/vFY
 1Qo=
X-SBRS: 2.7
X-MesageID: 19333917
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="19333917"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 05/14] x86/shstk: Re-layout the stack block for shadow
 stacks
Date: Wed, 27 May 2020 20:18:38 +0100
Message-ID: <20200527191847.17207-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We have two free pages in the current stack.  A useful property of shadow
stacks and regular stacks is that they act as each other's guard pages as far
as OoB writes go.

Move the regular IST stacks up by one page, to allow their shadow stack page
to be in slot 0.  The primary shadow stack uses slot 5.

As the shadow IST stacks are only 1k large, shuffle the order of IST vectors
to have #DF numerically highest (so there is no chance of a shadow stack
overflow clobbering the supervisor token).

The XPTI code already breaks the MEMORY_GUARD abstraction for stacks by
forcing it to be present.  To avoid having too many configurations, do away
with the concept entirely, and unconditionally unmap the pages.

A later change will turn these properly into shadow stacks.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Adjust text
 * Introduce PRIMARY_SHSTK_SLOT (Name subject to improvement).
---
 xen/arch/x86/cpu/common.c       | 10 +++++-----
 xen/arch/x86/mm.c               | 19 ++++++-------------
 xen/arch/x86/smpboot.c          |  3 +--
 xen/arch/x86/traps.c            | 23 ++++++-----------------
 xen/include/asm-x86/config.h    |  3 +++
 xen/include/asm-x86/current.h   | 12 ++++++------
 xen/include/asm-x86/mm.h        |  1 -
 xen/include/asm-x86/processor.h |  6 +++---
 8 files changed, 30 insertions(+), 47 deletions(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 09b911b3ba..690fd8baa8 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -753,14 +753,14 @@ void load_system_tables(void)
 	 * valid on every instruction boundary.  (Note: these are all
 	 * semantically ACCESS_ONCE() due to tss's volatile qualifier.)
 	 *
-	 * rsp0 refers to the primary stack.  #MC, #DF, NMI and #DB handlers
+	 * rsp0 refers to the primary stack.  #MC, NMI, #DB and #DF handlers
 	 * each get their own stacks.  No IO Bitmap.
 	 */
 	tss->rsp0 = stack_bottom;
-	tss->ist[IST_MCE - 1] = stack_top + IST_MCE * PAGE_SIZE;
-	tss->ist[IST_DF  - 1] = stack_top + IST_DF  * PAGE_SIZE;
-	tss->ist[IST_NMI - 1] = stack_top + IST_NMI * PAGE_SIZE;
-	tss->ist[IST_DB  - 1] = stack_top + IST_DB  * PAGE_SIZE;
+	tss->ist[IST_MCE - 1] = stack_top + (1 + IST_MCE) * PAGE_SIZE;
+	tss->ist[IST_NMI - 1] = stack_top + (1 + IST_NMI) * PAGE_SIZE;
+	tss->ist[IST_DB  - 1] = stack_top + (1 + IST_DB)  * PAGE_SIZE;
+	tss->ist[IST_DF  - 1] = stack_top + (1 + IST_DF)  * PAGE_SIZE;
 	tss->bitmap = IOBMP_INVALID_OFFSET;
 
 	/* All other stack pointers poisoned. */
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index e42044eb74..2f1e716b6d 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5996,25 +5996,18 @@ void memguard_unguard_range(void *p, unsigned long l)
 
 void memguard_guard_stack(void *p)
 {
-    /* IST_MAX IST pages + at least 1 guard page + primary stack. */
-    BUILD_BUG_ON((IST_MAX + 1) * PAGE_SIZE + PRIMARY_STACK_SIZE > STACK_SIZE);
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
 
-    memguard_guard_range(p + IST_MAX * PAGE_SIZE,
-                         STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
+    p += PRIMARY_SHSTK_SLOT * PAGE_SIZE;
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
 }
 
 void memguard_unguard_stack(void *p)
 {
-    memguard_unguard_range(p + IST_MAX * PAGE_SIZE,
-                           STACK_SIZE - PRIMARY_STACK_SIZE - IST_MAX * PAGE_SIZE);
-}
-
-bool memguard_is_stack_guard_page(unsigned long addr)
-{
-    addr &= STACK_SIZE - 1;
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_RW);
 
-    return addr >= IST_MAX * PAGE_SIZE &&
-           addr < STACK_SIZE - PRIMARY_STACK_SIZE;
+    p += PRIMARY_SHSTK_SLOT * PAGE_SIZE;
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_RW);
 }
 
 void arch_dump_shared_mem_info(void)
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 170ab24e66..13b3dade9c 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -823,8 +823,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
 
     /* Install direct map page table entries for stack, IDT, and TSS. */
     for ( off = rc = 0; !rc && off < STACK_SIZE; off += PAGE_SIZE )
-        if ( !memguard_is_stack_guard_page(off) )
-            rc = clone_mapping(__va(__pa(stack_base[cpu])) + off, rpt);
+        rc = clone_mapping(__va(__pa(stack_base[cpu])) + off, rpt);
 
     if ( !rc )
         rc = clone_mapping(idt_tables[cpu], rpt);
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 90da787ee2..235a72cf4a 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -365,20 +365,15 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
 /*
  * Notes for get_stack_trace_bottom() and get_stack_dump_bottom()
  *
- * Stack pages 0 - 3:
+ * Stack pages 1 - 4:
  *   These are all 1-page IST stacks.  Each of these stacks have an exception
  *   frame and saved register state at the top.  The interesting bound for a
  *   trace is the word adjacent to this, while the bound for a dump is the
  *   very top, including the exception frame.
  *
- * Stack pages 4 and 5:
- *   None of these are particularly interesting.  With MEMORY_GUARD, page 5 is
- *   explicitly not present, so attempting to dump or trace it is
- *   counterproductive.  Without MEMORY_GUARD, it is possible for a call chain
- *   to use the entire primary stack and wander into page 5.  In this case,
- *   consider these pages an extension of the primary stack to aid debugging
- *   hopefully rare situations where the primary stack has effective been
- *   overflown.
+ * Stack pages 0 and 5:
+ *   Shadow stacks.  These are mapped read-only, and used by CET-SS capable
+ *   processors.  They will never contain regular stack data.
  *
  * Stack pages 6 and 7:
  *   These form the primary stack, and have a cpu_info at the top.  For a
@@ -392,13 +387,10 @@ unsigned long get_stack_trace_bottom(unsigned long sp)
 {
     switch ( get_stack_page(sp) )
     {
-    case 0 ... 3:
+    case 1 ... 4:
         return ROUNDUP(sp, PAGE_SIZE) -
             offsetof(struct cpu_user_regs, es) - sizeof(unsigned long);
 
-#ifndef MEMORY_GUARD
-    case 4 ... 5:
-#endif
     case 6 ... 7:
         return ROUNDUP(sp, STACK_SIZE) -
             sizeof(struct cpu_info) - sizeof(unsigned long);
@@ -412,12 +404,9 @@ unsigned long get_stack_dump_bottom(unsigned long sp)
 {
     switch ( get_stack_page(sp) )
     {
-    case 0 ... 3:
+    case 1 ... 4:
         return ROUNDUP(sp, PAGE_SIZE) - sizeof(unsigned long);
 
-#ifndef MEMORY_GUARD
-    case 4 ... 5:
-#endif
     case 6 ... 7:
         return ROUNDUP(sp, STACK_SIZE) - sizeof(unsigned long);
 
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index 266d281718..f3cf5df462 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -75,6 +75,9 @@
 /* Primary stack is restricted to 8kB by guard pages. */
 #define PRIMARY_STACK_SIZE 8192
 
+/* Primary shadow stack is slot 5 of 8, immediately under the primary stack. */
+#define PRIMARY_SHSTK_SLOT 5
+
 /* Total size of syscall and emulation stubs. */
 #define STUB_BUF_SHIFT (L1_CACHE_SHIFT > 7 ? L1_CACHE_SHIFT : 7)
 #define STUB_BUF_SIZE  (1 << STUB_BUF_SHIFT)
diff --git a/xen/include/asm-x86/current.h b/xen/include/asm-x86/current.h
index 5b8f4dbc79..99b66a0087 100644
--- a/xen/include/asm-x86/current.h
+++ b/xen/include/asm-x86/current.h
@@ -16,12 +16,12 @@
  *
  * 7 - Primary stack (with a struct cpu_info at the top)
  * 6 - Primary stack
- * 5 - Optionally not present (MEMORY_GUARD)
- * 4 - Unused; optionally not present (MEMORY_GUARD)
- * 3 - Unused; optionally not present (MEMORY_GUARD)
- * 2 - MCE IST stack
- * 1 - NMI IST stack
- * 0 - Double Fault IST stack
+ * 5 - Primary Shadow Stack (read-only)
+ * 4 - #DF IST stack
+ * 3 - #DB IST stack
+ * 2 - NMI IST stack
+ * 1 - #MC IST stack
+ * 0 - IST Shadow Stacks (4x 1k, read-only)
  */
 
 /*
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 3d3f9d49ac..7e74996053 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -536,7 +536,6 @@ void memguard_unguard_range(void *p, unsigned long l);
 
 void memguard_guard_stack(void *p);
 void memguard_unguard_stack(void *p);
-bool __attribute_const__ memguard_is_stack_guard_page(unsigned long addr);
 
 struct mmio_ro_emulate_ctxt {
         unsigned long cr2;
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index c2b9dc1ac0..8ab09cf7ed 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -440,10 +440,10 @@ struct tss_page {
 DECLARE_PER_CPU(struct tss_page, tss_page);
 
 #define IST_NONE 0UL
-#define IST_DF   1UL
+#define IST_MCE  1UL
 #define IST_NMI  2UL
-#define IST_MCE  3UL
-#define IST_DB   4UL
+#define IST_DB   3UL
+#define IST_DF   4UL
 #define IST_MAX  4UL
 
 /* Set the Interrupt Stack Table used by a particular IDT entry. */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aM-00045h-BX; Wed, 27 May 2020 19:19:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aL-00044e-2t
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:29 +0000
X-Inumbo-ID: f518fc1b-a04e-11ea-a777-12813bfff9fa
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f518fc1b-a04e-11ea-a777-12813bfff9fa;
 Wed, 27 May 2020 19:19:18 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: KPMzHN7M4o1atLAbnD8TJWcgtq/9W44k99HeJc51xZ7vy4wcR5MQtaZJzm2CQRb+Ipd249maSA
 OaNFGgLYbqehzstG49rMtAVS/JzuM0k0ReG9EOH6uo6/HzEi9Q8++2XrVFWwYIBnNTRi4KllVz
 +4x4wdu5Lz5xRs2n6pYNOQnrFVW7ZHQqnGxgvwKAmi+0TgLBB6AYYvhTZcTWG1za+cpJP4B3Gm
 vSFvFtaX+5p/yZLjM28gA1QiJ/4pKYlUDpjvMmSozDeI4tHq/hkXy477SMg3vBDaRUdJZ32oMY
 9Co=
X-SBRS: 2.7
X-MesageID: 19333923
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="19333923"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 04/14] x86/traps: Implement #CP handler and extend #PF for
 shadow stacks
Date: Wed, 27 May 2020 20:18:37 +0100
Message-ID: <20200527191847.17207-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For now, any #CP exception or shadow stack #PF indicates a bug in Xen, but we
attempt to recover from #CP if taken in guest context.
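
The #CP error code decode in the patch can be sketched in isolation as below.
This is an illustrative example only, not part of the patch: cp_error_name()
is a hypothetical helper mirroring the decode table in do_entry_CP(), with
the error-code meanings taken from the architecture.

```c
#include <string.h>

/*
 * Illustrative sketch (not Xen source): the low bits of the #CP error
 * code identify which control-flow check failed.  cp_error_name() is a
 * hypothetical helper for this example; the table mirrors do_entry_CP().
 */
static const char *cp_error_name(unsigned int ec)
{
    static const char errors[][10] = {
        [1] = "near ret",   /* Shadow stack mismatch on near RET */
        [2] = "far/iret",   /* Shadow stack mismatch on far RET or IRET */
        [3] = "endbranch",  /* Missing ENDBR at an indirect branch target */
        [4] = "rstorssp",   /* RSTORSSP token check failure */
        [5] = "setssbsy",   /* SETSSBSY token check failure */
    };

    /* Reserved or unknown error codes decode to "??". */
    if ( ec < sizeof(errors) / sizeof(errors[0]) && errors[ec][0] )
        return errors[ec];

    return "??";
}
```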

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Rebase over #PF[Rsvd] rework.
 * Alignment for PFEC_shstk.
 * Use more X86_EXC_* names.
---
 xen/arch/x86/traps.c            | 46 +++++++++++++++++++++++++++++++++++++++--
 xen/arch/x86/x86_64/entry.S     |  7 ++++++-
 xen/include/asm-x86/processor.h |  2 ++
 3 files changed, 52 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index eeb3e146ef..90da787ee2 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -156,7 +156,9 @@ void (* const exception_table[TRAP_nr])(struct cpu_user_regs *regs) = {
     [TRAP_alignment_check]              = do_trap,
     [TRAP_machine_check]                = (void *)do_machine_check,
     [TRAP_simd_error]                   = do_trap,
-    [TRAP_virtualisation ...
+    [TRAP_virtualisation]               = do_reserved_trap,
+    [X86_EXC_CP]                        = do_entry_CP,
+    [X86_EXC_CP + 1 ...
      (ARRAY_SIZE(exception_table) - 1)] = do_reserved_trap,
 };
 
@@ -1445,8 +1447,10 @@ void do_page_fault(struct cpu_user_regs *regs)
      *
      * Anything remaining is an error, constituting corruption of the
      * pagetables and probably an L1TF vulnerable gadget.
+     *
+     * Any shadow stack access fault is a bug in Xen.
      */
-    if ( error_code & PFEC_reserved_bit )
+    if ( error_code & (PFEC_reserved_bit | PFEC_shstk) )
         goto fatal;
 
     if ( unlikely(!guest_mode(regs)) )
@@ -1898,6 +1902,43 @@ void do_debug(struct cpu_user_regs *regs)
     pv_inject_hw_exception(TRAP_debug, X86_EVENT_NO_EC);
 }
 
+void do_entry_CP(struct cpu_user_regs *regs)
+{
+    static const char errors[][10] = {
+        [1] = "near ret",
+        [2] = "far/iret",
+        [3] = "endbranch",
+        [4] = "rstorssp",
+        [5] = "setssbsy",
+    };
+    const char *err = "??";
+    unsigned int ec = regs->error_code;
+
+    if ( debugger_trap_entry(TRAP_debug, regs) )
+        return;
+
+    /* Decode ec if possible */
+    if ( ec < ARRAY_SIZE(errors) && errors[ec][0] )
+        err = errors[ec];
+
+    /*
+     * For now, only supervisor shadow stacks should be active.  A #CP from
+     * guest context is probably a Xen bug, but kill the guest in an attempt
+     * to recover.
+     */
+    if ( guest_mode(regs) )
+    {
+        gprintk(XENLOG_ERR, "Hit #CP[%04x] in guest context %04x:%p\n",
+                ec, regs->cs, _p(regs->rip));
+        ASSERT_UNREACHABLE();
+        domain_crash(current->domain);
+        return;
+    }
+
+    show_execution_state(regs);
+    panic("CONTROL-FLOW PROTECTION FAULT: #CP[%04x] %s\n", ec, err);
+}
+
 static void __init noinline __set_intr_gate(unsigned int n,
                                             uint32_t dpl, void *addr)
 {
@@ -1987,6 +2028,7 @@ void __init init_idt_traps(void)
     set_intr_gate(TRAP_alignment_check,&alignment_check);
     set_intr_gate(TRAP_machine_check,&machine_check);
     set_intr_gate(TRAP_simd_error,&simd_coprocessor_error);
+    set_intr_gate(X86_EXC_CP, entry_CP);
 
     /* Specify dedicated interrupt stacks for NMI, #DF, and #MC. */
     enable_each_ist(idt_table);
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index d55453f3f3..f7ee3dce91 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -795,6 +795,10 @@ ENTRY(alignment_check)
         movl  $TRAP_alignment_check,4(%rsp)
         jmp   handle_exception
 
+ENTRY(entry_CP)
+        movl  $X86_EXC_CP, 4(%rsp)
+        jmp   handle_exception
+
 ENTRY(double_fault)
         movl  $TRAP_double_fault,4(%rsp)
         /* Set AC to reduce chance of further SMAP faults */
@@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
         entrypoint 1b
 
         /* Reserved exceptions, heading towards do_reserved_trap(). */
-        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
+        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
+                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
 
 1:      test  $8,%spl        /* 64bit exception frames are 16 byte aligned, but the word */
         jz    2f             /* size is 8 bytes.  Check whether the processor gave us an */
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 96deac73ed..c2b9dc1ac0 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -68,6 +68,7 @@
 #define PFEC_reserved_bit   (_AC(1,U) << 3)
 #define PFEC_insn_fetch     (_AC(1,U) << 4)
 #define PFEC_prot_key       (_AC(1,U) << 5)
+#define PFEC_shstk          (_AC(1,U) << 6)
 #define PFEC_arch_mask      (_AC(0xffff,U)) /* Architectural PFEC values. */
 /* Internally used only flags. */
 #define PFEC_page_paged     (1U<<16)
@@ -530,6 +531,7 @@ DECLARE_TRAP_HANDLER(coprocessor_error);
 DECLARE_TRAP_HANDLER(simd_coprocessor_error);
 DECLARE_TRAP_HANDLER_CONST(machine_check);
 DECLARE_TRAP_HANDLER(alignment_check);
+DECLARE_TRAP_HANDLER(entry_CP);
 
 DECLARE_TRAP_HANDLER(entry_int82);
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aR-0004AE-QY; Wed, 27 May 2020 19:19:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aP-00048o-Ps
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:33 +0000
X-Inumbo-ID: f367e02a-a04e-11ea-9dbe-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f367e02a-a04e-11ea-9dbe-bc764e2007e4;
 Wed, 27 May 2020 19:19:14 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ljXFLm6vJNGuhH71VFga8ld5yUtQeikwNp2tlkhaq8oVuyij+A1TdxM7HaHaOeTzmRI0VV0Zng
 HPQ13O3QjuMhFyqsliSE9jkK/Ce6AvmjGeUJepoaWmrpp4JfUzKmXdGJkbqvc7Q7Jhdx9do5UB
 emcVV+uXPT4q3xgZ/xY8QjEMkUBayoYw1RdyT4cZ0tF3ltQDIcv9MUiKnqtvjlD3kq+MfHX42g
 MxgWRqulkEy6NI10dLzawMQC/ucVuUk/qbDFYjDht43OowdxWcaRE8p1ZcoNWlWNu0QNRjafMQ
 3ac=
X-SBRS: 2.7
X-MesageID: 19333919
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="19333919"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 00/14] x86: Support for CET Supervisor Shadow Stacks
Date: Wed, 27 May 2020 20:18:33 +0100
Message-ID: <20200527191847.17207-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This series implements Shadow Stack support for Xen to use.

You'll need a CET-capable toolchain (Binutils >= 2.29 or LLVM >= 7), but no
specific compiler support is required.

CET-SS makes PV32 unusable, so using shadow stacks prevents the use of 32bit
PV guests.  Compatibility can be obtained using PV Shim.

See patches for individual changes.

Andrew Cooper (14):
  x86/traps: Clean up printing in {do_reserved,fatal}_trap()
  x86/traps: Factor out extable_fixup() and make printing consistent
  x86/shstk: Introduce Supervisor Shadow Stack support
  x86/traps: Implement #CP handler and extend #PF for shadow stacks
  x86/shstk: Re-layout the stack block for shadow stacks
  x86/shstk: Create shadow stacks
  x86/cpu: Adjust enable_nmis() to be shadow stack compatible
  x86/cpu: Adjust reset_stack_and_jump() to be shadow stack compatible
  x86/spec-ctrl: Adjust DO_OVERWRITE_RSB to be shadow stack compatible
  x86/extable: Adjust extable handling to be shadow stack compatible
  x86/alt: Adjust _alternative_instructions() to not create shadow stacks
  x86/entry: Adjust guest paths to be shadow stack compatible
  x86/S3: Save and restore Shadow Stack configuration
  x86/shstk: Activate Supervisor Shadow Stacks

 docs/misc/xen-command-line.pandoc   |  25 ++++
 xen/arch/x86/Kconfig                |  18 +++
 xen/arch/x86/acpi/wakeup_prot.S     |  58 +++++++++
 xen/arch/x86/alternative.c          |  14 +++
 xen/arch/x86/boot/x86_64.S          |  35 +++++-
 xen/arch/x86/cpu/common.c           |  39 +++++-
 xen/arch/x86/crash.c                |   7 ++
 xen/arch/x86/mm.c                   |  46 ++++---
 xen/arch/x86/setup.c                |  56 +++++++++
 xen/arch/x86/smpboot.c              |   3 +-
 xen/arch/x86/spec_ctrl.c            |   8 ++
 xen/arch/x86/traps.c                | 239 ++++++++++++++++++++++++++----------
 xen/arch/x86/x86_64/compat/entry.S  |   1 +
 xen/arch/x86/x86_64/entry.S         |  50 +++++++-
 xen/include/asm-x86/asm_defns.h     |   8 +-
 xen/include/asm-x86/config.h        |   5 +
 xen/include/asm-x86/cpufeature.h    |   1 +
 xen/include/asm-x86/cpufeatures.h   |   1 +
 xen/include/asm-x86/current.h       |  60 +++++++--
 xen/include/asm-x86/mm.h            |   1 -
 xen/include/asm-x86/msr-index.h     |   3 +
 xen/include/asm-x86/page.h          |   1 +
 xen/include/asm-x86/processor.h     |  60 ++++++---
 xen/include/asm-x86/spec_ctrl_asm.h |  16 ++-
 xen/include/asm-x86/x86-defns.h     |  35 ++++++
 xen/include/asm-x86/x86_64/page.h   |   1 +
 xen/scripts/Kconfig.include         |   4 +
 27 files changed, 664 insertions(+), 131 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:19:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1aW-0004EE-4V; Wed, 27 May 2020 19:19:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1aU-0004Cq-Pr
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:19:38 +0000
X-Inumbo-ID: f5699b3e-a04e-11ea-9947-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f5699b3e-a04e-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 19:19:17 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 3iRO9kwgKCNdvZIluTTdpAT/32WruaeuHu2KpnABpZyIvZ+Vr+VV4xpohS0xAvZrALgMKsTVd6
 tWXKr9ZjuXo8i8o3L8YyYuI0aquhbf2e7yM0Kw8dJUXJ7pt/sjFnQc0aH57p3BPAFYn/aHyq1G
 UPsZHVhzGAOR0+euvfSZAcCq7Or4hRjqb5HLesPc3oa/nqlHVoEtsOQU2lUYxXoZ4vGSdoAo4w
 //HwxzzZoOzhP67f1BplRf2QTfz9snO/vK1N00DpAP4VYboMCNwaeDFS7Y3yVNwxzNMo+nm0bi
 9d4=
X-SBRS: 2.7
X-MesageID: 18946803
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18946803"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 06/14] x86/shstk: Create shadow stacks
Date: Wed, 27 May 2020 20:18:39 +0100
Message-ID: <20200527191847.17207-7-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce HYPERVISOR_SHSTK pagetable constants, which are Read-Only + Dirty.
Use these in place of _PAGE_NONE for memguard_guard_stack().

Supervisor shadow stacks need a token written at the top, which is most easily
done before making the frame read only.

Allocate the shadow IST stack block in struct tss_page.  It doesn't strictly
need to live here, but it is a convenient location (and XPTI-safe, for testing
purposes), and placing it ahead of the TSS doesn't risk colliding with a bad
IO Bitmap offset and being misinterpreted as IO port permissions.

Have load_system_tables() set up the shadow IST stack table when setting up
the regular IST in the TSS.
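
The token rule described above can be sketched standalone as below.  This is
an illustrative example only, not part of the patch: write_token() mirrors
write_sss_token() in the diff, and its name is specific to this example.

```c
#include <stdint.h>

/*
 * Illustrative sketch (not Xen source): a supervisor shadow stack token
 * is the linear address of the token itself, with bit 0 (the "busy" bit)
 * clear.  The processor validates this on SETSSBSY/RSTORSSP and on IST
 * shadow stack switches.
 */
static void write_token(uint64_t *ptr)
{
    /* The token's value is its own address; bit 0 stays clear (not busy). */
    *ptr = (uint64_t)(uintptr_t)ptr;
}
```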

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Introduce IST_SHSTK_SIZE (Name subject to improvement).
 * Skip writing the shadow stack token for !XEN_SHSTK builds.
 * Tweak clobbering to be correct and safe.
---
 xen/arch/x86/cpu/common.c         | 24 ++++++++++++++++++++++++
 xen/arch/x86/mm.c                 | 25 +++++++++++++++++++++++--
 xen/include/asm-x86/config.h      |  2 ++
 xen/include/asm-x86/page.h        |  1 +
 xen/include/asm-x86/processor.h   |  3 ++-
 xen/include/asm-x86/x86_64/page.h |  1 +
 6 files changed, 53 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index 690fd8baa8..dcc9ee08de 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -769,6 +769,30 @@ void load_system_tables(void)
 	tss->rsp1 = 0x8600111111111111ul;
 	tss->rsp2 = 0x8600111111111111ul;
 
+	/* Set up the shadow stack IST. */
+	if (cpu_has_xen_shstk) {
+		volatile uint64_t *ist_ssp = this_cpu(tss_page).ist_ssp;
+
+		/*
+		 * Used entries must point at the supervisor stack token.
+		 * Unused entries are poisoned.
+		 *
+		 * This IST Table may be live, and the NMI/#MC entries must
+		 * remain valid on every instruction boundary, hence the
+		 * volatile qualifier.
+		 */
+		ist_ssp[0] = 0x8600111111111111ul;
+		ist_ssp[IST_MCE] = stack_top + (IST_MCE * IST_SHSTK_SIZE) - 8;
+		ist_ssp[IST_NMI] = stack_top + (IST_NMI * IST_SHSTK_SIZE) - 8;
+		ist_ssp[IST_DB]	 = stack_top + (IST_DB	* IST_SHSTK_SIZE) - 8;
+		ist_ssp[IST_DF]	 = stack_top + (IST_DF	* IST_SHSTK_SIZE) - 8;
+		for ( i = IST_DF + 1;
+		      i < ARRAY_SIZE(this_cpu(tss_page).ist_ssp); ++i )
+			ist_ssp[i] = 0x8600111111111111ul;
+
+		wrmsrl(MSR_INTERRUPT_SSP_TABLE, (unsigned long)ist_ssp);
+	}
+
 	BUILD_BUG_ON(sizeof(*tss) <= 0x67); /* Mandated by the architecture. */
 
 	_set_tssldt_desc(gdt + TSS_ENTRY, (unsigned long)tss,
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 2f1e716b6d..4d6d22cc41 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5994,12 +5994,33 @@ void memguard_unguard_range(void *p, unsigned long l)
 
 #endif
 
+static void write_sss_token(unsigned long *ptr)
+{
+    /*
+     * A supervisor shadow stack token is its own linear address, with the
+     * busy bit (0) clear.
+     */
+    *ptr = (unsigned long)ptr;
+}
+
 void memguard_guard_stack(void *p)
 {
-    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
+    /* IST Shadow stacks.  4x 1k in stack page 0. */
+    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
+    {
+        write_sss_token(p + (IST_MCE * IST_SHSTK_SIZE) - 8);
+        write_sss_token(p + (IST_NMI * IST_SHSTK_SIZE) - 8);
+        write_sss_token(p + (IST_DB  * IST_SHSTK_SIZE) - 8);
+        write_sss_token(p + (IST_DF  * IST_SHSTK_SIZE) - 8);
+    }
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
 
+    /* Primary Shadow Stack.  1x 4k in stack page 5. */
     p += PRIMARY_SHSTK_SLOT * PAGE_SIZE;
-    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
+    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
+        write_sss_token(p + PAGE_SIZE - 8);
+
+    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
 }
 
 void memguard_unguard_stack(void *p)
diff --git a/xen/include/asm-x86/config.h b/xen/include/asm-x86/config.h
index f3cf5df462..2ba234383d 100644
--- a/xen/include/asm-x86/config.h
+++ b/xen/include/asm-x86/config.h
@@ -66,6 +66,8 @@
 #define STACK_ORDER 3
 #define STACK_SIZE  (PAGE_SIZE << STACK_ORDER)
 
+#define IST_SHSTK_SIZE 1024
+
 #define TRAMPOLINE_STACK_SPACE  PAGE_SIZE
 #define TRAMPOLINE_SPACE        (KB(64) - TRAMPOLINE_STACK_SPACE)
 #define WAKEUP_STACK_MIN        3072
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 5acf3d3d5a..f632affaef 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -364,6 +364,7 @@ void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t);
                                    _PAGE_DIRTY | _PAGE_RW)
 #define __PAGE_HYPERVISOR_UCMINUS (__PAGE_HYPERVISOR | _PAGE_PCD)
 #define __PAGE_HYPERVISOR_UC      (__PAGE_HYPERVISOR | _PAGE_PCD | _PAGE_PWT)
+#define __PAGE_HYPERVISOR_SHSTK   (__PAGE_HYPERVISOR_RO | _PAGE_DIRTY)
 
 #define MAP_SMALL_PAGES _PAGE_AVAIL0 /* don't use superpages mappings */
 
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 8ab09cf7ed..859bd9e2ec 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -435,7 +435,8 @@ struct __packed tss64 {
     uint16_t :16, bitmap;
 };
 struct tss_page {
-    struct tss64 __aligned(PAGE_SIZE) tss;
+    uint64_t __aligned(PAGE_SIZE) ist_ssp[8];
+    struct tss64 tss;
 };
 DECLARE_PER_CPU(struct tss_page, tss_page);
 
diff --git a/xen/include/asm-x86/x86_64/page.h b/xen/include/asm-x86/x86_64/page.h
index 9876634881..26621f9519 100644
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -171,6 +171,7 @@ static inline intpte_t put_pte_flags(unsigned int x)
 #define PAGE_HYPERVISOR_RW      (__PAGE_HYPERVISOR_RW      | _PAGE_GLOBAL)
 #define PAGE_HYPERVISOR_RX      (__PAGE_HYPERVISOR_RX      | _PAGE_GLOBAL)
 #define PAGE_HYPERVISOR_RWX     (__PAGE_HYPERVISOR         | _PAGE_GLOBAL)
+#define PAGE_HYPERVISOR_SHSTK   (__PAGE_HYPERVISOR_SHSTK   | _PAGE_GLOBAL)
 
 #define PAGE_HYPERVISOR         PAGE_HYPERVISOR_RW
 #define PAGE_HYPERVISOR_UCMINUS (__PAGE_HYPERVISOR_UCMINUS | \
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:20:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1av-0004wK-HR; Wed, 27 May 2020 19:20:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UeBB=7J=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1je1at-0004jB-W3
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:20:04 +0000
X-Inumbo-ID: 0d08f0be-a04f-11ea-a777-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d08f0be-a04f-11ea-a777-12813bfff9fa;
 Wed, 27 May 2020 19:19:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=PysOfsRPogzl0EI+m6wAd6HSqC3fHrDXCdA/oSYq1QA=; b=pP96bSy3N3t5XfIYplBwIxvMV
 nT/Lgd9hTKg/TPFjEVKtWxxMLHkFpXmolaF4nUEwfywDCFyGo10SLt5v+2TiI6a/xrV8aoTAvBcyl
 9BE7+b01ddIlleFydR6xd/hHnbUNv3QspvULxj6s/5UCq5qG+v1vcXc36tUhWQ5hUslVo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1je1am-0003NY-Qc; Wed, 27 May 2020 19:19:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1je1am-0001kB-JP; Wed, 27 May 2020 19:19:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1je1am-0008Lw-IW; Wed, 27 May 2020 19:19:56 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150394-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150394: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
X-Osstest-Versions-That: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 27 May 2020 19:19:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150394 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150394/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150389
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150389
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150389
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150389
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150389
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150389
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150389
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150389
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150389
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150389
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
baseline version:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707

Last test of basis   150394  2020-05-27 06:35:55 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:34:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1oj-0006Ze-1G; Wed, 27 May 2020 19:34:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1oh-0006ZZ-NH
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:34:19 +0000
X-Inumbo-ID: 0e5e31e8-a051-11ea-8993-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0e5e31e8-a051-11ea-8993-bc764e2007e4;
 Wed, 27 May 2020 19:34:18 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 1Fy0vTmIqdblFkHjLrUVZU3/BmKXzCKiM8pMK5MITBvTnvTgIv0MZhoxUPvTF3MWxpbVwfe6yi
 1gGDaBuP2Bi6nW0jfWOzXb0YE1UD+QTOupodLlegaGD1PVAf7X3YHY1n88wHDOxoAtmerMJCPm
 iEJxzbXvy/KcjtqHr3kOOkspOJTy/m88xlA3kywA/KvGrY3mKVmqihgZcNDjCu9LZ35jc/0Pea
 XqNB90JuXdcgrbb2mjNujuLyhUuc60vtdpYi36yQPh1BhtCVt19cTDBfnIZryDjbGRe5mqJbfi
 Ezc=
X-SBRS: 2.7
X-MesageID: 18947884
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18947884"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 12/14] x86/entry: Adjust guest paths to be shadow stack
 compatible
Date: Wed, 27 May 2020 20:18:45 +0100
Message-ID: <20200527191847.17207-13-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The SYSCALL/SYSENTER/SYSRET paths need to use {SET,CLR}SSBSY.  The IRET to
guest paths must not.  In the SYSRET path, re-position the mov which loads rip
into %rcx so we can use %rcx for CLRSSBSY, rather than spilling another
register to the stack.

While we can in principle detect shadow stack corruption and a failure to
clear the supervisor token busy bit in the SYSRET path (by inspecting the
carry flag following CLRSSBSY), we cannot detect similar problems for the IRET
path (IRET is specified not to fault in this case).

We will double fault at some point later, when next trying to enter Xen, due
to an already-set supervisor shadow stack busy bit.  As SYSRET is an uncommon
path anyway, avoid the added complexity for no appreciable gain.
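As an aside, the token busy-bit protocol above can be illustrated with a toy
model in plain C (this is purely illustrative, not Xen code: names and the
token layout are hypothetical, and the boolean results stand in for the real
CF flag / #CP / double-fault behaviour):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Toy supervisor shadow stack: a token word plus a modelled SSP. */
struct toy_shstk {
    uint64_t token;
    uint64_t ssp;
};

#define TOY_TOKEN_BUSY 1ull

/*
 * Model of SETSSBSY: only succeeds when the supervisor token matches and is
 * not already busy.  A second entry without an intervening CLRSSBSY fails,
 * mirroring the double fault on the next entry into Xen described above.
 */
static bool toy_setssbsy(struct toy_shstk *s, uint64_t expected)
{
    if (s->token != expected)        /* busy already set, or corrupted */
        return false;
    s->token |= TOY_TOKEN_BUSY;
    s->ssp = expected;
    return true;
}

/*
 * Model of CLRSSBSY: clears the busy bit.  The boolean result mirrors the
 * carry flag, which is how corruption could in principle be detected on the
 * SYSRET path.
 */
static bool toy_clrssbsy(struct toy_shstk *s, uint64_t expected)
{
    if (s->token != (expected | TOY_TOKEN_BUSY))
        return false;
    s->token = expected;
    return true;
}
```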

The IST switch onto the primary stack is not great as we have an instruction
boundary with no shadow stack.  This is the least bad option available.

These paths are not used before shadow stacks are properly established, so
they can use alternatives to avoid extra runtime CET detection logic.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Break comment deletion out to an earlier patch
 * SETSSBSY on the SYSENTER path as well
 * Don't spill %rax to the stack in the SYSRET path
---
 xen/arch/x86/x86_64/compat/entry.S |  1 +
 xen/arch/x86/x86_64/entry.S        | 32 +++++++++++++++++++++++++++++++-
 2 files changed, 32 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/x86_64/compat/entry.S b/xen/arch/x86/x86_64/compat/entry.S
index 3cd375bd48..2ca81341a4 100644
--- a/xen/arch/x86/x86_64/compat/entry.S
+++ b/xen/arch/x86/x86_64/compat/entry.S
@@ -198,6 +198,7 @@ ENTRY(cr4_pv32_restore)
 
 /* See lstar_enter for entry register state. */
 ENTRY(cstar_enter)
+        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
         /* sti could live here when we don't switch page tables below. */
         CR4_PV32_RESTORE
         movq  8(%rsp),%rax /* Restore %rax. */
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index 78ac0df49f..449ee468e4 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -191,9 +191,16 @@ restore_all_guest:
         sarq  $47,%rcx
         incl  %ecx
         cmpl  $1,%ecx
-        movq  8(%rsp),%rcx            # RIP
         ja    iret_exit_to_guest
 
+        /* Clear the supervisor shadow stack token busy bit. */
+.macro rag_clrssbsy
+        rdsspq %rcx
+        clrssbsy (%rcx)
+.endm
+        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
+
+        movq  8(%rsp), %rcx           # RIP
         cmpw  $FLAT_USER_CS32,16(%rsp)# CS
         movq  32(%rsp),%rsp           # RSP
         je    1f
@@ -226,6 +233,7 @@ iret_exit_to_guest:
  * %ss must be saved into the space left by the trampoline.
  */
 ENTRY(lstar_enter)
+        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
         /* sti could live here when we don't switch page tables below. */
         movq  8(%rsp),%rax /* Restore %rax. */
         movq  $FLAT_KERNEL_SS,8(%rsp)
@@ -259,6 +267,7 @@ ENTRY(lstar_enter)
         jmp   test_all_events
 
 ENTRY(sysenter_entry)
+        ALTERNATIVE "", "setssbsy", X86_FEATURE_XEN_SHSTK
         /* sti could live here when we don't switch page tables below. */
         pushq $FLAT_USER_SS
         pushq $0
@@ -877,6 +886,27 @@ handle_ist_exception:
         movl  $UREGS_kernel_sizeof/8,%ecx
         movq  %rdi,%rsp
         rep   movsq
+
+        /* Switch Shadow Stacks */
+.macro ist_switch_shstk
+        rdsspq %rdi
+        clrssbsy (%rdi)
+        /*
+         * Switching supervisor shadow stacks is especially hard, as supervisor
+         * and restore tokens are incompatible.
+         *
+         * For now, we only need to switch on to an unused primary shadow
+         * stack, so use SETSSBSY for the purpose, exactly like the
+         * SYSCALL/SYSENTER entry.
+         *
+         * Ideally, we'd want to CLRSSBSY after switching stacks, but that
+         * will leave SSP zeroed so it is not an option.  Instead, we
+         * transiently
+         * have a zero SSP on this instruction boundary, and depend on IST for
+         * NMI/#MC protection.
+         */
+        setssbsy
+.endm
+        ALTERNATIVE "", ist_switch_shstk, X86_FEATURE_XEN_SHSTK
 1:
 #else
         ASSERT_CONTEXT_IS_XEN
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:34:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1on-0006a7-Hc; Wed, 27 May 2020 19:34:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1om-0006Zn-Lz
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:34:24 +0000
X-Inumbo-ID: 11666ca2-a051-11ea-a779-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 11666ca2-a051-11ea-a779-12813bfff9fa;
 Wed, 27 May 2020 19:34:23 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: wY10bV6tZiw8sRMAEjklee9H8RWGt6pqPgLjQ3cvNmM25+VfjtEE08bM7dz9/CBxYagtPn4LQ9
 9ZPonaohusmVZGlVGlaSY1fLc4vEQzXgXV3Yr6ZDI1uZZZPDRKuR+aRqwlEOWGnSBSOOHK7054
 CkoiN1AupubV16bKGGtSqGLVA6RuL/rHFOrdGnXEKATOiFpGSoHyhCb4cNaAUOdEKdru+jFRi6
 EEuEuMhCcHhljANqiK7tKQo2J3/9jRGidOEUY09coR0Ek2+tzdSE5HXOIj+lJGwhLT0gjFgi0I
 pzw=
X-SBRS: 2.7
X-MesageID: 18615173
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18615173"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 11/14] x86/alt: Adjust _alternative_instructions() to not
 create shadow stacks
Date: Wed, 27 May 2020 20:18:44 +0100
Message-ID: <20200527191847.17207-12-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The current alternatives algorithm clears CR0.WP and writes into .text.  This
has a side effect of the mappings becoming shadow stacks once CET is active.

Adjust _alternative_instructions() to clean up after itself.  This involves
extending the set of bits modify_xen_mappings() can alter to include Dirty
(and Accessed for good measure).

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/alternative.c | 14 ++++++++++++++
 xen/arch/x86/mm.c          |  6 +++---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/xen/arch/x86/alternative.c b/xen/arch/x86/alternative.c
index ce2b4302e6..004e9ede25 100644
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -21,6 +21,7 @@
 #include <asm/processor.h>
 #include <asm/alternative.h>
 #include <xen/init.h>
+#include <asm/setup.h>
 #include <asm/system.h>
 #include <asm/traps.h>
 #include <asm/nmi.h>
@@ -398,6 +399,19 @@ static void __init _alternative_instructions(bool force)
         panic("Timed out waiting for alternatives self-NMI to hit\n");
 
     set_nmi_callback(saved_nmi_callback);
+
+    /*
+     * When Xen is using shadow stacks, clearing CR0.WP and writing into
+     * the mappings sets their dirty bits, turning them into shadow stack
+     * mappings.
+     *
+     * While we can execute from them, this would also permit them to be the
+     * target of WRSS instructions, so reset the dirty bits after patching.
+     */
+    if ( cpu_has_xen_shstk )
+        modify_xen_mappings(XEN_VIRT_START + MB(2),
+                            (unsigned long)&__2M_text_end,
+                            PAGE_HYPERVISOR_RX);
 }
 
 void __init alternative_instructions(void)
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 4d6d22cc41..711b12828f 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5442,8 +5442,8 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
  * mappings, but will shatter superpages if necessary, and will destroy
  * mappings if not passed _PAGE_PRESENT.
  *
- * The only flags considered are NX, RW and PRESENT.  All other input flags
- * are ignored.
+ * The only flags considered are NX, D, A, RW and PRESENT.  All other input
+ * flags are ignored.
  *
  * It is an error to call with present flags over an unpopulated range.
  */
@@ -5456,7 +5456,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     unsigned long v = s;
 
     /* Set of valid PTE bits which may be altered. */
-#define FLAGS_MASK (_PAGE_NX|_PAGE_RW|_PAGE_PRESENT)
+#define FLAGS_MASK (_PAGE_NX|_PAGE_DIRTY|_PAGE_ACCESSED|_PAGE_RW|_PAGE_PRESENT)
     nf &= FLAGS_MASK;
 
     ASSERT(IS_ALIGNED(s, PAGE_SIZE));
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1on-0006Zx-9F; Wed, 27 May 2020 19:34:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1om-0006Zk-GC
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:34:24 +0000
X-Inumbo-ID: 0faab7f6-a051-11ea-9947-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0faab7f6-a051-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 19:34:20 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 63PRrOCGIORe2oeMNymVJruyxbUTOeT4sjMpQqtroN19UX8P7qCkVtoTz54iDIaAiqLt8qi9Mt
 /mY5IUfUBqYTY0xurY8LU5jdhtOp6HuTYlEiklfG2DC9LhZY0K3tUy6xmWj8ODIDd1+1pljwbE
 J0alG1ldPodeuqZw1ayxiyajiThrKxl/Yfsk9Yvwobj9x3FG8Oag6vtyxB+8BDQ/rV4DJ0Mpy3
 ktkAj6jGI3H2I4XnI6XxUP8E6NFqbejvopVIRC3Kd3z7D47rXDJtqwsnXGYxIGD+f/2mpjbITz
 QWA=
X-SBRS: 2.7
X-MesageID: 18890129
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18890129"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 14/14] x86/shstk: Activate Supervisor Shadow Stacks
Date: Wed, 27 May 2020 20:18:47 +0100
Message-ID: <20200527191847.17207-15-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

With all other plumbing in place, activate shadow stacks when possible.  Note
that this requires prohibiting the use of PV32.  Compatibility can be
maintained if necessary via PV-Shim.
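The option interaction can be sketched as follows (a hypothetical model only:
the structure and function names are illustrative, not Xen's actual
variables):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy view of the relevant boot-time decisions. */
struct toy_opts {
    bool hw_has_shstk;   /* CET-SS reported by CPUID */
    bool opt_no_shstk;   /* cet=no-shstk on the command line */
    bool opt_pv32;       /* pv=32 boolean, defaults to true */
};

static bool toy_using_shstk(const struct toy_opts *o)
{
    return o->hw_has_shstk && !o->opt_no_shstk;
}

/*
 * Shadow stacks and 32bit PV guests are incompatible, so once Xen decides
 * to use shadow stacks, pv=32 is forced off; 32bit compatibility moves to
 * the pv-shim mechanism instead.
 */
static void toy_resolve(struct toy_opts *o)
{
    if (toy_using_shstk(o))
        o->opt_pv32 = false;
}
```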

The BSP needs to wait until alternatives have run (to avoid interaction with
CR0.WP), and after the first reset_stack_and_jump() to avoid having a pristine
shadow stack interact in problematic ways with an in-use regular stack.
Activate shadow stack in reinit_bsp_stack().

APs have all infrastructure set up by the booting CPU, so enable shadow stacks
before entering C.  Adjust the logic to call start_secondary rather than jump
to it, so stack traces make more sense.

The crash path needs to turn CET off to avoid interfering with the crash
kernel's environment.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Split S3 out into earlier patch.
 * Discuss CET-SS disabling PV32, and start_secondary to be a call.
---
 docs/misc/xen-command-line.pandoc |  8 ++++++++
 xen/arch/x86/boot/x86_64.S        | 35 +++++++++++++++++++++++++++++++++--
 xen/arch/x86/cpu/common.c         |  5 +++++
 xen/arch/x86/crash.c              |  7 +++++++
 xen/arch/x86/setup.c              | 26 ++++++++++++++++++++++++++
 xen/arch/x86/spec_ctrl.c          |  8 ++++++++
 6 files changed, 87 insertions(+), 2 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index d4934eabb7..9892c57ee9 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -287,6 +287,10 @@ call/jmp COP/JOP) attacks.
     `cet=no-shstk` will cause Xen not to use Shadow Stacks even when support
     is available in hardware.
 
+    Shadow Stacks are incompatible with 32bit PV guests.  This option will
+    override the `pv=32` boolean to false.  Backwards compatibility can be
+    maintained with the `pv-shim` mechanism.
+
 ### clocksource (x86)
 > `= pit | hpet | acpi | tsc`
 
@@ -1726,6 +1730,10 @@ Controls for aspects of PV guest support.
 *   The `32` boolean controls whether 32bit PV guests can be created.  It
     defaults to `true`, and is ignored when `CONFIG_PV32` is compiled out.
 
+    32bit PV guests are incompatible with CET Shadow Stacks.  If Xen is using
+    shadow stacks, this option will be overridden to `false`.  Backwards
+    compatibility can be maintained with the `pv-shim` mechanism.
+
 ### pv-linear-pt (x86)
 > `= <boolean>`
 
diff --git a/xen/arch/x86/boot/x86_64.S b/xen/arch/x86/boot/x86_64.S
index 314a32a19f..551acd9e94 100644
--- a/xen/arch/x86/boot/x86_64.S
+++ b/xen/arch/x86/boot/x86_64.S
@@ -28,8 +28,39 @@ ENTRY(__high_start)
         lretq
 1:
         test    %ebx,%ebx
-        jnz     start_secondary
-
+        jz      .L_bsp
+
+        /* APs.  Set up shadow stacks before entering C. */
+
+        testl   $cpufeat_mask(X86_FEATURE_XEN_SHSTK), \
+                CPUINFO_FEATURE_OFFSET(X86_FEATURE_XEN_SHSTK) + boot_cpu_data(%rip)
+        je      .L_ap_shstk_done
+
+        /* Set up MSR_S_CET. */
+        mov     $MSR_S_CET, %ecx
+        xor     %edx, %edx
+        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
+        wrmsr
+
+        /* Derive MSR_PL0_SSP from %rsp (token written when stack is allocated). */
+        mov     $MSR_PL0_SSP, %ecx
+        mov     %rsp, %rdx
+        shr     $32, %rdx
+        mov     %esp, %eax
+        and     $~(STACK_SIZE - 1), %eax
+        or      $(PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8, %eax
+        wrmsr
+
+        /* Enable CET.  MSR_INTERRUPT_SSP_TABLE is set up later in load_system_tables(). */
+        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ecx
+        mov     %rcx, %cr4
+        setssbsy
+
+.L_ap_shstk_done:
+        call    start_secondary
+        BUG     /* start_secondary() shouldn't return. */
+
+.L_bsp:
         /* Pass off the Multiboot info structure to C land (if applicable). */
         mov     multiboot_ptr(%rip),%edi
         call    __start_xen
diff --git a/xen/arch/x86/cpu/common.c b/xen/arch/x86/cpu/common.c
index dcc9ee08de..b4416f941c 100644
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -329,6 +329,11 @@ void __init early_cpu_init(void)
 	       x86_cpuid_vendor_to_str(c->x86_vendor), c->x86, c->x86,
 	       c->x86_model, c->x86_model, c->x86_mask, eax);
 
+	if (c->cpuid_level >= 7) {
+		cpuid_count(7, 0, &eax, &ebx, &ecx, &edx);
+		c->x86_capability[cpufeat_word(X86_FEATURE_CET_SS)] = ecx;
+	}
+
 	eax = cpuid_eax(0x80000000);
 	if ((eax >> 16) == 0x8000 && eax >= 0x80000008) {
 		eax = cpuid_eax(0x80000008);
diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 450eecd46b..0611b4fb9b 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -200,6 +200,13 @@ void machine_crash_shutdown(void)
     /* Reset CPUID masking and faulting to the host's default. */
     ctxt_switch_levelling(NULL);
 
+    /* Disable shadow stacks. */
+    if ( cpu_has_xen_shstk )
+    {
+        wrmsrl(MSR_S_CET, 0);
+        write_cr4(read_cr4() & ~X86_CR4_CET);
+    }
+
     info = kexec_crash_save_info();
     info->xen_phys_start = xen_phys_start;
     info->dom0_pfn_to_mfn_frame_list_list =
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 584589baff..0c4fd8c6a8 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -664,6 +664,13 @@ static void __init noreturn reinit_bsp_stack(void)
     stack_base[0] = stack;
     memguard_guard_stack(stack);
 
+    if ( cpu_has_xen_shstk )
+    {
+        wrmsrl(MSR_PL0_SSP, (unsigned long)stack + 0x5ff8);
+        wrmsrl(MSR_S_CET, CET_SHSTK_EN | CET_WRSS_EN);
+        asm volatile ("setssbsy" ::: "memory");
+    }
+
     reset_stack_and_jump_nolp(init_done);
 }
 
@@ -1065,6 +1072,21 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     /* This must come before e820 code because it sets paddr_bits. */
     early_cpu_init();
 
+    /* Choose shadow stack early, to set infrastructure up appropriately. */
+    if ( opt_xen_shstk && boot_cpu_has(X86_FEATURE_CET_SS) )
+    {
+        printk("Enabling Supervisor Shadow Stacks\n");
+
+        setup_force_cpu_cap(X86_FEATURE_XEN_SHSTK);
+#ifdef CONFIG_PV32
+        if ( opt_pv32 )
+        {
+            opt_pv32 = 0;
+            printk("  - Disabling PV32 due to Shadow Stacks\n");
+        }
+#endif
+    }
+
     /* Sanitise the raw E820 map to produce a final clean version. */
     max_page = raw_max_page = init_e820(memmap_type, &e820_raw);
 
@@ -1801,6 +1823,10 @@ void __init noreturn __start_xen(unsigned long mbi_p)
 
     alternative_branches();
 
+    /* Defer CR4.CET until alternatives have finished playing with CR0.WP */
+    if ( cpu_has_xen_shstk )
+        set_in_cr4(X86_CR4_CET);
+
     /*
      * NB: when running as a PV shim VCPUOP_up/down is wired to the shim
      * physical cpu_add/remove functions, so launch the guest with only
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index c5d8e587a8..a94be2d594 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -882,6 +882,14 @@ void __init init_speculation_mitigations(void)
     hw_smt_enabled = check_smt_enabled();
 
     /*
+     * First, disable the use of retpolines if Xen is using shadow stacks, as
+     * they are incompatible.
+     */
+    if ( cpu_has_xen_shstk &&
+         (opt_thunk == THUNK_DEFAULT || opt_thunk == THUNK_RETPOLINE) )
+        thunk = THUNK_JMP;
+
+    /*
      * Has the user specified any custom BTI mitigations?  If so, follow their
      * instructions exactly and disable all heuristics.
      */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:34:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:34:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1os-0006bS-Q3; Wed, 27 May 2020 19:34:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1or-0006b7-HC
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:34:29 +0000
X-Inumbo-ID: 107ad8aa-a051-11ea-9947-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 107ad8aa-a051-11ea-9947-bc764e2007e4;
 Wed, 27 May 2020 19:34:21 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: y3yuGKwlIoZ/HdGOnxeDgPPOs0wYll11M5AmTmCsBOgUvsK+Ui34zh3c7zEMnp8C8ZMwhgrbzY
 ONgNIr9ifCCY9QmRQ592wMWvYsy952D5rLRirgbTZYlYR7XasUsPjGm0wkdLgkfMMZz9RoCyEk
 inT5OCYK5wlOdtiHr/frvxX95qgNFOpt1MgEG3BNf+hFvXhmr9lkj8kxbiXv/7g5I5pd+4W6eV
 wrqjo/oLEFZOBWyrfmG/mmdmOIKhu9oh7cWsx4x6hWghUHzJNzOn9MiviLGQ+LzQgofCXVJXL3
 6Vg=
X-SBRS: 2.7
X-MesageID: 18890134
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18890134"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 13/14] x86/S3: Save and restore Shadow Stack configuration
Date: Wed, 27 May 2020 20:18:46 +0100
Message-ID: <20200527191847.17207-14-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Save the current SSP in do_suspend_lowlevel(), and restore the shadow stack
configuration (MSR_S_CET, MSR_PL0_SSP, CR4.CET) on the s3_resume path.  See
the comments in wakeup_prot.S for the details of re-activating an in-use
shadow stack.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

Semi-RFC - I can't actually test this path.  Currently attempting to arrange
for someone else to.

v2:
 * New, split out of "x86/shstk: Activate Supervisor Shadow Stacks"
 * Drop asm/config.h include
 * Fix order of operations to avoid multiple crashes.
---
 xen/arch/x86/acpi/wakeup_prot.S | 58 +++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/msr-index.h |  3 +++
 xen/include/asm-x86/x86-defns.h |  1 +
 3 files changed, 62 insertions(+)

diff --git a/xen/arch/x86/acpi/wakeup_prot.S b/xen/arch/x86/acpi/wakeup_prot.S
index 4dba6020a7..dcc7e2327d 100644
--- a/xen/arch/x86/acpi/wakeup_prot.S
+++ b/xen/arch/x86/acpi/wakeup_prot.S
@@ -1,3 +1,7 @@
+#include <asm/msr-index.h>
+#include <asm/page.h>
+#include <asm/processor.h>
+
         .file __FILE__
         .text
         .code64
@@ -15,6 +19,12 @@ ENTRY(do_suspend_lowlevel)
         mov     %cr0, %rax
         mov     %rax, saved_cr0(%rip)
 
+#ifdef CONFIG_XEN_SHSTK
+        mov     $1, %eax
+        rdsspq  %rax
+        mov     %rax, saved_ssp(%rip)
+#endif
+
         /* enter sleep state physically */
         mov     $3, %edi
         call    acpi_enter_sleep_state
@@ -48,6 +58,51 @@ ENTRY(s3_resume)
         pushq   %rax
         lretq
 1:
+#ifdef CONFIG_XEN_SHSTK
+        /*
+         * Restoring SSP is a little complicated, because we are intercepting
+         * an in-use shadow stack.  Write a temporary token under the stack,
+         * so SETSSBSY will successfully load a value useful for us, then
+         * reset MSR_PL0_SSP to its usual value and pop the temporary token.
+         */
+        mov     saved_rsp(%rip), %rdi
+        cmpq    $1, %rdi
+        je      .L_shstk_done
+
+        /* Set up MSR_S_CET. */
+        mov     $MSR_S_CET, %ecx
+        xor     %edx, %edx
+        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
+        wrmsr
+
+        /* Construct the temporary supervisor token under SSP. */
+        sub     $8, %rdi
+
+        /* Load it into MSR_PL0_SSP. */
+        mov     $MSR_PL0_SSP, %ecx
+        mov     %rdi, %rdx
+        shr     $32, %rdx
+        mov     %edi, %eax
+        wrmsr
+
+        /* Enable CET.  MSR_INTERRUPT_SSP_TABLE is set up later in load_system_tables(). */
+        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ebx
+        mov     %rbx, %cr4
+
+        /* Write the temporary token onto the shadow stack, and activate it. */
+        wrssq   %rdi, (%rdi)
+        setssbsy
+
+        /* Reset MSR_PL0_SSP back to its normal value. */
+        and     $~(STACK_SIZE - 1), %eax
+        or      $(PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8, %eax
+        wrmsr
+
+        /* Pop the temporary token off the stack. */
+        mov     $2, %eax
+        incsspd %eax
+.L_shstk_done:
+#endif
 
         call    load_system_tables
 
@@ -65,6 +120,9 @@ ENTRY(s3_resume)
 
 saved_rsp:      .quad   0
 saved_cr0:      .quad   0
+#ifdef CONFIG_XEN_SHSTK
+saved_ssp:      .quad   0
+#endif
 
 GLOBAL(saved_magic)
         .long   0x9abcdef0
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index 85c5f20b76..cdfb7b047b 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -68,6 +68,9 @@
 
 #define MSR_U_CET                           0x000006a0
 #define MSR_S_CET                           0x000006a2
+#define  CET_SHSTK_EN                       (_AC(1, ULL) <<  0)
+#define  CET_WRSS_EN                        (_AC(1, ULL) <<  1)
+
 #define MSR_PL0_SSP                         0x000006a4
 #define MSR_PL1_SSP                         0x000006a5
 #define MSR_PL2_SSP                         0x000006a6
diff --git a/xen/include/asm-x86/x86-defns.h b/xen/include/asm-x86/x86-defns.h
index 5366e2d018..072c87042c 100644
--- a/xen/include/asm-x86/x86-defns.h
+++ b/xen/include/asm-x86/x86-defns.h
@@ -73,6 +73,7 @@
 #define X86_CR4_SMEP       0x00100000 /* enable SMEP */
 #define X86_CR4_SMAP       0x00200000 /* enable SMAP */
 #define X86_CR4_PKE        0x00400000 /* enable PKE */
+#define X86_CR4_CET        0x00800000 /* Control-flow Enforcement Technology */
 
 /*
  * XSTATE component flags in XCR0
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 19:34:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 19:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je1oy-0006cq-2D; Wed, 27 May 2020 19:34:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/dLv=7J=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1je1ow-0006cN-HW
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 19:34:34 +0000
X-Inumbo-ID: 130007e4-a051-11ea-81bc-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 130007e4-a051-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 19:34:26 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: NPahr4HOwB/rPxUenW5oMXduKyL+uOiKhCpIDIFNdBdKGst4pFLbAcGaUzrmUK8S5AUttl92jn
 apZJLDNuAGxnq7nvb+RwyoDq0h6KJ9aNV5JKxkAqDGy05fCpCQYS+sW5Infn+BhObaVgccfRD7
 Cniq80QEBLLaA/PichBP8Kol8/FhBO7RzKG9oVCKq0naYVLJyYLSV1aBHMrJLVBP/cEpRLYDvF
 Gyf0yoFxFWl5qT/uT969foSfPYFHrdrSEZ43ePELaNMyryXcW9PQ+rV9YGfN093+Y80M8uaY2f
 f+w=
X-SBRS: 2.7
X-MesageID: 18592446
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,442,1583211600"; d="scan'208";a="18592446"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 10/14] x86/extable: Adjust extable handling to be shadow
 stack compatible
Date: Wed, 27 May 2020 20:18:43 +0100
Message-ID: <20200527191847.17207-11-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When adjusting an IRET frame to recover from a fault, an equivalent
adjustment needs making in the shadow IRET frame.

The adjustment in exception_with_ints_disabled() could in principle be an
alternative block rather than an ifdef, as the only two current users of
_PRE_EXTABLE() are IRET-to-guest instructions.  However, this is not a
fastpath, and this form is more robust to future changes.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>

v2:
 * Break extable_shstk_fixup() out into a separate function.
 * Guard from shstk underflows, and unrealistic call traces.
---
 xen/arch/x86/traps.c        | 67 ++++++++++++++++++++++++++++++++++++++++++++-
 xen/arch/x86/x86_64/entry.S | 11 +++++++-
 2 files changed, 76 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 235a72cf4a..ce910294ea 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -363,7 +363,7 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
 }
 
 /*
- * Notes for get_stack_trace_bottom() and get_stack_dump_bottom()
+ * Notes for get_{stack,shstk}*_bottom() helpers
  *
  * Stack pages 1 - 4:
  *   These are all 1-page IST stacks.  Each of these stacks have an exception
@@ -400,6 +400,18 @@ unsigned long get_stack_trace_bottom(unsigned long sp)
     }
 }
 
+static unsigned long get_shstk_bottom(unsigned long sp)
+{
+    switch ( get_stack_page(sp) )
+    {
+#ifdef CONFIG_XEN_SHSTK
+    case 0:  return ROUNDUP(sp, IST_SHSTK_SIZE) - sizeof(unsigned long);
+    case 5:  return ROUNDUP(sp, PAGE_SIZE)      - sizeof(unsigned long);
+#endif
+    default: return sp - sizeof(unsigned long);
+    }
+}
+
 unsigned long get_stack_dump_bottom(unsigned long sp)
 {
     switch ( get_stack_page(sp) )
@@ -763,6 +775,56 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
           trapnr, vec_name(trapnr), regs->error_code);
 }
 
+static void extable_shstk_fixup(struct cpu_user_regs *regs, unsigned long fixup)
+{
+    unsigned long ssp, *ptr, *base;
+
+    asm ( "rdsspq %0" : "=r" (ssp) : "0" (1) );
+    if ( ssp == 1 )
+        return;
+
+    ptr = _p(ssp);
+    base = _p(get_shstk_bottom(ssp));
+
+    for ( ; ptr < base; ++ptr )
+    {
+        /*
+         * Search for %rip.  The shstk currently looks like this:
+         *
+         *   ...  [Likely pointed to by SSP]
+         *   %cs  [== regs->cs]
+         *   %rip [== regs->rip]
+         *   SSP  [Likely points to 3 slots higher, above %cs]
+         *   ...  [call tree to this function, likely 2/3 slots]
+         *
+         * and we want to overwrite %rip with fixup.  There are two
+         * complications:
+         *   1) We can't depend on SSP values, because they won't differ by 3
+         *      slots if the exception is taken on an IST stack.
+         *   2) There are synthetic (unrealistic but not impossible) scenarios
+         *      where %rip can end up in the call tree to this function, so we
+         *      can't check against regs->rip alone.
+         *
+         * Check for both regs->rip and regs->cs matching.
+         */
+
+        if ( ptr[0] == regs->rip && ptr[1] == regs->cs )
+        {
+            asm ( "wrssq %[fix], %[stk]"
+                  : [stk] "=m" (*ptr)
+                  : [fix] "r" (fixup) );
+            return;
+        }
+    }
+
+    /*
+     * We failed to locate and fix up the shadow IRET frame.  This could be
+     * due to shadow stack corruption, or bad logic above.  We cannot continue
+     * executing the interrupted context.
+     */
+    BUG();
+}
+
 static bool extable_fixup(struct cpu_user_regs *regs, bool print)
 {
     unsigned long fixup = search_exception_table(regs);
@@ -779,6 +841,9 @@ static bool extable_fixup(struct cpu_user_regs *regs, bool print)
                vec_name(regs->entry_vector), regs->error_code,
                _p(regs->rip), _p(regs->rip), _p(fixup));
 
+    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
+        extable_shstk_fixup(regs, fixup);
+
     regs->rip = fixup;
     this_cpu(last_extable_addr) = regs->rip;
 
diff --git a/xen/arch/x86/x86_64/entry.S b/xen/arch/x86/x86_64/entry.S
index f7ee3dce91..78ac0df49f 100644
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -708,7 +708,16 @@ exception_with_ints_disabled:
         call  search_pre_exception_table
         testq %rax,%rax                 # no fixup code for faulting EIP?
         jz    1b
-        movq  %rax,UREGS_rip(%rsp)
+        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
+
+#ifdef CONFIG_XEN_SHSTK
+        mov    $1, %edi
+        rdsspq %rdi
+        cmp    $1, %edi
+        je     .L_exn_shstk_done
+        wrssq  %rax, (%rdi)             # fixup shadow stack
+.L_exn_shstk_done:
+#endif
         subq  $8,UREGS_rsp(%rsp)        # add ec/ev to previous stack frame
         testb $15,UREGS_rsp(%rsp)       # return %rsp is now aligned?
         jz    1f                        # then there is a pad quadword already
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Wed May 27 20:53:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 20:53:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je33D-00059j-3I; Wed, 27 May 2020 20:53:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mVnz=7J=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1je33B-00059e-CL
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 20:53:21 +0000
X-Inumbo-ID: 175459e8-a05c-11ea-a782-12813bfff9fa
Received: from forwardcorp1j.mail.yandex.net (unknown [5.45.199.163])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 175459e8-a05c-11ea-a782-12813bfff9fa;
 Wed, 27 May 2020 20:53:18 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id BD23D2E157F;
 Wed, 27 May 2020 23:53:16 +0300 (MSK)
Received: from sas2-32987e004045.qloud-c.yandex.net
 (sas2-32987e004045.qloud-c.yandex.net [2a02:6b8:c08:b889:0:640:3298:7e00])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 Xt3ArwXecc-rCfKmK9U; Wed, 27 May 2020 23:53:16 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590612796; bh=ZsdCg7tQAxxXKqG44grgO5wkYDTDlvE1HPG0Z1dXtPA=;
 h=In-Reply-To:Message-ID:Subject:To:From:References:Date:Cc;
 b=F3RR6S+ZRu88QHnYWpbbvafacf6fEC/1xkhzlcxkkqo49KTGptul9Z1Oit3tDKxul
 3qjVktQ/i3tf08IpeyluZJm0G030wnzCELwqcpIsqxOeYX9xHAmPsfA2lnAobH2iI/
 hWck9RjN7KSaV2qfKo+WD+82KTDgWLDtYj5bvMW8=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b080:8308::1:12])
 by sas2-32987e004045.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 6CLcHCW0d0-rCX0XiYO; Wed, 27 May 2020 23:53:12 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
Date: Wed, 27 May 2020 23:53:11 +0300
From: Roman Kagan <rvkagan@yandex-team.ru>
To: Eric Blake <eblake@redhat.com>
Subject: Re: [PATCH v6 4/5] block: make size-related BlockConf properties
 accept size suffixes
Message-ID: <20200527205311.GA373697@rvkaganb.lan>
Mail-Followup-To: Roman Kagan <rvkagan@yandex-team.ru>,
 Eric Blake <eblake@redhat.com>, qemu-devel@nongnu.org,
 Kevin Wolf <kwolf@redhat.com>, xen-devel@lists.xenproject.org,
 Gerd Hoffmann <kraxel@redhat.com>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Laurent Vivier <laurent@vivier.eu>, Max Reitz <mreitz@redhat.com>,
 qemu-block@nongnu.org,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
 Paul Durrant <paul@xen.org>, Fam Zheng <fam@euphon.net>,
 John Snow <jsnow@redhat.com>, "Michael S. Tsirkin" <mst@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>,
 Keith Busch <kbusch@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Stefan Hajnoczi <stefanha@redhat.com>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
 <20200527124511.986099-5-rvkagan@yandex-team.ru>
 <d2ac3549-e63d-d737-41fa-21965c551175@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <d2ac3549-e63d-d737-41fa-21965c551175@redhat.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org,
 Laurent Vivier <laurent@vivier.eu>, Keith Busch <kbusch@kernel.org>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 Max Reitz <mreitz@redhat.com>,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 27, 2020 at 09:50:39AM -0500, Eric Blake wrote:
> On 5/27/20 7:45 AM, Roman Kagan wrote:
> > Several BlockConf properties represent respective sizes in bytes so it
> > makes sense to accept size suffixes for them.
> > 
> > Turn them all into uint32_t and use size-suffix-capable setters/getters
> > on them; introduce DEFINE_PROP_SIZE32 and adjust DEFINE_PROP_BLOCKSIZE
> > for that. (Making them uint64_t and reusing DEFINE_PROP_SIZE isn't
> > justified because guests expect at most 32bit values.)
> > 
> > Also, since min_io_size is exposed to the guest by scsi and virtio-blk
> > devices as an uint16_t in units of logical blocks, introduce an
> > additional check in blkconf_blocksizes to prevent its silent truncation.
> > 
> > Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> > ---
> > v5 -> v6:
> > - add prop_size32 instead of going with 64bit
> 
> Would it be worth adding prop_size32 as its own patch, before using it here?

I've no strong opinion on this.  Should I split it out when respinning?

> But I'll review this as-is.
> 
> > +++ b/hw/block/block.c
> > @@ -96,6 +96,17 @@ bool blkconf_blocksizes(BlockConf *conf, Error **errp)
> >           return false;
> >       }
> > +    /*
> > +     * all devices which support min_io_size (scsi and virtio-blk) expose it to
> > +     * the guest as a uint16_t in units of logical blocks
> > +     */
> > +    if (conf->min_io_size > conf->logical_block_size * UINT16_MAX) {
> 
> This risks overflow.  Better would be:
> 
> if (conf->min_io_size / conf->logical_block-size > UINT16_MAX)

D'oh.  I kept it in mind and did it right initially in v4, but then
mixed it up.  Thanks for catching!

> 
> > +        error_setg(errp,
> > +                   "min_io_size must not exceed " stringify(UINT16_MAX)
> > +                   " logical blocks");
> > +        return false;
> > +    }
> > +
> >       if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
> >           error_setg(errp,
> >                      "opt_io_size must be a multiple of logical_block_size");
> 
> > +++ b/tests/qemu-iotests/172.out
> > @@ -24,11 +24,11 @@ Testing:
> >                 dev: floppy, id ""
> >                   unit = 0 (0x0)
> >                   drive = "floppy0"
> > -                logical_block_size = 512 (0x200)
> > -                physical_block_size = 512 (0x200)
> > -                min_io_size = 0 (0x0)
> > -                opt_io_size = 0 (0x0)
> > -                discard_granularity = 4294967295 (0xffffffff)
> > +                logical_block_size = 512 (512 B)
> > +                physical_block_size = 512 (512 B)
> > +                min_io_size = 0 (0 B)
> > +                opt_io_size = 0 (0 B)
> > +                discard_granularity = 4294967295 (4 GiB)
> 
> Although 4 GiB is not quite the same as 4294967295, the exact byte value
> next to the approximate size is not too bad.  The mechanical fallout from
> the change from int to size is fine to me.

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Wed May 27 21:03:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 21:03:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je3D8-00065H-2i; Wed, 27 May 2020 21:03:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TPJL=7J=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1je3D6-00065C-Kg
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 21:03:37 +0000
X-Inumbo-ID: 8794a86a-a05d-11ea-a782-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 8794a86a-a05d-11ea-a782-12813bfff9fa;
 Wed, 27 May 2020 21:03:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590613415;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=x0JRjjU3dgAIrBxPLDRaH5wHrqMTWl6+2G4BcOeSFds=;
 b=NS7bBsUjXRChUOXUpyRuNDVvkLVQ3R7usOZ2WN4GMH2ihYgC8O0Of/1t2JCE1OGXy3l+T4
 bQVBbukhWY45AR8DQStpRXT35wC5RFgV5kQ8WEuQfyuAdtee1xUNEzaUjXlS74+1NQUGVW
 paaDnInQ7+MbCKYEYszMuP5w/pBGmko=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-433-tt9gU6FBNfqvTs1VyXa-Jg-1; Wed, 27 May 2020 17:03:27 -0400
X-MC-Unique: tt9gU6FBNfqvTs1VyXa-Jg-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 3AED58005AA;
 Wed, 27 May 2020 21:03:25 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 770ECA1020;
 Wed, 27 May 2020 21:03:14 +0000 (UTC)
Subject: Re: [PATCH v6 4/5] block: make size-related BlockConf properties
 accept size suffixes
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org,
 Kevin Wolf <kwolf@redhat.com>, xen-devel@lists.xenproject.org,
 Gerd Hoffmann <kraxel@redhat.com>, =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?=
 <berrange@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 Laurent Vivier <laurent@vivier.eu>, Max Reitz <mreitz@redhat.com>,
 qemu-block@nongnu.org, =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?=
 <philmd@redhat.com>, Paul Durrant <paul@xen.org>, Fam Zheng
 <fam@euphon.net>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Eduardo Habkost
 <ehabkost@redhat.com>, Keith Busch <kbusch@kernel.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Stefan Hajnoczi <stefanha@redhat.com>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
 <20200527124511.986099-5-rvkagan@yandex-team.ru>
 <d2ac3549-e63d-d737-41fa-21965c551175@redhat.com>
 <20200527205311.GA373697@rvkaganb.lan>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <367c42ad-69cf-0ed6-1bbf-ed5ea1b0a957@redhat.com>
Date: Wed, 27 May 2020 16:03:14 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527205311.GA373697@rvkaganb.lan>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 3:53 PM, Roman Kagan wrote:

>>> ---
>>> v5 -> v6:
>>> - add prop_size32 instead of going with 64bit
>>
>> Would it be worth adding prop_size32 as its own patch, before using it here?
> 
> I've no strong opinion on this.  Should I better split it out when
> respinning?

Patch splitting is an art form.  But in general, a long series of 
smaller patches, each easy to review, will get accepted into the 
tree faster than a single patch that merges multiple changes into one 
big blob, even if the net diff is identical.  It's rare for someone 
to ask you to squash patches because you split too far, so the real 
tradeoff, if you decide to go with a v7, is whether splitting costs 
you more time than it saves the next reviewer (including the 
maintainer who will merge your patches, depending on whether the 
maintainer also reviews them or just trusts my review).
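For what it's worth, the mechanical side of splitting an already-committed 
change can be sketched with stock git (a toy example in a throwaway repo; 
the file names and commit subjects below are hypothetical stand-ins, not 
the actual QEMU patches):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name "You"
git commit -qm "init" --allow-empty

# One big commit that really contains two logical changes
# (stand-ins for "add prop_size32" and "use it in BlockConf").
echo "helper" > prop_size32.c
echo "user"   > blockconf.c
git add . && git commit -qm "big blob: helper + user"

# Split it: drop the commit but keep the changes, then commit in pieces.
# (For changes within a single file, `git add -p` stages hunk by hunk.)
git reset -q HEAD^
git add prop_size32.c && git commit -qm "qdev: add prop_size32 property"
git add blockconf.c   && git commit -qm "block: accept size suffixes"

git rev-list --count HEAD   # 3 commits: init plus the two split patches
```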

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Wed May 27 22:35:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 22:35:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je4dr-00050M-Sw; Wed, 27 May 2020 22:35:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W8B8=7J=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1je4dp-00050H-Sp
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 22:35:17 +0000
X-Inumbo-ID: 56a09234-a06a-11ea-81bc-bc764e2007e4
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56a09234-a06a-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 22:35:17 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04RMMCx8054693;
 Wed, 27 May 2020 22:35:14 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=mime-version :
 message-id : date : from : to : cc : subject : references : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=bPT+VORLylxcPAWpuFfNtjTNb+GnFV3EtWluz+y7630=;
 b=iuXe3awqVD7jdre/ppfwWPneSn50hN0SKbrxFDjlilZI0Q4fwndV4WJEkVGBPONBiYy7
 dYpyex/I0PRzfVneSMlbndZP++DnpvbJhEzARKwvWmBCizcKN70X+MMgS5mrlXkcK1g2
 ZhuPJ2RGuuvN8nRXgQ7RqHZ4pAp2VOSBOHTnOVnIyFhgZyuVD/XYkdvg2fbeD5XVX4dc
 Y3h1y1+hdVY8XorK+HMBct29orqDiNbJow2v0G5yW75QIdUpNlJkKehC0EeMJOQpAxdi
 qP2+HuPvU3nETYjdTD0RhXXjSLTMcbha59Jff6fyM40Qv+/T1c+8v+LHb4CgFRpMcUE7 /A== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 316u8r21cy-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 27 May 2020 22:35:14 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04RMHc5D184460;
 Wed, 27 May 2020 22:35:13 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3020.oracle.com with ESMTP id 317dkv9ae6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 27 May 2020 22:35:13 +0000
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04RMZCjn017158;
 Wed, 27 May 2020 22:35:12 GMT
Received: from [10.39.225.245] (/10.39.225.245) by default (Oracle Beehive
 Gateway v4.0) with ESMTP ; Wed, 27 May 2020 15:34:28 -0700
AUTOCRYPT: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4
USER-AGENT: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
Content-Language: en-US
MIME-Version: 1.0
Message-ID: <612fee00-4e7c-9b90-511d-4efb7676cbed@oracle.com>
Date: Wed, 27 May 2020 15:34:26 -0700 (PDT)
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: Bjorn Helgaas <helgaas@kernel.org>, Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 1/2] xen-pciback: Use dev_printk() when possible
References: <20200527174326.254329-1-helgaas@kernel.org>
 <20200527174326.254329-2-helgaas@kernel.org>
In-Reply-To: <20200527174326.254329-2-helgaas@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9634
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0
 malwarescore=0 bulkscore=0
 spamscore=0 suspectscore=0 mlxscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005270170
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9634
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 mlxscore=0
 priorityscore=1501 spamscore=0 cotscore=-2147483648 suspectscore=0
 phishscore=0 clxscore=1011 mlxlogscore=999 bulkscore=0 adultscore=0
 lowpriorityscore=0 impostorscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2004280000 definitions=main-2005270170
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bjorn Helgaas <bhelgaas@google.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 1:43 PM, Bjorn Helgaas wrote:
> @@ -155,8 +157,8 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
>  	u32 value = 0, tmp_val;
>  
>  	if (unlikely(verbose_request))
> -		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x\n",
> -		       pci_name(dev), size, offset);
> +		dev_printk(KERN_DEBUG, &dev->dev, "read %d bytes at 0x%x\n",
> +			   size, offset);


Maybe then dev_dbg() ?


-boris




From xen-devel-bounces@lists.xenproject.org Wed May 27 22:36:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 22:36:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je4f2-00054S-87; Wed, 27 May 2020 22:36:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W8B8=7J=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1je4f1-00053v-Ae
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 22:36:31 +0000
X-Inumbo-ID: 805a150a-a06a-11ea-a788-12813bfff9fa
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 805a150a-a06a-11ea-a788-12813bfff9fa;
 Wed, 27 May 2020 22:36:27 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04RMMVog156925;
 Wed, 27 May 2020 22:36:24 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=YDNAfhyBuiMW1rjZ6zjEBNBsEKPH2Y8kZuZUqJPOGys=;
 b=g29IADdyc59ROq4MstjWrG5Pzy+jt8c8wr4b02d07HgN+LVX3KQlM2MNBiOjeSkp6RIj
 bzR+SzdDNbbE/tn0nMOK+tpxkY1uxWreCjEBElEKRr7RSwxEqg81uwxeLUwaVWHS8v+1
 sGBMZSb1Ns1gk5XJq0wpx7eLGNkis9s5G57nmpDXhoMA8HKtrh8DgvgmJQTzmuCRAGXY
 rexLdvExoh6N1Q28vf07gyMeNIcEEXldTp00ln/0fonQUakWcnvg/uDhLuBBEJJh/uUb
 SkWNf+Lu9C+mECvAL6Fu6EzwcfplW7KKfVL6yXm0RAX7mFtq+EFPXonWMFzSaxSTR30g SA== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by aserp2120.oracle.com with ESMTP id 318xe1j1yn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 27 May 2020 22:36:24 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04RMHams113958;
 Wed, 27 May 2020 22:36:23 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3020.oracle.com with ESMTP id 317j5t6rw0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 27 May 2020 22:36:23 +0000
Received: from abhmp0019.oracle.com (abhmp0019.oracle.com [141.146.116.25])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04RMaN0J017597;
 Wed, 27 May 2020 22:36:23 GMT
Received: from [10.39.225.245] (/10.39.225.245)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 27 May 2020 15:36:23 -0700
Subject: Re: [PATCH 2/2] xenbus: Use dev_printk() when possible
To: Bjorn Helgaas <helgaas@kernel.org>, Juergen Gross <jgross@suse.com>
References: <20200527174326.254329-1-helgaas@kernel.org>
 <20200527174326.254329-3-helgaas@kernel.org>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <d028cc35-42e3-7452-0b26-c0bbc46018d2@oracle.com>
Date: Wed, 27 May 2020 18:36:21 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527174326.254329-3-helgaas@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9634
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 suspectscore=0
 mlxlogscore=999 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005270170
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9634
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0
 mlxlogscore=999
 adultscore=0 cotscore=-2147483648 mlxscore=0 bulkscore=0
 priorityscore=1501 phishscore=0 lowpriorityscore=0 malwarescore=0
 clxscore=1015 impostorscore=0 suspectscore=0 classifier=spam adjust=0
 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005270170
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bjorn Helgaas <bhelgaas@google.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 1:43 PM, Bjorn Helgaas wrote:
> From: Bjorn Helgaas <bhelgaas@google.com>
>
> Use dev_printk() when possible to include device and driver information in
> the conventional format.
>
> Add "#define dev_fmt" to preserve KBUILD_MODNAME in messages.
>
> No functional change intended.
>
> Signed-off-by: Bjorn Helgaas <bhelgaas@google.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Wed May 27 22:50:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 22:50:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je4sg-0006i4-Iu; Wed, 27 May 2020 22:50:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/NoV=7J=kernel.org=helgaas@srs-us1.protection.inumbo.net>)
 id 1je4sf-0006hz-IL
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 22:50:37 +0000
X-Inumbo-ID: 7aa93530-a06c-11ea-a789-12813bfff9fa
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7aa93530-a06c-11ea-a789-12813bfff9fa;
 Wed, 27 May 2020 22:50:36 +0000 (UTC)
Received: from localhost (mobile-166-175-190-200.mycingular.net
 [166.175.190.200])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.kernel.org (Postfix) with ESMTPSA id 9EEFC2071A;
 Wed, 27 May 2020 22:50:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
 s=default; t=1590619836;
 bh=AC/qBVIz2RY3tjOrzusi4AN43Yd3fTkCrWWr78GG/34=;
 h=Date:From:To:Cc:Subject:In-Reply-To:From;
 b=VGr0mm/uYPh+W6nREpEC/OqhxQ54VwT0nXe0oLBmmhPRb48QhwaTYd3VgVQO3kJ59
 3UJbET/o9qWoUkvbv97YWwpxa2JX5FSr5+QalNF0uLbxOR60YCdltuCmj351kl/Iba
 V7Knj2in8qphInGIehOaq1LhmIuM1JqviKvUTM14=
Date: Wed, 27 May 2020 17:50:34 -0500
From: Bjorn Helgaas <helgaas@kernel.org>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Subject: Re: [PATCH 1/2] xen-pciback: Use dev_printk() when possible
Message-ID: <20200527225034.GA270348@bjorn-Precision-5520>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <612fee00-4e7c-9b90-511d-4efb7676cbed@oracle.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, May 27, 2020 at 03:34:26PM -0700, Boris Ostrovsky wrote:
> On 5/27/20 1:43 PM, Bjorn Helgaas wrote:
> > @@ -155,8 +157,8 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
> >  	u32 value = 0, tmp_val;
> >  
> >  	if (unlikely(verbose_request))
> > -		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x\n",
> > -		       pci_name(dev), size, offset);
> > +		dev_printk(KERN_DEBUG, &dev->dev, "read %d bytes at 0x%x\n",
> > +			   size, offset);
> 
> 
> Maybe then dev_dbg() ?

printk(KERN_DEBUG) always produces output, so I used
dev_printk(KERN_DEBUG) to retain that behavior.

dev_dbg() does not always produce output, since it depends on DEBUG or
CONFIG_DYNAMIC_DEBUG and the dynamic debug settings.

If dev_dbg() seems like the right thing, I would probably add a
separate patch on top to convert dev_printk(KERN_DEBUG) to dev_dbg().

Thanks for taking a look!  

Bjorn


From xen-devel-bounces@lists.xenproject.org Wed May 27 23:19:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 27 May 2020 23:19:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je5K6-00008y-W4; Wed, 27 May 2020 23:18:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W8B8=7J=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1je5K5-00008t-BP
 for xen-devel@lists.xenproject.org; Wed, 27 May 2020 23:18:57 +0000
X-Inumbo-ID: 6fb827fe-a070-11ea-81bc-bc764e2007e4
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fb827fe-a070-11ea-81bc-bc764e2007e4;
 Wed, 27 May 2020 23:18:56 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04RN8bko029323;
 Wed, 27 May 2020 23:18:53 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to : cc :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=2SSeYmYOhKfisHaQ+F7Ocp0A5qzXm5KSC4ApG8uXPBQ=;
 b=clVTvqtQUqyP8mmDuMatWgW0dumoYwZ4zEWqr2yDnz4k8omXGpQM/ab1UluhyUVXZieS
 HwvY0TSvFzYVyzSq3dhdi5wTV0jItfjF+m3Zne9ngji4xgDnULh3jxgVHf5Erzx7LqlG
 87D4CTMvTiqc7ORlm21WqdAhbuBV0wIb7qKSjRIYUx+UwKzt8uqMW3GooW0jNHQh/v8g
 21kfwDG2wUaVfK96wv+hgvut3wE4HxipxJ9YwAOGK3zoP/XvxbFSBToTOMu8gHqyA/OS
 Gt9Y4zMEy27PeiIyyz5N+aFUPO6VHVAbusOZK/3nOgAseXFs2PzFHTU/mhg07O0jkMdC fQ== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2120.oracle.com with ESMTP id 318xe1j5q5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Wed, 27 May 2020 23:18:53 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04RN3ea3175248;
 Wed, 27 May 2020 23:18:52 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3020.oracle.com with ESMTP id 317dkvb2pf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 27 May 2020 23:18:52 +0000
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04RNIoSP005074;
 Wed, 27 May 2020 23:18:50 GMT
Received: from [10.39.225.245] (/10.39.225.245)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Wed, 27 May 2020 16:18:50 -0700
Subject: Re: [PATCH 1/2] xen-pciback: Use dev_printk() when possible
To: Bjorn Helgaas <helgaas@kernel.org>
References: <20200527225034.GA270348@bjorn-Precision-5520>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <6e6b78a2-6354-5c5b-b0d8-711be24000b8@oracle.com>
Date: Wed, 27 May 2020 19:18:48 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527225034.GA270348@bjorn-Precision-5520>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9634
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0
 malwarescore=0 bulkscore=0
 spamscore=0 suspectscore=0 mlxscore=0 adultscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005270172
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9634
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0
 mlxlogscore=999
 adultscore=0 cotscore=-2147483648 mlxscore=0 bulkscore=0
 priorityscore=1501 phishscore=0 lowpriorityscore=0 malwarescore=0
 clxscore=1015 impostorscore=0 suspectscore=0 classifier=spam adjust=0
 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005270172
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>,
 Bjorn Helgaas <bhelgaas@google.com>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/27/20 6:50 PM, Bjorn Helgaas wrote:
> On Wed, May 27, 2020 at 03:34:26PM -0700, Boris Ostrovsky wrote:
>> On 5/27/20 1:43 PM, Bjorn Helgaas wrote:
>>> @@ -155,8 +157,8 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
>>>  	u32 value = 0, tmp_val;
>>>  
>>>  	if (unlikely(verbose_request))
>>> -		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x\n",
>>> -		       pci_name(dev), size, offset);
>>> +		dev_printk(KERN_DEBUG, &dev->dev, "read %d bytes at 0x%x\n",
>>> +			   size, offset);
>>
>> Maybe then dev_dbg() ?
> printk(KERN_DEBUG) always produces output, so I used
> dev_printk(KERN_DEBUG) to retain that behavior.
>
> dev_dbg() does not always produce output, since it depends on DEBUG or
> CONFIG_DYNAMIC_DEBUG and the dynamic debug settings.


Oh, I didn't realize it needs either of these.


>
> If dev_dbg() seems like the right thing, I would probably add a
> separate patch on top to convert dev_printk(KERN_DEBUG) to dev_dbg().


I think anyone who wants to see those messages ought to have at least
CONFIG_DYNAMIC_DEBUG, especially since they are under verbose_request
(which also should go away, IMO). In fact, I wonder whether this code
predates dynamic debugging; it's been there for almost 10 years.
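
The gating Bjorn describes can be sketched in plain userspace C. This is a hypothetical model, not the kernel's actual dev_dbg()/dev_printk() implementation: without DEBUG defined (and with no dynamic debug), the dbg variant compiles to nothing, while the KERN_DEBUG printk variant always emits output.

```c
#include <stdio.h>

/* Always produces output, like dev_printk(KERN_DEBUG, ...).
 * Returns 1 when output was emitted. */
static int model_dev_printk(const char *msg)
{
    return printf("debug: %s\n", msg) > 0;
}

/* Compiled out unless DEBUG is defined, modeling dev_dbg() on a
 * kernel built without DEBUG or CONFIG_DYNAMIC_DEBUG. */
#ifdef DEBUG
#define model_dev_dbg(msg) model_dev_printk(msg)
#else
#define model_dev_dbg(msg) 0 /* expands to no output at all */
#endif
```

With DEBUG undefined, model_dev_dbg() never prints; that is the behavior change a blind dev_printk(KERN_DEBUG) to dev_dbg() conversion would introduce.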


I'll leave it to you whether you want to add another patch.


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Thu May 28 00:07:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 00:07:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je64s-0004oo-5m; Thu, 28 May 2020 00:07:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1je64q-0004oj-SI
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 00:07:16 +0000
X-Inumbo-ID: 2ca7b4b4-a077-11ea-a78a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ca7b4b4-a077-11ea-a78a-12813bfff9fa;
 Thu, 28 May 2020 00:07:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=q7OCPtSS/0TRJKDEDfYpvWKpJ672iVqDLYFg7m/y+6I=; b=ioMvCTrch/HgH+4ZlA+PZB8w4
 f16TUshKjPAcMZLP0qg/f/enTLUNU6UbAuegZ5ZMzQ2P2LEFzhxCq+m031oNETudeussKGmrkwYW0
 us7/hJ2oILmFpab7lkTiWDSzijyad0fBEXOTi94jFyD+/yAanUl5J4EWKXn3l9O7V1Q+s=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1je64j-0001bZ-HM; Thu, 28 May 2020 00:07:09 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1je64j-0001rb-8E; Thu, 28 May 2020 00:07:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1je64j-00042V-7g; Thu, 28 May 2020 00:07:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150416-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150416: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=6b75c7a95420acbb9c118624ff0a5e973287c1e4
X-Osstest-Versions-That: xen=9f3e9139fa6c3d620eb08dff927518fc88200b8d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 00:07:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150416 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150416/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  6b75c7a95420acbb9c118624ff0a5e973287c1e4
baseline version:
 xen                  9f3e9139fa6c3d620eb08dff927518fc88200b8d

Last test of basis   150407  2020-05-27 16:00:24 Z    0 days
Testing same since   150416  2020-05-27 21:01:18 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <julien@xen.org>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9f3e9139fa..6b75c7a954  6b75c7a95420acbb9c118624ff0a5e973287c1e4 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 28 03:00:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 03:00:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je8m9-0003qG-VX; Thu, 28 May 2020 03:00:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=reLY=7K=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1je8m8-0003qB-Hs
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 03:00:08 +0000
X-Inumbo-ID: 560e2500-a08f-11ea-9dbe-bc764e2007e4
Received: from mail-lj1-x243.google.com (unknown [2a00:1450:4864:20::243])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 560e2500-a08f-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 03:00:07 +0000 (UTC)
Received: by mail-lj1-x243.google.com with SMTP id q2so31462410ljm.10
 for <xen-devel@lists.xenproject.org>; Wed, 27 May 2020 20:00:07 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=39PNVTFoufuahtg+OEmF+7P6d2E+Q8LB1HHd8dgo6c4=;
 b=dIsS5pv9ldHUSyqQhe3PuZSdaE0qiOZWJMyINANYtoqGNFnmQltwY3csbDkrHh+aP9
 +dt4ka1TYkezdbz+D96x0oeIDnL/d9z03/9uks4+EtGoLIE0/r0Y9/qpyT0ObBxlSg1y
 q84PZgTYaA00SGpWKfg2Fy6tF7++KCYUxLWY67kQUDpVfPTVpQg0tNCwCYa7KcejoAX3
 ySQPewQakA0kO2JqS/NXueVhMOWkm9hGD3EsKRXQvMEXExqToO6L5/O0yxF0hzVSxLgG
 9BgBXtV1CCvdSNvuS5s7FUiNcKTjqupyWbRWLvhVvWKh9t08RvUbKtI7ukitTz3knkeR
 6yUA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=39PNVTFoufuahtg+OEmF+7P6d2E+Q8LB1HHd8dgo6c4=;
 b=pBr+3R4BX3uHYDkInKfYZ1xtsZEpX8ni6XN2NsKL1d3DzDunnMXmJX9KZabzG0YChA
 o7BKh58K0b0YXZ4XPG5YyBZHmGSncHvhIC/aMBSFyq+fqStmmaJb46MoFZXGhP5gzKZ4
 vXOvshJBcrF+vkqL3FQp14LLUxObkuho09WcSlSG/+c00FkUgMIh/K7uTclc7eFB8mHA
 ZlvtZWIjvBeX8ADxx4rJhoZIqoYfqDy30LnnikPSgzwmVCf5Ocb3wTQ75X/uW+weEmyE
 PA8FcdvngchHUkNGgnmZ6eKHJ+X7MYPrtW8e92srHXoae9CbHJOO+jdI4MbcpJKhYgWK
 G/Fw==
X-Gm-Message-State: AOAM530fECdkGAVi5JD0QrL17Uj/qkfMZ8wtbH7UaNeY6PNWajB3Tc3k
 UH6S25br6JB7QVmV2lw4lkEvj2v3GrDrtXg3AaKUAA==
X-Google-Smtp-Source: ABdhPJzhtr+pUC1bNj4/UXGUaxlVw+mPgImwreR0E/xW9PJZu/vgSIHbUAEZlEuc1iyCE2QZCpi8roDGRUscuipbofY=
X-Received: by 2002:a2e:7f04:: with SMTP id a4mr381146ljd.114.1590634806613;
 Wed, 27 May 2020 20:00:06 -0700 (PDT)
MIME-Version: 1.0
References: <20200525024955.225415-1-jandryuk@gmail.com>
 <CAKf6xpvRxeUdOOogacDvncC3yogcTN4gALVWO+V8ZJ8x__RafA@mail.gmail.com>
In-Reply-To: <CAKf6xpvRxeUdOOogacDvncC3yogcTN4gALVWO+V8ZJ8x__RafA@mail.gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 27 May 2020 22:59:55 -0400
Message-ID: <CAKf6xps9j=bszbw5SAYeZdrGS9jP-3Hu9RCGT45ifNR6qdAX3Q@mail.gmail.com>
Subject: Re: [PATCH 0/8] Coverity fixes for vchan-socket-proxy
To: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski=2DG=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 25, 2020 at 6:36 PM Jason Andryuk <jandryuk@gmail.com> wrote:
>
> On Sun, May 24, 2020 at 10:50 PM Jason Andryuk <jandryuk@gmail.com> wrote:
> >
> > This series addresses some Coverity reports.  To handle closing FDs, a
> > state struct is introduced to track FDs closed in both main() and
> > data_loop().
>
> I've realized the changes here are insufficient to handle the FD
> leaks.  That is, the accept()-ed FDs need to be closed inside the for
> loop so they aren't leaked with each iteration.  I'll re-work for a
> v2.

So it turns out this series doesn't leak FDs in the for loop. The FDs
are necessarily closed in data_loop() when read() returns 0, and the
only returns from data_loop() happen after the FDs have been closed.
data_loop() and some of the functions it calls will call exit(1) on
error, but that doesn't leak FDs either.

Please review this series.  Sorry for the confusion.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 28 04:08:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 04:08:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1je9pg-0000Yj-S1; Thu, 28 May 2020 04:07:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3kFH=7K=perches.com=joe@srs-us1.protection.inumbo.net>)
 id 1je9pe-0000Ye-PE
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 04:07:50 +0000
X-Inumbo-ID: c89083b2-a098-11ea-a796-12813bfff9fa
Received: from smtprelay.hostedemail.com (unknown [216.40.44.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c89083b2-a098-11ea-a796-12813bfff9fa;
 Thu, 28 May 2020 04:07:45 +0000 (UTC)
Received: from filter.hostedemail.com (clb03-v110.bra.tucows.net
 [216.40.38.60])
 by smtprelay05.hostedemail.com (Postfix) with ESMTP id 8167D18029597;
 Thu, 28 May 2020 04:07:44 +0000 (UTC)
X-Session-Marker: 6A6F6540706572636865732E636F6D
X-Spam-Summary: 2, 0, 0, , d41d8cd98f00b204, joe@perches.com, ,
 RULES_HIT:2:41:355:379:599:968:973:988:989:1260:1277:1311:1313:1314:1345:1359:1437:1515:1516:1518:1535:1593:1594:1730:1747:1777:1792:2393:2559:2562:2828:3138:3139:3140:3141:3142:3167:3354:3622:3865:3867:3868:3872:4050:4119:4321:4605:5007:7875:8603:10004:10848:11026:11232:11233:11658:11914:12043:12296:12297:12438:12740:12760:12895:13439:14659:21080:21451:21627:21990:30034:30054:30091,
 0, RBL:none, CacheIP:none, Bayesian:0.5, 0.5, 0.5, Netcheck:none,
 DomainCache:0, MSF:not bulk, SPF:, MSBL:0, DNSBL:none, Custom_rules:0:0:0,
 LFtime:1, LUA_SUMMARY:none
X-HE-Tag: hill77_1b1293126d57
X-Filterd-Recvd-Size: 8369
Received: from XPS-9350.home (unknown [47.151.136.130])
 (Authenticated sender: joe@perches.com)
 by omf08.hostedemail.com (Postfix) with ESMTPA;
 Thu, 28 May 2020 04:07:43 +0000 (UTC)
Message-ID: <d46604df8e64fd91c6fea5073c6cb5eb20184baf.camel@perches.com>
Subject: Re: [PATCH 1/2] xen-pciback: Use dev_printk() when possible
From: Joe Perches <joe@perches.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Bjorn Helgaas
 <helgaas@kernel.org>, Juergen Gross <jgross@suse.com>
Date: Wed, 27 May 2020 21:07:41 -0700
In-Reply-To: <612fee00-4e7c-9b90-511d-4efb7676cbed@oracle.com>
References: <20200527174326.254329-1-helgaas@kernel.org>
 <20200527174326.254329-2-helgaas@kernel.org>
 <612fee00-4e7c-9b90-511d-4efb7676cbed@oracle.com>
Content-Type: text/plain; charset="ISO-8859-1"
User-Agent: Evolution 3.36.2-0ubuntu1 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Bjorn Helgaas <bhelgaas@google.com>, xen-devel@lists.xenproject.org,
 Stefano Stabellini <sstabellini@kernel.org>, linux-kernel@vger.kernel.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Wed, 2020-05-27 at 15:34 -0700, Boris Ostrovsky wrote:
> On 5/27/20 1:43 PM, Bjorn Helgaas wrote:
> > @@ -155,8 +157,8 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
> >  	u32 value = 0, tmp_val;
> >  
> >  	if (unlikely(verbose_request))
> > -		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x\n",
> > -		       pci_name(dev), size, offset);
> > +		dev_printk(KERN_DEBUG, &dev->dev, "read %d bytes at 0x%x\n",
> > +			   size, offset);
> 
> Maybe then dev_dbg() ?

It likely would be better to remove verbose_request altogether
and just use dynamic debugging and dev_dbg for all the output.

$ git grep -w -A3 verbose_request
drivers/pci/xen-pcifront.c:static int verbose_request;
drivers/pci/xen-pcifront.c:module_param(verbose_request, int, 0644);
drivers/pci/xen-pcifront.c-
drivers/pci/xen-pcifront.c-static int errno_to_pcibios_err(int errno)
drivers/pci/xen-pcifront.c-{
--
drivers/pci/xen-pcifront.c:	if (verbose_request)
drivers/pci/xen-pcifront.c-		dev_info(&pdev->xdev->dev,
drivers/pci/xen-pcifront.c-			 "read dev=%04x:%02x:%02x.%d - offset %x size %d\n",
drivers/pci/xen-pcifront.c-			 pci_domain_nr(bus), bus->number, PCI_SLOT(devfn),
--
drivers/pci/xen-pcifront.c:		if (verbose_request)
drivers/pci/xen-pcifront.c-			dev_info(&pdev->xdev->dev, "read got back value %x\n",
drivers/pci/xen-pcifront.c-				 op.value);
drivers/pci/xen-pcifront.c-
--
drivers/pci/xen-pcifront.c:	if (verbose_request)
drivers/pci/xen-pcifront.c-		dev_info(&pdev->xdev->dev,
drivers/pci/xen-pcifront.c-			 "write dev=%04x:%02x:%02x.%d - "
drivers/pci/xen-pcifront.c-			 "offset %x size %d val %x\n",
--
drivers/xen/xen-pciback/conf_space.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space.c-		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x\n",
drivers/xen/xen-pciback/conf_space.c-		       pci_name(dev), size, offset);
drivers/xen/xen-pciback/conf_space.c-
--
drivers/xen/xen-pciback/conf_space.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space.c-		printk(KERN_DEBUG DRV_NAME ": %s: read %d bytes at 0x%x = %x\n",
drivers/xen/xen-pciback/conf_space.c-		       pci_name(dev), size, offset, value);
drivers/xen/xen-pciback/conf_space.c-
--
drivers/xen/xen-pciback/conf_space.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space.c-		printk(KERN_DEBUG
drivers/xen/xen-pciback/conf_space.c-		       DRV_NAME ": %s: write request %d bytes at 0x%x = %x\n",
drivers/xen/xen-pciback/conf_space.c-		       pci_name(dev), size, offset, value);
--
drivers/xen/xen-pciback/conf_space_header.c:		if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space_header.c-			printk(KERN_DEBUG DRV_NAME ": %s: enable\n",
drivers/xen/xen-pciback/conf_space_header.c-			       pci_name(dev));
drivers/xen/xen-pciback/conf_space_header.c-		err = pci_enable_device(dev);
--
drivers/xen/xen-pciback/conf_space_header.c:		if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space_header.c-			printk(KERN_DEBUG DRV_NAME ": %s: disable\n",
drivers/xen/xen-pciback/conf_space_header.c-			       pci_name(dev));
drivers/xen/xen-pciback/conf_space_header.c-		pci_disable_device(dev);
--
drivers/xen/xen-pciback/conf_space_header.c:		if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space_header.c-			printk(KERN_DEBUG DRV_NAME ": %s: set bus master\n",
drivers/xen/xen-pciback/conf_space_header.c-			       pci_name(dev));
drivers/xen/xen-pciback/conf_space_header.c-		pci_set_master(dev);
--
drivers/xen/xen-pciback/conf_space_header.c:		if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space_header.c-			printk(KERN_DEBUG DRV_NAME ": %s: clear bus master\n",
drivers/xen/xen-pciback/conf_space_header.c-			       pci_name(dev));
drivers/xen/xen-pciback/conf_space_header.c-		pci_clear_master(dev);
--
drivers/xen/xen-pciback/conf_space_header.c:		if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space_header.c-			printk(KERN_DEBUG
drivers/xen/xen-pciback/conf_space_header.c-			       DRV_NAME ": %s: enable memory-write-invalidate\n",
drivers/xen/xen-pciback/conf_space_header.c-			       pci_name(dev));
--
drivers/xen/xen-pciback/conf_space_header.c:		if (unlikely(verbose_request))
drivers/xen/xen-pciback/conf_space_header.c-			printk(KERN_DEBUG
drivers/xen/xen-pciback/conf_space_header.c-			       DRV_NAME ": %s: disable memory-write-invalidate\n",
drivers/xen/xen-pciback/conf_space_header.c-			       pci_name(dev));
--
drivers/xen/xen-pciback/pciback.h:extern int verbose_request;
drivers/xen/xen-pciback/pciback.h-
drivers/xen/xen-pciback/pciback.h-void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev);
drivers/xen/xen-pciback/pciback.h-#endif
--
drivers/xen/xen-pciback/pciback_ops.c:int verbose_request;
drivers/xen/xen-pciback/pciback_ops.c:module_param(verbose_request, int, 0644);
drivers/xen/xen-pciback/pciback_ops.c-
drivers/xen/xen-pciback/pciback_ops.c-static irqreturn_t xen_pcibk_guest_interrupt(int irq, void *dev_id);
drivers/xen/xen-pciback/pciback_ops.c-
--
drivers/xen/xen-pciback/pciback_ops.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-		printk(KERN_DEBUG DRV_NAME ": %s: enable MSI\n", pci_name(dev));
drivers/xen/xen-pciback/pciback_ops.c-
drivers/xen/xen-pciback/pciback_ops.c-	if (dev->msi_enabled)
--
drivers/xen/xen-pciback/pciback_ops.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-		printk(KERN_DEBUG DRV_NAME ": %s: MSI: %d\n", pci_name(dev),
drivers/xen/xen-pciback/pciback_ops.c-			op->value);
drivers/xen/xen-pciback/pciback_ops.c-
--
drivers/xen/xen-pciback/pciback_ops.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-		printk(KERN_DEBUG DRV_NAME ": %s: disable MSI\n",
drivers/xen/xen-pciback/pciback_ops.c-		       pci_name(dev));
drivers/xen/xen-pciback/pciback_ops.c-
--
drivers/xen/xen-pciback/pciback_ops.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-		printk(KERN_DEBUG DRV_NAME ": %s: MSI: %d\n", pci_name(dev),
drivers/xen/xen-pciback/pciback_ops.c-			op->value);
drivers/xen/xen-pciback/pciback_ops.c-	return 0;
--
drivers/xen/xen-pciback/pciback_ops.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-		printk(KERN_DEBUG DRV_NAME ": %s: enable MSI-X\n",
drivers/xen/xen-pciback/pciback_ops.c-		       pci_name(dev));
drivers/xen/xen-pciback/pciback_ops.c-
--
drivers/xen/xen-pciback/pciback_ops.c:				if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-					printk(KERN_DEBUG DRV_NAME ": %s: " \
drivers/xen/xen-pciback/pciback_ops.c-						"MSI-X[%d]: %d\n",
drivers/xen/xen-pciback/pciback_ops.c-						pci_name(dev), i,
--
drivers/xen/xen-pciback/pciback_ops.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-		printk(KERN_DEBUG DRV_NAME ": %s: disable MSI-X\n",
drivers/xen/xen-pciback/pciback_ops.c-			pci_name(dev));
drivers/xen/xen-pciback/pciback_ops.c-
--
drivers/xen/xen-pciback/pciback_ops.c:	if (unlikely(verbose_request))
drivers/xen/xen-pciback/pciback_ops.c-		printk(KERN_DEBUG DRV_NAME ": %s: MSI-X: %d\n",
drivers/xen/xen-pciback/pciback_ops.c-		       pci_name(dev), op->value);
drivers/xen/xen-pciback/pciback_ops.c-	return 0;




From xen-devel-bounces@lists.xenproject.org Thu May 28 04:28:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 04:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeA8z-0002To-J6; Thu, 28 May 2020 04:27:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeA8y-0002Tj-9J
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 04:27:48 +0000
X-Inumbo-ID: 93c61e3c-a09b-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93c61e3c-a09b-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 04:27:45 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FoDuPcIaelc/D9n2tgCEnnCiUxuSuwieAjaWSb4FPUE=; b=qF0XFzSkVO8gw7sWaRyCfO5AQ
 wQndlgjPn8fVJZKTn0QdXSt5bFpAFCP68/suM2wmR6tXzR/9neeaRPrirvh0UHNwfE06hPzGg2YWZ
 1GkrgQwtAEl5pKFGFy1qwCX3FR4+3DBmDUqMJgYqPw2Zos0/N0cANwvjNAWOuZvGrA/wE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeA8t-0000Q6-Lf; Thu, 28 May 2020 04:27:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeA8t-0007yf-AI; Thu, 28 May 2020 04:27:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeA8t-0004E6-9U; Thu, 28 May 2020 04:27:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150406-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150406: regressions - trouble: fail/pass/starved
X-Osstest-Failures: qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=06539ebc76b8625587aa78d646a9d8d5fddf84f3
X-Osstest-Versions-That: qemuu=ddc760832fa8cf5e93b9d9e6e854a5114ac63510
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 04:27:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150406 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150406/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-xl-credit2   7 xen-boot                 fail REGR. vs. 150391

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150391
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150391
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150391
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150391
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150391
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150391
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                06539ebc76b8625587aa78d646a9d8d5fddf84f3
baseline version:
 qemuu                ddc760832fa8cf5e93b9d9e6e854a5114ac63510

Last test of basis   150391  2020-05-27 01:37:34 Z    1 days
Testing same since   150406  2020-05-27 15:36:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 06539ebc76b8625587aa78d646a9d8d5fddf84f3
Merge: ddc760832f 97d8974620
Author: Peter Maydell <peter.maydell@linaro.org>
Date:   Tue May 26 20:25:06 2020 +0100

    Merge remote-tracking branch 'remotes/philmd-gitlab/tags/mips-hw-next-20200526' into staging
    
    MIPS hardware updates
    
    - MAINTAINERS updated to welcome Huacai Chen and Jiaxun Yang,
      and update Aleksandar Rikalo's email address,
    - Trivial improvements in the Bonito64 North Bridge and the
      Fuloong 2e machine,
    - MIPS machine names unified without the 'mips_' prefix.
    
    CI: https://travis-ci.org/github/philmd/qemu/builds/691247975
    
    # gpg: Signature made Tue 26 May 2020 14:32:08 BST
    # gpg:                using RSA key FAABE75E12917221DCFD6BB2E3E32C2CDEADC0DE
    # gpg: Good signature from "Philippe Mathieu-Daudé (F4BUG) <f4bug@amsat.org>" [full]
    # Primary key fingerprint: FAAB E75E 1291 7221 DCFD  6BB2 E3E3 2C2C DEAD C0DE
    
    * remotes/philmd-gitlab/tags/mips-hw-next-20200526:
      MAINTAINERS: Change Aleksandar Rikalo's email address
      hw/mips/mips_int: De-duplicate KVM interrupt delivery
      hw/mips/malta: Add some logging for bad register offset cases
      hw/mips: Rename malta/mipssim/r4k/jazz files
      hw/mips/fuloong2e: Fix typo in Fuloong machine name
      hw/mips/fuloong2e: Move code and update a comment
      hw/pci-host/bonito: Set the Config register reset value with FIELD_DP32
      hw/pci-host/bonito: Better describe the I/O CS regions
      hw/pci-host/bonito: Map the different PCI ranges more detailed
      hw/pci-host/bonito: Map all the Bonito64 I/O range
      hw/pci-host/bonito: Map peripheral using physical address
      hw/pci-host/bonito: Fix DPRINTF() format strings
      hw/pci-host: Use CONFIG_PCI_BONITO to select the Bonito North Bridge
      MAINTAINERS: Add Huacai Chen as fuloong2e co-maintainer
    
    Signed-off-by: Peter Maydell <peter.maydell@linaro.org>

commit 97d8974620053db5754af808583de70380f73a10
Author: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Date:   Mon May 18 22:09:16 2020 +0200

    MAINTAINERS: Change Aleksandar Rikalo's email address
    
    Aleksandar Rikalo wants to use a different email address from
    now on.
    
    Reviewed-by: Aleksandar Rikalo <aleksandar.rikalo@syrmia.com>
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200518200920.17344-18-aleksandar.qemu.devel@gmail.com>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 56b92eeeac8074501858e15b7658ec6099456f04
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Wed Apr 29 10:21:05 2020 +0200

    hw/mips/mips_int: De-duplicate KVM interrupt delivery
    
    Refactor duplicated code in a single place.
    
    Reviewed-by: Thomas Huth <thuth@redhat.com>
    Message-Id: <20200429082916.10669-2-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit c707f06fb1d9b09e5d442e72f6f3dcd021671a90
Author: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Date:   Mon May 18 22:09:19 2020 +0200

    hw/mips/malta: Add some logging for bad register offset cases
    
    Log the cases where a guest attempts a read or write using a
    bad register offset.
    
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Tested-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200518200920.17344-21-aleksandar.qemu.devel@gmail.com>
    [PMD: Replaced TARGET_FMT_lx by HWADDR_PRIX]
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 5298722edad2d40baac9c2326c6d492ad2b0211a
Author: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
Date:   Mon May 18 22:09:20 2020 +0200

    hw/mips: Rename malta/mipssim/r4k/jazz files
    
    Machine file names should not have the "mips_" prefix.
    
    Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Signed-off-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200518200920.17344-22-aleksandar.qemu.devel@gmail.com>
    [PMD: Fixed Fuloong line conflict due to rebase]
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit c3a09ff68ddffd1efd57612706484aa386826518
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun Apr 26 12:16:37 2020 +0200

    hw/mips/fuloong2e: Fix typo in Fuloong machine name
    
    We have always misspelled the Fuloong machine name. Fix it,
    and add a machine alias for the previous name for backward
    compatibility.
    
    Suggested-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200526104726.11273-11-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 3e5fe8dd1fcb6aa3acd3e5b719bd0b9e69ddee6b
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun Apr 26 12:19:16 2020 +0200

    hw/mips/fuloong2e: Move code and update a comment
    
    Move the RAM-related call closer to the RAM creation block,
    rename the ROM comment.
    
    Reviewed-by: Huacai Chen <chenhc@lemote.com>
    Message-id: <20200510210128.18343-4-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 1f8a6c8b3c3a9c6ea0b215a764a1c4f1d6141078
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun May 10 21:36:37 2020 +0200

    hw/pci-host/bonito: Set the Config register reset value with FIELD_DP32
    
    Describe some bits of the Config registers fields with the
    registerfields API. Use the FIELD_DP32() macro to set the
    BONGENCFG register bits at reset.
    
    Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200510210128.18343-12-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 7a296990af3ae3a63e5397c9c1a9f26981815c1c
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun May 10 21:46:43 2020 +0200

    hw/pci-host/bonito: Better describe the I/O CS regions
    
    Better describe the I/O CS regions, add the ROMCS region.
    
    Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200510210128.18343-11-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit a0b544c1c95df240629964636479fc113086d57b
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun May 10 21:42:11 2020 +0200

    hw/pci-host/bonito: Map the different PCI ranges more detailed
    
    Better describe the Bonito64 MEM HI/LO and I/O PCI ranges,
    add more PCI regions as unimplemented.
    
    Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200526104726.11273-7-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 25cca0a9b789244f89b24ed628b0dd6b0a169acc
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun May 10 19:26:36 2020 +0200

    hw/pci-host/bonito: Map all the Bonito64 I/O range
    
    To ease following guest accesses to the Bonito64 chipset,
    map its I/O range as UnimplementedDevice.
    We can now see accesses to unimplemented peripherals
    using the '-d unimp' command-line option.
    
    Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200510210128.18343-9-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 86313bdc85a3ebc4817ffb29edd1c108c50afbe6
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun May 10 19:25:18 2020 +0200

    hw/pci-host/bonito: Map peripheral using physical address
    
    Peripherals are mapped at physical addresses on busses.
    Only CPU/IOMMU can use virtual addresses.
    
    Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-id: <20200510210128.18343-8-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 3d14264cceb005e6b2131082bfa202c701e7ffb6
Author: Philippe Mathieu-Daudé <f4bug@amsat.org>
Date:   Sun May 10 21:34:11 2020 +0200

    hw/pci-host/bonito: Fix DPRINTF() format strings
    
    Reviewed-by: Huacai Chen <chenhc@lemote.com>
    Message-id: <20200510210128.18343-7-f4bug@amsat.org>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit abc82de356f636d70ac36e202c989a5e978dbae3
Author: Philippe Mathieu-Daudé <philmd@redhat.com>
Date:   Sun Feb 3 22:37:26 2019 +0100

    hw/pci-host: Use CONFIG_PCI_BONITO to select the Bonito North Bridge
    
    Ease the kconfig selection by introducing CONFIG_PCI_BONITO to select
    the Bonito North Bridge.
    
    Reviewed-by: Aleksandar Markovic <amarkovic@wavecomp.com>
    Signed-off-by: Philippe Mathieu-Daudé <philmd@redhat.com>
    Message-id: <20200510210128.18343-6-f4bug@amsat.org>
    Reviewed-by: Huacai Chen <chenhc@lemote.com>
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

commit 97eeef8aeeac1c5fb64f6785bbcef2e57f7c62ce
Author: Huacai Chen <chenhc@lemote.com>
Date:   Wed Apr 8 17:16:20 2020 +0800

    MAINTAINERS: Add Huacai Chen as fuloong2e co-maintainer
    
    I submitted the MIPS/fuloong2e support about ten years ago, and
    after that I became a MIPS kernel developer. Last year, Philippe
    Mathieu-Daudé asked me whether I could be a reviewer of
    MIPS/fuloong2e, and I promised that I would do some QEMU work in
    the following year (i.e., 2020 and later). I think that now (and
    in the future) I will have some spare time, so I can finally do
    some real work on QEMU/MIPS. If possible, I hope I can become a
    co-maintainer of MIPS/fuloong2e.
    
    Cc: Jiaxun Yang <jiaxun.yang@flygoat.com>
    Signed-off-by: Huacai Chen <chenhc@lemote.com>
    Message-Id: <1586337380-25217-3-git-send-email-chenhc@lemote.com>
    [PMD: Added Jiaxun Yang as reviewer]
    Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
    Reviewed-by: Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
    Message-Id: <20200510210128.18343-2-f4bug@amsat.org>


From xen-devel-bounces@lists.xenproject.org Thu May 28 04:49:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 04:49:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeAUA-0004GY-LJ; Thu, 28 May 2020 04:49:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeAU9-0004GT-Ip
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 04:49:41 +0000
X-Inumbo-ID: a03b7312-a09e-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a03b7312-a09e-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 04:49:34 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+Id1cnZ2mJkIE7iCCCFdw7A8EsQYJVsBC48K6wxE02E=; b=R0QNmDNRV4Kaoa4Smm2utI93L
 wQA0UDZLtztUP1d3yoUy26asXGwZU3yJVbZV1KOe+wpm0MFN4rNwZ29qZlzkxcW4N1ADxZjHEc0A1
 aWiQVsnBkqgDHx8mKYvtb67iwJ+XdI2f2wt24e0WUtLdpinEvbf0tEurMFBafSHmO8tcM=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeAU1-0000qX-Mk; Thu, 28 May 2020 04:49:33 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeAU1-0000Em-Bn; Thu, 28 May 2020 04:49:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeAU1-0001S9-B5; Thu, 28 May 2020 04:49:33 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150410-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150410: regressions - trouble:
 blocked/fail/pass/starved
X-Osstest-Failures: linux-5.4:build-amd64-libvirt:libvirt-build:fail:regression
 linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:guest-saverestore:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=e0d81ce760044efd3f26004aa32821c34968512a
X-Osstest-Versions-That: linux=1cdaf895c99d319c0007d0b62818cf85fc4b087f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 04:49:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150410 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150410/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 150294

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds     15 guest-saverestore            fail  like 150294
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150294
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                e0d81ce760044efd3f26004aa32821c34968512a
baseline version:
 linux                1cdaf895c99d319c0007d0b62818cf85fc4b087f

Last test of basis   150294  2020-05-21 07:55:33 Z    6 days
Testing same since   150410  2020-05-27 16:09:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Alain Volmat <alain.volmat@st.com>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Monakov <amonakov@ispras.ru>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexei Starovoitov <ast@kernel.org>
  Andreas Färber <afaerber@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artem Borisov <dedsa2002@gmail.com>
  Arun Easi <aeasi@marvell.com>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Aymeric Agon-Rambosson <aymeric.agon@yandex.com>
  Babu Moger <babu.moger@amd.com>
  Bin Lu <Bin.Lu@arm.com>
  Bob Peterson <rpeterso@redhat.com>
  Bodo Stroesser <bstroesser@ts.fujitsu.com>
  Brent Lu <brent.lu@intel.com>
  Bryant G. Ly <bryangly@gmail.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chris Chiu <chiu@endlessm.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Gmeiner <christian.gmeiner@gmail.com>
  Christian Lachner <gladiac@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Colin Xu <colin.xu@intel.com>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Drake <drake@endlessm.com>
  Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dragos Bogdan <dragos.bogdan@analog.com>
  Eric Biggers <ebiggers@google.com>
  Ewan D. Milne <emilne@redhat.com>
  Fabrice Gasnier <fabrice.gasnier@st.com>
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Gavin Shan <gshan@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gerald Schaefer <gerald.schaefer@de.ibm.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  James Hilliard <james.hilliard1@gmail.com>
  Jian-Hong Pan <jian-hong@endlessm.com>
  Jiri Kosina <jkosina@suse.cz>
  Joerg Roedel <jroedel@suse.de>
  John Hubbard <jhubbard@nvidia.com>
  John Johansen <john.johansen@canonical.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Juliet Kim <julietk@linux.vnet.ibm.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Keno Fischer <keno@juliacomputing.com>
  Kevin Hao <haokexin@gmail.com>
  Klaus Doth <kdlnx@doth.eu>
  Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
  Lianbo Jiang <lijiang@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Loïc Yhuel <loic.yhuel@gmail.com>
  Lucas Stach <l.stach@pengutronix.de>
  Marco Elver <elver@google.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mauro Carvalho Chehab <mchehab@kernel.org>
  Maxim Petrov <mmrmaximuzz@gmail.com>
  Mel Gorman <mgorman@techsingularity.net>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mike Pozulp <pozulp.kernel@gmail.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neil Horman <nhorman@tuxdriver.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nilesh Javali <njavali@marvell.com>
  Olivier Moysan <olivier.moysan@st.com>
  Oscar Carter <oscar.carter@gmx.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Cercueil <paul@crapouillou.net>
  PeiSen Hou <pshou@realtek.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Peter Xu <peterx@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Phil Auld <pauld@redhat.com>
  Philipp Rudo <prudo@linux.ibm.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qian Cai <cai@lca.pw>
  Qiushi Wu <wu000273@umn.edu>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Clark <richard.xnu.clark@gmail.com>
  Richard Weinberger <richard@nod.at>
  Rick Edgecombe <rick.p.edgecombe@intel.com>
  Roberto Sassu <roberto.sassu@huawei.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Russell Currey <ruscur@russell.cc>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Samuel Iglesias Gonsalvez <siglesias@igalia.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Scott Bahling <sbahling@suse.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shay Agroskin <shayagr@amazon.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yoshiyuki Kurauchi <ahochauwaaaaa@gmail.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3181 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 28 07:23:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 07:23:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeCs4-0000ee-5V; Thu, 28 May 2020 07:22:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=blJD=7K=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jeCs2-0000eZ-N2
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 07:22:30 +0000
X-Inumbo-ID: fd34a344-a0b3-11ea-9947-bc764e2007e4
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd34a344-a0b3-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 07:22:30 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id j10so11170576wrw.8
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 00:22:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=lG1goK+j76bC1xgzoN5RNMDZSAMph60A5AtJ8km1I0o=;
 b=Lr7SjPYKWCR+htcjVxHMvdv+mCpf1sadXNCcM9IFhKuaKLja69ldRd2SOn3LRNumSu
 HRSg1cxgh7W92gTWYjP0WAAl8iw4Ec1h0JEttKppapN7tTaDDZ+XiBubDhrDO/IK28+B
 XXu54Cn1gEhQVSVHDdAZC7dPzTbYzBfm2UAjNieTSvnYITzTYC04GpSVnNWLHzN7Ekhc
 e+KEvumqJUPxWUosz5wPdY8LTeJa/1ztP6iYJF/9LLc3M5o6pGYHm9uDMCKKqFgW10zY
 h7W+gODx9k3RU3FpuI/s0JYIgDZY7JsGmU7N0iAGJr6+ZXwLIakFqWDJo0GYa+DoJV6h
 6/uw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=lG1goK+j76bC1xgzoN5RNMDZSAMph60A5AtJ8km1I0o=;
 b=kY8xiP7uHhNYprC6ECD86vmRbtRoF3uQn4f34Z+fr5GsnlCghFjPsuPILPvwY9BHi+
 fvKV6nYx3b4E40wKlltlkiUw5l2TIrEzfLBUltmoEI5dLRatLPTdmgPFaUFvGNrJQh3F
 TvOMqByfgtfL9RjPvFTthsD7dCnAw8ouCCb57jNjCZflXU2V3YalWkSyftq3N6UDZX88
 5nRffCGEN+vQCSgiLHcjyIvtWf+16bSt/K97KTrot0uUDi2C+rhvuUUs8lkfV9rInDbJ
 lN4QZ6uzkGuXCtZjdWtKIik1OP2CTeDx6eIhWoSFlBy0k2HaEgPfTELC2tSlHsiz0X1C
 lAhg==
X-Gm-Message-State: AOAM533OMMQ4giD6HYFtVfrP1ElYRRO37y4iv6+zFtlFx/i2CvYefT6q
 cd8rFEJNpMRR4JwkGISKW8E=
X-Google-Smtp-Source: ABdhPJzZeepf4f9Lhk6hcKohahXMsHQAxqVj1ZuAYT/Mm4ajGdUstVvT/LRiUMw5ohCj4bzJ7lehzg==
X-Received: by 2002:adf:eb47:: with SMTP id u7mr2142080wrn.14.1590650549215;
 Thu, 28 May 2020 00:22:29 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id p1sm5125372wrx.44.2020.05.28.00.22.26
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 28 May 2020 00:22:28 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Roman Kagan'" <rvkagan@yandex-team.ru>,
	<qemu-devel@nongnu.org>
References: <20200527124511.986099-1-rvkagan@yandex-team.ru>
 <20200527124511.986099-3-rvkagan@yandex-team.ru>
In-Reply-To: <20200527124511.986099-3-rvkagan@yandex-team.ru>
Subject: RE: [PATCH v6 2/5] block: consolidate blocksize properties
 consistency checks
Date: Thu, 28 May 2020 08:22:25 +0100
Message-ID: <009a01d634c0$be65dc00$3b319400$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQIXeuT69nnYcgltbfOkY7MAd59asgG7AZYfqCyzbjA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Kevin Wolf' <kwolf@redhat.com>, 'Fam Zheng' <fam@euphon.net>,
 'Stefano Stabellini' <sstabellini@kernel.org>,
 "=?utf-8?Q?'Daniel_P._Berrang=C3=A9'?=" <berrange@redhat.com>,
 'Eduardo Habkost' <ehabkost@redhat.com>, qemu-block@nongnu.org,
 "'Michael S. Tsirkin'" <mst@redhat.com>, 'Eric Blake' <eblake@redhat.com>,
 'Laurent Vivier' <laurent@vivier.eu>, 'Max Reitz' <mreitz@redhat.com>,
 'John Snow' <jsnow@redhat.com>, 'Keith Busch' <kbusch@kernel.org>,
 'Gerd Hoffmann' <kraxel@redhat.com>, 'Stefan Hajnoczi' <stefanha@redhat.com>,
 'Paolo Bonzini' <pbonzini@redhat.com>,
 'Anthony Perard' <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?utf-8?Q?'Philippe_Mathieu-Daud=C3=A9'?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Roman Kagan <rvkagan@yandex-team.ru>
> Sent: 27 May 2020 13:45
> To: qemu-devel@nongnu.org
> Cc: Kevin Wolf <kwolf@redhat.com>; xen-devel@lists.xenproject.org; Gerd Hoffmann <kraxel@redhat.com>;
> Daniel P. Berrangé <berrange@redhat.com>; Paolo Bonzini <pbonzini@redhat.com>; Anthony Perard
> <anthony.perard@citrix.com>; Laurent Vivier <laurent@vivier.eu>; Max Reitz <mreitz@redhat.com>;
> qemu-block@nongnu.org; Philippe Mathieu-Daudé <philmd@redhat.com>; Eric Blake <eblake@redhat.com>; Paul
> Durrant <paul@xen.org>; Fam Zheng <fam@euphon.net>; John Snow <jsnow@redhat.com>; Michael S. Tsirkin
> <mst@redhat.com>; Eduardo Habkost <ehabkost@redhat.com>; Keith Busch <kbusch@kernel.org>; Stefano
> Stabellini <sstabellini@kernel.org>; Stefan Hajnoczi <stefanha@redhat.com>
> Subject: [PATCH v6 2/5] block: consolidate blocksize properties consistency checks
>
> Several block device properties related to blocksize configuration must
> be in a certain relationship with each other: the physical block size must
> be no smaller than the logical block size; min_io_size, opt_io_size, and
> discard_granularity must each be a multiple of the logical block size.
>
> To ensure these requirements are met, add corresponding consistency
> checks to blkconf_blocksizes, adjusting its signature to communicate
> possible errors to the caller.  Also remove the now-redundant consistency
> checks from the specific devices.
>
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>

Reviewed-by: Paul Durrant <paul@xen.org>




From xen-devel-bounces@lists.xenproject.org Thu May 28 08:22:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 08:22:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeDnz-0006Hc-8A; Thu, 28 May 2020 08:22:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l/Gm=7K=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jeDny-0006HX-Hd
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 08:22:22 +0000
X-Inumbo-ID: 59dcff30-a0bc-11ea-9947-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59dcff30-a0bc-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 08:22:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 76405AF3E;
 Thu, 28 May 2020 08:22:22 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2] docs: update xenstore-migration.md
Date: Thu, 28 May 2020 10:22:17 +0200
Message-Id: <20200528082217.26057-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Update connection record details: make flags common for sockets and
domains, and add pending incoming data.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added out-resp-len to connection record
---
 docs/designs/xenstore-migration.md | 71 +++++++++++++++++-------------
 1 file changed, 40 insertions(+), 31 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 34a2afd17e..5736bbad94 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -147,43 +147,59 @@ the domain being migrated.
 ```
     0       1       2       3       4       5       6       7    octet
 +-------+-------+-------+-------+-------+-------+-------+-------+
-| conn-id                       | conn-type     | conn-spec
+| conn-id                       | conn-type     | flags         |
++-------------------------------+---------------+---------------+
+| conn-spec
 ...
-+-------------------------------+-------------------------------+
-| data-len                      | data
-+-------------------------------+
++---------------+---------------+-------------------------------+
+| in-data-len   | out-resp-len  | out-data-len                  |
++---------------+---------------+-------------------------------+
+| data
 ...
 ```
 
 
-| Field       | Description                                     |
-|-------------|-------------------------------------------------|
-| `conn-id`   | A non-zero number used to identify this         |
-|             | connection in subsequent connection-specific    |
-|             | records                                         |
-|             |                                                 |
-| `conn-type` | 0x0000: shared ring                             |
-|             | 0x0001: socket                                  |
-|             | 0x0002 - 0xFFFF: reserved for future use        |
-|             |                                                 |
-| `conn-spec` | See below                                       |
-|             |                                                 |
-| `data-len`  | The length (in octets) of any pending data not  |
-|             | yet written to the connection                   |
-|             |                                                 |
-| `data`      | Pending data (may be empty)                     |
+| Field          | Description                                  |
+|----------------|----------------------------------------------|
+| `conn-id`      | A non-zero number used to identify this      |
+|                | connection in subsequent connection-specific |
+|                | records                                      |
+|                |                                              |
+| `flags`        | A bit-wise OR of:                            |
+|                | 0001: read-only                              |
+|                |                                              |
+| `conn-type`    | 0x0000: shared ring                          |
+|                | 0x0001: socket                               |
+|                | 0x0002 - 0xFFFF: reserved for future use     |
+|                |                                              |
+| `conn-spec`    | See below                                    |
+|                |                                              |
+| `in-data-len`  | The length (in octets) of any data read      |
+|                | from the connection not yet processed        |
+|                |                                              |
+| `out-resp-len` | The length (in octets) of a partial response |
+|                | not yet written to the connection (included  |
+|                | in the following `out-data-len`)             |
+|                |                                              |
+| `out-data-len` | The length (in octets) of any pending data   |
+|                | not yet written to the connection            |
+|                |                                              |
+| `data`         | Pending data, first read data, then written  |
+|                | data (any of both may be empty)              |
 
-The format of `conn-spec` is dependent upon `conn-type`.
+In case of live update the connection record for the connection via which
+the live update command was issued will contain the response for the live
+update command in the pending write data.
 
 \pagebreak
 
+The format of `conn-spec` is dependent upon `conn-type`.
+
 For `shared ring` connections it is as follows:
 
 
 ```
     0       1       2       3       4       5       6       7    octet
-                                                +-------+-------+
-                                                | flags         |
 +---------------+---------------+---------------+---------------+
 | domid         | tdomid        | evtchn                        |
 +-------------------------------+-------------------------------+
@@ -198,8 +214,6 @@ For `shared ring` connections it is as follows:
 |           | it has been subject to an SET_TARGET              |
 |           | operation [2] or DOMID_INVALID [3] otherwise      |
 |           |                                                   |
-| `flags`   | Must be zero                                      |
-|           |                                                   |
 | `evtchn`  | The port number of the interdomain channel used   |
 |           | by `domid` to communicate with xenstored          |
 |           |                                                   |
@@ -211,8 +225,6 @@ For `socket` connections it is as follows:
 
 
 ```
-                                                +-------+-------+
-                                                | flags         |
 +---------------+---------------+---------------+---------------+
 | socket-fd                     | pad                           |
 +-------------------------------+-------------------------------+
@@ -221,9 +233,6 @@ For `socket` connections it is as follows:
 
 | Field       | Description                                     |
 |-------------|-------------------------------------------------|
-| `flags`     | A bit-wise OR of:                               |
-|             | 0001: read-only                                 |
-|             |                                                 |
 | `socket-fd` | The file descriptor of the connected socket     |
 
 This type of connection is only relevant for live update, where the xenstored
@@ -398,4 +407,4 @@ explanation of node permissions.
 
 [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;hb=HEAD#l612
 
-[4] https://wiki.xen.org/wiki/XenBus
\ No newline at end of file
+[4] https://wiki.xen.org/wiki/XenBus
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 08:35:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 08:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeDzq-0007GC-B5; Thu, 28 May 2020 08:34:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lRPh=7K=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeDzo-0007G7-Uk
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 08:34:37 +0000
X-Inumbo-ID: 0f471aee-a0be-11ea-a7a8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0f471aee-a0be-11ea-a7a8-12813bfff9fa;
 Thu, 28 May 2020 08:34:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yKMi1nya+OTYYAGIWXH70UxGsSw5kc/9JlSI1JLKCwI=; b=WHm4AQ65EWnI4bLqMSRm8Tc6GC
 H/IZ2ssjHyH2fvZMEEfJU4lofCREC/FEa0udWRRtbCsxTLydy/jxV4E/Oz6CvrQluovs7B586Xufk
 HRxNertnCsNLXJQDyUyEQaGVIfl+ZcBMh9NG33Js2m+loSd2CHKMJQr342FfH2jsprc8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeDzl-0006NI-4H; Thu, 28 May 2020 08:34:33 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeDzk-0003ZK-Tu; Thu, 28 May 2020 08:34:33 +0000
Subject: Re: [PATCH v2] docs: update xenstore-migration.md
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20200528082217.26057-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <57f59299-0723-bfcc-33b3-a97562c87150@xen.org>
Date: Thu, 28 May 2020 09:34:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <20200528082217.26057-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Juergen,

On 28/05/2020 09:22, Juergen Gross wrote:
> -| Field       | Description                                     |
> -|-------------|-------------------------------------------------|
> -| `conn-id`   | A non-zero number used to identify this         |
> -|             | connection in subsequent connection-specific    |
> -|             | records                                         |
> -|             |                                                 |
> -| `conn-type` | 0x0000: shared ring                             |
> -|             | 0x0001: socket                                  |
> -|             | 0x0002 - 0xFFFF: reserved for future use        |
> -|             |                                                 |
> -| `conn-spec` | See below                                       |
> -|             |                                                 |
> -| `data-len`  | The length (in octets) of any pending data not  |
> -|             | yet written to the connection                   |
> -|             |                                                 |
> -| `data`      | Pending data (may be empty)                     |
> +| Field          | Description                                  |
> +|----------------|----------------------------------------------|
> +| `conn-id`      | A non-zero number used to identify this      |
> +|                | connection in subsequent connection-specific |
> +|                | records                                      |
> +|                |                                              |
> +| `flags`        | A bit-wise OR of:                            |
> +|                | 0001: read-only                              |

NIT: It is a bit odd that `flags` comes after `conn-type` in the header 
but is described before it here.

> +|                |                                              |
> +| `conn-type`    | 0x0000: shared ring                          |
> +|                | 0x0001: socket                               |
> +|                | 0x0002 - 0xFFFF: reserved for future use     |
> +|                |                                              |
> +| `conn-spec`    | See below                                    |
> +|                |                                              |
> +| `in-data-len`  | The length (in octets) of any data read      |
> +|                | from the connection not yet processed        |
> +|                |                                              |
> +| `out-resp-len` | The length (in octets) of a partial response |
> +|                | not yet written to the connection (included  |
> +|                | in the following `out-data-len`)             |

This seems to come from nowhere. It would be good to explain in the 
commit message why this is necessary.

> +|                |                                              |
> +| `out-data-len` | The length (in octets) of any pending data   |
> +|                | not yet written to the connection            |

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 28 08:35:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 08:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeE0D-0007Hd-Jo; Thu, 28 May 2020 08:35:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bSPA=7K=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1jeE0B-0007HV-US
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 08:35:00 +0000
X-Inumbo-ID: 1b8dcda2-a0be-11ea-81bc-bc764e2007e4
Received: from mga06.intel.com (unknown [134.134.136.31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b8dcda2-a0be-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 08:34:56 +0000 (UTC)
IronPort-SDR: aWw8Q8aKnw6bDmizL1lBqKGk19FYiZFcsaILNIUU6lsXQK0Skj2eJI8omewGcThGGbQjvCnAH+
 zJq7acgRCYvw==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from orsmga006.jf.intel.com ([10.7.209.51])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 28 May 2020 01:34:54 -0700
IronPort-SDR: frqcvrNYiWR8HD8L7VqU1Cdw6avucpDlkyv4XC5OUp4ozmAY97b9ybw6ywPWkYV4dHyNecIz8h
 DKKK+svZehJg==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,444,1583222400"; 
 d="scan'208,217";a="270780435"
Received: from fmsmsx106.amr.corp.intel.com ([10.18.124.204])
 by orsmga006.jf.intel.com with ESMTP; 28 May 2020 01:34:54 -0700
Received: from fmsmsx118.amr.corp.intel.com (10.18.116.18) by
 FMSMSX106.amr.corp.intel.com (10.18.124.204) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Thu, 28 May 2020 01:34:54 -0700
Received: from FMSEDG001.ED.cps.intel.com (10.1.192.133) by
 fmsmsx118.amr.corp.intel.com (10.18.116.18) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Thu, 28 May 2020 01:34:53 -0700
Received: from NAM10-DM6-obe.outbound.protection.outlook.com (104.47.58.108)
 by edgegateway.intel.com (192.55.55.68) with Microsoft SMTP Server (TLS) id
 14.3.439.0; Thu, 28 May 2020 01:34:53 -0700
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Xxv/bHVgKyBJ1XrILGg4sTL8cnflXalwzL8oQ0yAdSdYoFqchmgJSmatF3S7N16J+thQnaVyoI0MsL/GcnCkIDjAGaWLALvBAaDaoxby+DmVyKJsVOKRV/IAEXDk+/jbA0u0GsMh2xnsNSowQnCL2+0d7Bp6CQioyVkI9XVSIAGRiaihbCGLYuD1ksI/4MgdDLN/JA2i0UY7tz0FWFOGqCxk6HuBE8Ody4rSMhi6l1VGrV6hjqJqXzDs4bxQt9s+u1y3r6xcTHu/E8VRQH312LKKrlYEmPeIwxPqMeYNU7iCS0bk70EE5pmUgmzXloDj4zrKnhhlZ//yWSItecS2ZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vlvMZe4cNznB6HUqYl7YDrsHHVQpXtUCAz2HVkgR65M=;
 b=JPfStJmg/EGSeheHhOwKgKomaxp2suUSUVsRJRgiW+6oiWlxnoOiD0kdZSJhHnokQRcve5w+R+u1RKb5KgPGsdGk+7vfy+iAzIFug4b9MJalIzcX77KlaikQ/Ls6zVe5Oc3H4LflvCxLDSDXwRWcvBHgOFiqkBU3onX+9gSO/eg5C99/p1TOYj4QmeeJH2sQbHPZg6EXjZ91Qq3smXrBPoPZNmL5Agf77832NvFpw+O1E2N6cqFj+UfGwCEND0DyCvjhBTaotfX6+kFHvaJMcHGWtw2AJ4VcznxpWbgG28m0iMNfXipEtCDFI/03rquZVM/KvuklrQuBaY2vOQ+Uag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com; 
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vlvMZe4cNznB6HUqYl7YDrsHHVQpXtUCAz2HVkgR65M=;
 b=yY4w1jMB+jt+95KrYLsQC+xVyWQUGbMb5F6j7RMfmmg1xBBnGoo0/PNxKH8DaMpYq8rfv3fyzYG/qv8UivGsXESeweO6Mm7sOyR+ZDlmnCw7QuaU1Pl6C6xBz/Xg3h408Djr/70nGSPzjxBsgKjQWCm4YFQfuR4CzAG8DLPafek=
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR11MB1550.namprd11.prod.outlook.com (2603:10b6:301:b::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.23; Thu, 28 May
 2020 08:34:52 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41%2]) with mapi id 15.20.3021.030; Thu, 28 May 2020
 08:34:52 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: buy computer <buycomputer40@gmail.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: RE: iommu=no-igfx
Thread-Topic: iommu=no-igfx
Thread-Index: AQHWKBPp26X0x5YGAUu393CB+jzHkKiw3nQAgAdCTsCAADbFAIAE7K9Q
Date: Thu, 28 May 2020 08:34:52 +0000
Message-ID: <MWHPR11MB1645086209CC9A97D5805DA68C8E0@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <CANSXg2FGtiDT05sQUpSAshAsdP4wSjPgQbfw_+aKJuAzSwvJuQ@mail.gmail.com>
 <da7e41b5-88a1-13ab-d52b-0652c16608af@suse.com>
 <MWHPR11MB1645DC1C5782DDA28C9BB1CB8CB30@MWHPR11MB1645.namprd11.prod.outlook.com>
 <CANSXg2EiauZfTMsmqzcB2ShUCr67rB+mHBm4EVtWhMaUL8NL-w@mail.gmail.com>
In-Reply-To: <CANSXg2EiauZfTMsmqzcB2ShUCr67rB+mHBm4EVtWhMaUL8NL-w@mail.gmail.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-version: 11.2.0.6
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.55.52.219]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 1a8dd0a7-77d9-407f-1dcb-08d802e1fe0b
x-ms-traffictypediagnostic: MWHPR11MB1550:
x-microsoft-antispam-prvs: <MWHPR11MB1550887A08DB3E32C04E7D688C8E0@MWHPR11MB1550.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:4714;
x-forefront-prvs: 0417A3FFD2
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: Nr8Up8iPTizKTkAVNNsNPtM+b1Qd7dYG2OntrCXmGTpKj8eHOly5V6GLpEJibkIkG9Fw5xvcGBAMjpFe4HBJnaltr7UHpuQ3ZGwgdyTbWLvt5WSAIbVuEobl7uOg28Qx9+lmrClWLfS8yjIyPJ+irDRnuc81i7jYKq5C5Vy6/GcN6ienXclkiv4sFM0iEzrwOU8M3Vm4ktVVyVril7Og82R7cglbxxGl+Ycq1vhQMZkPFIBXTnUplKNSag8VX9Yudae2hh4QBvYeJgzvnBh01zQn2FGoOaX8gIm2CdRjQnODe1VYfQZG2vPRxaDQO15YmxbVXkJut/EhSyJeG1/8M0+lZgrqgXovSmsSrjzsb/msr440eiw4Mxr5oMtsR8Wax1vK2NzDBkRSxZo7oywk2Q==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:MWHPR11MB1645.namprd11.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(376002)(39860400002)(366004)(396003)(346002)(136003)(55016002)(110136005)(53546011)(5660300002)(478600001)(8676002)(8936002)(166002)(52536014)(26005)(71200400001)(66556008)(7696005)(66476007)(2906002)(66446008)(66946007)(6506007)(316002)(76116006)(64756008)(83380400001)(186003)(86362001)(3480700007)(33656002)(9686003);
 DIR:OUT; SFP:1102; 
x-ms-exchange-antispam-messagedata: GETKrmNLr/KoAt+jZMrxgp5ByORmzpKiJB9Yk0lxP6r7SqrKF2QsnoxLfj1Iaz3GOVWc08Kh8mbNmvIjAecW8b9z0f8/l4ptMMMV0ktfqBhaWeD2ZwVqEy9ZseaxoCiPJzfBPLHpTa0bPEY66pzEt87wGZa08L19x4YjUfECWZ9uxlbPJudkWiPA5rnE35/k1d6rW3m+g441bkezgiD6oK3KGQe6bAfD1zT2SfYpoOcWsJFXWtxu6iCWdnYAHE+KPg+KmqQ2q992tLHemD8VfS2ZAtMt2+6bD6Eg5TxtOoJdD7wcbeAgl7Zy7/nQEYDEQAy0kWaTXUwNXn7rmIZx5Li/OJkjy8ZcS+2M4/H/MTpq5T5s9aDUNMBFUjl5GjMxHExB+DN1JhKVOLxH+goYvMlfiwqZ4W3B1KLP2tBdKiqVoKvyXMQLubTEcrY8j+tHei5Iz/6rqEYJ9V3QeUnlF52iMUEnPfSg4KXs4L0phZx1Mh3djES1FLZvAdQDQYB5
x-ms-exchange-transport-forked: True
Content-Type: multipart/alternative;
 boundary="_000_MWHPR11MB1645086209CC9A97D5805DA68C8E0MWHPR11MB1645namp_"
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-Network-Message-Id: 1a8dd0a7-77d9-407f-1dcb-08d802e1fe0b
X-MS-Exchange-CrossTenant-originalarrivaltime: 28 May 2020 08:34:52.3784 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pA4iE4ueqPm1c/XubGDY5Xpjk6SJ7CicrWzgRjxw4Q7RRBBoPxQU/cdtPq7TMLcWJhVQvQfLgFXGq7LDHqwQ/g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1550
X-OriginatorOrg: intel.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

--_000_MWHPR11MB1645086209CC9A97D5805DA68C8E0MWHPR11MB1645namp_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

You may search dma_map* in drivers/gpu/drm/i915, and then print mapped addresses to see any match in VT-d reported faulting addresses. For example, __setup_page_dma might be one example that you want to check.

From: buy computer <buycomputer40@gmail.com>
Sent: Monday, May 25, 2020 1:18 PM
To: Tian, Kevin <kevin.tian@intel.com>; xen-devel@lists.xenproject.org
Subject: Re: iommu=no-igfx



On Mon, May 25, 2020 at 5:16 AM Tian, Kevin <kevin.tian@intel.com<mailto:kevin.tian@intel.com>> wrote:
> From: Jan Beulich <jbeulich@suse.com<mailto:jbeulich@suse.com>>
> Sent: Wednesday, May 20, 2020 7:11 PM
>
> On 11.05.2020 19:43, buy computer wrote:
> > I've been working on a Windows 10 HVM on a Debian 10 dom0. When I
> was first
> > trying to make the VM, I was getting IOMMU errors. I had a hard time
> > figuring out what to do about this, and finally discovered that putting
> > iommu=no-igfx in the grub stopped the errors.
> >
> > Unfortunately, without the graphics support the VM is understandably
> slow,
> > and can crash. I was also only now pointed to the page
> > <https://xenbits.xen.org/docs/unstable/misc/xen-command-
> line.html#iommu>
> > which says to report any errors that get fixed by using iommu=no-igfx.

what is the platform and linux kernel version in this context?

I'm not sure what you meant by 'platform', so I'll try to cover all the bases.
Kernel: 4.19.0-9-amd64 GNU/Linux
Debian 10.4
Lenovo E490 ThinkPad
Intel Integrated Graphics 620

>
> Thanks for the report. For context I'll quote the commit message of
> the commit introducing the option as well as the request to report
> issues fixed with it:
>
> "As we still cannot find a proper fix for this problem, this patch adds
>  iommu=igfx option to control whether Intel graphics IOMMU is enabled.
>  Running Xen with iommu=no-igfx is similar to running Linux with
>  intel_iommu=igfx_off, which disables IOMMU for Intel GPU. This can be
>  used by users to manually workaround the problem before a fix is
>  available for i915 driver."
>
> This was in 2015, referencing Linux >= 3.19. I have no idea whether
> the underlying driver issue(s) has/have been fixed. The addresses
> referenced are variable enough and all within RAM, so I'd conclude
> this is not a "missing RMRR" issue.

Variable enough but not within RAM. From E820:

(XEN)  0000000100000000 - 0000000871800000 (usable)

But the referenced addresses are way higher:

(XEN) [VT-D]DMAR:[DMA Read] Request device [0000:00:02.0] fault
addr 76c615d000, iommu reg = ffff82c000a0c000
(XEN) [VT-D]DMAR: reason 06 - PTE Read access is not set

>
> Cc-ing the VT-d maintainer for possible insights or thoughts.
>
> Jan

I don't have other thoughts except the weird addresses. It might be
good to add some trace in dom0's i915 driver to see whether those
addresses are intended or not.

Thanks for the insight! I'd love to help with the trace, but I don't know how to do that. If you could point me in the right direction, I'd try to give it a shot.

Thanks
Kevin

Thanks for the insight!

--_000_MWHPR11MB1645086209CC9A97D5805DA68C8E0MWHPR11MB1645namp_--


From xen-devel-bounces@lists.xenproject.org Thu May 28 08:35:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 08:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeE0k-0007Ml-1P; Thu, 28 May 2020 08:35:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bSPA=7K=intel.com=kevin.tian@srs-us1.protection.inumbo.net>)
 id 1jeE0j-0007Ma-4m
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 08:35:33 +0000
X-Inumbo-ID: 30418a7c-a0be-11ea-81bc-bc764e2007e4
Received: from mga05.intel.com (unknown [192.55.52.43])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30418a7c-a0be-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 08:35:30 +0000 (UTC)
IronPort-SDR: hPkfe3QjEuz2Frmg1AVTaV7TGzUNqbMdW7UQsRPqS04mxgC5348Q+SI2IHjG/uwn90vG6Le3et
 AgZz5Gh24k/g==
X-Amp-Result: SKIPPED(no attachment in message)
X-Amp-File-Uploaded: False
Received: from fmsmga003.fm.intel.com ([10.253.24.29])
 by fmsmga105.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 28 May 2020 01:35:30 -0700
IronPort-SDR: mUQCBgh8mNSZs578D1herbj5WSqxNp6iEeeZQOHzWJ/d4EXHdG6gNDW1W1uJbHc429BYOdWDEy
 k0qvxe0llDiA==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.73,444,1583222400"; d="scan'208";a="310847005"
Received: from orsmsx109.amr.corp.intel.com ([10.22.240.7])
 by FMSMGA003.fm.intel.com with ESMTP; 28 May 2020 01:35:29 -0700
Received: from orsmsx607.amr.corp.intel.com (10.22.229.20) by
 ORSMSX109.amr.corp.intel.com (10.22.240.7) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Thu, 28 May 2020 01:35:29 -0700
Received: from orsmsx603.amr.corp.intel.com (10.22.229.16) by
 ORSMSX607.amr.corp.intel.com (10.22.229.20) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.1.1713.5; Thu, 28 May 2020 01:35:28 -0700
Received: from ORSEDG001.ED.cps.intel.com (10.7.248.4) by
 orsmsx603.amr.corp.intel.com (10.22.229.16) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.1713.5
 via Frontend Transport; Thu, 28 May 2020 01:35:28 -0700
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (104.47.55.102)
 by edgegateway.intel.com (134.134.137.100) with Microsoft SMTP Server (TLS)
 id 14.3.439.0; Thu, 28 May 2020 01:35:28 -0700
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nuzAcwmI9mu4A/JWDeNruJotcWkU63DRfEiFDab6xc3ucrPW96sI5s6GsdjNctw0rvl41Nw/Oqy2JwfEoveEU2FVL9h/Au4GFsBMYhWbQZfe6VQva0tN2lS4UiVqxmvXZJ298p+oRngr57cuNrmp6Hf9khoaHqQ7nB6ut1EcOMJKgrv3lMaqrHz1ftYhGO9ubUo++re/PPCXSdguxeZMTXFU9BOCs8lOkwmf9ATiglsCfLBFLGo7YhcEA9UGy9pHwjJZ6o2qUlouguSH4uykZHAdeUGvkrjp/uXYheMJQWamHehzzdZxHQ2VMJ4dwGjKEQQA45B2cBJDW42w0P9aIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ACSaxWbDr43wovmngDVWebOVHFVDVMM5zB0rC2bhq94=;
 b=MUnYDSZerAnAdBe4wb5t3OLlYPkh0jWr5C+xktY0UFVLCS9ExC5+OkxxEk5Vr3TS+d6dc1bYCm5Q8zCDqxcByK2ZQgF6OSalsscZ/fGhww3P4LPvhKnQQ/KkUWyV3q18rg0w07bwBi2qy/OCYCNVtQnGPst9CjLu1sAzirSKsGS364MB11pxfDvFfVXcABHu6ftdEN+0W80pB5tk56awN2ZKR7wCeR/0OZImvzJVooBLBDggv98DBKnTimeGo5sMiDfmjKTIuxRV8e4ZaOsgBYPq8tuMplOl2sJTcSwCeSNuU/ybIIzex3sdlue7xwPqVyzBItPmw61CENQiXuaarQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=intel.com; dmarc=pass action=none header.from=intel.com;
 dkim=pass header.d=intel.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=intel.onmicrosoft.com; 
 s=selector2-intel-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ACSaxWbDr43wovmngDVWebOVHFVDVMM5zB0rC2bhq94=;
 b=PfhePK5na11wnQyUh6hfVepvXQMLW6TiOApNK/WXVqvhCvJ1Zi1pcMKi2/4Z+1YrWguUCqesXnmTv9s46hsvEoJ0tRt3kM4Dp3C5OP3LGz5GFsdOOerCCX5v4OozQ2rndU5MUrI4mo1Z0T1BXZnYGog6biY4NNt75Cdeh2Ol/xo=
Received: from MWHPR11MB1645.namprd11.prod.outlook.com (2603:10b6:301:b::12)
 by MWHPR11MB1550.namprd11.prod.outlook.com (2603:10b6:301:b::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.23; Thu, 28 May
 2020 08:35:27 +0000
Received: from MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41]) by MWHPR11MB1645.namprd11.prod.outlook.com
 ([fe80::b441:f6bd:703b:ba41%2]) with mapi id 15.20.3021.030; Thu, 28 May 2020
 08:35:27 +0000
From: "Tian, Kevin" <kevin.tian@intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
 <xen-devel@lists.xenproject.org>
Subject: RE: [PATCH v2] VT-x: extend LBR Broadwell errata coverage
Thread-Topic: [PATCH v2] VT-x: extend LBR Broadwell errata coverage
Thread-Index: AQHWMqXfhCtjE8umXE6fYTlaghat+ai9MHkw
Date: Thu, 28 May 2020 08:35:27 +0000
Message-ID: <MWHPR11MB1645307CE610D333B208AD1D8C8E0@MWHPR11MB1645.namprd11.prod.outlook.com>
References: <c43b9d43-2e37-d2a8-ba32-dd06062a05e2@suse.com>
In-Reply-To: <c43b9d43-2e37-d2a8-ba32-dd06062a05e2@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
dlp-version: 11.2.0.6
dlp-product: dlpe-windows
dlp-reaction: no-action
authentication-results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=intel.com;
x-originating-ip: [192.55.52.219]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5a2c8468-98c7-4548-9d51-08d802e21305
x-ms-traffictypediagnostic: MWHPR11MB1550:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <MWHPR11MB155078038F2BC45247A9C09B8C8E0@MWHPR11MB1550.namprd11.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-forefront-prvs: 0417A3FFD2
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: m9Ivao+MaIEDjKgihdZXwkbLvdDtptIiITG9HIWyMlCUI3yJecCyafrx1ARVF0SOuFGK4SyirSksltkNN/oulmXXsopTnT+hKafyQhp/kVaiBEK8H57fyrpx89ID9s10ePnEKOl2OtdkrYPPxO5mxArQjzikZNhpSXHNTkcnlwjeYUTBwA9jlwB3ozXuyUaDToeE+tozS9+G4UKd4tZVA6a5t5ydel3WYHdUblLoJnCe/YYKYt9Bg05ooAiPUMsCdz6hZie15iMkIpkCoaS3xuHlIEdlZNnHC0GqAiM0ySQLPB9H0x7Du5CvTi+Ger2K2OPqZ4XcfRFHEhheaHZfSQ==
x-forefront-antispam-report: CIP:255.255.255.255; CTRY:; LANG:en; SCL:1; SRV:;
 IPV:NLI; SFV:NSPM; H:MWHPR11MB1645.namprd11.prod.outlook.com; PTR:; CAT:NONE;
 SFTY:;
 SFS:(376002)(39860400002)(366004)(396003)(346002)(136003)(54906003)(55016002)(110136005)(5660300002)(478600001)(8676002)(8936002)(52536014)(26005)(107886003)(71200400001)(66556008)(7696005)(66476007)(2906002)(66446008)(66946007)(6506007)(316002)(76116006)(64756008)(4326008)(83380400001)(186003)(86362001)(33656002)(9686003);
 DIR:OUT; SFP:1102; 
x-ms-exchange-antispam-messagedata: h7zGrhnRDvr1OyB10uEi6yzfjrxGVqwnhyiV5mid94GQdP8AF/Wm3IR0m/ERKjdjUrhVQyMaRSoXWedwneV52/TH6HHzdD6QcsuFb6grYI7hRcht51Sy6hJ+OKqqhEJBV+Q0wgnz99gachHvVfQbMRHdWn0L5Mvt2aP9A4M9Z6OGbthmBQ5iy52rzBnhjnZAwQAk8N5HaSb9oBGdnuQE5Vabsv9YdEnlIW0vTEsyK2jI9YrvJGjupEVw2vxs6iP3E1W4qHakOdoIYW8C5B8kgJZ4VeBBoUHdpxJWcf2uCEFUz8+ecKPlThaw3IdNFF3YukYY785GrOazIwCrib5IZ9XgHO9GfOTJeKyQRp3lsYKLljaZxBj54yPUg8u+/5lQosEkJuI8VPCdyhNPLnxPZEgRvETsZJnTNNkNRvsb8HfEAZ5Kt+rlxccF0i3BXiihzVkey7GtaMlRvrgvnIPCdoTonl1P1w2juFcK/JvYmI/VOgyRqZ9v0VrznxCsl6ia
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-Network-Message-Id: 5a2c8468-98c7-4548-9d51-08d802e21305
X-MS-Exchange-CrossTenant-originalarrivaltime: 28 May 2020 08:35:27.5505 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 46c98d88-e344-4ed4-8496-4ed7712e255d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pyyQrp1KiVEpLWiAJfcX3wHFUZv/QrdXnli/VyohUvT/m9wmG8rZjyL0OohBE3rwetruF+QzuP91VMDSeo0DOQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MWHPR11MB1550
X-OriginatorOrg: intel.com
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, "Nakajima,
 Jun" <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> From: Jan Beulich <jbeulich@suse.com>
> Sent: Monday, May 25, 2020 11:04 PM
>
> For lbr_tsx_fixup_check() simply name a few more specific erratum
> numbers.
>
> For bdf93_fixup_check(), however, more models are affected. Oddly enough
> despite being the same model and stepping, the erratum is listed for
> Xeon E3 but not its Core counterpart. Apply the workaround uniformly,
> and also for Xeon D, which only has the LBR-from one listed in its spec
> update.
>
> Seeing this broader applicability, rename anything BDF93-related to more
> generic names.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Kevin Tian <kevin.tian@intel.com>

> ---
> v2: Name yet another pair of errata. Speculatively cover Xeon D also
>     in the 2nd case. Identifier renaming.
>
> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -2380,7 +2380,7 @@ static void pi_notification_interrupt(st
>  }
>
>  static void __init lbr_tsx_fixup_check(void);
> -static void __init bdf93_fixup_check(void);
> +static void __init ler_to_fixup_check(void);
>
>  /*
>   * Calculate whether the CPU is vulnerable to Instruction Fetch page
> @@ -2554,7 +2554,7 @@ const struct hvm_function_table * __init
>      }
>
>      lbr_tsx_fixup_check();
> -    bdf93_fixup_check();
> +    ler_to_fixup_check();
>
>      return &vmx_function_table;
>  }
> @@ -2832,11 +2832,11 @@ enum
>
>  #define LBR_MSRS_INSERTED      (1u << 0)
>  #define LBR_FIXUP_TSX          (1u << 1)
> -#define LBR_FIXUP_BDF93        (1u << 2)
> -#define LBR_FIXUP_MASK         (LBR_FIXUP_TSX | LBR_FIXUP_BDF93)
> +#define LBR_FIXUP_LER_TO       (1u << 2)
> +#define LBR_FIXUP_MASK         (LBR_FIXUP_TSX | LBR_FIXUP_LER_TO)
>
>  static bool __read_mostly lbr_tsx_fixup_needed;
> -static bool __read_mostly bdf93_fixup_needed;
> +static bool __read_mostly ler_to_fixup_needed;
>
>  static void __init lbr_tsx_fixup_check(void)
>  {
> @@ -2844,7 +2844,7 @@ static void __init lbr_tsx_fixup_check(v
>      uint32_t lbr_format;
>
>      /*
> -     * HSM182, HSD172, HSE117, BDM127, BDD117, BDF85, BDE105:
> +     * Haswell erratum HSM182 et al, Broadwell erratum BDM127 et al:
>       *
>       * On processors that do not support Intel Transactional Synchronization
>       * Extensions (Intel TSX) (CPUID.07H.EBX bits 4 and 11 are both zero),
> @@ -2868,8 +2868,11 @@ static void __init lbr_tsx_fixup_check(v
>      case 0x45: /* HSM182 - 4th gen Core */
>      case 0x46: /* HSM182, HSD172 - 4th gen Core (GT3) */
>      case 0x3d: /* BDM127 - 5th gen Core */
> -    case 0x47: /* BDD117 - 5th gen Core (GT3) */
> -    case 0x4f: /* BDF85  - Xeon E5-2600 v4 */
> +    case 0x47: /* BDD117 - 5th gen Core (GT3)
> +                  BDW117 - Xeon E3-1200 v4 */
> +    case 0x4f: /* BDF85  - Xeon E5-2600 v4
> +                  BDH75  - Core-i7 for LGA2011-v3 Socket
> +                  BDX88  - Xeon E7-x800 v4 */
>      case 0x56: /* BDE105 - Xeon D-1500 */
>          break;
>      default:
> @@ -2890,18 +2893,31 @@ static void __init lbr_tsx_fixup_check(v
>          lbr_tsx_fixup_needed = true;
>  }
>
> -static void __init bdf93_fixup_check(void)
> +static void __init ler_to_fixup_check(void)
>  {
>      /*
> -     * Broadwell erratum BDF93:
> +     * Broadwell erratum BDF93 et al:
>       *
>       * Reads from MSR_LER_TO_LIP (MSR 1DEH) may return values for
> bits[63:61]
>       * that are not equal to bit[47].  Attempting to context switch this value
>       * may cause a #GP.  Software should sign extend the MSR.
>       */
> -    if ( boot_cpu_data.x86_vendor == X86_VENDOR_INTEL &&
> -         boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x4f )
> -        bdf93_fixup_needed = true;
> +    if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL ||
> +         boot_cpu_data.x86 != 6 )
> +        return;
> +
> +    switch ( boot_cpu_data.x86_model )
> +    {
> +    case 0x3d: /* BDM131 - 5th gen Core */
> +    case 0x47: /* BDD??? - 5th gen Core (H-Processor line)
> +                  BDW120 - Xeon E3-1200 v4 */
> +    case 0x4f: /* BDF93  - Xeon E5-2600 v4
> +                  BDH80  - Core-i7 for LGA2011-v3 Socket
> +                  BDX93  - Xeon E7-x800 v4 */
> +    case 0x56: /* BDE??? - Xeon D-1500 */
> +        ler_to_fixup_needed = true;
> +        break;
> +    }
>  }
>
>  static int is_last_branch_msr(u32 ecx)
> @@ -3276,8 +3292,8 @@ static int vmx_msr_write_intercept(unsig
>              v->arch.hvm.vmx.lbr_flags |= LBR_MSRS_INSERTED;
>              if ( lbr_tsx_fixup_needed )
>                  v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_TSX;
> -            if ( bdf93_fixup_needed )
> -                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_BDF93;
> +            if ( ler_to_fixup_needed )
> +                v->arch.hvm.vmx.lbr_flags |= LBR_FIXUP_LER_TO;
>          }
>
>          __vmwrite(GUEST_IA32_DEBUGCTL, msr_content);
> @@ -4338,7 +4354,7 @@ static void sign_extend_msr(struct vcpu
>          entry->data = canonicalise_addr(entry->data);
>  }
>
> -static void bdf93_fixup(void)
> +static void ler_to_fixup(void)
>  {
>      struct vcpu *curr = current;
>
> @@ -4351,8 +4367,8 @@ static void lbr_fixup(void)
>
>      if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_TSX )
>          lbr_tsx_fixup();
> -    if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_BDF93 )
> -        bdf93_fixup();
> +    if ( curr->arch.hvm.vmx.lbr_flags & LBR_FIXUP_LER_TO )
> +        ler_to_fixup();
>  }
>
>  /* Returns false if the vmentry has to be restarted */


From xen-devel-bounces@lists.xenproject.org Thu May 28 08:54:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 08:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeEIL-0000im-Dq; Thu, 28 May 2020 08:53:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=blJD=7K=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jeEIK-0000ih-2p
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 08:53:44 +0000
X-Inumbo-ID: bb766e80-a0c0-11ea-9dbe-bc764e2007e4
Received: from mail-wr1-x42b.google.com (unknown [2a00:1450:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb766e80-a0c0-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 08:53:43 +0000 (UTC)
Received: by mail-wr1-x42b.google.com with SMTP id y17so18618321wrn.11
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 01:53:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=OsmbqVD2Ng7iKf9+jo7l+b8noucPBniDdZVmRwZ2OHk=;
 b=SuXgitoB2Tx2l/10ympckof0gahR0b1hDRKDFHfF1WhjIwtBthSANU2W7Ybpbayk3p
 H1QooXdIEmAOOVqbM0+7ZhWs/85YhvJDV9wbn059RXDPH1uH3pI/67GsZbvS9tAX0bXI
 J4875LqdvaZ15zOIDDtb0wQi+ohnU2OwmJ2p8Hx1iR9E3mYHvYVeVt15itrRUlTxgyKV
 gpzAfiG3YvvPnLWZKbZDBwGb2MEYO0YdwSsIgGa1bqnZfBI2HmFaunLqMGl0Av+AOjKI
 zLGlEpxRycjayXuiY0/qEOR3GKlGtbu4icDqRJI0nVhjgmngg30MNXOavPd0PYbpSCxk
 BybQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=OsmbqVD2Ng7iKf9+jo7l+b8noucPBniDdZVmRwZ2OHk=;
 b=oOwhh9K4qF/9AGiyGq5egWOalOfQFr/Q+QQQ1yZPk4P2HxvXsqIvTsiRUsP6VtvTTJ
 z3U/hlucm2OznnUiN4oRoKy7CMrFgRltE+V9Engd8EYCrwkSiWWVU9GEVQMPBzymx6Nu
 HtqIDR/KCgRdG2Askp7G6UoILrqXoC/pPhGIDn+tgeULI++/hhkWNxcDST1syDT9Iy77
 S38XAfQFnEJwPOEIM1Fd/ebfcdSMOBCCtMO5put4EHLtnfwOVUHCi9xiUMLAtb7zuE0V
 Pr1KliqHVI2RYo3sw0QSsy/s+Y/3prE5LOdEHwsj420gGsStZo1j5OdWQ4zxrl2QtAvO
 WGGg==
X-Gm-Message-State: AOAM531Vs5P5qsNmiMCBFce2yz2qIkA3QchdcpHWYz0sOMlhAMMaE6iV
 3drAdXIZf2U3+P80LG/ye/w=
X-Google-Smtp-Source: ABdhPJxPQJPrGm4dKWgaLCge5ybHHqVXd1GSzGgAIPr9ECDzgTT8ub3oBJKVe/JTYAueu4frei8FPw==
X-Received: by 2002:a5d:6944:: with SMTP id r4mr2432274wrw.169.1590656022346; 
 Thu, 28 May 2020 01:53:42 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id t14sm5390200wrb.94.2020.05.28.01.53.40
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 28 May 2020 01:53:41 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <20200528082217.26057-1-jgross@suse.com>
In-Reply-To: <20200528082217.26057-1-jgross@suse.com>
Subject: RE: [PATCH v2] docs: update xenstore-migration.md
Date: Thu, 28 May 2020 09:53:40 +0100
Message-ID: <00a001d634cd$7c9afb40$75d0f1c0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFP4uuOoU5NbpLVMW9x/VcR2g3Na6nJ05cw
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> Sent: 28 May 2020 09:22
> To: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien@xen.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>
> Subject: [PATCH v2] docs: update xenstore-migration.md
> 
> Update connection record details: make flags common for sockets and
> domains, and add pending incoming data.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - added out-resp-len to connection record
> ---
>  docs/designs/xenstore-migration.md | 71 +++++++++++++++++-------------
>  1 file changed, 40 insertions(+), 31 deletions(-)
> 
> diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
> index 34a2afd17e..5736bbad94 100644
> --- a/docs/designs/xenstore-migration.md
> +++ b/docs/designs/xenstore-migration.md
> @@ -147,43 +147,59 @@ the domain being migrated.
>  ```
>      0       1       2       3       4       5       6       7    octet
>  +-------+-------+-------+-------+-------+-------+-------+-------+
> -| conn-id                       | conn-type     | conn-spec
> +| conn-id                       | conn-type     | flags         |
> ++-------------------------------+---------------+---------------+
> +| conn-spec
>  ...
> -+-------------------------------+-------------------------------+
> -| data-len                      | data
> -+-------------------------------+
> ++---------------+---------------+-------------------------------+
> +| in-data-len   | out-resp-len  | out-data-len                  |
> ++---------------+---------------+-------------------------------+
> +| data
>  ...
>  ```
> 
> 
> -| Field       | Description                                     |
> -|-------------|-------------------------------------------------|
> -| `conn-id`   | A non-zero number used to identify this         |
> -|             | connection in subsequent connection-specific    |
> -|             | records                                         |
> -|             |                                                 |
> -| `conn-type` | 0x0000: shared ring                             |
> -|             | 0x0001: socket                                  |
> -|             | 0x0002 - 0xFFFF: reserved for future use        |
> -|             |                                                 |
> -| `conn-spec` | See below                                       |
> -|             |                                                 |
> -| `data-len`  | The length (in octets) of any pending data not  |
> -|             | yet written to the connection                   |
> -|             |                                                 |
> -| `data`      | Pending data (may be empty)                     |
> +| Field          | Description                                  |
> +|----------------|----------------------------------------------|
> +| `conn-id`      | A non-zero number used to identify this      |
> +|                | connection in subsequent connection-specific |
> +|                | records                                      |
> +|                |                                              |
> +| `flags`        | A bit-wise OR of:                            |
> +|                | 0001: read-only                              |
> +|                |                                              |
> +| `conn-type`    | 0x0000: shared ring                          |
> +|                | 0x0001: socket                               |
> +|                | 0x0002 - 0xFFFF: reserved for future use     |
> +|                |                                              |

Agreed with Julien... the above two would be better swapped to match the order of the fields in the record.

> +| `conn-spec`    | See below                                    |
> +|                |                                              |
> +| `in-data-len`  | The length (in octets) of any data read      |
> +|                | from the connection not yet processed        |
> +|                |                                              |
> +| `out-resp-len` | The length (in octets) of a partial response |
> +|                | not yet written to the connection (included  |
> +|                | in the following `out-data-len`)             |
> +|                |                                              |
> +| `out-data-len` | The length (in octets) of any pending data   |
> +|                | not yet written to the connection            |

So, IIUC out-data-len is inclusive of out-resp-len?

> +|                |                                              |
> +| `data`         | Pending data, first read data, then written  |
> +|                | data (any of both may be empty)              |

Perhaps be more explicit here:

"Pending data: first in-data-len octets of read data, then out-data-len octets of written data"

> 
> -The format of `conn-spec` is dependent upon `conn-type`.
> +In case of live update the connection record for the connection via which
> +the live update command was issued will contain the response for the live
> +update command in the pending write data.

s/write/written for consistency I think.

  Paul

> 
>  \pagebreak
> 
> +The format of `conn-spec` is dependent upon `conn-type`.
> +
>  For `shared ring` connections it is as follows:
> 
> 
>  ```
>      0       1       2       3       4       5       6       7    octet
> -                                                +-------+-------+
> -                                                | flags         |
>  +---------------+---------------+---------------+---------------+
>  | domid         | tdomid        | evtchn                        |
>  +-------------------------------+-------------------------------+
> @@ -198,8 +214,6 @@ For `shared ring` connections it is as follows:
>  |           | it has been subject to an SET_TARGET              |
>  |           | operation [2] or DOMID_INVALID [3] otherwise      |
>  |           |                                                   |
> -| `flags`   | Must be zero                                      |
> -|           |                                                   |
>  | `evtchn`  | The port number of the interdomain channel used   |
>  |           | by `domid` to communicate with xenstored          |
>  |           |                                                   |
> @@ -211,8 +225,6 @@ For `socket` connections it is as follows:
> 
> 
>  ```
> -                                                +-------+-------+
> -                                                | flags         |
>  +---------------+---------------+---------------+---------------+
>  | socket-fd                     | pad                           |
>  +-------------------------------+-------------------------------+
> @@ -221,9 +233,6 @@ For `socket` connections it is as follows:
> 
>  | Field       | Description                                     |
>  |-------------|-------------------------------------------------|
> -| `flags`     | A bit-wise OR of:                               |
> -|             | 0001: read-only                                 |
> -|             |                                                 |
>  | `socket-fd` | The file descriptor of the connected socket     |
> 
>  This type of connection is only relevant for live update, where the xenstored
> @@ -398,4 +407,4 @@ explanation of node permissions.
> 
>  [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;hb=HEAD#l612
> 
> -[4] https://wiki.xen.org/wiki/XenBus
> \ No newline at end of file
> +[4] https://wiki.xen.org/wiki/XenBus
> --
> 2.26.2
> 




From xen-devel-bounces@lists.xenproject.org Thu May 28 08:56:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 08:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeEKp-0000py-Rb; Thu, 28 May 2020 08:56:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l/Gm=7K=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jeEKo-0000pt-QD
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 08:56:18 +0000
X-Inumbo-ID: 17af58ba-a0c1-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17af58ba-a0c1-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 08:56:18 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3E211AF9C;
 Thu, 28 May 2020 08:56:16 +0000 (UTC)
Subject: Re: [PATCH v2] docs: update xenstore-migration.md
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
References: <20200528082217.26057-1-jgross@suse.com>
 <57f59299-0723-bfcc-33b3-a97562c87150@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <d20c15c2-0b60-9892-f849-9ba69e62ac34@suse.com>
Date: Thu, 28 May 2020 10:56:15 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <57f59299-0723-bfcc-33b3-a97562c87150@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.20 10:34, Julien Grall wrote:
> Hi Juergen,
> 
> On 28/05/2020 09:22, Juergen Gross wrote:
>> -| Field       | Description                                     |
>> -|-------------|-------------------------------------------------|
>> -| `conn-id`   | A non-zero number used to identify this         |
>> -|             | connection in subsequent connection-specific    |
>> -|             | records                                         |
>> -|             |                                                 |
>> -| `conn-type` | 0x0000: shared ring                             |
>> -|             | 0x0001: socket                                  |
>> -|             | 0x0002 - 0xFFFF: reserved for future use        |
>> -|             |                                                 |
>> -| `conn-spec` | See below                                       |
>> -|             |                                                 |
>> -| `data-len`  | The length (in octets) of any pending data not  |
>> -|             | yet written to the connection                   |
>> -|             |                                                 |
>> -| `data`      | Pending data (may be empty)                     |
>> +| Field          | Description                                  |
>> +|----------------|----------------------------------------------|
>> +| `conn-id`      | A non-zero number used to identify this      |
>> +|                | connection in subsequent connection-specific |
>> +|                | records                                      |
>> +|                |                                              |
>> +| `flags`        | A bit-wise OR of:                            |
>> +|                | 0001: read-only                              |
> 
> NIT: It is a bit odd that `flags` comes after `conn-type` in the header
> but is described before it.

Okay, I'll swap the descriptions.

> 
>> +|                |                                              |
>> +| `conn-type`    | 0x0000: shared ring                          |
>> +|                | 0x0001: socket                               |
>> +|                | 0x0002 - 0xFFFF: reserved for future use     |
>> +|                |                                              |
>> +| `conn-spec`    | See below                                    |
>> +|                |                                              |
>> +| `in-data-len`  | The length (in octets) of any data read      |
>> +|                | from the connection not yet processed        |
>> +|                |                                              |
>> +| `out-resp-len` | The length (in octets) of a partial response |
>> +|                | not yet written to the connection (included  |
>> +|                | in the following `out-data-len`)             |
> 
> This seems to come from nowhere. It would be good to explain in the 
> commit message why this is necessary.

Okay.


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 28 08:59:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 08:59:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeENI-00010U-Ei; Thu, 28 May 2020 08:58:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=l/Gm=7K=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jeENH-00010P-Cn
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 08:58:51 +0000
X-Inumbo-ID: 72128386-a0c1-11ea-a7a8-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 72128386-a0c1-11ea-a7a8-12813bfff9fa;
 Thu, 28 May 2020 08:58:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E04FBACF9;
 Thu, 28 May 2020 08:58:47 +0000 (UTC)
Subject: Re: [PATCH v2] docs: update xenstore-migration.md
To: paul@xen.org, xen-devel@lists.xenproject.org
References: <20200528082217.26057-1-jgross@suse.com>
 <00a001d634cd$7c9afb40$75d0f1c0$@xen.org>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <ad448884-6705-9473-597c-10388b398972@suse.com>
Date: Thu, 28 May 2020 10:58:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <00a001d634cd$7c9afb40$75d0f1c0$@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.20 10:53, Paul Durrant wrote:
>> -----Original Message-----
>> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
>> Sent: 28 May 2020 09:22
>> To: xen-devel@lists.xenproject.org
>> Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
>> <julien@xen.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; Ian Jackson
>> <ian.jackson@eu.citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>
>> Subject: [PATCH v2] docs: update xenstore-migration.md
>>
>> Update connection record details: make flags common for sockets and
>> domains, and add pending incoming data.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - added out-resp-len to connection record
>> ---
>>   docs/designs/xenstore-migration.md | 71 +++++++++++++++++-------------
>>   1 file changed, 40 insertions(+), 31 deletions(-)
>>
>> diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
>> index 34a2afd17e..5736bbad94 100644
>> --- a/docs/designs/xenstore-migration.md
>> +++ b/docs/designs/xenstore-migration.md
>> @@ -147,43 +147,59 @@ the domain being migrated.
>>   ```
>>       0       1       2       3       4       5       6       7    octet
>>   +-------+-------+-------+-------+-------+-------+-------+-------+
>> -| conn-id                       | conn-type     | conn-spec
>> +| conn-id                       | conn-type     | flags         |
>> ++-------------------------------+---------------+---------------+
>> +| conn-spec
>>   ...
>> -+-------------------------------+-------------------------------+
>> -| data-len                      | data
>> -+-------------------------------+
>> ++---------------+---------------+-------------------------------+
>> +| in-data-len   | out-resp-len  | out-data-len                  |
>> ++---------------+---------------+-------------------------------+
>> +| data
>>   ...
>>   ```
>>
>>
>> -| Field       | Description                                     |
>> -|-------------|-------------------------------------------------|
>> -| `conn-id`   | A non-zero number used to identify this         |
>> -|             | connection in subsequent connection-specific    |
>> -|             | records                                         |
>> -|             |                                                 |
>> -| `conn-type` | 0x0000: shared ring                             |
>> -|             | 0x0001: socket                                  |
>> -|             | 0x0002 - 0xFFFF: reserved for future use        |
>> -|             |                                                 |
>> -| `conn-spec` | See below                                       |
>> -|             |                                                 |
>> -| `data-len`  | The length (in octets) of any pending data not  |
>> -|             | yet written to the connection                   |
>> -|             |                                                 |
>> -| `data`      | Pending data (may be empty)                     |
>> +| Field          | Description                                  |
>> +|----------------|----------------------------------------------|
>> +| `conn-id`      | A non-zero number used to identify this      |
>> +|                | connection in subsequent connection-specific |
>> +|                | records                                      |
>> +|                |                                              |
>> +| `flags`        | A bit-wise OR of:                            |
>> +|                | 0001: read-only                              |
>> +|                |                                              |
>> +| `conn-type`    | 0x0000: shared ring                          |
>> +|                | 0x0001: socket                               |
>> +|                | 0x0002 - 0xFFFF: reserved for future use     |
>> +|                |                                              |
> 
> Agreed with Julien... the above two would be better swapped to match the order of the fields in the record.

Yes.

> 
>> +| `conn-spec`    | See below                                    |
>> +|                |                                              |
>> +| `in-data-len`  | The length (in octets) of any data read      |
>> +|                | from the connection not yet processed        |
>> +|                |                                              |
>> +| `out-resp-len` | The length (in octets) of a partial response |
>> +|                | not yet written to the connection (included  |
>> +|                | in the following `out-data-len`)             |
>> +|                |                                              |
>> +| `out-data-len` | The length (in octets) of any pending data   |
>> +|                | not yet written to the connection            |
> 
> So, IIUC out-data-len is inclusive of out-resp-len?

Yes.

> 
>> +|                |                                              |
>> +| `data`         | Pending data, first read data, then written  |
>> +|                | data (any of both may be empty)              |
> 
> Perhaps be more explicit here:
> 
> "Pending data: first in-data-len octets of read data, then out-data-len octets of written data"

Okay.

> 
>>
>> -The format of `conn-spec` is dependent upon `conn-type`.
>> +In case of live update the connection record for the connection via which
>> +the live update command was issued will contain the response for the live
>> +update command in the pending write data.
> 
> s/write/written for consistency I think.

I'll use "... in the pending not yet written data".


Juergen


From xen-devel-bounces@lists.xenproject.org Thu May 28 09:00:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:00:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeEP0-0001mH-Rp; Thu, 28 May 2020 09:00:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=blJD=7K=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jeEOz-0001m8-Om
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:00:37 +0000
X-Inumbo-ID: b1f6809c-a0c1-11ea-8993-bc764e2007e4
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1f6809c-a0c1-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 09:00:36 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id j16so14592758wrb.7
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 02:00:36 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=Ann0i1ys87ZqTaBslW/JT2uNUb3D+tfWBJuPQxro4MM=;
 b=HfMmTVmuFyk/U+MKSxjeypCVY+NHqduA5LikENxzOxgTOKtEEpunayQLjtLeF2WlFs
 xjMAR+nS2oAxR2O+jSO+/F4DSUnL/uwAxi0YEIJCfYdyhBV4Mi3v24KmZdf6g/ORnECV
 azs5k6VHxiu9ZjJyMXqxIeBDq4lEpI1/qocYj765NeuEwLCRoJpQxzlv3wzPH9FeH+G2
 BVxfJRiPGDe0gGlWfSaZp60gcZbmcVjWjlCAbyiWwdsnXEscBepgpXr+c0rYa+jUgjzK
 BJB9IbrQfX8EgnCotonY/ImYe9QZHZbTydPsgFPdW7zc+dJZ4vWJpfcew8grlTRJPfnd
 rH3w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=Ann0i1ys87ZqTaBslW/JT2uNUb3D+tfWBJuPQxro4MM=;
 b=Lv70JbgWn4Kco8toDjtAVuRSzzFdFlOMzlsXDdQjldp722pGy+onNzxomGbmvxOG8W
 EnGfv/xakgIuIRGSZlirGN6BWheNk+Nc5Ik10/RNQv/E8pPiE2aJCqVtB/MxJ9gV4Cec
 Vg2xNKWEua0BqDoaMut17Kiuxg/jJT62aCuAnK57ylCwLCaEii9VjCLb9tbEa/ouawZJ
 mYrzrYd2Y/9pjSjNlmQ9Zga57tbVOWwZWgmoFk42B42hB7M0rRKbmr65PA3xyWqRXgkt
 GjRldGvkRiBuAg3Zl3PGGBi2uoa0tbSKe/2USsy0DJRJaaJgIGp4jhMk6T/XYPH/Es5e
 0VhA==
X-Gm-Message-State: AOAM532mcPgNYSEJtI/8lOX3hs0ET03dWfJM0ADCKIbq+ZLaGE52Z1oB
 pfQEtQyK05knoL75aUzHOxY=
X-Google-Smtp-Source: ABdhPJwSb2CiShWybYjkqToTC/yBGfAdj3vNS+H0VuDM4cY0yt/U36EA8jqXZl7N4+7n5n5aS6ChXg==
X-Received: by 2002:a5d:4745:: with SMTP id o5mr2401872wrs.87.1590656435864;
 Thu, 28 May 2020 02:00:35 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id q4sm5850317wma.47.2020.05.28.02.00.34
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Thu, 28 May 2020 02:00:35 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: 'Jürgen Groß' <jgross@suse.com>,
 <xen-devel@lists.xenproject.org>
References: <20200528082217.26057-1-jgross@suse.com>
 <00a001d634cd$7c9afb40$75d0f1c0$@xen.org>
 <ad448884-6705-9473-597c-10388b398972@suse.com>
In-Reply-To: <ad448884-6705-9473-597c-10388b398972@suse.com>
Subject: RE: [PATCH v2] docs: update xenstore-migration.md
Date: Thu, 28 May 2020 10:00:33 +0100
Message-ID: <00a501d634ce$7328cf50$597a6df0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQFP4uuOoU5NbpLVMW9x/VcR2g3NawL+UMOZAfDcPc+pol2EIA==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jürgen Groß <jgross@suse.com>
> Sent: 28 May 2020 09:59
> To: paul@xen.org; xen-devel@lists.xenproject.org
> Cc: 'Stefano Stabellini' <sstabellini@kernel.org>; 'Julien Grall' <julien@xen.org>; 'Wei Liu'
> <wl@xen.org>; 'Andrew Cooper' <andrew.cooper3@citrix.com>; 'Ian Jackson' <ian.jackson@eu.citrix.com>;
> 'George Dunlap' <george.dunlap@citrix.com>; 'Jan Beulich' <jbeulich@suse.com>
> Subject: Re: [PATCH v2] docs: update xenstore-migration.md
>
> On 28.05.20 10:53, Paul Durrant wrote:
> >> -----Original Message-----
> >> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> >> Sent: 28 May 2020 09:22
> >> To: xen-devel@lists.xenproject.org
> >> Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> >> <julien@xen.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; Ian Jackson
> >> <ian.jackson@eu.citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>
> >> Subject: [PATCH v2] docs: update xenstore-migration.md
> >>
> >> Update connection record details: make flags common for sockets and
> >> domains, and add pending incoming data.
> >>
> >> Signed-off-by: Juergen Gross <jgross@suse.com>
> >> ---
> >> V2:
> >> - added out-resp-len to connection record
> >> ---
> >>   docs/designs/xenstore-migration.md | 71 +++++++++++++++++-------------
> >>   1 file changed, 40 insertions(+), 31 deletions(-)
> >>
> >> diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
> >> index 34a2afd17e..5736bbad94 100644
> >> --- a/docs/designs/xenstore-migration.md
> >> +++ b/docs/designs/xenstore-migration.md
> >> @@ -147,43 +147,59 @@ the domain being migrated.
> >>   ```
> >>       0       1       2       3       4       5       6       7    octet
> >>   +-------+-------+-------+-------+-------+-------+-------+-------+
> >> -| conn-id                       | conn-type     | conn-spec
> >> +| conn-id                       | conn-type     | flags         |
> >> ++-------------------------------+---------------+---------------+
> >> +| conn-spec
> >>   ...
> >> -+-------------------------------+-------------------------------+
> >> -| data-len                      | data
> >> -+-------------------------------+
> >> ++---------------+---------------+-------------------------------+
> >> +| in-data-len   | out-resp-len  | out-data-len                  |
> >> ++---------------+---------------+-------------------------------+
> >> +| data
> >>   ...
> >>   ```
> >>
> >>
> >> -| Field       | Description                                     |
> >> -|-------------|-------------------------------------------------|
> >> -| `conn-id`   | A non-zero number used to identify this         |
> >> -|             | connection in subsequent connection-specific    |
> >> -|             | records                                         |
> >> -|             |                                                 |
> >> -| `conn-type` | 0x0000: shared ring                             |
> >> -|             | 0x0001: socket                                  |
> >> -|             | 0x0002 - 0xFFFF: reserved for future use        |
> >> -|             |                                                 |
> >> -| `conn-spec` | See below                                       |
> >> -|             |                                                 |
> >> -| `data-len`  | The length (in octets) of any pending data not  |
> >> -|             | yet written to the connection                   |
> >> -|             |                                                 |
> >> -| `data`      | Pending data (may be empty)                     |
> >> +| Field          | Description                                  |
> >> +|----------------|----------------------------------------------|
> >> +| `conn-id`      | A non-zero number used to identify this      |
> >> +|                | connection in subsequent connection-specific |
> >> +|                | records                                      |
> >> +|                |                                              |
> >> +| `flags`        | A bit-wise OR of:                            |
> >> +|                | 0001: read-only                              |
> >> +|                |                                              |
> >> +| `conn-type`    | 0x0000: shared ring                          |
> >> +|                | 0x0001: socket                               |
> >> +|                | 0x0002 - 0xFFFF: reserved for future use     |
> >> +|                |                                              |
> >
> > Agreed with Julien... the above two would be better swapped to match the order of the fields in the record.
>
> Yes.
>
> >
> >> +| `conn-spec`    | See below                                    |
> >> +|                |                                              |
> >> +| `in-data-len`  | The length (in octets) of any data read      |
> >> +|                | from the connection not yet processed        |
> >> +|                |                                              |
> >> +| `out-resp-len` | The length (in octets) of a partial response |
> >> +|                | not yet written to the connection (included  |
> >> +|                | in the following `out-data-len`)             |
> >> +|                |                                              |
> >> +| `out-data-len` | The length (in octets) of any pending data   |
> >> +|                | not yet written to the connection            |
> >
> > So, IIUC out-data-len is inclusive of out-resp-len?
>
> Yes.
>
> >
> >> +|                |                                              |
> >> +| `data`         | Pending data, first read data, then written  |
> >> +|                | data (any of both may be empty)              |
> >
> > Perhaps be more explicit here:
> >
> > "Pending data: first in-data-len octets of read data, then out-data-len octets of written data"
>
> Okay.
>
> >
> >>
> >> -The format of `conn-spec` is dependent upon `conn-type`.
> >> +In case of live update the connection record for the connection via which
> >> +the live update command was issued will contain the response for the live
> >> +update command in the pending write data.
> >
> > s/write/written for consistency I think.
>
> I'll use "... in the pending not yet written data".
>

Ok.

  Paul

>
> Juergen



From xen-devel-bounces@lists.xenproject.org Thu May 28 09:04:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:04:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeESN-0001x4-CC; Thu, 28 May 2020 09:04:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eoT6=7K=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeESM-0001wx-In
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:04:06 +0000
X-Inumbo-ID: 2e847100-a0c2-11ea-a7a8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e847100-a0c2-11ea-a7a8-12813bfff9fa;
 Thu, 28 May 2020 09:04:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=lf1gkKuFmc3/ZRlD1zPwK//yWBW5s9SvQhBMNyi+z2Y=; b=kw3gs5YfL6l5m+3GRVfv1PlJQr
 tgzMy91Bn7IYvZWsEeAggrL6BO008ctsyBPJp7jMfDi6NU0m29IR+zm9tvdRCvzj3+cHISoBfOptY
 e9C21LLjiCORPZIWUKPZUDCtU2H7Woi47MM/fnIaXDjJdCSAEj7/sgwmIAul0C4fPooU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeESI-00075U-R9; Thu, 28 May 2020 09:04:02 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeESI-0005E2-BU; Thu, 28 May 2020 09:04:02 +0000
Date: Thu, 28 May 2020 11:03:51 +0200
From: Roger Pau Monné <roger@xen.org>
To: Tamas K Lengyel <tamas.lengyel@intel.com>
Subject: Re: [PATCH v3 for-4.14 1/2] x86/mem_sharing: block interrupt
 injection for forks
Message-ID: <20200528090338.GE1195@Air-de-Roger>
References: <a3b3410c707636aa201641e14b1ab43d4b8821e1.1590411162.git.tamas.lengyel@intel.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <a3b3410c707636aa201641e14b1ab43d4b8821e1.1590411162.git.tamas.lengyel@intel.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Tian <kevin.tian@intel.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Mon, May 25, 2020 at 06:00:08AM -0700, Tamas K Lengyel wrote:
> When running shallow forks, ie. VM forks without device models (QEMU), it may
> be undesirable for Xen to inject interrupts. When creating such forks from
> Windows VMs we have observed the kernel trying to process interrupts
> immediately after the fork is executed. However without QEMU running such
> interrupt handling may not be possible because it may attempt to interact with
> devices that are not emulated by a backend. In the best case scenario such
> interrupt handling would only present a detour in the VM forks' execution
> flow, but in the worst case, as we have actually observed, it can completely stall it.
> By disabling interrupt injection a fuzzer can exercise the target code without
> interference. For other use-cases this option probably doesn't make sense,
> that's why this is not enabled by default.
> 
> Forks & memory sharing are only available on Intel CPUs so this only applies
> to vmx. Note that this is part of the experimental VM forking feature that's
> completely disabled by default and can only be enabled by using
> XEN_CONFIG_EXPERT during compile time.
> 
> Signed-off-by: Tamas K Lengyel <tamas.lengyel@intel.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 28 09:17:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:17:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeEf8-0002sr-Iy; Thu, 28 May 2020 09:17:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lRPh=7K=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeEf6-0002sm-Um
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:17:17 +0000
X-Inumbo-ID: 05e4e08e-a0c4-11ea-a7a8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05e4e08e-a0c4-11ea-a7a8-12813bfff9fa;
 Thu, 28 May 2020 09:17:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+Z0tasWUn/B4MzE6fZiFjnf1JiK67V7eieVP2LkyCkM=; b=zS07+VJJSmLYUNobIG8kv8V6x0
 maBN6XPUEf1ZUcbZzf4MWCWL4+FkhCXU0OItYlubgAOZnTgPcFhlPPdCiUPW2Tqo96A95Jw+FJjQR
 OMwrQwIcQXnUnhrthf4Mb+HK/wFs0+3d9EplDBGEUTZms00qcibzieYYpleUc/FEi+Yk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeEf2-0007Mf-9X; Thu, 28 May 2020 09:17:12 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeEf2-00065Q-2S; Thu, 28 May 2020 09:17:12 +0000
Subject: Re: [PATCH v6 4/5] common/domain: add a domain context record for
 shared_info...
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20200527173407.1398-1-paul@xen.org>
 <20200527173407.1398-5-paul@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <27717a17-560d-a804-39c2-93bbc3c85009@xen.org>
Date: Thu, 28 May 2020 10:17:07 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <20200527173407.1398-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Paul,

On 27/05/2020 18:34, Paul Durrant wrote:
> From: Paul Durrant <pdurrant@amazon.com>
> 
> ... and update xen-domctx to dump some information describing the record.
> 
> NOTE: The domain may or may not be using the embedded vcpu_info array so
>        ultimately separate context records will be added for vcpu_info when
>        this becomes necessary.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 28 09:33:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeEu2-0004VE-UL; Thu, 28 May 2020 09:32:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FdVz=7K=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jeEu1-0004V9-6g
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:32:41 +0000
X-Inumbo-ID: 2c568de2-a0c6-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c568de2-a0c6-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 09:32:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 96255AD3A;
 Thu, 28 May 2020 09:32:38 +0000 (UTC)
Message-ID: <9f807e10a188ad0877b6d9e769a37575db12c570.camel@suse.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: Jürgen Groß <jgross@suse.com>, Jan Beulich
 <jbeulich@suse.com>
Date: Thu, 28 May 2020 11:32:37 +0200
In-Reply-To: <5939e797-09be-53d1-b87f-d6c6c97ea3a3@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
 <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
 <5939e797-09be-53d1-b87f-d6c6c97ea3a3@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-RT0obdPu5SaH1+O+jnCD"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-RT0obdPu5SaH1+O+jnCD
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-05-27 at 06:26 +0200, Jürgen Groß wrote:
> On 27.05.20 00:00, Dario Faggioli wrote:
> >
> > Understood. Problem is that, here in the scheduling code, I don't see
> > an easy way to tell when we have finished bringing up CPUs... And it's
> > probably not worth looking too hard (even less adding logic) only for
> > the sake of printing this message.
>
> cpupool_init() is the perfect place for that.
>
Yes, at least for boot time, it is indeed, so I'll go for it.

OTOH, when for instance one creates a new cpupool (or adds a bunch of
CPUs to one, with `xl cpupool-add Pool-X 7,3-14-22,18`), CPUs are added
one by one, and in Xen we don't really know which one will be the last,
so there is no good point at which to print a summary.

But yeah, I'll go for cpupool_init() and get rid of the rest, for now.

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



--=-RT0obdPu5SaH1+O+jnCD--



From xen-devel-bounces@lists.xenproject.org Thu May 28 09:36:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:36:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeExo-0004g7-I2; Thu, 28 May 2020 09:36:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FdVz=7K=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jeExn-0004g2-AS
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:36:35 +0000
X-Inumbo-ID: b84028c2-a0c6-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b84028c2-a0c6-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 09:36:34 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4BEEFAD3A;
 Thu, 28 May 2020 09:36:33 +0000 (UTC)
Message-ID: <db0c02328a1fe60ed186638a6acd5c3df21686d5.camel@suse.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Thu, 28 May 2020 11:36:32 +0200
In-Reply-To: <9948ac59-af64-d77d-57df-38a771a472b4@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <b368ccef-d3b1-1338-6325-8f81a963876d@suse.com>
 <d60d5b917d517b1dfa8292cfb456639c736ec173.camel@suse.com>
 <7e039c65-4532-c3ea-8707-72a86cf48e0e@suse.com>
 <8bf86f0c2bcce449cf7643aa9b98aa26ea558c2c.camel@suse.com>
 <9948ac59-af64-d77d-57df-38a771a472b4@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-2OnZpGu9g/zF6PmSkDRF"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>,
 xen-devel@lists.xenproject.org, paul@xen.org,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-2OnZpGu9g/zF6PmSkDRF
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-05-27 at 08:22 +0200, Jan Beulich wrote:
> On 26.05.2020 23:18, Dario Faggioli wrote:
> >
> > It looks like we need a way to rebalance the runqueues, which
> > should be
> > doable... But despite having spent a couple of days trying to come
> > up
> > with something decent, that I could include in v2 of this series, I
> > couldn't get it to work sensibly.
>
> CPU on-/offlining may not need considering here at all imo. I think
> it
> would be quite reasonable as a first step to have a static model
> where
> from system topology (and perhaps a command line option) one can
> predict which runqueue a particular CPU will end up in, no matter
> when
> it gets brought online.
>
Right.

IAC, just FYI, after talking to Juergen --who suggested a nice solution
to overcome the problem where I was stuck-- I now have a patch that
successfully implements dynamic online rebalancing of runqueues.

I'm polishing up the comments and changelog and will send it ASAP, as
I'd really like for this series to go in... :-)

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-2OnZpGu9g/zF6PmSkDRF
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7PhiAACgkQFkJ4iaW4
c+6IQg//Y5NRewHwSqke21nEdLFFGtz/mgKw+CeJIVi+jkKKznpQSfllqbEKnovG
piyjrgxtu1mT3gGmwM+bISd3gagFKadN6k50OaVbFFUE/R4IwveEtnmrQBfBDI7A
XdHfn7AsRKESfjpkUxjTn+BKAhFitmgWK6QMFgaZyhhqGN8jaqJoiQhsMaX6kTAX
B5eGiPguQ0Jaqz3diVrgYcBZqdjYDmi1hx47CpY/YYekbhAh+t31h4yZ23Kdi1N9
YRTmznforS5DIRwZHAcp/gaFjZULX5jEUgMdU21PYHgLjx2BUTzYGyGNJXSlQ+3f
kW6eIZ1OomHSDoz09LJ6f76Hq3fuWnZqeC/V1UHGQa1TDJkqqqF6CY//fltLk5/d
uQ/N2zZ+ZmFDXvq8vSFX5bxK/7bYKek+qyqNGixc7XKgepiV3o7k1sfNZZwP6sNY
dF74zR/mzIE5ouAnPY4k07zI7x6oWz6N9NErrOndYISRlFLeY1opuZnzl0zvGp0T
nDrvoWD8RlqZzlHF4mu9Ks5GbsXkbWvK+sNnjinzR/2v4zdH7dhH073rD8Vm22c5
pA+QaRuhTqgamKsapBoHupSWDm8hIMVdqv8rVa2dwSkGd/CJpoimsDyJ15O7tFF0
IsClCNsXWcOoHY6eMo6pK3KCnzw88NZ3sDbRUKlmYxseSfACYrw=
=kj2r
-----END PGP SIGNATURE-----

--=-2OnZpGu9g/zF6PmSkDRF--



From xen-devel-bounces@lists.xenproject.org Thu May 28 09:42:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeF3U-0005WO-6M; Thu, 28 May 2020 09:42:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeF3T-0005WJ-77
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:42:27 +0000
X-Inumbo-ID: 89fce90e-a0c7-11ea-a7aa-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89fce90e-a0c7-11ea-a7aa-12813bfff9fa;
 Thu, 28 May 2020 09:42:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DEEE1AD63;
 Thu, 28 May 2020 09:42:24 +0000 (UTC)
Subject: Re: [PATCH v6 4/5] common/domain: add a domain context record for
 shared_info...
To: Paul Durrant <paul@xen.org>
References: <20200527173407.1398-1-paul@xen.org>
 <20200527173407.1398-5-paul@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5589a800-a40a-5534-d2e8-09df78072b02@suse.com>
Date: Thu, 28 May 2020 11:42:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527173407.1398-5-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 19:34, Paul Durrant wrote:
> @@ -1649,6 +1650,75 @@ int continue_hypercall_on_cpu(
>      return 0;
>  }
>  
> +static int save_shared_info(const struct domain *d, struct domain_context *c,
> +                            bool dry_run)
> +{
> +    struct domain_shared_info_context ctxt = {
> +#ifdef CONFIG_COMPAT
> +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> +        .buffer_size = has_32bit_shinfo(d) ?
> +                       sizeof(struct compat_shared_info) :
> +                       sizeof(struct shared_info),
> +#else
> +        .buffer_size = sizeof(struct shared_info),
> +#endif

To prevent disconnect between the types used here and the actual
pointer copied from, I'd have preferred

#ifdef CONFIG_COMPAT
        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
        .buffer_size = has_32bit_shinfo(d) ?
                       sizeof(d->shared_info->compat) :
                       sizeof(d->shared_info->native),
#else
        .buffer_size = sizeof(*d->shared_info),
#endif

But this is secondary, as the types indeed are very unlikely to go
out of sync. What's more important is ...

> +static int load_shared_info(struct domain *d, struct domain_context *c)
> +{
> +    struct domain_shared_info_context ctxt;
> +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> +    unsigned int i;
> +    int rc;
> +
> +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> +    if ( rc )
> +        return rc;
> +
> +    if ( i ) /* expect only a single instance */
> +        return -ENXIO;
> +
> +    rc = domain_load_data(c, &ctxt, hdr_size);
> +    if ( rc )
> +        return rc;
> +
> +    if ( ctxt.buffer_size > sizeof(shared_info_t) ||
> +         (ctxt.flags & ~DOMAIN_SAVE_32BIT_SHINFO) )
> +        return -EINVAL;
> +
> +    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
> +#ifdef CONFIG_COMPAT
> +        has_32bit_shinfo(d) = true;
> +#else
> +        return -EINVAL;
> +#endif
> +
> +    rc = domain_load_data(c, d->shared_info, sizeof(shared_info_t));
> +    if ( rc )
> +        return rc;

... the still insufficient checking here. You shouldn't accept more
than sizeof(d->shared_info->compat) worth of data in the compat case
if you also don't accept more than sizeof(shared_info_t) in the
native case. To save another round trip I'll offer to make the
adjustments while committing, but patches 3 and 5 want Andrew's ack
first anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 09:45:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:45:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeF6E-0005g5-Kc; Thu, 28 May 2020 09:45:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeF6D-0005g0-T6
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:45:17 +0000
X-Inumbo-ID: efb24df2-a0c7-11ea-a7ac-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id efb24df2-a0c7-11ea-a7ac-12813bfff9fa;
 Thu, 28 May 2020 09:45:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E912EAD63;
 Thu, 28 May 2020 09:45:15 +0000 (UTC)
Subject: Re: [PATCH v2 01/14] x86/traps: Clean up printing in
 {do_reserved,fatal}_trap()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3b9156db-58e2-a9f9-088a-51131ce3c7b6@suse.com>
Date: Thu, 28 May 2020 11:45:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> For one, they render the vector in a different base.
> 
> Introduce X86_EXC_* constants and vec_name() to refer to exceptions by their
> mnemonic, which starts bringing the code/diagnostics in line with the Intel
> and AMD manuals.
> 
> Provide constants for every architecturally defined exception, even those not
> implemented by Xen yet, as do_reserved_trap() is a catch-all handler.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

As before somewhat hesitantly
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 09:48:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:48:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeF9J-0005pd-4r; Thu, 28 May 2020 09:48:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vSBr=7K=amazon.co.uk=prvs=410b74a99=pdurrant@srs-us1.protection.inumbo.net>)
 id 1jeF9H-0005pY-Dv
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:48:27 +0000
X-Inumbo-ID: 6072bfb8-a0c8-11ea-a7ac-12813bfff9fa
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6072bfb8-a0c8-11ea-a7ac-12813bfff9fa;
 Thu, 28 May 2020 09:48:26 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
 s=amazon201209; t=1590659307; x=1622195307;
 h=from:to:cc:date:message-id:references:in-reply-to:
 content-transfer-encoding:mime-version:subject;
 bh=o0UFG3pl7XEkMkP8r44DQuMwfgbM2785FkdZEEJOPWE=;
 b=Ao9qGbONXFRsSkmMHLZ7JdhqAbzoAqidd9Sq3FRulwxhwpfIePdkWrsV
 WH0xZp4gpaSaCVmnZrrDP+wjdUqRMBcMT+qQS1mIAyteqgatVI8OvDn/f
 2etyXaqx5a4FdZ6cNNXa+X+nUTD1EZFJRYBZCgPS4PEXReRz9rC/YpygF I=;
IronPort-SDR: iRib3XpA0i/UkuiWYnulzSgMRNeTU2AxONsVGfpOUd7uEDvuHZBBReGV9nXwEcKH3xcEl3Dg5c
 69T3l7IvCVYA==
X-IronPort-AV: E=Sophos;i="5.73,444,1583193600"; d="scan'208";a="46801652"
Subject: RE: [PATCH v6 4/5] common/domain: add a domain context record for
 shared_info...
Thread-Topic: [PATCH v6 4/5] common/domain: add a domain context record for
 shared_info...
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2c-397e131e.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 28 May 2020 09:48:24 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan3.pdx.amazon.com [10.170.41.166])
 by email-inbound-relay-2c-397e131e.us-west-2.amazon.com (Postfix) with ESMTPS
 id 670C4A26D5; Thu, 28 May 2020 09:48:23 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 28 May 2020 09:48:23 +0000
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 28 May 2020 09:48:21 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.006;
 Thu, 28 May 2020 09:48:21 +0000
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
Thread-Index: AQHWNE0m0J0mZmLrdk2DDG4EFRRFkKi9P/oAgAABYQA=
Date: Thu, 28 May 2020 09:48:21 +0000
Message-ID: <d1b21c8c9be24425abef57f394722c9a@EX13D32EUC003.ant.amazon.com>
References: <20200527173407.1398-1-paul@xen.org>
 <20200527173407.1398-5-paul@xen.org>
 <5589a800-a40a-5534-d2e8-09df78072b02@suse.com>
In-Reply-To: <5589a800-a40a-5534-d2e8-09df78072b02@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.166.118]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: 28 May 2020 10:42
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Durrant, Paul <pdurrant@amazon.co.uk>; Ian Jackson
> <ian.jackson@eu.citrix.com>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; George
> Dunlap <george.dunlap@citrix.com>; Julien Grall <julien@xen.org>; Stefano Stabellini
> <sstabellini@kernel.org>
> Subject: RE: [EXTERNAL] [PATCH v6 4/5] common/domain: add a domain context record for shared_info...
>
> CAUTION: This email originated from outside of the organization. Do not click links or open
> attachments unless you can confirm the sender and know the content is safe.
>
>
>
> On 27.05.2020 19:34, Paul Durrant wrote:
> > @@ -1649,6 +1650,75 @@ int continue_hypercall_on_cpu(
> >      return 0;
> >  }
> >
> > +static int save_shared_info(const struct domain *d, struct domain_context *c,
> > +                            bool dry_run)
> > +{
> > +    struct domain_shared_info_context ctxt = {
> > +#ifdef CONFIG_COMPAT
> > +        .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
> > +        .buffer_size = has_32bit_shinfo(d) ?
> > +                       sizeof(struct compat_shared_info) :
> > +                       sizeof(struct shared_info),
> > +#else
> > +        .buffer_size = sizeof(struct shared_info),
> > +#endif
>
> To prevent disconnect between the types used here and the actual
> pointer copied from, I'd have preferred
>
> #ifdef CONFIG_COMPAT
>         .flags = has_32bit_shinfo(d) ? DOMAIN_SAVE_32BIT_SHINFO : 0,
>         .buffer_size = has_32bit_shinfo(d) ?
>                        sizeof(d->shared_info->compat) :
>                        sizeof(d->shared_info->native),
> #else
>         .buffer_size = sizeof(*d->shared_info),
> #endif
>
> But this is secondary, as the types indeed are very unlikely to go
> out of sync. What's more important is ...
>
> > +static int load_shared_info(struct domain *d, struct domain_context *c)
> > +{
> > +    struct domain_shared_info_context ctxt;
> > +    size_t hdr_size = offsetof(typeof(ctxt), buffer);
> > +    unsigned int i;
> > +    int rc;
> > +
> > +    rc = DOMAIN_LOAD_BEGIN(SHARED_INFO, c, &i);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( i ) /* expect only a single instance */
> > +        return -ENXIO;
> > +
> > +    rc = domain_load_data(c, &ctxt, hdr_size);
> > +    if ( rc )
> > +        return rc;
> > +
> > +    if ( ctxt.buffer_size > sizeof(shared_info_t) ||
> > +         (ctxt.flags & ~DOMAIN_SAVE_32BIT_SHINFO) )
> > +        return -EINVAL;
> > +
> > +    if ( ctxt.flags & DOMAIN_SAVE_32BIT_SHINFO )
> > +#ifdef CONFIG_COMPAT
> > +        has_32bit_shinfo(d) = true;
> > +#else
> > +        return -EINVAL;
> > +#endif
> > +
> > +    rc = domain_load_data(c, d->shared_info, sizeof(shared_info_t));
> > +    if ( rc )
> > +        return rc;
>
> ... the still insufficient checking here. You shouldn't accept more
> than sizeof(d->shared_info->compat) worth of data in the compat case
> if you also don't accept more than sizeof(shared_info_t) in the
> native case. To save another round trip I'll offer to make the
> adjustments while committing, but patches 3 and 5 want Andrew's ack
> first anyway.

Ok, thanks.

  Paul

>
> Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 09:50:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 09:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeFB7-0006ZP-HL; Thu, 28 May 2020 09:50:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeFB5-0006ZF-UH
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 09:50:19 +0000
X-Inumbo-ID: a398dfde-a0c8-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a398dfde-a0c8-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 09:50:19 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A87A7AD3A;
 Thu, 28 May 2020 09:50:17 +0000 (UTC)
Subject: Re: [PATCH v2 02/14] x86/traps: Factor out extable_fixup() and make
 printing consistent
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9cb12ae3-ef09-3b81-caef-0b1d61426a42@suse.com>
Date: Thu, 28 May 2020 11:50:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> UD faults never had any diagnostics printed, and the others were inconsistent.
> 
> Don't use dprintk() because identifying traps.c is actively unhelpful in the
> message, as it is the location of the fixup, not the fault.  Use the new
> vec_name() infrastructure, rather than leaving raw numbers for the log.
> 
>   (XEN) Running stub recovery selftests...
>   (XEN) Fixup #UD[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
>   (XEN) Fixup #GP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6
>   (XEN) Fixup #SS[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
>   (XEN) Fixup #BP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

As before
Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit I realize I have one more suggestion:

> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -772,10 +772,31 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>            trapnr, vec_name(trapnr), regs->error_code);
>  }
>  
> +static bool extable_fixup(struct cpu_user_regs *regs, bool print)
> +{
> +    unsigned long fixup = search_exception_table(regs);
> +
> +    if ( unlikely(fixup == 0) )
> +        return false;
> +
> +    /*
> +     * Don't use dprintk() because the __FILE__ reference is unhelpful.
> +     * Can currently be triggered by guests.  Make sure we ratelimit.
> +     */
> +    if ( IS_ENABLED(CONFIG_DEBUG) && print )

How about pulling the IS_ENABLED(CONFIG_DEBUG) into the call sites
currently passing "true"?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 10:25:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 10:25:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeFjG-0000s5-EH; Thu, 28 May 2020 10:25:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeFjF-0000s0-Eh
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 10:25:37 +0000
X-Inumbo-ID: 91953bfd-a0cd-11ea-a7ae-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 91953bfd-a0cd-11ea-a7ae-12813bfff9fa;
 Thu, 28 May 2020 10:25:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 05692AC20;
 Thu, 28 May 2020 10:25:34 +0000 (UTC)
Subject: Re: [PATCH v2 03/14] x86/shstk: Introduce Supervisor Shadow Stack
 support
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4f535d4c-b3b3-fe5b-8b57-af736dc0a360@suse.com>
Date: Thu, 28 May 2020 12:25:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> Introduce CONFIG_HAS_AS_CET to determine whether CET instructions are
> supported in the assembler, and CONFIG_XEN_SHSTK as the main build option.
> 
> Introduce cet={no-,}shstk for a user to select whether or not to use shadow
> stacks at runtime, and X86_FEATURE_XEN_SHSTK to determine Xen's overall
> enablement of shadow stacks.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> LLVM 6 supports CET-SS instructions while only LLVM 7 supports CET-IBT
> instructions.  We'd need to split HAS_AS_CET into two if we want to support
> supervisor shadow stacks with LLVM 6.  (This demonstrates exactly why picking
> a handful of instructions to test is the right approach.)

In this case I agree with splitting; I wasn't aware clang implemented
the respective insns piecemeal.
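
A sketch of such a split (hypothetical symbol names, mirroring the
HAS_AS_CET probe quoted below; the exact insn selection would need
checking against the respective assemblers) might look like:

```kconfig
config HAS_AS_CET_SS
	# binutils >= 2.29 and LLVM >= 6
	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy)

config HAS_AS_CET_IBT
	# binutils >= 2.29 and LLVM >= 7
	def_bool $(as-instr,endbr64)
```

XEN_SHSTK would then depend only on HAS_AS_CET_SS, leaving IBT support
gated separately.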

> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -270,6 +270,23 @@ and not running softirqs. Reduce this if softirqs are not being run frequently
>  enough. Setting this to a high value may cause boot failure, particularly if
>  the NMI watchdog is also enabled.
>  
> +### cet
> +    = List of [ shstk=<bool> ]
> +
> +    Applicability: x86
> +
> +Controls for the use of Control-flow Enforcement Technology.  CET is group of

Nit: "... is a group of ..."

> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -34,6 +34,10 @@ config ARCH_DEFCONFIG
>  config INDIRECT_THUNK
>  	def_bool $(cc-option,-mindirect-branch-register)
>  
> +config HAS_AS_CET
> +	# binutils >= 2.29 and LLVM >= 7
> +	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)

So you put me in a really awkward position: I'd really like to see
this series go in for 4.14, yet I've previously indicated I want the
underlying concept to first be agreed upon, before any uses get
introduced.

A fundamental problem exists with this, at least as long as (a) more of
Anthony's series hasn't been committed and (b) we re-build Xen upon
installing (as root), even if it was fully built (as non-root) and
is ready without any re-building. Each of these aspects means that
what you've configured and built may not be what gets installed, by
virtue of the tool chains differing. (a) in addition may lead to
the install-time rebuild actually going wrong, due to there being
dependency tracking issues on at least {xen,efi}.lds (which I
noticed in a different context yesterday).

> --- a/xen/scripts/Kconfig.include
> +++ b/xen/scripts/Kconfig.include
> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>  # Return y if the linker supports <flag>, n otherwise
>  ld-option = $(success,$(LD) -v $(1))
>  
> +# $(as-instr,<instr>)
> +# Return y if the assembler supports <instr>, n otherwise
> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)

Is this actually checking the right thing in the clang case? I.e.
doesn't "-x assembler" make clang invoke the system assembler rather
than use its integrated one, whereas you need the insns to be
recognized by the integrated assembler unless we find a need to pass
-no-integrated-as?
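For anyone wanting to experiment, the helper's behaviour can be approximated
outside Kconfig with a small standalone sketch along these lines (assuming a
`cc` driver on PATH; which assembler actually gets exercised depends on the
driver, which is exactly the question above):

```shell
#!/bin/sh
# Standalone approximation of the as-instr Kconfig helper, for
# experimenting outside the build system.  CC defaults to "cc";
# prints "y" if the toolchain accepts the instruction(s), "n"
# otherwise (including when no compiler is available at all).
as_instr() {
    printf "%b\n" "$1" | ${CC:-cc} -c -x assembler -o /dev/null - \
        >/dev/null 2>&1 && echo y || echo n
}

as_instr "nop"
as_instr "wrssq %rax,0;setssbsy;endbr64"
```

The second probe only prints "y" with an assembler new enough for CET
(binutils >= 2.29 per the Kconfig comment above).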

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 10:27:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 10:27:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeFkc-0000xN-Pg; Thu, 28 May 2020 10:27:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNTM=7K=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jeFkb-0000xG-6G
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 10:27:01 +0000
X-Inumbo-ID: c3881828-a0cd-11ea-9dbe-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3881828-a0cd-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 10:27:00 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cSKd4Y6+AIAY/kxhWWVdLKq2BbQ5R7Q1hnlWUyvcGTtuOFAM6q8s0pfJjUh/tV2wSk4hhZPwh9
 E1SEWsEtJF5kyYl1YdBu+QUW/3Khrvs3Qa9QsW08cxj41FbGVYZut6t+rEGqM7TClJ6phuBE9s
 0qjbg5BvcSsU7NCS7azKx7jUDlXZte5zXc91GTHAixTQ5pa2QuD15EP/JY17DLw6RpztWOaVHW
 cZtUKT18dsGi0HqiOabVZi3k6m6gXfMwYvz8bgwJV8UR48zMfxWcIXfCKoNxZlrIbtqTheBVT6
 RXU=
X-SBRS: 2.7
X-MesageID: 18992937
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,444,1583211600"; d="scan'208";a="18992937"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 0/2] osstest: update FreeBSD guest tests
Date: Thu, 28 May 2020 12:26:46 +0200
Message-ID: <20200528102648.8724-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: ian.jackson@eu.citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

The following series adds FreeBSD 11 and 12 guests tests to osstest.
ATM this is only tested on amd64, since the i386 versions had a bug.

The result can be seen at:

http://logs.test-lab.xenproject.org/osstest/logs/150428/

Note this flight has been generated without using the freebsd-{11,12}
hostflags and with the following env variable set:

OSSTEST_JOBS_ONLY=^(.*)freebsd(.*)$,build-(amd64|i386),build-(amd64|i386)-pvops

This was done in order to limit the number of tests run. The runvar
difference can be seen in patch #2.
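Purely as an illustration (not osstest code, and assuming the semantics are
"generate a job only if its name matches one of the comma-separated
regexes"), the effect of the OSSTEST_JOBS_ONLY filter above can be
approximated like this:

```shell
#!/bin/sh
# Sketch of the OSSTEST_JOBS_ONLY job filter (assumed semantics).
OSSTEST_JOBS_ONLY='^(.*)freebsd(.*)$,build-(amd64|i386),build-(amd64|i386)-pvops'

job_selected() {
    # Join the comma-separated patterns into a single ERE alternation.
    pats=$(printf '%s' "$OSSTEST_JOBS_ONLY" | tr ',' '|')
    printf '%s\n' "$1" | grep -Eq "$pats"
}

job_selected test-amd64-amd64-qemuu-freebsd12-amd64 && echo kept
job_selected test-amd64-amd64-xl || echo dropped
```

So the FreeBSD test jobs and the build jobs they depend on are kept, while
unrelated test jobs are dropped.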

Roger Pau Monne (2):
  freebsd10: refactor code to generate jobs
  freebsd: add FreeBSD 11 and 12 guest jobs

 make-flight | 26 +++++++++++++++++++-------
 1 file changed, 19 insertions(+), 7 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 10:27:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 10:27:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeFkh-0000yC-6t; Thu, 28 May 2020 10:27:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNTM=7K=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jeFkg-0000xw-0o
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 10:27:06 +0000
X-Inumbo-ID: c4a47ddc-a0cd-11ea-81bc-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c4a47ddc-a0cd-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 10:27:02 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: fQkIDVo0sLcVE7teoc9gsdNSHk0LZJYVkmGtBwTinVq4F+B1CnCB+KP2sR8tc1D9HVFfThXuOW
 hmHAiu7yrL7aRKo2Va/oFZ3J3jNq/w25Ay1Y4M6GwzYvOKcu/NNhtg1NI6ngAWUKR8QbmHb9Su
 VWtmXg75HJpYYzhOICrTfT7Nsffj2TqsECzh9Xzc6QmpgmL/tk8ejiQD3HJK2lWZx86ClGWqU2
 PTcZxTFdLCp8nBJQkeJ+7489ZXReTHarvqc53v2WlQNNdTbrlPkS72BePS50YqLzb0qX/N4r2m
 k6o=
X-SBRS: 2.7
X-MesageID: 18936375
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,444,1583211600"; d="scan'208";a="18936375"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 1/2] freebsd10: refactor code to generate jobs
Date: Thu, 28 May 2020 12:26:47 +0200
Message-ID: <20200528102648.8724-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528102648.8724-1-roger.pau@citrix.com>
References: <20200528102648.8724-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: ian.jackson@eu.citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Slightly adjust the code that generates the freebsd10 jobs in order to
avoid exiting early from the function if the dom0 arch is different
from i386. No functional change expected: the freebsd10 jobs are still
limited to running on an i386 dom0, and no runvar diff is created as
part of this change.

This is a preparatory change for adding new FreeBSD 11 and 12 jobs
that will instead use an amd64 dom0.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 make-flight | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/make-flight b/make-flight
index a361bcb1..af74bb4e 100755
--- a/make-flight
+++ b/make-flight
@@ -230,19 +230,18 @@ test_matrix_branch_filter_callback () {
 
 do_freebsd_tests () {
 
-  if [ $xenarch != amd64 -o $dom0arch != i386 -o "$kern" != "" ]; then
+  if [ $xenarch != amd64 -o "$kern" != "" ]; then
     return
   fi
 
-  for freebsdarch in amd64 i386; do
-
- job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
-                        test-freebsd xl $xenarch $dom0arch \
-                        freebsd_arch=$freebsdarch \
+  if [ $dom0arch == i386 ]; then
+    for freebsdarch in amd64 i386; do
+      job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd10-$freebsdarch \
+                      test-freebsd xl $xenarch $dom0arch freebsd_arch=$freebsdarch \
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.1-CUSTOM-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20150525.raw.xz} \
-                        all_hostflags=$most_hostflags,freebsd-10
-
-  done
+                      all_hostflags=$most_hostflags,freebsd-10
+    done
+  fi
 }
 
 do_hvm_winxp_tests () {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 10:27:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 10:27:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeFkl-0000zF-GG; Thu, 28 May 2020 10:27:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNTM=7K=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jeFkl-0000z4-0Q
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 10:27:11 +0000
X-Inumbo-ID: c5a7a678-a0cd-11ea-81bc-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c5a7a678-a0cd-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 10:27:03 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: Ib42O0XlrVzhAxmWHsx9scWP7f3LZbKckKwtcVmGNUiPu4YcwZ3KLHX6UcvZyFaCI7y6zkrMSs
 K9X+fL1F7IuqBPMso2V6zhvkvjcqR1+4U8qxEPlecyqQZvqqG+RIwAxyv28hBVH4QesbfeFNMr
 vt7B4FvxDxOR+X3gNj46RZrCbAmKSzHaiBLnjWcI2nsHVwZNA4Zm4FI6Hi3lP14qjTjbEEx1BF
 42zAHYaHosPX3R0KdKTW1HwjIJ+XyjLi0ISQ9wJ4mdI/yjreW6vuHABECilNLLRlJYElBtvebJ
 V5M=
X-SBRS: 2.7
X-MesageID: 19379557
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,444,1583211600"; d="scan'208";a="19379557"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH 2/2] freebsd: add FreeBSD 11 and 12 guest jobs
Date: Thu, 28 May 2020 12:26:48 +0200
Message-ID: <20200528102648.8724-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528102648.8724-1-roger.pau@citrix.com>
References: <20200528102648.8724-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: ian.jackson@eu.citrix.com, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Those are based on the upstream images and are run on an amd64 dom0.
The runvar difference is:

+test-amd64-amd64-qemuu-freebsd11-amd64 all_host_di_version 2020-02-10
+test-amd64-amd64-qemuu-freebsd12-amd64 all_host_di_version 2020-02-10
+test-amd64-amd64-qemuu-freebsd11-amd64 all_host_suite      stretch
+test-amd64-amd64-qemuu-freebsd12-amd64 all_host_suite      stretch
+test-amd64-amd64-qemuu-freebsd11-amd64 all_hostflags       arch-amd64,arch-xen-amd64,suite-stretch,purpose-test,freebsd-11
+test-amd64-amd64-qemuu-freebsd12-amd64 all_hostflags       arch-amd64,arch-xen-amd64,suite-stretch,purpose-test,freebsd-12
+test-amd64-amd64-qemuu-freebsd11-amd64 arch                amd64
+test-amd64-amd64-qemuu-freebsd12-amd64 arch                amd64
+test-amd64-amd64-qemuu-freebsd11-amd64 buildjob            build-amd64
+test-amd64-amd64-qemuu-freebsd12-amd64 buildjob            build-amd64
+test-amd64-amd64-qemuu-freebsd11-amd64 freebsd_arch        amd64
+test-amd64-amd64-qemuu-freebsd12-amd64 freebsd_arch        amd64
+test-amd64-amd64-qemuu-freebsd11-amd64 freebsd_image       FreeBSD-11.3-RELEASE-amd64.raw.xz
+test-amd64-amd64-qemuu-freebsd12-amd64 freebsd_image       FreeBSD-12.1-RELEASE-amd64.raw.xz
+test-amd64-amd64-qemuu-freebsd11-amd64 kernbuildjob        build-amd64-pvops
+test-amd64-amd64-qemuu-freebsd12-amd64 kernbuildjob        build-amd64-pvops
+test-amd64-amd64-qemuu-freebsd11-amd64 kernkind            pvops
+test-amd64-amd64-qemuu-freebsd12-amd64 kernkind            pvops
+test-amd64-amd64-qemuu-freebsd11-amd64 toolstack           xl
+test-amd64-amd64-qemuu-freebsd12-amd64 toolstack           xl
+test-amd64-amd64-qemuu-freebsd11-amd64 xenbuildjob         build-amd64
+test-amd64-amd64-qemuu-freebsd12-amd64 xenbuildjob         build-amd64

Note that only amd64 versions are tested at the moment; i386 had some
bugs that are being fixed so that newer releases can be tested.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Note this requires adding the freebsd-{11,12} hostflags to amd64
hosts.
---
 make-flight | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/make-flight b/make-flight
index af74bb4e..48dc927c 100755
--- a/make-flight
+++ b/make-flight
@@ -241,7 +241,20 @@ do_freebsd_tests () {
  freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-10.1-CUSTOM-}$freebsdarch${FREEBSD_IMAGE_SUFFIX--20150525.raw.xz} \
                       all_hostflags=$most_hostflags,freebsd-10
     done
+    return
   fi
+
+  # NB: limit to amd64 ATM due to bugs with the i386 versions (11.3 and 12.1).
+  for freebsdarch in amd64; do
+    job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd11-$freebsdarch \
+                    test-freebsd xl $xenarch $dom0arch freebsd_arch=$freebsdarch \
+ freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-11.3-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX-.raw.xz} \
+                    all_hostflags=$most_hostflags,freebsd-11
+    job_create_test test-$xenarch$kern-$dom0arch$qemuu_suffix-freebsd12-$freebsdarch \
+                    test-freebsd xl $xenarch $dom0arch freebsd_arch=$freebsdarch \
+ freebsd_image=${FREEBSD_IMAGE_PREFIX-FreeBSD-12.1-RELEASE-}$freebsdarch${FREEBSD_IMAGE_SUFFIX-.raw.xz} \
+                    all_hostflags=$most_hostflags,freebsd-12
+  done
 }
 
 do_hvm_winxp_tests () {
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 10:56:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 10:56:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeGCW-0003ez-UO; Thu, 28 May 2020 10:55:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeGCV-0003eu-EF
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 10:55:51 +0000
X-Inumbo-ID: c6ed72e8-a0d1-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6ed72e8-a0d1-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 10:55:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=v6m8frk5lRpk1+uIcIit3OXenXrRV6Wa6MLpIZKRVNI=; b=XaMrjNO51x32okc7AAPca5cT4
 dVJ966xEiT1n7E/6txKmtN7/Z/jiWi2rbDAnga4BjZL1zaYH8kREcq0WjefAHi7SJEN2Af36LxyM1
 lxruVFztOUm8ow3CaXczzcd4fG3u5D2RKFVgdJYycixq+z5i+JrWN3dkxEpOuNRBoO8Kc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeGCN-00011B-95; Thu, 28 May 2020 10:55:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeGCN-0005r5-1S; Thu, 28 May 2020 10:55:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeGCM-00023E-Vg; Thu, 28 May 2020 10:55:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150413-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150413: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=b0c3ba31be3e45a130e13b278cf3b90f69bda6f6
X-Osstest-Versions-That: linux=444fc5cde64330661bf59944c43844e7d4c2ccd8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 10:55:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150413 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150413/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 150390

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150390
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150390
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150390
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150390
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150390
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150390
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150390
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150390
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                b0c3ba31be3e45a130e13b278cf3b90f69bda6f6
baseline version:
 linux                444fc5cde64330661bf59944c43844e7d4c2ccd8

Last test of basis   150390  2020-05-27 00:39:33 Z    1 days
Testing same since   150413  2020-05-27 18:39:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Amir Goldstein <amir73il@gmail.com>
  Daniel Xu <dxu@dxuuu.xyz>
  Eric W. Biederman <ebiederm@xmission.com>
  Jan Kara <jack@suse.cz>
  Linus Torvalds <torvalds@linux-foundation.org>
  Odin Ugedal <odin@ugedal.com>
  Roman Gushchin <guro@fb.com>
  Tejun Heo <tj@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit b0c3ba31be3e45a130e13b278cf3b90f69bda6f6
Merge: 3301f6ae2d4c f17936993af0
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed May 27 11:03:24 2020 -0700

    Merge tag 'fsnotify_for_v5.7-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs
    
    Pull fanotify FAN_DIR_MODIFY disabling from Jan Kara:
     "A single patch that disables FAN_DIR_MODIFY support that was merged in
      this merge window.
    
      When discussing further functionality we realized it may be more
      logical to guard it with a feature flag or to call things slightly
      differently (or maybe not) so let's not set the API in stone for now."
    
    * tag 'fsnotify_for_v5.7-rc8' of git://git.kernel.org/pub/scm/linux/kernel/git/jack/linux-fs:
      fanotify: turn off support for FAN_DIR_MODIFY

commit 3301f6ae2d4cb396ae0c103329a5680d14f7a5c6
Merge: 006f38a1c3dc eec8fd0277e3
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed May 27 10:58:19 2020 -0700

    Merge branch 'for-5.7-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup
    
    Pull cgroup fixes from Tejun Heo:
    
     - Reverted stricter synchronization for cgroup recursive stats which
       was prepping it for event counter usage which never got merged. The
       change was causing performance regressions in some cases.
    
     - Restore bpf-based device-cgroup operation even when cgroup1 device
       cgroup is disabled.
    
     - An out-param init fix.
    
    * 'for-5.7-fixes' of git://git.kernel.org/pub/scm/linux/kernel/git/tj/cgroup:
      device_cgroup: Cleanup cgroup eBPF device filter code
      xattr: fix uninitialized out-param
      Revert "cgroup: Add memory barriers to plug cgroup_rstat_updated() race window"

commit f17936993af054b16725d0c54baa58115f9e052a
Author: Amir Goldstein <amir73il@gmail.com>
Date:   Wed May 27 15:54:55 2020 +0300

    fanotify: turn off support for FAN_DIR_MODIFY
    
    FAN_DIR_MODIFY has been enabled by commit 44d705b0370b ("fanotify:
    report name info for FAN_DIR_MODIFY event") in 5.7-rc1. Now we are
    planning further extensions to the fanotify API and during that we
    realized that FAN_DIR_MODIFY may need to behave slightly differently to be
    more consistent with the extensions we plan. So until we finalize these
    extensions, let's not bind our hands with exposing FAN_DIR_MODIFY to
    userland.
    
    Signed-off-by: Amir Goldstein <amir73il@gmail.com>
    Signed-off-by: Jan Kara <jack@suse.cz>

commit 006f38a1c3dcbe237a75e725fe457bd59cb489c4
Merge: 444fc5cde643 a4ae32c71fe9
Author: Linus Torvalds <torvalds@linux-foundation.org>
Date:   Wed May 27 09:53:25 2020 -0700

    Merge branch 'exec-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace
    
    Pull execve fix from Eric Biederman:
     "While working on my exec cleanups I found a bug in exec that winds up
      miscomputing the ambient credentials during exec. Andy appears to have
      been confused as to why credentials are computed for both the
      script and the interpreter
    
      From the original patch description:
    
       [3] Linux very confusingly processes both the script and the
           interpreter if applicable, for reasons that elude me. The results
           from thinking about a script's file capabilities and/or setuid
           bits are mostly discarded.
    
      The only value in struct cred that gets changed in cap_bprm_set_creds
      that I could find that might persist between the script and the
      interpreter was cap_ambient. Which is fixed with this trivial change"
    
    * 'exec-linus' of git://git.kernel.org/pub/scm/linux/kernel/git/ebiederm/user-namespace:
      exec: Always set cap_ambient in cap_bprm_set_creds

commit a4ae32c71fe90794127b32d26d7ad795813b502e
Author: Eric W. Biederman <ebiederm@xmission.com>
Date:   Mon May 25 12:56:15 2020 -0500

    exec: Always set cap_ambient in cap_bprm_set_creds
    
    An invariant of cap_bprm_set_creds is that every field in the new cred
    structure that cap_bprm_set_creds might set needs to be set every
    time to ensure the field does not get a stale value.
    
    The field cap_ambient is not set every time cap_bprm_set_creds is
    called, which means that if there is a suid or sgid script with an
    interpreter that has neither the suid nor the sgid bits set, the
    interpreter should be able to accept ambient credentials.
    Unfortunately, because cap_ambient is not reset to its original value,
    the interpreter cannot accept ambient credentials.
    
    Given that the ambient capability set is expected to be controlled by
    the caller, I don't think this is particularly serious.  But it is
    definitely worth fixing so the code works correctly.
    
    I have tested to verify that my reading of the code is correct: the
    interpreter of an sgid script can receive ambient capabilities with this
    change and cannot receive ambient capabilities without it.
    
    Cc: stable@vger.kernel.org
    Cc: Andy Lutomirski <luto@kernel.org>
    Fixes: 58319057b784 ("capabilities: ambient capabilities")
    Signed-off-by: "Eric W. Biederman" <ebiederm@xmission.com>

commit eec8fd0277e37cf447b88c6be181e81df867bcf1
Author: Odin Ugedal <odin@ugedal.com>
Date:   Fri Apr 3 19:55:28 2020 +0200

    device_cgroup: Cleanup cgroup eBPF device filter code
    
    The original cgroup v2 eBPF code for filtering device access made it
    possible to compile with CONFIG_CGROUP_DEVICE=n and still use the eBPF
    filtering. Commit 4b7d4d453fc4 ("device_cgroup: Export
    devcgroup_check_permission") reverted this, making CONFIG_CGROUP_DEVICE=y
    required.
    
    Since the device filtering (and all the docs) for cgroup v2 is no longer
    a "device controller" like it was in v1, someone might compile their
    kernel with CONFIG_CGROUP_DEVICE=n. Then (for Linux 5.5+) the eBPF
    filter will not be invoked, and all processes will be allowed access
    to all devices, no matter what the eBPF filter says.
    
    Signed-off-by: Odin Ugedal <odin@ugedal.com>
    Acked-by: Roman Gushchin <guro@fb.com>
    Signed-off-by: Tejun Heo <tj@kernel.org>

commit 772b3140669246e1ab32392c490d338e2eb7b803
Author: Daniel Xu <dxu@dxuuu.xyz>
Date:   Wed Apr 8 23:27:29 2020 -0700

    xattr: fix uninitialized out-param
    
    `removed_size` isn't correctly initialized (as the doc comment
    suggests) on memory allocation failures. Fix by moving the initialization
    up a bit.
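
    The pattern being fixed can be illustrated with a minimal standalone C
    sketch (a hypothetical remove_entry() helper, not the actual kernfs/xattr
    code): an out-parameter must be initialized before any early error
    return, otherwise a caller that ignores the error reads a stale value.

    ```c
    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical helper: reports the number of bytes removed through
     * *removed_size. Initializing the out-param *before* the first error
     * return is the essence of the fix described above. */
    static int remove_entry(size_t len, size_t *removed_size)
    {
        *removed_size = 0;      /* initialize up front, before any return */

        void *buf = malloc(len);
        if (!buf)
            return -1;          /* out-param is still well-defined here */

        /* ... do the actual work ... */
        *removed_size = len;
        free(buf);
        return 0;
    }

    int main(void)
    {
        size_t removed = 12345; /* deliberately stale value */
        if (remove_entry(16, &removed) == 0)
            printf("removed %zu bytes\n", removed);
        return 0;
    }
    ```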
    
    Fixes: 0c47383ba3bd ("kernfs: Add option to enable user xattrs")
    Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
    Signed-off-by: Daniel Xu <dxu@dxuuu.xyz>
    Signed-off-by: Tejun Heo <tj@kernel.org>

commit d8ef4b38cb69d907f9b0e889c44d05fc0f890977
Author: Tejun Heo <tj@kernel.org>
Date:   Thu Apr 9 14:55:35 2020 -0400

    Revert "cgroup: Add memory barriers to plug cgroup_rstat_updated() race window"
    
    This reverts commit 9a9e97b2f1f2 ("cgroup: Add memory barriers to plug
    cgroup_rstat_updated() race window").
    
    The commit was added in anticipation of memcg rstat conversion which needed
    synchronous accounting for the event counters (e.g. oom kill count). However,
    the conversion didn't get merged due to percpu memory overhead concern which
    couldn't be addressed at the time.
    
    Unfortunately, the patch's addition of smp_mb() to cgroup_rstat_updated()
    meant that every scheduling event now had to go through an additional full
    barrier, and Mel Gorman noticed it as a 1% regression in the netperf
    UDP_STREAM test.
    
    There's no need to have this barrier in tree now, and even if we need
    synchronous accounting in the future, the right thing to do is to separate
    that out into its own function so that hot paths which don't care about
    synchronous behavior don't have to pay the overhead of the full barrier.
    Let's revert.
    
    Signed-off-by: Tejun Heo <tj@kernel.org>
    Reported-by: Mel Gorman <mgorman@techsingularity.net>
    Link: http://lkml.kernel.org/r/20200409154413.GK3818@techsingularity.net
    Cc: v4.18+


From xen-devel-bounces@lists.xenproject.org Thu May 28 11:32:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 11:32:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeGle-00070v-1R; Thu, 28 May 2020 11:32:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oNcS=7K=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jeGlc-00070q-8H
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 11:32:08 +0000
X-Inumbo-ID: dbe773a6-a0d6-11ea-a7b0-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbe773a6-a0d6-11ea-a7b0-12813bfff9fa;
 Thu, 28 May 2020 11:32:06 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: YDMVmpaopbqOp2DZITwIx9RAzo4FFGonV6hwe07867BKdayL0WwFDqi4WK4SBEwAVfsTCJI1Az
 RGUHkO7ywIQo79iL/ExC6Q0IMS3AQOX4K/zjOd5+jrXS+0TUIQbsGkmfKSqXD8C0hgT/e/VP9Q
 Us5C6wYgGJhbPcIlSVhyyRfFLqUEqEldShK3fh8h6gC6b7kKpMVPODKR8a0EBlB/UaNXTYUVBN
 JDmcKkiTmENC9vOxTTtnY+5kCMt8e5roGK40iwS0NBwPxA9vwzEWuS51RZWUa5WWSx1/mbT/vi
 LPQ=
X-SBRS: 2.7
X-MesageID: 18940914
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,444,1583211600"; d="scan'208";a="18940914"
From: George Dunlap <George.Dunlap@citrix.com>
To: Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 3/5] automation/archlinux: Add 32-bit glibc headers
Thread-Topic: [PATCH v2 3/5] automation/archlinux: Add 32-bit glibc headers
Thread-Index: AQHWM6tSHP0e+i+tl0C42vBSo8ubOai7nmMAgAAM6QCAAZMLgA==
Date: Thu, 28 May 2020 11:32:02 +0000
Message-ID: <CF50B50F-BDC0-4290-A606-33927CE86FE9@citrix.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
 <20200526221612.900922-4-george.dunlap@citrix.com>
 <20200527104316.GH2105@perard.uk.xensource.com>
 <20200527112928.72amqcojenrz2a46@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
In-Reply-To: <20200527112928.72amqcojenrz2a46@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <06163BBAE935194791B452966CCC8E22@citrix.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Doug
 Goldstein <cardoe@cardoe.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 27, 2020, at 12:29 PM, Wei Liu <wl@xen.org> wrote:
> 
> On Wed, May 27, 2020 at 11:43:16AM +0100, Anthony PERARD wrote:
>> On Tue, May 26, 2020 at 11:16:10PM +0100, George Dunlap wrote:
>>> This fixes the following build error in hvmloader:
>>> 
>>> usr/include/gnu/stubs.h:7:11: fatal error: gnu/stubs-32.h: No such file or directory
>>> 
>>> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
>>> ---
>>> automation/build/archlinux/current.dockerfile | 1 +
>>> 1 file changed, 1 insertion(+)
>>> 
>>> diff --git a/automation/build/archlinux/current.dockerfile b/automation/build/archlinux/current.dockerfile
>>> index 9af5d66afc..5095de65b8 100644
>>> --- a/automation/build/archlinux/current.dockerfile
>>> +++ b/automation/build/archlinux/current.dockerfile
>>> @@ -19,6 +19,7 @@ RUN pacman -S --refresh --sysupgrade --noconfirm --noprogressbar --needed \
>>>         iasl \
>>>         inetutils \
>>>         iproute \
>>> +        lib32-glibc \
>>>         libaio \
>>>         libcacard \
>>>         libgl \
>> 
>> Acked-by: Anthony PERARD <anthony.perard@citrix.com>
> 
> All automation patches:
> 
> Acked-by: Wei Liu <wl@xen.org>
> 
> Anthony, can you generate and push the new images? Thanks.

These are checked in now.

BTW it looks like there may be some other compilation issues updating the archlinux image.  I tested the minimum configuration required to get the golang bindings to build; but a full build fails with other errors I haven’t had time to diagnose yet.

 -George


From xen-devel-bounces@lists.xenproject.org Thu May 28 11:45:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 11:45:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeGyM-0007y4-8T; Thu, 28 May 2020 11:45:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeGyK-0007xz-Rv
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 11:45:16 +0000
X-Inumbo-ID: aff29936-a0d8-11ea-a7b3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aff29936-a0d8-11ea-a7b3-12813bfff9fa;
 Thu, 28 May 2020 11:45:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AXMi7nxOrpQ0+N94N7YJpr5nNG8DdrjWqZIDjotIFYc=; b=do8UJbs4FXw1nGIuZJrnBUEpH
 Eu9daPBjn/jE3jBUAu581B5nrjR60j4ycERp2S37HPWX1kr9v7QxmQ5OUngt5f+AtRvw01WcxhVX8
 tz4E3UiyxtlmCed+/AsjGszb1JlY37BoljXZnzay5KGyH5Nf7A0FoZzUzPW9rFNgH81Vg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeGyE-00024V-TW; Thu, 28 May 2020 11:45:10 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeGyE-0007T6-EP; Thu, 28 May 2020 11:45:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeGyE-0001bd-Di; Thu, 28 May 2020 11:45:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150419-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150419: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=55029d93150e33d70b02b6de2b899c05054c5d3a
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 11:45:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150419 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150419/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              55029d93150e33d70b02b6de2b899c05054c5d3a
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  132 days
Failing since        146211  2020-01-18 04:18:52 Z  131 days  122 attempts
Testing same since   150419  2020-05-28 04:19:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19574 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 28 11:48:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 11:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeH14-00085V-Mx; Thu, 28 May 2020 11:48:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jtjf=7K=xen.org=paul@srs-us1.protection.inumbo.net>)
 id 1jeH13-00085P-Vx
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 11:48:06 +0000
X-Inumbo-ID: 177a73d1-a0d9-11ea-a7b3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 177a73d1-a0d9-11ea-a7b3-12813bfff9fa;
 Thu, 28 May 2020 11:48:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jYFHpVCBp2KPyT07SJt1PsTQ64YTGLrO0WCrYprMmDI=; b=6JuK3sKqeektckDMmR+apcdh9L
 QmAQZ6+XfRSWIERqkfJwMPqbDVdaVAAx+eF7BvizAW3uslTcSD9zivivQuZj7NaH3zh/ngYfH4RLZ
 ASry70kAj6fGqxMfapm8bqIIb3gN+woKh1dAtg+7p2HNMSZTvyfjchl6D5uFBk7zLM8A=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jeH12-000298-Kk; Thu, 28 May 2020 11:48:04 +0000
Received: from [54.239.6.188] (helo=CBG-R90WXYV0.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <paul@xen.org>)
 id 1jeH12-0007xu-Aa; Thu, 28 May 2020 11:48:04 +0000
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] libxl: stop libxl_domain_info() consuming massive amounts of
 stack
Date: Thu, 28 May 2020 12:48:01 +0100
Message-Id: <20200528114801.20241-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently an array of 1024 xc_domaininfo_t is declared on the stack. That
alone consumes ~112k. Since libxl_domain_info() creates a new gc, this patch
simply uses it to allocate the array instead.

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

This is small and IMO it would be nice to have this in 4.14 but I'd like an
opinion from a maintainer too.
---
 tools/libxl/libxl_domain.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index fef2cd4e13..c07ec8fd3a 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -314,11 +314,13 @@ libxl_dominfo * libxl_list_domain(libxl_ctx *ctx, int *nb_domain_out)
 {
     libxl_dominfo *ptr = NULL;
     int i, ret;
-    xc_domaininfo_t info[1024];
+    xc_domaininfo_t *info;
     int size = 0;
     uint32_t domid = 0;
     GC_INIT(ctx);
 
+    info = libxl__calloc(gc, 1024, sizeof(*info));
+
     while ((ret = xc_domain_getinfolist(ctx->xch, domid, 1024, info)) > 0) {
         ptr = libxl__realloc(NOGC, ptr, (size + ret) * sizeof(libxl_dominfo));
         for (i = 0; i < ret; i++) {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 28 12:04:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 12:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeHG7-0001Ls-8N; Thu, 28 May 2020 12:03:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeHG6-0001Lh-58
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 12:03:38 +0000
X-Inumbo-ID: 43070c50-a0db-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43070c50-a0db-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 12:03:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 12E23ADCF;
 Thu, 28 May 2020 12:03:35 +0000 (UTC)
Subject: Re: [PATCH v2 04/14] x86/traps: Implement #CP handler and extend #PF
 for shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <424dc7f2-d999-19e1-42ad-226cf22783eb@suse.com>
Date: Thu, 28 May 2020 14:03:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> For now, any #CP exception or shadow stack #PF indicate a bug in Xen, but
> attempt to recover from #CP if taken in guest context.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one more question and a suggestion:

> @@ -1445,8 +1447,10 @@ void do_page_fault(struct cpu_user_regs *regs)
>       *
>       * Anything remaining is an error, constituting corruption of the
>       * pagetables and probably an L1TF vulnerable gadget.
> +     *
> +     * Any shadow stack access fault is a bug in Xen.
>       */
> -    if ( error_code & PFEC_reserved_bit )
> +    if ( error_code & (PFEC_reserved_bit | PFEC_shstk) )
>          goto fatal;

Doesn't your saying "any" imply putting this new check higher up, in
particular ahead of the call to fixup_page_fault()?

> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>          entrypoint 1b
>  
>          /* Reserved exceptions, heading towards do_reserved_trap(). */
> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)

Adding yet another || here adds to the fragility of the entire
construct. Wouldn't it be better to implement do_entry_VE on
this occasion, even if its handling continues to end up in
do_reserved_trap()? This would have the benefit of avoiding the
pointless checking of %spl first thing in its handling. Feel
free to keep the R-b if you decide to go this route.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 12:31:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 12:31:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeHgk-0003nX-AE; Thu, 28 May 2020 12:31:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eoT6=7K=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeHgi-0003nS-UK
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 12:31:09 +0000
X-Inumbo-ID: 1ad5392e-a0df-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ad5392e-a0df-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 12:31:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=yT0YCeVmS31oUgr3kcKp2rb0acVnrFO8fotP62VttFc=; b=AYKcpbquSFiA+Fd7AjYabIdcNc
 4c2MIIx960LxN3oKeum4woXDrPd63sTz6u9RAt3c7TQ/aIhITEKF4IXMi/5azqQSLBSyOIsbayblx
 Rr4HoSXPO1rM1AF1bil9f9PjFl8RKeIwRbeNJz/k/np4qPa4pVnpN0eeWuRSLh7XOXmI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeHfy-000317-0c; Thu, 28 May 2020 12:30:22 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeHfx-0002M2-9k; Thu, 28 May 2020 12:30:21 +0000
Date: Thu, 28 May 2020 14:30:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Anchal Agarwal <anchalag@amazon.com>
Subject: Re: [PATCH 06/12] xen-blkfront: add callbacks for PM suspend and
 hibernation
Message-ID: <20200528122956.GF1195@Air-de-Roger>
References: <cover.1589926004.git.anchalag@amazon.com>
 <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <ad580b4d5b76c18fe2fe409704f25622e01af361.1589926004.git.anchalag@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: eduval@amazon.com, len.brown@intel.com, peterz@infradead.org,
 benh@kernel.crashing.org, x86@kernel.org, linux-mm@kvack.org, pavel@ucw.cz,
 hpa@zytor.com, tglx@linutronix.de, sstabellini@kernel.org, kamatam@amazon.com,
 mingo@redhat.com, xen-devel@lists.xenproject.org, sblbir@amazon.com,
 axboe@kernel.dk, konrad.wilk@oracle.com, bp@alien8.de,
 boris.ostrovsky@oracle.com, jgross@suse.com, netdev@vger.kernel.org,
 linux-pm@vger.kernel.org, rjw@rjwysocki.net, linux-kernel@vger.kernel.org,
 vkuznets@redhat.com, davem@davemloft.net, dwmw@amazon.co.uk
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Tue, May 19, 2020 at 11:27:50PM +0000, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>
> 
> S4 power transition states are quite different from xen
> suspend/resume. The former is visible to the guest, so frontend drivers
> should be aware of the state transitions and be able to take appropriate
> actions when needed. In the transition to S4 we need to make sure that at
> least all in-flight blkif requests get completed, since they probably
> contain bits of the guest's memory image and that's not going to get saved
> any other way. Hence, re-issuing in-flight requests as in the xen resume
> case will not work here. This is in contrast to xen-suspend, where we need
> to freeze with as little processing as possible to avoid dirtying RAM late
> in the migration cycle, and we know that in-flight data can wait.
> 
> Add freeze, thaw and restore callbacks for PM suspend and hibernation
> support. All frontend drivers that need to use PM_HIBERNATION/PM_SUSPEND
> events must implement these xenbus_driver callbacks. The freeze handler
> stops the block-layer queue and disconnects the frontend from the backend
> while freeing ring_info and associated resources. Before disconnecting from
> the backend, we need to prevent any new IO from being queued and wait for
> existing IO to complete. Freezing/unfreezing the queues guarantees that no
> requests are in use on the shared ring. However, for sanity we should check
> the state of the ring before disconnecting, to make sure that there are no
> outstanding requests to be processed on the ring. The restore handler
> re-allocates ring_info, unquiesces and unfreezes the queue, and re-connects
> to the backend, so that the rest of the kernel can continue to use the
> block device transparently.
> 
> Note: for older backends, if a backend doesn't have commit 12ea729645ace
> ("xen/blkback: unmap all persistent grants when frontend gets
> disconnected"), the frontend may see a massive number of grant table
> warnings when freeing resources.
> [   36.852659] deferring g.e. 0xf9 (pfn 0xffffffffffffffff)
> [   36.855089] xen:grant_table: WARNING:e.g. 0x112 still in use!
> 
> In this case, persistent grants would need to be disabled.
> 
> [Anchal Changelog: Removed timeout/request during blkfront freeze.
> Reworked the whole patch to work with blk-mq and incorporate upstream's
> comments]

Please tag versions using vX, and it would be helpful if you could list
the specific changes that you made between versions. There were
3 RFC versions IIRC, and there's no log of the changes between them.

> 
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> ---
>  drivers/block/xen-blkfront.c | 122 +++++++++++++++++++++++++++++++++--
>  1 file changed, 115 insertions(+), 7 deletions(-)
> 
> diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
> index 3b889ea950c2..464863ed7093 100644
> --- a/drivers/block/xen-blkfront.c
> +++ b/drivers/block/xen-blkfront.c
> @@ -48,6 +48,8 @@
>  #include <linux/list.h>
>  #include <linux/workqueue.h>
>  #include <linux/sched/mm.h>
> +#include <linux/completion.h>
> +#include <linux/delay.h>
>  
>  #include <xen/xen.h>
>  #include <xen/xenbus.h>
> @@ -80,6 +82,8 @@ enum blkif_state {
>  	BLKIF_STATE_DISCONNECTED,
>  	BLKIF_STATE_CONNECTED,
>  	BLKIF_STATE_SUSPENDED,
> +	BLKIF_STATE_FREEZING,
> +	BLKIF_STATE_FROZEN

Nit: adding a terminating ',' would prevent further additions from
having to modify this line.

>  };
>  
>  struct grant {
> @@ -219,6 +223,7 @@ struct blkfront_info
>  	struct list_head requests;
>  	struct bio_list bio_list;
>  	struct list_head info_list;
> +	struct completion wait_backend_disconnected;
>  };
>  
>  static unsigned int nr_minors;
> @@ -1005,6 +1010,7 @@ static int xlvbd_init_blk_queue(struct gendisk *gd, u16 sector_size,
>  	info->sector_size = sector_size;
>  	info->physical_sector_size = physical_sector_size;
>  	blkif_set_queue_limits(info);
> +	init_completion(&info->wait_backend_disconnected);
>  
>  	return 0;
>  }
> @@ -1057,7 +1063,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
>  		case XEN_SCSI_DISK5_MAJOR:
>  		case XEN_SCSI_DISK6_MAJOR:
>  		case XEN_SCSI_DISK7_MAJOR:
> -			*offset = (*minor / PARTS_PER_DISK) + 
> +			*offset = (*minor / PARTS_PER_DISK) +
>  				((major - XEN_SCSI_DISK1_MAJOR + 1) * 16) +
>  				EMULATED_SD_DISK_NAME_OFFSET;
>  			*minor = *minor +
> @@ -1072,7 +1078,7 @@ static int xen_translate_vdev(int vdevice, int *minor, unsigned int *offset)
>  		case XEN_SCSI_DISK13_MAJOR:
>  		case XEN_SCSI_DISK14_MAJOR:
>  		case XEN_SCSI_DISK15_MAJOR:
> -			*offset = (*minor / PARTS_PER_DISK) + 
> +			*offset = (*minor / PARTS_PER_DISK) +

Unrelated changes, please split to a pre-patch.

>  				((major - XEN_SCSI_DISK8_MAJOR + 8) * 16) +
>  				EMULATED_SD_DISK_NAME_OFFSET;
>  			*minor = *minor +
> @@ -1353,6 +1359,8 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>  	unsigned int i;
>  	struct blkfront_ring_info *rinfo;
>  
> +	if (info->connected == BLKIF_STATE_FREEZING)
> +		goto free_rings;
>  	/* Prevent new requests being issued until we fix things up. */
>  	info->connected = suspend ?
>  		BLKIF_STATE_SUSPENDED : BLKIF_STATE_DISCONNECTED;
> @@ -1360,6 +1368,7 @@ static void blkif_free(struct blkfront_info *info, int suspend)
>  	if (info->rq)
>  		blk_mq_stop_hw_queues(info->rq);
>  
> +free_rings:
>  	for_each_rinfo(info, rinfo, i)
>  		blkif_free_ring(rinfo);
>  
> @@ -1563,8 +1572,10 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>  	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
>  	struct blkfront_info *info = rinfo->dev_info;
>  
> -	if (unlikely(info->connected != BLKIF_STATE_CONNECTED))
> -		return IRQ_HANDLED;
> +	if (unlikely(info->connected != BLKIF_STATE_CONNECTED
> +		    && info->connected != BLKIF_STATE_FREEZING)){

Extra tab and missing space between '){'. Also my preference would be
for the && to go at the end of the previous line, like it's done
elsewhere in the file.

> +	    return IRQ_HANDLED;
> +	}
>  
>  	spin_lock_irqsave(&rinfo->ring_lock, flags);
>   again:
> @@ -2027,6 +2038,7 @@ static int blkif_recover(struct blkfront_info *info)
>  	unsigned int segs;
>  	struct blkfront_ring_info *rinfo;
>  
> +	bool frozen = info->connected == BLKIF_STATE_FROZEN;

Please put this together with the rest of the variable definitions,
and leave the empty line as a split between variable definitions and
code. I've already requested this on RFC v3 but you seem to have
dropped some of the requests I've made there.

>  	blkfront_gather_backend_features(info);
>  	/* Reset limits changed by blk_mq_update_nr_hw_queues(). */
>  	blkif_set_queue_limits(info);
> @@ -2048,6 +2060,9 @@ static int blkif_recover(struct blkfront_info *info)
>  		kick_pending_request_queues(rinfo);
>  	}
>  
> +	if (frozen)
> +		return 0;
> +
>  	list_for_each_entry_safe(req, n, &info->requests, queuelist) {
>  		/* Requeue pending requests (flush or discard) */
>  		list_del_init(&req->queuelist);
> @@ -2364,6 +2379,7 @@ static void blkfront_connect(struct blkfront_info *info)
>  
>  		return;
>  	case BLKIF_STATE_SUSPENDED:
> +	case BLKIF_STATE_FROZEN:
>  		/*
>  		 * If we are recovering from suspension, we need to wait
>  		 * for the backend to announce it's features before
> @@ -2481,12 +2497,36 @@ static void blkback_changed(struct xenbus_device *dev,
>  		break;
>  
>  	case XenbusStateClosed:
> -		if (dev->state == XenbusStateClosed)
> +		if (dev->state == XenbusStateClosed) {
> +			if (info->connected == BLKIF_STATE_FREEZING) {
> +				blkif_free(info, 0);
> +				info->connected = BLKIF_STATE_FROZEN;
> +				complete(&info->wait_backend_disconnected);
> +				break;

There's no need for the break here, you can rely on the break below.

> +			}
> +
>  			break;
> +		}
> +
> +		/*
> +		 * We may somehow receive backend's Closed again while thawing
> +		 * or restoring and it causes thawing or restoring to fail.
> +		 * Ignore such unexpected state regardless of the backend state.
> +		 */
> +		if (info->connected == BLKIF_STATE_FROZEN) {

I think you can join this with the previous dev->state == XenbusStateClosed?

Also, won't the device be in the Closed state already if it's in state
frozen?

> +			dev_dbg(&dev->dev,
> +					"ignore the backend's Closed state: %s",
> +					dev->nodename);
> +			break;
> +		}
>  		/* fall through */
>  	case XenbusStateClosing:
> -		if (info)
> -			blkfront_closing(info);
> +		if (info) {
> +			if (info->connected == BLKIF_STATE_FREEZING)
> +				xenbus_frontend_closed(dev);
> +			else
> +				blkfront_closing(info);
> +		}
>  		break;
>  	}
>  }
> @@ -2630,6 +2670,71 @@ static void blkif_release(struct gendisk *disk, fmode_t mode)
>  	mutex_unlock(&blkfront_mutex);
>  }
>  
> +static int blkfront_freeze(struct xenbus_device *dev)
> +{
> +	unsigned int i;
> +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> +	struct blkfront_ring_info *rinfo;
> +	/* This would be reasonable timeout as used in xenbus_dev_shutdown() */
> +	unsigned int timeout = 5 * HZ;
> +	unsigned long flags;
> +	int err = 0;
> +
> +	info->connected = BLKIF_STATE_FREEZING;
> +
> +	blk_mq_freeze_queue(info->rq);
> +	blk_mq_quiesce_queue(info->rq);
> +
> +	for_each_rinfo(info, rinfo, i) {
> +	    /* No more gnttab callback work. */
> +	    gnttab_cancel_free_callback(&rinfo->callback);
> +	    /* Flush gnttab callback work. Must be done with no locks held. */
> +	    flush_work(&rinfo->work);
> +	}
> +
> +	for_each_rinfo(info, rinfo, i) {
> +	    spin_lock_irqsave(&rinfo->ring_lock, flags);
> +	    if (RING_FULL(&rinfo->ring)
> +		    || RING_HAS_UNCONSUMED_RESPONSES(&rinfo->ring)) {

'||' should go at the end of the previous line.

> +		xenbus_dev_error(dev, err, "Hibernation Failed.
> +			The ring is still busy");
> +		info->connected = BLKIF_STATE_CONNECTED;
> +		spin_unlock_irqrestore(&rinfo->ring_lock, flags);

You need to unfreeze the queues here, or else the device will be in a
blocked state AFAICT.

> +		return -EBUSY;
> +	}
> +	    spin_unlock_irqrestore(&rinfo->ring_lock, flags);
> +	}

This block has indentation all messed up.

> +	/* Kick the backend to disconnect */
> +	xenbus_switch_state(dev, XenbusStateClosing);
> +
> +	/*
> +	 * We don't want to move forward before the frontend is diconnected
> +	 * from the backend cleanly.
> +	 */
> +	timeout = wait_for_completion_timeout(&info->wait_backend_disconnected,
> +					      timeout);
> +	if (!timeout) {
> +		err = -EBUSY;

Note err is only used here, and I think could just be dropped.

> +		xenbus_dev_error(dev, err, "Freezing timed out;"
> +				 "the device may become inconsistent state");

Leaving the device in this state is quite bad, as it's in a closed
state and with the queues frozen. You should make an attempt to
restore things to a working state.

> +	}
> +
> +	return err;
> +}
> +
> +static int blkfront_restore(struct xenbus_device *dev)
> +{
> +	struct blkfront_info *info = dev_get_drvdata(&dev->dev);
> +	int err = 0;
> +
> +	err = talk_to_blkback(dev, info);
> +	blk_mq_unquiesce_queue(info->rq);
> +	blk_mq_unfreeze_queue(info->rq);
> +	if (!err)
> +	    blk_mq_update_nr_hw_queues(&info->tag_set, info->nr_rings);

Bad indentation. Also shouldn't you first update the queues and then
unfreeze them?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 28 12:33:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 12:33:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeHid-0003ta-M7; Thu, 28 May 2020 12:33:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeHic-0003tU-4u
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 12:33:06 +0000
X-Inumbo-ID: 6083ff82-a0df-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6083ff82-a0df-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 12:33:05 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 811B2AF0D;
 Thu, 28 May 2020 12:33:03 +0000 (UTC)
Subject: Re: [PATCH v2 05/14] x86/shstk: Re-layout the stack block for shadow
 stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <03cc30f8-4849-f77d-857d-b63248c70a25@suse.com>
Date: Thu, 28 May 2020 14:33:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> --- a/xen/arch/x86/traps.c
> +++ b/xen/arch/x86/traps.c
> @@ -365,20 +365,15 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
>  /*
>   * Notes for get_stack_trace_bottom() and get_stack_dump_bottom()
>   *
> - * Stack pages 0 - 3:
> + * Stack pages 1 - 4:
>   *   These are all 1-page IST stacks.  Each of these stacks have an exception
>   *   frame and saved register state at the top.  The interesting bound for a
>   *   trace is the word adjacent to this, while the bound for a dump is the
>   *   very top, including the exception frame.
>   *
> - * Stack pages 4 and 5:
> - *   None of these are particularly interesting.  With MEMORY_GUARD, page 5 is
> - *   explicitly not present, so attempting to dump or trace it is
> - *   counterproductive.  Without MEMORY_GUARD, it is possible for a call chain
> - *   to use the entire primary stack and wander into page 5.  In this case,
> - *   consider these pages an extension of the primary stack to aid debugging
> - *   hopefully rare situations where the primary stack has effective been
> - *   overflown.
> + * Stack pages 0 and 5:
> + *   Shadow stacks.  These are mapped read-only, and used by CET-SS capable
> + *   processors.  They will never contain regular stack data.

I don't mind the comment getting put in place already here, but will it
reflect reality even when CET-SS is not in use? That is, are the pages
then still mapped r/o, rather than being left unmapped to act as guard
pages not only for stack pushes but also for stack pops? If the latter,
the "dump or trace it is counterproductive" remark would still apply in
that case, and hence may better be retained.

> @@ -392,13 +387,10 @@ unsigned long get_stack_trace_bottom(unsigned long sp)
>  {
>      switch ( get_stack_page(sp) )
>      {
> -    case 0 ... 3:
> +    case 1 ... 4:
>          return ROUNDUP(sp, PAGE_SIZE) -
>              offsetof(struct cpu_user_regs, es) - sizeof(unsigned long);
>  
> -#ifndef MEMORY_GUARD
> -    case 4 ... 5:
> -#endif
>      case 6 ... 7:
>          return ROUNDUP(sp, STACK_SIZE) -
>              sizeof(struct cpu_info) - sizeof(unsigned long);
> @@ -412,12 +404,9 @@ unsigned long get_stack_dump_bottom(unsigned long sp)
>  {
>      switch ( get_stack_page(sp) )
>      {
> -    case 0 ... 3:
> +    case 1 ... 4:
>          return ROUNDUP(sp, PAGE_SIZE) - sizeof(unsigned long);
>  
> -#ifndef MEMORY_GUARD
> -    case 4 ... 5:
> -#endif
>      case 6 ... 7:
>          return ROUNDUP(sp, STACK_SIZE) - sizeof(unsigned long);

The need to adjust these literal numbers demonstrates how fragile
this is. I admit I can't see a good way to get rid of the literal
numbers altogether, but could I talk you into switching to (for
the latter, as an example)

    switch ( get_stack_page(sp) )
    {
    case 0: case PRIMARY_SHSTK_SLOT:
        return 0;

    case 1 ... 4:
        return ROUNDUP(sp, PAGE_SIZE) - sizeof(unsigned long);

    case 6 ... 7:
        return ROUNDUP(sp, STACK_SIZE) - sizeof(unsigned long);

    default:
        return sp - sizeof(unsigned long);
    }

? Of course this will need the callers to be aware that they may get
back zero, but there are only very few of them (which made me notice
that the functions would be better off static). And the returning of
zero may then want changing (conditionally upon us using CET-SS) in a
later patch, where IIRC you use the shadow stack for call trace
generation.

As a positive side effect this will yield a compile error if
PRIMARY_SHSTK_SLOT gets changed without adjusting these
functions.

> --- a/xen/include/asm-x86/config.h
> +++ b/xen/include/asm-x86/config.h
> @@ -75,6 +75,9 @@
>  /* Primary stack is restricted to 8kB by guard pages. */
>  #define PRIMARY_STACK_SIZE 8192
>  
> +/* Primary shadow stack is slot 5 of 8, immediately under the primary stack. */
> +#define PRIMARY_SHSTK_SLOT 5

Any reason to put it here rather than ...

> --- a/xen/include/asm-x86/current.h
> +++ b/xen/include/asm-x86/current.h
> @@ -16,12 +16,12 @@
>   *
>   * 7 - Primary stack (with a struct cpu_info at the top)
>   * 6 - Primary stack
> - * 5 - Optionally not present (MEMORY_GUARD)
> - * 4 - Unused; optionally not present (MEMORY_GUARD)
> - * 3 - Unused; optionally not present (MEMORY_GUARD)
> - * 2 - MCE IST stack
> - * 1 - NMI IST stack
> - * 0 - Double Fault IST stack
> + * 5 - Primay Shadow Stack (read-only)
> + * 4 - #DF IST stack
> + * 3 - #DB IST stack
> + * 2 - NMI IST stack
> + * 1 - #MC IST stack
> + * 0 - IST Shadow Stacks (4x 1k, read-only)
>   */

... right below this comment?

Same question as above regarding the "read-only" here.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 12:50:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 12:50:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeHzB-0005aJ-7T; Thu, 28 May 2020 12:50:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeHzA-0005aE-Iz
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 12:50:12 +0000
X-Inumbo-ID: c423aa72-a0e1-11ea-a7c5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c423aa72-a0e1-11ea-a7c5-12813bfff9fa;
 Thu, 28 May 2020 12:50:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A44EBAA35;
 Thu, 28 May 2020 12:50:09 +0000 (UTC)
Subject: Re: [PATCH v2 06/14] x86/shstk: Create shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-7-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8a02b933-3b7e-ded9-8bf3-a1c35f2ef7ae@suse.com>
Date: Thu, 28 May 2020 14:50:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-7-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> --- a/xen/arch/x86/cpu/common.c
> +++ b/xen/arch/x86/cpu/common.c
> @@ -769,6 +769,30 @@ void load_system_tables(void)
>  	tss->rsp1 = 0x8600111111111111ul;
>  	tss->rsp2 = 0x8600111111111111ul;
>  
> +	/* Set up the shadow stack IST. */
> +	if (cpu_has_xen_shstk) {
> +		volatile uint64_t *ist_ssp = this_cpu(tss_page).ist_ssp;
> +
> +		/*
> +		 * Used entries must point at the supervisor stack token.
> +		 * Unused entries are poisoned.
> +		 *
> +		 * This IST Table may be live, and the NMI/#MC entries must
> +		 * remain valid on every instruction boundary, hence the
> +		 * volatile qualifier.
> +		 */

Move this comment ahead of what it comments on, as we usually have it?

> +		ist_ssp[0] = 0x8600111111111111ul;
> +		ist_ssp[IST_MCE] = stack_top + (IST_MCE * IST_SHSTK_SIZE) - 8;
> +		ist_ssp[IST_NMI] = stack_top + (IST_NMI * IST_SHSTK_SIZE) - 8;
> +		ist_ssp[IST_DB]	 = stack_top + (IST_DB	* IST_SHSTK_SIZE) - 8;
> +		ist_ssp[IST_DF]	 = stack_top + (IST_DF	* IST_SHSTK_SIZE) - 8;

Strictly speaking you want to introduce

#define IST_SHSTK_SLOT 0

next to PRIMARY_SHSTK_SLOT and use

		ist_ssp[IST_MCE] = stack_top + (IST_SHSTK_SLOT * PAGE_SIZE) +
                                               (IST_MCE * IST_SHSTK_SIZE) - 8;

etc here. It's getting longish, so I'm not going to insist. But if you
go this route, then please do the same below / elsewhere.
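As a standalone illustration of the suggested addressing scheme (the constant
values below are assumptions for the sketch, not necessarily Xen's real
definitions):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed values for this sketch; the real Xen definitions may differ. */
#define PAGE_SIZE       4096u
#define IST_SHSTK_SLOT  0u     /* IST shadow stacks live in stack page 0 */
#define IST_SHSTK_SIZE  1024u  /* 4x 1k shadow stacks within that page  */

/* Shadow stack token address for a given IST entry: the top of that
 * entry's 1k shadow stack, minus 8 bytes for the token itself. */
static uint64_t ist_token_addr(uint64_t stack_top, unsigned int ist)
{
    return stack_top + (IST_SHSTK_SLOT * PAGE_SIZE) +
           (ist * IST_SHSTK_SIZE) - 8;
}
```

With IST_SHSTK_SLOT being 0 the arithmetic matches the patch's shorter form;
the longer spelling only makes the page-0 placement explicit.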

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5994,12 +5994,33 @@ void memguard_unguard_range(void *p, unsigned long l)
>  
>  #endif
>  
> +static void write_sss_token(unsigned long *ptr)
> +{
> +    /*
> +     * A supervisor shadow stack token is its own linear address, with the
> +     * busy bit (0) clear.
> +     */
> +    *ptr = (unsigned long)ptr;
> +}
> +
>  void memguard_guard_stack(void *p)
>  {
> -    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
> +    /* IST Shadow stacks.  4x 1k in stack page 0. */
> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
> +    {
> +        write_sss_token(p + (IST_MCE * IST_SHSTK_SIZE) - 8);
> +        write_sss_token(p + (IST_NMI * IST_SHSTK_SIZE) - 8);
> +        write_sss_token(p + (IST_DB  * IST_SHSTK_SIZE) - 8);
> +        write_sss_token(p + (IST_DF  * IST_SHSTK_SIZE) - 8);

Up to now two successive memguard_guard_stack() invocations were working
fine. This will no longer be the case, just as an observation.
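The token layout the helper relies on can be demonstrated in isolation: a
supervisor shadow stack token is simply the slot's own linear address with the
busy bit (bit 0) clear. The sketch below operates on an ordinary integer slot
rather than a live shadow-stack mapping:

```c
#include <assert.h>
#include <stdint.h>

/* Mirror of write_sss_token() from the patch, writing into a plain
 * integer slot instead of a shadow-stack mapping. */
static void write_sss_token(uint64_t *ptr)
{
    *ptr = (uint64_t)(uintptr_t)ptr;
}

/* Any naturally aligned slot yields a token with the busy bit clear. */
static int token_is_free(uint64_t token)
{
    return !(token & 1);
}

/* Write a token into a local slot and verify it points at itself. */
static int token_selfcheck(void)
{
    uint64_t slot;

    write_sss_token(&slot);
    return slot == (uint64_t)(uintptr_t)&slot && token_is_free(slot);
}
```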

> +    }
> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);

As already hinted at in reply to the previous patch, I think this wants
to remain _PAGE_NONE when we don't use CET-SS.

> +    /* Primary Shadow Stack.  1x 4k in stack page 5. */
>      p += PRIMARY_SHSTK_SLOT * PAGE_SIZE;
> -    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
> +        write_sss_token(p + PAGE_SIZE - 8);
> +
> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
>  }
>  
>  void memguard_unguard_stack(void *p)

Would this function perhaps better zap the tokens?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 13:08:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 13:08:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeIGN-0006cY-Oo; Thu, 28 May 2020 13:07:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GBWX=7K=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1jeIGN-0006cT-78
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 13:07:59 +0000
X-Inumbo-ID: 3ff13104-a0e4-11ea-8993-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ff13104-a0e4-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 13:07:57 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 78UCGICbqnbVdGeCbbPd0B2eskBfpEuIjG+gYkPNFGwgfALk1/OeB7ClUBDCN4K5asDzjA3W/v
 sRuQ6M4Ta9gEkchlqHe93Qf4+Aq/VZVu4Ylwkzrspwj2r7tTa8N1OJ+Ug24Cq8Q/+LXO5zyQff
 Vpw1Ba8IcnjdM4ZK/wUgAx3LGNvTbEJ9Sh7mC6tQ5GYzyDMqqv9YTLb3JWfA8nA/WEcyRudQRD
 GdWd2xnaLI8LYG7qlLAUTw53OaAHXrPc2kKzOtfGFt+v5G15I2v52WJPm/E8iVi4q2Rfm6dQyf
 DzY=
X-SBRS: 2.7
X-MesageID: 19392726
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,444,1583211600"; d="scan'208";a="19392726"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/hvm: Improve error information in handle_pio()
Date: Thu, 28 May 2020 14:07:38 +0100
Message-ID: <20200528130738.12816-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?q?Marek=20Marczykowski-G=C3=B3recki?=
 <marmarek@invisiblethingslab.com>, Paul Durrant <paul.durrant@citrix.com>,
 Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

domain_crash() should always have a message which is emitted even in release
builds, so something more useful than this is presented.

  (XEN) domain_crash called from io.c:171
  (XEN) domain_crash called from io.c:171
  (XEN) domain_crash called from io.c:171
  ...

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Wei Liu <wl@xen.org>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Paul Durrant <paul.durrant@citrix.com>
CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>

Part of a bug reported by Marek.  Something else is wonky in the IO emulation
state, and is preventing us from yielding to the scheduler so the domain can
progress with being shut down.
---
 xen/arch/x86/hvm/io.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
index a5b0a23f06..4e468bfb6b 100644
--- a/xen/arch/x86/hvm/io.c
+++ b/xen/arch/x86/hvm/io.c
@@ -167,7 +167,9 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
         break;
 
     default:
-        gdprintk(XENLOG_ERR, "Weird HVM ioemulation status %d.\n", rc);
+        gprintk(XENLOG_ERR, "Unexpected PIO status %d, port %#x %s 0x%0*lx\n",
+                rc, port, dir == IOREQ_WRITE ? "write" : "read",
+                size * 2, data & ((1ul << (size * 8)) - 1));
         domain_crash(curr->domain);
         return false;
     }
-- 
2.11.0
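The value-masking expression in the new message can be exercised on its own:
for the valid PIO sizes of 1, 2 and 4 bytes, `data & ((1ul << (size * 8)) - 1)`
keeps exactly the accessed bytes, and the `size * 2` field width prints that
many hex digits. A standalone check, assuming a 64-bit unsigned long:

```c
#include <assert.h>
#include <stdint.h>

/* Mask an in-flight PIO value down to its access size, as done for the
 * gprintk() in the patch.  PIO accesses are 1, 2 or 4 bytes wide, so the
 * shift never reaches the width of a 64-bit unsigned long. */
static unsigned long pio_mask(unsigned long data, unsigned int size)
{
    return data & ((1ul << (size * 8)) - 1);
}
```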



From xen-devel-bounces@lists.xenproject.org Thu May 28 13:14:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 13:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeIMJ-0007Tr-Hl; Thu, 28 May 2020 13:14:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lRPh=7K=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeIMH-0007Tk-VC
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 13:14:06 +0000
X-Inumbo-ID: 1b001120-a0e5-11ea-a7c8-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b001120-a0e5-11ea-a7c8-12813bfff9fa;
 Thu, 28 May 2020 13:14:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/TGQqVJMo46RFi783ukTwjZkqvAI+atG7guE7LsOJRw=; b=tayfq+iWBTaRYEL1JG22O0euSw
 FVLlHeZ4IngAjmhhoBxx9Y76GSGzRcL7mF/n+61HDsQxFkYQT5lEW0J61XVIUg/g0c0vozu5ldToy
 Hc60CmtyeifMnTYrIGsKSazGMKHTsCnVMdTv14KyJEhUnZNPAjAgny1ZW0GFNaSXb5kM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeIMG-0003w5-HA; Thu, 28 May 2020 13:14:04 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeIMG-0005jw-Ar; Thu, 28 May 2020 13:14:04 +0000
Subject: Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
To: Ian Jackson <ian.jackson@citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-23-ian.jackson@eu.citrix.com>
 <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
 <24261.17724.382954.918761@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e4e7e515-587a-ad81-c9b7-b7cfa69108be@xen.org>
Date: Thu, 28 May 2020 14:14:02 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <24261.17724.382954.918761@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Ian,

On 20/05/2020 15:57, Ian Jackson wrote:
> Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
>> On 19/05/2020 20:02, Ian Jackson wrote:
>>> CC: Julien Grall <julien@xen.org>
>>> CC: Stefano Stabellini <sstabellini@kernel.org>
>>> Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
>>
>> Acked-by: Julien Grall <jgrall@amazon.com>
> 
> Thanks.
> 
>>>    	# Debian doesn't currently know what bootloader to install in
>>>    	# a Xen guest on ARM. We install pv-grub-menu above which
>>
>> OOI, what does Debian install for x86 HVM guest? Is there any ticket
>> tracking this issue?
> 
On x86, it installs grub.  (grub2, x86, PC, to be precise.)

I have just realized that on x86 you will always have firmware in the 
guest. On Arm we commonly boot the kernel directly.

So maybe we are closer to PV here. Do you also install GRUB in that case?

Note that we do support EDK2 at least on Arm64. It would be nice to get 
some tests for it in Osstest in the future.

> I'm not aware of any ticket or bug about this.

It might be worth creating one then.

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 28 13:20:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 13:20:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeISL-0008Is-7x; Thu, 28 May 2020 13:20:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeISK-0008In-4b
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 13:20:20 +0000
X-Inumbo-ID: f6cbcf00-a0e5-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6cbcf00-a0e5-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 13:20:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BOr1h+ZpHYJcwkKp51Vi+J4mya56ZTRjORlC7mrz1YQ=; b=NJrAF++aCCjrnQrJyIKQ7Th0r
 cHk7rXe6gUd4S8v5IdDux329lhaAsdS2o8QrHVk1oOroW/fte843LIloo8ObsTPk/W0RDSkeKDB/b
 BQHkhWJAFeoogRZc5SNOGEj8N/qWolUU4jAOnzd/uAG5f569UsIAe4uGAUZQdCpBMyT38=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeISD-00043X-5d; Thu, 28 May 2020 13:20:13 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeISC-0003nf-UL; Thu, 28 May 2020 13:20:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeISC-0000OF-Td; Thu, 28 May 2020 13:20:12 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150433-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150433: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=724913de8ac8426d313a4645741d86c1169ae406
X-Osstest-Versions-That: xen=6b75c7a95420acbb9c118624ff0a5e973287c1e4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 13:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150433 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150433/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  724913de8ac8426d313a4645741d86c1169ae406
baseline version:
 xen                  6b75c7a95420acbb9c118624ff0a5e973287c1e4

Last test of basis   150416  2020-05-27 21:01:18 Z    0 days
Testing same since   150433  2020-05-28 11:01:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   6b75c7a954..724913de8a  724913de8ac8426d313a4645741d86c1169ae406 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 28 13:23:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 13:23:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeIUx-0008Px-M6; Thu, 28 May 2020 13:23:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gT1p=7K=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jeIUv-0008Pr-Sr
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 13:23:01 +0000
X-Inumbo-ID: 5a1a40fa-a0e6-11ea-8993-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a1a40fa-a0e6-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 13:23:00 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:46080
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jeIUr-000IXV-L0 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 28 May 2020 14:22:57 +0100
Subject: Re: [PATCH v2 04/14] x86/traps: Implement #CP handler and extend #PF
 for shadow stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-5-andrew.cooper3@citrix.com>
 <424dc7f2-d999-19e1-42ad-226cf22783eb@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a5fa915b-b0ce-8247-09bb-dac3d149c6b5@citrix.com>
Date: Thu, 28 May 2020 14:22:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <424dc7f2-d999-19e1-42ad-226cf22783eb@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/05/2020 13:03, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> For now, any #CP exception or shadow stack #PF indicate a bug in Xen, but
>> attempt to recover from #CP if taken in guest context.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one more question and a suggestion:
>
>> @@ -1445,8 +1447,10 @@ void do_page_fault(struct cpu_user_regs *regs)
>>       *
>>       * Anything remaining is an error, constituting corruption of the
>>       * pagetables and probably an L1TF vulnerable gadget.
>> +     *
>> +     * Any shadow stack access fault is a bug in Xen.
>>       */
>> -    if ( error_code & PFEC_reserved_bit )
>> +    if ( error_code & (PFEC_reserved_bit | PFEC_shstk) )
>>          goto fatal;
> Wouldn't you saying "any" imply putting this new check higher up, in
> particular ahead of the call to fixup_page_fault()?

Can do.

>
>> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>>          entrypoint 1b
>>  
>>          /* Reserved exceptions, heading towards do_reserved_trap(). */
>> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
>> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
>> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
> Adding yet another || here adds to the fragility of the entire
> construct. Wouldn't it be better to implement do_entry_VE at
> this occasion, even its handling continues to end up in
> do_reserved_trap()? This would have the benefit of avoiding the
> pointless checking of %spl first thing in its handling. Feel
> free to keep the R-b if you decide to go this route.

I actually have a different plan, which deletes this entire clause, and
simplifies our autogen sanity checking somewhat.

For vectors which Xen has no implementation of (for whatever reason),
use DPL0, non-present descriptors, and redirect #NP[IDT] into
do_reserved_trap().

No need for any entry logic for the trivially fatal paths, or safety
against being uncertain about error codes.

However, it's a little too close to 4.14 to clean this up now.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 28 13:31:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 13:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeIcy-0000rW-Hs; Thu, 28 May 2020 13:31:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeIcx-0000rR-Kx
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 13:31:19 +0000
X-Inumbo-ID: 81ced8f8-a0e7-11ea-a7cf-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81ced8f8-a0e7-11ea-a7cf-12813bfff9fa;
 Thu, 28 May 2020 13:31:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 5E463B028;
 Thu, 28 May 2020 13:31:15 +0000 (UTC)
Subject: Re: [PATCH v2 04/14] x86/traps: Implement #CP handler and extend #PF
 for shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-5-andrew.cooper3@citrix.com>
 <424dc7f2-d999-19e1-42ad-226cf22783eb@suse.com>
 <a5fa915b-b0ce-8247-09bb-dac3d149c6b5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <21cfcf09-930d-c0cd-6860-92523732a594@suse.com>
Date: Thu, 28 May 2020 15:31:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a5fa915b-b0ce-8247-09bb-dac3d149c6b5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 15:22, Andrew Cooper wrote:
> On 28/05/2020 13:03, Jan Beulich wrote:
>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>>>          entrypoint 1b
>>>  
>>>          /* Reserved exceptions, heading towards do_reserved_trap(). */
>>> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
>>> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
>>> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
>> Adding yet another || here adds to the fragility of the entire
>> construct. Wouldn't it be better to implement do_entry_VE at
>> this occasion, even its handling continues to end up in
>> do_reserved_trap()? This would have the benefit of avoiding the
>> pointless checking of %spl first thing in its handling. Feel
>> free to keep the R-b if you decide to go this route.
> 
> I actually have a different plan, which deletes this entire clause, and
> simplifies our autogen sanity checking somewhat.
> 
> For vectors which Xen has no implementation of (for whatever reason),
> use DPL0, non-present descriptors, and redirect #NP[IDT] into
> do_reserved_trap().

Except that, #NP itself being a contributory exception, if the exception
covered this way is also contributory (e.g. #CP) or of page fault class
(e.g. #VE), we'd get #DF instead of #NP afaict.
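The promotion rule at work here can be sketched as a small classifier. This
is a standalone illustration based on the SDM's exception classes (with #CP
treated as contributory and #VE as page-fault class, as above), not Xen code:

```c
#include <assert.h>

/* Rough sketch of the x86 double-fault promotion rule: a contributory
 * exception arising while delivering another contributory exception, or
 * any contributory/page-fault-class exception arising while delivering a
 * page-fault-class one, escalates to #DF instead of being handled. */
enum exc_class { BENIGN, CONTRIBUTORY, PAGE_FAULT_CLASS };

static enum exc_class classify(unsigned int vec)
{
    switch ( vec )
    {
    case 0:  /* #DE */
    case 10: /* #TS */
    case 11: /* #NP */
    case 12: /* #SS */
    case 13: /* #GP */
    case 21: /* #CP */
        return CONTRIBUTORY;
    case 14: /* #PF */
    case 20: /* #VE */
        return PAGE_FAULT_CLASS;
    default:
        return BENIGN;
    }
}

/* Does a second exception during delivery of the first yield #DF? */
static int causes_double_fault(unsigned int first, unsigned int second)
{
    enum exc_class f = classify(first), s = classify(second);

    return (f == CONTRIBUTORY && s == CONTRIBUTORY) ||
           (f == PAGE_FAULT_CLASS &&
            (s == CONTRIBUTORY || s == PAGE_FAULT_CLASS));
}
```

With this, the #NP (vector 11) raised by a non-present descriptor for #CP
(vector 21) or #VE (vector 20) indeed promotes to #DF, whereas it would not
for a benign vector.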

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 14:40:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:40:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJhz-0006bm-Iq; Thu, 28 May 2020 14:40:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNTM=7K=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jeJhy-0006bh-6W
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:40:34 +0000
X-Inumbo-ID: 2f164236-a0f1-11ea-81bc-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f164236-a0f1-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 14:40:33 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: zOZLCvdBn5STWZEqfyirtPxo/qNWkt5261DA362n/Xlg/i6hH2Kvgsy0jM7Ykg2t1dtPu66/pj
 oWDMByik9nM8LB0tFbGPzk7ZVn3hxPHFTK8jtthhw0PnFqnNlVfn9TPcsLwwXUWHZ4EwCnff50
 iscF7Kjwl2rpUZxkIE3KXmx8VTZ+SuKDFD86Xp8ZIltz8ZeUPU6Rrr9J2ntcfRY0DNr6RPTv/t
 Zwp9HiAMBJ9zSrsxa37u4hC7e/J0SLe4pF1VDkAvrrposZlMgHNSOUAY+tog4kP/JVdMoUmGL4
 7yY=
X-SBRS: 2.7
X-MesageID: 19406324
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="19406324"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 0/3] Clang/LLVM build fixes
Date: Thu, 28 May 2020 16:40:20 +0200
Message-ID: <20200528144023.10814-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello,

A couple of build fixes for Clang/LLVM. First patch was already sent,
patches #2 and #3 are new as a result of recent commits.

Thanks, Roger.

Roger Pau Monne (3):
  x86/mm: do not attempt to convert _PAGE_GNTTAB to a boolean
  build32: don't discard .shstrtab in linker script
  clang: don't define nocall

 xen/arch/x86/boot/build32.lds | 5 +++++
 xen/arch/x86/mm.c             | 4 +++-
 xen/include/xen/compiler.h    | 6 +++++-
 3 files changed, 13 insertions(+), 2 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 14:40:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:40:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJi3-0006bx-Qy; Thu, 28 May 2020 14:40:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNTM=7K=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jeJi3-0006bs-09
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:40:39 +0000
X-Inumbo-ID: 307906d6-a0f1-11ea-81bc-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 307906d6-a0f1-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 14:40:35 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: sU2tFkgZwZH7RR6JIykLm+24ucLQq/ISXqXaqfdXSK70ubCNcimGSaTkRQMkKYPwvZoD9u3H3K
 u5lpE8lSe50tvxqDJRPca5ff+kY4wPKYjjPtVOEzxN72byOJ7XWeGF0b4qzmQIXWyktKCymmeI
 8Gn21imQ/6E6nwIuEVUqBemenlq9BZ/emfRDMHrqG0QUhr/j9Lg7u6UD8xB1hxXm+Mma5rOs30
 gdinRYU9Z3l5EurwvImTMwSoH4F/LWOIUlc71H8DJgh4H65E4gQnU8CaKlSEeLLm+cNc1VdSwK
 Boo=
X-SBRS: 2.7
X-MesageID: 18922173
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="18922173"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 1/3] x86/mm: do not attempt to convert _PAGE_GNTTAB to a
 boolean
Date: Thu, 28 May 2020 16:40:21 +0200
Message-ID: <20200528144023.10814-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528144023.10814-1-roger.pau@citrix.com>
References: <20200528144023.10814-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Clang 10 complains with:

mm.c:1239:10: error: converting the result of '<<' to a boolean always evaluates to true
      [-Werror,-Wtautological-constant-compare]
    if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
         ^
xen/include/asm/x86_64/page.h:161:25: note: expanded from macro '_PAGE_GNTTAB'
#define _PAGE_GNTTAB (1U<<22)
                        ^

Remove the conversion of _PAGE_GNTTAB to a boolean and instead use a
preprocessor conditional, so the code is only built when _PAGE_GNTTAB
is defined to a non-zero value.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Use a preprocessor conditional.
---
 xen/arch/x86/mm.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 54980b4eb1..7bcac17e2e 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -1236,7 +1236,8 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
      * (Note that the undestroyable active grants are not a security hole in
      * Xen. All active grants can safely be cleaned up when the domain dies.)
      */
-    if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
+#if _PAGE_GNTTAB
+    if ( (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
          !l1e_owner->is_shutting_down && !l1e_owner->is_dying )
     {
         gdprintk(XENLOG_WARNING,
@@ -1244,6 +1245,7 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
                  l1e_get_intpte(l1e));
         domain_crash(l1e_owner);
     }
+#endif
 
     /*
      * Remember we didn't take a type-count of foreign writable mappings
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 14:40:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:40:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJi9-0006cS-2h; Thu, 28 May 2020 14:40:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNTM=7K=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jeJi8-0006cM-0i
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:40:44 +0000
X-Inumbo-ID: 318dd56a-a0f1-11ea-81bc-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 318dd56a-a0f1-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 14:40:36 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: n0X8arZrp+V6NldKp9SgBONEnoWCNEfodvtEjriYw2SWvek747YOBU8ZTr1bEibCB3W1RbTfCE
 hUjUStG/LOStGHyfxQArWMBR2RU1P2DoYgIMP/cqoYoRuMglw34XWImAhxfWzUf56qpnh1AjaL
 Kfn9ewuEATU/dLbzSI2Mb3X7rlVE8Eg48tY1u5SZJfvDbLOYFaiNsQ6Lpjym0wRwNavybmCdNJ
 +1tAs8q9NZGR+GVK9ySDQXshjbr/405zed5LGXWBWYc5GWYbA+H8S8RkZZGBgDpLceHDs8Lp1n
 bpM=
X-SBRS: 2.7
X-MesageID: 19406330
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="19406330"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 2/3] build32: don't discard .shstrtab in linker script
Date: Thu, 28 May 2020 16:40:22 +0200
Message-ID: <20200528144023.10814-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528144023.10814-1-roger.pau@citrix.com>
References: <20200528144023.10814-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The LLVM linker (LLD) doesn't support discarding .shstrtab, and complains with:

ld -melf_i386_fbsd -N -T build32.lds -o reloc.lnk reloc.o
ld: error: discarding .shstrtab section is not allowed

Add an explicit .shstrtab section to the linker script after the text
section in order to make LLVM LD happy.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/boot/build32.lds | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/xen/arch/x86/boot/build32.lds b/xen/arch/x86/boot/build32.lds
index 97454b40ff..5bd42ee4d9 100644
--- a/xen/arch/x86/boot/build32.lds
+++ b/xen/arch/x86/boot/build32.lds
@@ -50,6 +50,11 @@ SECTIONS
         *(.got.plt)
   }
 
+  .shstrtab : {
+        /* Discarding .shstrtab is not supported by LLD (LLVM LD). */
+        *(.shstrtab)
+  }
+
   /DISCARD/ : {
         /*
          * Discard everything else, to prevent linkers from putting
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 14:40:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:40:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJiD-0006d7-Ap; Thu, 28 May 2020 14:40:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eNTM=7K=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1jeJiD-0006cz-0D
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:40:49 +0000
X-Inumbo-ID: 33b9ef18-a0f1-11ea-81bc-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 33b9ef18-a0f1-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 14:40:40 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: B0tPt1oZmZtfZVk0/L0NGOLQ+wfZvyCCx2fkWMTTmrRNQcfGuApvslZJ44ku2jKqeCz3HbY1Wb
 ma8CcyoNOAm91tGEuQ9dldiOkBM1Zn3xCh5ojXFlP5nblUVGHs49aNn1BWjeZoMvP4zOQpqdCf
 2TbfKobJwDikEaNjr4VKWLjG07LiaQnHah0jXfLbxB8F40YwP2mZBCucnyL95enV//7LT5dtV1
 mj3UsWRczggiAFkFnckph0GZmdcyWIDzDXlsbHTAVbA0TkolSIQAzgVkmtX3se5JciIsY4QbtA
 krY=
X-SBRS: 2.7
X-MesageID: 18922185
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="18922185"
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH v2 3/3] clang: don't define nocall
Date: Thu, 28 May 2020 16:40:23 +0200
Message-ID: <20200528144023.10814-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528144023.10814-1-roger.pau@citrix.com>
References: <20200528144023.10814-1-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, Ian
 Jackson <ian.jackson@eu.citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Clang doesn't support the error attribute, and the possible equivalents
like diagnose_if don't seem to work well in this case, as they trigger
even when the function is not called (just by it being used by the
APPEND_CALL macro).

Define nocall to a noop on clang until a proper solution can be found.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/xen/compiler.h | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
index c22439b7a4..225e09e5f7 100644
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -20,7 +20,11 @@
 
 #define __weak        __attribute__((__weak__))
 
-#define nocall        __attribute__((error("Nonstandard ABI")))
+#if !defined(__clang__)
+# define nocall        __attribute__((error("Nonstandard ABI")))
+#else
+# define nocall
+#endif
 
 #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
 #define unreachable() do {} while (1)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 14:41:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:41:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJib-0006jo-Ne; Thu, 28 May 2020 14:41:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeJia-0006jS-NL
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:41:12 +0000
X-Inumbo-ID: 45cc651e-a0f1-11ea-a7e5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45cc651e-a0f1-11ea-a7e5-12813bfff9fa;
 Thu, 28 May 2020 14:41:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 1367DB175;
 Thu, 28 May 2020 14:41:08 +0000 (UTC)
Subject: Re: [PATCH v2 08/14] x86/cpu: Adjust reset_stack_and_jump() to be
 shadow stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-9-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6611fd1a-20cd-3a20-33ba-8b7ccc03c53f@suse.com>
Date: Thu, 28 May 2020 16:41:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-9-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> We need to unwind up to the supervisor token.  See the comment for details.
> 
> The use of UNLIKELY_END_SECTION in this case highlights that it isn't safe
> when it isn't the final statement of an asm().  Adjust all declarations with a
> newline.

That's only one perspective to take. I'd appreciate it if you undid this.
The intention has always been ...

> --- a/xen/include/asm-x86/current.h
> +++ b/xen/include/asm-x86/current.h
> @@ -124,13 +124,55 @@ unsigned long get_stack_dump_bottom (unsigned long sp);
>  # define CHECK_FOR_LIVEPATCH_WORK ""
>  #endif
>  
> +#ifdef CONFIG_XEN_SHSTK
> +/*
> + * We need to unwind the primary shadow stack to its supervisor token, located
> + * at 0x5ff8 from the base of the stack blocks.
> + *
> + * Read the shadow stack pointer, subtract it from 0x5ff8, divide by 8 to get
> + * the number of slots needing popping.
> + *
> + * INCSSPQ can't pop more than 255 entries.  We shouldn't ever need to pop
> + * that many entries, and getting this wrong will cause us to #DF later.  Turn
> + * it into a BUG() now for fractionally easier debugging.
> + */
> +# define SHADOW_STACK_WORK                                      \
> +    "mov $1, %[ssp];"                                           \
> +    "rdsspd %[ssp];"                                            \
> +    "cmp $1, %[ssp];"                                           \
> +    "je .L_shstk_done.%=;" /* CET not active?  Skip. */         \
> +    "mov $%c[skstk_base], %[val];"                              \
> +    "and $%c[stack_mask], %[ssp];"                              \
> +    "sub %[ssp], %[val];"                                       \
> +    "shr $3, %[val];"                                           \
> +    "cmp $255, %[val];" /* More than 255 entries?  Crash. */    \
> +    UNLIKELY_START(a, shstk_adjust)                             \

... to put suitable separators at the use sites (which, seeing all
the other semicolons here, would be a semicolon then, not a
newline/tab combination). If a tab were to be put anywhere, then like
this:

#define UNLIKELY_START_SECTION "\t.pushsection .text.unlikely,\"ax\""
#define UNLIKELY_END_SECTION   "\t.popsection"

as directives (from a cross arch perspective) are supposed to be
indented; it's just that on most arches it doesn't matter (and
hence is irrelevant here).

Preferably with this aspect undone
Reviewed-by: Jan Beulich <jbeulich@suse.com>

As to the comment, could I talk you into replacing the two 0x5ff8
instances by something like "last word of the shadow stack"?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 14:53:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:53:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJuH-0007vN-QO; Thu, 28 May 2020 14:53:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WTQv=7K=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jeJuG-0007vH-Jx
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:53:16 +0000
X-Inumbo-ID: f5a360ea-a0f2-11ea-a7e7-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5a360ea-a0f2-11ea-a7e7-12813bfff9fa;
 Thu, 28 May 2020 14:53:15 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: KsjsF3/PIY/VQi7ER7HxCfPCEmqvkVIg6mZOtHHy90LIffC+dQZN6474J4Mi5yYc7LSO33N7cu
 3FoDa/ykZCXchY+CpnX+7q0nbhRfuCM0Xu9v5xnCimOpJ7od3NCdC+foWQG3k+TEbBRAQXed2G
 MH/htsFJzEW5za5tWFq7MvTgWxDysGeW/cFn7mzushHd55FySO0maXd/9JivQLVNgv7KQo0cLx
 0juo//xEdJBKPcKh2e1b4d1NTZX9M29A27weM2OCY4tvbmoe2bTZ8h6DF0C9/aocS6FraE4NOn
 7V4=
X-SBRS: 2.7
X-MesageID: 19021734
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="19021734"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24271.53336.125796.634580@mariner.uk.xensource.com>
Date: Thu, 28 May 2020 15:53:12 +0100
To: Julien Grall <julien@xen.org>
Subject: Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
In-Reply-To: <e4e7e515-587a-ad81-c9b7-b7cfa69108be@xen.org>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-23-ian.jackson@eu.citrix.com>
 <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
 <24261.17724.382954.918761@mariner.uk.xensource.com>
 <e4e7e515-587a-ad81-c9b7-b7cfa69108be@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
> On 20/05/2020 15:57, Ian Jackson wrote:
> > Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
> >> On 19/05/2020 20:02, Ian Jackson wrote:
> >>>    	# Debian doesn't currently know what bootloader to install in
> >>>    	# a Xen guest on ARM. We install pv-grub-menu above which
> >>
> >> OOI, what does Debian install for x86 HVM guest? Is there any ticket
> >> tracking this issue?
> > 
> > On x86, it installs grub.  (grub2, x86, PC, to be precise.)
> 
> I have just realized that on x86 you will always have a firmware in the 
> guest. On Arm we commonly boot the kernel directly.

Yes.  At least, for HVM.

> So maybe we are closer to PV here. Do you also install GRUB in that case?

It's Complicated.  There are several options, but the usual ones are:

1. pygrub: Install some version of grub, which generates
   /boot/grub.cfg.  It doesn't matter very much which version of grub
   because grub.cfg is read by pygrub in dom0 and that fishes out the
   kernel and initrd.  Many of osstest's tests do this.

2. host kernel: Simply pass the dom0 kernel *and initramfs* as the
   kernel image to the guest.  This works if the kernel has the right
   modules for the guest storage, which it can easily do.  On x86 an
   amd64 kernel can run an i386 userland.

3. pvgrub.
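
Roughly, the three options correspond to xl guest-config stanzas like
the following (paths and image names are illustrative, not what osstest
actually uses):

```
# 1. pygrub: a dom0-side tool parses the guest's /boot/grub.cfg
bootloader = "pygrub"

# 2. host kernel: pass dom0's kernel and initramfs directly to the guest
kernel  = "/boot/vmlinuz"
ramdisk = "/boot/initrd.img"

# 3. pvgrub: a grub binary built as a PV kernel runs inside the guest
kernel = "/usr/lib/grub-xen/grub-x86_64-xen.bin"
```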

> Note that we do support EDK2 at least on Arm64. It would be nice to get 
> some tests for it in Osstest in the future.

Is this the same as "EADK" ?  I'm afraid I don't follow...

> > I'm not aware of any ticket or bug about this.
> 
> It might be worth creating one then.

Where should I do that ?  I guess I mean, in which bugtracker ?

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 28 14:55:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:55:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJwS-00083l-6Y; Thu, 28 May 2020 14:55:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=FdVz=7K=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jeJwQ-00083g-9D
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:55:30 +0000
X-Inumbo-ID: 45253ed6-a0f3-11ea-a7e7-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45253ed6-a0f3-11ea-a7e7-12813bfff9fa;
 Thu, 28 May 2020 14:55:29 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7C313AEB8;
 Thu, 28 May 2020 14:55:27 +0000 (UTC)
Message-ID: <a959e9e807dc1f832d151ab72f324f2c084c2461.camel@suse.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Thu, 28 May 2020 16:55:26 +0200
In-Reply-To: <cd566bb2-753f-b0eb-3c6a-bc2dc01cf37c@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
 <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
 <cd566bb2-753f-b0eb-3c6a-bc2dc01cf37c@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-GJSCESBdF8F/HKfEl0Tn"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-GJSCESBdF8F/HKfEl0Tn
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2020-05-27 at 08:17 +0200, Jan Beulich wrote:
> On 27.05.2020 00:00, Dario Faggioli wrote:
> >
> > Cache oriented runqueue organization will be the subject of another
> > patch series, and that's why I kept them out. However, that's a
> > rather
> > special case with a lot in common to SMT...
>
> I didn't think of cache sharing in particular, but about the
> concept of compute units vs hyperthreads in general.
>
Ok.

> > Just in case, is there a
> > way to identify them easily, like with a mask or something, in the
> > code
> > already?
>
> cpu_sibling_mask still gets used for both, so there's no mask
> to use. As per set_cpu_sibling_map() you can look at
> cpu_data[].compute_unit_id to tell, but that's of course x86-
> specific (as is the entire compute unit concept).
>
Right. And thanks for the pointers.

But then, what I understand from having a look there is that I can
indeed use (again, appropriately wrapped) x86_num_siblings to tell, in
this function, whether a CPU has any siblings and, if so, how many HT
(Intel) or CU (AMD) siblings it has in total, although some of them may
currently be offline.

Which means I will be treating HTs and CUs the same, which, thinking
more about it (and thinking specifically of CUs, rather than of any
cache-sharing relationship), does make sense for this feature.

Does this make sense, or am I missing or misinterpreting anything?

> > So I think I will demote this printk as a XENLOG_DEBUG one (and
> > will
> > also get rid of others that were already DEBUG, but not super
> > useful,
> > after some more thinking).
>
> Having seen Jürgen's reply as well as what you further wrote below,
> I'd still like to point out that even XENLOG_DEBUG isn't quiet
> enough: there may be some value to such a debugging message to you,
> but it may be mainly spam to e.g. me, who still has a need to run
> with loglvl=all in the common case. Let's not forget, the context
> in which the underlying topic came up was pretty-many-core AMD CPUs.
>
Good point indeed about DEBUG potentially being an issue as well. So
yes, as announced in my reply to Juergen, I was going with the recap in
cpupool_init().

However, that looks like it requires a new hook in the scheduler's
interface, as the information is scheduler specific but at the same
time I don't think we want to have the exact same information from
either dump_settings() or dump_cpu_state(), which is all we have... :-/

I'll see about it.

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-GJSCESBdF8F/HKfEl0Tn
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7P0N4ACgkQFkJ4iaW4
c+6TERAA6faSMR4AkggC2CwbPyZe10EgJ6TlzDMXSfwSdhv/9sxNtvtS2Ej/fdSn
9W+icF94JvzKP5iOBP6laN2q3ssbLp8ut4XWSWweTPBdnpRphF+bSyZ6Ca47Ra/0
lRQeO7PtaGvzEfA8Kz6ckXd4xmursjzSc1frx9ZLx48aQFPJb2/8/gWg9ZwsZswB
fAhzlYOq86i6SSqXUPGs0ztv5nzU773gQ7w5zAFS1XuemgZqs195wDxV54pqQB7k
ssMPOYawFoKmq6WY2ME1kK36d0JMon9eIUfdIHcAjho1CQH3u6CwcV/qjAS92e9C
1inXTQRtlWphpBPctrTY4Zf4XUi2GyonGrgCr5MtaEHNMMWFzQOaepGTYnUDsgLy
LJnaqVwio/b39JQpmF02zQ8w7cJY/VhKmA17Ujmv/4ZVbDJmSgfHdx3gJ3D+GLkv
0AcwionEOjYRqN7AQnZ+1cDxnihUEA5/+6owXGLg5OrFGLCKna8BjfMa6c4/kf7G
LLeFZQPDW0TeM6iiTe3YDt9Ol3Afkc/y61QMuMdO/6JoD0hYmzqdvrkyp5jEFKjc
RbYKe4Nvq+rwVOT0+Sx1hl3XqJn3+h9eVs7mmsxEEjY0Am4svMgN9dH2y8bj8+k4
sk6H5wpDL29Tma5yEXpAHOMbGZkkJcC17Yn1Y4MIW7OuPBB+rYU=
=Op7o
-----END PGP SIGNATURE-----

--=-GJSCESBdF8F/HKfEl0Tn--



From xen-devel-bounces@lists.xenproject.org Thu May 28 14:57:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 14:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeJxu-0008AN-I5; Thu, 28 May 2020 14:57:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WTQv=7K=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jeJxt-0008AH-FY
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 14:57:01 +0000
X-Inumbo-ID: 7b729da8-a0f3-11ea-a7e7-12813bfff9fa
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b729da8-a0f3-11ea-a7e7-12813bfff9fa;
 Thu, 28 May 2020 14:57:00 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: XPQ4X+dE/6hyPkWA4jyUYtycJBETPYjzKjQDpDV2jMZLl6for62rN6bu0GdRjIapB3p6HWMozN
 3mPz86sCuVkMDn4JNpauqJ5lAmCBPrY7PdaET+A+qs3ZHfaim6TEbz2BC7JDeP9T4fvI2q67ss
 PRDcqCMSI5iCYBxbjoW/KdxpuDaY3e3MAtWZyZW8uT+VDR6rZXNxH0dOIFQ3XWhdbnyZP4LIcZ
 qvliMunFzHgvdI1nI1EizD5vwDLQAuvlUA9MIJQEbberqp91cSAWehT1OpsGDG0k4Q5X+VAKUG
 9wk=
X-SBRS: 2.7
X-MesageID: 18964615
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="18964615"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24271.53557.114994.926329@mariner.uk.xensource.com>
Date: Thu, 28 May 2020 15:56:53 +0100
To: Paul Durrant <paul@xen.org>
Subject: Re: [PATCH] libxl: stop libxl_domain_info() consuming massive amounts
 of stack
In-Reply-To: <20200528114801.20241-1-paul@xen.org>
References: <20200528114801.20241-1-paul@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Paul
 Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paul Durrant writes ("[PATCH] libxl: stop libxl_domain_info() consuming massive amounts of stack"):
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently an array of 1024 xc_domaininfo_t is declared on stack. That alone
> consumes ~112k.

Wow.

> Since libxl_domain_info() creates a new gc this patch simply
> uses it to allocate the array instead.

Thanks.

> +    info = libxl__calloc(gc, 1024, sizeof(*info));

Why not GCNEW_ARRAY ?

That avoids a possible bug with the wrong number of *s passed to sizeof
(although in this case you seem to have it right...)

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 28 14:59:06 2020
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Ian Jackson'" <ian.jackson@citrix.com>
References: <20200528114801.20241-1-paul@xen.org>
 <24271.53557.114994.926329@mariner.uk.xensource.com>
In-Reply-To: <24271.53557.114994.926329@mariner.uk.xensource.com>
Subject: RE: [PATCH] libxl: stop libxl_domain_info() consuming massive amounts
 of stack
Date: Thu, 28 May 2020 15:58:48 +0100
Message-ID: <000c01d63500$7f00f690$7d02e3b0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQHpe7CJHLMN88U00yeijiyjwE50VgJzn8buqINsV6A=
Reply-To: paul@xen.org
Cc: 'Anthony Perard' <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org, 'Paul Durrant' <pdurrant@amazon.com>,
 'Wei Liu' <wl@xen.org>

> -----Original Message-----
> From: Ian Jackson <ian.jackson@citrix.com>
> Sent: 28 May 2020 15:57
> To: Paul Durrant <paul@xen.org>
> Cc: xen-devel@lists.xenproject.org; Paul Durrant <pdurrant@amazon.com>; Wei Liu <wl@xen.org>; Anthony
> Perard <anthony.perard@citrix.com>
> Subject: Re: [PATCH] libxl: stop libxl_domain_info() consuming massive amounts of stack
> 
> Paul Durrant writes ("[PATCH] libxl: stop libxl_domain_info() consuming massive amounts of stack"):
> > From: Paul Durrant <pdurrant@amazon.com>
> >
> > Currently an array of 1024 xc_domaininfo_t is declared on stack. That alone
> > consumes ~112k.
> 
> Wow.
> 
> > Since libxl_domain_info() creates a new gc this patch simply
> > uses it to allocate the array instead.
> 
> Thanks.
> 
> > +    info = libxl__calloc(gc, 1024, sizeof(*info));
> 
> Why not GCNEW_ARRAY?

'Cos I didn't know about that one :-)

> 
> That avoids a possible bug with the wrong number of *s in the sizeof (although
> in this case you seem to have it right...)
> 

Cool. I'll send a v2 in a mo'.

  Paul

> Thanks,
> Ian.



From xen-devel-bounces@lists.xenproject.org Thu May 28 15:12:03 2020
Subject: Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
To: Ian Jackson <ian.jackson@citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-23-ian.jackson@eu.citrix.com>
 <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
 <24261.17724.382954.918761@mariner.uk.xensource.com>
 <e4e7e515-587a-ad81-c9b7-b7cfa69108be@xen.org>
 <24271.53336.125796.634580@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d4fc0391-9f40-86ad-f304-70bb0cd73e9b@xen.org>
Date: Thu, 28 May 2020 16:11:51 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <24271.53336.125796.634580@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>

Hi,

On 28/05/2020 15:53, Ian Jackson wrote:
> Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
>> On 20/05/2020 15:57, Ian Jackson wrote:
>>> Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
>>>> On 19/05/2020 20:02, Ian Jackson wrote:
>>>>>     	# Debian doesn't currently know what bootloader to install in
>>>>>     	# a Xen guest on ARM. We install pv-grub-menu above which
>>>>
>>>> OOI, what does Debian install for x86 HVM guest? Is there any ticket
>>>> tracking this issue?
>>>
>>> On x86, it installs grub.  (grub2, x86, PC, to be precise.)
>>
>> I have just realized that on x86 you will always have a firmware in the
>> guest. On Arm we commonly boot the kernel directly.
> 
> Yes.  At least, for HVM.
> 
>> So maybe we are closer to PV here. Do you also install GRUB in that case?
> 
> It's Complicated.  There are several options, but the usual ones are:
> 
> 1. pygrub: Install some version of grub, which generates
>     /boot/grub.cfg.  It doesn't matter very much which version of grub
>     because grub.cfg is read by pygrub in dom0 and that fishes out the
>     kernel and initrd.  Many of osstest's tests do this.
> 
> 2. host kernel: Simply pass the dom0 kernel *and initramfs* as the
>     kernel image to the guest.  This works if the kernel has the right
>     modules for the guest storage, which it can easily do.  On x86 an
>     amd64 kernel can run an i386 userland.
> 
> 3. pvgrub.

Thanks for the explanation. How do you select it in Osstest today?

Is it an option for the Debian installer, or do you do it manually as part
of your install script?

>> Note that we do support EDK2 at least on Arm64. It would be nice to get
>> some tests for it in Osstest in the future.
> 
> Is this the same as "EADK" ?  I'm afraid I don't follow...

Sorry, I should have been more precise. I meant that we are able to boot 
an Arm guest using UEFI, as we added support in EDK2 (I think in Xen we 
use the term ovmf).

When using EFI, the guest can boot exactly the same way as it would on 
bare metal. The toolstack just loads the firmware into guest memory.

IIRC we already have regular EFI testing on x86 in Osstest. I am 
thinking of extending it to Arm at some point.

> 
>>> I'm not aware of any ticket or bug about this.
>>
>> It might be worth creating one then.
> 
> Where should I do that ?  I guess I mean, in which bugtracker ?

 From the comment in the code, I would assume this is a bug/enhancement 
against the Debian installer. But I may have misunderstood it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 28 15:13:46 2020
From: Paul Durrant <paul@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2] libxl: stop libxl_domain_info() consuming massive amounts
 of stack
Date: Thu, 28 May 2020 16:13:30 +0100
Message-Id: <20200528151330.20964-1-paul@xen.org>
X-Mailer: git-send-email 2.17.1
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>

From: Paul Durrant <pdurrant@amazon.com>

Currently an array of 1024 xc_domaininfo_t is declared on stack. That alone
consumes ~112k. Since libxl_domain_info() creates a new gc this patch simply
uses it to allocate the array instead.
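A back-of-the-envelope check of the ~112k figure, assuming roughly 112 bytes per entry (the exact sizeof(xc_domaininfo_t) depends on Xen version and architecture, so the constant below is an assumption):

```c
#include <stddef.h>

/* Assumed per-entry size; sizeof(xc_domaininfo_t) varies between Xen
 * versions and architectures. */
enum { ENTRY_BYTES = 112, ENTRIES = 1024 };

/* Stack cost of the old `xc_domaininfo_t info[1024];` declaration. */
size_t old_stack_cost(void)
{
    return (size_t)ENTRY_BYTES * ENTRIES;   /* 114688 bytes = 112 KiB */
}
```

A frame that large is a problem for callers that run libxl on threads with small stacks, which is why moving the array onto the gc-managed heap matters.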

Signed-off-by: Paul Durrant <pdurrant@amazon.com>
---
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Cc: Wei Liu <wl@xen.org>
Cc: Anthony PERARD <anthony.perard@citrix.com>

This is small and IMO it would be nice to have this in 4.14 but I'd like an
opinion from a maintainer too.

v2:
 - Use GCNEW_ARRAY
---
 tools/libxl/libxl_domain.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_domain.c b/tools/libxl/libxl_domain.c
index fef2cd4e13..bb21087eb8 100644
--- a/tools/libxl/libxl_domain.c
+++ b/tools/libxl/libxl_domain.c
@@ -314,11 +314,13 @@ libxl_dominfo * libxl_list_domain(libxl_ctx *ctx, int *nb_domain_out)
 {
     libxl_dominfo *ptr = NULL;
     int i, ret;
-    xc_domaininfo_t info[1024];
+    xc_domaininfo_t *info;
     int size = 0;
     uint32_t domid = 0;
     GC_INIT(ctx);
 
+    GCNEW_ARRAY(info, 1024);
+
     while ((ret = xc_domain_getinfolist(ctx->xch, domid, 1024, info)) > 0) {
         ptr = libxl__realloc(NOGC, ptr, (size + ret) * sizeof(libxl_dominfo));
         for (i = 0; i < ret; i++) {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 28 15:27:10 2020
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Date: Thu, 28 May 2020 16:25:31 +0100
Message-Id: <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590675919.git.bertrand.marquis@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
Content-Type: text/plain
MIME-Version: 1.0
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>

At the moment on Arm, a Linux guest running with KPTI enabled will
cause the following error when a context switch happens in user mode:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0

This patch modifies the runstate handling to map the area provided by
the guest into Xen during the hypercall.
This removes the guest virtual-to-physical conversion during context
switches, which fixes the bug and improves performance by avoiding
page-table walks on every context switch.
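The context-switch path must not block, which is why the patch below takes the new runstate_guest_lock with spin_trylock() and simply skips the update when the lock is contended (e.g. while the guest is re-registering the area). A minimal sketch of that trylock-and-skip pattern, using C11 atomics as a hypothetical stand-in for Xen's spinlocks (the struct and field names are illustrative, not Xen's):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <string.h>

struct runstate { unsigned long state_entry_time; };

/* Hypothetical, stripped-down vcpu: `locked` stands in for
 * v->runstate_guest_lock and `mapped` for the globally mapped area. */
struct vcpu_sketch {
    atomic_bool locked;
    struct runstate *mapped;     /* NULL until the guest registers an area */
    struct runstate current;     /* the hypervisor's authoritative copy */
};

/* Trylock-guarded update: if the lock is already held, skip this update
 * rather than block the context-switch path.  Returns true iff the
 * mapped guest area was written. */
bool try_update(struct vcpu_sketch *v)
{
    if (atomic_exchange(&v->locked, true))   /* "trylock" failed */
        return false;

    bool wrote = false;
    if (v->mapped) {
        memcpy(v->mapped, &v->current, sizeof(*v->mapped));
        wrote = true;
    }
    atomic_store(&v->locked, false);
    return wrote;
}
```

Because the area stays mapped in Xen, the update is a plain memcpy into hypervisor-visible memory instead of a copy_to_guest that walks the guest's page tables.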

--
In its current state, this patch only works on Arm and needs to be
fixed on x86 (see the #error in domain.c for the missing get_page_from_gva).

Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
---
 xen/arch/arm/domain.c   | 32 +++++++++-------
 xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
 xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++-------
 xen/include/xen/sched.h | 11 ++++--
 4 files changed, 124 insertions(+), 54 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 31169326b2..799b0e0103 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -278,33 +278,37 @@ static void ctxt_switch_to(struct vcpu *n)
 /* Update per-VCPU guest runstate shared memory area (if registered). */
 static void update_runstate_area(struct vcpu *v)
 {
-    void __user *guest_handle = NULL;
-    struct vcpu_runstate_info runstate;
+    struct vcpu_runstate_info *runstate;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
+    /* XXX why do we accept not to block here */
+    if ( !spin_trylock(&v->runstate_guest_lock) )
         return;
 
-    memcpy(&runstate, &v->runstate, sizeof(runstate));
+    runstate = runstate_guest(v);
+
+    if (runstate == NULL)
+    {
+        spin_unlock(&v->runstate_guest_lock);
+        return;
+    }
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
-        guest_handle--;
-        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
         smp_wmb();
+        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
     }
 
-    __copy_to_guest(runstate_guest(v), &runstate, 1);
+    memcpy(runstate, &v->runstate, sizeof(v->runstate));
 
-    if ( guest_handle )
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
         smp_wmb();
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
+        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
     }
+
+    spin_unlock(&v->runstate_guest_lock);
 }
 
 static void schedule_tail(struct vcpu *prev)
diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 6327ba0790..10c6ceb561 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1642,57 +1642,62 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
 /* Update per-VCPU guest runstate shared memory area (if registered). */
 bool update_runstate_area(struct vcpu *v)
 {
-    bool rc;
     struct guest_memory_policy policy = { .nested_guest_mode = false };
-    void __user *guest_handle = NULL;
-    struct vcpu_runstate_info runstate;
+    struct vcpu_runstate_info *runstate;
 
-    if ( guest_handle_is_null(runstate_guest(v)) )
+    /* XXX: should we return false ? */
+    if ( !spin_trylock(&v->runstate_guest_lock) )
         return true;
 
-    update_guest_memory_policy(v, &policy);
+    runstate = runstate_guest(v);
 
-    memcpy(&runstate, &v->runstate, sizeof(runstate));
+    if (runstate == NULL)
+    {
+        spin_unlock(&v->runstate_guest_lock);
+        return true;
+    }
+
+    update_guest_memory_policy(v, &policy);
 
     if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        guest_handle = has_32bit_shinfo(v->domain)
-            ? &v->runstate_guest.compat.p->state_entry_time + 1
-            : &v->runstate_guest.native.p->state_entry_time + 1;
-        guest_handle--;
-        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
+        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
         smp_wmb();
+        if ( has_32bit_shinfo(v->domain) )
+            v->runstate_guest.compat->state_entry_time |= XEN_RUNSTATE_UPDATE;
+        else
+            v->runstate_guest.native->state_entry_time |= XEN_RUNSTATE_UPDATE;
     }
 
     if ( has_32bit_shinfo(v->domain) )
     {
         struct compat_vcpu_runstate_info info;
-
-        XLAT_vcpu_runstate_info(&info, &runstate);
+        XLAT_vcpu_runstate_info(&info, &v->runstate);
-        __copy_to_guest(v->runstate_guest.compat, &info, 1);
-        rc = true;
+        memcpy(v->runstate_guest.compat, &info, sizeof(info));
     }
     else
-        rc = __copy_to_guest(runstate_guest(v), &runstate, 1) !=
-             sizeof(runstate);
+        memcpy(runstate, &v->runstate, sizeof(v->runstate));
 
-    if ( guest_handle )
+    if ( VM_ASSIST(v->domain, runstate_update_flag) )
     {
-        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
         smp_wmb();
-        __raw_copy_to_guest(guest_handle,
-                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
+        if ( has_32bit_shinfo(v->domain) )
+            v->runstate_guest.compat->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
+        else
+            v->runstate_guest.native->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
     }
 
+    spin_unlock(&v->runstate_guest_lock);
+
     update_guest_memory_policy(v, &policy);
 
-    return rc;
+    return true;
 }
 
 static void _update_runstate_area(struct vcpu *v)
 {
+    /* XXX: this should be removed */
     if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
          !(v->arch.flags & TF_kernel_mode) )
         v->arch.pv.need_update_runstate_area = 1;
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 7cc9526139..acc6f90ba3 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -161,6 +161,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
     v->dirty_cpu = VCPU_CPU_CLEAN;
 
     spin_lock_init(&v->virq_lock);
+    spin_lock_init(&v->runstate_guest_lock);
 
     tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
 
@@ -691,6 +692,66 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
     return 0;
 }
 
+static void unmap_runstate_area(struct vcpu *v, unsigned int lock)
+{
+    mfn_t mfn;
+
+    if ( !runstate_guest(v) )
+        return;
+
+    if ( lock )
+        spin_lock(&v->runstate_guest_lock);
+
+    mfn = domain_page_map_to_mfn(runstate_guest(v));
+
+    unmap_domain_page_global((void *)
+                            ((unsigned long)v->runstate_guest &
+                             PAGE_MASK));
+
+    put_page_and_type(mfn_to_page(mfn));
+    runstate_guest(v) = NULL;
+
+    if ( lock )
+        spin_unlock(&v->runstate_guest_lock);
+}
+
+static int map_runstate_area(struct vcpu *v,
+                    struct vcpu_register_runstate_memory_area *area)
+{
+    unsigned long offset = area->addr.p & ~PAGE_MASK;
+    void *mapping;
+    struct page_info *page;
+    size_t size = sizeof(struct vcpu_runstate_info);
+
+    ASSERT(runstate_guest(v) == NULL);
+
+    /* Do not allow an area crossing two pages. */
+    if ( offset > (PAGE_SIZE - size) )
+        return -EINVAL;
+
+#ifdef CONFIG_ARM
+    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
+#else
+    /* XXX: how to solve this one? */
+#error get_page_from_gva is not available on other archs
+#endif
+    if ( !page )
+        return -EINVAL;
+
+    mapping = __map_domain_page_global(page);
+
+    if ( mapping == NULL )
+    {
+        put_page_and_type(page);
+        return -ENOMEM;
+    }
+
+    runstate_guest(v) = (struct vcpu_runstate_info *)
+        ((unsigned long)mapping + offset);
+
+    return 0;
+}
+
 int domain_kill(struct domain *d)
 {
     int rc = 0;
@@ -727,7 +788,10 @@ int domain_kill(struct domain *d)
         if ( cpupool_move_domain(d, cpupool0) )
             return -ERESTART;
         for_each_vcpu ( d, v )
+        {
+            unmap_runstate_area(v, 0);
             unmap_vcpu_info(v);
+        }
         d->is_dying = DOMDYING_dead;
         /* Mem event cleanup has to go here because the rings 
          * have to be put before we call put_domain. */
@@ -1167,7 +1231,7 @@ int domain_soft_reset(struct domain *d)
 
     for_each_vcpu ( d, v )
     {
-        set_xen_guest_handle(runstate_guest(v), NULL);
+        unmap_runstate_area(v, 1);
         unmap_vcpu_info(v);
     }
 
@@ -1484,7 +1548,6 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
     case VCPUOP_register_runstate_memory_area:
     {
         struct vcpu_register_runstate_memory_area area;
-        struct vcpu_runstate_info runstate;
 
         rc = -EFAULT;
         if ( copy_from_guest(&area, arg, 1) )
@@ -1493,18 +1556,13 @@ long do_vcpu_op(int cmd, unsigned int vcpuid, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( !guest_handle_okay(area.addr.h, 1) )
             break;
 
-        rc = 0;
-        runstate_guest(v) = area.addr.h;
+        spin_lock(&v->runstate_guest_lock);
 
-        if ( v == current )
-        {
-            __copy_to_guest(runstate_guest(v), &v->runstate, 1);
-        }
-        else
-        {
-            vcpu_runstate_get(v, &runstate);
-            __copy_to_guest(runstate_guest(v), &runstate, 1);
-        }
+        unmap_runstate_area(v, 0);
+
+        rc = map_runstate_area(v, &area);
+
+        spin_unlock(&v->runstate_guest_lock);
 
         break;
     }
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ac53519d7f..ab87182e81 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -166,15 +166,18 @@ struct vcpu
     struct sched_unit *sched_unit;
 
     struct vcpu_runstate_info runstate;
+
+    spinlock_t      runstate_guest_lock;
+
 #ifndef CONFIG_COMPAT
 # define runstate_guest(v) ((v)->runstate_guest)
-    XEN_GUEST_HANDLE(vcpu_runstate_info_t) runstate_guest; /* guest address */
+    vcpu_runstate_info_t *runstate_guest; /* mapped address of guest copy */
 #else
 # define runstate_guest(v) ((v)->runstate_guest.native)
     union {
-        XEN_GUEST_HANDLE(vcpu_runstate_info_t) native;
-        XEN_GUEST_HANDLE(vcpu_runstate_info_compat_t) compat;
-    } runstate_guest; /* guest address */
+        vcpu_runstate_info_t *native;
+        vcpu_runstate_info_compat_t *compat;
+    } runstate_guest; /* mapped address of guest copy */
 #endif
     unsigned int     new_state;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 28 15:27:10 2020
From: Bertrand Marquis <bertrand.marquis@arm.com>
To: xen-devel@lists.xenproject.org
Subject: [RFC PATCH 0/1] Runstate error with KPTI
Date: Thu, 28 May 2020 16:25:30 +0100
Message-Id: <cover.1590675919.git.bertrand.marquis@arm.com>
X-Mailer: git-send-email 2.17.1
Content-Type: text/plain
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monné <roger.pau@citrix.com>

The following patch implements a solution to the bug occurring on Arm
with Linux guests that have KPTI enabled, seen during a context switch
from user mode:
(XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0

This is an answer to the discussion started here:
https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg00735.html

and a modification of the patches submitted here:
https://lists.xenproject.org/archives/html/xen-devel/2019-05/msg02320.html

This is submitted as an RFC as the solution currently only works on Arm,
and I would need some help with the x86 implementation.
On x86 this needs at least an equivalent of get_page_from_gva (see the
#error in domain.c) and implementations of the different runstate_update
functions.
Any help or suggestions on that would be welcome.

I also added some XXX comments in various places, as parts of the code of
the original patch I started from are not completely clear to me.

Bertrand Marquis (1):
  xen: Use a global mapping for runstate

 xen/arch/arm/domain.c   | 32 +++++++++-------
 xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
 xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++-------
 xen/include/xen/sched.h | 11 ++++--
 4 files changed, 124 insertions(+), 54 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 28 15:31:22 2020
MIME-Version: 1.0
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-2-ian.jackson@eu.citrix.com>
In-Reply-To: <20200519190230.29519-2-ian.jackson@eu.citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 28 May 2020 11:30:56 -0400
Message-ID: <CAKf6xpt-0MRhFnQz2VBfdYEgE8GtPCj=mK+t-aQ88uTsJtL_sw@mail.gmail.com>
Subject: Re: [OSSTEST PATCH 01/38] ts-logs-capture: Cope if xl shutdown leaves
 domain running for a bit
To: Ian Jackson <ian.jackson@eu.citrix.com>
Content-Type: text/plain; charset="UTF-8"
Cc: xen-devel <xen-devel@lists.xenproject.org>

On Tue, May 19, 2020 at 3:03 PM Ian Jackson <ian.jackson@eu.citrix.com> wrote:
>
> This seems mostly to affect buster but it could in principle affect
> earlier releases too I think.
>
> In principle it would be nice to fix this bug, and to have a proper
> test for it, but a reliable test is hard and an unreliable one is not
> useful.  So I guess we are going to have this workaround
> indefinitely...

`xl shutdown -w` waits for either domain shutdown or destruction in
wait_for_domain_deaths()
https://github.com/xen-project/xen/blob/master/tools/xl/xl_vmcontrol.c#L183

My understanding is that shutdown happens first, when the guest stops, and
destruction happens afterward, when all the resources are cleaned up.
So your race is that the domain has shut down, but it still shows up in
`xl list` since it hasn't been destroyed yet.

OpenXT has a hack to only wait for destruction:
https://github.com/OpenXT/xenclient-oe/blob/master/recipes-extended/xen/files/xl-shutdown-wait-for-domain-death.patch
I didn't write it, and the explanation isn't specific, but I think the
purpose is to ensure all resources are released before the OpenXT toolstack
proceeds with (blktap) cleanup.

Maybe a new `xl shutdown` flag to wait for domain destruction would be
worthwhile?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 28 15:33:02 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150414-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150414: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Versions-This: xen=9f3e9139fa6c3d620eb08dff927518fc88200b8d
X-Osstest-Versions-That: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 15:32:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150414 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150414/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    16 guest-start/debian.repeat fail REGR. vs. 150394

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150394
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150394
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150394
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150394
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150394
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150394
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150394
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150394
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150394
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150394
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  9f3e9139fa6c3d620eb08dff927518fc88200b8d
baseline version:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707

Last test of basis   150394  2020-05-27 06:35:55 Z    1 days
Testing same since   150414  2020-05-27 19:37:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Tamas K Lengyel <tamas@tklengyel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d89e5e65f3..9f3e9139fa  9f3e9139fa6c3d620eb08dff927518fc88200b8d -> master


From xen-devel-bounces@lists.xenproject.org Thu May 28 15:37:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 15:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeKaa-0003mm-1d; Thu, 28 May 2020 15:37:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WTQv=7K=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jeKaZ-0003mh-AJ
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 15:36:59 +0000
X-Inumbo-ID: 10ec624c-a0f9-11ea-9947-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10ec624c-a0f9-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 15:36:58 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: fW9UcRnGtaSb/vQbF641kwn586xwA1LwZHfddXB+nK3WepsVMyx1sDlsiOEs32UkPyqbzyc1NT
 fm3wZkgmXhG2Zq7OFSfRho9E86TMfhtFiqN1QSBQTlUj8omLtRSrYyCS0oH8WJFGU10loMclO0
 RBPaq0CI6ZInpHkMQhEzjDQWeWeOv0byxapIs+rS3m0r6RMPzzD3RanE4MHsTMScv9GZ9YxYpF
 FOUgd/FkjxPvGtyurdGDK4EDopUZRTzIBYWg3IybRLiMJUeT+KlXJgAIlFsjzlIQYO0QndLzj+
 m44=
X-SBRS: 2.7
X-MesageID: 18969184
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="18969184"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24271.55958.888390.570837@mariner.uk.xensource.com>
Date: Thu, 28 May 2020 16:36:54 +0100
To: Jason Andryuk <jandryuk@gmail.com>
Subject: Re: [OSSTEST PATCH 01/38] ts-logs-capture: Cope if xl shutdown leaves
 domain running for a bit
In-Reply-To: <CAKf6xpt-0MRhFnQz2VBfdYEgE8GtPCj=mK+t-aQ88uTsJtL_sw@mail.gmail.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-2-ian.jackson@eu.citrix.com>
 <CAKf6xpt-0MRhFnQz2VBfdYEgE8GtPCj=mK+t-aQ88uTsJtL_sw@mail.gmail.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Jason Andryuk writes ("Re: [OSSTEST PATCH 01/38] ts-logs-capture: Cope if xl shutdown leaves domain running for a bit"):
> My understanding is that shutdown happens first, when the guest stops, and
> destruction happens afterward, when all the resources are cleaned up.
> So your race is that the domain has shut down, but it still shows up in
> `xl list` since it hasn't been destroyed.
> 
> OpenXT has a hack to only wait for destruction
> https://github.com/OpenXT/xenclient-oe/blob/master/recipes-extended/xen/files/xl-shutdown-wait-for-domain-death.patch.
> I didn't write it, and the explanation isn't specific, but I think the
> purpose is to ensure all resources are released before the OpenXT
> toolstack proceeds with (blktap) cleanup.
> 
> Maybe a new `xl shutdown` flag to wait for domain destruction would be
> worthwhile?

Yes!  Repeating the -w flag maybe.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 28 15:41:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 15:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeKex-0004an-Jm; Thu, 28 May 2020 15:41:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WTQv=7K=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jeKew-0004ai-72
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 15:41:30 +0000
X-Inumbo-ID: b1f10e80-a0f9-11ea-a7f8-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1f10e80-a0f9-11ea-a7f8-12813bfff9fa;
 Thu, 28 May 2020 15:41:29 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: gaa51i9pdfbywEsCwMif99Nv/ljXkDveN50Kbw1wRdwu2docxTAk5zPX9cwm7G2xvHG+sqsyxH
 aO+HJlkQM4NkAdXNiFpxnI9QIzTISxVtx1id6kfOqN8rBllA1yWjolu5jVdPy2orIWrNF1oXNf
 s9XCK8SMVwuz/xnQ/oe9Vl46gGrEVoaeKVsKBfDsZrtY/JZQNQ5iOI3pPP/GvdIAOx/H6EjnkH
 UDRVqQ8l+VF9Kr5ss51BOfrLog0gTysMYVGDdVfUKLCkzBP37WSVOHZV78f9NqNyA9UVQ9EHvL
 Tmo=
X-SBRS: 2.7
X-MesageID: 18673932
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="18673932"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24271.56227.367257.277033@mariner.uk.xensource.com>
Date: Thu, 28 May 2020 16:41:23 +0100
To: Julien Grall <julien@xen.org>
Subject: Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
In-Reply-To: <d4fc0391-9f40-86ad-f304-70bb0cd73e9b@xen.org>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-23-ian.jackson@eu.citrix.com>
 <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
 <24261.17724.382954.918761@mariner.uk.xensource.com>
 <e4e7e515-587a-ad81-c9b7-b7cfa69108be@xen.org>
 <24271.53336.125796.634580@mariner.uk.xensource.com>
 <d4fc0391-9f40-86ad-f304-70bb0cd73e9b@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <Ian.Jackson@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
> On 28/05/2020 15:53, Ian Jackson wrote:
> > It's Complicated.  There are several options, but the usual ones are:
> > 
> > 1. pygrub: Install some version of grub, which generates
> >     /boot/grub.cfg.  It doesn't matter very much which version of grub
> >     because grub.cfg is read by pygrub in dom0 and that fishes out the
> >     kernel and initrd.  Many of osstest's tests do this.
> > 
> > 2. host kernel: Simply pass the dom0 kernel *and initramfs* as the
> >     kernel image to the guest.  This works if the kernel has the right
> >     modules for the guest storage, which it can easily do.  On x86 an
> >     amd64 kernel can run an i386 userland.
> > 
> > 3. pvgrub.
> 
> Thanks for the explanation. How do you select it in osstest today?

I think osstest does all three (not very sure about (2)).  Installs
made with the Debian xen-tools package tend to do (2) by default.
Installs made with d-i can do (2) or (3).
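The three options above map roughly onto xl guest-config fragments like
these (a sketch; the kernel/ramdisk paths and the pvgrub image name are
illustrative, not taken from the thread):

```
# 1. pygrub: dom0-side bootloader parses the guest's grub.cfg
bootloader = "pygrub"

# 2. host kernel: pass the dom0 kernel and initramfs to the guest
kernel  = "/boot/vmlinuz"
ramdisk = "/boot/initrd.img"

# 3. pvgrub: grub built as a PV kernel, runs inside the guest
kernel = "/usr/lib/xen/boot/pv-grub-x86_64.gz"
```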

> > Is this the same as "EADK" ?  I'm afraid I don't follow...
> 
> Sorry, I should have been more precise. I meant that we are able to boot 
> an Arm guest using UEFI as we added support in EDK2 (I think in Xen we 
> use the term ovmf).

Right.

> When using EFI, the guest can boot exactly the same way as it would on 
> baremetal. The toolstack is just loading the firmware in the guest memory.
> 
> IIRC we already have regular EFI testing on x86 in osstest. I am 
> thinking of extending it to Arm at some point.

Our arm64 boxes are all booting via UEFI right now.

We have to do a different bodge to load xen.efi rather than grub;
osstest makes a xen.cfg.  That bodge is extended to buster by

  Subject: [OSSTEST PATCH 34/38] buster: grub, arm64: extend
      chainloading workaround

> > Where should I do that ?  I guess I mean, in which bugtracker ?
> 
>  From the comment in the code, I would assume this is a bug/enhancement 
> against the Debian installer. But I may have misunderstood it.

Oh I see.  I think maybe the problem was the lack of grub support.  Is
that all sorted in current Debian unstable/testing ?  If so it may
well all come out in the wash.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 28 15:45:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 15:45:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeKiJ-0004lf-7I; Thu, 28 May 2020 15:44:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WTQv=7K=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jeKiH-0004l9-QE
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 15:44:57 +0000
X-Inumbo-ID: 2e2e98ba-a0fa-11ea-a7f8-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e2e98ba-a0fa-11ea-a7f8-12813bfff9fa;
 Thu, 28 May 2020 15:44:57 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ql4oFMY44/0+izGe6sI3DPS766VWivHK7e+UQht1ienJumBBxINCvVrusNF/csgnzzoAXeugyh
 RkwXKFIBD86Pkjuj2I1aG9zb1zEiLiKk/BMNTWG/85+cXJhRlZQpqWajrLRoeSeVu6r9Xef0FV
 B4ldKXFF8GnjPuSq9+xelkfZFV39bwLlVzWP4Gzqc6+wKArRHm0CE4IfmDAe4zFfciS9oABf8h
 qpZSY4udpoySoWKrMRFmh4z29UUZ1QoZQUEtSS3znqL7frOEwE6e5YdNRFxGVH1XZaNtZD5RXr
 dIk=
X-SBRS: 2.7
X-MesageID: 19028051
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,445,1583211600"; d="scan'208";a="19028051"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24271.56435.782626.994907@mariner.uk.xensource.com>
Date: Thu, 28 May 2020 16:44:51 +0100
To: Paul Durrant <paul@xen.org>
Subject: Re: [PATCH v2] libxl: stop libxl_domain_info() consuming massive
 amounts of stack
In-Reply-To: <20200528151330.20964-1-paul@xen.org>
References: <20200528151330.20964-1-paul@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Paul
 Durrant <pdurrant@amazon.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paul Durrant writes ("[PATCH v2] libxl: stop libxl_domain_info() consuming massive amounts of stack"):
> From: Paul Durrant <pdurrant@amazon.com>
> 
> Currently an array of 1024 xc_domaininfo_t is declared on the stack. That alone
> consumes ~112k. Since libxl_domain_info() creates a new gc this patch simply
> uses it to allocate the array instead.
> 
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>
> ---
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Anthony PERARD <anthony.perard@citrix.com>
> 
> This is small and IMO it would be nice to have this in 4.14 but I'd like an
> opinion from a maintainer too.

Reviewed-by: Ian Jackson <ian.jackson@eu.citrix.com>

I agree that this is 4.14 material.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 28 16:00:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 16:00:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeKxG-0006uu-IA; Thu, 28 May 2020 16:00:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eoT6=7K=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeKxF-0006up-2c
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 16:00:25 +0000
X-Inumbo-ID: 56b86458-a0fc-11ea-a7fe-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 56b86458-a0fc-11ea-a7fe-12813bfff9fa;
 Thu, 28 May 2020 16:00:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jeBFJd2UNVo6GKio2LfOs9T6os+j2HtTsJirjHMMsvw=; b=VM3EVXMJRkn+n0wyplatkbnr9d
 4FHQbkXCMmoz3X8A6niIWc0CSa1Ibmv1OTmTcWKXBFQF5VK+WD7dy8dStMkoSWRazsweertG2W820
 feSXerwfvEfBpjr7b6ZX57QvP8nWN+gb0JNpKvjSL71wEYMdi3as8C2nXle0ytDhjoxA=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeKxC-00080W-WD; Thu, 28 May 2020 16:00:23 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeKxC-0007ml-LX; Thu, 28 May 2020 16:00:22 +0000
Date: Thu, 28 May 2020 18:00:15 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86/hvm: Improve error information in handle_pio()
Message-ID: <20200528160003.GG1195@Air-de-Roger>
References: <20200528130738.12816-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20200528130738.12816-1-andrew.cooper3@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul.durrant@citrix.com>, Wei Liu <wl@xen.org>,
 Jan Beulich <JBeulich@suse.com>,
 Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 28, 2020 at 02:07:38PM +0100, Andrew Cooper wrote:
> domain_crash() should always have a message which is emitted even in release
> builds, so something more useful than this is presented.
> 
>   (XEN) domain_crash called from io.c:171
>   (XEN) domain_crash called from io.c:171
>   (XEN) domain_crash called from io.c:171
>   ...
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Paul Durrant <paul.durrant@citrix.com>
> CC: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> 
> Part of a bug reported by Marek.  Something else is wonky in the IO emulation
> state and is preventing us from yielding to the scheduler so the domain can
> progress with being shut down.
> ---
>  xen/arch/x86/hvm/io.c | 4 +++-
>  1 file changed, 3 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/x86/hvm/io.c b/xen/arch/x86/hvm/io.c
> index a5b0a23f06..4e468bfb6b 100644
> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -167,7 +167,9 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
>          break;
>  
>      default:
> -        gdprintk(XENLOG_ERR, "Weird HVM ioemulation status %d.\n", rc);
> +        gprintk(XENLOG_ERR, "Unexpected PIO status %d, port %#x %s 0x%0*lx\n",
> +                rc, port, dir == IOREQ_WRITE ? "write" : "read",
> +                size * 2, data & ((1ul << (size * 8)) - 1));

I wonder, should data be initialized to 0 in order to prevent printing
garbage here if the buffer is not filled in the read case?

Not sure whether it's better to print garbage or just 0 in that case:
either way it won't be possible to tell whether it's real data or the
emulation simply never got to fill it in (unless the specific error
path is checked).

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 28 16:15:56 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 16:15:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeLBy-0007t4-TG; Thu, 28 May 2020 16:15:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VkFg=7K=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeLBy-0007sz-Cf
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 16:15:38 +0000
X-Inumbo-ID: 76fe2372-a0fe-11ea-a806-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76fe2372-a0fe-11ea-a806-12813bfff9fa;
 Thu, 28 May 2020 16:15:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7A451B1B4;
 Thu, 28 May 2020 16:15:35 +0000 (UTC)
Subject: Re: [PATCH v2 10/14] x86/extable: Adjust extable handling to be
 shadow stack compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-11-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b36b5868-0b7c-2b45-a994-0ff5ea170433@suse.com>
Date: Thu, 28 May 2020 18:15:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-11-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> @@ -400,6 +400,18 @@ unsigned long get_stack_trace_bottom(unsigned long sp)
>      }
>  }
>  
> +static unsigned long get_shstk_bottom(unsigned long sp)
> +{
> +    switch ( get_stack_page(sp) )
> +    {
> +#ifdef CONFIG_XEN_SHSTK
> +    case 0:  return ROUNDUP(sp, IST_SHSTK_SIZE) - sizeof(unsigned long);
> +    case 5:  return ROUNDUP(sp, PAGE_SIZE)      - sizeof(unsigned long);

PRIMARY_SHSTK_SLOT please and, if introduced as suggested for the
earlier patch, IST_SHSTK_SLOT in the earlier line.

> @@ -763,6 +775,56 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>            trapnr, vec_name(trapnr), regs->error_code);
>  }
>  
> +static void extable_shstk_fixup(struct cpu_user_regs *regs, unsigned long fixup)
> +{
> +    unsigned long ssp, *ptr, *base;
> +
> +    asm ( "rdsspq %0" : "=r" (ssp) : "0" (1) );
> +    if ( ssp == 1 )
> +        return;
> +
> +    ptr = _p(ssp);
> +    base = _p(get_shstk_bottom(ssp));
> +
> +    for ( ; ptr < base; ++ptr )
> +    {
> +        /*
> +         * Search for %rip.  The shstk currently looks like this:
> +         *
> +         *   ...  [Likely pointed to by SSP]
> +         *   %cs  [== regs->cs]
> +         *   %rip [== regs->rip]
> +         *   SSP  [Likely points to 3 slots higher, above %cs]
> +         *   ...  [call tree to this function, likely 2/3 slots]
> +         *
> +         * and we want to overwrite %rip with fixup.  There are two
> +         * complications:
> +         *   1) We can't depend on SSP values, because they won't differ by 3
> +         *      slots if the exception is taken on an IST stack.
> +         *   2) There are synthetic (unrealistic but not impossible) scenarios
> +         *      where %rip can end up in the call tree to this function, so we
> +         *      can't check against regs->rip alone.
> +         *
> +         * Check for both reg->rip and regs->cs matching.

Nit: regs->rip

> +         */
> +
> +        if ( ptr[0] == regs->rip && ptr[1] == regs->cs )
> +        {
> +            asm ( "wrssq %[fix], %[stk]"
> +                  : [stk] "=m" (*ptr)

Could this be ptr[0], to match the if()?

Considering how important it is that we don't fix up the wrong stack
location here, I continue to wonder whether we shouldn't also include
the on-stack SSP value in the checking, at the very least by way of
an ASSERT() or BUG_ON(). Since, with how the code is currently
written, this would require a somewhat odd-looking ptr[-1], I also
wonder whether "while ( ++ptr < base )" wouldn't be the better loop
header. The first entry on the stack can't be the RIP we look for
anyway, can it?

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -708,7 +708,16 @@ exception_with_ints_disabled:
>          call  search_pre_exception_table
>          testq %rax,%rax                 # no fixup code for faulting EIP?
>          jz    1b
> -        movq  %rax,UREGS_rip(%rsp)
> +        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
> +
> +#ifdef CONFIG_XEN_SHSTK
> +        mov    $1, %edi
> +        rdsspq %rdi
> +        cmp    $1, %edi
> +        je     .L_exn_shstk_done
> +        wrssq  %rax, (%rdi)             # fixup shadow stack

According to the comment in extable_shstk_fixup(), isn't the value
to be replaced at 8(%rdi)?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 28 16:54:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 16:54:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeLmx-0002go-OG; Thu, 28 May 2020 16:53:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=eoT6=7K=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeLmw-0002gj-UC
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 16:53:51 +0000
X-Inumbo-ID: cd8945e6-a103-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd8945e6-a103-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 16:53:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=V7w820wa1+fEiEUXpJ2LH8gb1KqjGKuhDd50Xd/mjHU=; b=0TrSdihaJKFYQvsCtP/BitCr8o
 9bP145ncNzi1/fz+2KxzW7JDhd2lYc920mjqP8srn8vR/QQtISxnQvb2/XCr0MnKCVxTSJ4noaje9
 Rl158TF3y4+dBTF9L9w/9O2/yboPztuJvgjDHTtryItG8ari0az+IlwAfTLPHF5uXE5E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeLmu-0000ea-Id; Thu, 28 May 2020 16:53:48 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeLmt-0002EI-Ua; Thu, 28 May 2020 16:53:48 +0000
Date: Thu, 28 May 2020 18:53:41 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Bertrand Marquis <bertrand.marquis@arm.com>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Message-ID: <20200528165341.GH1195@Air-de-Roger>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 28, 2020 at 04:25:31PM +0100, Bertrand Marquis wrote:
> At the moment on Arm, a Linux guest running with KTPI enabled will
> cause the following error when a context switch happens in user mode:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> 
> This patch modifies runstate handling to map the area given by the
> guest inside Xen during the hypercall.
> This removes the guest virtual-to-physical conversion during context
> switches, which fixes the bug and improves performance by avoiding
> page-table walks on every context switch.
> 
> --
> As it stands, this patch only works on Arm and needs to be fixed on
> x86 (see the #error in domain.c for the missing get_page_from_gva).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>  xen/arch/arm/domain.c   | 32 +++++++++-------
>  xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
>  xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++-------
>  xen/include/xen/sched.h | 11 ++++--
>  4 files changed, 124 insertions(+), 54 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2..799b0e0103 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -278,33 +278,37 @@ static void ctxt_switch_to(struct vcpu *n)
>  /* Update per-VCPU guest runstate shared memory area (if registered). */
>  static void update_runstate_area(struct vcpu *v)
>  {
> -    void __user *guest_handle = NULL;
> -    struct vcpu_runstate_info runstate;
> +    struct vcpu_runstate_info *runstate;
>  
> -    if ( guest_handle_is_null(runstate_guest(v)) )
> +    /* XXX why do we accept not to block here */
> +    if ( !spin_trylock(&v->runstate_guest_lock) )

IMO the runstate is not a crucial piece of information, so it's better
to context switch fast.

>          return;
>  
> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
> +    runstate = runstate_guest(v);
> +
> +    if (runstate == NULL)

In general we don't explicitly check for NULL, and you could write
this as:

    if ( runstate ) ...

Note the added spaces between the parentheses and the condition. I
would also likely check runstate_guest(v) directly and assign it to
runstate afterwards if it's set.

> +    {
> +        spin_unlock(&v->runstate_guest_lock);
> +        return;
> +    }
>  
>      if ( VM_ASSIST(v->domain, runstate_update_flag) )
>      {
> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
> -        guest_handle--;
> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>          smp_wmb();
> +        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>      }
>  
> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> +    memcpy(runstate, &v->runstate, sizeof(v->runstate));
>  
> -    if ( guest_handle )
> +    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>      {
> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>          smp_wmb();

I think you need the barrier before clearing XEN_RUNSTATE_UPDATE in
the guest version of the runstate info, to make sure the writes are
not re-ordered and hence that the flag in the guest copy isn't
cleared before the full runstate write has been committed.

> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> +        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>      }
> +
> +    spin_unlock(&v->runstate_guest_lock);
>  }
>  
>  static void schedule_tail(struct vcpu *prev)
> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
> index 6327ba0790..10c6ceb561 100644
> --- a/xen/arch/x86/domain.c
> +++ b/xen/arch/x86/domain.c
> @@ -1642,57 +1642,62 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>  /* Update per-VCPU guest runstate shared memory area (if registered). */
>  bool update_runstate_area(struct vcpu *v)
>  {
> -    bool rc;
>      struct guest_memory_policy policy = { .nested_guest_mode = false };
> -    void __user *guest_handle = NULL;
> -    struct vcpu_runstate_info runstate;
> +    struct vcpu_runstate_info *runstate;
>  
> -    if ( guest_handle_is_null(runstate_guest(v)) )
> +    /* XXX: should we return false ? */
> +    if ( !spin_trylock(&v->runstate_guest_lock) )
>          return true;
>  
> -    update_guest_memory_policy(v, &policy);
> +    runstate = runstate_guest(v);
>  
> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
> +    if (runstate == NULL)
> +    {
> +        spin_unlock(&v->runstate_guest_lock);
> +        return true;
> +    }
> +
> +    update_guest_memory_policy(v, &policy);
>  
>      if ( VM_ASSIST(v->domain, runstate_update_flag) )
>      {
> -        guest_handle = has_32bit_shinfo(v->domain)
> -            ? &v->runstate_guest.compat.p->state_entry_time + 1
> -            : &v->runstate_guest.native.p->state_entry_time + 1;
> -        guest_handle--;
> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>          smp_wmb();
> +        if (has_32bit_shinfo(v->domain))
> +            v->runstate_guest.compat->state_entry_time |= XEN_RUNSTATE_UPDATE;
> +        else
> +            v->runstate_guest.native->state_entry_time |= XEN_RUNSTATE_UPDATE;

I'm confused here: isn't runstate == v->runstate_guest.native at this
point?

I think you want to update v->runstate.state_entry_time here instead?

>      }
>  
>      if ( has_32bit_shinfo(v->domain) )
>      {
>          struct compat_vcpu_runstate_info info;
> -
>          XLAT_vcpu_runstate_info(&info, &runstate);
> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
> -        rc = true;
> +        memcpy(v->runstate_guest.compat, &info, 1);
>      }
>      else
> -        rc = __copy_to_guest(runstate_guest(v), &runstate, 1) !=
> -             sizeof(runstate);
> +        memcpy(runstate, &v->runstate, sizeof(v->runstate));
>  
> -    if ( guest_handle )
> +    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>      {
> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>          smp_wmb();
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> +        if (has_32bit_shinfo(v->domain))
> +            v->runstate_guest.compat->state_entry_time |= XEN_RUNSTATE_UPDATE;
> +        else
> +            v->runstate_guest.native->state_entry_time |= XEN_RUNSTATE_UPDATE;

Same comment here related to the usage of runstate_guest instead of
runstate.

>      }
>  
> +    spin_unlock(&v->runstate_guest_lock);
> +
>      update_guest_memory_policy(v, &policy);
>  
> -    return rc;
> +    return true;
>  }
>  
>  static void _update_runstate_area(struct vcpu *v)
>  {
> +    /* XXX: this should be removed */
>      if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
>           !(v->arch.flags & TF_kernel_mode) )
>          v->arch.pv.need_update_runstate_area = 1;
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7cc9526139..acc6f90ba3 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -161,6 +161,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>      v->dirty_cpu = VCPU_CPU_CLEAN;
>  
>      spin_lock_init(&v->virq_lock);
> +    spin_lock_init(&v->runstate_guest_lock);
>  
>      tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
>  
> @@ -691,6 +692,66 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
>      return 0;
>  }
>  
> +static void  unmap_runstate_area(struct vcpu *v, unsigned int lock)

lock wants to be a bool here.

> +{
> +    mfn_t mfn;
> +
> +    if ( ! runstate_guest(v) )
> +        return;

I think you must check runstate_guest with the lock taken?

> +
> +    if (lock)
> +        spin_lock(&v->runstate_guest_lock);
> +
> +    mfn = domain_page_map_to_mfn(runstate_guest(v));
> +
> +    unmap_domain_page_global((void *)
> +                            ((unsigned long)v->runstate_guest &
> +                             PAGE_MASK));
> +
> +    put_page_and_type(mfn_to_page(mfn));
> +    runstate_guest(v) = NULL;
> +
> +    if (lock)
> +        spin_unlock(&v->runstate_guest_lock);
> +}
> +
> +static int map_runstate_area(struct vcpu *v,
> +                    struct vcpu_register_runstate_memory_area *area)
> +{
> +    unsigned long offset = area->addr.p & ~PAGE_MASK;
> +    void *mapping;
> +    struct page_info *page;
> +    size_t size = sizeof(struct vcpu_runstate_info);
> +
> +    ASSERT(runstate_guest(v) == NULL);
> +
> +    /* do not allow an area crossing 2 pages */
> +    if ( offset > (PAGE_SIZE - size) )
> +        return -EINVAL;

I'm afraid this is not suitable: Linux will BUG if
VCPUOP_register_runstate_memory_area returns an error, and current
Linux code doesn't check that the area avoids crossing a page
boundary. You will need to take a reference to both possible pages
in that case.

> +
> +#ifdef CONFIG_ARM
> +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
> +#else
> +    /* XXX how to solve this one ? */

We have hvm_translate_get_page, which seems similar; I will need to
look into this.

> +#error get_page_from_gva is not available on other archs
> +#endif
> +    if ( !page )
> +        return -EINVAL;
> +
> +    mapping = __map_domain_page_global(page);
> +
> +    if ( mapping == NULL )
> +    {
> +        put_page_and_type(page);
> +        return -ENOMEM;
> +    }
> +
> +    runstate_guest(v) = (struct vcpu_runstate_info *)
> +        ((unsigned long)mapping + offset);

There's no need to cast to unsigned long; you can do pointer
arithmetic on the void * directly. That should also get rid of the
cast to vcpu_runstate_info, I think.

> +
> +    return 0;
> +}
> +
>  int domain_kill(struct domain *d)
>  {
>      int rc = 0;
> @@ -727,7 +788,10 @@ int domain_kill(struct domain *d)
>          if ( cpupool_move_domain(d, cpupool0) )
>              return -ERESTART;
>          for_each_vcpu ( d, v )
> +        {
> +            unmap_runstate_area(v, 0);

Why is it not appropriate to hold the lock here? It might not be
technically needed, but it should still work?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 28 17:00:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 17:00:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeLtQ-0003Zr-Fu; Thu, 28 May 2020 17:00:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeLtP-0003Zm-JT
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 17:00:31 +0000
X-Inumbo-ID: b96d2c52-a104-11ea-a80f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b96d2c52-a104-11ea-a80f-12813bfff9fa;
 Thu, 28 May 2020 17:00:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kTQvzezCVY8uTBeOLZRQYbg+d9gR/bVWvjS6YAlAuHY=; b=AaXr5Z4CVTd/NB3jO9T9XqEXQ
 0YW0R7qF1obAB/SJNjUykqML/EtgzY02lqVZg5/DqByDsJChxewwnTcfmdywkN01tlL+HHC91vZUU
 +H1wzHdnZ7DbY5uFldQp2psDuNclyesjO3eG+kE/bojSrYrGsSOIPupEy7cD4LqQAesMQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeLtI-0000oM-Pe; Thu, 28 May 2020 17:00:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeLtI-0004Wy-GU; Thu, 28 May 2020 17:00:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeLtI-0006Tb-Fo; Thu, 28 May 2020 17:00:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150438-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150438: tolerable all pass - PUSHED
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
X-Osstest-Versions-That: xen=724913de8ac8426d313a4645741d86c1169ae406
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 17:00:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150438 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150438/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199
baseline version:
 xen                  724913de8ac8426d313a4645741d86c1169ae406

Last test of basis   150433  2020-05-28 11:01:11 Z    0 days
Testing same since   150438  2020-05-28 14:01:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   724913de8a..1497e78068  1497e78068421d83956f8e82fb6e1bf1fc3b1199 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 28 17:19:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 17:19:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeMBp-0004dp-7q; Thu, 28 May 2020 17:19:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bLP3=7K=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jeMBn-0004dk-SM
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 17:19:32 +0000
X-Inumbo-ID: 639bb070-a107-11ea-9947-bc764e2007e4
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe05::604])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 639bb070-a107-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 17:19:30 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CQPozITMBLjJhUz1YG14TFpGvIBnvAbTakJjTTPdOhg=;
 b=lZZKoW7tJr/1FWXk4BQRDzlay7zFOICgvKJJqSnFs7MzqVCnWDTD/P/Fh9m3DdZm950gztpDTRtk8KbG3s9Cph94aTVmpz7xcuicdw+zzUjjN5nBQjPGxxSHZt8auZRBqFW5uXSSYzAcMuHW3QJqJBpz+QR6/DyAtVWhm2YMuMM=
Received: from AM6P191CA0005.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::18)
 by AM0PR08MB3747.eurprd08.prod.outlook.com (2603:10a6:208:105::24) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.19; Thu, 28 May
 2020 17:19:27 +0000
Received: from VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::3) by AM6P191CA0005.outlook.office365.com
 (2603:10a6:209:8b::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.17 via Frontend
 Transport; Thu, 28 May 2020 17:19:27 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT033.mail.protection.outlook.com (10.152.18.147) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Thu, 28 May 2020 17:19:27 +0000
Received: ("Tessian outbound d078647f4174:v57");
 Thu, 28 May 2020 17:19:27 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 5fc4eb480e706d3e
X-CR-MTA-TID: 64aa7808
Received: from db595615124e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DC340EA7-1702-4259-983E-00489DD567AB.1; 
 Thu, 28 May 2020 17:19:21 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id db595615124e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 28 May 2020 17:19:21 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JzYg7UEdoFzw5A/rj7+pE8Xmzq4R87Pb+BYt0dOtRZKpUJjwY46Y5dGTmwMoCnxkjSrqygHlasTqy11oz6SQH6hsvoI3Eell4qM04Opnv5ifN0N0v9zGSI+GQHsnoq5Ok9cyw0XCudq1qwfZwoEeOr3pf86IJGRtH0N45oC6PWkROcjaol9Ojb2NlyvTnVim515TmpR5LKca3dc6mTRVDnUW9hUw37cGsTrFUTs0vUg0R37ZQ5AcDGT2Ppcb7IpoQFNCTSSt28muvSSomypY9Z64lH6Fw7uHHbYK+ICqxQI2tgGh+HHcOhgi9zgb3rLfPoe+yjvSF2pRBj9gq5EUAA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CQPozITMBLjJhUz1YG14TFpGvIBnvAbTakJjTTPdOhg=;
 b=Be3om56Cup5ohf0ahk+brnxXpo8rwD4xhdqEB3dJ6oLL32WaaN2aeOg8ML5KW0iNbrye7Jz1RDEWd5yZlhrmlVwtKlUIg/Ewhm+p1+kmYKagjgXrm6WbpxAaHgBmDNvO0f+0/61qzvUTomgjC1q45shXmCwIsJyI9UIZxSaHkxcqpWmQIDbBk9uVhBbYcGXfX8+a7qDOEKocwzkVHQ/R7Tzs+YyjvkmazeiLihi+qQq40GbKrnfA4FDi5+LBD7RElAWheVudBMJcoiLxXJgnkzX2OAtdWNRbNUqTVdjhDtpV5Iem/tBS2nHCVQR6yXjsEYOjjuRzBnri3k5KIrbyFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CQPozITMBLjJhUz1YG14TFpGvIBnvAbTakJjTTPdOhg=;
 b=lZZKoW7tJr/1FWXk4BQRDzlay7zFOICgvKJJqSnFs7MzqVCnWDTD/P/Fh9m3DdZm950gztpDTRtk8KbG3s9Cph94aTVmpz7xcuicdw+zzUjjN5nBQjPGxxSHZt8auZRBqFW5uXSSYzAcMuHW3QJqJBpz+QR6/DyAtVWhm2YMuMM=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3193.eurprd08.prod.outlook.com (2603:10a6:5:24::33) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.26; Thu, 28 May
 2020 17:19:18 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3021.030; Thu, 28 May 2020
 17:19:18 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Roger Pau Monné <roger@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai9txeAgAAHJ4A=
Date: Thu, 28 May 2020 17:19:18 +0000
Message-ID: <B0CBD25E-49D8-4AE5-B424-83E9A05FBF58@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <20200528165341.GH1195@Air-de-Roger>
In-Reply-To: <20200528165341.GH1195@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: 64faa0d3-cc11-4bfc-4fd8-08d8032b466b
x-ms-traffictypediagnostic: DB7PR08MB3193:|AM0PR08MB3747:
X-Microsoft-Antispam-PRVS: <AM0PR08MB374797BF80CE4B57ED2C5A929D8E0@AM0PR08MB3747.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 0417A3FFD2
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F393C782B3006A479B299F596109549D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3193
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs: f623cd26-a8cd-4870-db27-08d8032b414e
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 May 2020 17:19:27.1380 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 64faa0d3-cc11-4bfc-4fd8-08d8032b466b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3747
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 28 May 2020, at 17:53, Roger Pau Monné <roger@xen.org> wrote:
> 
> On Thu, May 28, 2020 at 04:25:31PM +0100, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KPTI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>> 
>> This patch is modifying runstate handling to map the area given by the
>> guest inside Xen during the hypercall.
>> This is removing the guest virtual to physical conversion during context
>> switches, which removes the bug and improves performance by avoiding
>> page-table walks during context switches.
>> 
>> --
>> In the current status, this patch is only working on Arm and needs to
>> be fixed on X86 (see #error on domain.c for missing get_page_from_gva).
>> 
>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>> ---
>> xen/arch/arm/domain.c   | 32 +++++++++-------
>> xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
>> xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++++++++++++++++++-------
>> xen/include/xen/sched.h | 11 ++++--
>> 4 files changed, 124 insertions(+), 54 deletions(-)
>> 
>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>> index 31169326b2..799b0e0103 100644
>> --- a/xen/arch/arm/domain.c
>> +++ b/xen/arch/arm/domain.c
>> @@ -278,33 +278,37 @@ static void ctxt_switch_to(struct vcpu *n)
>> /* Update per-VCPU guest runstate shared memory area (if registered). */
>> static void update_runstate_area(struct vcpu *v)
>> {
>> -    void __user *guest_handle = NULL;
>> -    struct vcpu_runstate_info runstate;
>> +    struct vcpu_runstate_info *runstate;
>> 
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> +    /* XXX why do we accept not to block here */
>> +    if ( !spin_trylock(&v->runstate_guest_lock) )
> 
> IMO the runstate is not a crucial piece of information, so it's better
> to context switch fast.

Ok, I will add a comment there to explain that; otherwise it is not obvious why we simply ignore and continue.

> 
>>         return;
>> 
>> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>> +    runstate = runstate_guest(v);
>> +
>> +    if (runstate == NULL)
> 
> In general we don't explicitly check for NULL, and you could write
> this as:
> 
>    if ( runstate ) ...
> 
> Note the added spaces between the parentheses and the condition. I would
> also likely check runstate_guest(v) directly and assign to runstate
> afterwards if it's set.

Ok

> 
>> +    {
>> +        spin_unlock(&v->runstate_guest_lock);
>> +        return;
>> +    }
>> 
>>     if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>> -        guest_handle--;
>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
>> +        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>     }
>> 
>> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
>> +    memcpy(runstate, &v->runstate, sizeof(v->runstate));
>> 
>> -    if ( guest_handle )
>> +    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
> 
> I think you need the barrier before clearing XEN_RUNSTATE_UPDATE from
> the guest version of the runstate info, to make sure writes are not
> re-ordered and hence that the XEN_RUNSTATE_UPDATE flag in the guest
> version is not cleared before the full write has been committed?

Very true. I will fix that.

> 
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        v->runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>>     }
>> +
>> +    spin_unlock(&v->runstate_guest_lock);
>> }
>> 
>> static void schedule_tail(struct vcpu *prev)
>> diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
>> index 6327ba0790..10c6ceb561 100644
>> --- a/xen/arch/x86/domain.c
>> +++ b/xen/arch/x86/domain.c
>> @@ -1642,57 +1642,62 @@ void paravirt_ctxt_switch_to(struct vcpu *v)
>> /* Update per-VCPU guest runstate shared memory area (if registered). */
>> bool update_runstate_area(struct vcpu *v)
>> {
>> -    bool rc;
>>     struct guest_memory_policy policy = { .nested_guest_mode = false };
>> -    void __user *guest_handle = NULL;
>> -    struct vcpu_runstate_info runstate;
>> +    struct vcpu_runstate_info *runstate;
>> 
>> -    if ( guest_handle_is_null(runstate_guest(v)) )
>> +    /* XXX: should we return false ? */
>> +    if ( !spin_trylock(&v->runstate_guest_lock) )
>>         return true;
>> 
>> -    update_guest_memory_policy(v, &policy);
>> +    runstate = runstate_guest(v);
>> 
>> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>> +    if (runstate == NULL)
>> +    {
>> +        spin_unlock(&v->runstate_guest_lock);
>> +        return true;
>> +    }
>> +
>> +    update_guest_memory_policy(v, &policy);
>> 
>>     if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        guest_handle = has_32bit_shinfo(v->domain)
>> -            ? &v->runstate_guest.compat.p->state_entry_time + 1
>> -            : &v->runstate_guest.native.p->state_entry_time + 1;
>> -        guest_handle--;
>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
>> +        if (has_32bit_shinfo(v->domain))
>> +            v->runstate_guest.compat->state_entry_time |= XEN_RUNSTATE_UPDATE;
>> +        else
>> +            v->runstate_guest.native->state_entry_time |= XEN_RUNSTATE_UPDATE;
> 
> I'm confused here, isn't runstate == v->runstate_guest.native at this
> point?
> 
> I think you want to update v->runstate.state_entry_time here?

I will have to dig deeper into the x86 implementation on that part, because the compatibility handling is not straightforward.
Currently, if compatibility mode is required, both our internal and external copies of the runstate are in compatibility mode.

It might be simpler to only handle the compatibility conversion during update_runstate_area instead of doing it everywhere?
But maybe this should be a change for another patch (if any).

> 
>>     }
>> 
>>     if ( has_32bit_shinfo(v->domain) )
>>     {
>>         struct compat_vcpu_runstate_info info;
>> -
>>         XLAT_vcpu_runstate_info(&info, &runstate);
>> -        __copy_to_guest(v->runstate_guest.compat, &info, 1);
>> -        rc = true;
>> +        memcpy(v->runstate_guest.compat, &info, 1);
>>     }
>>     else
>> -        rc = __copy_to_guest(runstate_guest(v), &runstate, 1) !=
>> -             sizeof(runstate);
>> +        memcpy(runstate, &v->runstate, sizeof(v->runstate));
>> 
>> -    if ( guest_handle )
>> +    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>     {
>> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>         smp_wmb();
>> -        __raw_copy_to_guest(guest_handle,
>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>> +        if (has_32bit_shinfo(v->domain))
>> +            v->runstate_guest.compat->state_entry_time |= XEN_RUNSTATE_UPDATE;
>> +        else
>> +            v->runstate_guest.native->state_entry_time |= XEN_RUNSTATE_UPDATE;
> 
> Same comment here related to the usage of runstate_guest instead of
> runstate.

Agree

> 
>>     }
>> 
>> +    spin_unlock(&v->runstate_guest_lock);
>> +
>>     update_guest_memory_policy(v, &policy);
>> 
>> -    return rc;
>> +    return true;
>> }
>> 
>> static void _update_runstate_area(struct vcpu *v)
>> {
>> +    /* XXX: this should be removed */
>>     if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
>>          !(v->arch.flags & TF_kernel_mode) )
>>         v->arch.pv.need_update_runstate_area = 1;
>> diff --git a/xen/common/domain.c b/xen/common/domain.c
>> index 7cc9526139..acc6f90ba3 100644
>> --- a/xen/common/domain.c
>> +++ b/xen/common/domain.c
>> @@ -161,6 +161,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>>     v->dirty_cpu = VCPU_CPU_CLEAN;
>> 
>>     spin_lock_init(&v->virq_lock);
>> +    spin_lock_init(&v->runstate_guest_lock);
>> 
>>     tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
>> 
>> @@ -691,6 +692,66 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
>>     return 0;
>> }
>> 
>> +static void  unmap_runstate_area(struct vcpu *v, unsigned int lock)
> 
> lock wants to be a bool here.

Ok, I will fix that.

> 
>> +{
>> +    mfn_t mfn;
>> +
>> +    if ( ! runstate_guest(v) )
>> +        return;
> 
> I think you must check runstate_guest with the lock taken?

Right, I will fix that.

> 
>> +
>> +    if (lock)
>> +        spin_lock(&v->runstate_guest_lock);
>> +
>> +    mfn = domain_page_map_to_mfn(runstate_guest(v));
>> +
>> +    unmap_domain_page_global((void *)
>> +                            ((unsigned long)v->runstate_guest &
>> +                            PAGE_MASK));
>> +
>> +    put_page_and_type(mfn_to_page(mfn));
>> +    runstate_guest(v) = NULL;
>> +
>> +    if (lock)
>> +        spin_unlock(&v->runstate_guest_lock);
>> +}
>> +
>> +static int map_runstate_area(struct vcpu *v,
>> +                    struct vcpu_register_runstate_memory_area *area)
>> +{
>> +    unsigned long offset = area->addr.p & ~PAGE_MASK;
>> +    void *mapping;
>> +    struct page_info *page;
>> +    size_t size = sizeof(struct vcpu_runstate_info);
>> +
>> +    ASSERT(runstate_guest(v) == NULL);
>> +
>> +    /* do not allow an area crossing 2 pages */
>> +    if ( offset > (PAGE_SIZE - size) )
>> +        return -EINVAL;
> 
> I'm afraid this is not suitable, Linux will BUG if
> VCPUOP_register_runstate_memory_area returns an error, and current
> Linux code doesn't check that the area doesn't cross a page
> boundary. You will need to take a reference to the possible two pages
> in that case.

Ok, I will fix that.

> 
>> +
>> +#ifdef CONFIG_ARM
>> +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
>> +#else
>> +    /* XXX how to solve this one ? */
> 
> We have hvm_translate_get_page which seems similar, will need to look
> into this.

Ok, I will wait for more information from you on that one.

> 
>> +#error get_page_from_gva is not available on other archs
>> +#endif
>> +    if ( !page )
>> +        return -EINVAL;
>> +
>> +    mapping = __map_domain_page_global(page);
>> +
>> +    if ( mapping == NULL )
>> +    {
>> +        put_page_and_type(page);
>> +        return -ENOMEM;
>> +    }
>> +
>> +    runstate_guest(v) = (struct vcpu_runstate_info *)
>> +        ((unsigned long)mapping + offset);
> 
> There's no need to cast to unsigned long, you can just do pointer
> arithmetic on the void * directly. That should also get rid of the
> cast to vcpu_runstate_info I think.

Some compilers do not allow arithmetic on void* and gcc will forbid it with -pedantic-errors; that's why I was used to writing code like that.
I will fix that.

> 
>> +
>> +    return 0;
>> +}
>> +
>> int domain_kill(struct domain *d)
>> {
>>     int rc = 0;
>> @@ -727,7 +788,10 @@ int domain_kill(struct domain *d)
>>         if ( cpupool_move_domain(d, cpupool0) )
>>             return -ERESTART;
>>         for_each_vcpu ( d, v )
>> +        {
>> +            unmap_runstate_area(v, 0);
> 
> Why is it not appropriate here to hold the lock? It might not be
> technically needed, but still should work?

In a killing scenario you might stop a core while it was rescheduling.
Couldn't a core be killed using a cross-core interrupt?
If this is the case then I would need to do masked-interrupt locking sections to prevent that case?

Thanks for the feedback.
Bertrand


From xen-devel-bounces@lists.xenproject.org Thu May 28 17:26:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 17:26:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeMIZ-0005TN-0S; Thu, 28 May 2020 17:26:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gT1p=7K=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jeMIX-0005TI-Rb
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 17:26:29 +0000
X-Inumbo-ID: 5d526a28-a108-11ea-a814-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5d526a28-a108-11ea-a814-12813bfff9fa;
 Thu, 28 May 2020 17:26:29 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:54226
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jeMIT-000rLv-MP (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 28 May 2020 18:26:26 +0100
Subject: Re: [PATCH v2 02/14] x86/traps: Factor out extable_fixup() and make
 printing consistent
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-3-andrew.cooper3@citrix.com>
 <9cb12ae3-ef09-3b81-caef-0b1d61426a42@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2d550f5d-6330-6adc-2865-f8421845957f@citrix.com>
Date: Thu, 28 May 2020 18:26:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <9cb12ae3-ef09-3b81-caef-0b1d61426a42@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/05/2020 10:50, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> UD faults never had any diagnostics printed, and the others were inconsistent.
>>
>> Don't use dprintk() because identifying traps.c is actively unhelpful in the
>> message, as it is the location of the fixup, not the fault.  Use the new
>> vec_name() infrastructure, rather than leaving raw numbers for the log.
>>
>>   (XEN) Running stub recovery selftests...
>>   (XEN) Fixup #UD[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
>>   (XEN) Fixup #GP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6
>>   (XEN) Fixup #SS[0000]: ffff82d07fffd040 [ffff82d07fffd040] -> ffff82d0403ac9d6
>>   (XEN) Fixup #BP[0000]: ffff82d07fffd041 [ffff82d07fffd041] -> ffff82d0403ac9d6
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> As before
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

> albeit I realize I have one more suggestion:
>
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -772,10 +772,31 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>>            trapnr, vec_name(trapnr), regs->error_code);
>>  }
>>  
>> +static bool extable_fixup(struct cpu_user_regs *regs, bool print)
>> +{
>> +    unsigned long fixup = search_exception_table(regs);
>> +
>> +    if ( unlikely(fixup == 0) )
>> +        return false;
>> +
>> +    /*
>> +     * Don't use dprintk() because the __FILE__ reference is unhelpful.
>> +     * Can currently be triggered by guests.  Make sure we ratelimit.
>> +     */
>> +    if ( IS_ENABLED(CONFIG_DEBUG) && print )
> How about pulling the IS_ENABLED(CONFIG_DEBUG) into the call sites
> currently passing "true"?

That is an obfuscation, not an improvement, in code legibility.

It is, however, a transformation that the compiler does (as part of
dropping the print parameter entirely).

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 28 18:00:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 18:00:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeMp5-0000ML-Vb; Thu, 28 May 2020 18:00:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=19By=7K=amazon.com=prvs=410ac67ab=anchalag@srs-us1.protection.inumbo.net>)
 id 1jeMp5-0000LG-1B
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 18:00:07 +0000
X-Inumbo-ID: 0f83b1c6-a10d-11ea-81bc-bc764e2007e4
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f83b1c6-a10d-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 18:00:06 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1590688807; x=1622224807;
 h=from:to:subject:date:message-id:references:in-reply-to:
 content-id:content-transfer-encoding:mime-version;
 bh=oNLZyUl66SSao9Xw7N5boBOPLPGR9xPVVpFRzGyK3NE=;
 b=HCu4nuKQHOsy4bbZOw2OUbdlIjpKkGJBhKmVcZ6TRWVyaR/o4R0KGEqb
 sixFu5YRC4jV3RHUSdmaFd38BI9bJc9qQVdGtOzSu7cTy+BD8wt2aeBto
 POwbwxfSWgyHn7zGb1TYv0n4sMXog4pNHZ8CfB9VxaPlpG288c5vtQ3Wg w=;
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9102.sea19.amazon.com with ESMTP;
 28 May 2020 18:00:01 +0000
Received: from EX13MTAUWB001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1d-37fd6b3d.us-east-1.amazon.com (Postfix) with ESMTPS
 id 772142849E5; Thu, 28 May 2020 17:59:53 +0000 (UTC)
Received: from EX13D10UWB002.ant.amazon.com (10.43.161.130) by
 EX13MTAUWB001.ant.amazon.com (10.43.161.207) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 28 May 2020 17:59:52 +0000
Received: from EX13D07UWB002.ant.amazon.com (10.43.161.131) by
 EX13D10UWB002.ant.amazon.com (10.43.161.130) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Thu, 28 May 2020 17:59:52 +0000
Received: from EX13D07UWB002.ant.amazon.com ([10.43.161.131]) by
 EX13D07UWB002.ant.amazon.com ([10.43.161.131]) with mapi id 15.00.1497.006;
 Thu, 28 May 2020 17:59:52 +0000
From: "Agarwal, Anchal" <anchalag@amazon.com>
To: "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com"
 <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "hpa@zytor.com"
 <hpa@zytor.com>, "x86@kernel.org" <x86@kernel.org>,
 "boris.ostrovsky@oracle.com" <boris.ostrovsky@oracle.com>, "jgross@suse.com"
 <jgross@suse.com>, "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
 "linux-mm@kvack.org" <linux-mm@kvack.org>, "Kamata, Munehisa"
 <kamatam@amazon.com>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "roger.pau@citrix.com"
 <roger.pau@citrix.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
 "davem@davemloft.net" <davem@davemloft.net>, "rjw@rjwysocki.net"
 <rjw@rjwysocki.net>, "len.brown@intel.com" <len.brown@intel.com>,
 "pavel@ucw.cz" <pavel@ucw.cz>, "peterz@infradead.org" <peterz@infradead.org>, 
 "Valentin, Eduardo" <eduval@amazon.com>, "Singh, Balbir" <sblbir@amazon.com>, 
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "vkuznets@redhat.com" <vkuznets@redhat.com>, "netdev@vger.kernel.org"
 <netdev@vger.kernel.org>, "linux-kernel@vger.kernel.org"
 <linux-kernel@vger.kernel.org>, "Woodhouse, David" <dwmw@amazon.co.uk>,
 "benh@kernel.crashing.org" <benh@kernel.crashing.org>
Subject: Re: [PATCH 00/12] Fix PM hibernation in Xen guests
Thread-Topic: [PATCH 00/12] Fix PM hibernation in Xen guests
Thread-Index: AQHWLiw8BReG6kpgjke8dS/vAeapjqi9Yd+A
Date: Thu, 28 May 2020 17:59:52 +0000
Message-ID: <0C3CEAD6-E79C-490E-8FEA-2276E87BD7B4@amazon.com>
References: <cover.1589926004.git.anchalag@amazon.com>
In-Reply-To: <cover.1589926004.git.anchalag@amazon.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.162.50]
Content-Type: text/plain; charset="utf-8"
Content-ID: <20D843639614D14FA2EFCF8BE29971CC@amazon.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

A gentle ping on this whole patch series.

Thanks,
Anchal

    Hello,
    This series fixes PM hibernation for HVM guests running on the Xen hypervisor.
    The running guest can now be hibernated and resumed successfully at a
    later time. The fixes for PM hibernation are added to the block and
    network device drivers, i.e. xen-blkfront and xen-netfront. Any other driver
    that needs to add S4 support, if not already present, can follow the same
    method of introducing freeze/thaw/restore callbacks.
    The patches have been tested against the upstream kernel and xen-4.11. Large
    scale testing has also been done on Xen based Amazon EC2 instances. All this
    testing involved running a memory exhausting workload in the background.

    Guest hibernation does not involve any support from the hypervisor, and
    this way the guest has complete control over its state. Infrastructure
    restrictions on saving guest state can be overcome by guest initiated
    hibernation.

    These patches were sent out as an RFC before and all the feedback has been
    incorporated in the patches. The last RFC V3 can be found here:
    https://lkml.org/lkml/2020/2/14/2789

    Known issues:
    1. KASLR causes intermittent hibernation failures. The VM fails to resume and
    has to be restarted. I will investigate this issue separately; it shouldn't
    be a blocker for this patch series.
    2. During hibernation, I observed that freezing of tasks sometimes fails due
    to busy XFS workqueues [xfs-cil/xfs-sync]. This is also intermittent, maybe 1
    out of 200 runs, and hibernation is aborted in this case. Re-trying hibernation
    may work. Also, this is a known issue with hibernation and some
    filesystems like XFS that has been discussed by the community for years without
    an effective resolution at this point.

    Testing How-to:
    ---------------
    1. Set up the Xen hypervisor on a physical machine [I used Ubuntu 16.04 +
    upstream xen-4.11].
    2. Bring up an HVM guest with a kernel compiled with the hibernation patches
    [I used ubuntu18.04 netboot bionic images and also Amazon Linux on-prem images].
    3. Create a swap file, size = RAM size.
    4. Update grub parameters and reboot.
    5. Trigger pm-hibernation from within the VM.

    Example:
    Set up a file-backed swap space. Swap file size >= total memory on the system.
    sudo dd if=/dev/zero of=/swap bs=$(( 1024 * 1024 )) count=4096 # 4096MiB
    sudo chmod 600 /swap
    sudo mkswap /swap
    sudo swapon /swap

    Update resume device/resume offset in grub if using a swap file:
    resume=/dev/xvda1 resume_offset=200704 no_console_suspend=1

    Execute:
    --------
    sudo pm-hibernate
    OR
    echo disk > /sys/power/state && echo reboot > /sys/power/disk

    Compute resume offset code:
    "
    #!/usr/bin/env python
    import sys
    import array
    import fcntl

    # swap file
    f = open(sys.argv[1], 'r')
    buf = array.array('L', [0])

    # FIBMAP
    ret = fcntl.ioctl(f.fileno(), 0x01, buf)
    print(buf[0])
    "


    Anchal Agarwal (5):
      x86/xen: Introduce new function to map HYPERVISOR_shared_info on
        Resume
      genirq: Shutdown irq chips in suspend/resume during hibernation
      xen: Introduce wrapper for save/restore sched clock offset
      xen: Update sched clock offset to avoid system instability in
        hibernation
      PM / hibernate: update the resume offset on SNAPSHOT_SET_SWAP_AREA

    Munehisa Kamata (7):
      xen/manage: keep track of the on-going suspend mode
      xenbus: add freeze/thaw/restore callbacks support
      x86/xen: add system core suspend and resume callbacks
      xen-blkfront: add callbacks for PM suspend and hibernation
      xen-netfront: add callbacks for PM suspend and hibernation
      xen/time: introduce xen_{save,restore}_steal_clock
      x86/xen: save and restore steal clock

     arch/x86/xen/enlighten_hvm.c      |   8 ++
     arch/x86/xen/suspend.c            |  72 ++++++++++++++++++
     arch/x86/xen/time.c               |  18 ++++-
     arch/x86/xen/xen-ops.h            |   3 +
     drivers/block/xen-blkfront.c      | 122 ++++++++++++++++++++++++++++++++++--
     drivers/net/xen-netfront.c        |  98 +++++++++++++++++++++++++-
     drivers/xen/events/events_base.c  |   1 +
     drivers/xen/manage.c              |  73 ++++++++++++++++++
     drivers/xen/time.c                |  29 ++++++-
     drivers/xen/xenbus/xenbus_probe.c |  99 +++++++++++++++++++++-----
     include/linux/irq.h               |   2 +
     include/xen/xen-ops.h             |   8 ++
     include/xen/xenbus.h              |   3 +
     kernel/irq/chip.c                 |   2 +-
     kernel/irq/internals.h            |   1 +
     kernel/irq/pm.c                   |  31 +++++---
     kernel/power/user.c               |   6 +-
     17 files changed, 536 insertions(+), 40 deletions(-)

    -- 
    2.24.1.AMZN
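[Editorial aside: the cover letter above includes a small FIBMAP-based script for computing the resume offset. A Python 3 sketch of the same idea follows; the helper names, the guarded entry point, and the block-to-page conversion helper are illustrative additions, not part of the series.]

```python
#!/usr/bin/env python3
# Python 3 sketch of the resume-offset helper from the cover letter.
# Illustrative only; FIBMAP typically needs CAP_SYS_RAWIO on Linux.
import array
import fcntl
import sys

FIBMAP = 0x01  # ioctl from <linux/fs.h>

def first_block(path):
    """Return the physical block number of logical block 0 of the file."""
    with open(path, 'rb') as f:
        buf = array.array('i', [0])  # in: logical block 0; out: physical block
        fcntl.ioctl(f.fileno(), FIBMAP, buf)
        return buf[0]

def resume_offset(block, fs_block_size, page_size=4096):
    """Convert a filesystem block number into PAGE_SIZE units for resume_offset=.

    For a 4K-block filesystem this is the FIBMAP value unchanged; the
    conversion is an assumption added here, verify against your setup.
    """
    return block * fs_block_size // page_size

if __name__ == '__main__' and len(sys.argv) > 1:
    print(first_block(sys.argv[1]))
```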


From xen-devel-bounces@lists.xenproject.org Thu May 28 18:10:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 18:10:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeMyn-0001FX-Uj; Thu, 28 May 2020 18:10:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gT1p=7K=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jeMym-0001FS-PN
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 18:10:08 +0000
X-Inumbo-ID: 7615f470-a10e-11ea-a820-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7615f470-a10e-11ea-a820-12813bfff9fa;
 Thu, 28 May 2020 18:10:07 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:55224
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jeMyi-000DnK-KH (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Thu, 28 May 2020 19:10:04 +0100
Subject: Re: [PATCH v2 03/14] x86/shstk: Introduce Supervisor Shadow Stack
 support
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-4-andrew.cooper3@citrix.com>
 <4f535d4c-b3b3-fe5b-8b57-af736dc0a360@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <ad95944a-bd21-2ba8-6214-49d86050e816@citrix.com>
Date: Thu, 28 May 2020 19:10:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <4f535d4c-b3b3-fe5b-8b57-af736dc0a360@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/05/2020 11:25, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> Introduce CONFIG_HAS_AS_CET to determine whether CET instructions are
>> supported in the assembler, and CONFIG_XEN_SHSTK as the main build option.
>>
>> Introduce cet={no-,}shstk to for a user to select whether or not to use shadow
>> stacks at runtime, and X86_FEATURE_XEN_SHSTK to determine Xen's overall
>> enablement of shadow stacks.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>
>> LLVM 6 supports CET-SS instructions while only LLVM 7 supports CET-IBT
>> instructions.  We'd need to split HAS_AS_CET into two if we want to support
>> supervisor shadow stacks with LLVM 6.  (This demonstrates exactly why picking
>> a handful of instructions to test is the right approach.)
> In this case I agree with splitting; I wasn't aware clang implemented
> the respective insns piecemeal.

I only became aware when trying Clang for v2.  I'll turn it into
HAS_AS_CET_SS, because the more I think about IBT, the more I think that
will need to be tied to the compiler, rather than the assembler.

>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -270,6 +270,23 @@ and not running softirqs. Reduce this if softirqs are not being run frequently
>>  enough. Setting this to a high value may cause boot failure, particularly if
>>  the NMI watchdog is also enabled.
>>  
>> +### cet
>> +    = List of [ shstk=<bool> ]
>> +
>> +    Applicability: x86
>> +
>> +Controls for the use of Control-flow Enforcement Technology.  CET is group of
> Nit: "... is a group of ..."

Oops.

>
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -34,6 +34,10 @@ config ARCH_DEFCONFIG
>>  config INDIRECT_THUNK
>>  	def_bool $(cc-option,-mindirect-branch-register)
>>  
>> +config HAS_AS_CET
>> +	# binutils >= 2.29 and LLVM >= 7
>> +	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)
> So you put me in a really awkward position: I'd really like to see
> this series go in for 4.14, yet I've previously indicated I want the
> underlying concept to first be agreed upon, before any uses get
> introduced.

There are already users.  One of them is even in context.

I don't see that there is anything open for dispute in the first place.
Being able to do exactly this was one key driving factor for moving to a
newer Kconfig, because it is a superior mechanism to the ad-hoc mess we had
previously (which was also a vast detriment to build time).

> A fundamental problem with this, at least as long as (a) more of
> Anthony's series hasn't been committed and (b) we re-build Xen upon
> installing (as root), even if it was fully built (as non-root) and
> is ready without any re-building.

How is any of this relevant to Anthony's recent changes?

You've always been asking for trouble if your `make` before `make
install` uses a different toolchain, even in regular autotools/userspace
software.  The newer Kconfig logic might make this trouble far more
obvious, but doesn't introduce the problem.

> Each of these aspects means that
> what you've configured and built may not be what gets installed, by
> virtue of the tool chains differing. (a) in addition may lead to
> the install-time rebuild to actually go wrong, due to there being
> dependency tracking issues on at least {xen,efi}.lds (which I've
> noticed in a different context yesterday).

Again - how is any of this related to the recent changes?

Fundamentally, it is either no regression from 4.13, or already needs
bugfixing due to the existing users in 4.14.

Neither of these options affects this patch.

>> --- a/xen/scripts/Kconfig.include
>> +++ b/xen/scripts/Kconfig.include
>> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>>  # Return y if the linker supports <flag>, n otherwise
>>  ld-option = $(success,$(LD) -v $(1))
>>  
>> +# $(as-instr,<instr>)
>> +# Return y if the assembler supports <instr>, n otherwise
>> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
> Is this actually checking the right thing in the clang case?

It appears to work correctly for me.  (Again, it's either fine, or needs
bugfixing anyway for 4.14, and raising with Linux, as this is unmodified
upstream code like the rest of Kconfig.include.)
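[Editorial aside: for readers unfamiliar with the `$(as-instr,...)` probe quoted above, here is a small Python model of what it does — pipe the candidate instructions into the C compiler driving the assembler and report y/n from the exit status. The helper name and the `cc` default are illustrative assumptions, not Xen code.]

```python
# Model of Kconfig's $(as-instr,<instr>) probe: assemble the candidate
# instructions with the C compiler and report 'y' on success, 'n' otherwise.
# Illustrative only; `cc` stands in for the configured $(CC).
import shutil
import subprocess

def as_instr(instr, cc="cc"):
    """Return 'y' if the assembler behind `cc` accepts `instr`, else 'n'."""
    if shutil.which(cc) is None:
        return "n"  # no toolchain available at all
    res = subprocess.run(
        [cc, "-c", "-x", "assembler", "-o", "/dev/null", "-"],
        input=(instr + "\n").encode(),
        stderr=subprocess.DEVNULL,
    )
    return "y" if res.returncode == 0 else "n"

# Probing CET-SS and CET-IBT separately, per the HAS_AS_CET_SS discussion:
#   as_instr("wrssq %rax, 0; setssbsy")  -> shadow-stack support
#   as_instr("endbr64")                  -> indirect-branch-tracking support
```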

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 28 18:55:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 18:55:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeNfy-0004bJ-GQ; Thu, 28 May 2020 18:54:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lRPh=7K=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeNfx-0004bE-FO
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 18:54:45 +0000
X-Inumbo-ID: b13060ee-a114-11ea-a827-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b13060ee-a114-11ea-a827-12813bfff9fa;
 Thu, 28 May 2020 18:54:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jbYElpDc+CL4tGj/CNC+Ad3XGfI0bHxW5iLGicuCWRk=; b=w+LcBQgbe5nc4WTjlUE0jEoR1m
 AA2dNJGR5f6nEhBLsT0+cr8AOOzNg7xTZmjz6GkdndVswFECe6Q1LTXk79dpaI0swWrsvFAYT7X4q
 gYFM9hnQ3B+PqN6/86oPLJLD7frO1IcChIge2iuYntaomaatPYNEbsNDMVaA7BLfTpkg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeNfr-0003CD-4q; Thu, 28 May 2020 18:54:39 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeNfq-0001ET-Og; Thu, 28 May 2020 18:54:39 +0000
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <bertrand.marquis@arm.com>, xen-devel@lists.xenproject.org
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
Date: Thu, 28 May 2020 19:54:35 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Bertrand,

Thank you for the patch.

On 28/05/2020 16:25, Bertrand Marquis wrote:
> At the moment on Arm, a Linux guest running with KTPI enabled will
> cause the following error when a context switch happens in user mode:
> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> 
> This patch is modifying runstate handling to map the area given by the
> guest inside Xen during the hypercall.
> This is removing the guest virtual to physical conversion during context
> switches which removes the bug

It would be good to spell out that a guest virtual address is not stable,
so relying on it is wrong.

> and improve performance by preventing to
> walk page tables during context switches.

With the secret-free hypervisor work in mind, I would like to suggest
mapping/unmapping the runstate during the context switch.

The cost should be minimal when there is a direct map (i.e. on Arm64 and
x86) and this would still provide better performance on Arm32.

The change should be minimal compared to the current approach, but this
could be taken care of separately if you don't have time.

> 
> --
> In the current status, this patch is only working on Arm and needs to
> be fixed on X86 (see #error on domain.c for missing get_page_from_gva).
> 
> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
> ---
>   xen/arch/arm/domain.c   | 32 +++++++++-------
>   xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
>   xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++-------
>   xen/include/xen/sched.h | 11 ++++--
>   4 files changed, 124 insertions(+), 54 deletions(-)
> 
> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
> index 31169326b2..799b0e0103 100644
> --- a/xen/arch/arm/domain.c
> +++ b/xen/arch/arm/domain.c
> @@ -278,33 +278,37 @@ static void ctxt_switch_to(struct vcpu *n)
>   /* Update per-VCPU guest runstate shared memory area (if registered). */
>   static void update_runstate_area(struct vcpu *v)
>   {
> -    void __user *guest_handle = NULL;
> -    struct vcpu_runstate_info runstate;
> +    struct vcpu_runstate_info *runstate;
>   
> -    if ( guest_handle_is_null(runstate_guest(v)) )
> +    /* XXX why do we accept not to block here */
> +    if ( !spin_trylock(&v->runstate_guest_lock) )
>           return;
>   
> -    memcpy(&runstate, &v->runstate, sizeof(runstate));
> +    runstate = runstate_guest(v);
> +
> +    if (runstate == NULL)
> +    {
> +        spin_unlock(&v->runstate_guest_lock);
> +        return;
> +    }
>   
>       if ( VM_ASSIST(v->domain, runstate_update_flag) )
>       {
> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
> -        guest_handle--;
> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
> -        __raw_copy_to_guest(guest_handle,
> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>           smp_wmb();

Because you set v->runstate.state_entry_time below, the placement of the
barrier seems a bit odd.

I would suggest updating v->runstate.state_entry_time first and then
updating runstate->state_entry_time.

> +        v->runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>       }
>   
> -    __copy_to_guest(runstate_guest(v), &runstate, 1);
> +    memcpy(runstate, &v->runstate, sizeof(v->runstate));
>   
> -    if ( guest_handle )
> +    if ( VM_ASSIST(v->domain, runstate_update_flag) )
>       {
> -        runstate.state_entry_time &= ~XEN_RUNSTATE_UPDATE;
> +        runstate->state_entry_time &= ~XEN_RUNSTATE_UPDATE;
>           smp_wmb();

You want to update runstate->state_entry_time after the barrier, not before.
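[Editorial aside: to make the ordering concern concrete, here is an illustrative Python sketch (not Xen code) of the XEN_RUNSTATE_UPDATE protocol under discussion — the writer sets the "update in progress" flag, publishes the data, then clears the flag, so a reader can detect a torn snapshot and retry. The class and function names are invented, and Python's GIL stands in for the real smp_wmb() barriers.]

```python
# Seqlock-style sketch of the runstate update-flag protocol.
import threading  # only to note this is a writer/reader concurrency pattern

XEN_RUNSTATE_UPDATE = 1 << 63  # top bit of state_entry_time flags an update

class Runstate:
    def __init__(self):
        self.state_entry_time = 0
        self.time = [0, 0, 0, 0]

def writer_update(rs, new_time, new_times):
    rs.state_entry_time = new_time | XEN_RUNSTATE_UPDATE  # flag: update started
    # (in C, smp_wmb() here: flag must be visible before the data changes)
    rs.time = list(new_times)
    # (in C, smp_wmb() here: data must be visible before the flag is cleared)
    rs.state_entry_time = new_time & ~XEN_RUNSTATE_UPDATE

def reader_snapshot(rs):
    """Spin until a consistent (time-stamp, times) snapshot is read."""
    while True:
        t1 = rs.state_entry_time
        if t1 & XEN_RUNSTATE_UPDATE:
            continue                   # writer in progress, retry
        snap = list(rs.time)
        if rs.state_entry_time == t1:  # nothing changed while we copied
            return t1, snap
```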

>   static void _update_runstate_area(struct vcpu *v)
>   {
> +    /* XXX: this should be removed */
>       if ( !update_runstate_area(v) && is_pv_vcpu(v) &&
>            !(v->arch.flags & TF_kernel_mode) )
>           v->arch.pv.need_update_runstate_area = 1;
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 7cc9526139..acc6f90ba3 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -161,6 +161,7 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
>       v->dirty_cpu = VCPU_CPU_CLEAN;
>   
>       spin_lock_init(&v->virq_lock);
> +    spin_lock_init(&v->runstate_guest_lock);
>   
>       tasklet_init(&v->continue_hypercall_tasklet, NULL, NULL);
>   
> @@ -691,6 +692,66 @@ int rcu_lock_live_remote_domain_by_id(domid_t dom, struct domain **d)
>       return 0;
>   }
>   
> +static void  unmap_runstate_area(struct vcpu *v, unsigned int lock)

NIT: There is an extra space after void.

Also, AFAICT, the lock parameter only ever takes two values. Please switch
it to a bool.

> +{
> +    mfn_t mfn;
> +
> +    if ( ! runstate_guest(v) )

NIT: We don't usually put a space after !.

But shouldn't this be checked within the lock?


> +        return;
> +
> +    if (lock)

NIT: if ( ... )

> +        spin_lock(&v->runstate_guest_lock);
> +
> +    mfn = domain_page_map_to_mfn(runstate_guest(v));
> +
> +    unmap_domain_page_global((void *)
> +                            ((unsigned long)v->runstate_guest &
> +                             PAGE_MASK));
> +
> +    put_page_and_type(mfn_to_page(mfn));
> +    runstate_guest(v) = NULL;
> +
> +    if (lock)

Ditto.

> +        spin_unlock(&v->runstate_guest_lock);
> +}
> +
> +static int map_runstate_area(struct vcpu *v,
> +                    struct vcpu_register_runstate_memory_area *area)
> +{
> +    unsigned long offset = area->addr.p & ~PAGE_MASK;
> +    void *mapping;
> +    struct page_info *page;
> +    size_t size = sizeof(struct vcpu_runstate_info);
> +
> +    ASSERT(runstate_guest(v) == NULL);
> +
> +    /* do not allow an area crossing 2 pages */
> +    if ( offset > (PAGE_SIZE - size) )
> +        return -EINVAL;

This is a change in behavior for the guest. If we are going forward with
this, it will want a separate patch with its own explanation of why it is
done.
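[Editorial aside: the `offset > (PAGE_SIZE - size)` check in the hunk above can be sketched as follows. PAGE_SIZE and the runstate structure size are assumptions here — 48 bytes matches a vcpu_runstate_info with state, state_entry_time and time[4] on 64-bit, but verify locally.]

```python
# Sketch of the "runstate area must not cross a page boundary" check.
PAGE_SIZE = 4096
RUNSTATE_SIZE = 48  # assumed sizeof(struct vcpu_runstate_info)

def crosses_page(gva, size=RUNSTATE_SIZE, page_size=PAGE_SIZE):
    """Mirror of `offset > (PAGE_SIZE - size)` with offset = addr & ~PAGE_MASK."""
    offset = gva & (page_size - 1)
    return offset > page_size - size

# An area starting exactly `size` bytes before the page end still fits;
# one byte later it would spill into the next page and is rejected.
```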

> +
> +#ifdef CONFIG_ARM
> +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);

A guest is allowed to set up the runstate for a different vCPU than the
current one. This will cause get_page_from_gva() to fail, as the
function cannot yet work with a vCPU other than the current one.

AFAICT, there is no restriction on when the runstate hypercall can be 
called. So this can even be called before the vCPU is brought up.

I was going to suggest using the current vCPU for translating the
address. However, it would be reasonable for an OS to use the same
virtual address for all the vCPUs, assuming the page-tables are different
per vCPU.

Recent Linux kernels use a per-cpu area, so the virtual address should be
different for each vCPU. But I don't know how the other OSes work.
Roger should be able to help for FreeBSD at least.

I have CCed Paul for the Windows drivers.

If we decide to introduce some restrictions then they should be explained
in the commit message and also documented in the public header (we have
been pretty bad at documenting changes in the past!).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 28 19:13:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 19:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeNxl-0006Ir-1K; Thu, 28 May 2020 19:13:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lRPh=7K=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeNxj-0006Ij-V2
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 19:13:08 +0000
X-Inumbo-ID: 43151b74-a117-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43151b74-a117-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 19:13:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gdzdzxQWyHvqIMFrWwBAsCN6nbH5qSJ0n1v5OvUpoOQ=; b=0b+V7SJKjImsGb48pqt5Ah5RMD
 Pw4jdjvtNjdgdjFFi9ZYNouxShyeGwx2DUVdXjMSAvCYAFwmoNjLWcQ96B2/tWcPIDXAgNjhJEOxW
 xOhGm4h/7ZVn+Gy3gLTb1QuM3Lal/xDGTmMrNYHGEcO+y1X9Y9Bc4EEtYzItGyQQKxYo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeNxe-0003b9-4q; Thu, 28 May 2020 19:13:02 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeNxd-0002XH-TY; Thu, 28 May 2020 19:13:02 +0000
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger@xen.org>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <20200528165341.GH1195@Air-de-Roger>
 <B0CBD25E-49D8-4AE5-B424-83E9A05FBF58@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <de72ffe2-34a9-0b65-8d66-3f1322258d0c@xen.org>
Date: Thu, 28 May 2020 20:12:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.14; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <B0CBD25E-49D8-4AE5-B424-83E9A05FBF58@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 28/05/2020 18:19, Bertrand Marquis wrote:
>>> +
>>> +    return 0;
>>> +}
>>> +
>>> int domain_kill(struct domain *d)
>>> {
>>>      int rc = 0;
>>> @@ -727,7 +788,10 @@ int domain_kill(struct domain *d)
>>>          if ( cpupool_move_domain(d, cpupool0) )
>>>              return -ERESTART;
>>>          for_each_vcpu ( d, v )
>>> +        {
>>> +            unmap_runstate_area(v, 0);
>>
>> Why is it not appropriate here to hold the lock? It might not be
>> technically needed, but still should work?
> 
> In a killing scenario you might stop a core while it was rescheduling.
> Couldn’t a core be killed using a cross core interrupt ?

You can't stop a vCPU in the middle of a context switch. A vCPU can 
only be scheduled out at specific points in Xen.

> If this is the case then I would need to do masked interrupt locking sections to prevent that case ?

At the beginning of domain_kill() the domain will be paused 
(see domain_pause()). After this step, none of the vCPUs will be 
running or scheduled.

Technically, you should use the lock everywhere to avoid a static 
analyzer throwing a warning, because you would be accessing the 
variable both with and without the lock held. If we go ahead with 
this, a comment in the code would at least be useful.

As an aside, I think you want the lock to always disable interrupts; 
otherwise check_lock() (which can only be enabled with 
CONFIG_DEBUG_LOCKS, and only on x86, though) will shout at you because 
your lock can be taken in both IRQ-safe and IRQ-unsafe contexts.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 28 19:39:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 19:39:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeONG-00089c-8y; Thu, 28 May 2020 19:39:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1kI=7K=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeONF-00089W-6m
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 19:39:29 +0000
X-Inumbo-ID: f1229914-a11a-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1229914-a11a-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 19:39:27 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=LzpABOTmHMakTHFsf4Fhp7O5krjnxk21TmkgQt+4Ho8=; b=xR+Plisyav0wXg1QK/kI1MjAn
 DFhcvR/Lut/K6SgNMUE0mlWHPYUAu871/2OPGf+wOGwPdwpHEbO5Ryh51nqNULnnVfVI8LNbNIlDE
 hFJVUfLqKa9CLQYQ/+u/TE73fvxywBEJW17M3J/5CFizFbagSdLN4rq7OXghVorpuGU3k=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeONC-00047L-PL; Thu, 28 May 2020 19:39:26 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeONC-00061T-CA; Thu, 28 May 2020 19:39:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeONC-0008Oz-BR; Thu, 28 May 2020 19:39:26 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150420-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150420: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Failures: qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: qemuu=06539ebc76b8625587aa78d646a9d8d5fddf84f3
X-Osstest-Versions-That: qemuu=ddc760832fa8cf5e93b9d9e6e854a5114ac63510
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 28 May 2020 19:39:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150420 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150420/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   7 xen-boot         fail in 150406 pass in 150420
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 150406

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150391
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150391
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150391
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150391
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150391
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150391
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                06539ebc76b8625587aa78d646a9d8d5fddf84f3
baseline version:
 qemuu                ddc760832fa8cf5e93b9d9e6e854a5114ac63510

Last test of basis   150391  2020-05-27 01:37:34 Z    1 days
Testing same since   150406  2020-05-27 15:36:19 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Aleksandar Markovic <aleksandar.qemu.devel@gmail.com>
  Huacai Chen <chenhc@lemote.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   ddc760832f..06539ebc76  06539ebc76b8625587aa78d646a9d8d5fddf84f3 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Thu May 28 21:29:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ5n-0000T9-6h; Thu, 28 May 2020 21:29:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ5m-0000Sw-1L
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:29:34 +0000
X-Inumbo-ID: 5227e598-a12a-11ea-a83e-12813bfff9fa
Received: from mail-wm1-f67.google.com (unknown [209.85.128.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5227e598-a12a-11ea-a83e-12813bfff9fa;
 Thu, 28 May 2020 21:29:33 +0000 (UTC)
Received: by mail-wm1-f67.google.com with SMTP id r15so712207wmh.5
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:29:33 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:in-reply-to
 :references:user-agent:mime-version:content-transfer-encoding;
 bh=RguJeW3M7hHVpj8Rg/UhcbGBJXMeePiaJDG/SqFx9qw=;
 b=Cp4uWq9AV541hp6APMQuO6c/0ap0HLV3X0gKNGOxpP6Wkt7IuD3/MWH9geeJYFkmgE
 0sLDXTTtJQLbDC45YbuNDGCwR3GOga0Ah2pD2ej0DW5fEPkTMSU9ukTPMZxBPANkw5CE
 LSsMsavhRku5YFbKs7BilRKnE0TwMqJsfhDeOD7qesemR2t2038O4fq+cCa8mRrvgv93
 wpwBDTArI9umvriIJqKb6W4JAoYgnR6tpjjm3VGhqOZxB0jFJqspjSe5IAegUJkAuXBt
 f7lCV+uAn4E13tcGUMfO+Qd2DumU8mdi9R9xw49+ZUVfwvTAHINs8L8IDIlIfZSHQ7e/
 TYJQ==
X-Gm-Message-State: AOAM531/7kB9YLmCqKFTExCO98DoUMufPnluAgbDhKZhDoMq2WL+A3yP
 XEEdFI86rut5L8r3i5s+3EI=
X-Google-Smtp-Source: ABdhPJw0iKqTYcG09RFoWIAaYF3y8Wo0zb4gte2lvn5QnotHfh+oR6jtQRIiygSVayWiXsFmRbMpNQ==
X-Received: by 2002:a7b:c005:: with SMTP id c5mr5162869wmb.22.1590701372297;
 Thu, 28 May 2020 14:29:32 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id u13sm7390046wmm.6.2020.05.28.14.29.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:29:31 -0700 (PDT)
Subject: [PATCH v2 2/7] xen: credit2: factor runqueue initialization in its
 own function.
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:29:30 +0200
Message-ID: <159070137084.12060.14661333224235870762.stgit@Palanthas>
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As it will be useful in later changes. While there, fix
the doc-comment.

No functional change intended.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v1:
* new patch
---
 xen/common/sched/credit2.c |   35 +++++++++++++++++++++++------------
 1 file changed, 23 insertions(+), 12 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 697c9f917d..8a4f28b9f5 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3766,21 +3766,16 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
     return spc;
 }
 
-/* Returns the ID of the runqueue the cpu is assigned to. */
-static struct csched2_runqueue_data *
-init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
-           unsigned int cpu)
+/*
+ * Do what's necessary to add cpu to the rqd (including activating the
+ * runqueue, if this is the first CPU we put in it).
+ */
+static void
+init_cpu_runqueue(struct csched2_private *prv, struct csched2_pcpu *spc,
+                  unsigned int cpu, struct csched2_runqueue_data *rqd)
 {
-    struct csched2_runqueue_data *rqd;
     unsigned int rcpu;
 
-    ASSERT(rw_is_write_locked(&prv->lock));
-    ASSERT(!cpumask_test_cpu(cpu, &prv->initialized));
-    /* CPU data needs to be allocated, but still uninitialized. */
-    ASSERT(spc);
-
-    rqd = spc->rqd;
-
     ASSERT(rqd && !cpumask_test_cpu(cpu, &spc->rqd->active));
 
     printk(XENLOG_INFO "Adding cpu %d to runqueue %d\n", cpu, rqd->id);
@@ -3816,6 +3811,22 @@ init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
 
     if ( rqd->nr_cpus == 1 )
         rqd->pick_bias = cpu;
+}
+
+/* Returns a pointer to the runqueue the cpu is assigned to. */
+static struct csched2_runqueue_data *
+init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
+           unsigned int cpu)
+{
+    struct csched2_runqueue_data *rqd;
+
+    ASSERT(rw_is_write_locked(&prv->lock));
+    ASSERT(!cpumask_test_cpu(cpu, &prv->initialized));
+    /* CPU data needs to be allocated, but still uninitialized. */
+    ASSERT(spc);
+
+    rqd = spc->rqd;
+    init_cpu_runqueue(prv, spc, cpu, rqd);
 
     return rqd;
 }



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:29:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ5a-0000S6-GX; Thu, 28 May 2020 21:29:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ5Z-0000S1-Bo
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:29:21 +0000
X-Inumbo-ID: 4a5e06f8-a12a-11ea-a83e-12813bfff9fa
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a5e06f8-a12a-11ea-a83e-12813bfff9fa;
 Thu, 28 May 2020 21:29:20 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id l10so879226wrr.10
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:29:20 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:user-agent
 :mime-version:content-transfer-encoding;
 bh=SdCpL4YJ7rILVK5R+t0Bn3JMeoXxs9gIE9CZ+hqvM/w=;
 b=SeLjkQwfxtmmoYMh1CfAw8J86FvIRnZAvmR8/nuRQMxrZRurvJaT2Nrq0dAm7R6734
 W9U74QpwV50Cs/lSzws1kyAccSZyJ7CKzlCl6xSdloIAxkTwTQIHyVzNhjRIg4dOMMsd
 l5bblpqscZiWRbjRJqhQbvRC1GWm7LMk57WxWcb8onSIA3glfi0GL5dsm3vzGhbdGEAG
 GUqCYTni/4EaS8t9QGEfPXlqcUt4AKhAxdhmWhbjMNDEpeDhdWekkMlbP2y6k+irh+Vj
 XLGBOtP7tL4FP9TeS813b0LNSXbLb+ukm1iiRkI+i8h0xKfmD71dGA4jJxMadBu5ziRS
 HKZg==
X-Gm-Message-State: AOAM532Wyi1VZcsa9CKGtxkLWKveslo7VhD9tvrMB2az0kw3xJfX2T6F
 tDfBZJhtpN03YSNJsJ/icZ8=
X-Google-Smtp-Source: ABdhPJyCoIGMmcnF8v9uTYF7aqYO01PCHMgMjnnglVHCOHxL4uvLI8XnpKjQzs2Q2RcO05nmYi8stg==
X-Received: by 2002:adf:a51b:: with SMTP id i27mr5271422wrb.173.1590701359139; 
 Thu, 28 May 2020 14:29:19 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id k12sm5324286wrn.42.2020.05.28.14.29.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:29:18 -0700 (PDT)
Subject: [PATCH v2 0/7] xen: credit2: limit the number of CPUs per runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:29:17 +0200
Message-ID: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hello!

Here's v2 of this series... a bit late, but technically still in time
for code-freeze, although I understand this is quite tight! :-P

Anyway, in Credit2, the CPUs are assigned to runqueues according to the
host topology. For instance, if we want per-socket runqueues (which is
the default), the CPUs that are in the same socket will end up in the
same runqueue.

This is generally good for scalability, at least as long as the number
of CPUs that end up in the same runqueue is not too high. In fact, all
these CPUs will compete for the same spinlock when making scheduling
decisions and manipulating the scheduler data structures. Therefore, if
there are too many of them, that lock can become a bottleneck.

This has not been an issue so far, but architectures with 128 CPUs per
socket are now available, and it is certainly far from ideal to have so
many CPUs in the same runqueue, competing for the same locks, etc.

Let's therefore set a cap on the total number of CPUs that can share a
runqueue. The value is 16 by default, but it can be changed with a boot
command line parameter.

Note also that, if the host has hyperthreading (or equivalent) and we
are not using core-scheduling, additional care is taken to avoid
splitting CPUs that are hyperthread siblings among different runqueues.

In v2, in addition to trying to address the review comments, I've added
the logic for doing a full rebalance of the scheduler runqueues while
the system is running. This is something that itself came up during the
review of v1, when we realized that we did not only want a cap, we also
wanted some balancing; and if you want real balancing, you have to be
able to re-arrange the runqueue layout dynamically.

It took a while because, although I had something that looked a lot
like the final solution implemented in this patch, I could not see how
to cleanly and effectively solve the issue of the vCPUs being in the
runqueues while trying to re-balance them. It was while talking with
Juergen that we figured out that we can actually pause the domains,
which I had not thought of... So, once again, Juergen, thanks! :-)

I have done most of the stress testing with core-scheduling disabled,
and it has survived anything I threw at it, but of course the more
testing the better, and I will be able to do more of it in the coming
days.

In any case, I have also verified that at least a few core-scheduling
enabled configs work as well.

There are git branches here:
 git://xenbits.xen.org/people/dariof/xen.git  sched/credit2-max-cpus-runqueue-v2
 http://xenbits.xen.org/gitweb/?p=people/dariof/xen.git;a=shortlog;h=refs/heads/sched/credit2-max-cpus-runqueue-v2

 https://github.com/dfaggioli/xen/tree/sched/credit2-max-cpus-runqueue-v2

While v1 is at the following link:
 https://lore.kernel.org/xen-devel/158818022727.24327.14309662489731832234.stgit@Palanthas/T/#m1e885a0f0a1feef83790ac179ab66512201cb770

Thanks and Regards
---
Dario Faggioli (7):
      xen: credit2: factor cpu to runqueue matching in a function
      xen: credit2: factor runqueue initialization in its own function.
      xen: cpupool: add a back-pointer from a scheduler to its pool
      xen: credit2: limit the max number of CPUs in a runqueue
      xen: credit2: compute cpus per-runqueue more dynamically.
      cpupool: create the 'cpupool sync' infrastructure
      xen: credit2: rebalance the number of CPUs in the scheduler runqueues

 docs/misc/xen-command-line.pandoc |   14 +
 xen/common/sched/cpupool.c        |   53 ++++
 xen/common/sched/credit2.c        |  437 ++++++++++++++++++++++++++++++++++---
 xen/common/sched/private.h        |    7 +
 xen/include/asm-arm/cpufeature.h  |    5 
 xen/include/asm-x86/processor.h   |    5 
 xen/include/xen/sched.h           |    1 
 7 files changed, 491 insertions(+), 31 deletions(-)

--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


From xen-devel-bounces@lists.xenproject.org Thu May 28 21:29:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ5g-0000SL-Pk; Thu, 28 May 2020 21:29:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ5f-0000SE-BZ
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:29:27 +0000
X-Inumbo-ID: 4e3be89e-a12a-11ea-8993-bc764e2007e4
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e3be89e-a12a-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 21:29:26 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id x6so839989wrm.13
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:29:26 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:in-reply-to
 :references:user-agent:mime-version:content-transfer-encoding;
 bh=JBpkxXxyifxfXBjwP1Ac2WLMkdMOyk1zAam+z/GHBmg=;
 b=ISUONMeK9D/WCUOtqVufpoY5qKXiYNIIA8PSggey/W7A7gYNcpu1ghO6f33Dg1Wi6n
 3hJ2bX/loxqIb1Cxx0OYr6MV2iG2vRdS6551jflbwQKSpRuKKxiRYhcMDPHcvDZgl+qE
 xmgf00kfEEcx+l+1MyJZq2v8OHbASRdAdAKP0uIZIpSxJ1zFHDF6L9h0D19phBeTF175
 krje1XT/YlJuv6yg0Eaf4QPBg1MHDV6/XR6CTIRCsNv4ieUaY/pzXxH0FArYFMJL3Aib
 IgHw36Hul68uv780YmydvZw7LboqPhZF3NVSrH0Wt1q+zMWlDtTpmjgSOirAfGAgmUDR
 WyGQ==
X-Gm-Message-State: AOAM531T1Uh5gENu4d26PMQFIkTXkU0SYenQ/WG/DqKbDwSH0kbdUqfw
 kA9Cc+SDv3VGi3Vml4L5k6sKVytE
X-Google-Smtp-Source: ABdhPJxZDMoEaDChbbXxfLrdkKE5tLhILEdSe7ifgSNEpvEpXkayrtHxOHruayVsuicYAMU2jV2edQ==
X-Received: by 2002:a5d:68c2:: with SMTP id p2mr5351046wrw.253.1590701365691; 
 Thu, 28 May 2020 14:29:25 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id d2sm6729892wrs.95.2020.05.28.14.29.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:29:25 -0700 (PDT)
Subject: [PATCH v2 1/7] xen: credit2: factor cpu to runqueue matching in a
 function
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:29:24 +0200
Message-ID: <159070136424.12060.2223986236933194278.stgit@Palanthas>
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Just move the big if() condition into an inline function.

No functional change intended.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
 xen/common/sched/credit2.c |   28 +++++++++++++++++-----------
 1 file changed, 17 insertions(+), 11 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 34f05c3e2a..697c9f917d 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -838,6 +838,20 @@ static inline bool same_core(unsigned int cpua, unsigned int cpub)
            cpu_to_core(cpua) == cpu_to_core(cpub);
 }
 
+static inline bool
+cpu_runqueue_match(const struct csched2_runqueue_data *rqd, unsigned int cpu)
+{
+    unsigned int peer_cpu = rqd->pick_bias;
+
+    BUG_ON(cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
+
+    /* OPT_RUNQUEUE_CPU will never find an existing runqueue. */
+    return opt_runqueue == OPT_RUNQUEUE_ALL ||
+           (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
+           (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
+           (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu));
+}
+
 static struct csched2_runqueue_data *
 cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
 {
@@ -855,21 +869,11 @@ cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
     rqd_ins = &prv->rql;
     list_for_each_entry ( rqd, &prv->rql, rql )
     {
-        unsigned int peer_cpu;
-
         /* Remember first unused queue index. */
         if ( !rqi_unused && rqd->id > rqi )
             rqi_unused = true;
 
-        peer_cpu = rqd->pick_bias;
-        BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID ||
-               cpu_to_socket(peer_cpu) == XEN_INVALID_SOCKET_ID);
-
-        /* OPT_RUNQUEUE_CPU will never find an existing runqueue. */
-        if ( opt_runqueue == OPT_RUNQUEUE_ALL ||
-             (opt_runqueue == OPT_RUNQUEUE_CORE && same_core(peer_cpu, cpu)) ||
-             (opt_runqueue == OPT_RUNQUEUE_SOCKET && same_socket(peer_cpu, cpu)) ||
-             (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu)) )
+        if ( cpu_runqueue_match(rqd, cpu) )
         {
             rqd_valid = true;
             break;
@@ -3744,6 +3748,8 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
     struct csched2_pcpu *spc;
     struct csched2_runqueue_data *rqd;
 
+    BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID);
+
     spc = xzalloc(struct csched2_pcpu);
     if ( spc == NULL )
         return ERR_PTR(-ENOMEM);



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:29:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:29:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ5u-0000Un-Et; Thu, 28 May 2020 21:29:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ5s-0000UQ-Ob
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:29:40 +0000
X-Inumbo-ID: 560df526-a12a-11ea-a83e-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 560df526-a12a-11ea-a83e-12813bfff9fa;
 Thu, 28 May 2020 21:29:39 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id l11so994528wru.0
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:29:39 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:in-reply-to
 :references:user-agent:mime-version:content-transfer-encoding;
 bh=wP+bknS/YQFF/cic7V0+7p/sNGGep29LYMTamuMpA6Y=;
 b=GiXcJy/GYhrFuVn2NGUdbeREpOAD9M8S3lmGVAwWC30MuNPeG2epOw+3z3pQ92NAiq
 9AuqQrnHUXO4b8cS6BzdS70R5ScN5Pox2wMCfd+OBxbz2oM7761vPGr35r3Dxb5dlzET
 folZ9RzY4kCyvIpIqnuUde1cauSse3FKRPvbPzzplrRr+rMbXtOAF7rSusg6llTPj0Nn
 hIBZZUc16dBY1et0LuHrjoMrLrt4uAhcMjLfsVhUSLAhx0J0T5C5wHbVeYwVBqMQdpqM
 lkd3Okf0WcQtzRFvU3G+IHr4PCkqwCy2CyC7RlBGT7Twa/pLRBgSVZz3QBk/GrP87sFK
 sKCA==
X-Gm-Message-State: AOAM5319ij2fFo+I820nWKkCY4meFAIxLgssFVkAZxI0HM2qIHm6TolS
 bPDo54WckDQJHPq9ok6d58fSrQap
X-Google-Smtp-Source: ABdhPJxGVTpK4BVv64hCPnZesAphm6BdyzNlNg637ASAcJ5f6KCUWPUlc2RuiWxUibxdp0d1o5JJcw==
X-Received: by 2002:adf:e648:: with SMTP id b8mr5534153wrn.386.1590701378810; 
 Thu, 28 May 2020 14:29:38 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id f128sm7844905wme.1.2020.05.28.14.29.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:29:38 -0700 (PDT)
Subject: [PATCH v2 3/7] xen: cpupool: add a back-pointer from a scheduler to
 its pool
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:29:37 +0200
Message-ID: <159070137738.12060.10928971799525755388.stgit@Palanthas>
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

If we need to know within which pool a particular scheduler
is working, we can do that by querying the cpupool pointer
of any of the sched_resource-s (i.e., ~ any of the CPUs)
assigned to the scheduler itself.

Basically, we pick any sched_resource that we know uses that
scheduler, and we check its *cpupool pointer. If we really
know that the resource uses the scheduler, this is fine, as
it also means the resource is inside the pool we are
looking for.

But, of course, we cannot do that for a pool/scheduler that has
not been given any sched_resource yet (or if we do not know
whether or not it has any sched_resource).

To overcome this limitation, add a back-pointer from the
scheduler to its own pool.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>
---
Changes from v1:
* new patch
---
 xen/common/sched/cpupool.c |    1 +
 xen/common/sched/private.h |    1 +
 2 files changed, 2 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 0664f7fa3d..7ea641ca26 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -287,6 +287,7 @@ static struct cpupool *cpupool_create(
         if ( c->sched == NULL )
             goto err;
     }
+    c->sched->cpupool = c;
     c->gran = opt_sched_granularity;
 
     *q = c;
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index b9a5b4c01c..df50976eb2 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -275,6 +275,7 @@ struct scheduler {
     char *opt_name;         /* option name for this scheduler    */
     unsigned int sched_id;  /* ID for this scheduler             */
     void *sched_data;       /* global data pointer               */
+    struct cpupool *cpupool;/* points to this scheduler's pool   */
 
     int          (*global_init)    (void);
 



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:29:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ60-0000Wv-NR; Thu, 28 May 2020 21:29:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ5z-0000Wd-KF
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:29:47 +0000
X-Inumbo-ID: 5a133d48-a12a-11ea-9dbe-bc764e2007e4
Received: from mail-wr1-f68.google.com (unknown [209.85.221.68])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a133d48-a12a-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 21:29:46 +0000 (UTC)
Received: by mail-wr1-f68.google.com with SMTP id j16so908973wrb.7
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:29:46 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:in-reply-to
 :references:user-agent:mime-version:content-transfer-encoding;
 bh=3OHWyR/HwCR9rAgCd2sPFwONc/WZNxWgzepLW7mEl5s=;
 b=O6ZQheLB0zc2hDGifGFP21rWCimraF4c6i5Q/171V7ADdEislZXv+y3ddg3/bSsR5J
 xv6EzoRkCIAqjwxjjNqxsnbmfCFmIg6bbPXL6HSvCRbr/gAe0nMUpd39oscDC57HSR2v
 7Fc7P9etgurKf6C+JhMsoDY6lZ0NnKQoHw5At30PgWH0YO/OCwlvHiSzUV32IbXPkEYQ
 LKOjDCddNZpF3dblT3d9NHo4DYVg4qJjzgJERYmg1zbRFmqBO89Bi6baSJ9to2ad4ZOS
 ipbYA0hr3yfHH+x0HMBG+glGDgi1yVgjTWU2dUc/vmsuKkOmzUAbhO1RKlulAoeQSOff
 zr1A==
X-Gm-Message-State: AOAM530o+25K35YLrLfA9Qbhn3n38BXHFFTiRmj4Yx+IJdi8qku5bNvS
 UeTtbtc5qi9cv5X4Q9tcjtbAo5vr
X-Google-Smtp-Source: ABdhPJyDtEKhRCBAmDt+TXjHz1fweMI24AbQNOf2IVNAfXv58Q7YnhR6n8/q/qoSew4h9144P3iIxQ==
X-Received: by 2002:adf:ee47:: with SMTP id w7mr5240607wro.171.1590701385533; 
 Thu, 28 May 2020 14:29:45 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id t185sm7871999wmt.28.2020.05.28.14.29.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:29:44 -0700 (PDT)
Subject: [PATCH v2 4/7] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:29:44 +0200
Message-ID: <159070138395.12060.9523981885146042705.stgit@Palanthas>
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In Credit2 CPUs (can) share runqueues, depending on the topology. For
instance, with per-socket runqueues (the default) all the CPUs that are
part of the same socket share a runqueue.

On platforms with a huge number of CPUs per socket, that can be a
problem. An example is AMD EPYC2 servers, where we can have up to 128
CPUs in a socket.

It is of course possible to define other, still topology-based, runqueue
arrangements (e.g., per-LLC, per-DIE, etc). But that may still result in
runqueues with too many CPUs on other/future platforms. For instance, a
system with 96 CPUs and 2 NUMA nodes will end up having 48 CPUs per
runqueue. Not as bad, but still a lot!

Therefore, let's set a limit to the max number of CPUs that can share a
Credit2 runqueue. The actual value is configurable (at boot time), the
default being 16. If, for instance, there are more than 16 CPUs in a
socket, they'll be split among two (or more) runqueues.

Note: with core scheduling enabled, this parameter sets the max number
of *scheduling resources* that can share a runqueue. Therefore, with
granularity set to core (and assuming 2 threads per core), we will have
at most 16 cores per runqueue, which corresponds to 32 threads. But that
is fine, considering how core scheduling works.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v1:
- always try to add a CPU to the runqueue with the least CPUs already in
  it. This should guarantee a more even distribution of CPUs among
  runqueues, as requested during review;
- rename the matching function from foo_smt_bar() to foo_siblings_bar(),
  which is more generic, and do the same to the per-arch wrappers;
- deal with the case where the user is trying to set fewer CPUs per
  runqueue than there are siblings per core (by putting siblings in the
  same runq anyway, but logging a message), as requested during review;
- use the per-cpupool value for the scheduling granularity, as requested
  during review;
- add a comment about why we also count siblings that are currently
  outside of our cpupool, as suggested during review;
- add a boot command line doc entry;
- fix typos in comments;
---
 docs/misc/xen-command-line.pandoc |   14 ++++
 xen/common/sched/credit2.c        |  144 +++++++++++++++++++++++++++++++++++--
 xen/include/asm-arm/cpufeature.h  |    5 +
 xen/include/asm-x86/processor.h   |    5 +
 4 files changed, 162 insertions(+), 6 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index e16bb90184..1787f2c8fb 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1840,6 +1840,20 @@ with read and write permissions.
 
 Choose the default scheduler.
 
+### sched_credit2_max_cpus_runqueue
+> `= <integer>`
+
+> Default: `16`
+
+Defines how many CPUs will be put, at most, in each Credit2 runqueue.
+
+Runqueues are still arranged according to the host topology (and following
+what is indicated by the 'credit2_runqueue' parameter). But we also have a cap
+on the number of CPUs that share each runqueue.
+
+A value that is a submultiple of the number of online CPUs is recommended,
+as that would likely produce a perfectly balanced runqueue configuration.
+
 ### sched_credit2_migrate_resist
 > `= <integer>`
 
diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index 8a4f28b9f5..f4d3f8ae6b 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -25,6 +25,7 @@
 #include <xen/trace.h>
 #include <xen/cpu.h>
 #include <xen/keyhandler.h>
+#include <asm/processor.h>
 
 #include "private.h"
 
@@ -471,6 +472,22 @@ static int __init parse_credit2_runqueue(const char *s)
 }
 custom_param("credit2_runqueue", parse_credit2_runqueue);
 
+/*
+ * How many CPUs will be put, at most, in each runqueue.
+ *
+ * Runqueues are still arranged according to the host topology (and according
+ * to the value of the 'credit2_runqueue' parameter). But we also have a cap
+ * to the number of CPUs that share runqueues.
+ *
+ * This should be considered an upper limit. In fact, we also try to balance
+ * the number of CPUs in each runqueue. And, when doing that, it is possible
+ * that fewer CPUs than this parameter mandates will actually be put
+ * in each runqueue.
+ */
+#define MAX_CPUS_RUNQ 16
+static unsigned int __read_mostly opt_max_cpus_runqueue = MAX_CPUS_RUNQ;
+integer_param("sched_credit2_max_cpus_runqueue", opt_max_cpus_runqueue);
+
 /*
  * Per-runqueue data
  */
@@ -852,18 +869,83 @@ cpu_runqueue_match(const struct csched2_runqueue_data *rqd, unsigned int cpu)
            (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu));
 }
 
+/*
+ * Additional checks, to avoid separating siblings in different runqueues.
+ * This deals with both Intel's HTs and AMD's CUs. An arch that does not have
+ * any similar concept will just have cpu_nr_siblings() always return 1, and
+ * setup the cpu_sibling_mask-s acordingly (as currently does ARM), and things
+ * will just work as well.
+ */
+static bool
+cpu_runqueue_siblings_match(const struct csched2_runqueue_data *rqd,
+                            unsigned int cpu, unsigned int max_cpus_runq)
+{
+    unsigned int nr_sibls = cpu_nr_siblings(cpu);
+    unsigned int rcpu, tot_sibls = 0;
+
+    /*
+     * If we put the CPU in this runqueue, we must be sure that there will
+     * be enough room for accepting its sibling(s) as well.
+     */
+    cpumask_clear(cpumask_scratch_cpu(cpu));
+    for_each_cpu ( rcpu, &rqd->active )
+    {
+        ASSERT(rcpu != cpu);
+        if ( !cpumask_intersects(per_cpu(cpu_sibling_mask, rcpu), cpumask_scratch_cpu(cpu)) )
+        {
+            /*
+             * For each CPU already in the runqueue, account for it and for
+             * its sibling(s), independently from whether they are in the
+             * runqueue or not. Of course, we do this only once, for each CPU
+             * that is already inside the runqueue and all its siblings!
+             *
+             * This way, even if there are CPUs in the runqueue with siblings
+             * in a different cpupools, we still count all of them here.
+             * The reason for this is that, if at some future point we will
+             * move those sibling CPUs to this cpupool, we want them to land
+             * in this runqueue. Hence we must be sure to leave space for them.
+             */
+            cpumask_or(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
+                       per_cpu(cpu_sibling_mask, rcpu));
+            tot_sibls += cpu_nr_siblings(rcpu);
+        }
+    }
+    /*
+     * We know that neither the CPU, nor any of its sibling are here,
+     * or we wouldn't even have entered the function.
+     */
+    ASSERT(!cpumask_intersects(cpumask_scratch_cpu(cpu),
+                               per_cpu(cpu_sibling_mask, cpu)));
+
+    /* Try adding CPU and its sibling(s) to the count and check... */
+    return tot_sibls + nr_sibls <= max_cpus_runq;
+}
+
 static struct csched2_runqueue_data *
-cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
+cpu_add_to_runqueue(const struct scheduler *ops, unsigned int cpu)
 {
+    struct csched2_private *prv = csched2_priv(ops);
     struct csched2_runqueue_data *rqd, *rqd_new;
+    struct csched2_runqueue_data *rqd_valid = NULL;
     struct list_head *rqd_ins;
     unsigned long flags;
     int rqi = 0;
-    bool rqi_unused = false, rqd_valid = false;
+    unsigned int min_rqs, max_cpus_runq;
+    bool rqi_unused = false;
 
     /* Prealloc in case we need it - not allowed with interrupts off. */
     rqd_new = xzalloc(struct csched2_runqueue_data);
 
+    /*
+     * While respecting the limit of not having more than the max number of
+ * CPUs per runqueue, let's also try to "spread" the CPUs, as evenly as
+     * possible, among the runqueues. For doing that, we need to know upfront
+     * how many CPUs we have, so let's use the number of CPUs that are online
+     * for that.
+     */
+    min_rqs = ((num_online_cpus() - 1) / opt_max_cpus_runqueue) + 1;
+    max_cpus_runq = num_online_cpus() / min_rqs;
+
     write_lock_irqsave(&prv->lock, flags);
 
     rqd_ins = &prv->rql;
@@ -873,10 +955,59 @@ cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
         if ( !rqi_unused && rqd->id > rqi )
             rqi_unused = true;
 
+        /*
+         * First of all, let's check whether, according to the system
+         * topology, this CPU belongs in this runqueue.
+         */
         if ( cpu_runqueue_match(rqd, cpu) )
         {
-            rqd_valid = true;
-            break;
+            /*
+             * If the CPU has any siblings, they are online and they are
+             * being added to this cpupool, always keep them together. Even
+             * if that means violating what the opt_max_cpus_runqueue param
+             * indicates. However, if this happens, chances are high that a
+             * too small value was used for the parameter, so warn the user
+             * about that.
+             *
+             * Note that we cannot check this once and for all, say, during
+             * scheduler initialization. In fact, at least in theory, the
+             * number of siblings a CPU has may not be the same for all the
+             * CPUs.
+             */
+            if ( cpumask_intersects(&rqd->active, per_cpu(cpu_sibling_mask, cpu)) )
+            {
+                if ( cpumask_weight(&rqd->active) >= opt_max_cpus_runqueue )
+                {
+                        printk("WARNING: %s: more than opt_max_cpus_runqueue "
+                               "in a runqueue (%u vs %u), due to topology constraints.\n"
+                               "Consider raising it!\n",
+                               __func__, opt_max_cpus_runqueue,
+                               cpumask_weight(&rqd->active));
+                }
+                rqd_valid = rqd;
+                break;
+            }
+
+            /*
+             * If we're using core (or socket) scheduling, no need to do any
+             * further checking beyond the number of CPUs already in this
+             * runqueue respecting our upper bound.
+             *
+             * Otherwise, let's try to make sure that siblings stay in the
+             * same runqueue, pretty much under any circumstances.
+             */
+            if ( rqd->refcnt < max_cpus_runq && (ops->cpupool->gran != SCHED_GRAN_cpu ||
+                  cpu_runqueue_siblings_match(rqd, cpu, max_cpus_runq)) )
+            {
+                /*
+                 * This runqueue is ok, but as we said, we also want an even
+                 * distribution of the CPUs. So, unless this is the very first
+                 * match, we go on, check all runqueues and actually add the
+                 * CPU into the one that is less full.
+                 */
+                if ( !rqd_valid || rqd->refcnt < rqd_valid->refcnt )
+                    rqd_valid = rqd;
+            }
         }
 
         if ( !rqi_unused )
@@ -900,6 +1031,8 @@ cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
         rqd->pick_bias = cpu;
         rqd->id = rqi;
     }
+    else
+        rqd = rqd_valid;
 
     rqd->refcnt++;
 
@@ -3744,7 +3877,6 @@ csched2_dump(const struct scheduler *ops)
 static void *
 csched2_alloc_pdata(const struct scheduler *ops, int cpu)
 {
-    struct csched2_private *prv = csched2_priv(ops);
     struct csched2_pcpu *spc;
     struct csched2_runqueue_data *rqd;
 
@@ -3754,7 +3886,7 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
     if ( spc == NULL )
         return ERR_PTR(-ENOMEM);
 
-    rqd = cpu_add_to_runqueue(prv, cpu);
+    rqd = cpu_add_to_runqueue(ops, cpu);
     if ( IS_ERR(rqd) )
     {
         xfree(spc);
diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
index 9af5666628..8fdf9685d7 100644
--- a/xen/include/asm-arm/cpufeature.h
+++ b/xen/include/asm-arm/cpufeature.h
@@ -64,6 +64,11 @@ static inline bool cpus_have_cap(unsigned int num)
     return test_bit(num, cpu_hwcaps);
 }
 
+static inline int cpu_nr_siblings(unsigned int cpu)
+{
+    return 1;
+}
+
 /* System capability check for constant cap */
 #define cpus_have_const_cap(num) ({                 \
         register_t __ret;                           \
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 070691882b..73017c3f4b 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -174,6 +174,11 @@ extern void init_intel_cacheinfo(struct cpuinfo_x86 *c);
 
 unsigned int apicid_to_socket(unsigned int);
 
+static inline int cpu_nr_siblings(unsigned int cpu)
+{
+    return cpu_data[cpu].x86_num_siblings;
+}
+
 /*
  * Generic CPUID function
  * clear %ecx since some cpus (Cyrix MII) do not set or clear %ecx



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:30:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:30:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ67-0000ZI-15; Thu, 28 May 2020 21:29:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ66-0000Z6-Cb
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:29:54 +0000
X-Inumbo-ID: 5dfc8d56-a12a-11ea-a83e-12813bfff9fa
Received: from mail-wr1-f66.google.com (unknown [209.85.221.66])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5dfc8d56-a12a-11ea-a83e-12813bfff9fa;
 Thu, 28 May 2020 21:29:53 +0000 (UTC)
Received: by mail-wr1-f66.google.com with SMTP id y17so866668wrn.11
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:29:53 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:in-reply-to
 :references:user-agent:mime-version:content-transfer-encoding;
 bh=rQ7Tl9PBNxTBxCYn4Y9mVqk3mlMzPIyms5hZ/Bt2tmY=;
 b=XW2tBMNn0zNlfZv58/8SgZLlaHX5/d8REukdrsubBAeSTWiMX8u1mwBaJd/bUJDBnK
 ONXW6BJD1F59FfL49ynBEeT/yYCZnFhxoaU+hPfBqgmCV9AfieFljzfytoLz1SgpUA7M
 ZdYJBmxFMg+PGQKSY2gqjuRbbWxpi2xS/6LJ+P92MigyJu6BK6cC8Imx+5KBfHk6TbaV
 RX7Dcy7Sx6miCVHYDPvOSP4fSR9sJI1Nhd4YWRTugGq483Syb4HUoDfqggWhE76aRSvg
 TMB/LAlGI1cR9ezQtmks34jumw0N8sIkLgf64ZaO9Fgdb28wa3DXtE3sku1icgpEqlsb
 m7Cw==
X-Gm-Message-State: AOAM5330f5NY8sR2I3i6Dii3XSJLv1n1EmUmmCEzxnJpsKJh0Scoptmz
 8Dtb/edOd9gFRUBCHUZyQUM=
X-Google-Smtp-Source: ABdhPJy6mBC6m3+RplBD4/x+yE9izwjhW1w9Ukki0KWqb83MAezO3fTKYvF3932his/v/tC6w+pq8w==
X-Received: by 2002:adf:a3c9:: with SMTP id m9mr5312588wrb.405.1590701392123; 
 Thu, 28 May 2020 14:29:52 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id y207sm9318196wmd.7.2020.05.28.14.29.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:29:51 -0700 (PDT)
Subject: [PATCH v2 5/7] xen: credit2: compute cpus per-runqueue more
 dynamically.
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:29:50 +0200
Message-ID: <159070139062.12060.9216996770730278147.stgit@Palanthas>
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

During boot, we use num_online_cpus() as an indication of how
many CPUs will end up in cpupool 0. We then decide (basing also
on the value of the boot time parameter opt_max_cpus_runqueue)
the actual number of CPUs that we want in each runqueue, in such
a way that the runqueues themselves are as balanced as possible
(in terms of how many CPUs they have).

After boot, though, when for instance we are creating a cpupool,
it would be more appropriate to use the number of CPUs of the
pool, rather than the total number of online CPUs.

Do exactly that, even if this means (since, from Xen's perspective,
CPUs are added to pools one by one) that we'll be computing a
different maximum number of CPUs per runqueue each time.

In fact, we do it in preparation for the next change where,
after having computed the new value, we will also re-balance
the runqueues, by rebuilding them in such a way that the newly
computed maximum is actually respected for all of them.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v1:
* new patch
---
 xen/common/sched/credit2.c |   30 +++++++++++++++++++++++++-----
 1 file changed, 25 insertions(+), 5 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index f4d3f8ae6b..af6d374677 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -922,7 +922,8 @@ cpu_runqueue_siblings_match(const struct csched2_runqueue_data *rqd,
 }
 
 static struct csched2_runqueue_data *
-cpu_add_to_runqueue(const struct scheduler *ops, unsigned int cpu)
+cpu_add_to_runqueue(const struct scheduler *ops, unsigned int nr_cpus,
+                    unsigned int cpu)
 {
     struct csched2_private *prv = csched2_priv(ops);
     struct csched2_runqueue_data *rqd, *rqd_new;
@@ -943,8 +944,8 @@ cpu_add_to_runqueue(const struct scheduler *ops, unsigned int cpu)
      * how many CPUs we have, so let's use the number of CPUs that are online
      * for that.
      */
-    min_rqs = ((num_online_cpus() - 1) / opt_max_cpus_runqueue) + 1;
-    max_cpus_runq = num_online_cpus() / min_rqs;
+    min_rqs = ((nr_cpus - 1) / opt_max_cpus_runqueue) + 1;
+    max_cpus_runq = nr_cpus / min_rqs;
 
     write_lock_irqsave(&prv->lock, flags);
 
@@ -3781,8 +3782,10 @@ csched2_dump(const struct scheduler *ops)
      */
     read_lock_irqsave(&prv->lock, flags);
 
-    printk("Active queues: %d\n"
+    printk("Active CPUs: %u\n"
+           "Active queues: %u\n"
            "\tdefault-weight     = %d\n",
+           cpumask_weight(&prv->initialized),
            prv->active_queues,
            CSCHED2_DEFAULT_WEIGHT);
     list_for_each_entry ( rqd, &prv->rql, rql )
@@ -3879,6 +3882,7 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
 {
     struct csched2_pcpu *spc;
     struct csched2_runqueue_data *rqd;
+    unsigned int nr_cpus;
 
     BUG_ON(cpu_to_socket(cpu) == XEN_INVALID_SOCKET_ID);
 
@@ -3886,7 +3890,23 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
     if ( spc == NULL )
         return ERR_PTR(-ENOMEM);
 
-    rqd = cpu_add_to_runqueue(ops, cpu);
+    /*
+     * If the system is booting, we know that, at this point, num_online_cpus()
+     * CPUs have been brought up, and will be added to the default cpupool and
+     * hence to this scheduler. This is valuable information that we can use
+     * to build the runqueues in an already balanced state.
+     *
+     * On the other hand, when we are live, and e.g., are creating a new
+     * cpupool, or adding CPUs to an already existing one, we have no way to
+     * know in advance, from here, how many CPUs it will have. Therefore, in
+     * that case, we just use the current number of CPUs that the pool has,
+     * plus 1 (because we are in the process of adding one), for the balancing.
+     * This will likely provide suboptimal results, and we rely on dynamic
+     * runqueue rebalancing for fixing it up.
+     */
+    nr_cpus = system_state < SYS_STATE_active ? num_online_cpus() :
+        cpumask_weight(&csched2_priv(ops)->initialized) + 1;
+    rqd = cpu_add_to_runqueue(ops, nr_cpus, cpu);
     if ( IS_ERR(rqd) )
     {
         xfree(spc);



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:30:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ6E-0000kp-F6; Thu, 28 May 2020 21:30:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ6C-0000c9-Nn
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:30:00 +0000
X-Inumbo-ID: 62131932-a12a-11ea-a83e-12813bfff9fa
Received: from mail-wr1-f65.google.com (unknown [209.85.221.65])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 62131932-a12a-11ea-a83e-12813bfff9fa;
 Thu, 28 May 2020 21:29:59 +0000 (UTC)
Received: by mail-wr1-f65.google.com with SMTP id y17so867126wrn.11
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:29:59 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:in-reply-to
 :references:user-agent:mime-version:content-transfer-encoding;
 bh=fW30aG1W0jN7We9433N5nJ1Y6xjm4AOXMc+J0AYDRT0=;
 b=aoPYK8Y4idezJX48/tvAOnk18aUtbwgpwMpW4hadoxewRPhjwq99EclFNHu8AHAyht
 sS3mIoNxwC11a9NCorGXMGEnfTN2Gas/V4+xZUVkF+j7iMurBwGLzvEAQcytZSwhXoYS
 UrfasN4TsZZCfHjpycQGFSn8+mlb7sGPOUvLDWABndGRotgtsk4pEWJFTjO1v1yzhnnJ
 mOWpp/4Bp7VkSuGCPGp482HX2uTW7B0spszRDEw9WPI4bEhCiU5sTyT3sHnqQcu1OPpf
 0VVM6X14AgOKSWW1Sdd+rK7Cj+cFAYBnDwPA5bxEziEQNLuKCOosfflM7+uYUTEPiRd/
 svMA==
X-Gm-Message-State: AOAM531SPO9iHSzbA5cFFk3dLleab6jRUEfDgmhX3X6tSWRvboRPlYFB
 m2HMrTNOHvt2g/YUNOOAWGE=
X-Google-Smtp-Source: ABdhPJzwKIRpWKuB1+AU4ejvi/dOi5O1e15rqE5i9fKNhqi+asxrnpNPwSxZ+Hur5VKKNbo9+/1JVQ==
X-Received: by 2002:adf:8b0c:: with SMTP id n12mr5861182wra.340.1590701398903; 
 Thu, 28 May 2020 14:29:58 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id f128sm7845886wme.1.2020.05.28.14.29.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:29:58 -0700 (PDT)
Subject: [PATCH v2 6/7] cpupool: create the 'cpupool sync' infrastructure
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:29:57 +0200
Message-ID: <159070139727.12060.7434914618426479787.stgit@Palanthas>
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In case we want to make some live changes to the configuration
of (typically) the scheduler of a cpupool, we need things to be
quiet in that pool.

Not necessarily like with stop machine, but we at least need
to make sure that no domain is either running or sitting
in the runqueues of the scheduler itself.

In fact, we need exactly something like this mechanism, for
changing "on the fly" which CPUs are assigned to which runqueue
in a Credit2 cpupool (check the following changes).
Therefore, instead of doing something specific for such a
use case, let's implement a generic mechanism.

The reason is, of course, that it may turn out to be useful for
other purposes in the future. But even for this specific case,
it is much easier and cleaner to just cede control to cpupool
code, instead of trying to do everything inside the scheduler.

Within the new cpupool_sync() function, we want to pause all
domains of a pool, including potentially the one calling
the function. Therefore, we defer the pausing, the actual work
and also the unpausing to a tasklet.

Suggested-by: Juergen Gross <jgross@suse.com>
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: Juergen Gross <jgross@suse.com>
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Julien Grall <julien@xen.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>
---
Changes from v1:
* new patch
---
 xen/common/sched/cpupool.c |   52 ++++++++++++++++++++++++++++++++++++++++++++
 xen/common/sched/private.h |    6 +++++
 xen/include/xen/sched.h    |    1 +
 3 files changed, 59 insertions(+)

diff --git a/xen/common/sched/cpupool.c b/xen/common/sched/cpupool.c
index 7ea641ca26..122c371c7a 100644
--- a/xen/common/sched/cpupool.c
+++ b/xen/common/sched/cpupool.c
@@ -234,6 +234,42 @@ void cpupool_put(struct cpupool *pool)
     free_cpupool_struct(pool);
 }
 
+void do_cpupool_sync(void *arg)
+{
+    struct cpupool *c = arg;
+    struct domain *d;
+
+
+    spin_lock(&cpupool_lock);
+
+    /*
+     * With this second call (and this time to domain_pause()) we basically
+     * make sure that all the domains have actually stopped running.
+     */
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain_in_cpupool(d, c)
+        domain_pause(d);
+    rcu_read_unlock(&domlist_read_lock);
+
+    /*
+     * Let's invoke the function that the caller provided. We pass a reference
+     * to our own scheduler as a parameter, with which it should easily reach
+     * anything it needs.
+     */
+    c->sync_ctl.func(c->sched);
+
+    /* We called pause twice, so we need to do the same with unpause. */
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain_in_cpupool(d, c)
+    {
+        domain_unpause(d);
+        domain_unpause(d);
+    }
+    rcu_read_unlock(&domlist_read_lock);
+
+    spin_unlock(&cpupool_lock);
+}
+
 /*
  * create a new cpupool with specified poolid and scheduler
  * returns pointer to new cpupool structure if okay, NULL else
@@ -292,6 +328,8 @@ static struct cpupool *cpupool_create(
 
     *q = c;
 
+    tasklet_init(&c->sync_ctl.tasklet, do_cpupool_sync, c);
+
     spin_unlock(&cpupool_lock);
 
     debugtrace_printk("Created cpupool %d with scheduler %s (%s)\n",
@@ -332,6 +370,7 @@ static int cpupool_destroy(struct cpupool *c)
         return -EBUSY;
     }
     *q = c->next;
+    tasklet_kill(&c->sync_ctl.tasklet);
     spin_unlock(&cpupool_lock);
 
     cpupool_put(c);
@@ -372,6 +411,19 @@ int cpupool_move_domain(struct domain *d, struct cpupool *c)
     return ret;
 }
 
+void cpupool_sync(struct cpupool *c, void (*func)(void*))
+{
+    struct domain *d;
+
+    rcu_read_lock(&domlist_read_lock);
+    for_each_domain_in_cpupool(d, c)
+        domain_pause_nosync(d);
+    rcu_read_unlock(&domlist_read_lock);
+
+    c->sync_ctl.func = func;
+    tasklet_schedule_on_cpu(&c->sync_ctl.tasklet, cpumask_first(c->cpu_valid));
+}
+
 /*
  * assign a specific cpu to a cpupool
  * cpupool_lock must be held
diff --git a/xen/common/sched/private.h b/xen/common/sched/private.h
index df50976eb2..4705c8b119 100644
--- a/xen/common/sched/private.h
+++ b/xen/common/sched/private.h
@@ -503,6 +503,11 @@ static inline void sched_unit_unpause(const struct sched_unit *unit)
 #define REGISTER_SCHEDULER(x) static const struct scheduler *x##_entry \
   __used_section(".data.schedulers") = &x;
 
+struct cpupool_sync_ctl {
+    struct tasklet tasklet;
+    void (*func)(void*);
+};
+
 struct cpupool
 {
     int              cpupool_id;
@@ -514,6 +519,7 @@ struct cpupool
     struct scheduler *sched;
     atomic_t         refcnt;
     enum sched_gran  gran;
+    struct cpupool_sync_ctl sync_ctl;
 };
 
 static inline cpumask_t *cpupool_domain_master_cpumask(const struct domain *d)
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index ac53519d7f..e2a233c96c 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1061,6 +1061,7 @@ extern enum cpufreq_controller {
 } cpufreq_controller;
 
 int cpupool_move_domain(struct domain *d, struct cpupool *c);
+void cpupool_sync(struct cpupool *c, void (*func)(void*));
 int cpupool_do_sysctl(struct xen_sysctl_cpupool_op *op);
 int cpupool_get_id(const struct domain *d);
 const cpumask_t *cpupool_valid_cpus(const struct cpupool *pool);



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:30:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ6K-0001KY-RJ; Thu, 28 May 2020 21:30:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=LyZP=7K=gmail.com=raistlin.df@srs-us1.protection.inumbo.net>)
 id 1jeQ6J-0001GE-NN
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:30:07 +0000
X-Inumbo-ID: 661e1d24-a12a-11ea-a83e-12813bfff9fa
Received: from mail-wr1-f67.google.com (unknown [209.85.221.67])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 661e1d24-a12a-11ea-a83e-12813bfff9fa;
 Thu, 28 May 2020 21:30:06 +0000 (UTC)
Received: by mail-wr1-f67.google.com with SMTP id x13so942679wrv.4
 for <xen-devel@lists.xenproject.org>; Thu, 28 May 2020 14:30:06 -0700 (PDT)
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:subject:from:to:cc:date:message-id:in-reply-to
 :references:user-agent:mime-version:content-transfer-encoding;
 bh=yM9i/il81pQeM/q8E3fSLqcs+S42XWd8SHRMmUQBSRY=;
 b=lCwj9cddSm2qkutmRcrGljB4ZLt+V1qujCnsa56Pi+xWNUTCJGnTnW0O+IrayXXWz6
 ZxEHnvuySMVJPftn7Q6sINk979BicwX3lM553TGOlW2+kQODjl3t4ap8teVHwTztujOK
 SQ9XyuI9BD9ge9DYt0FR8R7+SPYAW4ph9xXRUYwAwoRaXegG+vkRZ5mzwB57fac4GKR/
 Ufnr8qQNXmXsTneSZeDoG4+WX7ixTFbLFpt79+kIC4jj5KQJpVP3YUsF2sTBSbCjrSPQ
 J5Ba6WVb08qqiSs8NwEVwD0YIlcs6MNaFZS4VXxZSf817m7OytKt5HJHhfbABUogGyM6
 AWvA==
X-Gm-Message-State: AOAM530GxwZTbDbIIButsZgwA7SfWZx7ZC7K1v6jo1m/ld7cC0n52XZX
 F063Z9TGWITRixycolCciVk=
X-Google-Smtp-Source: ABdhPJzRpciuBc1EO5nViCEa20e7q+SNxn2KdzPCdCSiQCg93fz7vpi2Hvt1MwpVn8NPee/t3YxoGg==
X-Received: by 2002:adf:9ccf:: with SMTP id h15mr5428672wre.275.1590701405541; 
 Thu, 28 May 2020 14:30:05 -0700 (PDT)
Received: from [192.168.0.36] (87.78.186.89.cust.ip.kpnqwest.it.
 [89.186.78.87])
 by smtp.gmail.com with ESMTPSA id l19sm7717544wmj.14.2020.05.28.14.30.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 28 May 2020 14:30:04 -0700 (PDT)
Subject: [PATCH v2 7/7] xen: credit2: rebalance the number of CPUs in the
 scheduler runqueues
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Thu, 28 May 2020 23:30:04 +0200
Message-ID: <159070140400.12060.4466204111704460801.stgit@Palanthas>
In-Reply-To: <159070133878.12060.13318432301910522647.stgit@Palanthas>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
User-Agent: StGit/0.21
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When adding and removing CPUs to/from a pool, we can end up in a
situation where some runqueues have a lot of CPUs, while others have only
a couple of them. Even if the scheduler (namely, the load balancer)
should be capable of dealing with such a situation, it is something that
is better avoided.

We now have all the pieces in place to attempt an actual rebalancing
of the Credit2 scheduler runqueues, so let's go for it.

In short:
- every time we add or remove a CPU, especially considering the topology
  implications (e.g., we may have removed the last HT from a queue, so
  now there's space there for two CPUs, etc), we try to rebalance;
- rebalancing happens under the control of the cpupool_sync() mechanism.
  Basically, it happens from inside a tasklet, after having put the
  cpupool in a quiescent state;
- the new runqueue configuration may end up being either different from
  or identical to the current one. It would be good to have a way to
  check whether the result would be identical, in which case we could
  skip the balancing, but there is currently no way to do that.

Rebalancing, since it pauses all the domains of a pool, etc., is a
time-consuming operation. But it only happens when the cpupool configuration
is changed, so it is considered acceptable.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Juergen Gross <jgross@suse.com>
---
Changes from v1:
* new patch
---
 xen/common/sched/credit2.c |  208 ++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 207 insertions(+), 1 deletion(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index af6d374677..72d1961b1b 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3983,6 +3983,194 @@ init_pdata(struct csched2_private *prv, struct csched2_pcpu *spc,
     return rqd;
 }
 
+/*
+ * Let's get the hard work of rebalancing runqueues done.
+ *
+ * This function is called from a tasklet, spawned by cpupool_sync().
+ * We run in idle vcpu context and we can assume that all the vcpus of all
+ * the domains within this cpupool are stopped... So we are relatively free
+ * to manipulate the scheduler's runqueues.
+ */
+static void do_rebalance_runqueues(void *arg)
+{
+    const struct scheduler *ops = arg;
+    struct csched2_private *prv = csched2_priv(ops);
+    struct csched2_runqueue_data *rqd, *rqd_temp;
+    struct csched2_unit *csu, *csu_temp;
+    struct list_head rq_list, csu_list;
+    spinlock_t temp_lock;
+    unsigned long flags;
+    unsigned int cpu;
+
+    INIT_LIST_HEAD(&rq_list);
+    INIT_LIST_HEAD(&csu_list);
+    spin_lock_init(&temp_lock);
+
+    /*
+     * This is where we will temporarily re-route the locks of all the CPUs,
+     * while we take them outside of their current runqueue, and before adding
+     * them to their new ones. Let's just take it right away, so any sort of
+     * scheduling activity in any of them will stop at it, and won't race with
+     * us.
+     */
+    spin_lock_irq(&temp_lock);
+
+    /*
+     * Everyone is paused, so we don't have any unit in any runqueue. Still,
+     * units are each "assigned" to a runqueue, for debug dumps and for
+     * calculating and tracking the weights. Since the current runqueues are
+     * going away, we need to deassign every unit from its runqueue. We will
+     * put all of them back into one of the new runqueues, before the end.
+     */
+    write_lock(&prv->lock);
+    list_for_each_entry_safe ( rqd, rqd_temp, &prv->rql, rql )
+    {
+        spin_lock(&rqd->lock);
+        /*
+         * We're deassigning the units, but we don't want to lose track
+         * of them... Otherwise how do we do the re-assignment to the new
+         * runqueues? So, let's stash them in a list.
+         */
+        list_for_each_entry_safe ( csu, csu_temp, &rqd->svc, rqd_elem )
+        {
+            runq_deassign(csu->unit);
+            list_add(&csu->rqd_elem, &csu_list);
+        }
+
+        /*
+         * Now we want to prepare for getting rid of the runqueues as well.
+         * Each CPU has a pointer to the scheduler lock, which in case of
+         * Credit2 is the runqueue lock of the runqueue where the CPU is.
+         * But again, runqueues are vanishing, so let's re-route all such
+         * locks to our safe temporary solution that we introduced above.
+         */
+        for_each_cpu ( cpu, &rqd->active )
+            get_sched_res(cpu)->schedule_lock = &temp_lock;
+        spin_unlock(&rqd->lock);
+
+        /*
+         * And, finally, "dequeue the runqueues", one by one. Similarly to
+         * what we do with units, we need to park them in a temporary list.
+         * In this case, they are actually going away, but we need to do this
+         * because we can't free() them with IRQs disabled.
+         */
+        prv->active_queues--;
+        list_del(&rqd->rql);
+        list_add(&rqd->rql, &rq_list);
+    }
+    ASSERT(prv->active_queues == 0);
+    write_unlock(&prv->lock);
+
+    spin_unlock_irq(&temp_lock);
+
+    /*
+     * Since we have to drop the lock anyway (in order to be able to call
+     * cpu_add_to_runqueue() below), let's also get rid of the old runqueues,
+     * now that we can.
+     */
+    list_for_each_entry_safe ( rqd, rqd_temp, &rq_list, rql )
+    {
+        list_del(&rqd->rql);
+        xfree(rqd);
+    }
+    rqd = NULL;
+
+    /*
+     * We've got no lock! Well, this is still fine as:
+     * - the CPUs may, for some reason, try to schedule, and manage to do so,
+     *   taking turns on our global temporary spinlock. But there should be
+     *   nothing to schedule;
+     * - we are safe from more cpupool manipulations as cpupool_sync() owns
+     *   the cpupool_lock.
+     */
+
+    /*
+     * Now, for each CPU, we have to put them back in a runqueue. Of course,
+     * we have no runqueue any longer, so they'll be re-created. We basically
+     * follow pretty much the exact same path of when we add a CPU to a pool.
+     */
+    for_each_cpu ( cpu, &prv->initialized )
+    {
+        struct csched2_pcpu *spc = csched2_pcpu(cpu);
+
+        /*
+         * The new runqueues need to be allocated, and cpu_add_to_runqueue()
+         * takes care of that. We are, however, in a very delicate state, as
+         * we have destroyed all the previous runqueues. I.e., if an error
+         * (e.g., not enough memory) occurs here, there is no way we can
+         * go back to a sane previous state, so let's just crash.
+         *
+         * Note that, at this time, the number of CPUs we have in the
+         * "initialized" mask represents how many CPUs we have in this pool.
+         * So we can use it, for computing the balance, basically in the same
+         * way as we use num_online_cpus() at boot time.
+         */
+        rqd = cpu_add_to_runqueue(ops, cpumask_weight(&prv->initialized), cpu);
+        if ( IS_ERR(rqd) )
+        {
+            printk(XENLOG_ERR " Major problems while rebalancing the runqueues!\n");
+            BUG();
+        }
+        spc->rqd = rqd;
+
+        spin_lock_irq(&temp_lock);
+        write_lock(&prv->lock);
+
+        init_cpu_runqueue(prv, spc, cpu, rqd);
+
+        /*
+         * Bring the scheduler lock back to where it belongs, given the new
+         * runqueue, for the various CPUs. Barrier is there because we want
+         * all the runqueue initialization steps that we have made to be
+         * visible, exactly as it was for everyone that takes the lock
+         * (see the comment in common/sched/core.c:schedule_cpu_add() ).
+         */
+        smp_wmb();
+        get_sched_res(cpu)->schedule_lock = &rqd->lock;
+
+        write_unlock(&prv->lock);
+        spin_unlock_irq(&temp_lock);
+    }
+
+    /*
+     * And, finally, everything should be in place again. We can finalize the
+     * work by adding back the units in the runqueues' lists (picking them
+     * up from the temporary list we used). Note that it is not necessary to
+     * call csched2_res_pick(), for deciding on which runqueue to put each
+     * of them. The thing is:
+     *  - with the old runqueue, each entity was associated to a
+     *    sched_resource / CPU;
+     *  - they were assigned to the runqueue in which such CPU was;
+     *  - all the CPUs that were there, with the old runqueues, are still
+     *    here, although in different runqueues;
+     *  - we can just let the units be associated with the runqueues where
+     *    their CPU has gone.
+     *
+     *  This means that, even though the load was balanced with the previous
+     *  runqueues, it now most likely will not be. But this is not a big
+     *  deal as the load balancer will make things even again, given a little
+     *  time.
+     */
+    list_for_each_entry_safe ( csu, csu_temp, &csu_list, rqd_elem )
+    {
+        spinlock_t *lock;
+
+        lock = unit_schedule_lock_irqsave(csu->unit, &flags);
+        list_del_init(&csu->rqd_elem);
+        runq_assign(csu->unit);
+        unit_schedule_unlock_irqrestore(lock, flags, csu->unit);
+    }
+}
+
+static void rebalance_runqueues(const struct scheduler *ops)
+{
+    struct cpupool *c = ops->cpupool;
+
+    ASSERT(c->sched == ops);
+
+    cpupool_sync(c, do_rebalance_runqueues);
+}
+
 /* Change the scheduler of cpu to us (Credit2). */
 static spinlock_t *
 csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
@@ -4017,6 +4205,16 @@ csched2_switch_sched(struct scheduler *new_ops, unsigned int cpu,
      */
     ASSERT(get_sched_res(cpu)->schedule_lock != &rqd->lock);
 
+    /*
+     * We have added a CPU to the pool. Unless we are booting (in which
+     * case cpu_add_to_runqueue() balances the CPUs by itself), or we are in
+     * the very special case of still having fewer CPUs than how many we
+     * can put in just one runqueue, we need to try to rebalance.
+     */
+    if ( system_state >= SYS_STATE_active &&
+          cpumask_weight(&prv->initialized) > opt_max_cpus_runqueue )
+        rebalance_runqueues(new_ops);
+
     write_unlock(&prv->lock);
 
     return &rqd->lock;
@@ -4105,7 +4303,15 @@ csched2_free_pdata(const struct scheduler *ops, void *pcpu, int cpu)
     else
         rqd = NULL;
 
-    write_unlock_irqrestore(&prv->lock, flags);
+    /*
+     * Similarly to what said in csched2_switch_sched(), since we have just
+     * removed a CPU, it's good to check whether we can rebalance the
+     * runqueues.
+     */
+    if ( cpumask_weight(&prv->initialized) >= opt_max_cpus_runqueue )
+        rebalance_runqueues(ops);
+
+    write_unlock_irqrestore(&prv->lock, flags);
 
     xfree(rqd);
     xfree(pcpu);



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:30:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:30:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQ6o-0001Yv-6z; Thu, 28 May 2020 21:30:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oNcS=7K=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jeQ6n-0001YS-5B
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:30:37 +0000
X-Inumbo-ID: 77a57f42-a12a-11ea-8993-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 77a57f42-a12a-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 21:30:36 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: EusOiouMvTaXx6zJNG9zrV9DEYJA66X4YRaJTu37Vo6llw/tyaBW5sc2+NrQZ8bcuiLDFsS3ja
 QaZixovxj0H6a/3DPFL6fXa5EhywSZBpBVpGSdENY77lZYtJdBBW1cbIBlg/tADl9GVSqDfzHc
 IR1R7Ktkd/nomRp2ID/NdFEjhQembJhGqNs1NUQ9/eUovGESkyLXezi/vCdejNrtq/pevltHJv
 0DaYADBtnOS3fLuCGpO+7L+j965vZsJvK5V5TnCU3r9TERpSdakY+k3+mQceIBrRtL19iw4Cfz
 5Uk=
X-SBRS: 2.7
X-MesageID: 18707096
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,446,1583211600"; d="scan'208";a="18707096"
From: George Dunlap <George.Dunlap@citrix.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Subject: Call for agenda items for June XenProject Community Call @ 15:00 UTC
Thread-Topic: Call for agenda items for June XenProject Community Call @ 15:00
 UTC
Thread-Index: AQHWNTc1Ql8FFk3g90eJ8vkDj8k8tw==
Date: Thu, 28 May 2020 21:30:29 +0000
Message-ID: <48D62274-9C3A-46C6-B7A0-0B11BC224BC8@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <BC5BB855E3960B4BBE68C2225642209E@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Rian
 Quinn <rianquinn@gmail.com>, "Daniel P. Smith" <dpsmith@apertussolutions.com>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLRG91ZyBHb2xkc3RlaW4=?=
 <cardoe@cardoe.com>, Brian Woods <brian.woods@xilinx.com>,
 Rich Persaud <persaur@gmail.com>,
 "anastassios.nanos@onapp.com" <anastassios.nanos@onapp.com>,
 "mirela.simonovic@aggios.com" <mirela.simonovic@aggios.com>,
 "edgar.iglesias@xilinx.com" <edgar.iglesias@xilinx.com>, "Ji,
 John" <john.ji@intel.com>, "robin.randhawa@arm.com" <robin.randhawa@arm.com>,
 "daniel.kiper@oracle.com" <daniel.kiper@oracle.com>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLQW1pdCBTaGFo?= <amit@infradead.org>,
 Matt Spencer <Matt.Spencer@arm.com>, Bobby Eshleman <bobby.eshleman@gmail.com>,
 Robert Townley <rob.townley@gmail.com>, Artem Mygaiev <Artem_Mygaiev@epam.com>,
 =?utf-8?B?4oCL4oCL4oCL4oCL4oCL4oCL4oCLVmFyYWQgR2F1dGFt?=
 <varadgautam@gmail.com>, Julien Grall <julien@xen.org>,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>,
 Christopher Clark <christopher.w.clark@gmail.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <Ian.Jackson@citrix.com>,
 "vfachin@de.adit-jv.com" <vfachin@de.adit-jv.com>, Kevin
 Pearson <kevin.pearson@ortmanconsulting.com>,
 "intel-xen@intel.com" <intel-xen@intel.com>,
 Jarvis Roach <Jarvis.Roach@dornerworks.com>, Juergen Gross <jgross@suse.com>,
 Sergey Dyasli <sergey.dyasli@citrix.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Paul Durrant <pdurrant@amazon.com>,
 Jeff Kubascik <Jeff.Kubascik@dornerworks.com>, "Natarajan,
 Janakarajan" <jnataraj@amd.com>, Stewart
 Hildebrand <Stewart.Hildebrand@dornerworks.com>,
 Volodymyr Babchuk <volodymyr_babchuk@epam.com>, Olivier
 Lambert <olivier.lambert@vates.fr>, David
 Woodhouse <dwmw@amazon.co.uk>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

SGkgYWxsLA0KDQpUaGUgcHJvcG9zZWQgYWdlbmRhIGlzIGluIGh0dHBzOi8vY3J5cHRwYWQuZnIv
cGFkLyMvMi9wYWQvZWRpdC90T09LRTcyendJS09pVDA1cXVtZ0k5ZXkvIGFuZCB5b3UgY2FuIGVk
aXQgdG8gYWRkIGl0ZW1zLiAgQWx0ZXJuYXRpdmVseSwgeW91IGNhbiByZXBseSB0byB0aGlzIG1h
aWwgZGlyZWN0bHkuDQoNCkFnZW5kYSBpdGVtcyBhcHByZWNpYXRlZCBhIGZldyBkYXlzIGJlZm9y
ZSB0aGUgY2FsbDogcGxlYXNlIHB1dCB5b3VyIG5hbWUgYmVzaWRlcyBpdGVtcyBpZiB5b3UgZWRp
dCB0aGUgZG9jdW1lbnQuDQoNCk5vdGUgdGhlIGZvbGxvd2luZyBhZG1pbmlzdHJhdGl2ZSBjb252
ZW50aW9ucyBmb3IgdGhlIGNhbGw6DQoqIFVubGVzcywgYWdyZWVkIGluIHRoZSBwZXJ2aW91cyBt
ZWV0aW5nIG90aGVyd2lzZSwgdGhlIGNhbGwgaXMgb24gdGhlIDFzdCBUaHVyc2RheSBvZiBlYWNo
IG1vbnRoIGF0IDE2MDAgQnJpdGlzaCBUaW1lIChlaXRoZXIgR01UIG9yIEJTVCkNCiogSSB1c3Vh
bGx5IHNlbmQgb3V0IGEgbWVldGluZyByZW1pbmRlciBhIGZldyBkYXlzIGJlZm9yZSB3aXRoIGEg
cHJvdmlzaW9uYWwgYWdlbmRhDQoNCiogSWYgeW91IHdhbnQgdG8gYmUgQ0MnZWQgcGxlYXNlIGFk
ZCBvciByZW1vdmUgeW91cnNlbGYgZnJvbSB0aGUgc2lnbi11cC1zaGVldCBhdCBodHRwczovL2Ny
eXB0cGFkLmZyL3BhZC8jLzIvcGFkL2VkaXQvRDl2R3ppaFB4eEFPZTZSRlB6MHNSQ2YrLw0KDQpC
ZXN0IFJlZ2FyZHMNCkdlb3JnZQ0KDQo9PSBEaWFsLWluIEluZm9ybWF0aW9uID09DQojIyBNZWV0
aW5nIHRpbWUNCjE1OjAwIC0gMTY6MDAgVVRDIChkdXJpbmcgQlNUKQ0KRnVydGhlciBJbnRlcm5h
dGlvbmFsIG1lZXRpbmcgdGltZXM6IGh0dHBzOi8vd3d3LnRpbWVhbmRkYXRlLmNvbS93b3JsZGNs
b2NrL21lZXRpbmdkZXRhaWxzLmh0bWw/eWVhcj0yMDIwJm1vbnRoPTUmZGF5PTcmaG91cj0xNSZt
aW49MCZzZWM9MCZwMT0xMjM0JnAyPTM3JnAzPTIyNCZwND0xNzkNCg0KDQojIyBEaWFsIGluIGRl
dGFpbHMNCldlYjogaHR0cHM6Ly93d3cuZ290b21lZXQubWUvR2VvcmdlRHVubGFwDQoNCllvdSBj
YW4gYWxzbyBkaWFsIGluIHVzaW5nIHlvdXIgcGhvbmUuDQpBY2Nlc3MgQ29kZTogMTY4LTY4Mi0x
MDkNCg0KQ2hpbmEgKFRvbGwgRnJlZSk6IDQwMDggODExMDg0DQpHZXJtYW55OiArNDkgNjkyIDU3
MzYgNzMxNw0KUG9sYW5kIChUb2xsIEZyZWUpOiAwMCA4MDAgMTEyNDc1OQ0KVWtyYWluZSAoVG9s
bCBGcmVlKTogMCA4MDAgNTAgMTczMw0KVW5pdGVkIEtpbmdkb206ICs0NCAzMzAgMjIxIDAwODgN
ClVuaXRlZCBTdGF0ZXM6ICsxICg1NzEpIDMxNy0zMTI5DQpTcGFpbjogKzM0IDkzMiA3NSAyMDA0
DQoNCg0KTW9yZSBwaG9uZSBudW1iZXJzDQpBdXN0cmFsaWE6ICs2MSAyIDkwODcgMzYwNA0KQXVz
dHJpYTogKzQzIDcgMjA4MSA1NDI3DQpBcmdlbnRpbmEgKFRvbGwgRnJlZSk6IDAgODAwIDQ0NCAz
Mzc1DQpCYWhyYWluIChUb2xsIEZyZWUpOiA4MDAgODEgMTExDQpCZWxhcnVzIChUb2xsIEZyZWUp
OiA4IDgyMCAwMDExIDA0MDANCkJlbGdpdW06ICszMiAyOCA5MyA3MDE4DQpCcmF6aWwgKFRvbGwg
RnJlZSk6IDAgODAwIDA0NyA0OTA2DQpCdWxnYXJpYSAoVG9sbCBGcmVlKTogMDA4MDAgMTIwIDQ0
MTcNCkNhbmFkYTogKzEgKDY0NykgNDk3LTkzOTENCkNoaWxlIChUb2xsIEZyZWUpOiA4MDAgMzk1
IDE1MA0KQ29sb21iaWEgKFRvbGwgRnJlZSk6IDAxIDgwMCA1MTggNDQ4Mw0KQ3plY2ggUmVwdWJs
aWMgKFRvbGwgRnJlZSk6IDgwMCA1MDA0NDgNCkRlbm1hcms6ICs0NSAzMiA3MiAwMyA4Mg0KRmlu
bGFuZDogKzM1OCA5MjMgMTcgMDU2OA0KRnJhbmNlOiArMzMgMTcwIDk1MCA1OTQNCkdyZWVjZSAo
VG9sbCBGcmVlKTogMDAgODAwIDQ0MTQgMzgzOA0KSG9uZyBLb25nIChUb2xsIEZyZWUpOiAzMDcx
MzE2OTkwNi04ODYtOTY1DQpIdW5nYXJ5IChUb2xsIEZyZWUpOiAoMDYpIDgwIDk4NiAyNTUNCklj
ZWxhbmQgKFRvbGwgRnJlZSk6IDgwMCA3MjA0DQpJbmRpYSAoVG9sbCBGcmVlKTogMTgwMDI2Njky
NzINCkluZG9uZXNpYSAoVG9sbCBGcmVlKTogMDA3IDgwMyAwMjAgNTM3NQ0KSXJlbGFuZDogKzM1
MyAxNSAzNjAgNzI4DQpJc3JhZWwgKFRvbGwgRnJlZSk6IDEgODA5IDQ1NCA4MzANCkl0YWx5OiAr
MzkgMCAyNDcgOTIgMTMgMDENCkphcGFuIChUb2xsIEZyZWUpOiAwIDEyMCA2NjMgODAwDQpLb3Jl
YSwgUmVwdWJsaWMgb2YgKFRvbGwgRnJlZSk6IDAwNzk4IDE0IDIwNyA0OTE0DQpMdXhlbWJvdXJn
IChUb2xsIEZyZWUpOiA4MDAgODUxNTgNCk1hbGF5c2lhIChUb2xsIEZyZWUpOiAxIDgwMCA4MSA2
ODU0DQpNZXhpY28gKFRvbGwgRnJlZSk6IDAxIDgwMCA1MjIgMTEzMw0KTmV0aGVybGFuZHM6ICsz
MSAyMDcgOTQxIDM3Nw0KTmV3IFplYWxhbmQ6ICs2NCA5IDI4MCA2MzAyDQpOb3J3YXk6ICs0NyAy
MSA5MyAzNyA1MQ0KUGFuYW1hIChUb2xsIEZyZWUpOiAwMCA4MDAgMjI2IDc5MjgNClBlcnUgKFRv
bGwgRnJlZSk6IDAgODAwIDc3MDIzDQpQaGlsaXBwaW5lcyAoVG9sbCBGcmVlKTogMSA4MDAgMTEx
MCAxNjYxDQpQb3J0dWdhbCAoVG9sbCBGcmVlKTogODAwIDgxOSA1NzUNClJvbWFuaWEgKFRvbGwg
RnJlZSk6IDAgODAwIDQxMCAwMjkNClJ1c3NpYW4gRmVkZXJhdGlvbiAoVG9sbCBGcmVlKTogOCA4
MDAgMTAwIDYyMDMNClNhdWRpIEFyYWJpYSAoVG9sbCBGcmVlKTogODAwIDg0NCAzNjMzDQpTaW5n
YXBvcmUgKFRvbGwgRnJlZSk6IDE4MDA3MjMxMzIzDQpTb3V0aCBBZnJpY2EgKFRvbGwgRnJlZSk6
IDAgODAwIDU1NSA0NDcNClN3ZWRlbjogKzQ2IDg1MyA1MjcgODI3DQpTd2l0emVybGFuZDogKzQx
IDIyNSA0NTk5IDc4DQpUYWl3YW4gKFRvbGwgRnJlZSk6IDAgODAwIDY2NiA4NTQNClRoYWlsYW5k
IChUb2xsIEZyZWUpOiAwMDEgODAwIDAxMSAwMjMNClR1cmtleSAoVG9sbCBGcmVlKTogMDAgODAw
IDQ0ODggMjM2ODMNClVuaXRlZCBBcmFiIEVtaXJhdGVzIChUb2xsIEZyZWUpOiA4MDAgMDQ0IDQw
NDM5DQpVcnVndWF5IChUb2xsIEZyZWUpOiAwMDA0IDAxOSAxMDE4DQpWaWV0IE5hbSAoVG9sbCBG
cmVlKTogMTIyIDgwIDQ4MQ0K4oCL4oCL4oCL4oCL4oCL4oCL4oCLDQoNCkZpcnN0IEdvVG9NZWV0
aW5nPyBMZXQncyBkbyBhIHF1aWNrIHN5c3RlbSBjaGVjazoNCg0KaHR0cHM6Ly9saW5rLmdvdG9t
ZWV0aW5nLmNvbS9zeXN0ZW0tY2hlY2s=


From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQFu-0002Ao-9X; Thu, 28 May 2020 21:40:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQFs-00027q-2W
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:00 +0000
X-Inumbo-ID: c48cc116-a12b-11ea-9947-bc764e2007e4
Received: from forwardcorp1o.mail.yandex.net (unknown [95.108.205.193])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c48cc116-a12b-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 21:39:54 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1o.mail.yandex.net (Yandex) with ESMTP id 1C63E2E15FE;
 Fri, 29 May 2020 00:39:53 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 5g6CXdTwEO-dlf0DWeC; Fri, 29 May 2020 00:39:53 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590701993; bh=YGsXreK94O+yIKcs4mqo5YoWnIMbpywJxpUDcqMPHyM=;
 h=Message-Id:Date:Subject:To:From:Cc;
 b=ARN/Ahb1iWocNosGtI4XVj/i308SMpysWnO9d5NcrMDPN0Gfu0WcxG0+XXyk7TDT5
 sqH9gEp4qd8FIY3dQjC+/KVlI2L4r0lQUrWoyrPoYub0XOD3+jH9kiLqOJKBLAROLu
 4QjGmOmaG2DXqdSS8vNZy78uuku819U6oJvMEPls=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-dlXqUjYH; Fri, 29 May 2020 00:39:47 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 0/8] block: enhance handling of size-related BlockConf
 properties
Date: Fri, 29 May 2020 00:39:38 +0300
Message-Id: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

BlockConf includes several properties counted in bytes.=0D
=0D
Enhance their handling in some aspects, specifically=0D
=0D
- accept common size suffixes (k, m)=0D
- perform consistency checks on the values=0D
- lift the upper limit on physical_block_size and logical_block_size=0D
=0D
Also fix the accessor for opt_io_size in virtio-blk to make it consistent w=
ith=0D
the size of the field.=0D
=0D
History:=0D
v6 -> v7:=0D
- avoid overflow in min_io_size check [Eric]=0D
- try again to perform the art form in patch splitting [Eric]=0D
=0D
v5 -> v6:=0D
- fix forgotten xen-block and swim=0D
- add prop_size32 instead of going with 64bit=0D
=0D
v4 -> v5:=0D
- re-split the patches [Philippe]=0D
- fix/reword error messages [Philippe, Kevin]=0D
- do early return on failed consistency check [Philippe]=0D
- use QEMU_IS_ALIGNED instead of open coding [Philippe]=0D
- make all BlockConf size props support suffixes=0D
- expand the log for virtio-blk opt_io_size [Michael]=0D
=0D
v3 -> v4:=0D
- add patch to fix opt_io_size width in virtio-blk=0D
- add patch to perform consistency checks [Kevin]=0D
- check min_io_size against truncation [Kevin]=0D
=0D
v2 -> v3:=0D
- mention qcow2 cluster size limit in the log and comment [Eric]=0D
=0D
v1 -> v2:=0D
- cap the property at 2 MiB [Eric]=0D
- accept size suffixes=0D
=0D
Roman Kagan (8):=0D
  virtio-blk: store opt_io_size with correct size=0D
  block: consolidate blocksize properties consistency checks=0D
  qdev-properties: blocksize: use same limits in code and description=0D
  qdev-properties: add size32 property type=0D
  qdev-properties: make blocksize accept size suffixes=0D
  block: make BlockConf size props 32bit and accept size suffixes=0D
  qdev-properties: add getter for size32 and blocksize=0D
  block: lift blocksize property limit to 2 MiB=0D
=0D
 include/hw/block/block.h     |  14 +-=0D
 include/hw/qdev-properties.h |   5 +-=0D
 hw/block/block.c             |  41 ++-=0D
 hw/block/fdc.c               |   5 +-=0D
 hw/block/nvme.c              |   5 +-=0D
 hw/block/swim.c              |   5 +-=0D
 hw/block/virtio-blk.c        |   9 +-=0D
 hw/block/xen-block.c         |   6 +-=0D
 hw/core/qdev-properties.c    |  85 +++++-=0D
 hw/ide/qdev.c                |   5 +-=0D
 hw/scsi/scsi-disk.c          |  12 +-=0D
 hw/usb/dev-storage.c         |   5 +-=0D
 tests/qemu-iotests/172.out   | 532 +++++++++++++++++------------------=0D
 13 files changed, 420 insertions(+), 309 deletions(-)=0D
=0D
-- =0D
2.26.2=0D
=0D


From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQFy-0002hz-I2; Thu, 28 May 2020 21:40:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQFw-0002UC-So
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:04 +0000
X-Inumbo-ID: c8016bb2-a12b-11ea-9947-bc764e2007e4
Received: from forwardcorp1j.mail.yandex.net (unknown [2a02:6b8:0:1619::183])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8016bb2-a12b-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 21:40:00 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id E14EF2E0E4D;
 Fri, 29 May 2020 00:39:58 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 57WkjCjfeF-drfO2KuB; Fri, 29 May 2020 00:39:58 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590701998; bh=QhaZGhUGT//OQOhXA4U9vk1hEzoswycgPYq+LACJTvA=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=J1hsV3Zu3Kdk9x90+JIO4BvbwGSJO3/RF4PmBR4BmaXKQVKnQcKyItZ63s7RucraM
 CURaYPuDINuwrQULpzWQrWMzSkGQLZf249L/ihkF3TkWOuuHW6eEBstRKf1TXLJltz
 yU/dtNb2Jdb5G1Tu53A2QGg3GUdAA+fck6P5utdE=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-drXq1Pqx; Fri, 29 May 2020 00:39:53 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 1/8] virtio-blk: store opt_io_size with correct size
Date: Fri, 29 May 2020 00:39:39 +0300
Message-Id: <20200528213946.1636444-2-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The width of opt_io_size in virtio_blk_config is 32 bits.  However, it's
written with virtio_stw_p, a 16-bit accessor; this may result in value
truncation and, on big-endian systems with legacy virtio, in completely
bogus readings in the guest.

Use the appropriate accessor to store it.
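
The truncation can be reproduced outside QEMU.  The helpers below are my
own minimal little-endian stand-ins for the store/load accessors (not
QEMU's actual virtio_stw_p/virtio_stl_p, which also handle the legacy
big-endian byte order): a 16-bit store of a value needing more than 16
bits silently drops the upper bits.

```c
#include <stdint.h>

/* Minimal little-endian store/load helpers; hypothetical stand-ins for
 * QEMU's accessors when the device is little-endian. */
static void stw_le(uint8_t *p, uint32_t v)
{
    p[0] = v & 0xff;            /* only the low 16 bits survive */
    p[1] = (v >> 8) & 0xff;
}

static void stl_le(uint8_t *p, uint32_t v)
{
    p[0] = v & 0xff;
    p[1] = (v >> 8) & 0xff;
    p[2] = (v >> 16) & 0xff;
    p[3] = (v >> 24) & 0xff;
}

static uint32_t ldl_le(const uint8_t *p)
{
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}

/* Simulate writing a 32-bit config field (like opt_io_size) with a
 * 16-bit vs. a 32-bit store; 0x12345 does not fit in 16 bits. */
static uint32_t stored_value(int wide_store)
{
    uint8_t field[4] = {0};
    if (wide_store) {
        stl_le(field, 0x12345);
    } else {
        stw_le(field, 0x12345);
    }
    return ldl_le(field);
}
```

With the 16-bit store the guest would read back 0x2345 instead of
0x12345, which is the truncation the patch fixes.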

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
v4 -> v5:
- split out into separate patch [Philippe]

 hw/block/virtio-blk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f5f6fc925e..413083e62f 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -918,7 +918,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
     virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
     virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
     virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
-    virtio_stw_p(vdev, &blkcfg.opt_io_size, conf->opt_io_size / blk_size);
+    virtio_stl_p(vdev, &blkcfg.opt_io_size, conf->opt_io_size / blk_size);
     blkcfg.geometry.heads = conf->heads;
     /*
      * We must ensure that the block device capacity is a multiple of
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQG7-0002no-SU; Thu, 28 May 2020 21:40:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQG7-0002ng-CA
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:15 +0000
X-Inumbo-ID: ceeda756-a12b-11ea-9dbe-bc764e2007e4
Received: from forwardcorp1o.mail.yandex.net (unknown [2a02:6b8:0:1a2d::193])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ceeda756-a12b-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 21:40:12 +0000 (UTC)
Received: from mxbackcorp1j.mail.yandex.net (mxbackcorp1j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::162])
 by forwardcorp1o.mail.yandex.net (Yandex) with ESMTP id 4A91E2E1628;
 Fri, 29 May 2020 00:40:11 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp1j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 XInhed9mIK-dxeurjwL; Fri, 29 May 2020 00:40:11 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590702011; bh=kVY6FV62QcfCHwxFdjqYrHINjsHStH5S4jKjvLIimMs=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=dAOJ387Wr89LaCgO8aZBSZj8XlkSMTV3AooKBBL/EdBlxV9pMbY8nOpjz6FEtPuzA
 H7dAlIgS/nPvOhL9SHHTNGQF5uMK4v13qMLPlyVOofUzLmuWzIaCunO2IATxqmVP0A
 H+Li2Jbe/NBZGy9JBi5xc/q3AmH0zIx6qVfdOM00=
Authentication-Results: mxbackcorp1j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-dxXqUetq; Fri, 29 May 2020 00:39:59 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 2/8] block: consolidate blocksize properties consistency
 checks
Date: Fri, 29 May 2020 00:39:40 +0300
Message-Id: <20200528213946.1636444-3-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Several block device properties related to blocksize configuration must
satisfy certain relationships with respect to each other: the physical
block size must be no smaller than the logical block size, and
min_io_size, opt_io_size, and discard_granularity must each be a
multiple of the logical block size.

To ensure these requirements are met, add corresponding consistency
checks to blkconf_blocksizes, adjusting its signature to communicate a
possible error to the caller.  Also remove the now-redundant consistency
checks from the individual devices.
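
The relationships above can be sketched as a standalone predicate.  This
is a hypothetical condensed version, not the actual blkconf_blocksizes
code; IS_ALIGNED mirrors QEMU's QEMU_IS_ALIGNED and, like it, assumes
the alignment value is a power of two.

```c
#include <stdbool.h>
#include <stdint.h>

/* Power-of-two alignment check in the spirit of QEMU's QEMU_IS_ALIGNED. */
#define IS_ALIGNED(n, align) (((n) & ((align) - 1)) == 0)

/* Condensed sketch of the consistency checks the patch adds
 * (error reporting via Error **errp omitted). */
static bool blocksizes_consistent(uint32_t logical_block_size,
                                  uint32_t physical_block_size,
                                  uint32_t min_io_size, uint32_t opt_io_size,
                                  uint32_t discard_granularity)
{
    /* physical block must be no smaller than logical block */
    if (logical_block_size > physical_block_size) {
        return false;
    }
    /* I/O size hints must be whole multiples of the logical block */
    if (!IS_ALIGNED(min_io_size, logical_block_size) ||
        !IS_ALIGNED(opt_io_size, logical_block_size)) {
        return false;
    }
    /* discard_granularity of -1 means "unset" and is skipped */
    if (discard_granularity != (uint32_t)-1 &&
        !IS_ALIGNED(discard_granularity, logical_block_size)) {
        return false;
    }
    return true;
}
```

For example, logical=512/physical=4096 with 4096-byte I/O hints passes,
while logical=4096/physical=512 or a 700-byte min_io_size is rejected.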

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
v5 -> v6:
- fix forgotten xen-block and swim

v4 -> v5:
- fix/reword error messages [Philippe, Kevin]
- do early return on failed consistency check [Philippe]
- use QEMU_IS_ALIGNED instead of open coding [Philippe]

 include/hw/block/block.h   |  2 +-
 hw/block/block.c           | 30 +++++++++++++++++++++++++++++-
 hw/block/fdc.c             |  5 ++++-
 hw/block/nvme.c            |  5 ++++-
 hw/block/swim.c            |  5 ++++-
 hw/block/virtio-blk.c      |  7 +------
 hw/block/xen-block.c       |  6 +-----
 hw/ide/qdev.c              |  5 ++++-
 hw/scsi/scsi-disk.c        | 12 +++++-------
 hw/usb/dev-storage.c       |  5 ++++-
 tests/qemu-iotests/172.out |  2 +-
 11 files changed, 58 insertions(+), 26 deletions(-)

diff --git a/include/hw/block/block.h b/include/hw/block/block.h
index d7246f3862..784953a237 100644
--- a/include/hw/block/block.h
+++ b/include/hw/block/block.h
@@ -87,7 +87,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
 bool blkconf_geometry(BlockConf *conf, int *trans,
                       unsigned cyls_max, unsigned heads_max, unsigned secs_max,
                       Error **errp);
-void blkconf_blocksizes(BlockConf *conf);
+bool blkconf_blocksizes(BlockConf *conf, Error **errp);
 bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
                                    bool resizable, Error **errp);
 
diff --git a/hw/block/block.c b/hw/block/block.c
index bf56c7612b..b22207c921 100644
--- a/hw/block/block.c
+++ b/hw/block/block.c
@@ -61,7 +61,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
     return true;
 }
 
-void blkconf_blocksizes(BlockConf *conf)
+bool blkconf_blocksizes(BlockConf *conf, Error **errp)
 {
     BlockBackend *blk = conf->blk;
     BlockSizes blocksizes;
@@ -83,6 +83,34 @@ void blkconf_blocksizes(BlockConf *conf)
             conf->logical_block_size = BDRV_SECTOR_SIZE;
         }
     }
+
+    if (conf->logical_block_size > conf->physical_block_size) {
+        error_setg(errp,
+                   "logical_block_size > physical_block_size not supported");
+        return false;
+    }
+
+    if (!QEMU_IS_ALIGNED(conf->min_io_size, conf->logical_block_size)) {
+        error_setg(errp,
+                   "min_io_size must be a multiple of logical_block_size");
+        return false;
+    }
+
+    if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
+        error_setg(errp,
+                   "opt_io_size must be a multiple of logical_block_size");
+        return false;
+    }
+
+    if (conf->discard_granularity != -1 &&
+        !QEMU_IS_ALIGNED(conf->discard_granularity,
+                         conf->logical_block_size)) {
+        error_setg(errp, "discard_granularity must be "
+                   "a multiple of logical_block_size");
+        return false;
+    }
+
+    return true;
 }
 
 bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index c5fb9d6ece..8eda572ef4 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -554,7 +554,10 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
         read_only = !blk_bs(dev->conf.blk) || blk_is_read_only(dev->conf.blk);
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512 ||
         dev->conf.physical_block_size != 512)
     {
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 2f3100e56c..672650e162 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1390,7 +1390,10 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
         host_memory_backend_set_mapped(n->pmrdev, true);
     }
 
-    blkconf_blocksizes(&n->conf);
+    if (!blkconf_blocksizes(&n->conf, errp)) {
+        return;
+    }
+
     if (!blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk),
                                        false, errp)) {
         return;
diff --git a/hw/block/swim.c b/hw/block/swim.c
index 8f124782f4..74f56e8f46 100644
--- a/hw/block/swim.c
+++ b/hw/block/swim.c
@@ -189,7 +189,10 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
         assert(ret == 0);
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512 ||
         dev->conf.physical_block_size != 512)
     {
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 413083e62f..4ffdb130be 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1162,12 +1162,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&conf->conf);
-
-    if (conf->conf.logical_block_size >
-        conf->conf.physical_block_size) {
-        error_setg(errp,
-                   "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(&conf->conf, errp)) {
         return;
     }
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 570489d6d9..e17fec50e1 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -239,11 +239,7 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(conf);
-
-    if (conf->logical_block_size > conf->physical_block_size) {
-        error_setg(
-            errp, "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(conf, errp)) {
         return;
     }
 
diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
index 06b11583f5..b4821b2403 100644
--- a/hw/ide/qdev.c
+++ b/hw/ide/qdev.c
@@ -187,7 +187,10 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512) {
         error_setg(errp, "logical_block_size must be 512 for IDE");
         return;
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 387503e11b..8ce68a9dd6 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2346,12 +2346,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&s->qdev.conf);
-
-    if (s->qdev.conf.logical_block_size >
-        s->qdev.conf.physical_block_size) {
-        error_setg(errp,
-                   "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
         return;
     }
 
@@ -2436,7 +2431,9 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
     if (s->qdev.conf.blk) {
         ctx = blk_get_aio_context(s->qdev.conf.blk);
         aio_context_acquire(ctx);
-        blkconf_blocksizes(&s->qdev.conf);
+        if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
+            goto out;
+        }
     }
     s->qdev.blocksize = s->qdev.conf.logical_block_size;
     s->qdev.type = TYPE_DISK;
@@ -2444,6 +2441,7 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
         s->product = g_strdup("QEMU HARDDISK");
     }
     scsi_realize(&s->qdev, errp);
+out:
     if (ctx) {
         aio_context_release(ctx);
     }
diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
index 4eba47538d..de461f37bd 100644
--- a/hw/usb/dev-storage.c
+++ b/hw/usb/dev-storage.c
@@ -599,7 +599,10 @@ static void usb_msd_storage_realize(USBDevice *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&s->conf);
+    if (!blkconf_blocksizes(&s->conf, errp)) {
+        return;
+    }
+
     if (!blkconf_apply_backend_options(&s->conf, blk_is_read_only(blk), true,
                                        errp)) {
         return;
diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
index 7abbe82427..59cc70aebb 100644
--- a/tests/qemu-iotests/172.out
+++ b/tests/qemu-iotests/172.out
@@ -1204,7 +1204,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
                 drive-type = "144"
 
 Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical_block_size=4096
-QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: Physical and logical block size must be 512 for floppy
+QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: logical_block_size > physical_block_size not supported
 
 Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physical_block_size=1024
 QEMU_PROG: -device floppy,drive=none0,physical_block_size=1024: Physical and logical block size must be 512 for floppy
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQGE-0002q3-4s; Thu, 28 May 2020 21:40:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQGD-0002pi-8R
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:21 +0000
X-Inumbo-ID: d2d25b1e-a12b-11ea-9947-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown
 [2a02:6b8:0:1472:2741:0:8b6:217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2d25b1e-a12b-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 21:40:18 +0000 (UTC)
Received: from mxbackcorp1g.mail.yandex.net (mxbackcorp1g.mail.yandex.net
 [IPv6:2a02:6b8:0:1402::301])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id A2AEC2E094D;
 Fri, 29 May 2020 00:40:17 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp1g.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 XyXqfhV9ak-eBImwSEL; Fri, 29 May 2020 00:40:17 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590702017; bh=e/snD3eRLt3OD/77mBucMy84xJYC36ARRHwF7w0zqIc=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=a61iP8p1OPQXlCsL4kqDkdM1sPBrQA9F1w+ycRSumQdjV4xDstR5Ny80gfxO/6esp
 /FcUhqt6Jujttn9Mm77h/cHfZ5Mv6IY0LDDhTjwmgpFyFO7ZrjYnwed7WcpG6NBBba
 HAkjADgLWt15cvwTlBp0FcTWXNGN8BqNNOC1Cx5g=
Authentication-Results: mxbackcorp1g.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-eBXqvQYw; Fri, 29 May 2020 00:40:11 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 3/8] qdev-properties: blocksize: use same limits in code
 and description
Date: Fri, 29 May 2020 00:39:41 +0300
Message-Id: <20200528213946.1636444-4-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Make it easier (and more visible) to keep the limits on the blocksize
properties in sync with their description, by using the same macros both
in the code and in the description.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
v4 -> v5:
- split out into separate patch [Philippe]

 hw/core/qdev-properties.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index cc924815da..249dc69bd8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -729,6 +729,13 @@ const PropertyInfo qdev_prop_pci_devfn = {
 
 /* --- blocksize --- */
 
+/* lower limit is sector size */
+#define MIN_BLOCK_SIZE          512
+#define MIN_BLOCK_SIZE_STR      stringify(MIN_BLOCK_SIZE)
+/* upper limit is the max power of 2 that fits in uint16_t */
+#define MAX_BLOCK_SIZE          32768
+#define MAX_BLOCK_SIZE_STR      stringify(MAX_BLOCK_SIZE)
+
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
@@ -736,8 +743,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     Property *prop = opaque;
     uint16_t value, *ptr = qdev_get_prop_ptr(dev, prop);
     Error *local_err = NULL;
-    const int64_t min = 512;
-    const int64_t max = 32768;
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -750,9 +755,12 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         return;
     }
     /* value of 0 means "unset" */
-    if (value && (value < min || value > max)) {
-        error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
-                   dev->id ? : "", name, (int64_t)value, min, max);
+    if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
+        error_setg(errp,
+                   "Property %s.%s doesn't take value %" PRIu16
+                   " (minimum: " MIN_BLOCK_SIZE_STR
+                   ", maximum: " MAX_BLOCK_SIZE_STR ")",
+                   dev->id ? : "", name, value);
         return;
     }
 
@@ -769,7 +777,8 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 
 const PropertyInfo qdev_prop_blocksize = {
     .name  = "uint16",
-    .description = "A power of two between 512 and 32768",
+    .description = "A power of two between " MIN_BLOCK_SIZE_STR
+                   " and " MAX_BLOCK_SIZE_STR,
     .get   = get_uint16,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQGI-0002rb-E9; Thu, 28 May 2020 21:40:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQGG-0002r3-Mk
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:24 +0000
X-Inumbo-ID: d520d756-a12b-11ea-a842-12813bfff9fa
Received: from forwardcorp1j.mail.yandex.net (unknown [5.45.199.163])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d520d756-a12b-11ea-a842-12813bfff9fa;
 Thu, 28 May 2020 21:40:22 +0000 (UTC)
Received: from mxbackcorp1j.mail.yandex.net (mxbackcorp1j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::162])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id BA59E2E0E4D;
 Fri, 29 May 2020 00:40:21 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp1j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 cApy1ma1ft-eIeujrcV; Fri, 29 May 2020 00:40:21 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590702021; bh=7tQKIjaOXt/4Y7vYYUMOTnckyEJyyiKNNu3wuyOniCw=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=IyJ61abDZvYdolV88vhvsRf/qW/U9Jap3J3kd6DM7xB7m+T+ajR/cSZy9vxxpNxa2
 ZxymbfVphokM0ohXrOdASuTtNlN1PdAbADJyPpV2CZkMzy2juLJPZeOcM5N3AgbSSK
 PYCYZwm6nmz+tuIaEtq3V6JA4jSnR+uDVG77Scik=
Authentication-Results: mxbackcorp1j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-eHXqcO0n; Fri, 29 May 2020 00:40:18 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 4/8] qdev-properties: add size32 property type
Date: Fri, 29 May 2020 00:39:42 +0300
Message-Id: <20200528213946.1636444-5-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce a size32 property type which handles size suffixes (k, m) just
like the size property, but is uint32_t rather than uint64_t.  It will
be useful for properties that are byte sizes but are inherently 32-bit,
like BlockConf.opt_io_size or .discard_granularity (they are switched to
this new property type in a follow-up commit).

The getter for size32 is left for a separate patch as its benefit is
less obvious, and it affects test output; for now the regular uint32
getter is used.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v6 -> v7:
- split out into separate patch [Eric]

 include/hw/qdev-properties.h |  3 +++
 hw/core/qdev-properties.c    | 40 ++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index f161604fb6..c03eadfad6 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -29,6 +29,7 @@ extern const PropertyInfo qdev_prop_drive;
 extern const PropertyInfo qdev_prop_drive_iothread;
 extern const PropertyInfo qdev_prop_netdev;
 extern const PropertyInfo qdev_prop_pci_devfn;
+extern const PropertyInfo qdev_prop_size32;
 extern const PropertyInfo qdev_prop_blocksize;
 extern const PropertyInfo qdev_prop_pci_host_devaddr;
 extern const PropertyInfo qdev_prop_uuid;
@@ -196,6 +197,8 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
                         BlockdevOnError)
 #define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
     DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
+#define DEFINE_PROP_SIZE32(_n, _s, _f, _d)                       \
+    DEFINE_PROP_UNSIGNED(_n, _s, _f, _d, qdev_prop_size32, uint32_t)
 #define DEFINE_PROP_BLOCKSIZE(_n, _s, _f) \
     DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint16_t)
 #define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 249dc69bd8..d943755832 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -727,6 +727,46 @@ const PropertyInfo qdev_prop_pci_devfn = {
     .set_default_value = set_default_value_int,
 };
 
+/* --- 32bit unsigned int 'size' type --- */
+
+static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
+                       Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value;
+    Error *local_err = NULL;
+
+    if (dev->realized) {
+        qdev_prop_set_after_realize(dev, name, errp);
+        return;
+    }
+
+    visit_type_size(v, name, &value, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    if (value > UINT32_MAX) {
+        error_setg(errp,
+                   "Property %s.%s doesn't take value %" PRIu64
+                   " (maximum: " stringify(UINT32_MAX) ")",
+                   dev->id ? : "", name, value);
+        return;
+    }
+
+    *ptr = value;
+}
+
+const PropertyInfo qdev_prop_size32 = {
+    .name  = "size",
+    .get = get_uint32,
+    .set = set_size32,
+    .set_default_value = set_default_value_uint,
+};
+
 /* --- blocksize --- */
 
 /* lower limit is sector size */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQGN-0002up-So; Thu, 28 May 2020 21:40:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQGN-0002uN-1x
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:31 +0000
X-Inumbo-ID: d91c0f24-a12b-11ea-a842-12813bfff9fa
Received: from forwardcorp1j.mail.yandex.net (unknown [5.45.199.163])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d91c0f24-a12b-11ea-a842-12813bfff9fa;
 Thu, 28 May 2020 21:40:29 +0000 (UTC)
Received: from mxbackcorp1g.mail.yandex.net (mxbackcorp1g.mail.yandex.net
 [IPv6:2a02:6b8:0:1402::301])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id 679A02E0E4D;
 Fri, 29 May 2020 00:40:28 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp1g.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 PNbNuCHaNu-eMICwYct; Fri, 29 May 2020 00:40:28 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590702028; bh=5RSCVs6L6zfegAd6aySYoeQzRtcXoc5ATSUz8Gh7fPk=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=kVYWGWsNParhYuG6ULSCFMbhl+gxJT6H8x1ZAyazb5XsNyFSE+XPP7uOEihSqQzUh
 s/6GsoCcf7GOlwuidJS7mwQ7ckF7WyaHiWOwi6qsOFo8xEhwJQkl9yA/L1hOmAQljZ
 76CeB5+6F8X6R6RrQssMuyLtdgMHC9f2LhsTUVXo=
Authentication-Results: mxbackcorp1g.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-eLXqFHpn; Fri, 29 May 2020 00:40:22 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 5/8] qdev-properties: make blocksize accept size suffixes
Date: Fri, 29 May 2020 00:39:43 +0300
Message-Id: <20200528213946.1636444-6-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

It appears convenient to be able to specify physical_block_size and
logical_block_size using common size suffixes.

Teach the blocksize property setter to interpret them.  Also express the
upper and lower limits in the respective units.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v6 -> v7:
- split out into separate patch [Eric]

 hw/core/qdev-properties.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index d943755832..a79062b428 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -14,6 +14,7 @@
 #include "qapi/visitor.h"
 #include "chardev/char.h"
 #include "qemu/uuid.h"
+#include "qemu/units.h"
 
 void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
                                   Error **errp)
@@ -771,17 +772,18 @@ const PropertyInfo qdev_prop_size32 = {
 
 /* lower limit is sector size */
 #define MIN_BLOCK_SIZE          512
-#define MIN_BLOCK_SIZE_STR      stringify(MIN_BLOCK_SIZE)
+#define MIN_BLOCK_SIZE_STR      "512 B"
 /* upper limit is the max power of 2 that fits in uint16_t */
-#define MAX_BLOCK_SIZE          32768
-#define MAX_BLOCK_SIZE_STR      stringify(MAX_BLOCK_SIZE)
+#define MAX_BLOCK_SIZE          (32 * KiB)
+#define MAX_BLOCK_SIZE_STR      "32 KiB"
 
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value;
     Error *local_err = NULL;
 
     if (dev->realized) {
@@ -789,7 +791,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         return;
     }
 
-    visit_type_uint16(v, name, &value, &local_err);
+    visit_type_size(v, name, &value, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         return;
@@ -797,7 +799,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     /* value of 0 means "unset" */
     if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
         error_setg(errp,
-                   "Property %s.%s doesn't take value %" PRIu16
+                   "Property %s.%s doesn't take value %" PRIu64
                    " (minimum: " MIN_BLOCK_SIZE_STR
                    ", maximum: " MAX_BLOCK_SIZE_STR ")",
                    dev->id ? : "", name, value);
@@ -816,7 +818,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 }
 
 const PropertyInfo qdev_prop_blocksize = {
-    .name  = "uint16",
+    .name  = "size",
     .description = "A power of two between " MIN_BLOCK_SIZE_STR
                    " and " MAX_BLOCK_SIZE_STR,
     .get   = get_uint16,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQGS-0002xQ-5R; Thu, 28 May 2020 21:40:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQGR-0002x4-H0
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:35 +0000
X-Inumbo-ID: dbf67356-a12b-11ea-a842-12813bfff9fa
Received: from forwardcorp1p.mail.yandex.net (unknown [77.88.29.217])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dbf67356-a12b-11ea-a842-12813bfff9fa;
 Thu, 28 May 2020 21:40:34 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 368372E094D;
 Fri, 29 May 2020 00:40:33 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 6B27twZHb8-eSfuTBtm; Fri, 29 May 2020 00:40:33 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590702033; bh=RswyRtwtBWdULttAfpPdwBoW/nGa0Ey4G3dbrsYzIC8=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=KwxoPh7NG4LGf33hEYMh1vMRub1HxR6OQDRFY9c+JmZ0VHWsfr4zRoFButnR8ThP9
 97rZD1y420S4Hmiz9L6aecEAxN435n5FqK13YyO/7OC9QYB7BtiIKh2QHsCg9iYC8c
 A+vHMFWr1pNY1n36DJxND1Jqv1VrEwJ7Ax1j75y0=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-eSXqFT6u; Fri, 29 May 2020 00:40:28 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 6/8] block: make BlockConf size props 32bit and accept size
 suffixes
Date: Fri, 29 May 2020 00:39:44 +0300
Message-Id: <20200528213946.1636444-7-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Convert all size-related properties in BlockConf to 32-bit.  This will
allow accommodating bigger block sizes (in a follow-up patch).
It also lets them all accept size suffixes, either via
DEFINE_PROP_BLOCKSIZE or via DEFINE_PROP_SIZE32.

Also, since min_io_size is exposed to the guest by scsi and virtio-blk
devices as a uint16_t in units of logical blocks, introduce an
additional check in blkconf_blocksizes to prevent its silent truncation.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v6 -> v7:
- split out into separate patch [Eric]
- avoid overflow in min_io_size check [Eric]

 include/hw/block/block.h     | 12 ++++++------
 include/hw/qdev-properties.h |  2 +-
 hw/block/block.c             | 11 +++++++++++
 hw/core/qdev-properties.c    |  4 ++--
 4 files changed, 20 insertions(+), 9 deletions(-)

diff --git a/include/hw/block/block.h b/include/hw/block/block.h
index 784953a237..1e8b6253dd 100644
--- a/include/hw/block/block.h
+++ b/include/hw/block/block.h
@@ -18,9 +18,9 @@
 
 typedef struct BlockConf {
     BlockBackend *blk;
-    uint16_t physical_block_size;
-    uint16_t logical_block_size;
-    uint16_t min_io_size;
+    uint32_t physical_block_size;
+    uint32_t logical_block_size;
+    uint32_t min_io_size;
     uint32_t opt_io_size;
     int32_t bootindex;
     uint32_t discard_granularity;
@@ -51,9 +51,9 @@ static inline unsigned int get_physical_block_exp(BlockConf *conf)
                           _conf.logical_block_size),                    \
     DEFINE_PROP_BLOCKSIZE("physical_block_size", _state,                \
                           _conf.physical_block_size),                   \
-    DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0),    \
-    DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0),    \
-    DEFINE_PROP_UINT32("discard_granularity", _state,                   \
+    DEFINE_PROP_SIZE32("min_io_size", _state, _conf.min_io_size, 0),    \
+    DEFINE_PROP_SIZE32("opt_io_size", _state, _conf.opt_io_size, 0),    \
+    DEFINE_PROP_SIZE32("discard_granularity", _state,                   \
                        _conf.discard_granularity, -1),                  \
     DEFINE_PROP_ON_OFF_AUTO("write-cache", _state, _conf.wce,           \
                             ON_OFF_AUTO_AUTO),                          \
diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index c03eadfad6..5252bb6b1a 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -200,7 +200,7 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
 #define DEFINE_PROP_SIZE32(_n, _s, _f, _d)                       \
     DEFINE_PROP_UNSIGNED(_n, _s, _f, _d, qdev_prop_size32, uint32_t)
 #define DEFINE_PROP_BLOCKSIZE(_n, _s, _f) \
-    DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint16_t)
+    DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint32_t)
 #define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
     DEFINE_PROP(_n, _s, _f, qdev_prop_pci_host_devaddr, PCIHostDeviceAddress)
 #define DEFINE_PROP_OFF_AUTO_PCIBAR(_n, _s, _f, _d) \
diff --git a/hw/block/block.c b/hw/block/block.c
index b22207c921..7410b24dee 100644
--- a/hw/block/block.c
+++ b/hw/block/block.c
@@ -96,6 +96,17 @@ bool blkconf_blocksizes(BlockConf *conf, Error **errp)
         return false;
     }
 
+    /*
+     * all devices which support min_io_size (scsi and virtio-blk) expose it to
+     * the guest as a uint16_t in units of logical blocks
+     */
+    if (conf->min_io_size / conf->logical_block_size > UINT16_MAX) {
+        error_setg(errp,
+                   "min_io_size must not exceed " stringify(UINT16_MAX)
+                   " logical blocks");
+        return false;
+    }
+
     if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
         error_setg(errp,
                    "opt_io_size must be a multiple of logical_block_size");
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index a79062b428..3cbe3f56a8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -782,7 +782,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -821,7 +821,7 @@ const PropertyInfo qdev_prop_blocksize = {
     .name  = "size",
     .description = "A power of two between " MIN_BLOCK_SIZE_STR
                    " and " MAX_BLOCK_SIZE_STR,
-    .get   = get_uint16,
+    .get   = get_uint32,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
 };
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQGZ-00031V-Ea; Thu, 28 May 2020 21:40:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQGX-00030V-Mx
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:41 +0000
X-Inumbo-ID: defc5566-a12b-11ea-81bc-bc764e2007e4
Received: from forwardcorp1j.mail.yandex.net (unknown [5.45.199.163])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id defc5566-a12b-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 21:40:39 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id 45A832E0E4D;
 Fri, 29 May 2020 00:40:38 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 3z677wbNTe-eXfGAqPR; Fri, 29 May 2020 00:40:38 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590702038; bh=Xz7jW75qNhpFVuRV01Pk9lJa6te4Bxqql0EsizkrxTU=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=aa9r5LeBt1dZoSoHCzm7DsFEbKay4sCQSO4XVOTcKvDtqlS/3xyG51daRWUprygk/
 IKN7DD8DHVu5oEI8TTdII34upiI1idGFbLgW+yLJ4sYrlB0CY3BZ4Sr/g9ukkY3Egy
 4EC3E+Teqeh2keDgtvz8RmoF6z1UcVOq7x5XsQmw=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-eXXqGFjw; Fri, 29 May 2020 00:40:33 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 7/8] qdev-properties: add getter for size32 and blocksize
Date: Fri, 29 May 2020 00:39:45 +0300
Message-Id: <20200528213946.1636444-8-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a getter for size32, and use it for blocksize, too.

When producing human-readable output, it reports the approximate size in
human-readable units next to the exact byte value, just as the getter
for the 64-bit size does.

Adjust the expected test output accordingly.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v6 -> v7:
- split out into separate patch [Eric]

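[Note for reviewers, not part of the commit: the test-output churn below is
just the parenthesized rendering changing from hex to an approximate
IEC-unit size. A minimal sketch of that kind of rendering (this is an
illustration only, not QEMU's actual size-formatting code, and
`iec_round` is a hypothetical name) looks like:

```python
def iec_round(n: int) -> str:
    """Render a byte count as an approximate IEC-unit string,
    e.g. 512 -> '512 B', 4294967295 -> '4 GiB'."""
    units = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]
    value = float(n)
    idx = 0
    # Divide down by 1024 until the value fits in the current unit.
    while value >= 1024 and idx < len(units) - 1:
        value /= 1024.0
        idx += 1
    # Three significant digits, so 3.9999... renders as "4".
    return f"{value:.3g} {units[idx]}"

print(iec_round(512))         # 512 B
print(iec_round(0))           # 0 B
print(iec_round(4294967295))  # 4 GiB
```

This matches the new expected output in 172.out, where
discard_granularity = 4294967295 is shown as "(4 GiB)".]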
 hw/core/qdev-properties.c  |  15 +-
 tests/qemu-iotests/172.out | 530 ++++++++++++++++++-------------------
 2 files changed, 278 insertions(+), 267 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 3cbe3f56a8..8f35d494a4 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -730,6 +730,17 @@ const PropertyInfo qdev_prop_pci_devfn = {
 
 /* --- 32bit unsigned int 'size' type --- */
 
+static void get_size32(Object *obj, Visitor *v, const char *name, void *opaque,
+                       Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value = *ptr;
+
+    visit_type_size(v, name, &value, errp);
+}
+
 static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
@@ -763,7 +774,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
 
 const PropertyInfo qdev_prop_size32 = {
     .name  = "size",
-    .get = get_uint32,
+    .get = get_size32,
     .set = set_size32,
     .set_default_value = set_default_value_uint,
 };
@@ -821,7 +832,7 @@ const PropertyInfo qdev_prop_blocksize = {
     .name  = "size",
     .description = "A power of two between " MIN_BLOCK_SIZE_STR
                    " and " MAX_BLOCK_SIZE_STR,
-    .get   = get_uint32,
+    .get   = get_size32,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
 };
diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
index 59cc70aebb..e782c5957e 100644
--- a/tests/qemu-iotests/172.out
+++ b/tests/qemu-iotests/172.out
@@ -24,11 +24,11 @@ Testing:
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -54,11 +54,11 @@ Testing: -fda TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -81,22 +81,22 @@ Testing: -fdb TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -119,22 +119,22 @@ Testing: -fda TEST_DIR/t.qcow2 -fdb TEST_DIR/t.qcow2.2
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -160,11 +160,11 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -187,22 +187,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2,index=1
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -225,22 +225,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=floppy,file=TEST_DIR/t
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -266,11 +266,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveA=none0
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -293,11 +293,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveB=none0
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -320,22 +320,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -361,11 +361,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -388,11 +388,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,unit=1
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -415,22 +415,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -456,22 +456,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -494,22 +494,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -532,11 +532,11 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -559,11 +559,11 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -589,22 +589,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -627,22 +627,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -665,22 +665,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -703,22 +703,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -750,22 +750,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -788,22 +788,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -832,22 +832,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -870,22 +870,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -908,22 +908,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -946,22 +946,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -999,11 +999,11 @@ Testing: -device floppy
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1026,11 +1026,11 @@ Testing: -device floppy,drive-type=120
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1053,11 +1053,11 @@ Testing: -device floppy,drive-type=144
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1080,11 +1080,11 @@ Testing: -device floppy,drive-type=288
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1110,11 +1110,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1137,11 +1137,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1167,11 +1167,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1194,11 +1194,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:40:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:40:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQGe-00035I-09; Thu, 28 May 2020 21:40:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeQGc-00034O-NF
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:40:46 +0000
X-Inumbo-ID: e0dcd298-a12b-11ea-81bc-bc764e2007e4
Received: from forwardcorp1j.mail.yandex.net (unknown [2a02:6b8:0:1619::183])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0dcd298-a12b-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 21:40:42 +0000 (UTC)
Received: from mxbackcorp1o.mail.yandex.net (mxbackcorp1o.mail.yandex.net
 [IPv6:2a02:6b8:0:1a2d::301])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id 6BB102E157C;
 Fri, 29 May 2020 00:40:41 +0300 (MSK)
Received: from vla5-58875c36c028.qloud-c.yandex.net
 (vla5-58875c36c028.qloud-c.yandex.net [2a02:6b8:c18:340b:0:640:5887:5c36])
 by mxbackcorp1o.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 6ayeGEvfGf-ecxW8imF; Fri, 29 May 2020 00:40:41 +0300
Authentication-Results: mxbackcorp1o.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by vla5-58875c36c028.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 KqoauPPys3-ecXqZD3g; Fri, 29 May 2020 00:40:38 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v7 8/8] block: lift blocksize property limit to 2 MiB
Date: Fri, 29 May 2020 00:39:46 +0300
Message-Id: <20200528213946.1636444-9-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, Eric Blake <eblake@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Logical and physical block sizes in QEMU are limited to 32 KiB.

This limit appears unnecessarily tight, and bigger block sizes have
proven handy at times.

Lift the limit to 2 MiB, which appears to be good enough for everybody
and matches the qcow2 cluster size limit.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
v6 -> v7:
- fix spelling in the log [Eric]

 hw/core/qdev-properties.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 8f35d494a4..d66a498d36 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -784,9 +784,12 @@ const PropertyInfo qdev_prop_size32 = {
 /* lower limit is sector size */
 #define MIN_BLOCK_SIZE          512
 #define MIN_BLOCK_SIZE_STR      "512 B"
-/* upper limit is the max power of 2 that fits in uint16_t */
-#define MAX_BLOCK_SIZE          (32 * KiB)
-#define MAX_BLOCK_SIZE_STR      "32 KiB"
+/*
+ * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
+ * matches qcow2 cluster size limit
+ */
+#define MAX_BLOCK_SIZE          (2 * MiB)
+#define MAX_BLOCK_SIZE_STR      "2 MiB"
 
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:45:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:45:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQLJ-0003iP-LC; Thu, 28 May 2020 21:45:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HyPx=7K=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jeQLI-0003iK-Kt
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:45:36 +0000
X-Inumbo-ID: 90269c8e-a12c-11ea-9dbe-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [207.211.31.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 90269c8e-a12c-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 21:45:35 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-97-eawdfFkvNqet1I0c3Ybv0Q-1; Thu, 28 May 2020 17:45:31 -0400
X-MC-Unique: eawdfFkvNqet1I0c3Ybv0Q-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DC4E118FF662;
 Thu, 28 May 2020 21:45:29 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 9680D60C87;
 Thu, 28 May 2020 21:45:19 +0000 (UTC)
Subject: Re: [PATCH v7 4/8] qdev-properties: add size32 property type
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
 <20200528213946.1636444-5-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <78e3587a-efea-970a-b47e-8d187464d955@redhat.com>
Date: Thu, 28 May 2020 16:45:19 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200528213946.1636444-5-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org, John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/28/20 4:39 PM, Roman Kagan wrote:
> Introduce size32 property type which handles size suffixes (k, m) just
> like size property, but is uint32_t rather than uint64_t.

Does it handle 'g' as well? (even though the set of valid 32-bit sizes 
with a g suffix is rather small ;)

>  It's going to
> be useful for properties that are byte sizes but are inherently 32bit,
> like BlkConf.opt_io_size or .discard_granularity (they are switched to
> this new property type in a followup commit).
> 
> The getter for size32 is left out for a separate patch as its benefit is
> less obvious, and it affects test output; for now the regular uint32
> getter is used.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
>

> +static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
> +                       Error **errp)
> +{
> +    DeviceState *dev = DEVICE(obj);
> +    Property *prop = opaque;
> +    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> +    uint64_t value;
> +    Error *local_err = NULL;
> +
> +    if (dev->realized) {
> +        qdev_prop_set_after_realize(dev, name, errp);
> +        return;
> +    }
> +
> +    visit_type_size(v, name, &value, &local_err);

Yes, it does.

Whether or not the commit message is tweaked,
Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:47:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:47:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQMc-0003nO-0N; Thu, 28 May 2020 21:46:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HyPx=7K=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jeQMa-0003nI-Je
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:46:56 +0000
X-Inumbo-ID: c00db8f6-a12c-11ea-9947-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c00db8f6-a12c-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 21:46:56 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-93-wkXx4XWbP_GhfeGr0WPl_g-1; Thu, 28 May 2020 17:46:54 -0400
X-MC-Unique: wkXx4XWbP_GhfeGr0WPl_g-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id B5536835B40;
 Thu, 28 May 2020 21:46:52 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 1952D7E467;
 Thu, 28 May 2020 21:46:43 +0000 (UTC)
Subject: Re: [PATCH v7 5/8] qdev-properties: make blocksize accept size
 suffixes
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
 <20200528213946.1636444-6-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <a4e3fa72-7ad2-8be0-d714-14505be1373b@redhat.com>
Date: Thu, 28 May 2020 16:46:42 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200528213946.1636444-6-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org, John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/28/20 4:39 PM, Roman Kagan wrote:
> It appears convenient to be able to specify physical_block_size and
> logical_block_size using common size suffixes.
> 
> Teach the blocksize property setter to interpret them.  Also express the
> upper and lower limits in the respective units.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:53:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:53:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQSx-0004fj-N1; Thu, 28 May 2020 21:53:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HyPx=7K=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jeQSw-0004fe-EY
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:53:30 +0000
X-Inumbo-ID: aaaedade-a12d-11ea-a843-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id aaaedade-a12d-11ea-a843-12813bfff9fa;
 Thu, 28 May 2020 21:53:30 +0000 (UTC)
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-234-e215EmtEMfO9UYzBHDxWlg-1; Thu, 28 May 2020 17:53:25 -0400
X-MC-Unique: e215EmtEMfO9UYzBHDxWlg-1
Received: from smtp.corp.redhat.com (int-mx04.intmail.prod.int.phx2.redhat.com
 [10.5.11.14])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id D9E8518FF661;
 Thu, 28 May 2020 21:53:23 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id 0C1ED5D9CD;
 Thu, 28 May 2020 21:53:14 +0000 (UTC)
Subject: Re: [PATCH v7 6/8] block: make BlockConf size props 32bit and accept
 size suffixes
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
 <20200528213946.1636444-7-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <0439aa5e-413c-cf7e-83b7-1e942a882f5a@redhat.com>
Date: Thu, 28 May 2020 16:53:14 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200528213946.1636444-7-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.14
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org, John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/28/20 4:39 PM, Roman Kagan wrote:
> Convert all size-related properties in BlockConf to 32bit.  This will
> allow to accomodate bigger block sizes (in a followup patch).

s/allow to accomodate/accommodate/

> This also allows to make them all accept size suffixes, either via
> DEFINE_PROP_BLOCKSIZE or via DEFINE_PROP_SIZE32.
> 
> Also, since min_io_size is exposed to the guest by scsi and virtio-blk
> devices as an uint16_t in units of logical blocks, introduce an
> additional check in blkconf_blocksizes to prevent its silent truncation.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---

> +    if (conf->min_io_size / conf->logical_block_size > UINT16_MAX) {
> +        error_setg(errp,
> +                   "min_io_size must not exceed " stringify(UINT16_MAX)
> +                   " logical blocks");

On my libc, this results in "must not exceed (65535) logical blocks".

Worse, I could envision a platform where it prints something funky like:

"exceed (2 * (32767) + 1) logical blocks", depending on how complex the 
definition of UINT16_MAX is.  You're better off printing this one with 
%d than with stringify().
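The pitfall is that stringify() pastes the macro's textual expansion, not its numeric value.  A minimal self-contained sketch of the two behaviors (FAKE_UINT16_MAX and the helper names are illustrative stand-ins, not the actual libc or QEMU definitions):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Double-expanding stringify, in the style of QEMU's compiler helpers */
#define tostring(s) #s
#define xstringify(s) tostring(s)

/* A glibc-style definition: the parentheses are part of the macro body */
#define FAKE_UINT16_MAX (65535)

/* stringify() yields the macro's expansion verbatim, parens and all */
static const char *max_stringified(void)
{
    return xstringify(FAKE_UINT16_MAX);   /* "(65535)" */
}

/* Runtime formatting always yields just the number */
static void max_formatted(char *buf, size_t len)
{
    snprintf(buf, len, "%u", (unsigned)FAKE_UINT16_MAX);   /* "65535" */
}
```

With a more exotic definition of the limit macro, the stringified form would carry the whole arithmetic expression into the error message, while the %u form stays stable.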

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Thu May 28 21:57:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 21:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeQX0-0004pr-7R; Thu, 28 May 2020 21:57:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HyPx=7K=redhat.com=eblake@srs-us1.protection.inumbo.net>)
 id 1jeQWy-0004pm-8C
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 21:57:40 +0000
X-Inumbo-ID: 3fd483c0-a12e-11ea-a843-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 3fd483c0-a12e-11ea-a843-12813bfff9fa;
 Thu, 28 May 2020 21:57:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590703059;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=Uy2/hCClOi0N+cvDZXjD2eV1wRJxIY+lbAjkHKJLR28=;
 b=h1V9ZGAuM0IrEHlw959s4QWtMtuxGKjTYbR415i4kmoY8g78MyxzEu7hlIFlUI5C8zjNDL
 jdnPeA5AD/ZNV85sNyGXfUHnQD8M9jUD8S8TAt7kkn8Votz3V2WFgaIoq6Om/BY9X1LYCS
 3vNCsUe0EvwKV/YOW5xfCW513N0Qye4=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-362-r5kJoFc-NyGbktdslDHbpA-1; Thu, 28 May 2020 17:57:33 -0400
X-MC-Unique: r5kJoFc-NyGbktdslDHbpA-1
Received: from smtp.corp.redhat.com (int-mx01.intmail.prod.int.phx2.redhat.com
 [10.5.11.11])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 7CC2218FF660;
 Thu, 28 May 2020 21:57:31 +0000 (UTC)
Received: from [10.3.112.88] (ovpn-112-88.phx2.redhat.com [10.3.112.88])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id F1F689CA0;
 Thu, 28 May 2020 21:57:22 +0000 (UTC)
Subject: Re: [PATCH v7 7/8] qdev-properties: add getter for size32 and
 blocksize
To: Roman Kagan <rvkagan@yandex-team.ru>, qemu-devel@nongnu.org
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
 <20200528213946.1636444-8-rvkagan@yandex-team.ru>
From: Eric Blake <eblake@redhat.com>
Organization: Red Hat, Inc.
Message-ID: <d727b621-071e-e061-2c7f-d14798bdf681@redhat.com>
Date: Thu, 28 May 2020 16:57:22 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200528213946.1636444-8-rvkagan@yandex-team.ru>
Content-Language: en-US
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.11
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Daniel_P=2e_Berrang=c3=a9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 =?UTF-8?Q?Philippe_Mathieu-Daud=c3=a9?= <philmd@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org, John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/28/20 4:39 PM, Roman Kagan wrote:
> Add getter for size32, and use it for blocksize, too.
> 
> In its human-readable branch, it reports approximate size in
> human-readable units next to the exact byte value, like the getter for
> 64bit size does.
> 
> Adjust the expected test output accordingly.
> 
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> ---
> v6 -> v7:
> - split out into separate patch [Eric]

Reviewed-by: Eric Blake <eblake@redhat.com>

-- 
Eric Blake, Principal Software Engineer
Red Hat, Inc.           +1-919-301-3226
Virtualization:  qemu.org | libvirt.org



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:35:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:35:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeR7W-0008B6-42; Thu, 28 May 2020 22:35:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeR7U-0008B1-6M
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:35:24 +0000
X-Inumbo-ID: 83794f0c-a133-11ea-81bc-bc764e2007e4
Received: from forwardcorp1j.mail.yandex.net (unknown [5.45.199.163])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83794f0c-a133-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 22:35:21 +0000 (UTC)
Received: from mxbackcorp1o.mail.yandex.net (mxbackcorp1o.mail.yandex.net
 [IPv6:2a02:6b8:0:1a2d::301])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id ACCCD2E0E4D;
 Fri, 29 May 2020 01:35:19 +0300 (MSK)
Received: from sas1-9998cec34266.qloud-c.yandex.net
 (sas1-9998cec34266.qloud-c.yandex.net [2a02:6b8:c14:3a0e:0:640:9998:cec3])
 by mxbackcorp1o.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 BuNyaWpDN3-ZFxKo312; Fri, 29 May 2020 01:35:19 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590705319; bh=8cadgulWOGNTQdBMKpVo058EzTWu105wWXqXNmDyI2Q=;
 h=In-Reply-To:Message-ID:Subject:To:From:References:Date:Cc;
 b=sYf34wspRvEm3uY0kJIMbV/vm1gg+TU5JxAbj4M8vapwXTBr32eWTdNBhLvq6bb6U
 Hod9ijXHn9stbImabjisBTH/JszZKMdPX4/OHYmcMAw6QWFoXs6gk20Vmsq0qf8rhQ
 eHr35kE05eJVjVk9Dj46B1n6R5mibG0sH1ks+Bh0=
Authentication-Results: mxbackcorp1o.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by sas1-9998cec34266.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 9jfGj8qT6B-ZFXuqeYQ; Fri, 29 May 2020 01:35:15 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
Date: Fri, 29 May 2020 01:35:14 +0300
From: Roman Kagan <rvkagan@yandex-team.ru>
To: Eric Blake <eblake@redhat.com>
Subject: Re: [PATCH v7 4/8] qdev-properties: add size32 property type
Message-ID: <20200528223514.GA1255099@rvkaganb.lan>
Mail-Followup-To: Roman Kagan <rvkagan@yandex-team.ru>,
 Eric Blake <eblake@redhat.com>, qemu-devel@nongnu.org,
 Gerd Hoffmann <kraxel@redhat.com>, Max Reitz <mreitz@redhat.com>,
 John Snow <jsnow@redhat.com>, Keith Busch <kbusch@kernel.org>,
 Kevin Wolf <kwolf@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Fam Zheng <fam@euphon.net>, qemu-block@nongnu.org,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>
References: <20200528213946.1636444-1-rvkagan@yandex-team.ru>
 <20200528213946.1636444-5-rvkagan@yandex-team.ru>
 <78e3587a-efea-970a-b47e-8d187464d955@redhat.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <78e3587a-efea-970a-b47e-8d187464d955@redhat.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Kevin Wolf <kwolf@redhat.com>, Fam Zheng <fam@euphon.net>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org,
 Max Reitz <mreitz@redhat.com>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>, Keith Busch <kbusch@kernel.org>,
 xen-devel@lists.xenproject.org, John Snow <jsnow@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 28, 2020 at 04:45:19PM -0500, Eric Blake wrote:
> On 5/28/20 4:39 PM, Roman Kagan wrote:
> > Introduce size32 property type which handles size suffixes (k, m) just
> > like size property, but is uint32_t rather than uint64_t.
> 
> Does it handle 'g' as well? (even though the set of valid 32-bit sizes with
> a g suffix is rather small ;)
> 
> >  It's going to
> > be useful for properties that are byte sizes but are inherently 32bit,
> > like BlkConf.opt_io_size or .discard_granularity (they are switched to
> > this new property type in a followup commit).
> > 
> > The getter for size32 is left out for a separate patch as its benefit is
> > less obvious, and it affects test output; for now the regular uint32
> > getter is used.
> > 
> > Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> > ---
> > 
> 
> > +static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
> > +                       Error **errp)
> > +{
> > +    DeviceState *dev = DEVICE(obj);
> > +    Property *prop = opaque;
> > +    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
> > +    uint64_t value;
> > +    Error *local_err = NULL;
> > +
> > +    if (dev->realized) {
> > +        qdev_prop_set_after_realize(dev, name, errp);
> > +        return;
> > +    }
> > +
> > +    visit_type_size(v, name, &value, &local_err);
> 
> Yes, it does.
> 
> Whether or not the commit message is tweaked,
> Reviewed-by: Eric Blake <eblake@redhat.com>

I did the same stupid stringify(UINT32_MAX) here too.  It's even uglier
here, with a 'U' appended to the number in the parentheses, but somehow
it didn't jump out at me while testing.

So I'll fix this too in the respin, and drop the r-b.

Thanks,
Roman.
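For a sense of what the respun setter has to guarantee, here is a hedged, self-contained sketch of 32-bit size parsing with k/m/g suffixes and a %u-formatted bound check.  parse_size32() and its error-buffer interface are illustrative only, not the actual qdev property code:

```c
#include <errno.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Parse a decimal byte count with an optional k/m/g suffix into a
 * uint32_t; report the limit with %u rather than stringify(). */
static int parse_size32(const char *s, uint32_t *out, char *err, size_t errlen)
{
    char *end;
    unsigned long long mul = 1;

    errno = 0;
    unsigned long long v = strtoull(s, &end, 10);
    switch (*end) {
    case 'k': case 'K': mul = 1ULL << 10; end++; break;
    case 'm': case 'M': mul = 1ULL << 20; end++; break;
    case 'g': case 'G': mul = 1ULL << 30; end++; break;
    }
    /* Reject trailing garbage, strtoull overflow, and values that would
     * not fit in 32 bits after applying the suffix multiplier. */
    if (errno || *end || v > UINT32_MAX / mul) {
        snprintf(err, errlen, "size must not exceed %u bytes", UINT32_MAX);
        return -1;
    }
    *out = (uint32_t)(v * mul);
    return 0;
}
```

Note that 'g' is accepted but, as Eric points out, only 1g through 3g survive the 32-bit bound.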


From xen-devel-bounces@lists.xenproject.org Thu May 28 22:55:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:55:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRQv-0001SS-SH; Thu, 28 May 2020 22:55:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRQt-0001SN-N8
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:28 +0000
X-Inumbo-ID: 50c1c4d8-a136-11ea-8993-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown [77.88.29.217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50c1c4d8-a136-11ea-8993-bc764e2007e4;
 Thu, 28 May 2020 22:55:25 +0000 (UTC)
Received: from mxbackcorp1o.mail.yandex.net (mxbackcorp1o.mail.yandex.net
 [IPv6:2a02:6b8:0:1a2d::301])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 7FE012E14E9;
 Fri, 29 May 2020 01:55:23 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1o.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 42411HDFKv-tGx0AjR1; Fri, 29 May 2020 01:55:23 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706523; bh=bUJnbRB6fLAgdvUHR4vxsn1k8XIGmwL4yvdLETP4YEw=;
 h=Message-Id:Date:Subject:To:From:Cc;
 b=eHQklP5M7jKdipoVYcPit0Q9YR1xnbAZbfwiHxyU18yqXF8RL4w0ZWEIaaqkQfqFX
 d/QwHBTTr/nXZhih1CCsXqjZ12d0H3UYKhWSFeqX0j+VJMiBHvsyEU1y98I5AgL7Ix
 1roQi0iBKl3xvR7GGgpcnQLmwDeFQ4PaAlyCHp3o=
Authentication-Results: mxbackcorp1o.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tGWSfLS8; Fri, 29 May 2020 01:55:16 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 0/8] block: enhance handling of size-related BlockConf
 properties
Date: Fri, 29 May 2020 01:55:08 +0300
Message-Id: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

BlockConf includes several properties counted in bytes.

Enhance their handling in some aspects, specifically

- accept common size suffixes (k, m)
- perform consistency checks on the values
- lift the upper limit on physical_block_size and logical_block_size

Also fix the accessor for opt_io_size in virtio-blk to make it consistent with
the size of the field.

History:
v7 -> v8:
- replace stringify with %u in error messages [Eric]
- fix wording in logs [Eric]

v6 -> v7:
- avoid overflow in min_io_size check [Eric]
- try again to perform the art form in patch splitting [Eric]

v5 -> v6:
- fix forgotten xen-block and swim
- add prop_size32 instead of going with 64bit

v4 -> v5:
- re-split the patches [Philippe]
- fix/reword error messages [Philippe, Kevin]
- do early return on failed consistency check [Philippe]
- use QEMU_IS_ALIGNED instead of open coding [Philippe]
- make all BlockConf size props support suffixes
- expand the log for virtio-blk opt_io_size [Michael]

v3 -> v4:
- add patch to fix opt_io_size width in virtio-blk
- add patch to perform consistency checks [Kevin]
- check min_io_size against truncation [Kevin]

v2 -> v3:
- mention qcow2 cluster size limit in the log and comment [Eric]

v1 -> v2:
- cap the property at 2 MiB [Eric]
- accept size suffixes

Roman Kagan (8):
  virtio-blk: store opt_io_size with correct size
  block: consolidate blocksize properties consistency checks
  qdev-properties: blocksize: use same limits in code and description
  qdev-properties: add size32 property type
  qdev-properties: make blocksize accept size suffixes
  block: make BlockConf size props 32bit and accept size suffixes
  qdev-properties: add getter for size32 and blocksize
  block: lift blocksize property limit to 2 MiB

 include/hw/block/block.h     |  14 +-
 include/hw/qdev-properties.h |   5 +-
 hw/block/block.c             |  40 ++-
 hw/block/fdc.c               |   5 +-
 hw/block/nvme.c              |   5 +-
 hw/block/swim.c              |   5 +-
 hw/block/virtio-blk.c        |   9 +-
 hw/block/xen-block.c         |   6 +-
 hw/core/qdev-properties.c    |  85 +++++-
 hw/ide/qdev.c                |   5 +-
 hw/scsi/scsi-disk.c          |  12 +-
 hw/usb/dev-storage.c         |   5 +-
 tests/qemu-iotests/172.out   | 532 +++++++++++++++++------------------
 13 files changed, 419 insertions(+), 309 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:55:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:55:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRR1-0001Sk-3w; Thu, 28 May 2020 22:55:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRQz-0001Sa-Hx
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:33 +0000
X-Inumbo-ID: 546ff6d6-a136-11ea-a84b-12813bfff9fa
Received: from forwardcorp1o.mail.yandex.net (unknown [95.108.205.193])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 546ff6d6-a136-11ea-a84b-12813bfff9fa;
 Thu, 28 May 2020 22:55:31 +0000 (UTC)
Received: from mxbackcorp1g.mail.yandex.net (mxbackcorp1g.mail.yandex.net
 [IPv6:2a02:6b8:0:1402::301])
 by forwardcorp1o.mail.yandex.net (Yandex) with ESMTP id 01E682E15D2;
 Fri, 29 May 2020 01:55:30 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1g.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 N6lERP3yVg-tNI0lfAA; Fri, 29 May 2020 01:55:29 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706529; bh=VwPc7pvzSa3b+JOMa0v3fJ6046neqwas8Bvboo3VPpQ=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=PIqY1z+L9AK0bvYc8ga7KCWy7TpIWnCsKikcjXTNqsqx/zP8RS5orA9hTANdjmSnb
 J3Qyw5xOs2tgehyy13fV/QGIdhEze9K9yVX60Sugx0LCVBUipaglE071B7U+wnYHSr
 FClBfEC+CRiGhiyAdKl6D5apcb3QhKrkpiFqIKsY=
Authentication-Results: mxbackcorp1g.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tNWSxwCw; Fri, 29 May 2020 01:55:23 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 1/8] virtio-blk: store opt_io_size with correct size
Date: Fri, 29 May 2020 01:55:09 +0300
Message-Id: <20200528225516.1676602-2-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The width of opt_io_size in virtio_blk_config is 32 bits.  However, it's
written with virtio_stw_p; this may result in value truncation and, on
big-endian systems with legacy virtio, in completely bogus readings in
the guest.

Use the appropriate accessor to store it.
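To see why the wrong-width store matters, here is a hedged little-endian sketch; stw_le/stl_le/ldl_le are illustrative byte-wise stand-ins for the virtio accessors, not the QEMU helpers themselves.  A 16-bit store of a value above 0xffff silently drops the upper bits, while the 32-bit store keeps them:

```c
#include <stdint.h>

/* 16-bit little-endian store into a buffer: only two bytes are written,
 * so the upper 16 bits of the value are silently lost. */
static void stw_le(uint8_t *p, uint32_t v)
{
    p[0] = v & 0xff;
    p[1] = (v >> 8) & 0xff;
}

/* 32-bit little-endian store: all four bytes of the field are written. */
static void stl_le(uint8_t *p, uint32_t v)
{
    p[0] = v & 0xff;
    p[1] = (v >> 8) & 0xff;
    p[2] = (v >> 16) & 0xff;
    p[3] = (v >> 24) & 0xff;
}

/* 32-bit little-endian load, as the guest would read the config field. */
static uint32_t ldl_le(const uint8_t *p)
{
    return p[0] | ((uint32_t)p[1] << 8) |
           ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24);
}
```

On big-endian layouts the mismatch is worse than truncation: the two bytes land at the wrong end of the 32-bit field, which is the "completely bogus readings" case above.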

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Kevin Wolf <kwolf@redhat.com>
---
 hw/block/virtio-blk.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index f5f6fc925e..413083e62f 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -918,7 +918,7 @@ static void virtio_blk_update_config(VirtIODevice *vdev, uint8_t *config)
     virtio_stw_p(vdev, &blkcfg.geometry.cylinders, conf->cyls);
     virtio_stl_p(vdev, &blkcfg.blk_size, blk_size);
     virtio_stw_p(vdev, &blkcfg.min_io_size, conf->min_io_size / blk_size);
-    virtio_stw_p(vdev, &blkcfg.opt_io_size, conf->opt_io_size / blk_size);
+    virtio_stl_p(vdev, &blkcfg.opt_io_size, conf->opt_io_size / blk_size);
     blkcfg.geometry.heads = conf->heads;
     /*
      * We must ensure that the block device capacity is a multiple of
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:55:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:55:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRR4-0001TA-C4; Thu, 28 May 2020 22:55:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRR3-0001Sx-3T
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:37 +0000
X-Inumbo-ID: 565f07e8-a136-11ea-9dbe-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown
 [2a02:6b8:0:1472:2741:0:8b6:217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 565f07e8-a136-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 22:55:34 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 8B6CB2E14E9;
 Fri, 29 May 2020 01:55:33 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 2kCvJxL0U1-tUf8aneP; Fri, 29 May 2020 01:55:33 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706533; bh=h74noWJBfWPpdXH6bM94KzrYaaY1BQvtecT0FnpRfl0=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=Kyk8p71hWdT3nRgyicKPAlYXO1C+7GXrJ4/bnrVazcQW5TupmepvmjbwReVi8bdh+
 GAXtqvEMnKp1u4eYqwb2LoqTevx/u8UvVqrn219Sxx5rei2sbwBPPjx4rT91etNjzS
 HrijfrXowKo3uc6qOiiXgZXpxcG1kC5UgDv7lWis=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tUWSkUsh; Fri, 29 May 2020 01:55:30 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 2/8] block: consolidate blocksize properties consistency
 checks
Date: Fri, 29 May 2020 01:55:10 +0300
Message-Id: <20200528225516.1676602-3-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Several block device properties related to blocksize configuration must
satisfy certain relationships with respect to each other: the physical
block size must be no smaller than the logical block size, and
min_io_size, opt_io_size, and discard_granularity must each be a
multiple of the logical block size.

To ensure these requirements are met, add corresponding consistency
checks to blkconf_blocksizes, adjusting its signature to communicate
possible error to the caller.  Also remove the now redundant consistency
checks from the specific devices.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Paul Durrant <paul@xen.org>
---
 include/hw/block/block.h   |  2 +-
 hw/block/block.c           | 30 +++++++++++++++++++++++++++++-
 hw/block/fdc.c             |  5 ++++-
 hw/block/nvme.c            |  5 ++++-
 hw/block/swim.c            |  5 ++++-
 hw/block/virtio-blk.c      |  7 +------
 hw/block/xen-block.c       |  6 +-----
 hw/ide/qdev.c              |  5 ++++-
 hw/scsi/scsi-disk.c        | 12 +++++-------
 hw/usb/dev-storage.c       |  5 ++++-
 tests/qemu-iotests/172.out |  2 +-
 11 files changed, 58 insertions(+), 26 deletions(-)

diff --git a/include/hw/block/block.h b/include/hw/block/block.h
index d7246f3862..784953a237 100644
--- a/include/hw/block/block.h
+++ b/include/hw/block/block.h
@@ -87,7 +87,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
 bool blkconf_geometry(BlockConf *conf, int *trans,
                       unsigned cyls_max, unsigned heads_max, unsigned secs_max,
                       Error **errp);
-void blkconf_blocksizes(BlockConf *conf);
+bool blkconf_blocksizes(BlockConf *conf, Error **errp);
 bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
                                    bool resizable, Error **errp);
 
diff --git a/hw/block/block.c b/hw/block/block.c
index bf56c7612b..b22207c921 100644
--- a/hw/block/block.c
+++ b/hw/block/block.c
@@ -61,7 +61,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
     return true;
 }
 
-void blkconf_blocksizes(BlockConf *conf)
+bool blkconf_blocksizes(BlockConf *conf, Error **errp)
 {
     BlockBackend *blk = conf->blk;
     BlockSizes blocksizes;
@@ -83,6 +83,34 @@ void blkconf_blocksizes(BlockConf *conf)
             conf->logical_block_size = BDRV_SECTOR_SIZE;
         }
     }
+
+    if (conf->logical_block_size > conf->physical_block_size) {
+        error_setg(errp,
+                   "logical_block_size > physical_block_size not supported");
+        return false;
+    }
+
+    if (!QEMU_IS_ALIGNED(conf->min_io_size, conf->logical_block_size)) {
+        error_setg(errp,
+                   "min_io_size must be a multiple of logical_block_size");
+        return false;
+    }
+
+    if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
+        error_setg(errp,
+                   "opt_io_size must be a multiple of logical_block_size");
+        return false;
+    }
+
+    if (conf->discard_granularity != -1 &&
+        !QEMU_IS_ALIGNED(conf->discard_granularity,
+                         conf->logical_block_size)) {
+        error_setg(errp, "discard_granularity must be "
+                   "a multiple of logical_block_size");
+        return false;
+    }
+
+    return true;
 }
 
 bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
diff --git a/hw/block/fdc.c b/hw/block/fdc.c
index c5fb9d6ece..8eda572ef4 100644
--- a/hw/block/fdc.c
+++ b/hw/block/fdc.c
@@ -554,7 +554,10 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
         read_only = !blk_bs(dev->conf.blk) || blk_is_read_only(dev->conf.blk);
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512 ||
         dev->conf.physical_block_size != 512)
     {
diff --git a/hw/block/nvme.c b/hw/block/nvme.c
index 2f3100e56c..672650e162 100644
--- a/hw/block/nvme.c
+++ b/hw/block/nvme.c
@@ -1390,7 +1390,10 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
         host_memory_backend_set_mapped(n->pmrdev, true);
     }
 
-    blkconf_blocksizes(&n->conf);
+    if (!blkconf_blocksizes(&n->conf, errp)) {
+        return;
+    }
+
     if (!blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk),
                                        false, errp)) {
         return;
diff --git a/hw/block/swim.c b/hw/block/swim.c
index 8f124782f4..74f56e8f46 100644
--- a/hw/block/swim.c
+++ b/hw/block/swim.c
@@ -189,7 +189,10 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
         assert(ret == 0);
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512 ||
         dev->conf.physical_block_size != 512)
     {
diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
index 413083e62f..4ffdb130be 100644
--- a/hw/block/virtio-blk.c
+++ b/hw/block/virtio-blk.c
@@ -1162,12 +1162,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&conf->conf);
-
-    if (conf->conf.logical_block_size >
-        conf->conf.physical_block_size) {
-        error_setg(errp,
-                   "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(&conf->conf, errp)) {
         return;
     }
 
diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
index 570489d6d9..e17fec50e1 100644
--- a/hw/block/xen-block.c
+++ b/hw/block/xen-block.c
@@ -239,11 +239,7 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(conf);
-
-    if (conf->logical_block_size > conf->physical_block_size) {
-        error_setg(
-            errp, "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(conf, errp)) {
         return;
     }
 
diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
index 06b11583f5..b4821b2403 100644
--- a/hw/ide/qdev.c
+++ b/hw/ide/qdev.c
@@ -187,7 +187,10 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&dev->conf);
+    if (!blkconf_blocksizes(&dev->conf, errp)) {
+        return;
+    }
+
     if (dev->conf.logical_block_size != 512) {
         error_setg(errp, "logical_block_size must be 512 for IDE");
         return;
diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
index 387503e11b..8ce68a9dd6 100644
--- a/hw/scsi/scsi-disk.c
+++ b/hw/scsi/scsi-disk.c
@@ -2346,12 +2346,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&s->qdev.conf);
-
-    if (s->qdev.conf.logical_block_size >
-        s->qdev.conf.physical_block_size) {
-        error_setg(errp,
-                   "logical_block_size > physical_block_size not supported");
+    if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
         return;
     }
 
@@ -2436,7 +2431,9 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
     if (s->qdev.conf.blk) {
         ctx = blk_get_aio_context(s->qdev.conf.blk);
         aio_context_acquire(ctx);
-        blkconf_blocksizes(&s->qdev.conf);
+        if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
+            goto out;
+        }
     }
     s->qdev.blocksize = s->qdev.conf.logical_block_size;
     s->qdev.type = TYPE_DISK;
@@ -2444,6 +2441,7 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
         s->product = g_strdup("QEMU HARDDISK");
     }
     scsi_realize(&s->qdev, errp);
+out:
     if (ctx) {
         aio_context_release(ctx);
     }
diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
index 4eba47538d..de461f37bd 100644
--- a/hw/usb/dev-storage.c
+++ b/hw/usb/dev-storage.c
@@ -599,7 +599,10 @@ static void usb_msd_storage_realize(USBDevice *dev, Error **errp)
         return;
     }
 
-    blkconf_blocksizes(&s->conf);
+    if (!blkconf_blocksizes(&s->conf, errp)) {
+        return;
+    }
+
     if (!blkconf_apply_backend_options(&s->conf, blk_is_read_only(blk), true,
                                        errp)) {
         return;
diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
index 7abbe82427..59cc70aebb 100644
--- a/tests/qemu-iotests/172.out
+++ b/tests/qemu-iotests/172.out
@@ -1204,7 +1204,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
                 drive-type = "144"
 
 Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical_block_size=4096
-QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: Physical and logical block size must be 512 for floppy
+QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: logical_block_size > physical_block_size not supported
 
 Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physical_block_size=1024
 QEMU_PROG: -device floppy,drive=none0,physical_block_size=1024: Physical and logical block size must be 512 for floppy
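The alignment checks that this patch adds to blkconf_blocksizes() use QEMU's QEMU_IS_ALIGNED() macro. As a minimal standalone sketch (the macro name and definition here mirror QEMU's osdep.h helper, but this is an illustrative copy, not the QEMU header itself):

```c
#include <assert.h>
#include <stdint.h>

/* Standalone equivalent of QEMU_IS_ALIGNED(n, m): true when n is an
 * exact multiple of m.  blkconf_blocksizes() applies this with
 * m == logical_block_size to min_io_size, opt_io_size and
 * discard_granularity. */
#define IS_ALIGNED(n, m) (((n) % (m)) == 0)

/* Mimics the discard_granularity special case: a value of (uint32_t)-1
 * means "unset" and skips the alignment check entirely. */
static int discard_granularity_ok(uint32_t granularity,
                                  uint32_t logical_block_size)
{
    return granularity == (uint32_t)-1 ||
           IS_ALIGNED(granularity, logical_block_size);
}
```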
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:55:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRR6-0001To-L3; Thu, 28 May 2020 22:55:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRR5-0001TP-KR
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:39 +0000
X-Inumbo-ID: 5854c448-a136-11ea-9dbe-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown
 [2a02:6b8:0:1472:2741:0:8b6:217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5854c448-a136-11ea-9dbe-bc764e2007e4;
 Thu, 28 May 2020 22:55:37 +0000 (UTC)
Received: from mxbackcorp1g.mail.yandex.net (mxbackcorp1g.mail.yandex.net
 [IPv6:2a02:6b8:0:1402::301])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id CF9FC2E129E;
 Fri, 29 May 2020 01:55:36 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1g.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 lmRN7zNptv-tXI868jM; Fri, 29 May 2020 01:55:36 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706536; bh=UKiDYu0qh9wG7LQok5L+3Mp+XQ/Lqrhu664LIgCAPR0=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=LX5XN0aP+VS3Aw5Y5wqV8WwuYfOQJBFYgSOXYggziF7GclUNWheOGQ3ZWF1VlRuBY
 mY91qfWoXcbd4HGEVs4oYywEJg5NKSH3/aka/xOZELxPtso6Jwwv19+bPGulOIOfXd
 rQQCy5R1XrMQdSCYQKQzWgWMH4W6DO0Ew4ZeiHQ4=
Authentication-Results: mxbackcorp1g.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tXWSPpmr; Fri, 29 May 2020 01:55:33 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 3/8] qdev-properties: blocksize: use same limits in code
 and description
Date: Fri, 29 May 2020 01:55:11 +0300
Message-Id: <20200528225516.1676602-4-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Make it easier (and more visible) to keep the limits on the blocksize
properties in sync with their description, by using macros in both the
code and the description.
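The pattern the patch uses can be sketched standalone as follows (the `xstr`/`str` pair is an illustrative stand-in for QEMU's `stringify()` macro; the limit names match the patch):

```c
#include <assert.h>
#include <string.h>

/* Two-level stringification: the inner macro expands MIN_BLOCK_SIZE
 * first, then turns the result into a string literal. */
#define str(s)  #s
#define xstr(s) str(s)

/* Define the numeric limit once... */
#define MIN_BLOCK_SIZE      512
/* ...and derive the user-visible string from it, so the range check and
 * the error message / property description can never drift apart. */
#define MIN_BLOCK_SIZE_STR  xstr(MIN_BLOCK_SIZE)
```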

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 hw/core/qdev-properties.c | 21 +++++++++++++++------
 1 file changed, 15 insertions(+), 6 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index cc924815da..249dc69bd8 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -729,6 +729,13 @@ const PropertyInfo qdev_prop_pci_devfn = {
 
 /* --- blocksize --- */
 
+/* lower limit is sector size */
+#define MIN_BLOCK_SIZE          512
+#define MIN_BLOCK_SIZE_STR      stringify(MIN_BLOCK_SIZE)
+/* upper limit is the max power of 2 that fits in uint16_t */
+#define MAX_BLOCK_SIZE          32768
+#define MAX_BLOCK_SIZE_STR      stringify(MAX_BLOCK_SIZE)
+
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
@@ -736,8 +743,6 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     Property *prop = opaque;
     uint16_t value, *ptr = qdev_get_prop_ptr(dev, prop);
     Error *local_err = NULL;
-    const int64_t min = 512;
-    const int64_t max = 32768;
 
     if (dev->realized) {
         qdev_prop_set_after_realize(dev, name, errp);
@@ -750,9 +755,12 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         return;
     }
     /* value of 0 means "unset" */
-    if (value && (value < min || value > max)) {
-        error_setg(errp, QERR_PROPERTY_VALUE_OUT_OF_RANGE,
-                   dev->id ? : "", name, (int64_t)value, min, max);
+    if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
+        error_setg(errp,
+                   "Property %s.%s doesn't take value %" PRIu16
+                   " (minimum: " MIN_BLOCK_SIZE_STR
+                   ", maximum: " MAX_BLOCK_SIZE_STR ")",
+                   dev->id ? : "", name, value);
         return;
     }
 
@@ -769,7 +777,8 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 
 const PropertyInfo qdev_prop_blocksize = {
     .name  = "uint16",
-    .description = "A power of two between 512 and 32768",
+    .description = "A power of two between " MIN_BLOCK_SIZE_STR
+                   " and " MAX_BLOCK_SIZE_STR,
     .get   = get_uint16,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:55:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:55:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRR9-0001Ur-Tf; Thu, 28 May 2020 22:55:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRR8-0001UN-G5
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:42 +0000
X-Inumbo-ID: 5a01ee06-a136-11ea-81bc-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown [77.88.29.217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a01ee06-a136-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 22:55:40 +0000 (UTC)
Received: from mxbackcorp1j.mail.yandex.net (mxbackcorp1j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::162])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 9F3C62E14E9;
 Fri, 29 May 2020 01:55:39 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 bzLrH8MH3J-tbeu8N5G; Fri, 29 May 2020 01:55:39 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706539; bh=vg3Jrxy+SnwHMQrKPD58GmqliyjIcv2SrK/yR09VWLc=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=E2ZNZMdNOofhSOUyHHPBJfOZuzXCKqwLmFqkVobSyW8BwxMMkbaSzOJm/MBo4poYG
 zlqwfqK5Cdg62coMaaJQdc4oyBpIaFRZCK6nhGIGyL3/Cp5vZr/Dvda+Hm6WdiytsF
 Sklx23sOAl5XhWa5N7zmZmwT2UDkdO9rNWWiO3ps=
Authentication-Results: mxbackcorp1j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-taWSAwaa; Fri, 29 May 2020 01:55:37 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 4/8] qdev-properties: add size32 property type
Date: Fri, 29 May 2020 01:55:12 +0300
Message-Id: <20200528225516.1676602-5-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Introduce a size32 property type which handles size suffixes (k, m, g)
just like the size property, but is backed by uint32_t rather than
uint64_t.  It's going to be useful for properties that are byte sizes
but are inherently 32-bit, like BlockConf.opt_io_size or
.discard_granularity (they are switched to this new property type in a
follow-up commit).

The getter for size32 is left out for a separate patch as its benefit is
less obvious, and it affects test output; for now the regular uint32
getter is used.
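The essential range check the new setter performs can be restated as a small standalone helper (illustrative function, not the QEMU API; the real setter parses the value with `visit_type_size()` first):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* A size suffix is parsed into a 64-bit value; before storing it in the
 * 32-bit property field, reject anything above UINT32_MAX so the value
 * is never silently truncated. */
static bool size32_store(uint64_t value, uint32_t *out)
{
    if (value > UINT32_MAX) {
        return false;   /* the real setter reports this via error_setg() */
    }
    *out = (uint32_t)value;
    return true;
}
```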

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v7 -> v8:
- replace stringify with %u in the error message [Eric]
- fix wording in the log [Eric]

 include/hw/qdev-properties.h |  3 +++
 hw/core/qdev-properties.c    | 40 ++++++++++++++++++++++++++++++++++++
 2 files changed, 43 insertions(+)

diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index f161604fb6..c03eadfad6 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -29,6 +29,7 @@ extern const PropertyInfo qdev_prop_drive;
 extern const PropertyInfo qdev_prop_drive_iothread;
 extern const PropertyInfo qdev_prop_netdev;
 extern const PropertyInfo qdev_prop_pci_devfn;
+extern const PropertyInfo qdev_prop_size32;
 extern const PropertyInfo qdev_prop_blocksize;
 extern const PropertyInfo qdev_prop_pci_host_devaddr;
 extern const PropertyInfo qdev_prop_uuid;
@@ -196,6 +197,8 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
                         BlockdevOnError)
 #define DEFINE_PROP_BIOS_CHS_TRANS(_n, _s, _f, _d) \
     DEFINE_PROP_SIGNED(_n, _s, _f, _d, qdev_prop_bios_chs_trans, int)
+#define DEFINE_PROP_SIZE32(_n, _s, _f, _d)                       \
+    DEFINE_PROP_UNSIGNED(_n, _s, _f, _d, qdev_prop_size32, uint32_t)
 #define DEFINE_PROP_BLOCKSIZE(_n, _s, _f) \
     DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint16_t)
 #define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 249dc69bd8..40c13f6ebe 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -727,6 +727,46 @@ const PropertyInfo qdev_prop_pci_devfn = {
     .set_default_value = set_default_value_int,
 };
 
+/* --- 32bit unsigned int 'size' type --- */
+
+static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
+                       Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value;
+    Error *local_err = NULL;
+
+    if (dev->realized) {
+        qdev_prop_set_after_realize(dev, name, errp);
+        return;
+    }
+
+    visit_type_size(v, name, &value, &local_err);
+    if (local_err) {
+        error_propagate(errp, local_err);
+        return;
+    }
+
+    if (value > UINT32_MAX) {
+        error_setg(errp,
+                   "Property %s.%s doesn't take value %" PRIu64
+                   " (maximum: %u)",
+                   dev->id ? : "", name, value, UINT32_MAX);
+        return;
+    }
+
+    *ptr = value;
+}
+
+const PropertyInfo qdev_prop_size32 = {
+    .name  = "size",
+    .get = get_uint32,
+    .set = set_size32,
+    .set_default_value = set_default_value_uint,
+};
+
 /* --- blocksize --- */
 
 /* lower limit is sector size */
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:55:59 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:55:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRRF-0001XQ-9j; Thu, 28 May 2020 22:55:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRRD-0001WY-Gc
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:47 +0000
X-Inumbo-ID: 5c2bad8e-a136-11ea-9947-bc764e2007e4
Received: from forwardcorp1j.mail.yandex.net (unknown [2a02:6b8:0:1619::183])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c2bad8e-a136-11ea-9947-bc764e2007e4;
 Thu, 28 May 2020 22:55:44 +0000 (UTC)
Received: from mxbackcorp1o.mail.yandex.net (mxbackcorp1o.mail.yandex.net
 [IPv6:2a02:6b8:0:1a2d::301])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id 925782E0E4D;
 Fri, 29 May 2020 01:55:42 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1o.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 8FKAi2zKzd-tex0vN3w; Fri, 29 May 2020 01:55:42 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706542; bh=jZjZvWgRCwmhlfFa7bumuR0eozqg540D/Q+LuGee5IQ=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=SnBI9IKbDRTF4vvxqHoqoTFMnQUlJCk+8D1m7FNWSTvAa/UTvtp+e6/dXw3I+Lzj3
 oycmaxScqeIySlf14+pJHiQNpxHawDh3hVQO58cR0kpiSUz9+hc0viN/gZpfp/ykt5
 jIdgHapwCEdCuxup82PXmt25PQLrc1uqHksljX6g=
Authentication-Results: mxbackcorp1o.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tdWSLP5D; Fri, 29 May 2020 01:55:39 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 5/8] qdev-properties: make blocksize accept size suffixes
Date: Fri, 29 May 2020 01:55:13 +0300
Message-Id: <20200528225516.1676602-6-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

It appears convenient to be able to specify physical_block_size and
logical_block_size using common size suffixes.

Teach the blocksize property setter to interpret them.  Also express the
upper and lower limits in the respective units.
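After this change the setter parses with `visit_type_size()`, so suffixed input like "32k" arrives as the 64-bit value 32768, while the accepted range itself is unchanged. A minimal sketch of the resulting range check (standalone helper, not the QEMU code; 0 still means "unset" as in the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define MIN_BLOCK_SIZE 512            /* sector size */
#define MAX_BLOCK_SIZE (32 * 1024)    /* max power of 2 in uint16_t */

/* The parsed value is now uint64_t (to hold what visit_type_size()
 * produces), but only 0 or [512, 32768] is accepted before it is
 * narrowed into the uint16_t property field. */
static bool blocksize_in_range(uint64_t value)
{
    return value == 0 ||
           (value >= MIN_BLOCK_SIZE && value <= MAX_BLOCK_SIZE);
}
```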

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 hw/core/qdev-properties.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 40c13f6ebe..c9af6a1341 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -14,6 +14,7 @@
 #include "qapi/visitor.h"
 #include "chardev/char.h"
 #include "qemu/uuid.h"
+#include "qemu/units.h"
 
 void qdev_prop_set_after_realize(DeviceState *dev, const char *name,
                                   Error **errp)
@@ -771,17 +772,18 @@ const PropertyInfo qdev_prop_size32 = {
 
 /* lower limit is sector size */
 #define MIN_BLOCK_SIZE          512
-#define MIN_BLOCK_SIZE_STR      stringify(MIN_BLOCK_SIZE)
+#define MIN_BLOCK_SIZE_STR      "512 B"
 /* upper limit is the max power of 2 that fits in uint16_t */
-#define MAX_BLOCK_SIZE          32768
-#define MAX_BLOCK_SIZE_STR      stringify(MAX_BLOCK_SIZE)
+#define MAX_BLOCK_SIZE          (32 * KiB)
+#define MAX_BLOCK_SIZE_STR      "32 KiB"
 
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t value, *ptr = qdev_get_prop_ptr(dev, prop);
+    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value;
     Error *local_err = NULL;
 
     if (dev->realized) {
@@ -789,7 +791,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
         return;
     }
 
-    visit_type_uint16(v, name, &value, &local_err);
+    visit_type_size(v, name, &value, &local_err);
     if (local_err) {
         error_propagate(errp, local_err);
         return;
@@ -797,7 +799,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
     /* value of 0 means "unset" */
     if (value && (value < MIN_BLOCK_SIZE || value > MAX_BLOCK_SIZE)) {
         error_setg(errp,
-                   "Property %s.%s doesn't take value %" PRIu16
+                   "Property %s.%s doesn't take value %" PRIu64
                    " (minimum: " MIN_BLOCK_SIZE_STR
                    ", maximum: " MAX_BLOCK_SIZE_STR ")",
                    dev->id ? : "", name, value);
@@ -816,7 +818,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 }
 
 const PropertyInfo qdev_prop_blocksize = {
-    .name  = "uint16",
+    .name  = "size",
     .description = "A power of two between " MIN_BLOCK_SIZE_STR
                    " and " MAX_BLOCK_SIZE_STR,
     .get   = get_uint16,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:56:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRRK-0001aU-Jr; Thu, 28 May 2020 22:55:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRRI-0001ZL-Ha
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:52 +0000
X-Inumbo-ID: 5d4bb3c6-a136-11ea-81bc-bc764e2007e4
Received: from forwardcorp1j.mail.yandex.net (unknown [2a02:6b8:0:1619::183])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d4bb3c6-a136-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 22:55:46 +0000 (UTC)
Received: from mxbackcorp2j.mail.yandex.net (mxbackcorp2j.mail.yandex.net
 [IPv6:2a02:6b8:0:1619::119])
 by forwardcorp1j.mail.yandex.net (Yandex) with ESMTP id 2A1E22E14BF;
 Fri, 29 May 2020 01:55:45 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp2j.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 Ap2SDXjETt-thf8qBVB; Fri, 29 May 2020 01:55:45 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706545; bh=PhEkTimd3SQ3QLg1zcS+EtvrR4fcTlHBY939aPyxn9s=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=Z/rL3vDjNR50KeCvc9UAKfgRNIPGUSjjAMUZ6ihYSEKRv+wEnXzoqwyZWrz8Ulb4i
 XdZUyxodNCSW42twTAOY45jtuduNXnUpXhfbci1uRDzV1G6rOArUDQaOqHTlooo4rS
 7P8XFJP1+gOBifO2q4ue2dFO9cE9DVr/gfk80GRM=
Authentication-Results: mxbackcorp2j.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tgWSqfVv; Fri, 29 May 2020 01:55:43 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 6/8] block: make BlockConf size props 32bit and accept size
 suffixes
Date: Fri, 29 May 2020 01:55:14 +0300
Message-Id: <20200528225516.1676602-7-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Convert all size-related properties in BlockConf to 32-bit.  This will
accommodate bigger block sizes (in a follow-up patch).  This also allows
making them all accept size suffixes, either via DEFINE_PROP_BLOCKSIZE
or via DEFINE_PROP_SIZE32.

Also, since min_io_size is exposed to the guest by scsi and virtio-blk
devices as a uint16_t in units of logical blocks, introduce an
additional check in blkconf_blocksizes to prevent its silent truncation.
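The truncation hazard described above can be illustrated with a standalone check (hypothetical helper; the actual patch adds an equivalent test inside blkconf_blocksizes()):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* min_io_size is stored in bytes as uint32_t, but scsi and virtio-blk
 * report it to the guest as a uint16_t count of logical blocks.  The
 * byte value divided by the logical block size must therefore fit in
 * 16 bits, or the guest would see a silently truncated value. */
static bool min_io_size_reportable(uint32_t min_io_size,
                                   uint32_t logical_block_size)
{
    return min_io_size / logical_block_size <= UINT16_MAX;
}
```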

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
---
v7 -> v8:
- replace stringify with %u in the error message [Eric]
- fix wording in the log [Eric]

 include/hw/block/block.h     | 12 ++++++------
 include/hw/qdev-properties.h |  2 +-
 hw/block/block.c             | 10 ++++++++++
 hw/core/qdev-properties.c    |  4 ++--
 4 files changed, 19 insertions(+), 9 deletions(-)

diff --git a/include/hw/block/block.h b/include/hw/block/block.h
index 784953a237..1e8b6253dd 100644
--- a/include/hw/block/block.h
+++ b/include/hw/block/block.h
@@ -18,9 +18,9 @@
 
 typedef struct BlockConf {
     BlockBackend *blk;
-    uint16_t physical_block_size;
-    uint16_t logical_block_size;
-    uint16_t min_io_size;
+    uint32_t physical_block_size;
+    uint32_t logical_block_size;
+    uint32_t min_io_size;
     uint32_t opt_io_size;
     int32_t bootindex;
     uint32_t discard_granularity;
@@ -51,9 +51,9 @@ static inline unsigned int get_physical_block_exp(BlockConf *conf)
                           _conf.logical_block_size),                    \
     DEFINE_PROP_BLOCKSIZE("physical_block_size", _state,                \
                           _conf.physical_block_size),                   \
-    DEFINE_PROP_UINT16("min_io_size", _state, _conf.min_io_size, 0),    \
-    DEFINE_PROP_UINT32("opt_io_size", _state, _conf.opt_io_size, 0),    \
-    DEFINE_PROP_UINT32("discard_granularity", _state,                   \
+    DEFINE_PROP_SIZE32("min_io_size", _state, _conf.min_io_size, 0),    \
+    DEFINE_PROP_SIZE32("opt_io_size", _state, _conf.opt_io_size, 0),    \
+    DEFINE_PROP_SIZE32("discard_granularity", _state,                   \
                        _conf.discard_granularity, -1),                  \
     DEFINE_PROP_ON_OFF_AUTO("write-cache", _state, _conf.wce,           \
                             ON_OFF_AUTO_AUTO),                          \
diff --git a/include/hw/qdev-properties.h b/include/hw/qdev-properties.h
index c03eadfad6..5252bb6b1a 100644
--- a/include/hw/qdev-properties.h
+++ b/include/hw/qdev-properties.h
@@ -200,7 +200,7 @@ extern const PropertyInfo qdev_prop_pcie_link_width;
 #define DEFINE_PROP_SIZE32(_n, _s, _f, _d)                       \
     DEFINE_PROP_UNSIGNED(_n, _s, _f, _d, qdev_prop_size32, uint32_t)
 #define DEFINE_PROP_BLOCKSIZE(_n, _s, _f) \
-    DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint16_t)
+    DEFINE_PROP_UNSIGNED(_n, _s, _f, 0, qdev_prop_blocksize, uint32_t)
 #define DEFINE_PROP_PCI_HOST_DEVADDR(_n, _s, _f) \
     DEFINE_PROP(_n, _s, _f, qdev_prop_pci_host_devaddr, PCIHostDeviceAddress)
 #define DEFINE_PROP_OFF_AUTO_PCIBAR(_n, _s, _f, _d) \
diff --git a/hw/block/block.c b/hw/block/block.c
index b22207c921..1e34573da7 100644
--- a/hw/block/block.c
+++ b/hw/block/block.c
@@ -96,6 +96,16 @@ bool blkconf_blocksizes(BlockConf *conf, Error **errp)
         return false;
     }
 
+    /*
+     * all devices which support min_io_size (scsi and virtio-blk) expose it to
+     * the guest as a uint16_t in units of logical blocks
+     */
+    if (conf->min_io_size / conf->logical_block_size > UINT16_MAX) {
+        error_setg(errp, "min_io_size must not exceed %u logical blocks",
+                   UINT16_MAX);
+        return false;
+    }
+
     if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
         error_setg(errp,
                    "opt_io_size must be a multiple of logical_block_size");
diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index c9af6a1341..bd4abdc1d1 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -782,7 +782,7 @@ static void set_blocksize(Object *obj, Visitor *v, const char *name,
 {
     DeviceState *dev = DEVICE(obj);
     Property *prop = opaque;
-    uint16_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
     uint64_t value;
     Error *local_err = NULL;
 
@@ -821,7 +821,7 @@ const PropertyInfo qdev_prop_blocksize = {
     .name  = "size",
     .description = "A power of two between " MIN_BLOCK_SIZE_STR
                    " and " MAX_BLOCK_SIZE_STR,
-    .get   = get_uint16,
+    .get   = get_uint32,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
 };
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:56:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRRN-0001cY-Vr; Thu, 28 May 2020 22:55:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRRN-0001c9-Gv
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:55:57 +0000
X-Inumbo-ID: 5ecdafec-a136-11ea-81bc-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown
 [2a02:6b8:0:1472:2741:0:8b6:217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ecdafec-a136-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 22:55:48 +0000 (UTC)
Received: from mxbackcorp1o.mail.yandex.net (mxbackcorp1o.mail.yandex.net
 [IPv6:2a02:6b8:0:1a2d::301])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id A877A2E129E;
 Fri, 29 May 2020 01:55:47 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1o.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 Ui6p83dbCa-tjx8f6bV; Fri, 29 May 2020 01:55:47 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706547; bh=U9nHw4Fk6ms5lMxzt8QxxS5p9xmnanon9uS62a1foEw=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=TP7rzzfrRENAsK6DKFYJSj7HqoSTosx8FnwC/AFCXH2C1rXdy4h8ZMZMDdLzOqk4H
 /ofVKIIGjndjR5LyZ9hZCvjKKPHrWvHws3q5moTN61XT9wAaJ/64rmdmiCxURedTtr
 cL6XLXItEaTcvAlZym2nSozlOkEUJ5JEXsb4dzc8=
Authentication-Results: mxbackcorp1o.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tjWSxfhb; Fri, 29 May 2020 01:55:45 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 7/8] qdev-properties: add getter for size32 and blocksize
Date: Fri, 29 May 2020 01:55:15 +0300
Message-Id: <20200528225516.1676602-8-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Add a getter for size32, and use it for blocksize, too.

In its human-readable branch, it reports the approximate size in
human-readable units next to the exact byte value, as the getter for
the 64bit size type does.

Adjust the expected test output accordingly.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 hw/core/qdev-properties.c  |  15 +-
 tests/qemu-iotests/172.out | 530 ++++++++++++++++++-------------------
 2 files changed, 278 insertions(+), 267 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index bd4abdc1d1..63d48db70c 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -730,6 +730,17 @@ const PropertyInfo qdev_prop_pci_devfn = {
 
 /* --- 32bit unsigned int 'size' type --- */
 
+static void get_size32(Object *obj, Visitor *v, const char *name, void *opaque,
+                       Error **errp)
+{
+    DeviceState *dev = DEVICE(obj);
+    Property *prop = opaque;
+    uint32_t *ptr = qdev_get_prop_ptr(dev, prop);
+    uint64_t value = *ptr;
+
+    visit_type_size(v, name, &value, errp);
+}
+
 static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
                        Error **errp)
 {
@@ -763,7 +774,7 @@ static void set_size32(Object *obj, Visitor *v, const char *name, void *opaque,
 
 const PropertyInfo qdev_prop_size32 = {
     .name  = "size",
-    .get = get_uint32,
+    .get = get_size32,
     .set = set_size32,
     .set_default_value = set_default_value_uint,
 };
@@ -821,7 +832,7 @@ const PropertyInfo qdev_prop_blocksize = {
     .name  = "size",
     .description = "A power of two between " MIN_BLOCK_SIZE_STR
                    " and " MAX_BLOCK_SIZE_STR,
-    .get   = get_uint32,
+    .get   = get_size32,
     .set   = set_blocksize,
     .set_default_value = set_default_value_uint,
 };
diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
index 59cc70aebb..e782c5957e 100644
--- a/tests/qemu-iotests/172.out
+++ b/tests/qemu-iotests/172.out
@@ -24,11 +24,11 @@ Testing:
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -54,11 +54,11 @@ Testing: -fda TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -81,22 +81,22 @@ Testing: -fdb TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -119,22 +119,22 @@ Testing: -fda TEST_DIR/t.qcow2 -fdb TEST_DIR/t.qcow2.2
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -160,11 +160,11 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -187,22 +187,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2,index=1
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -225,22 +225,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=floppy,file=TEST_DIR/t
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -266,11 +266,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveA=none0
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -293,11 +293,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -global isa-fdc.driveB=none0
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -320,22 +320,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -361,11 +361,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -388,11 +388,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,unit=1
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -415,22 +415,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -456,22 +456,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -494,22 +494,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -532,11 +532,11 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -559,11 +559,11 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -global is
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -589,22 +589,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -627,22 +627,22 @@ Testing: -fda TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -665,22 +665,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -703,22 +703,22 @@ Testing: -fdb TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qcow2.2 -device fl
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "floppy1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -750,22 +750,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -788,22 +788,22 @@ Testing: -drive if=floppy,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.q
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "floppy0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -832,22 +832,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -870,22 +870,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -908,22 +908,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -946,22 +946,22 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -drive if=none,file=TEST_DIR/t.qco
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none1"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
               dev: floppy, id ""
                 unit = 1 (0x1)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -999,11 +999,11 @@ Testing: -device floppy
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1026,11 +1026,11 @@ Testing: -device floppy,drive-type=120
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1053,11 +1053,11 @@ Testing: -device floppy,drive-type=144
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1080,11 +1080,11 @@ Testing: -device floppy,drive-type=288
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = ""
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1110,11 +1110,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "120"
@@ -1137,11 +1137,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,drive-t
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "288"
@@ -1167,11 +1167,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
@@ -1194,11 +1194,11 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
               dev: floppy, id ""
                 unit = 0 (0x0)
                 drive = "none0"
-                logical_block_size = 512 (0x200)
-                physical_block_size = 512 (0x200)
-                min_io_size = 0 (0x0)
-                opt_io_size = 0 (0x0)
-                discard_granularity = 4294967295 (0xffffffff)
+                logical_block_size = 512 (512 B)
+                physical_block_size = 512 (512 B)
+                min_io_size = 0 (0 B)
+                opt_io_size = 0 (0 B)
+                discard_granularity = 4294967295 (4 GiB)
                 write-cache = "auto"
                 share-rw = false
                 drive-type = "144"
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 28 22:56:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 28 May 2020 22:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeRRT-0001gI-CN; Thu, 28 May 2020 22:56:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=leMz=7K=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jeRRS-0001ff-HW
 for xen-devel@lists.xenproject.org; Thu, 28 May 2020 22:56:02 +0000
X-Inumbo-ID: 6215db2a-a136-11ea-81bc-bc764e2007e4
Received: from forwardcorp1p.mail.yandex.net (unknown
 [2a02:6b8:0:1472:2741:0:8b6:217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6215db2a-a136-11ea-81bc-bc764e2007e4;
 Thu, 28 May 2020 22:55:54 +0000 (UTC)
Received: from mxbackcorp1g.mail.yandex.net (mxbackcorp1g.mail.yandex.net
 [IPv6:2a02:6b8:0:1402::301])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id 32C8B2E129E;
 Fri, 29 May 2020 01:55:53 +0300 (MSK)
Received: from iva4-7c3d9abce76c.qloud-c.yandex.net
 (iva4-7c3d9abce76c.qloud-c.yandex.net [2a02:6b8:c0c:4e8e:0:640:7c3d:9abc])
 by mxbackcorp1g.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 khWQ9a8QXE-tmImd9j8; Fri, 29 May 2020 01:55:53 +0300
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yandex-team.ru;
 s=default; 
 t=1590706553; bh=itQBN5U3tIfrRjr7457Zr+edY1y64XE74HVeivo86lM=;
 h=In-Reply-To:Message-Id:References:Date:Subject:To:From:Cc;
 b=QxWGv/p1mseN6guUFN5KYndvGTq4k0mrL5Ss9GRClWK210Mu68nWVZwYo4tHdwoJs
 d+lIK5E30z/7U6EHatw/dmi9UaDzWYrjqjwwQL53PdI5ERCeRwPNvvOGheMBrtWdGx
 Fynmct87UZBGpabgcraQ8zfOg+ZRm9zywA7JMFI4=
Authentication-Results: mxbackcorp1g.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by iva4-7c3d9abce76c.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 tdfEhvD3Vg-tlWSL1MN; Fri, 29 May 2020 01:55:48 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
From: Roman Kagan <rvkagan@yandex-team.ru>
To: qemu-devel@nongnu.org
Subject: [PATCH v8 8/8] block: lift blocksize property limit to 2 MiB
Date: Fri, 29 May 2020 01:55:16 +0300
Message-Id: <20200528225516.1676602-9-rvkagan@yandex-team.ru>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?q?Daniel=20P=2E=20Berrang=C3=A9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, Laurent Vivier <laurent@vivier.eu>,
 Eric Blake <eblake@redhat.com>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>, Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <philmd@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Logical and physical block sizes in QEMU are limited to 32 KiB.

This appears unnecessarily tight, and we have seen bigger block sizes
come in handy at times.

Lift the limit to 2 MiB, which appears to be good enough for everybody
and matches the qcow2 cluster size limit.

Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
---
 hw/core/qdev-properties.c | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/hw/core/qdev-properties.c b/hw/core/qdev-properties.c
index 63d48db70c..ead35d7ffd 100644
--- a/hw/core/qdev-properties.c
+++ b/hw/core/qdev-properties.c
@@ -784,9 +784,12 @@ const PropertyInfo qdev_prop_size32 = {
 /* lower limit is sector size */
 #define MIN_BLOCK_SIZE          512
 #define MIN_BLOCK_SIZE_STR      "512 B"
-/* upper limit is the max power of 2 that fits in uint16_t */
-#define MAX_BLOCK_SIZE          (32 * KiB)
-#define MAX_BLOCK_SIZE_STR      "32 KiB"
+/*
+ * upper limit is arbitrary, 2 MiB looks sufficient for all sensible uses, and
+ * matches qcow2 cluster size limit
+ */
+#define MAX_BLOCK_SIZE          (2 * MiB)
+#define MAX_BLOCK_SIZE_STR      "2 MiB"
 
 static void set_blocksize(Object *obj, Visitor *v, const char *name,
                           void *opaque, Error **errp)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 29 00:36:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 00:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeT0Y-0002zy-LI; Fri, 29 May 2020 00:36:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3yhI=7L=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jeT0Y-0002zt-80
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 00:36:22 +0000
X-Inumbo-ID: 6a53ab6a-a144-11ea-8993-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6a53ab6a-a144-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 00:36:20 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: wNmvSahmlECqvHg8JedH8RH/SKV/ZtpBZD5yQOkwQklXAkkK2tdbw9cYZpQ1grSS6NuAzPm99s
 ZCXnSApvy4M+vi3fH0e4HHV7wAvDJfCYB/SD0TVVuFra5Qjvq47DmEU6rPR6+SNf/a3RHI/E5D
 8uN6QYuSjmdZ01rZs7t0svsO6e6zsZETKLHsDF6lij6av30wpHXGEQqLHoBWOyls7C9BPL2UkN
 Ei5WSmaOOuIal6G+xvSEBedIlOpzj98LxXyfPtOTUGZ9Abo3SS8eRkcRU2ajQgGkbzEXWrC7AV
 riY=
X-SBRS: 2.7
X-MesageID: 19456684
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,446,1583211600"; d="scan'208";a="19456684"
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [PATCH] x86/svm: do not try to handle recalc NPT faults immediately
Date: Fri, 29 May 2020 01:35:53 +0100
Message-ID: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Igor Druzhinin <igor.druzhinin@citrix.com>, wl@xen.org,
 andrew.cooper3@citrix.com, george.dunlap@citrix.com, jbeulich@suse.com,
 roger.pau@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

A recalculation NPT fault doesn't always require additional handling
in hvm_hap_nested_page_fault(); moreover, in the general case, if no
explicit handling is done there, the fault is wrongly considered fatal.

Instead of trying to be opportunistic, take the safer approach and
handle P2M recalculation in a separate NPT fault, retrying the access
after making the necessary adjustments. This is aligned with the Intel
behavior, where there are separate VMEXITs for recalculation and for
EPT violations (faults), and only the faults are handled in
hvm_hap_nested_page_fault(). Do this by also unifying the do_recalc
return code with the Intel implementation, where returning 1 means the
P2M was actually changed.

This covers a specific case of migration with vGPU assigned on AMD:
global log-dirty is enabled and causes immediate recalculation NPT
fault in MMIO area upon access.

Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
---
This is a safer alternative to:
https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg01662.html
and, in my view, the more correct approach.
---
 xen/arch/x86/hvm/svm/svm.c | 5 +++--
 xen/arch/x86/mm/p2m-pt.c   | 8 ++++++--
 2 files changed, 9 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
index 46a1aac..7f6f578 100644
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -2923,9 +2923,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
             v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
         rc = vmcb->exitinfo1 & PFEC_page_present
              ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
-        if ( rc >= 0 )
+        if ( rc == 0 )
+            /* If no recalc adjustments were made, handle this fault */
             svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
-        else
+        else if ( rc < 0 )
         {
             printk(XENLOG_G_ERR
                    "%pv: Error %d handling NPF (gpa=%08lx ec=%04lx)\n",
diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
index 5c05017..377565b 100644
--- a/xen/arch/x86/mm/p2m-pt.c
+++ b/xen/arch/x86/mm/p2m-pt.c
@@ -340,7 +340,7 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
     unsigned long gfn_remainder = gfn;
     unsigned int level = 4;
     l1_pgentry_t *pent;
-    int err = 0;
+    int err = 0, rc = 0;
 
     table = map_domain_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
     while ( --level )
@@ -402,6 +402,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
                 clear_recalc(l1, e);
                 err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
                 ASSERT(!err);
+
+                rc = 1;
             }
         }
         unmap_domain_page((void *)((unsigned long)pent & PAGE_MASK));
@@ -448,12 +450,14 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
             clear_recalc(l1, e);
         err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
         ASSERT(!err);
+
+        rc = 1;
     }
 
  out:
     unmap_domain_page(table);
 
-    return err;
+    return err ? err : rc;
 }
 
 int p2m_pt_handle_deferred_changes(uint64_t gpa)
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri May 29 01:50:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 01:50:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeU9o-0001QL-6V; Fri, 29 May 2020 01:50:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeU9n-0001QG-DS
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 01:49:59 +0000
X-Inumbo-ID: b207f31c-a14e-11ea-a85a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b207f31c-a14e-11ea-a85a-12813bfff9fa;
 Fri, 29 May 2020 01:49:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1DMBWNPh5CCI/GkhlRwlI8GQhnHAGEfMDJmmKbTQmq8=; b=drAq92Hkn8RdPt5NqJK+vTtcg
 27kKeKi0v8kkCOFnT+f+hOhbFhk+Z61Do1Rh77EPPhrjEbh2vp3NtRgADE3adxXhw4R0bxsZcxI/p
 Mh/6ppGnfup7F3RUjuYSzHqb/Yh55O7tN1OfjSFAGg3L4LxtNBAS9s3YyReusOngpFQig=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeU9i-0005Uw-Vy; Fri, 29 May 2020 01:49:55 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeU9i-0000GH-M3; Fri, 29 May 2020 01:49:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeU9i-0003QH-L4; Fri, 29 May 2020 01:49:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150423-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150423: regressions - trouble: fail/pass/starved
X-Osstest-Failures: linux-5.4:build-amd64-libvirt:libvirt-build:fail:regression
 linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:guest-saverestore:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=e0d81ce760044efd3f26004aa32821c34968512a
X-Osstest-Versions-That: linux=1cdaf895c99d319c0007d0b62818cf85fc4b087f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 01:49:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150423 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150423/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build  fail in 150410 REGR. vs. 150294

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 150410

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked in 150410 n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)           blocked in 150410 n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)          blocked in 150410 n/a
 test-amd64-amd64-libvirt      1 build-check(1)           blocked in 150410 n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)           blocked in 150410 n/a
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150294
 test-amd64-amd64-xl-rtds     15 guest-saverestore   fail in 150410 like 150294
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150294
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                e0d81ce760044efd3f26004aa32821c34968512a
baseline version:
 linux                1cdaf895c99d319c0007d0b62818cf85fc4b087f

Last test of basis   150294  2020-05-21 07:55:33 Z    7 days
Testing same since   150410  2020-05-27 16:09:38 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Alain Volmat <alain.volmat@st.com>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Monakov <amonakov@ispras.ru>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexei Starovoitov <ast@kernel.org>
  Andreas Färber <afaerber@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artem Borisov <dedsa2002@gmail.com>
  Arun Easi <aeasi@marvell.com>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Aymeric Agon-Rambosson <aymeric.agon@yandex.com>
  Babu Moger <babu.moger@amd.com>
  Bin Lu <Bin.Lu@arm.com>
  Bob Peterson <rpeterso@redhat.com>
  Bodo Stroesser <bstroesser@ts.fujitsu.com>
  Brent Lu <brent.lu@intel.com>
  Bryant G. Ly <bryangly@gmail.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chris Chiu <chiu@endlessm.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Gmeiner <christian.gmeiner@gmail.com>
  Christian Lachner <gladiac@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Colin Xu <colin.xu@intel.com>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Drake <drake@endlessm.com>
  Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dragos Bogdan <dragos.bogdan@analog.com>
  Eric Biggers <ebiggers@google.com>
  Ewan D. Milne <emilne@redhat.com>
  Fabrice Gasnier <fabrice.gasnier@st.com>
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Gavin Shan <gshan@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gerald Schaefer <gerald.schaefer@de.ibm.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  James Hilliard <james.hilliard1@gmail.com>
  Jian-Hong Pan <jian-hong@endlessm.com>
  Jiri Kosina <jkosina@suse.cz>
  Joerg Roedel <jroedel@suse.de>
  John Hubbard <jhubbard@nvidia.com>
  John Johansen <john.johansen@canonical.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Juliet Kim <julietk@linux.vnet.ibm.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Keno Fischer <keno@juliacomputing.com>
  Kevin Hao <haokexin@gmail.com>
  Klaus Doth <kdlnx@doth.eu>
  Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
  Lianbo Jiang <lijiang@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Loïc Yhuel <loic.yhuel@gmail.com>
  Lucas Stach <l.stach@pengutronix.de>
  Marco Elver <elver@google.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mauro Carvalho Chehab <mchehab@kernel.org>
  Maxim Petrov <mmrmaximuzz@gmail.com>
  Mel Gorman <mgorman@techsingularity.net>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mike Pozulp <pozulp.kernel@gmail.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neil Horman <nhorman@tuxdriver.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nilesh Javali <njavali@marvell.com>
  Olivier Moysan <olivier.moysan@st.com>
  Oscar Carter <oscar.carter@gmx.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Cercueil <paul@crapouillou.net>
  PeiSen Hou <pshou@realtek.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Peter Xu <peterx@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Phil Auld <pauld@redhat.com>
  Philipp Rudo <prudo@linux.ibm.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qian Cai <cai@lca.pw>
  Qiushi Wu <wu000273@umn.edu>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Clark <richard.xnu.clark@gmail.com>
  Richard Weinberger <richard@nod.at>
  Rick Edgecombe <rick.p.edgecombe@intel.com>
  Roberto Sassu <roberto.sassu@huawei.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Russell Currey <ruscur@russell.cc>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Samuel Iglesias Gonsalvez <siglesias@igalia.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Scott Bahling <sbahling@suse.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shay Agroskin <shayagr@amazon.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yoshiyuki Kurauchi <ahochauwaaaaa@gmail.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3181 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 29 02:03:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 02:03:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeUMh-0003Nl-JC; Fri, 29 May 2020 02:03:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=F66H=7L=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jeUMg-0003Ng-Il
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 02:03:18 +0000
X-Inumbo-ID: 8f4866e8-a150-11ea-a85a-12813bfff9fa
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8f4866e8-a150-11ea-a85a-12813bfff9fa;
 Fri, 29 May 2020 02:03:16 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04T21puW121232;
 Fri, 29 May 2020 02:03:13 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=from : to : cc :
 subject : date : message-id; s=corp-2020-01-29;
 bh=1DrUzr8iNDuy+mdtvz3XzNbrgeH8IM8IGqzi1tapQ60=;
 b=rTDzgnhVuu5mGfDVdgNNcrLujUua9cGYdnjngXVho/grDBxaQotormUOKs+IUw2CLoh1
 lI14dyA3qnEs2R9mimPztTUtyPypFJQJHKx7qnBZu6YK+XBYBD0HTF2rZqNwiguaiHeY
 paFsuKMzkCfx7WmMt5+mm1KxPmwi19M3jKCPnqkgwG4ooOTDaCbBVfU6x2bPcpNSRSC9
 Mw9T+slMc0R+KQEQB8/tPirFxCtzOnQ+3AYjyhPxSTzc8gcFW8AVp6DQ4HSpNev53vo+
 i8b+vqCg7q0ues3m5bP9yDfRUNmLoF29kAEMHT41cFV5lTg/Pu4kIF2kIVDGNK36RxwI mw== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2120.oracle.com with ESMTP id 318xbk81pg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 29 May 2020 02:03:13 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04T239ho142956;
 Fri, 29 May 2020 02:03:12 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3030.oracle.com with ESMTP id 317ddtqq21-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 29 May 2020 02:03:12 +0000
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04T22w8a024931;
 Fri, 29 May 2020 02:02:58 GMT
Received: from ovs104.us.oracle.com (/10.149.224.204)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 28 May 2020 19:02:58 -0700
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
To: linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org
Subject: [PATCH] xen/pci: Get rid of verbose_request and use dev_dbg() instead
Date: Thu, 28 May 2020 22:24:52 -0400
Message-Id: <1590719092-8578-1-git-send-email-boris.ostrovsky@oracle.com>
X-Mailer: git-send-email 1.8.3.1
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9635
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999
 bulkscore=0 mlxscore=0
 phishscore=0 adultscore=0 suspectscore=0 spamscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005290012
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9635
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999
 spamscore=0 mlxscore=0
 lowpriorityscore=0 priorityscore=1501 phishscore=0 cotscore=-2147483648
 suspectscore=0 bulkscore=0 clxscore=1015 impostorscore=0 malwarescore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005290012
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: bhelgaas@google.com, jgross@suse.com,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, sstabellini@kernel.org,
 konrad.wilk@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Information printed when verbose_request is set is clearly intended for
debugging only. Remove the module parameter and use dev_dbg() instead.

Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
 drivers/pci/xen-pcifront.c                  | 27 +++++++++-------------
 drivers/xen/xen-pciback/conf_space.c        | 14 ++++--------
 drivers/xen/xen-pciback/conf_space_header.c | 20 +++++------------
 drivers/xen/xen-pciback/pciback.h           |  2 --
 drivers/xen/xen-pciback/pciback_ops.c       | 35 +++++++++--------------------
 5 files changed, 31 insertions(+), 67 deletions(-)

diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index d1b16cf..fab267e 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -77,9 +77,6 @@ static inline void pcifront_init_sd(struct pcifront_sd *sd,
 static DEFINE_SPINLOCK(pcifront_dev_lock);
 static struct pcifront_device *pcifront_dev;
 
-static int verbose_request;
-module_param(verbose_request, int, 0644);
-
 static int errno_to_pcibios_err(int errno)
 {
 	switch (errno) {
@@ -190,18 +187,16 @@ static int pcifront_bus_read(struct pci_bus *bus, unsigned int devfn,
 	struct pcifront_sd *sd = bus->sysdata;
 	struct pcifront_device *pdev = pcifront_get_pdev(sd);
 
-	if (verbose_request)
-		dev_info(&pdev->xdev->dev,
-			 "read dev=%04x:%02x:%02x.%d - offset %x size %d\n",
-			 pci_domain_nr(bus), bus->number, PCI_SLOT(devfn),
-			 PCI_FUNC(devfn), where, size);
+	dev_dbg(&pdev->xdev->dev,
+		"read dev=%04x:%02x:%02x.%d - offset %x size %d\n",
+		pci_domain_nr(bus), bus->number, PCI_SLOT(devfn),
+		PCI_FUNC(devfn), where, size);
 
 	err = do_pci_op(pdev, &op);
 
 	if (likely(!err)) {
-		if (verbose_request)
-			dev_info(&pdev->xdev->dev, "read got back value %x\n",
-				 op.value);
+		dev_dbg(&pdev->xdev->dev, "read got back value %x\n",
+			op.value);
 
 		*val = op.value;
 	} else if (err == -ENODEV) {
@@ -229,12 +224,10 @@ static int pcifront_bus_write(struct pci_bus *bus, unsigned int devfn,
 	struct pcifront_sd *sd = bus->sysdata;
 	struct pcifront_device *pdev = pcifront_get_pdev(sd);
 
-	if (verbose_request)
-		dev_info(&pdev->xdev->dev,
-			 "write dev=%04x:%02x:%02x.%d - "
-			 "offset %x size %d val %x\n",
-			 pci_domain_nr(bus), bus->number,
-			 PCI_SLOT(devfn), PCI_FUNC(devfn), where, size, val);
+	dev_dbg(&pdev->xdev->dev,
+		"write dev=%04x:%02x:%02x.%d - offset %x size %d val %x\n",
+		pci_domain_nr(bus), bus->number,
+		PCI_SLOT(devfn), PCI_FUNC(devfn), where, size, val);
 
 	return errno_to_pcibios_err(do_pci_op(pdev, &op));
 }
diff --git a/drivers/xen/xen-pciback/conf_space.c b/drivers/xen/xen-pciback/conf_space.c
index f2df4e5..059de92 100644
--- a/drivers/xen/xen-pciback/conf_space.c
+++ b/drivers/xen/xen-pciback/conf_space.c
@@ -156,9 +156,7 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
 	 * (as if device didn't respond) */
 	u32 value = 0, tmp_val;
 
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "read %d bytes at 0x%x\n",
-			   size, offset);
+	dev_dbg(&dev->dev, "read %d bytes at 0x%x\n", size, offset);
 
 	if (!valid_request(offset, size)) {
 		err = XEN_PCI_ERR_invalid_offset;
@@ -197,9 +195,7 @@ int xen_pcibk_config_read(struct pci_dev *dev, int offset, int size,
 	}
 
 out:
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev,
-			   "read %d bytes at 0x%x = %x\n", size, offset, value);
+	dev_dbg(&dev->dev, "read %d bytes at 0x%x = %x\n", size, offset, value);
 
 	*ret_val = value;
 	return xen_pcibios_err_to_errno(err);
@@ -214,10 +210,8 @@ int xen_pcibk_config_write(struct pci_dev *dev, int offset, int size, u32 value)
 	u32 tmp_val;
 	int field_start, field_end;
 
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev,
-			   "write request %d bytes at 0x%x = %x\n", size,
-			   offset, value);
+	dev_dbg(&dev->dev, "write request %d bytes at 0x%x = %x\n",
+		size, offset, value);
 
 	if (!valid_request(offset, size))
 		return XEN_PCI_ERR_invalid_offset;
diff --git a/drivers/xen/xen-pciback/conf_space_header.c b/drivers/xen/xen-pciback/conf_space_header.c
index b277b68..ac45cdc 100644
--- a/drivers/xen/xen-pciback/conf_space_header.c
+++ b/drivers/xen/xen-pciback/conf_space_header.c
@@ -68,36 +68,30 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
 
 	dev_data = pci_get_drvdata(dev);
 	if (!pci_is_enabled(dev) && is_enable_cmd(value)) {
-		if (unlikely(verbose_request))
-			dev_printk(KERN_DEBUG, &dev->dev, "enable\n");
+		dev_dbg(&dev->dev, "enable\n");
 		err = pci_enable_device(dev);
 		if (err)
 			return err;
 		if (dev_data)
 			dev_data->enable_intx = 1;
 	} else if (pci_is_enabled(dev) && !is_enable_cmd(value)) {
-		if (unlikely(verbose_request))
-			dev_printk(KERN_DEBUG, &dev->dev, "disable\n");
+		dev_dbg(&dev->dev, "disable\n");
 		pci_disable_device(dev);
 		if (dev_data)
 			dev_data->enable_intx = 0;
 	}
 
 	if (!dev->is_busmaster && is_master_cmd(value)) {
-		if (unlikely(verbose_request))
-			dev_printk(KERN_DEBUG, &dev->dev, "set bus master\n");
+		dev_dbg(&dev->dev, "set bus master\n");
 		pci_set_master(dev);
 	} else if (dev->is_busmaster && !is_master_cmd(value)) {
-		if (unlikely(verbose_request))
-			dev_printk(KERN_DEBUG, &dev->dev, "clear bus master\n");
+		dev_dbg(&dev->dev, "clear bus master\n");
 		pci_clear_master(dev);
 	}
 
 	if (!(cmd->val & PCI_COMMAND_INVALIDATE) &&
 	    (value & PCI_COMMAND_INVALIDATE)) {
-		if (unlikely(verbose_request))
-			dev_printk(KERN_DEBUG, &dev->dev,
-				   "enable memory-write-invalidate\n");
+		dev_dbg(&dev->dev, "enable memory-write-invalidate\n");
 		err = pci_set_mwi(dev);
 		if (err) {
 			dev_warn(&dev->dev, "cannot enable memory-write-invalidate (%d)\n",
@@ -106,9 +100,7 @@ static int command_write(struct pci_dev *dev, int offset, u16 value, void *data)
 		}
 	} else if ((cmd->val & PCI_COMMAND_INVALIDATE) &&
 		   !(value & PCI_COMMAND_INVALIDATE)) {
-		if (unlikely(verbose_request))
-			dev_printk(KERN_DEBUG, &dev->dev,
-				   "disable memory-write-invalidate\n");
+		dev_dbg(&dev->dev, "disable memory-write-invalidate\n");
 		pci_clear_mwi(dev);
 	}
 
diff --git a/drivers/xen/xen-pciback/pciback.h b/drivers/xen/xen-pciback/pciback.h
index 7c95516..f1ed2db 100644
--- a/drivers/xen/xen-pciback/pciback.h
+++ b/drivers/xen/xen-pciback/pciback.h
@@ -186,8 +186,6 @@ static inline void xen_pcibk_release_devices(struct xen_pcibk_device *pdev)
 int xen_pcibk_xenbus_register(void);
 void xen_pcibk_xenbus_unregister(void);
 
-extern int verbose_request;
-
 void xen_pcibk_test_and_schedule_op(struct xen_pcibk_device *pdev);
 #endif
 
diff --git a/drivers/xen/xen-pciback/pciback_ops.c b/drivers/xen/xen-pciback/pciback_ops.c
index 8545ca7..e11a743 100644
--- a/drivers/xen/xen-pciback/pciback_ops.c
+++ b/drivers/xen/xen-pciback/pciback_ops.c
@@ -15,9 +15,6 @@
 #include <linux/sched.h>
 #include "pciback.h"
 
-int verbose_request;
-module_param(verbose_request, int, 0644);
-
 static irqreturn_t xen_pcibk_guest_interrupt(int irq, void *dev_id);
 
 /* Ensure a device is has the fake IRQ handler "turned on/off" and is
@@ -148,9 +145,6 @@ int xen_pcibk_enable_msi(struct xen_pcibk_device *pdev,
 	struct xen_pcibk_dev_data *dev_data;
 	int status;
 
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "enable MSI\n");
-
 	if (dev->msi_enabled)
 		status = -EALREADY;
 	else if (dev->msix_enabled)
@@ -169,8 +163,8 @@ int xen_pcibk_enable_msi(struct xen_pcibk_device *pdev,
 	 * the local domain's IRQ number. */
 
 	op->value = dev->irq ? xen_pirq_from_irq(dev->irq) : 0;
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "MSI: %d\n", op->value);
+
+	dev_dbg(&dev->dev, "MSI: %d\n", op->value);
 
 	dev_data = pci_get_drvdata(dev);
 	if (dev_data)
@@ -183,9 +177,6 @@ int xen_pcibk_enable_msi(struct xen_pcibk_device *pdev,
 int xen_pcibk_disable_msi(struct xen_pcibk_device *pdev,
 			  struct pci_dev *dev, struct xen_pci_op *op)
 {
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "disable MSI\n");
-
 	if (dev->msi_enabled) {
 		struct xen_pcibk_dev_data *dev_data;
 
@@ -196,8 +187,9 @@ int xen_pcibk_disable_msi(struct xen_pcibk_device *pdev,
 			dev_data->ack_intr = 1;
 	}
 	op->value = dev->irq ? xen_pirq_from_irq(dev->irq) : 0;
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "MSI: %d\n", op->value);
+
+	dev_dbg(&dev->dev, "MSI: %d\n", op->value);
+
 	return 0;
 }
 
@@ -210,8 +202,7 @@ int xen_pcibk_enable_msix(struct xen_pcibk_device *pdev,
 	struct msix_entry *entries;
 	u16 cmd;
 
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "enable MSI-X\n");
+	dev_dbg(&dev->dev, "enable MSI-X\n");
 
 	if (op->value > SH_INFO_MAX_VEC)
 		return -EINVAL;
@@ -244,10 +235,8 @@ int xen_pcibk_enable_msix(struct xen_pcibk_device *pdev,
 			if (entries[i].vector) {
 				op->msix_entries[i].vector =
 					xen_pirq_from_irq(entries[i].vector);
-				if (unlikely(verbose_request))
-					dev_printk(KERN_DEBUG, &dev->dev,
-						"MSI-X[%d]: %d\n", i,
-						op->msix_entries[i].vector);
+				dev_dbg(&dev->dev, "MSI-X[%d]: %d\n", i,
+					op->msix_entries[i].vector);
 			}
 		}
 	} else
@@ -267,9 +256,6 @@ int xen_pcibk_enable_msix(struct xen_pcibk_device *pdev,
 int xen_pcibk_disable_msix(struct xen_pcibk_device *pdev,
 			   struct pci_dev *dev, struct xen_pci_op *op)
 {
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "disable MSI-X\n");
-
 	if (dev->msix_enabled) {
 		struct xen_pcibk_dev_data *dev_data;
 
@@ -284,8 +270,9 @@ int xen_pcibk_disable_msix(struct xen_pcibk_device *pdev,
 	 * an undefined IRQ value of zero.
 	 */
 	op->value = dev->irq ? xen_pirq_from_irq(dev->irq) : 0;
-	if (unlikely(verbose_request))
-		dev_printk(KERN_DEBUG, &dev->dev, "MSI-X: %d\n", op->value);
+
+	dev_dbg(&dev->dev, "MSI-X: %d\n", op->value);
+
 	return 0;
 }
 #endif
-- 
1.8.3.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 04:49:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 04:49:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeWwv-0008QY-38; Fri, 29 May 2020 04:48:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jeWwt-0008QT-Hb
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 04:48:51 +0000
X-Inumbo-ID: b004ba78-a167-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b004ba78-a167-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 04:48:50 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 88794AE7D;
 Fri, 29 May 2020 04:48:48 +0000 (UTC)
Subject: Re: [PATCH] xen/pci: Get rid of verbose_request and use dev_dbg()
 instead
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, linux-pci@vger.kernel.org,
 xen-devel@lists.xenproject.org
References: <1590719092-8578-1-git-send-email-boris.ostrovsky@oracle.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <f72d6b15-ab33-6815-5b38-f709ac50f7ab@suse.com>
Date: Fri, 29 May 2020 06:48:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <1590719092-8578-1-git-send-email-boris.ostrovsky@oracle.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: bhelgaas@google.com, sstabellini@kernel.org, konrad.wilk@oracle.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.20 04:24, Boris Ostrovsky wrote:
> Information printed under verbose_request is clearly used for debugging
> only. Remove it and use dev_dbg() instead.
> 
> Signed-off-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen



From xen-devel-bounces@lists.xenproject.org Fri May 29 07:14:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZCs-0004YY-Vc; Fri, 29 May 2020 07:13:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeZCr-0004YT-H0
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:13:29 +0000
X-Inumbo-ID: e50bd01c-a17b-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e50bd01c-a17b-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 07:13:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=9BfMUNsA5Hr7jzdeY81E1fcOyOfgkOlgUqotF7hKDdw=; b=PIc6KJxTJcJyf3dnhRyoAsUkPW
 DbSTjaa8YlCvA1l9/ymIYNTazyhc/yV+E8F79OhKpdm/Oj7OVv4xuJyp8gcLh04A96VByluvq5Gz/
 KgZZjStrAsXz+LcrpYFS9DNJlQZjL//oWfdmEKPo3OrgiizOndOrOMizSqmYsAGI1X5s=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeZCp-0004dR-BS; Fri, 29 May 2020 07:13:27 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeZCp-00004v-5K; Fri, 29 May 2020 07:13:27 +0000
Subject: Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround
To: Ian Jackson <ian.jackson@citrix.com>
References: <20200519190230.29519-1-ian.jackson@eu.citrix.com>
 <20200519190230.29519-23-ian.jackson@eu.citrix.com>
 <7747c676-f9da-cb97-bd93-78dc13138d03@xen.org>
 <24261.17724.382954.918761@mariner.uk.xensource.com>
 <e4e7e515-587a-ad81-c9b7-b7cfa69108be@xen.org>
 <24271.53336.125796.634580@mariner.uk.xensource.com>
 <d4fc0391-9f40-86ad-f304-70bb0cd73e9b@xen.org>
 <24271.56227.367257.277033@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1b9dc551-48f1-1ec0-939b-63c46428fd89@xen.org>
Date: Fri, 29 May 2020 08:13:25 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <24271.56227.367257.277033@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 28/05/2020 16:41, Ian Jackson wrote:
> Julien Grall writes ("Re: [OSSTEST PATCH 22/38] buster: Extend guest bootloader workaround"):
>> On 28/05/2020 15:53, Ian Jackson wrote:
>>> It's Complicated.  There are several options, but the usual ones are:
>>>
>>> 1. pygrub: Install some version of grub, which generates
>>>      /boot/grub.cfg.  It doesn't matter very much which version of grub
>>>      because grub.cfg is read by pygrub in dom0 and that fishes out the
>>>      kernel and initrd.  Many of osstest's tests do this.
>>>
>>> 2. host kernel: Simply pass the dom0 kernel *and initramfs* as the
>>>      kernel image to the guest.  This works if the kernel has the right
>>>      modules for the guest storage, which it can easily do.  On x86 an
>>>      amd64 kernel can run an i386 userland.
>>>
>>> 3. pvgrub.
>>
>> Thanks for the explanation. How do you select it in the Osstest today?
> 
> I think osstest does all three (not very sure about (2)).  Installs
> made with the Debian xen-tools package tend to do (2) by default.
> Installs made with d-i can do (2) or (3).
> 
>>> Is this the same as "EADK" ?  I'm afraid I don't follow...
>>
>> Sorry, I should have been more precise. I meant that we are able to boot
>> an Arm guest using UEFI as we added support in EDK2 (I think in Xen we
>> use the term ovmf).
> 
> Right.
> 
>> When using EFI, the guest can boot exactly the same way as it would on
>> baremetal. The toolstack is just loading the firmware in the guest memory.
>>
>> IIRC we have already regular EFI testing on x86 in Osstest. I am
>> thinking to extend them to Arm at some point.
> 
> Our arm64 boxes are all booting via UEFI right now.
> 
> We have to do a different bodge to load xen.efi rather than grub;
> osstest makes a xen.cfg.  That bodge is extended to buster by
> 
>    Subject: [OSSTEST PATCH 34/38] buster: grub, arm64: extend
>        chainloading workaround

We should be able to use EFI in the guest directly as well :).

> 
>>> Where should I do that ?  I guess I mean, in which bugtracker ?
>>
>>   From the comment in the code, I would assume this is a bug/enhancement
>> against the Debian installer. But I may have misunderstood it.
> 
> Oh I see.  I think maybe the problem was the lack of grub support.  Is
> that all sorted in current Debian unstable/testing ?  If so it may
> well all come out in the wash.

I haven't tried a recent Debian installer on Xen on Arm. I will have a 
try and see what it installs now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 07:20:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:20:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZJ7-0004lX-Ln; Fri, 29 May 2020 07:19:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tJvr=7L=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeZJ6-0004lS-8b
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:19:56 +0000
X-Inumbo-ID: cbc07ddc-a17c-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbc07ddc-a17c-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 07:19:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=6zV0oe34hSDj/zcEpfFGNXe/u7D9RyLAlVlrm2Kn2Kg=; b=Clui/u5OW7nAkQJDPf5thdO0x5
 qCl+SXF9d+aAFisb0SOQwTd4m9E+XQx1X1WzRycuc3NgqnCPyFCRmBsxwZJh+duw8JzvgLWHH1qQX
 pvkKZtCKhyuCicHHwsaLPldwg4YBWG9DR0kMDp9bjJgFYjV7QwCeEDSx8ZxphSqUEbbM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeZJ3-0004lN-RA; Fri, 29 May 2020 07:19:53 +0000
Received: from [93.176.191.173] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeZJ3-0000LL-BG; Fri, 29 May 2020 07:19:53 +0000
Date: Fri, 29 May 2020 09:19:43 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Message-ID: <20200529071930.GI1195@Air-de-Roger>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "Xia,
 Hongyan" <hongyxia@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 28, 2020 at 07:54:35PM +0100, Julien Grall wrote:
> Hi Bertrand,
> 
> Thank you for the patch.
> 
> On 28/05/2020 16:25, Bertrand Marquis wrote:
> > +static int map_runstate_area(struct vcpu *v,
> > +                    struct vcpu_register_runstate_memory_area *area)
> > +{
> > +    unsigned long offset = area->addr.p & ~PAGE_MASK;
> > +    void *mapping;
> > +    struct page_info *page;
> > +    size_t size = sizeof(struct vcpu_runstate_info);
> > +
> > +    ASSERT(runstate_guest(v) == NULL);
> > +
> > +    /* do not allow an area crossing 2 pages */
> > +    if ( offset > (PAGE_SIZE - size) )
> > +        return -EINVAL;
> 
> This is a change in behavior for the guest. If we are going forward with
> this, this will want a separate patch with its own explanation why this is
> done.

I don't think we can go this route without supporting crossing a page
boundary.

Linux will BUG if VCPUOP_register_runstate_memory_area fails, and
AFAICT there's no check in Linux to assure the runstate area doesn't
cross a page boundary. If we want to go this route we must support the
area crossing a page boundary, or else we will break existing
guests.

> > +
> > +#ifdef CONFIG_ARM
> > +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
> 
> A guest is allowed to setup the runstate for a different vCPU than the
> current one. This will lead to get_page_from_gva() to fail as the function
> cannot yet work with a vCPU other than current.
> 
> AFAICT, there is no restriction on when the runstate hypercall can be
> called. So this can even be called before the vCPU is brought up.
> 
> I was going to suggest to use the current vCPU for translating the address.
> However, it would be reasonable for an OS to use the same virtual address
> for all the vCPUs assuming the page-tables are different per vCPU.

Hm, it's a tricky question. Using the current vCPU page tables would
seem like a good compromise, but it needs to get added to the header
as a note, and this should ideally be merged at the start of a
development cycle to give people time to test and report issues.

> Recent Linux kernels are using a per-cpu area, so the virtual address should be
> different for each vCPU. But I don't know how the other OSes work. Roger
> should be able to help for FreeBSD at least.

FreeBSD doesn't use VCPUOP_register_runstate_memory_area at all, so we
are safe in that regard.

I never got around to implementing the required scheduler changes in
order to support stolen time accounting.  Not sure whether this has changed
since I last checked; the bhyve and KVM guys also had interest in
properly accounting for stolen time on FreeBSD IIRC.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 29 07:25:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZOp-0005cC-BX; Fri, 29 May 2020 07:25:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeZOo-0005c7-Bu
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:25:50 +0000
X-Inumbo-ID: 9eef271c-a17d-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9eef271c-a17d-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 07:25:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=avu625cK9g/Vkyk5Cu/wReuBeqnDW8McW9abrjzuw2k=; b=e25Z1HptLuji0F/REtUTm9IQyT
 E3IA+pSglAscxwPxluBVKIGiRvD5I4BX+eKMe6SnZJD/WVWA+hQqfoLcxbylOn/Phk343YJ5OmYG7
 sz8tK+Ypqf3ixY0M9JlPu9xRkLdO6Oui8/69rIVyqf2h48sTA45mp0OxhruMh8Egorng=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeZOl-0004sH-Du; Fri, 29 May 2020 07:25:47 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeZOl-0000lA-6Y; Fri, 29 May 2020 07:25:47 +0000
Subject: Re: [PATCH v2 3/3] clang: don't define nocall
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20200528144023.10814-1-roger.pau@citrix.com>
 <20200528144023.10814-4-roger.pau@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8aa8d35f-2928-2096-a47c-26659c5a43a2@xen.org>
Date: Fri, 29 May 2020 08:25:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <20200528144023.10814-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Roger,

On 28/05/2020 15:40, Roger Pau Monne wrote:
> Clang doesn't support attribute error, and the possible equivalents
> like diagnose_if don't seem to work well in this case as they trigger
> even when the function is not called (just by being used by the
> APPEND_CALL macro).

OOI, could you share the diagnose_if change you tried?

> 
> Define nocall to a noop on clang until a proper solution can be found.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Julien Grall <jgrall@amazon.com>

> ---
>   xen/include/xen/compiler.h | 6 +++++-
>   1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/xen/compiler.h b/xen/include/xen/compiler.h
> index c22439b7a4..225e09e5f7 100644
> --- a/xen/include/xen/compiler.h
> +++ b/xen/include/xen/compiler.h
> @@ -20,7 +20,11 @@
>   
>   #define __weak        __attribute__((__weak__))
>   
> -#define nocall        __attribute__((error("Nonstandard ABI")))
> +#if !defined(__clang__)
> +# define nocall        __attribute__((error("Nonstandard ABI")))
> +#else
> +# define nocall
> +#endif
>   
>   #if (!defined(__clang__) && (__GNUC__ == 4) && (__GNUC_MINOR__ < 5))
>   #define unreachable() do {} while (1)
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 07:35:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZY7-0006WW-E7; Fri, 29 May 2020 07:35:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeZY6-0006WR-4Y
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:35:26 +0000
X-Inumbo-ID: f52bc116-a17e-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f52bc116-a17e-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 07:35:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A12EEB032;
 Fri, 29 May 2020 07:35:22 +0000 (UTC)
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Julien Grall <julien@xen.org>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
Date: Fri, 29 May 2020 09:35:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "Xia,
 Hongyan" <hongyxia@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 20:54, Julien Grall wrote:
> On 28/05/2020 16:25, Bertrand Marquis wrote:
>> At the moment on Arm, a Linux guest running with KTPI enabled will
>> cause the following error when a context switch happens in user mode:
>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>
>> This patch modifies runstate handling to map the area given by the
>> guest inside Xen during the hypercall.
>> This removes the guest virtual-to-physical conversion during context
>> switches, which eliminates the bug.
> 
> It would be good to spell out that a virtual address is not stable. So 
> relying on it is wrong.

Guests at present are permitted to change the mapping underneath the
virtual address provided (this may not be the best idea, but the
interface is what it is). Therefore I don't think the present
interface can be changed like this. Instead a new interface will need
adding which takes a guest physical address instead. (Which, in the
end, will merely be one tiny step towards making the hypercall
interfaces use guest physical addresses. And it would be nice if an
overall concept was hashed out first how that conversion should
occur, such that the change here could at least be made fit that
planned model. For example, an option might be to retain all present
hypercall numbering and simply dedicate a bit in the top level
hypercall numbers indicating whether _all_ involved addresses for
that operation are physical vs virtual ones.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 07:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZck-0007Jp-0o; Fri, 29 May 2020 07:40:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3I2r=7L=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1jeZci-0007Jk-Ah
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:40:12 +0000
X-Inumbo-ID: a02a850c-a17f-11ea-a877-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a02a850c-a17f-11ea-a877-12813bfff9fa;
 Fri, 29 May 2020 07:40:10 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590738010;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:content-type:content-type:in-reply-to:in-reply-to:
 references:references; bh=ssGDGVmdYGoZoZzLhx8dAk7n43EOoBPZwcr8nrwVXFM=;
 b=BSwN0RYN/l0QxejSfnU1qCsKqZczhSsM5ek7vW28DsJabwLtrAi/IUcfpeaCOr799dxeEC
 yBgXc+byZ9uoSZZSB61ewFdPvVZZvWa66FVzTjY46Zj/YT5KbiaBKsIukkqD0O5RJI5oHg
 bcFLObFkUyZ1ucS4Uk7ZbKMUqEhRTJE=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-218-9Zu2jfIcPd2yPb2z-z5D6w-1; Fri, 29 May 2020 03:40:08 -0400
X-MC-Unique: 9Zu2jfIcPd2yPb2z-z5D6w-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 98693107ACF2;
 Fri, 29 May 2020 07:40:07 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-113-50.ams2.redhat.com
 [10.36.113.50])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 7FDFE10013DB;
 Fri, 29 May 2020 07:39:58 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 6100F9DB0; Fri, 29 May 2020 09:39:57 +0200 (CEST)
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 4/4] microvm: move virtio base to 0xfeb00000
Date: Fri, 29 May 2020 09:39:57 +0200
Message-Id: <20200529073957.8018-5-kraxel@redhat.com>
In-Reply-To: <20200529073957.8018-1-kraxel@redhat.com>
References: <20200529073957.8018-1-kraxel@redhat.com>
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=US-ASCII
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Sergio Lopez <slp@redhat.com>,
 Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 imammedo@redhat.com, Gerd Hoffmann <kraxel@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, philmd@redhat.com,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Place the virtio-mmio devices near the other mmio regions; the
ioapic comes next, at 0xfec00000.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 include/hw/i386/microvm.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/hw/i386/microvm.h b/include/hw/i386/microvm.h
index ba68d1f22bb3..fd34b78e0d2a 100644
--- a/include/hw/i386/microvm.h
+++ b/include/hw/i386/microvm.h
@@ -26,7 +26,7 @@
 #include "hw/i386/x86.h"
 
 /* Platform virtio definitions */
-#define VIRTIO_MMIO_BASE      0xc0000000
+#define VIRTIO_MMIO_BASE      0xfeb00000
 #define VIRTIO_IRQ_BASE       5
 #define VIRTIO_NUM_TRANSPORTS 8
 #define VIRTIO_CMDLINE_MAXLEN 64
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Fri May 29 07:40:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:40:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZcm-0007K5-8S; Fri, 29 May 2020 07:40:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3I2r=7L=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1jeZck-0007Jv-EB
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:40:14 +0000
X-Inumbo-ID: a1c888fa-a17f-11ea-9dbe-bc764e2007e4
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a1c888fa-a17f-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 07:40:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590738013;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=gb5GkJLhKQmpKKUOPuJsTnckJHtFKjaTwf/g8nYkDxc=;
 b=J8BmK2T+MjMPfspS3k28eb3fDnJqlXzVO9XwgZL/4am6vhRpt3JfhmGC1cPodSqi8udt8o
 0Ed0k9xZni22K6cNu0Kt+de/aRcKUB8GHCLQagmh963qSWcNlCxfbj4jiZFU0gwIfJSfM4
 JBinHyW+3k84MPVa81Lijgq/FWpPeHI=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-334-Zz3F4r-EMV2AXkdZs1hcTA-1; Fri, 29 May 2020 03:40:09 -0400
X-MC-Unique: Zz3F4r-EMV2AXkdZs1hcTA-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 0866F100A8ED;
 Fri, 29 May 2020 07:40:08 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-113-50.ams2.redhat.com
 [10.36.113.50])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 79CD810013C2;
 Fri, 29 May 2020 07:39:58 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 57E439DAF; Fri, 29 May 2020 09:39:57 +0200 (CEST)
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 3/4] x86: move max-ram-below-4g to pc
Date: Fri, 29 May 2020 09:39:56 +0200
Message-Id: <20200529073957.8018-4-kraxel@redhat.com>
In-Reply-To: <20200529073957.8018-1-kraxel@redhat.com>
References: <20200529073957.8018-1-kraxel@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Sergio Lopez <slp@redhat.com>,
 Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 imammedo@redhat.com, Gerd Hoffmann <kraxel@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, philmd@redhat.com,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Move it from X86MachineClass to PCMachineClass so that it disappears
from the microvm machine type's property list.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
---
 include/hw/i386/pc.h  |  2 ++
 include/hw/i386/x86.h |  4 ----
 hw/i386/pc.c          | 46 +++++++++++++++++++++++++++++++++++++++++++
 hw/i386/pc_piix.c     | 10 +++++-----
 hw/i386/pc_q35.c      | 10 +++++-----
 hw/i386/x86.c         | 46 -------------------------------------------
 hw/i386/xen/xen-hvm.c |  2 +-
 7 files changed, 59 insertions(+), 61 deletions(-)

diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
index 5e3b19ab78fc..dce1273c7dad 100644
--- a/include/hw/i386/pc.h
+++ b/include/hw/i386/pc.h
@@ -35,6 +35,7 @@ struct PCMachineState {
     PFlashCFI01 *flash[2];
 
     /* Configuration options: */
+    uint64_t max_ram_below_4g;
     OnOffAuto vmport;
 
     bool acpi_build_enabled;
@@ -51,6 +52,7 @@ struct PCMachineState {
 };
 
 #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
+#define PC_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
 #define PC_MACHINE_DEVMEM_REGION_SIZE "device-memory-region-size"
 #define PC_MACHINE_VMPORT           "vmport"
 #define PC_MACHINE_SMBUS            "smbus"
diff --git a/include/hw/i386/x86.h b/include/hw/i386/x86.h
index b52285481687..b79f24e28545 100644
--- a/include/hw/i386/x86.h
+++ b/include/hw/i386/x86.h
@@ -51,9 +51,6 @@ typedef struct {
     qemu_irq *gsi;
     GMappedFile *initrd_mapped_file;
 
-    /* Configuration options: */
-    uint64_t max_ram_below_4g;
-
     /* RAM information (sizes, addresses, configuration): */
     ram_addr_t below_4g_mem_size, above_4g_mem_size;
 
@@ -82,7 +79,6 @@ typedef struct {
     AddressSpace *ioapic_as;
 } X86MachineState;
 
-#define X86_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
 #define X86_MACHINE_SMM              "smm"
 #define X86_MACHINE_ACPI             "acpi"
 
diff --git a/hw/i386/pc.c b/hw/i386/pc.c
index c5db7be6d8b1..6d6f6decb32c 100644
--- a/hw/i386/pc.c
+++ b/hw/i386/pc.c
@@ -1831,6 +1831,45 @@ static void pc_machine_set_pit(Object *obj, bool value, Error **errp)
     pcms->pit_enabled = value;
 }
 
+static void pc_machine_get_max_ram_below_4g(Object *obj, Visitor *v,
+                                            const char *name, void *opaque,
+                                            Error **errp)
+{
+    PCMachineState *pcms = PC_MACHINE(obj);
+    uint64_t value = pcms->max_ram_below_4g;
+
+    visit_type_size(v, name, &value, errp);
+}
+
+static void pc_machine_set_max_ram_below_4g(Object *obj, Visitor *v,
+                                            const char *name, void *opaque,
+                                            Error **errp)
+{
+    PCMachineState *pcms = PC_MACHINE(obj);
+    Error *error = NULL;
+    uint64_t value;
+
+    visit_type_size(v, name, &value, &error);
+    if (error) {
+        error_propagate(errp, error);
+        return;
+    }
+    if (value > 4 * GiB) {
+        error_setg(&error,
+                   "Machine option 'max-ram-below-4g=%"PRIu64
+                   "' expects size less than or equal to 4G", value);
+        error_propagate(errp, error);
+        return;
+    }
+
+    if (value < 1 * MiB) {
+        warn_report("Only %" PRIu64 " bytes of RAM below the 4GiB boundary,"
+                    "BIOS may not work with less than 1MiB", value);
+    }
+
+    pcms->max_ram_below_4g = value;
+}
+
 static void pc_machine_initfn(Object *obj)
 {
     PCMachineState *pcms = PC_MACHINE(obj);
@@ -1840,6 +1879,7 @@ static void pc_machine_initfn(Object *obj)
 #else
     pcms->vmport = ON_OFF_AUTO_OFF;
 #endif /* CONFIG_VMPORT */
+    pcms->max_ram_below_4g = 0; /* use default */
     /* acpi build is enabled by default if machine supports it */
     pcms->acpi_build_enabled = PC_MACHINE_GET_CLASS(pcms)->has_acpi_build;
     pcms->smbus_enabled = true;
@@ -1938,6 +1978,12 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
     mc->numa_mem_supported = true;
     mc->default_ram_id = "pc.ram";
 
+    object_class_property_add(oc, PC_MACHINE_MAX_RAM_BELOW_4G, "size",
+        pc_machine_get_max_ram_below_4g, pc_machine_set_max_ram_below_4g,
+        NULL, NULL);
+    object_class_property_set_description(oc, PC_MACHINE_MAX_RAM_BELOW_4G,
+        "Maximum ram below the 4G boundary (32bit boundary)");
+
     object_class_property_add(oc, PC_MACHINE_DEVMEM_REGION_SIZE, "int",
         pc_machine_get_device_memory_region_size, NULL,
         NULL, NULL);
diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
index f66e1d73ce0b..503c35f7bf4c 100644
--- a/hw/i386/pc_piix.c
+++ b/hw/i386/pc_piix.c
@@ -127,11 +127,11 @@ static void pc_init1(MachineState *machine,
     if (xen_enabled()) {
         xen_hvm_init(pcms, &ram_memory);
     } else {
-        if (!x86ms->max_ram_below_4g) {
-            x86ms->max_ram_below_4g = 0xe0000000; /* default: 3.5G */
+        if (!pcms->max_ram_below_4g) {
+            pcms->max_ram_below_4g = 0xe0000000; /* default: 3.5G */
         }
-        lowmem = x86ms->max_ram_below_4g;
-        if (machine->ram_size >= x86ms->max_ram_below_4g) {
+        lowmem = pcms->max_ram_below_4g;
+        if (machine->ram_size >= pcms->max_ram_below_4g) {
             if (pcmc->gigabyte_align) {
                 if (lowmem > 0xc0000000) {
                     lowmem = 0xc0000000;
@@ -140,7 +140,7 @@ static void pc_init1(MachineState *machine,
                     warn_report("Large machine and max_ram_below_4g "
                                 "(%" PRIu64 ") not a multiple of 1G; "
                                 "possible bad performance.",
-                                x86ms->max_ram_below_4g);
+                                pcms->max_ram_below_4g);
                 }
             }
         }
diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
index 4ba8ac8774e4..90e8fb2cb737 100644
--- a/hw/i386/pc_q35.c
+++ b/hw/i386/pc_q35.c
@@ -154,18 +154,18 @@ static void pc_q35_init(MachineState *machine)
     /* Handle the machine opt max-ram-below-4g.  It is basically doing
      * min(qemu limit, user limit).
      */
-    if (!x86ms->max_ram_below_4g) {
-        x86ms->max_ram_below_4g = 4 * GiB;
+    if (!pcms->max_ram_below_4g) {
+        pcms->max_ram_below_4g = 4 * GiB;
     }
-    if (lowmem > x86ms->max_ram_below_4g) {
-        lowmem = x86ms->max_ram_below_4g;
+    if (lowmem > pcms->max_ram_below_4g) {
+        lowmem = pcms->max_ram_below_4g;
         if (machine->ram_size - lowmem > lowmem &&
             lowmem & (1 * GiB - 1)) {
             warn_report("There is possibly poor performance as the ram size "
                         " (0x%" PRIx64 ") is more then twice the size of"
                         " max-ram-below-4g (%"PRIu64") and"
                         " max-ram-below-4g is not a multiple of 1G.",
-                        (uint64_t)machine->ram_size, x86ms->max_ram_below_4g);
+                        (uint64_t)machine->ram_size, pcms->max_ram_below_4g);
         }
     }
 
diff --git a/hw/i386/x86.c b/hw/i386/x86.c
index 7a3bc7ab6639..49884e5c1597 100644
--- a/hw/i386/x86.c
+++ b/hw/i386/x86.c
@@ -846,45 +846,6 @@ void x86_bios_rom_init(MemoryRegion *rom_memory, bool isapc_ram_fw)
                                 bios);
 }
 
-static void x86_machine_get_max_ram_below_4g(Object *obj, Visitor *v,
-                                             const char *name, void *opaque,
-                                             Error **errp)
-{
-    X86MachineState *x86ms = X86_MACHINE(obj);
-    uint64_t value = x86ms->max_ram_below_4g;
-
-    visit_type_size(v, name, &value, errp);
-}
-
-static void x86_machine_set_max_ram_below_4g(Object *obj, Visitor *v,
-                                             const char *name, void *opaque,
-                                             Error **errp)
-{
-    X86MachineState *x86ms = X86_MACHINE(obj);
-    Error *error = NULL;
-    uint64_t value;
-
-    visit_type_size(v, name, &value, &error);
-    if (error) {
-        error_propagate(errp, error);
-        return;
-    }
-    if (value > 4 * GiB) {
-        error_setg(&error,
-                   "Machine option 'max-ram-below-4g=%"PRIu64
-                   "' expects size less than or equal to 4G", value);
-        error_propagate(errp, error);
-        return;
-    }
-
-    if (value < 1 * MiB) {
-        warn_report("Only %" PRIu64 " bytes of RAM below the 4GiB boundary,"
-                    "BIOS may not work with less than 1MiB", value);
-    }
-
-    x86ms->max_ram_below_4g = value;
-}
-
 bool x86_machine_is_smm_enabled(X86MachineState *x86ms)
 {
     bool smm_available = false;
@@ -958,7 +919,6 @@ static void x86_machine_initfn(Object *obj)
 
     x86ms->smm = ON_OFF_AUTO_AUTO;
     x86ms->acpi = ON_OFF_AUTO_AUTO;
-    x86ms->max_ram_below_4g = 0; /* use default */
     x86ms->smp_dies = 1;
 
     x86ms->apicid_from_cpu_idx = x86_apicid_from_cpu_idx;
@@ -980,12 +940,6 @@ static void x86_machine_class_init(ObjectClass *oc, void *data)
     x86mc->save_tsc_khz = true;
     nc->nmi_monitor_handler = x86_nmi;
 
-    object_class_property_add(oc, X86_MACHINE_MAX_RAM_BELOW_4G, "size",
-        x86_machine_get_max_ram_below_4g, x86_machine_set_max_ram_below_4g,
-        NULL, NULL);
-    object_class_property_set_description(oc, X86_MACHINE_MAX_RAM_BELOW_4G,
-        "Maximum ram below the 4G boundary (32bit boundary)");
-
     object_class_property_add(oc, X86_MACHINE_SMM, "OnOffAuto",
         x86_machine_get_smm, x86_machine_set_smm,
         NULL, NULL);
diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 82ece6b9e739..d6f4674418e9 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -203,7 +203,7 @@ static void xen_ram_init(PCMachineState *pcms,
     ram_addr_t block_len;
     uint64_t user_lowmem =
         object_property_get_uint(qdev_get_machine(),
-                                 X86_MACHINE_MAX_RAM_BELOW_4G,
+                                 PC_MACHINE_MAX_RAM_BELOW_4G,
                                  &error_abort);
 
     /* Handle the machine opt max-ram-below-4g.  It is basically doing
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Fri May 29 07:40:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:40:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZcn-0007Ke-GZ; Fri, 29 May 2020 07:40:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3I2r=7L=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1jeZcn-0007KY-5T
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:40:17 +0000
X-Inumbo-ID: a184ad9e-a17f-11ea-a877-12813bfff9fa
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.120])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a184ad9e-a17f-11ea-a877-12813bfff9fa;
 Fri, 29 May 2020 07:40:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590738013;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=FEpeW4hSXWUgbGj7HHo1jY8yfFQRooPvNkXyo34CA7s=;
 b=AClKcPqWlP5ohZSc4nTz35uVT1gx7Xd9QRO3Th2ZxfP9Xbm5ouirnqcBBJ2Mx5eFKkt+ER
 Ti/lT8e/FcXBCbZ9Z8ZuTW/y9VojnR1Bo+iJA3ATK/0w+Gs+k9DZEfsMDafKeVdf50LbmV
 sZliqI0mRVn3aVnZzsbILje1f8Ddc8Q=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-116-Q409k48zNK-msMJjsPcrfg-1; Fri, 29 May 2020 03:40:11 -0400
X-MC-Unique: Q409k48zNK-msMJjsPcrfg-1
Received: from smtp.corp.redhat.com (int-mx05.intmail.prod.int.phx2.redhat.com
 [10.5.11.15])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 30E79107ACCA;
 Fri, 29 May 2020 07:40:10 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-113-50.ams2.redhat.com
 [10.36.113.50])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6935A6298C;
 Fri, 29 May 2020 07:39:58 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 3BB719DAD; Fri, 29 May 2020 09:39:57 +0200 (CEST)
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 1/4] microvm: use 3G split unconditionally
Date: Fri, 29 May 2020 09:39:54 +0200
Message-Id: <20200529073957.8018-2-kraxel@redhat.com>
In-Reply-To: <20200529073957.8018-1-kraxel@redhat.com>
References: <20200529073957.8018-1-kraxel@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.15
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Sergio Lopez <slp@redhat.com>,
 Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 imammedo@redhat.com, Gerd Hoffmann <kraxel@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, philmd@redhat.com,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Looks like the logic was copied over from q35.

q35 does this for backward compatibility; there is no reason to do it
on microvm, though.  Also, microvm doesn't need much mmio space: 1G is
more than enough.  Using an mmio window smaller than 1G is bad for
gigabyte alignment and hugepages, though.  So split at 3G unconditionally.

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
---
 hw/i386/microvm.c | 16 +---------------
 1 file changed, 1 insertion(+), 15 deletions(-)

diff --git a/hw/i386/microvm.c b/hw/i386/microvm.c
index 937db10ae6a5..44f940813b07 100644
--- a/hw/i386/microvm.c
+++ b/hw/i386/microvm.c
@@ -170,23 +170,9 @@ static void microvm_memory_init(MicrovmMachineState *mms)
     MemoryRegion *ram_below_4g, *ram_above_4g;
     MemoryRegion *system_memory = get_system_memory();
     FWCfgState *fw_cfg;
-    ram_addr_t lowmem;
+    ram_addr_t lowmem = 0xc0000000; /* 3G */
     int i;
 
-    /*
-     * Check whether RAM fits below 4G (leaving 1/2 GByte for IO memory
-     * and 256 Mbytes for PCI Express Enhanced Configuration Access Mapping
-     * also known as MMCFG).
-     * If it doesn't, we need to split it in chunks below and above 4G.
-     * In any case, try to make sure that guest addresses aligned at
-     * 1G boundaries get mapped to host addresses aligned at 1G boundaries.
-     */
-    if (machine->ram_size >= 0xb0000000) {
-        lowmem = 0x80000000;
-    } else {
-        lowmem = 0xb0000000;
-    }
-
     /*
      * Handle the machine opt max-ram-below-4g.  It is basically doing
      * min(qemu limit, user limit).
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Fri May 29 07:40:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:40:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZcs-0007MH-Oe; Fri, 29 May 2020 07:40:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3I2r=7L=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1jeZcs-0007M0-5k
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:40:22 +0000
X-Inumbo-ID: a1ef532c-a17f-11ea-a877-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [207.211.31.81])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a1ef532c-a17f-11ea-a877-12813bfff9fa;
 Fri, 29 May 2020 07:40:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590738013;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding:
 in-reply-to:in-reply-to:references:references;
 bh=4iqwU8K1xw7zSdeBIG54ZrsqaU3AB+FYUp1ncUWCyLA=;
 b=EcPIUYH0SZNAl1A5UfewodZ8bSbbiYcK59EymKOhYcDh/0M7ys0OyE4HxHn+vZvo7mRe5p
 To1FOrpwFYS1ux7y1zpsCllU9TjfDAiE8mDB/lqegPfIR3YixWlZy3Wwgnyrm28D2PuO+i
 Z50PuKPOOpA6nxJFLWpGc1c9Qy9S1fw=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-471-Lmzt2jhAOjaXj-0gVVTj-g-1; Fri, 29 May 2020 03:40:11 -0400
X-MC-Unique: Lmzt2jhAOjaXj-0gVVTj-g-1
Received: from smtp.corp.redhat.com (int-mx02.intmail.prod.int.phx2.redhat.com
 [10.5.11.12])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id 4272C461;
 Fri, 29 May 2020 07:40:10 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-113-50.ams2.redhat.com
 [10.36.113.50])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6EBAA60C05;
 Fri, 29 May 2020 07:39:58 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 44EB19DAE; Fri, 29 May 2020 09:39:57 +0200 (CEST)
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 2/4] microvm: drop max-ram-below-4g support
Date: Fri, 29 May 2020 09:39:55 +0200
Message-Id: <20200529073957.8018-3-kraxel@redhat.com>
In-Reply-To: <20200529073957.8018-1-kraxel@redhat.com>
References: <20200529073957.8018-1-kraxel@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.12
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Sergio Lopez <slp@redhat.com>,
 Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 imammedo@redhat.com, Gerd Hoffmann <kraxel@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, philmd@redhat.com,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Not useful for microvm, and it allows users to shoot themselves
in the foot (by making ram and mmio overlap).
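As an aside, with the fixed 3G split from patch 1/4 and the virtio-mmio
base from patch 4/4, below-4G RAM can no longer reach the mmio window. A
small sketch of the invariant (constants taken from this series; the
helper name is made up):

```c
#include <assert.h>
#include <stdint.h>

#define LOWMEM_SPLIT      UINT64_C(0xc0000000)  /* 3G split, patch 1/4 */
#define VIRTIO_MMIO_BASE  UINT64_C(0xfeb00000)  /* patch 4/4 */

/* True if RAM mapped below 4G would run into the virtio-mmio region. */
static int overlaps_mmio(uint64_t below_4g_mem_size)
{
    return below_4g_mem_size > VIRTIO_MMIO_BASE;
}
```

With max-ram-below-4g gone, below_4g_mem_size is capped at LOWMEM_SPLIT,
which sits safely under VIRTIO_MMIO_BASE.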

Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
Reviewed-by: Igor Mammedov <imammedo@redhat.com>
Acked-by: Paolo Bonzini <pbonzini@redhat.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
---
 hw/i386/microvm.c | 19 -------------------
 1 file changed, 19 deletions(-)

diff --git a/hw/i386/microvm.c b/hw/i386/microvm.c
index 44f940813b07..5e931975a06d 100644
--- a/hw/i386/microvm.c
+++ b/hw/i386/microvm.c
@@ -173,25 +173,6 @@ static void microvm_memory_init(MicrovmMachineState *mms)
     ram_addr_t lowmem = 0xc0000000; /* 3G */
     int i;
 
-    /*
-     * Handle the machine opt max-ram-below-4g.  It is basically doing
-     * min(qemu limit, user limit).
-     */
-    if (!x86ms->max_ram_below_4g) {
-        x86ms->max_ram_below_4g = 4 * GiB;
-    }
-    if (lowmem > x86ms->max_ram_below_4g) {
-        lowmem = x86ms->max_ram_below_4g;
-        if (machine->ram_size - lowmem > lowmem &&
-            lowmem & (1 * GiB - 1)) {
-            warn_report("There is possibly poor performance as the ram size "
-                        " (0x%" PRIx64 ") is more then twice the size of"
-                        " max-ram-below-4g (%"PRIu64") and"
-                        " max-ram-below-4g is not a multiple of 1G.",
-                        (uint64_t)machine->ram_size, x86ms->max_ram_below_4g);
-        }
-    }
-
     if (machine->ram_size > lowmem) {
         x86ms->above_4g_mem_size = machine->ram_size - lowmem;
         x86ms->below_4g_mem_size = lowmem;
-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Fri May 29 07:40:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:40:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZcy-0007Pn-58; Fri, 29 May 2020 07:40:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3I2r=7L=redhat.com=kraxel@srs-us1.protection.inumbo.net>)
 id 1jeZcx-0007PN-8p
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:40:27 +0000
X-Inumbo-ID: a2e19467-a17f-11ea-a877-12813bfff9fa
Received: from us-smtp-1.mimecast.com (unknown [205.139.110.61])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id a2e19467-a17f-11ea-a877-12813bfff9fa;
 Fri, 29 May 2020 07:40:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590738015;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 content-transfer-encoding:content-transfer-encoding;
 bh=pQn0bGDZvr2S3O3jxeuAg3AzJn+Iii0I+O8+ViqeBjo=;
 b=SqRBi73QcIL5NcVeB4u3EvriEeKDSGPFd/PAvU61uYSDUvRqwqcPSXQlDSzSHn5yzsBHkN
 GRcEEBWCWjIjeDYR19ZQm8H3uuoNotltZ9mCNCHBv0yxYxNtpnmeszq3tNnlPDyCGBmMSV
 l7saN9urTortR/fOYqlbU2sggbuGZJE=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-258-j2IMzAvKPCG-RW4t-W3vUw-1; Fri, 29 May 2020 03:40:14 -0400
X-MC-Unique: j2IMzAvKPCG-RW4t-W3vUw-1
Received: from smtp.corp.redhat.com (int-mx03.intmail.prod.int.phx2.redhat.com
 [10.5.11.13])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id DF73D800D24;
 Fri, 29 May 2020 07:40:12 +0000 (UTC)
Received: from sirius.home.kraxel.org (ovpn-113-50.ams2.redhat.com
 [10.36.113.50])
 by smtp.corp.redhat.com (Postfix) with ESMTP id 6EB1FA1035;
 Fri, 29 May 2020 07:39:58 +0000 (UTC)
Received: by sirius.home.kraxel.org (Postfix, from userid 1000)
 id 3314616E2C; Fri, 29 May 2020 09:39:57 +0200 (CEST)
From: Gerd Hoffmann <kraxel@redhat.com>
To: qemu-devel@nongnu.org
Subject: [PATCH v3 0/4] microvm: memory config tweaks
Date: Fri, 29 May 2020 09:39:53 +0200
Message-Id: <20200529073957.8018-1-kraxel@redhat.com>
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.79 on 10.5.11.13
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Sergio Lopez <slp@redhat.com>,
 Paul Durrant <paul@xen.org>, "Michael S. Tsirkin" <mst@redhat.com>,
 imammedo@redhat.com, Gerd Hoffmann <kraxel@redhat.com>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>, xen-devel@lists.xenproject.org,
 Anthony Perard <anthony.perard@citrix.com>,
 Paolo Bonzini <pbonzini@redhat.com>, philmd@redhat.com,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

With more microvm memory config tweaks coming, split these into their own series;
the microvm acpi patch series is already big enough ...

v2:
 - use 3G split.
 - add patch to move virtio-mmio region.
 - pick up acks & reviews.
v3:
 - fix xen build.
 - pick up acks & reviews.

take care,
  Gerd

Gerd Hoffmann (4):
  microvm: use 3G split unconditionally
  microvm: drop max-ram-below-4g support
  x86: move max-ram-below-4g to pc
  microvm: move virtio base to 0xfeb00000
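For context, the lowmem clamping these patches move from X86MachineState to PCMachineState (the q35 code comments it as "min(qemu limit, user limit)") can be sketched as below. This is an illustrative sketch only, not code from the series; the helper name `lowmem_limit` is made up for the example:

```c
/* Illustrative sketch of the max-ram-below-4g policy: the effective
 * lowmem ceiling is min(board limit, user limit), where a user value
 * of 0 means "use the board default" (4 GiB on q35, 3.5 GiB on piix). */
#include <assert.h>
#include <stdint.h>

#define GiB (1024ULL * 1024 * 1024)

uint64_t lowmem_limit(uint64_t board_default, uint64_t user_max)
{
    if (user_max == 0) {          /* property unset: keep board default */
        user_max = board_default;
    }
    /* min(board limit, user limit) */
    return user_max < board_default ? user_max : board_default;
}
```

A value of 0 keeping the board default matches the `pcms->max_ram_below_4g = 0; /* use default */` initialisation in patch 3.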

 include/hw/i386/microvm.h |  2 +-
 include/hw/i386/pc.h      |  2 ++
 include/hw/i386/x86.h     |  4 ----
 hw/i386/microvm.c         | 35 +----------------------------
 hw/i386/pc.c              | 46 +++++++++++++++++++++++++++++++++++++++
 hw/i386/pc_piix.c         | 10 ++++-----
 hw/i386/pc_q35.c          | 10 ++++-----
 hw/i386/x86.c             | 46 ---------------------------------------
 hw/i386/xen/xen-hvm.c     |  2 +-
 9 files changed, 61 insertions(+), 96 deletions(-)

-- 
2.18.4



From xen-devel-bounces@lists.xenproject.org Fri May 29 07:44:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZge-0007qB-M7; Fri, 29 May 2020 07:44:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vz0h=7L=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jeZgd-0007q6-AM
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:44:15 +0000
X-Inumbo-ID: 30e74ab2-a180-11ea-9dbe-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30e74ab2-a180-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 07:44:14 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id j10so2269770wrw.8
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 00:44:14 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=roczZs7L61Vci60GG4PyZ2tp//AeYjJpdFj+UdVmozo=;
 b=kVT2UBVm/1EZryY/rKocSOM/k2UrcZ5kNCim3jJYL6LcJUT2aHEvzNhyDM4WJckU74
 gos3ymR4REE8hp9qGtCdT0qjXaE/djyD0jOEQU8wP/25U7xVgi2IoPgk4BfxOg9A2F2v
 93Hd1xmkdPf5B/4tnudxhTa02y18sfqeD1bPqC6QrgNrnpdYh4pne3Bq8i0HZO+C9xQV
 tIA/cij7jtBeLg4H0smg38SvYD9FHLWGn8Y/KpjI0jX653T+0UAwbn9v+hkl5vCeWRun
 bnCCEUfig3GKvMptggD9m9FfjOjmF6j6r0tUw4Tw5YpYFZ8a0MQ2vAdxyNFfg7zEELBJ
 qZfg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=roczZs7L61Vci60GG4PyZ2tp//AeYjJpdFj+UdVmozo=;
 b=QP46kkSS6zT3VQR73i0lmH6aNY8KFY0Xsrk0cY/X3JMPl2CsOFgIY6PxUg3yJ/uC7G
 +cSKCop2sxXDc897886evZT5pxphYV6h7MaabBFDOzRg0l3TxE78kfp+qiDVUDIayb9D
 U+cEBhnAz+QWTIxCFuwdGkMEphMtAZEzeqN7g5ZpyVw2reoKbaC3CpRnus0A9hCwTVZ/
 FEFkq1gDnpzzj6W8FzaGeyKaCh+zqzRO8Y3tGDIH0VyDIHRjGneBLmss3WEzZ05fgMV9
 7HdA6lbWzC1Vc56D2RSegBCeDbZkjCI5f8U8l2cRKhbjDQnGeqgj3T7PN9meSHSMd+ER
 REaA==
X-Gm-Message-State: AOAM530JYXQT82xOTHxvd+Ss1Snf3irFD0IW0JaIjT1YwS8WpM5ufRmU
 s3kTQXbu0oDiPelUwolhG5M=
X-Google-Smtp-Source: ABdhPJw3P3lScRsjcgafefCTJ413eRfiQVmkSNAZZX8pYMS3ltLolA7D7n+kaGxiJ0ddXBqW/sptlw==
X-Received: by 2002:adf:dc8e:: with SMTP id r14mr6978205wrj.333.1590738253250; 
 Fri, 29 May 2020 00:44:13 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.185])
 by smtp.gmail.com with ESMTPSA id g69sm11901534wmg.15.2020.05.29.00.44.11
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 29 May 2020 00:44:12 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Gerd Hoffmann'" <kraxel@redhat.com>,
	<qemu-devel@nongnu.org>
References: <20200529073957.8018-1-kraxel@redhat.com>
 <20200529073957.8018-4-kraxel@redhat.com>
In-Reply-To: <20200529073957.8018-4-kraxel@redhat.com>
Subject: RE: [PATCH v3 3/4] x86: move max-ram-below-4g to pc
Date: Fri, 29 May 2020 08:44:10 +0100
Message-ID: <002c01d6358c$f20d74b0$d6285e10$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="utf-8"
Content-Transfer-Encoding: quoted-printable
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQDGIkPqD5sx0/r6TkPYYJAoRbayPwH/kqgVqs7YoJA=
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Eduardo Habkost' <ehabkost@redhat.com>, 'Sergio Lopez' <slp@redhat.com>,
 "'Michael S. Tsirkin'" <mst@redhat.com>, imammedo@redhat.com,
 'Marcel Apfelbaum' <marcel.apfelbaum@gmail.com>,
 xen-devel@lists.xenproject.org, 'Anthony Perard' <anthony.perard@citrix.com>,
 'Paolo Bonzini' <pbonzini@redhat.com>, philmd@redhat.com,
 'Richard Henderson' <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Gerd Hoffmann <kraxel@redhat.com>
> Sent: 29 May 2020 08:40
> To: qemu-devel@nongnu.org
> Cc: Paolo Bonzini <pbonzini@redhat.com>; Eduardo Habkost <ehabkost@redhat.com>; Sergio Lopez
> <slp@redhat.com>; philmd@redhat.com; Marcel Apfelbaum <marcel.apfelbaum@gmail.com>; Stefano Stabellini
> <sstabellini@kernel.org>; Paul Durrant <paul@xen.org>; xen-devel@lists.xenproject.org; Michael S.
> Tsirkin <mst@redhat.com>; Anthony Perard <anthony.perard@citrix.com>; Richard Henderson
> <rth@twiddle.net>; imammedo@redhat.com; Gerd Hoffmann <kraxel@redhat.com>
> Subject: [PATCH v3 3/4] x86: move max-ram-below-4g to pc
> 
> Move from X86MachineClass to PCMachineClass so it disappears
> from microvm machine type property list.
> 
> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> Reviewed-by: Philippe Mathieu-Daudé <philmd@redhat.com>
> Acked-by: Paolo Bonzini <pbonzini@redhat.com>

Xen change...

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>  include/hw/i386/pc.h  |  2 ++
>  include/hw/i386/x86.h |  4 ----
>  hw/i386/pc.c          | 46 +++++++++++++++++++++++++++++++++++++++
>  hw/i386/pc_piix.c     | 10 +++++-----
>  hw/i386/pc_q35.c      | 10 +++++-----
>  hw/i386/x86.c         | 46 -------------------------------------------
>  hw/i386/xen/xen-hvm.c |  2 +-
>  7 files changed, 59 insertions(+), 61 deletions(-)
> 
> diff --git a/include/hw/i386/pc.h b/include/hw/i386/pc.h
> index 5e3b19ab78fc..dce1273c7dad 100644
> --- a/include/hw/i386/pc.h
> +++ b/include/hw/i386/pc.h
> @@ -35,6 +35,7 @@ struct PCMachineState {
>      PFlashCFI01 *flash[2];
> 
>      /* Configuration options: */
> +    uint64_t max_ram_below_4g;
>      OnOffAuto vmport;
> 
>      bool acpi_build_enabled;
> @@ -51,6 +52,7 @@ struct PCMachineState {
>  };
> 
>  #define PC_MACHINE_ACPI_DEVICE_PROP "acpi-device"
> +#define PC_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
>  #define PC_MACHINE_DEVMEM_REGION_SIZE "device-memory-region-size"
>  #define PC_MACHINE_VMPORT           "vmport"
>  #define PC_MACHINE_SMBUS            "smbus"
> diff --git a/include/hw/i386/x86.h b/include/hw/i386/x86.h
> index b52285481687..b79f24e28545 100644
> --- a/include/hw/i386/x86.h
> +++ b/include/hw/i386/x86.h
> @@ -51,9 +51,6 @@ typedef struct {
>      qemu_irq *gsi;
>      GMappedFile *initrd_mapped_file;
> 
> -    /* Configuration options: */
> -    uint64_t max_ram_below_4g;
> -
>      /* RAM information (sizes, addresses, configuration): */
>      ram_addr_t below_4g_mem_size, above_4g_mem_size;
> 
> @@ -82,7 +79,6 @@ typedef struct {
>      AddressSpace *ioapic_as;
>  } X86MachineState;
> 
> -#define X86_MACHINE_MAX_RAM_BELOW_4G "max-ram-below-4g"
>  #define X86_MACHINE_SMM              "smm"
>  #define X86_MACHINE_ACPI             "acpi"
> 
> diff --git a/hw/i386/pc.c b/hw/i386/pc.c
> index c5db7be6d8b1..6d6f6decb32c 100644
> --- a/hw/i386/pc.c
> +++ b/hw/i386/pc.c
> @@ -1831,6 +1831,45 @@ static void pc_machine_set_pit(Object *obj, bool value, Error **errp)
>      pcms->pit_enabled = value;
>  }
> 
> +static void pc_machine_get_max_ram_below_4g(Object *obj, Visitor *v,
> +                                            const char *name, void *opaque,
> +                                            Error **errp)
> +{
> +    PCMachineState *pcms = PC_MACHINE(obj);
> +    uint64_t value = pcms->max_ram_below_4g;
> +
> +    visit_type_size(v, name, &value, errp);
> +}
> +
> +static void pc_machine_set_max_ram_below_4g(Object *obj, Visitor *v,
> +                                            const char *name, void *opaque,
> +                                            Error **errp)
> +{
> +    PCMachineState *pcms = PC_MACHINE(obj);
> +    Error *error = NULL;
> +    uint64_t value;
> +
> +    visit_type_size(v, name, &value, &error);
> +    if (error) {
> +        error_propagate(errp, error);
> +        return;
> +    }
> +    if (value > 4 * GiB) {
> +        error_setg(&error,
> +                   "Machine option 'max-ram-below-4g=%"PRIu64
> +                   "' expects size less than or equal to 4G", value);
> +        error_propagate(errp, error);
> +        return;
> +    }
> +
> +    if (value < 1 * MiB) {
> +        warn_report("Only %" PRIu64 " bytes of RAM below the 4GiB boundary,"
> +                    "BIOS may not work with less than 1MiB", value);
> +    }
> +
> +    pcms->max_ram_below_4g = value;
> +}
> +
>  static void pc_machine_initfn(Object *obj)
>  {
>      PCMachineState *pcms = PC_MACHINE(obj);
> @@ -1840,6 +1879,7 @@ static void pc_machine_initfn(Object *obj)
>  #else
>      pcms->vmport = ON_OFF_AUTO_OFF;
>  #endif /* CONFIG_VMPORT */
> +    pcms->max_ram_below_4g = 0; /* use default */
>      /* acpi build is enabled by default if machine supports it */
>      pcms->acpi_build_enabled = PC_MACHINE_GET_CLASS(pcms)->has_acpi_build;
>      pcms->smbus_enabled = true;
> @@ -1938,6 +1978,12 @@ static void pc_machine_class_init(ObjectClass *oc, void *data)
>      mc->numa_mem_supported = true;
>      mc->default_ram_id = "pc.ram";
> 
> +    object_class_property_add(oc, PC_MACHINE_MAX_RAM_BELOW_4G, "size",
> +        pc_machine_get_max_ram_below_4g, pc_machine_set_max_ram_below_4g,
> +        NULL, NULL);
> +    object_class_property_set_description(oc, PC_MACHINE_MAX_RAM_BELOW_4G,
> +        "Maximum ram below the 4G boundary (32bit boundary)");
> +
>      object_class_property_add(oc, PC_MACHINE_DEVMEM_REGION_SIZE, "int",
>          pc_machine_get_device_memory_region_size, NULL,
>          NULL, NULL);
> diff --git a/hw/i386/pc_piix.c b/hw/i386/pc_piix.c
> index f66e1d73ce0b..503c35f7bf4c 100644
> --- a/hw/i386/pc_piix.c
> +++ b/hw/i386/pc_piix.c
> @@ -127,11 +127,11 @@ static void pc_init1(MachineState *machine,
>      if (xen_enabled()) {
>          xen_hvm_init(pcms, &ram_memory);
>      } else {
> -        if (!x86ms->max_ram_below_4g) {
> -            x86ms->max_ram_below_4g = 0xe0000000; /* default: 3.5G */
> +        if (!pcms->max_ram_below_4g) {
> +            pcms->max_ram_below_4g = 0xe0000000; /* default: 3.5G */
>          }
> -        lowmem = x86ms->max_ram_below_4g;
> -        if (machine->ram_size >= x86ms->max_ram_below_4g) {
> +        lowmem = pcms->max_ram_below_4g;
> +        if (machine->ram_size >= pcms->max_ram_below_4g) {
>              if (pcmc->gigabyte_align) {
>                  if (lowmem > 0xc0000000) {
>                      lowmem = 0xc0000000;
> @@ -140,7 +140,7 @@ static void pc_init1(MachineState *machine,
>                      warn_report("Large machine and max_ram_below_4g "
>                                  "(%" PRIu64 ") not a multiple of 1G; "
>                                  "possible bad performance.",
> -                                x86ms->max_ram_below_4g);
> +                                pcms->max_ram_below_4g);
>                  }
>              }
>          }
> diff --git a/hw/i386/pc_q35.c b/hw/i386/pc_q35.c
> index 4ba8ac8774e4..90e8fb2cb737 100644
> --- a/hw/i386/pc_q35.c
> +++ b/hw/i386/pc_q35.c
> @@ -154,18 +154,18 @@ static void pc_q35_init(MachineState *machine)
>      /* Handle the machine opt max-ram-below-4g.  It is basically doing
>       * min(qemu limit, user limit).
>       */
> -    if (!x86ms->max_ram_below_4g) {
> -        x86ms->max_ram_below_4g = 4 * GiB;
> +    if (!pcms->max_ram_below_4g) {
> +        pcms->max_ram_below_4g = 4 * GiB;
>      }
> -    if (lowmem > x86ms->max_ram_below_4g) {
> -        lowmem = x86ms->max_ram_below_4g;
> +    if (lowmem > pcms->max_ram_below_4g) {
> +        lowmem = pcms->max_ram_below_4g;
>          if (machine->ram_size - lowmem > lowmem &&
>              lowmem & (1 * GiB - 1)) {
>              warn_report("There is possibly poor performance as the ram size "
>                          " (0x%" PRIx64 ") is more then twice the size of"
>                          " max-ram-below-4g (%"PRIu64") and"
>                          " max-ram-below-4g is not a multiple of 1G.",
> -                        (uint64_t)machine->ram_size, x86ms->max_ram_below_4g);
> +                        (uint64_t)machine->ram_size, pcms->max_ram_below_4g);
>          }
>      }
> 
> diff --git a/hw/i386/x86.c b/hw/i386/x86.c
> index 7a3bc7ab6639..49884e5c1597 100644
> --- a/hw/i386/x86.c
> +++ b/hw/i386/x86.c
> @@ -846,45 +846,6 @@ void x86_bios_rom_init(MemoryRegion *rom_memory, bool isapc_ram_fw)
>                                  bios);
>  }
> 
> -static void x86_machine_get_max_ram_below_4g(Object *obj, Visitor *v,
> -                                             const char *name, void *opaque,
> -                                             Error **errp)
> -{
> -    X86MachineState *x86ms = X86_MACHINE(obj);
> -    uint64_t value = x86ms->max_ram_below_4g;
> -
> -    visit_type_size(v, name, &value, errp);
> -}
> -
> -static void x86_machine_set_max_ram_below_4g(Object *obj, Visitor *v,
> -                                             const char *name, void *opaque,
> -                                             Error **errp)
> -{
> -    X86MachineState *x86ms = X86_MACHINE(obj);
> -    Error *error = NULL;
> -    uint64_t value;
> -
> -    visit_type_size(v, name, &value, &error);
> -    if (error) {
> -        error_propagate(errp, error);
> -        return;
> -    }
> -    if (value > 4 * GiB) {
> -        error_setg(&error,
> -                   "Machine option 'max-ram-below-4g=%"PRIu64
> -                   "' expects size less than or equal to 4G", value);
> -        error_propagate(errp, error);
> -        return;
> -    }
> -
> -    if (value < 1 * MiB) {
> -        warn_report("Only %" PRIu64 " bytes of RAM below the 4GiB boundary,"
> -                    "BIOS may not work with less than 1MiB", value);
> -    }
> -
> -    x86ms->max_ram_below_4g = value;
> -}
> -
>  bool x86_machine_is_smm_enabled(X86MachineState *x86ms)
>  {
>      bool smm_available = false;
> @@ -958,7 +919,6 @@ static void x86_machine_initfn(Object *obj)
> 
>      x86ms->smm = ON_OFF_AUTO_AUTO;
>      x86ms->acpi = ON_OFF_AUTO_AUTO;
> -    x86ms->max_ram_below_4g = 0; /* use default */
>      x86ms->smp_dies = 1;
> 
>      x86ms->apicid_from_cpu_idx = x86_apicid_from_cpu_idx;
> @@ -980,12 +940,6 @@ static void x86_machine_class_init(ObjectClass *oc, void *data)
>      x86mc->save_tsc_khz = true;
>      nc->nmi_monitor_handler = x86_nmi;
> 
> -    object_class_property_add(oc, X86_MACHINE_MAX_RAM_BELOW_4G, "size",
> -        x86_machine_get_max_ram_below_4g, x86_machine_set_max_ram_below_4g,
> -        NULL, NULL);
> -    object_class_property_set_description(oc, X86_MACHINE_MAX_RAM_BELOW_4G,
> -        "Maximum ram below the 4G boundary (32bit boundary)");
> -
>      object_class_property_add(oc, X86_MACHINE_SMM, "OnOffAuto",
>          x86_machine_get_smm, x86_machine_set_smm,
>          NULL, NULL);
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index 82ece6b9e739..d6f4674418e9 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -203,7 +203,7 @@ static void xen_ram_init(PCMachineState *pcms,
>      ram_addr_t block_len;
>      uint64_t user_lowmem =
>          object_property_get_uint(qdev_get_machine(),
> -                                 X86_MACHINE_MAX_RAM_BELOW_4G,
> +                                 PC_MACHINE_MAX_RAM_BELOW_4G,
>                                   &error_abort);
> 
>      /* Handle the machine opt max-ram-below-4g.  It is basically doing
> --
> 2.18.4




From xen-devel-bounces@lists.xenproject.org Fri May 29 07:47:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 07:47:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZjW-0007z7-5N; Fri, 29 May 2020 07:47:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeZjV-0007z2-LG
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 07:47:13 +0000
X-Inumbo-ID: 9b3832e6-a180-11ea-a87c-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9b3832e6-a180-11ea-a87c-12813bfff9fa;
 Fri, 29 May 2020 07:47:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id E9623B03D;
 Fri, 29 May 2020 07:47:10 +0000 (UTC)
Subject: Re: [PATCH 2/2] SUPPORT.md: add hypervisor file system
To: Juergen Gross <jgross@suse.com>
References: <20200526095038.27378-1-jgross@suse.com>
 <20200526095038.27378-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eb18525f-56ee-9791-0a72-9c1e1e36a9c9@suse.com>
Date: Fri, 29 May 2020 09:47:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200526095038.27378-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 26.05.2020 11:50, Juergen Gross wrote:
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:03:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeZyc-0001kn-TA; Fri, 29 May 2020 08:02:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vz0h=7L=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jeZyb-0001ki-6Q
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:02:49 +0000
X-Inumbo-ID: c90e8880-a182-11ea-9947-bc764e2007e4
Received: from mail-wm1-x333.google.com (unknown [2a00:1450:4864:20::333])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c90e8880-a182-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 08:02:48 +0000 (UTC)
Received: by mail-wm1-x333.google.com with SMTP id k26so2247727wmi.4
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 01:02:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:subject:date:message-id:mime-version
 :content-transfer-encoding:content-language:thread-index;
 bh=2TvzXF49B/JE0BOcTqfEISgDEX57eQJaXw7vhxyiURo=;
 b=GKVGtidhj4ri1AbyeeEBX5ZIMDCIkmVuWRM0SCrj3ErHU8y04jEAepFeQTeXhxrjf/
 SmRQN0j4Zq0qGql5jmzPRW6/rh1Ar7AJ/LXHeBG6on3+OLdb35ztlWicNMO0dRhhLOR3
 /o/v+MsK1tdr1/iDclrRR6QTeVt+KKT1SfZc49NSTcDeq2D6GBs9+Q7EGqFR8At2u4ib
 MQlJRR23S3XQzuKB5sg0JgcPu8GZc7BaTriuWQUqWiSndeaHNahzzVqnDA/zNCDxZZHo
 g4/djfWsV9H4NzQdAW8SqGoaXoSq/xqENYD0kvBzq8qc78iARInhsLqsEJh3ubKHpZPr
 XGhg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index;
 bh=2TvzXF49B/JE0BOcTqfEISgDEX57eQJaXw7vhxyiURo=;
 b=ZqVaMN+pO9PZk8EqeinUvfWV4w6v+JfIRuaSIs/RNTy/F3Rn+2+dvcgSN2+DBNHt+D
 Xw0WjlViyLVCZ0OWztUkdMPynE5whBO+kgOZmaJ4FNjflwg4HTUW9/vHEHbgwcPxPPeo
 ETiLQy7FtVanscZBBDf3wd/QWi9xI+vRJWkmQgZiMASNYkpRdNHDj+uvEYdPS+rOTegE
 uKNQzCiWKNHLwSyiIp5ptguoeywy4e2LYQ0Bq8+1oqzKqZn89pDTbSTVle+Bds7wwQof
 O/dNJ0INMhUxxyOCJpd6R5R0A/fDiU4WVEL+TbNsFFfoMm97HnNJzid87Ut218Ctvi7t
 UB0w==
X-Gm-Message-State: AOAM530BG9Y/DMS1hbbIGwKkBWjMmO91RSCnrfrBGf0oliQQxR73rJ7W
 zZQ1FL9pfum+KN53/cADqhF/sN4kJAk=
X-Google-Smtp-Source: ABdhPJyTvFS8qoQiIQs6YaPyqEQKygCv2cl6KYBTPJ7PMsFJMcF7/xPvH3ERuOVQwzSt1aOBWmcULg==
X-Received: by 2002:a05:600c:2317:: with SMTP id
 23mr7454219wmo.139.1590739367369; 
 Fri, 29 May 2020 01:02:47 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id j135sm11216264wmj.43.2020.05.29.01.02.46
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 29 May 2020 01:02:46 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: <xen-devel@lists.xenproject.org>
Subject: *** Cut-off date for Xen 4.14 is today ***
Date: Fri, 29 May 2020 09:02:45 +0100
Message-ID: <002d01d6358f$8a10a370$9e31ea50$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AdY1jx5n1zlinPaDQPCRqd80uWMYcg==
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi all,

This is a reminder that the cut-off date for Xen 4.14 is today (May 29th). If you want your features to be included for the release,
please make sure they are committed a.s.a.p.

  Paul Durrant



From xen-devel-bounces@lists.xenproject.org Fri May 29 08:14:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:14:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jea9e-0002f3-V8; Fri, 29 May 2020 08:14:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jea9d-0002ey-3a
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:14:13 +0000
X-Inumbo-ID: 606a549c-a184-11ea-a87e-12813bfff9fa
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.88]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 606a549c-a184-11ea-a87e-12813bfff9fa;
 Fri, 29 May 2020 08:14:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=59hoZBAK2rB0b8gl3WTDIfJ4A9zFDcZO3/xCm454M2A=;
 b=QtYbl8ip6ZC2NdVHivMzBs+qO6Zr4QGHQSy9poIQ45r54kOJcjCBXxjSct7n7pJQ20Z+2Xcg7yuRigZaEGs/gWpTSUUm8VtrtsQwsOfGCnPXu0DySM/4QASQumrOZpzPweVBtMpQWvKiadnm4LahK7ETQDnm+0+47/OLIXKXa+A=
Received: from AM6P191CA0009.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8b::22)
 by VI1PR08MB4400.eurprd08.prod.outlook.com (2603:10a6:803:f8::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.19; Fri, 29 May
 2020 08:14:08 +0000
Received: from AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8b:cafe::8e) by AM6P191CA0009.outlook.office365.com
 (2603:10a6:209:8b::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.19 via Frontend
 Transport; Fri, 29 May 2020 08:14:08 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT061.mail.protection.outlook.com (10.152.16.247) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 29 May 2020 08:14:08 +0000
Received: ("Tessian outbound 9eabd37e4fee:v57");
 Fri, 29 May 2020 08:14:08 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 07f6684c94e68cd5
X-CR-MTA-TID: 64aa7808
Received: from 1f4d4bfd69a8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 71E3942B-750D-478F-98CD-5047EA84ADB8.1; 
 Fri, 29 May 2020 08:14:02 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1f4d4bfd69a8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 29 May 2020 08:14:02 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m3LS4vsRObt0zMIgde9IaM45mu8NWlAo4K2u2Tu7i5AGXHGcVgIkswobXF14oTorDPVVTrHX2GuKpuYCZXD5h2VjP8NaWpCEhuxFxTFdMPol5DP4O7SnVtPPSmPwrOS9B2P9OSzVLdwJkm/LdCgno5ZD8DxM/J9Mtt+Yq1C4uleMJgeJuW9zTAm7ACHsd/QPwFX6J+72n/SzQ8IJmfyNhtk3Sb8WcC0WJCHr/sx517CLxi/3XxATbHw+8PBaxyul9xF27D6eANnNriK7DyCXTHQ86ylKHl5coHEqm3oKSL289aWImRKeWPOme/qrZrLzSxF/MgdIk5AEEyr4ql+8Ug==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=59hoZBAK2rB0b8gl3WTDIfJ4A9zFDcZO3/xCm454M2A=;
 b=FAxb3Zi2FLgNAeO5nCtKjz7Qvynsgp59ilF5rNIcKLM4GaqczcWvawFfzJ/qO+itOGhZd/Drs4PNtEXRJJbKgJNCpWYyF8mzlOsq76fmPMcr0mJTrhjtiaw8R9xFFPN98nnIoCvNJj0MBE3YiEhu1iiXbP5EwkGtMewnzdeWMVqWih0sT9f/7gTlICLJSBmCPkDa4EdSFrigpbvtu61gXur0VuafXR/a/ZZUJdoXHtCBe7I64VWYoBd8BU4tb6Y6lJqALFCo+nh/adKYmt9nWjpSqYy8Tgi1rVBXQoMzhePqRbnQa+uzIXFTKNOIR7LNQbhaVfLncy2cJbE1aQ36nA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=59hoZBAK2rB0b8gl3WTDIfJ4A9zFDcZO3/xCm454M2A=;
 b=QtYbl8ip6ZC2NdVHivMzBs+qO6Zr4QGHQSy9poIQ45r54kOJcjCBXxjSct7n7pJQ20Z+2Xcg7yuRigZaEGs/gWpTSUUm8VtrtsQwsOfGCnPXu0DySM/4QASQumrOZpzPweVBtMpQWvKiadnm4LahK7ETQDnm+0+47/OLIXKXa+A=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3003.eurprd08.prod.outlook.com (2603:10a6:5:1b::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 29 May
 2020 08:13:59 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3045.018; Fri, 29 May 2020
 08:13:59 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai92N6AgADfWQA=
Date: Fri, 29 May 2020 08:13:58 +0000
Message-ID: <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
In-Reply-To: <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [217.140.99.251]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: afd9fb9a-7e5e-4d31-fb27-08d803a842eb
x-ms-traffictypediagnostic: DB7PR08MB3003:|VI1PR08MB4400:
X-Microsoft-Antispam-PRVS: <VI1PR08MB440064E954E6B9EB031603E89D8F0@VI1PR08MB4400.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04180B6720
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: immARdMgT12JxKrNwOuzJ3NdQuZZH4kXPL4GEghpcjWHcPdzzr1kBoaOlx98Jwc6pzNpXH40DdAx5Nj2sJuSbdk23iuBgNatHcKsm188iXCqKJ+CHWnuVO+/T/WNI8enBB2k95n/ZXsSIToiraDUwFN+m9eEODh6LgO4hqMdDEynX7w7TezLV3oFoyQaE/JoIK8OVOsJhOwpfG75+6UmfFbzTmMuNnGUKNc0BXAQrLMiMLT5J+1FotiKwywTO7hGk3qcal3y4n0PnlKSy5SyYeYu/ITw8KvWwjgIGIJq2ngcpEIr4/HCC8bgGJNSmpUK+LIedE0MNQAk2bpGi652E8+SwZcExqo2d7UFBeTEnf9ueqLSwwPtYZAd70FZ4DtS
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(39860400002)(396003)(366004)(346002)(478600001)(26005)(4326008)(6512007)(54906003)(8676002)(8936002)(71200400001)(2616005)(316002)(186003)(6916009)(6506007)(53546011)(33656002)(86362001)(2906002)(83380400001)(66946007)(6486002)(76116006)(66556008)(91956017)(5660300002)(7416002)(64756008)(66446008)(66476007)(36756003)(21314003);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: lFm75CrLAFE+DESr4SUkN/zEIcsGRqnFbKeHu/1Kk1BpG/2jMS89YfIa26/QqH5t2CElXUuVYgO3EFNbJLJbuizSLshLTOAr+LLWZsMYd0p/U87t8HwV7FmDilaTooCJpm+OchZX0j5CG/Q9FPP8DIEliFci78EY1h80r3IhsRdx7NdCXRDEFCY5Oljmu2aDF5IdBoMY/wqxkt/MlL/+kic3+EzyXbh4VWGhH8A9R7secBryK50RfQDAhCjPYw7jEszkGpoIu2HcPqloDhFSuNA4kqymArrUFZBs3TxtwKxo3c9rw3Q5eSpKmSfnKUpwU7JBo++zNbcI72ZORvpvnkkY6OgUzIgBKzdtPpgPcQlBVn3CaJNX2vRyWfQ6r7cyqkulmKHu0WGEQxFVzu+VvkNmkwvYp+wpne6AusVQrdpaVs4UrHJhk6JU0KLVaC5H3d0RYNwdIGjeidTSbnvGsT0JBnTY+ZytAyyndFpkVxYTcIKOgumnop5Zlr58Xd+Q
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <87638ED74B9B0C4F87ABDF00161B35BC@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3003
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(346002)(396003)(136003)(39860400002)(376002)(46966005)(6512007)(47076004)(83380400001)(26005)(186003)(86362001)(4326008)(336012)(81166007)(53546011)(6862004)(6506007)(82310400002)(33656002)(82740400003)(8936002)(356005)(36906005)(36756003)(8676002)(2616005)(316002)(54906003)(5660300002)(6486002)(70586007)(2906002)(70206006)(478600001)(21314003);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: 09c794e1-ce42-4155-fa6c-08d803a83d6d
X-Forefront-PRVS: 04180B6720
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: J6B9No99P3TBIfNPZa8+7v2eR/XeOjXBWnr6rUNeWnedYEx+5SGMYZPltgIGvl5JvtcgTICnd/aRUCx4cqZvepgMpNH8nTq5bRJ1IQxg1vcZBXgObw4lCokvG/V/94kfv5+gSSM5I26Gm1KSqD1uwxH8iG09rmoHLJbdcMS+N2wf5cJNIj9lpfESJlwkj+6byDD+7gIR9hjM59jBovklw0Q0I4CPUz9nZ1u2wwz0pS0vV3jyeruiTkyI9T8VeTfvVmaT+BggqIL5cGzi3MgP9YDOGbxFsW6CRTb0YYRvtL4Yl7J2LsVwHpvNgV09imZVOwd7zrqBV3pTICsQJlB5PbV/T3sEsIXCsTHGKDxd42mI44Pt0m6q4Tvx4wRCs9HYZ0wh51hgPuSPXWeJpByms025qz1XzpSa252YOTdu7wI=
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 May 2020 08:14:08.4286 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: afd9fb9a-7e5e-4d31-fb27-08d803a842eb
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4400
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

SGkgSnVsaWVuLA0KDQo+IE9uIDI4IE1heSAyMDIwLCBhdCAxOTo1NCwgSnVsaWVuIEdyYWxsIDxq
dWxpZW5AeGVuLm9yZz4gd3JvdGU6DQo+IA0KPiBIaSBCZXJ0cmFuZCwNCj4gDQo+IFRoYW5rIHlv
dSBmb3IgdGhlIHBhdGNoLg0KPiANCj4gT24gMjgvMDUvMjAyMCAxNjoyNSwgQmVydHJhbmQgTWFy
cXVpcyB3cm90ZToNCj4+IEF0IHRoZSBtb21lbnQgb24gQXJtLCBhIExpbnV4IGd1ZXN0IHJ1bm5p
bmcgd2l0aCBLVFBJIGVuYWJsZWQgd2lsbA0KPj4gY2F1c2UgdGhlIGZvbGxvd2luZyBlcnJvciB3
aGVuIGEgY29udGV4dCBzd2l0Y2ggaGFwcGVucyBpbiB1c2VyIG1vZGU6DQo+PiAoWEVOKSBwMm0u
YzoxODkwOiBkMXYwOiBGYWlsZWQgdG8gd2FsayBwYWdlLXRhYmxlIHZhIDB4ZmZmZmZmODM3ZWJl
MGNkMA0KPj4gVGhpcyBwYXRjaCBpcyBtb2RpZnlpbmcgcnVuc3RhdGUgaGFuZGxpbmcgdG8gbWFw
IHRoZSBhcmVhIGdpdmVuIGJ5IHRoZQ0KPj4gZ3Vlc3QgaW5zaWRlIFhlbiBkdXJpbmcgdGhlIGh5
cGVyY2FsbC4NCj4+IFRoaXMgaXMgcmVtb3ZpbmcgdGhlIGd1ZXN0IHZpcnR1YWwgdG8gcGh5c2lj
YWwgY29udmVyc2lvbiBkdXJpbmcgY29udGV4dA0KPj4gc3dpdGNoZXMgd2hpY2ggcmVtb3ZlcyB0
aGUgYnVnDQo+IA0KPiBJdCB3b3VsZCBiZSBnb29kIHRvIHNwZWxsIG91dCB0aGF0IGEgdmlydHVh
bCBhZGRyZXNzIGlzIG5vdCBzdGFibGUuIFNvIHJlbHlpbmcgb24gaXQgaXMgd3JvbmcuDQo+IA0K
Pj4gYW5kIGltcHJvdmUgcGVyZm9ybWFuY2UgYnkgcHJldmVudGluZyB0bw0KPj4gd2FsayBwYWdl
IHRhYmxlcyBkdXJpbmcgY29udGV4dCBzd2l0Y2hlcy4NCj4gDQo+IFdpdGggU2VjcmV0IGZyZWUg
aHlwZXJ2aXNvciBpbiBtaW5kLCBJIHdvdWxkIGxpa2UgdG8gc3VnZ2VzdCB0byBtYXAvdW5tYXAg
dGhlIHJ1bnN0YXRlIGR1cmluZyBjb250ZXh0IHN3aXRjaC4NCj4gDQo+IFRoZSBjb3N0IHNob3Vs
ZCBiZSBtaW5pbWFsIHdoZW4gdGhlcmUgaXMgYSBkaXJlY3QgbWFwIChpLmUgb24gQXJtNjQgYW5k
IHg4NikgYW5kIHN0aWxsIHByb3ZpZGUgYmV0dGVyIHBlcmZvcm1hbmNlIG9uIEFybTMyLg0KDQpF
dmVuIHdpdGggYSBtaW5pbWFsIGNvc3QgdGhpcyBpcyBzdGlsbCBhZGRpbmcgc29tZSBub24gcmVh
bC10aW1lIGJlaGF2aW91ciB0byB0aGUgY29udGV4dCBzd2l0Y2guDQpCdXQgZGVmaW5pdGVseSBm
cm9tIHRoZSBzZWN1cml0eSBwb2ludCBvZiB2aWV3IGFzIHdlIGhhdmUgdG8gbWFwIGEgcGFnZSBm
cm9tIHRoZSBndWVzdCwgd2UgY291bGQgaGF2ZSBhY2Nlc3NpYmxlIGluIFhlbiBzb21lIGRhdGEg
dGhhdCBzaG91bGQgbm90IGJlIHRoZXJlLg0KVGhlcmUgaXMgYSB0cmFkZSBoZXJlIHdoZXJlOg0K
LSB4ZW4gY2FuIHByb3RlY3QgYnkgbWFwL3VubWFwcGluZw0KLSBhIGd1ZXN0IHdoaWNoIHdhbnRz
IHRvIHNlY3VyZSBoaXMgZGF0YSBzaG91bGQgZWl0aGVyIG5vdCB1c2UgaXQgb3IgbWFrZSBzdXJl
IHRoZXJlIGlzIG5vdGhpbmcgZWxzZSBpbiB0aGUgcGFnZQ0KDQpUaGF0IHNvdW5kcyBsaWtlIGEg
dGhyZWFkIGxvY2FsIHN0b3JhZ2Uga2luZCBvZiBwcm9ibGVtYXRpYyB3aGVyZSB3ZSB3YW50IGRh
dGEgZnJvbSB4ZW4gdG8gYmUgYWNjZXNzaWJsZSBmYXN0IGZyb20gdGhlIGd1ZXN0IGFuZCBlYXN5
IHRvIGJlIG1vZGlmaWVkIGZyb20geGVuLg0KDQo+IA0KPiBUaGUgY2hhbmdlIHNob3VsZCBiZSBt
aW5pbWFsIGNvbXBhcmUgdG8gdGhlIGN1cnJlbnQgYXBwcm9hY2ggYnV0IHRoaXMgY291bGQgYmUg
dGFrZW4gY2FyZSBzZXBhcmF0ZWx5IGlmIHlvdSBkb24ndCBoYXZlIHRpbWUuDQoNCkkgY291bGQg
YWRkIHRoYXQgdG8gdGhlIHNlcmllIGluIGEgc2VwYXJhdGUgcGF0Y2ggc28gdGhhdCBpdCBjYW4g
YmUgZGlzY3Vzc2VkIGFuZCB0ZXN0IHNlcGFyYXRlbHkgPw0KDQo+IA0KPj4gLS0NCj4+IEluIHRo
ZSBjdXJyZW50IHN0YXR1cywgdGhpcyBwYXRjaCBpcyBvbmx5IHdvcmtpbmcgb24gQXJtIGFuZCBu
ZWVkcyB0bw0KPj4gYmUgZml4ZWQgb24gWDg2IChzZWUgI2Vycm9yIG9uIGRvbWFpbi5jIGZvciBt
aXNzaW5nIGdldF9wYWdlX2Zyb21fZ3ZhKS4NCj4+IFNpZ25lZC1vZmYtYnk6IEJlcnRyYW5kIE1h
cnF1aXMgPGJlcnRyYW5kLm1hcnF1aXNAYXJtLmNvbT4NCj4+IC0tLQ0KPj4gIHhlbi9hcmNoL2Fy
bS9kb21haW4uYyAgIHwgMzIgKysrKysrKysrLS0tLS0tLQ0KPj4gIHhlbi9hcmNoL3g4Ni9kb21h
aW4uYyAgIHwgNTEgKysrKysrKysrKysrKystLS0tLS0tLS0tLQ0KPj4gIHhlbi9jb21tb24vZG9t
YWluLmMgICAgIHwgODQgKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKy0tLS0tLS0N
Cj4+ICB4ZW4vaW5jbHVkZS94ZW4vc2NoZWQuaCB8IDExICsrKystLQ0KPj4gIDQgZmlsZXMgY2hh
bmdlZCwgMTI0IGluc2VydGlvbnMoKyksIDU0IGRlbGV0aW9ucygtKQ0KPj4gZGlmZiAtLWdpdCBh
L3hlbi9hcmNoL2FybS9kb21haW4uYyBiL3hlbi9hcmNoL2FybS9kb21haW4uYw0KPj4gaW5kZXgg
MzExNjkzMjZiMi4uNzk5YjBlMDEwMyAxMDA2NDQNCj4+IC0tLSBhL3hlbi9hcmNoL2FybS9kb21h
aW4uYw0KPj4gKysrIGIveGVuL2FyY2gvYXJtL2RvbWFpbi5jDQo+PiBAQCAtMjc4LDMzICsyNzgs
MzcgQEAgc3RhdGljIHZvaWQgY3R4dF9zd2l0Y2hfdG8oc3RydWN0IHZjcHUgKm4pDQo+PiAgLyog
VXBkYXRlIHBlci1WQ1BVIGd1ZXN0IHJ1bnN0YXRlIHNoYXJlZCBtZW1vcnkgYXJlYSAoaWYgcmVn
aXN0ZXJlZCkuICovDQo+PiAgc3RhdGljIHZvaWQgdXBkYXRlX3J1bnN0YXRlX2FyZWEoc3RydWN0
IHZjcHUgKnYpDQo+PiAgew0KPj4gLSAgICB2b2lkIF9fdXNlciAqZ3Vlc3RfaGFuZGxlID0gTlVM
TDsNCj4+IC0gICAgc3RydWN0IHZjcHVfcnVuc3RhdGVfaW5mbyBydW5zdGF0ZTsNCj4+ICsgICAg
c3RydWN0IHZjcHVfcnVuc3RhdGVfaW5mbyAqcnVuc3RhdGU7DQo+PiAgLSAgICBpZiAoIGd1ZXN0
X2hhbmRsZV9pc19udWxsKHJ1bnN0YXRlX2d1ZXN0KHYpKSApDQo+PiArICAgIC8qIFhYWCB3aHkg
ZG8gd2UgYWNjZXB0IG5vdCB0byBibG9jayBoZXJlICovDQo+PiArICAgIGlmICggIXNwaW5fdHJ5
bG9jaygmdi0+cnVuc3RhdGVfZ3Vlc3RfbG9jaykgKQ0KPj4gICAgICAgICAgcmV0dXJuOw0KPj4g
IC0gICAgbWVtY3B5KCZydW5zdGF0ZSwgJnYtPnJ1bnN0YXRlLCBzaXplb2YocnVuc3RhdGUpKTsN
Cj4+ICsgICAgcnVuc3RhdGUgPSBydW5zdGF0ZV9ndWVzdCh2KTsNCj4+ICsNCj4+ICsgICAgaWYg
KHJ1bnN0YXRlID09IE5VTEwpDQo+PiArICAgIHsNCj4+ICsgICAgICAgIHNwaW5fdW5sb2NrKCZ2
LT5ydW5zdGF0ZV9ndWVzdF9sb2NrKTsNCj4+ICsgICAgICAgIHJldHVybjsNCj4+ICsgICAgfQ0K
Pj4gICAgICAgIGlmICggVk1fQVNTSVNUKHYtPmRvbWFpbiwgcnVuc3RhdGVfdXBkYXRlX2ZsYWcp
ICkNCj4+ICAgICAgew0KPj4gLSAgICAgICAgZ3Vlc3RfaGFuZGxlID0gJnYtPnJ1bnN0YXRlX2d1
ZXN0LnAtPnN0YXRlX2VudHJ5X3RpbWUgKyAxOw0KPj4gLSAgICAgICAgZ3Vlc3RfaGFuZGxlLS07
DQo+PiAtICAgICAgICBydW5zdGF0ZS5zdGF0ZV9lbnRyeV90aW1lIHw9IFhFTl9SVU5TVEFURV9V
UERBVEU7DQo+PiAtICAgICAgICBfX3Jhd19jb3B5X3RvX2d1ZXN0KGd1ZXN0X2hhbmRsZSwNCj4+
IC0gICAgICAgICAgICAgICAgICAgICAgICAgICAgKHZvaWQgKikoJnJ1bnN0YXRlLnN0YXRlX2Vu
dHJ5X3RpbWUgKyAxKSAtIDEsIDEpOw0KPj4gKyAgICAgICAgcnVuc3RhdGUtPnN0YXRlX2VudHJ5
X3RpbWUgfD0gWEVOX1JVTlNUQVRFX1VQREFURTsNCj4+ICAgICAgICAgIHNtcF93bWIoKTsNCj4g
DQo+IEJlY2F1c2UgeW91IHNldCB2LT5ydW5zdGF0ZS5zdGF0ZV9lbnRyeV90aW1lIGJlbG93LCB0
aGUgcGxhY2VtZW50IG9mIHRoZSBiYXJyaWVyIHNlZW1zIGEgYml0IG9kZC4NCj4gDQo+IEkgd291
bGQgc3VnZ2VzdCB0byB1cGRhdGUgdi0+cnVuc3RhdGUuc3RhdGVfZW50cnlfdGltZSBmaXJzdCBh
bmQgdGhlbiB1cGRhdGUgcnVuc3RhdGUtPnN0YXRlX2VudHJ5X3RpbWUuDQoNCldlIGRvIHdhbnQg
dGhlIGd1ZXN0IHRvIGtub3cgd2hlbiB3ZSBtb2RpZnkgdGhlIHJ1bnN0YXRlIHNvIHdlIG5lZWQg
dG8gbWFrZSBzdXJlIHRoZSBYRU5fUlVOU1RBVEVfVVBEQVRFIGlzIGFjdHVhbGx5IHNldCBpbiBh
IHZpc2libGUgd2F5IGJlZm9yZSB3ZSBkbyB0aGUgbWVtY3B5Lg0KVGhhdOKAmXMgd2h5IHRoZSBi
YXJyaWVyIGlzIGFmdGVyLg0KDQo+IA0KPj4gKyAgICAgICAgdi0+cnVuc3RhdGUuc3RhdGVfZW50
cnlfdGltZSB8PSBYRU5fUlVOU1RBVEVfVVBEQVRFOw0KPj4gICAgICB9DQo+PiAgLSAgICBfX2Nv
cHlfdG9fZ3Vlc3QocnVuc3RhdGVfZ3Vlc3QodiksICZydW5zdGF0ZSwgMSk7DQo+PiArICAgIG1l
bWNweShydW5zdGF0ZSwgJnYtPnJ1bnN0YXRlLCBzaXplb2Yodi0+cnVuc3RhdGUpKTsNCj4+ICAt
ICAgIGlmICggZ3Vlc3RfaGFuZGxlICkNCj4+ICsgICAgaWYgKCBWTV9BU1NJU1Qodi0+ZG9tYWlu
LCBydW5zdGF0ZV91cGRhdGVfZmxhZykgKQ0KPj4gICAgICB7DQo+PiAtICAgICAgICBydW5zdGF0
ZS5zdGF0ZV9lbnRyeV90aW1lICY9IH5YRU5fUlVOU1RBVEVfVVBEQVRFOw0KPj4gKyAgICAgICAg
cnVuc3RhdGUtPnN0YXRlX2VudHJ5X3RpbWUgJj0gflhFTl9SVU5TVEFURV9VUERBVEU7DQo+PiAg
ICAgICAgICBzbXBfd21iKCk7DQo+IA0KPiBZb3Ugd2FudCB0byB1cGRhdGUgcnVuc3RhdGUtPnN0
YXRlX2VudHJ5X3RpbWUgYWZ0ZXIgdGhlIGJhcnJpZXIgbm90IGJlZm9yZS4NCkFncmVlDQoNCj4g
DQo+PiAgc3RhdGljIHZvaWQgX3VwZGF0ZV9ydW5zdGF0ZV9hcmVhKHN0cnVjdCB2Y3B1ICp2KQ0K
Pj4gIHsNCj4+ICsgICAgLyogWFhYOiB0aGlzIHNob3VsZCBiZSByZW1vdmVkICovDQo+PiAgICAg
IGlmICggIXVwZGF0ZV9ydW5zdGF0ZV9hcmVhKHYpICYmIGlzX3B2X3ZjcHUodikgJiYNCj4+ICAg
ICAgICAgICAhKHYtPmFyY2guZmxhZ3MgJiBURl9rZXJuZWxfbW9kZSkgKQ0KPj4gICAgICAgICAg
di0+YXJjaC5wdi5uZWVkX3VwZGF0ZV9ydW5zdGF0ZV9hcmVhID0gMTsNCj4+IGRpZmYgLS1naXQg
YS94ZW4vY29tbW9uL2RvbWFpbi5jIGIveGVuL2NvbW1vbi9kb21haW4uYw0KPj4gaW5kZXggN2Nj
OTUyNjEzOS4uYWNjNmY5MGJhMyAxMDA2NDQNCj4+IC0tLSBhL3hlbi9jb21tb24vZG9tYWluLmMN
Cj4+ICsrKyBiL3hlbi9jb21tb24vZG9tYWluLmMNCj4+IEBAIC0xNjEsNiArMTYxLDcgQEAgc3Ry
dWN0IHZjcHUgKnZjcHVfY3JlYXRlKHN0cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGludCB2Y3B1
X2lkKQ0KPj4gICAgICB2LT5kaXJ0eV9jcHUgPSBWQ1BVX0NQVV9DTEVBTjsNCj4+ICAgICAgICBz
cGluX2xvY2tfaW5pdCgmdi0+dmlycV9sb2NrKTsNCj4+ICsgICAgc3Bpbl9sb2NrX2luaXQoJnYt
PnJ1bnN0YXRlX2d1ZXN0X2xvY2spOw0KPj4gICAgICAgIHRhc2tsZXRfaW5pdCgmdi0+Y29udGlu
dWVfaHlwZXJjYWxsX3Rhc2tsZXQsIE5VTEwsIE5VTEwpOw0KPj4gIEBAIC02OTEsNiArNjkyLDY2
IEBAIGludCByY3VfbG9ja19saXZlX3JlbW90ZV9kb21haW5fYnlfaWQoZG9taWRfdCBkb20sIHN0
cnVjdCBkb21haW4gKipkKQ0KPj4gICAgICByZXR1cm4gMDsNCj4+ICB9DQo+PiAgK3N0YXRpYyB2
b2lkICB1bm1hcF9ydW5zdGF0ZV9hcmVhKHN0cnVjdCB2Y3B1ICp2LCB1bnNpZ25lZCBpbnQgbG9j
aykNCj4gDQo+IE5JVDogVGhlcmUgaXMgYW4gZXh0cmEgc3BhY2UgYWZ0ZXIgdm9pZA0KPiANCj4g
QWxzbywgQUZBSUNULCB0aGUgbG9jayBpcyBvbmx5IHRha2luZyB0d28gdmFsdWVzLiBQbGVhc2Ug
c3dpdGNoIHRvIGEgYm9vbC4NCg0KQWdyZWUNCg0KPiANCj4+ICt7DQo+PiArICAgIG1mbl90IG1m
bjsNCj4+ICsNCj4+ICsgICAgaWYgKCAhIHJ1bnN0YXRlX2d1ZXN0KHYpICkNCj4gDQo+IE5JVDog
V2UgZG9uJ3QgdXN1YWxseSBwdXQgYSBzcGFjZSBhZnRlciAhLg0KPiANCj4gQnV0IHNob3VsZG4n
dCB0aGlzIGJlIGNoZWNrZWQgd2l0aGluIHRoZSBsb2NrPw0KDQpBZ3JlZQ0KDQo+IA0KPiANCj4+
ICsgICAgICAgIHJldHVybjsNCj4+ICsNCj4+ICsgICAgaWYgKGxvY2spDQo+IA0KPiBOSVQ6IGlm
ICggLi4uICkNCj4gDQoNCkFjaw0KDQo+PiArICAgICAgICBzcGluX2xvY2soJnYtPnJ1bnN0YXRl
X2d1ZXN0X2xvY2spOw0KPj4gKw0KPj4gKyAgICBtZm4gPSBkb21haW5fcGFnZV9tYXBfdG9fbWZu
KHJ1bnN0YXRlX2d1ZXN0KHYpKTsNCj4+ICsNCj4+ICsgICAgdW5tYXBfZG9tYWluX3BhZ2VfZ2xv
YmFsKCh2b2lkICopDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICgodW5zaWduZWQg
bG9uZyl2LT5ydW5zdGF0ZV9ndWVzdCAmDQo+PiArICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICBQQUdFX01BU0spKTsNCj4+ICsNCj4+ICsgICAgcHV0X3BhZ2VfYW5kX3R5cGUobWZuX3RvX3Bh
Z2UobWZuKSk7DQo+PiArICAgIHJ1bnN0YXRlX2d1ZXN0KHYpID0gTlVMTDsNCj4+ICsNCj4+ICsg
ICAgaWYgKGxvY2spDQo+IA0KPiBEaXR0by4NCg0KQWNrDQoNCj4gDQo+PiArICAgICAgICBzcGlu
X3VubG9jaygmdi0+cnVuc3RhdGVfZ3Vlc3RfbG9jayk7DQo+PiArfQ0KPj4gKw0KPj4gK3N0YXRp
YyBpbnQgbWFwX3J1bnN0YXRlX2FyZWEoc3RydWN0IHZjcHUgKnYsDQo+PiArICAgICAgICAgICAg
ICAgICAgICBzdHJ1Y3QgdmNwdV9yZWdpc3Rlcl9ydW5zdGF0ZV9tZW1vcnlfYXJlYSAqYXJlYSkN
Cj4+ICt7DQo+PiArICAgIHVuc2lnbmVkIGxvbmcgb2Zmc2V0ID0gYXJlYS0+YWRkci5wICYgflBB
R0VfTUFTSzsNCj4+ICsgICAgdm9pZCAqbWFwcGluZzsNCj4+ICsgICAgc3RydWN0IHBhZ2VfaW5m
byAqcGFnZTsNCj4+ICsgICAgc2l6ZV90IHNpemUgPSBzaXplb2Yoc3RydWN0IHZjcHVfcnVuc3Rh
dGVfaW5mbyk7DQo+PiArDQo+PiArICAgIEFTU0VSVChydW5zdGF0ZV9ndWVzdCh2KSA9PSBOVUxM
KTsNCj4+ICsNCj4+ICsgICAgLyogZG8gbm90IGFsbG93IGFuIGFyZWEgY3Jvc3NpbmcgMiBwYWdl
cyAqLw0KPj4gKyAgICBpZiAoIG9mZnNldCA+IChQQUdFX1NJWkUgLSBzaXplKSApDQo+PiArICAg
ICAgICByZXR1cm4gLUVJTlZBTDsNCj4gDQo+IFRoaXMgaXMgYSBjaGFuZ2UgaW4gYmVoYXZpb3Ig
Zm9yIHRoZSBndWVzdC4gSWYgd2UgYXJlIGdvaW5nIGZvcndhcmQgd2l0aCB0aGlzLCB0aGlzIHdp
bGwgd2FudCBhIHNlcGFyYXRlIHBhdGNoIHdpdGggaXRzIG93biBleHBsYW5hdGlvbiB3aHkgdGhp
cyBpcyBkb25lLg0KDQpBY2sgSSBuZWVkIHRvIGFkZCBzdXBwb3J0IGZvciBhbiBhcmVhIGNyb3Nz
aW5nIHBhZ2VzDQoNCj4gDQo+PiArDQo+PiArI2lmZGVmIENPTkZJR19BUk0NCj4+ICsgICAgcGFn
ZSA9IGdldF9wYWdlX2Zyb21fZ3ZhKHYsIGFyZWEtPmFkZHIucCwgR1YyTV9XUklURSk7DQo+IA0K
PiBBIGd1ZXN0IGlzIGFsbG93ZWQgdG8gc2V0dXAgdGhlIHJ1bnN0YXRlIGZvciBhIGRpZmZlcmVu
dCB2Q1BVIHRoYW4gdGhlIGN1cnJlbnQgb25lLiBUaGlzIHdpbGwgbGVhZCB0byBnZXRfcGFnZV9m
cm9tX2d2YSgpIHRvIGZhaWwgYXMgdGhlIGZ1bmN0aW9uIGNhbm5vdCB5ZXQgd29yayB3aXRoIGEg
dkNQVSBvdGhlciB0aGFuIGN1cnJlbnQuDQoNCklmIHRoZSBhcmVhIGlzIG1hcHBlZCBwZXIgY3B1
IGJ1dCBpc27igJl0IHRoZSBhaW0gb2YgdGhpcyB0byBoYXZlIGEgd2F5IHRvIGNoZWNrIG90aGVy
IGNwdXMgc3RhdHVzID8NCg0KPiANCj4gQUZBSUNULCB0aGVyZSBpcyBubyByZXN0cmljdGlvbiBv
biB3aGVuIHRoZSBydW5zdGF0ZSBoeXBlcmNhbGwgY2FuIGJlIGNhbGxlZC4gU28gdGhpcyBjYW4g
ZXZlbiBiZSBjYWxsZWQgYmVmb3JlIHRoZSB2Q1BVIGlzIGJyb3VnaHQgdXAuDQoNCkkgdW5kZXJz
dGFuZCB0aGUgcmVtYXJrIGJ1dCBpdCBzdGlsbCBmZWVscyB2ZXJ5IHdlaXJkIHRvIGFsbG93IGFu
IGludmFsaWQgYWRkcmVzcyBpbiBhbiBoeXBlcmNhbGwuDQpXb3VsZG7igJl0IHdlIGhhdmUgYSBs
b3Qgb2YgcG90ZW50aWFsIGlzc3VlcyBhY2NlcHRpbmcgYW4gYWRkcmVzcyB0aGF0IHdlIGNhbm5v
dCBjaGVjayA/DQoNCg0KPiANCj4gSSB3YXMgZ29pbmcgdG8gc3VnZ2VzdCB0byB1c2UgdGhlIGN1
cnJlbnQgdkNQVSBmb3IgdHJhbnNsYXRpbmcgdGhlIGFkZHJlc3MuIEhvd2V2ZXIsIGl0IHdvdWxk
IGJlIHJlYXNvbmFibGUgZm9yIGFuIE9TIHRvIHVzZSB0aGUgc2FtZSB2aXJ0dWFsIGFkZHJlc3Mg
Zm9yIGFsbCB0aGUgdkNQVXMgYXNzdW1pbmcgdGhlIHBhZ2UtdGFibGVzIGFyZSBkaWZmZXJlbnQg
cGVyIHZDUFUuDQo+IA0KPiBSZWNlbnQgTGludXggYXJlIHVzaW5nIGEgcGVyLWNwdSBhcmVhLCBz
byB0aGUgdmlydHVhbCBhZGRyZXNzIHNob3VsZCBiZSBkaWZmZXJlbnQgZm9yIGVhY2ggdkNQVS4g
QnV0IEkgZG9uJ3Qga25vdyBob3cgdGhlIG90aGVyIE9TZXMgd29ya3MuIFJvZ2VyIHNob3VsZCBi
ZSBhYmxlIHRvIGhlbHAgZm9yIEZyZWVCU0QgYXQgbGVhc3QuDQo+IA0KPiBJIGhhdmUgQ0NlZCBQ
YXVsIGZvciB0aGUgV2luZG93cyBkcml2ZXJzLg0KPiANCj4gSWYgd2UgZGVjaWRlIHRvIGludHJv
ZHVjZSBzb21lIHJlc3RyaWN0aW9uIHRoZW4gdGhleSBzaG91bGQgYmUgZXhwbGFpbmVkIGluIHRo
ZSBjb21taXQgbWVzc2FnZSBhbmQgYWxzbyBkb2N1bWVudGVkIGluIHRoZSBwdWJsaWMgaGVhZGVy
ICh3ZSBoYXZlIGJlZW4gcHJldHR5IGJhZCBhdCBkb2N1bWVudGluZyBjaGFuZ2UgaW4gdGhlIHBh
c3QhKS4NCg0KQWdyZWUNCg0KQ2hlZXJzDQpCZXJ0cmFuZA0KDQo=


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:16:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:16:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeaBV-0002mz-FU; Fri, 29 May 2020 08:16:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jeaBV-0002mu-2b
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:16:09 +0000
X-Inumbo-ID: a5d0da92-a184-11ea-a87e-12813bfff9fa
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.68]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a5d0da92-a184-11ea-a87e-12813bfff9fa;
 Fri, 29 May 2020 08:16:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vrDXmiz0l+amY/5J1pxupmFOtEFDeXqRXqpOm0W35Ow=;
 b=ZBhBM1TCxaKizuV6q1H1yypiBZU9p7MZCJMq0CMJuwpUzTA6zI60+EH4MDD/bb4Gf9jPn4WRWeFrWSi6fguXTIMoh+aNDnESBO1mA+VasT7t/JF2dcpfFavJcWD4OzG8+KYPaWgl0cRTa3bYreL/zviICnvXa/DHEl8JfkUZAmQ=
Received: from AM0PR02CA0030.eurprd02.prod.outlook.com (2603:10a6:208:3e::43)
 by AM0PR08MB3650.eurprd08.prod.outlook.com (2603:10a6:208:da::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.19; Fri, 29 May
 2020 08:16:06 +0000
Received: from AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:3e:cafe::f6) by AM0PR02CA0030.outlook.office365.com
 (2603:10a6:208:3e::43) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.19 via Frontend
 Transport; Fri, 29 May 2020 08:16:06 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT038.mail.protection.outlook.com (10.152.17.118) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 29 May 2020 08:16:05 +0000
Received: ("Tessian outbound 952576a3272a:v57");
 Fri, 29 May 2020 08:16:05 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: abcf1a27a241ce2f
X-CR-MTA-TID: 64aa7808
Received: from da24b9dfa802.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B7A0716F-E345-4147-9814-B80C9B787F68.1; 
 Fri, 29 May 2020 08:16:00 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id da24b9dfa802.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 29 May 2020 08:16:00 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=k8EeBOBxCZli9u71R5u+nBP1ixjI09HlMOf/AESpblf50scl0pC3QeMxpAVUWnp8UzikM7wDj1krUFTyzl2JhCeMEg6M8ilkCsu/1eW3gim+B2++COxViMKIub8Ht5ohvJ+ITgMPeRSQBlAFclevyeShYZAHc7QxtHnAw+Vo3+/JvLsUMQRdGSJJ9Z9gRsuWIxItkxhW9Exx+fIe8BqboaiW0nDGZZ4YktE52O0w2gjrhXBjPG599irp1akQncX2tKd7sEkucU8BzNkd0RK67QeZ+WryLfhv0q37R4mEiiouFUdP6ZxLYMyEvhC3fqkcRWtMay0SZKsEZr7j2xNqTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vrDXmiz0l+amY/5J1pxupmFOtEFDeXqRXqpOm0W35Ow=;
 b=Ussmmiov4wPp1r1exYFWvpt+fPtRNJB0aeSG4BSJFxDQWfGcN9k1l7kp+H2VOZqPeNnSHcLaHi12lYnsgi4LJ53HXitrWKzysN/Vc4L+YIs2uWv5GqbBDi3NoKqJJBGFQ5HF37Zjdg2e/kgBj1FjgJJQZHC8TBSJy9UzaZ+yn6k2bARLa2h220KkpGjCVHCAQfcRIL4ZpAZd8b1bzO8Ev862JV0CdnjhKVENw3NrKsHOFO3fTgFHlEzCOA0rX8g9aqZkTwzwMgymXXnet9wYsZmEDEh3kE0TcroWk6m1dWPGSoqqHh5T73wt5ZEXdP12risyGmOginV0V9LHEU39Tg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=vrDXmiz0l+amY/5J1pxupmFOtEFDeXqRXqpOm0W35Ow=;
 b=ZBhBM1TCxaKizuV6q1H1yypiBZU9p7MZCJMq0CMJuwpUzTA6zI60+EH4MDD/bb4Gf9jPn4WRWeFrWSi6fguXTIMoh+aNDnESBO1mA+VasT7t/JF2dcpfFavJcWD4OzG8+KYPaWgl0cRTa3bYreL/zviICnvXa/DHEl8JfkUZAmQ=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3353.eurprd08.prod.outlook.com (2603:10a6:5:20::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 29 May
 2020 08:15:59 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3045.018; Fri, 29 May 2020
 08:15:59 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai9txeAgAAHJ4CAAB/EgIAA2sQA
Date: Fri, 29 May 2020 08:15:59 +0000
Message-ID: <9BC88255-F0BD-4A1A-BECF-CE30C0B64035@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <20200528165341.GH1195@Air-de-Roger>
 <B0CBD25E-49D8-4AE5-B424-83E9A05FBF58@arm.com>
 <de72ffe2-34a9-0b65-8d66-3f1322258d0c@xen.org>
In-Reply-To: <de72ffe2-34a9-0b65-8d66-3f1322258d0c@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: c169d545-f744-476d-21a0-08d803a888f0
x-ms-traffictypediagnostic: DB7PR08MB3353:|AM0PR08MB3650:
X-Microsoft-Antispam-PRVS: <AM0PR08MB3650E904B3283152C9CA14F99D8F0@AM0PR08MB3650.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04180B6720
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: 5YRyXe028Ix7AQdcPdxAGZSFN2MN2yiiv9wwB1bHh4QhQzcdj+hAGpyWlhusQVoAwsY5XKAhmWoUd0Edy2MhEpKVibR8ntn65EDzqK51oc2xX8oAYsjnkB/Q/2nnTUC7xBipLx/CHwlAMe6tLQESt4qZemoUwFEeIPWWo0xjOmM9urlAvwjfpavnOjYo6DgDBTIPN3HIEW4oKBw+zPjX4eEjw431s/xSTSP8/nJj8NupHvBIY3VLY25est3NC0M7ODsrDekJ56Bz5wb2N2/lxoBXj16LyDBt6hhpV4HaVMmxtXXky8yj22csEH4Xrr/1m3rCW1EWNQGCFlAur8CupQ==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(366004)(39860400002)(136003)(376002)(346002)(396003)(54906003)(5660300002)(7416002)(64756008)(83380400001)(91956017)(66476007)(66946007)(76116006)(66556008)(8936002)(71200400001)(66446008)(36756003)(4326008)(478600001)(86362001)(8676002)(6486002)(6512007)(26005)(6506007)(186003)(2906002)(316002)(33656002)(6916009)(2616005)(53546011);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: mSkF3TxmBDTUB57NE2ivxzUWFBPHDU81rT7/5DCA2ZgObfLbQx5XPSKrqpol8oJg5RNSOitOWOyPMmqjkL5p3Tdd5Vpv7Z2xiM+GIftmaXTLqO3ES7yRZQeb2zDi8u27HiQ5CbEPSc2RuDjKujxe0tjIicQjSsYXCgW/659ll6L6x74hBFB8WdTZHU5N9a4r/LAEpC28IYwLSlyxf5xCgc165ib9B0mmHcPPdn+KOGUBwmbU8CnpSFoqebwwTmc0iknNpGVKKqEqrx342xYUtcHhAJNiurjfcPjBu0jpBZ/U8XB+EyrCXwhlYfy4dF7XWRHIgtPjGEJ2ZJXvw8wRTImWwsedt11OXr7HQP4eVBR3T4sd0jS/17HjOwU3fdJHfBqjDPxSIwp4SJc4+rsZsQiKA9H04n7jH8v6uuNRfwHJCa3pChdv8umjPL3qmCbcMEvCKXdjTO8TAMx8dP+m8IipwecyBLqpTRJcoXvldvezNYdg6FEqaBbfuHGoPtL7
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <4245DE9CCC0ECB4192B7AB7C79A9A8A5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3353
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: AM5EUR03FT038.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(136003)(376002)(39860400002)(396003)(346002)(46966005)(336012)(53546011)(186003)(6486002)(36756003)(2906002)(5660300002)(26005)(478600001)(86362001)(6862004)(356005)(82740400003)(8936002)(82310400002)(316002)(54906003)(33656002)(70586007)(70206006)(4326008)(81166007)(6512007)(8676002)(83380400001)(2616005)(36906005)(6506007)(47076004);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: f2ed0932-6809-4099-b46f-08d803a88508
X-Forefront-PRVS: 04180B6720
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: dPDFWA4gn2qiCOgihrc69SdmDRo2fIha6uPW+sHabPF8/qOcRk2OhbbBHc/Z1md5eVuKcc65RR6ZU8CJAcYktcM6rbJ6zGITplT85WYKOpsIEGPjBQ47Awq/sL9rWJXla1LcYu4dohMOrKUhJ+n9QF84Yh4ZY/x+Myv3gYtX23V7FbbKm7nRGD5lnYJvsiZbXDO4SbqSkXklnf1BxmhdiwVX1DNNhOLC+CB3D1Gx/7uCviAyUnwyl1ma7/uVUmpo7AR/4Yv9ow5rHsGHdHA9VmTbMdCO3vz5BysMXKZo4346tD6fYDg5/2odAXLlWUCEgw1s5mMK5HpVL4U+WW17gaNWqvwc3D5oU6FfhiIqmd3OTpRjWguHa4VivJqbThR2mC30qubCkDNYICDTq9lf7A==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 May 2020 08:16:05.9163 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c169d545-f744-476d-21a0-08d803a888f0
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3650
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger@xen.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gMjggTWF5IDIwMjAsIGF0IDIwOjEyLCBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4u
b3JnPiB3cm90ZToNCj4gDQo+IEhpLA0KPiANCj4gT24gMjgvMDUvMjAyMCAxODoxOSwgQmVydHJh
bmQgTWFycXVpcyB3cm90ZToNCj4+Pj4gKw0KPj4+PiArICAgIHJldHVybiAwOw0KPj4+PiArfQ0K
Pj4+PiArDQo+Pj4+IGludCBkb21haW5fa2lsbChzdHJ1Y3QgZG9tYWluICpkKQ0KPj4+PiB7DQo+
>>>     int rc = 0;
>>>> @@ -727,7 +788,10 @@ int domain_kill(struct domain *d)
>>>>         if ( cpupool_move_domain(d, cpupool0) )
>>>>             return -ERESTART;
>>>>         for_each_vcpu ( d, v )
>>>> +        {
>>>> +            unmap_runstate_area(v, 0);
>>> 
>>> Why is it not appropriate here to hold the lock? It might not be
>>> technically needed, but still should work?
>> In a killing scenario you might stop a core while it was rescheduling.
>> Couldn't a core be killed using a cross core interrupt ?
> 
> You can't stop a vCPU in the middle of the context switch. The vCPU can only be scheduled out at specific points in Xen.

Ok

> 
>> If this is the case then I would need to do masked interrupt locking sections to prevent that case ?
> 
> At the beginning of the function domain_kill() the domain will be paused (see domain_pause()). After this step none of the vCPUs will be running or be scheduled.
> 
> You should technically use the lock everywhere to avoid a static analyzer throwing a warning because you access the variable with and without the lock. A comment would at least be useful in the code if we go ahead with this.
> 
> As an aside, I think you want the lock to always disable the interrupts, otherwise check_lock() (this can be enabled with CONFIG_DEBUG_LOCKS only on x86 though) will shout at you because your lock can be taken in both IRQ-safe and IRQ-unsafe context.

Ok I understand that.
I will take the lock here.

Cheers
Bertrand
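As background for the check_lock() remark in this exchange, here is a small standalone model of the consistency rule being discussed: a lock shared between IRQ and non-IRQ contexts must always be taken with the same interrupt state, or an IRQ handler can spin forever on a lock the interrupted code already holds. All names here are invented for illustration; this is not Xen code.

```c
#include <stdbool.h>

/* Toy model of the rule check_lock() enforces: a given lock must always
 * be taken with the same local-interrupt state. */

static bool irqs_enabled = true;      /* simulated local IRQ flag */

static void toy_irq_disable(void) { irqs_enabled = false; }
static void toy_irq_enable(void)  { irqs_enabled = true; }

struct toy_lock {
    int irq_state;  /* -1: never taken, 0: taken with IRQs off, 1: with IRQs on */
};

/* Returns false on the inconsistent usage check_lock() would shout about. */
static bool toy_lock_check(struct toy_lock *l)
{
    int cur = irqs_enabled ? 1 : 0;

    if ( l->irq_state == -1 )
        l->irq_state = cur;           /* first acquisition fixes the rule */

    return l->irq_state == cur;
}
```

Always taking the runstate lock with interrupts disabled, as agreed above, trivially satisfies this check.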


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:25:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:25:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeaKC-0003gQ-Bk; Fri, 29 May 2020 08:25:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jeaKB-0003gL-FL
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:25:07 +0000
X-Inumbo-ID: e6c125e2-a185-11ea-a87e-12813bfff9fa
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.48]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e6c125e2-a185-11ea-a87e-12813bfff9fa;
 Fri, 29 May 2020 08:25:07 +0000 (UTC)
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Roger Pau Monné <roger@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai92N6AgADQMICAABI3AA==
Date: Fri, 29 May 2020 08:24:54 +0000
Message-ID: <9AEB63E3-8314-4D27-B837-E11BD853DD99@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <20200529071930.GI1195@Air-de-Roger>
In-Reply-To: <20200529071930.GI1195@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <29B68D287AD7314D8C6E3C32AC8BC6B5@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 29 May 2020, at 08:19, Roger Pau Monné <roger@xen.org> wrote:
> 
> On Thu, May 28, 2020 at 07:54:35PM +0100, Julien Grall wrote:
>> Hi Bertrand,
>> 
>> Thank you for the patch.
>> 
>> On 28/05/2020 16:25, Bertrand Marquis wrote:
>>> +static int map_runstate_area(struct vcpu *v,
>>> +                    struct vcpu_register_runstate_memory_area *area)
>>> +{
>>> +    unsigned long offset = area->addr.p & ~PAGE_MASK;
>>> +    void *mapping;
>>> +    struct page_info *page;
>>> +    size_t size = sizeof(struct vcpu_runstate_info);
>>> +
>>> +    ASSERT(runstate_guest(v) == NULL);
>>> +
>>> +    /* do not allow an area crossing 2 pages */
>>> +    if ( offset > (PAGE_SIZE - size) )
>>> +        return -EINVAL;
>> 
>> This is a change in behavior for the guest. If we are going forward with
>> this, this will want a separate patch with its own explanation why this is
>> done.
> 
> I don't think we can go this route without supporting crossing a page
> boundary.
> 
> Linux will BUG if VCPUOP_register_runstate_memory_area fails, and
> AFAICT there's no check in Linux to assure the runstate area doesn't
> cross a page boundary. If we want to go this route we must support the
> area crossing a page boundary, or else we will break existing
> guests.

Agree, I will add cross page boundary support.

> 
>>> +
>>> +#ifdef CONFIG_ARM
>>> +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
>> 
>> A guest is allowed to set up the runstate for a different vCPU than the
>> current one. This will cause get_page_from_gva() to fail as the function
>> cannot yet work with a vCPU other than current.
>> 
>> AFAICT, there is no restriction on when the runstate hypercall can be
>> called. So this can even be called before the vCPU is brought up.
>> 
>> I was going to suggest to use the current vCPU for translating the address.
>> However, it would be reasonable for an OS to use the same virtual address
>> for all the vCPUs assuming the page-tables are different per vCPU.
> 
> Hm, it's a tricky question. Using the current vCPU page tables would
> seem like a good compromise, but it needs to get added to the header
> as a note, and this should ideally be merged at the start of a
> development cycle to get people time to test and report issues.

I agree, and as this will not go in 4.14 we could go this route to have this in 4.15 ?

Bertrand

> 
>> Recent Linux are using a per-cpu area, so the virtual address should be
>> different for each vCPU. But I don't know how the other OSes work. Roger
>> should be able to help for FreeBSD at least.
> 
> FreeBSD doesn't use VCPUOP_register_runstate_memory_area at all, so we
> are safe in that regard.
> 
> I never got around to implementing the required scheduler changes in
> order to support stolen time accounting.  Not sure this has changed
> since I last checked, the bhyve and KVM guys also had interest in
> properly accounting for stolen time on FreeBSD IIRC.
> 
> Roger.
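The two sides of the page-boundary discussion in this message can be sketched outside Xen as follows. The toy constants stand in for Xen's PAGE_* macros and the helpers are hypothetical, not the patch itself: one shows the RFC's -EINVAL condition, the other what cross-page support has to cope with.

```c
#include <stddef.h>
#include <stdbool.h>

#define TOY_PAGE_SHIFT 12
#define TOY_PAGE_SIZE  (1UL << TOY_PAGE_SHIFT)

/* The RFC's -EINVAL condition: the area crosses a page boundary exactly
 * when its offset within the first page exceeds PAGE_SIZE - size. */
static bool area_crosses_page(unsigned long gva, size_t size)
{
    unsigned long offset = gva & (TOY_PAGE_SIZE - 1);

    return offset > TOY_PAGE_SIZE - size;
}

/* What cross-page support must handle: the number of guest pages the
 * area touches, i.e. how many separate mappings are needed. */
static unsigned int area_nr_pages(unsigned long gva, size_t size)
{
    unsigned long first = gva >> TOY_PAGE_SHIFT;
    unsigned long last  = (gva + size - 1) >> TOY_PAGE_SHIFT;

    return (unsigned int)(last - first + 1);
}
```

Since existing Linux guests may register an area for which area_crosses_page() is true, rejecting that case would break them, which is Roger's point above.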


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:31:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:31:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeaQV-0004YA-2L; Fri, 29 May 2020 08:31:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tJvr=7L=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeaQT-0004Y5-DW
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:31:37 +0000
X-Inumbo-ID: cf85b5a4-a186-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf85b5a4-a186-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 08:31:36 +0000 (UTC)
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeaQS-0006qZ-5f; Fri, 29 May 2020 08:31:36 +0000
Received: from [212.230.157.105] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeaQR-0004kQ-Qv; Fri, 29 May 2020 08:31:36 +0000
Date: Fri, 29 May 2020 10:31:22 +0200
From: Roger Pau Monné <roger@xen.org>
To: Julien Grall <julien@xen.org>
Subject: Re: [PATCH v2 3/3] clang: don't define nocall
Message-ID: <20200529083122.GJ1195@Air-de-Roger>
References: <20200528144023.10814-1-roger.pau@citrix.com>
 <20200528144023.10814-4-roger.pau@citrix.com>
 <8aa8d35f-2928-2096-a47c-26659c5a43a2@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8aa8d35f-2928-2096-a47c-26659c5a43a2@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 08:25:44AM +0100, Julien Grall wrote:
> Hi Roger,
> 
> On 28/05/2020 15:40, Roger Pau Monne wrote:
> > Clang doesn't support attribute error, and the possible equivalents
> > like diagnose_if don't seem to work well in this case as they trigger
> > even when the function is not called (just by being used by the
> > APPEND_CALL macro).
> 
> OOI, could you share the diagnose_if change you tried?

I've sent a reduced version to the llvm-dev mailing list, because I
think the documentation should be fixed for diagnose_if. The email
with the example is at:

http://lists.llvm.org/pipermail/llvm-dev/2020-May/141908.html

FWIW, using the deprecated attribute will also trigger the same
error/warning even when not calling the function from C.

> > 
> > Define nocall to a noop on clang until a proper solution can be found.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Thanks!
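For readers of the archive, a minimal sketch of the construct this patch is about. The `nocall` name comes from the thread, but treat the exact spelling as illustrative rather than the final Xen macro: GCC's __attribute__((error("..."))) rejects any direct call at compile time while still permitting the function's address to be taken (which is all an APPEND_CALL-style table needs), and clang, lacking the attribute, gets a no-op until a proper equivalent is found.

```c
#if defined(__clang__)
# define nocall                /* clang: no working equivalent, see thread */
#elif defined(__GNUC__)
# define nocall __attribute__((error("function must not be called")))
#else
# define nocall
#endif

void nocall init_only(void);
void init_only(void) { /* placed in a call table, never called directly */ }

typedef void (*hook_t)(void);

/* Taking the address compiles fine; writing `init_only()` would fail
 * to build under GCC because of the error attribute. */
hook_t get_hook(void)
{
    return init_only;
}
```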


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:33:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:33:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeaRq-0004eU-GG; Fri, 29 May 2020 08:33:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jeaRo-0004eN-Hy
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:33:00 +0000
X-Inumbo-ID: 00946000-a187-11ea-a87e-12813bfff9fa
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.5.67]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 00946000-a187-11ea-a87e-12813bfff9fa;
 Fri, 29 May 2020 08:32:59 +0000 (UTC)
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai92N6AgADUiwCAABAUgA==
Date: Fri, 29 May 2020 08:32:51 +0000
Message-ID: <B5889544-3EB5-41ED-8428-8BCA05269371@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
In-Reply-To: <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <40A53B9510EABF4C9A65849D63CC6B34@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan

> On 29 May 2020, at 08:35, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 28.05.2020 20:54, Julien Grall wrote:
>> On 28/05/2020 16:25, Bertrand Marquis wrote:
>>> At the moment on Arm, a Linux guest running with KTPI enabled will
>>> cause the following error when a context switch happens in user mode:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>
>>> This patch is modifying runstate handling to map the area given by the
>>> guest inside Xen during the hypercall.
>>> This is removing the guest virtual to physical conversion during context
>>> switches which removes the bug
>>
>> It would be good to spell out that a virtual address is not stable. So
>> relying on it is wrong.
>
> Guests at present are permitted to change the mapping underneath the
> virtual address provided (this may not be the best idea, but the
> interface is like it is). Therefore I don't think the present
> interface can be changed like this. Instead a new interface will need
> adding which takes a guest physical address instead. (Which, in the
> end, will merely be one tiny step towards making the hypercall
> interfaces use guest physical addresses. And it would be nice if an
> overall concept was hashed out first how that conversion should
> occur, such that the change here could at least be made fit that
> planned model. For example, an option might be to retain all present
> hypercall numbering and simply dedicate a bit in the top level
> hypercall numbers indicating whether _all_ involved addresses for
> that operation are physical vs virtual ones.)

I fully agree that moving to interfaces using physical addresses would
be better, but that would need new hypercall numbers (or the bit system
you suggest) to keep backward compatibility.
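
The bit scheme discussed above could be sketched as below; the flag name, bit position, and helper functions are all invented for illustration and do not exist in Xen's ABI today:

```c
/* Hypothetical encoding: one high bit in the top-level hypercall
 * number indicates that all addresses passed with the call are guest
 * *physical* addresses.  Bit position and names are invented. */
#include <stdbool.h>
#include <stdint.h>

#define HYPERCALL_PADDR_FLAG (UINT32_C(1) << 31) /* illustrative only */

static inline uint32_t hypercall_with_paddr(uint32_t op)
{
    return op | HYPERCALL_PADDR_FLAG; /* caller passes physical addresses */
}

static inline uint32_t hypercall_op(uint32_t nr)
{
    return nr & ~HYPERCALL_PADDR_FLAG; /* strip the flag to dispatch */
}

static inline bool hypercall_uses_paddr(uint32_t nr)
{
    return (nr & HYPERCALL_PADDR_FLAG) != 0;
}
```

A dispatcher could strip the flag with hypercall_op() and choose the address-translation path accordingly, leaving all existing (virtual-address) hypercall numbers untouched.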

Regarding changing the virtual address: even though this is theoretically
possible with the current interface, I do not really see how it could be
handled cleanly with KPTI, or even without it, as the change would not be
atomic on the guest side. The only clean way to do it would be one
hypercall to remove the address in Xen, followed by a second hypercall
to register the new address.

So the only way to solve the KPTI issue would actually be to create a new
hypercall and keep the current bug/problem?

Bertrand



From xen-devel-bounces@lists.xenproject.org Fri May 29 08:34:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:34:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeaT6-0004qI-7T; Fri, 29 May 2020 08:34:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeaT5-0004qB-HO
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:34:19 +0000
X-Inumbo-ID: 2ef25baa-a187-11ea-a87e-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2ef25baa-a187-11ea-a87e-12813bfff9fa;
 Fri, 29 May 2020 08:34:17 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8CE2AAC96;
 Fri, 29 May 2020 08:34:15 +0000 (UTC)
Subject: Re: [PATCH v10 07/12] xen: provide version information in hypfs
To: Juergen Gross <jgross@suse.com>, Paul Durrant <paul@xen.org>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-8-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <88b80e61-3fb4-8f89-0597-d6959033478b@suse.com>
Date: Fri, 29 May 2020 10:34:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519072106.26894-8-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19.05.2020 09:21, Juergen Gross wrote:
> @@ -373,6 +374,52 @@ void __init do_initcalls(void)
>          (*call)();
>  }
>  
> +#ifdef CONFIG_HYPFS
> +static unsigned int __read_mostly major_version;
> +static unsigned int __read_mostly minor_version;
> +
> +static HYPFS_DIR_INIT(buildinfo, "buildinfo");
> +static HYPFS_DIR_INIT(compileinfo, "compileinfo");
> +static HYPFS_DIR_INIT(version, "version");
> +static HYPFS_UINT_INIT(major, "major", major_version);
> +static HYPFS_UINT_INIT(minor, "minor", minor_version);

These two lines fail to build with gcc 4.1 ("unknown field 'content'
specified in initializer"), which I've deliberately tried as a
last-minute post-commit, pre-push check. I therefore reverted this
change before pushing.

Paul, Jürgen - please advise how to proceed, considering today's
deadline. I'd accept pushing the rest of the series if a fix for
the issue would then still be permitted to go in later. Otherwise
I'd have to wait for a fixed (incremental) version.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:37:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeaVx-00050n-Ls; Fri, 29 May 2020 08:37:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeaVw-00050d-EN
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:37:16 +0000
X-Inumbo-ID: 99691122-a187-11ea-a87e-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 99691122-a187-11ea-a87e-12813bfff9fa;
 Fri, 29 May 2020 08:37:16 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 469ABAB5C;
 Fri, 29 May 2020 08:37:14 +0000 (UTC)
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
 <B5889544-3EB5-41ED-8428-8BCA05269371@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cd578ace-1c07-00c5-77d2-79955c9b14f3@suse.com>
Date: Fri, 29 May 2020 10:37:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <B5889544-3EB5-41ED-8428-8BCA05269371@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 10:32, Bertrand Marquis wrote:
> So the only way to solve the KPTI issue would actually be to create a new
> hypercall and keep the current bug/problem?

That's my view on it at least, yes.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:45:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:45:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeadc-0005ri-Fm; Fri, 29 May 2020 08:45:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeadb-0005rd-3x
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:45:11 +0000
X-Inumbo-ID: b4430056-a188-11ea-a881-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4430056-a188-11ea-a881-12813bfff9fa;
 Fri, 29 May 2020 08:45:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A6429AB3D;
 Fri, 29 May 2020 08:45:08 +0000 (UTC)
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <52e26c9d-b662-2597-b521-dacf4f8acfc8@suse.com>
Date: Fri, 29 May 2020 10:45:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 10:13, Bertrand Marquis wrote:
>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>> AFAICT, there is no restriction on when the runstate hypercall can be called. So this can even be called before the vCPU is brought up.
> 
> I understand the remark, but it still feels very weird to allow an invalid address in a hypercall.
> Wouldn’t we have a lot of potential issues accepting an address that we cannot check?

I don't think so: The hypervisor uses copy_to_guest() to protect
itself against the address being invalid at the time of copying.
If the guest doesn't make sure it's valid at that time, it
simply won't get the information (perhaps until Xen's next
attempt to copy it out).

You may want to take a look at the x86 side of this (also the
vCPU time updating): Due to the way x86-64 PV guests work, the
address may legitimately be unmapped at the time Xen wants to
copy it, when the vCPU is currently executing guest user mode
code. In such a case the copy gets retried the next time the
guest transitions from user to kernel mode (which involves a
page table change).
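
The retry behaviour described above can be modeled in miniature; this is a toy sketch, not Xen's actual code, and names such as `vcpu_model` and `try_copy_runstate` are invented for illustration:

```c
/* Toy model: if the guest mapping is currently invalid, remember that
 * an update is pending and retry on the next user->kernel transition
 * (which, for x86-64 PV, involves a page-table switch). */
#include <stdbool.h>

struct vcpu_model {
    bool mapping_valid;            /* is the registered VA mapped now? */
    bool pending_update;           /* copy failed, retry later */
    unsigned long runstate_shadow; /* value the guest last received */
};

static bool try_copy_runstate(struct vcpu_model *v, unsigned long val)
{
    if (!v->mapping_valid) {       /* copy_to_guest() would fault */
        v->pending_update = true;
        return false;
    }
    v->runstate_shadow = val;      /* stands in for copy_to_guest() */
    v->pending_update = false;
    return true;
}

/* Called on a guest user->kernel transition. */
static void on_kernel_entry(struct vcpu_model *v, unsigned long val)
{
    v->mapping_valid = true;       /* kernel page tables are live again */
    if (v->pending_update)
        try_copy_runstate(v, val);
}
```

The guest that never makes the address valid simply never receives the update, matching the "it simply won't get the information" behaviour described earlier.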

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:49:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:49:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeahI-00061V-08; Fri, 29 May 2020 08:49:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsAk=7L=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jeahF-00061N-Uj
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:48:57 +0000
X-Inumbo-ID: 3b74096c-a189-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b74096c-a189-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 08:48:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B267BAB3D;
 Fri, 29 May 2020 08:48:55 +0000 (UTC)
Message-ID: <6fcdb69457e5768b0fa2259f83a23158e9c939f5.camel@suse.com>
Subject: Re: [Xen-devel] [RFC 1/9] schedule: Introduce per-pcpu time accounting
From: Dario Faggioli <dfaggioli@suse.com>
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "andrii.anisov@gmail.com" <andrii.anisov@gmail.com>,
 "george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
 "julien.grall@arm.com" <julien.grall@arm.com>, 
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Date: Fri, 29 May 2020 10:48:53 +0200
In-Reply-To: <0e46fc4b29b7c3b3e6b4ca4704b9e7dac5738868.camel@epam.com>
References: <1568197942-15374-1-git-send-email-andrii.anisov@gmail.com>
 <1568197942-15374-2-git-send-email-andrii.anisov@gmail.com>
 <8c74cacb-ff73-eddc-626c-f6fa862cf5a6@arm.com>
 <f3767489-e46a-3830-8b3c-0b637b71e0b8@gmail.com>
 <0e46fc4b29b7c3b3e6b4ca4704b9e7dac5738868.camel@epam.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-Jlnn4kmmZc/gXVGeGbpT"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "wl@xen.org" <wl@xen.org>, "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
 "ian.jackson@eu.citrix.com" <ian.jackson@eu.citrix.com>,
 "tim@xen.org" <tim@xen.org>, "jbeulich@suse.com" <jbeulich@suse.com>,
 "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-Jlnn4kmmZc/gXVGeGbpT
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2020-05-26 at 02:27 +0000, Volodymyr Babchuk wrote:
> Hello All,
>
Hello Volodymyr,

> This is a gentle reminder about this RFC.
>
> Sadly, Andrii Anisov has left our team. But I'm committed to continuing
> his work on time accounting and real-time scheduling.
>
Ok, so, first of all, sorry that this has not been properly addressed.

I personally never forgot about it or anything... Still, I haven't been
able to look into it properly.

> I do realize that the proposed patches have become moldy. I can rebase
> them onto current master, if it would help.
>
As a matter of fact, it would. Especially considering that, AFAICT,
this pre-dates core-scheduling.

So, if you're really keen on doing such a rebase and resending, I will be
happy to have a look at how it ends up looking.

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-Jlnn4kmmZc/gXVGeGbpT
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7QzHYACgkQFkJ4iaW4
c+7ypA/8CbBPD3EmaxI68spXbZkuJsC+kYybAazVr7B2PTaQTnQPqL4tDf4h6R8P
Q1gsGH2zdk+8YoXxPGM5Uer4lbk6SRwK0u/KPJuqxTjfsfpYa4n/kgAtZORSbwXk
6eCqLhatSYoLgzWR+9kiq9d5ORAexzO0wUsiiosacynUMMhybTbhidAX2NbItZfL
Ss4gdHx86HGT4ro0DtWuICRG6Rbji6JW40EXrG0e58VGmW+skV0msg1MmdNzNkNm
MrIy1OXNmMqisBNgzHzwKq16M1kW1cYH4QicWPTzIm0pB0mk085M0FzKE6+EnPPR
/qd5sWQKXI2jVoSIxHABoiuLf1Yrc4YKktj6WNPHm/KTZqHkjUP6s/XdOwQCp50m
CmEj0n8zmPMx4AudfCNjdVmE182UhLr6bgXisTKrNV9Mnjz3DVL/CteNMv8ZKAxq
NiH8plUSsd6g7HVSUmFCd/O7W0U2XMiXQphOh82QhOKoneJz/GdJQDkZAcxtEONY
3NWPNHr9VmKc24UNfqwETeWXHcqOjm6Mfb1+PmtX4+kpAek1NuyH6sBCp6sUCE+K
Yl6T051PSWw4CZoCVSi7u1VelSLE9O0bPkMC8xCKto+UetIKb8uyyh9Jw3aIlSbv
p5PG3MLHvkCmzWvPAVABZdUNcrHvuPtWNapeI29hG2zoSxv2rH0=
=xNCK
-----END PGP SIGNATURE-----

--=-Jlnn4kmmZc/gXVGeGbpT--



From xen-devel-bounces@lists.xenproject.org Fri May 29 08:52:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeakQ-0006oh-FU; Fri, 29 May 2020 08:52:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeakP-0006oc-Ja
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:52:13 +0000
X-Inumbo-ID: b023cb94-a189-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b023cb94-a189-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 08:52:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8CCBAAAD0;
 Fri, 29 May 2020 08:52:11 +0000 (UTC)
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: George Dunlap <George.Dunlap@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <757d53a0-ec8f-62f1-ca20-72eaf9e1c84d@suse.com>
Date: Fri, 29 May 2020 10:52:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 18:08, George Dunlap wrote:
>> On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
>> 2. Xen should disable the XSM policy build when FLASK is disabled.
>> This is unfortunately not so simple because the XSM policy build is a
>> tools option and FLASK is a Xen option and the configuration systems
>> are disjoint.  But at the very least a default build, which has no XSM
>> support, should not build an XSM policy file either.
> 
> A simple thing to do here would be to have the flask policy controlled by a configure --flask option.  If neither --flask nor --no-flask is specified, we could maybe have configure also check the contents of xen/.config to see if CONFIG_XSM_FLASK is enabled?

But that's creating a chicken-and-egg problem: There's no ordering
between the tools/ and xen/ builds. xen/ may not be built at all,
and this is going to become increasingly likely once the xen/ part
of the tree supports out-of-tree builds (at least I'll switch most
of my build trees to that model as soon as it's available).

Do we perhaps need to resort to a make command line option, usable
at the top level as well as for major subtree builds, and being
honored (as long as set either way, falling back to current
behavior if unset) by both ./configure and the hypervisor's
Kconfig?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 08:53:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 08:53:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jealN-0006u9-Ti; Fri, 29 May 2020 08:53:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jealM-0006tx-I8
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 08:53:12 +0000
X-Inumbo-ID: cf9691c8-a189-11ea-a881-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf9691c8-a189-11ea-a881-12813bfff9fa;
 Fri, 29 May 2020 08:53:05 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hlccQB3XmzLqV1fh58q0zcxVKGKftK3TCe++AsvVQbg=; b=OsHS4Vxu0X4U7zJWybzZ0TEFz
 hObgJTMVZprdh3Mhs0pBHnu+FBIH0bKf4Rfeus8AMtPJDA1BbxgWEnmapxqfaROAyxg+/bYTGqDmh
 9ri+0sjDyr1sEX47PrOqxyECs9hRja5zRbW7dLM5xJc6D/6fZ1OnKm6w+tzVj91XPM81U=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jealE-0007K0-W4; Fri, 29 May 2020 08:53:05 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jealE-0006lx-MR; Fri, 29 May 2020 08:53:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jealE-0001VY-Lc; Fri, 29 May 2020 08:53:04 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150432-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150432: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: linux=b0c3ba31be3e45a130e13b278cf3b90f69bda6f6
X-Osstest-Versions-That: linux=444fc5cde64330661bf59944c43844e7d4c2ccd8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 08:53:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150432 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150432/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150390
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150390
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150390
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150390
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150390
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150390
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150390
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150390
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 linux                b0c3ba31be3e45a130e13b278cf3b90f69bda6f6
baseline version:
 linux                444fc5cde64330661bf59944c43844e7d4c2ccd8

Last test of basis   150390  2020-05-27 00:39:33 Z    2 days
Testing same since   150413  2020-05-27 18:39:37 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Amir Goldstein <amir73il@gmail.com>
  Daniel Xu <dxu@dxuuu.xyz>
  Eric W. Biederman <ebiederm@xmission.com>
  Jan Kara <jack@suse.cz>
  Linus Torvalds <torvalds@linux-foundation.org>
  Odin Ugedal <odin@ugedal.com>
  Roman Gushchin <guro@fb.com>
  Tejun Heo <tj@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   444fc5cde643..b0c3ba31be3e  b0c3ba31be3e45a130e13b278cf3b90f69bda6f6 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:02:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:02:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeau5-0007rj-Uk; Fri, 29 May 2020 09:02:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeau4-0007re-8g
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:02:12 +0000
X-Inumbo-ID: 14daf304-a18b-11ea-a881-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 14daf304-a18b-11ea-a881-12813bfff9fa;
 Fri, 29 May 2020 09:02:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0A755AB3D;
 Fri, 29 May 2020 09:02:09 +0000 (UTC)
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: George Dunlap <George.Dunlap@citrix.com>,
 Ian Jackson <Ian.Jackson@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <09d87f3d-e166-1e7c-ec8d-dd3e54f366fb@suse.com>
Date: Fri, 29 May 2020 11:02:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 18:08, George Dunlap wrote:
>> On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
>>
>> The Xen tools build system builds a FLASK policy by default.  It does
>> this even if the hypervisor build for XSM is disabled.
>>
>> I recently sent patches upstream to grub to support XSM in
>> update-grub.  update-grub is the program which examines your /boot and
>> generates appropriate bootloader entries.  My merge request
>>  https://salsa.debian.org/grub-team/grub/-/merge_requests/18
>> finds XSM policy files, and when they are found, generates "XSM
>> enabled" bootloader entries. [1]
>>
>> The result of these two things together is that a default build of
>> grub will result in these "XSM enabled" bootloader entries.  In
>> practice I think these entries will boot because everything ignores
>> the additional XSM policy file (!) and Xen ignores the
>> "flask=enforcing" option (!!)
>>
>> This is not particularly good.  Offering people an "XSM enabled"
>> option which does nothing is poor because they might think they have the
>> extra security but actually significantly more steps are needed.  But
>> there doesn't appear to be any way for update-grub to tell whether a
>> particular hypervisor does support XSM or not.
>>
>> I think the following changes would be good:
>>
>> 1. Xen should reject "flask=enforcing" if it is built without FLASK
>> support, rather than ignoring it.  This will ensure users are not
>> misled by these boot options since they will be broken.
> 
> +1

Yeah, probably better indeed, despite the current behavior being
documented correctly. I'll make a patch.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:19:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebAF-0000S4-F9; Fri, 29 May 2020 09:18:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jebAE-0000Rz-KJ
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:18:54 +0000
X-Inumbo-ID: 69d3e97c-a18d-11ea-81bc-bc764e2007e4
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::617])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69d3e97c-a18d-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 09:18:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iGD1SjqY/6jwHOi5VOo/3XKg6aWSbbIKOPJKUtL0VVs=;
 b=u/FriGkMOLgf6UGd2lIFjzBaKaOSwLguTP1t3rMpDZ8UsH9+u/4waMmh1rZaG1X2J5j2EwMkxI3hlAYI5HfomCYihwnHN6kVGCW+TkGxGdjx0m+qvVYEyuDgWOyIuBQr3jdtCDybG49NoUz8BgxTBZhiT1EfNnraeIGMrN/xxq4=
Received: from AM4PR07CA0034.eurprd07.prod.outlook.com (2603:10a6:205:1::47)
 by AM6PR08MB5159.eurprd08.prod.outlook.com (2603:10a6:20b:e2::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.18; Fri, 29 May
 2020 09:18:51 +0000
Received: from VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:205:1:cafe::68) by AM4PR07CA0034.outlook.office365.com
 (2603:10a6:205:1::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3066.8 via Frontend
 Transport; Fri, 29 May 2020 09:18:51 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT023.mail.protection.outlook.com (10.152.18.133) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 29 May 2020 09:18:50 +0000
Received: ("Tessian outbound 952576a3272a:v57");
 Fri, 29 May 2020 09:18:50 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 505691475dd5c9ea
X-CR-MTA-TID: 64aa7808
Received: from c7f750dd4e4a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 65A5B446-ACA6-421B-8FDE-25D56F574876.1; 
 Fri, 29 May 2020 09:18:45 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c7f750dd4e4a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 29 May 2020 09:18:45 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ZAkIfupaHfSZ91GnZFva5MF57dSkNfcj+OH22S08Mo85DYWQdg5LPHFrjA4uO6T0xVKtXUtJl+F4tzezjF5sMoY+o/FE39ckP1TLXA5OhEVcW3No2Uvdqj5WhrUEk3HAUmvVJpUkV40XgpbGl5WVbkb4dKd3OTw0KbfYMnRgYFbR2Zp78RsNhUCyKdCnnxO+MWSAi9mSCyTPbBQZTLP0J6ApdVxyIQJXbmJ4eX69crIM7Znbz0yZKzsg4h3vxWLg0R6i+F7abEuUgkE3YxQ60YPhrpUyZKi3iyUhvyTof6dpK2R/YEyfXshQwE8ZmR4szJMcKjiZ47zXeyR9W9kmCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iGD1SjqY/6jwHOi5VOo/3XKg6aWSbbIKOPJKUtL0VVs=;
 b=Y9uif/bwriynOYJ9CLbGb1RPIFHVtN5tCQ+SDHTUVFf0WWxknXHnDJG7o9rh2DJ+B1xPN6XP4Nf99UxYsct84KA6niXnVNAE+GoQgOdM9tpHX6YUt1OGqFjVIZi9ZoYqYg9A0vfJIVMWt4O/TPl2IBUZJLM55Dd6Tq8qQ+cM8D6Ka+lhRCKUTNGaNMELSg3+GX+9sr5ns9a4LZe/eGH0syQeOTs1uwZIFlCNxsxcUQTX0GU2RsmXghCJlrS6QYzakiDWfUeCnJLd2kGgivHkWiK+IMhlc3x7e0YVP7WSiNvd38UaFM8YIpUaYAM5kBR1LIC3JtqLVuPLPU93gW24jA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iGD1SjqY/6jwHOi5VOo/3XKg6aWSbbIKOPJKUtL0VVs=;
 b=u/FriGkMOLgf6UGd2lIFjzBaKaOSwLguTP1t3rMpDZ8UsH9+u/4waMmh1rZaG1X2J5j2EwMkxI3hlAYI5HfomCYihwnHN6kVGCW+TkGxGdjx0m+qvVYEyuDgWOyIuBQr3jdtCDybG49NoUz8BgxTBZhiT1EfNnraeIGMrN/xxq4=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3915.eurprd08.prod.outlook.com (2603:10a6:10:34::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.19; Fri, 29 May
 2020 09:18:42 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3045.018; Fri, 29 May 2020
 09:18:42 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai92N6AgADfWQCAAAi2gIAACV6A
Date: Fri, 29 May 2020 09:18:42 +0000
Message-ID: <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <52e26c9d-b662-2597-b521-dacf4f8acfc8@suse.com>
In-Reply-To: <52e26c9d-b662-2597-b521-dacf4f8acfc8@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: b6059383-2905-451c-decc-08d803b14d1c
x-ms-traffictypediagnostic: DB7PR08MB3915:|AM6PR08MB5159:
X-Microsoft-Antispam-PRVS: <AM6PR08MB5159FD6309FE29AFD9782F5B9D8F0@AM6PR08MB5159.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04180B6720
X-MS-Exchange-SenderADCheck: 1
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <49B9AB3CD18B8E449DA9095043D4CB6A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3915
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT023.eop-EUR03.prod.protection.outlook.com
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 May 2020 09:18:50.9678 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b6059383-2905-451c-decc-08d803b14d1c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB5159
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

> On 29 May 2020, at 09:45, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 29.05.2020 10:13, Bertrand Marquis wrote:
>>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>> AFAICT, there is no restriction on when the runstate hypercall can be called. So this can even be called before the vCPU is brought up.
>> 
>> I understand the remark but it still feels very weird to allow an invalid address in an hypercall.
>> Wouldn’t we have a lot of potential issues accepting an address that we cannot check ?
> 
> I don't think so: The hypervisor uses copy_to_guest() to protect
> itself from the addresses to be invalid at the time of copying.
> If the guest doesn't make sure they're valid at that time, it
> simply won't get the information (perhaps until Xen's next
> attempt to copy it out).
> 
> You may want to take a look at the x86 side of this (also the
> vCPU time updating): Due to the way x86-64 PV guests work, the
> address may legitimately be unmapped at the time Xen wants to
> copy it, when the vCPU is currently executing guest user mode
> code. In such a case the copy gets retried the next time the
> guest transitions from user to kernel mode (which involves a
> page table change).

If I understand everything correctly runstate is updated only if there is
a context switch in xen while the guest is running in kernel mode and
if the address is mapped at that time.

So this is a best effort in Xen and the guest cannot really rely on the
runstate information (as it might not be up to date).
Could this have impacts somehow if this is used for scheduling ?

In the end the only accepted trade off would be to:
- reduce error verbosity and just ignore it
- introduce a new system call using a physical address
  -> Using a virtual address with restrictions sounds very complex
  to document (current core, no remapping).

But it feels like having only one hypercall using guest physical addresses
would not really be logic and this kind of change should be made across
all hypercalls if it is done.
 
Bertrand


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:19:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:19:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebAt-0000Uz-Tc; Fri, 29 May 2020 09:19:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jebAt-0000Uo-37
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:19:35 +0000
X-Inumbo-ID: 81c3ecd0-a18d-11ea-a882-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 81c3ecd0-a18d-11ea-a882-12813bfff9fa;
 Fri, 29 May 2020 09:19:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BA5FFAB3D;
 Fri, 29 May 2020 09:19:31 +0000 (UTC)
Subject: Re: [PATCH v10 07/12] xen: provide version information in hypfs
To: Jan Beulich <jbeulich@suse.com>, Paul Durrant <paul@xen.org>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-8-jgross@suse.com>
 <88b80e61-3fb4-8f89-0597-d6959033478b@suse.com>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <65af43c0-2ed4-4330-501f-d561468b7a0e@suse.com>
Date: Fri, 29 May 2020 11:19:30 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <88b80e61-3fb4-8f89-0597-d6959033478b@suse.com>
Content-Type: multipart/mixed; boundary="------------447C7EB7590E9151B9414CFF"
Content-Language: en-US
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a multi-part message in MIME format.
--------------447C7EB7590E9151B9414CFF
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 29.05.20 10:34, Jan Beulich wrote:
> On 19.05.2020 09:21, Juergen Gross wrote:
>> @@ -373,6 +374,52 @@ void __init do_initcalls(void)
>>           (*call)();
>>   }
>>   
>> +#ifdef CONFIG_HYPFS
>> +static unsigned int __read_mostly major_version;
>> +static unsigned int __read_mostly minor_version;
>> +
>> +static HYPFS_DIR_INIT(buildinfo, "buildinfo");
>> +static HYPFS_DIR_INIT(compileinfo, "compileinfo");
>> +static HYPFS_DIR_INIT(version, "version");
>> +static HYPFS_UINT_INIT(major, "major", major_version);
>> +static HYPFS_UINT_INIT(minor, "minor", minor_version);
> 
> These two lines fail to build with gcc 4.1 ("unknown field 'content'
> specified in initializer"), which I've deliberately tried as a last
> minute post-commit, pre-push check. I therefore reverted this change
> before pushing.
> 
> Paul, Jürgen - please advise how to proceed, considering today's
> deadline. I'd accept pushing the rest of the series, if a fix for
> the issue will then still be permitted later. Otherwise I'd have
> to wait for a fixed (incremental) version.

The attached patch should fix this problem (assuming the anonymous
union is to blame).

Could you verify that, please?

In case the patch is fine, I'll resend the rest of the series with
that patch included, as adaptations are needed in later patches.


Juergen

--------------447C7EB7590E9151B9414CFF
Content-Type: text/x-patch; charset=UTF-8;
 name="0001-xen-hypfs-make-struct-hypfs_entry_leaf-initializers-.patch"
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment;
 filename*0="0001-xen-hypfs-make-struct-hypfs_entry_leaf-initializers-.pa";
 filename*1="tch"

>From 1b56440bd50a523bbdbd96f0e1e96c85793108db Mon Sep 17 00:00:00 2001
From: Juergen Gross <jgross@suse.com>
Date: Fri, 29 May 2020 11:09:43 +0200
Subject: [PATCH] xen/hypfs: make struct hypfs_entry_leaf initializers work
 with gcc 4.1

gcc 4.1 has problems with static initializers for anonymous unions.
Fix this by naming the union in struct hypfs_entry_leaf.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/common/hypfs.c      | 8 ++++----
 xen/include/xen/hypfs.h | 6 +++---
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index 9c2213a068..a111c2f86d 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -126,7 +126,7 @@ int hypfs_add_leaf(struct hypfs_entry_dir *parent,
 {
     int ret;
 
-    if ( !leaf->content )
+    if ( !leaf->u.content )
         ret = -EINVAL;
     else
         ret = add_entry(parent, &leaf->e);
@@ -255,7 +255,7 @@ int hypfs_read_leaf(const struct hypfs_entry *entry,
 
     l = container_of(entry, const struct hypfs_entry_leaf, e);
 
-    return copy_to_guest(uaddr, l->content, entry->size) ? -EFAULT: 0;
+    return copy_to_guest(uaddr, l->u.content, entry->size) ? -EFAULT: 0;
 }
 
 static int hypfs_read(const struct hypfs_entry *entry,
@@ -317,7 +317,7 @@ int hypfs_write_leaf(struct hypfs_entry_leaf *leaf,
         goto out;
 
     ret = 0;
-    memcpy(leaf->write_ptr, buf, ulen);
+    memcpy(leaf->u.write_ptr, buf, ulen);
     leaf->e.size = ulen;
 
  out:
@@ -341,7 +341,7 @@ int hypfs_write_bool(struct hypfs_entry_leaf *leaf,
     if ( copy_from_guest(&buf, uaddr, ulen) )
         return -EFAULT;
 
-    *(bool *)leaf->write_ptr = buf;
+    *(bool *)leaf->u.write_ptr = buf;
 
     return 0;
 }
diff --git a/xen/include/xen/hypfs.h b/xen/include/xen/hypfs.h
index 5c6a0ccece..39845ec5ae 100644
--- a/xen/include/xen/hypfs.h
+++ b/xen/include/xen/hypfs.h
@@ -26,7 +26,7 @@ struct hypfs_entry_leaf {
     union {
         const void *content;
         void *write_ptr;
-    };
+    } u;
 };
 
 struct hypfs_entry_dir {
@@ -68,7 +68,7 @@ struct hypfs_entry_dir {
 static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
                                               const char *str)
 {
-    leaf->content = str;
+    leaf->u.content = str;
     leaf->e.size = strlen(str) + 1;
 }
 
@@ -81,7 +81,7 @@ static inline void hypfs_string_set_reference(struct hypfs_entry_leaf *leaf,
         .e.max_size = (wr) ? sizeof(contvar) : 0,        \
         .e.read = hypfs_read_leaf,                       \
         .e.write = (wr),                                 \
-        .content = &(contvar),                           \
+        .u.content = &(contvar),                         \
     }
 
 #define HYPFS_UINT_INIT(var, nam, contvar)                       \
-- 
2.26.2


--------------447C7EB7590E9151B9414CFF--


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:27:46 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebIj-0001SI-No; Fri, 29 May 2020 09:27:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tJvr=7L=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jebIi-0001SD-5N
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:27:40 +0000
X-Inumbo-ID: a3d26170-a18e-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3d26170-a18e-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 09:27:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BxZZQfnKFewwzq6TFnDVatgM6OggcVXQyx/sXIIHAcI=; b=w3HrklYNl+P8T19a167HNJaL8z
 gTW5RAxMHFSeAkK0dpCQ7C6ixBME8D9PDndGjhxSAb8Gx2f+aMTJk+D7XMC3YvdHby84LHa6O87z3
 RVaGr5UDNYHAxZyLB6lc5SWcCy12LbxEohF/ximCti8eIVg7r8jXNyzZA4Udn8+0gNKc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jebIf-00084J-GH; Fri, 29 May 2020 09:27:37 +0000
Received: from [212.230.157.105] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jebIf-0008Qa-1D; Fri, 29 May 2020 09:27:37 +0000
Date: Fri, 29 May 2020 11:27:29 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Message-ID: <20200529092716.GK1195@Air-de-Roger>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <52e26c9d-b662-2597-b521-dacf4f8acfc8@suse.com>
 <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 09:18:42AM +0000, Bertrand Marquis wrote:
> Hi Jan,
> 
> > On 29 May 2020, at 09:45, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 29.05.2020 10:13, Bertrand Marquis wrote:
> >>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
> >>> AFAICT, there is no restriction on when the runstate hypercall can be called. So this can even be called before the vCPU is brought up.
> >> 
> >> I understand the remark but it still feels very weird to allow an invalid address in an hypercall.
> >> Wouldn’t we have a lot of potential issues accepting an address that we cannot check ?
> > 
> > I don't think so: The hypervisor uses copy_to_guest() to protect
> > itself from the addresses to be invalid at the time of copying.
> > If the guest doesn't make sure they're valid at that time, it
> > simply won't get the information (perhaps until Xen's next
> > attempt to copy it out).
> > 
> > You may want to take a look at the x86 side of this (also the
> > vCPU time updating): Due to the way x86-64 PV guests work, the
> > address may legitimately be unmapped at the time Xen wants to
> > copy it, when the vCPU is currently executing guest user mode
> > code. In such a case the copy gets retried the next time the
> > guest transitions from user to kernel mode (which involves a
> > page table change).
> 
> If I understand everything correctly runstate is updated only if there is
> a context switch in xen while the guest is running in kernel mode and
> if the address is mapped at that time.
> 
> So this is a best effort in Xen and the guest cannot really rely on the
> runstate information (as it might not be up to date).
> Could this have impacts somehow if this is used for scheduling ?
> 
> In the end the only accepted trade off would be to:
> - reduce error verbosity and just ignore it
> - introduce a new system call using a physical address
>   -> Using a virtual address with restrictions sounds very complex
>   to document (current core, no remapping).
> 
> But it feels like having only one hypercall using guest physical addresses
> would not really be logic and this kind of change should be made across
> all hypercalls if it is done.

FTR, there are other hypercalls using a physical address instead of a
linear one, see VCPUOP_register_vcpu_info for example. It's just a
mixed bag right now, with some hypercalls using a linear address and
some using a physical one.

I think introducing a new hypercall that uses a physical address would
be fine, and then you can add a set of restrictions similar to the
ones listed by VCPUOP_register_vcpu_info.

Changing the current hypercall as proposed is risky, but I think the
current behavior is broken by design, especially on auto-translated
guests, even more with XPTI.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:31:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:31:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebMY-0002Gf-7y; Fri, 29 May 2020 09:31:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jebMW-0002Ga-Dd
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:31:36 +0000
X-Inumbo-ID: 30298bda-a18f-11ea-9947-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30298bda-a18f-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 09:31:35 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9303AAF77;
 Fri, 29 May 2020 09:31:33 +0000 (UTC)
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <52e26c9d-b662-2597-b521-dacf4f8acfc8@suse.com>
 <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3dce8e37-b9a3-c3e6-10f4-b7489f1dea03@suse.com>
Date: Fri, 29 May 2020 11:31:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 11:18, Bertrand Marquis wrote:
> Hi Jan,
> 
>> On 29 May 2020, at 09:45, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 29.05.2020 10:13, Bertrand Marquis wrote:
>>>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>>> AFAICT, there is no restriction on when the runstate hypercall can be called. So this can even be called before the vCPU is brought up.
>>>
>>> I understand the remark but it still feels very weird to allow an invalid address in an hypercall.
>>> Wouldn’t we have a lot of potential issues accepting an address that we cannot check ?
>>
>> I don't think so: The hypervisor uses copy_to_guest() to protect
>> itself from the addresses to be invalid at the time of copying.
>> If the guest doesn't make sure they're valid at that time, it
>> simply won't get the information (perhaps until Xen's next
>> attempt to copy it out).
>>
>> You may want to take a look at the x86 side of this (also the
>> vCPU time updating): Due to the way x86-64 PV guests work, the
>> address may legitimately be unmapped at the time Xen wants to
>> copy it, when the vCPU is currently executing guest user mode
>> code. In such a case the copy gets retried the next time the
>> guest transitions from user to kernel mode (which involves a
>> page table change).
> 
> If I understand everything correctly runstate is updated only if there is
> a context switch in xen while the guest is running in kernel mode and
> if the address is mapped at that time.
> 
> So this is a best effort in Xen and the guest cannot really rely on the
> runstate information (as it might not be up to date).
> Could this have impacts somehow if this is used for scheduling ?

Yes, it could, and hence it's not really best effort only:
the retry upon guest mode switch was added years ago.
(This was one of the things overlooked when x86-64 support was
introduced, as x86-32 doesn't have this same problem.) The
updating is best-effort only as far as a misbehaving guest goes;
in all other aspects it's reliable, assuming that vCPUs only
look at their own data for the purpose of making decisions
(logging and the like are of course still fine, as long as people
are aware of the data possibly being stale).

> In the end the only accepted trade off would be to:
> - reduce error verbosity and just ignore it
> - introduce a new system call using a physical address
>   -> Using a virtual address with restrictions sounds very complex
>   to document (current core, no remapping).
> 
> But it feels like having only one hypercall using guest physical addresses
> would not really be logic and this kind of change should be made across
> all hypercalls if it is done.

Hence my request to preferably first settle on a model, such
that the change here could be simply the first step on that
journey.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:34:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:34:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebPP-0002OY-NT; Fri, 29 May 2020 09:34:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jebPP-0002OT-9r
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:34:35 +0000
X-Inumbo-ID: 979c22fc-a18f-11ea-a882-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 979c22fc-a18f-11ea-a882-12813bfff9fa;
 Fri, 29 May 2020 09:34:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AvtBuYeb1l7+x/pZ/Cb3JtDXpEP14QvoM6lE/EoWFr8=; b=yVihxydJbFxwHiH0Oz6dC71PJ
 9R5+2hIGTnnBNhuZ844dhon4sr2GRdSUkymRl0rsYlEVZxorRJVZbXvDWrlCgP1Y5WqmPPV+mxneF
 7TEGqlqtYRTXpwgaqKJf6kMqwFR1liwDC0nzogDsQQvTTOgyW06TawJ7ZH2qoUKKJ6cFc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jebPI-0008Cy-M0; Fri, 29 May 2020 09:34:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jebPI-00089q-9D; Fri, 29 May 2020 09:34:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jebPI-00023V-8X; Fri, 29 May 2020 09:34:28 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150465-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150465: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 09:34:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150465 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150465/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a

version targeted for testing:
 xen                  ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    0 days
Testing same since   150465  2020-05-29 09:02:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:32 2020 +0200

    tools: add xenfs tool
    
    Add the xenfs tool for accessing the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 86234eafb95295621aef6c618e4c22c10d8e4138
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:21 2020 +0200

    libs: add libxenhypfs
    
    Add the new library libxenhypfs for access to the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5b5ccafb0c425b85a60fd4f241d5f6951d0e4928
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:50 2020 +0200

    xen: add basic hypervisor filesystem support
    
    Add the infrastructure for the hypervisor filesystem.
    
    This includes the hypercall interface and the base functions for
    entry creation, deletion and modification.
    
    In order not to have to repeat the same pattern multiple times in case
    adding a new node should BUG_ON() failure, the helpers for adding a
    node (hypfs_add_dir() and hypfs_add_leaf()) get a nofault parameter
    causing the BUG() in case of a failure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 0e9dcd0159c671608e154da5b8b7e0edd2905067
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:35 2020 +0200

    docs: add feature document for Xen hypervisor sysfs-like support
    
    On the 2019 Xen developer summit there was agreement that the Xen
    hypervisor should gain support for a hierarchical name-value store
    similar to the Linux kernel's sysfs.
    
    In the beginning there should only be basic support: entries can be
    added from the hypervisor itself only, there is a simple hypercall
    interface to read the data.
    
    Add a feature document for setting the base of a discussion regarding
    the desired functionality and the entries to add.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit c48a9956e334a5dd99e846d04ad56185b07aab64
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:08 2020 +0200

    xen: add a generic way to include binary files as variables
    
    Add a new script xen/tools/binfile for including a binary file at build
    time being usable via a pointer and a size variable in the hypervisor.
    
    Make use of that generic tool in xsm.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:34:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:34:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebPW-0002P1-Vt; Fri, 29 May 2020 09:34:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jebPV-0002Os-Hm
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:34:41 +0000
X-Inumbo-ID: 9ec03c6a-a18f-11ea-a882-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9ec03c6a-a18f-11ea-a882-12813bfff9fa;
 Fri, 29 May 2020 09:34:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 77525AC85;
 Fri, 29 May 2020 09:34:39 +0000 (UTC)
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n
Message-ID: <8a4c4486-cf27-66a0-5ff9-5329277deccf@suse.com>
Date: Fri, 29 May 2020 11:34:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

While the behavior of ignoring this option without FLASK support was
properly documented, it is still somewhat surprising to someone using
this option to then still _not_ get the assumed security. Add a
second handler for the command line option for the XSM_FLASK=n case, and
invoke panic() when the option is specified (and not subsequently
overridden by "flask=disabled").

Suggested-by: Ian Jackson <ian.jackson@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -998,8 +998,9 @@ to use the default.
 > Default: `enforcing`
 
 Specify how the FLASK security server should be configured.  This option is only
-available if the hypervisor was compiled with FLASK support.  This can be
-enabled by running either:
+available if the hypervisor was compiled with FLASK support, except that
+`flask=enforcing` will still keep the hypervisor from successfully booting even
+without FLASK support.  FLASK support can be enabled by running either:
 - make -C xen config and enabling XSM and FLASK.
 - make -C xen menuconfig and enabling 'FLux Advanced Security Kernel support' and 'Xen Security Modules support'
 
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -211,7 +211,33 @@ int __init register_xsm(struct xsm_opera
     return 0;
 }
 
-#endif
+#endif /* CONFIG_XSM */
+
+#ifndef CONFIG_XSM_FLASK
+static bool __initdata _flask_enforcing;
+
+static int __init parse_flask_param(const char *s)
+{
+    if ( !strcmp(s, "enforcing") )
+        _flask_enforcing = true;
+    else if ( !strcmp(s, "disabled") )
+        _flask_enforcing = false;
+    else
+        return -EINVAL;
+
+    return 0;
+}
+custom_param("flask", parse_flask_param);
+
+static int __init check_flask_enforcing(void)
+{
+    if ( _flask_enforcing )
+        panic("\"flask=enforcing\" specified without FLASK support\n");
+
+    return 0;
+}
+__initcall(check_flask_enforcing);
+#endif /* !CONFIG_XSM_FLASK */
 
 long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:43:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:43:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebYL-0003P0-1Z; Fri, 29 May 2020 09:43:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jebYJ-0003Ov-Ai
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:43:47 +0000
X-Inumbo-ID: e4427aea-a190-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4427aea-a190-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 09:43:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=KOGKXrDjJc4HsAjRnN91cMB+Y4PI364zII3ohOgiF/M=; b=oUp7MPF2gnkkQGKGfM12cC9xpI
 yWySF2g0JuiwPL572jiNjXiLcGZt1t+ddke84y5LowrjQkfqdAl5mkxQXnbOU1m+KLAwBlJC79PJW
 C3ExR6eRpj4YDQWwhPMOoxrq8pE5XpAHDgdX6U8hHqNCNVeCn0XiaclYQu0cy/1247sc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jebYC-0008PD-Va; Fri, 29 May 2020 09:43:40 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jebYC-0000x7-BD; Fri, 29 May 2020 09:43:40 +0000
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <07a1008d-1acb-aab6-ab10-176e7856a296@xen.org>
Date: Fri, 29 May 2020 10:43:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Bertrand,

On 29/05/2020 09:13, Bertrand Marquis wrote:
> Hi Julien,
> 
>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Bertrand,
>>
>> Thank you for the patch.
>>
>> On 28/05/2020 16:25, Bertrand Marquis wrote:
>>> At the moment on Arm, a Linux guest running with KPTI enabled will
>>> cause the following error when a context switch happens in user mode:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>> This patch modifies runstate handling to map the area given by the
>>> guest inside Xen during the hypercall.
>>> This removes the guest virtual-to-physical conversion during context
>>> switches, which fixes the bug
>>
>> It would be good to spell out that a virtual address is not stable. So relying on it is wrong.
>>
>>> and improves performance by avoiding page-table
>>> walks during context switches.
>>
>> With the secret-free hypervisor work in mind, I would suggest mapping/unmapping the runstate during context switch.
>>
>> The cost should be minimal when there is a direct map (i.e on Arm64 and x86) and still provide better performance on Arm32.
> 
> Even with a minimal cost this is still adding some non-real-time behaviour to the context switch.

Just to be clear, by minimal I meant the mapping part is just a 
virt_to_mfn() call and the unmapping is a NOP.

IMHO, if virt_to_mfn() ends up adding non-RT behavior, then you have a much
bigger problem than just this call.

> But definitely from the security point of view, as we have to map a page from the guest, some data that should not be there could become accessible in Xen.
> There is a trade-off here:
> - Xen can protect itself by mapping/unmapping
> - a guest which wants to secure its data should either not use it or make sure there is nothing else in the page

Both are valid and depend on your setup. One may want to protect all 
the existing guests, so modifying a guest may not be a solution.

> 
> That sounds like a thread-local-storage kind of problem, where we want data from Xen to be quickly accessible from the guest and easy to modify from Xen.

Agreed. On x86 there is a per-domain mapping, so it would be possible to 
do it. We would need to add that feature on Arm.

> 
>>
>> The change should be minimal compared to the current approach, but this could be taken care of separately if you don't have time.
> 
> I could add that to the series as a separate patch so that it can be discussed and tested separately?

If you are picking up the task, then I would suggest adding it directly in 
this patch.

> 
>>
>>> --
>>> In the current status, this patch is only working on Arm and needs to
>>> be fixed on X86 (see #error on domain.c for missing get_page_from_gva).
>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>> ---
>>>   xen/arch/arm/domain.c   | 32 +++++++++-------
>>>   xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
>>>   xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++-------
>>>   xen/include/xen/sched.h | 11 ++++--
>>>   4 files changed, 124 insertions(+), 54 deletions(-)
>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>> index 31169326b2..799b0e0103 100644
>>> --- a/xen/arch/arm/domain.c
>>> +++ b/xen/arch/arm/domain.c
>>> @@ -278,33 +278,37 @@ static void ctxt_switch_to(struct vcpu *n)
>>>   /* Update per-VCPU guest runstate shared memory area (if registered). */
>>>   static void update_runstate_area(struct vcpu *v)
>>>   {
>>> -    void __user *guest_handle = NULL;
>>> -    struct vcpu_runstate_info runstate;
>>> +    struct vcpu_runstate_info *runstate;
>>>   -    if ( guest_handle_is_null(runstate_guest(v)) )
>>> +    /* XXX why do we accept not to block here */
>>> +    if ( !spin_trylock(&v->runstate_guest_lock) )
>>>           return;
>>>   -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>>> +    runstate = runstate_guest(v);
>>> +
>>> +    if (runstate == NULL)
>>> +    {
>>> +        spin_unlock(&v->runstate_guest_lock);
>>> +        return;
>>> +    }
>>>         if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>>       {
>>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>>> -        guest_handle--;
>>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>> -        __raw_copy_to_guest(guest_handle,
>>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>>> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>>           smp_wmb();
>>
>> Because you set v->runstate.state_entry_time below, the placement of the barrier seems a bit odd.
>>
>> I would suggest to update v->runstate.state_entry_time first and then update runstate->state_entry_time.
> 
> We do want the guest to know when we modify the runstate, so we need to make sure XEN_RUNSTATE_UPDATE is actually set in a visible way before we do the memcpy.
> That's why the barrier is after.

I think you misunderstood my comment here. I meant to write the 
following code:

v->runstate.state_entry_time = ...
runstate->state_entry_time = ...
smp_wmb()

So it is much clearer that the barrier is because 
runstate->state_entry_time needs to be updated before the memcpy().

>>> +
>>> +#ifdef CONFIG_ARM
>>> +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
>>
>> A guest is allowed to set up the runstate for a different vCPU than the current one. This will cause get_page_from_gva() to fail, as the function cannot yet work with a vCPU other than current.
> 
> If the area is mapped per vCPU, isn't the aim of this to have a way to check other vCPUs' status?

The aim is to collect how much time a vCPU has been unscheduled. This 
doesn't have to be run from a different vCPU.

Anyway, my point is the hypercall allows it today. If you want to make 
such a restriction, then we need to agree on it and document it.

> 
>>
>> AFAICT, there is no restriction on when the runstate hypercall can be called. So this can even be called before the vCPU is brought up.
> 
> I understand the remark, but it still feels very weird to allow an invalid address in a hypercall.
> Wouldn't we have a lot of potential issues accepting an address that we cannot check?

Well, that's why you see errors when using KPTI ;). If you use a global 
mapping, then it is not possible to continue without validating the address.

But to do this, you will have to put restrictions on the hypercalls. I 
would be OK with making such a restriction on Arm.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:50:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebeH-0003aA-O0; Fri, 29 May 2020 09:49:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jebeF-0003a5-VC
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:49:56 +0000
X-Inumbo-ID: c0163aa2-a191-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0163aa2-a191-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 09:49:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
 References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gOmCYfViT/gnMVzEfQOK08rkdIYmB9uvCDe9hw0Yw5Y=; b=CdoVjT0Z8XSHGVcO/KWT3rhrip
 AtQSdn2RcYzae1lcl8ig2HRWcrOfzx1hLHmKGhxAlbi4NCANThJppbAbCjDYtjP4mP3JVZQ872d0I
 /9xkvKKRz1Lp32/bL5so75g+W3RVw8t/sLrBrMRmb2XRbQDahG3oQn/VUAqHHG4HpJVo=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jebeC-00005E-NV; Fri, 29 May 2020 09:49:52 +0000
Received: from 54-240-197-228.amazon.com ([54.240.197.228]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jebeC-0001Jo-C7; Fri, 29 May 2020 09:49:52 +0000
Message-ID: <bb5b3e39f59e652dd4040fcfe936334aa1a6d17b.camel@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
From: Hongyan Xia <hx242@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>, Julien Grall <julien@xen.org>
Date: Fri, 29 May 2020 10:49:49 +0100
In-Reply-To: <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, 2020-05-29 at 08:13 +0000, Bertrand Marquis wrote:
> Hi Julien,
> 
> > On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
> > 
> > Hi Bertrand,
> > 
> > Thank you for the patch.
> > 
> > On 28/05/2020 16:25, Bertrand Marquis wrote:
> > > At the moment on Arm, a Linux guest running with KPTI enabled
> > > will
> > > cause the following error when a context switch happens in user
> > > mode:
> > > (XEN) p2m.c:1890: d1v0: Failed to walk page-table va
> > > 0xffffff837ebe0cd0
> > > This patch modifies runstate handling to map the area given
> > > by the
> > > guest inside Xen during the hypercall.
> > > This removes the guest virtual-to-physical conversion during
> > > context
> > > switches, which fixes the bug
> > 
> > It would be good to spell out that a virtual address is not stable.
> > So relying on it is wrong.
> > 
> > > and improves performance by avoiding page-table
> > > walks during context switches.
> > 
> > With the secret-free hypervisor work in mind, I would suggest
> > mapping/unmapping the runstate during context switch.
> > 
> > The cost should be minimal when there is a direct map (i.e on Arm64
> > and x86) and still provide better performance on Arm32.
> 
> Even with a minimal cost this is still adding some non-real-time
> behaviour to the context switch.
> But definitely from the security point of view, as we have to map a
> page from the guest, some data that should not be there could become
> accessible in Xen.
> There is a trade-off here:
> - Xen can protect itself by mapping/unmapping
> - a guest which wants to secure its data should either not use it or
> make sure there is nothing else in the page
> 
> That sounds like a thread-local-storage kind of problem, where we
> want data from Xen to be quickly accessible from the guest and easy to
> modify from Xen.

Can't we just map it into the per-domain region, so that the mapping
and unmapping of runstate is baked into the per-domain region switch
itself during context switch?

Anyway, I am glad to help with the secret-free bits if required, although 
my knowledge of the Xen Arm side is limited.

Hongyan



From xen-devel-bounces@lists.xenproject.org Fri May 29 09:53:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:53:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebhj-0004Oq-8d; Fri, 29 May 2020 09:53:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jebhi-0004Ol-P3
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:53:30 +0000
X-Inumbo-ID: 3e972f12-a192-11ea-a884-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e972f12-a192-11ea-a884-12813bfff9fa;
 Fri, 29 May 2020 09:53:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8744BAC85;
 Fri, 29 May 2020 09:53:26 +0000 (UTC)
Subject: Re: [PATCH v10 07/12] xen: provide version information in hypfs
To: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-8-jgross@suse.com>
 <88b80e61-3fb4-8f89-0597-d6959033478b@suse.com>
 <65af43c0-2ed4-4330-501f-d561468b7a0e@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <efb1abe3-781e-8f17-0bdd-bb15e78f05e0@suse.com>
Date: Fri, 29 May 2020 11:53:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <65af43c0-2ed4-4330-501f-d561468b7a0e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 11:19, Jürgen Groß wrote:
> On 29.05.20 10:34, Jan Beulich wrote:
>> On 19.05.2020 09:21, Juergen Gross wrote:
>>> @@ -373,6 +374,52 @@ void __init do_initcalls(void)
>>>           (*call)();
>>>   }
>>>   
>>> +#ifdef CONFIG_HYPFS
>>> +static unsigned int __read_mostly major_version;
>>> +static unsigned int __read_mostly minor_version;
>>> +
>>> +static HYPFS_DIR_INIT(buildinfo, "buildinfo");
>>> +static HYPFS_DIR_INIT(compileinfo, "compileinfo");
>>> +static HYPFS_DIR_INIT(version, "version");
>>> +static HYPFS_UINT_INIT(major, "major", major_version);
>>> +static HYPFS_UINT_INIT(minor, "minor", minor_version);
>>
>> These two lines fail to build with gcc 4.1 ("unknown field 'content'
>> specified in initializer"), which I've deliberately tried as a last
>> minute post-commit, pre-push check. I therefore reverted this change
>> before pushing.
>>
>> Paul, Jürgen - please advise how to proceed, considering today's
>> deadline. I'd accept pushing the rest of the series, if a fix for
>> the issue will then still be permitted to go in later. Otherwise I'd
>> have to wait for a fixed (incremental) version.
> 
> The attached patch should fix this problem (assuming the anonymous
> union is to blame).
> 
> Could you verify that, please?

Reviewed-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Jan Beulich <jbeulich@suse.com>

> In case the patch is fine, I'll resend the rest of the series with
> that patch included, as there are adaptations needed in later patches.

No need to, if you trust me to have made the right changes - I've
also verified the rest of the series builds fine there.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:53:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:53:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebhv-0004QX-HG; Fri, 29 May 2020 09:53:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TCiv=7L=redhat.com=armbru@srs-us1.protection.inumbo.net>)
 id 1jebhu-0004QK-3D
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:53:42 +0000
X-Inumbo-ID: 4682678c-a192-11ea-81bc-bc764e2007e4
Received: from us-smtp-delivery-1.mimecast.com (unknown [205.139.110.120])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4682678c-a192-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 09:53:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=redhat.com;
 s=mimecast20190719; t=1590746020;
 h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
 in-reply-to:in-reply-to:references:references;
 bh=GBbcRVFEfSxughTklZo44hT/qdDRmX6G0cum4Aww9eQ=;
 b=V31SYRYB+IKCCsOr8DFt+iQ9dOkMDn9oczKRCnTmSoYJBe3PYWwYDHCQS5ByV7YvGPs2Cj
 Bk4Z9z18MbzLBnrPsZm2IajMTe7+LPGUUIp3BweywGDJZt+XsxXH1T4jzk+yUfLEp0D4PF
 WVcs1Q5eB33Hy30RJDlRyipLewg+yCs=
Received: from mimecast-mx01.redhat.com (mimecast-mx01.redhat.com
 [209.132.183.4]) (Using TLS) by relay.mimecast.com with ESMTP id
 us-mta-250-hXw6j1zUO6qJWRJ1JtaZ6A-1; Fri, 29 May 2020 05:53:38 -0400
X-MC-Unique: hXw6j1zUO6qJWRJ1JtaZ6A-1
Received: from smtp.corp.redhat.com (int-mx07.intmail.prod.int.phx2.redhat.com
 [10.5.11.22])
 (using TLSv1.2 with cipher AECDH-AES256-SHA (256/256 bits))
 (No client certificate requested)
 by mimecast-mx01.redhat.com (Postfix) with ESMTPS id A3FFABFC7;
 Fri, 29 May 2020 09:53:36 +0000 (UTC)
Received: from blackfin.pond.sub.org (ovpn-112-32.ams2.redhat.com
 [10.36.112.32])
 by smtp.corp.redhat.com (Postfix) with ESMTPS id AC8F110013D4;
 Fri, 29 May 2020 09:53:27 +0000 (UTC)
Received: by blackfin.pond.sub.org (Postfix, from userid 1000)
 id 2BDC6113864A; Fri, 29 May 2020 11:53:26 +0200 (CEST)
From: Markus Armbruster <armbru@redhat.com>
To: Roman Kagan <rvkagan@yandex-team.ru>
Subject: Re: [PATCH v8 2/8] block: consolidate blocksize properties
 consistency checks
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
 <20200528225516.1676602-3-rvkagan@yandex-team.ru>
Date: Fri, 29 May 2020 11:53:26 +0200
In-Reply-To: <20200528225516.1676602-3-rvkagan@yandex-team.ru> (Roman Kagan's
 message of "Fri, 29 May 2020 01:55:10 +0300")
Message-ID: <87r1v3m5ih.fsf@dusky.pond.sub.org>
User-Agent: Gnus/5.13 (Gnus v5.13) Emacs/26.3 (gnu/linux)
MIME-Version: 1.0
X-Scanned-By: MIMEDefang 2.84 on 10.5.11.22
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: redhat.com
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "Daniel P. =?utf-8?Q?Berrang=C3=A9?=" <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 Philippe =?utf-8?Q?Mathieu-Daud=C3=A9?= <philmd@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org,
 Laurent Vivier <laurent@vivier.eu>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 xen-devel@lists.xenproject.org, Keith Busch <kbusch@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Max Reitz <mreitz@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Roman Kagan <rvkagan@yandex-team.ru> writes:

> Several block device properties related to blocksize configuration must
> be in a certain relationship with each other: the physical block size
> must be no smaller than the logical block size; min_io_size, opt_io_size,
> and discard_granularity must be multiples of the logical block size.
>
> To ensure these requirements are met, add corresponding consistency
> checks to blkconf_blocksizes, adjusting its signature to communicate a
> possible error to the caller.  Also remove the now-redundant consistency
> checks from the specific devices.
>
> Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> Reviewed-by: Eric Blake <eblake@redhat.com>
> Reviewed-by: Paul Durrant <paul@xen.org>
> ---
>  include/hw/block/block.h   |  2 +-
>  hw/block/block.c           | 30 +++++++++++++++++++++++++++++-
>  hw/block/fdc.c             |  5 ++++-
>  hw/block/nvme.c            |  5 ++++-
>  hw/block/swim.c            |  5 ++++-
>  hw/block/virtio-blk.c      |  7 +------
>  hw/block/xen-block.c       |  6 +-----
>  hw/ide/qdev.c              |  5 ++++-
>  hw/scsi/scsi-disk.c        | 12 +++++-------
>  hw/usb/dev-storage.c       |  5 ++++-
>  tests/qemu-iotests/172.out |  2 +-
>  11 files changed, 58 insertions(+), 26 deletions(-)
>
> diff --git a/include/hw/block/block.h b/include/hw/block/block.h
> index d7246f3862..784953a237 100644
> --- a/include/hw/block/block.h
> +++ b/include/hw/block/block.h
> @@ -87,7 +87,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
>  bool blkconf_geometry(BlockConf *conf, int *trans,
>                        unsigned cyls_max, unsigned heads_max, unsigned secs_max,
>                        Error **errp);
> -void blkconf_blocksizes(BlockConf *conf);
> +bool blkconf_blocksizes(BlockConf *conf, Error **errp);
>  bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
>                                     bool resizable, Error **errp);
>  
> diff --git a/hw/block/block.c b/hw/block/block.c
> index bf56c7612b..b22207c921 100644
> --- a/hw/block/block.c
> +++ b/hw/block/block.c
> @@ -61,7 +61,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
>      return true;
>  }
>  
> -void blkconf_blocksizes(BlockConf *conf)
> +bool blkconf_blocksizes(BlockConf *conf, Error **errp)
>  {
>      BlockBackend *blk = conf->blk;
>      BlockSizes blocksizes;
> @@ -83,6 +83,34 @@ void blkconf_blocksizes(BlockConf *conf)
>              conf->logical_block_size = BDRV_SECTOR_SIZE;
>          }
>      }
> +
> +    if (conf->logical_block_size > conf->physical_block_size) {
> +        error_setg(errp,
> +                   "logical_block_size > physical_block_size not supported");
> +        return false;
> +    }

Pardon me if this has been answered already for prior revisions: do we
really support physical block sizes that are not a multiple of the
logical block size?

> +
> +    if (!QEMU_IS_ALIGNED(conf->min_io_size, conf->logical_block_size)) {
> +        error_setg(errp,
> +                   "min_io_size must be a multiple of logical_block_size");
> +        return false;
> +    }
> +
> +    if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
> +        error_setg(errp,
> +                   "opt_io_size must be a multiple of logical_block_size");
> +        return false;
> +    }
> +
> +    if (conf->discard_granularity != -1 &&
> +        !QEMU_IS_ALIGNED(conf->discard_granularity,
> +                         conf->logical_block_size)) {
> +        error_setg(errp, "discard_granularity must be "
> +                   "a multiple of logical_block_size");
> +        return false;
> +    }
> +
> +    return true;
>  }
>  
>  bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
> diff --git a/hw/block/fdc.c b/hw/block/fdc.c
> index c5fb9d6ece..8eda572ef4 100644
> --- a/hw/block/fdc.c
> +++ b/hw/block/fdc.c
> @@ -554,7 +554,10 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
>          read_only = !blk_bs(dev->conf.blk) || blk_is_read_only(dev->conf.blk);
>      }
>  
> -    blkconf_blocksizes(&dev->conf);
> +    if (!blkconf_blocksizes(&dev->conf, errp)) {
> +        return;
> +    }
> +
>      if (dev->conf.logical_block_size != 512 ||
>          dev->conf.physical_block_size != 512)
>      {
> diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> index 2f3100e56c..672650e162 100644
> --- a/hw/block/nvme.c
> +++ b/hw/block/nvme.c
> @@ -1390,7 +1390,10 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
>          host_memory_backend_set_mapped(n->pmrdev, true);
>      }
>  
> -    blkconf_blocksizes(&n->conf);
> +    if (!blkconf_blocksizes(&n->conf, errp)) {
> +        return;
> +    }
> +
>      if (!blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk),
>                                         false, errp)) {
>          return;
> diff --git a/hw/block/swim.c b/hw/block/swim.c
> index 8f124782f4..74f56e8f46 100644
> --- a/hw/block/swim.c
> +++ b/hw/block/swim.c
> @@ -189,7 +189,10 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
>          assert(ret == 0);
>      }
>  
> -    blkconf_blocksizes(&dev->conf);
> +    if (!blkconf_blocksizes(&dev->conf, errp)) {
> +        return;
> +    }
> +
>      if (dev->conf.logical_block_size != 512 ||
>          dev->conf.physical_block_size != 512)
>      {
> diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> index 413083e62f..4ffdb130be 100644
> --- a/hw/block/virtio-blk.c
> +++ b/hw/block/virtio-blk.c
> @@ -1162,12 +1162,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
>          return;
>      }
>  
> -    blkconf_blocksizes(&conf->conf);
> -
> -    if (conf->conf.logical_block_size >
> -        conf->conf.physical_block_size) {
> -        error_setg(errp,
> -                   "logical_block_size > physical_block_size not supported");
> +    if (!blkconf_blocksizes(&conf->conf, errp)) {
>          return;
>      }
>  
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index 570489d6d9..e17fec50e1 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -239,11 +239,7 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
>          return;
>      }
>  
> -    blkconf_blocksizes(conf);
> -
> -    if (conf->logical_block_size > conf->physical_block_size) {
> -        error_setg(
> -            errp, "logical_block_size > physical_block_size not supported");
> +    if (!blkconf_blocksizes(conf, errp)) {
>          return;
>      }
>  
> diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
> index 06b11583f5..b4821b2403 100644
> --- a/hw/ide/qdev.c
> +++ b/hw/ide/qdev.c
> @@ -187,7 +187,10 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
>          return;
>      }
>  
> -    blkconf_blocksizes(&dev->conf);
> +    if (!blkconf_blocksizes(&dev->conf, errp)) {
> +        return;
> +    }
> +
>      if (dev->conf.logical_block_size != 512) {
>          error_setg(errp, "logical_block_size must be 512 for IDE");
>          return;
> diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
> index 387503e11b..8ce68a9dd6 100644
> --- a/hw/scsi/scsi-disk.c
> +++ b/hw/scsi/scsi-disk.c
> @@ -2346,12 +2346,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
>          return;
>      }
>  
> -    blkconf_blocksizes(&s->qdev.conf);
> -
> -    if (s->qdev.conf.logical_block_size >
> -        s->qdev.conf.physical_block_size) {
> -        error_setg(errp,
> -                   "logical_block_size > physical_block_size not supported");
> +    if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
>          return;
>      }
>  
> @@ -2436,7 +2431,9 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
>      if (s->qdev.conf.blk) {
>          ctx = blk_get_aio_context(s->qdev.conf.blk);
>          aio_context_acquire(ctx);
> -        blkconf_blocksizes(&s->qdev.conf);
> +        if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
> +            goto out;
> +        }
>      }
>      s->qdev.blocksize = s->qdev.conf.logical_block_size;
>      s->qdev.type = TYPE_DISK;
> @@ -2444,6 +2441,7 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
>          s->product = g_strdup("QEMU HARDDISK");
>      }
>      scsi_realize(&s->qdev, errp);
> +out:
>      if (ctx) {
>          aio_context_release(ctx);
>      }
> diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
> index 4eba47538d..de461f37bd 100644
> --- a/hw/usb/dev-storage.c
> +++ b/hw/usb/dev-storage.c
> @@ -599,7 +599,10 @@ static void usb_msd_storage_realize(USBDevice *dev, Error **errp)
>          return;
>      }
>  
> -    blkconf_blocksizes(&s->conf);
> +    if (!blkconf_blocksizes(&s->conf, errp)) {
> +        return;
> +    }
> +
>      if (!blkconf_apply_backend_options(&s->conf, blk_is_read_only(blk), true,
>                                         errp)) {
>          return;
> diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
> index 7abbe82427..59cc70aebb 100644
> --- a/tests/qemu-iotests/172.out
> +++ b/tests/qemu-iotests/172.out
> @@ -1204,7 +1204,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
>                  drive-type = "144"
>  
>  Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical_block_size=4096
> -QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: Physical and logical block size must be 512 for floppy
> +QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: logical_block_size > physical_block_size not supported
>  
>  Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physical_block_size=1024
>  QEMU_PROG: -device floppy,drive=none0,physical_block_size=1024: Physical and logical block size must be 512 for floppy

This no longer exercises floppy_drive_realize()'s check of
logical_block_size:

    if (dev->conf.logical_block_size != 512 ||
        dev->conf.physical_block_size != 512)
    {
        error_setg(errp, "Physical and logical block size must "
                   "be 512 for floppy");
        return;
    }

Please update the test.



From xen-devel-bounces@lists.xenproject.org Fri May 29 09:55:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:55:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebjz-0004d5-1h; Fri, 29 May 2020 09:55:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K2ub=7L=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jebjy-0004cz-6V
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:55:50 +0000
X-Inumbo-ID: 927d5048-a192-11ea-a884-12813bfff9fa
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 927d5048-a192-11ea-a884-12813bfff9fa;
 Fri, 29 May 2020 09:55:48 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: APnZCmYO1jHiQ2/5NGVObmdQqF1pqHQbm6o5GYuCIiegfTwXA22LRJ1aJj5icJvgnW/aBMX+r+
 RWYXaeivtT33PvrMNm+F+LOkD+fcAJMPrBc3yfKy79maqNvF0EFmSU2eKNn8IDnh/VrkIj2U+n
 OQGOJEcYWiYa/IHLNQGUKnk/xRmHTH2mnoEFyvloV4+H+61yGcGWlEl4HBrbDrMAKHU3gTYpUU
 ClynxUM4S6MAe74NSHoiuo0WxlwLFOGwHXJ/sfgkk0rWWrWylAVlyBDbb9WC0YFizk0OmKWvSr
 zCU=
X-SBRS: 2.7
X-MesageID: 18996183
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="18996183"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Topic: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Index: AQHWND0/xSJFUOR5YU28XikgbPuX46i7+DWAgAKqrACAABHAAA==
Date: Fri, 29 May 2020 09:55:44 +0000
Message-ID: <9A98D1CA-83E5-4E03-8ED6-E353653338CB@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <757d53a0-ec8f-62f1-ca20-72eaf9e1c84d@suse.com>
In-Reply-To: <757d53a0-ec8f-62f1-ca20-72eaf9e1c84d@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <F4C8026B8615C34A912099095CD116CF@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, "cjwatson@debian.org" <cjwatson@debian.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On May 29, 2020, at 9:52 AM, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 27.05.2020 18:08, George Dunlap wrote:
>>> On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
>>> 2. Xen should disable the XSM policy build when FLASK is disabled.
>>> This is unfortunately not so simple because the XSM policy build is a
>>> tools option and FLASK is a Xen option and the configuration systems
>>> are disjoint.  But at the very least a default build, which has no XSM
>>> support, should not build an XSM policy file either.
>> 
>> A simple thing to do here would be to have the flask policy controlled by a configure --flask option.  If neither --flask nor --no-flask is specified, we could maybe have configure also check the contents of xen/.config to see if CONFIG_XSM_FLASK is enabled?
> 
> But that's creating a chicken-and-egg problem: There's no ordering
> between the tools/ and xen/ builds. xen/ may not be built at all,
> and this is going to become increasingly likely once the xen/ part
> of the tree supports out-of-tree builds (at least I'll switch most
> of my build trees to that model as soon as it's available).

Do out-of-tree builds put the .config out of tree as well?  If so, yes, that would definitely limit the usefulness of this idea.

My suggestion was based on the idea that a "typical" build *which might enable XSM* would look like this:

1. Run 'make menuconfig' (or something like it) in xen/

2. Run ./configure at the toplevel

3. Do the full build.

But looking back at it, there's no particular reason that someone coming to build Xen might expect to do things in that order rather than #1.

It might make sense to simply declare that the tools depend on the xen config, and modifying ./configure to represent that:

1. Add a `--xen-config=` option to configure telling it where to look for the xen config; if that's not specified, look for a specific shell variable saying where the Xen tree is; if that's not available, looking in the current tree.

2. Have ./configure fail by default if it can't find a .config, unless --no-xen-config is specified.

> Do we perhaps need to resort to a make command line option, usable
> at the top level as well as for major subtree builds, and being
> honored (as long as set either way, falling back to current
> behavior if unset) by both ./configure and the hypervisor's
> Kconfig?

What kind of command-line option did you have in mind?

 -George


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:56:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:56:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebkq-0004i0-Bg; Fri, 29 May 2020 09:56:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jebko-0004hq-FY
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:56:42 +0000
X-Inumbo-ID: b226005c-a192-11ea-a884-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b226005c-a192-11ea-a884-12813bfff9fa;
 Fri, 29 May 2020 09:56:41 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7AFF1ACC3;
 Fri, 29 May 2020 09:56:40 +0000 (UTC)
Subject: Re: [PATCH v10 07/12] xen: provide version information in hypfs
To: Jan Beulich <jbeulich@suse.com>
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-8-jgross@suse.com>
 <88b80e61-3fb4-8f89-0597-d6959033478b@suse.com>
 <65af43c0-2ed4-4330-501f-d561468b7a0e@suse.com>
 <efb1abe3-781e-8f17-0bdd-bb15e78f05e0@suse.com>
From: Jürgen Groß <jgross@suse.com>
Message-ID: <6b54a551-d42d-33fa-bcc7-4b9c249e5b84@suse.com>
Date: Fri, 29 May 2020 11:56:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <efb1abe3-781e-8f17-0bdd-bb15e78f05e0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.20 11:53, Jan Beulich wrote:
> On 29.05.2020 11:19, Jürgen Groß wrote:
>> On 29.05.20 10:34, Jan Beulich wrote:
>>> On 19.05.2020 09:21, Juergen Gross wrote:
>>>> @@ -373,6 +374,52 @@ void __init do_initcalls(void)
>>>>            (*call)();
>>>>    }
>>>>    
>>>> +#ifdef CONFIG_HYPFS
>>>> +static unsigned int __read_mostly major_version;
>>>> +static unsigned int __read_mostly minor_version;
>>>> +
>>>> +static HYPFS_DIR_INIT(buildinfo, "buildinfo");
>>>> +static HYPFS_DIR_INIT(compileinfo, "compileinfo");
>>>> +static HYPFS_DIR_INIT(version, "version");
>>>> +static HYPFS_UINT_INIT(major, "major", major_version);
>>>> +static HYPFS_UINT_INIT(minor, "minor", minor_version);
>>>
>>> These two lines fail to build with gcc 4.1 ("unknown field 'content'
>>> specified in initializer"), which I've deliberately tried as a last
>>> minute post-commit, pre-push check. I therefore reverted this change
>>> before pushing.
>>>
>>> Paul, Jürgen - please advise how to proceed, considering today's
>>> deadline. I'd accept pushing the rest of the series, if a fix for
>>> the issue will then still be permitted in later. Otherwise I'd have
>>> to wait for a fixed (incremental) version
>>
>> The attached patch should fix this problem (assuming the anonymous
>> union is to blame).
>>
>> Could you verify that, please?
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> Tested-by: Jan Beulich <jbeulich@suse.com>
> 
>> In case the patch is fine, I'll resend the rest of the series with
>> that patch included, as there are adaptions in later patches needed.
> 
> No need to, if you trust me to have made the right changes - I've
> also verified the rest of the series builds fine there.

Thanks, of course I trust you. :-)


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 29 09:58:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 09:58:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebmw-0004qr-O2; Fri, 29 May 2020 09:58:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jebmv-0004qm-Qm
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 09:58:53 +0000
X-Inumbo-ID: 0036150c-a193-11ea-a884-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0036150c-a193-11ea-a884-12813bfff9fa;
 Fri, 29 May 2020 09:58:52 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9AAE7AEE5;
 Fri, 29 May 2020 09:58:51 +0000 (UTC)
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
To: Dario Faggioli <dfaggioli@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
 <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
 <cd566bb2-753f-b0eb-3c6a-bc2dc01cf37c@suse.com>
 <a959e9e807dc1f832d151ab72f324f2c084c2461.camel@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bc005ca2-7bf2-3bb0-b9cd-0be05c914f3f@suse.com>
Date: Fri, 29 May 2020 11:58:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <a959e9e807dc1f832d151ab72f324f2c084c2461.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 16:55, Dario Faggioli wrote:
> On Wed, 2020-05-27 at 08:17 +0200, Jan Beulich wrote:
>> On 27.05.2020 00:00, Dario Faggioli wrote:
>>> Just in case, is there a
>>> way to identify them easily, like with a mask or something, in the
>>> code
>>> already?
>>
>> cpu_sibling_mask still gets used for both, so there's no mask
>> to use. As per set_cpu_sibling_map() you can look at
>> cpu_data[].compute_unit_id to tell, but that's of course x86-
>> specific (as is the entire compute unit concept).
>>
> Right. And thanks for the pointers.
> 
> But then, what I am understanding by having a look there is that I
> indeed can use (again, appropriately wrapped) x86_num_siblings, for
> telling, in this function, whether a CPU has any, and if yes how many,
> HT (Intel) or CU (AMD) siblings in total, although some of them may
> currently be offline.
> 
> Which means I will be treating HTs and CUs the same which, thinking
> more about it (and thinking actually to CUs, rather than to any cache
> sharing relationship), does make sense for this feature.
> 
> Does this make sense, or am I missing or misinterpreting anything?

Well, it effectively answers the question I had raised: "What about HT
vs AMD Fam15's CUs? Do you want both to be treated the same here?"

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:07:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:07:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebuk-0005pN-Iy; Fri, 29 May 2020 10:06:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jebuj-0005pI-Jh
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:06:57 +0000
X-Inumbo-ID: 2022cf76-a194-11ea-a885-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2022cf76-a194-11ea-a885-12813bfff9fa;
 Fri, 29 May 2020 10:06:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 665E5ABE2;
 Fri, 29 May 2020 10:06:54 +0000 (UTC)
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: George Dunlap <George.Dunlap@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <757d53a0-ec8f-62f1-ca20-72eaf9e1c84d@suse.com>
 <9A98D1CA-83E5-4E03-8ED6-E353653338CB@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <92986869-678c-572f-89f4-ccf84039022a@suse.com>
Date: Fri, 29 May 2020 12:06:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <9A98D1CA-83E5-4E03-8ED6-E353653338CB@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 11:55, George Dunlap wrote:
> 
> 
>> On May 29, 2020, at 9:52 AM, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 27.05.2020 18:08, George Dunlap wrote:
>>>> On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
>>>> 2. Xen should disable the XSM policy build when FLASK is disabled.
>>>> This is unfortunately not so simple because the XSM policy build is a
>>>> tools option and FLASK is a Xen option and the configuration systems
>>>> are disjoint.  But at the very least a default build, which has no XSM
>>>> support, should not build an XSM policy file either.
>>>
>>> A simple thing to do here would be to have the flask policy controlled by a configure --flask option.  If neither --flask nor --no-flask is specified, we could maybe have configure also check the contents of xen/.config to see if CONFIG_XSM_FLASK is enabled?
>>
>> But that's creating a chicken-and-egg problem: There's no ordering
>> between the tools/ and xen/ builds. xen/ may not be built at all,
>> and this is going to become increasingly likely once the xen/ part
>> of the tree supports out-of-tree builds (at least I'll switch most
>> of my build trees to that model as soon as it's available).
> 
> Do out-of-tree builds put the .config out of tree as well?  If so, yes, that would definitely limit the usefulness of this idea.

Yes, all output files go into the build tree, to keep the source tree
clean. And .config is effectively an output file, despite it possibly
getting pre-populated into the build tree.

> My suggestion was based on the idea that a “typical” build *which might enable XSM* would look like this:
> 
> 1. Run ‘make menuconfig’ (or something like it) in xen/
> 
> 2. Run ./configure at the toplevel
> 
> 3. Do the full build.
> 
> But looking back at it, there’s no particular reason that someone coming to build Xen might expect to do things in that order rather than #1.
> 
> It might make sense to simply declare that the tools depend on the xen config, and modifying ./configure to represent that:
> 
> 1. Add a `—xen-config=` option to configure telling it where to look for the xen config; if that’s not specified, look for a specific shell variable saying where the Xen tree is; if that’s not available, looking in the current tree.
> 
> 2. Have ./configure fail by default if it can’t find a .config, unless —no-xen-config is specified.

Hmm, yes, that's certainly an option. The intended behavior with
--no-xen-config would be of interest though: Just what it is
now, i.e. controlled by yet another configure option?

>> Do we perhaps need to resort to a make command line option, usable
>> at the top level as well as for major subtree builds, and being
>> honored (as long as set either way, falling back to current
>> behavior if unset) by both ./configure and the hypervisor's
>> Kconfig?
> 
> What kind of command-line option did you have in mind?

Something like "XSM_FLASK=y".

Jan
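The lookup precedence George proposes in point 1 above could be prototyped roughly like this (option and variable names are hypothetical, not an actual ./configure interface):

```shell
#!/bin/sh
# Sketch of the proposed lookup order for the Xen .config:
#   1. an explicit --xen-config=PATH argument
#   2. an environment variable pointing at the Xen tree
#   3. the in-tree default, ./xen/.config
find_xen_config() {
    explicit="$1"
    if [ -n "$explicit" ]; then
        printf '%s\n' "$explicit"
    elif [ -n "$XEN_TREE" ]; then
        printf '%s\n' "$XEN_TREE/.config"
    else
        printf '%s\n' "./xen/.config"
    fi
}
```

A make variable like the XSM_FLASK=y Jan mentions would then simply override whatever this lookup finds.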


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:07:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:07:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jebvJ-0005s1-Un; Fri, 29 May 2020 10:07:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jebvI-0005rr-Nk
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:07:32 +0000
X-Inumbo-ID: 35add962-a194-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35add962-a194-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 10:07:32 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:51378
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jebv9-000oOY-Jb (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 11:07:23 +0100
Subject: Re: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8a4c4486-cf27-66a0-5ff9-5329277deccf@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c90b70f7-e52e-405c-adb4-1303d7d1c009@citrix.com>
Date: Fri, 29 May 2020 11:07:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <8a4c4486-cf27-66a0-5ff9-5329277deccf@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>, Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 10:34, Jan Beulich wrote:
> While the behavior to ignore this option without FLASK support was
> properly documented, it is still somewhat surprising to someone using
> this option and then still _not_ getting the assumed security. Add a
> 2nd handler for the command line option for the XSM_FLASK=n case, and
> invoke panic() when the option is specified (and not subsequently
> overridden by "flask=disabled").
>
> Suggested-by: Ian Jackson <ian.jackson@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

I'm very tempted to nack this outright, lest I remind both of you of the
total disaster that was XSA-9, and the subsequent retraction of the code
which did exactly this.

If you want to do something like this, prohibit creating guests so the
administrator can still log in and unbreak, instead of having it
enter a reboot loop with no output.  The console isn't established
this early, so none of this text makes it out onto VGA/serial.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:19:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:19:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jec6p-0006rq-6X; Fri, 29 May 2020 10:19:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsAk=7L=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jec6o-0006rK-0X
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:19:26 +0000
X-Inumbo-ID: de5e6184-a195-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de5e6184-a195-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 10:19:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 401A4B123;
 Fri, 29 May 2020 10:19:23 +0000 (UTC)
Message-ID: <d95076bab67035437d1e055c76adf4ad824b81e8.camel@suse.com>
Subject: Re: [PATCH 2/2] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Date: Fri, 29 May 2020 12:19:21 +0200
In-Reply-To: <bc005ca2-7bf2-3bb0-b9cd-0be05c914f3f@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <158818179558.24327.11334680191217289878.stgit@Palanthas>
 <3db33b8a-ba97-f302-a325-e989ff0e7084@suse.com>
 <7b34b1b2c4b36399ad16f6e72a872e15d949f4bf.camel@suse.com>
 <cd566bb2-753f-b0eb-3c6a-bc2dc01cf37c@suse.com>
 <a959e9e807dc1f832d151ab72f324f2c084c2461.camel@suse.com>
 <bc005ca2-7bf2-3bb0-b9cd-0be05c914f3f@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-wy4INCHohOVOxGNLEbcq"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-wy4INCHohOVOxGNLEbcq
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2020-05-29 at 11:58 +0200, Jan Beulich wrote:
> On 28.05.2020 16:55, Dario Faggioli wrote:
> >
> > Which means I will be treating HTs and CUs the same which, thinking
> > more about it (and thinking actually to CUs, rather than to any
> > cache
> > sharing relationship), does make sense for this feature.
> >
> > Does this make sense, or am I missing or misinterpreting anything?
>
> Well, it effectively answers the question I had raised: "What about
> HT
> vs AMD Fam15's CUs? Do you want both to be treated the same here?"
>
Exactly! :-)

And, in fact, the answer is "Yes, I want them to be treated the same,
and that is what the patches are actually doing".

Sorry for not getting it right away, and just replying like that in the
first place.

At least, this led to better (more generic) comments and variable names
in v2 of this patch.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-wy4INCHohOVOxGNLEbcq
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7Q4aoACgkQFkJ4iaW4
c+6fahAAlWVhZCevkedtkPUe2HQMxQP1cguOA5KzczytT02SExMXj/FO3/zkswN2
WsJIpkRBvdkrjq71YSjW9NsCxyTdKsv/twwVO7VOj7tR9rAaB0VU7d1zes0bWfN9
H4gRSa+L82wdI0xW9q+ZxlZRmqhJK0MjhTzaWz6bevM7D3Wrx+t8POW0iTMmCiMB
tjljMe7BlqxCgxsb/2fXoGmYzOXSNVLD7IgfDzhPnmOZ2YTjteCsHtpTq/jvCdMU
hDjBSoLfWKH5wkkb/SkBwqdNx3lXgsf6/6Alq3wXxmVYf05R6CHkbxVqzB2qVBm+
AIkbxrG3aPJ3KqQOuZoA0sk4uppPqqKvZXuRXlD3PaKt86qeI2W2HsTOXLh4TkCz
dQPxs/AZUsCwtJKMOZKRQFwiGsfE59cyFXGWaal7TibYAlUyYH0gtwgPxM+Kc305
wksJzr1GaDigSvOCznZpuYLrw2AJG7XGcwFqtqUPWy0nsFVfP7eLN2r5Ps7KgtPu
D7sGz1SqPN2ModAyp96O+Ixh2kslf7P8Xf7xb0nW772CX2XA1c4wtcL3uWnUnoBQ
C1OYttENwlnuVkBEpF2nU8vEZIBpUvuZIVMCo5ASnfZr49G143gfC2ovV7KGk79n
JmKfDvWd2UwW2uxYTgJns84f4gLXMYkJX+0IHMc8mBfmy9ilILw=
=tv67
-----END PGP SIGNATURE-----

--=-wy4INCHohOVOxGNLEbcq--



From xen-devel-bounces@lists.xenproject.org Fri May 29 10:20:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:20:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jec7p-0007aG-Gl; Fri, 29 May 2020 10:20:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsAk=7L=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jec7p-0007aB-66
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:20:29 +0000
X-Inumbo-ID: 043dd8a8-a196-11ea-a88b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 043dd8a8-a196-11ea-a88b-12813bfff9fa;
 Fri, 29 May 2020 10:20:28 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 023CCB11F;
 Fri, 29 May 2020 10:20:26 +0000 (UTC)
Message-ID: <78e986122386915cacba8b4c3b572a460f9622e1.camel@suse.com>
Subject: Re: [PATCH 0/3] Automation: improve openSUSE containers + podman
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 29 May 2020 12:20:25 +0200
In-Reply-To: <86969ba1ea7e270267cfaa3408a89b55c86b3dca.camel@suse.com>
References: <158827088416.19371.17008531228521109457.stgit@Palanthas>
 <86969ba1ea7e270267cfaa3408a89b55c86b3dca.camel@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-3lN/oI1im3UMHAh5rfeP"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-3lN/oI1im3UMHAh5rfeP
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2020-05-21 at 09:43 +0200, Dario Faggioli wrote:
> On Thu, 2020-04-30 at 20:27 +0200, Dario Faggioli wrote:
> > Hello,
> >
> > This short series contains some improvements for building Xen in
> > openSUSE containers. In fact, the build dependencies inside the
> > Tumbleweed container are updated and more handy helpers are added,
> > in
> > containerize, for referring to both Leap and Tumbleweed containers.
> >
> > In addition to that, in patch 3, the containerize script is
> > enhanced
> > so
> > that it is now possible to use podman, instead of docker. Rootless
> > mode
> > for podman also works (provided the system is properly configured)
> > which,
> > IMO, is rather nice.
> >
> > Docker of course continues to work, and is kept as the default.
> >=20
> Ping?
>
Ping^2? :-D

Adding Wei. get-maintainers.pl told me I should not Cc him, so I
didn't, but I've seen automation/ stuff Acked-by him recently, so...

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-3lN/oI1im3UMHAh5rfeP
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7Q4ekACgkQFkJ4iaW4
c+4Qug/+PmeZZaWX05mKFCir7jn9Xs/ZmFB/wn0nuPWeD2OxfuljSt22VPhMgM97
qz6s1BHgZuFEbEZUFflwxJN2349Uo2fPrMhCyqgbsrxRNjbKjFYGtWW0iqvnaenN
NJ222mH6TOodrfWQ8WBUCW3fooMuNi3GZJxLW8C5olsPxv3HK3647gSntGheWpob
HDtT8dAEwLK4AJEZccc5ZaIUB7I9f0sSAtO2lbEEITVNi4QMD9dqu8Lv/S2rffZ+
CjVPJy2IeanIv4q59j2pgTny7RfiiluoSoF1xMyjKTnqOWy//X+S+S0EhGG3R6Ep
NHmxmgG2SMP9yFPY6jvWWh/XoylBUF6JGXnKeljt32wao7J903uDv9bIOs+jnTtg
movSk0TDdujaiWw6Irj3SKejr+YqgOOJHoekxXNsGOdQx/TjTWVAoqp446yBVseb
ltMrTyjM1nfO53LHQrim4S6zEe+amJ3NM0P3aN2Wds82Qm0es5fCxgGmYsUZI8I2
mMES962iSk32TGBkeTkhwgFi6Wcd1ZKhXmlpjva2rICqNFwDnd4pRGyjGg14L87z
swOY1Y/XzIW5fMQ0YE293ugN/DwyTQOOg+ZO2KHjl250f3XueLc8/E1k1/1rIqXd
QqgwKm2ILNWTpcilup2eCi2gHDQ2nyUwj29lKvny2nyJuJzZs84=
=AZEr
-----END PGP SIGNATURE-----

--=-3lN/oI1im3UMHAh5rfeP--



From xen-devel-bounces@lists.xenproject.org Fri May 29 10:30:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecH3-0007ui-Gh; Fri, 29 May 2020 10:30:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jecH1-0007sh-V7
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:29:59 +0000
X-Inumbo-ID: 572bcec0-a197-11ea-a88b-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 572bcec0-a197-11ea-a88b-12813bfff9fa;
 Fri, 29 May 2020 10:29:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8DBD3AC2C;
 Fri, 29 May 2020 10:29:55 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] tools: fix Rules.mk library make variables
Date: Fri, 29 May 2020 12:29:53 +0200
Message-Id: <20200529102953.12714-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Both SHDEPS_libxendevicemodel and SHDEPS_libxenhypfs have a bug: they
add $(SHLIB_xencall) instead of $(SHLIB_libxencall).

The typo seems not to have had any negative impact, probably because
neither variable is used anywhere in Xen without the correct
$(SHLIB_libxencall) also being used.
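
For anyone curious why the typo was benign: GNU make silently expands a
reference to an undefined variable to the empty string. A stand-alone
sketch (the Makefile below is invented for illustration; only the
variable names mirror tools/Rules.mk, and the -rpath-link value is made
up):

```shell
# Write a throwaway Makefile; the one-line recipe uses ';' so no
# literal tabs are needed.  SHLIB_xencall is deliberately never
# defined, mimicking the typo, so it expands to nothing.
cat > /tmp/undef-var-demo.mk <<'EOF'
SHLIB_libxencall = -Wl,-rpath-link=/usr/lib/xen
SHDEPS_broken = $(SHLIB_xencall)
SHDEPS_fixed  = $(SHLIB_libxencall)
demo: ; @echo "broken=[$(SHDEPS_broken)]" && echo "fixed=[$(SHDEPS_fixed)]"
EOF
make -s -f /tmp/undef-var-demo.mk demo
```

This prints `broken=[]` followed by `fixed=[-Wl,-rpath-link=/usr/lib/xen]`,
i.e. the misspelled reference contributes nothing to the link line, which
is why nothing visibly broke.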

Fixes: 86234eafb95295 ("libs: add libxenhypfs")
Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/Rules.mk | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/tools/Rules.mk b/tools/Rules.mk
index 518bb5660a..936fb83fa4 100644
--- a/tools/Rules.mk
+++ b/tools/Rules.mk
@@ -128,12 +128,12 @@ LDLIBS_libxenforeignmemory = $(SHDEPS_libxenforeignmemory) $(XEN_LIBXENFOREIGNME
 SHLIB_libxenforeignmemory  = $(SHDEPS_libxenforeignmemory) -Wl,-rpath-link=$(XEN_LIBXENFOREIGNMEMORY)
 
 CFLAGS_libxendevicemodel = -I$(XEN_LIBXENDEVICEMODEL)/include $(CFLAGS_xeninclude)
-SHDEPS_libxendevicemodel = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_xencall)
+SHDEPS_libxendevicemodel = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_libxencall)
 LDLIBS_libxendevicemodel = $(SHDEPS_libxendevicemodel) $(XEN_LIBXENDEVICEMODEL)/libxendevicemodel$(libextension)
 SHLIB_libxendevicemodel  = $(SHDEPS_libxendevicemodel) -Wl,-rpath-link=$(XEN_LIBXENDEVICEMODEL)
 
 CFLAGS_libxenhypfs = -I$(XEN_LIBXENHYPFS)/include $(CFLAGS_xeninclude)
-SHDEPS_libxenhypfs = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_xencall)
+SHDEPS_libxenhypfs = $(SHLIB_libxentoollog) $(SHLIB_libxentoolcore) $(SHLIB_libxencall)
 LDLIBS_libxenhypfs = $(SHDEPS_libxenhypfs) $(XEN_LIBXENHYPFS)/libxenhypfs$(libextension)
 SHLIB_libxenhypfs  = $(SHDEPS_libxenhypfs) -Wl,-rpath-link=$(XEN_LIBXENHYPFS)
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 29 10:30:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:30:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecH8-0008TJ-Ol; Fri, 29 May 2020 10:30:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jecH7-0008LL-7i
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:30:05 +0000
X-Inumbo-ID: 5bd5c6ec-a197-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5bd5c6ec-a197-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 10:30:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 32C07AC2D;
 Fri, 29 May 2020 10:30:03 +0000 (UTC)
Subject: Re: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <8a4c4486-cf27-66a0-5ff9-5329277deccf@suse.com>
 <c90b70f7-e52e-405c-adb4-1303d7d1c009@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <58e33283-a883-3bde-c697-8605586abace@suse.com>
Date: Fri, 29 May 2020 12:30:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <c90b70f7-e52e-405c-adb4-1303d7d1c009@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@eu.citrix.com>,
 Ian Jackson <ian.jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 12:07, Andrew Cooper wrote:
> On 29/05/2020 10:34, Jan Beulich wrote:
>> While the behavior to ignore this option without FLASK support was
>> properly documented, it is still somewhat surprising to someone using
>> this option and then still _not_ getting the assumed security. Add a
>> 2nd handler for the command line option for the XSM_FLASK=n case, and
>> invoke panic() when the option is specified (and not subsequently
>> overridden by "flask=disabled").
>>
>> Suggested-by: Ian Jackson <ian.jackson@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I'm very tempted to nack this outright, lest I remind both of you of the
> total disaster that was XSA-9, and the subsequent retraction of the code
> which did exactly this.
> 
> If you want to do something like this, prohibit creating guests so the
> administrator can still log in and unbreak,

Unbreaking is as easy as removing the command line option, or
adding "flask=disable" at the end of the command line.
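
The "later option wins" behaviour being relied on here can be sketched
in a few lines of shell. This is an illustrative rendition of
left-to-right option processing, not Xen's actual parser, and the
default value is an assumption:

```shell
# Toy left-to-right option scan: the last "flask=" setting wins, which
# is why appending "flask=disabled" overrides an earlier
# "flask=enforcing".  For illustration only.
effective_flask_mode() {
    mode="disabled"            # assumed default when no option is given
    for opt in $1; do          # unquoted on purpose: split on whitespace
        case $opt in
            flask=*) mode="${opt#flask=}" ;;
        esac
    done
    printf '%s\n' "$mode"
}

effective_flask_mode "console=com1 flask=enforcing"
effective_flask_mode "console=com1 flask=enforcing flask=disabled"
```

The first call prints `enforcing`, the second `disabled`.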

Preventing guest creation is another option, but it is complicated
by the late-hwdom feature we still have - to achieve what you
want we'd have to permit creating that one further domain.
Dom0less would perhaps also need special treatment (and there
I'm not sure we'd know which of the domains we are supposed to
allow being created, and which one(s) not).

> instead of having it
> enter a reboot loop with no output.  The console isn't established
> this early, so none of this text makes it out onto VGA/serial.

You didn't look at the patch then: I'm intentionally _not_
panic()-ing from the command line parsing function, but from
an initcall. Both VGA and serial have been set up by that time.
(I was in fact considering pulling it a little earlier, into
a pre-SMP initcall.)

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:39:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:39:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecPz-0000ba-3B; Fri, 29 May 2020 10:39:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jecPy-0000bV-N2
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:39:14 +0000
X-Inumbo-ID: a2d6d710-a198-11ea-a88c-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a2d6d710-a198-11ea-a88c-12813bfff9fa;
 Fri, 29 May 2020 10:39:13 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: aWzodHWkH279h+ue5cAAG9u/wBMxcfcLiiH/qGcgwM1+ziwlrE/wKRMwoqPYf/gee85sirQjBP
 iOlgz6UnrmGhQvKPAT7INk2K7hUlzjg52oeMIk+WdUBQPAjVAjdIgWbwUNCCdDX7W4spmzWDG1
 8pr8PcVbR1KXT3nXwSXVfCPpvvtowPtJ4KKYHJua6T5Kqi3/zdcPMqpFXa54B7nyhkPdc2TDLm
 O+cb7EPib1XTdV/kl/S6JDUDugy5XdhC1Ztdqj7Uk6wjJ9SLy+CLpX/KsGe5z6C/eJtopCXI0C
 FN0=
X-SBRS: 2.7
X-MesageID: 18744923
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="18744923"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24272.58955.340315.479568@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 11:39:07 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n
In-Reply-To: <c90b70f7-e52e-405c-adb4-1303d7d1c009@citrix.com>
References: <8a4c4486-cf27-66a0-5ff9-5329277deccf@suse.com>
 <c90b70f7-e52e-405c-adb4-1303d7d1c009@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew Cooper writes ("Re: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n"):
> On 29/05/2020 10:34, Jan Beulich wrote:
> > While the behavior to ignore this option without FLASK support was
> > properly documented, it is still somewhat surprising to someone using
> > this option and then still _not_ getting the assumed security. Add a
> > 2nd handler for the command line option for the XSM_FLASK=n case, and
> > invoke panic() when the option is specified (and not subsequently
> > overridden by "flask=disabled").
> >
> > Suggested-by: Ian Jackson <ian.jackson@citrix.com>
> > Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> I'm very tempted to nack this outright, lest I remind both of you of the
> total disaster that was XSA-9, and the subsequent retraction of the code
> which did exactly this.

You are right to remind me of, well, whatever it is you are trying to
remind me of, since I'm afraid I don't know what you mean ...  Do you
really mean XSA-9 ?  I went and looked it up and the connection eludes
me.

> If you want to do something like this, prohibit creating guests so the
> administrator can still log in and unbreak, instead of having it
> enter a reboot loop with no output. The console isn't established
> this early, so none of this text makes it out onto VGA/serial.

Certainly a silent reboot loop would be very unhelpful.  I think if
Xen were to actually print something to the serial console that would
be tolerable (since the administrator can then adjust the boot command
line), but your suggestion to prohibit guest creation would be fine
too.

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:51:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecb8-0002EP-7M; Fri, 29 May 2020 10:50:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jecb7-0002EK-7X
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:50:45 +0000
X-Inumbo-ID: 3ee3610e-a19a-11ea-9947-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ee3610e-a19a-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 10:50:44 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HxHYzqkqzCyc3Pq55Ti+XZrR9+GbZeJXYhymQnEOkahJPih8qOMYgV24Zky8gKZixg10E5Uyam
 RXcTpsg6ZBR7Hczh6V9V0CvAKxR70MiB5nGqZz/r0i/pUOXS17MaLvebthz+VzxYvI8YlVws+I
 LX0448fI06xqDckvMgtf6bx++Z8dmnYpT8IpjvLIKIeKM8HuwBmYGTI+pfKOz3RNlM31ZGa23F
 ie0NBu3GxSpTsyxukYF19HWQZgBT5gKj79f9w6Xc9tAaoYL4PqmoztunP540jnfenZQPDI9l4A
 Om0=
X-SBRS: 2.7
X-MesageID: 18765620
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="18765620"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24272.59646.746545.343358@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 11:50:38 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
In-Reply-To: <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
> > On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
> > 3. Failing that, Xen should provide some other mechanism which would
> > enable something like update-grub to determine whether a particular
> > hypervisor can sensibly be run with a policy file and flask=enforcing.
> 
> So you want update-grub to check whether *the Xen binary it’s creating entries for* has FLASK enabled.  We generally include the Xen config used to build the hypervisor — could we have it check for CONFIG_XSM_FLASK?

That would be a possibility.  Including kernel configs has gone out of
fashion but I think most distros ship them.

Are we confident that this config name will remain stable ?

And I guess if the .config can't be found then the XSM boot entry
should be included ?
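
A minimal sketch of the probe being discussed, as it might sit in a
grub helper. The function name, the example config path, and the
fall-back behaviour (include the XSM entry when no .config is found)
are assumptions drawn from this thread, not an existing interface:

```shell
# Return success (0) when a FLASK boot entry makes sense for the given
# Xen build config.  If the config file is absent we cannot tell, so
# err on the side of including the entry, as suggested above.
xen_wants_flask_entry() {
    cfg=$1
    [ -r "$cfg" ] || return 0
    grep -q '^CONFIG_XSM_FLASK=y' "$cfg"
}

# Hypothetical usage against a shipped config (path invented):
#   xen_wants_flask_entry /boot/xen-4.14.config && emit_flask_entry
```

This also copes with the kconfig "# CONFIG_XSM_FLASK is not set" form,
which simply fails the grep.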

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:51:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecbq-0002Hc-GL; Fri, 29 May 2020 10:51:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jecbo-0002HR-Va
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:51:29 +0000
X-Inumbo-ID: 58def19a-a19a-11ea-8993-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58def19a-a19a-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 10:51:28 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: jSbxCWYuGHUarZwQ0b9uNEHE68IVcRWKLLyJwsud8zSrwqa+EKO5VdEQXgRqA4/RSi/D7n9jEd
 EpSPaB4KT5dcKYx8DK31uUxZ8Ud/aKhhbdBx2s+utNY3ofmjr7jHRcjVfsv92epPc7xHe5JM57
 jvARv9qaKa1G5jIOqbXsiAZxwDwMz3tb6MwHZqGy5BjKKWBfS04otayPDH13emOWfqathegUAP
 A/8RHVWf4qUTtp2H7MOeENNcuujg7hlapgqqUCk8lg/Ad8u8WHUWU5HM6YQj1itMlAxa/Vs3un
 2eI=
X-SBRS: 2.7
X-MesageID: 19041143
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="19041143"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24272.59689.495738.841808@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 11:51:21 +0100
To: George Dunlap <George.Dunlap@citrix.com>, Daniel De Graaf
 <dgdegra@tycho.nsa.gov>, Andrew Cooper <Andrew.Cooper3@citrix.com>, "Jan
 Beulich" <jbeulich@suse.com>, Julien Grall <julien@xen.org>, "Stefano
 Stabellini" <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
In-Reply-To: <24272.59646.746545.343358@mariner.uk.xensource.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Ian Jackson writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
> George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
> > > On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
> > > 3. Failing that, Xen should provide some other mechanism which would
> > > enable something like update-grub to determine whether a particular
> > > hypervisor can sensibly be run with a policy file and flask=enforcing.
> > 
> > So you want update-grub to check whether *the Xen binary it’s creating entries for* has FLASK enabled.  We generally include the Xen config used to build the hypervisor — could we have it check for CONFIG_XSM_FLASK?
> 
> That would be a possibility.  Including kernel configs has gone out of
> fashion but I think most distros ship them.

I mean most distros ship *Xen* configs even if they don't ship *Linux*
ones.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:52:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:52:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeccw-0002OE-Qa; Fri, 29 May 2020 10:52:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeccv-0002O1-2j
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:52:37 +0000
X-Inumbo-ID: 81e48a82-a19a-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81e48a82-a19a-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 10:52:36 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=u2KZ2qiKEr4Ck19+FUrXzpklwGUqh5WnUQyBrGCeDOs=; b=m/j2ibkZXK68SLVA719KFXwvPa
 l/A98ixm5eAues8zmQCImBrvCgCmR2WKxxMDCGnIh+u/D68HsS7ADY7NWPi02f+5AywflmHrt/lrH
 cIMLwubJwpj0W5o40pL5bq2zuYINdGfK4mJHcyxI4ewvYyJQj3oHumkxjzn3S5jv66YU=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeccp-0001Xr-IZ; Fri, 29 May 2020 10:52:31 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeccp-0005Ru-Am; Fri, 29 May 2020 10:52:31 +0000
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <Bertrand.Marquis@arm.com>,
 Jan Beulich <jbeulich@suse.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <52e26c9d-b662-2597-b521-dacf4f8acfc8@suse.com>
 <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <038f55bc-5486-4347-f902-3e3ab5c9a53d@xen.org>
Date: Fri, 29 May 2020 11:52:28 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 29/05/2020 10:18, Bertrand Marquis wrote:
>> On 29 May 2020, at 09:45, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 29.05.2020 10:13, Bertrand Marquis wrote:
>>>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>>> AFAICT, there is no restriction on when the runstate hypercall can be called. So this can even be called before the vCPU is brought up.
>>>
>>> I understand the remark but it still feels very weird to allow an invalid address in a hypercall.
>>> Wouldn’t we have a lot of potential issues accepting an address that we cannot check?
>>
>> I don't think so: The hypervisor uses copy_to_guest() to protect
>> itself from the addresses to be invalid at the time of copying.
>> If the guest doesn't make sure they're valid at that time, it
>> simply won't get the information (perhaps until Xen's next
>> attempt to copy it out).
>>
>> You may want to take a look at the x86 side of this (also the
>> vCPU time updating): Due to the way x86-64 PV guests work, the
>> address may legitimately be unmapped at the time Xen wants to
>> copy it, when the vCPU is currently executing guest user mode
>> code. In such a case the copy gets retried the next time the
>> guest transitions from user to kernel mode (which involves a
>> page table change).
> 
> If I understand everything correctly runstate is updated only if there is
> a context switch in xen while the guest is running in kernel mode and
> if the address is mapped at that time.
> 
> So this is a best effort in Xen and the guest cannot really rely on the
> runstate information (as it might not be up to date).
> Could this have impacts somehow if this is used for scheduling ?
> 
> In the end the only accepted trade off would be to:
> - reduce error verbosity and just ignore it

The error is already a dprintk(XENLOG_G_DEBUG, ...), so you can't really 
do better in terms of verbosity.

But I would still like to keep it, as there was some weirdness happening 
also in the non-KPTI case (see [1]).

> - introduce a new system call using a physical address
>    -> Using a virtual address with restrictions sounds very complex
>    to document (current core, no remapping).
> 
> But it feels like having only one hypercall using guest physical addresses
> would not really be logical and this kind of change should be made across
> all hypercalls if it is done.

This is not correct; there are other hypercalls using guest physical 
addresses (for instance, EVTCHNOP_init_control).

At least on Arm, this is the only hypercall that requires keeping the 
virtual address across the hypercall.

For all the other hypercalls, the virtual address is only used as a 
buffer. This is still risky, but less so than this one. Converting them 
all would also be a major rework requiring quite a bit of effort.

So I would rather try to fix the most concerning instance now and 
address the rest afterwards.

Cheers,

[1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:54:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeceT-0002W1-6E; Fri, 29 May 2020 10:54:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K2ub=7L=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jeceR-0002Vs-Vb
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:54:12 +0000
X-Inumbo-ID: ba074f12-a19a-11ea-9947-bc764e2007e4
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba074f12-a19a-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 10:54:11 +0000 (UTC)
Authentication-Results: esa1.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 2.7
X-MesageID: 19041289
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="19041289"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n
Thread-Topic: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n
Thread-Index: AQHWNZxiYb+bsyhbWESXJxTbHwGxDKi+tSMAgAAI34CAAAQxgA==
Date: Fri, 29 May 2020 10:54:07 +0000
Message-ID: <B6E74C43-224F-4A8A-B073-46DDCB38BCA3@citrix.com>
References: <8a4c4486-cf27-66a0-5ff9-5329277deccf@suse.com>
 <c90b70f7-e52e-405c-adb4-1303d7d1c009@citrix.com>
 <24272.58955.340315.479568@mariner.uk.xensource.com>
In-Reply-To: <24272.58955.340315.479568@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <568ADFE2154D1E4C9B515CB577DF4044@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 29, 2020, at 11:39 AM, Ian Jackson <ian.jackson@citrix.com> wrote:
> 
> Andrew Cooper writes ("Re: [PATCH] xsm: also panic upon "flask=enforcing" when XSM_FLASK=n"):
>> On 29/05/2020 10:34, Jan Beulich wrote:
>>> While the behavior to ignore this option without FLASK support was
>>> properly documented, it is still somewhat surprising to someone using
>>> this option and then still _not_ getting the assumed security. Add a
>>> 2nd handler for the command line option for the XSM_FLASK=n case, and
>>> invoke panic() when the option is specified (and not subsequently
>>> overridden by "flask=disabled").
>>> 
>>> Suggested-by: Ian Jackson <ian.jackson@citrix.com>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> 
>> I'm very tempted to nack this outright, lest I remind both of you of the
>> total disaster that was XSA-9, and the subsequent retraction of the code
>> which did exactly this.
> 
> You are right to remind me of, well, whatever it is you are trying to
> remind me of, since I'm afraid I don't know what you mean ...  Do you
> really mean XSA-9 ?  I went and looked it up and the connection eludes
> me.

http://xenbits.xen.org/hg/xen-unstable.hg/rev/bc2f3a848f9a

Which apparently would cause Xen to (purposely) panic when booted on systems with the specified AMD erratum.

>> If you want to do something like this, prohibit creating guests so the
>> administrator can still log in and unbreak, instead of having it
>> entering a reboot loop with no output.  The console isn't established
>> this early, so none of this text makes it out onto VGA/serial.
> 
> Certainly a silent reboot loop would be very unhelpful.  I think if
> Xen were to actually print something to the serial console that would
> be tolerable (since the administrator can then adjust the boot command
> line), but your suggestion to prohibit guest creation would be fine
> too.

It seems like having the infrastructure to put Xen into an “unsafe mode”, where we refused to create guests (or some other similar response), would be a good investment to handle this sort of issue in the future.

 -George


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:56:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:56:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jech0-0002gn-PE; Fri, 29 May 2020 10:56:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hlOG=7L=yandex-team.ru=rvkagan@srs-us1.protection.inumbo.net>)
 id 1jecgz-0002gg-Hk
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:56:50 +0000
X-Inumbo-ID: 170096f6-a19b-11ea-a893-12813bfff9fa
Received: from forwardcorp1p.mail.yandex.net (unknown [77.88.29.217])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 170096f6-a19b-11ea-a893-12813bfff9fa;
 Fri, 29 May 2020 10:56:47 +0000 (UTC)
Received: from mxbackcorp1g.mail.yandex.net (mxbackcorp1g.mail.yandex.net
 [IPv6:2a02:6b8:0:1402::301])
 by forwardcorp1p.mail.yandex.net (Yandex) with ESMTP id B05B12E150B;
 Fri, 29 May 2020 13:56:45 +0300 (MSK)
Received: from sas1-9998cec34266.qloud-c.yandex.net
 (sas1-9998cec34266.qloud-c.yandex.net [2a02:6b8:c14:3a0e:0:640:9998:cec3])
 by mxbackcorp1g.mail.yandex.net (mxbackcorp/Yandex) with ESMTP id
 C0dMrlp5nF-ucIOOhIW; Fri, 29 May 2020 13:56:45 +0300
Authentication-Results: mxbackcorp1g.mail.yandex.net;
 dkim=pass header.i=@yandex-team.ru
Received: from dynamic-vpn.dhcp.yndx.net (dynamic-vpn.dhcp.yndx.net
 [2a02:6b8:b081:1318::1:10])
 by sas1-9998cec34266.qloud-c.yandex.net (smtpcorp/Yandex) with ESMTPSA id
 dFgBuk5uAo-ucXSPWVD; Fri, 29 May 2020 13:56:38 +0300
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits))
 (Client certificate not present)
Date: Fri, 29 May 2020 13:56:36 +0300
From: Roman Kagan <rvkagan@yandex-team.ru>
To: Markus Armbruster <armbru@redhat.com>
Subject: Re: [PATCH v8 2/8] block: consolidate blocksize properties
 consistency checks
Message-ID: <20200529105636.GB1255099@rvkaganb.lan>
Mail-Followup-To: Roman Kagan <rvkagan@yandex-team.ru>,
 Markus Armbruster <armbru@redhat.com>, qemu-devel@nongnu.org,
 Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>, John Snow <jsnow@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>,
 Laurent Vivier <laurent@vivier.eu>, Max Reitz <mreitz@redhat.com>,
 Keith Busch <kbusch@kernel.org>, Gerd Hoffmann <kraxel@redhat.com>,
 Stefan Hajnoczi <stefanha@redhat.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 xen-devel@lists.xenproject.org,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>
References: <20200528225516.1676602-1-rvkagan@yandex-team.ru>
 <20200528225516.1676602-3-rvkagan@yandex-team.ru>
 <87r1v3m5ih.fsf@dusky.pond.sub.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <87r1v3m5ih.fsf@dusky.pond.sub.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: Fam Zheng <fam@euphon.net>, Kevin Wolf <kwolf@redhat.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Daniel =?iso-8859-1?Q?P=2E_Berrang=E9?= <berrange@redhat.com>,
 Eduardo Habkost <ehabkost@redhat.com>, qemu-block@nongnu.org,
 Paul Durrant <paul@xen.org>,
 Philippe =?iso-8859-1?Q?Mathieu-Daud=E9?= <philmd@redhat.com>,
 "Michael S. Tsirkin" <mst@redhat.com>, qemu-devel@nongnu.org,
 Laurent Vivier <laurent@vivier.eu>, Anthony Perard <anthony.perard@citrix.com>,
 Gerd Hoffmann <kraxel@redhat.com>, Stefan Hajnoczi <stefanha@redhat.com>,
 xen-devel@lists.xenproject.org, Keith Busch <kbusch@kernel.org>,
 Paolo Bonzini <pbonzini@redhat.com>, Max Reitz <mreitz@redhat.com>,
 John Snow <jsnow@redhat.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 11:53:26AM +0200, Markus Armbruster wrote:
> Roman Kagan <rvkagan@yandex-team.ru> writes:
> 
> > Several block device properties related to blocksize configuration must
> > be in certain relationship WRT each other: physical block must be no
> > smaller than logical block; min_io_size, opt_io_size, and
> > discard_granularity must be a multiple of a logical block.
> >
> > To ensure these requirements are met, add corresponding consistency
> > checks to blkconf_blocksizes, adjusting its signature to communicate
> > possible error to the caller.  Also remove the now redundant consistency
> > checks from the specific devices.
> >
> > Signed-off-by: Roman Kagan <rvkagan@yandex-team.ru>
> > Reviewed-by: Eric Blake <eblake@redhat.com>
> > Reviewed-by: Paul Durrant <paul@xen.org>
> > ---
> >  include/hw/block/block.h   |  2 +-
> >  hw/block/block.c           | 30 +++++++++++++++++++++++++++++-
> >  hw/block/fdc.c             |  5 ++++-
> >  hw/block/nvme.c            |  5 ++++-
> >  hw/block/swim.c            |  5 ++++-
> >  hw/block/virtio-blk.c      |  7 +------
> >  hw/block/xen-block.c       |  6 +-----
> >  hw/ide/qdev.c              |  5 ++++-
> >  hw/scsi/scsi-disk.c        | 12 +++++-------
> >  hw/usb/dev-storage.c       |  5 ++++-
> >  tests/qemu-iotests/172.out |  2 +-
> >  11 files changed, 58 insertions(+), 26 deletions(-)
> >
> > diff --git a/include/hw/block/block.h b/include/hw/block/block.h
> > index d7246f3862..784953a237 100644
> > --- a/include/hw/block/block.h
> > +++ b/include/hw/block/block.h
> > @@ -87,7 +87,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
> >  bool blkconf_geometry(BlockConf *conf, int *trans,
> >                        unsigned cyls_max, unsigned heads_max, unsigned secs_max,
> >                        Error **errp);
> > -void blkconf_blocksizes(BlockConf *conf);
> > +bool blkconf_blocksizes(BlockConf *conf, Error **errp);
> >  bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
> >                                     bool resizable, Error **errp);
> >  
> > diff --git a/hw/block/block.c b/hw/block/block.c
> > index bf56c7612b..b22207c921 100644
> > --- a/hw/block/block.c
> > +++ b/hw/block/block.c
> > @@ -61,7 +61,7 @@ bool blk_check_size_and_read_all(BlockBackend *blk, void *buf, hwaddr size,
> >      return true;
> >  }
> >  
> > -void blkconf_blocksizes(BlockConf *conf)
> > +bool blkconf_blocksizes(BlockConf *conf, Error **errp)
> >  {
> >      BlockBackend *blk = conf->blk;
> >      BlockSizes blocksizes;
> > @@ -83,6 +83,34 @@ void blkconf_blocksizes(BlockConf *conf)
> >              conf->logical_block_size = BDRV_SECTOR_SIZE;
> >          }
> >      }
> > +
> > +    if (conf->logical_block_size > conf->physical_block_size) {
> > +        error_setg(errp,
> > +                   "logical_block_size > physical_block_size not supported");
> > +        return false;
> > +    }
> 
> Pardon me if this has been answered already for prior revisions: do we
> really support physical block sizes that are not a multiple of the
> logical block size?

Both physical and logical block sizes are required to be powers of two,
so the former is certain to be a multiple of the latter.

> > +
> > +    if (!QEMU_IS_ALIGNED(conf->min_io_size, conf->logical_block_size)) {
> > +        error_setg(errp,
> > +                   "min_io_size must be a multiple of logical_block_size");
> > +        return false;
> > +    }
> > +
> > +    if (!QEMU_IS_ALIGNED(conf->opt_io_size, conf->logical_block_size)) {
> > +        error_setg(errp,
> > +                   "opt_io_size must be a multiple of logical_block_size");
> > +        return false;
> > +    }
> > +
> > +    if (conf->discard_granularity != -1 &&
> > +        !QEMU_IS_ALIGNED(conf->discard_granularity,
> > +                         conf->logical_block_size)) {
> > +        error_setg(errp, "discard_granularity must be "
> > +                   "a multiple of logical_block_size");
> > +        return false;
> > +    }
> > +
> > +    return true;
> >  }
> >  
> >  bool blkconf_apply_backend_options(BlockConf *conf, bool readonly,
> > diff --git a/hw/block/fdc.c b/hw/block/fdc.c
> > index c5fb9d6ece..8eda572ef4 100644
> > --- a/hw/block/fdc.c
> > +++ b/hw/block/fdc.c
> > @@ -554,7 +554,10 @@ static void floppy_drive_realize(DeviceState *qdev, Error **errp)
> >          read_only = !blk_bs(dev->conf.blk) || blk_is_read_only(dev->conf.blk);
> >      }
> >  
> > -    blkconf_blocksizes(&dev->conf);
> > +    if (!blkconf_blocksizes(&dev->conf, errp)) {
> > +        return;
> > +    }
> > +
> >      if (dev->conf.logical_block_size != 512 ||
> >          dev->conf.physical_block_size != 512)
> >      {
> > diff --git a/hw/block/nvme.c b/hw/block/nvme.c
> > index 2f3100e56c..672650e162 100644
> > --- a/hw/block/nvme.c
> > +++ b/hw/block/nvme.c
> > @@ -1390,7 +1390,10 @@ static void nvme_realize(PCIDevice *pci_dev, Error **errp)
> >          host_memory_backend_set_mapped(n->pmrdev, true);
> >      }
> >  
> > -    blkconf_blocksizes(&n->conf);
> > +    if (!blkconf_blocksizes(&n->conf, errp)) {
> > +        return;
> > +    }
> > +
> >      if (!blkconf_apply_backend_options(&n->conf, blk_is_read_only(n->conf.blk),
> >                                         false, errp)) {
> >          return;
> > diff --git a/hw/block/swim.c b/hw/block/swim.c
> > index 8f124782f4..74f56e8f46 100644
> > --- a/hw/block/swim.c
> > +++ b/hw/block/swim.c
> > @@ -189,7 +189,10 @@ static void swim_drive_realize(DeviceState *qdev, Error **errp)
> >          assert(ret == 0);
> >      }
> >  
> > -    blkconf_blocksizes(&dev->conf);
> > +    if (!blkconf_blocksizes(&dev->conf, errp)) {
> > +        return;
> > +    }
> > +
> >      if (dev->conf.logical_block_size != 512 ||
> >          dev->conf.physical_block_size != 512)
> >      {
> > diff --git a/hw/block/virtio-blk.c b/hw/block/virtio-blk.c
> > index 413083e62f..4ffdb130be 100644
> > --- a/hw/block/virtio-blk.c
> > +++ b/hw/block/virtio-blk.c
> > @@ -1162,12 +1162,7 @@ static void virtio_blk_device_realize(DeviceState *dev, Error **errp)
> >          return;
> >      }
> >  
> > -    blkconf_blocksizes(&conf->conf);
> > -
> > -    if (conf->conf.logical_block_size >
> > -        conf->conf.physical_block_size) {
> > -        error_setg(errp,
> > -                   "logical_block_size > physical_block_size not supported");
> > +    if (!blkconf_blocksizes(&conf->conf, errp)) {
> >          return;
> >      }
> >  
> > diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> > index 570489d6d9..e17fec50e1 100644
> > --- a/hw/block/xen-block.c
> > +++ b/hw/block/xen-block.c
> > @@ -239,11 +239,7 @@ static void xen_block_realize(XenDevice *xendev, Error **errp)
> >          return;
> >      }
> >  
> > -    blkconf_blocksizes(conf);
> > -
> > -    if (conf->logical_block_size > conf->physical_block_size) {
> > -        error_setg(
> > -            errp, "logical_block_size > physical_block_size not supported");
> > +    if (!blkconf_blocksizes(conf, errp)) {
> >          return;
> >      }
> >  
> > diff --git a/hw/ide/qdev.c b/hw/ide/qdev.c
> > index 06b11583f5..b4821b2403 100644
> > --- a/hw/ide/qdev.c
> > +++ b/hw/ide/qdev.c
> > @@ -187,7 +187,10 @@ static void ide_dev_initfn(IDEDevice *dev, IDEDriveKind kind, Error **errp)
> >          return;
> >      }
> >  
> > -    blkconf_blocksizes(&dev->conf);
> > +    if (!blkconf_blocksizes(&dev->conf, errp)) {
> > +        return;
> > +    }
> > +
> >      if (dev->conf.logical_block_size != 512) {
> >          error_setg(errp, "logical_block_size must be 512 for IDE");
> >          return;
> > diff --git a/hw/scsi/scsi-disk.c b/hw/scsi/scsi-disk.c
> > index 387503e11b..8ce68a9dd6 100644
> > --- a/hw/scsi/scsi-disk.c
> > +++ b/hw/scsi/scsi-disk.c
> > @@ -2346,12 +2346,7 @@ static void scsi_realize(SCSIDevice *dev, Error **errp)
> >          return;
> >      }
> >  
> > -    blkconf_blocksizes(&s->qdev.conf);
> > -
> > -    if (s->qdev.conf.logical_block_size >
> > -        s->qdev.conf.physical_block_size) {
> > -        error_setg(errp,
> > -                   "logical_block_size > physical_block_size not supported");
> > +    if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
> >          return;
> >      }
> >  
> > @@ -2436,7 +2431,9 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
> >      if (s->qdev.conf.blk) {
> >          ctx = blk_get_aio_context(s->qdev.conf.blk);
> >          aio_context_acquire(ctx);
> > -        blkconf_blocksizes(&s->qdev.conf);
> > +        if (!blkconf_blocksizes(&s->qdev.conf, errp)) {
> > +            goto out;
> > +        }
> >      }
> >      s->qdev.blocksize = s->qdev.conf.logical_block_size;
> >      s->qdev.type = TYPE_DISK;
> > @@ -2444,6 +2441,7 @@ static void scsi_hd_realize(SCSIDevice *dev, Error **errp)
> >          s->product = g_strdup("QEMU HARDDISK");
> >      }
> >      scsi_realize(&s->qdev, errp);
> > +out:
> >      if (ctx) {
> >          aio_context_release(ctx);
> >      }
> > diff --git a/hw/usb/dev-storage.c b/hw/usb/dev-storage.c
> > index 4eba47538d..de461f37bd 100644
> > --- a/hw/usb/dev-storage.c
> > +++ b/hw/usb/dev-storage.c
> > @@ -599,7 +599,10 @@ static void usb_msd_storage_realize(USBDevice *dev, Error **errp)
> >          return;
> >      }
> >  
> > -    blkconf_blocksizes(&s->conf);
> > +    if (!blkconf_blocksizes(&s->conf, errp)) {
> > +        return;
> > +    }
> > +
> >      if (!blkconf_apply_backend_options(&s->conf, blk_is_read_only(blk), true,
> >                                         errp)) {
> >          return;
> > diff --git a/tests/qemu-iotests/172.out b/tests/qemu-iotests/172.out
> > index 7abbe82427..59cc70aebb 100644
> > --- a/tests/qemu-iotests/172.out
> > +++ b/tests/qemu-iotests/172.out
> > @@ -1204,7 +1204,7 @@ Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physica
> >                  drive-type = "144"
> >  
> >  Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,logical_block_size=4096
> > -QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: Physical and logical block size must be 512 for floppy
> > +QEMU_PROG: -device floppy,drive=none0,logical_block_size=4096: logical_block_size > physical_block_size not supported
> >  
> >  Testing: -drive if=none,file=TEST_DIR/t.qcow2 -device floppy,drive=none0,physical_block_size=1024
> >  QEMU_PROG: -device floppy,drive=none0,physical_block_size=1024: Physical and logical block size must be 512 for floppy
> 
> This no longer exercises floppy_drive_realize()'s check of
> logical_block_size:
> 
>     if (dev->conf.logical_block_size != 512 ||
>         dev->conf.physical_block_size != 512)

Right, this check of logical_block_size becomes redundant now, because
earlier it's verified to be no less than 512 and no more than
physical_block_size, which is required to be 512 here.  I thought it
did no harm to leave it here as it was, and decided not to bother
replacing it with a comment as to why the condition is known to be true.

>     {
>         error_setg(errp, "Physical and logical block size must "
>                    "be 512 for floppy");
>         return;
>     }
> 
> Please update the test.

The test still makes sense; it just triggers another assertion now.  How
do you suggest updating it?

Thanks,
Roman.


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:58:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:58:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecij-0002nF-4Y; Fri, 29 May 2020 10:58:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K2ub=7L=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jecih-0002n9-Cd
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:58:35 +0000
X-Inumbo-ID: 56f79746-a19b-11ea-8993-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56f79746-a19b-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 10:58:34 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
X-SBRS: 2.7
X-MesageID: 18766074
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="18766074"
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <Ian.Jackson@citrix.com>
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Topic: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Index: AQHWND0/xSJFUOR5YU28XikgbPuX46i7+DWAgALLwwCAAAIzAA==
Date: Fri, 29 May 2020 10:58:30 +0000
Message-ID: <A4F2A1E4-CED3-4390-9CE8-51912A08F511@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
In-Reply-To: <24272.59646.746545.343358@mariner.uk.xensource.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <74EE9DB3CAA28D45B86914045689FEA0@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 29, 2020, at 11:50 AM, Ian Jackson <ian.jackson@citrix.com> wrote:
> 
> George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
>>> On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
>>> 3. Failing that, Xen should provide some other mechanism which would
>>> enable something like update-grub to determine whether a particular
>>> hypervisor can sensibly be run with a policy file and flask=enforcing.
>> 
>> So you want update-grub to check whether *the Xen binary it’s creating entries for* has FLASK enabled.  We generally include the Xen config used to build the hypervisor — could we have it check for CONFIG_XSM_FLASK?
> 
> That would be a possibility.  Including kernel configs has gone out of
> fashion but I think most distros ship them.
> 
> Are we confident that this config name will remain stable ?

Before taking this approach, we should probably agree to declare it stable, and write a comment to that effect in the Kconfig files.

> 
> And I guess if the .config can't be found then the XSM boot entry
> should be included ?

It looks like at the moment experimental config entries are “unpersons” without CONFIG_EXPERIMENTAL=y; at least, `rm .config && make defconfig && grep -i flask` doesn’t turn up anything for me.

 -George


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:58:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:58:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecin-0002o4-C7; Fri, 29 May 2020 10:58:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jecil-0002nl-Kd
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:58:39 +0000
X-Inumbo-ID: 5915dc05-a19b-11ea-a893-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5915dc05-a19b-11ea-a893-12813bfff9fa;
 Fri, 29 May 2020 10:58:38 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7hl45pFbQ3P44WJMsszGWIcznPc3UF0jNLfslXg/Lpk=; b=Uqsqy8eN+H8xRvJbNtiP9wyZq
 EkVtI4fZrt8jgWoGYf88fLVIAU88MmLRBnGjMNXzrdbgMz4Yassg4oo06Mxz5M22FGfvyBFYH3DRx
 e3pmuAwfQ5HaA0nytSArVk7zQr4tPZMe4Cy1LwuejnHQO9XdXq/5HcJPR6v4hAQg6/Hbk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jecij-0001gc-P8; Fri, 29 May 2020 10:58:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jecig-0003eV-N8; Fri, 29 May 2020 10:58:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jecig-00008u-MU; Fri, 29 May 2020 10:58:34 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150469-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150469: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 10:58:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150469 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150469/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    0 days
Testing same since   150465  2020-05-29 09:02:14 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:32 2020 +0200

    tools: add xenfs tool
    
    Add the xenfs tool for accessing the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 86234eafb95295621aef6c618e4c22c10d8e4138
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:21 2020 +0200

    libs: add libxenhypfs
    
    Add the new library libxenhypfs for access to the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5b5ccafb0c425b85a60fd4f241d5f6951d0e4928
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:50 2020 +0200

    xen: add basic hypervisor filesystem support
    
    Add the infrastructure for the hypervisor filesystem.
    
    This includes the hypercall interface and the base functions for
    entry creation, deletion and modification.
    
    In order not to have to repeat the same pattern multiple times in case
    adding a new node should BUG_ON() failure, the helpers for adding a
    node (hypfs_add_dir() and hypfs_add_leaf()) get a nofault parameter
    causing the BUG() in case of a failure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 0e9dcd0159c671608e154da5b8b7e0edd2905067
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:35 2020 +0200

    docs: add feature document for Xen hypervisor sysfs-like support
    
    On the 2019 Xen developer summit there was agreement that the Xen
    hypervisor should gain support for a hierarchical name-value store
    similar to the Linux kernel's sysfs.
    
    In the beginning there should only be basic support: entries can be
    added from the hypervisor itself only, there is a simple hypercall
    interface to read the data.
    
    Add a feature document for setting the base of a discussion regarding
    the desired functionality and the entries to add.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit c48a9956e334a5dd99e846d04ad56185b07aab64
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:08 2020 +0200

    xen: add a generic way to include binary files as variables
    
    Add a new script xen/tools/binfile for including a binary file at build
    time being usable via a pointer and a size variable in the hypervisor.
    
    Make use of that generic tool in xsm.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 29 10:59:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 10:59:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecjs-0002xv-NX; Fri, 29 May 2020 10:59:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jecjr-0002xm-NR
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 10:59:47 +0000
X-Inumbo-ID: 82a9e16e-a19b-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82a9e16e-a19b-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 10:59:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bsXupUX5CAQgLcEwi1O7VUnO+wqZMFdhDU3R/Md8S0o=; b=dOwn4Yfuko8rcl2gDNiGLd7fGg
 DiJGnSfUpT9YfAaQBFix7Qel1jNWJVTz/Y8z7QvuVSxjx5JmmsvKAMZM9YuSzwcNdjT+FTCN9aZk6
 keGQjhOxuxIVeKl/YR/Ct8E6yhzdC7XSGg3EDw5bfXz7jNiRpS/KQGazWQ+o2cjHY8X8=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jecjo-0001hr-3p; Fri, 29 May 2020 10:59:44 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jecjn-0005qx-Si; Fri, 29 May 2020 10:59:44 +0000
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Jan Beulich <jbeulich@suse.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <43781f37-184d-3ac8-8997-0a9be1de05ce@xen.org>
Date: Fri, 29 May 2020 11:59:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "Xia,
 Hongyan" <hongyxia@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, xen-devel@lists.xenproject.org,
 nd@arm.com, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 29/05/2020 08:35, Jan Beulich wrote:
> On 28.05.2020 20:54, Julien Grall wrote:
>> On 28/05/2020 16:25, Bertrand Marquis wrote:
>>> At the moment on Arm, a Linux guest running with KTPI enabled will
>>> cause the following error when a context switch happens in user mode:
>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>
>>> This patch modifies runstate handling to map the area given by the
>>> guest inside Xen during the hypercall.
>>> This removes the guest virtual-to-physical conversion during context
>>> switches, which removes the bug
>>
>> It would be good to spell out that a virtual address is not stable. So
>> relying on it is wrong.
> 
> Guests at present are permitted to change the mapping underneath the
> virtual address provided (this may not be the best idea, but the
> interface is like it is).

Well yes, it could point to data used by userspace, so you could 
corrupt a program. That is not great.

So I would be ready to accept such a restriction, on Arm at least, 
because the KPTI use case is far more concerning than a kernel trying 
to change the location of the runstate in physical memory.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 11:02:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:02:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecmO-0003qU-9e; Fri, 29 May 2020 11:02:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jecmM-0003qP-Tx
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:02:22 +0000
X-Inumbo-ID: de166b08-a19b-11ea-a893-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id de166b08-a19b-11ea-a893-12813bfff9fa;
 Fri, 29 May 2020 11:02:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9C6B7AD09;
 Fri, 29 May 2020 11:02:19 +0000 (UTC)
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: Ian Jackson <ian.jackson@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
Date: Fri, 29 May 2020 13:02:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24272.59646.746545.343358@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 12:50, Ian Jackson wrote:
> George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
>>> On May 27, 2020, at 4:41 PM, Ian Jackson <ian.jackson@citrix.com> wrote:
>>> 3. Failing that, Xen should provide some other mechanism which would
>>> enable something like update-grub to determine whether a particular
>>> hypervisor can sensibly be run with a policy file and flask=enforcing.
>>
>> So you want update-grub to check whether *the Xen binary it’s creating entries for* has FLASK enabled.  We generally include the Xen config used to build the hypervisor — could we have it check for CONFIG_XSM_FLASK?
> 
> That would be a possibility.  Including kernel configs has gone out of
> fashion but I think most distros ship them.
> 
> Are we confident that this config name will remain stable ?

Well, if it's to be used like this, then we'll have to keep it
stable if at all possible. But that's the reason why I dislike
the .config grep-ing approach (not just for Xen, also for
Linux). It would imo be better if the binary included something
that can be queried. Such a "something" is then much more
logical to keep stable, imo. This "something" could be an ELF
note, for example (assuming a similar problem to the one here
doesn't exist for xen.efi, or else we'd need to find a solution
there, too).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecvv-0004mN-FS; Fri, 29 May 2020 11:12:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecvt-0004mC-U2
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:13 +0000
X-Inumbo-ID: 3ee7014e-a19d-11ea-a898-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3ee7014e-a19d-11ea-a898-12813bfff9fa;
 Fri, 29 May 2020 11:12:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=A+TE2n44MU/U8wnXdQo51DXLXvHrK7RSZBV8BhLXJJ8=; b=GcVxU7bQKTXHQY6PTyJu3C7UEz
 Mw3s/kYHsHjPzbFI861iCTrCgWJE0+RTWSjMFZz48QdFzyzxwaKT40QcShQvUGsVvBPeqzS3JlTHg
 QpKIRDitTFRtMfjlnszZ4oY03G9BPopkAAYydfo/IN0KmjXIuiLQsW/T1PhAURVC2Ums=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvs-00021g-31; Fri, 29 May 2020 11:12:12 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvr-0006tM-Pa; Fri, 29 May 2020 11:12:12 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 01/15] x86/mm: map_pages_to_xen would better have one exit
 path
Date: Fri, 29 May 2020 12:11:45 +0100
Message-Id: <03c70130ab8ad216bacc9900e2a920fae32a824b.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon rewrite the function to handle dynamic mapping and
unmapping of page tables. Since dynamic mappings may map and unmap pages
in different iterations of the while loop, we need to lift pl3e out of
the loop.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed since v4:
- drop the end_of_loop goto label.

Changed since v3:
- remove asserts on rc since rc never gets changed to anything else.
- reword commit message.
---
 xen/arch/x86/mm.c | 20 +++++++++++++-------
 1 file changed, 13 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 54980b4eb1..d99f9bc133 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5068,9 +5068,11 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e, ol3e;
     l2_pgentry_t *pl2e, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
+    int rc = -ENOMEM;
 
 #define flush_flags(oldf) do {                 \
     unsigned int o_ = (oldf);                  \
@@ -5088,10 +5090,11 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
-        l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
+        pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
-            return -ENOMEM;
+            goto out;
+
         ol3e = *pl3e;
 
         if ( cpu_has_page1gb &&
@@ -5181,7 +5184,7 @@ int map_pages_to_xen(
 
             l2t = alloc_xen_pagetable();
             if ( l2t == NULL )
-                return -ENOMEM;
+                goto out;
 
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
@@ -5210,7 +5213,7 @@ int map_pages_to_xen(
 
         pl2e = virt_to_xen_l2e(virt);
         if ( !pl2e )
-            return -ENOMEM;
+            goto out;
 
         if ( ((((virt >> PAGE_SHIFT) | mfn_x(mfn)) &
                ((1u << PAGETABLE_ORDER) - 1)) == 0) &&
@@ -5254,7 +5257,7 @@ int map_pages_to_xen(
             {
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
-                    return -ENOMEM;
+                    goto out;
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5282,7 +5285,7 @@ int map_pages_to_xen(
 
                 l1t = alloc_xen_pagetable();
                 if ( l1t == NULL )
-                    return -ENOMEM;
+                    goto out;
 
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
@@ -5428,7 +5431,10 @@ int map_pages_to_xen(
 
 #undef flush_flags
 
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecvu-0004mH-87; Fri, 29 May 2020 11:12:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecvs-0004m7-Tv
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:12 +0000
X-Inumbo-ID: 3eac4bbc-a19d-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3eac4bbc-a19d-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:12:12 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From:Sender:Reply-To:
 MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FEfIwhqHFmiOENKi3fOR0wow9f3racvpv9g6qH4tbJo=; b=ep28E21rfFLIRerArhIEm3AUIJ
 utXF2oObWEN5vrSnWiLAKxnXyKbsbBJKqAJR/Y9j8yCM5cfWy1kgvYLC6zlzIGviaMskyg6YUUW4p
 Z2k4lZLXBXrhj9sjiL+CgG5iVB/W77dKuCYwSdwXJl952JGjV0/CqI3SsUoCLSftJsZ0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvq-00021e-LK; Fri, 29 May 2020 11:12:10 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvq-0006tM-AS; Fri, 29 May 2020 11:12:10 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 00/15] switch to domheap for Xen page tables
Date: Fri, 29 May 2020 12:11:44 +0100
Message-Id: <cover.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

This series rewrites all the remaining functions and finally makes the
switch from xenheap to domheap for Xen page tables, so that they no
longer need to rely on the direct map, which is a big step towards
removing the direct map.

This series depends on the following mini-series:
https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00730.html

---
Changed in v7:
- rebase and cleanup.
- address comments in v6.
- add alloc_map_clear_xen_pt() helper to simplify the patches in this
  series.

Changed in v6:
- drop the patches that have already been merged.
- rebase and cleanup.
- rewrite map_pages_to_xen() and modify_xen_mappings() in a way that
  does not require an end_of_loop goto label.

Hongyan Xia (2):
  x86/mm: drop old page table APIs
  x86: switch to use domheap page for page tables

Wei Liu (13):
  x86/mm: map_pages_to_xen would better have one exit path
  x86/mm: make sure there is one exit path for modify_xen_mappings
  x86/mm: rewrite virt_to_xen_l*e
  x86/mm: switch to new APIs in map_pages_to_xen
  x86/mm: switch to new APIs in modify_xen_mappings
  x86_64/mm: introduce pl2e in paging_init
  x86_64/mm: switch to new APIs in paging_init
  x86_64/mm: switch to new APIs in setup_m2p_table
  efi: use new page table APIs in copy_mapping
  efi: switch to new APIs in EFI code
  x86/smpboot: add exit path for clone_mapping()
  x86/smpboot: switch clone_mapping() to new APIs
  x86/mm: drop _new suffix for page table APIs

 xen/arch/x86/domain_page.c |  11 +-
 xen/arch/x86/efi/runtime.h |  13 +-
 xen/arch/x86/mm.c          | 273 +++++++++++++++++++++++--------------
 xen/arch/x86/setup.c       |   4 +-
 xen/arch/x86/smpboot.c     |  70 ++++++----
 xen/arch/x86/x86_64/mm.c   |  82 ++++++-----
 xen/common/efi/boot.c      |  87 +++++++-----
 xen/common/efi/efi.h       |   3 +-
 xen/common/efi/runtime.c   |   8 +-
 xen/common/vmap.c          |   1 +
 xen/include/asm-x86/mm.h   |   7 +-
 xen/include/asm-x86/page.h |  13 +-
 12 files changed, 354 insertions(+), 218 deletions(-)

-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecvz-0004mv-NJ; Fri, 29 May 2020 11:12:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecvx-0004me-SZ
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:17 +0000
X-Inumbo-ID: 3fbe03e2-a19d-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3fbe03e2-a19d-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:12:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wpKga3yyO5WYBCcsTT/9ZqLEb5cI9pIQiRxp5mp3PjU=; b=RyNqcxQd5F8o0SmM/SdkYCxZ/x
 /HD7oLZYAEojQE/q/FHNkVirpBbrotQnDPYk2oVVfYNC06Kqto67WzrWaSMf6KUnOqiQ84152Myp+
 Ar172xs0z4a8Quc2UZ1w5C3wxWMb1IQqUY0FIVyBg3ROhNs24vlQC0kNkyxXrlXiqkME=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvt-00021p-H9; Fri, 29 May 2020 11:12:13 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvt-0006tM-79; Fri, 29 May 2020 11:12:13 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 02/15] x86/mm: make sure there is one exit path for
 modify_xen_mappings
Date: Fri, 29 May 2020 12:11:46 +0100
Message-Id: <3edeb558a0586cf5ecb235c9159cd00fe1197b9e.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to dynamically map and unmap page tables in this
function. Since dynamic mappings may map and unmap pl3e in different
iterations, lift pl3e out of the loop.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed since v4:
- drop the end_of_loop goto label.

Changed since v3:
- remove asserts on rc since it never gets changed to anything else.
---
 xen/arch/x86/mm.c | 15 +++++++++++----
 1 file changed, 11 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index d99f9bc133..462682ba70 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5457,10 +5457,12 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
+    l3_pgentry_t *pl3e;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
+    int rc = -ENOMEM;
 
     /* Set of valid PTE bits which may be altered. */
 #define FLAGS_MASK (_PAGE_NX|_PAGE_RW|_PAGE_PRESENT)
@@ -5471,7 +5473,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
     while ( v < e )
     {
-        l3_pgentry_t *pl3e = virt_to_xen_l3e(v);
+        pl3e = virt_to_xen_l3e(v);
 
         if ( !pl3e || !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
@@ -5504,7 +5506,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             /* PAGE1GB: shatter the superpage and fall through. */
             l2t = alloc_xen_pagetable();
             if ( !l2t )
-                return -ENOMEM;
+                goto out;
+
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
@@ -5561,7 +5564,8 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 /* PSE: shatter the superpage and try again. */
                 l1t = alloc_xen_pagetable();
                 if ( !l1t )
-                    return -ENOMEM;
+                    goto out;
+
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
@@ -5694,7 +5698,10 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     flush_area(NULL, FLUSH_TLB_GLOBAL);
 
 #undef FLAGS_MASK
-    return 0;
+    rc = 0;
+
+ out:
+    return rc;
 }
 
 #undef flush_area
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecvz-0004n6-Vt; Fri, 29 May 2020 11:12:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecvy-0004mm-S9
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:18 +0000
X-Inumbo-ID: 4102c85a-a19d-11ea-a898-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4102c85a-a19d-11ea-a898-12813bfff9fa;
 Fri, 29 May 2020 11:12:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=U9cLqBwN1ctta6xGvOhwYBfX8v2oSfZhysMp6/YsWG8=; b=o2JKyP3Nt8m+1kDKpHx406WlU8
 s2jjRargfDKvJ08wcpCBDmYCqrNBPirBGOokoe8L+ZfOXQx29nHRnwgBNYW364h0CGNpqE3BsTnUx
 tNGcqR+jBAES2joXbCvzpdBDThkfwtsP2y3+cCnIJ2VfwyPCpT7QLcr+5XNrAUeDONj4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvv-00021x-Gh; Fri, 29 May 2020 11:12:15 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvv-0006tM-2E; Fri, 29 May 2020 11:12:15 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 03/15] x86/mm: rewrite virt_to_xen_l*e
Date: Fri, 29 May 2020 12:11:47 +0100
Message-Id: <fd5d98198d9539b232a570a83e7a24be2407e739.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, julien@xen.org,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Rewrite those functions to use the new APIs. Modify their callers to
unmap the returned pointer. Since alloc_xen_pagetable_new() is almost
never useful unless accompanied by page clearing and a mapping,
introduce a helper, alloc_map_clear_xen_pt(), for this sequence.

Note that the change of virt_to_xen_l1e() also requires vmap_to_mfn() to
unmap the page, which requires domain_page.h header in vmap.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- remove a comment.
- use l1e_get_mfn() instead of converting things back and forth.
- add alloc_map_clear_xen_pt().
- unmap before the next mapping to reduce mapcache pressure.
- use normal unmap calls instead of the macro in error paths because
  unmap can handle NULL now.
---
 xen/arch/x86/domain_page.c | 11 +++--
 xen/arch/x86/mm.c          | 96 +++++++++++++++++++++++++++-----------
 xen/common/vmap.c          |  1 +
 xen/include/asm-x86/mm.h   |  1 +
 xen/include/asm-x86/page.h |  8 +++-
 5 files changed, 86 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/domain_page.c b/xen/arch/x86/domain_page.c
index b03728e18e..dc8627c1b5 100644
--- a/xen/arch/x86/domain_page.c
+++ b/xen/arch/x86/domain_page.c
@@ -333,21 +333,24 @@ void unmap_domain_page_global(const void *ptr)
 mfn_t domain_page_map_to_mfn(const void *ptr)
 {
     unsigned long va = (unsigned long)ptr;
-    const l1_pgentry_t *pl1e;
+    l1_pgentry_t l1e;
 
     if ( va >= DIRECTMAP_VIRT_START )
         return _mfn(virt_to_mfn(ptr));
 
     if ( va >= VMAP_VIRT_START && va < VMAP_VIRT_END )
     {
-        pl1e = virt_to_xen_l1e(va);
+        const l1_pgentry_t *pl1e = virt_to_xen_l1e(va);
+
         BUG_ON(!pl1e);
+        l1e = *pl1e;
+        unmap_domain_page(pl1e);
     }
     else
     {
         ASSERT(va >= MAPCACHE_VIRT_START && va < MAPCACHE_VIRT_END);
-        pl1e = &__linear_l1_table[l1_linear_offset(va)];
+        l1e = __linear_l1_table[l1_linear_offset(va)];
     }
 
-    return l1e_get_mfn(*pl1e);
+    return l1e_get_mfn(l1e);
 }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 462682ba70..b67ecf4107 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4948,8 +4948,28 @@ void free_xen_pagetable_new(mfn_t mfn)
         free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
 }
 
+void *alloc_map_clear_xen_pt(mfn_t *pmfn)
+{
+    mfn_t mfn = alloc_xen_pagetable_new();
+    void *ret;
+
+    if ( mfn_eq(mfn, INVALID_MFN) )
+        return NULL;
+
+    if ( pmfn )
+        *pmfn = mfn;
+    ret = map_domain_page(mfn);
+    clear_page(ret);
+
+    return ret;
+}
+
 static DEFINE_SPINLOCK(map_pgdir_lock);
 
+/*
+ * For virt_to_xen_lXe() functions, they take a virtual address and return a
+ * pointer to Xen's LX entry. Caller needs to unmap the pointer.
+ */
 static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
 {
     l4_pgentry_t *pl4e;
@@ -4958,33 +4978,33 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
     if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l3_pgentry_t *l3t = alloc_xen_pagetable();
+        mfn_t l3mfn;
+        l3_pgentry_t *l3t = alloc_map_clear_xen_pt(&l3mfn);
 
         if ( !l3t )
             return NULL;
-        clear_page(l3t);
+        UNMAP_DOMAIN_PAGE(l3t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l4e_get_flags(*pl4e) & _PAGE_PRESENT) )
         {
-            l4_pgentry_t l4e = l4e_from_paddr(__pa(l3t), __PAGE_HYPERVISOR);
+            l4_pgentry_t l4e = l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
 
             l4e_write(pl4e, l4e);
             efi_update_l4_pgtable(l4_table_offset(v), l4e);
-            l3t = NULL;
+            l3mfn = INVALID_MFN;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( l3t )
-            free_xen_pagetable(l3t);
+        free_xen_pagetable_new(l3mfn);
     }
 
-    return l4e_to_l3e(*pl4e) + l3_table_offset(v);
+    return map_l3t_from_l4e(*pl4e) + l3_table_offset(v);
 }
 
 static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
 {
-    l3_pgentry_t *pl3e;
+    l3_pgentry_t *pl3e, l3e;
 
     pl3e = virt_to_xen_l3e(v);
     if ( !pl3e )
@@ -4993,31 +5013,37 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l2_pgentry_t *l2t = alloc_xen_pagetable();
+        mfn_t l2mfn;
+        l2_pgentry_t *l2t = alloc_map_clear_xen_pt(&l2mfn);
 
         if ( !l2t )
+        {
+            unmap_domain_page(pl3e);
             return NULL;
-        clear_page(l2t);
+        }
+        UNMAP_DOMAIN_PAGE(l2t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            l3e_write(pl3e, l3e_from_paddr(__pa(l2t), __PAGE_HYPERVISOR));
-            l2t = NULL;
+            l3e_write(pl3e, l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
+            l2mfn = INVALID_MFN;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( l2t )
-            free_xen_pagetable(l2t);
+        free_xen_pagetable_new(l2mfn);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-    return l3e_to_l2e(*pl3e) + l2_table_offset(v);
+    l3e = *pl3e;
+    unmap_domain_page(pl3e);
+
+    return map_l2t_from_l3e(l3e) + l2_table_offset(v);
 }
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
 {
-    l2_pgentry_t *pl2e;
+    l2_pgentry_t *pl2e, l2e;
 
     pl2e = virt_to_xen_l2e(v);
     if ( !pl2e )
@@ -5026,26 +5052,32 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         bool locking = system_state > SYS_STATE_boot;
-        l1_pgentry_t *l1t = alloc_xen_pagetable();
+        mfn_t l1mfn;
+        l1_pgentry_t *l1t = alloc_map_clear_xen_pt(&l1mfn);
 
         if ( !l1t )
+        {
+            unmap_domain_page(pl2e);
             return NULL;
-        clear_page(l1t);
+        }
+        UNMAP_DOMAIN_PAGE(l1t);
         if ( locking )
             spin_lock(&map_pgdir_lock);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l2e_write(pl2e, l2e_from_paddr(__pa(l1t), __PAGE_HYPERVISOR));
-            l1t = NULL;
+            l2e_write(pl2e, l2e_from_mfn(l1mfn, __PAGE_HYPERVISOR));
+            l1mfn = INVALID_MFN;
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        if ( l1t )
-            free_xen_pagetable(l1t);
+        free_xen_pagetable_new(l1mfn);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-    return l2e_to_l1e(*pl2e) + l1_table_offset(v);
+    l2e = *pl2e;
+    unmap_domain_page(pl2e);
+
+    return map_l1t_from_l2e(l2e) + l1_table_offset(v);
 }
 
 /* Convert to from superpage-mapping flags for map_pages_to_xen(). */
@@ -5068,8 +5100,8 @@ int map_pages_to_xen(
     unsigned int flags)
 {
     bool locking = system_state > SYS_STATE_boot;
-    l3_pgentry_t *pl3e, ol3e;
-    l2_pgentry_t *pl2e, ol2e;
+    l3_pgentry_t *pl3e = NULL, ol3e;
+    l2_pgentry_t *pl2e = NULL, ol2e;
     l1_pgentry_t *pl1e, ol1e;
     unsigned int  i;
     int rc = -ENOMEM;
@@ -5090,6 +5122,10 @@ int map_pages_to_xen(
 
     while ( nr_mfns != 0 )
     {
+        /* Clean up mappings mapped in the previous iteration. */
+        UNMAP_DOMAIN_PAGE(pl3e);
+        UNMAP_DOMAIN_PAGE(pl2e);
+
         pl3e = virt_to_xen_l3e(virt);
 
         if ( !pl3e )
@@ -5258,6 +5294,8 @@ int map_pages_to_xen(
                 pl1e = virt_to_xen_l1e(virt);
                 if ( pl1e == NULL )
                     goto out;
+
+                UNMAP_DOMAIN_PAGE(pl1e);
             }
             else if ( l2e_get_flags(*pl2e) & _PAGE_PSE )
             {
@@ -5434,6 +5472,8 @@ int map_pages_to_xen(
     rc = 0;
 
  out:
+    unmap_domain_page(pl2e);
+    unmap_domain_page(pl3e);
     return rc;
 }
 
@@ -5457,7 +5497,7 @@ int populate_pt_range(unsigned long virt, unsigned long nr_mfns)
 int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
-    l3_pgentry_t *pl3e;
+    l3_pgentry_t *pl3e = NULL;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
     unsigned int  i;
@@ -5473,6 +5513,9 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
     while ( v < e )
     {
+        /* Clean up mappings mapped in the previous iteration. */
+        UNMAP_DOMAIN_PAGE(pl3e);
+
         pl3e = virt_to_xen_l3e(v);
 
         if ( !pl3e || !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
@@ -5701,6 +5744,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    unmap_domain_page(pl3e);
     return rc;
 }
 
diff --git a/xen/common/vmap.c b/xen/common/vmap.c
index faebc1ddf1..9964ab2096 100644
--- a/xen/common/vmap.c
+++ b/xen/common/vmap.c
@@ -1,6 +1,7 @@
 #ifdef VMAP_VIRT_START
 #include <xen/bitmap.h>
 #include <xen/cache.h>
+#include <xen/domain_page.h>
 #include <xen/init.h>
 #include <xen/mm.h>
 #include <xen/pfn.h>
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 3d3f9d49ac..42d1a78731 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -587,6 +587,7 @@ void *alloc_xen_pagetable(void);
 void free_xen_pagetable(void *v);
 mfn_t alloc_xen_pagetable_new(void);
 void free_xen_pagetable_new(mfn_t mfn);
+void *alloc_map_clear_xen_pt(mfn_t *pmfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
 
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 5acf3d3d5a..3854feb3ea 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -291,7 +291,13 @@ void copy_page_sse2(void *, const void *);
 #define pfn_to_paddr(pfn)   __pfn_to_paddr(pfn)
 #define paddr_to_pfn(pa)    __paddr_to_pfn(pa)
 #define paddr_to_pdx(pa)    pfn_to_pdx(paddr_to_pfn(pa))
-#define vmap_to_mfn(va)     _mfn(l1e_get_pfn(*virt_to_xen_l1e((unsigned long)(va))))
+
+#define vmap_to_mfn(va) ({                                                  \
+        const l1_pgentry_t *pl1e_ = virt_to_xen_l1e((unsigned long)(va));   \
+        mfn_t mfn_ = l1e_get_mfn(*pl1e_);                                   \
+        unmap_domain_page(pl1e_);                                           \
+        mfn_; })
+
 #define vmap_to_page(va)    mfn_to_page(vmap_to_mfn(va))
 
 #endif /* !defined(__ASSEMBLY__) */
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecw4-0004p2-Bz; Fri, 29 May 2020 11:12:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecw2-0004oU-Sq
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:22 +0000
X-Inumbo-ID: 42c52f20-a19d-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42c52f20-a19d-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:12:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hxD+dCa5/4bshGRDmMFVlWW1acMXO1IiEGiX1UxC8RE=; b=ywAAVV72FNevc4d4gtt3c9vrc7
 aPpAJjUzRz3H9WqJcbskRbjkZpSNscppV/XLe9bvP4u9l8DofNgbc+gbQgyi6OIZ+a9rmfTK9wjzz
 oF3vD1AY0kgLOrsWtWo1p6gbC3z3ivPcxCGHnju/lgoLFKJ1Dw9qph89/JSkXIYRh3eQ=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvy-000229-D8; Fri, 29 May 2020 11:12:18 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvy-0006tM-2u; Fri, 29 May 2020 11:12:18 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 05/15] x86/mm: switch to new APIs in modify_xen_mappings
Date: Fri, 29 May 2020 12:11:49 +0100
Message-Id: <2d57f21e24cc898ba41ec537ea5df7ad5dfd6a05.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Page tables allocated in that function should now be mapped and
unmapped.

Note that pl2e may now be mapped and unmapped in different iterations,
so we need to add clean-ups for that.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- use normal unmap in the error path.
---
 xen/arch/x86/mm.c | 57 ++++++++++++++++++++++++++++++-----------------
 1 file changed, 36 insertions(+), 21 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 9cb1c6b347..26694e2f30 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5510,7 +5510,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 {
     bool locking = system_state > SYS_STATE_boot;
     l3_pgentry_t *pl3e = NULL;
-    l2_pgentry_t *pl2e;
+    l2_pgentry_t *pl2e = NULL;
     l1_pgentry_t *pl1e;
     unsigned int  i;
     unsigned long v = s;
@@ -5526,6 +5526,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     while ( v < e )
     {
         /* Clean up mappings mapped in the previous iteration. */
+        UNMAP_DOMAIN_PAGE(pl2e);
         UNMAP_DOMAIN_PAGE(pl3e);
 
         pl3e = virt_to_xen_l3e(v);
@@ -5543,6 +5544,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
         if ( l3e_get_flags(*pl3e) & _PAGE_PSE )
         {
             l2_pgentry_t *l2t;
+            mfn_t l2mfn;
 
             if ( l2_table_offset(v) == 0 &&
                  l1_table_offset(v) == 0 &&
@@ -5559,35 +5561,38 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            l2t = alloc_xen_pagetable();
-            if ( !l2t )
+            l2mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
+            l2t = map_domain_page(l2mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(*pl3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(*pl3e)));
+            UNMAP_DOMAIN_PAGE(l2t);
+
             if ( locking )
                 spin_lock(&map_pgdir_lock);
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
-                l2t = NULL;
+                l3e_write_atomic(pl3e,
+                                 l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
+                l2mfn = INVALID_MFN;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
-            if ( l2t )
-                free_xen_pagetable(l2t);
+
+            free_xen_pagetable_new(l2mfn);
         }
 
         /*
          * The L3 entry has been verified to be present, and we've dealt with
          * 1G pages as well, so the L2 table cannot require allocation.
          */
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(v);
+        pl2e = map_l2t_from_l3e(*pl3e) + l2_table_offset(v);
 
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
@@ -5615,41 +5620,45 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             else
             {
                 l1_pgentry_t *l1t;
-
                 /* PSE: shatter the superpage and try again. */
-                l1t = alloc_xen_pagetable();
-                if ( !l1t )
+                mfn_t l1mfn = alloc_xen_pagetable_new();
+
+                if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
 
+                l1t = map_domain_page(l1mfn);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            l2e_get_flags(*pl2e) & ~_PAGE_PSE));
+                UNMAP_DOMAIN_PAGE(l1t);
+
                 if ( locking )
                     spin_lock(&map_pgdir_lock);
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(l1mfn,
                                                         __PAGE_HYPERVISOR));
-                    l1t = NULL;
+                    l1mfn = INVALID_MFN;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
-                if ( l1t )
-                    free_xen_pagetable(l1t);
+
+                free_xen_pagetable_new(l1mfn);
             }
         }
         else
         {
             l1_pgentry_t nl1e, *l1t;
+            mfn_t l1mfn;
 
             /*
              * Ordinary 4kB mapping: The L2 entry has been verified to be
              * present, and we've dealt with 2M pages as well, so the L1 table
              * cannot require allocation.
              */
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(v);
+            pl1e = map_l1t_from_l2e(*pl2e) + l1_table_offset(v);
 
             /* Confirm the caller isn't trying to create new mappings. */
             if ( !(l1e_get_flags(*pl1e) & _PAGE_PRESENT) )
@@ -5660,6 +5669,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                                (l1e_get_flags(*pl1e) & ~FLAGS_MASK) | nf);
 
             l1e_write_atomic(pl1e, nl1e);
+            UNMAP_DOMAIN_PAGE(pl1e);
             v += PAGE_SIZE;
 
             /*
@@ -5689,10 +5699,12 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 continue;
             }
 
-            l1t = l2e_to_l1e(*pl2e);
+            l1mfn = l2e_get_mfn(*pl2e);
+            l1t = map_domain_page(l1mfn);
             for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                 if ( l1e_get_intpte(l1t[i]) != 0 )
                     break;
+            UNMAP_DOMAIN_PAGE(l1t);
             if ( i == L1_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L2E and free the L1 page. */
@@ -5700,7 +5712,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l1t);
+                free_xen_pagetable_new(l1mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5731,11 +5743,13 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
 
         {
             l2_pgentry_t *l2t;
+            mfn_t l2mfn = l3e_get_mfn(*pl3e);
 
-            l2t = l3e_to_l2e(*pl3e);
+            l2t = map_domain_page(l2mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 if ( l2e_get_intpte(l2t[i]) != 0 )
                     break;
+            UNMAP_DOMAIN_PAGE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 /* Empty: zap the L3E and free the L2 page. */
@@ -5743,7 +5757,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable(l2t);
+                free_xen_pagetable_new(l2mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5756,6 +5770,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
     rc = 0;
 
  out:
+    unmap_domain_page(pl2e);
     unmap_domain_page(pl3e);
     return rc;
 }
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecw5-0004pW-Jj; Fri, 29 May 2020 11:12:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecw3-0004oq-SK
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:23 +0000
X-Inumbo-ID: 4202667a-a19d-11ea-a898-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4202667a-a19d-11ea-a898-12813bfff9fa;
 Fri, 29 May 2020 11:12:17 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=uNouwTMV+lL8Fxj9UOLXd3I4quR46n4/2NM/8F/pgpU=; b=MpdGMfw0PYefqAripTtHiKjk8s
 qf60V6cZJKxxRdhj4YnKq2r7+IkrWdI1ZJk6PrUH+AJvsxzbbh6AM13AO9lNmTv2lHRf7vaB16oHU
 f2iQiQYcITmXIexzBhhJw4JD9PgIAdaKLXNSJXgWuy+dPy0GFp4zE+PRMMk/mgKMZtPY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvw-000222-UN; Fri, 29 May 2020 11:12:16 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvw-0006tM-Kt; Fri, 29 May 2020 11:12:16 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 04/15] x86/mm: switch to new APIs in map_pages_to_xen
Date: Fri, 29 May 2020 12:11:48 +0100
Message-Id: <0c7cd882e8f8e853d2da78f41cce42d1f70b3bdf.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Page tables allocated in this function are now mapped and unmapped
explicitly via the new APIs, instead of being accessed through the
direct map.
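
The pattern applied throughout the diff is: reference page tables by MFN,
map them only around the access, and hand ownership off by resetting the
MFN to INVALID_MFN once the entry has been installed, so the common free
path becomes a no-op. A minimal, host-compilable sketch of that pattern
(not Xen code; all names below are stand-ins for the Xen primitives):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t mfn_t;
#define INVALID_MFN ((mfn_t)~0ULL)
#define ENTRIES     512

static void *frames[16];   /* toy frame store indexed by MFN */
static mfn_t next_mfn;

static mfn_t alloc_pagetable(void)            /* alloc_xen_pagetable_new() */
{
    frames[next_mfn] = calloc(ENTRIES, sizeof(uint64_t));
    return frames[next_mfn] ? next_mfn++ : INVALID_MFN;
}

static uint64_t *map_frame(mfn_t mfn)         /* map_domain_page() */
{
    return frames[mfn];
}

static void unmap_frame(uint64_t **p)         /* UNMAP_DOMAIN_PAGE() */
{
    *p = NULL;
}

static void free_pagetable(mfn_t mfn)         /* free_xen_pagetable_new() */
{
    if ( mfn != INVALID_MFN )
        free(frames[mfn]);
}

/* Populate a fresh L1 table; install it iff @install, free it otherwise. */
int split_superpage(int install)
{
    mfn_t l1mfn = alloc_pagetable();

    if ( l1mfn == INVALID_MFN )
        return -1;

    uint64_t *l1t = map_frame(l1mfn);
    for ( unsigned int i = 0; i < ENTRIES; i++ )
        l1t[i] = i;              /* write entries only while mapped */
    unmap_frame(&l1t);           /* drop the mapping before publishing */

    if ( install )
        l1mfn = INVALID_MFN;     /* entry written: ownership passed on */

    free_pagetable(l1mfn);       /* no-op when ownership was passed */
    return install;
}
```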

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
---
 xen/arch/x86/mm.c | 60 ++++++++++++++++++++++++++++-------------------
 1 file changed, 36 insertions(+), 24 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b67ecf4107..9cb1c6b347 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5154,7 +5154,7 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    l2_pgentry_t *l2t = l3e_to_l2e(ol3e);
+                    l2_pgentry_t *l2t = map_l2t_from_l3e(ol3e);
 
                     for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                     {
@@ -5166,10 +5166,11 @@ int map_pages_to_xen(
                         else
                         {
                             unsigned int j;
-                            const l1_pgentry_t *l1t = l2e_to_l1e(ol2e);
+                            const l1_pgentry_t *l1t = map_l1t_from_l2e(ol2e);
 
                             for ( j = 0; j < L1_PAGETABLE_ENTRIES; j++ )
                                 flush_flags(l1e_get_flags(l1t[j]));
+                            unmap_domain_page(l1t);
                         }
                     }
                     flush_area(virt, flush_flags);
@@ -5178,9 +5179,10 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable(l2e_to_l1e(ol2e));
+                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
                     }
-                    free_xen_pagetable(l2t);
+                    unmap_domain_page(l2t);
+                    free_xen_pagetable_new(l3e_get_mfn(ol3e));
                 }
             }
 
@@ -5197,6 +5199,7 @@ int map_pages_to_xen(
             unsigned int flush_flags =
                 FLUSH_TLB | FLUSH_ORDER(2 * PAGETABLE_ORDER);
             l2_pgentry_t *l2t;
+            mfn_t l2mfn;
 
             /* Skip this PTE if there is no change. */
             if ( ((l3e_get_pfn(ol3e) & ~(L2_PAGETABLE_ENTRIES *
@@ -5218,15 +5221,17 @@ int map_pages_to_xen(
                 continue;
             }
 
-            l2t = alloc_xen_pagetable();
-            if ( l2t == NULL )
+            l2mfn = alloc_xen_pagetable_new();
+            if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
+            l2t = map_domain_page(l2mfn);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
                 l2e_write(l2t + i,
                           l2e_from_pfn(l3e_get_pfn(ol3e) +
                                        (i << PAGETABLE_ORDER),
                                        l3e_get_flags(ol3e)));
+            UNMAP_DOMAIN_PAGE(l2t);
 
             if ( l3e_get_flags(ol3e) & _PAGE_GLOBAL )
                 flush_flags |= FLUSH_TLB_GLOBAL;
@@ -5236,15 +5241,15 @@ int map_pages_to_xen(
             if ( (l3e_get_flags(*pl3e) & _PAGE_PRESENT) &&
                  (l3e_get_flags(*pl3e) & _PAGE_PSE) )
             {
-                l3e_write_atomic(pl3e, l3e_from_mfn(virt_to_mfn(l2t),
-                                                    __PAGE_HYPERVISOR));
-                l2t = NULL;
+                l3e_write_atomic(pl3e,
+                                 l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
+                l2mfn = INVALID_MFN;
             }
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
-            if ( l2t )
-                free_xen_pagetable(l2t);
+
+            free_xen_pagetable_new(l2mfn);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5272,12 +5277,13 @@ int map_pages_to_xen(
                 }
                 else
                 {
-                    l1_pgentry_t *l1t = l2e_to_l1e(ol2e);
+                    l1_pgentry_t *l1t = map_l1t_from_l2e(ol2e);
 
                     for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
-                    free_xen_pagetable(l1t);
+                    unmap_domain_page(l1t);
+                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
                 }
             }
 
@@ -5302,6 +5308,7 @@ int map_pages_to_xen(
                 unsigned int flush_flags =
                     FLUSH_TLB | FLUSH_ORDER(PAGETABLE_ORDER);
                 l1_pgentry_t *l1t;
+                mfn_t l1mfn;
 
                 /* Skip this PTE if there is no change. */
                 if ( (((l2e_get_pfn(*pl2e) & ~(L1_PAGETABLE_ENTRIES - 1)) +
@@ -5321,14 +5328,16 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = alloc_xen_pagetable();
-                if ( l1t == NULL )
+                l1mfn = alloc_xen_pagetable_new();
+                if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
 
+                l1t = map_domain_page(l1mfn);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     l1e_write(&l1t[i],
                               l1e_from_pfn(l2e_get_pfn(*pl2e) + i,
                                            lNf_to_l1f(l2e_get_flags(*pl2e))));
+                UNMAP_DOMAIN_PAGE(l1t);
 
                 if ( l2e_get_flags(*pl2e) & _PAGE_GLOBAL )
                     flush_flags |= FLUSH_TLB_GLOBAL;
@@ -5338,20 +5347,21 @@ int map_pages_to_xen(
                 if ( (l2e_get_flags(*pl2e) & _PAGE_PRESENT) &&
                      (l2e_get_flags(*pl2e) & _PAGE_PSE) )
                 {
-                    l2e_write_atomic(pl2e, l2e_from_mfn(virt_to_mfn(l1t),
+                    l2e_write_atomic(pl2e, l2e_from_mfn(l1mfn,
                                                         __PAGE_HYPERVISOR));
-                    l1t = NULL;
+                    l1mfn = INVALID_MFN;
                 }
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
-                if ( l1t )
-                    free_xen_pagetable(l1t);
+
+                free_xen_pagetable_new(l1mfn);
             }
 
-            pl1e  = l2e_to_l1e(*pl2e) + l1_table_offset(virt);
+            pl1e  = map_l1t_from_l2e(*pl2e) + l1_table_offset(virt);
             ol1e  = *pl1e;
             l1e_write_atomic(pl1e, l1e_from_mfn(mfn, flags));
+            UNMAP_DOMAIN_PAGE(pl1e);
             if ( (l1e_get_flags(ol1e) & _PAGE_PRESENT) )
             {
                 unsigned int flush_flags = FLUSH_TLB | FLUSH_ORDER(0);
@@ -5395,12 +5405,13 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1t = l2e_to_l1e(ol2e);
+                l1t = map_l1t_from_l2e(ol2e);
                 base_mfn = l1e_get_pfn(l1t[0]) & ~(L1_PAGETABLE_ENTRIES - 1);
                 for ( i = 0; i < L1_PAGETABLE_ENTRIES; i++ )
                     if ( (l1e_get_pfn(l1t[i]) != (base_mfn + i)) ||
                          (l1e_get_flags(l1t[i]) != flags) )
                         break;
+                UNMAP_DOMAIN_PAGE(l1t);
                 if ( i == L1_PAGETABLE_ENTRIES )
                 {
                     l2e_write_atomic(pl2e, l2e_from_pfn(base_mfn,
@@ -5410,7 +5421,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable(l2e_to_l1e(ol2e));
+                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5443,7 +5454,7 @@ int map_pages_to_xen(
                 continue;
             }
 
-            l2t = l3e_to_l2e(ol3e);
+            l2t = map_l2t_from_l3e(ol3e);
             base_mfn = l2e_get_pfn(l2t[0]) & ~(L2_PAGETABLE_ENTRIES *
                                               L1_PAGETABLE_ENTRIES - 1);
             for ( i = 0; i < L2_PAGETABLE_ENTRIES; i++ )
@@ -5451,6 +5462,7 @@ int map_pages_to_xen(
                       (base_mfn + (i << PAGETABLE_ORDER))) ||
                      (l2e_get_flags(l2t[i]) != l1f_to_lNf(flags)) )
                     break;
+            UNMAP_DOMAIN_PAGE(l2t);
             if ( i == L2_PAGETABLE_ENTRIES )
             {
                 l3e_write_atomic(pl3e, l3e_from_pfn(base_mfn,
@@ -5460,7 +5472,7 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable(l3e_to_l2e(ol3e));
+                free_xen_pagetable_new(l3e_get_mfn(ol3e));
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecwA-0004sa-5q; Fri, 29 May 2020 11:12:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecw8-0004rq-SQ
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:28 +0000
X-Inumbo-ID: 4370f4c2-a19d-11ea-a898-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4370f4c2-a19d-11ea-a898-12813bfff9fa;
 Fri, 29 May 2020 11:12:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZtFKVU3lKKYpcsIjk2cK0oN7MN/6FJLYI1BuXSuoRzM=; b=XkepALBMjvM9mdCZoafPz7i+4S
 Q5x3YpNa/El2jDWxuSnkB4M4D2Skt6/yqo5uxEpXy4O8y6kXY78JQ2MgoGZLbixpPD9kGjP67zra2
 MiRhUPUrxESirUvNuYMy+9aovbHw6j8MpdpVV87a7sc7PvH1nPeOjIoiudUvBZOjs/NM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvz-00022G-R7; Fri, 29 May 2020 11:12:19 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecvz-0006tM-I7; Fri, 29 May 2020 11:12:19 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 06/15] x86_64/mm: introduce pl2e in paging_init
Date: Fri, 29 May 2020 12:11:50 +0100
Message-Id: <6e840488d3512bb1b0d5e678391323df5301e1e0.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to map and unmap pages in paging_init(). Introduce
pl2e as the iteration cursor so that l2_ro_mpt can keep pointing at the
start of the page table itself.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>

---
Changed in v7:
- reword commit message.
---
 xen/arch/x86/x86_64/mm.c | 16 +++++++++-------
 1 file changed, 9 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 102079a801..243014a119 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -479,7 +479,7 @@ void __init paging_init(void)
     unsigned long i, mpt_size, va;
     unsigned int n, memflags;
     l3_pgentry_t *l3_ro_mpt;
-    l2_pgentry_t *l2_ro_mpt = NULL;
+    l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     struct page_info *l1_pg;
 
     /*
@@ -529,7 +529,7 @@ void __init paging_init(void)
             (L2_PAGETABLE_SHIFT - 3 + PAGE_SHIFT)));
 
         if ( cpu_has_page1gb &&
-             !((unsigned long)l2_ro_mpt & ~PAGE_MASK) &&
+             !((unsigned long)pl2e & ~PAGE_MASK) &&
              (mpt_size >> L3_PAGETABLE_SHIFT) > (i >> PAGETABLE_ORDER) )
         {
             unsigned int k, holes;
@@ -589,7 +589,7 @@ void __init paging_init(void)
             memset((void *)(RDWR_MPT_VIRT_START + (i << L2_PAGETABLE_SHIFT)),
                    0xFF, 1UL << L2_PAGETABLE_SHIFT);
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
+        if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
             if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
                 goto nomem;
@@ -597,13 +597,14 @@ void __init paging_init(void)
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
                       l3e_from_paddr(__pa(l2_ro_mpt),
                                      __PAGE_HYPERVISOR_RO | _PAGE_USER));
+            pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
         /* NB. Cannot be GLOBAL: guest user mode should not see it. */
         if ( l1_pg )
-            l2e_write(l2_ro_mpt, l2e_from_page(
+            l2e_write(pl2e, l2e_from_page(
                 l1_pg, /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
-        l2_ro_mpt++;
+        pl2e++;
     }
 #undef CNT
 #undef MFN
@@ -613,6 +614,7 @@ void __init paging_init(void)
         goto nomem;
     compat_idle_pg_table_l2 = l2_ro_mpt;
     clear_page(l2_ro_mpt);
+    pl2e = l2_ro_mpt;
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
     if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START )
@@ -625,7 +627,7 @@ void __init paging_init(void)
              sizeof(*compat_machine_to_phys_mapping))
     BUILD_BUG_ON((sizeof(*frame_table) & ~sizeof(*frame_table)) % \
                  sizeof(*compat_machine_to_phys_mapping));
-    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, l2_ro_mpt++ )
+    for ( i = 0; i < (mpt_size >> L2_PAGETABLE_SHIFT); i++, pl2e++ )
     {
         memflags = MEMF_node(phys_to_nid(i <<
             (L2_PAGETABLE_SHIFT - 2 + PAGE_SHIFT)));
@@ -647,7 +649,7 @@ void __init paging_init(void)
                         (i << L2_PAGETABLE_SHIFT)),
                0xFF, 1UL << L2_PAGETABLE_SHIFT);
         /* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */
-        l2e_write(l2_ro_mpt, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
+        l2e_write(pl2e, l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
     }
 #undef CNT
 #undef MFN
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecw9-0004sE-Sd; Fri, 29 May 2020 11:12:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecw7-0004qa-T4
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:27 +0000
X-Inumbo-ID: 454bad0a-a19d-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 454bad0a-a19d-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:12:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yjQKyxZLA0QE9GtStVEVXPSECpE1q2ci33MDmy3Zy3Y=; b=XYB7xs7QRSFY1emHQLtBmUwT2x
 XJRzSFS+uc6qdE90IrcfcEilXtm1v46WTFWepVs2pPlZb8ebinjMLLJzCtlB9nQR94F6b+dJt8stz
 my/tWeA2F5GZIzvUgJnh+jE9GH4J/avzOdu2nCTma+REsh+OLGUiCxwEulTBj1DEi4I4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw2-00022S-OW; Fri, 29 May 2020 11:12:22 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw2-0006tM-F5; Fri, 29 May 2020 11:12:22 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 08/15] x86_64/mm: switch to new APIs in setup_m2p_table
Date: Fri, 29 May 2020 12:11:52 +0100
Message-Id: <14aec5f25e8226a45dbc6b26fcc9981ea5f66a90.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Avoid repeatedly mapping l2_ro_mpt by keeping the mapping across loop
iterations, and only unmap and remap it when crossing a 1G boundary.
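
The boundary handling can be sketched as follows: a host-compilable toy
(not Xen code) that counts how often the L2 mapping is (re)established
when walking 2M steps over a VA range, dropping it only when the address
enters a new 1G region. The names are illustrative only:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define L2_SHIFT 21   /* 2M: one loop step of the m2p walk */
#define L3_SHIFT 30   /* 1G: the VA span covered by one L2 table */

size_t count_l2_maps(uint64_t start, uint64_t end)
{
    size_t maps = 0;
    int mapped = 0;   /* stands in for "l2_ro_mpt != NULL" */

    for ( uint64_t va = start; va < end; va += 1ULL << L2_SHIFT )
    {
        /* Crossing a 1G boundary: the old L2 mapping is stale. */
        if ( !(va & ((1ULL << L3_SHIFT) - 1)) )
            mapped = 0;                /* UNMAP_DOMAIN_PAGE(l2_ro_mpt) */

        if ( !mapped )                 /* map_l2t_from_l3e(...) */
        {
            mapped = 1;
            maps++;
        }
        /* ... write l2_ro_mpt[l2_table_offset(va)] here ... */
    }
    return maps;
}
```

Each 1G region thus costs exactly one map/unmap pair, rather than one per
2M iteration.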

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- avoid repetitive mapping of l2_ro_mpt.
- edit commit message.
- switch to alloc_map_clear_xen_pt().
---
 xen/arch/x86/x86_64/mm.c | 32 ++++++++++++++++++--------------
 1 file changed, 18 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 8877ac7bb7..cfc3de9091 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -385,7 +385,8 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ASSERT(l4e_get_flags(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)])
             & _PAGE_PRESENT);
-    l3_ro_mpt = l4e_to_l3e(idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
+    l3_ro_mpt = map_l3t_from_l4e(
+                    idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)]);
 
     smap = (info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 3)) -1)));
     emap = ((info->epfn + ((1UL << (L2_PAGETABLE_SHIFT - 3)) - 1 )) &
@@ -403,6 +404,10 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
     i = smap;
     while ( i < emap )
     {
+        if ( !((RO_MPT_VIRT_START + i * sizeof(*machine_to_phys_mapping)) &
+               ((1UL << L3_PAGETABLE_SHIFT) - 1)) )
+            UNMAP_DOMAIN_PAGE(l2_ro_mpt);
+
         switch ( m2p_mapped(i) )
         {
         case M2P_1G_MAPPED:
@@ -438,32 +443,29 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
             ASSERT(!(l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
                   _PAGE_PSE));
-            if ( l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
-              _PAGE_PRESENT )
-                l2_ro_mpt = l3e_to_l2e(l3_ro_mpt[l3_table_offset(va)]) +
-                  l2_table_offset(va);
-            else
+            if ( (l3e_get_flags(l3_ro_mpt[l3_table_offset(va)]) &
+                  _PAGE_PRESENT) && !l2_ro_mpt )
+                l2_ro_mpt = map_l2t_from_l3e(l3_ro_mpt[l3_table_offset(va)]);
+            else if ( !l2_ro_mpt )
             {
-                l2_ro_mpt = alloc_xen_pagetable();
+                mfn_t l2_ro_mpt_mfn;
+
+                l2_ro_mpt = alloc_map_clear_xen_pt(&l2_ro_mpt_mfn);
                 if ( !l2_ro_mpt )
                 {
                     ret = -ENOMEM;
                     goto error;
                 }
 
-                clear_page(l2_ro_mpt);
                 l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                          l3e_from_paddr(__pa(l2_ro_mpt),
-                                         __PAGE_HYPERVISOR_RO | _PAGE_USER));
-                l2_ro_mpt += l2_table_offset(va);
+                          l3e_from_mfn(l2_ro_mpt_mfn,
+                                       __PAGE_HYPERVISOR_RO | _PAGE_USER));
             }
 
             /* NB. Cannot be GLOBAL: guest user mode should not see it. */
-            l2e_write(l2_ro_mpt, l2e_from_mfn(mfn,
+            l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_from_mfn(mfn,
                    /*_PAGE_GLOBAL|*/_PAGE_PSE|_PAGE_USER|_PAGE_PRESENT));
         }
-        if ( !((unsigned long)l2_ro_mpt & ~PAGE_MASK) )
-            l2_ro_mpt = NULL;
         i += ( 1UL << (L2_PAGETABLE_SHIFT - 3));
     }
 #undef CNT
@@ -471,6 +473,8 @@ static int setup_m2p_table(struct mem_hotadd_info *info)
 
     ret = setup_compat_m2p_table(info);
 error:
+    unmap_domain_page(l2_ro_mpt);
+    unmap_domain_page(l3_ro_mpt);
     return ret;
 }
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecwF-0004wl-K7; Fri, 29 May 2020 11:12:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecwD-0004vK-SU
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:33 +0000
X-Inumbo-ID: 448b44e8-a19d-11ea-a898-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 448b44e8-a19d-11ea-a898-12813bfff9fa;
 Fri, 29 May 2020 11:12:22 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=d/G4aLlUZF4eW/MxiUCbta356VDjjBDxySOEDoS9lss=; b=BmIA9jbzGLcm/VH3qnha2DW0fR
 zJ0WsvOlkZbxwGiCWiEOwZw3h4zDYX/AUrp/Ft/D4zTfHNRT1fShZLUkLkfRHNkPecM5nsm2Jcr0u
 JgkSwaWKu7EF84gl7PIS0OdE/kxJfxVy8wxSQkWyleNE+IqsZn5rSo7gHiP1n/Ja4zZs=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw1-00022M-9w; Fri, 29 May 2020 11:12:21 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw1-0006tM-0n; Fri, 29 May 2020 11:12:21 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 07/15] x86_64/mm: switch to new APIs in paging_init
Date: Fri, 29 May 2020 12:11:51 +0100
Message-Id: <7eb8f68f2202d97062d714d35a8b1d6a972cc623.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Map and unmap pages instead of relying on the direct map.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- use the new alloc_map_clear_xen_pt() helper.
- move the unmap of pl3t up a bit.
- remove the unmaps in the nomem path.
---
 xen/arch/x86/x86_64/mm.c | 36 ++++++++++++++++++++++--------------
 1 file changed, 22 insertions(+), 14 deletions(-)

diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index 243014a119..8877ac7bb7 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -481,6 +481,7 @@ void __init paging_init(void)
     l3_pgentry_t *l3_ro_mpt;
     l2_pgentry_t *pl2e = NULL, *l2_ro_mpt = NULL;
     struct page_info *l1_pg;
+    mfn_t l3_ro_mpt_mfn, l2_ro_mpt_mfn;
 
     /*
      * We setup the L3s for 1:1 mapping if host support memory hotplug
@@ -493,22 +494,23 @@ void __init paging_init(void)
         if ( !(l4e_get_flags(idle_pg_table[l4_table_offset(va)]) &
               _PAGE_PRESENT) )
         {
-            l3_pgentry_t *pl3t = alloc_xen_pagetable();
+            mfn_t l3mfn;
+            l3_pgentry_t *pl3t = alloc_map_clear_xen_pt(&l3mfn);
 
             if ( !pl3t )
                 goto nomem;
-            clear_page(pl3t);
+            UNMAP_DOMAIN_PAGE(pl3t);
             l4e_write(&idle_pg_table[l4_table_offset(va)],
-                      l4e_from_paddr(__pa(pl3t), __PAGE_HYPERVISOR_RW));
+                      l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR_RW));
         }
     }
 
     /* Create user-accessible L2 directory to map the MPT for guests. */
-    if ( (l3_ro_mpt = alloc_xen_pagetable()) == NULL )
+    l3_ro_mpt = alloc_map_clear_xen_pt(&l3_ro_mpt_mfn);
+    if ( !l3_ro_mpt )
         goto nomem;
-    clear_page(l3_ro_mpt);
     l4e_write(&idle_pg_table[l4_table_offset(RO_MPT_VIRT_START)],
-              l4e_from_paddr(__pa(l3_ro_mpt), __PAGE_HYPERVISOR_RO | _PAGE_USER));
+              l4e_from_mfn(l3_ro_mpt_mfn, __PAGE_HYPERVISOR_RO | _PAGE_USER));
 
     /*
      * Allocate and map the machine-to-phys table.
@@ -591,12 +593,15 @@ void __init paging_init(void)
         }
         if ( !((unsigned long)pl2e & ~PAGE_MASK) )
         {
-            if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+            UNMAP_DOMAIN_PAGE(l2_ro_mpt);
+
+            l2_ro_mpt = alloc_map_clear_xen_pt(&l2_ro_mpt_mfn);
+            if ( !l2_ro_mpt )
                 goto nomem;
-            clear_page(l2_ro_mpt);
+
             l3e_write(&l3_ro_mpt[l3_table_offset(va)],
-                      l3e_from_paddr(__pa(l2_ro_mpt),
-                                     __PAGE_HYPERVISOR_RO | _PAGE_USER));
+                      l3e_from_mfn(l2_ro_mpt_mfn,
+                                   __PAGE_HYPERVISOR_RO | _PAGE_USER));
             pl2e = l2_ro_mpt;
             ASSERT(!l2_table_offset(va));
         }
@@ -608,13 +613,16 @@ void __init paging_init(void)
     }
 #undef CNT
 #undef MFN
+    UNMAP_DOMAIN_PAGE(l2_ro_mpt);
+    UNMAP_DOMAIN_PAGE(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
-    if ( (l2_ro_mpt = alloc_xen_pagetable()) == NULL )
+    l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+    if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
-    compat_idle_pg_table_l2 = l2_ro_mpt;
-    clear_page(l2_ro_mpt);
-    pl2e = l2_ro_mpt;
+    compat_idle_pg_table_l2 = map_domain_page_global(l2_ro_mpt_mfn);
+    clear_page(compat_idle_pg_table_l2);
+    pl2e = compat_idle_pg_table_l2;
     /* Allocate and map the compatibility mode machine-to-phys table. */
     mpt_size = (mpt_size >> 1) + (1UL << (L2_PAGETABLE_SHIFT - 1));
     if ( mpt_size > RDWR_COMPAT_MPT_VIRT_END - RDWR_COMPAT_MPT_VIRT_START )
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:12:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:12:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jecwK-00051I-VG; Fri, 29 May 2020 11:12:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jecwI-0004za-SZ
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:12:38 +0000
X-Inumbo-ID: 45ca6424-a19d-11ea-a898-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 45ca6424-a19d-11ea-a898-12813bfff9fa;
 Fri, 29 May 2020 11:12:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=3ia5t7aD91zFWrOTJLQvEkhVOgAWqDPMpKLXYMNiGwk=; b=Ra9ikO68KYQYeqAnq+j2Xlsf/g
 8OAD12Ek+gVsoc3GOz+FvWIo8S4hn8CYZgyL9JHdANMaa7oq9vklChdegf9wNqGC4RfumGGAHxPCR
 iI/ncl/Uo4MueYV83culpD2rSxvxD4mDctWWnwpPxVMLWZWobWwRffV83EItHPStoZKI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw3-00022b-QY; Fri, 29 May 2020 11:12:23 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw3-0006tM-Gt; Fri, 29 May 2020 11:12:23 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 09/15] efi: use new page table APIs in copy_mapping
Date: Fri, 29 May 2020 12:11:53 +0100
Message-Id: <0259b645c81ecc3879240e30760b0e7641a2b602.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: julien@xen.org, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

After inspection, ARM does not have alloc_xen_pagetable, so this
function is x86-only, which means it is safe for us to change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- hoist l3 variables out of the loop to avoid repetitive mappings.
---
 xen/common/efi/boot.c | 34 ++++++++++++++++++++++++----------
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index a6f84c945a..2599ae50d2 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -6,6 +6,7 @@
 #include <xen/compile.h>
 #include <xen/ctype.h>
 #include <xen/dmi.h>
+#include <xen/domain_page.h>
 #include <xen/init.h>
 #include <xen/keyhandler.h>
 #include <xen/lib.h>
@@ -1442,29 +1443,42 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
                                                  unsigned long emfn))
 {
     unsigned long next;
+    l3_pgentry_t *l3src = NULL, *l3dst = NULL;
 
     for ( ; mfn < end; mfn = next )
     {
         l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
-        l3_pgentry_t *l3src, *l3dst;
         unsigned long va = (unsigned long)mfn_to_virt(mfn);
 
+        if ( !((mfn << PAGE_SHIFT) & ((1UL << L4_PAGETABLE_SHIFT) - 1)) )
+        {
+            UNMAP_DOMAIN_PAGE(l3src);
+            UNMAP_DOMAIN_PAGE(l3dst);
+        }
         next = mfn + (1UL << (L3_PAGETABLE_SHIFT - PAGE_SHIFT));
         if ( !is_valid(mfn, min(next, end)) )
             continue;
-        if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+        if ( !l3dst )
         {
-            l3dst = alloc_xen_pagetable();
-            BUG_ON(!l3dst);
-            clear_page(l3dst);
-            efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
-                l4e_from_paddr(virt_to_maddr(l3dst), __PAGE_HYPERVISOR);
+            if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
+            {
+                mfn_t l3mfn;
+
+                l3dst = alloc_map_clear_xen_pt(&l3mfn);
+                BUG_ON(!l3dst);
+                efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
+                    l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
+            }
+            else
+                l3dst = map_l3t_from_l4e(l4e);
         }
-        else
-            l3dst = l4e_to_l3e(l4e);
-        l3src = l4e_to_l3e(idle_pg_table[l4_table_offset(va)]);
+        if ( !l3src )
+            l3src = map_l3t_from_l4e(idle_pg_table[l4_table_offset(va)]);
         l3dst[l3_table_offset(mfn << PAGE_SHIFT)] = l3src[l3_table_offset(va)];
     }
+
+    unmap_domain_page(l3src);
+    unmap_domain_page(l3dst);
 }
 
 static bool __init ram_range_valid(unsigned long smfn, unsigned long emfn)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3Q-0005oS-QB; Fri, 29 May 2020 11:20:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3P-0005oN-FM
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:19:59 +0000
X-Inumbo-ID: 5461cc10-a19e-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5461cc10-a19e-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:19:58 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3N-0003xZ-MS; Fri, 29 May 2020 12:19:57 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org,
	Paul Durrant <paul@xen.org>
Subject: [OSSTEST PATCH v2 00/49] Switch to Debian buster (= Debian stable)
Date: Fri, 29 May 2020 12:18:56 +0100
Message-Id: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: committers@xenproject.org, Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This series looks about as ready as it is going to be.  Unfortunately
there are still two issues, each of which cropped up once in my final
formal retest.  See below.

What are people's opinions?  Should I push this to osstest pretest
soon after the Xen code freeze (e.g. after we get the first push after
the freeze)?

The downside would be introducing new low-probability heisenbugs.  But
in my tests it seems that some existing heisenbugs may be abated.
It's hard to say for sure.  The upside is a lower risk that Debian
will break our stuff, and the ability to commission a number of
machines which do not work with stretch (Debian oldstable).

The two issues:

1. In one test, libvirt segfaulted.  This is with osstest's own-built
   tested version of libvirt; osstest's own-built tested version of
   Linux; and osstest's own-built xen.git#master.

   http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-i386-libvirt/info.html
   http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-i386-libvirt/rimava1---var-log-kern.log.gz
   | May 28 21:39:11 rimava1 kernel: [ 1146.326593] libvirtd[16868]: segfault at 820 ip b72535b0 sp abce5090 error 4 in libc-2.28.so[b71f0000+14e000]

2. In one test, the Debian installer hung.  The kernel printed many
   soft lockup messages.  This is with osstest's own dom0 kernel and
   hypervisor, but the 4.19.67-2+deb10u1 kernel from Debian.

   http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm/info.html
   http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-amd64-xl-qemut-debianhvm-i386-xsm/godello1---var-log-xen-qemu-dm-debianhvm.guest.osstest.log.gz

Ian Jackson (49):
  ts-logs-capture: Cope if xl shutdown leaves domain running for a bit
  ts-xen-build-prep: Install rsync
  lvcreate arguments: pass --yes -Z y -W y
  TestSupport: allow more time for apt
  Booting: Use `--' rather than `---' to introduce host cmdline
  di_installcmdline_core: Pass locale on d-i command line
  setupboot_grub2: Drop $submenu variable
  ts-leak-check: Ignore buster's udevd too
  Bodge systemd random seed arrangements
  Debian guests made with xen-tools: Write systemd random seed file
  ts-debian-di-install: Provide guest with more RAM
  Debian: preseed: use priority= alias
  Debian: Specify `priority=critical' rather than locale
  Honour 'LinuxSerialConsole <suite>' host property
  buster: make-hosts-flight: Add to possible suites for hosts flight
  buster: Extend grub2 uefi no install workaround
  buster: ts-host-install: Extend net.ifnames workaround
  buster: Deinstall the "systemd" package
  buster: preseed partman-auto-lvm/guided_size
  buster: ts-host-install: NTP not honoured bug remains
  buster: Extend ARM clock workaround
  buster: Extend guest bootloader workaround
  Honour DebianImageFile_SUITE_ARCH
  buster: Specify DebianImageFile_SUITE_ARCH
  20_linux_xen: Copy Debian buster version into our initramfs area
  20_linux_xen: Adhoc template substitution
  20_linux_xen: Ignore xenpolicy and config files too
  20_linux_xen: Support Xen Security Modules (XSM/FLASK)
  mg-debian-installer-update: support overlay-intramfs-SUITE
  overlay-initrd-buster/sbin/reopen-console: Copy from Debian
  overlay-initrd-buster/sbin/reopen-console: Fix #932416
  buster: chiark-scripts: Install a new version on buster too
  buster: Provide TftpDiVersion
  buster: grub, arm64: extend chainloading workaround
  buster: setupboot_grub2: Note what files exist in /boot
  buster: setupboot_grub2: Handle missing policy file bug
  buster: Extend workaround for dhcpd EROFS bug
  ts-xen-install: Add $ho argument to some_extradebs
  ts-xen-install: Move some_extradebs to Debian.pm
  Debian.pm: Break out standard_extradebs
  Debian.pm: Move standard_extradebs to ts-host-install
  buster: Install own linux-libc-dev package (!)
  setupboot_grub2: Insist on space after directives
  setupboot_grub2: Print line number of entry we are using
  setupboot_grub2: Recognise --nounzip for initramfs
  setupboot_grub2: Copy hv command line from grub to xen.cfg
  setupboot_grub2: Do not boot with XSM policy etc. unless xsm=1
  buster 20_linux_xen: Only load policy in XSM-enabled builds
  buster: Switch to Debian buster as the default suite

 Osstest.pm                                    |   2 +-
 Osstest/Debian.pm                             | 128 +++++--
 Osstest/TestSupport.pm                        |  15 +-
 make-hosts-flight                             |   2 +-
 mfi-common                                    |   9 +
 mg-debian-installer-update                    |  20 ++
 overlay-buster/etc/grub.d/20_linux_xen        | 327 ++++++++++++++++++
 overlay-initrd-buster/sbin/reopen-console     | 126 +++++++
 .../override.conf                             |   3 +
 overlay/usr/local/bin/random-seed-add         |  33 ++
 production-config                             |   5 +
 ts-debian-di-install                          |   2 +-
 ts-debian-fixup                               |   6 +
 ts-host-install                               |   6 +-
 ts-leak-check                                 |   1 +
 ts-logs-capture                               |   1 +
 ts-xen-build-prep                             |   6 +-
 ts-xen-install                                |  41 +--
 18 files changed, 656 insertions(+), 77 deletions(-)
 create mode 100755 overlay-buster/etc/grub.d/20_linux_xen
 create mode 100755 overlay-initrd-buster/sbin/reopen-console
 create mode 100644 overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
 create mode 100755 overlay/usr/local/bin/random-seed-add

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3a-0006SL-Ap; Fri, 29 May 2020 11:20:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3Z-0006SG-CR
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:09 +0000
X-Inumbo-ID: 54a7bb08-a19e-11ea-81bc-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54a7bb08-a19e-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:19:58 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3O-0003xZ-4f; Fri, 29 May 2020 12:19:58 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 02/49] ts-xen-build-prep: Install rsync
Date: Fri, 29 May 2020 12:18:58 +0100
Message-Id: <20200529111945.21394-3-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

osstest uses this for transferring configuration, build artefacts, and
so on.

In Debian stretch and earlier, rsync happened to be pulled in by
something else.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-xen-build-prep | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index e9298d54..8e73f763 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -197,7 +197,7 @@ END
 }
 
 sub prep () {
-    my @packages = qw(mercurial
+    my @packages = qw(mercurial rsync
                       build-essential bin86 bcc iasl bc
                       flex bison cmake
                       libpci-dev libncurses5-dev libssl-dev python-dev
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3V-0006DE-1l; Fri, 29 May 2020 11:20:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3U-00066V-Be
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:04 +0000
X-Inumbo-ID: 5480d47a-a19e-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5480d47a-a19e-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:19:58 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3N-0003xZ-TK; Fri, 29 May 2020 12:19:58 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 01/49] ts-logs-capture: Cope if xl shutdown leaves
 domain running for a bit
Date: Fri, 29 May 2020 12:18:57 +0100
Message-Id: <20200529111945.21394-2-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This seems mostly to affect buster, but I think it could in principle
affect earlier releases too.

In principle it would be nice to fix this bug, and to have a proper
test for it, but a reliable test is hard and an unreliable one is not
useful.  So I guess we are going to have this workaround
indefinitely...

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-logs-capture | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-logs-capture b/ts-logs-capture
index 0320a5a5..d75a2fda 100755
--- a/ts-logs-capture
+++ b/ts-logs-capture
@@ -272,6 +272,7 @@ sub shutdown_guests () {
 		( xl shutdown -a -F -w ; echo y ) &
 	    ) | (
 		read x
+		sleep 10 # xl shutdown is a bit racy :-/
 		xl list | awk '!/^Domain-0 |^Name / {print $2}' \
 		| xargs -t -r -n1 xl destroy ||:
 	    )
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3f-0006U2-IO; Fri, 29 May 2020 11:20:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3e-0006Tk-Cg
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:14 +0000
X-Inumbo-ID: 54c7a0c6-a19e-11ea-81bc-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54c7a0c6-a19e-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:19:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3O-0003xZ-CH; Fri, 29 May 2020 12:19:58 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 03/49] lvcreate arguments: pass --yes -Z y -W y
Date: Fri, 29 May 2020 12:18:59 +0100
Message-Id: <20200529111945.21394-4-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The documentation seems to suggest this is the default, but
empirically it isn't.  In our environment --yes is fine.

I have reported this to Debian as #953183.  Also vaguely related (and
discovered by me at the same time) is #953185.

This came up while trying to get things to work on buster.  I don't
know what has changed.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 2 +-
 ts-xen-build-prep      | 4 ++--
 2 files changed, 3 insertions(+), 3 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 1e7da676..43766ee3 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -935,7 +935,7 @@ sub lv_create ($$$$) {
     my ($ho, $vg, $lv, $mb) = @_;
     my $lvdev = "/dev/$vg/$lv";
     target_cmd_root($ho, "lvremove -f $lvdev ||:");
-    target_cmd_root($ho, "lvcreate -L ${mb}M -n $lv $vg");
+    target_cmd_root($ho, "lvcreate --yes -Z y -W y -L ${mb}M -n $lv $vg");
     target_cmd_root($ho, "dd if=/dev/zero of=$lvdev count=10");
     return $lvdev;
 }
diff --git a/ts-xen-build-prep b/ts-xen-build-prep
index 8e73f763..dabb9921 100755
--- a/ts-xen-build-prep
+++ b/ts-xen-build-prep
@@ -61,7 +61,7 @@ sub determine_vg_lv () {
 sub lvextend_stage1 () {
     target_cmd_root($ho, <<END);
         set -ex; if ! test -f /root/swap_osstest_enabled; then
-            lvcreate -L 10G -n swap_osstest_build $vg ||:
+            lvcreate --yes -Z y -W y -L 10G -n swap_osstest_build $vg ||:
             mkswap /dev/$vg/swap_osstest_build ||:
             swapon /dev/$vg/swap_osstest_build
             touch /root/swap_osstest_enabled
@@ -84,7 +84,7 @@ sub vginfo () {
 
 sub lvcreate () {
     target_cmd_output_root($ho,
-			   "lvdisplay $lv || lvcreate -l 1 -n $lvleaf $vg");
+			   "lvdisplay $lv || lvcreate --yes -Z y -W y -l 1 -n $lvleaf $vg");
 }
 
 sub lvextend1 ($$$) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3j-0006Vh-R5; Fri, 29 May 2020 11:20:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3j-0006VV-Bl
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:19 +0000
X-Inumbo-ID: 54f13c1a-a19e-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54f13c1a-a19e-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:19:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3O-0003xZ-Ik; Fri, 29 May 2020 12:19:58 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 04/49] TestSupport: allow more time for apt
Date: Fri, 29 May 2020 12:19:00 +0100
Message-Id: <20200529111945.21394-5-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Empirically some of these operations can take longer than 30s,
especially with a cold cache.

Note that because of host sharing and our on-host apt lock, the
timeout needs to be the same for every apt operation: a fast operation
could be blocked behind a slow one.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index 43766ee3..f4e9414c 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -637,12 +637,12 @@ sub target_install_packages_nonfree_nonconcurrent ($@) {
     my ($ho, @packages) = @_;
     my $slist= '/etc/apt/sources.list';
     my $xsuites= 'contrib non-free';
-    target_cmd_root($ho, <<END, 30);
+    target_cmd_root($ho, <<END, 300);
     perl -i~ -pe 'next unless m/^deb/; s{ main\$}{\$& $xsuites};' $slist
     apt-get update
 END
     target_run_pkgmanager_install($ho,\@packages);
-    target_cmd_root($ho, <<END, 30);
+    target_cmd_root($ho, <<END, 300);
     mv $slist~ $slist
     apt-get update
 END
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3p-0006YA-7W; Fri, 29 May 2020 11:20:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3o-0006Xq-CZ
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:24 +0000
X-Inumbo-ID: 550ce9f6-a19e-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 550ce9f6-a19e-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:19:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3O-0003xZ-R1; Fri, 29 May 2020 12:19:58 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 05/49] Booting: Use `--' rather than `---' to
 introduce host cmdline
Date: Fri, 29 May 2020 12:19:01 +0100
Message-Id: <20200529111945.21394-6-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Because systemd did something obnoxious, the kernel retaliated in the
game of Core Wars by hiding all arguments before `--' from userspace.
So use `---' instead so that all the arguments remain visible.
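
The split this relies on can be sketched with a toy parser (illustrative
only: the real splitting is done by the kernel and by d-i's user-params
handling, and the sample arguments here are invented):

```python
def split_di_cmdline(cmdline, sep="---"):
    """Split a boot command line into installer arguments and the
    arguments destined for the installed host.

    With `--' as the separator the kernel itself hides the earlier
    arguments from userspace, breaking d-i's user-params; `---' is
    left alone by the kernel and handled by d-i instead.
    """
    words = cmdline.split()
    if sep in words:
        i = words.index(sep)
        return words[:i], words[i + 1:]
    return words, []

di_args, host_args = split_di_cmdline(
    "auto=true priority=critical --- console=ttyS0 earlyprintk=xen")
```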

This, in effect, now applies to host installs the change we had
already made to Debian HVM guests.  See osstest#493b7395
  ts-debian-hvm-install: Use ---, and no longer duplicate $gconsole
and https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=762007
  Kernel command line handling change breaks d-i user-params functionality

This change is fine for all non-ancient versions of Debian, so I have
not made it conditional.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index f4e9414c..ff8103f2 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -2909,7 +2909,7 @@ label overwrite
 	menu label ^Overwrite
 	menu default
 	kernel $kern
-	append $dicmd initrd=$initrd -- $hocmd
+	append $dicmd initrd=$initrd --- $hocmd
 	ipappend $xopts{ipappend}
 	$dtbs
 default overwrite
@@ -2956,7 +2956,7 @@ sub setup_netboot_di_uefi ($$$$$;%) {
 set default=0
 set timeout=5
 menuentry 'overwrite' {
-  linux $kern $dicmd -- $hocmd
+  linux $kern $dicmd --- $hocmd
   initrd $initrd
 }
 END
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3u-0006cT-GV; Fri, 29 May 2020 11:20:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3t-0006bA-Cw
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:29 +0000
X-Inumbo-ID: 552d69f6-a19e-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 552d69f6-a19e-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:19:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3P-0003xZ-1c; Fri, 29 May 2020 12:19:59 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 06/49] di_installcmdline_core: Pass locale on d-i
 command line
Date: Fri, 29 May 2020 12:19:02 +0100
Message-Id: <20200529111945.21394-7-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In buster, d-i wants the locale when setting up the network, ie before
the preseed is loaded.

We leave it in the preseed too because why not.

I think this change should be fine for older versions of Debian.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 6e9d2072..ba975b87 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -699,7 +699,8 @@ sub di_installcmdline_core ($$;@) {
                "hostname=$tho->{Name}",
                "$xopts{PreseedScheme}=$ps_url",
                "netcfg/dhcp_timeout=150",
-               "netcfg/choose_interface=$netcfg_interface"
+               "netcfg/choose_interface=$netcfg_interface",
+               "debian-installer/locale=en_GB",
                );
 
     my $debconf_priority= $xopts{DebconfPriority};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed3z-0006hM-Pa; Fri, 29 May 2020 11:20:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed3y-0006g8-DW
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:34 +0000
X-Inumbo-ID: 554e1a0c-a19e-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 554e1a0c-a19e-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:20:00 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3P-0003xZ-8q; Fri, 29 May 2020 12:19:59 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 07/49] setupboot_grub2: Drop $submenu variable
Date: Fri, 29 May 2020 12:19:03 +0100
Message-Id: <20200529111945.21394-8-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We really only used this to check how many levels deep in { we are.
That can be done by checking $#offsets, which is >0 if we are in a
submenu and not otherwise.  We lose the ability to report the start
line of the submenu, but that's OK.

But as a bonus, we no longer bomb out on nested submenus: previously
the first } would cause $submenu to be undef.  Now we pop from
@offsets and all is fine.

Nested submenus are present in Debian buster.
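
The counter-stack bookkeeping the patch adopts can be sketched in
isolation (an illustrative simplification of setupboot_grub2's parser:
menuentry bodies are ignored, the input is assumed balanced, and the
names are invented):

```python
def scan_submenus(lines):
    """Track submenu nesting with a stack of per-level entry counters.

    offsets[-1] counts entries seen at the current level; a stack
    deeper than one level means we are inside a submenu, which is the
    check the patch performs via $#offsets.
    """
    offsets = [0]
    inside_submenu = []
    for line in lines:
        s = line.strip()
        if s.startswith("submenu"):
            offsets.append(0)    # enter a (possibly nested) submenu
        elif s == "}":
            offsets.pop()        # leave it again...
            offsets[-1] += 1     # ...and count it at the outer level
        inside_submenu.append(len(offsets) > 1)
    return offsets, inside_submenu
```

Because closing a nested submenu just pops one level, a second `}` no
longer confuses the state, which is the behaviour the patch gains.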

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 9 +++------
 1 file changed, 3 insertions(+), 6 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index ba975b87..b8bf67dc 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -452,15 +452,13 @@ sub setupboot_grub2 ($$$$) {
         my @offsets = (0);
         my $entry;
         my $chainentry;
-        my $submenu;
         while (<$f>) {
             next if m/^\s*\#/ || !m/\S/;
             if (m/^\s*\}\s*$/) {
-                die unless $entry || $submenu;
-                if (!$entry && $submenu) {
-                    logm("Met end of a submenu $submenu->{StartLine}..$.. ".
+                die unless $entry || $#offsets;
+                if (!$entry && $#offsets) {
+                    logm("Met end of a submenu at $. (@offsets) ".
                         "Our want kern is $want_kernver");
-                    $submenu= undef;
                     pop @offsets;
                     $offsets[$#offsets]++;
                     next;
@@ -510,7 +508,6 @@ sub setupboot_grub2 ($$$$) {
                 $offsets[$#offsets]++;
             }
             if (m/^\s*submenu\s+[\'\"](.*)[\'\"].*\{\s*$/) {
-                $submenu={ StartLine =>$., MenuEntryPath => join ">", @offsets };
                 push @offsets,(0);
             }
             if (m/^\s*chainloader\s*\/EFI\/osstest\/xen.efi/) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed44-0006l0-33; Fri, 29 May 2020 11:20:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed43-0006kD-DB
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:39 +0000
X-Inumbo-ID: 556c9e32-a19e-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 556c9e32-a19e-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:20:00 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3P-0003xZ-Ea; Fri, 29 May 2020 12:19:59 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 08/49] ts-leak-check: Ignore buster's udevd too
Date: Fri, 29 May 2020 12:19:04 +0100
Message-Id: <20200529111945.21394-9-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

For reasons I don't propose to investigate, on buster udevd shows up
like this:

  2019-11-26 18:13:48 Z LEAKED [process 2633 /lib/systemd/systemd-udevd] process: root      2633  1555  0 18:10 ?        00:00:00 /lib/systemd/systemd-udevd

This does not match our suppression.  Add an additional suppression.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-leak-check | 1 +
 1 file changed, 1 insertion(+)

diff --git a/ts-leak-check b/ts-leak-check
index 41e6245d..f3cca8aa 100755
--- a/ts-leak-check
+++ b/ts-leak-check
@@ -202,6 +202,7 @@ xenstore /vm
 xenstore /libxl
 
 process .* udevd
+process .* .*/systemd-udevd
 process .* /.+/systemd-shim
 
 file /var/run/xenstored/db
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:20:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:20:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jed49-0006od-BY; Fri, 29 May 2020 11:20:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jed48-0006nv-DN
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:20:44 +0000
X-Inumbo-ID: 559fe83c-a19e-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 559fe83c-a19e-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:20:00 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3P-0003xZ-Lq; Fri, 29 May 2020 12:19:59 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 09/49] Bodge systemd random seed arrangements
Date: Fri, 29 May 2020 12:19:05 +0100
Message-Id: <20200529111945.21394-10-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

systemd does not regard the contents of the random seed file as useful
for the purposes of placating the kernel's entropy tracker.  As a
result, the system hangs at boot waiting for entropy.

Fix this by providing a small program which can be used to load a seed
file into /dev/random and also call RNDADDTOENTCNT to add the
appropriate amount to the kernel's counter.

Arrange to run this program instead of
   /lib/systemd/systemd-random-seed load

With systemd the random seed file is in /var/lib/systemd/random-seed
rather than /var/lib/urandom/random-seed.

Also, provide initial contents for this file, via a d-i late_command.

Unfortunately we must hardcode the actual numerical value of
RNDADDTOENTCNT because we don't have a suitable compiler anywhere
nearby.  It seems to have the same value on i386, amd64, armhf and
arm64, our currently supported architectures.
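
For what it's worth, the hardcoded value can be cross-checked against
the asm-generic ioctl encoding those four architectures share, since
RNDADDTOENTCNT is _IOW('R', 0x01, int) (a sketch; the bit layout
follows asm-generic/ioctl.h):

```python
# asm-generic/ioctl.h packs: direction (2 bits) | size (14 bits) |
# type (8 bits) | number (8 bits), high to low.
IOC_WRITE = 1  # _IOC_WRITE

def iow(type_char, nr, size):
    """Python mirror of the kernel's _IOW() macro (asm-generic layout)."""
    return (IOC_WRITE << 30) | (size << 16) | (ord(type_char) << 8) | nr

# RNDADDTOENTCNT is _IOW('R', 0x01, int); with a 4-byte int this
# reproduces the constant hardcoded in random-seed-add.
RNDADDTOENTCNT = iow('R', 0x01, 4)
```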

Thanks to Colin Watson for pointers to the systemd random unit and
Matthew Vernon for instructions on overriding just ExecStart.

I think this change should be a no-op on non-systemd systems.

In principle this is a bug in Debian or in systemd, that ought to be
reported upstream.  However, it has been extensively discussed on
debian-devel and it does not seem that any improvement is likely.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm                             | 18 ++++++++++
 .../override.conf                             |  3 ++
 overlay/usr/local/bin/random-seed-add         | 33 +++++++++++++++++++
 3 files changed, 54 insertions(+)
 create mode 100644 overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
 create mode 100755 overlay/usr/local/bin/random-seed-add

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index b8bf67dc..8ccacc79 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -49,6 +49,7 @@ BEGIN {
                       di_installcmdline_core
                       di_vg_name
                       debian_dhcp_rofs_fix
+		      debian_write_random_seed_command
                       );
     %EXPORT_TAGS = ( );
 
@@ -1087,6 +1088,13 @@ ln -s . /target/boot/boot
 END
     }
 
+    my $cmd = debian_write_random_seed_command('/target');
+    preseed_hook_command($ho, 'late_command', $sfx, <<END);
+#!/bin/sh
+set -ex
+$cmd
+END
+
     $preseed_file .= preseed_hook_cmds();
 
     return create_webfile($ho, "preseed$sfx", $preseed_file);
@@ -1612,4 +1620,14 @@ mv '$script.new' '$script'
 END
 }
 
+sub debian_write_random_seed_command ($) {
+    my ($mountpoint) = @_;
+    my $dir = "$mountpoint/var/lib/systemd";
+    return <<END;
+        umask 077
+        test -d $dir || mkdir -m 0755 $dir
+        dd if=/dev/urandom of=$dir/random-seed bs=1k count=1
+END
+}
+
 1;
diff --git a/overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf b/overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
new file mode 100644
index 00000000..f6cc0f84
--- /dev/null
+++ b/overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
@@ -0,0 +1,3 @@
+[Service]
+ExecStart=
+ExecStart=/usr/local/bin/random-seed-add /var/lib/systemd/random-seed
diff --git a/overlay/usr/local/bin/random-seed-add b/overlay/usr/local/bin/random-seed-add
new file mode 100755
index 00000000..89e75c4d
--- /dev/null
+++ b/overlay/usr/local/bin/random-seed-add
@@ -0,0 +1,33 @@
+#!/usr/bin/perl -w
+use strict;
+
+open R, '>', '/dev/random' or die "open /dev/random: $!\n";
+R->autoflush(1);
+
+sub rndaddtoentcnt ($) {
+    my ($bits) = @_;
+    my $x = pack 'L', $bits;
+    my $r = ioctl R, 0x40045201, $x;
+    defined $r or die "RNDADDTOENTCNT: $!\n";
+}
+
+sub process_stdin ($) {
+    my ($f) = @_;
+    my $got = read STDIN, $_, 512;
+    defined $got or die "read $f: $!\n";
+    return if !$got;
+    print R $_ or die "write /dev/random: $!\n";
+    my $bits = length($_) * 8;
+    rndaddtoentcnt($bits);
+}
+
+if (!@ARGV) {
+    process_stdin('stdin');
+} else {
+    die "no options supported\n" if $ARGV[0] =~ m/^\-/;
+    foreach my $f (@ARGV) {
+        open STDIN, '<', $f or die "open for reading $f: $!\n";
+        process_stdin($f);
+    }
+}
+
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:26:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:26:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedA3-0007XY-1e; Fri, 29 May 2020 11:26:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K2ub=7L=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jedA1-0007XT-Ht
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:26:49 +0000
X-Inumbo-ID: 48ff5ee0-a19f-11ea-9947-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 48ff5ee0-a19f-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:26:48 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 6grXn+OLhyh93kdoxBZnR91t4WZgnmX5+ICqZyoFk7YiiaFGEyvFonYrYct8qI6FERMJAefL0K
 Jnvk+LJMhlT0Ai7FtoNovy7DVAHiq2elTwjwmg2FXNmnHVta4R5hTGjycK2+nmkZ8W3PH42p4Z
 N/Ao0L3xv9QBaMxb6jtXg4Tj4qOoc3b1iFKnRPxms1XYu8PH+Njm8j/S0IFMgeE3NRh3WfLv2q
 ARHEpXgpEkrBIou5vcvGfokc2nB2fcflNT94yDV7WuuHeRQvNLH7ixkJ/glOwtCTaPmcqJeLNR
 QtM=
X-SBRS: 2.7
X-MesageID: 19001746
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="19001746"
From: George Dunlap <George.Dunlap@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Topic: Xen XSM/FLASK policy, grub defaults, etc.
Thread-Index: AQHWND0/xSJFUOR5YU28XikgbPuX46i7+DWAgALLwwCAAANFAIAABtKA
Date: Fri, 29 May 2020 11:26:45 +0000
Message-ID: <96F32637-E410-4EC8-937A-CFC8BE724352@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
 <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
In-Reply-To: <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <28F1714F0E393C40ABF7D25F43EB4578@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien
 Grall <julien@xen.org>, "cjwatson@debian.org" <cjwatson@debian.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <Ian.Jackson@citrix.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

DQoNCj4gT24gTWF5IDI5LCAyMDIwLCBhdCAxMjowMiBQTSwgSmFuIEJldWxpY2ggPGpiZXVsaWNo
QHN1c2UuY29tPiB3cm90ZToNCj4gDQo+IE9uIDI5LjA1LjIwMjAgMTI6NTAsIElhbiBKYWNrc29u
IHdyb3RlOg0KPj4gR2VvcmdlIER1bmxhcCB3cml0ZXMgKCJSZTogWGVuIFhTTS9GTEFTSyBwb2xp
Y3ksIGdydWIgZGVmYXVsdHMsIGV0Yy4iKToNCj4+Pj4gT24gTWF5IDI3LCAyMDIwLCBhdCA0OjQx
IFBNLCBJYW4gSmFja3NvbiA8aWFuLmphY2tzb25AY2l0cml4LmNvbT4gd3JvdGU6DQo+Pj4+IDMu
IEZhaWxpbmcgdGhhdCwgWGVuIHNob3VsZCBwcm92aWRlIHNvbWUgb3RoZXIgbWVjaGFuaXNtIHdo
aWNoIHdvdWxkDQo+Pj4+IGVuYWJsZSBzb21ldGhpbmcgbGlrZSB1cGRhdGUtZ3J1YiB0byBkZXRl
cm1pbmUgd2hldGhlciBhIHBhcnRpY3VsYXINCj4+Pj4gaHlwZXJ2aXNvciBjYW4gc2Vuc2libHkg
YmUgcnVuIHdpdGggYSBwb2xpY3kgZmlsZSBhbmQgZmxhc2s9ZW5mb3JjaW5nLg0KPj4+IA0KPj4+
IFNvIHlvdSB3YW50IHVwZGF0ZS1ncnViIHRvIGNoZWNrIHdoZXRoZXIgKnRoZSBYZW4gYmluYXJ5
IGl04oCZcyBjcmVhdGluZyBlbnRyaWVzIGZvciogaGFzIEZMQVNLIGVuYWJsZWQuICBXZSBnZW5l
cmFsbHkgaW5jbHVkZSB0aGUgWGVuIGNvbmZpZyB1c2VkIHRvIGJ1aWxkIHRoZSBoeXBlcnZpc29y
IOKAlCBjb3VsZCB3ZSBoYXZlIGl0IGNoZWNrIGZvciBDT05GSUdfWFNNX0ZMQVNLPw0KPj4gDQo+
PiBUaGF0IHdvdWxkIGJlIGEgcG9zc2liaWxpdHkuICBJbmNsdWRpbmcga2VybmVsIGNvbmZpZ3Mg
aGFzIGdvbmUgb3V0IG9mDQo+PiBmYXNoaW9uIGJ1dCBJIHRoaW5rIG1vc3QgZGlzdHJvcyBzaGlw
IHRoZW0uDQo+PiANCj4+IEFyZSB3ZSBjb25maWRlbnQgdGhhdCB0aGlzIGNvbmZpZyBuYW1lIHdp
bGwgcmVtYWluIHN0YWJsZSA/DQo+IA0KPiBXZWxsLCBpZiBpdCdzIHRvIGJlIHVzZWQgbGlrZSB0
aGlzLCB0aGVuIHdlJ2xsIGhhdmUgdG8ga2VlcCBpdA0KPiBzdGFibGUgaWYgYXQgYWxsIHBvc3Np
YmxlLiBCdXQgdGhhdCdzIHRoZSByZWFzb24gd2h5IEkgZGlzbGlrZQ0KPiB0aGUgLmNvbmZpZyBn
cmVwLWluZyBhcHByb2FjaCAobm90IGp1c3QgZm9yIFhlbiwgYWxzbyBmb3INCj4gTGludXgpLiBJ
dCB3b3VsZCBpbW8gYmUgYmV0dGVyIGlmIHRoZSBiaW5hcnkgaW5jbHVkZWQgc29tZXRoaW5nDQo+
IHRoYXQgY2FuIGJlIHF1ZXJpZWQuIFN1Y2ggYSAic29tZXRoaW5nIiBpcyB0aGVuIG11Y2ggbW9y
ZQ0KPiBsb2dpY2FsIHRvIGtlZXAgc3RhYmxlLCBpbW8uIFRoaXMgInNvbWV0aGluZyIgY291bGQg
YmUgYW4gRUxGDQo+IG5vdGUsIGZvciBleGFtcGxlIChhc3N1bWluZyBhIHNpbWlsYXIgcHJvYmxl
bSB0byB0aGUgb25lIGhlcmUNCj4gZG9lc24ndCBleGlzdCBmb3IgeGVuLmVmaSwgb3IgZWxzZSB3
ZSdkIG5lZWQgdG8gZmluZCBhIHNvbHV0aW9uDQo+IHRoZXJlLCB0b28pLg0KDQpJIHRoaW5rIGFu
IGVsZiBub3RlIG9uIHRoZSBiaW5hcnkgd291bGQgYmUgbmljZTsgYnV0IGl0IHdvbuKAmXQgaGVs
cCB1bnRpbCBhbGwgdGhlIGRpc3Ryb3MgcGljayB1cCBYZW4gNC4xNS4NCg0KV2hpY2ggaXNu4oCZ
dCB0byBzYXkgd2Ugc2hvdWxkbuKAmXQgZG8gaXQ7IGJ1dCBpdCBtaWdodCBiZSBuaWNlIHRvIGFs
c28gaGF2ZSBhbiBpbnRlcm1lZGlhdGUgc29sdXRpb24gdGhhdCB3b3JrcyByaWdodCBub3csIGV2
ZW4gaWYgaXTigJlzIG5vdCBvcHRpbWFsLg0KDQogLUdlb3JnZQ==


From xen-devel-bounces@lists.xenproject.org Fri May 29 11:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedCg-0007et-N2; Fri, 29 May 2020 11:29:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jedCf-0007ei-4O
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:29:33 +0000
X-Inumbo-ID: aa5d526e-a19f-11ea-a89b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa5d526e-a19f-11ea-a89b-12813bfff9fa;
 Fri, 29 May 2020 11:29:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=kou7BUVm2zurtDNUWeIELcG3/VI2PHL1zpdI7tC6pEk=; b=N4AsY0nMtOb+/Lbs2sc4xyA284
 nXSPCsgX1zRPXgYZUXVU6UIGKZdq0XLMI7SHanrc3TNE6LbEg69DMypvpSywHULWjNYSftA0KoFui
 W0Avo0KQ4Xu386HhGfrkIn59+YbEgBJEMaZqPdxjjjZWlDzXlftAGwHs6YD1Zk3R53MM=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jedCd-0002T8-KT; Fri, 29 May 2020 11:29:31 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw6-0006tM-CY; Fri, 29 May 2020 11:12:26 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 11/15] x86/smpboot: add exit path for clone_mapping()
Date: Fri, 29 May 2020 12:11:55 +0100
Message-Id: <b1af8f7318ef42ae27daf40668311f53dc25591d.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

We will soon need to clean up page table mappings in the exit path.

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- edit commit message.
- begin with rc = 0 and set it to -ENOMEM ahead of if().
---
 xen/arch/x86/smpboot.c | 16 +++++++++++-----
 1 file changed, 11 insertions(+), 5 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 170ab24e66..43abf2a332 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -674,6 +674,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     l3_pgentry_t *pl3e;
     l2_pgentry_t *pl2e;
     l1_pgentry_t *pl1e;
+    int rc = 0;
 
     /*
      * Sanity check 'linear'.  We only allow cloning from the Xen virtual
@@ -714,7 +715,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
             pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
-                return 0;
+                goto out;
             pfn = l1e_get_pfn(*pl1e);
         }
     }
@@ -722,8 +723,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
         pl3e = alloc_xen_pagetable();
+        rc = -ENOMEM;
         if ( !pl3e )
-            return -ENOMEM;
+            goto out;
         clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
                   l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
@@ -736,8 +738,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
         pl2e = alloc_xen_pagetable();
+        rc = -ENOMEM;
         if ( !pl2e )
-            return -ENOMEM;
+            goto out;
         clear_page(pl2e);
         l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
     }
@@ -752,8 +755,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
         pl1e = alloc_xen_pagetable();
+        rc = -ENOMEM;
         if ( !pl1e )
-            return -ENOMEM;
+            goto out;
         clear_page(pl1e);
         l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
     }
@@ -774,7 +778,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     else
         l1e_write(pl1e, l1e_from_pfn(pfn, flags));
 
-    return 0;
+    rc = 0;
+ out:
+    return rc;
 }
 
 DEFINE_PER_CPU(root_pgentry_t *, root_pgt);
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:29:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:29:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedCg-0007en-FV; Fri, 29 May 2020 11:29:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jedCe-0007ed-Hd
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:29:32 +0000
X-Inumbo-ID: aa6b1804-a19f-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa6b1804-a19f-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:29:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=bP1uKRBWvNfAhTGPUck4uG79xo7OG/LNneHRpiyKC/0=; b=xyaUD2OIffjkLfuO3lRQ14ACFm
 VuaS/eRgCyZdjHnIQouJqgnHCxysLHHw6Yoaknl1isO5T7b43p26YnGSW/nBar98wL6f/4D5ZamlE
 KoR9zAWDj2kf+OITgZaVEqf1UzGFgZdCuqmzYMWliNoyysthvyi3N7qyy+EFbzl8HEE4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jedCd-0002TC-OM; Fri, 29 May 2020 11:29:31 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw7-0006tM-S2; Fri, 29 May 2020 11:12:28 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 12/15] x86/smpboot: switch clone_mapping() to new APIs
Date: Fri, 29 May 2020 12:11:56 +0100
Message-Id: <2c1d26b0c7fc681d291adc50f65f77922f10f9d2.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- change patch title
- remove initialiser of pl3e.
- combine the initialisation of pl3e into a single assignment.
- use the new alloc_map_clear() helper.
- use the normal map_domain_page() in the error path.
---
 xen/arch/x86/smpboot.c | 44 ++++++++++++++++++++++++++----------------
 1 file changed, 27 insertions(+), 17 deletions(-)

diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 43abf2a332..186211ccc9 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -672,8 +672,8 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     unsigned long linear = (unsigned long)ptr, pfn;
     unsigned int flags;
     l3_pgentry_t *pl3e;
-    l2_pgentry_t *pl2e;
-    l1_pgentry_t *pl1e;
+    l2_pgentry_t *pl2e = NULL;
+    l1_pgentry_t *pl1e = NULL;
     int rc = 0;
 
     /*
@@ -688,7 +688,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
          (linear >= XEN_VIRT_END && linear < DIRECTMAP_VIRT_START) )
         return -EINVAL;
 
-    pl3e = l4e_to_l3e(idle_pg_table[root_table_offset(linear)]) +
+    pl3e = map_l3t_from_l4e(idle_pg_table[root_table_offset(linear)]) +
         l3_table_offset(linear);
 
     flags = l3e_get_flags(*pl3e);
@@ -701,7 +701,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
     }
     else
     {
-        pl2e = l3e_to_l2e(*pl3e) + l2_table_offset(linear);
+        pl2e = map_l2t_from_l3e(*pl3e) + l2_table_offset(linear);
         flags = l2e_get_flags(*pl2e);
         ASSERT(flags & _PAGE_PRESENT);
         if ( flags & _PAGE_PSE )
@@ -712,7 +712,7 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
         else
         {
-            pl1e = l2e_to_l1e(*pl2e) + l1_table_offset(linear);
+            pl1e = map_l1t_from_l2e(*pl2e) + l1_table_offset(linear);
             flags = l1e_get_flags(*pl1e);
             if ( !(flags & _PAGE_PRESENT) )
                 goto out;
@@ -720,51 +720,58 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
         }
     }
 
+    UNMAP_DOMAIN_PAGE(pl1e);
+    UNMAP_DOMAIN_PAGE(pl2e);
+    UNMAP_DOMAIN_PAGE(pl3e);
+
     if ( !(root_get_flags(rpt[root_table_offset(linear)]) & _PAGE_PRESENT) )
     {
-        pl3e = alloc_xen_pagetable();
+        mfn_t l3mfn;
+
+        pl3e = alloc_map_clear_xen_pt(&l3mfn);
         rc = -ENOMEM;
         if ( !pl3e )
             goto out;
-        clear_page(pl3e);
         l4e_write(&rpt[root_table_offset(linear)],
-                  l4e_from_paddr(__pa(pl3e), __PAGE_HYPERVISOR));
+                  l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR));
     }
     else
-        pl3e = l4e_to_l3e(rpt[root_table_offset(linear)]);
+        pl3e = map_l3t_from_l4e(rpt[root_table_offset(linear)]);
 
     pl3e += l3_table_offset(linear);
 
     if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
     {
-        pl2e = alloc_xen_pagetable();
+        mfn_t l2mfn;
+
+        pl2e = alloc_map_clear_xen_pt(&l2mfn);
         rc = -ENOMEM;
         if ( !pl2e )
             goto out;
-        clear_page(pl2e);
-        l3e_write(pl3e, l3e_from_paddr(__pa(pl2e), __PAGE_HYPERVISOR));
+        l3e_write(pl3e, l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l3e_get_flags(*pl3e) & _PAGE_PSE));
-        pl2e = l3e_to_l2e(*pl3e);
+        pl2e = map_l2t_from_l3e(*pl3e);
     }
 
     pl2e += l2_table_offset(linear);
 
     if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
     {
-        pl1e = alloc_xen_pagetable();
+        mfn_t l1mfn;
+
+        pl1e = alloc_map_clear_xen_pt(&l1mfn);
         rc = -ENOMEM;
         if ( !pl1e )
             goto out;
-        clear_page(pl1e);
-        l2e_write(pl2e, l2e_from_paddr(__pa(pl1e), __PAGE_HYPERVISOR));
+        l2e_write(pl2e, l2e_from_mfn(l1mfn, __PAGE_HYPERVISOR));
     }
     else
     {
         ASSERT(!(l2e_get_flags(*pl2e) & _PAGE_PSE));
-        pl1e = l2e_to_l1e(*pl2e);
+        pl1e = map_l1t_from_l2e(*pl2e);
     }
 
     pl1e += l1_table_offset(linear);
@@ -780,6 +787,9 @@ static int clone_mapping(const void *ptr, root_pgentry_t *rpt)
 
     rc = 0;
  out:
+    unmap_domain_page(pl1e);
+    unmap_domain_page(pl2e);
+    unmap_domain_page(pl3e);
     return rc;
 }
 
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:29:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:29:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedCk-0007fp-1y; Fri, 29 May 2020 11:29:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jedCj-0007fZ-He
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:29:37 +0000
X-Inumbo-ID: aa7f56ca-a19f-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aa7f56ca-a19f-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:29:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=YEyP3e+saUdHL++my7j7Pco1QGzdKCrNuWAbFZKB+YE=; b=Hg5LjaMQaMU/5alHCjUNBW7F+B
 hF58DQTvFPakeGUaZ7p4xWQLjenzZepZ/PuKHRBfHRwgu1wBOpcH5a2L3CvoNemSTLjGT0Cauq27i
 9kXu+IyZTmStxI/LWBrvBJD22cRE4IvRf9aLGoYsIQ0UX1liBHttDaGcbli7LCi2gx7o=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jedCd-0002TA-MX; Fri, 29 May 2020 11:29:31 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecwC-0006tM-5z; Fri, 29 May 2020 11:12:32 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 15/15] x86/mm: drop _new suffix for page table APIs
Date: Fri, 29 May 2020 12:11:59 +0100
Message-Id: <5f4c9dddc7f34228d348d3d445e754fe0b59d269.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

No functional change.

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Acked-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm.c        | 44 ++++++++++++++++++++--------------------
 xen/arch/x86/smpboot.c   |  6 +++---
 xen/arch/x86/x86_64/mm.c |  2 +-
 xen/include/asm-x86/mm.h |  4 ++--
 4 files changed, 28 insertions(+), 28 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 16f1aa3344..bf35a1a998 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -356,7 +356,7 @@ void __init arch_init_memory(void)
             ASSERT(root_pgt_pv_xen_slots < ROOT_PAGETABLE_PV_XEN_SLOTS);
             if ( l4_table_offset(split_va) == l4_table_offset(split_va - 1) )
             {
-                mfn_t l3mfn = alloc_xen_pagetable_new();
+                mfn_t l3mfn = alloc_xen_pagetable();
 
                 if ( !mfn_eq(l3mfn, INVALID_MFN) )
                 {
@@ -4914,7 +4914,7 @@ int mmcfg_intercept_write(
  * them. The caller must check whether the allocation has succeeded, and only
  * pass valid MFNs to map_domain_page().
  */
-mfn_t alloc_xen_pagetable_new(void)
+mfn_t alloc_xen_pagetable(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
@@ -4929,7 +4929,7 @@ mfn_t alloc_xen_pagetable_new(void)
 }
 
 /* mfn can be INVALID_MFN */
-void free_xen_pagetable_new(mfn_t mfn)
+void free_xen_pagetable(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
         free_domheap_page(mfn_to_page(mfn));
@@ -4937,7 +4937,7 @@ void free_xen_pagetable_new(mfn_t mfn)
 
 void *alloc_map_clear_xen_pt(mfn_t *pmfn)
 {
-    mfn_t mfn = alloc_xen_pagetable_new();
+    mfn_t mfn = alloc_xen_pagetable();
     void *ret;
 
     if ( mfn_eq(mfn, INVALID_MFN) )
@@ -4983,7 +4983,7 @@ static l3_pgentry_t *virt_to_xen_l3e(unsigned long v)
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        free_xen_pagetable_new(l3mfn);
+        free_xen_pagetable(l3mfn);
     }
 
     return map_l3t_from_l4e(*pl4e) + l3_table_offset(v);
@@ -5018,7 +5018,7 @@ static l2_pgentry_t *virt_to_xen_l2e(unsigned long v)
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        free_xen_pagetable_new(l2mfn);
+        free_xen_pagetable(l2mfn);
     }
 
     BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
@@ -5057,7 +5057,7 @@ l1_pgentry_t *virt_to_xen_l1e(unsigned long v)
         }
         if ( locking )
             spin_unlock(&map_pgdir_lock);
-        free_xen_pagetable_new(l1mfn);
+        free_xen_pagetable(l1mfn);
     }
 
     BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
@@ -5166,10 +5166,10 @@ int map_pages_to_xen(
                         ol2e = l2t[i];
                         if ( (l2e_get_flags(ol2e) & _PAGE_PRESENT) &&
                              !(l2e_get_flags(ol2e) & _PAGE_PSE) )
-                            free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                            free_xen_pagetable(l2e_get_mfn(ol2e));
                     }
                     unmap_domain_page(l2t);
-                    free_xen_pagetable_new(l3e_get_mfn(ol3e));
+                    free_xen_pagetable(l3e_get_mfn(ol3e));
                 }
             }
 
@@ -5208,7 +5208,7 @@ int map_pages_to_xen(
                 continue;
             }
 
-            l2mfn = alloc_xen_pagetable_new();
+            l2mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
@@ -5236,7 +5236,7 @@ int map_pages_to_xen(
                 spin_unlock(&map_pgdir_lock);
             flush_area(virt, flush_flags);
 
-            free_xen_pagetable_new(l2mfn);
+            free_xen_pagetable(l2mfn);
         }
 
         pl2e = virt_to_xen_l2e(virt);
@@ -5270,7 +5270,7 @@ int map_pages_to_xen(
                         flush_flags(l1e_get_flags(l1t[i]));
                     flush_area(virt, flush_flags);
                     unmap_domain_page(l1t);
-                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                    free_xen_pagetable(l2e_get_mfn(ol2e));
                 }
             }
 
@@ -5315,7 +5315,7 @@ int map_pages_to_xen(
                     goto check_l3;
                 }
 
-                l1mfn = alloc_xen_pagetable_new();
+                l1mfn = alloc_xen_pagetable();
                 if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
 
@@ -5342,7 +5342,7 @@ int map_pages_to_xen(
                     spin_unlock(&map_pgdir_lock);
                 flush_area(virt, flush_flags);
 
-                free_xen_pagetable_new(l1mfn);
+                free_xen_pagetable(l1mfn);
             }
 
             pl1e  = map_l1t_from_l2e(*pl2e) + l1_table_offset(virt);
@@ -5408,7 +5408,7 @@ int map_pages_to_xen(
                     flush_area(virt - PAGE_SIZE,
                                FLUSH_TLB_GLOBAL |
                                FLUSH_ORDER(PAGETABLE_ORDER));
-                    free_xen_pagetable_new(l2e_get_mfn(ol2e));
+                    free_xen_pagetable(l2e_get_mfn(ol2e));
                 }
                 else if ( locking )
                     spin_unlock(&map_pgdir_lock);
@@ -5459,7 +5459,7 @@ int map_pages_to_xen(
                 flush_area(virt - PAGE_SIZE,
                            FLUSH_TLB_GLOBAL |
                            FLUSH_ORDER(2*PAGETABLE_ORDER));
-                free_xen_pagetable_new(l3e_get_mfn(ol3e));
+                free_xen_pagetable(l3e_get_mfn(ol3e));
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5548,7 +5548,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             }
 
             /* PAGE1GB: shatter the superpage and fall through. */
-            l2mfn = alloc_xen_pagetable_new();
+            l2mfn = alloc_xen_pagetable();
             if ( mfn_eq(l2mfn, INVALID_MFN) )
                 goto out;
 
@@ -5572,7 +5572,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             if ( locking )
                 spin_unlock(&map_pgdir_lock);
 
-            free_xen_pagetable_new(l2mfn);
+            free_xen_pagetable(l2mfn);
         }
 
         /*
@@ -5608,7 +5608,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
             {
                 l1_pgentry_t *l1t;
                 /* PSE: shatter the superpage and try again. */
-                mfn_t l1mfn = alloc_xen_pagetable_new();
+                mfn_t l1mfn = alloc_xen_pagetable();
 
                 if ( mfn_eq(l1mfn, INVALID_MFN) )
                     goto out;
@@ -5632,7 +5632,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
 
-                free_xen_pagetable_new(l1mfn);
+                free_xen_pagetable(l1mfn);
             }
         }
         else
@@ -5699,7 +5699,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l1mfn);
+                free_xen_pagetable(l1mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
@@ -5744,7 +5744,7 @@ int modify_xen_mappings(unsigned long s, unsigned long e, unsigned int nf)
                 if ( locking )
                     spin_unlock(&map_pgdir_lock);
                 flush_area(NULL, FLUSH_TLB_GLOBAL); /* flush before free */
-                free_xen_pagetable_new(l2mfn);
+                free_xen_pagetable(l2mfn);
             }
             else if ( locking )
                 spin_unlock(&map_pgdir_lock);
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 9cdf198fd6..a20540e36a 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -901,15 +901,15 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
                     continue;
 
                 ASSERT(!(l2e_get_flags(l2t[i2]) & _PAGE_PSE));
-                free_xen_pagetable_new(l2e_get_mfn(l2t[i2]));
+                free_xen_pagetable(l2e_get_mfn(l2t[i2]));
             }
 
             unmap_domain_page(l2t);
-            free_xen_pagetable_new(l2mfn);
+            free_xen_pagetable(l2mfn);
         }
 
         unmap_domain_page(l3t);
-        free_xen_pagetable_new(l3mfn);
+        free_xen_pagetable(l3mfn);
     }
 
     free_xenheap_page(rpt);
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index cfc3de9091..91d7b24da8 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -621,7 +621,7 @@ void __init paging_init(void)
     UNMAP_DOMAIN_PAGE(l3_ro_mpt);
 
     /* Create user-accessible L2 directory to map the MPT for compat guests. */
-    l2_ro_mpt_mfn = alloc_xen_pagetable_new();
+    l2_ro_mpt_mfn = alloc_xen_pagetable();
     if ( mfn_eq(l2_ro_mpt_mfn, INVALID_MFN) )
         goto nomem;
     compat_idle_pg_table_l2 = map_domain_page_global(l2_ro_mpt_mfn);
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 2ac0cdab83..482fbd9d39 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -583,8 +583,8 @@ int vcpu_destroy_pagetables(struct vcpu *);
 void *do_page_walk(struct vcpu *v, unsigned long addr);
 
 /* Allocator functions for Xen pagetables. */
-mfn_t alloc_xen_pagetable_new(void);
-void free_xen_pagetable_new(mfn_t mfn);
+mfn_t alloc_xen_pagetable(void);
+void free_xen_pagetable(mfn_t mfn);
 void *alloc_map_clear_xen_pt(mfn_t *pmfn);
 
 l1_pgentry_t *virt_to_xen_l1e(unsigned long v);
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:29:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:29:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedCl-0007gU-Ao; Fri, 29 May 2020 11:29:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jedCj-0007fk-UZ
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:29:37 +0000
X-Inumbo-ID: aa667d9f-a19f-11ea-a89b-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aa667d9f-a19f-11ea-a89b-12813bfff9fa;
 Fri, 29 May 2020 11:29:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=AOPXuyrnBetpykZYb1JYdvtXpR6UThNeTDJ+i8B83m8=; b=qQy9KmhP/H5PFvOC5faAq3eGBz
 fpHmtMR5Xw5EKSHya7cEKudMqfgFzrS3XJl3JPR5fieOiebTyWS67dbK79AplpXYTLxQXDr1W7Ru2
 ZUnfBon1Kx8+9aQ4fWkEhhuK6k/8z84iyjOKZNGixLRmVSr49iOZYqf9hlQPVli8i2tY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jedCd-0002TM-Sy; Fri, 29 May 2020 11:29:31 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw4-0006tM-V1; Fri, 29 May 2020 11:12:25 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 10/15] efi: switch to new APIs in EFI code
Date: Fri, 29 May 2020 12:11:54 +0100
Message-Id: <586cb3db63838c5eb10822cdd4efec999e886f02.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Wei Liu <wei.liu2@citrix.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>

---
Changed in v7:
- add blank line after declaration.
- rename efi_l4_pgtable into efi_l4t.
- pass the mapped efi_l4t to copy_mapping() instead of mapping it again.
- use the alloc_map_clear_xen_pt() API.
- unmap pl3e, pl2e, l1t earlier.
---
 xen/arch/x86/efi/runtime.h | 13 ++++++---
 xen/common/efi/boot.c      | 55 ++++++++++++++++++++++----------------
 xen/common/efi/efi.h       |  3 ++-
 xen/common/efi/runtime.c   |  8 +++---
 4 files changed, 48 insertions(+), 31 deletions(-)

diff --git a/xen/arch/x86/efi/runtime.h b/xen/arch/x86/efi/runtime.h
index d9eb8f5c27..77866c5f21 100644
--- a/xen/arch/x86/efi/runtime.h
+++ b/xen/arch/x86/efi/runtime.h
@@ -1,12 +1,19 @@
+#include <xen/domain_page.h>
+#include <xen/mm.h>
 #include <asm/atomic.h>
 #include <asm/mc146818rtc.h>
 
 #ifndef COMPAT
-l4_pgentry_t *__read_mostly efi_l4_pgtable;
+mfn_t __read_mostly efi_l4_mfn = INVALID_MFN_INITIALIZER;
 
 void efi_update_l4_pgtable(unsigned int l4idx, l4_pgentry_t l4e)
 {
-    if ( efi_l4_pgtable )
-        l4e_write(efi_l4_pgtable + l4idx, l4e);
+    if ( !mfn_eq(efi_l4_mfn, INVALID_MFN) )
+    {
+        l4_pgentry_t *efi_l4t = map_domain_page(efi_l4_mfn);
+
+        l4e_write(efi_l4t + l4idx, l4e);
+        unmap_domain_page(efi_l4t);
+    }
 }
 #endif
diff --git a/xen/common/efi/boot.c b/xen/common/efi/boot.c
index 2599ae50d2..74e23fbac8 100644
--- a/xen/common/efi/boot.c
+++ b/xen/common/efi/boot.c
@@ -1440,14 +1440,15 @@ custom_param("efi", parse_efi_param);
 
 static __init void copy_mapping(unsigned long mfn, unsigned long end,
                                 bool (*is_valid)(unsigned long smfn,
-                                                 unsigned long emfn))
+                                                 unsigned long emfn),
+                                l4_pgentry_t *efi_l4t)
 {
     unsigned long next;
     l3_pgentry_t *l3src = NULL, *l3dst = NULL;
 
     for ( ; mfn < end; mfn = next )
     {
-        l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)];
+        l4_pgentry_t l4e = efi_l4t[l4_table_offset(mfn << PAGE_SHIFT)];
         unsigned long va = (unsigned long)mfn_to_virt(mfn);
 
         if ( !((mfn << PAGE_SHIFT) & ((1UL << L4_PAGETABLE_SHIFT) - 1)) )
@@ -1466,7 +1467,7 @@ static __init void copy_mapping(unsigned long mfn, unsigned long end,
 
                 l3dst = alloc_map_clear_xen_pt(&l3mfn);
                 BUG_ON(!l3dst);
-                efi_l4_pgtable[l4_table_offset(mfn << PAGE_SHIFT)] =
+                efi_l4t[l4_table_offset(mfn << PAGE_SHIFT)] =
                     l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
             }
             else
@@ -1499,6 +1500,7 @@ static bool __init rt_range_valid(unsigned long smfn, unsigned long emfn)
 void __init efi_init_memory(void)
 {
     unsigned int i;
+    l4_pgentry_t *efi_l4t;
     struct rt_extra {
         struct rt_extra *next;
         unsigned long smfn, emfn;
@@ -1613,11 +1615,10 @@ void __init efi_init_memory(void)
      * Set up 1:1 page tables for runtime calls. See SetVirtualAddressMap() in
      * efi_exit_boot().
      */
-    efi_l4_pgtable = alloc_xen_pagetable();
-    BUG_ON(!efi_l4_pgtable);
-    clear_page(efi_l4_pgtable);
+    efi_l4t = alloc_map_clear_xen_pt(&efi_l4_mfn);
+    BUG_ON(!efi_l4t);
 
-    copy_mapping(0, max_page, ram_range_valid);
+    copy_mapping(0, max_page, ram_range_valid, efi_l4t);
 
     /* Insert non-RAM runtime mappings inside the direct map. */
     for ( i = 0; i < efi_memmap_size; i += efi_mdesc_size )
@@ -1633,58 +1634,64 @@ void __init efi_init_memory(void)
             copy_mapping(PFN_DOWN(desc->PhysicalStart),
                          PFN_UP(desc->PhysicalStart +
                                 (desc->NumberOfPages << EFI_PAGE_SHIFT)),
-                         rt_range_valid);
+                         rt_range_valid, efi_l4t);
     }
 
     /* Insert non-RAM runtime mappings outside of the direct map. */
     while ( (extra = extra_head) != NULL )
     {
         unsigned long addr = extra->smfn << PAGE_SHIFT;
-        l4_pgentry_t l4e = efi_l4_pgtable[l4_table_offset(addr)];
+        l4_pgentry_t l4e = efi_l4t[l4_table_offset(addr)];
         l3_pgentry_t *pl3e;
         l2_pgentry_t *pl2e;
         l1_pgentry_t *l1t;
 
         if ( !(l4e_get_flags(l4e) & _PAGE_PRESENT) )
         {
-            pl3e = alloc_xen_pagetable();
+            mfn_t l3mfn;
+
+            pl3e = alloc_map_clear_xen_pt(&l3mfn);
             BUG_ON(!pl3e);
-            clear_page(pl3e);
-            efi_l4_pgtable[l4_table_offset(addr)] =
-                l4e_from_paddr(virt_to_maddr(pl3e), __PAGE_HYPERVISOR);
+            efi_l4t[l4_table_offset(addr)] =
+                l4e_from_mfn(l3mfn, __PAGE_HYPERVISOR);
         }
         else
-            pl3e = l4e_to_l3e(l4e);
+            pl3e = map_l3t_from_l4e(l4e);
         pl3e += l3_table_offset(addr);
         if ( !(l3e_get_flags(*pl3e) & _PAGE_PRESENT) )
         {
-            pl2e = alloc_xen_pagetable();
+            mfn_t l2mfn;
+
+            pl2e = alloc_map_clear_xen_pt(&l2mfn);
             BUG_ON(!pl2e);
-            clear_page(pl2e);
-            *pl3e = l3e_from_paddr(virt_to_maddr(pl2e), __PAGE_HYPERVISOR);
+            *pl3e = l3e_from_mfn(l2mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l3e_get_flags(*pl3e) & _PAGE_PSE);
-            pl2e = l3e_to_l2e(*pl3e);
+            pl2e = map_l2t_from_l3e(*pl3e);
         }
+        UNMAP_DOMAIN_PAGE(pl3e);
         pl2e += l2_table_offset(addr);
         if ( !(l2e_get_flags(*pl2e) & _PAGE_PRESENT) )
         {
-            l1t = alloc_xen_pagetable();
+            mfn_t l1mfn;
+
+            l1t = alloc_map_clear_xen_pt(&l1mfn);
             BUG_ON(!l1t);
-            clear_page(l1t);
-            *pl2e = l2e_from_paddr(virt_to_maddr(l1t), __PAGE_HYPERVISOR);
+            *pl2e = l2e_from_mfn(l1mfn, __PAGE_HYPERVISOR);
         }
         else
         {
             BUG_ON(l2e_get_flags(*pl2e) & _PAGE_PSE);
-            l1t = l2e_to_l1e(*pl2e);
+            l1t = map_l1t_from_l2e(*pl2e);
         }
+        UNMAP_DOMAIN_PAGE(pl2e);
         for ( i = l1_table_offset(addr);
               i < L1_PAGETABLE_ENTRIES && extra->smfn < extra->emfn;
               ++i, ++extra->smfn )
             l1t[i] = l1e_from_pfn(extra->smfn, extra->prot);
+        UNMAP_DOMAIN_PAGE(l1t);
 
         if ( extra->smfn == extra->emfn )
         {
@@ -1696,6 +1703,8 @@ void __init efi_init_memory(void)
     /* Insert Xen mappings. */
     for ( i = l4_table_offset(HYPERVISOR_VIRT_START);
           i < l4_table_offset(DIRECTMAP_VIRT_END); ++i )
-        efi_l4_pgtable[i] = idle_pg_table[i];
+        efi_l4t[i] = idle_pg_table[i];
+
+    unmap_domain_page(efi_l4t);
 }
 #endif
diff --git a/xen/common/efi/efi.h b/xen/common/efi/efi.h
index 2e38d05f3d..e364bae3e0 100644
--- a/xen/common/efi/efi.h
+++ b/xen/common/efi/efi.h
@@ -6,6 +6,7 @@
 #include <efi/eficapsule.h>
 #include <efi/efiapi.h>
 #include <xen/efi.h>
+#include <xen/mm.h>
 #include <xen/spinlock.h>
 #include <asm/page.h>
 
@@ -29,7 +30,7 @@ extern UINTN efi_memmap_size, efi_mdesc_size;
 extern void *efi_memmap;
 
 #ifdef CONFIG_X86
-extern l4_pgentry_t *efi_l4_pgtable;
+extern mfn_t efi_l4_mfn;
 #endif
 
 extern const struct efi_pci_rom *efi_pci_roms;
diff --git a/xen/common/efi/runtime.c b/xen/common/efi/runtime.c
index 95367694b5..375b94229e 100644
--- a/xen/common/efi/runtime.c
+++ b/xen/common/efi/runtime.c
@@ -85,7 +85,7 @@ struct efi_rs_state efi_rs_enter(void)
     static const u32 mxcsr = MXCSR_DEFAULT;
     struct efi_rs_state state = { .cr3 = 0 };
 
-    if ( !efi_l4_pgtable )
+    if ( mfn_eq(efi_l4_mfn, INVALID_MFN) )
         return state;
 
     state.cr3 = read_cr3();
@@ -111,7 +111,7 @@ struct efi_rs_state efi_rs_enter(void)
         lgdt(&gdt_desc);
     }
 
-    switch_cr3_cr4(virt_to_maddr(efi_l4_pgtable), read_cr4());
+    switch_cr3_cr4(mfn_to_maddr(efi_l4_mfn), read_cr4());
 
     return state;
 }
@@ -140,9 +140,9 @@ void efi_rs_leave(struct efi_rs_state *state)
 
 bool efi_rs_using_pgtables(void)
 {
-    return efi_l4_pgtable &&
+    return !mfn_eq(efi_l4_mfn, INVALID_MFN) &&
            (smp_processor_id() == efi_rs_on_cpu) &&
-           (read_cr3() == virt_to_maddr(efi_l4_pgtable));
+           (read_cr3() == mfn_to_maddr(efi_l4_mfn));
 }
 
 unsigned long efi_get_time(void)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:29:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:29:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedCp-0007iS-KH; Fri, 29 May 2020 11:29:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jedCo-0007i1-IT
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:29:42 +0000
X-Inumbo-ID: aaa08e44-a19f-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aaa08e44-a19f-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:29:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2o9JVrYNzYpbIDhDtoJt6aTVVTe+wwF1KohL4X9QeEI=; b=UUmeVUHBq3KFFYf6JCLtTyrDHe
 OZC9zxH5tLQ2pjF700HYmblMl5iNoFMqxydO6LfCdZcP1HWWK3hkwe4dzvYrEDWT4NHmqA7Y/eRcR
 h98KKJDMn3GWQ9t7Aa5KKQg0xE9IJIdWvj79DIwi152wN0tlPkWTqWeh1KoEsMgKBWlc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jedCd-0002TI-RP; Fri, 29 May 2020 11:29:31 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecw9-0006tM-9R; Fri, 29 May 2020 11:12:29 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 13/15] x86/mm: drop old page table APIs
Date: Fri, 29 May 2020 12:11:57 +0100
Message-Id: <cb75e682b8597828b46bcbd79ee3aadeba6ebf53.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

Two sets of old APIs, alloc/free_xen_pagetable() and lXe_to_lYe(), are
now dropped to avoid the dependency on the direct map.

There are two special cases which have not yet been rewritten to use
the new APIs, and thus need special treatment:

rpt in smpboot.c cannot use ephemeral mappings yet. The problem is that
rpt is read and written in context switch code, but the mapping
infrastructure is NOT context-switch-safe, meaning we cannot map rpt in
one domain and unmap it in another. Until the mapping infrastructure
supports context switches, rpt has to be globally mapped.

Also, the lXe_to_lYe() calls during Xen image relocation cannot be
converted into map/unmap pairs. We cannot hold on to mappings while the
mapping infrastructure itself is being relocated! It is enough to remove
the direct map in the second e820 pass, so Xen relocation (which happens
during the first e820 pass) can still use the direct map (<4GiB).

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
Reviewed-by: Jan Beulich <jbeulich@suse.com>
---
 xen/arch/x86/mm.c          | 14 --------------
 xen/arch/x86/setup.c       |  4 ++--
 xen/arch/x86/smpboot.c     |  4 ++--
 xen/include/asm-x86/mm.h   |  2 --
 xen/include/asm-x86/page.h |  5 -----
 5 files changed, 4 insertions(+), 25 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 26694e2f30..38cfa3ce25 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4908,20 +4908,6 @@ int mmcfg_intercept_write(
     return X86EMUL_OKAY;
 }
 
-void *alloc_xen_pagetable(void)
-{
-    mfn_t mfn = alloc_xen_pagetable_new();
-
-    return mfn_eq(mfn, INVALID_MFN) ? NULL : mfn_to_virt(mfn_x(mfn));
-}
-
-void free_xen_pagetable(void *v)
-{
-    mfn_t mfn = v ? virt_to_mfn(v) : INVALID_MFN;
-
-    free_xen_pagetable_new(mfn);
-}
-
 /*
  * For these PTE APIs, the caller must follow the alloc-map-unmap-free
  * lifecycle, which means explicitly mapping the PTE pages before accessing
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 2dec7a3fc6..b08b69ff5d 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1190,7 +1190,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                     continue;
                 *pl4e = l4e_from_intpte(l4e_get_intpte(*pl4e) +
                                         xen_phys_start);
-                pl3e = l4e_to_l3e(*pl4e);
+                pl3e = __va(l4e_get_paddr(*pl4e));
                 for ( j = 0; j < L3_PAGETABLE_ENTRIES; j++, pl3e++ )
                 {
                     /* Not present, 1GB mapping, or already relocated? */
@@ -1200,7 +1200,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
                         continue;
                     *pl3e = l3e_from_intpte(l3e_get_intpte(*pl3e) +
                                             xen_phys_start);
-                    pl2e = l3e_to_l2e(*pl3e);
+                    pl2e = __va(l3e_get_paddr(*pl3e));
                     for ( k = 0; k < L2_PAGETABLE_ENTRIES; k++, pl2e++ )
                     {
                         /* Not present, PSE, or already relocated? */
diff --git a/xen/arch/x86/smpboot.c b/xen/arch/x86/smpboot.c
index 186211ccc9..9cdf198fd6 100644
--- a/xen/arch/x86/smpboot.c
+++ b/xen/arch/x86/smpboot.c
@@ -808,7 +808,7 @@ static int setup_cpu_root_pgt(unsigned int cpu)
     if ( !opt_xpti_hwdom && !opt_xpti_domu )
         return 0;
 
-    rpt = alloc_xen_pagetable();
+    rpt = alloc_xenheap_page();
     if ( !rpt )
         return -ENOMEM;
 
@@ -912,7 +912,7 @@ static void cleanup_cpu_root_pgt(unsigned int cpu)
         free_xen_pagetable_new(l3mfn);
     }
 
-    free_xen_pagetable(rpt);
+    free_xenheap_page(rpt);
 
     /* Also zap the stub mapping for this CPU. */
     if ( stub_linear )
diff --git a/xen/include/asm-x86/mm.h b/xen/include/asm-x86/mm.h
index 42d1a78731..2ac0cdab83 100644
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -583,8 +583,6 @@ int vcpu_destroy_pagetables(struct vcpu *);
 void *do_page_walk(struct vcpu *v, unsigned long addr);
 
 /* Allocator functions for Xen pagetables. */
-void *alloc_xen_pagetable(void);
-void free_xen_pagetable(void *v);
 mfn_t alloc_xen_pagetable_new(void);
 void free_xen_pagetable_new(mfn_t mfn);
 void *alloc_map_clear_xen_pt(mfn_t *pmfn);
diff --git a/xen/include/asm-x86/page.h b/xen/include/asm-x86/page.h
index 3854feb3ea..ec66ad8df6 100644
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -188,11 +188,6 @@ static inline l4_pgentry_t l4e_from_paddr(paddr_t pa, unsigned int flags)
 #define l4e_has_changed(x,y,flags) \
     ( !!(((x).l4 ^ (y).l4) & ((PADDR_MASK&PAGE_MASK)|put_pte_flags(flags))) )
 
-/* Pagetable walking. */
-#define l2e_to_l1e(x)              ((l1_pgentry_t *)__va(l2e_get_paddr(x)))
-#define l3e_to_l2e(x)              ((l2_pgentry_t *)__va(l3e_get_paddr(x)))
-#define l4e_to_l3e(x)              ((l3_pgentry_t *)__va(l4e_get_paddr(x)))
-
 #define map_l1t_from_l2e(x)        (l1_pgentry_t *)map_domain_page(l2e_get_mfn(x))
 #define map_l2t_from_l3e(x)        (l2_pgentry_t *)map_domain_page(l3e_get_mfn(x))
 #define map_l3t_from_l4e(x)        (l3_pgentry_t *)map_domain_page(l4e_get_mfn(x))
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:29:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:29:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedCu-0007l3-1S; Fri, 29 May 2020 11:29:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5BQD=7L=xen.org=hx242@srs-us1.protection.inumbo.net>)
 id 1jedCt-0007ko-I5
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:29:47 +0000
X-Inumbo-ID: aae1726a-a19f-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aae1726a-a19f-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:29:32 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From:
 Sender:Reply-To:MIME-Version:Content-Type:Content-Transfer-Encoding:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=5GpvlNXBQkGOlL6kv5pnM1xMgumu2zgUz3A3ypIp71s=; b=VLBwcgXpIeYRQJEvVFseBIoUc2
 dHcFlJzl2GoeMzSH1IOf1NisE0cQfPnt+zk3Vtj36hsVGCgL/mHhGK4rmqi6D3cWUUDVRZAq73MWZ
 w2j9NeF0Eh+ACevWoFf4FBO9S6f0O7B1OsHTrtVQfsu75izN6AHCHrgy7KxrQHCc9mtI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jedCd-0002TG-Pw; Fri, 29 May 2020 11:29:31 +0000
Received: from 54-240-197-236.amazon.com ([54.240.197.236]
 helo=u1bbd043a57dd5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1jecwA-0006tM-OS; Fri, 29 May 2020 11:12:31 +0000
From: Hongyan Xia <hx242@xen.org>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v7 14/15] x86: switch to use domheap page for page tables
Date: Fri, 29 May 2020 12:11:58 +0100
Message-Id: <85808fae77da535b2997bede8965d22d5c80c5d3.1590750232.git.hongyxia@amazon.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1590750232.git.hongyxia@amazon.com>
References: <cover.1590750232.git.hongyxia@amazon.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, julien@xen.org,
 Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

From: Hongyan Xia <hongyxia@amazon.com>

Signed-off-by: Wei Liu <wei.liu2@citrix.com>
Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
---
 xen/arch/x86/mm.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 38cfa3ce25..16f1aa3344 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4918,10 +4918,11 @@ mfn_t alloc_xen_pagetable_new(void)
 {
     if ( system_state != SYS_STATE_early_boot )
     {
-        void *ptr = alloc_xenheap_page();
 
-        BUG_ON(!hardware_domain && !ptr);
-        return ptr ? virt_to_mfn(ptr) : INVALID_MFN;
+        struct page_info *pg = alloc_domheap_page(NULL, 0);
+
+        BUG_ON(!hardware_domain && !pg);
+        return pg ? page_to_mfn(pg) : INVALID_MFN;
     }
 
     return alloc_boot_pages(1, 1);
@@ -4931,7 +4932,7 @@ mfn_t alloc_xen_pagetable_new(void)
 void free_xen_pagetable_new(mfn_t mfn)
 {
     if ( system_state != SYS_STATE_early_boot && !mfn_eq(mfn, INVALID_MFN) )
-        free_xenheap_page(mfn_to_virt(mfn_x(mfn)));
+        free_domheap_page(mfn_to_page(mfn));
 }
 
 void *alloc_map_clear_xen_pt(mfn_t *pmfn)
-- 
2.24.1.AMZN



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:30:24 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:30:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedDK-0000AV-B2; Fri, 29 May 2020 11:30:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UCxL=7L=oracle.com=daniel.kiper@srs-us1.protection.inumbo.net>)
 id 1jedDJ-00009m-5a
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:30:13 +0000
X-Inumbo-ID: c25ba118-a19f-11ea-9947-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c25ba118-a19f-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:30:12 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04TBRqwW031292;
 Fri, 29 May 2020 11:29:54 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=date : from : to : cc
 : subject : message-id : mime-version : content-type; s=corp-2020-01-29;
 bh=6iJ4GznZqm2hlC858ywesGuHoOqwbI3EjIs9Cz3b8xw=;
 b=lCxs8Ju2uHpWLEkPe2LPs/Zc/PUgNbNqoNOH1xPPrYxdgN+Dqlxj78SXBlz97UIGRkS3
 sfwA4zGEz5LarLDbkUp/FU+MNB1SDgQQPet2LOmUIdQE8Hp2sqpvKwLN+S18j996ZLmm
 ykS5TUe3gq7N5OFcBdOgjyfGSW1xIPKUCKm5lDzieQJ3bsg0S/1jpzJyq7LojDr8SHDZ
 JvQlybZjNvoyKfMQm4RlorKoLDvviWwlJLje7FNdjLrie1et0wDka/p2du91XQXNYETI
 AVJ1C7AL50/tHEDVHoxy7DEVQHkSAPODUOc/m3tZYaWihL71EsPHSAPSlL/ZkgDlVUjC aQ== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 318xbk9wc2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Fri, 29 May 2020 11:29:54 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04TBRfFK184985;
 Fri, 29 May 2020 11:27:54 GMT
Received: from userv0121.oracle.com (userv0121.oracle.com [156.151.31.72])
 by aserp3020.oracle.com with ESMTP id 317j5y74m1-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 29 May 2020 11:27:53 +0000
Received: from abhmp0014.oracle.com (abhmp0014.oracle.com [141.146.116.20])
 by userv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04TBRhSd021678;
 Fri, 29 May 2020 11:27:43 GMT
Received: from tomti.i.net-space.pl (/10.175.161.105)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Fri, 29 May 2020 04:27:43 -0700
Date: Fri, 29 May 2020 13:27:35 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: grub-devel@gnu.org, linux-kernel@vger.kernel.org,
 trenchboot-devel@googlegroups.com, x86@kernel.org,
 xen-devel@lists.xenproject.org
Subject: [BOOTLOADER SPECIFICATION RFC] The bootloader log format for
 TrenchBoot and others
Message-ID: <20200529112735.qln44ds6z7djheof@tomti.i.net-space.pl>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
User-Agent: NeoMutt/20170113 (1.7.2)
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9635
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 spamscore=0 suspectscore=0
 mlxlogscore=999 mlxscore=0 adultscore=0 phishscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005290092
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9635
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999
 spamscore=0 mlxscore=0
 lowpriorityscore=0 priorityscore=1501 phishscore=0 cotscore=-2147483648
 suspectscore=0 bulkscore=0 clxscore=1011 impostorscore=0 malwarescore=0
 adultscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005290092
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: michal.zygowski@3mdeb.com, eric.snowberg@oracle.com, mtottenh@akamai.com,
 ard.biesheuvel@linaro.org, dpsmith@apertussolutions.com,
 andrew.cooper3@citrix.com, konrad.wilk@oracle.com, phcoder@gmail.com,
 javierm@redhat.com, mjg59@google.com, alexander.burmashev@oracle.com,
 krystian.hebel@3mdeb.com, kanth.ghatraju@oracle.com,
 lukasz.hawrylko@linux.intel.com, ross.philipson@oracle.com, hpa@zytor.com,
 leif@nuviainc.com, pjones@redhat.com, alec.brown@oracle.com,
 piotr.krol@3mdeb.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hey,

Below you can find my rough idea of the bootloader log format, which is
a generic thing but will initially be used for the TrenchBoot work. I
discussed this proposal with Ross and Daniel S., so the idea has been
through an initial round of sanitization. Now I would like to take
feedback from other folks too. So, please take a look and complain...

In general we want to pass the messages produced by the bootloader to the OS
kernel, and finally to userspace for further processing and analysis. Below
is a description of the structures which will be used for this purpose.

  struct bootloader_log_msgs
  {
    grub_uint32_t level;
    grub_uint32_t facility;
    char type[];
    char msg[];
  }

  struct bootloader_log
  {
    grub_uint32_t version;
    grub_uint32_t producer;
    grub_uint32_t size;
    grub_uint32_t next_off;
    struct bootloader_log_msgs msgs[];
  }

The members of struct bootloader_log:
  - version: the bootloader log format version number, 1 for now,
  - producer: the producer/bootloader type; we can steal some values from
    linux/Documentation/x86/boot.rst:type_of_loader,
  - size: total size of the log buffer including the bootloader_log struct,
  - next_off: offset in bytes, from start of the bootloader_log struct,
    of the next byte after the last log message in the msgs[];
    i.e. the offset of the next available log message slot,
  - msgs: the array of log messages.

The members of struct bootloader_log_msgs:
  - level: similar to syslog meaning; can be used to differentiate
    normal messages from debug messages; exact interpretation depends
    on the current producer/bootloader type specified in the
    bootloader_log.producer,
  - facility: similar to syslog meaning; can be used to differentiate
    the sources of the messages, e.g. message produced by networking
    module; exact interpretation depends on the current producer/bootloader
    type specified in the bootloader_log.producer,
  - type: similar to the facility member, but a NUL-terminated string
    instead of an integer; this will be used by GRUB for messages printed
    using grub_dprintf(),
  - msg: the bootloader log message, a NUL-terminated string.

Note: The bootloaders are free to use/ignore any given set of the level,
      facility and/or type members, though the usage of these members
      has to be clearly defined. Ignored integer members should be set
      to 0, and an ignored type member should contain an empty
      NUL-terminated string. The msg member is mandatory but can be an
      empty NUL-terminated string.
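The append and walk semantics implied by next_off and the variable-length
msgs[] array can be sketched as below. This is an illustration only: the
exact field layout, the fixed-width types, and the helper names
(log_append, log_count) are my assumptions, not part of the proposal.

```c
#include <stdint.h>
#include <string.h>

/* Field names follow the proposal above; the layout is an assumption. */
struct bootloader_log_msg {
    uint32_t level;
    uint32_t facility;
    char strings[];           /* "type\0" immediately followed by "msg\0" */
};

struct bootloader_log {
    uint32_t version;
    uint32_t producer;
    uint32_t size;            /* total buffer size, header included */
    uint32_t next_off;        /* offset of the next free message slot */
    /* struct bootloader_log_msg msgs[] follows */
};

/* Append one message at next_off; returns 0, or -1 if the buffer is full. */
static int log_append(struct bootloader_log *log, uint32_t level,
                      uint32_t facility, const char *type, const char *msg)
{
    size_t tlen = strlen(type) + 1, mlen = strlen(msg) + 1;
    size_t need = sizeof(struct bootloader_log_msg) + tlen + mlen;
    struct bootloader_log_msg *m;

    if (need > log->size - log->next_off)
        return -1;

    m = (struct bootloader_log_msg *)((char *)log + log->next_off);
    m->level = level;
    m->facility = facility;
    memcpy(m->strings, type, tlen);
    memcpy(m->strings + tlen, msg, mlen);
    log->next_off += need;
    return 0;
}

/* Messages carry no length field of their own, so a consumer must walk
 * the NUL-terminated strings to locate each following entry. */
static unsigned int log_count(const struct bootloader_log *log)
{
    uint32_t off = sizeof(*log);
    unsigned int n = 0;

    while (off < log->next_off) {
        const struct bootloader_log_msg *m =
            (const struct bootloader_log_msg *)((const char *)log + off);
        const char *type = m->strings;
        const char *msg = type + strlen(type) + 1;

        off += sizeof(*m) + strlen(type) + 1 + strlen(msg) + 1;
        ++n;
        (void)msg;
    }
    return n;
}
```

One thing the sketch surfaces: with this flat layout, an entry whose
combined string length is not a multiple of four leaves the next header
misaligned, so a real implementation would probably want to pad next_off
(the proposal as written leaves alignment unspecified).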

Taking into account [1] and [2], I want to make this functionality as
generic as possible, so this bootloader log can be used with any bootloader
and OS kernel. However, initially the functionality will be implemented for
the Linux kernel and its boot protocol.

In the case of the Linux kernel, the pointer to the bootloader_log struct
should be passed from the bootloader to the kernel through the boot_params,
and the bootloader_log struct contents should be exposed via sysfs, e.g.
somewhere under /sys/kernel/debug or /sys/kernel/tracing, or maybe we should
create a new /sys/bootloader/log node.

If everybody is OK with this rough proposal then I will start working
on making it a part of the Multiboot2 specification (the text above is
just a raw description of the idea; it is not the final text which will
land in the spec). If you see a better place for this thing, just drop
me a line.

Daniel

[1] https://lists.gnu.org/archive/html/grub-devel/2019-10/msg00107.html
[2] https://lists.gnu.org/archive/html/grub-devel/2019-11/msg00079.html


From xen-devel-bounces@lists.xenproject.org Fri May 29 11:30:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:30:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedDg-0000NH-Ls; Fri, 29 May 2020 11:30:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vz0h=7L=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jedDf-0000Mh-Kz
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:30:35 +0000
X-Inumbo-ID: cf9d655a-a19f-11ea-9947-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf9d655a-a19f-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:30:34 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id s8so3106597wrt.9
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 04:30:34 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=sz1FcDFLMegKlUZkDsfZp3Ndlk2QZ4y65naYGuGgzZ4=;
 b=jbhoKm2h1VwIxBjfDrvHZi4N3CWp0Q9vEEnRgo/8De8lsy25j4flxsOgnUO+QJz+yc
 VPK8gGykJY3MjksCdGjYJkkZB5AEKhlTQ3Uv6W4Da+GRQQIQkPkDvuA8g5eLiOs/NqRe
 fnytXAUA+RLp69uMCVUoL9e6GANPu2IrntVG0BMfIx2RsKzd2xk+ITkNhGQc6VEjeF4B
 gmvcSdbNAqVmJ3kd5A721RfTFoe9QWBVeW9vPaNIjhyW3CH7wKxhpiscQ6B9I0+Nn6lx
 lYW2i2a4sbAjrUpQZ1jM0IchNw7aIE0wpNEWckDhcirLl9EXZe1fiXKegAExzcGzXjF2
 nXBg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=sz1FcDFLMegKlUZkDsfZp3Ndlk2QZ4y65naYGuGgzZ4=;
 b=uMk/2GlfAMLM8Safwc5wWOyMK7w5lMxGK0koJucK1poMMaSTj2CZigarqCytxI40HX
 eqOZ1wtQ8lmYwkdYIuflZh7+vMu73kD+CjE+OCTRuMADSgBdiIKHf+5cnExiO0a/leak
 462yuXOkxnc2Q4uCsYww7mixnbNlhy3ny3pZC4WAhDaYYlm6gHDPjcyKg59bjwY9TzhD
 j2/3w0k1TZSoSwjSV3vBNHgB8x2iPj+VSWIDm/zuHGmkgnS+25A8eQiFjWEKzuppPPjp
 e4zM5u8SzaAF4kIBWVVThy7obRBlF3xOzXi8N/EWLyjMLU7I3R3zbyQhNg8bYf2IWa3/
 8z/Q==
X-Gm-Message-State: AOAM531MUPMmbXWaGmqHoDFpmLuJ7ERh/faAT6P8MGzZDeOGLxYDf0vC
 DPsVpKKci3GIT5FR4OoRw+Q=
X-Google-Smtp-Source: ABdhPJw5gUHgQzVpWfymafzik1vjb7np8MPJfZuQgVzbfXSev0O6vfk96ckzO5C4zvOHcPF4K2iE5Q==
X-Received: by 2002:a5d:4d01:: with SMTP id z1mr8938355wrt.29.1590751833907;
 Fri, 29 May 2020 04:30:33 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.187])
 by smtp.gmail.com with ESMTPSA id l17sm10213603wmi.3.2020.05.29.04.30.32
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 29 May 2020 04:30:33 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Ian Jackson'" <ian.jackson@eu.citrix.com>,
 <xen-devel@lists.xenproject.org>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
Subject: RE: [OSSTEST PATCH v2 00/49] Switch to Debian buster (= Debian stable)
Date: Fri, 29 May 2020 12:30:31 +0100
Message-ID: <005401d635ac$90bf9510$b23ebf30$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH6xReBsxPpGuE++Yl5QcE4sAtmFah1zWCQ
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: committers@xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Ian Jackson <ian.jackson@eu.citrix.com>
> Sent: 29 May 2020 12:19
> To: xen-devel@lists.xenproject.org; Paul Durrant <paul@xen.org>
> Cc: Ian Jackson <ian.jackson@eu.citrix.com>; committers@xenproject.org
> Subject: [OSSTEST PATCH v2 00/49] Switch to Debian buster (= Debian stable)
> 
> This series looks about as ready as it is going to be.  Unfortunately
> there are still two issues, each of which cropped up once in my final
> formal retest.  See below.
> 
> What are people's opinions?  Should I push this to osstest pretest
> soon after the Xen codefreeze (eg, after we get the first push after
> freeze) ?
> 

That sounds reasonable; I don't think we want to perturb osstest until after the first push.

> The downside would be introducing new low-probability heisenbugs.  But
> in my tests it seems that some existing heisenbugs may be abated.
> It's hard to say for sure.  The upside is a lower risk that Debian
> will break our stuff and also being able to commission a number of
> machines which do not work with stretch (Debian oldstable).
> 

I assume we can revert if things go badly wrong, and being able to commission more machines would seem to be quite beneficial at this
stage.

  Paul

> The two issues:
> 
> 1. In one test, libvirt segfaulted.  This is with osstest's own-built
>    tested version of libvirt; osstest's own-built tested version of
>    Linux; and osstest's own-built xen.git#master.
> 
>    http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-i386-libvirt/info.html
>    http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-i386-libvirt/rimava1---var-log-
> kern.log.gz
>    | May 28 21:39:11 rimava1 kernel: [ 1146.326593] libvirtd[16868]: segfault at 820 ip b72535b0 sp
> abce5090 error 4 in libc-2.28.so[b71f0000+14e000]
> 
> 2. In one test, the Debian installer hung.  The kernel printed many
>    soft lockup messages.  This is with osstest's own dom0 kernel and
>    hypervisor, but the 4.19.67-2+deb10u1 kernel from Debian.
> 
>    http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-amd64-xl-qemut-debianhvm-i386-
> xsm/info.html
>    http://logs.test-lab.xenproject.org/osstest/logs/150456/test-amd64-amd64-xl-qemut-debianhvm-i386-
> xsm/godello1---var-log-xen-qemu-dm-debianhvm.guest.osstest.log.gz
> 
> Ian Jackson (49):
>   ts-logs-capture: Cope if xl shutdown leaves domain running for a bit
>   ts-xen-build-prep: Install rsync
>   lvcreate argments: pass --yes -Z y -W y
>   TestSupport: allow more time for apt
>   Booting: Use `--' rather than `---' to introduce host cmdline
>   di_installcmdline_core: Pass locale on d-i command line
>   setupboot_grub2: Drop $submenu variable
>   ts-leak-check: Ignore buster's udevd too
>   Bodge systemd random seed arrangements
>   Debian guests made with xen-tools: Write systemd random seed file
>   ts-debian-di-install: Provide guest with more RAM
>   Debian: preseed: use priority= alias
>   Debian: Specify `priority=critical' rather than locale
>   Honour 'LinuxSerialConsole <suite>' host property
>   buster: make-hosts-flight: Add to possible suites for hosts flight
>   buster: Extend grub2 uefi no install workaround
>   buster: ts-host-install: Extend net.ifnames workaround
>   buster: Deinstall the "systemd" package
>   buster: preseed partman-auto-lvm/guided_size
>   buster: ts-host-install: NTP not honoured bug remains
>   buster: Extend ARM clock workaround
>   buster: Extend guest bootloader workaround
>   Honour DebianImageFile_SUITE_ARCH
>   buster: Specify DebianImageFile_SUITE_ARCH
>   20_linux_xen: Copy Debian buster version into our initramfs area
>   20_linux_xen: Adhoc template substitution
>   20_linux_xen: Ignore xenpolicy and config files too
>   20_linux_xen: Support Xen Security Modules (XSM/FLASK)
>   mg-debian-installer-update: support overlay-intramfs-SUITE
>   overlay-initrd-buster/sbin/reopen-console: Copy from Debian
>   overlay-initrd-buster/sbin/reopen-console: Fix #932416
>   buster: chiark-scripts: Install a new version on buster too
>   buster: Provide TftpDiVersion
>   buster: grub, arm64: extend chainloading workaround
>   buster: setupboot_grub2: Note what files exist in /boot
>   buster: setupboot_grub2: Handle missing policy file bug
>   buster: Extend workaround for dhcpd EROFS bug
>   ts-xen-install: Add $ho argument to some_extradebs
>   ts-xen-install: Move some_extradebs to Debian.pm
>   Debian.pm: Break out standard_extradebs
>   Debian.pm: Move standard_extradebs to ts-host-install
>   buster: Install own linux-libc-dev package (!)
>   setupboot_grub2: Insist on space after directives
>   setupboot_grub2: Print line number of entry we are using
>   setupboot_grub2: Recognise --nounzip for initramfs
>   setupboot_grub2: Copy hv command line from grub to xen.cfg
>   setupboot_grub2: Do not boot with XSM policy etc. unless xsm=1
>   buster 20_linux_xen: Only load policy in XSM-enabled builds
>   buster: Switch to Debian buster as the default suite
> 
>  Osstest.pm                                    |   2 +-
>  Osstest/Debian.pm                             | 128 +++++--
>  Osstest/TestSupport.pm                        |  15 +-
>  make-hosts-flight                             |   2 +-
>  mfi-common                                    |   9 +
>  mg-debian-installer-update                    |  20 ++
>  overlay-buster/etc/grub.d/20_linux_xen        | 327 ++++++++++++++++++
>  overlay-initrd-buster/sbin/reopen-console     | 126 +++++++
>  .../override.conf                             |   3 +
>  overlay/usr/local/bin/random-seed-add         |  33 ++
>  production-config                             |   5 +
>  ts-debian-di-install                          |   2 +-
>  ts-debian-fixup                               |   6 +
>  ts-host-install                               |   6 +-
>  ts-leak-check                                 |   1 +
>  ts-logs-capture                               |   1 +
>  ts-xen-build-prep                             |   6 +-
>  ts-xen-install                                |  41 +--
>  18 files changed, 656 insertions(+), 77 deletions(-)
>  create mode 100755 overlay-buster/etc/grub.d/20_linux_xen
>  create mode 100755 overlay-initrd-buster/sbin/reopen-console
>  create mode 100644 overlay/etc/systemd/system/systemd-random-seed.service.d/override.conf
>  create mode 100755 overlay/usr/local/bin/random-seed-add
> 
> --
> 2.20.1




From xen-devel-bounces@lists.xenproject.org Fri May 29 11:32:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:32:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedFZ-0000jE-3S; Fri, 29 May 2020 11:32:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedFX-0000iy-Q0
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:32:31 +0000
X-Inumbo-ID: 15034272-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15034272-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:32:31 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3W-0003xZ-UT; Fri, 29 May 2020 12:20:07 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 36/49] buster: setupboot_grub2: Handle missing policy
 file bug
Date: Fri, 29 May 2020 12:19:32 +0100
Message-Id: <20200529111945.21394-37-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is a complex interaction between update-grub and the Xen build
system on ARM64.  It is not clear exactly what is to blame, but since
we have our own 20_linux_xen bodge anyway, let's work around it there
until we no longer carry that bodge.
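
The new check in the diff below strips an optional leading slash from
the entry's xenpolicy path and looks it up among the files actually
present in /boot.  A rough shell equivalent, with illustrative file
names (the real code does this in Perl against %bootfiles):

```shell
# Rough shell equivalent of the new on-disk check (file names illustrative).
bootdir="$(mktemp -d)"                       # stand-in for /boot
touch "$bootdir/xen-4.13" "$bootdir/vmlinuz-4.19"   # note: no xenpolicy-* file

entry_xenpolicy="/xenpolicy-4.13"            # path as written in the menu entry
rel="${entry_xenpolicy#/}"                   # strip the optional leading slash
if [ ! -e "$bootdir/$rel" ]; then
    echo "skipping entry; XSM policy file not on disk"
fi
```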

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 0386ff7a..51217fb4 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -496,7 +496,17 @@ sub setupboot_grub2 ($$$$) {
 			 " kernel $entry->{KernVer}, not $want_kernver)");
 		} elsif ($want_xsm && !defined $entry->{Xenpolicy}) {
 		    logm("(skipping entry at $entry->{StartLine}..$.;".
-			 " XSM policy file not present)");
+			 " XSM policy file not mentioned)");
+		} elsif ($ho->{Suite} =~ m/buster/ &&
+			 defined $entry->{Xenpolicy} &&
+			 !$bootfiles{
+                             $entry->{Xenpolicy} =~ m{^/?} ? $' : die
+						 }) {
+		    # Our 20_linux_xen bodge with buster's update-grub
+		    # generates entries which mention /boot/xenpolicy-xen
+		    # even though that file doesn't exist on ARM64.
+		    logm("(skipping entry at $entry->{StartLine}..$.;".
+			 " XSM policy file not on disk!)");
 		} else {
 		    # yes!
 		    last;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:32:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:32:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedFd-0000k0-BQ; Fri, 29 May 2020 11:32:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedFc-0000ji-Nv
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:32:36 +0000
X-Inumbo-ID: 17e83dee-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17e83dee-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:32:36 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3S-0003xZ-BE; Fri, 29 May 2020 12:20:02 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 20/49] buster: ts-host-install: NTP not honoured bug
 remains
Date: Fri, 29 May 2020 12:19:16 +0100
Message-Id: <20200529111945.21394-21-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Debian #778564 remains open.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-install | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-host-install b/ts-host-install
index fe26f70f..253dbb5d 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -152,7 +152,7 @@ END
 	    my $done= 0;
 	    while (<EI>) {
 		if (m/^server\b|^pool\b\s/) {
-		    if ($ho->{Suite} =~ m/lenny|squeeze|wheezy|jessie|stretch/) {
+		    if ($ho->{Suite} =~ m/lenny|squeeze|wheezy|jessie|stretch|buster/) {
 			$_= $done ? "" : "server $ntpserver\n";
 		    } else {
 			m/^server \Q$ntpserver\E\s/ or
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:32:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:32:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedFi-0000lV-KG; Fri, 29 May 2020 11:32:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedFh-0000lA-OI
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:32:41 +0000
X-Inumbo-ID: 19289672-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19289672-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:32:38 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3T-0003xZ-U8; Fri, 29 May 2020 12:20:03 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 27/49] 20_linux_xen: Ignore xenpolicy and config files
 too
Date: Fri, 29 May 2020 12:19:23 +0100
Message-Id: <20200529111945.21394-28-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

"file_is_not_sym" currently only checks for xen-syms.  Extend it to
disregard xenpolicy (XSM policy files) and files ending in .config
(which are built by the Xen upstream build system in some
configurations and can therefore end up in /boot).

Rename the function accordingly, to "file_is_not_xen_garbage".
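
The extended filter can be exercised on its own; this sketch replays
the new case statement over some illustrative /boot file names:

```shell
# The extended filter from the diff, run over illustrative /boot names.
file_is_not_xen_garbage () {
    case "$1" in
	*/xen-syms-*)  return 1;;    # symbols image, not bootable
	*/xenpolicy-*) return 1;;    # XSM policy file
	*/*.config)    return 1;;    # build-system config output
	*)             return 0;;
    esac
}

for f in /boot/xen-4.13-arm64 /boot/xen-syms-4.13-arm64 \
	 /boot/xenpolicy-4.13 /boot/xen-4.13.config; do
    if file_is_not_xen_garbage "$f"; then echo "keep $f"; else echo "skip $f"; fi
done
# Only /boot/xen-4.13-arm64 is kept.
```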

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-buster/etc/grub.d/20_linux_xen | 8 ++++++--
 1 file changed, 6 insertions(+), 2 deletions(-)

diff --git a/overlay-buster/etc/grub.d/20_linux_xen b/overlay-buster/etc/grub.d/20_linux_xen
index fb3ed82f..01dfcb57 100755
--- a/overlay-buster/etc/grub.d/20_linux_xen
+++ b/overlay-buster/etc/grub.d/20_linux_xen
@@ -167,10 +167,14 @@ if [ "x${linux_list}" = "x" ] ; then
     exit 0
 fi
 
-file_is_not_sym () {
+file_is_not_xen_garbage () {
     case "$1" in
 	*/xen-syms-*)
 	    return 1;;
+	*/xenpolicy-*)
+	    return 1;;
+	*/*.config)
+	    return 1;;
 	*)
 	    return 0;;
     esac
@@ -178,7 +182,7 @@ file_is_not_sym () {
 
 xen_list=
 for i in /boot/xen*; do
-    if grub_file_is_not_garbage "$i" && file_is_not_sym "$i" ; then xen_list="$xen_list $i" ; fi
+    if grub_file_is_not_garbage "$i" && file_is_not_xen_garbage "$i" ; then xen_list="$xen_list $i" ; fi
 done
 prepare_boot_cache=
 boot_device_id=
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:32:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:32:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedFo-0000oB-2s; Fri, 29 May 2020 11:32:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedFm-0000ne-OR
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:32:46 +0000
X-Inumbo-ID: 1a487112-a1a0-11ea-81bc-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a487112-a1a0-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:32:40 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3T-0003xZ-OK; Fri, 29 May 2020 12:20:03 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 26/49] 20_linux_xen: Adhoc template substitution
Date: Fri, 29 May 2020 12:19:22 +0100
Message-Id: <20200529111945.21394-27-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This file is a template that various build-time variables get
substituted into.  Make those substitutions by hand (actually, by
copying the values from our file for stretch).  And rename the file.

So now we are using our file instead of the grub package's.  But it is
the same...
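
The substitution amounts to something like the following.  The sed
invocation is only an illustration (the real edit was done by hand, and
the here-doc stands in for the relevant lines of 20_linux_xen.in), but
the substituted values are exactly the ones that end up in the renamed
file:

```shell
# Illustrative reconstruction of the by-hand template substitution.
cat > 20_linux_xen.in <<'EOF'
prefix="@prefix@"
exec_prefix="@exec_prefix@"
datarootdir="@datarootdir@"
export TEXTDOMAIN=@PACKAGE@
export TEXTDOMAINDIR="@localedir@"
EOF

sed -e 's,@prefix@,/usr,g' \
    -e 's,@exec_prefix@,/usr,g' \
    -e 's,@datarootdir@,/usr/share,g' \
    -e 's,@PACKAGE@,grub,g' \
    -e 's,@localedir@,${datarootdir}/locale,g' \
    20_linux_xen.in > 20_linux_xen
chmod +x 20_linux_xen     # mode change 100644 -> 100755, as in the diff
```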

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 .../etc/grub.d/{20_linux_xen.in => 20_linux_xen}       | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
 rename overlay-buster/etc/grub.d/{20_linux_xen.in => 20_linux_xen} (98%)
 mode change 100644 => 100755

diff --git a/overlay-buster/etc/grub.d/20_linux_xen.in b/overlay-buster/etc/grub.d/20_linux_xen
old mode 100644
new mode 100755
similarity index 98%
rename from overlay-buster/etc/grub.d/20_linux_xen.in
rename to overlay-buster/etc/grub.d/20_linux_xen
index 98ef163c..fb3ed82f
--- a/overlay-buster/etc/grub.d/20_linux_xen.in
+++ b/overlay-buster/etc/grub.d/20_linux_xen
@@ -17,14 +17,14 @@ set -e
 # You should have received a copy of the GNU General Public License
 # along with GRUB.  If not, see <http://www.gnu.org/licenses/>.
 
-prefix="@prefix@"
-exec_prefix="@exec_prefix@"
-datarootdir="@datarootdir@"
+prefix="/usr"
+exec_prefix="/usr"
+datarootdir="/usr/share"
 
 . "$pkgdatadir/grub-mkconfig_lib"
 
-export TEXTDOMAIN=@PACKAGE@
-export TEXTDOMAINDIR="@localedir@"
+export TEXTDOMAIN=grub
+export TEXTDOMAINDIR="${datarootdir}/locale"
 
 CLASS="--class gnu-linux --class gnu --class os --class xen"
 SUPPORTED_INITS="sysvinit:/lib/sysvinit/init systemd:/lib/systemd/systemd upstart:/sbin/upstart"
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:32:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedFt-0000qd-Cd; Fri, 29 May 2020 11:32:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedFr-0000pw-Og
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:32:51 +0000
X-Inumbo-ID: 1b6d7fc4-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b6d7fc4-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:32:41 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Z-0003xZ-3J; Fri, 29 May 2020 12:20:09 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 39/49] ts-xen-install: Move some_extradebs to Debian.pm
Date: Fri, 29 May 2020 12:19:35 +0100
Message-Id: <20200529111945.21394-40-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 38 ++++++++++++++++++++++++++++++++++++++
 ts-xen-install    | 36 ------------------------------------
 2 files changed, 38 insertions(+), 36 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 49d94b9b..d51ac493 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -24,6 +24,7 @@ use POSIX;
 
 use IO::File;
 use File::Copy;
+use File::Basename;
 
 use Osstest;
 use Osstest::TestSupport;
@@ -50,6 +51,7 @@ BEGIN {
                       di_vg_name
                       debian_dhcp_rofs_fix
 		      debian_write_random_seed_command
+		      some_extradebs
                       );
     %EXPORT_TAGS = ( );
 
@@ -1646,4 +1648,40 @@ sub debian_write_random_seed_command ($) {
 END
 }
 
+sub some_extradebs ($$) {
+    my ($ho, $items) = @_;
+    my $cfgvar = join('_', @$items);
+    my $specs = $c{$cfgvar};
+    if (!length $specs) {
+	logm("$cfgvar: no extra debs");
+	return;
+    }
+    my $counter = 0;
+    my $rsync_installed;
+    foreach my $spec (split /\s+/, $specs) {
+	my $path = $spec;
+	$path = "$c{Images}/$path" unless $path =~ m{^/|^\./};
+	my ($ontarget, $dpkgopts);
+	if ($path =~ m{/$}) {
+	    $ontarget = "extrapackages-$cfgvar-$counter"; $counter++;
+	    $dpkgopts = '-iGROEB';
+	    logm("$cfgvar: updating packages from directory $path");
+	    target_install_packages($ho, qw(rsync)) unless $rsync_installed++;
+	    target_putfile_root($ho,300, "$path/.", $ontarget, '-r');
+	} elsif ($path =~ m{\.deb$}) {
+	    $path =~ s{_\.deb}{ "_$ho->{Arch}.deb" }e;
+	    logm("$cfgvar: installing $path");
+	    $ontarget = basename($path);
+	    $dpkgopts = '-iB';
+	    target_putfile_root($ho,300, $path, $ontarget);
+	} else {
+	    die "no / or . deb in $spec ?";
+	}
+	target_cmd_root($ho,
+			"dpkg --force-confold $dpkgopts $ontarget </dev/null",
+			300);
+	target_run_pkgmanager_install($ho, [], 0,1);
+    }
+}
+
 1;
diff --git a/ts-xen-install b/ts-xen-install
index 6196a890..d67cd121 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -71,42 +71,6 @@ sub packages () {
         if toolstack($ho)->{ExtraPackages};
 }
 
-sub some_extradebs ($$) {
-    my ($ho, $items) = @_;
-    my $cfgvar = join('_', @$items);
-    my $specs = $c{$cfgvar};
-    if (!length $specs) {
-	logm("$cfgvar: no extra debs");
-	return;
-    }
-    my $counter = 0;
-    my $rsync_installed;
-    foreach my $spec (split /\s+/, $specs) {
-	my $path = $spec;
-	$path = "$c{Images}/$path" unless $path =~ m{^/|^\./};
-	my ($ontarget, $dpkgopts);
-	if ($path =~ m{/$}) {
-	    $ontarget = "extrapackages-$cfgvar-$counter"; $counter++;
-	    $dpkgopts = '-iGROEB';
-	    logm("$cfgvar: updating packages from directory $path");
-	    target_install_packages($ho, qw(rsync)) unless $rsync_installed++;
-	    target_putfile_root($ho,300, "$path/.", $ontarget, '-r');
-	} elsif ($path =~ m{\.deb$}) {
-	    $path =~ s{_\.deb}{ "_$ho->{Arch}.deb" }e;
-	    logm("$cfgvar: installing $path");
-	    $ontarget = basename($path);
-	    $dpkgopts = '-iB';
-	    target_putfile_root($ho,300, $path, $ontarget);
-	} else {
-	    die "no / or . deb in $spec ?";
-	}
-	target_cmd_root($ho,
-			"dpkg --force-confold $dpkgopts $ontarget </dev/null",
-			300);
-	target_run_pkgmanager_install($ho, [], 0,1);
-    }
-}
-
 sub extradebs () {
     my $suite = $ho->{Suite};
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:32:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:32:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedFx-0000t1-L8; Fri, 29 May 2020 11:32:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedFw-0000sc-O8
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:32:56 +0000
X-Inumbo-ID: 1c9a0f5c-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c9a0f5c-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:32:43 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3a-0003xZ-Ai; Fri, 29 May 2020 12:20:10 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 44/49] setupboot_grub2: Print line number of entry we
 are using
Date: Fri, 29 May 2020 12:19:40 +0100
Message-Id: <20200529111945.21394-45-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index a20569e5..615047cb 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -562,7 +562,7 @@ sub setupboot_grub2 ($$$$) {
 
         die unless $entry->{Title};
 
-        logm("boot check: grub2, found $entry->{Title}");
+        logm("boot check: grub2, l.$., found $entry->{Title}");
 
 	die unless $entry->{$kernkey};
 	if (defined $xenhopt) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedG2-0000wS-Ul; Fri, 29 May 2020 11:33:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedG1-0000vr-Oq
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:01 +0000
X-Inumbo-ID: 1e116858-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e116858-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:32:46 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3U-0003xZ-G1; Fri, 29 May 2020 12:20:04 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 30/49] overlay-initrd-buster/sbin/reopen-console: Copy
 from Debian
Date: Fri, 29 May 2020 12:19:26 +0100
Message-Id: <20200529111945.21394-31-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We are going to patch this file to work around a bug, using the new
overlay mechanism.

The first step is to include the file in our overlay so we overwrite
it.  Currently, this is a no-op, so no functional change.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-initrd-buster/sbin/reopen-console | 94 +++++++++++++++++++++++
 1 file changed, 94 insertions(+)
 create mode 100755 overlay-initrd-buster/sbin/reopen-console

diff --git a/overlay-initrd-buster/sbin/reopen-console b/overlay-initrd-buster/sbin/reopen-console
new file mode 100755
index 00000000..dd354deb
--- /dev/null
+++ b/overlay-initrd-buster/sbin/reopen-console
@@ -0,0 +1,94 @@
+#!/bin/sh
+
+# First find the enabled consoles from the kernel, noting if one is 'preferred'
+# Record these.
+# Run the startup scripts on the preferred console
+
+# In order to have D-I appear on all consoles, modify the inittab to
+# add one entry for each console, running debian-installer.
+# Finally HUP init so that it runs those installers
+# (but doesn't rerun the sysinit startup stuff, including this script)
+
+
+NL="
+"
+
+LOGGER_UP=0
+LOG_FILE=/var/log/reopen-console
+
+log() {
+	# In very early startup we don't have syslog. Log to file that
+	# we can flush out later so we can at least see what happened
+	# at early startup
+	if [ $LOGGER_UP -eq 1 ]; then
+	        logger -t reopen-console "$@"
+	else
+		echo "$@" >> $LOG_FILE
+	fi
+}
+
+flush_logger () {
+	cat $LOG_FILE | logger -t reopen-console
+	rm $LOG_FILE
+}
+
+consoles=
+preferred=
+# Retrieve all enabled consoles from kernel; ignore those
+# for which no device file exists
+
+kernelconsoles="$(cat /proc/consoles)"
+for cons in $(echo "$kernelconsoles" | sed -n -r -e 's/(^.*)  .*\((.*)\).*$/\1/p' )
+do
+	log "Looking at console $cons from /proc/consoles"
+	status=$(echo "$kernelconsoles" | grep $cons | sed -n -r -e 's/(^.*) *.*\((.*)\).*$/\2/p' )
+	if [ -e "/dev/$cons" ] && [ $(echo "$status" | grep -o 'E') ]; then
+		consoles="${consoles:+$consoles$NL}$cons"
+		log "   Adding $cons to consoles list"
+	fi
+	# 'C' console is 'most prefered'.
+	if [ $(echo "$status" | grep -o 'C') ]; then
+		preferred="$cons"
+		log "   $cons is preferred"
+	fi
+done
+
+if [ -z "$consoles" ]; then
+	# Nothing found? Default to /dev/console.
+	log "Found no consoles! Defaulting to /dev/console"
+	consoles=console
+fi
+if [ -z "$preferred" ]; then
+	#None marked preferred? Use the first one
+	preferred=$(echo "$consoles" | head -n 1)
+	log "Found no preferred console. Picking $preferred"
+fi
+
+for cons in $consoles
+do
+	echo "/dev/$cons " >> /var/run/console-devices
+done
+echo "/dev/$preferred " > /var/run/console-preferred
+
+
+# Add debian-installer lines into inittab - one per console
+for cons in $consoles
+do
+	log "Adding inittab entry for $cons"
+	echo "$cons::respawn:/sbin/debian-installer" >> /etc/inittab
+done
+
+# Run the startup scripts once, using the preferred console
+cons=$(cat /var/run/console-preferred)
+# Some other session may have that console as ctty. Steal it from them
+/sbin/steal-ctty $cons "$@"
+
+# Now we should have syslog running, so flush our log entries
+LOGGER_UP=1
+flush_logger
+
+# Finally restart init to run debian-installer on discovered consoles
+log "Restarting init to start d-i on the consoles we found"
+kill -HUP 1
+
+exit 0
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedG8-0000zs-74; Fri, 29 May 2020 11:33:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedG6-0000yz-Oi
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:06 +0000
X-Inumbo-ID: 20c7e036-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20c7e036-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:32:50 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3T-0003xZ-Ia; Fri, 29 May 2020 12:20:03 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 25/49] 20_linux_xen: Copy Debian buster version into
 our initramfs area
Date: Fri, 29 May 2020 12:19:21 +0100
Message-Id: <20200529111945.21394-26-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is from 41e42571ebc50fa351cd63ce40044946652c5c72 in Debian's grub
package.

We are going to want to modify this to support XSM/FLASK and cope with
upstream build outputs.

In this commit we copy the exact file contents across.  It has no
effect right now because of the ".in" extension: the file is in fact a
template, and nothing here processes it yet.

At the time of writing I am trying to send our substantive changes
upstream via Debian's Gitlab:
  https://salsa.debian.org/grub-team/grub/-/merge_requests/18

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
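[Editor's note, not part of the patch: the ".in" suffix marks the file
as an autoconf-style template; at build/packaging time placeholders such
as @prefix@ are substituted before the script is installed.  A minimal
hedged sketch of that substitution, with made-up values:]

```shell
# Create a tiny stand-in template (values and filenames are illustrative
# only; the real substitution is done by the grub build machinery).
cat > 20_linux_xen.in <<'EOF'
prefix="@prefix@"
datarootdir="@datarootdir@"
EOF

# Substitute the autoconf-style placeholders, as configure-generated
# rules would, producing the runnable grub.d script.
sed -e 's,@prefix@,/usr,g' \
    -e 's,@datarootdir@,/usr/share,g' \
    20_linux_xen.in > 20_linux_xen

cat 20_linux_xen
```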
 overlay-buster/etc/grub.d/20_linux_xen.in | 299 ++++++++++++++++++++++
 1 file changed, 299 insertions(+)
 create mode 100644 overlay-buster/etc/grub.d/20_linux_xen.in

diff --git a/overlay-buster/etc/grub.d/20_linux_xen.in b/overlay-buster/etc/grub.d/20_linux_xen.in
new file mode 100644
index 00000000..98ef163c
--- /dev/null
+++ b/overlay-buster/etc/grub.d/20_linux_xen.in
@@ -0,0 +1,299 @@
+#! /bin/sh
+set -e
+
+# grub-mkconfig helper script.
+# Copyright (C) 2006,2007,2008,2009,2010  Free Software Foundation, Inc.
+#
+# GRUB is free software: you can redistribute it and/or modify
+# it under the terms of the GNU General Public License as published by
+# the Free Software Foundation, either version 3 of the License, or
+# (at your option) any later version.
+#
+# GRUB is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+# GNU General Public License for more details.
+#
+# You should have received a copy of the GNU General Public License
+# along with GRUB.  If not, see <http://www.gnu.org/licenses/>.
+
+prefix="@prefix@"
+exec_prefix="@exec_prefix@"
+datarootdir="@datarootdir@"
+
+. "$pkgdatadir/grub-mkconfig_lib"
+
+export TEXTDOMAIN=@PACKAGE@
+export TEXTDOMAINDIR="@localedir@"
+
+CLASS="--class gnu-linux --class gnu --class os --class xen"
+SUPPORTED_INITS="sysvinit:/lib/sysvinit/init systemd:/lib/systemd/systemd upstart:/sbin/upstart"
+
+if [ "x${GRUB_DISTRIBUTOR}" = "x" ] ; then
+  OS=GNU/Linux
+else
+  OS="${GRUB_DISTRIBUTOR} GNU/Linux"
+  CLASS="--class $(echo ${GRUB_DISTRIBUTOR} | tr 'A-Z' 'a-z' | cut -d' ' -f1|LC_ALL=C sed 's,[^[:alnum:]_],_,g') ${CLASS}"
+fi
+
+# loop-AES arranges things so that /dev/loop/X can be our root device, but
+# the initrds that Linux uses don't like that.
+case ${GRUB_DEVICE} in
+  /dev/loop/*|/dev/loop[0-9])
+    GRUB_DEVICE=`losetup ${GRUB_DEVICE} | sed -e "s/^[^(]*(\([^)]\+\)).*/\1/"`
+    # We can't cope with devices loop-mounted from files here.
+    case ${GRUB_DEVICE} in
+      /dev/*) ;;
+      *) exit 0 ;;
+    esac
+  ;;
+esac
+
+# btrfs may reside on multiple devices. We cannot pass them as value of root= parameter
+# and mounting btrfs requires user space scanning, so force UUID in this case.
+if [ "x${GRUB_DEVICE_UUID}" = "x" ] || [ "x${GRUB_DISABLE_LINUX_UUID}" = "xtrue" ] \
+    || ! test -e "/dev/disk/by-uuid/${GRUB_DEVICE_UUID}" \
+    || ( test -e "${GRUB_DEVICE}" && uses_abstraction "${GRUB_DEVICE}" lvm ); then
+  LINUX_ROOT_DEVICE=${GRUB_DEVICE}
+else
+  LINUX_ROOT_DEVICE=UUID=${GRUB_DEVICE_UUID}
+fi
+
+# Allow overriding GRUB_CMDLINE_LINUX and GRUB_CMDLINE_LINUX_DEFAULT.
+if [ "${GRUB_CMDLINE_LINUX_XEN_REPLACE}" ]; then
+  GRUB_CMDLINE_LINUX="${GRUB_CMDLINE_LINUX_XEN_REPLACE}"
+fi
+if [ "${GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT}" ]; then
+  GRUB_CMDLINE_LINUX_DEFAULT="${GRUB_CMDLINE_LINUX_XEN_REPLACE_DEFAULT}"
+fi
+
+case x"$GRUB_FS" in
+    xbtrfs)
+	rootsubvol="`make_system_path_relative_to_its_root /`"
+	rootsubvol="${rootsubvol#/}"
+	if [ "x${rootsubvol}" != x ]; then
+	    GRUB_CMDLINE_LINUX="rootflags=subvol=${rootsubvol} ${GRUB_CMDLINE_LINUX}"
+	fi;;
+    xzfs)
+	rpool=`${grub_probe} --device ${GRUB_DEVICE} --target=fs_label 2>/dev/null || true`
+	bootfs="`make_system_path_relative_to_its_root / | sed -e "s,@$,,"`"
+	LINUX_ROOT_DEVICE="ZFS=${rpool}${bootfs%/}"
+	;;
+esac
+
+title_correction_code=
+
+linux_entry ()
+{
+  os="$1"
+  version="$2"
+  xen_version="$3"
+  type="$4"
+  args="$5"
+  xen_args="$6"
+  if [ -z "$boot_device_id" ]; then
+      boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
+  fi
+  if [ x$type != xsimple ] ; then
+      if [ x$type = xrecovery ] ; then
+	  title="$(gettext_printf "%s, with Xen %s and Linux %s (%s)" "${os}" "${xen_version}" "${version}" "$(gettext "${GRUB_RECOVERY_TITLE}")")"
+      elif [ "${type#init-}" != "$type" ] ; then
+	  title="$(gettext_printf "%s, with Xen %s and Linux %s (%s)" "${os}" "${xen_version}" "${version}" "${type#init-}")"
+      else
+	  title="$(gettext_printf "%s, with Xen %s and Linux %s" "${os}" "${xen_version}" "${version}")"
+      fi
+      replacement_title="$(echo "Advanced options for ${OS}" | sed 's,>,>>,g')>$(echo "$title" | sed 's,>,>>,g')"
+      if [ x"Xen ${xen_version}>$title" = x"$GRUB_ACTUAL_DEFAULT" ]; then
+         quoted="$(echo "$GRUB_ACTUAL_DEFAULT" | grub_quote)"
+         title_correction_code="${title_correction_code}if [ \"x\$default\" = '$quoted' ]; then default='$(echo "$replacement_title" | grub_quote)'; fi;"
+         grub_warn "$(gettext_printf "Please don't use old title \`%s' for GRUB_DEFAULT, use \`%s' (for versions before 2.00) or \`%s' (for 2.00 or later)" "$GRUB_ACTUAL_DEFAULT" "$replacement_title" "gnulinux-advanced-$boot_device_id>gnulinux-$version-$type-$boot_device_id")"
+      fi
+      echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'xen-gnulinux-$version-$type-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
+  else
+      title="$(gettext_printf "%s, with Xen hypervisor" "${os}")"
+      echo "menuentry '$(echo "$title" | grub_quote)' ${CLASS} \$menuentry_id_option 'xen-gnulinux-simple-$boot_device_id' {" | sed "s/^/$submenu_indentation/"
+  fi
+  if [ x$type != xrecovery ] ; then
+      save_default_entry | grub_add_tab | sed "s/^/$submenu_indentation/"
+  fi
+
+  if [ -z "${prepare_boot_cache}" ]; then
+    prepare_boot_cache="$(prepare_grub_to_access_device ${GRUB_DEVICE_BOOT} | grub_add_tab)"
+  fi
+  printf '%s\n' "${prepare_boot_cache}" | sed "s/^/$submenu_indentation/"
+  xmessage="$(gettext_printf "Loading Xen %s ..." ${xen_version})"
+  lmessage="$(gettext_printf "Loading Linux %s ..." ${version})"
+  sed "s/^/$submenu_indentation/" << EOF
+	echo	'$(echo "$xmessage" | grub_quote)'
+        if [ "\$grub_platform" = "pc" -o "\$grub_platform" = "" ]; then
+            xen_rm_opts=
+        else
+            xen_rm_opts="no-real-mode edd=off"
+        fi
+	${xen_loader}	${rel_xen_dirname}/${xen_basename} placeholder ${xen_args} \${xen_rm_opts}
+	echo	'$(echo "$lmessage" | grub_quote)'
+	${module_loader}	${rel_dirname}/${basename} placeholder root=${linux_root_device_thisversion} ro ${args}
+EOF
+  if test -n "${initrd}" ; then
+    # TRANSLATORS: ramdisk isn't identifier. Should be translated.
+    message="$(gettext_printf "Loading initial ramdisk ...")"
+    sed "s/^/$submenu_indentation/" << EOF
+	echo	'$(echo "$message" | grub_quote)'
+	${module_loader}	--nounzip   ${rel_dirname}/${initrd}
+EOF
+  fi
+  sed "s/^/$submenu_indentation/" << EOF
+}
+EOF
+}
+
+linux_list=
+for i in /boot/vmlinu[xz]-* /vmlinu[xz]-* /boot/kernel-*; do
+    if grub_file_is_not_garbage "$i"; then
+    	basename=$(basename $i)
+	version=$(echo $basename | sed -e "s,^[^0-9]*-,,g")
+	dirname=$(dirname $i)
+	config=
+	for j in "${dirname}/config-${version}" "${dirname}/config-${alt_version}" "/etc/kernels/kernel-config-${version}" ; do
+	    if test -e "${j}" ; then
+		config="${j}"
+		break
+	    fi
+	done
+        if (grep -qx "CONFIG_XEN_DOM0=y" "${config}" 2> /dev/null || grep -qx "CONFIG_XEN_PRIVILEGED_GUEST=y" "${config}" 2> /dev/null); then linux_list="$linux_list $i" ; fi
+    fi
+done
+if [ "x${linux_list}" = "x" ] ; then
+    exit 0
+fi
+
+file_is_not_sym () {
+    case "$1" in
+	*/xen-syms-*)
+	    return 1;;
+	*)
+	    return 0;;
+    esac
+}
+
+xen_list=
+for i in /boot/xen*; do
+    if grub_file_is_not_garbage "$i" && file_is_not_sym "$i" ; then xen_list="$xen_list $i" ; fi
+done
+prepare_boot_cache=
+boot_device_id=
+
+title_correction_code=
+
+machine=`uname -m`
+
+case "$machine" in
+    i?86) GENKERNEL_ARCH="x86" ;;
+    mips|mips64) GENKERNEL_ARCH="mips" ;;
+    mipsel|mips64el) GENKERNEL_ARCH="mipsel" ;;
+    arm*) GENKERNEL_ARCH="arm" ;;
+    *) GENKERNEL_ARCH="$machine" ;;
+esac
+
+# Extra indentation to add to menu entries in a submenu. We're not in a submenu
+# yet, so it's empty. In a submenu it will be equal to '\t' (one tab).
+submenu_indentation=""
+
+is_top_level=true
+
+while [ "x${xen_list}" != "x" ] ; do
+    list="${linux_list}"
+    current_xen=`version_find_latest $xen_list`
+    xen_basename=`basename ${current_xen}`
+    xen_dirname=`dirname ${current_xen}`
+    rel_xen_dirname=`make_system_path_relative_to_its_root $xen_dirname`
+    xen_version=`echo $xen_basename | sed -e "s,.gz$,,g;s,^xen-,,g"`
+    if [ -z "$boot_device_id" ]; then
+	boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
+    fi
+    if [ "x$is_top_level" != xtrue ]; then
+	echo "	submenu '$(gettext_printf "Xen hypervisor, version %s" "${xen_version}" | grub_quote)' \$menuentry_id_option 'xen-hypervisor-$xen_version-$boot_device_id' {"
+    fi
+    if ($grub_file --is-x86-multiboot2 $current_xen); then
+	xen_loader="multiboot2"
+	module_loader="module2"
+    else
+	xen_loader="multiboot"
+	module_loader="module"
+    fi
+    while [ "x$list" != "x" ] ; do
+	linux=`version_find_latest $list`
+	gettext_printf "Found linux image: %s\n" "$linux" >&2
+	basename=`basename $linux`
+	dirname=`dirname $linux`
+	rel_dirname=`make_system_path_relative_to_its_root $dirname`
+	version=`echo $basename | sed -e "s,^[^0-9]*-,,g"`
+	alt_version=`echo $version | sed -e "s,\.old$,,g"`
+	linux_root_device_thisversion="${LINUX_ROOT_DEVICE}"
+
+	initrd=
+	for i in "initrd.img-${version}" "initrd-${version}.img" "initrd-${version}.gz" \
+	   "initrd-${version}" "initramfs-${version}.img" \
+	   "initrd.img-${alt_version}" "initrd-${alt_version}.img" \
+	   "initrd-${alt_version}" "initramfs-${alt_version}.img" \
+	   "initramfs-genkernel-${version}" \
+	   "initramfs-genkernel-${alt_version}" \
+	   "initramfs-genkernel-${GENKERNEL_ARCH}-${version}" \
+	   "initramfs-genkernel-${GENKERNEL_ARCH}-${alt_version}" ; do
+	    if test -e "${dirname}/${i}" ; then
+		initrd="$i"
+		break
+	    fi
+	done
+	if test -n "${initrd}" ; then
+	    gettext_printf "Found initrd image: %s\n" "${dirname}/${initrd}" >&2
+	else
+    # "UUID=" magic is parsed by initrds.  Since there's no initrd, it can't work here.
+	    linux_root_device_thisversion=${GRUB_DEVICE}
+	fi
+
+	if [ "x$is_top_level" = xtrue ] && [ "x${GRUB_DISABLE_SUBMENU}" != xy ]; then
+	    linux_entry "${OS}" "${version}" "${xen_version}" simple \
+		"${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}" "${GRUB_CMDLINE_XEN} ${GRUB_CMDLINE_XEN_DEFAULT}"
+
+	    submenu_indentation="$grub_tab$grub_tab"
+    
+	    if [ -z "$boot_device_id" ]; then
+		boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
+	    fi
+            # TRANSLATORS: %s is replaced with an OS name
+	    echo "submenu '$(gettext_printf "Advanced options for %s (with Xen hypervisor)" "${OS}" | grub_quote)' \$menuentry_id_option 'gnulinux-advanced-$boot_device_id' {"
+	echo "	submenu '$(gettext_printf "Xen hypervisor, version %s" "${xen_version}" | grub_quote)' \$menuentry_id_option 'xen-hypervisor-$xen_version-$boot_device_id' {"
+	   is_top_level=false
+	fi
+
+	linux_entry "${OS}" "${version}" "${xen_version}" advanced \
+	    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT}" "${GRUB_CMDLINE_XEN} ${GRUB_CMDLINE_XEN_DEFAULT}"
+	for supported_init in ${SUPPORTED_INITS}; do
+	    init_path="${supported_init#*:}"
+	    if [ -x "${init_path}" ] && [ "$(readlink -f /sbin/init)" != "$(readlink -f "${init_path}")" ]; then
+		linux_entry "${OS}" "${version}" "${xen_version}" "init-${supported_init%%:*}" \
+		    "${GRUB_CMDLINE_LINUX} ${GRUB_CMDLINE_LINUX_DEFAULT} init=${init_path}" "${GRUB_CMDLINE_XEN} ${GRUB_CMDLINE_XEN_DEFAULT}"
+
+	    fi
+	done
+	if [ "x${GRUB_DISABLE_RECOVERY}" != "xtrue" ]; then
+	    linux_entry "${OS}" "${version}" "${xen_version}" recovery \
+		"single ${GRUB_CMDLINE_LINUX}" "${GRUB_CMDLINE_XEN}"
+	fi
+
+	list=`echo $list | tr ' ' '\n' | fgrep -vx "$linux" | tr '\n' ' '`
+    done
+    if [ x"$is_top_level" != xtrue ]; then
+	echo '	}'
+    fi
+    xen_list=`echo $xen_list | tr ' ' '\n' | fgrep -vx "$current_xen" | tr '\n' ' '`
+done
+
+# If at least one kernel was found, then we need to
+# add a closing '}' for the submenu command.
+if [ x"$is_top_level" != xtrue ]; then
+  echo '}'
+fi
+
+echo "$title_correction_code"
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGC-00013V-MX; Fri, 29 May 2020 11:33:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGB-00012k-OE
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:11 +0000
X-Inumbo-ID: 2243513e-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2243513e-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:32:53 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3V-0003xZ-UL; Fri, 29 May 2020 12:20:06 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 35/49] buster: setupboot_grub2: Note what files exist
 in /boot
Date: Fri, 29 May 2020 12:19:31 +0100
Message-Id: <20200529111945.21394-36-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Nothing uses this yet.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
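[Editor's note, not part of the patch: the `cd /boot && echo *` idiom in
the hunk below lists directory entries space-separated on one line, which
the Perl side then splits into the %bootfiles hash.  A hedged sketch of
the idiom with hypothetical file names:]

```shell
# Make a throwaway directory standing in for /boot (names are made up).
demo=$(mktemp -d)
cd "$demo"
touch vmlinuz-4.19 initrd.img-4.19 config-4.19

# "echo *" expands the glob to the sorted entry names, one line,
# space-separated -- exactly what split(/ /, ...) expects.
echo *
```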
 Osstest/Debian.pm | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 9f1ce1df..0386ff7a 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -448,6 +448,11 @@ sub setupboot_grub2 ($$$$) {
         get_host_property($ho, "firmware") eq "uefi" &&
         $ho->{Suite} =~ m/jessie|stretch|buster/ && $ho->{Arch} =~ m/^arm/;
 
+    my %bootfiles =
+	map { $_ => 1 }
+	split / /,
+	target_cmd_output_root($ho, "cd /boot && echo *");
+
     my $parsemenu= sub {
         my $f= bl_getmenu_open($ho, $rmenu, "$stash/$ho->{Name}--grub.cfg.1");
     
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGI-00017I-1F; Fri, 29 May 2020 11:33:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGG-00016T-Pb
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:16 +0000
X-Inumbo-ID: 2379c542-a1a0-11ea-81bc-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2379c542-a1a0-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:32:55 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3X-0003xZ-P8; Fri, 29 May 2020 12:20:08 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 37/49] buster: Extend workaround for dhcpd EROFS bug
Date: Fri, 29 May 2020 12:19:33 +0100
Message-Id: <20200529111945.21394-38-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 51217fb4..49d94b9b 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1624,7 +1624,7 @@ sub debian_dhcp_rofs_fix ($$) {
     # / is still ro.  In stretch, the isc dhcp client spins requesting
     # an address and then sending a DHCPDECLINE (and then, usually,
     # eventually works).
-    return '' unless $ho->{Suite} =~ m/stretch/;
+    return '' unless $ho->{Suite} =~ m/stretch|buster/;
     my $script = "$rootdir/lib/udev/ifupdown-hotplug";
     <<END.<<'ENDQ'.<<END
 set -ex
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGN-0001BU-B7; Fri, 29 May 2020 11:33:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGL-0001AN-Pc
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:21 +0000
X-Inumbo-ID: 256280b0-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 256280b0-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:32:58 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3S-0003xZ-4N; Fri, 29 May 2020 12:20:02 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 19/49] buster: preseed partman-auto-lvm/guided_size
Date: Fri, 29 May 2020 12:19:15 +0100
Message-Id: <20200529111945.21394-20-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Otherwise we get this question:

  | You may use the whole volume group for guided partitioning, or part
  | of it.  [...]
  | Amount of volume group to use for guided partitioning:

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index a328160e..bac33d00 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -938,6 +938,7 @@ d-i partman/choose_partition select finish
 d-i partman/confirm boolean true
 d-i partman-lvm/confirm boolean true
 d-i partman-lvm/device_remove_lvm_span boolean true
+d-i partman-auto-lvm/guided_size string max
 
 d-i partman/confirm_nooverwrite true
 d-i partman-lvm/confirm_nooverwrite true
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGS-0001GS-Kt; Fri, 29 May 2020 11:33:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGQ-0001Ep-QA
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:26 +0000
X-Inumbo-ID: 2696c888-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2696c888-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:33:00 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Y-0003xZ-Qw; Fri, 29 May 2020 12:20:08 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 38/49] ts-xen-install: Add $ho argument to
 some_extradebs
Date: Fri, 29 May 2020 12:19:34 +0100
Message-Id: <20200529111945.21394-39-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This is going to move to Debian.pm.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-xen-install | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/ts-xen-install b/ts-xen-install
index 08b4ea23..6196a890 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -71,8 +71,8 @@ sub packages () {
         if toolstack($ho)->{ExtraPackages};
 }
 
-sub some_extradebs ($) {
-    my ($items) = @_;
+sub some_extradebs ($$) {
+    my ($ho, $items) = @_;
     my $cfgvar = join('_', @$items);
     my $specs = $c{$cfgvar};
     if (!length $specs) {
@@ -111,11 +111,11 @@ sub extradebs () {
     my $suite = $ho->{Suite};
 
     # $c{ DebianExtraPackages_<suite> }
-    some_extradebs([ 'DebianExtraPackages', $suite ]);
+    some_extradebs($ho, [ 'DebianExtraPackages', $suite ]);
 
     # $c{ DebianExtraPackages_<firmware>_<arch>_<suite> }
     my $firmware = get_host_property($ho, "firmware");
-    some_extradebs([ 'DebianExtraPackages', $firmware, $ho->{Arch}, $suite ]);
+    some_extradebs($ho, [ 'DebianExtraPackages', $firmware, $ho->{Arch}, $suite ]);
 }
 
 sub extract () {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGW-0001Kp-Uj; Fri, 29 May 2020 11:33:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGV-0001Jq-PE
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:31 +0000
X-Inumbo-ID: 29472bf4-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29472bf4-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:33:05 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3U-0003xZ-SX; Fri, 29 May 2020 12:20:05 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 32/49] buster: chiark-scripts: Install a new version
 on buster too
Date: Fri, 29 May 2020 12:19:28 +0100
Message-Id: <20200529111945.21394-33-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

We need various fixes that are not in buster, sadly.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/production-config b/production-config
index f0ddc132..e3870d47 100644
--- a/production-config
+++ b/production-config
@@ -107,6 +107,7 @@ TftpGrubVersion XXXX-XX-XX
 
 DebianExtraPackages_jessie chiark-scripts_6.0.3~citrix1_all.deb
 DebianExtraPackages_stretch chiark-scripts_6.0.4~citrix1_all.deb
+DebianExtraPackages_buster chiark-scripts_6.0.5~citrix1_all.deb
 
 DebianExtraPackages_uefi_i386_jessie   extradebs-uefi-i386-2018-04-01/
 DebianExtraPackages_uefi_amd64_jessie  extradebs-uefi-amd64-2018-04-01/
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGc-0001Pe-7w; Fri, 29 May 2020 11:33:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGa-0001OO-PX
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:36 +0000
X-Inumbo-ID: 2b81093a-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b81093a-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:33:08 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3R-0003xZ-5Z; Fri, 29 May 2020 12:20:01 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 15/49] buster: make-hosts-flight: Add to possible
 suites for hosts flight
Date: Fri, 29 May 2020 12:19:11 +0100
Message-Id: <20200529111945.21394-16-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 make-hosts-flight | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/make-hosts-flight b/make-hosts-flight
index 92da1c7c..e2c3776a 100755
--- a/make-hosts-flight
+++ b/make-hosts-flight
@@ -26,7 +26,7 @@ blessing=$4
 buildflight=$5
 
 : ${ALL_ARCHES:=amd64 i386 arm64 armhf}
-: ${ALL_SUITES:=stretch jessie}
+: ${ALL_SUITES:=buster stretch jessie}
 # ^ most preferred suite first
 
 : ${PERHOST_MAXWAIT:=20000} # seconds
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGh-0001Uf-I2; Fri, 29 May 2020 11:33:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGf-0001T4-QP
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:41 +0000
X-Inumbo-ID: 2ef07952-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ef07952-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:33:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3V-0003xZ-5S; Fri, 29 May 2020 12:20:05 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 33/49] buster: Provide TftpDiVersion
Date: Fri, 29 May 2020 12:19:29 +0100
Message-Id: <20200529111945.21394-34-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config | 1 +
 1 file changed, 1 insertion(+)

diff --git a/production-config b/production-config
index e3870d47..6372ac9a 100644
--- a/production-config
+++ b/production-config
@@ -91,6 +91,7 @@ TftpNetbootGroup osstest
 TftpDiVersion_wheezy 2016-06-08
 TftpDiVersion_jessie 2018-06-26
 TftpDiVersion_stretch 2020-02-10
+TftpDiVersion_buster 2020-05-19
 
 DebianSnapshotBackports_jessie http://snapshot.debian.org/archive/debian/20190206T211314Z/
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGm-0001Zk-SU; Fri, 29 May 2020 11:33:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGk-0001Xz-Q0
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:46 +0000
X-Inumbo-ID: 301a01b8-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 301a01b8-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:33:16 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3S-0003xZ-K0; Fri, 29 May 2020 12:20:02 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 21/49] buster: Extend ARM clock workaround
Date: Fri, 29 May 2020 12:19:17 +0100
Message-Id: <20200529111945.21394-22-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index bac33d00..71167351 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -248,7 +248,7 @@ END
 	push @xenkopt, $xenkopt;
 	# https://bugs.xenproject.org/xen/bug/45
 	push @xenkopt, "clk_ignore_unused"
-	    if $ho->{Suite} =~ m/wheezy|jessie|stretch/;
+	    if $ho->{Suite} =~ m/wheezy|jessie|stretch|buster/;
 
 	$xenkopt = join ' ', @xenkopt;
 	logm("Dom0 Linux options: $xenkopt");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGr-0001e1-5p; Fri, 29 May 2020 11:33:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGp-0001co-QA
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:51 +0000
X-Inumbo-ID: 32caa9a8-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 32caa9a8-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:33:21 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3T-0003xZ-52; Fri, 29 May 2020 12:20:03 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 23/49] Honour DebianImageFile_SUITE_ARCH
Date: Fri, 29 May 2020 12:19:19 +0100
Message-Id: <20200529111945.21394-24-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This lets us specify the whole filename, not just a version.
This is needed because for buster we are going to use
   debian-10.2.0-ARCH-xfce-CD-1.iso
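The lookup order this patch establishes, an explicit environment override first, then a per-suite/per-arch file name from the config, then a version-derived name, can be sketched as a small self-contained shell function. Note this is a hedged illustration, not the real osstest code: `getconfig` below is a hypothetical stand-in that reads `CONF_*` environment variables, and the composed fallback file name is illustrative only.

```shell
# Sketch of the image-file lookup order (hypothetical stand-ins for
# osstest's config machinery; not the real implementation).
getconfig () {
    # Stand-in: read a config key from an environment variable CONF_<key>.
    eval "printf '%s' \"\${CONF_$1}\""
}

usual_debianhvm_image () {
    local arch=$1 guestsuite=$2
    # 1. An explicit file name from the environment wins outright.
    if [ -n "$DEBIAN_IMAGE_FILE" ]; then
        echo "$DEBIAN_IMAGE_FILE"
        return
    fi
    # 2. Next, a per-suite, per-arch file name from the config.
    local file; file=$(getconfig "DebianImageFile_${guestsuite}_${arch}")
    if [ -n "$file" ]; then
        echo "$file"
        return
    fi
    # 3. Otherwise, compose a name from the configured version
    #    (illustrative naming scheme only).
    local ver; ver=$(getconfig "DebianImageVersion_${guestsuite}")
    echo "debian-${ver}-${arch}-CD-1.iso"
}
```

This mirrors why the patch inserts the new checks before the existing version-based logic: earlier, more specific sources short-circuit the later, more generic ones.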

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 mfi-common | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/mfi-common b/mfi-common
index b40f057e..640cf328 100644
--- a/mfi-common
+++ b/mfi-common
@@ -522,6 +522,15 @@ job_create_test () {
 
 usual_debianhvm_image () {
   local arch=$1; shift
+  if [ -n "$DEBIAN_IMAGE_FILE" ]; then
+      echo $DEBIAN_IMAGE_FILE
+      return
+  fi
+  local file=`getconfig DebianImageFile_${guestsuite}_${arch}`
+  if [ -n "$file " ]; then
+      echo $file
+      return
+  fi
   local ver=$DEBIAN_IMAGE_VERSION
   if [ -z "$ver" ] ; then
       ver=`getconfig DebianImageVersion_$guestsuite`
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:33:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:33:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedGv-0001iK-Fb; Fri, 29 May 2020 11:33:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGu-0001hk-Pg
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:33:56 +0000
X-Inumbo-ID: 351ec9dc-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 351ec9dc-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:33:25 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Q-0003xZ-VG; Fri, 29 May 2020 12:20:01 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 14/49] Honour 'LinuxSerialConsole <suite>' host
 property
Date: Fri, 29 May 2020 12:19:10 +0100
Message-Id: <20200529111945.21394-15-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This works like LinuxSerialConsole.

I originally wrote this to try to work around #940028, where multiple
d-i autoinstalls run in parallel, leading to hard-to-debug lossage.
Explicitly specifying the console makes the installer run on only that
one.

However, it turns out that explicitly specifying the console does not
always work, so a better fix is needed.  Nevertheless, having added
this feature, it seems foolish to throw it away.

Currently there are no hosts with this property, so there is no
functional change.
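The two-level lookup this patch introduces (the suite-qualified property first, falling back to the plain property with a ttyS0 default) can be sketched in shell. This is an assumption-laden illustration: the real code is Perl in Osstest/TestSupport.pm, and `get_host_property` here is a hypothetical stand-in that reads environment variables with spaces mapped to underscores.

```shell
# Sketch of the suite-qualified property fallback (hypothetical
# get_host_property stand-in; the real osstest code is Perl and also
# appends baud settings, omitted here).
get_host_property () {
    # Stand-in: host properties are env vars, spaces become underscores;
    # $2 is an optional default.
    key=$(printf '%s' "$1" | tr ' ' '_')
    val=$(eval "printf '%s' \"\${$key}\"")
    printf '%s' "${val:-$2}"
}

native_linux_console () {
    suite=$1
    # Prefer the suite-qualified property, e.g. "LinuxSerialConsole buster"...
    console=$(get_host_property "LinuxSerialConsole $suite")
    # ...and fall back to the plain property, defaulting to ttyS0.
    [ -n "$console" ] || console=$(get_host_property LinuxSerialConsole ttyS0)
    echo "$console"
}
```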

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/TestSupport.pm | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Osstest/TestSupport.pm b/Osstest/TestSupport.pm
index ff8103f2..7eeac49f 100644
--- a/Osstest/TestSupport.pm
+++ b/Osstest/TestSupport.pm
@@ -1447,7 +1447,10 @@ sub get_target_property ($$;$) {
 sub get_host_native_linux_console ($) {
     my ($ho) = @_;
 
-    my $console = get_host_property($ho, "LinuxSerialConsole", "ttyS0");
+    my $console;
+    $console //= get_host_property($ho, "LinuxSerialConsole $ho->{Suite}")
+	if $ho->{Suite};
+    $console //= get_host_property($ho, "LinuxSerialConsole", "ttyS0");
     return $console if $console eq 'NONE';
 
     return "$console,$c{Baud}n8";
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedH1-0001or-Os; Fri, 29 May 2020 11:34:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedGz-0001n4-R4
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:01 +0000
X-Inumbo-ID: 3739ae80-a1a0-11ea-81bc-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3739ae80-a1a0-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:33:28 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3R-0003xZ-Lg; Fri, 29 May 2020 12:20:01 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 17/49] buster: ts-host-install: Extend net.ifnames
 workaround
Date: Fri, 29 May 2020 12:19:13 +0100
Message-Id: <20200529111945.21394-18-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Really we should fix this by making a .deb in Debian that we could
install.  But this is a longer-term project.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-host-install | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/ts-host-install b/ts-host-install
index 7a72a867..fe26f70f 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -282,7 +282,7 @@ END
 
     # Don't use "Predictable Network Interface Names"
     # https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/
-    push @hocmdline, "net.ifnames=0" if $ho->{Suite} =~ m/stretch/;
+    push @hocmdline, "net.ifnames=0" if $ho->{Suite} =~ m/stretch|buster/;
 
     push @hocmdline,
         get_host_property($ho, "linux-boot-append $ho->{Suite}", ''),
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedH6-0001tf-2a; Fri, 29 May 2020 11:34:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedH4-0001sE-QP
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:06 +0000
X-Inumbo-ID: 386a69a2-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 386a69a2-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:33:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3T-0003xZ-Bl; Fri, 29 May 2020 12:20:03 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 24/49] buster: Specify DebianImageFile_SUITE_ARCH
Date: Fri, 29 May 2020 12:19:20 +0100
Message-Id: <20200529111945.21394-25-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/production-config b/production-config
index 103b8915..f0ddc132 100644
--- a/production-config
+++ b/production-config
@@ -98,6 +98,9 @@ DebianSnapshotBackports_jessie http://snapshot.debian.org/archive/debian/2019020
 DebianImageVersion_wheezy 7.2.0
 DebianImageVersion_jessie 8.2.0
 DebianImageVersion_stretch 9.4.0
+DebianImageFile_buster_amd64 debian-10.2.0-amd64-xfce-CD-1.iso
+DebianImageFile_buster_i386 debian-10.2.0-i386-xfce-CD-1.iso
+
 
 # Update with ./mg-netgrub-loader-update
 TftpGrubVersion XXXX-XX-XX
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHA-0001zK-J3; Fri, 29 May 2020 11:34:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedH9-0001yU-Qy
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:11 +0000
X-Inumbo-ID: 39b706b2-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39b706b2-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:33:32 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Q-0003xZ-5r; Fri, 29 May 2020 12:20:00 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 11/49] ts-debian-di-install: Provide guest with more
 RAM
Date: Fri, 29 May 2020 12:19:07 +0100
Message-Id: <20200529111945.21394-12-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

buster cannot boot in so little RAM because its initramfs and kernel
are too large.  Bump it to 2G.

However, our armhf test nodes have very little RAM, and Debian armhf
still fits in them as a guest, so use a smaller value there.

Keying this off the architecture rather than the available host memory
is better: the bigger value is needed precisely when the guest is not
armhf, and this makes osstest less dependent on a completely accurate
and fully populated host properties database.
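The arch-keyed sizing described above amounts to a simple two-way table; a minimal sketch, with the values taken from this patch and a hypothetical function name:

```shell
# Sketch of the arch-keyed guest RAM sizing from this patch.
guest_ram_mb () {
    case $1 in
        armhf*) echo 768 ;;   # armhf hosts have little RAM; guest still fits
        *)      echo 2048 ;;  # buster's initramfs+kernel need more to boot
    esac
}
```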

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-debian-di-install | 2 +-
 ts-debian-fixup      | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/ts-debian-di-install b/ts-debian-di-install
index 9abb4956..d84407cf 100755
--- a/ts-debian-di-install
+++ b/ts-debian-di-install
@@ -64,7 +64,7 @@ $gn ||= 'debian';
 
 our $ho= selecthost($whhost);
 
-our $ram_mb=    512;
+our $ram_mb= $r{arch} =~ m/^armhf/ ? 768 : 2048;
 our $disk_mb= 10000;
 
 our $guesthost= $gn.
diff --git a/ts-debian-fixup b/ts-debian-fixup
index dfeb4d39..528fb03b 100755
--- a/ts-debian-fixup
+++ b/ts-debian-fixup
@@ -104,7 +104,7 @@ sub console () {
 
 sub randomseed () {
     my $cmd = debian_write_random_seed_command($mountpoint);
-    target_cmd_root($ho, "set -ex\n".cmd);
+    target_cmd_root($ho, "set -ex\n".$cmd);
 }
 
 sub filesystems () {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHF-00024x-UG; Fri, 29 May 2020 11:34:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHE-00023s-R0
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:16 +0000
X-Inumbo-ID: 3af54a34-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3af54a34-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:33:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3S-0003xZ-US; Fri, 29 May 2020 12:20:03 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 22/49] buster: Extend guest bootloader workaround
Date: Fri, 29 May 2020 12:19:18 +0100
Message-Id: <20200529111945.21394-23-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 71167351..3fc9e555 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1064,7 +1064,7 @@ END
     logm("\$arch is $arch, \$suite is $suite");
     if ($xopts{PvMenuLst} &&
 	$arch =~ /^arm/ &&
-	$suite =~ /wheezy|jessie|stretch|sid/ ) {
+	$suite =~ /wheezy|jessie|stretch|buster|sid/ ) {
 
 	# Debian doesn't currently know what bootloader to install in
 	# a Xen guest on ARM. We install pv-grub-menu above which
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHK-00029p-97; Fri, 29 May 2020 11:34:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHJ-00029M-Rg
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:21 +0000
X-Inumbo-ID: 3e9d5140-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e9d5140-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:33:41 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3b-0003xZ-Mx; Fri, 29 May 2020 12:20:11 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 49/49] buster: Switch to Debian buster as the default
 suite
Date: Fri, 29 May 2020 12:19:45 +0100
Message-Id: <20200529111945.21394-50-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest.pm b/Osstest.pm
index 1e381d8f..63dddd95 100644
--- a/Osstest.pm
+++ b/Osstest.pm
@@ -87,7 +87,7 @@ our %c = qw(
 
     Images images
 
-    DebianSuite stretch
+    DebianSuite buster
     DebianMirrorSubpath debian
 
     TestHostKeypairPath id_rsa_osstest
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHP-0002Fi-K3; Fri, 29 May 2020 11:34:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHO-0002Eo-Qn
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:26 +0000
X-Inumbo-ID: 3fffa68c-a1a0-11ea-81bc-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3fffa68c-a1a0-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:33:43 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3b-0003xZ-Ey; Fri, 29 May 2020 12:20:11 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 48/49] buster 20_linux_xen: Only load policy in
 XSM-enabled builds
Date: Fri, 29 May 2020 12:19:44 +0100
Message-Id: <20200529111945.21394-49-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-buster/etc/grub.d/20_linux_xen | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/overlay-buster/etc/grub.d/20_linux_xen b/overlay-buster/etc/grub.d/20_linux_xen
index 4d3294a2..6f2a98ba 100755
--- a/overlay-buster/etc/grub.d/20_linux_xen
+++ b/overlay-buster/etc/grub.d/20_linux_xen
@@ -159,7 +159,7 @@ EOF
 	${module_loader}	--nounzip   ${rel_dirname}/${initrd}
 EOF
   fi
-  if test -n "${xenpolicy}" ; then
+  if ${xsm} && test -n "${xenpolicy}" ; then
     message="$(gettext_printf "Loading XSM policy ...")"
     sed "s/^/$submenu_indentation/" << EOF
 	echo	'$(echo "$message" | grub_quote)'
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHU-0002MD-Us; Fri, 29 May 2020 11:34:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHT-0002L4-Qg
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:31 +0000
X-Inumbo-ID: 43e1ff52-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43e1ff52-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:33:49 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3U-0003xZ-A8; Fri, 29 May 2020 12:20:04 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 29/49] mg-debian-installer-update: support
 overlay-initrd-SUITE
Date: Fri, 29 May 2020 12:19:25 +0100
Message-Id: <20200529111945.21394-30-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This lets us patch the installer more easily.

No uses yet.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 mg-debian-installer-update | 20 ++++++++++++++++++++
 1 file changed, 20 insertions(+)

diff --git a/mg-debian-installer-update b/mg-debian-installer-update
index f1e682f9..fb4fe2ab 100755
--- a/mg-debian-installer-update
+++ b/mg-debian-installer-update
@@ -33,6 +33,8 @@ sbase=$site/dists/$suite
 
 src=$sbase/main/installer-$arch/current/images/netboot/
 
+osstest_dir="$(pwd)"
+
 case ${suite}_${arch} in
     lenny_armhf|squeeze_armhf|lenny_arm64|squeeze_arm64|wheezy_arm64)
         # No such thing.
@@ -188,6 +190,24 @@ if [ "x$specialkernel" != x ]; then
     rm -rf x
 fi
 
+overlay_initrd=$osstest_dir/overlay-initrd-$suite
+if [ -e "$overlay_initrd" ]; then
+    for f in $files; do
+        s=${f/:*} ; d=${f/*:}
+        case "$d" in
+            *initrd*)
+                echo "adding $overlay_initrd to $d"
+                (
+                    set -e
+                    cd "$overlay_initrd"
+                    find -print0 | cpio -0 -Hnewc -o \
+                        | gzip -9nf
+                ) >>$d.new
+                ;;
+        esac
+    done
+fi
+
 for f in $files; do
         s=${f/:*} ; d=${f/*:}
         mv -f $d.new $d
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHa-0002S8-8Q; Fri, 29 May 2020 11:34:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHY-0002Qo-Qx
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:36 +0000
X-Inumbo-ID: 45167a06-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 45167a06-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:33:51 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3a-0003xZ-3W; Fri, 29 May 2020 12:20:10 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 43/49] setupboot_grub2: Insist on space after
 directives
Date: Fri, 29 May 2020 12:19:39 +0100
Message-Id: <20200529111945.21394-44-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

These parsing regexps were all wrong!
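As an illustration (not from the patch; assumes GNU grep's `-P` PCRE mode):
with `\s*` after a directive, a malformed line with no separator at all still
matches, whereas `\s+` insists on the space that grub syntax requires:

```shell
# Hypothetical malformed grub.cfg line: directive glued to its argument.
bad='chainloader/EFI/osstest/xen.efi'
star=no; plus=no
if echo "$bad" | grep -Pq '^\s*chainloader\s*/EFI/osstest/xen.efi'; then
    star=yes   # \s* happily matches zero spaces
fi
if echo "$bad" | grep -Pq '^\s*chainloader\s+/EFI/osstest/xen.efi'; then
    plus=yes   # \s+ demands at least one space, so this stays "no"
fi
echo "star=$star plus=$plus"
```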

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 2d30b3e9..a20569e5 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -529,17 +529,17 @@ sub setupboot_grub2 ($$$$) {
             if (m/^\s*submenu\s+[\'\"](.*)[\'\"].*\{\s*$/) {
                 push @offsets,(0);
             }
-            if (m/^\s*chainloader\s*\/EFI\/osstest\/xen.efi/) {
+            if (m/^\s*chainloader\s+\/EFI\/osstest\/xen.efi/) {
                 die unless $entry;
                 $entry->{Hv}= $1;
                 $entry->{Chainload} = 1;
             }
-            if (m/^\s*multiboot2?\s*(?:\/boot)?\/(xen\-[0-9][-+.0-9a-z]*\S+)/) {
+            if (m/^\s*multiboot2?\s+(?:\/boot)?\/(xen\-[0-9][-+.0-9a-z]*\S+)/) {
                 die unless $entry;
                 $entry->{Hv}= $1;
                 $entry->{Chainload} = 0;
             }
-            if (m/^\s*multiboot2?\s*(?:\/boot)?\/(vmlinu[xz]-(\S+))\s+(.*)/) {
+            if (m/^\s*multiboot2?\s+(?:\/boot)?\/(vmlinu[xz]-(\S+))\s+(.*)/) {
                 die unless $entry;
                 $entry->{KernOnly}= $1;
                 $entry->{KernVer}= $2;
@@ -551,7 +551,7 @@ sub setupboot_grub2 ($$$$) {
                 $entry->{KernVer}= $2;
                 $entry->{KernOpts}= $3;
             }
-            if (m/^\s*module2?\s*(?:\/boot)?\/(initrd\S+)/) {
+            if (m/^\s*module2?\s+(?:\/boot)?\/(initrd\S+)/) {
                 $entry->{Initrd}= $1;
             }
 	    if (m/^\s*module2?\s*\/(xenpolicy\S+)/) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHe-0002XQ-IR; Fri, 29 May 2020 11:34:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHd-0002WP-RE
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:41 +0000
X-Inumbo-ID: 4687f11c-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4687f11c-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:33:54 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3U-0003xZ-MT; Fri, 29 May 2020 12:20:04 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 31/49] overlay-initrd-buster/sbin/reopen-console: Fix
 #932416
Date: Fri, 29 May 2020 12:19:27 +0100
Message-Id: <20200529111945.21394-32-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This bug affects us.  Cherry pick the changes to the relevant file
from the commit in the upstream debian-installer repo:

  https://salsa.debian.org/installer-team/rootskel/commit/0ee43d05b83f8ef5a856f3282e002a111809cef9

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-initrd-buster/sbin/reopen-console | 36 +++++++++++++++++++++--
 1 file changed, 34 insertions(+), 2 deletions(-)

diff --git a/overlay-initrd-buster/sbin/reopen-console b/overlay-initrd-buster/sbin/reopen-console
index dd354deb..13b15a33 100755
--- a/overlay-initrd-buster/sbin/reopen-console
+++ b/overlay-initrd-buster/sbin/reopen-console
@@ -16,6 +16,17 @@ NL="
 LOGGER_UP=0
 LOG_FILE=/var/log/reopen-console
 
+# If we're running with preseeding, we have a problem with running d-i
+# on multiple consoles. We'll end up running each of those d-i
+# instances in parallel with all kinds of hilarious undefined
+# behaviour as they trip over each other! If we detect that we're
+# preseeding (via any of the possible preseed methods), DO NOT run d-i
+# multiple times. Instead, fall back to the older, more simple
+# behaviour and run it once. If the user wants to see or interact with
+# their preseed on a specific console, they get to tell us which one
+# they want to use.
+PRESEEDING=0
+
 log() {
 	# In very early startup we don't have syslog. Log to file that
 	# we can flush out later so we can at least see what happened
@@ -32,6 +43,20 @@ flush_logger () {
 	rm $LOG_FILE
 }
 
+# If we have a preseed.cfg in the initramfs
+if [ -e /preseed.cfg ]; then
+    log "Found /preseed.cfg; falling back to simple mode for preseeding"
+    PRESEEDING=1
+fi
+
+# Have we been told to do preseeding stuff on the boot command line?
+for WORD in auto url; do
+    if (grep -qw "$WORD" /proc/cmdline); then
+	log "Found \"$WORD\" in the command line; falling back to simple mode for preseeding"
+	PRESEEDING=1
+    fi
+done
+
 consoles=
 preferred=
 # Retrieve all enabled consoles from kernel; ignore those
@@ -44,7 +69,7 @@ do
 	status=$(echo "$kernelconsoles" | grep $cons | sed -n -r -e 's/(^.*) *.*\((.*)\).*$/\2/p' )
 	if [ -e "/dev/$cons" ] && [ $(echo "$status" | grep -o 'E') ]; then
 		consoles="${consoles:+$consoles$NL}$cons"
-		log "   Adding $cons to consoles list"
+		log "   Adding $cons to possible consoles list"
 	fi
 	# 'C' console is 'most prefered'.
 	if [ $(echo "$status" | grep -o 'C') ]; then
@@ -64,6 +89,13 @@ if [ -z "$preferred" ]; then
 	log "Found no preferred console. Picking $preferred"
 fi
 
+# If we're preseeding, do simple stuff here (see above). We just
+# want one console. Let's pick the preferred one ONLY
+if [ $PRESEEDING = 1 ]; then
+    log "Running with preseeding. Picking preferred $preferred ONLY"
+    consoles=$preferred
+fi
+
 for cons in $consoles
 do
 	echo "/dev/$cons " >> /var/run/console-devices
@@ -88,7 +120,7 @@ LOGGER_UP=1
 flush_logger
 
 # Finally restart init to run debian-installer on discovered consoles
-log "Restarting init to start d-i on the consoles we found"
+log "Restarting init to start d-i on the console(s) we found"
 kill -HUP 1
 
 exit 0
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHj-0002d0-Sy; Fri, 29 May 2020 11:34:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHi-0002bw-SK
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:46 +0000
X-Inumbo-ID: 47b7cab2-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47b7cab2-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:33:56 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3b-0003xZ-5y; Fri, 29 May 2020 12:20:11 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 47/49] setupboot_grub2: Do not boot with XSM policy
 etc. unless xsm=1
Date: Fri, 29 May 2020 12:19:43 +0100
Message-Id: <20200529111945.21394-48-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This prevents us from passing an XSM policy file, and
`flask=enforcing', in supposedly-non-XSM tests.

These bootloader entries can appear because the Xen upstream build
ships XSM policy files by default even if XSM is disabled in the
hypervisor, causing update-grub to generate useless `XSM enabled'
entries.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index c18bf718..b140ede2 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -499,6 +499,9 @@ sub setupboot_grub2 ($$$$) {
 		} elsif ($want_xsm && !defined $entry->{Xenpolicy}) {
 		    logm("(skipping entry at $entry->{StartLine}..$.;".
 			 " XSM policy file not mentioned)");
+		} elsif (!$want_xsm && defined $entry->{Xenpolicy}) {
+		    logm("(skipping entry at $entry->{StartLine}..$.;".
+			 " XSM policy file, but we don't want XSM)");
 		} elsif ($ho->{Suite} =~ m/buster/ &&
 			 defined $entry->{Xenpolicy} &&
 			 !$bootfiles{
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHp-0002ir-6q; Fri, 29 May 2020 11:34:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHn-0002hE-RR
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:51 +0000
X-Inumbo-ID: 4990ebde-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4990ebde-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:33:59 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Z-0003xZ-Qa; Fri, 29 May 2020 12:20:09 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 42/49] buster: Install own linux-libc-dev package (!)
Date: Fri, 29 May 2020 12:19:38 +0100
Message-Id: <20200529111945.21394-43-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

As reported here:
  https://patchew.org/QEMU/20200513120147.21443-1-f4bug@amsat.org/
  https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=960271
the kernel has broken the build of upstream qemu.  This made it
into a Debian stable kernel update.  This breaks our CI runs almost
completely when they run with buster.

I spoke to the Debian kernel folks and apparently there is no intent
to fast-track a fix to this.  So instead I have made a kernel source
package with the patch from that bug report, and built the
linux-libc-dev package from it.  The source is here for the time
being:
  https://www.chiark.greenend.org.uk/~ijackson/quicksand/2020/libc-kernel-bug.960271/

Deployment note: the source and linux-libc-dev_*.deb are in the
images directory on osstest@test-lab.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 production-config | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/production-config b/production-config
index 6372ac9a..7a8c0fd3 100644
--- a/production-config
+++ b/production-config
@@ -108,7 +108,7 @@ TftpGrubVersion XXXX-XX-XX
 
 DebianExtraPackages_jessie chiark-scripts_6.0.3~citrix1_all.deb
 DebianExtraPackages_stretch chiark-scripts_6.0.4~citrix1_all.deb
-DebianExtraPackages_buster chiark-scripts_6.0.5~citrix1_all.deb
+DebianExtraPackages_buster chiark-scripts_6.0.5~citrix1_all.deb libc-kernel-bug.960271/linux-libc-dev_4.19.118-2.0iwj_.deb
 
 DebianExtraPackages_uefi_i386_jessie   extradebs-uefi-i386-2018-04-01/
 DebianExtraPackages_uefi_amd64_jessie  extradebs-uefi-amd64-2018-04-01/
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:34:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHu-0002oK-I7; Fri, 29 May 2020 11:34:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHs-0002md-SC
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:34:56 +0000
X-Inumbo-ID: 4ac56d86-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ac56d86-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:34:01 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Z-0003xZ-CA; Fri, 29 May 2020 12:20:09 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 40/49] Debian.pm: Break out standard_extradebs
Date: Fri, 29 May 2020 12:19:36 +0100
Message-Id: <20200529111945.21394-41-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Refactor this out of ts-xen-install.  We are going to run it in
ts-host-install.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 7 ++++++-
 ts-xen-install    | 3 +--
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index d51ac493..60393ca9 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -51,7 +51,6 @@ BEGIN {
                       di_vg_name
                       debian_dhcp_rofs_fix
 		      debian_write_random_seed_command
-		      some_extradebs
                       );
     %EXPORT_TAGS = ( );
 
@@ -1684,4 +1683,10 @@ sub some_extradebs ($$) {
     }
 }
 
+sub standard_extradebs ($) {
+    my ($ho) = @_;
+    # $c{ DebianExtraPackages_<suite> }
+    some_extradebs($ho, [ 'DebianExtraPackages', $ho->{Suite} ]);
+}
+
 1;
diff --git a/ts-xen-install b/ts-xen-install
index d67cd121..965fd519 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -74,8 +74,7 @@ sub packages () {
 sub extradebs () {
     my $suite = $ho->{Suite};
 
-    # $c{ DebianExtraPackages_<suite> }
-    some_extradebs($ho, [ 'DebianExtraPackages', $suite ]);
+    standard_extradebs($ho);
 
     # $c{ DebianExtraPackages_<firmware>_<arch>_<suite> }
     my $firmware = get_host_property($ho, "firmware");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedHz-0002ta-0Y; Fri, 29 May 2020 11:35:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedHx-0002sM-SP
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:01 +0000
X-Inumbo-ID: 4c099578-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c099578-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:34:03 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Z-0003xZ-JO; Fri, 29 May 2020 12:20:09 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 41/49] Debian.pm: Move standard_extradebs to
 ts-host-install
Date: Fri, 29 May 2020 12:19:37 +0100
Message-Id: <20200529111945.21394-42-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This makes it affect builds on Debian, too.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 1 +
 ts-host-install   | 2 ++
 ts-xen-install    | 2 --
 3 files changed, 3 insertions(+), 2 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 60393ca9..2d30b3e9 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -51,6 +51,7 @@ BEGIN {
                       di_vg_name
                       debian_dhcp_rofs_fix
 		      debian_write_random_seed_command
+		      some_extradebs standard_extradebs
                       );
     %EXPORT_TAGS = ( );
 
diff --git a/ts-host-install b/ts-host-install
index 253dbb5d..924c1e06 100755
--- a/ts-host-install
+++ b/ts-host-install
@@ -146,6 +146,8 @@ END
             qw(amd64-microcode intel-microcode));
     }
 
+    standard_extradebs($ho);
+
     my $ntpserver = get_target_property($ho, 'NtpServer');
     if ($ntpserver) {
 	target_editfile_root($ho, '/etc/ntp.conf', sub {
diff --git a/ts-xen-install b/ts-xen-install
index 965fd519..5d4f8b0d 100755
--- a/ts-xen-install
+++ b/ts-xen-install
@@ -74,8 +74,6 @@ sub packages () {
 sub extradebs () {
     my $suite = $ho->{Suite};
 
-    standard_extradebs($ho);
-
     # $c{ DebianExtraPackages_<firmware>_<arch>_<suite> }
     my $firmware = get_host_property($ho, "firmware");
     some_extradebs($ho, [ 'DebianExtraPackages', $firmware, $ho->{Arch}, $suite ]);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedI4-0002yG-AP; Fri, 29 May 2020 11:35:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedI2-0002xD-Si
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:06 +0000
X-Inumbo-ID: 4e8bf2aa-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e8bf2aa-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:34:07 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Q-0003xZ-Bz; Fri, 29 May 2020 12:20:00 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 12/49] Debian: preseed: use priority= alias
Date: Fri, 29 May 2020 12:19:08 +0100
Message-Id: <20200529111945.21394-13-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This marginally reduces command line clutter.  This alias has been
supported approximately forever.  (And this code is currently only
used when DebconfPriority is set, which it generally isn't.)

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 8ccacc79..345aff68 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -702,7 +702,7 @@ sub di_installcmdline_core ($$;@) {
                );
 
     my $debconf_priority= $xopts{DebconfPriority};
-    push @cl, "debconf/priority=$debconf_priority"
+    push @cl, "priority=$debconf_priority"
         if defined $debconf_priority;
     push @cl, "rescue/enable=true" if $xopts{RescueMode};
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedI9-00033H-J7; Fri, 29 May 2020 11:35:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedI7-00031l-SK
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:11 +0000
X-Inumbo-ID: 5121d66a-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5121d66a-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:34:12 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3P-0003xZ-UC; Fri, 29 May 2020 12:20:00 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 10/49] Debian guests made with xen-tools: Write
 systemd random seed file
Date: Fri, 29 May 2020 12:19:06 +0100
Message-Id: <20200529111945.21394-11-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

When the Debian guest is not made with d-i, we must still provide this
random seed file.  This can be done in ts-debian-fixup.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 ts-debian-fixup | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/ts-debian-fixup b/ts-debian-fixup
index fef9836e..dfeb4d39 100755
--- a/ts-debian-fixup
+++ b/ts-debian-fixup
@@ -102,6 +102,11 @@ sub console () {
     logm("extra: $extra");
 }
 
+sub randomseed () {
+    my $cmd = debian_write_random_seed_command($mountpoint);
+    target_cmd_root($ho, "set -ex\n".$cmd);
+}
+
 sub filesystems () {
     my $rootdev= $r{"$gho->{Guest}_rootdev"};
     return unless defined($rootdev) && length($rootdev);
@@ -215,6 +220,7 @@ END
 target_cmd_root($ho, debian_dhcp_rofs_fix($ho, $mountpoint));
 
 console();
+randomseed();
 filesystems();
 otherfixupcfg();
 writecfg();
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedID-00037g-Ta; Fri, 29 May 2020 11:35:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedIC-00036a-T8
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:16 +0000
X-Inumbo-ID: 525bdc06-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 525bdc06-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:34:14 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3R-0003xZ-EW; Fri, 29 May 2020 12:20:01 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 16/49] buster: Extend grub2 uefi no install workaround
Date: Fri, 29 May 2020 12:19:12 +0100
Message-Id: <20200529111945.21394-17-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

src:grub2 is RFH (Request For Help) in Debian, which is a contributory
factor in the patches in #789798 and #792547 languishing.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 7b311a14..9b4ef967 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -1466,7 +1466,7 @@ d-i partman-auto/expert_recipe string					\\
 END
 
     if (get_host_property($ho, "firmware") eq "uefi") {
-	die unless $ho->{Suite} =~ m/jessie|stretch/;
+	die unless $ho->{Suite} =~ m/jessie|stretch|buster/;
 	# Prevent grub-install from making a new Debian boot entry, so
 	# we always reboot from the network. Debian bug #789798 proposes a
 	# properly preseedable solution to this.
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedIJ-0003DJ-8W; Fri, 29 May 2020 11:35:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedIH-0003By-Ss
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:21 +0000
X-Inumbo-ID: 539bf114-a1a0-11ea-9dbe-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 539bf114-a1a0-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:34:16 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3a-0003xZ-Kc; Fri, 29 May 2020 12:20:10 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 45/49] setupboot_grub2: Recognise --nounzip for
 initramfs
Date: Fri, 29 May 2020 12:19:41 +0100
Message-Id: <20200529111945.21394-46-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Modern versions of update-grub like to add this flag.  We need to
recognise it so that under EFI we generate the right entries in xen.cfg.
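The change widens the menu-parsing regexp so that any number of leading `--nounzip` flags are skipped before the initrd path is captured. The original is Perl; a minimal Python sketch of the before/after behaviour (the sample grub.cfg lines are illustrative):

```python
import re

# Hypothetical grub.cfg menu lines, as update-grub might emit them.
old_style = "        module2 /boot/initrd.img-4.19.0-9-amd64"
new_style = "        module2 --nounzip /boot/initrd.img-4.19.0-9-amd64"

# Pattern before the patch: no allowance for --nounzip.
before = re.compile(r"^\s*module2?\s+(?:/boot)?/(initrd\S+)")
# Pattern after the patch: skip any number of --nounzip flags.
after = re.compile(r"^\s*module2?\s+(?:--nounzip\s+)*(?:/boot)?/(initrd\S+)")

assert before.match(old_style).group(1) == "initrd.img-4.19.0-9-amd64"
assert before.match(new_style) is None   # the flag defeats the old pattern
assert after.match(new_style).group(1) == "initrd.img-4.19.0-9-amd64"
```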

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 615047cb..de53c1ac 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -551,7 +551,7 @@ sub setupboot_grub2 ($$$$) {
                 $entry->{KernVer}= $2;
                 $entry->{KernOpts}= $3;
             }
-            if (m/^\s*module2?\s+(?:\/boot)?\/(initrd\S+)/) {
+            if (m/^\s*module2?\s+(?:--nounzip\s+)*(?:\/boot)?\/(initrd\S+)/) {
                 $entry->{Initrd}= $1;
             }
 	    if (m/^\s*module2?\s*\/(xenpolicy\S+)/) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedIO-0003Jp-Hm; Fri, 29 May 2020 11:35:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedIM-0003HT-T9
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:26 +0000
X-Inumbo-ID: 54de43ec-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 54de43ec-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:34:18 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3Q-0003xZ-N1; Fri, 29 May 2020 12:20:00 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 13/49] Debian: Specify `priority=critical' rather than
 locale
Date: Fri, 29 May 2020 12:19:09 +0100
Message-Id: <20200529111945.21394-14-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

In buster, it appears that specifying the locale on the command line is
not sufficient.  Rather than adding more things to the command line,
just say `priority=critical', by defaulting $debconf_priority
to 'critical'.

I think this change should be fine for earlier suites too.
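The patch uses Perl's `//` (defined-or) operator, which falls back to the default only when the value is undefined, not merely falsy. A minimal Python sketch of that defaulting semantics (the function name is illustrative):

```python
def debconf_priority(xopts):
    # Perl's  $xopts{DebconfPriority} // 'critical'  substitutes the
    # default only when the key is absent/undef, not when it is falsy.
    v = xopts.get("DebconfPriority")
    return "critical" if v is None else v

assert debconf_priority({}) == "critical"
assert debconf_priority({"DebconfPriority": "low"}) == "low"
# An explicitly set but falsy value is preserved, unlike `or`-style defaulting:
assert debconf_priority({"DebconfPriority": ""}) == ""
```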

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 345aff68..7b311a14 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -698,12 +698,10 @@ sub di_installcmdline_core ($$;@) {
                "$xopts{PreseedScheme}=$ps_url",
                "netcfg/dhcp_timeout=150",
                "netcfg/choose_interface=$netcfg_interface",
-               "debian-installer/locale=en_GB",
                );
 
-    my $debconf_priority= $xopts{DebconfPriority};
-    push @cl, "priority=$debconf_priority"
-        if defined $debconf_priority;
+    my $debconf_priority= $xopts{DebconfPriority} // 'critical';
+    push @cl, "priority=$debconf_priority";
     push @cl, "rescue/enable=true" if $xopts{RescueMode};
 
     if ($r{syslog_server}) {
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedIS-0003QQ-TZ; Fri, 29 May 2020 11:35:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedIR-0003Oy-UA
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:31 +0000
X-Inumbo-ID: 5887ed22-a1a0-11ea-8993-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5887ed22-a1a0-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 11:34:24 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3U-0003xZ-3e; Fri, 29 May 2020 12:20:04 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 28/49] 20_linux_xen: Support Xen Security Modules
 (XSM/FLASK)
Date: Fri, 29 May 2020 12:19:24 +0100
Message-Id: <20200529111945.21394-29-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

XSM is enabled by adding "flask=enforcing" as a Xen command line
argument, and providing the policy file as a grub module.

We make entries both with and without XSM.  If XSM is not compiled
into Xen, then there are no policy files, and hence no change to the
boot options.
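The hook generates each grub entry twice and silently drops the XSM variant when no matching `xenpolicy-$xen_version` file exists. A hedged Python sketch of that decision (the original is a shell script; the helper name is illustrative, not from 20_linux_xen):

```python
import os
import tempfile

def xsm_boot_args(xen_version, xen_dirname, xen_args, xsm):
    """Return the Xen arguments for one grub entry, or None when the
    XSM variant should be skipped (hypothetical helper mirroring the
    20_linux_xen logic)."""
    if not xsm:
        return xen_args
    policy = os.path.join(xen_dirname, "xenpolicy-" + xen_version)
    if not os.path.exists(policy):
        return None                    # no policy file: emit no XSM entry
    return xen_args + " flask=enforcing"

# Without a policy file the XSM entry is suppressed ...
assert xsm_boot_args("4.13", "/no-such-dir", "dom0_mem=512M", True) is None
# ... and the plain entry is unchanged.
assert xsm_boot_args("4.13", "/no-such-dir", "dom0_mem=512M", False) == "dom0_mem=512M"
# With a policy present, flask=enforcing is appended.
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, "xenpolicy-4.13"), "w").close()
    assert xsm_boot_args("4.13", d, "x", True) == "x flask=enforcing"
```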

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 overlay-buster/etc/grub.d/20_linux_xen | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/overlay-buster/etc/grub.d/20_linux_xen b/overlay-buster/etc/grub.d/20_linux_xen
index 01dfcb57..4d3294a2 100755
--- a/overlay-buster/etc/grub.d/20_linux_xen
+++ b/overlay-buster/etc/grub.d/20_linux_xen
@@ -84,6 +84,11 @@ esac
 title_correction_code=
 
 linux_entry ()
+{
+  linux_entry_xsm "$@" false
+  linux_entry_xsm "$@" true
+}
+linux_entry_xsm ()
 {
   os="$1"
   version="$2"
@@ -91,6 +96,18 @@ linux_entry ()
   type="$4"
   args="$5"
   xen_args="$6"
+  xsm="$7"
+  # If user wants to enable XSM support, make sure there's
+  # corresponding policy file.
+  if ${xsm} ; then
+      xenpolicy="xenpolicy-$xen_version"
+      if test ! -e "${xen_dirname}/${xenpolicy}" ; then
+	  return
+      fi
+      xen_args="$xen_args flask=enforcing"
+      xen_version="$(gettext_printf "%s (XSM enabled)" "$xen_version")"
+      # xen_version is used for messages only; actual file is xen_basename
+  fi
   if [ -z "$boot_device_id" ]; then
       boot_device_id="$(grub_get_device_id "${GRUB_DEVICE}")"
   fi
@@ -140,6 +157,13 @@ EOF
     sed "s/^/$submenu_indentation/" << EOF
 	echo	'$(echo "$message" | grub_quote)'
 	${module_loader}	--nounzip   ${rel_dirname}/${initrd}
+EOF
+  fi
+  if test -n "${xenpolicy}" ; then
+    message="$(gettext_printf "Loading XSM policy ...")"
+    sed "s/^/$submenu_indentation/" << EOF
+	echo	'$(echo "$message" | grub_quote)'
+	${module_loader}     ${rel_dirname}/${xenpolicy}
 EOF
   fi
   sed "s/^/$submenu_indentation/" << EOF
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedIY-0003W8-9P; Fri, 29 May 2020 11:35:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedIW-0003Uv-UE
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:36 +0000
X-Inumbo-ID: 5a05be90-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a05be90-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:34:26 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3R-0003xZ-TQ; Fri, 29 May 2020 12:20:02 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 18/49] buster: Deinstall the "systemd" package
Date: Fri, 29 May 2020 12:19:14 +0100
Message-Id: <20200529111945.21394-19-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This installs a PAM rule which causes logins to hang.  It also seems
to cause some kind of udev wedge.

We are using sysvinit so this package is not desirable.  Empirically,
removing it makes the system work.

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 1 +
 1 file changed, 1 insertion(+)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 9b4ef967..a328160e 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -842,6 +842,7 @@ sub preseed_base ($$$;@) {
     if ( $suite !~ /squeeze|wheezy/ ) {
        preseed_hook_command($ho, 'late_command', $sfx, <<END)
 in-target apt-get install -y sysvinit-core
+in-target apt-get remove -y systemd
 END
     }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedId-0003dL-L8; Fri, 29 May 2020 11:35:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedIb-0003ay-Tx
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:41 +0000
X-Inumbo-ID: 5c4487e0-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c4487e0-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:34:30 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3V-0003xZ-ED; Fri, 29 May 2020 12:20:05 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 34/49] buster: grub,
 arm64: extend chainloading workaround
Date: Fri, 29 May 2020 12:19:30 +0100
Message-Id: <20200529111945.21394-35-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

multiboot[2] isn't supported.

Also link to the bug report.

CC: Julien Grall <julien@xen.org>
CC: Stefano Stabellini <sstabellini@kernel.org>
Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index 3fc9e555..9f1ce1df 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -443,9 +443,10 @@ sub setupboot_grub2 ($$$$) {
     my $kernkey= (defined $xenhopt ? 'KernDom0' : 'KernOnly');
 
     # Grub2 on jessie/stretch ARM* doesn't do multiboot, so we must chainload.
+    # https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=884770
     my $need_uefi_chainload =
         get_host_property($ho, "firmware") eq "uefi" &&
-        $ho->{Suite} =~ m/jessie|stretch/ && $ho->{Arch} =~ m/^arm/;
+        $ho->{Suite} =~ m/jessie|stretch|buster/ && $ho->{Arch} =~ m/^arm/;
 
     my $parsemenu= sub {
         my $f= bl_getmenu_open($ho, $rmenu, "$stash/$ho->{Name}--grub.cfg.1");
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:35:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:35:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedIj-0003jb-1A; Fri, 29 May 2020 11:35:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gj5c=7L=chiark.greenend.org.uk=ijackson@srs-us1.protection.inumbo.net>)
 id 1jedIg-0003hV-Ul
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:35:46 +0000
X-Inumbo-ID: 5ea31d30-a1a0-11ea-9947-bc764e2007e4
Received: from chiark.greenend.org.uk (unknown [2001:ba8:1e3::])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ea31d30-a1a0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 11:34:34 +0000 (UTC)
Received: from [172.18.45.5] (helo=zealot.relativity.greenend.org.uk)
 by chiark.greenend.org.uk (Debian Exim 4.84_2 #1) with esmtp
 (return-path ijackson@chiark.greenend.org.uk)
 id 1jed3a-0003xZ-Ui; Fri, 29 May 2020 12:20:11 +0100
From: Ian Jackson <ian.jackson@eu.citrix.com>
To: xen-devel@lists.xenproject.org
Subject: [OSSTEST PATCH 46/49] setupboot_grub2: Copy hv command line from grub
 to xen.cfg
Date: Fri, 29 May 2020 12:19:42 +0100
Message-Id: <20200529111945.21394-47-ian.jackson@eu.citrix.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

This reuses all of the stuff that update-grub, etc., have put there.

In particular, without this we never get flask=enforcing!

We have to do something about the ${xen_rm_opts} that appear in these
entries.  In principle there might be many variable expansions, but in
practice there is only this one.  It applies only to x86, whereas this
use of chainloading to xen.efi and reading xen.cfg applies only to
arm64.  And anyway we weren't putting these things in before.  So OK...
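The patch deletes such expansions with a global substitution before writing xen.cfg. The original is Perl (`s/\$\{\w+\}//g`); an equivalent Python sketch, with an illustrative options string:

```python
import re

def strip_grub_expansions(opts):
    # Drop any ${...} grub variable expansions that cannot be
    # evaluated outside grub, as the Perl  s/\$\{\w+\}//g  does.
    return re.sub(r"\$\{\w+\}", "", opts)

line = "placeholder ${xen_rm_opts} loglvl=all guest_loglvl=all"
assert strip_grub_expansions(line) == "placeholder  loglvl=all guest_loglvl=all"
```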

Signed-off-by: Ian Jackson <ian.jackson@eu.citrix.com>
---
 Osstest/Debian.pm | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/Osstest/Debian.pm b/Osstest/Debian.pm
index de53c1ac..c18bf718 100644
--- a/Osstest/Debian.pm
+++ b/Osstest/Debian.pm
@@ -534,10 +534,11 @@ sub setupboot_grub2 ($$$$) {
                 $entry->{Hv}= $1;
                 $entry->{Chainload} = 1;
             }
-            if (m/^\s*multiboot2?\s+(?:\/boot)?\/(xen\-[0-9][-+.0-9a-z]*\S+)/) {
+            if (m/^\s*multiboot2?\s+(?:\/boot)?\/(xen\-[0-9][-+.0-9a-z]*\S+)\s+(.*)/) {
                 die unless $entry;
                 $entry->{Hv}= $1;
                 $entry->{Chainload} = 0;
+		$entry->{HvOpts} = $2;
             }
             if (m/^\s*multiboot2?\s+(?:\/boot)?\/(vmlinu[xz]-(\S+))\s+(.*)/) {
                 die unless $entry;
@@ -575,7 +576,7 @@ sub setupboot_grub2 ($$$$) {
             # Propagate relevant fields of the main entry over to the
             # chain entry for use of subsequent code.
             foreach (qw(KernVer KernDom0 KernOnly KernOpts
-                        Initrd Xenpolicy)) {
+                        Initrd Xenpolicy HvOpts)) {
 		next unless $entry->{$_};
 		die if $chainentry->{$_};
 		$chainentry->{$_} = $entry->{$_};
@@ -604,12 +605,14 @@ sub setupboot_grub2 ($$$$) {
 
 	if ($need_uefi_chainload) {
 	    my $entry= $parsemenu->();
+	    my $hvopts = $entry->{HvOpts};
+	    $hvopts =~ s/\$\{\w+\}//g; # delete these and hope!
 	    my $xencfg = <<END;
 [global]
 default=osstest
 
 [osstest]
-options=$xenhopt
+options=$hvopts
 kernel=vmlinuz $entry->{KernOpts}
 END
             $xencfg .= "ramdisk=initrd.gz\n" if $entry->{Initrd};
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:37:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:37:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedK6-0004Oc-Jo; Fri, 29 May 2020 11:37:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jedK5-0004OR-Mb
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:37:13 +0000
X-Inumbo-ID: bcdbc276-a1a0-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcdbc276-a1a0-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:37:12 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 64992AE5E;
 Fri, 29 May 2020 11:37:11 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v3] docs: update xenstore-migration.md
Date: Fri, 29 May 2020 13:37:09 +0200
Message-Id: <20200529113709.17489-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Update connection record details:

- make flags common for sockets and domains (makes it easier to have a
  C union for conn-spec)
- add pending incoming data (needed for handling partially read
  requests when doing live update)
- add partial response length (needed for proper split to individual
  responses after live update)
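Per the revised octet diagram in the diff, the fixed fields are conn-id (4 octets), conn-type (2) and flags (2), followed after conn-spec by in-data-len (2), out-resp-len (2) and out-data-len (4). A hedged packing sketch with Python's struct module; the byte order is an assumption, as this part of the document does not state it:

```python
import struct

# Field widths follow the octet diagram; values are illustrative.
CONN_TYPE_SOCKET = 0x0001
FLAG_READ_ONLY = 0x0001    # the only flag defined so far

# conn-id (u32), conn-type (u16), flags (u16): octets 0..7 of the record.
header = struct.pack("<IHH", 42, CONN_TYPE_SOCKET, FLAG_READ_ONLY)
assert len(header) == 8

# in-data-len (u16), out-resp-len (u16), out-data-len (u32),
# immediately after the variable-length conn-spec.
lengths = struct.pack("<HHI", 16, 0, 128)
assert len(lengths) == 8
```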

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- added out-resp-len to connection record

V3:
- better commit message (Julien Grall)
- same sequence for fields and detailed descriptions (Julien Grall)
- some wording corrected (Paul Durrant)

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 docs/designs/xenstore-migration.md | 72 +++++++++++++++++-------------
 1 file changed, 41 insertions(+), 31 deletions(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 34a2afd17e..2ce2c836f5 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -147,43 +147,60 @@ the domain being migrated.
 ```
     0       1       2       3       4       5       6       7    octet
 +-------+-------+-------+-------+-------+-------+-------+-------+
-| conn-id                       | conn-type     | conn-spec
+| conn-id                       | conn-type     | flags         |
++-------------------------------+---------------+---------------+
+| conn-spec
 ...
-+-------------------------------+-------------------------------+
-| data-len                      | data
-+-------------------------------+
++---------------+---------------+-------------------------------+
+| in-data-len   | out-resp-len  | out-data-len                  |
++---------------+---------------+-------------------------------+
+| data
 ...
 ```
 
 
-| Field       | Description                                     |
-|-------------|-------------------------------------------------|
-| `conn-id`   | A non-zero number used to identify this         |
-|             | connection in subsequent connection-specific    |
-|             | records                                         |
-|             |                                                 |
-| `conn-type` | 0x0000: shared ring                             |
-|             | 0x0001: socket                                  |
-|             | 0x0002 - 0xFFFF: reserved for future use        |
-|             |                                                 |
-| `conn-spec` | See below                                       |
-|             |                                                 |
-| `data-len`  | The length (in octets) of any pending data not  |
-|             | yet written to the connection                   |
-|             |                                                 |
-| `data`      | Pending data (may be empty)                     |
+| Field          | Description                                  |
+|----------------|----------------------------------------------|
+| `conn-id`      | A non-zero number used to identify this      |
+|                | connection in subsequent connection-specific |
+|                | records                                      |
+|                |                                              |
+| `conn-type`    | 0x0000: shared ring                          |
+|                | 0x0001: socket                               |
+|                | 0x0002 - 0xFFFF: reserved for future use     |
+|                |                                              |
+| `flags`        | A bit-wise OR of:                            |
+|                | 0001: read-only                              |
+|                |                                              |
+| `conn-spec`    | See below                                    |
+|                |                                              |
+| `in-data-len`  | The length (in octets) of any data read      |
+|                | from the connection not yet processed        |
+|                |                                              |
+| `out-resp-len` | The length (in octets) of a partial response |
+|                | not yet written to the connection            |
+|                |                                              |
+| `out-data-len` | The length (in octets) of any pending data   |
+|                | not yet written to the connection, including |
+|                | a partial response (see `out-resp-len`)      |
+|                |                                              |
+| `data`         | Pending data: first in-data-len octets of    |
+|                | read data, then out-data-len octets of       |
+|                | written data (any of both may be empty)      |
 
-The format of `conn-spec` is dependent upon `conn-type`.
+In case of live update the connection record for the connection via which
+the live update command was issued will contain the response for the live
+update command in the pending not yet written data.
 
 \pagebreak
 
+The format of `conn-spec` is dependent upon `conn-type`.
+
 For `shared ring` connections it is as follows:
 
 
 ```
     0       1       2       3       4       5       6       7    octet
-                                                +-------+-------+
-                                                | flags         |
 +---------------+---------------+---------------+---------------+
 | domid         | tdomid        | evtchn                        |
 +-------------------------------+-------------------------------+
@@ -198,8 +215,6 @@ For `shared ring` connections it is as follows:
 |           | it has been subject to an SET_TARGET              |
 |           | operation [2] or DOMID_INVALID [3] otherwise      |
 |           |                                                   |
-| `flags`   | Must be zero                                      |
-|           |                                                   |
 | `evtchn`  | The port number of the interdomain channel used   |
 |           | by `domid` to communicate with xenstored          |
 |           |                                                   |
@@ -211,8 +226,6 @@ For `socket` connections it is as follows:
 
 
 ```
-                                                +-------+-------+
-                                                | flags         |
 +---------------+---------------+---------------+---------------+
 | socket-fd                     | pad                           |
 +-------------------------------+-------------------------------+
@@ -221,9 +234,6 @@ For `socket` connections it is as follows:
 
 | Field       | Description                                     |
 |-------------|-------------------------------------------------|
-| `flags`     | A bit-wise OR of:                               |
-|             | 0001: read-only                                 |
-|             |                                                 |
 | `socket-fd` | The file descriptor of the connected socket     |
 
 This type of connection is only relevant for live update, where the xenstored
@@ -398,4 +408,4 @@ explanation of node permissions.
 
 [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;hb=HEAD#l612
 
-[4] https://wiki.xen.org/wiki/XenBus
\ No newline at end of file
+[4] https://wiki.xen.org/wiki/XenBus
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:45:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedSD-0005Wp-CQ; Fri, 29 May 2020 11:45:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=AhMB=7L=amazon.com=prvs=411b2c4a0=jgrall@srs-us1.protection.inumbo.net>)
 id 1jedSC-0005Wk-1I
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:45:36 +0000
X-Inumbo-ID: e869ef98-a1a1-11ea-81bc-bc764e2007e4
Received: from smtp-fw-9101.amazon.com (unknown [207.171.184.25])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e869ef98-a1a1-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:45:35 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
 t=1590752736; x=1622288736;
 h=to:cc:references:from:message-id:date:mime-version:
 in-reply-to:content-transfer-encoding:subject;
 bh=YJW69XafmmZdb2y56tFRAS9Yp2quo8eccmETTgI+Pu4=;
 b=sF0wmP5OKtNasd/kxOebYzMxb8lvy+F5aiCvq4e+HNVWqUMSmhR9v3bO
 76Vn72ftWtmoBJ2Exx4FrdJYblBBk8N9Tr8Rh5yRrZSLB94gqJnNvP6MY
 OpRVola4HqX8LLxBqb1KX2Xuqo2Zsy/GeR5wJp6A3fQBadgG7hZS6EWRO I=;
IronPort-SDR: JO5/Nhh+pXWLYX2voKxgysP1m9Z6Vrk3Fd8Qabv3wa99V+nPeLMkikaoMAm21TMHmH6G7BFRHA
 5ZFLaEg7r7IQ==
X-IronPort-AV: E=Sophos;i="5.73,448,1583193600"; d="scan'208";a="39166308"
Subject: Re: [PATCH 6/7] xen/guest_access: Consolidate guest access helpers in
 xen/guest_access.h
Received: from sea32-co-svc-lb4-vlan3.sea.corp.amazon.com (HELO
 email-inbound-relay-2b-a7fdc47a.us-west-2.amazon.com) ([10.47.23.38])
 by smtp-border-fw-out-9101.sea19.amazon.com with ESMTP;
 29 May 2020 11:45:34 +0000
Received: from EX13MTAUEA002.ant.amazon.com
 (pdx4-ws-svc-p6-lb7-vlan2.pdx.amazon.com [10.170.41.162])
 by email-inbound-relay-2b-a7fdc47a.us-west-2.amazon.com (Postfix) with ESMTPS
 id 6B622C0830; Fri, 29 May 2020 11:45:32 +0000 (UTC)
Received: from EX13D03UEA001.ant.amazon.com (10.43.61.200) by
 EX13MTAUEA002.ant.amazon.com (10.43.61.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 29 May 2020 11:45:31 +0000
Received: from EX13MTAUEA001.ant.amazon.com (10.43.61.82) by
 EX13D03UEA001.ant.amazon.com (10.43.61.200) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 29 May 2020 11:45:31 +0000
Received: from a483e7b01a66.ant.amazon.com (10.1.212.33) by
 mail-relay.amazon.com (10.43.61.243) with Microsoft SMTP Server id
 15.0.1497.2 via Frontend Transport; Fri, 29 May 2020 11:45:30 +0000
To: Ian Jackson <ian.jackson@citrix.com>, Julien Grall <julien@xen.org>
References: <20200404131017.27330-1-julien@xen.org>
 <20200404131017.27330-7-julien@xen.org>
 <e2588f6e-1f13-b66f-8e3d-b8568f67b62a@suse.com>
 <041a9f9f-cc9e-eac5-cdd2-555fb1c88e6f@xen.org>
 <cf6c0e0b-ade0-587f-ea0e-80b02b21b1a9@suse.com>
 <c8e66108-7ac1-fb51-841f-21886b731f04@xen.org>
 <f02f09ec-b643-8321-e235-ce0ee5526ab3@suse.com>
 <69deb8f4-bafe-734c-f6fa-de41ecf539d2@xen.org>
 <c38f4581-42a6-bb4a-1f84-066528edd3ee@xen.org>
 <aa209d94-2b39-7932-919b-9842f376e0dc@xen.org>
 <24259.62913.259268.987283@mariner.uk.xensource.com>
From: Julien Grall <jgrall@amazon.com>
Message-ID: <2850386c-7669-5487-00ae-2f84794cd562@amazon.com>
Date: Fri, 29 May 2020 12:45:29 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <24259.62913.259268.987283@mariner.uk.xensource.com>
Content-Type: text/plain; charset="utf-8"; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
Precedence: Bulk
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei
 Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Ian,

On 19/05/2020 16:05, Ian Jackson wrote:
> Hi.  My attention was drawn to this thread.
> 
> As I understand it, everyone is agreed that deduplicating the
> implementation is good (I also agree).  The debate is only between:

Thank you for stepping in!

> 
> 1. Put it in xen/ until an arch comes along that needs something
>    different, at which point maybe introduce an asm-generic-style
>    thing with default implementations.
> 
> 2. Say, now, that this is a default implementation and it should go in
>     asm-generic.
> 
> My starting point is that Julien, as the primary author of this
> cleanup, should be given leeway on a matter of taste like this.
> (There are as I understand it no wider implications.)
> 
> Also, ISTM that it can be argued that introducing a new abstraction is
> an additional piece of work.  Doing that is certainly not hampered by
> Julien's change.  So that would be another reason to take Julien's
> patch as-is.
> 
> On the merits, I don't have anything to add to the arguments already
> presented.  I am considerably more persuaded by Julien's arguments
> than Jan's.
> 
> So on all levels I think this commit should go in, unless there are
> other concerns that have not been discussed here ?
The major blocker is where the common header lives. The rest are small 
comments I should address in the next version.

I will send a new version (probably post freeze) to address those comments.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 11:46:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:46:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedT1-0005ZJ-LW; Fri, 29 May 2020 11:46:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsAk=7L=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jedT0-0005Z9-FS
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:46:26 +0000
X-Inumbo-ID: 05b69920-a1a2-11ea-a89d-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05b69920-a1a2-11ea-a89d-12813bfff9fa;
 Fri, 29 May 2020 11:46:24 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 36F00AD93;
 Fri, 29 May 2020 11:46:23 +0000 (UTC)
Message-ID: <ab810b293ca8324ca3fba22476401a58435243fa.camel@suse.com>
Subject: Re: [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 29 May 2020 13:46:21 +0200
In-Reply-To: <158818022727.24327.14309662489731832234.stgit@Palanthas>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-+qL9EtQohQ8xfVPoO4tS"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-+qL9EtQohQ8xfVPoO4tS
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

So,

I felt like providing some additional thoughts about this series, from
a release point of view (adding Paul).

Timing is *beyond tight* so if this series, entirely or partly, has any
chance to go in, it would be through some form of exception, which of
course comes with some risks, etc.

I did work hard to submit the full series, because I wanted people to
be able to see the complete solution. However, I think the series
itself can be logically split into two parts.

Basically, if we just consider patches 1 and 4 we will end up, right
after boot, with a system that has smaller runqueues. They will most
likely be balanced in terms of how many CPUs each one has, so a good
setup. This will likely (actual differences seem to depend *quite a
bit* on the actual workload) be an improvement for very large systems.

This is a path that will get a decent amount of testing in OSSTests,
from now until the day of the release, I think, because booting with
the default CPU configuration and setup is what most (all?) OSSTests
jobs do.

If the user starts to create pools, we can get to a point where the
different runqueues are unbalanced, i.e., each one has a different
number of CPUs than the others. This, however:
* can happen already, as of today, without these patches. Whether these
  patches make things "better" or "worse", from this point of view,
  is impossible to tell, because it actually depends on what CPUs
  the user moves among pools or puts offline, etc.
* this means that the scheduler has to deal with unbalanced runqueues
  anyway, and if it doesn't, it's a bug and, again, it seems to me
  that these patches don't make things particularly better or worse.

So, again, for patches 1 and 4 alone, it looks to me that we'd get
improvements, at least in some cases; the codepath will get some
testing, and they do not introduce additional or different issues
beyond what we already have right now.

They are also at their second iteration, as the original patch series
consisted of exactly those two patches.

So, I think it would be interesting if these two patches were given
a chance, even just to get some reviews... And I would be fine going
through the formal process necessary to make that happen myself.

Then, there's the rest, the runqueue rebalancing thing. As said above,
I personally would love it if we released with it, but I see one rather
big issue. In fact, such a mechanism is triggered and stressed by the
dynamic creation and manipulation of cpupools (and CPU on/off-lining),
and we don't have an OSSTests test for that. Therefore, we are not in
the best position for catching issues it may have introduced.

I can commit to doing some testing myself, but it's not the same thing
as having them in our CI, I know that very well. So, I'd be interested in
hearing what others think about these other patches as well, and I am
happy to do my best to make sure that they are working fine, if we
decide to try to include them too, but I do see this as much more of a
risk myself.

So, any thoughts? :-)

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-+qL9EtQohQ8xfVPoO4tS
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7Q9g0ACgkQFkJ4iaW4
c+7EnhAAwTimyy3I+yI3AWOYUv40ynQ4r7V9hyj4vYIu8Z5pUi2NnuIaL4InpmLc
Pfg56ITTRxAHtxMtBO3mnthd1hRKmMlNja9QEoqlELvlFWVLZiXc2p5q9dyKN7UE
g46asOTFsAaD9T0Dqj4KnZvPf7BE4RjigxD85XpmVZdlHZ0itV5CvCGmFWood43W
cCVptXztg6T+P9JZnZ6s61qTQb/aRCtFKDoI8nVdP8LCQ94NqLQ1XCghbfJdXwyv
XwMpWEjJyQbpHvMx3BrOsr1VWxd+ZmzEBxaEZnOu3KniuViPlGhcBnBADiENaBeM
bSLSsFt+TFdfwteV/PTQ2yW5MCyGV+6Z57V1GXHvCiamZ4gmKlxTxdDa3KQPWJ7D
c81RtfEm1wQQUg2Q2bcBxOxn4hqaDRTc+/tBFDTw3OO12afscsRIBmHMshZL24yD
+bvN4eEeQuApUTNv5Oen5xJyoBHgGGTBmAPnUnOPS/C43kdMO2jy0zeKDq0k0+hH
RT8BsNeY0P2vgsA/q3a8I36+kSKDzNvYZb66WDEyxWemJCDva14J7AYE51H6W1u3
wJfIuxsgQMH3rnJPcekfgU6sJZb2UsH9SP2WlJf/rCBRskzbPZsx10cUEFsDT00l
146zGalxSIFb9RZPh/83nM078MPUoLxfp0fMyW5HOliNqrGQgkQ=
=UvJw
-----END PGP SIGNATURE-----

--=-+qL9EtQohQ8xfVPoO4tS--



From xen-devel-bounces@lists.xenproject.org Fri May 29 11:48:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 11:48:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedUq-0005ia-1g; Fri, 29 May 2020 11:48:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vz0h=7L=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1jedUo-0005iV-DO
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:48:18 +0000
X-Inumbo-ID: 491610ec-a1a2-11ea-9dbe-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 491610ec-a1a2-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 11:48:17 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id r15so3081098wmh.5
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 04:48:17 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=from:reply-to:to:cc:references:in-reply-to:subject:date:message-id
 :mime-version:content-transfer-encoding:content-language
 :thread-index; bh=1sm3Jxhiv2igCNdvdTHQazyw8QwNeenyNt7cresh5E8=;
 b=nBialjnqQwye9qAXLp62DpsWOL/ADOnfx3mNdgTlEWD4Fk4Dj6meIGkvVnVSqmOtmp
 EzgRaQs7s6nCzWIDk6XjGz/6TzOJNXXzYzGEXvp6mgr1Mbcau0VFdlxh74aKpnpMam8R
 EPYTRWMkMYmsdhILbtDntCp9uO6twOKZZq5vHNRauJULFHZOzuokwXpVVcDFa6LzD+/I
 KnnJiIAE5IIEUWKd1y+H17fQV8A0RvyRf2TdTQFt8g6h18qE799Gl5CXC4SkYYH7mDiC
 Mp5Z8RtkzyxIEYSCxr8vmi7qy0V0EBkvyWBuefQw6vl60j0TL8Br5T1CLyRekWENNpzw
 grKA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:reply-to:to:cc:references:in-reply-to
 :subject:date:message-id:mime-version:content-transfer-encoding
 :content-language:thread-index;
 bh=1sm3Jxhiv2igCNdvdTHQazyw8QwNeenyNt7cresh5E8=;
 b=B+u7UI9FkBV1n2P51tW0844Lpn99mxfGLIECtkiHlxtCB4saSa0idz2IBR5nOcGBAw
 W5ExPo5sAmENctQce2m0wfChRevWkYUZXIVMuuAYeSSYsD0istmpdhUZ9Ddz/9L5Qjn5
 uMyOrjh49pBxx8gR6EA2Mgh6hdinC5HI3EunoZK+M68vdIQrUC3onwlvKc0av/taFoIO
 LpsDiExJ3wJuH1hJ9Tj5kjTChEwTKzTsTBafayMhHweeayjrS8LHWhrHwysJ0pfwQGFC
 lhr3ZuauX38Te1jwR3KEKKzIIOyhAU+AerwZYOn/69XUpM2Npr0qOuvkvgLmlGKHpl4T
 OYag==
X-Gm-Message-State: AOAM533PNSf98XUAScer/9pfu8dDw4S4RloXquusEzR6CbFjGpm6864L
 awGn8NFc29O7zeGvKeT5r4E=
X-Google-Smtp-Source: ABdhPJzasT8DrPILN066I7RUAYYGG3H3DA4jPj+4r2Z8RnaFXU3OalGtQ8JUn2TJzzSNazRlPAzvgg==
X-Received: by 2002:a1c:bc0a:: with SMTP id m10mr7993488wmf.173.1590752896647; 
 Fri, 29 May 2020 04:48:16 -0700 (PDT)
Received: from CBGR90WXYV0 ([54.239.6.186])
 by smtp.gmail.com with ESMTPSA id z12sm10460730wrg.9.2020.05.29.04.48.15
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 29 May 2020 04:48:16 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: "Paul Durrant" <paul@xen.org>
To: "'Juergen Gross'" <jgross@suse.com>,
	<xen-devel@lists.xenproject.org>
References: <20200529113709.17489-1-jgross@suse.com>
In-Reply-To: <20200529113709.17489-1-jgross@suse.com>
Subject: RE: [PATCH v3] docs: update xenstore-migration.md
Date: Fri, 29 May 2020 12:48:14 +0100
Message-ID: <005d01d635af$0a3e6ae0$1ebb40a0$@xen.org>
MIME-Version: 1.0
Content-Type: text/plain;
	charset="us-ascii"
Content-Transfer-Encoding: 7bit
X-Mailer: Microsoft Outlook 16.0
Content-Language: en-gb
Thread-Index: AQH0JalnYx5eU26y8WXqv8eSeBzaDKiDEdGw
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Reply-To: paul@xen.org
Cc: 'Stefano Stabellini' <sstabellini@kernel.org>,
 'Julien Grall' <julien@xen.org>, 'Wei Liu' <wl@xen.org>,
 'Andrew Cooper' <andrew.cooper3@citrix.com>,
 'Ian Jackson' <ian.jackson@eu.citrix.com>,
 'George Dunlap' <george.dunlap@citrix.com>, 'Jan Beulich' <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> -----Original Message-----
> From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> Sent: 29 May 2020 12:37
> To: xen-devel@lists.xenproject.org
> Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> <julien@xen.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; Ian Jackson
> <ian.jackson@eu.citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>
> Subject: [PATCH v3] docs: update xenstore-migration.md
> 
> Update connection record details:
> 
> - make flags common for sockets and domains (makes it easier to have a
>   C union for conn-spec)
> - add pending incoming data (needed for handling partially read
>   requests when doing live update)
> - add partial response length (needed for proper split to individual
>   responses after live update)
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

LGTM

Reviewed-by: Paul Durrant <paul@xen.org>





From xen-devel-bounces@lists.xenproject.org Fri May 29 12:00:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedg6-0006pe-AY; Fri, 29 May 2020 11:59:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jedg5-0006pZ-Dp
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 11:59:57 +0000
X-Inumbo-ID: e9962128-a1a3-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e9962128-a1a3-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 11:59:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 2868EABBD;
 Fri, 29 May 2020 11:59:55 +0000 (UTC)
Subject: Re: [PATCH v2 03/14] x86/shstk: Introduce Supervisor Shadow Stack
 support
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-4-andrew.cooper3@citrix.com>
 <4f535d4c-b3b3-fe5b-8b57-af736dc0a360@suse.com>
 <ad95944a-bd21-2ba8-6214-49d86050e816@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c3c3aea0-806f-4058-c1aa-cdc0f75007e2@suse.com>
Date: Fri, 29 May 2020 13:59:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <ad95944a-bd21-2ba8-6214-49d86050e816@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 20:10, Andrew Cooper wrote:
> On 28/05/2020 11:25, Jan Beulich wrote:
>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>> --- a/xen/arch/x86/Kconfig
>>> +++ b/xen/arch/x86/Kconfig
>>> @@ -34,6 +34,10 @@ config ARCH_DEFCONFIG
>>>  config INDIRECT_THUNK
>>>  	def_bool $(cc-option,-mindirect-branch-register)
>>>  
>>> +config HAS_AS_CET
>>> +	# binutils >= 2.29 and LLVM >= 7
>>> +	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)
>> So you put me in a really awkward position: I'd really like to see
>> this series go in for 4.14, yet I've previously indicated I want the
>> underlying concept to first be agreed upon, before any uses get
>> introduced.
> 
> There are already users.  One of them is even in context.

Hmm, indeed. I clearly didn't notice this aspect when reviewing
Anthony's series.

> I don't see that there is anything open for dispute in the first place. 
> Being able to do exactly this was one key driving factor for moving to
> a newer Kconfig, because it is a superior mechanism to the ad-hoc mess
> we had previously (not to mention, a vast detriment to build time).

This "key driving factor" was presumably from your perspective.
Could you point me to a discussion (and resulting decision) that
this is an explicit goal of that work? I don't recall any, and
hence I also don't recall having been given a chance to influence
the direction, decision, and overall outcome.

In the interest of getting this series in for 4.14, and on the
assumption that you're willing to have a discussion on the
direction wrt storing tool chain capabilities in .config before
any further uses get added (and with the potential need to undo
the ones we have / gain here):
Reviewed-by: Jan Beulich <jbeulich@suse.com>

The comment at the very end may point at an issue that wants
sorting; the others below are merely replies to some of the
points you've made.

>> There is a fundamental problem with this, at least as long as (a) more of
>> Anthony's series hasn't been committed and (b) we re-build Xen upon
>> installing (as root), even if it was fully built (as non-root) and
>> is ready without any re-building.
> 
> How is any of this relevant to Anthony's recent changes?

Command line options not getting written to .*.cmd and read back
for comparison upon rebuild is the issue here. Albeit, thinking
about it again, for anything that's stored in .config this is
for now compensated for by a change to .config triggering a
global rebuild.
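
[For reference, the mechanism alluded to here is Kbuild-style command
tracking; a simplified make sketch -- illustrative only, not Xen's
actual rules -- of how build commands get written to .*.cmd files and
compared on rebuild:]

```make
# Illustrative sketch (not Xen's actual rules) of Kbuild-style command
# tracking.  'if_changed' -- provided by the Kbuild infrastructure, not
# defined here -- expands the command saved in .<target>.cmd, compares
# it with the current one, and re-runs the rule on any mismatch, so a
# changed toolchain option rebuilds the object even when no source
# file changed.

cmd_cc_o_c = $(CC) $(CFLAGS) -c -o $@ $<

%.o: %.c FORCE
	$(call if_changed,cc_o_c)

# the per-target saved commands are read back here on the next build
-include $(wildcard .*.cmd)
```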

The specific issue I had run into was with XEN_BUILD_EFI changing
without xen.lds getting re-generated.

> You've always been asking for trouble if your `make` before `make
> install` has a different toolchain, even in regular autotools/userspace
> software.  The newer Kconfig logic might make this trouble far more
> obvious, but doesn't introduce the problem.

I disagree. There should be no dependency on the tool chain at
all at install time, with a fully built tree. It ought to be
fine to not even have a tool chain accessible to root, let alone
the precise one that was used for building the tree.

>>> --- a/xen/scripts/Kconfig.include
>>> +++ b/xen/scripts/Kconfig.include
>>> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>>>  # Return y if the linker supports <flag>, n otherwise
>>>  ld-option = $(success,$(LD) -v $(1))
>>>  
>>> +# $(as-instr,<instr>)
>>> +# Return y if the assembler supports <instr>, n otherwise
>>> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
>> Is this actually checking the right thing in the clang case?
> 
> Appears to work correctly for me.  (Again, it's either fine, or needs
> bugfixing anyway for 4.14, and raising with Linux as this is unmodified
> upstream code like the rest of Kconfig.include).

This answer made me try it out: On a system with clang 5 and
binutils 2.32 "incsspd	%eax" translates fine with
-no-integrated-as but doesn't without. The previously mentioned
odd use of CLANG_FLAGS, perhaps together with the disconnect
from where we establish whether to use -no-integrated-as in the
first place (arch.mk; I have no idea whether the CFLAGS
determined would be usable by the kconfig invocation, nor how
to solve the chicken-and-egg problem there if this is meant to
work that way), may be the reason for this. Cc-ing Anthony once
again ...
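
[For anyone wanting to reproduce that divergence outside of kconfig, the
probe can be mimicked as a small standalone helper -- a sketch only;
as_instr() is an illustrative name, and the real macro lives in
xen/scripts/Kconfig.include:]

```c
/* Sketch reproducing the $(as-instr,...) probe from Kconfig.include.
 * as_instr() is an illustrative name, not an existing Xen function.
 * Returns 1 if the given compiler's (possibly integrated) assembler
 * accepts the instruction, 0 otherwise. */
#include <stdio.h>
#include <stdlib.h>

static int as_instr(const char *instr, const char *cc, const char *flags)
{
    char cmd[512];

    /* mirrors: printf "%b\n" "<instr>" | $(CC) $(CLANG_FLAGS) -c
     *          -x assembler -o /dev/null -                        */
    snprintf(cmd, sizeof(cmd),
             "printf '%%b\\n' '%s' | %s %s -c -x assembler -o /dev/null - "
             "2>/dev/null", instr, cc, flags);

    return system(cmd) == 0;
}
```

[With this, the incsspd case above would be probed as
as_instr("incsspd %eax", "clang", "") versus
as_instr("incsspd %eax", "clang", "-no-integrated-as"), making the
integrated-assembler difference directly visible.]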

As an aside, this being taken from Linux doesn't mean it's
suitable for our use. For example, Linux's way to use (or not)
-no-integrated-as is entirely different from ours (via the LLVM_IAS
setting, used in the top-level Makefile). I don't think I can
see whether the check above, for the case at hand, would work
correctly there, both with and without that variable set.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:10:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:10:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedq6-0000AB-Pq; Fri, 29 May 2020 12:10:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jedq5-0000A6-Tm
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:10:17 +0000
X-Inumbo-ID: 58610388-a1a5-11ea-a8a3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 58610388-a1a5-11ea-a8a3-12813bfff9fa;
 Fri, 29 May 2020 12:10:11 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1FXtzeW7ulYNIib9lSSXj/DqZmA+hp4SIOEXUGTKKM4=; b=C2Tpuf1CV9EukEbxbomWvhR3M
 H26cY9VkpmpVwuiImJolihAltd7eQ7OZSMhcVKrzIU2A60V2qxPzAlopi+fsZnHGDabdGWhhhubv9
 1XQMunZJzgAeL2uDWnyVyLcJ2HGgeQNEQCszzKht3xH3lnyKGDWojDXGQCUMBeg0AApcQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jedpz-0003Zw-4B; Fri, 29 May 2020 12:10:11 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jedpy-0007Nu-No; Fri, 29 May 2020 12:10:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jedpy-0008Mh-N2; Fri, 29 May 2020 12:10:10 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150444-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150444: tolerable trouble: fail/pass/starved -
 PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This: xen=724913de8ac8426d313a4645741d86c1169ae406
X-Osstest-Versions-That: xen=9f3e9139fa6c3d620eb08dff927518fc88200b8d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 12:10:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150444 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150444/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150414
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150414
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150414
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150414
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150414
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150414
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150414
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150414
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150414
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150414
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx  2 hosts-allocate               starved  n/a

version targeted for testing:
 xen                  724913de8ac8426d313a4645741d86c1169ae406
baseline version:
 xen                  9f3e9139fa6c3d620eb08dff927518fc88200b8d

Last test of basis   150414  2020-05-27 19:37:11 Z    1 days
Testing same since   150444  2020-05-28 15:34:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien@xen.org>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 starved 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9f3e9139fa..724913de8a  724913de8ac8426d313a4645741d86c1169ae406 -> master


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:12:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedra-0000T9-8H; Fri, 29 May 2020 12:11:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jedrZ-0000T4-Gj
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:11:49 +0000
X-Inumbo-ID: 926fd7ac-a1a5-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 926fd7ac-a1a5-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 12:11:48 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:To:Subject:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=7f6iMEW7aL+H2CjHWwwFv2l9d/8JKx/1kSI5nX+HJ9M=; b=HtPS6PPBsrHIA8psAmyvaxyeMx
 dJQuvDuz6xBeA4Q+HxcnWHiDNRVeb1H8O4oTkI9rjN3+RBB9tOc1XVOWir0P8BAapX/m2jJU9qsBf
 s/ED2gVRYkzXpdeiFo+TxgxnrHh7PJYO27uHI3UdA/noBrYozBpPzALIxHmEkYeo5QqY=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1jedrY-0003d5-Jw
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:11:48 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1jedrY-0002Gm-9Z
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:11:48 +0000
Subject: Re: [PATCH v3] docs: update xenstore-migration.md
To: xen-devel@lists.xenproject.org
References: <20200529113709.17489-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e91c0686-d6d4-aef0-1f60-7a53eead7358@xen.org>
Date: Fri, 29 May 2020 13:11:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <20200529113709.17489-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 29/05/2020 12:37, Juergen Gross wrote:
> Update connection record details:
> 
> - make flags common for sockets and domains (makes it easier to have a
>    C union for conn-spec)
> - add pending incoming data (needed for handling partially read
>    requests when doing live update)
> - add partial response length (needed for proper split to individual
>    responses after live update)
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
> V2:
> - added out-resp-len to connection record
> 
> V3:
> - better commit message (Julien Grall)
> - same sequence for fields and detailed descriptions (Julien Grall)
> - some wording corrected (Paul Durrant)
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   docs/designs/xenstore-migration.md | 72 +++++++++++++++++-------------
>   1 file changed, 41 insertions(+), 31 deletions(-)
> 
> diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
> index 34a2afd17e..2ce2c836f5 100644
> --- a/docs/designs/xenstore-migration.md
> +++ b/docs/designs/xenstore-migration.md
> @@ -147,43 +147,60 @@ the domain being migrated.
>   ```
>       0       1       2       3       4       5       6       7    octet
>   +-------+-------+-------+-------+-------+-------+-------+-------+
> -| conn-id                       | conn-type     | conn-spec
> +| conn-id                       | conn-type     | flags         |
> ++-------------------------------+---------------+---------------+
> +| conn-spec
>   ...
> -+-------------------------------+-------------------------------+
> -| data-len                      | data
> -+-------------------------------+
> ++---------------+---------------+-------------------------------+
> +| in-data-len   | out-resp-len  | out-data-len                  |
> ++---------------+---------------+-------------------------------+
> +| data
>   ...
>   ```
>   
>   
> -| Field       | Description                                     |
> -|-------------|-------------------------------------------------|
> -| `conn-id`   | A non-zero number used to identify this         |
> -|             | connection in subsequent connection-specific    |
> -|             | records                                         |
> -|             |                                                 |
> -| `conn-type` | 0x0000: shared ring                             |
> -|             | 0x0001: socket                                  |
> -|             | 0x0002 - 0xFFFF: reserved for future use        |
> -|             |                                                 |
> -| `conn-spec` | See below                                       |
> -|             |                                                 |
> -| `data-len`  | The length (in octets) of any pending data not  |
> -|             | yet written to the connection                   |
> -|             |                                                 |
> -| `data`      | Pending data (may be empty)                     |
> +| Field          | Description                                  |
> +|----------------|----------------------------------------------|
> +| `conn-id`      | A non-zero number used to identify this      |
> +|                | connection in subsequent connection-specific |
> +|                | records                                      |
> +|                |                                              |
> +| `conn-type`    | 0x0000: shared ring                          |
> +|                | 0x0001: socket                               |
> +|                | 0x0002 - 0xFFFF: reserved for future use     |
> +|                |                                              |
> +| `flags`        | A bit-wise OR of:                            |
> +|                | 0001: read-only                              |
> +|                |                                              |
> +| `conn-spec`    | See below                                    |
> +|                |                                              |
> +| `in-data-len`  | The length (in octets) of any data read      |
> +|                | from the connection not yet processed        |
> +|                |                                              |
> +| `out-resp-len` | The length (in octets) of a partial response |
> +|                | not yet written to the connection            |
> +|                |                                              |
> +| `out-data-len` | The length (in octets) of any pending data   |
> +|                | not yet written to the connection, including |
> +|                | a partial response (see `out-resp-len`)      |
> +|                |                                              |
> +| `data`         | Pending data: first in-data-len octets of    |
> +|                | read data, then out-data-len octets of       |
> +|                | written data (any of both may be empty)      |
>   
> -The format of `conn-spec` is dependent upon `conn-type`.
> +In case of live update the connection record for the connection via which
> +the live update command was issued will contain the response for the live
> +update command in the pending not yet written data.
>   
>   \pagebreak
>   
> +The format of `conn-spec` is dependent upon `conn-type`.
> +
>   For `shared ring` connections it is as follows:
>   
>   
>   ```
>       0       1       2       3       4       5       6       7    octet
> -                                                +-------+-------+
> -                                                | flags         |
>   +---------------+---------------+---------------+---------------+
>   | domid         | tdomid        | evtchn                        |
>   +-------------------------------+-------------------------------+
> @@ -198,8 +215,6 @@ For `shared ring` connections it is as follows:
>   |           | it has been subject to an SET_TARGET              |
>   |           | operation [2] or DOMID_INVALID [3] otherwise      |
>   |           |                                                   |
> -| `flags`   | Must be zero                                      |
> -|           |                                                   |
>   | `evtchn`  | The port number of the interdomain channel used   |
>   |           | by `domid` to communicate with xenstored          |
>   |           |                                                   |
> @@ -211,8 +226,6 @@ For `socket` connections it is as follows:
>   
>   
>   ```
> -                                                +-------+-------+
> -                                                | flags         |
>   +---------------+---------------+---------------+---------------+
>   | socket-fd                     | pad                           |
>   +-------------------------------+-------------------------------+
> @@ -221,9 +234,6 @@ For `socket` connections it is as follows:
>   
>   | Field       | Description                                     |
>   |-------------|-------------------------------------------------|
> -| `flags`     | A bit-wise OR of:                               |
> -|             | 0001: read-only                                 |
> -|             |                                                 |
>   | `socket-fd` | The file descriptor of the connected socket     |
>   
>   This type of connection is only relevant for live update, where the xenstored
> @@ -398,4 +408,4 @@ explanation of node permissions.
>   
>   [3] See https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=xen/include/public/xen.h;hb=HEAD#l612
>   
> -[4] https://wiki.xen.org/wiki/XenBus
> \ No newline at end of file
> +[4] https://wiki.xen.org/wiki/XenBus
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:18:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jedyP-0000he-5B; Fri, 29 May 2020 12:18:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jedyO-0000hZ-8X
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:18:52 +0000
X-Inumbo-ID: 8df82a8e-a1a6-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8df82a8e-a1a6-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 12:18:51 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:56132
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jedyK-000Cj0-Jc (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 13:18:48 +0100
Subject: Re: [PATCH v10 1/9] x86emul: address x86_insn_is_mem_{access, write}()
 omissions
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <f41a4f27-bbe2-6450-38c1-6c4e23f2b07b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8e976b4b-60f2-bf94-843d-0fe0ba57087c@citrix.com>
Date: Fri, 29 May 2020 13:18:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <f41a4f27-bbe2-6450-38c1-6c4e23f2b07b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:26, Jan Beulich wrote:
> First of all explain in comments what the functions' purposes are. Then
> make them actually match their comments.
>
> Note that fc6fa977be54 ("x86emul: extend x86_insn_is_mem_write()
> coverage") didn't actually fix the function's behavior for {,V}STMXCSR:
> Both are covered by generic code higher up in the function, due to
> x86_decode_twobyte() already doing suitable adjustments. And VSTMXCSR
> wouldn't have been covered anyway without a further X86EMUL_OPC_VEX()
> case label. Keep the inner case label in a comment for reference.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v10: Move ARPL case to earlier switch() x86_insn_is_mem_write(). Make
>      group 5 handling actually work there. Drop VMPTRST case. Also
>      handle CLFLUSH*, CLWB, UDn, and remaining PREFETCH* in
>      x86_insn_is_mem_access().
> v9: New.
>
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -11474,25 +11474,87 @@ x86_insn_operand_ea(const struct x86_emu
>      return state->ea.mem.off;
>  }
>  
> +/*
> + * This function means to return 'true' for all supported insns with explicit
> + * accesses to memory.  This means also insns which don't have an explicit
> + * memory operand (like POP), but it does not mean e.g. segment selector
> + * loads, where the descriptor table access is considered an implicit one.
> + */
>  bool
>  x86_insn_is_mem_access(const struct x86_emulate_state *state,
>                         const struct x86_emulate_ctxt *ctxt)
>  {
> +    if ( mode_64bit() && state->not_64bit )
> +        return false;

Is this path actually used?  state->not_64bit ought to fail instruction
decode, at which point we wouldn't have a valid state to be used here.

Everything else looks ok, so Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:23:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:23:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jee30-0001dI-OA; Fri, 29 May 2020 12:23:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jee2z-0001dD-Ar
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:23:37 +0000
X-Inumbo-ID: 38337a12-a1a7-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 38337a12-a1a7-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 12:23:36 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 8CA4BAE96;
 Fri, 29 May 2020 12:23:35 +0000 (UTC)
Subject: Re: [PATCH v2 11/14] x86/alt: Adjust _alternative_instructions() to
 not create shadow stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-12-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1e6d1503-40a8-55b9-9bd3-750cf301220d@suse.com>
Date: Fri, 29 May 2020 14:23:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-12-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> @@ -398,6 +399,19 @@ static void __init _alternative_instructions(bool force)
>          panic("Timed out waiting for alternatives self-NMI to hit\n");
>  
>      set_nmi_callback(saved_nmi_callback);
> +
> +    /*
> +     * When Xen is using shadow stacks, the alternatives clearing CR0.WP and
> +     * writing into the mappings set dirty bits, turning the mappings into
> +     * shadow stack mappings.
> +     *
> +     * While we can execute from them, this would also permit them to be the
> +     * target of WRSS instructions, so reset the dirty after patching.
> +     */
> +    if ( cpu_has_xen_shstk )
> +        modify_xen_mappings(XEN_VIRT_START + MB(2),
> +                            (unsigned long)&__2M_text_end,
> +                            PAGE_HYPERVISOR_RX);

Am I misremembering, or did you previously post a patch that should
be part of this series, as a prerequisite to this change, making
modify_xen_mappings() also respect _PAGE_DIRTY as requested by the
caller?

Additionally I notice this

        if ( desc->Attribute & (efi_bs_revision < EFI_REVISION(2, 5)
                                ? EFI_MEMORY_WP : EFI_MEMORY_RO) )
            prot &= ~_PAGE_RW;

in efi_init_memory(). Afaict we will need to clear _PAGE_DIRTY
there as well, with prot starting out as PAGE_HYPERVISOR_RWX.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:24:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:24:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jee3V-0001fa-1K; Fri, 29 May 2020 12:24:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jee3T-0001fP-Px
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:24:07 +0000
X-Inumbo-ID: 49bb3dc4-a1a7-11ea-a8a4-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 49bb3dc4-a1a7-11ea-a8a4-12813bfff9fa;
 Fri, 29 May 2020 12:24:06 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:56290
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jee3P-000GC8-MQ (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 13:24:04 +0100
Subject: Re: [PATCH v10 2/9] x86emul: rework CMP and TEST emulation
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <5843dca9-1a1a-a32e-3cb0-95cd93533723@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c215f77f-f645-eb08-3ac1-7d9f211e1abf@citrix.com>
Date: Fri, 29 May 2020 13:24:03 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <5843dca9-1a1a-a32e-3cb0-95cd93533723@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:26, Jan Beulich wrote:
> Unlike similarly encoded insns these don't write their memory operands,

"write to their".

> and hence x86_is_mem_write() should return false for them. However,
> rather than adding special logic there, rework how their emulation gets
> done, by making decoding attributes properly describe the r/o nature of
> their memory operands.

Describe how?  I see you've changed the order of the operands in the
encoding, but then override it back?

~Andrew

> Note how this also allows dropping custom LOCK prefix checks.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v10: New.
>
> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -84,7 +84,7 @@ static const opcode_desc_t opcode_table[
>      ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
>      ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
>      /* 0x38 - 0x3F */
> -    ByteOp|DstMem|SrcReg|ModRM, DstMem|SrcReg|ModRM,
> +    ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
>      ByteOp|DstReg|SrcMem|ModRM, DstReg|SrcMem|ModRM,
>      ByteOp|DstEax|SrcImm, DstEax|SrcImm, 0, ImplicitOps,
>      /* 0x40 - 0x4F */
> @@ -2481,7 +2481,6 @@ x86_decode_onebyte(
>      case 0x60: /* pusha */
>      case 0x61: /* popa */
>      case 0x62: /* bound */
> -    case 0x82: /* Grp1 (x86/32 only) */
>      case 0xc4: /* les */
>      case 0xc5: /* lds */
>      case 0xce: /* into */
> @@ -2491,6 +2490,14 @@ x86_decode_onebyte(
>          state->not_64bit = true;
>          break;
>  
> +    case 0x82: /* Grp1 (x86/32 only) */
> +        state->not_64bit = true;
> +        /* fall through */
> +    case 0x80: case 0x81: case 0x83: /* Grp1 */
> +        if ( (modrm_reg & 7) == 7 ) /* cmp */
> +            state->desc = (state->desc & ByteOp) | DstNone | SrcMem;
> +        break;
> +
>      case 0x90: /* nop / pause */
>          if ( repe_prefix() )
>              ctxt->opcode |= X86EMUL_OPC_F3(0, 0);
> @@ -2521,6 +2528,11 @@ x86_decode_onebyte(
>          imm2 = insn_fetch_type(uint8_t);
>          break;
>  
> +    case 0xf6: case 0xf7: /* Grp3 */
> +        if ( !(modrm_reg & 6) ) /* test */
> +            state->desc = (state->desc & ByteOp) | DstNone | SrcMem;
> +        break;
> +
>      case 0xff: /* Grp5 */
>          switch ( modrm_reg & 7 )
>          {
> @@ -3928,13 +3940,11 @@ x86_emulate(
>          break;
>  
>      case 0x38: case 0x39: cmp: /* cmp reg,mem */
> -        if ( ops->rmw && dst.type == OP_MEM &&
> -             (rc = read_ulong(dst.mem.seg, dst.mem.off, &dst.val,
> -                              dst.bytes, ctxt, ops)) != X86EMUL_OKAY )
> -            goto done;
> -        /* fall through */
> +        emulate_2op_SrcV("cmp", dst, src, _regs.eflags);
> +        dst.type = OP_NONE;
> +        break;
> +
>      case 0x3a ... 0x3d: /* cmp */
> -        generate_exception_if(lock_prefix, EXC_UD);
>          emulate_2op_SrcV("cmp", src, dst, _regs.eflags);
>          dst.type = OP_NONE;
>          break;
> @@ -4239,7 +4249,9 @@ x86_emulate(
>          case 4: goto and;
>          case 5: goto sub;
>          case 6: goto xor;
> -        case 7: goto cmp;
> +        case 7:
> +            dst.val = imm1;
> +            goto cmp;
>          }
>          break;
>  
> @@ -5233,11 +5245,8 @@ x86_emulate(
>              unsigned long u[2], v;
>  
>          case 0 ... 1: /* test */
> -            generate_exception_if(lock_prefix, EXC_UD);
> -            if ( ops->rmw && dst.type == OP_MEM &&
> -                 (rc = read_ulong(dst.mem.seg, dst.mem.off, &dst.val,
> -                                  dst.bytes, ctxt, ops)) != X86EMUL_OKAY )
> -                goto done;
> +            dst.val = imm1;
> +            dst.bytes = src.bytes;
>              goto test;
>          case 2: /* not */
>              if ( ops->rmw && dst.type == OP_MEM )
>



From xen-devel-bounces@lists.xenproject.org Fri May 29 12:25:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:25:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jee4z-0001q0-CY; Fri, 29 May 2020 12:25:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jee4y-0001pu-Sb
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:25:40 +0000
X-Inumbo-ID: 817e9102-a1a7-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 817e9102-a1a7-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 12:25:39 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=iUmnYQhtBKZWx1QqX79TZnXthOCRElE4arZbs0Tgpm4=; b=F+M6G0Tdj2ByeeyeXnx6ll6Vg
 sUFHcr+8emmxIUsb9F3riM2ZuPFqwoAEoaBO488BxdhYzI/JW5DnCG93l9TMXbCgQrBnCcRHUACIZ
 DKU1lrBFR4kzl95EWRTmwmaQ1gUZQJ0HqxQM0OIG8CkW6K9C3T9lmknwRWg7ohYEWAYTg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jee4w-0003uK-Q0; Fri, 29 May 2020 12:25:38 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jee4w-0007w8-If; Fri, 29 May 2020 12:25:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jee4w-0008Jm-I5; Fri, 29 May 2020 12:25:38 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150472-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150472: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=9b9a83e43598b231111487378d6037fa8fa473d5
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 12:25:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150472 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150472/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  9b9a83e43598b231111487378d6037fa8fa473d5
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    0 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days    3 attempts
Testing same since   150472  2020-05-29 11:01:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9b9a83e43598b231111487378d6037fa8fa473d5
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:22:50 2020 +0200

    SUPPORT.md: add hypervisor file system
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 7f8d2dc29ea5a51f88ec253be93970768ec9fac2
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:22:42 2020 +0200

    CHANGELOG: add hypervisor file system support
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit 02e9a9cf20950e78c816987415ed920d72444f94
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:31 2020 +0200

    xen: remove XEN_SYSCTL_set_parameter support
    
    The functionality of XEN_SYSCTL_set_parameter is available via hypfs
    now, so it can be removed.
    
    This allows to remove the kernel_param structure for runtime parameters
    by putting the now only used structure element into the hypfs node
    structure of the runtime parameters.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit a2486890689713116351e5bbfb8f104c797479cc
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:16 2020 +0200

    tools/libxc: remove xc_set_parameters()
    
    There is no user of xc_set_parameters() left, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2ea4b9829cf95b59f75f0c70543f2368d702305e
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:08 2020 +0200

    tools/libxl: use libxenhypfs for setting xen runtime parameters
    
    Instead of xc_set_parameters() use xenhypfs_write() for setting
    parameters of the hypervisor.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a659d7cab9afcba337cb60225738fd85ff7aa3da
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:18:36 2020 +0200

    xen: add runtime parameter access support to hypfs
    
    Add support to read and modify values of hypervisor runtime parameters
    via the hypervisor file system.
    
    As runtime parameters can be modified via a sysctl, too, this path has
    to take the hypfs rw_lock as writer.
    
    For custom runtime parameters the connection between the parameter
    value and the file system is done via an init function which will set
    the initial value (if needed) and the leaf properties.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 58263ed7713e8132c2bc00bc870399ea31bf2231
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:15:54 2020 +0200

    xen: add /buildinfo/config entry to hypervisor filesystem
    
    Add the /buildinfo/config entry to the hypervisor filesystem. This
    entry contains the .config file used to build the hypervisor.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit b5d4711d2b17753498a3f587585f11bf9ca5af85
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:14:51 2020 +0200

    xen: provide version information in hypfs
    
    Provide version and compile information in the /buildinfo/ node of the
    Xen hypervisor file system. As this information is accessible by dom0
    only, no additional security problem arises.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 048f82ccd1b3dda511af25a7a8524c8ba5ca2786
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:14:24 2020 +0200

    xen/hypfs: make struct hypfs_entry_leaf initializers work with gcc 4.1
    
    gcc 4.1 has problems with static initializers for anonymous unions.
    Fix this by naming the union in struct hypfs_entry_leaf.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Jan Beulich <jbeulich@suse.com>

commit ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:32 2020 +0200

    tools: add xenfs tool
    
    Add the xenfs tool for accessing the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 86234eafb95295621aef6c618e4c22c10d8e4138
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:21 2020 +0200

    libs: add libxenhypfs
    
    Add the new library libxenhypfs for access to the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5b5ccafb0c425b85a60fd4f241d5f6951d0e4928
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:50 2020 +0200

    xen: add basic hypervisor filesystem support
    
    Add the infrastructure for the hypervisor filesystem.
    
    This includes the hypercall interface and the base functions for
    entry creation, deletion and modification.
    
    To avoid repeating the same pattern wherever adding a new node should
    BUG_ON() failure, the helpers for adding a node (hypfs_add_dir() and
    hypfs_add_leaf()) take a nofault parameter that triggers BUG() in case
    of a failure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 0e9dcd0159c671608e154da5b8b7e0edd2905067
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:35 2020 +0200

    docs: add feature document for Xen hypervisor sysfs-like support
    
    On the 2019 Xen developer summit there was agreement that the Xen
    hypervisor should gain support for a hierarchical name-value store
    similar to the Linux kernel's sysfs.
    
    In the beginning there should only be basic support: entries can be
    added only from the hypervisor itself, and there is a simple hypercall
    interface to read the data.
    
    Add a feature document to serve as the basis for a discussion of the
    desired functionality and the entries to add.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit c48a9956e334a5dd99e846d04ad56185b07aab64
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:08 2020 +0200

    xen: add a generic way to include binary files as variables
    
    Add a new script, xen/tools/binfile, for including a binary file at
    build time, making it usable in the hypervisor via a pointer and a
    size variable.
    
    Make use of that generic tool in xsm.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:28:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jee7H-0001y6-RD; Fri, 29 May 2020 12:28:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jee7G-0001y1-GL
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:28:02 +0000
X-Inumbo-ID: d655b67e-a1a7-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d655b67e-a1a7-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 12:28:02 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:56400
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jee7E-000IiI-Jl (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 13:28:00 +0100
Subject: Re: [PATCH v10 3/9] x86emul: also test decoding and mem access /
 write logic
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <627a6fbb-8e84-0509-78c8-942736e64503@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <7b53fdeb-5851-62f5-6e3e-406d630ff7a2@citrix.com>
Date: Fri, 29 May 2020 13:27:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <627a6fbb-8e84-0509-78c8-942736e64503@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:27, Jan Beulich wrote:
> x86emul_is_mem_{access,write}() (and their interaction with
> x86_decode()) have become sufficiently complex that we should have a way
> to test this logic. Start by covering legacy encoded GPR insns, with the
> exception of a few the main emulator doesn't support yet (left as
> comments in the respective tables, or about to be added by subsequent
> patches). This has already helped spot a few flaws in said logic,
> addressed by (revised) earlier patches.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Extra testing is always good.


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:31:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:31:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeA3-0002la-9N; Fri, 29 May 2020 12:30:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L269=7L=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jeeA1-0002lR-CE
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:30:53 +0000
X-Inumbo-ID: 3ad24914-a1a8-11ea-9947-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ad24914-a1a8-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 12:30:50 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id B0EB5A270B;
 Fri, 29 May 2020 14:30:49 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id AA057A2705;
 Fri, 29 May 2020 14:30:48 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id F_I4ynYO_U7C; Fri, 29 May 2020 14:30:46 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 917D7A270B;
 Fri, 29 May 2020 14:30:46 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id K-Yff4L-2gVQ; Fri, 29 May 2020 14:30:46 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 6E551A2705;
 Fri, 29 May 2020 14:30:46 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 5D29F21DAE;
 Fri, 29 May 2020 14:30:16 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id zzk5R2CFM7iu; Fri, 29 May 2020 14:30:06 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 490BC2230B;
 Fri, 29 May 2020 14:30:04 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id JTnnuDWH0lyQ; Fri, 29 May 2020 14:30:04 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id EDFCB22309;
 Fri, 29 May 2020 14:30:03 +0200 (CEST)
Date: Fri, 29 May 2020 14:30:03 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: xen-devel@lists.xenproject.org
Message-ID: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
Subject: [BUG] Core scheduling patches causing deadlock in some situations
MIME-Version: 1.0
Content-Type: multipart/mixed; 
 boundary="----=_Part_59673954_663965119.1590755403814"
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: Core scheduling patches causing deadlock in some situations
Thread-Index: WWr0Rc/E1PKaz4xzv8PLmNwv/Afpcg==
X-Zimbra-DL: chivay@cert.pl, bonus@cert.pl
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, chivay@cert.pl, bonus@cert.pl,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

------=_Part_59673954_663965119.1590755403814
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable

Hello,

I'm running DRAKVUF on a Dell Inc. PowerEdge R640/08HT8T server with an Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz.
When upgrading from Xen RELEASE 4.12 to 4.13, we noticed some stability problems involving freezes of Dom0 (Debian Buster):

---

May 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
May 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP) idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
May 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
May 27 23:17:02 debian kernel: NMI backtrace for cpu 0
May 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
May 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T, BIOS 2.1.8 04/30/2019
May 27 23:17:02 debian kernel: Call Trace:
May 27 23:17:02 debian kernel: <IRQ>
May 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
May 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
May 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
May 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
May 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
May 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
May 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
May 27 23:17:02 debian kernel: update_process_times+0x28/0x60
May 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60

---

This usually results in the machine being completely unresponsive and performing an automated reboot after some time.

I've bisected the commits between 4.12 and 4.13, and it seems this is the patch that introduced the bug:
https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
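
For reference, a bisection like the one described here can be driven mechanically with `git bisect run`. The toy repository below stands in for the Xen tree, since the real test (booting each build and watching for the Dom0 stall) can only run on the affected machine; all names in it are illustrative:

```shell
# Toy reproduction of the bisection workflow: build a small repo where
# the "regression" lands in the 4th of 5 commits, then let git find it.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email bisect@example.com
git config user.name bisect
for i in 1 2 3 4 5; do
    echo "$i" > state
    git add state
    git commit -qm "c$i"
done
git bisect start HEAD HEAD~4                        # bad = newest, good = oldest
git bisect run sh -c 'test "$(cat state)" -lt 4'    # non-zero exit => commit is bad
git bisect log | grep "first bad commit"
```

Against the real tree the endpoints would be the 4.12 and 4.13 release points, with the freeze-detection script as the `bisect run` command.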

Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from a fresh boot of the machine on which the bug was reproduced.

I'm also attaching the `xl info` output from this machine:

---

release : 4.19.0-6-amd64
version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
machine : x86_64
nr_cpus : 14
max_cpu_id : 223
nr_nodes : 1
cores_per_socket : 14
threads_per_core : 1
cpu_mhz : 2593.930
hw_caps : bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
virt_caps : pv hvm hvm_directio pv_directio hap shadow
total_memory : 130541
free_memory : 63591
sharing_freed_memory : 0
sharing_used_memory : 0
outstanding_claims : 0
free_cpus : 0
xen_major : 4
xen_minor : 13
xen_extra : -unstable
xen_version : 4.13-unstable
xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler : credit2
xen_pagesize : 4096
platform_params : virt_start=0xffff800000000000
xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty
xen_commandline : placeholder dom0_mem=65270M,max:65270M dom0_max_vcpus=6 dom0_vcpus_pin=1 force-ept=1 ept=pml=0 hap_1gb=0 hap_2mb=0 altp2m=1 smt=0 no-real-mode edd=off
cc_compiler : gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
cc_compile_by : root
cc_compile_domain :
cc_compile_date : Fri May 29 02:13:39 UTC 2020
build_id : 958cea737ee01f06e595d52191a6d7bb5ee67deb
xend_config_format : 4

---


Best regards,
Michał Leszczyński
CERT Polska

------=_Part_59673954_663965119.1590755403814
Content-Type: text/plain; name=dmesg.txt
Content-Disposition: attachment; filename=dmesg.txt
Content-Transfer-Encoding: base64

KFhFTikgcGFyYW1ldGVyICJuby1yZWFsLW1vZGUiIHVua25vd24hCihYRU4pIHBhcmFtZXRlciAi
ZWRkIiB1bmtub3duIQogWGVuIDQuMTMtdW5zdGFibGUKKFhFTikgWGVuIHZlcnNpb24gNC4xMy11
bnN0YWJsZSAocm9vdEApIChnY2MgKFVidW50dSA3LjUuMC0zdWJ1bnR1MX4xOC4wNCkgNy41LjAp
IGRlYnVnPXkgIEZyaSBNYXkgMjkgMDI6MTM6MzkgVVRDIDIwMjAKKFhFTikgTGF0ZXN0IENoYW5n
ZVNldDogV2VkIE9jdCAyIDA5OjI3OjI3IDIwMTkgKzAyMDAgZ2l0OjdjN2I0MDdlNzctZGlydHkK
KFhFTikgYnVpbGQtaWQ6IDk1OGNlYTczN2VlMDFmMDZlNTk1ZDUyMTkxYTZkN2JiNWVlNjdkZWIK
KFhFTikgQm9vdGxvYWRlcjogR1JVQiAyLjAyK2Rmc2cxLTIwCihYRU4pIENvbW1hbmQgbGluZTog
cGxhY2Vob2xkZXIgZG9tMF9tZW09NjUyNzBNLG1heDo2NTI3ME0gZG9tMF9tYXhfdmNwdXM9NiBk
b20wX3ZjcHVzX3Bpbj0xIGZvcmNlLWVwdD0xIGVwdD1wbWw9MCBoYXBfMWdiPTAgaGFwXzJtYj0w
IGFsdHAybT0xIHNtdD0wIG5vLXJlYWwtbW9kZSBlZGQ9b2ZmCihYRU4pIFhlbiBpbWFnZSBsb2Fk
IGJhc2UgYWRkcmVzczogMHg1NmMwMDAwMAooWEVOKSBWaWRlbyBpbmZvcm1hdGlvbjoKKFhFTikg
IFZHQSBpcyBncmFwaGljcyBtb2RlIDEyODB4MTAyNCwgMzIgYnBwCihYRU4pIERpc2MgaW5mb3Jt
YXRpb246CihYRU4pICBGb3VuZCAwIE1CUiBzaWduYXR1cmVzCihYRU4pICBGb3VuZCAyIEVERCBp
bmZvcm1hdGlvbiBzdHJ1Y3R1cmVzCihYRU4pIEVGSSBSQU0gbWFwOgooWEVOKSAgMDAwMDAwMDAw
MDAwMDAwMCAtIDAwMDAwMDAwMDAwYTAwMDAgKHVzYWJsZSkKKFhFTikgIDAwMDAwMDAwMDAwYTAw
MDAgLSAwMDAwMDAwMDAwMTAwMDAwIChyZXNlcnZlZCkKKFhFTikgIDAwMDAwMDAwMDAxMDAwMDAg
LSAwMDAwMDAwMDRkM2MwMDAwICh1c2FibGUpCihYRU4pICAwMDAwMDAwMDRkM2MwMDAwIC0gMDAw
MDAwMDA0ZDRhNDAwMCAocmVzZXJ2ZWQpCihYRU4pICAwMDAwMDAwMDRkNGE0MDAwIC0gMDAwMDAw
MDA1ZWVmZjAwMCAodXNhYmxlKQooWEVOKSAgMDAwMDAwMDA1ZWVmZjAwMCAtIDAwMDAwMDAwNmUz
ZmYwMDAgKHJlc2VydmVkKQooWEVOKSAgMDAwMDAwMDA2ZTNmZjAwMCAtIDAwMDAwMDAwNmYzZmYw
MDAgKEFDUEkgTlZTKQooWEVOKSAgMDAwMDAwMDA2ZjNmZjAwMCAtIDAwMDAwMDAwNmY3ZmYwMDAg
KEFDUEkgZGF0YSkKKFhFTikgIDAwMDAwMDAwNmY3ZmYwMDAgLSAwMDAwMDAwMDZmODAwMDAwICh1
c2FibGUpCihYRU4pICAwMDAwMDAwMDZmODAwMDAwIC0gMDAwMDAwMDA5MDAwMDAwMCAocmVzZXJ2
ZWQpCihYRU4pICAwMDAwMDAwMGZlMDAwMDAwIC0gMDAwMDAwMDBmZTAxMTAwMCAocmVzZXJ2ZWQp
CihYRU4pICAwMDAwMDAwMTAwMDAwMDAwIC0gMDAwMDAwMjA4MDAwMDAwMCAodXNhYmxlKQooWEVO
KSBBQ1BJOiBSU0RQIDZGN0ZFMDE0LCAwMDI0IChyMiBERUxMICApCihYRU4pIEFDUEk6IFhTRFQg
NkY0MEMxODgsIDAwRjQgKHIxIERFTEwgICBQRV9TQzMgICAgICAgICAgMCAgICAgICAxMDAwMDEz
KQooWEVOKSBBQ1BJOiBGQUNQIDZGN0Y4MDAwLCAwMTE0IChyNiBERUxMICAgUEVfU0MzICAgICAg
ICAgIDAgREVMTCAgICAgICAgMSkKKFhFTikgQUNQSTogRFNEVCA2RjUwRDAwMCwgMkRCQ0U2IChy
MiBERUxMICAgUEVfU0MzICAgICAgICAgIDMgREVMTCAgICAgICAgMSkKKFhFTikgQUNQSTogRkFD
UyA2RjFBRTAwMCwgMDA0MAooWEVOKSBBQ1BJOiBTU0RUIDZGN0ZDMDAwLCAwNDZDIChyMiAgSU5U
RUwgQUREUlhMQVQgICAgICAgIDEgSU5UTCAyMDE4MDUwOCkKKFhFTikgQUNQSTogTUNFSiA2RjdG
QjAwMCwgMDEzMCAocjEgREVMTCAgIFBFX1NDMyAgICAgICAgICAyIERFTEwgICAgICAgIDEpCihY
RU4pIEFDUEk6IFdEQVQgNkY3RkEwMDAsIDAxMzQgKHIxIERFTEwgICBQRV9TQzMgICAgICAgICAg
MSBERUxMICAgICAgICAxKQooWEVOKSBBQ1BJOiBTTElDIDZGN0Y5MDAwLCAwMDI0IChyMSBERUxM
ICAgUEVfU0MzICAgICAgICAgIDEgREVMTCAgICAgICAgMSkKKFhFTikgQUNQSTogSFBFVCA2RjdG
NzAwMCwgMDAzOCAocjEgREVMTCAgIFBFX1NDMyAgICAgICAgICAxIERFTEwgICAgICAgIDEpCihY
RU4pIEFDUEk6IEFQSUMgNkY3RjUwMDAsIDE2REUgKHI0IERFTEwgICBQRV9TQzMgICAgICAgICAg
MCBERUxMICAgICAgICAxKQooWEVOKSBBQ1BJOiBNQ0ZHIDZGN0Y0MDAwLCAwMDNDIChyMSBERUxM
ICAgUEVfU0MzICAgICAgICAgIDEgREVMTCAgICAgICAgMSkKKFhFTikgQUNQSTogTUlHVCA2RjdG
MzAwMCwgMDA0MCAocjEgREVMTCAgIFBFX1NDMyAgICAgICAgICAwIERFTEwgICAgICAgIDEpCihY
RU4pIEFDUEk6IE1TQ1QgNkY3RjIwMDAsIDAwOTAgKHIxIERFTEwgICBQRV9TQzMgICAgICAgICAg
MSBERUxMICAgICAgICAxKQooWEVOKSBBQ1BJOiBQQ0FUIDZGN0YxMDAwLCAwMDY4IChyMiBERUxM
ICAgUEVfU0MzICAgICAgICAgIDIgREVMTCAgICAgICAgMSkKKFhFTikgQUNQSTogUENDVCA2RjdG
MDAwMCwgMDA2RSAocjEgREVMTCAgIFBFX1NDMyAgICAgICAgICAyIERFTEwgICAgICAgIDEpCihY
RU4pIEFDUEk6IFJBU0YgNkY3RUYwMDAsIDAwMzAgKHIxIERFTEwgICBQRV9TQzMgICAgICAgICAg
MSBERUxMICAgICAgICAxKQooWEVOKSBBQ1BJOiBTTElUIDZGN0VFMDAwLCAwNDJDIChyMSBERUxM
ICAgUEVfU0MzICAgICAgICAgIDEgREVMTCAgICAgICAgMSkKKFhFTikgQUNQSTogU1JBVCA2RjdF
QjAwMCwgMkQzMCAocjMgREVMTCAgIFBFX1NDMyAgICAgICAgICAyIERFTEwgICAgICAgIDEpCihY
RU4pIEFDUEk6IFNWT1MgNkY3RUEwMDAsIDAwMzIgKHIxIERFTEwgICBQRV9TQzMgICAgICAgICAg
MCBERUxMICAgICAgICAxKQooWEVOKSBBQ1BJOiBXU01UIDZGN0U5MDAwLCAwMDI4IChyMSBERUxM
ICAgUEVfU0MzICAgICAgICAgIDAgREVMTCAgICAgICAgMSkKKFhFTikgQUNQSTogT0VNNCA2RjQ1
RjAwMCwgQUQxQzEgKHIyICBJTlRFTCBDUFUgIENTVCAgICAgMzAwMCBJTlRMIDIwMTgwNTA4KQoo
WEVOKSBBQ1BJOiBTU0RUIDZGNDI3MDAwLCAzNzQ2NSAocjIgIElOVEVMIFNTRFQgIFBNICAgICA0
MDAwIElOVEwgMjAxODA1MDgpCihYRU4pIEFDUEk6IFNTRFQgNkY0MEQwMDAsIDBBMUYgKHIyIERF
TEwgICBQRV9TQzMgICAgICAgICAgMCBERUxMICAgICAgICAxKQooWEVOKSBBQ1BJOiBTU0RUIDZG
NDIzMDAwLCAzNTdGIChyMiAgSU5URUwgU3BzTm0gICAgICAgICAgIDIgSU5UTCAyMDE4MDUwOCkK
KFhFTikgQUNQSTogRE1BUiA2RjdGRDAwMCwgMDExOCAocjEgREVMTCAgIFBFX1NDMyAgICAgICAg
ICAxIERFTEwgICAgICAgIDEpCihYRU4pIEFDUEk6IEhFU1QgNkY0MjIwMDAsIDAxN0MgKHIxIERF
TEwgICBQRV9TQzMgICAgICAgICAgMiBERUxMICAgICAgICAxKQooWEVOKSBBQ1BJOiBCRVJUIDZG
NDIxMDAwLCAwMDMwIChyMSBERUxMICAgUEVfU0MzICAgICAgICAgIDIgREVMTCAgICAgICAgMSkK
KFhFTikgQUNQSTogRVJTVCA2RjQyMDAwMCwgMDIzMCAocjEgREVMTCAgIFBFX1NDMyAgICAgICAg
ICAyIERFTEwgICAgICAgIDEpCihYRU4pIEFDUEk6IEVJTkogNkY0MUYwMDAsIDAxNTAgKHIxIERF
TEwgICBQRV9TQzMgICAgICAgICAgMiBERUxMICAgICAgICAxKQooWEVOKSBTeXN0ZW0gUkFNOiAx
MzA1NDFNQiAoMTMzNjc0NzM2a0IpCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDAgLT4gTm9k
ZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMGMgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBY
TSAwIC0+IEFQSUMgMDIgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMGEgLT4g
Tm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDQgLT4gTm9kZSAwCihYRU4pIFNSQVQ6
IFBYTSAwIC0+IEFQSUMgMDggLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDYg
LT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMWMgLT4gTm9kZSAwCihYRU4pIFNS
QVQ6IFBYTSAwIC0+IEFQSUMgMTAgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMg
MWEgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMTIgLT4gTm9kZSAwCihYRU4p
IFNSQVQ6IFBYTSAwIC0+IEFQSUMgMTggLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQ
SUMgMTQgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMTYgLT4gTm9kZSAwCihY
RU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDEgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+
IEFQSUMgMGQgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDMgLT4gTm9kZSAw
CihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMGIgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAw
IC0+IEFQSUMgMDUgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDkgLT4gTm9k
ZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMDcgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBY
TSAwIC0+IEFQSUMgMWQgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMTEgLT4g
Tm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMWIgLT4gTm9kZSAwCihYRU4pIFNSQVQ6
IFBYTSAwIC0+IEFQSUMgMTMgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMTkg
LT4gTm9kZSAwCihYRU4pIFNSQVQ6IFBYTSAwIC0+IEFQSUMgMTUgLT4gTm9kZSAwCihYRU4pIFNS
QVQ6IFBYTSAwIC0+IEFQSUMgMTcgLT4gTm9kZSAwCihYRU4pIFNSQVQ6IE5vZGUgMCBQWE0gMCAw
LTgwMDAwMDAwCihYRU4pIFNSQVQ6IE5vZGUgMCBQWE0gMCAxMDAwMDAwMDAtMjA4MDAwMDAwMAoo
WEVOKSBOVU1BOiBVc2luZyAyMCBmb3IgdGhlIGhhc2ggc2hpZnQuCihYRU4pIERvbWFpbiBoZWFw
IGluaXRpYWxpc2VkCihYRU4pIHZlc2FmYjogZnJhbWVidWZmZXIgYXQgMHgwMDAwMDAwMDkxMDAw
MDAwLCBtYXBwZWQgdG8gMHhmZmZmODJjMDAwMjAxMDAwLCB1c2luZyA1MTIwaywgdG90YWwgNTEy
MGsKKFhFTikgdmVzYWZiOiBtb2RlIGlzIDEyODB4MTAyNHgzMiwgbGluZWxlbmd0aD01MTIwLCBm
b250IDh4MTYKKFhFTikgdmVzYWZiOiBUcnVlY29sb3I6IHNpemU9ODo4Ojg6OCwgc2hpZnQ9MjQ6
MTY6ODowCihYRU4pIENQVSBWZW5kb3I6IEludGVsLCBGYW1pbHkgNiAoMHg2KSwgTW9kZWwgODUg
KDB4NTUpLCBTdGVwcGluZyA0IChyYXcgMDAwNTA2NTQpCihYRU4pIFNNQklPUyAzLjIgcHJlc2Vu
dC4KKFhFTikgVXNpbmcgQVBJQyBkcml2ZXIgZGVmYXVsdAooWEVOKSBBQ1BJOiBQTS1UaW1lciBJ
TyBQb3J0OiAweDUwOCAoMzIgYml0cykKKFhFTikgQUNQSTogdjUgU0xFRVAgSU5GTzogY29udHJv
bFswOjBdLCBzdGF0dXNbMDowXQooWEVOKSBBQ1BJOiBTTEVFUCBJTkZPOiBwbTF4X2NudFsxOjUw
NCwxOjBdLCBwbTF4X2V2dFsxOjUwMCwxOjBdCihYRU4pIEFDUEk6IDMyLzY0WCBGQUNTIGFkZHJl
c3MgbWlzbWF0Y2ggaW4gRkFEVCAtIDZmMWFlMDAwLzAwMDAwMDAwMDAwMDAwMDAsIHVzaW5nIDMy
CihYRU4pIEFDUEk6ICAgICAgICAgICAgIHdha2V1cF92ZWNbNmYxYWUwMGNdLCB2ZWNfc2l6ZVsy
MF0KKFhFTikgQUNQSTogTG9jYWwgQVBJQyBhZGRyZXNzIDB4ZmVlMDAwMDAKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwMF0gbGFwaWNfaWRbMHgwMF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwY10gbGFwaWNfaWRbMHgwY10gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwMl0gbGFwaWNfaWRbMHgwMl0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwYV0gbGFwaWNfaWRbMHgwYV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwNF0gbGFwaWNfaWRbMHgwNF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwOF0gbGFwaWNfaWRbMHgwOF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwNl0gbGFwaWNfaWRbMHgwNl0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxYV0gbGFwaWNfaWRbMHgxY10gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwZV0gbGFwaWNfaWRbMHgxMF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxOF0gbGFwaWNfaWRbMHgxYV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxMF0gbGFwaWNfaWRbMHgxMl0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxNl0gbGFwaWNfaWRbMHgxOF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxMl0gbGFwaWNfaWRbMHgxNF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxNF0gbGFwaWNfaWRbMHgxNl0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwMV0gbGFwaWNfaWRbMHgwMV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwZF0gbGFwaWNfaWRbMHgwZF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwM10gbGFwaWNfaWRbMHgwM10gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwYl0gbGFwaWNfaWRbMHgwYl0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwNV0gbGFwaWNfaWRbMHgwNV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwOV0gbGFwaWNfaWRbMHgwOV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwN10gbGFwaWNfaWRbMHgwN10gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxYl0gbGFwaWNfaWRbMHgxZF0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgwZl0gbGFwaWNfaWRbMHgxMV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxOV0gbGFwaWNfaWRbMHgxYl0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxMV0gbGFwaWNfaWRbMHgxM10gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxN10gbGFwaWNfaWRbMHgxOV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxM10gbGFwaWNfaWRbMHgxNV0gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUMgKGFjcGlfaWRbMHgxNV0gbGFwaWNfaWRbMHgxN10gZW5hYmxlZCkKKFhFTikgQUNQSTog
TEFQSUNfTk1JIChhY3BpX2lkWzB4ZmZdIGhpZ2ggbGV2ZWwgbGludFsweDFdKQooWEVOKSBBQ1BJ
OiBYMkFQSUNfTk1JICh1aWRbMHhmZmZmZmZmZl0gaGlnaCBsZXZlbCBsaW50WzB4MV0pCihYRU4p
IE92ZXJyaWRpbmcgQVBJQyBkcml2ZXIgd2l0aCBiaWdzbXAKKFhFTikgQUNQSTogSU9BUElDIChp
ZFsweDA4XSBhZGRyZXNzWzB4ZmVjMDAwMDBdIGdzaV9iYXNlWzBdKQooWEVOKSBJT0FQSUNbMF06
IGFwaWNfaWQgOCwgdmVyc2lvbiAzMiwgYWRkcmVzcyAweGZlYzAwMDAwLCBHU0kgMC0yMwooWEVO
KSBBQ1BJOiBJT0FQSUMgKGlkWzB4MDldIGFkZHJlc3NbMHhmZWMwMTAwMF0gZ3NpX2Jhc2VbMjRd
KQooWEVOKSBJT0FQSUNbMV06IGFwaWNfaWQgOSwgdmVyc2lvbiAzMiwgYWRkcmVzcyAweGZlYzAx
MDAwLCBHU0kgMjQtMzEKKFhFTikgQUNQSTogSU9BUElDIChpZFsweDBhXSBhZGRyZXNzWzB4ZmVj
MDgwMDBdIGdzaV9iYXNlWzMyXSkKKFhFTikgSU9BUElDWzJdOiBhcGljX2lkIDEwLCB2ZXJzaW9u
IDMyLCBhZGRyZXNzIDB4ZmVjMDgwMDAsIEdTSSAzMi0zOQooWEVOKSBBQ1BJOiBJT0FQSUMgKGlk
WzB4MGJdIGFkZHJlc3NbMHhmZWMxMDAwMF0gZ3NpX2Jhc2VbNDBdKQooWEVOKSBJT0FQSUNbM106
IGFwaWNfaWQgMTEsIHZlcnNpb24gMzIsIGFkZHJlc3MgMHhmZWMxMDAwMCwgR1NJIDQwLTQ3CihY
RU4pIEFDUEk6IElPQVBJQyAoaWRbMHgwY10gYWRkcmVzc1sweGZlYzE4MDAwXSBnc2lfYmFzZVs0
OF0pCihYRU4pIElPQVBJQ1s0XTogYXBpY19pZCAxMiwgdmVyc2lvbiAzMiwgYWRkcmVzcyAweGZl
YzE4MDAwLCBHU0kgNDgtNTUKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1c19pcnEg
MCBnbG9iYWxfaXJxIDIgZGZsIGRmbCkKKFhFTikgQUNQSTogSU5UX1NSQ19PVlIgKGJ1cyAwIGJ1
c19pcnEgOSBnbG9iYWxfaXJxIDkgaGlnaCBsZXZlbCkKKFhFTikgQUNQSTogSVJRMCB1c2VkIGJ5
IG92ZXJyaWRlLgooWEVOKSBBQ1BJOiBJUlEyIHVzZWQgYnkgb3ZlcnJpZGUuCihYRU4pIEFDUEk6
IElSUTkgdXNlZCBieSBvdmVycmlkZS4KKFhFTikgRW5hYmxpbmcgQVBJQyBtb2RlOiAgUGh5cy4g
IFVzaW5nIDUgSS9PIEFQSUNzCihYRU4pIEFDUEk6IEhQRVQgaWQ6IDB4ODA4NmE3MDEgYmFzZTog
MHhmZWQwMDAwMAooWEVOKSBQQ0k6IE1DRkcgY29uZmlndXJhdGlvbiAwOiBiYXNlIDgwMDAwMDAw
IHNlZ21lbnQgMDAwMCBidXNlcyAwMCAtIGZmCihYRU4pIFBDSTogTUNGRyBhcmVhIGF0IDgwMDAw
MDAwIHJlc2VydmVkIGluIEU4MjAKKFhFTikgUENJOiBVc2luZyBNQ0ZHIGZvciBzZWdtZW50IDAw
MDAgYnVzIDAwLWZmCihYRU4pIFtWVC1EXSAgUk1SUiBhZGRyZXNzIHJhbmdlIDZmMWEwMDAwLi42
ZjFhMmZmZiBub3QgaW4gcmVzZXJ2ZWQgbWVtb3J5OyBuZWVkICJpb21tdV9pbmNsdXNpdmVfbWFw
cGluZz0xIj8KKFhFTikgWGVuIEVSU1Qgc3VwcG9ydCBpcyBpbml0aWFsaXplZC4KKFhFTikgSEVT
VDogVGFibGUgcGFyc2luZyBoYXMgYmVlbiBpbml0aWFsaXplZAooWEVOKSBVc2luZyBBQ1BJIChN
QURUKSBmb3IgU01QIGNvbmZpZ3VyYXRpb24gaW5mb3JtYXRpb24KKFhFTikgU01QOiBBbGxvd2lu
ZyAyMjQgQ1BVcyAoMTk2IGhvdHBsdWcgQ1BVcykKKFhFTikgSVJRIGxpbWl0czogNTYgR1NJLCA1
MzM2IE1TSS9NU0ktWAooWEVOKSBOb3QgZW5hYmxpbmcgeDJBUElDICh1cG9uIGZpcm13YXJlIHJl
cXVlc3QpCihYRU4pIHhzdGF0ZTogc2l6ZTogMHhhODggYW5kIHN0YXRlczogMHgyZmYKKFhFTikg
bWNlX2ludGVsLmM6Nzc4OiBNQ0EgQ2FwYWJpbGl0eTogZmlyc3RiYW5rIDAsIGV4dGVuZGVkIE1D
RSBNU1IgMCwgQkNBU1QsIFNFUgooWEVOKSBDUFUwOiBJbnRlbCBtYWNoaW5lIGNoZWNrIHJlcG9y
dGluZyBlbmFibGVkCihYRU4pIFNwZWN1bGF0aXZlIG1pdGlnYXRpb24gZmFjaWxpdGllczoKKFhF
TikgICBIYXJkd2FyZSBmZWF0dXJlczogSUJSUy9JQlBCIFNUSUJQIEwxRF9GTFVTSCBTU0JECihY
RU4pICAgQ29tcGlsZWQtaW4gc3VwcG9ydDogSU5ESVJFQ1RfVEhVTksgU0hBRE9XX1BBR0lORwoo
WEVOKSAgIFhlbiBzZXR0aW5nczogQlRJLVRodW5rIEpNUCwgU1BFQ19DVFJMOiBJQlJTKyBTU0JE
LSwgT3RoZXI6IElCUEIgTDFEX0ZMVVNICihYRU4pICAgTDFURjogYmVsaWV2ZWQgdnVsbmVyYWJs
ZSwgbWF4cGh5c2FkZHIgTDFEIDQ2LCBDUFVJRCA0NiwgU2FmZSBhZGRyZXNzIDMwMDAwMDAwMDAw
MAooWEVOKSAgIFN1cHBvcnQgZm9yIEhWTSBWTXM6IE1TUl9TUEVDX0NUUkwgUlNCIEVBR0VSX0ZQ
VQooWEVOKSAgIFN1cHBvcnQgZm9yIFBWIFZNczogTVNSX1NQRUNfQ1RSTCBSU0IgRUFHRVJfRlBV
CihYRU4pICAgWFBUSSAoNjQtYml0IFBWIG9ubHkpOiBEb20wIGVuYWJsZWQsIERvbVUgZW5hYmxl
ZCAod2l0aCBQQ0lEKQooWEVOKSAgIFBWIEwxVEYgc2hhZG93aW5nOiBEb20wIGRpc2FibGVkLCBE
b21VIGVuYWJsZWQKKFhFTikgVXNpbmcgc2NoZWR1bGVyOiBTTVAgQ3JlZGl0IFNjaGVkdWxlciBy
ZXYyIChjcmVkaXQyKQooWEVOKSBJbml0aWFsaXppbmcgQ3JlZGl0MiBzY2hlZHVsZXIKKFhFTikg
IGxvYWRfcHJlY2lzaW9uX3NoaWZ0OiAxOAooWEVOKSAgbG9hZF93aW5kb3dfc2hpZnQ6IDMwCihY
RU4pICB1bmRlcmxvYWRfYmFsYW5jZV90b2xlcmFuY2U6IDAKKFhFTikgIG92ZXJsb2FkX2JhbGFu
Y2VfdG9sZXJhbmNlOiAtMwooWEVOKSAgcnVucXVldWVzIGFycmFuZ2VtZW50OiBzb2NrZXQKKFhF
TikgIGNhcCBlbmZvcmNlbWVudCBncmFudWxhcml0eTogMTBtcwooWEVOKSBsb2FkIHRyYWNraW5n
IHdpbmRvdyBsZW5ndGggMTA3Mzc0MTgyNCBucwooWEVOKSBQbGF0Zm9ybSB0aW1lciBpcyAyMy45
OTlNSHogSFBFVAooWEVOKSBEZXRlY3RlZCAyNTkzLjkzMCBNSHogcHJvY2Vzc29yLgooWEVOKSBF
RkkgbWVtb3J5IG1hcDoKKFhFTikgIDAwMDAwMDAwMDAwMDAtMDAwMDAwMDAwMGZmZiB0eXBlPTMg
YXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDAwMDAxMDAwLTAwMDAwMDAwMDRmZmYg
dHlwZT0yIGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDAwMDAwNTAwMC0wMDAwMDAw
MDBmZmZmIHR5cGU9NyBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwMDAwMTAwMDAt
MDAwMDAwMDAxM2ZmZiB0eXBlPTMgYXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDAw
MDE0MDAwLTAwMDAwMDAwNTFmZmYgdHlwZT03IGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAg
MDAwMDAwMDA1MjAwMC0wMDAwMDAwMDYxZmZmIHR5cGU9MiBhdHRyPTAwMDAwMDAwMDAwMDAwMGYK
KFhFTikgIDAwMDAwMDAwNjIwMDAtMDAwMDAwMDA4ZGZmZiB0eXBlPTMgYXR0cj0wMDAwMDAwMDAw
MDAwMDBmCihYRU4pICAwMDAwMDAwMDhlMDAwLTAwMDAwMDAwOGZmZmYgdHlwZT03IGF0dHI9MDAw
MDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDAwMDA5MDAwMC0wMDAwMDAwMDlmZmZmIHR5cGU9MyBh
dHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwMDAxMDAwMDAtMDAwMDAwMDYwNmZmZiB0
eXBlPTIgYXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDAwNjA3MDAwLTAwMDAwMDBi
ZmZmZmYgdHlwZT03IGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDAwMGMwMDAwMC0w
MDAwMDAwZmZmZmZmIHR5cGU9MyBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwMDEw
MDAwMDAtMDAwMDAwMmFjMWZmZiB0eXBlPTIgYXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAw
MDAwMDAyYWMyMDAwLTAwMDAwMWRjZWJmZmYgdHlwZT03IGF0dHI9MDAwMDAwMDAwMDAwMDAwZgoo
WEVOKSAgMDAwMDAxZGNlYzAwMC0wMDAwMDMxZWZmZmZmIHR5cGU9MSBhdHRyPTAwMDAwMDAwMDAw
MDAwMGYKKFhFTikgIDAwMDAwMzFmMDAwMDAtMDAwMDAzMWZmZmZmZiB0eXBlPTQgYXR0cj0wMDAw
MDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDMyMDAwMDAwLTAwMDAwNGJhMjlmZmYgdHlwZT03IGF0
dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDA0YmEyYTAwMC0wMDAwMDRiYjlmZmZmIHR5
cGU9MSBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwNGJiYTAwMDAtMDAwMDA0YmQx
NWZmZiB0eXBlPTIgYXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDRiZDE2MDAwLTAw
MDAwNGJlM2FmZmYgdHlwZT0xIGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDA0YmUz
YjAwMC0wMDAwMDRkM2JmZmZmIHR5cGU9MyBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAw
MDAwNGQzYzAwMDAtMDAwMDA0ZDRhM2ZmZiB0eXBlPTUgYXR0cj04MDAwMDAwMDAwMDAwMDBmCihY
RU4pICAwMDAwMDRkNGE0MDAwLTAwMDAwNGQ5ZmNmZmYgdHlwZT0zIGF0dHI9MDAwMDAwMDAwMDAw
MDAwZgooWEVOKSAgMDAwMDA0ZDlmZDAwMC0wMDAwMDRkYzAwZmZmIHR5cGU9NCBhdHRyPTAwMDAw
MDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwNGRjMDEwMDAtMDAwMDA0ZGRmZmZmZiB0eXBlPTMgYXR0
cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDRkZTAwMDAwLTAwMDAwNGRmZmZmZmYgdHlw
ZT00IGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDA0ZTAwMDAwMC0wMDAwMDRlMWZm
ZmZmIHR5cGU9MyBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwNGUyMDAwMDAtMDAw
MDA0ZTNmZmZmZiB0eXBlPTQgYXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDRlNDAw
MDAwLTAwMDAwNGU1NjdmZmYgdHlwZT0zIGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAw
MDA0ZTU2ODAwMC0wMDAwMDRlNTg3ZmZmIHR5cGU9NCBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhF
TikgIDAwMDAwNGU1ODgwMDAtMDAwMDA0ZTYwZGZmZiB0eXBlPTMgYXR0cj0wMDAwMDAwMDAwMDAw
MDBmCihYRU4pICAwMDAwMDRlNjBlMDAwLTAwMDAwNGU4NmVmZmYgdHlwZT00IGF0dHI9MDAwMDAw
MDAwMDAwMDAwZgooWEVOKSAgMDAwMDA0ZTg2ZjAwMC0wMDAwMDRlODdjZmZmIHR5cGU9MyBhdHRy
PTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwNGU4N2QwMDAtMDAwMDA1MWEzZmZmZiB0eXBl
PTQgYXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDUxYTQwMDAwLTAwMDAwNTFhNjhm
ZmYgdHlwZT0zIGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDA1MWE2OTAwMC0wMDAw
MDUxYjAwZmZmIHR5cGU9NCBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwNTFiMDEw
MDAtMDAwMDA1MWIwMWZmZiB0eXBlPTIgYXR0cj0wMDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAw
MDUxYjAyMDAwLTAwMDAwNTFlZmVmZmYgdHlwZT0zIGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVO
KSAgMDAwMDA1MWVmZjAwMC0wMDAwMDU2ZGZmZmZmIHR5cGU9NyBhdHRyPTAwMDAwMDAwMDAwMDAw
MGYKKFhFTikgIDAwMDAwNTZlMDAwMDAtMDAwMDA1NzFkOGZmZiB0eXBlPTIgYXR0cj0wMDAwMDAw
MDAwMDAwMDBmCihYRU4pICAwMDAwMDU3MWQ5MDAwLTAwMDAwNTc1ZGFmZmYgdHlwZT03IGF0dHI9
MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDA1NzVkYjAwMC0wMDAwMDVlZWZlZmZmIHR5cGU9
NCBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAwNWVlZmYwMDAtMDAwMDA1ZWZmZWZm
ZiB0eXBlPTYgYXR0cj04MDAwMDAwMDAwMDAwMDBmCihYRU4pICAwMDAwMDVlZmZmMDAwLTAwMDAw
NWYxZmVmZmYgdHlwZT01IGF0dHI9ODAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDA1ZjFmZjAw
MC0wMDAwMDZlM2ZlZmZmIHR5cGU9MCBhdHRyPTAwMDAwMDAwMDAwMDAwMGYKKFhFTikgIDAwMDAw
NmUzZmYwMDAtMDAwMDA2ZjNmZWZmZiB0eXBlPTEwIGF0dHI9MDAwMDAwMDAwMDAwMDAwZgooWEVO
KSAgMDAwMDA2ZjNmZjAwMC0wMDAwMDZmN2ZlZmZmIHR5cGU9OSBhdHRyPTAwMDAwMDAwMDAwMDAw
MGYKKFhFTikgIDAwMDAwNmY3ZmYwMDAtMDAwMDA2ZjdmZmZmZiB0eXBlPTQgYXR0cj0wMDAwMDAw
MDAwMDAwMDBmCihYRU4pICAwMDAwMTAwMDAwMDAwLTAwMDIwN2ZmZmZmZmYgdHlwZT03IGF0dHI9
MDAwMDAwMDAwMDAwMDAwZgooWEVOKSAgMDAwMDAwMDBhMDAwMC0wMDAwMDAwMGZmZmZmIHR5cGU9
MCBhdHRyPTAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIDAwMDAwNmY4MDAwMDAtMDAwMDA2ZmZmZmZm
ZiB0eXBlPTAgYXR0cj0wMDAwMDAwMDAwMDAwMDAwCihYRU4pICAwMDAwMDcwMDAwMDAwLTAwMDAw
NzdiZmZmZmYgdHlwZT0wIGF0dHI9MDAwMDAwMDAwMDAwMDAwOQooWEVOKSAgMDAwMDA3N2MwMDAw
MC0wMDAwMDdmZmZmZmZmIHR5cGU9MCBhdHRyPTAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIDAwMDAw
ODAwMDAwMDAtMDAwMDA4ZmZmZmZmZiB0eXBlPTExIGF0dHI9ODAwMDAwMDAwMDAwMDAwMQooWEVO
KSAgMDAwMDBmZTAwMDAwMC0wMDAwMGZlMDEwZmZmIHR5cGU9MTEgYXR0cj04MDAwMDAwMDAwMDAw
MDAxCihYRU4pIGFsdCB0YWJsZSBmZmZmODJkMDgwNDZmMWIwIC0+IGZmZmY4MmQwODA0N2JhYjgK
KFhFTikgSW50ZWwgVlQtZCBpb21tdSAyIHN1cHBvcnRlZCBwYWdlIHNpemVzOiA0a0IsIDJNQiwg
MUdCLgooWEVOKSBJbnRlbCBWVC1kIGlvbW11IDEgc3VwcG9ydGVkIHBhZ2Ugc2l6ZXM6IDRrQiwg
Mk1CLCAxR0IuCihYRU4pIEludGVsIFZULWQgaW9tbXUgMCBzdXBwb3J0ZWQgcGFnZSBzaXplczog
NGtCLCAyTUIsIDFHQi4KKFhFTikgSW50ZWwgVlQtZCBpb21tdSAzIHN1cHBvcnRlZCBwYWdlIHNp
emVzOiA0a0IsIDJNQiwgMUdCLgooWEVOKSBJbnRlbCBWVC1kIFNub29wIENvbnRyb2wgZW5hYmxl
ZC4KKFhFTikgSW50ZWwgVlQtZCBEb20wIERNQSBQYXNzdGhyb3VnaCBub3QgZW5hYmxlZC4KKFhF
TikgSW50ZWwgVlQtZCBRdWV1ZWQgSW52YWxpZGF0aW9uIGVuYWJsZWQuCihYRU4pIEludGVsIFZU
LWQgSW50ZXJydXB0IFJlbWFwcGluZyBlbmFibGVkLgooWEVOKSBJbnRlbCBWVC1kIFBvc3RlZCBJ
bnRlcnJ1cHQgbm90IGVuYWJsZWQuCihYRU4pIEludGVsIFZULWQgU2hhcmVkIEVQVCB0YWJsZXMg
bm90IGVuYWJsZWQuCihYRU4pIEkvTyB2aXJ0dWFsaXNhdGlvbiBlbmFibGVkCihYRU4pICAtIERv
bTAgbW9kZTogUmVsYXhlZAooWEVOKSBJbnRlcnJ1cHQgcmVtYXBwaW5nIGVuYWJsZWQKKFhFTikg
bnJfc29ja2V0czogOAooWEVOKSBFbmFibGVkIGRpcmVjdGVkIEVPSSB3aXRoIGlvYXBpY19hY2tf
b2xkIG9uIQooWEVOKSBFTkFCTElORyBJTy1BUElDIElSUXMKKFhFTikgIC0+IFVzaW5nIG9sZCBB
Q0sgbWV0aG9kCihYRU4pIC4uVElNRVI6IHZlY3Rvcj0weEYwIGFwaWMxPTAgcGluMT0yIGFwaWMy
PS0xIHBpbjI9LTEKKFhFTikgVFNDIGRlYWRsaW5lIHRpbWVyIGVuYWJsZWQKKFhFTikgRGVmYXVs
dGluZyB0byBhbHRlcm5hdGl2ZSBrZXkgaGFuZGxpbmc7IHNlbmQgJ0EnIHRvIHN3aXRjaCB0byBu
b3JtYWwgbW9kZS4KKFhFTikgQWxsb2NhdGVkIGNvbnNvbGUgcmluZyBvZiAyNTYgS2lCLgooWEVO
KSBtd2FpdC1pZGxlOiBNV0FJVCBzdWJzdGF0ZXM6IDB4MjAyMAooWEVOKSBtd2FpdC1pZGxlOiB2
MC40LjEgbW9kZWwgMHg1NQooWEVOKSBtd2FpdC1pZGxlOiBsYXBpY190aW1lcl9yZWxpYWJsZV9z
dGF0ZXMgMHhmZmZmZmZmZgooWEVOKSBWTVg6IFN1cHBvcnRlZCBhZHZhbmNlZCBmZWF0dXJlczoK
KFhFTikgIC0gQVBJQyBNTUlPIGFjY2VzcyB2aXJ0dWFsaXNhdGlvbgooWEVOKSAgLSBBUElDIFRQ
UiBzaGFkb3cKKFhFTikgIC0gRXh0ZW5kZWQgUGFnZSBUYWJsZXMgKEVQVCkKKFhFTikgIC0gVmly
dHVhbC1Qcm9jZXNzb3IgSWRlbnRpZmllcnMgKFZQSUQpCihYRU4pICAtIFZpcnR1YWwgTk1JCihY
RU4pICAtIE1TUiBkaXJlY3QtYWNjZXNzIGJpdG1hcAooWEVOKSAgLSBVbnJlc3RyaWN0ZWQgR3Vl
c3QKKFhFTikgIC0gQVBJQyBSZWdpc3RlciBWaXJ0dWFsaXphdGlvbgooWEVOKSAgLSBWaXJ0dWFs
IEludGVycnVwdCBEZWxpdmVyeQooWEVOKSAgLSBQb3N0ZWQgSW50ZXJydXB0IFByb2Nlc3NpbmcK
KFhFTikgIC0gVk1DUyBzaGFkb3dpbmcKKFhFTikgIC0gVk0gRnVuY3Rpb25zCihYRU4pICAtIFZp
cnR1YWxpc2F0aW9uIEV4Y2VwdGlvbnMKKFhFTikgIC0gVFNDIFNjYWxpbmcKKFhFTikgSFZNOiBB
U0lEcyBlbmFibGVkLgooWEVOKSBIVk06IFZNWCBlbmFibGVkCihYRU4pIEhWTTogSGFyZHdhcmUg
QXNzaXN0ZWQgUGFnaW5nIChIQVApIGRldGVjdGVkCihYRU4pIEhWTTogSEFQIHBhZ2Ugc2l6ZXM6
IDRrQiwgMk1CIFtkaXNhYmxlZF0sIDFHQiBbZGlzYWJsZWRdCihYRU4pIGFsdCB0YWJsZSBmZmZm
ODJkMDgwNDZmMWIwIC0+IGZmZmY4MmQwODA0N2JhYjgKKFhFTikgQ1BVIDEgc3RpbGwgbm90IGRl
YWQuLi4KKFhFTikgQ1BVIDEgc3RpbGwgbm90IGRlYWQuLi4KKFhFTikgQ1BVIDEgc3RpbGwgbm90
IGRlYWQuLi4KKFhFTikgQ1BVIDEgc3RpbGwgbm90IGRlYWQuLi4KKFhFTikgQ1BVIDEgc3RpbGwg
bm90IGRlYWQuLi4KKFhFTikgQ1BVIDEgc3RpbGwgbm90IGRlYWQuLi4KKFhFTikgQ1BVIDEgc3Rp
bGwgbm90IGRlYWQuLi4KKFhFTikgQ1BVIDEgc3RpbGwgbm90IGRlYWQuLi4KKFhFTikgQ1BVIDEg
c3RpbGwgbm90IGRlYWQuLi4KKFhFTikgQ1BVIDEgc3RpbGwgbm90IGRlYWQuLi4KKFhFTikgQnJv
dWdodCB1cCAxNCBDUFVzCihYRU4pIFBhcmtlZCAxNCBDUFVzCihYRU4pIEFkZGluZyBjcHUgMCB0
byBydW5xdWV1ZSAwCihYRU4pICBGaXJzdCBjcHUgb24gcnVucXVldWUsIGFjdGl2YXRpbmcKKFhF
TikgQWRkaW5nIGNwdSAyIHRvIHJ1bnF1ZXVlIDAKKFhFTikgQWRkaW5nIGNwdSA0IHRvIHJ1bnF1
ZXVlIDAKKFhFTikgQWRkaW5nIGNwdSA2IHRvIHJ1bnF1ZXVlIDAKKFhFTikgQWRkaW5nIGNwdSA4
IHRvIHJ1bnF1ZXVlIDAKKFhFTikgQWRkaW5nIGNwdSAxMCB0byBydW5xdWV1ZSAwCihYRU4pIEFk
ZGluZyBjcHUgMTIgdG8gcnVucXVldWUgMAooWEVOKSBBZGRpbmcgY3B1IDE0IHRvIHJ1bnF1ZXVl
IDAKKFhFTikgQWRkaW5nIGNwdSAxNiB0byBydW5xdWV1ZSAwCihYRU4pIEFkZGluZyBjcHUgMTgg
dG8gcnVucXVldWUgMAooWEVOKSBBZGRpbmcgY3B1IDIwIHRvIHJ1bnF1ZXVlIDAKKFhFTikgQWRk
aW5nIGNwdSAyMiB0byBydW5xdWV1ZSAwCihYRU4pIEFkZGluZyBjcHUgMjQgdG8gcnVucXVldWUg
MAooWEVOKSBBZGRpbmcgY3B1IDI2IHRvIHJ1bnF1ZXVlIDAKKFhFTikgUnVubmluZyBzdHViIHJl
Y292ZXJ5IHNlbGZ0ZXN0cy4uLgooWEVOKSB0cmFwcy5jOjE1ODk6IEdQRiAoMDAwMCk6IGZmZmY4
MmQwYmZmZmYwNDEgW2ZmZmY4MmQwYmZmZmYwNDFdIC0+IGZmZmY4MmQwODAzODIzZjIKKFhFTikg
dHJhcHMuYzo3ODQ6IFRyYXAgMTI6IGZmZmY4MmQwYmZmZmYwNDAgW2ZmZmY4MmQwYmZmZmYwNDBd
IC0+IGZmZmY4MmQwODAzODIzZjIKKFhFTikgdHJhcHMuYzoxMTIzOiBUcmFwIDM6IGZmZmY4MmQw
YmZmZmYwNDEgW2ZmZmY4MmQwYmZmZmYwNDFdIC0+IGZmZmY4MmQwODAzODIzZjIKKFhFTikgbWNo
ZWNrX3BvbGw6IE1hY2hpbmUgY2hlY2sgcG9sbGluZyB0aW1lciBzdGFydGVkLgooWEVOKSBEb20w
IGhhcyBtYXhpbXVtIDEwMTYgUElSUXMKKFhFTikgTlggKEV4ZWN1dGUgRGlzYWJsZSkgcHJvdGVj
dGlvbiBhY3RpdmUKKFhFTikgKioqIEJ1aWxkaW5nIGEgUFYgRG9tMCAqKioKKFhFTikgRUxGOiBw
aGRyOiBwYWRkcj0weDEwMDAwMDAgbWVtc3o9MHhmM2YwMDAKKFhFTikgRUxGOiBwaGRyOiBwYWRk
cj0weDIwMDAwMDAgbWVtc3o9MHg0MzMwMDAKKFhFTikgRUxGOiBwaGRyOiBwYWRkcj0weDI0MzMw
MDAgbWVtc3o9MHgyMzQxOAooWEVOKSBFTEY6IHBoZHI6IHBhZGRyPTB4MjQ1NzAwMCBtZW1zej0w
eDNkNTAwMAooWEVOKSBFTEY6IG1lbW9yeTogMHgxMDAwMDAwIC0+IDB4MjgyYzAwMAooWEVOKSBF
TEY6IG5vdGU6IEdVRVNUX09TID0gImxpbnV4IgooWEVOKSBFTEY6IG5vdGU6IEdVRVNUX1ZFUlNJ
T04gPSAiMi42IgooWEVOKSBFTEY6IG5vdGU6IFhFTl9WRVJTSU9OID0gInhlbi0zLjAiCihYRU4p
IEVMRjogbm90ZTogVklSVF9CQVNFID0gMHhmZmZmZmZmZjgwMDAwMDAwCihYRU4pIEVMRjogbm90
ZTogSU5JVF9QMk0gPSAweDgwMDAwMDAwMDAKKFhFTikgRUxGOiBub3RlOiBFTlRSWSA9IDB4ZmZm
ZmZmZmY4MjQ1NzE4MAooWEVOKSBFTEY6IG5vdGU6IEhZUEVSQ0FMTF9QQUdFID0gMHhmZmZmZmZm
ZjgxMDAxMDAwCihYRU4pIEVMRjogbm90ZTogRkVBVFVSRVMgPSAiIXdyaXRhYmxlX3BhZ2VfdGFi
bGVzfHBhZV9wZ2Rpcl9hYm92ZV80Z2IiCihYRU4pIEVMRjogbm90ZTogU1VQUE9SVEVEX0ZFQVRV
UkVTID0gMHg4ODAxCihYRU4pIEVMRjogbm90ZTogUEFFX01PREUgPSAieWVzIgooWEVOKSBFTEY6
IG5vdGU6IExPQURFUiA9ICJnZW5lcmljIgooWEVOKSBFTEY6IG5vdGU6IHVua25vd24gKDB4ZCkK
KFhFTikgRUxGOiBub3RlOiBTVVNQRU5EX0NBTkNFTCA9IDB4MQooWEVOKSBFTEY6IG5vdGU6IE1P
RF9TVEFSVF9QRk4gPSAweDEKKFhFTikgRUxGOiBub3RlOiBIVl9TVEFSVF9MT1cgPSAweGZmZmY4
MDAwMDAwMDAwMDAKKFhFTikgRUxGOiBub3RlOiBQQUREUl9PRkZTRVQgPSAwCihYRU4pIEVMRjog
bm90ZTogUEhZUzMyX0VOVFJZID0gMHgxMDAwM2IwCihYRU4pIEVMRjogRm91bmQgUFZIIGltYWdl
CihYRU4pIEVMRjogYWRkcmVzc2VzOgooWEVOKSAgICAgdmlydF9iYXNlICAgICAgICA9IDB4ZmZm
ZmZmZmY4MDAwMDAwMAooWEVOKSAgICAgZWxmX3BhZGRyX29mZnNldCA9IDB4MAooWEVOKSAgICAg
dmlydF9vZmZzZXQgICAgICA9IDB4ZmZmZmZmZmY4MDAwMDAwMAooWEVOKSAgICAgdmlydF9rc3Rh
cnQgICAgICA9IDB4ZmZmZmZmZmY4MTAwMDAwMAooWEVOKSAgICAgdmlydF9rZW5kICAgICAgICA9
IDB4ZmZmZmZmZmY4MjgyYzAwMAooWEVOKSAgICAgdmlydF9lbnRyeSAgICAgICA9IDB4ZmZmZmZm
ZmY4MjQ1NzE4MAooWEVOKSAgICAgcDJtX2Jhc2UgICAgICAgICA9IDB4ODAwMDAwMDAwMAooWEVO
KSAgWGVuICBrZXJuZWw6IDY0LWJpdCwgbHNiLCBjb21wYXQzMgooWEVOKSAgRG9tMCBrZXJuZWw6
IDY0LWJpdCwgUEFFLCBsc2IsIHBhZGRyIDB4MTAwMDAwMCAtPiAweDI4MmMwMDAKKFhFTikgUEhZ
U0lDQUwgTUVNT1JZIEFSUkFOR0VNRU5UOgooWEVOKSAgRG9tMCBhbGxvYy46ICAgMDAwMDAwMjAx
YzAwMDAwMC0+MDAwMDAwMjAyMDAwMDAwMCAoMTY2ODU4ODYgcGFnZXMgdG8gYmUgYWxsb2NhdGVk
KQooWEVOKSAgSW5pdC4gcmFtZGlzazogMDAwMDAwMjA3ZTUzZTAwMC0+MDAwMDAwMjA3ZmZmZjkz
ZQooWEVOKSBWSVJUVUFMIE1FTU9SWSBBUlJBTkdFTUVOVDoKKFhFTikgIExvYWRlZCBrZXJuZWw6
IGZmZmZmZmZmODEwMDAwMDAtPmZmZmZmZmZmODI4MmMwMDAKKFhFTikgIEluaXQuIHJhbWRpc2s6
IDAwMDAwMDAwMDAwMDAwMDAtPjAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIFBoeXMtTWFjaCBtYXA6
IDAwMDAwMDgwMDAwMDAwMDAtPjAwMDAwMDgwMDdmN2IwMDAKKFhFTikgIFN0YXJ0IGluZm86ICAg
IGZmZmZmZmZmODI4MmMwMDAtPmZmZmZmZmZmODI4MmM0YjgKKFhFTikgIFhlbnN0b3JlIHJpbmc6
IDAwMDAwMDAwMDAwMDAwMDAtPjAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIENvbnNvbGUgcmluZzog
IDAwMDAwMDAwMDAwMDAwMDAtPjAwMDAwMDAwMDAwMDAwMDAKKFhFTikgIFBhZ2UgdGFibGVzOiAg
IGZmZmZmZmZmODI4MmQwMDAtPmZmZmZmZmZmODI4NDYwMDAKKFhFTikgIEJvb3Qgc3RhY2s6ICAg
IGZmZmZmZmZmODI4NDYwMDAtPmZmZmZmZmZmODI4NDcwMDAKKFhFTikgIFRPVEFMOiAgICAgICAg
IGZmZmZmZmZmODAwMDAwMDAtPmZmZmZmZmZmODJjMDAwMDAKKFhFTikgIEVOVFJZIEFERFJFU1M6
IGZmZmZmZmZmODI0NTcxODAKKFhFTikgRG9tMCBoYXMgbWF4aW11bSA2IFZDUFVzCihYRU4pIEVM
RjogcGhkciAwIGF0IDB4ZmZmZmZmZmY4MTAwMDAwMCAtPiAweGZmZmZmZmZmODFmM2YwMDAKKFhF
TikgRUxGOiBwaGRyIDEgYXQgMHhmZmZmZmZmZjgyMDAwMDAwIC0+IDB4ZmZmZmZmZmY4MjQzMzAw
MAooWEVOKSBFTEY6IHBoZHIgMiBhdCAweGZmZmZmZmZmODI0MzMwMDAgLT4gMHhmZmZmZmZmZjgy
NDU2NDE4CihYRU4pIEVMRjogcGhkciAzIGF0IDB4ZmZmZmZmZmY4MjQ1NzAwMCAtPiAweGZmZmZm
ZmZmODI1YzYwMDAKKFhFTikgSW5pdGlhbCBsb3cgbWVtb3J5IHZpcnEgdGhyZXNob2xkIHNldCBh
dCAweDQwMDAgcGFnZXMuCihYRU4pIFNjcnViYmluZyBGcmVlIFJBTSBpbiBiYWNrZ3JvdW5kCihY
RU4pIFN0ZC4gTG9nbGV2ZWw6IEFsbAooWEVOKSBHdWVzdCBMb2dsZXZlbDogQWxsCihYRU4pIFhl
biBpcyByZWxpbnF1aXNoaW5nIFZHQSBjb25zb2xlLgooWEVOKSAqKiogU2VyaWFsIGlucHV0IHRv
IERPTTAgKHR5cGUgJ0NUUkwtYScgdGhyZWUgdGltZXMgdG8gc3dpdGNoIGlucHV0KQooWEVOKSBG
cmVlZCA1MzZrQiBpbml0IG1lbW9yeQooWEVOKSBkMDogRm9yY2luZyB3cml0ZSBlbXVsYXRpb24g
b24gTUZOcyA4MDAwMC04ZmZmZgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjAwLjAKKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDowNS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6
MDA6MDUuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA1LjQKKFhFTikgUENJIGFkZCBk
ZXZpY2UgMDAwMDowMDowOC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MDguMQooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjA4LjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDow
MDoxMS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTEuNQooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjAwOjE0LjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNC4yCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MTYuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAw
OjE2LjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMDoxNi40CihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MDA6MTcuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjFjLjAKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDowMDoxYy40CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6
MWYuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjAwOjFmLjIKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDowMDoxZi40CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDA6MWYuNQooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjAxOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDowMTow
MC4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MDI6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjAzOjAwLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowMC4wCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MTY6MDIuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjA1
LjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowNS4yCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MTY6MDUuNAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjA4LjAKKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDoxNjowOC4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MDgu
MgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjA4LjMKKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDoxNjowOC40CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MDguNQooWEVOKSBQQ0kg
YWRkIGRldmljZSAwMDAwOjE2OjA4LjYKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowOC43
CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MDkuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjE2OjA5LjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowOS4yCihYRU4pIFBDSSBh
ZGQgZGV2aWNlIDAwMDA6MTY6MDkuMwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjA5LjQK
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowOS41CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6MTY6MDkuNgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjA5LjcKKFhFTikgUENJIGFk
ZCBkZXZpY2UgMDAwMDoxNjowYS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MGEuMQoo
WEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjBhLjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAw
MDoxNjowYS4zCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MGEuNAooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjE2OjBhLjUKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowYS42CihY
RU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MGEuNwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAw
OjE2OjBiLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowYi4xCihYRU4pIFBDSSBhZGQg
ZGV2aWNlIDAwMDA6MTY6MGIuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjBiLjMKKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowZS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6
MTY6MGUuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjBlLjIKKFhFTikgUENJIGFkZCBk
ZXZpY2UgMDAwMDoxNjowZS4zCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MGUuNAooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjBlLjUKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDox
NjowZS42CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MGUuNwooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjE2OjBmLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowZi4xCihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MGYuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2
OjBmLjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjowZi40CihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6MTY6MGYuNQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjBmLjYKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDoxNjowZi43CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6
MTAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjEwLjEKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDoxNjoxMC4yCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MTAuMwooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOjE2OjEwLjQKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjox
MC41CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MTAuNgooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOjE2OjEwLjcKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjoxMS4wCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6MTY6MTEuMQooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjEx
LjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjoxMS4zCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6MTY6MWQuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjFkLjEKKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDoxNjoxZC4yCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MWQu
MwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE2OjFlLjAKKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDoxNjoxZS4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MWUuMgooWEVOKSBQQ0kg
YWRkIGRldmljZSAwMDAwOjE2OjFlLjMKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNjoxZS40
CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6MTY6MWUuNQooWEVOKSBQQ0kgYWRkIGRldmljZSAw
MDAwOjE2OjFlLjYKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDoxNzowMC4wCihYRU4pIFBDSSBh
ZGQgZGV2aWNlIDAwMDA6MTg6MDAuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjE4OjAwLjEK
KFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDo2NDowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAw
MDA6NjQ6MDUuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjY0OjA1LjIKKFhFTikgUENJIGFk
ZCBkZXZpY2UgMDAwMDo2NDowNS40CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6NjQ6MDguMAoo
WEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjY0OjA5LjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAw
MDo2NDowYS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6NjQ6MGEuMQooWEVOKSBQQ0kgYWRk
IGRldmljZSAwMDAwOjY0OjBhLjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDo2NDowYS4zCihY
RU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6NjQ6MGEuNAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAw
OjY0OjBhLjUKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDo2NDowYS42CihYRU4pIFBDSSBhZGQg
ZGV2aWNlIDAwMDA6NjQ6MGEuNwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjY0OjBiLjAKKFhF
TikgUENJIGFkZCBkZXZpY2UgMDAwMDo2NDowYi4xCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6
NjQ6MGIuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjY0OjBiLjMKKFhFTikgUENJIGFkZCBk
ZXZpY2UgMDAwMDo2NDowYy4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6NjQ6MGMuMQooWEVO
KSBQQ0kgYWRkIGRldmljZSAwMDAwOjY0OjBjLjIKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDo2
NDowYy4zCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6NjQ6MGMuNAooWEVOKSBQQ0kgYWRkIGRl
dmljZSAwMDAwOjY0OjBjLjUKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDo2NDowYy42CihYRU4p
IFBDSSBhZGQgZGV2aWNlIDAwMDA6NjQ6MGMuNwooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjY0
OjBkLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDo2NDowZC4xCihYRU4pIFBDSSBhZGQgZGV2
aWNlIDAwMDA6NjQ6MGQuMgooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOjY0OjBkLjMKKFhFTikg
UENJIGFkZCBkZXZpY2UgMDAwMDo2NTowMC4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6YjI6
MDUuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOmIyOjA1LjIKKFhFTikgUENJIGFkZCBkZXZp
Y2UgMDAwMDpiMjowNS40CihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6YjI6MGUuMAooWEVOKSBQ
Q0kgYWRkIGRldmljZSAwMDAwOmIyOjBlLjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDpiMjow
Zi4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6YjI6MGYuMQooWEVOKSBQQ0kgYWRkIGRldmlj
ZSAwMDAwOmIyOjEwLjAKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDpiMjoxMC4xCihYRU4pIFBD
SSBhZGQgZGV2aWNlIDAwMDA6YjI6MTIuMAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOmIyOjEy
LjEKKFhFTikgUENJIGFkZCBkZXZpY2UgMDAwMDpiMjoxMi4yCihYRU4pIFBDSSBhZGQgZGV2aWNl
IDAwMDA6YjI6MTIuNAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOmIyOjEyLjUKKFhFTikgUENJ
IGFkZCBkZXZpY2UgMDAwMDpiMjoxNS4wCihYRU4pIFBDSSBhZGQgZGV2aWNlIDAwMDA6YjI6MTYu
MAooWEVOKSBQQ0kgYWRkIGRldmljZSAwMDAwOmIyOjE2LjQKKFhFTikgUENJIGFkZCBkZXZpY2Ug
MDAwMDpiMjoxNy4wCihYRU4pIGQwOiBGb3JjaW5nIHJlYWQtb25seSBhY2Nlc3MgdG8gTUZOIGZl
ZDAwCihYRU4pIHRyYXBzLmM6MTU4OTogR1BGICgwMDAwKTogZmZmZjgyZDA4MDM3NzcxNCBbZW11
bC1wcml2LW9wLmMjcmVhZF9tc3IrMHgyMTEvMHg0ODNdIC0+IGZmZmY4MmQwODAzODJiYjMK
------=_Part_59673954_663965119.1590755403814--



From xen-devel-bounces@lists.xenproject.org Fri May 29 12:33:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:33:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeCT-00031E-Ra; Fri, 29 May 2020 12:33:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jeeCS-000318-Vq
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:33:25 +0000
X-Inumbo-ID: 9684c020-a1a8-11ea-9947-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9684c020-a1a8-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 12:33:24 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:56560
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jeeCQ-000LlT-JZ (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 13:33:22 +0100
Subject: Re: [PATCH v10 4/9] x86emul: disable FPU/MMX/SIMD insn emulation when
 !HVM
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <a2c36be0-03f0-00ca-33e9-9915773ccb0f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bece2101-f760-8a40-a153-e4d23f352a02@citrix.com>
Date: Fri, 29 May 2020 13:33:21 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <a2c36be0-03f0-00ca-33e9-9915773ccb0f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:27, Jan Beulich wrote:
> In a pure PV environment (the PV shim in particular) we don't really
> need emulation of all these. To limit #ifdef-ary utilize some of the
> CASE_*() macros we have, by providing variants expanding to
> (effectively) nothing (really a label, which in turn requires passing
> -Wno-unused-label to the compiler when building such configurations).
>
> Due to the mixture of macro and #ifdef use, the placement of some of
> the #ifdef-s is a little arbitrary.
>
> The resulting object file's .text is less than half the size of the
> original, and looks to also be compiling a little more quickly.
>
> This is meant as a first step; more parts can likely be disabled down
> the road.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v7: Integrate into this series. Re-base.
> ---
> I'll be happy to take suggestions allowing to avoid -Wno-unused-label.

I really would prefer the simplified version, which doesn't need
-Wno-unused-label at all.

I specifically don't see a need for these to be selected individually,
and a consequence of that is a vastly simplified patch.

However, to avoid this stalemate, Begrudgingly-acked-by: Andrew Cooper
<andrew.cooper3@citrix.com> if you still insist on taking this route.


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:36:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:36:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeFX-0003BP-B6; Fri, 29 May 2020 12:36:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeeFW-0003BI-1T
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:36:34 +0000
X-Inumbo-ID: 07365766-a1a9-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07365766-a1a9-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 12:36:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=wvhyX9H0v12XqSGol0PuAHrmfn/99YqXvqO85qcjzS4=; b=40HT0c4zpNUrylEpbgKlPdi3gO
 W95SoA/0JCiRYRLbkNT6JbU2vp5K3PR/Cv1IvnpvjZDmLJ+8oMIxEW7qM7zcwbgbhENN8N1d9He7+
 VJ6ncXJFQVsTJUiDoWAoC+8QnicHQ7AKKFLaC6cDr6OpcPc7UVGKP3WwROscuLbm4Tmc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeeFQ-00049J-JQ; Fri, 29 May 2020 12:36:28 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeeFQ-0003uC-CR; Fri, 29 May 2020 12:36:28 +0000
Subject: Re: [PATCH v2 3/3] clang: don't define nocall
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger@xen.org>
References: <20200528144023.10814-1-roger.pau@citrix.com>
 <20200528144023.10814-4-roger.pau@citrix.com>
 <8aa8d35f-2928-2096-a47c-26659c5a43a2@xen.org>
 <20200529083122.GJ1195@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <4164d344-5670-565c-6c3a-d9fe40f76ed8@xen.org>
Date: Fri, 29 May 2020 13:36:26 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <20200529083122.GJ1195@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 29/05/2020 09:31, Roger Pau Monné wrote:
> On Fri, May 29, 2020 at 08:25:44AM +0100, Julien Grall wrote:
>> Hi Roger,
>>
>> On 28/05/2020 15:40, Roger Pau Monne wrote:
>>> Clang doesn't support attribute error, and the possible equivalents
>>> like diagnose_if don't seem to work well in this case as they trigger
>>> even when the function is not called (just by being used by the
>>> APPEND_CALL macro).
>>
>> OOI, could you share the diagnose_if change you tried?
> 
> I've sent a reduced version to the llvm-dev mailing list, because I
> think the documentation should be fixed for diagnose_if. The email
> with the example is at:
> 
> http://lists.llvm.org/pipermail/llvm-dev/2020-May/141908.html

Thanks!

> 
> FWIW, using the deprecated attribute will also trigger the same
> error/warning even when not calling the function from C.

I read the documentation before asking for the code, and I did wonder 
why diagnose_if wouldn't work.

I am guessing the behavior wasn't intended.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:41:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:41:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeJl-000404-Si; Fri, 29 May 2020 12:40:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeeJl-0003zx-0V
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:40:57 +0000
X-Inumbo-ID: a3806864-a1a9-11ea-a8a9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3806864-a1a9-11ea-a8a9-12813bfff9fa;
 Fri, 29 May 2020 12:40:56 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9A7E9ACB1;
 Fri, 29 May 2020 12:40:54 +0000 (UTC)
Subject: Re: [PATCH v2 12/14] x86/entry: Adjust guest paths to be shadow stack
 compatible
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-13-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5be19f55-979a-3cef-18bf-f9673cef1da3@suse.com>
Date: Fri, 29 May 2020 14:40:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-13-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> The SYSCALL/SYSENTER/SYSRET paths need to use {SET,CLR}SSBSY.  The IRET to
> guest paths must not.  In the SYSRET path, re-position the mov which loads rip
> into %rcx so we can use %rcx for CLRSSBSY, rather than spilling another
> register to the stack.
> 
> While we can in principle detect shadow stack corruption and a failure to
> clear the supervisor token busy bit in the SYSRET path (by inspecting the
> carry flag following CLRSSBSY), we cannot detect similar problems for the IRET
> path (IRET is specified not to fault in this case).
> 
> We will double fault at some point later, when next trying to enter Xen, due
> to an already-set supervisor shadow stack busy bit.  As SYSRET is an uncommon
> path anyway, avoid the added complexity for no appreciable gain.

I'm okay with the avoidance of complexity, but why is the SYSRET path
uncommon? Almost all hypercall returns ought to take that path?

> --- a/xen/arch/x86/x86_64/entry.S
> +++ b/xen/arch/x86/x86_64/entry.S
> @@ -191,9 +191,16 @@ restore_all_guest:
>          sarq  $47,%rcx
>          incl  %ecx
>          cmpl  $1,%ecx
> -        movq  8(%rsp),%rcx            # RIP
>          ja    iret_exit_to_guest

This removal from the shared part of the exit path needs to be
reflected on both of the sides of the JA, i.e. ...

>  
> +        /* Clear the supervisor shadow stack token busy bit. */
> +.macro rag_clrssbsy
> +        rdsspq %rcx
> +        clrssbsy (%rcx)
> +.endm
> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
> +
> +        movq  8(%rsp), %rcx           # RIP

... not just here, but also like this (with the JA above changed
to target the new label):

         ALIGN
 /* No special register assumptions. */
+.Liret_exit_to_guest:
+        movq  8(%rsp),%rcx            # RIP
 iret_exit_to_guest:
         andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
         orl   $X86_EFLAGS_IF,24(%rsp)

Granted it's mostly cosmetic, as the IRETQ ought to fault, but
it's still a use of IRET in place of SYSRET, and hence we better
get guest register state right. With this or a functionally
identical adjustment (or a clarification on what makes you
convinced this adjustment isn't needed)
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan
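
For clarity, combining the quoted hunk with the adjustment suggested in this reply gives an exit path along these lines (a reconstruction for illustration only, not committed code; the elided SYSRET tail is left as "..."):

```asm
        /* restore_all_guest, canonical-RIP check (sketch) */
        sarq  $47, %rcx
        incl  %ecx
        cmpl  $1, %ecx
        ja    .Liret_exit_to_guest     # non-canonical RIP: take the IRET path

        /* Clear the supervisor shadow stack token busy bit. */
.macro rag_clrssbsy
        rdsspq %rcx
        clrssbsy (%rcx)
.endm
        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK

        movq  8(%rsp), %rcx            # RIP
        ...

        ALIGN
/* No special register assumptions. */
.Liret_exit_to_guest:
        movq  8(%rsp), %rcx            # RIP
iret_exit_to_guest:
        andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM), 24(%rsp)
        orl   $X86_EFLAGS_IF, 24(%rsp)
```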


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:44:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:44:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeNA-0004FA-BP; Fri, 29 May 2020 12:44:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jeeN9-0004F5-FA
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:44:27 +0000
X-Inumbo-ID: 212a550e-a1aa-11ea-9947-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 212a550e-a1aa-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 12:44:26 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7E302ACCC;
 Fri, 29 May 2020 12:44:25 +0000 (UTC)
Subject: Re: [BUG] Core scheduling patches causing deadlock in some situations
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>,
 xen-devel@lists.xenproject.org
References: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
Date: Fri, 29 May 2020 14:44:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: chivay@cert.pl, Tamas K Lengyel <tamas.k.lengyel@gmail.com>, bonus@cert.pl
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.20 14:30, Michał Leszczyński wrote:
> Hello,
> 
> I'm running DRAKVUF on a Dell Inc. PowerEdge R640/08HT8T server with an Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz.
> When upgrading from Xen RELEASE 4.12 to 4.13, we noticed some stability problems involving freezes of Dom0 (Debian Buster):
> 
> ---
> 
> maj 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
> maj 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP) idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
> maj 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
> maj 27 23:17:02 debian kernel: NMI backtrace for cpu 0
> maj 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
> maj 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T, BIOS 2.1.8 04/30/2019
> maj 27 23:17:02 debian kernel: Call Trace:
> maj 27 23:17:02 debian kernel: <IRQ>
> maj 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
> maj 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
> maj 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
> maj 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
> maj 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
> maj 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
> maj 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
> maj 27 23:17:02 debian kernel: update_process_times+0x28/0x60
> maj 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60
> 
> ---
> 
> This usually results in the machine becoming completely unresponsive and performing an automated reboot after some time.
> 
> I've bisected the commits between 4.12 and 4.13, and it seems this is the patch which introduced the bug:
> https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
> 
> Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from a fresh boot of the machine on which the bug was reproduced.
> 
> I'm also attaching the `xl info` output from this machine:
> 
> ---
> 
> release : 4.19.0-6-amd64
> version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
> machine : x86_64
> nr_cpus : 14
> max_cpu_id : 223
> nr_nodes : 1
> cores_per_socket : 14
> threads_per_core : 1
> cpu_mhz : 2593.930
> hw_caps : bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
> virt_caps : pv hvm hvm_directio pv_directio hap shadow
> total_memory : 130541
> free_memory : 63591
> sharing_freed_memory : 0
> sharing_used_memory : 0
> outstanding_claims : 0
> free_cpus : 0
> xen_major : 4
> xen_minor : 13
> xen_extra : -unstable
> xen_version : 4.13-unstable
> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler : credit2
> xen_pagesize : 4096
> platform_params : virt_start=0xffff800000000000
> xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty

What is your original Xen base? This output was clearly obtained at
the end of the bisect process.

There have been quite a few corrections since the release of Xen 4.13,
so please make sure you are running the most recent version (4.13.1).


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:52:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:52:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeUs-0005Dd-6N; Fri, 29 May 2020 12:52:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L269=7L=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jeeUq-0005DY-W3
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:52:25 +0000
X-Inumbo-ID: 3da9c984-a1ab-11ea-9947-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3da9c984-a1ab-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 12:52:24 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id F30AEA2570;
 Fri, 29 May 2020 14:52:22 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id EF3D8A2547;
 Fri, 29 May 2020 14:52:21 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id vRZyfN0mawu3; Fri, 29 May 2020 14:52:21 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 557EDA2570;
 Fri, 29 May 2020 14:52:21 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id o8WDoAEVcNJS; Fri, 29 May 2020 14:52:21 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 3614DA2547;
 Fri, 29 May 2020 14:52:21 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 2B08E222AA;
 Fri, 29 May 2020 14:51:51 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id vV0-pD2cC7HA; Fri, 29 May 2020 14:51:45 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 90857222E0;
 Fri, 29 May 2020 14:51:45 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id En165QHYGHzw; Fri, 29 May 2020 14:51:45 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 67B692226D;
 Fri, 29 May 2020 14:51:45 +0200 (CEST)
Date: Fri, 29 May 2020 14:51:45 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: =?utf-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
Message-ID: <1150720994.59695220.1590756705329.JavaMail.zimbra@cert.pl>
In-Reply-To: <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
References: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
 <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
Subject: Re: [BUG] Core scheduling patches causing deadlock in some situations
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: Core scheduling patches causing deadlock in some situations
Thread-Index: Ip8pjHOe2SXZ63rsxdtycq26oWTKvw==
X-Zimbra-DL: chivay@cert.pl, bonus@cert.pl
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: chivay@cert.pl, xen-devel@lists.xenproject.org, bonus@cert.pl,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 29 May 2020 at 14:44, Jürgen Groß jgross@suse.com wrote:

> On 29.05.20 14:30, Michał Leszczyński wrote:
>> Hello,
>> 
>> I'm running DRAKVUF on a Dell Inc. PowerEdge R640/08HT8T server with an
>> Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz.
>> When upgrading from Xen RELEASE 4.12 to 4.13, we noticed some stability
>> problems involving freezes of Dom0 (Debian Buster):
>> 
>> ---
>> 
>> maj 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
>> maj 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP)
>> idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
>> maj 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
>> maj 27 23:17:02 debian kernel: NMI backtrace for cpu 0
>> maj 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE
>> 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
>> maj 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T,
>> BIOS 2.1.8 04/30/2019
>> maj 27 23:17:02 debian kernel: Call Trace:
>> maj 27 23:17:02 debian kernel: <IRQ>
>> maj 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
>> maj 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
>> maj 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
>> maj 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
>> maj 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
>> maj 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
>> maj 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
>> maj 27 23:17:02 debian kernel: update_process_times+0x28/0x60
>> maj 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60
>> 
>> ---
>> 
>> This usually results in the machine becoming completely unresponsive and
>> performing an automated reboot after some time.
>> 
>> I've bisected the commits between 4.12 and 4.13, and it seems this is the
>> patch which introduced the bug:
>> https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
>> 
>> Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from a fresh
>> boot of the machine on which the bug was reproduced.
>> 
>> I'm also attaching the `xl info` output from this machine:
>> 
>> ---
>> 
>> release : 4.19.0-6-amd64
>> version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
>> machine : x86_64
>> nr_cpus : 14
>> max_cpu_id : 223
>> nr_nodes : 1
>> cores_per_socket : 14
>> threads_per_core : 1
>> cpu_mhz : 2593.930
>> hw_caps :
>> bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
>> virt_caps : pv hvm hvm_directio pv_directio hap shadow
>> total_memory : 130541
>> free_memory : 63591
>> sharing_freed_memory : 0
>> sharing_used_memory : 0
>> outstanding_claims : 0
>> free_cpus : 0
>> xen_major : 4
>> xen_minor : 13
>> xen_extra : -unstable
>> xen_version : 4.13-unstable
>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p
>> hvm-3.0-x86_64
>> xen_scheduler : credit2
>> xen_pagesize : 4096
>> platform_params : virt_start=0xffff800000000000
>> xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty
> 
> What is your original Xen base? This output was clearly obtained at the
> end of the bisect process.
> 
> There have been quite a few corrections since the release of Xen 4.13,
> so please make sure you are running the most recent version (4.13.1).
> 
> 
> Juergen

Sure, we have tested both RELEASE 4.13 and RELEASE 4.13.1. Unfortunately
these corrections didn't help and the bug is still reproducible.

From our testing it turns out that:

Known working revision: 997d6248a9ae932d0dbaac8d8755c2b15fec25dc
Broken revision: 6278553325a9f76d37811923221b21db3882e017
First bad commit: 7c7b407e77724f37c4b448930777a59a479feb21


Best regards,
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:52:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:52:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeVK-0005GD-FE; Fri, 29 May 2020 12:52:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeeVK-0005G6-4Q
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:52:54 +0000
X-Inumbo-ID: 4f1c586c-a1ab-11ea-a8ab-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f1c586c-a1ab-11ea-a8ab-12813bfff9fa;
 Fri, 29 May 2020 12:52:53 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0F9AFAE0F;
 Fri, 29 May 2020 12:52:52 +0000 (UTC)
Subject: Re: [PATCH v2 13/14] x86/S3: Save and restore Shadow Stack
 configuration
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-14-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c1f1cb73-65f7-f2f7-161c-b505edc5959e@suse.com>
Date: Fri, 29 May 2020 14:52:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-14-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> See code for details
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Wei Liu <wl@xen.org>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> 
> Semi-RFC - I can't actually test this path.  Currently attempting to arrange
> for someone else to.

Nevertheless
Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one question, just for my understanding:

> @@ -48,6 +58,51 @@ ENTRY(s3_resume)
>          pushq   %rax
>          lretq
>  1:
> +#ifdef CONFIG_XEN_SHSTK
> +        /*
> +         * Restoring SSP is a little complicated, because we are intercepting
> +         * an in-use shadow stack.  Write a temporary token under the stack,
> +         * so SETSSBSY will successfully load a value useful for us, then
> +         * reset MSR_PL0_SSP to its usual value and pop the temporary token.
> +         */
> +        mov     saved_rsp(%rip), %rdi
> +        cmpq    $1, %rdi
> +        je      .L_shstk_done
> +
> +        /* Set up MSR_S_CET. */
> +        mov     $MSR_S_CET, %ecx
> +        xor     %edx, %edx
> +        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
> +        wrmsr
> +
> +        /* Construct the temporary supervisor token under SSP. */
> +        sub     $8, %rdi
> +
> +        /* Load it into MSR_PL0_SSP. */
> +        mov     $MSR_PL0_SSP, %ecx
> +        mov     %rdi, %rdx
> +        shr     $32, %rdx
> +        mov     %edi, %eax
> +        wrmsr
> +
> +        /* Enable CET.  MSR_INTERRUPT_SSP_TABLE is set up later in load_system_tables(). */
> +        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ebx
> +        mov     %rbx, %cr4

Does this imply NMI or #MC are fatal between here and there?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 12:57:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 12:57:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeeZB-0005U3-1Y; Fri, 29 May 2020 12:56:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeeZ9-0005Ty-Vc
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 12:56:52 +0000
X-Inumbo-ID: d97449ac-a1ab-11ea-a8ae-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d97449ac-a1ab-11ea-a8ae-12813bfff9fa;
 Fri, 29 May 2020 12:56:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1mDn0ZmkMPxvV5UASxA4Ca+GOE7sk+YihisM92zqJcU=; b=ivzSTsUbCWyJj6F9nmkj292wW
 hJxDzDr6zbIPgkWgujZVe1+eywCNHHoEprLz6s0N4HQ9sEvaWYcvVFQv85Qqce32moLFv0PMUxwaH
 qMQem94bi0OR9wuHomLDikqTOme2YKPT8f4AWQezRxXrI0CG9K6cZqljJTqU38wWxIcB4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeeZ2-0004ao-FI; Fri, 29 May 2020 12:56:44 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeeZ2-0000WL-3v; Fri, 29 May 2020 12:56:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeeZ2-0003Ne-3H; Fri, 29 May 2020 12:56:44 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150462-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150462: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=6f60d2a8503ce8c624bce6b53bf7b68476f5056f
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 12:56:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150462 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150462/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6f60d2a8503ce8c624bce6b53bf7b68476f5056f
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  133 days
Failing since        146211  2020-01-18 04:18:52 Z  132 days  123 attempts
Testing same since   150462  2020-05-29 04:18:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19583 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:09:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeel3-0006Yf-9c; Fri, 29 May 2020 13:09:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeel1-0006Ya-L6
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:09:07 +0000
X-Inumbo-ID: 933b59d8-a1ad-11ea-a8b4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 933b59d8-a1ad-11ea-a8b4-12813bfff9fa;
 Fri, 29 May 2020 13:09:06 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 4B1D1AD63;
 Fri, 29 May 2020 13:09:05 +0000 (UTC)
Subject: Re: [PATCH v2 14/14] x86/shstk: Activate Supervisor Shadow Stacks
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-15-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <886eccd3-e2d4-fdc5-f1cd-e8671a5271e2@suse.com>
Date: Fri, 29 May 2020 15:09:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-15-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27.05.2020 21:18, Andrew Cooper wrote:
> With all other plumbing in place, activate shadow stacks when possible.  Note
> that this requires prohibiting the use of PV32.  Compatibility can be
> maintained if necessary via PV-Shim.

In the revision log you say "Discuss CET-SS disabling PV32", and
indeed both here and in the command line doc you mention the "that"
aspect. But what about the "why"? Aiui "is incompatible" or
"requires" are too strong statements: it could be made to work (by
disabling / enabling CET on the way out of / back into Xen), but
besides losing some of the intended protection that way, it would
be quite a bit of overhead. So it's more of a design decision,
and it would be nice to express it as such at least in the
commit message.

> --- a/xen/arch/x86/setup.c
> +++ b/xen/arch/x86/setup.c
> @@ -664,6 +664,13 @@ static void __init noreturn reinit_bsp_stack(void)
>      stack_base[0] = stack;
>      memguard_guard_stack(stack);
>  
> +    if ( cpu_has_xen_shstk )
> +    {
> +        wrmsrl(MSR_PL0_SSP, (unsigned long)stack + 0x5ff8);

Please replace this remaining literal number accordingly.
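
As an editorial illustration only, one way such a literal can be made
self-documenting is to derive it from a slot constant and the page size.
The names below (PRIMARY_SHSTK_SLOT, PAGE_SIZE values) are hypothetical
stand-ins for whatever the series actually defines; the point is merely
that 0x5ff8 decomposes as "top of page 5+1 of the stack block, minus 8":

```c
#include <stdint.h>

#define PAGE_SIZE          0x1000UL
/* Hypothetical: the shadow stack occupies page 5 of the per-CPU stack
 * block, with the supervisor token sitting 8 bytes below the top of
 * that page. */
#define PRIMARY_SHSTK_SLOT 5

/* Self-documenting equivalent of the bare "stack + 0x5ff8". */
static uintptr_t pl0_ssp(uintptr_t stack)
{
    return stack + (PRIMARY_SHSTK_SLOT + 1) * PAGE_SIZE - 8;
}
```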

> @@ -1801,6 +1823,10 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>  
>      alternative_branches();
>  
> +    /* Defer CR4.CET until alternatives have finished playing with CR4.WP */
> +    if ( cpu_has_xen_shstk )
> +        set_in_cr4(X86_CR4_CET);

Nit: The comment still wants changing to CR0.WP.

With these taken care of in some form
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:10:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeely-0007Cf-JD; Fri, 29 May 2020 13:10:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tJvr=7L=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeelx-000790-6u
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:10:05 +0000
X-Inumbo-ID: b5243d1d-a1ad-11ea-a8b4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5243d1d-a1ad-11ea-a8b4-12813bfff9fa;
 Fri, 29 May 2020 13:10:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=DHAOINmpm6nq8+NpyX885jUqaREsot82c5E8XAtKKj4=; b=cgQQCoONN/2apq1i8rh9YnrxFh
 IrxlgptKpPSEg7pTWsaEwlMLf9TMAYa/IPbNSGJeJThBunJATpUBSobLJjYBApe6h63AH4xCHpomO
 CSGA8sKBnhM/hHM8jGoxRzwzDo5CfsaE/M3yllE2OdqEmG8kKMkS/D/Qnq5JriV2xRqk=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeels-0004s6-1K; Fri, 29 May 2020 13:10:00 +0000
Received: from [212.230.157.105] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeelr-0006II-Bj; Fri, 29 May 2020 13:09:59 +0000
Date: Fri, 29 May 2020 15:09:49 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Message-ID: <20200529130936.GM1195@Air-de-Roger>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
 <43781f37-184d-3ac8-8997-0a9be1de05ce@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <43781f37-184d-3ac8-8997-0a9be1de05ce@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "Xia,
 Hongyan" <hongyxia@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org, nd@arm.com,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 11:59:40AM +0100, Julien Grall wrote:
> Hi Jan,
> 
> On 29/05/2020 08:35, Jan Beulich wrote:
> > On 28.05.2020 20:54, Julien Grall wrote:
> > > On 28/05/2020 16:25, Bertrand Marquis wrote:
> > > > At the moment on Arm, a Linux guest running with KPTI enabled will
> > > > cause the following error when a context switch happens in user mode:
> > > > (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> > > > 
> > > > This patch modifies runstate handling to map the area given by the
> > > > guest inside Xen during the hypercall.
> > > > This removes the guest virtual to physical conversion during context
> > > > switches, which removes the bug.
> > > 
> > > It would be good to spell out that a virtual address is not stable. So
> > > relying on it is wrong.
> > 
> > Guests at present are permitted to change the mapping underneath the
> > virtual address provided (this may not be the best idea, but the
> > interface is like it is).
> 
> Well yes, it could point to data used by userspace. So you could
> corrupt a program. It is not great.

Yes, that's also my worry with the current hypercall. The current
interface is IMO broken for autotranslated guests, at least in the way
it's currently used by OSes.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:15:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeer6-0007ah-8c; Fri, 29 May 2020 13:15:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jeer5-0007ac-9B
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:15:23 +0000
X-Inumbo-ID: 7305dc78-a1ae-11ea-a8b4-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7305dc78-a1ae-11ea-a8b4-12813bfff9fa;
 Fri, 29 May 2020 13:15:21 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 79FA2AEDD;
 Fri, 29 May 2020 13:15:20 +0000 (UTC)
Subject: Re: [BUG] Core scheduling patches causing deadlock in some situations
To: =?UTF-8?Q?Micha=c5=82_Leszczy=c5=84ski?= <michal.leszczynski@cert.pl>
References: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
 <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
 <1150720994.59695220.1590756705329.JavaMail.zimbra@cert.pl>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <1f68a02a-3472-1bb0-90b9-6f3ccefca0b3@suse.com>
Date: Fri, 29 May 2020 15:15:19 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <1150720994.59695220.1590756705329.JavaMail.zimbra@cert.pl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: chivay@cert.pl, xen-devel@lists.xenproject.org, bonus@cert.pl,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.20 14:51, Michał Leszczyński wrote:
> ----- 29 maj 2020 o 14:44, Jürgen Groß jgross@suse.com napisał(a):
> 
>> On 29.05.20 14:30, Michał Leszczyński wrote:
>>> Hello,
>>>
>>> I'm running DRAKVUF on a Dell Inc. PowerEdge R640/08HT8T server with an
>>> Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz.
>>> When upgrading from Xen RELEASE 4.12 to 4.13, we have noticed some stability
>>> problems concerning freezes of Dom0 (Debian Buster):
>>>
>>> ---
>>>
>>> maj 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
>>> maj 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP)
>>> idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
>>> maj 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
>>> maj 27 23:17:02 debian kernel: NMI backtrace for cpu 0
>>> maj 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE
>>> 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
>>> maj 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T,
>>> BIOS 2.1.8 04/30/2019
>>> maj 27 23:17:02 debian kernel: Call Trace:
>>> maj 27 23:17:02 debian kernel: <IRQ>
>>> maj 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
>>> maj 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
>>> maj 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
>>> maj 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
>>> maj 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
>>> maj 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
>>> maj 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
>>> maj 27 23:17:02 debian kernel: update_process_times+0x28/0x60
>>> maj 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60
>>>
>>> ---
>>>
>>> This usually results in the machine being completely unresponsive and
>>> performing an automated reboot after some time.
>>>
>>> I've bisected commits between 4.12 and 4.13 and it seems like this is the patch
>>> which introduced a bug:
>>> https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
>>>
>>> Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from the fresh
>>> boot of the machine on which the bug was reproduced.
>>>
>>> I'm also attaching the `xl info` output from this machine:
>>>
>>> ---
>>>
>>> release : 4.19.0-6-amd64
>>> version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
>>> machine : x86_64
>>> nr_cpus : 14
>>> max_cpu_id : 223
>>> nr_nodes : 1
>>> cores_per_socket : 14
>>> threads_per_core : 1
>>> cpu_mhz : 2593.930
>>> hw_caps :
>>> bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
>>> virt_caps : pv hvm hvm_directio pv_directio hap shadow
>>> total_memory : 130541
>>> free_memory : 63591
>>> sharing_freed_memory : 0
>>> sharing_used_memory : 0
>>> outstanding_claims : 0
>>> free_cpus : 0
>>> xen_major : 4
>>> xen_minor : 13
>>> xen_extra : -unstable
>>> xen_version : 4.13-unstable
>>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p
>>> hvm-3.0-x86_64
>>> xen_scheduler : credit2
>>> xen_pagesize : 4096
>>> platform_params : virt_start=0xffff800000000000
>>> xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty
>>
>> What is your original Xen base? This output was clearly obtained at the
>> end of the bisect process.
>>
>> There have been quite a few corrections since the release of Xen 4.13, so
>> please make sure you are running the most recent version (4.13.1).
>>
>>
>> Juergen
> 
> Sure, we have tested both RELEASE 4.13 and RELEASE 4.13.1. Unfortunately these corrections didn't help and the bug is still reproducible.
> 
>  From our testing it turns out that:
> 
> Known working revision: 997d6248a9ae932d0dbaac8d8755c2b15fec25dc
> Broken revision: 6278553325a9f76d37811923221b21db3882e017
> First bad commit: 7c7b407e77724f37c4b448930777a59a479feb21

Would it be possible to test xen unstable, too?

I could imagine e.g. commits b492c65da5ec5ed or 99266e31832fb4a4 having
an impact here.


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:22:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:22:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeexj-00007E-0t; Fri, 29 May 2020 13:22:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jeexh-000079-Pv
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:22:13 +0000
X-Inumbo-ID: 67feb1fa-a1af-11ea-a8b6-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67feb1fa-a1af-11ea-a8b6-12813bfff9fa;
 Fri, 29 May 2020 13:22:12 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GU7uOkr3EuzE612T1vejUKRWuo3H9021yEIyWHQ8qIkvsDCxl0YXuPIDLl+iycTskPPRM8vhw4
 PU47+ywWgYgh3D2aFanOHLqK+H/U/WkMDIactjIt+Dws7+b+gfiXXVmTnWINQ13Za7K5e5bH6l
 2sNqx7kbYP1K9Xx/wb2N2EY8TNpVYAT2feCJEMro+/aMk7iGJ0u8r04ReOkDU2KFNGQwyZLC90
 lmdNxqSZzxylpK3YwsSy6MXagzozGGEmRzSgCtEci7rMgTa8L8JrzOeGt2ZRNgBk7wd+LebSam
 0U0=
X-SBRS: 2.7
X-MesageID: 19108972
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="19108972"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24273.3201.443254.296963@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 14:22:09 +0100
To: "paul@xen.org" <paul@xen.org>
Subject: RE: [OSSTEST PATCH v2 00/49] Switch to Debian buster (= Debian stable)
In-Reply-To: <005401d635ac$90bf9510$b23ebf30$@xen.org>
References: <20200529111945.21394-1-ian.jackson@eu.citrix.com>
 <005401d635ac$90bf9510$b23ebf30$@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "committers@xenproject.org" <committers@xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paul Durrant writes ("RE: [OSSTEST PATCH v2 00/49] Switch to Debian buster (= Debian stable)"):
> I assume we can revert if things go badly wrong and being able to commission more machines would seem to be quite beneficial at this
> stage.

Thanks for your opinion.

It would be possible to revert the final switch, yes.  Most of the
other changes are supposed to work with stretch too.

I haven't yet done a test run with the new code against the old version
of Debian.  I will do that.  That will give confidence that we could, in
fact, revert things.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:27:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:27:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jef29-0000IQ-Jc; Fri, 29 May 2020 13:26:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tJvr=7L=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jef28-0000IL-21
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:26:48 +0000
X-Inumbo-ID: 0bad9e7e-a1b0-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0bad9e7e-a1b0-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 13:26:47 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Type:MIME-Version:References:Message-ID
 :Subject:Cc:To:From:Date:Sender:Reply-To:Content-Transfer-Encoding:Content-ID
 :Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:
 Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:List-Subscribe
 :List-Post:List-Owner:List-Archive;
 bh=y9d3Sz8TkZs8K4+aocoQALMZMUfk0CJsw4anA87e/mQ=; b=fKLRYmggfk771pQEovQC9Jfuby
 DE6ZqkPi70Hu4cVD6H6/qgl7289Kq64ApRlXBFpzhOLOm4k6R2TG0DI6A2McTvs+lDZvboolYRJjl
 X8TUD7Rm4bH0DA3gjJOpoS0jQiMYjRTzsdJO5pP8z+6KrMjv/eKgEzLdv9E6YujIMbT0=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jef25-0005FB-Oa; Fri, 29 May 2020 13:26:45 +0000
Received: from [212.230.157.105] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jef25-0007ZM-8N; Fri, 29 May 2020 13:26:45 +0000
Date: Fri, 29 May 2020 15:26:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Message-ID: <20200529132020.GN1195@Air-de-Roger>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
 <B5889544-3EB5-41ED-8428-8BCA05269371@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <B5889544-3EB5-41ED-8428-8BCA05269371@arm.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 08:32:51AM +0000, Bertrand Marquis wrote:
> Hi Jan
> 
> > On 29 May 2020, at 08:35, Jan Beulich <jbeulich@suse.com> wrote:
> > 
> > On 28.05.2020 20:54, Julien Grall wrote:
> >> On 28/05/2020 16:25, Bertrand Marquis wrote:
> >>> At the moment on Arm, a Linux guest running with KPTI enabled will
> >>> cause the following error when a context switch happens in user mode:
> >>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
> >>> 
> >>> This patch modifies runstate handling to map the area given by the
> >>> guest inside Xen during the hypercall.
> >>> This removes the guest virtual to physical conversion during context
> >>> switches, which removes the bug.
> >> 
> >> It would be good to spell out that a virtual address is not stable. So 
> >> relying on it is wrong.
> > 
> > Guests at present are permitted to change the mapping underneath the
> > virtual address provided (this may not be the best idea, but the
> > interface is like it is). Therefore I don't think the present
> > interface can be changed like this. Instead a new interface will need
> > adding which takes a guest physical address instead. (Which, in the
> > end, will merely be one tiny step towards making the hypercall
> > interfaces use guest physical addresses. And it would be nice if an
> > overall concept was hashed out first how that conversion should
> > occur, such that the change here could at least be made fit that
> > planned model. For example, an option might be to retain all present
> > hypercall numbering and simply dedicate a bit in the top level
> > hypercall numbers indicating whether _all_ involved addresses for
> > that operation are physical vs virtual ones.)
> 
> I definitely fully agree that moving to interfaces using physical addresses
> would definitely be better but would need new hypercall numbers (or the
> bit system you suggest) to keep backward compatibility.
> 
> Regarding the change of virtual address: even though this is theoretically
> possible with the current interface, I do not really see how it could be
> handled cleanly with KPTI, or even without it, as this would not be an
> atomic change on the guest side. The only way to cleanly do this would be
> to do a hypercall to remove the address in Xen and then re-invoke the
> hypercall to register the new address.
> 
> So the only way to solve the KPTI issue would actually be to create a new
> hypercall and keep the current bug/problem?

I think you will find it easier to just introduce a new hypercall that
uses a physical address and has a set of restrictions similar to
VCPUOP_register_vcpu_info for example than to bend the current
hypercall into doing something sane.
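
For illustration, such a physical-address based registration could take a
guest frame number plus an offset, in the spirit of
VCPUOP_register_vcpu_info. The structure below is purely a hypothetical
ABI sketch; the names and layout are not an actual Xen interface:

```c
#include <stdint.h>

/*
 * Hypothetical sketch of a physical-address flavour of runstate
 * registration. Like VCPUOP_register_vcpu_info's argument, it
 * identifies the area by frame plus offset rather than by a guest
 * virtual address, so remappings in the guest cannot invalidate it.
 */
struct vcpu_register_runstate_phys {
    uint64_t gfn;     /* guest frame number holding the runstate area */
    uint32_t offset;  /* byte offset of the area within that frame    */
    uint32_t pad;     /* keep the structure 8-byte sized/aligned      */
};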

TBH I would just remove the error message on Arm from the current
hypercall; I don't think it's useful. If there's corruption caused by
the hypercall we could always make it a noop and simply update the
runstate area only once at registration and leave it like that. The
guest should check the timestamp in the data and realize the
information is stale.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:30:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:30:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jef5B-0000Qm-48; Fri, 29 May 2020 13:29:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jef59-0000Qh-Iv
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:29:55 +0000
X-Inumbo-ID: 7b52d686-a1b0-11ea-a8b6-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7b52d686-a1b0-11ea-a8b6-12813bfff9fa;
 Fri, 29 May 2020 13:29:54 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id B62E2AD6B;
 Fri, 29 May 2020 13:29:53 +0000 (UTC)
Subject: Re: [PATCH v10 1/9] x86emul: address x86_insn_is_mem_{access, write}()
 omissions
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <f41a4f27-bbe2-6450-38c1-6c4e23f2b07b@suse.com>
 <8e976b4b-60f2-bf94-843d-0fe0ba57087c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5e46ec9b-2ae0-3d28-01c8-794356532456@suse.com>
Date: Fri, 29 May 2020 15:29:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <8e976b4b-60f2-bf94-843d-0fe0ba57087c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 14:18, Andrew Cooper wrote:
> On 25/05/2020 15:26, Jan Beulich wrote:
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -11474,25 +11474,87 @@ x86_insn_operand_ea(const struct x86_emu
>>      return state->ea.mem.off;
>>  }
>>  
>> +/*
>> + * This function means to return 'true' for all supported insns with explicit
>> + * accesses to memory.  This means also insns which don't have an explicit
>> + * memory operand (like POP), but it does not mean e.g. segment selector
>> + * loads, where the descriptor table access is considered an implicit one.
>> + */
>>  bool
>>  x86_insn_is_mem_access(const struct x86_emulate_state *state,
>>                         const struct x86_emulate_ctxt *ctxt)
>>  {
>> +    if ( mode_64bit() && state->not_64bit )
>> +        return false;
> 
> Is this path actually used?

Yes, it is. It's only x86_emulate() which has

    generate_exception_if(state->not_64bit && mode_64bit(), EXC_UD);

right now.

> state->not_64bit ought to fail instruction
> decode, at which point we wouldn't have a valid state to be used here.

x86_decode() currently doesn't do much raising of #UD at all, I
think. If it weren't like this, the not_64bit field wouldn't be
needed - it's used only to communicate from decode to execute.
We're not entirely consistent with this, though: see in
x86_decode_onebyte(), a few lines below the block of case labels
setting that field,

    case 0x9a: /* call (far, absolute) */
    case 0xea: /* jmp (far, absolute) */
        generate_exception_if(mode_64bit(), EXC_UD);

We could certainly drop the field and raise #UD during decode
already, but don't you think we then should do so for all
encodings that ultimately lead to #UD, e.g. also for AVX insns
without AVX available to the guest? This would make
x86_decode() quite a bit larger, as it would then also need to
have a giant switch() (or something else that's suitable to
cover all cases).
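
To make the shape of that "giant switch()" concrete, here is a sketch
(illustrative and deliberately partial, not the emulator's actual code)
classifying one-byte opcodes that unconditionally raise #UD in 64-bit
mode, the kind of check decode-time filtering would need for every
encoding:

```c
#include <stdbool.h>
#include <stdint.h>

/* Partial classifier for one-byte opcodes invalid in 64-bit mode.
 * A real decode-time filter would have to cover all encodings,
 * including e.g. AVX insns when AVX isn't available to the guest. */
static bool onebyte_valid_in_64bit(uint8_t op)
{
    switch ( op )
    {
    case 0x06: case 0x07:   /* push/pop %es         */
    case 0x0e:              /* push %cs             */
    case 0x16: case 0x17:   /* push/pop %ss         */
    case 0x1e: case 0x1f:   /* push/pop %ds         */
    case 0x27: case 0x2f:   /* daa, das             */
    case 0x37: case 0x3f:   /* aaa, aas             */
    case 0x60: case 0x61:   /* pusha, popa          */
    case 0x62:              /* bound                */
    case 0x9a:              /* call (far, absolute) */
    case 0xce:              /* into                 */
    case 0xd4: case 0xd5:   /* aam, aad             */
    case 0xea:              /* jmp (far, absolute)  */
        return false;
    default:
        return true;
    }
}
```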

> Everything else looks ok, so Acked-by: Andrew Cooper
> <andrew.cooper3@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:37:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:37:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefCY-0001Ov-UR; Fri, 29 May 2020 13:37:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jefCX-0001Oq-V8
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:37:34 +0000
X-Inumbo-ID: 8ca936d7-a1b1-11ea-a8b7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8ca936d7-a1b1-11ea-a8b7-12813bfff9fa;
 Fri, 29 May 2020 13:37:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gRMf3JL5qzUN0yHgRk3JM7qMoHl49Y05x8/7LcQXiuw=; b=KuaznYQ7X8KHbVDAllbBAhop1T
 ceasurU4g6hde745U42si5i9HGciEimH+biicEvPADFBkr0v/pUGy+4xN6nL6FWWab3Lb29tKgKJm
 PBTfC9Urkp4YLF7PkZo6LpgnWAvDnXchtXzFNkjBrEuJ7wQXP8wQrb6XfK7rm4JA7YXc=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jefCS-0005Tl-Ef; Fri, 29 May 2020 13:37:28 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jefCR-0008DK-VP; Fri, 29 May 2020 13:37:28 +0000
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger@xen.org>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
 <B5889544-3EB5-41ED-8428-8BCA05269371@arm.com>
 <20200529132020.GN1195@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <e7a757b4-b285-7089-91ea-d4248443aaf1@xen.org>
Date: Fri, 29 May 2020 14:37:24 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <20200529132020.GN1195@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 29/05/2020 14:26, Roger Pau Monné wrote:
> TBH I would just remove the error message on Arm from the current
> hypercall, I don't think it's useful.
The message is part of the helper get_page_from_gva(), which is also 
used by copy_{to,from}_guest. While it may not be useful in the context 
of the runstate, it was introduced because there was some other weird 
bug happening before KPTI even existed (see [1]). I haven't yet managed 
to get to the bottom of the issue.

So I would still very much like to keep the message in place, although 
we could reduce the number of cases where it is emitted based on the 
type of fault.

Note this is a dprintk(XENLOG_G_DEBUG, ...), so the message is only 
logged in debug builds.

Cheers,

[1] https://lists.xen.org/archives/html/xen-devel/2017-11/msg00942.html

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:38:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:38:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefD2-0001Rt-CR; Fri, 29 May 2020 13:38:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jefD1-0001Rk-H2
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:38:03 +0000
X-Inumbo-ID: 9aea02b6-a1b1-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9aea02b6-a1b1-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 13:37:56 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Wh8FwrB/KXKniOGfeFCqFk1wCrtbix80E+6SD017LNY=; b=pf/xPN1APk+jvzr6txTUUCaOK
 qQwCj0qiSxVuk1hSCJTYAA7r1ne8ad2tgCgGXdfWRN9NTrJmkVlqDET71Qk+U4ciB+K4WP9n6INbK
 rRDa3Z5QuC6Kkt86fmYdjTXUFw9JN1OYyPYbISa7pIfi40bb/1Rx/2fT01qngjiK4p4v0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jefCu-0005U9-Eu; Fri, 29 May 2020 13:37:56 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jefCt-0001sl-Ui; Fri, 29 May 2020 13:37:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jefCt-00087T-U0; Fri, 29 May 2020 13:37:55 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150479-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150479: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=9b9a83e43598b231111487378d6037fa8fa473d5
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 13:37:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150479 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150479/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  9b9a83e43598b231111487378d6037fa8fa473d5
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    0 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days    4 attempts
Testing same since   150472  2020-05-29 11:01:58 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9b9a83e43598b231111487378d6037fa8fa473d5
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:22:50 2020 +0200

    SUPPORT.md: add hypervisor file system
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 7f8d2dc29ea5a51f88ec253be93970768ec9fac2
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:22:42 2020 +0200

    CHANGELOG: add hypervisor file system support
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit 02e9a9cf20950e78c816987415ed920d72444f94
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:31 2020 +0200

    xen: remove XEN_SYSCTL_set_parameter support
    
    The functionality of XEN_SYSCTL_set_parameter is available via hypfs
    now, so it can be removed.
    
    This allows to remove the kernel_param structure for runtime parameters
    by putting the now only used structure element into the hypfs node
    structure of the runtime parameters.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit a2486890689713116351e5bbfb8f104c797479cc
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:16 2020 +0200

    tools/libxc: remove xc_set_parameters()
    
    There is no user of xc_set_parameters() left, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2ea4b9829cf95b59f75f0c70543f2368d702305e
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:08 2020 +0200

    tools/libxl: use libxenhypfs for setting xen runtime parameters
    
    Instead of xc_set_parameters() use xenhypfs_write() for setting
    parameters of the hypervisor.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a659d7cab9afcba337cb60225738fd85ff7aa3da
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:18:36 2020 +0200

    xen: add runtime parameter access support to hypfs
    
    Add support to read and modify values of hypervisor runtime parameters
    via the hypervisor file system.
    
    As runtime parameters can be modified via a sysctl, too, this path has
    to take the hypfs rw_lock as writer.
    
    For custom runtime parameters the connection between the parameter
    value and the file system is done via an init function which will set
    the initial value (if needed) and the leaf properties.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 58263ed7713e8132c2bc00bc870399ea31bf2231
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:15:54 2020 +0200

    xen: add /buildinfo/config entry to hypervisor filesystem
    
    Add the /buildinfo/config entry to the hypervisor filesystem. This
    entry contains the .config file used to build the hypervisor.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit b5d4711d2b17753498a3f587585f11bf9ca5af85
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:14:51 2020 +0200

    xen: provide version information in hypfs
    
    Provide version and compile information in /buildinfo/ node of the
    Xen hypervisor file system. As this information is accessible by dom0
    only no additional security problem arises.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 048f82ccd1b3dda511af25a7a8524c8ba5ca2786
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:14:24 2020 +0200

    xen/hypfs: make struct hypfs_entry_leaf initializers work with gcc 4.1
    
    gcc 4.1 has problems with static initializers for anonymous unions.
    Fix this by naming the union in struct hypfs_entry_leaf.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Jan Beulich <jbeulich@suse.com>

commit ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:32 2020 +0200

    tools: add xenfs tool
    
    Add the xenfs tool for accessing the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 86234eafb95295621aef6c618e4c22c10d8e4138
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:21 2020 +0200

    libs: add libxenhypfs
    
    Add the new library libxenhypfs for access to the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5b5ccafb0c425b85a60fd4f241d5f6951d0e4928
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:50 2020 +0200

    xen: add basic hypervisor filesystem support
    
    Add the infrastructure for the hypervisor filesystem.
    
    This includes the hypercall interface and the base functions for
    entry creation, deletion and modification.
    
    In order not to have to repeat the same pattern multiple times in case
    adding a new node should BUG_ON() failure, the helpers for adding a
    node (hypfs_add_dir() and hypfs_add_leaf()) get a nofault parameter
    causing the BUG() in case of a failure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 0e9dcd0159c671608e154da5b8b7e0edd2905067
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:35 2020 +0200

    docs: add feature document for Xen hypervisor sysfs-like support
    
    On the 2019 Xen developer summit there was agreement that the Xen
    hypervisor should gain support for a hierarchical name-value store
    similar to the Linux kernel's sysfs.
    
    In the beginning there should only be basic support: entries can be
    added from the hypervisor itself only, there is a simple hypercall
    interface to read the data.
    
    Add a feature document for setting the base of a discussion regarding
    the desired functionality and the entries to add.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit c48a9956e334a5dd99e846d04ad56185b07aab64
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:08 2020 +0200

    xen: add a generic way to include binary files as variables
    
    Add a new script xen/tools/binfile for including a binary file at build
    time being usable via a pointer and a size variable in the hypervisor.
    
    Make use of that generic tool in xsm.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:41:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:41:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefG6-0002Pa-Tb; Fri, 29 May 2020 13:41:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jefG6-0002PV-16
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:41:14 +0000
X-Inumbo-ID: 0fdc211c-a1b2-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fdc211c-a1b2-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 13:41:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 68494AF5A;
 Fri, 29 May 2020 13:41:12 +0000 (UTC)
Subject: Re: [PATCH v10 2/9] x86emul: rework CMP and TEST emulation
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <5843dca9-1a1a-a32e-3cb0-95cd93533723@suse.com>
 <c215f77f-f645-eb08-3ac1-7d9f211e1abf@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <629b042b-1226-6d2d-6509-569bb3c64abe@suse.com>
Date: Fri, 29 May 2020 15:41:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <c215f77f-f645-eb08-3ac1-7d9f211e1abf@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 14:24, Andrew Cooper wrote:
> On 25/05/2020 15:26, Jan Beulich wrote:
>> Unlike similarly encoded insns these don't write their memory operands,
> 
> "write to their".
> 
>> and hence x86_is_mem_write() should return false for them. However,
>> rather than adding special logic there, rework how their emulation gets
>> done, by making decoding attributes properly describe the r/o nature of
>> their memory operands.
> 
> Describe how?  I see you've changed the order of operand encoding, but
> then overridden it back?

There's no overriding back, I don't think: I change the table entries
for opcodes 0x38 and 0x39, with no other adjustments to the attributes
later on. For the other opcodes I leave the table entries as they are,
and override the attributes for the specific sub-cases (identified by
ModRM.reg).

For opcodes 0x38 and 0x39 the change of the table entries implies
changing the order of operands as passed to emulate_2op_SrcV(), hence
the splitting of the cases in the main switch().

I didn't think it was necessary to spell this out in the commit message, 
but of course I can re-use most of the text above and add it there, if 
you think that would help.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:51:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:51:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefQ2-0003QB-S1; Fri, 29 May 2020 13:51:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L269=7L=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jefQ1-0003Q6-8U
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:51:29 +0000
X-Inumbo-ID: 7e1d2c4c-a1b3-11ea-81bc-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e1d2c4c-a1b3-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 13:51:28 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 335D0A2775;
 Fri, 29 May 2020 15:51:27 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 2C15DA277D;
 Fri, 29 May 2020 15:51:26 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id 3uw1pO5uso_W; Fri, 29 May 2020 15:51:25 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 70C5CA277A;
 Fri, 29 May 2020 15:51:25 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id WVC7vMyh_we3; Fri, 29 May 2020 15:51:25 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 4CFD2A2775;
 Fri, 29 May 2020 15:51:25 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 41C2522330;
 Fri, 29 May 2020 15:50:55 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id S0oPN9scV3LA; Fri, 29 May 2020 15:50:49 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 909C32234C;
 Fri, 29 May 2020 15:50:49 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id JHyWT0tei2K5; Fri, 29 May 2020 15:50:49 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 67B7022090;
 Fri, 29 May 2020 15:50:49 +0200 (CEST)
Date: Fri, 29 May 2020 15:50:49 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: =?utf-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>
Message-ID: <1623831291.59734817.1590760249321.JavaMail.zimbra@cert.pl>
In-Reply-To: <1f68a02a-3472-1bb0-90b9-6f3ccefca0b3@suse.com>
References: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
 <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
 <1150720994.59695220.1590756705329.JavaMail.zimbra@cert.pl>
 <1f68a02a-3472-1bb0-90b9-6f3ccefca0b3@suse.com>
Subject: Re: [BUG] Core scheduling patches causing deadlock in some situations
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: Core scheduling patches causing deadlock in some situations
Thread-Index: MiQjRIgSyAs1U7BGHrKsgOhfnCVHNw==
X-Zimbra-DL: chivay@cert.pl, bonus@cert.pl
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: chivay@cert.pl, xen-devel@lists.xenproject.org, bonus@cert.pl,
 Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 29 May 2020, at 15:15, Jürgen Groß jgross@suse.com wrote:

> On 29.05.20 14:51, Michał Leszczyński wrote:
>> ----- On 29 May 2020, at 14:44, Jürgen Groß jgross@suse.com wrote:
>>
>>> On 29.05.20 14:30, Michał Leszczyński wrote:
>>>> Hello,
>>>>
>>>> I'm running DRAKVUF on a Dell Inc. PowerEdge R640/08HT8T server with an
>>>> Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz.
>>>> When upgrading from Xen RELEASE 4.12 to 4.13, we have noticed some stability
>>>> problems concerning freezes of Dom0 (Debian Buster):
>>>>
>>>> ---
>>>>
>>>> maj 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
>>>> maj 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP) idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
>>>> maj 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
>>>> maj 27 23:17:02 debian kernel: NMI backtrace for cpu 0
>>>> maj 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
>>>> maj 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T, BIOS 2.1.8 04/30/2019
>>>> maj 27 23:17:02 debian kernel: Call Trace:
>>>> maj 27 23:17:02 debian kernel: <IRQ>
>>>> maj 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
>>>> maj 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
>>>> maj 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
>>>> maj 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
>>>> maj 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
>>>> maj 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
>>>> maj 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
>>>> maj 27 23:17:02 debian kernel: update_process_times+0x28/0x60
>>>> maj 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60
>>>>
>>>> ---
>>>>
>>>> This usually results in the machine being completely unresponsive and performing an
>>>> automated reboot after some time.
>>>>
>>>> I've bisected commits between 4.12 and 4.13 and it seems like this is the patch
>>>> which introduced a bug:
>>>> https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
>>>>
>>>> Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from the fresh
>>>> boot of the machine on which the bug was reproduced.
>>>>
>>>> I'm also attaching the `xl info` output from this machine:
>>>>
>>>> ---
>>>>
>>>> release : 4.19.0-6-amd64
>>>> version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
>>>> machine : x86_64
>>>> nr_cpus : 14
>>>> max_cpu_id : 223
>>>> nr_nodes : 1
>>>> cores_per_socket : 14
>>>> threads_per_core : 1
>>>> cpu_mhz : 2593.930
>>>> hw_caps :
>>>> bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
>>>> virt_caps : pv hvm hvm_directio pv_directio hap shadow
>>>> total_memory : 130541
>>>> free_memory : 63591
>>>> sharing_freed_memory : 0
>>>> sharing_used_memory : 0
>>>> outstanding_claims : 0
>>>> free_cpus : 0
>>>> xen_major : 4
>>>> xen_minor : 13
>>>> xen_extra : -unstable
>>>> xen_version : 4.13-unstable
>>>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
>>>> xen_scheduler : credit2
>>>> xen_pagesize : 4096
>>>> platform_params : virt_start=0xffff800000000000
>>>> xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty
>>>
>>> Which is your original Xen base? This output is clearly obtained at the
>>> end of the bisect process.
>>>
>>> There have been quite some corrections since the release of Xen 4.13, so
>>> please make sure you are running the most actual version (4.13.1).
>>>
>>>
>>> Juergen
>>
>> Sure, we have tested both RELEASE 4.13 and RELEASE 4.13.1. Unfortunately these
>> corrections didn't help and the bug is still reproducible.
>>
>>  From our testing it turns out that:
>>
>> Known working revision: 997d6248a9ae932d0dbaac8d8755c2b15fec25dc
>> Broken revision: 6278553325a9f76d37811923221b21db3882e017
>> First bad commit: 7c7b407e77724f37c4b448930777a59a479feb21
>
> Would it be possible to test xen unstable, too?
>
> I could imagine e.g. commit b492c65da5ec5ed or 99266e31832fb4a4 to have
> an impact here.
>
>
> Juergen


I've tried the b492c65da5ec5ed revision but it seems that there is some problem
with ALTP2M support, so I can't launch anything at all.

maj 29 15:45:32 debian drakrun[1223]: Failed to set HVM_PARAM_ALTP2M, RC: -1
maj 29 15:45:32 debian drakrun[1223]: VMI_ERROR: xc_altp2m_switch_to_view returned rc: -1


xl info:

---

release                : 4.19.0-6-amd64
version                : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
machine                : x86_64
nr_cpus                : 14
max_cpu_id             : 223
nr_nodes               : 1
cores_per_socket       : 14
threads_per_core       : 1
cpu_mhz                : 2593.977
hw_caps                : bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
virt_caps              : pv hvm hvm_directio pv_directio hap shadow iommu_hap_pt_share
total_memory           : 130541
free_memory            : 63591
sharing_freed_memory   : 0
sharing_used_memory    : 0
outstanding_claims     : 0
free_cpus              : 0
xen_major              : 4
xen_minor              : 14
xen_extra              : -unstable
xen_version            : 4.14-unstable
xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p hvm-3.0-x86_64
xen_scheduler          : credit2
xen_pagesize           : 4096
platform_params        : virt_start=0xffff800000000000
xen_changeset          : Thu May 14 17:36:13 2020 +0200 git:b492c65da5-dirty
xen_commandline        : placeholder dom0_mem=65270M,max:65270M dom0_max_vcpus=6 dom0_vcpus_pin=1 force-ept=1 ept=pml=0 hap_1gb=0 hap_2mb=0 altp2m=1 smt=0 no-real-mode edd=off
cc_compiler            : gcc (Ubuntu 7.5.0-3ubuntu1~18.04) 7.5.0
cc_compile_by          : root
cc_compile_domain      :
cc_compile_date        : Fri May 29 13:18:41 UTC 2020
build_id               : cd3948792d88ec0bc45e03b227f6cbab9572b76b
xend_config_format     : 4

---

Best regards,
Michał Leszczyński


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:53:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:53:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefS1-0003Wo-7w; Fri, 29 May 2020 13:53:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=VNM+=7L=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1jefRz-0003WI-34
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:53:31 +0000
X-Inumbo-ID: c66c54e6-a1b3-11ea-81bc-bc764e2007e4
Received: from mo4-p01-ob.smtp.rzone.de (unknown [81.169.146.164])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c66c54e6-a1b3-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 13:53:29 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 46.9.0 DYNA|AUTH)
 with ESMTPSA id k02b75w4TDrA62H
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256 bits))
 (Client did not present a certificate);
 Fri, 29 May 2020 15:53:10 +0200 (CEST)
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v1] INSTALL: remove TODO section
Date: Fri, 29 May 2020 15:53:03 +0200
Message-Id: <20200529135303.18457-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Olaf Hering <olaf@aepfle.de>, Stefano Stabellini <sstabellini@kernel.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The default value '/' for DESTDIR is unusual, but probably does not hurt.

Fixes commit f2b40dababedcd8355bf3e85d00baf17f9827131
Fixes commit 8e986e5a61efeca92b9b268e77957d45d8316ee4

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 INSTALL | 7 -------
 1 file changed, 7 deletions(-)

diff --git a/INSTALL b/INSTALL
index 72dc4b67dd..0d3eb89f02 100644
--- a/INSTALL
+++ b/INSTALL
@@ -365,12 +365,5 @@ make XEN_TARGET_ARCH=arm64 CROSS_COMPILE=aarch64-linux-gnu- \
         DESTDIR=/some/path install
 
 
-TODO
-====
-
- - DESTDIR should be empty, not "/"
- - add make uninstall targets
- - replace private path variables as needed (SBINDIR/sbindir)
- - ...
 
 # vim: tw=72 et


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:54:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:54:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefSI-0003Y5-HT; Fri, 29 May 2020 13:53:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jefSI-0003Xz-12
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:53:50 +0000
X-Inumbo-ID: d1e03c3e-a1b3-11ea-9947-bc764e2007e4
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.2.52]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1e03c3e-a1b3-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 13:53:48 +0000 (UTC)
Received: from DB8PR03CA0008.eurprd03.prod.outlook.com (2603:10a6:10:be::21)
 by DB8PR08MB3994.eurprd08.prod.outlook.com (2603:10a6:10:a6::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 29 May
 2020 13:53:45 +0000
Received: from DB5EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::ce) by DB8PR03CA0008.outlook.office365.com
 (2603:10a6:10:be::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.17 via Frontend
 Transport; Fri, 29 May 2020 13:53:45 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT018.mail.protection.outlook.com (10.152.20.69) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 29 May 2020 13:53:45 +0000
Received: ("Tessian outbound b157666c5529:v57");
 Fri, 29 May 2020 13:53:45 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: f9234978a896f37d
X-CR-MTA-TID: 64aa7808
Received: from 40103269d7fb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 2A06A3E5-2B8E-4F71-9F85-3E409D9B2B90.1; 
 Fri, 29 May 2020 13:53:39 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 40103269d7fb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 29 May 2020 13:53:39 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB3034.eurprd08.prod.outlook.com (2603:10a6:5:24::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 29 May
 2020 13:53:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3045.018; Fri, 29 May 2020
 13:53:35 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Roger Pau Monné <roger@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai92N6AgADfWQCAAAi2gIAACV6AgAACdoCAAEpYAA==
Date: Fri, 29 May 2020 13:53:35 +0000
Message-ID: <B04DCD98-E475-49C8-8540-68BE1DB96AF5@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <52e26c9d-b662-2597-b521-dacf4f8acfc8@suse.com>
 <077FCC5B-AD47-4707-AF55-12F0455ED26F@arm.com>
 <20200529092716.GK1195@Air-de-Roger>
In-Reply-To: <20200529092716.GK1195@Air-de-Roger>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <69788FAB475F1E43A72E0D9EC3E1010A@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, "paul@xen.org" <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On 29 May 2020, at 10:27, Roger Pau Monné <roger@xen.org> wrote:
> 
> On Fri, May 29, 2020 at 09:18:42AM +0000, Bertrand Marquis wrote:
>> Hi Jan,
>> 
>>> On 29 May 2020, at 09:45, Jan Beulich <jbeulich@suse.com> wrote:
>>> 
>>> On 29.05.2020 10:13, Bertrand Marquis wrote:
>>>>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>>>> AFAICT, there is no restriction on when the runstate hypercall can
>>>>> be called. So this can even be called before the vCPU is brought up.
>>>> 
>>>> I understand the remark, but it still feels very weird to allow an
>>>> invalid address in a hypercall.
>>>> Wouldn't we have a lot of potential issues accepting an address
>>>> that we cannot check?
>>> 
>>> I don't think so: The hypervisor uses copy_to_guest() to protect
>>> itself from the addresses being invalid at the time of copying.
>>> If the guest doesn't make sure they're valid at that time, it
>>> simply won't get the information (perhaps until Xen's next
>>> attempt to copy it out).
>>> 
>>> You may want to take a look at the x86 side of this (also the
>>> vCPU time updating): Due to the way x86-64 PV guests work, the
>>> address may legitimately be unmapped at the time Xen wants to
>>> copy it, when the vCPU is currently executing guest user mode
>>> code. In such a case the copy gets retried the next time the
>>> guest transitions from user to kernel mode (which involves a
>>> page table change).
>> 
>> If I understand everything correctly, runstate is updated only if there is
>> a context switch in Xen while the guest is running in kernel mode and
>> if the address is mapped at that time.
>> 
>> So this is best effort in Xen, and the guest cannot really rely on the
>> runstate information (as it might not be up to date).
>> Could this have an impact somehow if this is used for scheduling?
>> 
>> In the end the only acceptable trade-off would be to:
>> - reduce error verbosity and just ignore it
>> - introduce a new system call using a physical address
>>  -> Using a virtual address with restrictions sounds very complex
>>  to document (current core, no remapping).
>> 
>> But it feels like having only one hypercall using guest physical addresses
>> would not really be logical, and this kind of change should be made across
>> all hypercalls if it is done.
> 
> FTR, there are other hypercalls using a physical address instead of a
> linear one, see VCPUOP_register_vcpu_info for example. It's just a
> mixed bag right now, with some hypercalls using a linear address and
> some using a physical one.
> 
> I think introducing a new hypercall that uses a physical address would
> be fine, and then you can add a set of restrictions similar to the
> ones listed by VCPUOP_register_vcpu_info.

Yes, I found that, and I also wondered why runstate was not in fact
included in the vcpu_info.

> 
> Changing the current hypercall as proposed is risky, but I think the
> current behavior is broken by design, especially on auto-translated
> guests, even more with XPTI.
> 
> Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 29 13:55:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 13:55:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefUD-0003lo-2U; Fri, 29 May 2020 13:55:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jefUB-0003li-ER
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 13:55:47 +0000
X-Inumbo-ID: 18305b6a-a1b4-11ea-a8be-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 18305b6a-a1b4-11ea-a8be-12813bfff9fa;
 Fri, 29 May 2020 13:55:46 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:58992
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jefU8-000BYy-Jy (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 14:55:44 +0100
Subject: Re: [PATCH v10 5/9] x86emul: support MOVDIR{I,64B} insns
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <ae7ff12a-edf9-45b5-b7c9-6c5b5d0739c0@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <15b5f062-1911-9561-58b8-78c8027f3a68@citrix.com>
Date: Fri, 29 May 2020 14:55:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <ae7ff12a-edf9-45b5-b7c9-6c5b5d0739c0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:28, Jan Beulich wrote:
> Introduce a new blk() hook, paralleling the rmw() one in a certain way,
> but being intended for larger data sizes, and hence its HVM intermediate
> handling function doesn't fall back to splitting the operation if the
> requested virtual address can't be mapped.
>
> Note that SDM revision 071 doesn't specify exception behavior for
> ModRM.mod == 0b11; assuming #UD here.

Once again - I don't think this wants calling out.  That encoding space
will be used for a new Grp at some point in the future, and be a
different instruction.

>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Reviewed-by: Paul Durrant <paul@xen.org>

Acked-by: Andrew Cooper <andrew.cooper@citrix.com>, although with one
recommendation...

> --- a/xen/include/public/arch-x86/cpufeatureset.h
> +++ b/xen/include/public/arch-x86/cpufeatureset.h
> @@ -241,6 +241,8 @@ XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14
>  XEN_CPUFEATURE(TSXLDTRK,      6*32+16) /*a  TSX load tracking suspend/resume insns */
>  XEN_CPUFEATURE(RDPID,         6*32+22) /*A  RDPID instruction */
>  XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
> +XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
> +XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */

I'd be tempted to leave these as 'a' for now, seeing as we have the ability.

These instructions aren't actually of any use for domains without PCI
devices, and a "default" will be more migrateable as a consequence.

We're going to need further toolstack changes to make CXL-passthrough
viable, so instruction adjustments can be part of that work.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:00:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefYS-0004e4-Lu; Fri, 29 May 2020 14:00:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jefYQ-0004dz-R7
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:00:10 +0000
X-Inumbo-ID: b5292898-a1b4-11ea-a8c2-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5292898-a1b4-11ea-a8c2-12813bfff9fa;
 Fri, 29 May 2020 14:00:09 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A75E4AC24;
 Fri, 29 May 2020 14:00:08 +0000 (UTC)
Subject: Re: [PATCH v10 5/9] x86emul: support MOVDIR{I,64B} insns
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <ae7ff12a-edf9-45b5-b7c9-6c5b5d0739c0@suse.com>
 <15b5f062-1911-9561-58b8-78c8027f3a68@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <de4bc55c-64d5-d0b8-893d-6b397ea66042@suse.com>
Date: Fri, 29 May 2020 16:00:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <15b5f062-1911-9561-58b8-78c8027f3a68@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 15:55, Andrew Cooper wrote:
> On 25/05/2020 15:28, Jan Beulich wrote:
>> Introduce a new blk() hook, paralleling the rmw() one in a certain way,
>> but being intended for larger data sizes, and hence its HVM intermediate
>> handling function doesn't fall back to splitting the operation if the
>> requested virtual address can't be mapped.
>>
>> Note that SDM revision 071 doesn't specify exception behavior for
>> ModRM.mod == 0b11; assuming #UD here.
> 
> Once again - I don't think this wants calling out.  That encoding space
> will be used for a new Grp at some point in the future, and be a
> different instruction.

Possible; without it spelled out one may also think (at least
for MOVDIRI) that the register-only encoding could be a
re-encoding of plain MOV. I'd prefer to keep it.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Reviewed-by: Paul Durrant <paul@xen.org>
> 
> Acked-by: Andrew Cooper <andrew.cooper@citrix.com>, although with one
> recommendation...

Thanks and ...

>> --- a/xen/include/public/arch-x86/cpufeatureset.h
>> +++ b/xen/include/public/arch-x86/cpufeatureset.h
>> @@ -241,6 +241,8 @@ XEN_CPUFEATURE(AVX512_VPOPCNTDQ, 6*32+14
>>  XEN_CPUFEATURE(TSXLDTRK,      6*32+16) /*a  TSX load tracking suspend/resume insns */
>>  XEN_CPUFEATURE(RDPID,         6*32+22) /*A  RDPID instruction */
>>  XEN_CPUFEATURE(CLDEMOTE,      6*32+25) /*A  CLDEMOTE instruction */
>> +XEN_CPUFEATURE(MOVDIRI,       6*32+27) /*A  MOVDIRI instruction */
>> +XEN_CPUFEATURE(MOVDIR64B,     6*32+28) /*A  MOVDIR64B instruction */
> 
> I'd be tempted to leave these as 'a' for now, seeing as we have the ability.
> 
> These instructions aren't actually of any use for domains without PCI
> devices, and a "default" will be more migrateable as a consequence.

... okay, done.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:02:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:02:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefaw-0004qs-6w; Fri, 29 May 2020 14:02:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jefav-0004qn-AG
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:02:45 +0000
X-Inumbo-ID: 111e2da6-a1b5-11ea-81bc-bc764e2007e4
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0c::61b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 111e2da6-a1b5-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 14:02:44 +0000 (UTC)
Received: from AM6PR08CA0012.eurprd08.prod.outlook.com (2603:10a6:20b:b2::24)
 by VE1PR08MB4704.eurprd08.prod.outlook.com (2603:10a6:802:b0::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 29 May
 2020 14:02:41 +0000
Received: from AM5EUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::d9) by AM6PR08CA0012.outlook.office365.com
 (2603:10a6:20b:b2::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.17 via Frontend
 Transport; Fri, 29 May 2020 14:02:41 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT057.mail.protection.outlook.com (10.152.17.44) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 29 May 2020 14:02:40 +0000
Received: ("Tessian outbound 952576a3272a:v57");
 Fri, 29 May 2020 14:02:40 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3674b68a6b74deb4
X-CR-MTA-TID: 64aa7808
Received: from 8d5c88b0e0b7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 81A0429F-B201-4597-B761-C634A939EFB6.1; 
 Fri, 29 May 2020 14:02:35 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8d5c88b0e0b7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 29 May 2020 14:02:35 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB2969.eurprd08.prod.outlook.com (2603:10a6:5:20::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.19; Fri, 29 May
 2020 14:02:29 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3045.018; Fri, 29 May 2020
 14:02:29 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai92N6AgADfWQCAABkLAIAASFMA
Date: Fri, 29 May 2020 14:02:29 +0000
Message-ID: <1CEE9707-3201-4218-ACF0-7829181A769C@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <07a1008d-1acb-aab6-ab10-176e7856a296@xen.org>
In-Reply-To: <07a1008d-1acb-aab6-ab10-176e7856a296@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
Content-Type: text/plain; charset="utf-8"
Content-ID: <F2D19437BD76B5468E1A2354D8E6845D@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB2969
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 May 2020 14:02:40.9844 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d5fb8be7-504c-4a90-4e45-08d803d8f3c6
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4704
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Roger Pau Monné <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 29 May 2020, at 10:43, Julien Grall <julien@xen.org> wrote:
> 
> Hi Bertrand,
> 
> On 29/05/2020 09:13, Bertrand Marquis wrote:
>> Hi Julien,
>>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>> 
>>> Hi Bertrand,
>>> 
>>> Thank you for the patch.
>>> 
>>> On 28/05/2020 16:25, Bertrand Marquis wrote:
>>>> At the moment on Arm, a Linux guest running with KTPI enabled will
>>>> cause the following error when a context switch happens in user mode:
>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>> This patch is modifying runstate handling to map the area given by the
>>>> guest inside Xen during the hypercall.
>>>> This is removing the guest virtual to physical conversion during context
>>>> switches which removes the bug
>>> 
>>> It would be good to spell out that a virtual address is not stable. So relying on it is wrong.
>>> 
>>>> and improve performance by preventing to
>>>> walk page tables during context switches.
>>> 
>>> With Secret free hypervisor in mind, I would like to suggest to map/unmap the runstate during context switch.
>>> 
>>> The cost should be minimal when there is a direct map (i.e on Arm64 and x86) and still provide better performance on Arm32.
>> Even with a minimal cost this is still adding some non real-time behaviour to the context switch.
> 
> Just to be clear, by minimal I meant the mapping part is just a virt_to_mfn() call and the unmapping is a NOP.
> 
> IHMO, if virt_to_mfn() ends up to add non-RT behavior then you have much bigger problem than just this call.
> 
>> But definitely from the security point of view as we have to map a page from the guest, we could have accessible in Xen some data that should not be there.
>> There is a trade here where:
>> - xen can protect by map/unmapping
>> - a guest which wants to secure his data should either not use it or make sure there is nothing else in the page
> 
> Both are valid and depends on your setup. One may want to protect all the existing guests, so modifying a guest may not be a solution.

The fact to map/unmap is increasing the protection but not removing the problem completely.

> 
>> That sounds like a thread local storage kind of problematic where we want data from xen to be accessible fast from the guest and easy to be modified from xen.
> 
> Agree. On x86, they have a perdomain mapping so it would be possible to do it. We would need to add the feature on Arm.

That would definitely be cleaner.

> 
>>> 
>>> The change should be minimal compare to the current approach but this could be taken care separately if you don't have time.
>> I could add that to the serie in a separate patch so that it can be discussed and test separately ?
> 
> If you are picking the task, then I would suggest to add it directly in this patch.

I will see that but the discussion is leading more on a result where we accept the current status.

> 
>>> 
>>>> --
>>>> In the current status, this patch is only working on Arm and needs to
>>>> be fixed on X86 (see #error on domain.c for missing get_page_from_gva).
>>>> Signed-off-by: Bertrand Marquis <bertrand.marquis@arm.com>
>>>> ---
>>>>  xen/arch/arm/domain.c   | 32 +++++++++-------
>>>>  xen/arch/x86/domain.c   | 51 ++++++++++++++-----------
>>>>  xen/common/domain.c     | 84 ++++++++++++++++++++++++++++++++++-------
>>>>  xen/include/xen/sched.h | 11 ++++--
>>>>  4 files changed, 124 insertions(+), 54 deletions(-)
>>>> diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
>>>> index 31169326b2..799b0e0103 100644
>>>> --- a/xen/arch/arm/domain.c
>>>> +++ b/xen/arch/arm/domain.c
>>>> @@ -278,33 +278,37 @@ static void ctxt_switch_to(struct vcpu *n)
>>>>  /* Update per-VCPU guest runstate shared memory area (if registered). */
>>>>  static void update_runstate_area(struct vcpu *v)
>>>>  {
>>>> -    void __user *guest_handle = NULL;
>>>> -    struct vcpu_runstate_info runstate;
>>>> +    struct vcpu_runstate_info *runstate;
>>>>  -    if ( guest_handle_is_null(runstate_guest(v)) )
>>>> +    /* XXX why do we accept not to block here */
>>>> +    if ( !spin_trylock(&v->runstate_guest_lock) )
>>>>          return;
>>>>  -    memcpy(&runstate, &v->runstate, sizeof(runstate));
>>>> +    runstate = runstate_guest(v);
>>>> +
>>>> +    if (runstate == NULL)
>>>> +    {
>>>> +        spin_unlock(&v->runstate_guest_lock);
>>>> +        return;
>>>> +    }
>>>>        if ( VM_ASSIST(v->domain, runstate_update_flag) )
>>>>      {
>>>> -        guest_handle = &v->runstate_guest.p->state_entry_time + 1;
>>>> -        guest_handle--;
>>>> -        runstate.state_entry_time |= XEN_RUNSTATE_UPDATE;
>>>> -        __raw_copy_to_guest(guest_handle,
>>>> -                            (void *)(&runstate.state_entry_time + 1) - 1, 1);
>>>> +        runstate->state_entry_time |= XEN_RUNSTATE_UPDATE;
>>>>          smp_wmb();
>>> 
>>> Because you set v->runstate.state_entry_time below, the placement of the barrier seems a bit odd.
>>> 
>>> I would suggest to update v->runstate.state_entry_time first and then update runstate->state_entry_time.
>> We do want the guest to know when we modify the runstate so we need to make sure the XEN_RUNSTATE_UPDATE is actually set in a visible way before we do the memcpy.
>> That’s why the barrier is after.
> 
> I think you misunderstood my comment here. I meant to write the following code:
> 
> v->runstate.state_entry_time = ...
> runstate->state_entry_time = ...
> smp_wmb()
> 
> So it is much clearer that the barrier is because runstate->state_entry_time needs to be updated before the memcpy().
> 
>>>> +
>>>> +#ifdef CONFIG_ARM
>>>> +    page = get_page_from_gva(v, area->addr.p, GV2M_WRITE);
>>> 
>>> A guest is allowed to setup the runstate for a different vCPU than the current one. This will lead to get_page_from_gva() to fail as the function cannot yet work with a vCPU other than current.
>> If the area is mapped per cpu but isn’t the aim of this to have a way to check other cpus status ?
> 
> The aim is to collect how much time a vCPU has been unscheduled. This doesn't have to be run from a different vCPU.
> 
> Anyway, my point is the hypercall allows it today. If you want to make such restriction, then we need to agree on it and document it.
> 
>>> 
>>> AFAICT, there is no restriction on when the runstate hypercall can be called. So this can even be called before the vCPU is brought up.
>> I understand the remark but it still feels very weird to allow an invalid address in an hypercall.
>> Wouldn’t we have a lot of potential issues accepting an address that we cannot check ?
> 
> Well, that's why you see errors when using KPTI ;). If you use a global mapping, then it is not possible to continue without validating the address.
> 
> But to do this, you will have to put restriction on the hypercalls. I would be OK to make such restriction on Arm.
> 
> Cheers,
> 
> -- 
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:08:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:08:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefgT-00054i-TK; Fri, 29 May 2020 14:08:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jefgS-00054d-ES
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:08:28 +0000
X-Inumbo-ID: ddfe0eea-a1b5-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ddfe0eea-a1b5-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 14:08:27 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:59376
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jefgP-000L2n-Jb (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 15:08:25 +0100
Subject: Re: [PATCH v10 7/9] x86emul: support FNSTENV and FNSAVE
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <ec5d12df-dbef-af1a-7649-44a9a7d46de4@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6851a948-e4a8-5caf-e87b-c4b361bdd950@citrix.com>
Date: Fri, 29 May 2020 15:08:24 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <ec5d12df-dbef-af1a-7649-44a9a7d46de4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:29, Jan Beulich wrote:
> To avoid introducing another boolean into emulator state, the
> rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode
> info (affecting structure layout, albeit not size) to x86_emul_blk().
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, with one suggestion.

> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
> @@ -856,6 +856,9 @@ struct x86_emulate_state {
>      enum {
>          blk_NONE,
>          blk_enqcmd,
> +#ifndef X86EMUL_NO_FPU
> +        blk_fst, /* FNSTENV, FNSAVE */
> +#endif
>          blk_movdir,
>      } blk;
>      uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
> @@ -897,6 +900,50 @@ struct x86_emulate_state {
>  #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
>  #endif
>  
> +#ifndef X86EMUL_NO_FPU
> +struct x87_env16 {
> +    uint16_t fcw;
> +    uint16_t fsw;
> +    uint16_t ftw;
> +    union {
> +        struct {
> +            uint16_t fip_lo;
> +            uint16_t fop:11, :1, fip_hi:4;
> +            uint16_t fdp_lo;
> +            uint16_t :12, fdp_hi:4;
> +        } real;
> +        struct {
> +            uint16_t fip;
> +            uint16_t fcs;
> +            uint16_t fdp;
> +            uint16_t fds;
> +        } prot;
> +    } mode;
> +};
> +
> +struct x87_env32 {
> +    uint32_t fcw:16, :16;
> +    uint32_t fsw:16, :16;
> +    uint32_t ftw:16, :16;

uint16_t fcw, :16;
uint16_t fsw, :16;
uint16_t ftw, :16;

?

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:09:38 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:09:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefhO-0005A2-Fr; Fri, 29 May 2020 14:09:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jefhM-00059t-NW
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:09:24 +0000
X-Inumbo-ID: fef22500-a1b5-11ea-a8c2-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fef22500-a1b5-11ea-a8c2-12813bfff9fa;
 Fri, 29 May 2020 14:09:23 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:59402
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jefhJ-000Lau-KY (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 15:09:21 +0100
Subject: Re: [PATCH v10 8/9] x86emul: support FLDENV and FRSTOR
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <6dc6eb46-c7cb-ca88-2f92-04b99be1b9ef@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <26c1276a-900c-e79d-b31b-2a0efec24cf4@citrix.com>
Date: Fri, 29 May 2020 15:09:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <6dc6eb46-c7cb-ca88-2f92-04b99be1b9ef@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:29, Jan Beulich wrote:
> While the Intel SDM claims that FRSTOR itself may raise #MF upon
> completion, this was confirmed by Intel to be a doc error which will be
> corrected in due course; behavior is like FLDENV, and like old hard copy
> manuals describe it.
>
> Re-arrange a switch() statement's case label order to allow for
> fall-through from FLDENV handling to FNSTENV's.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:12:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:12:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefk4-00064X-Vn; Fri, 29 May 2020 14:12:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jefk3-00064R-QF
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:12:11 +0000
X-Inumbo-ID: 630c5a38-a1b6-11ea-a8c5-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 630c5a38-a1b6-11ea-a8c5-12813bfff9fa;
 Fri, 29 May 2020 14:12:11 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id ED984AC24;
 Fri, 29 May 2020 14:12:09 +0000 (UTC)
Subject: Re: [PATCH v10 7/9] x86emul: support FNSTENV and FNSAVE
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <ec5d12df-dbef-af1a-7649-44a9a7d46de4@suse.com>
 <6851a948-e4a8-5caf-e87b-c4b361bdd950@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c2031de6-d58f-1480-dd30-b9cdcb928db5@suse.com>
Date: Fri, 29 May 2020 16:12:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <6851a948-e4a8-5caf-e87b-c4b361bdd950@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 16:08, Andrew Cooper wrote:
> On 25/05/2020 15:29, Jan Beulich wrote:
>> To avoid introducing another boolean into emulator state, the
>> rex_prefix field gets (ab)used to convey the real/VM86 vs protected mode
>> info (affecting structure layout, albeit not size) to x86_emul_blk().
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>,

Thanks.

> with one suggestion.
> 
>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>> @@ -856,6 +856,9 @@ struct x86_emulate_state {
>>      enum {
>>          blk_NONE,
>>          blk_enqcmd,
>> +#ifndef X86EMUL_NO_FPU
>> +        blk_fst, /* FNSTENV, FNSAVE */
>> +#endif
>>          blk_movdir,
>>      } blk;
>>      uint8_t modrm, modrm_mod, modrm_reg, modrm_rm;
>> @@ -897,6 +900,50 @@ struct x86_emulate_state {
>>  #define PTR_POISON NULL /* 32-bit builds are for user-space, so NULL is OK. */
>>  #endif
>>  
>> +#ifndef X86EMUL_NO_FPU
>> +struct x87_env16 {
>> +    uint16_t fcw;
>> +    uint16_t fsw;
>> +    uint16_t ftw;
>> +    union {
>> +        struct {
>> +            uint16_t fip_lo;
>> +            uint16_t fop:11, :1, fip_hi:4;
>> +            uint16_t fdp_lo;
>> +            uint16_t :12, fdp_hi:4;
>> +        } real;
>> +        struct {
>> +            uint16_t fip;
>> +            uint16_t fcs;
>> +            uint16_t fdp;
>> +            uint16_t fds;
>> +        } prot;
>> +    } mode;
>> +};
>> +
>> +struct x87_env32 {
>> +    uint32_t fcw:16, :16;
>> +    uint32_t fsw:16, :16;
>> +    uint32_t ftw:16, :16;
> 
> uint16_t fcw, :16;
> uint16_t fsw, :16;
> uint16_t ftw, :16;
> 
> ?

You had suggested this before, and I did reply that my intention
was to have x87_env16 use uint16_t throughout, and x87_env32
uint32_t respectively for all its pieces. In the end it doesn't
make a difference, and hence this cosmetic aspect is what made
me pick one of the various possible options.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:15:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:15:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefn2-0006CQ-Go; Fri, 29 May 2020 14:15:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jefn1-0006CL-0J
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:15:15 +0000
X-Inumbo-ID: d0894152-a1b6-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d0894152-a1b6-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 14:15:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=6cmyS76XBiQACTTTkkE0sQn9UUH3eieO0x/9/g6de3Q=; b=XcWzge/trjJcoZZgemfsTiHPYO
 4qtKhKGGKc63wLjkf65WiAr2cVOsrA7T5jSX5VTga4P4gv4NIr5UOl4PwnwwodR6O3pVzPurqTWPz
 sbBrFRZ0rjqBiX0TT3ZhaitLZgxMn20O4mSYK4aBE9XRP8rH1TFQ+ttHKhE7LZ1p1IL4=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jefmw-0006OT-5m; Fri, 29 May 2020 14:15:10 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jefmv-0002X8-UP; Fri, 29 May 2020 14:15:10 +0000
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
To: Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <07a1008d-1acb-aab6-ab10-176e7856a296@xen.org>
 <1CEE9707-3201-4218-ACF0-7829181A769C@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <38115dda-f6dc-c35d-fbaa-3735456d226f@xen.org>
Date: Fri, 29 May 2020 15:15:06 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <1CEE9707-3201-4218-ACF0-7829181A769C@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



On 29/05/2020 15:02, Bertrand Marquis wrote:
> 
> 
>> On 29 May 2020, at 10:43, Julien Grall <julien@xen.org> wrote:
>>
>> Hi Bertrand,
>>
>> On 29/05/2020 09:13, Bertrand Marquis wrote:
>>> Hi Julien,
>>>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>>>
>>>> Hi Bertrand,
>>>>
>>>> Thank you for the patch.
>>>>
>>>> On 28/05/2020 16:25, Bertrand Marquis wrote:
>>>>>> At the moment on Arm, a Linux guest running with KPTI enabled will
>>>>> cause the following error when a context switch happens in user mode:
>>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>>> This patch is modifying runstate handling to map the area given by the
>>>>> guest inside Xen during the hypercall.
>>>>> This is removing the guest virtual to physical conversion during context
>>>>> switches which removes the bug
>>>>
>>>> It would be good to spell out that a virtual address is not stable. So relying on it is wrong.
>>>>
>>>>>> and improve performance by avoiding walking
>>>>>> page tables during context switches.
>>>>
>>>> With the secret-free hypervisor in mind, I would like to suggest mapping/unmapping the runstate during the context switch.
>>>>
>>>> The cost should be minimal when there is a direct map (i.e. on Arm64 and x86) and still provides better performance on Arm32.
>>> Even with a minimal cost this is still adding some non-real-time behaviour to the context switch.
>>
>> Just to be clear, by minimal I meant the mapping part is just a virt_to_mfn() call and the unmapping is a NOP.
>>
>> IMHO, if virt_to_mfn() ends up adding non-RT behavior then you have a much bigger problem than just this call.
>>
>>> But definitely, from the security point of view, as we have to map a page from the guest, we could have some data accessible in Xen that should not be there.
>>> There is a trade-off here:
>>> - xen can protect by map/unmapping
>>> - a guest which wants to secure its data should either not use it or make sure there is nothing else in the page
>>
>> Both are valid and depend on your setup. One may want to protect all the existing guests, so modifying a guest may not be a solution.
> 
> Mapping/unmapping increases the protection but does not remove the problem completely.

I would be curious to understand why the problem is not completely removed.

From my perspective, this covers the case where Xen could leak the
information of one domain to another domain. When there is no direct
mapping, temporary mappings via domain_map_page() will be either
per-pCPU or per-vCPU. So the content should never be (easily)
accessible by another running domain while it is mapped.

If the guest is concerned about exposing the data to Xen, then it is a 
completely different issue and should be taken care of by the guest itself.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:22:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:22:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeftX-00079L-8V; Fri, 29 May 2020 14:21:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tYAP=7L=arm.com=bertrand.marquis@srs-us1.protection.inumbo.net>)
 id 1jeftW-00079G-AH
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:21:58 +0000
X-Inumbo-ID: c091a2de-a1b7-11ea-81bc-bc764e2007e4
Received: from EUR02-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe06::625])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c091a2de-a1b7-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 14:21:57 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3TsrYCiHH609KhDx/UK9fcnT62zA0TBsXE68ACIzW60=;
 b=GIzOY2Bz0jP+Qn159r68kCfi1aVpMOKe/QnqhVIaoYTHsbdV2kBUojgFofepYblgvmtxGkZSS5gw+7iG5BzIucNGZpPLg8b+ThTKxwzY7ksMdFl6fESj9XOyvtZzF1vZt03TZqg8IJcTSEOkJaRm9/XeXZweMNbE3X0bO0SnULw=
Received: from AM5PR0701CA0071.eurprd07.prod.outlook.com (2603:10a6:203:2::33)
 by VI1PR08MB3712.eurprd08.prod.outlook.com (2603:10a6:803:b9::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3045.18; Fri, 29 May
 2020 14:21:53 +0000
Received: from VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::87) by AM5PR0701CA0071.outlook.office365.com
 (2603:10a6:203:2::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3066.7 via Frontend
 Transport; Fri, 29 May 2020 14:21:53 +0000
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org;
 dmarc=bestguesspass action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT026.mail.protection.outlook.com (10.152.18.148) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.3021.23 via Frontend Transport; Fri, 29 May 2020 14:21:52 +0000
Received: ("Tessian outbound cff7dd4de28a:v57");
 Fri, 29 May 2020 14:21:52 +0000
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7a8e300d250247dc
X-CR-MTA-TID: 64aa7808
Received: from aa8dbfd4d248.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 64929208-47A5-4BE4-BEC5-C87A4C337905.1; 
 Fri, 29 May 2020 14:21:46 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id aa8dbfd4d248.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 29 May 2020 14:21:46 +0000
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=mPwyidpa9RReWN5SVb+k6uWCND3ENVTxeNVJ3flSh/R5qqgA6GR9+IEv46KwWP+R9lM8tlDOlZ8mfRDjK+QpkUTsvbTBovBLyDtLy7p0sW1ODOzRzOPHi5D2gzdAYDtEy+M9HU7I57RafgbLl94yAlZs+MQh5ftJAcdXe4OTuALvUNFKgIHCXqNm6I9ipiZ54lsavmyBOrjMEG4v+hpRBymwlyh9ePLkMe3yNfeAqeZ/ZUBVPlTxynwFQyapNyNFw/E0CYi8Jk0TbYDonioBBXUMYdvaXLF97iqcyj+J9MhKmMFXoWTHW+AGXMfcA9F9Rxoes1B+4dhTtLT8HpaD4Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com; 
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3TsrYCiHH609KhDx/UK9fcnT62zA0TBsXE68ACIzW60=;
 b=FBq0fNO60wuEPHd1EQ48xiyKS9AZt3TMA2jx0CamCy/XYgkLbhF5N0OxGIuvO6A6CQky0Q47Z1Ytjc8tC9esS5EO9V9QvQBaHRT40nC5OcygkkoO7tDx+3pF7UzQAVnlpksr/UKzbFqbc5Quki+beJ/nReMLqPJt9v/MX2tbSDWQvb+TMnb+JyoGpMJrJLZ0x98hYMasH6pdTc0PB2+XZzGaq0CwTck+5kUGzXq0kZkjm0TmFqo4AEpbvy4Kutmys2QKaVdUo857qq5u5Q3uvvA6Ex21u7P/bIrAA8TU9S3YO3hpaZ65IBBLAIg4Ow+NpWjCrKJH2he2LVsCUojicA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com; 
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3TsrYCiHH609KhDx/UK9fcnT62zA0TBsXE68ACIzW60=;
 b=GIzOY2Bz0jP+Qn159r68kCfi1aVpMOKe/QnqhVIaoYTHsbdV2kBUojgFofepYblgvmtxGkZSS5gw+7iG5BzIucNGZpPLg8b+ThTKxwzY7ksMdFl6fESj9XOyvtZzF1vZt03TZqg8IJcTSEOkJaRm9/XeXZweMNbE3X0bO0SnULw=
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com (2603:10a6:10:79::16)
 by DB7PR08MB4617.eurprd08.prod.outlook.com (2603:10a6:10:75::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.3021.27; Fri, 29 May
 2020 14:21:45 +0000
Received: from DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8]) by DB7PR08MB3689.eurprd08.prod.outlook.com
 ([fe80::4001:43ad:d113:46a8%5]) with mapi id 15.20.3045.018; Fri, 29 May 2020
 14:21:44 +0000
From: Bertrand Marquis <Bertrand.Marquis@arm.com>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Topic: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Thread-Index: AQHWNP6nIxIB/r3XC025DLnqIbmaJai92N6AgADfWQCAABkLAIAASFMAgAADiACAAAHaAA==
Date: Fri, 29 May 2020 14:21:44 +0000
Message-ID: <771AA834-1A13-44C6-B7B1-FCC09F1AEB18@arm.com>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <3B88C76B-6972-4A66-AFDC-0B5C27FBA740@arm.com>
 <07a1008d-1acb-aab6-ab10-176e7856a296@xen.org>
 <1CEE9707-3201-4218-ACF0-7829181A769C@arm.com>
 <38115dda-f6dc-c35d-fbaa-3735456d226f@xen.org>
In-Reply-To: <38115dda-f6dc-c35d-fbaa-3735456d226f@xen.org>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [82.24.250.194]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-HT: Tenant
X-MS-Office365-Filtering-Correlation-Id: d2150729-0bef-493c-d669-08d803dba219
x-ms-traffictypediagnostic: DB7PR08MB4617:|VI1PR08MB3712:
X-Microsoft-Antispam-PRVS: <VI1PR08MB3712CA552528C82AF4EC2C469D8F0@VI1PR08MB3712.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
x-forefront-prvs: 04180B6720
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original: EhCtR3iAPAp+pe8p88GGiiWGAq/JDu9P4G4IEUKyd5JZjteCcaGRC+OZM3kwTnq++DdgjFXaRvNyn6cUX3jieltgK+5C4wTdQMvBFZ3voUXPmW1SnQvzqhGYLrcQfZhjjMlHs/0RlE2WSWaMm/KOy5ko3xbhqys14BkFYDa4bO5ulvz7X/ccx7svDoXuq7ahImWPh9ATfzqGiELlNCCskTjGHNZ5d7TrcYdYIioVxf4xobE3aUwEV9IhieADWQ85R+iRYJCgMTI2R+YRCQ9pODhfRJ/OeiS+4VdyEDTmf51r0GsUoXSySA0h6l2WflLR3edCa0KM9Fh3eObACQZzpg==
X-Forefront-Antispam-Report-Untrusted: CIP:255.255.255.255; CTRY:; LANG:en;
 SCL:1; SRV:; IPV:NLI; SFV:NSPM; H:DB7PR08MB3689.eurprd08.prod.outlook.com;
 PTR:; CAT:NONE; SFTY:;
 SFS:(4636009)(39860400002)(346002)(366004)(396003)(136003)(376002)(64756008)(2616005)(8936002)(76116006)(2906002)(91956017)(6486002)(71200400001)(26005)(36756003)(6506007)(86362001)(53546011)(66446008)(5660300002)(4326008)(186003)(7416002)(478600001)(66556008)(6916009)(66946007)(66476007)(33656002)(316002)(83380400001)(54906003)(6512007)(8676002);
 DIR:OUT; SFP:1101; 
x-ms-exchange-antispam-messagedata: +QAsLXeaqQ2gk79Tt0AjXru/FlRxpyvLniI52KlYK16k4jfzvcgGSew3MPuEF9Xf8YkmuYH1J/pz5PDKodRPWG8jj9h4Mg02xy8xuF9iekofQPzbhMBkTLz3KojgmbBGsd8wEPLA3wgw3jtwmbP7rIR4HjIVaM6iIQhcUFk7gq+ECSQMJ6Je6PReUxQM3VPQso6yl+OJxR6Lpwce3S7AJZcRo2tC6o7po/5CKSlqzzhuMsE+Ofb2jbr2nRHVCEWEkXHFaVoV6He3ct/LjZenelxsFtuJZrsT187lkLLYhqQIroX2/WR7WpuriTyB5umngyCKb2FqD44tpih4bFnI/apt0X/rQl2emLCvGwdlCb0eJMtR2PZ+JRcbxYY8YB+1i6xJq4zQpWrsluNVT0ezlXlpNYSHmQwcSYVW6szPd5UKckYDFXgU5ZwwDGW/bIvQzRl+Z/B6j0v1zXjjaYLtb46Qz7aw7duZtnDNdwT8xNoLImDeHVUMHmQ9v5n2vqlj
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="iso-8859-1"
Content-ID: <44FF4C389C814B4EB6E462C10E9DAE40@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB4617
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped: VE1EUR03FT026.eop-EUR03.prod.protection.outlook.com
X-Forefront-Antispam-Report: CIP:63.35.35.123; CTRY:IE; LANG:en; SCL:1; SRV:;
 IPV:CAL; SFV:NSPM; H:64aa7808-outbound-1.mta.getcheckrecipient.com;
 PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com; CAT:NONE; SFTY:;
 SFS:(4636009)(396003)(136003)(376002)(346002)(39860400002)(46966005)(36756003)(6512007)(186003)(2616005)(70586007)(6862004)(70206006)(336012)(5660300002)(2906002)(478600001)(6486002)(36906005)(6506007)(53546011)(26005)(316002)(54906003)(8676002)(81166007)(4326008)(82310400002)(33656002)(83380400001)(8936002)(82740400003)(86362001)(47076004)(356005);
 DIR:OUT; SFP:1101; 
X-MS-Office365-Filtering-Correlation-Id-Prvs: dcb77e59-2662-4466-e7c1-08d803db9d8b
X-Forefront-PRVS: 04180B6720
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: j0a12E5DOJC/qIAkOG//K3BLyyeL/rYP/ZicYpCXLJYSvBLFJxwnFiKUBN3BLw+NMTO6J647vne5IPPzw/c43DtPI+3hlLd+YXc40vxVfU7KiEzvrLfxeNp/koDvLZSdbAyIf88XLBr+Lmw46BB5yJyfUIAtxq3hkdV5TB7BRqUvo4ehri5TY84+XajXFyekbA07libRx3rs5k6vPGQnx+aabSjKiSim9b/5ws2lw3DVtItGIDEJkAz22/q0WffKcq5I68i4MQff1q0TwRSVvFNtxkstxhbJHgiOf2/L+7A/CN+47YI1a9gg0/pgZZOTZgZsbFhphH+pQ2yz5bXHrbTvRDpGQoiZmqV96kq7oMFUsSBttYc6/wwRtXLmEgDo1qtaMHQu9mKNzXWPOgub8A==
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 29 May 2020 14:21:52.3329 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: d2150729-0bef-493c-d669-08d803dba219
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d; Ip=[63.35.35.123];
 Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3712
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, "Xia, Hongyan" <hongyxia@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, xen-devel <xen-devel@lists.xenproject.org>,
 nd <nd@arm.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 =?iso-8859-1?Q?Roger_Pau_Monn=E9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On 29 May 2020, at 15:15, Julien Grall <julien@xen.org> wrote:
>
>
>
> On 29/05/2020 15:02, Bertrand Marquis wrote:
>>> On 29 May 2020, at 10:43, Julien Grall <julien@xen.org> wrote:
>>>
>>> Hi Bertrand,
>>>
>>> On 29/05/2020 09:13, Bertrand Marquis wrote:
>>>> Hi Julien,
>>>>> On 28 May 2020, at 19:54, Julien Grall <julien@xen.org> wrote:
>>>>>
>>>>> Hi Bertrand,
>>>>>
>>>>> Thank you for the patch.
>>>>>
>>>>> On 28/05/2020 16:25, Bertrand Marquis wrote:
>>>>>> At the moment on Arm, a Linux guest running with KPTI enabled will
>>>>>> cause the following error when a context switch happens in user mode:
>>>>>> (XEN) p2m.c:1890: d1v0: Failed to walk page-table va 0xffffff837ebe0cd0
>>>>>> This patch is modifying runstate handling to map the area given by the
>>>>>> guest inside Xen during the hypercall.
>>>>>> This is removing the guest virtual to physical conversion during context
>>>>>> switches which removes the bug
>>>>>
>>>>> It would be good to spell out that a virtual address is not stable. So relying on it is wrong.
>>>>>
>>>>>> and improve performance by avoiding walking
>>>>>> page tables during context switches.
>>>>>
>>>>> With the secret-free hypervisor in mind, I would like to suggest mapping/unmapping the runstate during the context switch.
>>>>>
>>>>> The cost should be minimal when there is a direct map (i.e. on Arm64 and x86) and still provides better performance on Arm32.
>>>> Even with a minimal cost this is still adding some non-real-time behaviour to the context switch.
>>>
>>> Just to be clear, by minimal I meant the mapping part is just a virt_to_mfn() call and the unmapping is a NOP.
>>>
>>> IMHO, if virt_to_mfn() ends up adding non-RT behavior then you have a much bigger problem than just this call.
>>>
>>>> But definitely, from the security point of view, as we have to map a page from the guest, we could have some data accessible in Xen that should not be there.
>>>> There is a trade-off here:
>>>> - xen can protect by map/unmapping
>>>> - a guest which wants to secure its data should either not use it or make sure there is nothing else in the page
>>>
>>> Both are valid and depend on your setup. One may want to protect all the existing guests, so modifying a guest may not be a solution.
>> Mapping/unmapping increases the protection but does not remove the problem completely.
>
> I would be curious to understand why the problem is not completely removed.
>
> From my perspective, this covers the case where Xen could leak the information of one domain to another domain. When there is no direct mapping, temporary mappings via domain_map_page() will be either per-pCPU or per-vCPU. So the content should never be (easily) accessible by another running domain while it is mapped.
>
> If the guest is concerned about exposing the data to Xen, then it is a completely different issue and should be taken care of by the guest itself.

Even temporarily mapped, you still have access to more than you need, so you could potentially modify something from the guest at that point (from an interrupt handler, for example).
The attack surface is reduced a lot, I agree, but if the guest does not make sure that nothing else is available in the page, you can potentially still get access.

Cheers
Bertrand



From xen-devel-bounces@lists.xenproject.org Fri May 29 14:26:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jefxT-0007J8-QA; Fri, 29 May 2020 14:26:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jefxS-0007J3-0z
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:26:02 +0000
X-Inumbo-ID: 51225c26-a1b8-11ea-a8c6-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 51225c26-a1b8-11ea-a8c6-12813bfff9fa;
 Fri, 29 May 2020 14:26:00 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:60018
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jefxM-000X31-M5 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 15:25:56 +0100
Subject: Re: [PATCH v10 9/9] x86emul: support FXSAVE/FXRSTOR
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <31ee7b12-c256-3186-30ca-45665b241a8b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <a2603fbc-5b2a-f3bb-59bd-e862e1464636@citrix.com>
Date: Fri, 29 May 2020 15:25:56 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <31ee7b12-c256-3186-30ca-45665b241a8b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 25/05/2020 15:30, Jan Beulich wrote:
> Note that FPU selector handling as well as MXCSR mask saving for now
> does not honor differences between host and guest visible featuresets.
>
> While for Intel operation of the insns with CR4.OSFXSR=0 is
> implementation dependent, use the easiest solution there: Simply don't
> look at the bit in the first place. For AMD and alike the behavior is
> well defined, so it gets handled together with FFXSR.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:33:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeg4r-0008G8-NL; Fri, 29 May 2020 14:33:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tJvr=7L=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeg4r-0008G3-7w
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:33:41 +0000
X-Inumbo-ID: 63c35c30-a1b9-11ea-a8c9-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 63c35c30-a1b9-11ea-a8c9-12813bfff9fa;
 Fri, 29 May 2020 14:33:40 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1dEu+DjDhOpoOSVMFjfq8dVN3zMfViUk4+4rbcwdu4Y=; b=sWRWxXI9bYXavpsZz1d7nLS3uo
 2EAjy3m7oQrezhe2kSj64KvnTP8f3us1U/hsuNrc51PhzAo9wEb3ukKaEghKi9r4aFRhnv278D2+p
 DoTlOZEKFp8oVu48w2bJpOwJws5SBSFlFEffiZZlVt7my1TgtFuKBthIqZaeHufpgSyg=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeg4p-0006m5-VD; Fri, 29 May 2020 14:33:39 +0000
Received: from [212.230.157.105] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeg4p-0003ao-CA; Fri, 29 May 2020 14:33:39 +0000
Date: Fri, 29 May 2020 16:33:31 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Igor Druzhinin <igor.druzhinin@citrix.com>
Subject: Re: [PATCH] x86/svm: do not try to handle recalc NPT faults
 immediately
Message-ID: <20200529143331.GO1195@Air-de-Roger>
References: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, george.dunlap@citrix.com, wl@xen.org,
 jbeulich@suse.com, andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 01:35:53AM +0100, Igor Druzhinin wrote:
> A recalculation NPT fault doesn't always require additional handling
> in hvm_hap_nested_page_fault(); moreover, in the general case, if no
> explicit handling is done there, the fault is wrongly considered fatal.
> 
> Instead of trying to be opportunistic, use a safer approach and handle
> P2M recalculation in a separate NPT fault by attempting to retry after
> making the necessary adjustments. This is aligned with Intel behavior,
> where there are separate VMEXITs for recalculation and EPT violations
> (faults) and only faults are handled in hvm_hap_nested_page_fault().
> Do it by also unifying the do_recalc return code with the Intel
> implementation, where returning 1 means the P2M was actually changed.

That seems like a good approach IMO.

Do you know whether this will make the code slower? (since there are
cases previously handled in a single vmexit that would take two
vmexits now)

> This covers a specific case of migration with a vGPU assigned on AMD:
> global log-dirty is enabled and causes an immediate recalculation NPT
> fault in the MMIO area upon access.
> 
> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> This is a safer alternative to:
> https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg01662.html
> and more correct approach from my PoV.
> ---
>  xen/arch/x86/hvm/svm/svm.c | 5 +++--
>  xen/arch/x86/mm/p2m-pt.c   | 8 ++++++--
>  2 files changed, 9 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
> index 46a1aac..7f6f578 100644
> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2923,9 +2923,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>              v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
>          rc = vmcb->exitinfo1 & PFEC_page_present
>               ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
> -        if ( rc >= 0 )
> +        if ( rc == 0 )
> +            /* If no recal adjustments were being made - handle this fault */
>              svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
> -        else
> +        else if ( rc < 0 )
>          {
>              printk(XENLOG_G_ERR
>                     "%pv: Error %d handling NPF (gpa=%08lx ec=%04lx)\n",
> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
> index 5c05017..377565b 100644
> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -340,7 +340,7 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>      unsigned long gfn_remainder = gfn;
>      unsigned int level = 4;
>      l1_pgentry_t *pent;
> -    int err = 0;
> +    int err = 0, rc = 0;
>  
>      table = map_domain_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
>      while ( --level )
> @@ -402,6 +402,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>                  clear_recalc(l1, e);
>                  err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>                  ASSERT(!err);
> +
> +                rc = 1;
>              }
>          }
>          unmap_domain_page((void *)((unsigned long)pent & PAGE_MASK));
> @@ -448,12 +450,14 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>              clear_recalc(l1, e);
>          err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>          ASSERT(!err);
> +
> +        rc = 1;
>      }
>  
>   out:
>      unmap_domain_page(table);
>  
> -    return err;
> +    return err ? err : rc;

Nit: you can use the elvis operator here: return err ?: rc;

Also, I couldn't spot any caller that would have trouble with the
function now returning 1 in certain conditions; can you confirm that
the callers have been audited?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:34:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeg5X-0008In-0c; Fri, 29 May 2020 14:34:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jeg5V-0008Ie-TA
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:34:21 +0000
X-Inumbo-ID: 7bad2dbc-a1b9-11ea-a8c9-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bad2dbc-a1b9-11ea-a8c9-12813bfff9fa;
 Fri, 29 May 2020 14:34:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id AAE7DB001;
 Fri, 29 May 2020 14:34:19 +0000 (UTC)
Subject: Re: [PATCH] x86/svm: do not try to handle recalc NPT faults
 immediately
To: Igor Druzhinin <igor.druzhinin@citrix.com>
References: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb934c0c-3f0d-df7e-1720-8dbbdbf7691d@suse.com>
Date: Fri, 29 May 2020 16:34:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, roger.pau@citrix.com,
 george.dunlap@citrix.com, wl@xen.org, andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 02:35, Igor Druzhinin wrote:
> A recalculation NPT fault doesn't always require additional handling
> in hvm_hap_nested_page_fault(); moreover, in the general case, if there is no
> explicit handling done there, the fault is wrongly considered fatal.
> 
> Instead of trying to be opportunistic, use a safer approach and handle
> P2M recalculation in a separate NPT fault by attempting to retry after
> making the necessary adjustments. This is aligned with Intel behavior
> where there are separate VMEXITs for recalculation and EPT violations
> (faults) and only faults are handled in hvm_hap_nested_page_fault().
> Do it by also unifying do_recalc return code with Intel implementation
> where returning 1 means P2M was actually changed.
> 
> This covers a specific case of migration with vGPU assigned on AMD:
> global log-dirty is enabled and causes immediate recalculation NPT
> fault in MMIO area upon access.

To be honest, from this last paragraph I still can't really derive
what exactly goes wrong, and why, before this change.

> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> ---
> This is a safer alternative to:
> https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg01662.html
> and a more correct approach from my PoV.

Indeed - I was about to reply there, but then I thought I'd first
look at this patch, in case it was a replacement.

> --- a/xen/arch/x86/hvm/svm/svm.c
> +++ b/xen/arch/x86/hvm/svm/svm.c
> @@ -2923,9 +2923,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>              v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
>          rc = vmcb->exitinfo1 & PFEC_page_present
>               ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
> -        if ( rc >= 0 )
> +        if ( rc == 0 )
> +            /* If no recal adjustments were being made - handle this fault */
>              svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
> -        else
> +        else if ( rc < 0 )

So from going through the code and judging by the comment in
finish_type_change() (which btw you will need to update, to avoid
it becoming stale) the >= here was there just in case, without
there actually being any case where a positive value would be
returned. If that's also the conclusion you've drawn, then I
think it would help mentioning this in the description.

It is also desirable to mention finish_type_change() not being
affected, as it already deals with the > 0 case.

> --- a/xen/arch/x86/mm/p2m-pt.c
> +++ b/xen/arch/x86/mm/p2m-pt.c
> @@ -340,7 +340,7 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>      unsigned long gfn_remainder = gfn;
>      unsigned int level = 4;
>      l1_pgentry_t *pent;
> -    int err = 0;
> +    int err = 0, rc = 0;
>  
>      table = map_domain_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
>      while ( --level )
> @@ -402,6 +402,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>                  clear_recalc(l1, e);
>                  err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>                  ASSERT(!err);
> +
> +                rc = 1;
>              }
>          }
>          unmap_domain_page((void *)((unsigned long)pent & PAGE_MASK));
> @@ -448,12 +450,14 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>              clear_recalc(l1, e);
>          err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>          ASSERT(!err);
> +
> +        rc = 1;
>      }
>  
>   out:
>      unmap_domain_page(table);
>  
> -    return err;
> +    return err ? err : rc;

Typically we write this as "err ?: rc". I'd like to ask that "rc" also
be renamed, to something like "recalc_done", and then to become bool.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:36:31 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:36:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeg7W-0008Tp-D2; Fri, 29 May 2020 14:36:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tJvr=7L=xen.org=roger@srs-us1.protection.inumbo.net>)
 id 1jeg7U-0008Tk-Ui
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:36:24 +0000
X-Inumbo-ID: c5987e72-a1b9-11ea-a8ca-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c5987e72-a1b9-11ea-a8ca-12813bfff9fa;
 Fri, 29 May 2020 14:36:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=In-Reply-To:Content-Transfer-Encoding:Content-Type:
 MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ASbQW/l111czFXENkL/tSDDP7R2aHdoRCkb4iDylmps=; b=1l5yRhLyWTpqwFK+pRrC2rJk3s
 0fYCYLxNGY6Jx1eeXDx1jS4l5HVMsVk1VVxB2lfC1yvGpQubGVLYvu4GEjrxVbgDdhv8cESbGMZaM
 XXB6Gg/ZBTE3hNlcwFfWeQSD3oBvKqgdLv4cN5h3Q6clbx791g6G9LLKAoOeYoqcQm5E=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeg7S-0006p9-Vi; Fri, 29 May 2020 14:36:22 +0000
Received: from [212.230.157.105] (helo=localhost)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <roger@xen.org>)
 id 1jeg7S-0003l0-GD; Fri, 29 May 2020 14:36:22 +0000
Date: Fri, 29 May 2020 16:36:13 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger@xen.org>
To: Julien Grall <julien@xen.org>
Subject: Re: [RFC PATCH 1/1] xen: Use a global mapping for runstate
Message-ID: <20200529143613.GP1195@Air-de-Roger>
References: <cover.1590675919.git.bertrand.marquis@arm.com>
 <03e7cd740922bfbaa479f22d81d9de06f718a305.1590675919.git.bertrand.marquis@arm.com>
 <e63a83a1-7d71-9cc5-517a-275e17880e2b@xen.org>
 <dcfbca54-4773-9f43-1826-f5137a41bd9f@suse.com>
 <B5889544-3EB5-41ED-8428-8BCA05269371@arm.com>
 <20200529132020.GN1195@Air-de-Roger>
 <e7a757b4-b285-7089-91ea-d4248443aaf1@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <e7a757b4-b285-7089-91ea-d4248443aaf1@xen.org>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "paul@xen.org" <paul@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "Xia,
 Hongyan" <hongyxia@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>, Jan Beulich <jbeulich@suse.com>,
 xen-devel <xen-devel@lists.xenproject.org>, nd <nd@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 02:37:24PM +0100, Julien Grall wrote:
> Hi,
> 
> On 29/05/2020 14:26, Roger Pau Monné wrote:
> > TBH I would just remove the error message on Arm from the current
> > hypercall, I don't think it's useful.
> The message is part of the helpers get_page_from_gva() which is also used by
> copy_{to, from}_guest. While it may not be useful in the context of the
> runstate, it was introduced because there was some other weird bug happening
> before KPTI even existed (see [1]). I haven't yet managed to find the bottom
> line of the issue.
> 
> So I would still very much like to keep the message in place. Although we
> could reduce the number of cases where this is happening based on the fault.

Ack, I somehow had wrongly assumed the message was explicitly about
the runstate area, not a generic one in the helpers.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:46:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegGy-00013A-BC; Fri, 29 May 2020 14:46:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jegGw-000134-PG
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:46:10 +0000
X-Inumbo-ID: 223dab74-a1bb-11ea-9dbe-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 223dab74-a1bb-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 14:46:10 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A5CBBAFB8;
 Fri, 29 May 2020 14:46:08 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: Improve error information in handle_pio()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200528130738.12816-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7ba144b3-3741-8bc5-8417-ceb84e6d23ce@suse.com>
Date: Fri, 29 May 2020 16:46:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200528130738.12816-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul.durrant@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 15:07, Andrew Cooper wrote:
> domain_crash() should always have a message which is emitted even in release
> builds, so something more useful than this is presented.
> 
>   (XEN) domain_crash called from io.c:171
>   (XEN) domain_crash called from io.c:171
>   (XEN) domain_crash called from io.c:171
>   ...
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/arch/x86/hvm/io.c
> +++ b/xen/arch/x86/hvm/io.c
> @@ -167,7 +167,9 @@ bool handle_pio(uint16_t port, unsigned int size, int dir)
>          break;
>  
>      default:
> -        gdprintk(XENLOG_ERR, "Weird HVM ioemulation status %d.\n", rc);
> +        gprintk(XENLOG_ERR, "Unexpected PIO status %d, port %#x %s 0x%0*lx\n",
> +                rc, port, dir == IOREQ_WRITE ? "write" : "read",
> +                size * 2, data & ((1ul << (size * 8)) - 1));

I agree with Roger that potentially logging rubbish for IOREQ_READ
may be confusing, so initializing "data" might end up being better.
Perhaps simply drop (or put in a comment) the
"if ( dir == IOREQ_WRITE )" at the top of the function? (As an
aside, it's also odd for "data" to be "unsigned long" rather than
just "unsigned int" or, less preferable, "uint32_t".)

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:48:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegIX-00018y-Me; Fri, 29 May 2020 14:47:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jegIW-00018t-E5
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:47:48 +0000
X-Inumbo-ID: 5c5796ee-a1bb-11ea-81bc-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c5796ee-a1bb-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 14:47:47 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: viVHp8AO1Zt2nI/Z3zSZXcL6MVxc0T6JUCXptF7OtqoLsQkJZ3qaePBCEprcgJlukWsVP0bpJs
 4/N7yWjTH4sTKS97w7D5necpdd1vKrP4Bhto2FDz7hrQ1586AQXI3gdwidl5ktX/BloSdotZ/V
 p/krZsxVtDKXdFUi9fxzfkQx033S8JE5MuHa8c+1m0d8QwlK6dKV9K+ymk2vTdPW3SA7oDXy/y
 B5tQOdR3AzIM7Y4wIgxAPJYuEWam0GJ+eWRBeuJX6gwGz/pkQPWNqZmxRqE89kX6pcOpxu1N+f
 wtM=
X-SBRS: 2.7
X-MesageID: 18765118
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,448,1583211600"; d="scan'208";a="18765118"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Message-ID: <24273.8332.662607.125522@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 15:47:40 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
In-Reply-To: <96F32637-E410-4EC8-937A-CFC8BE724352@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
 <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
 <96F32637-E410-4EC8-937A-CFC8BE724352@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
> Which isn’t to say we shouldn’t do it; but it might be nice to also have an intermediate solution that works right now, even if it’s not optimal.

I propose the following behaviour by update-grub:

 1. Look for an ELF note, TBD.  If it's found, make XSM boot entries.
    (For now, skip this step, since the ELF note is not defined.)

 2. Look for a .config alongside the Xen binary.  Look for
       ^CONFIG_XSM_FLASK=y
    If the file is not found, or no line matches, make no XSM
    boot entries.
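The step-2 check could be sketched roughly as follows (a hypothetical shell fragment with made-up paths and helper names, not the actual update-grub code):

```shell
# Sketch: look for a .config alongside the Xen binary and emit XSM boot
# entries only if CONFIG_XSM_FLASK=y is present.
set -e

# Returns success (0) if XSM entries should be generated for this config.
emit_xsm_entries() {
    config="$1"
    [ -f "$config" ] && grep -q '^CONFIG_XSM_FLASK=y' "$config"
}

# Demonstration with a temporary stand-in for /boot/xen-*.config:
tmp=$(mktemp -d)
printf 'CONFIG_HVM=y\nCONFIG_XSM_FLASK=y\n' > "$tmp/xen.config"

if emit_xsm_entries "$tmp/xen.config"; then
    echo "XSM entries: yes"
else
    echo "XSM entries: no"
fi
rm -rf "$tmp"
```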

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:48:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:48:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegJA-0001Bo-WA; Fri, 29 May 2020 14:48:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jegJA-0001Bf-4o
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:48:28 +0000
X-Inumbo-ID: 746de9c2-a1bb-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 746de9c2-a1bb-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 14:48:27 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id C845BB00A;
 Fri, 29 May 2020 14:48:26 +0000 (UTC)
Subject: Re: [PATCH v2 1/7] xen: credit2: factor cpu to runqueue matching in a
 function
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
 <159070136424.12060.2223986236933194278.stgit@Palanthas>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <5d691e2f-0b45-5308-b5cb-bdef8ee8712e@suse.com>
Date: Fri, 29 May 2020 16:48:25 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <159070136424.12060.2223986236933194278.stgit@Palanthas>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.20 23:29, Dario Faggioli wrote:
> Just move the big if() condition in an inline function.
> 
> No functional change intended.
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:48:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:48:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegJL-0001Ep-8L; Fri, 29 May 2020 14:48:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pRu7=7L=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jegJK-0001ES-I2
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:48:38 +0000
X-Inumbo-ID: 7a6880c6-a1bb-11ea-9947-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a6880c6-a1bb-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 14:48:37 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id r7so4004929wro.1
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 07:48:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=yji15Dwy1ioHCVgZaGRdOYf8jrZqOi3UMj7qITADuIg=;
 b=GuwFkxhVmfXqc2+4L47AfCKkwBNT6UY7H0LjMUjIVsc/T8r2B6Eh46hOSWSfpKkMaz
 SneUiD7a6D+pvZ5Y3MDduKnA6IPEPiSVAh/s7XC2V445snXHQ3BdhmRcvj85bju3FzX7
 VQqTd8IlDd6q2Yz3tOo//LHXuvj6BpP7Fz05vRTyuawzYqcXkAKT/gjfyD92wZsH72e0
 NLDCC4r8//wH2a/PHTv8utNBirtwoESyiPflGUR9m9iE2fyza/l1YyoAi3Y/qu79imGZ
 fnmIsGNslE2knDk4VcRmEnMnkLf4ZIMFYlwqj1FWN4dbs/Qt++dFJFm5m6IG4QZ/jxDv
 +tYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=yji15Dwy1ioHCVgZaGRdOYf8jrZqOi3UMj7qITADuIg=;
 b=iP8ZDuQuzVB79F00sB4/6Etb9KfmOEN1SxUzi3J+hMGLTu4V5U4G4mr/HDRr+0BjNR
 An4ELEprwlR9bs2L8o5DyiHF+9leveogSC3RkPsySr6JC/+vEhkWKZyQyMGtAbrxifAF
 zJqjSjMjXCqIdLFf2L/YNCKBUZwRe1cg0uZYF+EI5HBFgeo05LKCcGxFhzqu7Vs+XdYL
 sOF5gaNlTkLIcD88eSuvh8J5Qvw4rECnGohqkbrcL9g/ERnOIzSZVRi4llywsXoouaBn
 b0jc94f85R8b9oha6o33paf7Pc3jiuJcu41J1gfIP7uj3gUa+duCuhX+jnJ5a6laWB2S
 Uw/g==
X-Gm-Message-State: AOAM530261VahvUUnQSJUr66yzVqaMhAqdAfB+Co3j9e8bwQP/VtxDvB
 XOVtIdhbhv7eU9uuyBULibvbFsLye4ghP6GESO0=
X-Google-Smtp-Source: ABdhPJwILZEwPCb4JSLXYblLNRzV+9xrKbcPbR8YvqWlVFU2zvvYL5zy21aqZQhgiCg0PnDGZrbOdVZjmtZLFrlw8fI=
X-Received: by 2002:a5d:490f:: with SMTP id x15mr8957789wrq.259.1590763716778; 
 Fri, 29 May 2020 07:48:36 -0700 (PDT)
MIME-Version: 1.0
References: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
 <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
 <1150720994.59695220.1590756705329.JavaMail.zimbra@cert.pl>
 <1f68a02a-3472-1bb0-90b9-6f3ccefca0b3@suse.com>
 <1623831291.59734817.1590760249321.JavaMail.zimbra@cert.pl>
In-Reply-To: <1623831291.59734817.1590760249321.JavaMail.zimbra@cert.pl>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 29 May 2020 08:48:01 -0600
Message-ID: <CABfawhmjeoVpRgAbDAA_T6rMiqPjQUvLPEn5t1-1JwZFJicNKQ@mail.gmail.com>
Subject: Re: [BUG] Core scheduling patches causing deadlock in some situations
To: =?UTF-8?B?TWljaGHFgiBMZXN6Y3p5xYRza2k=?= <michal.leszczynski@cert.pl>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, chivay@cert.pl, bonus@cert.pl
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 7:51 AM Michał Leszczyński
<michal.leszczynski@cert.pl> wrote:
>
> ----- On 29 May 2020 at 15:15, Jürgen Groß jgross@suse.com wrote:
>
> > On 29.05.20 14:51, Michał Leszczyński wrote:
> >> ----- On 29 May 2020 at 14:44, Jürgen Groß jgross@suse.com wrote:
> >>
> >>> On 29.05.20 14:30, Michał Leszczyński wrote:
> >>>> Hello,
> >>>>
> >>>> I'm running DRAKVUF on Dell Inc. PowerEdge R640/08HT8T server with Intel(R)
> >>>> Xeon(R) Gold 6132 CPU @ 2.60GHz CPU.
> >>>> When upgrading from Xen RELEASE 4.12 to 4.13, we have noticed some stability
> >>>> problems concerning freezes of Dom0 (Debian Buster):
> >>>>
> >>>> ---
> >>>>
> >>>> maj 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
> >>>> maj 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP)
> >>>> idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
> >>>> maj 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
> >>>> maj 27 23:17:02 debian kernel: NMI backtrace for cpu 0
> >>>> maj 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE
> >>>> 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
> >>>> maj 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T,
> >>>> BIOS 2.1.8 04/30/2019
> >>>> maj 27 23:17:02 debian kernel: Call Trace:
> >>>> maj 27 23:17:02 debian kernel: <IRQ>
> >>>> maj 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
> >>>> maj 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
> >>>> maj 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
> >>>> maj 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
> >>>> maj 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
> >>>> maj 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
> >>>> maj 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
> >>>> maj 27 23:17:02 debian kernel: update_process_times+0x28/0x60
> >>>> maj 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60
> >>>>
> >>>> ---
> >>>>
> >>>> This usually results in the machine being completely unresponsive and performing an
> >>>> automated reboot after some time.
> >>>>
> >>>> I've bisected commits between 4.12 and 4.13 and it seems like this is the patch
> >>>> which introduced the bug:
> >>>> https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
> >>>>
> >>>> Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from the fresh
> >>>> boot of the machine on which the bug was reproduced.
> >>>>
> >>>> I'm also attaching the `xl info` output from this machine:
> >>>>
> >>>> ---
> >>>>
> >>>> release : 4.19.0-6-amd64
> >>>> version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
> >>>> machine : x86_64
> >>>> nr_cpus : 14
> >>>> max_cpu_id : 223
> >>>> nr_nodes : 1
> >>>> cores_per_socket : 14
> >>>> threads_per_core : 1
> >>>> cpu_mhz : 2593.930
> >>>> hw_caps :
> >>>> bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
> >>>> virt_caps : pv hvm hvm_directio pv_directio hap shadow
> >>>> total_memory : 130541
> >>>> free_memory : 63591
> >>>> sharing_freed_memory : 0
> >>>> sharing_used_memory : 0
> >>>> outstanding_claims : 0
> >>>> free_cpus : 0
> >>>> xen_major : 4
> >>>> xen_minor : 13
> >>>> xen_extra : -unstable
> >>>> xen_version : 4.13-unstable
> >>>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p
> >>>> hvm-3.0-x86_64
> >>>> xen_scheduler : credit2
> >>>> xen_pagesize : 4096
> >>>> platform_params : virt_start=0xffff800000000000
> >>>> xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty
> >>>
> >>> Which is your original Xen base? This output is clearly obtained at the
> >>> end of the bisect process.
> >>>
> >>> There have been quite some corrections since the release of Xen 4.13, so
> >>> please make sure you are running the most recent version (4.13.1).
> >>>
> >>>
> >>> Juergen
> >>
> >> Sure, we have tested both RELEASE 4.13 and RELEASE 4.13.1. Unfortunately these
> >> corrections didn't help and the bug is still reproducible.
> >>
> >> From our testing it turns out that:
> >>
> >> Known working revision: 997d6248a9ae932d0dbaac8d8755c2b15fec25dc
> >> Broken revision: 6278553325a9f76d37811923221b21db3882e017
> >> First bad commit: 7c7b407e77724f37c4b448930777a59a479feb21
> >
> > Would it be possible to test xen unstable, too?
> >
> > I could imagine e.g. commit b492c65da5ec5ed or 99266e31832fb4a4 to have
> > an impact here.
> >
> >
> > Juergen
>
>
> I've tried the b492c65da5ec5ed revision but it seems that there is some problem
> with ALTP2M support, so I can't launch anything at all.
>
> maj 29 15:45:32 debian drakrun[1223]: Failed to set HVM_PARAM_ALTP2M, RC: -1
> maj 29 15:45:32 debian drakrun[1223]: VMI_ERROR: xc_altp2m_switch_to_view returned rc: -1

Ough, great, that's another regression in 4.14-unstable. I ran into it
myself but couldn't spend the time to figure out whether it's just
something in my configuration or not, so I reverted to 4.13.1. Now we
know it's a real bug.

Tamas


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:49:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegKI-0001Pl-OD; Fri, 29 May 2020 14:49:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jegKH-0001Pb-LW
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:49:37 +0000
X-Inumbo-ID: 9dc8ac26-a1bb-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9dc8ac26-a1bb-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 14:49:37 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id DDE61B01E;
 Fri, 29 May 2020 14:49:35 +0000 (UTC)
Subject: Re: [PATCH v2 3/3] clang: don't define nocall
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200528144023.10814-1-roger.pau@citrix.com>
 <20200528144023.10814-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bbc9aab3-6cb4-79b3-f646-a9565d39d0e2@suse.com>
Date: Fri, 29 May 2020 16:49:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200528144023.10814-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 16:40, Roger Pau Monne wrote:
> --- a/xen/include/xen/compiler.h
> +++ b/xen/include/xen/compiler.h
> @@ -20,7 +20,11 @@
>  
>  #define __weak        __attribute__((__weak__))
>  
> -#define nocall        __attribute__((error("Nonstandard ABI")))
> +#if !defined(__clang__)
> +# define nocall        __attribute__((error("Nonstandard ABI")))

I think a blank wants dropping here. I should also have noticed, back
when Andrew introduced this construct, that in a header we'd better
use __error__. Both can easily be taken care of while committing.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:50:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegLK-0002C6-22; Fri, 29 May 2020 14:50:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jegLI-0002Bg-D2
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:50:40 +0000
X-Inumbo-ID: c343080c-a1bb-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c343080c-a1bb-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 14:50:40 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0D3B7B00A;
 Fri, 29 May 2020 14:50:38 +0000 (UTC)
Subject: Re: [PATCH v2 2/7] xen: credit2: factor runqueue initialization in
 its own function.
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
 <159070137084.12060.14661333224235870762.stgit@Palanthas>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <b3de2398-0708-e274-2497-b30afc2e069c@suse.com>
Date: Fri, 29 May 2020 16:50:38 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <159070137084.12060.14661333224235870762.stgit@Palanthas>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.20 23:29, Dario Faggioli wrote:
> As it will be useful in later changes. While there, fix
> the doc-comment.
> 
> No functional change intended.
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:54:19 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:54:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegOl-0002S6-Hx; Fri, 29 May 2020 14:54:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jegOk-0002S1-Du
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:54:14 +0000
X-Inumbo-ID: 42c4807e-a1bc-11ea-a8cc-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 42c4807e-a1bc-11ea-a8cc-12813bfff9fa;
 Fri, 29 May 2020 14:54:13 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A4976AC24;
 Fri, 29 May 2020 14:54:12 +0000 (UTC)
Subject: Re: [PATCH v2 3/7] xen: cpupool: add a back-pointer from a scheduler
 to its pool
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
 <159070137738.12060.10928971799525755388.stgit@Palanthas>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <01541fda-c7dc-1eac-7184-970244eaf505@suse.com>
Date: Fri, 29 May 2020 16:54:12 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <159070137738.12060.10928971799525755388.stgit@Palanthas>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.20 23:29, Dario Faggioli wrote:
> If we need to know within which pool a particular scheduler
> is working, we can do that by querying the cpupool pointer
> of any of the sched_resource-s (i.e., ~ any of the CPUs)
> assigned to the scheduler itself.
> 
> Basically, we pick any sched_resource that we know uses that
> scheduler, and we check its *cpupool pointer. If we really
> know that the resource uses the scheduler, this is fine, as
> it also means the resource is inside the pool we are
> looking for.
> 
> But, of course, we can do that for a pool/scheduler that has

s/can/can't/ ?

> not been given any sched_resource yet (or if we do not
> know whether or not it has any sched_resource).
> 
> To overcome such limitation, add a back pointer from the
> scheduler, to its own pool.
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:56:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:56:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegQx-0002a7-Ut; Fri, 29 May 2020 14:56:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsAk=7L=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jegQw-0002a2-LB
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:56:30 +0000
X-Inumbo-ID: 93ec1868-a1bc-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93ec1868-a1bc-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 14:56:30 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 1B0A2B066;
 Fri, 29 May 2020 14:56:29 +0000 (UTC)
Message-ID: <2357c1d259ab09715ef747ff7e19c0a3a99bb1d0.camel@suse.com>
Subject: Re: [PATCH v2 3/7] xen: cpupool: add a back-pointer from a
 scheduler to its pool
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, 
 xen-devel@lists.xenproject.org
Date: Fri, 29 May 2020 16:56:27 +0200
In-Reply-To: <01541fda-c7dc-1eac-7184-970244eaf505@suse.com>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
 <159070137738.12060.10928971799525755388.stgit@Palanthas>
 <01541fda-c7dc-1eac-7184-970244eaf505@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-/rqpfXBmpkig0s0hs6mi"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: George Dunlap <george.dunlap@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-/rqpfXBmpkig0s0hs6mi
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2020-05-29 at 16:54 +0200, Jürgen Groß wrote:
> On 28.05.20 23:29, Dario Faggioli wrote:
> > If we need to know within which pool a particular scheduler
> > is working, we can do that by querying the cpupool pointer
> > of any of the sched_resource-s (i.e., ~ any of the CPUs)
> > assigned to the scheduler itself.
> > 
> > Basically, we pick any sched_resource that we know uses that
> > scheduler, and we check its *cpupool pointer. If we really
> > know that the resource uses the scheduler, this is fine, as
> > it also means the resource is inside the pool we are
> > looking for.
> > 
> > But, of course, we can do that for a pool/scheduler that has
> 
> s/can/can't/ ?
> 
And I even wrote "of course"... :-/

Yes, _of_course_, I meant can't. :-)

> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-/rqpfXBmpkig0s0hs6mi
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7RIpsACgkQFkJ4iaW4
c+746BAA09a/GMnZjoDq+IgIMq9SGtf8GV0nwXAHyjoWsyDih2qWtYKt0zskwirp
DO5PwXLoYhy9GMbS/n1kPpcTTLyPx7a9wNfQ6hAo40SMLhrMF59+HIgU+pq2gv50
lrSOIxqw1cB1bE+Uz6oGg30K6Ru0D40aUnGWOshqGY7RfeS43MlzBU0AvRUlx+cu
GAuUd1dTMccYMZozth8owJZtOXYL1lslv9gEyuGs09GePdb0aG21LV568HkqKyEQ
kwHE1Xzueatol90yV2BSmYZBv91nD26UTb3q5fM0GOU+DInaG4GiUdqpBgyyKyX4
qcJlm7HbG+5qmQ/d/s8h4cWJYqezqZBfrxcxUF0NIhEDz+YTrx0vzA2o8wePRoqK
l0NXpIKekM2ZUSpCmoM7fs+Ri7Q56JbNQVd0NzI+bUHibb+CsnmESh8Moa+KSI8l
0Gnapj8Ksj7FssdQLJz/GfdPLLKhHLsFlHKa/Sh0GJ7w+7k2II7AHaKGwkhIGoqG
nOHJIvk6BesBKoDjMsjaqml5DhnuE42OmjcnITGXL5s27oB0U2UWezKdnjiTtDTd
/35CSb/OguS85rbEaD3Flu31vNrgd39tUU2vj+E+y1c/IeY9fPY1/cbcqQOFsyg7
iyB7CHTgUDaLTdhZDO6gWK95ygohA1/CLeOXfngDCSaydwrq7sM=
=7dp5
-----END PGP SIGNATURE-----

--=-/rqpfXBmpkig0s0hs6mi--



From xen-devel-bounces@lists.xenproject.org Fri May 29 14:56:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:56:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegR1-0002bL-6y; Fri, 29 May 2020 14:56:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jegR0-0002aS-8q
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:56:34 +0000
X-Inumbo-ID: 962c4814-a1bc-11ea-a8cd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 962c4814-a1bc-11ea-a8cd-12813bfff9fa;
 Fri, 29 May 2020 14:56:33 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 929D6B077;
 Fri, 29 May 2020 14:56:32 +0000 (UTC)
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: Ian Jackson <ian.jackson@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
 <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
 <96F32637-E410-4EC8-937A-CFC8BE724352@citrix.com>
 <24273.8332.662607.125522@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5dc712c4-a57d-a387-89cd-e8a8991226a8@suse.com>
Date: Fri, 29 May 2020 16:56:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <24273.8332.662607.125522@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 16:47, Ian Jackson wrote:
> George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
>> Which isn’t to say we shouldn’t do it; but it might be nice to also have an intermediate solution that works right now, even if it’s not optimal.
> 
> I propose the following behaviour by update-grub:
> 
>  1. Look for an ELF note, TBD.  If it's found, make XSM boot entries.
>     (For now, skip this step, since the ELF note is not defined.)
> 
>  2. Look for a .config alongside the Xen binary.  Look for
>        ^CONFIG_XSM_FLASK=y
>     If the file is not found, or no line matches, make no XSM
>     boot entries.

Sounds like a plan then, I think.

Jan
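Step 2 of the proposal can be sketched in a few lines of shell. The paths and file names here are illustrative (a hypothetical xen-4.14 binary with a .config alongside it), not the actual update-grub code:

```shell
# Pretend setup: a .config sitting next to the (hypothetical) Xen binary.
tmpdir=$(mktemp -d)
printf 'CONFIG_XSM_FLASK=y\n' > "$tmpdir/xen-4.14.config"

config="$tmpdir/xen-4.14.config"
xsm_entries="no"
# Make XSM boot entries only if the file exists and the option is set.
if [ -f "$config" ] && grep -q '^CONFIG_XSM_FLASK=y' "$config"; then
    xsm_entries="yes"
fi
echo "generate XSM boot entries: $xsm_entries"

rm -rf "$tmpdir"
```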


From xen-devel-bounces@lists.xenproject.org Fri May 29 14:58:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 14:58:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegSN-0002ky-I5; Fri, 29 May 2020 14:57:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jegSM-0002kt-DU
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 14:57:58 +0000
X-Inumbo-ID: c82b2eb6-a1bc-11ea-a8cd-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c82b2eb6-a1bc-11ea-a8cd-12813bfff9fa;
 Fri, 29 May 2020 14:57:57 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 9D60BAC24;
 Fri, 29 May 2020 14:57:56 +0000 (UTC)
Subject: Re: [PATCH] x86/hvm: Improve error information in handle_pio()
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <20200528130738.12816-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <58d93dbe-11e3-ac1b-c2b5-b410ae837726@suse.com>
Date: Fri, 29 May 2020 16:57:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200528130738.12816-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul.durrant@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 15:07, Andrew Cooper wrote:
> domain_crash() should always have a message which emitted even in release

Oh, forgot to say: The wording here looks somewhat strange (and thus
unclear) to me.

> builds, so something more useful than this is presented.
> 
>   (XEN) domain_crash called from io.c:171
>   (XEN) domain_crash called from io.c:171
>   (XEN) domain_crash called from io.c:171
>   ...
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>



From xen-devel-bounces@lists.xenproject.org Fri May 29 15:04:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:04:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegY3-0003k3-6r; Fri, 29 May 2020 15:03:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jegY1-0003jy-Oy
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:03:49 +0000
X-Inumbo-ID: 9931aecc-a1bd-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9931aecc-a1bd-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 15:03:48 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:32924
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jegXx-000xTJ-K4 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 16:03:45 +0100
Subject: Re: [PATCH v10 1/9] x86emul: address x86_insn_is_mem_{access, write}()
 omissions
To: Jan Beulich <jbeulich@suse.com>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <f41a4f27-bbe2-6450-38c1-6c4e23f2b07b@suse.com>
 <8e976b4b-60f2-bf94-843d-0fe0ba57087c@citrix.com>
 <5e46ec9b-2ae0-3d28-01c8-794356532456@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <56a4b478-6b3e-1eea-12b6-560dcb2776f2@citrix.com>
Date: Fri, 29 May 2020 16:03:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <5e46ec9b-2ae0-3d28-01c8-794356532456@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 14:29, Jan Beulich wrote:
> On 29.05.2020 14:18, Andrew Cooper wrote:
>> On 25/05/2020 15:26, Jan Beulich wrote:
>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>> @@ -11474,25 +11474,87 @@ x86_insn_operand_ea(const struct x86_emu
>>>      return state->ea.mem.off;
>>>  }
>>>  
>>> +/*
>>> + * This function means to return 'true' for all supported insns with explicit
>>> + * accesses to memory.  This means also insns which don't have an explicit
>>> + * memory operand (like POP), but it does not mean e.g. segment selector
>>> + * loads, where the descriptor table access is considered an implicit one.
>>> + */
>>>  bool
>>>  x86_insn_is_mem_access(const struct x86_emulate_state *state,
>>>                         const struct x86_emulate_ctxt *ctxt)
>>>  {
>>> +    if ( mode_64bit() && state->not_64bit )
>>> +        return false;
>> Is this path actually used?
> Yes, it is. It's only x86_emulate() which has
>
>     generate_exception_if(state->not_64bit && mode_64bit(), EXC_UD);
>
> right now.

Oh.  That is a bit awkward.

>> state->not_64bit ought to fail instruction
>> decode, at which point we wouldn't have a valid state to be used here.
> x86_decode() currently doesn't have much raising of #UD at all, I
> think. If it wasn't like this, the not_64bit field wouldn't be
> needed - it's used only to communicate from decode to execute.
> We're not entirely consistent with this though, seeing in
> x86_decode_onebyte(), a few lines below the block of case labels
> setting that field,
>
>     case 0x9a: /* call (far, absolute) */
>     case 0xea: /* jmp (far, absolute) */
>         generate_exception_if(mode_64bit(), EXC_UD);

This is because there is no legitimate way to determine the end of the
instruction.

Most of the not_64bit instructions do have a well-defined end, even if
they aren't permitted for use.

> We could certainly drop the field and raise #UD during decode
> already, but don't you think we then should do so for all
> encodings that ultimately lead to #UD, e.g. also for AVX insns
> without AVX available to the guest? This would make
> x86_decode() quite a bit larger, as it would then also need to
> have a giant switch() (or something else that's suitable to
> cover all cases).

I think there is a semantic difference between "we can't parse the
instruction", and "we can parse it, but it's not legitimate in this
context".

I don't think our exact split is correct, but I don't think moving all
#UD checking into x86_decode() is correct either.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:05:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegZr-0003qj-Jh; Fri, 29 May 2020 15:05:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jegZr-0003qd-1n
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:05:43 +0000
X-Inumbo-ID: dd87ebd6-a1bd-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd87ebd6-a1bd-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 15:05:42 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=FfHiBUA1Nx1d6+aUh4SA2rdUE4k8uXjAOCxpP1CZ79Y=; b=c1+x16AKH4L2VkyrOoXvvt7wy/
 iCbgmCbBGH+iTLnllbGw/DslzrH+AC2Z7NPc0X+e4E8rwygENlQi4YLui3pAuDqO7Z4sbMZ/kmXzr
 DO36Eq+gAasxoZ+8NgNcFTk9p6DE9HiCDjd1Xc4M0EKom00QDGQXNLg0apPc6OyDaHuI=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jegZi-0007UG-9I; Fri, 29 May 2020 15:05:34 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jegZi-00062U-28; Fri, 29 May 2020 15:05:34 +0000
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: Ian Jackson <ian.jackson@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
 <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
 <96F32637-E410-4EC8-937A-CFC8BE724352@citrix.com>
 <24273.8332.662607.125522@mariner.uk.xensource.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7eaa7541-f698-b29b-b4b3-1f40fc37c5d7@xen.org>
Date: Fri, 29 May 2020 16:05:31 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <24273.8332.662607.125522@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Ian,

On 29/05/2020 15:47, Ian Jackson wrote:
> George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
>> Which isn’t to say we shouldn’t do it; but it might be nice to also have an intermediate solution that works right now, even if it’s not optimal.
> 
> I propose the following behaviour by update-grub:
> 
>   1. Look for an ELF note, TBD.  If it's found, make XSM boot entries.
>      (For now, skip this step, since the ELF note is not defined.)

I am afraid the ELF note is a very x86 thing. On Arm, we don't have such a 
thing for the kernel/xen (actually the final binary is not even an ELF).

> 
>   2. Look for a .config alongside the Xen binary.  Look for
>         ^CONFIG_XSM_FLASK=y
>      If the file is not found, or no line matches, make no XSM
>      boot entries.

... this would probably be the best solution for Arm.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:06:52 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:06:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegax-0003wt-UF; Fri, 29 May 2020 15:06:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=BsAk=7L=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1jegaw-0003wm-BH
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:06:50 +0000
X-Inumbo-ID: 051a158e-a1be-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 051a158e-a1be-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 15:06:49 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 3E5D2ACC5;
 Fri, 29 May 2020 15:06:48 +0000 (UTC)
Message-ID: <60c7224bf3f502e62a21deeb4d45f0aec9cdd943.camel@suse.com>
Subject: Re: [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Date: Fri, 29 May 2020 17:06:46 +0200
In-Reply-To: <ab810b293ca8324ca3fba22476401a58435243fa.camel@suse.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <ab810b293ca8324ca3fba22476401a58435243fa.camel@suse.com>
Organization: SUSE
Content-Type: multipart/signed; micalg="pgp-sha256";
 protocol="application/pgp-signature"; boundary="=-GwpRhYkaFr/g6S+yArtv"
User-Agent: Evolution 3.36.2 
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Paul Durrant <paul@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>


--=-GwpRhYkaFr/g6S+yArtv
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, 2020-05-29 at 13:46 +0200, Dario Faggioli wrote:
> Basically, if we just consider patches 1 and 4 we will end up, right
> after boot, with a system that has smaller runqueues.
>
Actually, to be fully precise, given how I reorganized the series, it's
not patches 1 and 4, it's patches 1, 3 and 4.

Hopefully that is not a big deal, but if patch 3 really is a problem,
I can re-arrange patch 4 to work without it.

Apart from this, and to add more information: on a system with 96
CPUs in 2 sockets, this is how the runqueues look (with these
patches):

(XEN) Online Cpus: 0-95
(XEN) Cpupool 0:
(XEN) Cpus: 0-95
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Active queues: 6
(XEN)   default-weight     = 256
(XEN) Runqueue 0:
(XEN)   ncpus              = 16
(XEN)   cpus               = 0-15
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 0
(XEN)   instload           = 0
(XEN)   aveload            = 1223 (~0%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,00000000,00000000,0000ffff
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,00000000,00000000,0000ffff
(XEN) Runqueue 1:
(XEN)   ncpus              = 16
(XEN)   cpus               = 16-31
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 16
(XEN)   instload           = 0
(XEN)   aveload            = 3324 (~1%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,00000000,00000000,ffff0000
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,00000000,00000000,ffff0000
(XEN) Runqueue 2:
(XEN)   ncpus              = 16
(XEN)   cpus               = 32-47
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 32
(XEN)   instload           = 1
(XEN)   aveload            = 8996 (~3%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,00000000,0000feff,00000000
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,00000000,0000fcff,00000000
(XEN) Runqueue 3:
(XEN)   ncpus              = 16
(XEN)   cpus               = 48-63
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 48
(XEN)   instload           = 0
(XEN)   aveload            = 2424 (~0%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,00000000,ffff0000,00000000
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,00000000,ffff0000,00000000
(XEN) Runqueue 4:
(XEN)   ncpus              = 16
(XEN)   cpus               = 64-79
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 66
(XEN)   instload           = 0
(XEN)   aveload            = 1070 (~0%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,0000ffff,00000000,00000000
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,0000ffff,00000000,00000000
(XEN) Runqueue 5:
(XEN)   ncpus              = 16
(XEN)   cpus               = 80-95
(XEN)   max_weight         = 256
(XEN)   pick_bias          = 82
(XEN)   instload           = 0
(XEN)   aveload            = 425 (~0%)
(XEN)   idlers: 00000000,00000000,00000000,00000000,ffff0000,00000000,00000000
(XEN)   tickled: 00000000,00000000,00000000,00000000,00000000,00000000,00000000
(XEN)   fully idle cores: 00000000,00000000,00000000,00000000,ffff0000,00000000,00000000

Without the patches, there would be just 2 of them (one with CPUs 0-47
and another with CPUs 48-95).

On a system with "just" 16 CPUs, in 2 sockets, they look like this:

(XEN) Online Cpus: 0-15
(XEN) Cpupool 0:
(XEN) Cpus: 0-15
(XEN) Scheduling granularity: cpu, 1 CPU per sched-resource
(XEN) Scheduler: SMP Credit Scheduler rev2 (credit2)
(XEN) Active queues: 2
(XEN) 	default-weight     = 256
(XEN) Runqueue 0:
(XEN) 	ncpus              = 8
(XEN) 	cpus               = 0-7
(XEN) 	max_weight         = 256
(XEN) 	pick_bias          = 0
(XEN) 	instload           = 0
(XEN) 	aveload            = 7077 (~2%)
(XEN) 	idlers: 00000000,000000ff
(XEN) 	tickled: 00000000,00000000
(XEN) 	fully idle cores: 00000000,000000ff
(XEN) Runqueue 1:
(XEN) 	ncpus              = 8
(XEN) 	cpus               = 8-15
(XEN) 	max_weight         = 256
(XEN) 	pick_bias          = 8
(XEN) 	instload           = 1
(XEN) 	aveload            = 11848 (~4%)
(XEN) 	idlers: 00000000,0000fe00
(XEN) 	tickled: 00000000,00000000
(XEN) 	fully idle cores: 00000000,0000fc00

There are still 2, because there are 2 sockets, and we still honor the
topology (and 8 CPUs in a runqueue is fine, because it is lower than 16).
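The arrangement described above (runqueues never spanning sockets, each socket's CPUs split into chunks of at most 16) can be sketched with a toy model. This is only an illustration of the "Active queues" counts quoted in the boot output, not the actual credit2 code, and nr_runqueues() is a made-up helper name:

```c
#include <assert.h>

/*
 * Toy model of the runqueue arrangement described above: runqueues never
 * span sockets, and each socket's CPUs are split into chunks of at most
 * max_per_runq CPUs (16 in these examples).  The real credit2 logic is
 * more involved; this only reproduces the quoted counts.
 */
static unsigned int nr_runqueues(unsigned int ncpus, unsigned int nsockets,
                                 unsigned int max_per_runq)
{
    unsigned int per_socket = ncpus / nsockets;

    /* ceil(per_socket / max_per_runq) runqueues on each socket */
    return nsockets * ((per_socket + max_per_runq - 1) / max_per_runq);
}
```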

I'll share the same output on a 256 CPU system, as soon as I finish
installing it.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-GwpRhYkaFr/g6S+yArtv
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAl7RJQYACgkQFkJ4iaW4
c+4uixAAlTOJjjZ9Rj6VGZf5eymjfRW7DlYb3IhIAiTcOLVrTYUF8+OCDJ+8AJLS
+OCPR9ANqIoJ5HBNgPcvPtMTmS0VqMSxkVIBS/bQseQixJl9mBLqbENCNwE4wOBk
8Q9h7LIzmaANn4P8FuhnjnWU6UJYbOv9UfiUN3OSQqANS3cjqtA8g1PGzCII2kV5
rwJhAOuuX3GQ0AGRu/8kvqN3lz3stSLhdyvLAtCjFhvQ2xnRNycnEVZJc8j21yZ6
CS551p7exO4UBw5KdSERPe/XuwtMZfpvtWJBOI9ufC6tAWIB5mDGCGZUDIw9flNZ
ckLbcVsA+qFyf2UITVRqoFOlnLaEeBP4YkZ9aMZF/KhL0L4zNeVK7/u9Kzgxm2FD
NrrXd641CSU+NmNEsasPEEA2d2iHzv8vA1Wv8mRQzOQGXr0lyDq03Vra5xZbmfwl
gmbI2lQu/9hCpK0UZDGW8x1l5r8RhTKWyUOrOxibPotlBsZWgN0i/FYh9Ikr2ste
OCg9UtHXXUvEOk6dytQAjz50N4NDDAJKKoyQgG45Wd1znuwQpeNRX9VDKveGN+67
iI+4U9SAgYBFwSPXVBn34PefT7USHf9FfGKcduSuL+PYMfhE/SmG6V+SoP0ZOJq2
vzMcfmBN1dbJy15Fh+1k7dlWJi+wqUZkMTaQAmhb9OUgR7A3+/I=
=LxRn
-----END PGP SIGNATURE-----

--=-GwpRhYkaFr/g6S+yArtv--



From xen-devel-bounces@lists.xenproject.org Fri May 29 15:07:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegbJ-00040T-A9; Fri, 29 May 2020 15:07:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3yhI=7L=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jegbI-00040I-Ay
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:07:12 +0000
X-Inumbo-ID: 1219e28c-a1be-11ea-8993-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1219e28c-a1be-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 15:07:11 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: e7XbytowCZx0ORFF5jF8a6kGqCFaqssE1F7VlabNyK7R2AlLLWub5sftG0Y61FE4T1S2FJyniL
 lSZ4W78NgiuUK48V2uDucyoXzLXFhaxYubrgmE6uueoMM7ICCkLGfq25PndSQ1sISd+KT0XSTm
 flPgBl41970kDw2BXh4f1ENsKfDs0yV/hENx3gxaQFEL3qYkR7sOPdyphGKRVok4AM6jSRa9gD
 N9Dzc2eg1m0nSduDx3kiraWS4r8M+9UokCyW2zDOHAnDxc1lj/jLxL+287NGG7UShflgEYXs3m
 k1w=
X-SBRS: 2.7
X-MesageID: 19118900
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="19118900"
Subject: Re: [PATCH] x86/svm: do not try to handle recalc NPT faults
 immediately
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger@xen.org>
References: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
 <20200529143331.GO1195@Air-de-Roger>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <98dda864-e47e-bba5-a66b-02662708c540@citrix.com>
Date: Fri, 29 May 2020 16:06:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <20200529143331.GO1195@Air-de-Roger>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, george.dunlap@citrix.com, wl@xen.org,
 jbeulich@suse.com, andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 15:33, Roger Pau Monné wrote:
> On Fri, May 29, 2020 at 01:35:53AM +0100, Igor Druzhinin wrote:
>> A recalculation NPT fault doesn't always require additional handling
>> in hvm_hap_nested_page_fault(), moreover in general case if there is no
>> explicit handling done there - the fault is wrongly considered fatal.
>>
>> Instead of trying to be opportunistic - use safer approach and handle
>> P2M recalculation in a separate NPT fault by attempting to retry after
>> making the necessary adjustments. This is aligned with Intel behavior
>> where there are separate VMEXITs for recalculation and EPT violations
>> (faults) and only faults are handled in hvm_hap_nested_page_fault().
>> Do it by also unifying do_recalc return code with Intel implementation
>> where returning 1 means P2M was actually changed.
> 
> That seems like a good approach IMO.
> 
> Do you know whether this will make the code slower? (since there are
> cases previously handled in a single vmexit that would take two
> vmexits now)

The only case I could think of is memory writes during migration: the
first fault would propagate the P2M type recalculation, while the
second would actually log a dirty page.

The slowdown would obviously only occur during the live phase, but it
should be marginal and in line with what we currently have on Intel.

>> This covers a specific case of migration with vGPU assigned on AMD:
>> global log-dirty is enabled and causes immediate recalculation NPT
>> fault in MMIO area upon access.
>>
>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
> 
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>
> 
>> ---
>> This is a safer alternative to:
>> https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg01662.html
>> and more correct approach from my PoV.
>> ---
>>  xen/arch/x86/hvm/svm/svm.c | 5 +++--
>>  xen/arch/x86/mm/p2m-pt.c   | 8 ++++++--
>>  2 files changed, 9 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/arch/x86/hvm/svm/svm.c b/xen/arch/x86/hvm/svm/svm.c
>> index 46a1aac..7f6f578 100644
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2923,9 +2923,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>              v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
>>          rc = vmcb->exitinfo1 & PFEC_page_present
>>               ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
>> -        if ( rc >= 0 )
>> +        if ( rc == 0 )
>> +            /* If no recal adjustments were being made - handle this fault */
>>              svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
>> -        else
>> +        else if ( rc < 0 )
>>          {
>>              printk(XENLOG_G_ERR
>>                     "%pv: Error %d handling NPF (gpa=%08lx ec=%04lx)\n",
>> diff --git a/xen/arch/x86/mm/p2m-pt.c b/xen/arch/x86/mm/p2m-pt.c
>> index 5c05017..377565b 100644
>> --- a/xen/arch/x86/mm/p2m-pt.c
>> +++ b/xen/arch/x86/mm/p2m-pt.c
>> @@ -340,7 +340,7 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>>      unsigned long gfn_remainder = gfn;
>>      unsigned int level = 4;
>>      l1_pgentry_t *pent;
>> -    int err = 0;
>> +    int err = 0, rc = 0;
>>  
>>      table = map_domain_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
>>      while ( --level )
>> @@ -402,6 +402,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>>                  clear_recalc(l1, e);
>>                  err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>>                  ASSERT(!err);
>> +
>> +                rc = 1;
>>              }
>>          }
>>          unmap_domain_page((void *)((unsigned long)pent & PAGE_MASK));
>> @@ -448,12 +450,14 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>>              clear_recalc(l1, e);
>>          err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>>          ASSERT(!err);
>> +
>> +        rc = 1;
>>      }
>>  
>>   out:
>>      unmap_domain_page(table);
>>  
>> -    return err;
>> +    return err ? err : rc;
> 
> Nit: you can use the elvis operator here: return err ?: rc;
> 
> Also I couldn't spot any caller that would have troubles with the
> function now returning 1 in certain conditions, can you confirm the
> callers have been audited?

Yes, I checked all the callers before making the change. That's actually
how I spotted that the Intel side is already doing exactly the same.
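As an aside on the "elvis operator" nit quoted above: `a ?: b` is a GNU C extension equivalent to `a ? a : b`, except that `a` is evaluated only once. A minimal standalone sketch (pick() and its values are hypothetical, not the Xen code):

```c
/*
 * GNU C extension: "a ?: b" yields a when a is non-zero, else b,
 * evaluating a only once.  This mirrors rewriting
 * "return err ? err : rc;" as "return err ?: rc;" in the patch.
 */
static int pick(int err, int rc)
{
    return err ?: rc;
}
```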

Igor


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:07:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegbV-000436-J0; Fri, 29 May 2020 15:07:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jegbT-00042Z-Iy
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:07:23 +0000
X-Inumbo-ID: 191a92ac-a1be-11ea-9dbe-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 191a92ac-a1be-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 15:07:23 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:33028
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jegbQ-0010N0-Li (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 16:07:20 +0100
Subject: Re: [PATCH v10 2/9] x86emul: rework CMP and TEST emulation
To: Jan Beulich <jbeulich@suse.com>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <5843dca9-1a1a-a32e-3cb0-95cd93533723@suse.com>
 <c215f77f-f645-eb08-3ac1-7d9f211e1abf@citrix.com>
 <629b042b-1226-6d2d-6509-569bb3c64abe@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d1d304ee-613e-c013-2000-39c5707c3af0@citrix.com>
Date: Fri, 29 May 2020 16:07:19 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <629b042b-1226-6d2d-6509-569bb3c64abe@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 14:41, Jan Beulich wrote:
> On 29.05.2020 14:24, Andrew Cooper wrote:
>> On 25/05/2020 15:26, Jan Beulich wrote:
>>> Unlike similarly encoded insns these don't write their memory operands,
>> "write to their".
>>
>>> and hence x86_is_mem_write() should return false for them. However,
>>> rather than adding special logic there, rework how their emulation gets
>>> done, by making decoding attributes properly describe the r/o nature of
>>> their memory operands.
>> Describe how?  I see you've change the order of operands encoding, but
>> then override it back?
> There's no overriding back, I don't think: I change the table entries
> for opcodes 0x38 and 0x39, with no other adjustments to the attributes
> later on. For the other opcodes I leave the table entries as they are,
> and override the attributes for the specific sub-cases (identified by
> ModRM.reg).
>
> For opcodes 0x38 and 0x39 the change of the table entries implies
> changing the order of operands as passed to emulate_2op_SrcV(), hence
> the splitting of the cases in the main switch().
> I didn't think this was necessary to spell out in the commit message,
> but of course I can re-use most of the text above and add it into
> there, if you think that would help.

Yes please.  With something suitable, Acked-by: Andrew Cooper
<andrew.cooper3@citrix.com>

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:09:45 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegdf-0004Hh-Ve; Fri, 29 May 2020 15:09:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jegde-0004Hb-LN
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:09:38 +0000
X-Inumbo-ID: 69a4846c-a1be-11ea-9dbe-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69a4846c-a1be-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 15:09:38 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:33084
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jegda-0011lW-Lu (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 16:09:34 +0100
Subject: Re: [PATCH] x86/hvm: Improve error information in handle_pio()
To: Jan Beulich <jbeulich@suse.com>
References: <20200528130738.12816-1-andrew.cooper3@citrix.com>
 <58d93dbe-11e3-ac1b-c2b5-b410ae837726@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <19108621-8350-5779-f917-2b1f6b1cc2d4@citrix.com>
Date: Fri, 29 May 2020 16:09:33 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <58d93dbe-11e3-ac1b-c2b5-b410ae837726@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul.durrant@citrix.com>,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 15:57, Jan Beulich wrote:
> On 28.05.2020 15:07, Andrew Cooper wrote:
>> domain_crash() should always have a message which emitted even in release
> Oh, forgot to say: The wording here looks somewhat strange (and thus
> unclear) to me.

It should read "which is emitted", but this is basically a stopgap fix
seeing as I still haven't adjusted domain_crash() to force a message to
be provided.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:10:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:10:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jege6-0004qv-7y; Fri, 29 May 2020 15:10:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jege5-0004gj-6y
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:10:05 +0000
X-Inumbo-ID: 797f0466-a1be-11ea-81bc-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 797f0466-a1be-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 15:10:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id BAFF2AC24;
 Fri, 29 May 2020 15:10:03 +0000 (UTC)
Subject: Re: [PATCH v2 1/3] x86/mm: do not attempt to convert _PAGE_GNTTAB to
 a boolean
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200528144023.10814-1-roger.pau@citrix.com>
 <20200528144023.10814-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1fdfa7ca-ea1f-3c29-a953-24749dbc9f70@suse.com>
Date: Fri, 29 May 2020 17:10:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200528144023.10814-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 16:40, Roger Pau Monne wrote:
> Clang 10 complains with:
> 
> mm.c:1239:10: error: converting the result of '<<' to a boolean always evaluates to true
>       [-Werror,-Wtautological-constant-compare]
>     if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&

And taking the wording of the message exactly as it is, it is wrong
(if the shifted value is zero, or if all set bits get shifted out,
the result converts to false rather than true). But the warning is
bogus imo anyway - we have it like this for a reason. I'd then wonder
whether -Wtautological-constant-compare actually does any good, or
whether, as an alternative, we wouldn't rather disable it.

I further wonder whether they might not warn about the same thing in
an #if down the road.
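A minimal standalone reproduction of the pattern under discussion (a sketch with an assumed flag value, not the real _PAGE_GNTTAB definition from Xen's headers): guarding the expression with the constant in a && chain is what trips Clang 10's -Wtautological-constant-compare, while an #if on the same constant compiles the code out without any warning.

```c
/* Assumed stand-in for _PAGE_GNTTAB; the real value lives in Xen headers. */
#define PAGE_GNTTAB_SKETCH (1u << 22)

/*
 * With "PAGE_GNTTAB_SKETCH && (flags & PAGE_GNTTAB_SKETCH)" Clang 10
 * warns that converting the result of '<<' to a boolean is always true.
 * Using #if keeps the dead-code elimination without the warning.
 */
static int has_gnttab_flag(unsigned int flags)
{
#if PAGE_GNTTAB_SKETCH
    return (flags & PAGE_GNTTAB_SKETCH) != 0;
#else
    return 0;   /* flag compiled out on this build: nothing to clean up */
#endif
}
```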

> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -1236,7 +1236,8 @@ void put_page_from_l1e(l1_pgentry_t l1e, struct domain *l1e_owner)
>       * (Note that the undestroyable active grants are not a security hole in
>       * Xen. All active grants can safely be cleaned up when the domain dies.)
>       */
> -    if ( _PAGE_GNTTAB && (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
> +#if _PAGE_GNTTAB
> +    if ( (l1e_get_flags(l1e) & _PAGE_GNTTAB) &&
>           !l1e_owner->is_shutting_down && !l1e_owner->is_dying )

Since we are in general(?) moving away from #if / #ifdef towards
constructs that include the condition, in a suitable form, in a
non-preprocessor expression, I think we want a comment here clarifying
that this shouldn't be converted back to how it is right now. With
that added, for the immediate purpose:
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:11:17 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:11:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegfE-0005FC-JR; Fri, 29 May 2020 15:11:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jegfD-0005F2-8k
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:11:15 +0000
X-Inumbo-ID: a3104394-a1be-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3104394-a1be-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 15:11:14 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=8sI0ySeI4B1jtNwSWCd+iKciDnNv33+twDnCM5+e5T4=; b=w/blFMxKxDbGu42G/ql73qCDyg
 sfQrTpgbZfWbLRTPY/3WfH5VU3k/3xz7qHxZVe3NAF/u2ROgsjkgrz5yYLxOfUhiAR7KhJlr4wXmu
 sBAoAB2pBQk20B0bl1KPzLfcopwG9XLBF1QIUhyHeoPCSTqzPJkbzYyH1D8E1AEr8zHQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jegfB-0007dY-Fc; Fri, 29 May 2020 15:11:13 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jegfB-0007Nw-1x; Fri, 29 May 2020 15:11:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jegfB-0006PU-1M; Fri, 29 May 2020 15:11:13 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-arm64-xsm
Message-Id: <E1jegfB-0006PU-1M@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 15:11:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-arm64-xsm
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
  Bug not present: 86234eafb95295621aef6c618e4c22c10d8e4138
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/150490/


  commit ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri May 29 10:20:32 2020 +0200
  
      tools: add xenfs tool
      
      Add the xenfs tool for accessing the hypervisor filesystem.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Wei Liu <wl@xen.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build --summary-out=tmp/150490.bisection-summary --basis-template=150438 --blessings=real,real-bisect xen-unstable-smoke build-arm64-xsm xen-build
Searching for failure / basis pass:
 150488 fail [host=laxton1] / 150438 [host=laxton0] 150433 ok.
Failure / basis pass flights: 150488 / 150433
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 410cc30fdc590417ae730d635bbc70257adf6750 9b9a83e43598b231111487378d6037fa8fa473d5
Basis pass 410cc30fdc590417ae730d635bbc70257adf6750 724913de8ac8426d313a4645741d86c1169ae406
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen.git#410cc30fdc590417ae730d635bbc70257adf6750-410cc30fdc590417ae730d635bbc70257adf6750 git://xenbits.xen.org/xen.git#724913de8ac8426d313a4645741d86c1169ae406-9b9a83e43598b231111487378d6037fa8fa473d5
Loaded 5001 nodes in revision graph
Searching for test results:
 150472 fail 410cc30fdc590417ae730d635bbc70257adf6750 9b9a83e43598b231111487378d6037fa8fa473d5
 150465 fail 410cc30fdc590417ae730d635bbc70257adf6750 ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
 150438 [host=laxton0]
 150487 fail 410cc30fdc590417ae730d635bbc70257adf6750 ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
 150467 pass 410cc30fdc590417ae730d635bbc70257adf6750 724913de8ac8426d313a4645741d86c1169ae406
 150478 pass 410cc30fdc590417ae730d635bbc70257adf6750 86234eafb95295621aef6c618e4c22c10d8e4138
 150433 pass 410cc30fdc590417ae730d635bbc70257adf6750 724913de8ac8426d313a4645741d86c1169ae406
 150468 fail 410cc30fdc590417ae730d635bbc70257adf6750 ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
 150469 fail 410cc30fdc590417ae730d635bbc70257adf6750 ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
 150489 pass 410cc30fdc590417ae730d635bbc70257adf6750 86234eafb95295621aef6c618e4c22c10d8e4138
 150479 fail 410cc30fdc590417ae730d635bbc70257adf6750 9b9a83e43598b231111487378d6037fa8fa473d5
 150471 pass 410cc30fdc590417ae730d635bbc70257adf6750 1497e78068421d83956f8e82fb6e1bf1fc3b1199
 150488 fail 410cc30fdc590417ae730d635bbc70257adf6750 9b9a83e43598b231111487378d6037fa8fa473d5
 150482 pass 410cc30fdc590417ae730d635bbc70257adf6750 724913de8ac8426d313a4645741d86c1169ae406
 150475 pass 410cc30fdc590417ae730d635bbc70257adf6750 0e9dcd0159c671608e154da5b8b7e0edd2905067
 150490 fail 410cc30fdc590417ae730d635bbc70257adf6750 ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
 150476 pass 410cc30fdc590417ae730d635bbc70257adf6750 5b5ccafb0c425b85a60fd4f241d5f6951d0e4928
 150484 fail 410cc30fdc590417ae730d635bbc70257adf6750 9b9a83e43598b231111487378d6037fa8fa473d5
 150485 fail 410cc30fdc590417ae730d635bbc70257adf6750 ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
 150486 pass 410cc30fdc590417ae730d635bbc70257adf6750 86234eafb95295621aef6c618e4c22c10d8e4138
Searching for interesting versions
 Result found: flight 150433 (pass), for basis pass
 Result found: flight 150472 (fail), for basis failure
 Repro found: flight 150482 (pass), for basis pass
 Repro found: flight 150484 (fail), for basis failure
 0 revisions at 410cc30fdc590417ae730d635bbc70257adf6750 86234eafb95295621aef6c618e4c22c10d8e4138
No revisions left to test, checking graph state.
 Result found: flight 150478 (pass), for last pass
 Result found: flight 150485 (fail), for first failure
 Repro found: flight 150486 (pass), for last pass
 Repro found: flight 150487 (fail), for first failure
 Repro found: flight 150489 (pass), for last pass
 Repro found: flight 150490 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
  Bug not present: 86234eafb95295621aef6c618e4c22c10d8e4138
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/150490/


  commit ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
  Author: Juergen Gross <jgross@suse.com>
  Date:   Fri May 29 10:20:32 2020 +0200
  
      tools: add xenfs tool
      
      Add the xenfs tool for accessing the hypervisor filesystem.
      
      Signed-off-by: Juergen Gross <jgross@suse.com>
      Acked-by: Wei Liu <wl@xen.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
150490: tolerable ALL FAIL

flight 150490 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/150490/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-xsm               6 xen-build               fail baseline untested


jobs:
 build-arm64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri May 29 15:11:23 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:11:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegfL-0005Go-SI; Fri, 29 May 2020 15:11:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jegfJ-0005GD-Ua
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:11:21 +0000
X-Inumbo-ID: a629454f-a1be-11ea-a8d0-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a629454f-a1be-11ea-a8d0-12813bfff9fa;
 Fri, 29 May 2020 15:11:20 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 7A0B6ACC5;
 Fri, 29 May 2020 15:11:19 +0000 (UTC)
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: Julien Grall <julien@xen.org>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
 <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
 <96F32637-E410-4EC8-937A-CFC8BE724352@citrix.com>
 <24273.8332.662607.125522@mariner.uk.xensource.com>
 <7eaa7541-f698-b29b-b4b3-1f40fc37c5d7@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <63838ce9-8dbf-aacf-521d-97540758a945@suse.com>
Date: Fri, 29 May 2020 17:11:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <7eaa7541-f698-b29b-b4b3-1f40fc37c5d7@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 17:05, Julien Grall wrote:
> On 29/05/2020 15:47, Ian Jackson wrote:
>> George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
>>> Which isn’t to say we shouldn’t do it; but it might be nice to also have an intermediate solution that works right now, even if it’s not optimal.
>>
>> I propose the following behaviour for update-grub:
>>
>>   1. Look for an ELF note, TBD.  If it's found, make XSM boot entries.
>>      (For now, skip this step, since the ELF note is not defined.)
> 
> I am afraid the ELF note is a very x86 thing. On Arm, we don't have such a
> thing for the kernel/xen (actually the final binary is not even an ELF).

Ouch - of course. Is there anything similar one could employ there?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:15:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:15:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegjZ-0005Vr-EX; Fri, 29 May 2020 15:15:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jegjY-0005Vm-Ku
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:15:44 +0000
X-Inumbo-ID: 43f7f00e-a1bf-11ea-a8d7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43f7f00e-a1bf-11ea-a8d7-12813bfff9fa;
 Fri, 29 May 2020 15:15:44 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=hqJUOWekooLGXjl/U07AAEWFuOE+8xFrYmWELYmNq4U=; b=EIFxklwrsBL3TWNQEZEtnGhM4
 o1cOUibVBqFtmlP0ByEAgXwRUfPFsg4OQN7ehrelryCfNdghAS+G+XgsGMmNPCbO3HmXs7BiCLedq
 1ekZjwPnPpfiHaadlc/eospUZD1fEDEU0bPRvFmiXhSibV+jwGn8ejlgjKLIopKOfk+U0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jegjX-0007kq-M2; Fri, 29 May 2020 15:15:43 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jegjX-0007VM-Bf; Fri, 29 May 2020 15:15:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jegjX-00018Q-B2; Fri, 29 May 2020 15:15:43 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150488-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150488: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=9b9a83e43598b231111487378d6037fa8fa473d5
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 15:15:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150488 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150488/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  9b9a83e43598b231111487378d6037fa8fa473d5
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days    5 attempts
Testing same since   150472  2020-05-29 11:01:58 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 9b9a83e43598b231111487378d6037fa8fa473d5
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:22:50 2020 +0200

    SUPPORT.md: add hypervisor file system
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 7f8d2dc29ea5a51f88ec253be93970768ec9fac2
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:22:42 2020 +0200

    CHANGELOG: add hypervisor file system support
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Paul Durrant <paul@xen.org>

commit 02e9a9cf20950e78c816987415ed920d72444f94
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:31 2020 +0200

    xen: remove XEN_SYSCTL_set_parameter support
    
    The functionality of XEN_SYSCTL_set_parameter is available via hypfs
    now, so it can be removed.
    
    This allows removing the kernel_param structure for runtime parameters
    by putting the only remaining structure element into the hypfs node
    structure of the runtime parameters.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit a2486890689713116351e5bbfb8f104c797479cc
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:16 2020 +0200

    tools/libxc: remove xc_set_parameters()
    
    There is no user of xc_set_parameters() left, so remove it.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 2ea4b9829cf95b59f75f0c70543f2368d702305e
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:20:08 2020 +0200

    tools/libxl: use libxenhypfs for setting xen runtime parameters
    
    Instead of xc_set_parameters() use xenhypfs_write() for setting
    parameters of the hypervisor.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit a659d7cab9afcba337cb60225738fd85ff7aa3da
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:18:36 2020 +0200

    xen: add runtime parameter access support to hypfs
    
    Add support to read and modify values of hypervisor runtime parameters
    via the hypervisor file system.
    
    As runtime parameters can be modified via a sysctl, too, this path has
    to take the hypfs rw_lock as writer.
    
    For custom runtime parameters the connection between the parameter
    value and the file system is done via an init function which will set
    the initial value (if needed) and the leaf properties.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 58263ed7713e8132c2bc00bc870399ea31bf2231
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:15:54 2020 +0200

    xen: add /buildinfo/config entry to hypervisor filesystem
    
    Add the /buildinfo/config entry to the hypervisor filesystem. This
    entry contains the .config file used to build the hypervisor.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit b5d4711d2b17753498a3f587585f11bf9ca5af85
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:14:51 2020 +0200

    xen: provide version information in hypfs
    
    Provide version and compile information in the /buildinfo/ node of the
    Xen hypervisor file system. As this information is accessible by dom0
    only, no additional security problem arises.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>

commit 048f82ccd1b3dda511af25a7a8524c8ba5ca2786
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 12:14:24 2020 +0200

    xen/hypfs: make struct hypfs_entry_leaf initializers work with gcc 4.1
    
    gcc 4.1 has problems with static initializers for anonymous unions.
    Fix this by naming the union in struct hypfs_entry_leaf.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Tested-by: Jan Beulich <jbeulich@suse.com>

commit ef716e1dc6206adc5e2a181fe0e20dfd6072bf4c
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:32 2020 +0200

    tools: add xenfs tool
    
    Add the xenfs tool for accessing the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 86234eafb95295621aef6c618e4c22c10d8e4138
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:20:21 2020 +0200

    libs: add libxenhypfs
    
    Add the new library libxenhypfs for access to the hypervisor filesystem.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5b5ccafb0c425b85a60fd4f241d5f6951d0e4928
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:50 2020 +0200

    xen: add basic hypervisor filesystem support
    
    Add the infrastructure for the hypervisor filesystem.
    
    This includes the hypercall interface and the base functions for
    entry creation, deletion and modification.
    
    In order not to have to repeat the same pattern multiple times in case
    adding a new node should BUG_ON() failure, the helpers for adding a
    node (hypfs_add_dir() and hypfs_add_leaf()) get a nofault parameter
    causing the BUG() in case of a failure.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 0e9dcd0159c671608e154da5b8b7e0edd2905067
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:35 2020 +0200

    docs: add feature document for Xen hypervisor sysfs-like support
    
    On the 2019 Xen developer summit there was agreement that the Xen
    hypervisor should gain support for a hierarchical name-value store
    similar to the Linux kernel's sysfs.
    
    In the beginning there should only be basic support: entries can be
    added from the hypervisor itself only, there is a simple hypercall
    interface to read the data.
    
    Add a feature document for setting the base of a discussion regarding
    the desired functionality and the entries to add.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit c48a9956e334a5dd99e846d04ad56185b07aab64
Author: Juergen Gross <jgross@suse.com>
Date:   Fri May 29 10:15:08 2020 +0200

    xen: add a generic way to include binary files as variables
    
    Add a new script xen/tools/binfile for including a binary file at build
    time being usable via a pointer and a size variable in the hypervisor.
    
    Make use of that generic tool in xsm.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:18:01 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeglk-0005ez-0r; Fri, 29 May 2020 15:18:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3yhI=7L=citrix.com=igor.druzhinin@srs-us1.protection.inumbo.net>)
 id 1jeglj-0005eu-1l
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:17:59 +0000
X-Inumbo-ID: 92c3caaa-a1bf-11ea-a8d7-12813bfff9fa
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92c3caaa-a1bf-11ea-a8d7-12813bfff9fa;
 Fri, 29 May 2020 15:17:56 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: gzE/Y/bcAQE6Hc/ZTh1X3ETkde1v3Ul5V/2yVBkTH6i3ZEA5z6XgmiS46wVLlICUHocVigImHc
 k36YYUwFjfcE1sZeIVWwtdW+kL1HMLMiJiF/yp5oYX8r8fiJWOROVRmKShYpCmwB3RDW7Rglet
 28gROmxIWkwVl+nL8CoKEQ+8wOsgF4sj/bR+8nyINezUzYwDJhvVeHB94HCicM0Ynq0P9p4z6A
 LkXoRewx+D4/p9sjCt+zqku0ONExI71e1BIFWGYiCA5/5KEiuEVzZ6UZSfirBSu/46tZfEeZkl
 1qw=
X-SBRS: 2.7
X-MesageID: 19120258
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="19120258"
Subject: Re: [PATCH] x86/svm: do not try to handle recalc NPT faults
 immediately
To: Jan Beulich <jbeulich@suse.com>
References: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
 <bb934c0c-3f0d-df7e-1720-8dbbdbf7691d@suse.com>
From: Igor Druzhinin <igor.druzhinin@citrix.com>
Message-ID: <026404fb-54b7-d03f-b4c5-367bcb5af41d@citrix.com>
Date: Fri, 29 May 2020 16:17:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:60.0) Gecko/20100101
 Thunderbird/60.9.0
MIME-Version: 1.0
In-Reply-To: <bb934c0c-3f0d-df7e-1720-8dbbdbf7691d@suse.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, roger.pau@citrix.com,
 george.dunlap@citrix.com, wl@xen.org, andrew.cooper3@citrix.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 15:34, Jan Beulich wrote:
> On 29.05.2020 02:35, Igor Druzhinin wrote:
>> A recalculation NPT fault doesn't always require additional handling
>> in hvm_hap_nested_page_fault(); moreover, in the general case, if there is
>> no explicit handling done there the fault is wrongly considered fatal.
>>
>> Instead of trying to be opportunistic, use a safer approach and handle
>> P2M recalculation in a separate NPT fault by attempting to retry after
>> making the necessary adjustments. This is aligned with Intel behavior
>> where there are separate VMEXITs for recalculation and EPT violations
>> (faults) and only faults are handled in hvm_hap_nested_page_fault().
>> Do it by also unifying do_recalc return code with Intel implementation
>> where returning 1 means P2M was actually changed.
>>
>> This covers a specific case of migration with vGPU assigned on AMD:
>> global log-dirty is enabled and causes immediate recalculation NPT
>> fault in MMIO area upon access.
> 
> To be honest, from this last paragraph I still can't really derive
> what goes wrong exactly why, before this change.

I admit it might require some knowledge of how vGPU is implemented. I will try
to give more info in this paragraph.

>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>> ---
>> This is a safer alternative to:
>> https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg01662.html
>> and more correct approach from my PoV.
> 
> Indeed - I was about to reply there, but then I thought I'd first
> look at this patch, in case it was a replacement.
> 
>> --- a/xen/arch/x86/hvm/svm/svm.c
>> +++ b/xen/arch/x86/hvm/svm/svm.c
>> @@ -2923,9 +2923,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>              v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
>>          rc = vmcb->exitinfo1 & PFEC_page_present
>>               ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
>> -        if ( rc >= 0 )
>> +        if ( rc == 0 )
>> +            /* If no recalc adjustments were made, handle this fault */
>>              svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
>> -        else
>> +        else if ( rc < 0 )
> 
> So from going through the code and judging by the comment in
> finish_type_change() (which btw you will need to update, to avoid
> it becoming stale) the >= here was there just in case, without
> there actually being any case where a positive value would be
> returned. If that's also the conclusion you've drawn, then I
> think it would help mentioning this in the description.

I re-read the comments in finish_type_change() and to me they look
pretty much in line with the now common interface between EPT and NPT
recalc calls. 

Ok, I will point out that I concluded there was no additional intent
of necessarily calling svm_do_nested_pgfault() after recalc.

> It is also desirable to mention finish_type_change() not being
> affected, as already dealing with the > 0 case.

Will mention that as well.

>> --- a/xen/arch/x86/mm/p2m-pt.c
>> +++ b/xen/arch/x86/mm/p2m-pt.c
>> @@ -340,7 +340,7 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>>      unsigned long gfn_remainder = gfn;
>>      unsigned int level = 4;
>>      l1_pgentry_t *pent;
>> -    int err = 0;
>> +    int err = 0, rc = 0;
>>  
>>      table = map_domain_page(pagetable_get_mfn(p2m_get_pagetable(p2m)));
>>      while ( --level )
>> @@ -402,6 +402,8 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>>                  clear_recalc(l1, e);
>>                  err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>>                  ASSERT(!err);
>> +
>> +                rc = 1;
>>              }
>>          }
>>          unmap_domain_page((void *)((unsigned long)pent & PAGE_MASK));
>> @@ -448,12 +450,14 @@ static int do_recalc(struct p2m_domain *p2m, unsigned long gfn)
>>              clear_recalc(l1, e);
>>          err = p2m->write_p2m_entry(p2m, gfn, pent, e, level + 1);
>>          ASSERT(!err);
>> +
>> +        rc = 1;
>>      }
>>  
>>   out:
>>      unmap_domain_page(table);
>>  
>> -    return err;
>> +    return err ? err : rc;
> 
> Typically we write this as "err ?: rc". I'd like to ask that "rc" also
> be renamed, to something like "recalc_done", and then to become bool.

Sure.

Igor
 


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:20:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jegnh-0005lk-DB; Fri, 29 May 2020 15:20:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jegng-0005lf-Fb
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:20:00 +0000
X-Inumbo-ID: dc24cbc2-a1bf-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc24cbc2-a1bf-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 15:19:59 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id A480FB07D;
 Fri, 29 May 2020 15:19:58 +0000 (UTC)
Subject: Re: [PATCH v10 1/9] x86emul: address x86_insn_is_mem_{access, write}()
 omissions
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <d2375ecb-f662-99d8-84c2-e9f9c5cf7b9e@suse.com>
 <f41a4f27-bbe2-6450-38c1-6c4e23f2b07b@suse.com>
 <8e976b4b-60f2-bf94-843d-0fe0ba57087c@citrix.com>
 <5e46ec9b-2ae0-3d28-01c8-794356532456@suse.com>
 <56a4b478-6b3e-1eea-12b6-560dcb2776f2@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1c030ba3-b1e0-62b6-b048-f907acade662@suse.com>
Date: Fri, 29 May 2020 17:19:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <56a4b478-6b3e-1eea-12b6-560dcb2776f2@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>, Roger Pau Monne <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29.05.2020 17:03, Andrew Cooper wrote:
> On 29/05/2020 14:29, Jan Beulich wrote:
>> On 29.05.2020 14:18, Andrew Cooper wrote:
>>> On 25/05/2020 15:26, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/x86_emulate/x86_emulate.c
>>>> +++ b/xen/arch/x86/x86_emulate/x86_emulate.c
>>>> @@ -11474,25 +11474,87 @@ x86_insn_operand_ea(const struct x86_emu
>>>>      return state->ea.mem.off;
>>>>  }
>>>>  
>>>> +/*
>>>> + * This function means to return 'true' for all supported insns with explicit
>>>> + * accesses to memory.  This means also insns which don't have an explicit
>>>> + * memory operand (like POP), but it does not mean e.g. segment selector
>>>> + * loads, where the descriptor table access is considered an implicit one.
>>>> + */
>>>>  bool
>>>>  x86_insn_is_mem_access(const struct x86_emulate_state *state,
>>>>                         const struct x86_emulate_ctxt *ctxt)
>>>>  {
>>>> +    if ( mode_64bit() && state->not_64bit )
>>>> +        return false;
>>> Is this path actually used?
>> Yes, it is. It's only x86_emulate() which has
>>
>>     generate_exception_if(state->not_64bit && mode_64bit(), EXC_UD);
>>
>> right now.
> 
> Oh.  That is a bit awkward.
> 
>>> state->not_64bit ought to fail instruction
>>> decode, at which point we wouldn't have a valid state to be used here.
>> x86_decode() currently doesn't have much raising of #UD at all, I
>> think. If it wasn't like this, the not_64bit field wouldn't be
>> needed - it's used only to communicate from decode to execute.
>> We're not entirely consistent with this though, seeing in
>> x86_decode_onebyte(), a few lines below the block of case labels
>> setting that field,
>>
>>     case 0x9a: /* call (far, absolute) */
>>     case 0xea: /* jmp (far, absolute) */
>>         generate_exception_if(mode_64bit(), EXC_UD);
> 
> This is because there is no legitimate way to determine the end of the
> instruction.
> 
> Most of the not_64bit instructions do have a well-defined end, even if
> they aren't permitted for use.

I wouldn't bet on that. Just look at the re-use of opcode D6
(salc in 32-bit) by L1OM for an extreme example. There's nothing
we can say about how any of the reserved opcodes may get re-used.
Prior to AVX / AVX512 we'd have estimated C4, C5, and 62 wrongly
as well (but that's true independent of execution mode).

>> We could certainly drop the field and raise #UD during decode
>> already, but don't you think we then should do so for all
>> encodings that ultimately lead to #UD, e.g. also for AVX insns
>> without AVX available to the guest? This would make
>> x86_decode() quite a bit larger, as it would then also need to
>> have a giant switch() (or something else that's suitable to
>> cover all cases).
> 
> I think there is a semantic difference between "we can't parse the
> instruction", and "we can parse it, but it's not legitimate in this
> context".
> 
> I don't think our exact split is correct, but I don't think moving all
> #UD checking into x86_decode() is correct either.

Do you have any clear, sufficiently simple rule in mind for where
we could draw the boundary?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:23:50 2020
Subject: Re: [PATCH v2 4/7] xen: credit2: limit the max number of CPUs in a
 runqueue
To: Dario Faggioli <dfaggioli@suse.com>, xen-devel@lists.xenproject.org
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
 <159070138395.12060.9523981885146042705.stgit@Palanthas>
From: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>
Message-ID: <826d5572-0c72-4fd4-f182-38023ca555ef@suse.com>
Date: Fri, 29 May 2020 17:23:40 +0200
MIME-Version: 1.0
In-Reply-To: <159070138395.12060.9523981885146042705.stgit@Palanthas>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>

On 28.05.20 23:29, Dario Faggioli wrote:
> In Credit2 CPUs (can) share runqueues, depending on the topology. For
> instance, with per-socket runqueues (the default) all the CPUs that are
> part of the same socket share a runqueue.
> 
> On platforms with a huge number of CPUs per socket, that could be a
> problem. An example is AMD EPYC2 servers, where we can have up to 128
> CPUs in a socket.
> 
> It is of course possible to define other, still topology-based, runqueue
> arrangements (e.g., per-LLC, per-DIE, etc). But that may still result in
> runqueues with too many CPUs on other/future platforms. For instance, a
> system with 96 CPUs and 2 NUMA nodes will end up having 48 CPUs per
> runqueue. Not as bad, but still a lot!
> 
> Therefore, let's set a limit to the max number of CPUs that can share a
> Credit2 runqueue. The actual value is configurable (at boot time), the
> default being 16. If, for instance, there are more than 16 CPUs in a
> socket, they'll be split among two (or more) runqueues.
> 
> Note: with core scheduling enabled, this parameter sets the max number
> of *scheduling resources* that can share a runqueue. Therefore, with
> granularity set to core (and assuming 2 threads per core), we will have
> at most 16 cores per runqueue, which corresponds to 32 threads. But that
> is fine, considering how core scheduling works.
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Reviewed-by: Juergen Gross <jgross@suse.com>

with one additional question below.

> ---
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: George Dunlap <george.dunlap@citrix.com>
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Juergen Gross <jgross@suse.com>
> ---
> Changes from v1:
> - always try to add a CPU to the runqueue with the least CPUs already in
>    it. This should guarantee a more even distribution of CPUs among
>    runqueues, as requested during review;
> - rename the matching function from foo_smt_bar() to foo_siblings_bar(),
>    which is more generic, and do the same to the per-arch wrappers;
> - deal with the case where the user is trying to set fewer CPUs per
>    runqueue than there are siblings per core (by putting siblings in the
>    same runq anyway, but logging a message), as requested during review;
> - use the per-cpupool value for the scheduling granularity, as requested
>    during review;
> - add a comment about why we also count siblings that are currently
>    outside of our cpupool, as suggested during review;
> - add a boot command line doc entry;
> - fix typos in comments;
> ---
>   docs/misc/xen-command-line.pandoc |   14 ++++
>   xen/common/sched/credit2.c        |  144 +++++++++++++++++++++++++++++++++++--
>   xen/include/asm-arm/cpufeature.h  |    5 +
>   xen/include/asm-x86/processor.h   |    5 +
>   4 files changed, 162 insertions(+), 6 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index e16bb90184..1787f2c8fb 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1840,6 +1840,20 @@ with read and write permissions.
>   
>   Choose the default scheduler.
>   
> +### sched_credit2_max_cpus_runqueue
> +> `= <integer>`
> +
> +> Default: `16`
> +
> +Defines how many CPUs will be put, at most, in each Credit2 runqueue.
> +
> +Runqueues are still arranged according to the host topology (and following
> +what is indicated by the 'credit2_runqueue' parameter). But we also have a
> +cap on the number of CPUs that share each runqueue.
> +
> +A value that is a submultiple of the number of online CPUs is recommended,
> +as that would likely produce a perfectly balanced runqueue configuration.
> +
>   ### sched_credit2_migrate_resist
>   > `= <integer>`
>   
> diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
> index 8a4f28b9f5..f4d3f8ae6b 100644
> --- a/xen/common/sched/credit2.c
> +++ b/xen/common/sched/credit2.c
> @@ -25,6 +25,7 @@
>   #include <xen/trace.h>
>   #include <xen/cpu.h>
>   #include <xen/keyhandler.h>
> +#include <asm/processor.h>
>   
>   #include "private.h"
>   
> @@ -471,6 +472,22 @@ static int __init parse_credit2_runqueue(const char *s)
>   }
>   custom_param("credit2_runqueue", parse_credit2_runqueue);
>   
> +/*
> + * How many CPUs will be put, at most, in each runqueue.
> + *
> + * Runqueues are still arranged according to the host topology (and according
> + * to the value of the 'credit2_runqueue' parameter). But we also have a cap
> + * on the number of CPUs that share runqueues.
> + *
> + * This should be considered an upper limit. In fact, we also try to balance
> + * the number of CPUs in each runqueue. And, when doing that, it is possible
> + * that fewer CPUs than this parameter mandates will actually be put
> + * in each runqueue.
> + */
> +#define MAX_CPUS_RUNQ 16
> +static unsigned int __read_mostly opt_max_cpus_runqueue = MAX_CPUS_RUNQ;
> +integer_param("sched_credit2_max_cpus_runqueue", opt_max_cpus_runqueue);
> +
>   /*
>    * Per-runqueue data
>    */
> @@ -852,18 +869,83 @@ cpu_runqueue_match(const struct csched2_runqueue_data *rqd, unsigned int cpu)
>              (opt_runqueue == OPT_RUNQUEUE_NODE && same_node(peer_cpu, cpu));
>   }
>   
> +/*
> + * Additional checks, to avoid separating siblings in different runqueues.
> + * This deals with both Intel's HTs and AMD's CUs. An arch that does not have
> + * any similar concept will just have cpu_nr_siblings() always return 1, and
> + * set up the cpu_sibling_mask-s accordingly (as ARM currently does), and
> + * things will just work.
> + */
> +static bool
> +cpu_runqueue_siblings_match(const struct csched2_runqueue_data *rqd,
> +                            unsigned int cpu, unsigned int max_cpus_runq)
> +{
> +    unsigned int nr_sibls = cpu_nr_siblings(cpu);
> +    unsigned int rcpu, tot_sibls = 0;
> +
> +    /*
> +     * If we put the CPU in this runqueue, we must be sure that there will
> +     * be enough room for accepting its sibling(s) as well.
> +     */
> +    cpumask_clear(cpumask_scratch_cpu(cpu));
> +    for_each_cpu ( rcpu, &rqd->active )
> +    {
> +        ASSERT(rcpu != cpu);
> +        if ( !cpumask_intersects(per_cpu(cpu_sibling_mask, rcpu), cpumask_scratch_cpu(cpu)) )
> +        {
> +            /*
> +             * For each CPU already in the runqueue, account for it and for
> +             * its sibling(s), independently from whether they are in the
> +             * runqueue or not. Of course, we do this only once, for each CPU
> +             * that is already inside the runqueue and all its siblings!
> +             *
> +             * This way, even if there are CPUs in the runqueue with siblings
> +             * in different cpupools, we still count all of them here.
> +             * The reason for this is that, if at some future point we will
> +             * move those sibling CPUs to this cpupool, we want them to land
> +             * in this runqueue. Hence we must be sure to leave space for them.
> +             */
> +            cpumask_or(cpumask_scratch_cpu(cpu), cpumask_scratch_cpu(cpu),
> +                       per_cpu(cpu_sibling_mask, rcpu));
> +            tot_sibls += cpu_nr_siblings(rcpu);
> +        }
> +    }
> +    /*
> +     * We know that neither the CPU nor any of its siblings is here,
> +     * or we wouldn't even have entered the function.
> +     */
> +    ASSERT(!cpumask_intersects(cpumask_scratch_cpu(cpu),
> +                               per_cpu(cpu_sibling_mask, cpu)));
> +
> +    /* Try adding CPU and its sibling(s) to the count and check... */
> +    return tot_sibls + nr_sibls <= max_cpus_runq;
> +}
> +
>   static struct csched2_runqueue_data *
> -cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
> +cpu_add_to_runqueue(const struct scheduler *ops, unsigned int cpu)
>   {
> +    struct csched2_private *prv = csched2_priv(ops);
>       struct csched2_runqueue_data *rqd, *rqd_new;
> +    struct csched2_runqueue_data *rqd_valid = NULL;
>       struct list_head *rqd_ins;
>       unsigned long flags;
>       int rqi = 0;
> -    bool rqi_unused = false, rqd_valid = false;
> +    unsigned int min_rqs, max_cpus_runq;
> +    bool rqi_unused = false;
>   
>       /* Prealloc in case we need it - not allowed with interrupts off. */
>       rqd_new = xzalloc(struct csched2_runqueue_data);
>   
> +    /*
> +     * While respecting the limit of not having more than the max number of
> +     * CPUs per runqueue, let's also try to "spread" the CPUs, as evenly as
> +     * possible, among the runqueues. For doing that, we need to know upfront
> +     * how many CPUs we have, so let's use the number of CPUs that are online
> +     * for that.
> +     */
> +    min_rqs = ((num_online_cpus() - 1) / opt_max_cpus_runqueue) + 1;
> +    max_cpus_runq = num_online_cpus() / min_rqs;
> +
>       write_lock_irqsave(&prv->lock, flags);
>   
>       rqd_ins = &prv->rql;
> @@ -873,10 +955,59 @@ cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
>           if ( !rqi_unused && rqd->id > rqi )
>               rqi_unused = true;
>   
> +        /*
> +         * First of all, let's check whether, according to the system
> +         * topology, this CPU belongs in this runqueue.
> +         */
>           if ( cpu_runqueue_match(rqd, cpu) )
>           {
> -            rqd_valid = true;
> -            break;
> +            /*
> +             * If the CPU has any siblings which are online and being added
> +             * to this cpupool, always keep them together. Even
> +             * if that means violating what the opt_max_cpus_runqueue param
> +             * indicates. However, if this happens, chances are high that a
> +             * too small value was used for the parameter, so warn the user
> +             * about that.
> +             *
> +             * Note that we cannot check this once and for all, say, during
> +             * scheduler initialization. In fact, at least in theory, the
> +             * number of siblings a CPU has may not be the same for all the
> +             * CPUs.
> +             */
> +            if ( cpumask_intersects(&rqd->active, per_cpu(cpu_sibling_mask, cpu)) )
> +            {
> +                if ( cpumask_weight(&rqd->active) >= opt_max_cpus_runqueue )
> +                {
> +                        printk("WARNING: %s: more than opt_max_cpus_runqueue "
> +                               "in a runqueue (%u vs %u), due to topology constraints.\n"
> +                               "Consider raising it!\n",
> +                               __func__, opt_max_cpus_runqueue,

Is printing the function name really adding any important information?

> +                               cpumask_weight(&rqd->active));
> +                }
> +                rqd_valid = rqd;
> +                break;
> +            }
> +
> +            /*
> +             * If we're using core (or socket) scheduling, no need to do any
> +             * further checking beyond the number of CPUs already in this
> +             * runqueue respecting our upper bound.
> +             *
> +             * Otherwise, let's try to make sure that siblings stay in the
> +             * same runqueue, under pretty much any circumstances.
> +             */
> +            if ( rqd->refcnt < max_cpus_runq && (ops->cpupool->gran != SCHED_GRAN_cpu ||
> +                  cpu_runqueue_siblings_match(rqd, cpu, max_cpus_runq)) )
> +            {
> +                /*
> +                 * This runqueue is ok, but as we said, we also want an even
> +                 * distribution of the CPUs. So, unless this is the very first
> +                 * match, we go on, check all runqueues and actually add the
> +                 * CPU into the one that is less full.
> +                 */
> +                if ( !rqd_valid || rqd->refcnt < rqd_valid->refcnt )
> +                    rqd_valid = rqd;
> +            }
>           }
>   
>           if ( !rqi_unused )
> @@ -900,6 +1031,8 @@ cpu_add_to_runqueue(struct csched2_private *prv, unsigned int cpu)
>           rqd->pick_bias = cpu;
>           rqd->id = rqi;
>       }
> +    else
> +        rqd = rqd_valid;
>   
>       rqd->refcnt++;
>   
> @@ -3744,7 +3877,6 @@ csched2_dump(const struct scheduler *ops)
>   static void *
>   csched2_alloc_pdata(const struct scheduler *ops, int cpu)
>   {
> -    struct csched2_private *prv = csched2_priv(ops);
>       struct csched2_pcpu *spc;
>       struct csched2_runqueue_data *rqd;
>   
> @@ -3754,7 +3886,7 @@ csched2_alloc_pdata(const struct scheduler *ops, int cpu)
>       if ( spc == NULL )
>           return ERR_PTR(-ENOMEM);
>   
> -    rqd = cpu_add_to_runqueue(prv, cpu);
> +    rqd = cpu_add_to_runqueue(ops, cpu);
>       if ( IS_ERR(rqd) )
>       {
>           xfree(spc);
> diff --git a/xen/include/asm-arm/cpufeature.h b/xen/include/asm-arm/cpufeature.h
> index 9af5666628..8fdf9685d7 100644
> --- a/xen/include/asm-arm/cpufeature.h
> +++ b/xen/include/asm-arm/cpufeature.h
> @@ -64,6 +64,11 @@ static inline bool cpus_have_cap(unsigned int num)
>       return test_bit(num, cpu_hwcaps);
>   }
>   
> +static inline int cpu_nr_siblings(unsigned int cpu)
> +{
> +    return 1;
> +}
> +
>   /* System capability check for constant cap */
>   #define cpus_have_const_cap(num) ({                 \
>           register_t __ret;                           \
> diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
> index 070691882b..73017c3f4b 100644
> --- a/xen/include/asm-x86/processor.h
> +++ b/xen/include/asm-x86/processor.h
> @@ -174,6 +174,11 @@ extern void init_intel_cacheinfo(struct cpuinfo_x86 *c);
>   
>   unsigned int apicid_to_socket(unsigned int);
>   
> +static inline int cpu_nr_siblings(unsigned int cpu)
> +{
> +    return cpu_data[cpu].x86_num_siblings;
> +}
> +
>   /*
>    * Generic CPUID function
>    * clear %ecx since some cpus (Cyrix MII) do not set or clear %ecx
> 



From xen-devel-bounces@lists.xenproject.org Fri May 29 15:25:06 2020
Subject: Re: [PATCH] x86/svm: do not try to handle recalc NPT faults
 immediately
From: Igor Druzhinin <igor.druzhinin@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
References: <1590712553-7298-1-git-send-email-igor.druzhinin@citrix.com>
 <bb934c0c-3f0d-df7e-1720-8dbbdbf7691d@suse.com>
 <026404fb-54b7-d03f-b4c5-367bcb5af41d@citrix.com>
Message-ID: <973a9bbb-d40d-8fd0-5e14-6119efd093b7@citrix.com>
Date: Fri, 29 May 2020 16:24:59 +0100
In-Reply-To: <026404fb-54b7-d03f-b4c5-367bcb5af41d@citrix.com>
Cc: xen-devel@lists.xenproject.org, roger.pau@citrix.com,
 george.dunlap@citrix.com, wl@xen.org, andrew.cooper3@citrix.com

On 29/05/2020 16:17, Igor Druzhinin wrote:
> On 29/05/2020 15:34, Jan Beulich wrote:
>> On 29.05.2020 02:35, Igor Druzhinin wrote:
>>> A recalculation NPT fault doesn't always require additional handling
>>> in hvm_hap_nested_page_fault(); moreover, in the general case, if no
>>> explicit handling is done there, the fault is wrongly considered fatal.
>>>
>>> Instead of trying to be opportunistic, use a safer approach and handle
>>> P2M recalculation in a separate NPT fault by attempting to retry after
>>> making the necessary adjustments. This is aligned with Intel behavior,
>>> where there are separate VMEXITs for recalculation and EPT violations
>>> (faults), and only faults are handled in hvm_hap_nested_page_fault().
>>> Do this by also unifying the do_recalc return code with the Intel
>>> implementation, where returning 1 means the P2M was actually changed.
>>>
>>> This covers a specific case of migration with vGPU assigned on AMD:
>>> global log-dirty is enabled and causes immediate recalculation NPT
>>> fault in MMIO area upon access.
>>
>> To be honest, from this last paragraph I still can't really derive
>> what exactly goes wrong, and why, before this change.
> 
> I admit it might require some knowledge of how vGPU is implemented. I will try
> to give more info in this paragraph.
> 
>>> Signed-off-by: Igor Druzhinin <igor.druzhinin@citrix.com>
>>> ---
>>> This is a safer alternative to:
>>> https://lists.xenproject.org/archives/html/xen-devel/2020-05/msg01662.html
>>> and a more correct approach from my PoV.
>>
>> Indeed - I was about to reply there, but then I thought I'd first
>> look at this patch, in case it was a replacement.
>>
>>> --- a/xen/arch/x86/hvm/svm/svm.c
>>> +++ b/xen/arch/x86/hvm/svm/svm.c
>>> @@ -2923,9 +2923,10 @@ void svm_vmexit_handler(struct cpu_user_regs *regs)
>>>              v->arch.hvm.svm.cached_insn_len = vmcb->guest_ins_len & 0xf;
>>>          rc = vmcb->exitinfo1 & PFEC_page_present
>>>               ? p2m_pt_handle_deferred_changes(vmcb->exitinfo2) : 0;
>>> -        if ( rc >= 0 )
>>> +        if ( rc == 0 )
>>> +            /* If no recalc adjustments were made, handle this fault. */
>>>              svm_do_nested_pgfault(v, regs, vmcb->exitinfo1, vmcb->exitinfo2);
>>> -        else
>>> +        else if ( rc < 0 )
>>
>> So from going through the code and judging by the comment in
>> finish_type_change() (which btw you will need to update, to avoid
>> it becoming stale) the >= here was there just in case, without
>> there actually being any case where a positive value would be
>> returned. If that's also the conclusion you've drawn, then I
>> think it would help mentioning this in the description.
> 
> I re-read the comments in finish_type_change() and to me they look
> pretty much in line with the now common interface between EPT and NPT
> recalc calls. 

Sorry, upon closer examination there is indeed a new case I missed. Thanks
for pointing out.

Igor


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:36:36 2020
Message-ID: <311bd9fe78078e3e27e877e4b7258604bea65335.camel@suse.com>
Subject: Re: [PATCH v2 4/7] xen: credit2: limit the max number of CPUs in a
 runqueue
From: Dario Faggioli <dfaggioli@suse.com>
To: =?ISO-8859-1?Q?J=FCrgen_Gro=DF?= <jgross@suse.com>, 
 xen-devel@lists.xenproject.org
Date: Fri, 29 May 2020 17:36:07 +0200
In-Reply-To: <826d5572-0c72-4fd4-f182-38023ca555ef@suse.com>
References: <159070133878.12060.13318432301910522647.stgit@Palanthas>
 <159070138395.12060.9523981885146042705.stgit@Palanthas>
 <826d5572-0c72-4fd4-f182-38023ca555ef@suse.com>
Organization: SUSE
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>



On Fri, 2020-05-29 at 17:23 +0200, Jürgen Groß wrote:
> On 28.05.20 23:29, Dario Faggioli wrote:
> > In Credit2 CPUs (can) share runqueues, depending on the topology. For
> > instance, with per-socket runqueues (the default) all the CPUs that
> > are part of the same socket share a runqueue.
> > 
> > On platforms with a huge number of CPUs per socket, that could be a
> > problem. An example is AMD EPYC2 servers, where we can have up to
> > 128 CPUs in a socket.
> > 
> > It is of course possible to define other, still topology-based,
> > runqueue arrangements (e.g., per-LLC, per-DIE, etc). But that may
> > still result in runqueues with too many CPUs on other/future
> > platforms. For instance, a system with 96 CPUs and 2 NUMA nodes will
> > end up having 48 CPUs per runqueue. Not as bad, but still a lot!
> > 
> > Therefore, let's set a limit to the max number of CPUs that can
> > share a Credit2 runqueue. The actual value is configurable (at boot
> > time), the default being 16. If, for instance, there are more than
> > 16 CPUs in a socket, they'll be split among two (or more) runqueues.
> > 
> > Note: with core scheduling enabled, this parameter sets the max
> > number of *scheduling resources* that can share a runqueue.
> > Therefore, with granularity set to core (and assuming 2 threads per
> > core), we will have at most 16 cores per runqueue, which corresponds
> > to 32 threads. But that is fine, considering how core scheduling
> > works.
> > 
> > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>
> 
Thanks!

> with one additional question below.
> 
Sure...

> > +            if ( cpumask_intersects(&rqd->active, per_cpu(cpu_sibling_mask, cpu)) )
> > +            {
> > +                if ( cpumask_weight(&rqd->active) >= opt_max_cpus_runqueue )
> > +                {
> > +                        printk("WARNING: %s: more than opt_max_cpus_runqueue "
> > +                               "in a runqueue (%u vs %u), due to topology constraints.\n"
> > +                               "Consider raising it!\n",
> > +                               __func__, opt_max_cpus_runqueue,
> 
> Is printing the function name really adding any important
> information?
> 
I personally don't think it does. I am mostly following suit from both
this file and other places, even for this kind of warning, striving
for consistency.

I'd surely be fine removing it from all the warnings about the boot
time parameter, here in credit2 and in scheduling in general. And if
now is not the time for doing that, I'm happy to get rid of it from
here and do the rest later. Or to leave it everywhere for now and
remove it from everywhere later.

And I don't have a too strong opinion myself, so, whatever we think is
best, I can do it.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)





From xen-devel-bounces@lists.xenproject.org Fri May 29 15:43:34 2020
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150457-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150457: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=a20ab81d22300cca80325c284f21eefee99aa740
X-Osstest-Versions-That: qemuu=06539ebc76b8625587aa78d646a9d8d5fddf84f3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 15:43:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150457 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150457/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10       fail  like 150420
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150420
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150420
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150420
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150420
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150420
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150420
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                a20ab81d22300cca80325c284f21eefee99aa740
baseline version:
 qemuu                06539ebc76b8625587aa78d646a9d8d5fddf84f3

Last test of basis   150420  2020-05-28 04:29:55 Z    1 days
Testing same since   150457  2020-05-28 20:07:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Alex Williamson <alex.williamson@redhat.com>
  Alistair Francis <alistair.francis@wdc.com>
  Cleber Rosa <crosa@redhat.com>
  Cédric Le Goater <clg@kaod.org>
  David Gibson <david@gibson.dropbear.id.au>
  Greg Kurz <groug@kaod.org>
  Leonardo Bras <leobras.c@gmail.com>
  Leonardo Bras <leonardo@linux.ibm.com>
  Markus Armbruster <armbru@redhat.com>
  Nicholas Piggin <npiggin@gmail.com>
  Paul Durrant <paul@xen.org>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>
  Philippe Mathieu-Daudé <philmd@redhat.com>
  Shivaprasad G Bhat <sbhat@linux.ibm.com>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   06539ebc76..a20ab81d22  a20ab81d22300cca80325c284f21eefee99aa740 -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:44:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:44:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehBC-0000Sk-LP; Fri, 29 May 2020 15:44:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wp2E=7L=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jehBB-0000SB-25
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:44:17 +0000
X-Inumbo-ID: 40391566-a1c3-11ea-81bc-bc764e2007e4
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40391566-a1c3-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 15:44:16 +0000 (UTC)
Authentication-Results: esa5.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: ONeqteup3dw6Lh+z7jrKEbgJlIGDTIAlGo2dQNpr7TcYtTUojg5zTrQwssXp7yTfM4ySGg2kHA
 mQgz2TfSfCiwoyClkyLr+re1BT+Nh4KUpGErp6z3FA2eOgGuQn4Ax1HjzILlfKuNsVi97p7S+D
 9bE5ffjwgsHfXnFdbtnJJ5gVSi8Ldcfo4oaVKY9YlAjZvvUK8/io0jYhZ3zCS9J4JD2CZopgBl
 MbZiWfLrLafq1rm23E+/5gvbnGZX0xk85wRr47uHOv6Vj+gGc9qoWmTVwZZ1AsIZl5yt08ewB4
 +dA=
X-SBRS: 2.7
X-MesageID: 19024984
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="19024984"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
Subject: [XEN PATCH] xen/build: introduce CLANG_FLAGS for testing other CFLAGS
Date: Fri, 29 May 2020 16:43:43 +0100
Message-ID: <20200529154343.1616925-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Anthony PERARD <anthony.perard@citrix.com>,
 =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Commit 534519f0514f ("xen: Have Kconfig check $(CC)'s version")
introduced the use of CLANG_FLAGS in Kconfig, where it is used when
testing for other CFLAGS via $(cc-option ...), but CLANG_FLAGS doesn't
exist in the Xen build system. (It's a Linux/Kbuild variable that
hasn't been added yet.)

The missing CLANG_FLAGS isn't an issue for $(cc-option ..), but it
would be once $(as-instr ..) gets imported from Kbuild to test
assembly instructions. We need to know whether we are going to use
clang's assembler or not.

CLANG_FLAGS needs to be calculated before we call Kconfig.

So, this patch adds CLANG_FLAGS, which may contain two flags needed
for further testing of $(CC)'s capabilities:
  -no-integrated-as
    This flag isn't new; it is simply tested earlier so that it can be
    used in Kconfig. As before, the flag is only added for x86 builds.
  -Werror=unknown-warning-option
    This one is new. It makes sure that the warning is enabled: it is
    enabled by default, but could be disabled in a particular build of
    clang; see Linux's commit e8de12fb7cde ("kbuild: Check for unknown
    options with cc-option usage in Kconfig and clang").

    It is present in clang 3.0.0, according to Linux's commit
    589834b3a009 ("kbuild: Add -Werror=unknown-warning-option to
    CLANG_FLAGS").

(The "note" saying that the flag was only added once didn't hold true
when tested on CentOS 6, so the patch uses $(or) and the flag will
only be added once.)
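For reference, the $(or) behaviour relied on here can be illustrated
with plain GNU make (a standalone sketch; the patch itself goes through
Kbuild.include's `or` macro, which behaves like make's built-in $(or)
used below):

```shell
# Demonstrate GNU make's $(or): it expands to its first non-empty
# argument, so several tests each requesting -no-integrated-as still
# yield the flag only once.
cat > /tmp/or_demo.mk <<'EOF'
t1 := -no-integrated-as
t2 := -no-integrated-as
t3 :=
CLANG_FLAGS := $(or $(t1),$(t2),$(t3))
$(info $(CLANG_FLAGS))
all: ;
EOF
make -s -f /tmp/or_demo.mk
# prints: -no-integrated-as
```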

Fixes: 534519f0514f ("xen: Have Kconfig check $(CC)'s version")
Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
---
 xen/Makefile         | 31 +++++++++++++++++++++++++++++--
 xen/arch/x86/arch.mk | 24 ------------------------
 2 files changed, 29 insertions(+), 26 deletions(-)

diff --git a/xen/Makefile b/xen/Makefile
index 30c3568a4791..2e897f222cc9 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -48,6 +48,7 @@ default: build
 .PHONY: dist
 dist: install
 
+include scripts/Kbuild.include
 
 ifneq ($(root-make-done),y)
 # section to run before calling Rules.mk, but only once.
@@ -124,11 +125,37 @@ ifneq ($(filter %config,$(MAKECMDGOALS)),)
     config-build := y
 endif
 
+# CLANG_FLAGS needs to be calculated before calling Kconfig
+ifneq ($(shell $(CC) --version 2>&1 | head -n 1 | grep clang),)
+CLANG_FLAGS :=
+
+ifeq ($(TARGET_ARCH),x86)
+# The tests to select whether the integrated assembler is usable need to happen
+# before testing any assembler features, or else the result of the tests would
+# be stale if the integrated assembler is not used.
+
+# Older clang's built-in assembler doesn't understand .skip with labels:
+# https://bugs.llvm.org/show_bug.cgi?id=27369
+t1 = $(call as-insn,$(CC),".L0: .L1: .skip (.L1 - .L0)",,-no-integrated-as)
+
+# Check whether clang asm()-s support .include.
+t2 = $(call as-insn,$(CC) -I$(BASEDIR)/include,".include \"asm-x86/indirect_thunk_asm.h\"",,-no-integrated-as)
+
+# Check whether clang keeps .macro-s between asm()-s:
+# https://bugs.llvm.org/show_bug.cgi?id=36110
+t3 = $(call as-insn,$(CC),".macro FOO;.endm"$(close); asm volatile $(open)".macro FOO;.endm",-no-integrated-as)
+
+CLANG_FLAGS += $(call or,$(t1),$(t2),$(t3))
+endif
+
+CLANG_FLAGS += -Werror=unknown-warning-option
+CFLAGS += $(CLANG_FLAGS)
+export CLANG_FLAGS
+endif
+
 export root-make-done := y
 endif # root-make-done
 
-include scripts/Kbuild.include
-
 # Shorthand for kconfig
 kconfig = -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) HOSTCC="$(HOSTCC)" HOSTCXX="$(HOSTCXX)"
 
diff --git a/xen/arch/x86/arch.mk b/xen/arch/x86/arch.mk
index 62b7c9700776..1f7211623399 100644
--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -11,30 +11,6 @@ CFLAGS += -DXEN_IMG_OFFSET=$(XEN_IMG_OFFSET)
 # Prevent floating-point variables from creeping into Xen.
 CFLAGS += -msoft-float
 
-ifeq ($(CONFIG_CC_IS_CLANG),y)
-# Note: Any test which adds -no-integrated-as will cause subsequent tests to
-# succeed, and not trigger further additions.
-#
-# The tests to select whether the integrated assembler is usable need to happen
-# before testing any assembler features, or else the result of the tests would
-# be stale if the integrated assembler is not used.
-
-# Older clang's built-in assembler doesn't understand .skip with labels:
-# https://bugs.llvm.org/show_bug.cgi?id=27369
-$(call as-option-add,CFLAGS,CC,".L0: .L1: .skip (.L1 - .L0)",,\
-                     -no-integrated-as)
-
-# Check whether clang asm()-s support .include.
-$(call as-option-add,CFLAGS,CC,".include \"asm-x86/indirect_thunk_asm.h\"",,\
-                     -no-integrated-as)
-
-# Check whether clang keeps .macro-s between asm()-s:
-# https://bugs.llvm.org/show_bug.cgi?id=36110
-$(call as-option-add,CFLAGS,CC,\
-                     ".macro FOO;.endm"$$(close); asm volatile $$(open)".macro FOO;.endm",\
-                     -no-integrated-as)
-endif
-
 $(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
 $(call cc-option-add,CFLAGS,CC,-Wnested-externs)
 $(call as-option-add,CFLAGS,CC,"vmcall",-DHAVE_AS_VMX)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Fri May 29 15:45:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:45:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehCd-0000lk-LR; Fri, 29 May 2020 15:45:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=jshP=7L=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1jehCc-0000la-Gr
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:45:46 +0000
X-Inumbo-ID: 75d9d516-a1c3-11ea-8993-bc764e2007e4
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75d9d516-a1c3-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 15:45:46 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 0F210ACB8;
 Fri, 29 May 2020 15:45:45 +0000 (UTC)
Subject: Re: [PATCH v2 2/3] build32: don't discard .shstrtab in linker script
To: Roger Pau Monne <roger.pau@citrix.com>
References: <20200528144023.10814-1-roger.pau@citrix.com>
 <20200528144023.10814-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <41429ddc-a6c3-ddba-97d6-aeb413df1960@suse.com>
Date: Fri, 29 May 2020 17:45:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; WOW64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200528144023.10814-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: xen-devel@lists.xenproject.org, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28.05.2020 16:40, Roger Pau Monne wrote:
> LLVM linker doesn't support discarding .shstrtab, and complains with:
> 
> ld -melf_i386_fbsd -N -T build32.lds -o reloc.lnk reloc.o
> ld: error: discarding .shstrtab section is not allowed

Well, yes, GNU ld is more intelligent and doesn't extend the
discarding to the control sections in the first place. All
of .symtab, .strtab, and .shstrtab are still there.

> Add an explicit .shstrtab section to the linker script after the text
> section in order to make LLVM LD happy.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Assuming the change was tested to not confuse GNU ld:
Acked-by: Jan Beulich <jbeulich@suse.com>

I wouldn't mind extending this to the other two control
sections named above. In case the binaries need picking
apart, having them present is surely helpful.
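For concreteness, the shape of change being discussed might look
something like this in build32.lds (an illustrative fragment only, not
the actual patch; the section placement and contents are assumptions):

```ld
SECTIONS
{
        .text : {
                *(.text)
                *(.text.*)
        }

        /* Explicit output sections for the control sections, so that
         * LLVM's lld doesn't object to discarding them and the
         * binaries remain easy to pick apart later. */
        .shstrtab : { *(.shstrtab) }
        .symtab   : { *(.symtab)   }
        .strtab   : { *(.strtab)   }
}
```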

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:48:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:48:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehEz-0000v1-2y; Fri, 29 May 2020 15:48:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jehEx-0000uw-AC
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:48:11 +0000
X-Inumbo-ID: cbac4532-a1c3-11ea-9dbe-bc764e2007e4
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbac4532-a1c3-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 15:48:10 +0000 (UTC)
Authentication-Results: esa6.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: HQwYVuc6l0ySs9dnsV8dDqfvpXDY17CFaFnOndVZzkpPBYac8DjLERki30XEYcFFDCfpnCaBuD
 0UGc4fmj0KNo5nad9QDD/ieSNaqc8ORHz21nAYrq63Ii65/bjz7+nEpBVZAtcEG8qpGZgOclKo
 HDfgM8l2lsDjL4POEv2CVn6FUKPhtVtgIoK7GYWizm2Ja6BSOiPkAvUNEotk5RQAcyNoYtB9jB
 LIdVqcgEfFa6X+TbvqL7KhX0Qon65T14ypnYOVaNkpezxzJy9kQGZlQUpQ3dFHhFXPyEaEk9GD
 Fzs=
X-SBRS: 2.7
X-MesageID: 19123574
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="19123574"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24273.11950.128172.524707@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 16:47:58 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 17/17] docs/xl.cfg: Rewrite cpuid= section
In-Reply-To: <20200127143444.25538-18-andrew.cooper3@citrix.com>
References: <20200127143444.25538-1-andrew.cooper3@citrix.com>
 <20200127143444.25538-18-andrew.cooper3@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony
 Perard <anthony.perard@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew Cooper writes ("[PATCH v2 17/17] docs/xl.cfg: Rewrite cpuid= section"):
> This is partly to adjust the description of 'k' and 's' seeing as they have
> changed, but mostly restructuring the information for clarity.
> 
> In particular, use indentation to clearly separate the areas discussing libxl
> format from xend format.  In addition, extend the xend format section to
> discuss subleaf notation.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:50:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehH9-0001ex-G0; Fri, 29 May 2020 15:50:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jehH8-0001er-VD
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:50:26 +0000
X-Inumbo-ID: 1c8f2dac-a1c4-11ea-a8dc-12813bfff9fa
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c8f2dac-a1c4-11ea-a8dc-12813bfff9fa;
 Fri, 29 May 2020 15:50:25 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: fBWYZBBs1op9YnQI1yi++tyl6dGKiTeorTVV8woKQiZGrfehCNZMx+fPttN57PTiQVQdauLqOR
 8tyRg1TqG4k50FX65Uj/J/RWUS8MO8WhnI+40gn3BC0gI4uKKnXkH7Eqj6Z2s8n4DB74X4XycH
 2O43CVAb1dDXmmbcHyO27F5a8srd6/5fRELiG/ArDaHrEwFJh3mDU/qPv6zBY/U1agqNheZBky
 ajOvvQxUGpCcpMUPti6tr1u9Ne2e/5bTtvZbma2T31ObIU+qVt3dqhTIN9qGKnjvYcBHUR3s/t
 O9A=
X-SBRS: 2.7
X-MesageID: 18772315
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="18772315"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24273.12092.863484.245282@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 16:50:20 +0100
To: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools: fix Rules.mk library make variables
In-Reply-To: <20200529102953.12714-1-jgross@suse.com>
References: <20200529102953.12714-1-jgross@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Wei
 Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Juergen Gross writes ("[PATCH] tools: fix Rules.mk library make variables"):
> Both SHDEPS_libxendevicemodel and SHDEPS_libxenhypfs have a bug by
> adding $(SHLIB_xencall) instead of $(SHLIB_libxencall).
> 
> The former seems not to have any negative impact, probably because
> it is not used anywhere in Xen without the correct $(SHLIB_libxencall)
> being used, too.
> 
> Fixes: 86234eafb95295 ("libs: add libxenhypfs")
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

I will commit this momentarily.

Ian.
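The reason the typo had no visible impact can be shown in isolation:
undefined make variables silently expand to empty. A toy makefile
(the variable value is made up, not Xen's real SHLIB contents):

```shell
# $(SHLIB_xencall) is undefined, so it expands to nothing, while the
# correctly-named $(SHLIB_libxencall) expands to its value.
printf 'SHLIB_libxencall := -Wl,-rpath-link=/opt/lib\nall:\n\t@echo "typo=[$(SHLIB_xencall)] correct=[$(SHLIB_libxencall)]"\n' \
    | make -sf -
# prints: typo=[] correct=[-Wl,-rpath-link=/opt/lib]
```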


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:51:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:51:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehI4-0001sk-QV; Fri, 29 May 2020 15:51:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wp2E=7L=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jehI4-0001sd-53
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:51:24 +0000
X-Inumbo-ID: 3ede104e-a1c4-11ea-9947-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ede104e-a1c4-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 15:51:23 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 5C0TP9lny+c4XNSIeQ5PbKSojd1Oy7xKe5/Gi37v6/Y9MNMn5wQh8XpydhVUMx2xFiTBZ7yHhq
 dx68TyXp3RvUJGOiPEUIsoxe30nd4q8Ud/QA9Jm/QksPwJQNPNf1q+ZNsgIG7G2vMys32+bAXS
 TsvfMuwlxCIChdvUuTyfvkiDgm3kOHjd4lI1Jug/x7hitpxAmxSvSEtQ590n27r4qjoQRCaNZ8
 P2m0VT+rIx0SunWq82POQdZWjnjQNk3T7S1My3fripDLWgDj2tc+TguHRj8yWNsKzDUG999fOa
 6n4=
X-SBRS: 2.7
X-MesageID: 19517254
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="19517254"
Date: Fri, 29 May 2020 16:51:18 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2 03/14] x86/shstk: Introduce Supervisor Shadow Stack
 support
Message-ID: <20200529155118.GL2105@perard.uk.xensource.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-4-andrew.cooper3@citrix.com>
 <4f535d4c-b3b3-fe5b-8b57-af736dc0a360@suse.com>
 <ad95944a-bd21-2ba8-6214-49d86050e816@citrix.com>
 <c3c3aea0-806f-4058-c1aa-cdc0f75007e2@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c3c3aea0-806f-4058-c1aa-cdc0f75007e2@suse.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 01:59:55PM +0200, Jan Beulich wrote:
> On 28.05.2020 20:10, Andrew Cooper wrote:
> > On 28/05/2020 11:25, Jan Beulich wrote:
> >> On 27.05.2020 21:18, Andrew Cooper wrote:
> >>> --- a/xen/scripts/Kconfig.include
> >>> +++ b/xen/scripts/Kconfig.include
> >>> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
> >>>  # Return y if the linker supports <flag>, n otherwise
> >>>  ld-option = $(success,$(LD) -v $(1))
> >>>  
> >>> +# $(as-instr,<instr>)
> >>> +# Return y if the assembler supports <instr>, n otherwise
> >>> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
> >> Is this actually checking the right thing in the clang case?
> > 
> > Appears to work correctly for me. (Again, its either fine, or need
> > bugfixing anyway for 4.14, and raising with Linux as this is unmodified
> > upstream code like the rest of Kconfig.include).
> 
> This answer made me try it out: On a system with clang 5 and
> binutils 2.32 "incsspd	%eax" translates fine with
> -no-integrated-as but doesn't without. The previously mentioned
> odd use of CLANG_FLAGS, perhaps together with the disconnect
> from where we establish whether to use -no-integrated-as in the
> first place (arch.mk; I have no idea whether the CFLAGS
> determined would be usable by the kconfig invocation, nor how
> to solve the chicken-and-egg problem there if this is meant to
> work that way), may be the reason for this. Cc-ing Anthony once
> again ...

I've just prepared/sent a patch which should fix this CLANG_FLAGS issue
and allow testing the assembler in Kconfig.

See: [XEN PATCH] xen/build: introduce CLANG_FLAGS for testing other CFLAGS
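The probe under discussion can also be reproduced by hand. A minimal
sketch (the $CC / $CLANG_FLAGS values here are placeholders for
whatever the build would use, not the exact Xen invocation):

```shell
# Sketch of the as-instr probe: feed the instruction to the compiler
# driver as assembler input and report y/n based on the exit status.
check_insn () {
    printf '%b\n' "$1" | ${CC:-cc} ${CLANG_FLAGS} -c -x assembler -o /dev/null - \
        >/dev/null 2>&1 && echo y || echo n
}

check_insn 'nop'           # -> y with any working assembler
check_insn 'incsspd %eax'  # y only if the assembler in use knows CET-SS
```

With clang, whether CLANG_FLAGS carries -no-integrated-as at this point
is exactly what decides the incsspd result, matching Jan's observation
above.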

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:53:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehKV-00021Y-7s; Fri, 29 May 2020 15:53:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jehKU-00021T-88
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:53:54 +0000
X-Inumbo-ID: 94d80343-a1c4-11ea-a8dc-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 94d80343-a1c4-11ea-a8dc-12813bfff9fa;
 Fri, 29 May 2020 15:53:49 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: VgNztI4E0pMS45/t1Eue0i0VhZ8BPddJ995Bb7c5IW+YwIGDfxm1S9e9Apfq0qh1H5OSLa1fIs
 OTiPImdgUeaUxIAjjlXVo3u5nXtg0ygzgy7UPns3sbo/TvnL8rZHLNqxv4rhuH24QPwkMmzuL2
 dQZxwyepc8mKlvJBJVhWAh8+5PnL41U36VEwc86lTtjUXSZxgXYply75N6mFWtuZCzHd0r1fea
 Ze+vBS6SM0WpPvsflqUdCWSGx21AIyRFclj7Nk8V6Ry/HGXlXwuujSEL2pwVdObfqEvF1Er+z3
 hTE=
X-SBRS: 2.7
X-MesageID: 18794562
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="18794562"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24273.12296.2786.109911@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 16:53:44 +0100
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH v2 07/17] libxc/restore: STATIC_DATA_END inference for v2
 compatibility
In-Reply-To: <2076e9a4-c07e-9aab-1cc2-f38f7eacd8c0@citrix.com>
References: <20200127143444.25538-1-andrew.cooper3@citrix.com>
 <20200127143444.25538-8-andrew.cooper3@citrix.com>
 <24148.2202.912512.939428@mariner.uk.xensource.com>
 <cea79256-f260-1710-a783-dadec276e32a@citrix.com>
 <24161.10156.858608.199136@mariner.uk.xensource.com>
 <2076e9a4-c07e-9aab-1cc2-f38f7eacd8c0@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew Cooper writes ("Re: [PATCH v2 07/17] libxc/restore: STATIC_DATA_END inference for v2 compatibility"):
> On 05/03/2020 16:24, Ian Jackson wrote:
> > Andrew Cooper writes ("Re: [PATCH v2 07/17] libxc/restore: STATIC_DATA_END inference for v2 compatibility"):
> >> More importantly however, by design, common code can't call
> >> arch-specific code without a restore_ops hook. Deduping these would
> >> require breaking the restriction which is currently doing a decent job
> >> of keeping x86-isms out of common code.
> > I'm afraid you're going to have to explain that to me at a bit greater
> > length.  The biggest thing that is confusing me about your statement
> > here is that your patch is indeed adding x86-specific code to a common
> > file.  But also I don't understand the comment about restore_ops
> > hooks - do you mean something in restore_callbacks ?
> 
> No. restore_callbacks are plumbing between libxl-save-helper and libxc.
> 
> restore_ops are internal to the restore code, and come in x86_pv and
> x86_hvm flavours, with ARM existing in some theoretical future. The
> design of the common save/restore code had deliberate measures put in
> place to make it hard to get arch-specific details slip into common
> code, so porting to different architectures didn't have to start by
> doing a bunch of cleanup.
> 
> tl;dr I could put an #ifdef x86'd static inline in the root common
> header (xc_sr_common.h), but the overall complexity is greater than what
> is presented here.

Well, I still don't quite follow, but as you point out on IRC, my
reply is long overdue.  I don't think I should withhold my ack at this
stage.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

Thanks,
Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 29 15:59:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 15:59:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehPL-0002Fm-0M; Fri, 29 May 2020 15:58:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jehPJ-0002Fh-8r
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 15:58:53 +0000
X-Inumbo-ID: 4a593f92-a1c5-11ea-81bc-bc764e2007e4
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a593f92-a1c5-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 15:58:52 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: GPUUh8qB2JwxVxteQ8OryyBqIJynNY/kZSfgGZNZlMbo1aAGACI4TpWS2zeErxjkyWFj6N9+3o
 b752A+WkWkoD78hFcpZZKgNow0XvZx2jvNPfcwAuape1wLYXxYP/FlhIYzW9IOdhrRvRD0SpS+
 0FoZ6Jx2rPhrKsF5AzEeBES8MfU3+LZlLBIrqzcP+3rARGIlpcbXTZyteminyKksxRtTY5bGI7
 Bk2qbHDMM4yoW0A1IfPoGC/icXKQeZY9uN6S0IIlhvs08XHuXUO20WhZvwgHpXgNQiZT1iG+Da
 Dyo=
X-SBRS: 2.7
X-MesageID: 18795323
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="18795323"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: 8bit
Message-ID: <24273.12599.136836.514214@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 16:58:47 +0100
To: Andrew Cooper <Andrew.Cooper3@citrix.com>
Subject: Re: [PATCH 17/20] tools/libx[cl]: Plumb static_data_done() up into
 libxl
In-Reply-To: <dd477627-1c36-ae3f-3c7a-ef15d5b9fa21@citrix.com>
References: <20200103130616.13724-1-andrew.cooper3@citrix.com>
 <20200103130616.13724-4-andrew.cooper3@citrix.com>
 <24093.64177.923164.677965@mariner.uk.xensource.com>
 <dd477627-1c36-ae3f-3c7a-ef15d5b9fa21@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony
 Perard <anthony.perard@citrix.com>, Xen-devel <xen-devel@lists.xenproject.org>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew Cooper writes ("Re: [PATCH 17/20] tools/libx[cl]: Plumb static_data_done() up into libxl"):
> There are several things going on here.
> 
> One is the control flow marker of "We reached the end of the static
> data". A higher level toolstack needs to know this unconditionally,
> which is why the callback is mandatory.
> 
> For v2 compatibility, its callers cope with "this is where an end of
> static data would be in a v3 stream", but that is abstracted away so the
> higher level toolstack doesn't know or need to care.
> 
> The missing parameter is "p.s. here are the things we were expecting but
> didn't get - you need to pick up the pieces". For now, it is synonymous
> with "here is a v2 stream without CPUID data", but that won't be
> accurate in the future if/when new static data records get retrofitted.

Thanks for the explanation.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:01:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:01:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehRT-0003g7-DU; Fri, 29 May 2020 16:01:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jehRS-0003g2-OC
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:01:06 +0000
X-Inumbo-ID: 9a478388-a1c5-11ea-81bc-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a478388-a1c5-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 16:01:06 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: wHREbYpI1PafjnsVZ9jennHF2HPs0u5s5d9EcytBvkkYMcs73pQiQ99yUmYzt7HUP6HMz40wg3
 OMdNGO4JaDMJhCy7hfG4kXmkcw9hHRvG1eeDmr3b79tLpwIvrPtVGlKW13nkhXhBBr83JwjE2c
 +jVdsm2nd+wSFC3N+kmTKUfn8FcV9rTeQLuToUf1+94TjzXapnuK61+PoioUM7byBqhitgAJjr
 pC/uJkCzWw2ywfLMeCAbWU9Le5E54+ciGFfdnnkhp5zU+HesZknSPKv/NDRwX/kmSOi5OVb3vx
 /ds=
X-SBRS: 2.7
X-MesageID: 19519416
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="19519416"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24273.12731.894283.390797@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 17:00:59 +0100
To: "paul@xen.org" <paul@xen.org>
Subject: RE: [PATCH v3] docs: update xenstore-migration.md
In-Reply-To: <005d01d635af$0a3e6ae0$1ebb40a0$@xen.org>
References: <20200529113709.17489-1-jgross@suse.com>
 <005d01d635af$0a3e6ae0$1ebb40a0$@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: 'Juergen Gross' <jgross@suse.com>,
 'Stefano Stabellini' <sstabellini@kernel.org>, 'Julien Grall' <julien@xen.org>,
 'Wei Liu' <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>, 'Jan
 Beulich' <jbeulich@suse.com>, Ian Jackson <Ian.Jackson@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Paul Durrant writes ("RE: [PATCH v3] docs: update xenstore-migration.md"):
> > -----Original Message-----
> > From: Xen-devel <xen-devel-bounces@lists.xenproject.org> On Behalf Of Juergen Gross
> > Sent: 29 May 2020 12:37
> > To: xen-devel@lists.xenproject.org
> > Cc: Juergen Gross <jgross@suse.com>; Stefano Stabellini <sstabellini@kernel.org>; Julien Grall
> > <julien@xen.org>; Wei Liu <wl@xen.org>; Andrew Cooper <andrew.cooper3@citrix.com>; Ian Jackson
> > <ian.jackson@eu.citrix.com>; George Dunlap <george.dunlap@citrix.com>; Jan Beulich <jbeulich@suse.com>
> > Subject: [PATCH v3] docs: update xenstore-migration.md
> > 
> > Update connection record details:
> > 
> > - make flags common for sockets and domains (makes it easier to have a
> >   C union for conn-spec)
> > - add pending incoming data (needed for handling partially read
> >   requests when doing live update)
> > - add partial response length (needed for proper split to individual
> >   responses after live update)
> > 
> > Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> LGTM
> 
> Reviewed-by: Paul Durrant <paul@xen.org>

Thanks, committed.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:06:47 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:06:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehWh-0003rg-3c; Fri, 29 May 2020 16:06:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U2UY=7L=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jehWf-0003rb-Ea
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:06:29 +0000
X-Inumbo-ID: 5ae221ac-a1c6-11ea-8993-bc764e2007e4
Received: from mail-il1-x142.google.com (unknown [2607:f8b0:4864:20::142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ae221ac-a1c6-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 16:06:29 +0000 (UTC)
Received: by mail-il1-x142.google.com with SMTP id q18so3006535ilm.5
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 09:06:29 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=bNwL/Y4/GmosR1INEWPqs8uou9ZF6KIaP09v37iMA8o=;
 b=ZpMqUyKC9muT2SHHfkxzqXvn1mtPRX28OHUNm4r78cw5QFkoJCbg/w4Djk/HIUA0qW
 HS0ihojMTcnuDy9dyHzsihd8JcyRmKNVMlmngTfQTSyWNimS4a2HdGXDM6AHhEqsLmnX
 eJHdq8Q7MhlmkjoHGwl6gZ+G2qZshkZC6ACTQ7UPgrK40Djb1UMdOk5yzhYwFVEeZc4K
 FtJvbbgVFhJ9bdeArzZmxSd8A5rXUQ0l9JG6z1wEUmbptphznwxm3HiALVR7cRjyDv49
 5hXCrdno/uxwx2lEX8M5CcEbCQuEHRDkjrL0Ic9i3D77lOz2Y+kC9MR0meNUbl6+wsJ9
 rYAQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=bNwL/Y4/GmosR1INEWPqs8uou9ZF6KIaP09v37iMA8o=;
 b=pbTLCkU1j4ejgkUgSL0mHLPrTKxXx+HgTSwGmibDKqNjWsfT9fD/hk0Fnhxa5y0Tu7
 tgEVvsGJ9S7+oG7iTdRjt8G3S63izw+CoXEOpbNi88mHOKLLN2QMaSPWTJq75Yrpr4EY
 PcS/srMhb5Y9i0e9PMRM9SLS35/bT16komxV0NHbEzkuskDk9hQRAq6svVWoBUT3teVa
 e8Ay8qWTNKFUoy2bZfsbHLF2xZsU9sc6PPNKxP3eFspqucRgt21vGAgUq4rSxdCGyx+3
 oRphfyt/PmDcNd/hw7XKxUUme0FELA4mQsDIW3ern7iFibI6x3zG2onXhK046aN8Yq/W
 mlfg==
X-Gm-Message-State: AOAM5301tKyM1+u6y2jswb+tJ3G40KNi6uxcGLcSDR1Q96UYB7BeOmSB
 yQkDYk2qMF4e/V1xUN7t6az0EoZSpIk=
X-Google-Smtp-Source: ABdhPJy/Pnd8jRPCx2DpkyXutbBVS53wYLfNc7JUlASsX6AwkfiqM3i6v464vVbtfZ9DMt31xrI22A==
X-Received: by 2002:a05:6e02:11a5:: with SMTP id
 5mr8317098ilj.108.1590768388219; 
 Fri, 29 May 2020 09:06:28 -0700 (PDT)
Received: from localhost (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.gmail.com with ESMTPSA id y19sm3940972iod.41.2020.05.29.09.06.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 29 May 2020 09:06:27 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH for-4.14] tools/libxl: fix setting altp2m param broken by
 1e9bc407cf0
Date: Fri, 29 May 2020 10:06:21 -0600
Message-Id: <20200529160621.3123-1-tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The patch 1e9bc407cf0 mistakenly converted the altp2m config option to a
boolean. This is incorrect and breaks the external-only use case of altp2m,
which is selected with a value of 2.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 tools/libxl/libxl_x86.c | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index f8bc828e62..272736850b 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -391,7 +391,6 @@ static int hvm_set_conf_params(libxl__gc *gc, uint32_t domid,
     libxl_ctx *ctx = libxl__gc_owner(gc);
     xc_interface *xch = ctx->xch;
     int ret = ERROR_FAIL;
-    bool altp2m = info->altp2m;
 
     switch(info->type) {
     case LIBXL_DOMAIN_TYPE_HVM:
@@ -433,7 +432,7 @@ static int hvm_set_conf_params(libxl__gc *gc, uint32_t domid,
             LOG(ERROR, "Couldn't set HVM_PARAM_NESTEDHVM");
             goto out;
         }
-        if (xc_hvm_param_set(xch, domid, HVM_PARAM_ALTP2M, altp2m)) {
+        if (xc_hvm_param_set(xch, domid, HVM_PARAM_ALTP2M, info->altp2m)) {
             LOG(ERROR, "Couldn't set HVM_PARAM_ALTP2M");
             goto out;
         }
-- 
2.26.2
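For context (an illustration, not part of the patch): the setting the
fix restores is an enum rather than a boolean, so in the guest config
it would be selected along these lines, assuming the documented xl.cfg
syntax:

```
# altp2m is an enum, not a boolean:
#   "disabled" (0), "mixed" (1), "external" (2), "limited" (3)
altp2m = "external"   # the external-only mode broken by the boolean cast
```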



From xen-devel-bounces@lists.xenproject.org Fri May 29 16:12:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:12:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehcg-0004pd-QD; Fri, 29 May 2020 16:12:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pRu7=7L=gmail.com=tamas.k.lengyel@srs-us1.protection.inumbo.net>)
 id 1jehcf-0004pY-E0
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:12:41 +0000
X-Inumbo-ID: 381a607a-a1c7-11ea-81bc-bc764e2007e4
Received: from mail-wr1-x443.google.com (unknown [2a00:1450:4864:20::443])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 381a607a-a1c7-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 16:12:40 +0000 (UTC)
Received: by mail-wr1-x443.google.com with SMTP id l11so4374914wru.0
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 09:12:40 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=7EgdLrEb7DOAeLqpFM9UPsXZM+ZFMGBgVcwrh4hnytE=;
 b=FsQfajaGEruVzlgl5aoCpAveXI+aJaY93Hh+ckJmfusyEFv26xlxlvKutNUStjz9n2
 HT1ey6muIWce9B+nwYKpURibMs/X2zjwf5oM6wrrhQPp/8XzgXR4udB+zWXRNDvQI+HF
 pjV3rOQ5JHzmQ8sSMiZ9/WZ3ebOI/GTDJZp9RztE3Z293SUJSUoJ648iXUcQsIeTNSYC
 6EX9jwW5hOirahmAdOjfjZUXi18OvXkAGMs9kjjhVlohOn3p1wyoJcKABkT7cu9NbaW5
 XSUWw/7caXASinMxCry882RldV8L7gtO0R7ui6zmXXIx1HWPPfT//lVB/rFcT2CoJe8R
 cDcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=7EgdLrEb7DOAeLqpFM9UPsXZM+ZFMGBgVcwrh4hnytE=;
 b=uWlLDLHfCAtvYvJ2rbUptXdWOAQrNta/sUL1mK6IPyO0X7o7YjMZbQrbs375snr7HO
 bC7VcrRwk1qI+3UDtmAoZSij0RQDojEyMBME6JNckksQ0Vj6nM6MYS1ZZWMyBjFci+q8
 clXlcpYFgZYKQGqLUitQkbqMor7ikuz9rb1jCepOczmJEvi8INl2isAacxYLQJCz4vC3
 WmxP38YgJg6ZZalzO1Qlvy9NRtpAyYSwd6DfSf/C1muQgX4fLw11iClAdCicL9HX/qXO
 WsoKDTn0eEl0zMQSBEFyMf69TPVK6ngAEFMJNQkIg4quZvWRoxNJsVrJaqDcUvufF8tn
 10XA==
X-Gm-Message-State: AOAM531SyZ/XGANgfXuq/BWrrC+80w8aqKhtcS1lvFD88FYNbse2xWPc
 vWUOHaU2gK0ginMyg4a0I78uGujaOQV85aKF6fE=
X-Google-Smtp-Source: ABdhPJywx06x9dUiXQ02d7VG7JPP36Ral6E7f1UQa4PKToNV4LlrGD0zOE4v8giSGdgnI7vMwRFmUU3boyh4ieVwk3M=
X-Received: by 2002:a5d:490f:: with SMTP id x15mr9293500wrq.259.1590768759553; 
 Fri, 29 May 2020 09:12:39 -0700 (PDT)
MIME-Version: 1.0
References: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
 <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
 <1150720994.59695220.1590756705329.JavaMail.zimbra@cert.pl>
 <1f68a02a-3472-1bb0-90b9-6f3ccefca0b3@suse.com>
 <1623831291.59734817.1590760249321.JavaMail.zimbra@cert.pl>
 <CABfawhmjeoVpRgAbDAA_T6rMiqPjQUvLPEn5t1-1JwZFJicNKQ@mail.gmail.com>
In-Reply-To: <CABfawhmjeoVpRgAbDAA_T6rMiqPjQUvLPEn5t1-1JwZFJicNKQ@mail.gmail.com>
From: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Date: Fri, 29 May 2020 10:12:03 -0600
Message-ID: <CABfawhmUbowsMS6dS8SCcgMGe9GY8HenHG7LEyZcqh39DwiXMQ@mail.gmail.com>
Subject: Re: [BUG] Core scheduling patches causing deadlock in some situations
To: =?UTF-8?B?TWljaGHFgiBMZXN6Y3p5xYRza2k=?= <michal.leszczynski@cert.pl>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, chivay@cert.pl, bonus@cert.pl
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 8:48 AM Tamas K Lengyel
<tamas.k.lengyel@gmail.com> wrote:
>
> On Fri, May 29, 2020 at 7:51 AM Michał Leszczyński
> <michal.leszczynski@cert.pl> wrote:
> >
> > ----- On 29 May 2020 at 15:15, Jürgen Groß jgross@suse.com wrote:
> >
> > > On 29.05.20 14:51, Michał Leszczyński wrote:
> > >> ----- On 29 May 2020 at 14:44, Jürgen Groß jgross@suse.com wrote:
> > >>
> > >>> On 29.05.20 14:30, Michał Leszczyński wrote:
> > >>>> Hello,
> > >>>>
> > >>>> I'm running DRAKVUF on a Dell Inc. PowerEdge R640/08HT8T server with an Intel(R)
> > >>>> Xeon(R) Gold 6132 CPU @ 2.60GHz.
> > >>>> When upgrading from Xen RELEASE 4.12 to 4.13, we have noticed some stability
> > >>>> problems concerning freezes of Dom0 (Debian Buster):
> > >>>>
> > >>>> ---
> > >>>>
> > >>>> maj 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
> > >>>> maj 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP)
> > >>>> idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
> > >>>> maj 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
> > >>>> maj 27 23:17:02 debian kernel: NMI backtrace for cpu 0
> > >>>> maj 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE
> > >>>> 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
> > >>>> maj 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T,
> > >>>> BIOS 2.1.8 04/30/2019
> > >>>> maj 27 23:17:02 debian kernel: Call Trace:
> > >>>> maj 27 23:17:02 debian kernel: <IRQ>
> > >>>> maj 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
> > >>>> maj 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
> > >>>> maj 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
> > >>>> maj 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
> > >>>> maj 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
> > >>>> maj 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
> > >>>> maj 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
> > >>>> maj 27 23:17:02 debian kernel: update_process_times+0x28/0x60
> > >>>> maj 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60
> > >>>>
> > >>>> ---
> > >>>>
> > >>>> This usually results in the machine being completely unresponsive and performing an
> > >>>> automated reboot after some time.
> > >>>>
> > >>>> I've bisected commits between 4.12 and 4.13 and it seems like this is the patch
> > >>>> which introduced the bug:
> > >>>> https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
> > >>>>
> > >>>> Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from the fresh
> > >>>> boot of the machine on which the bug was reproduced.
> > >>>>
> > >>>> I'm also attaching the `xl info` output from this machine:
> > >>>>
> > >>>> ---
> > >>>>
> > >>>> release : 4.19.0-6-amd64
> > >>>> version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
> > >>>> machine : x86_64
> > >>>> nr_cpus : 14
> > >>>> max_cpu_id : 223
> > >>>> nr_nodes : 1
> > >>>> cores_per_socket : 14
> > >>>> threads_per_core : 1
> > >>>> cpu_mhz : 2593.930
> > >>>> hw_caps :
bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
> > >>>> virt_caps : pv hvm hvm_directio pv_directio hap shadow
> > >>>> total_memory : 130541
> > >>>> free_memory : 63591
> > >>>> sharing_freed_memory : 0
> > >>>> sharing_used_memory : 0
> > >>>> outstanding_claims : 0
> > >>>> free_cpus : 0
> > >>>> xen_major : 4
> > >>>> xen_minor : 13
> > >>>> xen_extra : -unstable
> > >>>> xen_version : 4.13-unstable
> > >>>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p
> > >>>> hvm-3.0-x86_64
> > >>>> xen_scheduler : credit2
> > >>>> xen_pagesize : 4096
> > >>>> platform_params : virt_start=0xffff800000000000
> > >>>> xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty
> > >>>
> > >>> Which is your original Xen base? This output is clearly obtained at the
> > >>> end of the bisect process.
> > >>>
> > >>> There have been quite a few corrections since the release of Xen 4.13, so
> > >>> please make sure you are running the most recent version (4.13.1).
> > >>>
> > >>>
> > >>> Juergen
> > >>
> > >> Sure, we have tested both RELEASE 4.13 and RELEASE 4.13.1. Unfortunately these
> > >> corrections didn't help and the bug is still reproducible.
> > >>
> > >>  From our testing it turns out that:
> > >>
> > >> Known working revision: 997d6248a9ae932d0dbaac8d8755c2b15fec25dc
> > >> Broken revision: 6278553325a9f76d37811923221b21db3882e017
> > >> First bad commit: 7c7b407e77724f37c4b448930777a59a479feb21
> > >
> > > Would it be possible to test xen unstable, too?
> > >
> > > I could imagine e.g. commit b492c65da5ec5ed or 99266e31832fb4a4 having
> > > an impact here.
> > >
> > >
> > > Juergen
> >
> >
> > I've tried the b492c65da5ec5ed revision but it seems that there is some
> > problem with ALTP2M support, so I can't launch anything at all.
> >
> > maj 29 15:45:32 debian drakrun[1223]: Failed to set HVM_PARAM_ALTP2M, RC: -1
> > maj 29 15:45:32 debian drakrun[1223]: VMI_ERROR: xc_altp2m_switch_to_view returned rc: -1
>
> Ugh, great, that's another regression in 4.14-unstable. I ran into it
> myself but couldn't spend the time to figure out whether it's just
> something in my configuration or not, so I reverted to 4.13.1. Now we
> know it's a real bug.

This was a bug in libxl. I've sent in a patch that fixes it, but you can
also grab it from https://github.com/tklengyel/xen/tree/libxl_fix.
There is also an update to DRAKVUF that will need to be applied, since
the recent altp2m visibility option now has to be specified; you can
grab that from https://github.com/tklengyel/drakvuf/tree/4.14.

Tamas


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:15:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:15:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehf5-0004xS-Ba; Fri, 29 May 2020 16:15:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jehf3-0004xN-Nq
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:15:09 +0000
X-Inumbo-ID: 905517a8-a1c7-11ea-a8e7-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 905517a8-a1c7-11ea-a8e7-12813bfff9fa;
 Fri, 29 May 2020 16:15:08 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:35052
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jehf0-000i1d-Ja (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 17:15:06 +0100
Subject: Re: [PATCH for-4.14] tools/libxl: fix setting altp2m param broken by
 1e9bc407cf0
To: Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <20200529160621.3123-1-tamas@tklengyel.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <69225f69-3ca0-c759-03a5-60d6464a7eb4@citrix.com>
Date: Fri, 29 May 2020 17:15:05 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200529160621.3123-1-tamas@tklengyel.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 17:06, Tamas K Lengyel wrote:
> The patch 1e9bc407cf0 mistakenly converted the altp2m config option to a
> boolean. This is incorrect and breaks the external-only use case of altp2m,
> which is set with a value of 2.
>
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>

Urg yes.  Sorry.

However, this doesn't build because there is another use of the altp2m
variable between the two hunks below, for compatibility with the older
altp2mhvm option.

I think changing its type just to int ought to suffice?

~Andrew

> ---
>  tools/libxl/libxl_x86.c | 3 +--
>  1 file changed, 1 insertion(+), 2 deletions(-)
>
> diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
> index f8bc828e62..272736850b 100644
> --- a/tools/libxl/libxl_x86.c
> +++ b/tools/libxl/libxl_x86.c
> @@ -391,7 +391,6 @@ static int hvm_set_conf_params(libxl__gc *gc, uint32_t domid,
>      libxl_ctx *ctx = libxl__gc_owner(gc);
>      xc_interface *xch = ctx->xch;
>      int ret = ERROR_FAIL;
> -    bool altp2m = info->altp2m;
>  
>      switch(info->type) {
>      case LIBXL_DOMAIN_TYPE_HVM:
> @@ -433,7 +432,7 @@ static int hvm_set_conf_params(libxl__gc *gc, uint32_t domid,
>              LOG(ERROR, "Couldn't set HVM_PARAM_NESTEDHVM");
>              goto out;
>          }
> -        if (xc_hvm_param_set(xch, domid, HVM_PARAM_ALTP2M, altp2m)) {
> +        if (xc_hvm_param_set(xch, domid, HVM_PARAM_ALTP2M, info->altp2m)) {
>              LOG(ERROR, "Couldn't set HVM_PARAM_ALTP2M");
>              goto out;
>          }



From xen-devel-bounces@lists.xenproject.org Fri May 29 16:15:41 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:15:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehfZ-00050I-Ln; Fri, 29 May 2020 16:15:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=K2ub=7L=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1jehfY-000509-DN
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:15:40 +0000
X-Inumbo-ID: a2fb0bce-a1c7-11ea-81bc-bc764e2007e4
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2fb0bce-a1c7-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 16:15:39 +0000 (UTC)
Authentication-Results: esa3.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 38FOTJdBGyyLsqaMNQ7d+0f9HtyrVHXBruRG7Y9k/p1Z1xYWXr4ewkwKMatofCTKthXgRGUIE6
 WDFmhXM3bs5sOn3KvXlP7c3wh4EAZR4IZf3IvOdtVEWos7Ox4/CdIe+v1YYBGyVtl2n890C6TP
 nV50LZzuPzbiKq7iQyJg2vlPQdETGpJtbdTnHbba77LRgQzyE7j1oCGg7sS/OUObr+OqhT1gnL
 HL4gjFhXQy/rvuUZpcGbolWvSyiCS8m92oGCt4k/NMe5Ike52y8lSnwvwKhNSSqWDrDGeVS29R
 MVY=
X-SBRS: 2.7
X-MesageID: 18777715
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="18777715"
From: George Dunlap <George.Dunlap@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
Subject: Re: [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue
Thread-Topic: [PATCH 0/2] xen: credit2: limit the number of CPUs per runqueue
Thread-Index: AQHWHky4XA5G+M3bUEaQnq/cZhz5cKi+/2qAgABLOYA=
Date: Fri, 29 May 2020 16:15:35 +0000
Message-ID: <430647E9-7EC9-4041-9809-CACD4BC451BC@citrix.com>
References: <158818022727.24327.14309662489731832234.stgit@Palanthas>
 <ab810b293ca8324ca3fba22476401a58435243fa.camel@suse.com>
In-Reply-To: <ab810b293ca8324ca3fba22476401a58435243fa.camel@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: Apple Mail (2.3608.80.23.2.2)
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
Content-Type: text/plain; charset="utf-8"
Content-ID: <0D0FEF10C005914883F53CF3818974A6@citrix.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Paul Durrant <paul@xen.org>, Jan Beulich <jbeulich@suse.com>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

> On May 29, 2020, at 12:46 PM, Dario Faggioli <dfaggioli@suse.com> wrote:
>
> So,
>
> I felt like providing some additional thoughts about this series, from
> a release point of view (adding Paul).
>
> Timing is *beyond tight* so if this series, entirely or partly, has any
> chance to go in, it would be through some form of exception, which of
> course comes with some risks, etc.
>
> I did work hard to submit the full series, because I wanted people to
> be able to see the complete solution. However, I think the series
> itself can be logically split in two parts.
>
> Basically, if we just consider patches 1 and 4 we will end up, right
> after boot, with a system that has smaller runqueues. They will most
> likely be balanced in terms of how many CPUs each one has, so a good
> setup. This will likely (actual differences seems to depend *quite a
> bit* on the actual workload) be an improvement for very large systems.

Fundamentally, I feel like the reason we have the feature freeze is
exactly to have to avoid questions like this.  Something very much like
patch 4 was posted before the last posting date; patches 1-4 received
R-b’s before the feature freeze.  I think they should probably go in.

The rebalancing patches I’m inclined to say should wait until they’ve
had a bit more time to be thought about.

 -George


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:18:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehiC-0005Br-3p; Fri, 29 May 2020 16:18:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U2UY=7L=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jehiB-0005Bm-3w
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:18:23 +0000
X-Inumbo-ID: 03f6fc76-a1c8-11ea-9dbe-bc764e2007e4
Received: from mail-ed1-x541.google.com (unknown [2a00:1450:4864:20::541])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03f6fc76-a1c8-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 16:18:22 +0000 (UTC)
Received: by mail-ed1-x541.google.com with SMTP id m21so2128233eds.13
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 09:18:22 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=mbMuxJHVPAc7Wx25YihDuP2TpUDE5CiHWoqylZyuUYQ=;
 b=j6uuToaoEO44uxSdHBmKJZUIdlCUIM7YnjCwnm+5yfMyNqC+LHKN2V7aUk5NRN4pMt
 lrzaVozgncjjNnwd3HLbbaz5R/7lACk7mBF3DgA7lq2LDWjXOx+oMpDmYAHYwOJj6F5k
 JmyHZv0DzWX84QN7eYxzWHmIhytTJbSKb0ix6DJysst2toyJiL/VhBGupI7S30vdA9BY
 I35bWsasMG8kEffFPmvFl1MkZ457/hIJOP7iuvfb8d2pwjPM+jHQPWMi5YuOB/Qyl8F/
 G/796pMgd0tO8QXRa3twQL1il6qkkLsZZ+M5KjfDXC8bDjP6WF4hHiEla7s23vR7wE61
 VTVA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=mbMuxJHVPAc7Wx25YihDuP2TpUDE5CiHWoqylZyuUYQ=;
 b=I4ICNI6jSeGtQ9mcxZiepLcMerbB4JzInryxt38GHJ5Ur7NP7pAExwy1TJen8Ou638
 ck+3oEaHOfUenuz6FTWYtkbSmnJVTb6EfzBndSJG4XPgLhzttxGkZHpXQD5yJXCR3sD9
 +cUNT7oCRhwHjo12ccWEUtn9yd8Us0IcaN+rDo3+A0di8QR8IYLdYJoO6dMbeOXj9MGI
 qDWFoiJTGQd1pKMsVr9ahRcPFK07XtiOw3Dq3/4pEfXBZUGO2BDN7MJP2ccSy5b/85ac
 3MYoYuuf9j5/yB3f9reURxbDZTmsDbEjFllvPmz5idAGM1dSm/B1uwBduJHTNbQQO5Yc
 henQ==
X-Gm-Message-State: AOAM531nC7qcuZ5bmxJZoUXnF8iRt/c4thfOhoEBYAfeWGPgm5UKEUFH
 khzDzp5lrMicSYg9ZeXwAVF5S9iAM/I=
X-Google-Smtp-Source: ABdhPJwQz42PMmtDccUntyHIAsoraoBhP75XfY12Sgkd4yJ94+Jjs2/881q4KhZf+UPOOFn8kk6mCw==
X-Received: by 2002:a05:6402:1770:: with SMTP id
 da16mr8774636edb.122.1590769101539; 
 Fri, 29 May 2020 09:18:21 -0700 (PDT)
Received: from mail-wr1-f46.google.com (mail-wr1-f46.google.com.
 [209.85.221.46])
 by smtp.gmail.com with ESMTPSA id r18sm6009683eds.29.2020.05.29.09.18.21
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 29 May 2020 09:18:21 -0700 (PDT)
Received: by mail-wr1-f46.google.com with SMTP id j10so4300149wrw.8
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 09:18:21 -0700 (PDT)
X-Received: by 2002:adf:e648:: with SMTP id b8mr9718468wrn.386.1590769100791; 
 Fri, 29 May 2020 09:18:20 -0700 (PDT)
MIME-Version: 1.0
References: <20200529160621.3123-1-tamas@tklengyel.com>
 <69225f69-3ca0-c759-03a5-60d6464a7eb4@citrix.com>
In-Reply-To: <69225f69-3ca0-c759-03a5-60d6464a7eb4@citrix.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Fri, 29 May 2020 10:17:45 -0600
X-Gmail-Original-Message-ID: <CABfawhkeo+5t+ofGs25pzVhR0RO6QYaYQbQv0f-baB+i9uAOxg@mail.gmail.com>
Message-ID: <CABfawhkeo+5t+ofGs25pzVhR0RO6QYaYQbQv0f-baB+i9uAOxg@mail.gmail.com>
Subject: Re: [PATCH for-4.14] tools/libxl: fix setting altp2m param broken by
 1e9bc407cf0
To: Andrew Cooper <andrew.cooper3@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 10:15 AM Andrew Cooper
<andrew.cooper3@citrix.com> wrote:
>
> On 29/05/2020 17:06, Tamas K Lengyel wrote:
> > The patch 1e9bc407cf0 mistakenly converted the altp2m config option to a
> > boolean. This is incorrect and breaks the external-only use case of altp2m,
> > which is set with a value of 2.
> >
> > Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
>
> Urg yes.  Sorry.
>
> However, this doesn't build because there is another use of the altp2m
> variable between the two hunks below, for compatibility with the older
> altp2mhvm option.

Eh, so much for hastily sending a patch with last-minute changes.

>
> I think changing its type just to int ought to suffice?

Indeed, that would work as well. Let me just resend with that.

Tamas


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:22:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:22:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehmO-00067W-Lp; Fri, 29 May 2020 16:22:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U2UY=7L=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jehmN-00067R-Lf
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:22:43 +0000
X-Inumbo-ID: 9f941326-a1c8-11ea-8993-bc764e2007e4
Received: from mail-oi1-x244.google.com (unknown [2607:f8b0:4864:20::244])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9f941326-a1c8-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 16:22:43 +0000 (UTC)
Received: by mail-oi1-x244.google.com with SMTP id w4so3090107oia.1
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 09:22:43 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=3Fg0rrM7+ln+XqJgJiVqac7czaLMntvGuEOYTNHktkI=;
 b=ZhavIuKTFVV6ev207J3C345uY8iNeT4+Sr6NNp4U+FDt+DL8ea660azh03fumFDCOq
 4nIrNIIJhoSeYwgLzcuHClSRTefBGPwhEita1r+08/LTdJ9+fJWS38LPX/0KlK9QnPbF
 Z4ONcl4Iiqfy9ti+Q/lIGPbeJWvIHN6qx4iZ27+Ml8ltzYTHbS5qNdSdp7vZvp847+ne
 BVVmADmWb2I6AyA70hcwCY8jYIxO7qQgK0ZxdObyayCzqAqJjrfcWl2JT7nl/0PD0UZH
 XliT6HA6HuOr1mGAjrL9OQ2/klsho8nPwr6FL/OGuFjK8J/hHxyZCsRDMXlQGgf8h7tJ
 wfaw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=3Fg0rrM7+ln+XqJgJiVqac7czaLMntvGuEOYTNHktkI=;
 b=Ptt/hsuuXmTqs8n+KsuF+hHCGYk6QCCFmrZewl21CXfj8aFRMixlB7oUgb8ptTL548
 Rxt+8opqg2yijdV93TmoA0MgXwBKo8nb7VLcjdIv/fzQJyiQDjMy09a+7T/s83Hha25a
 rVDgnob2tanAvZUKc9DfbhAl5/jZ4YieHjIgl8BW1TakvWNJOMg3jcl2mK3TErY0f+rZ
 bfx7xbMz4oPJU1HHP3dxWmbQozDTm7Elox787+G8DZ1apFqDS6P7YtQNwcuJPel7ESfj
 mTWw5Cj17MNc93OwVkl2fQ7AbxqrgJr1UVesj8oqiK8XU5UUeww1yM+YIW+hyQpMWI97
 Uccw==
X-Gm-Message-State: AOAM532c/EGY9v8qyOMH4IrG7rN1xsO4BLvPETHCSLheWD0Z34/sMrJj
 2L4JfGFSp2sS/VSSFnrkfjgNb09ou8k=
X-Google-Smtp-Source: ABdhPJywU00eYjcmhRsCu/lkigi4LaEqjW2ZrgqQ0Q+Ct/rP1p9cVAZmXdDdb9l0vGGeuZ/FhIjqGQ==
X-Received: by 2002:aca:b585:: with SMTP id e127mr622601oif.52.1590769357368; 
 Fri, 29 May 2020 09:22:37 -0700 (PDT)
Received: from localhost (c-71-205-12-124.hsd1.co.comcast.net. [71.205.12.124])
 by smtp.gmail.com with ESMTPSA id 89sm2599596oth.9.2020.05.29.09.22.36
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 29 May 2020 09:22:36 -0700 (PDT)
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH v2 for-4.14] tools/libxl: fix setting altp2m param broken by
 1e9bc407cf0
Date: Fri, 29 May 2020 10:22:34 -0600
Message-Id: <20200529162234.16824-1-tamas@tklengyel.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Tamas K Lengyel <tamas@tklengyel.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

The patch 1e9bc407cf0 mistakenly converted the altp2m config option to a
boolean. This is incorrect and breaks the external-only use case of altp2m,
which is set with a value of 2.

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
v2: just convert bool to unsigned int
---
 tools/libxl/libxl_x86.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libxl/libxl_x86.c b/tools/libxl/libxl_x86.c
index f8bc828e62..e57f63282e 100644
--- a/tools/libxl/libxl_x86.c
+++ b/tools/libxl/libxl_x86.c
@@ -391,7 +391,7 @@ static int hvm_set_conf_params(libxl__gc *gc, uint32_t domid,
     libxl_ctx *ctx = libxl__gc_owner(gc);
     xc_interface *xch = ctx->xch;
     int ret = ERROR_FAIL;
-    bool altp2m = info->altp2m;
+    unsigned int altp2m = info->altp2m;
 
     switch(info->type) {
     case LIBXL_DOMAIN_TYPE_HVM:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 29 16:25:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:25:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehoW-0006EL-2H; Fri, 29 May 2020 16:24:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jehoU-0006EG-VX
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:24:55 +0000
X-Inumbo-ID: ed8a5be4-a1c8-11ea-9947-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed8a5be4-a1c8-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 16:24:54 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:35382
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jehoR-000ouI-Lu (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 17:24:51 +0100
Subject: Re: [PATCH v3] x86/PV: remove unnecessary toggle_guest_pt() overhead
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <24d8b606-f74b-9367-d67e-e952838c7048@suse.com>
 <d7840278-b999-65fa-40bf-2b78e5266837@citrix.com>
 <a6084473-2fb7-4106-66a4-d180ef483314@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3dc314dd-3016-aa5e-c327-834b88e9eec2@citrix.com>
Date: Fri, 29 May 2020 17:24:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <a6084473-2fb7-4106-66a4-d180ef483314@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 22/05/2020 11:07, Jan Beulich wrote:
> On 21.05.2020 18:46, Andrew Cooper wrote:
>> On 05/05/2020 07:16, Jan Beulich wrote:
>>> While the mere updating of ->pv_cr3 and ->root_pgt_changed aren't overly
>>> expensive (but still needed only for the toggle_guest_mode() path), the
>>> effect of the latter on the exit-to-guest path is not insignificant.
>>> Move the logic into toggle_guest_mode(), on the basis that
>>> toggle_guest_pt() will always be invoked in pairs, yet we can't safely
>>> undo the setting of root_pgt_changed during the second of these
>>> invocations.
>>>
>>> While at it, add a comment ahead of toggle_guest_pt() to clarify its
>>> intended usage.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> I'm still of the opinion that the commit message wants rewriting to get
>> the important points across clearly.
>>
>> And those are that toggle_guest_pt() is called in pairs specifically to
>> read kernel data structures when emulating a userspace action, and that
>> this doesn't modify cr3 from the guests point of view, and therefore
>> doesn't need the resync on exit-to-guest path.
> Is this
>
> "toggle_guest_pt() is called in pairs, to read guest kernel data
>  structures when emulating a guest userspace action. Hence this doesn't
>  modify cr3 from the guest's point of view, and therefore doesn't need
>  any resync on the exit-to-guest path. Therefore move the updating of
>  ->pv_cr3 and ->root_pgt_changed into toggle_guest_mode(), since undoing
>  the changes during the second of these invocations wouldn't be a safe
>  thing to do."
>
> any better?

Yes - that will do.

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:27:06 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:27:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehqb-0006MX-GS; Fri, 29 May 2020 16:27:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jehqb-0006MS-2C
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:27:05 +0000
X-Inumbo-ID: 3a0d3997-a1c9-11ea-a8e7-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3a0d3997-a1c9-11ea-a8e7-12813bfff9fa;
 Fri, 29 May 2020 16:27:04 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:35466
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jehqX-000qPe-Jz (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 17:27:01 +0100
Subject: Re: [PATCH v2 for-4.14] tools/libxl: fix setting altp2m param broken
 by 1e9bc407cf0
To: Tamas K Lengyel <tamas@tklengyel.com>, xen-devel@lists.xenproject.org
References: <20200529162234.16824-1-tamas@tklengyel.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <00da0381-e132-03e1-3717-02f4e968ec32@citrix.com>
Date: Fri, 29 May 2020 17:27:00 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200529162234.16824-1-tamas@tklengyel.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony PERARD <anthony.perard@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 17:22, Tamas K Lengyel wrote:
> The patch 1e9bc407cf0 mistakenly converted the altp2m config option to a
> boolean. This is incorrect and breaks the external-only use case of altp2m,
> which is set with a value of 2.
>
> Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>

Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>

Sorry for breaking it to begin with.


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:32:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:32:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jehw7-0007JB-56; Fri, 29 May 2020 16:32:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hF3w=7L=citrix.com=ian.jackson@srs-us1.protection.inumbo.net>)
 id 1jehw6-0007J6-9E
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:32:46 +0000
X-Inumbo-ID: 06279288-a1ca-11ea-a8e7-12813bfff9fa
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06279288-a1ca-11ea-a8e7-12813bfff9fa;
 Fri, 29 May 2020 16:32:45 +0000 (UTC)
Authentication-Results: esa2.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: cNZNwNdz9imlDuD7aarbNOnTiS4Q0578xyQsM7utOUPYSoTWOxh7HgA9JAN7O+N0wLtseKdk40
 wr5wDYle/jfJAbGB/q9BO+vk+h+Fyg79TAw2BIVgnvxiXdW8RbXXKTxbsWXfytpS0s6fGShoGB
 gLUgIznsw6jfbpAYwu6BQ5x+QEPpduWy8MzMuyzEYqFr/Rr1DOhZpPB6cjYOxaAIgWl2qrENdq
 DAi+LODHzosOucNMZ3KpYzp00HobG+qNRr5kGiT72lbFBIw6QRwvHfrDkUjmTH6yQnu6a6t/pf
 PJs=
X-SBRS: 2.7
X-MesageID: 18801622
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="18801622"
From: Ian Jackson <ian.jackson@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
Message-ID: <24273.14616.523593.758476@mariner.uk.xensource.com>
Date: Fri, 29 May 2020 17:32:24 +0100
To: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 for-4.14] tools/libxl: fix setting altp2m param broken
 by 1e9bc407cf0
In-Reply-To: <00da0381-e132-03e1-3717-02f4e968ec32@citrix.com>
References: <20200529162234.16824-1-tamas@tklengyel.com>
 <00da0381-e132-03e1-3717-02f4e968ec32@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tamas K Lengyel <tamas@tklengyel.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Andrew Cooper writes ("Re: [PATCH v2 for-4.14] tools/libxl: fix setting altp2m param broken by 1e9bc407cf0"):
> On 29/05/2020 17:22, Tamas K Lengyel wrote:
> > The patch 1e9bc407cf0 mistakenly converted the altp2m config option to a
> > boolean. This is incorrect and breaks the external-only use case of altp2m,
> > which is set with a value of 2.
> >
> > Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> 
> Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> Sorry for breaking it to begin with.

Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>

and pushed.


From xen-devel-bounces@lists.xenproject.org Fri May 29 16:37:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 16:37:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jei0f-0007UV-Pl; Fri, 29 May 2020 16:37:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wp2E=7L=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1jei0e-0007UQ-LV
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 16:37:28 +0000
X-Inumbo-ID: aed5e704-a1ca-11ea-8993-bc764e2007e4
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aed5e704-a1ca-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 16:37:28 +0000 (UTC)
Authentication-Results: esa4.hc3370-68.iphmx.com;
 dkim=none (message not signed) header.i=none
IronPort-SDR: 54EilSntCTQud5ajH6XVnza1bO4Lqnoh3Zy8zBlF3AiGtOzowxprf1HACKHYCswC5XEDpikHH9
 02H67rhq1XY46D+R1+AFXakzbxUYzylfa7FOANbswrhETVEOuXquy50A6eXuS89FLT0J1jxQFv
 76jEGvcOY+21Fe/kpab3R0802ZpeiIIiQf6Do/rf+RORO5xu1Lw52Iz9dQIu06Vep0nFE9vvb2
 g26LrijsREgsO/jjMkJeTyIPxd5ZqpVBeRbT39nZZ2x2iYGnJ2mjo0F8XbOfKDV2mPwHebWG0b
 7pY=
X-SBRS: 2.7
X-MesageID: 19524735
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.73,449,1583211600"; d="scan'208";a="19524735"
Date: Fri, 29 May 2020 17:37:22 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: George Dunlap <George.Dunlap@citrix.com>
Subject: Re: [PATCH v2 3/5] automation/archlinux: Add 32-bit glibc headers
Message-ID: <20200529163722.GM2105@perard.uk.xensource.com>
References: <20200526221612.900922-1-george.dunlap@citrix.com>
 <20200526221612.900922-4-george.dunlap@citrix.com>
 <20200527104316.GH2105@perard.uk.xensource.com>
 <20200527112928.72amqcojenrz2a46@liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net>
 <CF50B50F-BDC0-4290-A606-33927CE86FE9@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <CF50B50F-BDC0-4290-A606-33927CE86FE9@citrix.com>
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Doug Goldstein <cardoe@cardoe.com>, Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Thu, May 28, 2020 at 12:32:02PM +0100, George Dunlap wrote:
> On May 27, 2020, at 12:29 PM, Wei Liu <wl@xen.org> wrote:
> > All automation patches:
> > 
> > Acked-by: Wei Liu <wl@xen.org>
> > 
> > Anthony, can you generate and push the new images? Thanks.
> 
> These are checked in now.
> 
> BTW it looks like there may be some other compilation issues updating the archlinux image.  I tested the minimum configuration required to get the golang bindings to build; but a full build fails with other errors I haven’t had time to diagnose yet.

The only issue seems to be that ipxe doesn't build. A simple -Wno-error
would make it work.

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Fri May 29 17:12:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 17:12:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeiXx-0002nM-MJ; Fri, 29 May 2020 17:11:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=EHfH=7L=amacapital.net=luto@srs-us1.protection.inumbo.net>)
 id 1jeiXv-0002nH-R1
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 17:11:51 +0000
X-Inumbo-ID: 7c4c02fa-a1cf-11ea-9dbe-bc764e2007e4
Received: from mail-pl1-x62e.google.com (unknown [2607:f8b0:4864:20::62e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c4c02fa-a1cf-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 17:11:50 +0000 (UTC)
Received: by mail-pl1-x62e.google.com with SMTP id q16so1397747plr.2
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 10:11:50 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=amacapital-net.20150623.gappssmtp.com; s=20150623;
 h=content-transfer-encoding:from:mime-version:subject:date:message-id
 :references:cc:in-reply-to:to;
 bh=nGp/+75iqlq3uduDZuD3kTDOrfn+KoznVQbzyWd/W0o=;
 b=Iz5G/NbYnESBfEH6EWdVjujs8eFUmFP+RhrFVUnFccpnXv4Fa/BMMly6nTeGCYKQNz
 C8wEVN7Kny21GRz8hF5Q3DYntTnHnUP4Bb6E83XcTorLX6s0lCKByUnCXxHn0Qt87hIL
 a0Z3Hue9PxXFyB8S7an2uFhZkyhX47Ewf9Ct5tsdtdhtUH/nsbizxhYF19tREa6fIVtT
 PpeK7rJJcszGtoMaiZGWrzO1cp53RsPAYS3GNy93jxWxzsXT1QDnoq9MV/++bGm12YKg
 NEwfwlYSkboqITa2pkQa+0WBMg4DtDLrqQhTq2xfBYHBfSCvwkzHMejnXMDnUP6al0IK
 Eq+g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:content-transfer-encoding:from:mime-version
 :subject:date:message-id:references:cc:in-reply-to:to;
 bh=nGp/+75iqlq3uduDZuD3kTDOrfn+KoznVQbzyWd/W0o=;
 b=E2Dui8Kh2afqdGn4gZUej3prUsk0uvBt79ma1kVfFNAHCRl736sf2cJ9QElefBc1cK
 tvIgvkpZmVRCNB3rHsrCxL0o4I2iseABJCCldzlumBG9EdAotEaAi1tvnG4ya7UkQaiJ
 SWskJD0KPjzg5fXcKcNn/Dip8gNiPa76Xfl7xCMnnF7br1nL40Gs+aeXrNI7OB0rFNgl
 +DPAmKa56zWI7dMtSYYFRFLVeCXJlNvIP1rzKFMVIaVcaxHB27cUQnSVdnS8+PODOW1n
 4z1DIUn14tLjnZKoDOX6rhQR5a2KaaH9pewBqHJCDrB5QoFtR82CukPzSfw3+Xs12FYZ
 UWzg==
X-Gm-Message-State: AOAM5324+DeLR1lZa28VH4hKiQXl+JgAwCzAp9q6I/pbghVbG2imnc1H
 rvp+XIVqCXANurpPDP23HWlbzA==
X-Google-Smtp-Source: ABdhPJxU7IET/hCFPb/sQTpkd+UhssO24dR0raJ+KxqwUUO0QOtBkWnqsfO01aR/A7zGCXOYcrsdlw==
X-Received: by 2002:a17:90a:9c6:: with SMTP id
 64mr10932757pjo.83.1590772309919; 
 Fri, 29 May 2020 10:11:49 -0700 (PDT)
Received: from ?IPv6:2601:646:c200:1ef2:2dc1:c1e9:9ff7:54d4?
 ([2601:646:c200:1ef2:2dc1:c1e9:9ff7:54d4])
 by smtp.gmail.com with ESMTPSA id n8sm32418pjq.49.2020.05.29.10.11.44
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 29 May 2020 10:11:48 -0700 (PDT)
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
From: Andy Lutomirski <luto@amacapital.net>
Mime-Version: 1.0 (1.0)
Subject: Re: [BOOTLOADER SPECIFICATION RFC] The bootloader log format for
 TrenchBoot and others
Date: Fri, 29 May 2020 10:11:40 -0700
Message-Id: <7FE0DF48-C221-460E-99D5-00E42128283C@amacapital.net>
References: <20200529112735.qln44ds6z7djheof@tomti.i.net-space.pl>
In-Reply-To: <20200529112735.qln44ds6z7djheof@tomti.i.net-space.pl>
To: Daniel Kiper <daniel.kiper@oracle.com>
X-Mailer: iPhone Mail (17E262)
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: dpsmith@apertussolutions.com, alexander.burmashev@oracle.com,
 krystian.hebel@3mdeb.com, hpa@zytor.com, michal.zygowski@3mdeb.com,
 grub-devel@gnu.org, x86@kernel.org, javierm@redhat.com,
 kanth.ghatraju@oracle.com, ross.philipson@oracle.com,
 xen-devel@lists.xenproject.org, leif@nuviainc.com,
 trenchboot-devel@googlegroups.com, alec.brown@oracle.com, mtottenh@akamai.com,
 konrad.wilk@oracle.com, phcoder@gmail.com, piotr.krol@3mdeb.com,
 ard.biesheuvel@linaro.org, andrew.cooper3@citrix.com,
 linux-kernel@vger.kernel.org, mjg59@google.com, eric.snowberg@oracle.com,
 pjones@redhat.com, lukasz.hawrylko@linux.intel.com
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>



> On May 29, 2020, at 4:30 AM, Daniel Kiper <daniel.kiper@oracle.com> wrote:
>
> Hey,
>
> Below you can find my rough idea of the bootloader log format which is a
> generic thing but initially will be used for the TrenchBoot work. I discussed
> this proposal with Ross and Daniel S. So, the idea went through initial
> sanitization. Now I would like to take feedback from other folks too.
> So, please take a look and complain...
>
> In general we want to pass the messages produced by the bootloader to the OS
> kernel and finally to the userspace for further processing and analysis.
> Below is the description of the structures which will be used for this thing.

Is the intent for a human to read these, or to get them into the system log
file, or are they intended to actually change the behavior of the system?

IOW, what is this for?


From xen-devel-bounces@lists.xenproject.org Fri May 29 17:25:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 17:25:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeiks-0003q5-Ss; Fri, 29 May 2020 17:25:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=mKAR=7L=xen.org=julien@srs-us1.protection.inumbo.net>)
 id 1jeikr-0003q0-Le
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 17:25:13 +0000
X-Inumbo-ID: 5ab53cc2-a1d1-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ab53cc2-a1d1-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 17:25:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
 s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
 MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=yveHsKIkPGCJOia9sJHWlG6RuOoukAdfzhEBNIBMXf0=; b=Imrku9NFV7ena52MRWJKCHOVO8
 RNyg+XsuzemvxIFOHARVYL6v/4DIGt5bXLGgx2aESXzrdUtJkpn3JnV60mPzP6Dfnvrb3W3Jc4ke/
 LZ1n+hzqKYIph1Qhk5D0vt20Zj4uR1pXqZSCgnL+ULCc7VziDhrkRyh7hl5DM6ulmr0A=;
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeikf-0002he-Sk; Fri, 29 May 2020 17:25:01 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1jeikf-0008DZ-Ji; Fri, 29 May 2020 17:25:01 +0000
Subject: Re: Xen XSM/FLASK policy, grub defaults, etc.
To: Jan Beulich <jbeulich@suse.com>
References: <24270.35349.838484.116865@mariner.uk.xensource.com>
 <0D83AAA6-A205-4256-8A38-CC8122AC063D@citrix.com>
 <24272.59646.746545.343358@mariner.uk.xensource.com>
 <4a8e7cf2-8f63-d4d2-e051-9484a5b8c8ed@suse.com>
 <96F32637-E410-4EC8-937A-CFC8BE724352@citrix.com>
 <24273.8332.662607.125522@mariner.uk.xensource.com>
 <7eaa7541-f698-b29b-b4b3-1f40fc37c5d7@xen.org>
 <63838ce9-8dbf-aacf-521d-97540758a945@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <429e81a2-80d5-0d50-6352-6471cd4698a8@xen.org>
Date: Fri, 29 May 2020 18:24:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:68.0)
 Gecko/20100101 Thunderbird/68.8.1
MIME-Version: 1.0
In-Reply-To: <63838ce9-8dbf-aacf-521d-97540758a945@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?UTF-8?B?SsO8cmdlbiBHcm/Dnw==?= <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "cjwatson@debian.org" <cjwatson@debian.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <Andrew.Cooper3@citrix.com>,
 George Dunlap <George.Dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Ian Jackson <ian.jackson@citrix.com>, Daniel De Graaf <dgdegra@tycho.nsa.gov>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi Jan,

On 29/05/2020 16:11, Jan Beulich wrote:
> On 29.05.2020 17:05, Julien Grall wrote:
>> On 29/05/2020 15:47, Ian Jackson wrote:
>>> George Dunlap writes ("Re: Xen XSM/FLASK policy, grub defaults, etc."):
>>>> Which isn’t to say we shouldn’t do it; but it might be nice to also have an intermediate solution that works right now, even if it’s not optimal.
>>>
>>> I propose the following behaviour by update-grub:
>>>
>>>    1. Look for an ELF note, TBD.  If it's found, make XSM boot entries.
>>>       (For now, skip this step, since the ELF note is not defined.)
>>
>> I am afraid the ELF note is a very x86 thing. On Arm, we don't have such
>> a thing for the kernel/xen (actually the final binary is not even an ELF).
> 
> Ouch - of course. Is there anything similar one could employ there?

In the past, we discussed adding support for a note in the zImage 
(arm32 kernel format), but I never got the chance to pursue the discussion.

With Juergen's hypfs series, the hypervisor now embeds the .config. Is 
it possible to extract it?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 29 17:38:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 17:38:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeixE-00059E-Lp; Fri, 29 May 2020 17:38:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeixD-000599-F8
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 17:37:59 +0000
X-Inumbo-ID: 22f34fd4-a1d3-11ea-a8f4-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22f34fd4-a1d3-11ea-a8f4-12813bfff9fa;
 Fri, 29 May 2020 17:37:58 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=48+5Rw/R1syS3jfI2y3hLLwuwfTh6AZnuM/wwuXlfZQ=; b=LwBq4Cx5+C+WrN8ZEMna1xcwc
 4LCtyzM0acLfjU92AqrIpSlcVu6jlD+y/oVr3w3tXqwB7KkkN3MI/7M0wmTLuZSmYKHHhAZ9gv1RU
 JGJBkYhtAf941EDgEQNAQ5jsqtlOudmlTCoCtjdh4yf0VFUyxS9IU96AkSELuDKlHHlL4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeixB-00030g-Sv; Fri, 29 May 2020 17:37:57 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeixB-0005PU-HH; Fri, 29 May 2020 17:37:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeixB-0008Uf-GU; Fri, 29 May 2020 17:37:57 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150495-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150495: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=51f9df5c19f2b8a780aa2547cdf3d20736bfddcc
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 17:37:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150495 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150495/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a

version targeted for testing:
 xen                  51f9df5c19f2b8a780aa2547cdf3d20736bfddcc
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days    6 attempts
Testing same since   150495  2020-05-29 16:03:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 442 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 29 17:40:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 17:40:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeizg-0005tT-4Y; Fri, 29 May 2020 17:40:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jeizf-0005tN-1l
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 17:40:31 +0000
X-Inumbo-ID: 7d4794f4-a1d3-11ea-a8f4-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7d4794f4-a1d3-11ea-a8f4-12813bfff9fa;
 Fri, 29 May 2020 17:40:30 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:38184
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jeizY-000WmB-Jv (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 18:40:24 +0100
Subject: Re: [XEN PATCH] xen/build: introduce CLANG_FLAGS for testing other
 CFLAGS
To: Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20200529154343.1616925-1-anthony.perard@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <b60a175b-b5f4-c094-6c76-f8a7fd9f3658@citrix.com>
Date: Fri, 29 May 2020 18:40:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200529154343.1616925-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 16:43, Anthony PERARD wrote:
> Commit 534519f0514f ("xen: Have Kconfig check $(CC)'s version")
> introduced the use of CLANG_FLAGS in Kconfig which is used when
> testing for other CFLAGS via $(cc-option ...) but CLANG_FLAGS doesn't
> exist in the Xen build system. (It's a Linux/Kbuild variable that
> hasn't been added yet.)
>
> The missing CLANG_FLAGS isn't an issue for $(cc-option ..) but it
> would be when $(as-instr ..) gets imported from Kbuild to test
> assembly instructions. We need to know if we are going to use clang's
> assembler or not.
>
> CLANG_FLAGS needs to be calculated before we call Kconfig.
>
> So, this patch adds CLANG_FLAGS which may contain two flags which are
> needed for further testing of $(CC)'s capabilities:
>   -no-integrated-as
>     This flag isn't new, but is simply tested earlier so that it can be
>     used in Kconfig. The flag is only added for x86 builds, as
>     before.
>   -Werror=unknown-warning-option
>     This one is new; it makes sure that the warning is enabled, even
>     though it is on by default, because it could be disabled in a particular
>     build of clang, see Linux's commit e8de12fb7cde ("kbuild: Check
>     for unknown options with cc-option usage in Kconfig and clang")
>
>     It is present in clang 3.0.0, according Linux's commit
>     589834b3a009 ("kbuild: Add -Werror=unknown-warning-option to
>     CLANG_FLAGS").
>
> (The "note" that say that the flags was only added once wasn't true
> when tested on CentOS 6, so the patch uses $(or) and the flag will only
> be added once.)
>
> Fixes: 534519f0514f ("xen: Have Kconfig check $(CC)'s version")
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
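The $(cc-option ...) style probe the commit message refers to can be sketched as a small shell function. This is a sketch only, assuming a working `cc` on PATH; it mirrors the shape of the Kconfig $(success ...) macro rather than reproducing it exactly:

```shell
# Sketch of a cc-option style probe: compile an empty translation unit
# with the candidate flag plus -Werror, and print y/n depending on
# whether the compiler accepted it.  Assumes `cc` (or $CC) is on PATH.
cc_option() {
    if ${CC:-cc} -Werror "$1" -E -x c /dev/null -o /dev/null >/dev/null 2>&1; then
        echo y
    else
        echo n
    fi
}

cc_option -Wall                     # a flag every compiler knows
cc_option -Wtotally-bogus-flag-xyz  # an unknown flag, rejected with -Werror
```

The -Werror is the crux of the message above: without it (or without -Werror=unknown-warning-option) clang merely warns about unknown -W options, so the probe would wrongly report "y".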


From xen-devel-bounces@lists.xenproject.org Fri May 29 17:40:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 17:40:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeizn-00062d-Co; Fri, 29 May 2020 17:40:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U2UY=7L=tklsoftware.com=tamas@srs-us1.protection.inumbo.net>)
 id 1jeizl-00061j-Vr
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 17:40:38 +0000
X-Inumbo-ID: 81679c96-a1d3-11ea-8993-bc764e2007e4
Received: from mail-ej1-x642.google.com (unknown [2a00:1450:4864:20::642])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 81679c96-a1d3-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 17:40:37 +0000 (UTC)
Received: by mail-ej1-x642.google.com with SMTP id k11so2854084ejr.9
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 10:40:37 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=tklengyel-com.20150623.gappssmtp.com; s=20150623;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=7HK8mOY3BV+ku81uwlyallfkjvbwlZmNX57RqeAJgfE=;
 b=Y/CF7+JFBOAU3Cer6Y6reC7djOCMUs0Iag2bhU978XqlKJ6F0z29VBrkeE/LtHnx1w
 mvG/MucZxWK7ktgdOEDPVyfKfICl9ufTuH7vtc74lZIaL+nT1Mn7Yv+VMHJlxgUpa9fa
 cn0N8GkIcLUTORIQVB/jJi/IjT0FnK0i3KES5uXqqYnB2rRWSAUahX6BLkpsDreyUgnR
 KcKSSptK0AEwva5+o+af0WgR2YyURAvdKtDwNb36LxbJ933ThXXl75pz6XQf1yOAwv8U
 Lhgk/VUcxvq4IeeesaGmOX7SQwz0xjtGyYurxv2uh3NdLVugdbT2bGIhUqQBFvMeSQ4y
 qu1w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=7HK8mOY3BV+ku81uwlyallfkjvbwlZmNX57RqeAJgfE=;
 b=h4GMFOP5fxJCPWUDIekyr3XnRl2xHwmC6tEvC7xQYF46BIV3MtKp5OIUjCH6dfOiC5
 A2PsZUYOzEShwBUIiU0q/UNtz87KUF8N+qOISAki+Lc7FNiRbSw5aXFW5UqVqJkkBriK
 keRmljsxBc6HQchPI2bGx3kbhY7yrNxNavCrZTpuHbRjdkSvdcobJLvCrf/gpvB1Sn8U
 R2Kb9bDR4RTQmKj5K4l+UX0+9zbabccEdJ7R8sFaImRHCIVAZvqwe5ahmF8TaiKIS0Lq
 eUpapnfbQ9jawRQqJmaQR8xLKP9pCOxsU1ZNebne48vCrp4haHYn06GsjSwh6HZIf8nX
 BOQA==
X-Gm-Message-State: AOAM5306XyyEjZ5UfRkZ+asHTcWcH+fr6BdsZxoxPBMtjXRGSwJDd+ca
 AKgZFMP/DEJFhzqdWFUM00QP75Rmwoc=
X-Google-Smtp-Source: ABdhPJx1fM/0VSQwF3JWyKgY34j4nwomCbWk5GlXiEWHYJTWgcWyh3wFksZqpcM84kw5/Et3CAkDOQ==
X-Received: by 2002:a17:907:217a:: with SMTP id
 rl26mr8473721ejb.209.1590774036440; 
 Fri, 29 May 2020 10:40:36 -0700 (PDT)
Received: from mail-wm1-f50.google.com (mail-wm1-f50.google.com.
 [209.85.128.50])
 by smtp.gmail.com with ESMTPSA id h8sm7510406edk.72.2020.05.29.10.40.35
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 29 May 2020 10:40:35 -0700 (PDT)
Received: by mail-wm1-f50.google.com with SMTP id r15so4565448wmh.5
 for <xen-devel@lists.xenproject.org>; Fri, 29 May 2020 10:40:35 -0700 (PDT)
X-Received: by 2002:a1c:1b13:: with SMTP id b19mr8783754wmb.84.1590774035183; 
 Fri, 29 May 2020 10:40:35 -0700 (PDT)
MIME-Version: 1.0
References: <20200529162234.16824-1-tamas@tklengyel.com>
 <00da0381-e132-03e1-3717-02f4e968ec32@citrix.com>
 <24273.14616.523593.758476@mariner.uk.xensource.com>
In-Reply-To: <24273.14616.523593.758476@mariner.uk.xensource.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Fri, 29 May 2020 11:39:59 -0600
X-Gmail-Original-Message-ID: <CABfawhnf7jMDFZxzmEi65d13Wqttuju5=Bm2C5vi0Us7wKSaEQ@mail.gmail.com>
Message-ID: <CABfawhnf7jMDFZxzmEi65d13Wqttuju5=Bm2C5vi0Us7wKSaEQ@mail.gmail.com>
Subject: Re: [PATCH v2 for-4.14] tools/libxl: fix setting altp2m param broken
 by 1e9bc407cf0
To: Ian Jackson <ian.jackson@citrix.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Fri, May 29, 2020 at 10:32 AM Ian Jackson <ian.jackson@citrix.com> wrote:
>
> Andrew Cooper writes ("Re: [PATCH v2 for-4.14] tools/libxl: fix setting altp2m param broken by 1e9bc407cf0"):
> > On 29/05/2020 17:22, Tamas K Lengyel wrote:
> > > The patch 1e9bc407cf0 mistakenly converted the altp2m config option to a
> > > boolean. This is incorrect and breaks the external-only use case of
> > > altp2m, which is set with a value of 2.
> > >
> > > Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
> >
> > Reviewed-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >
> > Sorry for breaking it to begin with.
>
> Acked-by: Ian Jackson <ian.jackson@eu.citrix.com>
>
> and pushed.

Thanks for the fast turnaround.

Tamas


From xen-devel-bounces@lists.xenproject.org Fri May 29 18:28:25 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 18:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jejjj-0001im-7n; Fri, 29 May 2020 18:28:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5ub4=7L=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1jejji-0001ih-7d
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 18:28:06 +0000
X-Inumbo-ID: 22817dee-a1da-11ea-a900-12813bfff9fa
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22817dee-a1da-11ea-a900-12813bfff9fa;
 Fri, 29 May 2020 18:28:04 +0000 (UTC)
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Received: from relay2.suse.de (unknown [195.135.220.254])
 by mx2.suse.de (Postfix) with ESMTP id 660B8AD94;
 Fri, 29 May 2020 18:28:03 +0000 (UTC)
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Subject: [PATCH] xen/build: fix xen/tools/binfile
Date: Fri, 29 May 2020 20:28:00 +0200
Message-Id: <20200529182800.27555-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Juergen Gross <jgross@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

xen/tools/binfile contains a bash-specific command (let). This leads
to build failures on systems not using bash as /bin/sh.

Replace "let SHIFT=$OPTIND-1" with "SHIFT=$((OPTIND-1))".

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/tools/binfile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/tools/binfile b/xen/tools/binfile
index df0301183f..23099c49bf 100755
--- a/xen/tools/binfile
+++ b/xen/tools/binfile
@@ -17,7 +17,7 @@ while getopts "ia:" opt; do
         ;;
     esac
 done
-let "SHIFT=$OPTIND-1"
+SHIFT=$((OPTIND-1))
 shift $SHIFT
 
 target=$1
-- 
2.26.2
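The portable construct in the patch can be demonstrated in isolation. The sketch below uses hypothetical sample arguments, but parses options the same way binfile's getopts loop does, then shifts them off with POSIX arithmetic expansion instead of bash's `let`:

```shell
#!/bin/sh
# Option parsing in the style of xen/tools/binfile, using POSIX
# arithmetic expansion instead of the bashism `let "SHIFT=$OPTIND-1"`.
set -- -i -a 16 target.o source.bin   # hypothetical sample arguments

while getopts "ia:" opt; do
    case $opt in
        i) ifdef=1 ;;
        a) align=$OPTARG ;;
    esac
done

SHIFT=$((OPTIND-1))   # $((...)) is POSIX: works in dash, ash, busybox sh
shift $SHIFT

echo "$1"             # first non-option argument: target.o
```

`let` is a bash builtin, so the original line aborts with "let: not found" under dash and other minimal /bin/sh implementations, while `$((OPTIND-1))` behaves identically everywhere.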



From xen-devel-bounces@lists.xenproject.org Fri May 29 18:36:48 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 18:36:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jejs5-0002i2-39; Fri, 29 May 2020 18:36:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jejs3-0002hx-W2
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 18:36:44 +0000
X-Inumbo-ID: 56077277-a1db-11ea-a904-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 56077277-a1db-11ea-a904-12813bfff9fa;
 Fri, 29 May 2020 18:36:42 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:40022
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jejry-0011li-KS (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 19:36:38 +0100
Subject: Re: [PATCH v2 03/14] x86/shstk: Introduce Supervisor Shadow Stack
 support
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-4-andrew.cooper3@citrix.com>
 <4f535d4c-b3b3-fe5b-8b57-af736dc0a360@suse.com>
 <ad95944a-bd21-2ba8-6214-49d86050e816@citrix.com>
 <c3c3aea0-806f-4058-c1aa-cdc0f75007e2@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <f6ec0a0e-c7d0-22b5-b633-458a7fe2375f@citrix.com>
Date: Fri, 29 May 2020 19:36:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <c3c3aea0-806f-4058-c1aa-cdc0f75007e2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Anthony Perard <anthony.perard@citrix.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 12:59, Jan Beulich wrote:
> On 28.05.2020 20:10, Andrew Cooper wrote:
>> On 28/05/2020 11:25, Jan Beulich wrote:
>>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>>> --- a/xen/arch/x86/Kconfig
>>>> +++ b/xen/arch/x86/Kconfig
>>>> @@ -34,6 +34,10 @@ config ARCH_DEFCONFIG
>>>>  config INDIRECT_THUNK
>>>>  	def_bool $(cc-option,-mindirect-branch-register)
>>>>  
>>>> +config HAS_AS_CET
>>>> +	# binutils >= 2.29 and LLVM >= 7
>>>> +	def_bool $(as-instr,wrssq %rax$(comma)0;setssbsy;endbr64)
>>> So you put me in a really awkward position: I'd really like to see
>>> this series go in for 4.14, yet I've previously indicated I want the
>>> underlying concept to first be agreed upon, before any uses get
>>> introduced.
>> There are already users.  One of them is even in context.
> Hmm, indeed. I clearly didn't notice this aspect when reviewing
> Anthony's series.
>
>> I don't see that there is anything open for dispute in the first place.
>> Being able to do exactly this was one key driving factor for moving to a
>> newer Kconfig, because it is a superior mechanism compared to the ad-hoc
>> mess we had previously (not to mention a vast detriment to build time).
> This "key driving factor" was presumably from your perspective.
> Could you point me to a discussion (and resulting decision) that
> this is an explicit goal of that work? I don't recall any, and
> hence I also don't recall having been given a chance to influence
> the direction, decision, and overall outcome.

It took up a large chunk of the build system design session in Chicago.

>
> In the interest of getting this series in for 4.14, and on the
> assumption that you're willing to have a discussion on the
> direction wrt storing tool chain capabilities in .config before
> any further uses get added (and with the potential need to undo
> the ones we have / gain here)
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.


From xen-devel-bounces@lists.xenproject.org Fri May 29 18:39:27 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 18:39:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jejuW-0002pk-K2; Fri, 29 May 2020 18:39:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jejuV-0002pf-MW
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 18:39:15 +0000
X-Inumbo-ID: b2188af0-a1db-11ea-a904-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2188af0-a1db-11ea-a904-12813bfff9fa;
 Fri, 29 May 2020 18:39:15 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:40100
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jejuS-0001W5-MV (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 19:39:14 +0100
Subject: Re: [PATCH v2 03/14] x86/shstk: Introduce Supervisor Shadow Stack
 support
To: Anthony PERARD <anthony.perard@citrix.com>, Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-4-andrew.cooper3@citrix.com>
 <4f535d4c-b3b3-fe5b-8b57-af736dc0a360@suse.com>
 <ad95944a-bd21-2ba8-6214-49d86050e816@citrix.com>
 <c3c3aea0-806f-4058-c1aa-cdc0f75007e2@suse.com>
 <20200529155118.GL2105@perard.uk.xensource.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4c759ccc-b256-c074-0bd8-fe2d5c728715@citrix.com>
Date: Fri, 29 May 2020 19:39:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200529155118.GL2105@perard.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 16:51, Anthony PERARD wrote:
> On Fri, May 29, 2020 at 01:59:55PM +0200, Jan Beulich wrote:
>> On 28.05.2020 20:10, Andrew Cooper wrote:
>>> On 28/05/2020 11:25, Jan Beulich wrote:
>>>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>>>> --- a/xen/scripts/Kconfig.include
>>>>> +++ b/xen/scripts/Kconfig.include
>>>>> @@ -31,6 +31,10 @@ cc-option = $(success,$(CC) -Werror $(CLANG_FLAGS) $(1) -E -x c /dev/null -o /de
>>>>>  # Return y if the linker supports <flag>, n otherwise
>>>>>  ld-option = $(success,$(LD) -v $(1))
>>>>>  
>>>>> +# $(as-instr,<instr>)
>>>>> +# Return y if the assembler supports <instr>, n otherwise
>>>>> +as-instr = $(success,printf "%b\n" "$(1)" | $(CC) $(CLANG_FLAGS) -c -x assembler -o /dev/null -)
>>>> Is this actually checking the right thing in the clang case?
>>> Appears to work correctly for me.  (Again, its either fine, or need
>>> bugfixing anyway for 4.14, and raising with Linux as this is unmodified
>>> upstream code like the rest of Kconfig.include).
>> This answer made me try it out: On a system with clang 5 and
>> binutils 2.32 "incsspd	%eax" translates fine with
>> -no-integrated-as but doesn't without. The previously mentioned
>> odd use of CLANG_FLAGS, perhaps together with the disconnect
>> from where we establish whether to use -no-integrated-as in the
>> first place (arch.mk; I have no idea whether the CFLAGS
>> determined would be usable by the kconfig invocation, nor how
>> to solve the chicken-and-egg problem there if this is meant to
>> work that way), may be the reason for this. Cc-ing Anthony once
>> again ...
> I've just prepared/sent a patch which should fix this CLANG_FLAGS issue
> and allows testing the assembler in Kconfig.
>
> See: [XEN PATCH] xen/build: introduce CLANG_FLAGS for testing other CFLAGS

Thanks.  I've checked carefully, and this is an improvement from before.

Therefore I have acked and included the patch.

However, I think there is a further problem.  When -no-integrated-as
does get passed down, I don't see Clang falling back to using the GNU
assembler, so I suspect we have a further plumbing problem.  (It only
affects Clang 5.0 and earlier, i.e. obsolete toolchains these days.)

~Andrew
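The $(as-instr ...) probe quoted in this thread can be exercised directly from a shell. The sketch below (assuming a working `cc` on PATH) mirrors the macro's pipeline of printf feeding the compiler driver in assembler mode:

```shell
# Sketch of the as-instr probe: feed a candidate instruction to the
# assembler via the compiler driver, reading from stdin ("-"), and
# print y/n as Kconfig's $(success ...) would.  Assumes `cc` on PATH.
as_instr() {
    if printf "%b\n" "$1" | ${CC:-cc} -c -x assembler -o /dev/null - 2>/dev/null; then
        echo y
    else
        echo n
    fi
}

as_instr "nop"                        # accepted by any toolchain
as_instr "this_is_not_an_instruction" # unknown mnemonic, rejected
```

Which assembler actually runs behind `cc -c -x assembler` is exactly the point under discussion: with clang, CLANG_FLAGS (and -no-integrated-as in particular) decides whether the integrated assembler or an external one sees the instruction, so the probe result can differ depending on whether those flags reach the Kconfig invocation.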


From xen-devel-bounces@lists.xenproject.org Fri May 29 18:51:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 18:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jek5o-0004cl-KX; Fri, 29 May 2020 18:50:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jek5o-0004cg-68
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 18:50:56 +0000
X-Inumbo-ID: 5395c93c-a1dd-11ea-9dbe-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5395c93c-a1dd-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 18:50:55 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:40568
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jek5l-0008Uz-Kn (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 19:50:53 +0100
Subject: Re: [PATCH v2 04/14] x86/traps: Implement #CP handler and extend #PF
 for shadow stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-5-andrew.cooper3@citrix.com>
 <424dc7f2-d999-19e1-42ad-226cf22783eb@suse.com>
 <a5fa915b-b0ce-8247-09bb-dac3d149c6b5@citrix.com>
 <21cfcf09-930d-c0cd-6860-92523732a594@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5282b2f4-35b7-4942-5240-e9305a64d938@citrix.com>
Date: Fri, 29 May 2020 19:50:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <21cfcf09-930d-c0cd-6860-92523732a594@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/05/2020 14:31, Jan Beulich wrote:
> On 28.05.2020 15:22, Andrew Cooper wrote:
>> On 28/05/2020 13:03, Jan Beulich wrote:
>>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>>> @@ -940,7 +944,8 @@ autogen_stubs: /* Automatically generated stubs. */
>>>>          entrypoint 1b
>>>>  
>>>>          /* Reserved exceptions, heading towards do_reserved_trap(). */
>>>> -        .elseif vec == TRAP_copro_seg || vec == TRAP_spurious_int || (vec > TRAP_simd_error && vec < TRAP_nr)
>>>> +        .elseif vec == X86_EXC_CSO || vec == X86_EXC_SPV || \
>>>> +                vec == X86_EXC_VE  || (vec > X86_EXC_CP && vec < TRAP_nr)
>>> Adding yet another || here adds to the fragility of the entire
>>> construct. Wouldn't it be better to implement do_entry_VE at
>>> this occasion, even its handling continues to end up in
>>> do_reserved_trap()? This would have the benefit of avoiding the
>>> pointless checking of %spl first thing in its handling. Feel
>>> free to keep the R-b if you decide to go this route.
>> I actually have a different plan, which deletes this entire clause, and
>> simplifies our autogen sanity checking somewhat.
>>
>> For vectors which Xen has no implementation of (for whatever reason),
>> use DPL0, non-present descriptors, and redirect #NP[IDT] into
>> do_reserved_trap().
> Except that, #NP itself being a contributory exception, if the
> exception covered this way is also contributory (e.g. #CP) or of page
> fault class (e.g. #VE), we'd get #DF instead of #NP afaict.

Hmm.  Good point.

I also had some other cleanup plans.  (In due course,) I'll see what I
can do to make the status quo better.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 19:05:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 19:05:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jekJW-0005k1-V8; Fri, 29 May 2020 19:05:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rqsE=7L=knorrie.org=hans@srs-us1.protection.inumbo.net>)
 id 1jekJV-0005jw-RM
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 19:05:05 +0000
X-Inumbo-ID: 4d43f9ee-a1df-11ea-a90b-12813bfff9fa
Received: from syrinx.knorrie.org (unknown [82.94.188.77])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4d43f9ee-a1df-11ea-a90b-12813bfff9fa;
 Fri, 29 May 2020 19:05:04 +0000 (UTC)
Received: from [IPv6:2a02:a213:2b80:f000::12] (unknown
 [IPv6:2a02:a213:2b80:f000::12])
 (using TLSv1.3 with cipher TLS_AES_128_GCM_SHA256 (128/128 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (No client certificate requested)
 by syrinx.knorrie.org (Postfix) with ESMTPSA id 08E6D60927CEC;
 Fri, 29 May 2020 21:05:03 +0200 (CEST)
Subject: Re: Bug: toolstack allows too low values to be set for shadow_memory
To: Jan Beulich <jbeulich@suse.com>
References: <37137142-1e34-0f78-c950-91bcd6543eb8@knorrie.org>
 <304a7a55-20a6-1dfe-1f3a-dabe90d28f40@suse.com>
From: Hans van Kranenburg <hans@knorrie.org>
Autocrypt: addr=hans@knorrie.org; keydata=
 mQINBFo2pooBEADwTBe/lrCa78zuhVkmpvuN+pXPWHkYs0LuAgJrOsOKhxLkYXn6Pn7e3xm+
 ySfxwtFmqLUMPWujQYF0r5C6DteypL7XvkPP+FPVlQnDIifyEoKq8JZRPsAFt1S87QThYPC3
 mjfluLUKVBP21H3ZFUGjcf+hnJSN9d9MuSQmAvtJiLbRTo5DTZZvO/SuQlmafaEQteaOswme
 DKRcIYj7+FokaW9n90P8agvPZJn50MCKy1D2QZwvw0g2ZMR8yUdtsX6fHTe7Ym+tHIYM3Tsg
 2KKgt17NTxIqyttcAIaVRs4+dnQ23J98iFmVHyT+X2Jou+KpHuULES8562QltmkchA7YxZpT
 mLMZ6TPit+sIocvxFE5dGiT1FMpjM5mOVCNOP+KOup/N7jobCG15haKWtu9k0kPz+trT3NOn
 gZXecYzBmasSJro60O4bwBayG9ILHNn+v/ZLg/jv33X2MV7oYXf+ustwjXnYUqVmjZkdI/pt
 30lcNUxCANvTF861OgvZUR4WoMNK4krXtodBoEImjmT385LATGFt9HnXd1rQ4QzqyMPBk84j
 roX5NpOzNZrNJiUxj+aUQZcINtbpmvskGpJX0RsfhOh2fxfQ39ZP/0a2C59gBQuVCH6C5qsY
 rc1qTIpGdPYT+J1S2rY88AvPpr2JHZbiVqeB3jIlwVSmkYeB/QARAQABtCZIYW5zIHZhbiBL
 cmFuZW5idXJnIDxoYW5zQGtub3JyaWUub3JnPokCTgQTAQoAOBYhBOJv1o/B6NS2GUVGTueB
 VzIYDCpVBQJaNq7KAhsDBQsJCAcDBRUKCQgLBRYCAwEAAh4BAheAAAoJEOeBVzIYDCpVgDMQ
 ANSQMebh0Rr6RNhfA+g9CKiCDMGWZvHvvq3BNo9TqAo9BC4neAoVciSmeZXIlN8xVALf6rF8
 lKy8L1omocMcWw7TlvZHBr2gZHKlFYYC34R2NvxS0xO8Iw5rhEU6paYaKzlrvxuXuHMVXgjj
 bM3zBiN8W4b9VW1MoynP9nvm1WaGtFI9GIyK9j6mBCU+N5hpvFtt4DBmuWjzdDkd3sWUufYd
 nQhGimWHEg95GWhQUiFvr4HRvYJpbjRRRQG3O/5Fm0YyTYZkI5CDzQIm5lhqKNqmuf2ENstS
 8KcBImlbwlzEpK9Pa3Z5MUeLZ5Ywwv+d11fyhk53aT9bipdEipvcGa6DrA0DquO4WlQR+RKU
 ywoGTgntwFu8G0+tmD8J1UE6kIzFwE5kiFWjM0rxv1tAgV9ZWqmp3sbI7vzbZXn+KI/wosHV
 iDeW5rYg+PdmnOlYXQIJO+t0KmF5zJlSe7daylKZKTYtk7w1Fq/Oh1Rps9h1C4sXN8OAUO7h
 1SAnEtehHfv52nPxwZiI6eqbvqV0uEEyLFS5pCuuwmPpC8AmOrciY2T8T+4pmkJNO2Nd3jOP
 cnJgAQrxPvD7ACp/85LParnoz5c9/nPHJB1FgbAa7N5d8ubqJgi+k9Q2lAL9vBxK67aZlFZ0
 Kd7u1w1rUlY12KlFWzxpd4TuHZJ8rwi7PUceuQINBFo2sK8BEADSZP5cKnGl2d7CHXdpAzVF
 6K4Hxwn5eHyKC1D/YvsY+otq3PnfLJeMf1hzv2OSrGaEAkGJh/9yXPOkQ+J1OxJJs9CY0fqB
 MvHZ98iTyeFAq+4CwKcnZxLiBchQJQd0dFPujtcoMkWgzp3QdzONdkK4P7+9XfryPECyCSUF
 ib2aEkuU3Ic4LYfsBqGR5hezbJqOs96ExMnYUCEAS5aeejr3xNb8NqZLPqU38SQCTLrAmPAX
 glKVnYyEVxFUV8EXXY6AK31lRzpCqmPxLoyhPAPda9BXchRluy+QOyg+Yn4Q2DSwbgCYPrxo
 HTZKxH+E+JxCMfSW35ZE5ufvAbY3IrfHIhbNnHyxbTRgYMDbTQCDyN9F2Rvx3EButRMApj+v
 OuaMBJF/fWfxL3pSIosG9Q7uPc+qJvVMHMRNnS0Y1QQ5ZPLG0zI5TeHzMnGmSTbcvn/NOxDe
 6EhumcclFS0foHR78l1uOhUItya/48WCJE3FvOS3+KBhYvXCsG84KVsJeen+ieX/8lnSn0d2
 ZvUsj+6wo+d8tcOAP+KGwJ+ElOilqW29QfV4qvqmxnWjDYQWzxU9WGagU3z0diN97zMEO4D8
 SfUu72S5O0o9ATgid9lEzMKdagXP94x5CRvBydWu1E5CTgKZ3YZv+U3QclOG5p9/4+QNbhqH
 W4SaIIg90CFMiwARAQABiQRsBBgBCgAgFiEE4m/Wj8Ho1LYZRUZO54FXMhgMKlUFAlo2sK8C
 GwICQAkQ54FXMhgMKlXBdCAEGQEKAB0WIQRJbJ13A1ob3rfuShiywd9yY2FfbAUCWjawrwAK
 CRCywd9yY2FfbMKbEACIGLdFrD5j8rz/1fm8xWTJlOb3+o5A6fdJ2eyPwr5njJZSG9i5R28c
 dMmcwLtVisfedBUYLaMBmCEHnj7ylOgJi60HE74ZySX055hKECNfmA9Q7eidxta5WeXeTPSb
 PwTQkAgUZ576AO129MKKP4jkEiNENePMuYugCuW7XGR+FCEC2efYlVwDQy24ZfR9Q1dNK2ny
 0gH1c+313l0JcNTKjQ0e7M9KsQSKUr6Tk0VGTFZE2dp+dJF1sxtWhJ6Ci7N1yyj3buFFpD9c
 kj5YQFqBkEwt3OGtYNuLfdwR4d47CEGdQSm52n91n/AKdhRDG5xvvADG0qLGBXdWvbdQFllm
 v47TlJRDc9LmwpIqgtaUGTVjtkhw0SdiwJX+BjhtWTtrQPbseDe2pN3gWte/dPidJWnj8zzS
 ggZ5otY2reSvM+79w/odUlmtaFx+IyFITuFnBVcMF0uGmQBBxssew8rePQejYQHz0bZUDNbD
 VaZiXqP4njzBJu5+nzNxQKzQJ0VDF6ve5K49y0RpT4IjNOupZ+OtlZTQyM7moag+Y6bcJ7KK
 8+MRdRjGFFWP6H/RCSFAfoOGIKTlZHubjgetyQhMwKJQ5KnGDm+XUkeIWyevPfCVPNvqF2q3
 viQm0taFit8L+x7ATpolZuSCat5PSXtgx1liGjBpPKnERxyNLQ/erRNcEACwEJliFbQm+c2i
 6ccpx2cdtyAI1yzWuE0nr9DqpsEbIZzTCIVyry/VZgdJ27YijGJWesj/ie/8PtpDu0Cf1pty
 QOKSpC9WvRCFGJPGS8MmvzepmX2DYQ5MSKTO5tRJZ8EwCFfd9OxX2g280rdcDyCFkY3BYrf9
 ic2PTKQokx+9sLCHAC/+feSx/MA/vYpY1EJwkAr37mP7Q8KA9PCRShJziiljh5tKQeIG4sz1
 QjOrS8WryEwI160jKBBNc/M5n2kiIPCrapBGsL58MumrtbL53VimFOAJaPaRWNSdWCJSnVSv
 kCHMl/1fRgzXEMpEmOlBEY0Kdd1Ut3S2cuwejzI+WbrQLgeps2N70Ztq50PkfWkj0jeethhI
 FqIJzNlUqVkHl1zCWSFsghxiMyZmqULaGcSDItYQ+3c9fxIO/v0zDg7bLeG9Zbj4y8E47xqJ
 6brtAAEJ1RIM42gzF5GW71BqZrbFFoI0C6AzgHjaQP1xfj7nBRSBz4ObqnsuvRr7H6Jme5rl
 eg7COIbm8R7zsFjF4tC6k5HMc1tZ8xX+WoDsurqeQuBOg7rggmhJEpDK2f+g8DsvKtP14Vs0
 Sn7fVJi87b5HZojry1lZB2pXUH90+GWPF7DabimBki4QLzmyJ/ENH8GspFulVR3U7r3YYQ5K
 ctOSoRq9pGmMi231Q+xx9LkCDQRaOtArARAA50ylThKbq0ACHyomxjQ6nFNxa9ICp6byU9Lh
 hKOax0GB6l4WebMsQLhVGRQ8H7DT84E7QLRYsidEbneB1ciToZkL5YFFaVxY0Hj1wKxCFcVo
 CRNtOfoPnHQ5m/eDLaO4o0KKL/kaxZwTn2jnl6BQDGX1Aak0u4KiUlFtoWn/E/NIv5QbTGSw
 IYuzWqqYBIzFtDbiQRvGw0NuKxAGMhwXy8VP05mmNwRdyh/CC4rWQPBTvTeMwr3nl8/G+16/
 cn4RNGhDiGTTXcX03qzZ5jZ5N7GLY5JtE6pTpLG+EXn5pAnQ7MvuO19cCbp6Dj8fXRmI0SVX
 WKSo0A2C8xH6KLCRfUMzD7nvDRU+bAHQmbi5cZBODBZ5yp5CfIL1KUCSoiGOMpMin3FrarIl
 cxhNtoE+ya23A+JVtOwtM53ESra9cJL4WPkyk/E3OvNDmh8U6iZXn4ZaKQTHaxN9yvmAUhZQ
 iQi/sABwxCcQQ2ydRb86Vjcbx+FUr5OoEyQS46gc3KN5yax9D3H9wrptOzkNNMUhFj0oK0fX
 /MYDWOFeuNBTYk1uFRJDmHAOp01rrMHRogQAkMBuJDMrMHfolivZw8RKfdPzgiI500okLTzH
 C0wgSSAOyHKGZjYjbEwmxsl3sLJck9IPOKvqQi1DkvpOPFSUeX3LPBIav5UUlXt0wjbzInUA
 EQEAAYkCNgQYAQoAIBYhBOJv1o/B6NS2GUVGTueBVzIYDCpVBQJaOtArAhsMAAoJEOeBVzIY
 DCpV4kgP+wUh3BDRhuKaZyianKroStgr+LM8FIUwQs3Fc8qKrcDaa35vdT9cocDZjkaGHprp
 mlN0OuT2PB+Djt7am2noV6Kv1C8EnCPpyDBCwa7DntGdGcGMjH9w6aR4/ruNRUGS1aSMw8sR
 QgpTVWEyzHlnIH92D+k+IhdNG+eJ6o1fc7MeC0gUwMt27Im+TxVxc0JRfniNk8PUAg4kvJq7
 z7NLBUcJsIh3hM0WHQH9AYe/mZhQq5oyZTsz4jo/dWFRSlpY7zrDS2TZNYt4cCfZj1bIdpbf
 SpRi9M3W/yBF2WOkwYgbkqGnTUvr+3r0LMCH2H7nzENrYxNY2kFmDX9bBvOWsWpcMdOEo99/
 Iayz5/q2d1rVjYVFRm5U9hG+C7BYvtUOnUvSEBeE4tnJBMakbJPYxWe61yANDQubPsINB10i
 ngzsm553yqEjLTuWOjzdHLpE4lzD416ExCoZy7RLEHNhM1YQSI2RNs8umlDfZM9Lek1+1kgB
 vT3RH0/CpPJgveWV5xDOKuhD8j5l7FME+t2RWP+gyLid6dE0C7J03ir90PlTEkMEHEzyJMPt
 OhO05Phy+d51WPTo1VSKxhL4bsWddHLfQoXW8RQ388Q69JG4m+JhNH/XvWe3aQFpYP+GZuzO
 hkMez0lHCaVOOLBSKHkAHh9i0/pH+/3hfEa4NsoHCpyy
Message-ID: <c1547f46-bdea-21e4-19cf-4c85dfac3f25@knorrie.org>
Date: Fri, 29 May 2020 21:05:02 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.4.1
MIME-Version: 1.0
In-Reply-To: <304a7a55-20a6-1dfe-1f3a-dabe90d28f40@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi,

On 5/26/20 9:41 AM, Jan Beulich wrote:
> On 25.05.2020 17:51, Hans van Kranenburg wrote:
>> This bug report is a follow-up to the thread "Domu windows 2012 crash"
>> on the xen-users list. In there we found out that it is possible to set
>> a value for shadow_memory that is lower than a safe minimum value.
>>
>> This became apparent after XSA-304, which caused more of this type of
>> memory to be used. Having a hardcoded line like shadow_memory = 8 results
>> in random crashes of the guest,
> 
> I don't think it is the tool stack's responsibility to override
> admin requested values, or at least not as far as affecting guest
> stability goes;

This is not primarily a technical issue, or about whether software works
correctly in some mathematically proven way.

It's a usability issue, so it's about what level of unknowingly
unloading guns into feet is deemed desirable.

And, if that's happening, how difficult it should be for a user to
actually find out what's wrong.

> host stability of course needs to be guaranteed,
> but that's then still the hypervisor's job, not the tool stack's.
> 
> Compare this to e.g. setting too small a memory= for a guest to
> be able to boot at all, or setting maxmem > memory for a guest
> without balloon driver.
> 
> Furthermore - what would the suggestion be as to a "safe minimum
> value"? Assuming _all_ large pages may potentially get shattered
> is surely a waste of memory, unless the admin really knows
> guests are going to behave that way. (In your report you also
> didn't mention what memory= values the issue was observed with.
> Obviously larger memory= also requires bumping shadow_memory= at
> least from some point onwards.)
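[Editorial aside: the "safe minimum value" Jan asks about is the kind of figure a toolstack heuristic could compute. The sketch below is modeled on libxl's shadow-memory sizing rule of thumb (256 pages per vCPU plus 2 pages per MiB of guest RAM); the function names and the clamping policy are illustrative assumptions, not the actual toolstack API.]

```c
#include <assert.h>

/*
 * Hedged sketch of a shadow_memory floor, modeled on libxl's
 * heuristic: 256 4-KiB pages per vCPU plus 2 pages per MiB of
 * guest RAM.  Returns KiB.  Illustrative only.
 */
static unsigned long min_shadow_kb(unsigned long maxmem_kb, unsigned int vcpus)
{
    return 4UL * (256UL * vcpus + 2UL * (maxmem_kb / 1024));
}

/*
 * Clamp an admin-supplied shadow_memory= (in MiB) to the computed
 * floor, rather than letting a too-small value crash the guest.
 * The clamping policy itself is a hypothetical design choice.
 */
static unsigned long clamp_shadow_mb(unsigned long requested_mb,
                                     unsigned long maxmem_kb,
                                     unsigned int vcpus)
{
    unsigned long floor_mb = (min_shadow_kb(maxmem_kb, vcpus) + 1023) / 1024;

    return requested_mb < floor_mb ? floor_mb : requested_mb;
}
```

For an 8 GiB, 4-vCPU guest this floor comes out well above the hardcoded `shadow_memory = 8` that triggered the crashes in the original report.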

Thanks,
Hans


From xen-devel-bounces@lists.xenproject.org Fri May 29 19:05:12 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 19:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jekJb-0005kI-6l; Fri, 29 May 2020 19:05:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jekJZ-0005k7-MU
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 19:05:09 +0000
X-Inumbo-ID: 50663132-a1df-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50663132-a1df-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 19:05:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=1UVK56gpo6abrF2Dj8xV+mzkLvna7y5eLTSJRGSnz3g=; b=c/aTv8POB6uUQDjqt6eLYBnBT
 GvAi9gutAFos0EhrfB3YDHItcwTqNwXCtspyD4GJjPdnpWKC96L1gzWb72OXJMcSunR1YBzVRAd6z
 2UjsNIWJXnLRTg4PZCHcFYCEgGgkj6Wrcs8NdnOkr5HonZA0CJUOFBGl9PSMJJslnUUwA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jekJY-0004tR-3G; Fri, 29 May 2020 19:05:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jekJX-0003HV-Pe; Fri, 29 May 2020 19:05:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jekJX-0003XE-P3; Fri, 29 May 2020 19:05:07 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150498-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150498: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=8e2aa76dc1670e82eaa15683353853bc66bf54fc
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 19:05:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150498 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150498/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8e2aa76dc1670e82eaa15683353853bc66bf54fc
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days    7 attempts
Testing same since   150498  2020-05-29 18:01:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 773 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 29 19:21:36 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 19:21:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jekZ8-0007hn-Kz; Fri, 29 May 2020 19:21:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jekZ7-0007gB-7F
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 19:21:13 +0000
X-Inumbo-ID: 8e1fed7c-a1e1-11ea-9dbe-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e1fed7c-a1e1-11ea-9dbe-bc764e2007e4;
 Fri, 29 May 2020 19:21:11 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:41684
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jekZ2-000OgG-MS (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 20:21:09 +0100
Subject: Re: [PATCH v2 05/14] x86/shstk: Re-layout the stack block for shadow
 stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-6-andrew.cooper3@citrix.com>
 <03cc30f8-4849-f77d-857d-b63248c70a25@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <bfa93460-486b-3977-475e-8c92114baa75@citrix.com>
Date: Fri, 29 May 2020 20:21:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <03cc30f8-4849-f77d-857d-b63248c70a25@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/05/2020 13:33, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> --- a/xen/arch/x86/traps.c
>> +++ b/xen/arch/x86/traps.c
>> @@ -365,20 +365,15 @@ static void show_guest_stack(struct vcpu *v, const struct cpu_user_regs *regs)
>>  /*
>>   * Notes for get_stack_trace_bottom() and get_stack_dump_bottom()
>>   *
>> - * Stack pages 0 - 3:
>> + * Stack pages 1 - 4:
>>   *   These are all 1-page IST stacks.  Each of these stacks have an exception
>>   *   frame and saved register state at the top.  The interesting bound for a
>>   *   trace is the word adjacent to this, while the bound for a dump is the
>>   *   very top, including the exception frame.
>>   *
>> - * Stack pages 4 and 5:
>> - *   None of these are particularly interesting.  With MEMORY_GUARD, page 5 is
>> - *   explicitly not present, so attempting to dump or trace it is
>> - *   counterproductive.  Without MEMORY_GUARD, it is possible for a call chain
>> - *   to use the entire primary stack and wander into page 5.  In this case,
>> - *   consider these pages an extension of the primary stack to aid debugging
>> - *   hopefully rare situations where the primary stack has effective been
>> - *   overflown.
>> + * Stack pages 0 and 5:
>> + *   Shadow stacks.  These are mapped read-only, and used by CET-SS capable
>> + *   processors.  They will never contain regular stack data.
> I don't mind the comment getting put in place already here, but will it
> reflect reality even when CET-SS is not in use, in that the pages then
> still are mapped r/o rather than being left unmapped to act as guard
> pages not only for stack pushes but also for stack pops?

I can't parse this question.

However, I think it is answered by the following patch which does move
things to unilaterally being r/o even in the non-CET-SS case.

> At which point
> the "dump or trace it is counterproductive" remark would still apply in
> this case, and hence may better be retained.

Well - I'm thinking forward to a cleanup where we'd want to integrate
the shadow stack into stack trace reporting, at which point we would
consider these frames interesting to dump/trace.

>
>> @@ -392,13 +387,10 @@ unsigned long get_stack_trace_bottom(unsigned long sp)
>>  {
>>      switch ( get_stack_page(sp) )
>>      {
>> -    case 0 ... 3:
>> +    case 1 ... 4:
>>          return ROUNDUP(sp, PAGE_SIZE) -
>>              offsetof(struct cpu_user_regs, es) - sizeof(unsigned long);
>>  
>> -#ifndef MEMORY_GUARD
>> -    case 4 ... 5:
>> -#endif
>>      case 6 ... 7:
>>          return ROUNDUP(sp, STACK_SIZE) -
>>              sizeof(struct cpu_info) - sizeof(unsigned long);
>> @@ -412,12 +404,9 @@ unsigned long get_stack_dump_bottom(unsigned long sp)
>>  {
>>      switch ( get_stack_page(sp) )
>>      {
>> -    case 0 ... 3:
>> +    case 1 ... 4:
>>          return ROUNDUP(sp, PAGE_SIZE) - sizeof(unsigned long);
>>  
>> -#ifndef MEMORY_GUARD
>> -    case 4 ... 5:
>> -#endif
>>      case 6 ... 7:
>>          return ROUNDUP(sp, STACK_SIZE) - sizeof(unsigned long);
> The need to adjust these literal numbers demonstrates how fragile
> this is. I admit I can't see a good way to get rid of the literal
> numbers altogether,

Frankly, this is why there is a massive comment, and I really didn't
want to introduce PRIMARY_SHSTK_SLOT to begin with, because the whole
thing is fragile and there is no obvious naming/labelling scheme which
is liable to survive tweaking.

>  but could I talk you into switching to (for
> the latter, as example)
>
>     switch ( get_stack_page(sp) )
>     {
>     case 0: case PRIMARY_SHSTK_SLOT:
>         return 0;
>
>     case 1 ... 4:
>         return ROUNDUP(sp, PAGE_SIZE) - sizeof(unsigned long);
>
>     case 6 ... 7:
>         return ROUNDUP(sp, STACK_SIZE) - sizeof(unsigned long);
>
>     default:
>         return sp - sizeof(unsigned long);
>     }
>
> ? Of course this will need the callers to be aware they may get
> back zero, but there are only very few (which made me notice the
> functions would better be static).

It was definitely needed externally at some point in the past.

>  And the returning of zero may
> then want changing (conditionally upon us using CET-SS) in a
> later patch, where iirc you use the shadow stack for call trace
> generation.
>
> As a positive side effect this will yield a compile error if
> PRIMARY_SHSTK_SLOT gets changed without adjusting these
> functions.

Overall, to your question: potentially as a future clean-up to how we
express stacks, but not right now for 4.14.
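[Editorial aside: Jan's proposed switch can be exercised in isolation. The following is a userspace mock of that shape, assuming the 8-page, 4-KiB-page stack layout described in the thread; the constants and helper are stand-ins, not the in-tree implementation.]

```c
#include <assert.h>
#include <stddef.h>

#define PAGE_SIZE          4096UL
#define STACK_SIZE         (8 * PAGE_SIZE)
#define PRIMARY_SHSTK_SLOT 5
#define ROUNDUP(x, a)      (((x) + (a) - 1) & ~((a) - 1))

/* Which of the 8 pages within a per-CPU stack block sp points into. */
static unsigned long get_stack_page(unsigned long sp)
{
    return (sp & (STACK_SIZE - 1)) >> 12;
}

/*
 * Mock of the proposed get_stack_dump_bottom(): shadow stack slots
 * (0 and PRIMARY_SHSTK_SLOT) return 0 so callers can skip them, and
 * changing PRIMARY_SHSTK_SLOT without adjusting this switch would
 * fail to compile if it collided with another case label.
 */
static unsigned long dump_bottom(unsigned long sp)
{
    switch ( get_stack_page(sp) )
    {
    case 0: case PRIMARY_SHSTK_SLOT:
        return 0;

    case 1 ... 4:   /* 1-page IST stacks */
        return ROUNDUP(sp, PAGE_SIZE) - sizeof(unsigned long);

    case 6 ... 7:   /* primary stack */
        return ROUNDUP(sp, STACK_SIZE) - sizeof(unsigned long);

    default:
        return sp - sizeof(unsigned long);
    }
}
```

(The `case 1 ... 4:` range syntax is a GCC extension, which Xen's code base already relies on.)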

>
>> --- a/xen/include/asm-x86/config.h
>> +++ b/xen/include/asm-x86/config.h
>> @@ -75,6 +75,9 @@
>>  /* Primary stack is restricted to 8kB by guard pages. */
>>  #define PRIMARY_STACK_SIZE 8192
>>  
>> +/* Primary shadow stack is slot 5 of 8, immediately under the primary stack. */
>> +#define PRIMARY_SHSTK_SLOT 5
> Any reason to put it here rather than ...
>
>> --- a/xen/include/asm-x86/current.h
>> +++ b/xen/include/asm-x86/current.h
>> @@ -16,12 +16,12 @@
>>   *
>>   * 7 - Primary stack (with a struct cpu_info at the top)
>>   * 6 - Primary stack
>> - * 5 - Optionally not present (MEMORY_GUARD)
>> - * 4 - Unused; optionally not present (MEMORY_GUARD)
>> - * 3 - Unused; optionally not present (MEMORY_GUARD)
>> - * 2 - MCE IST stack
>> - * 1 - NMI IST stack
>> - * 0 - Double Fault IST stack
>> + * 5 - Primary Shadow Stack (read-only)
>> + * 4 - #DF IST stack
>> + * 3 - #DB IST stack
>> + * 2 - NMI IST stack
>> + * 1 - #MC IST stack
>> + * 0 - IST Shadow Stacks (4x 1k, read-only)
>>   */
> ... right below this comment?

Yes - grouping the related constants.

> Same question as above regarding the "read-only" here.

I'll adjust the commit message to make it clearer that some of the text
here is made true in the next patch.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 19:35:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 19:35:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jekmo-0000Uj-B8; Fri, 29 May 2020 19:35:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jekmn-0000UZ-4F
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 19:35:21 +0000
X-Inumbo-ID: 87dcf336-a1e3-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87dcf336-a1e3-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 19:35:20 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:42118
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jekmj-000WiI-Lp (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 20:35:17 +0100
Subject: Re: [PATCH v2 06/14] x86/shstk: Create shadow stacks
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-7-andrew.cooper3@citrix.com>
 <8a02b933-3b7e-ded9-8bf3-a1c35f2ef7ae@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fe8f077d-2048-38af-5deb-0d9dda48cf36@citrix.com>
Date: Fri, 29 May 2020 20:35:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <8a02b933-3b7e-ded9-8bf3-a1c35f2ef7ae@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/05/2020 13:50, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> --- a/xen/arch/x86/cpu/common.c
>> +++ b/xen/arch/x86/cpu/common.c
>> @@ -769,6 +769,30 @@ void load_system_tables(void)
>>  	tss->rsp1 = 0x8600111111111111ul;
>>  	tss->rsp2 = 0x8600111111111111ul;
>>  
>> +	/* Set up the shadow stack IST. */
>> +	if (cpu_has_xen_shstk) {
>> +		volatile uint64_t *ist_ssp = this_cpu(tss_page).ist_ssp;
>> +
>> +		/*
>> +		 * Used entries must point at the supervisor stack token.
>> +		 * Unused entries are poisoned.
>> +		 *
>> +		 * This IST Table may be live, and the NMI/#MC entries must
>> +		 * remain valid on every instruction boundary, hence the
>> +		 * volatile qualifier.
>> +		 */
> Move this comment ahead of what it comments on, as we usually have it?
>
>> +		ist_ssp[0] = 0x8600111111111111ul;
>> +		ist_ssp[IST_MCE] = stack_top + (IST_MCE * IST_SHSTK_SIZE) - 8;
>> +		ist_ssp[IST_NMI] = stack_top + (IST_NMI * IST_SHSTK_SIZE) - 8;
>> +		ist_ssp[IST_DB]	 = stack_top + (IST_DB	* IST_SHSTK_SIZE) - 8;
>> +		ist_ssp[IST_DF]	 = stack_top + (IST_DF	* IST_SHSTK_SIZE) - 8;
> Strictly speaking you want to introduce
>
> #define IST_SHSTK_SLOT 0
>
> next to PRIMARY_SHSTK_SLOT and use
>
> 		ist_ssp[IST_MCE] = stack_top + (IST_SHSTK_SLOT * PAGE_SIZE) +
>                                                (IST_MCE * IST_SHSTK_SIZE) - 8;
>
> etc here. It's getting longish, so I'm not going to insist. But if you
> go this route, then please also below / elsewhere.

Actually no.  I've got a much better idea, based on how Linux does the
same, but it's definitely 4.15 material at this point.

>
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -5994,12 +5994,33 @@ void memguard_unguard_range(void *p, unsigned long l)
>>  
>>  #endif
>>  
>> +static void write_sss_token(unsigned long *ptr)
>> +{
>> +    /*
>> +     * A supervisor shadow stack token is its own linear address, with the
>> +     * busy bit (0) clear.
>> +     */
>> +    *ptr = (unsigned long)ptr;
>> +}
>> +
>>  void memguard_guard_stack(void *p)
>>  {
>> -    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>> +    /* IST Shadow stacks.  4x 1k in stack page 0. */
>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>> +    {
>> +        write_sss_token(p + (IST_MCE * IST_SHSTK_SIZE) - 8);
>> +        write_sss_token(p + (IST_NMI * IST_SHSTK_SIZE) - 8);
>> +        write_sss_token(p + (IST_DB  * IST_SHSTK_SIZE) - 8);
>> +        write_sss_token(p + (IST_DF  * IST_SHSTK_SIZE) - 8);
> Up to now two successive memguard_guard_stack() were working fine. This
> will be no longer the case, just as an observation.

I don't think that matters.

>
>> +    }
>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
> As already hinted at in reply to the previous patch, I think this wants
> to remain _PAGE_NONE when we don't use CET-SS.

The commit message discussed why that is not an option (currently), and
why I don't consider it a good idea to make that possible.

>> +    /* Primary Shadow Stack.  1x 4k in stack page 5. */
>>      p += PRIMARY_SHSTK_SLOT * PAGE_SIZE;
>> -    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, _PAGE_NONE);
>> +    if ( IS_ENABLED(CONFIG_XEN_SHSTK) )
>> +        write_sss_token(p + PAGE_SIZE - 8);
>> +
>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
>>  }
>>  
>>  void memguard_unguard_stack(void *p)
> Would this function perhaps better zap the tokens?

Why?  We don't zap any other stack contents, and let the regular page
scrubbing clean it.
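[Editorial aside: the token format the patch's comment describes - "its own linear address, with the busy bit (0) clear" - can be illustrated in isolation. `write_sss_token()` mirrors the patch; `token_valid()` is a hypothetical helper added here for demonstration, not part of the series.]

```c
#include <assert.h>

/*
 * A supervisor shadow stack token is the 8-byte value at the token
 * location: the linear address of that location itself, with bit 0
 * (the busy bit) clear.  Matches the write_sss_token() in the patch.
 */
static void write_sss_token(unsigned long *ptr)
{
    *ptr = (unsigned long)ptr;   /* naturally aligned, so bit 0 is clear */
}

/* Illustrative check: stored address matches the slot, busy bit clear. */
static int token_valid(const unsigned long *ptr)
{
    return (*ptr & ~1UL) == (unsigned long)ptr && !(*ptr & 1UL);
}
```

In real operation the CPU, not software, sets and clears the busy bit as part of supervisor shadow stack switches.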

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 19:44:09 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 19:44:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jekur-0001Xi-8L; Fri, 29 May 2020 19:43:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jekup-0001Xd-K2
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 19:43:39 +0000
X-Inumbo-ID: b12d405a-a1e4-11ea-9947-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b12d405a-a1e4-11ea-9947-bc764e2007e4;
 Fri, 29 May 2020 19:43:39 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:42398
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jekum-000aRg-Jw (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 20:43:36 +0100
Subject: Re: [PATCH v2 10/14] x86/extable: Adjust extable handling to be
 shadow stack compatible
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-11-andrew.cooper3@citrix.com>
 <b36b5868-0b7c-2b45-a994-0ff5ea170433@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0c7f425a-996f-8840-f1e2-79381edb6456@citrix.com>
Date: Fri, 29 May 2020 20:43:35 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <b36b5868-0b7c-2b45-a994-0ff5ea170433@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 28/05/2020 17:15, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> @@ -400,6 +400,18 @@ unsigned long get_stack_trace_bottom(unsigned long sp)
>>      }
>>  }
>>  
>> +static unsigned long get_shstk_bottom(unsigned long sp)
>> +{
>> +    switch ( get_stack_page(sp) )
>> +    {
>> +#ifdef CONFIG_XEN_SHSTK
>> +    case 0:  return ROUNDUP(sp, IST_SHSTK_SIZE) - sizeof(unsigned long);
>> +    case 5:  return ROUNDUP(sp, PAGE_SIZE)      - sizeof(unsigned long);
> PRIMARY_SHSTK_SLOT please and, if introduced as suggested for the
> earlier patch, IST_SHSTK_SLOT in the earlier line.
>
>> @@ -763,6 +775,56 @@ static void do_reserved_trap(struct cpu_user_regs *regs)
>>            trapnr, vec_name(trapnr), regs->error_code);
>>  }
>>  
>> +static void extable_shstk_fixup(struct cpu_user_regs *regs, unsigned long fixup)
>> +{
>> +    unsigned long ssp, *ptr, *base;
>> +
>> +    asm ( "rdsspq %0" : "=r" (ssp) : "0" (1) );
>> +    if ( ssp == 1 )
>> +        return;
>> +
>> +    ptr = _p(ssp);
>> +    base = _p(get_shstk_bottom(ssp));
>> +
>> +    for ( ; ptr < base; ++ptr )
>> +    {
>> +        /*
>> +         * Search for %rip.  The shstk currently looks like this:
>> +         *
>> +         *   ...  [Likely pointed to by SSP]
>> +         *   %cs  [== regs->cs]
>> +         *   %rip [== regs->rip]
>> +         *   SSP  [Likely points to 3 slots higher, above %cs]
>> +         *   ...  [call tree to this function, likely 2/3 slots]
>> +         *
>> +         * and we want to overwrite %rip with fixup.  There are two
>> +         * complications:
>> +         *   1) We can't depend on SSP values, because they won't differ by 3
>> +         *      slots if the exception is taken on an IST stack.
>> +         *   2) There are synthetic (unrealistic but not impossible) scenarios
>> +         *      where %rip can end up in the call tree to this function, so we
>> +         *      can't check against regs->rip alone.
>> +         *
>> +         * Check for both reg->rip and regs->cs matching.
> Nit: regs->rip
>
>> +         */
>> +
>> +        if ( ptr[0] == regs->rip && ptr[1] == regs->cs )
>> +        {
>> +            asm ( "wrssq %[fix], %[stk]"
>> +                  : [stk] "=m" (*ptr)
> Could this be ptr[0], to match the if()?
>
> Considering how important it is that we don't fix up the wrong stack
> location here, I continue to wonder if we wouldn't better also
> include the SSP value on the stack in the checking, at the very
> least by way of an ASSERT() or BUG_ON().

Well no, for the reason discussed in point 1.

It's not technically an issue right now, but there is no possible way to
BUILD_BUG_ON() someone turning an exception into IST, or stopping the
use of the extable infrastructure on a #DB.

Such a check would lie in wait and either provide an unexpected tangent
to someone debugging a complicated issue (I do use #DB for a fair bit),
or become a security vulnerability.

> Since, with how the code is
> currently written, this would require a somewhat odd looking ptr[-1]
> I also wonder whether "while ( ++ptr < base )" as the loop header
> wouldn't be better. The first entry on the stack can't be the RIP
> we look for anyway, can it?

Yes it can.
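[Editorial aside: the search being debated - scanning from the SSP toward the stack bottom for the adjacent %rip/%cs pair, where even the very first slot may match - can be modeled in plain C. All values below are hypothetical; the real code must use `wrssq` for the write, since shadow stack pages reject normal stores.]

```c
#include <assert.h>

/*
 * Userspace model of the extable shadow stack fixup loop: walk slots
 * from the current SSP up toward the stack bottom looking for the
 * %rip/%cs pair, and rewrite %rip with the fixup address.  A plain
 * store stands in for wrssq.  The "ptr + 1 < base" bound keeps
 * ptr[1] in range for this model's fixed-size array.
 */
static int shstk_fixup(unsigned long *ssp, unsigned long *base,
                       unsigned long rip, unsigned long cs,
                       unsigned long fixup)
{
    for ( unsigned long *ptr = ssp; ptr + 1 < base; ++ptr )
    {
        /* Match both slots: %rip alone can also appear in the call tree. */
        if ( ptr[0] == rip && ptr[1] == cs )
        {
            ptr[0] = fixup;   /* the real code uses wrssq here */
            return 1;
        }
    }

    return 0;
}
```

Note the loop deliberately tests the first slot too, matching Andrew's point that the RIP can legitimately sit right at the SSP.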

>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -708,7 +708,16 @@ exception_with_ints_disabled:
>>          call  search_pre_exception_table
>>          testq %rax,%rax                 # no fixup code for faulting EIP?
>>          jz    1b
>> -        movq  %rax,UREGS_rip(%rsp)
>> +        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
>> +
>> +#ifdef CONFIG_XEN_SHSTK
>> +        mov    $1, %edi
>> +        rdsspq %rdi
>> +        cmp    $1, %edi
>> +        je     .L_exn_shstk_done
>> +        wrssq  %rax, (%rdi)             # fixup shadow stack
> According to the comment in extable_shstk_fixup(), isn't the value
> to be replaced at 8(%rdi)?

Hmm - I think you're right.  I thought I had this covered by unit tests.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 19:47:11 2020
Subject: Re: [PATCH v2 11/14] x86/alt: Adjust _alternative_instructions() to
 not create shadow stacks
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Date: Fri, 29 May 2020 20:46:26 +0100

On 29/05/2020 13:23, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> @@ -398,6 +399,19 @@ static void __init _alternative_instructions(bool force)
>>          panic("Timed out waiting for alternatives self-NMI to hit\n");
>>  
>>      set_nmi_callback(saved_nmi_callback);
>> +
>> +    /*
>> +     * When Xen is using shadow stacks, the alternatives clearing CR0.WP and
>> +     * writing into the mappings set dirty bits, turning the mappings into
>> +     * shadow stack mappings.
>> +     *
>> +     * While we can execute from them, this would also permit them to be the
>> +     * target of WRSS instructions, so reset the dirty after patching.
>> +     */
>> +    if ( cpu_has_xen_shstk )
>> +        modify_xen_mappings(XEN_VIRT_START + MB(2),
>> +                            (unsigned long)&__2M_text_end,
>> +                            PAGE_HYPERVISOR_RX);
> Am I misremembering, or did you post a patch before that should
> be part of this series, as being a prereq to this change,
> making modify_xen_mappings() also respect _PAGE_DIRTY as
> requested by the caller?

No.  It's the hunk you deleted from lower down in this patch.

> Additionally I notice this
>
>         if ( desc->Attribute & (efi_bs_revision < EFI_REVISION(2, 5)
>                                 ? EFI_MEMORY_WP : EFI_MEMORY_RO) )
>             prot &= ~_PAGE_RW;
>
> in efi_init_memory(). Afaict we will need to clear _PAGE_DIRTY
> there as well, with prot starting out as PAGE_HYPERVISOR_RWX.

Ok.  I'll pull that out into a separate patch.
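For the record, the adjustment amounts to stripping the Dirty bit alongside RW
for read-only EFI regions, since read-only + Dirty is exactly what the CPU
treats as a shadow stack mapping.  A minimal C sketch of the intended logic
(the constants here are illustrative stand-ins, not Xen's real values):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative stand-ins for Xen's page flags and EFI memory attributes. */
#define _PAGE_RW         (1u << 1)
#define _PAGE_DIRTY      (1u << 6)
#define EFI_MEMORY_WP    (1u << 12)
#define EFI_MEMORY_RO    (1u << 17)

/*
 * Sketch: pick the read-only attribute bit by EFI revision (2.5 repurposed
 * WP and introduced RO), then strip both RW and Dirty, so a read-only EFI
 * region can never end up as a read-only+Dirty (i.e. shadow stack) mapping.
 */
static uint32_t efi_prot(uint32_t attr, int efi_rev_ge_2_5, uint32_t prot)
{
    uint32_t ro_bit = efi_rev_ge_2_5 ? EFI_MEMORY_RO : EFI_MEMORY_WP;

    if ( attr & ro_bit )
        prot &= ~(_PAGE_RW | _PAGE_DIRTY);

    return prot;
}
```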

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 19:58:22 2020
Subject: Re: [PATCH v2 12/14] x86/entry: Adjust guest paths to be shadow stack
 compatible
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Date: Fri, 29 May 2020 20:58:09 +0100

On 29/05/2020 13:40, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> The SYSCALL/SYSENTER/SYSRET paths need to use {SET,CLR}SSBSY.  The IRET to
>> guest paths must not.  In the SYSRET path, re-position the mov which loads rip
>> into %rcx so we can use %rcx for CLRSSBSY, rather than spilling another
>> register to the stack.
>>
>> While we can in principle detect shadow stack corruption and a failure to
>> clear the supervisor token busy bit in the SYSRET path (by inspecting the
>> carry flag following CLRSSBSY), we cannot detect similar problems for the IRET
>> path (IRET is specified not to fault in this case).
>>
>> We will double fault at some point later, when next trying to enter Xen, due
>> to an already-set supervisor shadow stack busy bit.  As SYSRET is an uncommon
>> path anyway, avoid the added complexity for no appreciable gain.
> I'm okay with the avoidance of complexity, but why is the SYSRET path
> uncommon? Almost all hypercall returns ought to take that path?

But hypercall returns aren't the majority of returns to guest context.

IRET from Xen IPIs, or from event channel injections hitting guest
userspace, are the most common in a non-idle system.

>
>> --- a/xen/arch/x86/x86_64/entry.S
>> +++ b/xen/arch/x86/x86_64/entry.S
>> @@ -191,9 +191,16 @@ restore_all_guest:
>>          sarq  $47,%rcx
>>          incl  %ecx
>>          cmpl  $1,%ecx
>> -        movq  8(%rsp),%rcx            # RIP
>>          ja    iret_exit_to_guest
> This removal from the shared part of the exit path needs to be
> reflected on both of the sides of the JA, i.e. ...
>
>>  
>> +        /* Clear the supervisor shadow stack token busy bit. */
>> +.macro rag_clrssbsy
>> +        rdsspq %rcx
>> +        clrssbsy (%rcx)
>> +.endm
>> +        ALTERNATIVE "", rag_clrssbsy, X86_FEATURE_XEN_SHSTK
>> +
>> +        movq  8(%rsp), %rcx           # RIP
> ... not just here, but also like this (with the JA above changed
> to target the new label):
>
>          ALIGN
>  /* No special register assumptions. */
> +.Liret_exit_to_guest:
> +        movq  8(%rsp),%rcx            # RIP
>  iret_exit_to_guest:
>          andl  $~(X86_EFLAGS_IOPL|X86_EFLAGS_NT|X86_EFLAGS_VM),24(%rsp)
>          orl   $X86_EFLAGS_IF,24(%rsp)
>
> Granted it's mostly cosmetic, as the IRETQ ought to fault, but
> it's still a use of IRET in place of SYSRET, and hence we better
> get guest register state right. With this or a functionally
> identical adjustment (or a clarification on what makes you
> convinced this adjustment isn't needed)
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Ah yes.  I really ought to retroactively create an XSA-7 PoC for this.

Will fix.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 20:00:59 2020
Subject: Re: [PATCH v2 13/14] x86/S3: Save and restore Shadow Stack
 configuration
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Date: Fri, 29 May 2020 21:00:42 +0100

On 29/05/2020 13:52, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> See code for details
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Wei Liu <wl@xen.org>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>
>> Semi-RFC - I can't actually test this path.  Currently attempting to arrange
>> for someone else to.
> Nevertheless
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one question, just for my understanding:
>
>> @@ -48,6 +58,51 @@ ENTRY(s3_resume)
>>          pushq   %rax
>>          lretq
>>  1:
>> +#ifdef CONFIG_XEN_SHSTK
>> +        /*
>> +         * Restoring SSP is a little complicated, because we are intercepting
>> +         * an in-use shadow stack.  Write a temporary token under the stack,
>> +         * so SETSSBSY will successfully load a value useful for us, then
>> +         * reset MSR_PL0_SSP to its usual value and pop the temporary token.
>> +         */
>> +        mov     saved_rsp(%rip), %rdi
>> +        cmpq    $1, %rdi
>> +        je      .L_shstk_done
>> +
>> +        /* Set up MSR_S_CET. */
>> +        mov     $MSR_S_CET, %ecx
>> +        xor     %edx, %edx
>> +        mov     $CET_SHSTK_EN | CET_WRSS_EN, %eax
>> +        wrmsr
>> +
>> +        /* Construct the temporary supervisor token under SSP. */
>> +        sub     $8, %rdi
>> +
>> +        /* Load it into MSR_PL0_SSP. */
>> +        mov     $MSR_PL0_SSP, %ecx
>> +        mov     %rdi, %rdx
>> +        shr     $32, %rdx
>> +        mov     %edi, %eax
>> +        wrmsr
>> +
>> +        /* Enable CET.  MSR_INTERRUPT_SSP_TABLE is set up later in load_system_tables(). */
>> +        mov     $XEN_MINIMAL_CR4 | X86_CR4_CET, %ebx
>> +        mov     %rbx, %cr4
> Does this imply NMI or #MC are fatal between here and there?

Yes, but that is always the case during CPU bringup.

Only a few instructions ago, we didn't have an IDT, and we don't yet have
an established %tr, so we can't get the regular IST pointer either.
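As background for the temporary-token dance in the quoted comment: a
supervisor shadow stack token is a 64-bit word whose address field must equal
the linear address of the slot holding it, and SETSSBSY refuses to load it
otherwise.  A rough C model of that check (my reading of the SDM semantics,
not Xen code; names are illustrative):

```c
#include <assert.h>
#include <stdint.h>

#define SSTK_TOKEN_BUSY 1ull /* bit 0: shadow stack is in use */

/*
 * Model of SETSSBSY's token check: the free token at the address held in
 * MSR_PL0_SSP must contain that same address, with the busy bit clear.
 * On success the busy bit is set atomically and SSP points at the token.
 */
static int setssbsy(uint64_t *slot)
{
    if ( *slot != (uint64_t)(uintptr_t)slot )
        return 0; /* not a valid free supervisor token at this address */
    *slot |= SSTK_TOKEN_BUSY;
    return 1;
}
```

The S3 path exploits this by writing a temporary token just below the live
SSP, pointing MSR_PL0_SSP at it, and discarding it again once SETSSBSY has
succeeded and the usual MSR_PL0_SSP value is restored.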

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 20:28:38 2020
Subject: Re: [PATCH v2 14/14] x86/shstk: Activate Supervisor Shadow Stacks
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Date: Fri, 29 May 2020 21:28:11 +0100

On 29/05/2020 14:09, Jan Beulich wrote:
> On 27.05.2020 21:18, Andrew Cooper wrote:
>> With all other plumbing in place, activate shadow stacks when possible.  Note
>> that this requires prohibiting the use of PV32.  Compatibility can be
>> maintained if necessary via PV-Shim.
> In the revision log you say "Discuss CET-SS disabling PV32", and I
> agree both here and in the command line doc you mention the "that"
> aspect. But what about the "why"? Aiui "is incompatible" or
> "requires" are too strong statements: It could be made work (by
> disabling / enabling CET on the way out of / back into Xen), but
> besides losing some of the intended protection that way, it would
> be quite a bit of overhead. So it's more like a design decision,
> and it would be nice to express it like this at least in the
> commit message.

For starters, the guest kernel and Xen share the single
MSR_S_CET.SHSTK_EN bit, as they are both supervisor in the eyes of the
processor.

We can't use the PV32_CR4 trick to turn CET.SS off on return to guest
kernel context, because (unlike SMAP/SMEP), the race condition with a
late NMI would manifest as #CP against the IRET, not a spurious page fault.

Furthermore, an IRET to Ring 3 and an IRET to Ring 1 now differ by three
words on the shadow stack.  An IRET to Ring 1 is a supervisor return, so
performs consistency checks on %cs/%lip/SSP on the shadow stack.  We
could in theory shuffle the shadow stack by 3 words on context switch.

It might theoretically be possible to make something which functioned
correctly with a PV guest kernel which doesn't understand a
paravirtualised version of supervisor shadow stacks, but I quickly
concluded that it isn't even worth the effort to find out for certain.

>
>> --- a/xen/arch/x86/setup.c
>> +++ b/xen/arch/x86/setup.c
>> @@ -664,6 +664,13 @@ static void __init noreturn reinit_bsp_stack(void)
>>      stack_base[0] = stack;
>>      memguard_guard_stack(stack);
>>  
>> +    if ( cpu_has_xen_shstk )
>> +    {
>> +        wrmsrl(MSR_PL0_SSP, (unsigned long)stack + 0x5ff8);
> Please replace this remaining literal number accordingly.
>
>> @@ -1801,6 +1823,10 @@ void __init noreturn __start_xen(unsigned long mbi_p)
>>  
>>      alternative_branches();
>>  
>> +    /* Defer CR4.CET until alternatives have finished playing with CR4.WP */
>> +    if ( cpu_has_xen_shstk )
>> +        set_in_cr4(X86_CR4_CET);
> Nit: The comment still wants changing to CR0.WP.
>
> With these taken care of in some form
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

Thanks.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 20:55:35 2020
Subject: [xen-unstable-smoke test] 150502: regressions - trouble: blocked/fail
From: osstest service owner <osstest-admin@xenproject.org>
To: xen-devel@lists.xenproject.org, osstest-admin@xenproject.org
Date: Fri, 29 May 2020 20:55:02 +0000

flight 150502 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150502/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8e2aa76dc1670e82eaa15683353853bc66bf54fc
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days    8 attempts
Testing same since   150498  2020-05-29 18:01:30 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 773 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 29 21:18:04 2020
Subject: Re: [PATCH v2 10/14] x86/extable: Adjust extable handling to be
 shadow stack compatible
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
Date: Fri, 29 May 2020 22:17:40 +0100

On 29/05/2020 20:43, Andrew Cooper wrote:
> On 28/05/2020 17:15, Jan Beulich wrote:
>> On 27.05.2020 21:18, Andrew Cooper wrote:
>>> +
>>> +        if ( ptr[0] == regs->rip && ptr[1] == regs->cs )
>>> +        {
>>> +            asm ( "wrssq %[fix], %[stk]"
>>> +                  : [stk] "=m" (*ptr)
>> Could this be ptr[0], to match the if()?
>>
>> Considering how important it is that we don't fix up the wrong stack
>> location here, I continue to wonder if we wouldn't better also
>> include the SSP value on the stack in the checking, at the very
>> least by way of an ASSERT() or BUG_ON().
> Well no, for the reason discussed in point 1.
>
> It's not technically an issue right now, but there is no possible way to
> BUILD_BUG_ON() someone turning an exception into IST, or stopping the
> use of the extable infrastructure on a #DB.
>
> Such a check would lie in wait and either provide an unexpected tangent
> to someone debugging a complicated issue (I do use #DB for a fair bit),
> or become a security vulnerability.

Also (which I forgot first time around),

The ptr[1] == regs->cs check is 64 bits wide, and the way to get that on
the shadow stack would be to execute a call instruction ending at
0xe008 linear, or from a bad WRSSQ edit, both of which are serious
errors and worthy of hitting the BUG().
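To make the layout concrete: on a same-privilege exception with shadow stacks
active, the CPU pushes CS, then LIP, then SSP onto the shadow stack, so the
ptr[0]/ptr[1] checks above line up with the saved LIP and CS.  A rough C model
of the frame (offsets per my reading of the SDM; field names are illustrative,
not Xen code):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Model of the supervisor shadow stack frame after a same-privilege
 * exception.  The new SSP (what rdsspq reads) points at the saved-SSP
 * slot; the saved LIP -- the word the extable fixup must rewrite -- is
 * 8 bytes above it, hence the 8(%rdi) in the assembly.
 */
struct shstk_exn_frame {
    uint64_t saved_ssp; /* [SSP]    shadow stack pointer before the event */
    uint64_t lip;       /* [SSP+8]  faulting linear instruction pointer   */
    uint64_t cs;        /* [SSP+16] code selector, zero-extended to 64bit */
};

/* The slot the fixup writes with wrssq: one word above the current SSP. */
static uint64_t *fixup_slot(uint64_t *ssp)
{
    return ssp + 1;
}
```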

>>> --- a/xen/arch/x86/x86_64/entry.S
>>> +++ b/xen/arch/x86/x86_64/entry.S
>>> @@ -708,7 +708,16 @@ exception_with_ints_disabled:
>>>          call  search_pre_exception_table
>>>          testq %rax,%rax                 # no fixup code for faulting EIP?
>>>          jz    1b
>>> -        movq  %rax,UREGS_rip(%rsp)
>>> +        movq  %rax,UREGS_rip(%rsp)      # fixup regular stack
>>> +
>>> +#ifdef CONFIG_XEN_SHSTK
>>> +        mov    $1, %edi
>>> +        rdsspq %rdi
>>> +        cmp    $1, %edi
>>> +        je     .L_exn_shstk_done
>>> +        wrssq  %rax, (%rdi)             # fixup shadow stack
>> According to the comment in extable_shstk_fixup(), isn't the value
>> to be replaced at 8(%rdi)?
> Hmm - I think you're right.  I thought I had this covered by unit tests.

The unit test has been fixed, and so has this code.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 21:32:30 2020
Subject: Re: [PATCH] xen/build: fix xen/tools/binfile
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Date: Fri, 29 May 2020 22:32:17 +0100
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 19:28, Juergen Gross wrote:
> xen/tools/binfile contains a bash specific command (let). This leads
> to build failures on systems not using bash as /bin/sh.
>
> Replace "let SHIFT=$OPTIND-1" by "SHIFT=$((OPTIND-1))".
>
> Signed-off-by: Juergen Gross <jgross@suse.com>
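For reference, the two forms compute the same value, but only arithmetic expansion is POSIX; a standalone sketch (not the actual binfile script) of the pattern being fixed:

```shell
#!/bin/sh
# binfile's bash-only "let SHIFT=$OPTIND-1" vs the portable form.
# Parse example options so OPTIND advances past them.
set -- -v -o outfile positional
while getopts "vo:" opt; do
    : "$opt"               # option handling elided for the demo
done
SHIFT=$((OPTIND-1))        # POSIX arithmetic expansion; works in dash, ash, etc.
shift "$SHIFT"
echo "remaining: $1"       # the first non-option argument
```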

Acked/Tested-by added, and pushed.  Thanks for the quick turnaround.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 21:46:14 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 21:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jempB-0005Fl-Ml; Fri, 29 May 2020 21:45:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jempA-0005Fg-56
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 21:45:56 +0000
X-Inumbo-ID: c4c40a2a-a1f5-11ea-a91f-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c4c40a2a-a1f5-11ea-a91f-12813bfff9fa;
 Fri, 29 May 2020 21:45:53 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:46062
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jemp4-000ak6-Jt (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 22:45:50 +0100
Subject: Re: [PATCH v2 06/14] x86/shstk: Create shadow stacks
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
 <20200527191847.17207-7-andrew.cooper3@citrix.com>
 <8a02b933-3b7e-ded9-8bf3-a1c35f2ef7ae@suse.com>
 <fe8f077d-2048-38af-5deb-0d9dda48cf36@citrix.com>
Message-ID: <fb55d660-4a81-101b-85a4-7ece3c98b8ef@citrix.com>
Date: Fri, 29 May 2020 22:45:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <fe8f077d-2048-38af-5deb-0d9dda48cf36@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 29/05/2020 20:35, Andrew Cooper wrote:
>>> +    }
>>> +    map_pages_to_xen((unsigned long)p, virt_to_mfn(p), 1, PAGE_HYPERVISOR_SHSTK);
>> As already hinted at in reply to the previous patch, I think this wants
>> to remain _PAGE_NONE when we don't use CET-SS.
> The commit message discussed why that is not an option (currently), and
> why I don't consider it a good idea to make possible.

Apologies.  I thought I'd written it in the commit message, but it was
half in the previous patch, and not terribly clear.  I've reworked both.

The current practical reason is to do with clone_mappings() in the XPTI
case.

A wild off-stack read is far less likely than hitting the guard page
with a push first, which means a read-only guard page is about as
useful to us, but results in a much simpler stack setup, as the
attributes don't vary with the enabled features.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 22:29:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 22:29:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jenUw-0000xn-Sg; Fri, 29 May 2020 22:29:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mY44=7L=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jenUv-0000xg-1e
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 22:29:05 +0000
X-Inumbo-ID: cd348bf2-a1fb-11ea-8993-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd348bf2-a1fb-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 22:29:04 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:47284
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jenUq-000xCm-Jg (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Fri, 29 May 2020 23:29:00 +0100
Subject: Re: [PATCH v2 00/14] x86: Support for CET Supervisor Shadow Stacks
To: Xen-devel <xen-devel@lists.xenproject.org>
References: <20200527191847.17207-1-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c7bf5933-8ec3-0cbf-3d65-855b4e1d6b2f@citrix.com>
Date: Fri, 29 May 2020 23:28:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527191847.17207-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Wei Liu <wl@xen.org>, Jan Beulich <JBeulich@suse.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/05/2020 20:18, Andrew Cooper wrote:
> This series implements Shadow Stack support for Xen to use.

Given that we had almost reached agreement, and considering the value of
this feature, I've fixed up most of the remaining comments and committed
the series.

The main area of concern was the fragility of the stack expressions.
I've got a plan for 4.15 to make them far more robust (by borrowing a
trick from Linux), and have left the existing logic at least
self-consistent.

If there are still major concerns with the result, we can fix that up
early next week.

~Andrew


From xen-devel-bounces@lists.xenproject.org Fri May 29 22:33:18 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 22:33:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jenYz-0001st-F6; Fri, 29 May 2020 22:33:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jenYx-0001sl-Bu
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 22:33:15 +0000
X-Inumbo-ID: 5ea6f340-a1fc-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ea6f340-a1fc-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 22:33:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=gIRQMjDvllv9Ns9aPOCIxKDsib6z3C84idhLVOrV9Gs=; b=QqpV0ZywF/kO/RY6HgVmfBrSV
 JUrtnyHcIx8biSNgFU8ygTEy23ewlLgnJO6DmtmucnkDM35vZ2VtV68zPhpuUM62rxhO9s25i+PxL
 7zw0JR7PdSOhAD9Joi66K53oBjCM4ksh1YFDXlfWsm+6lpel/DXBRqkR3FEtPx4Q8FHkU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jenYp-0000ro-Cw; Fri, 29 May 2020 22:33:07 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jenYp-0001tK-12; Fri, 29 May 2020 22:33:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jenYp-0008Rv-0J; Fri, 29 May 2020 22:33:07 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150460-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150460: regressions - FAIL
X-Osstest-Failures: linux-5.4:build-armhf:xen-build:fail:regression
 linux-5.4:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-examine:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 linux-5.4:build-armhf-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
 linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=e0d81ce760044efd3f26004aa32821c34968512a
X-Osstest-Versions-That: linux=1cdaf895c99d319c0007d0b62818cf85fc4b087f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 22:33:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150460 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150460/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf                   6 xen-build                fail REGR. vs. 150294

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-examine      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150273
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                e0d81ce760044efd3f26004aa32821c34968512a
baseline version:
 linux                1cdaf895c99d319c0007d0b62818cf85fc4b087f

Last test of basis   150294  2020-05-21 07:55:33 Z    8 days
Testing same since   150410  2020-05-27 16:09:38 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Alain Volmat <alain.volmat@st.com>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Monakov <amonakov@ispras.ru>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexei Starovoitov <ast@kernel.org>
  Andreas Färber <afaerber@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artem Borisov <dedsa2002@gmail.com>
  Arun Easi <aeasi@marvell.com>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Aymeric Agon-Rambosson <aymeric.agon@yandex.com>
  Babu Moger <babu.moger@amd.com>
  Bin Lu <Bin.Lu@arm.com>
  Bob Peterson <rpeterso@redhat.com>
  Bodo Stroesser <bstroesser@ts.fujitsu.com>
  Brent Lu <brent.lu@intel.com>
  Bryant G. Ly <bryangly@gmail.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chris Chiu <chiu@endlessm.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Gmeiner <christian.gmeiner@gmail.com>
  Christian Lachner <gladiac@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Colin Xu <colin.xu@intel.com>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Drake <drake@endlessm.com>
  Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dragos Bogdan <dragos.bogdan@analog.com>
  Eric Biggers <ebiggers@google.com>
  Ewan D. Milne <emilne@redhat.com>
  Fabrice Gasnier <fabrice.gasnier@st.com>
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Gavin Shan <gshan@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gerald Schaefer <gerald.schaefer@de.ibm.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  James Hilliard <james.hilliard1@gmail.com>
  Jian-Hong Pan <jian-hong@endlessm.com>
  Jiri Kosina <jkosina@suse.cz>
  Joerg Roedel <jroedel@suse.de>
  John Hubbard <jhubbard@nvidia.com>
  John Johansen <john.johansen@canonical.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Juliet Kim <julietk@linux.vnet.ibm.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Keno Fischer <keno@juliacomputing.com>
  Kevin Hao <haokexin@gmail.com>
  Klaus Doth <kdlnx@doth.eu>
  Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
  Lianbo Jiang <lijiang@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Loïc Yhuel <loic.yhuel@gmail.com>
  Lucas Stach <l.stach@pengutronix.de>
  Marco Elver <elver@google.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mauro Carvalho Chehab <mchehab@kernel.org>
  Maxim Petrov <mmrmaximuzz@gmail.com>
  Mel Gorman <mgorman@techsingularity.net>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mike Pozulp <pozulp.kernel@gmail.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neil Horman <nhorman@tuxdriver.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nilesh Javali <njavali@marvell.com>
  Olivier Moysan <olivier.moysan@st.com>
  Oscar Carter <oscar.carter@gmx.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Cercueil <paul@crapouillou.net>
  PeiSen Hou <pshou@realtek.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Peter Xu <peterx@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Phil Auld <pauld@redhat.com>
  Philipp Rudo <prudo@linux.ibm.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qian Cai <cai@lca.pw>
  Qiushi Wu <wu000273@umn.edu>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Clark <richard.xnu.clark@gmail.com>
  Richard Weinberger <richard@nod.at>
  Rick Edgecombe <rick.p.edgecombe@intel.com>
  Roberto Sassu <roberto.sassu@huawei.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Russell Currey <ruscur@russell.cc>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Samuel Iglesias Gonsalvez <siglesias@igalia.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Scott Bahling <sbahling@suse.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shay Agroskin <shayagr@amazon.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yoshiyuki Kurauchi <ahochauwaaaaa@gmail.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  fail    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     blocked 
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 3181 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 29 22:37:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 22:37:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jend0-00023s-4X; Fri, 29 May 2020 22:37:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=T8V9=7L=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jencz-00023n-AS
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 22:37:25 +0000
X-Inumbo-ID: f48a2df0-a1fc-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f48a2df0-a1fc-11ea-81bc-bc764e2007e4;
 Fri, 29 May 2020 22:37:19 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=p5LuDrrP0RCFVMlwWT2BWdfqB+VwMcbbS6VQzz66bWc=; b=7D8ahKQOaBMpqUvhtgQboT0Ql
 BbDonkE21plRnhlwgyG8ZHp9/71xvw4VX5NQBCqPs3g7iqTsL3yzxKOOQ9BZRAP0S4As3/Tfc1KKb
 siuyJJI/aWox0FnOgZ54zhKJ881v7MqW7DJQ3dn37VoO2YYpmbExQOU/gTrV+92XhnOO4=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jenct-0000xt-3T; Fri, 29 May 2020 22:37:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jencs-0002Rn-Mh; Fri, 29 May 2020 22:37:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jencs-0004y0-M8; Fri, 29 May 2020 22:37:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150505-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150505: regressions - trouble: blocked/fail
X-Osstest-Failures: xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
 xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: xen=8e2aa76dc1670e82eaa15683353853bc66bf54fc
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 29 May 2020 22:37:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150505 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150505/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 150438
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  8e2aa76dc1670e82eaa15683353853bc66bf54fc
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days    9 attempts
Testing same since   150498  2020-05-29 18:01:30 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 773 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 29 23:52:16 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 29 May 2020 23:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeomx-0000tq-VD; Fri, 29 May 2020 23:51:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=L269=7L=cert.pl=michall@srs-us1.protection.inumbo.net>)
 id 1jeomw-0000tl-7h
 for xen-devel@lists.xenproject.org; Fri, 29 May 2020 23:51:46 +0000
X-Inumbo-ID: 59d58588-a207-11ea-8993-bc764e2007e4
Received: from bagnar.nask.net.pl (unknown [195.187.242.196])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59d58588-a207-11ea-8993-bc764e2007e4;
 Fri, 29 May 2020 23:51:44 +0000 (UTC)
Received: from bagnar.nask.net.pl (unknown [172.16.9.10])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 0DEB1A1D1B;
 Sat, 30 May 2020 01:51:44 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 065F4A1D1A;
 Sat, 30 May 2020 01:51:43 +0200 (CEST)
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id NZNHhCzZDS_a; Sat, 30 May 2020 01:51:42 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 33996A1D1B;
 Sat, 30 May 2020 01:51:42 +0200 (CEST)
X-Virus-Scanned: amavisd-new at bagnar.nask.net.pl
Received: from bagnar.nask.net.pl ([127.0.0.1])
 by localhost (bagnar.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 2MOl8DibFNKn; Sat, 30 May 2020 01:51:42 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir-ext.nask.net.pl
 [195.187.242.210])
 by bagnar.nask.net.pl (Postfix) with ESMTP id 1415DA1D1A;
 Sat, 30 May 2020 01:51:42 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 061CF22315;
 Sat, 30 May 2020 01:51:12 +0200 (CEST)
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id t6XSdRV7beLe; Sat, 30 May 2020 01:51:06 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by belindir.nask.net.pl (Postfix) with ESMTP id 40D6222317;
 Sat, 30 May 2020 01:51:06 +0200 (CEST)
X-Virus-Scanned: amavisd-new at belindir.nask.net.pl
Received: from belindir.nask.net.pl ([127.0.0.1])
 by localhost (belindir.nask.net.pl [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 6QJvHqLHUnxU; Sat, 30 May 2020 01:51:06 +0200 (CEST)
Received: from belindir.nask.net.pl (belindir.nask.net.pl [172.16.10.10])
 by belindir.nask.net.pl (Postfix) with ESMTP id 0F54122315;
 Sat, 30 May 2020 01:51:06 +0200 (CEST)
Date: Sat, 30 May 2020 01:51:05 +0200 (CEST)
From: =?utf-8?Q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>
To: Tamas K Lengyel <tamas.k.lengyel@gmail.com>
Message-ID: <1227421304.59835733.1590796265807.JavaMail.zimbra@cert.pl>
In-Reply-To: <CABfawhmUbowsMS6dS8SCcgMGe9GY8HenHG7LEyZcqh39DwiXMQ@mail.gmail.com>
References: <1317891616.59673956.1590755403816.JavaMail.zimbra@cert.pl>
 <57d213df-9edb-8f31-59e4-13f5cc07b543@suse.com>
 <1150720994.59695220.1590756705329.JavaMail.zimbra@cert.pl>
 <1f68a02a-3472-1bb0-90b9-6f3ccefca0b3@suse.com>
 <1623831291.59734817.1590760249321.JavaMail.zimbra@cert.pl>
 <CABfawhmjeoVpRgAbDAA_T6rMiqPjQUvLPEn5t1-1JwZFJicNKQ@mail.gmail.com>
 <CABfawhmUbowsMS6dS8SCcgMGe9GY8HenHG7LEyZcqh39DwiXMQ@mail.gmail.com>
Subject: Re: [BUG] Core scheduling patches causing deadlock in some situations
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [172.16.10.10]
X-Mailer: Zimbra 8.6.0_GA_1194 (ZimbraWebClient - GC83 (Win)/8.6.0_GA_1194)
Thread-Topic: Core scheduling patches causing deadlock in some situations
Thread-Index: /ogUKyFrbbu2LuoRmFbM/mOB5O9qPA==
X-Zimbra-DL: chivay@cert.pl, bonus@cert.pl
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: =?utf-8?Q?J=C3=BCrgen_Gro=C3=9F?= <jgross@suse.com>,
 Xen-devel <xen-devel@lists.xenproject.org>, chivay@cert.pl, bonus@cert.pl
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

----- On 29 May 2020, at 18:12, Tamas K Lengyel tamas.k.lengyel@gmail.com wrote:

> On Fri, May 29, 2020 at 8:48 AM Tamas K Lengyel
> <tamas.k.lengyel@gmail.com> wrote:
>>
>> On Fri, May 29, 2020 at 7:51 AM Michał Leszczyński
>> <michal.leszczynski@cert.pl> wrote:
>> >
>> > ----- On 29 May 2020, at 15:15, Jürgen Groß jgross@suse.com wrote:
>> >
>> > > On 29.05.20 14:51, Michał Leszczyński wrote:
>> > >> ----- On 29 May 2020, at 14:44, Jürgen Groß jgross@suse.com wrote:
>> > >>
>> > >>> On 29.05.20 14:30, Michał Leszczyński wrote:
>> > >>>> Hello,
>> > >>>>
>> > >>>> I'm running DRAKVUF on a Dell Inc. PowerEdge R640/08HT8T server with an
>> > >>>> Intel(R) Xeon(R) Gold 6132 CPU @ 2.60GHz.
>> > >>>> When upgrading from Xen RELEASE 4.12 to 4.13, we noticed some stability
>> > >>>> problems concerning freezes of Dom0 (Debian Buster):
>> > >>>>
>> > >>>> ---
>> > >>>>
>> > >>>> maj 27 23:17:02 debian kernel: rcu: INFO: rcu_sched self-detected stall on CPU
>> > >>>> maj 27 23:17:02 debian kernel: rcu: 0-....: (5250 ticks this GP)
>> > >>>> idle=cee/1/0x4000000000000002 softirq=11964/11964 fqs=2515
>> > >>>> maj 27 23:17:02 debian kernel: rcu: (t=5251 jiffies g=27237 q=799)
>> > >>>> maj 27 23:17:02 debian kernel: NMI backtrace for cpu 0
>> > >>>> maj 27 23:17:02 debian kernel: CPU: 0 PID: 643 Comm: z_rd_int_1 Tainted: P OE
>> > >>>> 4.19.0-6-amd64 #1 Debian 4.19.67-2+deb10u2
>> > >>>> maj 27 23:17:02 debian kernel: Hardware name: Dell Inc. PowerEdge R640/08HT8T,
>> > >>>> BIOS 2.1.8 04/30/2019
>> > >>>> maj 27 23:17:02 debian kernel: Call Trace:
>> > >>>> maj 27 23:17:02 debian kernel: <IRQ>
>> > >>>> maj 27 23:17:02 debian kernel: dump_stack+0x5c/0x80
>> > >>>> maj 27 23:17:02 debian kernel: nmi_cpu_backtrace.cold.4+0x13/0x50
>> > >>>> maj 27 23:17:02 debian kernel: ? lapic_can_unplug_cpu.cold.29+0x3b/0x3b
>> > >>>> maj 27 23:17:02 debian kernel: nmi_trigger_cpumask_backtrace+0xf9/0xfb
>> > >>>> maj 27 23:17:02 debian kernel: rcu_dump_cpu_stacks+0x9b/0xcb
>> > >>>> maj 27 23:17:02 debian kernel: rcu_check_callbacks.cold.81+0x1db/0x335
>> > >>>> maj 27 23:17:02 debian kernel: ? tick_sched_do_timer+0x60/0x60
>> > >>>> maj 27 23:17:02 debian kernel: update_process_times+0x28/0x60
>> > >>>> maj 27 23:17:02 debian kernel: tick_sched_handle+0x22/0x60
>> > >>>>
>> > >>>> ---
>> > >>>>
>> > >>>> This usually results in the machine being completely unresponsive and
>> > >>>> performing an automated reboot after some time.
>> > >>>>
>> > >>>> I've bisected commits between 4.12 and 4.13 and it seems this is the
>> > >>>> patch which introduced the bug:
>> > >>>> https://github.com/xen-project/xen/commit/7c7b407e77724f37c4b448930777a59a479feb21
>> > >>>>
>> > >>>> Enclosed you can find the `xl dmesg` log (attachment: dmesg.txt) from the
>> > >>>> fresh boot of the machine on which the bug was reproduced.
>> > >>>>
>> > >>>> I'm also attaching the `xl info` output from this machine:
>> > >>>>
>> > >>>> ---
>> > >>>>
>> > >>>> release : 4.19.0-6-amd64
>> > >>>> version : #1 SMP Debian 4.19.67-2+deb10u2 (2019-11-11)
>> > >>>> machine : x86_64
>> > >>>> nr_cpus : 14
>> > >>>> max_cpu_id : 223
>> > >>>> nr_nodes : 1
>> > >>>> cores_per_socket : 14
>> > >>>> threads_per_core : 1
>> > >>>> cpu_mhz : 2593.930
>> > >>>> hw_caps :
>> > >>>> bfebfbff:77fef3ff:2c100800:00000121:0000000f:d19ffffb:00000008:00000100
>> > >>>> virt_caps : pv hvm hvm_directio pv_directio hap shadow
>> > >>>> total_memory : 130541
>> > >>>> free_memory : 63591
>> > >>>> sharing_freed_memory : 0
>> > >>>> sharing_used_memory : 0
>> > >>>> outstanding_claims : 0
>> > >>>> free_cpus : 0
>> > >>>> xen_major : 4
>> > >>>> xen_minor : 13
>> > >>>> xen_extra : -unstable
>> > >>>> xen_version : 4.13-unstable
>> > >>>> xen_caps : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32 hvm-3.0-x86_32p
>> > >>>> hvm-3.0-x86_64
>> > >>>> xen_scheduler : credit2
>> > >>>> xen_pagesize : 4096
>> > >>>> platform_params : virt_start=0xffff800000000000
>> > >>>> xen_changeset : Wed Oct 2 09:27:27 2019 +0200 git:7c7b407e77-dirty
>> > >>>
>> > >>> Which is your original Xen base? This output was clearly obtained at
>> > >>> the end of the bisect process.
>> > >>>
>> > >>> There have been quite a few corrections since the release of Xen 4.13,
>> > >>> so please make sure you are running the most recent version (4.13.1).
>> > >>>
>> > >>>
>> > >>> Juergen
>> > >>
>> > >> Sure, we have tested both RELEASE 4.13 and RELEASE 4.13.1. Unfortunately
>> > >> these corrections didn't help and the bug is still reproducible.
>> > >>
>> > >> From our testing it turns out that:
>> > >>
>> > >> Known working revision: 997d6248a9ae932d0dbaac8d8755c2b15fec25dc
>> > >> Broken revision: 6278553325a9f76d37811923221b21db3882e017
>> > >> First bad commit: 7c7b407e77724f37c4b448930777a59a479feb21
>> > >
>> > > Would it be possible to test xen unstable, too?
>> > >
>> > > I could imagine e.g. commit b492c65da5ec5ed or 99266e31832fb4a4 having
>> > > an impact here.
>> > >
>> > >
>> > > Juergen
>> >
>> >
>> > I've tried revision b492c65da5ec5ed, but it seems that there is some
>> > problem with ALTP2M support, so I can't launch anything at all.
>> >
>> > maj 29 15:45:32 debian drakrun[1223]: Failed to set HVM_PARAM_ALTP2M, RC: -1
>> > maj 29 15:45:32 debian drakrun[1223]: VMI_ERROR: xc_altp2m_switch_to_view
>> > returned rc: -1
>>
>> Ugh, great, that's another regression in 4.14-unstable. I ran into it
>> myself but couldn't spend the time to figure out whether it's just
>> something in my configuration or not, so I reverted to 4.13.1. Now we
>> know it's a real bug.
>
> This was a bug in libxl; I've sent in a patch that fixes it, but you
> can grab it from https://github.com/tklengyel/xen/tree/libxl_fix.
> There is also an update to DRAKVUF that will need to be applied, due to
> the recent altp2m visibility option having to be specified; you can
> grab that from https://github.com/tklengyel/drakvuf/tree/4.14.
>
> Tamas


After checking out 99266e31832fb4a4 and applying the patch from
https://github.com/tklengyel/xen/tree/libxl_fix it's again possible to
successfully launch DRAKVUF, but the deadlock caused by the scheduler is
still reproducible and the whole machine freezes a few seconds after
starting the analysis.

So I would suppose that from 7c7b407e77724f37c4b448930777a59a479feb21
through 99266e31832fb4a4 there is still a bug in the scheduler which
causes a freeze when using DRAKVUF on some machines, or at least some
default behavior of the Xen hypervisor has changed in an improper way.
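The bisection workflow reported above (known-good revision, broken revision, first bad commit) can be driven automatically by `git bisect run`. The sketch below is illustrative only: it builds a throwaway repository with a synthetic regression (a marker file) rather than bisecting xen.git, and every name in it is invented for the demo.

```shell
#!/bin/sh
# Illustrative sketch only: a self-contained demo of the `git bisect run`
# workflow used to pinpoint a first bad commit.  The real bisection was
# done over xen.git between the revisions quoted in this thread.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name demo

for i in 1 2 3 4 5 6 7 8; do
    # The synthetic "regression" appears at commit 5 and stays thereafter.
    if [ "$i" -ge 5 ]; then touch bug-marker; fi
    echo "$i" > file
    git add -A
    git commit -qm "commit $i"
done

# bad = HEAD (commit 8), good = HEAD~7 (commit 1); the run script exits
# non-zero whenever the bug is present, so bisect can drive itself.
git bisect start HEAD HEAD~7
result=$(git bisect run sh -c '! test -f bug-marker' | grep 'is the first bad commit')
echo "$result"
```

Run against a real tree, the same shape applies: `git bisect start <broken> <known-good>` followed by `git bisect run <reproducer-script>`.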


Best regards
Michał Leszczyński
CERT Polska


From xen-devel-bounces@lists.xenproject.org Sat May 30 00:05:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 00:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jep0W-0002Si-NJ; Sat, 30 May 2020 00:05:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jep0V-0002SC-B9
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 00:05:47 +0000
X-Inumbo-ID: 4f7b8be4-a209-11ea-a928-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f7b8be4-a209-11ea-a928-12813bfff9fa;
 Sat, 30 May 2020 00:05:46 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=UQQBLkC2p8VaRm+t3TQPAlUY5dzWKchcpNsZEj1oC5U=; b=Iradis70pGQjfqJuqHfH+iPUZ
 kIjgYhvP4KVu/mk4HwUgVn+n7X+rXW8v+lTlB29BXRDXRJK9ILZdLrwndD+tPHECAUqLy/9IxdCEX
 YLbx1mVo8GcQtdlnoWdZB3xUXuwAb9hX/zvng3AWiJbhWSHE1dDzyVdZVRLIkrV0IFGkw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jep0T-0003Mh-La; Sat, 30 May 2020 00:05:45 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jep0T-00080r-AE; Sat, 30 May 2020 00:05:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jep0T-0004HU-9a; Sat, 30 May 2020 00:05:45 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150510-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150510: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:build-amd64:xen-build:fail:regression
 xen-unstable-smoke:build-armhf:xen-build:fail:regression
 xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:build-amd64-libvirt:build-check(1):blocked:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=b60ab42db2f04dcf56ebbfedfb9b0c65a75e4bac
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 00:05:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150510 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150510/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 150438
 build-armhf                   6 xen-build                fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b60ab42db2f04dcf56ebbfedfb9b0c65a75e4bac
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days   10 attempts
Testing same since   150510  2020-05-29 23:00:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  fail    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-amd64-libvirt                                     blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1155 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 00:30:13 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 00:30:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jepNr-0004I0-Qy; Sat, 30 May 2020 00:29:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcdI=7M=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jepNq-0004Hv-LZ
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 00:29:54 +0000
X-Inumbo-ID: ad169d2c-a20c-11ea-a928-12813bfff9fa
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad169d2c-a20c-11ea-a928-12813bfff9fa;
 Sat, 30 May 2020 00:29:52 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:50762
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jepNk-000pHZ-L1 (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Sat, 30 May 2020 01:29:48 +0100
Subject: Re: [PATCH v6 5/5] tools/libxc: make use of domain context
 SHARED_INFO record...
To: Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20200527173407.1398-1-paul@xen.org>
 <20200527173407.1398-6-paul@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <51f8cd86-9582-4fba-7e67-0c4b227870d1@citrix.com>
Date: Sat, 30 May 2020 01:29:47 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200527173407.1398-6-paul@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Paul Durrant <pdurrant@amazon.com>, Ian Jackson <ian.jackson@eu.citrix.com>,
 Wei Liu <wl@xen.org>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 27/05/2020 18:34, Paul Durrant wrote:
> ... in the save/restore code.
>
> This patch replaces direct mapping of the shared_info_frame (retrieved
> using XEN_DOMCTL_getdomaininfo) with save/load of the domain context
> SHARED_INFO record.
>
> No modifications are made to the definition of the migration stream at
> this point. Subsequent patches will define a record in the libxc domain
> image format for passing domain context and convert the save/restore code
> to use that.
>
> Signed-off-by: Paul Durrant <pdurrant@amazon.com>

I was going to fix up the remaining issues and commit this, but there is
a rather major problem in the way.  Therefore, here is a rather fuller
review.

> diff --git a/tools/libxc/xc_sr_common.c b/tools/libxc/xc_sr_common.c
> index dd9a11b4b5..1acb3765aa 100644
> --- a/tools/libxc/xc_sr_common.c
> +++ b/tools/libxc/xc_sr_common.c
> @@ -138,6 +138,73 @@ int read_record(struct xc_sr_context *ctx, int fd, struct xc_sr_record *rec)
>      return 0;
>  };
>  
> +int get_domain_context(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +    size_t len = 0;
> +    int rc;
> +
> +    if ( ctx->domain_context.buffer )
> +    {
> +        ERROR("Domain context already present");
> +        return -1;
> +    }
> +
> +    rc = xc_domain_getcontext(xch, ctx->domid, NULL, &len);
> +    if ( rc < 0 )
> +    {
> +        PERROR("Unable to get size of domain context");
> +        return -1;
> +    }
> +
> +    ctx->domain_context.buffer = malloc(len);
> +    if ( !ctx->domain_context.buffer )
> +    {
> +        PERROR("Unable to allocate memory for domain context");
> +        return -1;
> +    }

There is an xc_sr_blob interface, as this is a common pattern.
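For reference, the shape of that pattern, as a rough standalone sketch
modelled loosely on the blob helper in xc_sr_common.h (the field and
helper names here are illustrative, not necessarily the exact API):

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Minimal standalone model of the allocate-and-track-length pattern.
 * Names loosely follow libxc's struct xc_sr_blob; treat them as
 * assumptions for illustration. */
struct blob {
    void *ptr;
    size_t size;
};

/* Replace the blob's contents with a copy of src[0..size).  On
 * allocation failure, return -1 and leave the old contents intact. */
static int update_blob(struct blob *blob, const void *src, size_t size)
{
    void *p = malloc(size);

    if ( !p )
        return -1;

    memcpy(p, src, size);
    free(blob->ptr);
    blob->ptr = p;
    blob->size = size;

    return 0;
}

static void free_blob(struct blob *blob)
{
    free(blob->ptr);
    blob->ptr = NULL;
    blob->size = 0;
}
```

The copy-then-swap ordering is the point of centralising this: a
failed update never loses the previous buffer, and every user gets the
same length bookkeeping for free.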

> +
> +    rc = xc_domain_getcontext(xch, ctx->domid, ctx->domain_context.buffer,
> +                              &len);
> +    if ( rc < 0 )
> +    {
> +        PERROR("Unable to get domain context");
> +        return -1;
> +    }
> +
> +    ctx->domain_context.len = len;
> +
> +    return 0;
> +}
> +
> +int set_domain_context(struct xc_sr_context *ctx)
> +{
> +    xc_interface *xch = ctx->xch;
> +    int rc;
> +
> +    if ( !ctx->domain_context.buffer )
> +    {
> +        ERROR("Domain context not present");
> +        return -1;
> +    }
> +
> +    rc = xc_domain_setcontext(xch, ctx->domid, ctx->domain_context.buffer,
> +                              ctx->domain_context.len);
> +
> +    if ( rc < 0 )
> +    {
> +        PERROR("Unable to set domain context");
> +        return -1;
> +    }
> +
> +    return 0;
> +}
> +
> +void common_cleanup(struct xc_sr_context *ctx)
> +{
> +    free(ctx->domain_context.buffer);

There is only a single caller to this function, so there is a (possibly
latent) memory leak.

> +}
> +
>  static void __attribute__((unused)) build_assertions(void)
>  {
>      BUILD_BUG_ON(sizeof(struct xc_sr_ihdr) != 24);
> diff --git a/tools/libxc/xc_sr_common.h b/tools/libxc/xc_sr_common.h
> index 5dd51ccb15..0d61978b08 100644
> --- a/tools/libxc/xc_sr_common.h
> +++ b/tools/libxc/xc_sr_common.h
> @@ -208,6 +208,11 @@ struct xc_sr_context
>  
>      xc_dominfo_t dominfo;
>  
> +    struct {
> +        void *buffer;
> +        unsigned int len;
> +    } domain_context;

As noted above, xc_sr_blob domain_context;

> diff --git a/tools/libxc/xc_sr_restore_x86_pv.c b/tools/libxc/xc_sr_restore_x86_pv.c
> index 904ccc462a..21982a38ad 100644
> --- a/tools/libxc/xc_sr_restore_x86_pv.c
> +++ b/tools/libxc/xc_sr_restore_x86_pv.c
> @@ -865,7 +865,7 @@ static int handle_shared_info(struct xc_sr_context *ctx,
>      xc_interface *xch = ctx->xch;
>      unsigned int i;
>      int rc = -1;
> -    shared_info_any_t *guest_shinfo = NULL;
> +    shared_info_any_t *guest_shinfo;
>      const shared_info_any_t *old_shinfo = rec->data;
>  
>      if ( !ctx->x86.pv.restore.seen_pv_info )
> @@ -878,18 +878,14 @@ static int handle_shared_info(struct xc_sr_context *ctx,
>      {
>          ERROR("X86_PV_SHARED_INFO record wrong size: length %u"
>                ", expected 4096", rec->length);
> -        goto err;
> +        return -1;
>      }
>  
> -    guest_shinfo = xc_map_foreign_range(
> -        xch, ctx->domid, PAGE_SIZE, PROT_READ | PROT_WRITE,
> -        ctx->dominfo.shared_info_frame);
> -    if ( !guest_shinfo )
> -    {
> -        PERROR("Failed to map Shared Info at mfn %#lx",
> -               ctx->dominfo.shared_info_frame);
> -        goto err;
> -    }
> +    rc = x86_pv_get_shinfo(ctx);
> +    if ( rc )
> +        return rc;

If I'm following this logic correctly, we're now in the final throes of
completing migration with data in hand, and we ask the hypervisor to
gather the entire domain context for this (not-yet-run) guest, copy it
(twice, even) down into userspace, so we can scan through it to find the
appropriate tag, copy less than a page's worth of data, then pass the
full buffer back to Xen to unserialise the whole thing over the guest.

The restore path shouldn't be calling DOMCTL_get* at all.  It is
conceptually wrong, and a waste of time/effort which would be better
spent with the guest unpaused.

What the restore path should be doing is passing data from the stream,
straight into DOMCTL_set* (and attempting to do this will probably
demonstrate why hvmctxt was an especially poor API to copy).  The layout
of existing migration-v2 blocks was designed around blobs which Xen
produces and consumes directly, specifically to minimise the processing
required.
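As a toy model of that direction (set_ctx here stands in for the
setcontext hypercall wrapper; its shape is hypothetical, purely for
illustration), the restore handler forwards the stream payload
verbatim, with no get/modify/set round trip:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified record as read from the migration stream. */
struct record {
    uint32_t type;
    uint32_t length;
    const void *data;
};

/* Stand-in for the "set domain context" operation (hypothetical
 * signature, for illustration only). */
typedef int (*set_ctx_fn)(uint32_t domid, const void *buf, size_t len);

/* Restore-side handler: no DOMCTL_get*, no local reassembly -- the
 * blob Xen produced on the save side goes straight back to Xen. */
static int handle_domain_context(uint32_t domid, const struct record *rec,
                                 set_ctx_fn set_ctx)
{
    return set_ctx(domid, rec->data, rec->length);
}

/* Stub backend used below to observe the call. */
static size_t last_len;
static int stub_set_ctx(uint32_t domid, const void *buf, size_t len)
{
    (void)domid;
    (void)buf;
    last_len = len;
    return 0;
}
```

The design point is that the stream format stays a pass-through
container for blobs Xen itself produces and consumes.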

I think it is a layering violation to have libxc pick apart and reframe
the internals of another layer's blob.

What is not currently clear is whether the domain context wants sending
as a discrete blob itself (and have a new chunk type allocated for the
purpose), or each bit of it is going to want picking apart.  This
largely depends on what else is going in the blob at a Xen level.

Also, I'd like to see the plans for the HVM side of this logic before
deciding on whether the PV side is appropriate.  I know we have a
dedicated SHARED_INFO record right now, but it would be fine (AFAICT) to
relax the migration spec to state that one of {SHARED_INFO,
DOMAIN_CONTEXT} must be sent for PV.

> @@ -854,13 +835,27 @@ static int write_x86_pv_p2m_frames(struct xc_sr_context *ctx)
>   */
>  static int write_shared_info(struct xc_sr_context *ctx)
>  {
> +    xc_interface *xch = ctx->xch;
>      struct xc_sr_record rec = {
>          .type = REC_TYPE_SHARED_INFO,
>          .length = PAGE_SIZE,
> -        .data = ctx->x86.pv.shinfo,
>      };
> +    int rc;
>  
> -    return write_record(ctx, &rec);
> +    if ( !(rec.data = calloc(1, PAGE_SIZE)) )
> +    {
> +        ERROR("Cannot allocate buffer for SHARED_INFO data");
> +        return -1;
> +    }
> +
> +    BUILD_BUG_ON(sizeof(*ctx->x86.pv.shinfo) > PAGE_SIZE);
> +    memcpy(rec.data, ctx->x86.pv.shinfo, sizeof(*ctx->x86.pv.shinfo));

These are some very contorted hoops to extend the data size sent.

write_split_record() is the tool to use here, although the above
suggestions may render this change unnecessary.
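As a rough standalone illustration of the split-write idea (this is a
simplification, not the real write_split_record() signature): the
record header and the caller's payload are emitted in one pass, with
zero padding to the next 8-byte boundary, so the caller never has to
assemble a page-sized intermediate buffer just to grow the on-wire
size.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

/* Round up to the migration-v2 8-byte record alignment. */
#define ROUNDUP8(x) (((x) + 7) & ~(size_t)7)

/* Emit a record (type, length, data, zero padding) into out[] from the
 * caller's buffer directly.  Returns the number of bytes written.
 * Field layout follows the migration v2 record header: 32-bit type at
 * offset 0, 32-bit payload length at offset 4, data at offset 8. */
static size_t emit_split_record(uint8_t *out, uint32_t type,
                                const void *buf, uint32_t sz)
{
    size_t pad = ROUNDUP8(sz) - sz;

    memcpy(out, &type, sizeof(type));
    memcpy(out + 4, &sz, sizeof(sz));
    memcpy(out + 8, buf, sz);
    memset(out + 8 + sz, 0, pad);   /* zero padding to 8-byte boundary */

    return 8 + sz + pad;
}
```

The length field records the unpadded payload size; only the padding
needed to keep the stream aligned is added on the wire.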

> @@ -1041,7 +1036,7 @@ static int x86_pv_setup(struct xc_sr_context *ctx)
>      if ( rc )
>          return rc;
>  
> -    rc = map_shinfo(ctx);
> +    rc = x86_pv_get_shinfo(ctx);
>      if ( rc )
>          return rc;
>  
> @@ -1112,12 +1107,11 @@ static int x86_pv_cleanup(struct xc_sr_context *ctx)
>      if ( ctx->x86.pv.p2m )
>          munmap(ctx->x86.pv.p2m, ctx->x86.pv.p2m_frames * PAGE_SIZE);
>  
> -    if ( ctx->x86.pv.shinfo )
> -        munmap(ctx->x86.pv.shinfo, PAGE_SIZE);
> -
>      if ( ctx->x86.pv.m2p )
>          munmap(ctx->x86.pv.m2p, ctx->x86.pv.nr_m2p_frames * PAGE_SIZE);
>  
> +    common_cleanup(ctx);
> +
>      return 0;
>  }

This pair highlights an asymmetry in the patch, which will need fixing
by whoever adds a second field to domain_context.  Obtaining the
context for use shouldn't be a hidden side effect of getting the
shared info.

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat May 30 03:22:30 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 03:22:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jes4P-0003Tc-1z; Sat, 30 May 2020 03:22:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jes4N-0003TX-TL
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 03:21:59 +0000
X-Inumbo-ID: b493f30c-a224-11ea-a940-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b493f30c-a224-11ea-a940-12813bfff9fa;
 Sat, 30 May 2020 03:21:52 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Hp8+zDnf0Hs00g5tZEeEEbJh9AHwtj6FdYO0SrTXX94=; b=t1Z1TcVXxA2v2bKHGMTumNmHe
 yEkgJyCTu6ZlTXuhXpU8ebksjQ8WgELABdffs4s3lDl5EPShuCKx3VBKHNFbeEP+enCzEF8PbdKZQ
 s34LrMgK8+LqS0ix2QNuxiFu3Xo9YfHYd0HN7vnhegTKjd7RfLnnvDRep9O/k7JLdHxbQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jes4F-0000gG-E7; Sat, 30 May 2020 03:21:51 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jes4E-0003af-U7; Sat, 30 May 2020 03:21:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jes4E-0002o8-TX; Sat, 30 May 2020 03:21:50 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150464-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150464: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=75caf310d16cc5e2f851c048cd597f5437013368
X-Osstest-Versions-That: linux=b0c3ba31be3e45a130e13b278cf3b90f69bda6f6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 03:21:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150464 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150464/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150432
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150432
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150432
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150432
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150432
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150432
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150432
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150432
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                75caf310d16cc5e2f851c048cd597f5437013368
baseline version:
 linux                b0c3ba31be3e45a130e13b278cf3b90f69bda6f6

Last test of basis   150432  2020-05-28 10:58:42 Z    1 days
Testing same since   150464  2020-05-29 08:55:41 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexander Potapenko <glider@google.com>
  Andrew Morton <akpm@linux-foundation.org>
  Arnd Bergmann <arnd@arndb.de>
  Brendan Shanks <bshanks@codeweavers.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chen-Yu Tsai <wens@csie.org>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Dennis Kadioglu <denk@eclipso.email>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Enric Balletbo i Serra <enric.balletbo@collabora.com>
  Evan Green <evgreen@chromium.org>
  Guenter Roeck <linux@roeck-us.net>
  Guo Ren <guoren@linux.alibaba.com>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hugh Dickins <hughd@google.com>
  James Hilliard <james.hilliard1@gmail.com>
  Johannes Weiner <hannes@cmpxchg.org>
  Johnny Chuang <johnny.chuang@emc.com.tw>
  Kees Cook <keescook@chromium.org>
  Kevin Locke <kevin@kevinlocke.name>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Konstantin Khlebnikov <khlebnikov@yandex-team.ru>
  Linus Torvalds <torvalds@linux-foundation.org>
  Qian Cai <cai@lca.pw>
  Song Liu <songliubraving@fb.com>
  Stephan Gerhold <stephan@gerhold.net>
  Vitaly Wool <vitaly.wool@konsulko.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wei Yongjun <weiyongjun1@huawei.com>
  Wolfram Sang <wsa@kernel.org>
  Łukasz Patron <priv.luk@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   b0c3ba31be3e..75caf310d16c  75caf310d16cc5e2f851c048cd597f5437013368 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sat May 30 04:01:39 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 04:01:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jesgT-0006wY-4k; Sat, 30 May 2020 04:01:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jesgR-0006wT-GZ
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 04:01:19 +0000
X-Inumbo-ID: 34373bf0-a22a-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34373bf0-a22a-11ea-9dbe-bc764e2007e4;
 Sat, 30 May 2020 04:01:13 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=dRusWSi/uw2+isj/PuEY5nX0CvPHMe8UDNTEKsNFtss=; b=50i6glg4IcYYHKdA8OeNn3QL7
 svf/PfEUQgC7SvFfgZBOaHLHnc5nXMZbod/UHzxCmbPNcmtSdPoTBbPFoprkVYS0X9JiSXLxx4khi
 XqyZaqIqUktHbFFZ1Pf43EJTLZk879Lu8k7MhZ1WS0tqf+FrB2ugBMFCOv+cY5SaoS4gE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jesgL-0001XP-5O; Sat, 30 May 2020 04:01:13 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jesgK-0004r6-PC; Sat, 30 May 2020 04:01:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jesgK-0006jg-OZ; Sat, 30 May 2020 04:01:12 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150515-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150515: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 04:01:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150515 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150515/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days   11 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 04:48:28 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 04:48:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jetPc-00029H-TC; Sat, 30 May 2020 04:48:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jetPb-00029C-5k
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 04:47:59 +0000
X-Inumbo-ID: b79c96ba-a230-11ea-a945-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b79c96ba-a230-11ea-a945-12813bfff9fa;
 Sat, 30 May 2020 04:47:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To:Sender:
 Reply-To:Cc:MIME-Version:Content-Type:Content-Transfer-Encoding:Content-ID:
 Content-Description:Resent-Date:Resent-From:Resent-Sender:Resent-To:Resent-Cc
 :Resent-Message-ID:In-Reply-To:References:List-Id:List-Help:List-Unsubscribe:
 List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=jmfSFcn9QEdYJm4zk5Ag2WBkeelZRc5vlFJOkPMdMIU=; b=Ng3foKGsfkvdtPCVkEinavUIsB
 L7ALSYQqvKURBg/tutdsyvZ5tmnAmDYXtSwjtbH6KcknAWpwg1zPWQZS15dHy175Mldj8UrChM2xz
 K5N9lqinh11R5dtJ4TL8somcjf+IkSqL2tki9+55HL5t0M4fz+m5V/DLkDTQCOIJW8IQ=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jetPR-0002dH-Uw; Sat, 30 May 2020 04:47:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jetPR-0006dO-Dj; Sat, 30 May 2020 04:47:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jetPR-0006KD-77; Sat, 30 May 2020 04:47:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [libvirt bisection] complete build-armhf-libvirt
Message-Id: <E1jetPR-0006KD-77@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 04:47:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

branch xen-unstable
xenbranch xen-unstable
job build-armhf-libvirt
testid libvirt-build

Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  4d5f50d86b760864240c695adc341379fb47a796
  Bug not present: a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/150521/


  commit 4d5f50d86b760864240c695adc341379fb47a796
  Author: Pavel Hrdina <phrdina@redhat.com>
  Date:   Wed Jan 8 22:54:31 2020 +0100
  
      bootstrap.conf: stop creating AUTHORS file
      
      The existence of AUTHORS file is required for GNU projects but since
      commit <8bfb36db40f38e92823b657b5a342652064b5adc> we do not require
      these files to exist.
      
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/libvirt/build-armhf-libvirt.libvirt-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/libvirt/build-armhf-libvirt.libvirt-build --summary-out=tmp/150521.bisection-summary --basis-template=146182 --blessings=real,real-bisect libvirt build-armhf-libvirt libvirt-build
Searching for failure / basis pass:
 150462 fail [host=cubietruck-metzinger] / 146182 [host=cubietruck-picasso] 146156 ok.
Failure / basis pass flights: 150462 / 146156
(tree in basispass but not in latest: libvirt_gnulib)
Tree: libvirt git://libvirt.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
Basis pass 0f814c0fed420209ccb881325b854beaa7c70fcf 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb 933ebad2470a169504799a1d95b8e410bd9847ef 2f4d068645c211e309812372cd0ac58c9024e93b 03bfe526ecadc86f31eda433b91dc90be0563919
Generating revisions with ./adhoc-revtuple-generator  git://libvirt.org/libvirt.git#0f814c0fed420209ccb881325b854beaa7c70fcf-6f60d2a8503ce8c624bce6b53bf7b68476f5056f https://gitlab.com/keycodemap/keycodemapdb.git#317d3eeb963a515e15a63fa356d8ebcda7041a51-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/osstest/ovmf.git#70911f1f4aee0366b6122f2b90d367ec0f066beb-568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 git://xenbits.xen.org/qemu-xen.git#933ebad2470a169504799a1d95b8e410bd9847ef-410cc30fdc590417ae730d635bbc70257adf6750 git://xenbits.xen.org/osstest/seabios.git#2f4d068645c211e309812372cd0ac58c9024e93b-2e3de6253422112ae43e608661ba94ea6b345694 git://xenbits.xen.org/xen.git#03bfe526ecadc86f31eda433b91dc90be0563919-9f3e9139fa6c3d620eb08dff927518fc88200b8d
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

fatal: remote error: git-cache-proxy: git remote died with error exit code 1 // Fetching origin // fatal: Unable to look up xenbits.xen.org (port 9418) (Temporary failure in name resolution) // error: Could not fetch origin
Cloning into bare repository '/home/osstest/repos/ovmf'...
Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.

From git://cache:9419/git://xenbits.xen.org/osstest/ovmf
 * [new branch]            xen-tested-master -> origin/xen-tested-master
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Use of uninitialized value $parents in array dereference at ./adhoc-revtuple-generator line 465.
Use of uninitialized value in concatenation (.) or string at ./adhoc-revtuple-generator line 465.
Loaded 83414 nodes in revision graph
Searching for test results:
 146182 [host=cubietruck-picasso]
 146156 pass 0f814c0fed420209ccb881325b854beaa7c70fcf 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb 933ebad2470a169504799a1d95b8e410bd9847ef 2f4d068645c211e309812372cd0ac58c9024e93b 03bfe526ecadc86f31eda433b91dc90be0563919
 146240 [host=cubietruck-picasso]
 146211 fail irrelevant
 146299 [host=cubietruck-braque]
 146347 [host=cubietruck-braque]
 146366 [host=cubietruck-picasso]
 146344 [host=cubietruck-picasso]
 146348 [host=cubietruck-braque]
 146380 [host=cubietruck-braque]
 146406 [host=cubietruck-braque]
 146368 [host=cubietruck-picasso]
 146334 [host=cubietruck-braque]
 146392 [host=cubietruck-braque]
 146339 [host=cubietruck-braque]
 146341 [host=cubietruck-braque]
 146352 [host=cubietruck-picasso]
 146371 [host=cubietruck-picasso]
 146359 [host=cubietruck-picasso]
 146342 [host=cubietruck-braque]
 146411 [host=cubietruck-braque]
 146383 [host=cubietruck-braque]
 146360 [host=cubietruck-picasso]
 146364 [host=cubietruck-picasso]
 146399 [host=cubietruck-braque]
 146373 [host=cubietruck-picasso]
 146377 [host=cubietruck-picasso]
 146386 [host=cubietruck-braque]
 146374 [host=cubietruck-braque]
 146378 [host=cubietruck-picasso]
 146389 [host=cubietruck-braque]
 146410 [host=cubietruck-picasso]
 146402 [host=cubietruck-braque]
 146394 [host=cubietruck-braque]
 146404 [host=cubietruck-braque]
 146407 [host=cubietruck-braque]
 146455 fail irrelevant
 146509 [host=cubietruck-braque]
 146489 [host=cubietruck-braque]
 146528 [host=cubietruck-braque]
 146546 [host=cubietruck-braque]
 146565 [host=cubietruck-braque]
 146586 [host=cubietruck-braque]
 146616 [host=cubietruck-gleizes]
 146636 [host=cubietruck-gleizes]
 146660 fail irrelevant
 146689 [host=cubietruck-picasso]
 146737 [host=cubietruck-gleizes]
 146756 [host=cubietruck-gleizes]
 146714 [host=cubietruck-braque]
 146775 [host=cubietruck-braque]
 146799 [host=cubietruck-braque]
 146843 [host=cubietruck-picasso]
 146921 [host=cubietruck-braque]
 146995 [host=cubietruck-gleizes]
 147040 [host=cubietruck-gleizes]
 147084 [host=cubietruck-gleizes]
 147141 [host=cubietruck-gleizes]
 147195 fail irrelevant
 147265 fail irrelevant
 147340 fail irrelevant
 147419 [host=cubietruck-braque]
 147477 [host=cubietruck-braque]
 147520 [host=cubietruck-gleizes]
 147583 [host=cubietruck-braque]
 147649 [host=cubietruck-gleizes]
 147703 fail irrelevant
 147784 fail irrelevant
 147736 [host=cubietruck-braque]
 147885 [host=cubietruck-braque]
 147831 [host=cubietruck-braque]
 147981 fail irrelevant
 148068 [host=cubietruck-picasso]
 148144 [host=cubietruck-picasso]
 148196 [host=cubietruck-gleizes]
 148269 [host=cubietruck-picasso]
 148331 [host=cubietruck-braque]
 148406 [host=cubietruck-gleizes]
 148459 [host=cubietruck-gleizes]
 148503 [host=cubietruck-gleizes]
 148547 [host=cubietruck-gleizes]
 148615 [host=cubietruck-braque]
 148583 fail irrelevant
 148651 [host=cubietruck-gleizes]
 148688 fail irrelevant
 148729 fail irrelevant
 148775 fail irrelevant
 148799 [host=cubietruck-braque]
 148830 [host=cubietruck-gleizes]
 148887 [host=cubietruck-braque]
 149003 fail 153fd683681be13f380378acfc531cc3df206fd1 317d3eeb963a515e15a63fa356d8ebcda7041a51 9a1f14ad721bbcd833ec5108944c44a502392f03 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 9f27372677a68206d511de88ede22c53369a4ff7
 148954 fail irrelevant
 148990 fail irrelevant
 148997 fail 27a6edf50f121ad663ad31dbc2ebf9936fa8ead5 317d3eeb963a515e15a63fa356d8ebcda7041a51 c8b8157e126ae2fb6f65842677251d300ceff104 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 9b71d6a759a6835c7723afa3d79e1e7f10da4396
 148959 [host=cubietruck-braque]
 148967 [host=cubietruck-braque]
 148975 [host=cubietruck-braque]
 148969 [host=cubietruck-braque]
 148982 pass 0f814c0fed420209ccb881325b854beaa7c70fcf 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb 933ebad2470a169504799a1d95b8e410bd9847ef 2f4d068645c211e309812372cd0ac58c9024e93b 03bfe526ecadc86f31eda433b91dc90be0563919
 148993 fail 0d0d60ddc5e58359cff5be8dfd6dd27e98da0282 317d3eeb963a515e15a63fa356d8ebcda7041a51 788421d5a766a4ce216e99e2277bb11c54e7d0f6 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 3dd724dff085e13ad520f8e35aea717db2ff07d0
 149006 pass a1cd25b919509be2645dbe6f952d5263e0d4e4e5 317d3eeb963a515e15a63fa356d8ebcda7041a51 710ff7490ad897383eb35d1becadabd21a733f24 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 149011 fail 2feaa925bba06e77be918bcbfab63bc8201c8f19 317d3eeb963a515e15a63fa356d8ebcda7041a51 4e2ac8062cbe907be9fbf6b2e6f1fc947690c4de 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 1eeedaf5a0d9ed6324f3bd5b700bb22eb4355341
 149016 pass a7f3b901aacadef269bf335c1714b45e444e78e8 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 149019 fail d0236e2a554f2321512276b897e8a8a44f68e969 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 149020 pass 6b4140dafb6b3db9ead2e260757f1c3583936f19 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 149001 [host=cubietruck-picasso]
 149022 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 149025 [host=cubietruck-picasso]
 149028 [host=cubietruck-picasso]
 149032 [host=cubietruck-picasso]
 149035 [host=cubietruck-picasso]
 149043 [host=cubietruck-picasso]
 149037 [host=cubietruck-picasso]
 149041 [host=cubietruck-picasso]
 149045 [host=cubietruck-picasso]
 149074 fail irrelevant
 149123 fail irrelevant
 149154 fail irrelevant
 149193 [host=cubietruck-gleizes]
 149234 fail irrelevant
 149268 [host=cubietruck-braque]
 149314 [host=cubietruck-gleizes]
 149376 [host=cubietruck-gleizes]
 149407 fail irrelevant
 149434 [host=cubietruck-picasso]
 149455 [host=cubietruck-braque]
 149482 fail irrelevant
 149550 [host=cubietruck-braque]
 149508 [host=cubietruck-picasso]
 149590 [host=cubietruck-picasso]
 149643 [host=cubietruck-picasso]
 149629 [host=cubietruck-braque]
 149615 [host=cubietruck-braque]
 149635 fail irrelevant
 149684 fail irrelevant
 149666 fail irrelevant
 149696 [host=cubietruck-picasso]
 149732 [host=cubietruck-picasso]
 149773 [host=cubietruck-gleizes]
 149746 [host=cubietruck-gleizes]
 149803 [host=cubietruck-gleizes]
 149826 [host=cubietruck-gleizes]
 149886 [host=cubietruck-gleizes]
 149833 [host=cubietruck-gleizes]
 149850 [host=cubietruck-gleizes]
 149870 [host=cubietruck-gleizes]
 149909 [host=cubietruck-braque]
 149902 [host=cubietruck-braque]
 149895 [host=cubietruck-gleizes]
 150053 fail irrelevant
 150062 fail irrelevant
 150083 [host=cubietruck-braque]
 150099 [host=cubietruck-braque]
 150121 []
 150155 [host=cubietruck-braque]
 150131 [host=cubietruck-gleizes]
 150146 [host=cubietruck-gleizes]
 150170 [host=cubietruck-gleizes]
 150210 [host=cubietruck-gleizes]
 150228 fail 144dfe4215902b40a9d17fdb326054bbd8e07563 27acf0ef828bf719b2053ba398b195829413dbdd 9099dcbd61c8d22b5eedda783d143c222d2705a3 410cc30fdc590417ae730d635bbc70257adf6750 665dce17c04b574bb0ebcde4cac129c3dd9e681c 664e1bc12f8658da124a4eff7a8f16da073bd47f
 150190 [host=cubietruck-braque]
 150222 [host=cubietruck-gleizes]
 150237 [host=cubietruck-braque]
 150268 fail irrelevant
 150287 [host=cubietruck-braque]
 150347 [host=cubietruck-braque]
 150317 [host=cubietruck-picasso]
 150339 [host=cubietruck-picasso]
 150359 [host=cubietruck-braque]
 150374 fail irrelevant
 150404 [host=cubietruck-braque]
 150393 [host=cubietruck-braque]
 150395 [host=cubietruck-braque]
 150411 [host=cubietruck-braque]
 150405 [host=cubietruck-braque]
 150415 [host=cubietruck-braque]
 150417 [host=cubietruck-braque]
 150462 fail 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 150418 [host=cubietruck-braque]
 150424 [host=cubietruck-braque]
 150429 [host=cubietruck-braque]
 150474 [host=cubietruck-picasso]
 150435 [host=cubietruck-picasso]
 150431 [host=cubietruck-braque]
 150419 [host=cubietruck-picasso]
 150434 [host=cubietruck-braque]
 150466 [host=cubietruck-picasso]
 150455 [host=cubietruck-picasso]
 150458 [host=cubietruck-picasso]
 150459 [host=cubietruck-picasso]
 150470 [host=cubietruck-picasso]
 150461 [host=cubietruck-picasso]
 150463 [host=cubietruck-picasso]
 150483 pass 0f814c0fed420209ccb881325b854beaa7c70fcf 317d3eeb963a515e15a63fa356d8ebcda7041a51 70911f1f4aee0366b6122f2b90d367ec0f066beb 933ebad2470a169504799a1d95b8e410bd9847ef 2f4d068645c211e309812372cd0ac58c9024e93b 03bfe526ecadc86f31eda433b91dc90be0563919
 150493 fail 6f60d2a8503ce8c624bce6b53bf7b68476f5056f 27acf0ef828bf719b2053ba398b195829413dbdd 568eee7cf319fa95183c8d3b5e8dcf6e078ab8b3 410cc30fdc590417ae730d635bbc70257adf6750 2e3de6253422112ae43e608661ba94ea6b345694 9f3e9139fa6c3d620eb08dff927518fc88200b8d
 150501 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 150506 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 150511 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 150514 pass a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
 150521 fail 4d5f50d86b760864240c695adc341379fb47a796 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
Searching for interesting versions
 Result found: flight 146156 (pass), for basis pass
 Result found: flight 150462 (fail), for basis failure
 Repro found: flight 150483 (pass), for basis pass
 Repro found: flight 150493 (fail), for basis failure
 0 revisions at a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7 317d3eeb963a515e15a63fa356d8ebcda7041a51 a5235562444021e9c5aff08f45daa6b5b7952c7a 933ebad2470a169504799a1d95b8e410bd9847ef 76551856b28d227cb0386a1ab0e774329b941f7d 97f10daf5f4bac91db732ef45c562839686f2c04
No revisions left to test, checking graph state.
 Result found: flight 149022 (pass), for last pass
 Result found: flight 150501 (fail), for first failure
 Repro found: flight 150506 (pass), for last pass
 Repro found: flight 150511 (fail), for first failure
 Repro found: flight 150514 (pass), for last pass
 Repro found: flight 150521 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  libvirt git://libvirt.org/libvirt.git
  Bug introduced:  4d5f50d86b760864240c695adc341379fb47a796
  Bug not present: a1a18c6ab55869d3b00cf8c32e0e2262a10c8ce7
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/150521/

Auto packing the repository in background for optimum performance.
See "git help gc" for manual housekeeping.
error: The last gc run reported the following. Please correct the root cause
and remove gc.log.
Automatic cleanup will not be performed until the file is removed.

warning: There are too many unreachable loose objects; run 'git prune' to remove them.


  commit 4d5f50d86b760864240c695adc341379fb47a796
  Author: Pavel Hrdina <phrdina@redhat.com>
  Date:   Wed Jan 8 22:54:31 2020 +0100
  
      bootstrap.conf: stop creating AUTHORS file
      
      The existence of AUTHORS file is required for GNU projects but since
      commit <8bfb36db40f38e92823b657b5a342652064b5adc> we do not require
      these files to exist.
      
      Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
      Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

dot: graph is too large for cairo-renderer bitmaps. Scaling by 0.203509 to fit
pnmtopng: 32 colors found
Revision graph left in /home/logs/results/bisect/libvirt/build-armhf-libvirt.libvirt-build.{dot,ps,png,html,svg}.
----------------------------------------
150521: tolerable ALL FAIL

flight 150521 libvirt real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/150521/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build           fail baseline untested


jobs:
 build-armhf-libvirt                                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat May 30 06:57:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 06:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jevQX-0004f3-KK; Sat, 30 May 2020 06:57:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jevQW-0004ey-1g
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 06:57:04 +0000
X-Inumbo-ID: c3eac8d0-a242-11ea-a951-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3eac8d0-a242-11ea-a951-12813bfff9fa;
 Sat, 30 May 2020 06:57:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=RCR4gXeg3ZwlLW2kijjRMhKvM5UEGGI2m7MPmyVxWEg=; b=jitpWtTwShqK50VdRFpHqtAI/
 eB90f+bI9uNyWHoxe4Zzp8rW5F6aAc6DIB0ZsN9b6WFTygl/U5ieFJpzmQIddzUGd5bkFydWBGdzz
 BxSPhNfxJqgr37d+JgqIDE0S9a5JlsbxL6UVldz6kYky7aD21X+BFgbiz2LSyos61Ohsw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jevQU-0005cL-9m; Sat, 30 May 2020 06:57:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jevQT-0004Jy-Q0; Sat, 30 May 2020 06:57:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jevQT-0003nF-PO; Sat, 30 May 2020 06:57:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150524-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150524: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 06:57:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150524 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150524/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    0 days   12 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 08:30:33 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 08:30:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jewsU-0004Oi-Qd; Sat, 30 May 2020 08:30:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jewsS-0004CI-Oi
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 08:30:00 +0000
X-Inumbo-ID: bc7d33a0-a24f-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc7d33a0-a24f-11ea-8993-bc764e2007e4;
 Sat, 30 May 2020 08:29:53 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=fee/Jb7qKte17ONWVsJdJ+Nvo9sS8xNqkWYrSVPtYsw=; b=sJjNw83gfKfhVW7NDkbqUPH+t
 rQH8tt0nqn3QKVIka3CI+bRUYWROVpIDfPOXOC91aAWbruNxVQdrBocce03CmcRHvrJh13S/Kjrmj
 D5lU0N5YGPcNqGlnl5DfbgpE0Wt1jQfpYHpDx1PPjmdQoXEu0c9kySWBOlQJSXzCupn90=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jewsL-00081A-D7; Sat, 30 May 2020 08:29:53 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jewsL-00016b-0T; Sat, 30 May 2020 08:29:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jewsK-0001Ui-Vi; Sat, 30 May 2020 08:29:52 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150477-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150477: tolerable FAIL - PUSHED
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
X-Osstest-Versions-That: xen=724913de8ac8426d313a4645741d86c1169ae406
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 08:29:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150477 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150477/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 150444

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150444
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150444
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150444
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150444
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150444
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150444
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150444
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150444
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150444
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199
baseline version:
 xen                  724913de8ac8426d313a4645741d86c1169ae406

Last test of basis   150444  2020-05-28 15:34:30 Z    1 days
Testing same since   150477  2020-05-29 12:11:43 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   724913de8a..1497e78068  1497e78068421d83956f8e82fb6e1bf1fc3b1199 -> master


From xen-devel-bounces@lists.xenproject.org Sat May 30 09:39:02 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 09:39:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jexwp-0001aH-7V; Sat, 30 May 2020 09:38:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jexwo-0001aC-7I
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 09:38:34 +0000
X-Inumbo-ID: 4eff890e-a259-11ea-a965-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4eff890e-a259-11ea-a965-12813bfff9fa;
 Sat, 30 May 2020 09:38:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+NqioLsVl0gygz0T6E/Zo7teBpIlPZJxL+FqqiRqe4Y=; b=JnEQoMYhkPD4Ynt+z98hEeC3N
 kFUAK24mrqHqO2DYpTtKtNACq51pcQgJUIkX1D7J4ViQSCv0kXZ0SQe1xFF4bzGJaFxDXDvLilO9i
 LHxrvxx+eUbKFniLUaAKRHaFvm/UZYxohUxjd0mvwPauKbCmgLn7ZZatINjjy+lLrALtw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jexwd-0000yf-Tn; Sat, 30 May 2020 09:38:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jexwd-0006Xc-MF; Sat, 30 May 2020 09:38:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jexwd-0003fF-Lc; Sat, 30 May 2020 09:38:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150527-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150527: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 09:38:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150527 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150527/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   13 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 10:30:32 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 10:30:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jeykn-0006Qx-EJ; Sat, 30 May 2020 10:30:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jeykl-0006Qs-Vs
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 10:30:12 +0000
X-Inumbo-ID: 8511be02-a260-11ea-a96e-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8511be02-a260-11ea-a96e-12813bfff9fa;
 Sat, 30 May 2020 10:30:02 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2TEml/NtXC+zJ63mZVTrgmEggkEo0HNGFyD0J3Isc30=; b=tG9bDhXT1DCZ8BbtlD0VYyrUc
 9XjSMKGF4QQgqcv98+/dmR/wr9G4ibbnPfsfsTEwCnooefDDxCzuvzsDaEr/Fu21dnCUeFCZwDLPi
 YY1RZc4MDLaKsdEFAV8ia3PNPycakX6/r0QH1v14RwaZxdpba0fzc1KbvqrDMxDoymlJI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeykb-00024v-CM; Sat, 30 May 2020 10:30:01 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jeykb-0000zq-4C; Sat, 30 May 2020 10:30:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jeykb-0002I1-3a; Sat, 30 May 2020 10:30:01 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150492-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150492: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=b8bee16e94df0fcd03bdad9969c30894418b0e6e
X-Osstest-Versions-That: qemuu=a20ab81d22300cca80325c284f21eefee99aa740
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 10:30:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150492 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150492/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     16 guest-localmigrate       fail REGR. vs. 150457

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150457
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150457
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150457
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150457
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150457
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                b8bee16e94df0fcd03bdad9969c30894418b0e6e
baseline version:
 qemuu                a20ab81d22300cca80325c284f21eefee99aa740

Last test of basis   150457  2020-05-28 20:07:40 Z    1 days
Testing same since   150492  2020-05-29 15:45:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  BALATON Zoltan <balaton@eik.bme.hu>
  Gerd Hoffmann <kraxel@redhat.com>
  Peter Maydell <peter.maydell@linaro.org>
  Philippe Mathieu-Daudé <f4bug@amsat.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   a20ab81d22..b8bee16e94  b8bee16e94df0fcd03bdad9969c30894418b0e6e -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sat May 30 12:07:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 12:07:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf0Ga-0005Xj-4j; Sat, 30 May 2020 12:07:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jf0GZ-0005Xb-Ca
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 12:07:07 +0000
X-Inumbo-ID: 12d84e9c-a26e-11ea-a97d-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12d84e9c-a26e-11ea-a97d-12813bfff9fa;
 Sat, 30 May 2020 12:07:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=BuuwhNIAdrmeZu2Qr25b+aQInydO/9cSSM9k4EFCdE0=; b=W9jNXZri8COVLihCyamnSTdhy
 KAhOnBvr5f3cLsh419uH1zDYAPY0HPUA7U5x38hqadkAxljQMps+dntdoPtmqYG+ckJM/3Ih5lpOt
 PgRCuDI30lmSj4bgg+qv/bl6E4nX6+atWaPOoNB0scURxizziz5y2PltvFJcYcxotF4Rk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf0GU-00044W-Tr; Sat, 30 May 2020 12:07:02 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf0GU-0005gq-Kt; Sat, 30 May 2020 12:07:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jf0GU-0005ck-K8; Sat, 30 May 2020 12:07:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150508-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 150508: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-5.4:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
X-Osstest-Versions-This: linux=e0d81ce760044efd3f26004aa32821c34968512a
X-Osstest-Versions-That: linux=1cdaf895c99d319c0007d0b62818cf85fc4b087f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 12:07:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150508 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150508/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150273
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150294
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop             fail never pass
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop              fail never pass
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop             fail never pass
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop             fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass

version targeted for testing:
 linux                e0d81ce760044efd3f26004aa32821c34968512a
baseline version:
 linux                1cdaf895c99d319c0007d0b62818cf85fc4b087f

Last test of basis   150294  2020-05-21 07:55:33 Z    9 days
Testing same since   150410  2020-05-27 16:09:38 Z    2 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Al Viro <viro@zeniv.linux.org.uk>
  Alain Volmat <alain.volmat@st.com>
  Alan Maguire <alan.maguire@oracle.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Monakov <amonakov@ispras.ru>
  Alexander Usyskin <alexander.usyskin@intel.com>
  Alexandru Ardelean <alexandru.ardelean@analog.com>
  Alexei Starovoitov <ast@kernel.org>
  Andreas Färber <afaerber@suse.de>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andriin@fb.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Ard Biesheuvel <ardb@kernel.org>
  Artem Borisov <dedsa2002@gmail.com>
  Arun Easi <aeasi@marvell.com>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Aymeric Agon-Rambosson <aymeric.agon@yandex.com>
  Babu Moger <babu.moger@amd.com>
  Bin Lu <Bin.Lu@arm.com>
  Bob Peterson <rpeterso@redhat.com>
  Bodo Stroesser <bstroesser@ts.fujitsu.com>
  Brent Lu <brent.lu@intel.com>
  Bryant G. Ly <bryangly@gmail.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chris Chiu <chiu@endlessm.com>
  Chris Wilson <chris@chris-wilson.co.uk>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Gmeiner <christian.gmeiner@gmail.com>
  Christian Lachner <gladiac@gmail.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Colin Xu <colin.xu@intel.com>
  Cristian Ciocaltea <cristian.ciocaltea@gmail.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Daniel Drake <drake@endlessm.com>
  Daniel Playfair Cal <daniel.playfair.cal@gmail.com>
  David Hildenbrand <david@redhat.com>
  David Howells <dhowells@redhat.com>
  David S. Miller <davem@davemloft.net>
  Dragos Bogdan <dragos.bogdan@analog.com>
  Eric Biggers <ebiggers@google.com>
  Ewan D. Milne <emilne@redhat.com>
  Fabrice Gasnier <fabrice.gasnier@st.com>
  Frédéric Pierret (fepitre) <frederic.pierret@qubes-os.org>
  Gavin Shan <gshan@redhat.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Gerald Schaefer <gerald.schaefer@de.ibm.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Igor Russkikh <irusskikh@marvell.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Sitnicki <jakub@cloudflare.com>
  James Hilliard <james.hilliard1@gmail.com>
  Jian-Hong Pan <jian-hong@endlessm.com>
  Jiri Kosina <jkosina@suse.cz>
  Joerg Roedel <jroedel@suse.de>
  John Hubbard <jhubbard@nvidia.com>
  John Johansen <john.johansen@canonical.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Juliet Kim <julietk@linux.vnet.ibm.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kailang Yang <kailang@realtek.com>
  Kees Cook <keescook@chromium.org>
  Keno Fischer <keno@juliacomputing.com>
  Kevin Hao <haokexin@gmail.com>
  Klaus Doth <kdlnx@doth.eu>
  Krzysztof Struczynski <krzysztof.struczynski@huawei.com>
  Lianbo Jiang <lijiang@redhat.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Loïc Yhuel <loic.yhuel@gmail.com>
  Lucas Stach <l.stach@pengutronix.de>
  Marco Elver <elver@google.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mauro Carvalho Chehab <mchehab@kernel.org>
  Maxim Petrov <mmrmaximuzz@gmail.com>
  Mel Gorman <mgorman@techsingularity.net>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michał Mirosław <mirq-linux@rere.qmqm.pl>
  Mike Pozulp <pozulp.kernel@gmail.com>
  Mimi Zohar <zohar@linux.ibm.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Navid Emamdoost <navid.emamdoost@gmail.com>
  Neil Horman <nhorman@tuxdriver.com>
  Niklas Schnelle <schnelle@linux.ibm.com>
  Nilesh Javali <njavali@marvell.com>
  Olivier Moysan <olivier.moysan@st.com>
  Oscar Carter <oscar.carter@gmx.com>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Cercueil <paul@crapouillou.net>
  PeiSen Hou <pshou@realtek.com>
  Peter Ujfalusi <peter.ujfalusi@ti.com>
  Peter Xu <peterx@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Peter Zijlstra <a.p.zijlstra@chello.nl>
  Phil Auld <pauld@redhat.com>
  Philipp Rudo <prudo@linux.ibm.com>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Qian Cai <cai@lca.pw>
  Qiushi Wu <wu000273@umn.edu>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ricardo Ribalda Delgado <ribalda@kernel.org>
  Richard Clark <richard.xnu.clark@gmail.com>
  Richard Weinberger <richard@nod.at>
  Rick Edgecombe <rick.p.edgecombe@intel.com>
  Roberto Sassu <roberto.sassu@huawei.com>
  Rodrigo Vivi <rodrigo.vivi@intel.com>
  Russell Currey <ruscur@russell.cc>
  Sagar Shrikant Kadam <sagar.kadam@sifive.com>
  Samuel Iglesias Gonsalvez <siglesias@igalia.com>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sasha Levin <sashal@kernel.org>
  Scott Bahling <sbahling@suse.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shay Agroskin <shayagr@amazon.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Stefano Garzarella <sgarzare@redhat.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudeep Holla <sudeep.holla@arm.com>
  Takashi Iwai <tiwai@suse.de>
  Thierry Reding <treding@nvidia.com>
  Thomas Gleixner <tglx@linutronix.de>
  Tomas Winkler <tomas.winkler@intel.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vincent Guittot <vincent.guittot@linaro.org>
  Vinod Koul <vkoul@kernel.org>
  Vladimir Murzin <vladimir.murzin@arm.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa@kernel.org>
  Wolfram Sang <wsa@the-dreams.de>
  Wu Bo <wubo40@huawei.com>
  Xin Tan <tanxin.ctf@gmail.com>
  Xiyu Yang <xiyuyang19@fudan.edu.cn>
  Yoshiyuki Kurauchi <ahochauwaaaaa@gmail.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   1cdaf895c99d..e0d81ce76004  e0d81ce760044efd3f26004aa32821c34968512a -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat May 30 12:58:20 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 12:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf13r-0001Dd-1N; Sat, 30 May 2020 12:58:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jf13p-0001DY-MB
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 12:58:01 +0000
X-Inumbo-ID: 3092d996-a275-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3092d996-a275-11ea-81bc-bc764e2007e4;
 Sat, 30 May 2020 12:58:00 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Fx9VhnJpTb+kCVIYcxyxx8NqFChy38PgnitRHC7Xkz4=; b=Wtpa/rSMigTi5msh/Te7s31Ji
 6zLFny8NXNAsiXbIE59WbGEeGxpuYk29v2TrhE65VdhSUEJauFc819A/UXoEkhppod+ggCbUwhSUJ
 3mucc2Mj6iaEPXzEz7Q6+IQGWF0Y9tMNS5WW77jWiFURvH4xmlIz9m9sN+KgFEhDsYST8=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf13m-00054G-QR; Sat, 30 May 2020 12:57:58 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf13m-0000BE-A0; Sat, 30 May 2020 12:57:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jf13m-0007Be-9U; Sat, 30 May 2020 12:57:58 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150523-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150523: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=6f60d2a8503ce8c624bce6b53bf7b68476f5056f
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 12:57:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150523 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150523/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              6f60d2a8503ce8c624bce6b53bf7b68476f5056f
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  134 days
Failing since        146211  2020-01-18 04:18:52 Z  133 days  124 attempts
Testing same since   150462  2020-05-29 04:18:46 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19583 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 13:01:07 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 13:01:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf16o-00022H-LG; Sat, 30 May 2020 13:01:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jf16n-00022B-Jw
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 13:01:05 +0000
X-Inumbo-ID: 9df9136a-a275-11ea-a989-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9df9136a-a275-11ea-a989-12813bfff9fa;
 Sat, 30 May 2020 13:01:03 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=U/ueQBi9ZagP6RWerZbeFINVml4j6G1H7MtFQJgruaA=; b=6oM8WaFZ2WOcCk4n9H4uE9Cr4
 aGAQPGlKrjc29L/czVWWy0KE650xSx//Lh+O4ElMBu7UpY9cG6YVg/snwZWsnRXG33XqKmOezMFJ8
 +nqNPQiWesnFmduiHB6uo+azSKpb39wqjmaJPyfV/NKdtryBmS9AjYZOH26BQH3r23d78=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf16k-00059M-WA; Sat, 30 May 2020 13:01:03 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf16k-0000O3-E3; Sat, 30 May 2020 13:01:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jf16k-000862-DW; Sat, 30 May 2020 13:01:02 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150531-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150531: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 13:01:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150531 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150531/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    1 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   14 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 15:55:34 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 15:55:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf3pB-0007Ww-CG; Sat, 30 May 2020 15:55:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YcdI=7M=hermes.cam.ac.uk=amc96@srs-us1.protection.inumbo.net>)
 id 1jf3pA-0007Wr-0q
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 15:55:04 +0000
X-Inumbo-ID: ec2bd9e2-a28d-11ea-81bc-bc764e2007e4
Received: from ppsw-31.csi.cam.ac.uk (unknown [131.111.8.131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ec2bd9e2-a28d-11ea-81bc-bc764e2007e4;
 Sat, 30 May 2020 15:55:03 +0000 (UTC)
X-Cam-AntiVirus: no malware found
X-Cam-ScannerInfo: http://help.uis.cam.ac.uk/email-scanner-virus
Received: from 88-109-182-220.dynamic.dsl.as9105.com ([88.109.182.220]:50216
 helo=[192.168.1.219])
 by ppsw-31.csi.cam.ac.uk (smtp.hermes.cam.ac.uk [131.111.8.157]:465)
 with esmtpsa (PLAIN:amc96) (TLSv1.2:ECDHE-RSA-AES128-GCM-SHA256:128)
 id 1jf3p4-000jH7-MD (Exim 4.92.3)
 (return-path <amc96@hermes.cam.ac.uk>); Sat, 30 May 2020 16:54:58 +0100
Subject: Re: [PATCH v10 05/12] libs: add libxenhypfs
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
References: <20200519072106.26894-1-jgross@suse.com>
 <20200519072106.26894-6-jgross@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8468b7ea-81ba-0512-c638-322803134576@citrix.com>
Date: Sat, 30 May 2020 16:54:57 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.7.0
MIME-Version: 1.0
In-Reply-To: <20200519072106.26894-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 Ian Jackson <ian.jackson@eu.citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 19/05/2020 08:20, Juergen Gross wrote:
> diff --git a/tools/libs/hypfs/include/xenhypfs.h b/tools/libs/hypfs/include/xenhypfs.h
> new file mode 100644
> index 0000000000..ab157edceb
> --- /dev/null
> +++ b/tools/libs/hypfs/include/xenhypfs.h
> @@ -0,0 +1,90 @@
> +/*
> + * Copyright (c) 2019 SUSE Software Solutions Germany GmbH
> + *
> + * This library is free software; you can redistribute it and/or
> + * modify it under the terms of the GNU Lesser General Public
> + * License as published by the Free Software Foundation;
> + * version 2.1 of the License.
> + *
> + * This library is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * Lesser General Public License for more details.
> + *
> + * You should have received a copy of the GNU Lesser General Public
> + * License along with this library; If not, see <http://www.gnu.org/licenses/>.
> + */
> +#ifndef XENHYPFS_H
> +#define XENHYPFS_H
> +
> +#include <stdbool.h>
> +#include <stdint.h>
> +#include <sys/types.h>
> +
> +/* Callers who don't care don't need to #include <xentoollog.h> */
> +struct xentoollog_logger;
> +
> +typedef struct xenhypfs_handle xenhypfs_handle;
> +
> +struct xenhypfs_dirent {
> +    char *name;
> +    size_t size;
> +    enum {
> +        xenhypfs_type_dir,
> +        xenhypfs_type_blob,
> +        xenhypfs_type_string,
> +        xenhypfs_type_uint,
> +        xenhypfs_type_int,
> +        xenhypfs_type_bool
> +    } type;
> +    enum {
> +        xenhypfs_enc_plain,
> +        xenhypfs_enc_gzip
> +    } encoding;
> +    bool is_writable;
> +};

I'm afraid this is a blocker bug for 4.14.

enums aren't safe in public ABI structs, even under _GNU_SOURCE.  Use
unsigned ints, and declare the enumerations earlier.

There are also 3/7 bytes of trailing padding and very little forward
extensibility.  How about an unsigned int flags, of which writable is
the bottom bit, seeing as this is purely an informational field?

~Andrew


From xen-devel-bounces@lists.xenproject.org Sat May 30 17:17:54 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 17:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf56q-0006DC-Of; Sat, 30 May 2020 17:17:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jf56q-0006D7-0Y
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 17:17:24 +0000
X-Inumbo-ID: 69ef3670-a299-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69ef3670-a299-11ea-8993-bc764e2007e4;
 Sat, 30 May 2020 17:17:18 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=moC0Ntqmu5dRfyeqniAtDXDmuV8q+STn7yFQ5dVDZ10=; b=HXosUgBPJ1mB0Ao01TDm2zH+x
 YALLYJhzKNw/LA1vJe8C0vhDRof4AbCnypejeMY7LfpbA1v4C5hP/7c109v9p/kX0ishutp8aiSoW
 mQ9w234NImpXsVNpajlohPbK2GkewuuTmZeembghIRYmpt/soDW0hVkQjF+kqqqKboYGU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf56j-0002To-6j; Sat, 30 May 2020 17:17:17 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf56i-0005dI-QQ; Sat, 30 May 2020 17:17:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jf56i-000618-Pr; Sat, 30 May 2020 17:17:16 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150536-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150536: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 17:17:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150536 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150536/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    2 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   15 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 18:56:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 18:56:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf6dt-0005vq-Tv; Sat, 30 May 2020 18:55:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jf6ds-0005vl-7j
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 18:55:36 +0000
X-Inumbo-ID: 238f9040-a2a7-11ea-a9e3-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 238f9040-a2a7-11ea-a9e3-12813bfff9fa;
 Sat, 30 May 2020 18:55:33 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=vBXe09p4+Eb/pqp0Cf/OSxjqp7PbSgsHryYJfT2RmwU=; b=HJ4aMwocJJ1pmNV2ng0Ud4XhJ
 HS8hJI7N6PW6oYt/pczC9UZmExsyfbRLUyyewS2RLvC74HnMtxrA7ebZH+xMuoJ1M3MhcRyB409RA
 ty5rp6dhVW/a6kOYFBZXJ+dsU6hdSSwvY/6RNETdv+oczHDPZxDBwfZl3NRWyqKgN8USg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf6do-0004UE-0R; Sat, 30 May 2020 18:55:32 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf6dd-00048T-Fc; Sat, 30 May 2020 18:55:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jf6dd-0008LA-Dk; Sat, 30 May 2020 18:55:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150520-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150520: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:regression
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=86852175b016f0c6873dcbc24b93d12b7b246612
X-Osstest-Versions-That: linux=75caf310d16cc5e2f851c048cd597f5437013368
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 18:55:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150520 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150520/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail REGR. vs. 150464
 test-armhf-armhf-xl-vhd     15 guest-start/debian.repeat fail REGR. vs. 150464

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 150464

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150464
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150464
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150464
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150464
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150464
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150464
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150464
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150464
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                86852175b016f0c6873dcbc24b93d12b7b246612
baseline version:
 linux                75caf310d16cc5e2f851c048cd597f5437013368

Last test of basis   150464  2020-05-29 08:55:41 Z    1 days
Testing same since   150520  2020-05-30 03:24:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alaa Hleihel <alaa@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Aric Cyr <aric.cyr@amd.com>
  Arnd Bergmann <arnd@arndb.de>
  Borislav Petkov <bp@suse.de>
  Catalin Marinas <catalin.marinas@arm.com>
  Changming Liu <liu.changm@northeastern.edu>
  Chris Chiu <chiu@endlessm.com>
  Christoph Hellwig <hch@lst.de>
  Dave Airlie <airlied@redhat.com>
  Dennis Dalessandro <dennis.dalessandro@intel.com>
  Dennis YC Hsieh <dennis-yc.hsieh@mediatek.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Doug Ledford <dledford@redhat.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Helge Deller <deller@gmx.de>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Ian Ray <ian.ray@ge.com>
  Ilya Dryomov <idryomov@gmail.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Jerry Lee <leisurelysw24@gmail.com>
  Joerg Roedel <jroedel@suse.de>
  Jonathan Marek <jonathan@marek.ca>
  Kaike Wan <kaike.wan@intel.com>
  Kailang Yang <kailang@realtek.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lubomir Rintel <lkundrak@v3.sk>
  Maor Gottlieb <maorg@mellanox.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Maxime Ripard <maxime@cerno.tech>
  Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
  Paul Cercueil <paul@crapouillou.net>
  Peng Hao <richard.peng@oppo.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qiushi Wu <wu000273@umn.edu>
  Robert Beckett <bob.beckett@collabora.com>
  Sam Ravnborg <sam@ravnborg.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shawn Guo <shawnguo@kernel.org>
  Simon Ser <contact@emersion.fr>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephen Boyd <sboyd@kernel.org>
  Takashi Iwai <tiwai@suse.de>
  Tony Lindgren <tony@atomide.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Valentine Fatiev <valentinef@mellanox.com>
  Vincent Stehlé <vincent.stehle@laposte.net>
  Vinod Koul <vkoul@kernel.org>
  Will Deacon <will@kernel.org>
  Yuji Ishikawa <yuji2.ishikawa@toshiba.co.jp>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1367 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 20:54:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 20:54:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf8Up-0007Lb-2I; Sat, 30 May 2020 20:54:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jf8Un-0007LW-Jh
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 20:54:21 +0000
X-Inumbo-ID: bbe02318-a2b7-11ea-a9f7-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bbe02318-a2b7-11ea-a9f7-12813bfff9fa;
 Sat, 30 May 2020 20:54:20 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=/WLpE/IKwNpBe4lO3cNyEfz3AHSsozVC77J0jGq1VFc=; b=tQ7cgUot2QNzToJSLO0dZFRjC
 DT/szMjt4RMUSL1DWZJj6exoc79cqM+w2V2JtQWIBdGcT+FlIQOi+/EUROhfWrHSc8w2wARqPGF49
 wJYaKlltWxyX2yphEJpUUckO5k7usvAp4KrcAIXZo1inUzt84GXkGazeLufv+iLqN+sCo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf8Ul-0006yX-Td; Sat, 30 May 2020 20:54:19 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jf8Ul-0001fF-M0; Sat, 30 May 2020 20:54:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jf8Ul-0002Ha-LL; Sat, 30 May 2020 20:54:19 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150541-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150541: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 20:54:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150541 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150541/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    2 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   16 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 30 22:28:08 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 22:28:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jf9x4-0006Im-2z; Sat, 30 May 2020 22:27:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnEH=7M=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jf9x2-0006Ih-5Z
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 22:27:36 +0000
X-Inumbo-ID: c136bedc-a2c4-11ea-aa06-12813bfff9fa
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c136bedc-a2c4-11ea-aa06-12813bfff9fa;
 Sat, 30 May 2020 22:27:33 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UMM6I1098715;
 Sat, 30 May 2020 22:26:48 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=CBHeXinRSNKscFFXqb9c9PV2yKgdDM1KpYggUDQwi+Q=;
 b=WyX+m6Zvxh5lConkWkFX3kS0hdsLPuUxDG6Ek8mXJFtSNttmUPs9D3dyY4ZMcB2UJBkq
 EOWMGNjvbkKqEIAfrq/FHGnRELR9UuXqFfFEkZza60jvbPYsFxlrEenJj3LT1GtOQ+2d
 0ZMLln8PGYHq53o6YLU/tt7bL2dx7M4wJiSM17+lIxQ/qi5HdYGVK+jwYqFBle8qz7x8
 n6RyHbaFk/vNuA3Mq9BoChiKw4YS8JseTtPNxlBLqFoYF0MFQeLeAgNA27kU1v9zT+Sv
 vKRFkrBkOuo+xSKm4jZaffXJzUcQARNcPhcR77mEOqLRAI373UAQqpt9tO/WIpgK+YUs CA== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2120.oracle.com with ESMTP id 31bg4mssge-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 30 May 2020 22:26:48 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UMNnUB129829;
 Sat, 30 May 2020 22:26:48 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by userp3020.oracle.com with ESMTP id 31bethnktf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 30 May 2020 22:26:48 +0000
Received: from abhmp0006.oracle.com (abhmp0006.oracle.com [141.146.116.12])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04UMQZa9014487;
 Sat, 30 May 2020 22:26:35 GMT
Received: from [10.39.241.21] (/10.39.241.21)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 30 May 2020 15:26:34 -0700
Subject: Re: [PATCH 01/12] xen/manage: keep track of the on-going suspend mode
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1589926004.git.anchalag@amazon.com>
 <20200519232451.GA18632@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <d360e97f-1935-89f1-6dab-3b0bc6b1b3e2@oracle.com>
Date: Sat, 30 May 2020 18:26:32 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <20200519232451.GA18632@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 mlxlogscore=999
 bulkscore=0 mlxscore=0 phishscore=0 suspectscore=0 malwarescore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005300173
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 clxscore=1011
 lowpriorityscore=0
 malwarescore=0 phishscore=0 suspectscore=0 priorityscore=1501 adultscore=0
 mlxlogscore=999 cotscore=-2147483648 bulkscore=0 mlxscore=0
 impostorscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2004280000 definitions=main-2005300173
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 7:24 PM, Anchal Agarwal wrote:
> 
> +enum suspend_modes {
> +	NO_SUSPEND = 0,
> +	XEN_SUSPEND,
> +	PM_SUSPEND,
> +	PM_HIBERNATION,
> +};
> +
> +/* Protected by pm_mutex */
> +static enum suspend_modes suspend_mode = NO_SUSPEND;
> +
> +bool xen_suspend_mode_is_xen_suspend(void)
> +{
> +	return suspend_mode == XEN_SUSPEND;
> +}
> +
> +bool xen_suspend_mode_is_pm_suspend(void)
> +{
> +	return suspend_mode == PM_SUSPEND;
> +}
> +
> +bool xen_suspend_mode_is_pm_hibernation(void)
> +{
> +	return suspend_mode == PM_HIBERNATION;
> +}
> +
> +


I don't see these last two used anywhere. Are you, in fact,
distinguishing between PM suspend and hibernation?


(I would also probably shorten the name a bit, perhaps
xen_is_pv/pm_suspend()?)


-boris





From xen-devel-bounces@lists.xenproject.org Sat May 30 22:57:37 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 22:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfAPl-0000L8-HM; Sat, 30 May 2020 22:57:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnEH=7M=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jfAPk-0000L3-NX
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 22:57:16 +0000
X-Inumbo-ID: e7bc26c4-a2c8-11ea-9dbe-bc764e2007e4
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7bc26c4-a2c8-11ea-9dbe-bc764e2007e4;
 Sat, 30 May 2020 22:57:15 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UMr5Bx162406;
 Sat, 30 May 2020 22:56:38 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=nRI8/WGqU10HxUBdfywB8qJJ+zkivb2UYjEAwimayWg=;
 b=j9RAE4QJDyU5mkZtmHGy9KeRSad5ty+NbS47nrwkSZqhsxge3YTaJtMzPDdR/5IPG3wf
 rnO7jFpNK3ZBwx3S+3EnZEwhJdtVcJ9bOEAj1aZG1Rf/NwALS2bDwYskk8MtvmcRUrNm
 6wE01JzWk5OmP+lrQnNRzCeXGaQf/Bxj4gGoSU5ORhwHs6VjQTRPUVHmgA3KTNpf5/4D
 W6JgqISSuyVK5E380RpU7mTPzlmzLESU37iMiq1rzxuSP7AH5Hq0/3mE7RfazgSLqJX5
 p5TBnPP2FTpyeUWEzAqyAdjFdFLmowD1GyTFHbqwjxhjzRPeBItITt0e95/S8JnaHxH2 sg== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 31bfeksw3v-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 30 May 2020 22:56:38 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UMr5er060838;
 Sat, 30 May 2020 22:56:37 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3030.oracle.com with ESMTP id 31bckr8f4r-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 30 May 2020 22:56:37 +0000
Received: from abhmp0001.oracle.com (abhmp0001.oracle.com [141.146.116.7])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04UMuXtw014919;
 Sat, 30 May 2020 22:56:33 GMT
Received: from [10.39.241.21] (/10.39.241.21)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 30 May 2020 15:56:32 -0700
Subject: Re: [PATCH 02/12] xenbus: add freeze/thaw/restore callbacks support
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1589926004.git.anchalag@amazon.com>
 <7fd12227f923eacc5841b47bd69f72b4105843a7.1589926004.git.anchalag@amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <835ca864-3e35-9a82-f3fd-24ca4e2ec06e@oracle.com>
Date: Sat, 30 May 2020 18:56:30 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <7fd12227f923eacc5841b47bd69f72b4105843a7.1589926004.git.anchalag@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 malwarescore=0 adultscore=0
 suspectscore=0 mlxscore=0 spamscore=0 mlxlogscore=999 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005300178
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=0
 mlxlogscore=999 priorityscore=1501 bulkscore=0 phishscore=0 clxscore=1015
 impostorscore=0 adultscore=0 spamscore=0 mlxscore=0 lowpriorityscore=0
 cotscore=-2147483648 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005300178
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 7:25 PM, Anchal Agarwal wrote:
> 
>  int xenbus_dev_resume(struct device *dev)
>  {
> -	int err;
> +	int err = 0;


That's not necessary.


>  	struct xenbus_driver *drv;
>  	struct xenbus_device *xdev
>  		= container_of(dev, struct xenbus_device, dev);
> -
> +	bool xen_suspend = xen_suspend_mode_is_xen_suspend();
>  	DPRINTK("%s", xdev->nodename);
> 
>  	if (dev->driver == NULL)
> @@ -627,24 +645,32 @@ int xenbus_dev_resume(struct device *dev)
>  	drv = to_xenbus_driver(dev->driver);
>  	err = talk_to_otherend(xdev);
>  	if (err) {
> -		pr_warn("resume (talk_to_otherend) %s failed: %i\n",
> +		pr_warn("%s (talk_to_otherend) %s failed: %i\n",


Please use dev_warn() everywhere, we just had a bunch of patches that
replaced pr_warn(). In fact, this is one of the lines that got changed.


> 
>  int xenbus_dev_cancel(struct device *dev)
>  {
> -	/* Do nothing */
> -	DPRINTK("cancel");
> +	int err = 0;


Again, no need to initialize.


> +	struct xenbus_driver *drv;
> +	struct xenbus_device *xdev
> +		= container_of(dev, struct xenbus_device, dev);


xendev please to be consistent with other code. And use to_xenbus_device().


-boris



From xen-devel-bounces@lists.xenproject.org Sat May 30 23:04:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 23:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfAWp-0001EW-9j; Sat, 30 May 2020 23:04:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnEH=7M=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jfAWo-0001ER-Bj
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 23:04:34 +0000
X-Inumbo-ID: eca68a98-a2c9-11ea-9dbe-bc764e2007e4
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eca68a98-a2c9-11ea-9dbe-bc764e2007e4;
 Sat, 30 May 2020 23:04:33 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UN2U8h167944;
 Sat, 30 May 2020 23:04:06 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=ArGbxdMf43nV2+Q8KO87k8kj/lIFrSIB63prdwdnqAk=;
 b=FcJ2lbEDjniowfATo3EvNy7aLk0eja41UWKv4Eb9R19t6yFA6Y2Hr67cdYIWkH0r8nq1
 vX0hhi9GQyXDyCaiBIeLEOF/cXWAhw6prOHTbrzHenWpkIZZoGhNGiGx7SeBBGfau8fh
 Mr10uXg+FCjLYilgRkCi2MVpIvMYmbmcefaz5aA0DtrkWDLymK+aWg3UkmXzSmegZQyr
 mbVgy4wcIg0/pAbwviwegoW20zUTXZq6iK3xG6M/rB/x3RmgUtPmF39RSYIZbt/tAUXD
 0P0Qs3926RZp542b9pvfvYCrU9nExcBjOQJPnmgbtIZ7azvONHMPVbJhWwjKDWSXtuyW 6w== 
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by aserp2120.oracle.com with ESMTP id 31bfekswe0-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 30 May 2020 23:04:06 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UMrm1I189140;
 Sat, 30 May 2020 23:02:06 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by userp3020.oracle.com with ESMTP id 31bethp7bq-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 30 May 2020 23:02:05 +0000
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 04UN232l015388;
 Sat, 30 May 2020 23:02:03 GMT
Received: from [10.39.241.21] (/10.39.241.21)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 30 May 2020 16:02:03 -0700
Subject: Re: [PATCH 03/12] x86/xen: Introduce new function to map
 HYPERVISOR_shared_info on Resume
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1589926004.git.anchalag@amazon.com>
 <529f544a64bb93b920bf86b1d3f86d93b0a4219b.1589926004.git.anchalag@amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <72989b50-0c13-7a2b-19e2-de4a3646c83f@oracle.com>
Date: Sat, 30 May 2020 19:02:01 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <529f544a64bb93b920bf86b1d3f86d93b0a4219b.1589926004.git.anchalag@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 mlxlogscore=999
 bulkscore=0 mlxscore=0 phishscore=0 suspectscore=0 malwarescore=0
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005300178
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=0
 mlxlogscore=999 priorityscore=1501 bulkscore=0 phishscore=0 clxscore=1015
 impostorscore=0 adultscore=0 spamscore=0 mlxscore=0 lowpriorityscore=0
 cotscore=-2147483648 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005300179
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 7:25 PM, Anchal Agarwal wrote:
> Introduce a small function which reuses the shared page's PA allocated
> during guest initialization in reserve_shared_info(), instead of
> allocating a new page during the resume flow.
> It also maps shared_info_page by calling xen_hvm_init_shared_info()
> from the new function.
>
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> ---
>  arch/x86/xen/enlighten_hvm.c | 7 +++++++
>  arch/x86/xen/xen-ops.h       | 1 +
>  2 files changed, 8 insertions(+)
>
> diff --git a/arch/x86/xen/enlighten_hvm.c b/arch/x86/xen/enlighten_hvm.c
> index e138f7de52d2..75b1ec7a0fcd 100644
> --- a/arch/x86/xen/enlighten_hvm.c
> +++ b/arch/x86/xen/enlighten_hvm.c
> @@ -27,6 +27,13 @@
>  
>  static unsigned long shared_info_pfn;
>  
> +void xen_hvm_map_shared_info(void)
> +{
> +	xen_hvm_init_shared_info();
> +	if (shared_info_pfn)
> +		HYPERVISOR_shared_info = __va(PFN_PHYS(shared_info_pfn));
> +}
> +


AFAICT it is only called once, so I don't see a need for a new routine.


And is it possible for shared_info_pfn to be NULL in the resume path
(which is where this is called)?


-boris




From xen-devel-bounces@lists.xenproject.org Sat May 30 23:12:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 23:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfAec-00026E-7U; Sat, 30 May 2020 23:12:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnEH=7M=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jfAeb-000269-3I
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 23:12:37 +0000
X-Inumbo-ID: 0c8157fc-a2cb-11ea-8993-bc764e2007e4
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0c8157fc-a2cb-11ea-8993-bc764e2007e4;
 Sat, 30 May 2020 23:12:36 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UN8oUM185958;
 Sat, 30 May 2020 23:12:13 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=EpkuvONv+FBwjQWXgmZJk+12q7HwpXh2vQ1RirksenM=;
 b=NfizBJMdiSk3R34bt2ybXF87ImDOmHRAgiaEh9FL4ikBrs81MAO+bua7IPPlUVMPCZlu
 jheJT5kHc0+9vsG3W1pRaEgp4n0juvT1/OGZm2OG0rBHdz+FgGhjtSctD0TzMF9A8kEb
 krcmaJcok72Q77EzcEnZGiReNWRuavr0hmxsum0SLdtQgHUzeW/1ImIQrglnBqK8VQVL
 QeOro44u9DetD4rrYqjSqE6tnBZKFlq4ibsBniWCI7FzqSgrAnYLMOFse3zJDfOdD58g
 v59S1YAbKsMsh6kupKHD7L7+bQKkP+3EvxhqxRaRJhmfE4OpyFAw0opFcctVjRtfhy3z cQ== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 31bfekswr1-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 30 May 2020 23:12:13 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UN9DOq139971;
 Sat, 30 May 2020 23:10:12 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3030.oracle.com with ESMTP id 31bdh9y86e-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 30 May 2020 23:10:12 +0000
Received: from abhmp0012.oracle.com (abhmp0012.oracle.com [141.146.116.18])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04UNA9rb020405;
 Sat, 30 May 2020 23:10:09 GMT
Received: from [10.39.241.21] (/10.39.241.21)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 30 May 2020 16:10:08 -0700
Subject: Re: [PATCH 04/12] x86/xen: add system core suspend and resume
 callbacks
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1589926004.git.anchalag@amazon.com>
 <79cf02631dc00e62ebf90410bfbbdb52fe7024cb.1589926004.git.anchalag@amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <4b577564-e4c3-0182-2b9e-5f79004f32a1@oracle.com>
Date: Sat, 30 May 2020 19:10:04 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <79cf02631dc00e62ebf90410bfbbdb52fe7024cb.1589926004.git.anchalag@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 adultscore=0
 mlxscore=0 mlxlogscore=999
 suspectscore=0 phishscore=0 malwarescore=0 bulkscore=0 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005300180
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=0
 mlxlogscore=999 priorityscore=1501 bulkscore=0 phishscore=0 clxscore=1015
 impostorscore=0 adultscore=0 spamscore=0 mlxscore=0 lowpriorityscore=0
 cotscore=-2147483648 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005300180
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 7:26 PM, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>
>
> Add Xen PVHVM specific system core callbacks for PM suspend and
> hibernation support. The callbacks suspend and resume Xen
> primitives, like shared_info, pvclock and grant table. Note that
> Xen suspend can handle them in a different manner, but system
> core callbacks are called from the context.


I don't think I understand that last sentence.


>  So if the callbacks
> are called from Xen suspend context, return immediately.
>


> +
> +static int xen_syscore_suspend(void)
> +{
> +	struct xen_remove_from_physmap xrfp;
> +	int ret;
> +
> +	/* Xen suspend does similar work in its own logic */
> +	if (xen_suspend_mode_is_xen_suspend())
> +		return 0;
> +
> +	xrfp.domid = DOMID_SELF;
> +	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
> +
> +	ret = HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &xrfp);
> +	if (!ret)
> +		HYPERVISOR_shared_info = &xen_dummy_shared_info;
> +
> +	return ret;
> +}
> +
> +static void xen_syscore_resume(void)
> +{
> +	/* Xen suspend does similar work in its own logic */
> +	if (xen_suspend_mode_is_xen_suspend())
> +		return;
> +
> +	/* No need to setup vcpu_info as it's already moved off */
> +	xen_hvm_map_shared_info();
> +
> +	pvclock_resume();
> +
> +	gnttab_resume();


Do you call gnttab_suspend() in the pm suspend path?


> +}
> +
> +/*
> + * These callbacks will be called with interrupts disabled and when having
> + * only one CPU online.
> + */
> +static struct syscore_ops xen_hvm_syscore_ops = {
> +	.suspend = xen_syscore_suspend,
> +	.resume = xen_syscore_resume
> +};
> +
> +void __init xen_setup_syscore_ops(void)
> +{
> +	if (xen_hvm_domain())


Have you tested this (the whole feature, not just this patch) with a PVH
guest BTW? And PVH dom0 for that matter?


-boris


> +		register_syscore_ops(&xen_hvm_syscore_ops);
> +}





From xen-devel-bounces@lists.xenproject.org Sat May 30 23:20:29 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 23:20:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfAmA-0002xd-1E; Sat, 30 May 2020 23:20:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnEH=7M=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jfAm9-0002xY-1J
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 23:20:25 +0000
X-Inumbo-ID: 22eef296-a2cc-11ea-aa0a-12813bfff9fa
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22eef296-a2cc-11ea-aa0a-12813bfff9fa;
 Sat, 30 May 2020 23:20:23 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UNIZJf035939;
 Sat, 30 May 2020 23:19:48 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=liZOA+vop5L29J/phx2L4dwqfJSr8s8lhrbmTKn/6zI=;
 b=Lr6kWN0FzjHh2vnwnxF+y/BTx3Hdl9Z1CPp2OYuad+vBvfziMd3hy4d87xiV9UoWcIcD
 l3o4iQKkjoDxZz44IxZ5hKwaCTBHkfbYl0H52bC2oPwVmuYWsKUkB2Vfr/njvOgqtwsN
 Vvd5tqoWrFBp699ukuYQhotwc9DLJ7dQXyYZ4gUFhzYYl2fS/UB99r9qiy/rm7tS4s/a
 d4cZZFYTk7eSM18GkhIkWtbvdNrbdeTNocAiUavihSqQOExWZRizb+JlJWVQEpTxxdLT
 CE3DS8qDu9ipz4zsisgm0EZdbT1YvF+8nbM+WnQS0uQ878U49HUMK0WDwrcjtljY28Oc VA== 
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2130.oracle.com with ESMTP id 31bewqj048-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 30 May 2020 23:19:48 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UNDoh1171381;
 Sat, 30 May 2020 23:17:47 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by aserp3020.oracle.com with ESMTP id 31bfa1hqy5-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 30 May 2020 23:17:47 +0000
Received: from abhmp0005.oracle.com (abhmp0005.oracle.com [141.146.116.11])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04UNHiJI022532;
 Sat, 30 May 2020 23:17:44 GMT
Received: from [10.39.241.21] (/10.39.241.21)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 30 May 2020 16:17:44 -0700
Subject: Re: [PATCH 05/12] genirq: Shutdown irq chips in suspend/resume during
 hibernation
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1589926004.git.anchalag@amazon.com>
 <fce013fc1348f02b8e4ec61e7a631093c72f993c.1589926004.git.anchalag@amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <0471e6e3-b6ed-d2c6-db41-1688a0af9abd@oracle.com>
Date: Sat, 30 May 2020 19:17:41 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <fce013fc1348f02b8e4ec61e7a631093c72f993c.1589926004.git.anchalag@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 mlxscore=0 adultscore=0
 suspectscore=0 mlxlogscore=999 phishscore=0 spamscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005300181
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0
 priorityscore=1501 bulkscore=0
 phishscore=0 suspectscore=0 impostorscore=0 cotscore=-2147483648
 lowpriorityscore=0 mlxscore=0 adultscore=0 spamscore=0 mlxlogscore=999
 malwarescore=0 clxscore=1015 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2004280000 definitions=main-2005300181
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 7:26 PM, Anchal Agarwal wrote:
> Many legacy device drivers do not implement power management (PM)
> functions which means that interrupts requested by these drivers stay
> in active state when the kernel is hibernated.
>
> This does not matter on bare metal and on most hypervisors because the
> interrupt is restored on resume without any noticeable side effects as
> it stays connected to the same physical or virtual interrupt line.
>
> The XEN interrupt mechanism is different as it maintains a mapping
> between the Linux interrupt number and a XEN event channel. If the
> interrupt stays active on hibernation this mapping is preserved but
> there is unfortunately no guarantee that on resume the same event
> channels are reassigned to these devices. This can result in event
> channel conflicts which prevent the affected devices from being
> restored correctly.
>
> One way to solve this would be to add the necessary power management
> functions to all affected legacy device drivers, but that's a
> questionable effort which does not provide any benefits on non-XEN
> environments.
>
> The least intrusive and most efficient solution is to provide a
> mechanism which allows the core interrupt code to tear down these
> interrupts on hibernation and bring them back up again on resume. This
> allows the XEN event channel mechanism to assign an arbitrary event
> channel on resume without affecting the functionality of these
> devices.
>
> Fortunately all these device interrupts are handled by a dedicated XEN
> interrupt chip so the chip can be marked that all interrupts connected
> to it are handled this way. This is pretty much in line with the other
> interrupt chip specific quirks, e.g. IRQCHIP_MASK_ON_SUSPEND.
>
> Add a new quirk flag IRQCHIP_SHUTDOWN_ON_SUSPEND and add support for
> it in the core interrupt suspend/resume paths.
>
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>


Since Thomas wrote this patch, I think it should also have a "From:" line naming him.


-boris




From xen-devel-bounces@lists.xenproject.org Sat May 30 23:34:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 23:34:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfAz3-0003uk-8A; Sat, 30 May 2020 23:33:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnEH=7M=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jfAz1-0003uf-MJ
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 23:33:43 +0000
X-Inumbo-ID: fee29eb4-a2cd-11ea-aa0a-12813bfff9fa
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fee29eb4-a2cd-11ea-aa0a-12813bfff9fa;
 Sat, 30 May 2020 23:33:42 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UNTgG4017275;
 Sat, 30 May 2020 23:33:12 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=bpnxv8YRoYnV/V0HR0U2Kr/99s99vonN/fiLRTXTQ2I=;
 b=MWV7EaS35dIKUubQlgdn44K13sB4+JJQ6rbNEUh5Jy9ovQ3KIwFXER8tDnKH5b4AvcfU
 oReAOc1makkqLQsL+ijNEoqyvvwPsO0Xge6Qz/a8ozkFAK634sY2l8AHbmsQ5Ms2RvVI
 mWE3Wvr8vfNN8EE4w4smFR478Feq4zSkcU+myC6IpUwL7CwFsYfwsF14JSkFhymeqiK+
 L+BMVkE9NRXeS3usVPv3GV89Akjm7flg0URYIewj2h9k50tSANnAjFqfABdn3tK6WJjF
 jJ5llmed44VYBQ3sFXEkqqhZ+LmtgLyLr0PQbQmlZ8GE5Q9j5vIje2gGZNMmafxb9Dih ZQ== 
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2120.oracle.com with ESMTP id 31bfeksxem-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 30 May 2020 23:33:12 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UNX5Ic150405;
 Sat, 30 May 2020 23:33:11 GMT
Received: from userv0122.oracle.com (userv0122.oracle.com [156.151.31.75])
 by userp3030.oracle.com with ESMTP id 31bckr95xa-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 30 May 2020 23:33:11 +0000
Received: from abhmp0008.oracle.com (abhmp0008.oracle.com [141.146.116.14])
 by userv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04UNX2Sg027376;
 Sat, 30 May 2020 23:33:03 GMT
Received: from [10.39.241.21] (/10.39.241.21)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 30 May 2020 16:33:02 -0700
Subject: Re: [PATCH 08/12] xen/time: introduce xen_{save,restore}_steal_clock
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1589926004.git.anchalag@amazon.com>
 <ae90ece495d29f54fc9986a07f45ab6659136573.1589926004.git.anchalag@amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <5e1094c5-f6c3-c83d-d86f-0bbeaa9f2086@oracle.com>
Date: Sat, 30 May 2020 19:32:59 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <ae90ece495d29f54fc9986a07f45ab6659136573.1589926004.git.anchalag@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0
 malwarescore=0 adultscore=0
 suspectscore=0 mlxscore=0 spamscore=0 mlxlogscore=999 phishscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005300184
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 malwarescore=0
 suspectscore=0
 mlxlogscore=999 priorityscore=1501 bulkscore=0 phishscore=0 clxscore=1015
 impostorscore=0 adultscore=0 spamscore=0 mlxscore=0 lowpriorityscore=0
 cotscore=-2147483648 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2004280000 definitions=main-2005300183
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 7:28 PM, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>
>
> Currently, the steal time accounting code in the scheduler expects the
> steal clock callback to provide a monotonically increasing value. If the
> accounting code receives a smaller value than the previous one, it uses a
> negative value to calculate steal time and results in incorrectly updated
> idle and steal time accounting. This breaks userspace tools which read
> /proc/stat.
>
> top - 08:05:35 up  2:12,  3 users,  load average: 0.00, 0.07, 0.23
> Tasks:  80 total,   1 running,  79 sleeping,   0 stopped,   0 zombie
> Cpu(s):  0.0%us,  0.0%sy,  0.0%ni,30100.0%id,  0.0%wa,  0.0%hi, 0.0%si,-1253874204672.0%st
>
> This can actually happen when a Xen PVHVM guest gets restored from
> hibernation, because such a restored guest is just a fresh domain from
> Xen's perspective and the time information in runstate info starts over
> from scratch.
>
> This patch introduces xen_save_steal_clock() which saves the current
> values in runstate info into per-cpu variables. Its counterpart,
> xen_restore_steal_clock(), sets an offset if it finds that the current
> values in runstate info are smaller than the previous ones.
> xen_steal_clock() is also modified to use the offset to ensure that the
> scheduler only sees a monotonically increasing number.
>
> Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> ---
>  drivers/xen/time.c    | 29 ++++++++++++++++++++++++++++-
>  include/xen/xen-ops.h |  2 ++
>  2 files changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/drivers/xen/time.c b/drivers/xen/time.c
> index 0968859c29d0..3560222cc0dd 100644
> --- a/drivers/xen/time.c
> +++ b/drivers/xen/time.c
> @@ -23,6 +23,9 @@ static DEFINE_PER_CPU(struct vcpu_runstate_info, xen_runstate);
> 
>  static DEFINE_PER_CPU(u64[4], old_runstate_time);
> 
> +static DEFINE_PER_CPU(u64, xen_prev_steal_clock);
> +static DEFINE_PER_CPU(u64, xen_steal_clock_offset);


Can you use old_runstate_time here? It is used to solve a similar
problem for pv suspend, isn't it?


-boris






From xen-devel-bounces@lists.xenproject.org Sat May 30 23:46:57 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 23:46:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfBBi-0004ph-G0; Sat, 30 May 2020 23:46:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HnEH=7M=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1jfBBg-0004pc-Jd
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 23:46:48 +0000
X-Inumbo-ID: d2fb5136-a2cf-11ea-9dbe-bc764e2007e4
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2fb5136-a2cf-11ea-9dbe-bc764e2007e4;
 Sat, 30 May 2020 23:46:47 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UNkBVD028322;
 Sat, 30 May 2020 23:46:11 GMT
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com;
 h=subject : to :
 references : from : message-id : date : mime-version : in-reply-to :
 content-type : content-transfer-encoding; s=corp-2020-01-29;
 bh=CGJWHE0csoRD/xXyfOk0gl8RIuzI9ee8fxwxm6/4ZSc=;
 b=MzzvgN8DtWnQ8zO9sruEXEJ2VXp6h04eNlvTg0r7ykCuv1KKXIz4IauDR+AhGW+dCyw7
 XSqkXFU8BA2+C5TPCHTCIedXMxD28U/+dG3+EemrcLzw3Q4OHk557p4zKn91EhlU/UYA
 u1cqeceFOiOCLrvqj8uLsAU7O4tJQrJYh8gLccCJW1dTLriOxFKIqTZOfr1buzCZBPOK
 RToOXklHUsNxqEk4eslZ9ejz1a4oJ6Q4xdm2L1OalO9W8GyPVUhrz0ZX17FjdBNEBobt
 l0RGmwm330lxk7rA6LRWb00h+Mu2cP11w9TobX7jqJnqe3G1WHpQcuWRoEZXc8URST/l eQ== 
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2120.oracle.com with ESMTP id 31bg4msv5r-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=FAIL);
 Sat, 30 May 2020 23:46:11 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 04UNhDvY021958;
 Sat, 30 May 2020 23:44:10 GMT
Received: from aserv0122.oracle.com (aserv0122.oracle.com [141.146.126.236])
 by aserp3030.oracle.com with ESMTP id 31c12jgbdn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Sat, 30 May 2020 23:44:10 +0000
Received: from abhmp0007.oracle.com (abhmp0007.oracle.com [141.146.116.13])
 by aserv0122.oracle.com (8.14.4/8.14.4) with ESMTP id 04UNi9wl021788;
 Sat, 30 May 2020 23:44:09 GMT
Received: from [10.39.241.21] (/10.39.241.21)
 by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Sat, 30 May 2020 16:44:08 -0700
Subject: Re: [PATCH 09/12] x86/xen: save and restore steal clock
To: Anchal Agarwal <anchalag@amazon.com>, tglx@linutronix.de, mingo@redhat.com,
 bp@alien8.de, hpa@zytor.com, x86@kernel.org, jgross@suse.com,
 linux-pm@vger.kernel.org, linux-mm@kvack.org, kamatam@amazon.com,
 sstabellini@kernel.org, konrad.wilk@oracle.com, roger.pau@citrix.com,
 axboe@kernel.dk, davem@davemloft.net, rjw@rjwysocki.net,
 len.brown@intel.com, pavel@ucw.cz, peterz@infradead.org,
 eduval@amazon.com, sblbir@amazon.com, xen-devel@lists.xenproject.org,
 vkuznets@redhat.com, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org, dwmw@amazon.co.uk, benh@kernel.crashing.org
References: <cover.1589926004.git.anchalag@amazon.com>
 <6f39a1594a25ab5325f34e1e297900d699cd92bf.1589926004.git.anchalag@amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Autocrypt: addr=boris.ostrovsky@oracle.com; keydata=
 xsFNBFH8CgsBEAC0KiOi9siOvlXatK2xX99e/J3OvApoYWjieVQ9232Eb7GzCWrItCzP8FUV
 PQg8rMsSd0OzIvvjbEAvaWLlbs8wa3MtVLysHY/DfqRK9Zvr/RgrsYC6ukOB7igy2PGqZd+M
 MDnSmVzik0sPvB6xPV7QyFsykEgpnHbvdZAUy/vyys8xgT0PVYR5hyvhyf6VIfGuvqIsvJw5
 C8+P71CHI+U/IhsKrLrsiYHpAhQkw+Zvyeml6XSi5w4LXDbF+3oholKYCkPwxmGdK8MUIdkM
 d7iYdKqiP4W6FKQou/lC3jvOceGupEoDV9botSWEIIlKdtm6C4GfL45RD8V4B9iy24JHPlom
 woVWc0xBZboQguhauQqrBFooHO3roEeM1pxXjLUbDtH4t3SAI3gt4dpSyT3EvzhyNQVVIxj2
 FXnIChrYxR6S0ijSqUKO0cAduenhBrpYbz9qFcB/GyxD+ZWY7OgQKHUZMWapx5bHGQ8bUZz2
 SfjZwK+GETGhfkvNMf6zXbZkDq4kKB/ywaKvVPodS1Poa44+B9sxbUp1jMfFtlOJ3AYB0WDS
 Op3d7F2ry20CIf1Ifh0nIxkQPkTX7aX5rI92oZeu5u038dHUu/dO2EcuCjl1eDMGm5PLHDSP
 0QUw5xzk1Y8MG1JQ56PtqReO33inBXG63yTIikJmUXFTw6lLJwARAQABzTNCb3JpcyBPc3Ry
 b3Zza3kgKFdvcmspIDxib3Jpcy5vc3Ryb3Zza3lAb3JhY2xlLmNvbT7CwXgEEwECACIFAlH8
 CgsCGwMGCwkIBwMCBhUIAgkKCwQWAgMBAh4BAheAAAoJEIredpCGysGyasEP/j5xApopUf4g
 9Fl3UxZuBx+oduuw3JHqgbGZ2siA3EA4bKwtKq8eT7ekpApn4c0HA8TWTDtgZtLSV5IdH+9z
 JimBDrhLkDI3Zsx2CafL4pMJvpUavhc5mEU8myp4dWCuIylHiWG65agvUeFZYK4P33fGqoaS
 VGx3tsQIAr7MsQxilMfRiTEoYH0WWthhE0YVQzV6kx4wj4yLGYPPBtFqnrapKKC8yFTpgjaK
 jImqWhU9CSUAXdNEs/oKVR1XlkDpMCFDl88vKAuJwugnixjbPFTVPyoC7+4Bm/FnL3iwlJVE
 qIGQRspt09r+datFzPqSbp5Fo/9m4JSvgtPp2X2+gIGgLPWp2ft1NXHHVWP19sPgEsEJXSr9
 tskM8ScxEkqAUuDs6+x/ISX8wa5Pvmo65drN+JWA8EqKOHQG6LUsUdJolFM2i4Z0k40BnFU/
 kjTARjrXW94LwokVy4x+ZYgImrnKWeKac6fMfMwH2aKpCQLlVxdO4qvJkv92SzZz4538az1T
 m+3ekJAimou89cXwXHCFb5WqJcyjDfdQF857vTn1z4qu7udYCuuV/4xDEhslUq1+GcNDjAhB
 nNYPzD+SvhWEsrjuXv+fDONdJtmLUpKs4Jtak3smGGhZsqpcNv8nQzUGDQZjuCSmDqW8vn2o
 hWwveNeRTkxh+2x1Qb3GT46uzsFNBFH8CgsBEADGC/yx5ctcLQlB9hbq7KNqCDyZNoYu1HAB
 Hal3MuxPfoGKObEktawQPQaSTB5vNlDxKihezLnlT/PKjcXC2R1OjSDinlu5XNGc6mnky03q
 yymUPyiMtWhBBftezTRxWRslPaFWlg/h/Y1iDuOcklhpr7K1h1jRPCrf1yIoxbIpDbffnuyz
 kuto4AahRvBU4Js4sU7f/btU+h+e0AcLVzIhTVPIz7PM+Gk2LNzZ3/on4dnEc/qd+ZZFlOQ4
 KDN/hPqlwA/YJsKzAPX51L6Vv344pqTm6Z0f9M7YALB/11FO2nBB7zw7HAUYqJeHutCwxm7i
 BDNt0g9fhviNcJzagqJ1R7aPjtjBoYvKkbwNu5sWDpQ4idnsnck4YT6ctzN4I+6lfkU8zMzC
 gM2R4qqUXmxFIS4Bee+gnJi0Pc3KcBYBZsDK44FtM//5Cp9DrxRQOh19kNHBlxkmEb8kL/pw
 XIDcEq8MXzPBbxwHKJ3QRWRe5jPNpf8HCjnZz0XyJV0/4M1JvOua7IZftOttQ6KnM4m6WNIZ
 2ydg7dBhDa6iv1oKdL7wdp/rCulVWn8R7+3cRK95SnWiJ0qKDlMbIN8oGMhHdin8cSRYdmHK
 kTnvSGJNlkis5a+048o0C6jI3LozQYD/W9wq7MvgChgVQw1iEOB4u/3FXDEGulRVko6xCBU4
 SQARAQABwsFfBBgBAgAJBQJR/AoLAhsMAAoJEIredpCGysGyfvMQAIywR6jTqix6/fL0Ip8G
 jpt3uk//QNxGJE3ZkUNLX6N786vnEJvc1beCu6EwqD1ezG9fJKMl7F3SEgpYaiKEcHfoKGdh
 30B3Hsq44vOoxR6zxw2B/giADjhmWTP5tWQ9548N4VhIZMYQMQCkdqaueSL+8asp8tBNP+TJ
 PAIIANYvJaD8xA7sYUXGTzOXDh2THWSvmEWWmzok8er/u6ZKdS1YmZkUy8cfzrll/9hiGCTj
 u3qcaOM6i/m4hqtvsI1cOORMVwjJF4+IkC5ZBoeRs/xW5zIBdSUoC8L+OCyj5JETWTt40+lu
 qoqAF/AEGsNZTrwHJYu9rbHH260C0KYCNqmxDdcROUqIzJdzDKOrDmebkEVnxVeLJBIhYZUd
 t3Iq9hdjpU50TA6sQ3mZxzBdfRgg+vaj2DsJqI5Xla9QGKD+xNT6v14cZuIMZzO7w0DoojM4
 ByrabFsOQxGvE0w9Dch2BDSI2Xyk1zjPKxG1VNBQVx3flH37QDWpL2zlJikW29Ws86PHdthh
 Fm5PY8YtX576DchSP6qJC57/eAAe/9ztZdVAdesQwGb9hZHJc75B+VNm4xrh/PJO6c1THqdQ
 19WVJ+7rDx3PhVncGlbAOiiiE3NOFPJ1OQYxPKtpBUukAlOTnkKE6QcA4zckFepUkfmBV1wM
 Jg6OxFYd01z+a+oL
Message-ID: <5edb4147-af12-3a0e-e8f7-5b72650209ac@oracle.com>
Date: Sat, 30 May 2020 19:44:06 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.8.0
MIME-Version: 1.0
In-Reply-To: <6f39a1594a25ab5325f34e1e297900d699cd92bf.1589926004.git.anchalag@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-US
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0
 phishscore=0 malwarescore=0
 adultscore=0 suspectscore=0 spamscore=0 bulkscore=0 mlxlogscore=999
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2004280000
 definitions=main-2005300185
X-Proofpoint-Virus-Version: vendor=nai engine=6000 definitions=9637
 signatures=668686
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 clxscore=1015
 lowpriorityscore=0
 malwarescore=0 phishscore=0 suspectscore=0 priorityscore=1501 adultscore=0
 mlxlogscore=999 cotscore=-2147483648 bulkscore=0 mlxscore=0
 impostorscore=0 spamscore=0 classifier=spam adjust=0 reason=mlx
 scancount=1 engine=8.12.0-2004280000 definitions=main-2005300185
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On 5/19/20 7:28 PM, Anchal Agarwal wrote:
> From: Munehisa Kamata <kamatam@amazon.com>
>
> Save the steal clock values of all present CPUs in the system core ops
> suspend callback. Also, restore the boot CPU's steal clock in the system
> core resume callback. For non-boot CPUs, restore after they're brought
> up, because runstate info for non-boot CPUs is not active until then.
>
> Signed-off-by: Munehisa Kamata <kamatam@amazon.com>
> Signed-off-by: Anchal Agarwal <anchalag@amazon.com>
> ---
>  arch/x86/xen/suspend.c | 13 ++++++++++++-
>  arch/x86/xen/time.c    |  3 +++
>  2 files changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/arch/x86/xen/suspend.c b/arch/x86/xen/suspend.c
> index 784c4484100b..dae0f74f5390 100644
> --- a/arch/x86/xen/suspend.c
> +++ b/arch/x86/xen/suspend.c
> @@ -91,12 +91,20 @@ void xen_arch_suspend(void)
>  static int xen_syscore_suspend(void)
>  {
>  	struct xen_remove_from_physmap xrfp;
> -	int ret;
> +	int cpu, ret;
> 
>  	/* Xen suspend does similar stuffs in its own logic */
>  	if (xen_suspend_mode_is_xen_suspend())
>  		return 0;
> 
> +	for_each_present_cpu(cpu) {
> +		/*
> +		 * Nonboot CPUs are already offline, but the last copy of
> +		 * runstate info is still accessible.
> +		 */
> +		xen_save_steal_clock(cpu);
> +	}
> +
>  	xrfp.domid = DOMID_SELF;
>  	xrfp.gpfn = __pa(HYPERVISOR_shared_info) >> PAGE_SHIFT;
> 
> @@ -118,6 +126,9 @@ static void xen_syscore_resume(void)
> 
>  	pvclock_resume();


Doesn't make any difference, but I think since this patch is where you
are dealing with the clock, pvclock_resume() should be added here and
not in the earlier patch.


-boris


> 
> +	/* Nonboot CPUs will be resumed when they're brought up */
> +	xen_restore_steal_clock(smp_processor_id());
> +
>  	gnttab_resume();
>  }
> 
> diff --git a/arch/x86/xen/time.c b/arch/x86/xen/time.c
> index c8897aad13cd..33d754564b09 100644
> --- a/arch/x86/xen/time.c
> +++ b/arch/x86/xen/time.c
> @@ -545,6 +545,9 @@ static void xen_hvm_setup_cpu_clockevents(void)
>  {
>  	int cpu = smp_processor_id();
>  	xen_setup_runstate_info(cpu);
> +	if (cpu)
> +		xen_restore_steal_clock(cpu);
> +
>  	/*
>  	 * xen_setup_timer(cpu) - snprintf is bad in atomic context. Hence
>  	 * doing it xen_hvm_cpu_notify (which gets called by smp_init during





From xen-devel-bounces@lists.xenproject.org Sat May 30 23:52:26 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 30 May 2020 23:52:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfBGt-0005ef-82; Sat, 30 May 2020 23:52:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=74Fv=7M=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfBGs-0005ea-9d
 for xen-devel@lists.xenproject.org; Sat, 30 May 2020 23:52:10 +0000
X-Inumbo-ID: 92cb836e-a2d0-11ea-aa0c-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 92cb836e-a2d0-11ea-aa0c-12813bfff9fa;
 Sat, 30 May 2020 23:52:08 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=rEOyFFzoK9lAYxrqBcGcD3I9S2eu8y1oyIv+irqr7ps=; b=R0kM3q8Etr30wWAVbeEUzKfG1
 QNE33J9pnZah7f2wnZ6orvLo/yZy9qf6CIu6SIAJYmu37io1Yz3j7LMR0pQ6r5UiLqsXlAgrwH6lw
 9JbYtwnCgpUNHU8iPemzdTI2pP9ihLUvXfyMDa7CDp9ZCmm8q3P751s4trGr5vMVkYSjE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfBGq-0002AS-63; Sat, 30 May 2020 23:52:08 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfBGp-0001ah-MH; Sat, 30 May 2020 23:52:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfBGp-0000RQ-Ld; Sat, 30 May 2020 23:52:07 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150546-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150546: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 30 May 2020 23:52:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150546 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150546/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    2 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   17 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 00:30:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 00:30:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfBrY-0000kx-BX; Sun, 31 May 2020 00:30:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfBrX-0000eQ-0V
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 00:30:03 +0000
X-Inumbo-ID: d9b51466-a2d5-11ea-aa15-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9b51466-a2d5-11ea-aa15-12813bfff9fa;
 Sun, 31 May 2020 00:29:55 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=ZilytP7zOY7NBkdULbjHdMWAqc3FL6EvaiuVRtLOfeE=; b=LQWdAv+xDznUmbECQ1SFLb8we
 YvDBYg4SyW5FpYP/+DSumv9rBQZji/KlEXwL51YJkkxvWyK/vgNonR8WBzgNzCCNp+/umhsyn90Mv
 mGh6CvR9I4fFI2+GU68hBDvcdk/sAbbG3BD02VtdFfQLbco+bnDY/wr5JLQCQc2EGKTaU=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfBrO-0003X6-S2; Sun, 31 May 2020 00:29:54 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfBrO-0002vE-KL; Sun, 31 May 2020 00:29:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfBrO-00070M-H2; Sun, 31 May 2020 00:29:54 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150529-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150529: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
 xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 00:29:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150529 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150529/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds    15 guest-saverestore fail in 150477 pass in 150529
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 150477
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat  fail pass in 150477

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10  fail blocked in 150477
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150477
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150477
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150477
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150477
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150477
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150477
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150477
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150477
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150477
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150529  2020-05-30 08:31:56 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 31 01:22:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 01:22:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfCgD-0006KI-JG; Sun, 31 May 2020 01:22:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfCgC-0006KD-1Y
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 01:22:24 +0000
X-Inumbo-ID: 2a3996a8-a2dd-11ea-aa1a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2a3996a8-a2dd-11ea-aa1a-12813bfff9fa;
 Sun, 31 May 2020 01:22:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=H+5SDXysbtkyisXyHlAnPSCphbxl7wAdNpu6KSDMb8M=; b=gOBgqt2H8+Ej+Vqmjt7+CSN7x
 4A2WUrLyAuASLa7ESL2LXURqCbti6P4vTyvyf5cdJWbZ7L7WtQMt5IWr0Hg4kUNJEwKObq/A2OP9I
 paTPY6qGepP2gB3LhuzxfUzemeIV+mSttvyEPyBcuJNH5SQNzRQDYduh+PKuc8cmuR8mg=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfCg3-0005zA-SU; Sun, 31 May 2020 01:22:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfCg3-0007QO-Hv; Sun, 31 May 2020 01:22:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfCg3-0005N3-HH; Sun, 31 May 2020 01:22:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150532-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 150532: tolerable FAIL - PUSHED
X-Osstest-Failures: qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: qemuu=c86274bc2e34295764fb44c2aef3cf29623f9b4b
X-Osstest-Versions-That: qemuu=b8bee16e94df0fcd03bdad9969c30894418b0e6e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 01:22:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150532 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150532/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     16 guest-localmigrate           fail  like 150492
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150492
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150492
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150492
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150492
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150492
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c86274bc2e34295764fb44c2aef3cf29623f9b4b
baseline version:
 qemuu                b8bee16e94df0fcd03bdad9969c30894418b0e6e

Last test of basis   150492  2020-05-29 15:45:14 Z    1 days
Testing same since   150532  2020-05-30 10:31:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Bennée <alex.bennee@linaro.org>
  Igor Mammedov <imammedo@redhat.com>
  Nikolay Igotti <igotti@gmail.com>
  Peter Maydell <peter.maydell@linaro.org>
  Robert Foley <robert.foley@linaro.org>
  Thomas Huth <thuth@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/qemu-xen.git
   b8bee16e94..c86274bc2e  c86274bc2e34295764fb44c2aef3cf29623f9b4b -> upstream-tested


From xen-devel-bounces@lists.xenproject.org Sun May 31 02:57:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 02:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfE9b-0005dg-AQ; Sun, 31 May 2020 02:56:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfE9a-0005db-CV
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 02:56:50 +0000
X-Inumbo-ID: 5f0eea4c-a2ea-11ea-aa1a-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f0eea4c-a2ea-11ea-aa1a-12813bfff9fa;
 Sun, 31 May 2020 02:56:49 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=I/PiYoumUc52hNcLk48Ym1hsFFOBc2pEk4rEm79/OF4=; b=yUbcjNZZ4Nwm/jzp4u1T6VGv7
 N0Sjx+g6tShk2r3LW5ckEzx5Go+IQILeWYeRB+9ZJO0/FvQEFLqLwXMHwHdw0iQxEeeCCz4Z/NLS2
 aj1fq7U8Yqbc0jZrzMbZ7EFrgeuuMwhS+1xFlb+Ce4az0K4et5s5O1nNG8YLSqwRsX3+I=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfE9Y-0008IF-5o; Sun, 31 May 2020 02:56:48 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfE9X-0003YD-Ue; Sun, 31 May 2020 02:56:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfE9X-00022Y-Ty; Sun, 31 May 2020 02:56:47 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150549-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150549: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 02:56:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150549 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150549/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    2 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   18 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    1 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 04:51:58 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 04:51:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfFwW-000770-2y; Sun, 31 May 2020 04:51:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfFwU-00076v-FE
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 04:51:26 +0000
X-Inumbo-ID: 60b4b088-a2fa-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 60b4b088-a2fa-11ea-9947-bc764e2007e4;
 Sun, 31 May 2020 04:51:23 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eKdWvm14jJnSdkASX3468SbY2u50OuyLpvP2FcvLz0w=; b=L5A39RuNPY1Mho1kwIA4O6A/u
 HbiuKxBenmbbanzebNqXe5wRxi0/z3K7s3oy0ovNhAl4P8xueliqSPCFx2z37A8ju/CjQSWwafSfq
 o5Ribmr9EYyS45BhoDxIEiT2Y8Vpc5iQmJJfsc3/Aj/gKz64qzJqcOFck5dwveD89gOVo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfFwQ-0002Lw-W8; Sun, 31 May 2020 04:51:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfFwP-00086u-Q7; Sun, 31 May 2020 04:51:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfFwP-0001jR-O1; Sun, 31 May 2020 04:51:21 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150543-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150543: regressions - FAIL
X-Osstest-Failures: linux-linus:test-arm64-arm64-libvirt-xsm:guest-start.2:fail:regression
 linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:heisenbug
 linux-linus:test-arm64-arm64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-armhf-armhf-xl-vhd:guest-start/debian.repeat:fail:heisenbug
 linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=86852175b016f0c6873dcbc24b93d12b7b246612
X-Osstest-Versions-That: linux=75caf310d16cc5e2f851c048cd597f5437013368
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 04:51:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150543 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150543/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-libvirt-xsm 17 guest-start.2            fail REGR. vs. 150464

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds    15 guest-saverestore fail in 150520 pass in 150543
 test-arm64-arm64-libvirt-xsm 16 guest-start/debian.repeat fail in 150520 pass in 150543
 test-armhf-armhf-xl-vhd 15 guest-start/debian.repeat fail in 150520 pass in 150543
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 150520

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate/x10   fail REGR. vs. 150464

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150464
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150464
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150464
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150464
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150464
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150464
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150464
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150464
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                86852175b016f0c6873dcbc24b93d12b7b246612
baseline version:
 linux                75caf310d16cc5e2f851c048cd597f5437013368

Last test of basis   150464  2020-05-29 08:55:41 Z    1 days
Testing same since   150520  2020-05-30 03:24:39 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alaa Hleihel <alaa@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Aric Cyr <aric.cyr@amd.com>
  Arnd Bergmann <arnd@arndb.de>
  Borislav Petkov <bp@suse.de>
  Catalin Marinas <catalin.marinas@arm.com>
  Changming Liu <liu.changm@northeastern.edu>
  Chris Chiu <chiu@endlessm.com>
  Christoph Hellwig <hch@lst.de>
  Dave Airlie <airlied@redhat.com>
  Dennis Dalessandro <dennis.dalessandro@intel.com>
  Dennis YC Hsieh <dennis-yc.hsieh@mediatek.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Doug Ledford <dledford@redhat.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Helge Deller <deller@gmx.de>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Ian Ray <ian.ray@ge.com>
  Ilya Dryomov <idryomov@gmail.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Jerry Lee <leisurelysw24@gmail.com>
  Joerg Roedel <jroedel@suse.de>
  Jonathan Marek <jonathan@marek.ca>
  Kaike Wan <kaike.wan@intel.com>
  Kailang Yang <kailang@realtek.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lubomir Rintel <lkundrak@v3.sk>
  Maor Gottlieb <maorg@mellanox.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Maxime Ripard <maxime@cerno.tech>
  Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
  Paul Cercueil <paul@crapouillou.net>
  Peng Hao <richard.peng@oppo.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qiushi Wu <wu000273@umn.edu>
  Robert Beckett <bob.beckett@collabora.com>
  Sam Ravnborg <sam@ravnborg.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shawn Guo <shawnguo@kernel.org>
  Simon Ser <contact@emersion.fr>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephen Boyd <sboyd@kernel.org>
  Takashi Iwai <tiwai@suse.de>
  Tony Lindgren <tony@atomide.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Valentine Fatiev <valentinef@mellanox.com>
  Vincent Stehlé <vincent.stehle@laposte.net>
  Vinod Koul <vkoul@kernel.org>
  Will Deacon <will@kernel.org>
  Yuji Ishikawa <yuji2.ishikawa@toshiba.co.jp>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1367 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 05:39:03 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 05:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfGgH-0002PU-KN; Sun, 31 May 2020 05:38:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfGgG-0002PP-A3
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 05:38:44 +0000
X-Inumbo-ID: fd37c872-a300-11ea-9947-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fd37c872-a300-11ea-9947-bc764e2007e4;
 Sun, 31 May 2020 05:38:43 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=doML1R+seRQoOVy/jia9MKOCVGPPYlXfTDGY1nFCCZw=; b=yrV7V5eCnyyHh94AWDfb+Zc7/
 NsfRaH8DNi6p179iKVkLT/6g9jBuzD1NXuxayN9eWYQPOq51bP7QKW2cPE7uQUrPI6ev5+lSUQkzl
 OWpVPtnCFxkURHQDYG2Q1gBYInI8qwP7B+PRlLNkQIwY95Jr3TZeqj2U3fXwyKcpQ/fwk=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfGgE-0003dE-8t; Sun, 31 May 2020 05:38:42 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfGgE-0003Sy-14; Sun, 31 May 2020 05:38:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfGgE-0001cD-09; Sun, 31 May 2020 05:38:42 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150553-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150553: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 05:38:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150553 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150553/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    2 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   19 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    1 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 06:16:11 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 06:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfHG4-0005lc-Nk; Sun, 31 May 2020 06:15:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfHG4-0005lX-1D
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 06:15:44 +0000
X-Inumbo-ID: 2531ba40-a306-11ea-81bc-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2531ba40-a306-11ea-81bc-bc764e2007e4;
 Sun, 31 May 2020 06:15:37 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=2vARd4g//DUtZzc0EbE0c9SnVEZO0YR6i2I/hdpurXc=; b=qxrIh5N6lzkRSToPwBzO+Mi/y
 B4RPpMEFhbCZTYcYGI/dAvI3euZNIX75sploZblr1tb+9AQHHQCoon7Zog2DhcaF7NNV4mpmfGV25
 DIAJIlMlpW9I/zs5Kv9ioEJsslLM4F/Das2zAokZiawaUY/WuJa+NSpZFNcvDMhIzvvCw=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfHFx-0004RK-8m; Sun, 31 May 2020 06:15:37 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfHFx-0006By-19; Sun, 31 May 2020 06:15:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfHFw-0008F2-Vk; Sun, 31 May 2020 06:15:36 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150555-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 150555: regressions - FAIL
X-Osstest-Failures: libvirt:build-amd64-libvirt:libvirt-build:fail:regression
 libvirt:build-i386-libvirt:libvirt-build:fail:regression
 libvirt:build-arm64-libvirt:libvirt-build:fail:regression
 libvirt:build-armhf-libvirt:libvirt-build:fail:regression
 libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
 libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
 libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This: libvirt=f6c79ca2af3607eb1cbbb7208c194f7cbf7a6abd
X-Osstest-Versions-That: libvirt=a1cd25b919509be2645dbe6f952d5263e0d4e4e5
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 06:15:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150555 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150555/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 146182
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 146182
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 146182

Tests which did not succeed, but are not blocking:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              f6c79ca2af3607eb1cbbb7208c194f7cbf7a6abd
baseline version:
 libvirt              a1cd25b919509be2645dbe6f952d5263e0d4e4e5

Last test of basis   146182  2020-01-17 06:00:23 Z  135 days
Failing since        146211  2020-01-18 04:18:52 Z  134 days  125 attempts
Testing same since   150555  2020-05-31 04:18:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrea Bolognani <abologna@redhat.com>
  Arnaud Patard <apatard@hupstream.com>
  Artur Puzio <contact@puzio.waw.pl>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Chen Hanxiao <chen_han_xiao@126.com>
  Chris Jester-Young <cky@cky.nz>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Daniel Veillard <veillard@redhat.com>
  Dario Faggioli <dfaggioli@suse.com>
  Erik Skultety <eskultet@redhat.com>
  Gaurav Agrawal <agrawalgaurav@gnome.org>
  Han Han <hhan@redhat.com>
  Jaak Ristioja <jaak@ristioja.ee>
  Jamie Strandboge <jamie@canonical.com>
  Jim Fehlig <jfehlig@suse.com>
  Jiri Denemark <jdenemar@redhat.com>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Laine Stump <laine@redhat.com>
  Leonid Bloch <lb.workbox@gmail.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <LMa@suse.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Mark Asselstine <mark.asselstine@windriver.com>
  Mauro S. M. Rodrigues <maurosr@linux.vnet.ibm.com>
  Michal Privoznik <mprivozn@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Pavel Hrdina <phrdina@redhat.com>
  Pavel Mores <pmores@redhat.com>
  Peter Krempa <pkrempa@redhat.com>
  Philipp Hahn <hahn@univention.de>
  Pino Toscano <ptoscano@redhat.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Rafael Fonseca <r4f4rfs@gmail.com>
  Richard W.M. Jones <rjones@redhat.com>
  Rikard Falkeborn <rikard.falkeborn@gmail.com>
  Ryan Moeller <ryan@iXsystems.com>
  Sahid Orentino Ferdjaoui <sahid.ferdjaoui@canonical.com>
  Sebastian Mitterle <smitterl@redhat.com>
  Seeteena Thoufeek <s1seetee@linux.vnet.ibm.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Stefan Hajnoczi <stefanha@redhat.com>
  Thomas Huth <thuth@redhat.com>
  Tobin Feldman-Fitzthum <tobin@linux.vnet.ibm.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Wu Qingliang <wuqingliang4@huawei.com>
  Xu Yandong <xuyandong2@huawei.com>
  Yan Wang <wangyan122@huawei.com>
  Yi Li <yili@winhong.com>
  Your Name <you@example.com>
  Zhang Bo <oscar.zhangbo@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>
  Zhimin Feng <fengzhimin1@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 19591 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 08:54:04 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 08:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfJih-0002MP-EY; Sun, 31 May 2020 08:53:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfJig-0002MK-9Y
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 08:53:26 +0000
X-Inumbo-ID: 30286c4e-a31c-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30286c4e-a31c-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 08:53:25 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=X/p2bONt7zYH+jiurjq8dmHwnrnBVHHyzvhyVZurzGg=; b=OFj7M9Wq2scyg6zQJ4Dp/8i04
 JB3TCqQ/Wb7F2hemqUC+zPVJrcCNDVRwQKGvZBfWBRTmRKL/4RhsQGkyq8uq9ycJs4i75iDh6hBn8
 BamjH8D1s/R+5od4KrGtiULK4T1AyNzNSUd7udHIfAxEXJAcK93UqXlYe/D1kvQSZadZo=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfJie-00089I-Ku; Sun, 31 May 2020 08:53:24 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfJie-0005yd-Cs; Sun, 31 May 2020 08:53:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfJie-00072Q-CG; Sun, 31 May 2020 08:53:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150559-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150559: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 08:53:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150559 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150559/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    2 days
Failing since        150465  2020-05-29 09:02:14 Z    1 days   20 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    1 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 09:50:51 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 09:50:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfKbw-0007Gi-O9; Sun, 31 May 2020 09:50:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfKbu-0007Gd-RI
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 09:50:30 +0000
X-Inumbo-ID: 26079304-a324-11ea-aa46-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26079304-a324-11ea-aa46-12813bfff9fa;
 Sun, 31 May 2020 09:50:24 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=Eycy947+Dw7kAVnTYN2D2iziXjgEdVhYOfX2LO2HfHM=; b=ecLmQvEZdhvYL05+M6zBhXk1u
 5lDFNq7Id/eRNYSq/uias8tO39XXVaBIIvXkcWhUZpxV4IpSN+YW+S/0u4jg+rlAxTjvSMq9CZ3Yz
 5QMMhmFvRhGD9U8OkoFG/oKcNg6VfoAS7xsSg9K3sLPbTahe3kdmtijFfMmb+4wsOhboA=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfKbn-0000r0-SH; Sun, 31 May 2020 09:50:23 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfKbn-0001nl-Kg; Sun, 31 May 2020 09:50:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfKbn-0007qG-K1; Sun, 31 May 2020 09:50:23 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150564-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 150564: all pass - PUSHED
X-Osstest-Versions-This: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
X-Osstest-Versions-That: xen=d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 09:50:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150564 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150564/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199
baseline version:
 xen                  d89e5e65f305740b2f7bd56e6f3b6c9c52ee0707

Last test of basis   150397  2020-05-27 09:19:08 Z    4 days
Testing same since   150564  2020-05-31 09:18:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien@xen.org>
  Stefano Stabellini <sstabellini@kernel.org>
  Stefano Stabellini <stefano.stabellini@xilinx.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d89e5e65f3..1497e78068  1497e78068421d83956f8e82fb6e1bf1fc3b1199 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 31 11:22:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 11:22:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfM1x-0006Ez-K1; Sun, 31 May 2020 11:21:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfM1w-0006Et-59
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 11:21:28 +0000
X-Inumbo-ID: dae40daa-a330-11ea-aa4f-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dae40daa-a330-11ea-aa4f-12813bfff9fa;
 Sun, 31 May 2020 11:21:21 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=m/W0Jm4KO7OW83jJCjGT7lsvr56oWESYMHMfd9FL5ZM=; b=MMTAoROogM2PCVrvGnDMTWn3I
 rsboYmoo+KLC2pOh//IoKpG6+ofn9vEJe/9OlhEpWhPfHCRR6k+JNVy+JjOBZcHHTlN9DufFeDeiZ
 xLj1zGKDyV6zl7EC4Nb2ekJgCPM+3sKuDKyIg5isYohhDTGS0U40oU6tLlajAKd0iV0CE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfM1o-0002lH-Ka; Sun, 31 May 2020 11:21:20 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfM1o-00062H-Ca; Sun, 31 May 2020 11:21:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfM1o-0003Wf-BO; Sun, 31 May 2020 11:21:20 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150551-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 150551: tolerable FAIL
X-Osstest-Failures: xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:heisenbug
 xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 11:21:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150551 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150551/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 150529 pass in 150551
 test-amd64-amd64-xl-rtds     16 guest-localmigrate         fail pass in 150529

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds 18 guest-localmigrate/x10 fail in 150529 blocked in 150551
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150529
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150529
 test-armhf-armhf-xl-rtds     16 guest-start/debian.repeat    fail  like 150529
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150529
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150529
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150529
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150529
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150529
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150529
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop             fail like 150529
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150551  2020-05-31 01:52:17 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 31 12:25:35 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 12:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfN1V-0002rc-Oo; Sun, 31 May 2020 12:25:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfN1U-0002rX-PA
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 12:25:04 +0000
X-Inumbo-ID: be61d53c-a339-11ea-aa52-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id be61d53c-a339-11ea-aa52-12813bfff9fa;
 Sun, 31 May 2020 12:24:59 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=+v4P/dyIY+o33eGe6LiR1mQf9OZBOTUqfapw6lRoV9U=; b=DG9VqYFhz6aXW/uMBkRu6ud3D
 Ce/5wICF8+oS5qDkYRr5YCtslE8jS73jja8JtYO8zSKE4HonuTq2ALCR8E0q5gk0JLw5R9R0iG25C
 1CxY8EujtHh5Xvg9dld6AxW4KX3G31+wlEdcOyQmex60sD99HdxsWNAVGFDUnKd8qyYTE=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfN1O-00042p-Ib; Sun, 31 May 2020 12:24:58 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfN1O-0007fn-8F; Sun, 31 May 2020 12:24:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfN1O-0000A6-7T; Sun, 31 May 2020 12:24:58 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150562-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150562: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 12:24:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150562 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150562/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    2 days
Failing since        150465  2020-05-29 09:02:14 Z    2 days   21 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    1 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 15:16:22 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 15:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfPgp-0008N8-Dv; Sun, 31 May 2020 15:15:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfPgo-0008N3-KD
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 15:15:54 +0000
X-Inumbo-ID: 9d1f6f8e-a351-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d1f6f8e-a351-11ea-9dbe-bc764e2007e4;
 Sun, 31 May 2020 15:15:51 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=NgL7EbUuwHd/2f1xVotNgdl2TQkq2Y0VqgsfWoaa3pg=; b=7CqtMbMh6AlQh26g5prleZ547
 4MGwm952E44Hqbm7MxoulfANhqwyebUkukbFuI1fWUvpW9SBBtN2i/TruXSX8CcSy+/iTi9z9SNzy
 JPaN8LCTmY3mI/w/NbaQL1Dhim9no32r/u2fjcYXbT5VuYkb7JOsTkp5EDMvkriA54y40=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfPgk-0007YM-6Q; Sun, 31 May 2020 15:15:50 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfPgj-00038K-Ua; Sun, 31 May 2020 15:15:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfPgj-0001iV-Tw; Sun, 31 May 2020 15:15:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150556-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 150556: tolerable FAIL - PUSHED
X-Osstest-Failures: linux-linus:test-amd64-amd64-xl-rtds:guest-saverestore:fail:allowable
 linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
 linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
 linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
 linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This: linux=ffeb595d84811dde16a28b33d8a7cf26d51d51b3
X-Osstest-Versions-That: linux=75caf310d16cc5e2f851c048cd597f5437013368
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 15:15:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150556 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150556/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     15 guest-saverestore        fail REGR. vs. 150464

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 17 guest-stop            fail like 150464
 test-amd64-amd64-xl-qemuu-win7-amd64 17 guest-stop            fail like 150464
 test-amd64-amd64-xl-qemut-ws16-amd64 17 guest-stop            fail like 150464
 test-amd64-i386-xl-qemuu-win7-amd64 17 guest-stop             fail like 150464
 test-armhf-armhf-libvirt     14 saverestore-support-check    fail  like 150464
 test-amd64-i386-xl-qemut-win7-amd64 17 guest-stop             fail like 150464
 test-armhf-armhf-libvirt-raw 13 saverestore-support-check    fail  like 150464
 test-amd64-amd64-xl-qemuu-ws16-amd64 17 guest-stop            fail like 150464
 test-amd64-i386-xl-pvshim    12 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      13 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  13 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 11 migrate-support-check fail never pass
 test-amd64-amd64-qemuu-nested-amd 17 debian-hvm-install/l1/l2  fail never pass
 test-arm64-arm64-xl-credit1  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 13 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 14 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 14 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 13 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 14 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 13 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 14 saverestore-support-check    fail never pass
 test-amd64-i386-xl-qemuu-ws16-amd64 17 guest-stop              fail never pass
 test-arm64-arm64-xl-seattle  13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  14 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-qemut-ws16-amd64 17 guest-stop              fail never pass
 test-armhf-armhf-xl-vhd      12 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      13 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     14 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     13 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 12 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ffeb595d84811dde16a28b33d8a7cf26d51d51b3
baseline version:
 linux                75caf310d16cc5e2f851c048cd597f5437013368

Last test of basis   150464  2020-05-29 08:55:41 Z    2 days
Failing since        150520  2020-05-30 03:24:39 Z    1 days    3 attempts
Testing same since   150556  2020-05-31 04:53:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alaa Hleihel <alaa@mellanox.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Deucher <alexdeucher@gmail.com>
  Andreas Gruenbacher <agruenba@redhat.com>
  Andrew Donnellan <ajd@linux.ibm.com>
  Aric Cyr <aric.cyr@amd.com>
  Arnd Bergmann <arnd@arndb.de>
  asmaa@mellanox.com
  Axel Lin <axel.lin@ingics.com>
  Bartosz Golaszewski <bgolaszewski@baylibre.com>
  Borislav Petkov <bp@suse.de>
  Catalin Marinas <catalin.marinas@arm.com>
  Changming Liu <liu.changm@northeastern.edu>
  Chris Chiu <chiu@endlessm.com>
  Christoph Hellwig <hch@lst.de>
  Daniel Axtens <dja@axtens.net>
  Dave Airlie <airlied@redhat.com>
  Dennis Dalessandro <dennis.dalessandro@intel.com>
  Dennis YC Hsieh <dennis-yc.hsieh@mediatek.com>
  Dongli Zhang <dongli.zhang@oracle.com>
  Doug Ledford <dledford@redhat.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Grygorii Strashko <grygorii.strashko@ti.com>
  Hamish Martin <hamish.martin@alliedtelesis.co.nz>
  Hans Verkuil <hverkuil@xs4all.nl>
  Helge Deller <deller@gmx.de>
  Hsin-Yi Wang <hsinyi@chromium.org>
  Ian Ray <ian.ray@ge.com>
  Ilya Dryomov <idryomov@gmail.com>
  Jason Gunthorpe <jgg@mellanox.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Jerry Lee <leisurelysw24@gmail.com>
  Joerg Roedel <jroedel@suse.de>
  Jonathan Marek <jonathan@marek.ca>
  Kaike Wan <kaike.wan@intel.com>
  Kailang Yang <kailang@realtek.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Leon Romanovsky <leonro@mellanox.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Lubomir Rintel <lkundrak@v3.sk>
  Maor Gottlieb <maorg@mellanox.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Maxime Ripard <maxime@cerno.tech>
  Michael Ellerman <mpe@ellerman.id.au>
  Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
  Paul Cercueil <paul@crapouillou.net>
  Peng Hao <richard.peng@oppo.com>
  Qingqing Zhuo <qingqing.zhuo@amd.com>
  Qiushi Wu <wu000273@umn.edu>
  Robert Beckett <bob.beckett@collabora.com>
  Sam Ravnborg <sam@ravnborg.org>
  Sascha Hauer <s.hauer@pengutronix.de>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Shawn Guo <shawnguo@kernel.org>
  Simon Ser <contact@emersion.fr>
  Stefan Wahren <stefan.wahren@i2se.com>
  Stephen Boyd <sboyd@kernel.org>
  Takashi Iwai <tiwai@suse.de>
  Tiezhu Yang <yangtiezhu@loongson.cn>
  Tony Lindgren <tony@atomide.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Valentine Fatiev <valentinef@mellanox.com>
  Vincent Stehlé <vincent.stehle@laposte.net>
  Vinod Koul <vkoul@kernel.org>
  Will Deacon <will@kernel.org>
  Yuji Ishikawa <yuji2.ishikawa@toshiba.co.jp>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   75caf310d16c..ffeb595d8481  ffeb595d84811dde16a28b33d8a7cf26d51d51b3 -> tested/linux-linus


From xen-devel-bounces@lists.xenproject.org Sun May 31 15:51:53 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 15:51:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfQFL-0003B5-Bp; Sun, 31 May 2020 15:51:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfQFK-0003B0-9m
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 15:51:34 +0000
X-Inumbo-ID: 96f7cd68-a356-11ea-9dbe-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 96f7cd68-a356-11ea-9dbe-bc764e2007e4;
 Sun, 31 May 2020 15:51:28 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=EWfK21FPkfK84hGojF0HcVvs+uOzZnNIt6UJN1BQtqQ=; b=SHXX9/QEIOYfPwS62/qXy0P2b
 FuWvFXY9jAAfsaVTcyfH1GO1LZgnDKcx8EytC6rHAUyyUeZZk4YbqW6iE+wOhg7MR0hvcARtszHDd
 p3EWBYTyOoY3PZddAQV1dcMaIJaHjhI1Jm8XBeY9zbLL/OWxSD6pHkdOwfHqtcQXccDl0=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfQFE-0008G2-5W; Sun, 31 May 2020 15:51:28 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfQF3-0003wI-TD; Sun, 31 May 2020 15:51:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfQF3-0006iN-Sa; Sun, 31 May 2020 15:51:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150568-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150568: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 15:51:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150568 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150568/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    3 days
Failing since        150465  2020-05-29 09:02:14 Z    2 days   22 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    1 days   12 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 17:38:42 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:38:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRue-0003XJ-Te; Sun, 31 May 2020 17:38:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRud-0003XE-Co
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:19 +0000
X-Inumbo-ID: 83958332-a365-11ea-8993-bc764e2007e4
Received: from mail-wr1-x432.google.com (unknown [2a00:1450:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83958332-a365-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 17:38:18 +0000 (UTC)
Received: by mail-wr1-x432.google.com with SMTP id p5so3252198wrw.9
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:18 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:mime-version
 :content-transfer-encoding;
 bh=szrK4QZPlIWx7hRKiEEGeyw3BNLUYHyOmBgpWr2R+Zs=;
 b=RRupkXgrGQxJyX5hU1YPGP3v77tjIwZaUEnwXH0Ycbz02nIC6z6jqL9K8VtpiGoZir
 oA5pe+0bmwn1mXRbgMEVLrpCh0d0sQKIqycavfwLuiXSOjZIKwvCbubwBhoyDEBNPMCT
 s+JJpU2UR3ihqsLIA18rk+lRx9dyVpfl+cDN3EjfVf2e36iXhgEEGnJ27Qv/8ZBGlMox
 ykuYCjITpXiI6USr1w2maE1SCHGk41hkpnblFQV9vLwdJzdQ3YlpFxieC+56rAU2D7yq
 5PNPRsTkvV+wrpFl90mqzCJzSIssp7+Q7OTmU3vfT7rPQLnXsnh1Vs2G+XHB/+/JcrN8
 ISPA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :mime-version:content-transfer-encoding;
 bh=szrK4QZPlIWx7hRKiEEGeyw3BNLUYHyOmBgpWr2R+Zs=;
 b=iaEx/eA5Oy1smoexw5EvhpmlJJan0tebB+f5rkecZP4x1mIqUk3EgCw7eaezH4EF3o
 b1rS6wK4UrjKVfld4D043hpn9ki3OYif1EfNqW7sbFUYNFYQaHNxuB/n8TQBoMJci/Fi
 NE+HSYjon57+mHol8rgyvSQhSmgRSow7ASCLHws3KRQorF0d6BX4GugnT9FvbBWc2rlz
 ouLbeyS5RLOBhpO7JKTADfiAV04TBH1HlhLbTaD9U+u7Sl+Py6o7Y5+FH2Vns0GCrpgh
 4NrUDSO9IFZdiyKhfjql34Q/vsLwtAAzLexX4CMQn7JW544wbRVKPxjqii+8PBwkIDl/
 OLdA==
X-Gm-Message-State: AOAM531O4/O5du2Q7rERE2vP3Mz6Zvn1aViY4BBEOw+iNPp9HXh3jn0m
 fJZ7Qnz5naMuoK426upaSpY=
X-Google-Smtp-Source: ABdhPJzi1VbONK1FgNfUM2ScOB+Tk4wTN8mIKjCc/CNlbtzbBZvFo/IjBHFzqkbrQE86Hdu0UVxctg==
X-Received: by 2002:adf:8b55:: with SMTP id v21mr19093025wra.187.1590946697876; 
 Sun, 31 May 2020 10:38:17 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:16 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 0/8] hw: Fix some incomplete memory region size
Date: Sun, 31 May 2020 19:38:06 +0200
Message-Id: <20200531173814.8734-1-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

memory_region_set_size() handles the 16 Exabytes limit by
special-casing the UINT64_MAX value.
This is not a problem for the 32-bit maximum, 4 GiB, but
in some places we incorrectly use UINT32_MAX instead of
4 GiB, and end up missing 1 byte in the memory region.

This series fixes the cases I encountered.
While reviewing, I also included a few patches that replace
some magic values with their IEC binary prefix equivalents.

Regards,

Phil.

Philippe Mathieu-Daudé (8):
  hw/arm/aspeed: Correct DRAM container region size
  hw/pci-host/prep: Correct RAVEN bus bridge memory region size
  hw/pci/pci_bridge: Correct pci_bridge_io memory region size
  hw/pci/pci_bridge: Use the IEC binary prefix definitions
  hw/pci-host: Use the IEC binary prefix definitions
  hw/hppa/dino: Use the IEC binary prefix definitions
  hw/i386/xen/xen-hvm: Use the IEC binary prefix definitions
  target/i386/cpu: Use the IEC binary prefix definitions

 hw/arm/aspeed.c         | 2 +-
 hw/hppa/dino.c          | 4 ++--
 hw/i386/xen/xen-hvm.c   | 3 ++-
 hw/pci-host/i440fx.c    | 3 ++-
 hw/pci-host/prep.c      | 2 +-
 hw/pci-host/q35.c       | 2 +-
 hw/pci-host/versatile.c | 5 +++--
 hw/pci/pci_bridge.c     | 7 ++++---
 target/i386/cpu.c       | 2 +-
 9 files changed, 17 insertions(+), 13 deletions(-)

-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:38:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRuo-0003Xu-Ck; Sun, 31 May 2020 17:38:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRun-0003Xp-9M
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:29 +0000
X-Inumbo-ID: 85503244-a365-11ea-8993-bc764e2007e4
Received: from mail-wr1-x442.google.com (unknown [2a00:1450:4864:20::442])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85503244-a365-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 17:38:21 +0000 (UTC)
Received: by mail-wr1-x442.google.com with SMTP id l10so9153100wrr.10
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:21 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=z6MXV/mZkWfnz890Yo+s0rvg8AHT39PfBFvpsFHjWdc=;
 b=QYvY3gXUnPTn17LDoEHOKbq6Mu4mCC5ZirXYyRfQqjW5yMjkLaLcPgaNmCokzrSMk1
 iS14BZGt5nbepXJHY+8YKtfXPZXZoNrZ+xgr27SkIjDH7+NfBvh7a00/wX4juM+cvnwK
 kJvp8SHZ9O1Fw+foevpkIx2zJ5dHganq6zkOmnCCVuUfWTlmW+u0u0wAD+jVqP2ieGrl
 iT20yzOXUuxrY/CD15YcZTH3BtjOGqMctoePxbz/lG6hLw9EA8Dkfw7WaiJU7nfxWIpo
 pcD/IKA8vgRB1+wufffNFi3pgiHvatdIAE+NJgbDW7auOUENhXlhJYpgVbftvrr2aaPq
 uw9w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=z6MXV/mZkWfnz890Yo+s0rvg8AHT39PfBFvpsFHjWdc=;
 b=PrBmLFsWRjv/R7/rDPQA/9Vsx26MAeMfHU1beU7QYbwKpBdV8AG3YXX/dRVGOb6tFu
 g7Krm0pb3roAjRPftReMah8/7X/Zct308exG9/7FowSouN/s39XCiPFceyEdkaLYMFP7
 /MYGVZ+nDmX2koZcASeZmxEQ06rbBRlooLECAm1ZyHDCqVouxoWRtFMqIhiT357Gnyzj
 GDshLHazbxTsdZehdXgLjYHx1q/2fEYSPl3FntitWoihntUk7LSEPovea7iWEnFqhgSh
 J8V1p4sYFDVVPSyDBfRZ7ekEdFsgGl62RbOchWJoZHJl1dlB329eMpmW5MNmcsCGSMz9
 nxmQ==
X-Gm-Message-State: AOAM530ufPxts58hCr/SzEx3RgO8DUr0TQkj4LsShPhmBRNO/lsAIbZF
 ulgvQ7lX4jfRH8Jjuis+PcU=
X-Google-Smtp-Source: ABdhPJxmf1AHg9yOHBeUWTlVu63d2WHNpQGKdZws+bxSgk4Mw6bA+b97Z+h44ot+Y4hgdoKMFKkyYw==
X-Received: by 2002:a5d:5351:: with SMTP id t17mr4028148wrv.287.1590946700911; 
 Sun, 31 May 2020 10:38:20 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:20 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 2/8] hw/pci-host/prep: Correct RAVEN bus bridge memory region
 size
Date: Sun, 31 May 2020 19:38:08 +0200
Message-Id: <20200531173814.8734-3-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

memory_region_set_size() handles the 16 EiB limit by
special-casing the UINT64_MAX value. No equivalent special
case exists for the 32-bit maximum, 4 GiB: passing
UINT32_MAX as the size leaves the bm-raven MemoryRegion
1 byte too small:

  $ qemu-system-ppc -M prep -S -monitor stdio -usb
  memory-region: bm-raven
    0000000000000000-00000000fffffffe (prio 0, i/o): bm-raven
      0000000000000000-000000003effffff (prio 0, i/o): alias bm-pci-memory @pci-memory 0000000000000000-000000003effffff
      0000000080000000-00000000ffffffff (prio 0, i/o): alias bm-system @system 0000000000000000-000000007fffffff

Fix by using the correct value. We now have:

  memory-region: bm-raven
    0000000000000000-00000000ffffffff (prio 0, i/o): bm-raven
      0000000000000000-000000003effffff (prio 0, i/o): alias bm-pci-memory @pci-memory 0000000000000000-000000003effffff
      0000000080000000-00000000ffffffff (prio 0, i/o): alias bm-system @system 0000000000000000-000000007fffffff

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/pci-host/prep.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/pci-host/prep.c b/hw/pci-host/prep.c
index 1a02e9a670..88e2fc66a9 100644
--- a/hw/pci-host/prep.c
+++ b/hw/pci-host/prep.c
@@ -294,7 +294,7 @@ static void raven_pcihost_initfn(Object *obj)
                              &s->pci_memory, &s->pci_io, 0, TYPE_PCI_BUS);
 
     /* Bus master address space */
-    memory_region_init(&s->bm, obj, "bm-raven", UINT32_MAX);
+    memory_region_init(&s->bm, obj, "bm-raven", 4 * GiB);
     memory_region_init_alias(&s->bm_pci_memory_alias, obj, "bm-pci-memory",
                              &s->pci_memory, 0,
                              memory_region_size(&s->pci_memory));
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:38:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:38:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRuj-0003XV-4x; Sun, 31 May 2020 17:38:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRui-0003XP-94
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:24 +0000
X-Inumbo-ID: 84661984-a365-11ea-9947-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 84661984-a365-11ea-9947-bc764e2007e4;
 Sun, 31 May 2020 17:38:20 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id c71so8569741wmd.5
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=9Mxl/dNTcRwhkGe3G4jYrG/McOP5BFcH0HXoztP9CuE=;
 b=KN9foiNw1uNODlYLGsqkaiPFpDICcY39hhwQ0vwdyo0AGdI9BbCHCSkh+p4J9h4Laq
 DuJIwu3OpLHu4sHXUICkVbKQmAQmO+D8yyfjCtpGxkzofi4Igvfw3BC01Z7/GOK9kSfa
 /CHzYk/dCYbs+astQL+vk6kJZJNm3vCHZDeZsshhM7YKE/GswD1u2po1Hu0sHnTWODaV
 aw+XELKTPKtS86NcCcXXbY9Yd54eTLdGu3rk0hKXJbOXEaV87ip22Lyy/G35mPcBAnhe
 e4EhgrNgwMPPUSeRGMOLlO+3sZx1yyfRCuxcn0pMT9nWhIm8j0WbNSdQCQD9WVWhr+7/
 mH+Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=9Mxl/dNTcRwhkGe3G4jYrG/McOP5BFcH0HXoztP9CuE=;
 b=Mh3P2JS3VWo5iJdSRP5rpNlL21krJpnsywv/7teAinhCL90CrLT8NOQocFX+xLSo/h
 h3UrX4ZxAhhABz5NE3LKMEWIcjSNmhdKClstiUoL9yRnCcOMDZ9O7rJxPOrofmvOq3sh
 unAH1HqZzA7J3KS8DGzLL0K4+Z7FaOAJKTiKw0XmzGBCUkWqs9KcBFpUo4UEVvOoYNQa
 T3Faw1vuYV4falsCjiOy74f02zoflax0n5SYuBE1JNdRQZH1j4HOdPgbyvzWBQWpUUxS
 iaauu3ijxktzsH3+jvJntM53Q4O+Vhqjy727NQibk/uUubn1P794xWP5RFjW2Ix2Q+Rb
 Hzlg==
X-Gm-Message-State: AOAM5303eV3dL/meg9uaktz81YtIWSKwiRWn3tFhxSAI4RnOzh8AOheH
 XaQ2+4W3+AJTaH6sAIaq5Ug=
X-Google-Smtp-Source: ABdhPJzXMrBZtqieig0y6kV4ABn1VIpJ1TZsTStlJMWpP9ZXmgSjXn+WbGAkxI3zyZ0yWA3NhOCtng==
X-Received: by 2002:a1c:5606:: with SMTP id k6mr19052387wmb.10.1590946699366; 
 Sun, 31 May 2020 10:38:19 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:18 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 1/8] hw/arm/aspeed: Correct DRAM container region size
Date: Sun, 31 May 2020 19:38:07 +0200
Message-Id: <20200531173814.8734-2-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

memory_region_set_size() handles the 16 EiB limit by
special-casing the UINT64_MAX value. No equivalent special
case exists for the 32-bit maximum, 4 GiB: passing
UINT32_MAX as the size leaves the aspeed-ram-container
MemoryRegion 1 byte too small:

 $ qemu-system-arm -M ast2600-evb -S -monitor stdio
 (qemu) info mtree

  address-space: aspeed.fmc-ast2600-dma-dram
    0000000080000000-000000017ffffffe (prio 0, i/o): aspeed-ram-container
      0000000080000000-00000000bfffffff (prio 0, ram): ram
      00000000c0000000-ffffffffffffffff (prio 0, i/o): max_ram

Fix by using the correct value. We now have:

  address-space: aspeed.fmc-ast2600-dma-dram
    0000000080000000-000000017fffffff (prio 0, i/o): aspeed-ram-container
      0000000080000000-00000000bfffffff (prio 0, ram): ram
      00000000c0000000-ffffffffffffffff (prio 0, i/o): max_ram

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/arm/aspeed.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/hw/arm/aspeed.c b/hw/arm/aspeed.c
index 2c23297edf..62344ac6a3 100644
--- a/hw/arm/aspeed.c
+++ b/hw/arm/aspeed.c
@@ -262,7 +262,7 @@ static void aspeed_machine_init(MachineState *machine)
     bmc = g_new0(AspeedBoardState, 1);
 
     memory_region_init(&bmc->ram_container, NULL, "aspeed-ram-container",
-                       UINT32_MAX);
+                       4 * GiB);
     memory_region_add_subregion(&bmc->ram_container, 0, machine->ram);
 
     object_initialize_child(OBJECT(machine), "soc", &bmc->soc,
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:38:44 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:38:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRus-0003YV-LB; Sun, 31 May 2020 17:38:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRus-0003YP-A0
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:34 +0000
X-Inumbo-ID: 8635776e-a365-11ea-8993-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8635776e-a365-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 17:38:23 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id c71so8569812wmd.5
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:23 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=KsQ1BQRlJKLRUJr0jZUSVevT0fbQ9tRG4Kdv5xJn5Tg=;
 b=OvvgLC/+ddLEdgOqG9BGzOSTBcy0qN+18q9JC4/GfG3Fv91qKjhOlaWmQsPxLKfIYF
 mfaemWp0HiASJbUQUtd6QuGfRI62byfE90yIS3vCqYnsVS75yNstLw/2gw3sfw3FYXEW
 dBxgPuXkmZioeuXxlU6fZo2HcwQ/Gs8sqkvr9BuK4zsXEaFMMySi/8Cu6OqTnjoSf/SC
 sAkQG0K6/+6kfk7BDu6pdCiZg999VMnGuU5pmOk+21AWQNPNvyp6a5rTtQ9EXGp+4LkB
 kchQAzw2OdcC4MBliuOIAPwp+otTA1rZeh4VWsDHXsUsgu/jx/jTiApCJFkIF93rtgw6
 xZFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=KsQ1BQRlJKLRUJr0jZUSVevT0fbQ9tRG4Kdv5xJn5Tg=;
 b=ID98wBgqAel/uZvvHdPbr0lZ1xfCYxMYygBTrkqmvGHOvCUVunbjQlBILZthTrBdUR
 lGhs/hxhQyh9zzd020fSUiNxjlQ8PZAY00npbscdNpvdDOzCKzq7+9j/LpTgSHCbgSj3
 fTa1H2QXeT589xKsdHjaJ5zYJr+E+L5zBEMpI80gziAdDBi8EJ8u7r1OftimdF1YgXpU
 Z05+RlUWxdSm2wff1CTWHHDUjsc50sBAtQpqWYXDhvBfyyTUfnhWLc2Z+5gBL6nRUxKi
 r99La1ZCD8yERjK3wrI0z6dr4Ja7MijbEHdQwqK3+RF/RUl7o12DfYW6T3U9JCO1vjIE
 +s+w==
X-Gm-Message-State: AOAM5310REyGEDqhqoVevWOSRKA65/FuuuYx39UV1wjuVVGeKN5Kjcyw
 L3SGDZmJA8dMUpXnV55i5FM=
X-Google-Smtp-Source: ABdhPJzeybcJCTXmZjvw+0M7Jmw3AwmGdkeKINOR50NTyuWnxoCKp8N0SHbRKuVp2dnjlfQ5FAFP8A==
X-Received: by 2002:a1c:808d:: with SMTP id b135mr17521127wmd.94.1590946702340; 
 Sun, 31 May 2020 10:38:22 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:21 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 3/8] hw/pci/pci_bridge: Correct pci_bridge_io memory region
 size
Date: Sun, 31 May 2020 19:38:09 +0200
Message-Id: <20200531173814.8734-4-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

memory_region_set_size() handles the 16 EiB limit by
special-casing the UINT64_MAX value. No equivalent special
case exists for the 32-bit maximum, 4 GiB: passing
UINT32_MAX as the size leaves the pci_bridge_io MemoryRegion
1 byte too small:

  (qemu) info mtree
  memory-region: pci_bridge_io
    0000000000000000-00000000fffffffe (prio 0, i/o): pci_bridge_io
      0000000000000060-0000000000000060 (prio 0, i/o): i8042-data
      0000000000000064-0000000000000064 (prio 0, i/o): i8042-cmd
      00000000000001ce-00000000000001d1 (prio 0, i/o): vbe
      0000000000000378-000000000000037f (prio 0, i/o): parallel
      00000000000003b4-00000000000003b5 (prio 0, i/o): vga
      ...

Fix by using the correct value. We now have:

  memory-region: pci_bridge_io
    0000000000000000-00000000ffffffff (prio 0, i/o): pci_bridge_io
      0000000000000060-0000000000000060 (prio 0, i/o): i8042-data
      0000000000000064-0000000000000064 (prio 0, i/o): i8042-cmd
      ...

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/pci/pci_bridge.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
index 97967d12eb..3ba3203f72 100644
--- a/hw/pci/pci_bridge.c
+++ b/hw/pci/pci_bridge.c
@@ -30,6 +30,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/units.h"
 #include "hw/pci/pci_bridge.h"
 #include "hw/pci/pci_bus.h"
 #include "qemu/module.h"
@@ -381,7 +382,7 @@ void pci_bridge_initfn(PCIDevice *dev, const char *typename)
     memory_region_init(&br->address_space_mem, OBJECT(br), "pci_bridge_pci", UINT64_MAX);
     sec_bus->address_space_io = &br->address_space_io;
     memory_region_init(&br->address_space_io, OBJECT(br), "pci_bridge_io",
-                       UINT32_MAX);
+                       4 * GiB);
     br->windows = pci_bridge_region_init(br);
     QLIST_INIT(&sec_bus->child);
     QLIST_INSERT_HEAD(&parent->child, sec_bus, sibling);
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:38:49 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:38:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRux-0003a3-TM; Sun, 31 May 2020 17:38:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRux-0003Zp-AH
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:39 +0000
X-Inumbo-ID: 8711ec9e-a365-11ea-8993-bc764e2007e4
Received: from mail-wr1-x441.google.com (unknown [2a00:1450:4864:20::441])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8711ec9e-a365-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 17:38:24 +0000 (UTC)
Received: by mail-wr1-x441.google.com with SMTP id p5so3252345wrw.9
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:24 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=Dv8k9OvX0sGDM6SnC2SEv28YQ5tgOTM9QioS6R+rf/M=;
 b=BVSryTZWMKqgtf1QMR74iA2a1/KO4Uy3SphIAK8LPJsyPkwckWfXDNibM+axg726DZ
 SOxrLOGVIxjQO1zqPrMp0H1sgXw3bFO+/tNCygs7RSfe47FZ6fzyl8luYeaYKphOjkVi
 OCMSIO+jJyhY8KvoRpSZImqE+0XAVhWggJCSkSwrg20N9ndbq6O399AVmjDCPvyP/Tb5
 dlQa2RKuW+AN6SwcOGxhziuKVbffRnR2ra0pRvolwWtCNXZiQU5hjwUjw+a/RC3Dn5mu
 mqyABHZCPs1C01fLaMueU7HT+U+bbJ4chOlXdK8XgTkduVhjcj2+7mnalCo1xT6WZsXh
 tIIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=Dv8k9OvX0sGDM6SnC2SEv28YQ5tgOTM9QioS6R+rf/M=;
 b=E8uFG/M3bg0MjrJgGhZMlFUAKFer3Se64X+zrIi2niMtkWibzmGZZFAk8NV28Z5vFZ
 OWEsKm9AXS6LQ2EcfmP6dBZ2eabp0yPJUj4f94E7ygsA5EEM6uiTCX8UU6HPqxk2XdQM
 oVu10Ltn/nnscAF3d6pOiVc2h5CVODsz2pcaBGABgZujfALsy6DVbhp895GA8F/KwB6Y
 eN5uQohn/L9pHyl4L+4ugdR86aaSMUo7h64aAH9+13ueU1zPZ0uI9mG4TIf8tQoSTqNu
 6i3Z0fxt3Vi+8L3/lf8y+AcVCi4gnxhX/8007FZGvYdF6dFBwynJu0KQEPYTJcDDPhLa
 32TA==
X-Gm-Message-State: AOAM532O9SJ5NLQ/beZB+Kp9GSYRVW2XZc37lX+prX2+UdrEoslqidTU
 NyG743IjmBBWxu3gXMDQ6mE=
X-Google-Smtp-Source: ABdhPJwiBhhdiSv5xipwUBmxFfUuE7Js6srJ5I4x//n2LAjphXaXilwy/UiUFaG7+ZnnplrFb1ATaw==
X-Received: by 2002:adf:f7ce:: with SMTP id a14mr17874609wrq.362.1590946703877; 
 Sun, 31 May 2020 10:38:23 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.22
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:23 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 4/8] hw/pci/pci_bridge: Use the IEC binary prefix definitions
Date: Sun, 31 May 2020 19:38:10 +0200
Message-Id: <20200531173814.8734-5-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

IEC binary prefixes ease code review: the unit is explicit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/pci/pci_bridge.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/pci/pci_bridge.c b/hw/pci/pci_bridge.c
index 3ba3203f72..3789c17edc 100644
--- a/hw/pci/pci_bridge.c
+++ b/hw/pci/pci_bridge.c
@@ -423,14 +423,14 @@ int pci_bridge_qemu_reserve_cap_init(PCIDevice *dev, int cap_offset,
     }
 
     if (res_reserve.mem_non_pref != (uint64_t)-1 &&
-        res_reserve.mem_non_pref >= (1ULL << 32)) {
+        res_reserve.mem_non_pref >= 4 * GiB) {
         error_setg(errp,
                    "PCI resource reserve cap: mem-reserve must be less than 4G");
         return -EINVAL;
     }
 
     if (res_reserve.mem_pref_32 != (uint64_t)-1 &&
-        res_reserve.mem_pref_32 >= (1ULL << 32)) {
+        res_reserve.mem_pref_32 >= 4 * GiB) {
         error_setg(errp,
                    "PCI resource reserve cap: pref32-reserve  must be less than 4G");
         return -EINVAL;
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:38:55 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRv3-0003cN-6F; Sun, 31 May 2020 17:38:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRv2-0003br-AU
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:44 +0000
X-Inumbo-ID: 87ee27d6-a365-11ea-81bc-bc764e2007e4
Received: from mail-wr1-x444.google.com (unknown [2a00:1450:4864:20::444])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87ee27d6-a365-11ea-81bc-bc764e2007e4;
 Sun, 31 May 2020 17:38:26 +0000 (UTC)
Received: by mail-wr1-x444.google.com with SMTP id c3so9114561wru.12
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:26 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=R9h6TBijqwhWtCdWuge0QqjjfhcYOfthGpKadm9tGxU=;
 b=clCSszb1FCsID9L9t4vBQapFKkE9cKOSakgm1eGvx1xsWw2JAQNwLPoaGyVtpO42pe
 kBKp/gPUmWIa/w5oOdIrRSY8p4p3XayTg3tMrrOmZmC34eM3rOnIEYRVnlpbvTCc6+Vo
 HEPQNWcpPsZAYdMebzBYVG/m9y9BWrVBmpNxm2yP1xXyoU3t8PWsCPYU5BKwTMzBj1HS
 BPino4U2lYhzNDvluhBK5euxZu9633dvESJdHU8nk0XGJtc8X7A2ua3EJZKg1u4SPdX2
 0Jc2etnb43hAo2TU2bhSL1ZDKnP3BXxfNBGozVyquRrYoWn4Hz++GY3WW7gghPDFdniU
 m7Qw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=R9h6TBijqwhWtCdWuge0QqjjfhcYOfthGpKadm9tGxU=;
 b=o8WBSUen77TVPyEtM8BVT5SHy2XZ4y3Cuk6d2SnDsqdPU35r0iaS7aK5XtHobXGWp9
 QYF/UBkSMULDb7wEuAQRgfhM1Q0hIDNwbfHZ+wlzK10Tt25Vyjo4vVz16vmgxVxiY078
 L8I3PzxHiMrXAgtQ8Bx8bKujEdetIZXdnTaLUw1SicGihQ8e0VVAib2e6GcGSJjiyxrO
 +iCyLI7GzPyAEloSeQJcHqXdKiCBWxhh7ewbJ40pG3fYvvnDC3oKq3eRYYWqmN5oKO7s
 Xw92fWamf4CqCNz1i0cy4RXRf/Lvbv+xRxaTWg0izXDMT1brSVHPg0G6UMBwuuBLy5Ye
 tavg==
X-Gm-Message-State: AOAM532HRevb8hL3JDFJEIY9c4otrPdCmimjKQx+MDfglhVjrkHI7Ajt
 vIDpc2XDUSageqXIuGdU1Qg=
X-Google-Smtp-Source: ABdhPJytdErKTCE9Dfb5cKXCrSruxfcAtIRvlaMsZNBz3NN+Eo12ujY//8NvCs6N5Z61jtTmgTyVVg==
X-Received: by 2002:adf:f512:: with SMTP id q18mr19634939wro.38.1590946705256; 
 Sun, 31 May 2020 10:38:25 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:24 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 5/8] hw/pci-host: Use the IEC binary prefix definitions
Date: Sun, 31 May 2020 19:38:11 +0200
Message-Id: <20200531173814.8734-6-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

IEC binary prefixes ease code review: the unit is explicit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/pci-host/i440fx.c    | 3 ++-
 hw/pci-host/q35.c       | 2 +-
 hw/pci-host/versatile.c | 5 +++--
 3 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/hw/pci-host/i440fx.c b/hw/pci-host/i440fx.c
index 0adbd77553..aefb416c8f 100644
--- a/hw/pci-host/i440fx.c
+++ b/hw/pci-host/i440fx.c
@@ -23,6 +23,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/units.h"
 #include "qemu/range.h"
 #include "hw/i386/pc.h"
 #include "hw/pci/pci.h"
@@ -301,7 +302,7 @@ PCIBus *i440fx_init(const char *host_type, const char *pci_type,
     memory_region_set_enabled(&f->smram_region, true);
 
     /* smram, as seen by SMM CPUs */
-    memory_region_init(&f->smram, OBJECT(d), "smram", 1ull << 32);
+    memory_region_init(&f->smram, OBJECT(d), "smram", 4 * GiB);
     memory_region_set_enabled(&f->smram, true);
     memory_region_init_alias(&f->low_smram, OBJECT(d), "smram-low",
                              f->ram_memory, 0xa0000, 0x20000);
diff --git a/hw/pci-host/q35.c b/hw/pci-host/q35.c
index 352aeecfa7..b788f17b2c 100644
--- a/hw/pci-host/q35.c
+++ b/hw/pci-host/q35.c
@@ -589,7 +589,7 @@ static void mch_realize(PCIDevice *d, Error **errp)
     memory_region_set_enabled(&mch->open_high_smram, false);
 
     /* smram, as seen by SMM CPUs */
-    memory_region_init(&mch->smram, OBJECT(mch), "smram", 1ull << 32);
+    memory_region_init(&mch->smram, OBJECT(mch), "smram", 4 * GiB);
     memory_region_set_enabled(&mch->smram, true);
     memory_region_init_alias(&mch->low_smram, OBJECT(mch), "smram-low",
                              mch->ram_memory, MCH_HOST_BRIDGE_SMRAM_C_BASE,
diff --git a/hw/pci-host/versatile.c b/hw/pci-host/versatile.c
index cfb9a78ea6..8ddfb8772a 100644
--- a/hw/pci-host/versatile.c
+++ b/hw/pci-host/versatile.c
@@ -8,6 +8,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/units.h"
 #include "hw/sysbus.h"
 #include "migration/vmstate.h"
 #include "hw/irq.h"
@@ -399,8 +400,8 @@ static void pci_vpb_realize(DeviceState *dev, Error **errp)
     pci_map_irq_fn mapfn;
     int i;
 
-    memory_region_init(&s->pci_io_space, OBJECT(s), "pci_io", 1ULL << 32);
-    memory_region_init(&s->pci_mem_space, OBJECT(s), "pci_mem", 1ULL << 32);
+    memory_region_init(&s->pci_io_space, OBJECT(s), "pci_io", 4 * GiB);
+    memory_region_init(&s->pci_mem_space, OBJECT(s), "pci_mem", 4 * GiB);
 
     pci_root_bus_new_inplace(&s->pci_bus, sizeof(s->pci_bus), dev, "pci",
                              &s->pci_mem_space, &s->pci_io_space,
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:39:00 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:39:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRv8-0003ef-Hw; Sun, 31 May 2020 17:38:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRv7-0003e8-9Z
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:49 +0000
X-Inumbo-ID: 88be7936-a365-11ea-8993-bc764e2007e4
Received: from mail-wm1-x342.google.com (unknown [2a00:1450:4864:20::342])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88be7936-a365-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 17:38:27 +0000 (UTC)
Received: by mail-wm1-x342.google.com with SMTP id k26so9130932wmi.4
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:27 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=iYvFEgksB/BdazMa4Cs2c4jhXMbkN0WXaFJS9CHNgVo=;
 b=mLOiMHew1qF2hGdISfFnUSQy6ouAsSbwhEJxg/OGYSPf9xv1xTbVr8mBMRqJHP0j55
 yBAV5qmG/+djftEUqMiPwGr4Bh3GmX3ZVamOeBJXaKaDNXOaK/dz/G5T07RNwS0DFgrY
 c6bALnWU6nCvGXRvmEhcQH90po/UBDda62lpMh8uL/oBuNFtkrZ/+WZD/xRTxEPgKh8w
 YobGHA9manrtqDfq5TIha9eaxMdsJWURJqAn1SCZ+ZvWMz1ywD46nuh/+b1NyAbjDy7+
 GWlf7/+f8s9uLntj5m+01uzRU7k+D/zt9t5VXEvBWhKQ9MlGmJn6WzbC/VlmhCnqS0u0
 g/ng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=iYvFEgksB/BdazMa4Cs2c4jhXMbkN0WXaFJS9CHNgVo=;
 b=CB7OI1ol+ATI2rfGAEt7szes8XDcFce1QBl13ZOgXfxXR/FYKWbUxtKKpBINXiYF2i
 oLfNqX4DX68Zvmr/oaSZljadKKIfJDztOG2c+JwhbssDxtEEQU9oRL3+cr3chFqH0Fw4
 HKoVOxbpxBuovNbpOXxnFBCJ4+oYasxSUjt0CyPX+WdbBy/sJFfyaZ9rZCL34sluQN54
 4vVjbR8Ilq6ieKRtKWVMjJpRyWgyH1i3Ab8NjhJLFVRAoswT+RzayClILP+ALKUEHn1G
 YvHmH25i2lHTSG80yNeo6JctgxHnW/xqI66HLIOsYWRbD22B1geCtZxPonyZwBA/wXu6
 FMRQ==
X-Gm-Message-State: AOAM531vz5kMz62yjSEhhd+dOietFzhymgKEhxmPB8iPuFomA9VyuNqg
 QWl9VLN6O4JTALAvS5nZUgE=
X-Google-Smtp-Source: ABdhPJwhb/IFbThB/tKVOWT/s2NPGR4HDMLpVaDysxOxobXYSvPIAsMTEGuw+7HN9OrZnw/sT/Aqyw==
X-Received: by 2002:a1c:4d0c:: with SMTP id o12mr18594700wmh.181.1590946706678; 
 Sun, 31 May 2020 10:38:26 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:26 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 6/8] hw/hppa/dino: Use the IEC binary prefix definitions
Date: Sun, 31 May 2020 19:38:12 +0200
Message-Id: <20200531173814.8734-7-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

IEC binary prefixes ease code review: the unit is explicit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/hppa/dino.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/hw/hppa/dino.c b/hw/hppa/dino.c
index 2b1b38c58a..7290f23962 100644
--- a/hw/hppa/dino.c
+++ b/hw/hppa/dino.c
@@ -542,7 +542,7 @@ PCIBus *dino_init(MemoryRegion *addr_space,
                                 &s->parent_obj.data_mem);
 
     /* Dino PCI bus memory.  */
-    memory_region_init(&s->pci_mem, OBJECT(s), "pci-memory", 1ull << 32);
+    memory_region_init(&s->pci_mem, OBJECT(s), "pci-memory", 4 * GiB);
 
     b = pci_register_root_bus(dev, "pci", dino_set_irq, dino_pci_map_irq, s,
                               &s->pci_mem, get_system_io(),
@@ -561,7 +561,7 @@ PCIBus *dino_init(MemoryRegion *addr_space,
     }
 
     /* Set up PCI view of memory: Bus master address space.  */
-    memory_region_init(&s->bm, OBJECT(s), "bm-dino", 1ull << 32);
+    memory_region_init(&s->bm, OBJECT(s), "bm-dino", 4 * GiB);
     memory_region_init_alias(&s->bm_ram_alias, OBJECT(s),
                              "bm-system", addr_space, 0,
                              0xf0000000 + DINO_MEM_CHUNK_SIZE);
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:39:05 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:39:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRvC-0003hA-Qg; Sun, 31 May 2020 17:38:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRvC-0003gl-A9
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:54 +0000
X-Inumbo-ID: 899b0f22-a365-11ea-9947-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 899b0f22-a365-11ea-9947-bc764e2007e4;
 Sun, 31 May 2020 17:38:28 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id k26so9130958wmi.4
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:28 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=nx6i+AHtHhOPOYC0ludmYEcyPCtH+R6fArfoXhgX/80=;
 b=ecRtJ9+dMcz6HFcCrjMA79m2NLL9pLTe3YlR6u/5U8B/pDwEZnHkpthloUcJD9mdeX
 z4NzrruZHIQ5BDed7AH8zcr/JzgBoZ1wQmFBvCf0zqPlH2VDi3PlRQEl/z3dTDw8CP0R
 VEh2hNdQFoGTa0ucGgNoDbeocX77DmNTTR7f1AuNpakZHtfsid+pCIBY0so133aao2cN
 b+i37OA+ThO+bxJeMeXuuwDMmwuLCLkIY+kfFnePWYnEWyGruAmEtPG2VehLleFSgTQF
 GNo4UF310Aboap3C76RpTvPD64wZq/2auDdcnjb13FWLdoOod69RR73rZOQlRY4ZML6K
 vZmg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=nx6i+AHtHhOPOYC0ludmYEcyPCtH+R6fArfoXhgX/80=;
 b=ibIKM81wuAJwmyyB9/sJG5jiR1rrduahqSCDE7tksQI9lNEzx3xqzPJy0eYMWvhikr
 lcUmQBy8ESZgRMM3fr1FFDlYH4SU8YQRx9GWybH7SpT5BC1RZBZfv6S8DRJT5fAPt4ka
 Nd1ManrvpVMrFrkWP29FzdJzswFtcUonUfm0ojgNAsQ3rN5edMqEXNo4PzxdjmfjjcrN
 O3RNW94GW3FNhHvFFcCH/knhtus+BkCh0+qOX5h7zUyskUPKPvp385/9fpgp7UIe2syG
 E8bLKb/MdyLmFMuF4jDsJv28gdyUVbYpOGYY0Bmau4x404vlkS1PvuLe0ypIUg2oUEVv
 Vyig==
X-Gm-Message-State: AOAM531Vof60L2JDKfP3KE0YJVYCTgO+1kfkQ0kwT9r54l5/BfavwCrJ
 gpkjuqjn4AOIwiScz4LLGQ8=
X-Google-Smtp-Source: ABdhPJwrR3vwdIenjSppjEGj76X18jhrVpnPd8ljKzuiMJUGDNuZXGPyuZJVIrkrS466Afme7Vb7pg==
X-Received: by 2002:a1c:39c1:: with SMTP id g184mr17816585wma.9.1590946708068; 
 Sun, 31 May 2020 10:38:28 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.26
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:27 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 7/8] hw/i386/xen/xen-hvm: Use the IEC binary prefix definitions
Date: Sun, 31 May 2020 19:38:13 +0200
Message-Id: <20200531173814.8734-8-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

IEC binary prefixes ease code review: the unit is explicit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 hw/i386/xen/xen-hvm.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
index 82ece6b9e7..679d74e6a3 100644
--- a/hw/i386/xen/xen-hvm.c
+++ b/hw/i386/xen/xen-hvm.c
@@ -9,6 +9,7 @@
  */
 
 #include "qemu/osdep.h"
+#include "qemu/units.h"
 
 #include "cpu.h"
 #include "hw/pci/pci.h"
@@ -230,7 +231,7 @@ static void xen_ram_init(PCMachineState *pcms,
          * Xen does not allocate the memory continuously, it keeps a
          * hole of the size computed above or passed in.
          */
-        block_len = (1ULL << 32) + x86ms->above_4g_mem_size;
+        block_len = 4 * GiB + x86ms->above_4g_mem_size;
     }
     memory_region_init_ram(&ram_memory, NULL, "xen.ram", block_len,
                            &error_fatal);
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 17:39:10 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 17:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfRvI-0003kj-3T; Sun, 31 May 2020 17:39:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yblG=7N=gmail.com=philippe.mathieu.daude@srs-us1.protection.inumbo.net>)
 id 1jfRvH-0003k2-Ak
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 17:38:59 +0000
X-Inumbo-ID: 8a6dad42-a365-11ea-8993-bc764e2007e4
Received: from mail-wm1-x343.google.com (unknown [2a00:1450:4864:20::343])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a6dad42-a365-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 17:38:30 +0000 (UTC)
Received: by mail-wm1-x343.google.com with SMTP id r9so8601143wmh.2
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 10:38:30 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=sender:from:to:cc:subject:date:message-id:in-reply-to:references
 :mime-version:content-transfer-encoding;
 bh=jnJt4nNSB0E2pLIKum3vjItQrLzo5DO2iXhJKs1nBc8=;
 b=kPgx3uRgc7TXjjpq1UvXYDhszlo8l3sRGueug/C8vRhLjgvZQ5Mieac9b318WBFJ2/
 aeYe32r9/P1YQ0T5IBGEZZ7DL+Wy3+63lPgyyyIx0+YmMwTp5IOUGd2IT1wgrMl/x1tR
 KJz6LeuNQroYjMDQHIcgiF8BdxJylz7XpfRhndCHWb8t1o//QCW5SH5/lCsvTvNfTjTe
 II8tGrhBafOYy7pR3oJLWiTvPSFZijdqdapNq7SavzV2LWsL6fiHgv7zZkvxDTXFVwkn
 nhmlNB2/Qmcnalw8/VjytMqmdx2T42K5JNc3hbG2/fcaGN2tjOK+OXLUHm3TmUzbuQVt
 aGQQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:sender:from:to:cc:subject:date:message-id
 :in-reply-to:references:mime-version:content-transfer-encoding;
 bh=jnJt4nNSB0E2pLIKum3vjItQrLzo5DO2iXhJKs1nBc8=;
 b=BclZ/uhSNDChlhpBIjhhQnuYxq1KSsoNlKJtbIHkK4+FDVmiiXv9Qp4gkl6C00h/9n
 VQPd5IJ3KKS41AYyr8eqfQYNMwqDWpJq14KcfybMlakMX0VZeErtC0eWbANfudMW0d+j
 XdME3d1sxdC7WEziyNafl2ycPV05AYZqc7wcbCVfjyWAvas2TbHLer5HaWnFsVZrc7yt
 +YtbPR7a9S4lk5ahPX/xoHhDWvwBFcFjGENqgMp7+5psCvZqwKf84l7fZcNeZzE4lqne
 120q5IGbFJnpNDR97/ipjlG0Fr2TY1zEEUk7Pc71ehJsBSN+JxdCZG61i4W8ZI/21WYK
 f1Dg==
X-Gm-Message-State: AOAM5307va8P9V6xYF0vK/uULZE2df6x/Rro7lHA62di05sB/mOqEspY
 Q5iEJYnoAwWhKKrcZoXj/2A=
X-Google-Smtp-Source: ABdhPJzC9j6ttjFbMxV1i2cC0ijncxDDP5cwavA0spex0KcuX2eB8xIhSwUxzBVMqmUzJNtm/2Oq4w==
X-Received: by 2002:a7b:cc82:: with SMTP id p2mr17828736wma.101.1590946709508; 
 Sun, 31 May 2020 10:38:29 -0700 (PDT)
Received: from localhost.localdomain (43.red-83-51-162.dynamicip.rima-tde.net.
 [83.51.162.43])
 by smtp.gmail.com with ESMTPSA id l19sm7973121wmj.14.2020.05.31.10.38.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 31 May 2020 10:38:28 -0700 (PDT)
From: =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>
To: qemu-devel@nongnu.org
Subject: [PATCH 8/8] target/i386/cpu: Use the IEC binary prefix definitions
Date: Sun, 31 May 2020 19:38:14 +0200
Message-Id: <20200531173814.8734-9-f4bug@amsat.org>
X-Mailer: git-send-email 2.21.3
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
References: <20200531173814.8734-1-f4bug@amsat.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Peter Maydell <peter.maydell@linaro.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, Joel Stanley <joel@jms.id.au>,
 =?UTF-8?q?Philippe=20Mathieu-Daud=C3=A9?= <f4bug@amsat.org>,
 qemu-trivial@nongnu.org, qemu-arm@nongnu.org,
 =?UTF-8?q?Herv=C3=A9=20Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?q?C=C3=A9dric=20Le=20Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org,
 qemu-ppc@nongnu.org, Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

IEC binary prefixes ease code review: the unit is explicit.

Signed-off-by: Philippe Mathieu-Daudé <f4bug@amsat.org>
---
 target/i386/cpu.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 3733d9a279..33ce4861fb 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -6159,7 +6159,7 @@ static void x86_cpu_machine_done(Notifier *n, void *unused)
     if (smram) {
         cpu->smram = g_new(MemoryRegion, 1);
         memory_region_init_alias(cpu->smram, OBJECT(cpu), "smram",
-                                 smram, 0, 1ull << 32);
+                                 smram, 0, 4 * GiB);
         memory_region_set_enabled(cpu->smram, true);
         memory_region_add_subregion_overlap(cpu->cpu_as_root, 0, cpu->smram, 1);
     }
-- 
2.21.3



From xen-devel-bounces@lists.xenproject.org Sun May 31 18:12:43 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 18:12:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfSRb-0007kn-RE; Sun, 31 May 2020 18:12:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfSRa-0007ki-1U
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 18:12:22 +0000
X-Inumbo-ID: 42211998-a36a-11ea-8993-bc764e2007e4
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 42211998-a36a-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 18:12:16 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=eR/Oz0gL4PWcWS2adscFiAZanMgZ2JDfieh1oF0PrLs=; b=B3KNkNHSfaQbd58RpVKKjXuwJ
 5u08v3m3h92reH3qOUTHJU7jTsq8TMvFu1f02TVkLuo54wReliHHLeUrG6pJjFFRBdgtHaz8fLMkf
 w4Sfogt9lgt0Menaq4SC7LTeOcd15Pzwf1BX7iycnInqRGfEtrcuz3u0X32L3UzS34VNI=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfSRT-0003MG-EW; Sun, 31 May 2020 18:12:15 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfSRT-00073X-4r; Sun, 31 May 2020 18:12:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfSRT-0006ZI-4A; Sun, 31 May 2020 18:12:15 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150576-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150576: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 18:12:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150576 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150576/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    3 days
Failing since        150465  2020-05-29 09:02:14 Z    2 days   23 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    1 days   13 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 19:49:21 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 19:49:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfTww-0006vZ-Vk; Sun, 31 May 2020 19:48:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IKBM=7N=linaro.org=peter.maydell@srs-us1.protection.inumbo.net>)
 id 1jfTwv-0006vU-Hi
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 19:48:49 +0000
X-Inumbo-ID: bed1f36a-a377-11ea-81bc-bc764e2007e4
Received: from mail-oi1-x242.google.com (unknown [2607:f8b0:4864:20::242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bed1f36a-a377-11ea-81bc-bc764e2007e4;
 Sun, 31 May 2020 19:48:48 +0000 (UTC)
Received: by mail-oi1-x242.google.com with SMTP id r67so7308538oih.0
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 12:48:48 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linaro.org; s=google;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc:content-transfer-encoding;
 bh=ILaa0c1UMVCHaphm0DdLpskuHaCCEchvE3UnkzWKwTU=;
 b=gh/yOzBvDIyuIfovF4HlMlYod16q8wUChDeH3hG7Vg+iTSHsq68cyaZOlaiNPIf9US
 ygkjSaJT2vmqdUJQIEGlL8gV2MtXWHco1mZIOuO5KiIKQs3G5nr+4EVe214ZZXeEv7t2
 BrAvP75A4QrCdtXWHeTAoOJNw5pvGzbuC2G/kLi877DgbTs+y3Xd2yQqelINQ+/4seCD
 JnV4TjmGBJgF8NzHohYqz7Kn6m0rIYAsc00stNMZ+W0zNNKTer3Nbo9C6s5802r+PVaz
 RvSVWI8S5dnalEPSFqzdJ82hUjtjWbzfyTWqx+oNh8fIgRphvyytYus3c9E04tDW/7Xd
 ywBQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc:content-transfer-encoding;
 bh=ILaa0c1UMVCHaphm0DdLpskuHaCCEchvE3UnkzWKwTU=;
 b=Qr/ZNdZFX2YW4/LImHnwSp7d+irh40hIeHvpOEdo/q0uWqljTD3dNTogn+5GNLELWk
 Ii9RC/tkmdbayIUTlJoJZnaz5OifCo5NEDyGvOK2ETP44BZ2c1Qye5Myj+FADXx8wD0n
 LZapM79Sn4GErpP+d2eJDG/j8kUlL0i6bU4jpl6MliMsCYHfCSQ+bLOuzuTtQN4iWrX7
 Ysa2JDJiLGG1NSghoEH9riEfogvmT2+w8H/5RvXd1VSPQXoD0udqsuof6h9ODR3Wy163
 BPOmR3IDAlG6EFxtfHjAO8YJM7GpXWnvh2YxeQu+mfghi4o2Ra6H6lhXHQnkJv2Yv3Q/
 cG2w==
X-Gm-Message-State: AOAM5305uJEh0P0C5MT7XfautsJntChl4Zfx2HpEOmI/VXRMe03FaQug
 kD+2s2PRn0cHn7KVVXpkdbFTZFYpFaU1z58QiceJpA==
X-Google-Smtp-Source: ABdhPJxKeNN10qTHbz3+NODuq1ECUVF+14p38NKGTSU2wB+Fz1ZTvl0ES71h3qOGWCxhFj3oAr/7710lgqqoniYFZ4Q=
X-Received: by 2002:aca:5451:: with SMTP id i78mr7136005oib.48.1590954528355; 
 Sun, 31 May 2020 12:48:48 -0700 (PDT)
MIME-Version: 1.0
References: <20200531173814.8734-1-f4bug@amsat.org>
In-Reply-To: <20200531173814.8734-1-f4bug@amsat.org>
From: Peter Maydell <peter.maydell@linaro.org>
Date: Sun, 31 May 2020 20:48:37 +0100
Message-ID: <CAFEAcA95OLKmG1xNVjAg_KPt3hfN4vT5wvZ6SJbcKk5zdYO_Gg@mail.gmail.com>
Subject: Re: [PATCH 0/8] hw: Fix some incomplete memory region size
To: =?UTF-8?Q?Philippe_Mathieu=2DDaud=C3=A9?= <f4bug@amsat.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: qemu-ppc <qemu-ppc@nongnu.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Eduardo Habkost <ehabkost@redhat.com>, Paul Durrant <paul@xen.org>,
 Andrew Jeffery <andrew@aj.id.au>, Helge Deller <deller@gmx.de>,
 "Michael S. Tsirkin" <mst@redhat.com>, QEMU Developers <qemu-devel@nongnu.org>,
 Joel Stanley <joel@jms.id.au>, QEMU Trivial <qemu-trivial@nongnu.org>,
 qemu-arm <qemu-arm@nongnu.org>,
 =?UTF-8?Q?Herv=C3=A9_Poussineau?= <hpoussin@reactos.org>,
 =?UTF-8?Q?C=C3=A9dric_Le_Goater?= <clg@kaod.org>,
 Marcel Apfelbaum <marcel.apfelbaum@gmail.com>,
 Paolo Bonzini <pbonzini@redhat.com>,
 Anthony Perard <anthony.perard@citrix.com>,
 "open list:X86" <xen-devel@lists.xenproject.org>,
 Richard Henderson <rth@twiddle.net>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, 31 May 2020 at 18:38, Philippe Mathieu-Daudé <f4bug@amsat.org> wrote:
>
> memory_region_set_size() handles the 16 Exabytes limit by
> special-casing the UINT64_MAX value.
> This is not a problem for the 32-bit maximum, 4 GiB, but
> in some places we incorrectly use UINT32_MAX instead of
> 4 GiB, and end up missing 1 byte in the memory region.
>
> This series fixes the cases I encountered.
> Also included are a few patches from that review, in which I
> replaced some magic values with their IEC binary prefix equivalents.
>
> Regards,
>
> Phil.
>
> Philippe Mathieu-Daudé (8):
>   hw/arm/aspeed: Correct DRAM container region size
>   hw/pci-host/prep: Correct RAVEN bus bridge memory region size
>   hw/pci/pci_bridge: Correct pci_bridge_io memory region size
>   hw/pci/pci_bridge: Use the IEC binary prefix definitions
>   hw/pci-host: Use the IEC binary prefix definitions
>   hw/hppa/dino: Use the IEC binary prefix definitions
>   hw/i386/xen/xen-hvm: Use the IEC binary prefix definitions
>   target/i386/cpu: Use the IEC binary prefix definitions

whole series:
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>

thanks
-- PMM


From xen-devel-bounces@lists.xenproject.org Sun May 31 21:13:40 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 21:13:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfVGY-0005tp-FK; Sun, 31 May 2020 21:13:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bt/9=7N=xenproject.org=osstest-admin@srs-us1.protection.inumbo.net>)
 id 1jfVGX-0005tk-Gn
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 21:13:09 +0000
X-Inumbo-ID: 86121260-a383-11ea-aaa6-12813bfff9fa
Received: from mail.xenproject.org (unknown [104.130.215.37])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86121260-a383-11ea-aaa6-12813bfff9fa;
 Sun, 31 May 2020 21:13:07 +0000 (UTC)
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
 d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
 Content-Transfer-Encoding:Content-Type:Message-ID:To:Sender:Reply-To:Cc:
 Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
 Resent-To:Resent-Cc:Resent-Message-ID:In-Reply-To:References:List-Id:
 List-Help:List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
 bh=83GXXRYiJeiOXlDY+0kCOFX1ICPtOuyFMgtgwdfHknI=; b=KXAYIBuiRKPGCW4Ml8jU8JVot
 VNZTodDPjXOd8rMbIGUxu8WtuebMviRCcrqkhxaJ/IG1MdTt/9LLp96cpzfO0uKsN2pxNgi6yb+y3
 hLK6oTuhcLR3QJ7GYp82OT1qWoOMRANlObhFo8KnRS0oiTLF6LyP5ooCZ7Al/OKrfQFWc=;
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfVGU-0007Er-HF; Sun, 31 May 2020 21:13:06 +0000
Received: from [172.16.144.3] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.89)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1jfVGU-0005UP-8L; Sun, 31 May 2020 21:13:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.89) (envelope-from <osstest-admin@xenproject.org>)
 id 1jfVGU-0001ei-7h; Sun, 31 May 2020 21:13:06 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-150581-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 150581: regressions - FAIL
X-Osstest-Failures: xen-unstable-smoke:test-amd64-amd64-libvirt:guest-start:fail:regression
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
 xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This: xen=ad33a573c009d72466432b41ba0591c64e819c19
X-Osstest-Versions-That: xen=1497e78068421d83956f8e82fb6e1bf1fc3b1199
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 31 May 2020 21:13:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

flight 150581 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/150581/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-libvirt     12 guest-start              fail REGR. vs. 150438

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm      13 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      14 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          13 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          14 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ad33a573c009d72466432b41ba0591c64e819c19
baseline version:
 xen                  1497e78068421d83956f8e82fb6e1bf1fc3b1199

Last test of basis   150438  2020-05-28 14:01:19 Z    3 days
Failing since        150465  2020-05-29 09:02:14 Z    2 days   24 attempts
Testing same since   150515  2020-05-30 01:00:47 Z    1 days   14 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Andrew Cooper <andrew.cooper@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tamas K Lengyel <tamas@tklengyel.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1196 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 31 22:06:15 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 22:06:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfW5c-0001jo-Iw; Sun, 31 May 2020 22:05:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Jz0d=7N=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1jfW5a-0001jj-MQ
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 22:05:54 +0000
X-Inumbo-ID: e570b98a-a38a-11ea-8993-bc764e2007e4
Received: from mail-qv1-xf2e.google.com (unknown [2607:f8b0:4864:20::f2e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e570b98a-a38a-11ea-8993-bc764e2007e4;
 Sun, 31 May 2020 22:05:53 +0000 (UTC)
Received: by mail-qv1-xf2e.google.com with SMTP id f89so3634594qva.3
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 15:05:53 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=zededa.com; s=google;
 h=mime-version:from:date:message-id:subject:to:cc;
 bh=m3hCykIFfDuc556EkAd2MzlXxt69c5qKxSkCefOY8Tc=;
 b=b8WSe1917oWKcBIzDhxQzjPiw6UP0QYa3360vfp+3tychI06UXE/TRQbaaC4jge7Se
 jSPnnxjsiPbn0xjCkaF3uuqs6eOfatD2h/RgVvXafVZZniJzIz7QKtMlwqTABTVTliby
 m5YkB8IzDhRegl3UJPBYhLXQn4Odst1Qs98qRS3o3Ofcqid9YKeu1Y0wcA2V+qymzZks
 UqXPTwkV7Uug80OztDt3vh7p7Z9YB5Em0r/tSwHRBxfXdhmCsEc2s1RqaK6QrU297ws5
 I8DfEeCuTo3idJucKbsxmi2HaIormRFvlkO2u/NxFopGGXq+Nosjuhk2bV6haTDodLZ5
 lf7A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:from:date:message-id:subject:to:cc;
 bh=m3hCykIFfDuc556EkAd2MzlXxt69c5qKxSkCefOY8Tc=;
 b=PIPA2jyqKL/J8WbbDgawXiy6mg1cUqw7fJrHtsoNRJUsFk/CClHTCnZ7iGn6vp0CSa
 9AM31g2atri4pggLIUMp9MBUjbz1Jek/KZ1RN9ej+uQnnC/J6NRjIT+SuwKXCEU+MtV1
 1snlqmEKpsDzpsx8j77oDEgkJLiKjkdH1T3Qmv2NkB3OJ6AVUks2M2XhrQE83QJRZeUi
 bFiEZbpIsrnG65dXkejOJpH8eC5wXnqFgIwzvGQPGg0I88gwNV9S3vXvVFTSCdiv6L1h
 V1Cy9+gtwqRJZfSV21zy00Gz0I9Xj9EE3HZD8rCpvK+KQPttp0CbD+UDVKWVHV14r9+V
 gamQ==
X-Gm-Message-State: AOAM532FqR+/FrjE7AdgOSmZ2wb2n33s6uuLFjNqOoJfeAPDPu9dXBc9
 ZxzcneUUabqPhmS/NHxT+UkmPdALzMdCc8aN2JHDWidU/LI=
X-Google-Smtp-Source: ABdhPJy8sEtObPTb7oepTJISI2si0swIZAMlfWtR4NZ2WWGhi6yeD5gIob4TqpWf1eEchb9chbXz39xB8k1ePtldsII=
X-Received: by 2002:a0c:eb11:: with SMTP id j17mr128872qvp.193.1590962753196; 
 Sun, 31 May 2020 15:05:53 -0700 (PDT)
MIME-Version: 1.0
From: Roman Shaposhnik <roman@zededa.com>
Date: Sun, 31 May 2020 15:05:42 -0700
Message-ID: <CAMmSBy9R57ntWmzNZDvwcvJM1f1wwD7ogWvCshipAcPX4x-TmQ@mail.gmail.com>
Subject: UEFI support in ARM DomUs
To: Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Nataliya Korovkina <malus.brandywine@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

Hi!

with a lot of help from Stefano, we're getting RPi4 support in
Project EVE pretty much on par between KVM and Xen.

One big area that still remains is supporting the UEFI boot sequence
for DomUs. With KVM, given the qemu virt device model, this is
as simple as using either a stock UEFI build for Arm or even the U-Boot
EFI emulation environment and passing it via the -bios option.
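[For readers unfamiliar with the KVM side being compared against, a
hedged sketch of such an invocation follows; the machine type, firmware
path, and image name are illustrative assumptions, not taken from this
thread.]

```sh
# Illustrative KVM/qemu aarch64 boot using a stock UEFI firmware build.
# The firmware path varies by distro (e.g. the qemu-efi-aarch64 package
# on Debian/Ubuntu); guest.img is a placeholder disk image.
qemu-system-aarch64 \
  -M virt -cpu host -enable-kvm \
  -bios /usr/share/qemu-efi-aarch64/QEMU_EFI.fd \
  -drive if=virtio,file=guest.img,format=raw \
  -nographic
```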

Obviously with Xen on ARM we don't have the device model so
my understanding is that the easiest way we can support it would
be to port UEFI's OvmfPkg/OvmfXen target to ARM (it seems to
be currently exclusively X64).

So here's my first question: if there's anybody on this list who had
a hand in implementing OvmfPkg/OvmfXen, can you please share
your thoughts on how much work that port might be (or whether it is
even feasible -- I may totally be missing something really obvious here).

And as long as I've got your attention: two more questions:
   1. Compared to the above, would porting pvgrub to ARM be any
   easier or more difficult?

   2. Same question for teaching U-Boot about PV calls.

Thanks,
Roman.

P.S. Oh and I guess between:
   0. OvmfPkg/OvmfXen on ARM64
   1. pvgrub on ARM64
   2. u-boot/EFI emulation with PV calls backend
I didn't miss any other obvious way of getting UEFI-aware VM images
to boot in a Xen ARM64 DomU, right?


From xen-devel-bounces@lists.xenproject.org Sun May 31 22:24:50 2020
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 31 May 2020 22:24:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1jfWNT-0003PG-38; Sun, 31 May 2020 22:24:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xoz2=7N=gmail.com=julien.grall.oss@srs-us1.protection.inumbo.net>)
 id 1jfWNR-0003PB-Ku
 for xen-devel@lists.xenproject.org; Sun, 31 May 2020 22:24:21 +0000
X-Inumbo-ID: 7904c00e-a38d-11ea-9dbe-bc764e2007e4
Received: from mail-wr1-x42d.google.com (unknown [2a00:1450:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7904c00e-a38d-11ea-9dbe-bc764e2007e4;
 Sun, 31 May 2020 22:24:20 +0000 (UTC)
Received: by mail-wr1-x42d.google.com with SMTP id x13so9618800wrv.4
 for <xen-devel@lists.xenproject.org>; Sun, 31 May 2020 15:24:20 -0700 (PDT)
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=gmail.com; s=20161025;
 h=mime-version:references:in-reply-to:from:date:message-id:subject:to
 :cc; bh=/UJb8KEJypffVbd/B3RxwITb6qIOpcwhFRkS6x75qDc=;
 b=RMI7yFABiYBqJNvIjEDL+GZ1uZZ0qZ7+vr7atbniLeIUQL8PevwhmnAZSCe/okqH49
 FZNUJLhziTu3RsxvL4cypriXIMpxZQ3A88UvP1seRQB/jJJp+eliW5VawaEwJ7QiMGib
 N/Iq41BXHcBCy28socon87RWacIt8VhPoSfRoCP9MT8oRff5JPCLeibTALuAzJFYXCUz
 FDovG6WzQJLF0iu/Deenr1n0nbtEN/td/fRrJcKI3k+KJSNEr/ao+DbtPeBD70eoIGdO
 XERF0zcluYSk7NfCoix9bOrBzUh773dg96PZCUOHn3b1ZIEEmGZd3DS8KfV3sNrEsGmh
 Tdiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=1e100.net; s=20161025;
 h=x-gm-message-state:mime-version:references:in-reply-to:from:date
 :message-id:subject:to:cc;
 bh=/UJb8KEJypffVbd/B3RxwITb6qIOpcwhFRkS6x75qDc=;
 b=ukhQ9oBo185b3MjZmW+ZkfOIycd2azSzwSHSDl6JOpoUAE33FSFmNs7TUmoue8KKJt
 ZlZsC9jz9E7hOz+FasKdNolar4qwjtdr8h8FX/IQDQsYA85cfZztKy9zS24LVcGx0tzA
 tkdGDmPZHP2dgUnjPqky0KrH8kIgRVf2LxW+7vcCoG/8mSa438frS2PAIvJY78Zpehcl
 jIaywkTmKRVay5ZevJ8nwAx0pvjDy7UbIi86kaMVqfrfezDTJxsCRAehDzj/NQe/s4gE
 mZnn0F3tfKWRzPQQ7dS3XkUDJDX/Haz24CI9ktdVZM+myRXW7Esyj6yNC3w81hT3U0ZB
 c07Q==
X-Gm-Message-State: AOAM533Ay1GKaAzwU13kkbQBBrygKvUijI98FHjnvQYNPWalugxUkX1I
 WG5jJDD6X7RpI+naKSqh2Pqigb7nGoAr4hVbq+4=
X-Google-Smtp-Source: ABdhPJzwdfO+DaZUHIByCEdAx/jIwtd60SV800vRmYLuZKrSwYySUyndI56WOtxCDPZ0rOqeJnJ6FeKR/muVYiz2OeE=
X-Received: by 2002:adf:f790:: with SMTP id q16mr19297813wrp.399.1590963860024; 
 Sun, 31 May 2020 15:24:20 -0700 (PDT)
MIME-Version: 1.0
References: <CAMmSBy9R57ntWmzNZDvwcvJM1f1wwD7ogWvCshipAcPX4x-TmQ@mail.gmail.com>
In-Reply-To: <CAMmSBy9R57ntWmzNZDvwcvJM1f1wwD7ogWvCshipAcPX4x-TmQ@mail.gmail.com>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Sun, 31 May 2020 23:24:09 +0100
Message-ID: <CAJ=z9a2B1+A8jPXiE9adNSTWHENtULnmAxq2M5v6MxB5thZLfw@mail.gmail.com>
Subject: Re: UEFI support in ARM DomUs
To: Roman Shaposhnik <roman@zededa.com>
Content-Type: text/plain; charset="UTF-8"
X-BeenThere: xen-devel@lists.xenproject.org
X-Mailman-Version: 2.1.29
Precedence: list
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Cc: Xen-devel <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Nataliya Korovkina <malus.brandywine@gmail.com>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>

On Sun, 31 May 2020 at 23:05, Roman Shaposhnik <roman@zededa.com> wrote:
> Hi!
>
> with a lot of help from Stefano, we're getting RPi4 support in
> Project EVE pretty much on par between KVM and Xen.
>
> One big area that still remains is supporting UEFI boot sequence
> for DomUs. With KVM, given the qemu virt device model this is
> as simple as using either stock UEFI build for arm or even U-Boot
> EFI emulation environment and passing it via -bios option.
>
> Obviously with Xen on ARM we don't have the device model so
> my understanding is that the easiest way we can support it would
> be to port UEFI's OvmfPkg/OvmfXen target to ARM (it seems to
> be currently exclusively X64).

EDK2 has been supporting Xen on Arm for the past 5 years. We don't use
OvmfPkg/OvmfXen but ArmVirtPkg/ArmVirtXen (see [1]).
I haven't tried to build it recently, but I should be able to help if
there is any issue with it.

Cheers,

[1] https://github.com/tianocore/edk2/blob/master/ArmVirtPkg/ArmVirtXen.fdf
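[To make the ArmVirtXen pointer above concrete, here is a hedged
build-and-boot sketch; the toolchain tag, build paths, and guest config
values are assumptions based on the standard edk2 build flow, not
details given in this thread.]

```sh
# Illustrative edk2 build of the ArmVirtXen firmware target.
# Assumes an aarch64-capable GCC toolchain; GCC5 is the usual edk2
# toolchain tag, and the output path follows edk2's Build/ layout.
git clone https://github.com/tianocore/edk2.git
cd edk2
git submodule update --init
source edksetup.sh
make -C BaseTools
build -a AARCH64 -t GCC5 -b RELEASE -p ArmVirtPkg/ArmVirtXen.dsc
# Firmware image lands under e.g.:
#   Build/ArmVirtXen-AARCH64/RELEASE_GCC5/FV/XEN_EFI.fd
```

The resulting firmware image would then be handed to the guest as its
kernel in the xl domain config, along the lines of:

```sh
# Hypothetical DomU config fragment (guest.cfg); names and sizes are
# placeholders.
# name    = "uefi-guest"
# kernel  = "/path/to/XEN_EFI.fd"
# memory  = 1024
# vcpus   = 2
```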


